The Ketchum Report (Continued)


Guest Admin

Recommended Posts

Ridgerunner,

It's my understanding that when a scientist publishes a paper, they explain what they did to get their results - specifically to permit other scientists (the review panel as well as others who want to test the paper's findings and hopefully, replicate them) to fully evaluate their hypothesis, process and findings. So, since this was the paper submitted for peer review, shouldn't it be fully explanatory? It's already troublesome that the inadequate bits of data provided in her paper leave so many unanswered questions, according to most of the scientists who have commented on it so far, but why is there so much guesswork about how she did the work itself? Is it customary for scientists to have so many questions about how someone did their research once the paper is out? Just wondering.

Normally there are many levels of checks in getting a paper published. First is peer review, which typically catches a lot of inadequacies in the paper: missing data, incomplete figure legends, missing references, inadequate descriptions of methods, overstated conclusions, etc. Once it passes that, normally the senior editor of the journal has a lot of technical questions and more or less checks for completeness. Then there is the copy editor, who finds grammatical errors, mismatched figure legends, poor-quality figures, etc. The quality of peer review varies from journal to journal, typically with the top-tier journals being able to solicit the top tier of reviewers. I have no idea what level of peer review DeNovo would have had as an unestablished journal. It is not customary to have so many questions about the data or methods after a paper is published - these are almost always dealt with prior to publication.

As I have said before, a scientific paper, at the time of publication, should be a complete work, as you said, containing the data (or access to it) and a description detailed enough that someone with the required expertise can verify/replicate the results. In my opinion, there is insufficient data presented, and the methods are too ambiguous, to replicate the findings and conclusions of this paper. WITH THE DATA PRESENTED IN THIS PAPER I do not agree with the conclusions that the nuDNA sequence published is from a biological creature, nor that this biological creature is a subspecies of human.


"Samples 28, 35 and others were then sent to SeqWright to have the sequences confirmed with the design of new MC1R primers. As with other loci analyzed, MC1R analysis at SeqWright found partial human sequences in some DNA samples, while others had novel sequences and still others failed to amplify. All human control DNA amplified and sequenced successfully as before."

My sample had whole mito sequenced but failed to amplify any nuDNA. It sure would be interesting to see what's there.

Edited by southernyahoo

Given how potentially groundbreaking and Earth-shattering this research is, does it justify the unconventional manner in which it is being handled and released?


RR, can you explain the difference between standard sequencing and next gen sequencing?

The big difference between standard sequencing and next-generation sequencing is the number of templates.

In standard sequencing, you are working on a single template, or piece of DNA: either a PCR product, a purified DNA fragment, or a plasmid containing a DNA fragment of interest. You typically use either a sequence-specific primer (for a PCR product) or a generic primer for something cloned in a plasmid. By using sequential primers, you can in a few independent runs sequence moderate amounts of DNA (for example, get 5000 bp from 10-15 sequential runs). Often you will get your first sequence, then make a new primer to begin sequencing from the farthest end, and repeat.
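The primer-walking arithmetic above can be sketched in a few lines; the read length, usable overlap, and target size here are assumed round numbers for illustration, not figures from any particular instrument:

```python
# Rough primer-walking estimate: each run reads a few hundred bases, and
# each new primer must anneal within already-confirmed sequence, so
# consecutive reads overlap (~50 usable bases assumed here).

def runs_needed(target_bp, read_len=400, overlap=50):
    """Sequential sequencing runs needed to walk across target_bp bases."""
    if target_bp <= read_len:
        return 1
    step = read_len - overlap          # net new bases per run after the first
    remaining = target_bp - read_len
    return 1 + -(-remaining // step)   # ceiling division

print(runs_needed(5000))               # 15 runs at these assumed figures
```

At 400-base reads and 50-base overlaps this lands at the upper end of the 10-15 runs quoted above; longer reads or smaller overlaps bring the count down.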

For next-gen sequencing, you are looking at millions of templates at one time and determining their sequence in a massively parallel fashion. The methods and chemistries for doing so are varied and complex, but in short you synthesize a complementary strand and, using color-specific nucleotide analogs (colored bases), determine the order of the bases in the sequence.

In standard sequencing you typically can have "reads" of 300-500 bases, whereas the next-gen technology often gives only short (50-100 base) sequences, which are then re-assembled using complex computer algorithms, potentially generating contigs of hundreds of thousands of bases. For standard sequencing, the data is technically raw or unprocessed, with little interpretation other than whether it is good sequence or not (analogous to a good Q30 score). With next-gen sequencing, what is usually presented is the processed, concatenated sequence, which can have assembly errors. Often 500-5000 bp of standard sequence is more convincing than 100,000 bp of concatenated sequence, as it is less prone to error. This is especially true for really novel sequences (i.e., a never-before-discovered bacterium from the Antarctic) where there is no good reference to gauge it against. With the MK sequence, even though it was supposedly assembled using human chromosome 11 as a reference, it "bears" little homology to its own template. This is where I think the assembly went awry, and standard sequencing may have been more reliable. Mixed or contaminated samples, especially if they contained two separate hominids of very similar sequence, would come back clearly as one species (for one DNA fragment) with conventional sequencing - no franken DNA. BF and human may still come back very similar, though, and might not be clearly distinguished with too little sequence.
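To make the re-assembly step concrete, here is a toy greedy overlap merger. This is not the algorithm any real assembler uses (real tools handle millions of reads with graph-based methods and error models), and the reads are made up, but it shows the principle of stitching short reads into a contig:

```python
# Toy short-read assembly: repeatedly merge the pair of reads with the
# longest suffix/prefix overlap until no overlaps remain.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break                      # no overlaps left; stop merging
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)]
        reads.append(merged)
    return reads

reads = ["ATGGCGT", "GCGTACG", "TACGGAT"]
print(greedy_assemble(reads))          # ['ATGGCGTACGGAT']
```

With reads from two very similar templates mixed together, the same merging logic will happily stitch fragments from both into one chimeric contig, which is exactly the assembly-error risk described above.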

Edited by ridgerunner

Given how potentially groundbreaking and Earth-shattering this research is, does it justify the unconventional manner in which it is being handled and released?

I'd think it would be all the more reason to be as conventional as possible.


Guest OntarioSquatch

WITH THE DATA PRESENTED IN THIS PAPER I do not agree with the conclusions that the nuDNA sequence published is from a biological creature, nor that this biological creature is a subspecies of human.

Is it possible that the nuDNA sequence that was provided was simply fabricated?


Guest spurfoot

ridgerunner, in short, we need to wait for Sykes for reliable results. Melba's mito data is probably OK, though; it's just the age derived from it that is so doubtful.


Guest Tyler H

I too used NDA's in working with the lab I contracted - I wanted some confidentiality ... but I didn't ask to control everything conceivably related to Sasquatch and didn't try to hoard any and all monetary benefit from any "discovery" and certainly never made my sample submitter sign over all such rights to me, before I would help them get testing done.

@LC Even though we currently lack the knowledge and insight of MK's total NDAs, it wouldn't be too far-fetched to think that the consideration or compensation is not known by all those who signed it. Otherwise, why sign off if one doesn't feel fairly compensated?

How can you say that we "lack ... total NDA's"? Is there evidence out there to indicate that there is more information associated with the NDA's posted here than what we can read in them? I know of no one who used or has seen these NDA's who is claiming there is more that we can't see.

Thermalman, what is your horse in this race?

Everyone knows mine, knows Bart's, knows Justin's... why do you so vociferously come to Melba's rescue on every issue that comes up, even when some of these issues are not attacking her?

I think with all your demands for transparency, that it's time you tell us why you are so invested in her defense.


Bart or Tyler, if your samples had tested differently, as almost certainly being from an unknown primate, would your labs have stood behind their work, or disavowed it because of the implications? I'm just wondering how hard it would be to actually get a professional with a lot to lose and maybe nothing to gain to publicly state they have definitive proof of Bigfoot. I know your samples didn't indicate that; I was just wondering if you think they would have stood behind it if it did and entered the fray. Thanks PB


Zigo

I can agree with your post on the grounds that we all should stick to the argument, but the hard part is that human beings are behind that argument, and it is difficult to discuss the study without involving the actions of the people involved. Calling out the actions and methodology that Tyler, Bart, and their labs used is just as relevant as calling Ketchum out for her methodologies and actions. These actions and methodologies are part of the process that led to a conclusion. They cannot be simply ignored. Nor should their discussion be labeled as a personal attack. Saying that Ketchum's methods were bad is not the same as saying she is a bad person. People are trying to conflate the two.

Edited by BipedalCurious

Moderator

ridgerunner, in short, we need to wait for Sykes for reliable results. Melba's mito data is probably OK, though; it's just the age derived from it that is so doubtful.

I thought someone reported that Sykes was doing mito-only. Has that changed? If not ... nothing new is going to come from Sykes' results, only "confirmation" of the same ol' "human DNA contamination" that's been reported in the pre-Melba samples. A mtDNA-only study is essentially pre-destined to only preserve the status quo, not discover anything new.

If Sykes is going to look at nuDNA, then things (potentially) get a lot more interesting.

Anyone have any definitive answers to what Sykes is/is not looking at? Thanks in advance!

MIB

Edited by MIB

I need some help here. Dr. Ketchum made this claim:

Whole Human Genome SNP analysis:

Twenty-four samples were tested on the whole human genome (2.5 million SNPs) Illumina® Bead Array69 platform using the Illumina® iSCAN instrument. Of these, in a clear departure from the results obtained with normal human DNA, 100% of the 24 samples failed to meet the human threshold of 95% SNP performance. The results ranged from 53% to 89% SNP performance. In the top 12 performing samples, only 45 SNPs out of the 2.5 million SNPs tested failed across all 12 samples, while simultaneously the human controls all yielded above 95% results on those SNPs.

But she did not reference the 95% claim. Does anyone know where that information came from? When you make that big of a claim, it is typical that you reference where that information came from.

So I am hoping someone here knows where I can find that in a peer-reviewed journal or book!
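Wherever the 95% figure comes from, the "SNP performance" numbers quoted are just call rates, so the pass/fail check itself is simple arithmetic. A minimal sketch, with the function names and sample values my own, not from the paper:

```python
# "SNP performance" read as call rate: the fraction of assayed SNPs that
# return a usable genotype call on the array.

TOTAL_SNPS = 2_500_000        # SNPs on the array, per the quoted passage
HUMAN_THRESHOLD = 0.95        # the unreferenced cutoff in question

def call_rate(successful_calls, total=TOTAL_SNPS):
    return successful_calls / total

def passes_human_threshold(successful_calls):
    return call_rate(successful_calls) >= HUMAN_THRESHOLD

# Even the best-performing sample reported (89%) falls well short:
print(passes_human_threshold(int(0.89 * TOTAL_SNPS)))   # False
print(passes_human_threshold(int(0.96 * TOTAL_SNPS)))   # True
```

The open question in the post stands: the arithmetic is trivial, but the 95% cutoff itself is the unreferenced part.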

Edited by slowstepper

ridgerunner, in short, we need to wait for Sykes for reliable results. Melba's mito data is probably OK, though; it's just the age derived from it that is so doubtful.

Yes, I think we are all waiting on Sykes. I don't know exactly what to make of MK's mtDNA stuff. I find it unlikely that 111 of 111 samples tested for mtDNA all came back as human. LC mentioned that misidentification of hair may run as high as 10%, so there should have been some negatives in there. And so little actual mtDNA data is presented, just tables of processed data, that I find it impossible to verify (and difficult to outright reject) these claims. But given other results in the paper, I am not inclined to give her the benefit of the doubt that she got it correct. As to any suggestions on what the mtDNA findings show with regard to the age of this species/hybridization event, I am personally doubtful of that as well.
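The 10% misidentification point can be put in numbers with a quick binomial check, assuming (hypothetically) independent samples and a flat 10% misidentification rate for each:

```python
from math import comb

def prob_exactly_k(n, k, p):
    """Binomial probability of exactly k misidentified samples out of n."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance that none of 111 hair samples were misidentified, at a 10% rate:
p_all_clean = prob_exactly_k(111, 0, 0.10)
print(f"{p_all_clean:.1e}")   # vanishingly small (well under 1 in 10,000)
```

At that rate, getting 111 human results out of 111 samples would take remarkable luck, which is the reason to expect at least some negatives in the data.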


This topic is now closed to further replies.