What data should a standard TCGA flagship paper generate?

People keep asking me how to mine TCGA data and get papers out of it.
Yet many of them do not even know where the TCGA data come from. TCGA itself has published dozens of flagship papers in CNS-level journals (Cell/Nature/Science), each built on its own sequencing of roughly a few hundred cancer samples with six kinds of data. Over the years these cohorts have added up to more than ten thousand samples, all freely downloadable from the GDC. Alongside them, a dozen or so major TCGA data-mining papers have also appeared (mainly in newer research areas such as molecular subtypes, driver mutations, and pseudogenes).
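To make the "freely downloadable from the GDC" part concrete, here is a minimal Python sketch of my own (not something from the TCGA papers themselves) that uses the public GDC REST API at https://api.gdc.cancer.gov to locate one open-access TCGA file and download it. Controlled-access data such as raw sequencing reads additionally require dbGaP approval and an authorization token.

```python
import requests

GDC_API = "https://api.gdc.cancer.gov"

# Find a single open-access file from any TCGA project.
filters = {
    "op": "and",
    "content": [
        {"op": "in", "content": {"field": "cases.project.program.name", "value": ["TCGA"]}},
        {"op": "in", "content": {"field": "access", "value": ["open"]}},
    ],
}
query = {
    "filters": filters,
    "fields": "file_id,file_name,data_category,cases.project.project_id",
    "size": 1,
}
hit = requests.post(f"{GDC_API}/files", json=query).json()["data"]["hits"][0]
print(hit["file_name"], "-", hit["data_category"])

# Fetch the actual file through the /data endpoint (open-access only).
with open(hit["file_name"], "wb") as fh:
    fh.write(requests.get(f"{GDC_API}/data/{hit['file_id']}").content)
```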
So what data should a standard TCGA flagship paper sequence itself?
Skim a few of these papers carefully and the pattern becomes obvious; there is a formula to them: https://tcga-data.nci.nih.gov/docs/publications/
Let's take as our example the paper Genomic and Epigenomic Landscapes of Adult De Novo Acute Myeloid Leukemia, published in the New England Journal of Medicine in 2013.

The disease studied is acute myeloid leukemia (AML). Over roughly ten years the hospital carefully enrolled 200 adults with de novo AML; detailed patient information has to be provided, of course, and the study has to meet ethics requirements, with informed consent signed by each patient.
We performed whole-genome sequencing of the primary tumor and matched normal skin samples from 50 patients (with data from 24 of these patients reported previously17) and exome capture and sequencing for another 150 paired samples of AML tumor and skin (see Table S3 in the Supplementary Appendix for coverage data for the 200 samples).
Whole-genome sequencing is expensive, after all, so only 50 patients got it; and of course each tumor sample has to be paired with a matched normal sample (skin, in this study) for the comparison to be meaningful. The remaining patients were profiled by exome sequencing, which is cheaper.
We performed RNA-expression profiling on the Affymetrix U133 Plus 2 platform for 197 samples, RNA sequencing for 179 samples, microRNA (miRNA) sequencing for 194 samples, Illumina Infinium HumanMethylation450 BeadChip profiling for 192 samples, and Affymetrix SNP Array 6.0 for both tumor and normal skin samples from all 200 patients.
Next come the mRNA expression data, from both arrays and sequencing, then sequencing-based miRNA expression, array-based DNA methylation data, and array-based copy-number variation data.
Data sets were not completed for all samples on all platforms because of assay failures and availability and quality issues for some samples. The complete list of data sets is provided in Table S4 in the Supplementary Appendix. All data sets are available through the Cancer Genome Atlas (TCGA) data portal (https://tcga-data.nci.nih.gov/tcga).
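The AML cohort lives in today's GDC under the project ID TCGA-LAML, so a quick way to see which of these data types are actually deposited is to ask the GDC API for a file count per data category. The sketch below is only an illustration against the current API (the facet and field names are my assumptions, and the category labels have changed over the years as the data were reprocessed):

```python
import requests

GDC_API = "https://api.gdc.cancer.gov"

# Count TCGA-LAML files per data category via the API's facet aggregation.
query = {
    "filters": {
        "op": "in",
        "content": {"field": "cases.project.project_id", "value": ["TCGA-LAML"]},
    },
    "facets": "data_category",
    "size": 0,  # we only want the aggregation, not individual file records
}
resp = requests.post(f"{GDC_API}/files", json=query).json()

for bucket in resp["data"]["aggregations"]["data_category"]["buckets"]:
    print(f'{bucket["key"]}: {bucket["doc_count"]} files')
```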
With this much data contributed to TCGA, it would defy all reason not to get a major paper out of it.
As for how the analysis is done: from today's vantage point, it is mostly a set of well-worn recipes.
But a single consortium group can only highlight the most important findings from such a dataset, so the first opportunity in TCGA data mining is to pick up what they left on the table, and the second is to combine multiple cancer types for pan-cancer analysis.
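For the pan-cancer angle, the natural first step is simply to see which TCGA projects exist and how many cases each one contributes. Here is a small sketch against the GDC /projects endpoint; the field names are my reading of the current API, not anything from the original flagship papers:

```python
import requests

GDC_API = "https://api.gdc.cancer.gov"

# List every TCGA project with its case count, largest cohorts first.
query = {
    "filters": {
        "op": "in",
        "content": {"field": "program.name", "value": ["TCGA"]},
    },
    "fields": "project_id,name,summary.case_count",
    "size": 100,  # there are only ~33 TCGA projects, so 100 is plenty
}
hits = requests.post(f"{GDC_API}/projects", json=query).json()["data"]["hits"]

for p in sorted(hits, key=lambda h: h["summary"]["case_count"], reverse=True):
    print(f'{p["project_id"]}\t{p["summary"]["case_count"]}\t{p["name"]}')
```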
That's all for now.
