Posts by Collection

portfolio

publications

talks

Statistical and computational methods for the meta-analysis and resemblance analysis of transcriptomic studies

Advances in high-throughput technologies have generated a large amount of “-omics” data, which have become an integral component of modern biomedical and public health research. Practical statistical and computational methods are needed to meta-analyze and compare “-omics” data from different studies or experiments. In this talk, I will introduce two problem-driven methods and one software suite for the meta-analysis and resemblance analysis of multiple transcriptomic studies. In the first part, we proposed a Bayesian hierarchical model for RNA-seq meta-analysis that models count data, integrates information across genes and across studies, and captures differential signals across studies via latent variables. In the second part, motivated by two PNAS papers that reached contradictory conclusions about how well mouse models resemble human studies, we proposed a novel method to quantify a continuous measure of resemblance across model organisms and to characterize the pathways in which they most agree or disagree. Finally, I will briefly introduce an R-Shiny-based modularized software suite, “MetaOmics”, for meta-analyzing multiple transcriptomic studies for seven biological purposes.
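
As a rough illustration of the latent-variable idea in the first part, the toy sketch below (not the actual Bayesian hierarchical model from the talk) summarizes a gene's evidence in each study by a z-statistic and computes the posterior probability of a per-study latent differential indicator under a simple two-component Gaussian mixture; the prior probability, component parameters, and z-values are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-study z-statistics for one gene across four studies
# (e.g., summarizing a count model fit within each study).
z = np.array([2.8, 3.1, 0.4, 2.5])

pi1 = 0.3              # assumed prior probability that the gene is differential in a study
mu1, sd1 = 2.5, 1.0    # assumed distribution of z under the differential component

# Posterior probability that the latent indicator "differential in study s" equals 1
lik0 = norm.pdf(z, loc=0.0, scale=1.0)   # null component: z ~ N(0, 1)
lik1 = norm.pdf(z, loc=mu1, scale=sd1)   # differential component: z ~ N(mu1, sd1^2)
post = pi1 * lik1 / (pi1 * lik1 + (1 - pi1) * lik0)

print(np.round(post, 3))   # per-study posterior probabilities of differential expression
print(post.mean())         # a simple cross-study summary of the differential evidence
```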

High-dimensional variable screening: from single study to multiple studies

Advances in technology have generated abundant high-dimensional data from many studies. Because of their huge computational advantage, variable screening methods based on marginal association have become promising alternatives to the popular regularization methods. However, existing screening methods are limited to a single study. We consider a general framework for variable screening with multiple related studies and propose a novel two-step screening procedure for high-dimensional regression analysis under this framework. Compared to the one-step procedure, our procedure greatly reduces false negatives while keeping the false positive rate low. Theoretically, we show that our procedure possesses the sure screening property under weaker assumptions on signal strengths and allows the number of features to grow at an exponential rate of the sample size. After screening, the dimension is greatly reduced, so common regularization methods such as the group lasso can be applied to identify the final set of variables. Under the same framework, we also extend the screening procedure to the Cox proportional hazards model to detect survival-associated biomarkers from multiple studies, while allowing censoring proportions and baseline hazard rates to vary across studies. Simulations and applications to cancer transcriptomic data illustrate the advantages of our proposed methods.
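
To make the screening step concrete, here is a hedged sketch (not the exact two-step procedure from the talk) of marginal screening with multiple related studies: within each study, features are scored by absolute marginal correlation with the response, the scores are aggregated across studies, and only the top-ranked features are retained before regularization. The aggregation rule (sum of squared correlations) and the screening size d are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, n_studies = 100, 1000, 3

# Simulate studies sharing the same five signal features (indices 0..4)
studies = []
for _ in range(n_studies):
    X = rng.standard_normal((n, p))
    y = X[:, :5] @ np.ones(5) + rng.standard_normal(n)
    studies.append((X, y))

def marginal_corr(X, y):
    """Absolute Pearson correlation between each column of X and y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

# Aggregate marginal utilities across studies and keep the top d features
utilities = np.array([marginal_corr(X, y) for X, y in studies])
aggregate = (utilities ** 2).sum(axis=0)        # an illustrative aggregation rule
d = int(n / np.log(n))                          # an illustrative screening size
keep = np.argsort(aggregate)[::-1][:d]
print(sorted(keep[:10]))                        # the true signals 0..4 should typically rank near the top
```

After a step like this, the surviving features could be passed to a group-lasso fit across studies, in line with the post-screening regularization described above.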

Congruence evaluation for model organisms in transcriptomic response

Model organisms are instrumental substitutes for human studies and expedite basic and clinical research. Despite their indispensable role in mechanistic investigation and drug development, the resemblance of animal models to humans has long been questioned and debated. Little effort has been made toward an objective and quantitative congruence evaluation system for model organisms. We propose a framework, Congruence Analysis for Model Organisms (CAMO), for transcriptomic response analysis that combines threshold-free differential expression analysis, a quantitative resemblance score that accounts for data variability, pathway-centric downstream investigation, and knowledge retrieval by text mining. Instead of a genome-wide dichotomous answer of “poor/great” mimicking, CAMO helps researchers quantify and visually identify the biological functions that are best or least mimicked by model organisms, providing a foundation for hypothesis generation and subsequent translational decisions.
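
As a minimal illustration of what a continuous, pathway-level resemblance measure can look like (this is not CAMO itself), the sketch below correlates simulated human and mouse differential-expression effect sizes for the orthologous genes in one pathway; the simulated data and the choice of Spearman correlation are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

n_genes = 50                                   # orthologous genes in one pathway
human_lfc = rng.normal(0, 1, n_genes)          # human log fold changes
mouse_lfc = 0.7 * human_lfc + rng.normal(0, 0.7, n_genes)  # partially concordant mouse response

rho, _ = spearmanr(human_lfc, mouse_lfc)       # a continuous concordance measure
print(f"pathway-level resemblance score (illustrative): {rho:.2f}")
```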

Novel variable screening methods for omics data integration

Sure screening methods are a family of simple and effective dimension-reduction methods that reduce noise accumulation for variable selection in high-dimensional regression and classification problems. Since the first method proposed by Fan and Lv (2008), numerous sure screening methods have been developed for various model settings and have shown their advantages for big data analysis, with the desired scalability and theoretical guarantees. However, none of these methods are directly applicable for reducing dimension and selecting variables in omics data integration problems. In this talk, I will introduce two novel variable screening methods recently developed in our group for both horizontal and vertical omics data integration. In the first project, we proposed a general framework and a two-step procedure to perform variable screening when combining the same type of omics data from multiple related studies, and we showed that including multiple studies provides more evidence for dimension reduction. In the second project, we developed a fast and robust variable screening method to detect epigenetic regulators of gene expression over the whole genome by combining epigenomic and transcriptomic data, where both the predictor and response spaces are high-dimensional. We use extensive simulations and real data to demonstrate the strengths of our methods compared to existing screening methods.
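
For background on the single-study setting that both projects build on, here is a minimal sketch in the spirit of the sure independence screening of Fan and Lv (2008): rank features by absolute marginal correlation with the response and keep the top d. The simulated data and the conventional choice d = n/log(n) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 1000
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.2]) + rng.standard_normal(n)

Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
utility = np.abs(Xc.T @ yc) / n          # absolute marginal correlations
d = int(n / np.log(n))                   # a conventional screening size
keep = np.argsort(utility)[::-1][:d]     # surviving feature indices
print(sorted(keep[:10]))                 # the true signals 0, 1, 2 should usually rank near the top
```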

High-dimension to high-dimension screening for detecting genome-wide epigenetic regulators of gene expression

High-throughput technology now characterizes a wide range of epigenetic modifications across the genome that are involved in disease pathogenesis through the regulation of gene expression. The high dimensionality of both the epigenetic and the gene expression data makes it challenging to identify the important epigenetic regulators of genes. Conducting a univariate test for each epigenetic-gene pair incurs a serious multiple comparison burden, and direct application of regularization methods to select epigenetic-gene pairs is computationally infeasible. Applying fast screening to reduce the dimension before regularization is more efficient and stable than applying regularization methods alone. We propose a novel screening method based on robust partial correlation to detect epigenetic regulators of gene expression over the whole genome, a problem that involves both high-dimensional predictors and high-dimensional responses. Compared to existing screening methods, our method is conceptually innovative in that it reduces the dimension of both the predictors and the responses and screens at both the node (epigenetic features or genes) and edge (epigenetic-gene pairs) levels. We develop data-driven procedures to determine the conditional sets and the optimal screening threshold, and we implement a fast iterative algorithm. Simulations and applications to long non-coding RNA and microRNA regulation in kidney cancer and DNA methylation regulation in glioblastoma multiforme illustrate the validity and advantage of our method.
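
The sketch below gives a simplified picture of edge- and node-level screening between a high-dimensional predictor block (epigenetic features) and a high-dimensional response block (genes). It is not the exact robust-partial-correlation procedure described above: the “robust” association is approximated by Spearman correlation of residuals after regressing out a small, hypothetical conditioning set Z, and the threshold tau is an illustrative fixed value rather than the data-driven choice we develop.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
n, p, q = 120, 300, 200      # samples, epigenetic features, genes

X = rng.standard_normal((n, p))            # epigenetic features
Z = rng.standard_normal((n, 2))            # hypothetical conditioning covariates
Y = rng.standard_normal((n, q))
Y[:, 0] += 1.5 * X[:, 0] + 0.5 * Z[:, 0]   # plant one true regulator edge (X_0 -> Y_0)

def residualize(M, Z):
    """Residuals of each column of M after least-squares regression on Z."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Z1, M, rcond=None)
    return M - Z1 @ beta

def spearman_matrix(A, B):
    """Spearman correlations between all columns of A and all columns of B."""
    Ar = np.apply_along_axis(rankdata, 0, A)
    Br = np.apply_along_axis(rankdata, 0, B)
    Ar = (Ar - Ar.mean(0)) / Ar.std(0)
    Br = (Br - Br.mean(0)) / Br.std(0)
    return Ar.T @ Br / len(A)

R = spearman_matrix(residualize(X, Z), residualize(Y, Z))   # p x q edge utilities
tau = 0.35                                                  # illustrative threshold
edges = np.argwhere(np.abs(R) > tau)                        # edge-level screening
nodes_x = np.unique(edges[:, 0])                            # retained epigenetic features
nodes_y = np.unique(edges[:, 1])                            # retained genes
print(edges[:5], len(nodes_x), len(nodes_y))
```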

teaching