Wednesday, November 18, 2015

DNA and RNA structure

Let's review the basics we learned in Molecular Biology! :)


Each strand of the DNA double helix is a chain of bases linked to a sugar-phosphate backbone, and the two strands are held together by hydrogen bonds, as shown below:
The phosphate group links the 5' carbon of an upstream sugar to the 3' carbon of a downstream sugar. The 5-carbon sugar is called deoxyribose in DNA and ribose in RNA; the two are distinguished by the absence or presence of an oxygen atom at the 2' carbon, as highlighted in red in the following figure.

RNA structure is very similar to single-stranded DNA, except that:
  • The sugar in RNA is ribose (as opposed to deoxyribose) and has an –OH at the 2' carbon, highlighted in red in the figure below (DNA sugars have an –H at that position)
  • Thymine in DNA is replaced by uracil in RNA. T has a methyl (–CH3) group in place of the H atom shown in red in U.

Now let's see the cap structure of RNA:

Cap structures are most commonly added to the 5' end of RNA, known as mRNA capping. The 5' cap, shown as the brown part on the left of the following figure, is like a guanosine (G) nucleotide but with a methyl group added at the N7 position of guanine. Its 5' end is connected to the 5' end of the RNA through a triphosphate bridge.

The following figure also contains a 2' cap, which occurs rarely, in some eukaryotic and viral genomes.

Note that mitochondrial mRNAs don't have a cap.

Friday, October 23, 2015

Steps to push code to a github private repository

  • If you are a first-time user:
Step 1: Create a new repository on GitHub (assuming you already have a GitHub account), private or public. In the example below, the repository is named PREDICT-HD.

Step 2: [optional] Generate an SSH public key and add it to your GitHub account
Follow this help page exactly:

Step 3: Push the code to GitHub from your local computer:
cd ~/project/PREDICT-HD/src
git init
git add *
git commit -m "my first commit"

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git remote add origin <repository-URL>  # e.g. git@github.com:<username>/PREDICT-HD.git
git push origin master
  • If a collaborator has already pushed a commit before you push yours:
git add xxx
git commit -m "your commit message"
git pull  # merge the collaborator's commits first
git push origin master

Monday, October 05, 2015

A parallel and fast way to download multiple files

We can write a short script to download multiple files easily on the command line, e.g.

for i in X Y Z; do wget $i.url; done

If we want them to run in the background (i.e. in a pseudo-parallel way), we can use the -b option of wget.

But this is still not fast enough, and parallelizing with wget -b won't give me any notice once it's done.

Here is my solution: axel + parallel

parallel -a urls.file axel

Let's say I want to download all brain-sample bigWig files of the H3K4me1 mark from the Roadmap Epigenomics project. Here is the code:

mark=H3K4me1
> url.$mark  # to generate an empty file
for i in E071 E074 E068 E069 E072 E067 E073 E070 E082 E081; do
  echo $i-$mark.pval.signal.bigwig >> url.$mark
done
parallel -a url.$mark axel -n 5

Regarding what axel is and how fast it is compared to wget, please refer to this link:

Friday, September 11, 2015

PLINK2 vs. SNAP for linkage disequilibrium (LD) calculation

Among the ways to identify SNPs in linkage disequilibrium (LD), two tools have been widely used in the literature: PLINK2 and SNAP.

With PLINK2, SNPs in LD can be reported using parameters like:
plink --r2 dprime --ld-window-kb 1000 --ld-window 10 --ld-window-r2 0.8
SNAP is a web-based service with pre-calculated data: you can set the population, the distance limit, and the r2 threshold.

Note that PLINK2 has one more control: --ld-window

In its manual, it says: "By default, when a limited window report is requested, every pair of variants with at least (10-1) variants between them, or more than 1000 kilobases apart, is ignored."

That means PLINK2 limits SNP pairs with two kinds of radius: the genomic distance (--ld-window-kb) and the number of SNPs apart (--ld-window), while SNAP only has the distance limit. Both have an r2 threshold.
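
For completeness, here is a sketch of how such a run might look with PLINK 1.9-style flags (mydata and mydata_ld are placeholder file names):

plink --bfile mydata --r2 dprime --ld-window-kb 1000 --ld-window 10 --ld-window-r2 0.8 --out mydata_ld
# writes the pairwise r2/D' table to mydata_ld.ld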

Sunday, August 16, 2015

Using ANOVA to get correlation between categorical and continuous variables

How to calculate the correlation between categorical variables and continuous variables?

This is the question I was facing when attempting to check the correlation of PEER inferred factors vs. known covariates (e.g. batch).

One solution I found: use ANOVA to calculate the R-squared between a categorical input and a continuous output.

Here is my R code snippet:

## correlation of inferred factors vs. known factors
# name PEER factors
# continuous known covariates:
covs2=subset(covs, select=c(RIN, PMI, Age));
# re-generate batch categorical variable from individual binary indicators (required by PEER)
covs2=cbind(covs2, batch=paste0("batch",apply(covs[,1:6],1,which.max)))
covs2=cbind(covs2, Sex=ifelse(covs$Sex,"M","F"), readLength=ifelse(covs$readsLength_75nt, "75nt", "50nt"))

# ref:
# assuming 'factors' holds the PEER inferred factors (one column per factor)
# and covs2 holds the known covariates built above
library(plyr)
xvars = covs2; yvars = as.data.frame(factors)
r2 <- laply(xvars, function(x) {
  laply(yvars, function(y) summary.lm(aov(y ~ x))$r.squared)
})
rownames(r2) <- colnames(xvars)
colnames(r2) <- colnames(yvars)
pvalue <- laply(xvars, function(x) {
  laply(yvars, function(y) { F = summary.lm(aov(y ~ x))$fstatistic;
    pf(F[1], F[2], F[3], lower.tail = FALSE) })
})
rownames(pvalue) <- colnames(xvars)
colnames(pvalue) <- colnames(yvars)

pheatmap(-log10(t(pvalue)),color= colorRampPalette(c("white", "blue"))(10), cluster_row =F, cluster_col=F, display_numbers=as.matrix(t(round(r2,2))), filename="peer.factor.correlation.pdf")

The core part is the call to aov(): we can use the aov() function in R to run ANOVA, and its result can be summarized with the summary.lm() function, which shows output like:

> summary.lm(results)

Call:
   aov(formula = weight ~ group)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.0710 -0.4180 -0.0060  0.2627  1.3690 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   5.0320     0.1971  25.527   <2e-16
grouptrt1    -0.3710     0.2788  -1.331   0.1944
grouptrt2     0.4940     0.2788   1.772   0.0877

Residual standard error: 0.6234 on 27 degrees of freedom
Multiple R-squared: 0.2641,     Adjusted R-squared: 0.2096 
F-statistic: 4.846 on 2 and 27 DF,  p-value: 0.01591

R-squared and the p-value are shown at the end of the output.

Note: the summary.lm() object doesn't contain the p-value directly, but we can compute it from the F statistic:

> F=summary.lm(results)$fstatistic
> F=as.numeric(F)
> pf(F[1], F[2], F[3], lower.tail=FALSE)

The table below is a nice summary of the methods applicable to each combination of data types.

Predictor \ Outcome                     Categorical                         Continuous
Categorical                             Chi Square, Log linear, Logistic    t-test, ANOVA (Analysis of Variance), Linear regression
Continuous                              Logistic regression                 Linear regression, Pearson correlation
Mixture of Categorical and Continuous   Logistic regression                 Linear regression, Analysis of Covariance

The next thing I need to refresh my mind on is how calculating the correlation with cor() differs from the ANOVA method above.

I know the correlation coefficient r can be inferred from the regression coefficient and the standard deviations of the two variables. For example, if we know sd(x) and sd(y), and regressing y ~ x gives the line y = b0 + b1*x, then we can calculate r as

r = b1 * SDx / SDy

When x and y are both standardized (i.e. mean = 0, sd = 1), r = b1.
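
A quick sanity check of this identity in R (simulated data, just for illustration):

set.seed(1)
x = rnorm(100)
y = 2*x + rnorm(100)
b1 = coef(lm(y ~ x))[2]
b1 * sd(x) / sd(y)  # same value as cor(x, y)
cor(x, y)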

Wednesday, August 12, 2015

use getline to capture system command output in awk

I just learnt this today: awk also has its own pipe (|) and getline, just like the Unix shell. If I want to call a system command in awk and capture its output (note: system() won't work, as it only returns the exit status), I can pipe the output to getline.

For example,

$cat > test.txt
aa bb cc
11 22 33
44 55 cc

$awk 'BEGIN{cmd="grep cc test.txt | sort -k1,1 | head -n1 | cut -f2 -d\" \""; cmd | getline a; print a}'
55

$awk 'BEGIN{cmd="grep cc test.txt | sort -k1,1 | head -n1 | cut -f2 -d\" \""; system(cmd);}'

Note that if we only use cmd by itself, it won't print anything, because evaluating cmd as a bare expression doesn't send any output to the console (unlike system(cmd)).

$awk 'BEGIN{cmd="grep cc test.txt | sort -k1,1 | head -n1 | cut -f2 -d\" \""; cmd;}'

If you want to capture multiple lines of output and process them in awk, you can do

$awk 'BEGIN{cmd="grep cc test.txt | sort -k1,1 | cut -f2 -d\" \""; while( (cmd | getline a) >0) print a;}'


Wednesday, August 05, 2015

Dopamine, L-DOPA, catecholamines, TH, Nurr1, etc.

Catecholamines is the collective name for three neurotransmitters: epinephrine (adrenaline), norepinephrine (noradrenaline) and dopamine.

L-DOPA is the precursor of dopamine. The corresponding drug is called levodopa.

L-DOPA is produced from the amino acid L-tyrosine by the enzyme tyrosine hydroxylase (TH).

Dopamine cannot pass the BBB (blood-brain barrier), but its precursor L-DOPA can, which is why levodopa is used as a drug.

L-DOPA can be converted to dopamine, which can be further converted to norepinephrine (noradrenaline) and then epinephrine (adrenaline).

Dopaminergic neurons can be divided into 11 cell groups according to where they are located, originally defined using a histochemical fluorescence method. These include A8-A16, Aaq and a telencephalic group. Among them, A9 corresponds to the substantia nigra pars compacta (SNpc).

The number of TH-positive neurons in the SN declines with age, while the α-synuclein level increases with age. This inverse relationship is also seen in the surviving DA neurons of PD patients.

It has been shown that the down-regulation of TH is linked to reduced expression of the transcription factor Nurr1 (an orphan nuclear receptor encoded by the gene NR4A2).

Nurr1 has been shown to regulate a set of nuclear-encoded mitochondrial genes.

Tuesday, July 28, 2015

Note from GEUVADIS paper

Note from GEUVADIS papers:

Lappalainen et al. Nature 2013 : Transcriptome and genome sequencing uncovers functional variation in humans,
‘t Hoen et al. Nature Biotechnology 2013: Reproducibility of high-throughput mRNA and small RNA sequencing across laboratories,

1. How to detect sample outliers?
a. before alignment: distance of k-mer profiles
b. after alignment: Spearman rank correlation between samples --> D-statistic (i.e. the median correlation of one sample against all the other samples; see the sketch at the end of this note)
c. gender mismatch: XIST vs. chrY
d. ASE bias rate among heterozygous sites

2. eQTL
a. exon/gene quantification
b. filter out lowly expressed ones (e.g. 0 in >50% samples)
c. for each group, normalize with PEER, adding mean
    c1. use a subset (??e.g. chr20, or chr20-22??) with K = 0, 1, 3, 5, 7, 10, 13, 15, 20 for each dataset
    c2. run eQTL and count the number of eQTL genes (eGenes) for each K
    c3. take the optimal K = the K with the largest number of eGenes
    c4. run PEER on 20,000 exons to get covariates for the final normalization
    c5. final PEER normalization using the whole dataset; residuals + mean as the final quantification
d. transform the final quantification to standard normal distribution (by ?)
e. eQTL using Matrix-eQTL: linear regression of quantification ~ genotypes + genotype_covariates
3. Differential expression analysis
a. TMM normalization (from edgeR)
b. filter: genes with more than 5 counts per million in at least 1 sample were analyzed in pairwise population comparisons
c. tweeDEseq (good for large samples), significance: FDR < 0.05 and log2 fold change greater than 2
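
As a reminder of what the D-statistic in 1b looks like in practice, here is a minimal R sketch (expr is a hypothetical genes-by-samples expression matrix, not code from the papers):

rho = cor(expr, method="spearman")    # sample-by-sample Spearman correlation
diag(rho) = NA
D = apply(rho, 2, median, na.rm=TRUE) # median correlation of each sample vs. all others
sort(D)[1:5]                          # samples with the lowest D are outlier candidates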

Wednesday, June 24, 2015

median filter in AWK

Here is what a median filter does, from Wikipedia:
To demonstrate, using a window size of three with one entry immediately preceding and following each entry, a median filter will be applied to the following simple 1D signal:
x = [2 80 6 3]
So, the median filtered output signal y will be:
y[1] = Median[2 2 80] = 2
y[2] = Median[2 80 6] = Median[2 6 80] = 6
y[3] = Median[80 6 3] = Median[3 6 80] = 6
y[4] = Median[6 3 3] = Median[3 3 6] = 3
i.e. y = [2 6 6 3].
Note that, in the example above, because there is no entry preceding the first value, the first value is repeated, as with the last value, to obtain enough entries to fill the window. This is one way of handling missing window entries at the boundaries of the signal.
Here is my awk code to implement this:
#!/bin/awk -f
# awk script to filter noise by sliding a window and taking the mean or median per window
# Author: Xianjun Dong
# Date: 2015-06-23
# Usage: _filter.awk values.txt
#        bigWigSummary input.bigwig chr10 101130293 101131543 104 | _filter.awk -vW=5 -vtype=median
# Note: requires gawk for asort()
BEGIN{
  if(W=="") W=5;              # window size (use an odd number)
  if(type=="") type="median"; # "median" or "mean"
  half=int(W/2);
}
{
  for(i=1;i<=NF;i++) {
    # fill the window centered on field i, repeating boundary values at the edges
    for(j=1;j<=W;j++) { k=i-half-1+j; if(k<1) k=1; if(k>NF) k=NF; array[j]=$k; }
    if(type=="median") {asort(array); x=array[half+1];}
    if(type=="mean") {x=0; for(j=1;j<=W;j++) x+=array[j]; x=x/W;}
    printf("%s\t", x);
  }
  print "";
}

Friday, May 22, 2015

Be cautious of using grep --colour=always

I noticed that grep --colour=always can embed additional escape codes (for coloring, e.g. ^[[01;31m ... ^[[m^[[K) in your text, which can lead to confusion downstream. Here is one such example:

$ echo 1 2 3 | grep 2
1 2 3
$ echo 1 2 3 | grep 2 | cat -v
1 2 3
$ echo 1 2 3 | grep --colour=always 2 
1 2 3
$ echo 1 2 3 | grep --colour=always 2 | cat -v
1 ^[[01;31m^[[K2^[[m^[[K 3

See my answer for more detail.

The solution is to use --colour=auto or not use the --colour option at all.

$ echo 1 2 3 | grep --colour=auto 2 
1 2 3
$ echo 1 2 3 | grep --colour=auto 2 | cat -v
1 2 3

Check your .bashrc to see if you set alias grep='grep --colour=always' like I did.

Thursday, May 21, 2015

Which version should I install? 32 bit or 64 bit? Ubuntu or CentOS?

I have posted two relevant articles on this topic before:

Unix vs. Linux
Which version of tools should I install? i386 vs. x86_64, and 32-bit vs. 64-bit kernel

However, this can still be confusing sometimes... for example, when you visit the SRA toolkit download page:

OK. Here is the short answer.

1. How to tell if my CPU architecture is 32-bit or 64-bit?

$ uname -a
If the output contains i386 or i586, it's 32-bit. If it contains x86_64, it's 64-bit.


2. How to tell if my OS (operating system) kernel is 32-bit or 64-bit?

Note that the OS kernel can differ from the hardware architecture above. Usually the OS version follows the hardware, but a 32-bit kernel can run on a 64-bit machine (not the other way around), so the two might not be consistent. The MacBook Pro I am using runs a 32-bit kernel on a 64-bit processor (see my previous post).
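
On a Linux box, a quick way to check this (standard commands, shown here just as a hint):

$ uname -m          # x86_64 => 64-bit kernel; i386/i686 => 32-bit kernel
$ getconf LONG_BIT  # prints the word size of the userland: 32 or 64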

3. How to tell which OS I am using? CentOS or Ubuntu?

In my case, I used:
$ ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* | grep -v "lsb" | cut -d'/' -f3 | cut -d'-' -f1 | cut -d'_' -f1

It returns redhat, because CentOS (a Red Hat derivative) also ships an /etc/redhat-release file.

## For more detail, please refer to terdon's answer at

Friday, May 15, 2015

How to correctly set color in the image() function?

Sometimes we want to make our own heatmap using the image() function. I recently found it tricky to set the color option there, as the manual has very little information on col:

I posted my question on Biostars. The short answer is: unless breaks is set, the range of z is evenly cut into N intervals (where N = the number of colors) and each value in z is assigned the color of the corresponding interval.

For example, when x=c(3,1,2,1) and col=c("blue","red","green","yellow"), the minimum of x gets the first color and the maximum gets the last color. Any value in between is mapped proportionally to a color. In this case 2 sits in the middle; following the principle that intervals are closed on the right and open on the left, it is assigned to "red". That's why we see the colors yellow --> blue --> red --> blue.


x = c(3,1,2,1)
image(1, 1:length(x), matrix(x, nrow=1, ncol=length(x)), col=c("blue","red","green","yellow"))

In practice, unless we want to manually define the color break points, we just build a color ramp between a first and a last color (e.g. with colorRampPalette) and the values in z are assigned colors automatically.

collist = colorRampPalette(c("blue","yellow"))(100)  # for example
image(1:ncol(x), 1:nrow(x), as.matrix(t(x)), col=collist, asp=1)

If we want to manually define the color break points, we can do something like:

xmin=0; xmax=100;
x[x<xmin]=xmin; x[x>xmax]=xmax;
ColorRamp <- colorRampPalette(c("blue","white","red"))(10000)  # for example
ColorLevels <- seq(from=xmin, to=xmax, length=10000)
ColorRamp_ex <- ColorRamp[round(1+(min(x)-xmin)*10000/(xmax-xmin)) : round((max(x)-xmin)*10000/(xmax-xmin))]
par(mar=c(2,0,2,0), oma=c(3,3,3,3))
image(t(as.matrix(x)), col=ColorRamp_ex, las=1, xlab="", ylab="", cex.axis=1, xaxt="n", yaxt="n")
image(as.matrix(ColorLevels), col=ColorRamp, xlab="", ylab="", cex.axis=1, xaxt="n", yaxt="n")

Friday, May 08, 2015

Tips and Tools you may need for working on BIG data

Nowadays everyone is talking about big data. As a genomic scientist, I feel hungry for a collection of tools specialized for the medium-to-big data we deal with every day.

Here are some tips I found useful when getting, processing or visualizing large data sets:

1. How to download data faster than wget?

We can use wget to download data to the local disk. If the data is large, we can use a faster alternative, such as axel or aria2.

2. Process the data in parallel with hidden option in GNU commands

  • If you have many files to process and they are independent, you can process them in parallel. GNU has a command called parallel. Pierre Lindenbaum wrote a nice notebook on "GNU Parallel in Bioinformatics", worth reading.
  • Many commonly used commands also have a (not so well known) option to run in parallel. For example, GNU sort has --parallel=N to use multiple cores.
  • You can add -F when doing grep -f on a large pattern file (if the patterns are plain strings rather than regexes). People also suggest setting export LC_ALL=C to get a ~2x speed-up. (See the examples below.)
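
A few examples of the points above (file names are placeholders):

# run a command on many files, 8 jobs at a time, with GNU parallel
ls *.bigwig | parallel -j 8 "bigWigSummary {} chr10 101130293 101131543 104 > {}.sum"
# multi-threaded GNU sort
sort --parallel=4 -k1,1 big.bed > big.sorted.bed
# fixed-string grep with the C locale is much faster on large pattern lists
export LC_ALL=C
grep -F -w -f list.txt input.txt > matched.txt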

3. In R, there are several must-know tips for large data, e.g. data.table:
  • If using read.table(), set stringsAsFactors = F and colClasses. See the example here
  • Use fread(), not read.table(). Some more details here. But so far, fread() does not support reading *.gz files directly; use fread('zcat file.gz') instead.
  • Use data.table rather than data.frame. Learn the difference online here.
  • There is a nice CRAN Task View on how to process data in parallel in R, but I have not followed it in practice. Hopefully there will be some easy tutorials, or I will become less procrastinating and learn some of them... At least I can start with foreach() (see the sketch below).
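
A minimal sketch of these R tips (package names are real, file names are just examples):

library(data.table)
dt = fread("zcat big_file.txt.gz")  # fast read; avoids read.table()

library(foreach)
library(doParallel)
registerDoParallel(cores=4)
res = foreach(f=c("a.txt","b.txt","c.txt"), .combine=rbind) %dopar% fread(f)
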
4. How to open a scatter plot with too many points in Illustrator?

This is really a problem for me, as we usually have a figure with >30k dots (i.e. each dot is a gene). Even though they highly overlap each other, opening such a figure in Illustrator is extremely slow. Here is a tip:
From that, probably a better idea is to "compress" the data before plotting, such as merging overlapping dots if they overlap by more than some percentage.
or this one:
or this one:

Still working on the post...

Thursday, May 07, 2015

A clarification on the Illumina TruSeq Small RNA prep kit manual

In the TruSeq® Small RNA Library Prep Guide, below Figure 1, there is a sentence: "The RNA 3' adapter is modified to target microRNAs and other small RNAs that have a 3' hydroxyl group resulting from enzymatic cleavage by Dicer or other RNA processing enzymes." It's correct, but it could be very misleading if you are not clear about the diverse picture of the transcriptome (scroll down for more detail). I want to emphasize that the 3' hydroxyl group (and the 5'-phosphate group) is NOT specific to microRNAs or any small RNAs, and it doesn't necessarily result from enzymatic cleavage by Dicer. Sonication can also break a full-length mRNA (with 5'-cap and 3'-polyA) into truncated RNA pieces with free 5'-phosphate and 3'-hydroxyl ends. I just called Illumina to confirm that the 3' and 5' ligation steps don't guarantee the selection of miRNAs (more accurately, they select any RNA with a 5'-phosphate and a 3'-hydroxyl end). The last step of gel purification is the key to select (or, more accurately, enrich) miRNAs.

OK. Here is what I learned from my colleagues about the different RNA species in the transcriptome:

There are 4 species of 5' ends in the transcriptome, where the latter 3 are intermediates of transcription (or partial products of degradation):
  • me7Gppp-------------------------3' (1) 
  •                 p------------- 3' (2) 
  •                 OH-------------3' (3) 
  •        ppp---------------------3' (4) 
Only group (2) will ligate to the 5' adaptor. The 3' end can also have different formats, at least two:
  • 5' ----------------- AAAAAA (1) 
  • 5' ---------- OH (2) 
Also note there are two enzymes used to repair the 5' ends: CIP and TAP. CIP (calf intestinal alkaline phosphatase) removes the 5' phosphate group from a nucleic acid strand. TAP (tobacco acid pyrophosphatase) removes the 5' cap structure (the 5'-5' triphosphate linkage) and leaves a mono-phosphate at the 5' end. So, applying CIP first and then TAP will convert the above (2) and (4) to (3), and then convert (1) to (2). That's one way to capture the 5' cap structure, with the same purpose as CAGE (but CAGE uses the type IIS restriction enzyme MmeI or the type III restriction enzyme EcoP15I).

Friday, March 20, 2015

grep -wfF list.txt input.txt

If you are just grepping a list of patterns and the list is stored in a file, say list.txt, then you can always do grep -wf list.txt input.txt

When list.txt is huge, adding "-F" (treat the patterns as fixed strings, not regexes) makes it much faster.

Extracted from

Wednesday, March 11, 2015

X11 connection error in Mac

Typically, I log into my remote server/cluster via "ssh -X" and from there launch R for plotting. But it always shows an error like

unable to open connection to X11 display ''

after a while, when you want to call functions such as plot(). 

This is very annoying, as I then have to exit the server and log in again.

Does this sound familiar to you?

Here is the solution I found via the website below:

Two ways to solve this:

1. add the following line to the Mac client’s /etc/ssh_config:
ForwardX11Timeout 596h
2. use "ssh -Y <remote system>" instead of -X, as it may not trigger the untrusted-auth timeout (a ~/.ssh/config sketch follows below).
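
These settings can also live in the Mac client's ~/.ssh/config on a per-host basis (a sketch; myserver is a placeholder host name):

Host myserver
    ForwardX11 yes
    ForwardX11Trusted yes
    ForwardX11Timeout 596h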

Thursday, March 05, 2015

How to extract the gap region in human genome?

I just noticed that I should avoid the gap regions, especially when generating a random background as the null distribution using tools such as bedtools shuffle.

Short answer: go to the UCSC Table Browser link below and choose to save the gap track as a BED file (see the bedtools example after the table).

As the table below shows, 8.28% of the hg19 assembly is simply gap.

Gap (gap) Summary Statistics
item count:     457
item bases:     239,845,127 (8.28%)
item total:     239,845,127 (8.28%)
smallest item:  47
average item:   524,825
biggest item:   30,000,000
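
Then, to actually exclude the gaps when shuffling, something like this should work with bedtools (peaks.bed and hg19.genome are placeholder file names; gap.bed is the BED file saved from the Table Browser above):

bedtools shuffle -i peaks.bed -g hg19.genome -excl gap.bed > random_background.bed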

Thursday, February 26, 2015

reshape: from long to wide format

This continues the topic of using the melt/dcast functions in reshape2 to convert a data frame between long and wide format. Here is an example I found helpful when generating the covariate table required for PEER (or Matrix-eQTL) analysis:

Here is my original covariate table:

Let's say we need to convert the categorical variables such as condition, cellType, batch, replicate, readLength and sex into binary indicators (note: this is required by most regression programs like PEER or Matrix-eQTL, since, for example, batch 5 being numerically higher than batch 1 does not mean anything, unlike age or PMI). So we need to convert this long format into a wide format. Here is my R code for that:

categorical_variables = c("batch", "sex", "readsLength", "condition", "cellType", "replicate");
for(x in categorical_variables) {cvrt = cbind(cvrt, value=1); cvrt[,x]=paste0(x,cvrt[,x]); cvrt = dcast(cvrt, as.formula(paste0("... ~ ", x)), fill=0);}

Here is output:
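
The output screenshot is not reproduced here, so here is a toy example of the same dcast trick, just to show the idea (made-up data):

library(reshape2)
cvrt = data.frame(sampleID=c("s1","s2","s3"), batch=c(1,2,1))
cvrt$batch = paste0("batch", cvrt$batch)
cvrt = cbind(cvrt, value=1)
dcast(cvrt, ... ~ batch, fill=0)
#   sampleID batch1 batch2
# 1       s1      1      0
# 2       s2      0      1
# 3       s3      1      0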

Monday, January 12, 2015

altColor in UCSC track setting

Many track types allow setting a color range that varies from color to altColor. For instance the CpG Island tracks use the altColor setting to display the weaker islands, while the stronger ones are rendered in color. If altColor is not specified, the system will use a color halfway between that specified in the color tag and white instead.

Be aware that wiggle values below zero are drawn in altColor, while positive values are drawn in color.

Using one line command as input for LSF bsub

In the simple case, you can pass your command directly as bsub arguments to submit the job to an LSF cluster.

If you have a complicated job with many commands, you can save them into a script (including the shell pathname in the first line) and then submit that script to the LSF cluster, e.g. bsub yourscript arguments

In your script, you would write something like this:

myFirstArgument=$1

Here I found I can also use a pipe to connect multiple commands into one line and simply quote them as a single command for bsub. Here is an example:

bsub "echo -ne 'ab\tcss' | awk '{print \$2}'"

So far, I have found that I have to add "\" (backslash) to escape special characters such as $ in awk. I wonder whether there is a bsub option to avoid this.
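
One way to avoid the escaping altogether is to put the pipeline into a small script and submit the script instead (a sketch, not a special bsub option):

cat > myjob.sh <<'EOF'
#!/bin/bash
echo -ne 'ab\tcss' | awk '{print $2}'
EOF
bsub < myjob.sh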