Thursday, November 19, 2009

Using Vmatch to combine assemblies

In praise of Vmatch

If I could take only one bioinformatics tool with me to a desert island it would be Vmatch.

In addition to being a versatile alignment tool, Vmatch has many lesser-known features that also leverage suffix trees. The -dbcluster option allows Vmatch to cluster similar sequences using its index files. This method is about 1000x faster than attempting to align all sequences against each other, no matter how clever the pairwise algorithm.

Originally presented as a means to cluster EST sequences, this option is useful for processing the output of de novo short read assemblers like Velvet and ABYSS. Vmatch -dbcluster allows us to easily create a non-redundant set of contig sequences originating from several assemblies.

Why would we want to combine assemblies?

Every kmer is sacred

Why not just take the one with the highest N50? At the recent Genome Informatics meeting at CSHL, Inanc Birol (ABYSS) said "every kmer is sacred". Our experience with the de novo transcriptome assembly of plants has been that the best set of contigs is spread out all over the parameter space. Higher kmer and cvCut settings can produce a set of longer, elite contigs at the expense of omitting the lowly expressed (or spurious?) contigs that appear at less stringent settings.

Reads held hostage

To the point above, I would add "every cvCut is also sacred". That doesn't exactly roll off the tongue, but we have seen some instances in which an assembly will have higher read usage at a cvCut of 10 than at 5. This paradox suggests there is a "contig read hostage" situation, in which reads critical to the formation of longer contigs can be held captive in low coverage short contigs. Raising the cvCut threshold frees up these critical reads to extend more plausible contigs and allows additional reads from the pool to be recruited into the assembly.

What is a non-redundant set?

The following sequences have some redundancies. NODE2 and NODE3 do not add any information, so the non-redundant set will contain only NODE1 and NODE4.
>NODE1 the original full-length sequence
>NODE2 a substring of NODE1
>NODE3 a reverse complement of a substring of NODE1
>NODE4 a new sequence
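To make the rule concrete, here is a brute-force Python sketch of the redundancy test. This is only for illustration (Vmatch does this with suffix trees and is vastly faster), and the sequences below are invented:

```python
def revcomp(s):
    # reverse complement of a DNA string
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def nonredundant(seqs):
    # keep a sequence only if neither it nor its reverse complement
    # is a substring of an already-kept (longer or equal) sequence
    keep = {}
    for name, s in sorted(seqs.items(), key=lambda kv: -len(kv[1])):
        if not any(s in t or revcomp(s) in t for t in keep.values()):
            keep[name] = s
    return keep

# invented toy sequences mirroring the NODE1..NODE4 example above
seqs = {
    "NODE1": "AAACGTCCCTGA",
    "NODE2": "CGTCCC",       # substring of NODE1
    "NODE3": "GGACGT",       # reverse complement of a NODE1 substring
    "NODE4": "GATTACAGATT",  # a new sequence
}
print(sorted(nonredundant(seqs)))  # → ['NODE1', 'NODE4']
```

The quadratic substring scan here is exactly what the suffix-tree index lets Vmatch avoid.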

How do we create a non-redundant set?

Do not use -nonredundant

No, really. There is an option called -nonredundant which should presumably do what we want, but unfortunately it writes a "representative" member of each cluster to a file, which may or may not be the longest contig. I'm not sure what makes a sequence representative of a cluster, but for this application we want the longest member of each cluster.

Update: On April 29th 2010, Vmatch 2.1.3 was released. The most important change is that the option -nonredundant now delivers the longest sequence from the corresponding cluster (instead of an unspecified representative). This should make the cluster-extraction approach below unnecessary.

To create a non-redundant set we will produce cluster files and then extract the longest sequence from each file. Use the following commands to produce your cluster files from an index:
mkvtree -allout -pl -db contigs1.fa contigs2.fa -dna -indexname myIndex
#mkvtree will accept multiple fasta or gzipped fasta files

#if using Vmatch 2.1.3 or later:
vmatch -d -p -l 25 -dbcluster 100 0 -v -nonredundant nonredundantset.fa myIndex > mySeqs.rpt
# thx to Hamid Ashrafi for debugging this syntax

#if using older Vmatch
mkdir sequenceMatches 
#this is where vmatch will put each cluster

vmatch -d -p -l 25 -dbcluster 100 0 mySeqs "(1,0)" -v -s myIndex > mySeqs.rpt
mv mySeqs*.match mySeqs*.fna sequenceMatches

What do these options do?

  • -d direct matches (forward strand)
  • -p palindromic matches (reverse strand)
  • -l search length (set this below your shortest sequence)
  • -dbcluster queryPerc targetPerc - runs an internal all-against-all comparison of the index created by mkvtree.
    • The two numeric arguments specify what percentage of your query sequence (the smaller one) must be involved in an alignment to the cluster target superstring. For our purposes we require 100% of the query sequence to match the target. We don't care what percentage of the target is aligned, so set the second parameter to 0.
    • The third argument, mySeqs, is the filename prefix that vmatch will give to the THOUSANDS of cluster .fna and .match files it will create.
    • The fourth argument "(1,0)" specifies that we want to keep singletons (in a file called mySeqs.single.fna) and that there is no limit on the number of sequences in an acceptable cluster.

  • -v verbose report (redirected to the .rpt file)
  • -s create the individual cluster fasta files and match reports

Consult the Vmatch manual for fuzzy matches and more examples.

The standard output details the clusters and singletons, one member per line (cluster number, then sequence number and description):
0:
761437: NODE_83_length_31_cov_22.451612
857642: NODE_106_length_31_cov_22.451612
621409: NODE_152_length_29_cov_27.758621
749981: NODE_185_length_29_cov_27.758621
1:
761861: NODE_531_length_424_cov_26.851416
805590: NODE_1413_length_320_cov_28.187500
837480: NODE_1236_length_320_cov_28.187500
858407: NODE_937_length_320_cov_28.187500
765510: NODE_1542_length_108_cov_34.870369
621979: NODE_786_length_425_cov_32.602352
750915: NODE_1290_length_321_cov_34.392525

Extract the longest sequence from your cluster files

If you are using Vmatch 2.1.3 or later, this step is unnecessary.
Here is a Perl script, which we will save as longest_seq.pl, to do that:
#!/usr/bin/env perl
# print the longest sequence in a fasta file
use strict;
use warnings;
my %seqs;
my $defline;
while (<>) {
    next if /^#/ || /^\s*$/;    # skip comment and empty lines
    chomp;
    if (s/^>//) {
        $defline = $_;
        $seqs{$defline} = "";
    } elsif ( defined $defline ) {
        $seqs{$defline} .= $_;
    } else {
        die "invalid FASTA file: sequence data before first defline\n";
    }
}
my ($max) = keys %seqs;
foreach my $def ( keys %seqs ) {
    $max = $def if length( $seqs{$def} ) > length( $seqs{$max} );
}
print ">$max\n$seqs{$max}\n" if defined $max;

We want the longest member of each cluster and all sequences in the singletons file:
for f in sequenceMatches/*.fna; do
  if [ "$f" = "sequenceMatches/mySeqs.single.fna" ]; then
    cat "$f" >> mySeqs_longest_seqs.fa
  else
    perl longest_seq.pl "$f" >> mySeqs_longest_seqs.fa
  fi
done

That's it - now you have a comprehensive and non-redundant set of the longest contigs from a number of assemblies.

Wednesday, November 4, 2009

R's xtabs for total weighted read coverage

Samtools and its BioPerl wrapper Bio::DB::Sam prefer to report read coverage on a depth-per-base-pair basis. This is typically an array of depths, one for every position that has at least one read aligned. OK, works for me. But how can we quickly see which targets (in my case transcripts) have the greatest total weighted read coverage (i.e. the sum over every base pair of every read that aligned)?

My solution is to take that target-pos-depth information and import a table into R with at least the following columns:

I added the pos column here to emphasize the base-pair granularity
         tx pos depth
1 tx500090 227 1
2 tx500090 228 1
3 tx500090 229 1
4 tx500090 230 1
5 tx500090 231 1
66 tx500123 184 1
67 tx500123 185 1
68 tx500123 186 1
69 tx500123 187 2
70 tx500123 188 2
71 tx500123 189 2

In R
myxTab <- xtabs(depth ~ tx, data = myCoverage)

xtabs will sum the depth-weighted positions by default (I suppose this is what tabulated contingency really means) and return an unsorted table of transcripts and their weighted coverage (total base-pair read coverage):
> myxTab[1:100]
tx500090 tx500123 tx500134 tx500155 tx500170 tx500178 tx500203 tx500207
38 92 610 46 176 46 92 130
tx500273 tx500441 tx500481 tx500482 tx500501 tx500507 tx500667 tx500684
76 2390 114 71228 762 222 542 442
tx500945 tx500955 tx501016 tx501120 tx501127 tx501169 tx501190 tx501192
1378 3604 46 46 420 854 130 352
tx501206 tx501226 tx501229 tx501245 tx501270 tx501297 tx501390 tx501405
244 1204 206 15926 214 46 168 46
tx501406 tx501438 tx501504 tx501694 tx501702 tx501877 tx501902 tx502238
38 2572 7768 3274 314 298 84 198
tx502320 tx502364 tx502403 tx502414 tx502462 tx502515 tx502517 tx502519
122 38 588 46 46 38 38 466
tx502610 tx502624 tx502680 tx502841 tx502882 tx503090 tx503192 tx503204
206 38 168 3750 38 122 76 92
tx503416 tx503468 tx503523 tx503536 tx503571 tx503578 tx503623 tx503700
260 38 168 38 46 46 84 38
tx503720 tx503721 tx503722 tx503788 tx503872 tx503892 tx503930 tx503970
97112 38 38 4708 38 38 1290 84
tx503995 tx504107 tx504115 tx504346 tx504353 tx504355 tx504357 tx504398
46 152 206 46 3416 1402 122 290
tx504434 tx504483 tx504523 tx504589 tx504612 tx504711 tx504751 tx504827
290 8728 176 46 46 76 5644 1308
tx504828 tx504834 tx504882 tx504931 tx504952 tx505017 tx505029 tx505078
2336 328 46 34138 1000 1838 46 474
tx505123 tx505146 tx505159 tx505184
38 123344 160 588
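What xtabs is doing here can be sketched in plain Python: group the per-base (tx, pos, depth) rows by transcript and sum the depth column. The toy rows below are invented for illustration:

```python
# A plain-Python sketch of the aggregation that xtabs(depth ~ tx) performs:
# group the per-base rows by transcript and sum the depth column.
rows = [
    ("tx500090", 227, 1), ("tx500090", 228, 1), ("tx500090", 229, 1),
    ("tx500123", 187, 2), ("tx500123", 188, 2), ("tx500123", 189, 2),
]

totals = {}
for tx, pos, depth in rows:
    totals[tx] = totals.get(tx, 0) + depth

print(totals)  # → {'tx500090': 3, 'tx500123': 6}
```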

This is approximately 10000x faster than using a formula like: