
An Introduction to Bioinformatics
Special Topics BSC4933/5936
Florida State University
The Department of Biological Science
http://www.bio.fsu.edu
Sept. 23, 2003
Database Similarity Searching
Steven M. Thompson
Florida State University School of
Computational Science and
Information Technology (CSIT)
How can you search the databases for similar
sequences, if pair-wise alignments take N²
time?! Significance and heuristics . . .
But, why even do database searches?
We can imagine screening databases for sequences similar to ours
using the concepts of dynamic programming and log-odds scoring
matrices and some yet to be described tricks. But what do database
searches tell us; what can we gain from them? Why even bother?
Inference through homology is a fundamental principle of biology.
When a sequence is found to fall into a preexisting family we may be
able to infer function, mechanism, evolution, perhaps even structure,
based on homology with its neighbors. If no significant similarity can
be found, the very fact that your sequence is new and different could
be very important. Granted, its characterization may prove difficult,
but it could be well worth it.
Homology and similarity —
Don’t confuse homology with similarity: there is a huge difference!
Similarity is a statistic that describes how much two (sub)sequences
are alike according to some set scoring criteria. It can be normalized
to ascertain statistical significance, but it’s still just a number.
Homology, in contrast and by definition, implies an evolutionary
relationship — more than just the fact that we’ve all evolved from the
same old primordial ‘ooze.’ To demonstrate homology, reconstruct the
phylogeny of the organisms or genes of interest. Better yet, show
experimental evidence — structural, morphological, genetic, or fossil
— that corroborates your assertion.
There is no such thing as percent homology; something is either
homologous or it is not. Walter Fitch is credited with the observation that
“homology is like pregnancy — you can’t be 45% pregnant, just like something
can’t be 45% homologous. You either are or you are not.” Highly significant
similarity can argue for homology, but never the other way around.
Independent of all that, what is a
‘good’ alignment?
So, first — Significance:
when is any alignment worth
anything biologically?
An old statistics trick — Monte Carlo simulations:
Z score = [ ( actual score ) − ( mean of randomized scores ) ] / ( standard deviation of randomized score distribution )
The Normal (Abby Normal?) distribution —
Many Z scores measure the distance from the mean
using this simplistic Monte Carlo model assuming a
Gaussian distribution, aka the Normal distribution
(http://mathworld.wolfram.com/NormalDistribution.html),
in spite of the fact that ‘sequence-space’ actually
follows what is known as the ‘Extreme Value distribution.’
However, the Monte Carlo method does approximate
significance estimates fairly well.
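A minimal sketch of that Monte Carlo trick, assuming some pair-wise scoring routine align_score() is available (the name is a placeholder, not any particular package's function): shuffle one of the two sequences many times, re-score each shuffle, and express the real score in standard deviations of the randomized distribution.

```python
import random
import statistics

def monte_carlo_z(query, subject, align_score, n_shuffles=100):
    """Z = (actual score - mean of randomized scores) / st. dev. of randomized scores."""
    actual = align_score(query, subject)
    randomized = []
    for _ in range(n_shuffles):
        shuffled = list(subject)
        random.shuffle(shuffled)                      # keep composition, destroy order
        randomized.append(align_score(query, "".join(shuffled)))
    mean = statistics.mean(randomized)
    st_dev = statistics.stdev(randomized)
    return (actual - mean) / st_dev
```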
[Histogram omitted: FastA-style search output plotting, for each score bin from below 20 up to above 120, the observed number of database sequences against the number expected under the Extreme Value distribution (marked with asterisks); the observed counts track the expected curve closely.]
‘Sequence-space’ (Huh, what’s that?)
actually follows the ‘Extreme Value distribution’
(http://mathworld.wolfram.com/ExtremeValueDistribution.html).
Based on this known statistical
distribution, and robust statistical
methodology, a realistic Expectation
function, the E Value, can be
calculated from database searches.
The particulars of how this is done
will wait, but the ‘take-home’
message is the same . . .
The Expectation Value?
The higher the E value, the more probable it is
that the observed match is due to chance in a
search of a database of the same size, and the
lower its Z score will be; i.e. the match is NOT significant.
Conversely, the smaller the E value, i.e. the
closer it is to zero, the more significant the match is and
the higher its Z score will be! The E value is
the number that really matters.
Rules of thumb for a protein search —
The Z score represents the number of standard deviations some
particular alignment is from a distribution of random alignments
(the normal distribution).
These Z scores very roughly correspond to E values (based on
the Extreme Value distribution) for a typical protein sequence
similarity search through a database with ~125,000 protein entries.
On to the searches —
But N² is way too slow!
How can it be done?
Database searching programs use the two concepts of
dynamic programming and log-odds scoring matrices;
however, dynamic programming takes far too long when
used against most sequence databases with a ‘normal’
computer. Remember how big the databases are!
Therefore, the programs use tricks to make things
happen faster. These tricks fall into two main categories,
that of hashing, and that of approximation.
Corned beef hash? Huh . . .
Hashing is the process of breaking your sequence into
small ‘words’ or ‘k-tuples’ (think all chopped up, just like
corned beef hash) of a set size and creating a ‘look-up’
table with those words keyed to position numbers.
Computers can deal with numbers way faster than they
can deal with strings of letters, and this preprocessing
step happens very quickly.
Then when any of the word positions match part of an
entry in the database, that match, the ‘offset,’ is saved.
In general, hashing reduces the complexity of the search
problem from N² for dynamic programming to N, the
length of all the sequences in the database.
A simple hash table —
(this example is from the Krane and Raymer text p.50)
The sequence FAMLGFIKYLPGCM and a word size of one,
would produce this query lookup hash table:
word   Pos.
A      2
C      13
F      1, 6
G      5, 12
I      7
K      8
L      4, 10
M      3, 14
P      11
Y      9
Comparing it to the database sequence TGFIKYLPGACT
would yield the following offset table:

Pos.   word   offset(s)
1      T      (none)
2      G      3, 10
3      F      -2, 3
4      I      3
5      K      3
6      Y      3
7      L      -3, 3
8      P      3
9      G      -4, 3
10     A      -8
11     C      2
12     T      (none)
Hmmm & some interpretation —
The offset numbers come from the difference
between the positions of the words in the query
sequence and the position of the occurrence of that
word in the target sequence. Then . . . .
Look at all of the offsets equal to three in the previous
table. Therefore, offset the alignment by three:
FAMLGFIKYLPGCM
    ||||||||
   TGFIKYLPGACT
Quick and easy. Computers can compare these sorts
of tables very fast. The trick is to ‘know’ how far to
attempt to extend the alignment out.
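A minimal sketch of that word-lookup and offset bookkeeping, using a word size of one as in the example above (the function names are illustrative only, not drawn from any real search package):

```python
from collections import defaultdict

def build_lookup(query, word_size=1):
    """Hash the query into words keyed to their 1-based positions."""
    table = defaultdict(list)
    for i in range(len(query) - word_size + 1):
        table[query[i:i + word_size]].append(i + 1)
    return table

def offset_counts(query, target, word_size=1):
    """Tally offsets (query position - target position) for every word match."""
    table = build_lookup(query, word_size)
    counts = defaultdict(int)
    for j in range(len(target) - word_size + 1):
        for i in table.get(target[j:j + word_size], []):
            counts[i - (j + 1)] += 1
    return counts

# The dominant offset says where to try extending an alignment.
print(offset_counts("FAMLGFIKYLPGCM", "TGFIKYLPGACT"))
```

Running it on the example pair reports offset 3 eight times, matching the eight vertical bars in the alignment above.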
OK. Heuristics . . . What’s that?
Approximation techniques are collectively known as ‘heuristics.’
Webster’s defines heuristic as “serving to guide, discover, or reveal;
. . . but unproved or incapable of proof.”
In database similarity searching techniques the heuristic usually
restricts the necessary search space by calculating some sort of a
statistic that allows the program to decide whether further scrutiny
of a particular match should be pursued. This statistic may miss
things depending on the parameters set — that’s what makes it
heuristic. ‘Worthwhile’ results at the end are compiled and the
longest alignment within the program’s restrictions is created.
The exact implementation varies between the different programs,
but the basic idea holds in almost all of them.
Two predominant versions exist: BLAST and Fast.
Both return local alignments, and neither is a single program; rather, each is
a family of programs with implementations designed to compare a
sequence to a database in about every which way imaginable.
These include:
1) a DNA sequence against a DNA database (not recommended unless
forced to do so because you are dealing with a non-translated region of
the genome — DNA is just too darn noisy, only identity & four bases!),
2) a translated (where the translation is done ‘on-the-fly’ in all six frames)
version of a DNA sequence against a translated (‘on-the-fly’ six-frame)
version of the DNA database (not available in the Fast package),
3) a translated (‘on-the-fly’ six-frame) version of a DNA sequence against a
protein database,
4) a protein sequence against a translated (‘on-the-fly’ six-frame) version of
a DNA database,
5) or a protein sequence against a protein database.
Many implementations allow for the possibility of frame shifts in
translated comparisons and don’t penalize the score for doing so.
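Several of the modes above rely on ‘on-the-fly’ six-frame conceptual translation. Here is a minimal, self-contained sketch of that step; the packed genetic-code string and helper names are just one convenient way to write it, not how BLAST or Fast do it internally.

```python
# Standard genetic code packed in TCAG order: TTT, TTC, TTA, TTG, TCT, ...
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def reverse_complement(dna):
    return dna.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(dna):
    """Translate one reading frame; '*' marks stops, 'X' anything ambiguous."""
    return "".join(CODON_TABLE.get(dna[i:i + 3], "X")
                   for i in range(0, len(dna) - 2, 3))

def six_frames(dna):
    """Three forward and three reverse-strand conceptual translations."""
    dna = dna.upper()
    rc = reverse_complement(dna)
    return [translate(dna[f:]) for f in range(3)] + [translate(rc[f:]) for f in range(3)]
```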
The BLAST and Fast programs — some generalities

BLAST — Basic Local Alignment Search Tool, developed at NCBI.
1) Normally NOT a good idea to use for DNA against DNA searches w/o translation (not optimized);
2) Pre-filters repeat and “low complexity” sequence regions;
3) Can find more than one region of gapped similarity;
4) Very fast heuristic and parallel implementation;
5) Restricted to precompiled, specially formatted databases.

FastA — and its family of relatives, developed by Bill Pearson at the University of Virginia.
1) Works well for DNA against DNA searches (within limits of possible sensitivity);
2) Can find only one gapped region of similarity;
3) Relatively slow, should often be run in the background;
4) Does not require specially prepared, preformatted databases.
The algorithms, in brief —
BLAST:
Two word hits on the same diagonal above some similarity threshold trigger an ungapped extension until the score is no longer improved enough above another threshold: the HSP. Gapped extensions are then initiated, using dynamic programming, for those HSPs above a third threshold, up to the point where the score starts to drop below a fourth threshold: this yields the alignment.

Fast:
Find all ungapped exact word hits; maximize the ten best continuous regions’ scores: init1. Combine nonoverlapping init regions on different diagonals: initn. Use dynamic programming ‘in a band’ for all regions with initn scores better than some threshold: the opt score.
BLAST — the algorithm in more detail
1) After BLAST has sorted its lookup table, it tries to find all double word
hits along the same diagonal within some specified distance using what
NCBI calls a Deterministic Finite Automaton (DFA). These word hits of size
W do not have to be identical; rather, they have to score better than some
threshold value T. To identify these double word hits, the DFA scans
through all strings of words (typically W=3 for peptides) that score at
least T (usually 11 for peptides).

2) Each double word hit that passes this step then triggers a process called
un-gapped extension in both directions, such that each diagonal is
extended as far as it can be, until the running score starts to drop below a
pre-defined value X within a certain range A. The result of this pass is
called a High-scoring Segment Pair, or HSP.

3) Those HSPs that pass this step with a score better than S then begin a
gapped extension step utilizing dynamic programming. Those gapped
alignments with Expectation values better than the user-specified cutoff
are reported. The extreme value distribution of BLAST Expectation
values is pre-computed against each precompiled database — this is
one area that speeds up the algorithm considerably.
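A minimal sketch of the un-gapped, ‘X-drop’ style extension in step 2, extending to the right only and taking the scoring matrix as a plain dict keyed by residue pairs (the real NCBI implementation extends in both directions and is far more engineered than this):

```python
def ungapped_extend_right(query, subject, q_seed, s_seed, matrix, x_drop=20):
    """Walk along one diagonal from a seed, stopping when the running score
    falls more than x_drop below the best score seen so far (the HSP end)."""
    best = running = 0
    best_length = length = 0
    i, j = q_seed, s_seed
    while i < len(query) and j < len(subject):
        running += matrix[(query[i], subject[j])]
        length += 1
        if running > best:
            best, best_length = running, length
        elif best - running > x_drop:
            break                          # X-drop termination
        i += 1
        j += 1
    return best, best_length               # HSP score and its rightward extent
```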
The BLAST algorithm, continued
The math can be generalized thus: for any two sequences of length m and
n, local, best alignments are identified as HSPs. HSPs are stretches of
sequence pairs that cannot be further improved by extension or trimming,
as described above. For un-gapped alignments, the number of expected
HSPs with a score of at least S is given by the formula:

E = Kmn e^(-λS)

This is called the E-value for the score S. In a database search, n is the
size of the database in residues, so N = mn is the search space size. K and
λ are supplied by statistical theory and, as mentioned above, can be
calculated by comparison to pre-computed, simulated distributions. These
two parameters define the statistical significance of an E-value.
The E-value defines the significance of the search. As mentioned above,
the smaller an E-value is, the more likely it is significant. A value of 0.01 to
0.001 is a good starting point for significance in most typical searches. In
other words, in order to assess whether a given alignment constitutes
evidence for homology, it helps to know how strong an alignment can be
expected from chance alone.
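A minimal worked sketch of that relationship; the K and lambda values below are placeholders chosen only to make the arithmetic concrete, not the pre-computed constants of any particular BLAST release or scoring matrix.

```python
import math

def expected_hsps(score, query_len, db_len, K=0.13, lam=0.32):
    """Karlin-Altschul expectation: E = K * m * n * exp(-lambda * S)."""
    return K * query_len * db_len * math.exp(-lam * score)

# e.g. a raw score of 60 for a 250-residue query against a 50-million-residue database
print(expected_hsps(60, 250, 50_000_000))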
The Fast algorithm — in more detail
Fast is an older algorithm than BLAST. The original Fast paper
came out in 1988, based on David Lipman’s work in a 1983 paper;
the original BLAST paper was published in 1990. Both algorithms
have been upgraded substantially since originally released.
Fast was the first widely used, powerful sequence database
searching algorithm. Bill Pearson continually refines the programs
such that they remain a viable alternative to BLAST, especially if
one is restricted to searching DNA against DNA without translation.
They are also very helpful in situations where BLAST finds no
significant alignments — arguably, Fast may be more sensitive than
BLAST in these situations.
Fast is also a hashing-style algorithm and builds words of a set k-tuple
size, by default two for peptides. It then identifies all exact
word matches between the sequence and the database members.
Note that the word matches must be exact for Fast, but only similar,
above some threshold, for BLAST.
The Fast algorithm, continued
From these exact word matches:
1) Scores are assigned to each continuous, ungapped diagonal by
adding all of the exact-match BLOSUM values.
2) The ten highest-scoring diagonals for each query-database pair
are then rescored using BLOSUM similarities as well as identities,
and ends are trimmed to maximize the score. The best of
these is called the Init1 score.
3) Next the program ‘looks’ around to see if nearby off-diagonal Init1
alignments can be combined by incorporating gaps. If so, a new
score, Initn, is calculated by summing up all the contributing Init1
scores, penalizing gaps with a penalty for each.
4) The program then constructs an optimal local alignment for all
Initn pairs with scores better than some set threshold using a
variation of dynamic programming “in a band.” A sixteen-residue
band centered at the highest Init1 region is used by default with
peptides. The score generated from this step is called opt.
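A minimal sketch of the diagonal bookkeeping behind steps 1 and 2: group exact k-tuple hits by diagonal, sum their substitution-matrix values, and keep the ten best diagonals as init1 candidates. The trimming and rescoring of each region is omitted, and the dict-based matrix is just an assumption for the sketch.

```python
from collections import defaultdict

def best_diagonals(query, subject, matrix, ktup=2, keep=10):
    """Score each diagonal by the summed matrix values of its exact word hits."""
    words = defaultdict(list)
    for i in range(len(query) - ktup + 1):
        words[query[i:i + ktup]].append(i)
    diagonal = defaultdict(int)
    for j in range(len(subject) - ktup + 1):
        for i in words.get(subject[j:j + ktup], []):
            hit = sum(matrix[(query[i + k], subject[j + k])] for k in range(ktup))
            diagonal[i - j] += hit            # diagonals indexed by i - j
    return sorted(diagonal.items(), key=lambda kv: kv[1], reverse=True)[:keep]
```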
The Fast algorithm, still continued
5) Next, Fast uses a simple linear regression against the natural
log of the search-set sequence length to calculate a normalized
z-score for the sequence pair. Note that this is not the same
Monte Carlo style Z score described earlier, and cannot be
directly compared to one.
6) Finally, it compares the distribution of these z-scores to the
actual extreme-value distribution of the search. Using this
distribution, the program estimates the number of sequences
that would be expected to have, purely by chance, a z-score
greater than or equal to the z-score obtained in the search. This
is reported as the Expectation value.
7) If the user requests pair-wise alignments in the output, then the
program uses full Smith-Waterman local dynamic programming,
not ‘restricted to a band,’ to produce its final alignments.
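A rough sketch of the length-regression idea in step 5: fit score against ln(length) over the whole search, then express one hit as deviations from that trend. The real program uses a much more robust, iterative fit and different scaling, so treat this purely as an illustration.

```python
import math
import statistics

def length_normalized_z(scores, lengths, hit_score, hit_length):
    """Regress score on ln(length), then report the hit's residual in st. dev.
    units (a FastA-style z-score, not the Monte Carlo Z described earlier)."""
    x = [math.log(n) for n in lengths]
    slope = statistics.covariance(x, scores) / statistics.variance(x)
    intercept = statistics.mean(scores) - slope * statistics.mean(x)
    residuals = [s - (intercept + slope * xi) for s, xi in zip(scores, x)]
    spread = statistics.stdev(residuals)
    return (hit_score - (intercept + slope * math.log(hit_length))) / spread
```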
Let’s see ‘em in action
To begin we’ll go to the most widely used (and abused!)
biocomputing program on earth: NCBI’s BLAST —
Connect to NCBI’s BLAST page with any Web browser:
http://www.ncbi.nlm.nih.gov/BLAST/.
There is a wealth of information there, including a
wonderful tutorial and several very good essays for
teaching yourself way more about BLAST than this lecture
can ever hope for.
For now I’ll demonstrate with a simple example, one of my
favorites, the elongation factor 1 protein from Giardia
lamblia, named EF1A_Giala in the Swiss-Prot database,
but we have to use the accession code, Q08046, for
NCBI’s BLAST server to find the sequence. Let’s see how
it works and how quickly we get results back.
Let’s contrast that with GCG’s BLAST version.
I’ll illustrate with the same molecule and I’ll use GCG’s
SeqLab GUI to show the difference between the two
implementations of the program.
And finally, let’s see how GCG’s FastA version
compares to either BLAST implementation.
Again, I’ll launch the program from SeqLab with the
same example, but this time I’ll take advantage of
Fast’s flexible database search syntax, being able to
use any valid GCG sequence specification. Here I’ll
search against a precompiled LookUp list file of all of
the so-called ‘primitive’ eukaryotes in Swiss-Prot.
Finally — Why do I keep ‘dissing’ DNA
for searches and alignment?
All database similarity searching and sequence
alignment, regardless of the algorithm used, is far more
sensitive at the amino acid level than at the DNA level.
This is because proteins have twenty match criteria
versus DNA’s four, and those four DNA bases can
generally only be identical, not similar, to each other; and
many DNA base changes (especially third position
changes) do not change the encoded protein.
All of these factors drastically increase the ‘noise’ level of
a DNA against DNA search, and give protein searches a
much greater ‘look-back’ time, typically doubling it.
Therefore, whenever dealing with coding sequence, it is
always prudent to search at the protein level!
References and a Comment:
The better you understand the chemical, physical, and biological systems
involved, the better your chance of success in analyzing them. Certain
strategies are inherently more appropriate than others in certain circumstances.
Making these types of subjective, discriminatory decisions and utilizing all of
the available options so that you can generate the most practical data for
evaluation are two of the most important ‘take-home’ messages that I can offer!
Altschul, S.F., Gish, W., Miller, W., Myers, E.W., and Lipman, D.J. (1990) Basic Local Alignment Search Tool.
Journal of Molecular Biology 215, 403-410.
Altschul, S.F., Madden, T.L., Schaffer, A.A., Zhang, J., Zhang, Z., Miller, W., and Lipman, D.J. (1997) Gapped
BLAST and PSI-BLAST: a New Generation of Protein Database Search Programs. Nucleic Acids
Research 25, 3389-3402.
Genetics Computer Group (GCG) (Copyright 1982-2002) Program Manual for the Wisconsin Package,
Version 10.3, Accelrys, Inc., a Pharmacopeia Company, San Diego, California, U.S.A.
Gribskov, M. and Devereux, J., editors (1992) Sequence Analysis Primer. W.H. Freeman and Company, New
York, New York, U.S.A.
Henikoff, S. and Henikoff, J.G. (1992) Amino Acid Substitution Matrices from Protein Blocks. Proceedings
of the National Academy of Sciences U.S.A. 89, 10915-10919.
Needleman, S.B. and Wunsch, C.D. (1970) A General Method Applicable to the Search for Similarities in the
Amino Acid Sequence of Two Proteins. Journal of Molecular Biology 48, 443-453.
Pearson, W.R. and Lipman, D.J. (1988) Improved Tools for Biological Sequence Analysis. Proceedings of
the National Academy of Sciences U.S.A. 85, 2444-2448.
Schwartz, R.M. and Dayhoff, M.O. (1979) Matrices for Detecting Distant Relationships. In Atlas of Protein
Sequence and Structure, (M.O. Dayhoff, editor) 5, Suppl. 3, 353-358, National Biomedical Research
Foundation, Washington D.C., U.S.A.
Smith, T.F. and Waterman, M.S. (1981) Comparison of Bio-Sequences. Advances in Applied Mathematics 2,
482-489.
Wilbur, W.J. and Lipman, D.J. (1983) Rapid Similarity Searches of Nucleic Acid and Protein Data Banks.
Proceedings of the National Academy of Sciences U.S.A. 80, 726-730.