Bioinformatics is the use of mathematical, statistical, and computer methods to analyze biological, biochemical, and biophysical data. Because bioinformatics is a young, rapidly evolving field, however, it has acquired a number of other credible definitions. It can also be defined as the science and technology of learning, managing, and processing biological information. Bioinformatics is often focused on obtaining biologically oriented data, organizing this information into databases, developing methods to extract useful information from such databases, and devising methods to integrate related data from disparate sources. The resulting computer databases and algorithms are developed to speed up and enhance biological research.
Bioinformatics can help answer such questions as whether a newly analyzed gene is similar to any previously known gene, whether a protein's sequence can suggest how the protein functions, and whether the genes turned on in a cancer cell are different from those turned on in a healthy cell.
Databases and Analysis Programs
A good deal of the early work in bioinformatics focused on processing and analyzing gene and protein sequences catalogued in databases such as GenBank, EMBL, and SWISS-PROT. Such databases were developed in academia or by government-sponsored groups and served as repositories where scientists could store and share their sequence data with other researchers. With the start of the Human Genome Project in 1990, efforts in bioinformatics intensified, rising to the challenge of handling the large amounts of DNA sequence data being generated at an unprecedented rate. By the mid- to late 1990s, much of the effort in bioinformatics centered on genomic data, generated by the Human Genome Project and by private companies, and on proteomic data.
Early analysis of sequence information focused on looking for similarities between genes and between proteins. Algorithms were developed to help researchers rapidly identify similar gene or protein sequences. Such tools were extremely useful for determining whether a newly sequenced piece of DNA was at all similar to sequences already entered in a database. To determine how multiple sequences align and to view their similarities, multiple-alignment programs were developed. Such programs helped scientists compare the sequences of closely related genes or compare the sequence of a particular gene or protein as it appears in several species.
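The core of such similarity tools is dynamic programming. Below is a minimal sketch of Needleman-Wunsch global-alignment scoring in Python; the function name and the match, mismatch, and gap values are illustrative choices, not the parameters of any particular production tool:

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Best global-alignment score of sequences a and b, via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    # score[i][j] holds the best score aligning a[:i] with b[:j]
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):   # aligning a prefix of a against nothing costs gaps
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                   # match or mismatch
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[-1][-1]
```

Database-search tools such as BLAST layer fast heuristics on top of this idea, trading guaranteed optimality for the speed needed to scan millions of sequences.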
To better understand the functional roles of new nucleotide and amino acid sequences, researchers developed algorithms to look for particular sequence "domains." Domains are regions in which a particular sequence of nucleotides or amino acids is indicative of the protein's function. For example, a protein may have a domain that binds to ATP or GTP, two important regulators of protein activity.
In addition, these algorithms can detect sequences that denote a region involved in particular types of post-translational modifications, such as tyrosine phosphorylation. Tools such as PROSITE, BLOCKS, PRINTS, and Pfam can be used to detect and predict such protein domains in sequence data.
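The idea behind such pattern scanners can be sketched by translating a PROSITE-style pattern into a regular expression. This handles only a simplified subset of the real syntax, and the example pattern and sequence are purely illustrative rather than an actual PROSITE entry:

```python
import re

def prosite_to_regex(pattern):
    """Convert a simplified PROSITE-style pattern (e.g. '[ST]-x-[RK]') to a regex.
    'x' means any residue; {...} means excluded residues; (n) or (n,m) repeats."""
    out = []
    for token in pattern.split("-"):
        token = token.replace("x", ".").replace("{", "[^").replace("}", "]")
        # turn PROSITE repeat counts like .(2,3) into regex counts .{2,3}
        token = re.sub(r"\((\d+(?:,\d+)?)\)", r"{\1}", token)
        out.append(token)
    return "".join(out)

def find_motifs(sequence, pattern):
    """Return (start, matched_text) for each non-overlapping motif occurrence."""
    return [(m.start(), m.group())
            for m in re.finditer(prosite_to_regex(pattern), sequence)]
```

For example, scanning the toy sequence "MSAKTLRG" for the toy pattern "[ST]-x-[RK]" (serine or threonine, any residue, then arginine or lysine) finds hits at positions 1 and 4.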
Structure is central to protein function, and another set of tools, including SWISS-MODEL, allows researchers to use gene and protein sequence data to predict a protein's three-dimensional structure. Such tools can help predict how mutations in a gene sequence could alter the three-dimensional structure of the corresponding protein. They accomplish such molecular modeling by comparing a novel sequence to the sequences of genes whose protein structures are known.
The majority of tools were developed as academic freeware distributed on the Internet. In the early to mid-1990s, commercial companies began to develop their own proprietary algorithms and tools, as well as their own proprietary databases. Those databases were then marketed to pharmaceutical and biotech companies as well as to academic research groups. In the mid- to late 1990s, the most commercially viable and profitable businesses focused on the production and sale of proprietary DNA- and gene-sequence databases. These databases primarily contained genetic information that was not in public domain databases such as GenBank, and they thus offered potential competitive advantages to the drug discovery groups of large pharmaceutical and biotech companies.
Applications of Bioinformatics to Drug Discovery
The application of bioinformatics to genomics data holds great potential for the discovery of new drugs. During the 1990s, many pharmaceutical and biotech companies became convinced that they could speed up their drug-discovery pipelines by taking advantage of the data from the Human Genome Project, as well as by funding their own internal genomics programs and by collaborating with third-party genomics companies.
The goal in such practical applications is to use such data as DNA sequence information and gene expression levels to help discover new drug targets. The vast majority of drugs target proteins, but there are a handful of drugs, such as some chemotherapeutic agents, that bind to DNA. In cases where the target is a protein, the drugs themselves are primarily small chemical molecules or, in some cases, small proteins, such as hormones, that bind to a larger protein in the body. Some drugs are therapeutic proteins delivered to the site of the disease.
The extent to which genomics will actually be able to help identify validated drug targets is uncertain. Genomics and bioinformatics are still young areas, and the drug development cycle can take up to ten years. As of 2001 relatively few of the drugs on the market or in the late stages of clinical trials were discovered via genomics or bioinformatics programs.
Bioinformatics is applied to at least five major types of activities: data acquisition, database development, data analysis, data integration, and analysis of integrated data.
Data acquisition is primarily concerned with accessing and storing data generated directly from laboratory instruments. Many of these instruments are either automated or semi-automated high-throughput instruments that generate large volumes of data. The Human Genome Project utilized hundreds of DNA sequencers, producing enormous amounts of data. The data had to be captured in the appropriate format, and it had to be capable of being linked to all the information related to the DNA samples, such as the species, tissue type, and quality parameters used in the experiments. This area of bioinformatics primarily relates to the use of "laboratory information management systems," the computer systems used to manage the information needs of a particular laboratory.
Many laboratories generate large volumes of such data as DNA sequences, gene expression information, three-dimensional molecular structure, and high-throughput screening. Consequently, they must develop effective databases for storing and quickly accessing data. For each type of data, it is likely that a different database organization must be used. A database must be designed to allow efficient storage, search, and analysis of the data it contains. Designing a high-quality database is complicated by the fact that there are several formats for many types of data and a wide variety of ways in which scientists may want to use the data. Many of these databases are best built using a relational database architecture, often based on Oracle or Sybase.
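A minimal sketch of what such a relational design might look like, here using Python's built-in SQLite rather than Oracle or Sybase; all table and column names are invented for illustration:

```python
import sqlite3

# In-memory sketch of a tiny sequence repository: one table per entity,
# a foreign key linking sequences to their source organism, and an index
# to keep the common "all sequences for species X" query fast.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organism (
    organism_id INTEGER PRIMARY KEY,
    species     TEXT NOT NULL
);
CREATE TABLE sequence (
    seq_id      INTEGER PRIMARY KEY,
    organism_id INTEGER REFERENCES organism(organism_id),
    seq_type    TEXT CHECK (seq_type IN ('DNA', 'protein')),
    residues    TEXT NOT NULL
);
CREATE INDEX idx_seq_org ON sequence(organism_id);
""")
conn.execute("INSERT INTO organism VALUES (1, 'Homo sapiens')")
conn.execute("INSERT INTO sequence VALUES (1, 1, 'DNA', 'GATTACA')")
rows = conn.execute(
    "SELECT s.residues FROM sequence s JOIN organism o "
    "ON s.organism_id = o.organism_id WHERE o.species = 'Homo sapiens'"
).fetchall()
```

A production schema would of course carry many more attributes (tissue type, quality scores, experiment metadata), but the design questions are the same: which entities get tables, and which queries must be fast.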
A strong background in relational databases is a fundamental requirement for working in database development. Having some background in the molecular biology techniques used to generate the data is also important. Most critical for the bioinformatics specialist is a strong working relationship with the researchers who will use the database, together with the ability to understand their needs and translate them into functional database capabilities.
Being able to analyze data efficiently requires having a good database design, allowing researchers to query the database effectively and letting them quickly obtain the types of information they need to begin their data analysis. If queries cannot be performed, or if performance is tediously slow, the whole system breaks down, since scientists will not be inclined to use the database. Once data is obtained from the database, the user must be able to easily transform it into the format appropriate for the desired analysis tools.
This can be challenging, since researchers often use a combination of publicly available tools, tools developed in-house, and third-party commercial tools. Each tool may have different input and output formats. Starting in the late 1990s, there have been both commercial and in-house efforts at pharmaceutical and biotech companies to reduce the formatting complexities. Such simplification efforts focus on building analysis systems with a number of tools integrated within them such that the transfer of data between tools appears seamless to the end user.
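One concrete piece of this formatting work is reading and writing common interchange formats. The sketch below is a minimal parser for FASTA, one of the most widely used sequence formats: a header line beginning with ">" followed by one or more lines of sequence.

```python
def parse_fasta(text):
    """Parse FASTA-formatted text into a {header: sequence} dictionary."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            if header is not None:          # close out the previous record
                records[header] = "".join(chunks)
            header, chunks = line[1:], []
        elif line:                          # sequence lines may be wrapped
            chunks.append(line)
    if header is not None:                  # final record
        records[header] = "".join(chunks)
    return records
```

An integrated analysis system wraps many such small converters so that, to the end user, data appears to flow seamlessly from one tool to the next.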
Bioinformatics analysts have a broad range of opportunities. They may write specific algorithms to analyze data, or they may be expert users of analysis tools, helping scientists understand how the tools analyze the data and how to interpret results. A knowledge of various programming languages, such as Java, Perl, C, C++, and Visual Basic, is very useful, if not required, for those working in this area.
Once information has been analyzed, a researcher often needs to associate or integrate it with related data from other databases. For example, a scientist may run a series of gene expression analysis experiments and observe that a particular set of 100 genes is more highly expressed in cancerous lung tissue than in normal lung tissue. The scientist might wonder which of the genes is most likely to be truly related to the disease. To answer the question, the researcher might try to find out more information about those 100 genes, including any associated gene sequence, protein, enzyme, disease, metabolic pathways, or signal transduction pathway data.
Such information will help the researcher narrow the list down to a smaller set of genes. Finding this information, however, requires connections or links between the different databases and a good way to present and store the information. An understanding of database architectures and the relationship between the various biological concepts in the databases is key to doing effective data integration.
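In miniature, this integration step amounts to joining a hit list against annotation tables drawn from other databases. All gene symbols and annotations below are invented for illustration:

```python
# Hypothetical annotation "databases" keyed by gene symbol (illustrative data).
disease_links = {"EGFR": "lung carcinoma", "TP53": "many cancers"}
pathways = {"EGFR": "MAPK signaling", "KRAS": "MAPK signaling"}

# Hit list from a hypothetical expression experiment.
overexpressed = ["EGFR", "KRAS", "GENE42"]

# Integrate: annotate each hit with whatever the other sources know about it.
integrated = [
    {"gene": g,
     "disease": disease_links.get(g),   # None if the database has no entry
     "pathway": pathways.get(g)}
    for g in overexpressed
]

# Genes carrying any disease or pathway annotation are stronger candidates.
candidates = [r["gene"] for r in integrated if r["disease"] or r["pathway"]]
```

Real integration systems do the same join across dozens of databases with inconsistent identifiers, which is why mapping between naming schemes is such a large part of the work.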
Analysis of Integrated Data
Once various types of data are integrated, users need a good way to present these various pieces of data so they can be interpreted and analyzed. The information should be capable of being stored and retrieved so that, over time, various pieces of information can be combined to form a "knowledge base" that can be extended as more experiments are run and additional data are integrated from other sources. This type of work requires skills related to database design and architecture. It also requires specific programming skills in various computer languages, as well as expertise in developing interfaces between a computer and its user.
see also Combinatorial Chemistry; Computational Biologist; Evolution of Genes; Genomics; Genomics Industry; High-Throughput Screening; Human Genome Project; Pharmacogenetics and Pharmacogenomics; Proteins; Proteomics; Sequencing DNA.
Anthony J. Recupero
Howard, Ken. "The Bioinformatics Gold Rush." Scientific American 283, no. 1 (2000): 58-64.
EMBL Nucleotide Sequence Database. Release 69. December 2001. European Bioinformatics Institute. <http://www.ebi.ac.uk/>.
GenBank. National Center for Biotechnology Information. <http://www.ncbi.nlm.nih.gov/>.
SWISS-PROT. Swiss Institute of Bioinformatics. <http://www.expasy.org/sprot/>.
Bioinformatics is a new field that centers on the development and application of computational methods to organize, integrate, and analyze gene-related data. The Human Genome Project (HGP) was an international effort to determine the deoxyribonucleic acid (DNA) base sequence of the entire human genome, which includes about thirty thousand protein-encoding genes, their regulatory elements, and many highly repeated noncoding sections. In 1985, a group of visionary scientists led by Charles DeLisi, who was then the director of the Office of Health and Environmental Research at the U.S. Department of Energy (DOE), realized that having the entire human genome in hand would provide the foundation for a revolution in biology and medicine. As a result, the 1988 presidential budget submission to the U.S. Congress requested funds to start the HGP. Momentum built quickly, and by 1990, DOE and the U.S. National Institutes of Health had laid out plans for a fifteen-year project. An international public consortium and a private company announced completion of a rough draft of the human genome sequence on June 26, 2000, with papers describing the data published eight months later. This is the first generation bestowed with the "parts list" of life, as well as the daunting task of making sense out of it.
The Human Genome Project and other genome projects have generated massive data on genome sequences, disease-causing gene variants, protein three-dimensional structures and functions, protein-protein interactions, and gene regulation. Bioinformatics is closely tied to two other new fields: genomics (identification and functional characterization of genes in a massively parallel and high-throughput fashion) and proteomics (analysis of the biological functions of proteins and their interactions), which have also resulted from the genome projects. The fruits of the HGP will have major impacts on understanding evolution and developmental biology, and on scientists' ability to diagnose and treat diseases. Areas outside of traditional biology, such as anthropology and forensic medicine, are also embracing genome information.
Knowing the sequence of the billions of bases in the human genome does not tell scientists where the genes are (about 1.5 percent of the human genome encodes protein). Nor does it tell scientists what the genes do, how genes are regulated, how gene products form a cell, how cells form organs, which mutations underlie genetic diseases, why humans age, and how to develop drugs. Bioinformatics, genomics, and proteomics try to answer these questions using technologies that take advantage of as much gene sequence information as possible. In particular, bioinformatics focuses on computational approaches.
Bioinformatics includes development of databases and computational algorithms to store, disseminate, and rapidly retrieve genomic data. Biological data are complex and abundant. For example, the U.S. National Center for Biotechnology Information (NCBI), a division of the National Institutes of Health, houses central databases for gene sequences (GenBank), disease associations (OMIM), and protein structures (MMDB), and indexes biomedical articles (PubMed). The best way to get a feeling for the magnitude and variety of the data is to access the homepage of NCBI via the World Wide Web (http://ncbi.nlm.nih.gov). A bioinformatics team at NCBI works on the design of the databases and the development of efficient algorithms for retrieving data and comparing DNA sequences.
Bioinformatics also covers the design of genomics and proteomics experiments and subsequent analysis of the results. For instance, disease tissues (such as those from cancer patients) express different sets of proteins than their normal counterparts. Therefore protein abundance can be used to diagnose diseases. Moreover, proteins that are highly (or uniquely) expressed in disease tissues may be potential drug targets.
Genomics and proteomics generate protein abundance data using different approaches. Genomics determines gene abundance (which is a good indicator of protein abundance) using DNA microarrays, also known as DNA chips, which are high-density arrays of short DNA sequences, each recognizing a particular gene. By hybridizing a tissue sample to a DNA chip, one can determine the activities of many genes in a single experiment. The design of DNA chips (that is, choosing which gene fragments to use in order to achieve maximum sensitivity and specificity) and the interpretation of DNA chip results are difficult problems in bioinformatics.
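A typical first step in interpreting chip results is computing expression ratios between conditions. The sketch below flags genes expressed at least two-fold higher in tumor tissue than in normal tissue; the gene names and intensity values are invented for illustration, and real analyses add normalization and statistical testing on replicate experiments:

```python
import math

# Hypothetical normalized hybridization intensities per gene (illustrative).
tumor  = {"geneA": 840.0, "geneB": 95.0, "geneC": 410.0}
normal = {"geneA": 105.0, "geneB": 90.0, "geneC": 100.0}

def log2_fold_changes(a, b):
    """log2(a/b) per gene; positive values mean higher expression in `a`."""
    return {g: math.log2(a[g] / b[g]) for g in a}

fc = log2_fold_changes(tumor, normal)

# Flag genes at least 2-fold up in tumor tissue (log2 ratio >= 1).
upregulated = sorted(g for g, v in fc.items() if v >= 1.0)
```

Working on a log2 scale makes a doubling and a halving symmetric (+1 versus -1), which is why expression ratios are almost always reported this way.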
Proteomics measures protein abundance directly using mass spectrometry, which is a way to measure the mass of a protein. Since mass alone is not unique enough to identify a protein, one usually cuts the protein with enzymes (which cut at specific places according to the protein sequence) and measures the masses of the resulting fragments by mass spectrometry. Such "mass distributions" for all proteins with known sequences can be generated using computers and stored. By comparing the mass distribution of an unknown protein sample to those of known proteins, one can identify the sample. Such comparisons require complex computational algorithms, especially when the sample is a mixture of proteins. Although not as efficient as DNA chips, mass spectrometry can directly measure protein abundance. In fact, spectrometric identification of proteins has been one of the most significant advances in proteomics.
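The computational side of this "peptide mass fingerprinting" can be sketched as follows. The residue masses are approximate average masses in daltons for a few amino acids only, and the cutting rule is a simplified version of trypsin's (cut after lysine or arginine, ignoring the proline exception):

```python
# Approximate average residue masses in daltons for a handful of amino acids.
RESIDUE_MASS = {"G": 57.05, "A": 71.08, "S": 87.08, "K": 128.17,
                "R": 156.19, "L": 113.16, "E": 129.12}
WATER = 18.02  # one water per peptide, gained on hydrolysis

def tryptic_fragments(protein):
    """Cut after K or R: a simplified trypsin rule (ignores the Pro exception)."""
    frags, cur = [], ""
    for aa in protein:
        cur += aa
        if aa in "KR":
            frags.append(cur)
            cur = ""
    if cur:                      # trailing fragment with no cut site
        frags.append(cur)
    return frags

def fragment_masses(protein):
    """Predicted peptide masses: the 'mass distribution' stored for comparison."""
    return [round(sum(RESIDUE_MASS[a] for a in f) + WATER, 2)
            for f in tryptic_fragments(protein)]
```

Identification then reduces to comparing a measured mass list against the precomputed lists for every sequence in the database and scoring the overlaps, which is where the algorithmic difficulty (measurement error, missed cuts, protein mixtures) lives.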
Bioinformatics can lead to discovery of new proteins. When the cystic fibrosis gene (CF) was first identified in 1989, for example, researchers compared its DNA sequence computationally to all sequences known at that time. The comparison revealed striking homology (sequence similarity) to a large family of proteins involved in active transport across cell membranes. Indeed, the CF gene encodes a membrane-spanning chloride ion channel, called the cystic fibrosis transmembrane regulator, or CFTR. The identification of gene function by searching for sequence homology is a widely used bioinformatics method. When no homology is found, one may still be able to tell if a gene codes for membrane-spanning channels using computational tools. Membranes are bilayers of lipid molecules, which are water insoluble. An ion channel typically has regions outside the membrane (water soluble) and regions inside the membrane (water insoluble) arranged in a certain pattern. Computer algorithms have been developed to capture such patterns in a gene sequence.
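A classic example of such a pattern-capturing algorithm is a sliding-window hydropathy scan in the spirit of Kyte and Doolittle: membrane-spanning stretches tend to show up as runs of hydrophobic residues. The table below covers only a few residues, and the window width and cutoff are illustrative choices:

```python
# Kyte-Doolittle-style hydropathy values (partial table; positive = hydrophobic).
HYDROPATHY = {"I": 4.5, "V": 4.2, "L": 3.8, "A": 1.8, "G": -0.4,
              "S": -0.8, "K": -3.9, "R": -4.5, "E": -3.5, "D": -3.5}

def hydrophobic_windows(seq, width=5, cutoff=1.5):
    """Start positions of windows whose mean hydropathy exceeds `cutoff`:
    a crude flag for candidate membrane-spanning stretches."""
    hits = []
    for i in range(len(seq) - width + 1):
        window = seq[i:i + width]
        if sum(HYDROPATHY[a] for a in window) / width > cutoff:
            hits.append(i)
    return hits
```

Real transmembrane predictors use longer windows (a membrane-spanning helix is roughly 20 residues) and more sophisticated statistics, but the underlying idea is this same scan.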
By thinking boldly and by setting ambitious goals, the Human Genome Project has brought about a new era in biological and biomedical research. Many revolutionary new technologies are being developed, most of which have significant computational components. The avalanche of genomic data also enables model-based reasoning. The bright future of bioinformatics calls for individuals who can think quantitatively and at the same time love biology, an unusual combination.
see also Biotechnology; Genome; Human Genome Project
Butler, Declan. "Are You Ready for the Revolution?" Nature 409 (15 February 2001): 758–760.
DeLisi, Charles. "The Human Genome Project." American Scientist 76 (1988): 488–493.
Marshall, Eliot. "Bioinformatics: Hot Property: Biologists Who Compute." Science 272 (1996): 1730–1732.
Roos, D. S. "Bioinformatics: Trying to Swim in a Sea of Data." Science 291 (16 February 2001): 1260–1261.