Ben Siranosian
January 10, 2023

Accelerated computing is the future of genomics

“We’re out of storage, and we’re out of compute.” I’ll never forget the 2016 Broad Institute Cancer Program meeting where Eric Banks, senior director of the Data Sciences Platform, showed the audience how much genomic data the Broad was generating. The exponential curve was plotted against the current capacity of the on-premises compute cluster – the time until the curves intersected could be measured in months. In fact, we were generating data even faster than we could add new storage drives. This meeting sparked the Broad’s move to the cloud for the majority of data storage and compute.

While moving to the cloud may have been the simple answer seven years ago, it’s not a catch-all solution today. Genome sequencing costs have dropped precipitously, and newer high-content assays like single-cell sequencing and spatial transcriptomics are being developed every day. Storing and processing all these data requires a massive amount of resources, and a cloud bill to match (even if you’re trying to do the analysis on the cheap). We need new, more efficient ways to process, store and transfer genomic data. Enter accelerated computing.

Accelerated computing, the use of special-purpose hardware for specific computational tasks, can help solve many of the problems facing genomics today. Using Graphics Processing Units (GPUs) and similar hardware, custom-developed algorithms can reduce the time needed to run an analysis by a factor of 10 to 100. This reduction in compute time has paved the way for more efficient data processing and real-time analysis in clinical genomics.

In this series, I’ll cover three topics related to accelerated computing in genomics:

  • An overview of the basics of accelerated computing, the popular tools, and the companies developing them (this post).
  • Practical considerations. How can you use these tools today? (coming soon)
  • Algorithmic details. How do accelerated tools work to drastically decrease the runtime of common tasks? What problems in genomics are amenable to acceleration? (a future post)

Accelerated computing is a general term describing the use of specialized hardware to speed up a certain computation. Commonly used hardware in the genomics space includes:

  • Graphics Processing Units (GPUs)
  • Field Programmable Gate Arrays (FPGAs)
  • Application-specific integrated circuits (ASICs)
  • Tensor Processing Units (TPUs)

While Central Processing Units (CPUs) excel at general-purpose tasks, they can't run nearly as many computations in parallel as GPUs can. GPUs were originally developed for video game rendering, but they excel at parallel computation: a single card can have thousands of processor cores, each running a calculation at the same time. Accelerated computing takes advantage of each hardware type for the tasks it is best at. Control functionality and single-threaded work are left to the CPU, and parallelizable computations are done on the GPU. NVIDIA Clara Parabricks leads the way in GPU-accelerated genomics.
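To make that division of labor concrete, here's a minimal sketch, assuming an NVIDIA GPU and the CuPy library, and using simulated data rather than any real genomics toolkit: the CPU handles setup and control flow, while the data-parallel arithmetic runs across the GPU's cores.

```python
# Minimal sketch of CPU/GPU division of labor using CuPy (assumes an NVIDIA
# GPU and `pip install cupy`). Illustrative only; not from a genomics toolkit.
import numpy as np
import cupy as cp

# CPU side: control logic and setup (here, simulated per-base quality scores).
n_reads, read_len = 100_000, 150
quals_cpu = np.random.randint(0, 42, size=(n_reads, read_len), dtype=np.int8)

# GPU side: copy the array to device memory and run the data-parallel math.
quals_gpu = cp.asarray(quals_cpu)
mean_qual_per_read = quals_gpu.mean(axis=1)          # spread across thousands of GPU cores
frac_low_quality = (mean_qual_per_read < 20).mean()  # still on the GPU

# Back on the CPU: a single scalar result for downstream control flow.
print(float(frac_low_quality))
```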

FPGAs allow for the hardware to be reconfigured on the fly to run a specific algorithm. They are used in the Illumina DRAGEN tools I’ll cover later. ASICs are less common. They have a fixed configuration and perform a limited set of functions, so they’re best used in very specific settings, like controlling the pores on an Oxford Nanopore MinION. TPUs are used in training ML models to interpret genomic data, but not in the processing of the data directly.

While GPU-based training of deep learning models is standard and supported by the key libraries in that ecosystem, genomics is, in typical fashion, 5-10 years behind other industries. We are starting to see GPU-based genomics tools being released, but they're closed source and still gaining traction. By contrast, PyTorch, a popular open-source machine learning framework, was released in 2016! My prediction is that accelerated tools will become standard in genomics as well, but there's a lot of work to do to get there.

How can I use hardware acceleration in genomics today?

Your best bet is to use one of the toolkits described below. If you don't have access to a machine with GPUs or FPGAs, you can rent one from AWS or GCP for a low hourly fee.

NVIDIA Clara Parabricks: GPU-accelerated alignment, variant calling, and RNA-seq

NVIDIA’s entry into GPU-accelerated genomics is the result of its 2019 acquisition of the software startup Parabricks. NVIDIA first released the software as closed access, but with the version 4.0 release in 2022, anyone can download the Docker container and run the software. Parabricks runs on most modern NVIDIA GPUs and accelerates alignment of DNA and RNA-seq data, variant calling, and other time-intensive processes by up to 80x (on a multi-GPU machine). Parabricks was designed as a drop-in replacement for common tools like GATK, and is guaranteed to produce output identical to that of specific GATK versions. Running the software is simple: pull the Docker container at nvcr.io/nvidia/clara/clara-parabricks:4.0.0-1 and run one of the command-line tools.
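To give a feel for what that looks like in practice, here's a rough sketch of calling the fq2bam pipeline (BWA-MEM alignment plus sorting) through the container from a Python script. The paths are placeholders and the flags follow the pattern in NVIDIA's pbrun documentation, but check them against your Parabricks version:

```python
# Hedged sketch of invoking Parabricks fq2bam through Docker. Assumes Docker
# with NVIDIA GPU support and data mounted under /data; flag names follow the
# pbrun documentation but should be verified for your Parabricks version.
import subprocess

cmd = [
    "docker", "run", "--rm", "--gpus", "all",
    "-v", "/data:/data",                        # mount reference and FASTQs
    "nvcr.io/nvidia/clara/clara-parabricks:4.0.0-1",
    "pbrun", "fq2bam",
    "--ref", "/data/GRCh38.fa",
    "--in-fq", "/data/sample_R1.fastq.gz", "/data/sample_R2.fastq.gz",
    "--out-bam", "/data/sample.bam",
]
subprocess.run(cmd, check=True)
```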

The strengths of Parabricks lie in its ease of use, wide applicability, and cost effectiveness. GPUs are everywhere: in the cloud, in gaming PCs, and in servers used for ML/AI training. The Docker container is available for anyone to try for free, without a license or multiple sales calls. Parabricks also automates some steps that may have been spread across multiple tools in the past: alignments are always coordinate-sorted, for example.

The weaknesses of Parabricks come down to its more limited functionality and integration compared to DRAGEN. Parabricks doesn't have DRAGEN's advanced functionality for tasks like single-cell sequencing and star-allele calling for pharmacogenomics. And you obviously can't buy an Illumina sequencer with a competitor's hardware and software in it!

Illumina DRAGEN: FPGA-accelerated Bio-IT platform

Illumina recognized the challenges in storing and processing genomic data early, acquiring Edico Genome and the DRAGEN Bio-IT platform in 2018 to architect a solution. DRAGEN uses FPGAs to speed up generation of FASTQ files, alignment, variant calling, and many other processes. In addition to a standard GATK implementation, the DRAGEN team designed its own variant-calling algorithms, which won two of the three Illumina categories in the precisionFDA Truth Challenge V2. DRAGEN also provides new algorithms for accelerated single-cell genomics, star-allele calling, and other processes.
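For comparison, a DRAGEN germline run from FASTQ to VCF looks roughly like the sketch below. This is illustrative only: it assumes a licensed DRAGEN server or cloud image and a pre-built reference hash table, and the flags follow Illumina's documented CLI but vary by version:

```python
# Rough sketch of a DRAGEN germline run from FASTQ to VCF. Assumes a licensed
# DRAGEN system with a pre-built reference hash-table directory; flag names
# follow Illumina's documentation but should be checked for your version.
import subprocess

cmd = [
    "dragen",
    "-r", "/references/GRCh38_hash_table",       # pre-built reference hash table
    "-1", "/data/sample_R1.fastq.gz",
    "-2", "/data/sample_R2.fastq.gz",
    "--RGID", "sample1", "--RGSM", "sample1",    # read group ID and sample name
    "--output-directory", "/data/dragen_out",
    "--output-file-prefix", "sample1",
    "--enable-variant-caller", "true",           # small-variant calling to VCF
]
subprocess.run(cmd, check=True)
```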

The strengths of DRAGEN lie in the tight integration with Illumina products. You can buy an Illumina sequencer with a DRAGEN server built in, so that everything up to variant calling can be completed on the sequencer. This means the large raw data files never have to be transferred to the cloud or elsewhere, saving on storage costs (as long as you don’t need the raw data for backup or compliance purposes). The accuracy, speed, and continued improvement of the algorithms are another key advantage.

The weaknesses of DRAGEN come down to the cost of using the software. Since Illumina doesn't profit from the FPGA hardware itself, it charges a LOT for the DRAGEN license. In fact, the license is 80% of the total cost when running DRAGEN on the cloud! This deters researchers in academia and lower-resourced companies from using DRAGEN, and may push them to a free alternative instead.

Nanopore and PacBio sequencers: accelerated computing right under the hood

Oxford Nanopore uses hardware acceleration in multiple places in their long-read sequencers. The pores on the flowcells are controlled by ASICs, and the more advanced multi-flowcell workstations use GPUs to accelerate data analysis. The Nanopore PromethION comes with four NVIDIA A100 GPUs, for example. PacBio's new Revio sequencer has a similar arrangement, with on-board GPUs to speed up the processing of raw data. While Nanopore and PacBio sequencers both take advantage of hardware acceleration, there's much less direct interaction with the algorithms than with the user-facing toolkits above.
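The closest most users get to those on-board GPUs is basecalling. As a rough sketch (not an official workflow), re-basecalling Nanopore signal data on a GPU machine with Guppy might look like the following, where the config file name is just an example tied to a particular flowcell chemistry:

```python
# Sketch of offline GPU basecalling of Nanopore reads with Guppy. Assumes a
# CUDA-capable GPU and a Guppy install; the config name depends on your
# flowcell and kit chemistry. Check the Guppy docs for your version.
import subprocess

cmd = [
    "guppy_basecaller",
    "-i", "/data/fast5",                     # raw signal files
    "-s", "/data/basecalled",                # output directory
    "-c", "dna_r9.4.1_450bps_hac.cfg",       # example high-accuracy config
    "--device", "cuda:0",                    # run the basecalling network on GPU 0
]
subprocess.run(cmd, check=True)
```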

Where’s the PyTorch of genomics?

All of the accelerated genomics toolkits I've talked about today are developed closed-source by publicly traded companies. That's great for efficient development of high-performance code, but it shuts out the community of developers in academia and lower-resourced companies who might use and contribute to the code. NVIDIA had GenomeWorks, but that hasn't seen a commit in a year and a half. Some other groups are repurposing GPU-accelerated Python libraries for single-cell analysis.
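As one example of that repurposing, here's a minimal sketch, assuming CuPy and an NVIDIA GPU and using simulated counts rather than a real dataset, that pushes the normalization and PCA math behind a typical single-cell workflow onto the GPU:

```python
# Sketch of GPU-accelerated single-cell-style linear algebra with CuPy.
# Assumes an NVIDIA GPU and `pip install cupy`; the counts are simulated.
import cupy as cp

n_cells, n_genes = 20_000, 2_000
counts = cp.random.poisson(lam=1.0, size=(n_cells, n_genes)).astype(cp.float32)

# Library-size normalize and log-transform, entirely on the GPU.
norm = counts / counts.sum(axis=1, keepdims=True) * 1e4
logn = cp.log1p(norm)

# PCA via SVD of the centered matrix, also on the GPU.
centered = logn - logn.mean(axis=0)
u, s, vt = cp.linalg.svd(centered, full_matrices=False)
pcs = u[:, :50] * s[:50]          # first 50 principal-component scores

print(pcs.shape)  # (20000, 50)
```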

If you’re working on an open-source GPU genomics toolkit, I’d love to hear about it.

One final thought: the story of how GPUs transitioned from gaming devices to general-purpose compute accelerators is both fascinating and entertaining. It all started with a quantum chemistry professor at Stanford buying NVIDIA gaming cards in 2006 and hacking them to do the computation he needed. Acquired has a great podcast on the topic.
