Who’s using futures, where, and how?

If we look at our main R package repositories, CRAN (~18,000 packages) and Bioconductor (~2,000 packages), we find that the future framework is used by R packages spanning a wide range of areas, e.g. statistics, modeling & prediction, time-series analysis & forecasting, life sciences, drug analysis, clinical trials, disease modeling, cancer research, computational biology, genomics, bioinformatics, biomarker discovery, epidemiology, ecology, economics & finance, spatial, geospatial & satellite analysis, and natural language processing. That is just a sample based on published R packages - we can only guess how futures are used at the R prompt, in users’ R scripts, unpublished R packages, Shiny applications, and R pipelines running internally in industry and academia.

There are two major use cases of the future framework: (i) performance improvement through parallelization, and (ii) non-blocking, asynchronous user experience (UX). Below are some prominent examples.
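Both use cases build on the same core idiom, which a minimal sketch (not tied to any of the packages below) can illustrate: the plan() call decides how and where futures are resolved, while the code itself stays the same.

```r
library(future)
plan(multisession, workers = 2)  # resolve futures in background R sessions

# Create a future: it starts evaluating in a worker right away,
# without blocking the main R session
f <- future({
  Sys.getpid()  # runs in a worker process
})

# value() blocks only when the result is actually needed
value(f)
```

Switching plan(multisession) to, say, plan(sequential) or plan(cluster, workers = ...) reconfigures where the computation runs without touching the rest of the code, which is what lets the packages below support everything from laptops to HPC clusters.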

EpiNow2: Estimate Real-Time Case Counts and Time-Varying Epidemiological Parameters

Screenshot of the COVID-19 website dashboard with a world map annotated with colors indicating the trend of COVID infections in different regions
Image credit: EpiNow2 team

EpiNow2 is an R package to estimate real-time case counts and time-varying epidemiological parameters, such as current trends of COVID-19 incidence in different regions around the globe.

EpiNow2 uses futures to speed up processing. The future framework is used to estimate incidence rates in different regions concurrently, as well as to run Markov chain Monte Carlo (MCMC) chains in parallel.
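A hedged sketch of what region-wise parallelization can look like with the future framework; estimate_region() below is a hypothetical stand-in for EpiNow2's real per-region estimation step, not its actual API.

```r
library(future.apply)
plan(multisession)

regions <- c("Italy", "Spain", "Sweden")

# Hypothetical stand-in for the real per-region model fit (e.g. an MCMC run)
estimate_region <- function(region) {
  list(region = region, worker = Sys.getpid())
}

# Each region is estimated concurrently in its own worker;
# future.seed = TRUE gives each future a statistically sound RNG stream
results <- future_lapply(regions, estimate_region, future.seed = TRUE)
```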

Seurat: Large-Scale Single-Cell Genomics

A two-dimensional, UMAP-space, scatter plot displaying individual cells grouped into 23 well-separated subclasses that are color and label annotated.
Image credit: Seurat team

Seurat is an R package designed for QC, analysis, and exploration of single-cell RNA-seq data. Seurat aims to enable users to identify and interpret sources of heterogeneity from single-cell transcriptomic measurements, and to integrate diverse types of single-cell data. Azimuth is a Seurat-based web application used by, for instance, HuBMAP - the NIH Human Biomolecular Atlas Project.

Seurat uses futures to speed up processing. The future framework makes it possible to process large data sets and large numbers of samples in parallel on the local machine, distributed across multiple machines, or in large-scale high-performance compute (HPC) environments. Azimuth uses futures to provide a non-blocking web interface.
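Because Seurat builds on the future framework, the user chooses where the heavy steps run by setting a plan up front; which Seurat functions honor the plan is documented by Seurat itself, so the following is only a configuration sketch.

```r
library(future)

# Parallelize on the local machine with four background R sessions
plan(multisession, workers = 4)

# Alternatively, distribute across multiple machines (hypothetical host names):
# plan(cluster, workers = c("node1", "node2"))

# ... then run Seurat as usual; steps that support futures pick up the plan.
nbrOfWorkers()  # reports how many parallel workers the current plan provides
```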

Shiny: Scalable, Asynchronous UX

Thumbnail of the Shiny ICGC Genome Browser webpage. There is a title banner on top above two panels. In the left-hand side panel, there is a circular plot showing the 24 human chromosomes laid out on the circumference. Interacting genes are connected with edges, creating a web of connections across the plane of the circle but also short loops back to the same chromosome. In the right-hand side panel, there is a table that appears to list the genes of interest with some kind of values.
Image credit: International Cancer Genome Consortium (ICGC) team

Shiny is an R package that makes it easy to build interactive web applications and dashboards directly from R. Shiny apps can run locally, be embedded in an R Markdown document, and be hosted on a webpage - all with a few clicks or commands. The combination of being simple and powerful has made Shiny the most popular solution for web applications in the R community. See the Shiny Gallery for real-world examples, e.g. the Genome Browser by the International Cancer Genome Consortium (ICGC) team.

Shiny uses the future framework to provide a non-blocking user interface and to scale up computationally heavy requests. It combines future with promises to turn a blocking, synchronous web interface into a non-blocking, asynchronous, responsive user experience.
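A hedged sketch of that future + promises pattern inside a Shiny server function; the output name and the computation are made up for illustration.

```r
library(shiny)
library(promises)
library(future)
plan(multisession)

# Hypothetical server function: the heavy computation runs in a background
# worker, so the web interface stays responsive while it completes
server <- function(input, output, session) {
  output$result <- renderText({
    future({
      Sys.sleep(5)  # stand-in for a computationally heavy request
      42
    }) %...>%       # promise pipe: runs once the future resolves
      { paste("The answer is", .) }
  })
}
```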

mlr3: Next-Generation Machine Learning

A schematic outline of an ML pipeline. On top, there is a left-to-right pipeline with 'Training Data' as input, followed by the steps 'Scaling', 'Factor Encoding', and 'Median Imputation', and a final 'Learner' state. At the bottom, there is a similar pipeline but with 'New Data' as the input. Into each of the corresponding steps, an arrow from the top pipeline indicates pre-learned parameters. After the 'New Data' has flowed through all steps, the output is a 'Prediction'.
Image credit: mlr3 team

The mlr3 ecosystem provides efficient, object-oriented building blocks for machine learning (ML): tasks, learners, resamplings, and measures. It supports large-scale, out-of-memory data processing.

mlr3 uses futures to speed up processing. The future framework is used in different ML steps; e.g. resampling of learners can be performed much faster when run in parallel. The framework ensures that statistically sound parallel random-number generation (RNG) is used, which guarantees reproducible results.
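That reproducibility guarantee can be checked directly with the future.apply package from the framework: with the same future.seed, results are identical regardless of the backend or the number of workers.

```r
library(future.apply)

plan(sequential)
a <- future_lapply(1:4, function(i) rnorm(1), future.seed = 42L)

plan(multisession, workers = 2)
b <- future_lapply(1:4, function(i) rnorm(1), future.seed = 42L)

# Same parallel-safe (L'Ecuyer-CMRG) RNG streams => identical results
identical(a, b)  # TRUE
```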

targets/drake: Pipeline Toolkit for Reproducible Computation at Scale

A drake dependency graph with a file 'raw_data_x.xlsx' node to the left, on which a 'raw_data' node depends, which in turn feeds two nodes, 'fit' and 'hist'. A 'report' node depends on the latter two, and the final node is the 'report.html' output file. There is a legend to the left explaining how the states of the nodes are represented as colors and shapes.
Image credit: targets/drake team

The targets package, and its predecessor drake, is a general-purpose computational engine for statistics and data science that brings together function-oriented programming in R with make-like declarative workflows. It has native support for parallel and distributed computing while preserving reproducibility.

Both targets and drake identify targets in the declared dependency graph that can be resolved concurrently, which can then be processed in parallel on the local computer or distributed in the cloud via the future framework.
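As a configuration sketch, a minimal _targets.R along the lines of the graph above; read_data(), fit_model(), and plot_hist() are hypothetical user functions, and parallelism comes from setting a future plan.

```r
# _targets.R (sketch): 'fit' and 'hist' depend only on 'raw_data',
# so they can be resolved concurrently once 'raw_data' is done
library(targets)
future::plan(future::multisession)

list(
  tar_target(raw_data, read_data("raw_data_x.xlsx")),  # hypothetical user fns
  tar_target(fit, fit_model(raw_data)),
  tar_target(hist, plot_hist(raw_data))
)
```

The pipeline is then run with tar_make_future(), targets' future-based sibling of tar_make().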