Statistical Consultation, Statistical Methods Development, and Mathematical Modeling for Immunological Assay and Immunoassay

Tyson H. Holmes, Ph.D.

Statistical Director, Human Immune Monitoring Center

Institute for Immunity, Transplantation and Infection

Stanford University School of Medicine

tholmes@stanford.edu 

“We are here to help.”

The Service:

The purpose of this service is specific and twofold.

·      To provide investigators with guidance on how to analyze their data themselves, including instruction on applicable R and/or SAS script.

·      To develop new statistical methods or other mathematical models where existing methods will not permit adequate analysis.

Selected Assay Technologies:

·      Luminex® platform for secreted soluble proteins

·      Flow and mass cytometry, including intracellular staining

Selected Current Research Interests:

·      Nonspecific binding, batch effects, antigen-excess, and other assay artifacts

·      Optimization of immunological analysis pipelines

·      Cluster analysis and gap finding for immunological and immunoassay data

·      Estimation of and computations on biological networks (graph theoretic)

·      Low-dimensional visualization and testing for high-dimensional data

·      In vivo standard (calibration) curve development

·      Microbiome characterization

·      Distribution-free, regularized regression modeling of immunological and immunoassay data   

 

Selected Survey of Publications:

·    Holmes TH. 2020. Generalized mathematical model for immunoassay interference. Autoimmunity Reviews 19:102663. 

·      Holmes TH, Subrahmanyam PB, Wang W, Maecker HT. 2019. Penalized supervised star plots: example application in influenza-specific CD4+ T cells. Viral Immunology 32:102-109.

·      Holmes TH, He X. 2016. Human immunophenotyping via low-variance, low-bias, interpretive regression modeling of small, wide data sets: application to aging and immune response to influenza vaccination. Journal of Immunological Methods 437:1-12.

·      Holmes TH, Lewis DB. 2014. Bayesian immunological model development from the literature: example investigation of recent thymic emigrants. Journal of Immunological Methods 414: 32-50.

Regarding Data...

Many consultations will not require data sharing. For those rare occasions that do, please proceed as follows.

  • Data must be fully de-identified of...
  • Do not provide us with any information that would allow us to re-identify the data.
  • Provide a concise description of the dataset, its variables, and any limitations in the data, including missing values, low microbead counts, measurements above or below limits of detection, serial dilutions, etc. Here too, do not send any PHI (https://acp.stanford.edu/hipaa/hipaa-faq) or other high-risk data (https://uit.stanford.edu/guide/riskclassifications), and do not provide us with any information that would allow us to re-identify the data.
  • Studies must already have all applicable and necessary Institutional Review Board review and approvals, including for sharing data with us (https://researchcompliance.stanford.edu/panels/hs).
  • Structure data with columns as variables and rows as observations. Longitudinal data will require multiple rows per participant.
  • The first row, and first row only, of the data set must contain the variable names. Variable names must not include any special characters, including spaces or hyphens, and each must begin with a letter of the alphabet.
  • Contact Dr. Holmes (tholmes@stanford.edu) for acceptable file formats.
  • Contact Dr. Holmes regarding how to submit your data.
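As an illustration of the layout described above (columns as variables, rows as observations, with multiple rows per participant for longitudinal data), here is a minimal sketch; it is written in Python with pandas purely for illustration, and all variable and participant names are hypothetical:

```python
import pandas as pd

# Hypothetical longitudinal layout: columns are variables, rows are observations,
# with one row per participant per visit. All names here are illustrative only.
data = pd.DataFrame({
    "participant_id": ["P01", "P01", "P02", "P02"],
    "visit_day":      [0, 28, 0, 28],
    "IL6_pg_ml":      [1.2, 3.4, 0.9, 2.1],
    "TNFa_pg_ml":     [5.0, 7.2, 4.4, 6.8],
})

# The first row of the exported file holds the variable names; each name
# starts with a letter and avoids special characters such as spaces or hyphens.
print(data.shape)  # (4, 4)
```

An equivalent structure can, of course, be prepared directly in a spreadsheet or in R.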

For more information on this data policy, please contact the HIMC Director,

Dr. Holden Maecker (maecker@stanford.edu).

Getting Started:

·      Please do not begin by sending any data. Instead, please send via encrypted email to Dr. Holmes (tholmes@stanford.edu), copying Dr. Maecker (maecker@stanford.edu), a one-page summary of the study design, primary hypotheses, and the type(s) of data collected.

·      This one-page summary cannot include any PHI (https://acp.stanford.edu/hipaa/hipaa-faq) or other high-risk information (https://uit.stanford.edu/guide/riskclassifications) or any access to any PHI or other high-risk information.

·      Dr. Holmes will respond promptly with an estimate of “hours not to exceed” required for the work.

Existing Tools for Download:

Please direct all questions regarding any of the utilities posted here to Tyson Holmes (tholmes@stanford.edu).

R utility for correcting for plate/batch/lot and nonspecific binding artifacts (updated 8 Apr 2021). April 8, 2021: The R script has been updated to add the dpMFI values back to the mean pMFI values per SP in the output file (not in Figure 4). Use this version, with the revised output file, to make comparisons among cytokines more meaningful in downstream analyses. To avoid a harmless warning message, download and install version 1.5.2-1 of emmeans. We thank Heather Pankow (Department of Psychiatry and Behavioral Sciences, Stanford University) for helping us to identify an error in the R script.

User's Guide for R utility for correcting for plate/batch/lot and nonspecific binding artifacts.

Programmer's Guide for R utility for correcting for plate/batch/lot and nonspecific binding artifacts.

User's Guide on Default Settings for VSpecimen, VPipette, and Fold

General Guidance on Single-Well Designs (Non-longitudinal or Longitudinal)

Example R Script for Analysis of Longitudinal, Single-Well Luminex® Data Sets

 

An R utility for producing penalized supervised star plots is available here. The paper that introduced and details penalized supervised star plots was published in Viral Immunology 32:102-109.

Notes on Olink and Luminex Assays.

Olink readouts can fall below the limit of detection (< LOD). The LOD is a conservative boundary below which readouts may respond nonlinearly or may be pure noise with high coefficients of variation. Readouts < LOD can be negative, an artifact of normalization. While readouts < LOD are to be regarded with caution, the LOD is conservative in that some readouts < LOD might still be meaningful. Indeed, Olink recommends routinely retaining readouts < LOD in statistical analyses, which HIMC typically does. You may wish to view results from proteins with high percentages of readouts < LOD as possibly less reliable. Failure to detect statistically significant findings for a protein with a high percentage < LOD may indicate either that the protein has truly not responded biologically or that the < LOD readouts have compromised the result. Olink advises that readouts < LOD are unlikely to produce false positives if proper control for false positives has been applied to the statistical analysis, which the HIMC can provide. Please consult with us further (yaelhr@stanford.edu, tholmes@stanford.edu) if you have any questions or concerns about < LOD readouts in your data.
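As a sketch of how one might screen proteins by their percentage of readouts below the LOD, here is an illustration in Python with pandas; the readouts, the LOD values, and the 50% flagging cutoff are all invented for the example and are not HIMC policy:

```python
import pandas as pd

# Hypothetical NPX readouts (rows = specimens, columns = proteins).
npx = pd.DataFrame({
    "IL6":  [2.1, 3.0, 1.8, 2.5],
    "CCL4": [0.2, -0.1, 0.3, 0.1],  # negative values can occur below the LOD
})

# Hypothetical per-protein LOD values in NPX units.
lod = pd.Series({"IL6": 0.5, "CCL4": 0.4})

# Percentage of readouts below the LOD, per protein (lt aligns on column names).
pct_below_lod = npx.lt(lod).mean() * 100

# Flag proteins whose results may be less reliable (illustrative 50% cutoff).
flagged = pct_below_lod[pct_below_lod >= 50].index.tolist()
print(pct_below_lod.to_dict())  # {'IL6': 0.0, 'CCL4': 100.0}
print(flagged)                  # ['CCL4']
```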

Olink assays have good linear dynamic range across normal and pathological concentrations. Even so, Olink assays are bounded above by the upper limit of quantification (ULOQ). Results above the ULOQ, while not necessarily completely useless, are definitely less reliable. Unfortunately, published ULOQ values are not available in NPX units, the units reported to you. Values above the ULOQ can display a distorting "hook" effect, wherein NPX values actually decline with increasing protein concentration. We strongly advise treating results from specimens with especially high NPX values with caution. Based on our experience thus far, three proteins that tend to have especially high NPX values (14-16 NPX) are MMP-1, MCP4, and CXCL5. Please consult with us further (yaelhr@stanford.edu, tholmes@stanford.edu) if you have any additional questions about the ULOQ.

Invariably, a time delay exists between when a blood specimen is collected and when it is processed through Olink or Luminex assays. Any delay may cause some change, even if very small, in measurable protein levels in plasma and serum specimens. Recent work in Luminex (Scientific Reports 10:17328) suggests that protein levels may remain relatively stable for up to 15 hours. To be safe, however, we recommend processing within 2 hours (here). We recommend freezing at -80°C (here). Protein levels may remain stable for up to 2 years when frozen at -80°C, but multiple freeze-thaw cycles can distort protein levels (see here). If you choose to have both Luminex and Olink assays performed on your specimens, these assays may be performed in series (not in parallel), so protein levels will be distorted to some degree for whichever assay is performed second. Please consult our webpage here for more specific details on how to collect and process your serum and plasma samples. Please feel free to direct any additional questions to us (yaelhr@stanford.edu).

Olink and Luminex assay data may contain missing values, either because a specimen did not process properly or because a reported value was removed during quality control. Always inquire (yaelhr@stanford.edu) as to why data are missing from the data set that you receive. Missing data can be addressed in different ways. If missing data are extremely rare in your data and your plan is to analyze data separately by cytokine (univariate analyses), missing data can probably be safely ignored; a minor loss of statistical power and possibly minor bias will result. If missing data are more common but occur in only one or two proteins, consider excluding those specific proteins from your analysis. If missing data are more common and occur in multiple proteins and/or you are planning multivariate analyses (e.g., cluster analyses), consider imputing missing data. A current standard is multiple imputation, wherein multiple complete data sets are imputed, properly accounting for the random uncertainty of imputation. A "tipping point" analysis (here) might be warranted in certain circumstances as a sensitivity analysis. Please feel free to contact us about how to perform statistical analysis with missing data (tholmes@stanford.edu).
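The first step in any of the strategies above is simply to quantify how much is missing and where. A minimal sketch in Python with pandas (the panel, its values, and the 50% exclusion rule are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical cytokine panel with missing values coded as NaN.
panel = pd.DataFrame({
    "IL6":  [1.2, np.nan, 2.0, 1.8],
    "IL10": [0.5, 0.7, 0.6, 0.4],
    "TNFa": [np.nan, np.nan, 3.1, np.nan],
})

# Fraction missing per protein guides the choices described above:
# ignore if rare, exclude a heavily affected protein, or impute.
frac_missing = panel.isna().mean()
print(frac_missing.to_dict())  # {'IL6': 0.25, 'IL10': 0.0, 'TNFa': 0.75}

# Illustrative rule: exclude proteins with more than half of values missing.
keep = frac_missing[frac_missing <= 0.5].index
panel_kept = panel[keep]
print(list(panel_kept.columns))  # ['IL6', 'IL10']
```

For the multiple-imputation route itself, please consult with us rather than relying on this toy summary.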

Some of the above information was obtained through correspondence with the very helpful professionals at Olink. For more information, visit their resources at https://www.olink.com and here.

Preparing data for statistical analysis

When creating spreadsheets, please be sure to name columns (i.e., variable names) in a format that can be read by statistical software.

  • Create names using letters, numbers, and the underbar.
  • Do not use any other special characters such as parentheses, hyphens, spaces, or plus signs.
  • Always start column names with a letter, not with a number.
  • Keep column names to a length of 32 characters or fewer.
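The rules above can be applied mechanically before analysis. A small sketch in Python (the function name, the "v_" prefix for names starting with a digit, and the example inputs are all illustrative):

```python
import re

def clean_name(name: str, max_len: int = 32) -> str:
    """Replace disallowed characters with underscores, prefix names that
    start with a digit so they begin with a letter, and truncate."""
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if cleaned and cleaned[0].isdigit():
        cleaned = "v_" + cleaned  # illustrative prefix; any letter works
    return cleaned[:max_len]

print(clean_name("CD4+ T-cells"))      # CD4__T_cells
print(clean_name("7AAD (viability)"))  # v_7AAD__viability_
```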

If you need variable names in output graphs and tables that contain special characters, please create a crosswalk with the column name on the left and the output name on the right. Here is a toy example of that structure.

 

"CD4pTCells" = "CD4+ T-cells"

"CD8pTCells" = "CD8+ T-cells"

"mDC" = "Myeloid Dendritic Cells"
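A crosswalk of this structure can then be applied at output time, when labeling tables or graphs. A minimal sketch in Python (the dictionary mirrors the toy example; the column list is invented):

```python
# Crosswalk from software-safe column names (left) to display names (right),
# mirroring the toy example above.
crosswalk = {
    "CD4pTCells": "CD4+ T-cells",
    "CD8pTCells": "CD8+ T-cells",
    "mDC": "Myeloid Dendritic Cells",
}

# Hypothetical columns used in an analysis; fall back to the raw name
# when a column is absent from the crosswalk.
analysis_columns = ["CD4pTCells", "mDC"]
display_labels = [crosswalk.get(c, c) for c in analysis_columns]
print(display_labels)  # ['CD4+ T-cells', 'Myeloid Dendritic Cells']
```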

 

Following these guidelines will ensure that all variables are named properly throughout the entire analysis.

Measurement error 

An inescapable fact in assay work is measurement error. Measurement error can assume two separable forms, bias and variance. Bias is systematic departure from truth, while variance is unstructured noise.

Bias correction is typically associated with artifact correction. Artifacts may be corrected through tuning the assay itself or through subsequent statistical means. Some artifacts may be correctable (e.g., nonspecific binding in the Luminex assay) while others are less so (e.g., nonspecific binding in the Olink assay).

Measurement variance is typically termed "technical error"; its measurement often requires technical replicates, and its quantification often employs the coefficient of variation (technical CV). Technical error degrades the statistical stability of measured parameters (e.g., means) and erodes statistical power in hypothesis testing. Technical error in predictor variables can bias estimates of regression coefficients unless special measures are taken. A drawback to measuring technical error is cost, and for that reason it is often bypassed. Even so, having technical replicates is definitely advantageous because we can "smooth" over them statistically, for example by averaging, and thereby recuperate statistical stability and power. Also, technical replicates need not be performed for all specimens in an experiment. Even a subset of technical replicates (e.g., 10 specimens) opens our eyes to the extent to which technical variance may be compromising findings, even if we cannot correct for it.

Bias and variance are not mutually exclusive. The same factor (e.g., freeze/thaw) might introduce both, and when both are present their effects compound. That is why their quantification and correction, whenever possible, can be so important.
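The replicate "smoothing" and technical CV described above can be sketched in a few lines; this illustration is in Python, and the specimen identifiers and replicate values are invented:

```python
import statistics

# Hypothetical technical replicates (e.g., duplicate wells) per specimen.
replicates = {
    "S01": [102.0, 98.0, 100.0],
    "S02": [55.0, 65.0],
}

# Average over replicates ("smoothing") and quantify technical error as a CV.
summary = {}
for specimen, values in replicates.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation across replicates
    summary[specimen] = (mean, 100 * sd / mean)  # (replicate mean, technical CV %)

print(summary["S01"])  # (100.0, 2.0)
```

The replicate means would then carry forward into downstream analyses, while the technical CVs indicate how much measurement variance may be compromising the findings.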

Further Information:

Contact Dr. Tyson Holmes (Statistical Director, HIMC, tholmes@stanford.edu) about scheduling and availability and Dr. Holden Maecker (Director, HIMC, maecker@stanford.edu) about fees.