Statistical Consultation, Statistical Methods Development, and Mathematical Modeling for Immunological Assay and Immunoassay
Tyson H. Holmes, Ph.D.
Statistical Director, Human Immune Monitoring Center
Institute for Immunity, Transplantation and Infection
Stanford University School of Medicine
“We are here to help.”
The purpose of this service is three-fold.
· To provide investigators with guidance on how to analyze their data themselves, including instruction on applicable R and/or SAS script.
· To develop new statistical methods or other mathematical models where existing methods will not permit adequate analysis.
· To provide general statistical consulting, including drafting and review of text on statistical methods.
Selected Assay Technologies:
· Luminex® and Olink® platforms for secreted soluble proteins
· Flow and mass cytometry, including intracellular staining
· Nonspecific binding, batch effects, antigen-excess, and other assay artifacts
Selected Survey of Publications:
· Holmes TH. 2023. A taxonomy of multiple regression methods for immunologists. Journal of Immunological Methods 519:113506.
· Holmes TH. 2020. Generalized mathematical model for immunoassay interference. Autoimmunity Reviews 19:102663.
· Holmes TH, Subrahmanyam PB, Wang W, Maecker HT. 2019. Penalized supervised star plots: Example application in influenza-specific CD4+ T Cells. Viral Immunology 32:102-109.
· Holmes TH, He X. 2016. Human immunophenotyping via low-variance, low-bias, interpretive regression modeling of small, wide data sets: application to aging and immune response to influenza vaccination. Journal of Immunological Methods 437:1-12.
· Holmes TH, Lewis DB. 2014. Bayesian immunological model development from the literature: example investigation of recent thymic emigrants. Journal of Immunological Methods 414: 32-50.
Many consultations will not require data sharing. For those rare occasions that do, please proceed as follows.
- Data must be fully de-identified of:
- Protected Health Information (PHI) (https://med.stanford.edu/irt/security/stanfordinfo/hipaa.html) and
- any other high-risk data (https://uit.stanford.edu/guide/riskclassifications).
- Do not provide us with any information that would allow us to re-identify the data.
- Provide a concise description of the dataset, its variables, and any limitations in the data, including missing values, low microbead counts, measurements above or below limits of detection, serial dilutions, etc. Here too, do not send any PHI (https://med.stanford.edu/irt/security/stanfordinfo/hipaa.html) or any other high-risk data (https://uit.stanford.edu/guide/riskclassifications), and do not provide us with any information that would allow us to re-identify the data.
- Studies must already have all applicable and necessary Institutional Review Board review and approvals, including for sharing data with us (https://researchcompliance.stanford.edu/panels/hs).
- Structure data with columns as variables and rows as observations. Longitudinal data will require multiple rows per participant.
- The first row, and the first row only, of the data set will always contain variable names. Variable names should not include any special characters, including spaces or hyphens. Each variable name should begin with a letter of the alphabet.
- Contact Dr. Holmes (firstname.lastname@example.org) for acceptable file formats.
- Contact Dr. Holmes regarding how to submit your data.
For more information on this data policy, please contact the HIMC Director,
Dr. Holden Maecker (email@example.com).
· Please do not begin by sending any data. Instead, please send via encrypted email to Dr. Holmes (firstname.lastname@example.org), copying Dr. Maecker (email@example.com), a one-page summary of the study design, primary hypotheses, and the type(s) of data collected.
· This one-page summary cannot include any PHI (https://med.stanford.edu/irt/security/stanfordinfo/hipaa.html) or other high-risk information (https://uit.stanford.edu/guide/riskclassifications) or any access to any PHI or other high-risk information.
· Dr. Holmes will respond promptly with an estimate of “hours not to exceed” required for the work.
Existing Tools for Download:
Please direct all questions regarding any of the utilities posted here to Tyson Holmes (firstname.lastname@example.org).
R utility for correcting for plate/batch/lot and nonspecific binding artifacts (updated 20 Jan 2022).
- April 8, 2021: The R script has been updated to add the dpMFI values back to the mean pMFI values per SP in the output file (not in Figure 4). Use this version, with the revised output file, to make comparisons among cytokines more meaningful in downstream analyses. To avoid a harmless warning message, download and install the most recent version of emmeans. We thank Heather Pankow (Department of Psychiatry and Behavioral Sciences, Stanford University) for helping us to identify an error in the R script.
- January 20, 2022: Correction made to the naming of one variable in the output file.
An R utility for producing penalized supervised star plots is available here. The paper that introduced and details penalized supervised star plots was published in Viral Immunology 32:102-109.
Whenever possible, all biological / clinical factors of interest should be balanced in their ratios across batches (e.g., plates). For example, equal ratios of each treatment condition should appear in each batch. All longitudinal specimens for each participant should occur in the same batch. Balance is important because it provides the cleanest means of disentangling batch effects from biological / clinical effects.
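Balance across batches can be checked programmatically before samples are run. Below is a minimal sketch in Python/pandas (the HIMC's own utilities are in R; the plate and treatment labels here are invented for illustration). Equal treatment counts within each plate indicate a balanced design.

```python
import pandas as pd

# Hypothetical layout: which plate (batch) each specimen was run on,
# and each specimen's treatment condition.
layout = pd.DataFrame({
    "plate":     ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2"],
    "treatment": ["A",  "A",  "B",  "B",  "A",  "A",  "B",  "B"],
})

# Cross-tabulate treatment counts within each plate; equal counts in
# every row indicate the design is balanced across batches.
counts = pd.crosstab(layout["plate"], layout["treatment"])
balanced = counts.nunique(axis=1).eq(1).all()
print(counts)
print("Balanced:", balanced)
```

The same cross-tabulation also reveals whether all longitudinal specimens for a participant landed in the same batch, if participant ID is tabulated against plate.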
Notes on Olink and Luminex Assays.
Olink readouts can be below the limit of detection (< LOD). The LOD is a conservative boundary below which readouts may respond nonlinearly or may be pure noise with high coefficients of variation. < LOD readouts can be negative, an artifact of normalization. While < LOD readouts are to be regarded with caution, the LOD is conservative in that some readouts < LOD might still be meaningful. Indeed, Olink recommends routinely retaining readouts < LOD in statistical analyses, which the HIMC typically does. You may wish to view results from proteins with high percentages < LOD as possibly being less reliable. Failure to detect statistically significant findings for a protein with a high percentage < LOD may indicate either that the protein has truly not responded biologically or that < LOD readouts have compromised the result. Olink advises that < LOD readouts are unlikely to result in false positives if proper control for false positives has been applied to the statistical analysis, which the HIMC can provide. Please consult with us (email@example.com, firstname.lastname@example.org) further if you have any questions or concerns about < LOD readouts in your data.
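Screening for proteins with high percentages < LOD is a simple tabulation. Here is a minimal Python/pandas sketch; the protein names, NPX values, and LOD values are invented, and the 50% flagging threshold is a choice for illustration, not an Olink recommendation.

```python
import pandas as pd

# Hypothetical NPX matrix (rows = specimens, columns = proteins) and
# per-protein LOD values; all numbers are illustrative, not real output.
npx = pd.DataFrame({
    "IL6":   [2.5, 3.1, 2.8, 0.4],
    "CXCL9": [-0.2, 0.1, -0.5, -0.1],  # note: < LOD readouts can be negative
}, index=["s1", "s2", "s3", "s4"])
lod = pd.Series({"IL6": 1.0, "CXCL9": 0.5})

# Percentage of readouts below LOD for each protein.
pct_below = npx.lt(lod).mean() * 100

# Flag proteins whose results should be viewed with extra caution
# (the 50% threshold is an arbitrary illustrative cutoff).
flagged = pct_below[pct_below > 50].index.tolist()
print(pct_below)
print("High % < LOD:", flagged)
```

Consistent with the advice above, flagged proteins would still be retained in analyses; the flag simply marks results to interpret cautiously.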
Olink assays have a good linear dynamic range across both normal and pathological concentrations. Even so, Olink assays are bounded above by the upper limit of quantification (ULOQ). Results above the ULOQ, while not necessarily completely useless, are definitely less reliable. Unfortunately, published ULOQ values are not available in NPX units, the units reported to you. Values above the ULOQ can display a distorting "hook effect" wherein NPX values actually decline with increasing protein concentration. We strongly advise treating results from specimens with especially high NPX values with caution. Based on our experience thus far, three proteins that tend to have especially high NPX values (14 - 16 NPX) are MMP-1, MCP4 and CXCL5. Please consult with us further if you have any additional questions about the ULOQ (email@example.com, firstname.lastname@example.org).
Invariably, a time delay exists between when a blood tissue specimen is collected and when it is processed through Olink or Luminex assays. Any delay may cause some, even if very small, change in measurable protein levels in plasma and serum specimens. Recent work in Luminex (Scientific Reports 10: 17328) suggests that protein levels may remain relatively stable for up to 15 hours. To be safe, however, we recommend processing within 2 hours (here). We recommend freezing at -80°C (here). Protein levels may remain stable for up to 2 years when frozen at -80°C, but multiple freeze-thaw cycles can distort protein levels (see here). If you choose to have both Luminex and Olink assays performed on your specimens, these assays may be performed in series (not in parallel), so protein levels will be distorted to some degree for whichever assay is performed second. Please consult our webpage here for more specific details on how to collect and process your serum and plasma samples. Please feel free to direct any additional questions to us (email@example.com).
Olink and Luminex assay data may contain missing values, either because a specimen did not process properly or because a reported value was removed during quality control. Always inquire (firstname.lastname@example.org) as to why data are missing from the data set that you receive. Missing data can be addressed in different ways. If missing data are extremely rare in your data and your plan is to analyze data separately by cytokine (univariate analyses), missing data can probably be safely ignored; only a minor loss of statistical power and possibly minor bias will result. If missing data are more common but only occur in one or two proteins, consider excluding those specific proteins from your analysis. If missing data are more common and occur in multiple proteins and/or you are planning multivariate analyses (e.g., cluster analyses), consider imputing missing data. A current standard is multiple imputation, wherein multiple complete data sets are imputed, properly accounting for the random uncertainty of imputation. A "tipping point" analysis (here) might be warranted in certain circumstances as a sensitivity analysis. Please feel free to contact us about how to perform statistical analysis with missing data (email@example.com).
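A first step under any of these strategies is simply tabulating missingness per protein. A minimal Python/pandas sketch with invented cytokine values follows; the 40% exclusion threshold is illustrative only, not an HIMC rule.

```python
import numpy as np
import pandas as pd

# Hypothetical cytokine panel with missing values coded as NaN.
data = pd.DataFrame({
    "IL2":  [1.2, 1.5, np.nan, 1.1, 1.3],
    "IFNg": [0.8, np.nan, np.nan, 0.9, np.nan],
    "TNFa": [2.0, 2.2, 2.1, 2.3, 2.4],
})

# Percent missing per protein guides the choice of strategy:
# ignore if rare, exclude the protein if missingness is concentrated
# there, or impute (e.g., multiple imputation) otherwise.
pct_missing = data.isna().mean() * 100
exclude = pct_missing[pct_missing > 40].index.tolist()
print(pct_missing)
print("Consider excluding:", exclude)
```

In a real analysis the decision would also weigh why the values are missing, which is why the text above advises asking about the cause of missingness first.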
If you collect plasma such that some specimens have EDTA and others have heparin, include EDTA/heparin as a covariate in your statistical regression analyses. That said, the HIMC does not recommend collecting some specimens in EDTA and others in heparin within the same experiment.
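As a sketch of what "use EDTA/heparin as a covariate" means in practice, here is a minimal ordinary-least-squares example in Python/NumPy with invented readouts (in a real analysis you would fit the model in R or SAS with proper standard errors and inference). The anticoagulant enters the design matrix as an indicator column alongside the effect of interest.

```python
import numpy as np
import pandas as pd

# Hypothetical data: a cytokine readout, a treatment indicator, and an
# anticoagulant indicator (0 = EDTA, 1 = heparin). All values invented.
df = pd.DataFrame({
    "cytokine": [1.0, 1.2, 1.1, 1.9, 2.1, 2.0],
    "treated":  [0,   0,   0,   1,   1,   1],
    "heparin":  [0,   1,   0,   1,   0,   1],
})

# Design matrix: intercept, treatment indicator, anticoagulant covariate.
X = np.column_stack([np.ones(len(df)), df["treated"], df["heparin"]])
y = df["cytokine"].to_numpy()

# Ordinary least squares; including the heparin column adjusts the
# estimated treatment effect for any systematic EDTA/heparin difference.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, treatment, heparin:", beta.round(3))
```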
Preparing data for statistical analysis
When creating spreadsheets, please be sure to name columns (i.e., variable names) in a format that can be read by statistical software.
- Create names using letters, numbers, and the underscore.
- Do not use any other special characters such as parentheses, hyphens, spaces, or plus signs.
- Always start column names with a letter, not with a number.
- Keep column names to a length of 32 characters or less.
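These four rules can be checked mechanically before submitting a data set. A minimal Python sketch, in which the regular expression encodes exactly the rules above (letters, digits, and underscores only; starts with a letter; at most 32 characters):

```python
import re

# A column name is acceptable if it starts with a letter and contains
# only letters, digits, and underscores, with total length <= 32.
def valid_name(name):
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{0,31}", name))

print(valid_name("CD4_T_cells"))   # True: follows all four rules
print(valid_name("CD4+ T-cells"))  # False: special characters and a space
print(valid_name("2ndVisit"))      # False: starts with a number
```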
If you need variable names in output graphs and tables that contain special characters, please create a crosswalk with the column name on the left and the output name on the right. Here is a toy example of that structure.
"CD4pTCells" = "CD4+ T-cells"
"CD8pTCells" = "CD8+ T-cells"
"mDC" = "Myeloid Dendritic Cells"
Following these guidelines will ensure that all variables are named properly throughout the entire analysis.
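In software, such a crosswalk is simply a lookup table applied at output time: analyses run under the software-safe names, and the display labels are swapped in only for graphs and tables. A minimal Python/pandas sketch using the toy crosswalk above (the cell counts are invented):

```python
import pandas as pd

# Crosswalk from software-safe column names to display labels,
# mirroring the toy example in the text.
crosswalk = {
    "CD4pTCells": "CD4+ T-cells",
    "CD8pTCells": "CD8+ T-cells",
    "mDC": "Myeloid Dendritic Cells",
}

# Analyze under the safe names; relabel only when producing output.
df = pd.DataFrame({"CD4pTCells": [512, 498],
                   "CD8pTCells": [301, 322],
                   "mDC": [45, 51]})
for_display = df.rename(columns=crosswalk)
print(list(for_display.columns))
```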
Next I describe a structure for spreadsheets that is generically useful: rectangular data sets. As the name suggests, all data fall within a rectangle. Each row and each column of the data set contains data. No rows are blank. No columns are blank. All rows are of the same length, and all columns are of the same length. No extra notes or data fall outside this rectangle. The first row, and first row only, of each column contains the variable name for that column. Rather than spreading data across multiple worksheets, to the extent possible, all data are collated into a single rectangular data set. All information is contained in the data entries themselves, not conveyed through highlighting, bolding, or other special formatting. The simple, clean structure of rectangular data sets has several advantages.
- Compact, complete representation of all data.
- Allows rapid, clear interpretation of the contents of each cell.
- Allows a quick scan by eye for missing or incomplete data.
- Machine-readable for statistical software.
- May be saved in a portable CSV file format.
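The rectangular structure maps directly onto a data frame and a CSV file. A minimal Python/pandas sketch with an invented longitudinal data set (one row per participant per visit, as described above for longitudinal data):

```python
from io import StringIO

import pandas as pd

# Hypothetical rectangular, longitudinal data set: columns are
# variables, rows are observations, and the header row holds the
# variable names. Values are invented.
df = pd.DataFrame({
    "participant": ["P01", "P01", "P02", "P02"],
    "visit":       [1, 2, 1, 2],
    "IL6":         [1.8, 2.0, 1.5, 1.7],
})

# Verify the rectangle: no fully blank rows and no fully blank columns.
assert not df.isna().all(axis=1).any()
assert not df.isna().all(axis=0).any()

# Round-trip through CSV, the portable machine-readable format noted
# above (an in-memory buffer stands in for a file on disk).
buf = StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
back = pd.read_csv(buf)
print(back.shape)
```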
An inescapable fact in assay work is measurement error. Measurement error can assume two separable forms, bias and variance. Bias is systematic departure from truth, while variance is unstructured noise.

Bias correction is typically associated with artifact correction. Artifacts may be corrected through tuning the assay itself or through subsequent statistical means. Some artifacts may be correctable (e.g., nonspecific binding in the Luminex assay) while others are less so (e.g., nonspecific binding in the Olink assay).

Measurement variance is typically termed "technical error"; its measurement often requires technical replicates, and its quantification often employs the coefficient of variation (technical CV). Technical error degrades the statistical stability of measured parameters (e.g., means) and erodes statistical power in hypothesis testing. Technical error in predictor variables can bias estimates of regression coefficients unless special measures are taken. A drawback to measuring technical error is cost; for that reason, it is often bypassed. Even so, having technical replicates is definitely advantageous because we can "smooth" over them statistically, for example by averaging, and thereby recover statistical stability and power. Also, technical replicates need not be performed for all specimens in an experiment. Even a subset of technical replicates (e.g., 10 specimens) opens our eyes to the extent to which technical variance may be compromising findings, even if we cannot correct for it.

Bias and variance are not mutually exclusive. The same factor (e.g., freeze/thaw) might introduce both. When both are present, their effects compound. That is why their quantification and correction, whenever possible, can be so important.
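As a sketch of "smoothing" over technical replicates and quantifying the technical CV, here is a minimal Python/pandas example with invented duplicate measurements (the specimen IDs and MFI values are made up for illustration):

```python
import pandas as pd

# Hypothetical technical replicates: each specimen measured twice.
reps = pd.DataFrame({
    "specimen": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "MFI":      [100.0, 110.0, 200.0, 190.0, 150.0, 165.0],
})

g = reps.groupby("specimen")["MFI"]
summary = pd.DataFrame({
    # Averaging replicates "smooths" technical error, recovering
    # statistical stability and power for downstream analysis.
    "mean": g.mean(),
    # Technical CV: replicate standard deviation as a percent of the mean.
    "tech_cv_pct": g.std(ddof=1) / g.mean() * 100,
})
print(summary.round(2))
```

Even when replicates exist for only a subset of specimens, this per-specimen CV indicates how much technical variance may be affecting the findings.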