Personalized Medicine

Feb 2, 12:00 PM

PSMG: Leonard Bickman

Improving Mental Health Services: A 50-Year Journey from Randomized Experiments to Artificial Intelligence and Precision Mental Health

Leonard Bickman, Ph.D.
Vanderbilt University

ABSTRACT:
This presentation describes the current state of mental health services, identifies critical problems, and suggests how to solve them. I focus on the potential contributions of artificial intelligence and precision mental health to improving mental health services. Toward that end, I draw upon my own research, which has changed over the last half century, to highlight the need to transform the way we conduct mental health services research and program development. I identify exemplars from the emerging literature on artificial intelligence and precision approaches to treatment in which there is an attempt to personalize or fit the treatment to the client in order to produce more effective interventions.

Jan 19, 12:00 PM

PSMG: Kimberly Johnson and Zili Sloboda

Looking Over the Wall—The Professionalization of the Field of Prevention

Kimberly Johnson, Ph.D.
University of South Florida

Zili Sloboda, Sc.D., President
Applied Prevention Science International

ABSTRACT:
The field of prevention science and practice has matured over the past 50 years and is increasingly being recognized as a profession. The sociology of professions provides parameters as to what constitutes a profession: a systematic body of theory; an established knowledge base; the authority to define problems and their treatment; community sanctions to admit and train its members; ethical codes that stress an ideal of service to others; and a culture that includes the institutions necessary to carry out all of its functions. Another component is achieving international recognition and acceptance. The status and maturation of prevention as a profession is reviewed. Recommendations for moving forward are presented, including developing a structure to ‘internationalize’ the field of prevention and fully professionalize it; such a structure would include groups such as the U.S. and EU Societies for Prevention Research and the International Consortium of Universities for Demand Reduction.

Feb 13, 12:00 PM

Linda Collins: Why conduct a factorial optimization trial?

Linda Collins, Ph.D.
Pennsylvania State University

ABSTRACT:
Behavioral interventions are used widely for prevention and treatment of health problems, and for promotion of well-being and academic achievement. These interventions are typically developed and evaluated using the classical treatment package approach, in which the intervention is assembled a priori and evaluated by means of a two-group randomized controlled trial (RCT). I will describe an alternative framework for developing, optimizing, and evaluating behavioral interventions. This framework, called the multiphase optimization strategy (MOST), is a principled approach inspired by ideas from engineering. MOST includes the RCT for evaluation, but adds other steps before the RCT, including one or more optimization trials. Optimization trials are experiments that gather the empirical information needed to optimize an intervention. They can be conducted using any experimental design, but factorial designs are often the most efficient and economical. Factorial designs have been around for more than one hundred years and remain widely used in many fields, such as agriculture and engineering. However, behavioral scientists are typically not trained in the potential advantages of factorial experiments or in how best to use them as a tool for intervention optimization. In this webinar I will first introduce MOST. I will then focus on factorial experiments, particularly on why they are an important alternative for conducting an optimization trial in MOST.
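
The efficiency argument for factorial designs can be made concrete with a small simulation. The sketch below (not from the talk; component names, effect sizes, and sample sizes are all hypothetical) generates a 2x2x2 factorial trial in Python and estimates each component's main effect using every participant:

    # Sketch: efficiency of a 2x2x2 factorial optimization trial.
    # All component names and effect sizes are hypothetical.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    design = np.array(list(itertools.product([0, 1], repeat=3)))  # 8 cells
    n_per_cell = 25
    true_effects = np.array([0.5, 0.3, 0.0])  # hypothetical main effects

    rows, y = [], []
    for cell in design:
        rows.append(np.tile(cell, (n_per_cell, 1)))
        y.append(cell @ true_effects + rng.normal(0, 1, n_per_cell))
    X, y = np.vstack(rows), np.concatenate(y)

    # Each main effect uses ALL 200 participants: mean outcome with the
    # component switched on minus mean outcome with it switched off.
    for j, name in enumerate(["component_A", "component_B", "component_C"]):
        est = y[X[:, j] == 1].mean() - y[X[:, j] == 0].mean()
        print(f"{name}: estimated main effect = {est:+.2f}")

Three separate two-arm trials splitting the same 200 participants would devote only about 67 participants to each comparison; the factorial design reuses the full sample for every component.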

Jan 9, 12:00 PM

Susan Murphy: Assessing time-varying causal interactions and treatment effects with applications to mobile health

Susan Murphy, Ph.D.
University of Michigan

ABSTRACT:
Mobile devices along with wearable sensors facilitate our ability to deliver supportive treatments anytime and anywhere. Indeed, mobile interventions are being developed and employed across a variety of health fields, including supporting HIV medication adherence, encouraging physical activity and healthier eating, and supporting recovery from addiction. A critical question in the optimization of mobile health interventions is: "When, and in which contexts, is it most useful to deliver treatments to the user?" This question concerns time-varying dynamic moderation by the context (location, stress, time of day, mood, ambient noise, etc.) of the effectiveness of the treatments on user behavior. In this talk we discuss the micro-randomized trial design and associated data analyses for use in assessing moderation. We illustrate this approach with the micro-randomized trial of HeartSteps, a physical activity mobile intervention.
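
As a rough illustration of the moderation question a micro-randomized trial is designed to answer, the sketch below simulates repeated within-person randomizations and fits a treatment-by-context regression with cluster-robust standard errors. This is a simplified stand-in for the estimators used in the MRT literature, and all data and effect sizes are hypothetical:

    # Sketch of a micro-randomized trial (MRT) moderation analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_users, n_points = 50, 100  # decision points per user

    context = rng.integers(0, 2, size=(n_users, n_points))  # e.g., 1 = stressed
    treat = rng.binomial(1, 0.5, size=(n_users, n_points))  # micro-randomization
    # Hypothetical proximal outcome: treatment helps only when context == 0.
    y = 0.4 * treat * (1 - context) + rng.normal(0, 1, size=(n_users, n_points))

    X = np.column_stack([
        np.ones(n_users * n_points),
        context.ravel(),
        treat.ravel(),
        (treat * context).ravel(),  # moderation of the treatment effect
    ])
    groups = np.repeat(np.arange(n_users), n_points)
    fit = sm.OLS(y.ravel(), X).fit(cov_type="cluster", cov_kwds={"groups": groups})
    print(fit.params)  # [intercept, context, treatment, treatment x context]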

May 23, 12:00 PM

Chen-Pin Wang: Assessing causal inference and disparity in the latent variable prediction framework

Chen-Pin Wang, Ph.D.
University of Texas-San Antonio

ABSTRACT:
Latent Growth Mixture Modeling (LGMM) is a useful statistical tool for characterizing the heterogeneity of the longitudinal development of a prognostic variable using so-called latent classes. Recently, an advanced statistical learning methodology (Jo, 2016) was developed to validate the scientific utility of the latent classes for predicting a target outcome of interest. This presentation focuses on deriving causal inference and health disparity in this prediction-model framework. The proposed method involves LGMM analysis of the prognostic variable, validating the prediction of the latent classes for the distal (future) outcome of interest, and then incorporating inverse propensity score weighting to derive the causal relationship between the prognostic classes and the distal outcome, along with the associated health disparity. I will demonstrate the proposed method using a longitudinal epidemiologic study of patients with type 2 diabetes that aimed at assessing the prediction of glycemic control for cardiovascular disease-related hospitalization and the racial disparity in this relationship.
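
A minimal sketch of this three-step pipeline, using a plain Gaussian mixture as a stand-in for LGMM and entirely simulated data (the class structure, confounder, and outcomes are hypothetical):

    # Sketch: trajectory classes, propensity weighting, distal-outcome contrast.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    n = 500
    confounder = rng.normal(size=n)  # e.g., baseline severity

    # Two hypothetical trajectory types whose slopes depend on the confounder.
    slope = 0.8 * (confounder > 0) + rng.normal(0, 0.2, n)
    traj = np.outer(slope, np.arange(5)) + rng.normal(0, 0.5, (n, 5))

    # Step 1: recover latent trajectory classes (GMM as a simple LGMM stand-in).
    cls = GaussianMixture(n_components=2, random_state=0).fit_predict(traj)

    # Step 2: propensity of class membership given the confounder.
    model = LogisticRegression().fit(confounder[:, None], cls)
    ps = model.predict_proba(confounder[:, None])[:, 1]

    # Step 3: inverse-propensity-weighted class contrast on the distal outcome.
    distal = 1.0 * cls + 0.5 * confounder + rng.normal(0, 1, n)
    w = np.where(cls == 1, 1 / ps, 1 / (1 - ps))
    effect = (np.average(distal[cls == 1], weights=w[cls == 1])
              - np.average(distal[cls == 0], weights=w[cls == 0]))
    print(f"IPW class contrast on the distal outcome: {effect:.2f}")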

May 16, 12:00 PM

Booil Jo: Statistical learning with latent prediction targets

Booil Jo, Ph.D.
Stanford University

ABSTRACT:
In predictive modeling, often aided by machine learning methods, much effort is concentrated on identifying good predictors. However, the same level of rigor is often absent in improving the outcome side of models. In this study, we focus on this rather neglected aspect of model development and demonstrate the use of longitudinal information as a way of improving the outcome side of predictive models. This involves optimally characterizing individuals’ outcome status, classifying them, and validating the formulated prediction targets. None of these tasks is straightforward, which may explain why longitudinal prediction targets are not commonly used in practice despite their compelling benefits. As a practical way of improving this situation, we explore a semi-supervised learning approach based on growth mixture modeling (GMM), a method of identifying latent subpopulations that manifest heterogeneous outcome trajectories. In the proposed approach, we utilize the conventional use of GMM to generate potential candidate models based on empirical model fitting, which can be viewed as unsupervised learning. We then evaluate candidate GMM models on the basis of a direct measure of success: how well the trajectory types are predicted by clinically and demographically relevant baseline features, which can be viewed as supervised learning. We examine the proposed approach focusing on a particular utility of latent trajectory classes: as outcomes that can be used as valid prediction targets in clinical prognostic models. Our approach is illustrated using data from the Longitudinal Assessment of Manic Symptoms study.
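
The model-selection logic can be sketched in a few lines: fit candidate mixture solutions without using labels (unsupervised), then score each candidate by how well baseline features predict its classes (supervised). The toy version below uses simulated data and is not the authors' implementation:

    # Sketch: semi-supervised selection among candidate GMM solutions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.mixture import GaussianMixture
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 600
    baseline = rng.normal(size=(n, 3))  # clinical/demographic features
    group = (baseline[:, 0] + rng.normal(0, 0.7, n) > 0).astype(int)
    traj = np.outer(group, np.linspace(0, 2, 6)) + rng.normal(0, 0.6, (n, 6))

    for k in (2, 3, 4):  # candidate trajectory-class solutions
        labels = GaussianMixture(n_components=k, random_state=0).fit_predict(traj)
        # Supervised criterion: cross-validated accuracy of predicting the
        # candidate classes from baseline features alone.
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              baseline, labels, cv=5).mean()
        print(f"k={k}: baseline predictability = {acc:.2f}")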

Suggested readings:

Jo B, Findling RL, Wang C-P, Hastie TJ & the LAMS group (2017). Targeted use of growth mixture modeling: A learning perspective. Statistics in Medicine, 36, 671-686.

Jo B, Findling RL, Hastie TJ, Youngstrom EA, Wang C-P & the LAMS group (in press). Construction of longitudinal prediction targets using semi-supervised learning. Statistical Methods in Medical Research.

May 2, 12:00 PM

Teppei Yamamoto: Introduction to causal mediation analysis using R

Teppei Yamamoto, Ph.D.
MIT

ABSTRACT:
Researchers often seek to study not only whether a treatment has a causal effect on an outcome but also how and why such a causal relationship comes about. Causal mediation analysis is a popular method to analyze causal mechanisms using experimental or observational data. In this webinar, we provide an overview of the theoretical framework underpinning the mediation methods and discuss assumptions that play a key role for valid inference about causal mechanisms. We also cover practical issues in using the framework for social, behavioral, and medical science applications. A particular focus will be on the R package mediation and how to use it in various applied settings.
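
Although the webinar demonstrates the R package mediation itself, the two quantities it reports, the average causal mediation effect (ACME) and the average direct effect (ADE), can be previewed with a simple product-of-coefficients calculation. The Python snippet below uses hypothetical simulated data; the package generalizes this idea with simulation-based inference and sensitivity analysis:

    # Sketch: mediator and outcome models on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 1000
    t = rng.binomial(1, 0.5, n)                  # randomized treatment
    m = 0.6 * t + rng.normal(0, 1, n)            # mediator model
    y = 0.5 * m + 0.2 * t + rng.normal(0, 1, n)  # outcome model

    med = sm.OLS(m, sm.add_constant(t)).fit()
    out = sm.OLS(y, sm.add_constant(np.column_stack([t, m]))).fit()

    acme = med.params[1] * out.params[2]  # effect of t on m, times m on y
    ade = out.params[1]                   # direct effect of t holding m fixed
    print(f"ACME ~ {acme:.2f}, ADE ~ {ade:.2f}")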

Apr 25, 12:00 PM

Naihua Duan: Personalized biostatistics, small data, and N-of-1 trials

Naihua Duan, Ph.D.
Columbia University

ABSTRACT:
As personalized medicine emerges as a promising paradigm for improving clinical decision-making by customizing clinical decisions to the unique needs and preferences of each specific patient, there is a growing need for biostatistical methods developed and deployed to serve this paradigm. As an example, the ongoing NINR-funded PREEMPT Study, http://www.ucdmc.ucdavis.edu/chpr/preempt/, has developed a smartphone app that allows chronic pain patients and clinicians to run personalized experiments (n-of-1 trials) comparing two different pain treatments, helping patients and their clinicians choose the most appropriate pain treatment for each individual patient. Such personalized biostatistical toolkits can be used by frontline clinicians and their patients to address the specific clinical questions confronting each individual patient: to specify and execute the personalized trial protocol, to facilitate the collection of outcome and process data, to analyze and interpret the data acquired, and to produce reports that help end users with evidence-based decision making. This paradigm exemplifies the potential for “Small Data” (as opposed to “Big Data”) to be deployed in clinical applications with the benefit of today’s patients as the primary aim of the evidence-based investigation. Importantly, personalized biostatistics based on Small Data provides strong incentives for end users to participate in evidence-based investigations, as they stand to benefit directly from them, in contrast to traditional clinical research, which aims primarily to benefit future patients. This emerging paradigm has promising potential to play an important role in the future of health and related domains, supplementing the traditional top-down model for research with a bottom-up model for quality improvement.
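
The statistical machinery for a single patient's comparison can be very simple. The sketch below pairs repeated treatment periods within one hypothetical chronic pain patient; a real SPT protocol would also address carryover, washout, and time trends:

    # Sketch: paired analysis of one patient's n-of-1 crossover trial.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n_blocks = 6  # pairs of treatment periods (e.g., an ABAB... schedule)
    a = 4.0 + rng.normal(0, 1, n_blocks)  # pain score under treatment A
    b = 3.2 + rng.normal(0, 1, n_blocks)  # pain score under treatment B

    res = stats.ttest_rel(a, b)  # within-patient paired comparison
    print(f"mean A-B difference = {(a - b).mean():.2f}, p = {res.pvalue:.3f}")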

Apr 18, 12:00 PM

Min Lu: Estimating individual treatment effect in observational data using random forest methods

Min Lu, Doctoral student
University of Miami

ABSTRACT:
Estimation of individual treatment effects in observational data is complicated by the challenges of confounding and selection bias. A useful inferential framework to address this is the counterfactual model, which takes the hypothetical stance of asking what would have happened if an individual had received both treatments. Making use of random forests (RF) within the counterfactual framework, we estimate individual treatment effects by directly modeling the response. We find that accurate estimation of individual treatment effects is possible even in complex heterogeneous settings, but that the type of RF approach plays an important role in accuracy. Methods designed to be adaptive to confounding, when used in parallel with out-of-sample estimation, do best. One method found to be especially promising is counterfactual synthetic forests. We illustrate this new methodology by applying it to a large comparative effectiveness trial, Project Aware, in order to explore the role drug use plays in sexual risk. The analysis reveals important connections between risky behavior, drug usage, and sexual risk.
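
The counterfactual stance, predicting each individual's outcome under both treatments, can be illustrated with a generic two-forest ("T-learner") sketch on simulated data. This is not the counterfactual synthetic forests method presented in the talk, only the underlying idea:

    # Sketch: individual treatment effects from two random forests.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(6)
    n = 2000
    X = rng.normal(size=(n, 5))
    p = 1 / (1 + np.exp(-X[:, 0]))  # confounded treatment assignment
    t = rng.binomial(1, p)
    y = X[:, 0] + t * (0.5 + X[:, 1]) + rng.normal(0, 1, n)  # heterogeneous effect

    rf1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
    rf0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])

    # Ask the counterfactual question: predicted outcome under BOTH treatments.
    ite_hat = rf1.predict(X) - rf0.predict(X)
    true_ite = 0.5 + X[:, 1]
    print(f"corr(estimated ITE, true ITE) = {np.corrcoef(ite_hat, true_ite)[0, 1]:.2f}")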

Apr 4, 12:00 PM

Tyler VanderWeele: Conceptual foundations for selecting optimal subgroups for treatment

Tyler VanderWeele, Ph.D.
Harvard University

Apr 4, 12:00 PM

Xiao-Li Meng: Building a statistical theory for individualized treatments: A multi-resolution perspective

Xiao-Li Meng, Ph.D.
Harvard University

ABSTRACT:
What data are relevant when making a treatment decision for me? What replications are relevant for quantifying the uncertainty of this personalized decision? What does “relevant” even mean here? The multi-resolution (MR) perspective from the wavelets literature provides a convenient theoretical framework for contemplating such questions. Within the MR framework, signal and noise are two sides of the same coin: variation. They differ only in the resolution of that variation; a threshold, the primary resolution, divides them. We use observed variations at or below the primary resolution (signal) to estimate a model and those above the primary resolution (noise) to estimate our uncertainty. The higher the primary resolution, the more relevant our model is for predicting a personalized response. The search for the appropriate primary resolution is a quest for an age-old bias-variance trade-off: estimating more precisely a less relevant treatment decision versus estimating less precisely a more relevant one. However, the MR setup crystallizes how the trade-off depends on three objects: (i) the estimand, which is independent of any statistical model; (ii) a model, which links the estimand to the data; and (iii) the estimator of the model. This trivial yet often overlooked distinction between estimand, model, and estimator supplies surprising new ways to improve mean squared error. The MR framework also permits a conceptual journey into the counterfactual world as the resolution level approaches infinity, where “me” becomes unique and hence can only be given a single treatment, necessitating the potential outcomes setup. A real-life Simpson’s paradox involving two kidney stone treatments will be used to illustrate these points and engage the audience.
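
The closing example is presumably the classic kidney stone dataset of Charig et al. (1986), which rewards a look at the numbers: treatment A has the higher success rate within each stone-size stratum, yet the lower rate overall, because A was used disproportionately on the harder, large-stone cases. The snippet below reproduces the commonly quoted figures:

    # The kidney-stone Simpson's paradox (figures as commonly quoted from
    # Charig et al., 1986).
    success = {                      # (successes, attempts)
        ("A", "small"): (81, 87),    # A = open surgery
        ("A", "large"): (192, 263),
        ("B", "small"): (234, 270),  # B = percutaneous nephrolithotomy
        ("B", "large"): (55, 80),
    }

    for treat in ("A", "B"):
        s_small, n_small = success[(treat, "small")]
        s_large, n_large = success[(treat, "large")]
        overall = (s_small + s_large) / (n_small + n_large)
        print(f"{treat}: small {s_small / n_small:.0%}, "
              f"large {s_large / n_large:.0%}, overall {overall:.0%}")
    # A wins in both strata (93% vs 87%, 73% vs 69%) but loses overall
    # (78% vs 83%): stone size confounds the marginal comparison.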

Jan 12, 12:00 PM

Alyson Zalta: Conducting psychopathology prevention research in the RDoC era

Alyson Zalta, Ph.D.
Rush University Medical Center

ABSTRACT:
The Research Domain Criteria (RDoC) initiative promoted by the National Institute of Mental Health emphasizes a dimensional approach to psychopathology that is agnostic to DSM diagnosis. The RDoC project offers exciting possibilities for advancing research aimed at preventing psychopathology. However, prevention has historically been defined using diagnostic status, requiring the field to redefine what constitutes prevention using an RDoC approach. This presentation will describe the potential advantages of RDoC for prevention research, outline new criteria for prevention research in the RDoC context, and provide guidance for implementing these criteria. 

Nov 12, 12:00 PM

Naihua Duan: Single-patient (N-of-1) trials: A pragmatic clinical decision methodology for patient-centered comparative effectiveness research

Naihua Duan, Ph.D.
Columbia University

ABSTRACT:
Single-patient trials (SPTs, a.k.a. n-of-1 trials) are multiple-period crossover trials conducted within individual patients to evaluate the comparative effectiveness of two or more treatments for each specific patient (Duan, Kravitz, & Schmid, 2013, Journal of Clinical Epidemiology). SPTs have the potential to serve a rather unique dual role: to inform individual treatment decisions in clinical care and to support pragmatic patient-centered comparative effectiveness research. I will give a brief overview of the core methodology for SPTs and discuss key implementation issues, including:

  • Indications and contraindications for clinical conditions and treatments suitable for evaluation with the SPT;
  • Interpretation of the SPT as human subjects research vs. quality improvement;
  • Pros and cons for blinding and washout as design features for SPTs;
  • Statistical methods for combining individual SPT data with aggregate data from similar SPTs (borrowing strength; see the sketch after this list) to enhance treatment decisions for the specific patient;
  • Information technology infrastructure for the implementation of SPTs; and
  • Methodological developments that will enhance the utility of SPTs.
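
As a minimal sketch of the "borrowing strength" item above, the snippet below combines one patient's noisy SPT estimate with the aggregate effect from similar trials via a precision-weighted (empirical-Bayes style) average; all numbers are hypothetical:

    # Sketch: shrink an individual n-of-1 estimate toward the aggregate.
    my_effect, my_var = 1.2, 0.64    # this patient's A-vs-B estimate, variance
    pop_effect, pop_var = 0.4, 0.09  # aggregate effect, between-patient variance

    # Precision-weighted posterior mean for this patient.
    w = (1 / my_var) / (1 / my_var + 1 / pop_var)
    shrunk = w * my_effect + (1 - w) * pop_effect
    print(f"shrunk estimate for this patient: {shrunk:.2f}")  # ~0.50

The noisier the individual trial relative to the between-patient spread, the more the decision leans on the aggregate evidence.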