Symposia
Dissemination & Implementation Science
Simone Schriger, M.A. (she/her/hers)
Doctoral Candidate
University of Pennsylvania
Los Angeles, California
Steven C. Marcus, PhD (he/him/his)
Research Associate Professor
School of Social Policy and Practice, University of Pennsylvania
Philadelphia, Pennsylvania
Rinad Beidas, Ph.D. (she/her/hers)
Ralph Seal Paffenbarger Professor and Chair
Northwestern University Feinberg School of Medicine
Chicago, Illinois
Emily Becker-Haimes, Ph.D.
Assistant Professor
University of Pennsylvania
Philadelphia, Pennsylvania
The effectiveness of evidence-based practices (EBPs) for psychiatric disorders is often attenuated when interventions are delivered in community mental health settings. One reason for this is lack of clinician fidelity to the treatment model. Challenges with fidelity can stem from insufficient training, burnout, clinician discomfort with delivering the EBP as designed, and unintentional “drift” from the EBP over time in the absence of booster trainings. The gold-standard method of measuring fidelity is direct observation: client sessions are video- or audio-recorded and rated on how well they align with the core intervention components of the EBP. Unfortunately, direct observation can be time-consuming, logistically complicated, and expensive; more pragmatic methods of measuring fidelity are needed. We randomized clinicians from 27 community mental health agencies serving youth (n = 126; M age = 38 years, SD = 13; 76% female) 1:1:1 to one of three fidelity conditions: self-report (n = 41), chart-stimulated recall (semi-structured interviews with the chart available; n = 42), or behavioral rehearsal (simulated role-plays; n = 43). All participating clinicians completed up to three fidelity assessment sessions with different clients recruited from their caseloads (n = 288; M age = 13 years, SD = 4; 42% female); we calculated a direct-observation score from audio recordings of all treatment sessions. By comparing these direct-observation scores with the scores from the assigned fidelity condition, we determined whether scores from direct observation and the alternate conditions aligned for 12 individual CBT components, and whether predictors of high direct-observation fidelity scores aligned with predictors of high self-report fidelity scores for four candidate predictor variables. We found that for certain CBT components, all three fidelity conditions could be suitable alternatives to direct observation.
Alternate fidelity scores aligned better with direct-observation scores for behavioral strategies than for cognitive strategies. With regard to predictors of high fidelity, we found no alignment between predictors of self-reported and directly observed fidelity, suggesting that self-report has significant limitations beyond those documented in the literature. Findings have implications for the possible utility, or lack thereof, of these “lower-intensity” pragmatic fidelity measurement methods in community mental health settings.