ADHD - Child
Shauntal Van Dreel, MSW
School Mental Health Liaison
Seattle Children’s Hospital
Seattle, Washington
Mercedes Ortiz Rodriguez, B.A.
Clinical Research Coordinator
Seattle Children’s Hospital
Seattle, Washington
Stephanie K. Brewer, Ph.D.
Research Scientist
University of Washington School of Medicine
Seattle, Washington
Pablo Martin, LCSW
Program Manager
Florida International University
Miami, Florida
Paulo Graziano, Ph.D.
Professor
Florida International University
Miami, Florida
Margaret Sibley, Ph.D. (she/her/hers)
Professor of Psychiatry & Behavioral Sciences
University of Washington School of Medicine
Seattle, Washington
Recent evidence-based research on school mental health highlights the need for multi-dimensional assessment of implementation integrity (Sutherland & McLeod, 2021). Building on an existing adherence measure for adolescent ADHD treatment (Sibley et al., 2016), we systematically adapted a treatment quality measure for STRIPES, a school-based behavioral treatment for adolescents with ADHD symptoms, expanding the existing adherence tool with 5 quality indices (Positivity, Proof, Participation, Plan, and Problem Solve).
Three aims guided measure development: 1) compare the psychometric properties of an extensive manualized coding system ("long form") versus an abbreviated Likert-style "short form"; 2) identify psychometric differences when the measure is administered by trained professionals versus lay observers; and 3) evaluate whether inter-rater reliability and coder confidence are enhanced when coding from recordings versus live. Data were collected over 9 months at 2 sites. Inter-rater reliability was analyzed using intraclass correlation coefficients (ICCs), examined separately for the 5 quality subscales and overall extensiveness. Confidence, ease of use, and barriers to fidelity coding were measured in each phase.
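The abstract does not specify which ICC form was used; for a design in which the same set of coders rates each session, a two-way random-effects, absolute-agreement, single-rater ICC(2,1) (Shrout & Fleiss, 1979) is a common choice. A minimal sketch, assuming a hypothetical sessions-by-coders rating matrix (not data from this study):

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an n_targets x k_raters matrix (e.g., sessions x coders).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-session means
    col_means = ratings.mean(axis=0)  # per-coder means

    # Two-way ANOVA mean squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    msr = ss_rows / (n - 1)                              # between-sessions
    msc = ss_cols / (k - 1)                              # between-coders
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 5 sessions rated by 4 coders on a 1-5 Likert scale
ratings = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 5, 4],
    [3, 3, 4, 3],
    [1, 2, 1, 1],
])
print(round(icc_2_1(ratings), 3))  # → 0.889
```

Under the Cicchetti (1994) guidelines referenced below, an ICC of .889 would fall in the 'excellent' range.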
Phase 1. 15 recordings from an urban U.S. high school were coded by 4 randomly assigned professionals, counterbalancing long- versus short-form administration. Applying the Cicchetti (1994) guidelines for ICC, we found 'good' or 'excellent' inter-rater reliability for all 6 long-form subscales and for 5 of 6 short-form subscales. Correlations between long- and short-form scales exceeded 'fair'. Coder confidence was rated on a 1-5 scale from 'Not at All Confident' to 'Very Confident' and did not differ significantly between forms. Thus, the short form was retained, with next steps to improve the underperforming subscale (Participation).
Phase 2. New coders (2 undergraduate students, 2 trained professionals) recoded the Phase 1 recordings using the short form with improved wording for the Participation subscale. Average inter-rater reliability was 'excellent' for undergraduates (.756) and 'good' for trained professionals (.612). Thus, undergraduates were deemed capable of reliably administering the measure.
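The qualitative labels applied in Phases 1 and 2 can be encoded directly. A small helper, assuming the conventional Cicchetti (1994) cutoffs (below .40 poor, .40-.59 fair, .60-.74 good, .75 and above excellent):

```python
def cicchetti_label(icc: float) -> str:
    """Map an ICC to the Cicchetti (1994) qualitative benchmark."""
    if icc < 0.40:
        return "poor"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"

# Phase 2 averages reported above
print(cicchetti_label(0.756))  # excellent (undergraduate coders)
print(cicchetti_label(0.612))  # good (trained professionals)
```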
Phase 3. New undergraduate observers (N=4) were randomly assigned to code 12 sessions at a suburban U.S. high school either live or from a recording. For live sessions, we found 'excellent' and 'good' inter-rater reliability across all 6 subscales, overcoming prior issues with the Participation subscale. Recorded sessions ranged from 'fair' to 'excellent' inter-rater reliability, with continued underperformance of the Participation subscale. Live sessions also yielded higher average coder confidence (4.36, SD = .66) than recorded sessions (3.38, SD = 1.06), suggesting that live coding optimized the measure's psychometric properties.
Despite the limitations of small sample sizes and the absence of a direct investigation of scale validity, findings demonstrate the feasibility of developing a short, Likert-style quality fidelity tool that lay coders can administer live in school-based interventions. Other treatment studies may replicate our process and contribute to a growing literature on implementation fidelity and outcomes in school mental health interventions.