Symposia
Improved Use of Research Evidence
Eleanor Wu, M.A. (she/her/hers)
Graduate Student
University of South Carolina
Columbia, South Carolina
Kimberly D. Becker, Ph.D. (she/her/hers)
Associate Professor
University of South Carolina
Chapin, South Carolina
Ben Isenberg, M.Ed.
Graduate Student
UCLA
Los Angeles, California
In the context of dynamic and responsive care, assessment of service quality should extend beyond treatment to include clinical reasoning. We developed a modular system (Action Cycle and Use of Evidence Behavioral Observation Coding System; ACEBOCS) with 13 core clinical reasoning codes (e.g., “considers,” “selects”) that are combined with problem and practice codes. Our application to treatment engagement used 5 REACH codes for engagement problems, 16 practice codes, and 42 step (sub-practice) codes, documented in the Use of Research Evidence Content Application – Engagement (URECA-E). This design allows a compact set of codes to describe roughly 250 unique specifications of clinical activities and extends readily to a variety of applications (see Gruber, 1995). The system can code multiple event types (e.g., supervision, treatment, planning meetings) and any available documentation (e.g., transcribed recordings, chart notes, treatment plans).
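As a rough illustration of this combinatorial design, the sketch below crosses a set of reasoning codes with problem and practice codes; the labels and the simple pairing rule are hypothetical, and the actual ACEBOCS/URECA-E pairing constraints are not reproduced here.

```python
# Illustrative sketch only: hypothetical code labels, assuming the modular
# structure described above (13 reasoning codes crossed with 5 problem and
# 16 practice codes). Actual ACEBOCS/URECA-E labels and rules may differ.
from itertools import product

reasoning_codes = [f"reasoning_{i}" for i in range(1, 14)]  # 13 core codes, e.g., "considers", "selects"
problem_codes = [f"problem_{i}" for i in range(1, 6)]       # 5 REACH engagement problem codes
practice_codes = [f"practice_{i}" for i in range(1, 17)]    # 16 engagement practice codes

# One plausible pairing rule: each reasoning code can target a problem or a practice.
specifications = list(product(reasoning_codes, problem_codes + practice_codes))

print(len(specifications))  # 13 * (5 + 16) = 273; excluding invalid pairings
                            # could yield a count near the ~250 reported
```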
Study 1 examined the interrater reliability of the ACEBOCS codes, rating (1) presence/absence and (2) extensiveness across 84 supervision events. Results showed at least moderate reliability for presence/absence (M kappa = .65) and extensiveness ratings (M ICC = .71), supporting our ability to detect clinical reasoning and implementation.
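For reference, Cohen's kappa indexes agreement corrected for chance, and the ICC indexes agreement on graded ratings; the specific ICC variant used in these analyses is not restated here, so the one-way random-effects form is shown as one common choice:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
\mathrm{ICC}(1) = \frac{MS_B - MS_W}{MS_B + (k - 1)\, MS_W}
```

where \(p_o\) and \(p_e\) are observed and chance agreement, \(MS_B\) and \(MS_W\) are between- and within-subject mean squares, and \(k\) is the number of raters.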
Study 2 examined the interrater reliability of the URECA-E problem, practice, and step codes in 80 treatment events and 84 supervision events. Results showed at least moderate reliability for presence/absence of steps, practices, and problems (M kappas = .66, .71, and .72, respectively) and for extensiveness ratings of steps and practices (M ICCs = .69 and .57, respectively), supporting our ability to detect engagement problems and procedures.
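A minimal computational sketch of the presence/absence reliability index follows; the rating vectors are hypothetical, not study data.

```python
# Minimal sketch of the interrater agreement computation, assuming binary
# presence/absence ratings from two coders across a set of events.
def cohen_kappa(a, b):
    """Cohen's kappa for two binary rating vectors of equal length."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_a1 = sum(a) / n                                # coder 1 base rate of "present"
    p_b1 = sum(b) / n                                # coder 2 base rate of "present"
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)      # chance agreement
    return (p_o - p_e) / (1 - p_e)

coder1 = [1, 0, 1, 1, 0, 1, 0, 0]   # presence/absence of one code across 8 events
coder2 = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(cohen_kappa(coder1, coder2), 2))  # 0.5
```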
In Study 3, we applied URECA-E codes to chart notes of 50 treatment sessions and compared results to the coding of their corresponding transcribed recordings. Most step (81%) and practice (71%) codes achieved at least moderate interrater reliability (ICCs > .50). Correlations between chart notes and transcripts revealed greater concordance for practice codes than for step codes. Coding of clinical chart notes may be a reliable method of assessing what occurs in therapy, although computational models might be helpful to infer practices from the steps reported in chart notes.
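One way to picture the proposed inference of practices from reported steps is a simple step-to-practice lookup, sketched below; the code names and mapping are hypothetical, and any actual computational model over the URECA-E's 42 steps and 16 practices would be more elaborate.

```python
# Hedged sketch: inferring practice codes from the step (sub-practice) codes
# reported in a chart note, assuming each step nests under one practice.
STEP_TO_PRACTICE = {
    "normalizes_concerns": "psychoeducation",
    "explains_treatment_model": "psychoeducation",
    "elicits_barriers": "barriers_assessment",
    "schedules_reminder": "appointment_reminders",
}

def infer_practices(steps_in_note):
    """Return the set of practice codes implied by the steps in a chart note."""
    return {STEP_TO_PRACTICE[s] for s in steps_in_note if s in STEP_TO_PRACTICE}

note_steps = ["normalizes_concerns", "elicits_barriers"]
print(infer_practices(note_steps))  # {'psychoeducation', 'barriers_assessment'}
```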
Collectively, these findings demonstrate that measuring a variety of activities and processes within and across multiple event and documentation types has the potential to put a lens not only on whether service quality is good, but also on why it is good.