Curated Milestones Evaluation Exhibit

An Iterative Approach to Milestone Assessments Across Settings

Program Size: More than 101 residents
Academic Setting: University-Based
Clinical Setting: All

Overview

We developed new web-based evaluations that are consistent with ACGME milestone reporting. These evaluations are more relevant and intuitive for both trainees and faculty: they eliminate the 'edu-speak' that can confuse faculty, and they shorten the time it takes an attending to assess a trainee. They also provide easily interpretable information to the Clinical Competency Committee (CCC), facilitating efficient reporting to the ACGME and feedback to the trainee.

Our evaluations consist of three types of questions: a custom sub-competency 'grid' question, a free-text response, and a binary 'red flag' question. The grid questions contain milestone behaviors based on the ACGME language but modified for clarity and/or relevance to what is being evaluated during that rotation; each question was designed to inform multiple sub-competencies. The progression runs from critical deficiency, through developing and ready for unsupervised practice, to aspirational. The free-text responses allow attendings to document each resident's strengths and opportunities for improvement. Finally, each assessment includes a binary question identifying concerning behaviors. Indicating that any of the behaviors detailed in the question was witnessed triggers a notification to program leadership for timely investigation and, if needed, referral to the CCC. This should allow the program to detect critical deficiencies and red flags earlier and to initiate remediation when needed.
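To make this structure concrete, here is a minimal sketch of how an evaluation with these three question types and the red-flag routing could be modeled. All names are hypothetical: the exhibit describes the tool's design, not an implementation, and the notification function stands in for whatever mechanism a program actually uses.

```python
from dataclasses import dataclass
from typing import List

# Named anchors of the developmental progression described in the exhibit.
LEVEL_ANCHORS = [
    "Critical deficiency",
    "Developing",
    "Ready for unsupervised practice",
    "Aspirational",
]

@dataclass
class GridQuestion:
    stub: str                   # the statement the attending rates
    behaviors: List[str]        # milestone behaviors along the progression
    subcompetencies: List[str]  # each question informs several sub-competencies

@dataclass
class Evaluation:
    grid_questions: List[GridQuestion]
    strengths: str = ""         # free text: what the resident does well
    improvements: str = ""      # free text: opportunities for improvement
    red_flag: bool = False      # binary question on concerning behaviors

def notify_program_leadership(evaluation: Evaluation) -> None:
    """Placeholder for the notification step (e.g., an automated email)."""
    print("Red flag raised: program leadership notified for timely review.")

def submit(evaluation: Evaluation) -> None:
    """Any red-flag response is routed to program leadership immediately."""
    if evaluation.red_flag:
        notify_program_leadership(evaluation)

# Example: a red-flagged evaluation triggers the notification on submission.
submit(Evaluation(grid_questions=[], red_flag=True))
```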

Development

We evaluated our CCC processes by soliciting faculty feedback and, based on that feedback, redesigned all faculty evaluations of trainees. We first explored different evaluation formats, testing the reporting output of each style; this was instrumental in reaching a consensus on which format to use. Next, we created questions, linking each question to sub-competencies, which allowed us to perform a gap analysis and identify where new questions or evaluations were needed. The single most important step in developing multiple evaluations was to focus on creating the inpatient assessment first and then use it as a template for subsequent evaluations.

Lessons Learned

We formed multiple teams, each consisting of the program director (PD), associate program directors (APDs), core faculty, program coordinators, and chief residents. Each team developed one evaluation and also served as a reviewer for another team. In addition, we solicited feedback from many attending physicians 'in the trenches'.

Lessons learned in this project were:

  1. The process took much longer than we anticipated (the entire system was redesigned in five months), and
  2. It would have been less time-consuming to focus on one assessment first and use it as a template for subsequent assessments, rather than having multiple teams work simultaneously on four evaluations.

Faculty Development and Training

At our institution we took a three-pronged approach to faculty development. First, APDs and staff attended division meetings to present the new tool, reiterating the progressive nature of trainee development and the importance of direct observation in evaluating trainee behaviors. Second, we coupled these sessions with an email detailing the rationale for redesigning the evaluations. Finally, we provided a video of less than five minutes that reiterated the core concepts and demonstrated the functions of the tool.

How Used to Inform Decisions about a Learner's Milestones

For the faculty member, the assessment presents a statement, or stub, followed by behaviors spread across five developmental levels (linked to a nine-point scale). By selecting a particular rating, the faculty member indicates the trainee's progression along the developmental levels for the observed behavior. These ratings in turn feed reports on each trainee by sub-competency. The reports provide scores that correspond to progression along the developmental spectrum and are easily compared to peers and to prior years of training. The CCC uses all of these data, in conjunction with the free-text comments on strengths and weaknesses, to inform advancement decisions.
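The sketch below illustrates the mechanics under one common ACGME convention: the nine selectable points correspond to levels 1 through 5 in half-level steps. The exhibit links the nine-point scale to five developmental levels but does not spell out the mapping, so both the mapping and the aggregation (a simple mean per sub-competency) are illustrative assumptions rather than the committee's actual method.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

def point_to_level(point: int) -> float:
    """Map a nine-point rating (1-9) to a developmental level (1.0-5.0).

    Assumes the common half-level convention: points 1, 2, ..., 9
    correspond to levels 1.0, 1.5, ..., 5.0.
    """
    if not 1 <= point <= 9:
        raise ValueError("rating must be between 1 and 9")
    return 1.0 + (point - 1) * 0.5

def subcompetency_report(
    ratings: List[Tuple[str, int]],  # (sub-competency, nine-point rating)
) -> Dict[str, float]:
    """Aggregate grid ratings into a mean level per sub-competency."""
    buckets: Dict[str, List[float]] = defaultdict(list)
    for subcomp, point in ratings:
        buckets[subcomp].append(point_to_level(point))
    return {s: round(mean(levels), 2) for s, levels in buckets.items()}

# Example: two questions informing PC-1 and one informing MK-1
# (sub-competency labels here are illustrative).
print(subcompetency_report([("PC-1", 6), ("PC-1", 7), ("MK-1", 5)]))
# {'PC-1': 3.75, 'MK-1': 3.0}
```

Because each grid question feeds several sub-competencies, a single rating can contribute to multiple entries in such a report, which is what lets a short evaluation still cover the full set of reportable milestones.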

For more information, please contact RGFisher@umn.edu.

University of Minnesota Evaluation Improvement Committee

Co-Chairs Nacide Ercan-Fang, MD, and R. Gordon Fisher; L. James Nixon, MD, Alisa Duran, MD, Meghan Rothenberger, MD, Paula Skarda, MD, Briar Duffy, MD, Andrew Olson, MD, Reut Danieli, MD, Erin Wetherbee, MD, Erin Lorence, MD, Kevin Rank, MD, and Jessica Voight, MD; Program Administrator Amy Palmer