
Meeting Information


December 13, 2011


8 PST/9 MST/10 CST/11 EST/16 GMT

Attending: Linda Lewin and Alan Schwartz, Co-Chairs; Carol Carraccio, Mike Dugan, Simon Grant, Kimberly Hoffman, John Jackson, David Melamed, Howard Silverman, Valerie Smothers, Janet Trial, Lori Troy, and Sandra Waters.

Agenda Items

1 Review minutes of last meeting

Alan began the meeting with a review of the minutes. The minutes were accepted as submitted.

2 Discuss revised data analysis

Alan asked Valerie to point out the slides that were new or revised. Valerie replied that slides with a red asterisk in the title are new or revised. On the first slide there is a slight change in wording: instead of statements of awarded responsibility, there is an activities and responsibilities link. Slide 11 shows the responsibilities awarded and the progress the individual is making with regard to EPAs. Carol had commented on the last call that she wanted to see the progression of competence; it may be that we want the same capability for competencies as well. Two milestones are listed on slide eleven. The first, managing patients with a single-system diagnosis, has a yellow status indicator, meaning entrustment has not yet been awarded. The second has a green indicator, showing entrustment has taken place; this mirrors what is in the summary. The Details link is intended to open the dashboard view of assessments for a specific EPA, where the learner can see her score and the mean score of peers. Slide thirteen is a pie chart for a particular date that shows details about peer performance.

Carol commented this was wonderful. Valerie explained that she had shown the slides to Steve Clyman, and Steve suggested having charts for the sub-competencies of each EPA. Valerie requested feedback from the group on this: are the assessments holistic, or should we allow granular assessment based on sub-competencies? Carol commented she would like this to grow from the EPAs rather than from the level of sub-competencies. She noted the beauty of EPAs is that people understand them; sub-competencies aren't that intuitive. She thinks that when you start with EPAs, you would still be including all of the sub-competencies that are critical for making an entrustment decision, all the sub-competencies relevant to that EPA. You can map to the sub-competencies and milestones, then look at the behavior representative of those milestones and competencies and put that into a clinical vignette. That would be one assessment tool, and other tools could then be used to pinpoint where a learner happens to be in relation to each sub-competency. Valerie asked if the group would want the ability to drill into the sub-competencies related to a milestone. Carol said you could; she said milestones could be used differently. An example would be an individual faculty member using an assessment tool, like clinical vignettes that describe behaviors of learners at different performance levels, to match with their residents. The program director would see the comprehensive view of the learner; they may sit down, look at sub-competencies and milestones, and see where the learner is. Linda asked if the standard needed the capacity to do both. Carol mentioned that slides like number eight represent sub-competencies under patient care. Valerie commented that if learners wanted more detail on where they are falling short, they could go over that with their mentor.

Valerie commented it would be nice to hear from other specialties; she agreed to do that via email. She continued with the next change, on slide fourteen. All of the words in blue on the left-hand side, from expert to proficient, would link to a description of those behaviors. The blue Continue link shows that you could see the full set of behaviors and any supporting links, videos, etc. Carol asked if we had to include labels, because there are certain connotations associated with them. She thought people might be more honest if there weren't labels, with no stigma attached. Valerie commented that the way this is presented could vary (i.e., different labels or no labels). Howard commented that there should be one main label, and that was entrustment, or entrustable. Valerie noted that point is going to differ across programs: some programs may want to see a learner at 4 before they entrust, while others may entrust at 3. Valerie had tried to find Tara Kennedy's work showing different levels of supervision, which Linda had recommended, but she wasn't able to find the literature. Carol offered to send supervision milestones to the group with references.

Kim asked if the novice-to-advanced-competence scale was to be replaced by levels of supervision or complemented by them. Valerie answered complemented. Linda mentioned that the level of supervision is not always tied to competence in the mind of the person doing the evaluation; she questioned whether competency and level of supervision are two separate questions. Carol commented she has been struck over the years by student evaluations: evaluators check off average numbers and then give the person honors. They do mediocre work and then get an A. She worries that we need to study that and overlay it in some way: have it connected and see if there is alignment. Valerie commented that maybe we shouldn't overlay it at this time, if further study is needed. Carol shared that it would be good to include it in assessments to see if things hang together well. Linda mentioned you can't overlay it but can use it as an adjunct. Lori noted the problem is that there really is no calibration: anchors are described well, but inconsistent ratings are made. She liked the idea of trying to calibrate raters on the same scale. Overlaying these anchors with a more precise description of what they mean in terms of entrustment would be helpful; more information will yield more reliable results. Valerie agreed to read through the milestones and references on supervision and incorporate that notion into the emerging requirements.

Linda clarified that each program decides who would be eligible to entrust. Carol agreed and said it would be nice to see how that fits together with where learners are in the program (18 months into training, etc.). Alan thought this was more of an application question than a specification question. Would there be a way to link the date and time to matriculation? Since people come and go at different times, he suggested making it transparent: awarded 18 months into training, rather than the date the person entered the program. Kim suggested combining the educational trajectory and the statements of awarded responsibility: take the multi-colored bar graph and impose colored stars. Then you have everything at a glance. Linda said that may work, and it would really be nice data to have for other reasons; it would be nice to know how many months it took to get entrusted. From a research perspective it would be a nice piece of data to have. Valerie asked the group if there was any other piece of data they would want in this slide deck; if they had other ideas, they could email the group or her directly. Carol commented that these really are terrific representations of what the group has been thinking.

3 Discuss sending to student representatives for review

Valerie began the discussion about getting feedback from student representatives. There are four student representatives; she asked if the group felt the slide deck was ready for that feedback. Carol stated she didn't know if a student would understand it and asked how it would be presented to them. Valerie commented she was going to develop a one-page summary of the project and describe what is in the slides and what they are intended to represent. Carol mentioned there is so much lingo that medical students might not understand. She suggested presenting it face to face: show it to a student or two and see if they have any idea what we are trying to say. Valerie offered to create a summary with a description of the underlying educational lingo; it will probably be more than a page. John recommended a podcast to present to students. Valerie thought that was a great suggestion; she will work on the summary and create a recording to be made available for the group to comment on. The AAMC contact suggested student groups could review over the holidays. Linda volunteered to help Valerie with the podcast. John encouraged discussion of trends in competency-based measurement on the first page, since most medical students are unfamiliar with competency tracking at the UME level. Valerie set up the deadlines. Jan suggested sending out the same information December 30th as a reminder, requesting feedback from the students by January 13th.

4 Review and discuss open questions

Valerie mentioned the sample data was not yet represented but asked that anybody with comments let the group know. She noted the second question was a good one for discussion: it looks at competency longitudinally, where the learner is assessed against different competency frameworks over time. She asked the group if they would keep the data separate or present a harmonized view. Linda commented she would keep it separate because there is no way to translate between frameworks. Alan mentioned it is somebody else's job to do that. John noted you're not going to find one-to-one mappings between these elements, because it is hard to harmonize the data.

Question number three asks whether exam-type data should be integrated into the competency view. Valerie commented she has never been sure how to incorporate high-stakes exams like NBME exams. She asked whether the NBME subject exam is strictly medical knowledge or whether other competencies are also being assessed; how can you say what competency the exam is related to? Simon said it depends on the exam itself: if the exam gives scores on competencies, then you can integrate them. John asked if it was the intent of the USMLE exam to represent anything other than medical knowledge. Alan commented that Step 2 is not just medical knowledge.

Linda thought that was a good question for Bob or Steve. She asked who describes what is measured, and Simon said whoever created and validated the exam. Mike mentioned the clinical skills section of the USMLE goes over and above medical knowledge and basic science, so there could be components to categorize. Valerie suggested we leave that to the USMLE folks to tell us, and Mike agreed.

David commented that a question on an evaluation doesn't have to come from a single competency but could be related to a milestone that spans multiple sub-competencies. You could also state that 10% of the medical knowledge score is based on performance on a specific exam like USMLE Step 1. This would create a level of standardization nationally that is aligned with ACGME and LCME interests.

Valerie asked if a school's local assessment system is something we wouldn't want to capture in our national standard. David replied that on a national scale it would have huge applications; every location is independently setting up methods of teaching residents and fellows how to become entrusted, and all of these converge on the same standard. Milestones are elements along that path. USMLE could allow you to standardize, with Step 1 contributing a certain percentage to the progress of the student.

Lori asked why it is necessary to do that when the licensure process is established. A student can't progress unless they meet those benchmarks; she doesn't see that we have to put everything on the same scale. A student has to meet all those critical points to become a physician, but they don't all have to map; they just have to be noted on the educational trajectory, with evidence that the student has passed each step easy to find. Valerie expressed concern that programs might weight things differently. Alan commented that time was up and the discussion would continue at a future meeting.

5 Open discussion


  • There should be a way to see relevant subcompetencies and their data from the EPA.
  • There should be a way to overlay levels of supervision for an EPA.
  • STARs should be combined with the educational trajectory data.
  • Assessment data using different frameworks will not be harmonized initially.
  • Exam data can only be integrated into competency scores if the exam creator provides competency-specific scores.

Action Items

  • Valerie will add “links” to subcompetency data from the EPAs.
  • Valerie will request feedback on the new slides from Internal Medicine folks (Bill Iobst, Scott Smith).
  • Carol will send supervision milestones to the group.
  • Valerie will illustrate how levels of supervision could be overlaid.
  • Valerie will illustrate how Educational trajectory and STARs can be combined.
  • Valerie will draft a summary and podcast for medical student review; the group will provide feedback by Dec 20. Valerie will send to student reps on Dec 21.