
Meeting Information

Date:

April 23, 2013

Time:

7 PDT/8 MDT/9 CDT/10 EDT/15 BST

Call in Number

AUSTRALIA
BRISBANE +61-7-3102-0973
NETHERLANDS +31-20-718-8593
SWEDEN +46-8-566-19-394
SINGAPORE +65-6883-9223
UNITED KINGDOM
GLASGOW +44-141-202-3228
LONDON +44-20-3043-2495
MANCHESTER +44-161-601-1428
USA +1-203-418-3123

Passcode

1599520

Attending: Tim Willett, Co-Chair; Susan Albright, Kelly Caverzagie, Mary Jo Clark, Stephen Clyman, Robert Englander, Robyn Herring, Kevin Krane, Steve Lieberman, Karen Macauley, Valerie Smothers and Eric Warm (invited guest).

Agenda Items

1.       Review of minutes (3/26) and in person meeting (4/9 - minutes pending)

Tim recapped the minutes from the prior meeting. Eric Warm had been invited to speak on his work at the University of Cincinnati describing milestones and EPAs and mapping them to each other. The group had many questions, so Eric was invited back again today. His PowerPoint presentation was circulated to give the group a visual aid. Tim apologized for not yet posting the minutes from the in-person meeting, but noted the bulk of that time was spent reviewing the work to date and the different performance frameworks. One question came up during the in-person meeting that had not been discussed before: the notion of a performance level set in which each level has its own sub-performance-level hierarchy. There are no published examples of that, and it might be discussed later on the call. Tim asked if anyone had any corrections or had noted inaccuracies in the minutes from the March 26 call. Hearing none, a motion was made and seconded to accept the minutes as submitted.

2.       Review of Eric Warm’s work and how that would map to MedBiquitous specifications 

Eric discussed his presentation on Mapping Milestones and Competencies to Entrustable Professional Activities. He began with definitions: competencies are “observable abilities of a health professional, integrating multiple components such as knowledge, skills, values and attitudes”; curricular milestones are “observable developmental steps that describe progression from a beginning learner to the expected level of proficiency at the completion of training”; and EPAs are “activities the public entrusts all physicians are capable of doing.” Reporting milestones run from early learner to ready for unsupervised practice and, beyond that, aspirational. There is also a level below early learner for learners with critical deficiencies.

It is not an evolution from core competencies to reporting milestones; all of these facets coexist and are in use simultaneously. Cincinnati has developed content-based EPAs and process-based EPAs. Process-based EPAs are applicable across multiple rotations; content-based EPAs are specific to a single rotation. Process-based EPAs may be inpatient, consult, or ambulatory. They also created EPAs for non-attending evaluations. Slide 13 shows the rating scale. There are different EPAs for interns and senior residents; they are progressive in nature.

Susan asked about the discrete things learners are judged on: would feedback be generalized to the whole block of things? Eric answered that it is discrete, not generalized to the block. The faculty have become better at giving feedback as they go along. Tim asked what the yellow highlights on the slide meant; Eric answered that the highlights are for presentation purposes. A PGY1 will reach a different level than a PGY2. Tim asked whether everyone gets to Level 4. Eric answered ideally yes, but not everybody will. Multisource evaluations use the same 1-5 scale.

Slide 26 shows a content-based EPA for the cardiac rotation and how it maps to curricular milestones. Susan asked whether an assessor rating an EPA on a scale of 1-5 would also be assessing the mapped milestones; Eric answered yes. They found the content EPA in cardiology weights patient care more heavily than medical knowledge. The process EPA for managing teams mapped to many practice-based learning and improvement competencies. Tim asked if they make the assumption that the underlying competencies are being developed. Eric commented that the assumption is, if the mapping is correct, the mapped milestones are being assessed at the same level. They have found that people on track get better over time. EPAs can be mapped to curricular milestones and from there to competencies and reporting milestones. They generate a series of averages for the competencies (slide 32). The milestones are the same even though the skills are different.

Susan asked for clarification: the discrete boxes for PGY2 each have a score, and the scores are mapped to a larger lumping of things? Eric noted that was correct; the attending makes the entrustment decision. Susan asked how it is mapped up to a level. Eric explained in slide 38 that levels 1-5 of entrustment center around interpersonal and communication skills. The green is all PGY2s as a class; blue is this particular resident. Green is outpacing this resident, so a deep dive on communication skills with this person would be needed; in medical knowledge, this person is doing well, approaching independence. Tim noted that in medical knowledge they are halfway between 3 and 4: they have been scored between 1 and 5, some of those scores are mapped to MKA1, and the average is 3.5. Their communication skills are a problem, but they are very smart. Eric confirmed. Slide 42 shows how performance can be compared over time.
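The averaging step Tim describes can be sketched in code: each 1-5 EPA entrustment score is grouped under the sub-competencies it maps to, then averaged. This is a minimal illustration only; the EPA identifiers and the mapping below are hypothetical, not Cincinnati's actual mapping.

```python
def subcompetency_averages(epa_scores, epa_to_subcompetencies):
    """Average a learner's 1-5 EPA scores per mapped sub-competency."""
    grouped = {}
    for epa, score in epa_scores.items():
        for sub in epa_to_subcompetencies.get(epa, []):
            grouped.setdefault(sub, []).append(score)
    return {sub: sum(s) / len(s) for sub, s in grouped.items()}

# Hypothetical data: two EPAs, both mapped to sub-competency MKA1.
scores = {"EPA-1": 3, "EPA-2": 4}
mapping = {"EPA-1": ["MKA1"], "EPA-2": ["MKA1"]}
print(subcompetency_averages(scores, mapping))  # {'MKA1': 3.5}
```

With these illustrative numbers the MKA1 average comes out to 3.5, matching the halfway-between-3-and-4 example discussed on the call.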

The last few slides give the example of riding a bike as it relates to curricular milestones, EPAs, sub-competencies, and reporting milestones. Curricular milestones are granular: can put a helmet on, feet reach the pedals, etc. The EPAs are different ways to ride a bike (in the driveway, in traffic, etc.); they are skills that can be assessed. Sub-competencies are riding a bike safely. Reporting milestones range from falls off the bike to can ride in rush hour traffic. He aggregates EPA scores for a milestone; whatever level of entrustment results on the 1-5 scale, he multiplies it by 9, divides by 5, and puts it in the correct box on the reporting milestones.
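The multiply-by-9-divide-by-5 step is a rescaling of the 1-5 entrustment average onto the nine reporting-milestone boxes, and can be sketched as below. How a fractional result is placed in a box is not specified in the minutes, so the rounding here is an assumption for illustration.

```python
import math

def reporting_level(entrustment_avg):
    """Rescale a 1-5 entrustment average onto the 9-level reporting scale."""
    return entrustment_avg * 9 / 5

def reporting_box(entrustment_avg):
    """Place the rescaled value in a box (rounding down is an assumption)."""
    return math.floor(reporting_level(entrustment_avg))

# A fully entrusted learner (5.0) lands at the top of the reporting scale.
print(reporting_level(5.0))  # 9.0
```

For the MKA1 average of 3.5 discussed earlier, the rescaled value is 3.5 × 9 ÷ 5 ≈ 6.3, which under this rounding assumption would land in box 6.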

Susan commented that at the in-person meeting this level of complexity was lacking, and she had expressed problems trying to deal with it. She commended Eric on a great presentation. Tim asked, since curricular milestones have a progression to them, how would you decide which curricular milestone is mapped to a given EPA? Eric commented that they did their best to put the milestones in order; however, there is a lot of overlap. Early milestones are mapped to intern EPAs, and later milestones to senior resident EPAs. There is no clear gradation. They gathered 10 people together and put an EPA up on the screen. Each person had a list of 142 milestones, and they worked in pairs, each pair focusing on a domain. They asked the patient care pair which of their milestones seemed to map. It took about 40 hours to do this, and they over-mapped. It took a while to get to a shared vision.

Kelly commented that the curricular milestone work started in 2007. They sit generically in a rough developmental framework; he advocates for dropping the timelines. He added that they refer to the curricular milestones as a pick list that you can go to as a starting point, and they are less worried about whether a first- or second-year can do a given thing. Eric commented that people may not use curricular milestones as much. There are observed behaviors in every rotation, and an aggregation of those behaviors in the document. Most people are going to try to find observed behaviors and tie them to reporting milestones. Kelly commented that, rather than having to come up with your own behaviors that define an EPA, the current curricular milestones are helpful.

Tim asked Valerie if there was a link to the MedBiquitous specifications. Valerie shared that there are two different specifications. The competency framework represents things like the ACGME competencies and sub-competencies and CanMEDS, and it can describe EPAs. The link in the agenda shows how Eric's work maps to the MedBiquitous specifications. Each EPA points to an ACGME competency inherent in that activity. An EPA would also have a connection to curricular milestones represented as a competency framework, and the curricular milestones have already been mapped to the ACGME competencies. The orange boxes represent performance frameworks; they define a continuum of performance, i.e., a scale, for measuring a competency or EPA. There would be a performance framework representing the rating scale, the five levels of supervision, and another performance framework for the reporting milestones. EPAs would be assessed on the rating scale; those scores would be averaged, and the averaged EPA scores would inform the reporting milestones. It all ties together. Ultimately, when that gets reported for an individual learner, it would be represented as educational achievement data.
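The relationships Valerie describes can be summarized in a small sketch. This is an illustration of the linkages only, not the MedBiquitous XML schemas themselves; all identifiers below are hypothetical.

```python
# Illustrative sketch of the entities Valerie describes and how they link:
# an EPA points to ACGME competencies and curricular milestones (both
# expressible as competency frameworks) and is assessed against a
# performance framework (the 1-5 rating scale / supervision levels);
# reporting milestones form a second performance framework.

epa = {
    "id": "epa-manage-teams",            # hypothetical EPA identifier
    "competencies": ["acgme-pbli"],      # link into a competency framework
    "curricular_milestones": ["cm-042"], # also a competency framework
    "assessed_on": "rating-scale-1-5",   # performance framework (supervision)
}

performance_frameworks = {
    "rating-scale-1-5": {"levels": 5},       # entrustment rating scale
    "reporting-milestones": {"levels": 9},   # reporting milestone scale
}

# Averaged scores on the rating scale inform the reporting milestones,
# which are ultimately reported per learner as educational achievement data.
assert epa["assessed_on"] in performance_frameworks
```

The design point here is that EPAs, competencies, and milestones are all nodes linked by references, while the scales used to assess them live in separate performance frameworks.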

Valerie asked Eric if that correlates with what he described. Eric commented there should be a direct mapping from the EPAs to reporting milestones: take all the EPAs that map to a given sub-competency, and you expect to see the same kind of slow, steady rise as people become entrusted. Eric mentioned he pitched this presentation a year and a half ago and was invited to speak at a national MedHub meeting to get other people to do what he was doing; however, curricular milestones got the shaft, and reporting milestones were seen as the way to go. He determined they probably did not have a shared vision; the company does not want to build things people are not going to use. Kelly can attest to that. There were two different visions, and leaders and innovators are now coalescing, to some degree, around what Eric has shown.

Steve congratulated Eric on his work and commented that it was very insightful and clarified questions about how to get content specificity into a generic EPA. Eric mentioned it was a total team effort with Kelly's help.

Eric asked the group what they would do with this information. Valerie shared that MedBiquitous is a not-for-profit organization that develops open technology standards. MedBiquitous wants to ensure that performance frameworks/rating scales and learner data can be expressed in a standard way.

  
  
4.       Review of revised data model
  
5.       Open discussion

Decisions

Action Items
