Meeting Information

Date: October 11, 2011

Time: 8 PDT / 9 MDT / 10 CDT / 11 EDT / 16 BST

Attendees: Linda Lewin and Alan Schwartz, Co-Chairs; Carol Carraccio, Mike Dugan, Robert Englander, Maureen Garrity, Simon Grant, Steve Kenney, Morgan Passiment, Howard Silverman, Scott Smith, and Patricia O’Sullivan.

Agenda Items

1 Review minutes of last meeting

Alan commented that the minutes from the prior meeting contained a nice paragraph about competency achievement and evidence of achievement. The minutes were approved as submitted.

2 Discuss pediatric EPAs (population care worksheet and mappings, transition worksheet and mappings)

Alan invited Carol and Bob to begin the discussion on pediatric EPAs. Bob began with background on the development concept. He noted that when looking at competencies, it is almost impossible to assess them as discrete variables. The concept of entrustment is the point at which a learner is performing without requiring direct supervision. Entrustable professional activities (EPAs) are used to integrate competencies. They can be observed and mapped to competencies, sub-competencies, and milestones. With that mapping, decision makers can then determine the level of performance at which entrustment occurs. That is a local decision and may differ from program to program. EPAs are being broadly adopted by specialties and a range of health professions in both North America and Europe.

In pediatrics they have created functional groups of EPAs that define the key professional activities of the profession. Some of these will be general to almost any specialty, but some might be pediatric-specific. The worksheets they have are based on work done by Olle ten Cate and are intended to help faculty quickly assess and document competencies. A single EPA links to competencies and sub-competencies. They select only the sub-competencies that they feel are essential for entrustment. When you put that together with milestones, it paints a picture of where the learner is and where they should be once entrustment occurs, allowing you to make an inference of competence when entrustment has been granted.
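
As a rough illustration of the mapping described here, a minimal sketch in Python follows; the EPA title, sub-competency names, and milestone labels are hypothetical placeholders, not values taken from the pediatric worksheets.

```python
# Sketch of an EPA worksheet as a data structure. All titles, names, and
# milestone labels are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class SubCompetency:
    domain: str                 # one of the domains of competence
    name: str
    critical: bool              # the worksheet "X": critical to entrustment
    milestones: list[str] = field(default_factory=list)  # ordered levels

@dataclass
class EPA:
    title: str
    subcompetencies: list[SubCompetency] = field(default_factory=list)

    def critical_subcompetencies(self) -> list[SubCompetency]:
        """The sub-competencies that are the focus of the entrustment decision."""
        return [sc for sc in self.subcompetencies if sc.critical]

epa = EPA(
    title="Manage a well-child visit",  # hypothetical EPA
    subcompetencies=[
        SubCompetency("Patient Care", "Gather essential information",
                      critical=True,
                      milestones=["novice", "advanced beginner",
                                  "competent", "proficient"]),
        SubCompetency("Communication", "Communicate with families",
                      critical=False),
    ],
)
print([sc.name for sc in epa.critical_subcompetencies()])
```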

Carol added that the sub-competencies are part of the training requirements, with the exception of personal and professional development. When you look at the full spectrum of 20 EPAs, each sub-competency is evaluated.

Scott asked what the "X"s beside certain milestones indicate, and whether they mark a level of mastery. Bob replied that there are six core competencies, or domains of competence, that the ACGME has defined. They added personal and professional development. Those are broken down into sub-competencies. He explained that an "X" means the sub-competency is considered critical to the entrustment decision for that activity. Milestones indicate the levels of performance related to a specific sub-competency.

Simon questioned the significance of the word "critical." Bob commented that most EPAs could map to many sub-competencies. The key is which ones are going to be the focus for the decision to entrust. Those play a key role in the assessment of entrustability.

Carol added that there is tremendous overlap among sub-competencies. Scott suggested that generalizability theory could be used to make the case for the overlap. Bob agreed, adding that there are fifty-two sub-competencies; some of them will travel together.

Valerie asked what data the decision makers might look at to make an entrustment decision. Bob replied that you could create a table where the critical sub-competencies for an EPA are rows and the columns are milestone levels of performance. You could see what performance looks like at the novice, competent, and master levels. The next stage takes each of those columns and creates a clinical vignette that illustrates what that means for a specific EPA. The vignette would become the basis for faculty development. It gives more power to the observational tools one has. A quick example has already been done with patient care. There was a video of a learner presenting a patient in the pediatric emergency department. Faculty were asked to rate the learner using the ABIM 9-point scale. First they were told the learner was a first-year resident; then the learner was described as a third-year resident. In that approach, there is wide variation in scoring depending on the level of training. When milestones are used instead, faculty consistently put the person at the second level, and that does not change depending on the level of training.
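
A minimal sketch of the table Bob describes: critical sub-competencies as rows, milestone performance levels as columns. All row labels and cell text below are invented placeholders.

```python
# Sketch of the entrustment table: critical sub-competencies as rows,
# milestone performance levels as columns. All labels and cell text are
# hypothetical placeholders.
import pandas as pd

levels = ["Novice", "Competent", "Master"]
cells = {
    "Gather essential information": [
        "Collects data unsystematically",
        "Obtains a focused, relevant history",
        "Integrates subtle findings efficiently",
    ],
    "Develop a management plan": [
        "Relies on supervisor for the plan",
        "Proposes a reasonable plan with guidance",
        "Anticipates complications independently",
    ],
}
# Build with levels as the index, then transpose so sub-competencies are rows.
table = pd.DataFrame(cells, index=levels).T
print(table)
```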

Scott said it was similar to work he is doing; they have been showing faculty videos of learners at different levels to start to normalize ratings. Bob Galbraith shared that from the assessment point of view, you can take EPAs and decompose them into competencies, then define levels for each sub-competency. You can then roll up performance measures within the assessment of an EPA.
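
A minimal sketch of the roll-up idea; the simple mean used here is an assumption, since the actual aggregation rule would be a program-level decision.

```python
# Sketch of the "roll up": performance measures on the critical
# sub-competencies are aggregated into one summary for the EPA. The simple
# mean is a placeholder; the real aggregation rule would be a local decision.
subcompetency_scores = {
    "Gather essential information": 3.5,  # hypothetical milestone levels
    "Develop a management plan": 3.0,
    "Communicate with families": 4.0,
}
epa_rollup = sum(subcompetency_scores.values()) / len(subcompetency_scores)
print(f"EPA roll-up score: {epa_rollup:.2f}")
```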

Susan commented that all of this work is aimed at residencies and asked whether it would apply to medical students. Bob answered yes and no. Milestones are designed to apply to learners from medical students on up to practicing physicians. The next step would be to determine EPAs across disciplines, or the portion of the EPAs that apply to medical school.

Patty asked whether, given that certain components may be universal, there are system issues: something more granular than an EPA, or program-specific additions needed for entrustability. Bob commented that there is an opportunity to teach and assess other aspects locally. The question is whether the sub-competencies are necessary and sufficient but not exclusive. If we agree on a set of professional activities, and we find that those professional activities cannot be adequately taught and assessed in a local setting, that speaks to the outcomes of the curriculum. Carol added that we're looking at fundamentals and creating a national map; what gets done at a local level is enhanced. We need to come to consensus on a national map and have the ability to add a local map. Valerie thought it was helpful to hear the discussion and explanation.

3 Discuss draft PowerPoint for educational achievement data

Valerie explained to the group that she is looking for lots of feedback. The idea was to look at a series of slides describing educational achievement to help visualize the data. The first slide provides two options for looking at educational achievement data: one event-based, one competency-based. It provides two different competency frameworks representing two different parts of the learner's educational experience. The second slide, with radar plots, shows the learner's performance in relation to CanMEDS competencies in the University of Toronto program. The third slide shows that you could click on any of the role names and get a description. Another click would take you to the full framework. The fifth slide shows definitions for each performance level. You could also click on University of Toronto and integrate educational trajectory data (data summarizing their medical school experience, including enrichment activities).
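
A minimal sketch of a radar plot like the one described on the second slide, using matplotlib; the scores are made-up example data on an assumed 1-5 scale, plotted against the CanMEDS role names.

```python
# Sketch of a slide-two-style radar plot: one learner's performance on the
# CanMEDS roles. Scores are made-up example data on an assumed 1-5 scale.
import numpy as np
import matplotlib.pyplot as plt

roles = ["Medical Expert", "Communicator", "Collaborator", "Manager",
         "Health Advocate", "Scholar", "Professional"]
scores = [4.0, 3.5, 3.0, 2.5, 3.0, 3.5, 4.0]

angles = np.linspace(0, 2 * np.pi, len(roles), endpoint=False).tolist()
angles += angles[:1]   # repeat the first point to close the polygon
scores += scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, scores)
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(roles)
ax.set_ylim(0, 5)
plt.show()
```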

Slide six shows the learner's achievement in relation to ACGME competencies as determined in their residency program. The competency framework is different, as are the performance levels, just to demonstrate that these may change. Slide seven shows detailed achievement data on the patient care competency, breaking it down into performance on specific sub-competencies. Slide eight shows where the learner is on a single sub-competency and provides a link to evidence and a benchmark to peers. Valerie commented that the group should provide input on what the evidence should look like.

Slide nine shows a histogram indicating where the learner fell in relation to their classmates on specific sub-competencies. Valerie stated the challenge on this one is how it is created: does the individual school create it, or is it something a centralized system would create, in which case that system would need access to peer data?
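
A minimal sketch of the slide-nine histogram; the peer scores are simulated here, since producing the real version would require access to actual peer data, which is exactly the centralized-system question raised above.

```python
# Sketch of the slide-nine histogram: where one learner falls relative to
# classmates on a single sub-competency. Peer scores are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
peer_scores = rng.normal(loc=3.2, scale=0.6, size=120)  # simulated classmates
learner_score = 3.8                                     # hypothetical learner

plt.hist(peer_scores, bins=15, alpha=0.7, label="Classmates")
plt.axvline(learner_score, color="red", linestyle="--", label="This learner")
plt.xlabel("Sub-competency rating")
plt.ylabel("Number of learners")
plt.legend()
plt.show()
```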

Simon questioned the use of the word "attainment" versus "achievement": by rating people on a scale you aren't saying they've achieved a particular level, but you estimate where they are on a scale. Patty commented that comparative rating and benchmarking to peers are important. The rater profile is also important: the histogram should not only benchmark learners against their peers but also adjust for the tendency of the rater. A centralized process would be very helpful. Patty agreed to share a sample rater profile.
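
A minimal sketch of one simple rater adjustment in the spirit of Patty's comment; the column names and values are hypothetical, and an actual rater profile would likely rest on a more formal model.

```python
# Sketch of a simple rater adjustment: center each rating on the rater's
# average leniency before benchmarking. Column names and values are
# hypothetical placeholders.
import pandas as pd

ratings = pd.DataFrame({
    "rater":   ["A", "A", "B", "B", "C", "C"],
    "learner": ["L1", "L2", "L1", "L3", "L2", "L3"],
    "score":   [4.0, 3.5, 2.5, 2.0, 3.0, 3.5],
})

overall_mean = ratings["score"].mean()
# How far each rater's average sits above or below the overall average.
rater_leniency = ratings.groupby("rater")["score"].transform("mean") - overall_mean
ratings["adjusted"] = ratings["score"] - rater_leniency
print(ratings)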

Valerie asked Bob Galbraith if this was within scope for the planned eFolio system. Bob Galbraith replied that he hoped the system would support that kind of analysis.  The histograms have potential to be a rich aspect of the system.  Bob Englander agreed, adding that we are right on target. 

Valerie returned to reviewing the slides. Slide ten is a rubric from UCSF.  We are still missing an example that shows performance on components of a competency; pediatrics may be able to help us think through that.  We can also do more reporting on EPAs based on the earlier discussion. 

Linda asked how things are embedded in each other: do we want one view of the data for starters, or do we want to display it in more than one way? Valerie shared that with the technical specification you can slice and dice the data any way you want, by competency or by event. She asked the group to send her other examples to consider, or any other feedback. She will send around questions to start with next time.
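
A minimal sketch of the "slice and dice" idea: the same achievement records summarized competency-based and event-based. Field names and values are hypothetical, not taken from the technical specification.

```python
# Sketch: one set of achievement records, two views. Field names and values
# are hypothetical placeholders.
import pandas as pd

records = pd.DataFrame({
    "event":      ["MiniCEX", "MiniCEX", "OSCE", "OSCE"],
    "competency": ["Patient Care", "Communication", "Patient Care", "Communication"],
    "score":      [3.5, 4.0, 3.0, 3.5],
})

print(records.groupby("competency")["score"].mean())  # competency-based view
print(records.groupby("event")["score"].mean())       # event-based view
```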

4 Open discussion 

Questions for the group:

  1. Is there a good example of data that you feel should be included in an educational achievement report that is not yet represented? If so, please send it to Valerie for distribution to the larger group.
  2. What do you do if the learner was assessed using one competency framework in one environment and then another in another environment (i.e., CanMEDS for UME, ACGME competencies for residency)? Is that data harmonized, or do you display UME and GME data separately (as it is now)?
  3. How would high-stakes exams be integrated into the competency reports? Are USMLE scores specifically related to medical knowledge, or do high-stakes exams span multiple competencies?
  4. What would the evidence supporting a sub-competency look like? Would that include results of an EPA-based MiniCEX, and potentially exam scores and surveys?
  5. Does the integration of rater profiles look appropriate – are there changes you would recommend?
  6. For UCSF: can you provide more details on how the rubric is used, in particular the Evidence and Summary areas?

Decisions

Action Items

  • Valerie will add slides for EPAs and rater profiles
  • Patty will send examples of rater profiles