November 22, 2011
8 PST/9 MST/10 CST/11 EST/16 GMT
Attendees: Linda Lewin – Co-Chair; Susan Albright, Carol Carraccio, Bob Galbraith, Simon Grant, Kimberly Hoffman, William Iobst, Steve Kenney, Howard Silverman, Valerie Smothers, Morgan Passiment, and Janet Trial.
1 Review minutes of last meeting
Linda began the meeting with a review of the minutes. She noted a few typos, which she will send to Valerie for correction. Otherwise, the minutes were approved as submitted.
2 Discuss concerns about benchmark data and request for supporting documentation
Kim began with some concerns that were raised around several areas of comparison data. She commented that data is not always used as intended. The EAG talked about criterion-referenced versus norm-referenced advancement decisions. While advancement decisions are made against specific criteria, normative data may dilute that message a bit. Comparison data, the ability to compare apples to apples, requires a great deal of clarity of definition. Linda thought it was a good summary of concerns and asked the group if anyone had comments. Kim noted most of the concerns were raised at the UME level, though GME colleagues shared them as well.
Bill commented that the ACGME is working hard on a new accreditation system to be piloted in 2014. It would be competency based and will rely on attestation of trainee competence using narrative reports. It would be worthwhile to consider input from Tim Brigham or Susan Swing. We should make sure we understand the reporting mechanism that will inform the new system. There are three pilots underway involving Medicine, Pediatrics, and one other specialty. Medicine is defining an initial draft. Robust competency-based assessment data will be developed, informed through EPAs. The committee will look at data and attest to where a learner is in the development of competency.
Linda asked if every program will have attestation that can be compared nationally. Bob answered that would be the ideal, but he doesn't know how realistic that is. The ACGME is working on defining a common language for general competencies, and even that is controversial. Pediatrics has developed robust EPAs, but he is not sure they will fly. Carol mentioned that the last time she talked to Susan Swing, Susan supported that approach. The common language the groups of experts are working on would be available to specialties that haven't developed milestones. For pediatrics, surgery, medicine, and urology, it would be acceptable to have their own language. Linda asked if we should try to get Tim Brigham involved. Bill said he would try to talk to Tim and will copy Linda on the email.
Bob mentioned there are two separate considerations. From the standards point of view, there is a clear benefit in getting as granular as we can; it will allow things to happen that are currently just pipe dreams. Secondly, we don't have that data, and we don't know if we want to collect it in that format. There are real and appropriate concerns regarding the potential consequences if we make the data available. Just because the specification includes the potential for more granularity doesn't mean we will act on it any time soon.
Kim commented that the EAG is taking the position that we should think carefully about the actions we take and the intended and unintended consequences of moving forward. Just because the standard makes it possible to exchange the data doesn't mean that functionality has to be turned on in every instance. Valerie agreed. Bob noted one clear need would be trials and tests (pilots) to decide which areas would benefit from comparison data. The specification would make it possible to see whether comparison data is feasible. Kim added it could be useful for faculty development as well.
Valerie mentioned UCSF had raised a comment about supporting documentation, i.e., using attachments to provide additional support for the competency data. The example Pat gave to Valerie was that if a learner has developed a brochure for a Spanish-speaking population of patients, they should be able to have a link to it. She asked the group if they would be supportive of that addition. Kim commented the same approach was used in the Educational Trajectory work and added she would support the addition of that feature. Howard questioned the use of student-supplied documents and other materials, asking if there would be any annotation on the part of the supervisor. Valerie asked whether categories would be helpful. Kim mentioned there were too many things that wouldn't fit into categories.
Simon questioned mixing institution-provided with learner-provided data, noting the reluctance of institutions to point to learner-provided information. We don't know what is there; do we certify it, or put the burden entirely on the learner? Valerie was concerned that if we have learner-provided documentation in support of institutional data, we are muddying the waters. Bob commented that, at a high level, we do not want to be dismissive of self-reported data, but we do need to think about the mechanism of reporting. You could have things clearly labeled and reported separately, or you could mix them in the same report, in which case data might be misread. Simon commented that the learner data is central, but where and how do we articulate that data when registrars are reluctant to include it?
Linda asked if it would have to come from institutions at all. Could there be data represented as the learner's own material? Bob commented that from a learner perspective there is going to be a lot of self-reporting, but self-reporting to institutions will be limited. Simon suggested a single link on an official record to the learner's own record. Valerie commented there are a lot of documents that are part of official records. If the learner has developed some kind of document as part of their degree program, that would be in their portfolio, as long as we have a way to distinguish the two. It is necessary for learners to have attachments; we can keep our minds open for more formal development. Kim agreed that everybody thinks it is necessary for learners to be able to do this; we just need to figure out how.
3 Review and discuss revised data analysis
Valerie continued with the changes to the document. The references to rater profiles have been removed. The EPA assessment slides ten and eleven are new and are based on the work presented by Bob Englander and Carol Carraccio on previous calls. The question is how this fits in with the rest of the data: is this evidence linked to more competencies, or linked in a different way? We need to know what the EPA is and see where the learner is on the scale of performance levels. Slide ten shows results for a learner who is at the advanced novice level. The blue words represent hyperlinks; slide eleven shows behaviors associated with that performance level. There could also be videos for that performance level.
Carol said this approach makes sense to her from the pediatrics community's perspective. It is an important opportunity to be able to look at progress, but she wasn't sure how that could be represented. Could there be an opportunity to see where somebody started and where they are now, the delta rather than the absolute point? Carol also mentioned the possibility of dashboards so that people could compare; it would be nice if there were national dashboards, adding the ability to benchmark against peers. Bill commented the ACGME is talking about something similar in terms of including a narrative description of what a person would look like. He questioned the use of the Dreyfus model; many are not enamored with it, including the surgical community. They are concerned about the progression in medicine, the ability to practice unsupervised patient care, and do not assign a title to learners. This would be ideal if the framework describes development over time.
Carol shared that in pediatrics they are not using labels. Bill mentioned that through composite data we could identify a normative curve an individual could compare themselves to. The cardiology community has taken this approach; Eric Williams is leading that work, and Jeff Keuven is charged with the lifelong learning competency. Bill will send us his information if we are interested. Linda commented that the exact terms would not matter; Valerie agreed. The competency working group is developing a specification that would allow an organization to define a framework of performance levels. Linda mentioned the good work by Cara (or Karen?) Kennedy on levels of supervision; that sort of continuum, starting with direct supervision, could be important. It would be interesting to overlay a supervisory level with actual behavior descriptors. How do you decide a person no longer needs direct supervision? She added that this would be a back-door approach to competency assessment.
Bill agreed that the assessment of the ability to practice unsupervised is essential; graded supervision is critically important. Carol added that the decision regarding graded supervision requires different components of trustworthiness. For example, you may trust a learner to practice with less supervision if they would come to you when they have a problem. Valerie will make some edits to the slides based on these comments and show an example using the supervision model and the dashboard as well. Linda asked Carol for a picture example. Carol suggested a pie chart showing which slice of the pie you fit into: you know where you are, and it would show how many of your peers are in slices 2, 3, 4, or 5. Bill offered to send narratives from the medicine working group as soon as they are available.
Linda asked whether an EPA is at a higher level than something like the ACGME competencies. Bill and Carol replied that EPAs would be above the ACGME competencies; an EPA is a work-based assessment. Valerie commented it's like a different view of the data. She commented we could add something new to the first slide: a link to EPAs showing what you have and haven't done and where you are. Valerie will send around revisions before the next call, which is on December 13th.
4 Review and discuss open questions
5 Open discussion
- Learners should be able to include attachments and other self-reported data in some way; the mechanism is not yet clear. We should be sure to distinguish between learner-reported and institution-reported data when addressing this.
- One should be able to show the progression of competence using the specification (i.e., multiple data points for a single assessment).
- The standard should support the use of a supervisory framework in concert with a performance level framework.
- Valerie is going to update the PowerPoint presentation.
- Bill is going to be in touch with Tim Brigham at the ACGME and will send contact information for Jeff Keuven from Cardiology.
- Bill will send narratives from the Medicine working group on milestones as soon as the narratives are available.