Meeting Information
Date: May 14, 2013
Time: 7 PDT / 8 MDT / 9 CDT / 10 EDT / 15 BST
Attending: Tim Willett, Co-Chair; S. Toufeeq Ahmed, Susan Albright, Connie Bowe, Terri Cameron, Mary Jo Clark, Sascha Cohen, Robert Englander, Ian Graham, Simon Grant, Linda Gwinn, PJ Kania, Jason Ladicos, Steve Lieberman, Karen Macauley, David Melamed, Dan Nelson, and Valerie Smothers.
Agenda Items
1 Review minutes of April 9 and April 23
Tim provided a brief recap of the in-person meeting on April 9th. The group spent the majority of the time going over the work to date. There were several comments on the application and context. One comment related to the need for a binary judgment for each criterion within a performance level: “met or not met.” Ian introduced the group to his work with the Royal Australasian College of Surgeons. He spoke about competency development and specific behavioral markers illustrating optimal or suboptimal behaviors for competency assessment. There were comments that the notion of milestones may be too vague to be used as a meaningful assessment tool. The notion of levels having sub-levels came up but has not yet been seen in print or in practice; if such sub-levels exist, we need to take them into consideration.
Susan moved that the minutes for April 9th be approved. The motion was seconded and approved.
Tim continued with a recap of the teleconference on April 23, when Eric Warm took us through his PowerPoint and explained how EPAs are mapped to curricular milestones. Over time, a global picture of milestone progress develops. Mary Jo moved for approval of the minutes; the motion was seconded and approved.
Tim and Valerie welcomed the new members to the group and asked them to provide a brief introduction of themselves:
Ian Graham is a physician executive working in Australia, half time as a medical manager of several small rural hospitals and half time as a consultant for postgraduate medical education and informatics. He works with the Royal Australasian College of Surgeons and others on the development of a performance framework and assessment tools for practicing specialists, with the potential for use in postgraduate areas.
Jason Ladicos is in production management at One45 software. They make many tools for gathering information on student and faculty activities and for measuring competency and performance. He attended the MedBiquitous meeting and has worked on performance management systems for their parent company.
Toufeeq Ahmed is a leader in informatics at Vanderbilt University, where they are building a new learning management system. They have also built a new curriculum management system and a portfolio system and are now moving into competency assessment systems.
Dan Nelson is a business analyst at New Innovations, in the graduate medical education world. He implements milestones and is interested in the performance framework.
Sascha Cohen is the director for strategic development of Ilios at UCSF. He is interested in the strategic development of e-portfolios and was previously on the Curriculum Inventory Working Group, whose standard received approval and was finalized last week.
David Melamed is a CEO specializing in medical education management systems, in both the graduate and undergraduate medical education worlds. His company offers suites dealing with evaluations and electronic portfolios, and he is invested in milestones, competencies, and EPAs.
Tim continued with a description of the competency framework standard. The initial work of the group was to develop a standard for competency frameworks: a technical specification capable of representing CanMEDS, the ACGME competencies, and competencies from other health professions. Every statement is represented as a competency object, and the competency framework document describes the competency objects included in that framework. It can be used to transfer competency frameworks from one system to another. Every framework has a URL, and every competency object has a URL.
In parallel, different residency programs started looking at milestones and discovered that it is not enough to state what competencies are required; one has to describe the development or pathway from early-stage learner to expert with respect to those competencies. This can guide curriculum development and assess learner progress over time. They recognized there may be multiple performance frameworks or milestone frameworks that refer to the same competency framework. We have developed use cases so far and reviewed all of the existing milestone or performance level frameworks to see what is common among them and what should be included in the data specification. He reiterated the distinction between our work, the Curriculum Inventory group, and the educational achievement specification. Our job is not to connect performance levels to people or curricula. The scope is a digital representation of a performance framework, or milestone framework. Once that exists, it can be used as the basis for assessment and assessment-related data. The Curriculum Inventory group looks at how you represent curriculum electronically and link it to competencies. The educational achievement group works on how you document an actual assessment of a person and capture how they were assessed and how they performed. We are looking at the representation and transmission of the frameworks themselves.
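The relationship Tim described can be sketched in code. This is an illustrative sketch only, assuming a simple object model; the class and field names below (and the example URLs) are hypothetical and are not taken from the actual MedBiquitous specification.

```python
# Hypothetical sketch: competency objects and frameworks each carry a URL,
# and a performance (milestone) framework refers back to competencies by URL.
from dataclasses import dataclass, field

@dataclass
class CompetencyObject:
    uri: str          # every competency object has its own URL
    statement: str

@dataclass
class CompetencyFramework:
    uri: str          # every framework has a URL
    title: str
    competencies: list = field(default_factory=list)

@dataclass
class PerformanceFramework:
    uri: str
    title: str
    # refers to competencies in one or more competency frameworks by URL
    competency_refs: list = field(default_factory=list)

acgme = CompetencyFramework(
    uri="http://example.org/frameworks/acgme",
    title="ACGME Competencies",
    competencies=[CompetencyObject("http://example.org/competency/pc1",
                                   "Patient Care 1")],
)
milestones = PerformanceFramework(
    uri="http://example.org/frameworks/im-milestones",
    title="Internal Medicine Milestones",
    competency_refs=[c.uri for c in acgme.competencies],
)
```

Because the reference is by URL rather than by copy, several milestone frameworks can point at the same competency framework without duplicating it.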
2 Review of recently published ACGME milestones
Valerie explained that about a month ago the new ACGME milestones were released for many specialties. Valerie took some time to look through those new publications and took a quick inventory of their structure and features. She summarized aspects of the different frameworks on the wiki. Structurally, they have a great deal in common, with some differences. Each document has a preface that describes what each level means, attributes, and a background of the framework as a whole. Some specialties focus more on clinical programs, describing medical knowledge and patient care; others take a broader approach. Valerie noted that from our perspective that distinction is not as important. They all look similar to the Internal Medicine narrative or reporting milestones.
Valerie noted some important differences. Some frameworks suggest assessment and evaluation methods. The radiology framework includes suggested educational resources to accompany each competency. Radiology also has a unique approach to levels of performance: residents must be able to meet previous milestones. Another thing they provide is categories for criteria; on page six there are possible methods of evaluation, examples, and suggested educational tools, and on page eight there are different categories for their criteria.
Sascha asked how idiosyncratic these additional fields are; one could have a fill-in-the-blank, abstract approach to categories, educational strategies, etc. Valerie commented that several of the frameworks included a “not yet assessable” column. Mary Jo stated that, from a nursing perspective, their interpretation is that “not yet assessable” means we don’t expect a student to do it yet, as opposed to not being able to tell whether they can do it yet. Bob Englander commented that in Pediatrics, “not yet assessable” should be used only when a resident has not yet had a learning experience in the sub-competency. That is how Mary Jo interpreted it. Dan commented that the surgery framework has the same type of idea, except their notation is that the resident has not yet rotated. Tim asked whether, at a technical level, it matters if the term is “not yet assessable” or “not yet achieved” when the learner is not on the scale yet.
From a software perspective, Sascha said they are not interested in knowing the nuances between specialties. Jason commented that if they get multiple NA values on forms, it means learners are under-achieving or not yet assessed. Mary Jo mentioned they would use that information to tailor a student’s clinic experiences; for example, if a student has not done a pap smear, they would make sure the student has that opportunity. Tim mentioned that distinction makes sense to capture. In psychiatry, the milestones are organized quite differently: everything is numbered and viewed as discrete (for example, on page 1, level one has three criteria, with 1.1 “obtains history”). They include footnotes on criteria within performance levels and footnotes in other places, as well as annotations for competencies and criteria; annotations and footnotes differ. Sascha commented that the annotations are very long. Tim noted that from a data point of view these are simply longer descriptions.
Valerie noted Surgery has 9 practice domains, an alternate way of organizing competencies; they created a new competency framework to sit on top of the existing competencies. Urology has complex competencies. The Pediatrics example differs from the pediatric milestones we reviewed previously; she asked whether they all have five levels now. And there is the “not yet assessable” column. Bob commented that the content of the pediatric milestones is the same; what is missing is the background information for each section. Tim commented that in Urology some items are bulleted. Valerie commented on page 26, ICS 3 (communicates with physicians, writing diagnostic reports, medical records); she is not sure we need to worry about that. Those can be seen as different sub-competencies; it could be done a few different ways. Tim suggested moving on to what this means for our specification.
3 Review of initial specification (see spec, schema, and illustrative powerpoint) (note, schema for the techies only)
Valerie continued with a review of the specification and the illustrative example. Slide two shows the Internal Medicine example: the title, identifier, descriptions, contributors, etc. The effective date is included so you know when the performance framework goes into effect. Slide three shows the supporting information element. That can capture all the content in the preface (the pages with Roman numerals) and can link to a PDF document. One change made based on input from Simon: the level scale is defined from worst to best. The best practice recommendation is to start the scale with 1 and reserve 0 for things like “not yet assessable,” which isn’t really a level.
Susan recommended novice to best. Valerie commented that sometimes the lowest level isn’t novice; for example, critical deficiencies. Simon recommended least competent and most competent. The group concurred. Valerie will make that change in the presentation and the rest of the document. In the Internal Medicine scale, 5 is most competent. There is also a scale of 1-3 (for the question “is the learner demonstrating satisfactory development: yes, no, marginal”).
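The scale convention the group settled on can be sketched as follows. This is a minimal sketch, assuming a five-level scale; the function name and wording are illustrative, not part of the specification.

```python
# Hypothetical sketch of the convention discussed: levels run from
# 1 (least competent) to N (most competent); 0 is reserved for markers
# like "not yet assessable" rather than being a point on the scale.
def describe_level(level: int, max_level: int = 5) -> str:
    if level == 0:
        return "not yet assessable"   # reserved marker, not a scale point
    if level == 1:
        return "least competent"
    if level == max_level:
        return "most competent"
    if 1 < level < max_level:
        return f"level {level} of {max_level}"
    raise ValueError(f"level {level} is outside the scale 1..{max_level}")
```

The same function would also cover the 1-3 satisfactory-development scale by passing `max_level=3`.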
Tim asked about distinguishing between 0 meaning not yet on the scale and 0 meaning not yet assessable. Do we need something more than zero? Mary Jo said they could have Not Assessable and then zero: something that distinguishes a level from having no expectation at this point.
Sascha added that he has concerns that we may be conflating different things. There is the measure of competence, and then there is the measure of grade. Steve added that 0 would be no evidence of competence. The question is how to interpret it. Ian commented that he shared Sascha’s concerns. Connie commented that there could be a problem with the context of performance frameworks; they may be implemented differently. Assessments need to be designed appropriately. Sascha paraphrased the conflation problem (with regard to not yet assessable) as: where do you put a car that isn’t in the race? Valerie agreed that ideas like not yet assessable may be better addressed by the Educational Achievement specification. She agreed to take that to the Educational Trajectory Working Group.
David commented that we send data in a standard format, and the organization receiving it can verify that the scale they are expecting is the scale being used.
4 Open discussion
Decisions
Valerie will change best/worst to least competent/most competent.
Action Items
Valerie will continue to iterate on the specification and illustrative powerpoint.
Valerie will take the issue of not yet assessable to the educational trajectory working group for incorporation into the Educational Achievement specification.