
Meeting Information

Date:

June 25, 2013

Time:

7 PDT/8 MDT/9 CDT/10 EDT/15 BST/22 SGT/Midnight AEST

Call in Number

AUSTRALIA BRISBANE: 61-7-3102-0973
AUSTRALIA MELBOURNE: 61-3-9010-7742
NETHERLANDS 31-20-718-8593
SINGAPORE 65-6883-9223
SWEDEN 46-8-566-19-394
UNITED KINGDOM GLASGOW: 44-141-202-3228
UNITED KINGDOM LONDON: 44-20-3043-2495
UNITED KINGDOM MANCHESTER: 44-161-601-1428
USA 1-203-418-3123

Passcode

1599520

Attendees

Tim Willett, Co-Chair; Susan Albright, Connie Bowe, Sascha Cohen, Robert Englander, Linda Gwinn, Robyn Herring, PJ Kania, Scott Kroyer, Steve Lieberman, Karen Macauley, Dan Nelson, Paul Schilling and Valerie Smothers.

Agenda Items

  1. Review minutes of last call

Tim provided a brief recap of the prior meeting.  Valerie had spent time reviewing the PowerPoint presentation of additional frameworks and features of the specification.  Tim noted that what used to be called criteria is now called indicators.  Each component can have one or more sub-components, or one and only one performance level set.  The group talked about the complex performance level tables for prescribing and performing procedures in the Pediatrics framework and decided they were best represented as a number of sub-components, each with its own performance level set.  There were a couple of questions about the nature of scales; for example, Susan shared that their ten-point scale has only five positions, as does Critical Care Nursing's.  The draft specification can accommodate that.  The National University of Singapore's performance framework is currently organized by entrustable professional activities, using one common performance level set across all their EPAs; that performance level framework is fairly simple, representing just the one scale.  Use cases were also addressed, with slight changes associated with level of performance in a portfolio.  Data captured elsewhere is not in our performance level specification.  Bob moved to accept the minutes as submitted, and the motion was seconded.

Scott Kroyer recently joined the group and introduced himself as a developer at E-value who has been involved in curriculum mapping and curriculum inventory efforts.  He is interested in getting involved in other standards that are underway.

2.  Discuss new examples in illustrative powerpoint (Pediatrics, Radiology, Psychiatry)

a. Examples

b. Assessment methods

c. Educational tools

d. Notes at the component level

e. Notes at the indicator level

Valerie continued with a discussion of example performance frameworks released in the US (Pediatrics, Radiology, and Psychiatry).  She noted the groups had some unique features; the PowerPoint shows the subtle differences in each, and she updated the specification accordingly.  The changes begin on slide 24 of the PDF document, with a link to the Pediatrics Milestones document.  This example is from page 8 of the most recent Pediatrics Milestones document, which is restricted to 5 levels (page 13 of the PDF).  The examples shown at the bottom of the column are represented in the background element of the specification.  Valerie asked the group where this information should be put in the specification.

Sascha questioned whether we should use a scale of 1-6 so that Not Yet Assessable could be represented.  Valerie mentioned that on a previous call it was decided not to include "not yet assessable".  Sascha had earlier given the example of a car not entered in a race.  He commented that this opens the potential for having other issues not captured in the standard.  Valerie noted we should think about the scale and learner data separately.  Dan agreed; he noted that in their software, this is separate from assessment; you can't assess somebody who is not a part of that level.  Tim commented that when thinking about measurement in general, there are always things that don't apply to the measure, and typically those don't appear on the ruler.  Tim can see why Pediatrics included it, just to remind assessors where it's not appropriate to use the tool, but he is not convinced "not yet assessable" should be included in the performance framework.  Sascha commented that it is important that everybody is using the same ideas.  "Has not yet rotated" is one term being used.  If it doesn't impact statistical analysis, we don't use it as part of the scale; however, it's still an indicator. We should make recommendations regarding the definition of 0 in the best practices guide.  Susan asked if this was an implementation issue as opposed to a scale and standard problem.  Valerie noted it will be a standard issue for the educational trajectory working group and the educational achievement specification; she is not sure the current draft of educational achievement would accommodate that. She added that the Pediatrics Milestones document serves as a template for learner evaluation and is helpful for people implementing milestones.  For our purposes we want to be clear about what the ruler is and is not.  Tim thought it was worth flagging for the implementation guide, as we need to address it.  He asked Bob if that approach made sense to him, and Bob thought it did.

Valerie continued describing the changes on slide 25.  She noted the example is put into the background element.  Performance level has indicators, and an indicator can have background information; she placed the example in background information and asked the group if that sounded appropriate.  Bob commented that as a user, ideally if he is assessing an individual and trying to rate them, he should be able to see a link that would give him this example.  Valerie asked the tool developers if putting it in the background would work.  Sascha doesn't see a problem with that; his concern would be that, without explicitly calling out the background element, it could get chock-full of non-pertinent information, opening it up to potential abuse.  It might be useful to refine this further.  Valerie mentioned we could come up with something more generic, some kind of category field, so we can label it with background examples.   Sascha noted that the more generic the element, the more varied the information we will be provided with; with a category attribute, for example, we would have to organize a number of categories in different ways.  We don't want to restrict it, but maybe provide some funneling of information so that content developers and educators who are generating this information self-select information that is of value.

Connie noted there may be different sites for evaluators: trainer versions and common-use versions.  Bob agreed; we want to have them see what it looks like, a descriptor of the indicator.  Tim asked whether, in these instances, it is just extra text we want to see: is it helpful to have categories, or is the point more text for the user?  Valerie suggested having something more generic.  Tim noted that doesn't help from a data control perspective.  Dan asked, from a software perspective, do we provide a label and descriptor?  Valerie thought that made sense.  She noted we have background and note for the performance level and the indicator. The framework overall may have supporting information; we could have something like supporting details and provide a way to label it.  Tim suggested using a controlled list, using the term "additional information" as a generic field, and including an example.

Sascha thought this might best be set into a best practices discussion or document.  From a tools perspective, developers can consume and present information within their own context.  It's important to get information that is well crafted and presented to assessors and students; we should lay down some best practices and see what is going to be the most valuable.  Tim commented that if we go with something generic, we can recommend labels that are useful.  Valerie mentioned we will no longer have background and note but will create a new element available for background information and notes. We can provide a recommended list that includes note, example, and background.  We should make it flexible enough so that if something new comes up, the standard would be able to accommodate it.
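As a rough sketch of what such a consolidated, labeled element might look like in the XML specification (element and attribute names here are hypothetical illustrations, not text from the published schema), an indicator could carry repeatable labeled supplementary information:

```xml
<!-- Hypothetical sketch only: names are illustrative, not the
     actual Performance Framework schema. -->
<Indicator>
  <Description>Gathers an appropriate patient history</Description>
  <!-- One repeatable, labeled element replaces the separate
       background and note elements. -->
  <AdditionalInformation label="Example">
    Efficiently elicits a focused history from a new patient.
  </AdditionalInformation>
  <AdditionalInformation label="Note">
    See footnote 3 of the source framework document.
  </AdditionalInformation>
</Indicator>
```

Per the discussion, the label values would come from a recommended (but not restricted) list such as Background, Example, Resources, Assessment Methods, and Note, leaving room for new categories.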

Susan asked about score values and where textual feedback to the learner would be handled.  Valerie shared that would be in the educational achievement specification; she was unsure if it is there yet, but she will check and let the group know.   Susan noted their medical school uses a range of scores (1-3) for a single performance level and asked how that would be reflected here.  Valerie suggested Susan share that example with the group so it can be discussed later.  Susan agreed to send the example.

Valerie continued with slide 33 on Diagnostic Radiology and pointed out a number of things they were doing a little differently.  They have possible methods of assessment. On page 11 of 19 in the PDF document, there are five levels for professionalism, with possible examples and educational tools underneath.  Tim asked whether we were looking at a six-point scale, whether zero means they aren't at one, or whether zero means it's not appropriate to apply the scale.  Valerie clarified that if the learner has not achieved level 1, or the learner's performance is not assessed, then the scale would be 0-5. You could create a performance level 0 with no indicators; there would have to be something to indicate that level, and we could represent that in our specification.  Sascha commented this falls back on how the discipline is defining that zero level and whether the standard can handle it.  Are they assessing how the learner has achieved level one? Valerie mentioned it has to be defined by whoever is creating the framework.
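One way the zero level described here might be represented (again a hypothetical sketch, not schema text) is a performance level carrying a value of 0 and no indicators:

```xml
<!-- Hypothetical sketch: a level 0 with no indicators, defined by
     the framework author to mean "has not yet achieved level 1". -->
<PerformanceLevel value="0">
  <Description>Has not yet achieved level 1</Description>
  <!-- No Indicator elements at this level. -->
</PerformanceLevel>
```

The meaning of this zero would still need to be defined by the framework author, as Valerie noted.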

Bob said the ACGME provides guidance on this issue.  Tim suggested putting this in the best practices guidelines and then let them go to the ACGME to decide.  He noted we have invited the ACGME to join this group.  Valerie made a note to add to the list of things for the best practices guide on the wiki to make recommendations on the definition of zero and how that is interpreted. 

Valerie continued with PowerPoint slides showing suggested educational tools.  If we have a new additional information field, we can use that same field for resources and methods of assessment.  The educational tools are not specific to a particular performance level in this instance.  Tim commented this is attached to the component, but asked whether we allow a catch-all container for the component.  Valerie answered that at the component level we have title, abbreviation, competency reference, author, reviewer, background, resources, and references. We could try the same generic approach with the component.  We may still need references called out separately because their order can be important.  She suggested doing some consolidation at the component level, unless the group feels they want resources and background as separate items.  Tim asked if we know what resources are.   Valerie mentioned Radiology's suggested educational tools would go under resources.  Tim asked whether, for things that are ordered 1-4, the additional information element would support XHTML.  Valerie commented that's not the same as having an order attribute.  Tim asked whether the element has an order attribute.  Valerie confirmed it does; that part is still being written, and the group has talked about having some kind of order.  Tim suggested the group think on this in the interim, bring the topic up on the next call, and keep looking at any other idiosyncrasies of other frameworks.  Valerie will make a proposal for something more generic at the indicator level.

Valerie continued with slide 41, an example from Psychiatry.  This example describes the performance levels for PC4 Psychotherapy on page 10 of 36 of the linked PDF document.  There are footnotes within the competency description where performance levels are being described, and footnotes for indicators at the bottom.  We don't have a note field for these, but a more generic approach could accommodate them.  Right now we can't fully accommodate what they are doing, but we can address these gaps on the next call.  Valerie didn't think anyone else was doing this.

Tim commented that maybe there is a generic element we could come up with that could serve as a description.  The group will get back to Ian Graham's points on the next call.

 f. Examples at the performance level

3.  Points from conversation with Ian Graham

a. Identifying level of entrustability

b. Identifying level recommended for progression within a training program

4.  Open discussion

Decisions

Action Items

Valerie will note the need for the educational achievement spec to accommodate

  • A “not yet assessable” option or equivalent
  • Feedback to the learner

Valerie will ensure that the following are addressed in the implementation guidelines:

  • “not yet assessable” and equivalent options
  • Best practices around the meaning of 0 in a particular scale and how to represent that.

Valerie will eliminate the background and note elements in Performance level and Indicator and add an AdditionalInformation element with a label attribute and recommended values (Background, Example, Resources, Assessment Methods, and Note). A similar approach will be adopted for the Component element.

The group will follow up on representing a range of scores for a single performance level.

Valerie will come up with mechanisms for addressing the footnotes shown in the psychiatry example (within the competency).
