
Meeting Information


July 26, 2013


8 PDT/9 MDT/10 CDT/11 EDT/16 BST

1 AM AEST (with apologies, Ian)

Call-in Numbers

AUSTRALIA BRISBANE: 61-7-3102-0973
NETHERLANDS 31-20-718-8593
SINGAPORE 65-6883-9223
SWEDEN 46-8-566-19-394
UNITED KINGDOM LONDON: 44-20-3043-2495
USA 1-203-418-3123



Attending: Tim Willett and Rosalyn Scott, Co-Chairs; Toufeeq Ahmed, Kirstin Cirulis, Mary Jo Clark, Stephen Clyman, Robert Englander, Steve Lieberman, Paul Schilling and Valerie Smothers. 

Agenda Items

  1. Review minutes of last call

Tim provided a brief recap of the last call.  The group spent the call discussing entrustability.  Ian raised the point that some organizations would want to specify at what point entrustability occurs.  Based on the examples the group reviewed, it is the implementer who decides what the thresholds are; Ian thought the publisher might want thresholds published in the performance framework.  There was a general discussion about thresholds: entrustability is not the only threshold; moving to the next level of training is another example.  Valerie agreed to go back to Ian to get published examples, and Mary Jo suggested some examples from nursing.  Valerie agreed to think about how statements about threshold options could be accommodated within the framework.  Mary Jo moved to accept the minutes as submitted, and Kirstin seconded the motion.

    2.  Review options for specifying thresholds, including level of entrustability

Valerie continued the discussion on creating a word set for the performance framework.  Some commented that it would be customized by program.  Two options were illustrated.

Option 1: Build a threshold framework into the performance framework, as shown on slide 6. The illustration is an example from Internal Medicine, consistent with what we’ve seen so far. Level 4 is labeled ready for unsupervised practice, and there is a threshold description indicating this is the level for entrustment. 
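As a sketch only, with element names invented for illustration rather than taken from the actual schema, option 1 might look like:

```xml
<!-- Option 1 sketch: the threshold is embedded directly in the
     performance framework. Element names are illustrative, not
     taken from the actual specification. -->
<performance-level>
  <score>4</score>
  <label>Ready for unsupervised practice</label>
  <threshold-description>
    This is the level at which the learner may be entrusted to
    perform the activity without supervision.
  </threshold-description>
</performance-level>
```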

Option 2: Develop a companion specification that describes the thresholds associated with a performance framework. This option answers the call for flexibility. Slide 9 shows what that separate specification might look like. The threshold would point back to a component in a performance framework and provide a threshold score and description. Valerie commented it is more flexible specifying the score. It doesn’t have to be an exact level of performance; it could be 3.5. 
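A hypothetical sketch of option 2, again with invented element names and an example URI, might pair a separately published framework with a threshold document that points back into it:

```xml
<!-- Option 2 sketch: a companion specification referencing a component
     in a separately published performance framework. Element names and
     the URIs are illustrative only. -->
<threshold-set framework="http://example.org/frameworks/im-milestones">
  <threshold>
    <component-ref>http://example.org/frameworks/im-milestones#pc1</component-ref>
    <score>3.5</score>
    <description>Minimum score for entrustment in this program.</description>
  </threshold>
</threshold-set>
```

Because the threshold lives in its own document, a program could change the score from 3.5 to 4 without touching, or re-identifying, the framework itself.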

Slides 10 and 11 describe the pros and cons of the two approaches.  The pros of option 1 are that it could be done now, it is fairly simple, and regulatory bodies can easily set thresholds for entrustment.  The cons are that it is inflexible: a change in threshold would be a substantive change and would require changing the performance framework.  Slide 11 shows the pros and cons of the second option.  It is flexible; programs can modify the thresholds associated with a framework without making changes to the framework itself.  One program could set a threshold at 3.5 while another sets it at 4.  The cons are that it requires a separate specification, which would take several more months to develop.  It is much more complex and would result in two different XML documents.  Regulatory bodies that wanted to distribute a performance framework with thresholds would need to distribute two XML documents, which would be hard to enforce.

Bob commented that over the next five years educators will work to determine thresholds.  Many haven't worked with milestones or EPAs, so we would want flexibility.  We may think level 3 is the entrustment level, but it may not be.  The ultimate outcome would be standards across programs, but that standard isn't set yet.  Valerie suggested one way to accommodate that would be to include thresholds in the performance framework but make them optional.  Framework developers could leave that element blank, allowing it to be added at a later time.  As this research uncovers new information, there may be other changes as a result.  Bob agreed that is likely to happen.

Rosalyn asked what type of threshold change would require a change to the identifier in the performance framework.  Valerie replied that any substantive change would make it a new framework; under option 1 that would include changing where thresholds are or deleting a threshold.  Rosalyn commented that this principle would apply to any other change in the performance framework; it is not specific to thresholds.  She added that the group had considered performance of competencies fairly distinct from the competencies themselves, whereas thresholds seem integral to the notion of performance.  Are thresholds so distinct that they deserve a separate specification?

Tim commented that we should come back to Peter's view on standards: it's what goes over the wire.  Programs could still set thresholds in their own curriculum management systems; the question is whether that data needs to go anywhere.  Rosalyn commented that if you came up with a system that satisfied the regulatory bodies, people would want it.  Valerie commented that what goes over the wire would likely be individual learners' accomplishments.  You would want to send the decision of entrustment, which is different from indicating a level of entrustability.  Tim questioned whether the following use case might occur: the American Board of Surgery creates a milestone framework without thresholds, the University of California implements it and adds thresholds, then the University of California repackages that performance framework and sends it to someone else.  Might that happen?

Bob mentioned he could see that happening if there were pilots related to determining thresholds.  That data would be important for standard setting purposes.  Rosalyn agreed that would be fine.  Tim suggested you might send data to a regulatory body. They could then integrate it and release a new version of the performance framework.  Valerie agreed and commented that in a research scenario you know everybody is using the same basic framework except the thresholds.

Tim noted another distinction between the two options.  With option 1 you are fixing a threshold to a position, for which you have labels; option 2 fixes it to a score.  The latter possibility is realistic.  He asked whether thresholds will always be attached to a level, or whether people see a threshold attached to a score in between levels, and asked the implementers if tying a threshold to a score would work for them.  Paul commented that the parent object could have a threshold value; anything under that could be considered entrustable.  Toufeeq asked how this could be used with the RIME (Reporter, Interpreter, Manager, Educator) framework.  Valerie explained we would say that RIME is a performance framework not tied to a specific competency.  Say a learner has to be at the interpreter level in order to progress in training; you would be able to indicate that.  If the manager level has a score of 3, you might need a score of 3.5 for entrustment.  Tim thought the group should go with the last option, using just the score.  It would be an optional element, and there could be more than one threshold per performance level set.

    3.  Review updates to specification and schema (also illustrative PowerPoint)

Valerie continued to the illustrative PowerPoint linked on the agenda.  The second slide is a visual overview of the specification.  There are a few key changes: 1) in the component, instead of background and note, there is additional information; additional information is also within the performance level and the indicator; and 2) the data model for score is different: a score can now be a single value or a range.  Valerie noted Susan Albright provided examples where the first level, low performance, had a range of 1 to 3; a mid-level performer had another range of values, and so on.  Tim commented that he has seen instances where a range of scores has a label and the scores within that range also have labels.  Mary Jo asked why have ranges unless you can differentiate among scores in that range.  Valerie thought that was a very good question and asked whether others had seen this.  Tim noted that for low performance you may want to add a label to every score within that range.  Valerie commented that if we do that, we are getting into nested performance levels.  Rosalyn asked if this was representing that a learner had an opportunity to try something three times and those three scores were grouped together.  Valerie replied that was not the intent.  Tim suggested sending examples around.
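The two score shapes discussed, a single value and a range, might be sketched as follows; element and attribute names here are invented for illustration and may differ from the actual schema:

```xml
<!-- Sketch of the revised score data model: a performance level's score
     may be a single value or a range. Names are illustrative only. -->
<performance-level>
  <label>Low performance</label>
  <score-range min="1" max="3"/>
</performance-level>
<performance-level>
  <label>Mid-level performance</label>
  <score-range min="4" max="6"/>
</performance-level>
<performance-level>
  <label>High performance</label>
  <score>7</score>
</performance-level>
```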

Mary Jo commented that as an evaluator she would have difficulty deciding where to put students.  Valerie agreed and added that despite those concerns we have to consider what people are doing.  Valerie and Tim will follow-up offline regarding the data model. 

Valerie continued with changes to the slides.  On slide 5, the way competencies are described reflects the use of Dublin Core and RDF; that will only make a difference to technical folks.  On slide 6, a score is indicated as a single value.  On slide 9, there is additional information instead of background; the background provides context for this set of performance levels for this competency.  Tim asked whether a component could have more additional information elements.  Valerie said yes: on slide 14, there is additional information labeled background, and the next slide shows several more additional information tags for references.

Mary Jo commented that examples might be used to differentiate scores in a range. Programs could define those examples themselves. That would provide the assistance the evaluator needs. Valerie agreed and commented that approach could be accommodated in this version of the spec.

Paul asked how the range information would be used.  Valerie commented it's important for the evaluator and for the system that has to display the evaluation instrument; labels will be important in that context.  Rosalyn thought satisfactory performance may be a range of scores; aren't you going to want to know whether they are satisfactory or not?  Valerie agreed with Rosalyn and shared that this new approach will accommodate that.  Suppose unsatisfactory has a score range of 1 to 3: do you put an additional label on 1, 2, and 3 even though they are all unsatisfactory?  The examples we've seen do not provide individual labels for 1, 2, and 3 if they are scores within a range.

Tim confirmed the group came to a consensus that they do not want nested hierarchical performance levels, and that they support Valerie's proposal to attach a label to a single score or to a range of scores, but not both.  If you really want to do both, you'll have to use the additional information element to describe it.  Mary Jo thought that made sense.  Paul commented that this makes implementation easier; it is more feasible to implement something that doesn't have an additional level of hierarchy.

Tim added that we should come up with fake examples of a scale that would have labels as well as groups and how that is represented by using additional information.  Valerie will draft an email to Ian letting him know what the group settled on. Valerie offered to talk to Ian next week.   

  4. Open discussion


  • We will include thresholds with label, description, and score within the component of a performance framework. Thresholds will be optional, and there may be multiple.
  • We will not accommodate nested performance levels.
  • A label may be attached to a single score or a range of scores, but not to a score within a range.
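The decisions above might be sketched together as follows; element names are invented for illustration and are not taken from the actual schema:

```xml
<!-- Sketch of the group's decisions: optional, possibly multiple
     thresholds (label, description, score) inside a component; a label
     attaches to a single score or a range, never to a score within a
     range. Element names are illustrative only. -->
<component>
  <performance-level>
    <label>Unsatisfactory</label>
    <score-range min="1" max="3"/>
  </performance-level>
  <performance-level>
    <label>Ready for unsupervised practice</label>
    <score>4</score>
  </performance-level>
  <threshold>
    <label>Entrustment</label>
    <description>Minimum performance for entrustment.</description>
    <score>3.5</score>
  </threshold>
</component>
```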

Action Items

  • Valerie will revise the specification and schema to reflect the group’s decisions.
  • Valerie and Tim will work on illustrations showing how to accommodate different approaches to grouping levels.
  • Valerie will draft an email to Ian letting him know what the group settled on.