
Meeting Information

Date: May 8, 2012

Time: 8 PDT/9 MDT/10 CDT/11 EDT/16 BST

Attending: Linda Lewin, Chair; Susan Albright, Carol Carraccio, Maureen Garrity, Patty Hicks, David Melamed, Howard Silverman, and Valerie Smothers.

Agenda Items

1 Review minutes of last meeting

The minutes were approved as submitted. 

2 Overview of specification

Valerie continued with an update on the current version, .011, dated May 1. She noted the spec is incomplete given the early stage of the process. She began with the terminology on page thirteen. The competency working group has provided some assistance with the definitions, and she provided a link to the current list of definitions. Tim Willett and Bob Englander have been working to develop common terms for this space. Valerie asked if there were any concerns about the definitions.

Linda mentioned most of it made sense but questioned the definition of learning object. Valerie explained that the term is commonly used in educational and technical spaces, but she is unsure whether we need that definition in our spec. She agreed to take it out. Carol asked about the term competency object. Valerie explained that we want to be able to reference competencies and use them in frameworks that are comprehensive and show relationships between discrete competency descriptions. Someone asked what would be an example of a competency object. Valerie replied that the ACGME has put out a competency framework; CanMEDS is another competency framework. We deconstruct the pieces, and those discrete competency statements are competency objects.
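As an illustration of that relationship only, here is a minimal sketch in Python; the class and field names (CompetencyObject, CompetencyFramework, uri, relations) are hypothetical and are not drawn from the draft specification:

```python
from dataclasses import dataclass, field


@dataclass
class CompetencyObject:
    """One discrete competency statement pulled out of a larger framework."""
    uri: str        # stable identifier so other documents can reference the competency
    statement: str  # the competency text itself


@dataclass
class CompetencyFramework:
    """A comprehensive framework (e.g., ACGME, CanMEDS) assembled from competency objects."""
    name: str
    competencies: list[CompetencyObject] = field(default_factory=list)
    # relationships between competency objects, as (from_uri, relation, to_uri) triples
    relations: list[tuple[str, str, str]] = field(default_factory=list)
```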

Linda asked about the performance framework. Valerie answered that the performance framework is like the pediatric milestones: there is a scale or set of scales for rating performance in relation to a competency. Linda shared that it’s the tool to measure milestones. Valerie commented that there is a competency on collecting patient information during a history, for example. There are five milestones defined for that competency; those milestones give behavioral descriptions from novice to expert on a scale. That scale is the performance framework. Valerie will harmonize and use the definition that is online.
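A minimal sketch of that idea follows; the names (Milestone, PerformanceScale, competency_uri) are hypothetical, chosen only to illustrate a scale of behavioral descriptions tied to one competency:

```python
from dataclasses import dataclass


@dataclass
class Milestone:
    level: int        # position on the novice-to-expert scale
    description: str  # behavioral description for that level


@dataclass
class PerformanceScale:
    """One scale in a performance framework, rating performance against a single competency."""
    competency_uri: str
    milestones: list[Milestone]


# Only the first and last of the five milestones are shown here for brevity.
history_scale = PerformanceScale(
    competency_uri="urn:example:competency:collect-patient-history",
    milestones=[
        Milestone(1, "Gathers incomplete or disorganized information"),
        Milestone(5, "Efficiently obtains a complete, focused history"),
    ],
)
```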

Patty commented that we should have a common language for what competencies and domains are. She noted the ACGME and Internal Medicine are not using the term sub-competencies; competencies are the most granular level, and the six larger categories are domains. Valerie commented that she is working closely with Bob Englander, who is helping to craft those terms in pediatrics. In general, MedBiquitous tries to keep the perspective of where technology meets pedagogy; we would use the same technical piece to describe a domain as we would a sub-competency. She agreed to map the commonly used terms to the terms used in the technical specification to add greater clarity. Linda noted we will continue the discussion on the next call.

Valerie mentioned that the technical description of the data structure begins on page fourteen. Each school will report its own chunk of data, and everything gets reported within the context where it occurred. On page sixteen there is a picture of all data and achievement in context. In addition to data elements from the Curriculum Inventory, there are summary scores describing a learner’s overall competence. She added a link to a portfolio so that data can be referenced. She recently spoke with Helen Chen and individuals from the Stanford University Registrar’s office. They have an e-portfolio and “enhanced” transcripts, and they want more links between the two. Right now it goes in one direction: the portfolio can link to the transcript but not the other way around. The transcript does include links to university-vetted documents like dissertations. That might be a good model to follow.
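As a sketch only of what one school’s reported “chunk” might carry (the field names are hypothetical; the actual element names come from the specification and the Curriculum Inventory):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AchievementReport:
    """Hypothetical sketch of the data one school reports for one learner."""
    learner_id: str
    reporting_school: str
    context: str  # the course, clerkship, or year in which the achievement occurred
    # summary scores describing overall competence, keyed by competency identifier
    summary_scores: dict[str, float] = field(default_factory=dict)
    # optional link to an e-portfolio entry holding supporting evidence
    portfolio_url: Optional[str] = None
```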

Assessment results are on page twenty-six; they leverage specifications from PESC (the Postsecondary Electronic Standards Council). PESC has developed formats for transcripts and test scores (e.g., SAT scores), and its specifications are already used by many organizations. Assessment results include the score and the evidence used to obtain the score, and there may be sub-scores broken down by competency or something else.
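A small sketch of that shape, under the same caveat that the names are illustrative rather than the actual PESC or MedBiquitous element names:

```python
from dataclasses import dataclass, field


@dataclass
class AssessmentResult:
    """Hypothetical sketch of an assessment result: score, evidence, and optional sub-scores."""
    assessment_name: str
    score: float
    evidence_url: str  # pointer to the evidence used to obtain the score
    # optional sub-scores, for example broken down by competency identifier
    subscores: dict[str, float] = field(default_factory=dict)
```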

Page twenty-nine goes into detail on the score elements from PESC. You can present a raw score, percent correct, or scale score. A label score is something like “proficient,” and there is the opportunity to describe that label as well. The mastery value is pass/fail. Course grades are A, B, C, but you have to present the full scale you are using, and it gets complex. The AAMC has codes for letter grades and related things. There is the opportunity to provide GPAs and more detailed information. Norm-referenced values include the following (a brief worked example follows the list):

  • a description of the norm population,
  • rank value (if you want to express rank in relation to the size of the population),
  • percentile lower bound value (the percentage of examinees scoring below the learner’s score),
  • percentile rank value (the number of examinees below the current score, plus half the examinees at the same score, divided by the total number of examinees),
  • percentile upper bound value (the percentage of examinees scoring at or below the current score),
  • standard score value,
  • normal curve value,
  • stanine value,
  • probability value.
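Using the definitions above, here is a minimal worked sketch in Python of how the three percentile values relate; it is hypothetical and only illustrative, and the PESC implementation guide linked below remains the authoritative source:

```python
def percentile_values(all_scores: list[float], score: float) -> dict[str, float]:
    """Compute the three percentile values for one learner against a norm population."""
    n = len(all_scores)
    below = sum(1 for s in all_scores if s < score)
    at = sum(1 for s in all_scores if s == score)
    return {
        # percentage of examinees scoring below the learner
        "percentile_lower_bound": 100.0 * below / n,
        # examinees below, plus half of those at the same score, over the total
        "percentile_rank": 100.0 * (below + 0.5 * at) / n,
        # percentage of examinees scoring at or below the learner
        "percentile_upper_bound": 100.0 * (below + at) / n,
    }


# A learner scoring 80 among five examinees scoring 60, 70, 80, 80, and 90:
print(percentile_values([60, 70, 80, 80, 90], 80))
# {'percentile_lower_bound': 40.0, 'percentile_rank': 60.0, 'percentile_upper_bound': 80.0}
```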

For full details on the contents of PESC elements, you can see: http://www.pesc.org/library/docs/standards/ETSR/Implementation%20Guide.ETSR.v1.0.0.1.pdf

What you don’t see is a histogram showing a range of values and where learners fall in that range. Valerie sent an email to PESC to see if they want to add that; if not, we can add it. Median grade is missing as well.

Linda agreed we needed that data. David questioned how we would get to longitudinal performance data; the smallest container is a PGY year. Valerie commented that we have to start somewhere. As long as you can define the norm population, you can define the learner’s performance in relation to that population. Before we can get there, we need a way to exchange basic data to see how the learner did; we don’t know what the norms are. David asked if these are starting points for longitudinal calculations and norm-referenced values. Valerie commented that this was in the PESC specification and agreed to send a link to the group. The data underneath provides the details.

David shared that they are at a new starting point looking at this set of requirements, and he will give more substantial feedback later in the process. He commented that data is being calculated in real time, and it is challenging to associate a score with normative data when the normative data is constantly changing. Valerie commented that at some point you have to issue a report, and she asked for clarification on what kind of test it would be. David answered a formative assessment, in which EPAs might not change, but an individual’s performance level may change. Valerie noted there need to be best practices around the use of these standards; they should address whether, if you change the grading criteria, you report results as the same population or as separate populations so they can be compared. There have to be best practices. Susan asked if the group would recommend having a conversation with the Tufts registrar to see what they think about it. Linda commented that registrars are very focused on the logistics of what they do; it might be hard for them to see the big picture we’re looking at.

3 Open discussion

Valerie asked the group to send any comments on today’s discussion to her or the mailing list. The next call is May 22.

Decisions

Action Items

  • Valerie will update the definitions in the specification and map them to commonly used terminology for describing competencies (i.e., domains, roles, etc.).
  • Valerie will send a link to the PESC Education Test Score Report Implementation Guide.
  • Valerie will continue on the development of the specification.