Meeting Information

Date: February 12, 2013

Time: 7 AM PST / 8 AM MST / 9 AM CST / 10 AM EST / 15:00 GMT

Call-in Numbers:

NETHERLANDS +31-20-718-8593
SWEDEN +46-8-566-19-394
SINGAPORE +65-6883-9223
UNITED KINGDOM
  GLASGOW +44-141-202-3228
  LONDON +44-20-3043-2495
  MANCHESTER +44-161-601-1428
USA +1-203-418-3123

Passcode: 1599520

Attending: Rosalyn Scott and Tim Willett (Co-Chairs); Susan Albright, Connie Bowe, Terri Cameron, Mary Jo Clark, Stephen Clyman, Robert Englander, Simon Grant, Linda Gwinn, Steve Lieberman, Karen Macauley, Valerie Smothers, and Laura Vail.

Agenda Items

  1. Review minutes of last call

Tim began with a recap of the previous meeting. The group continued with a review of performance frameworks, and Mary Jo talked about frameworks for nurse practitioners. Josh provided an overview of the National University of Singapore's approach to performance frameworks. The group agreed to look for milestones from other health professions, and there was a brief discussion of definitions. Mary Jo sent comments about definitions offline, and Valerie was going to update the document.

Simon proposed a change to the minutes in section 2d: Valerie took out the words "qualifications framework" and changed them to "frameworks." Mary Jo made a motion to accept the minutes, and Rosalyn seconded the motion. The minutes were accepted with the approved changes.

2.  Update on Internal Medicine milestones (see article and milestones document)

Kelly was not on the call, so Tim suggested coming back to this topic later in the call.

3.  Discuss standards research report

Valerie continued with a discussion of the standards research report. Laura Vail conducted the research, scouring standards development websites for existing technical standards related to the performance framework. The organizations researched can be found in section 2 on page five. She researched European standards and workshop agreements on the CEN website, the e-Competence Framework, CEDS from the US Department of Education, the Ed-Fi Alliance, Europass, HR-XML (a human resources standards body), InLOC (an effort Simon is involved in), and ISO standards. The summary begins on page five. In a nutshell, a lot of work has been done, but much of it doesn't take into account the unique models of medical education.

The Ed-Fi Alliance is designed to support K-12 education; it provides a set of standards enabling K-12 schools to do their business and report performance data back to the state, parents, and others. There is a need for interoperability among K-12 schools in the US; Ed-Fi and CEDS are designed to offer that. Performance levels are defined as a range of scores, with no connection to a competency; "proficient" could be 80%, for example. We didn't see anything in those standards related to levels of competency or describing a level in relation to a competency. Laura commented there was nothing specific describing skill sets. Rosalyn asked about assessment. Valerie shared that there is a lot on assessment, with very detailed mechanisms for conveying assessment results, and she thinks there is a way to associate an assessment with a competency. But there is no notion of milestone achievement.
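
To make the contrast concrete, here is a purely hypothetical sketch of a score-range performance level of the kind found in the K-12 standards (the element names are invented for illustration and are not actual CEDS or Ed-Fi syntax):

    <!-- Hypothetical: a performance level defined only as a score range -->
    <PerformanceLevel>
      <Label>Proficient</Label>
      <MinimumScore>80</MinimumScore>
      <MaximumScore>89</MaximumScore>
      <!-- No reference to a competency; the level is tied to a test score alone -->
    </PerformanceLevel>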

Tim commented that their performance levels are the grades students get on a particular test. Valerie agreed. She mentioned that on pages six and seven there is information about the Common Education Data Standards (CEDS), which Ed-Fi builds on. Performance is really a score range in this model.

Rosalyn commented that it doesn't sound like activities are related to milestones; they seem like isolated events. Valerie confirmed there is no connection to an overall model of competency; competencies are integrated in this framework, but not in the way we have been looking at. Connie mentioned that their objective seems to be to create standards to compare schools and identify outliers, whereas we seem to be working at a granular level for individual trainees at medical schools. Mary Jo mentioned that is why she added the two aggregate-focused use cases that allow you to get an aggregate picture from a program perspective. What we're aiming for is capable of doing that; you just need to aggregate the data in different ways.

Tim commented that, from a data point of view, the specifications we are working on should support all those use cases. He thinks we should be mindful of both approaches and make sure we can support either one. Rosalyn mentioned the continual attention to how one organization might adopt the standard versus the standard accommodating all organizations; accommodating all doesn't mean every organization has to use every piece of it. Tim noted different organizations test differently; it may not be valid to compare data. Rosalyn thought it was important for us to be able to say that a particular assessment was done in a particular way; that situation can be fleshed out at a later time. Tim commented that if you have two campuses assessing the same competency and reporting data in a standardized way, you can compare the two sites to ensure the test was valid at both. He sees the concern, but he is not sure it impacts the data specification.

Valerie thinks it points to the key differences. In K-12 education you have standardized tests; once we get to training programs, we don't have that level of standardization. Having a tool could lead to standardized instruments, but it's still pretty early for that, and it is not clear politically whether any attempts at standardization are feasible. Mary Jo noted that in some ways we have that at the aggregate level for accreditation, looking at pass rates on certification exams. Tim asked if there was an example of an exam administered in multiple locations at multiple times. Mary Jo answered that nursing has national certification exams, intended to demonstrate that graduates of pediatric nursing programs meet national competency standards for PNPs. That is one example; there are certification exams for many nursing specialties, offered at specific times during the year. Valerie commented that process is separate from the educational process. Mary Jo commented that accreditors do look at pass rates, the same way we look at licensure at the end of training.

The group discussed the European set of standards. Europass indicates levels of language skills based on the Common European Framework of Reference for Languages. There is an online tool you can use to create a CV for an individual that indicates their levels of language skills, and each level is further broken down with behavioral descriptors. There are XML schemas available, and the way those schemas are written, performance levels are described in free text.
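
As an illustration only (a simplified sketch, not the actual Europass schema), a language skill entry might look something like this; the level code comes from the CEFR scale (A1 through C2):

    <!-- Hypothetical sketch of a Europass-style language skill entry -->
    <LanguageSkill>
      <Language>Dutch</Language>
      <ProficiencyLevel scale="CEFR">B2</ProficiencyLevel>
      <!-- Behavioral descriptor carried as free text, as the schemas allow -->
      <Description>Can interact with a degree of fluency and spontaneity with native speakers.</Description>
    </LanguageSkill>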

Simon agreed, provided there is a better way of doing the XML for that. Susan thought it seems to have milestones of a sort and then performance levels, and she likes the way it is organized. Simon mentioned Europass allows you to create text-oriented XML. In the last couple of weeks they have finalized proposals to extend InLOC to allow the representation of Europass CVs. Simon added that InLOC is out for public comment as a CEN Workshop Agreement; the review lasts 60 days.

Valerie continued with a review of InLOC. It provides a very thorough and flexible model for representing competency and performance frameworks. The InLOC data model uses the concept of triples, similar to the subject-verb-object structure of sentences. Triples are used in a lot of semantic web technologies because they are good for describing relationships between two different things. In the example given on page 15, "history taking" has performance level "collects comprehensive history without prioritization"; the triple expresses a relationship for a particular competency object. InLOC is very flexible and very abstract, Valerie noted, and the data model is hard to follow. Rosalyn commented that the rest of us will be lost as well, then.
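
To make the triple idea concrete, the page 15 example could be rendered schematically as follows (a hypothetical, InLOC-flavored notation, not actual InLOC syntax; the URLs are invented):

    <!-- Hypothetical triple: subject - relationship - object -->
    <association>
      <subject>http://example.org/competency/history-taking</subject>
      <relationship>hasPerformanceLevel</relationship>
      <object>http://example.org/level/collects-comprehensive-history-without-prioritization</object>
    </association>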

Simon shared that the complexity is a tradeoff; the model is on the abstract side. He envisions that end users will not see that complexity; it exists only at the implementer level. The logic is straightforward, and the specification is optimized so developers can get into it easily. The challenge is to present end users with tools that let them use it easily. Valerie noted the challenge is that often there is not a high level of technical sophistication in the organizations adopting MedBiquitous standards, and simpler works better.

Connie mentioned the approach seems similar to RIME; people can easily interpret the framework and apply it. Simon shared that InLOC structures are aimed at enabling developers to create simple tools that are easy to deal with technically. Valerie said it absolutely is dependent on tool developers developing interfaces. Steve shared that it matches the residency milestones in medicine and pediatrics, and the performance levels feel consistent with the way the ACGME is going. He worries it would be difficult to apply in preclinical education, since milestones are presented in clinical performance terms; would it work for preclinical work? Valerie thought that if performance levels are framed correctly, it would.

Simon commented InLOC doesn't apply just to triples. There is a relationship between a competency and a performance level, and different values for different level descriptors. InLOC is designed to be general purpose. Tim asked if it would be conceivable to take the portion of InLOC representing the frameworks we have looked at and make it more accessible by picking and choosing the structures as they relate to a health care framework. Simon thought that was worth a try and suggested taking a subset of the possible relationships between competency and sub-competency and representing just the things you need.
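
One way to picture that subset is a profile that keeps only the relationship types a health care framework needs, for example hasPart, linking a competency to a sub-competency, alongside the hasPerformanceLevel relationship sketched earlier (again hypothetical notation, not actual InLOC syntax):

    <!-- Hypothetical: a competency broken down into a sub-competency -->
    <association>
      <subject>http://example.org/competency/patient-care</subject>
      <relationship>hasPart</relationship>
      <object>http://example.org/competency/history-taking</object>
    </association>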

Susan asked Simon how far along the InLOC process was. Simon answered that they have reached agreement about the form of the (generic) information model and how it can be bound; an XML binding is coming soon. They don't yet have a tool that supports it, but they are working toward a demonstration tool that provides an interface to a framework and steps you through it. Tim thought it was worthwhile going through all the frameworks and the types of relationships they represent and comparing them to the types of relationships InLOC can support. Valerie is working with staff to see what performance framework data would look like, and its relationship to a competency, as we do our analysis.

Simon offered to go through that and help put the data into InLOC form. Tim asked, as far as fields go and the actual data contained within a performance level, how does InLOC match up with the data fields we anticipate? Simon asked which data fields. Tim answered: for a given performance level, fields with an ID and URL for that level, plus a title and description. Simon was confident InLOC has the right kinds of level types and fields.
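
For reference, a minimal sketch of the fields Tim described for a single performance level (the element names and content are invented, for illustration only):

    <!-- Hypothetical record carrying an ID, URL, title, and description -->
    <performanceLevel id="pl-3">
      <url>http://example.org/frameworks/history-taking/levels/3</url>
      <title>Level 3</title>
      <description>Collects and prioritizes a comprehensive history independently.</description>
    </performanceLevel>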

Valerie noted there was one other standard to call attention to, in section 3.9 at the bottom of page 16. ISO has a model for competency, Information Technology for Learning, Education and Training, and part of it addresses proficiency level. This came out of work from Japan, where standardized testing is nationalized and very specific. One thing to note is that it gives lots of examples, like Judo on page 19, where you have the notion of a sequence of proficiency levels: a Judo student may begin at Kyu 10, with Kyu 1 being the highest Kyu, and then move on to Dans, with Dan 1 being the lowest and Dan 10 the highest. Each level is seen in relationship to the total number of levels, which adds context. The ISO standard is currently in ballot and has not yet been approved. It is an information model, not a binding, similar to what Simon described; that means there is no XML schema we could use. Tim noted it is copyrighted and you have to pay for it. Valerie thought that could make it problematic for the larger community to adopt.
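
The sequencing idea might be sketched as follows (hypothetical markup, not the ISO model's actual syntax); carrying each level's position along with the total number of levels keeps that context with the data:

    <!-- Hypothetical: Judo Kyu levels expressed relative to the full sequence -->
    <proficiencyScale name="Judo Kyu" totalLevels="10">
      <!-- Kyu counts down: 10 is the entry level, 1 is the highest Kyu -->
      <level position="10" label="Kyu 10"/>
      <level position="1" label="Kyu 1"/>
      <!-- A separate Dan scale then counts up from Dan 1 (lowest) to Dan 10 (highest) -->
    </proficiencyScale>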

Tim asked Valerie where we go from here. Valerie commented that we move forward to develop a data model, taking this work into account. It would benefit us to come up with something relevant to the health professions community, and she is ready to take a stab at that. Karl Burke from Hopkins is working with Valerie to develop a data model based on the frameworks reviewed so far; having information from Karl will help develop our data model.

Tim asked if they will have something to look at for the next call in two weeks.  Valerie commented they will shoot for that and see how far they can get. 

4.   Review updates to summary of frameworks reviewed

Tim mentioned getting Kelly on the next call to briefly update us on Internal Medicine.  

5.   Review updates to definitions

Tim noted the definitions had been updated, taking into account comments from the last call. Mary Jo stated she did pass the definitions along to AACN and they were going to look at them, but she has not received any feedback yet.

6.   Open discussion

Decisions

  • It was decided that none of the existing specifications/standards perfectly aligns with the needs we have identified so far, but that our work may be informed by some of them (e.g., InLOC, ISO).

Action Items

  • Valerie to begin working on a draft data model.