June 3, 2013
10 AM PDT/12 PM CDT/1 PM EDT/6 PM BST/7 PM CEST
Attending: Nabil Zary, JB McGee, co-chairs; Matt Cownie, Andrzej Kononowicz, Michael Steele, Luke Woodham.
Nabil asked those who attended the meeting in Baltimore to comment on the minutes.
Matt commented that they have three projects coming out at the same time. He will have things to share in the fall. With regard to the conference, a lot of time was spent discussing linking virtual patients to competency frameworks. That was a main thrust.
Nabil commented that people are now looking at virtual patients as a learning activity as opposed to purely content.
2 Discuss use cases for version 2.0 and other decisions from 4/8 and 3/7
a) connect VPs to competencies, Milestones, and EPAs
b) consider how Experience API could be used for virtual patient logging
c) other integration or version 2.0 requirements (were there advancements from Open Labyrinth 2 to discuss? Integration with other types of simulation, or other systems?)
Nabil commented that, if you look at the minutes, decisions were made in past meetings. Connecting to competency frameworks, milestones, and EPAs was a main topic of the in-person meeting.
Valerie commented that that was a major topic of discussion. During the plenary, John Stamper focused on using big data to make learning more efficient. We also discussed how the Experience API could help capture more granular data about learner activities. There was also a lot of discussion about linked data and how assessment and activity data can be linked to competency frameworks. John Jackson from UVA commented that he wants to know what actions within a virtual patient are associated with a particular competency.
There was also a lot of discussion about Milestones and Entrustable Professional Activities, or EPAs. Milestones are generally defined as developmental steps that describe progression from a beginning learner to the expected level of proficiency at the completion of training. The accreditation organization for residency training programs in the US, the Accreditation Council for Graduate Medical Education (ACGME), has mandated that all specialties will create milestones for the assessment of residents. The milestones generally define a spectrum of behaviors, from novice to expert, for each competency defined by the ACGME. EPAs are one tool being used for milestone assessment. The concept was originally developed by Olle ten Cate in the Netherlands for the development of curriculum for physician assistants. Because it was a new profession in the Netherlands, and because each PA’s expertise was so customized, it seemed more appropriate to define the professional tasks the learner would be entrusted to do. The EPA is then linked to competencies, so it is a more holistic way of assessing competencies.
Much of MedBiquitous’ active standards development work focuses on competency-based learning and assessment. We use a more general term than milestones, performance frameworks, and that is a standard under development.
Valerie mentioned there is also the Experience (or Tin Can) API, developed by ADL, the organization that created SCORM. It is based on activity streams, a social networking technology used to communicate about a learner’s activities in the form “I did this.” Statements may include verbs and qualifiers.
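As a rough illustration of the statement format mentioned above, an Experience API statement follows an actor-verb-object pattern. This is only a sketch: the verb and activity URIs below are invented placeholders, not identifiers defined by ADL or by any MedBiquitous specification.

```python
import json

# Sketch of an Experience API ("Tin Can") statement: actor-verb-object,
# i.e. "I did this." The verb and activity IDs are hypothetical placeholders.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.edu",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://example.org/verbs/diagnosed",  # assumed verb URI
        "display": {"en-US": "diagnosed"},
    },
    "object": {
        "id": "http://example.org/vp/case-42",  # assumed activity URI
        "definition": {"name": {"en-US": "Virtual patient case 42"}},
    },
}

# Statements are exchanged as JSON documents with a Learning Record Store.
print(json.dumps(statement, indent=2))
```

The point raised later in the call, that the specification does not define the verbs themselves, is visible here: the `verb.id` is just a URI, and its meaning depends on a shared verb ontology.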
Nabil commented that it is valuable to hear from the working group whether this is a good direction. What we have today in the Virtual Patient standard is a SCORM package: there is metadata to describe the virtual patient, and a package to deliver it. Version 2.0 may take a new direction that looks at a virtual patient more as a learning activity. The Experience API could be a way to log virtual patient activity; then we need to decide what is worth logging. If we address competencies, milestones, and EPAs, would that mean a need to extend the metadata? This is clearly describing the virtual patient in a new way.
Michael commented that his perception of the Experience API is that it is related to assessment; you could bolt it on after the fact. Assessment was a weakness in SCORM. You can push very granular information if it’s valuable. It almost becomes an add-on. How do you connect the dots? I perform a neuro exam, I recognize a cranial nerve deficiency, that maps to an objective, which maps to an EPA and an ACGME competency. You could jump from “I did this” to what level I did it at. Is it assessment? How do you connect back to the competency framework?
Matt commented that there are more APIs there. There is the Activity API, which defines activities; then you can say you’ve done them. The idea of exporting learning objects breaks down when so many systems are games; things are changing so fast. You need to link them together. That’s the way things are going: instead of exporting, log what people have done. Eventually, you won’t export; you’ll play on different systems and log what happened.
Nabil commented that this has major implications for the standard. Should we keep content where it is?
Luke commented that in order to discuss this, we need to define clearly what the goal of the standard is and in what domain it sits. Competencies relate to metadata, although they may be metadata related to a certain part of the case. Does the virtual patient standard need to change, or does the Healthcare LOM standard need to change? If you are logging activity and throwing out mechanisms to interact, it’s not the virtual patient, it is how it works. There was some scope for things like that in v1, envisioned in the player, but no one opted for the player option. It does change the scope from what was used in v1.
Michael asked whether it is possible to bolt Tin Can API support onto the existing standard without changing it. We don’t want to throw away v1. You could map actions to the competency framework. The biggest problem with Tin Can is that they decided not to define verbs, and the spec is useless without a common understanding of verbs. A verb ontology is what makes it powerful. Is this a bolt-on?
Valerie commented that the Learning Objects working group is updating Healthcare LOM to address the issue of pointing to competency frameworks. But no other working group is developing a verb ontology for the Experience API.
Nabil asked how much of this is part of further work on the VP standard, and how much is a feature of the implementing system. It would be nice to describe a virtual patient based on competencies, milestones, and EPAs. Are we moving away from a technical spec to an educational modeling spec? If this activity happens, how does it relate to milestones? That is a tough one.
Michael commented that he is hoping we don’t have to solve that in our working group. What would help him is knowing that a learner just successfully diagnosed this disease: we need to send that message to a Learning Record Store (LRS). Here is the verb you would use, and some system elsewhere picks it up and says, I know what to do with that. There is another layer in between: learning objectives can be mapped up to competencies. We would leave it to someone else to figure out the rules for how a competency is achieved.
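A minimal sketch of the layering described here, where a virtual patient action maps to a learning objective and a separate system maps the objective up to a competency. Every identifier below is invented for illustration; none of them is defined by MVP 1.0, the Experience API, or the ACGME.

```python
# Layer 1: virtual patient actions map to learning objectives.
# Layer 2 (maintained "elsewhere"): objectives map up to competencies.
# All identifiers are hypothetical placeholders.
action_to_objective = {
    "performed-neuro-exam": "obj:recognize-cranial-nerve-deficit",
}
objective_to_competency = {
    "obj:recognize-cranial-nerve-deficit": "acgme:patient-care",  # assumed ID
}


def competency_for(action):
    """Follow action -> objective -> competency, if both links exist."""
    objective = action_to_objective.get(action)
    if objective is None:
        return None
    return objective_to_competency.get(objective)


print(competency_for("performed-neuro-exam"))  # prints "acgme:patient-care"
```

The design point is that the VP system only needs to emit the action statement; the rules for when a competency is actually achieved live in the outer layer.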
Nabil recommended including that in the implementation guideline: how to tie the current standard to other specs.
Michael commented that maybe the first step is to come up with a best practice for how you would implement the Experience API with MVP 1.0 and see if anyone embraces it.
Matt commented he was happy to have a go and see what that looks like. They already track loads of data. He will see how he would use it against an LRS and what the ontology is. He will have a go and distribute to everyone else.
Michael asked what assessment information Matt is pumping out of the system and what is done with that information. That is where the magic occurs. He has not seen anyone translate it into something an educator could consume.
Matt commented that there was an interesting presentation from John Stamper at Carnegie Mellon about DataShop and making sense of that data. It takes a lot of analysis. It’s a chicken-or-egg thing.
Nabil agreed that having Matt test out the API was a sensible idea. That’s also a good way to drive improvement of the current version.
JB asked if we have to do anything to the standard to enable that. Nabil replied no. Matt will do a practical trial and highlight the strengths and weaknesses of the current spec.
Nabil commented that a VP is one type of learning activity to achieve milestones. We can have competencies and milestones as part of the metadata. To really make sense, we should work with the competencies working group. They will map down to the activity level.
Michael commented that they have an opportunity to do a more granular level of detail. They did the eye exam well, but the strength test was not good; that may be an area for intervention. If the competencies working group came up with a list of verbs and targets, we could take that back to educators and ask them to map to a verb and target.
JB agreed that would add to the quality of the cases. Michael commented it would be like a cookbook of competencies. JB added that other organizations may pick up on it, like AAMC. There is value in coming up with those verbs.
Nabil summarized the results of the call. The current version may hold up well. We will take into account the work on the Experience API and others, and increase implementation. If what people want is access to data, that will increase the adoption rate.
JB agreed; more concrete examples may accelerate adoption.
3 Updates to Frequently asked questions
Valerie incorporated comments from Matt regarding multiple choice questions.
4 Implementation guide progress
Valerie agreed to continue working on the implementation guideline. Text from eViP has been added, but not edited.
5 Open discussion
Michael commented that TATRC is funding an open source physiology engine. They are collaborating on that, and it is kicking off soon. There is a patient description file that goes into great detail, like Ven ductility, etc. We may want to think about it for version 2. You can say 53-year-old patient with a heart attack, and it can mathematically predict what happens when you give a certain drug. They are defining that file in the next year.

Also, AAMC has partnered with Khan Academy to prepare for the 2015 MCAT. They’ve decomposed the MCAT into learning pieces and are asking people to generate content to teach those little pieces. If they did that for the whole curriculum, that would lead to the action verb work. He asked if anyone had gone through the Khan Academy medical science topics. They hired an MD to spearhead it, and they have a contest to submit videos. But it is not interactive.
JB commented that he has looked at it, and their science educators would have a lot to add. It’s a different approach, with lots of mistakes, and hard to follow.
Nabil commented that Karolinska recently signed on for edX. He is working on four MOOCs in medical science.
- Matt will test out the Experience API with the current MVP spec.
- Valerie will continue to update the implementation guideline.