September 5, 2013
10 EDT/15 GMT/16 CEST
Attending: JB McGee, Nabil Zary, co-chairs; Susan Albright, Matt Cownie, Ankur Doshi, Michael Steele, Valerie Smothers, Luke Woodham.
1 Review minutes of last meeting
The minutes were approved.
JB introduced Ankur. He has worked closely with Ankur on a course they teach called Integrated Case Studies, which comes at the end of the second year. It pulls together basic and clinical science and applies them in a case-based setting. It is traditional problem-based learning with the case on the screen. Over the last 18 months, Ankur has converted the cases to branched virtual patients. That is leading to expanded use of virtual patients at Pittsburgh. He and Nabil have expressed increased concern with making sure the group meets educator needs. Ankur will be listening in. He is not a technologist. His perspective as an educator is important for the group to consider.
Nabil added that we have had a discussion regarding the working group's level of activity and expectations for its outcomes.
JB expanded. The mission of MedBiquitous has shifted to more applied uses of technology. The education environment is looking for outcomes: data about competency, a greater focus on the results of education, etc. We’ve always supported that, but we need to put it at the forefront. How does this affect our work? If you look back at our charter, we wanted to accomplish a technical goal: “to develop XML standards and Web services requirements to enable interoperability, accessibility and reusability of Web-based virtual patient learning content.” We are ready to move beyond that. We need to update that charter. Nabil and Valerie are talking about centering that around use cases.
Nabil commented that MedBiquitous revised its mission less than a year ago. That’s not reflected in our charter. The working group has focused on how to exchange content, but as VPs become learning activities, we may need to revisit whether some aspects are missing. It’s important to revisit our group’s work in relation to other groups in MedBiquitous. We can focus on what is really needed. A second reason is that the field is evolving. The first time, we created a spec based on use cases; that was a good approach. As the field evolves, we need more use cases to see what is appropriate. Are there new ones?
Valerie added that the new mission does emphasize continuous improvement. Recent developments include standards for competency frameworks and curriculum inventory. Use cases are a good way to solidify how virtual patients teach or assess competencies and the role they play in the curriculum.
JB commented that we are still a technical standards development organization. We could do many things to support continuous improvement. For example, if you are using VPs for CME in gastroenterology, and the topic is ulcerative colitis and a student picks a drug that is off the market, the system should be able to recognize that the student has selected something that is not the current approach. We need to track that and provide feedback as well. We could take data back to whomever is tracking progress.
Susan commented that is a whole other level of complication.
JB replied yes, but it supports the direction the organization is going. Is communicating activity instead of content a use case the working group should take on? Susan replied that her comment about the layer of complication was related to the artificial intelligence that would be required to recognize an off-market drug. JB answered that he was thinking at a simpler level, of a case designed to detect who is using old drugs. The case would know this is a key decision point that indicates competence or incompetence. Susan thanked JB for clarifying.
Nabil commented that the use case principle is good; it needs iteration. The work done is of great value. But if you look at the way data is being used, most things are in the cloud. Also, from an educator’s perspective, is the way we look at VPs still relevant? Do we need to extend it? He added that if we do nothing to link to other specs, we will be left out of the MedBiquitous ecosystem. It’s time to revisit. Both technical and educator perspectives are important. We can get use cases from publications and conferences, too.
Susan commented that she was shocked at how much activity in VPs was part of the program at AMEE. It has grown exponentially. JB agreed. And that is without the working group promoting VPs.
JB and Nabil asked the group if this was a direction we should take.
Luke commented that developing use cases is a good thing from many points of view. It will allow us to understand what we are trying to do. Use cases will also be interesting from the perspective of others. We can disseminate the work. It helps to explain the concept. At AMEE, many people hear about VPs, but there is not a lot of discussion about the different ways in which they are used. A series of use cases could kick off such a dialogue. Michael agreed. Matt commented that one thing authors struggle with is pedagogy. That is still fluffy.
JB asked how we can operationalize this. It is a challenge to get work done with monthly calls. A specification requires focused effort. How can we get use cases written up?
Susan offered to go through the AMEE conference programs from the last 5 years and gather some data on what the VP presentations have been about.
JB agreed that would inform the use cases. Also getting Ankur involved could inform us. What does an educator want to get out of virtual patients from a data standpoint?
Susan commented that the CLIPP cases do not conform; should we consider them in thinking about use cases? We should think about those as well.
JB agreed. There may be reasons for non-conformance; activity tracking would be a more compelling standard.
Valerie offered to start a page on the wiki for v2 use cases.
Susan asked if we should do a survey on the AAMC website with Morgan. Valerie thought that was a great idea.
JB asked do recent publications address that question? Valerie offered to investigate that.
Nabil commented that Andrzej Kononowicz is doing a review of VPs. He can ask him. It’s quite extensive. When people use a VP in their studies, he looks at what those VPs are. It’s a work in progress. He can ask him to join the next call and summarize the different types of VPs he found.
JB added that maybe on the next call Ankur can talk about types of VPs and what data he might want out of a VP. Ankur agreed. JB added that he can jot down things they are doing at Pitt, and also what Decision Simulation customers are doing. On the next call we can collate ideas for use cases.
Nabil agreed that was a good next step. Then we can discuss the relevance of the current standard.
3 AMEE update
JB put on the wiki a reference to a talk about a BEME review coming out in October (see http://www.bemecollaboration.org/Reviews+In+Progress/Virtual-Patients/). There is shared data coming out of assessment on research of effectiveness. There was lots of data presented. There’s nothing shocking. VPs are more effective than nothing and more effective than traditional teaching. There is no difference with PBL, which is not unexpected. He is looking to see what designs they used and how far back they went. It’s something we should refer to when designing use cases.
Susan commented that she didn’t go to all the VP sessions. The only one she read in detail was Nabil’s.
Nabil explained that they got help to set up OpenTusk at KI. Faculty developed progressive VPs in primary care. They wrote it up and presented at AMEE. People are aware, and they use it.
Susan added that cases were validated by primary care physicians and used by primary care students. She forwarded the abstract to their family medicine course director.
Nabil commented that each specialty has its own model. They use a conversational model with lots of videos. And they created the cases themselves using OpenTusk.
JB noted that they often receive comments regarding how structured templates should be. In this case, they figured out how to use the tool to make it work the way they wanted to.
Nabil agreed. The role of the educator is important. Only people who teach can have that perspective.
Luke commented that he saw some presenters exploring how VPs can be used in experiential learning. There were a number of presentations looking at VPs on mobile devices. Adrian from Glasgow had a presentation on an application he’s developed.
JB asked whether they use the standard.
Luke replied he doesn’t think so. It’s a linear model, stand-alone, not shareable.
Matt commented that they have had repeated arguments about what it means to play a VP, within their team and with academics. You can do so many things with it. It’s very hard to come up with a simple way to report what happened. They also struggle with SCORM. Tin Can has been renamed the Experience API (xAPI). He has been interested in having a better way of reporting that makes sense. That’s where the interest came from. They would like to link into a standard place to dump data and have someone else make sense of it. A guy from Chicago was doing interesting things with MCQs, but that was massive. He could see it extended to VPs easily. What does it mean if someone goes the wrong way in the graph? Can the data we have be shoved into xAPI successfully?
Matt walked through the PowerPoint. There is a simple actor-verb-object data structure. The actor can be the learner or instructor. The activity can be a simulation, etc. ADL has a limited set of verbs. The end result is “he/she did something.” Unique IDs are used for statements. Registration identifies the session. We would need to also export what an activity means. You can group activities within a VP together, or group VPs together.
Communities of practice can develop profiles. We could build on the adl profile, or develop our own de novo.
Do we want to track rules? We would need best guidance on how to use extensions. We should also think about if we want a public repository or registry. Content brokering is possible.
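The actor-verb-object structure Matt described can be sketched as a minimal xAPI-style statement. This is an illustrative sketch only: the verb and activity IRIs below are hypothetical placeholders, not part of the ADL profile or any working group agreement.

```python
import json
import uuid

def make_vp_statement(learner_email, verb_iri, verb_name,
                      activity_iri, activity_name, registration):
    """Build a minimal xAPI-style actor-verb-object statement for a
    virtual patient interaction. All IRIs here are hypothetical."""
    return {
        "id": str(uuid.uuid4()),                # unique id for this statement
        "actor": {                               # the learner (or instructor)
            "objectType": "Agent",
            "mbox": "mailto:" + learner_email,
        },
        "verb": {
            "id": verb_iri,
            "display": {"en-US": verb_name},
        },
        "object": {                              # the activity acted upon
            "objectType": "Activity",
            "id": activity_iri,
            "definition": {"name": {"en-US": activity_name}},
        },
        "context": {                             # registration groups all
            "registration": registration,        # statements from one session
        },
    }

# Example: a learner selects a treatment option at a decision point
# in a branched virtual patient (names and IRIs are made up).
stmt = make_vp_statement(
    learner_email="learner@example.edu",
    verb_iri="http://example.org/verbs/selected",
    verb_name="selected",
    activity_iri="http://example.org/vp/uc-case/node-12",
    activity_name="Ulcerative colitis case, treatment decision",
    registration=str(uuid.uuid4()),
)
print(json.dumps(stmt, indent=2))
```

A community profile would pin down the allowed verbs and activity types; the point here is only that one decision in a VP graph maps naturally onto one such statement.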
JB thanked Matt for his work and said it was fantastic. Things have progressed quite a bit. It’s complicated and shows what we as a working group should be considering. Which comes back to use cases. There is only so much an api can do. Let people doing reporting sort it out.
Susan asked whether this will help us figure out a way to organize scoring mechanisms.
Matt offered to write something to interpret the data. We can use that to populate another system. Take AI and let it loose.
JB asked if Matt could come up with one or two use cases that relate back to the API.
Nabil asked how relevant the current spec is in relation to activity. How can this be used to interconnect different specs? That would be interesting to know. That would be good feedback for the current spec.
5 Review updates to implementation guidelines and address questions (sections 4.3, 4.6.2, 4.7.4)
6 Open discussion
We will focus on use case ideas at next meeting.
The group agreed to develop use cases focusing on communicating virtual patient activities.
- Susan will gather data on virtual patient related presentations from AMEE meetings from the last 5 years.
- Valerie will start a wiki page for use cases.
- Susan and Valerie will coordinate on a survey of the AAMC GIR related to how people are using virtual patients.
- Nabil will ask Andrzej to present the different types of VPs he has found on the next call.
- Ankur will speak about types of virtual patients and data of interest to educators on the next call.
- JB will share how people are using virtual patients at University of Pittsburgh. He will also share how Decision Simulation customers are using virtual patients.
- Matt will write something to interpret Experience API data.