
Meeting Information

Date: September 20, 2017

Time: 11 AM PDT / 12 PM MDT / 1 PM CDT / 2 PM EDT / 7 PM BST

Attending: Ellen Meiselman (Co-Chair), Jennifer Dunleavy, Erick Emde, Andy Hicken, Chad Jackson, Jeff Korab, Valerie Smothers, and Tim Willett.

Agenda Items

1 American College of Chest Physicians - Chad Jackson will discuss their Laerdal simulator/xAPI tracking project

Abstract from this year's MedBiq conference [download presentation]

Chad Jackson, American College of Chest Physicians

Andy Hicken, Web Courseworks

The American College of Chest Physicians (CHEST) conducts simulation courses using high-fidelity Laerdal simulators. Evaluation of these learning experiences has traditionally been hampered by the simulators' lack of interoperability with other learning technologies. Using its Airway Management course, CHEST conducted a pilot project in which xAPI was used as a connector between their simulators' desktop reporting application, a learning record store (Learning Locker) set up to receive the data, and their Moodle-based learning management system.

This presentation will discuss (1) technical details of the implementation, including xAPI statements used and factors in the LRS selection, (2) educational benefits of the pilot, (3) technical lessons learned during the pilot, and (4) directions for future research and development.

Andy is Director of Product Development at Web Courseworks; Chad Jackson is Vice President of Innovation at the American College of Chest Physicians. Andy noted Chad's interest in getting as much data as possible about the simulation courses he runs. They have used custom-built simulation models for an airway management course with SimMan. Laerdal outputs a text file in its native format that contains a large amount of data, and they built a translator so that the data can be pulled into a Learning Record Store using xAPI and tracked. They used an open source LRS and showed reports in the Moodle LMS that CHEST uses; reports included faculty skills evaluations. Chad added that it provided an opportunity to look at things they had not thought about before, such as the time it takes to intubate a patient. They expected, and saw, a decrease in those times pre versus post. They also looked at the correlation between the timing of the head tilt and placing the tube; it takes the average person 4.7 attempts, depending on comfort level with the procedure. The aim is for the data to have a more formative use with the learner. They now have an automated checklist on an iPad that can pull sensor data from the robot and pre-populate the checklist in real time, so observing faculty are able to focus on performance.
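
For illustration only, a minimal Python sketch of the kind of translator pipeline described above. The learner details, verbs, URIs, and LRS credentials are assumptions for the example, not details of CHEST's implementation; the statements endpoint and version header follow the xAPI specification, and Learning Locker accepts Basic-auth client credentials.

    import requests

    LRS_ENDPOINT = "https://lrs.example.org/data/xAPI/statements"  # hypothetical Learning Locker URL
    LRS_AUTH = ("client_key", "client_secret")  # Basic-auth credentials issued by the LRS

    def simulator_event_to_statement(learner_email, learner_name,
                                     verb_id, verb_name,
                                     activity_id, activity_name):
        """Build a minimal xAPI statement from one parsed simulator event."""
        return {
            "actor": {"mbox": "mailto:" + learner_email, "name": learner_name},
            "verb": {"id": verb_id, "display": {"en-US": verb_name}},
            "object": {"id": activity_id,
                       "definition": {"name": {"en-US": activity_name}}},
        }

    # One hypothetical event pulled from the simulator's exported text file.
    stmt = simulator_event_to_statement(
        "learner@example.org", "Student One",
        "http://example.org/xapi/verbs/intubated", "intubated",
        "http://example.org/activities/airway-management-sim",
        "Airway Management simulation")

    resp = requests.post(LRS_ENDPOINT, json=stmt, auth=LRS_AUTH,
                         headers={"X-Experience-API-Version": "1.0.3"})
    resp.raise_for_status()  # on success the LRS returns the new statement id(s)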

 

Ellen asked if they were getting data from both the physical practitioner and the robot. Andy noted they bring together different data sources and automate the collection so they can focus on improvement techniques. Ellen asked about the verbs used; Chad noted they defined some of their own verbs. The Laerdal robot was putting out 440 codes, and when they reached out to Laerdal, the company was not very helpful until they had hacked the code themselves, at which point it provided more information. Andy noted that he looked at what Chad wanted to measure and then at the Laerdal file and the verbs it used; RWRR, for example, is respiratory rate. Laerdal has over 400 codes, and they capture only a fraction of them.
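
As a sketch of the kind of code-to-verb mapping this involves: RWRR is the only code taken from the discussion above; every other name and URI below is an invented placeholder.

    # Map Laerdal event codes to xAPI verb/activity pairs. Only a
    # fraction of the 400+ codes need to be mapped; unmapped codes
    # are simply skipped.
    LAERDAL_CODE_MAP = {
        "RWRR": {  # respiratory rate, per the discussion above
            "verb": {"id": "http://example.org/xapi/verbs/measured",
                     "display": {"en-US": "measured"}},
            "activity": "http://example.org/activities/respiratory-rate",
        },
    }

    def translate(code, value):
        """Return xAPI statement fragments for a known code, else None."""
        entry = LAERDAL_CODE_MAP.get(code)
        if entry is None:
            return None
        return {
            "verb": entry["verb"],
            "object": {"id": entry["activity"]},
            "result": {"extensions": {"http://example.org/xapi/ext/value": value}},
        }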

 

Ellen asked about standardization across different types of simulation and the content of verbs. In the Virtual Patient scenario, they added verbs to represent what was going on, such as “the learner arrived at node x” and “the learner ignored the warning”; that does not get into the content of what the learner was doing. In this model, measured blood pressure would be the verb, the manikin the object, the scenario the activity, and the context would be the scenario itself. Andy commented that when they designed it, they made assumptions about the activity and the actors (the doctors), and those assumptions made sense.
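
In statement form, the model Ellen described might look like the following sketch; all identifiers are hypothetical placeholders.

    # "Measured blood pressure" as the verb, the manikin as the object,
    # and the scenario carried in the context.
    statement = {
        "actor": {"mbox": "mailto:learner@example.org", "name": "Student One"},
        "verb": {"id": "http://example.org/xapi/verbs/measured-blood-pressure",
                 "display": {"en-US": "measured blood pressure"}},
        "object": {"id": "http://example.org/manikins/laerdal-simman",
                   "definition": {"name": {"en-US": "Laerdal SimMan manikin"}}},
        "context": {"contextActivities": {
            "parent": [{"id": "http://example.org/scenarios/airway-management"}]}},
    }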

 

Chad thought it made sense for MedBiq to look at standards across contexts, though the action behind each verb would be specific to the situation. Ellen added they could have scenarios with multiple robots. Chad agreed the statements could travel from one platform to another, and added that a learner could pick up from their platform and go on to the next. Andy agreed the verbs could stand to be more human readable.

 

Erick provided an example: student 1 checked blood pressure on manikin 5. Ellen extended it: student 1 checked blood pressure on Laerdal manikin 2002 in the context of airway management registration 4. Treating “student one measured blood pressure” as the object makes reporting difficult. Valerie suggested creating verbs for identifying clinical activities, like vital sign measurement, or developing a specific kind of scenario and looking at the clinical actions within it. Andy agreed it made sense to start with a list of vital signs; it may also be possible to look at what is output by the major simulation companies to make sure what they share in common is covered. Ellen suggested eventually thinking about clinical encounters as a profile and detailing performance in real life. Valerie agreed. Andy described the step in between those two as a class of simulation done on actual humans. Chad added that when scanning human bodies you are not getting data, and he was unsure what you would capture, literally.
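
Ellen's example maps naturally onto xAPI's context fields; in xAPI, a course registration is a UUID carried in context.registration. A sketch, with every identifier invented:

    # "Student 1 checked blood pressure on Laerdal manikin 2002 in the
    # context of airway management, registration 4."
    statement = {
        "actor": {"account": {"homePage": "https://lms.example.org",
                              "name": "student-1"}},
        "verb": {"id": "http://example.org/xapi/verbs/checked",
                 "display": {"en-US": "checked blood pressure"}},
        "object": {"id": "http://example.org/manikins/laerdal-2002"},
        "context": {
            "registration": "6d0dcb3e-4c7f-4a9a-9f3f-1c2b3d4e5f60",  # stands in for "registration 4"
            "contextActivities": {
                "parent": [{"id": "http://example.org/courses/airway-management"}]},
        },
    }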

 

Ellen thanked Chad and Andy and asked them to keep us updated on their work.  

2 Discussion on tracking behaviors and allowing competencies or best practices for learners to emerge from the data.

Behavior patterns may emerge from the data that will inform our understanding of the thought processes or behaviors of a high- or low-performer.

In online apps, behaviors can be tracked as clicks or nodes, but in real-life situations, like clinical encounters, it may not be so easy. Question: in the real-life medical education or medical activity of your choice, how would you track significant behaviors? Not all of them would be part of the activity itself. Is this something that is already tracked in some manner? Is there anything you wish you could see that would help in assessment that isn't currently tracked?

 

Discussion tabled until David could be present.

3 Brief Discussion of IMS Caliper

The Caliper group has reached out to us about collaboration. There are no plans to collaborate, due to licensing issues and the need to focus our efforts on the profiles we already have planned. One of the IMS people mentioned a possible project that might make such collaboration less important: a conversion endpoint that would translate between the standards and send the result on to the LRS of choice. It is not clear how well that would work, since xAPI is a very flexible standard; this is only an idea, not a definite project at this time.
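
Very roughly, such a conversion endpoint might contain a mapping like the sketch below. The Caliper fields shown (actor, action, object, eventTime) are real Caliper terms, but the mapping itself is an assumption, and as noted, xAPI's flexibility makes a general, lossless translation doubtful.

    def caliper_event_to_xapi(event):
        """Rough sketch: map a simplified Caliper event dict to an xAPI
        statement. Real Caliper events are richer JSON-LD documents;
        this only illustrates the shape of a translation."""
        return {
            "actor": {"account": {"homePage": "https://lms.example.org",
                                  "name": event["actor"]["id"]}},
            "verb": {"id": "http://example.org/xapi/verbs/" + event["action"].lower(),
                     "display": {"en-US": event["action"]}},
            "object": {"id": event["object"]["id"]},
            "timestamp": event["eventTime"],
        }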

 

Ellen noted that IMS is another standards group, well known for LTI, which provides standards for launching learning activities. IMS Caliper is similar to xAPI, with an emphasis on higher education use cases; it provides core profiles but less flexibility. They are looking for ways to collaborate with the MedBiq group on adoption and on developing profiles similar to ours.

 

Jeff was interested in creating standards and unlocking the process. Valerie noted that collaboration with IMS could raise licensing issues, as their standards are offered under a more restrictive license. Ellen commented that we have plenty on our plate, and taking on more work would be difficult. There are interesting differences between xAPI and Caliper: Caliper has highlighted different entities to make reporting easier, and enshrined in the standard are some core profiles, so you can start using it immediately. Erick mentioned that the benefit of data in a common format is that you can look at the overall picture.

 

Valerie noted that everything MedBiq develops is openly licensed: no one pays for the standards, anyone can use them to create derivative works, and they can be distributed as part of the standard license. IMS doesn't require payment for standards either, but only IMS members have access to them for the first six months, so they are not as transparent as MedBiq. In addition, one cannot create derivative works from or distribute their standards; we would need a special license to build upon anything created by IMS.

 

Valerie asked for volunteers to start working on standardizing what Andy and Chad have done. Jeff was interested. One takeaway Valerie noted was that asking won't get us very far; we have to start doing something in order to get the simulation companies' attention. We can try both approaches. Ellen will try to find out more about work on the open manikin.

 

Earlier this week, Erick saw a presentation about the HL7 Infobutton standard: a button you can click that pulls up information relevant to the patient at hand. Peter and Valerie had previously visited the presenter and talked about it; they wanted learning content with MedBiq metadata, and Valerie offered to talk with them again. Valerie added that Infobutton is an HL7 standard used for sending a detailed query to a system with educational content. The idea is that a blue info button would be available within the EHR, and with a click you could find out more information about a specific topic related to that patient. Erick thought it linked to UpToDate and pulled information from PubMed. The competency information was also pertinent and seemed to overlap with MedBiq's work. He will forward the presentation to Valerie, and she will follow up if there is interest. Valerie thanked Chad and Andy for sharing their work; she hopes it informs our progress.
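
For reference, the URL-based form of an Infobutton request looks roughly like the sketch below. The endpoint is hypothetical; the parameter names (mainSearchCriteria, knowledgeResponseType) come from the HL7 Infobutton URL-based implementation guide, and the coded topic is an ICD-10-CM diagnosis chosen for illustration.

    import requests

    # Sketch of an HL7 Infobutton knowledge request. The EHR's blue
    # info button would issue a query like this against a knowledge
    # resource; the endpoint here is a placeholder.
    params = {
        "mainSearchCriteria.v.c": "J96.0",  # acute respiratory failure (example)
        "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.90",  # ICD-10-CM code system OID
        "mainSearchCriteria.v.dn": "Acute respiratory failure",
        "knowledgeResponseType": "text/html",
    }
    resp = requests.get("https://knowledge.example.org/infobutton", params=params)
    print(resp.url)  # the fully-formed Infobutton query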

 

 



Decisions

Action Items
