

Meeting Information

Date: November 19, 2015

Time: 9 PST/9 MST/10 CST/11 EST/16 GMT/17 CET/22 AEDT

Please note: the conferencing service will ask you to enter the pound sign (#). To mute, press *6.

Attending: Ellen Meiselman, David Topps, Co-Chairs; Tom Creighton, Mike Hruska, Nick Hruska, Lucas Huang, and Valerie Smothers

Agenda Items

1 Review Minutes

Minutes were accepted as submitted.

2 Mike Hruska of Problem Solutions will present to the group on how his firm is using xAPI for training soldiers. Although this is not a medical application, they are using it in a very advanced manner and have created best practices that we may want to borrow from.

A See chart of constructs used in HPML to indicate performance

B See presentation 

Mike has worked with Jonathan and Tom over the past seven years helping people build learning ecosystems. They applied the 70-20-10 rule (that 70% of employee learning comes from on-the-job experiences, 20% from other employees, and 10% from formal learning experiences) and gained venture capital investment. Two big questions surfaced: 1) How can we leverage performance data to save time and money training personnel? 2) How can we increase training effectiveness by using data collected along the continuum of training? There are seven elements within the learning ecosystem: actors, resources, events, signals, sensors, flows, and patterns.

In the Army Research Lab work, they looked at historical proficiency and performance over time, seeing threads around competencies and connecting learning and performance. They developed the Pipeline xAPI tool to determine the effectiveness of learning and to make recommendations.

David asked Mike to discuss the challenges in integrating with HPML and the roadblocks he faced. Mike explained that they wanted to define a constrained set of data that would work across domains. They evaluated HPML constructs and found ways to map them into xAPI statements, enabling architecture and dashboard-type tools. Mike will share Pipeline, best practices, and examples with the group. The key lesson was that competencies have to serve as the center point for performance data; otherwise there is too much or not enough detail. A balance has to exist to capture data that is useful.
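
As a rough illustration of the kind of mapping Mike described, a minimal sketch in Python follows. All verb, activity, and competency IRIs here are hypothetical placeholders, not the actual HPML or Pipeline vocabulary, which is in the materials Mike will share.

```python
# Hypothetical sketch: an HPML-style performance measure expressed as an xAPI statement.
# Every IRI below is a placeholder, not the real Pipeline vocabulary.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://example.com/xapi/verbs/demonstrated",
        "display": {"en-US": "demonstrated"},
    },
    "object": {
        "id": "http://example.com/activities/gunnery-engagement-task",
        "definition": {"name": {"en-US": "Gunnery engagement task"}},
    },
    # The result section carries the measured performance.
    "result": {"score": {"scaled": 0.85}, "success": True},
    # The context ties the statement back to the competency it evidences,
    # keeping competencies as the center point for performance data.
    "context": {
        "contextActivities": {
            "grouping": [{"id": "http://example.com/competencies/target-acquisition"}]
        }
    },
}
```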

Ellen asked if Mike could go through the Virtual Patient profile, relating those constructs to the constructs in his chart. Mike asked about the use case; Ellen explained that they are taking a larger competency, interviewing a patient, and breaking it into sub-behaviors. Mike suggested a Google Doc to discuss offline.

David asked if it was better to record at a finer granularity, as that allows for deconstructing the dysfunctional learner. Mike commented that most vendors don't want to share detailed data. He recommends deliberate data creation and traceability to the source systems. Using those constructs in an adaptive learning context, they were able to reduce time for gunnery training by 40%. Valerie added that another important use case is identifying outliers at the lower end of the scale and either remediating or removing them. David noted that evidence of why they were removed is also needed.

David commented that he is interested in capturing detailed data and seeing what trends or questions emerge. The ability to look across systems with xAPI is attractive. Mike provided the example of a perfusionist simulation. The simulation has 25 dimensions; they are working on a competency model. They listened to data streams and measured based on multiple variables. Valerie noted the similarity to the VA code simulation. Mike agreed to communicate offline with further examples. Ellen surmised that one could send competency data to the LRS, including observational assessments, and develop competency dashboards. Humans could make an assessment about the learner's level of competence and record that determination in the agent profile. Activities could access that profile directly and direct learners appropriately. Mike mentioned she may want to point at an existing system, because you don't need to store everything in the LRS.
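
As a minimal sketch of Ellen's suggestion, the snippet below writes a human assessor's competence determination to the xAPI Agent Profile Resource. The LRS endpoint, credentials, profile ID, and document fields are all assumptions for illustration; only the agents/profile resource and the version header come from the xAPI specification.

```python
import json
import requests

# Placeholder LRS endpoint and credentials.
LRS = "https://lrs.example.com/xapi"
agent = {"mbox": "mailto:learner@example.com"}

# A human assessor's determination, stored as an agent profile document
# so activities can read it and direct the learner appropriately.
determination = {
    "competency": "http://example.com/competencies/patient-interview",
    "level": "proficient",
    "assessedBy": "mailto:assessor@example.com",
}

resp = requests.put(
    f"{LRS}/agents/profile",
    params={"agent": json.dumps(agent), "profileId": "competence-determinations"},
    headers={"X-Experience-API-Version": "1.0.3", "Content-Type": "application/json"},
    data=json.dumps(determination),
    auth=("username", "password"),
)
resp.raise_for_status()
```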

3 Review or confirmation of approach (note - recorded in Development Principles):

    • Develop medical education/development-relevant profiles with the goal of publishing them on broadly accepted sites such as the Tin Can Registry and the ADL registry, once the latter is available.

    • For now, we will use Google Docs for authoring and revisions, since neither registry is likely to have collaborative tools for authoring profiles.

    • Each profile will be specific to a type of activity, such as virtual patient, mannequins, clinical training.

    • Our first profile is still the Virtual Patient profile we were already working on, because it is relatively easy to define the verbs.  

    • We will keep the scope of the first profile fairly limited, and later evaluate if we want to add notes on how to use the context-activities or results sections of statements to reference competencies.

    • Profiles will contain contextual notes regarding the verbs used (i.e., what "initialized" means in the context of a virtual patient activity).

    • Profiles will have IRIs. The statement context will point to the IRI of the profile to clarify verb meaning (see the sketch after this list).

    • We'll define verbs, when new verbs are necessary, in a manner to be used across multiple profiles.
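
As a sketch of the last two bullets, the statement below points its context at a placeholder profile IRI; placing the profile under the context-activities "category" list follows common xAPI practice, and "initialized" is the existing ADL verb.

```python
# Sketch: a virtual patient statement whose context names the governing profile,
# so consumers know which vocabulary defines the verb. All IRIs are placeholders
# except the ADL "initialized" verb.
statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/initialized",
        "display": {"en-US": "initialized"},
    },
    "object": {"id": "http://example.com/virtual-patients/case-42"},
    "context": {
        "contextActivities": {
            # The profile IRI goes in the "category" list.
            "category": [{"id": "http://example.com/profiles/virtual-patient"}]
        }
    },
}
```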

Decisions

  • Mike will share Pipeline and best practices and examples with the group. 
  • Mike and Ellen will talk further about how HPML constructs relate to Virtual Patient constructs.

Action Items
