
From Ellen Meiselman, University of Michigan

I work on the Learning Management team at the University of Michigan Health System. We run a learning management system for all faculty and staff at UMHS. We also develop much of the online learning used in the LMS, and provide education and support for embedded educators throughout the health system.

There are many things we wish we could do that cannot be done with the LMS technology we have now. Among them:

  • We can't easily use demographic and performance data within our online learning to drive adaptive learning.
  • We can't track what people actually use to learn and perform better. Our LMS data is summary-level at best, and limited to activities the LMS knows about up front.
  • We can't provide data that could be used to correlate how people and teams learn and perform with outcomes in a systematic manner.

In addition, the SCORM 1.2 standard that we use for most of our trackable online learning has a variety of technical limitations, which is what originally led me to participate in the early discussions of what SCORM 2.0 should look like.

Typical examples of the requirements I asked for in those SCORM 2.0 workshops and interviews include:

  • The LMS should vanish from view for most learners. They have little or no interest in logging into a specialized site they visit once a year. We need to make the experience of taking their mandatories as lightweight as possible, and as focused as possible.
  • Let us easily access demographic data in a standards-based manner.
  • Security is non-existent with SCORM; it needs to be fixed, at least for high-stakes activities.
  • There should be a place to store "a lot" of arbitrary data.
  • We need to be able to track learning that happens through third-party tools and in disconnected sessions.
  • The LMS shouldn't be involved in judging outcomes.
  • Get rid of assumptions about sequencing and the need for complex sequencing to be built into the standard. We build our own anyway.

Now that we have been working with the xAPI standard for a while, there are some areas I would particularly like to discuss with the people involved in Medbiquitous:

Agent Profile Standards

Besides the Statement API, there are APIs for the Activity Profile and Agent Profile.
We can now access any part of a user or team profile from a learning activity, using a standard! For example, we could access roles or attributes that would be useful for delivering role-specific pieces in a larger training module, like "is a Clinician", "works in a Patient Care Area", "performs Central Line Insertion procedures", "works in an operating room", etc. This can be done to some degree within an LMS, but not on a granular intra-activity level. It is usually difficult to add new vocabularies and taxonomies, and there is certainly no standard. Each learning object would have to be customized individually to access that data from the LMS and use it internally.
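
To make this concrete, here is a minimal TypeScript sketch of reading a document through the Agent Profile API. The LRS endpoint, the credentials, and the "roles" profileId are placeholder assumptions for illustration, not part of any particular deployment:

    // Minimal sketch: fetch an Agent Profile document from an LRS.
    // Endpoint, credentials, and profileId are hypothetical placeholders.
    const endpoint = "https://lrs.example.org/xapi";
    const agent = { mbox: "mailto:learner@example.org" };

    async function getAgentProfile(profileId: string): Promise<unknown> {
      const params = new URLSearchParams({
        agent: JSON.stringify(agent),
        profileId,
      });
      const res = await fetch(`${endpoint}/agents/profile?${params}`, {
        headers: {
          "X-Experience-API-Version": "1.0.3",
          Authorization: "Basic " + btoa("user:password"), // placeholder credentials
        },
      });
      if (!res.ok) throw new Error(`Profile lookup failed: ${res.status}`);
      return res.json(); // the stored profile document, e.g. role attributes
    }

    // A learning activity could branch on what comes back,
    // e.g. show a clinician-only section of a module.
    getAgentProfile("roles").then((profile) => console.log(profile));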

Besides roles, we could store competencies in the Agent Profile. These values could be static information, or dynamic links to competencies and performance criteria maintained by other applications.

All of these Agent Profile properties take the form of key-value pairs, and you can have as many as you like. This means the learning activity can be adaptive, based on aggregated data from numerous disparate sources and modalities. Although many of these properties will need to be ad hoc, I think at least some of these profile facets should be standardized, so that applications can all access the same data in the same way.
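
For discussion purposes, here is a hypothetical shape such a profile document might take, sketched in TypeScript. Every property name and URI below is invented; deciding which of these facets deserve a standard vocabulary is exactly the open question:

    // Illustrative only: an ad hoc Agent Profile document mixing role facets
    // with a dynamic link to a competency maintained by another application.
    const exampleProfileDocument = {
      isClinician: true,
      worksInPatientCareArea: true,
      performsCentralLineInsertion: false,
      competencyLinks: {
        // hypothetical pointer to a competency framework kept elsewhere
        centralLineMaintenance:
          "https://competencies.example.org/frameworks/central-line#maintenance",
      },
    };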

Extensions

The Result object contains space for extensions. I think there is plenty of room for discussion here about what would constitute useful extensions for medical education and training, including HPML - Human Performance Markup Language - a way to encode performance data into xAPI statements, interoperably, so performance can be tracked somewhat uniformly across modalities.

http://www.adlnet.gov/tla/experience-api/adopters/xapi-and-simulation-interoperable-performance-tracking-to-support-tailored-learning/
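
To ground that discussion, here is a rough TypeScript sketch of a statement whose result carries a performance payload in an extension. The extension IRI and the embedded HPML fragment are placeholders; how HPML data should actually be keyed and encoded is precisely what would need to be agreed:

    // Sketch of an xAPI statement with a result extension holding performance data.
    // The extension IRI and its payload are hypothetical, not a defined profile.
    const statement = {
      actor: { objectType: "Agent", mbox: "mailto:learner@example.org" },
      verb: {
        id: "http://adlnet.gov/expapi/verbs/completed",
        display: { "en-US": "completed" },
      },
      object: {
        objectType: "Activity",
        id: "https://example.org/activities/central-line-simulation",
      },
      result: {
        success: true,
        score: { scaled: 0.85 },
        extensions: {
          "https://example.org/xapi/extensions/hpml": "<hpml>...</hpml>",
        },
      },
    };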


15 Comments

  1. Thanks for sharing that link. It would be interesting for us in the Competencies Working Group, who have just finalized a Performance Framework specification, to hear more about how this project is using xAPI and HPML. Do you happen to know anyone working on the project (or are you part of it, Ellen)?

  2. Where can I find more about HPML? Google only turned up a paper from I/ITSEC. I see ADL added some code to GIFT to support pushing data into xAPI. Do you have a sense of whether this code has been released and is working?

     

    Thanks

  3. I really like the idea of what xAPI will enable but I am struggling to find information about how to explore the resulting analytics. I'm being a bit superficial here and I apologize. As I see it, an LRS by itself is not much use. It is only when you can do some analytics on the stored data that it becomes useful.

    So I am curious as to what others have used so far to analyze their LRS data.

    In my limited exploring so far, there is only one open-source LRS available, and the analytics module on this still seems to be under development. 

  4. Hi David, if you are talking about Learning Locker then come to my presentation at this year's conference! You are correct in spotting that the reporting is still a little sketchy...

    As for the xAPI, I am in complete agreement with Ellen. These things have been too hard for too long with existing LMSs and have been a real roadblock to innovation. Having a standard that opens things up in this way is very exciting and a huge opportunity.

     

  5. David, there is a tool that provides some analytics: Saltbox. But it is not open source. See: http://www.saltbox.com/

    Matt, I look forward to hearing more at your presentation!

  6. Hi Michael! Sorry that I'm so late to comment on this thread.

    The Army Research Lab (ARL) commissioned the use of HPML (Human Performance Markup Language) in conjunction with xAPI in order to develop best practices for encoding performance data. The end goal in this case was getting closer to something called Interoperability Performance Assessments (IPA), which allow data from observed performance assessments across multiple modalities to be woven together in order to facilitate tailored training.

    You can read a bit more about this project involving HPML on the ADL website here. It's interesting if you are into pulling accurate, nuanced data from simulated performance services (which, I think it's safe to say, we all are). If you have access to the eLearning Guild Ecosystem 2014 conference archive, you can learn more about IPA here. Finally, here are two Google-available papers on HPML:

    Regarding the Generalized Intelligent Framework for Tutoring (GIFT), you can download the source code once you register on the project website. 

  7. This is a great summary, Ellen, and I particularly like how you summarized in three points what we cannot do with an LMS. (In my view LMSs are slightly misnamed. They are really Learner Management Systems for keeping the kids under control.) It has stimulated some useful discussion here - looking forward to next week.

    One interesting theme that seemed to arise from several sessions and plenaries at the Ottawa Conference was a suggestion that detailed psychometrics in exam performance were possibly leading us astray and that it might be time to look once again at more global rating scales (but properly directed - not just Satisfactory | Unsatisfactory). So this would be a bit divergent from the trend we are currently seeing in learning analytics, big data etc.

    It may just be reflective of another hype cycle trend and that different groups are at different points on the curve. Of course, given that it is a cycle generating a repeated (non-sinusoidal) curve, it is also interesting to note whether groups that are at different points on the vertical axis of the curve regard themselves as being in front or trailing on the horizontal axis. Hard to know but it does skew perspectives. (Sorry if this pseudo-math approach is making things more confusing). 

  8. In trying to get my head into gear to keep up with the smart kids at Medbiq, I've been noodling around looking at stuff on paradata, learning analytics, etc., and came across this interesting paper: http://publications.cetis.ac.uk/2013/767 - it explores the 3 main approaches used for making OERs more discoverable: search engine optimization, improved (semantic) metadata e.g. RDFa, and using paradata (how learners and teachers use OER materials). It alludes to some of the interesting stuff being done in the Learning Registry (http://learningregistry.org/).

    Since the xAPI potentially provides the means to expose a lot of data about how learning objects are used, is this worth exploring as part of our discussions on how/why to use xAPI? Or have others found better ways to make their learning objects more discoverable? 

  9. David, I'd be interested to see how you would characterize how learning objects are used to get something reportable. I suppose you would use the context area to make some meaningful structure here?

  10. I'm not sure that I quite understand your question. Here is an attempt at a response (in which I will look very foolish if I have misinterpreted... ah well, it won't be the first time). 

    The learning objects that we are most closely working with, just now, are virtual patients. Specifically, we are working with OpenLabyrinth virtual patients (http://openlabyrinth.ca/) - these support the Medbiq Virtual Patient standard. Have had some very fruitful conversations with Matt and Simon, who are developing UChoose, another excellent VP platform that also supports the Medbiq MVP standard very well. They are much further ahead in their thinking than we are - I hope that they will chime in on this.

    At present, OpenLabyrinth does report metrics on quite a range of parameters on how the learner navigated a VP case. We have timestamped data for all nodes touched, questions answered, as well as the scores and data for these responses. A single case session can generate a thousand lines and tens of thousands of data points... all of which are so much fun to plough through when trying to analyse what they did! Hence xAPI. 

    So, for example, we know what path they took through a case, whether they backtracked or got stuck going round in circles, how they responded to questions/sliders/Likerts/MCQs/free-text etc. Can map these paths out in a simple form of graph analysis at present. But we would prefer to be able to do this using 3rd party software, examining an LRS, rather than having to generate our own report formats etc. 

    We have also been doing some interesting comparative real-time reporting to our participant groups. Who chose what, in what order of priority etc, and then reflecting these back to the small group as a stimulant for further discussion. Makes for much more interesting webinar sessions. Being able to group cases in a series, present them as linked scenarios, with timing controls on who can access what case when, has been very useful as we research the use of our cases. 

    It would also be useful to be able to generate more high level analytics on our VP servers. Which cases are most popular? Which ones have the highest completion rates (or lowest? are they getting lost or bored?) Would be interesting to generate sparklines or engagement graphs, similar to how Google does with YouTube and Google Insight. Or even better if we could do this across several of our VP servers - get them all to report to the same LRS and then use that to look at which servers and sites are generating the most interest. 

     

  11. David, it would be really helpful if you could generate a list of the analytic questions you would like to ask of your xAPI data. That would allow us to look at the existing xAPI spec and vocabularies to see if the existing specs and approaches give you what you need, or if something more/different is required. Of course others could add their needs and ideas as well! I would recommend making it a separate page though in this section of the wiki. If you log in and look at the sidebar to the left, there is a link that says "+Create child page."

    Even if existing tools don't offer the analytics you are looking for, there may be those with an entrepreneurial or open source spirit out there that would create such a tool. (smile) 

  12. Thanks Valerie, you said it much better than I could have. Looking ahead to what we will want to see in the analytics is crucial in deciding what we will want to track and what verbs, results or extensions we will want to use.

    David, regarding your wish for 3rd party software that could make sense out of a student's path through a VP case, I'd be interested in how generalizable you think that sort of navigation is. Is the path through a VP case part of the VP specification, or is it pretty much a custom set of descriptors useful only for OpenLabyrinth? Could it be generalized to other similar software?

    BTW, I apologize for taking so long to answer all of the comments above. I didn't select "watch this page" at first and didn't realize that anyone had commented on my original post.

     

     

  13. Good question, Ellen. In thinking about it, no, the path is not defined in the VP specification. You can define Must Visit and Must Avoid nodes. Whether the user touched these designated nodes or not is reported in the User Session Report. 

    OpenLabyrinth does internally track and report on all this data (all nodes visited, timestamps for each node, question responses, choices made etc). But, yes, how generalizable is that? The verb set for recording these activities is not large and could easily be made consistent across virtual patients that conform to the Medbiq MVP standard. But I have not looked at how you would usefully analyze such activities enough to know whether the analytics generated by the LRS would be generalizable. Matt Cownie would be more able to comment usefully on this. 

  14. Hi folks, just catching up. So far, demand for analytics from authors has been distinctly and surprisingly sparse. From what I can gather, they ask their students in the lecture for any comments, feedback, etc. The question 'what does it mean to play a virtual case' gets bandied around now and again, and I think the answer is 'it depends on the learning outcomes', which undermines the generic view somewhat. Outcomes can range from a pass/fail mark to writing an account of your journey explaining your actions. I think if we had a project that mass-produced a large number of similar cases then we might start to get a better idea of a general view, but no one is offering that kind of investment at the moment. The original eViP program might have been suitable, but there was a wide variation in the case portfolio even there.

     

  15. I don't know if this will work. This is a post from some working notes, where I was considering what kind of questions I would like my analytics to address. (It was copied from an MS Word outline, so it will probably be clobbered in the translation.)

    ==========

    1. Questions
       A. How thoroughly does a strong learner explore a good case, compared to a weak learner?
          1) What does ‘thoroughly’ mean?
             a) Time on task
                (1) Not too long
                (2) Not too short
             b) Pathway through the task or set of nodes
                (1) Nodes chosen
                (2) Path circularity – “going around in circles”
                (3) Straight line “git er done” approach
                (4) Script Concordance analogy
                   (a) Not a single path
                   (b) Difference between rich variance and random “what the hell”
                   (c) Bimodal concordance – when is it ok?
             c) Time per node
                (1) Variation
                   (a) Between users
                   (b) Between nodes
                   (c) How concordant with the experts?
                (2) Long pauses
                   (a) Reading ancillary material
                   (b) Got bored
                   (c) Nutrition break
       B. Feedback to authors about the case vs about the learners
          1) Path circularity may mean a poor learner or a poor design
          2) Straight line path may mean that the case is too obvious or not complex enough to consider the variations in approach
       C. Patterns of responses seen in weak learners
          1) Clustering of cases
             a) All done on day before deadline
             b) All done late at night
          2) Lack of variance in response
             a) All Likerts tend to be similar and neutral
             b) More likely to be non-committal than an outlier on an opinion