February 15, 2018
12 EST/11 CST/10 MST/9 PST
Attending: Ellen Meiselman, David Topps, Co-Chairs; Erick Emde, Andy Johnson, and Valerie Smothers
1 Review minutes
The minutes were approved as submitted.
2 Discussion of issues in developing a profile for Mannequin Simulations - Link to working draft
Ellen noted that the American College of Chest Physicians presented at the last meeting and showed the use of very specific verbs tailored to their measurement interests, such as “blood pressured.” David commented that the Chest Physicians verbs are not generalizable. Ellen agreed we should stick with more generic verbs, though we may need extensions for exact measurements. She spoke with Kirsty Kitto, Senior Lecturer in Data Science at the University of Technology Sydney in Australia, who recommended starting from the top down, doing rapid testing and iteration. She also recommended taking data out of JSON format and putting it in SQL. Valerie asked the reason for putting it in SQL. Ellen replied that the reason given was that JSON statements are hard to use. David commented that they may be hard to do using traditional rules. Andy added that xAPI is not intended to be the end-all; he anticipated a querying spec being needed, but he did not think SQL brought added benefit. A different type of tool may be needed. Valerie was surprised Kirsty would use SQL rather than big data tools. Ellen clarified that Kirsty's team decided the best way to do rapid analytics was moving data out of the LRS.
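A minimal sketch of the approach discussed above: keep the verb generic and reusable, and carry the exact measurement in a result extension rather than minting a bespoke verb like “blood pressured.” All IRIs and values below are illustrative placeholders, not part of any agreed profile.

```python
import json

# Hypothetical xAPI-style statement: a generic verb plus a result extension
# that holds the precise measurement. The extension IRI and activity ID are
# invented for illustration only.
statement = {
    "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.edu"},
    "verb": {
        # generic, reusable verb instead of a measurement-specific one
        "id": "http://adlnet.gov/expapi/verbs/responded",
        "display": {"en-US": "responded"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/sim/manikin-1/vitals-check",
    },
    "result": {
        "extensions": {
            # placeholder extension IRI carrying the exact measurement
            "http://example.org/xapi/extensions/blood-pressure": {
                "systolic": 120,
                "diastolic": 80,
                "units": "mmHg",
            }
        }
    },
}

print(json.dumps(statement, indent=2))
```

Because the verb stays generic, statements from different simulations remain comparable, while the extension preserves the measurement detail the Chest Physicians group wanted.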
Andy noted that the only querying in xAPI was around verbs and activities. There are no agreed-upon practices for search and retrieval yet. There is greater value in handing the data off to tools that can do bigger things. Ellen requested help building the workflow. Andy suggested finding out what the DoD is doing and starting with outcomes first. Ellen suggested considering simulations as variations of a single type: track low-level event data and map it to higher-level meaning across simulations to compare apples to apples. Andy agreed that made sense. David expressed concern that the top-down approach made sense for contractors delivering specific tools for specific projects but not for standards development. Valerie suggested determining what the key questions are. David asked about simple tools for elastic querying of non-relational data. Andy agreed to look for data visualization and analytics tools that work with JSON. Ellen questioned what activities and extensions we should standardize to enable analysis of objectives. Valerie agreed to check with Jeff from the University of Wisconsin about doing a pilot with some real use cases.
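The “move data out of the LRS” idea above can be sketched in a few lines: flatten the JSON statements into a relational table so routine analytic questions become plain SQL. This is an illustrative mapping only, using invented verb and activity IDs; it is not a standard way to export an LRS.

```python
import json
import sqlite3

# A few toy xAPI-style statements (IDs are placeholders).
statements = [
    {"actor": {"mbox": "mailto:a@example.edu"}, "verb": {"id": "v/completed"}, "object": {"id": "act/sim-1"}},
    {"actor": {"mbox": "mailto:b@example.edu"}, "verb": {"id": "v/completed"}, "object": {"id": "act/sim-1"}},
    {"actor": {"mbox": "mailto:a@example.edu"}, "verb": {"id": "v/attempted"}, "object": {"id": "act/sim-2"}},
]

# Flatten the commonly queried fields into columns; keep the full JSON too.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stmts (actor TEXT, verb TEXT, activity TEXT, raw TEXT)")
conn.executemany(
    "INSERT INTO stmts VALUES (?, ?, ?, ?)",
    [(s["actor"]["mbox"], s["verb"]["id"], s["object"]["id"], json.dumps(s)) for s in statements],
)

# Example analytic question: how many distinct actors completed each activity?
rows = conn.execute(
    "SELECT activity, COUNT(DISTINCT actor) FROM stmts "
    "WHERE verb = 'v/completed' GROUP BY activity"
).fetchall()
print(rows)  # -> [('act/sim-1', 2)]
```

The same flattening is what makes off-the-shelf SQL and visualization tools usable for rapid iteration, which appears to be the motivation behind Kirsty's recommendation.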
Andy mentioned two reasons to use standards: 1) fewer resources are spent if a standardized approach already exists, and 2) results can be shared with other organizations in a meaningful way. Ellen agreed it is the same with reproducible research.
3 Generic vs specific verbs
4 Discussion of the main points (see Extras below) gleaned from talking about xAPI and Learning Analytics with Kirsty Kitto - List of publications, Research-Bio
5 Teamwork Profile – ongoing work on this
David discussed the struggles with tools for assessing team performance: they do not look at individual team members and rely on very subjectively observed behaviors. There needs to be a way to track teamwork objectively. The Teamwork profile was an attempt to look at how people interact in group situations. Andy agreed the profile is a good idea to focus on social interactions. David stressed the need for a more objective way to track who contributes. Valerie mentioned “delegated,” “volunteered,” and “solicited” as verbs for social interactions. David recommended posting them to the live document. Ellen will send the group an article on managing groups.
6 Note posted by Andy Johnson: ADL will be doing more with the profile spec in terms of aiding adoption and mapping it to previously authored profiles in the coming year.
Andy provided a brief update. ADL is working on getting more people on board and getting DISC under contract. They want to expand the effort, creating conformance testing and validation against a profile. They intend to create an alternate version of what the profile group is doing and offer best practices. He noted that Megan is running DISC and Russell Duhon is doing most of the work.
Extra Topics relevant to discussions above
Whether it makes sense to consider several simulation types at once
The possible utility of information models
The possibility of using simultaneous profiles for each main type of agent - student, system, observer
The value of thinking backwards from possible analytics or use cases
Recording low-level event data and then higher-order mappings of that data to meaningful concepts
The need for better tooling for rapid testing purposes without having to build the real thing, etc.
Also: Russell Duhon (who wrote the new profile specification) may be willing to talk to all or some of us about testing profiles and learning analytics. Caveat: he is a consultant, so he would be interested in a paid or funded project.