
Meeting Information

Date:

December 4, 2007

Time:

11:00 AM EST

Attending: Ed Kennedy, Jack Kues, Matt Lewis, Andy Rabin, Hillary Schmidt, Valerie Smothers, Tim Willett

Agenda Items (hold for next call)

  1. Review and approve minutes of last meeting
  2. Review and discuss minutes of informal meeting October 18
  3. Working group survey results
  4. Proposed schema changes based on recent decisions

Discussion

Valerie informed the group that Francis was ill and Linda was unavailable; the call would therefore be an informal one to discuss the comments Andy Rabin had raised in reviewing the proposed schema changes.

Andy went over his comments. The first related to identifying the question and responses: could the question and responses be defined by an external resource? He clarified that in discussing this with Valerie, she agreed that responses inherently match the question and that the reference to a source taxonomy should exist at the survey item level. In addition, the answer types assumed a 5-point Likert scale; what if someone had a 10-point scale? Valerie responded that we could make both components externally referenceable.

Andy commented that the second issue concerned the actual responses. Is it within the scope of the working group to describe any type of question, or is our work really to pass aggregate information? At some level you have to provide context. He acknowledged that QTI is one option for describing questions, but that using QTI raises IP and complexity issues. He provided an example of a Likert question in QTI.

Valerie replied that it may be possible to develop a specification that is informed by QTI but avoids the complexity. Andy commented that QTI is complex because it reflects what it is trying to do; in the end, we would be in the same boat.

Jack agreed that probably 90 to 95% of QTI is not necessary. Unless we want to drive evaluation itself, we could, for starters, capture the vast majority of questions in a simpler model than QTI; otherwise, we would bury the components that people would actually use. Andy agreed.

Jack added that we need to maintain the goal of enabling comparison of evaluation data: we need to share data and be able to compare across institutions. If that is still the primary goal, a simple model that captures questions and data would be a practical place to start. At some point we may want to see if people want to scale it up to other areas.

The group looked at the QTI example that Andy circulated. He commented that it differed from the multiple-choice structure currently in place in that there was a header describing the context of the question. Valerie replied that we could include headers in all multiple-choice questions as an optional element.

Valerie commented that within the IMS specification nearly every level has a unique identifier. Including identifiers for responses could be helpful in a more generic structure for multiple-choice questions: response identifiers would enable systems to ascertain that the strongly agree response is in fact the strongly agree response, even if it is capitalized differently, misspelled, or otherwise appears differently within the XML. The group agreed that identifiers on responses would be helpful.

Tim commented that a single survey item may contain a number of questions. Could a structure indicate that the question is a Likert item with X response options (e.g., five), label each response option, and reference a taxonomy declaring the header, survey question, and responses? Anyone could use the labels or not; those familiar with the taxonomy could reference its declaration of what strongly agree is.
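The structure Tim describes might be sketched as a simple data record. All of the field names below (id, source, type, points, responses, and the taxonomy URL) are hypothetical illustrations for discussion, not part of any approved schema.

```python
# Hypothetical sketch of a survey item with identified responses and an
# external taxonomy reference; names are illustrative only.
survey_item = {
    "id": "q1",
    "source": "http://example.org/taxonomy",  # hypothetical taxonomy URI
    "type": "likert",
    "points": 5,
    "header": "Please rate the following statement.",
    "question": "The activity met my learning needs.",
    "responses": [
        {"id": "sa", "label": "Strongly agree"},
        {"id": "a",  "label": "Agree"},
        {"id": "n",  "label": "Neutral"},
        {"id": "d",  "label": "Disagree"},
        {"id": "sd", "label": "Strongly disagree"},
    ],
}

# A consuming system could match on the response id ("sa") rather than
# the label text, so spelling or capitalization differences don't matter.
assert len(survey_item["responses"]) == survey_item["points"]
```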

Jack asked how sophisticated the taxonomy would have to be. Valerie commented that she didn't think it had to be very complex; it would require unique identifiers for the questions and responses.

Tim added that you need questions and responses you can refer to. It is easy enough to have an item labeled strongly agree; that is all you would need, with further items for disagree, etc. The taxonomy would declare the question text.

Andy agreed that taxonomies need to exist and asked whether that falls under the purview of the alliance database. Valerie agreed that it is not within the purview of the metrics working group. She commented that IMS VDEX may be the simpler way to put together a taxonomy.
Tim clarified that VDEX defines value lists: for a Likert question, values such as 1, 10, strongly agree, etc. What the question looks like would be QTI.

Andy added that if we settle on a few different types of questions, QTI examples seem fairly straightforward. We could optionally reference an external source.

Likert responses have an identifier, but no value or label (1, agree, etc.); we would have two different fields for each. Andy will take a cut at a structure for basic question types.

Jack noted that in comparing a 5-point to a 10-point scale, you can standardize values for direct comparison of the data. If there is no label, you can convert to percentiles; any number of standard conversions allow you to directly compare responses. There are a couple of key variables: unipolar or bipolar, the number of points, and labels or no labels. Jack recommended that we look at the examples that are out there and ensure that we cover all the commonly used types, so that what we come up with can correctly classify what is being used.
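The percentile conversion Jack mentions could be as simple as a linear rescaling. The function below is one illustrative sketch; the specific formula is an assumption, since the group did not settle on a particular conversion.

```python
def to_percentile(response: int, points: int) -> float:
    """Linearly rescale a 1..points Likert response onto 0-100.

    One simple conversion among the 'any number of standard
    conversions' Jack mentioned; assumed, not agreed upon.
    """
    if points < 2 or not 1 <= response <= points:
        raise ValueError("response must fall within the scale")
    return (response - 1) * 100 / (points - 1)

# A 4 on a 5-point scale and a 7 on a 10-point scale land nearby:
print(to_percentile(4, 5))   # → 75.0
print(to_percentile(7, 10))  # ≈ 66.7
```

Note that this treats the scale as unipolar and evenly spaced; a bipolar scale or labeled anchor points might call for a different mapping, which is exactly why Jack's key variables matter.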

Tim asked if we would report how many people responded to each item, as well as the mean and median scores and the number of responses. Valerie commented that the working group had been operating at a more granular level, capturing the number of responses to each potential answer; the mean, median, and total number of responses could be calculated from that.

Andy asked if we want to group responses by profession or other category? How do answers break down by specialty? These categorizations are important for some providers. Valerie agreed it was a good idea.

Recommendations

  • SurveyItem should have a source and id attribute to indicate the source of the question and its responses. Responses and questions should have identifiers, too.
  • It may be possible to develop a more generic and flexible model. Andy will propose some ideas on how this could be modeled.
  • Include a category for survey responses.