
Preamble

As recommended during the last teleconference of the Working Group (07 Jun 2007), a number of preliminary use cases have been prepared. The purpose of use cases, in general, is to define the future functions that the specification is meant to facilitate. The future functionality needs can then be used to help guide the development of the specification.

These specific use cases are only preliminary cases, meant to stimulate discussion and debate. The use cases can then be edited and refined (or removed altogether, or new ones added) to reflect the identified needs of the community the specification will serve.

The use cases were prepared according to the MedBiq use case template, drawing on information from:
  • the ACCME accreditation criteria (September 2006)
  • the Metrics Working Group charter
  • an early version of the Group's scope and basic use case suggestions
  • MEMS version 0.6
  • a draft "CME Impact Matrix" authored by Valerie
  • the VA Final Report on their Training Evaluation Instrument, which evaluated learner response
  • the "Standardized Approach to Assessing Physician Expectations and Perceptions of Continuing Medical Education" document: Parker K, Parikh SV (2001). Standardized Approach to Assessing Physician Expectations and Perceptions of Continuing Medical Education. Journal of Evaluation in Clinical Practice 7(4):365-371.
  • models of educational evaluation, such as Kirkpatrick's model and the VA's model
  • the MEMS-CMEQUAL comparison

In preparing these use cases, a number of intentional distinctions have been made.

1. Participation data versus evaluation data for an activity. The need for participation data to be reported for a given CME activity is described in the 'early scope and use case' document. Participation data include elements such as the number of participants, the number of materials distributed, the credits awarded, and the length of the activity. Evaluation data include measurements of the outcomes of an educational intervention according to one or more of Kirkpatrick's levels.
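
To make this distinction concrete, the sketch below (in Python, using entirely hypothetical class and field names that are not drawn from MEMS) separates the two kinds of data for a single activity:

```python
from dataclasses import dataclass

# Illustrative only: class and field names are hypothetical, not taken from MEMS.

@dataclass
class ParticipationData:
    """Descriptive facts about participation in a single CME activity."""
    participant_count: int        # number of participants
    materials_distributed: int    # number of materials distributed
    credits_awarded: float        # credits awarded
    activity_length_hours: float  # length of the activity

@dataclass
class EvaluationData:
    """An outcome measurement for the same activity, tied to a Kirkpatrick level."""
    kirkpatrick_level: int   # 1 = reaction, 2 = learning, 3 = behaviour, 4 = results
    instrument: str          # e.g. a post-activity questionnaire
    result_summary: str      # the summarized finding at that level
```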

2. Evaluation reporting framework versus evaluation methods. According to its Charter, this group has clearly identified a need for a standard way to electronically summarize and report evaluation data. An evaluation reporting framework (or metrics framework) would serve this purpose: it would give users a way to report their metrics in a standard form without specifying which particular metrics (i.e., evaluation methods) are to be used. There was less consensus within the group about whether evaluation methods (i.e., specific methods of evaluation based on best evidence) should be endorsed by MedBiq.
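
As a rough illustration of this separation, the sketch below (continuing the hypothetical Python naming from above, not the MEMS schema) shows a generic container that standardizes how metrics are reported while leaving the choice of metric and method to the reporting provider:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: names are hypothetical and not drawn from the MEMS schema.

@dataclass
class ReportedMetric:
    """One metric as reported by a provider; the framework carries it but does not prescribe it."""
    name: str    # e.g. "mean satisfaction score" - chosen by the provider
    method: str  # the evaluation method used, described by the provider
    value: str   # the result, in whatever form the chosen method produces

@dataclass
class EvaluationReport:
    """A standard container for summarizing and reporting metrics for one activity."""
    activity_id: str
    metrics: List[ReportedMetric] = field(default_factory=list)
```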

3. Needs assessment versus programme design versus outcome measurement. These three steps in CME programme development are echoed by the ACCME and in Parker and Parikh (2001). In general, a needs assessment identifies gaps in competence, for a particular target learner population, that must be remedied. Based on this, the programme (both its content and its teaching/learning strategies) is designed and delivered. Following delivery, outcome measurement can be performed to evaluate the activity generally (Was it well received? Did it change practitioners' behaviours?), to evaluate specifically how well the programme met the identified needs (Was the needs assessment accurate? Did the programme actually address the needs of the target learners?), or to evaluate the programme design and delivery (Was a particular teaching strategy effective? Was a particular teacher effective?).

4. Educational activity versus programme. This distinction is defined in the "Key Concepts" section of the Use Case document. Evaluation of a programme includes evaluation of the individual activities within it, plus an overall evaluation of the programme; sometimes the whole is greater (or lesser) than the sum of its parts.

The use cases intentionally do not distinguish between the educational methods used. Whether an intervention uses print literature, face-to-face teaching, eLearning, simulation, other approaches, or a combination of methods, the evaluation process is essentially the same, although the evaluation methods and tools used may vary.

The Use Cases

All use cases are described in the attached document. When editing this document, please use "track changes."

MedBiquitousUseCases-Metrics.doc
