
Meeting Information

Date: November 17, 2011

Time: 8 AM PST / 9 AM MST / 10 AM CST / 11 AM EST / 16:00 GMT / 12 AM 11/18 SGT

Attending:  Co-Chairs, Susan Albright and Marc Triola; Dmitriy Babichenko, Adrian Ballard, Terri Cameron, Erica Friedman, Al Salas, Karen Sanders, Paul Schilling, Juliane Schneider, Rosalyn Scott, Hugh Stoddard, and Janet Trial

Agenda Items

1. Review minutes of the last two meetings (11/8 and 10/27)

Susan began the meeting with a review of the minutes from the last two meetings. She suggested a few small changes to the Denver meeting minutes regarding Tufts, and Valerie made those changes. The other minutes stand as submitted.

2. Review of the proposed instructional methods, assessment methods, and resources, and discussion of next steps

Note pending changes:

  • Add Mannequin to resources
  • Combine Simulated/Standardized Patient in resources
  • Clarify whether Specimen is subsumed by Prosection in resources

Susan mentioned the brief discussion that took place in Denver and asked Terri to comment. Terri said that Adrian is working on revisions to the lists of instructional methods, assessment methods, and resources. She noted that Rachel came up with the idea of a faceted approach. The initial version was seen as too complex, so what is presented here is scaled back. The resources list is optional and intended to provide more detail about the instructional or assessment method. The subcommittee has had many calls to develop these lists. Autopsy, prosection, dissection, and prosected specimen would be resources that link to instructional methods. Multiple methods could be used for each event. Terri commented that they are gathering feedback before sending the lists out for wider review to all curriculum deans in the US and Canada. They also plan to send them to other health professions and to European and Asian schools for feedback. There will be a Frequently Asked Questions section, too. The hope is to close the feedback period by February at the latest and then produce a final product.

Susan asked about the definition of “large group” and when a large group becomes a lecture. Rosalyn noted that a discussion is different from a lecture because a discussion signifies a two-way activity. Terri agreed and mentioned that a discussion never becomes a lecture based on size alone. Susan asked whether, with thirty students, you would still call the activity a discussion. Rosalyn and Terri replied it would depend on whether the activity is a two-way discussion or a lecture. Susan suggested those aspects be clarified in the definition. Rosalyn suggested removing “large” from the definition of lecture.

Susan questioned the grouping of dissection and prosection. Terri shared that the topic was a subject of debate and that Adrian is working on making those items separate resources. Adrian will also add a note to the definitions of demonstration and laboratory linking the concepts of dissection and prosection. Marc and Susan agreed. Terri mentioned that nobody linked autopsy to anything in CurrMIT, which is why it is under resources.

Susan commented that the case-based learning and simulation definitions overlap. She questioned how a virtual patient would be categorized. Terri commented that case-based learning is when a case is the basis of the discussion, lecture, or learning that is occurring; simulation means a simulator is being used. Susan asked whether you would then put Virtual Patient under simulation. Terri answered yes, and that they would also put it under case-based learning. Valerie agreed you could list it as both and then use the Resource list to indicate that a virtual patient is used.

Hugh commented that you should look at the role of the learner in determining the instructional method. If the learner adopts the role of the healthcare provider, that is simulation. In case-based learning, the learner is presented with a case but is not necessarily adopting a different role; they could simply be learning about the scenario. You can do simulation with a mannequin or another simulation resource. Terri agreed to add more information. Susan suggested working on the definition of case-based learning, and Terri agreed to do that.

Susan asked the group about the use of the word “real” in the definition of Problem-Based Learning (PBL). Hugh suggested taking it out because you often have to eliminate some things and add others, so it is not always real patient data. Susan agreed: if they don’t have a radiograph for the patient who is the basis for the PBL case, they find one that works. Terri agreed to take out the word “real.”

Susan continued with comments on the assessment methods and asked if “written/computer-based” was used to differentiate from oral exams. Valerie replied yes, it was there to differentiate from oral exams. Erica asked why audience response was separate: it is used for multiple-choice questions, which would be an exam, and the technique used to collect the answers is no different from paper- or computer-based. She questioned whether it should be a resource. Terri commented that the group did discuss having it under resources. Hugh asked where the audio piece went. He added that Heather had some thoughts on this and recommended checking with her before making a change.

Susan asked if it was OK to use two assessment methods; they use ratings and provide feedback. Valerie commented that you could use multiple terms from each list, and Terri confirmed that.

Rosalyn questioned the use of computer resource versus virtual patient versus virtual/computerized laboratory. A virtual patient is a computer-based activity, so what would go under computer resource? Terri shared that the definition is simply the resource being used: an iPad, e-reader, or smartphone; a virtual patient is a separate entity. Susan added that you might have two resources listed for a particular event, and Terri concurred. Rosalyn asked if a virtual/computerized lab would cover a virtual reality task trainer. Terri and Hugh clarified that a wet lab is related to microbiology (blood, tissue, etc.), whereas prosection is related to gross anatomy. Terri asked the group to share any other synonyms. Susan mentioned that you could use multiple resources for some of these things. She asked if a virtual/computerized lab is the same thing as a simulation lab; for example, a dental school has heads at every seat. Is a computerized laboratory also a simulation laboratory? Terri clarified that a virtual lab refers to using computers to view electronic images, such as slides. Erica agreed.

The group asked about the term computer resource. Hugh explained that computer resource is a parent term and virtual patient is a child term. Terri asked the group if anyone felt strongly about the term computer resource. Hugh commented that it was included for distance learning instruction; he wanted a generic term. Terri said the title is misleading; changing it to “computer-assisted instruction” could be better, though that sounds like an instructional method rather than a resource. She will search the literature to come up with a better name for computer resource.

Marc questioned whether the vocabularies would be used solely for the Curriculum Inventory project or if they would be used more broadly throughout the AAMC. Terri shared that they are going to distribute them widely to see if a standard vocabulary can be adopted; the goal is to broadcast more widely. Robby commented that he thinks the LCME will adopt these vocabularies. Marc commented that it is only a matter of time until the vocabularies change and asked whether we should focus on how they would be modified over time. Terri replied that changes would not be made lightly: the reports that are generated affect historical data, and they would have to put a focus group together to look at how a change makes the data different. Susan asked how institutions would report experimental instructional methods. Terri replied that the school would have to match them to an existing method; if a new method starts to be used more broadly, the list would be changed.

3. Continue granularity discussion

Valerie continued the discussion on granularity. She summarized what was discussed on the October 27th call. Different schools have different levels of granularity in their curriculum and use different names. Even with a numeric system, it won’t be consistent across schools in the current model, because some schools have more levels of granularity than others. Valerie asked how the granularity data would be used; we can drop it altogether if it’s not useful. Valerie reviewed that the granularity element is a required element, indicated by a number between one and five; one is the highest level and five is the smallest granularity. But schools may only have four levels, or two, or three; it varies by institution and by part of the curriculum. Susan asked if we could drop granularity. Marc asked if we had talked about the academic level element. Valerie commented that is a different element. Marc mentioned that the academic level element is there but never really reused or referenced; he suggested putting it in sequence block event references. Terri agreed that made sense to her. Susan suggested removing granularity. Valerie agreed and offered to follow up with Marc to map out referencing academic levels in sequence blocks.
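The inconsistency described above can be illustrated with a small sketch. The school and level names below are invented for illustration and are not taken from any actual curriculum:

```python
# Hypothetical sketch: the same numeric granularity level can mean
# different things at schools whose curricula have different depths.
school_a = {"Year": {"Block": {"Course": {"Week": {}}}}}  # 4 levels of nesting
school_b = {"Phase": {"Clerkship": {}}}                   # 2 levels of nesting

def depth(tree):
    """Return the number of nesting levels in a curriculum hierarchy."""
    if not tree:
        return 0
    return 1 + max(depth(child) for child in tree.values())

# Granularity "2" points at a Block at School A but at a Clerkship
# (the smallest unit) at School B, so the number alone is not
# comparable across institutions.
print(depth(school_a))  # 4
print(depth(school_b))  # 2
```

Referencing explicit academic levels from sequence blocks, as Valerie and Marc proposed, avoids forcing every school onto the same one-to-five scale.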

4. Review updates to specification

Valerie mentioned that on page three there is a summary of the changes in red. She has added an attribute to indicate that a sequence block is a track; this will help with LCME reporting. She took out the placeholder for event category and added instructional method, assessment method, and resource. Instead of an enumerated list, the vocabularies are currently recommended lists included in the appendix. This is the Technical Steering Committee’s recommendation as well, and there is room for that to change based on input. Terri asked the group how they felt about that.
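The changes Valerie described might look something like the sketch below. All element and attribute names here are illustrative placeholders, not the actual names used in the specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of a sequence block flagged as a track, with the
# new instructional method, assessment method, and resource elements
# attached to an event. Names are invented for illustration only.
block = ET.Element("SequenceBlock", attrib={"track": "true"})
event = ET.SubElement(block, "Event")
ET.SubElement(event, "InstructionalMethod").text = "Lecture"
ET.SubElement(event, "AssessmentMethod").text = "Written/Computer-based Exam"
ET.SubElement(event, "Resource").text = "Audience Response System"

xml_text = ET.tostring(block, encoding="unicode")
print(xml_text)
```

An attribute like `track="true"` lets a reporting tool pick out track sequence blocks without changing the element structure around them.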

Susan mentioned it might take a while for people to adjust to the new standard; it will take time for people to catch up. She asked how it works if something required is missing. Valerie replied that it won’t work: if an element is required in the specification and you don’t have it, that would produce an XML error, so you would have to come up with something. Susan asked if both instructional method and assessment method would be required. Valerie replied that was a good point and something that should change in the specification.
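Valerie’s point about required elements can be sketched as a simple validation check. This is an illustrative stand-in for real schema validation, using invented element names:

```python
import xml.etree.ElementTree as ET

REQUIRED = ["Title", "InstructionalMethod"]  # hypothetical required children

def validate_event(xml_text):
    """Raise ValueError if a required child element is missing,
    mimicking how schema validation rejects an incomplete document."""
    event = ET.fromstring(xml_text)
    missing = [name for name in REQUIRED if event.find(name) is None]
    if missing:
        raise ValueError(f"missing required element(s): {missing}")
    return True

validate_event("<Event><Title>Anatomy Lab</Title>"
               "<InstructionalMethod>Laboratory</InstructionalMethod></Event>")  # passes

try:
    validate_event("<Event><Title>Anatomy Lab</Title></Event>")  # no method
except ValueError as err:
    print(err)  # missing required element(s): ['InstructionalMethod']
```

A document missing a required element fails validation outright, which is why schools would “have to come up with something” for every required field.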

Terri mentioned requiring assessment method for courses. Valerie commented that all of the instructional methods and assessment methods are currently at the event level, not at the sequence block level. Terri asked if there was some way to make it an either/or. Susan commented that this requires more thought. Susan asked if you would require it at the sequence block level, and Terri thought that for assessment method she would. Susan asked about a basic science course with fifty lectures, a midterm exam, and an exam on the first ten lectures: how would you reflect that assessment method? She recommended asking Kevin; they report it at the event level and use aggregate data for each course. Valerie asked whether you could report assessment as a separate event. Hugh noted that one of the ideas behind emphasizing assessment is to move toward a competency world that is no longer time based: here is a competency, here is an assessment, tracking a cohort as it moves through a curriculum, rather than counting completed hours of coursework. That is the future vision of what education would look like. Susan suggested putting this topic high on the agenda for next time. If anyone has comments regarding today’s call, send emails to Valerie.
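The either/or requirement Terri asked about could be expressed as a check that each event carries at least one of the two methods. A minimal sketch, again with invented element names:

```python
import xml.etree.ElementTree as ET

def has_method(event_xml):
    """Return True if the event has an instructional method,
    an assessment method, or both (hypothetical element names)."""
    event = ET.fromstring(event_xml)
    return (event.find("InstructionalMethod") is not None
            or event.find("AssessmentMethod") is not None)

print(has_method("<Event><InstructionalMethod>Lecture</InstructionalMethod></Event>"))  # True
print(has_method("<Event><AssessmentMethod>Exam</AssessmentMethod></Event>"))           # True
print(has_method("<Event><Title>Orientation</Title></Event>"))                          # False
```

In an XML schema, a constraint of this “at least one of” shape is typically modeled with a choice group rather than marking both elements required.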

5. Open discussion

Decisions

We will remove granularity from the specification.

Action Items

Adrian will update the instructional methods and assessment methods.

Valerie will follow up with Marc on referencing academic levels within the sequence block.

Valerie will change the spec so that either instructional method or assessment method is required, but not both.
