Meeting Information

Date: July 26, 2011

Time: 8 PDT / 10 CDT / 11 EDT / 16 BST

Attending: Linda Lewin, co-chair; Valerie Smothers, staff; Bob Galbraith, Steve Kenney, Howard Silverman, and Loreen Troy.

Agenda Items

1. Review minutes of last meeting

Linda asked if anyone had any changes or additions to the prior minutes.  Howard moved that the minutes be accepted as submitted and the group concurred.

2. Summary of discussion points from last meeting

Linda continued with a summary of the points listed below:

a. Each role has a unique purpose reflected in the definition of that role. An administrator is conducting program evaluation, a researcher is interested in disseminating knowledge, and an external reviewer is evaluating the individual learner. 

Howard recommended clarifying that any individual might act in different roles at different times. Linda asked whether that statement had implications for the system. Valerie noted that in many systems an individual can have several roles, so it should not be a concern for the group.

b. External reviewers will only see the data that learners choose to share with them. If data is omitted, there is no indication that anything was omitted.

Linda thought that this was the way the group was leaning, but she wanted confirmation from the group. Howard didn't remember having consensus on that point and thought it would be good to have more people validate that conclusion. Steve concurred that he didn't remember a strong consensus either and suggested that Valerie send an email asking group members to voice their opinions, or conduct a formal poll to see if there is agreement. Linda agreed and suggested discussing the responses on the next call. Loreen noted that it is a busy time of year for many people. Valerie agreed to send out an email, including an option for those who are unsure and a section for comments. Linda commented that the group can always bring back the discussion if needed. Valerie will coordinate with Linda on the survey and the wording of the email.

c. The learner's summary or preface to a specific educational achievement may be included as metadata about that achievement.

Linda summarized that this is the cover letter concept. Howard suggested having a short descriptor. Linda asked if it could be more reflective, for example: "the following pieces of data show how I have become competent in this area of interest, in which I want to pursue specific training." Howard urged a limit on the maximum length; allowing only 50 words or fewer forces the person to be brief. Valerie noted that the group gets into those kinds of specifics when it develops the written specifications. The group agreed to allow for a preface; details of how extensive it may be will be decided later.

d. If self-reported data is included, the data source must be identified; information on the degree of validation may be included as well.  

Linda asked what the degree of validation means. Valerie mentioned self-reported data and asked whether schools verify it; she wasn't sure there was consensus on that point. Linda asked if it would be better to be more general and state that the source of all data will be identified. Loreen commented that someone can always inquire if the source is not clear.
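As one way of picturing how points c and d above might fit together, the following is a minimal, purely illustrative sketch of an achievement record carrying an optional learner preface, a required data source, and an optional degree of validation. The field names, the 50-word limit, and the example values are assumptions drawn from the discussion, not agreed specifications.

```python
from dataclasses import dataclass
from typing import Optional

MAX_PREFACE_WORDS = 50  # limit discussed by the group as an idea, not a decided value


@dataclass
class EducationalAchievement:
    """Illustrative record only; field names are hypothetical, not part of any specification."""
    title: str
    data_source: str                         # point d: the source of the data is always identified
    self_reported: bool = False
    validation_level: Optional[str] = None   # point d: optional information on degree of validation
    preface: Optional[str] = None            # point c: learner's brief, cover-letter-style summary

    def __post_init__(self):
        if self.preface and len(self.preface.split()) > MAX_PREFACE_WORDS:
            raise ValueError(f"Preface exceeds {MAX_PREFACE_WORDS} words")


# Example usage with hypothetical values
achievement = EducationalAchievement(
    title="Professionalism milestone assessment",
    data_source="School of Medicine clerkship evaluation",
    self_reported=False,
    validation_level="school-verified",
    preface="These evaluations show how I developed competence in professionalism.",
)
```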

Linda commented that the only thing left is to get more feedback on the data seen by external reviewers, because they are looking at individual learner data. Howard commented that researcher data sets would have random holes all over them, while administrators would need to see everything. Linda asked, if she were a researcher who wanted to learn about professionalism and requested data, whether she would get what she requested and whether she would know that more data existed. Valerie commented that the governance guideline issues would clarify that. Linda agreed it would be good to think through how this is going to work.

3. Outstanding use case questions

a. Types of external reviewers

Valerie explained that Carol had made comments asking whether we should be explicit about the types of external reviewers. Linda asked if we needed a comprehensive list when we create the standard. Valerie replied that we should try to capture all the major types of external reviewers. Howard mentioned that the educational achievement data will not change much, but there may be small things that differ depending on who is reviewing and what they are reviewing; ideally we should get those groups involved to help create the standard. He added that the standard itself may not change, but the minimal data set could be specified for the convenience of the student and the state agency by having a configuration file for each of these broad categories. Linda agreed a list of types of external reviewers would be helpful, and Valerie commented that we should make sure we have involvement from those groups. Howard recommended soliciting nominations for categories of external reviewers from the group. The group agreed.
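To make the configuration-file idea concrete, here is a hypothetical sketch of how a minimal data set might be declared for each broad category of external reviewer. The categories and data element names are illustrative assumptions only; the group has not yet nominated categories or defined data elements.

```python
# Hypothetical configuration mapping each broad category of external reviewer
# to the minimal data set it would receive by default.
REVIEWER_PROFILES = {
    "residency_program": ["degrees", "clerkship_grades", "competency_summaries"],
    "state_licensing_board": ["degrees", "licensure_exam_results"],
    "credentialing_body": ["degrees", "licensure_exam_results", "procedure_logs"],
}


def minimal_data_set(reviewer_category: str) -> list[str]:
    """Return the default data elements for a reviewer category (empty if unknown)."""
    return REVIEWER_PROFILES.get(reviewer_category, [])


print(minimal_data_set("state_licensing_board"))
```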

b. Research use case

Valerie continued by asking the group if all their concerns were addressed regarding the research use case. Howard recommended asking an IRB person to look at the use case. Valerie shared that IRB processes will be changing in accordance with changes to the Common Rule recently proposed by the Department of Health and Human Services (see http://www.washingtonpost.com/national/health-science/us-proposes-rule-changes-for-human-subject-research/2011/07/22/gIQA1IAhVI_story.html).

c. Additional comments

Howard wanted to know about the trajectory of work over the next couple of weeks. Valerie continued with a discussion of next steps. Now that we know what problems we're trying to solve, we need to examine what the data looks like and whether we have examples from which to develop a mockup, something visual for the group to look at. We would look at what educational data looks like now and may look like in the future, and start collecting as much data as we can to develop standards. Once we have many examples, we can look for commonalities across them and develop our data requirements.

4. What level of detail will support the use cases?

Linda suggested continuing this discussion at the next meeting. Valerie asked the group to think about how much information they want to know about educational achievements: do they want to know that a person is competent in the area of professionalism, or do they want to know what data shows they are competent? That was a point raised by Alan. Linda questioned whether we are interested in pass/fail or in the level of achievement within the pass/fail framework, and whether it would always be the same (i.e., would everyone want the same level of detail)? Loreen commented that somebody may want to see who should be inducted into an honor society and may be looking at different data; the data will vary. Linda asked if we were allowed to say we wanted data that doesn't exist yet. Valerie replied yes. Linda shared that in a perfect world you'd want every detail you had and summary views of that detail, but that may not be practical. Valerie suggested the group think about what the people who have the data are going to be willing and able to share. Using USMLE results as an example, imagine you could obtain an overall score, scores broken down by different areas, scores related to a particular competency, and item-level data, i.e., how the learner answered each question.
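As one way to visualize the levels of detail described in the USMLE example, the sketch below nests an overall score, area and competency breakdowns, and item-level responses in a single hypothetical result, with a summary view for reviewers who only want totals. The structure, field names, and scores are assumptions for discussion, not an agreed format or real data.

```python
# Hypothetical exam result illustrating four levels of detail discussed:
# overall score, scores by area, scores by competency, and item-level responses.
usmle_style_result = {
    "overall_score": 232,
    "scores_by_area": {"cardiovascular": 78, "respiratory": 82},
    "scores_by_competency": {"medical_knowledge": 80, "patient_care": 75},
    "item_level": [
        {"item_id": "Q001", "answered": "B", "correct": True},
        {"item_id": "Q002", "answered": "D", "correct": False},
    ],
}


def summary_view(result: dict) -> dict:
    """A reviewer who wants only totals might receive a summary view like this."""
    return {"overall_score": result["overall_score"]}


print(summary_view(usmle_style_result))
```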

Linda shared that the College Board does this with the SAT: you can find out which questions you answered right and wrong. Linda noted that when you send SAT scores to a college, they don't want individual question scores, they want total scores, but you as an individual might want individual question scores so you can improve. It's not reasonable to have all that information. Bob talked about the merit badge approach, emphasizing that we likely would not want item-level data. Linda commented that in a clerkship somebody gets grades made up of different things; most people aren't interested in all of it, but they may want to see it.

Steve recalled that Bob had said his dream for content was to look back and determine what made a physician great, what the process was, and what levels of success they achieved, and to attempt to replicate that. Bob concurred that there are multiple reasons why having a whole collection of data that we don't currently have will help us do better. We don't necessarily know what the important measurements are at the moment, we don't know how the important measures correlate, and we don't know if the current measures being used to improve outcomes work. He further stated that as educators we don't know what we ought to be doing to make the best doctor possible; it's faith-based, we don't know how to do that, and we don't have a longitudinal data set. If we could look at two students and their practice performance 20 years later, look at the curriculum they took in medical school, and draw some conclusions, that would be useful. We need long-term studies that look at outcomes to improve education, and we need a certain amount of detail; we won't know what we need until we need it. Valerie suggested revisiting this topic during data analysis, when we see what people want to report.

5. Open discussion

The next call is August 9. 

Decisions

Action Items

Valerie will send out an email to obtain consensus on guiding principles related to subsets of data and to request input regarding the types of people that may be external reviewers.
