
Meeting Information

Date: October 25, 2011

Time: 8 PDT / 9 MDT / 10 CDT / 11 EDT / 16 BST

Attending:  Linda Lewin – Co-chair; Susan Albright, Dana Bostrom, Carol Carraccio, Robert Englander, Patty Hicks, Kimberly Hoffman, Steve Kenney, Morgan Passiment, Sandhya Samavedam, Howard Silverman, Scott Smith, Valerie Smothers, Kevin Souza, Andria Thomas, Janet Trial and Lori Troy. 

Agenda Items

1 Review minutes of last meeting

Linda began the meeting with a review of the minutes.  She mentioned a typo on page two, which Valerie corrected.  The minutes were approved as amended.  She noted that the important part of the previous minutes was the six questions at the end.  She suggested Valerie keep an ongoing list of questions, and Valerie agreed.

2 Discuss documents from Patty Hicks (rotation, summative 1, summative 2 - see also Kogan and EPA articles)

Patty continued with an explanation of the Individual Resident Evaluations General Pediatric Resident Assessment.  It is an example of one evaluation for one resident that features resident scores related to interpersonal and communication skills (ICS).  You can see the score given; each score represents a descriptor.  It is important to note that the same evaluation can be used for first-year residents as for second- and third-year residents on that same rotation.  The attending average is not as useful.  The group column refers to the individual assessor and provides that rater's average for all PL1s; the word group means the representative group of which the learner is a member.  Total represents this rater's average score for all learner levels on this particular item.  At the bottom of the page you see the learner information.
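To illustrate how the Score, Group, and Total values relate for a single ICS item, here is a minimal sketch; it is not the actual evaluation system, and the scores, descriptor wording, and field names are made up for illustration:

```python
from statistics import mean

# Hypothetical scores this rater has assigned on one ICS item: (learner level, score).
rater_scores = [
    ("PL1", 3), ("PL1", 4), ("PL2", 4), ("PL3", 5), ("PL1", 3), ("PL2", 5),
]

# Illustrative descriptors; each numeric score maps to a behavioral description.
descriptors = {3: "approaching expectations", 4: "meets expectations", 5: "exceeds expectations"}

learner_level = "PL1"
learner_score = 4  # the score this rater gave the learner being evaluated

# "Group": this rater's average for all learners at the same level (all PL1s).
group_avg = mean(score for level, score in rater_scores if level == learner_level)
# "Total": this rater's average across all learner levels on this item.
total_avg = mean(score for _, score in rater_scores)

print(f"Score: {learner_score} ({descriptors[learner_score]})")
print(f"Group (this rater, all {learner_level}s): {group_avg:.2f}")
print(f"Total (this rater, all levels): {total_avg:.2f}")
```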

The summative example shows a competency-based summary of an individual's scores for each competency.  Scores are averaged from 48 evaluations.  The summary also provides the average of peers and of all PGYs.  The overall summary weights different items to come up with the score.  The graphical form is what the residents like.  The turquoise bar represents the peer group; the brown bar represents all PGYs.
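As a rough sketch of that roll-up (competency names, weights, and scores below are illustrative, not taken from the actual report), the weighted overall score and the peer/PGY comparison might be computed along these lines:

```python
# Per-competency averages for one resident (e.g. over 48 evaluations), alongside
# the peer-group average (turquoise bar) and the all-PGY average (brown bar).
resident   = {"ICS": 4.2, "Medical Knowledge": 3.8, "Patient Care": 4.0}
peer_group = {"ICS": 4.0, "Medical Knowledge": 3.9, "Patient Care": 4.1}
all_pgys   = {"ICS": 3.7, "Medical Knowledge": 3.6, "Patient Care": 3.9}

# Assumed weights for the overall summary score (sum to 1.0).
weights = {"ICS": 0.3, "Medical Knowledge": 0.3, "Patient Care": 0.4}

overall = sum(resident[c] * weights[c] for c in resident)
print(f"Overall (weighted): {overall:.2f}")
for c in resident:
    print(f"{c}: resident {resident[c]:.1f} | peers {peer_group[c]:.1f} | all PGYs {all_pgys[c]:.1f}")
```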

Linda asked how those rater characteristics translate into the learner's score.  Patty answered that the score that is produced can adjust for the individual rater.  Linda asked the group if anyone had any comments or thoughts about how this will affect our standards.  Scott asked if the behaviors on which the learner is judged vary based on postgraduate year.  Patty replied that they are standard items that go on every evaluation, and the progression is the same.  They are trying to get the learner to relate not just to their numeric scores, but to where they are relative to where faculty think they should be.  The milestones are similar but more sophisticated and grounded than these items.

The group questioned the use of numbers versus the use of descriptors.  Carol commented that in a sliding bar scale, the faculty rater would not see the numbers but would see behavioral descriptions and could drag and drop a marker at some point along the continuum, at a milestone post or between milestones.  Kevin asked whether we should track rater reliability.  Linda mentioned rater reliability came up on the last call, and she asked the group if that was something that should be part of what we are doing or not.  Kevin replied that we are getting into the weeds; rater reliability should be the responsibility of the institution, and it is unclear why this would rise to a national reporting level.  Howard agreed it was valuable for learners to benchmark, but it may not be useful beyond institutional boundaries.  Patty commented that she also provided this as an example of how to display summative performance data and how to give the learner knowledge of where they are in relation to their group.  Linda mentioned we had discussed bar graphs and histograms; the question of rater reliability is probably too complicated for our standard.
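A minimal sketch of the sliding-bar idea Carol described, assuming the rater sees only behavioral descriptors while a numeric position is recorded on the back end; the milestone wording and scale values below are invented for illustration:

```python
# Milestone "posts" along the continuum: (numeric position, behavioral descriptor).
milestones = [
    (1.0, "Uses jargon; misses patient cues"),
    (2.0, "Communicates clearly with prompting"),
    (3.0, "Adapts communication to the patient independently"),
    (4.0, "Models and teaches patient-centered communication"),
]

def record_rating(slider_position: float) -> dict:
    """Store the continuous position the rater chose, plus the nearest
    milestone descriptor for display back to the learner."""
    _, nearest_text = min(milestones, key=lambda m: abs(m[0] - slider_position))
    return {"numeric_score": slider_position, "nearest_milestone": nearest_text}

# The rater dropped the marker between the second and third milestone posts.
print(record_rating(2.6))
```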

Linda commented that if we are going to provide national data regarding performance on an assessment, that implies we are using the same assessment tool.  Carol agreed that to get the information we want, we need to have the same tools and a large enough sample size.  Programs can use different tools, but in the specifications we would want the ability for people to input data from the same tool and compare against national outcomes.  Valerie commented that this was a new requirement: we want the capability to compile data on a national scale.  That would certainly have implications for the standard.  Valerie agreed to highlight that as a requirement.  Linda added that she would want the ability to compare within the program and nationally.
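A small sketch of what that requirement might look like in data terms, assuming a shared tool identifier so that scores can be pooled; the programs, learner IDs, and scores are hypothetical:

```python
from statistics import mean

# Hypothetical records captured with a common assessment tool:
# (tool_id, program, learner_id, competency, score)
records = [
    ("peds-ICS-v1", "Program A", "L001", "ICS", 4.1),
    ("peds-ICS-v1", "Program A", "L002", "ICS", 3.6),
    ("peds-ICS-v1", "Program B", "L101", "ICS", 3.9),
    ("peds-ICS-v1", "Program B", "L102", "ICS", 4.4),
]

def compare(learner_id: str, program: str, competency: str, tool_id: str = "peds-ICS-v1") -> dict:
    """Return the learner's score alongside the program and national means,
    pooling only data captured with the same tool."""
    pool = [r for r in records if r[0] == tool_id and r[3] == competency]
    learner_score = next(r[4] for r in pool if r[2] == learner_id)
    return {
        "learner": learner_score,
        "program_mean": mean(r[4] for r in pool if r[1] == program),
        "national_mean": mean(r[4] for r in pool),
    }

print(compare("L001", "Program A", "ICS"))
```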

Carol noted that the ACGME is creating a new accreditation system, with cycles based on milestones; it will be interesting to see whether a program's trainees are meeting the milestones and to look at the progression of learners over time.  Valerie asked if there was anything Carol could share with the group about the new accreditation system.  Carol agreed to find out.

The group moved on to discussing the EPA article from Carol and Bob.  Carol explained that this was one example of where to go with assessments.  They looked at the typical daily activities of pediatricians and then mapped those EPAs (entrustable professional activities) to sub-competencies and milestones.  The article looks at the EPA of single-system diagnosis and maps it to critical competencies and sub-competencies.  It is all about the entrustment decision: when you can entrust a learner to perform the activity without direct supervision.  The document creates a matrix that demonstrates a learner maturing, moving from novice to master.  Once the matrix was assembled, they created vignettes illustrating each level of competence for the EPA.  The key decision to make is which one of these represents the point at which we would entrust a learner.  A vignette could be put into video to calibrate raters or rolled into an assessment tool.  This would be helpful for getting at meaningful assessments.

Valerie asked, if a faculty rater indicates that the novice vignette matches the learner, what would that data look like?  Carol suggested using it as a learning road map that shows the learner where they are and where they should be.  Bob added that, like the sliding bar, there could be numerical data collection on the back end.

3 Discuss evolving PowerPoint for educational achievement data

Valerie continued with a discussion of the PowerPoint for educational data.  The first slide in the revised presentation displays a link that relates to EPAs and statements of awarded responsibility (STARs).  She commented that she had not seen it in practice and was not sure whether it should be included.  Scott commented he would love to have it in here.  He makes assumptions about what medical students can do; he would love for that to be explicitly attested to.  Lori mentioned having clinical exams for students, with a list of required skills and procedures they have to demonstrate.  All students who graduate have achieved the standard for required procedures, but this is not mentioned in the Dean's letter.  Patty added that skills are not standardized across medical schools, but this information would be helpful to program directors.  They compile a number of items to determine whether learners can supervise in a clinical setting.

Scott commented that there would be a small set at the national level and a bigger set to individualize within a program.  Carol commented that knowing a baseline for residents would provide a path forward.  Valerie added that she thought it would be helpful for the VA; they have to learn a lot about residents coming into their hospitals.  Valerie asked if any programs had implemented EPAs.  Carol commented that many fellowships are developing EPAs for sub-specialties.  She met with Judy Bowen about this, and there is the possibility for collaboration and cross-disciplinary work.

4 Open discussion

Decisions

Action Items

Valerie will maintain a list of open questions.

Valerie will note the following data requirement: the ability to compile assessment data on a national scale and show how the individual compares nationally and within a program.

We will continue to evolve the data analysis slides based on the discussion.
