Meeting Information

Date: August 9, 2011
Time: 8 PST / 9 MDT / 10 CDT / 11 EDT / 16 BST

Attending: Linda Lewin and Alan Schwartz, co-chairs; Carol Carraccio, Sharon Coull, Mike Dugan, Bob Galbraith, Simon Grant, Hilary Haftel, Kimberly Hoffman, Steve Kenney, Valerie Smothers, Morgan Passiment, Andria Thomas, and Loreen Troy

Agenda Items

1. Review minutes of last meeting

Valerie asked Sharon to provide a brief introduction since this was her first time participating on the call. Sharon shared that she is a family practitioner, clinician, and teacher interested in assessment at the University of Dundee. They currently use a paper portfolio system and are looking to move to an electronic one. She was introduced to the group by Hettie Till.

Alan asked the group to review the minutes while waiting for others to join the call.  The minutes were approved as submitted.

2. Discuss survey results

Alan continued with a reference to the link to the survey. The first question explored whether external reviewers will only see the data that learners choose to share with them. Valerie commented that, looking at the survey results, there is clearly no consensus. She suggested a few ways to manage this. One would be to say that this is an issue for the application and does not need to be addressed in the specification; the specification is going to be implemented within a system providing a lot of context and functionality.

Valerie then addressed some of the comments made on the survey. The first comment related to confirmation of data transmission. There are going to be standards related to transmission of data that are separate from what we develop in this group. Valerie compared it to a letter. The letter has content, i.e. the greeting and body of the letter, but it also has an envelope designed to work with a postal system that gets the letter from Baltimore to San Francisco. We're focusing on the structure of the letter, i.e. the content, not the envelope and how it's going to be transmitted.

Sharon asked for clarification about the possibility that the data may not be complete. Alan commented that in some applications all the data may be needed, while in others not all the data is required. Bob agreed this was a good approach: rather than having multiple specifications for different purposes, we could have constraints placed at the application level. Linda agreed that the application should display the data it requires. Kim shared that she likes this approach and that it is consistent with the way the technical specifications for educational trajectory were approached. Alan asked if anybody was opposed to this idea; no one responded.

3. Prioritize use cases and potential reviewers

Alan continued with a discussion of external reviewers and use cases. The second survey question was about the kinds of external reviewers of educational achievement data. He mentioned that the priority given to use cases will influence the degree of attention they receive in the specification; knowing which reviewers are most important would likewise affect how the specification gets built.

Carol commented that this was a learner-driven initiative. She recommended starting with the learner reviewing data first and then flowing in order from learner to external person to, finally, the administrator and researcher (use cases 1, 7, 6, 5, 2, 4, 3): the learner reviews and then selects data for other reviewers, and finally the mentor reviews; the external person reviews, then the administrator looks at trends, and finally the researcher. Simon asked how we are deciding the priority and noted that anything that logically supports the other things should be included. Linda agreed with Carol but was not sure the order of the list made sense; we need to get some of the things having to do with licensure up and running, so maybe those should have a higher priority. Alan commented that the granularity of data a learner is interested in is likely different than the kind of data a certifying body wants to look at.

Valerie commented that we could always use high, medium, and low priority as opposed to a strict order. Bob questioned whether we need to prioritize the use cases at all to proceed. Valerie commented that in general we like to prioritize use cases to make sure we are solving the most important problems; one use case may require different data than another. In this example, as Alan said, the granularity of data the learner is interested in reviewing may be different than the granularity of the data the external reviewer is interested in. If the certifying board is a lower priority than the learner, we target data for the learner in version 1. Bob commented that use cases 4 and 3 (an administrator examines educational achievement data and a researcher examines educational achievement data) are powerful uses of the specification; he suggested a parallel pathway, where the learner- and research-centric use cases are pursued simultaneously.

Simon remarked that 6 and 7 (the learner preauthorizes a reviewer or mentor, and the learner selects a subset of data for presentation) have a lot of overlap; he recommended merging the two. He was not sure about the necessity of 6 and 7 for this particular specification. Valerie commented that 6 and 7 are more about the system itself; she added that it can be difficult to divorce the specification's uses from the system-specific parts because the system is so necessary to provide contextual meaning, i.e. why do you want to exchange the data?

Bob noted there are key pieces that we don't want to leave out, like the administrator and researcher use cases. Currently we have no way to pull together a longitudinal data set for educational research; that's a key use of the standard.

Patty questioned how those providing funding for the development of a portfolio system would influence the prioritization of the use cases. Valerie stated that funding can certainly factor into prioritization. She added that we do want to hear what this group of experts has to say, because funder priorities are an important factor but not the only one. In keeping with our open and democratic process, we consider a multitude of perspectives. Alan commented that everything is important, but the learner-centric use case is a higher priority. Valerie mentioned that the consensus on learner-centric things being a high priority is a significant decision. Simon shared that the key thing about a specification is that it enables a number of different uses of the same data, and the more it does that, the more mileage we will get out of it. We don't want to miss corresponding buy-in from others.

Bob mentioned that the secondary uses of the data are important for stakeholder buy-in. He added that he was happy with the learner-centric approach as long as the educational research side is not left out.

Sharon asked for further clarification. Alan explained that prioritization might make a difference: if the specification were purely learner-centric, collecting local educational achievements that are of interest to the mentor, the researcher would look at the data and say there was not enough information to use as an outcome measure. If the specification had not anticipated that, there would be no place to hang the pieces of information the researcher would care about. That is an argument for doing them in parallel if we agree they are both important use cases. Alan would like to see all the use cases as a high priority. Sharon agreed.

Bob asked Valerie if she could help or suggest how that should be done. Valerie said that if everything is a high priority, then that decision is fine; we just need to make sure we have all of these groups represented and get input from researchers, administrators, and a variety of external reviewers. She noted that the external reviewer list is pretty long.

Carol commented that Valerie's example made her think that this is more of an iterative process: we should think about how each use case links to the others and use that to inform the one being worked on. Alan confirmed the suggestion that everything is important and encouraged the group to look at the data through each of these seven lenses.

Simon suggested two more potential use cases: a) using the student record system to compile educational achievement data off a transcript, and b) the employer getting a quick overview of the employee's achievement. Alan commented that b is the same as use case 2 (an external reviewer reviews educational achievement data) and the first may be out of scope. Simon mentioned that the Higher Education Achievement Report is envisaged as a superset of the Europass diploma supplement, so a would be important in Europe. Linda inquired whether there are specifications for transcripts. Valerie mentioned that there are transcript standards, but we're hoping to accomplish something quite different with this standard. Simon commented on the need to build a bridge in some way between the two types of use. Valerie mentioned we could build with an eye toward harmonization. Simon agreed. Alan agreed we should be able to transform from one to the other so that student record systems could pull in educational achievement data when needed, and questioned whether there would be a new use case for that. Valerie commented it could be a use case or a design principle. Linda responded that there may be other things we need to harmonize with as well; she recommended principles of design and keeping a list, because we want to be able to interact with many different systems. Valerie suggested documenting the design principles on the wiki, including exam systems, assessment delivery, and other things the group refers to from time to time, to make sure they are harmonized.

Alan asked Valerie to summarize the points of consensus. Valerie summarized that all of the use cases are of high importance, with the highest importance given to use case 1. The first version of the specification must support all the use cases. Kim asked how that would flow operationally. She commented that in the past the group assigned "champions" for different perspectives, like Alan assuming the role of researcher, etc. Linda liked the idea of looking at it through different lenses. The group agreed that was a viable strategy.

Alan suggested prioritizing the list of external reviewers. The question again is whether all of these are equally important or whether we need to pick a small set of core ones and champion the others. He added that the reviewers come in a couple of basic categories: some make a pass or fail decision about someone (do you get certified or licensed?), while others are engaged in a more formal educational process (what do we know about your achievements?). Then there are a few accrediting bodies interested in what the performance of individuals says about their program. It may be helpful to take this list and group the reviewers into categories according to their task and data. Kim thought that sounded like a good next step, creating clusters to make a more addressable list. Valerie will take that as an action item. Alan commented that discussion will continue on the next call.

4. Discuss level of detail required by the use cases

5. Open discussion

Decisions

  • The minutes of the previous meeting were approved as submitted.
  • Requirements for data completeness will be handled at the application level rather than in the specification.
  • All of the use cases are of high importance, with the highest importance given to use case 1; the first version of the specification must support all of the use cases.

Action Items

  • Valerie will group the list of external reviewers into categories according to task and data; discussion will continue on the next call.
  • Valerie will document design principles, including systems to harmonize with, on the wiki.
