27 October 2010
Location: Woburn House, London

This event is now fully booked

Summary

This workshop will enable participants to use a methodology for improving assessment at programme level that is proving highly effective in a number of universities. It will provide:

  • a discussion of the assessment problems the methodology addresses
  • a brief summary of the conceptual and empirical basis of the methodology
  • copies of the research tools involved and a briefing on their use and interpretation
  • case study data from several degree programmes for participants to practise interpretation
  • examples of patterns of assessment, their consequences for student learning, and interpretations of the links between these patterns and students’ learning responses
  • examples of the practical steps programme teams are taking to change assessment to address the issues identified through this methodology
  • examples of changes in quality assurance at institutional level necessary to allow and lever appropriate changes in assessment.

Background

The worst National Student Survey scale scores are for assessment and feedback. Variations between institutions on these questions are so large that they, on their own, determine much of an institution's national ranking for teaching. Many projects and institutional initiatives have therefore focussed on trying to improve assessment and feedback (e.g. FLAP at Leeds Metropolitan). Most of these initiatives have concentrated on assessment tactics that individual teachers should implement within individual assignments or individual modules. However, I will argue that it is this micro-level tactical focus on assessment that has got us into this mess in the first place.

Research funded by the HEA (Dunbar-Goddet and Gibbs, in press; Gibbs and Dunbar-Goddet, 2007; 2009) has shown that there are extraordinarily wide programme-level differences in assessment environments, substantial differences in how students respond to these environments, and close and consistent relationships between particular features of these programme-level environments and students’ learning responses and perceptions, independently of the details of assessment within modules. For example, the volume of feedback students receive does not even predict whether students think they receive enough feedback, unless the way assessment operates across the programme is taken into account. It is programme-level features that explain why students respond to their assessment in the way that they do, both in their learning behaviour and in their questionnaire responses. These overall responses can be very negative even when individual teachers are highly skilled and committed in their assessment and feedback within individual modules. The proper focus of attempts to improve the learning environment that assessment creates is therefore the programme, rather than the module or assignment.

A current NTFS project (TESTA) is taking this programme-level approach to changing assessment. It is using the conceptual framework (Gibbs and Simpson, 2004) and research tools (Dunbar-Goddet and Gibbs, ibid.; Gibbs and Dunbar-Goddet, ibid.) developed in the earlier research to undertake action research in collaboration with programme teams, collecting data about the overall assessment environment (rather than about assessment methods) using three tools:

  • an audit methodology that provides an illuminating summary of the assessment environment
  • a thoroughly developed questionnaire that measures students’ learning response: the Assessment Experience Questionnaire (AEQ)
  • a list of questions for discussion within focus groups of third year students, to uncover and explain the relationships between the audit data and the AEQ scores.

Programme teams in four universities are currently finding this triangulation of data enormously helpful in diagnosing the causes of assessment problems. All teams in the project are implementing programme-level changes, guided by their triangulated data, having selected and adopted research-informed global assessment strategies rather than only local tactics. The impact of these changes will be measured after one year of the new assessment environments being in place, by repeating the audit, re-administering the AEQ and convening further focus groups.

A number of programme-level features of assessment environments are the direct result of rules in the institutional quality assurance (QA) regime introduced to safeguard standards (such as delaying feedback until examiners have confirmed marks), and such regimes also place obstacles in the way of some appropriate changes to assessment (for example forbidding required assignments that are not marked, or forbidding feedback on drafts of assignments). The TESTA project has therefore also convened meetings with the PVCs (Teaching) in each institution, both separately and together, to discuss the problems and issues thrown up by the data-gathering phase of the project. These meetings have identified potential changes to QA that could not only allow appropriate changes to assessment (including fast-tracking approval for change), but also lever wider change across all programmes in the future (for example by requiring course documentation to focus on specified programme-level assessment issues). The workshop will outline the kinds of QA issues identified so far and discuss how to get senior management on board in supporting appropriate changes to assessment.

Participants will be provided with a manual to support use of the methodology as part of their curriculum-focussed educational development work in their own institution.

Workshop Outline

Introduction 1: The problem. A short presentation will summarise what has been changing in patterns of assessment over recent decades and participants will identify the extent to which this summary characterises their own institutional context and how their own institution has responded to NSS scores for assessment.

Introduction 2: The theoretical and empirical background to the methodology: a short presentation and questions.

The research tools: participants will examine data from past assessment audits, complete and score the AEQ, and read and discuss focus group transcripts, in order to familiarise themselves with the research tools.

Interpretation of data: participants will examine a summary of data from one programme in order to practise triangulating the three kinds of data in arriving at a diagnosis of problems and their causes. This will be followed by a short presentation illustrating other patterns that have been found in different degree programmes.

From diagnosis to action: participants will be provided with a diagnosis of an assessment problem and invited to suggest what assessment changes would be appropriate at programme level. Examples of programme-level strategies will be discussed.

The Quality Assurance environment: examples will be provided of ways in which quality assurance hinders, and less frequently helps, the improvement of assessment, and participants will discuss their own QA regulatory framework and what needs to change.

Evaluation of impact: a very short presentation will outline how the impact of changes can be measured, using the same methodology.

Follow-up consultancy: all participants will be offered email support if they implement any component of the methodology. Up to five participants will be offered a day’s consultancy support, including an on-site visit, to support implementation of the methodology.

References

Dunbar-Goddet, H. and Gibbs, G. (in press) A research tool for evaluating the effects of programme assessment environments on student learning: the Assessment Experience Questionnaire (AEQ). Assessment and Evaluation in Higher Education.

Gibbs, G. and Dunbar-Goddet, H. (2007) The effects of programme assessment environments on student learning. Oxford Learning Institute, University of Oxford. Available at: http://www.heacademy.ac.uk/assets/York/documents/ourwork/research/gibbs_0506.pdf

Gibbs, G. and Dunbar-Goddet, H. (2009) Characterising programme-level assessment environments. Assessment and Evaluation in Higher Education.

Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1.

Workshop facilitator

Graham Gibbs is retired and is currently an Honorary Professor at the University of Winchester, working on the TESTA project: Transforming the Experience of Students Through Assessment. He was previously Professor and Director of the Oxford Learning Institute at the University of Oxford, and Professor and head of teaching development units at Oxford Brookes University and the Open University. He is the founder of the Improving Student Learning Symposium and the International Consortium for Educational Development in Higher Education. His recent research and development work has focussed on evidence- and theory-based changes to assessment environments.