Students and Universities - Innovation, Universities, Science and Skills Committee


Memorandum 102

Supplementary submission from Mantz Yorke[375]

  1.  This Memorandum is a personal response to the outcomes of the meetings of the Committee that have so far taken place. It addresses some points which the Committee might wish to consider when preparing its report.

  2.  Comparability of standards. Discussion of the comparability of academic standards sometimes fails to differentiate between the standards set by institutions for students ("aspirational standards") and those actually achieved by students ("achieved standards"). The evidence that I have presented to the Committee shows a broad correlation between achieved standards and institutional grouping: there is, for example, a generally higher percentage of "good honours degrees" in Russell Group universities than in post-92 universities. Reports from HEFCE indicate that entry qualifications are positively correlated with honours degree classifications,[376] so a gradation of achieved standards by institutional type is to be expected. My evidence provides a partial answer to the questions that the Committee asked of vice-chancellors, and offers them a line of defence against any accusation that they are unable to comment on the relativities of achieved standards. However, the line is not conclusive, since much depends on what institutions expect of their students. My analyses say nothing about aspirational standards, which statistics cannot address. A judgement about the extent to which aspirational standards in University X align with those in University Y would require detailed study of curriculum content, teaching approach and assessment methodology.

  3.  Programme validation. Validation typically involves external assessors in order to ensure that aspirational standards are broadly equivalent to those of cognate programmes elsewhere. The QAA subject benchmarks provide a point of reference against which programme proposals can be appraised, but they are inevitably (and, in my view, desirably) open to interpretation. Institutions differ in what programmes in similar subject areas intend their students to achieve: University P might give its programmes a distinctly academic slant, whereas University Q's might have a more practical colouring. From a labour market perspective, such heterogeneity has advantages, since graduates fill a variety of employment roles.

  4.  External examiners. External examiners can comment on the programme in operation and have a particular responsibility for addressing the standards achieved by students. However, as programmes have become modularised, the role of the external examiner has shifted from that of commentator on achieved standards (by looking at assessment tasks and a sample of student work) towards that of commentator on the assessment processes and procedures adopted by institutions (ie on matters of quality assurance). Modular schemes are often too complex for external examiners to engage in detail with individual modules. The strain on the external examiner system was noted more than a decade ago in the Silver report.[377]

  5.  Curricular change. Expectations regarding undergraduate education have changed over the years. Graduate employability has become a policy objective, and this has led institutions to incorporate various aspects of employability (eg "soft skills") into their programmes. Student retention and completion have also become important in policy terms: with the bulk of non-continuation occurring in the first year of full-time study, attention has increasingly been focused on the first-year experience. Both policy initiatives, in their different ways, have influenced institutional assessment practices. The assessment of employability-related achievements is more complex than is generally appreciated, and is not best served by fine differentiation in grading practice. Some institutions offer, in respect of employability-related achievements, an award in parallel with the honours degree (which is focused on academic achievement).

  6.  Retention and completion. Some students, and particularly those from disadvantaged backgrounds, may take longer than a term or a semester to come to terms with the demands of study in higher education. As a consequence, there has been a shift back from semester-length to year-long modules. This should allow institutions more opportunity to provide formative feedback on student work. An issue that appears to have been given little attention in research is the impact of funding council policy (which can be construed as a mild variant of outcomes-based funding) on institutional policy and practice regarding student progression and retention.

  7.  Student engagement. Student engagement has become a focus of attention in US higher education during the present decade, notably through the development and implementation of the National Survey of Student Engagement (NSSE). The significance of the concept is being picked up elsewhere: Australia is developing its own version of the NSSE. At the heart of this work is a concern for students' commitment to learning. In the UK, the perceived importance of obtaining an upper second class honours degree implicitly presses students towards "getting the grade" rather than towards learning.

  8.  Research into higher education. The evidence submitted to the Committee's inquiry has pointed, with varying degrees of explicitness, to gaps in research bearing on the student experience. Examples are: the variation in assessment regulations across the sector; the reasons for trends in honours degree classifications; the comparability of standards; the part-time student experience; and the relationship between funding policy and institutional action relating to student progression and continuation. Research into higher education remains something of a Cinderella in terms of status, even though its position in the Research Assessment Exercise has improved somewhat over the years. Much of it is relatively small-scale and fragmented: more might be gained from a more strategic approach to issues of broad relevance to the sector.

April 2009






375   Visiting Professor, Lancaster University.

376   See HEFCE (2003) Schooling effects on higher education achievement, at www.hefce.ac.uk/pubs/hefce/2003/03_32.htm, and HEFCE (2005) Schooling effects on higher education achievement: further analysis - entry at 19, at www.hefce.ac.uk/pubs/hefce/2005/05_09/

377   See Silver, H. et al. (1995) The external examiner system: possible futures. Report of the project commissioned by the Higher Education Quality Council. London: Quality Support Centre, The Open University.


 
