5 STANDARDS AND QUALITY
Introduction
197. Standards and quality in relation to higher education are widely used but often elusive and misunderstood terms. We have defined academic standards
as predetermined and explicit levels of achievement which must
be reached for a student to be granted a qualification. We have
used academic quality as a way of describing the effectiveness
of everything that is done or provided ("the learning opportunities")
by individual institutions, to ensure that the students have the
best possible opportunity to meet the stated outcomes of their
programmes and the academic standards of the awards they are seeking.[365]
STANDARDS
198. The question of standards ran through much of
the written and oral evidence we received. There were two key
issues:
- whether the standards required to achieve a particular class of degree have fallen over the past 20 years; and
- whether the current arrangements for measuring
and safeguarding standards are adequate, in both individual institutions
and across the sector as a whole.
To our surprise, though the debate about standards
could be fierce, much of the evidence was partial and incomplete,
even anecdotal.
199. Our consideration of standards focussed on whether
standards for first degrees had remained equivalent over time,
and also whether outcomes of degree programmes from different
universities were equivalent. In this context, we also considered
the question of degree classification, both whether the current
arrangements should be replaced and also, again, whether the classifications
awarded in different universities were equivalent.
200. Universities UK explained to us that universities
themselves "have the responsibility for maintaining the standards
of their awards and the quality of the learning opportunities
which support students to achieve against those standards".[366]
Universities UK added that universities work "hard, both
collectively and individually, to fulfil those responsibilities"[367]
and that all "universities have systems in place to ensure
that new courses meet the right standards, and that courses are
regularly reviewed, by looking at evidence from students, graduates,
employers and external examiners".[368]
201. When pressed on standards, Universities UK told
us that universities had a "really strong stake in maintaining
our standards, our good processes, and our reputation for having
them". It stated that what mattered "to a significant
degree" was "in terms of our ability to retain interest
from students applying from around the world, but it is also a
crucial bit of our responsibility to our home students".[369]
Professor Arthur, Vice-Chancellor of Leeds University, echoed
this point in oral evidence: "there is no wholesale problem
with the standards in British Higher Education […] [b]ecause
we have an internationally successful highly competitive higher
education system that is the envy of the world that other people
are copying and multiple international students wish to come here."[370]
We have not examined the position of international students, but we are uneasy that part of the sector appears to treat the attraction of English universities to international students as evidence that standards are being maintained and are largely unproblematic. While we consider it likely that standards
and quality are part of the attraction of the higher education
sector in England to international students, other factors, such
as the vigorous marketing undertaken by universities, and the
fact that England is an Anglophone country, together with the
relative current weakness of sterling, may also have an effect.
We conclude that it is simplistic and unsatisfactory for higher
education institutions to be seen to rely on the fact that international
students continue to apply as evidence that standards are being
maintained. It is absurd and disreputable to justify academic
standards with a market mechanism.
VIEWS OF EMPLOYERS
202. We begin this chapter with the views of employers
on standards and quality. The employers' representatives, from
whom we took oral evidence, were complimentary about the higher
education sector, though some of those who gave written evidence
had concerns: for example, the Institution of Engineering
and Technology said that "a sizeable proportion of today's
students appear to have problems of poor motivation and a less
than ideal approach to learning".[371]
But Mike Harris from the Institute of Directors (IoD) said that
his members were "genuinely upbeat about the quality of education
delivered by universities".[372]
Where those who gave oral evidence did, however, perceive problems
was "right throughout the education system, beginning in
schools and also in further education colleges, so that when you
get your ultimate employee there are particular skills weaknesses".[373]
Mr Harris explained that his members were looking for "employability"
in graduates.[374]
He defined this as a:
mixture of basic skills, personal qualities,
good attitude, genuine employment skills, meeting deadlines, being
reliable, and personal qualities. That really means, aside from
the technical skills and the academic knowledge […] it is
getting on with people, it is being flexible and it is being reliable.
That is what we have found to be valued above all other things
when our members are recruiting graduates. It is that emphasis
on employability and fitting into the workplace. The technical
skills and the technical knowledge acquired through a degree have
a much lower profile when they are recruiting. In terms of the
message for what to do, I would focus on work experience, getting
greater exposure to the workplace, even bringing your professional
skills to bear in a work setting. That is what employers are using
to distinguish between some very able candidates.[375]
203. We have not in this inquiry examined the effectiveness
of the curriculum on offer in higher education over the longer
term. For the most part the students who gave evidence were, by
our invitation, undergraduates currently at university with no
previous experience of higher education, although a few mature
students had previously taken degrees. While employers were broadly
content with the operation of the system for the immediate future
this may not hold for the longer term. The question of whether
higher education offers graduates a suitable preparation both
lifelong and lifewide in a changing world (see paragraph 7) is
another matter, which our successor committee with responsibility
for scrutinising higher education may wish to examine.
COMPARABILITY: VIEWS OF STUDENTS
204. The views of students contrasted with those
of employers. Their prevalent view[376]
was that the current degree classification did not provide a satisfactory
method of measuring the work done or a satisfactory basis for
comparing degrees between universities, and even between subjects
at the same university. Sally Tye, a student, drew attention to
the contrast with school, where "you are measured against
your peer group [where] A-levels are across the board" and
said that "it seems strange that universities are on a different
measurement and I think for your own personal sense of how well
you are doing."[377]
Victoria Edwards, a mature student, made the point that many with
family responsibilities did not have the option of applying to
a prestigious university:
anybody in my situation, if they are living in
Newcastle or Stockport or wherever it is, and they have got their
family there and children in schools there, you do not have a
choice about which university you apply to, so you need to know
that your 2:1 from that university is going to be exactly the
same as far as employers are concerned. There are lots of reasons
why people choose their university and sometimes you do not have
a choice. If I had not got a place at Oxford Brookes I would not
have gone on the course because there is nowhere else I could
have commuted to.[378]
For several there were concerns that employers had
preconceptions that favoured degrees from certain universities.
Ricky Chotai, studying business management at Salford, explained
that his "degree isn't just as worthy as a business management
degree from the University of Manchester. Employers […] immediately pick up on that and if I managed to get a first class [or] 2:1
against one of those students I think my application would be
further down the list."[379]
205. In the e-consultation several students commented
on whether an upper second honours degree in two different subjects
within the same university could be of different value. One pointed
out that it was possible to have a well respected department within
a poorly performing university and that in media coverage of the
league tables caveats were rarely added that certain departments
were outstanding. Another took a different view: "based on
Cambridge, [honours] degrees classes was roughly equivalent within
an institution. The range of marks varied between subjects, but
the proportion of students getting a 2:i wasn't […] hugely
different. Given that the entry criteria were also broadly similar
for each course, the degrees are probably roughly equivalent in
value".[380] We
examine comparability of standards at paragraph 250 and following.
Quality Assurance Agency
206. The cornerstone of what is in effect a system
of self-regulation by individual institutions is the Quality Assurance
Agency (QAA). The QAA, established in 1997, is a charity and a
company limited by guarantee, governed by a Board and managed
by an Executive Committee. It is funded through subscriptions
from higher education institutions and through contracts with
the major funding councils, to which it reports annually on its activities.[381] The QAA employs 125 members of staff and uses over 550 reviewers (drawn predominantly from working academic practitioners in higher education institutions) to undertake audits. It has an annual turnover of
£11 million.[382]
207. Universities UK explained that the QAA conducted
regular visits to universities to scrutinise quality and that
QAA reports were publicly available and included judgements about
the confidence that could be placed in universities' management
of quality and standards.[383]
But the QAA is not the higher education equivalent of Ofsted (the
Office for Standards in Education, Children's Services and Skills),
which inspects education for those under 19 and the further education
sector.[384] The QAA's
purpose, in its own words, is "to safeguard the public interest
in sound standards of higher education qualifications and to inform
and encourage continuous improvement in the management of the
quality of higher education".[385]
The QAA pointed out in its written evidence to the Committee that:
The primary responsibility for academic standards
and quality rests with individual institutions. QAA reviews and
reports on how well they meet those responsibilities, identifies
good practice and makes recommendations for improvement.
We visit institutions to conduct our audits,
make judgements and publish reports, but we are not an inspectorate
or a regulator and do not have statutory powers. We aim to ensure
that institutions have effective processes in place to secure
their academic standards, but we do not judge the standards themselves.[386]
One main aspect of the QAA's work is institutional reviews: audits of the academic performance of institutions. We noted that the QAA used to carry out reviews
of individual subjects but that it has discontinued this process.
208. We agree with the QAA that it is in the public
interest that there are sound standards of higher education qualifications.
The public purse supports higher education to the tune of £15
billion and it is essential that those studying at higher education
institutions are awarded degrees that measure accurately and consistently
the intellectual development and skills that students have achieved.
We consider that it is essential that a body concerns itself with
assuring the comparability of standards both between institutions
and over time.
THE OPERATION OF THE QUALITY ASSURANCE
AGENCY
209. The heads of higher education institutions that
had been subject to QAA audits stressed the strength of the QAA's
processes. Professor Trainor, President of Universities UK and
Principal of King's College London, said that any institution
coming up to a periodic institutional audit by the QAA (and his was then preparing for one in January 2009) did "not
think that the QAA lacks teeth".[387]
He saw the QAA as "having a great deal of independence"
and a body that was "above any ability of an individual institution
to influence what is going on".[388]
He saw the QAA in combination with each higher education institution
as policing "standards and processes in UK higher education".[389]
Representatives from the higher education sector also made the point that the current arrangements, as well as safeguarding standards, also led to improvement. Professor Driscoll, Vice-Chancellor of
Middlesex University, had been subject to a recent institutional
audit by the QAA and said that "enhancement was very much
part of their review".[390]
We are not surprised that those from institutions that the QAA has on the whole found to be working well have commended the QAA.
210. The operation of the QAA, however, also came
in for criticism. Professor Geoffrey Alderman considered that:
the QAA […] should be refocused to concentrate
squarely on standards. At the moment it concentrates on process.
It is possible to come out of the QAA with a glowing report but
in fact have poor standards.[391]
Others submitting written evidence echoed Professor
Alderman's criticism:
HEFCE/QAA etc. concern themselves merely with
the written documentation of the courses […] Each department
or faculty assesses the "quality" of its own course,
but this assessment is usually merely an examination of the course
documentation. There is no genuine external scrutiny. This self-regulation
is remarkably similar to that performed by the Financial Services
Authority, and we are all now aware of the ineffectiveness of
this type of "regulation".[392]
[T]he QAA thinks in terms of "course delivery"
and "course providers" rather than disciplines and teachers.
Its notion of how to square academic freedom with quality assurance
is to avoid making any judgment about the content of courses (which allows Oxford to teach theology and Westminster complementary medicine) but to insist on a particular form of bureaucratic
packaging; this means that a higher value is put on it being absolutely
clear and predictable what a student will be told than is put
on waking up their minds and seeing how far they can go if they
are stretched.[393]
In oral evidence Dr Fenton, an academic, said that
in her experience the QAA was "another bureaucratic, administrative
burden that you learn to play the game of" and that "You
do it very well, you show the processes are there, but it does
not actually command the respect of the academics delivering the
teaching on the ground".[394]
211. A number of those submitting written evidence
made the point that before the 1992 reorganisation of higher education,
there had been a body, the Council for National Academic Awards
(CNAA),[395] which
was "a version of Ofqual for universities".[396]
Professor Ryan, an academic, explained:
The non-old-fashioned sector gave CNAA-validated
degrees and nobody in the CNAA believed that there was anything
very clever to be said about whether a CNAA degree in history
was more or less demanding than a CNAA degree in sociology or
whatever. What was true was that you could not put on a degree
course without getting it past the CNAA, it did look at the syllabuses,
it looked at your teaching resources and the external examiners
came from the CNAA and what they had going for them was they would
have been deeply humiliated to validate and approve of courses
that other people later thought were not up to scratch. It is
not so to speak, therefore, an impossible state of affairs; to
my mind the CNAA was much more like the right animal than the
QAA.[397]
212. We put the criticisms about the focus on process
to Peter Williams, Chief Executive of the QAA, who replied that
process and outcomes were "very strongly linked".[398]
He pointed out that "because teachers plan their teaching,
then students will learn. Because students are guided in their
learning, they will learn. It is that careful, systematic approach
which is important and it is even more important given the size
of the system."[399]
Following media coverage and with our encouragement, Mr Williams
explained that the QAA had carried out an analysis of the critical
media stories relating to standards in the last year and he was
"coming to the conclusion that there are some areas where
there is probably something which requires more systematic investigation
than we have been able to give it so far".[400]
He added that, as far as the cases investigated under the "causes for concern" process were concerned, the QAA had found the
vast majority were in the first instance either personal complaints
or grievances or, in the case of staff, post-dismissal or cases
where they had been to an employment tribunal; in other words
they were personal cases. He considered, however, that it was
"also fair to say that it is sometimes quite difficult to
discover whether the personal case is masking a systemic problem
or is just a one-off administrative failure" and that was
where we are needing to do more work on individual
cases, some of which remain open because we are not satisfied
that this thing is simply a personal grievance and we want to
come back and look at them, but we cannot do that while the cases
are open.[401]
213. Under the arrangements operated by the QAA,
"a cause for concern" is defined as "any policy,
procedure or action implemented, or omitted, by a higher or further
education institution in England, which appears likely to jeopardise
the institution's capacity to assure the academic standards and
quality of any of its HE programmes and/or awards".[402]
The power to declare a possible cause for concern is limited to
a group of named organisations, principally statutory, regulatory,
and some professional bodies.[403]
Any response by the QAA to a request from one of those organisations
to investigate an apparent difficulty is "phased and proportionate",
beginning with an informal enquiry and only progressing to a full
investigation where this is considered to be necessary in the
light of evidence gathered.[404]
214. Following Mr Williams' oral evidence, the QAA
supplied a supplementary memorandum[405]
providing additional information. It pointed out that:
Since 2002 we have interviewed more than 10,000
students and a similar number of staff in [higher education institutions],
to discover whether their institutions' views of themselves and
the way they assure their own standards stand up to scrutiny.
This contrasts markedly with the handful of individual complainants
who have written to us since last summer and with the equally
small number who have responded to your Committee's invitation
to make submissions. Every audit has led to both commendations
for good practice and recommendations for action, categorised
as being either 'essential', 'advisable', or 'desirable', and
these are almost invariably accepted and acted upon.[406]
In supporting material provided "by way of illustration
of our effectiveness" the QAA described "the specific
responses and actions from those institutions that received judgements
of 'no confidence' and 'limited confidence' in their institutional
audits between 2003 and 2007."[407]
215. In April 2009 the QAA published its final report
on its "thematic enquiries into concerns about academic quality
and standards in higher education".[408]
The report comprised a commentary on the five areas of interest
identified from articles and comments made in the media over the
summer of 2008:
- student workload and contact
hours;
- language requirements for the acceptance of international
students;
- recruitment and admission practices for international
students;
- use of external examiners; and
- assessment practices, including institutions'
arrangements for setting the academic standards of their awards.[409]
216. We have noted the QAA's recommendations on each
of these five areas. This Report is not the vehicle to examine
them in detail but we consider that they are useful. For example, the recommendation on student workload and contact hours, that "provision by institutions of readily available and clear information about the nature and amount of contact students may expect with staff in respect of individual study programmes, and the expectations that the institutions have of students as independent learners" was required,[410] chimes with many of the conclusions in this Report, though in some cases
we would go further than the QAA.[411]
In our view, it is a matter of some regret, and a symptom of complacency, that it appears to have been only after pressure from outside the higher education sector, that is, from the media, ministers and us, that the QAA used the "cause for concern" process to examine more generally institutions' capacity to assure
the academic standards and quality of their higher education programmes
and awards. We consider that the QAA needs to make up for lost
time and develop its expertise in this area. In addition, we consider
that the Government and higher education institutions must find
the resources to support this endeavour.
217. We also raised the role and operation of the
QAA with John Denham. While he considered that the "work
of QAA in general shows that we do not have a systemic problem
with quality and standards in the system", he identified
three areas which "we need to look at"[412]
and which he had discussed with both HEFCE and the QAA:
The first is that the system is not very good
at closing down those sorts of issues, stories and allegations
that were brought before [the] Committee. We are not good enough
at getting in with the individual institutions and actually having
an outcome where we can say we managed to sort it out.
The second thing is that it is not clear enough
that essentially one body (I think it should be the QAA) has
the lead responsibility for communicating to the public both here
and indeed internationally the real story about the quality of
higher education. I think QAA essentially services the higher
education sector; the information is there but there is no obvious
responsibility on anybody for communicating that effectively and
for recognising how damaging it can be if an allegation, albeit a completely unsubstantiated allegation, is allowed to run
for ages.
The third thing is that there are some persistent
issues that come up from time to time, external examiners being
one, where I think it is useful to have a body that looks at that
and says (as I think the QAA will do), "This is pretty much
okay, but here are some ways that people could do it better; here's
some good practice to handle it better". I think if the QAA
were better able to make sure that the allegations that are made
are sorted out, that they had a clearer responsibility for communicating
quality and standards issues for the broader public and […]
they do show proactively that if there are certain types of issues
that keep coming up they have a look at them, then we could move
forward.[413]
218. Although we found the former Secretary of State's
response constructive, we would wish to question the role he appeared
to envisage for the QAA. While accepting that the QAA had a role
in investigating and safeguarding quality in both the sector and
in individual institutions, he appeared to see it also as having
some of the characteristics of a public relations body charged
with improving communications with the public in this country
and abroad and in closing down stories. In our view a body
with responsibilities for standards which has as its primary function
promoting UK higher education would be misconceived and likely
to undermine faith in the quality of higher education.
219. We accept that quality needs to be underpinned
with sound processes and, indeed, also the converse that deficient
or chaotic processes will undermine quality. But we do not accept
that sound processes necessarily denote high quality. That is
the trap that many bureaucracies and those that run them fall
into. That said, in response to concerns which DIUS, HEFCE, the
media and we raised, we have found that the QAA has shown itself
willing to, and capable of, investigating standards and concerns
about quality in higher education. We consider that in not
judging "the standards themselves", the QAA is taking
an unduly limited view of its potential role.
220. In our view the most effective way to safeguard
standards and serve the public interest is to make the body responsible
for supervising and reporting on standards more independent both
from government and from the higher education institutions that
currently subscribe to it. If we were designing a new system we
would not recommend the current arrangements with the QAA reporting
on processes and leaving standards to individual higher education
institutions. In our view, there is a justifiable case for recommending
the abolition of the QAA and starting afresh with a new body.
We are, however, concerned that the inevitable hiatus, disruption
and costs caused by the abolition of the QAA and the establishment
of a new body would not serve the best interests of students,
universities and the taxpayer. We have concluded that, on balance,
the QAA, rather than be abolished, should be reformed and re-established
as a Quality and Standards Agency, possibly by Royal Charter (which was the arrangement used to set up the former Council for National Academic Awards), with the responsibility for maintaining
consistent, national standards in higher education institutions
in England and for monitoring and reporting on standards. We also
recommend that the remit of the new body include, if necessary on the basis of statute, a duty to safeguard, and report
on, standards in higher education in England. It should also report
annually on standards to Parliament. We further recommend that,
to ensure its independence, the funding of the Agency's activities
in England be provided through a mechanism requiring half its
funding to be provided by the Higher Education Funding Council
for England and half from levies on higher education institutions
in England. In making these recommendations we are looking to see a fundamental change in the operation of the QAA; if this cannot be achieved within two years, the QAA/Quality and Standards Agency should be abolished and an entirely new organisation established in its place.
Variations in demands made of
students
221. Drawing on reports published by the Higher Education
Policy Institute (HEPI)[414]
we pointed out on several occasions to Vice-Chancellors that it
appeared that the study time (which includes lectures, tutorials and private study) for students working towards degrees in
similar subjects varied significantly.[415]
We were disappointed by the responses. Professor Driscoll, Vice-Chancellor
of Middlesex, was forceful, though not untypical, in his response:
A couple of things to say about the HEPI studies.
The ones that were carried out in 2006-07 surveyed 15,000 students.
This latest update surveyed 2000 students; the report does not
even say how many responded. It is a woefully small sample and
I do not think that any statistician would stand by those results.
The other thing that disturbs me more seriously about the conclusions
of those HEPI reports is that they take one statistic (that is, formal contact hours) and extrapolate some extraordinary
statements about effort and the work that students do. I think
it is quite unreasonable. […] what is important is not just
the contact hours, it is the quality of those hours, and it is
everything else that goes into that. My institution (and I am sure this is true of most institutions across the sector) produces
course handbooks and in those course handbooks it describes the
contact, the nature of the contact, the number of assignments
they will have to do and the nature of the assessment, and it
provides all the other information around the reading lists.[416]
222. It is not our job to evaluate the work of HEPI
but we were concerned by the responses of the Vice-Chancellors,
not just Professor Driscoll, when pressed on the apparent disparity
in the levels of effort required in different universities to
obtain degrees in similar subjects. First, they raised methodological
questions. If the HEPI studies are as unreliable as some of the
leaders of the sector appear to contend, they should commission
and publish their own study but they have not sought to do so.
As Professor Brown, former Vice-Chancellor of Southampton Solent,
pointed out, the HEPI studies "were done because of no other
work being done".[417]
We consider that the fact that the higher education sector does
not appear to have assembled its own evidence undermines the Vice-Chancellors'
arguments. Second, the Vice-Chancellors' answers concentrated
on contact time between staff and students. Yet the HEPI studies
are explicit that they are examining total study time which is
broader than contact time.[418]
We conclude that it appears that different levels of effort
are required in different universities to obtain degrees in similar
subjects, which may suggest that different standards may be being
applied. Furthermore, the HEPI studies' consistent message is
that more research is necessary in this vital area of student
contact, and we conclude that those responsible for standards
in higher education (both institutions and the sector level bodies)
should ensure that such research is carried out.
223. Professor Brown also commented that the notion
that "British students who go to university seem to study
less intensively than continental students has been validated
by a number of independent surveys, so that aspect of the HEPI
survey […] is right".[419]
The Centre for Higher Education Research and Information (CHERI),
in commenting on the quality of teaching provision in UK higher
education, also noted the results of international comparisons
and suggested that, while the quality of teaching appeared to
be relatively high within UK universities, "the level of
demands made on learners and the achievements of those learners
may be relatively low".[420]
CHERI's general conclusions were that:
there is some evidence to suggest that the educational
experience of higher education students in the UK is in some respects
somewhat less than "world class" when compared with
its counterparts elsewhere in Europe. With the Bologna process
of harmonisation between different higher education systems, differences
may become increasingly visible. […] this may shatter some
myths and any complacency about the superiority of UK higher education.
[We] recommend to Government and HEFCE that further attention
be given to the growing amount of research evidence on the differences
(and similarities) between the higher education experiences provided
by different national systems.[421]
In April 2009, HEFCE published "Diversity in
the student learning experience and time devoted to study: a comparative
analysis of the UK and European evidence", a report to HEFCE
by CHERI. The study found that:
When looking at students' workload overall (i.e.
lectures, classes and all forms of study) two separate studies
[…] both found that students in the UK spent an average of
about 30 hours a week on studying, the least amount of time compared
to their counterparts in other European countries. […] The
results of these studies support the conclusions of the HEPI report
and add to the body of evidence that UK students commit fewer
hours to study than students in other European countries.[422]
We add that in our discussions with students during
our visit to the USA they claimed to spend up to 60 hours a week
studying, that is, in lectures, tutorials and private study.
224. The findings of CHERI and HEPI reports indicate
that students in England may spend considerably less time studying
than their counterparts in Europe or the USA. This is a potentially
serious finding in view of the fact that degrees in this country are also often shorter than those overseas, three years compared with four, even taking into account variations in student/staff ratios in classes and in methods of teaching compared with the UK. We
recommend that the Government investigate and establish whether
students in England spend significantly less time studying, which
includes lectures, contact time with academic staff and private
study, than their counterparts overseas and that, if this proves
to be the case, establish what effect this has on the standards
of degrees awarded by the higher education sector in England.
Assessment of teaching quality
225. When we took evidence at Liverpool Hope University,
the Vice-Chancellor, Professor Pillay, was of the view that there
had been an "over-emphasis […] on management of quality
rather than enhancing quality".[423]
Looking to the future he said that the QAA would have to consider:
whether we have the same rigour in our teaching
quality measurement as we have about research at the moment. Nothing
[…] is assessing teaching quality. Nothing is assessing yet
the quality of scholarship. I do not just mean research outputs,
because that is only part of what a university does. Something
is going missing but I think these are the challenges and questions
we raise for the future. […] I think more responsibility
must be given to the university to actually show why it maintains
and enhances quality, with the emphasis now on teaching quality
not just on research quality.[424]
226. We found Professor Pillay's analysis compelling.
We have indicated in the previous chapter that the higher education
sector needs to adopt a strategy to improve teaching and lecturer
training and development and we identified two elements: professional
development and universities themselves identifying and addressing
poor teaching. The third element in the strategy, we consider, has
to be supplied by an external body, a reformed QAA charged with
the responsibility of monitoring, and reporting on, teaching standards,
which, in our view, will act as a stimulus to improvement in teaching
standards. We conclude that the reformed QAA's new remit should
include the review of, and reporting on, the quality of teaching
in universities and, where shortcomings are identified, ensuring
that they are reported publicly and addressed by the institution
concerned. We also conclude that the QAA should develop its current
policy of giving greater attention to institutions' policies and
procedures in relation to improving quality and that the QAA should
produce more guidance and feedback based on its institutional
reviews.
Institutional accreditation
227. In order to be able to award a recognised higher
education degree in the UK, an organisation needs to be authorised
to do so either by Royal Charter or Act of Parliament. Section
76 of the Further and Higher Education Act 1992 empowers the Privy
Council to specify institutions of higher education as competent
to grant awards, in other words to grant them powers to award
their own degrees. In considering applications for such powers,
the Privy Council seeks advice from the appropriate territorial
minister with higher education responsibilities. In turn, the
minister seeks advice from the appropriate agency.[425]
In England these are DIUS (now BIS) and the QAA respectively.[426]
In advising on applications, the QAA is guided by criteria and
the associated evidence requirements. The QAA's work is overseen
by its Advisory Committee on Degree Awarding Powers, a sub-committee
of its Board.[427]
228. Professor Baker from GuildHE explained the QAA's
assessment of institutions seeking the power to award degrees:
My own institution went through the taught degree-awarding
powers assessment some three years ago, and it was a two-and-a-half-year
process. Believe me, it was not easy. So I think that the QAA
does have teeth; it does look very long and hard at institutions,
and their quality assurance processes in particular. It does not
give away the confidence vote or the taught degree-awarding powers
award lightly.[428]
229. Once granted, degree awarding powers are held
in perpetuity. We observe that, since the 12th century,
it has been the pattern that, once founded, a university was a
significant national asset which was expected to endure for centuries.
In the 21st century, however, we now have a diverse
higher education sector in England with 133 higher education institutions.
It is increasingly questionable whether we should adhere rigidly
to this medieval approach to the status of universities and we
see risks that it could breed complacency and become, for example,
a barrier to closing an institution that deserved to have its
powers removed. When, however, we suggested to Universities UK,
the 1994 Group, Million+ and the Russell Group that institutional
accreditation might be reviewed periodically, they unanimously
rejected the idea.[429]
Professor Trainor from Universities UK considered that a system
that reviewed accreditation did not have "any more teeth
than the [current] institutional audit system […] because
de facto, periodically, getting a good result from the
institutional audit is prerequisite for the university carrying
on with its reputation in good order".[430]
We were not convinced. When we were in the USA we were told that
all higher education institutions had their accreditation to award
degrees reviewed periodically. We cannot see why universities
in England need to be excluded from a review of powers to award
degrees, especially as the number, range and diversity of universities
increase and include a number of private, commercial providers.
It could be carried out as part of a broadened institutional review
by the reformed QAA, which examined not only process but also
the quality of courses and standards, and it would add discipline
to the process ifin admittedly extreme casesan institution's
degree awarding powers could be revoked or curtailed. We recommend
that all higher education institutions in England have their accreditation
to award degrees reviewed no less often than every 10 years by
the reformed QAA. Where the Agency concludes that all or some
of an institution's powers should be withdrawn, we recommend that
the Government draw up and put in place arrangements which would
allow accreditation to award degrees to be withdrawn or curtailed
by the Agency.
230. As we have explained we envisage that the review
of degree awarding powers could be part of the periodic institutional
review. There may, exceptionally, be a need to review these powers
in the period between institutional reviews. In our view, there
needs to be a trigger for an exceptional review. We recommend
that the reformed QAA have powers to carry out reviews of the
quality of, and standards applied in, the assessment arrangements
for an institution's courses, including, if necessary, its degree
awarding powers, in response to external examiners' or public
concerns about the standards in an institution or at the direction
of the Secretary of State.
Whistle-blowers
231. We received a small number of submissions from
academics alleging that their attempts to raise concerns about
standards in their institutions had been suppressed by their university
authorities.[431] As
the focus of our inquiry was the experience of students and because
a select committee is not generally an appropriate or effective
forum for the pursuit of what are individual circumstances, we
decided not to investigate each of these cases. We did wish to
establish, however, whether these cases were prima facie
evidence of a systematic failure within the higher education sector.
In its written evidence the University and College Union (UCU)
told us that it received "occasional reports from members
about pressure to admit or to pass students, or to approve new
programmes, against their academic judgement".[432]
UCU explained that institutions were also under pressure in the
"higher education marketplace" not to disclose concerns
about their own standards.[433]
An academic, Dr Dearden, said in a written submission that academic
standards had been compromised by, amongst other factors, "management
pressure on academic staff to 'fully utilise the range of marks'
and, in the extreme case, the threat of loss of teaching leading
to staff priming students on exam content" and he said that
much of the compromise in standards was impossible to identify
through formal monitoring procedures.[434]
232. The oral evidence we received from some academics
seemed to confirm this picture. Dr Fenton told us that staff who
were vulnerable, especially younger or newer members of the profession, who had "not got as much clout, standing
or protection within the institution[, were] very nervous about
speaking out, or recommending that certain students should not
be getting certain grades".[435]
Another academic, Dr Reid, in giving oral evidence, explained
that:
There is no doubt there is nothing an institution
values more closely than its external reputation, and they are
very protective of that. I know people certainly feel as though
they cannot speak out; they cannot even speak out in their own
department's staff meetings, never mind to colleagues from The
Times Higher who may be interested.[436]
233. When it gave oral evidence UCU was, understandably,
reluctant to cite specific cases of bullying[437]
but it indicated that "our colleagues certainly are telling
us it is getting worse".[438]
We pressed for general information[439]
and in a supplementary memorandum UCU drew attention to a press
release it had issued on 6 November 2008 on bullying at work.[440]
The release went wider than bullying over standards. It related
to individual academics' capacity to "speak out" with
regard to areas of research that were prioritised, the university's
reputation and more general questioning of management policy.
The release said that a survey of 9,700 members working in higher
education revealed that 6.7% of members said they were "always"
or "often" bullied at work and 16.7% said "sometimes".
Only half (51%) said they had "never" been bullied at
work.[441]
234. In its original memorandum the UCU stated that
the "Whistleblowing procedures and the academic freedom protections"
in the 1988 Education Reform Act had "proved to be inadequate
in protecting academic whistleblowers".[442]
We noted one instance where this appeared to be a problem. This
was where an academic who had raised concerns about standards
then left a higher education institution after signing a confidentiality
agreement.[443] It
appears that the whistle-blowing procedures in the 1988 Act would
not give protection from action by the institution for breach
of contract to an academic seeking to raise concerns about standards
at the higher education institution. We make no comment about
the merits of the case raised with us but it does highlight a
wider concern about the system: confidentiality agreements preventing
the operation of the whistle-blowing provisions in the 1988 Act
where the whistle-blower has concerns about standards may not be in the public interest.
235. It appears to us that the current protections
within the sector and the internal arrangements of some higher
education institutions may not provide sufficient protection to
whistle-blowers raising, in good faith, potentially serious concerns
about standards at higher education institutions. The pressures
within the system to protect the reputation of the institution
are so strong that they risk not only sweeping problems under the carpet but isolating and ostracising unjustly those raising legitimate
concerns. It is not acceptable that the only avenue available
to some of those who considered themselves aggrieved was to raise
their concerns through the immunity provided by us as a select
committee of the House of Commons when we accepted their representations
as evidence. We see grounds for concluding that the system
for reviewing the concerns of academics about standards needs
to be rebalanced to provide greater protection for those raising
concerns alongside a clear move to independent and external review.
Our initial view is that such a service, providing, for example, independent arbitration and adjudication, might be the responsibility
of a reformed QAA. We also recommend that Government bring forward
legislation to strengthen the whistle-blowing procedures in the
1988 Education Reform Act to provide greater protection to academics.
We are reluctant to go further and to reach firm conclusions without
carrying out a more detailed inquiry into adequacy of the protection
for whistle-blowers within higher education (and this is an issue that a successor committee with responsibility for scrutinising higher education may wish to return to) but on the basis
of the evidence from individual academics and the UCU we consider
that there could be a systematic problem here.
MANCHESTER METROPOLITAN UNIVERSITY
236. There was one case concerning an allegation
about standards where we became involved. This concerned Walter
Cairns, Senior Lecturer at Manchester Metropolitan University,
who submitted written evidence to our inquiry that was critical of the University's marking processes; he told us that, as a consequence, he was removed from the Academic Board of the University (see chapter 6). The case does raise a point of general application
and relevance to this chapter. The case of Mr Cairns, the details
of which we set out in chapter 6 of this Report, reinforces our
uneasiness about the adequacy of the internal systems within higher
education institutions to resolve disputes involving those who
raise concerns about standards. In our view, the ability of an
academic to appeal to an external, independent body would provide
a safety-valve for potentially explosive disputes. At a late
stage in our inquiry, a second academic from Manchester Metropolitan
University, Susan Evans, made a complaint about the University's
response to her evidence. We also deal with her representations
at chapter 6.
The autonomy of higher education
institutions
237. The question of standards in the higher education
sector highlighted the issue of autonomy of higher education institutions,
which arose at several points during this inquiry. Sir Alan Langlands,
Chief Executive of HEFCE, described higher education institutions
as "private bodies serving public functions".[444]
To avoid any confusion we must make it clear that we have not
examined in this inquiry academic freedom, which is held to be
central to the role of universities as institutions and their
academic staff as individuals in advancing knowledge and critical
education, and is often defined as the "right of each individual
member of the faculty of an institution to enjoy the freedom to
study, to inquire, to speak his mind, to communicate his ideas,
and to assert the truth as he sees it".[445]
Nor are we questioning what the Robbins Report called the individual
freedom[446] of the
academic, though on occasion, as we noted in the evidence from
the UCU, there was a potential for tension between an individual's
academic freedom to comment on standards and the actions of university
management. Our main difficulty was understanding the extent and
range of the autonomy that higher education institutions have: what
the Robbins Report called "institutional freedom".[447]
We found ourselves drawing some comparison with the operation
of the so-called Haldane Principle, which has featured in our
inquiries into the allocation of resources for scientific research.[448]
The Haldane Principle is taken to hold that the scientific research
councils (and universities) should choose which research to support
on scientific criteria at "arm's length" from the Government
and political considerations. The most striking parallel was that,
while all parties supported the principle of autonomy and had
a general idea what it meant, its detailed operation was far from
clear.
238. It is instructive to start with the 1963 Robbins
Report which saw institutional freedom as encompassing
(with, in some cases, a limited role for government) appointments,
curricula and standards, admission of students, the balance between
teaching and research, freedom of development and salaries and
staffing ratios.[449]
When we asked Dr Hood, Vice-Chancellor of the University of Oxford,
what autonomy Oxford had, his answer showed that the institutional
freedom of the 1960s had reduced. He replied:
We have autonomy and we protect our autonomy
in the sense of academic freedom but we do not have autonomy in
the sense that we are unregulated, that we are in a non-compliant
regime, for example, where we set our own regulatory framework,
our own compliance norms, quite the contrary. The Government's
funding, be it teaching funding or research funding or funding
for various outreach purposes or for tech transfer purposes comes
with very prescriptive conditions attaching to it and very strong
audit and other related requirements.[450]
239. We asked John Denham about the extent to which
he was able to direct the higher education sector given the dependence
of the sector on government for financial support. He took the
example of the Higher Education Achievement Report (HEAR), which
we examine at paragraph 261 and following. He explained that it
"would not have happened without ministers saying to the
sector that there is an issue here and we have to grasp it. That
is seen as a product of the sector and the HEAR Report is being
accepted around the sector because it is owned by them."[451]
In our view, this approach is to be commended but we have reservations
that it may come under pressure (the pattern since the 1960s shows a growing role for government in "institutional" matters in higher education institutions) and we consider
that a clearer arrangement is needed in the 21st century.
240. It is worth noting that the Government did not
adopt the approach based on the dialogue Mr Denham outlined to
us when, last year, it withdrew resources for those undertaking
equivalent or lower qualifications (ELQs) with the result that
their fees increased. Instead, it withdrew the funding by means
of a directive issued to HEFCE.[452]
As well as the constraints Dr Hood described, we would add that,
from our experience in this inquiry, there is also within universities
the constraint of managerialism as managers align the work of
their staff to their strategic goals. As the financial effects
of the recession build we would expect the pressures on universities from government above and from within institutions to increase.
241. The lack of clarity about institutional autonomy
in the higher education sector also makes it difficult to see
where responsibility for delivering government policy lies when
matters do not work out as planned. For example, in the case of
widening participation, which we examined at chapter 2, we were
told by the sector that they were "doing somersaults, metaphorically
speaking, to try to encourage applications from a broader spectrum"[453]
and that many of the levers to widen participation were not, as
we have noted, within their grasp but arose from schooling and
family circumstances.[454]
On the other hand, when Oxford and Cambridge recently fell short
of their benchmarks for students from state schools, the former
Secretary of State said that the figures showed that there were
"wide variations between the performance of different institutions
against their benchmarks in […] widening participation [...]
We need to explain why this is if we are to make further progress
which is why I am writing to HEFCE today to explore what further
action we can take and what part the QAA could play in creating
greater visibility and a better understanding […] variations
between institutions".[455]
242. We consider that both the higher education sector (academics, managers and students) and government would benefit if the
roles and responsibilities of each were set out in a concordat.
We do not envisage a detailed legal document but an agreed set
of principles governing the relationship between the government
and the sector. We recommend that the Government request HEFCE,
the higher education sector and student bodies to draw up, and
seek to agree, a concordat defining those areas over which universities
have autonomy, including a definition of academic freedom and,
on the other side, those areas where the Government, acting on
behalf of the taxpayer, can reasonably and legitimately lay down
requirements or intervene. Drawing on the issues raised across
this inquiry we set out in chapter 7 some matters which we suggest
could be included in a concordat.
Degree classification
GRADE INFLATION
243. The table below sets out first degrees awarded
by UK higher education institutions by class of degree. The figures
for 1997-98 to 2007-08 were supplied by DIUS and are taken from
the Higher Education Statistics Agency (HESA) student record which
is collected annually.[456]
Because HESA has also published figures from 1994-95 we have added
these to the sequence in italics, though they are not recorded
on the same basis as the later figures.
Table 5: Degrees by class awarded by UK
higher education institutions

244. Professor Yorke, in a paper he supplied with
his written evidence, identified the "good honours degree"
as an upper second or a first class honours degree which was "often
taken as a yardstick of success, in that it opens doors to careers
and other opportunities that would generally remain closed to
graduates with lower classes of honours", that is, lower second and third class honours.[457]
Professor Yorke has analysed the detailed data behind the figures
in the table above. He commented that:
The analyses [in his paper] for the period 1994-2002
showed that the percentage of "good honours degrees"
[…] tended to rise in almost all subject areas. When the
award data were disaggregated by institutional type, the rises
were most apparent in the elite "Russell Group" universities.
Similar analyses for the period 2002-2007 showed
that there was still a general tendency for the percentage of
"good honours degrees" to rise, but that the strongest
rises were scattered more evenly throughout institutional types.[458]
245. We put on record our thanks to Professor Yorke
and also to Professor Longden for the interest that they have
taken in our inquiry and their diligence in supplying statistical
information and analyses. Professor Yorke suggested a number of
possible reasons for the changes he observed.
Amongst those likely to influence an upward
movement in classifications were:
- Improvements in teaching
- Greater student diligence
- Curricula being expressed in terms of specific
learning outcomes which gave students a clear indication of what
they need to achieve
- Students being 'strategic' about curricular choices
- Developments in assessment methods
- Changes in the way that classifications were determined
- The significance for institutions of 'league tables'.
Classifications may be influenced downwards
by:
- Student part-time employment
- The distraction from teaching of other demands
on academics' time.
The following might also be influential, but
it was unclear what their effects might be:
- Changes in institutions' student
entry profiles
- Changes in the portfolios of
subjects offered by institutions.[459]
246. Several academics stated that standards had declined, or were declining.
[A] typical degree awarded in the Arts &
Humanities (I cannot speak for other areas) is worth less than
its equivalent of even five years ago, and certainly less than
ten or twenty years ago. This is despite the proliferation of
quality controls, some aspects of which, I believe, contribute
to declining standards.[460]
Despite educating more students, who are less
well selected, and with resources stretched more thinly, increasing
numbers of university students obtain a 2:1 or a 1st class degree.
This indicates an obvious decline in standards. […] For my
final year course I have received essays that were almost impossible
to follow, largely empty of content, a regurgitation of lecture
notes or basic textbooks and factually incorrect. I routinely
awarded these essays low grades but have been brought under pressure,
internally and externally, to provide higher grades.[461]
To those of us who have been involved in the
assessment of law subjects taught at the level of higher education,
it is obvious that standards have dropped substantially. […] This
is not only the case, as is generally believed, because of the
incidence of course work and of "seen" examination papers.
It also has to do with the manner in which the various assessed
elements, whether in the form of examinations, tests, essays
and other items of coursework, are evaluated and marked.
More particularly it relates to a tacit understanding amongst
university staff that assessment levels and methods shall be geared
mainly, if not exclusively, to the need to retain as many students
as possible for the subsequent years and for graduation.[462]
We also found of interest the comment of a mature
student who, having obtained a degree in engineering 25 years
earlier, returned to university to obtain an MSc in Biological
Sciences and so was studying alongside students who had come through
the system a generation later. He said that "much of what
I had learnt at school now had to be taught at University, inevitably
pushing out other material that would otherwise have been taught".[463]
247. With a few exceptions such as the Quality Strategy
Network,[464] we found
that representatives from the sector were not inclined to engage
in a detailed examination of the trend Professor Yorke observed.
Professor Trainor from Universities UK, while acknowledging that
there had been "a lot of talk and publicity on this in the
last six months or so, about degree classification, and so on",
noted that "the patterns of degree classification have not
changed all that much over the last ten years, only a six
per cent rise in the percentage of Firsts and 2.1s".[465]
John Denham appeared to make a similar point:
The proportion of graduates who were awarded
a first went from 8.1% to 13.3%; upper seconds increased from
45.1% to 48%. If you look at how many people got them, you are
ignoring the fact that far more people go to university, so the
significance is that if you start in a particular year what is
your chance of getting a higher degree? Those figures would not
suggest to me that you have rampant grade inflation in the way
that some people are saying.[466]
248. We found Mr Denham's explanation unsatisfactory.
Both he and Professor Trainor appear to have ignored the overall
percentage increase by emphasising, or confusing it with, the
percentage point increase. The figures in the above table show
a clear trend: a steady increase in the proportion of first degree
students achieving first class honours from 7.7% in 1996-97 to
13.3% in 2007-08, which is an increase over the period of 72%.
The trend on the proportion of upper seconds is not so pronounced
but is still significant. The trend for lower seconds is pronounced:
downwards with some exceptions. Thirds appear, since 2002-03,
to have stabilised at around 8%. Today, 61% of classified degrees
awarded are either firsts or upper seconds, compared with 53% in
1996-97, again a pronounced trend showing an increase of 15%.
The changes between 1996-97 and 2007-08 are shown in the two pie
charts below.
Chart 1: Proportion of classes of honours
degrees in 1996-97

Chart 2: Proportion of classes of honours
degrees in 2007-08

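The distinction drawn in paragraph 248 between a percentage point rise and a relative (percentage) rise can be made explicit with a short calculation. The sketch below is illustrative only; the figures are those quoted from the table above, and the 72% and 15% increases cited in paragraph 248 are simply the point increases expressed relative to the 1996-97 baselines.

```python
# Illustrative check of the arithmetic in paragraph 248, using the figures quoted
# from Table 5: a rise from 7.7% to 13.3% is a 5.6 percentage point increase,
# but roughly a 72 per cent increase relative to the 1996-97 starting point.
firsts_1996_97 = 7.7     # per cent of classified degrees awarded as firsts, 1996-97
firsts_2007_08 = 13.3    # per cent of classified degrees awarded as firsts, 2007-08

point_increase = firsts_2007_08 - firsts_1996_97                    # 5.6 percentage points
relative_increase = 100 * point_increase / firsts_1996_97           # approx. 72.7 per cent

good_1996_97 = 53.0      # firsts plus upper seconds, 1996-97
good_2007_08 = 61.0      # firsts plus upper seconds, 2007-08
good_relative = 100 * (good_2007_08 - good_1996_97) / good_1996_97  # approx. 15.1 per cent

print(f"Firsts: +{point_increase:.1f} points, +{relative_increase:.1f}% relative")
print(f"Firsts and upper seconds: +{good_relative:.1f}% relative")
```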
249. The Russell Group said that there was no evidence
of "degree inflation" at the expense of standards at
Russell Group universities. It pointed out that research from
HEFCE had
demonstrated a strong correlation between entry
qualifications and degree results that continues to exist. The
increase in the percentage of Russell Group students gaining firsts
and 2:1s from 1994-2002 correlates with a rise in the entrants'
qualifications and an increase in standards at the time the Russell
Group was established.[467]
250. In our view, it is not a sufficient defence
of the comparability of standards to show that they match the
improvement in A-level grades. On this logic, if A-level grades
have inflated unjustifiably (and there are many who think they
have), then so must higher education degree classes. Imperial
College London said that the "improvement in A Level grades
has not been accompanied by a comparable increase in knowledge
and understanding".[468]
251. The research by Professor Yorke shows that since
1994-95 the proportion of first and upper second honours degrees
has increased and the proportion of lower second class honours
degrees has decreased. We made little progress in establishing
the reasons for these changes and we found no appetite within
the higher education sector for a systematic analysis of the reasons
for the increase in the proportion of first and upper second honours
degrees. We found it telling that Professor Yorke in his memorandum
called for a study to be undertaken of the influences upon the
classification of honours degrees. As a Committee we are reluctant
to recommend more research but in this case we consider that there
is a strong case for a study along the lines suggested by Professor
Yorke, in order to establish the reasons for the increases in
firsts and upper seconds and to dispel what we hope are unfounded
misgivings that the increase may result from
factors other than greater intellectual achievement. We recommend
that the Higher Education Funding Council for England commission
a study to examine the influences upon the classification of honours
degrees since 1994 and that this be undertaken in a representative
range of subject disciplines.
COMPARISON OF DEGREES
252. Equally frustrating was our attempt to establish
whether the outcomes of degrees were comparable across the higher
education sector. We asked Professor Goodman of the University
of Oxford to define the difference between the classes of honours
degree. He explained:
The criteria that we use in our university which
we ask people to mark against is a 2:2 shows you have done the
work, you have understood the work and you are quite comfortable
with the work, a 2:1 is somebody who is actually able to use the
work and show that they can unpick the question and work around
the question and use it in a critical way, and a first class examination
answer is something that really takes you to another level. It
is a pleasure to read, you know that there is something going
on there, that it is doing something very, very interesting with
the work.[469]
He added:
Examiners very rarely disagree about that 2:1
and the first class category. I find elsewhere as well: I
taught briefly at the University of Essex and the very best students
at the University of Essex were definitely as good as the ones
here in that first class bracket.[470]
253. We found Professor Goodman's definition useful
as it was capable of application across subjects and institutions
and should mean that a student attaining a first class honours
degree at the University of Oxford is the equivalent of a student
with a first from the University of Essex. When, however, we pressed
Vice-Chancellors on the comparability of degrees the position
was less clear. In its written evidence Universities UK submitted
that, although degrees were "different and more diverse with
far more choices available to students and employers than in the
past, […] all courses are subject to the same processes to
ensure a minimum 'threshold standard' is maintained"[471]
and that "while the content of courses may differ, the level
of understanding required in each case across different universities
will be broadly equivalent".[472]
When we took oral evidence, we asked the Vice-Chancellors of Oxford
Brookes University and the University of Oxford whether upper
seconds in history from their respective universities were equivalent.
Professor Beer, Vice-Chancellor of Oxford Brookes, replied:
It depends what you mean by equivalent. I am
sorry to quibble around the word but is it worth the same is a
question that is weighted with too many social complexities. In
terms of the way in which quality and standards are managed in
the university I have every confidence that a 2:1 in history from
Oxford Brookes is of a nationally recognised standard.[473]
When asked the same question Dr Hood, Vice-Chancellor
of the University of Oxford, responded:
We teach in very different ways between the two
institutions and I think our curricula are different between the
two institutions, so the question really is are we applying a
consistent standard in assessing our students as to firsts, 2:1s,
2:2s et cetera? What I want to say in that respect is simply
this, that we use external examiners to moderate our examination
processes in all of our disciplinary areas at Oxford, and we take
that external examination assessment very, very seriously. The
external examiners' reports after each round are submitted through
our faculty boards, they are assessed and considered by the faculty
boards, they are then assessed at the divisional board level and
by the educational committee of the university. This is a process
that goes on round the clock annually, so we would be comfortable
that our degree classifications are satisfying an expectation
of national norms.[474]
254. We asked John Denham a similar question, whether
a first in geography from the University of Oxford was the same
as a first in geography from Southampton Solent University, and
he replied along the same lines:
I think the institutions are different institutions.
The teaching may well be different. The nature of the staff may
be different. There will be some nationally agreed reference points
in the academic infrastructure about what should be in the course
and each institution will have its own system for verifying the
quality and standards of it. People are studying the same subject
in different institutions. Where I am reluctant to go is into
an argument about better or worse. A lot is going to depend on
the individual student, the nature of the study and what they
are going to get out of it.[475]
255. We found these answers unclear. Nor did we find
the other arguments deployed by universities convincing when we
raised the question of comparisons. First, the argument that a
comparison of degree outcomes across the sector would require
national curricula and national testing[476]
rests, in our view, on the unqualified proposition that the only
method to achieve comparability is via the single route of national
tests. We consider that national standards can be established
and enforced by other methods such as peer review against a national
standard, a development of the role conventionally played
by external examiners. Second, the argument was put forward that
minimum standards, not comparability, were the issue.[477]
We fail to see why minimum standards should be a substitute for
the comparison of excellence. Both are important.
256. With 133 institutions the higher education sector
is diverse. While we celebrate and encourage the diversity of
the higher education sector in England, it is our view that there
need to be some common reference points. We consider that standards
have to be capable of comprehensive and consistent application
across the sector. As we noted at the beginning of this chapter,
students, understandably, want to know the worth of their degrees.
We were therefore concerned when staff at Imperial College London
informed the Chairman of the Committee during his visit as a rapporteur
that some academics had noticed that masters students enrolled
at Imperial, who had graduated from certain other universities
with first class honours degrees, sometimes struggled at Imperial
College.[478] We consider
that this could be evidence of a devaluation of degrees in those
institutions. We consider that so long as there is a classification
system it is essential that it should categorise all degrees against
a consistent set of standards across all higher education institutions
in England. Such work will need to build upon work previously
undertaken by the QAA and other bodies with responsibilities for
accreditation of degrees such as those in engineering. On the
basis of the evidence we received, however, we have concerns that
the higher education sector neither sees the need for this step
nor is willing to implement it across the sector as a whole. We
consider that this is a task that would fit well within the work
of the reformed QAA. We conclude that a key task of a reformed
QAA, in consultation with higher education institutions and government,
should be to define the characteristics of each class of honours
degree and to ensure that the standards which each university
draws up and applies are derived from these classification standards.
METHODS OF ASSESSMENT
257. In its evidence, the Assessment Standards Knowledge
Exchange (ASKe) argued that there were "numerous and significant
methodological flaws in current assessment practice at both the
macro level of degree classification, and at the micro level of
the assessment of individual students" which meant that "there
should be growing concern about the integrity of the degree as
a qualification and what it means to be a graduate."[479]
ASKe in its memorandum drew on the published work of Dr Rust,
from Oxford Brookes University, who provided examples of "major
questionable beliefs and bad practices in the system":
a) the practice of combining scores, which
obscured the different types of learning outcome represented by
the separate scores; and
b) the practice of combining scores where the
variation (standard deviation) for each component is different.
Dr Rust commented that this latter example "would
be unacceptable in the practice of a first year statistics student,
but university assessment systems do this all the time, both within
modules, and in combining the total marks from different modules
or units of study."[480]
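Dr Rust's second example can be illustrated with a small, hypothetical calculation. The marks below are invented and do not come from the evidence; the point is simply that when two components have different spreads, averaging the raw marks lets the more widely spread component dominate, which standardising the marks first would avoid.

```python
# Hypothetical illustration, with invented marks, of the practice Dr Rust
# criticises: combining raw scores whose standard deviations differ implicitly
# weights the more widely spread component.
import statistics as st

coursework = [62, 64, 66, 68, 70]    # same mean (66) as the exam, but narrow spread
examination = [40, 55, 66, 80, 89]   # same mean (66), much wider spread

def zscores(marks):
    """Standardise a set of marks to mean 0 and standard deviation 1."""
    mu, sd = st.mean(marks), st.stdev(marks)
    return [(m - mu) / sd for m in marks]

# Student A: best on coursework, worst on the exam. Student B: the reverse.
raw_a = (coursework[-1] + examination[0]) / 2    # (70 + 40) / 2 = 55.0
raw_b = (coursework[0] + examination[-1]) / 2    # (62 + 89) / 2 = 75.5

z_cw, z_ex = zscores(coursework), zscores(examination)
std_a = (z_cw[-1] + z_ex[0]) / 2
std_b = (z_cw[0] + z_ex[-1]) / 2

# Raw averaging makes B look far stronger purely because exam marks are spread
# more widely; standardising first places the two students almost identically.
print(raw_a, raw_b)                          # 55.0 75.5
print(round(std_a, 2), round(std_b, 2))      # both close to zero
```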
258. In its memorandum the Student Assessment and
Classification Working Group (SACWG), on whose behalf Dr Rust
gave oral evidence, indicated that there was "considerable
variation across the higher education sector in assessment practices.
Whilst this can be seen as a consequence of institutional autonomy,
the rationales for the various institutional choices that have
been made are unclear."[481]
The memorandum cited research which showed that:
Quite small variations in the way in which degree
classifications are determined (the "award algorithm")
can have more effect on the classification of some students than
is probably generally realised. Running a set of results through
other institutional award algorithms produces different profiles
of classifications.[482]
A number of institutions permit a small proportion
of module results to be dropped from the determination of the
class of the honours degree (provided all the relevant credits
are gained). Dropping the "worst" 30 credit points from
the normal 240 of the final two years of full-time study might
raise one classification in six, and (separately) changing the
ratio of weightings of results from the penultimate year to the
final year from 1:1 to 1:3 might change one classification in
ten, the majority of changes being upwards.[483]
Marks for coursework assignments tend to be higher
than those for formal examinations, though some instances were
found where the reverse was the case.[484]
The distribution of marks (usually in the form
of percentages) varies between subject disciplines in terms of
both mean mark and spread.[485]
A study of assessment regulations across 35 varied
institutions in the UK showed that there were considerable variations
between them [
]. Amongst the variations were the following:
- The weightings in the award
algorithm ranging between 1:1 and 1:4 for penultimate final year;
- The treatment of "borderline" performances
as regards classification;
- The adoption (or not) of "compensation"
(i.e. allowing weakness in one aspect to be offset against strength
in another) and "condonement" (i.e. not requiring a
relatively minor failure to be redeemed);
- The "capping" of marks for re-taken
assessments (at the level of a bare pass).[486]
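A hypothetical sketch of an award algorithm may help to show why the variations SACWG describes matter. The marks, credit values and class boundaries below are invented for illustration and are not drawn from any institution's rules; the point is that the same profile of marks can yield different classifications depending on whether the worst credits are discarded and how the two years are weighted.

```python
# Hypothetical sketch of an award algorithm, using invented marks and assumed,
# conventional class boundaries, to show how the variations described above can
# move a borderline student between classes.
def credit_weighted(marks, drop_worst_credits=0):
    """Credit-weighted mean of (mark, credits) pairs, optionally discarding the
    worst-scoring credits first."""
    kept = sorted(marks, key=lambda mc: mc[0])      # worst marks first
    to_drop = drop_worst_credits
    filtered = []
    for mark, credits in kept:
        if to_drop >= credits:
            to_drop -= credits                      # discard this module entirely
            continue
        filtered.append((mark, credits))
    total = sum(c for _, c in filtered)
    return sum(m * c for m, c in filtered) / total

def classify(average):
    # Assumed boundaries: 70 for a first, 60 upper second, 50 lower second.
    if average >= 70: return "First"
    if average >= 60: return "Upper second"
    if average >= 50: return "Lower second"
    return "Third"

def overall(year2, year3, weights=(1, 1), drop=0):
    w2, w3 = weights
    avg2 = credit_weighted(year2, drop)   # for simplicity the drop is applied to year 2,
    avg3 = credit_weighted(year3)         # where this student's worst module sits
    return (w2 * avg2 + w3 * avg3) / (w2 + w3)

# Invented profile: (mark, credits) over the final two years of full-time study.
year2 = [(58, 30), (55, 30), (62, 30), (48, 30)]    # penultimate year
year3 = [(64, 30), (61, 30), (66, 30), (59, 30)]    # final year

print(classify(overall(year2, year3)))                  # 1:1, nothing dropped -> Lower second
print(classify(overall(year2, year3, weights=(1, 3))))  # reweight years 1:3   -> Upper second
print(classify(overall(year2, year3, drop=30)))         # drop worst 30 credits -> Upper second
```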
259. The evidence from ASKe and SACWG was underscored
by a number of academics with responsibility for assessing students.[487]
Dr Reid explained in oral evidence that:
my university runs what has been described as
a very perverse model for classifying degree schemes, and it was
my external examiner who called it perverse. What happens is that
low marks between 0 and 20 are rounded up to 20 and high marks
from 80 to 100 are rounded downwards, and then they are averaged
together, so you have this non-linear average before making a
classification.[488]
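On the basis of Dr Reid's description alone, the effect of such a rule can be sketched as follows; the marks are invented and the detail of his institution's actual algorithm may differ.

```python
# Minimal sketch, based only on Dr Reid's description, of the "perverse" averaging
# he criticises: marks of 0-20 are rounded up to 20 and marks of 80-100 rounded
# down to 80 before averaging, flattening out both failure and excellence.
def clamp(mark, low=20, high=80):
    return max(low, min(high, mark))

marks = [5, 92, 65]                                            # invented module marks
raw_average = sum(marks) / len(marks)                          # 54.0
clamped_average = sum(clamp(m) for m in marks) / len(marks)    # (20 + 80 + 65) / 3 = 55.0

print(raw_average, clamped_average)
```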
260. The evidence we received on assessment methodologies
gave us serious grounds for concern. First, there needs to be
transparency about the methodological assumptions underpinning
the assessments used by universities. We recommend that the
Government require those higher education institutions in receipt
of support from the taxpayer to publish the details of the methodological
assumptions underpinning assessments for all degrees. We would
expect greater transparency of these methods to expose any methodological
flaws that those who gave evidence suggest are present. We believe
that publication would allow the QAA, even under its current remit
which is limited to the examination of "process", to
review comprehensively the methodologies used by universities.
We conclude that the QAA should review the methodological assumptions
underpinning assessments for degrees to ensure that they meet
acceptable statistical practice.
RECORD OF ACHIEVEMENT
261. Universities UK made the point that any system
which attempted to summarise the achievement of students on a
wide variety of programmes in a large number of institutions to
a single, common, summative judgement was a "blunt instrument".[489]
Universities UK agreed with the finding of the Burgess Group,
which it and GuildHE had established in 2004, that the current
undergraduate degree classification system did not adequately
represent the achievement of students in a modern, diverse higher
education system, though it noted that it was easier to identify
the problems with the current system than to reach consensus on
what should replace it.[490]
The Burgess Group's Report[491]
published in 2007 recommended that the Higher Education Achievement
Report (HEAR) should become the main vehicle for measuring and
recording a student's achievement. The report proposed that the
HEAR should be developed and tested over a four-year period alongside
the existing degree classification system. Following consultation
and development work, the Burgess Implementation Steering Group
is now working with a wide range of universities across the UK, with
support from the funding bodies of England, Northern Ireland,
Scotland and Wales, to trial the new approach. Initially,
the HEAR was tested on data relating to recently graduated students
to ensure that it is compatible with student record systems. It
will then be trialled with current students, alongside current
methods of recording student achievement.[492]
262. We found broad support across the sector, and
beyond it, for the HEAR.[493]
Professor Ebdon from Million+ considered the current classification
system was outmoded. He explained:
It always used to strike me as a chemist that
I would be telling my students not to average the unaverageable,
and then I would walk into an examination board and do exactly
that! As a chemist, I know very well that some people have very
strong practical skills; others are stronger theoretically. I
would like to be able to identify that, and I think that the higher
education achievement record will enable us to do that. I am therefore
strongly in favour of that.[494]
263. There were two issues that concerned those who
submitted evidence to us. First, whether the HEAR, if it emerged
successfully from the trials, would replace the current honours
classification system. The Institute of Directors (IoD) disagreed
with the "Burgess Group assertion that there is 'conclusive
evidence' that while the summative judgement 'endures', it will
actively inhibit the use of wider information".[495]
The IoD called for the summative judgement (in other words,
the current classification system) to be retained permanently
as part of the HEAR.[496]
In oral evidence, the IoD explained that it did not "argue
the system is perfect but it is a very useful and very simple
metric very early on in the recruitment process to give an indication
of the overall calibre of an applicant."[497]
Similarly, the Engineering Council UK (ECUK)
saw a role for both: it welcomed the "recommendations of
the Burgess Report, in particular the introduction of the HE Achievement
Record (HEAR) alongside the current honours degree classification
system".[498]
264. We agree with the employers' representatives.
While we fully support the work of the Burgess Group in developing
the HEAR, we consider that it would be precipitate to replace
the summative judgment provided by the current classification
system. It will take employers and those outside the higher education
sector some time to become familiar with, and accept, any new
system based on the HEAR. Speaking as lay people, we understand
why employers and others may require a summary judgement as well
as a detailed review of a student's achievements, since the detailed
review alone is unlikely to be useful in an initial trawl of applications for jobs. We
are therefore concerned that any abrupt switch would risk undermining
the excellent work the Burgess Group has done, especially if the
inevitable complexities of the system foster unwarranted suspicions
that it is masking further grade "inflation". In our
view, the higher education sector should run the two together
for a significant period after the end of the trial and allow
the current classification system either to wither on the vine
or to survive if experience shows that it is wanted. We conclude
that the HEAR and the current honours degree classification system
should run in parallel for at least five years.
265. The second concern was whether the HEAR should
include non-academic achievements (including non-assessed work-based
learning, and personal qualities extended through paid employment).
Carrie Donaghy, a student on the panel that returned to give oral
evidence in April, believed that the current degree classification
was "outdated and rigid" and that it bore "no reflection
of students' contributions to sport and volunteering".[499]
She said that she had consulted her fellow students who believed
that the HEAR project was going to be "an excellent way to
keep the traditional elements of the degree classification"
which employers recognise but also give "something further
for employers to consider, because the ideal candidates […]
for jobs are often those who are involved with things like volunteering
and sport, they are more social, they are team-players and team-leaders
and the HEAR pilot will really see this through".[500]
The counter view was given by another student, Anand Raja, who
returned in April:
University is a place where you go to learn,
just as a hospital is a place where you go to get treatment, it
is not a place where you go for entertainment. Our universities
are for learning; that should be kept in focus. Also the idea
that including such variables in the degree would help employers
make better sense of what a person is like is a good idea but
it is not necessary to include those variables in the degree because
you can always write about them in your CV.[501]
266. The HEAR as currently drafted would provide,
by comparison with the current degree classification system, much
more information and there were some concerns that the HEAR could
be unwieldy. As Ed Steward, a student, said to us, it should not
be "a short synopsis of every course you have done and you
end up handing a booklet over to your employer […] and we
end up with far too much information for employers."[502]
There is, however, a question about accessibility and balance
within a HEAR document, including the fact that much of the information
which might be provided is, de facto, probably already
available within institutions at the point a student leaves but
not currently brought together in a coherent whole. We also consider
that inclusion of information beyond the academic could act as
a stimulus to students to broaden their skills while at university
and that it has the potential to affect a student's attitude to,
and involvement in, higher education; it could, for example, help
to diminish non-completion rates. We conclude that the Higher
Education Achievement Report (HEAR) should record academic achievement
and reflect significant non-academic achievement. The record will,
however, need to be carefully structured to enable a convenient
reading of academic achievement separate from other activity.
Furthermore, we consider that, as part of the review of the HEAR
pilot, various good practice models, incorporating the range of
academic and non-academic elements, should be provided to enable
those who will use the HEAR (for example, employers, those
providing training and students themselves) to gain ready
access to the information required.
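Purely by way of illustration, the kind of structure this recommendation implies might resemble the sketch below; the field names are invented and are not taken from the Burgess Group's design of the HEAR.

```python
# Illustrative sketch only: a record that keeps academic results readable on
# their own, with non-academic achievement held separately. The structure and
# names are invented and do not reflect the actual HEAR specification.
from dataclasses import dataclass, field

@dataclass
class ModuleResult:
    title: str
    credits: int
    mark: int

@dataclass
class AchievementReport:
    student: str
    institution: str
    programme: str
    academic: list[ModuleResult] = field(default_factory=list)   # degree results
    non_academic: list[str] = field(default_factory=list)        # e.g. volunteering, sport

    def academic_summary(self) -> str:
        """Read the academic record on its own, separate from other activity."""
        return "\n".join(f"{m.title}: {m.mark} ({m.credits} credits)" for m in self.academic)

report = AchievementReport(
    student="A Student", institution="An English University", programme="BSc Biology",
    academic=[ModuleResult("Genetics", 30, 68), ModuleResult("Ecology", 30, 61)],
    non_academic=["Captain, university volleyball team", "Student ambassador"],
)
print(report.academic_summary())
```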
External Examiners
267. When we turned to consider the role and value
of the external examiner system, we found it illuminating to start
with what the Robbins Report said in 1963 on standards:
[S]tandards vary to some extent: such variations
are in the nature of things. But an autonomous institution should
be free to establish and maintain its own standards of competence
without reference to any central authority. The habit of appointing
external examiners from other universities and the obvious incentive
to maintain a high place in public esteem provide in our judgment
a sufficient safeguard against any serious abuse of this liberty.[503]
268. The one part of the system that the Robbins
Report described nearly 50 years ago that still appears recognisable
is the role of the external examiner. External examiners continue
to have, or it might be more accurate to say are perceived
to have, a key role in safeguarding standards, although the
degree to which this remains true is unclear. As Universities
UK explained, universities in this country have a "long history
of cross-checking the quality and standards of their own provision
with that of other institutions through a system of external examiners"
and that the involvement of external examiners was "recognised
internationally as a key mechanism for ensuring comparability
across the UK higher education system".[504]
Professor Trainor from Universities UK in his oral evidence
called the external examiner system "a jewel in the crown
of UK quality maintenance".[505]
He explained that the UK had a "double system, double
insurance, […] of internal scrutiny and external scrutiny,
and the two join together in the external examiner system".[506]
269. We received evidence that indicated that this
"jewel in the crown" had become tarnished. One academic
in his evidence stated: "External scrutiny is supposed to
be provided by the external examiner system, a procedure which
is too often abused. External examiners are often friends of the
module leaders and are frequently asked to scrutinise subject
areas with which they are unfamiliar. They are not encouraged
to pass adverse comments."[507]
Another academic wrote in his memorandum:
The role of the external examiner is, in principle,
supposed to be that of a supervisor and guarantor of certain standards
of quality and probity. Sadly, this lofty aspiration is met more
in the breach than in the observance because of two main factors.
In the first place, many universities have succeeded in severely
restricting the scope for action by the external examiner by the
manner in which they circumscribe his/her duties in the relevant
regulations. In many cases, the external examiner does not monitor
the general level of the marks [nor] is given the opportunity
to change individual grades, since all he/she is called upon to
do is to arbitrate between first and second markers and/or make
a decision in borderline cases. […]
[T]here is another way in which the external
examiner is unable fully to exercise his role as guardian of standards,
in that he/she cannot possibly know what has passed between tutor
and student prior to the assessment, or the input which the tutor
has had in it (in the case of coursework). For it is the worst-kept
secret in the academic world that, for unseen examination papers,
most tutors provide their students with the contents of the paper
beforehand, or at least give them a list of topics from which
the questions will be drawn. The role of the external examiner
is therefore predicated on an assumption of academic integrity
which, for the most part, does not exist.[508]
270. Professor Brown, former Vice-Chancellor of Southampton
Solent University, said that the external examiner system was
becoming "outmoded" not only "because of the basic
weaknesses in the system" but also because of the growth
of multi-disciplinary and modular courses which meant that the
external examiner was "not in close contact with the student
on a piece of work, which was the original rationale for the system.
But then on top of that you have these forces of competition which
inevitably will make people cut corners."[509]
He also commented that there was "no substitute for an independent,
impartial expert view of the curriculum from professional academics
who know their subject and that is the gap in our arrangements
at the moment and that is what needs to be done".[510]
271. In 1997, the National Committee of Inquiry
into Higher Education ("the Dearing Report") recommended
that the sector "create, within three years, a UK-wide pool
of academic staff recognised by the Quality Assurance Agency,
from which institutions must select external examiners".[511]
As far as we are aware this recommendation has never been implemented
and in the years since the Dearing Report we cannot see that higher
education institutions have done much to safeguard or improve
the external examiner system. The evidence we received showed
that far from being the jewel in the crown that Universities UK
claimed it was, it appeared that the system might be simultaneously
wilting and rotting from within as it has become exposed to the
pressures and heat of sector-wide changes, internal pressures
and external demands. In our view, if matters continue as they
have been, the system of external examiners will become outmoded.
Whilst it had value in the past in guaranteeing, as Professor
Brown put it, that "anyone who takes a British degree is
getting a worthwhile qualification with a worthwhile curriculum",[512]
we believe that it will be unable to continue to provide an assurance
of quality unless the independence, rigour and consistency of
the system are reinvigorated and enhanced.
272. From the evidence that we received we would
say that the problems of the external examiner system at present
can be summarised as follows:
- the remit and autonomy of external
examiners are often unclear and may sometimes differ substantially
across institutions in terms of operational practices;
- the reports produced by external examiners are
often insufficiently rigorous and critical;
- the recommendations in external examiners' reports
are often not acted upon, partly because the examiners' remit is unclear;
and
- the appointment of external examiners is generally
not transparent.
273. Notwithstanding these deficiencies, we agree
with Universities UK that the external examiner system is fundamental
to ensuring high and comparable standards across the sector and
that is why we believe that it is worth making the effort to refurbish
the system. The starting point for the repair of the external
examiner system is the recommendation made by the Dearing Report
to the Quality Assurance Agency "to work with universities
and other degree awarding institutions to create, within three
years, a UK-wide pool of academic staff recognised by the Quality
Assurance Agency, from which institutions must select external
examiners". We conclude that the sector should now implement
this recommendation. Drawing on the evidence we received we would
add that the reformed QAA should be given the responsibility of
ensuring that the system of external examiners works and that,
to enable comparability, the QAA should ensure that standards
are applied consistently across institutions. We strongly support
the development of a national "remit" for external examiners,
clarifying, for example, what documents external examiners should
be able to access, the extent to which they can amend marks (in
our view, they should have wide discretion) and the matters
on which they can comment. This should be underpinned with an
enhanced system of training, which would allow examiners to develop
the generic skills necessary for multi-disciplinary courses. We
conclude that higher education institutions should only employ
external examiners from the national pool. The system should also
be transparent and we conclude that, to assist current and prospective
students, external examiners' reports should be published without
redaction, other than to remove material which could be used to
identify an individual's mark or performance.
Plagiarism
274. In its memorandum ASKe[513]
commented that plagiarism was a problem and that "concern
about student plagiarism is an even greater problem". ASKe
reported that:
There is evidence to show it is rising, and in
particular, that deliberate attempts to deceive assessors are
rising sharply from a relatively low base of (a generally agreed
assumed level of) 10-15 cases per 1000 submissions. Statistics
about levels of plagiarism are contradictory and hard to evaluate
as they ask very different questions of different groups of students.
Surveys that show "almost all students cheat" are frequent
but irrelevant since they usually refer to one-off or pragmatic
decisions with little or no impact on students' overall skills/learning
or on the credibility of their final award. […] There is
much useless scaremongering in this area, implying that UK graduates
are not reliably assessed on discipline specific skills.
The opportunities for plagiarism have risen exponentially
since 2003, both in terms of available internet resources and
via bespoke writing "services" […] It is estimated
that the latter are available via more than 250 sites in the UK
alone. In 2005, the Guardian stated such "services"
attracted spending of more than 200 million pounds per year. These
opportunities and evidence of their use do now present a threat
to generic, coursework-assessed courses. Copying and faking work
is likely to be a regular practice in large, generic courses in
some disciplines. Business, Computing and Law are most often mentioned
though concern in all disciplines is widespread. In some cases,
studies show up to 50 per cent of students say they submit others'
work, at least for some of the assessment, in large, generic courses
assessed by coursework. […]
Simplistic reactions to the problems of plagiarism,
like a retreat to exams or reliance on technology are not the
solution. Addressing plagiarism is well within the capacity of
university pedagogic and administrative processes and there are
examples of it being handled with creativity and good effect across
the UK. There are also many examples of universities who have
yet to address the issue systematically and in those cases, a
significant issue remains.[514]
275. A number of academics commented in their written
submissions on plagiarism as part of a decline in academic standards.
For example:
In the time I have been teaching I have witnessed
a remarkable decline in academic standards. At many institutions,
grades have been inflated, plagiarism is often ignored.[515]
The University strategies to identify plagiarism
were inadequate and the procedures available to combat plagiarism
were ineffective. I repeatedly tried to have my concerns about
excessive toleration of plagiarism considered by the University.
However, I was constantly put off by the University Management.
All my complaints were ignored despite a litany of requests for
action and no penalties were sanctioned when plagiarism was suspected
and detected.[516]
276. When she gave evidence to us Dr Fenton, an academic,
said:
I am in charge of all plagiarism cases in our
department. I reckon 10 to 20 per cent of all assignments are
plagiarised. We do offer extensive advice on what plagiarism is
and how to avoid it to all students at all levels through all
course handbooks, and they have to sign bits of paper when they
hand work in saying they understand those criteria and they have
not plagiarised. We ask for electronic copies of all assessments
handed in and they are put through plagiarism detection software.
If, at the point of marking, they are suspected of plagiarism
then they are put through the software and then we pick them up.
We probably pick up about 2 per cent of what I imagine is 10 to
20 per cent.[517]
At the same session another academic, Dr Reid, added
that his experience was
certainly plagiarism levels have increased, but
on the science side it is perhaps a slightly different problem
than having a big pile of essays; we are often in a situation
where there are right answers and wrong answers and it is very
easy to distinguish between the two, and it is sometimes difficult
to understand how a student has arrived at the right solution
and whether they have done that independently or in a group. I
have had very nasty plagiarism cases in my department to deal
with; I am Director of Learning and Teaching and I have overall
responsibility for those issues. Almost invariably, the student's
excuse was pressure of time, the deadline coming up and they had
to work 17 hours that week to pay the rent, and really regretted
doing it but in a moment of weakness took a piece of work from
somebody else, and handed the same thing in. It is devastating.[518]
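Dr Fenton's description of running electronic submissions through detection software can be illustrated, very loosely, with a toy overlap check of the kind below. This is not the software the sector uses; commercial services match submissions against large external databases and use far more sophisticated techniques.

```python
# Toy illustration only, not the detection software Dr Fenton's department uses:
# a crude check of how much of one text's five-word sequences also appear in a
# known source.
def ngrams(text, n=5):
    """Set of overlapping n-word sequences in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=5):
    """Fraction of the submission's n-grams that also occur in the source."""
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

essay = ("the water cycle describes how water evaporates from the surface "
         "rises into the atmosphere cools and condenses into rain")
lecture_notes = ("water evaporates from the surface rises into the atmosphere "
                 "cools and condenses into rain or snow and falls again")

score = overlap(essay, lecture_notes)
if score > 0.3:   # threshold chosen arbitrarily for the illustration
    print(f"Flag for review: {score:.0%} of five-word sequences match a known source")
```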
277. The evidence we received from students showed
us that they were aware of plagiarism and, significantly, they told
us of the steps that the sector was taking to combat the problem.
Ed Steward, a student, explained:
you have huge amounts of guidance on plagiarism.
In every single book that you are given there is guidance on plagiarism,
it is given out on separate sheets, it is sent out before you
even arrive at university, it is on the website, it is absolutely
everywhere because it is so crucial that you understand plagiarism
in order not to commit it. I sit on some disciplinaries for students
who have been accused of plagiarism and the two types of students
that I see are those that panic and have not done the work, and
plagiarise in order just to submit the work on time, and those
who genuinely do not understand that they have plagiarised. It
can be as simple as referencing, not putting things in quotation
marks; that counts as plagiarism, so the university is keen to
ensure that every student fully understands every aspect of plagiarism.[519]
278. There was, however, some evidence of variation
in the level of vigilance against plagiarism within the sector.
Ricky Chotai, a student, considered that not "enough emphasis
is put on the structure, do we use the Harvard system [of referencing],
and then some academics are also somewhat lax (as long as
you are putting references down and as long as it is not the strict
system) other academics are very strict as in you must use
a specific system."[520]
He also said that in his university "we have seen an
increasing trend in plagiarism […] with international students
and where […] the university is using agencies to recruit
students from abroad […] they are just not explaining about
plagiarism".[521]
Mr Chotai added that "we have had some really shocking cases
of a lot of students in a single class plagiarising and being
simply unaware of it."[522]
279. From the limited evidence we received it is
clear that plagiarism by students is a serious problem and challenge
but one that the higher education sector in general is both aware
of and, it claims, actively responding
to. There is, however, no room for complacency. Since 2003 the
opportunities for plagiarism have risen exponentially, both in
terms of material available on the Internet and, apparently, by
the development of a market in so-called writing services for
students. We conclude that the growth in opportunities for
plagiarism is such that the sector needs to be especially vigilant,
establish the application of consistent approaches across the
sector and ensure that it fully shares intelligence. We recognise
that many students accused of plagiarism may be guilty of little
more than failing to reference sources correctly and that the
majority of students are conscientious and act in good faith.
Given, however, the scale and potential for damage to the reputation
of English universities, it is vital that the problem is held in
check and then progressively "educated" and "managed"
out of the system. We recommend that the Government, in consultation
with the higher education sector including students' representatives,
put in place arrangements to establish standards, which set out
what is and what is not plagiarism, ensure that comprehensive
guidance is available across the sector, and co-ordinate action
to combat plagiarism. One possible candidate for this work is
the Higher Education Academy working with the reformed QAA. We
also request that the Government, in responding to this Report,
advise whether those providing or using so-called "writing
services", to produce work which students can misrepresent
as their own, are liable for criminal prosecution.
365 These definitions draw on definitions given in a talk by Peter Williams, Chief Executive of the QAA, and "The Evolution of Institutional Audit in England" posted on the Internet at www.hrk.de/de/download/dateien/QA_in_England.pdf.
366 Ev 438, para 17
367 As above
368 Ev 438, para 18
369 Q 40
370 Qq 412 (Professor Arthur), 413
371 Ev 283, para 15
372 HC 370-iii, Q 369
373 As above
374 HC 370-iii, Q 387
375 HC 370-iii, Q 387
376 Ev 171, para 6; see also Ev 159 (Informal meeting with students at Imperial College London) and Ev 164-65 (Informal meeting with University of Oxford students), Qq 252-53.
377 HC 370-ii, Q 352
378 HC 370-ii, Q 354
379 Q 198
380 Ev 171
381 QAA, Self Evaluation: External Review for Confirmation of Full Membership of the European Association for Quality Assurance in Higher Education (ENQA), February 2008, and QAA's website provide the following information. The Board of QAA has 15 members: four are appointed by the representative bodies of the heads of higher education institutions; four are appointed by the funding bodies in higher education in the UK; six are independent directors who have wide experience of industry, commerce, finance or the practice of a profession, and are appointed by the Board as a whole; and one is a student, also appointed by the Board as a whole.
382 QAA, An Introduction to the QAA, 2009
383 Ev 438, para 18
384 The Office for Standards in Education, Children's Services and Skills (Ofsted) is a non-ministerial government department created on 1st April 2007. The Education & Inspections Act 2006 established the new Department as the single inspectorate in England for children, young people and adult learners, bringing together functions from the Commission for Social Care Inspection (CSCI), the Children and Family Court Advisory and Support Service (CAFCASS) of HM Inspectorate of Court Administration (HMICA), the Adult Learning Inspectorate (ALI), and the Office for Standards in Education (the former Ofsted). (Office for Standards in Education, Children's Services and Skills Resource Accounts 2007-08, HC (2007-08) 582)
385 QAA, Annual review 2006-07, 2008, p 16
386 Ev 237, paras 1.2-1.3
387 Q 40; see also Qq 102, 105 and 107.
388 Q 40
389 As above
390 Q 412 (Professor Driscoll)
391 Q 317
392 Ev 187 (Mr Royle), para 3.5
393 Ev 186 (Professor Ryan)
394 Q 470
395 Ev 185 (Professor Ryan) and Ev 381 (Staffordshire University), para 3.1
396 HC 370-ii, Q 258; Ofqual is the Office of the Qualifications and Examinations Regulator and is regulator of qualifications, examinations and tests in England; it does not cover the higher education sector.
397 HC 370-ii, Q 258
398 Q 342
399 As above
400 Q 343
401 As above
402 QAA, Procedure for identifying and handling causes for concern in English institutions offering higher education programmes or awards, Procedure for adoption from 1 March 2007.
403 DIUS, Higher Education Funding Council for England, Department of Health, Ofsted, Training and Development Agency, National Union of Students, The National Postgraduate Committee, Architects Registration Board, Royal Institute of British Architects, Engineering Council UK, Royal College of Veterinary Surgeons, The Law Society, The Bar Council, Health Professions Council, General Medical Council, General Dental Council, General Optical Council, General Social Care Council, General Chiropractic Council, General Osteopathic Council, Royal Pharmaceutical Society of Great Britain, Nursing and Midwifery Council, The British Psychological Society, Association of Chartered Certified Accountants, The Association of International Accountants, The Chartered Institute of Management Accountants, The Chartered Institute of Public Finance and Accountancy and The Institute of Chartered Accountants in England and Wales
404 QAA, Procedure for identifying and handling causes for concern in English institutions offering higher education programmes or awards, Procedure for adoption from 1 March 2007
405 Ev 518
406 As above
407 As above
408 QAA, Thematic enquiries into concerns about academic quality and standards in higher education in England: Final report, April 2009, p 1
409 QAA, Thematic enquiries into concerns about academic quality and standards in higher education in England: Final report, April 2009, p 1
410 QAA, Thematic enquiries into concerns about academic quality and standards in higher education in England: Final report, April 2009, para 30
411 For example, in this case we favour a code of practice on information for prospective students; see para 98.
412 Q 541
413 Q 541; see also Qq 546-50
414 The Academic Experience of Students in English Universities (2007 report), HEPI, September 2007, and The Academic Experience of Students in English Universities (2009 Report), HEPI, May 2009
415 Qq 48-57, 419-21; HC 370-i, Qq 6-10
416 Q 419
417 Q 422
418 The Academic Experience of Students in English Universities (2009 Report), HEPI, May 2009, paras 1 and 15
419 Q 423
420 Ev 243, para 4
421 Ev 244, para 10
422 www.hefce.ac.uk/Pubs/rdreports/2009/rd06_09/
423 HC 370-i, Q 16
424 HC 370-i, Qq 16-17
425 Department for Education and Skills, Applications for the grant of taught degree-awarding powers, research degree-awarding powers and university title: Guidance for applicant organisations in England and Wales, August 2008, para 2
426 QAA, A brief guide to QAA's involvement in degree-awarding powers and university title, www.qaa.ac.uk/reviews/dap/briefGuideDAP.asp
427 Department for Education and Skills, Applications for the grant of taught degree-awarding powers, research degree-awarding powers and university title: Guidance for applicant organisations in England and Wales, August 2008, para 21 ff. and Appendix 1
428 Q 102
429 Q 45
430 Q 46
431 For example, Ev 500 (Professor El-Sayed), Ev 537 (WJ Cairns)
432 Ev 499, para 21
433 As above
434 Ev 513, para 5
435 Q 479
436 Q 480; see also Q 477
437 Q 482
438 Q 484
439 Q 483
440 Ev 531
441 Ev 531; see also "'Bullied' academics' blog attack", BBC, 8 May 2007
442 Ev 499, para 21
443 Ev 500 (Professor El-Sayed)
444 Q 517
445 Dictionary of the History of Ideas at etext.lib.virginia.edu/cgi-local/DHI/dhiana.cgi?id=dv1-02
446 Robbins Report, para 705-06
447 Robbins Report, para 707ff
448 Fourth Report of Session 2007-08, Science Budget Allocations, HC 215-i, paras 20-27; Eighth Report of Session 2008-09, Putting Science and Engineering at the Heart of Government Policy, HC 168-I, para 138ff
449 Robbins Report, para 707 ff.
450 HC 370-ii, Q 175
451 Q 507
452 See Third Report of Session 2007-08, Withdrawal of funding for equivalent or lower level qualifications (ELQs), HC 178-i, para 2.
453 Q 400
454 Ev 182 (Professor Gorard); Ev 322 (157 Group), paras 4-5; Ev 345 (1994 Group), para 3; Ev 402, 404-07 (Russell Group); Ev 437 (Universities UK), para 9; Ev 506 (NAO), para 20; Q 85 (Professor Baker); Q 145 (Mr Streeting); see also Ev 160 (Informal meeting with Liverpool Hope students), Ev 162 (Informal meeting with University of Oxford students); Ev 166 (E-Consultation).
455 "DIUS response to HESA performance indicators", DIUS Press Release, 4 June 2009
456 Ev 534
457 "Eddies in the current? Trends in honours degree classifications in England, Wales and Northern Ireland, 2002-07", Mantz Yorke, Visiting Professor, Lancaster University, paper presented on 9 December 2008 at the Society for Research into Higher Education Conference held in Liverpool, Ev 202
458 Ev 201, para 4-5
459 Ev 201, para 6
460 Ev 258 (Dr Penhallurick), para 4
461 Ev 212 (Dr Derbyshire), para 3
462 Ev 538 (Mr Cairns), para 4
463 Ev 344 (Mr Dyer)
464 Ev 320, para 17
465 Q 42
466 Q 544
467 Ev 414, para 54
468 Ev 350, para 7
469 HC 370-ii, Q 274
470 As above
471 Ev 439, para 20
472 Ev 439, para 21
473 HC 370-ii, Q 201
474 HC 370-ii, Q 203 (Dr Hood)
475 Q 553
476 Q 416 (Professor Arthur)
477 Q 416 (Professor Brown)
478 Ev 157
479 Ev 194, para 3.1
480 Ev 195, para 3.3.5
481 Ev 223
482 Ev 221, para 7
483 Ev 221, para 8
484 Ev 221, para 9
485 Ev 221, para 10
486 Ev 222, para 13
487 For example, Ev 537 (Mr Cairns)
488 Q 469
489 Ev 439, para 28
490 Ev 439, para 28
491 Universities UK, Beyond the honours degree classification: Burgess Group Final Report, October 2007, at www.universitiesuk.ac.uk/Publications/Documents/Burgess_final.pdf
492 "Institutions pilot new student achievement report", Universities UK Press Release, 21 October 2008
493 Ev 196 (ASKe), para 3.5; Ev 239 (QAA), para 5.1; Ev 263 (NUS), para 19; Ev 306 (Higher Education Academy), para 2.5; Ev 310 and 314 (Million+); Ev 330 (University of the Creative Arts), para 11.5; Ev 367 (ECUK), para 9; Ev 435 (Birmingham City University), para 3.4.1; Ev 498 (UCU), para 17
494 Q 44
495 Ev 333, para 8
496 Ev 333, para 9
497 HC 370-iii, Q 371
498 Ev 367, para 9
499 HC 370-iii, Q 422
500 As above
501 HC 370-iii, Q 423
502 HC 370-iii, Q 428
503 Robbins Report, para 713
504 Ev 439, para 24
505 Q 40
506 Q 40
507 Ev 187 (Mr Royle), para 3.9
508 Ev 538 (Mr Cairns), paras 10-11
509 Q 417
510 Q 423 (Professor Brown)
511 Dearing Report, recommendation 25
512 Q 417
513 Assessment Standards Knowledge Exchange
514 Ev 196, para 3.6; see also Ev 193, para 18; Ev 232, para 22; and Ev 258, para 8.
515 Ev 187 (Mr Royle), para 3.1
516 Ev 501 (Professor El-Sayed), para 4.1
517 Q 446
518 Q 449
519 HC 370-iii, Q 441 (Mr Steward); see also Q 450.
520 HC 370-iii, Q 452
521 HC 370-iii, Q 445
522 HC 370-iii, Q 445 (Mr Chotai)