Grade inflation is used in two senses: (1) grading leniency, the awarding of higher grades than students deserve, which yields a higher average grade; and (2) the tendency to award progressively higher academic grades for work that would have received lower grades in the past.
This article is about grade inflation in the second sense. Higher grades in themselves do not prove grade inflation; it is also necessary to demonstrate that the grades are not deserved, and many believe there is no such problem.
Grade inflation is frequently discussed in relation to education in the United States, and to GCSEs and A levels in England and Wales. It is also an issue in many other nations, such as Canada, Australia, New Zealand, France, Germany, South Korea and India.
Louis Goldman, professor at Wichita State University, reports that a survey of 134 colleges found a GPA increase of 0.404 points from 1965 to 1973, and that a second study of 180 colleges found a 0.432-point increase from 1960 to 1974, both indicating grade inflation.
Stuart Rojstaczer, a retired geophysics professor at Duke University, has collected historical data from over 400 four-year schools, in some cases dating back to the 1920s, showing evidence of nationwide grade inflation over time, and regular differences between classes of schools and departments.
Harvey Mansfield, a professor of government at Harvard University, argues that the very denial of grade inflation's existence at Harvard shows how serious the problem is. He states that some professors give students easy grades in order to be popular, and that these professors will be forgotten; only the ones who challenge students will be remembered.
Main historical trends identified include:
The average at private schools is currently 3.3, while at public schools it is 3.0. This difference is partly but not entirely attributed to differences in quality of student body, as measured by standardized test scores or selectivity. After correcting for these factors, private schools grade on average 0.1 or 0.2 points higher than comparable public schools, depending on which measure is used.
There is significant variation in grading between different schools, and across disciplines. Between classes of schools, engineering schools grade lower by an average of 0.15 points, while public flagship schools grade somewhat higher. Across disciplines, science departments grade on average 0.4 points below humanities and 0.2 points below social sciences. While engineering schools grade lower on average, engineering departments grade comparably to social sciences departments, about 0.2 points above science departments. These differences between disciplines have been present for at least 40 years, and sparse earlier data suggests that they date back 70 years or more.
Until recently, the evidence for grade inflation in the US has been sparse, largely anecdotal, and sometimes even contradictory; firm data on this issue was not abundant, nor was it easily attainable or amenable for analysis. National surveys in the 1990s generally showed rising grades at American colleges and universities, but a survey of college transcripts by a senior research analyst in the U.S. Department of Education found that grades declined slightly in the 1970s and 1980s. Data for American high schools were lacking.
Recent data leave little doubt that grades are rising at American colleges, universities and high schools. An evaluation of grading practices in US colleges and universities written in 2003 shows that since the 1960s, grades in the US have risen at a rate of 0.15 points per decade on a 4.0 scale. The study included over 80 institutions with a combined enrollment of over 1,000,000 students. An annual national survey of college freshmen indicates that students are studying less in high school, yet an increasing number report high school grades of A- or better. Studies are under way to determine who is more likely to inflate grades, and why an instructor would do so.
In an attempt to combat the grade inflation prevalent at many top US institutions, Princeton began in the autumn of 2004 to employ guidelines for grading distributions across departments. Under the new guidelines, departments have been encouraged to re-evaluate and clarify their grading policies. The administration suggests that, averaged over the course of several years in an individual department, A-range grades should constitute 35% of grades in classroom work, and 55% of grades in independent work such as Senior Theses. These guidelines are enforced by the academic departments. Since the policy's inception, A-range grades have declined significantly in Humanities departments, while remaining nearly constant in the Natural Science departments, which were typically at or near the 35% guideline already.
In 2009, it was confirmed that the policy implemented in 2004 had brought undergraduate grades within the ranges targeted by the initiative. In 2008-09, A grades (A+, A, A-) accounted for 39.7% of grades in undergraduate courses across the University, the first time that A grades have fallen below 40% since the policy was approved. The results were in marked contrast to those from 2002-03, when As accounted for a high of 47.9% of all grades.
Deflation has varied by division, with the social sciences and natural sciences largely holding steady for the last four years. During that period, A grades have ranged from 37.1 to 37.9% in the social sciences and from 35.1 to 35.9% in the natural sciences. In the humanities and engineering, where deflation has been slower, 2008-09 brought significant movement. A's accounted for 42.5% of grades in the humanities last year and 40.6% of grades in engineering, both down two percentage points compared to 2007-08. In the period from fall 2006 through spring 2009, the most recent three-year period under the new grading policy, A's accounted for 40.1% of grades in undergraduate courses, down from 47.0% in 2001-04, the three years before the faculty adopted the policy. The 2006-09 results also mark continued deflation from those reported a year ago, when A's accounted for 40.4% of undergraduate grades in the 2005-08 period. In humanities departments, A's accounted for 44.1% of the grades in undergraduate courses in 2006-09, down from 55.6% in 2001-04. In the social sciences, there were 37.7% A grades in 2006-09, down from 43.3% in 2001-04. In the natural sciences, there were 35.6% A grades in 2006-09, compared to 37.2% in 2001-04. In engineering, the figures were 41.7% A's in 2006-09, down from 50.2% in 2001-04.
Grade inflation is often equated with lax academic standards. For example, the following quote about lax standards from a Harvard University report in 1894 has been used to claim that grade inflation has been a longstanding issue: "Grades A and B are sometimes given too readily--Grade A for work of no very high merit, and Grade B for work not far above mediocrity. ... insincere students gain passable grades by sham work." Issues of standards in American education have been longstanding. However, rising grades did not become a major issue in American education until the 1960s. For example, in 1890 Harvard's average GPA was 2.27. In 1950, its average GPA was 2.55. By 2004, its GPA, as a result of dramatic rises in the 1960s and gradual rises since, had risen to 3.48.
Harvard graduate and professor Harvey Mansfield is a longtime vocal opponent of grade inflation at his alma mater. In 2013, Mansfield, after hearing from a dean that "the most frequent grade is an A", began giving students two grades: one for their transcript, and the one he thinks they deserve. He commented, "I didn't want my students to be punished by being the only ones to suffer for getting an accurate grade". In response, Nathaniel Stein published a satirical "leaked" grading rubric in The New York Times, which included such grades as an A++ and A+++, or "A+ with garlands". In his 2001 article in the Chronicle of Higher Education, Mansfield blames grade inflation on affirmative action and unqualified African American students: "I said that when grade inflation got started, in the late 60's and early 70's, white professors, imbibing the spirit of affirmative action, stopped giving low or average grades to black students and, to justify or conceal it, stopped giving those grades to white students as well." He acknowledged that he had no figures to back up his claims, but remained convinced that grade inflation began with African American students receiving grades that were too high: "Because I have no access to the figures, I have to rely on what I saw and heard at the time. Although it is not so now, it was then utterly commonplace for white professors to overgrade black students. Any professor who did not overgrade black students either felt the impulse to do so or saw others doing it. From that, I inferred a motive for overgrading white students, too."
The University of Alabama has been cited as a recent case of grade inflation. In 2003, Robert Witt, president of the university, responded to criticism that his administration encouraged grade inflation on campus by shutting down access to the records of the Office of Institutional Research, which, until that year, had made grade distribution data freely available. The data are, however, still available on the Greek Affairs website. The Alabama Scholars Organization, and its newspaper, the Alabama Observer, had been instrumental in exposing the situation and recommending that the Witt administration adopt public accountability measures. The paper had revealed that several departments awarded more than 50 percent "A"s in introductory courses and that one department, Women's Studies, handed out 90 percent "A"s (the vast majority of those being "A+"). Grades had grown consistently higher during the period examined, from 1973 to 2003.
UC Berkeley has a reputation for rigorous grading policies. Engineering departmental guidelines state that no more than 17% of the students in any given class may be awarded A grades, and that the class GPA should be in the range of 2.7 to 2.9 out of a maximum of 4.0 grade points. Some departments, however, are not adhering to such strict guidelines: data from UCB's Office of Student Research indicate that the average overall undergraduate GPA was about 3.25 in 2006. Other campuses have stricter grading policies. For example, the average undergraduate GPA at UC San Diego is 3.05, and fewer students in science majors have GPAs above 3.5. UC Irvine's average GPA is 3.01.
A small liberal arts college in New Hampshire, Saint Anselm College has received national attention and recognition for attempting to buck the trend of grade inflation seen on the campuses of many American colleges and universities. At Saint Anselm, the top 25% of the class has a 3.1 GPA, and the median grade at the college is around a 2.50 GPA. Some professors and administrators believe that inflating grades makes it harder for students to realize their academic strengths and weaknesses and may encourage students to take classes based on grade expectation. The practice also makes it harder for parents and students to determine whether or not the grade was earned. Because of this, Saint Anselm College set up a curriculum committee in 1980 to meet monthly with the academic dean and review grading policies; the committee joins the administration and faculty in a combined effort against grade inflation. The former president of the college, Father Jonathan DeFelice, is quoted as saying, in support of Saint Anselm's stringent grading system, "I cannot speak for everyone, but if I'm headed for the operating room, I will take the surgeon who earned his or her 'A' the honest way."
Other colleges such as Washington and Lee University, University of Rochester, Middlebury College, The College of William and Mary, Fordham University, Swarthmore College, Bates College, Cornell University, the University of Chicago and Boston University are also known for their rigorous grading practices. However, data indicate that even schools known for their traditionally rigorous grading practices have experienced grade inflation, and these claims may now be overstated. Washington and Lee had an average GPA of 3.27 in 2006, and Swarthmore's graduates had a mean GPA of 3.24 in 1997. At some schools there are concerns about different grading practices in different departments; engineering and science departments at schools such as Northwestern University are reputed to have more rigorous standards than departments in other disciplines. To clarify the grades on its graduates' transcripts, Reed College includes a card, the current edition of which reports that "The average GPA for all students in 2013-14 was 3.15 on a 4.00 scale. This figure has increased by less than 0.2 of a grade point in the past 30 years. During that period, only eleven students have graduated from Reed with perfect 4.00 grade averages." Wellesley College implemented a maximum per-class average grade cap of 3.33 in 2004, though professors could award a higher average grade by filing a written explanation. Grades fell to comply with the cap, and student evaluations of professors also declined. The number of students majoring in economics increased while numbers in other social sciences decreased, though this may have been part of larger general trends at the time.
A January 7, 2009 article in the Pittsburgh Post-Gazette used the term "grade inflation" to describe how some people viewed a grading policy in the Pittsburgh public school district. According to the article, the policy sets 50% as the minimum score that a student can get on any given school assignment. The article also stated that some students said they would rather get a score of 50% than do the school work. A March 2, 2009 follow-up article in the same newspaper said that the policy had been amended so that students who refuse to do the work will receive a grade of zero, and that the minimum grade of 50% will only apply to students who make a "good-faith effort". A March 3, 2009, article in the same newspaper quoted Bill Hileman, a Pittsburgh Federation of Teachers staff representative, as saying, "The No. 1 problem with the 50 percent minimum was the negative impact on student behavior." The same article also said that the school district was planning to adopt a new grading scale in at least two schools by the end of the month. The article stated that under the original grading scale, the minimum scores required to earn an A, B, C, D, or F, were, respectively, 90%, 80%, 70%, 60%, and 0%. Under the new 5-point grading scale, the minimum scores required to earn an A, B, C, D, or F would be changed, respectively, to 4.0, 3.0, 2.0, 1.0, and 0.
James Côté and Anton L. Allahar, both professors of sociology at the University of Western Ontario, conducted a rigorous empirical study of grade inflation in Canada, particularly in the province of Ontario. Until the 1960s, grading in Ontario was based on the British system, in which no more than 5% of students were given As and 30% given Bs. In the 1960s, average performers in Ontario were C students, while A students were considered exceptional. As of 2007, 90% of Ontario students had a B average or above. In Ontario, high school grades began to rise with the abolition of province-wide standardized exams in 1967.
The abolition of province-wide exams meant that student marks were entirely assigned by individual teachers. In 1983, 38% of students registering in universities had an average higher than 80%; by 1992, this figure was 44%. According to the Council of Ontario Universities, 52.6% of high school graduates applying to Ontario universities in 1995 had an A average; by 2004, this figure had risen to 61%. In 1995, 9.4% of high school graduates reported an A+ average; by 2003, this figure had risen to a high of 14.9%. The average grade of university applicants was 80% in 1997, and this percentage has steadily increased each year since.
In 2004, Quebec's McGill University admitted that students from Ontario were given a higher cutoff grade than students from other provinces, because of concerns about grade inflation originating from the fact that Ontario does not have standardized provincial testing as a key component of high school graduation requirements.
In 2007, the Atlantic Institute for Market Studies released a report on grade inflation in Atlantic Canada. Mathematics scores in New Brunswick francophone high schools indicate that teacher-assigned marks are inflated relative to marks achieved on provincial exams. Looking at the school years 2001-2002 to 2003-2004, school marks in all 21 high schools were higher than the provincial exam marks: over the three years, the provincial average for school marks was 73.7%, while the average for provincial exam marks was 60.1%.
In the context of provincial exams and teacher-assigned grades, grade inflation is defined as the difference between the teacher-assigned marks and the results on the provincial exam for that particular course. Higher grade inflation was found to accompany lower provincial exam results. Of the 21 high schools, École Marie-Gaëtane had the highest grade inflation, at 24.7%; with a provincial exam average of 52.3%, it was also the lowest-achieving school in the province. In contrast, Polyvalente Louis-J-Robichaud, Polyvalente Mathieu-Martin, École Grande-Rivière and Polyvalente Roland-Pépin had the lowest grade inflation, with values ranging from -0.7% to 9.3%; they were the four top-performing schools on the grade 11 mathematics provincial exams. Similar results were found for anglophone New Brunswick high schools, as well as for Newfoundland and Labrador schools. Despite the high marks assigned by teachers, Atlantic Canadian high school students have consistently ranked poorly in pan-Canadian and international assessments.
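Under this definition, a school's inflation figure is simple arithmetic: the teacher-assigned average minus the provincial exam average for the same course. A minimal sketch (the province-wide figures are those reported above; the helper function and the implied school mark for École Marie-Gaëtane are illustrative):

```python
def grade_inflation(school_mark_avg: float, exam_mark_avg: float) -> float:
    """Grade inflation as defined in the report: the gap between the
    teacher-assigned average and the provincial exam average."""
    return round(school_mark_avg - exam_mark_avg, 1)

# Province-wide averages for francophone New Brunswick, 2001-02 to 2003-04
print(grade_inflation(73.7, 60.1))  # 13.6 percentage points

# École Marie-Gaëtane: exam average 52.3% and reported inflation of 24.7%,
# which implies a teacher-assigned average of about 77%
print(grade_inflation(77.0, 52.3))  # 24.7
```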
In 2008, in British Columbia, the University of Victoria (UVic) and the University of British Columbia (UBC) reduced the number of Grade 12 provincial exams that high school students were required to write in order to gain admission to those universities. Prior to 2008, high school students applying to UVic and UBC were required to write 4 provincial exams, including Grade 12 English. In 2008, this standard was reduced so that students were only required to write the provincial exam for Grade 12 English. A UVic administrator claimed that the rationale for this reduction in standards is that it allows the university to better compete with central Canadian universities (i.e. Ontario and Québec universities) for students, and prevent enrollment from falling. Universities in central Canada do not require high school students to write provincial exams, and can offer early admission based on class marks alone. A Vancouver high school principal criticized the change in requirements by charging that it would become difficult to detect grade inflation. The president of the University Presidents' Council of British Columbia also criticized the move and said the provincial exams are "the great equalizer". The British Columbia Teachers Federation supported the change because in the past some students avoided certain subjects for fear that poor marks on provincial exams would bring down their average.
In the fall of 2009, Simon Fraser University (SFU) changed its requirements so that high school students only need to pass the English 12 provincial exam. Previously, students were required to pass 4 provincial exams, including English 12, in order to apply. This change brought SFU into line with UVic and UBC. Administrators claimed that this reduction of standards was necessary so that SFU could better compete with UBC and UVic for students. The change was criticized on the ground that it leads to "a race to the bottom".
As of 2007, 40% of Ontario high school graduates leave with A averages, eight times as many as would be awarded in the traditional British system. In Alberta, as of 2007, just over 20% of high school graduates leave with an A average. This discrepancy may be explained by the fact that all Alberta high school students must write province-wide standardized diploma exams in core subjects in order to graduate.
The Alberta diploma exams are given in grade 12 and cover core subjects such as biology, chemistry, English, mathematics, physics and social studies. The exams are worth 30 percent of a grade 12 student's final mark. Quebec also requires graduating students to write diploma exams, and Saskatchewan, Manitoba, and Nova Scotia have similar tests. British Columbia has a mandatory English proficiency test in grade 12; provincial tests in other subjects are optional.
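Because the diploma exam counts for 30 percent of the final mark, the exam acts as an external check on the teacher-assigned portion. A hypothetical example of the weighted average (the function name and sample marks are illustrative, not from the source):

```python
def final_mark(school_mark: float, diploma_exam_mark: float) -> float:
    """Alberta grade 12 final mark: 70% teacher-assigned school mark,
    30% province-wide diploma exam mark."""
    return round(0.7 * school_mark + 0.3 * diploma_exam_mark, 1)

# A student with an inflated school mark of 90% but an exam mark of 60%
# ends up with a blended final mark of 81%.
print(final_mark(90, 60))  # 81.0
```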
Alberta's focus on standardized exams keeps grade inflation in check, though it can put Albertan high school students at a disadvantage relative to students in other provinces. However, Alberta has the highest standards in Canada and produces students who are among the best in international comparisons. By preventing grade inflation, Albertan high schools avoid the problem of compressing students of different abilities into the same category (i.e., inflating grades so that a student in the 98th percentile cannot be distinguished from one in the 82nd percentile).
In relation to grade inflation at the university level, the research of the aforementioned Professors Côté and Allahar concluded that: "We find significant evidence of grade inflation in Canadian universities in both historical and comparative terms, as well as evidence that it is continuing beyond those levels at some universities so as to be comparable with levels found in some American universities. It is also apparent that the inflated grades at Canadian universities are now taken for granted as normal, or as non-inflated, by many people, including professors who never knew the traditional system, have forgotten it, or are in denial".
A 2000 study of grade patterns over 20 years at seven Ontario universities (Brock, Guelph, McMaster, Ottawa, Trent, Wilfrid Laurier and Windsor) found that grade point averages rose in 11 of 12 arts and sciences courses between 1973-74 and 1993-94. In addition, it was found that a higher percentage of students received As and Bs and fewer got Cs, Ds and Fs.
A 2006 study by the Canadian Undergraduate Survey Consortium released earlier in 2007 found students at the University of Toronto Scarborough got lower marks on average than their counterparts at Carleton and Ryerson. Marking, not ability, was determined to be the reason.
In 2009, a presentation by Greg Mayer on grade inflation at the University of Waterloo reported that grade inflation was occurring there. The presentation noted at the outset that there is "no consensus on how Grade Inflation is defined ... I will define GI as an increase in grades in one or more academic departments over time". From 1988/89 to 2006/07, there was an 11.02% increase in undergraduate A grades, a rate of increase of 0.656% per year. In 100-level mathematics for 2006/07, the distribution of 11,042 assigned grades was: 31.9% A, 22.0% B, 18.0% C, 16.3% D, 11.8% F. In 400-level Fine Arts courses for 2006/07, all 50 assigned grades were As. In relation to increased scores in first-year mathematics, there was no evidence of better preparedness among UW students. A possible source of grade inflation may have been pressure from administrators to raise grades; one documented case involved a mathematics dean adjusting grades without the consent or authorization of the instructor.
When comparing the 1988-1993 school years with that of the years from 2002-2007, it was discovered that the percentage of As assigned in 400 levels in the Faculty of Arts had risen as follows for every department (first figure is percentage of As for 1988-1993 years, second is percentage of As for 2002-2007 years): Music 65%/93%, Fine Art 51%/84%, Sociology 54%/73%, History 66%/71%, Philosophy 63%/69%, Anthropology 63%/68%, Drama 39%/63%, Political Science 46%/57%, English 43%/57%, French 39%/56%, Economics 36%/51%, Business 28%/47%, Psychology 80%/81%. It is important to note that this study examined only 400-level courses and conclusions regarding grade inflation should not be generalized to courses at other levels.
Annual grade inflation has been a continuing feature of the UK public examination system for several decades. In April 2012 Glenys Stacey, the chief executive of Ofqual, the UK public examinations regulator, acknowledged its presence and announced a series of measures to restrict further grade devaluation.
Since the turn of the millennium, the percentage of pupils obtaining 5 or more good GCSEs has increased by about 30%, while independent tests performed as part of the OECD PISA and IEA TIMSS studies have reported literacy, maths and science scores in England and Wales falling by about 6% on their own measures.
In September 2009 and June 2012, The Daily Mail and The Telegraph respectively reported that teenagers' maths skills are no better than 30 years ago, despite soaring GCSE passes. The articles are based on a 2009 paper by Jeremy Hodgen, of King's College London, who compared the results of 3,000 fourteen-year-olds sitting a mathematics paper containing questions identical to one set in 1976. He found similar overall levels of attainment between the two cohorts. The articles suggest rising GCSE scores owe more to 'teaching to the test' and grade inflation than to real gains in mathematical understanding.
Between 1975, when national alphabetic grades were introduced for the O-Level, and 1988, when both the O-Level and CSE were replaced by the GCSE, approximately 36% of pupils entered for a mathematics exam sat the O-Level paper and 64% the CSE paper. Grades were allocated on a normative basis: approximately 53% of O-Level candidates (10% A, 15% B, 25-30% C) obtained a C or above, and about 10% of CSE candidates obtained the Grade 1 CSE, equivalent to an O-Level C; a proportion of pupils were entered for neither paper. The percentage of the population obtaining at least a grade C or equivalent in mathematics remained fixed in the 22-26% band.
With the replacement of the previous exams by the GCSE, and a move from a normative to a criterion-referenced grading system reliant on examiner judgement, the percentage obtaining at least a grade C in mathematics had risen to 58.4% by 2012.
An analysis of the GCSE awards to pupils achieving the average YELLIS ability test score of 45, between 1996 and 2006, identified a general increase in awards over the 10 years, ranging from 0.2 (Science) to 0.8 (Maths) of a GCSE grade.
It has also been suggested that the incorporation of GCSE awards into school league tables, and the setting of school-level targets above national average levels of attainment, may be a driver of GCSE grade inflation. At its introduction, the E grade was intended to be equivalent to the CSE grade 4, and so obtainable by a candidate of average/median ability. Sir Keith Joseph set schools a target of having 90% of their pupils obtain a minimum of a grade F (the 'average' grade achieved in 1988); the target was achieved nationally in the summer of 2005. David Blunkett went further and set schools the goal of ensuring that 50% of 16-year-olds gained 5 GCSEs or equivalent at grade C and above, requiring schools to devise a means for 50% of their pupils to achieve the grades previously obtained by only the top 30%; this was achieved by the summer of 2004 with the help of equivalent, largely vocational, qualifications. Labelling schools as failing if they cannot achieve at least 5 Cs, including English and maths, at GCSE for 40% of their pupils has also been criticised, as it essentially requires 40% of each intake to achieve the grades obtained by only the top 20% when the qualification was introduced.
A number of reports have also suggested the licensing of competing commercial entities to award GCSEs may be contributing to the increasing pass rates, with schools that aggressively switch providers appearing to obtain an advantage in exam pass rates.
"The five exam boards that certify examinations have little incentive to uphold higher standards than their competitors - although an independent regulator, Ofqual, is in place to guard against lowering standards. Nevertheless, there remain strong incentives for 'gaming' and 'teaching to the test'." -- Henrik Braconier, OECD 2012: Reforming Education in England
In response to allegations of grade inflation, a number of schools have switched to other exams, such as the International GCSE, or the International Baccalaureate middle years programme.
[Table: GCSE results by year, with columns A*, A (A* + A), B, C, D, E, F, G, U, A* to C, and entries; data not reproduced here.]
Source: Joint Council for General Qualifications via Brian Stubbs.
Sources: Hansard; DfE, Gender and education: the evidence on pupils in England; Brian Stubbs; Expanding Higher Education in the UK; Comparing Educational Performance, by C Banford and T Schuller; School Curriculum and Assessment Authority (SCAA 1996a), GCSE Results Analysis: an analysis of the 1995 GCSE results and trends over time.
Between 1963 and 1986, A-Level grades were awarded according to norm-referenced percentile quotas (A <= 10%, B = 15%, C = 10%, D = 15%, E = 20%, O/N = 20%, F/U >= 10% of candidates). The validity of this system was questioned in the early 1980s because, rather than reflecting a standard, norm referencing might have simply maintained a specific proportion of candidates at each grade. In small cohorts this could lead to grades that only indicated a candidate's relative performance against others sitting that particular paper, and so were not comparable between cohorts (e.g., if one year only 11 candidates were entered for A-Level English nationally, and the next year only 12, this would raise doubt whether the single A awarded in year one was equivalent to the single A awarded in year two). In 1984, the Secondary Examinations Council decided to replace norm referencing with criteria referencing, wherein grades would be awarded on "examiner judgement". The criteria referencing scheme came into effect in June 1987. Since its introduction, examiner judgement, along with the merger of the E and O/N grades and a change to a resittable modular format from June 2002, has increased the percentage of A grade awards from 10% to over 25%, and A-E awards from 70% to over 98%.
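The norm-referenced scheme can be sketched as a quota allocation over ranked candidates. A minimal illustration (the quota table follows the percentages above; the function and variable names are hypothetical):

```python
# Approximate percentile quotas used for A-Levels from 1963 to 1986.
QUOTAS = [("A", 0.10), ("B", 0.15), ("C", 0.10), ("D", 0.15),
          ("E", 0.20), ("O/N", 0.20), ("F/U", 0.10)]

def norm_reference(scores):
    """Assign grades purely by rank: the top 10% receive an A, the next
    15% a B, and so on, regardless of the absolute marks achieved.
    Returns (mark, grade) pairs in descending rank order."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    out, start = [], 0
    for grade, share in QUOTAS:
        end = min(n, start + round(share * n))
        out.extend((mark, grade) for mark in ranked[start:end])
        start = end
    # Any rounding remainder falls into the bottom band.
    out.extend((mark, "F/U") for mark in ranked[start:])
    return out
```

With a cohort of 100 candidates, exactly 10 receive an A no matter how well the cohort performed; with a cohort of one, every quota rounds to zero and the lone candidate falls into the bottom band, echoing the small-cohort problem described above.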
In 2007, Robert Coe of Durham University published a report analysing historic A-Level awards to candidates who had obtained the average norm-referenced ALIS TDA/ITDA test scores. He noted:
From 1988 until 2006 the achievement levels have risen by about an average of 2 grades in each subject. Exceptionally, from 1988 the rise appears to be about 3.5 grades for Mathematics.
This suggests that a candidate rejected with a U classification in mathematics in 1988 would likely be awarded a B/C grade in 2012, while in all subjects a 1980s C candidate would now be awarded an A*/A.
The OECD noted in 2012, that the same competing commercial entities are licensed to award A-Levels as GCSEs (see above).
An educationalist at Buckingham University attributes part of the inflation to the way examiners review scripts that lie on boundaries between grades: every year some are pushed up but virtually none down, resulting in a subtle year-on-year upward shift.
The Higher Education Statistics Agency gathers and publishes annual statistics relating to the higher qualifications awarded in the UK. The Students and Qualifiers data sets indicate that the percentage of "GOOD" first degree classifications have increased annually since 1995. For example, 7% of all first-degree students who graduated in the academic year 1995/96 achieved first class honours; by 2008/09 this had risen to 14%.
Between 1995 and 2011, the proportion of upper second class honours awarded for first degree courses increased from 40.42% to 48.38%, whilst lower second class honours dropped from 34.97% to 28.9%. The number of third class honours, "ordinary" (i.e. pass), and unclassified awards dropped substantially during the same period. During this time, the total number of first degrees awarded in the UK increased by 56%, from 212,000 to 331,000.
Grade inflation in UK universities appears to be caused by administrators wishing to improve their league table standings, a desire to attract non-European students who can be charged full fees, academics who fear receiving unfavourable course evaluations from students, the breakdown of the external examiner system, and a growing indifference towards academic dishonesty and plagiarism.
Note: The doubling of institutions and quadrupling of student numbers following the Further and Higher Education Act 1992 makes any direct comparison of pre- and post-1995 awards non-trivial, if not meaningless.
Source: Sunday Times Good University Guide, 1983-4 (1st Ed), 1984-5 (2nd Ed), 2006, 2008, 2012
[Table: first-degree awards by class, with columns First (1), Upper Second (2.1), and Undivided + Lower Second (2.2), each with a percentage; data not reproduced here.]
In the CBSE, a 95 per cent aggregate is 21 times as prevalent today as it was in 2004, and a 90 per cent aggregate close to nine times as prevalent. In the ISC board, a 95 per cent aggregate is almost twice as prevalent today as it was in 2012. CBSE called a meeting of all 40 school boards early in 2017 to urge them to discontinue the "artificial spiking of marks", and decided to lead by example, promising not to inflate its results. But although the 2017 results saw a small correction, the board has clearly not discarded the practice completely: almost 6.5 per cent of mathematics examinees in 2017 scored 95 or more, 10 times as many as in 2004, and almost 6 per cent of physics examinees scored 95 or more, 35 times as many as in 2004.
Grade inflation is a specific instance of the broader phenomenon of ratings or reputation inflation, which arises wherever rating decisions are made by individuals; it has also been observed in peer-to-peer services such as Uber.
Board examination results have lost their credibility in India.