A-Levels: The Model is not the Student (thaines.com)
120 points by tosh on Aug 17, 2020 | 128 comments


I don't understand why they even need to give these kids a single grade. The biggest consumers of grading are schools, colleges and universities anyway. Just make more of the underlying data available to them and let them decide for themselves. And as an employer I think it would be useful to have a more complete picture of the student than just a single grade. All this fiasco does is demonstrate how unnecessarily reductive the whole concept is.


University entrance requirements in the UK are pretty much always expressed in terms of sets of grades for A levels (rUK) or Highers (Scotland) - often with additional requirements (e.g. must include maths).

The entire entrance process pretty much uses grades as the first filter - changing that would be a huge amount of work, not to mention that a lot of people will have conditional offers that are expressed in terms of these grades.

So picking a course at random (Physics at Edinburgh) requires: SQA Highers: AAAA - AAAB or A Levels: AAA - ABB

https://www.ed.ac.uk/studying/undergraduate/degrees/index.ph...


To re-implement the University application process would indeed be a huge amount of work.

However, the governing principle of Cambridge University, for example, is that all operations are done at a human scale, and I think when you factor in the large number of staff whose labour the Universities have access to, it becomes possible.

For a group of applicants for Computer Science at a Cambridge college, for example, you would have a Director of Studies and one or two other staff ranking around 20 applicants for half as many places. (The ratio of applicants to places is much higher prior to interview.) It is a tractable problem to solve, even if for other institutions the process is scaled by 2x/3x (4:1 / 6:1, up to Nottingham’s reputed 13:1) or maybe even more.

An idea we mooted over dinner last night was for these applicant rankings to be fed to UCAS — the Universities and Colleges Admissions Service — and for UCAS to combine these rankings with the applicant preferences to fill up each course from the top down. No need for grades, just use the desirability metric provided by the institutions.
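
To make that concrete, here is a minimal sketch of how such an allocation could work — entirely hypothetical, not anything UCAS actually runs. Each course submits an ordered ranking of its applicants and a number of places, each applicant submits ordered course preferences, and an applicant-proposing deferred-acceptance pass fills courses from the top of their rankings:

    # Hypothetical sketch of the "rankings to UCAS" idea -- not UCAS's real process.
    # course_rankings: {course: [applicant, ...]} most desirable first
    # capacities:      {course: places available} (assumed to cover every course)
    # applicant_prefs: {applicant: [course, ...]} best first
    def allocate(course_rankings, capacities, applicant_prefs):
        rank = {c: {a: i for i, a in enumerate(r)} for c, r in course_rankings.items()}
        held = {c: [] for c in course_rankings}        # provisionally accepted applicants
        next_choice = {a: 0 for a in applicant_prefs}  # pointer into each preference list
        unplaced = list(applicant_prefs)

        while unplaced:
            a = unplaced.pop()
            prefs = applicant_prefs[a]
            if next_choice[a] >= len(prefs):
                continue                               # applicant has exhausted their list
            c = prefs[next_choice[a]]
            next_choice[a] += 1
            if a not in rank.get(c, {}):
                unplaced.append(a)                     # course did not rank this applicant
                continue
            held[c].append(a)
            held[c].sort(key=rank[c].get)              # most desirable first
            if len(held[c]) > capacities[c]:
                unplaced.append(held[c].pop())         # least desirable loses the place

        return {a: c for c, accepted in held.items() for a in accepted}

No grades appear anywhere in this sketch: the only inputs are the institutions' desirability rankings and the applicants' own preferences.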

Grades themselves are a strange beast. By their nature they are different to raw marks. A grade speaks to your ability as part of the entire cohort of examined pupils in that subject. Without the exam, taken nationally under examination conditions, it just doesn’t seem to make sense to refer to grades.

If we called them “exam grades” perhaps the folly of trying to assign them to pupils, who have not actually taken an exam, would be clearer.

(I speak with no real authority, but I am a high school teacher in England.)


What's more, the people involved in the application process know the reputation of schools as well, both in respect of what kind and how much bias they have in terms of predicted grades, and in terms of how much the school tends to 'teach the test' or otherwise be good at grooming their students to get good grades or perform well at the interview. So a student who does well in grades or interview from an underfunded state school is more likely to get an offer than a pupil with equivalent results from a high-performing public school.

There's still a substantial bias in offers towards public schools, but a lot of that is because such schools are selective like the universities, actual student ability does still matter and better schools still improve student ability more, and there's a strong self-selection of candidates for top universities (a lot of people who might get accepted for Oxbridge don't apply because they don't think they have a chance) as opposed to old-school elitism in the application process. (Not that all of the above factors are actually fair from a grand socioeconomic viewpoint, but they are reasonably fair in as much as the university is judging the merits of the applicants it gets.)


Doesn't Oxbridge interview a lot of people every year anyway - so they are already geared up for a process that appears to be pretty different, and more labour intensive, than other institutions?


Yes. Holding interviews at each of Oxford and Cambridge takes approximately two weeks of full-time work for tutors, with paid full-time assistance from some students, and with admin from other college and university staff.


What underlying data do you want to make available? There's no consistent data collection across schools; what there is exists for internal use at a school, so there's no requirement for it to be to any particular standard or of any particular frequency throughout the course.

It's also not one grade overall but a single grade per subject, so the university/employer will usually have 3-5 grades to work with, as well as the broader picture built up from the personal statement/cover letter and interview.


> And as an employer I think it would be useful to have a more complete picture of the student than just a single grade.

The current system isn't coping well with COVID-19 but let's not throw the baby out with the bathwater here.

The current system is simple, and that means it can be seen to be fair. If studying X at university Y needs grades AAB that's easy to understand and you know it's not biased.

I much prefer that to the American system where every university has an opaque system that purports to look at a complete picture of the student then mysteriously picks white students over Asian students with better grades and can't explain why.


> The current system is simple, and that means it can be seen to be fair. If studying X at university Y needs grades AAB that's easy to understand and you know it's not biased.

If only it were that simple. Universities often also take into account the outcome of in-person interviews, which are inherently biased.


Very few universities still do in-person interviews because of that bias. Only Oxford and Cambridge require them. UCL and Imperial offer them optionally.

Some courses like Medicine do have interviews at institutions that usually don't interview.


“Only” the two most famous (and arguably best, in a number of fields) institutions. They set the tone.


They're certainly a good indicator that the class system is alive and well in the UK, but since the rest of the universities rely on the Highers/A-Level results without any interviews I'm not sure what you are implying by "set the tone"


And clearing - i.e. what happens to match people to the vacant spaces left by those who didn't make the offer target - is almost 100% interview based, usually by phone and on results day.


But as a consumer of grades it is not about fairness. It is a heuristic used to predict future performance. And it is possible for grading to be a bad heuristic whilst also being fair to individual students.


The underlying data is just more grades though, isn't it? How you did on this or that exam, coursework, etc? Minus substantial facts like whether you were ill, your gran died, whether you had tutoring, and so on?

You're right that grades are reductive, but the alternative isn't more of the same information.


That passes the buck on to the university admissions teams. Then it's the same story with Oxbridge as the antagonists.

Fundamentally it's a hard problem: the admissions require more information than is available. But...

> All this fiasco does is demonstrate how unnecessarily reductive the whole concept is.

...there never really was enough information. This is an important factor IMO. The system has always been unfair; now it's less fair and in a colder, more obvious, more systematic way that resonates with the current political climate around technology. Politically it's hard for this government to get away with standing by a policy that adversely affects poor students, in the same way it would have been hard for a left-wing government to institute the current fiscal policy.

>kids

Nit: almost all of them are now legally adults. At university they can look forward to this twilight age where they are treated as an adult when it's about financial obligations and treated as a child when it's their institutions forcing policy upon them.


The Tory line has always been "Work hard and get ahead." It's an outright lie, because the reality has always been "Go to the right school and have the right parents and get ahead."

But much middle class [1] and some working class Tory support still believes in the social status dream, and this fiasco has directly undermined that narrative.

The party is getting huge heat from its base over this, not just because of their future prospects, but because some of them are now wondering if "Work hard and get ahead" has ever been true for them.

[1] Note for the US - in the UK "middle class" specifically means "Highly educated professionals with significant cultural and political capital" not just "People with a degree making decent money."


> The Tory line has always been "Work hard and get ahead." It's an outright lie, because the reality has always been "Go to the right school and have the right parents and get ahead."

But it's not as simple as that, either. Coincidentally, I just read an obituary of John Hume, who was born into poverty in Derry and rose -- through education and hard work -- to become a leading politician and a Nobel prize winner.

https://www.irishtimes.com/life-and-style/people/john-hume-o...


A single or few data point(s) of less privileged people managing to work hard and become hugely successful is not a valid argument against:

> It's an outright lie, because the reality has always been "Go to the right school and have the right parents and get ahead."

There have been and will always be exceptions, cherry picking a few of these is a form of straw man.


It's not a single grade. It's a set of usually 3 or 4 grades in subjects the students selected at (US) grade 10. The choice of subjects contains a lot of information about the students' objectives and the grades show the performance level. The national normalized exam structure is intended to compensate somewhat for large regional and social inequalities. It does not achieve this perfectly, of course.


That's more or less what one of the Oxford colleges did. They realised all grades were guesses and that the information they had when offers were made had not changed quantifiably. So they let all existing offers stand regardless of what grades students were assigned.

I think it's a shame this was the exception rather than the norm.


The Universities know this and are working with students that missed the required grades to understand what their projected grades, mock results etc were and make a case by case evaluation. It's nowhere near as dire a situation as is being made out.

I watched the news broadcasts when the results were announced. There were TV journalists at schools in deprived areas interviewing students before the grades were released, with the journalists giving dour warnings about the impending problems they expected. When the results were announced the students were leaping about with joy, hugging and celebrating; it was like a world cup win. The journalists didn't seem to know what to say. Back in the studio the best the presenter could do was mutter that "I'm sure we'll find some students that were disappointed later".

I'm sure they did.


Yes, because the kids whose futures just got screwed aren't the ones liable to be jumping around in front of everyone.


I've been hearing loads of reports of kids getting marks way off what was predicted, and clearly the algorithm is... wrong. This essay is a good explanation of why. One thing that annoys me a bit is that we as a society have discovered all sorts of interesting things about statistics and decision theory, causality, etc, but then we don't apply them. Why didn't Ofqual find some people who understood how stats work, and ask them for advice?

It seems clear that if you just take a distribution from previous years and lay it on top of a ranking, you are creating a lot of problems. And you are hiding your incentives in the penalty function, which is something he mentioned. People have correctly figured out that a good kid at a bad school will not be found, and that the marginally second-best kid in a year is going to get marked down.
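
To see why, here is a toy illustration of what "lay a historical distribution on top of a teacher ranking" amounts to. This is my own sketch, not Ofqual's published method; the names and shares are made up:

    # Toy illustration (not Ofqual's actual procedure): the school's historical
    # share of each grade fixes how many of each grade exist this year, and the
    # teacher's rank order decides who gets which. A pupil's own attainment
    # never enters the calculation.
    def impose_distribution(ranked_pupils, historical_shares):
        """ranked_pupils: pupil names, best first (the teacher's rank order)
           historical_shares: {grade: fraction of cohort}, best grade first"""
        n = len(ranked_pupils)
        grades = []
        for grade, share in historical_shares.items():
            grades.extend([grade] * round(share * n))   # crude rounding, enough for a sketch
        lowest = list(historical_shares)[-1]
        grades = (grades + [lowest] * n)[:n]            # pad/trim so everyone gets a grade
        return dict(zip(ranked_pupils, grades))

    cohort = ["Asha", "Ben", "Cam", "Dee", "Eli", "Fran", "Gus", "Hana", "Iris", "Jo"]
    shares = {"A": 0.2, "B": 0.3, "C": 0.3, "D": 0.1, "U": 0.1}   # last year's profile
    print(impose_distribution(cohort, shares))
    # {'Asha': 'A', 'Ben': 'A', 'Cam': 'B', ..., 'Iris': 'D', 'Jo': 'U'}

Jo gets the U purely because the school's history says someone must, regardless of Jo's own work - the same failure mode a reply further down describes for U grades.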

Whatever you think of exams, and I'm no fan, they do give people that mix of noise and signal that society needs to put to rest the issue of who deserves what. They're not entirely a lottery, not entirely skill or hard work, but if we need a way to distribute X number of future doctors, that's the long established way to do it.


>Why didn't Ofqual find some people who understood how stats work, and ask them for advice?

A very substantial number of the people who work at Ofqual are statisticians, of course; it just seems that they have made some serious mistakes.

I don't understand why they couldn't have published at least the algorithm for peer review and discussion months ago. These issues could have been resolved with plenty of time left before results had to be out.

Another problem with this algorithm is that every year, a certain number of exam takers just fuck it up on an epic scale. They didn't get any sleep, they get a question that completely flummoxes them, whatever. So the distribution of grades, even at a good school with a pretty good intake, will have some very low grades on it. A big school will have a U or so. Now come to this year - no one had the opportunity to mess up their actual exam, so some poor probably-C, B-on-a-good-day kid has to be stuffed into the "U" to make the distribution fit. Purely on the basis that the big school they went to usually has at least one. The difference is, if you fuck the exam up, you come out knowing that! So when you get your U, you know how it happened. This year someone has had it assigned to them without actually screwing up the exam, which seems rather harsh.


> Why didn't Ofqual find some people who understood how stats work, and ask them for advice?

They wouldn't sign a five-year NDA:

https://news.sky.com/story/a-levels-exam-regulator-ignored-e...


That is completely mental, a five year NDA for something that would be completed by now?

The nicest interpretation of this is that Ofqual had no idea what the hell they were doing.


I suppose what you don't want is to publish everyone's exam results, and then have one of your helpful academics say "oh, but of course these results are rubbish", and then have a massive public scandal about how the results are rubbish.

However, what they opted for was the same thing, but without the second step.


> Why didn't Ofqual find some people who understood how stats work, and ask them for advice?

Because, fundamentally, the current UK government is not interested in scientific evidence or taking advice from experts in an area (c.f. Michael Gove's famous line "people of this country have had enough of experts")


At least include the full quote, which is far more reasonable in-context:

"I think the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong."


And the full context: he was referring to accusations that the government were ignoring the advice of economists and trade experts on the subject of Brexit. The accusations were that this was because the advice didn't fit in with the UK government's stance on the matter.

He was using the example - nearly ten years on - of how "experts" didn't predict the 2008 crash. At best, a straw man argument (which experts; were any expert advisors aware and did they endorse the extreme lending that affected the UK, etc.)


Wouldn’t that apply to the Tories, though? They _consistently_ get it wrong after saying they know best.


But they don't profess to be experts.


Hence Dominic Cummings?


Cummings explicitly claims not to be an expert, and decries the fact that he can't seem to find anyone competent at the higher levels of the civil service anywhere.


Not really - the second half is just rhetoric to add flavour. He wasn't saying "People have had enough of the subset of experts who are always wrong", not least because obviously an "expert" who is always wrong is unwanted.


Probably best to contextualise that particular quote. Your use of it here is misleading.

The government is on record as wanting precisely more technically qualified people to enter the civil service, and restructuring the service toward this end. Whether or not the way they intend to go about it is sensible or reasonable is another topic for debate.


> I've been hearing loads of reports of kids getting marks way off what was predicted

Why do we need the second set of testing at all, if the predictions exist and if the marking is going to be judged against them to the extent that we trust the prediction over the marking?


>I've been hearing loads of reports of kids getting marks way off what was predicted, and clearly the algorithm is... wrong.

It's worth noting that the predictions that were initially planned to be used as part of the algorithm would have resulted in 38% higher grades than any other year, ever. It's clear that teacher predictions are ALSO "wrong".


It's useful to read some examples to see how devastating this is to children, and how unfair it is that high achieving young people who have done well all throughout their education have suddenly been downgraded.

https://twitter.com/lewis_goodall/status/1295069110829752323...

And this describes the advantages that private schools got: https://twitter.com/hugorifkind/status/1295292925463724032?s...


I’m no fan of ham fisted normalization.... but I think the bigger issue is the fairness aspect of thresholding based on class sizes.

I think it should be possible to create ‘virtual classes’ above the threshold by combining smaller like classes together.

Such a model would be fairly ham-fisted.


This is not merely "ham-fisted normalisation". In many cases, the student's attainment is being completely ignored!

> but I think the bigger issue is

The tricky thing about this scandal is that it is actually about 8 different catastrophes for students all enmeshed together. This makes it hard to know where to start criticising the government.


"Gavin Williamson and Ofqual have apologised to students and their parents, as they announced that all A-level and GCSE results in England will [now] be based on teacher-assessed grades." (https://www.theguardian.com/education/2020/aug/17/a-levels-g...)


People should note that even in non-covid times, UK domestic university admissions is almost entirely based on predicted grades. Predicted grades are much more important than actual grades. The only difference this year is an extra layer of noise in the form of an algorithm. It's a very strange, unfair, socially regressive system (I may be a little bitter).


Unless things have changed in the last 24 years * , offers are based on predicted grades, and requirements are set on the actual grades achieved in order for that offer to transform into a place. Those students who did not achieve the requirements and those universities who still have places to fill then enter the "clearing" process to match up.

Were you predicted worse grades than you achieved, and as a result not offered places where you would have liked?

( * they may have, significantly, it's been a long time)


There are a lot of unconditional offers these days, which means that the university will not look at actual A-level results at all.

Edit:

"38% of applicants (97,045 applicants) received at least one offer with an unconditional component in 2019, increasing from 34% (87,540 applicants) in 2018, and continuing the year-on-year growth in offers with an unconditional component since 2013." [1]

This year at least one Oxford college has already decided to honour all offers without looking at actual A-level results, thereby effectively turning all offers into unconditional offers [2]. Granted, this is not a normal year.

Let's be honest, though, if you got an offer from Oxbridge you're very good so they're not exactly taking a risk and it gets them brownie points.

[1] (PDF) https://www.ucas.com/file/250931/download?token=R8Nn7uoI#:~:....

[2] https://www.theguardian.com/education/2020/aug/15/a-levels-r...


A common trick the universities use is to tell candidates that they'll upgrade them from a conditional (e.g. AAB offer) to an unconditional offer if they make that university their firm choice within X days.

I believe it's a way for them to better predict how many students they'll have next year, otherwise it can fluctuate pretty wildly based on exam day results.


The only unconditional offers I've ever heard of have been for less prestigious courses.


Interesting. They were pretty rare back then.


In Scotland you do Highers in 5th year and apply to universities during 6th year after you get your grades.

Motivation for working hard during 6th year can be difficult if you get an unconditional offer based on results you already have....


Yeah, I can imagine that does make things weird! So there's nothing that happens in the final year that can affect the outcome much?

I effectively got an unconditional offer in England way back when, was predicted something like 3 As and a B, and was given an offer to my preferred place at two Cs. Still brought in two As and two Bs though.


If you get an unconditional that you are happy with then that's it - you have 6 months where you can either work hard for exams that don't count or, like me (and indeed my son who recently went through this process), you can use that time to enjoy yourself and get a job to earn some money before going to Uni. Schools tend to be a bit unhappy about this approach though.

If you have a conditional offer then you face the prospect of being like everyone in the rUK.

It's pretty nice if you get unconditional offers!


I should pre-disclaim this by saying I actually really enjoyed my time at Exeter and it's an excellent school but at the time I felt I'd been treated very unfairly. I think my case is a good example of how the system is badly designed, and then operates in a way that's worse than the theoretical design.

>Were you predicted worse grades than you achieved, and as a result not offered places where you would have liked?

Yes. Story time but skip to the next quote if it's boring :)

I got 4 A's at AS level back in 2001 (I did my A-levels in 2002, the first year of the AS/A2 split). My subject teachers all agreed I'd get the same at A level. My school decided that "to avoid over-estimating" they would reduce all the predictions by 3 levels. So I was predicted BBB. What the fuck? My friend was friends with the head of VI form (he did theatre with her after school, they got drunk together at after-show parties, he was very pretty and charming and she was 50+, super weird and dodgy but whatever). Somehow he was predicted AAB. The only one in the school year of ~90 students over BBB. Some people even cried over this, as they wanted to do medicine and that's difficult without AAA predicted...

My friend was called for interview at Imperial, where we both wanted to go and study Physics. He went and was offered AAA. I was not called for interview.

Results came in and I got my AAA. He got BBB. Imperial didn't care and admitted him anyway.

I think I am sort of a perfect example of what's wrong with the current system: I was poor and badly turned out and a bit brutish but ferociously bright. He was sort of the opposite. I got good grades, he was better at getting predictions. He was the one that got the Imperial place.

>Those students who did not achieve the requirements and those universities who still have places to fill then enter the "clearing" process to match up.

I think this is the issue because this rarely happens even though it's meant to. There are 2 sides to this:

* If you miss the required grades, you generally keep your place. Universities routinely make an offer at X and then take you whether or not you get the grades. I understand why: when you reject a candidate with BBB, you're not guaranteed one with AAA; you're going to end up with whatever clearing gives you. And not many AAA candidates are in clearing, are they? This becomes a self-fulfilling prophecy of sorts.

* If you blow through your predicted grades (As instead of Bs) you have 3 options: 1. Stick with the school you were offered at (I did this). 2. Take a whole year out and re-apply next year with your grades. That's a pretty huge step for an 18-year-old to take. 3. Enter clearing, but you know that Imperial, Oxbridge etc don't really do clearing, do they?

Back in 2002 there was actually discussion of forcing all unis to keep X% of places for clearing for exactly this reason. It's also worth noting that these days something like 20% of offers are unconditional.

I think I did more than ok in the end. I had a better time than my friend (he had to live at home in first year and never really got the "experience" etc). But people should be aware of how the system works: your actual grades make very little difference, predicted grades are KEY to getting into (not just an offer, actually into) a top uni, and getting good predicted grades is only about 50% ability and hard work compared to looking nice and sucking up to the right people.

Maybe this is a great lesson for how life isn't fair :)


That's a sad tale, it is a shame that your predictions were poor when your record was so good. I think your AS grades should have been given much more weight. We didn't do AS levels in my day...

Imperial figures in my story too. I was predicted very highly, either 4 As or 3A+B, I already had an A at Mathematics under my belt, taken a year early. I approached Cambridge and Imperial. The Imperial open day was first and they offered at 2Cs, because they knew I was interested in Cambridge and figured that if I got a high offer from there I'd keep them as a backup.

The Cambridge interview day went badly. I came away feeling like I'd just been a complete dumbass, missed some really obvious stuff because I was stressed, and no offer was made. So I had Imperial on offer at 2Cs and no backup. And indeed I went there.

And that's when I discovered London nightlife, drugs, sex and dropping out. Failed the first year of my engineering course miserably and ended up studying CompSci at Southampton a year later. Which has turned out great, so no complaints :)

(I'm also lucky enough that there was nothing to pay for either course. I was in the last year through the system when there were no fees, what's happened on that front since is pretty awful, IMHO)


Yeah, to be honest I think I got the better end of the deal actually. Exeter gave me a lot of opportunities academically and the space to get my social abilities together. My friend went much the same way as you did. There is something strangely inhumane about uni admissions: we overbooked accommodation at Exeter by 10% (literally had people sleeping in the common rooms) because the administration said at least 10% would leave before the 6-month mark (homesick mostly). They were not wrong.

It's too much to hope for but... I actually think part of the answer is forcing everyone to wait a year to go to uni. Then they can get themselves together a bit more. They can get being 18 out of their systems. They can apply to what they want, not what mum and dad say. Forcing people to do national service would give people the experience of living away from home before they have to go and fail a degree over it or get a year in and realise they want to be somewhere else doing something else.


Not sure I'm down with the national service part, but in general I agree - a year to think about yourself and your life, maybe do something useful/productive, and a university application process based on grades you already have, would do a lot of people a lot of good.


Yeah, me either to be honest...


I'm the opposite example. I was going to study Computer Science at Warwick. Amazing course, I really wanted to do that. I was accepted for a place if I got ABB (iirc, this was 1988). I was predicted AAA. Life got in the way and I got BDE. I accepted a place at Oxford Polytechnic (now Oxford Brookes University, I believe). I completely messed it up and flunked out after 2 terms. Went off the rails completely, became a punk, worked in shitty manual jobs. Finally got a job as a developer at 25, and back on the rails from there.

I'm so glad it turned out that way. I had no idea who I was, and needed that time to discover myself. I shouldn't even have been applying for Uni, but that's the thing you do, isn't it?

I wonder how many people from this debacle will look back and say they're grateful it happened because it gave them a chance to take a different path.


Yeah, you're not wrong.


I do find the system of using predicted exam grades odd.

But using grades pupils have achieved during the previous year(s) seems fine to me and can potentially erase the issue of having a good/bad exam day.

In some countries, either they use exam results and/or they look at report cards for the previous years.


The only good reason I can think of is that the UK has a very tight focus on subjects: at GCSE you study 10+ things, at A(S) level 3-4, at university 1.5. You can't really use GCSE results for UK degree entry because 80% of those subjects are irrelevant to the degree: unlike the EU/North America, a (say) physics degree has zero credits for English lit or history or even chemistry. A-level results are too late. So that puts everything on AS results (and I think the Tories got rid of the actual AS exams a few years back?).

I am a big "final exams not coursework please" guy. But they'd be better off having end-of-term exams for the 6 terms of A levels and using the results from the first 5 for most of admissions IMHO.


Yes, the current government reduced or got rid of much coursework and stopped AS levels contributing to an A level.

In the previous system, there was more coursework in some subjects (almost always still <50%), thus recognising the abilities of those who don't do so well in exams, while still recognising the abilities of those who are better at exams than coursework. Exams were split over two years, with most of the material studied in year 1 examined in one go in year 1, and material studied in year 2 (building on year 1 material) examined in one go in year 2.

Now there is an almost total reliance on exams, all in one go at the end of the second year.

Hence (partly) the current storm.


Notice how Ofqual can write a 319-page "it's not our fault" report, but can't seem to publish their code on GitHub.

That explains more than anything what the problem is with the British Civil Service. The epitome of "nobody ever got fired for buying IBM".


The civil service? Pretty sure this is Dominic Cummings overriding the civil service. But there's plenty enough wrong in this story to avoid invoking his name.


There is plenty of data available on each student. The issue is how to normalise it in order to create a national ranking.

Mock exams may be useful because, arguably, the results should already be pretty normalised, but I'm not sure if these took place this year.

So what is left to work with is the grades the students have had throughout the year and previous years and, importantly, also what school they are in.

The latter is heavily charged politically so they are wary of communicating on how that impacts their model.

It seems to me, though, that pupils from top private schools are not complaining too much about having their grades downgraded, so it seems that it did not happen too much for them.


On the whole, students from private schools had significant grade inflation over previous years, often because the predictions were used as-is. This left fewer high grades "available" for state schools (in order to maintain the whole-country numbers).


So apparently, the government are due to announce an official u-turn - https://www.thetimes.co.uk/article/results-to-follow-teacher... - and grades will be based on teachers' predictions, which, according to the article in the OP's link, should have a correlation of about 0.7 with official results from the past.

It's still going to prove controversial, but sounds much better than the apparently woolly 40 - 75% accuracy for the OFQUAL algorithm mentioned in the article.


Everyone keeps referring to "The Algorithm" like a mythical beast.

"The Algorithm" must exist. Can we see it?


Yes. It's section 8.2 onwards of this document:

https://assets.publishing.service.gov.uk/government/uploads/...


Amazing! I (wrongly) assumed it wasn't public.


Update: they've decided the A-level grades will be the predicted grades.


Source?



The big thing that stands out is that there have been some glaring problems, like native speakers getting C's in their own language at (say) German or French A-level, i.e. because they slacked in the mock tests.

It suggests that their dataset was either way too narrow or their regressions far too simple.


Can someone please explain this to me?

> Obviously it has gone catastrophically wrong

I have no idea what this is referring to. The author is obviously (!) assuming some knowledge on the part of the reader that I do not possess. Can someone please fill me in?


Not sure what the equivalent is outside the UK, but A Levels are the exams at the end of school and are used to determine if you have the necessary grades for university. If you don't get the required grades, you don't go to university.

The problem is that due to lockdown, no one sat the exams. So they kind of guessed what the grades should be based on predicted grades from teachers or the grades assigned by mock exams. However, those awarding the grades felt that predicted grades were too optimistic and there is no standardisation for mock exams between subjects at the same school, let alone multiple schools across the country.

What the exam boards seem to have done (and I don't think they've admitted to this) is moderate the grades based on past results from the school. However, schools in poorer areas traditionally do worse, so students in those areas lost anywhere from 1 to 3 grades, e.g. an A becomes a D. This includes students that never had a D at any point in the past. So then the government said students could appeal. Then they said exam grades would be adjusted back to no lower than their predicted grades. Or they could resit them once lockdown ended.

Meanwhile university places are filling up based on the first set of grades. Some universities are doing better, notably one of the Oxford colleges, which decided to honour all offers made on the basis that they have no new information to change their offer on. But those that didn't apply to these universities have to go through clearing while still not knowing what their grades are. Universities need to confirm places for accommodation, which is also filling up fast.

To call it a catastrophe isn't far off the mark.


Thanks.


I assume this recent thread is related: https://news.ycombinator.com/item?id=24179635


> The objections appear obvious, with the dropping of teacher predictions implicitly being...

Twice in the opening text it talks about teacher predictions being dropped, but they aren't: the teachers ranked the students and this is used as part of the input data. Just not the entire input data.

If it was the entire input data then we know that, miraculously, this would have been the smartest set of kids to ever come through the system. So clearly just going by teacher predictions is also wrong.

I'm sure there are other valid objections to what has happened, but this isn't really one.


A ranking is different to a prediction. The text notes that. The prediction is dropped.

Arguably, a prediction depends on a ranking but contains both less and more data. Less data in that only part of the inter-student ranking is maintained, more in that the ranks are tied to grades.

It would seem that you agree that teachers cannot be expected to produce fair absolute grades, but will produce fairer rankings - which is supported by the article's references.

In terms of the objection being obvious - it is obvious in a political sense that people would quarrel with the prediction being dropped, as it both favours them and is more personal. Mathematically it's presented as obvious that this is preferable [1].

[1] personally I object in principle to us estimating grades full stop. I'd have preferred spending the last five months rewriting exams based on covered material and then a mammoth effort at getting kids to sit them safely. However that requires foresight and funding.


> personally I object in principle to us estimating grades full stop

It's clearly a massive kludge - those rankings are only estimates, lots of kids would have performed better or worse than estimated, and effectively they are being granted grades based on a whole load of factors other than their own actual performance. I feel very sorry for them.


> The very idea of fairly guessing a student's grade is problematic, and only acceptable if you forget we are discussing real people, with individuality, dreams and purpose. They would have certainly been happier sitting their exams, whatever it took to do so safely.

They almost certainly...wouldn't? The exams are stressful and hard work: both high- and low-achieving students were celebrating when the standardised exams got cancelled here in Ireland and I'm sure it was little different in the UK.

At best, those who think they'd get better grades in the exam would prefer it. The ultimate use of these grades is zero-sum (there are only so many university places) so every decision that hurts one student is helping someone else.

It doesn't really matter how good the model is, it's an impossible task politically. There will always be some people aggrieved by the new process and they will always have plausible ways to spin it as racist or classist or whatever else is necessary to fuel the outrage machine. As if the previous setup was some kind of exemplar of egalitarianism.


> It doesn't really matter how good the model is, it's an impossible task politically. There will always be some people aggrieved by the new process and they will always have plausible ways to spin it as racist or classist or whatever else is necessary to fuel the outrage machine. As if the previous setup was some kind of exemplar of egalitarianism.

So just because every system has flaws, we should throw in the towel and give up? Taking that line of thinking all the way means we can just replace university allocations with a lottery.

That's not how society works.

Even if every system has flaws, even if the current one suffers from racism and classism, we have to strive towards making it better. The system the UK invented here does not even try. It very clearly favors people with nice postcodes.

Maybe we should replace it with a lottery; it would be equal. But I'm pretty sure the people with nice postcodes wouldn't like that much, and they're the people propping up the UK gov. Etonians, Oxford, silver spoon etc.

I for one believe there must be _some way_ to fix the situation that is better than what they came up with.


It's clear there's some abuse of power going on here. It doesn't sound like an entirely objective process. That the algorithm has been selectively applied is definitely worth an independent review and some time in court.

That argument can be made without drawing emotional prejudices into it, but it wouldn't hurt to raise the question of prejudice while conducting this debate, because there has been evidence of a class based skew in the UK education system for years now.

And this appears to be the most overt attempt to rig the system yet seen.


> That argument can be made without drawing emotional prejudices into it, but it wouldn't hurt to raise the question of prejudice while conducting this debate,

There's a public sector equality duty under the Equality Act 2010 which is one reason people are looking at the disproportionate impact on some groups.


> both high- and low-achieving students were celebrating when the standardised exams got cancelled here in Ireland

They celebrated the message "Exams are cancelled"

If the message had instead been "Exams are cancelled, at this school A grade students will receive a C instead" I suspect you'd have seen a very different response.


Well, yeah, but they'd really be thrilled at the school where they announced C grade students would get an A.

There is some benefit to society to matching up the right students to the right courses, so it's not completely zero-sum. But the cost of getting it a bit wrong isn't terrible: some marginal students get to follow their dreams and study medicine, some others miss out on medicine and have to try their hand at accountancy.


I would expect moderation to lower grades much more often than it would raise grades.

After all, it's understandable why a teacher would over-estimate grades (assuming everyone else is going to do the same, and their kids will be relatively disadvantaged if they don't) and hence get moderated down.

And it's understandable why a teacher would give spot-on estimated grades (honesty, valuing the credibility of their estimates) and hence not get moderated at all.

But it's difficult to see why a teacher would under-estimate grades.

So I suspect there are not many C grade students getting upgraded to an A.


This discounts the sense of disenfranchisement that may result, which might be consequential, or that we may end up with inferior doctors.


There is good evidence that comprehensive school pupils were disadvantaged and independent school pupils were advantaged, when you dig beneath the overall averages.


A number of reports suggest that the algorithm has been (and will be, in the case of the coming GCSE results) selectively applied too. So your academy's history will only be taken into account once it's determined your school is of a certain size.

Which most state run schools are, of course.

This is a nightmare, really. Like, no one can directly prove the malice here, but it certainly feels like there was an aspect of malevolence in the enterprise.


Who would stand to gain from that and what would they stand to gain? UK education policy over the last couple of decades has been squarely set on getting as many state school students to university as possible so it would be strange for them to suddenly switch track on the sly.

I'm no fan of the UK government but I'm pretty sure this is a textbook example of Hanlon's razor


It would keep exclusive institutions exclusive.

Much of the UK educational system is like that. For example, I don't know how it works elsewhere, but in England, certain students will only be offered certain exam questions, in the same subject, if the teachers determine they can take them.

So immediately, you can see some aspect of these grades are based on a subjective standard. In this case, at the discretion of teachers. One could easily argue that this Covid exam fiasco is a supercharged manifestation of those very same principles.

In the above case, obviously, all students sitting the same exam, for the same subject, should see the same exam papers.

I mean, why would they not?

I suspect the obvious answer would be the cost of teaching resource. If that's the case, this then becomes a political argument, not a technical one. But the 'exam algo' debate is definitely a technical one, and it can and should be used as a lens to examine (pun) a number of questionable practices in the UK education system.


When I took exams in England (admittedly a decade and a half ago) there were indeed different questions based on what you had learned, but you were instructed to only answer one of them (if you answered more it didn't count), and they were all worth the exact same number of points. It's possible some schools taught two texts to let their students choose whatever was easier on the day, but I doubt this was widespread and I don't see this system as being unfair.


> getting as many state school students to university

They want the state students to get to universities, yes, but they want the private students to get to the good universities (Oxbridge).


It does not need to be malice.

Good independent schools (and grammar schools) have very consistent results. Often 70%+ of A/A* year after year.

If previous grades and predictions say that most pupils should get A/A* that's not surprising. That's the way it is every year, and that probably does not indicate an inflation because of this year's strange circumstances.

But for a comprehensive school where results can vary much more from pupil to pupil, and for which past data are a mixed bag, it is trickier.


Yeah, they decided to have a magic number of 6 when they started standardising. Given that lots of private (nee public) schools have smaller class sizes, this made the bias worse.


I believe it is 10-15 actually. If the class is more than 15 people then it uses historical grades. If it is less than 10 it just uses the prediction (which is obviously generous). In-between they are weighted.
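
As a rough sketch of the blending rule as described in this comment (the thresholds of 10 and 15 follow the comment; Ofqual's published report defines its own cut-offs and weighting, so treat the numbers as illustrative only):

    # Rough sketch: tiny cohorts keep the teacher's predicted grade, large
    # cohorts get the purely statistical (historically derived) grade, and
    # cohorts in between get a linear blend. Thresholds are illustrative.
    LOW, HIGH = 10, 15

    def blended_score(n_pupils, teacher_score, statistical_score):
        """Scores on any common numeric scale, e.g. A* = 6 ... U = 0."""
        if n_pupils < LOW:
            return teacher_score                    # small class: prediction stands
        if n_pupils > HIGH:
            return statistical_score                # large class: history dominates
        w = (n_pupils - LOW) / (HIGH - LOW)         # weight on the statistical grade
        return w * statistical_score + (1 - w) * teacher_score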


If some students would prefer to take exams and some not, it seems obvious to let only those students who want to take the exams do so. Propose the model's grades to the student. If they accept, you're done; if they refuse, then let them do an exam. Likely 80% or more of students would accept the proposed grades, and it would make it doable to organize socially distanced exams for the rest.


Students will only opt to take the exam if they think that doing so will improve their grade. The end result would be grade inflation.


An update: they're now going with a scheme whereby students get to keep the better of the two grades given by two different methods, with teachers' predictions on the one hand, and the 'standardisation' system on the other.

Presumably this will lead to grade inflation.

https://www.bbc.co.uk/news/uk-53810655

See also Wales seeming being the first to switch to this approach: https://www.bbc.co.uk/news/uk-wales-53807854


Yes, it will lead to grade inflation. The important question is: does that matter? Exams already have their flaws such as being far more likely to give a flawed bad grade to someone than a flawed good grade. How much grade inflation would you see purely from eliminating that effect? And that would largely be desirable.


There are exams taking place in the Autumn, so this is kind of what is happening.


They will have missed this cycle of admissions, and not all students have the luxury of taking a gap year. The students have also missed a term of tuition, and so have probably not covered the course material necessary to pass or do well in autumn.

Children/adults in care (people without families that can support them) might not be eligible for support for another year, and will often be expected to go out and work to support themselves. Poor families often rely on the extra financial support that ends with "finishing" education.

There are a lot of stories out there of children/young adults in exactly this situation who should be off to skilled work training or university but might end up having their entire education wrecked over this, despite being academically able. And this is all in the context that private school students often have not been subject to any "standardisation" process due to their small class sizes, or have had their awards upgraded above predicted grades.

We just need to give this year a free pass in much the same way as workers have been given a free pass with the furlough scheme. We can live with one year of inflated results.


It's a particularly bad time for students not being able to take a gap year as well - travelling is pretty much off the cards for those who would otherwise have done that, and those who would have spent the year working are being thrown into some of the highest unemployment rates in decades, up against people who have extensive experience doing the sort of jobs they'd be applying for.


That’s too late for university admissions this cycle, and next year’s Oxbridge admissions cycle will be a nightmare given the large number of reapplicants. It’d have been necessary to use the time saved in not having to mark exams to run the algorithm early.


Exams are already "socially distanced". That was never an issue! Has everybody forgotten what exams are like?


It wasn't just about the exams; students missed out on months of teaching, which would have put them at a huge disadvantage if they were then expected to sit the originally-planned exams.

So we'd need exams tailored to just the part of the curriculum that the students have actually covered -- which would be different for each school, making any kind of standardisation impossible.

Another thing people don't seem to be considering is the knock-on effect on universities, which -- if students who didn't complete the A-level course are given a hypothetical grade, whether based on teachers' predictions or modified exams or "mocks" or whatever -- will get an intake of students who are less well prepared than normal, and will need some kind of remedial support if they're not going to flounder in their degree courses.

Just giving students generous grades now, to calm the current storm, means storing up trouble for the future when university teaching staff are faced with a cohort that includes students who shouldn't be there and can't cope -- but their teachers gave over-generous estimates -- or who in principle should be OK but arrived woefully under-prepared because they missed a bunch of key A-level content.


I agree with most of the points you're making, but I do not think that the exams would need to be tailored to just the part of the curriculum that the students have actually covered.

If one student does not know key parts of the curriculum and the other does, then this should be reflected in the grade no matter what caused that difference, whether one of them got more schooling or studied on their own or took some out-of-school tutoring.

And if a diligent student who got all A's before missed out on key parts of the curriculum that's required for further study, then the final grade should reflect that they do not (yet) know the subject as expected and are going to need remedial support, i.e. their final grade should not be an A, the exam should ask for the expected content (no matter if it was not covered in school due to Covid) and the exam results and grade should illustrate whether the students have learned it otherwise or if there's a gap.


I'm inclined to agree, in principle, but this would mean that many students would end up with a lower grade than they could expect to have achieved normally. This would particularly apply to less-advantaged students whose schools and/or families have been less able to fill in the gap.

While that'd result in grades that would be far more meaningful than any of what's currently being done -- and universities, employers, etc would just have to take account of the special circumstances surrounding 2020 results when making admission/employment decisions -- I don't think it's politically tenable.


Well, Covid definitely meant some drastic action had to be taken, but when the UK government originally discussed suspending high school 'finals' for all students, they strongly suggested the derived grades would be based on mock exams, and teachers predictions / appraisals. Which would have at least localised the appeal system.

Then any regulatory board could have worked 'backwards', so to speak... Only look at schools where there are blatant discrepancies across aggregated results.

There was never any mention of the application of an opaque post-process. You would think that would be the kind of detail you would have to discuss in a national assembly before the state commissions its implementation, but clearly, it was never raised.

So while I understand your point (accusations of 'classism' are premature), I think the fact that we're only learning the details of this system now, is an outrageous abuse of trust and technology.


"okay, lets do it, refuse all evaluation of our work and just push something through" is not the appropriate response to "this task might be impossible". It being "zero-sum" in some sense is also an extremely low bar we don't generally accept as sufficient.


They would have been better off politically just giving them all their predicted grades and letting the universities sort out the mess. I think that's a bit cowardly but it would have taken the heat off.

Concerns over long-term grade inflation are totally overblown anyway since the majority of these exam results get used that year or the next for university admission and only for that. What does it matter whether my grades from 20 years ago are comparable with grades now? It's not like we'd be competing for the same university places.

The only problem with accepting the CAGs is that some centres will have been laxer with them but that could easily have been sorted by using similar techniques to the ones they used here to look for way out-there schools that clearly aren't taking the process seriously.


> There will always be some people aggrieved by the new process and they will always have plausible ways to spin it as racist or classist or whatever else is necessary

But if you have a transparent process and documented algorithms, it's much, much harder for them to do that. Of course, UKGOV has neither of those things in this instance...


Guessing one of two outcomes correctly 50% of the time is pretty poor.

Guessing one of five outcomes correctly 50% of the time is pretty good.
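
In other words, the chance baseline scales with the number of possible outcomes. A purely illustrative sanity check:

    # Purely illustrative: random guessing among k equally likely outcomes is
    # right about 1/k of the time, so 50% accuracy is chance level for 2
    # outcomes but well above chance for 5 (roughly the number of grade bands).
    import random

    def chance_accuracy(k, trials=100_000):
        hits = sum(random.randrange(k) == random.randrange(k) for _ in range(trials))
        return hits / trials

    print(round(chance_accuracy(2), 2))   # ~0.5
    print(round(chance_accuracy(5), 2))   # ~0.2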


If remote learning is going to continue for the foreseeable future why don't assessments adapt to that environment? Stop testing and maybe rely more on projects and prepared speeches? Maybe continuous in-class real-time questioning to see if the student is keeping up? Sounds obvious but I know for a fact that it's not happening.

I've always thought those things were better than exams before remote learning regardless and many school boards weigh them similarly in aggregate. Testing was always a bit of a flawed model pandemic or not.


Unfortunately the current government has spent basically its entire tenure moving the assessment systems away from continuous/coursework-based testing and towards huge monolithic exams. To adapt to the times would be to backpedal pretty dramatically.


An exam tests that you have absorbed and can apply the knowledge. In subjects like mathematics I'm not sure what prepared speeches might give you, and grading based on real-time, in-class questioning is going to be subject to bias from the teachers.

The whole reason we have a controversy over this is that the teachers were asked to give predicted grade to their pupils, and in aggregate their predictions were improbably high compared to other recent year groups.


The system I'm suggesting, which admittedly wasn't fleshed out in a single paragraph, is really a complete overhaul of assessments to fit remote teaching. You'd have to actually craft real-time continuous assessments that fit each subject, just like you have to do for exams anyway, but the benefit is you do it in mass amounts, making it easier to get a mean performance compared to just 1 or 2 exams. How it currently works is that these types of assessments are treated as second-class citizens, which I guess was really my frustration.

Also these days I'd heavily debate your premise - I think exams are more of a test of memory and of being able to game the system, especially in the UK where you are given a bank of past papers which are almost identical to the exam papers. I was a terrible student until 2 weeks prior to the final exam, when I grinded through maybe 6 or 7 past papers, and at least 3 out of 10 questions were carbon-copied from previous papers. I would not be able to approach a problem that fell outside the structure of the problems in the exam papers in the real world.


> You'd have to actually craft real-time continuous assessments for each subject that fit each subject

Which sounds like continuous, standardised, externally applied examination. Leaves rather little room for individuality in teaching techniques or pace...

> How it currently works is that these types of assessments are treated as second class citizens which i guess was really my frustration.

In as much as these things exist at all, they are part of the teaching method and aren't in any way standardised.

I'm not saying your proposal is wrong, but it would lead to a much more rigid system.


It's essentially the same as the hiring problem.

You're trying to work out who is a good fit from very limited data. You can run some practical tests - whiteboard, paper exam - but that doesn't take into account daily variations in performance, and it disadvantages outliers who may have very useful skills or character traits that don't tick the usual boxes.

The problem may be more that education is seen as an industrial production line with authoritarian control of outcomes rather than as a tool for personal development. Early apprenticeships and interning opportunities, with real tasks to solve and real challenges to face, might be better at developing a broader range of talents in context.


Agreed. I know it's a common issue with large institutions, commercial or otherwise, but you would think they could pivot to a new model quicker than they have.

I'm currently studying a part-time undergrad in CS at the Open Uni and even they have failed to provide an alternative to exams this year. You would think if any educational body would be prepared, it would be the Open Uni. If an institution that prides itself on distance learning can't make it happen, lord knows what's happening at traditional schools.


> Stop testing and maybe rely more on projects and prepared speeches?

Projects sounds like another word for coursework.

Judging on prepared speeches doesn't sound fair at all, unless it's a course on public speaking.

There's a reason things are as they are.

> Maybe continuous in-class real-time questioning to see if the student is keeping up?

So when a student asks a question in class, they have to do so in the knowledge that they may be negatively judged for it? There's a reason learning and assessment are at least somewhat separated.


A problem there (which the previous system attempted to solve) is establishing a reasonable ranking of students that is aligned between schools. It may well be that in some year and some subject the best student of school X is weaker than the worst student of school Y, and in many institutions it would be reasonable that, according to an objective country-wide ranking, no one in their class deserves an A in some subject this year (I don't know the specifics of UK A-level grading, but I'm thinking of an exam policy where an A implies being in the top 15-20% countrywide).
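A minimal sketch of such a norm-referenced scheme (the 15-20% figure is the guess above; the remaining cutoffs are purely hypothetical):

    import bisect

    # cumulative share of the national cohort at or above each grade boundary (hypothetical)
    CUTOFFS = [0.15, 0.40, 0.70, 0.90, 1.00]
    GRADES = ["A", "B", "C", "D", "E"]

    def grade_from_percentile(top_fraction):
        # top_fraction = 0.10 means the pupil is in the top 10% countrywide
        return GRADES[bisect.bisect_left(CUTOFFS, top_fraction)]

    print(grade_from_percentile(0.10))   # A
    print(grade_from_percentile(0.55))   # C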

However, teachers naturally want to motivate their best performers (what teacher would grade so that the best student in their class gets a B-?), and school administrators have a practical incentive to inflate the grades. This means that subjective evaluation of speeches and projects (especially if you're comparing them within a single institution rather than against what counts as good or bad elsewhere) and continuous real-time questioning won't lead to an evaluation that's useful for determining where a student stands relative to students in other institutions; furthermore, those evaluations are going to be optimistic. A relevant headline is "40% of A-level results were downgraded from teachers' predictions" - however, that seems normal and expected to me; do we have the equivalent numbers from last year, comparing the teachers' predictions with the actual exam results?

Technically you might consider taking the "local evaluation" and making an adjustment for the institution, but this is exactly what they attempted to do this year in the UK, and the whole original article is about the limitations of that approach.

Also, fraud and cheating are a problem. There are many approaches that have proven to produce somewhat effective remote teaching and learning; however, we don't have good solutions for effective remote evaluation (a problem I personally experienced this spring in my work). For pre-COVID remote education, the only thing that IMHO worked was a network of in-person proctoring centres supervising the testing process and the identity of the people taking the test (otherwise people will get others to take the test for them - it's not a hypothetical issue) or, in certain cases, privacy-invasive and labour-intensive online proctoring.

IMHO it makes complete sense to separate instruction and teaching from final evaluation and certification, due to the unavoidable conflicts of interest and misleading results that happen otherwise. This is also the policy the UK has chosen - but as they found out, it means they can't have a meaningful evaluation if they skip the actual evaluation part this year.


> A relevant headline is "40% of A-level results were downgraded from teachers' predictions" - however, that seems normal and expected to me; do we have the equivalent numbers from last year, comparing the teachers' predictions with the actual exam results?

It's about 39%. The problem is that in a normal year, everyone comes out of an exam with a rough sense of how they've done. If you come out feeling like you bombed it, then seeing your B become a C is not going to be a surprise.

The issue is that "predicted grades" are not expectation values in a statistical sense; if they were, then teachers would be doing a pretty consistently bad job of forecasting them.

A predicted grade is based on an assessment of ability: "What do I think little Johnny is capable of when he takes his exam in however many months' time?" Now, in theory, results can be both better and worse than that prediction. In practice, the fact that the scale is closed-ended (and far more pupils sit near the top of it than near the bottom) means that it is more common for the variability of exam performance to lead to slightly worse performance. In other words, more pupils underperform their ability-based prediction on exam day than outperform it.
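A toy simulation of that asymmetry (my own model, not Ofqual's: ability-based predictions on a closed grade scale, plus symmetric exam-day noise, with predictions clustered near the top):

    import random

    GRADES = ["U", "E", "D", "C", "B", "A", "A*"]            # index 0 (lowest) .. 6 (highest)

    def exam_result(predicted_idx):
        noise = random.choice([-1, 0, 0, 1])                          # symmetric day-to-day variation
        return max(0, min(len(GRADES) - 1, predicted_idx + noise))    # clamped to the scale

    random.seed(0)
    down = up = 0
    for _ in range(100_000):
        pred = random.choices(range(7), weights=[1, 2, 4, 8, 16, 24, 12])[0]  # skewed to the top
        actual = exam_result(pred)
        down += actual < pred
        up += actual > pred

    print(down, up)   # more downgrades than upgrades, purely from the closed-ended scale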

So there is nothing inherently wrong with reducing grades from the centre-assessed grades; many newspaper headlines are treating this as if the CAG were the "real" grade that is somehow being taken away.

If you look at this table: https://twitter.com/Samfr/status/1293979033458417668/photo/1 you can see that the overall grade distribution produced by the model for this year is similar to that for previous years. Not surprising since that is by design.

The problem here is that Ofqual is treating grade inflation as if it were the worst possible thing that could happen, and has essentially sacrificed individual fairness on the altar of aggregate consistency.


Even if you know it is likely that 2 out of 5 students whom you assess as grade B will have a bad day and get a C in the exam, you don't know which 2 of the 5, so you have to assess them all as a B.

Ranking students might be roughly correct, but many of the rankings in the middle, where students are very similar, are basically random.


Even ranking students isn't a good answer on the individual level.

Imagine there are two schools: A typical small private school, which normally achieves high grades, and a larger state school which normally achieves a broader spread. Normally the highest individual grade is achieved at the private school.

The private school's historic top three grades might be A*, A, B, but this year's predictions were A*, A, A. The rankings are correct to the individuals' abilities; the "actual" or "deserved" grades could be A, A, A or A, A, B.

The state school's top three grades are normally A, A, B. This year it has a student who is capable of (and likely to achieve) an A*. Teacher predictions for the top three grades are therefore A*, A, A. The rankings are still correct according to ability.

Across the country the top six grades are normally A*, A, A, A, B, B.

The private school uses the teacher predictions, because it has a small class: A*, A, A.

We're now left with only A, B, B available to give to the top students at the state school, so the students assessed at A*, A, A are awarded A, B, B. Every one is a downgrade.

But it's OK, because across the country there's no grade inflation.
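A toy sketch of that allocation (the class-size cut-off, pool mechanics and numbers here are just my illustration of the scenario above, not Ofqual's actual algorithm):

    SMALL_CLASS_THRESHOLD = 5                               # hypothetical cut-off for "use the CAGs"

    national_top_grades = ["A*", "A", "A", "A", "B", "B"]   # the country's top 6 grades in a normal year

    schools = [
        # (name, class size, teacher-assessed grades in rank order; top 3 shown)
        ("private", 3, ["A*", "A", "A"]),
        ("state", 30, ["A*", "A", "A"]),
    ]

    pool = list(national_top_grades)
    for name, size, cags in schools:
        if size <= SMALL_CLASS_THRESHOLD:
            awarded = cags                                  # small class: teacher grades stand
            for g in awarded:
                pool.remove(g)                              # ...and use up the national allocation
        else:
            awarded = pool[:len(cags)]                      # large class: best of whatever is left
            pool = pool[len(cags):]
        print(name, cags, "->", awarded)

    # private ['A*', 'A', 'A'] -> ['A*', 'A', 'A']
    # state   ['A*', 'A', 'A'] -> ['A', 'B', 'B']           every state pupil is downgraded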


That is exactly what happened to my son's Computer Science class. They all got shifted down.

Thankfully the UK government have now decided to revert to the teacher-assessed grades.



