An exam tests that you have absorbed and can apply the knowledge. In subjects like mathematics I'm not sure what prepared speeches might give you, and grading based on real-time, in-class questioning is going to be subject to bias from the teachers.
The whole reason we have a controversy over this is that the teachers were asked to give predicted grades to their pupils, and in aggregate their predictions were improbably high compared to other recent year groups.
The system I'm suggesting, which admittedly wasn't fleshed out in a single paragraph, is really a complete overhaul of assessment to fit remote teaching. You'd have to actually craft real-time continuous assessments that fit each subject, just as you have to do for exams anyway, but the benefit is that you'd have many data points, making it easier to establish a mean performance than one or two exams can. How it currently works is that these types of assessments are treated as second-class citizens, which I guess was really my frustration.
Also, these days I'd strongly dispute your premise: I think exams are more a test of memory and of being able to game the system, especially in the UK, where you are given a bank of past papers that are almost identical to the exam papers. I was a terrible student until two weeks before the final exam, when I ground through maybe six or seven past papers, and at least 3 out of 10 questions were carbon copies of questions from previous papers. In the real world I would not be able to approach a problem that fell outside the structure of those exam papers.
> You'd have to actually craft real-time continuous assessments that fit each subject
Which sounds like continuous, standardised, externally applied examination. That leaves rather little room for individuality in teaching techniques or pace...
> How it currently works is that these types of assessments are treated as second-class citizens, which I guess was really my frustration.
Inasmuch as these things exist at all, they are part of the teaching method and aren't in any way standardised.
I'm not saying your proposal is wrong, but it would lead to a much more rigid system.
You're trying to work out who is a good fit from very limited data. You can run some practical tests - a whiteboard session, a paper exam - but that doesn't account for day-to-day variation in performance, and it disadvantages outliers who may have very useful skills or character traits that don't tick the usual boxes.
The problem may be more that education is seen as an industrial production line with authoritarian control of outcomes rather than as a personal development tool. Early apprenticeships and interning opportunities, with real tasks to solve and real challenges to face, might be better at developing a broader range of talents in context.