Hacker News

I can't be the only one thinking, given how much ChatGPT gets confidently wrong, that it's way too early to be talking about funneling this into classrooms?

The internet is bursting with anecdotes of it getting basics wrong. From dates and fictional citations, to basic math questions... how on earth can this be a learning tool for those who are not wise enough to understand the limitations?

OpenAI's examples include making lesson plans, tutoring, etc. Just like with self-driving cars: too much, too quickly, and many people either aren't capable of understanding the limits or blindly trust the system.

ChatGPT isn't even a year old yet...



It's probably the perfect time to be talking about it, given how fast the advancements occur.

They probably won't be using that model for another year, while people will be using that website for many years.


It doesn’t get the kind of things taught in most classrooms wrong in the way it gets business applications wrong, because there’s a (mostly) correct response that isn’t going to vary a ton from source to source. The weighting will always push its responses towards the right answer, though in moments of relative uncertainty I guess if you had the temperature turned super high you might get some weird responses.
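The temperature mechanism alluded to here can be sketched with a softmax over token scores: dividing the logits by a temperature before normalizing flattens the distribution as temperature rises, which is why a "super high" setting makes weird responses more likely. (The logit values below are made up for illustration; this is a sketch of the general sampling idea, not OpenAI's implementation.)

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then softmax.
    Higher temperature flattens the distribution, giving
    unlikely tokens more probability mass."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]  # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, temperature=0.5)
high = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token dominates; at high temperature
# probability spreads toward the less likely ("weird") tokens.
```

So even when the weights strongly favor the right answer, a high enough temperature can still surface the long tail.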

It’ll (mostly) always know about the Sherman Antitrust Act and what precipitated its passage, for example.

That said, OpenAI repeatedly suggests verifying responses and says, “make it your own” which IMO includes spot checking for correctness.


> It doesn’t get the kind of things taught in most classrooms wrong in the way it gets business applications wrong, because there’s a (mostly) correct response that isn’t going to vary a ton from source to source.

It's fabricated legal cases and invented citations to back up its statements.

The issue is, it can be difficult to know when it's wrong without putting in a lot of effort. Students won't put in the effort, and that's assuming they're even capable of understanding when/where it's wrong in the first place.

Just like self driving cars - we can say "pay attention and keep your hands on the wheel at all times"... but that's not what everyone does and we've seen the consequences of that already.

We need to be careful here. This tech is new. ChatGPT hasn't even existed (publicly) for a year. Getting it wrong and going too fast has consequences. In the education space in particular, those consequences can be profound.


This is nothing at all like self driving cars; firstly the risks are not even in the same ballpark, and secondly every piece of advice given includes, “check the response independently.” It says nothing about a tool like this if people choose to misuse it.

At some point, using LLMs like ChatGPT recklessly is on the user, not the tool.


The internet surfaces the interesting samples, not necessarily a realistic picture.

I'd love to see a good research study on this: one that shows the actual error rate, plus a comparison with other non-human alternatives (e.g. googling, using a textbook only) and possibly human ones (personal tutor, group instructor, ...).


> The internet surfaces the interesting samples, not necessarily a realistic picture.

A tutor is expected to know the subject and guide the student. If, say, 10% of the time it guides the student into a false understanding, the damages are significant. It's very hard to unlearn something, particularly when you have confidence you know it well.

My personal adventures with ChatGPT are probably close to a 50% success rate. It gets some stuff entirely wrong, a lot of stuff non-obviously wrong, and even more stuff subtly wrong - and it's up to you to be knowledgeable enough to wade through the BS. Students learning a subject in school are, by definition, not knowledgeable enough to discern confident BS from correctness.

Will ChatGPT be useful in the future? Yes, almost certainly. But let's not rush this and get it very wrong. The consequences can be staggering in the education space - children or adults.


I'm getting north of 90% success with GPT4, and while a dedicated tutor or a group instructor would definitely be better, none of the other non-human alternatives come close. Searching the internet and YouTube tutorials can also lead to wrong information and false understanding - all self-directed methods have this pitfall. ChatGPT, however, is the only one where you can probe deeper once you find problems.

If I had to place it somewhere, it would be between a study buddy and a tutor, closer to a study buddy.

Still - a well designed study will give us a much better picture of where we actually are. I think that would be extremely valuable.



