
> Regarding the confidence of advice, how different are the rates of mistakes between human lawyers and the latest GPT?

Notice I am not talking about "rates of mistakes" (i.e. accuracy). I am talking about how confident they are depending on whether they know something.

It's a fair point that, unfortunately, many humans sound just as confident regardless of their knowledge, but "good" experts (lawyers or otherwise) are capable of saying "I don't know (let me check)", a feature LLMs still struggle with.



> I am talking about how confident they are depending on whether they know something.

IMHO, that's irrelevant. People don't really know their level of confidence either.

> feature LLMs still struggle with.

Even small LLMs are capable of doing that decently.



