
Well, I tried it, and this is how dumb it is. I asked it what context length it supports. It said that PaLM 2 supports 1024 tokens and then proceeded to claim that 1024 tokens equals 1024 words, which is obviously wrong.
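
The token/word confusion is easy to demonstrate with any real tokenizer. Here's a minimal sketch using OpenAI's tiktoken as a stand-in (an assumption for illustration; Bard's tokenizer isn't public, but every modern LLM tokenizer behaves similarly):

```python
import tiktoken

# cl100k_base is the ChatGPT/GPT-4 tokenizer; the token-to-word
# ratio is non-1:1 for any subword tokenizer, not just this one.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization splits text into subword pieces, so token and word counts diverge."
tokens = enc.encode(text)
words = text.split()

print(f"{len(words)} words, {len(tokens)} tokens")
# For English prose a common rule of thumb is ~0.75 words per token,
# so a 1024-token window holds roughly 770 words, not 1024.
```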

Then I changed the prompt slightly, and it answered that it supports 512 tokens, contradicting its previous answer.

That's roughly early GPT-3-level performance, including a good dose of hallucinations.

I would assume that Bard uses a PaLM 2 model fine-tuned for accuracy and conversation, but it’s still pretty mediocre.

It's incredible how far behind they are compared to the GPT-4 and ChatGPT experience on every criterion: accuracy, reasoning, context length, etc. Bard doesn't even have character streaming.
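
For reference, here's roughly what token streaming looks like on the OpenAI side, sketched with the 2023-era openai Python client and stream=True (the prompt is just a placeholder); output is rendered as it arrives instead of after the full answer is generated:

```python
import openai

# Assumes OPENAI_API_KEY is set in the environment.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What context length do you support?"}],
    stream=True,  # yields chunks as tokens are generated
)

# Each chunk carries a small delta; printing it immediately is what
# produces the "typing" effect ChatGPT has and Bard lacks.
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()
```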

We'll see how this plays out, but this is far from the level of execution needed to compete with the OpenAI / Microsoft offerings.



> It's incredible how far behind they are compared to the GPT-4 and ChatGPT experience on every criterion: accuracy, reasoning, context length, etc. Bard doesn't even have character streaming.

I guess all those weird interview questions don't land them the industry's best after all...


Why is character streaming important if Bard seems to be faster at generating a complete answer than ChatGPT?


That's because simple questions in Bard only generate around 200 tokens per answer. The latency becomes much more noticeable for longer answers.
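
Some back-of-the-envelope arithmetic bears this out. A sketch with made-up numbers (the time-to-first-token and throughput below are assumptions, not measurements of either product):

```python
TTFT = 0.5              # assumed time to first token, seconds
TOKENS_PER_SECOND = 40  # assumed generation throughput

def seconds_until_first_text(n_tokens: int, streaming: bool) -> float:
    """How long the user stares at an empty reply box."""
    if streaming:
        return TTFT  # output starts as soon as the first token lands
    return TTFT + n_tokens / TOKENS_PER_SECOND  # must wait for the whole answer

for n_tokens in (200, 1000):
    gap = (seconds_until_first_text(n_tokens, streaming=False)
           - seconds_until_first_text(n_tokens, streaming=True))
    print(f"{n_tokens} tokens: streaming saves ~{gap:.0f}s of perceived wait")
# 200 tokens: streaming saves ~5s of perceived wait
# 1000 tokens: streaming saves ~25s of perceived wait
```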



