
My favorite benchmark is to analyze a very long audio file recording of a management meeting and produce very good notes along with a transcript labeling all the speakers. 2.5 was decently good at generating the summary, but it was terrible at labeling speakers. 3.0 has so far absolutely nailed speaker labeling.


My audio experiment was much less successful — I uploaded a 90-minute podcast episode and asked it to produce a labeled transcript. Gemini 3:

- Hallucinated at least three quotes (of the ones I checked) that resembled nothing said by any of the hosts

- Produced timestamps that were almost entirely wrong. A quote from the end of the episode, for instance, was timestamped at 35 minutes in rather than 85.

- Heavily paraphrased and abridged almost everything it transcribed, in most cases without any indication.

It's understandable that Gemini can't cope with such a long audio recording yet, but I would've hoped for a more graceful, less hallucinatory failure mode. Unfortunately, this aligns with my impression of past Gemini models: impressively smart, but failing in the most catastrophic ways.


I wonder if you could get around this with a slightly more sophisticated harness. I suspect you're running into context length issues.

Something like

1.) Split the audio into multiple smaller tracks.

2.) Perform a first-pass audio extraction.

3.) Find unique speakers and other potentially helpful information (maybe just a short summary of where the conversation left off).

4.) Seed the next stage with that information (yay multimodality) and generate the audio transcript for it.

Obviously it would be ideal if a model could handle ultra-long-context conversations by default, but I'd be curious how much of the error comes from a lack of general capability vs. simple context pollution.
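
A rough sketch of that loop, assuming pydub for the splitting (a real library); the model call itself is a hypothetical placeholder for whatever API you'd use:

    from pydub import AudioSegment

    CHUNK_MS = 10 * 60 * 1000  # 10-minute chunks, to stay well under the context limit

    def transcribe_chunk(chunk_path, context):
        """Hypothetical model call: send the audio chunk plus the carried-over
        context (known speakers, a short summary of where the conversation
        left off) and return (transcript_text, updated_context)."""
        raise NotImplementedError

    audio = AudioSegment.from_file("episode.mp3")
    context = {"speakers": [], "summary": ""}
    parts = []

    for start_ms in range(0, len(audio), CHUNK_MS):  # len() is in milliseconds
        chunk = audio[start_ms:start_ms + CHUNK_MS]
        chunk.export("chunk.wav", format="wav")
        text, context = transcribe_chunk("chunk.wav", context)
        parts.append(text)

    print("\n".join(parts))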


The worst is when it fails to read simple PDF documents and then lies and gaslights in an attempt to cover it up. Why not just admit you can't read the file?


This is specifically why I don't use Gemini. The gaslighting is ridiculous.


Now try an actual speech model like ElevenLabs or Soniox, not something that wasn't made for this.


I'd do the transcript and the summary separately. Dedicated audio models from vendors like ElevenLabs or Soniox use speaker-detection (diarization) models to produce an accurate per-speaker transcript. I'm not sure Google's models do; maybe they just hallucinate the speakers instead.
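
For what it's worth, a minimal sketch of that two-stage approach, using pyannote.audio for diarization (a real library; it needs a Hugging Face token) with hypothetical transcribe/summarize stubs standing in for whatever ASR and LLM you'd use:

    from pyannote.audio import Pipeline

    def transcribe(path, start, end):
        """Hypothetical ASR call (e.g. an ElevenLabs or Soniox request)."""
        raise NotImplementedError

    def summarize(text):
        """Hypothetical LLM call for the separate summary pass."""
        raise NotImplementedError

    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1", use_auth_token="hf_...")
    diarization = pipeline("meeting.wav")

    lines = []
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        # Transcribe only this speaker's segment, then label it.
        text = transcribe("meeting.wav", start=turn.start, end=turn.end)
        lines.append(f"[{turn.start:07.2f}] {speaker}: {text}")

    transcript = "\n".join(lines)
    notes = summarize(transcript)  # summary generated separately from the transcript

Doing the summary as a second pass over the finished transcript also means a summarization mistake can't corrupt the transcript itself.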


Agreed. I don’t see the need for Gemini to be able to do this task, although it should be able to offload it to another model.


What prompt do you use for that?


I just tried "analyze this audio file recording of a meeting and notes along with a transcript labeling all the speakers" (using the language from the parent's comment) and indeed Gemini 3 was significantly better than 2.5 Pro.

3 created a great "Executive Summary", identified the speakers' names, and then gave me a second-by-second transcript:

    [00:00] Greg: Hello.
    [00:01] X: You great?
    [00:02] Greg: Hi.
    [00:03] X: I'm X.
    [00:04] Y: I'm Y.
    ...
Super impressive!


Does it deduce everyone's name?


It does! I redacted them, but yes. This was a 3-person call.


I made a simple webpage to grab text from YouTube videos: https://summynews.com. Might be great for this kind of testing? (I want to expand to other sources in the long run.)


It's not even THAT hard. I'm working on a side project that takes a podcast episode and labels the speakers. It works.


Parakeet TDT v3 would be really good at that


Yes, this is the best solution for that goal. Use the MacWhisper app + Parakeet 3.
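
If you'd rather script it than use MacWhisper, a minimal sketch with NVIDIA's NeMo toolkit (from_pretrained and transcribe are real NeMo calls; the exact model id is my assumption, so check the current Parakeet TDT v3 checkpoint name on Hugging Face):

    import nemo.collections.asr as nemo_asr

    # Model id is an assumption; look up the current Parakeet TDT v3 checkpoint.
    model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v3")

    # transcribe() takes a list of audio paths and returns one result per file.
    results = model.transcribe(["episode.wav"])
    print(results[0])

Note that Parakeet is ASR only, so speaker labeling would still need a separate diarization pass like the pyannote sketch upthread.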



