
To be clear, I’m not saying it definitely happened, only that it’s realistic and not exactly far-fetched given everything that’s publicly known.

> they can also interpolate in latent space? How does that actually work?

I don’t know. I assumed it was the case because of the vast and excellent interpolation AI appears to do in other domains.



> I don’t know. I assumed it was the case because of the vast and excellent interpolation AI appears to do in other domains.

Sure, it's plausible, just like many other forms of baseless speculation. I still don't think it should be stated as a matter of fact, the way it is in this and other articles and in the comments.

> I don’t know. I assumed it was the case because of the vast and excellent interpolation AI appears to do in other domains.

Yes, but do you know of any examples with sequential data, and in particular with transformers? I don't, and I think it's a somewhat different problem. With pictures you just train an encoder into some low-dimensional latent space, and there you can interpolate; but with sequences you don't want to give the whole sequence to the model as input and generate the whole sequence as output.
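
To make the image case concrete, here's a minimal sketch of latent-space interpolation with an autoencoder. The Encoder and Decoder below are untrained stand-ins for whatever trained model you actually have; the only point is the linear walk between two latent codes:

    import torch
    import torch.nn as nn

    # Stand-ins for a trained encoder/decoder pair; in practice you'd load real weights.
    class Encoder(nn.Module):
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, latent_dim))

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Sigmoid())

        def forward(self, z):
            return self.net(z).view(-1, 1, 28, 28)

    encoder, decoder = Encoder(), Decoder()

    x_a = torch.rand(1, 1, 28, 28)  # image A
    x_b = torch.rand(1, 1, 28, 28)  # image B

    z_a, z_b = encoder(x_a), encoder(x_b)

    # Walk linearly between the two latent codes and decode each point.
    # If the latent space is smooth, the decodings blend "between" A and B.
    for alpha in torch.linspace(0.0, 1.0, steps=5):
        z_mix = (1 - alpha) * z_a + alpha * z_b
        x_mix = decoder(z_mix)

Note this works because the encoder maps the whole image to one fixed-size vector; there's no obvious equivalent of that single vector when a transformer consumes and emits a token sequence, which is the gap being pointed at here.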

The natural way to do interpolation with an audio transformer model, which in principle it can do, would be to somehow audio-prompt-engineer it into speaking in a voice that interpolates between SJ and your voice actor. But that would be a pretty fragile method.
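
Purely as a hypothetical sketch of that audio-prompting idea (the reference clips here are random placeholders, the generate call is a made-up API, and the naive waveform mixing is itself part of why the approach is fragile):

    import numpy as np

    sr = 16_000
    ref_voice_a = np.random.randn(3 * sr).astype(np.float32)  # placeholder for one reference voice clip
    ref_voice_b = np.random.randn(3 * sr).astype(np.float32)  # placeholder for the other

    alpha = 0.5
    # Naively mixing waveforms mostly yields two voices talking over each other,
    # not a voice "between" them; an audio LM may or may not generalise from
    # such a prompt, and there's no dial you can trust to control the blend.
    mixed_prompt = (1 - alpha) * ref_voice_a + alpha * ref_voice_b

    # Hypothetical call; a real audio LM would want the prompt tokenised, and
    # nothing guarantees it continues in an interpolated voice at all.
    # continuation = audio_lm.generate(prompt=mixed_prompt, text="Hello there")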



