I enjoy Fei-Fei Li's communication style. It's straight and to the point in a way that I find very easy to parse. She's one of my primary idols in the AI space these days.
In my experience I have never once thought about what color means what, but if the colors are wrong I can "sense" that something is broken. It almost becomes a second sense; the patterns are recognized somewhere deep in my mammal brain.
I am curious to hear an expert weigh in on this approach's implications for protein folding research. This sounds cool, but it's really unclear to me what the implications are.
But as with anything in research, it will take months and years to see what the actual implications are. Predictions of future directions can only go so far!
Their representation is simpler, just a transformer. That means you can plug in all the theory and tools that have been developed specifically for transformers; most importantly, you can scale the model more easily. But more than that, I think, it shows that there was no magic to AlphaFold. The details of the architecture and training method didn't matter much. All that was needed was training a big enough model on a large enough dataset. Indeed, lots of people who have experimented with AlphaFold have found it to behave similarly to LLMs, i.e. it performs well on inputs close to the training dataset but it doesn't generalize well at all.
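To make the "just a transformer" point concrete, here is a minimal sketch of what such a model could look like, assuming a stock PyTorch encoder. The token vocabulary, dimensions, and coordinate head are all hypothetical simplifications for illustration, not the published architecture:

```python
# Hypothetical sketch: a plain transformer that maps an amino-acid
# sequence to per-residue 3D coordinates. Not the authors' model.
import torch
import torch.nn as nn

class PlainProteinTransformer(nn.Module):
    def __init__(self, n_tokens=21, d_model=512, n_heads=8,
                 n_layers=12, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)   # 20 amino acids + pad
        self.pos = nn.Embedding(max_len, d_model)      # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.coords = nn.Linear(d_model, 3)            # per-residue (x, y, z)

    def forward(self, seq):  # seq: (batch, length) amino-acid token ids
        pos = torch.arange(seq.size(1), device=seq.device)
        h = self.embed(seq) + self.pos(pos)
        return self.coords(self.encoder(h))

# Because it is a stock encoder, the standard transformer toolbox applies
# unchanged: optimized attention kernels, mixed precision, scaling laws.
model = PlainProteinTransformer()
out = model(torch.randint(0, 21, (2, 128)))  # -> shape (2, 128, 3)
```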
Except their dataset is mostly the output of AlphaFold, which had to use the much smaller dataset of proteins analyzed by crystallography as input. This is really an exercise in model distillation; a worthy endeavor, but it's not like they could have just taken their architecture and the dataset AlphaFold had and expected to get the same results. If that were the case, that's what they would have done, because it would've been much more impressive.
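For what it's worth, the distillation framing reduces to a very simple training loop. This is a hypothetical sketch of the general technique, with teacher predictions standing in for crystallography ground truth, not the authors' actual pipeline:

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, seq, optimizer):
    """One step of training a student to imitate a frozen teacher
    (e.g. an AlphaFold-like model); all names here are illustrative."""
    with torch.no_grad():
        target = teacher(seq)        # teacher-predicted structure as "label"
    pred = student(seq)
    loss = F.mse_loss(pred, target)  # student can at best match the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The parent comment's point is visible right in the loop: the student's supervision signal is capped by whatever the teacher already learned from the small crystallography dataset.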
> But more than that, I think, it shows that there was no magic to AlphaFold. The details of the architecture and training method didn't matter much. All that was needed was training a big enough model on a large enough dataset.
People often like to say that we just need one more algorithmic breakthrough or two for AGI. But in reality it's the dataset and environment-based learning that matter. Almost any model would do if you collected the data. The bottleneck isn't in the model; it's outside the model where we need to work.
I think the sentiment that simplicity is inherently good is a false conclusion. Simplicity is simply good scientific methodology.
Doing too many things at once makes methods hard to adopt and makes conclusions harder to draw. So we try to find simple methods that show measurable gains, so we can adapt them to future approaches.
It's a cycle between complexity and simplicity. When a new simple and scalable approach beats the previous state of the art, that just means we discovered a new local maximum to climb.
I am genuinely interested in where the strong negativity towards Siri has come from in recent culture. From what I gather, it's likely due to the high expectations we have for Apple. But what I don't really get is why there isn't a similar amount of negativity directed at Google or Samsung, who both have equally shit phone AI assistants (obviously this is just my perspective; I am a daily user of both iOS and a Samsung Android).
I am not trying to defend Apple or Siri by any means. I think the product absolutely should (and will) improve. I am just curious to explore why there is such negativity being directed specifically at Apple's AI assistant.
As a vocal critic of Siri, I can give you a number of reasons we hate it:
1. It seems to be actively getting worse. On a daily basis, I see it responding to queries nonsensically, like when I say “play (song) by (artist)” (I have Apple Music) and it opens my Sirius app and puts on a random thing that isn’t even by that artist. Other trivial commands are frequently just met with apologies or searching the web.
2. Over a year ago Apple conducted a flashy announcement full of promises about how Siri would not only do the things that it’s been marketed as being able to do for the last decade, but also things that no one has seen an assistant do. Many people believe that announcement was based on fantasy thinking and those people are looking more and more correct every day that Apple ships no actual improvements to Siri.
3. Apple also shipped a visual overhaul of how Siri looks, which gives the impression that work has been done, leading people to be even more disappointed when Siri continues to be a pile of trash.
4. The only competitor that makes sense to compare is Google, since no one else has access to do useful things on your device with your data. At least Google has a clear path to an LLM-based assistant, since they’ve built an LLM. It seems believable that Android users will have access to a Gemini-based assistant, whereas it appears to most of us that Apple’s internal dysfunction has rendered them unable to ship something of that caliber.
I think Siri has always been criticized, likely because it has never worked super well and it has the most eyes (or ears) on it (iPhones still have 50% market share in the US).
And now that we have ChatGPT with voice mode, Gemini Live, etc., which have incredible speech recognition and reasoning by comparison, it's harder to keep arguing that "every voice assistant is bad".
For the last three iOS major versions, Siri has been unable to execute the simple command "shuffle the playlist 'Jams'", or any variation, like "play the playlist Jams on shuffle". I am upset for that reason.
Is it just my rose-tinted glasses, or did Siri work much better in the first couple of years and then decline continually since? I actually used it a lot initially, then eventually disabled it because it never worked anymore.
I feel like the same is true of a lot of products that moved from being programmatically connected ML workflows to multi-modal AI.
We, the consumer, have received inferior products because of the vague promise that the company might one day be able to make it cheaper if they invest now.
I've disabled Siri as much as I possibly can. I've never even tried to use it. I would do the same for any other AI assistant. I don't like that they are always listening, and I just don't like talking to computers. I find it unnatural, and I get irrationally angry when they don't understand what I want.
If I could buy a phone without an assistant I would see that as a desirable feature.