Something can be both technically incredibly impressive and genuinely useful, and yet at the same time dramatically over-hyped. That may be the case for AI today, although the range of possible outcomes makes the latter point genuinely unclear.
The difference for me is that Web3 was never shown to be at all useful.
This made me chuckle a bit; not that you are wrong, I don't know. But it reminds me of how my parents' generation understood the internet: full of scams and grifters :)
I might be in your parents' generation, and yes, there were a lot of scams. The difference is, there were some useful things. We still use the web and email, for example.
François Chollet already has a response[1] for you in his thread:
> One last thought -- don't overindex on the web3 <> LLMs comparison. Of course web3 was pure hot air while LLMs is real tech with actual applications -- that's not the parallel I'm making. The parallel is in the bubble formation social dynamics, especially in the VC crowd.
`Everyone` refers to all people in a given subset of the population. E.g. "Everyone arrives at school by 7:30." Obviously you're talking about people at a specific school, not every human on Earth.
I wouldn't have written "everyone", but technically incorrect use of language is, ironically, one of the things people jump on GPT for when it gets things wrong: https://news.ycombinator.com/item?id=34292129
No, the quote uses “Everyone” in the context of people expecting a return on investment: “Everyone is expecting as a sure thing "civilization-altering" impact (& 100x returns on investment) in the next 2-3 years”
Not so. And even if it were so, the statement he is making is still false - I’m absolutely certain not everyone who has investment expectations expects 100x returns in 2-3 years. It’s a very sloppy tweet.
I just read your newsletter. In the opening, you say "Hi everyone and thanks for subscribing." I haven't subscribed, so your usage of "everyone" can't literally apply to every reader. Is it possible that using "everyone" is appropriate in such situations even if the statement doesn't literally apply to every single member of the subset?
The comparison is between the levels of hype, not the technologies themselves. If you take the time to read the thread, he is well aware of the practical aspects of AI.
I know that - I read the thread - and I respect Chollet and the fact that he knows an awful lot more about AI than I do. I personally don't think though that the comparison to Web3 is helpful even with the clarification deep in the thread.
> The difference for me is that Web3 was never shown to be at all useful.
Because nobody agrees what web3 even means, it's a totally useless term, not that dissimilar to AGI or consciousness, which nobody can properly define. People who hate crypto define web3 in terms of all the negative stuff, while people who have a vested interest in pumping token prices (like VCs) will define web3 in terms of all the potentially positive stuff.
I believe that the fundamentals of crypto are incredibly useful and will eventually be adopted, as they have been in many places, but all the web3/NFT/etc stuff (whatever that means) is just useless hype and ponzi schemes.
Crypto has a much more negative image because it's full of scams, but that doesn't mean anything. It's just significantly easier to create scams with crypto than with AI, because money is at the very base layer. But that doesn't imply anything about the usefulness of these technologies.
>I believe that the fundamentals of crypto are incredibly useful and will eventually be adopted, as they have been in many places, but all the web3/NFT/etc stuff (whatever that means) is just useless hype and ponzi schemes.
Could you elaborate? It's been 15 years since the bitcoin paper and I still don't see where the game changing commercial applications of crypto are beyond the black/grey market uses for which it has already been adopted.
One big use case is decentralized cloud computing. The advantage over centralized services is low prices due to market pressures.
For example, Filecoin lets you store data for $2.33 per TB per year. If you used S3, that would be $150. The Golem network lets you run compute jobs for $0.003 per CPU core per hour.
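Back-of-the-envelope, using the per-TB prices quoted above (these figures are assumptions taken from this comment, not authoritative list prices):

    # Rough annual storage cost comparison; prices are the ones
    # quoted in this thread, not current list prices.
    FILECOIN_PER_TB_YEAR = 2.33   # USD
    S3_PER_TB_YEAR = 150.00       # USD

    tb = 50  # hypothetical workload size in terabytes
    print(f"Filecoin: ${tb * FILECOIN_PER_TB_YEAR:,.2f}/year")
    print(f"S3:       ${tb * S3_PER_TB_YEAR:,.2f}/year")
    print(f"Ratio:    {S3_PER_TB_YEAR / FILECOIN_PER_TB_YEAR:.0f}x")

At those numbers the gap is roughly 64x, which is the whole pitch.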
The only barrier here is software. There just aren't enough developers to build anything useful. There could be a user-friendly web service like the AWS console that enables normal developers to use this stuff, but nobody has built one yet. Golem could support GPU compute so that people can use it to train AI models cheaply, but they are short on devs so that feature is still in the backlog.
Give it time. Encrypted email has been possible since at least the 90s, but many hospitals still haven't adopted it and are using fax machines. Before people can adopt crypto, they first need to understand it, and that's not going to happen for a long time.
Both your Filecoin and Golem examples might be inexpensive, but they have no sort of SLA for end users. They can also evaporate tomorrow, taking your data and compute resources down with them. Both systems are only viable if the cost to run them is below the money made from running them. If the whole reason users pick Filecoin is the price, they'll abandon the platform if the price ever increases (to make running nodes profitable), which creates a feedback loop that will collapse the network.
While Amazon might cost more, they're unlikely to go out of business tomorrow or in ten years. They also have an SLA, predictable prices, and predictable capacity. If you build something on AWS that works today, it'll likely work tomorrow.
It's hard to predict how these sorts of systems play out until they start to scale up. I don't know much about these but I can imagine a functioning incentive structure that gives hardware owners the freedom to schedule downtime with reasonable notice while penalizing them for going down without notice. A decentralized cloud could have big advantages over centralized data centers; lower costs for highly diffuse "voluntary" maintenance labor, fewer single points of failure, etc.
It might not work at all, but it's not necessarily a Ponzi scheme (which isn't even really an accurate way to describe NFTs, which are a mostly worthless asset bubble; ponzi schemes require lying about transactions having taken place, not lying about the future value of an asset)
No offense but this stance reminds me of telco people deriding the "best effort" Internet for serious commercial applications 20 years ago. A professor of mine was adamant VoIP would never take off because it just didn't work reliably.
VoIP didn't take off until it worked reliably. The same is true of "best effort" IP for various applications. Untold amounts of person-hours and investment have gone into making IP networks good (MPLS etc). Your professor's assertion about VoIP wasn't necessarily wrong; it just wasn't a complete statement.
The pure IP routing networks of 20 years ago were not fully capable of making VoIP work well enough for general usage.
It's so strange to me that what you describe genuinely sounds useful and fairly straightforward, millions of person hours have gone into "web3" and it's all grifter stuff translating one shit coin into another and giving cheap credit or farming coins. Yet, something as obviously useful as you describe doesn't take off. Where does the discrepancy come from?
Well, there was also a time when "Macintosh" meant a raincoat to all but a handful of people, and another time 10 years later when "Amazon" was a forest or a river to all but a few weird nerds who were finding it cool to give out their credit card details "online" to order books, instead of just crossing the street to get them from the bookstore like every sane person did.
ChatGPT is trivially useful in a lot of cases... The other day I asked it to write marketing copy for a project and it wrote _better_ marketing copy than I could have written in an hour. On another project, I spent 10 minutes integrating the OpenAI library and was programmatically receiving incredible results with almost no effort.
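For anyone curious, "integrating the OpenAI library" really is about that small. A minimal sketch using the openai Python package's completion endpoint (the prompt and parameters here are illustrative, not from the parent comment):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your API key

    # Ask the model for marketing copy; prompt and params are made up.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write three sentences of marketing copy for a note-taking app.",
        max_tokens=200,
        temperature=0.7,
    )

    print(response.choices[0].text.strip())

That is essentially the entire integration; everything else is deciding what to do with the text that comes back.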
The nature of predicting the future means that there will be periods of overconfidence. In the case of Web3 (taken to mean blockchain/crypto/digital coins), the overconfidence was fueled by a core ponzi scheme combined with truly extraordinary returns for early speculators. But AI has no ponzi scheme attached to it so the comparison breaks down. My uncle cannot gamble his retirement on a mysterious promise of überwealth from a man with wild hair and no financial experience.
AI companies will have lots of false starts but 2022 was a transformational year and we are only getting started.
Read the whole thread. François is the creator of Keras and an AI engineer at Google. He knows his stuff.
He calls out ads/marketing/copywriting as one area where ChatGPT is effective. His point is that the scope of things beyond that use case is probably a lot more limited than people are assuming.
Chollet's view on this can't just be dismissed. There can't be many people better positioned to understand the current state of the art and the immediate prospects for advancement.
It's fine to disagree with him, but most people doing so will be from a much less informed position.
> Chollet's view on this can't just be dismissed.
Then why did Chollet dismiss it?
"If this had been a blog post and not a random spur-of-the-moment train of thoughts, I wouldn't have made the AI/web3 comparison. It was counterproductive, as it is what most folks are now focusing on."
> Recent GPT/stable diffusion products universally blow people away with their output.
I've yet to be blown away by either system. I guess I am just incredibly cynical, but the output of ChatGPT and Stable Diffusion isn't all that impressive to me. A lot of Stable Diffusion art just looks like browsing random DeviantArt or ArtStation pages. ChatGPT reminds me of IRC Markov bots; it produces the same nonsensical crap that they would. Only when a prompt hews incredibly close to some training content does it seem to produce cogent output.
Both will get better and might become more impressive to me, but right now that's not the case. My main issue is that neither system actually understands what it is being asked. They're relying on the viewer's tendency for apophenia and pareidolia to make them think the system understood what it was asked.
I think there's justification for heavy skepticism for the future of these systems because we're approaching the point where their incorrect/inaccurate output is going to end up feeding the next generation systems' training data.
> Recent GPT/stable diffusion products universally blow people away with their output.
I see a lot more naysayers than you do. While I am impressed, plenty are unconvinced.
(I really need to make time to test it against random standard written qualifications; if anyone has some example exams and marking schemes, in any language, please let me know. I have Polish maths, plus English and Irish school leaving certificates).
I would say code generation and documentation is another one. The UX for this is a bit sucky, but I've been pretty blown away by ChatGPT's ability to generate, translate, and explain existing code.
I actually started trying to learn Rust using ChatGPT. I can just ask it to explain bits I don't understand ... and it does.
There's a lot of content generation in the legal and medical spheres where ChatGPT could probably also be helpful. Dangerous, of course, when it gets things wrong. But still, I could see this being a useful tool for researchers in all sorts of fields to quickly dig through a lot of information. Basically, ChatGPT is trained on more stuff than a single human will ever be able to read.
I think this whole space is maybe bottlenecked on imagination. We have a lot of AI experts, with not a whole lot of other expertise, not seeing the forest for the trees. OpenAI employs a few geniuses, but their product is basically a chat box and an API.
As I was remarking to a friend the other day: so they have ChatGPT, pretty decent speech-to-text, and amazing text-to-speech... so why can't I talk to ChatGPT and listen to the answer? Such an obvious thing to do. Surely somebody thought of that. I mean inside OpenAI. I know there's a multitude of OpenAI-powered proofs of concept by third parties, but they don't seem to have the ambition to support a finished product so far.
> Read the whole thread. François is the creator of Keras and an AI engineer at Google. He knows his stuff.
I read the thread; it actually starts with him saying:
"If this had been a blog post and not a random spur-of-the-moment train of thoughts, I wouldn't have made the AI/web3 comparison. It was counterproductive, as it is what most folks are now focusing on. The two are, in fact, very much not the same."
So you can have the "appeal to authority", but the authority now says it was a bad comparison and he should not have made it. :-)
He continues:
"I meant to compare the surrounding hype generation dynamics (as discussed towards the end of the thread). Expectations unmoored from reality becoming a universally accepted, self-evident canon once the same narratives have been repeated enough times in the echo chamber."
Yes, argument by authority. Like it or not, this is actually a good way to learn things. Another thing people who cite informal fallacies always forget is that a fallacy is not a disproof of what is being said.
I'm guessing your salary depends on you thinking that AI-generated marketing copy is a Good Thing™.
To me, it just sounds like more dystopian attention spam. If absolutely all marketing copy vanished overnight never to return, in my opinion the world would be a better place.
I'm hoping we haven't hit the "uncanny valley" of ML generated text yet, and we will get to a point where something very deep in our brain is able to pick up that ML generated text isn't something coming from a genuine person (much like how lots of people intuitively understand that marketing copy is valueless) and viscerally react to it in a negative way.
It would be the only possible defense against this dystopian crap.
So far, when asked what the use case for ChatGPT/LLMs is, the sole answer with clear product-market fit (PMF) is marketing copy. Agreed, it is very good for that. Just as crypto had very clear PMF for cross-border payments and stablecoins in middle-income countries without dollarized banking. The problem is neither of these use cases is sufficient to justify peak valuations. (OpenAI is currently raising at $30bn.)
Like crypto, there are other potential use cases on the horizon for LLMs. The problem is the tech is still far too clunky, unreliable, and impractical for the foreseeable future. LLMs are subject to a devastating hallucination problem, where they confidently report clearly erroneous data. And there is no solution in sight. Marketing copy is one of the sole major use cases where random inaccuracies are mostly not catastrophic. But you're never going to trust LLMs to write code or legal contracts or technical documentation or even your own emails until the hallucination issue is drastically improved.
That leaves us with a few other niches that are, frankly, not very monetizable. ChatGPT is very good at writing student essays, but students don't have very much money to spend. It makes a fun chatbot, but people are unlikely to pay much of anything for a chat companion.
Like crypto, AI is a case of really intellectually interesting technology that nerdsniped a lot of smart people. Like crypto, AI has a few small (in terms of market size) use cases where there's clear PMF. Like crypto, there are potentially a lot of other much larger use cases that the nerdsniped smart people are imagining. Like crypto, these large use cases are going to require a lot of fundamental improvements to the tech itself before they become feasible. Like crypto in 2021, AI valuations in 2023 are treating those large scale use cases as imminent rather than long-term speculative.
(Note this isn't a statement about which tech has a brighter long-term future. It's still more than possible to believe (as many on HN do) that AI will be transformative over the next couple decades and crypto is vapor. But the point is AI valuations are implying that it's going to be transformative over years not decades, and it's pretty clear that's not the case. There are parallels with crypto in 2021, where regardless of the long-term promise, the short-term returns simply cannot support the valuations.)
From the thread, the author states that spam/"marketing content creation" is the only viable area of application.
> This is consistent with the primary learning from the 2020-2021 class of GPT-3 startups (a category of startups willed into existence by VCs and powered by hype), which is that commercial use cases have been falling almost entirely into the marketing and copywriting niches.
Having lived through hype wave after hype wave after hype wave in tech over the last forty to fifty years, the commentary about the behaviour of VCs resonated. I wonder if part of why VCs back so many copycat companies in hype cycles is a structural incentive to just invest money in things that are getting press, raising the VC's profile and attracting more deal flow and limited partners.
This led to idle speculation: If it was possible to short early-stage startups that VCs were backing, there would be as much incentive for the media to discuss a startup’s shortcomings and vapourware promises as there is to repeat their breathless braggadocio.
I think there is some truth to the statement, although many VCs that I've met are really sharp and thoughtful. Not at all the caricature of greasy tech illiterates that sometimes gets thrown around. You kind of can't generate good returns if you aren't able to think about the long-term trajectories of industries.
But I think that with the bubble times of 2018-2021 a lot of people entered the industry or changed their behavior because it was trendy and cool, and they have been way more down to ride the hype cycle rodeo.
I have also met many intelligent and thoughtful investors. I am speaking only to the systemic effects of a kind of marketplace for information that overvalues hype, and undervalues criticism.
My speculative musing is that if people could take short positions in startups, and not just coattail on as one of a group of investors, I wonder if we’d get a lot fewer breathless PR puff-pieces and a lot more cases for pessimism around certain types of hype cycle investments, backed by investors betting their own money against the hype.
I think that that's very possible, and I completely agree with you on the phenomenon. There are also few limitations on your ability to talk your own book in the private markets which further turbocharges the hype cycle.
Web3/NFT/Crypto promises: Very clever and advanced tech, People will stop using the old stuff because they hate the government and want to save pennies.
Web3/NFT/Crypto realities: using it is expensive and hard, scams left and right, and the government actually arrives but doesn't remedy the damages. People are not in it to save pennies on transactions but to get rich quick.
DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical, we just used so much data to train it.
DALL-E/ChatGPT reality: Wows everyone, people actually use it tirelessly for writing code, creating artwork, recreationally etc.
We will probably hit the limits soon and won't have AGI next year, but the stuff already delivered is already useful. The crypto stuff might become useful, but it's nowhere near the hype.
> DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical, we just used so much data to train it.
> DALL-E/ChatGPT reality: Wows everyone, people actually use it tirelessly for writing code, creating artwork, recreationally etc.
That's a pretty generous if not biased take, is it not? For me personally, ChatGPT is underwhelming and hasn't done anything remotely impressive when I have used it. And there's a lot of fervor around people using it, but are there interesting use cases outside of advertising? And are there not a ton of harmful use cases? The internet is going to be absolutely filled to the brim with GPT generated junk.
It's really good at things that are not hard but boring, which makes it extremely valuable.
For example, I needed a tag cloud in SwiftUI. A very easy but very boring task; I guess there are tons of libraries for it, but I don't like using libraries for this kind of thing.
As if it were my junior, I told ChatGPT to generate a tag cloud and it did. It got the main things right, but I needed it to loop through an array of custom structs, so I gave it the structure of the struct and told it to modify the algorithm. ChatGPT did it very well, as if it understood the structure of my custom struct type.
But I needed the tags to be clickable. I told it to make them clickable and it did, and it correctly guessed how to connect a property from my custom type to the click action. I told it to change the colors, size, etc. and it was able to do it all. Sometimes it generated incorrect syntax, but that wasn't a problem because I told it to fix it and it did.
If having a junior developer as an assistant isn't valuable, I don't know what is.
I also use it to drill down on my curiosities. Yesterday I was curious how a torrent client can connect to peers to share a file, and I made ChatGPT explain to me step by step how NAT works and what strategies developers use to overcome issues with it (if curious, apparently they use this thing called a STUN server, which is basically a remote machine that the client can connect to in order to learn its own public IP address). So it's not just a junior but also an expert in a domain that can answer questions conversationally. Much better than Googling keywords, because Google is ridden with spam and tries to be clever about your queries without actually being clever. So once I learned the technical keywords like STUN server, Google became useful again.
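To make the STUN idea concrete: the client fires a tiny UDP "binding request" at the server, and the server replies with the public IP and port it saw the packet arrive from. A minimal hand-rolled sketch of an RFC 5389 binding request (the Google STUN server is just a commonly used public endpoint, picked here for illustration):

    import os, socket, struct

    def stun_public_address(host="stun.l.google.com", port=19302):
        """Send a STUN binding request, parse the XOR-MAPPED-ADDRESS reply."""
        magic = 0x2112A442
        # Header: type 0x0001 (binding request), length 0, cookie, txn id.
        request = struct.pack("!HHI12s", 0x0001, 0, magic, os.urandom(12))

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(request, (host, port))
        data, _ = sock.recvfrom(2048)

        pos = 20  # attributes start after the 20-byte header
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from("!HH", data, pos)
            if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
                xport = struct.unpack_from("!H", data, pos + 6)[0] ^ (magic >> 16)
                xip = struct.unpack_from("!I", data, pos + 8)[0] ^ magic
                return socket.inet_ntoa(struct.pack("!I", xip)), xport
            pos += 4 + attr_len + (-attr_len % 4)  # attrs are padded to 4 bytes
        raise RuntimeError("no XOR-MAPPED-ADDRESS in response")

    print(stun_public_address())  # e.g. ('203.0.113.7', 54321)

Once both peers know their public address and port, they can attempt to hole-punch a direct connection through their NATs.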
My concern would be the same as with a Junior Dev. It's fine if they don't know something but please don't bullshit/make something up. Maybe tech questions are immune to this but it doesn't seem to have any conception of true and false.
This is a silly example but I asked for a list of Star Trek references in Seinfeld and it gave 6-7 examples that all sounded genuine (Jerry makes a joke about transporters in the Contest episode) but were 100% made up. If I wasn't super familiar with the show I wouldn't have been able to tell most were invented. With code generation that's less important because we have ways of testing code for "truth" but I would worry about relying on any factual statements from the thing.
Sure, humans are not obsolete just yet! People are needlessly freaking out about losing their jobs or even reason for existence to AI. The reason for employing developers or designers is not that we need someone to write loops and draw lines.
It's a tool that makes some things significantly easier, and it does have risks. It will replace only people who are doing jobs not suitable for people.
The problem is that most of us work for companies ACTIVELY TRYING to use this stuff to replace human beings, INCLUDING THE ACTUAL DECISION MAKING and slapping black boxes everywhere in their bureaucratic processes because "machine says no" is a SUPER beneficial thing to a business, especially the modern massive corporation that doesn't really have competition and is mostly profitable because it ignores problems.
I give it five years until even we here are unable to get the attention of a tech company employee to fix our wrongly locked out account because not only are the primary touch points """automated""" but the appeals and appeals of appeals are also entirely automated.
Everyone here will get to enjoy "machine says no" a hundred times more often. Every tech support or bill support or anything support will put you through half an hour of terrible "AI" interaction before you are even allowed to be routed to a human, and businesses will use this to get rid of even more call center employees.
Hate not being able to understand the thick accent in your support call? Get ready for the 1% of the time the AI in the phone call puts together sounds that don't actually form words, or just straight up misunderstands what you say, and nobody will believe you because "it's so accurate". Get ready to be gaslit by your fucking phone.
The prevalence of “computer says no” or “machine says no” in our modern society makes me scream and can honestly send me into depression and anxiety. It creates the most helpless feeling.
I have an increasing pile of issues with companies’ services that just persist because fixing them requires getting in contact with a human and a human who actually knows something and isn’t just a terminal for the computerized system.
20 years ago, we could have had the same conversation about outsourcing customer support. It was rife with problems and limitations, just as AI today is.
But that didn't stop companies from doing it anyway. The C-suite isn't listening to researchers and the general public, they're looking at what their shareholders think they 'should' do. Once McKinsey and BCG whip out their "AI Digital Transformation" powerpoints, it's over.
This points out the most dangerous part of ChatGPT. It's a highly confident idiot. When I told it it was explicitly wrong, it basically responded with "I'm sorry it wasn't clear".
Not ChatGPT, but GitHub Copilot just wrote me some documentation for a library I'm working on that was surprisingly good. All I had to do was change some wording and clean up the formatting; it probably saved me a couple of hours.
>The internet is going to be absolutely filled to the brim with GPT generated junk.
It is already filled with human generated SEO junk that is largely worse than GPT junk. Smart people who like to shit on this stuff forget how fucking dumb most people and things are.
I didn't forget, and yes, of course it's already filled with junk. That's no reason to be excited about doubling down; in the end it's still humans providing the impetus.
Humans already struggle to tease value out of the noise that is the SEO and ad and marketing filled internet. Making producing junk, noise, and spam easier can only possibly make that worse.
This is like turning on your microwave while playing a shooter on WiFi because "the space my router is in is already full of 2.4ghz noise".
We all (students and staff) use it a lot. It'll write lectures, help think up interesting worksheet ideas, help students code, help them think of things to write about and how to structure assignments.
I needed a lecture on regex; ChatGPT wrote it for me in 30 seconds. Then I asked it for some problems for the students to solve, and it wrote those too.
It may not be relevant to you, but for some of us it is changing the way we work. That hasn't happened since the dawn of the internet, or social media, or mobile phones.
The next generation are using it, and using it a lot.
Oh good, as if the profession of teaching needed to be hollowed out even more. Now, instead of a carefully assembled and designed curriculum, students will be fed literal autogenerated nonsense.
If you do something in your area of expertise half-way with AI, someone somewhere WILL try to do it all the way and market replacing your expertise with just more black boxes for cheaper, and the people who pay you who aren't experts in your field, WILL be sold on that offer.
Nobody will listen to you when you talk about how it is problematic or wrong or error-prone or anything. Think of how angrily people argue about Tesla's camera-based self-driving, and think of all the bad takes about how "it's safer than a real person" despite the lopsided and bad statistics behind that claim. Instead of arguing with strangers on the internet about an extremely rare outcome that could theoretically kill a random person, now you are having this exact style of conversation with your boss, about how the AI model he says will replace you has an entire class of errors that humans aren't familiar with, don't seem to notice very well, and that will absolutely hurt things, but only rarely, in the future.
Get ready for a future where pretty much everything inexplicably fails 1 out of 50 times and nobody will ever be able to tell you why, they won't be able to fix it, and companies prefer this anyway.
If it's anything like my experiences so far, that lecture is likely to be riddled with plausible but incorrect statements. Any chance you could paste it here?
> The next generation are using it, and using it a lot.
So we should expect the Flynn effect to reverse even harder the coming years? The modern classroom is already making kids dumber, this might actually reverse many of the effects of education and put us back very far.
OpenAI's actual promises are "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."
A chatbot that's relatively accurate at interpreting and summarising stuff compared with previous-generation chatbots, and an image generation algorithm that's actually pretty good, are a damned sight more useful than NFTs, which is why I agree with others that the comparison isn't helpful. But I don't think it's realistic to characterise OpenAI and AI enthusiasts in general as under-promising.
I agree, but it can be "wow, that's good" without "wow, that's going to replace every single creative job in the world by 2024", which a lot of people seem to expect.
It's a bit like thinking autonomous parallel parking will automatically bring you fully autonomous cars soon
I don’t think it’s that people expect it. It’s that some people absolutely despise the idea of artists and have some irrational desire to erase their jobs and automate it all.
These people usually aren’t people signing checks—just angry people online, and oftentimes tied in with political motivations. I don’t think many companies are licking their lips at the idea of firing artists (yet). They’re probably thinking about how they can use this to assist artists to get more done faster and at higher quality.
> I talked to Hollie Mengert about her experience last week. “My initial reaction was that it felt invasive that my name was on this tool, I didn’t know anything about it and wasn’t asked about it,” she said. “If I had been asked if they could do this, I wouldn’t have said yes.”
The main issue here is that someone is using her name. Luckily she does have legal protections available to her! She can trademark her name!
> “I feel like AI can kind of mimic brush textures and rendering, and pick up on some colors and shapes, but that’s not necessarily what makes you really hireable as an illustrator or designer. If you think about it, the rendering, brushstrokes, and colors are the most surface-level area of art. I think what people will ultimately connect to in art is a lovable, relatable character. And I’m seeing AI struggling with that.”
Which is spot on! AI is fantastic at rendering (which tends to be everyone's least favorite part of the process) and not so great at everything else.
> “As far as the characters, I didn’t see myself in it. I didn’t personally see the AI making decisions that I would make, so I did feel distance from the results. Some of that frustrated me because it feels like it isn’t actually mimicking my style, and yet my name is still part of the tool.”
Good art is and will always be about the end result after a series of creative decisions made by the artist.
Professional artists have had assistants that paint large portions of their works since at least the Renaissance. In these cases they rely on their assistants for basic rendering tasks like backgrounds and folds on clothing. What makes it their painting is that they made the creative decisions.
I come from a very creative family, but even I can see the writing on the wall. Join any Discord server for a high-quality image network and you will see incredible art pieces appear all the time, based on someone's description of what they want. Here is a random image I found: https://cloud.nwcs.no/index.php/s/TmzWzBW6fae4pkp
And this is very important: we have only had these tools for a very short time. Imagine what they can do in 10 years. Or 6 months?
What about ChatGPT, imagine it in 10 years? Every single task that simply requires reiterating known information can be replaced if desired. That doesn't mean I desire it. But I also know what kind of world we live in. The economy comes first, people second.
That people are angry I can completely understand. I can also understand trying to argue that "this means nothing" and "it's a grift", like someone working in oil hearing about climate change. Personally I am in awe that this is possible, but also sad at what will inevitably happen.
I’m an artist and I’m not worried at all, because I see more potential in the hands of artists than I do amateurs with zero skill.
AI translation has been around for decades now. It’s pretty damn good these days. But translators still find plenty of work and most of them will tell you that they use AI translation to improve their work efficiency. As with any job, the last 10% of work is always the hardest. AI can get a pretty good “sketch” of what someone wants, but a real artist can polish out the details and make it even better.
The thing to realize is that this means translators are being replaced by AI. If a translator can reduce time per translation by 50% because they're just doing a proofread at the end, that means they can do 2x more work, i.e. half as many translators are needed.
In translation AI has replaced mainly use cases where a human translator would never have made financial sense. It replaced crappy automated translation.
People were cheering for automation in the 60s, saying we wouldn't have to work more than 3 hours a day by the 80s, I think people should reboot their crystal balls before prophesying the end of artists
The same thing will happen to artists that happened to the factory worker with the rise of factory automation: most of their jobs will go away, and there will be only very few artists who can even make a cent off their work. Worse, that exact problem already exists today, and now there won't even be a skill barrier to filter people out.
The artists who currently make $x a year off their work will lose that income and have to find new careers. Then a giant conglomerate will buy all the companies that create digital works, their PR arm will tell us how much better off we all are now that art is cheap and automated, and once again we will see individual income not increase for fifty years as GDP doubles again and wealth inequality reaches even stupider heights.
The main driver is technological advancement and the generated value is _obviously_ net positive.
Not saying that there aren't issues that need to be discussed, nor am I saying that there isn't any (unnecessary) hype. But the comparison to Web3 is a stretch.
The author addresses this:
> One last thought -- don't overindex on the web3 <> LLMs comparison. Of course web3 was pure hot air while LLMs is real tech with actual applications -- that's not the parallel I'm making. The parallel is in the bubble formation social dynamics, especially in the VC crowd.
Every day I see one or two new thought pieces on how AI is not actually good/capable/impressive, or how AI is overhyped. I have seen zero thought pieces on how AI is amazing and deserving of the hype. Meanwhile, a lot of people are having fun with, or doing useful things with, Stable Diffusion and ChatGPT.
The negative takes are mostly correct in all the limitations they talk about, but what they miss is how amazing these things are despite these limitations. These things are remarkably simple and limited yet they can generate realistic photos of myself in places I've never been, wearing clothes I've never worn, doing things I've never done, with a quick text sentence. Or nearly pass the bar exam.
On top of that, a lot of the limitations have straightforward ways to address them, many of which are already in progress. It is going to get really interesting. Stable Diffusion knows nothing about the images it produces; it's just repeated denoising with image targets. It doesn't really understand anything about your text either; it's just matching up tags. But both of those things can easily change. Put a big language model in front of it to better understand text. Variants of these image models already have depth information. Next up: 3D object information, and maybe after that models of physics, so it can understand how things would actually work in the scene, and so on.
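"Repeated denoising" sounds hand-wavy, but the core sampling loop really is short. A schematic DDPM-style sampler, where eps_model stands in for the trained noise-prediction network (schedule values are a typical choice, not Stable Diffusion's exact configuration):

    import numpy as np

    def sample(eps_model, shape, T=1000):
        """Schematic DDPM sampling: start from noise, denoise step by step."""
        betas = np.linspace(1e-4, 0.02, T)   # noise schedule (typical choice)
        alphas = 1.0 - betas
        alpha_bars = np.cumprod(alphas)

        x = np.random.randn(*shape)          # start from pure Gaussian noise
        for t in reversed(range(T)):
            z = np.random.randn(*shape) if t > 0 else 0.0
            eps = eps_model(x, t)            # network predicts the noise in x
            # Strip out the predicted noise, then re-inject a little fresh noise.
            x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
            x = x + np.sqrt(betas[t]) * z
        return x

All of the "knowledge" lives in eps_model; the loop itself has no idea what an image is, which is exactly the point being made above.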
> but what they miss is how amazing these things are despite these limitations
The author of this Twitter thread is François Chollet, Senior Staff Software Engineer at Google, creator of Keras and major contributor to TensorFlow, and author of Deep Learning with Python, considered by many a seminal book in introductory DL...
I'd say he understands how amazing these things are despite the limitations, and he specifically says in the thread that it's an amazing time to be building DL apps.
>The author of this Twitter thread is François Chollet, Senior Staff Software Engineer at Google, creator of Keras and major contributor to TensorFlow, and author of Deep Learning with Python, considered by many a seminal book in introductory DL...
That sounds like someone who's great at API design, not a deep learning researcher.
> That sounds like someone who's great at API design, not a deep learning researcher.
Your point being, that a data science researcher or an academic would be better positioned to speak about DL products than one of the driving forces of two of the most widely used ML/DL libraries in the world, and an author of a seminal book on DL applications?
Don’t think anybody doubts that the current achievements are intellectually impressive. The issue for VCs is how valuable are they from a business perspective. Creating photos of people in places they’ve never been is really cool, but realistically how much money will people actually pay for that?
For AI to be as valuable as current rounds are implying it has to actually be capable of replacing your lawyer, your accountant, your software engineer. ChatGPT is nowhere near accurate enough to do that. It’s the same issue we ran into with super cruise control versus fully autonomous driving. Or speech recognition. Early progress was fast. Last mile in AI is always brutal.
Maybe there’s some argument to say ChatGPT won’t replace your lawyer or software engineer but will make them 20% more efficient. Possible. Though I’m still skeptical. ChatGPT’s unparalleled ability to create really convincing-sounding errors is a big hidden negative productivity drag.
Generative AI is good for SEO copy because it’s an AI talking to an AI. Google looks at text generated by ChatGPT and is like “wow, that’s exactly how I think about things.” They were trained on much of the same content: the web.
The biggest challenge for generative AI is its willingness to make things up. It’s fine when you’re playing around. Not so fine when you’re expecting it to actually help you in a real way.
I suspect this is why Google has not debuted such an interface despite literally decades of work on AI. You have to be able to bolt a “truth filter” onto the AI, which seems difficult.
As far as consumer interest goes, the problem with Web3 is it sells the means rather than the end. Nobody cares if your Twitter/Substack/Spotify Web3 alternative is decentralised or you own your own data on the platform. To win it has to be immediately useful and/or better than alternatives.
Content-generation AI is so obviously useful to the majority of people, and you don't need an understanding of how it works in order to be impressed by it.
Impressive for sure. Usefulness is still being explored (outside of a few early things like Copilot).
The majority of people are still interacting with this stuff primarily as a toy, and while it's a much, much smaller leap to imagine how you'd use it in everyday life vs. something like web3, most people aren't yet, and I don't think it's an absolute certainty that they will (or at the very least, how universally they will).
"We tend to overestimate the effect of technology in the short term and underestimate it in the long term."
The comparison with web3 is very excessive though. This AI stuff is at least somewhat actually useful. Web3 was a gigantic billion dollar bubble that produced very little in the way of things that are useful for any purpose, even playing around. It's one of the most vapid bubbles in history outside pure financial instrument bubbles.
When you say "people" I hope that you mean "minority of people". Yes, I've heard people saying that laptops will go away. Some people said that. Yes. Some people.
Web3 is just laughable. Again, some people.
Google is obviously not dead.
However, I think with AI there's huge potential. It will change the world and the way some things are done; some jobs are gonna go away and new jobs will appear. Much more disruptive than a f'ing web3 joke.
Laptops sort of have gone away, in the sense that the smartphone has become many people's first computing device over the past decade.
Adults who did not own a computing device before the smartphone, may choose to continue living their lives without a laptop/desktop. Kids who have grown up with iPads and Chromebooks at school may look for mobile options in college first instead of a standard Mac/Windows laptop.
I actually think AI is much more like VR or 3D-cinema tech than Web3: it sounds like it might be a game changer, but nobody actually likes the new medium that much after the initial cool factor. AI generated images are much more impressive though, and some things will be unique.
In the next decade, I see AI tackling a special category of problems: those that shouldn't be solved, or else the system as a whole gets worse -- chatbots for customer support, absurd amounts of content creation. There's much more content today than 20 years ago, and yet my enjoyment has gone down. If I were a member of Congress, I'd surely be thinking about ways to slow this down.
I too have lived through many hype cycles. I jumped on one in 1992 and created one of the first ISPs in Michigan. That changed my career path from automotive to tech. It was an exciting time as thousands of ISPs were launched.
I completely ignored the hype around blockchain, NFTs, etc. I even block anyone who says they are a blockchain expert on Linkedin.
But I am all in on leveraging LLM for new business use cases. My mind is blown at what we can do with DaVinci 3.5 and looking forward to evaluating 4.0. ChatGPT is a demonstrator. DaVinci is ushering in the next wave of innovation.
Jumped in at the same time (the year it became legal to make money on the Internet, which I think was '93), co-founding an ISP and web agency.
And fully agree with you, except that I think DLT (distributed ledger technology) has use cases, for instance, ATMs in grocery stores.
Meanwhile Midjourney and text-davinci-003 are super impressive at making interesting things from probabilistic blends of other things.
ChatGPT is like decades of Bayesian "reasoning" or probabilistic (aka "evidence-based whatever") free association finally having a creative application and outlet.
What I find interesting is that there has been a (perceived) massive change in popular opinion re "AI". If you got an AI story on HN 6 months ago, there were all kinds of "it's just statistics" comments (also incorrect). Now the mood has changed and all the laypeople are gushing about how great it is.
OTOH, my impression is that people who have been involved in ML for a while didn't have any sea change in their opinion of the technology based on the recent advances. These were predictable, but cool, extensions of things that were already known, and represent a fundamental advance in polish and marketing rather than technology.
My point is that the public discourse is now mostly dominated by people looking to profit from hype, not people who actually have experience in the technology, which is of course going to lead to a web3 type feel
Yep. The value/utility of the underlying technology doesn't really matter. The scammers will just switch to using whatever buzzwords are most hip and eventually people will associate the buzzword more with the scammers than the technology itself. This was taken to the extreme with cryptocurrency post-2015 where most people never even had an actual interaction with cryptocurrency but believe they know what it is from all the tangentially related scams they heard ads or news about.
The utility of AI can be real, just as the utility of bitcoin is, but it'll be drowned under things like "Quantum AI trading platform" or "NFT"s in the public perception.
I blame much of the hype around the "metaverse" on crypto scammers. VR was arguably in a slow but steady growth phase centered around games until it got tied up with that overhyped and poorly defined buzzword, which also swallows every hyped-up technology in existence (AI, AR, blockchain, etc). As a long-time VR developer going back to 2015, it was hard to watch it happen.
I still think social authentication and S3 replacements are valid use cases for web3.
As for GPT, I think it is revolutionary. It is the first chatbot that I prefer to work with over Google. Way easier to use than Google.
If it becomes a paid service (assuming the pricing is right), I feel its incentives align better with mine than Google's do.
Where things get messed up is recommending products and services. If it can stick to a no-pay-to-rank type of service, then it would revolutionize the economy.
No assistant is perfect, but it seems to do better than a lot of human assistants.
Will it be perfect in the future? No, but I think if it can offer some kind of confidence factor for its answers, that would go a long way.
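A crude version of that confidence factor is already possible: the completion endpoint can return per-token log-probabilities, and averaging them gives a rough signal of how "sure" the model was. A sketch with the openai Python package (prompt and parameters are illustrative; this is not calibrated truthfulness):

    import math
    import openai  # pip install openai

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="What year did the Apollo 11 moon landing happen?",
        max_tokens=20,
        logprobs=1,  # ask for per-token log-probabilities
    )

    choice = response.choices[0]
    lps = [lp for lp in choice.logprobs.token_logprobs if lp is not None]
    avg_prob = math.exp(sum(lps) / len(lps))

    print(choice.text.strip())
    print(f"mean per-token probability: {avg_prob:.2f}")  # rough proxy only

A confidently hallucinated answer can still score high, which is why this is only a proxy and not a solution to the trust problem.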
This is a dangerous assertion.
Web3 never had any "there there." No coherent use case, no value add. Smart contracts? Maybe. No proposed use case actually requires or benefits from a blockchain. The currencies were always blatant speculation. A cab driver in OKC recently told me he was "invested" in diesel coin. A) WTF is that, and B) in what way is that an investment (you know, where you dedicate capital to a value engine, and when value is created you get out more than you put in)?
AI as it exists TODAY has the potential (with a bit of prompt engineering and a free account) to assist everyone in their jobs. It will disrupt the software industry, art, writing, education, law, science.
Specialized AI assistants already exist and work startlingly well. It can write at a college level. It can code at a college level. It can learn specialized knowledge-worker skills in a trivial amount of time (law, for example).
It’s ok to not be optimistic. But if you discount it entirely you’re in for a bad time in the coming year, three years, five years.
Actually no, do whatever you want. AI will come as a surprise to something like 6 billion people. There’s no harm to me in you being part of that group. And frankly, the existence of different opinions about the future is a great hedge every society makes.
I am really looking forward to AI that can help us solve hard biology and engineering problems (say, nuclear reactor design or materials science) practically. Or an AI that will help solve coordination problems by telling us exactly where to compromise and how to negotiate so that everyone will agree to, say, relax zoning regulations. I know there are things like AlphaFold, and probably lots of proprietary things in niche industries I'll never hear about because no mainstream media is going to cover them. Are there such examples?
I find the likes of ChatGPT wonderful, but in the end image and copy generation seems like a very first world/"content creator" use case that won't help us solve critical problems.
> “The fact that investment is being driven by pure hype, by data-free narratives rather than actual revenue data or first-principles analysis. The circularity of it all -- hype drives investment which drives hype which drives investment. The influx of influencer engagement bait.”
I think this is cynical. The author is right to see ads and marketing using hype for clicks; people do stuff to make money on the latest trend, that's just nature.
But I don't know of anything productive in web3; it was hype all the way through. English is not my first language, and it almost feels like magic to have ChatGPT clean up and rewrite my posts. I've also used it for coding examples. I don't think this is equal to web3. Maybe it needs time to mature to billion-dollar scale, but I wouldn't bet money against it.
Crooks and swindlers will find their way into anything where big money is likely to be hyped. Unlike web3, I actually see products being built from LLMs.
Very skeptical of this, because if the AI is strong, safety and quality will always be an issue, and if safety isn't an issue, the output may not be that interesting because the model is strongly curbed.
For example, I saw a Twitter post where a podcaster said the AI generated introduction for his podcast was better than anything he could write. I think the takeaway is that his writing isn’t that good and he also thinks boilerplate is impressive.
To me that is excellent. I hate emails and I’m admittedly not good at writing them.
I now type “write an email that says it was nice meeting and getting to know the team. Send me the scope docs which I’ll fill out and return. Let’s schedule a status in two weeks”
It gives me a great email, which I proofread and send, in less than 2 minutes.
So now instead of you spending a few minutes putting together a minimal email with just the required info that takes me only a little time to read and understand, you will generate a blob of whatever content and length, and basically force me to either ignore your email, or waste a bunch of time reading it to try and tease out useful info.
Doesn't that mean you are basically offloading work to me?
> So now instead of you spending a few minutes putting together a minimal email with just the required info that takes me only a little time to read and understand
This is literally what ChatGPT does. No fluff, just facts. I can even add ‘be concise’.
Possibly. I'm certainly excited by it, but unless there are patents involved (for all I know there might be, but I've not heard anything about that either way), I'd say 2022's headline AIs look like they will be the next spreadsheet or the next spell-check/grammar-check rather than the next Google: any software that takes less than a few million to implement is a race to the bottom with regard to monetisation.
What's annoying is that I have seen not one realistic idea come out of this. This is not some kind of alien technology; it is a concrete algorithm. You cannot just say 'this will revolutionize everything' and then refuse to elaborate. It is hard to integrate this into an existing project, and it does not solve the agency issue that we already had, which is that AI can talk, but it cannot make decisions for you. So, more emasculated chatbots that nobody wants to talk to. And how big a market is copy editing, really? To publish something based on ChatGPT you still need to put in a ton of work, because it does not check its sources and makes stuff up. So how is AI really better than 5 years ago, other than that it's a better writer with a better style? That wasn't the problem 5 years ago, and it is not today.
The killer feature with ChatGPT is that you can interactively refine a result. Your skill as a writer for a magazine for example now can be that you're good at recognizing good writing and can articulate how it can be made better, but you don't need to be a good writer yourself anymore. This was absolutely not possible 5 years ago or even 1 year ago.
ChatGPT doesn't need to be a "chat" product. It's an interactive stateful tool.
Oh you poor soul. AI is already making decisions for and about you, and policy will often prefer that automated decision over a human one, or worse, will farm out the "human appeal" to what is basically an internal Fiverr system that requires the humans to only take about 60 seconds for each appeal and can not possibly make good decisions.
I know from my work that a high quality human review of a simple situation with significant data labeling and highlighting important and relevant info to be reviewed costs about $1. If you are spending any less than that on a human review of arbitrary data with arbitrary rules you should consider it worse than a coin flip.
> You cannot just say 'this will revolutionize everything' and then refuse to elaborate.
This is intentional, as most of the low-hanging fruit of AI transformation are things people either don't care about, or think would be a net negative. Like replacing junior publicists with AI-written PR copy. What difference would that make to the consumer? Very little if any.
On the other hand, replacing real human art (as you might see on The New Yorker cartoons or The Economist) with AI-generated images may make some readers unhappy.
People overestimate what's possible short-term and underestimate what's possible long-term.
Current AI models have three main limitations:
- rapid skill acquisition. Example: someone invents a new programming language, something like Elm or Rust. Humans can start using it right after reading a "quick start" page and a few tutorials from the authors of the new language. How much training data will GPT-style models need to start using that language? A lot, like the output of hundreds or thousands of people. This needs to go down by a factor of 10x to 100x to match humans.
- agency, or taking actions guided towards a goal. Example: can you ask a GPT-like model to book a flight ticket for you? Have it help you learn Photoshop? A new IDE? Test your latest app or indie game and find bugs? No. The ability of current models is still not good enough to be useful to an average person with regard to interacting with the outside world.
- acting in the physical world. Example: an average human can learn to drive well enough to pass a driving test in on the order of 100 hours of a driving course. How far away are we from a system that can control a humanoid-style robot and learn to drive in on the order of 100 hours? Currently, we can't do it even with billions of dollars and specially designed hardware. Using a humanoid robot to control a car is not even useful as a benchmark for state-of-the-art machine learning systems.
IMO the currently existing systems like ChatGPT or Stable Diffusion are, all put together, worth in the range of $10B to $100B over the next 3-5 years.
Future systems that address all 3 limitations mentioned above may literally change the whole reachable universe, if we decide to build self-replicating space probes (von Neumann probes). We know that there are physical systems that are not very intelligent but are capable of exponential growth, like viruses or bacteria (humans too).
The main limitation of biological systems is their limited adaptability, especially to a lack of water. If robots can build other robots, avoiding the bottlenecks of human intellectual and manual labor, then the robots are limited only by the resources and energy available. We have plenty of both on Earth, and the Solar System is full of them.
Also, I'm just talking about human-like abilities. All of that is possible without involving concepts like superintelligence. Bacteria are not very smart, but they can multiply exponentially.
The spread of possibilities is enormous.
All of that hinges on your timelines for the 3 above-mentioned limitations. It's like predicting when the atomic bomb will be possible. It may not happen for decades, or there may be, right this very moment, someone with an idea that will make it all possible.
What is web3 actually? I don't think there is a definition people can agree on. So even bashing web3 as if it is anything other than at best a vaporware, worst case a scam, is fruitless and not grounded in reality.
Let's talk about the current climate in AI. From his tweet, he seems to point to Sam Altman's statement that GPT-3.5+ is going to bring 'civilization transformation'. Well, if you follow Altman's past statements, he likes to make grandiose statements like this, and at the end of the day, it is more his personal style of speech than anything else.
So let's get back to the comparison of web3 vs. GPT-style AI agents. I would argue the latter is already partially a reality. Imagine you have multiple modular in-context-learning agents; here is the opportunity: a machine that is programmable through natural language and examples/demonstrations. The potential level of automation is going to be insane, even scary. What if we hook it up to some robotic arms? If someone makes this work, we will have a factory that can make many, many things, at the same time, in the same place, with little human involvement. This, of course, is going to fundamentally change a wide range of industries, or capitalism itself, and will have geopolitical implications.
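"Programmable through natural language and examples/demonstrations" is easy to demo. Below is a minimal sketch of in-context learning; the task, the demonstrations, the model name, and the client usage are all invented for illustration.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Two demonstrations act as the "program"; the last line is the new input.
    prompt = (
        "Convert each order line to JSON.\n\n"
        'Order: two widgets to Oslo -> {"item": "widget", "qty": 2, "city": "Oslo"}\n'
        'Order: one gizmo to Berlin -> {"item": "gizmo", "qty": 1, "city": "Berlin"}\n'
        "Order: five sprockets to Lima ->"
    )

    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(out.choices[0].message.content)  # typically the matching JSON line

No fine-tuning and no hand-written parser: the examples are the specification. That is the sense in which these agents are "programmable" by demonstration.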
Most of them hinge on incredible yet highly nebulous improvements to the technology to solve as-yet-unsolved problems, based on a lot of extrapolation from hype and "dude, trust me, the next thing is gonna be great" assurances from AI companies, plus a generally poor understanding of how the technology works and what its limitations are.
The central logic error is something along the lines of "I don't understand the limitations of this technology, therefore it must have no limitations".
This just isn't the case. People are being productive with AI right now.
Crypto ran out of excuses for having no actual products outside of itself a long time ago. That is, apart from the money laundering and grift that it was used for from day one.
You can call the productive activity what you will. Crypto has no "features" any more than it has "products".
Copilot works. GPT is helping me write more coherently. Stable Diffusion is entertaining my nieces with silly pictures. Biotech AI-based drug discoveries are being announced every day.
Bitcoin's raison d'être is to serve as a decentralized currency free from the constraints of central banks or governments, and it fulfills this purpose admirably.
Its potential as a currency and store of value should not be dismissed based on its current limitations. Its adoption and value will likely continue to grow as more people become aware of that potential, particularly with the implementation of the Lightning Network, which aims to improve scalability and transaction speeds.
What do you define as takeoff? The transaction numbers are minuscule.
I personally think crypto transactions are a dead use case. El Salvador recently released numbers showing crypto remittances were in the 1% range in 2022, despite all the hype. Coinbase actually shut down its remittances product there after a couple of years and a couple of thousand transactions.
The operative word in hype cycles is always "will". Blockchains will revolutionize this, web3 will revolutionize that, Stable Diffusion will do this, ChatGPT will do that.
Talk to me about what it actually has changed and you may convince me.
I think you're misreading the situation with AI. It's already doing extremely useful things, and will continue to pop up all over the place, I expect.
* I already talk to my phone to type more than I actually type on it. This has finally made my phone roughly as efficient as my laptop for simply inputting English text.
* An artist named darknut worked for years in the Morrowind community retexturing the game and creating a new higher-resolution texture pack. ESRGAN upscaling lapped his work in a matter of weeks. I love his work, but the change in efficiency was a change in kind.
* I haven't done it myself, but I've heard countless reports of programmers simply asking for a program with a given set of specifications to be written, and having the resulting source code available seconds later.
* I wanted character-sheet portraits for a role-playing scenario I was creating, and Stable Diffusion was able to generate half a dozen perfectly suitable portraits in about half an hour. This just wasn't possible a year ago. (A sketch of that kind of workflow is at the end of this comment.)
I never got on the web3 hype train, but I'm already finding AI useful, and I don't expect it to play a less useful role over time.
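For the curious, that kind of portrait workflow is only a few lines with the Hugging Face diffusers library. A rough sketch; the checkpoint and prompt below are illustrative stand-ins, not necessarily what was actually used:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a public Stable Diffusion checkpoint (illustrative choice).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = ("oil painting portrait of a grizzled dwarven blacksmith, "
              "fantasy character sheet, detailed face")  # made-up example prompt

    for i in range(6):  # half a dozen candidate portraits
        pipe(prompt).images[0].save(f"portrait_{i}.png")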
- I could use it as-is for customer-service chatbots if I could trust it (a hard nut to crack, but I see it coming; You.com already has a nice factual chatbot that usually provides sources for its claims).
- I use it to summarize articles or rephrase silly "philosopher-speak" for me
- It could easily write business-email fluff for me (luckily, I just don't need that much)
Non-ChatGPT:
- Even plain intent detection is great for replacing those "press n for X" machines on the phone (sketched after this list)
- audio clean-up is amazing (it "imagines" an HQ recording from a noisy one)
- neural text-to-speech is amazing (the TikTok voice is a popular example, but there are even better ones)
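On the intent-detection point, here is a minimal sketch using zero-shot classification from the Hugging Face transformers library; the intents and the caller utterance are invented for illustration.

    from transformers import pipeline

    # An off-the-shelf NLI model doing zero-shot intent classification.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    utterance = "I never got last month's invoice, can you resend it?"
    intents = ["billing", "technical support", "cancel subscription", "sales"]

    result = classifier(utterance, candidate_labels=intents)
    print(result["labels"][0])  # the most likely intent, here probably "billing"

Route the call on the top label and the whole "press 2 for billing" tree collapses into one model call.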
I had to write 40 individualized reports this weekend. I fed my notes into GPT, along with a few directions about style, and it spit out competent reports that mostly only required tweaking. It saved me from losing my Sunday too.
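For anyone wondering how a workflow like that is wired up, here is a minimal sketch. The style directions, the note format, and the notes_by_person data are invented stand-ins, not the actual setup described above.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    STYLE = ("Write a short, professional progress report in the third person. "
             "Two paragraphs, no bullet points.")

    def draft_report(notes: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": STYLE},
                      {"role": "user", "content": f"Raw notes:\n{notes}"}],
        )
        return resp.choices[0].message.content

    notes_by_person = {  # invented example data
        "Report 01": "met weekly goals; attendance spotty in week 2; strong demo",
        "Report 02": "missed two deadlines; improved after check-in; solid tests",
    }

    for name, notes in notes_by_person.items():
        print(f"--- {name} ---\n{draft_report(notes)}\n")

Each draft still needs a human pass, which matches the "mostly only required tweaking" experience.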
I see people saying things like this online, and I have to wonder what their standards for "good" or "competent" are. I'm not going to ask you to post those reports so I can "judge" them; you shouldn't, and none of us cares that deeply anyway. But I'm still waiting for a good example of actual useful work done by these things. I'm replying to you somewhat at random; there are other posts like this in the thread.
On the art front, every single piece of AI-generated art I see is bad. Not bad like a human who can't draw, but bad in a different, still obvious (to me, at least) way.
On a personal front, I had a lot of fun generating funny pictures based on inside jokes with my friends a few weeks ago, but lately it barely makes us smile. I think the novelty is wearing off, so even that is gone.
Ultimately, it's not going to matter what I think, enough people think this is good content that it will be flooding the internet soon. It's all so tiresome.
Automating stupid and poorly planned tasks will just ossify whatever system forced you to do a stupid and poorly planned task in the first place. All you are doing is making your future life harder.
Except that these deep learning models are useful right now, we've demonstrated that brute-force scaling alone has the potential to improve them significantly, and there is a lot of room for architectural improvements to make them more efficient.
What's more, we're starting to see people stack/chain specially trained models to achieve very impressive workflows.
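One concrete shape such a chain can take: a speech-recognition model feeding a separately trained summarizer. The model choices and file name below are illustrative assumptions.

    from transformers import pipeline

    # Stage 1: audio -> text with a speech-recognition model.
    asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
    # Stage 2: text -> short summary with a separately trained summarizer.
    summarize = pipeline("summarization", model="facebook/bart-large-cnn")

    transcript = asr("meeting_recording.wav")["text"]  # placeholder file name
    summary = summarize(transcript, max_length=60)[0]["summary_text"]
    print(summary)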
In this analogy, would OpenAI be the Ethereum equivalent of the AI space: actually innovative tech with some real-world utility and a strong team of developers?
What non-scam, non-marketing-bullshit-with-an-actual-product thing came out of Ethereum? I know a few people who work in related fields, and it's shady money managed by people with exorbitant personality issues, doing endless POCs that will be sold... one day... to someone... totally, dude, like, radically expensive...
A very first-world perspective. I live in the third world. My currency keeps crashing every year. There are currency controls over how much foreign currency I can own. My government also randomly enacts haphazard financial policies, such as outlawing 86% of all circulating cash.
Crypto, especially stablecoins, has actually helped me protect my wealth. A real use case for me at least, and for many others like me in the third world.
The world is bigger than you think, and their problems are more varied than you realize.