Web3/NFT/Crypto promises: Very clever and advanced tech; people will stop using the old stuff because they hate the government and want to save pennies.
Web3/NFT/Crypto realities: Using it is expensive and hard, there are scams left and right, and the government does eventually arrive but doesn't remedy the damage. People are not in it to save pennies on transactions but to get rich quick.
DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical; we just used a huge amount of data to train it.
DALL-E/ChatGPT reality: Wows everyone; people actually use it tirelessly for writing code, creating artwork, recreation, etc.
We will probably hit the limits soon and won't have AGI next year, but the stuff already delivered is genuinely useful. The crypto stuff might become useful someday, but it's nowhere near the hype.
> DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical; we just used a huge amount of data to train it.
> DALL-E/ChatGPT reality: Wows everyone; people actually use it tirelessly for writing code, creating artwork, recreation, etc.
That's a pretty generous if not biased take, is it not? For me personally, ChatGPT is underwhelming and hasn't done anything remotely impressive when I have used it. And there's a lot of fervor around people using it, but are there interesting use cases outside of advertising? And are there not a ton of harmful use cases? The internet is going to be absolutely filled to the brim with GPT generated junk.
It's really good at things that are not hard but boring, which makes it extremely valuable.
For example, I needed a tag cloud in SwiftUI. It's a very easy but very boring task; I guess there are tons of libraries for it, but I don't like using libraries for this kind of thing.
Treating it as my junior, I told ChatGPT to generate a tag cloud, and it did. It got the main things right, but I needed it to loop through an array of custom structs, so I gave it the structure of the struct and told it to modify the algorithm. ChatGPT did it very well, as if it understood my custom struct type.
I also needed the tags to be clickable, so I told it to make them clickable, and it did, correctly guessing how to connect a property from my custom type to the click action. I told it to change the colors, size, etc., and it handled all of it. Sometimes it generated incorrect syntax, but that wasn't a problem, because I told it to fix it and it did.
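To give a flavor, here's a minimal sketch of the kind of view that back-and-forth ends up with; the `Tag` struct, the `onTap` closure, and the grid layout are illustrative stand-ins, not the actual generated code:

```swift
import SwiftUI

// Hypothetical stand-in for my custom struct.
struct Tag: Identifiable {
    let id = UUID()
    let label: String
}

struct TagCloud: View {
    let tags: [Tag]
    let onTap: (Tag) -> Void  // the property wired to the click action

    var body: some View {
        // A real tag cloud would use a wrapping flow layout;
        // an adaptive grid keeps this sketch short.
        LazyVGrid(columns: [GridItem(.adaptive(minimum: 80))], spacing: 8) {
            ForEach(tags) { tag in
                Text(tag.label)
                    .font(.caption)
                    .padding(.horizontal, 10)
                    .padding(.vertical, 4)
                    .background(Capsule().fill(Color.blue.opacity(0.15)))
                    .onTapGesture { onTap(tag) }
            }
        }
        .padding()
    }
}
```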
If having a junior developer as an assistant isn't valuable, I don't know what is.
I also use it to drill down on my curiosities. Yesterday I was curious how a torrent client can connect to peers to share a file, so I had ChatGPT explain to me, step by step, how NAT works and what strategies developers use to get around its problems. (If you're curious: apparently they use something called a STUN server, which is basically a remote machine the client can connect to in order to learn its own public IP address.)

So it's not just a junior, but also a domain expert that can answer questions conversationally. That's much better than Googling keywords, because Google is ridden with spam and tries to be clever about your queries without actually being clever. Once I had learned the technical keywords like "STUN server", Google became useful again.
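To make that concrete, here's a minimal sketch of what a STUN query looks like on the wire, assuming Apple's Network framework and Google's public STUN server (illustrative choices on my part, not from the original conversation): the client sends a 20-byte Binding Request over UDP, and the server replies with the public address it saw.

```swift
import Foundation
import Network

// Minimal STUN Binding Request (RFC 5389): a 20-byte header with
// message type 0x0001, zero-length body, the fixed magic cookie,
// and a random 12-byte transaction ID.
var request = Data()
request.append(contentsOf: [0x00, 0x01])              // type: Binding Request
request.append(contentsOf: [0x00, 0x00])              // message length: 0
request.append(contentsOf: [0x21, 0x12, 0xA4, 0x42])  // magic cookie
for _ in 0..<12 { request.append(UInt8.random(in: .min ... .max)) }

// Any public STUN server works here.
let connection = NWConnection(host: "stun.l.google.com", port: 19302, using: .udp)

connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }
    connection.send(content: request, completion: .contentProcessed { _ in })
    connection.receiveMessage { data, _, _, _ in
        // A success response (type 0x0101) carries an XOR-MAPPED-ADDRESS
        // attribute containing the public IP:port the server saw us on.
        if let data = data, data.count >= 2 {
            let bytes = [UInt8](data)
            print("got \(data.count) bytes, message type 0x"
                  + String(format: "%02X%02X", bytes[0], bytes[1]))
        }
        connection.cancel()
    }
}
connection.start(queue: .main)
RunLoop.main.run(until: Date().addingTimeInterval(5))
```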
My concern would be the same as with a junior dev: it's fine if they don't know something, but please don't bullshit or make something up. Maybe tech questions are immune to this, but it doesn't seem to have any conception of true and false.
This is a silly example, but I asked for a list of Star Trek references in Seinfeld, and it gave 6-7 examples that all sounded genuine (Jerry makes a joke about transporters in the Contest episode) but were 100% made up. If I weren't super familiar with the show, I wouldn't have been able to tell most were invented. With code generation that's less important, because we have ways of testing code for "truth", but I would worry about relying on any factual statements from the thing.
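That gap is exactly why generated code is the safer use case: you can pin it down with checks before trusting it. A minimal sketch of the idea, where `slugify` stands in for some hypothetical model-generated function:

```swift
// Hypothetical model-generated function: turn a title into a URL slug.
func slugify(_ title: String) -> String {
    title.lowercased()
        .map { $0.isLetter || $0.isNumber ? String($0) : "-" }
        .joined()
        .split(separator: "-")        // drops empty runs of separators
        .joined(separator: "-")
}

// The "ways of testing code for truth": assertions the output must pass
// before the generated code gets anywhere near production.
assert(slugify("Hello, World!") == "hello-world")
assert(slugify("  ChatGPT & Swift  ") == "chatgpt-swift")
print("generated code passed its checks")
```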
Sure, humans are not obsolete just yet! People are needlessly freaking out about losing their jobs, or even their reason for existence, to AI. The reason for employing developers or designers is not that we need someone to write loops and draw lines.
It's a tool that makes some things significantly easier, and it does have risks. It will only replace people doing jobs that aren't suitable for people in the first place.
The problem is that most of us work for companies ACTIVELY TRYING to use this stuff to replace human beings, INCLUDING THE ACTUAL DECISION MAKING, slapping black boxes everywhere in their bureaucratic processes, because "machine says no" is a SUPER beneficial thing to a business, especially the modern massive corporation that doesn't really have competition and is mostly profitable because it ignores problems.
I give it five years until even we here are unable to get the attention of a tech company employee to fix our wrongly locked-out accounts, because not only are the primary touch points """automated""", but the appeals, and the appeals of appeals, are entirely automated too.
Everyone here will get to enjoy "machine says no" a hundred times more often. Every tech support, billing support, or any-other-kind-of-support line will put you through half an hour of terrible "AI" interaction before you are even allowed to be routed to a human, and businesses will use this to get rid of even more call center employees.
Hate not being able to understand the thick accent in your support call? Get ready for the 1% of the time the AI on the phone puts together sounds that don't actually form words, or just straight up misunderstands what you say, and nobody will believe you because "it's so accurate". Get ready to be gaslit by your fucking phone.
The prevalence of “computer says no” or “machine says no” in our modern society makes me scream and can honestly send me into depression and anxiety. It creates the most helpless feeling.
I have an increasing pile of issues with companies' services that just persist, because fixing them requires getting hold of a human, and specifically a human who actually knows something and isn't just a terminal for the computerized system.
20 years ago, we could have had the same conversation about outsourcing customer support. It was rife with problems and limitations, just as AI today is.
But that didn't stop companies from doing it anyway. The C-suite isn't listening to researchers and the general public; they're looking at what their shareholders think they 'should' do. Once McKinsey and BCG whip out their "AI Digital Transformation" PowerPoints, it's over.
This points out the most dangerous part of ChatGPT: it's a highly confident idiot. When I told it that it was explicitly wrong, it basically responded with "I'm sorry it wasn't clear".
Not ChatGPT, but GitHub Copilot just wrote some surprisingly good documentation for a library I'm working on. All I had to do was change some wording and clean up the formatting; it probably saved me a couple of hours.
>The internet is going to be absolutely filled to the brim with GPT generated junk.
It is already filled with human-generated SEO junk that is largely worse than GPT junk. Smart people who like to shit on this stuff forget how fucking dumb most people and things are.
I didn't forget, and yes, of course it's already filled with junk. That's no reason to be excited about doubling down, since in the end it's humans providing the impetus.
Humans already struggle to tease value out of the noise that is the SEO-, ad-, and marketing-filled internet. Making it easier to produce junk, noise, and spam can only make that worse.
This is like turning on your microwave while playing a shooter over WiFi because "the space my router is in is already full of 2.4 GHz noise".
We all (students and staff) use it a lot. It'll write lectures, help think up interesting worksheet ideas, help students code, help them think of things to write about, and suggest how to structure assignments.
I needed a lecture on regex, and ChatGPT wrote it for me in 30 seconds. Then I asked it for some problems for the students to solve, and it wrote those too.
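To give a flavor of the exercises, here's the kind of problem such a lecture might include; this particular problem and solution are illustrative, not taken from the actual generated material:

```swift
import Foundation

// Exercise: write a regex that extracts the year from an ISO-8601 date.
let inputs = ["2022-12-07", "not a date", "1999-01-31"]
let pattern = #"^(\d{4})-\d{2}-\d{2}$"#  // capture group 1 = year
let regex = try! NSRegularExpression(pattern: pattern)

for s in inputs {
    let whole = NSRange(s.startIndex..., in: s)
    if let match = regex.firstMatch(in: s, range: whole),
       let yearRange = Range(match.range(at: 1), in: s) {
        print("\(s) -> year \(s[yearRange])")
    } else {
        print("\(s) -> no match")
    }
}
```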
It may not be relevant to you, but for some of us it is changing the way we work. That hasn't happened since the dawn of the internet, or social media, or mobile phones.
The next generation are using it, and using it a lot.
Ah, good, as if the profession of teaching needed to be hollowed out even more. Now, instead of a carefully assembled and designed curriculum, students will be fed literal autogenerated nonsense.
If you do something in your area of expertise halfway with AI, someone somewhere WILL try to do it all the way and market replacing your expertise with just more black boxes for cheaper, and the people who pay you, who aren't experts in your field, WILL be sold on that offer.
Nobody will listen to you when you explain how it is problematic or wrong or error-prone or anything. Think of how angrily people argue about Tesla's camera-based self-driving, and of all the bad takes about how "it's safer than a real person" despite the lopsided and bad statistics behind that claim. Instead of arguing with strangers on the internet about an extremely rare outcome that could theoretically kill a random person, you are now having that exact style of conversation with your boss, about how the AI model he says will replace you has an entire class of errors that humans aren't familiar with and don't notice very well, and that will absolutely hurt things, but only rarely, in the future.
Get ready for a future where pretty much everything inexplicably fails 1 out of 50 times, nobody will ever be able to tell you why, nobody will be able to fix it, and companies will prefer it this way anyway.
If it's anything like my experiences so far, that lecture is likely to be riddled with plausible but incorrect statements. Any chance you could paste it here?
> The next generation are using it, and using it a lot.
So we should expect the Flynn effect to reverse even harder in the coming years? The modern classroom is already making kids dumber; this might actually undo many of the effects of education and set us back a long way.
OpenAI's actual promises are "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."
A chatbot that's relatively accurate at interpreting and summarising stuff compared with previous-generation chatbots, and an image-generation algorithm that's actually pretty good, are a damned sight more useful than NFTs, which is why I agree with others that the comparison isn't helpful. But I don't think it's realistic to characterise OpenAI and AI enthusiasts in general as under-promising.