Now ask yourself why AI companies don't want to be regulated or scrutinized.
So many companies (users and providers) jump on the AI hype train because of FOMO. The end result might be just as destructive as this mythical "AGI".
Edit: I am not saying to not use the technology. I am just on the side of caution and constant validation. The technology has to serve society. But I fear this hype (and ideology) has it the other way around. Musk isn't destroying the US government for no reason...
My impression is that companies in most fields do not like to be regulated or scrutinized, so nothing new there.
While observing people using LLMs, I realized that for a lot of them it really makes a huge difference in time saved. For me the difference is not significant, but I am generally solving complex problems, not writing nicely formatted reports where words and not numbers are relevant, so YMMV.
Is it good for one person (the writer) to save time, only for lots of other people (the readers) to have to do extra work to understand if the work is correct or hallucinated?
Is it good for one person (the writer) to ask a loaded question just to save some time on making their reasoning explicit, only for lots of other people (the readers) to have to do extra work to understand what the argument is?
> Is it good for one person (the writer) to save time, only for lots of other people (the readers) to have to do extra work to understand if the work is correct or hallucinated?
This holds true whether an LLM/AI is used or not — see substantial portions of Fox News editorial content as an example (often kernels of truth with wildly speculative or creatively interpretive baggage).
In your example, a responsible writer who uses AI will check all content produced in order to ensure that it meets their standards.
Will there be irresponsible writers? Sure. There already are. AI makes it easier for them to be irresponsible, but that doesn’t really change the equation from the reader’s perspective.
I use AI daily in my work. I describe it as “AI augmentation”, but sometimes the AI is doing a lot of the time-consuming stuff. The time saved on relatively routine scut work is insane, and the quality of the end product (AI with my inputs and edits) is really good and consistent.
Anecdata, N=1; I recently used aider — a tool that gives LLMs access to specific files and git integration. The tools are great, but the LLMs are underwhelming, and I realized that — once in the flow — I am significantly faster at producing large, correct, and on-point pieces of code, whereas when I had to review LLM code, it was frustrating, it needed multiple attempts, and it frequently fell into loops.
Anecdata, n=1: LLMs lack the understanding of context, stakeholder sensitivities, and nuance in word usage needed to write reports with the required depth and at the quality bar I need. Maybe they are faster at generating BS reports with no substance, but I can still write my reports much better and much faster than LLMs so far, probably because the reports are merely the artefact of solving a complex problem.
> being blindsided when it, inevitably, becomes mainstream.
I don't see how this could happen. This is not a limited resource. It's not a real estate opportunity. There is enough AI for everyone to buy when it becomes useful to do so.
I think FOMO correctly identifies the irrational effort of many companies to jump in without any idea of what the utility might be in any practical sense.
You are right. But these are different types of motivation for the same thing. And there is always context for these motivations.
It's a different thing to sell Trump on LLMs taking over crucial decisions within a government than to just use them for some prototyping, code completion at work, or creating cat pictures at home.
Take Copilot, for example. It was rolled out at different companies I worked with. Aside from warnings and maybe some training, I doubt the companies are really able to measure the impact it has. Students are already using the technology to do homework. Schools and universities are sending mixed signals about the results. And then those students enter the workforce with Copilot enabled by default.
At least with companies, it's the "free market" that will regulate (unless some company is too big to fail...)
Fully agree. In recent weeks I've also started to consider LLMs in a wider context: the destruction of all trust in the web.
The enshittification of search engines, making social media verification meaningless, locking down APIs that used to be public, destroying public datasets, spreading lies about legacy media, the ease of deploying bots that can sound human in short bursts of text... it's all leading towards making it impossible to verify anything you read online.
The fearmongering around deepfakes from a few years back is coming true, but the scale is even bigger. Turns out, there won't be Web 3.0.
You could never believe everything you read online, but with enough time and effort, you could chase any claim back to its original source.
For example, you could read something on Statista.com, you could see the credits of that dataset, and visit the source to verify. Or you randomly encounter some quote and then visit your favourite Snopes-like website to verify that the person actually said that.
That's what's under attack. The "middleware" will still be there, but the source is going to be out of your reach. Hallucinations are not a bug, but a feature.
If you can't trace something back to its source, it's suspect. It was that way then too. I suppose you're just concerned there's a firehose of disinformation now.
So perhaps we have to just slough off the internet completely, the way we always have for things like weekly rags about "Bat Boy" or whatever.
I hate to see the internet go, but we'll always have Paris.
Genuine question - how so? If I want to find stuff out I go to Wikipedia, NYT, the Guardian, HN, linked sites and so on. I'm not aware of that lot being noticeably less trustable than in the past. If anything, I find getting information more trustworthy than before, in that there are a lot of long-form interviews with all sorts of people on YouTube where you can get their thoughts directly rather than editorialised and distorted.
I mean the web was never a place where things were vetted - you've always been able to put any sort of rubbish on it and so have had to be selective if you want accuracy.
But I'm amazed at the progress we make every week.
There is real FOMO because if you don't follow it, it just might be here suddenly.
DeepSeek is impressive, and deep research is also great.
And what you might completely underestimate: we never had a system where it was worth it to teach it everything.
If we need to fine-tune LLMs for every single industry, that would still be a gigantic shift. Instead of teaching a million employees, we will teach an LLM all of it once and then clone the agent a million times.
We still see so much progress and there is still plenty of money and people available to flow into this space.
There is not a single indication right now that this progress is stopping or slowing down.
And not only that: in parallel, robots are having their breakthrough too.
Your Musk point I don't really understand. He is a narcissist, and he pushed his propaganda platform to become president because he is in deep shit and his house of cards was close to collapsing.
It absolutely is destructive. I read an opinion the other day about Microsoft shoving Copilot into every product, and it kinda makes sense. Paraphrasing but: In MS's ideal world, worker 1 drafts a few bullet points and asks Copilot to expand it into a multi-paragraph email. Worker 2 asks Copilot to summarize the email back into bullet points, then acts on it. What's the point? Well, both workers are paying for Copilot licenses, so MS has already won. And management at the firm is happy because "we're using AI, we're so modern." But did it actually help, with anything, at all? Never mind the amount of wasted energy and resources blasting LLM-generated content (that no human will ever read) back and forth.