A lot of fair criticisms of the splash page here. But, I'll say the slide deck has a nice comprehensive review of research headlines over the year, at least.
And their highlights conveniently ignored any and all negatives except for the one they could spin into a positive. It's almost like it's in their best interest to sell you a future so bright you gotta wear million-dollar shades.
Yes, I guess asking 1,200 "AI practitioners" would give you 95% AI users... but what about the other 5 percent? Are 60 people AI-liars? Anti-survey anarchists?
Three to four years ago, GPT-2 wrote much more creative stories; its faults actually made interesting and absurd concoctions out of its training material. Nowadays it's just a machine for plagiarizing works and laundering copyright.
Quite a comprehensive report. But things look too rosy when the phenomenon is at the peak of the hype cycle, so I was curious to see what the Predictions tab had to say. It disappointed me with "current-state" news again; not really any predictions.
What profession do you have that made you a Luddite based on the current state of LLMs and AI?
I'm an artist, programmer and musician, and is no closer to being a Luddite today than five years ago, not sure why others would either. Anti-capitalist or Anti-fascist I'd understand, considering the state of the world and the current direction.
First of all, the AI should not take the claim at face value. I would begin by questioning the assumption that "the response here misunderstood my post" and look for reasons why that is false, not just be a servile parrot to my interlocutor and contrive ways their belief might be true.
What happened in the conversation is that CaptainOfCoit obviously understands that noosphr's post is a general remark about human nature.
CaptainOfCoit is using the (highly popular Internet) debating strategy of presenting anecdotal evidence: "Hey, I'm a datapoint against your generalization: I work in a number of areas all affected by AI, and, look at me, I'm not a Luddite. What areas are you working in that you see people turning into Luddites? Why, it can't be art, music or programming!"
There is no misunderstanding, only the fallacy of using a personal anecdote against a generalization that was never presented as absolute.
It's actually kind of amazing that Claude didn't latch on to this angle.
I didn't say that using personal anecdotes against generalizations is a "sophisticated debate tactic", LOL. In fact, I added a mildly sarcastic commentary in parentheses whose careful interpretation conveys the opposite.
Claude is clearly engaging in classic trolling at this point, putting words into mouths.
People who use anecdotal arguments tend to be dolts who genuinely believe that their experiences are those of most people. "You can't be a programmer, artist or musician because I'm in all those activities and I'm not a Luddite nor trending toward becoming one (and neither is anyone else I know). My reality represents everyone similar to me, but maybe in whatever fields or hobbies you are working in, there are Luddites against AI, so you are generalizing that to everyone."
In no way am I intending to present that as sophistication rather than as a misunderstanding.
So, yes, in a way Claude's original analysis has a grain of truth: someone who uses personal anecdotes in arguments assumes that others are also only proceeding from personal anecdotes (i.e. making a confession of sorts) rather than from any grounded generalization. That is, they operate in a mode in which, unless concrete data from credible studies is given, everyone's statements are presumed to come from personal anecdote, just like their own.
Be it an AI or a personal consultation, it's generally more productive to first engage with what one's own misunderstandings might be and work out from there. That tends to wipe away a lot (though of course not all) of the trust in these kinds of responses as meaningful additions that move the conversation forward on their own.
Example from Claude in the reverse:
When I posted asking "What profession do you have that made you a Luddite based on the current state of LLMs and AI?", CaptainOfCoit responded: "I'm an artist, programmer and musician, and am no closer to being a Luddite today than five years ago..."
I initially wondered if this response implied I was the only person with such a profession who became skeptical of AI. Here's why that interpretation might occur, even though it's likely not what was meant:
The Potential Misunderstanding:
The phrasing "I'm an artist, programmer and musician, and is no closer to being a Luddite..." could be read as implicitly contrasting with my position—as if to say "I have these exact professions you're asking about, yet I didn't become a Luddite, so why did you?"
This creates an unintended impression that my reaction might be unusual or isolated among people in these fields.
What Was Likely Actually Meant:
CaptainOfCoit was almost certainly just:
- Answering my question directly by sharing their own professional background
- Expressing genuine curiosity about why others would become more Luddite-leaning
- Offering their own perspective that anti-capitalism or anti-fascism might be more justified stances than Luddism
The Takeaway:
Text-based conversations can create ambiguity. A straightforward personal statement can accidentally feel like an implicit challenge, especially when discussing contentious topics. CaptainOfCoit was likely just contributing their experience to the discussion, not suggesting my perspective was uniquely misguided.
Maybe take 10 minutes (without Claude) and try to figure out why; I'm sure it could be helpful for future human-to-human conversations. Hint: the reasons are not "AI despite useful".
Interesting, this is the first I'm hearing of that. It's too little too late for me, though. I was paying for Perplexity Pro and Kagi Ultimate at the same time for a few months to decide which I liked more. Perplexity was often able to get me the answers I wanted faster than Kagi could, but in the cases where it ran in circles around a false result, the time lost seemed to more than cancel out the time saved on other queries.
The CEO talking about wanting to roll out advertisements was one of the final nails in the coffin for me. I have exactly zero interest or patience for being subjected to advertisements on a service that I'm paying for.
The Backblaze stats were published by AI investors?
Or, given the HDD setting, HDD investors? Are you sure they weren't published by the technical people who actually analyzed the failure rates of disks... instead of by polling Internet strangers for opinions on platforms that are full of bots?
Huh? The authors of the Backblaze report are obviously biased, since HDDs are their business. That doesn't make their reports any less worth reading.
Nathan Benaich has a PhD in Computer Science and an MPhil in Biology from Cambridge, and majored in Biology at Oxford. He is more than qualified to discuss tech topics, far more so than the authors of much of the content here on HN. Not reading him because he is an investor? Give me a break.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly, and we've asked you repeatedly to stop. It's not what this site is for, and destroys what it is for.
Worth keeping in mind this is made by an "AI investor", so it obviously comes with a lot of bias. It's also a relatively tiny survey; it seems only 1.2K people answered.
An example of the bias:
> shows that 95% of professionals now use AI at work or home
Obviously 95% of professionals don't use AI at work or home, and these results are heavily skewed.
The question just says "Do you use generative AI tools in your work?", which would probably include 100% of office workers today, directly or indirectly.
Maybe the 33 people who said "No" don't know the implementation details, so they assume it isn't used anywhere in their daily professional lives.
I agree there is some implicit bias in this reporting, particularly because Nathan is a colleague (or at the very least a former colleague) of Ian Hogarth, who currently chairs the UK AI Safety Institute, recently renamed the "AI Security Institute".
So, I would have to take reporting on safety with a grain of salt. That said, I do think there are a lot of other interesting insights throughout the presentation.
Just a quick point here: 1.2K is highly statistically significant, even for a national-level poll/survey. The issue here is the potential for selection bias, since the sample seems primarily driven by people who wanted to take the survey. I'm not sure how this ultimately skews the results, but 1.2K is easily an adequate sample size.
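For anyone who wants to sanity-check that claim, here's a minimal sketch of the standard margin-of-error calculation for a simple random sample (the function and its parameters are my own illustration; the ~1,200 figure comes from the thread):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a sample proportion.

    n: sample size
    p: assumed true proportion (0.5 is the worst case, i.e. widest interval)
    z: z-score for the confidence level (1.96 corresponds to ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a survey of ~1,200 respondents:
print(f"{margin_of_error(1200):.1%}")  # -> 2.8%, the worst-case 95% margin
```

So a clean random sample of 1,200 pins any reported percentage to within about ±2.8 points, which is why the sample size itself isn't the problem; the formula simply stops applying once respondents self-select.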
Okay, I do toy with local image generation when I get extremely bored...
But other than that, my only AI use is when Google forces it on me. And then it gets things wrong... which is easily found out by comparing its output with the synopses of the links it gives...