> This might sound callous, but I wonder if people saying this themselves have very limited brains more akin to stochastic parrots rather than the average homo sapiens.
I have a different theory.
Aside from a few exceptions like Blake Lemoine, few people seem to actually act as if they believe A.I. is doing the same thing the human mind is doing.
My theory is that people are role-playing as people who believe human thought is equivalent to A.I., for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.
Like I said, I just asked: give me a plot summary up to chapter 14, don't spoil the rest of the book. And of course, when I pointed out what it had just done, it went "oh, I'm sorry, here's a summary without the spoilers for the ending." So clearly it could do it without additional context.
>>Do they even have direct access to published works to use as reference material?
I mean, clearly, given that it did answer my question eventually. Also, wasn't it a whole thing that these models got trained on entire book libraries (without necessarily paying for them)?
>>I wouldn't expect any LLM to be able to respect such a request
Why, though? They seem to know everything about everything, so why not this specifically? You can ask for the plot of pretty much any book/film/game made in the last 100 years and get an answer. Maybe asking about specific chapters was too much, but free copies of Neuromancer exist all over the internet and it's been discussed to death. If it were a book that came out last year, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.
But besides, regardless of everything else: if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies", then you just failed at basic comprehension. Whether an LLM has any knowledge of the book or not, and whether the spoiler is even true or not, that should be an unacceptable outcome.
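For what it's worth, this is roughly what that request looks like as an explicit API call rather than a chat session. A minimal sketch using the OpenAI Python SDK; the model name is just a placeholder, and whether the instruction actually gets honored is exactly what's in dispute here:

```python
# Hypothetical sketch: phrase the "no spoilers" constraint as a standalone
# system instruction instead of burying it in the user message.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any chat-capable model
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize only the portion of the book the user specifies. "
                "Do not mention, foreshadow, or allude to anything that "
                "happens after that point."
            ),
        },
        {
            "role": "user",
            "content": "Summarize Neuromancer up to the end of chapter 14. "
                       "Do not spoil the rest of the book.",
        },
    ],
)
print(response.choices[0].message.content)
```

Separating the constraint into a system instruction at least makes it harder for the model to treat it as an aside, though nothing here guarantees compliance.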
I wouldn't expect an AI to know exactly what happens in every chapter of a book.
Knowing the plot of Neuromancer isn't the same as being able to recite a chapter by chapter summary.
I tried this Neuromancer query a few times and the results vary greatly with each regeneration, but "do not include spoilers" seems to make Gemini give more spoilers, not fewer.
I'm not the person you replied to, but which Google AI product are you referring to, the one you use for search that is so excellent you need someone else to find you an example of it failing?
I think Google has several AI products with search features?
Which one in your experience "seems correct"?
I'm fascinated because I've never found any LLM to be particularly error-free at search.
Google.com with the AI Overview, or whatever they call it now. It seems to source web-page information for grounding, so it's reasonably correct and hasn't hallucinated recently, at least.
I played around with it and it's better than it used to be, but if you ask it something like
"What's the name of the third book in The Peripheral trilogy going to be?" it just regurgitates some dumb Reddit comment by someone who seems to be making things up.
There's no actual title that has been announced, and the Reddit post was not a reasonable bit of speculation.
The problem with these LLMs is they rarely say "the search results were not credible; no response can be provided."
These days, Google AI Overviews regularly add a qualifier to the effect of "... according to this comment on Reddit <link>".
That's basically a UX trick to sidestep being held accountable for the results, but it seems sufficient to tell the user where the answer came from so they can calibrate their grain of salt.
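To make the gap concrete: the refusal the parent comment wants and the provenance qualifier described above are really two branches of the same check. Everything below is hypothetical, illustrative logic, not any real product's pipeline; the credibility scores, threshold, and URL are made up:

```python
# Hypothetical: gate the generated answer on how credible the retrieved
# sources look, and refuse outright when nothing clears the bar.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    credibility: float  # 0.0 (random forum comment) .. 1.0 (primary source)

def answer_with_provenance(answer: str, sources: list[Source],
                           threshold: float = 0.5) -> str:
    """Return the answer with its best source attached, or a refusal."""
    if not sources or max(s.credibility for s in sources) < threshold:
        return "The search results were not credible; no response can be provided."
    best = max(sources, key=lambda s: s.credibility)
    return f"{answer} (according to {best.url})"

# A lone Reddit comment scores low, so this prints the refusal.
print(answer_with_provenance(
    "The third book's title has been announced.",
    [Source("https://reddit.com/r/example-thread", 0.2)],
))
```

The "according to <link>" qualifier is essentially the second branch of this function; the refusal branch is the part that's missing in practice.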
By that logic, a Markov chain is also better on average, just by virtue of having been trained on a large corpus of human knowledge, including psychology, therapy, and study material.
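To make that concrete, here's a toy word-level Markov chain; the "training" is nothing more than counting which word follows which in the corpus, yet it will happily emit text about anything the corpus covers:

```python
# Toy word-level Markov chain: "train" by counting successors, then sample.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions: dict, start: str, length: int = 20) -> str:
    """Walk the chain, picking each next word at random from the counts."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Fittingly, Neuromancer's opening line as a one-sentence "corpus".
chain = train("the sky above the port was the color of television "
              "tuned to a dead channel")
print(generate(chain, "the"))
```

There is no comprehension anywhere in that loop, which is the point of the comparison.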
I've definitely posted to the same subreddit with two different accounts by accident without being banned.
The Android Reddit app annoyingly doesn't check for account matches: if you click a browser notification link while the browser is on account A, it can open a reply form under account B in the app.
The news that Celestial is basically canceled already hit the HN front page, as did the news that Druid was canceled before tapeout.
Celestial will only ship in the variant used in budget/industrial embedded Intel platforms with a combined IO+GPU tile; the big-boy performance desktop/laptop parts with a dedicated graphics tile will ship an Nvidia-produced tile instead.
There will be no Celestial dGPU variant, nor a dedicated-tile variant. Drivers will cease support for dGPUs of all flavors, and no new bug fixes will land for B-series GPUs (as there are no B-series iGPUs; A-series iGPUs will remain unaffected).
They signed the deal like 2-3 months ago to cancel the GPUs in favor of Nvidia. The other end of this deal is that future Nvidia SBCs will ship as big-boy variants: Rubin (replacing Blackwell) for the GPU, Vera (replacing Grace) as the on-SBC GPU babysitter, and newest-gen Xeon CPUs to handle the non-inference tasks that Grace can't.
There is also talk that this deal may lead to Nvidia moving from TSMC to Intel Foundry, and even that Nvidia may just buy Intel entirely.
For further information, see Moore's Law Is Dead's coverage off and on over the past year.
You may be a bit too credulous. There has been a "leak" or "rumor" that Intel's GPU initiatives are canceled about once every three months, for over two years. Yet Intel continues to release new SKUs and make new product announcements. Just last month they announced a new data center GPU product (an inference-focused variant of Jaguar Shores).
I can't see the future, but I can see patterns: the media that reports straight from the industry rumor mill LOVES this "Intel has cancelled its GPUs" story, for whatever reason. I have no particular love for Intel (of my six current systems, my only Intel box is a cheap NUC from 2018), but at this point these rumors echo the old joke about economists who "accurately predicted nine of the last two recessions".