Why use AI to make good content when you can take unremarkable content and add the string "AI" to pretend it's interesting? You could have just done an old-school web search for this!
The trick is to still use the same underlying information, but forwarded to you by a large language model; that way you can take most of the credit yourself. How unsensational an article would it be if you said you followed a step-by-step guide and built the thing the guide built? But slap an LLM between yourself and the tutorial, and suddenly you can say you built something with the help of AI. Weird times.
I've seen a lot of that happening recently. People claiming they did this and that, when they actually followed the steps from an LLM. I actually don't mind this, except for the part where they omit the use of an LLM so they can take all the credit themselves.
Reminds me of all these YouTubers making video essays parroting something they have just learned, without actually mastering the subject.
I think we seek to credit the source for reasons that are not clear. Would you feel any particular way if I suggested your carrot soup only tastes good because of the carrots you put in it?
With the follow up question being, why don’t you admit you put carrots in it?
We don’t really do this because there’s no concept of ownership of food to that degree. Maybe 200 years ago a farmer might have chewed you out, since he tilled the soil to get you that carrot.
So, for us to not be possessive of knowledge, we’d need to evolve. It’s not likely in our lifetime, but perhaps 500-1000 years down the line the social fabric will evolve to handle this, similar to food possession.
Or I could be wrong, and we just have a bunch of naturally thieving crooks all over the place.
> I think we seek to credit the source for reasons that are not clear... We don’t really do this because there’s no concept of ownership of food to that degree.
Cooking is not a great comparison, and it betrays your point more than anything. If you cook something particularly impressive or complex, people will almost universally ask about the recipe and where it came from.
Origination is actually a pretty common topic of discussion for many things.
Some fine restaurants here are very proud to tell you the meat comes from such-and-such a farm, the vegetables from this farm by these people, and the wine from that vineyard. It's even on the menu.
Here in Switzerland, even burger joints have this for all major ingredients (meat, potatoes, veggies, buns made by genuinely local bakers, etc.). Heck, even McDonald's has it, and it has the worst-tasting and worst-looking food on the market, while not necessarily being the cheapest.
If the population cares about it enough, companies adapt, even if 50 km down the road in another country they sell lower quality under the same name (the EU is generally less strict about food quality, but both tower high above what the US FDA permits).
> I think we seek to credit the source for reasons that are not clear.
For me, a source means I can verify claims, find another opinion or presentation, and view other work based on that source.
It is the difference between having links between web pages or having only independent web pages. I guess we can all agree there is value in having the information "X was based on Y".
The reason people do not credit their sources can also be that they don't add any value. If the original source is more complete, more correct, and better presented, then they might "lose" their perceived value. Does this happen in all cases? Probably not. But it is my first instinct when I see it (it happens a lot with "news" articles as well, for example when they cover papers, university announcements, etc., things that could easily be linked).
There are so many relatively popular youtube channels making 10 to 20 minute videos about interesting subjects which say nothing more than the Wikipedia page for that topic. Whole channels, well liked by the algorithm, built off paraphrasing Wikipedia without citing it.
Being generous to them, some of them do a good job of picking topics I wouldn't have thought to look up myself. They probably do spend a lot of time reading through diverse subjects looking for the interesting ones.
It’s probably low-effort scripts performed by influencers. The scripts are probably not written by the influencers themselves, because the influencer usually brings the looks, voice acting, and charm.
Some of them certainly, that beard-and-thick-rimmed-glasses guy with a dozen channels comes to mind (I don't know his name, but other people have given me the same description of him). Others are pretty weird-looking nerdy guys tbh; I believe they probably read Wikipedia a lot.
Yeah. The people who conduct bedroom experiments "with AI" are basically the same people who would have conducted bedroom experiments with the help of web pages and YouTube five years ago, or with books if they could get hold of them 30 years ago.
And actually, I'm not sure the switch from one ubiquitous digital format to another lossier one is the big step change here....
I wonder if this is better for society somehow? I found that even getting people to use a search engine to find a link to figure something out was like pulling teeth. Maybe LLMs will teach people how to use search engines? All while bypassing the various ads on the enshittified internet (for now)?
This project seems complex enough that following a guide is not enough; you will likely have to swap unavailable parts for equivalent ones, which requires at least a basic understanding of what you are doing.
HudZah is seemingly using the AI as a search engine over the reference materials he collected for the project, which is a legitimate use of the technology.
He seems to be following the Instructable rather closely. Or the LLM was following the Instructable.
See fusor.net.[1] It's unlikely this rig is doing any fusion. It's just a plasma created by high voltage, like a neon lamp. He's not putting in deuterium gas. He's not detecting neutrons.
Most of the people who try to do this get the blue glow, but not neutrons.
The main hazard is then the high voltage.
These things are not energy producers. It takes about a billion times as much energy input as comes out in neutrons. They can be useful neutron sources for imaging and research.
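The billion-to-one figure checks out on the back of an envelope. A rough sketch, using assumed (not measured) numbers typical of amateur fusors: roughly 30 kV at 10 mA input, a neutron rate around 10^5 n/s, and about 3.27 MeV released per D-D fusion in the neutron branch:

```python
# Back-of-envelope energy balance for a hobby fusor.
# All operating figures below are illustrative assumptions, not measurements.

MEV_TO_J = 1.602e-13    # joules per MeV

voltage_v = 30e3        # assumed accelerating voltage: 30 kV
current_a = 10e-3       # assumed ion current: 10 mA
neutron_rate = 1e5      # assumed neutron output: 1e5 n/s (a good amateur result)
mev_per_fusion = 3.27   # D-D neutron branch: 2.45 MeV neutron + 0.82 MeV He-3

input_power_w = voltage_v * current_a                      # 300 W in
fusion_power_w = neutron_rate * mev_per_fusion * MEV_TO_J  # ~5e-8 W out

print(f"input/output ratio: {input_power_w / fusion_power_w:.1e}")
```

With those assumptions the ratio comes out in the billions, consistent with the "about a billion times" order of magnitude above.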
That's like saying "we don't need teachers because we've got books". A how-to guide has to assume a certain level of knowledge, so will inevitably be insultingly basic for some readers or completely impenetrable for others. A static document can't elaborate on something you don't understand or fill in gaps in your knowledge. It can't help you debug a problem or look at your setup and point out where you might have gone wrong.
We now have good evidence that AI-assisted learning can be substantially more effective than traditional methods, at incredibly low marginal cost.
> That's like saying "we don't need teachers because we've got books".
I respectfully disagree. I don't think it's wrong or useless to get an LLM to help, I recently did a similar project myself (even though I manually fact-checked all the high-voltage stuff).
If you find a guide that explains too much then you can skip the parts you know. If it doesn't explain something you don't know yet then you recursively look that stuff up. It doesn't matter if it's a book or a teacher or a search engine or an LLM.
It's just not good journalism here, because evidently this project has been done lots of times in similar circumstances without LLMs.
> evidently this project has been done lots of times in similar circumstances without LLMs.
LLMs help with organizing knowledge and research in a novel domain. The point is not “can you do it”. LLMs are great at giving vanilla, standard answers, which is exactly what you want when researching a novel domain. I’ve been dipping into novel domains for 25 years, and LLMs make it so much more pleasant.
https://www.instructables.com/How-to-Build-a-Fusion-Reactor-...
https://makezine.com/projects/nuclear-fusor/
https://fusor.net/board/viewtopic.php?t=3247
The "artificially intelligent" aspect is trivial.