Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.
I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.
It's annoying because it's a super common widget and it's interesting work. The first draft, or literally even the prompt they gave the AI, probably would've made a great post; all they had to do was not ensloppify it...
I remember back around 2011, when CF was new and I was testing it on some vBulletin forum. All the email communication was with the cofounder, if I recall correctly, and the UI had only the DNS settings back then. Now they write a whole article on some text redesign. Time flies.
That's why I say most AI content isn't just slop—it's fundamentally about deception. It's about tricking someone into believing that a text was written by a human, or that a photo or video is a true recording of a real event.
Like this, its purpose is to fly under the radar unless your figurative ears are pricked up and primed to detect the telltale signs. Fuck this shit.
Yeah it’s basically the prose equivalent of getting too much radio play - hilarious how the breakthrough of LLM content has ‘ruined’ “it’s not X—it’s Y” for so many of us now
Maybe, like overplayed pop songs, in 20 years or so we’ll come around to viewing the phrase fondly.
> "Not just X -- it's Y" is one of the more irritatingly common signs ...
It's a bit of a "Karen AI" telltale sign. It's probably been trained on a lot of "I-know-it-all-Karen" posts and as a result we're bombarded with Karen-slop.
The amount of inference required for semantic grouping is small enough to run locally. It can even be zero if semantic tagging is done manually by authors, reviewers, or even just readers.
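To make that concrete, here's a minimal sketch of the sort of local semantic grouping I have in mind, assuming a small embedding model (all-MiniLM-L6-v2 via sentence-transformers; the model choice, sample texts, and distance threshold are all just illustrative):

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering

    # Small (~80 MB) embedding model; runs tolerably on CPU for short texts.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    texts = [
        "Fix null pointer crash in login flow",
        "Login page throws NPE on empty password",
        "Update README with build instructions",
    ]
    embeddings = model.encode(texts, normalize_embeddings=True)

    # Group texts whose embeddings are close in cosine distance.
    labels = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=0.4,  # illustrative threshold
        metric="cosine",
        linkage="average",
    ).fit_predict(embeddings)

    for text, label in zip(texts, labels):
        print(label, text)

With a reasonable threshold the two login texts should land in one group and the README one in another, though the exact grouping depends on the model and threshold you pick.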
Where did "AI for inference" and "semantic tagging" come from in this discussion? Typically for code repositories - AIs/LLMs are doing reviews/tests/etc, not sure what/where semantic tagging fits? Even do be done manually by humans.
And besides that - have you tried/tested "the amount of inference required for semantic grouping is small enough to run locally."?
While you can definitely run local inference on GPUs (even ~6-year-old GPUs, and it would not be slow), using normal CPUs it's pretty annoyingly slow (and takes up 100% of all CPU cores). Supposedly unified memory (Strix Halo and such) makes it faster than an ordinary CPU, but it's still (much) slower than a GPU.
I don't have a Strix Halo or that type of unified-memory Mac to test that specifically, so that part is an inference I got from an LLM, plus what the Internet/benchmarks are saying.
Real humans write like that though. And LLMs are trained on text not speech. Maybe they should get trained on movie subtitles, but then movie characters also don't speak like real humans.
"LinkedIn Standard English" is just the overly-enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere
I laugh every time somebody qualifies their anti-AI comments with "Actually I really like AI, I use it for everything else". The problem is bad, but the cause of the problem (and especially paying for the cause of the problem)? That's good!
I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
This is basically the "I'm not evil, it's just a normal job" excuse. As with any moral issue, there will be disagreement about where to draw the line, but yes: if you do something that ends up supporting $bad_thing, then there is an ethical consideration you need to make. And if your answer is always that it's OK for the things you want to do, then you are probably not being very honest with yourself.
Your response assumes the tool is a $bad_thing rather than one specific use of it. In my analogy, that would be saying that "there is an ethical consideration you need to make" before using (or buying) a blender.
It's not as one-dimensional as good vs bad. Transformers generally are extremely useful. Do I want to read your transformer generated writing? Fuck no. Is code generation/understanding/natural language interfaces to a computer good? I'd have to argue yes, certainly.
I cry every time somebody tries to frame it one dimensionally.
Cultured people have no need for such words in public discourse so I'd rather not see either of them or the need. People are judged by the words they use.
> I've heard, but haven't confirmed, they also detect you opening developer tools using various methods and remove your auth keys from localstorage while you have it open to make account takeovers harder. (but not impossible)
You can open the network tab, click an API request, and copy the token from the Authorization header.
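Once it's copied, the token can be replayed from anywhere, which is why the localstorage trick only makes takeovers harder, not impossible. A rough sketch (the endpoint and token here are made-up placeholders, not any real API):

    import requests

    # Hypothetical example: replay a bearer token copied from the
    # devtools Network tab. Endpoint and token are placeholders.
    token = "PASTE_TOKEN_FROM_AUTHORIZATION_HEADER"
    resp = requests.get(
        "https://api.example.com/v1/me",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.status_code, resp.text[:200])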
Every single post here is written in the most infuriating possible prose. I don't know how anyone can look at this for more than about ten seconds before becoming the Unabomber.
I've found that libadwaita apps tend to look at least decent outside of their native environment, whereas Qt apps near-universally look terrible outside of KDE.