Hacker News | Starlevel004's comments

Rust is also design-by-committee.

> Designing a product with billions of eyeballs on it isn't just challenging — it requires a fundamentally different approach.

I'm not reading this.


Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.

I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.


It doesn't mean anything. It is just there to be there and catch low-hanging RL reward granting eyeballs.


It's annoying because it's a super common widget and it's interesting work. The first draft, or even just the prompt they gave the AI, probably would've made a great post; all they had to do was not ensloppify it...


I agree this thing went on forever and seemed to have multiple summaries of the same concepts.


To CloudFlare employees: This is a super interesting topic, but next time we'd rather hear from you, grammar mistakes and all, not from AI.

If I want AI slop, I'll gladly have a chat with my paid $20 Gemini account.


I remember back around 2011, when CF was new and I was testing it on some vBulletin forum. All the email communication was with the cofounder, if I recall correctly, and the UI had only the DNS settings back then. Now they write a whole article about a text redesign. Time flies.


That's why I say most AI content isn't just slop—it's fundamentally about deception. It's about tricking someone into believing that a text was written by a human, or that a photo or video is a true recording of a real event.

Like this, its purpose is to fly under the radar unless your figurative ears are pricked up and primed to detect the telltale signs. Fuck this shit.


Can’t tell if the “it’s not X — it’s Y” as your first sentence is intentional irony or not lol


You're absolutely right!


Did you base the AI use on the em-dash, or is this a common AI phrase (or both)?


"Not just X -- it's Y" is one of the more irritatingly common signs, especially for sentences like that one which absolutely do not need it.

The Wikipedia article on detecting AI writing is a big help if you need to calibrate your sensors: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


Yeah it’s basically the prose equivalent of getting too much radio play - hilarious how the breakthrough of LLM content has ‘ruined’ “it’s not X—it’s Y” for so many of us now

Maybe, like overplayed pop songs, in 20 years or so we’ll come around to viewing the phrase fondly.


> "Not just X -- it's Y" is one of the more irritatingly common signs ...

It's a bit of a "Karen AI" telltale sign. It's probably been trained on a lot of "I-know-it-all-Karen" posts and as a result we're bombarded with Karen-slop.


I see, thx for the article too!


I think I'll actually post the article here, quite useful


Thanks for the Wikipedia tip.


It's not just overused phrasing — it's the hallmark of LLM prose.


“It’s not X, it’s Y” is an absolutely ubiquitous AI pattern. Throw in an em-dash and it’s basically ai;dr


Thx!


It's also just an utterly meaningless statement. Filler words with no value whatsoever.


"Let's be honest" is another extremely strong tell.


Yet again [0] quality standards seem to have slipped on the cloudflare blog. I'm not able to point at a cause, but it's not painting a pretty picture.

[0]: https://news.ycombinator.com/item?id=46781516


It kinda looks like employees need to make a blog post about something twice a month.


Person who pays for AI: We should make everything revolve around the thing I pay for


The amount of inference required for semantic grouping is small enough to run locally. It can even be zero if semantic tagging is done manually by authors, reviewers, or even just readers.


Where did "AI for inference" and "semantic tagging" come from in this discussion? Typically, for code repositories, AIs/LLMs are doing reviews/tests/etc; I'm not sure where semantic tagging fits, even if it's done manually by humans.

And besides that - have you tried/tested "the amount of inference required for semantic grouping is small enough to run locally."?

While you can definitely run local inference on GPUs (even ~6-year-old GPUs, and it would not be slow), on normal CPUs it's pretty annoyingly slow (and takes up 100% of all CPU cores). Supposedly unified memory (Strix Halo and such) makes it faster than an ordinary CPU, but it's still (much) slower than a GPU.

I don't have Strix Halo or that type of unified memory Mac to test that specifically, so that part is an inference I got from an LLM, and what the Internet/benchmarks are saying.
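To illustrate the "small enough to run locally" claim: once each item has an embedding vector (produced by whatever small local model you like; the vectors and item names below are made up for illustration), the grouping step itself is just cosine comparisons and runs instantly on any CPU. A minimal sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group(items, threshold=0.9):
    """Greedy single-pass grouping: attach each item to the first group
    whose representative vector is similar enough, else start a new group."""
    groups = []  # list of (representative_vector, member_names)
    for name, vec in items:
        for rep, members in groups:
            if cosine(vec, rep) >= threshold:
                members.append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

# Hypothetical precomputed embeddings (3-d for readability; real ones
# would be a few hundred dimensions from a small local model).
items = [
    ("bug report A", [1.0, 0.1, 0.0]),
    ("bug report B", [0.9, 0.2, 0.0]),
    ("feature request", [0.0, 0.1, 1.0]),
]
print(group(items))  # → [['bug report A', 'bug report B'], ['feature request']]
```

The expensive part is producing the embeddings, not the grouping; the grouping itself is O(items × groups) dot products.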


Real humans don't speak in LinkedIn Standard English


Real humans write like that though. And LLMs are trained on text not speech. Maybe they should get trained on movie subtitles, but then movie characters also don't speak like real humans.


Movie characters also don't speak like movie subtitles: the subtitles omit a lot of their speech.


"LinkedIn Standard English" is just the overly-enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere


Humans don't, but cocaine does speak "LinkedIn Standard English".


> LinkedIn Standard English

We need a dictionary like this :D


The old Unsuck-it page comes pretty close. I’m not a huge fan of the newer page though. https://www.unsuck-it.com/classics


LinkedIn and its robotic tone existed long before generative AI.

Know what's more annoying than AI posts? Seeing accusations of AI slop for every. last. god. damned. thing.


Yes that's the point. LLMs pretty much speak LinkedInglish. That existed before LLMs, but only on LinkedIn.

So if you see LinkedInglish on LinkedIn, it may or may not be an LLM. Outside of LinkedIn... probably an LLM.

It is curious why LLMs love talking in LinkedInglish so much. I have no idea what the answer to that is but they do.


It is at least thematically appropriate, of course a corporate-built language machine speaks like LinkedIn.

The actual mechanism, I have no clue.


I laugh every time somebody qualifies their anti-AI comments with "Actually I really like AI, I use it for everything else". The problem is bad, but the cause of the problem (and especially paying for the cause of the problem)? That's good!


I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.

It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.


This is basically the "I'm not evil, it's just a normal job" excuse. Like with any moral issues there will be disagreement where to draw the line but yes if you do something that ends up supporting $bad_thing, then there is an ethical consideration you need to make. And if your answer is always that it's OK for the things you want to do then you are probably not being very honest with yourself.


Your response assumes the tool is a $bad_thing rather than one specific use of it. In my analogy, that would be saying that "there is an ethical consideration you need to make" before using (or buying) a blender.


It's not as one-dimensional as good vs bad. Transformers generally are extremely useful. Do I want to read your transformer generated writing? Fuck no. Is code generation/understanding/natural language interfaces to a computer good? I'd have to argue yes, certainly.

I cry every time somebody tries to frame it one dimensionally.


I cry every time someone uses the f-word in their writing for no apparent reason.


I'd rather read "fuck" than "f-word". The latter is like eating dumplings for Lent because surely $deity won't see the meat in there.


Cultured people have no need for such words in public discourse, so I'd rather not see either of them, or the need for them. People are judged by the words they use.


Cultured people? I'd certainly argue that words which people feel are "uncultured" strictly DO have a use case.

It's as if you're saying "smart people only color inside the lines." Take a step back.


The usage of those words is to accurately convey the speaker's emotion. If you don't see any use in that then that's your problem.


There are far better words available than those often referred to as "gutter talk".


Why laugh? Why can't a tool have good and bad uses, and why can't one be disappointed about the bad uses but embrace the good ones?


> I've heard, but haven't confirmed, they also detect you opening developer tools using various methods and remove your auth keys from localstorage while you have it open to make account takeovers harder. (but not impossible)

You can open the network tab, click an API request, and copy the token from the Authorization header.
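The point is that the token is just sitting in a header field of any captured request. A toy sketch of the step you'd do by hand in the network tab (the request dict, endpoint, and token here are all made up for illustration):

```python
# A captured request, shaped like what the browser's network tab shows you.
captured_request = {
    "url": "https://api.example.com/v1/me",  # hypothetical endpoint
    "headers": {
        "Authorization": "Bearer abc123",  # hypothetical token
        "Content-Type": "application/json",
    },
}

def extract_token(request):
    """Return the bearer token from a captured request's headers, or None."""
    auth = request.get("headers", {}).get("Authorization", "")
    prefix = "Bearer "
    return auth[len(prefix):] if auth.startswith(prefix) else None

print(extract_token(captured_request))  # prints abc123
```

Clearing localStorage doesn't help against this: any in-flight request already carries the credential.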


When is that going to be?


The Fourth Amendment only applies to places flying a flag with gold fringes


Every single post here is written in the most infuriating possible prose. I don't know how anyone can look at this for more than about ten seconds before becoming the Unabomber.


It's that bland, corporate, politically correct redditese.


I've found that libadwaita apps tend to look at least decent outside of their native environment, whereas Qt apps near-universally look terrible outside of KDE.

