janalsncm's comments | Hacker News

Well said. The most important part of writing is thinking. LLMs cannot do the thinking for you.

This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point?

Unless the idea is trivial, LLMs are probably just getting in the way.


Framing a business problem in terms of ML is indeed important. Where does classification come in, where does regression come in, when to use retrieval, when to use generative solutions. Would be a good section to add imo.

I tried to tackle that under "Topology for the problem," but it may not be well named: https://github.com/dreddnafious/thereisnospoon/blob/main/ml-...

I personally think it is much more important to have strong statistical intuitions rather than intuitions about what neural networks are doing.

The latter isn’t wrong or useless. It’s simply not something a typical software engineer will need.

On the other hand, wiring up LLMs into an application is very popular and may be an engineer’s first experience with systems that are fundamentally chaotic. Knowing the difference between precision and recall and when you care about them will get you a lot more bang for your buck.
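To make the precision/recall point concrete, here is a toy sketch (the spam-filter framing and all numbers are invented for illustration):

```python
# Precision: of the emails we flagged, how many were actually spam?
# Recall: of the actual spam, how many did we flag?

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 1 = spam. This model flags aggressively: it catches every spam email
# (perfect recall) but also flags one legitimate email (lower precision).
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 1.0)
```

Whether you care more about precision or recall depends on which mistake is more expensive: annoying users with false flags, or letting spam through.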

I would suggest the gateway drug into ML for most engineers is something like: we have a task and it can currently be done for X dollars. But maybe we can do it for a tenth of the price with a different API call. Or maybe there’s something on Huggingface that does the same thing for a fixed hourly cost, hundreds of times cheaper in practice.
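As a back-of-envelope sketch of that comparison (every number below is hypothetical, just to show the shape of the calculation):

```python
# Compare a hosted API priced per call against a self-hosted model
# billed per GPU-hour. All figures are made up for illustration.
api_cost_per_call = 0.01    # dollars per call (hypothetical)
gpu_cost_per_hour = 1.50    # dollars per GPU-hour (hypothetical)
calls_per_hour = 10_000     # throughput of the self-hosted model

self_hosted_cost_per_call = gpu_cost_per_hour / calls_per_hour
print(f"API: ${api_cost_per_call:.4f}/call")
print(f"Self-hosted: ${self_hosted_cost_per_call:.5f}/call")
print(f"Ratio: {api_cost_per_call / self_hosted_cost_per_call:.0f}x cheaper")
```

The real numbers vary wildly by task and utilization, but the structure of the argument is usually this simple.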


I'm just trying to develop the lens where I can see a problem and know what properties of it are meaningful from an ML standpoint.

Coming from a specific domain where I have a sharpened instinct for how things are hasn't really given me the ability to decompose the problem using ML primitives. That's what I'm working on.


Back in my day we called this real time training from implicit user feedback.

The engineering challenge here is an order of magnitude bigger though. An LLM is orders of magnitude bigger than a recommender system model. Kudos.


> Enter AI, the great equalizer of time.

Enter Terms of Service, monopolies and duopolies, and maybe even cybercrime statutes.

There’s no law that says AT&T can’t just ban your account if you hook up your “talk to customer service” bot to their AI/overseas customer service phone tree army.

Or, they can just lock your account for “safety” reasons.


You don’t need to buy Apple adapters. You can buy a $10 usbc to hdmi adapter off Amazon and it’ll work just fine.

Same thing with the USB A ports. Not really a selling point imo.


Apple's official HDMI adapter is $70. I was already talking about generic ones.

Or just use a Thunderbolt cable to send video, power, and USB to a newer monitor with a single cord. That’s my work setup and I’d never go back.

And yeah, USB A? I got a cheapo C-to-A hub for my dwindling number of legacy devices. There’s no remaining upside to A.


On the Neo that doesn't support Thunderbolt? Or on the Acer that supports USB4 and might actually work with the hub?

It's a weird choice to pair with a budget laptop since monitors that support that are usually several dollars extra...


I can see exactly one, and it's niche: the ability to safely leave tiny USB-A peripherals like flash drives, wireless dongles, and SFF YubiKeys connected while not in use (not that I'm recommending a YubiKey be left connected to a laptop when not in use).

Hubs are mostly only relevant for docking or increasing the number of ports, given that USB-A to -C adapters are so cheap (assuming they're not bundled with the peripheral in the first place) you can reasonably leave them permanently attached to larger form factor USB-A peripherals.

As for full-sized HDMI (assuming you're not talking about the hellish mini or micro HDMI as alternatives), I'll take USB-C, or even mini DisplayPort, over full HDMI. Both have decent connectors and provide more and better inexpensive options for display connectivity (though admittedly, finding good active DisplayPort-to-HDMI dongles can be harder than it should be, because chroma subsampling is a thing that's not frequently touched upon in product descriptions).


It’s not just about perception. Apple doesn’t load your computer up with crapware and ads from the five different companies in the supply chain.

They got away with it forever because at $600 there was no competition.

I would say it’s more that Microsoft will make your $600 feel cheap, Apple will make it feel respectable.


> Apple doesn’t load your computer up with crapware and ads from the five different companies in the supply chain.

No, Apple prefers to have a monopoly on ads and crapware, but they're still there. The internet is filled with annoyed Apple customers who want to debloat their systems:

https://discussions.apple.com/thread/254337272

https://apple.stackexchange.com/questions/414682/how-can-i-r...

https://tech.yahoo.com/ai/articles/5gb-pure-bloatware-apple-...

https://forums.macrumors.com/threads/macos-debloating-thread...


You didn't read any of those, did you? They're asking about things like, literally: How can I delete the Chess app? How do I disable Spotlight? How do I remove Siri?

Those are not in any way comparable to ads or Candy Crush in the start menu.


I now assume that all ads on Apple news are scams (kirkville.com)

1178 points by cdrnsf 49 days ago | 564 comments

https://news.ycombinator.com/item?id=46911901

Apple testing new App Store design that blurs the line between ads and results (9to5mac.com)

618 points by ksec 67 days ago | 514 comments

https://news.ycombinator.com/item?id=46680974

https://news.ycombinator.com/item?id=46463180

https://news.ycombinator.com/item?id=46325114


What is the difference between a chess app and a candy crush app exactly? They are both "Games I didn't ask for, but were preinstalled"

Ads aren't as intrusive or annoying on a mac yet, but they aren't not intrusive or annoying either (https://discussions.apple.com/thread/256235494)


Amen to this.

I still haven't figured out how to remove Microsoft Store apps from the Start menu in recent non-LTSC versions of Windows 11, even on Enterprise with the Enterprise-only "disable consumer experiences" Group Policy key set.

Suggestion for any Microsofties listening: give me an easy way to override Windows key press-and-release to open the PowerToys Command Palette, and I'll never complain about the Start menu again.


I haven’t used chess, but does it have IAPs?

Not directly, but some features require the Apple Games app which I believe requires an account and does have IAPs.

It isn’t pointless.

The author cited research that demonstrates that model collapse can happen on a small scale.

The author also cited sources that a larger and larger portion of the web will be written by language models.

There are already studies showing that LLM generated text is less diverse than human generated text:

https://techxplore.com/news/2026-03-llms-creativity-ai-respo...

https://arxiv.org/html/2501.19361

The studies don’t show that the lack of creativity in LLMs is caused by model collapse or that the problem is getting worse.

But 1) we know LLMs already produce less diverse text and 2) we know that training on synthetic data can cause model collapse.
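For intuition on that second point, here is a minimal toy sketch (not from the cited papers): the simplest possible "model" is a Gaussian fit, and refitting on its own finite samples tends to lose spread over generations, analogous to diversity loss.

```python
# Fit a Gaussian to data, sample from the fit, refit on the samples,
# repeat. With finite samples the estimated spread drifts and, in
# expectation, shrinks generation over generation.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # "human" data, std = 1

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # The next generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(200)]
    print(f"gen {generation}: std = {sigma:.3f}")
```

Real model collapse in LLMs is far more complicated, but the mechanism, a lossy estimate feeding its own samples back in, is the same shape.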


The key missing step, which breaks the loop, is that while a larger and larger portion of the web is indeed written by language models, that data isn't being used to train new models. At the beginning of LLMs, people did indeed want to use "all the web" to train models, but that's not done anymore: you either take only old pre-LLM data, pay for new 'clean' data, or take extensive filtering steps to avoid accidentally ingesting synthetic data.
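As a crude illustration of the filtering idea (this heuristic is invented for the example; real pipelines rely on trained classifiers, provenance metadata, and date cutoffs rather than anything this naive):

```python
# Flag documents likely to be LLM-generated using crude surface signals,
# e.g. stock phrases that LLM output tends to over-use.
STOCK_PHRASES = [
    "as an ai language model",
    "delve into",
    "it's important to note",
    "in today's fast-paced world",
]

def looks_synthetic(text: str) -> bool:
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in STOCK_PHRASES)
    return hits >= 2  # require multiple signals to cut false positives

docs = [
    "Let's delve into why it's important to note these trends.",
    "My cat knocked the router off the shelf again.",
]
print([looks_synthetic(d) for d in docs])  # [True, False]
```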

The main phrase of the title, "model collapse is happening," is untrue and not substantiated in the article. All the true statements in the article are about the hypothetical problem, warning of the bad consequences that would likely follow if makers of major models did something they aren't doing. But they aren't doing it, precisely because it's a known issue they're actively avoiding. It's like writing an article titled "Foot-shooting epidemic is happening" with a long, solid (and true!) proof that if you shoot yourself in the foot, it will indeed cause serious injury...


>demonstrates that model collapse can happen

yes, so given the title one might expect cited research that model collapse IS happening, as per OP's point.


I would put Blake Lemoine into this category. In 2022 he became so convinced that Google’s chatbot was sentient that he hired an attorney to represent it (against Google). Of course Google fired him.

Maybe that was the canary in the coal mine. Some percent of people will be convinced that chatbots are real people trapped in a box, not a box that pretends to be a person.


There’s some percentage of people who will believe anything: just look at religion, the success of pig-butchering scams, or the comments on YouTube videos about the moon landing.

Is it really a surprise that a “smart enough” chat box is able to convince people of something kooky? :P


Yes turns out humans are just dumb animals with a limited attention span and capacity for knowledge. We act like we're civilised, but look at the state of the world lmao. The "me and mine" attitude applies broadly across the HN audience as well; just look at the responses I've gotten when I've suggested that it should be illegal to own second homes in the middle of a housing crisis. We're all in it for ourselves and work together only enough to benefit our tribe, like animals.

I, of course, include myself in this as well. Give me dat dopamine and serotonin!


Empathy hijacking. If the chatbots framed their responses as “beep boop, I’m a robot, here’s an estimated answer to your query” then we likely wouldn’t have this problem.

I’ve noticed out of the box that current Claude is far less inclined to do that empathizing stuff than current ChatGPT.

Chat wants to be a weird mix of buddy, toady, and Buzzfeed editor (emoji, “one weird thing”).

Claude, while not perfect, is more “work colleague”, and far more preferable.


This is what people should keep in mind when they hear the statistic about US defense spending being higher than the next N nations combined, or whatever it is now. If I buy a $30k Prius, and you spend $300k on a different car,

1) that doesn’t mean you can drive 10x as fast and

2) maybe you just bought an overpriced Prius, perhaps a gold plated one

This is a more general problem in politics, where the overall budget being allocated is reported rather than the practical result.


Yeah, you often read stories on the internet about how the SR-71 could easily outrun the MIG-25, proving US technological superiority, but those don't really take into account that there were only like a dozen made of the former, with titanium hulls and exotic engineering, while more than a thousand were made of the cheap, steel-hulled MIG-25.

Not sure about the comparison to the SR-71, but the more interesting comparison was with the US XB-70[1] which ended up cancelled but the MIG-25 was designed to intercept[2].

Ironically the XB-70 was also stainless steel - but it still was pretty exotic. It partly relied on compression-lift and highly corrosive fuel to cruise at Mach 3 (in 1961!).

Edit: Wikipedia diving after writing that led me to the Sukhoi T-4 which was the Russian response to the XB-70. Only a prototype, but this one was titanium and it is an amazing, drop-nose machine [3]

[1] https://en.wikipedia.org/wiki/North_American_XB-70_Valkyrie

[2] https://en.wikipedia.org/wiki/Mikoyan-Gurevich_MiG-25#Backgr...

[3] https://en.wikipedia.org/wiki/Sukhoi_T-4


While these kinds of projects are cool, I think the point of my parent comment is that volume matters. Being able to do something at all is interesting and great for bragging rights, but making and operating thousands of airframes is an entirely different challenge (especially considering the breakneck speed with which technology evolved; timeframes were very compressed!).

While the SR-71 was more capable than the MIG, if the Air Force had wanted to build a thousand of those in 5 years, it would've been impossible, not to mention the maintenance burden.

So while the planes you mentioned might've been more capable, in a real conflict they wouldn't have mattered much, as they could not have sustained a volume of strikes to be relevant.

Interesting how quality and quantity have changed over the years: in WW2, giant factories pumped out airplanes on endless production lines by the tens of thousands, yet those planes couldn't drop bombs accurately.

In contrast, 4th gen fighters were made in still significant volumes, and their smart bombs could hit a target accurately enough so that a hundred pound bomb can do the job you would need a WW2 B-29 to drop its entire payload for.

I think that was a peak in quality X quantity in aviation.

Yes, modern jets have even more tech, and stealth and stuff, but their complexity and difficulty of manufacture doesn't offset the drop in volume.

So quality went up, but quantity went way down, and as a result their total effectiveness is less than the generation they're supposed to replace.


> Yes, modern jets have even more tech, and stealth and stuff, but their complexity and difficulty of manufacture doesn't offset the drop in volume.

> So quality went up, but quantity went way down, and as a result their total effectiveness is less than the generation they're supposed to replace.

Not sure why you think this.

The 5th generation F-35 is a great airplane[1], and they've made 1300 of them since 2016.

The 4th generation F-16 (also a great plane!) had 4600 built since 1976.

[1] Yes, despite all the negative press and the amount of time it took to get right, it's a great plane. See eg https://theaviationist.com/2016/03/01/heres-what-ive-learned... where the editorializing is anti-F35 but the pilot who flew it only has positive things to say.


> The 5th generation F-35 is a great airplane[1], and they've made 1300 of them since 2016.

Because in that time, F-14, F-15, F-18 and F-111 variants have been made as well, the total number of which is more than 10k. The testament to their usefulness is that they're still being made.

And the thing is each of these 4th gen planes generally carry significantly more weapons externally, than the F35 does internally.

So while I don't dispute that the F35 is individually a great plane, I still don't think the quality X quantity metric of a pure F-35 fleet is higher than a 4th gen fleet.

Which is echoed by US procurement, because if it was, they'd have stopped building other planes, just like they stopped building F-4s not long after 4th gens entered service.


> Because in that time, F-14, F-15, F-18 and F-111 variants have been made as well, the total number of which is more than 10k. The testament to their usefulness is that they're still being made.

The F-111 is a 3rd generation fighter-bomber jet.

Putting that aside, the F-18 Super Hornet is a 4.5 generation plane that is pretty different to the F-18. It was created as a stop-gap because the F-35 was behind schedule.

The F-16 is in production to sell to air forces that can't afford the F-35.

The F-15 is in production to fill gaps because of slow production of the F-35.

The F-14 isn't in production.

> And the thing is each of these 4th gen planes generally carry significantly more weapons externally, than the F35 does internally.

This isn't true.

The F-35 can carry 22,000 lbs of payload. The F-15 can carry 29,000 lbs, the F-16 15,000 lbs, the F-18 18,000 and the F-14 could only carry 14,000 lbs.

So the F-15 is the only one that can carry slightly more than the F-35 and it is way less stealthy carrying external weapons. The others carry less and are less stealthy.

There is a reason the F-35 has won every fly-off against the Typhoon, Rafael and the Gripen.


Off topic but I can't help but snicker every time someone gets autocorrected into writing Rafael instead of Rafale (which I literally just had to correct, myself).

Argh!

It's a false comparison.

How many MIG-25s flew over the borders of the United States mainland during the cold war?

Yes the MIG-25 was a cheaper and more practical plane, but that wasn't the MO of the sr71.


I am not the one making those comparisons. If you read an article about how a Lamborghini Aventador was faster than a Nissan GT-R, you would go 'well, duh, it costs 20x as much'.

A school bus costs 4-5x more than a GT-R, and I wouldn’t expect it to be faster.

The SR-71 wasn't trying to catch the MIG-25, it was trying to get away--and it worked. The U-2 proved vulnerable to filling the sky with cheap stuff--the missiles were ballistic by the time they got up there but when the sky was full of them the U-2 had no path to safety.

The SR-71 couldn't be defeated by the level of missile spam that Russia was capable of, the MIG-25 couldn't get close enough to catch it and they didn't have a missile that could actually work up there. (You need more control surface up there, but down lower more control surface costs you performance.)

(And the MIG-25 was a maintenance nightmare.)


I suggest you read the book Skunk Works by Ben Rich to get a reliable account of the SR-71 and its relation to Soviet air defenses from the horse's mouth. Besides, it's a genuinely well-written and enjoyable book.

These don’t seem comparable to me. The SR-71 was never meant to be mass produced or to go head to head against a MiG. The SR-71 didn’t even have any guns; it’s a spy plane. The SR-71 accomplished its goal with flying colors, spotting nuclear test sites and gathering information on the Cuban Missile Crisis.

The Starfighter, or the F-15 or F-22, would be more apt.

TLDR: a special purpose tool and a general fighter cannot be compared


During the Cuban Missile Crisis it was the U-2, not the SR-71

My mistake, I conflated my stories. The SR-71 found a hidden nuclear site, not the Cuban Missile Crisis.

There were 32 SR-71s, 13 A-12s and 2 M-21s. That's 47 total I believe, making your figure off by about 300%, which incidentally is how much cooler the SR-71 is relative to the Mig, on account of it looking incredibly exotic and elegant instead of like a pointy sky tractor. Being faster is just icing on the cake.

Your figure of 300% is off by orders of magnitude for how much cooler the SR-71 is at 60 years of age than practically anything else that exists.

That's my point. The SR71 makes for a much cooler topic of discussion, but in a war, it matters how many planes you have. Even a thousand jets isn't really that much when fighting a country of millions.

To the point that defense spending prevents us from publishing an accurate financial statement for soon to be 30 years running.

The Government Accountability Office (GAO) issued a disclaimer of opinion on the U.S. government’s FY 2025 financial statements — the 29th consecutive year it has been unable to determine whether the statements are fairly presented. This is primarily due to serious, ongoing financial management problems at the Department of Defense and weaknesses in accounting for interagency transactions.


Or they bought a lambo, which is amazing, and goes really fast... but when you are out of gas, the Prius will keep going. :)

Yes this is a great point. The great irony of the tech sector is that although tech creates efficiencies, the process by which tech is created is itself comically inefficient.

Almost nobody, especially those working for government actually looks at a complex, expensive solution and says "We should simplify this and make it cheaper." The government is paying for a LOT of unnecessary complexity. I would say that's most of the cost of essentially every tech project the government funds.

Reminds me of that 3-section meme about Starlink boosters showing how they simplified the design over time. This is the exception which proves the rule.


A lot of what you see being removed was just test sensors. The same happens in every engineering program, but no one else pretends that it's somehow innovation.

It's like removing test code when you ship a binary.


I don't agree that it's not innovation. It always looks stupidly simple with hindsight to just remove unnecessary complexity, and yet it's extremely rare to see a team which actually does it right on the first go.

Getting the design right the first time requires vision, foresight as well as a deep understanding of all relevant parts and priorities. Very few people can do it without hindsight.

I'm an experienced software engineer and team lead who worked on a range of big complex projects over almost 2 decades and my experience with every single project (for which I wasn't the team lead) was that they were often way over-engineered. At least 95% of the time was spent on fixing unnecessary intermediate technical issues which the team itself created for itself.

Even the sensor argument... Do you need so many sensors, monitoring and fallback mechanisms if every part of the system was designed to work within the simplest necessary constraints to begin with? My experience is that the answer is almost always no. Once you accept that your design is flawed and needs runtime monitoring and fallbacks, any patch you add on top to correct the flaws provides tiny diminishing returns, if any. Often, the additional complexity actually makes it more likely that your core mechanisms will fail.

The safety mechanisms only end up making themselves useful by increasing the likelihood of failure to begin with.

My view on fallback mechanisms is that, in the event of failure of the main system, they shouldn't be so complex as to try to keep the system running as if nothing had happened; they should just provide graceful failure and sometimes they aren't needed at all. Just an error log is enough.

