Hacker News | thepasch's comments

> But to even know what is more useful, it is crucial to have walked the walk.

I feel like people tend to forget that among the many things LLMs can do these days, “using a search engine” is among them. In fact, they use them better than the majority of people do!

The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.

> I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.

Suppose I never learned how to derive a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:

- A function is some math that allows you to draw a picture of how a number develops if you do that math on it.

- A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.

- “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.

- This can tell us about how the function works.

Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.

I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
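Incidentally, the intuitive description above (looking at the results for the immediate neighboring numbers) is exactly what a numerical derivative does. A minimal Python sketch of that idea, with my own function names, and not how a symbolic tool would actually differentiate:

```python
def derivative(f, x, h=1e-6):
    # Central difference: compare the function's results for the
    # "immediate other numbers" just behind and in front of x.
    return (f(x + h) - f(x - h)) / (2 * h)

# The slope of f(x) = x^2 at x = 5 should come out close to 10.
print(derivative(lambda x: x ** 2, 5))
```

Knowing this still doesn’t teach you the symbolic rules, which is rather the point.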

What I’ll give you is this: if I knew exactly how the math worked, then it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in some places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value.

If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification. Science is what happens when you expect something, test something, and get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing has had an effect on that result.

This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to understand that there are things you don’t understand. In my view, the vast majority of problems plaguing us as a species lie in this fundamental thing that far too many people are just never taught the concept of.


> I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?

From a science standpoint, I'd say whatever "results" you got are completely worthless.

> I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct

And how do you know if your understanding is correct, if you are only taking what the LLM gives to you and you are not able to verify independently?

> Science is what happens when you expect something, test something, and get a result.

Right, but has any LLM come up with any hypothesis on its own? Has any AI said "given all this literature that I read, I'd expect <insert something completely out of the training data space>"?


Asking all of these questions after (allegedly) reading my entire comment either means you didn't pay attention, in which case I'm not going to spend any more effort responding; or you've completely missed the point, in which case I can probably save myself the effort anyway. In any case, if you're genuinely interested in answers to your questions instead of merely posturing, I suggest you re-read carefully and then make a better faith attempt at engaging with it.

I'll leave these direct quotes from the comment as a hint:

> But that only matters if I take its output at face value. […] If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.


The problem I have with your logic is that you are hedging your arguments so much that the whole point becomes meaningless. If you are trying to argue that young aspiring scientists will be able to use LLMs to learn new concepts instead of doing the hard work themselves, then you also need to explain how they will be able to develop the skills to analyze and "run more thorough verification" INDEPENDENTLY of LLMs.

> That’s what the program he just took was supposed to be for, learning not output.

If you send a kid to an elementary school, and they come back not having learned anything, do you blame the concept of elementary schools, or do you blame that particular school - perhaps a particular teacher _within_ that school?


> There is a vast difference between “never learned the skill,” and “forgot the skill from lack of use.”

This sentence contains the entire point, and the easiest way to get there, as with many, many things, is to ask “why?”


The majority of nails people might want to rent a HammerAsAService for these days can already easily be put in by open source hammers you can run on consumer, uh… workbenches.

Not to stretch the metaphor too far, but those workbenches require understanding (and hammers) to set up.

Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?


> Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

The same way any open-source infrastructure finds widespread use, I’d say. If you’re willing to put in the elbow grease, you can probably set it up yourself (maybe even with the help of one of the frontier, uh, hammers, in its free tier). Or there might be services that act as middlemen to make it all more convenient and cheaper. But the difference is that if Service X pisses you off, then there will be Services Y, Z, A, and B who sell the same service using the same open-source infrastructure, so you always have a choice.

If you don’t like GitHub, try Gitlab, Codeberg, Gitea, and so forth. Or Bitbucket or Azure DevOps. (Don’t actually, though.)


The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment. I personally think both are really important, and I also think AI won’t be able to do both better than any human could for a while yet, more so when it comes to doing both at the same time (though I’m not going to claim it never will).

My point is that both Alice and Bob have a place in this world. In fact, Bob isn’t really doing much different from what a Principal Investigator is already doing today in a research context.


> The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment.

Those aren't mutually exclusive.

"People who do things" can do both, and doing the latter is a function of doing the former, so they tend to do the latter sufficiently well.

"People who prompt things" can only do the latter, and they routinely do it poorly.


> “People who prompt things” can only do the latter, and they routinely do it poorly.

Right, but what I don’t agree with here is the idea that this category of people will never be able to improve into the first category of people. The value of an experienced anything is that they realize there is a big chasm between something that works now and something that will continue to work long into the future.

I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

You can do that by spending two weeks to build a brick wall by hand, or you can do that by spending two weeks having your magical helpers build ten brick walls that eventually collapse. I don’t think the tools are some sort of fundamental threat to cognition, I think they’re - within this society - a fundamental threat to safety, because the relentless pursuit of profit means even those that realize those ten brick walls should never actually ever be used to hold anything up will find themselves pressured to put a roof on them and hope, pray, they hold.

And this isn’t an LLM-specific thing. The vast diverse space of building codes around the world proves this, and coincidentally, the countries with laxer building codes tend to get a lot more done a lot faster; and they also tend to deal with a big tragic collapse every now and then, which I suppose someone will file away as collateral somewhere.


> I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

This isn't true: a car mechanic never evolves into an engineer, a nurse never evolves into a doctor. A car mechanic can learn to do some tasks you normally need an engineer for, and same with nurses, but they never build the entire core set of skills that separates engineers from mechanics and doctors from nurses.

There are maybe some exceptions to this, but those exceptions are so rare that it doesn't matter for this discussion. A few people still learning it properly won't save anything.


> This isn’t true: a car mechanic never evolves into an engineer, a nurse never evolves into a doctor.

“Doesn’t generally happen” =/= “is literally impossible”. The word “never” should be used with care.

> A car mechanic can learn to do some tasks you normally need an engineer for and same with nurses

This statement can only make sense if you regard titles as something that’s imbued upon you, and until it is, you are incapable of performing the acts that someone who has earned that title can perform. I’ll just say I fundamentally disagree with this notion on pretty much every conceivable level, and if that’s the belief system you subscribe to, that would also make arguing about this any further pointless. But I might just be getting you wrong.


You didn't read my closing statement:

> There are maybe some exceptions to this, but those exceptions are so rare that it doesn't matter for this discussion. A few people still learning it properly wont save anything.

Fair enough; it was stupid of me not to go and fix the "never" earlier, instead of correcting it at the end of my post. You are right that it would be very dumb to think people could never ever make such a transition, but I wrote that as the weaker version of never. There is a reason "never ever" exists: in common language, "never" often doesn't mean never.


> the idea that this category of people will never be able to improve into the first category of people

The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Changing from the second category to the first is something that would require already being in the first.


> The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Exactly! That’s my entire point. Because now you’re separating the categories by “is willing to put in effort” and “is not willing to put in effort” rather than by “has done the thing” and “hasn’t done the thing”.

I think the disagreement doesn’t lie in this concept, but rather in whether an LLM can be used by someone who’s willing to put in effort to assist them in doing so, rather than just having it do it for them. But as long as you understand what the thing you’re using is for, you don’t have to understand exactly how it works. You can shift gears in a car without a physics degree.


> I think the disagreement doesn’t lie in this concept, but rather in whether an LLM can be used by someone who’s willing to put in effort to assist them in doing so, rather than just having it do it for them

No, you misunderstood here. People aren't saying "it is harder to learn in the future", the issue is "it will be harder to make sure that someone will learn in the future".

Currently you need an engineering degree and experience to do engineering work. However, if in the future a lot of people get their degree and experience just by calling an LLM for every problem, those engineers will not understand what they are doing at all. Previously, someone with that experience would have solved a lot of problems manually on the job, and that experience made them an expert. The same person solving those problems by calling an LLM and pasting in the answer will be just as ignorant as someone with no experience.

Most such people today didn't want to learn to be engineers out of curiosity; they just wanted a job. In the future all such people would use an LLM and never learn. Those are the main parts of our workforce, so it is a scary prospect that in the future we cannot force them to learn things properly the way we do now, since LLMs allow them to do basic tasks without learning.

If you argue there are plenty of people who learn for fun, you would be wrong. Extremely few people learn enough in their own time to contribute meaningfully to, for example, mathematics; it isn't enough to matter. People learn those fundamentals primarily because they are forced to for a degree they need for a job; if they weren't forced to learn and pass tests, they would happily go do the job without any knowledge or skills.


What does this do other than copy the selected HTML into a new API prompt that says “describe this”?

When you just copy the HTML, you get a ton of useless repeating data. This tool removes a lot of duplicated things and structures the styling in a better way, so we don't spam the styling rules. The goal is to provide just the useful info, without the noise.

Having said that, it does do essentially what you described, but when prototyping fast rather than designing from scratch, it's extremely useful.
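For the curious, the kind of cleanup described (stripping styling noise before prompting) might look something like this regex-based Python sketch; the function name and the exact attributes dropped are my own guesses, not the extension's actual code:

```python
import re

def strip_noise(html: str) -> str:
    # Drop inline styles, data-/aria- attributes, and generated class
    # soup, so only the structural HTML reaches the LLM prompt.
    html = re.sub(r'\sstyle="[^"]*"', '', html)
    html = re.sub(r'\s(?:data|aria)-[\w-]+="[^"]*"', '', html)
    return re.sub(r'\sclass="[^"]*"', '', html)

print(strip_noise('<div class="css-1x2y" style="color:red" data-id="7">Hi</div>'))
# → <div>Hi</div>
```

A real tool would presumably also deduplicate repeated subtrees and consolidate computed styles, which a few regexes can't do.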


Right, but that sounds like something any frontier LLM could one-shot. It’s a basic browser extension that wraps elements in the DOM and sends them to an LLM API embedded in a prompt. This would be neat as an open-source repo, but charging $10/month for this is certainly a choice.

Yeah, for sure, I don't mind open sourcing it one day, but I think you'd understand why I chose the current setup, since we're both on HN :D. I do think the free tier is more than enough for most people.

Hey, I mean, Steinberger got acquihired off of an Open Source repo, so...

yeah, me next xDDDD

Well, that does sound quite useful.

Qwen3.5-Plus is the largest variant of the open weight Qwen3.5 model, expanded with a 1M context window and fine-tuned on the Qwen-native harness’ specific tools.

At this point, I’m pretty sure saying “I’ve done my research” is more of an indicator that someone hasn’t done their research but would like to be taken seriously anyway by pretending they did. The kind of person who’s both smart enough to realize that an issue might be more nuanced than they present it, as well as intellectually dishonest enough to… not care.

> What is the motivation for someone to put out junk like this?

Getting something with a link to their GitHub onto the frontpage of HN. Because form matters much more in this world than substance.


I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.

Right. The LLMs' quirks aren't bad in themselves; they're bad when they're in every damn paragraph. They're mostly things that in moderation actually improve writing, and that if you see them once (without the knowledge that they're things LLMs do) would rightly tend to make you think better of the author. And so, of course, in RLHF training they get rewarded, and unfortunately it's not so easy for an LLM to learn "it's good to do this thing a bit but not too much."

The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.


I refuse to cater to the "em dashes are AI" crowd.

And I was just noticing that my home-built blog render pipeline produces dumb quotes and that was embarrassing to me. Needs to be fixed.

(Counterpoint, dumb quotes are 7-bit clean and paste nicely... Hmm.)
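A naive smart-quote pass for such a pipeline can be tiny; this is my own sketch (real pipelines usually reach for something like smartypants), and it deliberately ignores edge cases such as quotes adjacent to punctuation:

```python
import re

def smarten(text: str) -> str:
    # A double quote at the start of the text or after whitespace opens
    # (U+201C); every remaining double quote closes (U+201D).
    text = re.sub(r'(?:^|(?<=\s))"', '\u201c', text)
    return text.replace('"', '\u201d')

print(smarten('He said "hi" to me'))
```

The 7-bit-clean objection stands, of course: the output is no longer plain ASCII.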


> I refuse to cater to the "em dashes are AI" crowd.

I wrote a plugin for my blog that converts all hyphens (surrounded by whitespace) into em-dashes.

https://blog.nawaz.org/posts/2025/Dec/a-proclamation-regardi...
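The whitespace-surrounded-hyphen conversion fits in a few lines; a sketch of the idea in Python (my own, not the linked plugin's code):

```python
import re

def emdashify(text: str) -> str:
    # Only a hyphen with whitespace on both sides becomes an em dash
    # (U+2014); hyphenated words like "open-source" are left alone.
    return re.sub(r'(?<=\s)-(?=\s)', '\u2014', text)

print(emdashify('hyphens - like these - become em dashes'))
```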


I feel ya. I've never been accused of using an LLM, fortunately, but depending on the context I do use “smart quotes” (even in „Dutch” or »German«) and the em-dash obviously… (And that ellipsis fella there. It's just so simple to type with a compose key set up.)

I thought guillemets were French rather than German, and pointed the other way around.

https://en.wikipedia.org/wiki/Quotation_mark#Summary_table


German uses both kinds depending on the style and writer's preference. French has the guillemets the other way around.

(That Wikipedia table shows that too by the way.)


Same here, I've always used em dashes and have been called out on negative comparisons – I didn't even know they were an LLM thing. Should I read more LLM to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(

It's absolutely shocking how many people think that inverting all the quality metrics that we've traditionally used "because LLMs" will lead to good things. Nothing about this will end well.
