ocbyc's comments

If 1 click was for 0, how many clicks for 1?


I would imagine 2


Exactly. D+1 clicks for every digit D.
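
A minimal sketch of that rule in Python, assuming digit d costs d+1 clicks (0 is one click, 1 is two, and so on):

    def clicks(number: str) -> int:
        # Digit d costs d + 1 clicks: 0 -> 1 click, 1 -> 2 clicks, ..., 9 -> 10.
        return sum(int(d) + 1 for d in number)

    print(clicks("2024"))  # 3 + 1 + 3 + 5 = 12 clicks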


This is a straw man. Even if the richest people paid an additional $16 billion in taxes, that would run the government for about a day. Our problem is with spending.
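
(Rough arithmetic, for what it's worth: federal outlays are on the order of $6 trillion a year, and $6T / 365 ≈ $16.4B per day, so $16 billion really is about one day of spending.)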


I'm not sure where that $16 billion figure came from, but what about the top 100 businesses?

The ones that have been gaming the tax system for 3 decades?


I agree. We don't spend enough money on everything, most notably housing.


The top earners only need so many houses and places to live, even including corporate housing.

The median population doesn’t have the money to spend.


Isn't that a problem caused by inflation, which is (right now) driven by government spending?


No. Artificial inflation, where the median wage doesn't adjust, comes from a handful of key industries and has rippled across the rest of the economy.

Focusing on government spending is a distraction.


How can we possibly spend more?


Transformers are just pattern matching. So if you write "give me a list of dog names" it knows that "Spot" should be in that result set, even though it doesn't really know what a dog is, what a list is, or what a spot is.
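
A toy sketch of what pure pattern matching looks like, with made-up counts; a real transformer is vastly more sophisticated, but the principle is the same:

    # Made-up counts of tokens seen after "dog name" contexts in some corpus.
    counts = {"Spot": 812, "Rex": 640, "Buddy": 955, "Gary": 3, "Florence": 1}
    total = sum(counts.values())
    probs = {tok: n / total for tok, n in counts.items()}
    # "Spot" scores highly without the model knowing what a dog is.
    print(sorted(probs.items(), key=lambda kv: -kv[1]))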


> Transformers are just pattern matching.

That's trivially true. The question is: are we any different?


I think so. You ask that question because you’re interrogating the position, not because 1000 humans have asked that question in similar situations.

You and I know there's a truth and we'd like to find it. The GPT is just happy (i.e. rewarded) to produce frequently used tokens.


And I'm just happy to perform actions that will make me survive and reproduce?


Most likely, unless you meditate a lot. Sometimes you'll take a bullet to save other people. Sometimes you'll drink yourself into a state that doesn't help you survive or reproduce. Or you'll write anonymously on a forum, which doesn't help with survival or reproduction, because it's enjoyable, makes you think, or you're addicted. Who knows :)


You are even better at analyzing me than GPT-4.


But maybe that feeling of "looking for truth" is just what happens when you're doing pattern matching on the text embeddings?


I feel it's a bit more, given that we keep thinking about it after the sentence is complete. But it raises some interesting questions about what an agent would do if it had an instruction to keep trying until it got a reliable answer. Maybe an argument-generating agent and a critic agent; a sketch follows below.

Worth a shot :)
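
A minimal sketch of that loop; llm() here is a hypothetical stand-in for whatever completion API you'd actually use:

    # Hypothetical generator/critic loop; llm() is a placeholder, not a real API.
    def llm(prompt: str) -> str:
        # Wire this to an actual model call; canned reply keeps the sketch runnable.
        return "no flaws found" if prompt.startswith("List flaws") else f"draft answer to: {prompt}"

    def answer_with_critic(question: str, max_rounds: int = 3) -> str:
        draft = llm(f"Answer carefully: {question}")
        for _ in range(max_rounds):
            critique = llm(f"List flaws in this answer to '{question}':\n{draft}")
            if "no flaws" in critique.lower():
                break  # the critic is satisfied
            draft = llm(f"Revise to address this critique:\n{critique}\n\nAnswer:\n{draft}")
        return draft

    print(answer_with_critic("Is the answer reliable?"))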


I approach LLMs with the perspective that "maybe this demonstrates that we humans are all just stochastic parrots?" and we should have the null hypothesis that humans are just pattern matchers.


This is the way I perceive my thoughts. I don't know in advance what I'm going to think; these could all be stochastic "tokens" based on what I've observed in my life.

So of course I feel a bit offended when people claim LLMs are just stochastic parrots, because it doesn't feel to me that I'm any better.

My thoughts just happen, and sometimes not in my favor: I have had periods of depression when I had no control over my thoughts. I don't have control now either, but at least I'm in a better place, because the "happiness" chemicals are regulated into a state more favorable to me, for various reasons.

I didn't know what I was going to write in response to your comment; I was just streaming my consciousness.


I don't think that's true. They clearly group related things together and seem to be able to create concepts that aren't specifically in the training data. For example, a model will figure out the different features of a face (eyes, nose, mouth) even if you don't explicitly tell it what those are. That's why they're so cool.


Most of that magic comes from embeddings, no? Which is clustering things by their relations in some N-dimensional space.
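
A rough sketch of what "clustering by relation" means, with made-up 3-d vectors; real embeddings are learned and have hundreds or thousands of dimensions:

    import numpy as np

    # Made-up toy vectors; real embeddings are learned, not hand-written.
    emb = {
        "dog":        np.array([0.9, 0.1, 0.0]),
        "puppy":      np.array([0.8, 0.2, 0.1]),
        "carburetor": np.array([0.0, 0.1, 0.9]),
    }

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cos(emb["dog"], emb["puppy"]))       # ~0.98: nearby in the space
    print(cos(emb["dog"], emb["carburetor"]))  # ~0.01: far apart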


Exactly. It figures that out on its own. That's what "understanding" looks like in this context, imo.


They are cool, but then you are also cool.


Can you describe a test that would separate trivial pattern matching from true understanding?


A simple conversation would do.


Could you share a conversation link with GPT-4 about either a "list" or a "dog", to determine whether it truly understands one of those things compared to a human?


I don't have a GPT account. I would start with: "Do you like dogs?" Next question: "Why?"


It kind of answered "why" for me

"""I think dogs are wonderful! They're known for their loyalty, playfulness, and their ability to bring joy to people's lives. What about you? Do you have a favorite breed or dog story?"""

What do I ask next?


This reply sounds so fake that, in my opinion, it should be enough to rule out any hint of intelligence. However, if you insist, I'd continue with this:

"I'm not a fan of dogs. I do know a few dogs though. Sometimes I invite my neighbour's dog for dinner. He's got good taste, for a dog. The last time he came around we talked about the situation in the Middle East. Do you know a good book about this topic that I could recommend to him?"


Just did that. It seems to understand. Checkmate /fingerguns


How would I test whether I "know" or "understand" what a dog is?


Oh, that's easy, we just give the dog a keyboard and see if you accurately identify that it's a dog from your text-based interactions ;-)


Are you calling me a dog?


Even this seems too grand a claim. I’d water it down thus: the LLM encodes that the token(s) for “Spot” are probabilistically plausible in the ensuing output.
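
Concretely, with invented logits (purely illustrative numbers): the model's final layer just assigns "Spot" a lot of probability mass:

    import math

    # Invented next-token logits after a "dog names" prompt; not from a real model.
    logits = {"Spot": 4.1, "Rex": 3.8, "Gary": -1.2, "Florence": -2.0}
    z = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / z for tok, v in logits.items()}
    print(probs)  # "Spot" and "Rex" dominate; "Gary" barely registers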


...because it understands what a dog name is. Why wouldn't you see Gary or Florence in that list? How does it know those aren't dog names?

You can't be suggesting it has memorized relationships between all concepts; the model would be enormous.

So clearly, there is something else going on. It's able to encode concepts/ideas.


The model is enormous, and N-dimensional for very high N. But the model remains insufficiently enormous for understanding, and moreover, the model cannot observe itself and adjust.

Ask an LLM to extrapolate and watch any semblance of reason collapse.


Extrapolate what?


I guess we hope these buses will last 35+ years?


I mean, what's in 'em? Motors have very long service lives and can be replaced. So can batteries. Both can be recycled. The batteries may be way smaller/lighter by then too. The rest seems quite repairable as well.

Is it worth it compared to getting new ones in 10-15 years (based on sibling comment)? I don’t know.

But maybe electric fleet vehicles will change the calculus on when you refurbish them vs. just replacing them. Time will tell.



I suspect it varies wildly by region. Rust is the biggest enemy in places that get snow.

Beyond that, I suspect the seats get beat up.


It was probably the porthole. It looks like the viewport was rated for 1,300 meters (they were descending to 4,000 meters). An engineer even warned them about the issue, and was fired. I suppose they didn't want to spend the money to develop an adequate one.


You have multiple parents or they have multiple computers?

What did you replace Windows with?


Grammatically, the location of the apostrophe makes "parent" singular. However, it seems weird if I think of it that way, likely because the pronunciation is the same whether "parent" is singular or plural. "Computers" is plural regardless. This really doesn't affect me in any way, but somehow seeing someone else want to know makes me want to know.


"What did you replace Windows with?"

Haiku, probably... /s

The only real alternative for most hardware is Linux. And when the use case is browsing the internet and you're there to help them get unstuck after a broken update, things should be fine.


Well, we left a bunch of the MRAPs behind, so I guess they gotta start over.


YouTube is probably the best source of education delivered by passionate teachers at this point.


I've been looking for a really engaging, solid intro number theory course on YouTube, including rings, abelian groups, etc., for early teens.

Does anyone have a recommendation here?


Yes, we need cheaper energy. Nuclear is a good option. Many drawbacks of previous-generation reactors have been solved.


I had to scroll down too far. Cheap nuclear energy now!

