This is a straw man. Even if the richest people paid an additional $16 billion in taxes, that would run the government for about a day. Our problem is with spending.
Transformers are just pattern matching. So if you write "give me a list of dog names" it knows that "Spot" should be in the result set, even though it doesn't really know what a dog is, what a list is, or what a spot is.
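For the curious, this is roughly what that looks like mechanically: a causal language model scores every token in its vocabulary by how plausible it is as the next token, and names like "Spot" simply score high after a list of dog names. A minimal sketch using GPT-2 via the Hugging Face transformers library (the exact top tokens depend on the model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Here is a list of dog names: Rex, Buddy,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10s}  {prob:.3f}")
```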
Most likely, unless you meditate a lot. Sometimes you'll take a bullet to save other people. Sometimes you'll drink yourself into a state that doesn't help you survive or reproduce. Or you'll write anonymously on a forum, which doesn't help with survival or reproduction either, because it's enjoyable, it makes you think, or you're addicted. Who knows :)
I feel it’s a bit more than that, given that we keep thinking about it after the sentence is complete. But it raises some interesting questions about what an agent would do if it had an instruction to keep trying until it got a reliable answer. Maybe an argument-generating agent paired with a critic agent, something like the sketch below.
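To make that concrete, here's a minimal sketch of what such a loop could look like. Everything here is hypothetical: `llm` stands in for whatever completion API you'd use, and the prompts and stopping rule are just placeholders.

```python
# Hypothetical generator/critic loop: both "agents" are just prompted
# calls to the same underlying model.
def run_debate(question: str, llm, max_rounds: int = 5) -> str:
    answer = llm(f"Answer this question: {question}")
    for _ in range(max_rounds):
        critique = llm(f"Find flaws in this answer to '{question}':\n{answer}")
        if "no flaws" in critique.lower():  # crude stopping rule
            break  # the critic is satisfied; treat the answer as reliable
        answer = llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```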
I approach LLMs with the perspective that "maybe this demonstrates that we humans are all just stochastic parrots?" and that we should take as the null hypothesis that humans are just pattern matchers.
This is the way I perceive my own thoughts. I don't know what I'm going to think beforehand; these could all be stochastic "tokens" based on what I've observed in my life.
So of course I feel a bit offended when people claim LLMs are just stochastic parrots, because it doesn't feel to me that I'm any better.
My thoughts just happen, and sometimes not in my favor. I have had periods of depression when I had no control over my thoughts. I don't have control now either, but at least I'm in a better place, because the "happiness" chemicals are regulated into a state more favorable to me, for various reasons.
I didn't know what I was going to write in response to your comment; I was just streaming my consciousness.
I don't think that's true. They clearly group related things together and seem to be able to form concepts that aren't explicitly in the training data. For example, a model will figure out the different features of a face (eyes, nose, mouth) even if you never explicitly tell it what those are. Which is why they are so cool.
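You can see that grouping directly in embedding space. A quick sketch (assuming the sentence-transformers package and its all-MiniLM-L6-v2 model): related words end up measurably closer together than unrelated ones, with no labels telling the model so.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["dog", "puppy", "wolf", "carburetor"]
emb = model.encode(words)

# Cosine similarity of "dog" to each other word; higher = more related.
for i in range(1, len(words)):
    print(f"dog vs {words[i]:>10s}: {cos_sim(emb[0], emb[i]).item():.2f}")
```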
Could you share a link to a GPT-4 conversation about either a "list" or a "dog", to determine whether it truly understands one of those things the way a human does?
"""I think dogs are wonderful! They're known for their loyalty, playfulness, and their ability to bring joy to people's lives. What about you? Do you have a favorite breed or dog story?"""
This reply sounds so fake that, in my opinion, it should be enough to rule out any hint of intelligence. But if you insist, I'd continue with this:
"I'm not a fan of dogs. I do know a few dogs though. Sometimes I invite my neighbour's dog for dinner. He's got good taste, for a dog. The last time he came around we talked about the situation in the Middle East. Do you know a good book about this topic that I could recommend to him?"
Even this seems too grand a claim. I’d water it down thus: the LLM encodes that the token(s) for “Spot” are probabilistically plausible in the ensuing output.
The model is enormous, and N-dimensional for very high N. But the model remains insufficiently enormous for understanding, and moreover, the model cannot observe itself and adjust.
Ask an LLM to extrapolate, and watch any semblance of reason collapse.
I mean, what’s in ’em? Motors have very long service lives and can be replaced. So can batteries. Both can be recycled. The batteries may be way smaller and lighter by then, too. The rest seems quite repairable as well.
Is it worth it compared to getting new ones in 10-15 years (based on sibling comment)? I don’t know.
But maybe electric fleet vehicles will change the calculus on when you refurbish them vs. just replace them. Time will tell.
It was probably the porthole. It looks like the viewport was rated for 1,300 meters (they were descending to 4,000 meters). An engineer even warned them about the issue, and was fired. I suppose they didn't want to spend the money to develop an adequate one.
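For a sense of scale, a back-of-the-envelope hydrostatic pressure calculation (assuming seawater density of about 1025 kg/m³) puts 4,000 meters at roughly three times the load the viewport was rated for:

```python
# Hydrostatic pressure at depth: P = rho * g * h
# (ignores the ~0.1 MPa of atmospheric pressure at the surface).
RHO_SEAWATER = 1025  # kg/m^3
G = 9.81             # m/s^2

for depth_m in (1_300, 4_000):
    pressure_mpa = RHO_SEAWATER * G * depth_m / 1e6
    print(f"{depth_m} m: ~{pressure_mpa:.0f} MPa (~{pressure_mpa * 9.87:.0f} atm)")
```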
Grammatically, the location of the apostrophe makes "parent" singular. However, it seems weird if I think of it that way, likely because the pronunciation is the same whether "parent" is singular or plural. "Computers" is plural regardless. This really doesn't affect me in any way, but somehow seeing someone else want to know makes me want to know too.
The only real alternative with most hardware is Linux. And when the use case is browsing the internet and you're there to help them get unstuck after a broken update, things should be fine.