I'm tired of this pseudointellectual reductionist response. It's not "literally by accident" when they're trained to do something, as if we are not also machines that generate next actions based on learned neural weights and abstract (embedded) representations. Your issue is with semantics rather than content.
Obviously "hallucinate" and "lie" are metaphors. Get over it. These are still emergent structures that we have a lot to learn from by studying. But I suppose any attempt by researchers to do so should be disregarded because Person On The Internet has watched the 3blue1brown series on Neural Nets and knows better. We know the basic laws of physics, but spend lifetimes studying their emergent behaviors. This is really no different.
I just wish "hallucinations" weren't delivered with such confident language in context. Actual people will generally be relatively forthcoming at the edge of their knowledge, or at least not project as much confidence. I know LLMs are a bit different, but that's about the best comparison I can come up with.
Of course they hallucinate: we train them stochastically. Since you mentioned 3blue1brown, there is an excellent video on ANN interpretability based on the work of well-known researchers who attempt to provide plausible explanations of how these transformer-based architectures store and retrieve information. Randomness and stochasticity are literally the most basic components that allow all these billions of parameters to represent better embedding spaces, almost Hilbertian in nature and nearly orthogonal as training progresses.
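As a concrete aside on the orthogonality point: random vectors in high-dimensional spaces really are nearly orthogonal, which is part of why huge parameter counts can pack in so many distinct directions. A minimal numpy sketch (illustrative only, not a claim about any particular model's embeddings):

```python
import numpy as np

# The cosine between two random unit vectors in R^d concentrates
# around 0 with standard deviation ~ 1/sqrt(d), so in high dimension
# almost every pair is nearly orthogonal.
rng = np.random.default_rng(0)
d = 10_000
vecs = rng.standard_normal((100, d))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# Pairwise cosines between distinct vectors
cos = vecs @ vecs.T
off_diag = cos[~np.eye(len(cos), dtype=bool)]
print(f"max |cos| among 100 random vectors: {np.abs(off_diag).max():.3f}")
```

With d = 10,000 the largest pairwise cosine among 100 random vectors comes out well under 0.1, i.e. nearly orthogonal.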
The "emergent structures" you mention are just the outcome of randomness guided by gradient descent over the data landscape. There is nothing to learn by studying these frankenmonsters. All these experiments were conducted in the past, decades ago, multiple times, just not at this scale.
We are still missing basic theorems, not stupid papers about which tech bro paid the highest electricity bill to "train" on extremely inefficient gaming hardware.
If someone is nominally trying to convince you of a point, but they shroud this point within a thicket of postmodern verbiage* that is so dense that most people could never even identify any kind of meaning, you should reasonably begin to question whether imparting any point at all is actually the goal here.
*Zizek would resist being cleanly described as a postmodernist - but when it comes to his communication style, his works are pretty much indistinguishable from Sokal affair-grade bullshit. He's usually just pandering to a slightly different crowd. (Or his own navel.)
I usually scroll a page to see how many headings it has, but I'm looking for the opposite. Too many headings is one of the quickest aesthetic clues that I'm looking at slop, as it doesn't require me to read any of the text. (Emojis and overuse of bullet-point lists are the others I can think of in this category.)
I noticed something similar when working with Russian developers (unlike the post's author, non-Marxist, as far as I know) who had made the jump abroad to the EU.
When debating directions, some of them focused on simply never stopping talking. Instead of an interactive discussion (5-15 seconds per statement), they consistently went with monotone 5-10 minute slop. Combined with somewhat crappy English, it is incredibly effective at shutting down discourse. I caught on after the second guy used the exact same technique.
This was a long time ago. I have since worked with some really smart and nice Russian developers escaping that insane regime. And some that I wish had stayed there, after they made their political views on Russia known.
When you have a 30-minute meeting with busy people, a single 15-minute monologue might buy you another week to solve your problem.
Indeed, very effective. Countering it usually requires somebody putting their foot down AND a consensus to de-escalate immediately. If you have an antidote, please let me know.
Some ask: "Isn't backpropagation just the chain rule of Leibniz (1676) [LEI07-10] & L'Hopital (1696)?" No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (see Sec. XII of [T22] and [DLH]). (There are also many inefficient ways of doing this.) It was not published until 1970 [BP1].
The article says that, but it's overcomplicating to the point of being actually wrong. You could, I suppose, argue that the big innovation is the application of vectorization to the chain rule (by virtue of the matmul-based architecture of your usual feedforward network), which is a true combination of two mathematical technologies. But it feels like this, and indeed most "innovations" in ML, are only considered as such due to brainrot derived from trying to take maximal credit for minimal work (i.e., IP).
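To make the "efficient application of the chain rule" point concrete, here's a toy sketch of one reverse-mode sweep through a two-layer network, checked against finite differences. All names and shapes here are made up for illustration:

```python
import numpy as np

# Toy reverse-mode chain rule ("backprop") for f(x) = w2 . tanh(W1 x).
# The efficiency point: one backward sweep reuses intermediates instead
# of re-deriving each partial derivative from scratch.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 4))
w2 = rng.standard_normal(3)
x = rng.standard_normal(4)

# Forward pass, keeping intermediates
h = np.tanh(W1 @ x)
y = w2 @ h

# Backward pass: chain rule applied right-to-left in one sweep
dz = w2 * (1 - h**2)      # dy/dz where z = W1 @ x (pre-activation)
dW1 = np.outer(dz, x)     # dy/dW1, reusing dz for every entry

# Sanity check against a finite-difference estimate
eps = 1e-6
num = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp = W1.copy()
        Wp[i, j] += eps
        num[i, j] = (w2 @ np.tanh(Wp @ x) - y) / eps
assert np.allclose(dW1, num, atol=1e-4)
```

The same one-sweep structure is what makes the cost of the gradient comparable to the cost of the forward pass, which is the actual content of "backprop is the efficient way".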
This misses the point of isospin. Isospin is an approximate SU(2) symmetry arising because the up and down quarks (the "light" quarks) have very similar masses compared to the rest of the quarks, so they can be approximated as two eigenstates of the same particle. It's mathematically identical to the SU(2) symmetry of a spin-half particle. The reason it doesn't include the other quarks is that they are so much more massive.
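For the mathematically inclined, the analogy can be written out directly (a sketch in standard notation): the light quarks form a doublet under the approximate SU(2), with the third component of isospin playing exactly the role that spin projection plays for a spin-half particle.

```latex
% The (u, d) isospin doublet, in direct analogy with spin-1/2:
\begin{pmatrix} u \\ d \end{pmatrix},
\qquad
I_3 \lvert u \rangle = +\tfrac{1}{2} \lvert u \rangle,
\qquad
I_3 \lvert d \rangle = -\tfrac{1}{2} \lvert d \rangle
```

The symmetry is only approximate because the u and d masses are similar but not equal; the heavier quarks break it too badly to be included.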
For better or worse, news flows through social media, so this approach basically amounts to ignoring all the bad stuff going on. If you read HN, chances are you can probably safely get through the next four years doing this. But as the saying goes, "first they came for the communists..."
Care to enlighten me? It's not like I stated any of that as fact; it was qualified with an admission of my own ignorance, and I'm open to being corrected. Or are you just demonstrating what happens when established dogma is questioned in this field? If so, point taken; I can see why they're having problems.
More so "question" than "challenge". It seems like the idea that psychology is a hard science at all is a baseline assumption, or dogma. This article goes into great detail on all sorts of issues in the field, but stops short of questioning whether the whole thing can even be classified as scientific. I'd argue that the reproducibility crisis throws that into question to some degree (though that crisis apparently extends into "harder" sciences as well, so maybe not). And intuitively, human psychology just doesn't seem like something you can quantify, at least not to the level of granularity required by the scientific method. That is, unless you're measuring the activity of neurons, synapses, hormone levels, or other physically measurable phenomena to draw your conclusions, and I'm not sure how much of that is done in psychology as opposed to neuroscience.
I think there's a common pattern on Hacker News that goes something like:
A: Overly broad generalization of a huge body of work put together over 100 years by tens of thousands of professionals
B: Ugh, hate this take from armchair experts
A: Okay, then give me all the examples! Otherwise you're proving me right!
I happen to think your overly broad generalization is more right than wrong, but I also recognize the silliness of asking to be "enlightened" on an entire branch of human endeavor via internet comments. This is a problematic argument form, and someone calling out this behavior does not prove you right.
So let's be clear about what "enlightening you" means. If your argument is "psychology is based on a fundamentally flawed/useless study design (surveys) and we can never learn anything real from it", then a few examples of reproducible, interesting, not a priori obvious results from surveys should be sufficient to show that we actually can learn real things from surveys. (And be careful not to fall into the "I could have told you that!" fallacy.) Luckily, this question was already asked on Reddit, and I think there are some strong examples:
On the other hand, the field is absolutely rife with problematic study design and even some entire psychology departments (e.g. Stanford) seem to be completely rotten. The most salient example of this is the "implicit bias" studies that came out of Stanford. Their study design was something like:
Task 1: Associate good words with white/Christian/American themes as fast as you can
Task 2: Associate bad words with "foreign" themes
Task 3: Associate good words with white/Christian/American themes again
Task 4: Associate good words with "foreign" themes
And the result is: you're racist because Task 4 takes you a few milliseconds longer. It never occurred to them (or it did, and they intentionally forced the result) that in Task 4 you're literally unlearning what you've just practiced three times. It was one of the most blatantly bad studies I've ever seen, and I didn't see anyone else calling out how problematic it was, because Stanford.
So in general I actually agree with your take: the field is rife with junk science, some of it obvious, and almost certainly some of it intentional. But please also recognize that "I'm an expert in tech and therefore everything, and if you can't prove me wrong in an internet comment then that proves me right!" is a very problematic argument style. It sounds like you're trying to prove yourself right, and a much more efficient way to get smarter is to habitually try to prove yourself wrong.
I appreciate the productive answer. You're right; re-reading it now, my tone was more argumentative than inquisitive. It'd be foolish to dismiss such a large body of work as "useless" and I hope it didn't come off that way. Of course understanding human psychology is immensely useful for all sorts of reasons.
I agree with the overall thrust of your argument, but:
> be careful not to fall into the "I could have told you that!" fallacy.
That's not considered to be one of the standard logical fallacies as far as I know. Why would it be fallacious? Social studies are rife with findings that are either extremely obvious to everyone, extremely obvious to conservatives specifically (because psychologists are nearly always on the left), or extremely obvious to anyone who reads the study design.
I recently wrote an essay about why replication studies can't fix science [1] and one of the problems cited is the prevalence of studies that aren't worth replicating because "I could have told you that". Examples include silly papers like [2], which is literally titled "People's clothing behaviour changes according to external weather and indoor environment" yet somehow manages to also say, "It is evident that further studies are needed in this field", or [3] saying that the average male student would like to be more muscular.
But there are less silly examples, which crop up due to the ideological bias in the field. Academics purge any conservatives they find, meaning that social studies spends a lot of time and money investigating things that are considered obvious outside of far-left spaces. Jonathan Haidt is famous for arguing that this is a problem (albeit not actually doing anything about it). As an example highly apropos to this thread, psychologists recently started discovering that stereotypes are usually accurate. Much other work in psychology is built on the suspiciously circular premise that stereotypes are either fictional, and thus mere folk intuitions as Mastroianni would put it, or accurate only because people believe they are accurate (the field of "stereotype threat" is like this). On the left, the idea that stereotypical achievement gaps are socially constructed is considered obvious and a matter of faith; to people on the right, the opposite is true: the idea that they reflect actual truths about reality is the obvious one.
So even if you set aside the offensively wasteful, there's still a lot of scope for study claims to be considered obvious by some and not by others.
> That's not considered to be one of the standard logical fallacies as far as I know
I don't really care whether it's on The Official List of Logical Fallacies™ or not; in fact, caring too much about that list is itself a bit of a fallacy. Nor do I consider "I could have told you that!" necessarily a logical fallacy; more like an emotional fallacy. (But humans use emotions when trying to understand things!) I consider a fallacy to be something which is an "attractive but wrong" step in an argument or rationale. There are two reasons why I consider it a "fallacy" by that definition:
1. Humans dramatically over-estimate the obviousness of an idea after they've already heard it. Once you know the answer, you basically become immediately unable to estimate how obvious that answer was beforehand. This is highly evident in mathematical proofs, e.g. when "obviously right" things turn out to be wrong, often with an "obvious counterexample". Both sure feel completely obvious, depending on what you know!
2. Even obvious things are worth testing. Plenty of things that seemed obvious have turned out to be wrong. This is not evidence of wasted funding. Obvious things can also be related to less obvious things. Your 3rd example shows this: there's a 2nd related hypothesis they're testing: "men would believe that women would find a more muscular shape more attractive than women actually report". So men tend to want to be more muscular, and also men tend to think women find high muscularity more attractive than they actually do (or they think women's "ideal muscularity" is higher than it actually is). This may be an example of "overturning our intuitions" whose understanding could improve outcomes -- if it replicates, anyway. Hardly an example of a pointless study.
That being said, humans actually do seem pretty good at being able to "tell you that" beforehand. There are some fun quizzes you can take [1, 2] to see if a study replicates beforehand, and you'll probably do pretty well on them. But that doesn't necessarily mean that we shouldn't do the tests anyway! We can still be surprised.
> Academics purge any conservatives they find
This is extremely dependent on the department and institution, and in general is way overblown. Yes, this can be a problem in some departments in some of the social sciences. On the other hand, in economics departments, it can be liberal ideas that are verboten. There are fads and politics everywhere, but mostly science is doing OK at considering ideas on their actual merits -- eventually. Meanwhile, a lot of people lob angry criticisms at academia for rejecting their bad ideas because they're bad and wrong, assuming bias where the real problem is the ideas themselves.
> psychologists recently started discovering that stereotypes are usually accurate
I suspect this is way too strongly worded for what's actually being found (and replicated). Citations would be very much appreciated. Be wary that slight changes in the mean of a population can be detected in tests with p < whatever epsilon you like, and also have oversized effects on the tail ends (e.g. professional sports), but give you almost no predictive power for individuals you meet on the street. If "stereotypes are usually accurate" means "population X is slightly more likely to do Y with near-certainty", that does not mean "most X do Y" nor even "a random member of X is significantly more likely to do Y than a random member of not-X". One of the reasons these kinds of studies are considered problematic is in how easily they can be misconstrued to justify racism.
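To illustrate the tails-vs-individuals point with made-up numbers: a shift of 0.2 standard deviations in a population mean is trivially "significant" at large sample sizes and roughly doubles the rate of 3-sigma outliers, yet barely helps you predict which of two random individuals is higher:

```python
import numpy as np

# Hypothetical numbers for illustration: two populations whose means
# differ by d = 0.2 standard deviations.
rng = np.random.default_rng(0)
n = 100_000
a = rng.normal(0.0, 1.0, n)
b = rng.normal(0.2, 1.0, n)

# 1) Trivially "significant" at this sample size (z-score in the tens).
z = (b.mean() - a.mean()) / np.sqrt(a.var() / n + b.var() / n)

# 2) Oversized effect in the tail: roughly 2x as many 3-sigma outliers.
tail_ratio = (b > 3).mean() / (a > 3).mean()

# 3) Near-useless for individuals: a random member of b exceeds a
# random member of a only about 56% of the time (50% is a coin flip).
win_rate = (b > a).mean()

print(f"z = {z:.1f}, tail ratio = {tail_ratio:.2f}, win rate = {win_rate:.3f}")
```

Same effect, three very different-sounding headlines, which is exactly how these results get misconstrued.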
Sure, here you go. This is the first I found with a few seconds of searching so I don't claim it's the best citation but it gives an overview of the research on stereotype accuracy:
Note the reference to over 50 studies, and stereotype accuracy being amongst the most replicable of findings in social psychology. Not very surprising given they're literally asking "are things most people believe to be true actually true" - this is a question that's going to obviously yield a lot of big effect sizes and high levels of replicability, but it's also trivial by definition.
> in economics departments, it can be liberal ideas that are verboten
That seems doubtful, depending on how you define "liberal". Academic economics still has a notable left leaning bent. If we check out the last two issues of QJE, one of the best known economics journals, we see a large number of papers on typical liberal fascinations that have little to do with conventional economics, things like gender pay gaps, domestic violence against women, how to socially engineer people into taking vaccines etc:
Also, the basic premise of academic economics is that you can treat the economy as something knowable and controllable from the outside, whereas a more libertarian take would be that the economy is the process of everyone collectively figuring out what's both true and desirable, thus you cannot "step outside" the system in any meaningful way by definition. So the very nature of studying economics in a university is to some extent a left wing starting point.