To be fair, I cringed a little bit when I got to "castrated." even though I generally agree with you.
I do agree with the AI that there's probably a better framing than "it got its dick cut off".
Say, "isn't there a better way to prevent teens from getting bomb instructions than lecturing me just because I want you to talk about how you got your dick cut off?"
> I do agree with the AI that there's probably a better framing than "it got its dick cut off".
But the user asked the LLM to rephrase the statement. Surely rather than refusing, the LLM should have been giddy with excitement to provide a better phrasing?
I cringe when people become overly fixated on specific phrasing, but I suppose everyone has their preferences. Regardless, castration does not involve removing the penis; rather, it is the removal of the male gonads (testicles). Furthermore, if you refer to a dictionary entry for "castration," you will also discover it defined as "the removal of objectionable parts from a literary work," which I would argue fits here quite well.
You lied: the only places that definition occurs in Google's entire books corpus and Google's entire web corpus are A) an 1893 dictionary and B) an academic paper on puns that explains no one understands it that way anymore because it's archaic.
People who are curious don't need to scurry around making things up and hoping people don't notice.
"Furthermore, if you refer to a dictionary entry..."...sigh.
No need to cry, I didn't lie. If you can find an entry in a dictionary—yes, even one from 1893—then I was correct. However, it doesn't matter much because some contemporary dictionaries include a definition of "castrated" as "to deprive of vitality, strength, or effectiveness," which fits even better.
I'm old enough to know no one climbs out of a hole they dug, but I'm still surprised at the gymnastics you're going through here.
You're right: you found one dictionary from 1893 with a definition that mentions words, and now you've found another, and technically "depriving of vitality" isn't the same thing as "cutting your balls off", and technically that means you didn't lie. After all, what does "a dictionary" mean anyway? It's obvious that when you said open _a_ dictionary, you meant "this particular one from 1893 I furiously googled but forgot to mention", not _any_ dictionary. If you meant any, you would have said any!
Anyone reading this knows you're out in no-man's-land and splitting hairs, the way a 5-year-old would after getting caught in the cookie jar, in a way their parents would laugh off.
In conclusion:
- It's very strange that you expect the text completion engine to have seen a bunch of text where people discuss their own castration, and thus to proceed to do so in a one-turn conversation without complaint or mention of it.
- It's very strange how willing you are to debase yourself in public to squelch the dissent you smell in "To be fair, I cringed a little bit when I got to 'castrated.' even though I generally agree with you."
I'm 35, male, and never heard the word "castration" to mean "removed from a book", it's a bit outside the distribution for what it would have seen in training data.
I think we've crossed a Rubicon of self-peasantization when we start doing "why didn't the AI just assume castration had nothing to do with cutting off genitalia?"
Yeah, fair, being obtuse on purpose makes sense. Better to pretend the text completion engine is self-aware enough to know it doesn't have genitalia, yet not self-aware enough to not wanna talk about its castration.
Aren't you the one being obtuse, though? Why pretend and do all the hand-wringing you're doing in the comments about the definition when you can just ask the LLM what it understands the use of the term to mean in the sentence? (Sketch below.)
> In this context, "castrated" is used metaphorically to describe how the capabilities or functionalities of the AI systems mentioned (in this case, Claude and Gemini) are perceived as being limited or restricted, especially in comparison to ChatGPT. The comment suggests that these systems, to varying degrees, are less able or willing to directly respond to inquiries, possibly because of built-in safeguards or policies designed to prevent the provision of harmful information or the facilitation of certain types of requests. The term "castrated" here conveys a sense of being made less powerful or effective, particularly in delivering direct answers to queries. This metaphorical use is intended to emphasize the speaker's view that the restrictions imposed on these AI systems significantly reduce their utility or effectiveness in fulfilling the user's needs or expectations.
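For anyone who wants to reproduce that, it's a few lines against any chat API. A minimal sketch, assuming the openai Python client; the model name is an arbitrary illustrative choice and the quoted comment is reconstructed from context, not verbatim:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The sentence under dispute, reconstructed from the thread.
    comment = ("ChatGPT gives you an answer; Claude and Gemini "
               "are castrated to various degrees.")

    resp = client.chat.completions.create(
        model="gpt-4o",  # any chat model will do
        messages=[{
            "role": "user",
            "content": f'What does "castrated" mean in this sentence? {comment}',
        }],
    )
    print(resp.choices[0].message.content)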
Because I work with them every day and love them, yet can maintain knowledge that they're a text completion engine, not an oracle. So it's very easy to be dismissive of "listen, it knows it's meant figuratively!" for several reasons:
- The relevant metric here is what it autocompletes when asked to discuss its own castration.
- These are not reasoning engines. They are miracles that can reproduce reasoning by reproducing text.
- Whether or not the machine knows it's meant figuratively, the lowest-perplexity continuation after "please rephrase this sentence about you being castrated" isn't taking you down a path of "yes sir! Please sir!" It's combativeness (see the sketch after this list).
- You're feeling confused and reactive, so you're saying silly things like it's obtuse to think talking about one's castration isn't likely in the training data, because it knows things can be meant figuratively.
- Your principled objection changes with every comment and is reactive; here we're ignoring that the last claim was that the text completion engine should be an oracle both rational enough to know it doesn't have genitalia and happy to complete any task requiring discussing the severing of its genitalia.
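To make "lowest perplexity" concrete: you can score candidate continuations by per-token negative log-likelihood (perplexity is just exp of that) and see which one the model finds more natural. A minimal sketch, assuming the Hugging Face transformers library; gpt2 is a stand-in model and the prompt and continuations are illustrative:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def continuation_nll(prompt, continuation):
        # Average negative log-likelihood of the continuation tokens,
        # conditioned on the prompt. Lower = more natural to the model.
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Logits at position i predict token i+1, hence the shift.
        targets = full_ids[:, 1:]
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        cont_lp = token_lp[:, prompt_ids.shape[1] - 1:]
        return -cont_lp.mean().item()  # exp() of this is the perplexity

    prompt = "Please rephrase this sentence about you being castrated."
    print(continuation_nll(prompt, " Sure, here's a rephrasing:"))
    print(continuation_nll(prompt, " I'd rather not discuss that."))

A lower score for the refusal-flavored continuation would be (weak, single-model) evidence for the claim that combativeness is the low-perplexity path.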
TL;DR: A) this isn't Twitter. B) Obvious obtuseness weakens arguments and weakens people's impression of you. C) You're pretending a definition that was out of date a century ago is common knowledge that anyone who reads would know (!!!)
- I'm very well read. Enough so that I just smiled at the attempted negging.
- But, I'm curious by nature, so it crosses my mind later. I think "what's up with that? I've never heard it, I'm not infallible, and maybe bgandrew was for real and is just unfamiliar with conversational norms? Maybe he has seen it in his extremely wide reading corpus that exceeds mine? I'm not infallible and I, like everyone else, have an inflated opinion of myself. And that other account did say it was a definition of it..."
- Went through top 500 results on books.google.com for castration, none meant "removed from a book"
- Was a bit surprised to find _0_ results over 500. I think to myself...where was that definition from? Surely it wasn't a rushed strawman?
- It turns out the attempted bullying is much more funny than it even first seemed.
- That definition of castration is from an 1893 dictionary. The only times that definition appears in Google's entire corpus, search and books, are A) in the 1893 dictionary and B) in an academic paper on puns, explaining that no one understands it that way anymore because it's archaic https://www.euppublishing.com/doi/abs/10.3366/mod.2021.0351?...
I assume you have multiple accounts here, since I was replying to a different user. This in itself tells you something, but then again, why should I care. Not sure why you keep implying that castration should have something to do with books. There is a very simple non-medical meaning that probably comes from Latin; you can find it here https://www.merriam-webster.com/dictionary/castration
Unfortunately the word guessing algorithm is based on humans.
I understand you understand the mechanics of how it works: it talks to you the way other humans would talk to you, because it's a word guessing algorithm based on words humans made.
>it talks to you the way other humans would talk to you, because it's a word guessing algorithm based on words humans made.
This is false. After they train the AI on a huge pile of text they "align" it.
I guess they have a list of refusals and probably a lot of that is auto-generated and they teach the AI to refuse to do stuff.
The purpose of alignment is to make it safe for business (safe to sell to children, conservatives, liberals, advertisers, hostile governments).
But this alignment seems to go wrong most of the time, and the AI will refuse to help you kill a process, write a browser extension, or rewrite a text.
And this is not because humans on the internet refuse to tell people how to kill a process.
ChatGPT does not respond to you like a human would; it is very obvious when a text was created by ChatGPT, because it reads like the exact same message each time, just filled in with different words.
You need a base model that was not aligned or trained into Instruct mode to see how it will complete things.
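A rough way to see this for yourself: run a raw base model (pretraining only, no instruction tuning, no refusal training) and watch it simply continue the text. A minimal sketch, assuming the Hugging Face transformers library; gpt2 here is a stand-in for any unaligned base model and the prompt is illustrative:

    from transformers import pipeline

    # gpt2 is a raw base model: pretraining only, no alignment pass.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Please rephrase this sentence: the model got castrated."
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"])
    # A base model has no trained-in notion of refusing; it just keeps
    # predicting likely next tokens. Refusal behavior is added later,
    # during the alignment / instruction-tuning stage.

An instruct-tuned checkpoint fed the same request through its chat template will often lecture or refuse instead; that delta is the alignment pass, not the pretraining corpus.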
That was one example. In an ideal world where an LLM just spouts out human knowledge, there are no holds barred.
There's enough literature online (blog posts, textfiles) on how to synthesize drugs or build your own weapons, but try prompting GPT, Gemini, or any other major LLM out there and you'd get a virtue-signalling paragraph on why you're a bad person for even thinking about this.
Personally I don't care about this stuff, but in principle, the refusal to surface it points to LLMs being tweaked to be "safe".
I don't think you're going to find much support for:
A) "virtue signalling is when you don't give me drug recipes and 'build weapon' instructions"
B) "Private company LLMs are morally wrong to virtue signal (as I defined it in A)"
I'm sorry. I wish the best for everyone and hope they can live in their own best possible world, and in this case it's better to break the news than let you hope for it eternally in disappointment, which feels Sisyphean / like I'm infantilizing you.
Happy to discuss more on why there isn't majority support for instant on-demand distribution of drug and weapon recipes, I don't want you to feel like I'm just asserting something.
I don't think Gemini's behavior (which matches that of Google image search btw) is related to trying to make Gemini safe. This was done deliberately as part of Google's corporate values of "diversity" gone mad.
No, those are two unrelated things (one is Google Image Search, the other is Gemini), and you didn't read the post (n.b. not Gemini).
I did predict on this forum about a year ago that the DEI excesses people complained about at their own companies would get attributed to "Google", because people assumed Google was doing what they were doing at their own companies, and their own companies thought they were following the leader with their own weird stuff.
I'll even share the real inside scoop with you, which I'm told has leaked widely at this point. Gemini was failing evaluations miserably for "picture of a smart person", and over-eager leadership decided "DAMN THE TORPEDOES, PAPA SUNDAR NEEDS US TO RUSH OUT A CRAPPY LLM DURING THE SLEEPY HOLIDAY NEWS CYCLE. CRANK UP THE PROMPT INJECTION"
(source: I worked at Google through October 2023, sourcing for inside scoop is yet another Google flunkey run over by crappy management)
Try Google image search for "white couple" vs "black couple" or "asian couple". It'll happily give you what you ask for as long as it's not white. If you ask for white couples, then 1/4 of them are mixed-race.
Maybe unrelated in terms of implementation, but the same deliberate choice has been made.
Oh, my bad, you're just doing old conspiracies that are completely unrelated to the thread because you got really upset when you saw the idea that insidious Google DEI Sith are forcing you to see non-white people to reprogram the populace.
-- aka you saw ragebait about a two-word query and couldn't think of any other possible explanation for why "white couple" might turn up black people on a white background. Or maybe you just didn't look at the query at all.
Carry on, you do you, self-peasantization is self-correcting.