Hacker News | mardef's comments

Why doesn't everyone invite a different Anna?

I think the moral is for everyone to be individually a bit nicer, not one friend group to support an entire community.


These aren't anti-AI comments. We're on a forum talking person to person. It is antithetical to just spout "AI said this" repeatedly when the entire point of this place is human discourse.

It's like having a group conversation in person and one member of the group contributes nothing but things they read off Google.


I'd much rather talk to an AI than have this sort of human discourse.


Hm, good point. Okay, so if I may ask, what would be the more correct response, imo?

Should they have searched it and kept the information to themselves? Or should they have done additional research after asking AI (like looking into its sources), tried confirming it, actually listed the sources of their discoveries, and then disclosed that they used AI?

I generally feel like they wanted to share the information, but :/ I'd actually be interested in what you'd suggest they do if they were really curious and did ask ChatGPT.

I always feel like knowledge should be open and that just saying that knowledge out loud doesn't hurt, but I do agree with your point wholeheartedly too, so it's nuanced imo.


Same feeling. After scratching my head at some of the descriptions I decided to just head as far southeast as I could. I was met with a bunch of edgy comments and a penis drawn on the screen.


That was Windows Mobile, the last of the old Windows embedded line, vs Windows Phone, the brand-new OS made for modern (at the time) smartphones.

WP7 was the first release of the new OS.


Windows Phone 7 was another OS. Windows Phone 8 was the next totally incompatible OS just a couple of years later.


The LLM has no memory carrying over from your initial instructions to your later instructions.

In practice you have to send the entire conversation history with every prompt. So you should think of it as appending to an expanding list of rules that you send every time.
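A minimal sketch of what "send the entire history every time" means in practice. `call_llm` here is a hypothetical stand-in for whatever completion API you actually use; the point is that the client, not the model, holds the conversation:

```python
# Minimal sketch of stateless LLM chat: the model sees only what you send,
# so the client must resend the whole conversation every turn.
# call_llm is a hypothetical stand-in for a real completion API.

def call_llm(messages):
    # Placeholder: a real implementation would send `messages` to the model
    # and return its reply. Here we just report how much context arrived.
    return f"(reply based on {len(messages)} prior messages)"

history = [{"role": "system", "content": "Initial instructions: be concise."}]

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)  # the entire history goes out every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("First question")
chat("A later instruction: also cite sources.")
# The second call carried the system prompt, the first exchange, and the
# new instruction -- nothing is "remembered" server-side.
```

Later instructions only "stick" because they keep riding along in `history` on every subsequent request.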


You can use a RAG / embedding database as a kind of memory, retrieving 'approved examples', 'feedback comments', etc. and including them alongside future prompts.
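A toy version of that idea, in pure Python with hand-made 3-d vectors standing in for real embeddings (an actual setup would use an embedding model and a vector store): rank stored examples by cosine similarity to the query and prepend the top hits to the next prompt.

```python
import math

# Toy RAG memory: store (embedding, text) pairs of approved examples and
# pull the nearest ones into each future prompt. The vectors here are
# illustrative placeholders for real embeddings.
memory = [
    ([1.0, 0.0, 0.0], "approved example: greeting template"),
    ([0.0, 1.0, 0.0], "feedback comment: avoid jargon"),
    ([0.9, 0.1, 0.0], "approved example: sign-off template"),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=2):
    ranked = sorted(memory, key=lambda item: cosine(item[0], query_vec),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# A query near the greeting/sign-off cluster pulls those two examples,
# which you would then prepend to the next prompt as pseudo-memory.
context = retrieve([1.0, 0.05, 0.0])
```

The retrieved `context` is just text you splice into the prompt, so it works with any stateless LLM API.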


UWP was limited because it was the subset that could run on PC, Xbox, Phone, and HoloLens. Being able to make one responsive app and deploy it across that ecosystem was pretty awesome.

But when you kill the non-PC platforms, you're just left with a reduced-capability version of Windows apps.


It's not the same, but I don't know if it's worse.

My IRQ conflict resolution skills or knowledge about himem.sys aren't really useful these days.

But I've seen Gen Z kids do incredible things with Minecraft mods and the like that make me reminisce about Quake modding.

The masses are just blindly using devices, but the masses didn't even have a PC at home 30 years ago.


> My IRQ conflict resolution skills or knowledge about himem.sys aren't really useful these days.

Your ability to meticulously solve a problem using a systematic troubleshooting approach is always useful. You just happened to hone the skill w/ IRQ conflicts and himem.sys.


Agreed. And while the ways we got into the details and discovered things are different, some kids still do.

Heck, I did the same. DIP switches galore. Did I know what an IRQ actually is at the OS level while solving IRQ conflicts as a kid? Heck no! Only years later, when I no longer needed to, did I understand what those actually are/were.

Today's equivalent of learning about autoexec.bat and config.sys to skip loading the CD-ROM driver, because otherwise one game wouldn't start for lack of memory, is figuring out what's behind the Steam "Start" button, where the games "live", and how to get what you want instead of doing everything through Steam.

The kids who are today's equivalent of us in the old days do exist.


(Smile) 30 years ago was 1995, when most people did. You're thinking of 1985. Forty years ago.


In 1995 around 1 in 3 US homes had a computer.


Yeah in Canada it looks like about 28% of homes had a personal computer in 1995, according to Stats Canada: https://www150.statcan.gc.ca/n1/pub/56f0004m/2005012/c-g/c1-...


It used to be that if you wanted to do gaming on a PC you started by building the PC.


That hasn't changed. Of course there are pre-builts, but there were twenty years ago, too. I should know -- I had one. I built my third gaming PC myself.


There were pre-builts many years before your twenty years ago, too. I used to build my computers myself 30 years ago as well, and my dad did 40 years ago ;)


I dunno... My C64 required very little assembly.


I think coding skills don't lag as far behind among those who enjoy coding. It's a hell of a lot easier to learn and more accessible than it ever was. Plus applications like modding make learning fun.

The lag is more in systems, networks, and OS fundamentals... i.e. how you pull all the pieces together and make them work, especially among your "non-technical" user set.


I code more for fun now, because the proliferation of higher end languages and libraries for practically everything drastically reduces the time to that first "wow cool!" moment.

I'm sure it's the same with young people.


And FB would really like to own the next platform so they don't get kneecapped again by things like Apple's Do Not Track.


Not if I only see updates from the people I actually know and explicitly connected to on the social graph.

The current problem exists because the content is chosen algorithmically.


No. Even then. You may know assholes. User accounts may be compromised. Users may have different tolerances for gore than you do.

Not gotchas, I’m not arguing for the sake of it, but these are pretty common situations.

I always urge people to volunteer as mods for a bit.

At least you may see a different way to approach things, or you might be able to better articulate the reasons the rule can't be followed.


Wouldn't a less draconian solution then be to hide the link, requiring the user to click through a warning: [This link has been hidden because it links to [potential malware/sexually explicit content/graphically violent content/audio of a loud Brazilian orgasm/an image that has nothing to do with goats/etc]. Type "I understand" here ________ to reveal the link.]?

You get the benefits of striving to warn users, without the downsides of it being abusive, or seen as abusive.


It’s not a bad option, and there may be some research that suggests this will reduce friction between mod teams and users.

If I were to build this… well first I would have to ensure no link shorteners, then I would need a list of known tropes and memes, and a way to add them to the list over time.

This should get me about 30% of the way there. Next, even if I ignore adversaries, I would still have to contend with links which have never been seen before.

So for these links, someone would have to be the sacrificial lamb and go through it to see what’s on the other side. Ideally this would be someone on the mod team, but there can never be enough mods to handle volume.
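The first stages of that pipeline can be sketched as below. Everything concrete here is made up for illustration: the shortener list, the trope list, and the warning wording are placeholders, and the real maintained lists would grow over time as described.

```python
# Sketch of the link-screening idea described above: reject shorteners,
# label links matching a maintained trope/meme list, and hold anything
# never seen before for manual review. All domains and lists here are
# illustrative placeholders, not real data.
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "t.co", "tinyurl.com"}
KNOWN_TROPES = {
    "example-goats.com": "graphically violent content",
    "example-screamer.net": "loud audio",
}

def screen_link(url):
    host = urlparse(url).netloc.lower()
    if host in SHORTENERS:
        return ("rejected", "link shorteners are not allowed")
    if host in KNOWN_TROPES:
        label = KNOWN_TROPES[host]
        return ("hidden", f"This link has been hidden ({label}). "
                          f"Type 'I understand' to reveal it.")
    # Never-seen-before link: someone still has to look at it.
    return ("queued", "held for manual review")
```

The `"queued"` branch is exactly the sacrificial-lamb problem: automation can only triage, and unknown links still land on a human.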

I guess we’re at the mod coverage problem - take volunteer mods: it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.

That is IF you get reports. People click on a malware infection but aren’t aware of it, so they don’t report. Or they encounter goats and just quit the site without caring to report.

I’m actually pulling my punches here, because many issues, e.g. adversarial behavior, just nullify any action you take. People could decide to say that you are applying the label incorrectly, and that the label itself is censorship.

This also assumes that you can get engineering resources applied - and it’s amazing if you can get their attention. All the grizzled T&S folk I know develop very good mediation and diplomatic skills just to survive.

That’s why I really do urge people to get onto mod teams, so that the work gets understood by normal people. The internet is banging into the hard limits of our older free speech ideas, and people are constantly taking advantage of blind spots amongst the citizenry.


> I guess we’re at the mod coverage problem - take volunteer mods: it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.

When I consider my colleagues who work in the same department, they really have very different preferred work hours (one colleague would even love to work from 11 pm to 7 am, and then go to sleep, if he were allowed to). If you ensure that you have both larks and night owls on your (voluntary) moderation team, this problem should be mitigated.


Then this comes back to the size of the network. HN, for example, is small enough that we have just a few moderators here and it works.

But once the network grows to a large size it requires a lot of moderators, and you start running into problems of moderation quality over large groups of people.

This is a difficult and unsolved problem.


I admit that ensuring consistent moderation quality is a harder problem than moderation coverage (the sleep pattern ;-) problem).

Nevertheless, I do believe that at least partial solutions exist for this problem, and a lot of problems concerning moderation quality are, in my opinion, actually self-inflicted by the companies:

I see the central issue as this: the companies have deeply inconsistent goals about what they do vs. don't want on their websites. Also, even if there is some consistency, they commonly don't clearly communicate these boundaries to the users (often for "political" or reputational reasons).

Keeping this in mind, I claim that all of the following strategies can work (though each one will infuriate at least one specific group of users, which you will thus indirectly pressure to leave your platform), and each has been used, successfully, by various platforms:

1. Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right"). This will, of course, infuriate users who are on the "free speech" side. Also people who have a "currently politically accepted" stance on the controversial topic will be angry that they are not allowed to post about their "right" opinion on this topic, which is a central part of their life.

2. Only allow arguments for one side of some controversial topics ("taking a stance"): this will infuriate people who are in the other camp, or are on the free speech side. Also consider that for a lot of highly controversial topics, which side is "right" can change every few years "when the political wind changes direction". The infuriated users likely won't come back.

3. Mostly allow free speech, but strongly moderate comments where people post severe insults. This needs moderators who are highly trusted by the users. Very commonly, moderators are more tolerant towards insults from one side than from the other (or consider comments that are insulting, but within their Overton window, to be acceptable). As a platform, you have to give such moderators clear warnings, or even get rid of them.

While this (if done correctly) will pacify many people who are on the "free speech" side, be aware that option 3 likely leads to a platform with "more heated" and "controversial" discussions, which people who are more on the "sensitive" and "nice" side likely won't like. Also, advertisers are often not fond of an environment with "heated" and "controversial" discussions (even if the users of the platform actually like these).


>Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right").

Yup. One of my favored options, if you are running your own community. There are some topics that just increase conflict and are unresolvable without very active referee work. (Religion, Politics, Sex, Identity)

2) This is fine? Ah, you are considering a platform like Meta, which has to give space to everyone. Don't know on this one; too many conflicting ways it can go.

3) One thing not discussed enough is how moderating affects mods. Your experience is alien to what most users go through, since you see the 1-3% of crap others don't see. Mental health is a genuine issue for mods, with PTSD being a real risk if you are on one of the gore/child porn queues.

These options are, to a degree, discussed and being considered. At the risk of being a broken record: more "normal" users need to see the other side of running a community.

There are MANY issues with the layman idea of free speech; it's hitting real limits when it comes to online spaces and the free-for-all meeting of minds we have going on.

There are some amazing things that come out of it, like people learning entirely new dance moves, food or ideas. The dark parts need actual engagement, and need more people in threads like this who can chime in with their experiences, and get others down into the weeds and problem solving.

I really believe that we will have to come up with a new agreement on what is "ok" when it comes to speech, and part of it is going to be realizing that we want free speech because it enables a fair marketplace of ideas. Or something else. I would rather it happen from the ground up than top down.


> Ah, you are considering a platform like Meta, who has to give space to everyone.

This is what I focused on, at least, since:

- Facebook is the platform that the discussed article is about

- in https://news.ycombinator.com/item?id=42852441 pixl97 wrote:

"Then this comes back to the size of the network. HN, for example, is small enough that we have just a few moderators here and it works.

But once the network grows to a large size it requires a lot of moderators, and you start running into problems of moderation quality over large groups of people."


As you said, consistent moderation is different from coverage. Coverage will matter for smaller teams.

There’s a better alternative to all of these solutions in terms of consistency. COPE was released recently; it’s basically a lightweight LLM trained on applying policy to content. In theory it can be used to handle the consistency and coverage issues. It’s beta, though, and needs to be tested en masse.

Eh.. let me find a link. https://huggingface.co/zentropi-ai/cope-a-9b?ref=everythingi...

I’ve had a chance to play with it. It has potential, and even being 70% good is a great thing here.

It doesn’t resolve the free speech issue, but it can work towards consistency and clarity on the rules.
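The shape of policy-as-input moderation is roughly a function from (policy text, content) to a verdict, which is why rule changes need no retraining. A sketch of that interface follows; the classifier is a keyword stub standing in for a model like COPE, whose real calling convention is not shown here and may differ:

```python
# Illustrative shape of policy-as-input moderation: the policy text travels
# with every request, so changing the rules means editing a string, not
# retraining. stub_classifier is a keyword stand-in for a real model call.

POLICY = "Remove content containing severe insults or threats."

def stub_classifier(policy, content):
    # Stand-in for a model call: flag content with obvious insult markers.
    # A real policy model would actually read `policy` and reason over it.
    flagged_terms = ("idiot", "moron")
    if any(term in content.lower() for term in flagged_terms):
        return "violates"
    return "allowed"

def moderate(content, policy=POLICY):
    verdict = stub_classifier(policy, content)
    return {"content": content, "verdict": verdict}
```

Because every item passes through the same policy and the same model, verdicts stay consistent across time zones, which is exactly the coverage gap volunteer mods can't close.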

I will admit I’ve strayed from the original point at this stage, though.


Lord do I wish that were true. The main reason I left Facebook was less the algorithmic content I was getting from strangers, and more the political bile that my increasingly fanatical extended family and past acquaintances chose to write.


You would be surprised at the amount of crap that exists and the amount of malware that posts to FB.


I used the SocialFocus extension to remove those kinds of features from sites when I was still weaning off the sites.

Removing the official apps was an essential first step. Then I progressed to using mobile web sparingly with SocialFocus to trim the experience.

