
You can see how this gets challenging, though, right?

If you train your model to prioritize real photos (as they're often more accurate representations than artistic ones), you might wind up with Denzel Washington as the archetype: https://en.wikipedia.org/wiki/The_Tragedy_of_Macbeth_(2021_f....

There's a vast gap between human understanding and what LLMs "understand".



If they actually want it to work as intelligently as possible, they'll start taking these complaints into consideration and build in a wisdom-curation feature that people can contribute to.

This much is obvious, but they seem to be satisfied with theory over practicality.

Anyway, I'm just ranting b/c they haven't paid me.

How about an off-the-wall algorithm to estimate how much each scraped input influences the bigger picture, as a way to work toward settling the copyright question?
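For what it's worth, this idea has a real analogue in the literature on training-data attribution (e.g. influence functions, TracIn). A minimal first-order sketch, assuming a linear model with squared loss: score each training example by the dot product of its loss gradient with a test example's loss gradient. The function name and toy data here are purely illustrative.

```python
import numpy as np

def influence_scores(X, y, w, x_test, y_test):
    """TracIn-style first-order influence sketch for a linear model
    with squared loss 0.5*(x.w - y)^2: one score per training row,
    equal to grad_train_i . grad_test at the current weights w."""
    train_grads = (X @ w - y)[:, None] * X       # per-example gradients, shape (n, d)
    test_grad = (x_test @ w - y_test) * x_test   # test-point gradient, shape (d,)
    return train_grads @ test_grad               # shape (n,)

# Toy data: the third training row equals the test point, so it
# should come out as the most "responsible" for the test prediction.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)  # untrained weights; gradients are still informative
scores = influence_scores(X, y, w, np.array([1.0, 1.0]), 3.0)
print(scores)  # [ 3.  6. 18.] -- the duplicate of the test point dominates
```

Scaling anything like this to LLM pretraining corpora is the hard (and open) part, but it shows the shape of the computation the commenter is asking for.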


An LLM-style system designed to understand Wikipedia relevance and citation criteria and apply them might be a start.

Not that Wikipedia is perfect and controversy-free, but it's certainly a more sophisticated approach than the current system prompts.


Then who in this black-box private company is the oracle of infinite wisdom and truth? Who are you putting in charge? Can I get a vote?


> If you train your model to prioritize real photos

I thought that was the big bugbear about disinformation and fake news, but now we have to censor reality to combat "bias"?


I mean, now you need to train AI to recognise the bias in the training data.



