> And we're all kind of blindly walking into this, no questions asked.
It's already happening, now, unfortunately. All your tweets (posted & liked) are analyzed and flagged for bad language, drug/alcohol mentions, bigotry, etc., and a report is generated for HR.
> Our machine learning technology has flagged hundreds of thousands of
> instances of misogyny, bigotry, racism, violence and criminal behavior in
> publicly available online content.
If you compare that claim with what the product actually seems to do - tracking down your Twitter account and grepping every tweet you interacted with against a list of "thoughtcrime" words (like "hell" or "ass") - you can almost feel the next AI winter coming. How many more bullshit companies calling their fake, broken, trivial technology "AI" or "Machine Learning" will it take until the whole field of ML gets derailed by bad reputation? At least (as far as I know the history), the last AI winter involved companies trying but failing at AI. This time around, they're not even trying.
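To make the point concrete, here is a minimal sketch of the kind of "technology" being described: matching tweet text against a fixed wordlist. Everything here (the `FLAGGED_WORDS` set, the `flag_tweets` function) is illustrative and my own naming, not the vendor's actual code.

```python
import re

# Example "thoughtcrime" words from the comment above; the real product
# presumably uses a much longer list.
FLAGGED_WORDS = {"hell", "ass"}

def flag_tweets(tweets):
    """Return (tweet, matched_words) pairs for tweets containing a flagged word."""
    report = []
    for tweet in tweets:
        # Split on word boundaries so "class" does not match "ass".
        words = set(re.findall(r"[a-z']+", tweet.lower()))
        hits = words & FLAGGED_WORDS
        if hits:
            report.append((tweet, sorted(hits)))
    return report

print(flag_tweets(["What the hell is this?", "Nice weather today"]))
```

Note that even this toy version is arguably more careful than a raw substring grep, which would happily flag "class" or "assistant" for containing "ass" - exactly the kind of false positive that makes calling this "machine learning" absurd.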
https://news.ycombinator.com/item?id=22211363
The company in question is used by Sterling and HireRight for background checks, so this isn't some unknown, niche operation.