
Almost there. Each individual comes with a distinct set of biases. Social collective action is the contextual weighted average of those individual biases. This results in quite resilient societies: at equilibrium, individual biases balance each other across the population, and small shifts in a relatively small number of individual preferences produce gradual societal course corrections rather than dangerous shocks. Smooth ride.

A subtle AI risk is simply the centralization of the decision-making process. This leads to the reinforcement of some subset of biases until they completely overshadow alternative options. The system becomes rigid, having lost its day-to-day, individual-level feedback and, with it, its capacity to adapt to new circumstances. Sooner or later, it will fail.

See also 'too big to fail' and 'central planning'.


How about companies operate in their local markets and we abolish multinationals?


You're raising an ethics concern. Strangely, no self-styled 'ethicist' lavishly employed by BigTech appears to be interested in such a topic. It's almost as if BigTech hires a certain brand of loudmouth 'ethicist' precisely to DDoS the public space and keep questions about economic inequality, now at levels surpassing even the Gilded Age, from gaining any traction.

https://inequality.org/great-divide/america-2018-more-gilded...


See also https://en.wikipedia.org/wiki/Auto-da-fe. Questioning the righteous narrative will not be tolerated.


Yugoslavia is a cautionary tale. Humans have a built-in in-group/out-group mechanism (possibly far older than our species) which, left unchecked, leads to widespread ethnic hatred. It doesn't take much to prime the bomb beyond a steady dose of confirmation of the other group's evil ways, of which we have an ample supply thanks to the proliferation of sensationalistic (social) media. Our lives have become an incessant outrage stream. The clock is ticking...

Hacker culture was born at the zenith of American liberalism, just after its ideals had triumphed over the Soviet Union's self-destructive spiral. Those times may never come back, and liberalism itself is on life support :(


The public sphere is very heavily slanted. Few people dare come out under their own name and condemn Timnit's bullying behavior, for fear of being subjected to that very bullying, which has drastic real-life consequences. The letter demands pushing the conversation into the public sphere, where one-sided Twitter and media pressure can be applied in full. One has to be very naive and sheltered to sign such a letter.


Given the current public discourse climate in academia, where 99% of researchers are politically left or extreme left, I recommend a grain of salt for any research that makes claims of racial discrimination. The report you quote has at least a couple of strange holes. The paper itself is behind a paywall, thus beyond mere mortals' reach.

> For example, in about 10% audits in which a white and an African-American auditor were sent to apply for the same unit after 2005, the white auditor was recommended more units than the African-American auditor. These trends hold in both the large HUD (Housing and Urban Development)-sponsored housing audits, which others have examined with similar findings to us, and in smaller correspondence studies

They fail to mention how large the gap is. Is the white auditor recommended 102 units vs. the black auditor's 97, or 150 vs. 50, or 200 vs. 3? Without that critical information it is hard to form an opinion, unless one is already inclined to accept discrimination narratives uncritically.
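To make that concrete, here's a trivial sketch (the numbers are the purely hypothetical ones from above, not from the paper) of how differently those three scenarios read:

    # Hypothetical numbers: the same headline ("the white auditor was
    # recommended more units") covers wildly different effect sizes.
    for white, black in [(102, 97), (150, 50), (200, 3)]:
        ratio = white / black
        share = (white - black) / white
        print(f"{white} vs {black}: ratio {ratio:.1f}x, "
              f"black auditor shown {share:.0%} fewer units")

That prints roughly 1.1x / 5% for the first case and 66.7x / 98% for the last: the first is a rounding-error-sized gap, the last a near-total exclusion, yet both fit the report's phrasing.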

> In the mortgage market the researchers found that racial gaps in loan denial have declined only slightly, and racial gaps in mortgage cost have not declined at all, suggesting persistent racial discrimination. Black and Hispanic borrowers are more likely to be rejected when they apply for a loan and are more likely to receive a high-cost mortgage.

They fail to mention the magic words 'when controlling for income'. America has a huge income disparity problem, which is conveniently forgotten behind the ongoing race (and gender) hucksterism. Even if we waved a magic wand and fixed all disparities across visible populations tomorrow, it would still not change the fact that huge income disparities exist between individuals. Google engineers and researchers are paid 5 times the national median income or more, and (senior) Google management 10x to 1000x. The vast majority of the population is stuck in dead-end, precarious jobs, with little social mobility, one medical emergency away from bankruptcy.


In causal modelling, thou shalt not control for consequences of the causal treatment (the treatment here being born black). Read your Rubin & Rosenbaum.
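A minimal simulation of the point (all coefficients made up; not from any of the papers discussed): if the treatment lowers income and low income raises loan denial, then stratifying on income hides exactly the effect you are trying to measure.

    # Post-treatment bias, minimal sketch (all numbers hypothetical).
    # T -> income -> denial: the treatment acts entirely through income.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    T = rng.integers(0, 2, n)                        # group membership
    income = 50 - 10 * T + rng.normal(0, 5, n)       # treatment lowers income
    denial = 0.9 - 0.01 * income + rng.normal(0, 0.05, n)

    # Correct total effect: plain difference in group means (~ +0.10).
    total = denial[T == 1].mean() - denial[T == 0].mean()

    # "Controlling for income": compare only units in one income stratum.
    # The effect vanishes (~ 0) because income is a consequence of T.
    stratum = np.abs(income - 45) < 1
    controlled = (denial[(T == 1) & stratum].mean()
                  - denial[(T == 0) & stratum].mean())

    print(f"total effect of T:       {total:+.3f}")
    print(f"'controlled' for income: {controlled:+.3f}")

In this toy model the income-adjusted comparison reports no effect at all, even though the treatment fully determines the denial gap.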


You've worked with a visual-appearance dataset lacking sufficient examples of one class of entities, and the model performed poorly on that class. You solved the problem by adding more examples of that class. While the malfunction might have had some, as yet unquantified, real-world impact in some hypothetical police face recognition system, it doesn't follow that:

a. Datasets that are not about visual appearance are prone to the same problem, to the same degree. Perhaps the home-lending datasets and systems have small race (visual appearance) issues but large class issues. The political debate over how to handle class issues is as old as politics itself.

b. The real-world impact is large; it depends on the actual system deployed. Perhaps a hypothetical real-world system has a 1% failure rate for one group vs. a .1% failure rate for another (a sketch of how one would measure this follows below). Should we stop developing useful systems just because they do not produce exactly the same results across every visible demographic we can carve ourselves into?

c. The impact cannot be mitigated by human post-processing. The hypothetical face recognition system is part of the judicial process, and there are many checks and balances before anyone suffers drastic consequences: for example, a human actually looking at the picture, or a solid alibi. "Your honor, I was skiing in Canada at the time of the alleged Florida murder."
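On point (b), the gap is at least measurable; a minimal sketch (group labels and records are made up) of the per-group failure-rate bookkeeping one would do on a deployed system:

    # Per-group failure rates on labeled outcomes (records are made up).
    from collections import defaultdict

    records = [  # (group, prediction_was_correct)
        ("group_a", True), ("group_a", True), ("group_a", True),
        ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        errors[group] += not correct

    for group in sorted(totals):
        print(f"{group}: {errors[group] / totals[group]:.0%} failure "
              f"rate over {totals[group]} samples")

Only once those numbers exist for the actual deployed system does the "1% vs .1%" question become a policy debate rather than a guess.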

As others have expressed in this thread, dealing with first-order visual issues is easy: everyone can agree at a glance what a correct solution to a visual question is, and bugs are usually straightforward to fix. Language issues, on the other hand, are second-order; everything is subject to interpretation. Once we open the can of worms of talking 'critically' about language and AI, we get uncomfortably close to language police, and, via Sapir-Whorf, to thought police. The BIG underlying stake of 'AI ethics', one that possibly neither side has completely articulated just yet:

Should a small group (in the thousands) of hyperachieving, hyperprivileged individuals working in the AI labs of the handful of megacorporations that control the online flow of human language get to decide what we can say, and by extension what we can think?


The entire course of action you suggest could have happened were it not for Timnit going public of her own volition, accusing her employer and coworkers of unethical behavior in the process, and encouraging sympathetic colleagues to apply external political and judicial pressure on Google. What for? So she could publish a review paper that disregarded relevant internal feedback?

She made a lot of fuss and burnt a lot of bridges. Nobody forced her to do so.

