
> I don't want to live in a world which is controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation and assume they understand morality completely

I thought about this topic for a while back when I worked with law data (e.g. family law and military law), and I came to the conclusion that several societal institutions and their agents are inherently opaque, even in situations where some "illusionist transparency" exists (there is transparency, but the magician diverts your attention elsewhere), e.g. the judiciary, under-the-table political agreements, etc.

That's one of the reasons I would like to have a more algorithmic society with a human in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases, but you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to explain its decision rationally. At best, a human actor will use logic and make the right call; at worst, they will transparently expose their individual biases.
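To make the idea concrete, here is a minimal sketch of the "algorithm proposes, human decides and records a rationale" pattern I have in mind. All names here (Option, rank_options, human_final_call, etc.) are hypothetical illustrations, not any real system's API:

  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class Option:
      name: str
      features: dict[str, float]

  @dataclass
  class Decision:
      chosen: Option
      algorithm_ranking: list[tuple[Option, float]]
      rationale: str          # the human's recorded justification
      overrode_algorithm: bool

  def rank_options(options: list[Option],
                   score: Callable[[Option], float]) -> list[tuple[Option, float]]:
      """The 'algorithmic' part: transparent, inspectable scoring."""
      return sorted(((o, score(o)) for o in options), key=lambda p: p[1], reverse=True)

  def human_final_call(ranking: list[tuple[Option, float]],
                       chosen: Option, rationale: str) -> Decision:
      """The human-in-the-loop part: no final call without a rationale."""
      if not rationale.strip():
          raise ValueError("A decision without a recorded rationale is not accepted.")
      top, _ = ranking[0]
      return Decision(chosen=chosen,
                      algorithm_ranking=ranking,
                      rationale=rationale,
                      overrode_algorithm=(chosen is not top))

  # Usage: the scoring function encodes the (auditable) institutional bias;
  # the recorded rationale exposes the individual's reasoning when they override it.
  options = [Option("A", {"cost": 0.2}), Option("B", {"cost": 0.9})]
  ranking = rank_options(options, score=lambda o: 1 - o.features["cost"])
  decision = human_final_call(ranking, chosen=options[1],
                              rationale="Option B better serves the affected family.")
  print(decision.overrode_algorithm, decision.rationale)

The point of the sketch is only that both halves leave an audit trail: the scoring is explicit and tunable, and the human override is impossible without a written justification.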



<< That's one of the reasons I would like to have a more algorithmic society with a human in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases, but you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to explain its decision rationally. At best, a human actor will use logic and make the right call; at worst, they will transparently expose their individual biases.

I will admit that it is an interesting idea. I am not sure it would work well, as a lot of the power (and the pressure to adjust as needed) would suddenly move to the fine-tuning portion of the process, to ensure the human at the top can approve the 'right' decisions. I am going to get my coffee now.


> That's one of the reasons I would like to have a more algorithmic society with a human in the loop calling the final shots and placing the rationale on top

But isn’t that what the rule of law is supposed to be? A set of written rules with judges at the top to interpret or moderate them when all else fails.

The problem is that, for a variety of complex reasons, the rules are not applied evenly, and sometimes only enforced opportunistically.

So I don’t see how an algorithmic society is any different from today’s society. The problem is not in the ability to operate algorithmically, which we already have, but in determining what the rules should be, how they should be enforced, what the penalties should be, who pays for what, and, perhaps most importantly, how to avoid capture of the algorithmic process by special interests.

None of these problems goes away with an algorithmic approach, and even less so if there is a judge sitting on top who can make adjustments.


> I would like to have a more algorithmic society with a human in the loop calling the final shots and placing the rationale on top

Let’s re-audit the algorithm regularly; say, perhaps, a central committee revisits and revises the plan every 5 years?


I'm not so sure the corporation will survive if humans do.


Like elections?


> a human making the final call on top of a given option would need to explain its decision rationally.

To whom? What you describe does not seem much different from the representative governments most of us here are accustomed to, other than that the algorithm eases some of the day-to-day work required of constituents. Already nobody cares, and no doubt they would care even less if an algorithm let them be even less involved.



