I had similar thoughts. But let's not overlook the information asymmetry, which contributed to the dissatisfaction. I don't want to live in a world that is controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation, assume they understand morality completely, and don't think it necessary to explain why they think so highly of themselves.
It's an interesting question, as we have a spectrum from 'little to no transparency' through to 'full transparency' (which is pretty rare), and in the middle sits the usual approach of 'communications-team-led messaged quasi-transparency'. Difficult to know (without more info) where Shipt would have appeared on this spectrum, but given the issue, they're probably somewhere towards the 'insufficient transparency' end.
What's silly in this case is that (as others have pointed out) the new algorithm seems to have been reasonably equitable, with a genuine redistribution of payments, rather than just a cut overall. Shipt could have avoided this whole situation with a straightforward explanation of the changes, together with a few examples of the cases/jobs in which people would earn more or less.
I think the issue is that there was full transparency on pay (a fixed base rate plus a fixed percentage) and then it was changed without warning.
I work for a salary, which is fully transparent in the sense that I know what my next paycheck will be to the penny. (It’s not transparent in how it’s set, but it is week-to-week.) If my employer started paying me based on effort, and didn’t tell me what constituted effort, not only would I be pissed off but that would be completely illegal.
I’m not suggesting that this change is or should be illegal. But if it happened to me I’d find it extremely unfair.
If Shipt is actually trying to incentivize better performance, it seems the best way is to be completely transparent about the rewards algorithm. "Short high-value trips are now somewhat de-rated, and trips requiring more effort now have improved rewards, specifically ..." or whatever.
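To make that concrete, here is a toy sketch in Python of what a published reward formula might look like. To be clear, neither the old nor the new Shipt formula is public, so every rate and weight below is invented purely for illustration, loosely following the "base rate plus percentage" description above.

    # Hypothetical illustration only: neither the old nor the new Shipt formula
    # is public, so the base rate, commission, and effort weights are made up.

    def old_pay(order_value):
        # Old model as described in the thread: fixed base rate plus a fixed
        # percentage of the order value.
        return 5.00 + 0.075 * order_value

    def new_pay(order_value, items, miles):
        # Guessed "effort-based" model: de-rate the value component and
        # reward item count and mileage instead.
        return 3.00 + 0.04 * order_value + 0.30 * items + 0.60 * miles

    # A short, high-value trip earns somewhat less; a long, many-item trip
    # requiring more effort earns more.
    print(old_pay(200.0), new_pay(200.0, items=5, miles=1.0))   # ~20.0 vs ~13.1
    print(old_pay(60.0),  new_pay(60.0, items=40, miles=12.0))  # ~9.5 vs ~24.6

The point is only that a formula this small is easy to state in one paragraph of an announcement.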
This "communications team" approach did everyone a disservice if Shipt mgt were really trying to improve results.
OTOH, if the actual goal was to screw workers harder, they accomplished that, as there are arguments on HN about how this could be good for the workers, thus successfully obfuscating the goal of screw-the-workers.
Although as far as that graph reflects the study’s results, the new distribution looks almost perfectly balanced: even more people experienced a “10%+” bump than a 10%+ reduction post-update.
By what mechanism do you suggest the worker-screwing is happening here?
I'm not suggesting it is happening; the article is.
I'm pointing out that whether or not the mgt is trying to screw the workers, the opaque approach with the "Communications Team" generated only suspicion about both what was happening and the intent of management.
Let's assume you are correct and the entire adjustment was pay neutral - the total payout to workers for an identical set of deliveries was identical to the penny, only redistributed favoring/disfavoring different mixes of cargo and mileage. Why is it to anyone's advantage to hide that fact?
In fact, if you are trying to incentivize different behaviors, the best thing to do is to provide ALL the information on the reward structure, so the drivers can immediately read and analyze it and immediately adjust their selections to implement the new system.
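As a purely hypothetical sketch of what that would enable (reusing the invented weights from the earlier example, since the real formula isn't public): with the rules published, picking the best offer becomes simple arithmetic rather than guesswork about a black box.

    # Hypothetical sketch: if the full reward formula were published, a shopper
    # could rank offered jobs by expected dollars per hour and adjust at once.
    # The formula weights and the job data here are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Job:
        order_value: float
        items: int
        miles: float
        est_minutes: float

    def published_pay(job):
        # Same made-up "effort-based" weights as the earlier sketch.
        return 3.00 + 0.04 * job.order_value + 0.30 * job.items + 0.60 * job.miles

    def dollars_per_hour(job):
        return published_pay(job) / (job.est_minutes / 60.0)

    offers = [
        Job(order_value=200.0, items=5,  miles=1.0,  est_minutes=25),
        Job(order_value=60.0,  items=40, miles=12.0, est_minutes=75),
        Job(order_value=90.0,  items=15, miles=4.0,  est_minutes=40),
    ]

    # With the rules in the open, "which job is worth taking" is a calculation,
    # not a guess.
    best = max(offers, key=dollars_per_hour)
    print(best, round(dollars_per_hour(best), 2))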
Instead, the only thing management generated was confusion, mistrust, and poor implementation of their goals. For me, this raises a legitimate question of whether management is simply incompetent, or whether they are trying to hide something (i.e., they're taking money off the workers' table and trying to avoid telling them).
even if total compensation was decreasing, the results for the company can be improved: cutting costs lets them provide their services at a lower price point.
cutting costs is not "screwing workers". cutting costs is key to acting in a competitive market.
Yes, generically, cutting costs in the abstract is not screwing workers.
But when you got workers to sign up to work for you under Deal-A, and then you start reducing the workers' pay without their consent, you are screwing the workers. Especially so if you are surreptitiously reducing pay, so they cannot tell they are getting paid less until it is too late to choose whether to provide the work. It is simply dishonest.
Honest dealing would involve telling the workers up front, "we must cut costs, your pay will be x.y% less starting in two weeks; let us know if you'll be continuing under Deal-B". It would be the same if the message were instead "We're keeping total compensation the same as Deal-A, but adjusting it to reward tasks XYZ better and tasks PDQ this much less...". Honest dealing is saying it up front. Hiding the changes is less than honest and creates suspicion.
> I don't want to live in a world that is controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation, assume they understand morality completely
I thought about this topic for a while back when I worked with Law data (e.g. Family Law and Military Law), and I came to the conclusion that several societal institutions and their agents are inherently opaque, even in situations where some "illusionist transparency" is offered (there's transparency, but the magician diverts your attention elsewhere), e.g. the judiciary system, under-the-table political agreements, etc.
That's one of the reasons I would like to have a more algorithmic society with human in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases but, to some extent, you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to explain and rationally justify their decision. At best a human actor will use logic and make the right call, at worst they will transparently expose the biases of the individual.
<< That's one of the reasons I would like to have a more algorithmic society with human in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases but, to some extent, you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to explain and rationally justify their decision. At best a human actor will use logic and make the right call, at worst they will transparently expose the biases of the individual.
I will admit that it is an interesting idea. I am not sure it would work well, as a lot of the power (and pressure to adjust as needed) would suddenly move to the fine-tuning portion of the process, to ensure the human at the top can approve 'right' decisions. I am going to get my coffee now.
> That's one of the reasons I would like to have a more algorithmic society with human in the loop calling the final shots and placing the rationale on top
But isn’t that what the rule of law is supposed to be? A set of written rules with judges at the top to interpret or moderate them when all else fails.
The problem is that, for a variety of complex reasons, the rules are not applied evenly, and sometimes only enforced opportunistically.
So I don’t see how an algorithmic society is any different from today’s society. The problem is not the ability to operate algorithmically, which we already have, but in determining what the rules should be, how they should be enforced, what the penalties should be, who pays for what, and, perhaps most importantly, how to avoid capture of the algorithmic process by special interests.
None of these problems goes away with an algorithmic approach, and even less so if there is a judge sitting on top who can make adjustments.
> a human making the final call on top of a given option would need to explain and rationally justify their decision.
To whom? What you describe does not seem much different from the representative governments most of us here are accustomed to, other than that the algorithm eases some of the day-to-day work required of the constituents. Already nobody cares, and no doubt people would care even less if an algorithm let them be even less involved.
>I don't want to live in a world that is controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation, assume they understand morality completely, and don't think it necessary to explain why they think so highly of themselves.
We already do; uncertainty is fundamental at all levels.
I think that's a very fair point, but wouldn't that be true even if Shipt hadn't made any changes?
It feels to me like the problem wasn't the change. For all we know, the change was a net good thing. The bad thing was the context in which the change occurred.
They had been transparent, and it led to workers finding loopholes that made things unfair for each other. So there is value in opaqueness, in that an opaque system is harder to exploit. But a properly fair and transparent system would be better still.