I think there's an argument that "empathy" (or even its wider-scoped sibling "compassion") has a potentially sociopathic element: a sort of cynical manipulation to appear virtuous to the tribe, one which is more effective if it convinces our own brains, so as to more effectively convince others. See Tim Wilson's "Strangers to Ourselves", Hanson & Simler's "Elephant in the Brain", etc.
Supposedly, there's also an empathic element to being an effective hunter, in both humans and other animals: deeply understanding and empathizing with your prey helps you capture and devour them. Likewise, the marketing, advertising, and product development divisions of corporations can be deeply "empathic" to the desires of the buyer, without that empathy necessarily working to the buyer's benefit.
At any rate, I don't disagree: corporations are agnostic to societal externalities and highly incentivized to create habit-forming relationships with customers, which often leads to behavior indistinguishable from sociopathy. To some extent, I think the best we can hope for is aligned incentives; sometimes what's best for a company's bottom line really is making people's lives better. But we shouldn't be naive about exploitative relationships (even nominally voluntary ones), nor should we lean on "The Market" as the singular organizing principle of society.
Extrapolating to AI, we should be very cautious about which metrics we optimize any particular algorithm for. Corporations optimizing for stockholder returns are ultimately a subset of "paperclip maximizing." What we really want is a balance of multiple leading indicators of success, with those success conditions constantly tweaked as we discover new metrics for measuring human flourishing.