Don't Get Distracted (2017) (calebhearth.com)
206 points by Kerrick on Dec 11, 2024 | 158 comments


In a different place a long time ago we were approached by a company that wanted a collaboration. They had a very specific project in mind, with requirements already set. It was a humanitarian project to "find earthquake survivors". If you read the requirements in a very positive way it was an ethically clean project.

But the company is a major defense contractor and it was very unusual for them to be involved in humanitarian work. So it made sense to question the requirements after a less generous reading. Detecting mostly occluded humans in a highly cluttered and destroyed urban environment has other use cases. They got as far as "highlighting the survivors with a laser designator" before I started to ask some very pointed questions and start discussing the ethical considerations of us helping to build a system to kill people.
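
(For readers wondering what the nominally benign half of such a project even looks like: the rescue framing boils down to "find partially occluded people in rubble imagery." A minimal sketch using an off-the-shelf pretrained detector with the confidence threshold lowered so occluded people still register; the model weights and image path here are placeholders for illustration, not anything from the project described.)

    # Toy sketch: flag partially occluded people in rubble imagery.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # small pretrained COCO model; "person" is one of its classes

    # Lower the confidence threshold so heavily occluded people still show up,
    # at the cost of more false positives (tolerable for a rescue team).
    results = model("collapsed_building.jpg", conf=0.15)

    for r in results:
        for box in r.boxes:
            if model.names[int(box.cls)] == "person":
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                print(f"possible person at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
                      f"confidence {float(box.conf):.2f}")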

I was asked to leave.

After that meeting the potential collaboration was rejected and it never came up again. The company continued shopping around other places to get help building their "earthquake rescue system".

Never get distracted. Always question the purpose of work. Consider ethics.

Getting another job can be a pain. Living with knowing what you helped to build would be much harder.


I'm a little surprised that a defense contractor would bother with that much awkward subterfuge or even farm out a core part of their product to some random other company.

The handful of US based defense contractors I've had contact with were VERY up front about what they develop. I think they more so wanted you to know so everyone was on the same page.


I'll bet my next paycheck the above scenario never happened, or this company was less "major defense contractor" and more "VC-funded SaaS-app defense contractor wannabe."

Lockheed Martin doesn't need to trick anyone into working for them, and that includes software developers.

If this was a real thing that actually happened, everyone would need a security clearance anyway.


A LOT of major (and minor) defense contractors were always on campus at UCF, when I was there, interviewing/recruiting. They had no shortage of students wanting to get out there and get a job.


Long long ago I had contact with Lockheed Martin, their emails had wallpaper of jets, bombs, weapons systems.


> The handful of US based defense contractors I've had contact with were VERY up front about what they develop. I think they more so wanted you to know so everyone was on the same page.

That matches my couple data points.

I did a lot of software engineering consulting work through a Federal contractor. When I was first approached, the director told me some of their customers were military, and asked how I would feel about that. I said something generally favorable, and that I had only one personal rule: I didn't want to work on weapon systems or domestic surveillance. That was fine with them, and we did years of great work on very positive safety programs.

Earlier, in school, I did a final project for a class (a simple "agent" system to help people find online communities of interest in Internet chat). The professor later told me, out of the blue, that a federal contractor wanted to license my little project(!), for surveillance(!), and they'd pay me to advise. At least they were upfront about what they wanted it for. I declined and discouraged them.[1]

[1] One reason was that I was turned off by surveillance of the Internet, as a pre-Web Internet kid (also raised on anti-Soviet/anti-police-state propaganda) who still thought of the Internet as a better world that we insiders wanted to bring to everyone else. The other reason was that my little technical approach would've been a waste of money for that surveillance problem. IIRC, I was polite in declining, and didn't mention the first reason, but was probably impolitic in telling them the latter.


It is only a couple data points, but the story given was "major defense contractor" and there really are only a few of those.

At least as far as the US goes I've never heard of them awkwardly trying to get people to develop something for them in the manner described. In my experience they're very up front about what they do and aren't short on people willing to work for them.

Not to say the story was a lie, I don't know, but it sounds very unusual.


I don't know whether there's a common practice that's followed up and down all the org charts of all the employers.

I didn't disbelieve the article's story. It might've just, for example, been someone's pet project, with only budget to hire some co-op students and an inexperienced new-grad coder, to make a demo, to pitch for a real project/contract. Or to proof-of-concept a method, to possibly be properly designed and implemented by a real engineering team. Maybe they didn't think they needed experienced serious engineers with clearance for this compartmentalized exercise.

Another, maybe less-likely, possibility was something involving a cleanroom implementation as demonstration it was "obvious to first-year students". Or to find a plausibly alternate cleanroom implementation, by people who hadn't been tainted by knowledge of the encumbered one. Guessing not, but I did see something like this done at least once. I'd bumped into this affable young high school student, who was idling around in common areas of my lab, and was outgoing enough to strike up a conversation with random passerby me, and start telling me he was working on a project. It turned out a professor had tasked him to reinvent this one algorithm that was some alum grad student's thesis, which had been patented, and which the alum had been milking rather hard... and the professor didn't tell the high school student about the prior work.

I also once had a current PhD student hired as an intern, to apply very niche expertise towards a speculative new feature ("wouldn't it be great if we could do this cutting-edge thing in the new system"), and they were mostly successful, but the product ended up going a different direction. (The PhD student did OK: they then had understanding of a great, tricky real-world domain example, to guide research. A lot of research uses over-simplified examples of applications that they never had the opportunity to really understand.)

I did have a twinge while reading the article, wondering why it seemed like they were about to go into methods used by what sounded to my naive ear like a tracking/targeting component of a military system, but I would guess that they didn't spill any secret sauce beans.


They probably already have a complete system. They're just looking for new revenue sources.


I worked for a company that located phones during E911 calls, at the cell network level, without using GPS, down to a few meters. The most humanitarian mission you can imagine. It was golden.

But it turns out, authoritarian states wanted to buy the very same tech to locate undesirable people, record their associates and travel history, geofence them, etc. Also turns out salespeople gonna sell. And that's how you go from an ethical position to another kind. People did quit over that.
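
(For anyone curious how network-side location without GPS can work at all: one common approach is multilateration on time differences of arrival at several towers. A minimal sketch of the textbook idea, with made-up tower coordinates and timing measurements; this is not a claim about that company's actual method.)

    import numpy as np
    from scipy.optimize import least_squares

    C = 299_792_458.0  # speed of light, m/s

    # Hypothetical tower positions (x, y) in metres.
    towers = np.array([[0.0, 0.0], [3000.0, 200.0], [1500.0, 2600.0]])

    # Measured arrival-time differences relative to tower 0 (seconds), made up.
    tdoa = np.array([1.2e-6, -0.4e-6])

    def residuals(pos, towers, tdoa):
        # Distance differences to each tower vs. tower 0, compared with
        # the measured time differences converted to metres.
        d = np.linalg.norm(towers - pos, axis=1)
        return (d[1:] - d[0]) - tdoa * C

    fit = least_squares(residuals, x0=[1000.0, 1000.0], args=(towers, tdoa))
    print("estimated position (m):", fit.x)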


So what happens now, the next time there is an earthquake and we need to find occluded survivors?


You send the killbots, but replace the existing machine-guns with lasers.


They die, or we send people in who themselves risk death.


Professionally trained search-and-rescue dog teams, from what my kids learned in an IMAX film at the science center.


As far as I can tell, this isn't widely known, but around a decade ago some folks deployed a version of this technology for that exact use case: https://www.jpl.nasa.gov/news/finder-search-and-rescue-techn...

The obvious military applications for this technology are explored in Tom Clancy's novel "Rainbow Six" (1998).


What about questioning the purpose of the taxes you pay?


Questioning where your tax money goes is a good thing, and a necessary activity to be an informed citizen. However the next step is usually "I don't like $THING, therefore I shouldn't have any of my tax money go toward paying for it" which does not work at scale.

At least in the US, we are not a direct democracy, so it's not important or even desirable to have everyone decide where their money goes based on their own individual morals or ideology. We elect leaders, and in a perfect world they're smart enough (lol) to see that taxpayer funds are allocated in a manner that best helps Americans (lol).


It's not often that we get to see whataboutism literally start with an actual "What about...".

Less snarkily: even if we agreed that paying taxes makes one complicit in evil acts done with that money, it doesn't follow that such a person shouldn't still avoid being complicit in other evil actions where they can.


Do you know what happens if the military does not have a system to find and target specific people in a building? They just might need to bring down the whole building. Everyone in it might die, but at least your conscience is clean.


So really, we're all murderers for not helping them to murder more carefully? No.


Even the wording of this question conflates a soldier killing a soldier during a conflict (which is objectively not murder) with the wanton execution of civilians with no regard to collateral or preventable damage, which is a war crime. And completely ignores the existence of a middle ground where a high-value military target is killed, preventative measures are taken to limit civilian casualties, and some civilians are killed despite those measures. That is not good but it's also not murder, and not a war crime.

I would love to have an honest, thoughtful discussion about how a war can be prosecuted between two powers with minimal civilian harm, but it's not possible when people aren't even honest about what constitutes murder and what doesn't.


Calling it "collateral damage" or "manslaughtering civilians" or whatever won't change my argument in the slightest. This isn't a language barrier. Actually, it sounds like the opposite: you're restricting the discussion to people who share your mindset.

Some people feel even more strongly than I do and would prefer to avoid being involved in killing even an enemy combatant.


I'm not restricting anything but it's a different discussion entirely.

Discussing (1) how to limit civilian casualties during an otherwise legitimate conflict between two nations is one thing; (2) whether those civilian casualties constitute murder (and (3) by whom) is something else. But through pretty much all of human history, nearly all people have recognized a difference between killing during a war, including innocent civilian deaths, and intentional murder. If you want to discuss #1, you'd probably have to agree on #2 first, don't you think?


I agree it's an interesting discussion, but I really didn't get any of that from the comment you originally replied to. (Please correct me if I'm wrong.) My point was that the word "murder" there is largely irrelevant, the issue is killing in general. For me, there's no definition or level of efficiency that will make me comfortable being able to directly connect my hard work and someone else's death. And trying to frame harm caused by another as blood on my hands due to my refusal to help is ridiculous.


Replace murder with kill. It doesn't change the point. I'm not more complicit through refusing to help them kill in the first place.


Not everyone agrees that a soldier killing a soldier during a conflict is objectively not murder.

Many of us, including many intelligent people, think that all such premeditated killing is objectively murder regardless of the political context.

Please don't handwave this fact away.


Given that some people do not see a soldier killing another soldier vs a serial killer killing a random person as different, I think it is relevant. The technical distinction as far as I can tell is homicide (killing in any sense) vs murder (unlawful killing, e.g. not what soldiers typically do). How can one have a discussion about the ethics if those are not different? To flip it around, how could we have a discussion about the ethics of software and human life, if one of us believed that a serial killer killing people was an ok thing to do? e.g. everything would be ok, so there's no discussion? Conversely if all forms of combat killing are not ok, then there's no discussion to be had.


I suspect the idea that “people aren't even honest about what constitutes murder and what doesn't” creates a high barrier to the discussion you’d like to have.

From your post, I think you’re starting from the pov that governments have the right to decide what is murder vs an acceptable killing. Some of the people I’ve met who are most interested in these ideas are staunch pacifists with a strong religious motivation (e.g. Quaker or Methodist) and reject the idea that governments can declare any killing to be acceptable. I don’t believe they’re dishonest, for all that it’s a very different starting point.


Society, not governments, has more-or-less agreed for at least a few thousand years that there is a difference between these two acts. You're free to feel differently, but as another commenter pointed out, there's not much of a discussion to be had on the ethics of killing in war if you think any two instances of one human killing another human are identical from a moral or ethical standpoint. Throughout all of human history, most people have believed there is a difference.


Attacking computer security is the only example I can think of for conflict without hurting civilians. Any armed conflict is going to cause additional civilian deaths.


Attacking computers that control critical infrastructure could absolutely cause civilian deaths.


Honestly, this is just Roko's Basilisk but even dumber.


They pay a military aged male to point a PEQ at the guy.


It's worth mentioning that this really isn't programming-specific, or even engineering-specific, at all. The exact same story applies to the guy doing finance, or marketing, or project management, or legal work, or whatever. This is basically a reminder that your role as a cog in the wheel is to make the wheel move, and you shouldn't forget what the wheel is doing.

That said, working for a US military contractor while not being supportive of the US military mission is kind of a silly thing to do.


The throughline is that we get caught up in long-arc status games ('did i do a good job learning all those things they taught us in school, do i look smart in front of my tribe / in front of all these other seemingly impressive engineers around me, am I demonstrating to my outside friends, family, future romantic interests that I'm a stable & effective member of society?') that fool us into blinding ourselves from looking at the bigger picture effect of our work.


The George C. Scott scene in Dr. Strangelove, when he is talking about the B-52's ability to evade radar, is the canonical example of this. He is literally cheering the end of the world because he is proud of the skill of "his boys".


I think it's fair that you might think mostly about the military work overseas and not about it being licensed to local police or used to track citizens or so on.

Wasn't a lot of the Snowden stuff that tools intended to be used on foreign enemies would be used at home?


From my understanding (with no special access or insider info), it's unclear how much it was used "at home" (against US citizens). Certainly the collection infrastructure was in place, but there were technical and procedural safeguards that were intended to filter out US citizens' data, and analysts were not supposed to access US citizen data without a search warrant from a judge.

It's interesting to me that many people who work for giant tech companies that carry out indiscriminate data collection feel confident in their employer's internal processes and controls, but don't extend that same optimism when it's the government that pinky-swears they aren't going to peek at anything they shouldn't.


> there were technical and procedural safeguards that were intended to filter out US citizens' data, and analysts were not supposed to access US citizen data without a search warrant from a judge.

You never read about LOVEINT?


Yeah, this is an example of the kind of misbehavior those controls are meant to prevent.

This kind of thing is routine in the tech world, too: https://nymag.com/intelligencer/2010/09/elite_google_enginee...


> there were technical and procedural safeguards that were intended to filter out US citizens' data, and analysts were not supposed to access US citizen data without a search warrant from a judge.

Nope

> The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic.

> An internal NSA audit from May 2012 identified 2776 incidents i.e. violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S. in the period from April 2011 through March 2012, while U.S. officials stressed that any mistakes are not intentional.

> The FISA Court that is supposed to provide critical oversight of the U.S. government's vast spying programs has limited ability to do so and it must trust the government to report when it improperly spies on Americans.

> A legal opinion declassified on August 21, 2013, revealed that the NSA intercepted for three years as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before FISA court that oversees surveillance found the operation unconstitutional in 2011.

https://en.m.wikipedia.org/wiki/2010s_global_surveillance_di...

The kicker to me:

> it must trust the government to report when it improperly spies on Americans

The courts told them to watch themselves and to self incriminate if they do crimez. An honor system. Which naturally, they did not snitch on themselves to the courts, because why would they.

Also let's not forget that they can make all the illegal surveillance legal in an instant just by labeling any person a "terrorist"


Sure this is a great example. You're extremely suspicious of the NSA's willingness or ability to police themselves, but you trust Amazon? Google? Facebook? Why?

https://www.vice.com/en/article/google-fired-dozens-for-data...

https://www.independent.co.uk/tech/amazon-ring-camera-spying...


Google staffers can't jail or torture or assassinate you. The CIA can and does.


There are very few cases of the CIA torturing or killing US citizens (foreigners, yes). There is legal and congressional oversight over the CIA, and intelligence agencies can be held accountable by American voters through the democratic process.

There is no similar venue for oversight and accountability over the tech industry, which almost certainly holds more sensitive data on the American public.


The CIA wouldn't really be good at what they do if the assassinations or other illegal things they do were well documented and available to the public.

There's a reason the things they do are done in places where US laws don't apply...

> CIA black sites systematically employed torture in the form of "enhanced interrogation techniques" of detainees, most of whom had been illegally abducted and forcibly transferred. Known locations included Afghanistan, Lithuania, Morocco, Poland, Romania, and Thailand

https://en.wikipedia.org/wiki/CIA_black_sites

Then there's GITMO


There is zero meaningful congressional oversight of the CIA.

They broke major laws and tortured people, hacked the computers of the congressional oversight group to delete evidence, got caught hacking, lied about it, and nothing happened anyway.

The mental model you have where the CIA is in any way constrained by the law is an utter fiction. They would summarily execute Snowden in a millisecond if they could.


Why haven't they, then? Your model of intelligence agencies as sinister and omnipotent is at odds with all the available evidence.

This is who is lurking behind the curtain, petty, fallible humans just like everywhere else: https://www.newyorker.com/magazine/2022/06/13/the-surreal-ca...


They are sinister and competent; nobody said anything about omnipotent. The question was why should we be more concerned about government data access than private industry. The answer is because the government claims a monopoly on violence, up to and including murder and assassination; Google does not.


Please show me where I said I trust Amazon, Google, or Facebook. You must have read that I trust those companies for you to hold that belief so again, show me where I said that...

Don't make assumptions and don't put words in my mouth


> That said, working for a US military contractor while not being supportive of the US military mission is kind of a silly thing to do.

In my youth I applied for a defense force job in my country. I was turned down because I didn't have recent experience in the exact skill set they wanted for the first project (iirc it was Visual Basic front-end UI coding, lols. My employer at the time said they'd made a stupid mistake, I should have appreciated this statement more at the time) but it made me think. I never went for another job in defense. Honestly I don't have a problem making machines that might kill (such things are, sadly, often necessary) but on reflection, I sure do have a problem with killing someone that I don't agree should be killed.

This one time I did help a friend fix a machine that was cleaning the defense force's boats, but that's another story.


The problem is that the stated mission of the US military and the actual mission of the US military are not remotely the same thing.

Many reasonable people support the stated mission. But that shit ain't the truth.


"Any sufficiently large organization's stated mission is unrecognizably different from its actual mission".

(with apologies to Arthur C. Clarke)


It’s wild how little The-Bad-Thing even gets mentioned in work settings, if at all.


That's pretty much how all the people working on tracking features at GAFAMS operate, along with all the people working on algorithms that manipulate people's behavior on social networks and shopping sites, and incidentally how the people working on the Manhattan Project operated.

They get into the interesting problems and miss the forest for the trees.

Plus it's fun to be among smart peers, and well-paid.

The author woke up from this, but many do it for their whole life and see no problem with it.

That's why I rejected interviews from Google and Facebook. Because I knew I would do the same. It's too tempting.


Yep, I -- an always very political, lefty, very critical person -- fell into this trap. Nerd-sniped myself. Ended up working in ad-tech for some years and making pretty good money at it, and eventually ending up (through acquisition) at Google because of exactly this. Because of intellectually interesting things... because of the opportunity to do those things...

Copious quantities of impression data? Get to do big-data-ish things? Play with things like Cassandra etc (back then kinda interesting), k-means clustering and fun stats stuff and eventually ML?, high throughput & low latency transaction processing... shaving milliseconds off here and there... and lots of money in the sector... to hire devs, to do these things... which were at least back then somewhat intellectually challenging...

Especially for people not in the Bay Area, opportunities like this were/are hard to come by...

But then once you take a step back and look at how the sausage is really made, you start to feel icky.

Granted, that was ad-tech in 2010ish time-frame. It became much much worse.

Luckily within Google I was able to transfer out of ads, and into things that seemed on the surface much less icky (consumer hardware, etc.) But the thing is... no matter what you're doing at Google, that project is funded by ad-tech... So...


If you're using ad-tech money to do something actually good, does it matter that the money was made from ad tech?


Yes. In a sense, your behavior launders adtech’s poor reputation.

Think of it like this: what if I tried to argue with you “think of all the good we can do with the profits from the baby-pulping machine!” as an executive of the baby pulping company. Maybe I run a dog adoption agency with the profits, or a soup kitchen. I’m using acts of charity as a way to launder the horrific actions of my company.


The question then is not about the CEO but about the people working in that dog adoption agency: is their work unethical?


That’s a question indeed! If you know your largest donor is baby pulping inc. is it ethical to take that money?


I mean, you're doing an "appeal to extremes" argument here... And I think it's a bit more subtle.

Almost everything we do as a consumer or worker in a capitalist market is tied somewhere down the line to a generally-agreed undesirable outcome somehow.

So the really tricky part that involves using your brain and heart is figuring out where that line is.

(And it's sad that in large part our voice in the world's operations is reduced to mostly ineffective "buy or not buy" ... )


By doing something good, do you mean buying a pile of synthesizers and vintage computers and gardening tools? Asking for a friend :-)

But seriously, I think there's a big difference between ad-tech in the strict sense of pre-"social media" ad tech and ad-tech now. When I was doing it, behavioural targeting was just getting started. One of the startups I worked at was trying to get knee deep into it (pretty incompetently, IMHO), but overall it was still "here's a stream of mostly undifferentiated impressions/bid-requests/clicks" and that was that...

Overall, it was hard to make a strong moral judgement at that point. Ads paid for the Internet as we knew it and most publishers could not survive without it.

But after about the Facebook IPO, we're talking about a very different thing, one that gets much more morally grey, and sometimes just outright evil.

If the thing you're working on ends up with pushing even just one teenage girl into an eating disorder... How do you feel now about it now, even if you dumped your earnings into charity?


If you're using human trafficking money to do something actually good, does it matter that the money was made from human trafficking?


Pardon me another story.

In the early 2000s, I wrote software for a startup where the founders and initial equity-holders had pledged 10% of the founding stock to a certain charity. This was part of the marketing approach for the company- help us succeed and it benefits others (without affecting the competitiveness of our cost/pricing structure of our services in any particular tangible way.)

There were some weaknesses in the model of mixing charity with a startup mindset (the model later changed to 10% of profits, and we eventually brought the charity work in-house), but as a lead engineer I actually found it quite helpful in pushing back on questionable (in my eyes) things the CEO was otherwise tempted to pursue and ask me to build (the CEO not initially being a technical guy, nor having any developed sense of technical ethics around spam, dark-web practices, etc.). If we did something really unethical (and were caught), it would actually hurt the charity we were all trying to help and undermine part of our company's core differentiation.

If you are a technical founder and bake this sort of charity commitment into your startup and its marketing on day 1, you can help people at all levels of the firm develop a more ethical culture than the average startup which faces all sorts of (often stupid) temptations.


On the other hand, I've also seen companies that start saying that it's okay to cut corners and drive people into the ground because it's for the "greater good." I think SBF took advantage of this in many ways as well.


Fair point. There is no substitute for keeping your eyes open.


Half our work is on bringing GPUs/graphs/AI/etc to Serious Problems like cyber, misinfo, fraud, crime, etc, and a few things I've come to:

* If you're unsure, quit

* It's fine to distrust the system, and as with most of the population, do something else

* If you do work on a Serious Problem, decide what your line in the sand is, and ensure that aligns with the organizations you work for & with

* Constantly reevaluate yourself, the people, and the work

Extremes like "don't do things if they can ever go wrong" would suggest the only answer is to do nothing. That doesn't work for most things. Likewise, I don't like painting people as unethical just because they have evolved different belief systems or, quite often, a more nuanced & evolved view of something difficult. Thankfully, I find most people in the US, and in many other countries, to try to do the right thing on average. This is even more true of people grinding on Serious Problems. So it often comes down to personal responsibility and awareness for whatever you care about, keeping the alignment with those around you... and leaving when that stops working.

The AI & robotics booms have made this stuff even more important. We view it as a chance to do good, and with more accountability & quality than when those with less scruples & ability do so. But agreed, we pick where we work, and ensure new staff agree on where we drew our own lines in the sand.


Agree with your points overall, but one of the specifically worrying things about right now is that the massive downswing in the tech job market means that -- especially for those of us outside of the Bay Area -- your first recommendation, "quit", has become a lot harder than it was 2-3 years ago.

e.g. I work fulltime in Rust, and love it. I look at the job postings that emphasize Rust ... and 80% of them are crypto/web3 companies. I won't work for those companies because of my personal beliefs about "crypto"...

But if I were to lose my pretty decent job right now... and the bills started to pile up and my family was in financial danger... The argument becomes more difficult.


Agreed!


Author here. I gave this as a talk at several conferences through 2017.

I didn't realize that the video embed was broken until someone pointed out that this was making the rounds again, so I've just fixed that. If you've already looked, the video is now available. The content is more or less the same as the written version so don't feel like you need to watch it if you've already read the post.


I went into facial recognition specifically to demystify the technology as well as to learn first-hand the ethics of that industry. I was there for 7 years, and learned enough to duplicate what is SOTA and to know that the ethics are nonexistent. Securing locations was the primary goal of the company and the tech, but when they started ignoring obviously unethical client behavior, when it became clear that such behavior was omnipresent, and when they refused to do anything about it after we discussed the situation, I left the company and the industry entirely.

The issue is the common practice of police asking crime victims if their assailant looked like any celebrity. If they give any celebrity's name, they put that celebrity's face into their FR system, and start harassing anybody in their FR system that looks like the celebrity. That is omnipresent, and a very good reason to alter your appearance if you are unlucky enough to look like anyone famous.


Do you believe that there is sufficient moral hazard for Ukrainians working in their domestic drone industry to dissuade them from doing that work? Why or why not?

What is different about that situation from the situation you described in your talk?

What factors would have to change in the global balance of power for you to consider building systems that kill for American companies?


Morality is personal. In any instance, we should think through first-, second-, and subsequent-order consequences of our actions and consider whether we're comfortable with those potential outcomes and whether those consequences balance one another.

I won't take the bait on defining the differences between wartime and peacetime work.


I don't know if the gp's question is intended as a bait or not.

However, it describes a real scenario playing out literally as we speak.

While I absolutely don't mean this in a confrontational manner, if your ethical framework doesn't provide you with means to address a real-world ongoing situation, what's it even useful for?


The distinction between wartime and peacetime is important. As you said, the devil is in the details and we should always think about the consequences of our (in)actions.

The decades of American misadventure in the Middle East have been devastating for the future of world peace. In the quest to occupy two countries and engage in asymmetric warfare against relatively poorly equipped terrorists / freedom fighters the United States has burned a tremendous amount of social capital and soft power abroad. Additionally these prolonged conflicts have been hugely unpopular domestically and have had detrimental effects on the morale and functionality of the armed forces. People and institutions are burned out at the thought of supporting the armed forces, and your talk is a prime example of that.

This threatens world peace because even in times we consider to be peaceful, authoritarian forces are plotting against democratic institutions. While America burned trillions of dollars of assets and social capital, Russia and China have been quietly amassing the resources to wage war and shatter the peace that people like us, as well as our Ukrainian, Taiwanese, South Korean, and Japanese counterparts, enjoy. At the same time they've been not so quietly waging war in a different domain and have built up a disconcertingly powerful fifth column through concerted social media campaigns that affect large sites and Hacker News alike.

I used to be very opposed to American hegemony and interventionism and for good reason too. What happened in Iraq and Afghanistan was an atrocity. Dick Cheney and others should be living out the rest of their lives in the ICC detention centre and the oligarchs who indirectly amassed their fortunes from these conflicts like Liz Cheney should be stripped of the resources and influence that they have in our society today. Unfortunately that's not the world we live in.

Instead we live in the reality where these people are free to run amok and their authoritarian counterparts in Russia and China are preparing for an all-out assault on global democratic institutions and individual freedom.

What's even more concerning is that we live in a world where the resources of the US military are depleted. The decades spent fighting insurgencies have left the military too unfocused to address the rise of countries like Russia and China. The US can't even supply enough shells for Ukraine to wage war effectively. And its plans to reach the level of production needed for that conflict and others like it are too little, too late.

In the seemingly impending conflict with China over Taiwan, the wargame scenarios paint a dire picture.[0] America has insufficient stocks of missiles to wage a protracted war with China, with many supplies estimated to be exhausted within a week of conflict, and the lead time for producing replacements is measured in months to years. China comically outstrips the shipbuilding capacity of the United States, with the US Navy so desperate to build naval ships that it has begun outsourcing production to South Korea[2], which again seems too little, too late, and precariously close to Chinese missiles.

If America loses access to the advanced production of South Korea[3] and Taiwan in the near future, how will it ever scale up production to meet the rising threat of authoritarianism like it did in World War 2?[4] While the US has let its industrial capacity deteriorate and has meagre stockpiles for war, China is not so quietly building its own.[5] China dominates the production of crucial commodities like steel, aluminum, and copper, as well as more advanced products like batteries and solar panels. It is constructing massive factories[6] to dominate the electric car industry, factories that can easily be repurposed to produce drones, which are proving to dominate the battlefield in Ukraine and Russia.

I agree with you completely. We must be ever mindful of actions and their effects, both intentional and unintentional. There are consequences to action and inaction alike, but the question is not whether the price of action is high, but whether the world can afford the cost of inaction.

[0] https://selectcommitteeontheccp.house.gov/sites/evo-subsites...

[1] https://www.visualcapitalist.com/countries-dominate-global-s...

[2] https://www.navalnews.com/naval-news/2024/08/hanwha-ocean-be...

[3] https://www.visualcapitalist.com/which-countries-have-the-mo...

[4] https://www.construction-physics.com/p/how-to-build-300000-a...

[5] https://archive.ph/5hlDA

[6] https://x.com/taylorogan/status/1859146242519167249


Thanks. I really appreciate the transcript! I'm still reading it. This is a really good and thoughtful talk. Thanks for sharing it!


Great article! This was similar to my experience in ad-tech, though obviously more mundane. It's almost like people had to create artificial problems to avoid thinking about what they were actually building. Even the language was filled with euphemisms and acronyms, I suppose to put people at a distance from what they were building.

Even when there's nothing morally objectionable, as at other places I've worked, the product is so worthless that people fall into the same patterns of behavior.


Exactly the same, down to the industry! My job was as a simple force multiplier for account managers and sales folks, automating tasks to allow them to scale their projects and help more clients. It was all cool helping them get more ads out the door for Toyota and cool authors, weird when it started being for "One weird trick" ads, downright wtf when I started helping boost SERP pages, and finally, once we started promoting gambling apps, I just couldn't take it anymore. It was great money. I miss that. And helping all these very kind, friendly people make numbers felt useful. But man, once I finally got un-distracted (after ten years?!) I had to get out of there.


I skimmed through this because it was really obvious where it was going.

> The unifying factor in all of the stories I’ve told is that a developer wrote the code that did these unethical or immoral things. As a profession, we have a superpower: we can make computers do things. We build tools, and ultimately some responsibility lies with us to think through how those tools will be used.

No, it does not. It still boggles my mind that people actually seem to have this model of morality. If everyone thought that way, the world would not have nuclear power, to name just the most obvious example that comes to mind.

Nobody is "responsible for" making it possible for others to do evil. That's their choice.

(To expand a bit, because clearly not everyone shares my intuition: if this sort of moral failing can attach at one degree of separation, logically nothing prevents it from attaching at arbitrarily many degrees of separation. If I become culpable because my software "is used to kill people", then any person or company who enables me to write the software faces the same judgment. Vim is used to write software that is used to kill people. Do you really want to put that on the guy[0] who tried to get you to donate to humanitarian aid in Uganda?)

(The same logic is used by activists to demand participation in various seemingly arbitrary corporate boycotts - and people I care about have been harassed because of it. The thing about "following the money" is that it goes literally everywhere.)

[0]: https://en.wikipedia.org/wiki/Bram_Moolenaar


I think there's a lot of daylight between "made a text editor that in turn was used to make software, and that software in turn enabled killing machines" and "the core functionality of this software is to facilitate the killing of specific people by finding their cell phones". In the former case, the lethal (and indeed, military) application is pretty disconnected from any of the design decisions that lead to its creation. In the latter, failing to recognize that the lethal application drove all design requirements ultimately hampered the delivery of the product.

To an extent, I'm with you. Is a brick maker responsible if someone uses one of his bricks to commit a murder? Obviously not. On the other hand, if you work on, say, proximity triggers on missiles and never stop to wonder who your missiles will be used on, I'd say you've abdicated a core ethical responsibility.

I don't have a good answer on where the line is, and I've thought about it a lot.


The quote was "ultimately some responsibility lies with us to think through how those tools will be used". It's saying that we as builders have a responsibility to consider and be mindful of the impacts of what we're building. It's not saying that the maintainer of wget is directly responsible for a system used to exfiltrate data from a database of political asylees.

To take your same point to its logical conclusion, no one is responsible for evil aside from the one who pulls the trigger.


> It's saying that we as builders have a responsibility to consider and be mindful of the impacts of what we're building.

Yes, and my argument is that building those things doesn't have the "impact" referred to; using them does.

> To take your same point to its logical conclusion, no one is responsible for evil aside from the one who pulls the trigger.

Tools, by their nature, have explicitly designed uses, and potential cascading consequences of that use. This is one of the reasons that open-source licenses include a disclaimer of warranty: for the legal protection of the author, not just from claims by users, but claims by third parties injured by those users.

As far as culpability goes, I'm much happier drawing the line in a place where, if you keep applying the same logic you used originally, it will stay put rather than moving inexorably further. Contrary to what many have tried to tell me, intent does matter, a whole lot.


Good rebuttal, a lot of absolutes in philosophy you can just extrapolate out to absurd conclusions on both sides.

As an aside, one thing I notice a lot on internet forums is the tendency to immediately jump to these two extremes (often as a form of strawman). Might be projecting here, but I think it's an attempt to get internet points and seem smart, e.g. debate culture. Though I could totally see that maybe everybody can intuitively see these absurd conclusions, so it follows that there will probably be one genuinely disgruntled reader who finally reaches their breaking point. I know I've certainly made similar comments.

How do we navigate this line? Ultimately I think the answer can only lie in human experiences, and thus I'm glad that the original article exists. It's another datapoint. (though this spawns a whole other discussion about how we get our data)


I think both things can be true. A person can take the position that developing something which will be used to directly kill someone is wrong, and thus refuse such jobs. A person can also take the position that software itself is morally neutral, and how it is used is the choice of the user.

Both of these are perfectly valid lines of moral reasoning. Which one you choose is going to be a personal decision. Debating about which one is more "right" devolves into a philosophical discussion.

Does it "boggle your mind" that some people choose to be vegetarians?


The point is that "which will be" is load-bearing. The idea that someone would feel moral qualms about how software is used by the military is incongruous with signing up for the job in the first place. The military does in fact do a lot of things that are not killing people, and presumably could also find a use for the technology described that does not involve killing people. Like, say, locating allies for a rescue effort.


I'll also say that there's a huge difference between an IT specialist in the military and a Tier 1 Special Forces operator. One may facilitate killing, but it's not a core competency, and you can spend an entire enlistment term doing absolutely nothing that might suggest your relation to a lethal machine; indeed you can do your job and never think about what you might be facilitating. The other exists to kill, and occasionally explicitly murder, and failing to make peace with that reality makes them less effective at their job. Both are ultimately necessary.

I understood the article as reminding folks to actually think about what you might be facilitating, and make your choices accordingly.


By this logic, software developers of life saving medical technology should have zero pride or comfort in knowing that their work directly saved lives. And the janitor at a hospital should feel no pride for helping to clean a place that makes people feel good.

It's a spectrum.


Of course I'm not supposing that software developers are utterly disconnected from the use of their software. But what matters is the designed purpose of the software, not the (even reasonably foreseeable) motivations of any particular client.

The code described at the beginning of the story locates objects, which are presumed to be in the vicinity of a person, whom the military might then kill, after presumably having used that location software without the victim's consent. By this standard, we should hold Tim Cook responsible for every case of stalking involving the use of an AirTag.


What it sounds like you're opposed to is even the consideration that your work can be used for purposes outside of what they're created for.

And um, yeah, the AirTag release was bad enough for obvious reasons and Apple had to make significant privacy changes. Almost as if they forgot to consider that their work could be used for purposes outside of what it was created for. Could've been safer from the beginning.

Edit: turns out Apple is being sued over AirTags' use in stalking [0]. So I'm pretty confused by your point, since presumably Cook would care about Apple being sued. Will he personally be criminally liable? Doubt it. But it's not like he's blameless or ignorant of the situation.

https://apnews.com/article/apple-airtags-stalking-lawsuits-e...


>What it sounds like you're opposed to is even the consideration that your work can be used for purposes outside of what they're created for.

Others are free to consider whatever they like about possible uses of my work.

They are not justified in blaming me for those uses, and I am especially unsympathetic to arguments that I ought to blame myself.


Can I ask -- what do you do for work? My gut tells me that you're younger and still trying to figure out whether your means of living is good or justified. And you haven't exactly found that answer yet.


I find it ironic that a US citizen gets to enjoy the privilege of making an ethical choice of whether to support their military thanks precisely to the fact that the US possesses the most advanced military force out there.


Everyone's got their own ethical standards.

This guy won't touch anything related to the military.

Palmer Luckey is part of society's "warrior class that is enthused and excited about enacting violence on others in pursuit of good aims" [1].

I'm somewhere in between.

1. https://techcrunch.com/2024/10/01/palmer-luckey-every-countr...


Needs a [2017].

More substantively, my company is a contractor and has publicly committed to neither working on 'weapons' nor 'destructive financial systems.' Both terms are nebulous but we generally just decide en masse as a company (there are only 20 people) what does and doesn't count. I don't necessarily care for these particular lines in the sand but it's important to have some kind of process for determining and thinking about things, which I'm happy we do.


All the software engineers in places with mandatory conscription like South Korea, Singapore, Israel, Finland, Switzerland are in shambles.

Someone is complaining about writing software that could be used in implementations which kill people, whereas men in these countries straight-up don camouflage uniforms, helmets, and armour. They are given rifles, and are taught how to shoot and kill people, and sometimes even get to kill people.

Cut out the middlemen and all the abstraction, I suppose; program in C and assembly rather than in Python.


The moral of the story is not "always be cynical and ridicule defense guys".


I don't really see the incongruity here; somebody doesn't like helping people kill people, while that is mandated in other countries. What's your point?


I feel we should attempt to hold decision-makers accountable instead of blaming the field workers for unethical practices.


I'm not sure if I would be comfortable working for the military / a contractor directly, but I'm more ambivalent than unsure. Certainly each individual has to make their call.

I do wonder about this whole premise. Let's say your country is under attack by an oppressive neighbor or known bad terrorist organization?

Would people feel differently?

I get the same weapon could be used differently / against less bad targets, but it doesn't change that they can also be used to defend / attack people who genuinely want to do you harm.

I do not see this as a cut and dry call.


Yes-- and if your enemies are making the same tech you don't want to fall behind. That's also why we created the atomic bomb, so I'm not sure how to feel about it.


I used to work at one of the first deep learning image recognition startups, it was all going great, we raised a ton of money, hired a lot of people and built some amazing technology. The founder was on the covers of fancy business magazines and it looked like we were the next big thing.

Then revenue growth stalled and we shifted to enterprise consulting, which caused a lot of the early employees to quit (including myself). A few months after I quit they took on the Maven project that Google and Amazon employees protested against and in no time after that they had an office near DC with a bunch of ex generals on their staff. Now they work on secret projects that nobody still working there can tell you anything about.

Moral of the story, you can be a well meaning person and work on cool tech but you have no control over how it will get used in the future. The same thing that happened in CV will happen with all of these LLM startups once growth stalls and they're unable to raise another up round to cover their huge expenses.


This is a really interesting and thoughtful piece, although I think the repetition comes a little too often.

Still the example of getting focused on engineering problems and not what the code will be used for lands well.


So when can we start a guild of Software Engineers?


I really don't want to get into the meta of what that guild "should" be doing / telling other folks who they should work for.

My local teacher's union has some affinity groups and one group decided to invite a speaker who in the past has had some very strong words for people of a particular religion and point of view. Needless to say it is a mess of a problem.

I'm not sure I'd work on a military project, but I also wouldn't be comfortable with a guild telling me that I or anyone else can't / shouldn't.


Engineering already has a professional association that tells its members what can and cannot be done. A guild would also have the power to place limits on what an employer can or cannot do, set minimum/floor rates for standard services, and generally improve labour conditions for its members. That's exactly what SAG and AFTRA achieve, and it's worth looking into how they work.


That won't solve this problem. The military and government will do what is necessary to make sure the Guild of Software Engineers is officially completely fine with military applications.

This is something that only individuals can do.

And there will always be individuals willing to write this code, and that's not anything that can be solved by any number of Guilds either. The military will pay anything it takes to make it worth working outside of the Guild's rules.


Being able to say no with legal force is helpful even outside defense. I've discussed one of my previous roles unintentionally building an employee surveillance tool, and another codebase I contributed to is currently on the front page as a distant result of hurting people.


We can start with private enterprise, with ad tech, consumer data harvesting/surveillance and dark patterns targeting engagement and addiction. The military can be the last bastion to tackle once the guild has strength and can bargain to enforce guardrails. I'm looking at how SAG is so effective at navigating a multi-billion dollar entertainment industry filled with behemoths like Disney. If they can do it, we can too. At this point, the actors are looking smarter than software devs.


There are already professional societies for other engineers (IEEE for example), but I think there are engineers that work for military applications.

It is a complicated topic I think; on one hand lots of people don’t want to work in those types of applications. On the other, we have a decision-making process for how violence is applied. If we are going to start inserting more veto points in there (which I think is a good idea), I don’t know that “has some rare technical skill” would be the first qualifying characteristic I’d look at, right?


Software engineers are free to join IEEE.


Groups like IEEE and the ACM don't do what professional associations / unions / "guilds" do... help regulate / moderate employer behaviour in a given industry.

Of course those associations are always a mixed bag, often fostering some mediocrity or being themselves corrupt.

But definitely feel like there's some aspect of that missing in our sector.


Those organizations don't have teeth, neither does the state-based PE except to regulate members. Where is the organization that regulates employers?


I remember an anecdote about this topic where someone (I think it was Douglas Engelbart?) was giving another a hard time about their military software being used to kill people. At which point the other person mentioned their software used a mouse (which Engelbart invented). Obviously a complex topic with lots of considerations that I venture won't be solved in this HN thread, but good to discuss and think about.


There's a pretty meaningful distinction between something (relatively) neutral that's occasionally used to support unethical purposes than something that can only be used for unethical purposes, plus shades of gray in-between.

I don't think there's a reasonable argument that e.g. Keurig engineers need to grapple with the ethical dilemmas of murderers and terrorists using their products for example.


I think a course would be a helpful good step. I remember watching Yale’s introduction to ethics class.

Something like that but then applied to software engineers specifically


I don't know. On one hand, it is true we are the last line of defense. I do wish more of us said no to putting tracking on websites, among other things. But on the other hand, a random software engineer's sense of ethics might be different from yours. Do we want activist software engineers?

I mean, field of study changes perspective. Recently, there was that backlash against a release of a 1-mil post dataset of bluesky users by some people on bluesky. https://eugeneyan.com/writing/anti/

I personally didn't think it was a big deal because the API and dataset is already public. And also if your text isn't available to LLMs, then your values are effectively invisible to the future, as society depends more and more on LLMs. To me, I think it's better that you get your thoughts out there into LLMs. As more of society relies on LLMs, culture wars will be fought not just on the internet, but in the embedding space of LLMs. So, given that, would these people want me to decide for them?

I dunno. Just as journalism lost the trust of the public by deliberately taking an activist stance, so too would engineering lose public trust by deliberately taking an activist stance. (You may say tech already lost public trust, but I think of tech as the industry and engineering as the profession).


Maybe we do need an Engineer's Hippocratic Oath, and a means to police our own.

Go look at the latest "Ask HN" thread - there are two projects proudly trumpeting their ability to allow applicants to cheat on their interviews. On the FAQ of one of the apps, a common question is apparently whether it will work during proctored exams.

https://news.ycombinator.com/item?id=42375440

https://news.ycombinator.com/item?id=42383928

With this kind of loose approach to morality, is it really any surprise that the world is filled with devs who build involuntary telemetry, spyware, etc.?


It just doesn't make sense. So it is immoral for engineers to design weapons? That's like saying it's immoral to butcher an animal. Sure, go ahead, ban butchering animals in your society so everyone can feel good about themselves while they import their meat and thereby outsource the butchering instead. Or do you say that everyone just stops eating meat? Fine, that works, but here the analogy breaks, because everyone won't just stop needing weapons.

What I can agree with is that placing a moral cost on engineers designing weapons in some way increases the actual labor cost of weapons R&D, I guess.


They didn’t mention weapons in their post. Which, given the focus of the article, seems intentional. Edit: specifically they seem to have gone for lower stakes examples.


The ACM has a code of ethics to which members are supposed to adhere: https://ethics.acm.org/code-of-ethics/


Don't use a moral quandary to inject your propaganda. That is how a snake moves.


The author talks about thinking about consequences, implying bad consequences, but there are also good ones. Making the Internet faster, say, makes it possible for bad actors to do bad things more quickly. But also the opposite. Tracking phones more accurately can help the military find targets, but could also find a man down, someone who is being protected, or a criminal.

I applaud anyone who imbues their work with a sense of ethics and responsibility, but most tasks, inventions and creations arise in an environment filled with muddy grays, rather than being clearly good or evil.


> Do we want activist software engineers?

Regardless of whether "we" want them, we already have them, and that genie isn't going back in the bottle. In my view, every person has to decide their ethics and then act accordingly. The alternative is forcing people to accord with the principles of the employer or government or guild or whoever. There just isn't a coherent supermajority morality anymore - and no real path to one.

I don't have a solid object level opinion on the llm stuff. For the most part it seems like a tempest in a teacup to me. Some things will get easier, some things will get more annoying, and everyone will still have to get up and put on underwear one leg at a time.


It's amazing how many tech people, like the author of this piece, still believe that their phone is constantly listening to them to target ads.



I don’t think it’s that surprising. Here’s two true statements:

I don’t trust my phone on a fundamental level. I also have mentioned things out loud and gotten ads afterward about them.

On the one hand, those two statements absolutely don’t prove that my phone is listening. Just because I don’t trust my phone doesn’t mean that it’s being used maliciously, and the ads I’ve seen might just be a coincidence.

On the other hand, those two statements together are definitely suggestive. They feel suspicious. It would be so easy for Apple/Google/Samsung to secretly listen and make a boatload of money by telling me a lie. Lord knows they’ve lied about other things.

So yeah, balance of evidence is that phones probably don’t do that. But don’t be surprised when people suspect that they do.


Your phone doesn't need to listen to you to know your interests. The ad algos are smart enough to figure it out from your browsing behavior.

Also, it has happened a few times that I talked about something and an hour later it came up on broadcast TV. People over-fit when looking for patterns.


This was 2017. Meanwhile, the geopolitical situation in the world has changed quite a bit. Unfortunately, the ethics already feel outdated.


I wonder how many of the people back-patting the author and each other work for companies whose business models entail egregiously violating users' privacy or destroying the economic security of low-wage workers? How many work on exploitative AI projects--especially AI art--that rely on training data used without permission or compensation? Yet suddenly, when it comes to bettering the United States' and her allies' war-making capabilities, ethics become an issue?

Thank god during the Cold War we had real tech companies, like Digital and HP, staffed by real engineers whose heads were screwed on straight and whose values weren't perverted.


Long story short, the author lives peacefully and comfortably in a country defended by others and has ethical issues helping those who protect him. I don't see eye-to-eye with such people. I joined the military (the infantry) straight out of high school out of a sense of duty due to my upbringing. I knew what the job entailed but never thought about it much. It was peacetime, too, so that made not thinking about it easy. Then, the war on terror (9/11) happened, and the reality of the job blew up in my face (almost literally). It was a challenging experience to say the least, but I would do it all over again if given the chance.


To steelman the author's argument and as someone who's debated about this a lot with myself, what's your perspective on the argument that goes like this: "helping those who protect us" would be fine if that's all that those-who-protect-us did; but in addition, they go start unnecessary wars in foreign lands, kill innocent people, surveil their own population etc. Because they far overstep the reasonable bounds of keeping peace and defending the country, and because the "help" I'm giving them will go directly to that extra stuff that I abhor, I can't in good conscience help them. Would that argument help you to see eye-to-eye?


> So how can we “carefully consider potential impacts”? Honestly, I don’t have any answers to this. I don’t think that there really is a universal answer yet, because if we had it I have to believe we’d not be building these dangerous pieces of software.

So, there is no general answer to the problem, but you can always approximate it by asking: are you working for the US defense/intelligence industry or one of its contractors? If so, you are almost certainly building something harmful.


Where do you draw the line? Is it wrong to work on food preservation technology, because it gets used to make MREs that get fed to soldiers who kill people?


I'm not sure whether you're asking this question in good faith, but I think as with any moral spectrum, you draw the line where your best reflective judgment tells you to?


No. One of them is feeding people, all people. The other is actively working on systems designed to kill people, and only to kill people.


It's nice that the author can hold opinions and write a blog post like this and not be disappeared. I wonder why that's possible...


Though oddly Boeing whistleblowers keep dying.

Hamilton made a much better case than I could, in Federalist No. 8, for viewing the military as a necessary evil, neither to be loved nor feared.


Great article, and very important things to consider. As a practice, I take my holidays at the end of each year to consider my career, my life, and my worldview. A long holiday is an opportunity to refocus from work to introspection, and allows you to find comfort with your place in the world. The primary things I ask myself near the end of this introspection period are:

1. Is the work that I am doing advancing my personal goals?

2. Is the way I spend my time good for my mental health, my physical health, and my family?

3. Is the work that I do aligned with my ethical worldview?

4. Am I being remunerated appropriately for the value that I am bringing?

5. Am I still learning something each day?

6. Are my family happy with the outcome of this year and the way I contributed to it through my work and otherwise?

If I get a no to any one of these questions, I start looking for a new job. I've moved across country and across the world, and I've been at companies for a long time and a short time throughout my career (that's nearing 20 years at this point). I've switched industries and areas of technical focus many times, and I've even switched entire career paths by simply asking myself these questions after a relatively short period of introspection.

I encourage everyone to consider the ethics of their work, but also to consider regularly whether or not your work is advancing your goals, making you happy, making you healthy, and supporting your family in the way they need to be happy and healthy. Ethics is a core part of this. If you are doing work you don't agree with for purposes you would not agree with, it can be traumatic to your own psyche and deeply affect both your health and happiness, including how you treat other people and especially those closest to you.

Take some time, the holidays are coming.


The undiscussed counterpoint is that a new weapon can also save lives if used wisely. Nuclear stockpiling is a darn effective war deterrent, and has prevented an uncountable number of hot wars.


It’s too early to conclude that nuclear weapons save more lives than they take.


The boiled-down argument being 'Pax Americana' must end before nuclear stockpiling can be called an overall success?


I’d say we need to reach the point where nuclear Armageddon is no longer possible. Even then, it’s possible we got lucky and nukes were actually more dangerous than not having them, but we’d at least be able to say definitively that their total kill count is only X, for whatever X turns out to be.


FWIW the Fasten link (goes to https://fasten.com/us/cities/austin) is broken


Thanks, they went out of business in 2018. I've added a note to that effect and removed the link.


TL;DR:

The author recounts their experience as an intern building software for the Department of Defense, unaware of its intended use. The software, designed to locate WiFi signals, utilized algorithms like R^2, Gaussian estimation, and Kalman Filters to improve accuracy and tracking capabilities. Despite the technical intrigue, the author acknowledges the software’s purpose was to aid in killing people, highlighting the ethical implications of their work.
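
(For anyone curious about the tracking math just mentioned: here is a minimal, hypothetical sketch of a 1-D Kalman filter in Python, smoothing noisy position readings into a steadier estimate. The function name and noise constants are illustrative assumptions, not taken from the article.)

    # Minimal 1-D Kalman filter: fuse noisy position measurements into a smoother track.
    # process_var and measurement_var are illustrative guesses, not values from the article.
    def kalman_track(measurements, process_var=1e-3, measurement_var=0.5):
        estimate, error = measurements[0], 1.0   # initial state and its uncertainty
        track = [estimate]
        for z in measurements[1:]:
            error += process_var                  # predict: uncertainty grows over time
            gain = error / (error + measurement_var)
            estimate += gain * (z - estimate)     # update: blend prediction with measurement
            error *= (1 - gain)
            track.append(estimate)
        return track

    # Example: noisy readings of a target near position 3.0
    print(kalman_track([2.8, 3.3, 2.9, 3.1, 3.4, 2.7]))

Real geolocation systems run the same idea in more dimensions and with a motion model, but the gain-weighted blend of prediction and measurement is the core of it.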

The author recounts a past experience working on a project for a Department of Defense contractor, where they were tasked with creating a tool to locate phones, ultimately realizing it was intended for targeting and potentially killing individuals. This experience, along with examples like a deceptive quiz and Uber’s “greyball” feature, highlights the potential for code to be used unethically or even illegally. The author emphasizes the importance of developers considering the ethical implications of their work, as software increasingly impacts various aspects of society.

Developers have a responsibility to consider the potential misuse of their work, as unethical applications can have serious consequences. While there is no universal solution, developers should critically evaluate project requests, consider worst-case scenarios, and prioritize ethical considerations over deadlines. Ultimately, developers must decide whether to build a product, even if it has the potential for misuse, or to prioritize ethical concerns and potentially face consequences.

Code can have serious consequences, even death.


China, Iran, Russia, ISIS and others would be happy to get their hands on this tech and kill you.


Chris: So, I talked to him.

Mitch: You did?

Chris: Yeah, and he used to be the number one stud around here in the 70’s. (whispers) Smarter than you and me put together.

Mitch: So what happened? Did he crack?

Chris: Yes, Mitch. He cracked, severely.

Mitch: Why?

Chris: He loved his work.

Mitch: Well what’s wrong with that.

Chris: There’s nothing wrong with that, but that’s all he did. He loved solving problems, he loved coming up with the answers. But, he thought that the answers were the answer for everything. Wrong. All Science no Philosophy. So then one day someone tells him that the stuff he’s making was killing people.

Mitch: So what’s your point? Are you saying I’m going to end up in a steam tunnel?

Chris: Yeah, I am.


absolutely, you need focus when you work for the department of war


[snark removed]


Please keep to the HN guidelines:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


The reality is nobody is hired to decide why things are done. Under capitalism, the why is to make profits, to increase shareholder value, if you will. That's it.

You don't plan what you don't control, you don't control what you don't own. For the vast majority (yes, including engineers), the only thing we own that is a part of the whole process of capitalist production is our ability to do some kind of work. When that ability is sold (for a wage/salary) to whoever possesses the resources to decide the why, we forego the right to decide how it's used. Just like when you sell someone a chair for example, you don't get to tell them where and how they can use that chair.

You can refuse to get or do this or that job, but you need to have a job. At the end of the day, we all need food and shelter to stay alive and the bills aren't gonna pay themselves. So ultimately, someone, among the vast majority of us who don't have a choice, will also have to do the job you refuse. So we shouldn't delude ourselves into thinking we have some sort of "superpower". We don't. That is, not unless we can collectively withhold our ability to work and force the hand of the minority that actually decides why things are done.

A society where those who do the work decide why and how it's done, is a society where the working people own the resources. That is communism. That world is possible, but it won't build itself, you have to tirelessly fight for it. Look over at marxist dot com if you want to help out [1].

That doesn't mean that in the meantime we shouldn't strive to find meaningful work that fits with our values. We should, and whoever manages to find that, good for you! You've been lucky (for now), but you won't be forever, and even if you are, the vast majority of people with pretty much the same skills and conditions as you actually won't be. We actually have to transform society if we want to secure that for ourselves and for everyone else.

[1] https://marxist.com/about-us.htm


Don't get distracted by the word communism!


Attributing "capitalism" to the increase of value to "shareholders" is a mischaracterization of capitalism. It's the only system that has worked for one singular reason:

People want more value for more work. If I work 12 hours picking potatoes and have to give away 3/4 of them "according to need," I will refuse to work. If you create fake jobs like "bolt counter" to ensure 100% employment so the means can justify the ends (as communism believes), you end up with a collapsing economic system.

The current iteration of late-stage capitalism is not capitalism _at large_. It's feudalism cleverly disguised as an egalitarian economic system. The solution to this is not communism; it's undoing 100 years of stock-market-oriented business design. We don't know what "late-stage" communism looks like simply because a revolution occurs several hundred years before it reaches that point. Not even the premier implementers of communism, the Russians, could keep their system.

It doesn't work. In any country, in any system, with any group of people larger than 4.


That's basically the definition of capitalism: that shareholders (capital) spend their money to generate profit for themselves.

Like you note, though, we can have a free market and democracy without that dynamic.


In a free market, you aim to increase your market share and push out your competitors. That's how monopolies are formed. Not by some perversion of the "pure" principles of the free market, but as a logical outcome of them. Turn back the clock to whenever you want in the course of capitalism, and the same conditions with the same logic will drive you back to the same results.


I agree, but there are historical examples of markets in non-capitalist systems. Graeber's Debt makes a fun point that the two are even at odds because, like you say, capital really wants monopolies, which is the opposite of a free market. So by extension, if you support a free market you must on some level reject capitalism.


I find it interesting and somewhat relieving how many normal people are 'pro capitalism but anti shareholder', which means that it's mostly an outreach and organization problem. If you are anti-shareholder, you're by definition anticapitalist; you just don't know it yet.

Which is why the powers that be have to pass things like this: https://www.govtrack.us/congress/bills/118/hr5349/text/eh to keep people confused, and to conflate being opposed to shareholders (capitalism) with support for authoritarianism etc.


> The current iteration of late-stage capitalism

Don't fall for the trap of accepting "late-stage capitalism" as a valid concept - it's a made-up term that has no concrete or meaningful definition and is used by Marxists to constantly move goalposts and impute bad things into our functional (if suboptimal) economic system.


[dead]


> Every sentence in this reply is a strawman.

Zero evidence or explanation provided, meaning that it's far more likely that GP is making good points that you cannot respond to.

> here's an FAQ [1] that can help clarify what terms like capitalism, communism, feudalism, profit, etc. actually mean

This is a lie. Marxists intentionally repurpose and distort language, especially language around markets and governance structures, to deceive others into falling for their murderous and completely infeasible ideology. These are not what those terms "actually mean" - those are how Marxists use those terms. Claiming that those are what those terms "actually mean" or that that's how other people use them is false and deceptive.



