This is a general trend you can see at work in lots of venture-connected industries, and it's not just about surveillance.
Lots of people take gigs working on 'cool', 'interesting', terrifyingly profitable projects at well-funded, fast-growing startups that the press and investors both love. It looks great on their resume, they get to tell their friends about the cool work they're doing, and they learn lots of new stuff. Great, right?
Of course, on the other hand, lots of these startups are directly preying on the uninformed, the mentally ill, and groups that simply can't make good decisions, like the young. Over time, controls and safeguards eventually get added to reduce the damage done, stop those 'top customers' from paying TOO MUCH, stanch the blood loss from horrible churn, and counteract the consequences of short-term-focused designs. Ideally, by the time these fixes start happening and the consequences start destroying revenue, these startups have IPO'd or been acquired, and the founders and investors all exit happy and wealthy to move on to the next gig.
Here's the thing - that whole last paragraph was describing games startups, not the obvious evil of surveillance companies. It's almost like we gleefully allow ourselves to be blinded to the consequences, both short-term and long-term, of our actions, and pretend that whatever we're working on is innocent and harmless, just because it isn't directly selling rifles to terrorists. It's easier not to think through the consequences: not to think about where our money is coming from, who's buying our products, what using our products does to people's lives and to their relationships. Not to think about what those viral acquisition pathways do in the long term, what those retention and churn metrics really mean, or WHY those metrics are going up or down. Most people don't think about what the unanticipated consequences could be of storing users' address books, or their call histories, or leaving a camera/microphone on all the time, or automatically logging conversations. It's easy not to do those things.
Because at the end of the day, the person with the clean conscience doesn't have 50 million dollars, and if you somehow come face-to-face with the consequences of your actions, well... lawyers don't cost that much, do they?
It's easy to look at the whole venture-funded startup industry from this perspective and find the big picture kind of exhausting. Most of the people starting these companies, and working at these companies - they're not objectively bad people! Their actions, in a short-term context, are wholly defensible - there's no reason to question them. But somehow we keep ending up in this situation, looking back and saying, jeez, that sure hurt a lot of people, didn't it? Or wow, how did they do something that stupid? Why did anyone invest in them? Why was anyone stupid enough to go work at that company that did those awful things?
Aaron Swartz looked back at reddit. He realized the company that made him rich was now nothing but a mechanism to create addictions to meme-bytes, waste time, destroy relationships, normalize sexism and racism, and literally rekindle the Neo-Nazi movement (reddit now has the biggest concentration of Neo-Nazis on the internet--the fanatical kind that even Stormfront won't tolerate on their website).
If this is even the fate for something as benign as reddit, how much worse will it turn out for people working in Big Data or directly on technologies like facial recognition?
How many Silicon Valley technologists will look back at their work in 50 years and have the same kind of feeling that Nazi scientists had, that Manhattan Project physicists had, that the inventors of mustard gas had?
Whether technology will be used as a weapon in the hands of a selfish elite, or as a tool for liberation for the impoverished and underprivileged, is being decided now by how these technologists use their time.
Yes, when you create a venue for free speech, it will -- gasp -- be used to express various opinions and ideas you will not agree with. So?
The Nazis had no trouble doing their dirty work the last time without Facebook, Reddit, or Twitter. These days, they have more to lose from exposure than they would ever have to gain. The antidote to bad speech is more speech, and that's precisely what Reddit is good for.
History does not support your naive view that allowing criminals to freely congregate, organize, and recruit stops their ideology.
The Neo-Nazis on reddit have publicized the tactics they're using to normalize racism across the internet. According to watchdog groups the Neo-Nazi movement has grown tremendously since it was able to start recruiting and spreading that hate ideology online.
That is utterly ridiculous. For one, there are boards out there that are about as popular as reddit that are also far less restrictive in what they allow (such as any of the many popular American imageboards out there).
Reddit does not "normalize sexism and racism" or "LITERALLY rekindle the Neo-Nazi movement" any more than YouTube does, if you go by the utterly inane highly rated comments on thousands of YouTube videos. Does that mean YouTube, and Google, are trying to usher in a new era of racism and intolerance?
> literally rekindle the Neo-Nazi movement (reddit now has the biggest concentration of Neo-Nazis on the internet--the fanatical kind that even Stormfront won't tolerate on their website)
I guess it's a legal problem, not a technological one. Neo-Nazis from European countries move their online activity to the 'land of the free' because around here their websites are banned, many of them would be hunted, and some possibly locked up (depending on the case and the specific country's law). It's one of those cases where free speech backfires in your face. Not that we don't have those problems around here, but at least spreading extremist views is treated as a criminalized pathology, not a protected right.
> Whether technology will be used as a weapon in the hands of a selfish elite, or as a tool for liberation for the impoverished and underprivileged, is being decided now by how these technologists use their time.
Most Americans would tell you it's not a legal "problem" at all, but a clear homage to time-honored First Amendment rights.
On the other hand, most of my friends in the E.U. are quite willing to give up freedom of speech for neo-Nazis and extremists. They find it a worthy trade-off to avoid the possibility of widely spreading that form of hate and filth again.
I find it all kind of amusing, how which rights we find inviolable and which we're willing to bend a bit depend as much on our national origins as on anything else.
On a funny side note, a few years ago Hitler memes were very popular here in Poland, and a few clubs even advertised Saturday parties with them. It was hilarious, but shit hit the fan when offended viewers reported it to the mainstream media and the topic made the evening news, with lawyers arguing over whether it counted as promoting the ideology, etc. Borderline humor problems, lol. Recently a newly opened restaurant had its business shut down and evicted, all because of the name - Fritzl's Basement.