Scientists who make apps addictive (1843magazine.com)
227 points by sdeepak on Nov 16, 2018 | 46 comments


The development of online software over the last 10-15 years is a case study in feedback loops. It really has flipped the paradigm.

For Lotus Notes, success is when a user makes the software do what she wants, like sending an email. In the modern equivalent, the software (designer) succeeds when the user does what the software wants, like sending an email.


This is a symptom of detailed metrics. When you're responsible for the "send an email" feature, you want to show you're doing your job. Getting more people to send email is an easy way to show progress without having to understand people's actual experience with the feature.


> Getting more people to send email is an easy way to show progress without having to understand people's actual experience with the feature.

That makes my gut turn a little bit. It is a blind and abusive dynamic and I think it is horribly wrong.

For a while, I've been pondering what ethically designed software looks like from a user's perspective. And for modern consumer software, I don't have any answers.

The obvious bits are "it allows the user to do what they need" (i.e., send an email, draw a picture, communicate with friends, etc.), but I don't know how to disconnect the functionality from addictive design elements.

Does that mean a HN style interface, where you can look at what threads you've started, but with no notifications? Or something else?


> I don't know how to disconnect the functionality from addictive design elements.

One way to think about it is, does the software help the user achieve their goals without trying to change what their goals are? Even this gets fuzzy when you think about long-term and short-term goals. An exercise app that manipulates me to frustrate my short-term goal of being lazy and help me reach my long-term goal of being fit might be ethically good.

But there are many clear cases where the app's job (just like much of advertising) is not to give me what I want, it's to control what I want and make me want what the app developer wants me to do.

One way I measure this informally is by asking how I feel after I use an app and put it down. While I'm using it, it always feels good, because that's how these programs are designed. But many apps leave me with a lingering feeling of regret afterwards, exactly like the feeling I get after bingeing on junk food or drinking too much the night before.

Those are the apps that are a problem.


I don't think it's about "showing you're doing your job", but it very much is about metrics.

Imagine your metrics are perfect. Think of what your goal is: to get that sweet $$$. Using metrics, you'll tune your app towards extracting as much value as possible from the customer. Note that this is different from providing value to the customer in exchange for money - the latter is just an early stage of the former, and as you optimize further (using your perfect metrics), the only way to squeeze out more profit is to start making the user's life worse.
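A toy sketch of that divergence (all names and numbers invented for illustration): if A/B tests are decided purely by a "time spent" metric, the winning variant can be the one users like least.

```python
variants = [
    # (name, avg_minutes_per_session, avg_user_satisfaction_1_to_5)
    ("plain_inbox",      4.0, 4.5),
    ("badge_counts",     7.5, 3.8),
    ("infinite_scroll", 11.0, 2.9),
]

def pick_by_engagement(vs):
    """Selection rule driven purely by the 'time spent' metric."""
    return max(vs, key=lambda v: v[1])

def pick_by_satisfaction(vs):
    """Selection rule driven by what users actually report preferring."""
    return max(vs, key=lambda v: v[2])

print(pick_by_engagement(variants)[0])    # infinite_scroll
print(pick_by_satisfaction(variants)[0])  # plain_inbox
```

The two selection rules agree while engagement and satisfaction still correlate; once the product is heavily optimized, they pull apart, which is the dynamic the comment describes.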


I would say it's a symptom of bad management. Why do you need to "show you're doing your job", if the feature clearly exists and works? Why do you need to cause it to be used more, too?

I don't see this happening in other industries. Do people responsible for radios or sunroofs or seat belts in cars take action to cause their features to be used more?


Well, people pay for Lotus Notes, so a user-centric UX helps customer retention and reduces the burden on customer support.

The modern crop of "addictive" apps are usually free so having "time spent" as a KPI helps bring in ad revenue, which is good for business.


Nicely said. This really summarizes how I feel when I use a lot of modern websites. I often feel manipulated. Instead of me driving the interaction the website tries to impose its opinion on what I need or want.


Every UI presents an opinion. In many cases, that opinion is unclear, muddled, or counterintuitive. As developers and designers pay more attention to the UX, that opinion becomes more clear and unilateral. In the end, the decision to like a piece of software or website depends on how well your expectations align with the opinion that the UI expresses.

If you feel manipulated, it means that you did not want to do something, but the UI expressed its opinion in a way that was strong enough for you to notice it.

If the UI expresses its opinion in a way that aligns with your intentions, it gets called easy-to-use, unobtrusive, frictionless, and other nice-to-hear words.


I think a big difference is between

a) tools for professionals/experts, where the assumption is that the user knows best what they want and will spend time figuring out how to get it, and

b) the Instagram/Auto-Tune kind of tools that make everyone feel like an expert without doing the work, by providing very nice results for very little effort - but as a user you will very quickly run into the limitations once you form your own opinion.

Many tools that used to be a) are now becoming b), which is nice because it can increase power and productivity for everyone, but the experts and pros are left behind... What's even worse for them is that these simplified tools are often not created by domain experts...


And by becoming b) the designers of the tools can also inject their agenda which is probably often selling more ads or driving up some metric like engagement. The goal of computers used to be to empower people but now it seems just to push people into doing what companies want them to do.

That concerns me about the progress in AI. It won't be used to help us but mainly to manipulate us.


> which is nice because it can increase power and productivity for everyone, but the experts and pro's are left behind

It's not nice overall, because as those b) tools push out a) tools, it creates an artificial ceiling. Sure, it may be nice to enable a random person to do a new thing for the first time in their lives, but it's not nice if your software prevents those who want to do that thing more than once from being able to do it faster and better. Toys are more approachable than tools, but good work happens with tools; when almost all software tools become toy-like, we have a problem.


> If people could understand what computing was about, the iPhone would not be a bad thing. But because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar.

-- Alan Kay


Look at the path “configuration defaults” have taken. Whenever software has settings or can do things multiple ways, there is usually a default out-of-the-box setting.

Long time ago, the default was often whatever was easiest to program or took up the fewest compute resources: the variable is zero initialized by the OS, so therefore the setting will default to off.

Then, the rule became: the default should be the way most users will want to use the software.

Now, it’s: the default should be whatever causes the user to take the action that the developer wants them to take.

We are also seeing software with fewer and fewer settings. Developers are simply taking away the option and forcing users to do it the way the developer wants. This trend is aided and abetted by the current crop of ‘minimalist’ UI designers who insist that settings are bad and nothing should be user configurable.
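The three eras of defaults described above can be made concrete with a hypothetical settings object (field names and values are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class NotificationSettings:
    # Era 1: zero-initialized, i.e. whatever was cheapest to implement.
    push_enabled_v1: bool = False
    # Era 2: what most users are assumed to actually want.
    push_enabled_v2: bool = True   # notify only for messages addressed to you
    # Era 3: whatever drives the developer's metric.
    push_enabled_v3: bool = True   # also "suggested posts", re-engagement nags, etc.

s = NotificationSettings()
print(s.push_enabled_v1, s.push_enabled_v2, s.push_enabled_v3)  # False True True
```

The code is the same in every era; what changed is whose interest the default serves, and, per the comment, era-3 software increasingly ships these values with no user-facing toggle at all.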


On the bright side, the more restrictive these apps become, the more susceptible they become to competition.


That doesn't really seem to happen though.


These are soft concepts, so I don't think we can be totally right or wrong about them. That said, I think there's more than opinion here. There's agenda too. When a browser or search engine wants to update your defaults, that's more agenda than opinion.

A lot of modern software (FB is the big example) has an agenda: a list of stuff labeled KPI on a whiteboard, stuff FB would like people to do.


I just experienced this with a chess app (lichess) that suddenly would not let me play, on purpose.

Software that doesn't want you to use it? Very stupid.


Fellow user of lichess here, what do you mean by it would not let you play?


> In the modern equivalent, the software (designer) succeeds, when the user does what the software wants

After all, that's what the creators always wanted, sometimes more and sometimes less successfully. Thinking of myself, I found technology much more addictive 15-20 years ago. I cannot seriously imagine myself playing these very stupid online games that are full of add-ons you need to buy and weird, unnecessary instructions to 'guide' the user.

Most really addictive things were created by accident, like Flappy Bird or HN.


Your definition of "really addictive" is only yours. To find the broadly addictive apps, open up top grossing in an app store and count how many places down the slot machine games are.


I just saw this in action: I bought something with PayPal and got an email saying my device is now trusted, but that if this is a shared device or it is stolen, I need to go to place X and do Y. Basically, instead of being asked whether I wanted this, I was forced into it and had to follow the steps in the email to undo it. At least they allowed me not to link my Google account with PayPal, via a small "Not now" link at the bottom.


That's a great point. The rate at which the human mind evolves is clearly not keeping up with software getting more and more intelligent (doing what you want to do with software vs. what software wants you to do). The last 15 years have definitely changed how humanity is evolving, partly because decision makers/people in control couldn't anticipate what was going to happen, and partly because it took time for the general public to understand what is happening. These two phenomena have always counterbalanced each other from time to time. This is probably the first time in history that they have coincided.


Wow, this is probably the most insightful comment I've read in the past 2 years on HN. Thanks.

It is also a giant market opportunity. If there were a video site for tutorials (in every field) that just showed what you want and didn't emphasize "user engagement" (time on site) instead of, "hey what's up youtube justin here and today we're going to take a look at how to (click, snip, tie, step, whatever). Intro music. So last week we looked at something totally unrelated and a lot of you got back to me with a request for how to do this one simple thing. (Montage of comments asking how to do one simple thing.) Well have no fear because Justin is here. All you're going to need is what you put in the search field, and today for this demonstration I'm also going to put in a totally unrelated product from our sponsor in a vaguely related industry. If you want to check out more information check the links below. Okay, let's get started. So, this is what you searched for. Have you ever wondered how to do the thing you searched for? Like this. Okay ready? This. And, there you have it. Now to finish this up I'm going to also go ahead and do this, this, and this, and.. voila. The picture in the thumbnail. Don't forget to like, subscribe, and if you liked this video be sure to tell all your friends. Thanks again to unrelated products for their generous support. Got anything YOU want to see on this channel? Just leave a comment below. And as always, stay beautiful/fantastic/ amazing/smart/awesome/ productive/strong/etc."

I'm not sure what the right micropayment model is for 5 second tutorials but if some video site found it I would put that URL straight into the browser and search there.


Made me chuckle. Spot on. Also why I only go to YouTube as a last resort.


Being able to open up and read the video transcript has saved me hours


It's weird, because we pay with our attention. Would people "pay" for a 7-second video that shows exactly what they want, perfectly clearly, by watching a 30-second advertisement? Maybe yes, maybe no, but those 7 seconds are more valuable than 2 minutes of filler, during which you have to watch the ad anyway. I think there's room for some kind of micropayment for short tutorials (7 seconds), but I think attention (attention to ads, i.e. sitting through one) might not be it.


That could be the path that got General AI started... :-P

don't know if I'm serious or not.

Let that A.I. figure out tutorials based on what people search for.


Maybe a site that uses all existing videos on YT, but jumps directly to the relevant moment?


Hum, mumble mumble... I may have experienced something like that, having become an Emacs addict... But I'm fairly sure no Emacs devs work with such a theory in mind :-)

Maybe I also have a rebound effect against certain platforms/apps/digital jails... How many here share this?

Anyway, seriously: throughout history we (as a society) have learned to distinguish good things from bad ones, generally the hard way, and we develop societal antibodies against many "bad things". Unfortunately, the current rapid growth of corporatocracy, while nothing new under the sun, has evolved more quickly than society's capacity to metabolize it, and that's a real big danger for us all.

In Nazi times it was easy to identify "the enemy": if someone bombs you, invades your country, wears clear uniforms and symbols, and clearly states that they want to dominate the world, it's easy to understand that's not good. But the "new enemies" learned that lesson well: they suppress symbols, they ceased to appear as a single body (of course they are not one, though even the original Nazis were not a single body; they had their internal fights etc.), and they do not say they want to conquer anything, only to "have success", like anybody wants... To the typical person that's not very evident. Especially since the actual humans disappear, presented instead as "platforms" with unclear ownership, with tons of different commercial brands that belong to a single owner most people don't know about; or, when they do appear in person, they present themselves as a "genius in his lab", a young happy hippie "working for a better world".

And even worse: ancient dictatorships required strong power to persist and evolve; the current corporatocracy does not. It simply remains the sole option for buying services or products. It doesn't have to prohibit something "free" like open PCs or open cars; it simply stops producing them after having bought every other possible producer, substituting them with jails that are well presented, well colored, and of course "for our safety".


Makes you wonder if a jail can be called such if it is made "fun". Eventually the need to maintain "fun" disappears if no options to leave are present.


It depends on scale: if prisoners are few, there is no need to make anything fun; if prisoners are an enormous number of people and the guards are a small group, keeping the prisoners calm is needed to avoid revolts.

Panem et circenses always pays, at least as long as people can survive or have a bit of something to lose.


I think the general danger with computers is falling into a trap of repetitive behavior. In the age of letters you'd check your letter box maybe once a day. Now everywhere we have systems of immediate responses. It creates these loops of repetition where the constant stream of new data drives the user to be constantly aware of the possibility to respond and consume. What if letters could arrive at your home invisibly, at any time, without you knowing it? The problem with technology seems to be that when things become too easy, they often seem to acquire a potential of being destructive in a way that they hadn't before.


That's the first thing I do when I get a new phone: set up mail and the usual apps, disable notifications for most of them, and set mail and others to notify every 4 hours, though I usually leave instant messaging running normally. Also, my phone is in do-not-disturb most of the time; only calls go through.

At the beginning I kept checking, but eventually the need dies off, at least in my case. Anything actually important will be a call.


Never really thought about this... but this would really increase my productivity. I get super annoyed when my wife keeps messaging me at work, for example, but I also respond all the time.

Disabling notifications for a few hours at a time is a great idea. I will give it a shot.


I do the same thing. Calls go through as normal and messages get checked when I feel like it. There have been a few occasions when I missed out on something, but it's way better that I dictate when my phone gets my attention rather than the other way around.


I found this YouTube talk a while back. It's given by a consultant who's selling these skills and has slides titled "Use of coercive monetization".

Some of the more disturbing parts are when he highlights forum posts about targeting whales and people who say things like "I have to do every challenge and every side quest and it takes away from my enjoyment of the game" and "I've restarted this game hundreds of times because my character isn't perfect".

https://www.youtube.com/watch?v=zex3b2mDnUw&t=17m16s


I think recommender systems are the biggest culprit in this. Most recommender systems are probably trained with a simple objective of maximizing the amount of time the user spends browsing through the list of recommended items.

The whole idea of continuously giving your users content so that they can passively scroll down and be entertained is just like putting users in a box where they can pull a lever to get food. When users are given all these options without having to think and to actively search for them, they just become vegetables.
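A toy illustration of that objective mismatch (titles and scores invented for the example): a feed ranked purely by predicted time-on-item surfaces different content than one ranked by how useful the user would say the item was.

```python
items = [
    # (title, predicted_minutes_watched, predicted_usefulness_0_to_1)
    ("10-hour drama compilation", 42.0, 0.1),
    ("outrage-bait thumbnail",    15.0, 0.2),
    ("7-second how-to clip",       0.2, 0.9),
]

def rank(feed, key_index):
    """Order a feed by one scalar score, highest first."""
    return sorted(feed, key=lambda item: item[key_index], reverse=True)

by_time  = rank(items, 1)  # what an engagement-maximizing recommender serves
by_value = rank(items, 2)  # what a user-goal-aligned recommender would serve

print(by_time[0][0])   # 10-hour drama compilation
print(by_value[0][0])  # 7-second how-to clip
```

Both recommenders use the same machinery; the only difference is the scalar being maximized, which is why "time spent" as a training objective so reliably produces the lever-pulling feeds the comment describes.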


I don’t use FB myself anymore but my wife still does and this morning she showed me some posts by a friend of ours. He’s... troubled anyway, but recently he seems to have become addicted to self-harming and posting the photos on FB for the reactions and comments. Congratulations, FB scientists, this is your contribution to the world.


Their contribution is profiting from his self harm, which is worse in some ways.


How do these researchers justify such a lack of empathy in their product? If I worked for a company that makes a product that does objective harm to society, like Facebook or a missile, I would feel like I wasted years of my life as I would be giving great effort to something where the outcome can only be regression, not improvement of anyone’s life. Exploiting these behavioral feedback loops for profit is cruel and dystopian. I already feel terribly for all the people I encounter who could be so much more focused on their own ambitions if they weren’t spending 4+ hours a day trapped in the infinite scroll of social media. I know how much of a setback that kind of addiction can be, because it used to be me.


>How do these researchers justify such a lack of empathy in their product?

$$$$ and/or personality disorders I imagine


Willful ignorance or cognitive dissonance may also explain some of this. See Nazi Germany: citizens knew something was going on, but they willed the whispering away.


As an aside, my previous team went through Fogg's course on feature planning. It is truly a unique way to get stakeholder buy-in on planned work. We went from planning meetings that involved shouting matches to meetings where everyone left happy.


Captology and behavior design are, to a degree, marketing wank that sells courses, lectures, speeches, books and consulting gigs... because it's fluffy idealism PR people want to buy into. Sure you can fool most of the people some of the time, a few of the people all of the time, but you can never...


Interesting article! Of course scientists work on this topic, since app developers want to attract as many users as possible. Everything is in our heads!



