Empirically however, it's an artificial distinction.

You can see this in very high-end concierge services, where they keep deep dossiers on their clients but rotate the staff of concierges. Often there is a primary point-of-contact concierge who relays the client's preferences to the others.

Taking that same approach, especially when it comes to measurable behaviors (types of foods and drinks, media consumption, etc.), why would it be any different mechanically, other than the fact that you prefer familiarity with the person you are talking to?



The only difference in whether data is creepy or not lies in the answer to a single question: Did you willingly provide it? In other words, did you choose to give it to them?

I tell the concierge service what type of table I like at a restaurant, to put into their file on me, so I expect anyone there to know that.

I do not tell a Bumble date where I work, so I do not expect them to know that (even though they could LinkedIn-stalk me).

A lowball offer on a house I am selling could be viewed as insulting. But, if before the offer I let it slip that an ugly gas station was about to go up next door, then all of a sudden the lowball offer is not insulting anymore.


> The only difference in whether data is creepy or not lies in the answer to a single question

Not at all.

> Did you willingly provide it AND REMEMBER?

People have inherent expectations and soft historical memories. The idea that consent is some boundary line is a dystopian legal mechanic, one specifically at odds with human psychology. Agreeing to something doesn't affect whether it's "creepy" (a psychological defense mechanism).


I think you are using an unhelpful definition of "consent", albeit one that is indeed shared by some legal constructs?

The idea that "you consented to me having this info, so anything I can come up with to do with this info is fair game" is obviously bullshit. But that's just because it is quite obviously not really consent, but rather someone trying to justify doing something that plainly lacks consent. People generally consent to specific uses of information, so if someone ignores the explicit or implied purpose for which some information was collected, they are really just pretending there is consent.

Which is, BTW, why the GDPR limits the use of collected information to specific purposes (each purpose must either be inherently required to fulfill some obligation, or be covered by consent for that specific purpose).
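To make that concrete, here is a minimal sketch of purpose limitation as a data structure. Everything here (the names, the purposes, the whole PurposeLimitedStore class) is hypothetical, not any real library or the GDPR's actual mechanics; the point is just that data is stored together with its consented purposes, and every read must declare one:

    from dataclasses import dataclass, field

    @dataclass
    class Record:
        value: str
        consented_purposes: set[str] = field(default_factory=set)

    class PurposeLimitedStore:
        def __init__(self) -> None:
            self._records: dict[str, Record] = {}

        def put(self, key: str, value: str, purposes: set[str]) -> None:
            # Data enters the store already bound to its consented purposes.
            self._records[key] = Record(value, purposes)

        def get(self, key: str, purpose: str) -> str:
            # Reads for any purpose outside the consented set are refused.
            record = self._records[key]
            if purpose not in record.consented_purposes:
                raise PermissionError(f"no consent to use {key!r} for {purpose!r}")
            return record.value

    store = PurposeLimitedStore()
    store.put("table_preference", "corner booth", {"reservations"})
    print(store.get("table_preference", "reservations"))  # fine: consented purpose
    # store.get("table_preference", "ad_targeting")       # raises PermissionError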


"Empirically", interesting use of the word. What exactly are you arguing for? We can be objective, but we have access to a rich and beautiful subjectivity. It's interesting that in all these "measurable" behaviors there is no room for measuring an individual's suffering, which is quite real; it seems those making the decisions are far removed from it.


I want personalization if done right.

A co-worker went to try a new restaurant and ordered his drink. The bartender made the drink as ordered, then realized that there was only one person in town who ordered that drink, and that what he wanted was slightly different from what was ordered. Because she knew what he wanted, she remade the drink to be exactly what the customer wanted, not what the waitress wrote down. (She delivered the drink personally, just to verify that it really was the customer she thought it was and not a new guy who might have wanted it as ordered, which is how I heard the story.)


This is a good story that illustrates the problem very well. I don't want to be treated according to what my statistical cohort in some particular context suggests. But I have no problem with personalized treatment from my bank personnel, from my cleaning staff, or from the undertaker. Or from the bartender who knows me.

Personalization != probabilistic statistical match.
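A contrived sketch of the distinction (all names and data hypothetical): personalization reads back what this specific person actually told you, while a statistical match guesses from whatever cohort they happen to fall into.

    # Personalization: an explicit, individually provided preference.
    stated_preferences = {"alice": "old fashioned, extra bitters"}

    # Statistical match: what this person's demographic cohort usually orders.
    cohort_favorite = {"30s_urban_professional": "IPA"}

    def personalized(name):
        # Returns only what this person actually told us, if anything.
        return stated_preferences.get(name)

    def cohort_guess(cohort):
        # Returns what "people like them" tend to order: maybe right, maybe creepy.
        return cohort_favorite[cohort]

    print(personalized("alice"))                   # old fashioned, extra bitters
    print(cohort_guess("30s_urban_professional"))  # IPA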


That's a lot of effort for a drink that is relatively unimportant in the scheme of things; sure, you could probably raise a couple million for it while real problems go unaddressed. I think that is my problem with all the effort on personalization: we should be working on more meaningful problems, with open data analysis on high-impact social projects. It doesn't pay well, which is the most exquisite torture of capitalism.


Of course what everyone else practises is relatively unimportant but your little pet social project is meaningful. Sure, buddy.

The fact that craftsmanship in this role can be rewarded is a fantastic thing. The people who can do things well can meet the people who want those things done well.


Personal nastiness breaks the site guidelines and will eventually get your account banned, so please edit it out of what you post here. If you'd (re-)read https://news.ycombinator.com/newsguidelines.html and follow the rules in the future, we'd appreciate it.

The comment would be just fine with just the second paragraph.


It would help if the guidelines were more clear.


>Sure, buddy.

Don't do that.


I seek to minimize suffering. This can be in transportation, housing, healthcare, education, nutrition, or the legal field. These have a disproportionate impact on our lives, yet they are neglected, with low salaries. How does this make sense? I'm disappointed you view this as a pet project. I disagree that craft for the sake of craft is fantastic, but I would not personally attack you for it; if you have the safety net, you are welcome to pursue whatever interests you find rewarding. Many do not have the economic freedom to do so, and they are who I work for.


Don't switch accounts like this.


I was slowbanned and wanted to respond, not sure what to do in that case.


"I was banned from the club and wanted to go in anyway [so I broke in through a window], not sure what to do in that case."


> What exactly are you arguing for?

I'm arguing that lamenting "mechanization" of social interaction - whether between corporate employee and client as in this example, or even between individuals - is akin to appealing to ether as a virtue of the universe.

If interactions between actors can be measured (which they can, to some level of specificity, in certain contexts), and the desires of the actors can be understood to a sufficient degree (which we are starting to be able to do), then we can model which actions increase or reduce the likelihood of a desired outcome. From that, we can build and optimize decision support systems that nudge users toward some mutually beneficial state that may otherwise have been opaque to both actors.
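As a toy illustration of that kind of decision support (the actions, history, and smoothing choice here are all made up for the example, not any real system): score each candidate action by how often similar past actions led to the desired outcome, then surface the best one.

    from collections import defaultdict

    # Observed history of (action, outcome_was_desired) pairs.
    history = [
        ("suggest_tea", True), ("suggest_tea", True),
        ("suggest_coffee", False), ("suggest_coffee", True),
        ("suggest_water", False),
    ]

    counts = defaultdict(lambda: [0, 0])  # action -> [successes, trials]
    for action, desired in history:
        counts[action][0] += int(desired)
        counts[action][1] += 1

    def score(action):
        successes, trials = counts[action]
        # Laplace smoothing so rarely tried actions aren't scored 0 or 1.
        return (successes + 1) / (trials + 2)

    best = max(counts, key=score)
    print(best, round(score(best), 2))  # -> suggest_tea 0.75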

People seem to hate this idea because it basically puts hard determinism right in their face: your past behaviors, if known well enough, should be predictive of your future actions, ceteris paribus.

It's often argued that such granularity of measurement in social dimensions is technically impossible, or that the fact of being measured changes people's behavior. I wouldn't argue that either of those is untrue, only that measurement need not be perfect to improve the overall optimization of the system.

So instead of saying "yes, let's use measurement to optimize our system of interactions across commerce and relations," people bristle at the mere concept of social engineering in the Popperian sense of the term, because it feels restrictive to our sense of "free will." I argue the opposite: doing so would simply make us more aware of our predilections and much more likely to be able to align them across groups.


>> nudge users toward some mutually beneficial state

How do you agree on that? Because that's exactly what is missing in most selling processes that use manipulation - it's exactly why manipulation is used in the first place.


You need to set an overarching goal system and then get buy-in, either explicitly or implicitly.


That's interesting. I violently disagree, but I haven't worked out a constructive philosophical position yet. Thanks for the well-thought-out description.


There was a book called Nudge that takes up this topic to some extent, basically hacking people's decisions in "beneficial" ways, like making healthy foods more prominent in grocery stores, or changing organ donation from opt in to opt out. (Not sure those examples are entirely representative of the book, but they're the main ones I remember.) I was ambivalent. It was interesting what could be done to subconsciously influence people, but I bristle at the idea of being manipulated without my awareness, even if it's already prevalent, because usually it's done to benefit other-than-me.


My approach to education requires that students understand the subject and build their own representations. Nudging in the large is not acceptable to me, but in the small, perhaps, to gain the tools for understanding. Tricking someone into understanding a thing for themselves seems OK.


I think by "empirically" you mean intuitively or logically?

Is there some large body of measurements you are referring to?


Yes, I mean empirically. There is no theoretical difference between what an individual human perceives about another person and what can possibly be tangibly measured about that person by some combination of systems.



