
Good riddance. Hope it extends beyond the EU as well.


It will not, because that is unsustainable. But we can shove our heads even deeper in the sand and hope so.


It's unsustainable to do advertising while maintaining user privacy? Someone tell the whole advertising industry from 1950 to 2000 or so.


It's much less efficient to do mass ads than targeted ads, which gives a huge advantage to entrenched firms and cuts off small startups. I guess that's what the EU wants though - protecting its existing elites through exorbitant income taxes and regulations like these that favour big companies.


Every regulation favours big companies in the trivial sense that there's a non-zero cost to comply with any rule. However, this is hardly a reason not to have rules; we also don't let startup companies throw waste into the river.

Smaller companies aren't somehow inherently virtuous. If they can only exist by violating user privacy even more than large companies do, I'd rather deal with large companies. Small companies aren't an end in themselves. Banking is another good example of this: Canada has five big, heavily regulated banks and has not had one bank failure in a century. In the US, taxpayers bail out Silicon Valley Bank.


I hear you that small companies can suck even harder than big ones. But the idea of capitalism is that it’s a search function, looking for optimal solutions.

So having many firms positioning themselves differently is better than having a couple, in that you're more likely to find the best solution.

We need some rules, but we need to err on the side of fewer not more rules.

My point is that mass ads require massed capital, so there will be a few huge, entrenched players. It's much better to have a dynamic economy with piles of mom-and-pops as well as a couple of huge companies. This can only happen if mom and pop can afford to advertise, and that can only work with targeted ads.


Ad targeting may just be good old regional advertising, or advertising alongside specific content rather than trying to target specific human beings based on their profiles.


What if the target market for your specific content is 10k people? And they are spread out geographically?

Let's say you develop a new product/service aimed at a thin slice of the population - say, dentists in Chicago, or 19 y.o. females across the US, or beginner coders, or people with older dogs in the state of Utah.

How do you reach these people?

Not through a national/regional ad campaign - your ad budget is likely $10-$100/day. This is why we all need targeted ads.


What could be a product only for such a specific niche?

All dentists are Cervelo's market :) Even if you really, really want to target Chicago dentists specifically, you're probably better off doing a presentation at the local dentists' union or something along those lines, or just doing good old cold calls.

As for 19-year-olds, just advertise in whatever is popular among that crowd? If an 18- or 20-year-old sees it, it's probably not far off, eh?


Have you ever marketed anything?

If you ever do, I hope you have an unlimited budget and time horizon to make your ideas work.


Why should your budget and timing take higher priority than my privacy? Arguably social cohesion is at risk too, since targeted ads in politics may be very nasty. Targeted ads benefit a few at the expense of society as a whole.


Why is it not sustainable?


Because he probably works for a tech company whose main revenue is advertising.


One disgusting trend I see with many phone apps is notification spam with no fine-grained option to disable them without potentially disabling transactional notifications. Ride-sharing and delivery apps are particularly annoying for this.

I remember the time when phone OSes did not have notification categories, but it's not a useful feature if app developers don't implement them.


Looking at you, Uber, and my once-a-month ride somehow equating to 10 UberEats notices per day.

Marketers -- you're on notice.


I disabled notifications for Uber years ago for this, the first time I got a marketing push. It's your phone, and they can tell if you've disabled push. Tell them you're not OK with it.


I use the notifications for the once-a-month reserved ride I have to take. Else this would be the solution.


Uninstall Uber Eats until you need it again, but yes, this sucks.


I don't have Uber Eats app or use the service, but if I could use the marketing cash they offer as currency I'd have a nice chunk of the office rent paid each month.


Bugs, faults, errors


I think bootlicker is a stronger but more appropriate word. I genuinely cannot believe people support being sold a fully standalone physical product and being charged just for the ability to use it, while thinking it is in their best interest.


Stremio with torrent addons is extremely common.


This is the answer. Stremio is the easy-to-use solution that I set up for my parents and forgot about. The comments above with their complicated Usenet setups remind me of the famous Hacker News comment when Dropbox was released.


My cynical AI opinion is that if it truly was revolutionary, we would be seeing private companies or governments use it internally to massively boost their productivity or achieve otherwise impossible feats. Instead, all we're seeing are consumer products or half-baked integrations that are another way to further centralize data collection.

Talk about all-powerful, malevolent AI or requests to halt AI development just sounds like baiting for regulatory capture to me. If AI research or use is deemed dangerous, it becomes even harder for startups or individuals to compete with established companies.

Also, I am not concerned about an all-powerful AI in the slightest. Humanity is excellent at oppressing others, and I have no doubt we'd be equally good at oppressing a true AI as well.


"we would be seeing private companies or governments use it internally "

What world are you living in, where this isn't happening? Every private company that can use it is using it, Microsoft themselves must be using Copilot. Governments are lining up to train their own LLMs.

Commercial and consumer use are not mutually exclusive, in any case.

Also "Humanity is excellent at oppressing others, and I have no doubt we'd be equally good at oppressing a true AI as well."

If you read history a bit carefully, you'd understand this frequently works out badly for the oppressors. The Romans hired and abused their Germanic mercenaries, thinking that with their centuries of political experience they could let the Germanics do all the hard fighting while getting paid little.

The contemporary Jin dynasty in China thought the same way: just hire barbarian mercenaries to do all the hard work in their civil wars. Those illiterate barbarians, who up to that point had never achieved much in China, surely wouldn't be a threat.

It ended very badly for both of those polities.


The empire may no longer be around, but the UK and the Commonwealth still are. They are thriving. France is still around, and despite the protests, is still a wealthy nation. Belgium is still going strong. The US is the richest country in the world. Turkey is still kicking. Despite Nanjing, Japan's economy is to die for.


The former empires are shadows of their former selves and had to undergo significant reforms. Nobody knows or cares who the king of Belgium is anymore.


> What world are you living in, where this isn’t happening?

You left off the rest of the sentence: “to massively boost their productivity or achieve otherwise impossible feats”


That was implied, I think, though achieving impossible feats is a bit of a stretch in expectations.

Private enterprise is certainly leveraging LLMs. OpenAI APIs on Azure are very hot right now, limited to companies that have existing contracts with Azure.


Say, hypothetically, that it was a perfect duplicate of a human brain. That would certainly be called a truly revolutionary accomplishment, but that duplicate wouldn't be expected to massively boost productivity any more than adding another human would.


If a GPT model (+ associated cheap software wrapper like LangChain etc) were hypothetically as good/productive as a qualified human engaged in remote work, that would massively boost productivity. The reason is that no matter how much it costs to run such a model at inference, it isn't going to cost anywhere near as much as the ~million dollars required from society to raise a human infant until they're capable of that same level of productivity (in the developed world), plus the ongoing cost of wages. What that means is that once you find a "good worker" model, you don't need to go on an expensive hiring spree; all you need to do is change the number of instances you're using from 1 to whatever number is most optimal for you. You could employ thousands of new people at a guaranteed level of quality within a day.

From the point of view of the organisation building said agents, this would get a lot more extreme. You have all of the above benefits, except you're only paying for electricity and amortised hardware costs rather than someone else's profit. But you can also work on improving the hypothetical human-level agents. If you can optimise their compute costs at runtime and we're accepting the premise that they're as good as a qualified human, then you can get superhuman performance through simply running them faster. Spin up a group of however many professors and engineers you need, give them four days of real time that's equivalent to a year of subjective time, and that's superhuman performance.

How long did it take to go from GPT-3 to GPT-4? If these agents are truly human equivalent for the purposes of work, you can set them to building GPT-5 or whatever the next most powerful is, as fast as you can run them. I suspect the real limit would be just how fast you can download the data (or build the architecture required to gather more), not how fast you can categorise and select it. Once your ducks are in a row with regards to the data, you have an even better foundational platform to do the same thing again but with smarter agents this time.

If they're human level in performance, you could also task them with generating data for the model. You could do this directly (e.g. training on the huge amount of text they're producing as they work), or you could task them towards building and designing consumer products designed to increase their access to data. For example, designing a cheap robot that can help people in their home like a robot vacuum cleaner, or something like Tesla's FSD, or a home speaker assistant. Once the model is multi-modal like GPT-4 is, you can get data by acquiring novel images rather than being restricted to just text. Maybe GPT-5 isn't just text and images but also audio, so you can increase the ways you acquire data even further.

If they're genuinely at human level performance, none of this should be impossible. In our current world a major limiting factor on productivity is that skilled human labour is rare - when you can copy-paste a known-good skilled 'human' labourer, that becomes completely inverted.

Summing up: if we could get them to reliable human level performance, that would lead to a massive productivity boost because it would make the cost of skilled human labour and ingenuity far, far lower while increasing supply to effectively "infinite, limited only by your ability to pay to run them". Agents like these are not at that stage yet, they've still got a significant way to go. But if they get to human equivalent productivity, that isn't just like adding one more high quality research scientist or engineer, it's adding millions of them, and that's a massive productivity boost.


Fully agree-- thanks for the correction. (still trying to figure out why I wrote that!)


> My cynical AI opinion is that if it truly was revolutionary, we would be seeing private companies or governments use it internally to massively boost their productivity

ChatGPT (with GPT-3.5) is, even counting from the preview, only four months old; the paper on the ReAct pattern, a key approach to doing productive work with LLMs, is about a month older.

There’s a lot of work on doing what you talk about in many organizations, but it takes time to do.


I wouldn't call this cynical; I'd call it incredibly naive, notwithstanding the fact that we are still feeling out what LLMs can and cannot do. When are established players quick to move? When does disruptive tech NOT have to fight back against organisations composed of individuals who are often incentivised against its success? When does a new player topple the giants overnight? Pretty much never.


Revisit this comment in 2 months, 6 months, a year. It’s mainly a matter of when OpenAI allows widespread access to the gpt-4 api and developers have time to build stuff.


Also, shockingly, go back and look at the history of other game-changing innovations - cars, microcomputers, even electricity. It took time for companies (and consumers) to figure out how to use them. This will be no different.


They are. But you can't expect something to be released and six months later we've completely exhausted all possible value from it. It takes longer than that at most big companies to agree to even use a technology, and another six months to pass it through infosec, compliance, legal, fifty tech managers, etc. But as a first-party source I can tell you these things are happening, pervasively, at every company, everywhere.


It's just starting, https://restofworld.org/2023/ai-image-china-video-game-layof...

The internet took 10 years to develop into a mature ecosystem where normies could utilize it with ease.


Drogon is similar to "drogón" in Spanish, which means "junkie" or someone who takes a lot of drugs.


I guess if we're doing random tidbits.

And "junkie" is a very harmful term to use and implies a moral failing of the drug user.


> - Give me a setting to autoconfirm all cookie consent requests and lobby for a legally binding do-not-track header. Cookie consent was well meaning, but it has turned out to make things worse. Let's move on.

This can be done with uBlock Origin and an "annoyances" filter list such as EasyList Cookie. It doesn't actually give websites consent to use cookies; it only hides the consent form.
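
For illustration, here is a rough sketch of the kind of cosmetic rules such annoyance lists contain (the domain and class names below are made up for the example, not taken from EasyList Cookie itself):

    ! Comment lines start with "!"
    ! Cosmetic filter: hide an element with a specific class on one site
    example.com##.cookie-consent-banner
    ! Generic cosmetic filter: hide any div whose class contains "cookie-banner"
    ##div[class*="cookie-banner"]

These rules only hide the banner visually; nothing is clicked and no consent is recorded, which is why sites may still treat you as having given no answer.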


Thanks! I really did not know about the annoyances lists; they really help.


In the case of Google, not at all:

    $ curl -v https://google.com
    [...]
    < HTTP/2 301
    < location: https://www.google.com/


Just an FYI: in this case you might want to use `curl -i`, which displays just the response headers + body (the `-v` switch displays lots of other information too).

    $ curl -i https://google.com
    HTTP/1.1 301 Moved Permanently
    Location: https://www.google.com/
    ...
EDIT: interesting that it responds with HTTP/2 in your case and HTTP/1.1 in mine, would need to look up why the responses are different.
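If I had to guess, the difference comes down to how each curl binary was built: a build with HTTP/2 support will negotiate HTTP/2 over HTTPS via ALPN, while an older build (or one forced with a flag) stays on HTTP/1.1. A rough way to check on your machine (the response lines below are illustrative, not captured output):

    $ curl --version            # look for "HTTP2" in the Features line
    $ curl -sI https://google.com | head -1
    HTTP/2 301
    $ curl -sI --http1.1 https://google.com | head -1
    HTTP/1.1 301 Moved Permanently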


When you get the receipt back, you can request an item be cancelled because you "didn't take it". They will pretty much accept it, no questions asked.

