Game mechanics are not considered copyrightable[1]. If you had a clean room implementation with your own significantly different assets, it would be allowed.
However, the exact definitions of "significantly different" and "assets" are where things start to get fuzzy. While you could definitely make a very similar RTS game, exactly how similar can you get? EA doesn't own "military-themed RTS," but they probably do own "Soviets vs Allies with about 5 different unit types, air transports, and tesla coils." Getting even fuzzier: are unit abilities considered assets, or game mechanics? It'd have to be worked out in court.
My gut feeling is these clone engines would probably lose in court. I think the specific expression of the general game mechanics being cloned here probably would constitute infringement. But there isn't much upside to the IP owners to pursue enthusiastic hobbyists cloning a 20+ year old game in a non-commercial way, so they let it slide.
[1] "Although Amusement World admitted that they appropriated Atari's idea, the court determined that this was not prohibited, because copyright only protects the specific expression of an idea, not the idea itself." https://en.wikipedia.org/wiki/Atari,_Inc._v._Amusement_World...
I'm sure if EA could undo their release of Red Alert and C&C as open-source, they would.
OpenRA simply downloads a copy of the original game to load its assets, but the engine is completely new, and it is very different from the original Red Alert. At this point, I don't think a single unit acts exactly the way it did in the original game. It's endlessly being rebalanced.
Same thing as always. Stick with your plan and rebalance if you need to. If your plan is 80% stock 20% bond (or whatever ratios), and the increased stock prices are putting you significantly out of balance, then sell your stock funds and buy bonds to put it back to where it should be. If the crash happens, sell your now too-high bonds and buy stocks. Or just buy into one of those funds that does all this for you, or hire a fiduciary financial advisor to do it for you.
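To make the rebalancing arithmetic concrete, here's a minimal TypeScript sketch; the `Holding`/`rebalance` names and the dollar amounts are purely illustrative, not any broker's API:

```typescript
// Illustrative sketch: compute the trades needed to restore a target allocation.
interface Holding {
  name: string;
  value: number;  // current market value in dollars
  target: number; // target weight, e.g. 0.8 for 80%
}

function rebalance(holdings: Holding[]): Map<string, number> {
  const total = holdings.reduce((sum, h) => sum + h.value, 0);
  const trades = new Map<string, number>();
  for (const h of holdings) {
    // Positive = buy this much, negative = sell this much.
    trades.set(h.name, h.target * total - h.value);
  }
  return trades;
}

// A stock rally pushes an 80/20 plan to 85/15:
const trades = rebalance([
  { name: "stocks", value: 85_000, target: 0.8 },
  { name: "bonds",  value: 15_000, target: 0.2 },
]);
console.log(trades); // stocks: -5000 (sell), bonds: +5000 (buy)
```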
The drama around the XSLT stuff is ridiculous. It's a dead format that no one uses[1], no one will miss, no one wants to maintain, and that adds significant complexity and attack surface. Removing it is unambiguously the right thing to do. No one who actually works in the web space disagrees.
Yes, it's a problem that Chrome has too much market share, but XSLT's removal isn't a good demonstration of that.
[1] Yes, I already know about your one European law example that you only found out exists because of this drama.
The fact that people didn't realize that a site used XSLT before the recent drama is meaningless. Even as a developer, I don't know how most of the sites I visit work under the hood. Unless I have a reason to go poking around, I would probably never know whether a site used react, solid, svelte, or jquery.
But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility. If you make a web site which only relies on web standards (i.e. not vendor specific features or 3rd party plugins), you can/could expect it to keep working forever. Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.
Oh, and as the linked article points out, the attack surface concerns are obviously bad faith. The polyfill means browser makers could choose to sandbox it in a way that would be no less robust than their existing JS runtime.
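To illustrate why the sandboxing argument holds, here's a rough sketch of the fallback that a polyfill approach enables. The `XSLTProcessor` calls are the real (current) browser API; the "xslt-polyfill" module name is hypothetical:

```typescript
// Sketch: prefer native XSLT, otherwise fall back to a pure-JS implementation
// that runs entirely inside the browser's existing JS sandbox.
async function transform(xmlUrl: string, xslUrl: string): Promise<Document> {
  const parser = new DOMParser();
  const [xml, xsl] = await Promise.all(
    [xmlUrl, xslUrl].map(async (u) =>
      parser.parseFromString(await (await fetch(u)).text(), "application/xml")
    )
  );

  if ("XSLTProcessor" in window) {
    // Native path: the browser's built-in (C++) XSLT engine.
    const proc = new XSLTProcessor();
    proc.importStylesheet(xsl);
    return proc.transformToDocument(xml);
  }
  // Polyfill path: same transform, but confined to the JS sandbox.
  const { transformDocument } = await import("xslt-polyfill"); // hypothetical module
  return transformDocument(xml, xsl);
}
```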
> Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.
No, this is wrong.
Maintaining XSLT support has a cost, both in providing an attack surface and in employee-hours just to keep it around. Suppose it were not used at all; then removing it would be unquestionably good, as cost and attack surface would go down with no downside. Obviously it's not the case that it has zero usage, so it comes down to a cost-benefit question, which is where popularity comes in.
I want to start out by noting that despite both the linked article and the very comment you're replying to pointing out that the security excuse is transparently bad faith, you still trotted it out, again.
And no, it really isn't a cost-benefit question. Or if you'd prefer, the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ costs. As it stood, as a web developer you only needed to make sure that your code followed standards and it would continue to work. If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular (or rather, whether browser makers will continue to _think_ they're popular, which is very much not the same thing).
> security excuse is transparently bad faith, you still trotted it out
I don't see any evidence supporting your assertion of them acting in bad faith, so I didn't reply to the point. Sandboxes are not perfect, they don't transform insecure code into perfectly secure code. And as I've said, it's not only a security risk, it's also a maintenance cost: maintaining the integration, building the software, and testing it, is not free either.
It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question. I don't see anyone at Chrome acting in bad faith with regards to XSLT removal. The drama here is really overblown.
> the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost ... If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular.
This seems overly dramatic. It's a small streamlining of an important piece of software, by removing an expensive feature with almost zero usage. No one actually cares about this feature, they just like screaming at Google. (To be fair, so do I! But you gotta pick your battles, and this particular argument is a dud.)
> It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question
If browser makers had simply said that maintaining all the web standards was too much work and they were opting to deprecate parts of it, I'd likely still object, but I wouldn't be calling it bad faith. As it stands, however, they and their defenders continue to cite alleged security problems as one of, if not the, primary reason to remove XSLT. This alleged security justification is a lie. We know it's a lie because there exists a trivial way to virtually completely remove the security burden XSLT presents to browser maintainers without deprecating it, and the Chrome team is well aware of this option. There is no significant difference in security between "shipping an existing polyfill which implements XSLT inside the browser's sandbox instead of outside it" and "removing all support for XSLT," so security isn't the reason they're very deliberately choosing the latter over the former.
> This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage
This isn't a counterargument; you've just repeated your claim that XSLT (allegedly) isn't used widely enough to justify maintaining it, ignoring the fact that browser maintainers making that tradeoff in the first place is the problem.
> But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility.
The fact that you put "contract" in quotes suggests that you know there really is no such thing.
Backwards compatibility is a feature. One that needs to be actively valued, developed and maintained. It requires resources. There really is no "the web platform." We have web browsers, servers, client devices, telecommunications infrastructure - including routers and data centres, protocols... all produced and maintained by individual parties that are trying to achieve various degrees of interoperability between each other and all of which have their own priorities, values and interests.
The fact that the Internet has been able to become what it is, despite being built on foundational technologies none of which anticipated the usage requirements placed on their current versions, really ought to be labelled one of the wonders of the world.
I learned to program in the early to mid 1990s. Back then, there was no "cloud", we didn't call anything a "web application" but I cut my teeth doing the 1990s equivalent of building online tools and "web apps." Because everything was self-hosted, the companies I worked for valued portability because there was customer demand. Standardization was sought as a way to streamline business efficiency. As a young developer, I came to value standardization for the benefits that it offered me as a developer.
But back then, as well as today, if you looked at the very recent history of computing: you had big-endian vs little-endian CPUs to support; you had a dozen flavours of proprietary UNIX operating systems, each with their own vendor-lock-in features; and while SQL was standard, every single RDBMS vendor had their own proprietary features that they were all too happy for you to use in order to try and lock consumers into their systems.
It can be argued that part of what has made Microsoft Windows so popular throughout the ages is the tremendous amount of effort that Microsoft goes through to support backwards compatibility. But even despite that effort, backwards compatibility with applications built for earlier versions of Windows can still be hit or miss.
For better or worse, breaking changes are just part and parcel of computing. To try and impose some concept of a "contract" on the Internet to support backwards compatibility, even if you mean it purely figuratively, is a bit silly. The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers. If only an extreme minority of "customers" require native XSLT support in the web browser, to use today's example, it makes zero business sense to pour resources into maintaining it.
> The fact that you put "contract" in quotes suggests that you know there really is no such thing.
It's in quotes because people seem keen to remind everyone that there's no legal obligation on the part of the browser makers not to break backwards compatibility. The reasoning seems to be that if we can't sue google for a given action, that action must be fine and the people objecting to it must be wrong. I take a rather dim view of this line of reasoning.
> The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers.
As you yourself pointed out, the web is a giant pile of cobbled-together technologies that all seemed like a good idea at the time. If breaking changes were an option, there is a _long_ list of potential deprecations to pick from which would greatly simplify development of both browsers and websites/apps. Further, new features/standards could be added with much less care, since if problems were found in those standards they could be removed/reworked. Despite those huge benefits, no such changes are/should be made, because the costs of breaking backwards compatibility are just that high. Maintaining the implied promise that software written for the web will continue to work is a business requirement, because it's crucial for the long-term health of the ecosystem.
Another bit of ridiculousness is pinning the removal on Google. Removing XSLT was proposed by Mozilla and unanimously supported with no objections by the rest of the WHATWG. Go blame Mozilla if you want somebody to get mad at, or least blame all the browser vendors equally. This has nothing to do with Chrome’s market share.
Shouldn't the users of the Web also get a say? There's been a lot of blowback on this decision, so this isn't as cut and dried as it's being made out to be.
Using the technology and opting in to telemetry, feedback forums, user surveys, newsgroups, letter writing, email campaigns, telnet into a BBS, grass-roots websites, semaphore, Morse code, teletype, fax, etc.
Anything is better than nothing, if anyone actually listens to the feedback they get instead of taking it and ignoring it.
Google are the ones immediately springing into action. They only started collecting feedback on which sites might break after they had already pushed an "intent to remove" and prepared a PR to remove it from Chromium.
> Google are the ones immediately springing into action.
You say that like it's a bad thing. The proposal was already accepted. The most useful way to get feedback about which sites would break is to actually make a build without XSLT support and see what breaks.
This page is styled via an XSLT transform: https://www.europarl.europa.eu/politicalparties/index_en.xml The drama mongers like to bring it up as an example of something that will be harmed by XSLT's removal, but it already has an HTML version, which is the one people actually use.
I've been running a small hobby site using XML and XSLT for the last five or so years, but Google refused to index it because Googlebot doesn't execute XSLT. I can't be the only one, but good luck Googling it.
I migrated to an Org-mode-based workflow a couple of weeks ago because I can see the writing on the wall, but most of the XML and XSLT files are still in place because cool URIs don't change(1).
Who knows how many other XML- and XSLT-based sites still exist on the internet because Google refuses to index that content.
This has to be proven by Google (and other browser vendors), not by people coming up with examples. The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples: https://github.com/whatwg/html/issues/11523#issuecomment-315...
XSLT deprecation is a symptom of how browser vendors, and especially Google, couldn't give two shits about the stated purposes of the web.
To quote Rich Harris from the time when Google rushed to remove alert/confirm: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones" https://dev.to/richharris/stay-alert-d
Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages setting up a device. I wouldn’t expect it to show up on a large percentage of page loads.
XSLT was supposed to be a broad standard with applications beyond single setup pages. The fact that those two features are used similarly despite one supposedly being a broad standard and the other being a niche feature that only gets used in unique cases (device setup or debugging) is only supportive of deprecating XSLT, IMO
Furthermore, you can’t polyfill USB support. It’s something that the browser itself must support if it’s going to be used at all, as by definition it can’t run entirely inside the browser.
That’s not true for XSLT, except in the super-niche case of formatting RSS prettily via linking to XSLT like a stylesheet, and the intersection of “people who consume RSS” and “people who regularly consume it directly through the browser” has to be vanishingly small.
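For anyone unfamiliar with that niche case: a feed opts in with an `<?xml-stylesheet?>` processing instruction, and the browser applies the transform before rendering anything. A rough TypeScript approximation of what the engine does natively (illustrative only; the real handling happens before any page script runs):

```typescript
// Sketch: emulate the browser's handling of <?xml-stylesheet?> on a feed.
async function renderStyledFeed(feedUrl: string): Promise<DocumentFragment | null> {
  const parser = new DOMParser();
  const feed = parser.parseFromString(
    await (await fetch(feedUrl)).text(),
    "application/xml"
  );

  // Look for e.g. <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
  let href: string | undefined;
  for (const node of Array.from(feed.childNodes)) {
    if (
      node.nodeType === Node.PROCESSING_INSTRUCTION_NODE &&
      (node as ProcessingInstruction).target === "xml-stylesheet"
    ) {
      href = /href="([^"]+)"/.exec(node.nodeValue ?? "")?.[1];
    }
  }
  if (!href) return null; // no stylesheet: the browser shows raw XML

  const xsl = parser.parseFromString(
    await (await fetch(new URL(href, feedUrl))).text(),
    "application/xml"
  );
  const proc = new XSLTProcessor();
  proc.importStylesheet(xsl);
  return proc.transformToFragment(feed, document);
}
```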
You can't polyfill many things. Should we just dump everything into the browser? Well, Google certainly thinks so. But that makes the question about "but this feature is unused, why support it" moot.
And Google has no intention to support a polyfill, or ship it with the browser. The same person who didn't even know that XSLT is used on podcast sites scribbled together some code, said "here, it's easy", and that's it.
And the main metric they use for deprecations is the number of sites/page uses. So even that doesn't work in favor of all the hardware APIs (and a few hundred others) that Google just shoved into the browser.
At least there's consensus on removing XSLT, right? But there are many, many objections about USB, HID, etc. And still that doesn't stop Google from developing, shipping and maintaining them.
Basically, the entire discussion around XSLT struck a nerve partly because all of the arguments can immediately be applied to any number of APIs that browsers, and especially Chrome, have no trouble shipping. And that comes on top of the mismanaged disaster that was the attempt to remove alert/confirm several years ago (also, "used on few sites", "security risk", "simpler code", "full browser consensus" etc.)
The distinction in my mind is that if a browser doesn’t ship with XSLT, then devs have to go through the hassle of adding support for it themselves, but if a browser doesn’t support a device driver, it’s completely impossible for devs to do that themselves.
Without built-in support, XSLT is inconvenient. Without built-in support, things like WebUSB cannot possibly exist.
That’s why I think they can’t be compared directly.
The context of this thread was “in the browser”. For example, I use a web page to configure my Meshtastic radios which are connected to my laptop via USB. If the browser did not provide an API for web pages to talk to USB devices, no amount of clever JS programming would make it possible for that radio config page to work.
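To make the contrast concrete, this is roughly what that config page has to call; there is no pure-JS substitute for `navigator.usb`, which is the whole point. (The vendor ID below is a placeholder, not Meshtastic's actual one.)

```typescript
// Sketch: connecting to a USB device from a page. Every call below bottoms
// out in a browser-provided hook into the OS USB stack.
async function connectRadio(): Promise<USBDevice> {
  // Without this API, no amount of clever JS can even enumerate devices.
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x1234 }], // placeholder vendor ID
  });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  return device; // ready for transferIn()/transferOut() calls
}
```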
> Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages
So, if XSLT sees 10x the usage of USB, can we consider it a "niche technology that is 10x as useful as USB"?
> The fact that those two features are used similarly
You mean USB is used on 10x fewer pages than XSLT despite HN telling me every time that it is an absolutely essential technology for PWAs or something.
> What, to you, would constitute sufficient proof? Is it feasible to gather the evidence your suggestion would require?
Let me quote from my comment, again:
--- start quote ---
The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples
--- end quote ---
I would like to see more evidence than "we couldn't care less, remove it" before a consensus on removal, before an "intent to deprecate" and before opening a PR to Chrome removing the feature.
Yeah, they do. Go talk to anyone who isn't in a super-online bubble such as HN or Bsky or a Firefox early-adopter program. They're all using it, all the time, for everything. I don't like it either, but that's the reality.
Not really. Go talk to anyone who uses the internet for Facebook, Whatsapp, and not much else. Lots of people have typed in chatgpt.com or had Google's AI shoved in their face, but the vast majority of "laypeople" I've talked to about AI (actually, they've talked to me about AI after learning I'm a tech guy -- "so what do you think about AI?") seem to be resigned to the fact that after the personal computer and the internet, whatever the rich guys in SF do is what is going to happen anyway. But I sense a feeling of powerlessness and a fear of being left behind, not anything approaching genuine interest in or excitement by the technology.
If I talk to the people I know who don’t spend all their time online, they’re just not using AI. Quite a few of my close friends haven’t used AI even once in any way, and most of the rest tried it out once and didn’t really care for it. They’re busy doing things in the real world, like spending time with their kids, or riding horses, or reading books.
I talk to an acquaintance selling some homemade products on Etsy, he uses & likes the automatically generated product summary Etsy made for him. My neighbor asks me if I have any further suggestions for refinishing her table top beyond the ones ChatGPT suggested. Watching all of my coworkers using Google search, they just read the LLM summary at the top of the page and look no further. I see a friend take a picture, she uses the photo AI tool to remove a traffic sign from the background. Over lunch, a coworker tells me about the thing she learned about from the generated summary of a YouTube video.
We can take principled stands against these things, and I do because I am an obnoxiously principled dork, but the reality is it's everywhere and everyone other than us is using it.
Being busy riding horses and reading books are both niche activities (yes, reading too, sadly, at least beyond a very small number of books, which does not translate to people being busy doing it more than a tiny fraction of their time), which suggests perhaps your close friends are a rather biased set. Nothing wrong with that, but we're all in bubbles.
Way off. I've polled about this (informally) as well. Non-technical people think it's another thing they have to learn and do not want to (except for those who have been conditioned into constant pursuit of novelty, but that is not a picture of mental health or stability for anyone). They want technology to work for them, not to constantly be urged into full-time engagement with their [de]vices.
They are already preached at that they need a new phone or laptop every other year. Then there's a new social platform that changes its UI every 6 months or quarterly, and now similarly for their word processors and everything.
This is kinda like how if you ask everyone how often they eat McDonald's, everyone will say never or rarely. But they still sell a billion burgers each year :) Assuming you're not polling your Bsky buddies, I suspect these people are using AI tools a lot more than they admit or possibly even know. Auto-generated summaries, text generation, image editing, and conversation prompts all get a ton of use.
Pretty funny post. He won't be held responsible for any failures. Worst case scenario for this guy is he hires a bunch of people, the company folds some time later, his employees take the responsibility by getting fired, and he sails into the sunset on several yachts.
So he's not using his own money, and he has enough personal wealth that there is no impact to him if the company fails. It's just another rich guy enjoying his toys. Good on him, I hope he has fun, but the responsibility for failure will be held by his employees, not him.
LeCun's net worth is estimated at between $5 and $10 million.
Payroll alone for 10 AI researchers at $300k/yr would cost over $3 million per year. And his wealth probably isn't fully liquid. Given payroll plus compute, he would be bankrupt in a year. Of course he's not using just his own money.
However, I expect he will be a major investor. Most founders prefer to maintain some control.
He's been leading a large, important organization at Meta for 13 years. The stock has 10x'd in that time. He's almost certainly worth way more than that. Those random Google-result sites that talk about net worth have no real idea what they're guessing at and are more akin to clickbait.
Ok, great. So he'll only lose 10% of his net worth per year if it fails. Better for some VC to lose 1% of their net worth per year.
The point is, VC money for an AI venture is not chump change even for someone with a $10-$100MM net worth. The point still stands, including his own expected investment.
Ah, she's barely out of her teens, give her a break :) Better things to spend one's life on in those years than worrying over a few hundred bucks in a bank account. She'll come back around in a few years.
> a desolate empty parking lot with no trees is somehow ideal
The author is trying to measure "claustrophobia" specifically, not ideal-ness. An empty parking lot would be less claustrophobic than most other kinds of places, yes. The measured claustrophobia factor appears to be just one part of a larger analysis that resulted in a NYT article, but unfortunately the article isn't linked.
Don't know about Switzerland, but most US brokers offer some kind of "target retirement date" fund, which automatically shifts from higher-risk assets to lower-risk as you approach retirement. VFIFX is one from Vanguard, for example. Pick one you like (just ask a coworker what they use, if you pick a big-name brokerage it really doesn't matter which one), shove your extra cash into it regularly, and forget about it. Then cross your fingers the market isn't actively crashing when you plan to retire (this is unlikely, but it does happen a couple times per century).
If you start to get into truly high wealth amounts (USD$500K+) you might consider hiring a wealth advisor, who can probably do better even after accounting for their fees.
The idea is that over a 40-year window that 20% (or more) crash is eventually going to rebound, so just sitting on the target retirement fund is going to do well over its lifetime. As you get closer to retirement and don't have the time to recover from a crash, the plan moves to safer investments.
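A toy sketch of that glide-path idea; the percentages and 40-year horizon are made up for illustration, as real target-date funds each publish their own curves:

```typescript
// Sketch: a linear glide path from 90% stocks down to 30% at retirement.
function stockWeight(yearsToRetirement: number): number {
  const start = 0.9, end = 0.3, horizon = 40;
  const t = Math.min(Math.max(yearsToRetirement, 0), horizon) / horizon;
  return end + (start - end) * t;
}

console.log(stockWeight(40)); // 0.9  -- early career: mostly stocks
console.log(stockWeight(10)); // 0.45 -- nearing retirement: shifting to bonds
console.log(stockWeight(0));  // 0.3  -- at retirement: mostly bonds
```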