Hacker News | DonaldFisk's comments

I had assumed that XWayland is a drop-in replacement for X11, and will be available indefinitely.

I regularly write code which relies on a working X11. I have written a virtual machine which makes X11 calls to do 2D graphics and event handling, as well as applications which compile to the virtual machine code. If X11 and now XWayland cease to be available, not only would I have to rewrite large parts of my virtual machine, but also rewrite all the 2D graphics code in applications. All so that I can stand still when the rug is being pulled from under my feet. I'm sure there are others in a similar predicament.

I may be naive about this, but as X11 just works, and has done for decades, it should require little to no maintenance, so why the need to withdraw it? I don't expect, or require, any additional functionality.


Yes, XWayland is intended to continue to be available indefinitely.


Scotland, 56 degrees north - I was able to see the aurora through occasional gaps in fast moving clouds around 0400hrs. Red, easily visible to the naked eye.


It's actually an interesting news article, at least if you're interested in neuroscience. I can confirm, though, that it's nothing whatsoever to do with flight simulators, and have no idea why the author chose that particular simile.


I remember those early systems, and was in touch with Upendra Shardanand and Pattie Maes at the time. Other early systems ca

As music recommendation was already being done, I developed MORSE, short for MOvie Recommendation SystEm, shortly after Ringo appeared. Like Ringo and Firefly, it was a collaborative filtering system, i.e. it worked by comparing how similar your tastes were to the tastes of other users, and took no account of other information (e.g. genre, director, cast). As it was a purely statistical algorithm, I didn't call it, or other collaborative filtering systems, AI. It was different to symbolic AI (which I was previously working on, in Prolog and Common Lisp), didn't use neural networks, and wasn't Nouvelle AI (actually the oldest approach to AI) either. I wrote it in C (it had to run fast and was just processing numbers) and used CGI (Common Gateway Interface) to collect data and give recommendations on the WWW.

In a nutshell, to predict the rating for a film a user hasn't seen yet, it plotted the ratings given by other users for that film against how well their ratings correlated with the user's, found the best-fitting straight line through them and extrapolated it, estimating the rating of a hypothetical user whose tastes exactly matched the user's. It also calculated the error on this, which it took into account when giving recommendations. Other collaborative filtering systems used simpler algorithms which ignored the ratings of users whose tastes were different. When I used those simpler algorithms on the same data, recommendation accuracy got worse.
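For readers who want the idea concretely: the approach described above can be sketched in a few lines of Python. This is not the original MORSE C code (which is lost, as noted below), just a minimal illustration of the described algorithm, with ratings held in plain dicts mapping film names to scores; all function and variable names here are my own invention:

```python
from math import sqrt

def pearson(a, b):
    """Correlation of two users' tastes over the films both rated."""
    common = set(a) & set(b)
    if len(common) < 2:
        return None
    xs = [a[f] for f in common]
    ys = [b[f] for f in common]
    n = len(common)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return None
    return cov / sqrt(vx * vy)

def predict(target, others, film):
    """Fit rating = a*corr + b over the (correlation, rating) points
    of users who rated `film`, then extrapolate to corr = 1, i.e. a
    hypothetical user whose tastes exactly match the target's."""
    pts = []
    for other in others:
        if film in other:
            r = pearson(target, other)
            if r is not None:
                pts.append((r, other[film]))
    if len(pts) < 2:
        return None
    # Ordinary least-squares fit of a straight line through the points.
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a * 1.0 + b  # extrapolate to correlation = 1

# Example: one user with identical tastes rated 'D' a 4,
# one with opposite tastes rated it a 2.
target = {'A': 5, 'B': 1, 'C': 3}
others = [{'A': 5, 'B': 1, 'C': 3, 'D': 4},
          {'A': 1, 'B': 5, 'C': 3, 'D': 2}]
print(predict(target, others, 'D'))  # 4.0
```

The real system also tracked the error on the extrapolation and weighted recommendations by it, which this sketch omits.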

MORSE was released on the BT Labs' web site in 1995. It survived a few years there, but was later taken off the server. As BT weren't going further with it, I asked if the source code could be released. This was agreed, but it wasn't on any machine, and they couldn't find the backup tape. The algorithm is described in detail here: https://fmjlang.co.uk/morse/morse.pdf and more general information is here: https://fmjlang.co.uk/morse/MORSE.html


My algorithm was pretty similar to yours I’m guessing. (See my other long post here.) I described mine to a friend one time and he called it “toothpick AI”.


I think that intelligence requires, or rather, is the development and use of a model of the problem while the problem is being solved, i.e. it involves understanding the problem. Accurate predictions, based on extrapolations made by systems trained using huge quantities of data, are not enough.


No, MCP was a much older operating system (https://en.wikipedia.org/wiki/Burroughs_MCP).



> Just look at North Korea for example ... on average their literacy is terrible.

Where do you get that information from? According to the CIA, it's 100% (https://www.cia.gov/the-world-factbook/about/archives/2023/f...). North Korea Info (https://www.northkoreainfo.com/why-does-north-korea-have-a-h...) gives a slightly lower estimate of 97%-99%.


This isn't a new issue, and it predates the internet. There were publishers of magazines containing pornography (or anything else unsuitable for children). These were sold in shops. A publisher had to ensure that the material in the magazines was legal to print, but it wasn't their responsibility to prevent children from looking at their magazines, and it's difficult to see how that would even be possible. That was the responsibility of the people working in the shops: they had to put the magazines on the top shelf, and weren't allowed to sell them to children.

On the internet, people don't get porn videos directly from pornographic web sites, just as in the past they didn't buy porn directly from the publishers. The videos are split up into packets, and transmitted through an ad hoc chain of servers until they arrive, via their ISP, on their computer. The web sites are the equivalent of the publishers, and ISPs are the equivalent of the shops. So it would make a lot more sense to apply controls at the ISPs. And British ISPs are within the UK's jurisdiction.

And before anyone points out that there are workarounds that children could use to bypass controls, this was also the case with printed magazines.


I don't have a problem with holding companies responsible for the products they sell. But your analogy breaks down because the shop owner chooses the products to sell in his shop. The porn mags aren't in the shop unless he specifically arranges to sell them, so it's easy to say he's responsible for keeping kids away from them.

An ISP doesn't do that. A better match for an ISP would be the trucking company that hauls magazines (porn and otherwise) from publishers to shops, or the company that maintains the shop's cash register.


> I don't have a problem with holding companies responsible for the products they sell.

That's why I wrote, "A publisher had to ensure that the material in the magazines was legal to print." Web sites should also follow the laws of the countries where they are based, but not be required to follow other countries' laws. In the specific case here, a UK body is trying to collect daily fines from a US based web site (4chan.org) with no physical UK presence.

> An ISP doesn't do that.

For over a decade, they have been blocking traffic to/from web sites deemed unsuitable for children, by default. Which should make people wonder what this adult verification is actually for.


> but it wasn't their responsibility to prevent children from looking at their magazines

They weren't made to guarantee no child could peek at them, no, but they did have age restrictions that were followed (a child who picked one up couldn't buy it), and they were often on the top shelf. The kind of thing a basic risk assessment would flag: "hey, we keep the hardcore porn in front of the Pokemon magazines...".

> The videos are split up into packets, and transmitted through an ad hoc chain of servers until it arrives, via their ISP, on their computer. The web sites are the equivalent of the publishers, and ISPs are the equivalent of the shops

The pictures emit photons which fly through the air to the child. The air is the shop.

Or for websites your computer is the shop.

The ISP is not the shop. Nor in the OSA is it viewed as such. The company who makes the service has some responsibility.

> So it would make a lot more sense to apply controls at the ISPs.

This fundamentally cannot work for what is in the OSA, and if you cannot see why almost immediately then you do not know what is in the OSA and cannot effectively argue against it. It is not a requirement to add age checks to porno sites.


From about ten years ago, ISPs were required to block web sites which were unsuitable for children by default. Any ISP's customer (the person paying for internet access, who would therefore be over 18) could ask for the block to be removed. Requiring individual web sites to block access was unnecessary if the intention was to prevent children accessing those sites.


My understanding is that the default "block" just worked through the ISP's DNS servers. So that only works if the parents know to restrict the ability of their kids to change their DNS servers on their local devices (which is not set up by default) and the kids don't know how to get around it.


>Requiring individual web sites to block access was unnecessary if the intention was to prevent children accessing those sites.

Hmm. So Reddit, Youtube, etc. would be blocked by ISPs by default?



Neither of those are relevant. One watched porn at work. Another had her husband expense his porn. And they were both caught rather than admitting it.

We're talking about just watching porn in private, normally. Find me an MP that admits to that.

