Hacker News | nyrikki's comments

An ISDN BRI (basic copper) actually had two 64 kbps B channels; for POTS dialup as an ISP you typically had a PRI with 23 B channels and 1 D channel.

56k only allowed one A/D conversion between provider and customer.

When I was troubleshooting clients, the problem was almost always on the customer side of the demarc, with old two-line wiring or insane star junctions being the primary sources.

You didn't even get 33k on analog switches, but at least US West and GTE had ISDN-capable switches backed by at least DS# by the time the commercial internet took off. LATA tariffs in the US killed BRIs for the most part.

T1 CAS was still around, but in-channel CID etc. didn't really work for their needs.

33.6k still depended on DS# backhaul, but you could be POTS on both sides; 56k depended on there being only one analog conversion.


Unfortunately even podman etc. are still limited by the OCI's decision to copy the Docker model.

crun just stamp-couples security profiles, as an example, so everything in the shared kernel that is namespace-incompatible is enabled.

This is why it is trivial to get unauditable communication between pods on a host, etc.


> Unfortunately even podman etc. are still limited by the OCI's decision to copy the Docker model.

Which parts of the model are you referring to?


OCI container runtimes like the OCI's runc are "container runtimes" in the sense of the runtime spec [2].

Basically, Docker started out using LXC but wanted a Go-native option and wrote runc. If you look at [0] you can see how it actually instantiates the container. Here is a random blog that describes it fairly well [1].

crun is the podman-related project written in C, which is more efficient than the Go-based runc.

You can try this even as the user nobody (65534:65534), but you may need to make some directories or set some environment variables.

Here is an example pulling an image with podman to make it easier, but you could just make an OCI spec bundle and run it:

    mkdir hello
    cd hello
    podman pull docker.io/hello-world
    podman export $(podman create hello-world) > hello-world.tar
    mkdir rootfs
    tar -C rootfs -xf hello-world.tar
    runc spec --rootless
    sed -i 's;"sh";"/hello";' config.json
    runc run container1
    
    Hello from Docker!
runc doesn't support any form of constraints like a bounding set on seccomp, SELinux, AppArmor, etc., but it will apply profiles you pass it.

Basically it fails open, and with the current state of AppArmor and SELinux it is trivial to bypass the minimal userns restrictions they place.

Historically, before rootless containers, this was less of an issue, because you had to be a privileged user to launch a container. But with the holes in the LSMs, no ability to set administrative bounding sets, and the reality that none of the defaults constrain risky kernel functionality like vsock, openat2, etc., there are a million ways to break netns isolation.

Originally the Docker project wanted to keep all the complexity of mutating LSM rules etc. in containerd, and they also fought even basic controls like letting an admin disable the `--privileged` flag at the daemon level.

Unfortunately, due to momentum, opinions, and friction in general, those container runtimes now have no restrictions on callers and cannot set reasonable defaults.

Thus we now have to resort to teaching every person who launches a container to be perfect and disable everything, which they never do.

If you run a k8s cluster with nodes on VMs, try this for example: if it doesn't error out, any pod can talk to any other pod on the node, over a protocol you aren't logging and which has limited ability to be logged anyway (assuming your k8s nodes are running systemd v256+ and you aren't using containerd, which blocked vsock; cri-o, podman, etc. didn't, at least as of a couple of weeks ago):

    socat - VSOCK-LISTEN:3000
You can also play around with other address families, as IPX, AppleTalk, etc. are all available by default, or see if you can use openat2 on some file in /proc to break out.
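Since the runtimes fail open but will apply profiles you pass them, one per-container stopgap is a seccomp profile that denies creating AF_VSOCK sockets. A minimal sketch, assuming the OCI seccomp JSON format and AF_VSOCK = 40 on Linux; the filename is mine:

```shell
# Deny socket(AF_VSOCK, ...) while allowing everything else; this has
# to be passed per container, it is not a default anywhere.
cat > deny-vsock.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "errnoRet": 1,
      "args": [
        { "index": 0, "value": 40, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
EOF
```

You would then launch with something like `podman run --security-opt seccomp=deny-vsock.json ...`; the larger point stands that every caller has to remember to opt in.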

[0] https://manpages.debian.org/testing/runc/runc-spec.8.en.html

[1] https://mkdev.me/posts/the-tool-that-really-runs-your-contai...

[2] https://github.com/opencontainers/runtime-spec/blob/main/REA...


> crun just stamp-couples security profiles

I don't understand any of this :-)


I will try to go more in-depth in later posts, but many users, especially in a k8s context, probably have a socket-activated sshd listener on vsock, which may pose a serious risk and possibly violate your security assumptions.
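A quick way to check whether a guest has picked up such a listener; a sketch assuming systemd v256+'s ssh generator and an iproute2 `ss` new enough to understand vsock:

```shell
# Look for AF_VSOCK listeners; both commands exit cleanly even when
# nothing is found (or the tools are missing).
systemctl list-sockets --all 2>/dev/null | grep -i vsock || true
ss --vsock -l 2>/dev/null || true
```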

I don't think [0] is showing what you think it does.

> % Very satisfied with the way things are going in personal life

That dropped from 65% in 2020 to 44% in 2025.

> Record-Low 44% of Americans Are 'Very Satisfied' With Their Personal Life

Also, focusing on the raw percentages of reports like these is challenging, due to socially desirable response bias [0].

The fact that it is dropping is the important part; it is a relative measure, not an absolute one, and I am sure Gallup would change their questions/responses in a modern survey that didn't need to maintain compatibility with historical data.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC5519338/


* Gallup (not Gallop) has the English questions and responses in the PDF at the bottom of the page. They will also respond if you email them so you can check if wording changed significantly.

* Yes, I am pretty sure the Gallup thing is showing exactly what I think it does considering I said "81% are [somewhat] satisfied or very satisfied" and the Gallup survey shows that 81% are somewhat satisfied or very satisfied.

* The fact that the Hacker News community was enthusiastic about the thesis of a loneliness epidemic during a period when satisfaction was rising casts doubt on "the fact that it is dropping is the important part". When satisfaction was rising, there were still posts on that topic where everyone agreed about how bad it was.


I was looking at the PDFs [0] and [1], and [0] calls out:

> In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of public opinion polls.

`QN5: Personal Satisfaction` is a binary question, with a category for refused/didn't know that WAS NOT OFFERED IN THE QUESTION, plus an additional follow-up asking about very, somewhat, etc. They call out `QN5QN6COMBO: Personal Life Satisfaction`.

I can't answer the HN sentiment straw man; the DELTA from previous results is what is important. Using it as an absolute scale would almost certainly be discouraged if you asked them via the email address in the PDF.

These are basic statistical realities, and Gallup knows the limits far better than the comment section here. They understand that "81% are [somewhat] satisfied or very satisfied", especially when presented as two trivial properties, has limitations.

Once again they asked:

> In general, are you satisfied or dissatisfied with the way things are going in your personal life at this time?

Then followed up with:

> Are you very [satisfied/dissatisfied], or just somewhat [satisfied/dissatisfied]?

Note how both of those are binary, with a NULL being an option to mark down as an exception.

You do not have quintiles at all.

[0] https://carsey.unh.edu/sites/default/files/media/2020/07/gal...

[1] https://news.gallup.com/poll/1672/satisfaction-personal-life...


I don't think it's a straw man. If it is true that the delta matters, and it is also true that at the time when this metric was showing its most positive results and trending upwards, online communities such as this one talked about the existence of a loneliness epidemic (https://news.ycombinator.com/item?id=20468767), then one must ask oneself whether this is a property of the online communities in question.

At the time when the Gallup poll showed an upward trend toward its peak, this community was talking about the loneliness epidemic. When the Gallup poll shows a downward trend toward its lowest point, this community is talking about the loneliness epidemic. And it's the change in satisfaction that is most significant. So there are two changes in opposite directions causing the same conclusion.

If this were happening to me, I would ask myself "Am I sure this is a general property and not just a property of me?". Do you find this convincing enough to move your estimate of the likelihood that the loneliness epidemic actually exists? If you don't, that's all right. We can leave it here.


The old EFnet #unix sex chart debunks some of these claims.

I know I learned more from having ops in #unixhelp than from Stack Overflow.

But the 90s were different, and it really depended on the channels.

I remember going into #nanog to finally get UUNET to pull BGP routes once.

Sure there were bad parts, but at least you still had agency unlike with modern social media.


There is a lot to blame on the OS side, but Docker/OCI are also to blame for not allowing permission bounds and forcing everything onto the end user.

Open Desktop is also problematic, but the issue is more about userland passing the buck across multiple projects that can each easily justify their local decisions.

As an example, if crun set reasonable defaults and restricted namespace-incompatible features by default, we would be in a better position.

But Docker refused to even allow you to disable the --privileged flag a decade ago.

There are a bunch of *2() system calls that decided to use caller-sized structs, which is problematic, and AppArmor is trivial to bypass with LD_PRELOAD etc.

But when you have major projects like llama.cpp running as container uid 0, there is a lot of hardening that could happen with projects just accepting some shared responsibility.

Containers are just frameworks to call kernel primitives; they could be made more secure by dropping more.

But OCI wants to stay simple and just stamp-couple SELinux/AppArmor/seccomp, and D-Bus does similar.

Berkeley sockets do force unsharing of netns etc., but Unix is about dropping privileges at its core.

Network awareness is actually the easier portion, and I guess if the kernel implemented POSIX socket authorization it would help, but when userland isn't even using basic features like uid/gid, no OS would work, IMHO.

We need some force that incentivizes security by design and sensible defaults; right now we have whack-a-mole security theater. Strong or frozen-caveman opinions win out right now.


IIRC the deskside Onyx had the royal purple stripes and only accepted one CPU board; the rackmount versions were that blue-purple color, more indigo.

So there were two computers made with the same bits in the mid 90s: Origin (compute, blue) and Onyx (graphics, purple). Both had deskside and rack systems.

Onyx had a few slots reserved for graphics; Origin could have more compute boards. But you could certainly put a couple of CPU boards in an Onyx deskside or rack.


Uh oh, I think I mistook Oxygen for Origin. It was 25 years ago.

The funny thing was that our Cray sales team, from the C90 and T3D, jumped ship from Cray to SGI before their merger.

So we got to set up some Indys as web servers / an Oracle database. Surprise! Same sales team. Then the merger happened and we got the Origin boxes.


It must have been more complex than that. I was at Skywalker Ranch when Onyx was being replaced by the o200/2000 and never saw a purple Onyx at Kerner (ILM). I had a purple Indy Impact as my home machine and was looking for purple. I hated teal with a passion.

Perhaps all the purple rack Onyxes had been dumped, but we dug through ILM's boneyard looking to add to our CXFS cluster; the FC bus speed was too low, though.

It is possible that the R10k was different or that there were multiple chassis. The desksides I had experience with required RAM in slot 1 and CPU in slot 2, with up to 4 CPUs on the board.

The o200 was more restrictive, with 2 CPUs per chassis and the ability to CrayLink two chassis for a total of 4 CPUs; more required an o2000.

But this was a quarter of a century ago or more by now... so I may misremember things.


Oh, I think you may be right here. I was thinking of the Origin 2000/Onyx2, which were basically the same system, just with or without graphics pipes.

And you're right, Origin had three models (maybe more): a tower system (o200), a deskside system (same as Onyx2), and a rack (same as Onyx2).

By Indy Impact do you mean the teal/purple Indigo2? Weirdly, the teal desktop was named Indigo2...

Weird but cool stuff.


I actually forgot that they made a low-end 'Indy' and never saw one. We called the Indigo2 the Indy.

There were some oddities with ad dollars and prestige clients in the 90s, where systems would be upgraded while avoiding swapping out the serialized, asset-tagged parts.

The teal Indigo2s were the originals, with the Impact-graphics ones being purple.

But all the Indigo2s I saw typically were R10k + Max Impact graphics, despite the color.

The cases were identical.

But Evans & Sutherland and Lucas are the only places where I dealt with SGI, so I'm probably not a good source for the typical customer's experience.


I'm tugging at my memory with this page: http://www.sgistuff.net/hardware/systems/index.html

I had a teal Indigo2, an O2, and an Indy at various points. Neat stuff, but time moves on.

There are some complex setups listed on the Origin/Onyx pages. You could mostly mix and match boards to get what you wanted.


I should add, the o200 was a 4U chassis with a deskside skin option.

If you popped off the plastic and had rails you could convert them.

The o200 had STONITH (shoot the other node in the head) ports like the o2000, for clustering etc.


Note that while containers can be leveraged to run processes at lower privilege levels, they are not secure by default, and actually run at elevated privileges compared to normal processes.

Make sure the agent cannot launch containers and that you are switching users and dropping privileges.

On a Mac you are running a VM, which helps, but on Linux the user is responsible for constraints, and by default they are trivial to bypass.

Containers have been fairly successful for security because the most popular images have been leveraging traditional co-hosting methods, like nginx dropping root, etc.

By themselves, without actively doing the same, they are not a security feature.

While there are some reactive defaults, Docker places the responsibility for dropping privileges on the user and image. Just launching a container is security through obscurity.
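For example, the dropping has to be requested explicitly at launch time; a sketch using standard docker/podman flags, with the image and uid purely illustrative:

```shell
# Run as an unprivileged uid/gid, drop all capabilities, forbid
# privilege re-escalation via setuid binaries, and mount read-only.
podman run --rm \
  --user 65534:65534 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  docker.io/library/alpine id
```

None of these are defaults; omit any one of them and the container keeps that privilege.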

It can be a powerful tool to improve security posture, but don’t expect it by default.


To add to this,

The concept of connascence, rather than coupling, is what I find more useful for trade-off analysis.

Synchronous connascence means that you have only a single architectural quantum, in Neal Ford's terminology.

As Ford is less religious and more respectful of real-world trade-offs, I find his writings more useful for real-world problems.

I encourage people to check out his books and see if they are useful. It was always hard to mention connascence, as it has a reputation of being ivory-tower architect jargon, but in a distributed-systems world it is very pragmatic.


Google "doesn't sell your data", but RTB (real-time bidding) leaks that info, and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is that it is trivial to make legal.

It is well documented in many many places, people just don't care.

Google can claim that it doesn’t sell your data, but if you think that the data about your searches isn't being sold, here is just a small selection of real sources.

https://www.iccl.ie/wp-content/uploads/2022/05/Mass-data-bre...

And it isn't paranoia, consumer surveillance is a very real problem, and one of the few paths to profitability for OpenAI.

https://techpolicy.sanford.duke.edu/data-brokers-and-the-sal...

https://stratcomcoe.org/cuploads/pfiles/data_brokers_and_sec...

https://www.ftc.gov/system/files/ftc_gov/pdf/26AmendedCompla...

https://epic.org/a-health-privacy-check-up-how-unfair-modern...


> and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.

Citation needed for a claim of this magnitude.

> It is well documented in many many places, people just don't care.

Yes, please share documentation of companies buying search data and rejecting candidates for it.

Like most conspiracy theories, there are a lot of statements about this happening and being documented but the documentation never arrives.


Like most cults, you ignore direct links with cites from multiple government agencies, but here is another.

https://www.upturn.org/work/comments-to-the-cfpb-on-data-bro...

> Most employers we examined used an ATS capable of integrating with a range of background screening vendors, including those providing social media screens, criminal background checks, credit checks, drug and health screenings, and I-9 and E-Verify. As applicants, however, we had no way of knowing which, if any, background check systems were used to evaluate our applications. Employers provided no meaningful feedback or explanation when an offer of work was not extended. Thus, a job candidate subjected to a background check may have no opportunity to contest the data or conclusions derived therefrom.

If you are going to ignore a decade of research etc... I can't prove it to you.

> The agency found that data brokers routinely sidestep the FCRA by claiming they aren't subject to its requirements – even while selling the very types of sensitive personal and financial information Congress intended the law to protect.

https://www.consumerfinance.gov/about-us/newsroom/cfpb-propo...

> Data brokers obtain information from a variety of sources, including retailers, websites and apps, newspaper and magazine publishers, and financial service providers, as well as cookies and similar technologies that gather information about consumers’ online activities. Other information is publicly available, such as criminal and civil record information maintained by federal, state, and local courts and governments, and information available on the internet, including information posted by consumers on social media.

> Data brokers analyze and package consumers’ information into reports used by creditors, insurers, landlords, employers, and others to make decisions about consumers

https://files.consumerfinance.gov/f/documents/cfpb_fcra-nprm...

And that CFPB proposal was withdrawn:

https://www.consumerfinancialserviceslawmonitor.com/2025/05/...

Note screen shots of paywalled white papers from large HR orgs:

https://directorylogos.mediabrains.com/clientimages/f82ca2e3...

Image from here:

https://vendordirectory.shrm.org/company/839063/whitepapers/...

But I am betting you come back with another ad hominem, so I will stay in the real world while you ignore it; enjoy having the last word.


You keep straying from the question. The question was: who has access to google searches? RTB isn't google searches. Background screening isn't google searches. Social media isn't google searches. Cookies aren't google searches. etc etc

Every link you provided is for tangential things. They're bad, yes, but they're not google searches. Provide a link where some individual says "Yes, I know what so-and-so searched for last wednesday."


Where in your post are Google searches used?

Can you answer this question without walls of unrelated text, ad hominem attacks (saying I’m in a cult), or link bombing links that don’t answer the question?

It’s a simple question. You keep insisting there’s an answer and trying to ad hominem me for not knowing it, but you consistently cannot show it.

