Arch-TK's comments | Hacker News

That weird feeling when you realise that the people you hang out with form such a weird niche that something considered common knowledge among you is being described as "buried deep within the C standard".

What's noteworthy is that the compiler isn't required to generate a warning if the array is too small. That's just GCC being generous with its help. The official stance is that it's simply undefined behaviour to pass a pointer to an object which is too small (yes, only to pass, even if you don't access it).


The other fun wart with `static` is C++ doesn't support it. So it has to be macro'd out in headers shared with C++.

https://godbolt.org/z/z9EEcrYT6


And probably never will, because C++ compatibility with C, beyond what was done initially, aims to stay as close as possible, but not at the expense of better alternatives that the language already offers.

Thus std::array, std::span, std::string, std::string_view, std::vector, with hardened options turned on.

For the static thing, the right way in C++ is to use a template parameter,

    template<typename T, int size>
    int foo(T (&ary)[size]) {
       return size;
    }
-- https://godbolt.org/z/MhccKWocE

If you want to get fancy, you might make use of concepts, or constexpr to validate size at compile time.


I guess. I think it's mostly not a very useful part of C, so it doesn't see much adoption in C anyway.

In these applications, size and T are fixed -- you'd just take `std::span<uint8_t, XCHACHA20POLY1305_NONCE_SIZE>` rather than templating.


The C feature does not have to be fixed-size though.

The authors in the article want a fixed-sized feature; C just doesn't have it.

C has this fixed-sized feature which is what the article points out.

Not surprising and not a "wart". C and C++ have diverged since the mid-90s and are two very different languages now. E.g. trying to build C code with a C++ compiler really hasn't made much sense for about 20 years.

People still routinely want to incorporate C libraries in C++ code, using C headers. It mostly works.

Sure, but it works mostly by intentional hobbling of headers. Meanwhile in other languages you just write/generate bindings and they survive it.

The most important C feature that C++ lacked for a long time was C99 designated initializers, but C++ finally supports them in C++20[0].

Other than that... I'm not sure what hobbling you have in mind. Many C23 features come directly from earlier C++ standards (auto, attribute syntax, nullptr, binary literals, true/false keywords). VLAs? Though these are optional in newer C standards, too.

[0]: https://en.cppreference.com/w/cpp/language/aggregate_initial...


> C99 designated initializers, but C++ finally supports them in C++20

Unfortunately C++20 designated init has been butchered so much compared to C99 that it is pretty much useless in practice except for the most trivial structs (for instance, designators must appear in order of declaration, array element init is completely missing, designator chaining doesn't work... and those are only the most important omissions).


For me it is _Complex and variably modified types (the latter mandatory in C23).

A lot of "modern" C features (e.g. added after ca 1995) are unknown to C++ devs, I would have expected that at least the Linux kernel devs know their language though ;)

The article seems to perpetuate one of those age old myths that NAT has something to do with protection.

Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or explicit port forwarding. But implementing NAT on its own says nothing about the behavior of your router firewall with regards to receiving Martians, or with regards to whether the router firewall itself accepts connections and if the router firewall itself isn't running some service which causes exposure.

To actually protect things behind NAT you still need firewall rules and you can keep those rules even when you are not using NAT. Thus those rules, and by extension the protection, are separable from the concept of NAT.

This is the kind of weird argument that has caused a lot of people who hadn't ever used IPv6 to avoid trying it.


Yeah, I keep meaning to write something about this. I've definitely noticed people wary of IPv6 because their machines get "real" IP addresses rather than the "safe" RFC1918 ones. Of course, having a real IP address is precisely the point of IPv6.

It's like we've been collectively trained to think of RFC1918 as "safe" and forgotten what a firewall is. It's one of those "a little knowledge is a dangerous thing" things.


In a world where people think NAT addresses are safe because you don’t need to know anything else about firewalls, IPv6 _is_ fundamentally less secure.


> In a world where people think NAT addresses are safe because […]

The vast, vast majority of people do not know what NAT is: ask your mom, aunt, uncle, grandma, cousin(s), etc. They simply have a 'magic box' (often from the ISP) that "connects to Internet". People connect to it (now mostly via Wifi) and they are "on the Internet".

They do not know about IPv4 or IPv6 (or ARP, or DHCP, or SLAAC).

As long as the magic box is statefully inspecting traffic, which is done for IPv4-NAT, and for IPv6 firewalls, it makes no practical difference which address family you are using from a security perspective.

The rending of garments over having a globally routable IPv6 address (but not globally reachable, because of SPI) on your home network is just silliness.

If you think NAT addresses are safe because… of any reason whatsoever really… it simply shows a lack of network understanding. You might as well be talking to a Flat Earther about orbital mechanics.


> which is done for IPv4-NAT, and for IPv6 firewalls

Are internet routers that do IPv4 NAT usually also doing an IPv6 firewall (meaning they only let incoming connections in if they are explicitly allowed by some configuration)? Maybe that's where the insecurity comes from. A home NAT cannot work any other way (it fails "safely"); an absent firewall usually means everything just gets through.


All the ones I've had have had a firewall by default for IPv4 and IPv6, yes. If ISPs are shipping stuff without a firewall by default I'd consider that incompetence given people don't understand this stuff and shitty IoT devices exist.

I do wonder how real the problem is, though. How are people going to discover a random IPv6 device on the internet? Even if you knew some /64 is residential, it's still impractical to scan and find anything there (18 quintillion possible addresses). If you scanned one address per millisecond it would take roughly 6×10^8 years, or about 1/8 the age of the earth, to scan a /64.

Are we just not able to think in such big numbers?


> Are internet routers that do ipv4 NAT usually also doing an IPv6 firewall (meaning they only let incoming connections in if they are explicitly allowed by some configuration)?

Consider the counter-factual: can you list any home routers/CPEs that do not do SPI, regardless of protocol? If someone found such a thing, IMHO there would be a CVE issued quite quickly for it.

And not just residential stuff: $WORK upgraded firewalls earlier in 2025, and in the rules table of the device(s) there is an entry at the bottom that says "Implicit deny all" (for all protocols).

So my question to NAT/IPv6 Truthers is: what are the devices that allow IPv6 connections without SPI?

And even if such a thing exists, a single IPv6 /64 subnet is as large as four billion (2^32) IPv4 Internets (2^32 addresses): good luck trying to find a host to hit in that space (RFC 7721).


There is one practical difference. IPv6 without a NAT exposes information about different devices inside the private network. A NAT (whether ipv4 or ipv6) will obfuscate how many devices are on the network. Whether that is desirable depends on the circumstances.


> A NAT (whether ipv4 or ipv6) will obfuscate how many devices are on the network. Whether that is desirable depends on the circumstances.

"Revisiting IoT Fingerprinting behind a NAT":

* https://par.nsf.gov/servlets/purl/10332218

"Study on OS Fingerprinting and NAT/Tethering based on DNS Log Analysis":

* https://www.irtf.org/raim-2015-papers/raim-2015-paper21.pdf

Also:

> […] In this paper, we design an efficient and scalable system via spatial-temporal traffic fingerprinting from an ISP’s perspective in consideration of practical issues like learning-testing asymmetry. Our system can accurately identify typical IoT devices in a network, with the additional capability of identifying what devices are hidden behind NAT and the number of each type of device that share the same IP address. […]

* https://www.thucloud.com/zhenhua/papers/TON'22%20Hidden_IoT....

Thinking you're hiding things because you're behind a NAT is security theatre.


> IPv6 without a NAT exposes information about different devices inside the private network.

In practice this has not been true for over 20 years.

IPv6 devices on SLAAC networks (which is to say, almost all of them) regularly rotate their IPv6 address. The protocol also explicitly encourages (actually, requires) hosts to have more than one IPv6 address active at any given time.

You are also making a wrong assumption that the externally visible address and port ranges chosen by the NAT device do not make the identity of internal devices easily guessable.


In both cases the only consumer security comes from "the home router defaults to being a stateful firewall". The only difference between the two is whether it also defaults to doing NAT with that state, which is not what was making IPv4 secure for people unaware either.


If you think about it, NAT offers pretty much the same protection as a default stateful firewall. Only allowing packets from the outside related to a connection initiated from the inside.


> Only allowing packets from the outside related to a connection initiated from the inside.

NAT, a.k.a. IP masquerading, does not do that: it only figures out that some ingress packets whose DST is the gateway actually map to previous packets coming from a LAN endpoint that have been masqueraded before, performs the reverse masquerading, and routes the new packet there.

But plop in a route to the network behind and unmatched ingress packets definitely get routed to the internal side. To have that not happen you need to drop those unmatched ingress packets, and that's the firewall doing that.

Fun fact: some decade ago an ISP where I lived screwed that up. A neighbour and I figured out the network was something like that:

    192.168.1.x --- 192.168.1.1 --
                                  \
                                   10.0.0.x ----> WAN
                                  /
    192.168.2.x --- 192.168.2.1 --
192.168.1 and 192.168.2 would be two ISP subscribers and 10.0.0.x some internal backhaul. 192.168.x.1 would perform NAT but not firewall.

You'd never see that 10.0.0.x usually as things towards WAN would get NAT'd (twice). But 10.0.0.x would know about both of the 192, so you just had to add respective routes to each other in the 192.168.x.1 and bam you'd be able to have packets fly through both ways, NAT be damned.

Network Address Translation is not a firewall and provides no magically imbued protection.


I have never seen a NAT implementation that forwarded every packet sent to it. As you stated in your first sentence, NAT forwards packets that match previous packets. Assuming it does that job well, that’s filtering right there.


It's pretty common to have the NAT gateway also be a stateful firewall (you're tracking state, after all), but they're not the same and you can have one without the other.

It's just uncommon in consumer or prosumer devices.

A similar analogy is perhaps industrial washing machines vs consumer ones, or that printer/scanner combos are common (even in offices) but print shops and people who actually need a lot of paper would have dedicated equipment that does either scanning or copying better.

It's also like a Leatherman: they all have some commonality (the need to be gripped) so there's a lot of combination; but a tradie would only use one as a last resort, often preferring a proper screwdriver.



> NAT offers pretty much the same protection as a default stateful firewall

Most NAT implementations require a stateful firewall anyway; it's the same thing as the NAT flow table. This whole trope is mostly about getting into people's heads not to forget about actually configuring that "free" firewall properly, since it'll just be a poor one otherwise.


>Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or explicit port forwarding.

Don’t forget source routing. That said, depending on your threat model, it’s not entirely unreasonable to just rely on your ISP’s configuration to protect you from stuff like this, specifically behind an IANA private range.


I don't think source routing is a thing anymore. At least if you're talking about the ability of a source to specify a path to its destination.

The last time I heard about source routing actually being a useful feature or a vulnerability used by hackers was the 1990's.


This is a weird argument.

First off, we don't have a good way to actually measure an individual's intelligence. IQ is actually meant to correlate with g, which is a hidden factor we're trying to measure. IQ tests are good insofar as you look at their results from the perspective of a population. In these cases individual variation in how well it correlates smooths out. We design IQ tests and normalise IQ scores such that across time and over the course of many studies these tests appear to correlate with this hidden g factor. Moreover, anything below 70 or above 130 is difficult to measure accurately: IQ is benchmarked such that it has a mean of 100 and a standard deviation of 15, and below 70 or above 130 is outside of two standard deviations.

So, in summary, IQ is not a direct measure of intelligence. What you're doing here is pointing at some random guy who allegedly scored high on an IQ test and saying: "Look at how dumb that guy is. We must be really bad at testing."

But to say we don't know what intelligence is, is silly, since we are the ones defining that word. At least in this sense. And the definition we have come up with is grounded in pragmatism. The point of the whole field of research is to come up with and keep clarifying a useful definition.

Worth also noting that you can study for an IQ test which will produce an even less correlated score. The whole design and point of IQ tests is done with the idea of testing your ability to come up with solutions to puzzles on the spot.


My point is to state that one of two things must be true: either IQ does not really measure intelligence, or intelligence (being the thing IQ measures or correlates to) isn't much of a desirable quality for agentic systems to have. I suspect it's a mix. The people on the upper end of the IQ spectrum tend to lead wholly uninspiring lives; the 276 guy isn't the only example, fraud or not: there's a couple of university professors with relatively average publishing histories, a couple of suicides, a couple of wacko cult leaders, a couple of self-help gurus... and the goat, Terence Tao, he's up there, but it's interesting how poorly the measure correlates with anything we'd describe as "success".

The apologists enter the chat and state "well, it's because they're frauds or they're gaming the system" without an ounce of recognition that this is exactly what we're designing AI systems to do: cheat the test. If you expect being able to pass intelligence evals to be a way to grow intelligence, well, I suspect that will work out just about as well as IQ tests do for identifying individuals capable of things like highly creative invention.


You are throwing around anecdotes. They're not that helpful.

It's worth noting that success in life (for whatever that is defined as) is not the same thing as intelligence. And being intelligent isn't even enough for you to be successful in intellectual pursuits either.

You can be highly intelligent and receive no education, have no access to books (or be unable to read) and then you might be able to intelligently solve the problem of eating a sandwich but that wont get you anywhere.

Likewise, you can be intelligent and have access to the right tools but be too anxious to try to excel. Maybe you're intelligent and have unmedicated ADHD causing you to constantly fail to actually get anything completed in a timely manner.

There are a lot of things between IQ and success in life. But we do know for a fact that when controlling for other factors, we see positive trends between IQ and life success. That doesn't mean that IQ is the only factor.

Certainly the fact you can pull out a handful of anecdotes about high IQ individuals and talk about how uninspiring their lives are doesn't mean that all high IQ people are living uninspiring lives, or that living an inspiring life is uncorrelated with IQ, or that there is even a meaningful definition of an inspiring life.

Lastly, please note that there are lots of successful people who had an IQ test where they scored really low, and lots of unsuccessful people who had an IQ test where they scored high. This will in part be due to the fact that IQ doesn't correlate at 100% with anything, but also due to the fact that IQ doesn't correlate with itself over time at 100%. You can do an IQ test on an exceptionally bad day, or an exceptionally good day, and you might get an IQ test which is not good at measuring you in particular. That's why when we do research on this topic we apply multiple different tests, we control for variables, and we run these on large groups of people.

Whether intelligence is useful for a model or not, who knows. All I can tell you with relative confidence is that IQ tests are designed with humans in mind, and when you apply them to models, it is no longer clear what they measure.

One thing models don't have (yet) is lives which they can live and which we can study.


/bin is the "standard" location for bash on a subset of Linux distributions and basically no other Unix...

So it's not really a standard.

/bin/sh is a much more common convention but once again, not a standard.

There really isn't a truly portable shebang, but the same can be said about executables themselves. As part of the build or install step of whatever thing you're making, you should really be looking these up and changing them.

What's more, bash isn't a standard shell.


Sorry, I should probably think more widely, but I was just considering Linux distros.

> /bin is the "standard" location for bash on a subset of Linux distributions

Considering "location" such that it includes /bin symlinks, that would be nearly all distros, I would think...

> What's more, bash isn't a standard shell.

De facto and specifically among Linux distros, it is. It's probably an underestimate that 95% of all Linux distro installations have it preinstalled.


It's only really NixOS as far as I know that doesn't ever put bash in /bin/bash (as far as Linux distributions go). But, on the other hand, there are quite a few distros (or at least flavours of distros) which don't ship bash by default (alpine, minimal versions of most distros, and embedded-Linux focused stuff if you count it). I imagine the most common "installation" of Linux is userspace in a container (yeah I know there's no kernel there, but nobody who talks about "Linux" broadly speaking specifically cares about the kernel) and a good chunk of those will be minimal with no bash.

Bash has to be explicitly installed on OpenBSD, FreeBSD, NetBSD (I think, haven't used it in a while) and probably a bunch of others. And in all of those cases (that I know of) it doesn't end up in /bin/bash once installed.

The default bash shipped on Macs is so abhorrently ancient that it would be strictly better if it didn't exist, because it would reduce the number of people who think bash scripts I write are broken (they're not broken, they just inevitably depend on some bash 4+ feature). Moreover, hardcoding /bin/bash as your shebang in this case will prevent anyone from remediating this problem by installing a non-ancient bash, because the old one in /bin/bash will still get used.


> /bin is the "standard" location for bash on a subset of Linux distributions and basically no other Unix...

You’re forgetting macOS. It has been using /bin/bash forever.


Keep in mind that the bash you get on macOS is bash 3.2, released in 2006, so relying on it for portability might not be a good idea.


Pretty much. I will continue using "#!/usr/bin/env <language>".


Amazing website.

One thing I noticed is that while I hate being told what to do, and my partner hates being told what to do, and we understand deeply how we feel when someone tells us what to do, we still tell each other what to do (which goes especially badly after a long day).

Edit: I am glad you wrote this, so I didn't have to. It feels like reading my own autobiography. But the problem with reading about this stuff is that, if you forget for a minute that it's literally just how life is for you, it reads like some fantastical fiction comedy. I avoid telling anyone I deal with that I have ADHD because I feel like if I tell them they'll lump me in with some crappy mental model, and I avoid telling anyone I deal with about these problems because they sound completely absurd.


Thanks for this comment! You describe a very relatable situation :)

> I am glad you wrote this, so I didn't have to.

My hope was that this website would fulfill this goal, and I'm very glad you think so <3


Valve isn't likely to make SteamOS the kind of platform that facilitates intrusive* anti-cheat** or which is locked down in a way to prevent cheating at the client side. This means that a number of competitive multiplayer games will never run on it. I think in this regard, consoles still have an advantage*** if you're into those kinds of games.

* I don't care what the intention is, they are _objectively_ intrusive.

** Last time I argued this, someone seemed to assume that I was claiming that writing Linux kernel drivers is harder than Windows kernel drivers. I am not arguing that, you need some kind of trusted party enforcing signed kernel drivers and a signed kernel in order to make KLA sufficiently hard to bypass.

*** In terms of the average Joe just wanting their game to run rather than having to think about the ethical implications of buying hardware you don't actually own or running an OS which gives control of your hardware to various corporations (but not you).


> Valve isn't likely to make SteamOS the kind of platform that facilitates intrusive* anti-cheat* or which is locked down in a way to prevent cheating at the client side. This means that a number of competitive multiplayer games will never run on it. I think in this regard, consoles still have an advantage** if you're into those kinds of games.

Depends on just how successful SteamOS gets. If it starts to have a significant market share, competitive multiplayer games might start to find it hard to ignore. Though how they'd decide to deal with that, I have no idea.

I think Valve sees a future for anti-cheat where most of it is behavioral analysis. Client-side anti-cheat is a big game of cat and mouse. It does make cheats harder to develop, but to a point where the customer is impacted. Post-game analysis cannot be countered "technically". A cheat would need to mimic a real player's behavior, which in the end is a success: if you can't tell whether a player is cheating or not, does it matter that they are? Although things like wallhacks might be harder to detect.


"you can't tell if a player is cheating or not, does it matter that they are"

This is effectively where KLA has gotten to. There are still plenty of cheaters, people just don't realize.

I think it does matter in a strictly moral sense, and if people were more aware of how bad the problem is, they would likely be outraged. Alas, since they can't see it, they are not aware of it, so there is no outrage and the games companies are satisfied.


KLA?


Kernel-level Anticheat


I think the assumption that Valve would choose user protection over getting games to work is flawed, they want openness where possible because they see it as a competitive advantage. With VAC they clearly think that maximally invasive anti-cheat isn't necessary so maybe they'll try to push providers in that direction?


Valve thinks it's not necessary, but it's still up in the air whether it really isn't.

They have bet on the behavioral analysis anti-cheat horse but it hasn't won any races yet.

Moreover, they've proven that it's certainly more difficult to get it working than regular old fuck-the-end-user anti-cheat.

Lastly, don't assume that the success of the platform will persuade these companies. They were already firmly un-persuaded when the steam deck got popular. And really, I think the popularity of a platform depends on the support of these companies more than the support of these companies depends on the popularity of a platform.


There is a real need for anti-cheat / certified hardware. Valve is uniquely positioned to address it because they have trust from the gaming community. Ideally a single anti-cheat mechanism would be shared by all software vendors. Online games could request "console mode" involving hardware key exchange. Done right, this wouldn't have to be invasive like current anti-cheat.


It's either invasive anti-cheat on a vendor-controlled platform or it's a totally locked down vendor-controlled platform; there are no other options in the client-side anti-cheat space.

Given that Valve refuses to use KLA for their own competitive multiplayer games, and has gone out of their way to not make their hardware locked down, I really don't think they will go down the path of making a locked down platform or facilitating intrusive anti-cheat.


Is it truly either-or? Obviously the root of anti-cheat needs to be totally locked down, aka the TPM. But almost all "open" computers have a locked down TPM. The TPM doesn't need to prevent you from running an unsigned firmware, kernel, modules or user software, it only needs to report on whether you are / have. You can reboot your computer into "trusted" mode and run your games with anti-cheat. Then when you're done playing you can run as much unsigned software as you want.


You ask if it's either intrusive spyware or if it's a locked down system and then describe dual-booting intrusive spyware.

A TPM is entirely under your control. It's designed in such a way that you can't do certain things with data within it, but that's not because (at least in theory) someone else can and is controlling your TPM to prevent you from doing those things. The TPM, unlike an installation of Windows, doesn't only listen to Microsoft.


What I'm describing is exactly the situation now. Many people dual boot Windows & Linux, with kernel level anti-cheat on their Windows partition. The existence of Linux on the same computer does not prevent the kernel level anti-cheat from working on Windows.

Similarly, the presence of unsigned software on a computer would not stop a Linux kernel level anti-cheat from working, and the kernel level anti-cheat shouldn't prevent the unsigned software from working. Once you run that unsigned software, your machine is tainted similarly to the way your kernel is tainted if you load the NVidia driver.


I wonder if it’s possible to implement anti-cheat as a USB stick. Your GabeCube or gaming PC would stay open by default, but you could buy an anti-cheat accessory that plugs into a free USB port. Connecting that device grants access to match making with other people who have the device.

There are several products that rely on a USB device like this for DRM solutions. It’s probably much easier to unlock static assets than validate running code, but I don’t have insight on the true complexity.


>I wonder if it’s possible to implement anti-cheat as a USB stick. Your GabeCube or gaming PC would stay open by default, but you could buy an anti-cheat accessory that plugs into a free USB port. Connecting that device grants access to match making with other people who have the device.

What does the USB stick actually do? The hard part of implementing the anti-cheat (ie. either invasive scanning or attestation with hardware root of trust) is entirely unaddressed, so your description is as helpful as "would it be possible to implement a quantum computer as a USB stick?"


I am very skeptical there's much cheating in Counter-Strike or Dota.

They use different means to detect cheaters, which means sometimes they are banned several weeks after the fact, but they do ban cheaters.


We could just stop calling it an engineering discipline. You've laid out plenty of reasons why it is nothing like an engineering discipline in most contexts where people write software.

Real software engineering does exist. It does so precisely in places where you can't risk trying it and seeing it fail, like control systems for things which could kill someone if they failed.

People get offended when you claim most software engineering isn't engineering. I am pretty certain I would quickly get bored if I was actually an engineer. Most real world non-software engineers don't even really get to build anything, they're just there to check designs/implementations for potential future problems.

Maybe there are also people in the software world who _do_ want to do real engineering and they are offended because of that. Who knows.


I don't get offended. I've built software for my entire (43+ year) career and call myself a software developer (and back in the day, a computer programmer). I have never worked anywhere that treated it like actual engineering, but I have worked in plenty of places that had the "never enough time to do it right, but plenty of time to do it over" attitude.


Would be fun to see the traffic dumps, I would love to try to figure out the protocol offline with them.

Just spent half a day reverse engineering a Windows virtual printer driver (for work) and had to force myself to stop spending the rest of the day doing it.


The workers' organizations of the world's major oil-producing countries abuse a technological right that would help advance extraction without problems or harm to the workers, who have seen a 32% reduction in workplace deaths.


Is it not? What is it instead? Carbon negative?


Greenpeace was founded to oppose nuclear energy, because it would lead to nuclear war (that was their position at least), which illustrates that, even now, nuclear power is considered not-green.


Greenpeace is a big part of the problem, especially with most of the European left being aligned with them on nuclear hate.


Being "green" is like being "organic" compared to being carbon-neutral which is something you can just calculate.


Temporarily controlling screen brightness is something that, at least on Android, the app can do itself...

The cracked screen thing - funny.

I wonder how long these twats will hold out.

I hope by the next time I fly Ryanair, someone has figured out how to emulate the look of the app and extract the relevant data so I don't have to run their garbage malware on my phone in order to have the pleasure to fly a "cheap" airline which bills you for everything after using every dark pattern imaginable when you purchase their tickets.

