
You can’t easily snapshot the current state of an OS and restore to that state like with git.

Let's assume that you can. For disaster recovery, this is probably acceptable, but it's unacceptable for basically any other purpose. Reverting the whole state of the machine because the AI agent (a single tenant in what is effectively a multi-tenant system) did something incorrect is unacceptable. Managing undo/redo in a multiplayer environment is horrific.

Maybe not for very broad definitions of OS state, but for specific files/folders/filesystems, this is trivial with FS-level snapshots and copy-on-write.

At least on macOS, an OS snapshot is a thing [1]; I suspect Cowork will mostly run in a sandbox, which Claude Code does now.

[1]: https://www.cleverfiles.com/help/apfs-snapshots.html
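
For what it's worth, creating and listing those APFS local snapshots is scriptable. A minimal sketch in Python (assuming macOS with tmutil on the PATH; restoring from a snapshot is a separate, more manual step, so this only covers create/list):

    import subprocess

    def create_local_snapshot():
        # Ask Time Machine to create an APFS local snapshot
        # (may require appropriate privileges).
        subprocess.run(["tmutil", "localsnapshot"], check=True)

    def list_local_snapshots(volume="/"):
        # Return the raw listing of APFS local snapshots for the given volume.
        out = subprocess.run(["tmutil", "listlocalsnapshots", volume],
                             check=True, capture_output=True, text=True)
        return out.stdout.splitlines()

    if __name__ == "__main__":
        create_local_snapshot()
        print(list_local_snapshots())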


Ok, you can "easily", but how quickly can you revert to a snapshot? I would guess creating a snapshot for each turn change with an LLM would become too burdensome to allow you to iterate quickly.

For the vast majority, this won't be an issue.

This is essentially a UI on top of Claude Code, which supports running in a sandbox on macOS.


All major OSes support snapshotting, and it's not a panacea on any of them.

Well, there is CRIU on Linux, for what it's worth, which can at least snapshot the state of an application, and I suppose something similar must be available for filesystems as well.

Also, one can simply run a virtual machine, which can do that, but then the issue becomes how apps from outside connect to the VM inside.
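
For CRIU specifically, the flow is roughly checkpoint-then-restore. A hand-wavy Python sketch (the pid and image directory are made up; CRIU usually needs root, and processes with open TCP sockets, GUIs, etc. need extra flags):

    import subprocess

    IMAGES_DIR = "/tmp/criu-images"  # hypothetical directory for checkpoint images

    def checkpoint(pid: int):
        # Freeze the process tree rooted at pid and write its state to disk.
        # --leave-running keeps the original process alive after the dump.
        subprocess.run(["criu", "dump", "-t", str(pid), "-D", IMAGES_DIR,
                        "--shell-job", "--leave-running"], check=True)

    def restore():
        # Recreate the process tree from the images written by checkpoint().
        subprocess.run(["criu", "restore", "-D", IMAGES_DIR, "--shell-job"],
                       check=True)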


Filesystems like ZFS, Btrfs, and bcachefs have snapshot creation and rollback as features.
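
With ZFS, for example, snapshotting before each agent turn and rolling back when things go sideways is basically one command each. A rough sketch ("tank/work" is a made-up dataset name; rollback discards everything written after the snapshot, and plain rollback only targets the most recent snapshot unless you pass -r):

    import subprocess

    DATASET = "tank/work"  # hypothetical ZFS dataset the agent works in

    def snapshot(name):
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

    def rollback(name):
        # Discards all changes made to the dataset since the snapshot.
        subprocess.run(["zfs", "rollback", f"{DATASET}@{name}"], check=True)

    snapshot("before-agent-turn-1")
    # ... let the agent loose ...
    rollback("before-agent-turn-1")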

Sure you can. Filesystem snapshotting is available on all OSes now.

I wonder if in the long run this will lead to the ascent of NixOS. They seem perfect for each other: if you have git and/or a snapshotting filesystem, together with the entire system state being downstream of your .nix file, then go ahead and let the LLM make changes willy-nilly, you can always roll back to a known good version.

NixOS still isn't ready for this world, but if it becomes the natural counterpart to LLM OS tooling, maybe that will speed up development.


You're all referencing the strange idea of a world in which no open-weight coding models would be trained in the future. Even in a world where VC spending vanished completely, coding models are such a valuable utility that I'm sure, at the very least, companies/individuals would crowdsource them on a recurring basis, keeping them up to date.

The value of this technology has been established, it's not leaving anytime soon.


SOTA models cost hundreds of millions to train. I doubt anyone is crowdsourcing that.

And that’s assuming you already have a lot of the infrastructure in place.


I think FAANG and the like would probably crowdsource it given that—according to the hypothesis presented—they would only have to do it every few years, and ostensibly they are realizing improved developer productivity from the models.

I don’t think the incentive to open source is there for $200 million LLM models the same way it is for frameworks like React.

And for closed source LLMs, I’ve yet to see any verifiable metrics that indicate that “productivity” increases are having any external impact—looking at new products released, new games on Steam, new startups founded etc…

Certainly not enough to justify bearing the full cost of training and infrastructure.


What should Home Depot be doing? They don’t control the administration or the ICE raids. Forcing day laborers off the property ensures fewer raids happen on the property—I haven’t really understood the boycotts.


They don't have to set up Flock cameras and share the data with people who plan the ICE raids.

Home Depot's hands aren't totally clean here.


Home Depot put up the cameras to deal with organized crime, both theft and gift-card fraud. Flock specifically advertises that Home Depot put up the cameras to deal with gift card fraud:

> The Home Depot leveraged Flock Safety’s technology to close a case involving a multi-state gift card tampering ring, resulting in fraud and property theft charges exceeding $300,000. This type of success underscores how powerful connected data can be in mitigating fraud risks. [0]

Aside from that, Home Depot has been dealing with massive, multi-state, organized theft campaigns. Earlier this month, NY prosecutors lodged 780 counts of theft against thirteen suspects who stole millions of dollars of merchandise from Home Depot stores in nine states [1].

Not everything is about illegal immigrants.

[0] https://www.flocksafety.com/blog/combating-retail-fraud-with... [1] https://queenseagle.com/all/2025/12/12/retail-theft-ring-tha...


You wanna prevent gift card fraud? Stop selling gift cards.

Gift cards are a huge fraud vehicle by their nature. Home Depot is just noticing because it's fraud against them, rather than the more usual money laundering for scams. Retailers turn a bit of a blind eye, since they make so much money from gift cards that never get used or end up with leftover balances. But really, gift cards are an attractive nuisance and add no value for the (non-sucker) consumer.

And the cameras will have little effectiveness after the first few arrests anyway. "Don't let the LPR catch your car" just becomes part of the tradecraft for these organized operations, whereas sporadic, opportunistic, individualized ripoffs won't create much of a signature in the LPR stream.


Last time I bought 10-gauge conduit wiring, it was literally padlocked and I needed a manager to get it, because the theft issues are so bad.


Why not?

It is really no different than having drug dealers set up shop on your corner and sharing footage with police. You have people who are likely committing criminal activity (multiple crimes in the day laborer case) and are sharing footage with the relevant authorities.

The politicization of enforcement doesn’t change that, as a business owner, I would not want to own the location where people facilitate illegal transactions.


> no different

In your world view immigrants working jobs you find beneath you is the same as someone selling drugs?

> likely committing criminal activity

You understand that exploiting day laborers to circumvent labor laws puts the liability (mostly civil, with criminal liability vanishingly rare) on the employer rather than the employee, right?

We use laws rather than your own personal hatred of immigrants to define criminality.


I’ve done landscaping, home repair, fence construction, outdoor painting. My family still actively does. I don’t find them beneath me.

Working under the table without work authorization is actually spectacularly illegal as an employer and employee. Tax evasion is also spectacularly illegal as an individual.

What are you talking about?


Killing a comment that links to dot gov sources about undocumented workers being protected, rather than prosecuted, by labor law and showing immigrants pay taxes is fascinating indeed.

https://www.dir.ca.gov/DIRNews/2025/2025-53.html

"The Labor Commissioner is reminding all workers that California’s labor laws protect every worker in the state, regardless of immigration status."

https://docs.house.gov/meetings/JU/JU01/20250122/117827/HHRG...

"A new study shows that undocumented immigrants paid nearly $100 billion in federal, state and local tax revenue in 2022 while many are shut out of the programs their taxes fund."


The reason it’s dead is these are completely irrelevant and you aren’t having a conversation, you’re taking a pulpit.

California does not dictate federal labor law and I’m sure that you already know that. Your arguments are bad and aggressive.

You’d have way more influence and agreement if you argued about immigration processes as a whole (“why are these people with jobs not given visas already?”) than these contrived obviously ridiculous and irrelevant excerpts.

You’re arguing with me like I won’t actually think about what you say, which is the “not the HN style” comment I gave you before. I will.


[dead]


You seem to not be reading anything I’m saying. I have family that works for legally operated blue collar businesses.

The difference is engaging in criminal activity.

Your arguments are spectacularly lazy so I’ll ask you to show me where people not authorized to work in the country have no legal liability if they choose to work in the country.

I don’t really know what’s ruffled your feathers so much here, but this isn’t really how HN operates. It seems like you got a bit flustered when the “you’re a bad rich person” argument didn’t work, and now you’re just flailing wildly.


[flagged]


You won’t answer the question because you can’t. Your links are irrelevant which is why your post is dead.


Unsure what question you are referring to.

Thanks for letting me know it was [dead].

Killing a comment that links to dot gov sources about undocumented workers being protected, rather than prosecuted, by labor law and showing immigrants pay taxes is fascinating indeed.

https://www.dir.ca.gov/DIRNews/2025/2025-53.html

"The Labor Commissioner is reminding all workers that California’s labor laws protect every worker in the state, regardless of immigration status."

https://docs.house.gov/meetings/JU/JU01/20250122/117827/HHRG...

"A new study shows that undocumented immigrants paid nearly $100 billion in federal, state and local tax revenue in 2022 while many are shut out of the programs their taxes fund."


I always thought having day laborers chilling in Home Depot parking lots was a net positive for the store and a bit of untapped potential. Companies pay a lot of money to insert themselves in the hiring stream, and here is Home Depot as the de facto meeting point for a substantial amount of economic activity. Surely a more intelligent and less frightened company could make something positive out of this.

But that's what you get with a fear-based political leadership. ICE targets day laborers not because of the horrible damage they do to the US economy, but because they have been selected as the scapegoats du jour.


How can an intelligent company make money from illegal activity in your opinion? Day laborers hang out in the parking lot because they can't work legally; if they could, then they could use HD's contractor portal and bid on jobs there.


> What should Home Depot be doing?

Nothing? Why should they do anything?


Whatever they should be doing, it mustn't make my ears ring when I go to their store. There is only one way to prevent this: Lowe's.


HD doesn't need anything more than asking people to leave their property. These folks generally are on a public sidewalk.


Lobbying against the administration doing raids? It seems to me like every single part of this would hurt their business...

They put up deterrents for day laborers who might otherwise shop at Home Depot for the projects they're getting hired for...


Could there be a motive unrelated to ICE? That Home Depot does not like that day labourers are loitering and approaching customers entering and leaving the store.


I believe Home Depot offers a similar service now, so in a way they are directly competing.


Likely because they contrast with many of its own employees' lack of helpfulness, knowledge, or work ethic.


if so, you wouldn't expect this to be a new policy


This is amazing; now when is building this kind of thing going to become more accessible so we can start seeing a lot more of it? WebAssembly has been around for years now, but we still don't really see many companies compiling games or game-lite experiences to WASM. The tooling doesn't seem to be there, which is the necessary prerequisite for making experiences like this actually feasible for most devs. Is that coming, ever?


I don't even think you can make something like this accessible, other than rendering a normal site for specific users. It's almost entirely visual.


As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the market for labor for SWE's and my guess would be for the negative. People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here. If you were to just judge by the sentiment on HN, you would think no coder worth their weight was using this in the real world—but my experience on a few teams over the last two years has been exactly the opposite—people are often embarrassed to admit it but they are using it all the time. There are many engineers at Meta that "no longer code" by hand and do literally all of their problem solving with AI.

I remember last year or even earlier this year feeling like the models had plateaued and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, Gemini 3, et al., these models are incredibly powerful and more and more SWEs are leaning on them more and more—a trend that may slow down or speed up—but is never going to backslide. I think people that don't generally see this are fooling themselves.

Sure, there are problem areas—it misses stuff, there are subtle bugs, it's not good for every codebase, for every language, for every scenario. There is some sloppiness that is hard to catch. But this is true with humans too. Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWE's—it's good enough to do enough of the tasks for enough mid-level SWEs at enough companies to reshape the market.

I'm sure I'll get downvoted to hell for this comment; but I think SWEs (and everyone else for that matter) would best practice some fiscal austerity amongst themselves because I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, they've made all of the progress up to now in essentially the last 5 years and the models are already incredibly capable.


This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.

I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling like they're contributing towards something, despite some people who would rather do nothing.

To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.


> the real problems come if AI does work.

Exactly. For example, what happens to open source projects where developers don't have access to the latest proprietary dev tools? Or, what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen maven builds on Java projects that pull in hundreds or even thousands of libraries. 99% of that code is never even used.

The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.

edit: Added last sentence.


> It'd actually be quite convenient if AI's capabilities stayed exactly where they are now

That's what I'm crossing my fingers for: it makes our job easier but doesn't degrade our worth. It's the best possible outcome for devs.


> I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line.

Most people do not dream of working most white collar jobs. Many people dream of meaningful physical labor. And many people who worked in mines did not dream of being told to learn to code.


The important piece here is that many people want to contribute to something intellectually, and a huge pathway for that is at risk of being significantly eroded. Permanently.

Your point stands that many people like physical labor. Whether they want to artisanally craft something, or desire being outside/doing physical or even menial labor more than sitting in an office. True, but that doesn't solve the above issue, just like it didn't in reverse. Telling miners to learn to code was... not great. And from my perspective neither is outsourcing our thinking en masse to AI.


I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you told someone 5 years ago that AI can find and fix a bug in some complex code with almost zero human intervention nobody would believe you, but this is the reality today. It can find bugs, it can fix bugs, it can refactor code, it can write code. Yes, not perfect, but with a well organized code base, and with careful prompting, it rivals humans in many tasks (certainly outperforms them in some aspects).

As you're also saying this is the worst it will ever be. There is only one direction, the question is the acceleration/velocity.

Where I'm not sure I agree is with the perception this automatically means we're all going to be out of a job. It's possible there would be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.


> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.

This is the ultimate hypester's motte to retreat to whenever the bailey of a technology's claimed utility falls. It's trivially true of literally any technology, but also completely meaningless on its own.


I think whether you are right or wrong it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc); I certainly do on some of these points and I think that's a rational response to be aware of the risk of the future unknown.

In general I think: if I were not personally invested in this situation (i.e. another man on the street), what would be my immediate reaction to this? Would I still become a software engineer, for example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?

I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.


> code generation today is the worst that it ever will be, and it's only going to improve from here.

I'm also of the mindset that even if this is not true, that is, even if current state of LLMs is best that it ever will be, AI still would be helpful. It is already great at writing self contained scripts, and efficiency with large codebases has already improved.

> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.

Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point in their possibly early childhood or career when programming was more for fun than work, thought of how cool it would be for a computer program to write a computer program. And now that we have the capability, in front of our eyes, we're afraid of it.

But one thing humans are really good at is adaptability. We adapt to circumstances and situations -- good or bad. Even if the worst happens and people lose jobs, it will be negatively impactful for their families in the short term; however, over a period of time, humans will adapt to the situation, adapt to coexist with AI, and find the next endeavour to conquer.

Rejecting AI is not the solution. Using it as any other tool, is. A tool that, if used correctly, by the right person, can indeed produce faster results.


I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US—when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, it could be the first restructuring where we see massive losses in jobs.


> I mean, they've made all of the progress up to now in essentially the last 5 years

I have to challenge this one: the research on natural language generation and machine learning dates back to the 50s; it's just that it only recently came together at scale in a way that became useful. Tons of the hardest progress was made over many decades, and very little innovation happened in the last 5 years. The innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning with human feedback and other such fine-tuning.


We're definitely in the territory of splitting hairs; but I think most of what people call modern AI is the result of the transformer paper. Of course this was built off the back of decades of research.


> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.

I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.


The "hallucination problem" can't be solved, it's intrinsic to how stochastic text and image generators work. It's not a bug to be fixed, it's not some leak in the pipe somewhere, it is the whole pipe.

> there's still going to be a lot of toxic waste generated.

And how are LLMs going to get better as the quality of the training data nosedives because of this? Model collapse is a thing. You can easily see a scenario how they'll never be better than they are now.


> People need to remember that the ability of AI in code generation today is the worst that it ever will be

I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.

I suspect unless we have real AGI we won't have human-level coding from AIs.


It has improved drastically, as evident by the kinds of issues these things can do with minimal supervision now.


He says a lot of things. Just because some of them end up being true by happenstance doesn't make him a prophet.


I feel like the idea here is cute; but does it realistically work at scale? Of course, a messaging app like this—if it's going to work anywhere, is going to work in Gaza, one of the (at least formerly) most densely populated areas in the world. But Bluetooth was not designed for this type of communication whatsoever; phones can only establish Bluetooth connections between devices at the very most 100 ft apart under the most ideal conditions, and the range is probably much lower than that in practice.

Even if people are living in open-air conditions I can imagine messages getting stuck or being delivered very late; especially at night when there may not be a lot of human movement. How well does this actually work in practice?


The point the OP is making is that LLMs are not reliably able to provide safe and effective emotional support as has been outlined by recent cases. We're in uncharted territory and before LLMs become emotional companions for people, we should better understand what the risks and tradeoffs are.


I wonder if statistically (hand waving here, I'm so not an expert in this field) the SOTA models do as much or as little harm as their human counterparts in terms of providing safe and effective emotional support. Totally agree we should better understand the risks and trade-offs, but I wouldn't be super surprised if they are statistically no worse than us meat bags at this kind of stuff.


One difference is that if it were found that a psychiatrist or other professional had encouraged a patient's delusions or suicidal tendencies, then that person would likely lose his/her license and potentially face criminal penalties.

We know that humans should be able to consider the consequences of their actions and thus we hold them accountable (generally).

I'd be surprised if comparisons in the self-driving space have not been made: if waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Though we also know that with big corporations, even clear negligence that leads to mass casualties does not often result in criminal penalties (e.g., Boeing).


> that person would likely lose his/her license and potentially face criminal penalties.

What if it were an unlicensed human encouraging someone else's delusions? I would think that's the real basis of comparison, because these LLMs are clearly not licensed therapists, and we can see from the real world how entire flat earth communities have formed from reinforcing each others' delusions.

Automation makes things easier and more efficient, and that includes making it easier and more efficient for people to dig their own rabbit holes. I don't see why LLM providers are to blame for someone's lack of epistemological hygiene.

Also, there are a lot of people who are lonely and for whatever reasons cannot get their social or emotional needs met in this modern age. Paying for an expensive psychiatrist isn't going to give them the friendship sensations they're craving. If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

> if waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Waymo of course -- but Waymo also shouldn't be financially punished any harder than humans would be for equivalent honest mistakes. If Waymo truly is much safer than the average driver (which it certainly appears to be), then the amortized costs of its at-fault payouts should be way lower than the auto insurance costs of hiring out an equivalent number of human Uber drivers.


> I would think that's the real basis of comparison

It's not because that's not the typical case. LLMs encourage people's delusions by default, it's just a question of how receptive you are to them. Anyone who's used ChatGPT has experienced it even if they didn't realize it. It starts with "that's a really thoughtful question that not many people think to ask", and "you're absolutely right [...]".

> If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Talk to ChatGPT and try to put yourself into the shoes of a hurtful person (e.g. what people would call "narcissistic") who's complaining about other people. Keep in mind that they almost always suffer from a distorted perception so they genuinely believe that they're great people.

They can misunderstand some innocent action as a personal slight, react aggressively, and ChatGPT would tell them they were absolutely right to get angry. They could do the most abusive things and as long as they genuinely believe that they're good people (as they almost always do), ChatGPT will reassure them that other people are the problem, not them.

It's hallucinations feeding into hallucinations.


> LLMs encourage people's delusions by default, it's just a question of how receptive you are to them

There are absolutely plenty of people who encourage others' flat earth delusions by default, it's just a question of how receptive you are to them.

> There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Again, that sounds like a people problem. Dictators infamously fall into this trap too.

Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike. If others are okay with having their egos stroked and their delusions encouraged and validated, that's their prerogative.


> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.

You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapist, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.


> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.

That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.


>Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

We're not holding LLMs to a higher standard than humans, we're holding them to a different standard than humans because - and it's getting exhausting having to keep pointing this out - LLMs are not humans. They're software.

And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem. We tend not to accept that kind of behavior in other people, because we understand the very real negative consequences of mass delusion and sociopathy. Why should we accept it from software?


> LLMs are not humans. They're software.

Sure, but the specific context of this conversation are the human roles (taxi driver, friend, etc.) that this software is replacing. Ergo, when judging software as a human replacement, it should be compared to how well humans fill those traditionally human roles.

> And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

Fair point.

> And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem.

Fair point again. Thanks for helping me gain a wider perspective.

However, I don't see it as inevitable that this becomes a serious large-scale problem. In my experience, current GPT 5.1 has already become a lot less cloyingly sycophantic than Claude is. If enough people hate sycophancy, it's quite possible that LLM providers are incentivized to continue improving on this front.

> We tend not to accept that kind of behavior in other people

Do we really? Maybe not third party bystanders reacting negatively to cult leaders, but the cult followers themselves certainly don't feel that way. If a person freely chooses to seek out and associate with another person, is anyone else supposed to be responsible for their adult decisions?


They also are not reliably able to provide safe and effective productivity support.


Who ever claimed there was a therapist shortage?


The process of providing personal therapy doesn't scale well.

And I don't know if you've noticed, but the world is pretty fucked up right now.


... because it doesn't have enough therapists?


People are so naive if they think most people can solve their problems with a one-hour session a week.



I think most Western governments and societies at large.


This is neat! Any notes on the algo for controlling the output of the ws2812s?



Yes, it's probably something like that. WLED has a ton of patterns, it's really nice.
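
Most of those patterns boil down to a per-frame function over the LED index. Something like the classic rainbow "wheel" gives the general shape (plain Python, not WLED's actual code, no hardware library; LED count and frame offset are just illustrative):

    def wheel(pos):
        # Map 0-255 to an RGB colour around the hue wheel.
        pos = pos % 256
        if pos < 85:
            return (255 - pos * 3, pos * 3, 0)
        if pos < 170:
            pos -= 85
            return (0, 255 - pos * 3, pos * 3)
        pos -= 170
        return (pos * 3, 0, 255 - pos * 3)

    def rainbow_frame(num_leds, t):
        # One frame of a scrolling rainbow: each LED's hue is offset by frame counter t.
        return [wheel(i * 256 // num_leds + t) for i in range(num_leds)]

    print(rainbow_frame(8, 0))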

