The Copilot in Visual Studio (Code) is not the same as Microsoft's Copilot. The former is GitHub's AI product and the latter is Microsoft's AI product. You can tell them apart because GitHub Copilot's icon is a helmet with goggles and Microsoft Copilot's icon is a colourful swirl thing.
It's wildly confusing branding not only because they're identically-named things that both repackage OpenAI's LLMs, but also because they're both ultimately owned by the same company.
I can only assume that the conflicting naming convention was either due to sheer incompetence or because they decided that confusing users was advantageous to them.
They do, and those models are served by Microsoft. You pay a premium per “request” (what exactly counts as a request is not fully clear to me) for certain models. If you use the native chat extension in VSCode for GitHub Copilot with the Opus model selected, you are not paying Anthropic; it counts against your GitHub Copilot subscription.
The Claude Code extension for VSCode from Anthropic will use your Claude subscription. But honestly it's not very good - I have it installed, but only for its "open in terminal" command (running Claude Code that way adds some small quality-of-life features: it knows it's inside VSCode, so it opens files in the editor pane next to it).
This is my biggest frustration as a full-time .NET developer. It's especially bad when you're searching for Visual Studio (IDE) specifics and get results for VS Code. It baffles me that a company that owns a search engine names its products so poorly.
Copilot for Visual Studio (IDE) supports multiple models, not just OpenAI's; it also includes Claude. It is basically a competitor to JetBrains AI.
The only good "AI" editor that supports Claude Code natively has so far been Zed. It's not PERFECT, but it has been the best experience short of just running Claude Code directly in the CLI.
Pretty sure the idea predates that lecture, it appears in Charles Stross' novel Accelerando from 2005 (which is based on short stories that were published years earlier).
There are other substances that can be used for reactor coolant. Molten salt reactors are actually substantially more efficient than water-cooled reactors because they have a higher operating temperature. You can also use liquid metal as coolant, such as lead or bismuth.
Could you elaborate? Why would being deep in the gravity well be a non-starter? I thought Mercury's proximity to Sol was a huge advantage because of the ample solar power which would make planet-side manufacturing easier.
They asked if the astronauts "want to risk it", not if it was actually safe. Those are very different questions. The astronauts are, in fact, the world's leading experts on whether or not they personally want to risk it, so it's not entirely unreasonable to think that they could answer that question.
It just depends on whether you think that the fact that they accept the risks is reason enough to let them fly a potentially-dangerous spacecraft.
I know we all have a lot of respect for astronauts, but the fact is that when someone tells them "it's safe enough", they largely have to trust that it is, actually, safe enough.
Artemis II doesn't need astronauts to do its flights. Astronauts are trained to survive in a spaceship that does not need them to do anything at all. The fact that it is their dream to fly in such a spaceship does not mean they have any valid idea of how much risk they are taking.
We can say "maybe the astronauts would accept to fly knowing that they have a probability of 1/30 of dying" all we want, but that doesn't answer the question here, which is: what is the probability that they die?
The article says "we don't really know: the first test flight was very concerning, and we used the exact same methods to prepare the second flight, so we won't really know how unsafe it is until we try it".
Sure, they have made tests on the ground. But the first flight proves that those tests are not enough, otherwise Artemis I wouldn't have had those issues in the first place.
Artemis II is not safe, at least by the standards we apply to things. It's the third flight of a capsule, on the second flight of the rocket, and the first flight of things like the life support system.
At the end of the day, one of the reasons astronauts are respected is they understand those risks, and go into space anyway. That doesn't mean we shouldn't try to minimize risks - but at some point the risk becomes acceptable, and the cost of reducing it too great.
To paraphrase a quote from Star Trek - risk is their business.
Project Hail Mary. It's a sci-fi novel by Andy Weir (author of The Martian) that was adapted into a movie released in theaters a couple of weeks ago. It's fantastic and you should totally read/watch it.
Unintentional denial-of-service attacks from AI scrapers are definitely a problem, I just don't know if "theft" is the right way to classify them. They shouldn't get lumped in with intellectual property concerns, which are a different matter. AI scrapers are a tragedy of the commons problem kind of like Kessler syndrome: a few bad actors can ruin low Earth orbit for everyone via space pollution, which is definitely a problem, but saying that they "stole" LEO from humanity doesn't feel like the right terminology. Maybe the problem with AI scrapers could be better described as "bandwidth pollution" or "network overfishing" or something.
Theft isn't far off; it seems closer to me than using the word for IP violations.
When a crawler aggressively crawls your site, they're permanently depriving you of the use of those resources for their intended purpose. Arguably, it looks a lot like conversion.
If I took a photo off your photography blog and used it on my corporate website without your say or input, I don't think it would be unfair to call that stealing.
Doing that on a mass scale with an obfuscation step in between suddenly makes it ok? I'm not convinced.
You're totally right about it not being theft, but we already have a term - you used it yourself: "distributed denial of service". That's all it is. These crawlers should be kicked off the internet for abuse, and people should contact the ISP of origin.
Firstly, since this argument is about semantic pedantry anyways, it's just denial-of-service, not distributed denial-of-service. AI scraper requests come from centralized servers, not a botnet.
Secondly, denial-of-service implies intentionality and malice that I don't think is present from AI scrapers. They cause huge problems, but only as a negligent byproduct of other goals. I think that the tragedy of the commons framing is more accurate.
EDIT: my first point was arguably incorrect because some scrapers do use decentralized infrastructure and my second point was clearly incorrect because "denial-of-service" describes the effect, not the intention. I retract both points and apologize.
If they came from centralized servers they would be easy to block. The whole problem is that they have a seemingly endless supply of source IPs - that means they are "distributed" in every way that matters on the Internet, even if the requests are coordinated centrally.
ah, no fun, I was going to continue the semantic deconstruction with a whole bunch of technicalities about how you're not quite precisely accurate and you gotta go do the right thing and retract your statements.
The first point is incorrect; these scrapers are usually distributed across many IPs, in my experience. I usually refer to them as "distributed, non-identifying crawlers (DNCs)" when I want to be maximally explicit. (The worst I've seen is some crawler/botnet making exactly one request per IP -_-)
I think one could argue that one. Is a DDoS a symptom? In that case intent is irrelevant. Or is a DDoS an attack/crime? In that case it matters. We use the term to mean both, but I think it's generally the latter. Wikipedia describes it as a "cyberattack", so intent does seem relevant to our (society's) current definition.
The semantics that make sense to me is that "DDoS" describes the symptom/effect irrespective of intent, and "DDoS attack" describes the malicious crime. But the terms are frequently used interchangeably.
In a similar vein, I want a text editor where pasting from an external source isn't allowed. If you try, it should instantly remove the pasted text. Copy-pasting from inside the document would still be allowed (it could detect this by keeping track of every string in the document that has been selected by the cursor and allowing pastes that match one of those strings).
It wouldn't work in every use case (what if you need to include a verbatim quote and don't want to make typos by manually typing it?), but it'd be useful when everything in the document should be your words and you want to remove the temptation to use LLMs.
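The idea above can be sketched in a few lines. This is a hypothetical illustration, not any real editor's API: the `PasteFilter` class and its methods are invented names, and a real implementation would hook into the editor's clipboard events instead.

```python
# Sketch of the paste-filtering idea: remember every string the user
# copies from inside the document, and reject any paste whose text
# never came from an internal copy.

class PasteFilter:
    def __init__(self, document: str = ""):
        self.document = document
        self.internal_copies: set[str] = set()  # strings copied from inside the doc

    def copy(self, start: int, end: int) -> str:
        """User copies a selection from inside the document."""
        text = self.document[start:end]
        self.internal_copies.add(text)
        return text

    def paste(self, text: str, position: int) -> bool:
        """Allow the paste only if the text matches an internal copy."""
        if text not in self.internal_copies:
            return False  # external paste: instantly rejected
        self.document = self.document[:position] + text + self.document[position:]
        return True


editor = PasteFilter("hello world")
clip = editor.copy(0, 5)             # copies "hello" from inside the doc
editor.paste(clip, 11)               # allowed: text was copied internally
editor.paste("LLM output", 0)        # blocked: never selected in the doc
```

One wrinkle this glosses over: the set of copied strings grows without bound, and edits to the document can make remembered strings stale, so a real editor would probably want to track selections against document revisions.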
Which is especially odd because the author (Sam Hughes) lives in the UK and wrote the original in UK English, but apparently wrote the rewrite in US English. For example, a chapter in the original was titled "Case Colourless Green", but in the US edition of the rewrite that chapter is "Case Colorless Green" (without the 'u'). So Hughes, a native UK English speaker, wrote the rewrite in a non-native (to him) dialect, then had it (lazily) translated into his native dialect.
It was probably to deal with the transposition [0] out of the SCP universe into a new one. SCP is vaguely 'set' in the US because that's how a majority of the contributors naturally write and spell things, which sets the voice of the world indirectly.
[0] AKA "filing the serial numbers off" when it's explicitly fanfiction instead of a shared universe model like SCP
> The only way text can become communication is when the writer has intents
I'm curious as to what you mean by this. I assume you don't mean it literally, as that would be trivially falsifiable (for example, the text readout on a digital caliper doesn't have "intents", yet it absolutely communicates meaning), but I can't think of another way that you might have meant it. Could you elaborate?
The digital caliper isn't communicating with you; you're only reading text from a tool. I'm not an expert in the field, but there are different "models of communication". For example, one model has these components: sender, receiver, message, channel, noise. The sender and the receiver are always people. There are other models focused on machines, but that's a very specific use of communication models.