Just as a country can pick whatever railroad gauge it likes (provided it also builds compatible trains, obviously) and it works just fine no matter what it picks--until it tries to connect to another country, at which point it needs converters--some physical theories have gauges you can choose however you want, and the theory works fine (and is equivalent to the same theory in any other gauge)--but if you want it to work with the next guy's theory, you need converters.
Of course. Also with regular customer projects. Even without AI--but of course having an idiot be able to execute commands on your PC makes the risk higher.
> If so, why doesn’t Anthropic / AI code gen providers offer this type of service?
Why? Separate the concerns. Isolation is a concern depending on my own risk appetite. I do not want stuff to decide on my behalf what's inside the container and what's outside. That said, they do have devcontainer support (like the article says).
>Hard to believe Anthropic is not secure in some sense — like what if Claude Code is already inside some container-like thing?
It's a node program. It does ask you about every command it's gonna execute before it does it, though.
>Is it actually true that Claude cannot bust out of the container?
There are (sporadic) container escape exploits--but it's much harder than not having a container.
You can also use a qemu vm. Good luck escaping that.
Or an extra user account--I'm thinking of doing that next.
Bootstrapping everything is exactly how it's done correctly--and how it's actually done in practice in Guix.
I mean sure if you have a business to run you outsource this part to someone else--but you seem to think it's not done at all.
Supply chain attacks have been happening pretty much non-stop for years now. Do you think it's a good idea to use binary artifacts when you don't know how they were made (and thus what's in them)? Especially for build tools, compilers and interpreters.
>And why is that location more valid of a decision than the one that doesn't require building the build system from source?
Because you only have to review a 250-byte binary (implementing an assembler) manually. Everything else is indeed built from source, including make, all the way up to PyPy, Go, Java and .NET (and indeed Chromium).
I didn't realize until I read this, but all software engineers would benefit from building everything from source at least once as an educational experience.
I've never gone all the way to the bottom, but now that I know it's possible I cannot resist the challenge to try it.
>Because you only have to review a 250 Byte binary
It's dishonest not to mention the millions upon millions of lines of source code you also have to review to know that the dependencies are safe to use. Compiling from source doesn't prevent supply chain attacks from happening.
In my opinion, going through this whole complicated build-everything-from-scratch process carries more risk of ending up with an unsafe Siso binary than Google simply providing a trusted binary, since you have to trust more parties not to have been compromised.
There's always tension between language simplicity (and thus cognitive load of the programmers) and features. Compare Scheme with Common Lisp.
The idea in Python is:
1. Statements are executed line by line in order (statement by statement).
2. One of the statements is "def", which executes a definition.
3. Whatever arguments you have are strictly evaluated.
For example, in f(g(h([]))), it evaluates [] (yielding a new empty list), then evaluates h([]) (always, whether or not g uses the result), then evaluates g(...), then evaluates f(...).
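A quick sketch of that strict, innermost-first order (f, g, h here are placeholder functions that just record when they run):

```python
order = []

def h(x):
    order.append("h")
    return x

def g(x):
    order.append("g")
    return x

def f(x):
    order.append("f")
    return x

# Arguments are evaluated eagerly, from the inside out, every time.
f(g(h([])))
print(order)  # ['h', 'g', 'f']
```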
So if you have
    def foo(x = []):
        ...
that immediately defines
    foo = (lambda x = []: ...)
For that, it has to immediately evaluate [] (like it always does anywhere!). So how is this not exactly what it should do?
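The observable consequence of that one-time evaluation (a minimal sketch; the names are mine): the default list object is created once, when the def statement runs, so mutating it leaks state across calls:

```python
def foo(x=[]):
    x.append(1)  # mutates the single default list created at "def" time
    return x

print(foo())  # [1]
print(foo())  # [1, 1]  -- same list object as the first call
```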
Some people complain about the following:
    class A:
        x = 3
        y = x + 2
Namely, that x is now a class variable (NOT an instance variable), and so is y, and the latter's value is 5. It doesn't try to second-guess whether you maybe mean some later value of x. No. The value of y is 5.
But it just evaluates each line in the class definition statement by statement when defining the class. Simple!
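The same statement-by-statement execution, as a runnable sketch:

```python
class A:
    x = 3
    y = x + 2  # evaluated right here, while the class body executes

print(A.x, A.y)  # 3 5
```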
Complicating the Python evaluation model (which is in effect what you are implying) is not worth doing. And in any case, changing the evaluation model of the world's most-used programming language (in production in every country of the world) in 2025 or any later date is a no-go right there.
If you want a complicated (more featureful) evaluation model, just use C++ or Ruby. Sometimes they are the right choice.
> For that, it has to immediately evaluate [] (like it always does anywhere!). So how is this not exactly what it should do?
It has a lambda there. In many programming languages--and in the way human beings read this--the rule is "when there is a lambda, whatever is inside is evaluated only when you call it". Python evaluating default arguments at definition time is a clear footgun that leads to many bugs.
Now, there is no way to fix it without probably causing other bugs and years of backwards-compatibility problems. But it is good that people are aware that it is a design error, so new programming languages don't fall into the same error.
For an equivalent error that did get fixed: many Lisps used to have dynamic scoping for variables instead of lexical scoping. It was people criticizing that decision that led pretty much all modern programming languages to use lexical scoping, including Python.
>It has a lambda there. In many programming languages, and the way human beings read this, say that "when there is a lambda, whatever is inside is evaluated only when you call it".
What is inside the lambda is to the right of the ":". That is indeed evaluated only when you call it.
>But it is good that people are aware that it is an error in design, so new programming languages don't fall into the same error.
Python didn't "fall" into that "error". That was a deliberate design decision and in my opinion it is correct. Scheme is the same way, too.
Note that you only have a "problem" if you mutate the list (instead of using functional style), which would be a weird thing to do in 2025.
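To make that concrete (a sketch with a made-up function name): if the function never mutates its default, the shared list is harmless, because a new list is built on every call:

```python
def append_pure(x, xs=[]):
    # Returns a new list instead of mutating xs,
    # so the shared default list stays empty forever.
    return xs + [x]

print(append_pure(1))  # [1]
print(append_pure(2))  # [2]  -- no state leaks between calls
```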
>For an equivalent error that did get fixed, many Lisps used to have dynamic scoping for variables instead of lexical scoping. It was people critizing that decision that lead to pretty much all modern programming languages to use lexical scoping, including python.
Both are pretty useful (and both are still there, especially in Python and Lisp!). I see what you mean, though: lexical scoping is a better default for local variables.
But having weird lazy-sometimes evaluation would NOT be a better default.
If you had it, when exactly would it force the lazy evaluation?
    def f(x=lazy: [g()]):
        x = 3
        if random() > 42:
            print(x)
^ How about now?
Think about the implications of what you are suggesting.
Thankfully, we do have "lazy"--it's called "lambda"--and it does what you would expect. If you absolutely need it (you don't :P), you can do it explicitly:
    def f(x=None, x_defaulter=lambda: []):
        x = x if x is not None else x_defaulter()
Or do it like a normal person:
    def f(x=None):
        x = x if x is not None else []
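Spelled out as a runnable sketch, the sentinel pattern gives each call a fresh list even when the function mutates it:

```python
def f(x=None):
    if x is None:
        x = []  # a fresh list on every call, created at call time
    x.append("item")
    return x

print(f())  # ['item']
print(f())  # ['item']  -- no shared state between calls
```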
Explicit is better than implicit.
Guido van Rossum would (correctly) veto anything that hid control flow from the user like having a function call sometimes evaluate the defaulter and sometimes not.
That’s a very academic viewpoint. People initialize variables with defaults, and sometimes, that default needs to be an empty list. They are just holding it wrong, right?
Most people writing any language without a linter are holding it wrong.
When a linter warns me about such an expression, it usually means that even if it doesn't blow up, it increases the cognitive load for anyone reviewing or maintaining the code (including future me). And I'm not religious--if I can't easily rewrite the expression in an obviously safe way, I just concede that its safety is not 100% obvious and add a nolint comment with an explanation.
My point was that no matter the conceptual purity or implementation elegance, if a language design decision leads to most people getting it wrong, then that's a bad decision.
But it's not about that. I don't like this decision either, but the other side of the trade-off is not just about abstract concepts or implementation; it's about the complexity of the model you need to keep in your head to know what a piece of code will do. And that has always been a priority for Python.
If you want it to be permanent, then you can use a guix home profile (that's a declarative configuration of your home directory) with a patch function in the package list there:
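A rough sketch of what that can look like (the package names and patch path here are placeholders, and the exact transformation options should be checked against the Guix manual):

```scheme
(use-modules (gnu home)
             (gnu packages)
             (guix transformations))

;; Hypothetical: declaratively apply a local patch to one package.
(define patch-it
  (options->transformation
   '((with-patches . "emacs=/home/me/patches/my-fix.patch"))))

(home-environment
 (packages
  (map patch-it
       (map specification->package
            (list "emacs" "git")))))
```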
You can also write a 10 line guile script to automatically do it for all dependencies (I sometimes do--for example for emacs). That would cause a massive rebuild, though.
>The most pain-free option I can think of is the --tune flag (which is similar to applying -march=native), but
> packages have to be defined as tunable for it to work (and not many are).
We did it that way on purpose--from prior experience: otherwise you would get a combinatorial explosion of different package variants.
If it does help for some package X, please email us a 2 line patch adding (tunable? . #t) to that one package.
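Such a patch is tiny because tunability is just a package property (sketch below; the package name is a placeholder):

```scheme
;; Hypothetical 2-line change to an existing package definition:
(package
  (inherit my-hot-loop-library)          ; placeholder for package X
  (properties '((tunable? . #t))))       ; opts this package into --tune
```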
If you do use --tune, it will tune everything that is tunable in the dependency graph. But at least all dependents (not dependencies) will just be grafted, not rebuilt.
I don't like vscode extensions advertising to me every 5 seconds, auto-downgrading to the free versions of extensions, auto-installing aux tools, having a 400 MB RSS chromium runtime (remember "Eight Megabytes And Constantly Swapping"? VS code is much worse, and it's also just a plain text editor), nerfing the .net debugger and breaking hot reload on purpose in VSCodium, telemetry... It's so noisy all the time. You are using this? On purpose?!
VS code is basically the same idea as emacs, just the MVP variant and with a lot of questionable technology choices (Javascript? Electron? Then emulate terminal cells anyway and manually copy cell contents? uhhh. What is this? Retrofuturism?) and done with the usual Microsoft Embrace-Extend-Extinguish tactics (nerfing pylance, funny license terms on some extensions that the extensions are only allowed to be used in their vscode etc).
So if you didn't like emacs you probably wouldn't like vscode either.
Also, if you use anything BUT emacs for Lisp development, what do you use that doesn't have a jarring break between the Lisp image and you? vim seems weird for that use case :)
emacs is very very good for Lisp development.
On the other hand, VSCode for Lisp is very flaky, and VSCode regularly breaks your Lisp projects. Did you try it?
Because of your comment I tried VSCode again, and now about 20 extensions (one of them "Alive", a Lisp extension for vscode) complain about a now-missing
"Dev container: Docker from Docker Compose"
(keep in mind they worked before, and I didn't change anything in vscode--I hadn't even run VSCode for 8 months or so). When I try to fix that by clicking on the message in the extension manager, the message immediately disappears from all 20 extensions in the manager (WTF?) and I get:
>>./logs/20250112T181356/window1/exthost/ms-vscode-remote.remote-containers/remoteContainers-2025-01-12T17-13-58.234Z.log:
>>>> Executing external compose provider "/home/dannym/.guix-home/profile/bin/podman-compose". Please see podman-compose(1) for how to disable this message. <<<<
>a239310d8b933dc85cc7671d2c90a75580fc57a309905298170eac4e7618d0c1
>Error: statfs /var/run/docker.sock: no such file or directory
>Error: no container with name or ID "serverdevcontainer_app_1" found: no such container
... because it's using podman (I didn't configure that--vscode did that on its own, incompletely). It also assumes that a docker/podman service then has to be running as root (instead of rootless podman). The funny thing is that I use podman extensively; I don't want to know how bad it would be if I HADN'T set podman up already.
So it didn't actually fix anything, but it removed the error message. I see.
And there's no REPL for the editor--so I can't actually find out details, let alone fix anything.
I had thought emacs DX was bad--but I've revised my opinion now: compared to vscode DX, emacs DX is great. You live with VSCode if you want to.
And note, vscode was made after emacs was made. There's no excuse for this.
I think this now was about all the time that I want to waste on this thing, again.
How is this a problem in 2025? shakes head
>VS Code managed to by-pass the qualitiy and amount of extensions/plugins in a fraction of time Emacs took decades.
Yeah? Seems to me these vscode extensions are written in crayon. Quality that bad would never make it into emacs mainline. And it's not even strictly about that! I wonder who would ship a developer tool in which the developer can't easily debug their own extensions (yes, I know about Ctrl-Shift-P). That flies about as well as a lead balloon.
For comparison, there's emacs bufferenv that does dev containerization like this and it works just fine. Configuration: 1 line--the names of the containerfiles one wants it to pick up. Also, if I wanted to debug what it did (which is rare) I could just evaluate any expression whatsoever in emacs. ("Alt-ESC : «expression»" anywhere)
PS. manually running "podman-compose up" in an example project as a regular user works just fine--starts up the project and everything needed. So what are they overcomplicating here? Pipes too hard?
PPS. I've read a blog article about making socket activation work for rootless podman[1], but it's not really about vscode. Instead, it talks about setting up "linger" so that the container stays around when I'm logged out. So that's not for dev containers (why would I possibly want that there? I'm not manufacturing Heisenbugs on purpose :P).