Enterprise likes it because it gives them more visibility and control over what's going on in the dev box, and that gives them more angles of control against e.g. exfiltration threats. It also makes dev environments much more homogeneous: you worry less about client machines. In principle you could have a fleet of stateless Chromebooks whose main function is ssh and http, all connecting to identical virtualized desktops that are easily provisioned and deprovisioned.
I agree that enterprise likes it. Developers hate it, though. We are a lot that are very picky about our tools.
Virtualization means images means standardization means everyone is using the IDE and tools that IT and "Enterprise" decides.
It has its pros and cons. But one of those cons is that it can make for an intolerable work environment. At least for me. I've gotten to the point where I now ask about the dev environment during the interview process. If they mandate Mac or Visual Studio I'm not interested (not trying to start a flame war, those are my personal preferences). I know that certain organizations need their control for auditing, security and remote wipe reasons, and that is understandable. I'll just go work elsewhere. It's fine.
You don't speak for all developers, at least not me. In particular, having a working dev environment that accurately represents prod just handed to you, instead of trying to rig up something locally, is actually kinda nice. It matters greatly what you're working on. If you're building a local C++ application, a devbox is going to be wildly inappropriate, but for a SaaS company where prod is an untangleable mess of databases and queues and microservices, it's infinitely more efficient to give me a box already spun up that's a good enough facsimile of prod. Bonus points for seeding the DB with test data. In IT, if a user comes to you with a broken mouse, do you try and fix it, or do you just give them a new one and get them back to work asap, and take the mouse in the back to fix it later? Same principle. If the dev box isn't working, it's far easier to spin up a new one than to debug on someone's laptop.
Dev boxes don't have to mean standardizing on One True Editor (although everyone knows that's vim), as long as you can get to the source, typically via ssh, and edit it, whatever you want to use is just fine.
If only there were tools to spin up repeatable environments based on a given config file locally, using either VMs or containers, reducing the setup process to a single command.
I didn't mean one specific thing. There are numerous tools in this space that approach the problem from different angles using different technologies.
My point, really, was that if your local dev environment requires you to jump through a bunch of hoops of manual setup to get it working, you're doing it wrong.
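For illustration, a minimal Vagrantfile sketch (the box name and packages are placeholders, not a recommendation) gets the whole thing down to one command:

    # A minimal sketch -- box name and packages are placeholders.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/jammy64"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 4096   # hard cap on guest RAM
        vb.cpus = 2
      end
      # Everything the project needs is installed the same way, every time.
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y build-essential git
      SHELL
    end

`vagrant up` (or the docker-compose equivalent) then rebuilds the same environment on any machine with a hypervisor installed.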
I think this hits the nail on the head. Plenty of companies have software stacks so large / complex to configure that there's no reasonable way to run the entire system on a single laptop.
As much as I dislike the experience of remote dev environments in general, there are certainly times where it may be worth the pain.
Sometimes things are just complex. I know we want to spot the epicycles and simplify and be able to reduce the system, and sometimes that's even possible. (Don't get me started on being all in on all of AWS.) But sometimes it isn't and, well, that's where we are as an industry.
I haven't worked at many places like this, but the last time I did they had services that managed and operated on petabytes worth of data. The full suite of front end applications, REST services, ingestion services, maintenance, queues, billing, communication, scheduling and other systems was large enough that I probably only saw the source code for maybe 1/3 of it at most, let alone modified it.
I do like local development better, every time it is feasible, but there's not much point in doing so if it can't reasonably reproduce a production system.
This is really not true - a poor implementation shouldn't dictate the direction of the "remote development" space. Sure, IT and security have their requirements, but the primary requirement is to make developers happy and provide them with reproducible dev environments.
I used to work at Uber and what we ended up doing with devpod (https://www.uber.com/blog/devpod-improving-developer-product...) was to enable the popular IDEs to connect to these remote environments - all the dotfiles etc etc were persisted so it literally felt like the local IDE, just way faster. Admittedly, it costs a bunch of money to build internally, but there's a path to having people be happy with dev environments.
(we collected data on what IDEs to prioritize based on surveys)
Why use a survey and not just ask the endpoints directly? Presumably the laptops are managed and are running something like Santa on them. Would remove bias to get the data this way.
yea - we had that too (good for understanding how laptop tooling worked, and what areas were starting to show latencies and therefore, needed to be worked on)
When I discover a company prefers Windows desktops, Microsoft 365, and Sharepoint, it's often a red flag for me. I've worked with guys who didn't know how to use git, so they copied all their code into Sharepoint. Then there were others who didn't know how to merge branches, so they copied from one folder to another using the Windows desktop. It's even worse when the people doing this are supposedly "senior." In a Linux or Mac environment, I never encounter these sorts of WTFs.
Yep. I ran into the “don’t overwrite these files” issue on a branch at a new company. It took months to get a sane branching/merging process in place. The “lead” loved copying crap with his windows desktop, did not even script his half baked processes. You should’ve seen what the actual code looked like.
I like it. I used gitpod.io before. Super nice to just be able to switch between projects in an easy way. Stopped when they changed pricing though from unlimited time to credits.
As someone who migrated our devs from local VMs to cloud VMs, I like it because despite the promise that every VM is the same, every dev laptop is not. There's so many ways for Vagrant and VirtualBox to screw up. Sure, the cloud VM is a little slower than local, but at least it fundamentally _works_, unlike VirtualBox where on any given day you might be suffering from any combination of 1) VirtualBox kexts not working on the latest version of MacOS, 2) file sharing silently breaking yet again with no logs, 3) docker containers inside the VM losing network connectivity for the third time this week because the bridge decided it didn't want to do anything, or 4) having VirtualBox installed just completely breaking MacOS and causing a crash during boot even after completely wiping, reloading, and trying to install VirtualBox again. Those are all real issues I spent weeks tearing my hair out over across dozens of devs. Now that we've moved to EC2-based VMs the only problems I have to deal with are minikube problems (which I had to deal with under VirtualBox as well) and when devs forgot they shut down their VM before going on vacation. The devs sometimes don't like that the default EBS volume is a little small, but every one that used the local VM knows it's a small price to pay
Even HashiCorp knows vbox is scraping the bottom of the barrel for hypervisors:
> if you are using Vagrant for any real work, VMware providers are recommended since they're well supported and generally more stable and performant than VirtualBox.
If you're using Vagrant to manage the VM and use either widely supported or self-built base boxes, each developer can use whichever hypervisor works best on their platform.
So you might have a Windows dev using Hyper-V, another using VMware Workstation, a Mac user with Parallels, a Linux user with LXC and another with KVM.
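A sketch of what that can look like in one Vagrantfile - the box name and memory figures are placeholders, and the non-default providers assume their Vagrant plugins are installed:

    Vagrant.configure("2") do |config|
      # "generic/*" boxes are published for most providers, so one box line covers everyone.
      config.vm.box = "generic/ubuntu2204"

      config.vm.provider "hyperv" do |hv|          # Windows dev
        hv.maxmemory = 4096
      end
      config.vm.provider "vmware_desktop" do |vw|  # VMware Workstation/Fusion
        vw.vmx["memsize"] = "4096"
      end
      config.vm.provider "parallels" do |prl|      # Mac dev (vagrant-parallels plugin)
        prl.memory = 4096
      end
      config.vm.provider "libvirt" do |lv|         # Linux dev on KVM (vagrant-libvirt plugin)
        lv.memory = 4096
      end
    end

Each dev then runs `vagrant up --provider=<whatever they have>` and gets the same box.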
This is exactly what I've been doing. I pay for a vps and just use vscode to do all of my work, I tend to switch between various computers and even various laptops, and it doesn't matter which one I'm on and just ssh into my dev station.
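For anyone wanting to copy the setup: a plain ~/.ssh/config entry is all VS Code's Remote-SSH extension needs (the host name, user and key below are placeholders):

    Host devstation
        HostName vps.example.com        # placeholder address
        User dev                        # placeholder user
        IdentityFile ~/.ssh/id_ed25519
        ServerAliveInterval 30          # keep idle sessions from dropping

Any machine with that file and the key opens the same remote workspace.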
The advantage of docker based developer workflows is that it makes things repeatable. If you’ve ever tried to support a team installing their own dependencies you’ll understand the pain!
If everyone is working in docker in VS Code it’s not a huge jump to have them develop in remote VMs. Now your devs can get setup instantaneously and can use thin and light MacBook Airs instead of heavy MacBook Pros.
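A rough sketch of the config that enables that - the image, port and extension below are placeholders - and the same .devcontainer/devcontainer.json works whether the container runs on the laptop or on a remote VM reached over SSH:

    // .devcontainer/devcontainer.json
    {
      "name": "api-dev",
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "forwardPorts": [8080],
      "customizations": {
        "vscode": {
          "extensions": ["ms-python.python"]
        }
      },
      "postCreateCommand": "make deps"
    }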
> Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?
They don’t mean the memory usage is allowed to grow indefinitely.
They mean, if your laptop has 32 GB RAM, and if your OS and apps consume 20 GB, it’s using 20 GB RAM.
If a VM has 32 GB of vRAM allocated, and the OS is aggressively caching, that VM will consume 32 GB of RAM on the physical host, even if only 20 GB of vRAM is being used within that guest VM.
Of course, every good hypervisor will have a way to prevent this from happening, so the article does seem disingenuous on this point.
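For instance, with VirtualBox the ceiling is just a VM setting (the VM name here is hypothetical, and the VM has to be powered off to change it):

    VBoxManage modifyvm "devbox" --memory 8192   # guest never sees more than 8 GiB of host RAM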
If you work from the same computer in the same place every day there are no advantages to having a remote box... If you work 50:50 from office & home you start getting annoyed by either having a different environment at home & work or by having to drag your laptop around with you... If you're a digital nomad wanting to work while travelling through "cheaper" parts of the world it's less worrying to carry around a $500 laptop than a $4000 laptop...
it was an either/or statement... if you don't have a remote devbox, your choice is either dealing with the fact that your home environment is different from your office environment - setting up lots of things twice and hitting weird issues even when you try to keep them in sync (more of a problem for less experienced devs in fast-moving companies) - or using a laptop and carrying it with you, even if you have a more powerful PC at home, could have a more powerful desktop at work, and would otherwise prefer desktops to laptops...
Automated reproducible VMs are pretty straightforward with Vagrant, and "set up the same thing from a defined set of instructions" is basically the definition of Docker.
But if you're not happy with that scenario - I don't know, maybe your work will be drastically affected by having two separate but identical VMs - TB3/4 external SSDs are perfect for storing VMs on, so you can move between machines at the drop of a hat. I've been doing this for about 5 years as a safety valve against my desktop developing a fault: I can just unplug the drive, plug it into the laptop or a new machine, and the VMs are all there exactly as I left them.
Doing a reproducible build with Vagrant and Docker is possible but very far from straightforward: you have to make extra sure you use the exact same source box/image, install every piece of software in the exact same version, and deal with updates manually (and while updates mostly don't break stuff anymore, it still happens at least once a year). On top of that you still have to synchronize your IDE settings, secrets & credentials, which you don't want baked into a Docker image... again, yes, it's possible, but not as straightforward as working in the exact same environment...
the VM on an external SSD is a better solution, but then it's still something you have to carry with you even though it's more compact than a laptop...
Devbox will give you the same project environment (packages, env-vars) on your work and home laptop. It leverages nix, and uses your native file-system avoiding the overhead and complexity of using Docker.
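Roughly, the whole environment is declared in a devbox.json in the repo (the package pins below are placeholders) and `devbox shell` reproduces it on whichever machine you're sitting at:

    {
      "packages": ["nodejs@20", "python@3.12", "postgresql@16"],
      "shell": {
        "init_hook": ["echo 'devbox environment ready'"]
      }
    }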
As someone who has actually traveled through "cheaper" parts of the world, I couldn't imagine working without a local environment. Cheaper parts of the world often imply worse internet connections, spotty wifi, and so on. Requiring a stable internet connection for everything would have been a productivity killer.
> Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?
Not to mention the popular virtualization platforms like KVM, VMware, and Hyper-V all support ballooning[1][2], where you can define a base amount of memory and a max, and memory the guest frees (e.g. after garbage collecting its heap) can be handed back to the hypervisor via the balloon driver.
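As a sketch of what that looks like on the libvirt/KVM side (the domain name and sizes are placeholders):

    virsh setmaxmem devbox 32G --config        # hard ceiling, applied from the next boot
    virsh setmem devbox 20G --live --config    # balloon target; reclaims guest RAM on the running domain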
Ballooning also allows overcommitting, to better utilize resources that would otherwise sit mostly idle. And trust me, every cloud provider overcommits their hypervisors; I used to work for a major one, but anyone can check the CPU steal time in top and see how thrashed their host is.
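Quick way to eyeball that from inside any Linux guest:

    # "st" in top's summary line is steal time; /proc/stat exposes the same
    # counter as the 8th value after the "cpu" label (cumulative jiffies).
    top -bn1 | grep '%Cpu'
    awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat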
I don't really understand the point of a "dev box" that's hosted in a DC and shared, at all - at least not the way they're painting it here.
Hardware capabilities for even consumer level laptops and desktops have progressed much faster than average network connections.
Having testing/preview/branch named environments in a DC? Sure. But this line:
> They should be able to run any software they want inside their own workspaces without impacting each other.
What does that even mean?
Is this about someone working on a feature branch that uses some new dependency that needs to be installed?
That's 100% the sort of thing your local development environment is for, until you're ready to push it to your hosted test/whatever environment.
> But when it's running in a VM, the VM gobbles up as much memory as it can and, by itself, shows no inclination to give it back.
Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?