Show HN: Welder – set up your Linux server with plain shell scripts (github.com/pch)
158 points by pchm on June 9, 2017 | hide | past | favorite | 66 comments


I used to bootstrap a lot of VMs in development using shell scripts [1]. At a certain point Bash scripts become unmanageable. Ansible, Chef, Puppet have stricter syntax, battle-tested modules, large communities, proper multi-OS support, etc. Investing time to learn any of those tools certainly pays off in the long run.

Bash is great for a quick setup or a one-off instance. However, it requires discipline for anything more. You are also dealing with the underlying OS yourself (e.g., Nginx is packaged and configured differently in Ubuntu, Debian, CentOS & Arch; networking is configured differently in Ubuntu depending on whether systemd is installed, etc.).

[1] https://github.com/StanAngeloff/vagrant-shell-scripts


so it is the scripting language used by ansible, chef, puppet that imposes the required "discipline"?

methinks each of these must in fact call the shell to get things done, or at least system(3).

if they are calling execve then i would be more interested.

i would also be interested in ansible, chef, puppet if i knew the scripting languages they are written in or wanted to learn them. but other languages interest me more.

as a hobbyist, what i would like to see is a hosting provider that will boot fs images (including bootloader) that user builds on users own computer. i.e., provider gives exact specs of machine and network details. user builds and sends fs image to provider and provider boots from it on some bare metal in a datacenter.

no "host" or "guest". no virtual server. no provider software. if something software-related does not work, it is user's responsibility because it is all user's software. user can send an updated fs image.

anyone know if this exists? the service wanted is barebones: a computer in a datacenter that has an internet connection and someone to boot it from user fs image. cost not an issue.


Ansible, Chef and Puppet are not languages. They are systems for managing the state of your server, during the whole lifetime of the server. You typically tell them "I want X to be in Y state" and they will take whatever actions to make that happen. Writing all that logic in a shell script is not particularly trivial. Of course, it's not all perfect and sometimes you have to resort to a scripting-like approach, but the idea is that for the common patterns, they already have the logic implemented.
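The "I want X to be in Y state" idea can be approximated in plain shell, which also shows why it's non-trivial to do properly. A minimal sketch (the `ensure_line` helper is hypothetical, not from any of the tools discussed): check the current state first, and only act when it differs from the desired one, so re-running is safe.

```shell
ensure_line() {
  # append the line only when an exact match is not already present
  line="$1"; file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

profile=$(mktemp)
ensure_line "export EDITOR=vim" "$profile"
ensure_line "export EDITOR=vim" "$profile"   # second run changes nothing
```

The configuration management tools ship hundreds of pre-built "ensure" primitives like this; in shell you write and debug each one yourself.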


> You typically tell them "I want X to be in Y state" and they will take whatever actions to make that happen.

I had so many problems with this, because not all possibilities are tested on all distributions on all architectures with all libc and compiler combinations. Much is implicit compared to Nix, where only the kernel is implicit.


Why would you be running on dozens of distros on different architectures?

With some sane level of standardization (let's say fewer than 6 combinations) it's easy enough to test all and fix up the discrepancies.

The more declarative you are, and the less you assume about the environment, the better this will work.

I.e., if you need a package, always install it. On some distros it may be there by default, but you'll run into issues if it's not. The good thing about being declarative is that there's no harm in checking.
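A sketch of the "always declare it" idea: apt-get install is itself idempotent, so re-listing an already-present package is harmless. The `INSTALL` variable and `ensure_pkgs` helper are made up for illustration (and overridable so the pattern can be exercised without root); the default assumes a Debian/Ubuntu target.

```shell
# On Debian/Ubuntu, installing an already-installed package is a no-op,
# so the same declaration works whether or not the distro ships it.
INSTALL="${INSTALL:-sudo apt-get install -y}"

ensure_pkgs() {
  $INSTALL "$@"
}

ensure_pkgs curl ca-certificates
```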


> as a hobbyist, what i would like to see is a hosting provider that will boot fs images (including bootloader) that user builds on users own computer. i.e., provider gives exact specs of machine and network details. user builds and sends fs image to provider and provider boots from it on some bare metal in a datacenter.

This is actually somewhat possible with linode. It's not easy to do but it's possible.

https://www.linode.com/docs/platform/disk-images/migrating-a...


> no "host" or "guest". no virtual server

Much of the time, virtualization is just a way to give you control over the whole machine. If virtualization is thin, you don't lose a lot of performance.

It shouldn't be hard to make a Xen-backed VM administration interface. Then you can upload your MirageOS images (that you have tested on your local Linux, because MirageOS does that too) and it would work.

You can't get rid of virtualization and still have it remotely configurable. How are you going to reset the configuration of a machine if you destroyed the bootloader of the ring-0 operating system?


do not want to have it remotely configurable. nor want anyone to be able to "reset configuration" remotely.

some configuration is fixed, static. to change it one needs to change the image. this is intentional. hobbyist user wants this.

bootloader is on removable media user sends to provider. user is not using any provider software. user is paying for dedicated server and internet connection. that is all.

do not use linux, mirageos. do not need to. have rump. user can make own xen guest kernels and xen host kernels. anyway, this is tangent, irrelevant. user does not need virtualization necessarily and in any case not virtualization provided by third party.

simple requirements: want dedicated bare metal server. want hardware and internet connection. do not need/want provider software. will pay more for this.


Used to do this working at a colo back in the 90s; boot the image then walk away. If someone borked their machine they'd email or call and we'd roll the power for them.


Here is a declarative bash DSL I ran across a few months ago:

https://github.com/mattly/bork

and here is an example from their README:

    ok brew                                       # presence and updatedness of Homebrew
    ok brew git                                   # presence and updatedness of Homebrew git package
    ok directory $HOME/code                       # presence of the ~/code directory
    ok github $HOME/code/dotfiles mattly/dotfiles # presence, drift of git repository in ~/code/dotfiles
    cd $HOME
    for file in $HOME/code/dotfiles/configs/*
    do                                            # for each file in ~/code/dotfiles/configs,
      ok symlink ".$(basename $file)" $file       # presence of a symlink to file in ~ with a leading dot
    done


This is just a plain bad idea. Bash is the worst possible language for this. Yes, it can do it, but the syntax is awful: the problem of single quotes in double quotes in escaped double quotes.

I replaced a home grown, written by someone else, bash configuration management tool with Salt. The difference was huge.

Configuration management tools give you built-in logging, state checking, return codes, template libraries, and access to very mature modules for things like the AWS API.

I realize people don't like the steep learning curve, but there is a reason for it.


Honestly, I find Ansible quite simple and straightforward. You don't have to use third-party playbooks, and using 'autoenv' I achieved a very straightforward, simple, dynamic setup.


Agree - not sure what the advantage is. I understand bootstrapping can be a minor irritation (and I don't have a great way to do it 100% automatically).

Dynamic inventory scripts + ansible-playbook -k -u root ....

can do the same thing for bootstrapping.

How is this an advantage over that?


With Ansible I use the 'raw' module to check whether Python (a dependency of Ansible) is installed, and install it if not.

    - name: Bootstrap Ansible
      hosts: all
      gather_facts: False
      tasks:
        - name: Install Python 2
          raw: test -e /usr/bin/python || (sudo apt -y update && sudo apt install -y python-minimal)


The author claims that 90% of the time all they need is a single shell script, then shows an example directory structure that mostly reminds me of Ansible.

How is Welder different from Ansible, except bash vs python?


My experience with Ansible is that I have to write a dozen lines of yaml just to ensure permissions on a bunch of files. (But Puppet and Chef scare me even more.)

The tools in this space feel like very thick abstractions on top of commands and configs you have to understand anyway to get the results you want. The console output Ansible produces is also truly horrible.

Instead, I now maintain a single shell script for our servers. It's maybe a couple hundred lines, half of it heredocs containing small config snippets.

I will definitely be looking at this tool later.


> The tools in this space feel like very thick abstractions on top of commands and configs you have to understand any way, to get the results you want.

It does, and that's kind of the point in my mind. All I have to learn is the reasonably thin configuration management on top of normal stuff.


It also depends on Ruby and Liquid templates. I am not sure how he can still claim it is "plain shell scripts".


Also, I'm a little bit surprised by the title/description, and then the examples. By contrast, I was looking for something like https://github.com/JonathanHuot/needrun/ which is EXACTLY a "plain shell script".


Bash has its own substitution system built in, though I would usually use sed for this task.

http://tldp.org/LDP/abs/html/string-manipulation.html

See the section under Substring Replacement.
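For the templating use case discussed here, bash's built-in substring replacement (`${var//pattern/replacement}`, a bash-ism, not POSIX sh) can stand in for a template engine; the sed equivalent is the portable fallback. The `{{...}}` placeholder style below is purely illustrative.

```shell
template='server_name {{host}}; listen {{port}};'
host=example.com
port=8080

# bash substring replacement (quote the pattern so {} stay literal)
out=${template//'{{host}}'/$host}
out=${out//'{{port}}'/$port}
echo "$out"    # server_name example.com; listen 8080;

# portable version with sed
echo "$template" | sed -e "s/{{host}}/$host/" -e "s/{{port}}/$port/"
```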


Even M4 macros might work. It would still be more in the shell spirit than Ruby gems.


I came from setup shell scripts (especially running in the postinstall step of debian-installer) to Ansible, and I think the big win with Ansible is at least the ideal of idempotency.

That's a lot harder to achieve with plain shell scripts, unless you're pretty disciplined about only performing mutations to your system that are themselves idempotent (like a package install).
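Two common idioms for that discipline, as a sketch (the user name and init script are made up): guard on observable state where you can, and fall back to a marker file for genuinely one-shot steps.

```shell
# 1. Observable state: useradd is not idempotent, but the check makes it so.
id -u deploy >/dev/null 2>&1 || sudo useradd -m deploy

# 2. Marker file for a one-shot, non-idempotent step (hypothetical script):
marker=/var/lib/provision/.db-initialized
if [ ! -f "$marker" ]; then
  /opt/myapp/init-db.sh
  mkdir -p "$(dirname "$marker")"
  touch "$marker"
fi
```

The marker-file variant is exactly the kind of boilerplate that tools like Ansible hide behind their `creates:`/state checks.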


I wrote a small book on the shell, and I have no idea why anyone would prefer Bash as a scripting language. Or as a shell interpreter, for that matter. Sometimes one lacks alternatives, and sometimes the Bash version of some task is too simple not to use.

I feel like there's a cadre of developers who used Bash, Perl, and Unix in the mid-90s to early noughties, to excellent effect, but have resisted learning anything else since, in the mistaken belief that these tools were appropriate solutions to all classes of problem.

And that's not to say that shell scripts are bad, just that the range of tasks for which they are best suited is far more restrictive than people seem to think. A friend surprised me by saying that Bash was an essential part of his genetics research, and after being initially shocked, the idea made rather a lot of sense: genetic data is semi-structured text, and much of his work involved transforming the output of one program into the input of another. Outside of such applications, I view the use of Bash as non-optimal at best.


Stupid question, but why are Ansible playbooks supposed to be "more idempotent" than plain shell scripts?

Due to taking over a project I had to deep dive into it pretty quickly and to me it feels like shell in yaml syntax.


You might be on the same idempotency level with very disciplined shell scripts, but you won't have reporting out of the box. If I deploy something with ansible, I can see which changes were applied. This way, I can see whether I just changed something I didn't want to change. Other than that, yes, much discipline makes the difference. I know exactly what a core module does, but I have to read the entire shell script of "that colleague" to understand what it really does.


> This way, I can see whether I just changed something I didn't want to change.

Or with --check, before you make the change :)


As you get into more complicated setups, it certainly requires discipline, eg:

https://ryaneschinger.com/blog/ensuring-command-module-task-...

But as long as you're careful about coupling operations like lineinfile with checks for the line already being there and so on, you can achieve it (and Ansible certainly makes this much easier than shell scripts do).


Sometimes the way you achieve idempotency feels like hacks, like when you have to ignore errors, always consider a task unchanged, stat files then test if the stat found anything, etc. But in the end I look at what I've accomplished, and if I'd done it in shell scripts most of the code would be boilerplate and it would be unmaintainable. Which makes the times when it's less than good totally worth it.


The answer is, they're simply not. It's just as easy, if not easier, to shoot yourself in the foot with Ansible; there's just some smoke and mirrors (like the 'lineinfile' module) that make you think you're idempotent but actually just create race conditions and reduce version control.

There are a few tricks that Ansible uses that are non-obvious uses of shell. For example, files should be replaced by copying the new version to the target directory, then unlinking the original file and linking in the new version in its place, finally unlinking the temporary file. This functionality has been implemented in the 'install' utility in GNU and BSD coreutils since the 80s (though it's not in POSIX, so it's not exactly guaranteed; you'd be hard-pressed to find a machine outside of busybox Docker images that does not support it).
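A sketch of the replace-by-rename pattern described above (the target path and content are illustrative): write the new content beside the target, then mv over it. rename(2) is atomic on the same filesystem, so readers never observe a half-written file.

```shell
target=./myapp.conf
tmp=$(mktemp "${target}.XXXXXX")   # temp file on the same filesystem
printf 'listen 8080\n' > "$tmp"
chmod 644 "$tmp"
mv "$tmp" "$target"                # atomic rename over the old file

# The same idea via the (non-POSIX) coreutils/BSD install utility:
#   install -m 644 myapp.conf.new myapp.conf
```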


Race conditions? As in, if you're running the same Ansible script concurrently on the same machine? Yeah, definitely don't do that. But why would you?

I agree that modules like lineinfile are tricky and should be avoided (better to copy a full file), but it's hardly as easy to shoot yourself in the foot with them as with the equivalent sed command, in my opinion, if nothing else because they're more restricted.


Thanks for your answer! Actually, a colleague of mine who got even more quickly frustrated with Ansible claimed that Ansible is like shell, but the latter may be simpler, with fewer abstraction layers. I'll try the shell stuff at home. (In fact I'm also looking forward to checking out Terraform, but it still seems a bit beta...)


I think this is much less of an advantage if you follow the pattern of "immutable servers", where, instead of mutating and updating servers directly, you just blow them away and provision new ones. The biggest benefit to systems like Ansible and Salt with that use case is the templating/config systems, which are much more flexible and powerful than what you can get with shell scripts (and which Welder, incidentally, provides).


Sure, and set it up once and throw it away is always everyone's ideal, but there are going to be long-lived machines (like your database), and moreover, there are often times (such as during development) when it's really convenient to be able to iterate relatively quickly.

It's not realistic to perform a multi-minute deploy (or have to be resetting VMs to snapshots) just to try out tweaks in your non-idempotent setup script.


> the ideal of idempotency.

Indeed. That's why I use Saltstack. I can do all I use Salt for without Salt. Except idempotency.


I created something similar for Arch Linux:

https://github.com/CyberShadow/aconfmgr

It has a few crucial differences from typical configuration managers, though, as it allows transcribing the system state back into your configuration.


Basically, this is SaltStack's salt-ssh re-implemented using bash instead of Python... That too is just YAML + templates + Bash + SSH, except that you can use Python too and the templates are Jinja.


If you don't like all the directories that Ansible playbooks require you can use the simple style of a single YAML file that describes the actions to take. Check out https://serversforhackers.com/an-ansible-tutorial, the first introduction walks you through setting up just 2 files.


I did something similar, but fully based on shell scripts with no dependencies -> https://gist.github.com/mariocesar/8e674ec40dad6b94114d2a44d...

I completely agree with OP that Ansible is overkill for most cases, and even slower than a plain smart shell script.

For my solution, simply create a bash function:

  function play () {
    local script=${1}
    local remote=${2}
    local directory=$(dirname ${script})
    tar cpf - ${directory}/ | ssh -t ${remote} "
      tar xpf - -C /tmp &&
      cd /tmp/${directory} &&
      bash /tmp/${script} &&
      rm -rf /tmp/${directory} "
  }
and later just call it like.

  play provision/bootstrap.sh ubuntu@10.0.0.1
Where provision is a directory, and bootstrap.sh is the main shell script.



I was really excited until I saw "Welder requires rsync, ruby and liquid" --- why do I need Ruby and all its dependencies here?

I tried https://github.com/myplaceonline/posixcube and it has zero dependencies, just bash itself, and it is nice.


On the subject of non-approved sysadmin/devops tools,

I wrote a little 'tarball compiler' framework in mostly PMake some time ago to "DevOps" like it's 19[89]9 ... basically it will build per-host file overlay trees, rdist them out to target hosts, and then optionally run user defined start/stop/restart hook scripts when needed.

https://github.com/ixcat/admmk/blob/master/doc/admmk.rst

unfortunately, though it will do per-host targets, I didn't get around to per-service targets.. should be feasible however.

Definitely a bit hairy, but hey, that's what quasi-functional symbolic makefile programming is all about, my friends..

ps: if you think this is bad, try rewriting it in gnumake if that's even possible.. PMake! Woo!


cdist was pretty much the same, you should see why they made the decisions they made


There is really not much you can't do using HashiCorp's Packer[1] and Terraform[2].

Packer can generate base images and roles based on shell scripts and transferring files. Terraform manages the creation of infrastructure on clouds and can bootstrap instances with shell scripts as well.

[1] - https://packer.io [2] - https://terraform.io



Could you explain what this line of code does?

    ssh -t user@example.com "$(< ./my-setup-script.sh)"


It's a weird way of writing

  ssh -t user@example.com /bin/bash < my-setup-script.sh
That is, take the contents of the local file 'my-setup-script.sh' and run them as commands on example.com as the 'user' user (using SSH to authenticate the user & transport the commands).


But that might fail half-way through the file, whereas the other one is all or nothing; it might just report "argument list too long".


> might fail half-way through the file

Oh, while transferring!

  ssh user@example.org bash < <( echo "function do_it() {"; cat my-setup-script.sh; echo -e "}\ndo_it" )
No argument limits to worry about, no quoting hassles.


Looks like a fancier version of

        ssh -t user@example.com "$(cat ./my-setup-script.sh)"
I'll have to file this away myself!


Not fancier, just a bash-ism that may not work with some other shells.

[bash] https://www.gnu.org/software/bash/manual/html_node/Command-S...

[posix] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3...


It allows you to run a local shell script on the server via ssh, using bash command substitution - the "$(<" part. The -t option (force tty) executes the script in an interactive session - without it stuff like password prompts won't work (if I remember correctly).


It runs the local contents of ./my-setup-script.sh on the remote server in a way that escapes all the special characters so they don't do unexpected things.


I was about to make the point about Ansible but I guess you've already made it in the README.


Is Ansible really too much though, as the author suggests in README for simple tasks? If all you want is basic "SSH to a server, install some packages" type behavior, I'd still make the argument Ansible is simpler than this. Ansible isn't like Chef/other competitors where you have to bootstrap or install a client on the target machine. IIRC it only really needs SSH access and a Python installation present on the target machine, which is basically most Linux distros.

Looking at how you use this thing, I'm really not sure it would save me any time versus using Ansible.


Author here. It's just a matter of personal preference. If you're proficient with Ansible then I'm sure you can achieve the same task even quicker. But to me, shell scripts are a more natural & faster way to get going for single-server setups and I always have a hard time with Ansible yaml syntax.

Ansible has some undisputed advantages (e.g. idempotence) and if I had to recommend Ansible vs my approach, I'd always recommend Ansible. But for my personal needs (single-server rails apps), I prefer shell scripts.


There's also a learning curve case to be made against Ansible: if you've spent years polishing your Bash and coreutils one liners, you start over at zero when switching to Ansible.

Ansible is also declarative (in principle) while shell scripts are imperative, which is a bit of mind bender. Then, when you debug, you have to understand that Ansible is implemented by imperative code.


The same could be said for some NIH tool.

Bringing people up to speed on in-house tools can be a major hassle. Never mind the fact that popular tools like Ansible are battle-tested and most likely have more documentation and tutorials.


The link here was for a tool using just SSH and Bash...?


For small deployments, this kind of thing means One Less Thing To Understand. I like and use Ansible, but there is a mild learning curve and the occasional gotcha.

Given you generally have to know what you're trying to achieve in shell anyway, the shell versions are simpler, if less robust. For example,

    sudo apt update && sudo apt install nginx
is easier than

    become: True
    tasks:
    - name: install nginx for great justice
      apt:
        name: nginx
        update_cache: yes
        state: latest
There are also some edge cases where shell is better than ansible; some parameters are missing from some modules, meaning shell is more flexible; or if you want to read a command's STDOUT, it's a pain in Ansible.
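The "read a command's STDOUT" case mentioned above is a single command substitution in shell, versus a registered task plus a follow-up task referencing `result.stdout` in Ansible. A trivial sketch:

```shell
# capture stdout and reuse it in the same script, no register/stdout dance
kernel=$(uname -r)
echo "provisioning host running kernel $kernel"
```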


> IIRC it only really needs SSH access and a Python installation present on the target machine

Which means for my FreeBSD machines I have to run some bootstrap code that installs Python before I can run Ansible against them.

Would be nice if salt-ssh or ansible allowed me to have some bootstrap code that ran if Python didn't exist on the target machine.
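Pending that feature, a sketch of such a bootstrap run over plain ssh before the first Ansible run (FreeBSD's pkg and the `python27` package name are assumptions; adjust for your target):

```shell
bootstrap_python() {
  # touch the package manager only when the interpreter is missing
  command -v python2.7 >/dev/null 2>&1 || \
    env ASSUME_ALWAYS_YES=yes pkg install python27
}

# Ship the function to the host and run it (bash-ism), e.g.:
#   ssh root@freebsd-host "$(declare -f bootstrap_python); bootstrap_python"
```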





Hm I've always just used pssh for this type of administration. I guess this saves a step not having to pscp.pssh the script onto the nodes.


Yet another Ruby-based solution?!


Yeah, I also was a little disappointed with the dependencies, but the general approach has merit.



