Linux Swarm Script

· 4min · Dan F.

This article is about a script that I could never get to work properly on OpenBSD and that currently only works correctly on Linux. The script is used to access and run commands across multiple servers in parallel.

Edit: This script now works fine on OpenBSD, with the only requirement being to install the flock package! The script is also actively being ported to be 100% POSIX compliant, which should let any shell run it without issue.

Edit: The script is now portable across bash, dash, and ksh, and the default interpreter is now simply /bin/sh. As for some simple metrics, the script can hit 1000 servers running the uptime command in under 30 seconds when not using sudo. Execution takes roughly twice as long when using sudo, since it needs to create two connections instead of one to complete the command.

You can check swarm.sh out on GitLab.

Now, I know there are plenty of other parallel SSH options out there, so this one is not that special; it is simply another option that is written solely in bash. It was originally written on Linux, and the one piece of functionality missing on OpenBSD was a fast file locking mechanism.
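
Since file locking is the piece OpenBSD was missing, here is a minimal sketch of the flock(1) pattern (from util-linux on Linux, or the flock package mentioned above on OpenBSD). This is not swarm.sh's actual code, just the general idea: each background fork takes an exclusive lock before appending to a shared log, so output lines from different servers never interleave. The servers.txt filename, the log path, and the uptime command are placeholders for this example.

    #!/bin/sh
    LOG=/tmp/swarm_demo.log               # placeholder log path

    run_one() {
        server=$1
        out=$(ssh -o BatchMode=yes "$server" uptime 2>&1)
        # Open the log on fd 9, take an exclusive lock, then append one whole line
        (
            flock -x 9
            printf '%s: %s\n' "$server" "$out" >&9
        ) 9>>"$LOG"
    }

    while read -r server; do
        run_one "$server" &               # one background fork per server
    done < servers.txt
    wait                                  # wait for every fork to finish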

The basic usage is to pass swarm a list of servers and a command. Swarm will then connect to each server in parallel, with as many forks as you require (the default is 10), and save each server's output to a log in /tmp.
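
A minimal run (the server list filename here is just a placeholder) looks something like the line below, which fans uptime out over the default 10 forks and leaves a log behind in /tmp:

    ./swarm.sh -l servers.txt -c "uptime"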

The script can also use sshpass, for those remote servers that require a username and password, and it provides a way to pass a sudo password over to a remote server without the password showing up in plain text.
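
For the username and password case, the heavy lifting is done by sshpass. The sketch below is the general idea rather than the script's exact invocation (user@server is a placeholder): reading the password with echo turned off and handing it to sshpass through the SSHPASS environment variable keeps it off the command line.

    # Read the password without echoing it, then pass it to sshpass via SSHPASS
    printf 'Password: '
    stty -echo; read -r SSHPASS; stty echo; echo
    export SSHPASS
    sshpass -e ssh user@server uptime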

The implemented method for passing a sudo password is kind of a hack, but I couldn't find a better way anywhere online. When the user enters the password via read, openssl encrypts it to a temporary file using a random key. This encrypted file is then scp'd over to the remote server's /tmp. An ssh session then uses openssl to decrypt the file in /tmp and pipes the output to the desired command prefixed with sudo -S. Finally, the temporary file is removed from the remote server.

The random key is never written to a file, so the only way the password can be regenerated is for someone to capture the ps output of the ssh command while at the same time capturing the temporary file in /tmp before the ssh session completes. This is not perfect, but every other way I found shows the password in ps; this way, at least, does not.
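
Distilled down to a single server, that flow looks roughly like the sketch below. This is my own rendering of the description above rather than the script's exact code: $PASSWORD is assumed to already hold the value read from the user, $SERVER and the uptime command are placeholders, and the cipher and -pbkdf2 options are illustrative choices.

    # The random key only ever lives in this shell's memory, never in a file
    KEY=$(openssl rand -hex 32)
    TMPFILE=$(mktemp /tmp/swarm.XXXXXX)

    # Encrypt the sudo password locally with the random key
    printf '%s' "$PASSWORD" |
        openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$KEY" -out "$TMPFILE"

    # Copy the encrypted file to the remote server's /tmp
    scp "$TMPFILE" "$SERVER:/tmp/"
    REMOTE_TMP="/tmp/$(basename "$TMPFILE")"

    # Decrypt remotely, pipe the password into sudo -S, then remove the file
    ssh "$SERVER" "openssl enc -d -aes-256-cbc -pbkdf2 -pass 'pass:$KEY' -in '$REMOTE_TMP' | sudo -S uptime; rm -f '$REMOTE_TMP'"

    rm -f "$TMPFILE"

The key only ever appears as an argument to the local openssl call and the remote ssh command, which is exactly the ps exposure described above.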

The basic functionality is as follows:

./swarm.sh [-psbuhSU] [-J PROXY_HOSTNAME[:PORT]] [-P PORT] [-t THREADS] [-l SERVERLIST] -c COMMAND
    
    -b      Brief mode: only show the first line of output to stdout, 
            but save full output to log
            
    -c      Command to run on the remote servers

    -h      Show this usage

    -J      Utilize a proxy (Jump) host to connect to the remote servers. Format would be servername[:port]

    -l      Specify the serverlist to run command on

    -p      Ask for the user's password, to be used if the remote servers require a password to log in

    -P      Specify ssh port to use. Default is 22

    -s      Use sudo to execute the command passed. If -p is used, -s will use that password. If -p
            is not specified, then the script will ask for the user's password

    -S      Show stats at the end of the script

    -t      Threads to create

    -u      Use a specific username, instead of current logged-in user

    -U      Unattended mode: Do not show output. Only return the final log location
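
Putting a few of these together, a run that uses sudo, brief mode, 50 threads, and a jump host might look like the line below; the jump host, port, server list, and command here are made up for illustration:

    ./swarm.sh -s -b -t 50 -J jumphost.example.com:2222 -l prod_servers.txt -c "df -h"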


Swarm Status Explanation:
Command Status: [Server Hostname] [Active Threads/Thread Spawn Number - Success/Error/Failed count]: Command Stdout 

Example Status (3 active threads, 85th thread spawned, with 82 successes, 5 errors, and 0 failures so far):
ok: [testserver] [3/85 82/5/0]: 09:09:54 up 98 days, 11:48,  0 users,  load average: 0.29, 0.23, 0.14

The GitLab repo will be used from now on to host all of the random scripts I whip up.