When we want to manage a remote PC, we usually connect to an SSH server listening on that machine and log in with our username and password.
However, when connecting to a common household PC, this would require a certain amount of set-up beforehand: the router would have to forward a port to the PC, and we would need some way to reach its (usually dynamic) public address.
Therefore, as a simpler and more secure option, we can use an exposed SSH server as a middleman. The remote user’s PC can connect to it, start a reverse shell and allow us to connect back from our local machine.
In this scenario, we have to define a few items so we can understand the process better:
- The remote PC we want to manage (e.g. bob-pc).
- The remote user sitting at that PC (e.g. bob).
- The proxy server exposed to the Internet (e.g. vpn.example.org).
- The restricted user on the proxy server (e.g. rhelp).
Step by step, the process would be:
1. The remote user connects from their PC to the proxy server, forwarding the required ports.
2. From our local machine, we connect to the proxy server.
3. Through the forwarded ports, we reach the SSH and VNC services on the remote PC.
It’s worth noting that the proxy server and the local machine can be the same, as long as the requirements are met.
On our proxy server, we will first create a passwordless restricted user with very few privileges:
useradd -c 'Restricted User' -m -s '/bin/false' -u 22000 -U rhelp
If we have the remote user’s SSH public key and we want them to login without a password, we can just add it to our restricted user’s authorized_keys file (creating the .ssh directory with the right permissions first):
install -d -m 700 -o rhelp -g rhelp ~rhelp/.ssh
cat <<- 'EOF' > ~rhelp/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCr3GtrUvnWfOyhy5BaaKMUj62lHf3O3caS1FJidSaaG5qtZDwqL6MKGAOgtmt+krJCRp8yT6uKkYYYBHlugOE9Es8LibuxdFT/LHViAWAbtINOKOIzzrC26R7xseNe1VXEEoH8+2QnrH2U1C9D687rIptanGcvkwzj8yOFBVVMIl6Ldhs38r9xpi04Y3SPQl5duTa2CebuICLha1xbS0h9HSOIGQNEeRYtj1Te44fbaWkM/Fg2inA4QLKonWObUDTwYjede9lDaPXSWGUEQz4A2u+ljjGRbxPaarj7HPsnnlpLnQzRZzvjVq1dC3h8swE4Qx8pvImzte4OacUS8vbT bob@bob-pc
EOF
The best part about using a key for login is that we can limit what the remote user can do when connecting via SSH. We just want to allow them to forward ports 44000 (for SSH) and 5900 (for VNC). For this, we have to enter the following in front of the authorized key:
restrict,port-forwarding,permitlisten="localhost:44000",permitlisten="localhost:5900"
So our restricted user’s authorized_keys file would look like this:
restrict,port-forwarding,permitlisten="localhost:44000",permitlisten="localhost:5900" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCr3GtrUvnWfOyhy5BaaKMUj62lHf3O3caS1FJidSaaG5qtZDwqL6MKGAOgtmt+krJCRp8yT6uKkYYYBHlugOE9Es8LibuxdFT/LHViAWAbtINOKOIzzrC26R7xseNe1VXEEoH8+2QnrH2U1C9D687rIptanGcvkwzj8yOFBVVMIl6Ldhs38r9xpi04Y3SPQl5duTa2CebuICLha1xbS0h9HSOIGQNEeRYtj1Te44fbaWkM/Fg2inA4QLKonWObUDTwYjede9lDaPXSWGUEQz4A2u+ljjGRbxPaarj7HPsnnlpLnQzRZzvjVq1dC3h8swE4Qx8pvImzte4OacUS8vbT bob@bob-pc
If we don’t have the remote user’s SSH public key, we can set a password for our restricted user so the remote user can login manually for the first time:
passwd rhelp
New password:
Retype new password:
passwd: password updated successfully
We might also need to enable password authentication in the SSH server:
cat <<- 'EOF' >> /etc/ssh/sshd_config
PasswordAuthentication yes
EOF
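The service can then be restarted; on most systemd-based distributions this is done with the following command (the unit may be named ssh or sshd depending on the system):
systemctl restart ssh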
After we restart SSH’s service, the remote user should be able to login with username and password, so we can proceed to connect to the remote PC and create an SSH key pair or copy an existing one to our restricted user’s authorized_keys.
Afterwards, we can lock the password of the restricted user by running:
passwd -l rhelp
passwd: password expiry information changed.
And also disable password authentication in the SSH server’s configuration (restarting the service again afterwards):
sed -i '/^PasswordAuthentication yes/d' /etc/ssh/sshd_config
With the proxy server properly set up, we can ask the remote user to login from their PC and forward the appropriate ports. They can do so with the following commands:
ssh -v -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org
The meaning of the options is as follows:
- -v: Verbose mode.
- -C: Request compression of all data.
- -N: Do not execute a remote command.
- -R 5900:localhost:5900: Forward port 5900 on the remote server to port 5900 on the local side.
- -R 44000:localhost:22: Forward port 44000 on the remote server to port 22 on the local side.
The first time, the remote user will have to enter the command manually on a terminal window, accept the host key and, if not using an authorized key, enter the restricted user’s password manually.
From our local machine, we can check whether the remote user has connected to the proxy server by running:
ssh vpn.example.org -- ss -tnlp | grep '127.0.0.1:44000'
Once the remote user is connected to the proxy server, we can connect to the remote PC from our local machine. We need to use the proxy server as a jump host:
ssh -J vpn.example.org localhost -p 44000
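It’s worth noting that the -J option requires OpenSSH 7.3 or newer. On older clients, a roughly equivalent invocation using ProxyCommand would be:
ssh -o 'ProxyCommand=ssh -W %h:%p vpn.example.org' -p 44000 localhost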
We can add all these and more options in the configuration file for SSH:
cat <<- 'EOF' >> ~/.ssh/config
Host bob-pc
Compression yes
Hostname localhost
LocalForward 45900 localhost:5900
Port 44000
ProxyJump vpn.example.org
ServerAliveInterval 30
EOF
The options are:
- Compression yes: Specifies whether to use compression.
- Hostname localhost: Specifies the hostname we want to connect to.
- LocalForward 45900 localhost:5900: Specifies that port 45900 on the local machine will be forwarded to port 5900 on the remote host.
- Port 44000: Specifies the port number we want to connect to.
- ProxyJump vpn.example.org: Specifies the jump proxy in the form [user@]host[:port].
- ServerAliveInterval 30: Sets a timeout interval after which ssh will request a response if no data has been received from the server.
With these, when we want to connect to the remote PC again, we can simply run:
ssh bob-pc
If our username and its key are authorized to login on both the proxy server and the remote PC, we will be automatically logged in to the remote PC, ready to enter commands. Otherwise, we will have to enter the appropriate passwords when prompted.
If we want to see the remote user’s desktop and provide support for graphical applications, we will need to use VNC. More specifically, we will have to run x11vnc on the remote PC:
x11vnc -display ':0' -verbose -localhost -forever -auth guess -nopw
The meaning of the options is as follows:
- -display ':0': X11 server display to connect to.
- -verbose: Print out more information.
- -localhost: Allow connections from localhost only.
- -forever: Keep listening for connections when a client disconnects.
- -auth guess: Try to guess the XAUTHORITY filename and use it.
- -nopw: Disable the warning message when not using a password.
Now we just have to run a VNC viewer on our local machine to connect to the forwarded port and have access to the desktop on the remote PC:
vncviewer localhost:45900
Just like that, we will be able to see and control the remote user’s desktop:
Once we are connected to the remote user’s PC, we can make it easier for them to activate the reverse shell by creating a file on their desktop:
cat <<- 'EOF' > ~bob/Desktop/SSH.desktop
[Desktop Entry]
Version=1.0
Type=Application
Name=SSH
Comment=Connect to receive remote support via SSH
Exec=/usr/bin/ssh -v -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org
Icon=gnome-terminal
Terminal=true
Categories=Application;
EOF
Also, we create a similar file for the remote user to activate the VNC server for remote graphical support:
cat <<- 'EOF' > ~bob/Desktop/VNC.desktop
[Desktop Entry]
Version=1.0
Type=Application
Name=VNC
Comment=Connect to receive remote support via VNC
Exec=/usr/bin/x11vnc -verbose -localhost -forever -auth guess -nopw
Icon=gnome-remote-desktop
Terminal=true
Categories=Application;
EOF
Let’s make these files executable so they can be launched by double-clicking on them:
chmod +x ~bob/Desktop/{SSH,VNC}.desktop
The remote user will see two new icons on the desktop:
With these, the remote user can manually start the reverse shell and the VNC server. Once started, we will be able to connect from our local machine to the remote PC. When the remote user wants to regain control and disconnect us from their PC, they can just close the SSH or VNC windows to do so.
If we need to have permanent access to the remote PC and the remote user has given us authorization to do so, we can set up a service so the SSH connection is resumed every time the remote PC boots up.
If we have systemd on the remote PC, this can be easily achieved by creating a service file (making sure the user unit directory exists first):
mkdir -p ~bob/.config/systemd/user
cat <<- 'EOF' > ~bob/.config/systemd/user/reverse-ssh.service
[Unit]
Description=Reverse SSH connection
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/ssh -o 'ExitOnForwardFailure=yes' -o 'ServerAliveInterval=30' -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org
Restart=always
RestartSec=5s
[Install]
WantedBy=default.target
EOF
Alternatively, we can use autossh instead since it is designed precisely for this purpose:
cat <<- 'EOF' > ~bob/.config/systemd/user/reverse-ssh.service
[Unit]
Description=Reverse SSH connection
After=network.target
[Service]
Type=simple
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -o 'ExitOnForwardFailure=yes' -o 'ServerAliveInterval=30' -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org
ExecStop=/bin/kill $MAINPID
Restart=always
RestartSec=5s
[Install]
WantedBy=default.target
EOF
We will have to reload the list of services and enable the new service:
systemctl --user daemon-reload; systemctl --user enable reverse-ssh.service
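If we want the connection to come up right away, without waiting for the next login, we can also start the service manually:
systemctl --user start reverse-ssh.service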
With this, every time the remote user logs into the remote PC, a reverse shell to the proxy server will be started automatically. If we want this service to be started immediately after boot, whether the remote user is logged in or not, we will have to enable the automatic start-up of systemd instances for the remote user (e.g. bob):
loginctl enable-linger bob
If we don’t have systemd installed on the remote PC, we can simply use cron. For this, we run crontab -e and add the following:
@reboot while true; do ssh -o 'ExitOnForwardFailure=yes' -o 'ServerAliveInterval=30' -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org; sleep 5; done
If we opt for autossh, we must enter the following instead:
@reboot autossh -M 0 -o 'ExitOnForwardFailure=yes' -o 'ServerAliveInterval=30' -C -N -R 5900:localhost:5900 -R 44000:localhost:22 rhelp@vpn.example.org
It has often been said that some distributions are bloated or not as minimal as others like Arch, Gentoo, Void or OpenBSD.
Even though most major distributions, whether BSD or Linux, can’t really be considered bloated, it can be argued that some desktop-oriented setups or flavors are far from minimal in their default configurations.
Having said that, nothing really stops a user from making a bare-bones install that will take very few resources. This principle applies to most major distributions, which often provide minimal installation processes where the system can be tailored to the user’s needs.
In this post, we will try to get a lightweight system using Debian’s readily available minimal installation options.
The easiest way to get a minimal Debian installation is to use the mini.iso file, commonly known as netboot (a portmanteau of network boot). We can download the latest ISO from Debian’s website:
wget https://deb.debian.org/debian/dists/stretch/main/installer-amd64/current/images/netboot/mini.iso
We will be using QEMU to do the installation, so we pass both the ISO file and the destination device (e.g. /dev/sdi) as arguments to the executable:
qemu-system-x86_64 -m 256 -drive file=mini.iso,media=cdrom -drive file=/dev/sdi,format=raw,cache=none
From here on we can use most of the defaults. We just have to make sure nothing is selected during the Software selection step of the installation process:
An alternative option is to use the package debootstrap to create a bare installation. To install the necessary packages in our local system, we run:
sudo apt install debootstrap
Since there is no installer to help us create the partitions, we must create them manually. We can use GParted, fdisk or any other partitioning tool for this. For example, with fdisk we can create a new DOS partition table (o), a single primary partition spanning the whole disk (n, p, 1, then defaults) and write the changes (w):
printf 'o\nn\np\n1\n\n\nw\n' | sudo fdisk /dev/sdi
We must then create an ext4 file system with the command:
sudo mkfs.ext4 /dev/sdi1
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 61049389 4k blocks and 15269888 inodes
Filesystem UUID: 60bb0786-b320-4285-abc0-95efce9ac10b
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
Now, let’s create a mount point and mount the partition:
sudo mkdir -p /mnt/debian && sudo mount /dev/sdi1 /mnt/debian
And use debootstrap to copy the base system:
sudo debootstrap --arch amd64 stretch /mnt/debian http://deb.debian.org/debian
This will take a few minutes since it has to download the required packages and copy them on to the disk.
Now we will chroot into our newly created system to apply some finishing touches. But before doing this, we need to recreate some basic file system hierarchy:
sudo mount -t proc /proc /mnt/debian/proc && \
sudo mount --rbind --make-rslave /dev /mnt/debian/dev && \
sudo mount --rbind --make-rslave /sys /mnt/debian/sys && \
sudo mount --rbind --make-rslave /run /mnt/debian/run
Now we can safely use chroot:
sudo chroot /mnt/debian /bin/bash
First of all, let’s change the hostname:
echo 'debianlight' > /etc/hostname && \
echo -e '127.0.1.1\tdebianlight' >> /etc/hosts
By default, /etc/fstab is empty, so we must add the disk with the UUID of the file system we created earlier. We can use the command blkid to see it:
blkid
/dev/sdi1: UUID="60bb0786-b320-4285-abc0-95efce9ac10b" TYPE="ext4" PARTUUID="17a30f04-01"
So let’s create the appropriate entry:
cat <<- 'EOF' > /etc/fstab
UUID=60bb0786-b320-4285-abc0-95efce9ac10b / ext4 defaults 0 0
EOF
For Debian’s stable release, updates to stuff like virus scanners or timezone data are delivered via the updates repository. Therefore, it’s a good idea to add it to our sources.list:
cat <<- 'EOF' >> /etc/apt/sources.list
deb http://deb.debian.org/debian stretch-updates main
EOF
We should also make sure to enable the security updates, especially if this system is to be online. For this, we need to add the repository for Debian’s security team:
cat <<- 'EOF' >> /etc/apt/sources.list
deb http://security.debian.org/debian-security stretch/updates main
EOF
Let’s get our system up to date:
apt update && apt upgrade --no-install-recommends
A debootstrap installation is mainly used for chroots or containers, therefore it’s missing a few fundamental packages. To have a bootable system, we must install them before we boot the machine for the first time.
First let’s install and configure the locales to avoid some annoying error messages:
apt install --no-install-recommends locales && dpkg-reconfigure locales
Since no timezone is configured, the time and date may be reported erroneously. Let’s fix that:
dpkg-reconfigure tzdata
Of course, we shouldn’t forget about installing a kernel:
apt install --no-install-recommends linux-image-amd64
Also, we have to install a boot loader on our disk (e.g. /dev/sdi) so the system can be booted:
apt install --no-install-recommends grub-pc && update-grub
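If the package installation doesn’t prompt us to choose a target device, we can install the boot loader on our example disk manually:
grub-install /dev/sdi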
And replace all entries in grub.cfg with the UUID for our disk:
sed -i 's,root=/dev/sdi[0-9],root=UUID=60bb0786-b320-4285-abc0-95efce9ac10b,' /boot/grub/grub.cfg
At the moment, we are using our system’s network connection inside the chroot, but this won’t be available once we boot this new system by itself. We need to add a configuration for our network card in the /etc/network/interfaces.d directory. Since we are going to test the installation with QEMU, we can use the default interface’s name:
cat <<- 'EOF' > /etc/network/interfaces.d/ens3
allow-hotplug ens3
iface ens3 inet dhcp
EOF
In order to login as root, we’ll need to set a password:
passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Finally, let’s clean up and exit the chroot so we can boot into our new system:
apt clean; exit
Back on the host, we should also cleanly unmount the file hierarchy we previously set up before booting:
sudo umount -R /mnt/debian
Now, let’s boot up with QEMU:
qemu-system-x86_64 -m 256 -drive file=/dev/sdi,format=raw,cache=none -net user,hostfwd=tcp::2222-:22 -net nic
Manually entering commands in QEMU’s console is not very efficient (it’s not possible to copy and paste), so it’s always a good idea to install SSH so we can access the system remotely. After logging in as root, we can install it:
apt install --no-install-recommends ssh
We must also enable root access by modifying /etc/ssh/sshd_config
:
cat <<- 'EOF' >> /etc/ssh/sshd_config
PermitRootLogin yes
EOF
Once we have rebooted the virtual machine, we should be able to connect from our own machine using SSH:
ssh -p 2222 root@localhost
Let’s see how much of the system’s resources we are using. For instance, for the netboot install:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 635M /
free -ht | grep ^Total
Total: 240M 26M 168M
dpkg -l | grep ^ii | wc -l
217
And for the debootstrap install:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 579M /
free -ht | grep ^Total
Total: 240M 26M 169M
dpkg -l | grep ^ii | wc -l
191
Not bad! But we can do better.
Since we are trying to make this system as minimal as possible, we should make sure only the required packages are installed without having to provide the --no-install-recommends option every time:
cat <<- 'EOF' >> /etc/apt/apt.conf.d/99local
APT::Install-Suggests "0";
APT::Install-Recommends "0";
EOF
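We can confirm that APT has picked up the new settings by running:
apt-config dump | grep Install-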
Now, we can trim some disk space by deleting unused locales. We can just use a one-liner to do this:
find /usr/share/locale -mindepth 1 -maxdepth 1 ! -name 'en*' -exec rm -r {} \;
To prevent packages from installing unwanted locales, we can force dpkg to ignore them:
cat <<- 'EOF' > /etc/dpkg/dpkg.cfg.d/01_nolocales
path-exclude /usr/share/locale/*
path-include /usr/share/locale/en*
EOF
The same thing can be done for documentation files:
find /usr/share/doc -depth -type f ! -name copyright -delete
find /usr/share/doc -empty -delete
rm -rf /usr/share/man /usr/share/groff /usr/share/info /usr/share/lintian /usr/share/linda /var/cache/man
And to prevent them from being installed at all:
cat <<- 'EOF' > /etc/dpkg/dpkg.cfg.d/01_nodocs
path-exclude /usr/share/doc/*
path-include /usr/share/doc/*/copyright
path-exclude /usr/share/man/*
path-exclude /usr/share/groff/*
path-exclude /usr/share/info/*
path-exclude /usr/share/lintian/*
path-exclude /usr/share/linda/*
EOF
We can even get more space by removing some unnecessary packages:
apt purge --auto-remove apt-listchanges aptitude aspell* at avahi-autoipd avahi-daemon bc bluetooth debconf-i18n debian-faq* doc-debian eject exim4-base groff iamerican ibritish info installation-report ispell* krb5-locales logrotate manpages modemmanager nano os-prober pcscd ppp popularity-contest reportbug rsyslog util-linux-locales wamerican
To remove old logs, we can run the following command:
find /var/log -type f -cmin +10 -delete
What does our system look like after all these improvements? Let’s see how the netboot install fares:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 544M /
free -ht | grep ^Total
Total: 240M 25M 170M
dpkg -l | grep ^ii | wc -l
202
And for the debootstrap install:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 503M /
free -ht | grep ^Total
Total: 240M 25M 171M
dpkg -l | grep ^ii | wc -l
187
We can see that the netboot install adds some extra packages that we could easily remove, although the improvement would be marginal:
apt purge --auto-remove busybox discover kbd keyboard-configuration laptop-detect pciutils task-english
If we are no fans of systemd and aren’t using any of its features, we can remove it and install a different init system in its place. For this example, we will install SysV:
apt install --purge --auto-remove --no-install-recommends sysvinit-core
Then, let’s create an inittab file:
cp /usr/share/sysvinit/inittab /etc/inittab
After a reboot using the new init system, we can purge systemd:
apt purge --auto-remove systemd libpam-systemd
To avoid installing any systemd package in the future, we will configure APT accordingly:
cat <<- 'EOF' > /etc/apt/preferences.d/nosystemd
Package: libsystemd0
Pin: release *
Pin-Priority: 500

Package: *systemd*
Pin: release *
Pin-Priority: -1
EOF
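We can check that the pin is in effect by inspecting the priority APT now assigns to the package:
apt-cache policy systemd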
Without systemd, we get a slight improvement on memory usage and we also get below the 500MB mark of disk usage:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 495M /
free -ht | grep ^Total
Total: 240M 21M 96M
dropbear is a lightweight SSH server designed for small memory environments. We can install it by running:
apt install --no-install-recommends dropbear-run
We must also enable the service so it starts during boot:
sed -i 's,^NO_START=1,NO_START=0,' /etc/default/dropbear
Now, we can remove OpenSSH:
apt purge --auto-remove ssh
If we are really short on disk space, we can run debootstrap with the --variant=minbase option:
sudo debootstrap --arch amd64 --variant=minbase stretch /mnt/debian http://deb.debian.org/debian
This option, according to debootstrap’s manual page, only installs the essential packages:
Currently, the variants supported are minbase, which only includes essential packages and apt; buildd, which installs the build-essential packages into TARGET; and fakechroot, which installs the packages without root privileges. The default, with no --variant=X argument, is to create a base Debian installation in TARGET.
– Manpages
Along with the usual debootstrap configuration, in order to have a functional environment, we need to install an init system (such as SysV) and some network tools so we can connect to the network:
apt install --no-install-recommends ifupdown iproute2 isc-dhcp-client netbase
In the end, we should be able to shave off a few megabytes of disk space by having fewer packages than the default debootstrap variant, and also use less memory:
df -h --output=source,used,target /
Filesystem Used Mounted on
/dev/sda1 453M /
free -ht | grep ^Total
Total: 240M 19M 182M
dpkg -l | grep ^ii | wc -l
118
In a previous post we learned how to set up a LEMP guest machine with Vagrant and Ansible to test PHP applications. However, in this day and age, there are programming languages other than PHP that can be used for web development.
Python is a very popular beginner-friendly programming language that will allow us to create web applications with ease due to its multiple third party modules and packages.
Unlike PHP, Python was not born as a server-side scripting language designed for web development and it’s not as intertwined with web servers (such as Apache). For this reason, it requires some kind of module or gateway to work.
For this post, we will be installing uWSGI, which is a Web Server Gateway Interface, inside our Vagrant guest machine and connecting Nginx to it.
We are going to create a very simple test application using Flask.
Flask is a micro web framework written in Python and based on the Werkzeug toolkit and Jinja2 template engine.
A web framework is a great tool that makes developing Python web applications much easier.
We will save the configuration files for our application in the subdirectory vagrant/www/test:
vagrant/www/test
├── requirements.txt
├── test.ini
└── test.py
A virtual environment is a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages.
Virtual environments give us the possibility to isolate each Python application. This way, we can use PIP to install different versions of packages without conflicting with versions of the same package for other applications.
And how do we tell PIP which packages and which versions to install? Well, by using a requirements file for each application.
Since our test application only needs Flask, we will add it to requirements.txt:
Flask>=0.12
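For reference, what the provisioning we set up below will do for each application is roughly equivalent to running the following (a sketch; the paths match the variables defined later):
python3 -m venv /opt/virtualenvs/test
/opt/virtualenvs/test/bin/pip install -r /vagrant/www/test/requirements.txt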
To tell uWSGI how to launch our application, we will set some parameters in the file test.ini that will be loaded by uWSGI on boot:
[uwsgi]
plugins = python3
socket = /tmp/test.sock
venv = /opt/virtualenvs/test
chdir = /vagrant/www/test
wsgi-file = test.py
callable = app
Here we specify the version of Python to use, the socket file, the path of the virtual environment, the directory for our application’s files, the file that contains the application, and what object to call.
The web application per se will be in test.py:
#!/usr/bin/env python3
from flask import Flask

app = Flask(__name__)

@app.route('/test')
def test():
    return '<p style="background: aliceblue;">Hello World</p>\n'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
This will just send the text Hello World (on a pale blue background) to the web browser when we connect to the server.
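Before wiring it into uWSGI, we can sanity-check the application with Flask’s built-in development server (assuming Flask is installed in the current environment; the last block of test.py launches it on port 8000):
python3 test.py &
curl http://localhost:8000/test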
Now, we have to modify our previous configuration for Ansible so it installs all the necessary packages and loads the appropriate configuration files.
First of all, since we will be using some variables that are common to more than one role, let’s create a file to store these global variables. In the subdirectory vagrant/cfg/group_vars we create a file called all.yml for all roles:
---
base_dir: '/vagrant/www'
venv_dir: '/opt/virtualenvs'
We specify the directory that Nginx will use as root and the directory where Python’s virtual environments will be created.
In this same file, we will include the name/directory of our applications:
apps:
- name: test
Now, let’s add a new role to our configuration so all the appropriate Python dependencies are met. We will save the files in the subdirectory vagrant/cfg/roles/python:
vagrant/cfg/roles/python
├── handlers
│ └── main.yml
└── tasks
└── main.yml
Tasks are saved in tasks/main.yml:
---
- name: Install Python
package: name={{ item }} state=present
with_items:
- python3-pip
- python3-venv
- uwsgi
- uwsgi-plugin-python3
notify:
- start uwsgi
- name: Install PIP packages
pip:
requirements: '{{ base_dir }}/{{ item.name }}/requirements.txt'
virtualenv: '{{ venv_dir }}/{{ item.name }}'
virtualenv_command: pyvenv
with_items:
- '{{ apps }}'
- name: Link uWSGI file
file:
src: '{{ base_dir }}/{{ item.name }}/{{ item.name }}.ini'
dest: '/etc/uwsgi/apps-enabled/{{ item.name }}.ini'
force: yes
state: link
with_items:
- '{{ apps }}'
notify:
- restart uwsgi
The steps are:
1. Install Python 3, PIP and the uWSGI packages using the package module.
2. Read the requirements.txt for each application and install its packages in a virtual environment.
3. Link each application’s uWSGI configuration into /etc/uwsgi/apps-enabled using the file module.
And let’s not forget the handlers to enable and restart the service when needed. Add these to the file handlers/main.yml:
---
- name: start uwsgi
service: name=uwsgi enabled=yes state=started
- name: restart uwsgi
service: name=uwsgi state=restarted
Finally, we modify vagrant/cfg/site.yml to add this new role:
---
- name: Configure LEMP server
hosts: lemp
roles:
- mariadb
- php
- python
- nginx
We also need to modify the template for our Nginx role in vagrant/cfg/roles/nginx so it loads our applications. For this, we edit templates/default and add the following inside the server directive:
{% for item in apps %}
location /{{ item.name }} {
include uwsgi_params;
uwsgi_pass unix:/tmp/{{ item.name }}.sock;
}
{% endfor %}
This will loop through each of our applications and configure Nginx to use uWSGI for its corresponding subpath.
We can now start the guest machine with Vagrant using the command vagrant up. After the machine has finished booting, we will be able to open our test application by pointing our web browser to http://172.28.128.10/test:
Using Python as the backend for our web applications is not as straightforward as throwing some PHP code in an index.php file but, with some tweaking, we can have a working application and have access to the power of Python packages.
LEMP is a variant of the common LAMP (Linux, Apache, MariaDB and PHP) bundle that swaps the Apache server for Nginx.
Many times I’ve used it to test some web application. Usually, you’d want to do this in a clean environment that won’t interfere with any previous configuration.
For this, you’d normally use some kind of virtual machine that you’ve installed and configured from scratch. Maybe, if it’s a common environment, you’d create a snapshot so you can revert to it afterwards. Or maybe you could use one of the many cloud images found on the Internet.
However, a much simpler option is to use Vagrant to handle these cloud images and a configuration management tool to handle their configuration.
Vagrant is an open-source software product for building and maintaining portable virtual development environments.
Vagrant can use different engines to boot up these cloud images, and also different tools for software provisioning. Here we will use VirtualBox and Ansible for these roles respectively.
Ansible is an open-source automation engine that automates software provisioning, configuration management, and application deployment.
On our host machine, we will only need to install Vagrant and VirtualBox, since Ansible will run in the guest machine. Therefore, we need to download and install the appropriate software for our operating system:
Vagrant’s configuration is stored in a single file named Vagrantfile.
First, we tell Vagrant to use VirtualBox as the default provider:
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'
Then, we start the actual configuration by selecting the base cloud image we will be using. For this example, we use the official Ubuntu Xenial 32-bit image:
config.vm.box = 'ubuntu/xenial32'
To configure the virtual machine hardware (512 MB of RAM and a single CPU capped to 50%), we add the following:
config.vm.provider :virtualbox do |vbox|
vbox.memory = 512
vbox.cpus = 1
vbox.customize ['modifyvm', :id, '--cpuexecutioncap', '50']
end
Now we configure the hostname and IP address of the guest OS:
config.vm.define 'lemp' do |node|
node.vm.hostname = 'lemp'
node.vm.network :private_network, ip: '172.28.128.10'
node.vm.post_up_message = 'Web: http://172.28.128.10'
end
We will also share the local subdirectory vagrant with the guest so it’s mounted at /vagrant:
config.vm.synced_folder 'vagrant', '/vagrant'
Finally, we configure Ansible to be run locally on the guest using the configuration found in /vagrant/cfg. In this directory, it will find the inventory file hosts.ini and the playbook file site.yml. We will also tell it to run all tasks using sudo:
config.vm.provision :ansible_local do |ansible|
ansible.provisioning_path = '/vagrant/cfg'
ansible.inventory_path = 'hosts.ini'
ansible.playbook = 'site.yml'
ansible.sudo = true
end
In the end, the file should look like this:
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'
Vagrant.configure('2') do |config|
config.vm.box = 'ubuntu/xenial32'
config.vm.provider :virtualbox do |vbox|
vbox.memory = 512
vbox.cpus = 1
vbox.customize ['modifyvm', :id, '--cpuexecutioncap', '50']
end
config.vm.define 'lemp' do |node|
node.vm.hostname = 'lemp'
node.vm.network :private_network, ip: '172.28.128.10'
node.vm.post_up_message = 'Web: http://172.28.128.10'
end
config.vm.synced_folder 'vagrant', '/vagrant'
config.vm.provision :ansible_local do |ansible|
ansible.provisioning_path = '/vagrant/cfg'
ansible.inventory_path = 'hosts.ini'
ansible.playbook = 'site.yml'
ansible.sudo = true
end
end
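In recent versions of Vagrant, we can also check the file for syntax errors before booting anything:
vagrant validate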
Since we are sharing the subdirectory vagrant with the guest machine, we need to place all configuration files for Ansible inside vagrant/cfg as specified in Vagrantfile.
Ansible’s inventory file contains the machines in which it will run. In this case, it will only run locally on one machine so we add it:
lemp ansible_connection=local
Also, Ansible’s playbooks store the steps to be taken on the machines. We could put everything in this file, but Ansible’s Best Practices recommend using roles:
---
- name: Configure LEMP server
hosts: lemp
roles:
- mariadb
- php
- nginx
Here we specify that this task will apply to the machine named lemp and that it will execute the roles mariadb, php and nginx.
This role will install and configure MariaDB. Its configuration lives in the subdirectory vagrant/cfg/roles/mariadb:
vagrant/cfg/roles/mariadb
├── handlers
│ └── main.yml
├── tasks
│ └── main.yml
└── vars
└── main.yml
The tasks to be run are saved in tasks/main.yml:
---
- name: Install server
package: name={{ item }} state=present
with_items:
- mariadb-server
- python-mysqldb
notify:
- start mysql
- name: Change root password
mysql_user:
name: root
host: localhost
password: '{{ mysql_root_password }}'
state: present
- name: Change bind-address
replace:
dest: /etc/mysql/mariadb.conf.d/50-server.cnf
regexp: '^bind-address'
replace: 'bind-address = {{ mysql_bind_address }}'
notify:
- restart mysql
- name: Create test database
mysql_db: name={{ mysql_db_name }} state=present
- name: Create test user
mysql_user:
name: '{{ mysql_db_user }}'
host: '%'
password: '{{ mysql_db_password }}'
priv: '{{ mysql_db_name }}.*:ALL'
state: present
These are the steps taken:
1. Install the MariaDB server and the Python MySQL module.
2. Change the root password.
3. Change the bind-address so the server is reachable from outside the guest.
4. Create a test database.
5. Create a test user with full privileges on that database.
All the variables we use in this role can be set in vars/main.yml:
---
mysql_root_password: 'root'
mysql_bind_address: '0.0.0.0'
mysql_db_name: 'test'
mysql_db_user: 'test'
mysql_db_password: 'test'
Also, we define some handlers which are basic tasks that are run when another task changes something and notifies the handler. We use them to make sure the server is enabled and to restart it when we change the configuration:
---
- name: start mysql
service: name=mysql enabled=yes state=started
- name: restart mysql
service: name=mysql state=restarted
This role will install PHP. Its configuration lives in the subdirectory vagrant/cfg/roles/php:
vagrant/cfg/roles/php
├── handlers
│ └── main.yml
└── tasks
└── main.yml
Using the package module, it installs PHP-FPM (FastCGI Process Manager) and the module to communicate with MySQL:
---
- name: Install PHP
package: name={{ item }} state=present
with_items:
- php-fpm
- php-mysql
notify:
- start php-fpm
We also define the handler that will make sure the service is enabled:
---
- name: start php-fpm
service: name=php7.0-fpm enabled=yes state=started
This role will install and configure Nginx. Its configuration lives in the subdirectory vagrant/cfg/roles/nginx:
vagrant/cfg/roles/nginx
├── handlers
│ └── main.yml
├── tasks
│ └── main.yml
├── templates
│ └── default
└── vars
└── main.yml
The tasks in tasks/main.yml are:
---
- name: Install server
package: name={{ item }} state=present
with_items:
- nginx
notify:
- start nginx
- name: Change default configuration
template:
src: default
dest: /etc/nginx/sites-available/default
notify:
- reload nginx
Once again, using the package module, it will install the necessary software. Then, using the template module, it changes the default site’s configuration by copying our template from templates/default:
# Default server configuration
server {
listen {{ http_port }} default_server;
listen [::]:{{ http_port }} default_server;
root {{ base_dir }};
index index.html index.htm index.php;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
}
The variables used in the template can be set in vars/main.yml:
---
http_port: 80
base_dir: '/vagrant/www'
This way, any file we save in the subdirectory vagrant/www of our host machine will be accessible in the guest machine’s web server. We can work with our favorite development tools locally and see all changes immediately in the web server.
Finally, we define the handlers that will make sure the server is enabled and the configuration reloaded when we make any change:
---
- name: start nginx
service: name=nginx enabled=yes state=started
- name: reload nginx
service: name=nginx state=reloaded
Once everything is set up, we just have to start the guest machine with the command vagrant up lemp.
The first time we run it, it will download the necessary cloud image, so it might take a while. Subsequent boots will only check whether we have an updated version of the image.
Once it finishes booting up, we can connect to the web server at http://172.28.128.10. For testing purposes, let’s say we’ve saved this in vagrant/www/index.php:
<?php phpinfo(); ?>
When we connect to the server with our web browser, we will see something like this:
We can also connect to the database server by running:
mysql --host=172.28.128.10 --user=test --password test
And, after providing our password, we will be able to enter SQL commands:
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 10.0.29-MariaDB-0ubuntu0.16.04.1 Ubuntu 16.04
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [test]>
To control the guest machine, here are the most important Vagrant commands:
| Action | Command |
|---|---|
| boot guest machine | vagrant up lemp |
| reboot guest machine | vagrant reload lemp |
| shutdown guest machine | vagrant halt lemp |
| boot and reconfigure guest machine | vagrant up lemp --provision |
| connect to guest machine with SSH | vagrant ssh lemp |
| destroy guest machine | vagrant destroy lemp |
Coupling Vagrant with Ansible (or any other SCM tool) allows for a portable reproducible system, contained in just a few text files:
.
├── [ 66K] vagrant
│ ├── [ 58K] cfg
│ │ ├── [ 37] hosts.ini
│ │ ├── [ 54K] roles
│ │ │ ├── [ 17K] mariadb
│ │ │ │ ├── [4.1K] handlers
│ │ │ │ │ └── [ 133] main.yml
│ │ │ │ ├── [4.7K] tasks
│ │ │ │ │ └── [ 759] main.yml
│ │ │ │ └── [4.2K] vars
│ │ │ │ └── [ 162] main.yml
│ │ │ ├── [ 21K] nginx
│ │ │ │ ├── [4.1K] handlers
│ │ │ │ │ └── [ 131] main.yml
│ │ │ │ ├── [4.3K] tasks
│ │ │ │ │ └── [ 263] main.yml
│ │ │ │ ├── [4.4K] templates
│ │ │ │ │ └── [ 397] default
│ │ │ │ └── [4.1K] vars
│ │ │ │ └── [ 70] main.yml
│ │ │ └── [ 12K] php
│ │ │ ├── [4.1K] handlers
│ │ │ │ └── [ 79] main.yml
│ │ │ └── [4.1K] tasks
│ │ │ └── [ 139] main.yml
│ │ └── [ 93] site.yml
│ └── [4.1K] www
│ └── [ 144] index.php
└── [ 748] Vagrantfile
No more messing with installers, restoring snapshots or reconfiguring stuff. You can boot up a fresh system, mess it up, destroy it and boot it up brand new again in a few minutes. You can even use a version control system to store these files and share them with others.
Also, official images for many operating systems can be found on Vagrant’s website. Not just for Ubuntu but also for Debian, Fedora, CentOS and FreeBSD. You can even specify your own box with the setting config.vm.box_url.
At the same time, Ansible’s myriad of modules let us configure the guest OS automatically in almost any way possible, even though we may need to adapt many of the tasks to specific Linux distros or operating systems.
In the end, this method greatly simplifies the process of creating and managing test environments.
Inspired by an article about test driving old Linux distros, I’ve decided to try installing some of the distributions I have used since my first contact with Linux 17 years ago.
Although I’ve been using Debian since I installed Debian 3.1 back in 2004, the distro that previously allowed me to use Linux exclusively was Mandrake 9.1.
Unfortunately, Mandrake (later renamed to Mandriva), has been defunct since 2011 but, thanks to the wonderful Internet Archive, we can still get the ISO files for Mandrake 9.1:
wget https://archive.org/download/Mandrake91/Mandrake91-cd{1-inst,2-ext,3-i18n}.i586.iso
Once we have the ISO files, we can start the installation using QEMU. For this, we have to install the necessary package:
sudo apt install qemu
First of all, let’s create a disk image:
qemu-img create -f qcow2 mandrake91.qcow2 10G
Now, to run the installation, we need to point QEMU to the first ISO and to the disk image we just created:
qemu-system-i386 -m 128 \
-cdrom Mandrake91-cd1-inst.i586.iso \
-hda mandrake91.qcow2 -usb \
-vga cirrus -soundhw ac97 \
-nic user,model=rtl8139 \
-boot order=dc
Surprisingly, the installation felt as easy now as it did back then, and the default settings can be left unchanged.
At some point, and depending on the packages we have selected for installation, we may be asked to change the installation CD:
For this, we need to press Ctrl+Alt+2 to access QEMU’s monitor interface, and enter this at the prompt for the second CD:
change ide1-cd0 Mandrake91-cd2-ext.i586.iso
Or this for the third:
change ide1-cd0 Mandrake91-cd3-i18n.i586.iso
After we press Ctrl+Alt+1 to go back to the installation, we can then click OK to continue the process.
Once all packages are installed, we’ll have to provide a password for root and create a user.
Then, a summary of the installation process will be shown. We will probably want to launch the configuration for the Graphical Interface by clicking on the Configure button:
Thankfully, the auto-detected defaults are fine and we won’t need to change anything. Although we might want to choose a higher resolution and color depth:
After several other uneventful steps, the installation will finish and we will be free to reboot the system:
We can boot into Mandrake by running QEMU again. This time, however, we exclude the ISO file:
qemu-system-i386 -m 128 \
-hda mandrake91.qcow2 -usb \
-vga cirrus -soundhw ac97 \
-nic user,model=rtl8139
After the boot loader and some other boot messages, we will see the login screen:
Once we’ve logged in with our user, Mandrake’s First Time Wizard will be launched so we can pick our preferred desktop environment (KDE, GNOME or IceWM). I’ve chosen KDE because it’s what I used back in the day:
If we start a console, we can check what version of the Linux kernel we are running with the command uname -a:
Linux localhost 2.4.21-0.13mdk #1 Fri Mar 14 15:08:06 EST 2003 i686 unknown unknown GNU/Linux
To change desktop settings like the background, theme or window decorations, we simply have to launch KDE Control Center:
In order to change any system setting, however, we will need to start Mandrake Control Center. After entering our root password, we’ll be able to change things like the boot options, network settings, users and security settings:
From Mandrake Control Center, we can also do all the package management we need using rpmdrake. This tool automatically handles dependencies so we won’t have to hunt for the required packages:
The file manager is Konqueror which is still kicking today, even if only as KDE’s default web browser:
Unfortunately, as a web browser, this version of Konqueror isn’t able to handle many modern websites. And neither was Mozilla’s web browser:
Of the websites I tried, these two web browsers were only able to load Google:
They both failed to load other top ranking websites such as YouTube, Facebook, Wikipedia, Yahoo, Reddit, Twitter, Amazon or Instagram.
I was surprised to find out that OpenOffice’s suite was already around back in 2003, with its usual components such as Writer, Calc, Impress, etc.:
When it comes to multimedia playback, it was necessary to install xine, since neither MPlayer nor KDE’s default viewer were able to play Big Buck Bunny’s AVI file:
Still, the playback, both for audio and video, was choppy. Maybe the results would improve by trying other emulated video and audio devices.
I was pleasantly surprised by how user friendly this version of Mandrake Linux was, even compared to modern operating systems.
Common day-to-day user actions, and even most system administration, can be performed using some kind of GUI, without giving up the power of the Unix shell underneath.
Granted, most of the issues that would have been encountered back in the 2000’s, regarding Linux, would be related to hardware compatibility and lack of drivers. This time around, we luckily avoided all that by using a virtual machine.
Either way, the effort that went into creating such an experience for the user is notable. Especially by the fact that only outside factors, such as vendor support, could prevent us from enjoying this distro to its full potential.
If you’re anything like me, you’ll probably have at least one old PC or laptop collecting dust somewhere in your house. There are many ways to give new life to these devices, but one very simple option is to use them as thin clients to remotely access a more powerful and modern machine.
Let’s configure our PC so it will act as the server for us to connect to. Assuming we already have Debian installed, we will just need to install the server software.
For this purpose, we will use xrdp which is an open source server for the RDP protocol. To install it just run:
sudo apt install xrdp
However, the manual page for xrdp.ini warns us about the default security settings:
Standard RDP Security, which is not safe from man-in-the-middle attack, is used. The encryption level of Standard RDP Security is controlled by crypt_level.
– Manpages
We definitely should do something about this, so we will use TLS as the security layer.
The necessary certificates were generated automatically during the installation of the ssl-cert package, but we need to add the user xrdp to the ssl-cert group so it can read the private key:
sudo adduser xrdp ssl-cert
Now let’s edit the file /etc/xrdp/xrdp.ini and change these lines:
security_layer=negotiate
certificate=
key_file=
To these:
security_layer=tls
certificate=/etc/xrdp/cert.pem
key_file=/etc/xrdp/key.pem
And restart the service:
sudo service xrdp restart
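We can verify that xrdp is listening on the standard RDP port (3389) by running:
ss -ltn | grep 3389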
With this, our PC is ready to be accessed remotely by the thin client.
We can now go ahead and set up the box that will act as thin client.
For this we just need a bare bones Debian 9 (stretch) install with only a few extra packages. Therefore, we will make sure nothing is selected during the Software selection step of the installation process:
Afterwards, once we have booted into Debian, we can install a display manager and a window manager so we can have a simple graphical environment. We will use LightDM and Openbox respectively, and tint2 as a lightweight taskbar:
sudo apt install lightdm openbox tint2 xterm
After a reboot, we will be presented with the login screen:
Let’s configure Openbox so it launches tint2 after we login. We need to copy the default configuration files for Openbox to our home directory so we can modify them:
mkdir -p ~/.config/openbox && cp /etc/xdg/openbox/* ~/.config/openbox
Then we will edit the file ~/.config/openbox/autostart to add, at the end, the following lines:
# Launch taskbar
tint2 &
To access our server remotely, we will use Remmina as it supports several protocols (RDP, VNC, SSH, NX, XDMCP, etc.):
sudo apt install remmina
Once the installation finishes, we will again edit Openbox’s autostart file to launch Remmina:
# Start Remmina
remmina &
After we login, we can create a new connection in Remmina by pressing Ctrl+N. We just have to enter a name for the connection and the address or hostname of our server:
After we click on Connect, the connection will be saved and we will be asked to accept the server’s TLS certificate:
If we accept it, we will get to the login screen for our remote server:
Here we just need to enter the username and password for our remote server and we will have access to the desktop. By pressing the right Ctrl+F we can make it fullscreen for a seamless experience:
We can configure our Debian thin client to automatically login and make Remmina launch the connection to our remote server so we are presented with the remote login screen.
First we will configure LightDM to automatically login with our local user. For this, we need to edit the file /etc/lightdm/lightdm.conf as root and configure our username in the Seat configuration section:
[Seat:*]
autologin-user=agus
Now, we need to find out the filename for our connection as it was saved by Remmina. Connections are either in $HOME/.remmina, for older versions, or in $XDG_DATA_HOME/remmina for newer ones.
~/.remmina
├── 1492192074855.remmina
└── remmina.pref
Then we just have to modify the last line of Openbox’s autostart file accordingly:
# Start Remmina
remmina -c ~/.remmina/1492192074855.remmina &
If we reboot now, we will connect directly to the remote machine and be presented with its login screen:
We can configure Remmina to save the remote login credentials and log us into the server automatically.
For this, we need to save the username and password on the connection profile:
However, for extra security, we should install the GNOME plugin for Remmina so the password is stored in the GNOME keyring:
sudo apt install remmina-plugin-gnome seahorse
We will need to logout and log back in so the keyring is generated transparently using our local password.
Now, if we modify the connection and add the credentials, the password will be stored in the keyring for safekeeping:
Since our local password is needed to unlock the keyring to retrieve the remote password, we will have to revert the changes in /etc/lightdm/lightdm.conf:
[Seat:*]
#autologin-user=agus
Otherwise, it will ask us to unlock the keyring before Remmina can connect to our remote server.
Since this box won’t be doing much work other than running Remmina to connect to our server, we can remove some unneeded packages. Things like job scheduling and message logging are pointless:
sudo apt purge --auto-remove anacron cron rsyslog
As an added note, if we plan to connect to a wireless network, we might want to install NetworkManager and its applet to make it easier for us:
sudo apt install network-manager-gnome
However, since NetworkManager uses GNOME keyring, we won’t be able to automatically login locally in a seamless way.
As we have seen, using an old PC or laptop as a thin client is a great way to give new life to these devices.
The hardware requirements are very low since it will be mostly using the network. You can see the resource usage in a system with only 128 MB of RAM:
Duply is a frontend for the mighty Duplicity magic, and a really nifty one. Anybody that has used Duplicity for backups may have noticed two things: how powerful and versatile a tool it is, and how tricky it can be to configure a backup scheme.
First of all, let’s talk a little bit about the backend, that is, Duplicity. As the Wikipedia article nicely points out, Duplicity provides encrypted, versioned, remote backups that require very little of the remote server; in fact, it just needs the server to be accessible via any of the supported protocols (FTP, SSH, rsync, etc.).
The first step required to use Duply is the creation of a backup profile. This can be accomplished by running duply <profile> create, where <profile> is whatever name we want for the profile.
This creates a configuration file called ~/.duply/<profile>/conf that we will edit. The configuration file is quite well documented, but I will break down the main points.
There are several settings we should take into account when configuring Duply:
There are two types of encryption Duply can use (unless we just disable it altogether), both with pros and cons.
Encryption with GPG keys is self-explanatory. You use a GPG key to encrypt each volume of the backup, and both the GPG key and the passphrase are needed to decrypt the backup, giving you extra security. This also means that, if you lose the GPG key file, you will not be able to recover your backup. Therefore, you have to make sure that the ~/.gnupg/ directory is copied somewhere else and not just inside the backup… Trust me, it’s happened to me:
GPG_KEY='ADD274FA' # Use 'gpg --list-keys' to see your keys
GPG_PW='VeryStrongPass' # Passphrase of the key
Symmetric encryption is simpler in that it only uses a single password for encryption, meaning you can recover your backup so long as you remember this password. Obviously, it is less secure than using a key, since it’s subject to bruteforce attacks:
#GPG_KEY='ADD274FA' # Comment out this line
GPG_PW='ItBetterBeStr0ng' # Password to use
Now we have to configure where Duply will save our backups. In the conf file there are several examples for all the supported protocols. In my case, I will use FTP:
TARGET="ftp://ftpuser:ftppass@server/$USER@$HOSTNAME"
Notice that, if you use shell environment variables ($USER, $HOSTNAME, etc.), you have to use double quotes instead of the default single quotes, otherwise the substitution won’t expand.
Usually, as normal users, we would want to backup our home directory and exclude those directories/files with an exclude list. This can be done with Duply by changing the following setting in the conf file:
SOURCE="$HOME"
Again, notice the double quotes for variable substitution.
For system backups, since we can only specify one source, we should use the root folder and use exclude lists:
SOURCE='/'
Once we have determined our source for backups, we should filter out files or directories that would make our backups too big. We do this by creating the file ~/.duply/<profile>/exclude and listing the files inside. Thankfully, these lists accept default Unix globbing. For reference, this is what’s in my exclude file:
**/*[Cc]ache*
**/*[Hh]istory*
**/*[Ss]ocket*
**/*[Tt]humb*
**/*[Tt]rash*
**/*[Bb]ackup
**/*.[Bb]ak
**/*[Dd]ump
**/*.[Ll]ock
**/*.log
**/*.part
**/*.[Tt]mp
**/*.[Tt]emp
**/*.swp
**/*~
**/.adobe
**/.cache
**/.dbus
**/.fonts
**/.gnupg/random_seed
**/.gvfs
**/.kvm
**/.local/share/icons
**/.macromedia
**/.obex
**/.rpmdb
**/.thumbnails
**/.VirtualBox
**/.wine
**/Downloads
As you can see, you can specify both wildcards or certain directories/files.
It’s worth noting that, even though the file is called exclude, it can be used to include files too. For instance, if we used the root directory as source (SOURCE='/') as we talked about before, we can exclude all files except certain directories like so:
+ /etc
+ /root
+ /var/lib/mysql
+ /var/mail
+ /var/spool/cron
+ /var/www
**
That last line would tell Duply to ignore all files except those listed previously and preceded by a plus sign.
Since version 0.5.14 of Duply, there is another way to exclude directories from the backup. By creating a file called .duplicity-ignore inside a directory, we will force Duply to ignore it recursively. To enable this, we will have to uncomment these lines in our configuration file ~/.duply/<profile>/conf:
FILENAME='.duplicity-ignore'
DUPL_PARAMS="$DUPL_PARAMS --exclude-if-present '$FILENAME'"
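For example, to have Duply skip a large directory entirely (the directory here is just an example), we simply drop the marker file inside it:
touch ~/Videos/.duplicity-ignore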
Finally, we can determine the age of the backups we keep when we run the purge commands. There are a couple of settings here depending on the way we make backups.
This setting tells Duply to keep backups up to a certain time (for example 6 weeks) when we run duply <profile> purge:
MAX_AGE=6W
This other one tells Duply to keep a number of full backups when we run duply <profile> purge-full:
:
MAX_FULL_BACKUPS=2
However, the most useful one for me is the setting that uses the --full-if-older-than option of duplicity to automatically make a full backup when the previous full backup is older than a certain age:
MAX_FULLBKP_AGE=1W
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
Finally, after everything is configured, we should run a backup to test that everything is alright with the command duply <profile> backup. This might take a while since, not having any previous backup, it will execute a full backup.
After that, we can check the status of our backups by running duply <profile> status, which would give us something like this:
Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Tue Apr 17 14:48:54 2012
Chain end time: Wed Apr 18 14:01:33 2012
Number of contained backup sets: 1
Total number of contained volumes: 52
Type of backup set: Time: Num volumes:
Full Tue Apr 18 14:48:54 2012 52
-------------------------
No orphaned or incomplete backup sets found.
--- Finished state OK at 15:46:34.122 - Runtime 00:00:03.495 ---
That looks cool and everything but we cannot rely on our memory to remember when we should make a backup. That’s why we should schedule our backups using cron (or anacron, or fcron) and leave the heavy lifting to them.
We can either specify a time for both a full and an incremental backup, like this:
@daily duply <profile> backup_verify
@weekly duply <profile> full_verify_purge --force
This will run and verify a daily incremental backup and a weekly full backup. Also, it will purge old backups weekly after completing and verifying the full backup.
However, if we configured Duply to use the --full-if-older-than option of duplicity as discussed above, we can just run a single command:
@daily duply <profile> backup_verify_purge --force
This is extremely useful for laptops and boxes that are not on 24x7.
Another basic requirement for any backup solution is the option to run certain commands both before and after the backup is executed. Duply, of course, has this too: it will run any command inside the file ~/.duply/<profile>/pre before the backup and any command inside ~/.duply/<profile>/post after the backup.
This is useful to lock and flush databases before the backup and unlock them afterwards, maybe even make an LVM snapshot for a consistent and quick backup. Or just to gather any other information that needs to be backed up too (e.g. installed packages, Delicious bookmarks, etc.).
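As a minimal sketch, a pre script could dump the list of installed packages into the home directory so it ends up inside the backup (the output path is just an example):
cat <<- 'EOF' > ~/.duply/<profile>/pre
#!/bin/sh
# Save the list of installed packages so it gets included in the backup
dpkg --get-selections > "$HOME/packages.list"
EOF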
There are some drawbacks to using the system while the backup is being run. An obvious one is the impact on performance, since the backup is using the disks.
Also we have the fact that, if the backups take a while, which is very likely to happen, and the files are modified in the meantime, the verification will fail. That doesn’t mean the backup has failed but the verification obviously will.
For this, I would recommend either a LVM snapshot as suggested above which, let’s face it, is not very likely to be done on anything other than a server; or we can just disable the verification and use ionice like so:
@daily ionice -c3 duply <profile> backup_purge --force
This will execute the backup with low I/O priority, which means we will be able to use the computer without much impact, and cron will still send us an email with the output of the command so we can confirm that the backup was done properly.