Applications self-hosted on The Edge will be fully isolated from each other by running within Linux Containers.
All of the dependencies of a given application will be installed in the respective application container. Only a select group of container ports will be mapped to host ports so that the applications can be reached from the outside.
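If the containers are managed with LXD, the port mapping described above can be expressed as a proxy device in the instance configuration. The container name and ports below are hypothetical, purely to illustrate the shape of such a mapping:

```yaml
# Fragment of a hypothetical LXD instance config for a container
# named "web-app": forward host port 8080 to port 80 inside it.
devices:
  http:
    type: proxy
    listen: tcp:0.0.0.0:8080    # host side
    connect: tcp:127.0.0.1:80   # container side
```

The same effect can be achieved imperatively with `lxc config device add`; the declarative form is shown here because it documents the mapping alongside the rest of the container's configuration.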
The diagram below shows a sample server on The Edge running a number of application containers. Every application container fully isolates the application and its dependencies from the rest of the applications self-hosted on the server. The dependencies shown are examples as well.
A single server on The Edge is usually capable of running multiple applications. In fact, this is desirable in order to maximize the return on investment. How can we run any number of applications on this server with minimal system overhead?
To answer this question, we explored the following options.
Installing applications directly on the host worked relatively well for the first couple of applications. Then a third application required a slightly different version of PHP. Multi-tenancy with regard to databases also increased the complexity of maintenance and operation considerably.
Fully isolating applications and their dependencies in virtual machines seemed ideal. However, launching virtual machines turned out to be slow, and running them put a considerable strain on less capable devices such as the Raspberry Pi.
We evaluated running both QEMU and KVM virtual machines on a Raspberry Pi 4 with 8 GB of RAM and an external SSD connected via a USB-to-SATA adapter.
At the time we started defining The Edge, Docker was already considered a mature technology and an industry standard. For us, though, it increased the complexity of self-hosting applications considerably.
Stateful applications, including all types of database engines, don't play well with Docker containers. Application data needs to be stored outside of the respective container. As a side effect, one cannot simply copy a container together with its data.
Docker containers are designed to run a single process, which is very different from what many of the applications running on The Edge require. Thus, we had to isolate every dependent application in its own container and then use Docker Compose to orchestrate the growing number of containers on the host. As an example, a typical application would need at least two containers: one for the application and one for a database engine. In many cases it would be three because of a reverse proxy running in front of the application, and in some cases even more.
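The container count described above can be sketched as a Compose file. The image names and service layout are hypothetical, purely to illustrate how a single self-hosted application fans out into three containers, with the database data kept in a named volume outside its container:

```yaml
# Hypothetical Compose stack for one application:
# reverse proxy + application + database = three containers.
services:
  proxy:
    image: nginx:alpine          # reverse proxy in front of the app
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: example/app:latest    # hypothetical application image
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # state lives outside the container

volumes:
  db-data:
```

Multiply this by the number of applications on the host and the orchestration overhead discussed above becomes apparent.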
We never dared to try orchestrating Docker containers via Docker Swarm or Kubernetes. For a small home operation, either would have been huge overkill.
Other than that, the containers introduced minimal system overhead and provided roughly the level of isolation we were looking for.
Updating applications was a breeze: it only required pulling the newest version of an image and launching a new container. However, this convenience came at the price already discussed above.
Linux containers turned out to be the sweet spot for us.
Application and dependency isolation is comparable to what virtual machines offer.
The experience of working with them is also simple and very similar to that of virtual machines.
Linux containers are much more lightweight than virtual machines, and their system overhead is comparable to that of Docker containers.
As a drawback, within a Linux container one needs to take care of maintaining the underlying operating system and the self-hosted applications, just as if running on bare metal. At the moment of capturing this technical decision, maintenance is still a manual process.