How I set up a Machine in my Local Network as a Remote Docker Host for Coding inside Remote Containers

Nilabhra Roy Chowdhury
7 min read · Apr 1, 2021
Photo by Avi Richards on Unsplash

In December 2020, I decided to part ways with my 2017 MacBook Air and got myself the late 2020 MacBook Pro that came with the Apple M1 SoC. While fully aware that the developer community would take some time to port existing tools and libraries to the ARM64-based Apple silicon, I decided to take the plunge into the darkness. I wanted to experience the paradigm shift closely, as I strongly believe that RISCs will slowly outpace CISCs, at least when it comes to portable and handheld devices.

My work as an NLP Engineer requires me to use many libraries which are not yet compiled to run natively on the M1 chip, but thankfully, Apple’s Rosetta 2 does the job of translating x86_64 instructions to ARM64 instructions for many libraries that ship with pre-compiled binaries. Some of the libraries that do not work with Rosetta 2 can be compiled from scratch too. But there are still software and libraries out there which cannot run on Apple silicon, or indeed on any CPU other than the ones manufactured by Intel. This is because their pre-compiled binaries use Intel’s proprietary AVX-512 instructions or other similar SIMD instructions. These special instruction sets allow for fast vector and matrix operations and hence are used by many data-science related software packages and libraries, as multivariate linear algebra is the basis for many of them.

Unfortunately for me, some of the libraries and databases I use for my work rely on these Intel-specific instruction sets, and as a result I cannot run and test several projects I am involved in on my laptop (not to mention that I won’t get linting support while developing). One way out of this was to use a remote machine for development, such as an Amazon EC2 instance, but instead I went for a more local solution.

In my previous post, I mentioned how Docker could be useful not only for testing and production but also for the process of development. Since my projects are set up to be developed from within a Docker container, all I needed was the Docker Engine running on an Intel-based machine in my local network.

The first step was to choose the hardware. Apart from an Intel CPU that supported the AVX-512 instruction set, I needed 16 GB of RAM (pre-trained language models are getting larger and larger) and an SSD. While browsing Amazon, I stumbled upon this for 399 Euros. It met all my requirements and, in addition, allowed the memory to be expanded to 32 GB and had built-in WiFi and Bluetooth. The built-in WiFi meant that, for all intents and purposes, the machine only needed to be plugged into a power source.

The PC arrived the next day and, upon connecting it to a power supply, a keyboard and mouse, and a display, I was greeted by the Windows 10 Pro setup screen. This was a no-go: I wanted to maximise my usage of the resources available on that machine, and having Windows on it meant that resources would be allocated to running the GUI and other Windows-specific services. While I wanted an OS running without a GUI, I also wanted one I could use with a GUI if I needed to. I decided to get myself a copy of Pop!_OS, which has a neat GUI (which unfortunately I would need to disable) thanks to it being developed by System76. I downloaded Pop!_OS and used UNetbootin to turn a USB stick into a bootable live Pop!_OS USB drive.

16 GB RAM, 256 GB SSD, Intel Core i5-5257U with built-in WiFi and Bluetooth, in a form factor of 17 x 8 x 7.8 cm

The installation wasn’t as smooth as I expected. What worked in the end was erasing all existing volumes, formatting the entire SSD, and then installing Pop!_OS as the only OS on the machine.

Setting up SSH

Installing an SSH server
Once the OS was installed and configured to use the Internet over WiFi, I wanted to set up SSH on it and try connecting to it from my MacBook. I installed the OpenSSH server by running:

sudo apt update && sudo apt install openssh-server

Now I needed to find the IP address assigned to the machine by the router’s DHCP server. I just needed to check the inet entry in the wlan0 or wlxxxx section of the output of ifconfig. If you know what you are looking for, you can directly use ifconfig | grep inet to find the IP addresses of all the network devices. I found that the IP assigned to my machine was 192.168.178.54. I typed ssh nilabhra@192.168.178.54 (nilabhra being my username) into my MacBook’s terminal and, sure enough, was prompted for a password, upon entering which I had shell access to the machine. It goes without saying that the PC and my MacBook were connected to the same WiFi router.
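For reference, the two commands involved look like this (the interface names and the address will of course differ on other networks):

# on the Linux machine: list the IPv4 addresses of all network interfaces
ifconfig | grep inet
# from the MacBook: open a shell on the PC
ssh nilabhra@192.168.178.54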

Key based authentication
In order to use SSH on a regular basis, it is better to set up key-based authentication so that one can avoid typing in a password when logging in to a remote machine. This step is also crucial for configuring VS Code later. DigitalOcean has an easy-to-follow tutorial on how to do this in case you need to look up the steps.
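For completeness, and assuming a standard OpenSSH setup, the gist of it is roughly this:

# on the MacBook: generate a key pair if one does not exist yet
ssh-keygen -t ed25519
# copy the public key into ~/.ssh/authorized_keys on the PC
ssh-copy-id nilabhra@192.168.178.54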

Once this was done, I could log in to the PC without having to type in a password.

Static IP
To make life easier, I wanted the IP assigned to the machine to remain static so that my ssh commands would not change after rebooting the machine or the router. Luckily enough, I saw that binding an IP to a device was as easy as ticking one checkbox on my router’s configuration page. The process should be more or less similar for most routers.

Disabling the GUI

I found that there is a simple way to disable the GUI on boot; moreover, re-enabling it is also pretty easy. I ran sudo systemctl set-default multi-user.target to disable the GUI on boot and, upon restarting the machine, I was sure enough presented with the text-based login screen of Linux. If I wanted the GUI turned back on again, all I would need to do is run sudo systemctl start gdm3. After I was done with this step, I disconnected the monitor, keyboard and mouse from the machine and just kept it plugged into power.
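Put together, the relevant commands are the following (the graphical.target line is not something I needed here, but it is the counterpart for making the GUI the default again on boot):

# boot into a text console from now on
sudo systemctl set-default multi-user.target
# start the GUI again for the current session only
sudo systemctl start gdm3
# make the graphical login the default on boot again, if ever needed
sudo systemctl set-default graphical.target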

Installing Docker

I followed the instructions at https://docs.docker.com/engine/install/ubuntu/ to install Docker on the new Linux machine. I basically had to run these commands in order:

sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

A quick docker --version told me that the installation was a success.
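One small detail worth mentioning: since VS Code will later talk to this Docker engine over SSH as my regular user, that user needs to be able to run docker without sudo. Docker’s post-installation steps cover this; in my case it boiled down to something like:

# allow the non-root user to talk to the Docker daemon (log out and back in afterwards)
sudo usermod -aG docker nilabhra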

Configuring VS Code

This was perhaps the easiest step in the whole process. All that needed to be done was to set the docker.host property in the VS Code settings to point at the Docker engine running on the Linux machine. The property can be set in VS Code at both the user level and the workspace level, but since I wanted to keep access to the Docker engine running on my MacBook for other projects, it made sense for me to set up the remote Docker host at the workspace level. To do this, I created a .vscode directory in the root directory of a project I was working on and then created a settings.json file inside it with the following contents:

{
    "docker.host": "ssh://nilabhra@192.168.178.54"
}

You might have noticed that VS Code uses SSH as the protocol to connect to remote Docker hosts.
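Incidentally, the same ssh:// address can be used to sanity-check the connection from a plain terminal, without involving VS Code at all (assuming a Docker CLI is installed on the MacBook):

# from the MacBook: list containers on the remote engine over SSH
docker -H ssh://nilabhra@192.168.178.54 ps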

To see if everything so far had worked, I opened the command palette and selected “Remote-Containers: Rebuild and Reopen in Container”. After waiting for a good 12 minutes, all the required services for my project were finally built and I was presented with a full-fledged VS Code window. Everything seemed perfect except for the build time.

Configuring The Project

I realised that the long build time was due to how the build context was specified for my dev container. Large files from my laptop (around 3 GB in total) were being copied over to the Linux machine via WiFi. This would happen every time I wanted to rebuild the dev container.

As a workaround, I copied the large files to the Linux machine via scp and specified a directory containing only a Dockerfile (and a requirements.txt) as the build context in the docker-compose.yml. To test whether the strategy worked, I tried to rebuild the container again, and this time it was done in seconds!
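As a rough sketch (the service name and directory layout here are illustrative, not my actual project), the relevant part of such a docker-compose.yml looks something like this:

version: "3"
services:
  app:
    build:
      context: ./docker      # slim directory containing only the Dockerfile and requirements.txt
      dockerfile: Dockerfile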

The setup was finally ready for me to be able to code and test all components of the project without any hiccups.

