Remote Development Environment in Kubernetes using VS Code, SSH and custom Dockerfile

Being able to develop directly in your own Kubernetes cluster (e.g. on Azure) has many advantages. If you are a service developer yourself, I don't need to bore you with listing them. If not, my article probably won't interest you anyway. Just in case, these are the reasons that come to my mind first 😉:

  • Always have a terminal at hand in your target environment.
  • Debug your services in the same environment where you want to use them.
  • Keep your development environment always ready for use and independent of your client installation.

There are already some how-tos on this topic, but in each of them, I was missing some details or features that were important to me:

  • To extend and adapt it to my needs, I wanted to create my own customizable Dockerfile.
  • To have remote access to private repositories or GitHub, I wanted to be able to use my private SSH key without installing it into the remote pod.
  • To fully understand this, I wanted to set it up myself without the need for any additional software except VS Code and Microsoft's Remote Development Plugin.

Please note that this article and the following examples are not exhaustive and may contain errors and security vulnerabilities. The article is purely informative and shows how I set up a remote development environment. Whatever you do with the provided information, understand and double-check what you are doing, and do it at your own responsibility and risk. I take no responsibility for damages of any kind caused by imitation.

Dockerfile

I've chosen Ubuntu as parent image, mostly because of personal preferences and compatibility reasons. Any other minimal image like Alpine could also work and might be even smaller, but VS Code seems to work best with glibc-based Linux distributions, whereas Alpine is based on musl libc.

I've removed usernames from the Dockerfile and disabled password authentication for security reasons. To make it work, you must set a username and choose how the corresponding public SSH key is added, in order to enable public key authentication. In the example, it is either retrieved from GitHub (by also setting your GitHub username) or copied from the home folder.

It might also be worth adding pre-generated host keys to prevent the system from regenerating them when the pod is restarted. Otherwise, the SSH client complains about a possible man-in-the-middle attack.
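For example, the host keys could be pre-generated once on the build machine and baked into the image with a COPY instruction. This is a sketch of that idea; the ssh_host_keys directory name is an assumption for illustration:

```shell
# Pre-generate SSH host keys on the build machine (sketch; the
# ssh_host_keys directory name is an assumption for illustration).
mkdir -p ssh_host_keys
ssh-keygen -q -t ed25519 -f ssh_host_keys/ssh_host_ed25519_key -N ""
ssh-keygen -q -t rsa -b 4096 -f ssh_host_keys/ssh_host_rsa_key -N ""
# In the Dockerfile, the keys could then be added with:
#   COPY ssh_host_keys/ /etc/ssh/
```

This way, the same host identity survives pod restarts and image rebuilds. Keep in mind that the private host keys then live in your build context and image, so the image must stay in a private registry.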

Dockerfile
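The original embedded Dockerfile isn't reproduced here. A minimal sketch along the lines described above might look like the following; the Ubuntu tag, the dev username, and the GitHub username are placeholder assumptions, not the author's original values:

```dockerfile
# Illustrative sketch only -- not the author's original Dockerfile.
FROM ubuntu:22.04

ARG USERNAME=dev
ARG GITHUB_USER=your-github-user

RUN apt-get update && apt-get install -y --no-install-recommends \
        openssh-server sudo git curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Create the user and disable password authentication entirely.
RUN useradd -m -s /bin/bash "$USERNAME" \
    && mkdir -p /run/sshd \
    && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Fetch the user's public keys from GitHub to enable public key authentication.
RUN mkdir -p /home/$USERNAME/.ssh \
    && curl -fsSL "https://github.com/$GITHUB_USER.keys" > /home/$USERNAME/.ssh/authorized_keys \
    && chmod 700 /home/$USERNAME/.ssh \
    && chmod 600 /home/$USERNAME/.ssh/authorized_keys \
    && chown -R $USERNAME:$USERNAME /home/$USERNAME/.ssh

# Optionally add pre-generated host keys here, e.g.:
# COPY ssh_host_keys/ /etc/ssh/

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]
```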

Deployment

After building and pushing the image to my cluster's private container registry, I've deployed it using the following YAML file:

deployment.yaml
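The embedded manifest isn't reproduced here. A minimal sketch of such a deployment, assuming the image was pushed to a private Azure Container Registry (the registry, image, and label names are placeholders):

```yaml
# Illustrative sketch only; registry, image, and label names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: remote-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: remote-dev
  template:
    metadata:
      labels:
        app: remote-dev
    spec:
      containers:
        - name: remote-dev
          image: myregistry.azurecr.io/remote-dev:latest
          ports:
            - containerPort: 22
```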

Service

To make the SSH server reachable, I added a service of type LoadBalancer and mapped the default SSH port to 2222, so that it is not immediately obvious that an SSH server is running there.

service.yaml
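A minimal sketch of such a service, exposing port 2222 externally and forwarding it to the container's port 22 (the app: remote-dev selector is an assumed label, not taken from the original manifest):

```yaml
# Illustrative sketch only; the selector label is an assumption.
apiVersion: v1
kind: Service
metadata:
  name: remote-dev
spec:
  type: LoadBalancer
  selector:
    app: remote-dev
  ports:
    - name: ssh
      port: 2222
      targetPort: 22
```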

SSH config

As mentioned at the beginning, I want to use my private SSH key without adding it to the container. Therefore, I forward my local SSH agent to the server. Let's have a look at an extract of my SSH configuration file (~/.ssh/config):

ssh config
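The original extract isn't reproduced here. A minimal sketch of such an entry, with placeholder host alias, IP, and username; ForwardAgent yes is the setting that forwards the local SSH agent to the remote pod:

```
Host remote-dev
    HostName <load-balancer-ip>
    Port 2222
    User dev
    ForwardAgent yes
```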

Of course, this only works if an SSH agent with the added key is running on the client. If necessary, a ProxyJump can also be added if the LoadBalancer IP is not directly accessible from your client (e.g. because you are behind a corporate firewall/proxy or similar). In such a case the ProxyJump must also forward the SSH agent.
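As a sketch, the same entry with an added jump host could look like this (all names are placeholders); note that ForwardAgent is enabled on the jump host as well, so the agent is available along the whole chain:

```
Host jumphost
    HostName jump.example.com
    User dev
    ForwardAgent yes

Host remote-dev
    HostName <load-balancer-ip>
    Port 2222
    User dev
    ForwardAgent yes
    ProxyJump jumphost
```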

Open points

One disadvantage of this setup is certainly that all data is lost when the pod is stopped. In my case, I can easily check out and push my sources from/to a remote Git repository, so the drawback is minor, but there is still a risk of accidental data loss. So I'm thinking about replacing the Deployment with a StatefulSet that claims persistent storage for a working directory.
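A rough sketch of how that StatefulSet variant could look, assuming a PersistentVolumeClaim template for a workspace directory (all names, paths, and sizes here are illustrative):

```yaml
# Illustrative sketch only; names, paths, and sizes are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: remote-dev
spec:
  serviceName: remote-dev
  replicas: 1
  selector:
    matchLabels:
      app: remote-dev
  template:
    metadata:
      labels:
        app: remote-dev
    spec:
      containers:
        - name: remote-dev
          image: myregistry.azurecr.io/remote-dev:latest
          ports:
            - containerPort: 22
          volumeMounts:
            - name: workspace
              mountPath: /home/dev/workspace
  volumeClaimTemplates:
    - metadata:
        name: workspace
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With this, only the workspace directory survives restarts; everything installed elsewhere in the container is still ephemeral.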

If I find the time and this article finds some interest, this could be part of an upcoming post.

Please let me know if you have questions, what you would do differently and/or if you found some other weaknesses. Or simply tell me whether you liked it or not 😉.

Hi Matthias, thanks for the great and informative article! When I want to investigate our clusters, I usually create a fresh debug pod based on Ubuntu directly with `kubectl run debug-shell --rm -it --image ubuntu -- bash`, into which I then install cURL. The pod deletes itself again after my investigation and thus doesn't permanently take up space in the cluster. I also find your version very interesting for quickly hooking into the cluster. Regards, Natalie
