DevOps @ Home

Note: this is my first time writing an article via LinkedIn Publishing; I'm checking out the capabilities of this platform before attempting an actual blog on Blogger or some other platform.

Introduction

I recently replaced my home Linux server of 7+ years with a new box, and with it did a fresh OS installation of the latest Fedora version; the previous box had started on a Fedora version in the low 20s and had periodically been refreshed to newer versions using FedUp or DNF system upgrades. With the fresh OS install I had the chance to revisit how I deployed various services and what software packages I used to provide them.

My point with this article is simply to show how easy it is to put together a local Linux development environment with all of the services and amenities you'd expect in a well-equipped work environment: a private git server for source control, a wiki, source code browsing/indexing, and CI/CD. Some of the topics below probably warrant their own separate articles, as I'm admittedly light on details here.

Docker and Container Orchestration

On my previous Linux server I had ended up running a single-node Kubernetes cluster and was experimenting with running containerized services rather than VMs. While I wanted to continue in this direction, Kubernetes seemed like complete overkill for what I was doing and for the available compute resources. I therefore made the switch to Docker Swarm: I installed Docker CE, replacing the Fedora-provided docker rpm, and enabled Swarm mode.
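For reference, the switch itself is only a handful of commands; this is a rough sketch following Docker's documented Fedora install steps (the exact name of the Fedora-provided package may differ on your release):

```
# remove the distro docker package, install Docker CE from Docker's repo
sudo dnf remove docker
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce
sudo systemctl enable --now docker

# turn this single node into a Swarm manager
docker swarm init
```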

Services

Wiki

I previously ran Foswiki either in a separate VM or in various attempts at containerizing the service with wiki state information mounted as a volume from the host. After a few more failed attempts at doing this cleanly with a Docker image I was happy to find Wiki.js, a modern open-source wiki written in Node.js. Wiki.js fully supports containerization, so I deployed it and its Postgres dependency as a Swarm stack. The Postgres data directory is mounted as a volume from the host; for convenience it lives in the home directory of a `wiki` user on the server.
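As a sketch, the Swarm stack file looks roughly like this (the image tags, published port, credentials, and the exact path under /home/wiki are illustrative assumptions, not my actual configuration):

```yaml
version: "3.7"

services:
  db:
    image: postgres:11
    environment:
      POSTGRES_DB: wiki
      POSTGRES_USER: wiki
      POSTGRES_PASSWORD: wikisecret
    volumes:
      # Postgres data lives under the wiki user's home on the host
      - /home/wiki/pgdata:/var/lib/postgresql/data

  wiki:
    image: requarks/wiki:2
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: "5432"
      DB_USER: wiki
      DB_PASS: wikisecret
      DB_NAME: wiki
    ports:
      - "3000:3000"
```

Deploying is then a single `docker stack deploy -c wiki.yml wiki`.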

Git Server

While I have created a couple of open source projects on GitHub (serf-cpp and config-cpp) and have a number of other repositories there, I still wanted local git repositories, ideally with a better administrative interface and user experience than gitolite. I was pleased to find Gogs, a self-hosted git server written in Go. Gogs provides a very GitHub-like user experience in terms of its web interface and capabilities such as project issue tracking, wiki, and pull requests. Given that Gogs has minimal dependencies, I created a `git` user on the server and deployed Gogs as a systemd service.
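The systemd unit is short; here is a sketch closely following the sample unit shipped with Gogs (the install path under /home/git is an assumption):

```
# /etc/systemd/system/gogs.service
[Unit]
Description=Gogs self-hosted Git service
After=network.target

[Service]
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/gogs
ExecStart=/home/git/gogs/gogs web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload` and `systemctl enable --now gogs`, the web interface is up.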

Git repositories can be set up with remotes for both the local Gogs server and my public GitHub account, making it possible to run development workflows centered on either the private repository or the public GitHub repository.

Web Server

My main interest in having a local web server at all is in supporting local HTTP-based yum repositories with RPMs I need when building development Docker images. I deployed a minimal Swarm stack for Nginx, serving a subdirectory of the home directory of an `nginx` user on the server. This subdirectory hosts the various yum repositories, which are built on the server using `createrepo`.
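A sketch of that stack file (the docroot path under the `nginx` user's home is an assumption; the published port matches the baseurl used below):

```yaml
version: "3.7"

services:
  web:
    image: nginx:alpine
    ports:
      - "3004:80"
    volumes:
      # yum repos built with `createrepo` live under the nginx user's home
      - /home/nginx/www:/usr/share/nginx/html:ro
```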

Other Linux boxes or Docker images can add the local repo via a .repo file in /etc/yum.repos.d:

[fir]
name=fir yum repository
baseurl=http://server.lan:3004/yum/fedora/30
enabled=1
gpgcheck=1

Source Code Indexing/Browsing

I've long been a fan and user of OpenGrok as a web-based source code browser and indexer, but I have never liked dealing with the hassle of deploying a Tomcat server. Fortunately there is an official OpenGrok Docker image, which I was able to deploy as a Swarm stack. The source code to be indexed all resides under the home directory of an `opengrok` user on the server, as git checkouts from either the local Gogs server or actual GitHub repos. A cron job does periodic git pulls for all repos and triggers the OpenGrok container to rebuild its indices.
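The pull half of that cron job can be sketched as a small shell function; the layout (one checkout per directory under a single source root) is an assumption, and the index-rebuild trigger is omitted since it depends on the OpenGrok image version:

```shell
# pull every git checkout found directly under the given source root
update_repos() {
    for d in "$1"/*/.git; do
        [ -d "$d" ] || continue
        git -C "${d%/.git}" pull -q --ff-only
    done
}

# e.g. from the opengrok user's crontab, hourly:
# 0 * * * * /home/opengrok/update-repos.sh
```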

CI/CD Server

While I use internet-based services like Travis-CI for projects on GitHub, I also want CI/CD available for projects being developed locally. I ended up deploying Jenkins and Jenkins agents as Swarm stacks, with state information mounted as a volume from the home directory of a local `jenkins` user on the server. The `jenkins` user is in the `docker` group and is therefore able to run container-based builds.

After installing the Gogs plugin in Jenkins it was possible to configure Gogs webhooks on applicable repositories to POST to the local Jenkins server and trigger project builds. Multi-branch pipelines created on the Jenkins server and implemented via a `Jenkinsfile` in each project repo make it possible to do multi-branch development locally with full CI/CD support for building and testing.

Docker-based Software Builds

Services like Travis-CI make it very easy to build and test against a wide range of compiler and library versions for projects hosted on GitHub. For projects being developed locally it is just as easy to create a set of Docker base images, built on the base Fedora image, which each add a particular toolchain (e.g. a specific gcc or clang version). These images add users matching the uids of both my local user and the `jenkins` user on the server, since they are used both for CI/CD builds and for manual/interactive builds and debugging sessions. The images can leverage both Fedora-provided RPMs and locally-built RPMs with alternate gcc or clang versions:

FROM fedora:30

# Base gcc 6.5.0 docker image
COPY fir.repo /etc/yum.repos.d/

RUN \
    dnf install --nogpgcheck -y cmake gcc-c++ make git wget libstdc++-devel lcov alt-gcc6 sudo && \
    groupadd -g 1000 builder && \
    useradd -g 1000 -u 1000 -m builder && \
    groupadd -g 1003 jenkins && \
    useradd -g 1003 -u 1003 -m jenkins && \
    mkdir -p /home/builder/work && \
    mkdir -p /home/jenkins/work && \
    echo "export PS1=\"gcc6.5.0 > \"" >> /home/builder/.bashrc && \
    chown builder:builder /home/builder/.bash_profile && \
    echo "builder ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/builder && \
    echo "jenkins ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/jenkins && \
    echo "export PS1=\"gcc6.5.0 > \"" >> /home/jenkins/.bashrc && \
    chown jenkins:jenkins /home/jenkins/.bash_profile

You can then create a project-specific image which starts with the toolchain image and adds in project dependencies built from source with that toolchain:

FROM gcc650:latest

# serf-cpp docker image using gcc 6.5.0
COPY builddeps.sh /

RUN \
    /builddeps.sh && \
    echo "export PS1=\"serf-cpp-gcc6.5.0 > \"" >> /home/builder/.bashrc

USER builder

A multi-branch pipeline `Jenkinsfile` in the project repo can then use Jenkins' docker agent to perform the software build inside a container based on this image. The image can also be used directly on the Linux server via `docker run`:

> docker run --rm -it -v $PWD:/home/builder/work serf-cpp-gcc650:latest /bin/bash
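For reference, a minimal sketch of such a `Jenkinsfile` (the stage contents are assumptions; the point is the declarative docker-agent pattern):

```
// Declarative multi-branch pipeline building inside the toolchain image
pipeline {
    agent {
        docker { image 'serf-cpp-gcc650:latest' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mkdir -p build && cd build && cmake .. && make'
            }
        }
        stage('Test') {
            steps {
                sh 'cd build && ctest --output-on-failure'
            }
        }
    }
}
```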

Backups

All of the services mentioned have their state information stored in service account home directories under /home on the server, so it is very easy to back up those directories both to an external USB drive and to a cloud location (if desired).
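A backup pass can be as simple as one `tar` archive per service account; a sketch (the account list and destination path are assumptions):

```shell
# archive each service account's home directory into a dated tarball
backup_homes() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    for user in wiki git nginx opengrok jenkins; do
        [ -d "$src/$user" ] || continue
        tar czf "$dest/$user-$(date +%F).tar.gz" -C "$src" "$user"
    done
}

# e.g. backup_homes /home /run/media/backup-usb
```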

