A docker test composition

In larger projects the need may arise to quickly spin up a testing environment from one or more team feature-branch builds. In this article I share our experience in providing such a service to development teams, using Docker containerized applications and docker-compose. If you're new to Docker, you might want to read up on the basics first; see Docker Basics in the references below.

The development environment

In this case the platform consists of two separate git branches: one containing the actual web application, Windows services and web services used in production, the other the mocks used during development and test. The mocks are a number of WCF services used for validating postal codes and chamber of commerce registrations, but also for more complex financial interactions with third-party (financial/government) institutions. The CI/CD tooling is implemented with on-premises Azure DevOps, a Windows 2016 machine used for our private Docker registry and a Windows 2019 server to run the containers.

Creating base images

Since we will be deploying containerized WCF and ASP.NET applications, we use the available ASP.NET and WCF 4.8 windowsservercore-ltsc2019 base images to create our own custom base images. See the article on Windows 2019 container support in the references below.

Isolation mode

Docker containers can run in either process or Hyper-V isolation. On Windows 10, Hyper-V isolation is the default; on Windows 2019, process isolation is. Hyper-V isolation consumes roughly ten times as much memory, so to run more complex applications on Windows 10 it may be necessary to switch to process isolation. For this we can add "isolation: process" to our docker-compose.override.yml file. The caveat is that we then need to rebuild all containers based on the 4.8-20200310-windowsservercore-1903 image, due to host incompatibilities.
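A minimal sketch of the override file with the isolation setting; the service name "portal" is illustrative:

```yaml
# docker-compose.override.yml -- force process isolation for a service
# (service name is a placeholder; repeat for each service that needs it)
services:
    portal:
        isolation: process
```

On a Windows 10 host you can verify the active mode per container with docker inspect (look at the HostConfig.Isolation field).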

Configuration support

When our containerized applications start, we would like them to use the environment settings from the docker-compose file. For this purpose we intercept startup with a PowerShell script that replaces settings in the application or web config file. You can read the details in the article about overriding config settings in the references below.
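A sketch of what such a startup script could look like, assuming connection strings are passed as environment variables with a CONNSTR_ prefix (as in the compose snippet further down); the config path and prefix are assumptions, not the actual implementation:

```powershell
# Startup.ps1 -- sketch: copy CONNSTR_* environment variables into the
# matching connection strings of Web.config before starting the site.
$configPath = 'C:\inetpub\wwwroot\Web.config'   # assumed location
[xml]$config = Get-Content $configPath
Get-ChildItem env:CONNSTR_* | ForEach-Object {
    $name = $_.Name.Substring('CONNSTR_'.Length)
    $node = $config.configuration.connectionStrings.add |
        Where-Object { $_.name -eq $name }
    if ($node) { $node.connectionString = $_.Value }
}
$config.Save($configPath)
# Hand control to the usual IIS service monitor so the container stays alive
C:\ServiceMonitor.exe w3svc
```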

FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2019
RUN md c:\aspnet-startup
COPY . c:/aspnet-startup
ENTRYPOINT ["powershell.exe", "c:\\aspnet-startup\\Startup.ps1"]

Build Agent

Our Windows 2019 docker host machine is also configured as an Azure DevOps build agent. Docker Compose will give you errors if you try to build images based on newer OS versions than the build agent in use! Since our base images are based on Windows 2019, they won't build on a Windows 2016 machine. We put our docker-enabled agent in a separate docker agent pool, which we can select in our release configuration.

Docker Compose

The docker-compose.yml file contains all our services and is used to build and run the containers. We separate the service definitions needed to run the containers from the settings needed to build them by placing the build settings in a docker-compose.override.yml file. This way we only need to copy the docker-compose.yml and a .env file to the host. The .env file contains the version tag we wish to use when we run docker-compose up. In the override file we point every service to a Dockerfile and set the build context to the appropriate folder in the extracted artifact used in the Azure DevOps release pipeline.
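A sketch of this split, with illustrative names: the .env file on the host would contain lines like TAG=20450 and ENV=test, the docker-compose.yml only references prebuilt images, and the override file adds the build settings used on the build agent:

```yaml
# docker-compose.override.yml -- build settings only, never copied to the host.
# Service name, registry and paths are placeholders.
services:
    portal:
        build:
            context: ./artifact/portal      # folder in the extracted build artifact
            dockerfile: Dockerfile

# docker-compose.yml (for contrast) would only hold the run-time definition:
#   services:
#       portal:
#           image: registry.local/portal:${TAG:-latest}
```

Because the override file stays behind on the agent, docker-compose up on the host can never accidentally trigger a build.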

Connecting to SQL server

To connect to a local instance of SQL Server from your container, we use the IP address and port number in the connection string.

       environment:
            - CONNSTR_APPDB=server=${DBSERVERIP},1433;database=${DBNAME};User Id=${DBUSER};Password=${DBPASSWORD}

Services, URLs and Identity Server

Our project uses Identity Server. The main application configuration contains only one URL, used both for the redirect and for server-side calls to the Identity Server application. Therefore we cannot simply use the identity service name here but need to stick to using the URL. Since docker containers only know IP addresses by default, we need to add a mapping for this URL that is available to all our services. Luckily we can add aliases to the networks section of our services; these create a DNS entry that resolves the URL to the container's IP address.

        expose:
            - "80"
            - "443"
        depends_on:
            - "otherservice"    

        extra_hosts:
            - "${DBSERVER}:${DBSERVERIP}"
        networks:
            portal_network:
                aliases:
                    - ${TAG:-latest}_application.${ENV:-local}.domain.com

Passing build arguments

To support multiple environments based on different builds, we need to pass some variables, like the current build number (also used as the image TAG) and the environment (local or test host), to our Dockerfiles and associated PowerShell scripts. For this purpose we use an args section in our override compose file and the ARG and ENV keywords in our Dockerfile (check out the link below on how to use $env:ENV in your PowerShell script).
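A sketch of this plumbing, with placeholder names: the args section in the override file feeds the build, and re-exporting the ARG as an ENV in the Dockerfile makes it visible to PowerShell at run time:

```yaml
# docker-compose.override.yml -- pass build number and environment to the build
# (service name is illustrative)
services:
    portal:
        build:
            context: ./artifact/portal
            args:
                - TAG=${TAG:-latest}
                - ENV=${ENV:-local}

# In the Dockerfile the values are received and re-exported:
#   ARG TAG
#   ARG ENV
#   ENV TAG=${TAG} ENV=${ENV}
# After which Startup.ps1 can read them as $env:TAG and $env:ENV.
```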

Connecting networks

In our case we have two projects, one containing our mocks, the other containing our application website and APIs. Since both originate from different builds (the mocks won't change as often), we would like to combine them. With docker-compose this is pretty straightforward: define a network as external in your app's compose file with the name of your mocks network, and add this network to the service you want to allow to communicate with your mocks.

        expose:
            - "80"
            - "443"
        extra_hosts:
            - "${DBSERVER}:${DBSERVERIP}"
        networks:
            app_network:
                aliases:
                    - ${TAG:-latest}_someservice.${ENV:-local}.domain.com
            mock_network:
        
networks:
    mock_network:
        external:
            name: ${MOCKS:-latest}_mocks
    app_network:
       name: ${TAG:-latest}_app
       driver: nat 

Updating your hosts file

After we have started the app with 'docker-compose up -d', we would like to be able to reach the website by URL. For this we need to update our hosts file with the IP address and URL of the right docker container. To get the IP address we can use this statement:

docker inspect -f '{{(index .NetworkSettings.Networks "<appnetworkname>").IPAddress}}' <app_container_name>
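Putting the two steps together, a sketch in PowerShell; the network name "20450_app", container name "20450_portal_1" and hostname are placeholders for your own build id and service names:

```powershell
# Sketch: look up the container IP on its compose network and append a hosts entry.
$ip = docker inspect -f '{{(index .NetworkSettings.Networks "20450_app").IPAddress}}' 20450_portal_1
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" `
    -Value "$ip 20450_portal.docker.localdomain.nl"
```

Run this in an elevated shell, since the hosts file is write-protected for normal users.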

Redirect IIS logging

In a test scenario it can be handy to inspect the incoming requests for any container. Therefore we redirect the IIS logs (and more) to the docker host with LogMonitor. See the reference below for more details.

RUN md c:\LogMonitor
COPY LogMonitor.exe LogMonitorConfig.json C:/LogMonitor/
RUN md c:\aspnet-startup
COPY . c:/aspnet-startup
WORKDIR /LogMonitor
ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "powershell.exe c:\\aspnet-startup\\Startup.ps1"]
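The LogMonitorConfig.json copied in above tells LogMonitor which sources to forward to stdout. A minimal sketch, assuming the default IIS log location; adjust sources to taste:

```json
{
    "LogConfig": {
        "sources": [
            {
                "type": "File",
                "directory": "c:\\inetpub\\logs",
                "filter": "*.log",
                "includeSubdirectories": true
            },
            {
                "type": "EventLog",
                "startAtOldestRecord": true,
                "eventFormatMultiLine": false,
                "channels": [
                    { "name": "System", "level": "Error" }
                ]
            }
        ]
    }
}
```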

Now you can use the Docker dashboard to monitor your IIS logs in real time.

Build and push

To build and deploy our containers to our docker host machine, we create a dedicated docker release in Azure DevOps. Within this release we download and extract the build artifacts. We add a pipeline step with tasks to update the .env file with the build id and docker host environment variables, build the images, push them to our private registry, and copy the .env and docker-compose.yml files to a folder on the docker host machine. Two more steps are added to remotely run docker-compose up and down, which allows us to start and stop the containers from the Azure DevOps release. Since docker-compose uses the containing folder name of the yml file to create unique instances, and we want to be able to run different versions simultaneously, we copy the files into a directory on the docker host named after the build id.
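Condensed, the commands behind those release tasks look roughly like this; the folder name is an illustrative build id:

```shell
# On the build agent: build against the override file and push to the registry
docker-compose build
docker-compose push

# On the docker host: run from the per-build folder so the compose
# project name (derived from the folder) stays unique per version
cd c:\deployments\20450
docker-compose up -d

# Tearing down that version later:
docker-compose down
```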

Reverse Proxy

All URLs in our configuration start with the build id. Our host is added to the DNS with a wildcard domain so that requests for these URLs end up at our docker host. Now all we need to do is redirect incoming requests to the right container for our application to be available to the outside world. We use Traefik, which has integrated docker support and automatically detects new container instances we want to expose. Per-container configuration is achieved by using labels in the docker-compose file.

        labels:
            - "traefik.enable=true"
            - "traefik.port=443"
            - "traefik.frontend.passHostHeader=true"
            - "traefik.http.routers.${TAG:-latest}_portal.rule=Host(`${TAG:-latest}_portal.docker.localdomain.nl`)"
            - "traefik.http.services.${TAG:-latest}_portal.loadbalancer.server.port=443"
            - "traefik.http.services.${TAG:-latest}_portal.loadbalancer.server.scheme=https"
            - "traefik.http.routers.${TAG:-latest}_portal.tls=true"
            - "traefik.http.routers.${TAG:-latest}_portal.tls.domains[0].main=${TAG:-latest}_portal.docker.localdomain.nl"
            - "traefik.http.routers.${TAG:-latest}_portal.tls.domains[0].sans=*.docker.localdomain.nl"
            - "traefik.http.routers.${TAG:-latest}_portal.middlewares=${TAG:-latest}_portal"
            - "traefik.http.middlewares.${TAG:-latest}_portal.redirectscheme.scheme=https"
            - "traefik.protocol=https"
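For context, a sketch of how the Traefik service itself could be defined in compose on a Windows host; the image tag and entrypoint names are assumptions, and the docker named pipe is mounted so Traefik can watch for new containers:

```yaml
# Sketch: Traefik with the docker provider on Windows (names illustrative)
services:
    traefik:
        image: traefik:v2.4-windowsservercore-1809
        command:
            - "--providers.docker=true"
            - "--providers.docker.exposedbydefault=false"
            - "--entrypoints.websecure.address=:443"
        ports:
            - "443:443"
        volumes:
            - type: npipe
              source: \\.\pipe\docker_engine
              target: \\.\pipe\docker_engine
```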

Maintenance

Docker consumes quite a lot of disk space, so some maintenance is needed in the long run. For a quick clean-up, use the docker system prune command.
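The basic invocation, plus the more aggressive variant that also removes unreferenced images and volumes:

```shell
# Remove stopped containers, dangling images, unused networks and build cache
docker system prune -f

# More aggressive: also remove all images not used by a container, and volumes
docker system prune -a --volumes -f
```

Be careful with the second form on a shared host: it will also delete base images that the next build would otherwise reuse from cache.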

Conclusion

Docker and docker-compose on Windows give us the ability to rapidly deploy testing environments. Compared to docker on Linux it appears to take more effort to get things working, especially because most examples found are rather Linux-specific. We also had the impression that docker on Windows is less mature and that the underlying host's Windows version has a greater influence. We will evaluate and further tweak this solution over the coming period, so any tips and/or questions are welcome!

References:

Docker Basics

Windows 2019 container support

Overriding config settings

Passing ARG to powershell

Connecting to SQL

Connecting networks

Redirect IIS Logging

Deployment to Docker Host

Reverse Proxy

Maintenance
