Docker Compose - Microsoft SQL Server
Part 1 - Local Host
Welcome! Today we are going to learn how to use Microsoft SQL Server in a containerized environment using docker-compose (a multi-container environment). If you are interested in learning what docker-compose is, please read my previous article here and also the official Docker documentation here.
We will use an ASP.NET Core MVC project (front end) and an ASP.NET Core Web API project (back end) to demonstrate the concept. The front end communicates with the back-end API, and only the back-end API communicates with the SQL Server.
Architecture (simple)
Similarly, in Visual Studio 2022, our projects look like the following.
The source code is here. The projects are built in Visual Studio 2022 with .NET 6.
The dbapi project uses Entity Framework Core to connect to the SQL Server and access the data:
<PackageReference Include="Microsoft.EntityFrameworkCore.Relational" Version="6.0.28" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.28" />
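For orientation, a minimal sketch of what the data layer might look like with these packages (illustrative only; the entity and context names here are assumed and may differ from the actual repository):

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative entity; the real model lives in the dbapi project.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

// Illustrative DbContext mapped to the crmdb database.
public class CrmDbContext : DbContext
{
    public CrmDbContext(DbContextOptions<CrmDbContext> options) : base(options) { }

    public DbSet<Customer> Customers => Set<Customer>();
}
```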
Now, based on the simple structure above, let us run the following docker-compose file to build and start the containers from their images.
# docker-compose file
version: '3.4'
services:
  sqldb:
    container_name: sqldb
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=Password@12345#
    volumes:
      - ./data/:/var/opt/mssql/data/
      - ./log/:/var/opt/mssql/log/
  dbapi:
    container_name: dbapi
    image: samueladnan/dbapi
    build:
      context: .
      dockerfile: dbapi/Dockerfile
    ports:
      - "58433:80"
      - "127.0.0.1:58441:80"
    depends_on:
      - sqldb
    environment:
      - DB_HOST=host.docker.internal,1433
      - DB_NAME=crmdb
      - DB_SA_PASSWORD=Password@12345#
      - ASPNETCORE_ENVIRONMENT=Development
  web:
    container_name: web
    image: samueladnan/web
    build:
      context: .
      dockerfile: web/Dockerfile
    ports:
      - "58432:80"
      - "127.0.0.1:58440:80"
    depends_on:
      - dbapi
    environment:
      - DB_API_URL=host.docker.internal
      - HTTP_PORT=58441
      - ENVIRONMENT_FLAG=Development
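One caveat: depends_on only waits for the sqldb container to start, not for SQL Server inside it to accept connections. One way to wait for real readiness (a sketch, assuming the sqlcmd tool shipped in the SQL Server 2019 image and a Compose version that supports conditions under depends_on) is to add a healthcheck to sqldb and a condition to dbapi:

```yaml
services:
  sqldb:
    # ...same as above, plus:
    healthcheck:
      test: ["CMD-SHELL", "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Password@12345#' -Q 'SELECT 1' || exit 1"]
      interval: 10s
      retries: 10
  dbapi:
    # ...same as above, but wait for the healthcheck to pass:
    depends_on:
      sqldb:
        condition: service_healthy
```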
Now open a command prompt or PowerShell, navigate to the root folder of your project (where the docker-compose file is), and run the following command.
docker-compose -f "docker-compose.yml" up -d
You will see the following containers created and running, either in Visual Studio's Containers window or in the Docker Desktop application.
Similarly, we have the following images with tags:
Now open http://localhost:58440/ in the browser.
You will see the local front-end container running, ready to add data to the SQL database.
To check that the back-end API is running, browse to http://localhost:58441/swagger/index.html
Now to check that the SQL server container is running, use SQL Server Management Studio version 18+.
The server information is 127.0.0.1\sqldb,1433
Fantastic, now everything is up and running, and pressing the Add button on the front-end container's website saves data to the SQL Server container.
Note that to access the back-end API service from the front end, and to connect the back-end API to the SQL Server, we call the services at http://host.docker.internal:port instead of http://localhost:port; see the environment section of the web service in the docker-compose file.
For more information please read here.
For example, to access the back-end API service from the front end, our URL will be http://host.docker.internal:58441/api/Db. See the file HomeController.cs at line number 69.
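In the front end, that call might be assembled from the two environment variables defined in the compose file (an illustrative sketch; see HomeController.cs in the repository for the actual code):

```csharp
// Illustrative: build the API URL from the compose environment variables.
var apiHost = Environment.GetEnvironmentVariable("DB_API_URL") ?? "localhost"; // host.docker.internal inside the container
var apiPort = Environment.GetEnvironmentVariable("HTTP_PORT") ?? "58441";

using var client = new HttpClient();
var response = await client.GetAsync($"http://{apiHost}:{apiPort}/api/Db");
var json = await response.Content.ReadAsStringAsync();
```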
Similarly, from the back-end API, we connect to the SQL Server container using host.docker.internal,1433 in the environment variable.
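Inside the API, those environment variables can be combined into a connection string before registering the DbContext (a sketch only; the variable names match the compose file, but the context name CrmDbContext is assumed and the exact code in the repository may differ):

```csharp
// Illustrative: assemble the connection string from the compose environment variables.
var host = Environment.GetEnvironmentVariable("DB_HOST");           // host.docker.internal,1433
var dbName = Environment.GetEnvironmentVariable("DB_NAME");         // crmdb
var password = Environment.GetEnvironmentVariable("DB_SA_PASSWORD");

var connectionString =
    $"Data Source={host};Initial Catalog={dbName};User ID=sa;Password={password};TrustServerCertificate=True";

builder.Services.AddDbContext<CrmDbContext>(options => options.UseSqlServer(connectionString));
```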
Remember that SQL Server's minimum memory requirement is 1 GB and the recommended amount is 4 GB (SQL Server 2017/2019/2022). You might get errors if you try running SQL Server inside a container with less than 1 GB of memory; you must scale the container up to at least the minimum requirement.
If you look closely at the docker-compose file, we have defined the three services in dependency order (via depends_on), so after they are built they are started in that order. Keep in mind that depends_on only controls start order; it does not guarantee a service is fully ready before its dependents start.
At the moment, I have not defined a custom network for the above container system in the docker-compose file. However, in a production environment, it is good practice to define networks to limit the scope of the containers and control how they communicate with each other and with the external world. Read here to learn how to create networks in a docker-compose file.
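As a sketch, user-defined bridge networks could be layered onto the compose file like this (the network names here are illustrative). A nice side effect is that services sharing a network can reach each other by service name, e.g. the API could use sqldb,1433 instead of host.docker.internal,1433:

```yaml
services:
  sqldb:
    networks:
      - backend
  dbapi:
    networks:
      - backend
      - frontend
  web:
    networks:
      - frontend
networks:
  backend:
  frontend:
```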
In the next articles in this series, we will publish these containers to Docker Hub and deploy them to Web App for Containers, and then to Kubernetes clusters with DevOps pipelines using ARM templates.
Last but not least, if time permits, we will add Dapr and KEDA functionality to our applications in future parts of this series for demonstration purposes.
So stay tuned, and let's learn together :)