Original author: Sun
This is a quick discussion of how to set up a local development environment for a Go API running inside of a Docker container with hot reloading. I’ve found this to be an effective way to develop locally. The full source code can be found on GitHub.
API
We’ll first set up a dummy API in cmd/api/main.go:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello from the api!")
}

func main() {
	http.HandleFunc("/", handler)
	log.Println("listening on 5000")
	log.Fatal(http.ListenAndServe(":5000", nil))
}
This API simply listens on the root path and returns a hello-world response.
Good enough for this example!
Dockerfile
The next thing we need is a Dockerfile to define how we want to build our container for this Go API:
FROM golang:1.10
WORKDIR /go/src/github.com/Zach-Johnson/go-docker-hot-reload-example
COPY . .
RUN ["go", "get", "github.com/githubnemo/CompileDaemon"]
ENTRYPOINT CompileDaemon -log-prefix=false -build="go build ./cmd/api/" -command="./api"
This installs CompileDaemon, which watches our source files and rebuilds the API whenever they change. CompileDaemon runs a plain go build by default; I explicitly specify a go build command here to build from the directory I want, and then execute the resulting binary so that the server starts up. CompileDaemon handles the rest - anytime any .go file changes in the directory, it will terminate the server and re-run the build steps. The file types that it watches can also be modified as needed with the -pattern flag. Sweet!
docker-compose
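As an illustration (not from the original repo), if the project also shipped HTML templates, the ENTRYPOINT could widen the watch pattern. The regex below is an assumption modeled on CompileDaemon's default pattern syntax; adjust it to your own file types:

```dockerfile
# Hypothetical variant: also rebuild when .html files change
ENTRYPOINT CompileDaemon -log-prefix=false -pattern="(.+\.go|.+\.html)$" -build="go build ./cmd/api/" -command="./api"
```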
A docker-compose file can simplify orchestration between Docker containers. For this example it's a bit contrived since we're only running one service, but it often makes things much nicer when running multiple microservices, and perhaps a database, locally.
version: '3.6'

services:
  api:
    image: api:latest
    ports:
      - 5000:5000
    volumes:
      - ./:/go/src/github.com/Zach-Johnson/go-docker-hot-reload-example
We first specify the image to use: the latest api image, which is created when we build from our Dockerfile. We map port 5000 on our machine to port 5000 in the Docker container so that we can reach our API from outside the container. The last line is a volume mount, and it is what makes hot reloading work inside a Docker container! It links our current working directory to the directory inside the container, so any file change that happens on our local machine also happens inside the container. Since CompileDaemon is running inside the container, when it sees a file change there, it can re-build and re-run the server.
Makefile
Lastly we write a little Makefile to simplify server start up and shutdown:
default:
	@echo "=============building Local API============="
	docker build -f cmd/api/Dockerfile -t api .

up: default
	@echo "=============starting api locally============="
	docker-compose up -d

logs:
	docker-compose logs -f

down:
	docker-compose down

test:
	go test -v -cover ./...

clean: down
	@echo "=============cleaning up============="
	rm -f api
	docker system prune -f
	docker volume prune -f
Our default target simply builds the Dockerfile for the API. The up target starts the API and runs it in the background. logs tails the logs on the Docker container. down shuts down the server. test runs any tests in the current directory tree. clean shuts down the API and then clears out saved Docker images and volumes from your machine. This can be useful when running another image like MySQL, which writes data to your local machine and doesn't clean it up when the container shuts down.
I’ve found this to be an effective way to develop locally when running multiple APIs that are interacting with a DB of some sort. It can be a simple way to run integration tests as well: by keeping your infrastructure running locally inside of containers, you can run make test and execute all of your integration tests against your local infrastructure. Then any code changes you make get hot reloaded, but the Docker images don’t have to rebuild, so it is much quicker than having a separate test suite that has to build and run all of your infrastructure each time you want to change something.