To deploy a project that needs access to a PostgreSQL service (containerized in another repository, along with other services inside a custom bridge network), a common scenario in microservices or multi-repository projects, the best practices revolve around keeping your services isolated but enabling cross-repository communication in a scalable and secure way.
Here are several best practices and approaches to achieve this:
- Create an external Docker bridge network that both projects (repositories) attach their containers to. Containers in different repositories can then communicate over the shared network while each project keeps its own internal network isolated.
- Create an external network:

  ```bash
  docker network create external-network
  ```
- In the `docker-compose.yml` of the PostgreSQL repository, attach the PostgreSQL container to the external network:

  ```yaml
  version: '3'
  services:
    postgres:
      image: postgres:latest
      environment:
        POSTGRES_PASSWORD: mysecretpassword
      networks:
        - internal-network   # Internal to this repo
        - external-network   # External network shared with other projects

  networks:
    internal-network:
      driver: bridge
    external-network:
      external: true
  ```
- In the other project's `docker-compose.yml`, attach the app container to the same external network:

  ```yaml
  version: '3'
  services:
    app:
      image: myapp:latest
      environment:
        - DB_HOST=postgres   # Service name of the PostgreSQL container on the shared network
        - DB_PORT=5432
      networks:
        - internal-network
        - external-network

  networks:
    internal-network:
      driver: bridge
    external-network:
      external: true
  ```
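With both files in place, a typical bring-up sequence looks like the following sketch (the directory names `postgres-repo/` and `app-repo/` are placeholders for your two repositories):

```bash
# The shared network is 'external' to both compose files, so create it once up front
docker network create external-network

# Start each project from its own repository
(cd postgres-repo && docker-compose up -d)
(cd app-repo && docker-compose up -d)
```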
This approach relies on Docker's built-in DNS resolution: on a user-defined network, containers can reach each other by service name, so the app resolves `postgres` over the shared external network without any hard-coded IP addresses.
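As a quick sanity check (a sketch, assuming the network and the PostgreSQL service from the snippets above are already running), you can verify name resolution and connectivity from a throwaway container attached to the shared network:

```bash
# Spin up a disposable container on the shared network and confirm that the
# name 'postgres' resolves and the server is accepting connections
docker run --rm --network external-network postgres:latest \
  pg_isready -h postgres -p 5432
```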
Pros:

- Clean separation between services in different repositories.
- Containers in each repository keep their own internal networks, sharing only the external network where necessary.
- Services reach PostgreSQL by name (`postgres`) without exposing any ports to the host.

Cons:

- Every repository that needs access must attach to the same pre-created external network, which adds a coordination step to deployment.
If the projects are deployed in separate environments, such as on different machines, or in isolated setups where docker-compose runs independently in each project's CI/CD pipeline, you can instead expose the PostgreSQL service on the host machine.
- Expose PostgreSQL to the host in the first project. In the `docker-compose.yml` of the PostgreSQL project:

  ```yaml
  version: '3'
  services:
    postgres:
      image: postgres:latest
      environment:
        POSTGRES_PASSWORD: mysecretpassword
      ports:
        - "5432:5432"   # Publish PostgreSQL on the host for access from other repositories
      networks:
        - internal-network

  networks:
    internal-network:
      driver: bridge
  ```
- Access PostgreSQL from the second repository. In the second project, point `DB_HOST` at the host machine's IP (or domain):

  ```yaml
  version: '3'
  services:
    app:
      image: myapp:latest
      environment:
        - DB_HOST=host.docker.internal   # Or the actual host IP
        - DB_PORT=5432
      extra_hosts:
        - "host.docker.internal:host-gateway"   # Needed on Linux (Docker 20.10+); Docker Desktop resolves this name automatically
  ```
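To sanity-check the published port (assuming the default `postgres` superuser and the password from the snippet above, and that the PostgreSQL client tools are installed on the host):

```bash
# From the host machine: confirm PostgreSQL is accepting connections on the published port
pg_isready -h localhost -p 5432

# Or open an interactive session (password: mysecretpassword, from the compose file)
psql -h localhost -p 5432 -U postgres
```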
Pros:

- Simple approach when the projects are deployed in separate environments.
- Does not require both projects to share the same Docker network.

Cons:

- Less secure: PostgreSQL is exposed on the host and, if the port is not firewalled, potentially to the public internet. Binding to loopback only (`"127.0.0.1:5432:5432"`) limits access to the host itself.
- Increased risk of port conflicts on the host.
- Requires careful network and firewall configuration on the host to control external access.
For more advanced deployments, using Docker Swarm or Kubernetes provides built-in service discovery and network management, making inter-service communication easier across different repositories. Both tools offer more robust networking solutions when scaling microservices.
- Create a Swarm on your machine:

  ```bash
  docker swarm init
  ```
- Deploy the PostgreSQL service in one stack (`docker-compose.yml`):

  ```yaml
  version: '3.5'
  services:
    postgres:
      image: postgres:latest
      environment:
        POSTGRES_PASSWORD: mysecretpassword
      networks:
        - global-network

  networks:
    global-network:
      driver: overlay
      # Pin the name: without this, `docker stack deploy` prefixes the network
      # with the stack name (e.g. postgres-stack_global-network) and the other
      # stack's `external: true` reference will not resolve.
      name: global-network
  ```
- Deploy the other service in a different stack (`docker-compose.yml`):

  ```yaml
  version: '3.5'
  services:
    app:
      image: myapp:latest
      environment:
        - DB_HOST=postgres   # Service name resolves across the shared overlay network
        - DB_PORT=5432
      networks:
        - global-network

  networks:
    global-network:
      external: true
  ```
- Deploy both stacks (each command run from its own repository):

  ```bash
  docker stack deploy -c docker-compose.yml postgres-stack
  docker stack deploy -c docker-compose.yml app-stack
  ```
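If you prefer not to pin the network name inside the first stack, an alternative (a sketch using the same `global-network` name) is to create the overlay network up front and mark it `external: true` in both stacks. Either way, you can confirm the deployment afterwards:

```bash
# Alternative: pre-create the shared overlay network, then mark it external in both stacks
docker network create --driver overlay --attachable global-network

# After deploying, verify both stacks and their services are up
docker stack ls
docker service ls
```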
Pros:

- Service discovery: services resolve each other by name automatically.
- Scalability: easier to manage and scale multiple microservices.
- Isolation and security: a more organized, secure way to manage services across repositories.

Cons:

- More complex setup and a steeper learning curve.
- Best suited for larger, more complex applications, or when scaling is a requirement.
Regardless of the approach, centralized configuration management (HashiCorp Vault, AWS Secrets Manager, or at minimum disciplined environment-variable management) is good practice. It keeps database credentials and other sensitive values secure while letting services in both repositories read the same configuration.
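As one lightweight illustration for the Swarm setup above (not the only option; Vault or AWS Secrets Manager work equally well), Docker secrets keep the password out of the compose files entirely:

```bash
# Store the database password once as a Swarm secret (Swarm mode only)
printf 'mysecretpassword' | docker secret create pg_password -

# A service granted this secret can read it at /run/secrets/pg_password; the
# official postgres image supports this via POSTGRES_PASSWORD_FILE in place of
# POSTGRES_PASSWORD.
```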
- For simplicity: Using external Docker networks is the most straightforward solution when dealing with services in separate repositories that need to communicate via Docker.
- For complex, scalable systems: Consider using Docker Swarm or Kubernetes for better service discovery and container orchestration.
- Security consideration: Avoid exposing your PostgreSQL container to the host unless necessary. Prefer to use Docker's networking capabilities for secure communication between services.
Let me know which approach works best for your use case or if you need more details!