@AlexanderMityaev
Last active October 7, 2024 14:00
Best practice deploying in docker projects with separate repositories but common services

Deploying a project that needs access to a PostgreSQL service containerized in another repository (alongside other services inside a custom bridge network) is a common scenario in microservices and multi-repository projects. The best practices revolve around keeping services isolated while enabling cross-repository communication in a scalable and secure way.

Here are several best practices and approaches to achieve this:

1. External Docker Network Across Repositories

  • Create an external Docker bridge network that both projects (repositories) can attach their containers to. This allows containers in different repositories to communicate while maintaining network isolation within their bridge networks.

Steps:

  • Create an external network:

    docker network create external-network
  • In docker-compose.yml of the PostgreSQL repository: Attach the PostgreSQL container to the external network:

    version: '3'
    services:
      postgres:
        image: postgres:latest
        environment:
          POSTGRES_PASSWORD: mysecretpassword
        networks:
          - internal-network  # Internal to this repo
          - external-network  # External network shared with other projects
    
    networks:
      internal-network:
        driver: bridge
      external-network:
        external: true
  • In the other project's docker-compose.yml: Attach the container to the same external network:

    version: '3'
    services:
      app:
        image: myapp:latest
        environment:
          - DB_HOST=postgres  # Service name of the PostgreSQL container on the shared external network
          - DB_PORT=5432
        networks:
          - internal-network
          - external-network
    
    networks:
      internal-network:
        driver: bridge
      external-network:
        external: true

This approach uses Docker’s built-in DNS resolution to allow services in separate repositories to communicate via the container name (like postgres in the external network).
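Inside the app container, those `DB_HOST`/`DB_PORT` variables are all the application needs. A minimal sketch of assembling them into a connection string at startup (the `app`/`secret`/`appdb` credentials are placeholders, not values from the compose files above):

```shell
# Assemble a PostgreSQL DSN from the variables docker-compose injects;
# the service name "postgres" resolves via Docker's embedded DNS.
DB_HOST="${DB_HOST:-postgres}"
DB_PORT="${DB_PORT:-5432}"
DSN="postgresql://app:secret@${DB_HOST}:${DB_PORT}/appdb"
echo "$DSN"
```

With the compose files above, this yields `postgresql://app:secret@postgres:5432/appdb`, with no host port published at all.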

Benefits:

  • Clean separation between services in different repositories.
  • Containers in each repository maintain their internal networks, while only sharing the external network where necessary.
  • Services can access PostgreSQL by container name (postgres), without exposing ports to the host.

Drawbacks:

  • You need to ensure that all repositories share the external network when needed.
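One way to soften this drawback is to make network creation idempotent in each repository's bootstrap script, so whichever project starts first creates the shared network. A sketch, reusing the `external-network` name from above:

```shell
# Create the shared bridge network only if it does not exist yet;
# safe to run from every repository before bringing the stack up.
docker network inspect external-network >/dev/null 2>&1 \
  || docker network create external-network
docker-compose up -d
```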

2. PostgreSQL with docker-compose and Environment Configuration

If the projects are deployed in separate environments, such as on different machines, or in isolated setups where docker-compose runs independently in each project’s CI/CD pipeline, you can instead expose the PostgreSQL service on the host machine and access it by host address.

Steps:

  • Expose PostgreSQL to the host in the first project: In the docker-compose.yml of the PostgreSQL project:

    version: '3'
    services:
      postgres:
        image: postgres:latest
        environment:
          POSTGRES_PASSWORD: mysecretpassword
        ports:
          - "5432:5432"  # Expose PostgreSQL to the host for access from other repositories
        networks:
          - internal-network
    
    networks:
      internal-network:
        driver: bridge
  • Access PostgreSQL from the second repository: In the second project, set the DB_HOST to the host machine’s IP (or domain):

    version: '3'
    services:
      app:
        image: myapp:latest
        environment:
          - DB_HOST=host.docker.internal  # Docker Desktop; on Linux add extra_hosts: "host.docker.internal:host-gateway" or use the host IP
          - DB_PORT=5432
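Because the app now reaches the database over a published host port, it may start before PostgreSQL is ready to accept connections. A small retry helper is a common guard; the `retry` function below is my own sketch, and `pg_isready` ships with the PostgreSQL client tools:

```shell
# retry N CMD...: run CMD up to N times, one second apart, until it succeeds
retry() {
  n="$1"; shift
  while [ "$n" -gt 0 ]; do
    "$@" && return 0
    n=$((n - 1))
    sleep 1
  done
  return 1
}

# Wait for the exposed port before the app opens its first connection, e.g.:
# retry 30 pg_isready -h "${DB_HOST:-host.docker.internal}" -p "${DB_PORT:-5432}"
```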

Benefits:

  • Simple approach when deploying projects on separate environments.
  • Does not require both projects to share the same Docker network.

Drawbacks:

  • Less secure, as PostgreSQL is exposed to the host and potentially the public.
  • Increased risk of port conflicts.
  • Requires careful network configuration on the host to allow external access.

3. Service Discovery with Docker Swarm or Kubernetes

For more advanced deployments, using Docker Swarm or Kubernetes provides built-in service discovery and network management, making inter-service communication easier across different repositories. Both tools offer more robust networking solutions when scaling microservices.

Docker Swarm Example:

  • Initialize a Swarm and create a shared overlay network (creating it up front lets both stacks reference it as external; otherwise the first stack would create it under a namespaced name like postgres-stack_global-network, which the second stack could not find):

    docker swarm init
    docker network create --driver overlay global-network
  • Deploy the PostgreSQL service in one stack (docker-compose.yml):

    version: '3'
    services:
      postgres:
        image: postgres:latest
        environment:
          POSTGRES_PASSWORD: mysecretpassword
        networks:
          - global-network
    
    networks:
      global-network:
        external: true
  • Deploy the other service in a different stack (docker-compose.yml):

    version: '3'
    services:
      app:
        image: myapp:latest
        environment:
          - DB_HOST=postgres  # Can use service name with overlay network
          - DB_PORT=5432
        networks:
          - global-network
    
    networks:
      global-network:
        external: true
  • Deploy both stacks:

    docker stack deploy -c docker-compose.yml postgres-stack  # run from the PostgreSQL repository
    docker stack deploy -c docker-compose.yml app-stack       # run from the app repository
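After both deploys, the Swarm CLI can confirm that the services are up and scale the app independently; Swarm names services with the `<stack>_<service>` pattern:

```shell
# List the services each stack created, then scale the app to three replicas
docker stack services postgres-stack
docker stack services app-stack
docker service scale app-stack_app=3
```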

Benefits:

  • Service discovery: Automatically resolves services by name.
  • Scalability: Easier to manage multiple microservices.
  • Isolation and Security: Offers a more secure and organized way to manage multiple services across repositories.

Drawbacks:

  • More complex setup and learning curve.
  • Best suited for larger, more complex applications or when scaling.

4. Use a Centralized Configuration or Secret Management

Regardless of the approach, centralized configuration management (like HashiCorp Vault, AWS Secrets Manager, or environment variable management) is a good practice. It helps maintain DB credentials and other sensitive information securely, while ensuring services in both repositories can access the same configurations seamlessly.
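As a concrete pattern, the official postgres image already supports a `*_FILE` convention (e.g. `POSTGRES_PASSWORD_FILE`) for reading credentials from Docker secrets mounted under `/run/secrets/`. The helper below sketches the same convention for your own app; the function name is illustrative, not from any standard tool:

```shell
# read_secret NAME: print the contents of $NAME_FILE if it points to a
# mounted secret file, otherwise fall back to the plain $NAME variable.
read_secret() {
  local file_var="${1}_FILE"
  if [ -n "${!file_var:-}" ] && [ -f "${!file_var}" ]; then
    cat "${!file_var}"
  else
    printf '%s\n' "${!1:-}"
  fi
}

# DB_PASSWORD="$(read_secret DB_PASSWORD)"
```

This way the same image works whether credentials arrive as a Docker secret or a plain environment variable.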


Conclusion:

  • For simplicity: Using external Docker networks is the most straightforward solution when dealing with services in separate repositories that need to communicate via Docker.
  • For complex, scalable systems: Consider using Docker Swarm or Kubernetes for better service discovery and container orchestration.
  • Security consideration: Avoid exposing your PostgreSQL container to the host unless necessary. Prefer to use Docker's networking capabilities for secure communication between services.

