Docker Compose Configuration for Elastic Stack in ft_transcendence

Introduction

In this article, we'll explore the complete Docker Compose configuration that powers our observability stack for the ft_transcendence project. This setup allows us to run Elasticsearch, Kibana, Filebeat, and Logstash in a coordinated, containerized environment optimized for local development.

Directory Structure

Before diving into the configuration, let's understand the project structure:

ft_transcendence/
├── docker-compose.yml            # Main Docker Compose file
├── .env                          # Environment variables
├── config/
│   ├── elasticsearch/
│   │   ├── elasticsearch.yml     # Elasticsearch configuration
│   │   └── jvm.options           # JVM settings
│   ├── kibana/
│   │   └── kibana.yml            # Kibana configuration
│   ├── filebeat/
│   │   └── filebeat.yml          # Filebeat configuration
│   └── logstash/
│       ├── logstash.yml          # Logstash main configuration
│       ├── pipelines.yml         # Pipeline definitions
│       └── pipeline/             # Pipeline configurations
│           ├── main.conf
│           ├── nginx.conf
│           ├── django.conf
│           └── ...
├── logs/                         # Mounted log directories
│   ├── nginx/
│   ├── django/
│   ├── nextjs/
│   └── ...

Docker Compose Configuration

Here's our complete docker-compose.yml for the Elastic Stack:

version: '3.8'

services:
  # Elasticsearch service
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.4
    container_name: ft_elasticsearch
    environment:
      - node.name=ft_transcendence_node
      - cluster.name=ft_transcendence
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - 'ES_JAVA_OPTS=-Xms${ES_HEAP_SIZE:-1g} -Xmx${ES_HEAP_SIZE:-1g}'
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./config/elasticsearch/jvm.options:/usr/share/elasticsearch/config/jvm.options.d/custom.options:ro
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elastic
    healthcheck:
      test: ['CMD', 'curl', '-f', '-u', 'elastic:${ELASTIC_PASSWORD}', 'http://localhost:9200']
      interval: 30s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  # Kibana service
  kibana:
    image: docker.elastic.co/kibana/kibana:8.17.4
    container_name: ft_kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system # Kibana 8.x refuses to run as the elastic superuser
      - ELASTICSEARCH_PASSWORD=${KIBANA_SYSTEM_PASSWORD}
      - ENCRYPTION_KEY=${KIBANA_ENCRYPTION_KEY}
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - '5601:5601'
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:5601/api/status']
      interval: 30s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  # Filebeat service
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.17.4
    container_name: ft_filebeat
    user: root # Required to access container logs
    volumes:
      - ./config/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - filebeat_data:/usr/share/filebeat/data
      # Mount log directories
      - ./logs/nginx:/var/log/nginx:ro
      - ./logs/django:/var/log/django:ro
      - ./logs/nextjs:/var/log/nextjs:ro
      - ./logs/postgresql:/var/log/postgresql:ro
      - ./logs/redis:/var/log/redis:ro
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - LOGSTASH_HOST=logstash:5044
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
      logstash:
        condition: service_started
    command: filebeat -e -strict.perms=false
    restart: unless-stopped

  # Logstash service
  logstash:
    image: docker.elastic.co/logstash/logstash:8.17.4
    container_name: ft_logstash
    volumes:
      - ./config/logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro
      - ./config/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./config/logstash/pipeline:/usr/share/logstash/pipeline:ro
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - 'LS_JAVA_OPTS=-Xms${LS_HEAP_SIZE:-256m} -Xmx${LS_HEAP_SIZE:-256m}'
    ports:
      - '5044:5044'
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

networks:
  elastic:
    driver: bridge

volumes:
  elasticsearch_data:
    driver: local
  filebeat_data:
    driver: local

Environment Variables

Our .env file contains sensitive information and configuration variables:

# Elasticsearch
ELASTIC_PASSWORD=your_secure_password_here

# Kibana
KIBANA_SYSTEM_PASSWORD=password_for_the_kibana_system_user
KIBANA_ENCRYPTION_KEY=32_character_encryption_key_here

# Paths for log collection
LOG_PATH=/path/to/application/logs

# Memory settings
ES_HEAP_SIZE=1g
LS_HEAP_SIZE=256m
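
The 32-character encryption key can be generated rather than invented. A minimal sketch using openssl (any random 32-character string works just as well):

```shell
# 16 random bytes rendered as hex -> exactly 32 characters,
# suitable for KIBANA_ENCRYPTION_KEY in the .env file
KIBANA_ENCRYPTION_KEY=$(openssl rand -hex 16)
echo "KIBANA_ENCRYPTION_KEY=$KIBANA_ENCRYPTION_KEY"
```

Append the output line to .env and keep the file out of version control.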

Configuration Files

Elasticsearch Configuration (elasticsearch.yml)

# Cluster configuration
cluster.name: ft_transcendence
node.name: ${HOSTNAME}

# Network settings
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
http.cors.enabled: true
http.cors.allow-origin: '*'

# Security settings
xpack.security.enabled: true
# TLS is disabled for local development: the Compose healthcheck and Kibana
# both reach Elasticsearch over plain http://, and transport TLS would
# require certificates we don't provision here. Password auth over the
# private Docker network is the security boundary in this setup.
xpack.security.transport.ssl.enabled: false
xpack.security.http.ssl.enabled: false
xpack.security.audit.enabled: true

# Memory and performance
bootstrap.memory_lock: true

# Paths (must match the named volume mount in docker-compose.yml,
# otherwise data lands outside the volume and is lost on recreate)
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
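
The jvm.options file from the directory tree is mounted into jvm.options.d/ as custom.options. A minimal sketch of its contents, mirroring the heap settings from the Compose file (values are illustrative):

```
# Keep -Xms and -Xmx equal so the heap never resizes at runtime
-Xms1g
-Xmx1g
```

Files in jvm.options.d/ override the image defaults, so this is the cleanest place for heap tuning when ES_JAVA_OPTS isn't used.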

Kibana Configuration (kibana.yml)

# Server settings
server.name: kibana
server.host: '0.0.0.0'
server.port: 5601

# Elasticsearch connection
elasticsearch.hosts: ['http://elasticsearch:9200']
elasticsearch.username: '${ELASTICSEARCH_USERNAME}'
elasticsearch.password: '${ELASTICSEARCH_PASSWORD}'

# Security settings
# Note: xpack.security.enabled is not a valid Kibana 8.x key (security is
# always on); setting it makes Kibana fail on an unknown configuration key
xpack.encryptedSavedObjects.encryptionKey: '${ENCRYPTION_KEY}'

# Monitoring
monitoring.ui.container.elasticsearch.enabled: true

# CORS and other access settings
server.cors.enabled: true
server.cors.allowOrigin: ['*']
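
The Filebeat configuration isn't reproduced in full here, but a minimal sketch of what config/filebeat/filebeat.yml could look like for this directory layout may help. The input ids and field names are assumptions for illustration, not the project's actual file:

```yaml
filebeat.inputs:
  # filestream is the 8.x replacement for the deprecated log input
  - type: filestream
    id: nginx-logs
    paths:
      - /var/log/nginx/*.log
    fields:
      service: nginx

  - type: filestream
    id: django-logs
    paths:
      - /var/log/django/*.log
    fields:
      service: django

# Ship everything to Logstash for parsing rather than straight to Elasticsearch
output.logstash:
  hosts: ['${LOGSTASH_HOST}']
```

Each mounted log directory from the Compose file gets its own filestream input, and the `service` field lets the Logstash pipelines route events.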

Resource Optimization

Since we're running in a constrained local environment, we've made several optimizations:

  1. Memory Limits: Restricted heap sizes for all services

    • Elasticsearch: 1GB
    • Logstash: 256MB
    • Kibana: Default settings
    • Filebeat: Minimal footprint
  2. CPU Restrictions: Limited workers in pipeline configurations

    • Single worker per Logstash pipeline
    • Reduced parallelism in data processing
  3. Storage Management:

    • Named volumes for persistent data
    • Log rotation settings
    • Index lifecycle management for older data
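
The single-worker restriction from point 2 lives in config/logstash/pipelines.yml. A hedged sketch of two entries (pipeline ids and batch sizes are illustrative assumptions):

```yaml
- pipeline.id: nginx
  path.config: '/usr/share/logstash/pipeline/nginx.conf'
  pipeline.workers: 1
  pipeline.batch.size: 64

- pipeline.id: django
  path.config: '/usr/share/logstash/pipeline/django.conf'
  pipeline.workers: 1
  pipeline.batch.size: 64
```

One worker per pipeline keeps CPU usage predictable at the cost of throughput, which is an acceptable trade for a development machine.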

Starting the Stack

To start the complete Elastic Stack:

# Start the stack
docker-compose up -d

# Reset the elastic password if needed (the cluster must already be running,
# so exec into the Elasticsearch container rather than launching a fresh one)
docker exec -it ft_elasticsearch \
  bin/elasticsearch-reset-password -u elastic -a -s

Accessing the Services

After starting the stack:

  • Elasticsearch API: http://localhost:9200 (authenticate as elastic)
  • Kibana UI: http://localhost:5601
  • Logstash Beats input: localhost:5044 (used by Filebeat, not a browser endpoint)

Monitoring the Stack

To monitor the health of our Elastic Stack:

# Check Elasticsearch status
curl -u elastic:${ELASTIC_PASSWORD} 'http://localhost:9200/_cluster/health?pretty'

# Check logs
docker-compose logs elasticsearch
docker-compose logs kibana
docker-compose logs filebeat
docker-compose logs logstash

Common Issues and Solutions

  1. Memory Pressure

    • Issue: JVM heap errors in Elasticsearch
    • Solution: Further reduce heap size or increase host memory allocation
  2. Connection Failures

    • Issue: Services can't connect to Elasticsearch
    • Solution: Check network configuration and ensure proper startup order
  3. Permission Problems

    • Issue: Filebeat can't access log files
    • Solution: Verify volume mounts and container permissions
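
One more failure mode worth checking on Linux hosts: Elasticsearch refuses to start when the kernel's vm.max_map_count is too low. The 262144 threshold comes from the Elasticsearch documentation; a quick check:

```shell
# Elasticsearch requires vm.max_map_count >= 262144 on Linux hosts
required=262144
# Fall back to 0 on systems where the sysctl key doesn't exist (e.g. macOS,
# where Docker Desktop's VM handles this setting instead)
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if [ "$current" -lt "$required" ]; then
  echo "Raise it with: sudo sysctl -w vm.max_map_count=$required"
else
  echo "vm.max_map_count is OK ($current)"
fi
```

To make the change permanent, add `vm.max_map_count=262144` to /etc/sysctl.conf.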

Conclusion

This Docker Compose configuration provides a complete observability stack for the ft_transcendence project, optimized for local development environments. While it's designed to run on limited resources, it still provides powerful log collection, processing, and visualization capabilities.

In a production environment, you would want to:

  1. Increase memory allocations
  2. Add multiple Elasticsearch nodes
  3. Implement proper TLS certificates
  4. Use more robust secrets management
  5. Implement index lifecycle management
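
As a concrete example of item 5, index lifecycle management starts with a policy document registered via the `_ilm/policy` API. A hedged sketch of such a policy body (the rollover age, shard size, and retention period are illustrative, not recommendations):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "1gb" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Attached to the log indices via an index template, this rolls indices over daily and drops anything older than a week, which keeps disk usage bounded even in development.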

But for our development needs, this configuration strikes a good balance between functionality and resource efficiency.