Configuration

Create your initial configuration file:
hive init
You will be prompted for your organisation ID. This creates a config file at ~/.hive/config.yaml. To open the config in your editor:
hive edit config
The editor used for the configuration file is determined by the EDITOR environment variable and defaults to vim. To change it, run e.g. export EDITOR="nano". Configuration is stored in ~/.hive/config.yaml by default, or at a custom path given via --config.

Field reference

Below, we document the fields that can be specified in a configuration YAML.

Top level fields

  • apiversion (string, default: v1alpha1)
    Configuration schema version.
  • organization_id (string, required)
    Organisation ID to associate experiments with.
  • coordinator_config_name (string, default: default-coordinator-config)
    Coordinator config to use for the experiment.
  • log_level (string, default: INFO)
    Logging level. One of: DEBUG, INFO, WARNING, ERROR, CRITICAL.

repo

Repository configuration for the experiment source code.
  • source (string, required)
    Git repository URL (HTTPS only; SSH URLs are not supported).
  • branch (string, default: main)
    Branch to use for the experiment.
  • evaluation_script (string, default: evaluation.py)
    Script that evaluates experiment results (path relative to repo root).
  • target_code (string, required)
    Code for agents to evolve. Supports line ranges — see line range syntax.
  • additional_context (string, default: "")
    Additional files to include as context. Same range syntax as target_code.
  • github_token (string)
    GitHub token for accessing private repositories.
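Putting these fields together, a minimal repo block might look like the following sketch (the repository URL and file names are placeholders):

repo:
  source: https://github.com/example-org/example-repo.git  # HTTPS only
  branch: main
  evaluation_script: evaluation.py
  target_code: main.py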

Line range syntax

The target_code and additional_context fields support line ranges and multiple files:
target_code: main.py             # entire file
target_code: main.py:1-50        # lines 1 to 50
target_code: main.py:1-10&21-30  # multiple ranges
target_code: main.py,evolve.py   # multiple files
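The separators can also be combined, mixing per-file ranges with multiple files (file names and ranges here are illustrative):

target_code: main.py:1-10&21-30,evolve.py:5-40  # two ranges from main.py, one from evolve.py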

runtime

Controls experiment execution.
  • num_agents (integer, default: 1)
    Number of parallel agents to run.
  • max_runtime_seconds (integer, default: -1)
    Maximum execution time in seconds. -1 = unlimited.
  • max_iterations (integer, default: -1)
    Maximum iterations per agent. -1 = unlimited.
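For example, to cap wall-clock time at one hour while leaving the iteration count unbounded (values are illustrative):

runtime:
  num_agents: 4
  max_runtime_seconds: 3600  # stop after one hour
  max_iterations: -1         # no per-agent iteration cap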

sandbox

Container environment configuration.
  • base_image (string, required)
    Docker base image (e.g. python:3.9-slim).
  • workdir (string, default: /app)
    Working directory inside the container.
  • setup_script (string)
    Shell script to run before the experiment starts (e.g. install dependencies).
  • evaluation_timeout (integer, default: 60)
    Maximum time (seconds) for the evaluation script to run.
  • envs (list)
    Environment variables to set in the container. Each entry has a name and value:
    envs:
      - name: DATABASE_URL
        value: postgres://localhost/mydb
    
  • services (list)
    Sidecar services to run alongside the sandbox. See sandbox.services below.

sandbox.resources

  • cpu (string, default: "1")
    CPU limit (e.g. "2", "500m").
  • memory (string, default: "2Gi")
    Memory limit (e.g. "4Gi", "512Mi").
  • accelerators (string)
    GPU resources (e.g. a100-80gb:8).
  • shmsize (string)
    Size of /dev/shm (e.g. "1Gi").
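As a sketch, a resources block requesting GPUs might look like this (the accelerator string follows the a100-80gb:8 format noted above; all values are illustrative):

sandbox:
  resources:
    cpu: "4"
    memory: "16Gi"
    shmsize: "2Gi"           # enlarge /dev/shm, e.g. for data loaders
    accelerators: a100-80gb:2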

sandbox.services[]

Each entry defines a sidecar container that runs alongside the sandbox.
  • name (string, required)
    Service name.
  • image (string, required)
    Docker image for the service.
  • ports (list)
    Ports to expose. Each entry has a port (integer) and optional protocol (TCP or UDP, defaults to TCP).
  • envs (list)
    Environment variables (same format as sandbox.envs).
  • command (list of strings)
    Container entrypoint override.
  • args (list of strings)
    Arguments to the entrypoint.
  • resources.cpu (string, default: "1")
    CPU limit for this service.
  • resources.memory (string, default: "2Gi")
    Memory limit for this service.
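The full example later on this page shows TCP services only; a service exposing a UDP port would look like the following sketch (the image and port are illustrative, using statsd's conventional UDP port):

sandbox:
  services:
    - name: statsd
      image: statsd/statsd:latest
      ports:
        - port: 8125
          protocol: UDP
      resources:
        cpu: "250m"
        memory: "128Mi"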

prompt

Optional prompt configuration for the Hive agents. Omit this section entirely to use defaults.
  • enable_evolution (boolean, default: false)
    Whether to enable evolution for the experiment.
  • context (string)
    Experiment-specific context to provide to agents.
  • ideas (list of strings)
    Ideas that will be randomly sampled and injected into the Hive.
    ideas:
      - "Try batching database writes"
      - "Consider async I/O for network calls"
    

Full example

~/.hive/config.yaml
apiversion: v1alpha1
organization_id: 'my-org-id'
coordinator_config_name: default-coordinator-config
log_level: INFO

repo:
  source: https://github.com/your-org/your-repo.git
  branch: main
  evaluation_script: evaluation.py
  target_code: main.py:1-50
  additional_context: utils.py:10-30
  github_token: ghp_xxxxxxxxxxxx

runtime:
  num_agents: 10
  max_runtime_seconds: 3600
  max_iterations: 100

sandbox:
  base_image: python:3.9-slim
  workdir: /app
  evaluation_timeout: 60
  setup_script: |
    pip install -r requirements.txt
    python setup.py
  resources:
    cpu: "2"
    memory: "4Gi"
    shmsize: "1Gi"
    accelerators: a100-80gb:8
  envs:
    - name: DATABASE_URL
      value: postgres://localhost/mydb
    - name: DEBUG
      value: "true"
  services:
    - name: redis
      image: redis:7-alpine
      ports:
        - port: 6379
          protocol: TCP
      resources:
        cpu: "500m"
        memory: "512Mi"
    - name: worker
      image: my-worker:latest
      command: ["python"]
      args: ["-m", "worker", "--queue", "default"]
      envs:
        - name: WORKER_CONCURRENCY
          value: "4"
      resources:
        cpu: "1"
        memory: "1Gi"

prompt:
  enable_evolution: true
  context: "Focus on optimizing the data pipeline for throughput"
  ideas:
    - "Try batching database writes"
    - "Consider async I/O for network calls"
    - "Explore caching intermediate results"

Next steps

Managing experiments

Create, monitor, and manage your Hive experiments.