Structured JSON Logging

Write structured JSON logs to a file for observability platforms like Datadog, Splunk, ELK, and CloudWatch. Every resource change, plan, and apply event is captured with resource IDs, actions, and timing.

Usage

Add --log-file to any command:

oxid plan --log-file ./oxid.log
oxid apply --log-file /var/log/oxid/oxid.log
oxid sync --log-file ./oxid.log
oxid destroy --log-file ./oxid.log
Tip: The --log-file flag works before or after the subcommand. Console output remains unchanged - the file layer runs independently.
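
For example, these two invocations are equivalent and write to the same file:

oxid --log-file ./oxid.log plan
oxid plan --log-file ./oxid.log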

Terraform / OpenTofu Log Bridge

Already using Terraform or OpenTofu? Pipe their JSON output through oxid watch to get structured, observable logs without changing your workflow:

Terraform
terraform plan -json | oxid watch --log-file ./tf.log
terraform apply -auto-approve -json | oxid watch --log-file ./tf.log
terraform destroy -auto-approve -json | oxid watch --log-file ./tf.log
OpenTofu
tofu plan -json | oxid watch --log-file ./tf.log
tofu apply -auto-approve -json | oxid watch --log-file ./tf.log
tofu destroy -auto-approve -json | oxid watch --log-file ./tf.log

oxid watch reads Terraform/OpenTofu JSON from stdin, extracts resource events, and writes structured logs. Console output shows a clean summary while the log file captures every detail.

Note: All events from oxid watch include source: "terraform", so you can distinguish them from native oxid events in your log aggregator.
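
Assuming the source field lands alongside the other event attributes under fields, you can isolate bridged events locally with a jq one-liner (jq is a separate tool, not bundled with oxid):

jq -c 'select(.fields.source == "terraform")' ./tf.log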

CI/CD Pipeline Example

GitHub Actions
- name: Terraform Apply with Observability
  run: |
    terraform apply -auto-approve -json | oxid watch --log-file ./tf.log

- name: Upload logs
  uses: actions/upload-artifact@v4
  with:
    name: terraform-logs
    path: ./tf.log
GitLab CI
deploy:
  script:
    - terraform apply -auto-approve -json | oxid watch --log-file ./tf.log
    - oxid sync --log-file ./tf.log
  artifacts:
    paths:
      - tf.log

Watch Event Types

tf.resource.plan - Resource planned (address, action, provider)
tf.resource.apply.start - Resource apply started
tf.resource.apply.complete - Resource applied (resource_id, elapsed_secs)
tf.resource.apply.failed - Resource failed (error details)
tf.plan.summary - Plan totals (creates, updates, deletes)
tf.diagnostic - Terraform warnings and errors
tf.watch.complete - Final summary (resources processed, errors)
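
As a quick sanity check after a run, you can tally watch events by type with jq, assuming these events use the same fields.event envelope shown under Log Format below:

jq -r '.fields.event' ./tf.log | sort | uniq -c | sort -rn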

Log Format

Each line in the log file is a self-contained JSON object:

Example: resource applied
{
  "timestamp": "2026-05-07T10:30:02Z",
  "level": "INFO",
  "fields": {
    "event": "resource.apply.complete",
    "address": "aws_vpc.main",
    "resource_type": "aws_vpc",
    "resource_id": "vpc-0626c706762a661e9",
    "provider": "hashicorp/aws",
    "action": "update",
    "workspace": "default"
  },
  "target": "oxid::executor::engine",
  "message": "Resource applied successfully"
}
Example: plan summary
{
  "timestamp": "2026-05-07T10:30:01Z",
  "level": "INFO",
  "fields": {
    "event": "plan.summary",
    "creates": 0,
    "updates": 1,
    "deletes": 0,
    "replaces": 0,
    "no_ops": 39,
    "total_resources": 40
  },
  "message": "Plan complete"
}
Example: resource failed
{
  "timestamp": "2026-05-07T10:30:03Z",
  "level": "ERROR",
  "fields": {
    "event": "resource.apply.failed",
    "address": "aws_security_group_rule.web[0]",
    "error": "InvalidPermission.Duplicate: rule already exists",
    "elapsed_secs": 1
  },
  "message": "Resource operation failed"
}
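
Because each line is a complete JSON document, post-processing needs no special tooling. For instance, this jq one-liner lists every failed resource with its error:

jq -r 'select(.fields.event == "resource.apply.failed") | "\(.fields.address): \(.fields.error)"' ./oxid.log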

Event Types

Command Events

command.init - Project initialization started
command.apply - Apply started (includes auto_approve, targets)
command.destroy - Destroy started
command.sync - State sync started

Resource Events

resource.plan - Resource planned (address, action, provider, resource_type)
resource.apply.complete - Resource applied (resource_id, action, provider, workspace)
resource.apply.failed - Resource failed (error, elapsed_secs)
resource.destroy.complete - Resource destroyed (resource_id, provider)
resource.skip - Resource unchanged, skipped (reason: no_changes)

Summary Events

plan.summary - Plan totals (creates, updates, deletes, replaces, no_ops)
apply.summary - Apply results (added, changed, destroyed, failed, elapsed_secs)
sync.complete - Sync results (updated, added, removed)

Provider Events

provider.resolve - Resolving provider version from registry
provider.download - Downloading provider binary
provider.download.complete - Provider download finished (version, path)

State Sync Events

state.sync.resource - Individual resource synced (address, resource_id, action)
state.sync.failed - Resource sync failed (address, error)
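
As a sketch of how these events compose in practice, this jq filter pulls the final apply totals (added, changed, destroyed, failed, elapsed_secs) from a log file:

jq 'select(.fields.event == "apply.summary") | .fields' ./oxid.log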

Datadog Integration

Point Datadog Agent at the log file:

datadog.yaml
logs:
  - type: file
    path: /var/log/oxid/oxid.log
    service: oxid
    source: oxid
    sourcecategory: infrastructure

Then query in Datadog:

# All failed resources
service:oxid @fields.event:resource.apply.failed

# All changes to a specific resource
service:oxid @fields.address:aws_vpc.main

# Apply summary with timing
service:oxid @fields.event:apply.summary
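
Numeric fields such as elapsed_secs also work with Datadog's range syntax (you may need to create a measure facet on the attribute first; the 60-second threshold here is illustrative):

# Applies that took longer than a minute
service:oxid @fields.event:apply.summary @fields.elapsed_secs:>60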

ELK / Splunk

The JSON format is directly compatible with Filebeat, Logstash, and Splunk Universal Forwarder. Each line is a complete JSON document - no multiline parsing needed.

filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/oxid/*.log
    json.keys_under_root: true
    json.add_error_key: true
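
For Splunk Universal Forwarder, a minimal monitor stanza looks like the sketch below; the index name is a placeholder, and the built-in _json sourcetype handles per-line JSON parsing:

inputs.conf
[monitor:///var/log/oxid/*.log]
sourcetype = _json
index = infrastructure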

OpenTelemetry (OTel)

Use the OpenTelemetry Collector's filelog receiver to ingest oxid logs into any OTel-compatible backend (Jaeger, Grafana Tempo, Honeycomb, New Relic, etc.):

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/log/oxid/*.log
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%SZ'
        severity:
          parse_from: attributes.level

exporters:
  otlp:
    endpoint: "otel-collector:4317"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]

For Grafana Loki, use the Loki exporter:

loki exporter
exporters:
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
    labels:
      resource:
        attributes:
          - event
          - address
          - resource_type
Tip: The structured JSON format maps directly to OTel log attributes. Fields like event, address, resource_id, and action become searchable attributes in your observability backend.

Combining with Verbose

Use both -v and --log-file together:

oxid apply -v --log-file ./oxid.log

Console shows debug-level human-readable output. The log file captures info-level structured JSON independently.
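
Because the file is newline-delimited JSON, you can also follow it live in a second terminal while an apply runs, for example with tail and jq:

tail -f ./oxid.log | jq .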