Structured JSON Logging
Write structured JSON logs to a file for observability platforms like Datadog, Splunk, ELK, and CloudWatch. Every resource change, plan, and apply event is captured with resource IDs, actions, and timing.
Usage
Add --log-file to any command:
oxid plan --log-file ./oxid.log
oxid apply --log-file /var/log/oxid/oxid.log
oxid sync --log-file ./oxid.log
oxid destroy --log-file ./oxid.log
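Because each line of the file is a standalone JSON object, you can skim a run with standard tools. A minimal sketch, assuming jq is installed and the log path above:

# Project timestamp, level, and event name for each log line
jq -r '[.timestamp, .level, .fields.event] | @tsv' ./oxid.log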
Terraform / OpenTofu Log Bridge
Already using Terraform or OpenTofu? Pipe their JSON output through oxid watch to get structured, observable logs without changing your workflow:
terraform plan -json | oxid watch --log-file ./tf.log
terraform apply -auto-approve -json | oxid watch --log-file ./tf.log
terraform destroy -auto-approve -json | oxid watch --log-file ./tf.log

tofu plan -json | oxid watch --log-file ./tf.log
tofu apply -auto-approve -json | oxid watch --log-file ./tf.log
tofu destroy -auto-approve -json | oxid watch --log-file ./tf.log
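If you also want to keep the raw Terraform JSON stream for later inspection, tee can sit in the middle of the pipe. A sketch (raw.json is an arbitrary file name, not something oxid requires):

# Keep the unprocessed JSON stream while oxid watch structures it
terraform apply -auto-approve -json | tee raw.json | oxid watch --log-file ./tf.log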
oxid watch reads Terraform/OpenTofu JSON from stdin, extracts resource events, and writes structured logs. Console output shows a clean summary while the log file captures every detail.
source: "terraform" so you can distinguish them from native oxid events in your log aggregator.CI/CD Pipeline Example
- name: Terraform Apply with Observability
  run: |
    terraform apply -auto-approve -json | oxid watch --log-file ./tf.log
- name: Upload logs
  uses: actions/upload-artifact@v4
  with:
    name: terraform-logs
    path: ./tf.log

GitLab CI:

deploy:
  script:
    - terraform apply -auto-approve -json | oxid watch --log-file ./tf.log
    - oxid sync --log-file ./tf.log
  artifacts:
    paths:
      - tf.log

Watch Event Types
tf.resource.plan - Resource planned (address, action, provider)
tf.resource.apply.start - Resource apply started
tf.resource.apply.complete - Resource applied (resource_id, elapsed_secs)
tf.resource.apply.failed - Resource failed (error details)
tf.plan.summary - Plan totals (creates, updates, deletes)
tf.diagnostic - Terraform warnings and errors
tf.watch.complete - Final summary (resources processed, errors)
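As an illustration only, a bridged apply event might land in the log file like this; the exact field layout is an assumption modeled on the native format in the next section, with the source: "terraform" marker added:

{
  "timestamp": "2026-05-07T10:30:02Z",
  "level": "INFO",
  "fields": {
    "event": "tf.resource.apply.complete",
    "address": "aws_vpc.main",
    "resource_id": "vpc-0626c706762a661e9",
    "elapsed_secs": 3,
    "source": "terraform"
  },
  "message": "Resource applied successfully"
}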
Log Format

Each line in the log file is a self-contained JSON object:
{
"timestamp": "2026-05-07T10:30:02Z",
"level": "INFO",
"fields": {
"event": "resource.apply.complete",
"address": "aws_vpc.main",
"resource_type": "aws_vpc",
"resource_id": "vpc-0626c706762a661e9",
"provider": "hashicorp/aws",
"action": "update",
"workspace": "default"
},
"target": "oxid::executor::engine",
"message": "Resource applied successfully"
}

{
"timestamp": "2026-05-07T10:30:01Z",
"level": "INFO",
"fields": {
"event": "plan.summary",
"creates": 0,
"updates": 1,
"deletes": 0,
"replaces": 0,
"no_ops": 39,
"total_resources": 40
},
"message": "Plan complete"
}

{
"timestamp": "2026-05-07T10:30:03Z",
"level": "ERROR",
"fields": {
"event": "resource.apply.failed",
"address": "aws_security_group_rule.web[0]",
"error": "InvalidPermission.Duplicate: rule already exists",
"elapsed_secs": 1
},
"message": "Resource operation failed"
}
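Since every object is independent, jq can slice the file directly. A sketch, again assuming jq and the local log path:

# Count events by type
jq -r '.fields.event' ./oxid.log | sort | uniq -c | sort -rn

# Show the fields of any failed operations
jq -c 'select(.level == "ERROR") | .fields' ./oxid.log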
Event Types

Command Events

command.init - Project initialization started
command.apply - Apply started (includes auto_approve, targets)
command.destroy - Destroy started
command.sync - State sync started

Resource Events

resource.plan - Resource planned (address, action, provider, resource_type)
resource.apply.complete - Resource applied (resource_id, action, provider, workspace)
resource.apply.failed - Resource failed (error, elapsed_secs)
resource.destroy.complete - Resource destroyed (resource_id, provider)
resource.skip - Resource unchanged, skipped (reason: no_changes)

Summary Events

plan.summary - Plan totals (creates, updates, deletes, replaces, no_ops)
apply.summary - Apply results (added, changed, destroyed, failed, elapsed_secs)
sync.complete - Sync results (updated, added, removed)

Provider Events

provider.resolve - Resolving provider version from registry
provider.download - Downloading provider binary
provider.download.complete - Provider download finished (version, path)

State Sync Events

state.sync.resource - Individual resource synced (address, resource_id, action)
state.sync.failed - Resource sync failed (address, error)

Datadog Integration
Point the Datadog Agent at the log file:
logs:
  - type: file
    path: /var/log/oxid/oxid.log
    service: oxid
    source: oxid
    sourcecategory: infrastructure

Then query in Datadog:
# All failed resources
service:oxid @fields.event:resource.apply.failed

# All changes to a specific resource
service:oxid @fields.address:aws_vpc.main

# Apply summary with timing
service:oxid @fields.event:apply.summary
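You can also alert on failures. A sketch of a Datadog log-alert monitor query, assuming the standard log monitor syntax; the 5m window and zero threshold are placeholder choices:

logs("service:oxid @fields.event:resource.apply.failed").index("*").rollup("count").last("5m") > 0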
ELK / Splunk
The JSON format is directly compatible with Filebeat, Logstash, and Splunk Universal Forwarder. Each line is a complete JSON document - no multiline parsing needed.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/oxid/*.log
    json.keys_under_root: true
    json.add_error_key: true
OpenTelemetry (OTel)

Use the OpenTelemetry Collector's filelog receiver to ingest oxid logs into any OTel-compatible backend (Jaeger, Grafana Tempo, Honeycomb, New Relic, etc.):
receivers:
  filelog:
    include:
      - /var/log/oxid/*.log
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%SZ'
        severity:
          parse_from: attributes.level

exporters:
  otlp:
    endpoint: "otel-collector:4317"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]

For Grafana Loki, use the Loki exporter:
exporters:
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
    labels:
      resource:
        attributes:
          - event
          - address
          - resource_type
Combining with Verbose

Use both -v and --log-file together:
oxid apply -v --log-file ./oxid.log
The console shows debug-level, human-readable output, while the log file independently captures info-level structured JSON.
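During a long apply you can keep the verbose console in one terminal and follow the structured stream in another. A sketch, assuming jq:

# Follow structured events live as they are written
tail -f ./oxid.log | jq -r '.fields.event'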