Telemetry and observability
The dbt Fusion Engine provides a comprehensive telemetry system that replaces dbt Core's structured logging. Built on OpenTelemetry conventions and backed by a stable protobuf schema, it enables deep integration with orchestrators, observability platforms, and custom tooling.
Fusion uses the same telemetry integration that the dbt platform relies on for orchestration and monitoring, so these features are proven and production-ready at scale.
Available output formats
Fusion telemetry supports three output formats, which you can enable independently:
| Format | Flag | Output location |
|--------|------|-----------------|
| JSONL | --otel-file-name or --log-format otel | logs/ directory or stdout |
| Parquet | --otel-parquet-file-name | target/metadata/ directory |
| OTLP | --export-to-otlp | An OpenTelemetry collector |
Enabling telemetry output
The following examples show options for enabling telemetry output; you can combine multiple outputs in a single run.
Write JSONL to a file (saves to the logs/ directory):
dbtf build --otel-file-name telemetry.jsonl
Stream JSONL to stdout:
dbtf build --log-format otel
Write a Parquet file (saves to target/metadata/ directory):
dbtf build --otel-parquet-file-name telemetry.parquet
Export to an OpenTelemetry collector:
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318" dbtf build --export-to-otlp
Telemetry data
Fusion telemetry contains two types of records:
- Spans — Operations with a start and end time (like compiling a model or running a test).
- Log records — Point-in-time events within a span.
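A consumer can branch on the record type to separate the two. A minimal sketch, assuming the JSONL field names shown in the record example later on this page (record_type values other than SpanEnd are assumptions here):

```python
import json

# Hypothetical JSONL lines, shaped like the record example later on this page.
lines = [
    '{"record_type": "SpanStart", "span_name": "Node(model.project.customers)"}',
    '{"record_type": "LogRecord", "severity_text": "INFO", "body": "Compiled node"}',
    '{"record_type": "SpanEnd", "span_name": "Node(model.project.customers)"}',
]

spans, logs = [], []
for line in lines:
    record = json.loads(line)
    # Spans bracket an operation in time; log records are point-in-time events.
    if record["record_type"] in ("SpanStart", "SpanEnd"):
        spans.append(record)
    else:
        logs.append(record)

print(len(spans), len(logs))  # 2 1
```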
Telemetry hierarchy
Every dbt command creates a hierarchy of spans:
Invocation (dbtf build)
├── Phase (Parse)
├── Phase (Compile)
│ ├── Node (model.project.customers)
│ └── Node (model.project.orders)
└── Phase (Run)
├── Node (model.project.customers)
└── Node (model.project.orders)
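The tree above can be rebuilt from the span_id and parent_span_id envelope fields (field names taken from the record example below; the sample data here is made up):

```python
from collections import defaultdict

# Made-up spans mirroring part of the hierarchy above.
spans = [
    {"span_id": "01", "parent_span_id": None, "span_name": "Invocation(dbtf build)"},
    {"span_id": "02", "parent_span_id": "01", "span_name": "Phase(Run)"},
    {"span_id": "03", "parent_span_id": "02", "span_name": "Node(model.project.customers)"},
    {"span_id": "04", "parent_span_id": "02", "span_name": "Node(model.project.orders)"},
]

# Index children by their parent span, then walk from the root (parent None).
children = defaultdict(list)
for span in spans:
    children[span["parent_span_id"]].append(span)

def print_tree(parent_id=None, depth=0):
    for span in children.get(parent_id, []):
        print("  " * depth + span["span_name"])
        print_tree(span["span_id"], depth + 1)

print_tree()
```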
The trace_id (also known as invocation_id) remains consistent across all telemetry records for a single dbt command, making it easy to correlate events.
Node outcome
Every node produces a result for each phase it participates in. Some phases, such as parse, don't involve node-level execution, so they don't produce node spans or node outcomes.
The node_outcome field indicates whether Fusion executed the node's operation.
| node_outcome | Meaning |
|--------------|---------|
| success | Fusion executed the node's operation successfully. |
| error | The operation could not run or failed while running. |
| skipped | Fusion did not execute the node (see skip reasons below). |
Skip reasons
When Fusion skips a node, the telemetry includes the reason in the node_skip_reason attribute. For example, a node reused via State Aware Orchestration is skipped with the reason cached, and a node skipped because of an upstream failure records the failing upstream node in node_skip_upstream_detail.
Test outcomes
When a test executes successfully (node_outcome: success), it reports the test's result in a separate test_outcome field.
A test with node_outcome: success and test_outcome: failed means Fusion successfully ran the test, and the test reported data quality issues. This differs from node_outcome: error, which means the test itself couldn't run (for example, invalid SQL).
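That distinction can drive alerting and retry logic. A minimal sketch (field names from this page; the classification labels are mine, not Fusion's):

```python
def classify_test(record: dict) -> str:
    """Separate 'test found bad data' from 'test itself could not run'."""
    attrs = record.get("attributes", {})
    if attrs.get("node_outcome") == "error":
        # The test itself failed to run, e.g. invalid SQL.
        return "test-broken"
    if attrs.get("node_outcome") == "success" and attrs.get("test_outcome") == "failed":
        # The test ran fine and reported data quality issues.
        return "data-quality-issue"
    return "ok"

print(classify_test({"attributes": {"node_outcome": "success", "test_outcome": "failed"}}))
# data-quality-issue
```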
Querying telemetry data
Query the telemetry data to gain deeper insights into your dbt runs.
JSONL examples
The following examples query the JSONL telemetry data.
Watch for errors in real time:
tail -f telemetry.jsonl | jq 'select(.severity_text == "ERROR")'
List skipped nodes, reasons, and upstream details:
cat telemetry.jsonl | jq 'select(.attributes.node_outcome == "NODE_OUTCOME_SKIPPED") | {node: .attributes.unique_id, reason: .attributes.node_skip_reason, upstream: .attributes.node_skip_upstream_detail.upstream_unique_id }'
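The same query works in Python for environments without jq. The record shapes below are assumed from the jq filter above, and the node_skip_reason value is illustrative:

```python
import json

# Made-up JSONL lines shaped like the records the jq filter above selects.
lines = [
    '{"attributes": {"node_outcome": "success", "unique_id": "model.project.customers"}}',
    '{"attributes": {"node_outcome": "NODE_OUTCOME_SKIPPED", '
    '"unique_id": "model.project.orders", '
    '"node_skip_reason": "upstream_failed", '
    '"node_skip_upstream_detail": {"upstream_unique_id": "model.project.customers"}}}',
]

skipped = []
for line in lines:
    attrs = json.loads(line)["attributes"]
    if attrs.get("node_outcome") == "NODE_OUTCOME_SKIPPED":
        skipped.append({
            "node": attrs["unique_id"],
            "reason": attrs.get("node_skip_reason"),
            "upstream": attrs.get("node_skip_upstream_detail", {}).get("upstream_unique_id"),
        })

print(skipped)
```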
Parquet analysis with DuckDB
Use DuckDB to analyze the telemetry data stored in Parquet files.
Find slowest nodes:
import duckdb
duckdb.sql("""
SELECT
attributes.unique_id,
(end_time_unix_nano - start_time_unix_nano) / 1e6 AS duration_ms
FROM 'telemetry.parquet'
WHERE event_type LIKE '%NodeProcessed%'
ORDER BY duration_ms DESC
LIMIT 10
""").show()
Count outcomes by type:
duckdb.sql("""
SELECT
attributes.node_outcome,
COUNT(*) as count
FROM 'telemetry.parquet'
WHERE attributes.node_outcome IS NOT NULL
GROUP BY attributes.node_outcome
""").show()
OpenTelemetry integration
Fusion's native OTLP support lets you send telemetry directly to any OpenTelemetry-compatible receiver, including Datadog, Jaeger, Google Cloud Trace, Grafana Tempo, and Honeycomb.
This enables:
- Integration with existing observability tooling — no custom integrations needed.
- Custom alerts — trigger notifications on failures or slow builds.
- Cross-system correlation — link dbt traces with downstream services.
- Centralized monitoring — view dbt alongside your other infrastructure.
Setting up OTLP export
The following example configures the OTLP export:
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
dbtf build --export-to-otlp
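If you're pointing at a local collector, a minimal OpenTelemetry Collector configuration that accepts OTLP over HTTP on port 4318 might look like the following. This is a generic collector config sketch, not something Fusion ships; the debug exporter is a placeholder for your real backend:

```yaml
# Minimal, illustrative OpenTelemetry Collector config.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # matches OTEL_EXPORTER_OTLP_ENDPOINT above
exporters:
  debug:                          # placeholder; swap for your backend's exporter
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```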
Mapping to dbt Core concepts
If you're familiar with dbt Core's structured logging, the main correspondences are: dbt Core's invocation_id becomes Fusion's trace_id, and dbt Core's node result statuses map onto Fusion's node_outcome and test_outcome fields.
Node status mapping
Note that dbt Core's fail status maps to Fusion's node_outcome: success because Fusion distinguishes between "the test ran successfully and found data issues" versus "the test couldn't run." This separation enables more precise alerting and retry logic.
Fusion adds skip_reason: cached for nodes reused via State Aware Orchestration, which has no dbt Core equivalent.
Record structure
Each telemetry record contains envelope fields plus event-specific attributes:
{
"record_type": "SpanEnd",
"trace_id": "f9a0a9e64c924b878133363ba3515e50",
"span_id": "0000000000000036",
"span_name": "Node(model.project.customers)",
"parent_span_id": "0000000000000017",
"start_time_unix_nano": "1756139116981079652",
"end_time_unix_nano": "1756139117234567890",
"severity_text": "INFO",
"event_type": "v1.public.events.fusion.node.NodeEvaluated",
"attributes": {
"unique_id": "model.project.customers",
"phase": "Run",
"node_outcome": "success"
}
}
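Note that the envelope timestamps are nanosecond epoch values serialized as strings, so a consumer converts them before doing arithmetic. For the example record above:

```python
record = {
    "start_time_unix_nano": "1756139116981079652",
    "end_time_unix_nano": "1756139117234567890",
}

# Timestamps arrive as strings; convert to int before subtracting,
# then divide by 1e6 to go from nanoseconds to milliseconds.
duration_ms = (int(record["end_time_unix_nano"]) - int(record["start_time_unix_nano"])) / 1e6
print(f"{duration_ms:.3f} ms")  # 253.488 ms
```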
Schema stability
Unlike dbt Core's structured logging, Fusion telemetry is backed by a public protobuf schema with strict compatibility guarantees:
- Additive only — New fields and event types may be added, but existing fields are never removed or changed.
- Forward compatible — Your integrations will continue to work as the schema evolves.
This makes Fusion telemetry a reliable foundation for production integrations, orchestrators, and long-term analytics pipelines.
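One practical consequence of the additive-only guarantee: a consumer that reads only the fields it knows will keep working as the schema grows. A minimal sketch:

```python
import json

def parse_span(line: str) -> dict:
    record = json.loads(line)
    # Take only the fields we know; unknown (future) fields are ignored,
    # and missing optional fields default to None rather than raising.
    return {
        "trace_id": record.get("trace_id"),
        "span_name": record.get("span_name"),
        "event_type": record.get("event_type"),
    }

# A record carrying a hypothetical future field still parses cleanly.
print(parse_span('{"trace_id": "abc", "span_name": "Phase(Run)", "future_field": 1}'))
```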
Official client library (coming soon)
dbt Labs is developing an official open-source client library. Built in Rust for performance, it will be available as:
- A standalone Rust crate and CLI.
- A fully-typed Python package wrapping the Rust core.
The library will provide type-safe, forward-compatible access to telemetry data—stream JSONL in real-time, query Parquet files, and build custom integrations with confidence that schema changes won't break your code.