Replace your data stack with one binary.
No connectors. No DAGs. No infrastructure.
Copy a pipeline. Run it locally.

-- @kind: merge
-- @unique_key: order_id
-- @incremental: updated_at
-- @constraint: order_id PRIMARY KEY
-- @audit: row_count > 0
SELECT
order_id,
customer_id,
order_date,
total_amount,
status
FROM raw.orders

Write models, not infrastructure
Ingest from REST APIs with built-in HTTP, OAuth, and pagination.
# @kind: merge
# @unique_key: id
url = "https://api.example.com/users"
resp = http.get(url, retry=3)
for user in resp.json:
    save.row(user)

Transform data with plain SQL. Materialization and change detection are automatic.
-- @kind: table
-- @constraint: id NOT NULL
-- @constraint: email EMAIL
-- @audit: row_count > 0
SELECT
id,
LOWER(TRIM(name)) AS name,
LOWER(TRIM(email)) AS email,
created_at
FROM raw.api_users

Preview every change before committing. See diffs, schema evolution, and downstream impact.
╠═══ Execution ════════════════════════════╣
[OK] raw.customers (45ms)
[OK] staging.customers (128ms)
[OK] staging.orders (95ms)
[OK] mart.revenue (54ms)
╠═══ Summary ══════════════════════════════╣
Unchanged: 2 models
Changed: 2 models
╠═══ Changes ══════════════════════════════╣
staging.orders
Rows: 100 → 142 (+42, +42.0%)
+ Added: 42 rows
mart.revenue
SCHEMA EVOLUTION:
+ Added: region (VARCHAR)
Rows: 5 → 6 (+1, +20.0%)

Copy into your project and run
Auth, pagination, incremental loading — all included.
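Conceptually, cursor pagination plus incremental loading reduces to a single loop. A sketch in plain Python (not ondatrasql's API; fetch_page stands in for any paginated REST endpoint, and the updated_at filter mirrors the @incremental annotation shown above):

```python
from typing import Callable, Optional

def load_all(fetch_page: Callable[[Optional[str]], dict], since: str) -> list[dict]:
    """Drain a cursor-paginated endpoint, keeping only rows newer than `since`.

    `fetch_page(cursor)` is assumed to return
    {"rows": [...], "next_cursor": str | None}.
    """
    rows, cursor = [], None
    while True:
        page = fetch_page(cursor)
        # Incremental filter: skip rows already loaded on a previous run.
        rows += [r for r in page["rows"] if r["updated_at"] > since]
        cursor = page.get("next_cursor")
        if cursor is None:  # last page reached
            return rows

# Fake two-page endpoint for demonstration.
PAGES = {
    None: {"rows": [{"id": 1, "updated_at": "2024-01-01"}], "next_cursor": "p2"},
    "p2": {"rows": [{"id": 2, "updated_at": "2024-03-01"}], "next_cursor": None},
}
print(load_all(PAGES.__getitem__, since="2024-02-01"))  # only row 2 is newer
```

This is the loop you stop writing by hand: the runtime follows cursors and remembers the high-water mark between runs.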
The data stack you no longer need
No connectors to build. No DAGs to maintain. No pipelines to orchestrate. No systems to glue together.
dbt
SQL transformations with built-in CDC and schema evolution. No models to manage across environments. No Jinja.
Airflow
Automatic dependency resolution and execution. No DAGs. No scheduling. No orchestration layer.
Airbyte / Fivetran
Ingest from any API with built-in HTTP, OAuth, and pagination. No connectors. No sync jobs.
Kafka (event ingestion)
Send events over HTTP and query them directly. No broker. No consumers. No ops.
Reverse ETL tools
Send data back to APIs directly from your pipeline. No sync layer. No separate system.
Custom scripts
No glue code between systems. Ingest, transform, and query in one place.
Modern data stack
No layers. No services. No architecture to maintain. Just one system that runs your data.
Everything you need, nothing you don't
From API to Query
From API to queryable data in one command. No message broker. No ingestion layer. No orchestrator.
Runs Locally
Runs on one machine. No cluster. No cloud. No setup. Install, init, run.
Every Change Tracked
Every run is reproducible. Snapshots track every change. Rollback with time-travel.
Preview Before You Ship
Preview all changes before committing. See diffs, schema evolution, downstream impact — then decide.
No Connectors to Install
Call any API directly. HTTP, OAuth, pagination — all built in. No connectors to install. No Python.
Quality Built In
26 constraints block bad data. 17 audits catch regressions. Column lineage tracks every field. Schema evolves without migrations.
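Constraints and audits are declared as comment annotations on the model itself. A sketch combining the annotations shown in the examples above (the table and column names are illustrative):

-- @kind: merge
-- @unique_key: id
-- @incremental: updated_at
-- @constraint: id PRIMARY KEY
-- @constraint: email EMAIL
-- @audit: row_count > 0
SELECT
id,
email,
updated_at
FROM raw.accounts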
Everything in One Pipeline
SQL for transforms. Scripts for ingestion. Config built in. No separate tools. No glue code.
Built-in Event Collection
POST events to an embedded endpoint. Durable buffering. Exactly-once flush. No Kafka.
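The embedded endpoint accepts plain HTTP POSTs, so any client that can send JSON can emit events. A minimal sketch in Python; the URL, port, and event shape are hypothetical, so check the actual endpoint path in the docs:

```python
import json
import urllib.request

def build_event_request(url: str, event: dict) -> urllib.request.Request:
    """Build (but don't send) a JSON POST carrying one event."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local endpoint; sending would be urllib.request.urlopen(req).
req = build_event_request(
    "http://localhost:8080/events",  # assumed address, not documented here
    {"type": "page_view", "path": "/pricing"},
)
print(req.method, req.full_url)
```

No producer library, no broker client: the event is an ordinary HTTP request.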
One command replaces your pipeline stack
You don't configure pipelines. You run them.
1. Install
curl -fsSL https://ondatra.sh/install.sh | sh
2. Write SQL
Place SQL files in models/. Add @kind, @incremental, @constraint as comments.
3. Run
ondatrasql run — DAG built, changes detected, data materialized.
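A first model file for step 2 can be as small as this, in the same annotation style shown above (the file and table names are illustrative):

-- models/staging_orders.sql (illustrative name)
-- @kind: table
-- @audit: row_count > 0
SELECT
order_id,
status
FROM raw.orders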
Get started in minutes
One binary. No dependencies. No cloud required.
Ondatra Labs