
PostgreSQL Database

Every Deepline workspace gets its own dedicated Neon PostgreSQL project. Not a shared multi-tenant database. Not a proprietary data store you need a vendor SDK to query. A real PostgreSQL database with your own schemas, your own roles, and direct SQL access from any client that speaks the Postgres wire protocol. Every GTM team eventually asks: “Where does my data actually live, and can I get it out?” Yes. It lives in PostgreSQL. You can get it out with psql, DBeaver, Metabase, your ORM, a pg driver, or a COPY TO command. There is no export queue, no CSV-only download button, no 10,000-row limit.
Database engine: PostgreSQL (Neon serverless)
Isolation: One dedicated Neon project per tenant
Schemas: 5 managed schemas + 1 custom schema
Database roles: 4 (owner, runtime, read, override)
Schema migrations: Rolling migrations supported
Direct SQL access: Any PostgreSQL client (psql, DBeaver, Metabase, pg driver)
Data export: Unrestricted — pg_dump, COPY TO, direct query
Data sovereignty: Your data stays in your schema

What an included database means for your GTM data

Most enrichment tools store your data in their cloud and give you a UI to look at it. If you want to do anything real with that data — join it to your CRM, run custom reports, build internal tools, deduplicate across providers — you are either exporting CSVs or paying for an API that rate-limits you against your own records. Deepline takes a different approach. Your enrichment results, identity graph, manual overrides, and resolved views all live in a PostgreSQL database that you can connect to directly. Here is when that matters and when it does not.

Data sovereignty

Your enrichment results live in your schema, not in a vendor’s multi-tenant database. Query, export, back up, or delete your data at any time with standard PostgreSQL tools. No vendor lock-in. No export fees. No “please contact sales to download your records.” If you leave Deepline, your data is a pg_dump away.

Custom reporting

Connect Metabase, Looker, Mode, Grafana, or any BI tool directly to your database. Write SQL to answer questions the vendor UI never anticipated: “Which companies in our ICP had job postings in the last 30 days but no enriched contact?” “What is our email verification rate by provider over time?” Full SELECT access to resolved views — no API pagination, no row limits.
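As an illustration, the second question above might be answered with a query like the following. The column names (provider, email_status, enriched_at) are assumptions made for the sketch, not documented fields; check the actual dl_resolved view definitions before using it.

```sql
-- Hypothetical report: email verification rate by provider over time.
-- provider, email_status, and enriched_at are assumed column names.
SELECT
  provider,
  date_trunc('month', enriched_at) AS month,
  count(*) FILTER (WHERE email_status = 'verified')::numeric
    / NULLIF(count(*), 0) AS verification_rate
FROM dl_resolved.coalesced_enrichment_event
GROUP BY provider, month
ORDER BY month, provider;
```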

Identity resolution

Enrich the same person through Apollo, Hunter, and People Data Labs and you get three separate responses with overlapping but inconsistent data. The identity graph (dl_graph) links enrichment events to resolved entities, deduplicates across providers, and produces a single coalesced record in dl_resolved. Query the resolved view and get the best available data without writing merge logic.

Not relevant for ephemeral lookups

If you run a single enrichment and pipe the result straight into a CRM or spreadsheet, you may never touch the database directly. The CLI and API return results inline. The database matters when you build on top of enrichment data over time — running waterfalls, deduplicating contacts, tracking history, or building internal tools.

Schema architecture

Each tenant database contains 5 managed schemas and 1 customer-owned schema. The schemas are designed around a clear data flow: raw enrichment results land in dl_cache, the identity graph in dl_graph links them to resolved entities, manual corrections go in dl_override, coalesced views surface in dl_resolved, and dl_meta tracks schema versioning and tenant configuration.

The 5 managed schemas

dl_cache

Stores raw enrichment events from every provider call. Each row represents a single enrichment response keyed by a unique source identifier (provider + operation + input hash). The doc column holds the full JSON response from the upstream provider. The extracted_potential_identifier_keys column contains parsed identifiers (emails, LinkedIn URLs, domains) used by the identity graph for entity linking.

Key table: dl_cache.enrichment_event

This schema is write-only from the application perspective. The platform runtime role writes enrichment results here; tenant roles have no direct write access. You can read cached enrichment data through the dl_resolved views, which merge cache and override data into a single coalesced record.

Why it matters: Every enrichment call you have ever made is stored here. You never re-pay for data you already have. The cache also powers waterfall logic — if Provider A already returned an email for this person, Provider B is only called if the cached result is stale or incomplete.
dl_override

Stores human-entered corrections and custom enrichment events. When a user corrects an email, updates a job title, or tombstones a stale record, the change lands here. Override records take precedence over cached enrichment data when the dl_resolved views are queried.

Key table: dl_override.custom_enrichment_event

The override role (dl_tenant_override) has full read/write access to this schema. This is the schema you write to when building features that let users correct or annotate enrichment data in your own applications.

Why it matters: Enrichment data is never perfect. Overrides give you a clean separation between “what the provider said” and “what we know is actually correct,” without mutating the original cached data.
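A minimal sketch of recording a correction with the override role. The column names (entity_id, patch, created_by) are illustrative assumptions about the table's shape; consult the schema contract reference for the actual definition of dl_override.custom_enrichment_event.

```sql
-- Sketch: record a manual email correction (column names assumed).
INSERT INTO dl_override.custom_enrichment_event (entity_id, patch, created_by)
VALUES (
  'a1b2c3d4-0000-0000-0000-000000000000'::uuid,  -- resolved entity being corrected
  '{"email": "jane.doe@example.com"}'::jsonb,    -- fields that should win over cache
  'ops@yourcompany.com'
);
```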
dl_graph

The core of cross-provider deduplication. This schema contains three interconnected tables:
  • dl_graph.entities — Resolved person and company entities. Each entity has a type (person or company) and an optional parent link (person → company).
  • dl_graph.adoptions — Links enrichment event rows (from dl_cache or dl_override) to their resolved entity. Tracks confidence scores and the reasoning behind each adoption.
  • dl_graph.identifier_memberships — Maps parsed identifiers (email addresses, LinkedIn URLs, company domains, phone numbers) to entities. This is the join table that enables “find all enrichment results for this person across all providers.”
Why it matters: When you enrich the same VP of Sales through three different providers, you get three separate JSON blobs. The identity graph resolves them into a single entity, links all three enrichment events to that entity, and makes the merged result available through dl_resolved. Without the graph, you would be writing custom deduplication logic for every provider combination.
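The “find all enrichment results for this person” lookup described above might look like the following. Table names follow the descriptions in this section, but the join-column names (identifier_value, entity_id, event_id, confidence) are assumptions for illustration.

```sql
-- Sketch: every enrichment event linked to one person, starting from an email.
-- Column names are assumed; verify against the schema contract.
SELECT e.entity_id, a.event_id, a.confidence
FROM dl_graph.identifier_memberships im
JOIN dl_graph.entities e  ON e.entity_id = im.entity_id
JOIN dl_graph.adoptions a ON a.entity_id = e.entity_id
WHERE im.identifier_value = 'jane.doe@example.com'
  AND e.type = 'person';
```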
dl_resolved

Read-only views that merge dl_cache enrichment events, dl_override corrections, and dl_graph entity resolution into a single queryable surface. This is the schema you query 90% of the time.

Key views:
  • dl_resolved.resolved_people — One row per resolved person entity, with the best available data from all linked enrichment events and any override patches applied.
  • dl_resolved.resolved_companies — Same for companies.
  • dl_resolved.coalesced_enrichment_event — Lower-level view showing individual enrichment events with override patches applied.
Why it matters: You never have to write merge logic. Query dl_resolved.resolved_people and you get a single record per person with the best email, most recent job title, and all linked identifiers — regardless of which provider originally returned each field.
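In practice, the typical lookup is a plain SELECT against the resolved view. The field names below (full_name, email, job_title) are assumed for the sketch; the actual columns are defined by the dl_resolved view contract.

```sql
-- Sketch: one coalesced row per person, no merge logic needed.
-- full_name, email, and job_title are assumed column names.
SELECT entity_id, full_name, email, job_title
FROM dl_resolved.resolved_people
WHERE email = 'jane.doe@example.com';
```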
dl_meta

Tracks schema version and tenant-specific configuration. The platform uses this to manage rolling migrations across tenant databases.

Key tables:
  • dl_meta.schema_migrations — Ordered list of applied migrations with timestamps. Used by the migration runner to determine which migrations need to run on each tenant.
  • dl_meta.tenant_settings — Key-value store for tenant-specific configuration (e.g., which schema object names are active after a migration).
Why it matters: Rolling migrations mean your database schema evolves without downtime. The migration runner checks dl_meta.schema_migrations, applies any pending migrations in order, and updates dl_meta.tenant_settings with the new configuration. You do not need to manage migrations yourself — this is platform-managed.
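Although migrations are platform-managed, nothing stops you from inspecting the migration log with the read role. The column names (version, applied_at) are assumed from the description above.

```sql
-- Read-only peek at recent migrations (column names assumed).
SELECT version, applied_at
FROM dl_meta.schema_migrations
ORDER BY applied_at DESC
LIMIT 5;
```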

The customer-owned schema

In addition to the 5 managed schemas, every tenant database includes a tenant_custom schema. The override role has full DDL and DML privileges here — you can create your own tables, indexes, functions, and views. Use this for application-specific data that lives alongside your enrichment data (e.g., account segments, scoring models, internal tags).
CREATE TABLE IF NOT EXISTS tenant_custom.account_segments (
  id text PRIMARY KEY,
  company_entity_id uuid REFERENCES dl_graph.entities(entity_id),
  segment text NOT NULL,
  score numeric,
  created_at timestamptz NOT NULL DEFAULT now()
);
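Once a custom table like the one above exists, it joins naturally to the resolved views. This sketch assumes resolved_companies exposes entity_id and company_name columns, which is not guaranteed by this page; check the view definitions.

```sql
-- Sketch: rank your custom segments by score, joined to resolved companies.
-- company_name is an assumed column on dl_resolved.resolved_companies.
SELECT s.segment, s.score, c.company_name
FROM tenant_custom.account_segments s
JOIN dl_resolved.resolved_companies c
  ON c.entity_id = s.company_entity_id
ORDER BY s.score DESC NULLS LAST;
```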

Database roles

Each tenant database has 4 roles with carefully scoped permissions. You never connect as the owner role — it is used only during provisioning and migrations.
  • Owner (provisioning and DDL): Full access to all schemas. Used by the platform during bootstrap and migrations. Never exposed to tenants.
  • Runtime, dl_platform_runtime (application operations): Read/write on dl_cache, dl_graph, dl_meta. Internal platform role for enrichment writes and identity graph updates.
  • Read, dl_tenant_read (reporting and BI tools): SELECT on dl_resolved.*. Connect your BI tool here. Cannot see raw cache data or modify anything.
  • Override, dl_tenant_override (user corrections and custom tables): SELECT on dl_resolved.*, full CRUD on dl_override.*, full DDL/DML on tenant_custom.*. Use this when building apps that write data.
Which role should I use? Start with the read role for dashboards and reporting. Use the override role only when your application needs to write corrections or create custom tables. The read role is safe to hand to analysts — it cannot modify any data.

How to access your database

1. Check workspace status

curl -s https://code.deepline.com/api/v2/tenants/status \
  -H "Authorization: Bearer $DEEPLINE_API_KEY" | jq .
If status is not active, provision first:
curl -X POST https://code.deepline.com/api/v2/tenants/provision \
  -H "Authorization: Bearer $DEEPLINE_API_KEY"

2. Get a connection URI

curl -X POST https://code.deepline.com/api/v2/tenants/connection-uris/reveal \
  -H "Authorization: Bearer $DEEPLINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"kind": "read", "pooled": true}'
Use "kind": "override" when you need write access for corrections or custom tables.

3. Connect with any PostgreSQL client

psql "postgresql://dl_tenant_read:****@ep-xxxxx.us-east-2.aws.neon.tech/neondb?sslmode=require"

4. Export your data

Export with pg_dump (shown here scoped to the dl_resolved schema; omit --schema to dump everything the role can read):
pg_dump "postgresql://dl_tenant_read:****@ep-xxxxx.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  --schema=dl_resolved \
  --no-owner \
  -f deepline-export.sql
Export to CSV:
psql "$DEEPLINE_DB_URI" -c "\copy (SELECT * FROM dl_resolved.resolved_people) TO 'people.csv' WITH CSV HEADER"

Rolling migrations

Deepline uses rolling migrations to evolve your database schema without downtime. Each migration is versioned, ordered, and idempotent. The migration runner:
  1. Reads dl_meta.schema_migrations to determine the current schema version
  2. Applies any pending migrations in order
  3. Updates dl_meta.tenant_settings with new configuration (e.g., renamed tables/views)
  4. Verifies the migration with SQL checks
You do not need to run migrations yourself. The platform manages migration rollout across all tenant databases. Migrations are designed to be backward-compatible during the rollout window — your queries against dl_resolved continue to work while migrations are in progress.

Frequently asked questions

Can I connect a BI tool directly to my database?

Yes. Request a read connection URI and configure your BI tool with the host, port, database, username, and password from the URI. SSL is required. The dl_resolved views are designed to be the primary query surface for reporting — they merge cache and override data into clean, deduplicated records.
Is my data stored in a shared multi-tenant database?

No. Each workspace gets a dedicated Neon project. Your data is physically isolated — separate compute, separate storage, separate connection endpoints. There is no row-level security or shared-schema multi-tenancy.
Can I create my own tables?

Yes, in the tenant_custom schema. The override role has full DDL and DML privileges there. The 5 managed schemas (dl_cache, dl_override, dl_graph, dl_resolved, dl_meta) are platform-managed — you should not modify their structure directly.
What happens when I delete an override?

The corresponding override is removed, and dl_resolved views revert to showing the original cached enrichment data for those records. Deleting overrides is non-destructive to the underlying cache.
How do I rotate my database credentials?

Call the credential rotation endpoint:
curl -X POST https://code.deepline.com/api/v2/tenants/connection-uris/rotate \
  -H "Authorization: Bearer $DEEPLINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"kind": "read"}'
This generates a new password for the specified role. Existing connections using the old password will be terminated.
Can I export all of my data?

Yes. Connect with the read role and run pg_dump against any schema you need. For a full export of your resolved data, dump the dl_resolved schema. For raw enrichment events, dump dl_cache. There are no restrictions on standard PostgreSQL export tools.
Which PostgreSQL version does my database run?

Neon runs PostgreSQL 16. All standard PostgreSQL 16 features, extensions (pgcrypto is enabled by default), and wire protocol compatibility apply.
Is there a limit on how much data I can store?

Neon’s serverless architecture scales storage automatically. There is no artificial row limit imposed by Deepline. Storage costs are part of the Neon project provisioned for your workspace — see the pricing page for details.

Database Access (Developer Guide)

API endpoints, code patterns, and the full schema contract reference for building on your database.

Quick Start

Install Deepline and run your first enrichment in 60 seconds.

CLI Concepts

Command patterns, payloads, and execution flow.

Pricing

BYOK free tier, managed credits, and what is included with your database.