PostgreSQL Database
Every Deepline workspace gets its own dedicated Neon PostgreSQL project. Not a shared multi-tenant database. Not a proprietary data store you need a vendor SDK to query. A real PostgreSQL database with your own schemas, your own roles, and direct SQL access from any client that speaks the Postgres wire protocol. Every GTM team eventually asks: “Where does my data actually live, and can I get it out?” Yes. It lives in PostgreSQL. You can get it out with psql, DBeaver, Metabase, your ORM, a pg driver, or a COPY TO command. There is no export queue, no CSV-only download button, no 10,000-row limit.
| Attribute | Value |
|---|---|
| Database engine | PostgreSQL (Neon serverless) |
| Isolation | One dedicated Neon project per tenant |
| Schemas | 5 managed schemas + 1 custom schema |
| Database roles | 4 (owner, runtime, read, override) |
| Schema migrations | Rolling migrations supported |
| Direct SQL access | Any PostgreSQL client (psql, DBeaver, Metabase, pg driver) |
| Data export | Unrestricted — pg_dump, COPY TO, direct query |
| Data sovereignty | Your data stays in your schema |
What an included database means for your GTM data
Most enrichment tools store your data in their cloud and give you a UI to look at it. If you want to do anything real with that data — join it to your CRM, run custom reports, build internal tools, deduplicate across providers — you are either exporting CSVs or paying for an API that rate-limits you against your own records. Deepline takes a different approach. Your enrichment results, identity graph, manual overrides, and resolved views all live in a PostgreSQL database that you can connect to directly. Here is when that matters and when it does not.
Data sovereignty
Your enrichment results live in your schema, not in a vendor’s multi-tenant database. Query, export, back up, or delete your data at any time with standard PostgreSQL tools. No vendor lock-in. No export fees. No “please contact sales to download your records.” If you leave Deepline, your data is a pg_dump away.
Custom reporting
Connect Metabase, Looker, Mode, Grafana, or any BI tool directly to your database. Write SQL to answer questions the vendor UI never anticipated: “Which companies in our ICP had job postings in the last 30 days but no enriched contact?” “What is our email verification rate by provider over time?” Full SELECT access to resolved views — no API pagination, no row limits.
Identity resolution
Enrich the same person through Apollo, Hunter, and People Data Labs and you get three separate responses with overlapping but inconsistent data. The identity graph (dl_graph) links enrichment events to resolved entities, deduplicates across providers, and produces a single coalesced record in dl_resolved. Query the resolved view and get the best available data without writing merge logic.
Not relevant for ephemeral lookups
If you run a single enrichment and pipe the result straight into a CRM or spreadsheet, you may never touch the database directly. The CLI and API return results inline. The database matters when you build on top of enrichment data over time — running waterfalls, deduplicating contacts, tracking history, or building internal tools.
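To make the precedence rule concrete (manual overrides beat cached provider data), here is a rough sketch of the kind of coalescing the resolved views perform. This is illustrative only, not the actual view definition; the table and column names in the query are simplified assumptions.

```shell
# Illustrative only: the override-beats-cache logic that dl_resolved applies,
# sketched as SQL. Table and column names are simplified assumptions, not the
# real schema contract. $DEEPLINE_READ_URI is a placeholder for your read URI.
psql "$DEEPLINE_READ_URI" <<'SQL'
SELECT
  e.entity_id,
  -- An override value, when present, wins over the cached provider value.
  COALESCE(o.email, c.email)         AS best_email,
  COALESCE(o.job_title, c.job_title) AS best_job_title
FROM entities e
LEFT JOIN cached_values   c ON c.entity_id = e.entity_id
LEFT JOIN override_values o ON o.entity_id = e.entity_id;
SQL
```

In practice you never write this merge yourself; querying dl_resolved gives you the coalesced result directly.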
Schema architecture
Each tenant database contains 5 managed schemas and 1 customer-owned schema. The schemas are designed around a clear data flow: raw enrichment results land in dl_cache, the identity graph in dl_graph links them to resolved entities, manual corrections go in dl_override, coalesced views surface in dl_resolved, and dl_meta tracks schema versioning and tenant configuration.
The 5 managed schemas
dl_cache — Enrichment cache
Stores raw enrichment events from every provider call. Each row represents a single enrichment response keyed by a unique source identifier (provider + operation + input hash). The doc column holds the full JSON response from the upstream provider. The extracted_potential_identifier_keys column contains parsed identifiers (emails, LinkedIn URLs, domains) used by the identity graph for entity linking.

Key table: dl_cache.enrichment_event

This schema is write-only from the application perspective. The platform runtime role writes enrichment results here; tenant roles have no direct write access. You can read cached enrichment data through the dl_resolved views, which merge cache and override data into a single coalesced record.

Why it matters: Every enrichment call you have ever made is stored here. You never re-pay for data you already have. The cache also powers waterfall logic — if Provider A already returned an email for this person, Provider B is only called if the cached result is stale or incomplete.
dl_override — Manual overrides
Stores human-entered corrections and custom enrichment events. When a user corrects an email, updates a job title, or tombstones a stale record, the change lands here. Override records take precedence over cached enrichment data when the dl_resolved views are queried.

Key table: dl_override.custom_enrichment_event

The override role (dl_tenant_override) has full read/write access to this schema. This is the schema you write to when building features that let users correct or annotate enrichment data in your own applications.

Why it matters: Enrichment data is never perfect. Overrides give you a clean separation between “what the provider said” and “what we know is actually correct,” without mutating the original cached data.
dl_graph — Identity graph
The core of cross-provider deduplication. This schema contains three interconnected tables:
- dl_graph.entities — Resolved person and company entities. Each entity has a type (person or company) and an optional parent link (person → company).
- dl_graph.adoptions — Links enrichment event rows (from dl_cache or dl_override) to their resolved entity. Tracks confidence scores and the reasoning behind each adoption.
- dl_graph.identifier_memberships — Maps parsed identifiers (email addresses, LinkedIn URLs, company domains, phone numbers) to entities. This is the join table that enables “find all enrichment results for this person across all providers.”
Why it matters: The graph is what powers the deduplicated records in dl_resolved. Without the graph, you would be writing custom deduplication logic for every provider combination.
dl_resolved — Coalesced views
Read-only views that merge dl_cache enrichment events, dl_override corrections, and dl_graph entity resolution into a single queryable surface. This is the schema you query 90% of the time.

Key views:
- dl_resolved.resolved_people — One row per resolved person entity, with the best available data from all linked enrichment events and any override patches applied.
- dl_resolved.resolved_companies — Same for companies.
- dl_resolved.coalesced_enrichment_event — Lower-level view showing individual enrichment events with override patches applied.
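A typical reporting query against the resolved views might look like this. The connection string below uses placeholder credentials and host; substitute the values from your read connection URI.

```shell
# Query resolved people with the read role. PASSWORD and YOUR-NEON-HOST are
# placeholders; the database name may differ in your workspace.
psql "postgresql://dl_tenant_read:PASSWORD@YOUR-NEON-HOST/neondb?sslmode=require" \
  -c "SELECT * FROM dl_resolved.resolved_people LIMIT 10;"
```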
Why it matters: Query dl_resolved.resolved_people and you get a single record per person with the best email, most recent job title, and all linked identifiers — regardless of which provider originally returned each field.
dl_meta — Migrations and settings
Tracks schema version and tenant-specific configuration. The platform uses this to manage rolling migrations across tenant databases.

Key tables:
- dl_meta.schema_migrations — Ordered list of applied migrations with timestamps. Used by the migration runner to determine which migrations need to run on each tenant.
- dl_meta.tenant_settings — Key-value store for tenant-specific configuration (e.g., which schema object names are active after a migration).
Why it matters: When a schema change rolls out, the migration runner reads dl_meta.schema_migrations, applies any pending migrations in order, and updates dl_meta.tenant_settings with the new configuration. You do not need to manage migrations yourself — this is platform-managed.
The customer-owned schema
In addition to the 5 managed schemas, every tenant database includes a tenant_custom schema. The override role has full DDL and DML privileges here — you can create your own tables, indexes, functions, and views. Use this for application-specific data that lives alongside your enrichment data (e.g., account segments, scoring models, internal tags).
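For example, creating an application-owned table in tenant_custom with the override role might look like this. The table name and columns are illustrative, not part of Deepline's contract; the connection details are placeholders.

```shell
# Create an application-owned table in tenant_custom using the override role.
# The table name and columns are illustrative examples.
psql "postgresql://dl_tenant_override:PASSWORD@YOUR-NEON-HOST/neondb?sslmode=require" <<'SQL'
CREATE TABLE IF NOT EXISTS tenant_custom.account_segments (
  id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  domain     text NOT NULL,
  segment    text NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);
-- Index the join key you expect to query on.
CREATE INDEX IF NOT EXISTS account_segments_domain_idx
  ON tenant_custom.account_segments (domain);
SQL
```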
Database roles
Each tenant database has 4 roles with carefully scoped permissions. You never connect as the owner role — it is used only during provisioning and migrations.
| Role | Purpose | Permissions |
|---|---|---|
| Owner | Provisioning and DDL | Full access to all schemas. Used by the platform during bootstrap and migrations. Never exposed to tenants. |
| Runtime (dl_platform_runtime) | Application operations | Read/write on dl_cache, dl_graph, dl_meta. Internal platform role for enrichment writes and identity graph updates. |
| Read (dl_tenant_read) | Reporting and BI tools | SELECT on dl_resolved.*. Connect your BI tool here. Cannot see raw cache data or modify anything. |
| Override (dl_tenant_override) | User corrections and custom tables | SELECT on dl_resolved.*, full CRUD on dl_override.*, full DDL/DML on tenant_custom.*. Use this when building apps that write data. |
How to access your database
1. Check workspace status
If the workspace status is not active, provision first:
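As a sketch, assuming a `deepline` CLI with workspace subcommands (the exact command names and flags here are assumptions; consult the CLI's own help output for the real ones), the check-and-provision steps might look like:

```shell
# Hypothetical subcommand names; verify against `deepline --help`.
deepline workspace status      # shows whether the tenant database is active
deepline workspace provision   # provisions the Neon project if it is not
```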
2. Get a connection URI
Request a read URI for reporting, or use "kind": "override" when you need write access for corrections or custom tables.
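A request for a connection URI might look like the following sketch. The endpoint path, environment variables, and response shape are assumptions for illustration; see the Developer Guide for the actual API contract.

```shell
# Hypothetical endpoint path; $DEEPLINE_API_URL, $WORKSPACE_ID, and
# $DEEPLINE_API_KEY are placeholders you would set yourself.
curl -s -X POST "$DEEPLINE_API_URL/workspaces/$WORKSPACE_ID/connection-uri" \
  -H "Authorization: Bearer $DEEPLINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"kind": "read"}'
```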
3. Connect with any PostgreSQL client
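With the URI from the previous step, connecting with psql looks like this (the password and host below are placeholders; SSL is required on Neon):

```shell
# Paste the URI returned in step 2. sslmode=require is mandatory.
psql "postgresql://dl_tenant_read:PASSWORD@YOUR-NEON-HOST/neondb?sslmode=require"

# Once connected, \dn lists the schemas your role can see.
```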
4. Export your data
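Two common export paths, sketched with placeholder connection details (substitute your own URI):

```shell
# CSV export of resolved people. psql's client-side \copy writes the file
# locally, so this works even without server filesystem access.
psql "postgresql://dl_tenant_read:PASSWORD@YOUR-NEON-HOST/neondb?sslmode=require" \
  -c "\copy (SELECT * FROM dl_resolved.resolved_people) TO 'people.csv' WITH (FORMAT csv, HEADER)"

# Dump of the resolved schema in pg_dump's custom format, restorable with pg_restore.
pg_dump "postgresql://dl_tenant_read:PASSWORD@YOUR-NEON-HOST/neondb?sslmode=require" \
  --schema=dl_resolved --format=custom --file=dl_resolved.dump
```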
For a full database export, run pg_dump against your connection URI without a schema filter.
Rolling migrations
Deepline uses rolling migrations to evolve your database schema without downtime. Each migration is versioned, ordered, and idempotent. The migration runner:
- Reads dl_meta.schema_migrations to determine the current schema version
- Applies any pending migrations in order
- Updates dl_meta.tenant_settings with new configuration (e.g., renamed tables/views)
- Verifies the migration with SQL checks

Queries against dl_resolved continue to work while migrations are in progress.
Frequently asked questions
Can I connect a BI tool like Metabase or Looker directly?
Yes. Request a read connection URI and configure your BI tool with the host, port, database, username, and password from the URI. SSL is required. The dl_resolved views are designed to be the primary query surface for reporting — they merge cache and override data into clean, deduplicated records.
Is my data shared with other tenants?
No. Each workspace gets its own dedicated Neon project; there is no shared multi-tenant database. Your enrichment results, identity graph, and overrides live in your own schemas, isolated from every other tenant.
Can I run my own DDL (CREATE TABLE, ALTER TABLE)?
Yes, in the tenant_custom schema. The override role has full DDL and DML privileges there. The 5 managed schemas (dl_cache, dl_override, dl_graph, dl_resolved, dl_meta) are platform-managed — you should not modify their structure directly.
What happens if I delete data from dl_override?
The corresponding override is removed, and dl_resolved views revert to showing the original cached enrichment data for those records. Deleting overrides is non-destructive to the underlying cache.
How do I rotate database credentials?
Call the credential rotation endpoint. This generates a new password for the specified role. Existing connections using the old password will be terminated.
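A rotation call might look like the following sketch. The endpoint path, payload shape, and environment variables are assumptions for illustration; check the Developer Guide for the actual contract.

```shell
# Hypothetical endpoint path and payload; $DEEPLINE_API_URL, $WORKSPACE_ID,
# and $DEEPLINE_API_KEY are placeholders you would set yourself.
curl -s -X POST "$DEEPLINE_API_URL/workspaces/$WORKSPACE_ID/rotate-credentials" \
  -H "Authorization: Bearer $DEEPLINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"role": "read"}'
```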
Can I use pg_dump to back up my database?
Yes. Connect with the read role and run pg_dump against the schemas it can see. For a full export of your resolved data, dump the dl_resolved schema. For raw enrichment events, dump dl_cache (this requires a role with read access to that schema). There are no restrictions on standard PostgreSQL export tools.
What PostgreSQL version is this?
Neon runs PostgreSQL 16. All standard PostgreSQL 16 features, extensions (pgcrypto is enabled by default), and wire protocol compatibility apply.
Is there a row limit or storage cap?
Neon’s serverless architecture scales storage automatically. There is no artificial row limit imposed by Deepline. Storage costs are part of the Neon project provisioned for your workspace — see the pricing page for details.
Related
Database Access (Developer Guide)
API endpoints, code patterns, and the full schema contract reference for building on your database.
Quick Start
Install Deepline and run your first enrichment in 60 seconds.
CLI Concepts
Command patterns, payloads, and execution flow.
Pricing
BYOK free tier, managed credits, and what is included with your database.