# PostgreSQL
> **Experimental**
>
> The DB integrations layer is experimental. APIs and annotations described in this section may change in future releases.
PostgreSQL is the most complete Atscript DB adapter. It supports transactional DDL, native foreign key enforcement, pgvector for vector similarity search, CITEXT for case-insensitive text, GIN-based fulltext search, and JSONB for structured data. If you need a production-grade relational database with advanced capabilities, PostgreSQL is the recommended adapter.
## Installation
```sh
pnpm add @atscript/db-postgres pg
```

The adapter uses node-postgres (`pg`) as its driver. Install `@types/pg` as a dev dependency if you need TypeScript types for the underlying `pg` library.
## Setup
### Driver and Adapter
Create a `PgDriver` with your connection details, then wrap it in a `PostgresAdapter` factory via `DbSpace`:
```ts
import { DbSpace } from '@atscript/db'
import { PgDriver, PostgresAdapter } from '@atscript/db-postgres'

const driver = new PgDriver('postgresql://user:pass@localhost:5432/mydb')
const db = new DbSpace(() => new PostgresAdapter(driver))
```

`PgDriver` accepts three input forms:
```ts
// Connection URI string
const driver = new PgDriver('postgresql://user:pass@localhost:5432/mydb')

// Pool configuration object
const driver = new PgDriver({
  host: 'localhost',
  user: 'postgres',
  database: 'mydb',
  max: 10,
})

// Pre-created pg.Pool instance
import pg from 'pg'
const pool = new pg.Pool({ connectionString: '...' })
const driver = new PgDriver(pool)
```

### Convenience Helper
For quick setup, use the `createAdapter` shortcut that creates both the driver and `DbSpace` in one call:
```ts
import { createAdapter } from '@atscript/db-postgres'

const db = createAdapter('postgresql://user:pass@localhost:5432/mydb')
```

You can pass additional pool options as the second argument:
```ts
const db = createAdapter('postgresql://localhost:5432/mydb', { max: 20 })
```

### Plugin Registration
To use PostgreSQL-specific annotations (`@db.pg.*`), register the plugin in your Atscript configuration:
```ts
import ts from '@atscript/typescript'
import { dbPlugin } from '@atscript/db/plugin'
import pg from '@atscript/db-postgres/plugin'

export default {
  plugins: [ts(), dbPlugin(), pg()],
}
```

`dbPlugin()` is required — it registers all portable `@db.*` annotations. The PostgreSQL plugin (`pg()`) is optional and only needed if you use `@db.pg.type`, `@db.pg.schema`, or `@db.pg.collate`. See Setup for full configuration details.
## PostgreSQL-Specific Annotations
These annotations opt into PostgreSQL-specific behavior. Files using only portable @db.* annotations remain adapter-agnostic.
| Annotation | Level | Purpose |
|---|---|---|
| `@db.pg.type "TYPE"` | Field | Override the native column type (e.g., `"CITEXT"`, `"INET"`, `"MACADDR"`) |
| `@db.pg.schema "name"` | Interface | Set the database schema (default: `"public"`) |
| `@db.pg.collate "collation"` | Field | Native PostgreSQL collation override (e.g., `"tr-x-icu"`, `"C"`) |
Example:
```atscript
@db.pg.schema "analytics"
@db.table "events"
export interface Event {
  @meta.id
  id: string

  @db.pg.type "INET"
  ip: string

  @db.pg.collate "tr-x-icu"
  name: string
}
```

## Type Mapping
| Atscript Type | PostgreSQL Type | Notes |
|---|---|---|
| `string` | `TEXT` | `VARCHAR(N)` when `@expect.maxLength` is set |
| `string` (PK or has default) | `VARCHAR(255)` | |
| `string` with `@db.column.collate 'nocase'` | `CITEXT` | Case-insensitive text extension |
| `string` with `char` tag | `CHAR(1)` | |
| `number` | `DOUBLE PRECISION` | |
| `number` (integer tags) | `SMALLINT` / `INTEGER` / `BIGINT` | Based on `int8`/`int16`/`int32`/`int64` tags |
| `number` with `@db.column.precision` | `NUMERIC(p,s)` | |
| `number` with `@db.default.increment` | `BIGINT` | `GENERATED BY DEFAULT AS IDENTITY` |
| `number` with `@db.default.now` | `BIGINT` | Epoch milliseconds |
| `boolean` | `BOOLEAN` | Native boolean (not `INTEGER`) |
| `decimal` | `NUMERIC(p,s)` | Defaults to `NUMERIC(10,2)` |
| Nested objects | Flattened `__` columns | `address.city` becomes `address__city` |
| `@db.json` | `JSONB` | Stored as a single `JSONB` column |
| Arrays | `JSONB` | |
| `@db.default.uuid` | `VARCHAR(255)` | `DEFAULT gen_random_uuid()` |
| `@db.search.vector` | `vector(N)` | pgvector extension; falls back to `JSONB` |
### Unsigned Integer Promotion
PostgreSQL has no unsigned integer types. Unsigned types are promoted to the next-larger signed type to preserve the full value range:
| Atscript Tag | PostgreSQL Type | Reason |
|---|---|---|
| `uint8` | `SMALLINT` | 0–255 fits in `SMALLINT` |
| `uint16` | `INTEGER` | 0–65535 exceeds the `SMALLINT` max (32767) |
| `uint32` | `BIGINT` | 0–4.3B exceeds the `INTEGER` max (~2.1B) |
| `uint64` | `BIGINT` | Best available; large values may lose precision |
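The promotion rule can be sketched as a small lookup table (an illustrative sketch only; the adapter's actual type-mapping logic is internal, and the constant name below is hypothetical):

```typescript
// Illustrative sketch of unsigned-to-signed promotion; not part of
// the adapter's public API.
type UnsignedTag = 'uint8' | 'uint16' | 'uint32' | 'uint64'

const UNSIGNED_PROMOTION: Record<UnsignedTag, string> = {
  uint8: 'SMALLINT', // 0..255 fits in a signed 16-bit column
  uint16: 'INTEGER', // 0..65535 exceeds the SMALLINT max (32767)
  uint32: 'BIGINT',  // 0..4294967295 exceeds the INTEGER max (~2.1B)
  uint64: 'BIGINT',  // best available; values above 2^63 - 1 cannot be stored
}

console.log(UNSIGNED_PROMOTION.uint16) // INTEGER
```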
## CITEXT — Case-Insensitive Text
The portable `@db.column.collate 'nocase'` annotation maps to PostgreSQL's `CITEXT` column type, which handles case-insensitive equality, sorting, `UNIQUE` constraints, `LIKE`, range queries, and aggregation at the storage layer.
```atscript
@db.table "users"
export interface User {
  @meta.id
  id: string

  @db.column.collate 'nocase'
  @db.index.unique
  email: string
}
```

The adapter auto-provisions the extension when `CITEXT` fields are detected:

```sql
CREATE EXTENSION IF NOT EXISTS citext
```

No query-side wrapping (e.g., `LOWER()`) is needed — case insensitivity is handled entirely by the column type.
> **Managed PostgreSQL services**
>
> Some managed PostgreSQL services (e.g., certain AWS RDS configurations) require manual extension provisioning by an admin. If auto-provisioning fails, create the extension manually (`CREATE EXTENSION IF NOT EXISTS citext;`), then use `@db.pg.type "CITEXT"` as a direct type override, which skips the auto-provisioning step.
## pgvector — Vector Search
The adapter supports vector similarity search via the pgvector extension. Annotate a field with `@db.search.vector` to define a vector embedding column:
```atscript
@db.table "documents"
export interface Document {
  @meta.id
  id: string

  title: string
  content: string

  @db.search.vector 1536 "cosine"
  embedding: number[]
}
```

### How It Works
- The field is stored as `vector(N)`, where N is the dimension count
- An HNSW index is created automatically for efficient approximate nearest-neighbor search
- The adapter auto-provisions the pgvector extension: `CREATE EXTENSION IF NOT EXISTS vector`
### Distance Metrics
| Similarity | PG Operator | Index Ops Class | Description |
|---|---|---|---|
| `cosine` (default) | `<=>` | `vector_cosine_ops` | Cosine distance |
| `euclidean` | `<->` | `vector_l2_ops` | L2 / Euclidean distance |
| `dotProduct` | `<#>` | `vector_ip_ops` | Negative inner product |
### Runtime Search
```ts
const results = await table.vectorSearch(queryEmbedding, {
  filter: { status: 'published' },
  controls: { $limit: 10, $threshold: 0.8 },
})
```

The `$threshold` parameter is a normalized similarity score (0–1) matching MongoDB Atlas semantics. The adapter converts it to the appropriate pgvector distance value internally:
- Cosine: `distance = 2 * (1 - score)` (a score range of [0, 1] maps to a distance range of [2, 0])
- Euclidean: the threshold is used directly as a maximum distance
- Dot product: the threshold is negated
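The conversion rules above can be sketched as a pure function (a simplified model of the described behavior; the function name is hypothetical and not part of the adapter's API):

```typescript
// Convert a normalized similarity threshold (0..1) into a pgvector
// distance bound per metric. Simplified, hypothetical sketch.
type Metric = 'cosine' | 'euclidean' | 'dotProduct'

function thresholdToDistance(score: number, metric: Metric): number {
  switch (metric) {
    case 'cosine':
      return 2 * (1 - score) // score 1 -> distance 0, score 0 -> distance 2
    case 'euclidean':
      return score // used directly as a maximum distance
    case 'dotProduct':
      return -score // negative inner product: more similar = more negative
  }
}

console.log(thresholdToDistance(0.8, 'cosine')) // ≈ 0.4
```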
### Graceful Fallback
When the pgvector extension is not available (not installed on the server), vector fields are stored as `JSONB` instead. The data is preserved, but indexed similarity search is not available — `vectorSearch()` will throw an error.
> **TIP**
>
> Install pgvector on your PostgreSQL server before using vector search features. On most systems: `sudo apt install postgresql-16-pgvector`, or build from source. See the pgvector installation guide.
## GIN Fulltext Search
The `@db.index.fulltext` annotation creates a GIN index on the `to_tsvector()` expression for efficient text search:
```atscript
@db.table "articles"
export interface Article {
  @meta.id
  id: string

  @db.index.fulltext "search_idx"
  title: string

  @db.index.fulltext "search_idx"
  body: string
}
```

This generates a GIN index on the concatenated tsvector:

```sql
CREATE INDEX "search_idx" ON "articles"
  USING gin(to_tsvector('english', coalesce("title", '') || ' ' || coalesce("body", '')))
```

Search queries use `plainto_tsquery()`:
```ts
const results = await table.search('database optimization', {
  filter: { published: true },
  controls: { $limit: 20 },
})
```

Multiple fields in the same fulltext index group are concatenated for combined search.
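The shape of the generated DDL can be sketched as a string-building helper (illustrative only; the adapter's actual SQL generation is internal, and this function name is hypothetical):

```typescript
// Build a GIN fulltext index statement over a group of columns,
// mirroring the concatenated-tsvector pattern shown above.
function buildFulltextIndex(indexName: string, table: string, columns: string[]): string {
  // Each column is made null-safe via coalesce, then joined with spaces
  const expr = columns.map((c) => `coalesce("${c}", '')`).join(" || ' ' || ")
  return `CREATE INDEX "${indexName}" ON "${table}" USING gin(to_tsvector('english', ${expr}))`
}

console.log(buildFulltextIndex('search_idx', 'articles', ['title', 'body']))
```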
## Transactional DDL
PostgreSQL DDL is fully transactional, unlike MySQL and SQLite. Schema sync changes are wrapped in BEGIN/COMMIT blocks — if any DDL statement fails, the entire sync rolls back atomically. This means:
- Adding columns, changing types, and renaming tables are all-or-nothing operations
- Table recreation (for changes that require rebuilding the table) uses a transaction for safe data migration: create temp table, copy data, drop old, rename new
- No risk of half-applied schema changes after a failure
> **TIP**
>
> This is a significant advantage over SQLite (no transactional DDL) and MySQL (implicit `COMMIT` before each DDL statement). If schema reliability matters, PostgreSQL is the safest choice.
## Batched Inserts
The `insertMany` method uses multi-row `INSERT INTO ... VALUES (...), (...), ... RETURNING *` for optimal performance. PostgreSQL has a protocol-level limit of approximately 65,535 parameters per query. The adapter automatically chunks large batches to stay within this limit (~60,000 parameters per chunk):
```ts
// 10,000 rows with 8 columns = 80,000 params
// Adapter splits into 2 chunks: 7500 rows + 2500 rows
const result = await table.insertMany(largeDataset)
```

The `RETURNING` clause returns inserted rows including any generated defaults (auto-increment IDs, UUIDs, timestamps), so `insertedIds` are always populated correctly.
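The chunking arithmetic can be sketched as follows (an illustrative sketch: the 60,000-parameter budget comes from the text above, and the function name is hypothetical):

```typescript
// Split rows into chunks so rows-per-chunk * columns stays within
// the parameter budget. Hypothetical sketch of the chunking rule.
function chunkRows<T>(rows: T[], columnCount: number, paramBudget = 60000): T[][] {
  const rowsPerChunk = Math.max(1, Math.floor(paramBudget / columnCount))
  const chunks: T[][] = []
  for (let i = 0; i < rows.length; i += rowsPerChunk) {
    chunks.push(rows.slice(i, i + rowsPerChunk))
  }
  return chunks
}

// 10,000 rows with 8 columns each: 60000 / 8 = 7500 rows per chunk
const chunks = chunkRows(Array.from({ length: 10000 }, (_, i) => ({ id: i })), 8)
console.log(chunks.map((c) => c.length)) // [7500, 2500]
```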
## Schema Support
Tables can be placed in PostgreSQL schemas (namespaces) using `@db.pg.schema`:
```atscript
@db.pg.schema "analytics"
@db.table "events"
export interface Event {
  @meta.id
  id: string

  type: string
  payload: object
}
```

The table is created as `"analytics"."events"`, and all queries are schema-qualified. The schema is created automatically during schema sync if it does not exist.
You can also use the portable `@db.schema` annotation, which has the same effect.
## Auto-Increment
The `@db.default.increment` annotation uses PostgreSQL 10+ identity column syntax:

```sql
"id" BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
```

This is the modern PostgreSQL approach (not the legacy `SERIAL` pseudo-type). After table recreation during schema sync, the adapter automatically resets the identity sequence to `MAX(id) + 1` so new inserts don't conflict with existing data.
> **INFO**
>
> `GENERATED BY DEFAULT` (not `GENERATED ALWAYS`) allows explicit ID values during inserts, which is needed for data migration and seeding.
## Connection Pooling
`PgDriver` uses a `pg.Pool` internally. Each CRUD operation acquires a connection from the pool, executes the query, and releases the connection automatically. For transactions, a dedicated connection is acquired via `getConnection()` and held for the duration of the transaction.
```ts
const driver = new PgDriver({
  connectionString: 'postgresql://localhost:5432/mydb',
  max: 20, // maximum pool size
  idleTimeoutMillis: 30000,
})
```

The pool is initialized lazily — the `pg` module is dynamically imported on first use, so the package works in both ESM and CJS environments without top-level import issues.

To shut down cleanly, call `driver.close()`, which ends the pool and all its connections.
## Error Handling
The adapter maps PostgreSQL constraint errors to structured `DbError` instances:
| PG SQLSTATE | Error Type | Description |
|---|---|---|
| `23505` | `CONFLICT` | Unique constraint violation — includes the field path and constraint detail |
| `23503` | `FK_VIOLATION` | Foreign key violation — extracts the offending column from PG's error detail |
These structured errors are the same across all adapters, so error handling code is portable:
```ts
import { DbError } from '@atscript/db'

try {
  await table.insertOne({ email: 'duplicate@example.com' })
} catch (err) {
  if (err instanceof DbError && err.code === 'CONFLICT') {
    console.log('Duplicate:', err.errors[0].path) // "email"
  }
}
```

## Custom Type Parsers
The `PgDriver` configures custom type parsers for consistent JavaScript value handling:
| PostgreSQL Type | Parsed As | Why |
|---|---|---|
| `TIMESTAMP` / `TIMESTAMPTZ` | `number` (epoch ms) | Consistent with the SQLite and MongoDB adapters |
| `NUMERIC` | `number` | The default pg driver returns a string |
| `INT8` / `BIGINT` | `number` | Returned as a string only if the value exceeds `Number.MAX_SAFE_INTEGER` |
| `BOOLEAN` | `boolean` | Native (no conversion needed) |
These parsers are applied per-pool (not globally), so they don't affect other `pg` usage in the same process. When using a pre-created `pg.Pool`, type parsing is the caller's responsibility.
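The BIGINT rule can be sketched as a standalone parser (a simplified model of the behavior described above; the function name is hypothetical, and the driver wires the real parser in through pg's type-parser hooks):

```typescript
// Parse a PostgreSQL INT8 wire value (always text): return a number
// when it is safely representable, otherwise keep the original string.
function parseInt8(value: string): number | string {
  const n = Number(value)
  return Number.isSafeInteger(n) ? n : value
}

console.log(parseInt8('42')) // 42 (a number)
console.log(parseInt8('9007199254740993')) // kept as a string: exceeds Number.MAX_SAFE_INTEGER
```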
## Limitations
- No unsigned integer types — unsigned types are promoted to the next-larger signed type (see Unsigned Integer Promotion)
- Extension privileges required — `CREATE EXTENSION` for citext and pgvector requires database-level `CREATE` privilege. Managed services may need manual provisioning by an admin
- UPDATE/DELETE with LIMIT — PostgreSQL does not support `LIMIT` on `UPDATE` or `DELETE` statements. The adapter uses a `ctid` subquery (`WHERE ctid = (SELECT ctid FROM t WHERE ... LIMIT 1)`) to achieve single-row updates and deletes
- pgvector installation — the pgvector extension must be installed separately on the PostgreSQL server; it is not bundled with PostgreSQL
- Parameter limit — PostgreSQL supports ~65,535 bind parameters per query. Large `insertMany` calls are auto-chunked, but custom raw queries should be aware of this limit
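The `ctid` workaround from the list above looks roughly like this (illustrative SQL; the table name and filter are placeholders, not output generated by the adapter):

```sql
-- Single-row DELETE without LIMIT support: target one physical
-- row location (ctid) selected by the filter subquery.
DELETE FROM "users"
WHERE ctid = (SELECT ctid FROM "users" WHERE "status" = 'inactive' LIMIT 1);
```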
## See Also
- Adapter Overview — feature comparison across all adapters
- Schema Sync — automatic schema migration
- CRUD Operations — create, read, update, delete
- Vector Search — vector similarity search guide
- Text Search — fulltext search guide