Postgres BML: Binary Model Loading
By 2025, the "vector database" hype has settled, and PostgreSQL has emerged as the winner. With the introduction of BML (Binary Model Loading), we can now do more than just store vectors; we can run the models that generate them directly inside the database process.
Why BML?
Previously, to get a vector embedding, you had to:
- Fetch text from Postgres.
- Send it to an external microservice (Python/FastAPI).
- Load the model in that service.
- Generate the embedding.
- Send it back to Postgres.
With BML, the model is a first-class database object.
Loading a Model
Using the new pg_bml extension, loading a quantized GGUF or ONNX model is a single command:
SELECT bml.load_model('text-embed-v3', '/models/embed-v3-q4.bml');
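Once the model is registered, a quick sanity check confirms it is serving inference (a sketch; this assumes, as the rest of this post does, that bml.embed returns a pgvector-compatible vector, so pgvector's vector_dims() can inspect it):

```sql
-- Verify the loaded model produces embeddings of the expected dimensionality.
-- Assumes bml.embed returns a pgvector 'vector' value.
SELECT vector_dims(bml.embed('text-embed-v3', 'hello, world'));
```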
In-Database Inference
Once loaded, you can generate embeddings as part of your INSERT or UPDATE pipeline using a trigger or a simple function call.
INSERT INTO documents (content, embedding)
VALUES (
    'This is a deeply technical blog post about Postgres.',
    bml.embed('text-embed-v3', 'This is a deeply technical blog post about Postgres.')
);
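The trigger approach mentioned above avoids repeating the content literal in every INSERT and keeps embeddings in sync on updates. A sketch, assuming the documents table from the example (the trigger and function names are hypothetical; the trigger mechanics are standard PL/pgSQL):

```sql
-- Embed content on write so callers never pass the embedding themselves.
CREATE OR REPLACE FUNCTION documents_embed_trigger()
RETURNS trigger AS $$
BEGIN
    NEW.embedding := bml.embed('text-embed-v3', NEW.content);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER embed_on_write
    BEFORE INSERT OR UPDATE OF content ON documents
    FOR EACH ROW
    EXECUTE FUNCTION documents_embed_trigger();

-- Now a plain INSERT is enough:
INSERT INTO documents (content)
VALUES ('This is a deeply technical blog post about Postgres.');
```

Because the trigger fires BEFORE INSERT OR UPDATE OF content, the embedding can never drift out of sync with the text it describes.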
The Performance Advantage
By eliminating the network round-trip and the serialization overhead between the application layer and SQL, BML-powered inference can be up to 5x faster than calling an external service. That headroom makes real-time semantic search feasible even on high-throughput write workloads.
Hybrid Search in 2025
BML also enables "Smart Reranking" within the same query. You can use a cheap first pass (BM25-style keyword retrieval, or the approximate vector scan shown below) to fetch candidates, then use a small BML cross-encoder model to rerank the top 100 results by semantic relevance, all within the Postgres execution plan.
WITH candidates AS (
    SELECT id, content
    FROM documents
    ORDER BY embedding <=> bml.embed('text-embed-v3', 'query')
    LIMIT 100
)
SELECT id, content
FROM candidates
ORDER BY bml.rerank('cross-encoder-mini', content, 'query') DESC
LIMIT 10;
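For the first-stage scan to stay fast at scale, the embedding column needs an approximate-nearest-neighbor index. Assuming the column is a pgvector vector and <=> is the cosine-distance operator (as the queries above imply), an HNSW index covers it:

```sql
-- HNSW index for approximate nearest-neighbor search (pgvector).
-- vector_cosine_ops matches the <=> cosine-distance operator used above.
CREATE INDEX documents_embedding_idx
    ON documents
    USING hnsw (embedding vector_cosine_ops);
```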
Postgres has evolved from a storage engine to a complete intelligence platform. In 2025, if your data is in Postgres, your models should be too.