Storing Rich Domain Objects in SQL Without an ORM
ORMs promise freedom from SQL. For simple CRUD‑style applications, they often deliver. The trouble begins when the domain model stops looking like rows and columns.
As soon as your domain includes deeply nested structures, polymorphic behavior, or fields that evolve independently of the database schema, the ORM becomes less of an abstraction and more of an obstacle. Generated SQL becomes difficult to reason about. Lazy loading produces N+1 queries in places no one expects. Schema migrations turn into tense negotiations between Java classes and database tables.
At that point, many teams reach for more annotations, more custom mappings, or more aggressive caching. Few stop to ask whether the ORM is still the right tool.
There is an alternative pattern that trades ORM convenience for explicit control. Store the full domain object as a compressed binary blob, and maintain a small set of flat, indexed columns that represent the queryable projection of that object.
The serialization layer
At the core of this pattern is a clear separation between representation and querying.
The domain object is serialized to JSON using a standard library. That JSON is compressed and stored as a binary blob in the database. The compression step is critical. Complex domain objects can easily reach tens of kilobytes in raw JSON form. Compression reduces that to a few kilobytes in most cases. Across millions of rows, this difference matters for storage, I/O, and cache behavior.
Domain object
→ JSON serialization
→ compression
→ binary blob
→ stored in database
On reads, the process is fully reversible. The blob is read, decompressed, and deserialized back into a domain object. The application works with a fully hydrated object that preserves all nested structure, free of the impedance mismatch that relational mapping usually imposes.
Example in Java:
byte[] json = objectMapper.writeValueAsBytes(domainObject);   // Jackson: domain object -> JSON bytes
byte[] compressed = compressionService.compress(json);        // e.g. GZIP or Deflate before storage
Example in Python:
json_bytes = json.dumps(domain_object).encode("utf-8")  # domain object's state -> JSON bytes
compressed = zlib.compress(json_bytes)                   # DEFLATE-compress before storing
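The read path in Java is the mirror image. A minimal sketch, assuming the same objectMapper and compressionService as above, a hypothetical decompress() counterpart to compress(), and a hypothetical PayrollCalculation domain class; compressed holds the bytes read back from the database:
byte[] json = compressionService.decompress(compressed);   // inverse of the compress() call above
PayrollCalculation domainObject = objectMapper.readValue(json, PayrollCalculation.class);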
The key idea is that the database stores state, not behavior. The shape of the domain model belongs to the application.
The queryable projection
Storing everything as a blob solves the richness problem, but it introduces a new one. SQL cannot efficiently filter, sort, or index into arbitrary binary data.
The solution is to maintain a parallel projection. Each row contains a small set of flat columns that represent the fields you actually query on. These columns are written in the same transaction as the blob, so consistency is guaranteed.
INSERT INTO calculations (
id,
blob_data,
status,
period_start,
period_end,
gross_pay
)
VALUES (?, ?, ?, ?, ?, ?);
The blob holds the complete domain object. The flat columns expose just enough structure for efficient queries.
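Because the projection and the blob are written together, the write path is one statement inside one transaction. A sketch in plain JDBC, assuming an open connection, the serialization helpers from earlier, and a hypothetical calculation object exposing getters for the projected fields:
connection.setAutoCommit(false);
try (PreparedStatement stmt = connection.prepareStatement(
        "INSERT INTO calculations (id, blob_data, status, period_start, period_end, gross_pay) "
      + "VALUES (?, ?, ?, ?, ?, ?)")) {
    byte[] blob = compressionService.compress(objectMapper.writeValueAsBytes(calculation));
    stmt.setObject(1, calculation.getId());
    stmt.setBytes(2, blob);                                // the full domain object
    stmt.setString(3, calculation.getStatus());            // projection columns from here down
    stmt.setObject(4, calculation.getPeriodStart());
    stmt.setObject(5, calculation.getPeriodEnd());
    stmt.setBigDecimal(6, calculation.getGrossPay());
    stmt.executeUpdate();
    connection.commit();
} catch (Exception e) {
    connection.rollback();
    throw e;
}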
Queries operate exclusively on the projection:
SELECT id
FROM calculations
WHERE status = 'FINAL'
AND gross_pay > 5000
ORDER BY period_start;
Application logic always reads from the deserialized blob. The flat columns exist only to support database queries, not business logic. Both views of the data live in the same row, eliminating synchronization risk.
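In application code the two halves meet in a query-then-hydrate step: the projection narrows down the rows, the blob supplies the objects. A sketch under the same assumptions as the earlier snippets:
List<PayrollCalculation> finals = new ArrayList<>();
try (PreparedStatement stmt = connection.prepareStatement(
        "SELECT blob_data FROM calculations "
      + "WHERE status = ? AND gross_pay > ? ORDER BY period_start")) {
    stmt.setString(1, "FINAL");
    stmt.setBigDecimal(2, new BigDecimal("5000"));
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            byte[] json = compressionService.decompress(rs.getBytes("blob_data"));
            finals.add(objectMapper.readValue(json, PayrollCalculation.class));
        }
    }
}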
Schema evolution without migrations
The most powerful property of this pattern is decoupled evolution.
When you add a new field to the domain object, you do not need a database migration for the blob. New writes simply include the new field. Old rows remain readable.
Backward compatibility is handled in the deserialization layer through versioning and defaults.
Version 1 blob: { gross_pay, net_pay, tax }
Version 2 blob: { gross_pay, net_pay, tax, legal_entity }
switch (blobVersion) {
    case 1 -> domain.setLegalEntity(null);   // v1 blobs predate legal_entity; apply a default
    case 2 -> deserializeNormally();         // v2 blobs already carry the field
    default -> throw new IllegalStateException("Unknown blob version " + blobVersion);
}
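The version itself has to live somewhere. One common choice, assumed here rather than dictated by the pattern, is a small integer column written alongside the projection (a field inside the JSON works equally well); the deserializer reads it before applying defaults:
int blobVersion = rs.getInt("blob_version");   // hypothetical version column
byte[] json = compressionService.decompress(rs.getBytes("blob_data"));
PayrollCalculation domain = objectMapper.readValue(json, PayrollCalculation.class);
// ...then apply the version switch above to default fields that older blobs lack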
This approach eliminates entire classes of migrations. No downtime. No coordination between deploys and schema changes. No need to backfill historical rows unless you actually want to.
For systems with frequent domain evolution, this alone can justify the pattern.
Operational implications
This pattern shifts responsibility from the database to the application.
The database stops enforcing fine‑grained structure. The application becomes responsible for validation, version compatibility, and invariants. Backup, replication, and recovery continue to work normally because the blob is just data.
Observability does require thought. Inspecting production data now requires tooling that understands the blob format. Most teams solve this by providing admin endpoints or offline inspection tools that deserialize blobs for debugging.
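Such a tool can be as small as an admin endpoint or one-off command that fetches a row, decompresses it, and pretty-prints the JSON. A sketch, again reusing the hypothetical helpers from the earlier snippets:
try (PreparedStatement stmt = connection.prepareStatement(
        "SELECT blob_data FROM calculations WHERE id = ?")) {
    stmt.setObject(1, calculationId);
    try (ResultSet rs = stmt.executeQuery()) {
        if (rs.next()) {
            byte[] json = compressionService.decompress(rs.getBytes("blob_data"));
            JsonNode tree = objectMapper.readTree(json);
            System.out.println(objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(tree));
        }
    }
}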
When this pattern does not apply
This approach is not a universal replacement for relational modeling.
If your system relies heavily on ad‑hoc queries, analytics, exploratory reporting, or cross‑cutting filters across many fields, the blob becomes a liability. You will either keep adding projection columns or reach the limits of what the model can express.
This pattern works best when query patterns are well defined, stable, and narrow, while the domain model is rich and evolving. In that space, storing rich domain objects as compressed blobs is not a hack. It is a deliberate architectural choice.
ORMs are excellent tools, but they are not neutral: they pull your domain model toward rows and columns. Knowing when to stop using one is a form of senior engineering judgment.