Development · April 5, 2026 · 5 min read

10 Database Design Mistakes That Will Haunt You in Production

Aunimeda

Bad database design is uniquely painful: it's hard to fix once you have data in production, and the consequences compound as your data grows. These are the mistakes I see most often and what to do instead.


Mistake 1: Storing Calculated Values

-- Bad
orders (
  id, subtotal, tax_amount, discount_amount, total  -- total is redundant
)

When total gets out of sync with subtotal + tax - discount, which do you trust? You now have inconsistent data and no good answer.

Fix: Calculate derived values at query time, not storage time.

SELECT subtotal + tax_amount - discount_amount AS total FROM orders;

Exception: aggregates that are expensive to compute at query time (denormalization for read performance). Do it deliberately, with explicit synchronization logic, not accidentally.
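If you do denormalize deliberately, the safest variant is letting the database own the derived value, so it can never drift. A minimal sketch, assuming PostgreSQL 12+ and NUMERIC columns:

```sql
-- The database recomputes total on every write - it cannot go out of sync
ALTER TABLE orders
  ADD COLUMN total NUMERIC(10, 2)
  GENERATED ALWAYS AS (subtotal + tax_amount - discount_amount) STORED;
```

Generated columns can only reference the same row, so cross-table aggregates (e.g. an order count on users) still need explicit synchronization logic.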


Mistake 2: Storing Comma-Separated Lists in a Column

-- Bad
products (
  id, name, tag_ids  -- "1,5,23,47"
)

You cannot index this. You cannot query "find all products with tag 5" without a full table scan and string parsing. You cannot enforce referential integrity.

Fix: Junction table:

product_tags (
  product_id INTEGER REFERENCES products(id),
  tag_id     INTEGER REFERENCES tags(id),
  PRIMARY KEY (product_id, tag_id)
)
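With the junction table, "find all products with tag 5" becomes an ordinary indexable join. A sketch - the secondary index is my addition, since the composite primary key covers lookups by product_id but not by tag_id alone:

```sql
-- Reverse lookups (tag -> products) need their own index
CREATE INDEX idx_product_tags_tag_id ON product_tags(tag_id);

SELECT p.*
FROM products p
JOIN product_tags pt ON pt.product_id = p.id
WHERE pt.tag_id = 5;
```

And REFERENCES gives you referential integrity for free: you can't tag a product with a tag that doesn't exist.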

Mistake 3: Not Using Transactions for Multi-Step Operations

// Bad - partial failure leaves inconsistent state
await updateInventory(productId, -quantity);
await createOrder(userId, items);
await chargePayment(userId, total); // If this fails, inventory is decremented but no order exists

Fix: Wrap related operations in a transaction:

await db.transaction(async (trx) => {
  await updateInventory(productId, -quantity, trx);
  const order = await createOrder(userId, items, trx);
  await chargePayment(userId, total, trx);
  return order;
});
// All succeed or all roll back
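If your driver has no transaction helper like the one above, the same guarantee is the classic BEGIN/COMMIT/ROLLBACK pattern. A minimal sketch, assuming a client that exposes an async query(sql) method (as node-postgres does):

```javascript
// Runs fn inside a transaction: commit on success, roll back on any error.
async function withTransaction(client, fn) {
  await client.query('BEGIN');
  try {
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // rethrow so callers still see the failure
  }
}
```

One caveat: chargePayment usually calls an external provider, and the database cannot roll back an external call. Put it last in the transaction, or protect it with an idempotency key.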

Mistake 4: Missing Indexes on Foreign Keys and Filter Columns

-- You have this:
orders (id, user_id, status, created_at)

-- But not this:
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_status ON orders(status);
CREATE INDEX idx_orders_created_at ON orders(created_at DESC);

SELECT * FROM orders WHERE user_id = 123 becomes a full table scan at 1M rows.

Rule of thumb: Every foreign key column should have an index. Every column you filter or sort by frequently should have an index. Check EXPLAIN ANALYZE output regularly.
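When one query both filters and sorts, a single composite index can serve it end to end. A sketch for the common "this user's orders, newest first" page:

```sql
CREATE INDEX idx_orders_user_created ON orders(user_id, created_at DESC);

-- Confirm it is used: look for Index Scan, not Seq Scan
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 123
ORDER BY created_at DESC
LIMIT 20;
```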


Mistake 5: Using VARCHAR(255) Everywhere

-- Common but lazy
name VARCHAR(255),
email VARCHAR(255),
country_code VARCHAR(255),  -- Country codes are 2 chars

VARCHAR(255) doesn't reserve 255 bytes - it's fine storage-wise. But it signals no thought was given to the domain. Use appropriate constraints:

name VARCHAR(200),          -- Reasonable limit
email VARCHAR(254),         -- Max email length per RFC
country_code CHAR(2),       -- Fixed length, always 2 chars
phone VARCHAR(20),          -- E.164 format max
status VARCHAR(20),         -- Or better: ENUM

Mistake 6: Soft Delete Without Thinking Through the Implications

-- Added deleted_at or is_deleted column
users (id, email, deleted_at)

Now every query needs WHERE deleted_at IS NULL. If you forget once, deleted users appear in results. Unique constraints on email now need to handle deleted rows.

If you need soft delete:

  • Use PostgreSQL Row-Level Security to hide soft-deleted rows automatically
  • Or use a separate deleted_users archive table and hard delete from main table
  • Or add WHERE deleted_at IS NULL to a view and query through the view
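The unique-constraint problem in particular has a clean PostgreSQL answer: a partial unique index that applies to live rows only. Combined with a view, a sketch:

```sql
-- email must be unique among non-deleted users only,
-- so a deleted account frees the address for re-registration
CREATE UNIQUE INDEX idx_users_email_active
  ON users(email) WHERE deleted_at IS NULL;

-- Query through the view and you cannot forget the filter
CREATE VIEW active_users AS
  SELECT * FROM users WHERE deleted_at IS NULL;
```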

Mistake 7: Storing Monetary Values as FLOAT

-- Never do this
price FLOAT,
amount DOUBLE PRECISION

Floating point arithmetic is imprecise:

0.1 + 0.2 = 0.30000000000000004

For money, use:

price NUMERIC(10, 2)  -- Exact decimal
-- Or: store in smallest unit (cents/tyiyn) as INTEGER
price_cents INTEGER   -- $9.99 stored as 999
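The integer-cents approach in application code (the 8% tax rate here is illustrative):

```javascript
// Float arithmetic drifts:
console.log(0.1 + 0.2 === 0.3); // false

// Integer cents stay exact:
const priceCents = 999;                          // $9.99
const taxCents = Math.round(priceCents * 0.08);  // 80
const totalCents = priceCents + taxCents;        // 1079
console.log((totalCents / 100).toFixed(2));      // "10.79" - divide only for display
```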

Mistake 8: N+1 Queries

// Bad: 1 query for orders + 1 query per order for user = N+1 queries
const orders = await getOrders(); // 100 orders
for (const order of orders) {
  order.user = await getUserById(order.userId); // 100 separate queries
}

At 100 orders: 101 queries. At 10,000 orders: 10,001 queries.

Fix: JOIN or batch fetch:

// JOIN
const orders = await db.query(`
  SELECT orders.*, users.name, users.email 
  FROM orders JOIN users ON orders.user_id = users.id
`);

// Or: batch fetch all users in one query
const userIds = orders.map(o => o.userId);
const users = await getUsersByIds(userIds); // 1 query
const usersMap = new Map(users.map(u => [u.id, u]));
orders.forEach(o => o.user = usersMap.get(o.userId));

Mistake 9: No Audit Trail for Important Data

-- You can't answer: who changed this, when, what was it before?
-- Bad: just UPDATE the row
UPDATE orders SET status = 'cancelled' WHERE id = 123;

Fix for critical data: Event sourcing or audit log:

order_events (
  id, order_id, event_type, 
  old_data JSONB, new_data JSONB,
  changed_by UUID, changed_at TIMESTAMPTZ
)

You should always be able to answer: what did this record look like at any point in time?
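One way to populate such a table without touching every code path is a trigger. A minimal PostgreSQL sketch - changed_by is omitted here because the database doesn't know the application user; pass it via a session variable or write the event from application code:

```sql
CREATE OR REPLACE FUNCTION log_order_change() RETURNS trigger AS $$
BEGIN
  INSERT INTO order_events (order_id, event_type, old_data, new_data, changed_at)
  VALUES (OLD.id, TG_OP, to_jsonb(OLD), to_jsonb(NEW), now());
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_audit
AFTER UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION log_order_change();
```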


Mistake 10: Ignoring Timezone Handling

-- Bad: local timestamps
created_at TIMESTAMP  -- What timezone?

-- Good: always UTC
created_at TIMESTAMPTZ  -- PostgreSQL stores in UTC, converts for display

Store everything in UTC. Convert to user's local timezone only for display. If you store local times, you'll have a very bad time when a user changes timezone, your servers move regions, or daylight saving time hits.

// Always use UTC when inserting
await db.query(
  'INSERT INTO events (created_at) VALUES ($1)',
  [new Date().toISOString()] // ISO 8601 in UTC ('Z' suffix)
);
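Conversion to the user's timezone then happens only at render time, using the built-in Intl support (the New York timezone below is illustrative):

```javascript
const createdAt = new Date('2026-04-05T12:00:00Z'); // stored in UTC

// Convert only when rendering for a specific user
const display = createdAt.toLocaleString('en-US', {
  timeZone: 'America/New_York',
});
// 12:00 UTC renders as 8:00 AM for a New York user (EDT, UTC-4)
```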

Bonus: Migration Best Practices

  • Never drop a column in the same migration that stops using it. Deploy code that ignores the column first, then drop it in a later migration.
  • Make migrations reversible - write the down migration, even if you never run it.
  • Large table migrations need to be done online (without locking) - use pg_repack or ALTER TABLE ... CONCURRENTLY patterns.
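A sketch of the add-a-required-column dance under those rules, in PostgreSQL (the currency column is illustrative):

```sql
-- Step 1: add the column nullable - no table rewrite
ALTER TABLE orders ADD COLUMN currency CHAR(3);

-- Step 2: backfill (batch this on large tables, not one giant UPDATE)
UPDATE orders SET currency = 'USD' WHERE currency IS NULL;

-- Step 3: only then add the constraint
ALTER TABLE orders ALTER COLUMN currency SET NOT NULL;

-- Indexes on live tables: CONCURRENTLY avoids blocking writes
-- (note: it cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY idx_orders_currency ON orders(currency);
```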

