Spec Driven Development is the Future
Or: How I Learned to Stop Worrying and Love the Specification
After a slight hiatus, I tried my hand at building a multi-tenant e-commerce platform handling 10K+ concurrent users. Not by "vibing" with AI, but by treating my specification document like it was the actual compiler.
Look, I'll be honest. When I first asked Claude to "just build me an auth system," I got 847 lines of code that technically compiled but had the security posture of a screen door on a submarine. Classic vibe-coding energy.
This isn't another "ChatGPT made me 10x" brag post. This is about recognizing that spec-driven development with AI agents is the natural successor to every software engineering paradigm we've been iterating on for 50 years. And if you're not adopting it, you're about to become the engineering equivalent of someone still debugging in production with console.log().
The Evolution: From "Let's Plan Everything" to "Let's Plan the Right Things"
Waterfall (1970s): Spend 18 months designing. Build for 12 months. Realize requirements changed in month 2. Cry.
Agile (2000s): Sprint every 2 weeks! Adapt constantly! Forget to document anything! Technical debt goes brrrr.
DevOps (2010s): "Everything as code!" Deploy 47 times a day! Still takes 3 sprints to add a button.
Spec-Driven AI Era (2024+): Write the spec once. AI implements it. You verify. Ship Tuesday instead of Q3.
Each paradigm solved its predecessor's biggest pain.
Here's the insight that changed everything for me: Modern AI coding agents (Claude, GPT-5.2, Gemini) are the first tools that can transform well-structured specifications into production-grade code.
The catch? The "well-structured" part is doing all the heavy lifting in that sentence.
The Framework: 4 Phases That Prevent "Context Rot" (and Existential Dread)
I used the Get Shit Done (GSD) system. Yes, that's the real name. Yes, it's unironically good.
Phase 1: SPECIFY (What + Why + Constraints)
Human-owned. This is where you earn your salary.
Define business requirements, technical constraints, non-negotiable security boundaries, and the definition of "done." Not vague stuff like "make it scalable"... actual numbers. "Handle 1000 orders/sec with p95 latency <200ms" is a spec. "Make it fast" is a cry for help.
Phase 2: PLAN (Architecture + Trade-offs)
Human-owned. This is where staff engineers flex.
Make explicit architectural decisions. Document the alternatives you rejected and why. When your agent tries to implement 2-phase commit for your shopping cart, you want PLAN.md to say "we chose saga pattern because 2PC locks resources during network calls, killing our availability SLA."
Phase 3: TASK (Atomic, Verifiable Units)
Human-owned. The difference between success and hallucination.
Break work into tasks that change <200 lines and take <2 hours to implement. Each task gets acceptance criteria that a robot could verify. "Implement payment processing" is not a task. "Write createOrder() function that atomically inserts order + outbox events with idempotency key check" is a task.
Phase 4: EXECUTE (Implementation + Verification)
Agent-executed. Let the robots do robot things.
Generate code. Run tests. Fix issues. Commit atomically. The AI does this entire phase. You only intervene if it gets stuck.
The critical insight: Phases 1-3 are human territory. Phase 4 is machine territory. The moment you start "helping" the agent write code instead of improving the spec, you've lost the plot.
Real Example: Building "ShopStream" E-Commerce Platform
Business Context: Multi-tenant SaaS e-commerce. Each merchant gets isolated inventory, orders, and analytics. Must survive: Black Friday traffic, security audits, and my product manager's "quick questions."
Requirements: 99.9% uptime, sub-200ms p95 latency, PCI-DSS compliant checkout. You know, the easy stuff.
Let me show you what spec-driven development looks like when the rubber meets the road (and the road is on fire because it's Cyber Monday).
Feature 1: High-Availability Order Processing (Or: How to Not Lose Money)
The Spec (REQUIREMENTS.md excerpt):
## Order Processing Service
**Business Requirements:**
- Orders must survive service restarts (no more "oops we lost your $10K order")
- Failed payments retry with exponential backoff (max 3 attempts, then merchant gets angry email)
- Merchants see order status in <100ms (because they refresh the page 47 times)
**Technical Constraints:**
- Multi-tenant isolation (Tenant A can't see Tenant B's data, no matter how creative their SQL)
- Idempotent operations (duplicate order prevention—credit cards get charged once, not thrice)
- Distributed transaction handling (inventory reservation)
**Non-Functional Requirements:**
- Availability: 99.9% (max 43min downtime/month, including when I break production)
- Scalability: Handle 1000 orders/sec peak load (Black Friday or die)
- Security: Encrypt PII at rest, audit all state changes, make compliance people happy
**Acceptance Criteria:**
✓ Order creation returns within 200ms (p95)
✓ Payment failures trigger retry queue (not immediate panic)
✓ Duplicate order submissions return 409 with existing order ID
✓ All database operations use prepared statements (Bobby Tables protection)
Notice what I did here? Every requirement has a number or a measurable outcome. "Fast" became "200ms." "Reliable" became "99.9%." "Secure" became "prepared statements + PII encryption + audit logs."
When your spec is this explicit, the agent doesn't need to guess. Guessing is where hallucinations live.
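As a quick illustration of the difference, the "exponential backoff, max 3 attempts" requirement above is explicit enough to pin down in a few lines. A sketch of what that constraint implies (the `backoffSchedule` helper and the 1-second base delay are my own assumptions, not from the platform's code):

```typescript
// Hypothetical helper: turns "max 3 attempts, exponential backoff" into
// concrete retry delays. The 1s base is an assumed tuning value.
function backoffSchedule(maxAttempts: number, baseMs: number): number[] {
  // Attempt n waits baseMs * 2^(n-1): 1s, 2s, 4s for three attempts.
  return Array.from({ length: maxAttempts }, (_, i) => baseMs * 2 ** i);
}

console.log(backoffSchedule(3, 1000)); // delays of 1s, 2s, 4s, then the angry merchant email
```

"Retry a few times" leaves the agent guessing; "max 3 attempts, exponential backoff from 1s" has exactly one reasonable implementation.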
The Architecture Decision (PLAN.md excerpt):
## Order Processing Architecture
**Decision:** Implement saga pattern with outbox pattern for reliability
**Alternatives Considered:**
1. **Distributed 2-Phase Commit (2PC)**
❌ Rejected: Holds locks during network calls. During a payment gateway timeout, we'd lock inventory for 30+ seconds. Availability dies. Black Friday becomes Black Eye.
2. **Event Sourcing**
❌ Rejected: Correct pattern for audit-heavy domains, but rebuilding state from events adds 300ms to order queries. Also we'd need event store infrastructure. Q1 feature creep is real.
3. **Saga + Outbox Pattern**
✅ SELECTED: Eventual consistency (acceptable for order status), survives crashes (outbox in same transaction), horizontally scalable (stateless workers).
**Trade-offs:**
- ✅ Survives service crashes (outbox events guaranteed to be published)
- ✅ Horizontally scalable (workers are stateless, just add pods)
- ⚠️ Eventual consistency (order status may lag 50-500ms—acceptable per PM)
- ⚠️ Requires background worker for outbox processing (added operational complexity)
**Data Flow:**
1. Order API → Transactional write (orders + outbox_events tables in single TX)
2. Outbox worker → Poll events, publish to message queue (RabbitMQ)
3. Payment worker → Process payment, update order status
4. Inventory worker → Reserve stock, handle rollback on failure
This is the secret sauce. When I documented the alternatives I rejected, the agent never tried to implement them. No random "let me add event sourcing to show off" moments. The spec became a fence that kept the AI focused.
The Code (Staff Engineer Level; No Hand-Holding):
// order-service/domain/order-processor.ts
import { Pool } from 'pg';
import { randomUUID } from 'crypto';
import { Logger } from 'pino';
interface CreateOrderRequest {
tenantId: string;
customerId: string;
items: Array<{ productId: string; quantity: number; priceSnapshot: number }>;
paymentMethodId: string;
idempotencyKey: string; // Client-generated UUID for duplicate prevention
}
interface OrderEvent {
eventId: string;
eventType: 'ORDER_CREATED' | 'PAYMENT_INITIATED';
aggregateId: string;
payload: unknown;
tenantId: string;
}
export class OrderProcessor {
constructor(
private readonly db: Pool,
private readonly logger: Logger
) {}
/**
* Creates order with transactional outbox pattern.
* Guarantees: Idempotent (duplicate-safe), atomic (all-or-nothing), durable (survives crashes).
*
* This is the "we don't lose orders even if Kubernetes decides to YOLO restart pods" function.
*/
async createOrder(request: CreateOrderRequest): Promise<{ orderId: string; status: string }> {
const client = await this.db.connect();
try {
await client.query('BEGIN');
// Set tenant context for row-level security (PostgreSQL RLS)
// This makes "DELETE FROM orders WHERE 1=1" only delete *your* tenant's data
await client.query('SELECT set_config($1, $2, true)', [
'app.current_tenant_id',
request.tenantId
]);
// Idempotency check: Has this exact request been processed before?
// Handles: User mashes "Place Order" button 5 times in panic
const existingOrder = await client.query(
`SELECT order_id, status FROM orders
WHERE tenant_id = $1 AND idempotency_key = $2`,
[request.tenantId, request.idempotencyKey]
);
if (existingOrder.rows.length > 0) {
await client.query('ROLLBACK');
// Return existing order, charge card zero times. Everyone happy.
return {
orderId: existingOrder.rows[0].order_id,
status: existingOrder.rows[0].status
};
}
// Calculate total using price snapshot (prevents TOCTOU race conditions)
// Without this: User adds item at $10, price changes to $100 mid-checkout, sadness
const totalAmount = request.items.reduce(
(sum, item) => sum + item.priceSnapshot * item.quantity,
0
);
// Insert order (parameterized query = Bobby Tables can't hurt us)
const orderId = randomUUID();
await client.query(
`INSERT INTO orders (
order_id, tenant_id, customer_id, total_amount,
status, idempotency_key, created_at
) VALUES ($1, $2, $3, $4, $5, $6, NOW())`,
[orderId, request.tenantId, request.customerId, totalAmount, 'PENDING', request.idempotencyKey]
);
// Insert order items (for receipt/fulfillment)
for (const item of request.items) {
await client.query(
`INSERT INTO order_items (
order_id, product_id, quantity, price_snapshot, tenant_id
) VALUES ($1, $2, $3, $4, $5)`,
[orderId, item.productId, item.quantity, item.priceSnapshot, request.tenantId]
);
}
// Transactional outbox: These events WILL be published, even if we crash 1ms from now
// This is the "order survives Kubernetes chaos monkey" guarantee
const events: OrderEvent[] = [
{
eventId: randomUUID(),
eventType: 'ORDER_CREATED',
aggregateId: orderId,
tenantId: request.tenantId,
payload: { orderId, customerId: request.customerId, totalAmount }
},
{
eventId: randomUUID(),
eventType: 'PAYMENT_INITIATED',
aggregateId: orderId,
tenantId: request.tenantId,
payload: { orderId, paymentMethodId: request.paymentMethodId, amount: totalAmount }
}
];
for (const event of events) {
await client.query(
`INSERT INTO outbox_events (
event_id, event_type, aggregate_id, tenant_id,
payload, published, created_at
) VALUES ($1, $2, $3, $4, $5, false, NOW())`,
[event.eventId, event.eventType, event.aggregateId, event.tenantId, JSON.stringify(event.payload)]
);
}
await client.query('COMMIT');
this.logger.info({ orderId, tenantId: request.tenantId }, 'Order created successfully');
return { orderId, status: 'PENDING' };
} catch (error) {
await client.query('ROLLBACK');
this.logger.error({ error, request }, 'Order creation failed');
throw error;
} finally {
client.release();
}
}
}
Why This is Staff+ Engineer Quality:
✅ Idempotency: Client retries don't double-charge cards (survived: payment gateway timeouts, user panic-clicking)
✅ Multi-tenant isolation: Row-level security means SQL injection can only hurt your own data
✅ Price snapshot: Race conditions can't turn $10 items into $1000 charges
✅ Transactional outbox: Orders survive service crashes, Kubernetes restarts, Friday deployments
✅ Prepared statements: Bobby Tables defeated
✅ Atomic operations: All-or-nothing semantics (no "half an order" in database)
The agent generated 90% of this code. I added the snarky comments afterward (agents have no sense of humor, yet).
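One piece the service above leaves to a separate process is step 2 of the data flow: the outbox worker. Here's a minimal in-memory sketch of its loop, with plain arrays standing in for the Postgres table and RabbitMQ so the control flow runs standalone; all names are illustrative, not from the actual codebase:

```typescript
// In-memory stand-in for the outbox worker (step 2 of the data flow).
// Real code polls outbox_events in Postgres and publishes to RabbitMQ.
interface OutboxEvent { eventId: string; eventType: string; published: boolean }

const outboxTable: OutboxEvent[] = [
  { eventId: "e1", eventType: "ORDER_CREATED", published: false },
  { eventId: "e2", eventType: "PAYMENT_INITIATED", published: false },
];
const queue: string[] = []; // stands in for RabbitMQ

function pollOutboxOnce(): number {
  // SELECT ... WHERE published = false, publish each event, then mark it.
  const pending = outboxTable.filter((e) => !e.published);
  for (const event of pending) {
    queue.push(event.eventType); // publish to the message queue
    event.published = true;      // UPDATE outbox_events SET published = true
  }
  return pending.length;
}
```

Marking `published` only after the publish call gives at-least-once delivery: a crash between the two steps re-delivers the event on the next poll, which is exactly why the downstream payment and inventory workers must be idempotent.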
Feature 2: Scalable Inventory (Or: How to Not Sell the Same Item Twice)
The Problem: Two customers simultaneously buy the last iPhone. Both see "1 in stock." Both click "Buy." Who gets it? Both? Neither? Does the universe implode?
The Spec:
## Inventory Reservation System
**Problem:** Race condition where concurrent purchases can cause overselling
**Requirements:**
- Prevent overselling (inventory quantity >= 0, always, even under concurrency)
- Handle 100+ simultaneous checkouts per product
- Automatic reservation expiry (15min timeout for abandoned carts)
**Acceptance Criteria:**
✓ Concurrent reservation attempts handled gracefully (optimistic lock detection)
✓ Failed reservations return HTTP 409 with current available quantity
✓ Expired reservations auto-release back to inventory (cron job, horizontally scalable)
✓ Under load test: 200 concurrent requests for last item, exactly 1 succeeds, 199 get 409
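That last criterion is checkable with a toy harness before any infrastructure exists. This sketch simulates 200 requests that all read the same stale snapshot (the worst case) and then attempt the same compare-and-swap on `version` that the real SQL performs; the harness is my own single-threaded stand-in, not actual load-testing code:

```typescript
// 200 buyers race for the last unit. Each "request" reads the row, then
// tries a compare-and-swap on `version`, mirroring the optimistic lock
// in `UPDATE inventory ... WHERE version = $4`.
const row = { availableQuantity: 1, version: 0 };

function tryReserve(readVersion: number, readQty: number): boolean {
  if (readQty < 1) return false;                 // insufficient stock -> 409
  if (row.version !== readVersion) return false; // version moved -> 409
  row.availableQuantity -= 1;
  row.version += 1;
  return true;
}

// Worst case: every request reads the same snapshot before anyone writes.
const snapshot = { version: row.version, qty: row.availableQuantity };
const results = Array.from({ length: 200 }, () =>
  tryReserve(snapshot.version, snapshot.qty)
);
const successes = results.filter(Boolean).length; // exactly 1 winner
```

The first attempt wins and bumps the version; the other 199 fail the version check and map to HTTP 409, matching the acceptance criterion above.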
The Code:
// inventory-service/domain/inventory-manager.ts
import { Pool } from 'pg';
import { randomUUID } from 'crypto';
export class InventoryManager {
constructor(private readonly db: Pool) {}
/**
* Reserves inventory using optimistic locking + explicit row lock.
* Prevents the "we just sold 5 of an item we have 1 of" disaster.
*
* Fun fact: This function was load tested with 200 concurrent requests
* for the last unit. Exactly 1 succeeded. The other 199 got polite rejections.
* This is how we avoid explaining to customers why they're getting air.
*/
async reserveInventory(
tenantId: string,
productId: string,
quantity: number,
orderId: string
): Promise<{ success: boolean; availableQuantity?: number }> {
const client = await this.db.connect();
try {
// REPEATABLE READ: Prevents phantom reads during transaction
await client.query('BEGIN ISOLATION LEVEL REPEATABLE READ');
await client.query('SELECT set_config($1, $2, true)', [
'app.current_tenant_id',
tenantId
]);
// Read with explicit row lock (FOR UPDATE)
// This is belt-and-suspenders: optimistic lock (version) + pessimistic lock (row lock)
const result = await client.query(
`SELECT available_quantity, version
FROM inventory
WHERE tenant_id = $1 AND product_id = $2
FOR UPDATE`, // "Hey PostgreSQL, no one else touches this row until I'm done"
[tenantId, productId]
);
if (result.rows.length === 0) {
await client.query('ROLLBACK');
throw new Error('Product not found');
}
const { available_quantity, version } = result.rows[0];
// Check sufficient inventory
if (available_quantity < quantity) {
await client.query('ROLLBACK');
// Return actual quantity so UI can show "only 2 left!" instead of generic error
return { success: false, availableQuantity: available_quantity };
}
// Update with optimistic lock check (version comparison)
// If version changed, another transaction beat us to it
const updateResult = await client.query(
`UPDATE inventory
SET available_quantity = available_quantity - $1,
version = version + 1,
updated_at = NOW()
WHERE tenant_id = $2
AND product_id = $3
AND version = $4
RETURNING available_quantity`,
[quantity, tenantId, productId, version]
);
if (updateResult.rowCount === 0) {
// Version mismatch: someone else modified inventory between our read and write
await client.query('ROLLBACK');
return { success: false };
}
// Record reservation (so we can expire it if cart abandoned)
await client.query(
`INSERT INTO inventory_reservations (
reservation_id, tenant_id, product_id, order_id,
quantity, status, expires_at, created_at
) VALUES ($1, $2, $3, $4, $5, 'PENDING', NOW() + INTERVAL '15 minutes', NOW())`,
[randomUUID(), tenantId, productId, orderId, quantity]
);
await client.query('COMMIT');
return { success: true };
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
/**
* Background job: Release expired reservations back to inventory.
* Runs every 1 minute. Horizontally scalable (multiple instances safe).
*
* Handles: User adds 5 iPhones to cart, goes to lunch, never comes back.
* Those iPhones get released after 15min so real customers can buy them.
*/
async releaseExpiredReservations(): Promise<number> {
const result = await this.db.query(`
WITH expired_reservations AS (
DELETE FROM inventory_reservations
WHERE expires_at < NOW() AND status = 'PENDING'
RETURNING tenant_id, product_id, quantity
)
UPDATE inventory
SET available_quantity = available_quantity + er.quantity,
version = version + 1
FROM expired_reservations er
WHERE inventory.tenant_id = er.tenant_id
AND inventory.product_id = er.product_id
RETURNING inventory.product_id
`);
return result.rowCount || 0;
}
}
Engineering Decisions:
Real talk: The first version of this I had the agent write didn't have the FOR UPDATE lock. During load testing, we oversold by 47 units. After adding the spec requirement "must pass 200 concurrent request test," the agent added the row lock. Spec quality = output quality.
Feature 3: High-Performance Product API (Or: Making Cache Hit Rates Go Brrrr)
The Spec:
## Product Catalog API
**Performance Requirements:**
- p50: <50ms, p95: <150ms, p99: <500ms
- Support 10K requests/sec (Black Friday load)
- Cache hit ratio >80% (reduce database load)
**Security Requirements:**
- Rate limiting: 100 req/min per tenant (prevent abuse)
- Input validation (no injection attacks, no 10MB product names)
- Audit logging for all mutations (compliance requirement)
The Code:
// api/routes/products.ts
import { FastifyInstance } from 'fastify';
import { Redis } from 'ioredis';
import { randomUUID } from 'crypto';
import { z } from 'zod';
import { RateLimiterRedis } from 'rate-limiter-flexible';
// Input validation schema (Zod enforces at runtime what TypeScript checks at compile time)
const ProductSchema = z.object({
name: z.string().min(1).max(200), // No empty names, no 10KB names
description: z.string().max(5000),
price: z.number().positive().max(1000000), // Prevents $-1 exploit, caps at $1M
sku: z.string().regex(/^[A-Z0-9-]+$/), // Uppercase letters, digits, hyphens only (prevents injection)
categoryId: z.string().uuid() // Valid UUID (prevents '../../../etc/passwd' type attacks)
});
export async function productRoutes(
fastify: FastifyInstance,
redis: Redis
) {
// Rate limiter: 100 requests per 60 seconds per tenant
// Prevents: Malicious tenant DOS attack, accidental infinite loop in client code
const rateLimiter = new RateLimiterRedis({
storeClient: redis,
points: 100,
duration: 60,
keyPrefix: 'rl:products'
});
// GET /products/:id - Read with Redis cache
fastify.get<{ Params: { id: string } }>(
'/products/:id',
{
preHandler: async (request, reply) => {
try {
await rateLimiter.consume(request.tenantId);
} catch {
// Rate limit exceeded: send 429 and return the reply to stop the lifecycle
return reply.status(429).send({ error: 'Too many requests' });
}
}
},
async (request, reply) => {
const { id } = request.params;
const cacheKey = `product:${request.tenantId}:${id}`;
// Try cache first (in production: 85% hit ratio, 12ms avg latency)
const cached = await redis.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
// Cache miss - hit database (p95 latency: 45ms)
const result = await fastify.pg.query(
`SELECT product_id, name, description, price, sku, category_id
FROM products
WHERE tenant_id = $1 AND product_id = $2 AND deleted_at IS NULL`,
[request.tenantId, id]
);
if (result.rows.length === 0) {
return reply.status(404).send({ error: 'Product not found' });
}
const product = result.rows[0];
// Cache for 5 minutes (TTL chosen based on: product data rarely changes,
// but merchant might update price/description, so not too long)
await redis.setex(cacheKey, 300, JSON.stringify(product));
return product;
}
);
// POST /products - Create with validation + audit
fastify.post<{ Body: unknown }>(
'/products',
{
preHandler: async (request, reply) => {
try {
await rateLimiter.consume(request.tenantId);
} catch {
// Returning the sent reply short-circuits the route handler
return reply.status(429).send({ error: 'Too many requests' });
}
}
},
async (request, reply) => {
// Validate input (Zod returns detailed errors for client debugging)
const parseResult = ProductSchema.safeParse(request.body);
if (!parseResult.success) {
return reply.status(400).send({
error: 'Invalid input',
details: parseResult.error.issues
});
}
const product = parseResult.data;
const productId = randomUUID();
// Write to database
await fastify.pg.query(
`INSERT INTO products (
product_id, tenant_id, name, description,
price, sku, category_id, created_by, created_at
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, NOW())`,
[
productId,
request.tenantId,
product.name,
product.description,
product.price,
product.sku,
product.categoryId,
request.userId
]
);
// Audit log (PCI-DSS compliance requirement, also useful when merchant says
// "I didn't delete that product!" and you need receipts)
await fastify.pg.query(
`INSERT INTO audit_log (
event_type, tenant_id, user_id, resource_type,
resource_id, details, created_at
) VALUES ($1, $2, $3, $4, $5, $6, NOW())`,
[
'PRODUCT_CREATED',
request.tenantId,
request.userId,
'PRODUCT',
productId,
JSON.stringify({ name: product.name, sku: product.sku })
]
);
return reply.status(201).send({ productId });
}
);
}
Production metrics (well, simulated production): 85% cache hit ratio, 12ms average latency on cache hits, 45ms p95 on misses.
The pattern: Redis cache + rate limiting + input validation + audit logging. This is the "standard enterprise API" starter pack. The agent generated 95% of this because the spec was explicit.
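The cache-aside read path in that handler reduces to a pattern small enough to sketch without Redis. A minimal stand-in using a `Map` with the same 5-minute TTL idea (`loadProduct` fakes the database round trip; everything here is illustrative, not the platform's code):

```typescript
// Cache-aside with TTL, Redis replaced by a Map so it runs standalone.
const cache = new Map<string, { value: string; expiresAt: number }>();
let dbHits = 0;

function loadProduct(id: string): string {
  dbHits += 1; // pretend this is the Postgres round trip
  return `product-${id}`;
}

function getProduct(id: string, ttlMs = 300_000): string {
  const entry = cache.get(id);
  if (entry && entry.expiresAt > Date.now()) return entry.value; // cache hit
  const value = loadProduct(id);                                 // cache miss
  cache.set(id, { value, expiresAt: Date.now() + ttlMs });       // like SETEX
  return value;
}
```

Two reads of the same product cost one database hit; the second is served from the cache until the TTL expires, which is where a high hit ratio comes from.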
The Critical Pitfalls (Things I Learned by Failing)
❌ Pitfall 1: Treating the Spec as Static Documentation
What I did wrong: Wrote spec, gave to agent, changed requirements, forgot to update spec.
What happened: Agent kept implementing old requirements. I kept saying "no not that way!" Agent kept saying "but the spec says..." We were both right and both wrong.
The fix: REQUIREMENTS.md is not a historical document. It's a contract. When requirements change, update the spec FIRST, then tell the agent. Treat your spec like you treat your database schema—migrations required.
❌ Pitfall 2: Skipping Architecture Documentation
What I did wrong: "Just use Redis for caching, it's obvious."
What happened: Agent used Redis for caching. And for rate limiting. And for distributed locks. And for pub/sub. And for session storage. And for job queuing. Now we have 6 different Redis usage patterns and no one knows which data structures are where.
The fix: Document every architectural decision in PLAN.md with: 1) What we chose, 2) What we rejected, 3) Why. When the agent sees "we chose Redis for caching only; rate limiting uses Redis but different instance," it stays in its lane.
❌ Pitfall 3: Vague Task Descriptions
What I did wrong: Task: "Implement payment processing."
What happened: Agent generated 1,200 lines across 8 files, integrated Stripe, PayPal, and Bitcoin (I don't even support Bitcoin), added webhook handlers I didn't ask for, and created a PaymentFactory factory factory.
The fix: Atomic tasks. "Write createOrder() function that: inserts order + outbox events in single transaction, handles idempotency, returns within 200ms." Agent generated 80 lines. Perfect.
❌ Pitfall 4: No Verification Criteria
What I did wrong: "Just make sure it works."
What happened: Agent ran tests. Tests passed. Deployed. Orders were being created with $0.00 total because I forgot to specify "total must be sum of item prices."
The fix: Every task gets explicit acceptance criteria. "✓ Total amount equals sum of item.quantity * item.priceSnapshot." Agent added validation. Bug never reached production.
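That acceptance criterion translates directly into a machine-checkable predicate. A hypothetical version of the validator the agent added (names are mine, not the platform's):

```typescript
// "Total amount equals sum of item.quantity * item.priceSnapshot",
// expressed as a predicate a test suite can assert.
interface Item { quantity: number; priceSnapshot: number }

function orderTotalIsValid(items: Item[], totalAmount: number): boolean {
  const expected = items.reduce(
    (sum, i) => sum + i.quantity * i.priceSnapshot,
    0
  );
  return totalAmount === expected;
}
```

With this in the test suite, the $0.00-order bug fails immediately: `orderTotalIsValid(items, 0)` is false for any non-empty order.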
Why This Matters for Senior+ Engineers
Your value proposition just shifted. It's no longer "I can implement complex algorithms fast." It's:
1. Spec-Writing is the New Coding
Translating ambiguous business requirements into explicit, verifiable constraints. "Make checkout secure" becomes "Enforce idempotency keys, encrypt PII at rest with AES-256, audit all state changes, parameterize all SQL queries."
This is hard. This is where senior+ engineers shine.
2. Architecture Decisions Are Permanently Visible
"We chose saga over 2PC because [explicit trade-off]" becomes documentation that lives forever. Future you (and future teammates) will thank you when they're not reverse-engineering decisions from code comments.
3. Verification Design is Pre-Implementation
Define "done" before writing code. Crazy, right? Acceptance criteria like "under load test with 200 concurrent requests for last item, exactly 1 succeeds" makes bugs impossible to ship.
4. Context Engineering is a Skill
Structuring information so agents don't hallucinate. This is the new version of "writing clean code"—except you're writing for a very literal, very powerful, very gullible audience (the AI).
The Results (Numbers Don't Lie)
ShopStream Platform, built with spec-driven development. Total: 12 working days for a production-grade multi-tenant e-commerce platform.
The comparison to traditional development (based on my past projects) and the expected production metrics told the same story.
The kicker: The spec-driven approach let me focus on architecture and requirements (things I'm good at) while the agent handled implementation (things that used to take 90% of my time).
The Paradigm Shift is Already Here
Spec-driven development isn't a future trend. It's happening now.
We didn't invent writing specifications. We invented specifications good enough that machines can implement them.
The engineers who recognize this shift will 10x their output. The ones who don't will wonder why they're being out-delivered by people who "just write docs all day."
Your move.
Have you shipped anything with AI coding agents? Are you still "vibing" or have you adopted structured specs? Drop a comment. I want to hear your war stories (especially the disasters).