Tradeoffs and When NOT to Use Event Sourcing
Memory hook: "Event sourcing is chemotherapy -- incredibly powerful for the right diagnosis, but you wouldn't prescribe it for a headache."
Table of Contents
- When Event Sourcing Is Worth the Complexity
- When CRUD Is Better
- Complexity Costs
- Operational Costs
- Team Readiness Considerations
- Hybrid Approaches
- Decision Framework
- Scenario Comparison Table
- Common Mistakes When Adopting Event Sourcing
- Memory Hooks
- Interview Questions
When Event Sourcing Is Worth the Complexity
Event sourcing earns its complexity when the business domain benefits from it, not when the technology is interesting. Look for these signals:
Strong Indicators (Event Source This)
| Signal | Why Event Sourcing Helps | Example |
|---|---|---|
| Audit trail is a business requirement | Events are the audit trail; no separate audit logging needed | Financial transactions, healthcare records, legal contracts |
| Temporal queries are needed | "What was the state at time T?" is a replay away | Insurance policies, regulatory compliance, billing disputes |
| Complex state machines | Events capture every transition; easier to debug state bugs | Order fulfillment (our domain), loan processing, claims handling |
| Multiple read models from same data | Events are projected into different shapes for different consumers | E-commerce (order list, dashboard, analytics, recommendations) |
| Business insight from event history | The sequence of events tells a story that final state does not | Customer journey analysis, fraud detection, process optimization |
| Undo / compensation needed | Compensating events are natural; reverting a snapshot is painful | Booking systems, inventory reservations, financial adjustments |
Weak Indicators (Think Twice)
| Signal | Why It Is Tempting but Insufficient |
|---|---|
| "We want to try event sourcing" | Technology curiosity is not a business reason |
| "We need an audit log" | A simple audit log table with triggers might be 10x simpler |
| "We want to use Kafka" | Kafka is an event bus, not an event store (see kafka-integration.md) |
| "We want CQRS" | CQRS does not require event sourcing; you can have separate read/write models with CRUD |
When CRUD Is Better
CRUD Wins for These Scenarios
| Scenario | Why CRUD Is Better |
|---|---|
| Simple CRUD operations | A blog, a TODO app, a contact list -- the "event history" of a blog post has zero business value |
| Reporting-first systems | If the primary use case is ad-hoc SQL queries, a normalized schema with JOINs is more natural |
| Small teams, short timelines | Event sourcing requires significant upfront investment in infrastructure and patterns |
| Low domain complexity | If the business logic is "validate input, save to database, return response," you do not need DDD or event sourcing |
| Data is mutable by nature | User profile settings, UI preferences, configuration -- overwriting the old value is the correct semantic |
| GDPR/deletion is critical | Deleting an entity in CRUD is DELETE FROM. In event sourcing, it requires crypto-shredding or event rewriting |
The Honest Question
"If I stored only the current state (no event history), would my business lose something valuable?"
- Yes: Consider event sourcing
- No: CRUD is probably right
Complexity Costs
1. Projection Maintenance
Every new query requirement may require a new projection. Each projection is:
- A new table schema to design
- A new event handler to write and maintain
- A new checkpoint to track
- A new thing that can fall behind or fail
CRUD: Need a new query? Write a SQL query.
ES: Need a new query? Design a projection, write handlers for every
relevant event type, handle idempotency, handle replay, deploy.
Real cost: In this project, we have 3 projections (order_read_model, order_timeline, order_dashboard). Each handles 9 event types. That is 27 event handler methods to maintain. In CRUD, these would be 3 simple queries.
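To make the per-projection cost concrete, here is a minimal sketch of what one projection handler involves. The event types (`OrderCreated`, `OrderSubmitted`) and the in-memory row store are illustrative stand-ins, not this project's real types:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical event types -- names are illustrative, not from this project.
public record OrderCreated(Guid OrderId, Guid CustomerId, DateTime OccurredAt);
public record OrderSubmitted(Guid OrderId, decimal TotalAmount, DateTime OccurredAt);

// One projection: turns events into rows of one read model.
// Every new query shape needs another class like this.
public sealed class OrderListProjection
{
    // orderId -> display row (an in-memory stand-in for a table)
    public Dictionary<Guid, string> Rows { get; } = new();
    public long Checkpoint { get; private set; } = -1;

    public void Handle(object @event, long position)
    {
        if (position <= Checkpoint) return; // idempotency: skip already-seen events

        switch (@event)
        {
            case OrderCreated e:
                Rows[e.OrderId] = $"Created at {e.OccurredAt:u}";
                break;
            case OrderSubmitted e:
                Rows[e.OrderId] = $"Submitted, total {e.TotalAmount}";
                break;
            // Unknown events are ignored -- a projection only cares about
            // the subset of event types relevant to its read model.
        }

        Checkpoint = position;
    }
}
```

Multiply this by every projection and every event type it handles, and the maintenance surface the paragraph above describes becomes visible.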
2. Eventual Consistency
Commands succeed immediately; read models update asynchronously. This means:
- Users may see stale data after a write
- UI must handle the "command accepted, query not yet updated" window
- Testing becomes harder (must wait for projections to catch up)
- Debugging "wrong data" reports requires checking whether the projection is behind
Real cost: Every UI developer must understand that POST followed by GET may return old data. This is a training and design burden.
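One common mitigation for the "POST then stale GET" window is version-based read-your-writes: the command returns the aggregate version it produced, and the client polls until the projection's checkpoint catches up. A sketch under assumed names (there is no such helper in this project):

```csharp
using System;
using System.Threading.Tasks;

public static class ReadModelWaiter
{
    // Polls the projected version until it reaches the version the
    // command just wrote, or gives up at the deadline.
    public static async Task<bool> WaitForVersionAsync(
        Func<Task<long>> getProjectedVersion,
        long expectedVersion,
        TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (await getProjectedVersion() >= expectedVersion) return true;
            await Task.Delay(25); // back off briefly before re-checking
        }
        return false; // projection still behind; caller decides how to degrade
    }
}
```

Note that even with this helper, the UI must still decide what to show when the wait times out -- the consistency window is managed, not eliminated.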
3. Event Versioning
Events are immutable. When the business changes requirements:
Before: OrderCreatedEvent(orderId, customerId, shippingAddress)
After: OrderCreatedEvent(orderId, customerId, shippingAddress, currency, priority)
You must either:
- Write an upcaster that transforms old events to the new format
- Support multiple event versions in your aggregate
- Accept that old events will forever have missing fields (with defaults)
Real cost: Every schema change requires careful migration planning. In CRUD, you ALTER TABLE ADD COLUMN and backfill.
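The upcaster option from the list above can be sketched as follows. The record names and the default values are illustrative assumptions; in a real migration the defaults must be agreed with the business:

```csharp
using System;

// V1 as originally persisted; V2 with the fields added later.
public record OrderCreatedV1(Guid OrderId, Guid CustomerId, string ShippingAddress);
public record OrderCreatedV2(Guid OrderId, Guid CustomerId, string ShippingAddress,
                             string Currency, int Priority);

public static class OrderCreatedUpcaster
{
    // Old events are upcast at read time; the stored events never change.
    public static OrderCreatedV2 Upcast(OrderCreatedV1 old) =>
        new(old.OrderId, old.CustomerId, old.ShippingAddress,
            Currency: "USD", // assumed default for events written before the field existed
            Priority: 0);    // assumed "normal" priority default
}
```

The aggregate then only ever sees V2, and the versioning complexity is contained at the deserialization boundary.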
4. Aggregate Design Constraints
Event-sourced aggregates must:
- Have no external dependencies (no database calls in domain logic)
- Be reconstructible from events alone
- Keep events small and focused
- Handle all event types in the When() method
Real cost: Developers accustomed to "load related data from DB, make decision, save" must fundamentally rethink how they write domain logic.
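The constraints above look like this in practice. A stripped-down sketch (event and class names are illustrative, not this project's real `Order` aggregate):

```csharp
using System;
using System.Collections.Generic;

// Minimal illustrative events.
public record Created(Guid OrderId);
public record Submitted { }

public sealed class OrderSketch
{
    public Guid Id { get; private set; }
    public string Status { get; private set; } = "None";
    public int Version { get; private set; } = -1;

    // State is rebuilt from events alone -- no repository, no DbContext,
    // no external calls anywhere in this class.
    public static OrderSketch Rehydrate(IEnumerable<object> history)
    {
        var order = new OrderSketch();
        foreach (var e in history) order.When(e);
        return order;
    }

    private void When(object @event)
    {
        switch (@event)
        {
            case Created e: Id = e.OrderId; Status = "Created"; break;
            case Submitted: Status = "Submitted"; break;
            default: throw new InvalidOperationException(
                $"Unhandled event type {@event.GetType().Name}");
        }
        Version++;
    }
}
```

Notice that any decision requiring related data (say, a customer's credit limit) must be passed into the command handler beforehand -- the aggregate itself cannot go fetch it.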
Operational Costs
1. Replay Time
As the event store grows, rebuilding projections takes longer:
| Events | Estimated Replay Time | Impact |
|---|---|---|
| 10,000 | ~5 seconds | No issue |
| 100,000 | ~30 seconds | Brief downtime for full rebuild |
| 1,000,000 | ~5 minutes | Noticeable; plan maintenance windows |
| 10,000,000 | ~1 hour | Significant; consider parallel projection |
| 100,000,000 | ~10 hours | Requires snapshot-based rebuild or partitioned replay |
2. Storage Growth
Events are append-only. Storage grows monotonically.
Estimate: 500 bytes per event (average, with JSONB overhead)
100 orders/day * 10 events/order * 500 bytes = 500 KB/day = 180 MB/year
10,000 orders/day * 10 events/order * 500 bytes = 50 MB/day = 18 GB/year
1,000,000 orders/day * 10 events/order * 500 bytes = 5 GB/day = 1.8 TB/year
For most systems, this is manageable. For very high-volume systems, consider archiving old events to cold storage.
3. Debugging
In CRUD, you look at the database and see the current state. In event sourcing:
CRUD debugging:
SELECT * FROM orders WHERE id = 'abc-123';
-- Done. You see the state.
Event sourcing debugging:
SELECT * FROM event_store WHERE stream_id = 'abc-123' ORDER BY version;
-- 47 rows. You must mentally replay them to understand the current state.
-- Or load the aggregate in code and inspect it.
The trade-off: event sourcing gives you more information (the full history), but extracting meaning from that history requires tooling and expertise.
4. Monitoring Requirements
Event-sourced systems need monitoring that CRUD systems do not:
| Metric | Why It Matters |
|---|---|
| Projection lag | How far behind is the read model? |
| Event store size | Storage growth rate |
| Events per second | Write throughput |
| Concurrency conflicts | Are we getting too many retries? |
| Snapshot hit rate | Are snapshots actually helping? |
| Consumer group lag | Kafka consumers falling behind? |
Team Readiness Considerations
Skills Required
| Skill | Level Needed | How to Build |
|---|---|---|
| DDD fundamentals | Strong | Books (Vernon, Evans), workshops |
| Event-driven architecture | Strong | Event modeling sessions with domain experts |
| Eventual consistency | Comfortable | Build a small prototype; experience the pain firsthand |
| PostgreSQL/SQL | Intermediate | Standard database skills |
| Message broker (Kafka) | Intermediate | Kafka tutorials; understand consumer groups, partitions |
| Testing event-sourced systems | Intermediate | Practice: given-when-then for aggregates |
Team Size Guidelines
| Team Size | Recommendation |
|---|---|
| 1-2 developers | CRUD unless the domain truly demands ES. You are the sole maintainer of projections, event versioning, and infrastructure. |
| 3-5 developers | Event sourcing is viable if at least 1-2 have prior experience. |
| 5+ developers | Event sourcing can be a strong choice, especially with dedicated infrastructure support. |
Red Flags for Adoption
- Team has never built an event-driven system before and the project has a tight deadline
- Management is pushing ES as a "modern architecture" without a domain-driven reason
- The domain expert cannot articulate what "events" happen in the business process
- There is no budget for the additional monitoring and operational tooling
Hybrid Approaches
The Best of Both Worlds
Not every aggregate needs event sourcing. Within the same system:
Order Fulfillment (complex state machine, audit trail needed) --> Event-sourced
Product Catalog (simple CRUD, rarely changes) --> Traditional CRUD
User Preferences (mutable settings, no history value) --> Traditional CRUD
Payment Processing (strong audit requirements) --> Event-sourced
Notifications (fire-and-forget, ephemeral) --> Neither (just log and send)
Implementation Strategy
// Event-sourced aggregate: implements AggregateRoot<T>, uses IEventStore
public sealed class Order : AggregateRoot<OrderId> { /* ... */ }
// CRUD entity: plain EF Core entity, uses DbContext
public class Product
{
public Guid Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
// Simple properties, no events, no version tracking
}
Both can coexist in the same codebase. The key is that each bounded context chooses its own persistence strategy independently.
Integration Between ES and CRUD Contexts
The CRUD context consumes integration events from Kafka but stores its data in traditional normalized tables with UPDATE semantics. No event sourcing complexity in that context.
Decision Framework
Decision Matrix (Quick Reference)
Score each factor 0-3. If total >= 8, event sourcing is likely worthwhile.
| Factor | 0 (Low) | 1 | 2 | 3 (High) |
|---|---|---|---|---|
| Audit trail value | No business need | Nice-to-have | Regulatory requirement | Core business feature |
| Domain complexity | Simple CRUD | Moderate logic | Complex workflows | Complex state machines with compensation |
| Temporal query need | Never | Rarely | Occasionally | Frequently |
| Read model variety | 1 view | 2-3 views | 4+ views | Different views per consumer/service |
| Team experience | No ES experience | Read about it | Built a prototype | Production ES experience |
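The matrix heuristic is trivially mechanizable. A sketch of the scoring rule as stated above (five factors, 0-3 each, threshold 8):

```csharp
using System;
using System.Linq;

public static class EsDecision
{
    // Each factor is scored 0-3 as in the matrix; total >= 8 suggests
    // event sourcing is likely worth its complexity.
    public static bool LikelyWorthwhile(params int[] factorScores)
    {
        if (factorScores.Any(s => s < 0 || s > 3))
            throw new ArgumentOutOfRangeException(nameof(factorScores));
        return factorScores.Sum() >= 8;
    }
}
```

Treat the number as a conversation starter, not a verdict -- a regulatory audit requirement alone can outweigh a low total.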
Scenario Comparison Table
| Scenario | Recommendation | Reasoning |
|---|---|---|
| E-commerce order fulfillment | Event Sourcing | Complex state machine, audit trail, multiple projections, temporal queries for disputes |
| Blog / CMS | CRUD | Content is mutable, no event history value, simple reads |
| Banking / financial ledger | Event Sourcing | Regulatory audit requirements, balance as sum of transactions, temporal queries |
| User profile management | CRUD | Mutable settings, no history value, simple reads |
| IoT sensor data pipeline | Event Sourcing (or append-only log) | Time-series data is naturally event-like, temporal queries essential |
| Internal CRUD admin tool | CRUD | Low complexity, small team, fast delivery needed |
| Insurance claims processing | Event Sourcing | Complex state machine, regulatory audit, multiple stakeholders need different views |
| To-do list app | CRUD | Trivial domain, no event history value |
| Inventory management | Hybrid | Track movements as events (event sourcing for stock transactions), CRUD for product catalog |
| Multi-tenant SaaS settings | CRUD | Configuration is mutable by nature |
| Booking / reservation system | Event Sourcing | Temporal queries, cancellation/rebooking as compensating events, audit trail |
Common Mistakes When Adopting Event Sourcing
1. Event Sourcing Everything
Mistake: Applying event sourcing to every entity in the system.
Reality: Most entities are simple CRUD. Event sourcing only makes sense for aggregates with complex state transitions and business-valuable history. Product catalog, user settings, and reference data should remain CRUD.
2. Treating Events as Database Rows
Mistake: Designing events as "database change notifications" -- FieldXUpdated, FieldYUpdated.
Reality: Events should capture business intent. Not OrderStatusUpdated(status: "Submitted") but OrderSubmitted(totalAmount, currency). The event name should make sense to a domain expert.
3. Large Events with Full State
Mistake: Each event contains the entire aggregate state (a full snapshot).
Reality: Events should contain only the delta -- what changed and why. OrderLineAdded(productId, quantity, unitPrice), not OrderUpdated(fullOrderJson). Full-state events make the event store a glorified audit log, losing the semantic richness.
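A consequence of delta events is that derived values become folds over the history, which is exactly what makes them replayable. A sketch (the `OrderLineAdded` record here is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A delta event: only what changed, plus the data needed to apply it.
public record OrderLineAdded(Guid ProductId, int Quantity, decimal UnitPrice);

public static class OrderTotal
{
    // Because events are deltas, the order total is a fold over the
    // line-added events rather than a field copied from a full snapshot.
    public static decimal FromHistory(IEnumerable<OrderLineAdded> lines) =>
        lines.Sum(l => l.Quantity * l.UnitPrice);
}
```

A full-state `OrderUpdated(fullOrderJson)` event cannot support this kind of reasoning: you would have to diff consecutive snapshots to recover what actually happened.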
4. Ignoring Projection Rebuild Time
Mistake: Not testing how long a full projection rebuild takes until production.
Reality: When you need to fix a projection bug or add a new projection, you must replay the entire event store. If that takes 6 hours, you have a 6-hour deployment for read model changes. Plan for this from day one.
5. Not Planning for Event Versioning
Mistake: Assuming event schemas will never change.
Reality: They will change. Build upcasting infrastructure early. Every event should include a schema version or be designed for forward-compatible evolution (adding optional fields, never removing or renaming fields).
6. Synchronous Projections
Mistake: Updating read models within the same transaction as the event store write.
Reality: This defeats the purpose of CQRS (you have coupled read and write performance) and creates a distributed transaction if read models are in different databases. Projections should be asynchronous with eventual consistency.
7. No Idempotency in Consumers
Mistake: Assuming each event will be delivered exactly once.
Reality: Consumers will see duplicates due to retries, rebalancing, and replay. Every consumer must be idempotent. Use the Redis dedup pattern or database upserts.
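The dedup half of that advice reduces to "record the event id before applying the side effect." A minimal sketch, with an in-memory set standing in for the Redis or database dedup store mentioned above:

```csharp
using System;
using System.Collections.Generic;

public sealed class IdempotentConsumer
{
    private readonly HashSet<Guid> _seen = new();
    public int AppliedCount { get; private set; }

    public void Consume(Guid eventId, Action apply)
    {
        if (!_seen.Add(eventId)) return; // duplicate delivery: side effect skipped
        apply();
        AppliedCount++;
    }
}
```

In a real system, recording the id and applying the effect must be atomic (e.g. an upsert keyed on event id in the same transaction), otherwise a crash between the two steps reintroduces the duplicate.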
8. Premature Event Store Optimization
Mistake: Choosing EventStoreDB or building a complex event store before understanding if PostgreSQL is sufficient.
Reality: A simple PostgreSQL table handles millions of events with proper indexing. Start simple, measure, and graduate when you hit real limits.
Memory Hooks
| Concept | Memory Hook |
|---|---|
| When to use ES | "If deleting the history would cost the business money, event source it." |
| When NOT to use ES | "If you would describe the feature as 'just save the form', use CRUD." |
| Projection cost | "Every projection is a mini-application that must process every event forever." |
| Eventual consistency | "The read model is a newspaper -- accurate when printed, possibly outdated now." |
| Event versioning | "Events are like published API contracts -- you can add fields, never remove them." |
| Hybrid approach | "Event source the complex core; CRUD the boring edges." |
Interview Questions
Q: "When would you NOT use event sourcing?"
When the domain is simple CRUD with no business value in the event history, when the team has no event sourcing experience and the timeline is tight, or when GDPR right-to-deletion is a primary concern (event sourcing makes deletion significantly harder).
Q: "What is the biggest operational cost of event sourcing?"
Projection rebuild time. When you fix a projection bug or add a new read model, you must replay the entire event store. This can take hours for large systems and must be planned for in deployment processes.
Q: "Can you use event sourcing without CQRS?"
Technically yes -- you could replay events on every read. But this is impractical at scale because replaying hundreds of events per query is too slow. CQRS (separate read models) is the standard complement to event sourcing. They are not the same thing, but they almost always go together.
Q: "How do you handle GDPR deletion in an event-sourced system?"
Two main approaches: (1) Crypto-shredding -- encrypt PII in events with a per-user key; to "delete," destroy the key, making the encrypted fields unreadable. (2) Event rewriting -- create a new stream without PII and swap references. Crypto-shredding is more common because it does not violate event immutability.
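The crypto-shredding approach can be sketched as follows. AES-GCM is used here purely for illustration, and the in-memory key dictionary stands in for a real key management system, which is the genuinely hard part in production:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public sealed class PiiVault
{
    private readonly Dictionary<Guid, byte[]> _userKeys = new();

    public void CreateKey(Guid userId) =>
        _userKeys[userId] = RandomNumberGenerator.GetBytes(32);

    // "Right to be forgotten": destroy the key, not the events.
    public void Shred(Guid userId) => _userKeys.Remove(userId);

    public (byte[] Nonce, byte[] Cipher, byte[] Tag) Encrypt(Guid userId, string pii)
    {
        var nonce = RandomNumberGenerator.GetBytes(12);
        var plain = Encoding.UTF8.GetBytes(pii);
        var cipher = new byte[plain.Length];
        var tag = new byte[16];
        using var aes = new AesGcm(_userKeys[userId]);
        aes.Encrypt(nonce, plain, cipher, tag);
        return (nonce, cipher, tag);
    }

    // Returns null once the key is shredded: the event still exists,
    // but its PII fields are permanently unreadable.
    public string TryDecrypt(Guid userId, byte[] nonce, byte[] cipher, byte[] tag)
    {
        if (!_userKeys.TryGetValue(userId, out var key)) return null;
        var plain = new byte[cipher.Length];
        using var aes = new AesGcm(key);
        aes.Decrypt(nonce, cipher, plain, tag);
        return Encoding.UTF8.GetString(plain);
    }
}
```

The event store itself is never touched: immutability is preserved, and "deletion" is the irreversible loss of the decryption key.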