Scaling

Event-driven vs request-driven architecture

Binadit Engineering · Apr 01, 2026 · 8 min read

Why your system grinds to a halt under real load

Your request-driven system works perfectly in development. Users click, database responds, page loads. Then you launch, traffic grows, and everything falls apart.

Here's what happens: every user action creates a chain of synchronous calls. User submits form → validate data → save to database → send email → update analytics → return response. Each step waits for the previous one. Under load, one slow database query blocks everything behind it.

The business impact is immediate. Page load times jump from 200ms to 8 seconds. Users abandon checkouts. Revenue drops. Your team scrambles to add more servers, but throwing hardware at architectural problems never works.

Event-driven architecture promises to fix this. Instead of chaining requests, you publish events. User submits form → publish "form submitted" event → return response immediately. Other services pick up the event when they can handle it.

But most teams implement event-driven systems wrong. They add complexity without understanding the tradeoffs. Systems become harder to debug, data gets inconsistent, and failure modes multiply.

How request-driven architecture actually works

Request-driven systems follow a simple pattern: request comes in, processing happens, response goes out. Everything is synchronous and sequential.

When a user places an order in your e-commerce system:

  • Validate payment details (waits for payment gateway)
  • Check inventory (waits for database)
  • Reserve items (waits for inventory service)
  • Send confirmation email (waits for email service)
  • Update analytics (waits for tracking service)
  • Return success response
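The chain above can be sketched as a single blocking handler. This is a minimal sketch, not a real integration: the service functions are hypothetical stand-ins, with sleeps simulating network waits.

```python
import time

def validate_payment(order):
    time.sleep(0.01)  # simulated payment gateway round trip
    return True

def check_inventory(order):
    time.sleep(0.01)  # simulated database query
    return True

def reserve_items(order):
    time.sleep(0.01)  # simulated inventory service call

def send_confirmation_email(order):
    time.sleep(0.01)  # simulated email service call

def update_analytics(order):
    time.sleep(0.01)  # simulated tracking service call

def place_order(order):
    # Every step blocks the response; latency is the sum of all steps,
    # including non-critical ones like email and analytics.
    if not validate_payment(order):
        raise ValueError("payment failed")
    if not check_inventory(order):
        raise ValueError("out of stock")
    reserve_items(order)
    send_confirmation_email(order)
    update_analytics(order)
    return {"status": "ok"}
```

The user's response time is the sum of every call, which is exactly why one slow dependency drags down the whole request.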

Each step must complete before the next one starts. The user waits for all steps to finish, even non-critical ones like sending emails or updating analytics.

This creates a cascade of dependencies. If your email service is down, order processing stops completely. If the payment gateway is slow, everything slows down. One bottleneck kills your entire system.

Under low load, this works fine. Response times are predictable. Debugging is straightforward because you can trace every request from start to finish. Data stays consistent because operations happen in order.

But load changes everything. With 100 concurrent users, you need 100 database connections, 100 email service connections, 100 payment gateway connections. Resources get exhausted. Requests pile up. The system collapses.

Why event-driven architecture exists

Event-driven systems solve the blocking problem by making operations asynchronous. Instead of waiting for every step to complete, you publish events and let other services handle them independently.

The same order process becomes:

  • Validate payment details
  • Check inventory
  • Reserve items
  • Publish "order placed" event
  • Return response (user sees confirmation immediately)

Other services subscribe to the "order placed" event:

  • Email service sends confirmation
  • Analytics service records the sale
  • Fulfillment service prepares shipment
  • Accounting service updates revenue
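The publish/subscribe shape above can be sketched in a few lines. To keep the sketch self-contained, handlers run in-process here; a real broker would deliver events asynchronously, and the event name and fields are illustrative.

```python
from collections import defaultdict

subscribers = defaultdict(list)
handled = []

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # A real broker delivers asynchronously; in-process dispatch is
    # used here only so the example runs on its own.
    for handler in subscribers[event_type]:
        handler(payload)

# Independent services react to the same event without knowing each other.
subscribe("order_placed", lambda e: handled.append(("email", e["order_id"])))
subscribe("order_placed", lambda e: handled.append(("fulfillment", e["order_id"])))

def place_order(order_id):
    # Critical path only: everything else reacts to the published event.
    publish("order_placed", {"order_id": order_id})
    return {"status": "ok", "order_id": order_id}
```

Adding a new consumer (say, accounting) is one more `subscribe` call; the order service never changes.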

Each service processes events at its own pace. If the email service is slow, it doesn't affect order processing. If analytics are down, orders still go through.

This decoupling is powerful. Services can scale independently. You can add new functionality by creating services that subscribe to existing events. Failures in one service don't cascade to others.

But decoupling comes with costs. Data consistency becomes harder. You can't guarantee that all services have processed an event at any given time. Debugging distributed workflows is complex. Event ordering matters, but networks don't guarantee order.

Common mistakes that make event-driven systems worse

Publishing too many fine-grained events. Teams create events for every database change: "user_name_updated", "user_email_updated", "user_phone_updated". This creates event spam. Services can't keep up. Processing events takes longer than the original synchronous operations.

Events should represent meaningful business actions, not data changes. "User registered", "Order placed", "Payment completed" are business events. "User table row updated" is not.

Making events carry too much data. Teams stuff entire object states into events because "consumers might need this data later". Events become huge. Network traffic increases. Event stores bloat. Processing slows down.

Events should carry identifiers and minimal context. If consumers need more data, they can fetch it. This keeps events small and focused.
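A small, focused event might look like the sketch below. Field names are illustrative, not a standard; the point is an identifier plus minimal context, never the full object state.

```python
import uuid
from datetime import datetime, timezone

def make_event(event_type, entity_id, **context):
    # Identifier and minimal context only; consumers that need
    # more data fetch it by entity_id from the owning service.
    return {
        "event_id": str(uuid.uuid4()),  # unique ID, used for deduplication
        "type": event_type,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "entity_id": entity_id,
        "context": context,
    }

event = make_event("order_placed", "order-42", total_cents=1999)
```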

Building synchronous event processing. This defeats the purpose. Teams publish events, then immediately wait for all consumers to process them before returning a response. You get all the complexity of events with none of the performance benefits.

Event processing must be truly asynchronous. Publishers don't wait for consumers. If you need synchronous behavior, use request-driven patterns.

Ignoring event ordering and idempotency. Networks reorder messages. Services restart and reprocess events. Without proper handling, you get race conditions and duplicate processing.

Events need unique IDs and timestamps. Consumers must handle duplicate events gracefully. Critical workflows need event sourcing or saga patterns to maintain consistency.
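An idempotent consumer can be sketched with a processed-ID check. The in-memory set stands in for what would be a persistent store in production.

```python
processed_ids = set()        # in production: a persistent store, not memory
notifications_sent = []

def handle_event(event):
    # Idempotent consumer: redeliveries of the same event_id are no-ops,
    # so duplicate events never create duplicate notifications.
    if event["event_id"] in processed_ids:
        return False
    processed_ids.add(event["event_id"])
    notifications_sent.append(event["entity_id"])
    return True

event = {"event_id": "evt-1", "entity_id": "order-42"}
handle_event(event)
handle_event(event)          # redelivery after a service restart: ignored
```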

Creating circular event dependencies. Service A publishes events that Service B consumes, which publishes events that Service A consumes. Under load, this creates feedback loops and cascading failures.

Map your event flows before implementation. Avoid cycles. Use domain boundaries to prevent services from being too interconnected.

What actually works for each architecture

Request-driven works best for:

  • Simple workflows with few dependencies
  • Operations that must be atomic (all steps succeed or fail together)
  • Real-time user interactions where immediate feedback matters
  • Small teams that need predictable debugging
  • Systems with consistent, moderate load

A content management system fits this model. User creates post → validate content → save to database → return success. Fast, simple, predictable.

Event-driven works best for:

  • Complex workflows with many independent steps
  • High-volume systems where some operations can be delayed
  • Integration between multiple independent services
  • Systems where different parts scale at different rates
  • When you need audit trails of all business events

E-commerce platforms, financial systems, and multi-tenant SaaS applications benefit from event-driven patterns.

You can mix both approaches. Handle the critical path synchronously, then publish events for everything else. Order processing validates payment and reserves inventory synchronously (user needs immediate feedback), then publishes events for emails, analytics, and fulfillment.

This hybrid approach gives you the reliability of request-driven architecture for critical operations and the scalability of event-driven architecture for everything else.
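The hybrid split can be sketched with a queue standing in for a message broker. The validation rule and field names are hypothetical; the shape is what matters: synchronous critical path, enqueued everything else.

```python
import queue

event_queue = queue.Queue()   # stand-in for a message broker

def place_order(order):
    # Critical path is synchronous: the user waits only for validation.
    # Non-critical work is enqueued and handled later.
    if order.get("total_cents", 0) <= 0:
        raise ValueError("invalid order")
    event_queue.put({"type": "order_placed", "order_id": order["id"]})
    return {"status": "confirmed", "order_id": order["id"]}

def drain_events(handler):
    # Background consumer: emails, analytics, and fulfillment
    # run at their own pace, off the user's request path.
    while not event_queue.empty():
        handler(event_queue.get())
```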

Real-world scenario: scaling a SaaS platform

A customer came to us with a project management SaaS that was failing under growth. Their request-driven system handled everything synchronously:

User creates project → validate permissions → create database records → send notifications to team members → update activity feeds → generate analytics → return response.

With 50 concurrent users, response times averaged 800ms. At 200 users, they jumped to 6 seconds. At 500 users, requests timed out.

The notification service was the bottleneck. Sending emails to large teams took 2-3 seconds per project creation. But users had to wait because everything was synchronous.

We redesigned the critical path to be hybrid:

Synchronous (immediate response):

  • Validate permissions
  • Create project record
  • Add user to project
  • Return success response

Asynchronous (via events):

  • Send team notifications
  • Update activity feeds
  • Generate analytics
  • Create default project structure

Response times dropped to 150ms under any load. The system could handle 2000+ concurrent users. Notification delays became acceptable because users got immediate confirmation that their project was created.

But we had to solve new problems. Event processing failures meant some team members didn't get notified. We added retry mechanisms and dead letter queues. We implemented idempotent consumers so duplicate events didn't create duplicate notifications.
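The retry-plus-dead-letter pattern can be sketched like this. The attempt count and handler are illustrative; real brokers typically add backoff between attempts.

```python
def process_with_retry(event, handler, max_attempts=3, dead_letters=None):
    # Retry transient failures; park permanently failing events in a
    # dead letter queue for inspection instead of silently losing them.
    for attempt in range(1, max_attempts + 1):
        try:
            handler(event)
            return True
        except Exception:
            if attempt == max_attempts and dead_letters is not None:
                dead_letters.append(event)
    return False

dlq = []
attempts = []

def always_fails(event):
    attempts.append(event)
    raise RuntimeError("email service unavailable")

process_with_retry({"event_id": "evt-9"}, always_fails, dead_letters=dlq)
```

After three failed attempts the event lands in `dlq`, where an operator (or a replay job) can deal with it later.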

The complexity increased, but failure modes became isolated. If notifications were slow, project creation still worked. If analytics were down, the core application stayed online.

Implementation approach for your system

Start with request-driven architecture. Don't begin with events. Build the simplest version that works, then identify bottlenecks under realistic load testing.

Profile your system under load. Find operations that block the critical path but don't need immediate completion. These are candidates for event-driven patterns.

Extract non-critical operations first. Move logging, analytics, and notifications to event-driven patterns. Keep core business logic synchronous until you understand the tradeoffs.

Choose your event infrastructure carefully. Managed services like AWS EventBridge or Google Cloud Pub/Sub handle scaling and reliability for you. Avoid building event systems from scratch unless you have specific requirements that managed services can't meet.

Design events around business domains. Each service should own events for its domain. User service publishes "user registered" events. Order service publishes "order placed" events. Don't create generic "data changed" events.

Implement proper error handling from day one. Events will fail to process. Networks will partition. Services will restart. Plan for eventual consistency and build monitoring that tracks event processing delays.

Test failure scenarios explicitly. Load testing event-driven systems requires testing more than throughput. Test what happens when consumers fall behind, when event stores fill up, when services restart during processing.

Monitor event lag, not just system resources. Event-driven systems can appear healthy while events pile up in queues. Track how long events take to process and alert when delays affect user experience.
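A lag check can be sketched as publish-to-process time against a threshold. The 30-second threshold and field name are illustrative; tune them to what actually affects your users.

```python
from datetime import datetime, timezone, timedelta

LAG_ALERT_THRESHOLD = timedelta(seconds=30)   # illustrative threshold

def event_lag(event, now=None):
    # Time between publication and processing: this is what piles up
    # in queues while CPU and memory metrics still look healthy.
    now = now or datetime.now(timezone.utc)
    return now - datetime.fromisoformat(event["published_at"])

def should_alert(event, now=None):
    return event_lag(event, now) > LAG_ALERT_THRESHOLD
```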

The decision comes down to acceptable complexity

Request-driven architecture trades performance for simplicity. You'll hit scaling limits earlier, but debugging is straightforward and data stays consistent.

Event-driven architecture trades simplicity for performance and resilience. You can scale further and isolate failures, but debugging becomes harder and consistency requires more work.

Most systems should start request-driven and evolve. Don't build event-driven systems because they sound sophisticated. Build them when request-driven patterns become the bottleneck.

The hybrid approach works best for many applications. Keep the critical path synchronous for immediate user feedback, use events for everything else. You get the performance benefits without making core workflows complex.

Your architecture choice affects your infrastructure needs. Event-driven systems need message queues, event stores, and more sophisticated monitoring. Request-driven systems need fewer moving parts but must handle traffic spikes differently.

If your system is struggling under load and architectural changes feel overwhelming, you're not alone. Most teams underestimate the operational complexity of distributed systems.
