Course → Module 7: Real-World Case Studies I

The Core Problem

A smart parking system tracks the real-time availability of every spot in a parking facility and allows users to reserve spots before arrival. The physical world imposes constraints that purely digital systems do not have. A parking spot can only hold one car. Sensors can malfunction. A driver might park in a different spot than the one they reserved. The system must reconcile digital state with physical reality continuously.

For a mall with 500 parking spots across 3 floors, the numbers are modest. But the design patterns are the same ones used at airport scale (10,000+ spots) or city-wide smart parking networks (100,000+ spots across hundreds of facilities). The concurrency challenge surfaces even at small scale: weekend peak hours produce bursts of simultaneous booking attempts for the last available spots on a popular floor.

A parking system is a distributed inventory problem with physical constraints. Each spot is a unit of inventory. The physical sensor is the source of truth. The database is the best available estimate.

Data Model

| Entity | Key Fields | Relationships | Notes |
|---|---|---|---|
| ParkingLot | lot_id, name, address, lat, lng, total_spots | Has many Floors | Top-level entity, one per facility |
| Floor | floor_id, lot_id, floor_number, spot_count | Belongs to ParkingLot, has many Spots | Physical grouping for navigation |
| Spot | spot_id, floor_id, spot_number, type, status, version | Belongs to Floor, has many Bookings | type: regular, handicapped, EV. version: for optimistic locking |
| Booking | booking_id, spot_id, user_id, start_time, end_time, status, created_at | Belongs to Spot and User | status: reserved, active, completed, cancelled |
| Payment | payment_id, booking_id, amount, method, status, idempotency_key | Belongs to Booking | Idempotency key prevents double charges |

The version field on the Spot entity is critical. It enables optimistic locking, which we will cover in detail below.
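As a concrete illustration, here is a minimal sketch of the Spot table with its version column. It uses Python's sqlite3 for a runnable example even though the course targets PostgreSQL; column names follow the data model above, and the defaults are illustrative assumptions.

```python
import sqlite3

# Minimal Spot schema sketch (SQLite syntax for portability; the
# production system would use PostgreSQL). The version column starts
# at 0 and is incremented on every state-changing UPDATE.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spots (
    spot_id     INTEGER PRIMARY KEY,
    floor_id    INTEGER NOT NULL,
    spot_number TEXT NOT NULL,
    type        TEXT NOT NULL DEFAULT 'regular',   -- regular | handicapped | EV
    status      TEXT NOT NULL DEFAULT 'available', -- available | reserved | occupied
    version     INTEGER NOT NULL DEFAULT 0         -- optimistic-locking counter
);
""")
conn.execute("INSERT INTO spots (spot_id, floor_id, spot_number) VALUES (42, 1, 'F1-42')")
row = conn.execute("SELECT status, version FROM spots WHERE spot_id = 42").fetchone()
print(row)  # ('available', 0)
```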

High-Level Architecture

```mermaid
graph TB
    subgraph "Client Layer"
        MA["Mobile App"]
        KI["Kiosk / Display"]
    end
    subgraph "Application Layer"
        GW["API Gateway"]
        BS["Booking Service"]
        AS["Availability Service<br/>(in-memory cache)"]
        PS["Payment Service"]
        NS["Notification Service"]
    end
    subgraph "Data Layer"
        PG["PostgreSQL<br/>(Bookings, Spots)"]
        RD["Redis<br/>(Availability Cache)"]
        KF["Kafka<br/>(Events)"]
    end
    subgraph "IoT Layer"
        S1["Floor 1 Sensors"]
        S2["Floor 2 Sensors"]
        S3["Floor 3 Sensors"]
        IGW["IoT Gateway<br/>(MQTT Broker)"]
    end
    MA --> GW
    KI --> GW
    GW --> BS
    GW --> AS
    BS --> PG
    BS --> PS
    BS --> KF
    AS --> RD
    KF --> NS
    S1 --> IGW
    S2 --> IGW
    S3 --> IGW
    IGW --> AS
    IGW --> KF
    style MA fill:#222221,stroke:#6b8f71,color:#ede9e3
    style KI fill:#222221,stroke:#6b8f71,color:#ede9e3
    style GW fill:#222221,stroke:#c8a882,color:#ede9e3
    style BS fill:#222221,stroke:#c47a5a,color:#ede9e3
    style AS fill:#222221,stroke:#c47a5a,color:#ede9e3
    style PS fill:#222221,stroke:#c47a5a,color:#ede9e3
    style NS fill:#222221,stroke:#c47a5a,color:#ede9e3
    style PG fill:#222221,stroke:#8a8478,color:#ede9e3
    style RD fill:#222221,stroke:#8a8478,color:#ede9e3
    style KF fill:#222221,stroke:#8a8478,color:#ede9e3
    style S1 fill:#222221,stroke:#6b8f71,color:#ede9e3
    style S2 fill:#222221,stroke:#6b8f71,color:#ede9e3
    style S3 fill:#222221,stroke:#6b8f71,color:#ede9e3
    style IGW fill:#222221,stroke:#c8a882,color:#ede9e3
```

The Availability Service maintains an in-memory count (via Redis) of available spots per floor, updated by both the Booking Service (when a reservation is made) and the IoT Gateway (when sensors detect occupancy changes). This cache enables sub-millisecond availability checks without querying the database.
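The read path this enables can be sketched in a few lines. A plain dict stands in for Redis, and `count_available_in_db` is a hypothetical placeholder for the SQL count; both names are assumptions for illustration.

```python
# Cache-first availability check: consult the counter, fall back to the
# database only on a miss. A dict stands in for Redis here.
cache = {}  # key "avail:{lot_id}:{floor_id}" -> available-spot count

def count_available_in_db(lot_id, floor_id):
    # Placeholder for: SELECT COUNT(*) FROM spots s JOIN floors f ...
    # WHERE s.status = 'available'. Returns a fixed value for the sketch.
    return 137

def get_availability(lot_id, floor_id):
    key = f"avail:{lot_id}:{floor_id}"
    if key in cache:                 # Redis GET in the real system
        return cache[key]
    count = count_available_in_db(lot_id, floor_id)
    cache[key] = count               # Redis SET, typically with a TTL
    return count

print(get_availability(1, 2))  # 137 (miss: computed from DB, then cached)
print(get_availability(1, 2))  # 137 (hit: served from the cache)
```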

Concurrent Booking with Optimistic Locking

The central design challenge: two users open the app at the same time, see the same spot listed as available, and both tap "Reserve." Only one should succeed.

Pessimistic locking (SELECT FOR UPDATE) would work but creates contention. During peak hours, many users compete for spots on the same floor. Holding row locks for the duration of a booking transaction (which includes payment) would serialize all bookings and destroy throughput.

Optimistic locking is the better fit. Each Spot row has a version column. The booking flow reads the current version, performs the reservation, and writes back only if the version has not changed.

```mermaid
sequenceDiagram
    participant U1 as User A
    participant U2 as User B
    participant BS as Booking Service
    participant DB as PostgreSQL
    U1->>BS: Reserve Spot #42
    U2->>BS: Reserve Spot #42
    BS->>DB: SELECT status, version FROM spots WHERE spot_id = 42
    Note over DB: Returns status=available, version=5
    BS->>DB: SELECT status, version FROM spots WHERE spot_id = 42
    Note over DB: Returns status=available, version=5
    BS->>DB: UPDATE spots SET status='reserved', version=6<br/>WHERE spot_id=42 AND version=5
    Note over DB: User A: 1 row affected. Success.
    DB-->>BS: 1 row updated
    BS->>DB: UPDATE spots SET status='reserved', version=6<br/>WHERE spot_id=42 AND version=5
    Note over DB: User B: 0 rows affected. Version mismatch.
    DB-->>BS: 0 rows updated
    BS-->>U1: Reservation confirmed
    BS-->>U2: Spot no longer available
```

The WHERE version = 5 clause is the guard. When User A's update commits, the version changes to 6. User B's update finds no row matching version = 5, so zero rows are affected. The Booking Service interprets zero affected rows as a conflict and returns an error to User B.

This approach requires no explicit locks. The database's row-level atomicity handles the race condition. The tradeoff: under very high contention (dozens of users competing for the same spot), the retry rate increases. In a parking system, this is rarely a problem because contention is spread across hundreds of spots.
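The race from the sequence diagram can be reproduced end to end. This sketch uses sqlite3 in place of PostgreSQL; the guard logic is identical, and the rowcount check is what the Booking Service interprets as win or conflict.

```python
import sqlite3

# Optimistic-locking race: both "users" have read (status='available',
# version=5). Only the first UPDATE matches the WHERE clause; the
# second affects 0 rows and is treated as a conflict.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spots (spot_id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
conn.execute("INSERT INTO spots VALUES (42, 'available', 5)")

def try_reserve(spot_id, expected_version):
    cur = conn.execute(
        "UPDATE spots SET status='reserved', version=version+1 "
        "WHERE spot_id=? AND version=? AND status='available'",
        (spot_id, expected_version),
    )
    return cur.rowcount == 1  # 1 row affected -> this caller won the race

print(try_reserve(42, 5))  # True  -- User A wins; version is now 6
print(try_reserve(42, 5))  # False -- User B: 0 rows affected, retry or fail
```

On conflict, the service can either return "spot no longer available" immediately or re-read the spot and retry against the new version, depending on product requirements.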

IoT Sensor Integration

Physical sensors (ultrasonic, infrared, or magnetic) detect whether a car is physically present in a spot. These sensors communicate via MQTT to an IoT Gateway, which translates sensor events into system events.

The sensor data serves two purposes. First, it reconciles the digital state with reality. If a spot is marked "reserved" in the database but the sensor shows it has been physically occupied for 15 minutes past the reservation window, the system can auto-complete the booking. If a spot is marked "available" but the sensor shows a car parked there (someone parked without a reservation), the system marks it as occupied.

Second, sensor data provides real-time availability that is more accurate than the booking database alone. A driver might reserve a spot, arrive, and park in a different spot because the reserved one was physically blocked. The sensor layer catches this discrepancy.
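The reconciliation rules described above can be summarized as a small decision function. The rule table and the 15-minute threshold are assumptions drawn from the text, not a production policy.

```python
# Reconcile database status against physical sensor state. Returns the
# corrective action the system should take; "no_action" means the
# digital and physical views already agree.
def reconcile(db_status: str, sensor_occupied: bool, minutes_past_window: int = 0) -> str:
    if db_status == "reserved" and sensor_occupied and minutes_past_window >= 15:
        return "auto_complete_booking"   # driver stayed past the reservation window
    if db_status == "available" and sensor_occupied:
        return "mark_occupied"           # car parked without a reservation
    if db_status == "occupied" and not sensor_occupied:
        return "mark_available"          # car left; free the spot
    return "no_action"

print(reconcile("available", True))      # mark_occupied
print(reconcile("reserved", True, 20))   # auto_complete_booking
print(reconcile("occupied", False))      # mark_available
```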

Availability: Cache vs. Database

Users checking availability far outnumber users making bookings. The read-to-write ratio might be 100:1. Querying PostgreSQL for every "how many spots are available on Floor 2?" request would waste database capacity on simple counts.

Redis maintains atomic counters: avail:{lot_id}:{floor_id}. When a booking is confirmed, the counter decrements. When a booking is cancelled or a sensor detects a car leaving, the counter increments. The counter is the first thing the app checks. The database is only queried when the user selects a specific spot to reserve.

A periodic reconciliation job (every 5 minutes) compares Redis counters against the actual database counts and sensor states. If they drift (due to missed events or Redis restarts), the job corrects the cache. This is eventual consistency with a bounded staleness window.
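A sketch of that job, with dicts standing in for both Redis and the database query (`true_counts_from_db` is a hypothetical placeholder for a GROUP BY over spots joined with sensor state):

```python
# Periodic cache reconciliation: recompute the true per-floor counts
# and overwrite any counter that has drifted.
cache = {"avail:1:2": 140}   # stale: a decrement event was missed

def true_counts_from_db():
    # Placeholder for the authoritative count query. Fixed values here.
    return {"avail:1:1": 180, "avail:1:2": 137, "avail:1:3": 95}

def reconcile_cache():
    corrected = []
    for key, truth in true_counts_from_db().items():
        if cache.get(key) != truth:   # drift from missed events or restarts
            cache[key] = truth        # Redis SET in the real system
            corrected.append(key)
    return corrected

print(reconcile_cache())  # every drifted or missing key is corrected
print(reconcile_cache())  # [] -- second pass finds nothing to fix
```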

Payment Integration

Parking payment follows the same patterns as e-commerce (Session 7.6). An idempotency key per booking prevents double charges. For time-based pricing (pay per hour), the system calculates the final amount when the driver exits, not when they enter. The booking record stores the start time. The exit event (sensor detects car leaving, or user taps "End Session") triggers the payment calculation.
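Exit-time pricing plus the idempotency guard fit in a few lines. The hourly rate, rounding rule, and key scheme below are illustrative assumptions; in production the charged-amounts map would be the persisted Payment table, not an in-memory dict.

```python
import math

RATE_PER_HOUR = 3.00
charged = {}  # idempotency_key -> amount already captured

def charge_on_exit(booking_id: str, hours_parked: float) -> float:
    key = f"exit-{booking_id}"      # one idempotency key per booking exit
    if key in charged:              # duplicate exit event or client retry
        return charged[key]         # return the prior result; no double charge
    amount = math.ceil(hours_parked) * RATE_PER_HOUR  # round up to whole hours
    charged[key] = amount
    return amount

print(charge_on_exit("b-101", 2.4))  # 9.0  -- 3 billable hours
print(charge_on_exit("b-101", 2.4))  # 9.0  -- idempotent retry, not re-charged
```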

Pre-authorization is useful for reservations. The system authorizes a hold on the user's payment method at booking time and captures the actual amount at exit. If the user cancels before arriving, the hold is released.
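The hold lifecycle can be sketched as a tiny state machine: authorize at booking, then either capture at exit or release on cancellation. The amounts and state names are illustrative assumptions.

```python
# Pre-authorization lifecycle sketch. Each booking's hold moves through
# held -> captured (at exit) or held -> released (on cancellation).
holds = {}  # booking_id -> (state, amount)

def authorize(booking_id, max_amount):
    holds[booking_id] = ("held", max_amount)       # hold placed at booking time

def capture(booking_id, actual_amount):
    state, held = holds[booking_id]
    assert state == "held" and actual_amount <= held
    holds[booking_id] = ("captured", actual_amount)  # charge the metered amount

def release(booking_id):
    state, _ = holds[booking_id]
    assert state == "held"
    holds[booking_id] = ("released", 0)            # user cancelled before arrival

authorize("b-7", 20.00)   # hold up to an assumed daily maximum
capture("b-7", 6.00)      # capture the actual amount at exit
print(holds["b-7"])       # ('captured', 6.0)
```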

Assignment

A shopping mall has 500 parking spots across 3 floors (200, 200, 100). Two users attempt to book the same spot at the same time.

  1. Write the SQL statements for the optimistic locking flow. Include the SELECT (read version), the UPDATE (with version guard), and the INSERT into the bookings table. Wrap it in a transaction.
  2. What happens if the system uses pessimistic locking (SELECT FOR UPDATE) instead? Calculate the maximum throughput if each booking transaction takes 200ms and there are 500 spots. Is this acceptable for peak-hour traffic?
  3. A sensor detects a car in Spot #42, but the database shows Spot #42 as "available" (no booking exists). Design the reconciliation logic. What status should the spot get? Should the system attempt to identify the car's owner?
  4. Design the Redis cache invalidation strategy. When should the counter be updated? What happens if a Redis node crashes and restarts with stale data?