Blogify — Distributed Blogging Platform

A 3-service distributed backend system with async communication, Redis caching, and RabbitMQ-based messaging, achieving 99.9% message delivery reliability.

Technology Stack

Node.js · TypeScript · MongoDB · Redis · RabbitMQ · Docker

System Architecture

  • Services: Decomposed into discrete User, Author, and Blog microservices to isolate fault domains.
  • Message Broker: RabbitMQ handles asynchronous communication (e.g., "Post Created" events) to decouple service dependencies; see the event-flow sketch after this list.
  • Caching Layer: Redis provides look-aside caching with Pub/Sub-based invalidation to offload the primary database.
  • Persistence: MongoDB sharded clusters provide horizontal scalability for content-heavy workloads.
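
A minimal sketch of the "Post Created" flow, assuming the amqplib client; the exchange, routing key, and queue names below are illustrative, not the project's actual ones:

```ts
import amqp from "amqplib";

// Publisher side (Blog service): emit a durable "post.created" event.
async function publishPostCreated(post: { id: string; authorId: string }) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("blog.events", "topic", { durable: true });
  ch.publish(
    "blog.events",
    "post.created",
    Buffer.from(JSON.stringify(post)),
    { persistent: true } // survive broker restarts
  );
  await ch.close();
  await conn.close();
}

// Consumer side (Author service): bind a queue and ack only after handling.
async function consumePostCreated() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("blog.events", "topic", { durable: true });
  await ch.assertQueue("author.post-created", { durable: true });
  await ch.bindQueue("author.post-created", "blog.events", "post.created");

  await ch.consume("author.post-created", async (msg) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());
    // ... update author-side state for the new post ...
    ch.ack(msg); // explicit ack: unacked messages are redelivered
  });
}
```

The publisher and consumer never call each other directly; if the consuming service is down, the event simply waits in its queue until the service comes back.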

Key Challenges

  • Designing message-driven workflows without tight coupling.
  • Maintaining cache consistency across services.
  • Handling retries, failures, and duplicate events (see the idempotent-consumer sketch after this list).
  • Ensuring reliable delivery under load.
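
One common answer to retries and duplicates is an idempotent consumer. The sketch below is an assumption about the approach rather than the project's exact code; it uses ioredis to record processed event ids, and the key names and TTL are illustrative:

```ts
import Redis from "ioredis";
import type { Channel, ConsumeMessage } from "amqplib";

const redis = new Redis();

// Process each event at most once: the event id is recorded with SET NX,
// so redeliveries and duplicates are acknowledged without re-running side effects.
export function consumeIdempotently(
  ch: Channel,
  queue: string,
  handle: (event: { id: string }) => Promise<void>
) {
  return ch.consume(queue, async (msg: ConsumeMessage | null) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());

    const firstTime = await redis.set(`processed:${event.id}`, "1", "EX", 86400, "NX");
    if (!firstTime) {
      ch.ack(msg); // duplicate delivery: acknowledge and skip
      return;
    }

    try {
      await handle(event);
      ch.ack(msg); // ack only after the handler succeeds
    } catch {
      await redis.del(`processed:${event.id}`); // allow a retry
      ch.nack(msg, false, !msg.fields.redelivered); // requeue once, then drop
    }
  });
}
```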

Key Learnings

  • Redis TTL-based caching reduced read latency from ~120ms to under 6ms (see the caching sketch after this list).
  • Pub/Sub invalidation enabled near-instant cross-service cache coherence.
  • Message acknowledgements were critical for reliability.
  • Docker simplified local orchestration and testing.
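
The look-aside read path and Pub/Sub invalidation can be sketched as follows (ioredis assumed; the key, the channel name, and the 60-second TTL are illustrative):

```ts
import Redis from "ioredis";

const cache = new Redis(); // regular commands
const sub = new Redis();   // dedicated connection for Pub/Sub

// Look-aside read: try the cache, fall back to the database,
// then populate the cache with a TTL so stale entries age out on their own.
async function getBlog(id: string, loadFromDb: (id: string) => Promise<object>) {
  const key = `blog:${id}`;
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit);

  const blog = await loadFromDb(id);
  await cache.set(key, JSON.stringify(blog), "EX", 60); // 60s TTL
  return blog;
}

// Cross-service invalidation: a service that mutates a blog publishes its id,
// and every subscriber drops the corresponding cache entry immediately.
async function publishInvalidation(id: string) {
  await cache.publish("cache.invalidate.blog", id);
}

sub.subscribe("cache.invalidate.blog");
sub.on("message", (_channel, id) => {
  void cache.del(`blog:${id}`);
});
```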

Uniqueness

  • Genuinely asynchronous service-to-service communication, not synchronous calls dressed up as microservices.
  • Redis + RabbitMQ used for distinct responsibilities.
  • Failure-aware message handling.

Impact

  • Achieved 99.9% message delivery reliability.
  • Reduced read latency by ~95%.