Apache Kafka
Introduction:

What is Event Streaming?

An event is basically any change of state in your application, recorded in its logs. When we share those events with others live, that is known as event streaming.

For example, consider YouTube live streaming. Your stream is available to the public, so viewers can watch what you are doing. If you are sharing your gameplay with the public, every change in the stream is an event, and sharing each of those events with the public is event streaming.

Technically speaking, event streaming is the practice of capturing data in real-time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real-time as well as retrospectively; and routing the event streams to different destination technologies as needed. Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.
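As a toy illustration in plain Python (no Kafka involved; the event fields and actions here are made up), a stream can be modeled as a sequence of change records that a consumer processes as each one arrives:

```python
import time

def event_source():
    """Simulate an event source: each yielded dict is one change record."""
    for i, action in enumerate(["signup", "click", "purchase"]):
        yield {"offset": i, "action": action, "ts": time.time()}

def process(event):
    """React to each event as it arrives (here: just format it)."""
    return f"offset={event['offset']} action={event['action']}"

# Consume the stream event by event, as each record arrives.
results = [process(e) for e in event_source()]
print(results)
```

A real event-streaming platform adds the missing pieces this sketch ignores: durable storage, replay, and delivery to many independent consumers.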

What is EDA?

Event-Driven Architecture (EDA) is a design pattern in which components communicate by producing and consuming events rather than calling each other directly. Its key characteristics:

  • Loose coupling: Components are independent and communicate through events, making them easier to develop, maintain, and update.
  • Scalability: You can add as many consumers as you need to process events.
  • Real-time processing: Events can trigger actions as they happen, enabling real-time applications.

Advantages of EDA

  • Flexibility: EDA can accommodate changing business needs by adding new event types
  • Resilience: If one part of your system fails, the other parts continue processing events
  • Concurrency: Multiple consumers can process events at the same time

Disadvantages of EDA

  • Complexity: Designing and debugging EDA systems can be more complex than traditional architectures
  • Monitoring: Troubleshooting issues can be challenging due to asynchronous communication
  • Testing: Testing event-driven systems requires specialized tools and techniques

EDA is a powerful approach for building scalable and responsive applications.
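A minimal sketch of the idea in plain Python (a toy in-process event bus, not a production broker; the event names and handlers are hypothetical): producers publish events by name, and any number of independent consumers subscribe, which illustrates the loose coupling, concurrency, and resilience points above.

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus: publishers and subscribers never reference each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every registered consumer receives the event independently;
        # one failing handler does not stop the others (resilience).
        for handler in self._subscribers[event_type]:
            try:
                handler(payload)
            except Exception:
                pass

bus = EventBus()
audit_log, emails = [], []

# Two independent consumers of the same event type.
bus.subscribe("user_created", lambda p: audit_log.append(p["name"]))
bus.subscribe("user_created", lambda p: emails.append(f"welcome {p['name']}"))

bus.publish("user_created", {"name": "hamza"})
print(audit_log, emails)
```

Adding a new event type or a new consumer requires no change to existing publishers, which is the flexibility EDA promises.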

Inter-Service Communication between Microservices

There are two main approaches to inter-service communication in a microservices architecture:

  1. Synchronous: This method involves a direct call between services, typically using an API exposed by one service that another service can interact with. Protocols like HTTP are commonly used for this purpose. The calling service waits for a response from the other service before proceeding.
  2. Asynchronous Messaging: In this approach, services don't directly call each other. Instead, they send messages to a queue or message broker. The receiving service then processes the message independently. This allows for looser coupling between services and improved scalability. Message brokers like Kafka or RabbitMQ are popular choices for asynchronous communication.

Both synchronous and asynchronous communication have their advantages and disadvantages:

  • Synchronous communication is simpler to implement and easier to debug, but it can lead to tight coupling between services and performance bottlenecks if not designed carefully.
  • Asynchronous communication offers better scalability and looser coupling, but it can be more complex to implement and reason about.

The best choice for your microservices communication will depend on the specific needs of your application. Consider factors like latency requirements, message volume, and desired level of coupling between services when making your decision.
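A stdlib-only sketch of the contrast (hypothetical order/invoice services, not a real RPC framework or message broker): the synchronous path calls the other service directly and waits for the reply, while the asynchronous path drops a message on a queue that a worker drains independently.

```python
import queue
import threading

# --- Synchronous: the caller invokes the other service and blocks for the reply.
def invoice_service(order_id):
    return f"invoice for {order_id}"

def place_order_sync(order_id):
    reply = invoice_service(order_id)   # waits here until the reply arrives
    return reply

# --- Asynchronous: the caller enqueues a message and moves on; a worker consumes it.
broker = queue.Queue()
processed = []

def worker():
    while True:
        msg = broker.get()
        if msg is None:                  # sentinel to stop the worker
            break
        processed.append(f"invoice for {msg}")
        broker.task_done()

t = threading.Thread(target=worker)
t.start()
broker.put("order-42")                   # returns immediately; no waiting for a reply
broker.put(None)
t.join()

print(place_order_sync("order-7"), processed)
```

In a real system the in-memory `queue.Queue` would be replaced by a broker such as Kafka or RabbitMQ, but the coupling trade-off is the same.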

Synchronous Communication with FastAPI, Docker, and Poetry

Code Examples: FastAPI, Docker, and Poetry. Here are some code snippets to get you started:

FastAPI - Simple Hello World Endpoint:

Python
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World!"}        

This code defines a basic FastAPI application with a single endpoint (/). The async def root function is an asynchronous handler, but another service can still call this endpoint synchronously over HTTP.

Dockerfile (Basic):

FROM python:3.9

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]        

This Dockerfile builds a container based on the Python 3.9 image. It copies the requirements.txt file (which lists your FastAPI dependencies) and installs them. Then it copies your application code and runs the uvicorn command to start the FastAPI app.
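To try it out, you would build and run the image from the directory containing the Dockerfile (the image tag myapp is a hypothetical name; pick your own):

```shell
docker build -t myapp .
docker run -p 8000:8000 myapp
```

Port 8000 on the host is mapped to the port uvicorn listens on inside the container, so the endpoint becomes reachable at http://localhost:8000/.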

Database Model:

from typing import Optional

from sqlmodel import Field, SQLModel

# table=True tells SQLModel to map this class to a database table
class User(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=255)

This code defines a basic User model for a database. The id field is the primary key and the name field is a string.

A Challenge: FastAPI Event-Driven Microservices Development with Kafka, KRaft, Docker Compose, and Poetry

  • Kafka 3.5 uses ZooKeeper. ZooKeeper is a third-party tool that provides highly reliable distributed coordination for cloud applications; it handles all of Kafka's metadata.
  • Kafka 3.7 uses KRaft. KRaft is Kafka's new built-in coordination system, so we no longer need to install a third-party tool for handling metadata. Kafka now has its own.

Kafka 3.7 Docker Images

Follow this Quick Start with Docker and KRaft: https://kafka.apache.org/quickstart

Get the Docker image

docker pull apache/kafka:3.7.0        

Start the Kafka container

docker run -p 9092:9092 apache/kafka:3.7.0        
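For the challenge mentioned earlier, the same broker can also be declared with Docker Compose. A minimal docker-compose.yml sketch equivalent to the run command above might look like:

```yaml
services:
  kafka:
    image: apache/kafka:3.7.0
    ports:
      - "9092:9092"
```

Then docker compose up -d starts the broker in the background.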

Open another console and check whether the container is running:

docker ps        

Copy the container name and run the following command to attach a shell:

docker exec -it <container_name> /bin/bash

Note: you can also use the first few characters of the container ID instead of the name.

Once inside the interactive shell, list the current directory to get your bearings:

ls        

Note: Kafka commands are in this directory in the container

cd /opt/kafka/bin        

then:

ls        

Create a Topic to store your Events

Ready to store your events in Kafka? Here's what you need to do first:

  • Create a Topic: In Kafka, events are organized into categories called topics. You'll need to create a topic specifically for the events you want to store.
  • Use the Kafka Command-Line Tools: The exact command will vary depending on your Kafka version and desired topic configuration.

So before you can write your first events, you must create a topic. Run:

/opt/kafka/bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092        

All of Kafka's command line tools have additional options:

Note: running the kafka-topics.sh command without any arguments displays usage information. The --describe option, for example, shows details such as the partition count of the new topic:

/opt/kafka/bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092        

Write events to a Kafka topic

A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events. Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you need—even forever.

Run the console producer client to write a few events into your topic. By default, each line you enter will result in a separate event being written to the topic.

/opt/kafka/bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092        

Generate events:

Hello this is my event
I'm Hamza Waheed

Read the Event

Open another terminal session and run the console consumer client to read the events you just created:

/opt/kafka/bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092        

You should see the events you just created:

Hello this is my event
I'm Hamza Waheed

Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want. You can easily verify this by opening yet another terminal session and re-running the previous command again.
