Implementing OpenTelemetry in FastAPI (Python) with the .NET Aspire Dashboard: A Practical Guide

Introduction

Modern distributed applications require robust observability solutions to monitor performance, detect issues, and optimize operations. In this article, I'll share a generic approach to implementing OpenTelemetry with the .NET Aspire Dashboard, providing comprehensive insights into your application's behavior.

What is OpenTelemetry?

OpenTelemetry is an open-source observability framework that provides vendor-agnostic APIs, libraries, and agents for collecting distributed traces, metrics, and logs. It enables you to instrument your applications once and send telemetry data to multiple backends.

What is .NET Aspire Dashboard?

.NET Aspire is Microsoft's opinionated, cloud-ready stack for building observable, production-ready distributed applications. The Aspire Dashboard provides a visual interface for monitoring your application's health, performance, and telemetry data in real-time.

Implementation Details

Let's explore how to implement OpenTelemetry with Aspire Dashboard in a Python application:

1. Setting Up Dependencies

First, ensure you have the necessary packages in your requirements.txt:

fastapi
uvicorn
opentelemetry-distro
opentelemetry-exporter-otlp-proto-grpc
opentelemetry-instrumentation-fastapi
opentelemetry-instrumentation-logging
opentelemetry-instrumentation-system-metrics
opentelemetry-instrumentation-requests        
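
Since opentelemetry-distro is on the list, it is worth noting that it ships the opentelemetry-bootstrap and opentelemetry-instrument helpers, which offer a zero-code alternative to the manual setup shown below, driven entirely by environment variables. A sketch (the module name main:app is my assumption, not from this article):

```shell
pip install -r requirements.txt

# Optional: detect installed libraries and add matching instrumentation packages
opentelemetry-bootstrap -a install

# Zero-code alternative to the manual configuration shown later
# (main:app is an assumed module/attribute name)
OTEL_SERVICE_NAME=my-api-service \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
opentelemetry-instrument uvicorn main:app
```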

2. OpenTelemetry Protocol (OTLP) Configuration

Here's a generic approach to set up OTLP exporters:

from opentelemetry import trace, metrics
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

import logging

def configure_opentelemetry(service_name="my-service", endpoint="host.docker.internal:4317", insecure=True):
    """
    Configure OpenTelemetry with traces, metrics, and logs
    """
    # Define resource attributes
    resource = Resource.create({SERVICE_NAME: service_name})

    # Configure tracing
    trace_provider = TracerProvider(resource=resource)
    trace_provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint, insecure=insecure))
    )
    trace.set_tracer_provider(trace_provider)

    # Configure metrics
    metric_reader = PeriodicExportingMetricReader(
        OTLPMetricExporter(endpoint=endpoint, insecure=insecure)
    )
    metrics.set_meter_provider(
        MeterProvider(resource=resource, metric_readers=[metric_reader])
    )

    # Configure logging
    logger_provider = LoggerProvider(resource=resource)
    logger_provider.add_log_record_processor(
        BatchLogRecordProcessor(OTLPLogExporter(endpoint=endpoint, insecure=insecure))
    )
    set_logger_provider(logger_provider)
    # Attach a handler so records from the stdlib logging module are
    # actually exported; without it, log records never leave the process.
    logging.getLogger().addHandler(
        LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
    )

3. Application Instrumentation

Here's how to instrument a FastAPI application:

from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.logging import LoggingInstrumentor
from opentelemetry.instrumentation.system_metrics import SystemMetricsInstrumentor

# Create FastAPI app
app = FastAPI()

# Instrument your FastAPI application
FastAPIInstrumentor.instrument_app(app)

# Instrument logging and system metrics
LoggingInstrumentor().instrument(set_logging_format=True)
SystemMetricsInstrumentor().instrument()        

4. Structured Logging with Trace Context

Set up logging to include trace context:

import logging

def setup_logging():
    """
    Configure logging with OpenTelemetry trace context
    """
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s - "
               "[trace_id=%(otelTraceId)s span_id=%(otelSpanId)s "
               "resource.service.name=%(otelServiceName)s]"
    )        
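
To see what the resulting log lines look like without running the full stack, here is a stdlib-only sketch that fakes the otelTraceId/otelSpanId/otelServiceName attributes LoggingInstrumentor injects at runtime (the ID values are made up for illustration):

```python
import logging

# The same format string as in setup_logging() above.
FORMAT = ("%(asctime)s - %(name)s - %(levelname)s - %(message)s - "
          "[trace_id=%(otelTraceId)s span_id=%(otelSpanId)s "
          "resource.service.name=%(otelServiceName)s]")

record = logging.LogRecord(
    name="my-service", level=logging.INFO, pathname="example.py", lineno=1,
    msg="Hello endpoint called", args=(), exc_info=None,
)
# Simulate the attributes the instrumentor adds to every record.
record.otelTraceId = "0af7651916cd43dd8448eb211c80319c"
record.otelSpanId = "b7ad6b7169203331"
record.otelServiceName = "my-api-service"

line = logging.Formatter(FORMAT).format(record)
print(line)
```

In the real application these fields are populated automatically from the active span, so every log line can be correlated with its trace in the dashboard.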

5. Putting It All Together in a FastAPI Application

Here's a complete example of a FastAPI application with OpenTelemetry:

import logging

from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

# configure_opentelemetry and setup_logging are the helper functions
# defined in the previous sections (assumed to live in the same module)

# Initialize OpenTelemetry
configure_opentelemetry(
    service_name="my-api-service",
    endpoint="host.docker.internal:4317",
    insecure=True
)

# Setup logging
setup_logging()

# Create FastAPI app
app = FastAPI(
    title="My API Service",
    description="API with OpenTelemetry instrumentation",
    version="1.0.0"
)

# Instrument FastAPI
FastAPIInstrumentor.instrument_app(app)

@app.get("/hello")
async def hello():
    """Sample endpoint that will be automatically traced"""
    logging.info("Hello endpoint called")
    return {"message": "Hello, OpenTelemetry!"}        
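
To try the service locally (assuming the code above is saved as main.py; the filename is my choice, not from the article), start it with uvicorn and generate a few traced requests:

```shell
# Start the instrumented API (main.py is an assumed filename)
uvicorn main:app --port 8000

# In a second terminal, generate some telemetry
curl http://localhost:8000/hello
```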

6. Setting Up Aspire Dashboard

Running the .NET Aspire Dashboard in Docker:

docker run --rm -d -p 18888:18888 -p 4317:18889 --name aspire-dashboard -e DOTNET_DASHBOARD_UNSECURED_ALLOW_ANONYMOUS='true' mcr.microsoft.com/dotnet/nightly/aspire-dashboard

Then point your application's OTLP exporter at the published gRPC port: localhost:4317 when the app runs directly on the host, or host.docker.internal:4317 when it runs in another container. The dashboard UI itself is served at http://localhost:18888.

Key Benefits

  1. Complete Observability: This implementation provides traces, metrics, and logs in a unified dashboard.
  2. Cross-Language Support: OpenTelemetry works across languages (Python, .NET, Java, etc.).
  3. Distributed Tracing: Track requests across service boundaries in complex architectures.
  4. Minimal Performance Impact: Batch processing of telemetry data ensures low overhead.
  5. Vendor Neutrality: Switch between different backends without changing instrumentation.

Best Practices

  1. Consistent Service Naming: Use descriptive, consistent service names across your distributed system.
  2. Environment-Based Configuration: Use environment variables to control telemetry behavior.
  3. Custom Instrumentation: Add custom spans for critical operations:

from opentelemetry import trace

tracer = trace.get_tracer("my-service")

def process_data(data):
    with tracer.start_as_current_span("process-data") as span:
        span.set_attribute("data.size", len(data))
        result = perform_processing(data)  # placeholder for your domain logic
        span.set_attribute("processing.success", True)
        return result

  4. Correlation IDs: Ensure proper propagation of trace context across services.

  5. Sampling Strategy: Implement appropriate sampling for high-throughput services.
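
Point 4 rests on the W3C Trace Context traceparent header, which OpenTelemetry's default propagator reads and writes for you. As a minimal illustration of what travels between services (the helper functions are mine, not part of the SDK, and the ID values are invented):

```python
# Sketch of the W3C Trace Context `traceparent` header used for correlation:
# version-trace_id-span_id-trace_flags, all lowercase hex.
def build_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "span_id": span_id, "sampled": flags == "01"}

header = build_traceparent("0af7651916cd43dd8448eb211c80319c", "b7ad6b7169203331")
print(header)  # 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```

In practice you never build this header by hand; the instrumentation libraries inject and extract it on every outgoing and incoming HTTP request.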

Conclusion

Implementing OpenTelemetry with the Aspire Dashboard provides a powerful, vendor-neutral observability solution that delivers insights across your entire application stack. The approach demonstrated in this article can be adapted to any application, regardless of its complexity or technology stack.

For organizations building distributed systems, this implementation offers a future-proof observability strategy that scales with your architecture while providing immediate operational benefits.

More articles by Harish Chandran