🐍 Stop Using print() for Debugging! Master Python's Logging Module
If you're still sprinkling print() statements throughout your Python code for debugging and monitoring, you're missing out on one of Python's most powerful built-in tools: the logging module.
🎯 Why Logging > Print Statements
The Problem with print():
- No severity levels: every message looks equally important
- No timestamps, file names, or line numbers
- Can't be switched off without deleting code
- Everything goes to stdout: no files, no rotation, no alerting
With proper logging:
- Severity levels (DEBUG through CRITICAL) you can filter at runtime
- Automatic timestamps and source locations
- Multiple destinations: console, rotating files, email, log aggregators
- Verbosity changes with one config tweak, not a code edit
💻 Practical Example: From Beginner to Production
❌ The Bad Way (Don't do this!)
```python
import traceback

def process_order(order_id):
    print("Processing order:", order_id)
    try:
        # Process order
        result = charge_payment(order_id)
        print("Payment successful:", result)
    except Exception as e:
        print("ERROR:", e)
        print("Stack trace:", traceback.format_exc())
    print("Order complete")
```
Problems:
- No way to silence these messages in production
- No timestamps or severity, so grepping the output is guesswork
- Stack traces have to be captured and formatted by hand
- "Order complete" prints even when the order failed
✅ The Professional Way
```python
import logging
from logging.handlers import RotatingFileHandler

# Configure logging once at application startup
def setup_logging():
    # Create logger
    logger = logging.getLogger('myapp')
    logger.setLevel(logging.DEBUG)  # Capture everything

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_formatter = logging.Formatter(
        '%(levelname)s - %(message)s'
    )
    console_handler.setFormatter(console_formatter)

    # File handler - DEBUG and above with full details
    file_handler = RotatingFileHandler(
        'app.log',
        maxBytes=10*1024*1024,  # 10MB
        backupCount=5
    )
    file_handler.setLevel(logging.DEBUG)
    file_formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - '
        '%(filename)s:%(lineno)d - %(message)s'
    )
    file_handler.setFormatter(file_formatter)

    # Add handlers
    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    return logger

# Use throughout your application
logger = setup_logging()

def process_order(order_id):
    logger.info(f"Processing order {order_id}")
    try:
        result = charge_payment(order_id)
        logger.info(f"Payment successful: ${result.amount}")
    except PaymentError:
        # exc_info=True automatically includes the stack trace
        logger.error(f"Payment failed for order {order_id}", exc_info=True)
    except Exception:
        logger.critical(f"Unexpected error processing order {order_id}",
                        exc_info=True)
    logger.debug(f"Order {order_id} processing complete")
```
**Output to Console (INFO+):**
```
INFO - Processing order 12345
INFO - Payment successful: $99.99
```
**Output to app.log (DEBUG+):**
```
2025-01-15 14:32:10,123 - myapp - INFO - orders.py:45 - Processing order 12345
2025-01-15 14:32:10,234 - myapp - DEBUG - payment.py:12 - Connecting to payment gateway
2025-01-15 14:32:10,456 - myapp - INFO - payment.py:28 - Payment successful: $99.99
2025-01-15 14:32:10,457 - myapp - DEBUG - orders.py:52 - Order 12345 processing complete
```
🔧 Key Concepts Explained
1. Logger Hierarchy
```python
import logging

# Create hierarchical loggers (dots define parent/child relationships)
logger = logging.getLogger('myapp')              # Top-level app logger
db_logger = logging.getLogger('myapp.database')  # Child
api_logger = logging.getLogger('myapp.api')      # Child

# Set different levels for different modules
db_logger.setLevel(logging.DEBUG)     # Verbose for DB
api_logger.setLevel(logging.WARNING)  # Only warnings for API
```
2. Multiple Handlers
```python
import logging
import logging.handlers

# Send different levels to different destinations
logger = logging.getLogger('myapp')

# Console: INFO and above
console = logging.StreamHandler()
console.setLevel(logging.INFO)

# File: Everything
file = logging.FileHandler('app.log')
file.setLevel(logging.DEBUG)

# Email: Only CRITICAL errors
email = logging.handlers.SMTPHandler(
    mailhost='smtp.example.com',
    fromaddr='app@example.com',
    toaddrs=['admin@example.com'],
    subject='CRITICAL ERROR'
)
email.setLevel(logging.CRITICAL)

logger.addHandler(console)
logger.addHandler(file)
logger.addHandler(email)
```
3. Format Customization
```python
import json
import logging

# Development format - readable
dev_format = logging.Formatter(
    '%(levelname)s - %(message)s'
)

# Production format - detailed
prod_format = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - '
    '%(filename)s:%(funcName)s:%(lineno)d - %(message)s'
)

# JSON format - for log aggregation systems
class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_obj = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno
        }
        return json.dumps(log_obj)
```
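Wiring a JSON formatter to a handler might look like the sketch below (a compact formatter is repeated here, and a StringIO stream stands in for a real destination, so the snippet runs standalone):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON object."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        })

# Attach the formatter to any handler, exactly like a text formatter
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("myapp.json_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False

logger.info("Order shipped")
# stream now holds one JSON line per record, ready for an aggregator
```

In production you would attach this formatter to a file or socket handler instead of a StringIO.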
🛠️ Pro Tips for Production
1. Use Configuration Files
```python
import logging.config
import yaml  # PyYAML: pip install pyyaml

# Load the config from logging_config.yaml (shown below)
with open('logging_config.yaml') as f:
    config = yaml.safe_load(f)

logging.config.dictConfig(config)
logger = logging.getLogger('myapp')
```
logging_config.yaml:
```yaml
version: 1
formatters:
  simple:
    format: '%(levelname)s - %(message)s'
  detailed:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
  file:
    class: logging.handlers.RotatingFileHandler
    filename: app.log
    maxBytes: 10485760  # 10MB
    backupCount: 5
    level: DEBUG
    formatter: detailed
loggers:
  myapp:
    level: DEBUG
    handlers: [console, file]
    propagate: no
```
2. Context Managers for Temporary Verbose Logging
```python
import logging
from contextlib import contextmanager

@contextmanager
def verbose_logging(logger, level=logging.DEBUG):
    """Temporarily increase logging verbosity"""
    original_level = logger.level
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(original_level)

# Usage (assumes a DEBUG-level handler is attached, as in setup_logging above)
logger = logging.getLogger('myapp')
logger.setLevel(logging.WARNING)

logger.warning("This shows")
logger.debug("This doesn't show")

with verbose_logging(logger):
    logger.debug("This shows during context")

logger.debug("This doesn't show again")
```
3. Structured Logging with Extra Data
```python
# Add contextual information
logger.info(
    "User action completed",
    extra={
        'user_id': user.id,
        'action': 'purchase',
        'amount': 99.99,
        'ip_address': request.remote_addr
    }
)

# Custom formatter to include extra fields
class ContextFormatter(logging.Formatter):
    def format(self, record):
        if hasattr(record, 'user_id'):
            record.msg = f"[User:{record.user_id}] {record.msg}"
        return super().format(record)
```
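Putting the two halves together might look like this self-contained sketch (the formatter is repeated, and the `user_id` value is made up, so the snippet runs on its own):

```python
import io
import logging

class ContextFormatter(logging.Formatter):
    """Prefix messages with the user_id passed via extra=."""
    def format(self, record):
        if hasattr(record, "user_id"):
            record.msg = f"[User:{record.user_id}] {record.msg}"
        return super().format(record)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(ContextFormatter("%(levelname)s - %(message)s"))

logger = logging.getLogger("myapp.context_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False

# Every key in extra= becomes an attribute on the LogRecord
logger.info("User action completed", extra={"user_id": 42, "action": "purchase"})
# stream now contains: INFO - [User:42] User action completed
```

One caveat worth knowing: `extra` keys must not collide with built-in LogRecord attribute names like `message` or `asctime`, or logging raises a KeyError.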
4. Performance: Lazy Evaluation
```python
# ❌ Bad - the string is built eagerly, even if DEBUG is disabled
logger.debug("Processing data: " + expensive_operation())

# ✅ Better - string formatting is deferred until the record is emitted
# (note: expensive_operation() itself still runs either way)
logger.debug("Processing data: %s", expensive_operation())

# ✅ Best for costly arguments - skip the call entirely when disabled
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("Data: %s", expensive_operation())
```
5. Exception Logging
```python
try:
    risky_operation()
except Exception:
    # logger.exception logs at ERROR level and
    # automatically includes the full stack trace
    logger.exception("Operation failed")
    # Or equivalently:
    logger.error("Operation failed", exc_info=True)
```
🎓 Quick Reference: Log Levels

| Level | Numeric value | Use for |
|---|---|---|
| DEBUG | 10 | Detailed diagnostic information |
| INFO | 20 | Confirmation that things work as expected |
| WARNING | 30 | Something unexpected, but the app still works |
| ERROR | 40 | A function or operation failed |
| CRITICAL | 50 | The application itself may be unable to continue |
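The levels form a simple threshold: a logger set to WARNING passes WARNING and everything above it, and drops the rest. A quick self-contained demo:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

logger = logging.getLogger("myapp.levels_demo")
logger.setLevel(logging.WARNING)   # threshold: WARNING (30) and above pass
logger.addHandler(handler)
logger.propagate = False

logger.debug("verbose detail")     # 10 < 30: dropped
logger.info("routine event")       # 20 < 30: dropped
logger.warning("something odd")    # 30 >= 30: logged
logger.error("operation failed")   # 40 >= 30: logged
```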
🚀 Getting Started Checklist
✅ Replace print() with logger.info() in your codebase
✅ Set up at least two handlers: console (INFO+) and file (DEBUG+)
✅ Use rotating file handlers to prevent disk space issues
✅ Add timestamps and context to log messages
✅ Use exc_info=True when logging exceptions
✅ Create hierarchical loggers for different modules
✅ Use configuration files for production deployments
✅ Consider JSON formatting for log aggregation systems
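A common first step through that checklist is the conventional per-module logger pattern, sketched here (the module name is hard-coded so the snippet runs standalone; in a real package you'd use `__name__`):

```python
import logging

# One logger per module, named after the module. In a real package this
# would be logging.getLogger(__name__), which slots the module into the
# 'myapp.*' hierarchy automatically.
logger = logging.getLogger("myapp.orders")

def process_order(order_id):
    logger.info("Processing order %s", order_id)  # lazy %-formatting
    return order_id

process_order(12345)
```

Because handlers are attached once to the parent `myapp` logger, every `myapp.*` module inherits them via propagation with no extra setup.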
Remember: Good logging is like insurance - you don't appreciate it until you need it. Future you (debugging at 2 AM) will thank present you for setting this up properly! 🌙☕
#Python #Logging #SoftwareEngineering #BestPractices #CleanCode #Debugging #Programming #DevOps #SoftwareDevelopment #CodeQuality
What's your favorite logging trick or setup? Share in the comments! 👇