Day 96 of Backend/Cloud/DevOps Journey - Debugging bcrypt Compatibility Issues

Today was all about real-world debugging: troubleshooting a passlib/bcrypt compatibility issue that persisted even after switching Python versions.

What I accomplished:
• Installed Python 3.12 via Homebrew to replace Python 3.14 (learned that compiling from source takes time because of C extension building)
• Discovered the issue wasn't the Python version but a bcrypt library version incompatibility with passlib
• Pinned bcrypt to version 4.0.1 to resolve passlib's internal self-test failures
• Successfully tested all auth utilities: hash_password, verify_password, create_access_token, verify_token
• Learned why pinning dependency versions in requirements.txt prevents environment inconsistencies across deployments
• Updated verify_token() to pass an explicit algorithms parameter, per security best practice

Technical insights:
• Homebrew's "make" step compiles C source into machine code, which is CPU-intensive and time-consuming
• Virtual environments are tied to the Python version that created them; you cannot switch versions without recreating the venv
• passlib's detect_wrap_bug() self-test, which probes bcrypt's 72-byte password limit, fails on newer bcrypt versions (4.1+)
• Pinning dependencies (bcrypt==4.0.1) ensures consistent behavior across development, staging, and production environments
• The algorithms parameter in jwt.decode() prevents algorithm-confusion attacks; always specify it explicitly
• Real-world debugging often means the obvious fix (here, switching Python versions) doesn't work, and deeper investigation is required
• Library compatibility issues are common in production; this is why teams run stable Python versions rather than the bleeding edge

Key lesson: when an error persists after the "obvious" fix, dig deeper into the dependency chain. The root cause was the bcrypt library version, not the Python version.
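To make the algorithm-allowlist insight concrete, here is a standard-library-only sketch of what the explicit `algorithms` parameter enforces. In the real utilities this is simply `jwt.decode(token, key, algorithms=["HS256"])`; the helper names below are illustrative, not from the actual codebase:

```python
import base64
import hashlib
import hmac
import json


def _b64url_encode(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(part: str) -> bytes:
    # Restore the stripped padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def encode_hs256(payload: dict, key: bytes) -> str:
    """Build an HS256-signed token (illustrative stand-in for jwt.encode)."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url_encode(sig)}"


def decode_hs256(token: str, key: bytes, algorithms: list[str]) -> dict:
    """Reject any token whose header claims an algorithm outside the allowlist."""
    header_b64, body_b64, sig_b64 = token.split(".")
    header = json.loads(_b64url_decode(header_b64))
    if header.get("alg") not in algorithms:
        # This check is what the explicit `algorithms` parameter buys you:
        # a forged header claiming alg="none" (or RS256) is refused outright.
        raise ValueError(f"algorithm {header.get('alg')!r} not allowed")
    expected = hmac.new(key, f"{header_b64}.{body_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("signature mismatch")
    return json.loads(_b64url_decode(body_b64))


token = encode_hs256({"sub": "user-42"}, b"server-secret")
claims = decode_hs256(token, b"server-secret", algorithms=["HS256"])
```

Without the allowlist check, a verifier that trusts the header's `alg` field can be tricked into verifying a token with the attacker's chosen algorithm.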
Auth system now fully functional: password hashing with bcrypt salting, password verification, JWT creation with expiration, and token decoding with validation.

Next steps: complete the User model in SQLAlchemy, create the users table, and build registration and login endpoints.

#100DaysOfCode #Python #FastAPI #Debugging #bcrypt #DependencyManagement #BackendDevelopment #CloudEngineering #DevOps #LearningInPublic #TechCareer #BuildInPublic
Debugging bcrypt compatibility issue with passlib in Python
More Relevant Posts
-
n8n is a powerhouse for workflow automation, offering the perfect balance between a user-friendly UI and the flexibility to inject custom logic where it’s needed most. Self-hosting n8n gives you incredible control, but things get interesting the moment you try to bring Python into the mix. I recently spent some time navigating the complexities of adding a Python runner to a self-hosted setup. While it sounds straightforward, the reality of Docker permissions and environment pathing can lead to a fair share of troubleshooting. I’ve documented the exact roadblocks I hit and the final fix that actually works. If you’re looking to supercharge your n8n workflows with custom Python scripts without the headache, this is for you. Check out the full guide here: https://lnkd.in/gwK4bvjH #n8n #Python #SelfHosting #Automation #DevOps #LowCode #DataEngineering
-
Your codebase is a mess and you know it.

Ruff: the fastest Python linter that actually fixes your code automatically.

What it does:
→ Finds code-quality issues 10-100x faster than other linters
→ Auto-fixes most issues (no manual cleanup needed)
→ Replaces 8 different tools (Black, isort, flake8, etc.)
→ Written in Rust; runs instantly even on huge codebases

Setup: uv pip install ruff
Cost: $0 (open source)

The difference:
Running flake8 + black + isort on 50,000 lines: 45 seconds
Running ruff on the same codebase: 0.3 seconds
150x faster. One command instead of three.

Real scenario from this week: inherited a messy data science repo. 30,000 lines. Inconsistent formatting. Import chaos.
Before: spent 2 hours manually fixing imports and formatting
After: ruff check --fix (3 seconds, everything cleaned up)

What it catches:
- Unused imports
- Undefined variables
- Formatting inconsistencies
- Import sorting
- Code complexity issues
- Security vulnerabilities

The magic:
```bash
ruff check .        # Shows all issues
ruff check --fix .  # Fixes everything automatically
ruff format .       # Formats like Black but faster
```

Works in:
- Pre-commit hooks (catches issues before push)
- CI/CD pipelines (fails builds on quality issues)
- VS Code/PyCharm (real-time feedback)

This is what separates hobby projects from production code. Professional Python developers don't manually format code. They automate it and move on to real problems. Ruff makes that effortless.

🔗 https://lnkd.in/gsb2FiZv

#Python #CodeQuality #DevTools
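Ruff can also be configured per-project in pyproject.toml; a minimal sketch (the rule selection here is a common starting point, an assumption rather than anything from the post):

```toml
[tool.ruff]
line-length = 88          # match Black's default

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting
```

Committing this file means every contributor, pre-commit hook, and CI job applies the same rules.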
-
I just lost a PR that I really wanted to get merged. And it wasn't because the code was wrong.
https://lnkd.in/gtJcWddb

A few days ago, I got a documentation PR merged into the Python SDK of Hatchet. Small change, clean diff, smooth review. That win gave me confidence, so I picked up a slightly more "serious" issue: replacing a generic ValueError in ClientConfig token validation with a domain-specific HatchetConfigurationError.

On paper, it was a tiny refactor:
- Better exception semantics
- Alignment with the SDK's error hierarchy
- No runtime logic change

Simple, right? Not really. What followed:
- Discussion about Pydantic validator behavior
- Backwards-compatibility concerns
- Exception inheritance (ValueError vs. a custom base class)
- Missing unit-test coverage
- Comment-only diffs outside the stated scope
- Version-bump requirements in pyproject.toml
- Changelog formatting consistency
- Re-exporting the new exceptions properly

Then came the lint failure. CI failed because of formatting, so I ran black on the whole SDK. That made the diff huge. I had to uncommit it, re-run black only on the modified files, then force-push, which created even more back-and-forth in the review history.

Every small detail mattered. And the maintainers were right. They aren't just reviewing code; they're protecting:
- Public API stability
- Semantic versioning guarantees
- Developer experience
- Long-term maintainability
- Clean git history

Eventually, the PR was closed unmerged. For a small change. That stung. But here's what I learned:
- In mature OSS projects, even "non-breaking" refactors can be breaking.
- Exception types are API surface.
- Scope discipline in PRs is critical.
- Tests are part of the contract.
- Versioning and changelog hygiene are not optional.
- Don't run formatters on the entire repo unless asked.
- Maintainer time is sacred.

My doc PR got merged because it reduced friction. My refactor PR increased surface area. Open source isn't just about being technically correct. It's about being ecosystem-aware.
Still contributing. Still learning. #OpenSource #YC #OSS #Golang #Go #Python
-
Are you ready to level up your backend development skills? I just released a comprehensive 4-hour crash course on FastAPI, one of the fastest-growing and most modern web frameworks for Python today. Whether you're a Django/Flask fan or coming from a Node.js background like me, this video will show you why FastAPI is a game-changer for building high-performance REST APIs.

In this video, we build a real-world Spotify Clone from scratch, covering:
- Modern Python Tooling: setting up virtual environments and project structures [09:15]
- FastAPI Essentials: creating your first server, routing, and dynamic path parameters [16:16]
- Data Validation: leveraging Pydantic schemas for cleaner, safer code [02:57]
- Database Integration: connecting to PostgreSQL using the SQLAlchemy ORM [02:44]
- Security & Auth: implementing JWT (JSON Web Tokens) and protecting routes [03:56:07]
- Auto-Documentation: using the built-in Swagger UI to test your endpoints instantly [04:58]

Watch the full course here: https://lnkd.in/dugZbs7u

I've designed this to be a hands-on experience, starting from "Hello World" and building up to complex relationships and authentication [03:57:56]. If you find the video helpful, please consider subscribing to the channel and leaving a comment; it really helps the algorithm reach more developers like you!

Happy coding,
Haider Malik
Build a Real-World Python API with FastAPI & Pydantic (Full Crash Course)
https://www.youtube.com/
-
Deploying a FastAPI app shouldn't take longer than writing your first endpoint. Yet developers still spend hours on ASGI configs, Uvicorn tuning, and SSL certificate setup just to get "Hello World" live.

There's a faster way. Our FastAPI deployment guide shows you how to:
→ Skip the server configuration headaches
→ Go from local to live in 60 seconds
→ Get automatic Swagger docs at /docs instantly

Stop wrestling with infrastructure. Start shipping.
Full guide: https://lnkd.in/dywzJeu5

#FastAPI #Python #WebDevelopment #DevOps #Deployment #API
-
I lost count of how many hours I've wasted debugging "it works on my machine" issues early in my career. The culprit? Almost always the Python environment.

Here's what I wish someone had told me on day one:
🔹 Virtual environments aren't optional; they're foundational
🔹 One project = one isolated environment. Always.
🔹 Never touch system Python. Ever.
🔹 Pin your dependency versions before they pin you

I just published a deep-dive guide on truepythoneer.com covering everything you need to set this up right:
✅ venv, virtualenv, and conda, and when to use which
✅ Managing multiple Python versions with pyenv (macOS/Linux) and the py launcher (Windows)
✅ Fixing permission errors, compilation failures, and dependency conflicts

Whether you're just starting out or have years of experience, getting your environment right is the single highest-leverage habit you can build.
https://lnkd.in/g9EuC-Wt

#Python #SoftwareDevelopment #PythonTips #DevTools #CodingBestPractices
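The "one project = one environment" rule is easy to see in action: the stdlib `venv` module records the creating interpreter's location in `pyvenv.cfg`, which is exactly why an environment breaks when that interpreter is upgraded or moved. A small sketch:

```python
import tempfile
import venv
from pathlib import Path

# Create a throwaway environment (with_pip=False keeps it fast)
target = Path(tempfile.mkdtemp()) / "demo-env"
venv.create(target, with_pip=False)

# pyvenv.cfg pins the env to the interpreter that created it:
# it contains a "home = ..." line plus the Python version.
cfg = (target / "pyvenv.cfg").read_text()
print(cfg)
```

Because the environment resolves everything relative to that recorded `home`, switching Python versions means recreating the venv, not editing it in place.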
-
I just migrated a production Python codebase to Go using Claude Code as the primary coding agent. The project was Kodit, an MCP server for indexing code repositories.

The code compiled, the tests passed, but it didn't work. No surprise there, right? It took about two weeks to get right.

The real value of this experience was learning what goes wrong when you let an AI do a cross-language migration: dead-code accumulation, phantom features rebuilt from deprecated references, missing integration tests, and context-window limits causing half-finished refactors. Claude is a powerful but literal executor. The gaps in your design become the bugs in your system.

I wrote up the full methodology, the automation script, and everything that went wrong so you can learn from my mistakes.
-
This is fabulous and an excellent read on the real-world experience of an AI-accelerated language port done well. I can't wait to decimate the size of our HelixML Mac app with the new slimline Kodit container 😍
AI Consulting & Development via Winder.AI | Author of Reinforcement Learning | Co-founder of Helix.ML
-
New Blog Post 😃 In this #CopilotStudio post I share how to make your own MCP Server using Python 🐍 If you’ve never created an MCP Server before you might be surprised how straightforward the building process can be 🏗️ https://lnkd.in/gQav32d4
-
I spent years thinking "concurrency" meant "multithreading", until a brutal production bug proved me wrong. Here's the critical difference that saved clients (and my sanity) from terrible performance:

❌ **Async != Multithreading** (even though both aim for concurrency)

Think of it this way:
* **Async (asyncio):** one highly efficient chef 🧑🍳 managing *many* tasks at once. While waiting for water to boil, they're chopping veggies. Everything runs on a *single thread*. Perfect for I/O-bound jobs (network requests, database calls).
* **Multithreading:** multiple chefs 🧑🍳🧑🍳🧑🍳 working in parallel. But in Python, due to the GIL, they often end up arguing over the same spice rack! Best for CPU-bound tasks if you can work around the GIL, or for truly parallel blocking I/O.

This isn't just syntax; it's a fundamental architectural decision. Get it wrong, and your app crawls. Get it right, and your users will thank you.

What's one common Python misconception you wish you'd learned sooner? 👇

#Python #AsyncIO #Multithreading #Concurrency #Backend
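The single-chef model is easy to demonstrate with the standard library: three simulated I/O calls overlap on one thread, so total wall time is roughly the longest individual wait, not the sum. A minimal sketch:

```python
import asyncio
import time


async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (network request, DB query)
    await asyncio.sleep(delay)
    return name


async def main() -> list[str]:
    # All three "requests" wait concurrently on a single thread
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1))


start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # well under the 0.3s a sequential run would take
```

For genuinely CPU-bound work, where the chefs fight over the GIL, `concurrent.futures.ProcessPoolExecutor` sidesteps it by using separate processes.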