🔍 Analyzed my own code with SonarQube — here's what I found.

As part of my Software Engineering course (4th semester) at Dawood University of Engineering & Technology, I applied SonarQube to ResumRank-AI, my AI-powered resume screening app built with Python, Flask, spaCy, and pdfplumber. Instead of a textbook demo, I ran it on real production-intended code.

The results were eye-opening:
🔴 8 Security Hotspots — CSRF vulnerabilities & sensitive data exposure in Flask routes
🟠 58 Maintainability violations — Cognitive Complexity exceeding the threshold in the ranking logic
🟡 60 Total issues — consistency, reliability & maintainability across 3.7k lines
⚡ 11 Reliability issues — unhandled edge cases in backend processing
📊 7 hours of estimated Technical Debt
✅ Quality Gate: Passed

The biggest takeaway: code that works is not the same as code that's secure and maintainable. SonarQube flagged real vulnerabilities in my own codebase that I hadn't noticed — proving why static analysis belongs in every developer's workflow.

🔗 GitHub Repo: https://lnkd.in/d6MKMhU4
📄 Full Analysis Report: https://lnkd.in/gMgW-avE

#SonarQube #Python #SoftwareEngineering #CodeQuality #StaticAnalysis #DUET #ArtificialIntelligence #Flask #OpenSource
SonarQube Analysis of ResumRank-AI Codebase
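The post doesn't show the flagged routes, but the standard fix for the kind of CSRF hotspot SonarQube raises on Flask apps is per-session token validation (in practice often via Flask-WTF's CSRFProtect). As a dependency-free sketch of the underlying idea only — the function names and session shape here are my own, not from the ResumRank-AI codebase:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Store a fresh random token in the session and return it for embedding in forms."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Compare the submitted token against the session copy in constant time."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

`hmac.compare_digest` matters here: a plain `==` comparison can leak timing information about how many leading characters matched.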
My first version of the Task Manager CLI was embarrassing. Everything in one file. No error handling. Variables named x, temp, data2. It worked — until it didn't. And when it broke, I had no idea where to start debugging.

That moment of staring at my own code and not understanding it was one of the most important moments in my development as an engineer.

I refactored the entire project. Broke it into modules. Added structured exception handling. Rewrote the validation logic. Debugging effort dropped by 40%. More importantly, I could actually read my own code.

The lesson wasn't technical. It was about ego. Bad code is often the result of being in a hurry to "finish" — instead of being willing to do it right. I now build slower and ship cleaner. And I'm better for it.

Have you ever looked back at old code and learned something important about yourself?

#Refactoring #CleanCode #Python #SoftwareEngineering #LessonsLearned
Update from my previous post [https://lnkd.in/dVx_NrDQ]:

From "Arrow Code" to Clean Architecture. 🏹 ➡️ 🧱

I’ve hit a major turning point in my Python journey. I realized my code was starting to look like an arrow — layers upon layers of if statements and while loops pushing my logic further and further to the right. In professional dev circles, this is called Arrow Code, and it's a nightmare to maintain.

Here’s how I "flattened" my latest project:
✅ Decomposition: I broke down massive, nested blocks into small, dedicated functions. Each function now does one thing well, making the main logic readable at a single glance.
✅ The "Gatekeeper" pattern: Instead of scattered validation, I built a centralized handler to act as a security guard for all user inputs.
✅ State management: I’m now mastering the "baton pass" — using return values and arguments to move data (like budgets) safely through the app instead of relying on global variables.
✅ Professional workflow: I’m officially using Git branches and pull requests on GitHub to review my own work and track my architectural improvements.

The goal isn't just to write code that the computer understands; it’s to write code that other humans can read. 🚀

#CleanCode #Python #SoftwareEngineering #Refactoring #CodingJourney #BuildInPublic
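The "flattening" described above is usually done with guard clauses: check the bad cases first and bail out, so the happy path never drifts rightward. A minimal sketch using a budget example to match the post's theme (the function name, categories, and error messages are hypothetical, not from the actual project):

```python
def apply_expense(budget: float, amount: float, category: str) -> float:
    # Guard clauses: reject invalid input early instead of nesting if/else blocks.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if category not in {"food", "rent", "transport"}:
        raise ValueError(f"unknown category: {category}")
    if amount > budget:
        raise ValueError("amount exceeds remaining budget")
    # The happy path stays at the top indentation level — no arrow shape.
    return budget - amount
```

Note how the "gatekeeper" checks and the "baton pass" (returning the new budget instead of mutating a global) both show up in a function this small.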
From a simple log parser to simulating real SRE scenarios.

I extended my Log Analyzer project to make it more aligned with real-world production systems and incident handling.

🔧 What’s new:
• Regex-based log parsing to extract timestamp, log level, and message
• Top-N error analysis using Python’s Counter
• Error spike detection based on a time window (simulating incident conditions)

📊 Example insight: the tool can now detect abnormal error spikes within a short duration — something SREs rely on during production incidents.

💡 What I learned: log analysis isn’t just about counting errors — it’s about identifying patterns, trends, and anomalies over time.

🔗 Project: https://lnkd.in/dEZyK7qH

Next step: exploring real-time log monitoring and alerting integrations. Would love your feedback!

#SRE #DevOps #Python #Observability #SiteReliabilityEngineering #LearningInPublic #GitHub
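A minimal sketch of how those three features could fit together, assuming a `TIMESTAMP LEVEL MESSAGE` line format — the actual project's log format, function names, and spike thresholds may differ:

```python
import re
from collections import Counter
from datetime import datetime, timedelta

# Assumed format: "2024-01-01 10:00:00 ERROR db down"
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>[A-Z]+) (?P<msg>.*)$"
)

def parse(lines):
    """Yield (timestamp, level, message) tuples, skipping malformed lines."""
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            yield datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"), m["level"], m["msg"]

def top_errors(entries, n=3):
    """Top-N most frequent ERROR messages via Counter."""
    return Counter(msg for _, level, msg in entries if level == "ERROR").most_common(n)

def has_spike(timestamps, window=timedelta(minutes=1), threshold=3):
    """True if `threshold` or more events fall inside any sliding time window."""
    ts = sorted(timestamps)
    for i in range(len(ts)):
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

The sliding-window check is O(n²) in the worst case as written; a two-pointer version brings it to O(n) once the timestamps are sorted.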
🚀 Solved the Two Sum Problem with an Optimal Approach | LeetCode

Today I solved the classic Two Sum problem and focused on writing an efficient solution rather than just making it work.

💡 Problem: given an array of integers, return the indices of two numbers that add up to a target.

⚡ Approach: instead of the brute-force method, I used a HashMap (dictionary) to store elements and their indices.

👉 Logic:
• Traverse the array once
• For each element, calculate: difference = target - current value
• Check if the difference already exists in the HashMap
• If yes → return the indices instantly

🔥 Time & space complexity:
⏱ Time complexity: O(n)
📦 Space complexity: O(n)
🚀 Optimization: improved from brute force O(n²) → O(n) using HashMap lookup

🏆 Result:
✔️ Accepted (all test cases passed)
✔️ Runtime: 0 ms (beats 100%)

📌 Key learnings:
• A HashMap enables constant-time lookup
• Thinking in terms of the complement simplifies problems
• Optimization is key in coding interviews

💻 Tech stack: Python | Data Structures & Algorithms

📈 Consistency + practice = growth 🚀

#leetcode #dsa #python #algorithms #coding #programming #softwareengineering #100DaysOfCode #tech
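The complement-lookup logic described above fits in a few lines of Python (the post doesn't include its code, so this is a standard rendering of the same approach):

```python
def two_sum(nums, target):
    # Map each value seen so far to its index; one pass, O(n) time and space.
    seen = {}
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]  # earlier index first
        seen[value] = i
    return []  # no pair found
```

The key move is storing each element *after* checking for its complement, which also handles duplicate values correctly (e.g. `[3, 3]` with target 6).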
The Secret to Clean Code: Snake Case 🐍

Have you heard of snake_case 🐍? If you are diving into Python or working within a team of developers, you’ve likely seen variable names like 'user_login_count' or 'total_price_usd'. This specific style — words written in lowercase and joined by underscores — is known as snake_case.

Why does it even matter? In the world of coding, readability is just as important as functionality. Here is why snake_case is a standard for many:

Readability — it mimics natural spacing, making it easy for the human eye to distinguish separate words at a glance.
Context — instead of a vague abbreviation like 'ua', a name like 'user_age' tells anyone reading your code exactly what data is being stored.
Consistency — following naming conventions keeps a large codebase professional and maintainable, whether you're working solo or with a global team.

Writing clean code is a form of professional etiquette. It respects the time of your future self and your colleagues.

What’s your preferred naming convention? Let’s discuss below! 👇

#Python #CleanCode #ProgrammingTips #DataScience #WebDevelopment #TechCommunity
Most beginner backend projects die in refactoring. Here's the structure I use to prevent that.

When I built my Task Manager CLI, I learned this the hard way — a monolithic file that worked until it very much didn't. After refactoring, here's the structure I now start with:

Before writing a single line:
→ Define your data model first
→ Identify all operations (CRUD) you'll need
→ Map inputs, outputs, and error states

While building:
→ One module per concern (routes, models, utils, exceptions)
→ Validate inputs at the boundary — not deep inside logic
→ Handle errors explicitly — no silent failures

Before shipping:
→ Test the unhappy paths, not just the happy ones
→ Read your own code like a stranger would

This approach reduced my debugging effort by 40% on a real project. It works at any scale — from a CLI tool to a FastAPI service.

What's the first thing you do when starting a new backend project?

#BackendDevelopment #Python #FastAPI #SoftwareEngineering #CodingTips
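The "validate at the boundary" rule above can be sketched like this — the task shape, field names, and exception class are hypothetical, chosen only to illustrate the pattern:

```python
class ValidationError(Exception):
    """Raised at the input boundary so core logic never sees malformed data."""

def parse_task(raw: dict) -> dict:
    # All validation happens here, at the edge; downstream code can trust the result.
    title = raw.get("title", "").strip()
    if not title:
        raise ValidationError("title is required")
    priority = raw.get("priority", "normal")
    if priority not in {"low", "normal", "high"}:
        raise ValidationError(f"invalid priority: {priority}")
    return {"title": title, "priority": priority}
```

Because bad input fails loudly with a dedicated exception type, the "no silent failures" rule comes for free: callers either get clean data or a `ValidationError` they must handle explicitly.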
🚀✨ Sharing something I’ve been building — Code Crawler 🕷️

Ever joined a large Python codebase and spent days 😵💫 figuring out what calls what? Yeah… that frustration led me to build this 👇

💡 Code Crawler helps you understand any Python codebase visually and instantly. Just point it to a GitHub repo (or even a local folder 📂), and it parses everything using the AST to generate a function- and class-level lineage graph 🧬

🔍 What makes it powerful:
⚙️ Resilient crawl pipeline (Temporal) — survives restarts & scales smoothly
🔗 Cross-repo lineage — track relationships across repos + inheritance chains
🌿 Branch comparison — see what changed & what it impacts downstream
▶️ Run functions from the UI — no test setup needed
🧪 Smart mocking — auto-detect dependencies & stub them easily
📊 Batch testing — run multiple test cases together with instant results
📂 Local support — drag & drop projects, no GitHub required
🏢 Multi-tenant architecture — full isolation + invites + admin panel

💭 Why I built this: understanding a new codebase shouldn’t feel like solving a puzzle 🧩 Instead of jumping across hundreds of files:
👉 Visualize relationships 📈
👉 Trace flows instantly 🔄
👉 Run functions with real inputs ⚡
All in one place.

🚧 Still building, but already super useful for exploring codebases I didn’t write. Would love feedback from folks who’ve struggled with large Python projects 🙌🔥

🔗 https://lnkd.in/gtD-5juu

#Python 🐍 #SoftwareEngineering 💻 #DevTools 🛠️ #OpenSource 🌍 #FastAPI ⚡ #React ⚛️
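The core AST trick behind a call-lineage graph can be shown in a few lines using Python's standard `ast` module. This is not Code Crawler's actual implementation (which isn't shown in the post), just a sketch of the idea: walk the tree, collect function definitions, and record the bare names each one calls.

```python
import ast

def function_calls(source: str) -> dict:
    """Map each function name in `source` to the sorted names it calls directly."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for child in ast.walk(node):
                # Only simple calls like `foo()`; attribute calls (obj.foo())
                # and cross-module resolution need much more machinery.
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    calls.add(child.func.id)
            graph[node.name] = sorted(calls)
    return graph
```

A real tool layers import resolution, class hierarchies, and cross-repo linking on top of this, but the `ast.walk` + `isinstance` pattern is the foundation.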
🚀 Day 54 of My LeetCode Journey

🔍 Problem: Find the Index of the First Occurrence in a String

Today’s problem focused on searching for a substring inside a string — a very common concept in interviews and in real-world applications like text processing and search engines.

💡 Key idea: compare the substring (needle) with every possible starting position in the main string (haystack) until we find a match.

🧠 What I learned:
• How string matching works internally
• The importance of boundary conditions (like empty strings)
• The difference between brute-force and optimized approaches
• Why built-in methods like indexOf() are efficient

⚡ Approaches:
• Built-in method (indexOf) — simple and efficient
• Manual iteration (brute force) — helps understand the logic deeply

📈 Time complexity: O(n * m) for the brute-force approach

🔥 Takeaway: even simple problems can teach core fundamentals. Mastering these basics builds a strong foundation for advanced algorithms like KMP.

#Day54 #LeetCode #Java #DataStructures #CodingJourney #Programming #InterviewPreparation
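The brute-force approach described above, sketched in Python for consistency with the rest of this page (the post itself used Java's `indexOf()`):

```python
def str_str(haystack: str, needle: str) -> int:
    """Return the index of the first occurrence of needle, or -1. O(n*m) worst case."""
    if not needle:
        return 0  # Boundary condition: an empty needle matches at index 0.
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):  # Try every valid starting position.
        if haystack[i:i + m] == needle:
            return i
    return -1
```

Note the `n - m + 1` bound: it both avoids out-of-range slices and handles the "needle longer than haystack" boundary case (the loop body simply never runs).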
>> Not every PR gets merged. This one taught me more than most that did.

While contributing to Scipy-stubs, I worked on the issue "CI: only type-check and stubtest if needed." The goal was simple: avoid running expensive checks unless the relevant files actually changed.

My approach:
- Initially I started with `dorny/paths-filter`, since the built-in `paths` trigger cannot express two conditions.
- Then some `zizmor` errors highlighted a supply-chain risk: to guard against supply-chain attacks, third-party actions should be pinned to a full-length commit SHA instead of a mutable tag like v6. (https://lnkd.in/gAG4vfyn)
- We then decided to drop the third-party action (dorny/paths-filter) entirely and split the workflow into two files (I created stubset.yml).

And then came the real insight: splitting the workflow meant we could no longer require the typecheck/typetest jobs to pass as status checks, which would silently break GitHub's auto-merge feature. That was the dealbreaker.

The PR is closed, but the decision was right. A workflow that silently breaks auto-merge causes more problems than it solves. Not every contribution needs to be merged to be valuable. Sometimes mapping the dead ends is enough. If this saves someone an hour of debugging, worth it.

Drop a comment if you want to discuss more.

#GitHubActions #DevSecOps #OpenSource #Python #SupplyChainSecurity #Scipy #CI/CD
Built a Python-based Directory Sync Tool to compare and synchronize files between two directories with reliability and control.

Instead of relying only on file names or timestamps, the tool uses a combination of metadata and SHA-256 hashing to accurately detect new, modified, and missing files.

Key highlights:
• Recursive directory scanning with structured metadata (name, extension, size, hash)
• Efficient change detection using size-first filtering followed by hash comparison
• Memory-efficient hashing using chunk-based file reading (handles large files)
• Synchronization support with metadata preservation using shutil.copy2
• Safe cleanup by optionally removing extra files from the destination

While building this, I focused on moving beyond a basic script and treating it like a real tool: structuring the code into clear components, improving output readability, and adding validation and error handling to make it more reliable in real use.

GitHub: https://lnkd.in/gt-Ec3rF

#Python #CLI #GitHubProjects #SoftwareDevelopment #LearningByBuilding #SystemsThinking
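The chunk-based hashing mentioned above is a standard pattern; here is a minimal sketch (the 64 KiB chunk size is a typical choice, not necessarily what the project uses):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Hash a file in fixed-size chunks so memory use stays flat even for huge files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel calls f.read(chunk_size) until it returns b"".
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Pairing this with a size-first filter, as the post describes, is what makes sync fast: two files with different sizes can never match, so the expensive hash is only computed when sizes agree.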