🚀 Built my first CLI-based Personal Finance Manager using Python & SQLite 💻💰

Today was one of those days where things finally clicked. I started with a simple idea: track income and expenses from the terminal. But along the way, I ended up learning some core backend concepts that every developer should understand.

🔧 Here’s what I worked on:
• Designed a proper database schema (and fixed it multiple times 😅)
• Learned why data modeling matters (type + amount > separate income/expense columns)
• Implemented input validation to prevent crashes
• Built a clean CLI menu with error handling
• Used SQL aggregation (SUM) to calculate balance
• Understood the difference between fetchall() and fetchone()
• Handled edge cases like NULL values from SQL
• Fixed logical bugs in control flow and menu mapping

💡 Biggest takeaway: Writing code is easy. Designing logic and handling edge cases is where real learning happens.

There were moments where things didn’t work (wrong queries, wrong data types, tuple confusion 😵), but debugging those issues taught me more than just copying solutions ever could.

📈 Current features:
✔ Add transactions
✔ Calculate balance using SQL
🔄 Next: List transactions + improve UI

This is part of my journey through Harvard’s CS50 Python course — and I’m starting to feel the shift from “just coding” to actually building software.

#Python #CS50 #SQLite #Programming #LearningInPublic #100DaysOfCode #DeveloperJourney
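The balance calculation, the fetchone() behavior, and the NULL edge case from that list can be sketched like this. The schema and names here are hypothetical — the author's actual code isn't shown — but the pattern (a single `type` column plus COALESCE around SUM) is the standard one:

```python
import sqlite3

# Hypothetical schema in the spirit of the post: one `transactions` table with
# a `type` column instead of separate income/expense columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (id INTEGER PRIMARY KEY, type TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO transactions (type, amount) VALUES (?, ?)",
    [("income", 1000.0), ("expense", 250.0), ("expense", 100.0)],
)

def balance(conn):
    # COALESCE guards against the NULL that SUM returns on an empty table --
    # the SQL edge case the post mentions.
    row = conn.execute(
        "SELECT COALESCE(SUM(CASE WHEN type = 'income' THEN amount "
        "ELSE -amount END), 0) FROM transactions"
    ).fetchone()  # fetchone() gives one row as a tuple; fetchall() gives a list of tuples
    return row[0]

print(balance(conn))  # 650.0
```

Indexing into the tuple (`row[0]`) rather than using the row directly is exactly the "tuple confusion" the post alludes to: every cursor row comes back as a tuple, even for a single column.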
Building CLI Finance Manager with Python & SQLite
More Relevant Posts
Day 66/75 | LeetCode 75

Problem: 714. Best Time to Buy and Sell Stock with Transaction Fee
Difficulty: Medium

Problem Summary: Given an array prices where prices[i] represents the stock price on day i, and a transaction fee, find the maximum profit you can achieve.

Constraints:
• You can make multiple transactions
• You must sell before buying again
• Each transaction incurs a fixed fee

My Approach: This problem is solved using Dynamic Programming with state optimization. Instead of maintaining a full DP table, we track two states:
• buy → Maximum profit when holding a stock
• sell → Maximum profit when not holding a stock

• Initialization:
– buy = -∞ (we haven’t bought yet)
– sell = 0
• Transition for each price:
– buy = max(buy, sell - price) (either keep holding or buy today)
– sell = max(sell, buy + price - fee) (either keep not holding or sell today after paying the fee)
• Final answer: sell

This works because at every step we decide whether to take an action (buy/sell) or skip it, while always keeping track of the best possible profit.

Complexity Analysis:
• Time Complexity: O(n)
• Space Complexity: O(1)

Key Takeaway: Stock problems often reduce to state machines. Tracking “holding” vs “not holding” states and optimizing transitions can simplify even complex trading constraints like transaction fees.

Question Link: https://lnkd.in/gz6hgkXw

#Day66of75 #LeetCode75 #DSA #Java #Python #DynamicProgramming #Greedy #MachineLearning #DataScience #ML #DataAnalyst #LearningInPublic #TechJourney #LeetCode
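The two-state transitions above translate almost line-for-line into code — this is one straightforward way to write them (the post doesn't show its own implementation):

```python
def max_profit(prices, fee):
    # Two-state DP: `buy` = best profit while holding a stock,
    # `sell` = best profit while not holding one.
    buy, sell = float("-inf"), 0
    for price in prices:
        buy = max(buy, sell - price)         # keep holding, or buy today
        sell = max(sell, buy + price - fee)  # keep waiting, or sell today (pay fee)
    return sell

print(max_profit([1, 3, 2, 8, 4, 9], 2))  # 8 (the LeetCode 714 sample)
```

One loop, two variables, O(n) time and O(1) space, exactly as the complexity analysis states.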
Hosting sessions on technical topics like VBA, Power BI & HTML always excites me, as they push me beyond my comfort zone in finance. One such session I had the opportunity to host yesterday was on “Python for Finance Professionals & Beginners”.

What made this session special was not just the content, but the intent of the participants - spending a Saturday morning (when most of us prefer to relax) to learn, explore and upskill. That mindset itself is inspiring. 🚀

Here’s what we covered in the session:
🔹 Understanding Frontend vs Backend and where Python fits
🔹 Why Python is becoming essential beyond Excel
🔹 Key concepts like libraries (with a focus on Pandas)
🔹 Live Demo:
✔️ Stock screening using Python
✔️ Merging 100+ Excel files in seconds
🔹 Showcasing Frontend power using my AI Hackathon Grand Finale project (HTML use case)
🔹 Web Scraping & Automation:
✔️ Automating the download of 26AS & AIS from the Income Tax portal
🔹 How to run Python:
✔️ Local setup
✔️ Google Colab (no installation needed)
…and a lot of practical insights into how Python can simplify real finance workflows.

💡 Key takeaway from the session: Python is not about coding…it’s about reducing repetitive work and thinking smarter.

A big thank you to BRAIN & BYTES, CA Krupanand Bammidi and CA Krishna Sri Myneni for the opportunity to host this session, and heartfelt thanks to all the participants who showed up with curiosity and commitment 🙌

🎥 Session Recording: https://lnkd.in/gMNDg8G6

If you’ve been thinking about learning Python but haven’t started yet - this might be your sign to begin.

#Python #FinanceProfessionals #Upskill #Automation #Learning #AI #DataAnalytics
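The "merging 100+ Excel files in seconds" demo isn't shown in the post, but the pattern behind it is usually a few lines of Pandas. A hedged sketch — function name and the assumption that all files share the same columns are mine; pass `pd.read_excel` (which needs openpyxl) for real .xlsx files:

```python
from io import StringIO
import pandas as pd

def merge_tables(sources, reader=pd.read_csv):
    # Read many same-shaped tables and stack them into one DataFrame.
    # For Excel: merge_tables(sorted(Path("reports").glob("*.xlsx")), pd.read_excel)
    frames = [reader(src) for src in sources]
    return pd.concat(frames, ignore_index=True)

# Demonstrated here with two tiny in-memory CSVs instead of real workbooks:
merged = merge_tables([StringIO("a,b\n1,2\n"), StringIO("a,b\n3,4\n")])
print(len(merged))  # 2
```

`ignore_index=True` renumbers the rows so the combined frame doesn't carry duplicate indices from the individual files.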
Python Essentials for Chartered Accountants & Finance Professionals | No Coding Experience Needed
🐍 Day 11 of 30 — My Monday morning report used to take 2 hours. Python now runs it in 4 seconds. The honest story — bugs included.

I'd been rebuilding the same pivot table every single Monday for months. I got tired of it. So I decided to automate it with Python. Zero experience. Just YouTube, documentation, and stubbornness.

Here's what the script does:
1. Reads the weekly claims CSV file automatically
2. Filters rows where status = "denied"
3. Groups by denial_code + payer_name
4. Calculates total count and revenue at risk
5. Sorts by revenue descending — highest risk first
6. Outputs a formatted Excel report
7. Emails it to my manager automatically

Here's the honest version history:
Version 1: 3 bugs. Nothing ran correctly. Spent a full Saturday debugging.
Version 2: Worked — but the output was completely unformatted.
Final version: Runs every Monday at 7am. 4 seconds. Professional output. Zero effort.

The weekend I spent building it? It has saved me 8+ hours every single month ever since.

The best investment of time is always the thing that eliminates itself. Build it once. Let it run forever. That's automation. In a billing office. On real data.

Tomorrow: how that same script found a billing trend my team had missed for 4 straight months. 👇

#Python #Automation #HealthcareData #LearningInPublic #DataAnalysis #Day11of30
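Steps 2–5 of the script map directly onto a Pandas filter-group-aggregate-sort chain. A sketch with made-up data and hypothetical column names (the post doesn't show its schema):

```python
import pandas as pd

# Toy stand-in for the weekly claims CSV; column names are assumptions.
claims = pd.DataFrame({
    "status": ["denied", "paid", "denied", "denied"],
    "denial_code": ["CO45", "N/A", "CO45", "PR1"],
    "payer_name": ["Acme", "Acme", "Acme", "Beta"],
    "amount": [100.0, 50.0, 200.0, 75.0],
})

report = (
    claims[claims["status"] == "denied"]                  # step 2: filter denied
    .groupby(["denial_code", "payer_name"])["amount"]     # step 3: group
    .agg(count="size", revenue_at_risk="sum")             # step 4: count + revenue
    .reset_index()
    .sort_values("revenue_at_risk", ascending=False)      # step 5: highest risk first
)
print(report.iloc[0]["revenue_at_risk"])  # 300.0 (CO45 / Acme tops the list)
```

Steps 6 and 7 (formatted Excel output and email) would sit on top of this, e.g. `report.to_excel(...)` followed by an SMTP send, but those details aren't shown in the post.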
I spent a couple of minutes staring at red lines this week. A missing semicolon was the culprit. Also my best teacher.

I've been translating real finance questions into SQL logic, and those syntax errors? Uninvited coaching sessions I didn't know I needed.

One concept finally clicked: WHERE vs HAVING.
WHERE filters rows before grouping.
HAVING filters grouped data after aggregating.

Sounds simple until you see how it changes the answers you pull from a finance dataset. Monthly revenue, order trends, customer segments. It stops being syntax and starts being business logic.

Also took my first steps into Pandas today. The shift from SQL tables to Python DataFrames is a whole new way of thinking. More on that soon.

One hurdle I'm hitting: running multiple SQL queries on one page only shows the last output. My data people, how do you handle this? Separate tabs, a specific IDE setup, something else? Drop your tips below. 👇

#VeekayBuilds #SQL #DataAnalytics #Python
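The WHERE-vs-HAVING distinction is easiest to see on a tiny runnable example. Table and values here are invented for illustration (sqlite3 so it runs anywhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("alice", 120.0, "paid"), ("alice", 80.0, "refunded"),
    ("bob", 40.0, "paid"), ("bob", 30.0, "paid"),
])

rows = conn.execute("""
    SELECT customer, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'paid'          -- row filter: applied BEFORE grouping
    GROUP BY customer
    HAVING SUM(amount) > 100       -- group filter: applied AFTER aggregation
""").fetchall()
print(rows)  # [('alice', 120.0)]
```

Swap the two clauses' roles and the answer changes: without the WHERE, alice's refunded 80 would count toward her total; without the HAVING, bob's 70 of paid revenue would appear in the result even though it's under the threshold.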
Day 5/30 — Book Recommendation System (CLI-Based) 📚

🔹 Project Overview:
Built a Python-based Book Recommendation System that suggests books based on user preferences, ratings, and behavior. The system uses collaborative filtering logic and provides personalized, mood-based, and popularity-based recommendations through a simple command-line interface.

🔹 Tools Used: Python | CSV | File Handling | Cosine Similarity | CLI

🔹 Key Features:
• User registration & login system 🔐
• Book search and browsing functionality 🔍
• Personalized recommendations using user similarity 🎯
• Mood-based suggestions (Happy, Sad, Motivated) 😊
• Book rating system (1–5 scale) ⭐
• Popular books recommendation for new users 📈
• CSV-based lightweight data storage 📂
• Error handling & logging system ⚙️

🔹 What I Learned:
• How recommendation systems work (collaborative filtering)
• Implementing similarity logic (cosine similarity)
• Managing structured data using CSV
• Designing modular Python applications
• Building real-world logic without heavy ML

🔗 GitHub Repository: https://lnkd.in/dbFKzn9p

Would love your feedback and suggestions! 🙌

#PythonProjects #RecommendationSystem #DataScience #BeginnerProjects #PortfolioProject #PythonLearning
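The core of user-similarity recommendations is the cosine similarity between two rating vectors. A minimal sketch — the users and ratings below are invented, and the real project presumably loads them from its CSV files:

```python
import math

def cosine(u, v):
    # u, v: dicts mapping book title -> rating (1-5).
    # Dot product is taken over books both users rated; norms over all ratings.
    shared = set(u) & set(v)
    dot = sum(u[b] * v[b] for b in shared)
    norm = (math.sqrt(sum(r * r for r in u.values()))
            * math.sqrt(sum(r * r for r in v.values())))
    return dot / norm if norm else 0.0

alice = {"Dune": 5, "The Hobbit": 4}
bob = {"Dune": 4, "The Hobbit": 5, "It": 2}
carol = {"It": 5}

print(round(cosine(alice, bob), 3))  # high: their tastes overlap strongly
print(cosine(alice, carol))          # 0.0: no books in common
```

A recommender then ranks the books a similar user rated highly that the target user hasn't read yet; the cold-start fallback for brand-new users is the popularity-based list mentioned in the features.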
Stop splitting your code into 500-token chunks. Your retrieval pipeline is broken and you don't even know it. Here's what to do instead.

Day 3 of building TriageCopilot in public. A graph-aware RAG agent that auto-triages GitHub issues.

Today was chunking day. The layer between raw data and the vector database. Get this wrong and retrieval is useless, no matter how good your model is. One chunker doesn't fit all.

Here's what I built:

→ Code: tree-sitter AST chunking (Python, JS, TS, Go)
Splits at function/class boundaries, not line counts. Falls back to sliding windows for unknown languages.

→ Markdown: heading-aware splitter
Splits on #/##/### boundaries. Short sections merge into the previous chunk so you don't get orphan fragments.

→ Discussions: issue + PR formatter
Emits a header chunk (title, labels, state), then runs the body through the markdown chunker.

Three chunkers. Three data shapes. One pipeline.

What surprised me: Python decorators broke the code chunker. tree-sitter wraps @decorator + function into a "decorated_definition" node, so my chunker was pulling the wrong symbol name. It took a while to catch because the chunks looked fine until you checked the metadata. The kind of bug you only find by writing tests.

Stack check:
→ Embeddings: voyage-code-3 for code, text-embedding-3-large for text
→ Vector DB: Qdrant with two collections (code + discussions)
→ All chunk IDs are stable UUIDs, so re-indexing is idempotent

Day 4 tomorrow: hybrid retrieval. BM25 + dense vectors + Reciprocal Rank Fusion. The search layer that ties everything together.

3 days in. 3 days shipped. On track.

Comment "TRIAGE" and I'll DM you the repo + progress as I post it.
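The heading-aware markdown splitter with a merge-short pass can be sketched in a few lines. This is my own minimal reconstruction of the described behavior, not TriageCopilot's code — the merge threshold in particular is an invented parameter:

```python
import re

def chunk_markdown(text, min_chars=40):
    # Split on #/##/### heading boundaries.
    chunks, current = [], []
    for line in text.splitlines():
        if re.match(r"^#{1,3} ", line) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    # Merge-short pass: glue tiny sections onto the previous chunk
    # so no orphan fragments reach the vector DB.
    merged = []
    for chunk in chunks:
        if merged and len(chunk) < min_chars:
            merged[-1] += "\n" + chunk
        else:
            merged.append(chunk)
    return merged

doc = "# Intro\nLong enough paragraph about the project goals here.\n## Tiny\nok\n"
print(len(chunk_markdown(doc)))  # 1 -- the tiny section merged upward
```

The AST chunker for code follows the same shape, except the boundaries come from tree-sitter nodes (function/class definitions) instead of a heading regex — which is where the `decorated_definition` wrapper node bit.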
I spent a weekend building a tool I actually needed: a PDF-to-Flashcard pipeline that runs 100% locally.

The Win: No subscriptions, no data exposure, and zero latency. Just Python and local intelligence.

The Stack:
→ PyMuPDF: Clean text extraction
→ Ollama running Llama 3 locally: High-performance local LLM
→ Streamlit for the interface (and Sithara Hayavadana — the standalone local UI is genuinely great for this kind of project)
→ Pandas: Instant Anki-compatible CSV exports

The Biggest Learning: Data preparation beats model size every time. I found that chunking strategy mattered more than prompt engineering or model choice.

The stack is entirely free — and yes, Keming Wang, free and open source tools were enough to build this 😁

I have shared the full article and technical breakdown in the comments below! 👇

Have you experimented with Ollama for your local workflows yet?
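Since the chunking strategy was the biggest learning, here is what a basic version of that step can look like: fixed-size word windows with overlap, so every extracted passage fits the model's context and nothing is lost at a boundary. Sizes and the function name are illustrative assumptions, not the author's actual settings:

```python
def chunk_words(text, size=200, overlap=40):
    # Slide a `size`-word window over the text, stepping by size - overlap,
    # so consecutive chunks share `overlap` words of context.
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

chunks = chunk_words("word " * 500, size=200, overlap=40)
print(len(chunks))  # 3 chunks, each at most 200 words
```

Each chunk would then be sent to the local model (via Ollama) with a "generate question/answer pairs" prompt, and the pairs collected into a DataFrame for the Anki-compatible CSV export.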
Day 65 of #90DaysOfCode

Today I built a Flask web application to collect and display cafe data, including location, opening hours, coffee quality, wifi strength, and power availability. The application allows users to submit cafe details through a form, which is validated on the backend and stored in a CSV file. The stored data is then rendered dynamically on a separate page.

Key features implemented:
• Form handling using Flask-WTF
• Input validation using WTForms
• Handling GET and POST requests in Flask
• Data storage using CSV files
• Dynamic rendering using Jinja templates
• Redirect flow after form submission

Key concepts learned:
• How form submission works in backend systems
• The difference between GET and POST requests
• Validating and processing user input
• Structuring Flask applications properly

This project gave me a clearer understanding of how backend systems handle user input and store structured data.

GitHub Repository: https://lnkd.in/gNvjnbZT

#Python #Flask #BackendDevelopment #WebDevelopment #SoftwareEngineering #90DaysOfCode
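Stripped of the Flask/WTForms machinery, the validate-then-append-to-CSV core of such an app looks like this. Field names are hypothetical (the post doesn't show its form schema), and a real app would write a header row once and append to a file instead of an in-memory buffer:

```python
import csv
import io

FIELDS = ["name", "location", "coffee_rating", "wifi_rating", "power_rating"]

def validate(form):
    # Return a list of error messages; empty list means the submission is valid.
    errors = []
    if not form.get("name", "").strip():
        errors.append("name is required")
    for field in ("coffee_rating", "wifi_rating", "power_rating"):
        if form.get(field) not in {"1", "2", "3", "4", "5"}:
            errors.append(f"{field} must be 1-5")
    return errors

def append_row(form, fh):
    # Append one validated submission; header writing is omitted here.
    csv.DictWriter(fh, fieldnames=FIELDS).writerow(
        {f: form.get(f, "") for f in FIELDS})

buf = io.StringIO()
ok_form = {"name": "Bean There", "location": "Delhi",
           "coffee_rating": "5", "wifi_rating": "4", "power_rating": "3"}
assert validate(ok_form) == []
append_row(ok_form, buf)
print(buf.getvalue().strip())  # Bean There,Delhi,5,4,3
```

Flask-WTF's contribution is doing this validation declaratively (field classes plus validators) and adding CSRF protection; the redirect-after-POST flow then sends the browser to the page that renders the CSV contents through a Jinja template.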
Are you ready to take your data management skills to new heights? Introducing our exciting course: "Mastering Python for Data Grouping and Validation"!

This dynamic course is designed for adults who want to unlock the potential of Python in the world of data. Whether you're aiming to elevate your career or simply want to dive into the fascinating realm of data management, we have just the ticket for you!

Course Features:
- Learn Python from the ground up, starting with programming basics and setting up your development environment.
- Dive deep into grouping information using powerful tools like lists, dictionaries, and the popular Pandas library.
- Master data validation techniques to ensure accuracy and reliability in your datasets.
- Get hands-on with reservation calculations and real-world applications that make your learning practical and relevant.
- Build dynamic projection models and forecasting techniques that will impress your colleagues and bosses alike.

Benefits of taking this course:
- Enhance your data management skills and open new career opportunities.
- Equip yourself with real-world applications that can be directly applied to your job.
- Network with fellow learners and industry professionals.
- Complete an engaging final project to showcase your newly acquired skills.

Don't miss this chance to become a Python pro and make data work for you! Sign up today and let's get started on this exciting journey together.

Ready to jump in? Visit us at https://lnkd.in/gJ7TmVCx and transform your data skills today!
If you're a Claude Code user, check out these terminal tools! Glad to see Starship and CShip getting the love they deserve!
AI Tech Lead | Senior Data Scientist | Writing a book on Post-training LLMs and Inference Optimization
Claude Code has pulled me back into the terminal full-time. These are the top tools for a productivity boost in your terminal:

1. Fish shell
→ An alternative to zsh and bash with autocomplete for commands, options, flags, and git branches
→ Syntax highlighting: immediately shows you if a command is valid or not
→ Automatically activates Python virtual environments
https://fishshell.com/

2. Starship
→ A fully customizable prompt
→ Shows your current folder, git branch, and active Python/TS environment at a glance
https://starship.rs/

3. Cship (Starship for Claude Code)
→ Brings Starship-level customization to the Claude Code status line
→ By default the status line is very barebones
→ Cship adds information on token usage and when your window resets, all in a customizable way
https://cship.dev/

4. Yazi
→ A graphical file manager that runs inside your terminal
→ Replaces the ls and cd loop with a fast, visual interface
→ Shows a preview of every file (code, images, even PDFs)
https://lnkd.in/ePcegMWA

5. Ripgrep
→ Searches your codebase for regex patterns faster than grep
→ Respects .gitignore, so no false positives in your .venv or node_modules folders

6. Atuin
→ Replaces Ctrl+R with a searchable, filterable history across sessions
→ Super useful when you need to find that command you ran two weeks ago
→ Allows syncing across machines. Searching for that command you ran on your other computer?
https://atuin.sh/

Are you using these? What else should I add to this list?

I write about data & AI every week. Subscribe to my newsletter to get each one in your inbox 👉 https://lnkd.in/echQG4Zu