Built something a bit different this evening. I wanted a better way to search through emails (especially job alerts), so I put together a small system that:
- indexes my email archive
- lets me search it instantly via a web interface
- pulls results from a FastAPI backend
- displays full emails on click (a bit like a lightweight Outlook)

It started as a quick idea and turned into a fully working tool with a frontend, API, and database behind it. There's still lots I could add (UI polish, smarter filtering, tagging, etc.), but I'm really pleased with how it came together. Nice reminder that sometimes the best way to solve a problem is just to build the thing you wish existed 👍

#Python #FastAPI #ITSupport #Homelab #Learning #Automation
Building a Custom Email Search System with Python and FastAPI
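The indexing half of a setup like this can be sketched in a few lines with SQLite's FTS5 full-text index, which a FastAPI route could then query. This is a minimal sketch under stated assumptions, not the author's implementation: the table layout and sample rows are invented for illustration.

```python
import sqlite3

# In-memory archive for the demo; the real tool would index mail files on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE emails USING fts5(subject, body)")
conn.executemany("INSERT INTO emails VALUES (?, ?)", [
    ("Job alert: Python developer", "New roles matching your profile"),
    ("Weekly newsletter", "This week in tech"),
])

def search(query: str):
    # FTS5 MATCH gives instant full-text search; ORDER BY rank puts the
    # best matches first. A FastAPI endpoint would just return this list.
    return conn.execute(
        "SELECT subject FROM emails WHERE emails MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()
```

A `GET /search?q=...` route calling `search()` is then enough for the "search it instantly via a web interface" part.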
Your webhook handler is probably slowing down your system without you realizing it. ⚡

I used to process webhook events directly inside the request cycle: receive request → process logic → update DB → return response. It worked… until traffic increased. Multiple webhook events started hitting at the same time, external services kept retrying when responses were slow, and suddenly the system was under pressure.

The problem: webhook providers don't care about your processing time. If you're slow, they retry. If they retry, you get duplicate load.

The fix: stop processing inside the request. Receive → validate → acknowledge fast, then push the actual work to a background queue.

Example approach:

# views.py
def webhook_handler(request):
    data = request.data
    process_webhook.delay(data)  # hand off to an async task
    return Response({"status": "received"})

What changed:
• Faster responses to the webhook provider
• No blocked request threads
• Better handling of traffic spikes
• The system stayed stable under load

Important detail: async alone is not enough. You still need idempotency to handle retries safely.

The insight: webhooks should be received fast, not processed fast.

#SoftwareEngineering #BackendDevelopment #Django #Python #SystemDesign #Webhooks #Scalability #Performance #Developers
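The idempotency piece called out above can be as small as a seen-before check keyed on the provider's event ID. A minimal, framework-free sketch: in production the seen-store would be Redis or a DB table and the task would run under Celery, and the `event_id` field name is an assumption about the provider's payload.

```python
# Stand-in for an atomic store like Redis SETNX or Django's cache.add.
_seen_events = set()

def process_webhook(data: dict) -> str:
    """Background-task body: skip deliveries we have already handled."""
    event_id = data.get("event_id")
    if event_id in _seen_events:
        return "duplicate-skipped"   # provider retried; do nothing
    if event_id:
        _seen_events.add(event_id)
    # ... actual business logic (DB updates, notifications) would run here ...
    return "processed"
```

With this guard in place, a retried delivery of the same event becomes a cheap no-op instead of duplicate work.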
𝗢𝗻𝗲 𝗺𝗮𝗿𝗸𝗱𝗼𝘄𝗻 𝗳𝗶𝗹𝗲. 𝗭𝗲𝗿𝗼 𝗿𝗲𝗽𝗲𝗮𝘁𝗲𝗱 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀. 𝗡𝗼 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗱𝗿𝗶𝗳𝘁.

Most engineers run OpenCode with the default build agent. OpenCode is a free, open-source Claude Code alternative: 95K GitHub stars, support for 75+ models, no subscription. Bring your own API key or use a ChatGPT subscription.

But the default agent is still general-purpose. New session. Blank context. The same stack explanation every morning.

I wrote my own operator agent instead. 100+ production workflows behind it. 4 stacks. It runs daily across n8n workflows, Python scripts, Next.js builds, and production automation.

Here's the exact 𝗗𝗮𝗶𝗹𝘆 𝗢𝗽𝗲𝗿𝗮𝘁𝗼𝗿 𝗔𝗴𝗲𝗻𝘁 config I run in OpenCode:

→ 𝗣𝗹𝗮𝗻-𝗳𝗶𝗿𝘀𝘁 𝗺𝗼𝗱𝗲. A numbered plan for every non-trivial task. No code gets written before the plan is confirmed.
→ 𝗔𝗻𝘁𝗶-𝘀𝘆𝗰𝗼𝗽𝗵𝗮𝗻𝗰𝘆 𝗿𝘂𝗹𝗲. The agent challenges assumptions before executing and flags gaps in my logic before touching a file, not after.
→ 𝗕𝗮𝘀𝗵 𝗽𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹. git push, npm run, and Python scripts all default to ask. Nothing destructive runs without confirmation.
→ 𝗥𝗲𝗮𝗱 𝗯𝗲𝗳𝗼𝗿𝗲 𝗲𝗱𝗶𝘁. The agent checks the current file state before every change. No overwrites from stale context.
→ 𝗦𝘂𝗯𝗮𝗴𝗲𝗻𝘁 𝗱𝗲𝗹𝗲𝗴𝗮𝘁𝗶𝗼𝗻. Parallel research and exploration run as scoped subtasks. The main agent stays on the build.
→ 𝗦𝘁𝗮𝗰𝗸 𝗹𝗼𝗰𝗸. n8n, Python, Next.js, Postgres, Prisma. Full context before the first prompt of every session.

The default agent is general-purpose. This one knows my risk tolerance, my repo patterns, and my stack before I type a single word.

Drop 𝗦𝗧𝗔𝗖𝗞 in the comments and I'll send you the full config. 𝗦𝗮𝘃𝗲 𝘁𝗵𝗶𝘀 for the next time you have 30 minutes and want to level up your OpenCode setup.

Follow 𝗕𝗶𝗹𝗮𝗹 𝗔𝗵𝗺𝗮𝗱 for more on 𝗻𝟴𝗻, 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲, OpenCode, and production automation.

#OpenCode #ClaudeCode #AIAgents #AutomationEngineering
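For illustration only, here is a hypothetical sketch of what such a single-markdown-file agent could look like. The frontmatter keys and file layout are assumptions based on OpenCode's markdown-agent convention, not the author's actual config; check the OpenCode docs for the exact schema.

```markdown
---
description: Daily operator agent for n8n, Python, Next.js, Postgres, Prisma
mode: primary
permission:
  bash: ask
---
You are my daily operator agent.

Rules:
1. Plan first: produce a numbered plan for every non-trivial task and wait
   for confirmation before writing any code.
2. Anti-sycophancy: challenge my assumptions and flag gaps in the request
   before touching a file, not after.
3. Read before edit: re-read the current file state before every change.
4. Delegate research and exploration to scoped subagents; stay on the build.

Stack: n8n workflows, Python scripts, Next.js, Postgres with Prisma.
```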
I have been thinking less about agents and more about paths.

A lot of software still assumes execution is:
- an app flow
- an async mesh
- or an agent stack

But human work usually looks messier and more real: you carry something, fork, test, return, defer, resume, and only sometimes settle.

So I have been exploring a path-style runtime where:
- push opens a fork
- pop rejoins or abandons a fork
- skills attach to the path
- memory keeps residue
- repeated signals reinforce instead of just duplicating noise

The interesting question is not "what is the next token?" It is more like:
- what is still being carried?
- what changed direction?
- what can be deferred?
- what actually needs to settle?

I may post a tiny Python demo of path execution next. https://lnkd.in/eairkdJK

#runtime #systems #paths #softwarearchitecture
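The push/pop/residue mechanics described above can be sketched as a toy Python class. All names here are illustrative guesses at the idea, not the author's runtime: push opens a fork, pop either rejoins (merging its residue into the parent) or abandons it, and repeated signals reinforce a counter instead of duplicating entries.

```python
from collections import Counter

class Path:
    def __init__(self):
        self.stack = [Counter()]       # each frame holds one fork's residue

    def push(self):
        """Open a fork: a fresh frame for speculative work."""
        self.stack.append(Counter())

    def signal(self, name, weight=1):
        """Repeated signals reinforce a count rather than duplicate noise."""
        self.stack[-1][name] += weight

    def pop(self, rejoin=True):
        """Close the fork: rejoin merges its residue back; abandon drops it."""
        frame = self.stack.pop()
        if rejoin:
            self.stack[-1].update(frame)

    @property
    def carried(self):
        """Answer 'what is still being carried?' on the current frame."""
        return dict(self.stack[-1])
```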
🚀 Vision AI demo with Google's Gemma 4

Over the past few days since its release, I've been exploring the vision capabilities of Gemma 4. I'm sharing a Streamlit web app where you can upload images and ask natural language questions about them — all running locally via Ollama.

✨ What it does
• Analyze any image (JPG, PNG, GIF, etc.)
• Ask questions in natural language
• Get AI-powered answers using gemma4:e2b (2B, 4-bit quantized)
• Runs entirely on your machine — no cloud APIs

🛠️ Tech stack
• Python
• Streamlit
• Ollama
• Gemma 4

💻 The project includes:
• Web UI (Streamlit)
• CLI demo
• VS Code debug setup

👉 Check it out: https://lnkd.in/dtegduUc

#Gemma4 #AI #MachineLearning #ComputerVision #LocalAI #Ollama #Streamlit #Python #OpenSource
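The core ask-a-question-about-an-image call could look roughly like this with the ollama Python client. This is a sketch, not the linked project's code: the model tag `gemma4:e2b` is taken from the post, and the message and response field names assume the client's chat format.

```python
def build_vision_messages(question: str, image_path: str) -> list:
    """One user turn with an attached image, in the ollama chat format."""
    return [{"role": "user", "content": question, "images": [image_path]}]

def ask_image(question: str, image_path: str, model: str = "gemma4:e2b") -> str:
    # Requires `pip install ollama`, a running Ollama server, and the model
    # pulled locally; never called at import time so the builder stays testable.
    import ollama
    reply = ollama.chat(model=model,
                        messages=build_vision_messages(question, image_path))
    return reply["message"]["content"]
```

A Streamlit front end would then pass `st.file_uploader` output and a text input into `ask_image`.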
🗂️ 5 𝐓𝐲𝐩𝐞𝐬 𝐨𝐟 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧 𝐄𝐯𝐞𝐫𝐲 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫 𝐒𝐡𝐨𝐮𝐥𝐝 𝐊𝐧𝐨𝐰

Pagination isn't just about splitting data — it's about doing it efficiently. The wrong approach can kill your API performance at scale. Here are the 5 most important pagination types:

1️⃣ 𝐎𝐟𝐟𝐬𝐞𝐭-𝐁𝐚𝐬𝐞𝐝 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧: The classic approach — skip N records, take M. Simple to implement but gets slow on large datasets.
?page=3&limit=10
⚠️ Avoid on tables with millions of rows.

2️⃣ 𝐂𝐮𝐫𝐬𝐨𝐫-𝐁𝐚𝐬𝐞𝐝 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧: Uses a pointer to the last seen item instead of a page number. Efficient, consistent, and well suited to real-time data.
?after=eyJpZCI6MTIzfQ==
✅ Used by the Twitter/X and Instagram APIs.

3️⃣ 𝐊𝐞𝐲𝐬𝐞𝐭 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧: Uses a unique column value (ID or timestamp) as the anchor. Blazing fast on indexed columns — scales beautifully.
?last_id=500&limit=10
✅ Best choice for high-performance backends.

4️⃣ 𝐏𝐚𝐠𝐞 𝐍𝐮𝐦𝐛𝐞𝐫 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧: The classic UI pattern — pages 1, 2, 3… Easy for users but needs proper indexing server-side.
📌 Great for search results and admin dashboards.

5️⃣ 𝐓𝐢𝐦𝐞-𝐁𝐚𝐬𝐞𝐝 𝐏𝐚𝐠𝐢𝐧𝐚𝐭𝐢𝐨𝐧: Fetches records within a specific time range. Perfect for feeds, logs, and event streams.
?from=2024-01-01&to=2024-01-31
📌 Common in analytics and reporting systems.

💡 Pro tip: Most production apps combine strategies — cursor-based for feeds, offset for search, time-based for reports.

Which pagination type do you use most in your projects? Drop it in the comments 👇

#WebDevelopment #BackendDevelopment #SoftwareEngineering #API #Programming #DatabaseOptimization #SystemDesign #CleanCode #100DaysOfCode #CodingTips #Developer #TechCommunity #Flutter #Python #JavaScript #mitprogrammer
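Keyset pagination (type 3) is easy to demonstrate against SQLite: anchor each page on the last seen id instead of an OFFSET, so every page is an indexed range scan. A self-contained sketch with an invented `users` table:

```python
import sqlite3

def fetch_page(conn, last_id=0, limit=10):
    """Keyset page: everything after the anchor id, in id order."""
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, limit),
    ).fetchall()
    next_cursor = rows[-1][0] if rows else None   # anchor for the next call
    return rows, next_cursor

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(25)])

page1, cursor = fetch_page(conn)                   # first 10 rows
page2, cursor = fetch_page(conn, last_id=cursor)   # next 10 rows
```

Unlike `OFFSET`, the cost of fetching page N here does not grow with N, because the `id > ?` predicate walks the primary-key index directly.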
Tired of Googling bash flags every time you need a simple script? This dev built a scripting language where the thought in your head maps one-to-one to what you type. { author: Maksim Iakovlev } https://lnkd.in/ehb36bmG
Let’s talk about something fun and interesting I did quite a while ago.

I optimized a keyword-driven query system, focusing on improving throughput and stability under constraints. The core problem: maximize queries/hour while avoiding conflicts, throttling, and system instability.

Key optimizations:
• Parallel processing with controlled concurrency
• Keyword-based query pipeline for structured input distribution
• User-agent rotation to distribute request patterns
• Retry + backoff mechanisms for handling transient failures
• Idempotent execution to avoid duplicate processing

One interesting tweak that made a noticeable difference: I introduced a keyword expansion strategy, combining each keyword with incremental alphabet variations (e.g., keyword + a, keyword + b, ...). This helped:
• Increase result coverage without changing the core keyword set
• Avoid repetitive query patterns
• Improve overall discovery efficiency per keyword

After multiple iterations, the system stabilized at ~70 leads/hour, up from ~15–20 leads/hour, with consistent performance.

This was one of the most interesting things I have worked on. It may not be flashy, but it shows how a small change can have a great impact!

Curious to hear your thoughts!

#Optimizations #Python #Software #SaaS
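The expansion tweak itself is tiny in code. A sketch, assuming a space-separated suffix (the post's "keyword + a" shorthand does not specify the separator, so that detail is a guess):

```python
import string

def expand_keywords(keywords, alphabet=string.ascii_lowercase):
    """Expand each seed keyword into 26 suffix variants, e.g. 'roofing a'."""
    return [f"{kw} {letter}" for kw in keywords for letter in alphabet]
```

Each seed keyword becomes 26 distinct queries, which widens result coverage and breaks up repetitive request patterns without touching the core keyword set.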
What if the question isn't which tool is best, but which process requirements you haven't mapped yet?

Every tool comparison post on LinkedIn ranks platforms by features: number of integrations, pricing tiers, UI screenshots. None of that tells you which one fits your actual workflow.

Four questions that matter more than any feature list:

1. Does your data need to stay on-premise? If yes, the field narrows to n8n self-hosted or custom Python. Zapier and Make are cloud-only. For regulated industries, this single question eliminates half the options.

2. How many exception paths does the process have? Under 3: Zapier handles it. Between 3 and 10: Make or n8n. Above 10: you need n8n's flexibility or custom code.

3. Who maintains it after deployment? If the ops team maintains it without engineering support, visual tools win. If an engineering team owns it with code review and CI/CD, Python or n8n with Git integration.

4. Does the workflow need version control? If deployments need rollback capability and audit trails, cloud-only tools with no Git backing create risk.

Map the process. Answer the four questions. The tool becomes obvious.

In your last tool evaluation, did anyone map the process requirements before the vendor demos started?

#WorkflowAutomation #ProcessAutomation #n8n #OperationsManagement
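The four questions reduce to a small filter over the candidate set. A sketch that encodes the post's cutoffs as written; the thresholds and tool names come directly from the text, and the exact boundaries are of course judgment calls:

```python
def shortlist_tools(on_premise, exception_paths, ops_maintained, needs_git):
    """Filter automation platforms by the four process questions."""
    candidates = {"Zapier", "Make", "n8n", "custom Python"}
    if on_premise:                       # Q1: cloud-only tools drop out
        candidates &= {"n8n", "custom Python"}
    if exception_paths > 10:             # Q2: complexity cutoffs
        candidates &= {"n8n", "custom Python"}
    elif exception_paths >= 3:
        candidates &= {"Make", "n8n", "custom Python"}
    if ops_maintained:                   # Q3: visual tools for ops-owned flows
        candidates -= {"custom Python"}
    if needs_git:                        # Q4: rollback and audit trails
        candidates &= {"n8n", "custom Python"}
    return candidates
```

For example, a regulated on-premise process with 12 exception paths and Git requirements is already down to n8n or custom code before any vendor demo starts.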
512,000 lines of code. Almost 2,000 files. All it took was one tiny mistake in a package.json file for it all to go down the drain.

Yesterday, March 31, 2026, Anthropic, one of the richest AI companies out there, accidentally leaked the entire source code for "Claude Code." And the wild part? Nobody hacked them. No one broke into their servers. It was just a simple file that shouldn't have been pushed.

When we build apps with JavaScript, we use files called "source maps." They're basically a map that helps developers fix bugs by linking the messy, unreadable code back to the clean code we actually wrote. They're great for working locally during development, but they're never supposed to leave your machine.

Anthropic's team accidentally left one in their public package. That map file pointed straight to a private zip folder on their storage. A researcher found it, and it was all over. Within a few hours, over 40,000 people had already forked the whole codebase on GitHub.

It's easy to laugh and say, "How did these geniuses mess up something so basic?" But honestly, this happens to the best of us. We spend all our time worrying about high-tech security and hackers, but we forget to check the boring stuff, like a simple configuration file or a typo in our ".npmignore" list.

The lesson is pretty clear: it doesn't matter how good your code is if your shipping process is sloppy. A single line nobody checked is the difference between a successful launch and giving your work away for free. If you're a dev, do yourself a favor and always run a dry run before you publish anything today. Anthropic can afford a mistake like this; most of us can't.

Is it just me, or are we getting so focused on AI and complex architecture that we're forgetting the basics of shipping code?

#SoftwareEngineering #Coding #JavaScript #Anthropic #TechTips #Programming