I wrote async Python for several years before I understood what I was doing. Narrator: he did not know what he was doing.

──────────────────────────────

The bug that changed everything:

Our FastAPI service: 800 req/s. Smooth. Life is good.
Then one endpoint: 8–12 second response times.
CPU: 2%. Memory: fine. Zero errors in logs.

Me at my desk: "Perhaps the server is just tired."

──────────────────────────────

The culprit? One line. ONE.

data = requests.get(external_api_url)

Inside. An. Async. Function.

This is the Python equivalent of stopping a highway to tie your shoelace. Every car waits. Nobody honks. The event loop just... cries silently.

──────────────────────────────

What async actually means:

Async doesn't make your code faster. It makes your code wait smarter.

await = "go do other stuff, I'll call you when I'm ready"
requests.get() inside async = "everyone stop. I need a moment."

One sync call = the entire app holds its breath. 800 concurrent users. One blocked thread. Pure chaos.

──────────────────────────────

The 3 mistakes every async developer makes (including me, twice):

1) requests instead of httpx
"But requests is so clean and familiar..." Yes. So is writing to a floppy disk. Move on.
→ await httpx.AsyncClient().get(...)

2) Sync DB drivers (psycopg2, pymysql)
Your async app. Your sync DB call. Your entire event loop, crying.
→ asyncpg or SQLAlchemy 2.0 async

3) time.sleep() inside async
This one is personal. I'm not ready to talk about it.
→ await asyncio.sleep(). Please.

──────────────────────────────

How to catch it before production does:

loop.set_debug(True)
loop.slow_callback_duration = 0.1

With the loop in debug mode, anything blocking >100ms gets logged. Think of it as a motion sensor for bad code.

──────────────────────────────

The fix: 3 minutes. Finding it: 2 days.
Explaining it to my manager: "the server was tired."

That's the real cost of not understanding the event loop.

──────────────────────────────

Drop a "🧵" if you've lost hours to a sync call hiding in async code.
Or just tell me your story — misery loves company. 👇 #python #asyncio #fastapi #backend #softwaredevelopment
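The blocking effect described above is easy to reproduce with nothing but the standard library. A minimal sketch (no FastAPI needed): five handlers that block with time.sleep run one after another, while five that await asyncio.sleep overlap.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)  # sync call: freezes the whole event loop for 200ms

async def async_handler():
    await asyncio.sleep(0.2)  # yields control: other coroutines keep running

async def timed(handler):
    # Run 5 "concurrent" handlers and measure total wall time.
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(5)))
    return time.perf_counter() - start

blocking_total = asyncio.run(timed(blocking_handler))  # ~1.0s: 5 x 0.2s back to back
async_total = asyncio.run(timed(async_handler))        # ~0.2s: all five overlap
print(f"blocking: {blocking_total:.2f}s, async: {async_total:.2f}s")
```

To get the motion-sensor logging mentioned in the post, the loop has to be in debug mode (e.g. `asyncio.run(main(), debug=True)` or `PYTHONASYNCIODEBUG=1`); slow_callback_duration is only checked when debug is on.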
Two developers. Same problem. Same Python. Completely different results. Here's what separates them. 👇

I want to show you something that changed how I think about writing Python. Not a framework. Not a library. Just one decision — made before writing a single line of code. The decision of which data structure to use.

---

The task: Find all duplicate values in a list of 100,000 items.

━━━━━━━━━━━━━━━━━━━━

❌ Without DSA thinking:

duplicates = []
for i in range(len(data)):
    for j in range(i + 1, len(data)):
        if data[i] == data[j]:
            duplicates.append(data[i])

Looks logical. Runs correctly. But with 100,000 items?
⏱ Runtime: ~47 seconds
🔁 Comparisons: ~5,000,000,000

━━━━━━━━━━━━━━━━━━━━

✅ With DSA thinking:

seen = set()
duplicates = []
for item in data:
    if item in seen:
        duplicates.append(item)
    seen.add(item)

One loop. One set. Done.
⏱ Runtime: 0.01 seconds
🔁 Comparisons: 100,000

━━━━━━━━━━━━━━━━━━━━

Same output. 4,700x faster. Not because of a smarter algorithm. Not because of better hardware. Because one developer understood that a Python list checks membership in O(n) — and a set does it in O(1). That single insight is the difference between code that works and code that scales. 🚀

---

This is why DSA isn't just for interviews. It's the lens that helps you look at any problem and ask: "What's the right tool for this job?"

Python gives you Arrays, Sets, Dicts, Heaps, Queues. Each one purpose-built. Each one powerful in the right hands.

Know your tools. Build faster. Ship better.

---

💬 Which data structure clicked for you the most? Drop it in the comments — let's see what the community says. 👇

♻️ Repost this to every Python developer in your network. This one visual could save them days of debugging.

👉 Follow for weekly Python + DSA breakdowns — practical, visual

#Python #DSA #DataStructures #Arrays #PythonProgramming #SoftwareEngineering #CleanCode #BuildInPublic #CodingTips #100DaysOfCode
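The comparison above is easy to verify yourself. A self-contained sketch with a smaller list (2,000 items, so the quadratic version finishes in well under a second rather than 47):

```python
import random
import time

random.seed(0)
data = [random.randrange(500) for _ in range(2_000)]  # plenty of duplicates

def duplicates_nested(items):
    # O(n^2): compare every pair. A value occurring k times is appended
    # once per matching pair, so compare results as sets of values.
    dupes = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                dupes.append(items[i])
    return dupes

def duplicates_set(items):
    # O(n): one membership check per item instead of one per pair.
    seen, dupes = set(), []
    for item in items:
        if item in seen:
            dupes.append(item)
        seen.add(item)
    return dupes

t0 = time.perf_counter()
slow = duplicates_nested(data)
t_slow = time.perf_counter() - t0

t0 = time.perf_counter()
fast = duplicates_set(data)
t_fast = time.perf_counter() - t0

print(f"nested: {t_slow:.3f}s, set: {t_fast:.4f}s")
```

Both find the same duplicate values; only the number of comparisons differs, and the gap widens quadratically as the list grows.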
On April 5, Milla Jovovich pushed a Python repo to GitHub. Within days it had tens of thousands of stars.

The project is called MemPalace. It is an open-source AI memory system, built with Ben Sigman of Libre Labs, and developed using Claude Code as the primary build tool. Here is what it actually does, and where the launch story broke down.

What the tool does
Most AI tools forget everything when a session ends. MemPalace stores conversations verbatim on your device, then retrieves relevant chunks at query time. It uses ChromaDB for vector search and SQLite for a temporal knowledge graph. It connects to Claude, ChatGPT, and Cursor via MCP. The "memory palace" structure organizes stored data into wings, halls, rooms, and drawers. It is a useful navigational metaphor. It is not the source of the benchmark results.

What the benchmarks actually showed
The launch claimed 100% on LongMemEval, 100% on LoCoMo, and 30x lossless compression via AAAK. None of those held up cleanly on review.
The LongMemEval score was tuned against its own test questions. After correction, the reproducible number is 96.6% recall at retrieval depth 5. That measures retrieval quality, not end-to-end question answering.
The LoCoMo score used a retrieval window wide enough to include the full candidate set. Retrieving everything produces a high score. It does not say anything about ranking.
AAAK was described as lossless. It is lossy. The token-count example in the documentation used a non-standard tokenizer. When tested on a real tokenizer, benchmark performance dropped when AAAK was enabled. The README has since been corrected.
The project remains live, local, and MIT-licensed.

What is worth taking from this
The 96.6% retrieval result is real and reproducible. It comes from verbatim storage combined with ChromaDB, not from the palace structure itself. That distinction matters if you are evaluating whether to use this tool or build something similar.

The broader question this project raises is worth sitting with: as AI memory tooling moves into the open-source space, how do you evaluate a benchmark claim that ships alongside a 50,000-star launch week? The answer is the same as always. Read the methodology, not the headline.

Sources
1) LongMemEval: arxiv.org/abs/2410.10813
2) LoCoMo benchmark: https://lnkd.in/d3EcHSns
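For readers unsure what "96.6% recall at retrieval depth 5" measures, here is recall@k in miniature, including the LoCoMo caveat: widen the retrieval window to cover every candidate and recall goes to 1.0 regardless of ranking quality. (Queries, document ids, and rankings below are made up for illustration.)

```python
def recall_at_k(relevant, ranked_results, k=5):
    """Fraction of queries whose relevant doc appears in the top-k results."""
    hits = sum(1 for q, doc in relevant.items() if doc in ranked_results[q][:k])
    return hits / len(relevant)

relevant = {"q1": "d3", "q2": "d7", "q3": "d1"}
ranked = {
    "q1": ["d3", "d9", "d2", "d5", "d4"],  # hit at rank 1
    "q2": ["d2", "d5", "d7", "d8", "d6"],  # hit at rank 3
    "q3": ["d4", "d6", "d8", "d9", "d2"],  # miss within top 5
}
print(recall_at_k(relevant, ranked, k=5))  # 2 of 3 queries hit

# Retrieve everything with a wide-enough window: trivially perfect recall,
# which says nothing about how well the retriever ranks.
everything = {q: [f"d{i}" for i in range(1, 10)] for q in relevant}
print(recall_at_k(relevant, everything, k=9))  # 1.0
```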
From console games to a GUI app—this Python learning journey in 3 weeks shows you what matters.

When I started learning Python on April 5, 2026, I didn't overthink it. No elaborate plan. No waiting for the "perfect moment." I just started building.

Week 1: Getting the reps in.
· madlibs.py → Getting comfortable with strings & input
· weightConvertor.py → If/else logic
· calculator.py → Basic operations
· compound interest calculator → While loops

Week 2: Adding real functionality.
· quizGame.py & hangmanGame.py → Lists & arrays
· alarm clock & counter.py → Time module & loops
· dice roller & rock, paper, scissors → Tuples + random module
· shopingcart.py & slot machine → Lists & functions
· encryption program → String manipulation & logic

Week 3: Leveling up to real-world apps.
· digital clock & stopwatch → PyQt5 GUI concepts
· banking program → Functions & OOP fundamentals
· Weather API app → API integration & real data

25 programs in 19 days. No theory paralysis. Just "learn by doing."

Why Python? 🐍
· Simplicity first. Python's syntax reads like plain English. You spend less time wrestling with language quirks and more time actually building things. That's why data science, AI, and web development all run on Python. Its "pseudocode-like" nature lowers the barrier of entry. It's a high-level language that's also free, open-source, and cross-platform.
· Versatility. One language that powers web backends (Django, Flask), crunches data (Pandas, NumPy), builds AI models (TensorFlow, PyTorch), automates repetitive tasks, and even creates desktop GUIs (PyQt5). It's a general-purpose tool that can handle just about any problem.
· Future-proof career. Python consistently ranks as the most in-demand programming language. With over 12,500 job mentions, it's the #1 skill employers look for in 2026, dominating AI, ML, data science, and backend development. Python's ecosystem is rapidly growing.
· Real earning potential. Python developers command strong salaries. In the US, the average sits around $99,990, with total pay ranging up to $187k. For remote roles, the average jumps to $123,208. In Europe, hybrid/remote roles offer a median of £73,750. Some markets, like Japan, report average annual earnings of ¥9,440,000 (approx. $63k USD). With demand surging, mid-level salaries in some regions have seen increases of 40% within a single year.

This repo isn't just code. It's proof. It doesn't claim to be a "Python expert" or "senior architect." It's just someone documenting the actual process of learning—commit by commit, program by program. Every messy first attempt. Every "aha" moment. All of it. That's the kind of learning journey worth following.

Check out the repo: https://lnkd.in/gfvwTQ95

How do you learn best? Theory-first or build-first? Drop your approach in the comments.

#Python #LearningJourney #Coding #SoftwareDevelopment #CareerGrowth #TechJourney
📨 AI-Driven Micro-Loan Platform 5/10: The gRPC Fast-Lane – When "Async" isn't fast enough ⚡

In Post 3, we talked about the "Deep-Dive" via Service Bus. But what if you need an answer now? For the initial "Pre-Score" (the 5-second decision that keeps a user in the app), we can't wait for a message queue. We need a direct, high-speed connection between .NET 10 and Python. Enter gRPC. 🏎️

1. Why gRPC over REST? ⚖️
Most teams default to JSON over HTTP. In a high-volume microservices environment, that's "death by a thousand cuts."
Protobuf vs. JSON: gRPC uses Protocol Buffers (binary). It’s smaller, faster to serialize, and strictly typed.
Multiplexing: Using HTTP/2, we keep a single connection open for multiple requests, reducing the overhead of constant "handshakes."

2. The "Pre-Score" Flow 🟢
When the user hits "Check My Limit":
The .NET API calls the Python ML service via a gRPC client.
Python pulls the "lightweight" features from Redis.
Inference happens in <20ms.
The result returns to .NET, and the user sees a "Preliminary Offer" immediately.

3. The Contract-First Advantage 📝
One of the biggest headaches in polyglot teams (.NET + Python) is API breaking changes.
The .proto file: This is our "Single Source of Truth." Both teams agree on the input and output types.
Auto-generation: .NET generates its client, and Python generates its server from the same file. No more "Expected an integer but got a string" bugs in production.

4. Handling the "Timeout" Trap 🛡️
Direct calls are risky. If Python is slow, .NET hangs.
The strategy: We implement strict deadlines. If Python doesn't answer in 100ms, the gRPC call cuts off.
The fallback: If the fast lane fails, the system gracefully falls back to the "Deep-Dive" async flow we discussed earlier. The user gets a "We're processing your request" message instead of a crash.

📈 The Results:
✅ Real-Time UX: Users get instant gratification.
✅ Polyglot Harmony: .NET and Python talk as if they were in the same project.
✅ Efficiency: Reduced CPU overhead on both sides compared to REST/JSON.

🧠 Post 6: The Watchtower – Real-time Observability with OpenTelemetry & Dashboards.

#gRPC #Microservices #DotNet #Python #SystemDesign #FinTech #MLOps #API #SoftwareEngineering #PerformanceOptimization
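The deadline-plus-fallback pattern from section 4 can be sketched in a few lines of plain asyncio. The function names and timings here are illustrative stand-ins, not the platform's real code; in production the fast lane would be an actual gRPC call with a deadline set on the channel.

```python
import asyncio

async def grpc_prescore(user_id: str) -> str:
    # Stand-in for the real gRPC call to the Python ML service.
    await asyncio.sleep(0.5)  # pretend the model is having a slow day
    return f"offer for {user_id}"

async def enqueue_deep_dive(user_id: str) -> str:
    # Stand-in for publishing to the Service Bus "Deep-Dive" flow.
    return "We're processing your request"

async def check_limit(user_id: str, deadline_s: float = 0.1) -> str:
    # Strict deadline: if the fast lane misses it, fall back gracefully
    # instead of letting the caller hang.
    try:
        return await asyncio.wait_for(grpc_prescore(user_id), timeout=deadline_s)
    except asyncio.TimeoutError:
        return await enqueue_deep_dive(user_id)

print(asyncio.run(check_limit("u42")))  # slow model + 100ms deadline: fallback path
```

The key design point is that the timeout lives in the caller, so a slow downstream service degrades the experience instead of cascading the failure.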
☕ 𝗕𝗿𝗲𝘄𝗶𝗻𝗴 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝘄𝗶𝘁𝗵 𝗣𝘆𝘁𝗵𝗼𝗻: 𝗢𝗻𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲, 𝗘𝗻𝗱𝗹𝗲𝘀𝘀 𝗣𝗼𝘀𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀

The image perfectly captures a powerful truth about Python — it’s not just a language, it’s a foundation that fuels multiple high-impact domains. Like a single kettle pouring into different cups, Python seamlessly powers diverse career paths.

𝗛𝗲𝗿𝗲’𝘀 𝘄𝗵𝘆 𝗣𝘆𝘁𝗵𝗼𝗻 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗲𝘀 𝘁𝗼 𝗱𝗼𝗺𝗶𝗻𝗮𝘁𝗲 𝘁𝗵𝗲 𝘁𝗲𝗰𝗵 𝗹𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲:

🔹𝐃𝐚𝐭𝐚 𝐒𝐜𝐢𝐞𝐧𝐜𝐞 𝐄𝐱𝐜𝐞𝐥𝐥𝐞𝐧𝐜𝐞 — Python offers robust libraries like Pandas and NumPy, making data manipulation, analysis, and visualization efficient and scalable.
🔹𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐏𝐨𝐰𝐞𝐫𝐡𝐨𝐮𝐬𝐞 — Frameworks such as TensorFlow and Scikit-learn enable rapid development of predictive models and AI-driven solutions.
🔹𝐖𝐞𝐛 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭 𝐅𝐥𝐞𝐱𝐢𝐛𝐢𝐥𝐢𝐭𝐲 — With frameworks like Django and Flask, Python allows developers to build secure, scalable, and dynamic web applications.
🔹𝐀𝐮𝐭𝐨𝐦𝐚𝐭𝐢𝐨𝐧 & 𝐒𝐜𝐫𝐢𝐩𝐭𝐢𝐧𝐠 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲 — From simple task automation to complex workflows, Python drastically reduces manual effort and increases productivity.
🔹𝐁𝐞𝐠𝐢𝐧𝐧𝐞𝐫-𝐅𝐫𝐢𝐞𝐧𝐝𝐥𝐲, 𝐈𝐧𝐝𝐮𝐬𝐭𝐫𝐲-𝐑𝐞𝐚𝐝𝐲 — Its clean syntax makes it ideal for beginners, while its vast ecosystem supports enterprise-level applications.
🔹𝐂𝐫𝐨𝐬𝐬-𝐈𝐧𝐝𝐮𝐬𝐭𝐫𝐲 𝐀𝐝𝐨𝐩𝐭𝐢𝐨𝐧 — From finance to healthcare, startups to tech giants — Python is everywhere.
🔹𝐒𝐭𝐫𝐨𝐧𝐠 𝐂𝐨𝐦𝐦𝐮𝐧𝐢𝐭𝐲 𝐒𝐮𝐩𝐩𝐨𝐫𝐭 — A global developer community ensures continuous improvement, learning resources, and innovation.
🔹𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐨𝐧 𝐂𝐚𝐩𝐚𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬 — Python integrates smoothly with other technologies, APIs, and languages, making it highly versatile.
🔹𝐑𝐚𝐩𝐢𝐝 𝐏𝐫𝐨𝐭𝐨𝐭𝐲𝐩𝐢𝐧𝐠 — Develop ideas faster and validate concepts with minimal development overhead.
🔹𝐅𝐮𝐭𝐮𝐫𝐞-𝐏𝐫𝐨𝐨𝐟 𝐒𝐤𝐢𝐥𝐥 — With AI, data, and automation shaping the future, Python remains a critical skill for long-term growth.

💡 𝗙𝗶𝗻𝗮𝗹 𝗧𝗵𝗼𝘂𝗴𝗵𝘁: Mastering Python is not about choosing one path — it’s about unlocking multiple opportunities with a single skill.
Behind the Scenes of the .pkl File: How Python "Freezes" Your Data 🥒📦

If you work with Python for Machine Learning, QSAR, or Data Engineering, you’ve definitely seen .pkl files. But have you ever wondered what’s actually happening under the hood when you save one? Unlike a CSV or JSON, which only stores raw text and numbers, a Pickle file stores the soul of your Python object.

🧠 How it Works: The Magic of Serialization
The process behind a .pkl file is called serialization (or "pickling"):
1. Memory Mapping: When you create a complex model or a chemical database, Python organizes it in your RAM with a sophisticated web of pointers and references.
2. The Byte Stream: The pickle library traverses that complex structure and flattens it into a linear stream of bytes (a sequence of 0s and 1s).
3. Perfect Reconstruction: When you use pickle.load, Python reads that stream and rebuilds the object with the exact same structure, data types, and attributes it had before.
It’s like disassembling a LEGO castle, labeling every piece, and perfectly reassembling it in a different room.

📁 What does it save that a CSV can't?
While a text file "forgets" the properties of an object, a .pkl preserves:
Exact typing: If your data was a 64-bit float or a specific NumPy array type, it stays that way.
Object relationships: If you have a dictionary pointing to a list of SMILES strings, those internal links remain intact.
Learned parameters: For Machine Learning, it saves the weights and coefficients your algorithm spent hours (or days) learning.

🛠️ The Syntax: "wb" and "rb"
In your code, you will always see these modes:
'wb' (Write Binary): Necessary because you aren't writing "text," you are writing raw machine data.
'rb' (Read Binary): Necessary to translate those bytes back into a Python object you can interact with.

⚖️ When should you use it?
✅ YES for: Saving trained models, pre-computed molecular fingerprints, or saving the state of a long-running experiment.
❌ NO for: Public data sharing (unpickling untrusted data can execute arbitrary code, so use JSON or Parquet instead) or when you need to open the file in another language like R or Julia.

Understanding your file formats is the first step toward building more robust, reproducible research workflows! 🚀

#Python #DataScience #MachineLearning #Pickle #Programming #TechInsights #QSAR #Bioinformatics #CodingTips
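A minimal round trip showing the 'wb'/'rb' modes and the type preservation described above, using only the standard library (the record contents are made up for illustration):

```python
import os
import pickle
import tempfile

record = {
    "smiles": ["CCO", "c1ccccc1"],
    "weights": (0.42, 1.7, -3.14),       # exact float values and tuple type survive
    "meta": {"trained_epochs": 12},
}

path = os.path.join(tempfile.gettempdir(), "model_state.pkl")

with open(path, "wb") as f:              # 'wb': raw bytes, not text
    pickle.dump(record, f)

with open(path, "rb") as f:              # 'rb': bytes back into live objects
    restored = pickle.load(f)

print(restored == record)                # identical structure and values
print(type(restored["weights"]))         # still a tuple, not a list or string
```

A CSV round trip would flatten the tuple, the nested dict, and the float precision into strings; pickle rebuilds the object graph exactly.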
RLM is the most important foundation of my Pi Harness (other than Pi, of course). It's seeded with late interaction retrieval results (thanks to @lightonai for pylate). The Agent initiates it with a query, then...

𝐒𝐞𝐭𝐮𝐩
A Python REPL is created and seeded with:
1. Late interaction search to pre-filter. Instead of doing top 3/5/10, it's top hundreds of documents. This is set into a `context` variable.
2. Python functions loaded in to do more searches if the `context` variable isn't enough, and to make LLM calls with cheaper models in parallel batches.

𝐈𝐭𝐞𝐫𝐚𝐭𝐢𝐨𝐧 𝐋𝐨𝐨𝐩
From there, an LLM iterates in the REPL based on the query. It's just like exploring in a Jupyter notebook. The LLM writes prose (like a markdown cell) and code to be run in the REPL each turn. This allows the LLM to sort, filter, and synthesize information. It can fan out and ask smaller models to summarize, combine, contrast, or do anything else to documents to help it understand the data. After several turns the LLM responds with the final answer, either because it found the answer or because it hit the budget limit.

Context as a Python variable, LLM as the programmer, REPL as the runtime.

𝐖𝐡𝐲 𝐃𝐨𝐞𝐬 𝐓𝐡𝐢𝐬 𝐖𝐨𝐫𝐤
1. Richer shell. Agents (and subagents) work by intermixing code and prose/thinking. But they use static scripts or bash commands that run, exit, and start over on each tool call. That's not ideal for exploration and synthesis of data. For that, state is useful: you keep building on and exploring the data as you learn more. There's a reason Jupyter notebooks have been popular with data scientists.
2. Keeps main agent context clean. The better context you have, the better the agent will perform (duh!). This means three things: better human input, fewer missing search results, and fewer incorrect search results. Letting the agent iterate allows it to synthesize just what is needed and nothing else. All bad paths, and peeks at things that turn out to be irrelevant, stay out of the main agent's context.
3. Stack the good ideas! People often compare late interaction search vs. RLM, or static vs. dynamic languages, or agentic search vs. semantic search. But... you can just use them all together, each for the area it's really great at.

Read the full post, which has more detail about how and why: https://lnkd.in/eJRkXA9Q
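The "context as a Python variable, REPL as the runtime" idea can be sketched with a persistent namespace and exec: each call is one "cell", and state carries over between cells exactly like a notebook. This is a toy illustration of the statefulness, not the actual harness; the real loop would have an LLM generating the code strings.

```python
# Seed the namespace the way the harness seeds `context` with retrieval results.
namespace = {"context": ["doc about pricing", "doc about refunds", "doc about login"]}

def run_cell(code: str) -> None:
    # Execute one "cell" in the shared namespace; variables persist across calls.
    exec(code, namespace)

# Turn 1: the "LLM" filters the context down to what it needs.
run_cell("relevant = [d for d in context if 'refund' in d]")
# Turn 2: a later cell builds on the earlier result, notebook-style.
run_cell("answer = relevant[0].upper()")

print(namespace["answer"])  # DOC ABOUT REFUNDS
```

Compare this to a bash tool call, which would have to re-fetch and re-filter everything on every invocation; the persistent namespace is what makes incremental exploration cheap.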
******Step-by-Step: How to Build a Simple AI Agent from Scratch Using an IDE (Beginner-Friendly Technical Guide)******

Many people talk about AI Agents. Very few explain how to actually build one from zero.

Here’s a complete hands-on example using Python + the OpenAI API, where we create a simple AI Agent that reads a text file and generates action items automatically. No frameworks. No shortcuts. Pure fundamentals.

What This Agent Will Do
Input → Read meeting_notes.txt
Process → Understand content
Output → Generate structured action items

Step 1: Install Required Tools
Install:
Python (3.10 or higher)
VS Code (or IntelliJ / PyCharm)
OpenAI Python SDK

Run this in the terminal:
pip install openai python-dotenv

Step 2: Create Project Structure
Create a folder: ai-agent-demo
Inside it create:
main.py
agent.py
meeting_notes.txt
.env

Step 3: Add OpenAI API Key
Open the .env file and paste:
OPENAI_API_KEY=your_api_key_here
Save it.

Step 4: Add Sample Input File
Open meeting_notes.txt and paste:
Rahul will prepare sprint report by Monday
Ankur will review automation failures
Team will finalize regression scope tomorrow
Save it.

Step 5: Create Agent Logic File
Open agent.py and paste this code:

from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def generate_action_items(text):
    prompt = f"""
Extract action items from the following meeting notes.
Return output as bullet points.

Meeting Notes:
{text}
"""
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

Step 6: Create Main Execution File
Open main.py and paste this code:

from agent import generate_action_items

def read_notes():
    with open("meeting_notes.txt", "r") as file:
        return file.read()

def run_agent():
    notes = read_notes()
    output = generate_action_items(notes)
    print("\nGenerated Action Items:\n")
    print(output)

if __name__ == "__main__":
    run_agent()

Step 7: Understand the Prompt Used
This is the intelligence layer of your agent:
"Extract action items from the following meeting notes. Return output as bullet points."
Prompt = behavior
Model = brain
Code = execution pipeline
Change the prompt → the agent changes capability.
Example variations:
Summarize notes
Create Jira tickets
Generate test cases
Extract risks
Create email summary

Step 8: Run the AI Agent
Open a terminal inside the project folder and run:
python main.py
Output appears like:
• Rahul prepares sprint report by Monday
• Ankur reviews automation failures
• Team finalizes regression scope tomorrow
Agent working successfully.

What Makes This an AI Agent?
Because it:
Takes input
Applies reasoning using an LLM
Executes instructions via a prompt
Produces structured output

#ArtificialIntelligence #GenerativeAI #LLM #OpenAI #PromptEngineering #AIEngineering
Python Lists 🐍

Lists are:
• Mutable (changeable)
• Ordered
• Allow duplicates
Created using []

List Slicing:
Slicing lets you get subsets of the list.
Syntax: list[start:stop] (stop is exclusive). You can omit start/stop or use negative indices.

Adding Items to Lists:
append(item): Adds to the end.
insert(index, item): Inserts at a specific position.
extend(iterable): Adds multiple items from another iterable (better than appending a list, which would nest it).

Removing Items from Lists:
remove(value): Removes the first occurrence of a value.
pop(): Removes and returns the last item (or from a specific index with pop(index)).

📝 Python Lists - Example

print("----- Creating List -----")
Topics = ["AWS","GitHub","Linux","Terraform","Kubernetes"]
print("Topics:", Topics)
print("Length:", len(Topics))
print("First Item:", Topics[0])
print("4th Item:", Topics[3])
print("Last Item:", Topics[-1])
print("Second Last Item:", Topics[-2])

print("\n----- Slicing -----")
print("Topics[0:2]:", Topics[0:2])
print("Topics[:2]:", Topics[:2])
print("Topics[2:]:", Topics[2:])

print("\n----- Adding Items -----")
Topics.append('GCP')
print("After append:", Topics)
Topics.insert(0,'CICD')
print("After insert:", Topics)
Topics2 = ['Python','Go']
Topics.extend(Topics2)
print("After extend:", Topics)

print("\n----- Removing Items -----")
Topics.remove('AWS')
print("After remove AWS:", Topics)
popped = Topics.pop()
print("Popped Item:", popped)
print("After pop:", Topics)

#Python
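The append-vs-extend nesting pitfall mentioned above is worth seeing side by side: append treats its argument as one element, while extend adds each item individually.

```python
topics = ["AWS", "GitHub"]

nested = topics.copy()
nested.append(["Python", "Go"])   # the whole list becomes ONE element
print(nested)                     # ['AWS', 'GitHub', ['Python', 'Go']]
print(len(nested))                # 3

flat = topics.copy()
flat.extend(["Python", "Go"])     # each item is added individually
print(flat)                       # ['AWS', 'GitHub', 'Python', 'Go']
print(len(flat))                  # 4
```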