My Python Experiment: Giving AI Agents a Say in Their Own Exit

I've been diving into Python lately with a small project that puts different AI models into a structured discussion. I didn't start with a big plan; I just wanted to see what happens when they talk to each other.

The Observation 🔍
During my first tests I noticed something frustrating: after a few turns, the models often hit a wall. Instead of developing the argument further, they started repeating themselves in different words. The discussion wasn't moving; it was just looping.

The Idea: The "Stop Button" Handshake 🧪
That's when I thought: why not let the models decide when they've had enough? I built a coordination layer called AI-Bridge and gave models like Gemini, GPT-4, and DeepSeek a single tool: propose termination.

How It Works (The "Natural Veto") 🛡️
Before each turn, the bridge tells an agent how many peers have already voted to stop. If an agent speaks without calling the tool while others want to quit, that counts as a natural veto. It forces the model to decide: "Do I actually have something new to add, or am I just talking because it's my turn?"

What I Learned 📚
• Python is a great teacher: managing the state of these "votes" across different APIs was a steep but rewarding learning curve.
• Models get more precise: once they feel the "pressure" of a pending exit, they move away from fluff and focus on the remaining contradictions.
• Observing behavior is fascinating: watching a "skeptic" persona block the exit because it found a flaw in a colleague's point is exactly why I started this.

It's just a toy project for now, but it's been a fantastic way to learn Python while exploring how LLMs negotiate and find common ground.

To the Python community: how do you usually handle shared state across asynchronous model calls? I'm all ears for "pythonic" tips!

#Python #LearningToCode #AIBridge #MultiAgentSystems #CodingJourney #SoftwareEngineering
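To make the handshake concrete, here is a minimal asyncio sketch of the vote-and-veto bookkeeping. All names here are illustrative, not the actual AI-Bridge internals; the point is the shared state guarded by a lock across concurrent agent calls.

```python
import asyncio

class TerminationVote:
    """Tracks which agents have proposed ending the discussion.

    A minimal sketch of the "stop button" handshake described above;
    class and method names are my own, not the real AI-Bridge API.
    """

    def __init__(self, agents):
        self._agents = set(agents)
        self._votes = set()
        self._lock = asyncio.Lock()  # guards shared state across concurrent calls

    async def propose_termination(self, agent):
        async with self._lock:
            self._votes.add(agent)

    async def speak_without_voting(self, agent):
        # Speaking while peers want to quit is the "natural veto":
        # the speaker's own pending vote (if any) is withdrawn.
        async with self._lock:
            self._votes.discard(agent)

    async def pending_votes(self, agent):
        # What the bridge tells an agent before its turn.
        async with self._lock:
            return len(self._votes - {agent})

    async def all_agree(self):
        async with self._lock:
            return self._votes == self._agents

async def demo():
    vote = TerminationVote({"gemini", "gpt4", "deepseek"})
    await vote.propose_termination("gemini")
    await vote.propose_termination("gpt4")
    print(await vote.pending_votes("deepseek"))  # 2 peers want to stop
    await vote.speak_without_voting("deepseek")  # deepseek keeps talking: veto
    print(await vote.all_agree())                # False

asyncio.run(demo())
```

An `asyncio.Lock` is enough here because all agents share one event loop; with multiple processes or threads you would need a different primitive.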
Thomas Grill’s Post
More Relevant Posts
He created Python in 1991. The language that powers 70% of AI today. TensorFlow. PyTorch. NumPy. All Python.

And here's what he thinks about AI: "I'm definitely not looking forward to an AI-driven future."

This is Guido van Rossum. Creator of Python. Still writing code at 69.

He uses AI every single day. But his role has shifted: "Instead of writing code, I've moved to the position of a code reviewer."

His concern isn't robots taking over. It's something more real: "Too many people without ethics getting the ability to do much more."

And on AI-generated code: "Code still needs to be read and reviewed by humans. Otherwise we risk losing control entirely."

Three legends. Three weeks. One conclusion each:
Uncle Bob: AI increases demand for programmers.
DHH: AI amplifies the strong, exposes the weak.
Guido: AI without human oversight is dangerous.

🔥 Bonus: Uncle Bob posted this yesterday: https://lnkd.in/ddMDt-4x

27 years ago Kent Beck said "Refactor Mercilessly." Now, with Claude, "merciless" takes on a new meaning. He's ripping systems apart and rebuilding them at will. Massive TDD + Gherkin acceptance tests keep everything stable. The tests are so thorough that Claude can't break free.

Same Uncle Bob. New tools. Same discipline. The fundamentals have never mattered more.

Save this if you're following this series. Drop a comment: are you still reviewing every line AI writes, or do you trust it blindly?

#Python #AI #Programming #GuidoVanRossum #UncleBob #SoftwareDevelopment
✅ Day 14 of Learning Python 🐍

Topics covered:
🔹 🔠 String methods overview
🔹 ✨ capitalize() – uppercase the first letter, lowercase the rest
🔹 🧹 lstrip() – remove leading spaces
🔹 🧹 rstrip() – remove trailing spaces
🔹 🧼 strip() – remove spaces from both sides
🔹 📏 ljust(width) – left-align text within a given width
🔹 📏 rjust(width) – right-align text within a given width
🔹 🎯 center(width) – center text within a given width
🔹 🔐 Creating a complex password
🔹 🔍 find() – index of the first occurrence of a substring (-1 if absent)
🔹 🔎 rfind() – index of the last occurrence of a substring
🔹 🔄 replace() – replace occurrences of a substring

#AI #MachineLearning #DataScience #FutureTech #Upskilling #ContinuousLearning #CareerGrowth
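A quick scratchpad showing each of the day's methods in action, plus one possible take on the "complex password" exercise (my approach, using the stdlib secrets module, which is designed for this):

```python
s = "  hello  "
print(s.strip())    # "hello"
print(s.lstrip())   # "hello  "
print(s.rstrip())   # "  hello"

print("pYTHON".capitalize())   # "Python" (first letter up, rest down)

print("py".ljust(6, "-"))      # "py----"
print("py".rjust(6, "-"))      # "----py"
print("py".center(6, "-"))     # "--py--"

print("banana".find("an"))     # 1  (first occurrence)
print("banana".rfind("an"))    # 3  (last occurrence)
print("banana".replace("na", "NA"))  # "baNANA"

# A complex password from a mixed alphabet:
import string
import secrets
alphabet = string.ascii_letters + string.digits + "!@#$%"
pw = "".join(secrets.choice(alphabet) for _ in range(12))
print(pw)  # e.g. a random 12-character string
```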
Is Python still the leader in AI/ML development?

It's 2026, and new contenders keep emerging, promising faster execution and interesting new features. Yet here we are: Python remains the undisputed king of the AI and machine learning landscape.

Why hasn't it been dethroned? It's not just the simple syntax; it's the massive inertia the language has on its side. The Python ecosystem is simply too big and too diverse to be overtaken right now.

The foundations of modern AI development (PyTorch, TensorFlow, and JAX) are deeply entrenched in Python. Replicating the sheer depth of libraries like scikit-learn or pandas in another language is a decade-long game of catch-up that few are willing to play.

Its simple syntax, high readability, and huge library support lower the barrier to entry, fostering the world's largest and most active community of developers and researchers. When you're debugging a complex LLM architecture or a generative AI pipeline, that immediate community support is invaluable.

Furthermore, the old "Python is slow" argument is fading. Python remains the ultimate "wrapper language": with heavily optimised C and C++ running under the hood of major libraries such as NumPy, Python acts as the high-level command centre for low-level compute power.

Until another language can offer this perfect storm of simplicity, mature tooling, and massive adoption, Python isn't going anywhere.

#Python #ArtificialIntelligence #MachineLearning #DataScience #DeepLearning #PyTorch #TensorFlow #LLM
Andrej Karpathy just shared a complete GPT in 243 lines of Python.

This shows the algorithm is a commodity. The hard part is shaping what goes into the model and what you do with what comes out.

Context engineering is the clearest example. Two teams using the same foundation model will get wildly different results depending on how they construct prompts, manage retrieval, and structure agent orchestration. The model weights are identical; the product differentiation is entirely in the surrounding system.

Same story with evals. When the algorithm is a commodity, measuring whether your specific implementation actually works becomes the moat. Every AI PM needs a framework for this.

The best AI PMs I talk to on the podcast all land in the same place: the model is a component, not a product. So PMs need to master these 5 skills:

1. Evals: https://lnkd.in/eGbzWMxf
2. Observability: https://lnkd.in/e3eQBdMp
3. AI foundations: https://lnkd.in/e6zyYugs
4. AI product strategy: https://lnkd.in/egemMhMF
5. Context engineering: https://lnkd.in/eUUPMmJK

243 lines is the engine. Everything above it is the product.
Reiterating what Aakash said: the 5 areas where PM upskilling is required:

1. Evals: https://lnkd.in/eGbzWMxf
2. Observability: https://lnkd.in/e3eQBdMp
3. AI foundations: https://lnkd.in/e6zyYugs
4. AI product strategy: https://lnkd.in/egemMhMF
5. Context engineering: https://lnkd.in/eUUPMmJK
Day 15/∞: Learning GenAI – Building Custom AI Agents with LangChain and Python

Today I worked on moving from simple LLM calls to custom AI agents that can actually use tools and take actions, not just generate text.

Using Python + LangChain + OpenAI, I defined regular Python functions as tools and then turned them into agent-capable tools by adding a decorator and a clear docstring. The docstring is more than documentation here: it's how the LLM understands when and why to call that tool (for example, a math tool vs. a date tool).

Repo🔗: https://lnkd.in/d469QWcm

Once the tools were defined, I connected an LLM with a list of these tools so the agent could follow a cycle of reasoning → choosing a tool → acting → observing the result. This allowed it to handle more complex queries (like multi-step calculations) without me hard-coding every step.

I also learned how to invoke the agent and read back specific responses from the message history, which is useful for logging and UI.

This feels like a key step in going from "chatbot" to task-oriented AI systems that can actually get work done.

#GenAI #LangChain #Python #AIAgents #OpenAI #Day15 #LearningInPublic
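The decorator-plus-docstring pattern described above can be sketched framework-free in a few lines. This is illustrative only, not LangChain's actual API (its real `@tool` decorator lives in `langchain_core.tools`); the point is that the docstring becomes the description the LLM reads when choosing a tool.

```python
# Registry mapping tool names to their function and description.
TOOLS = {}

def tool(fn):
    """Register fn as an agent tool, described to the LLM by its docstring."""
    TOOLS[fn.__name__] = {"fn": fn, "description": fn.__doc__.strip()}
    return fn

@tool
def add(a: float, b: float) -> float:
    """Add two numbers. Use for arithmetic questions."""
    return a + b

@tool
def today() -> str:
    """Return today's date in ISO format. Use for date questions."""
    from datetime import date
    return date.today().isoformat()

# What the agent "sees": a tool list rendered from names + docstrings.
for name, spec in TOOLS.items():
    print(f"{name}: {spec['description']}")

# In the reason -> act -> observe cycle, the chosen tool is then invoked:
print(TOOLS["add"]["fn"](2, 3))  # 5
```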
Hi everyone,

I recently tried implementing the 𝗳𝗶𝗿𝘀𝘁 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝗯𝗲𝗵𝗶𝗻𝗱 𝗵𝗼𝘄 𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀 (𝗟𝗟𝗠𝘀) 𝘄𝗼𝗿𝗸 completely from scratch in pure Python. Instead of using ML libraries, I built a tiny transformer-style model step by step to understand what actually happens under the hood when a model reads and generates text.

In simple terms, this project helped me learn:
• how text is converted into numbers
• how attention helps a model understand context
• how layers build deeper understanding
• how models predict the next word, step by step

The goal wasn't performance but clarity: to really grasp the core mechanics behind modern AI systems. This hands-on implementation gave me a much stronger intuition about how LLMs actually work internally, beyond just using APIs.

If you're curious about the fundamentals, feel free to check out the repo; I've documented each component and the learning journey in detail.

𝗚𝗶𝘁𝗵𝘂𝗯 𝗟𝗶𝗻𝗸: https://lnkd.in/gp_rh9Bb

#AI #MachineLearning #Transformers #LearningInPublic #LLM #Python
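For readers curious what "attention in pure Python" looks like, here is one piece of the puzzle: scaled dot-product attention with no libraries at all. This is my own minimal sketch of the standard mechanism, not code from the linked repo.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention, lists-of-lists only.

    Each token's output is a weighted mix of all value vectors, with
    weights from how well its query matches every key.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Two 2-d tokens attending over each other:
q = k = v = [[1.0, 0.0], [0.0, 1.0]]
print(attention(q, k, v))
```

Stacking this with learned projections, residuals, and feed-forward layers is essentially what the "tiny transformer-style model" above does.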
There is a lot of talk about using AI tools in modelling. The issue is that LLMs are by their nature probabilistic: they figure out from their training data the most probable next step. If you give a chatbot the same prompt and the same rent roll multiple times, you can get a different answer each time. Most will be good. Some will be bad, but you won't know without checking.

Python and other programming languages are deterministic. The same script will produce the exact same results, with the same input, every single time.

That's why I built Rent Roll Wizard using Python. It takes any messy tenancy schedule and puts it into a standard format. It's the common first step to building a model, and a boring time suck for all analysts! Just drag and drop your rent roll into the window on rentrollwizard.com. It's free. Enjoy!

There are a number of other steps to complete a strong model. I am in the process of building these out with Python, in modular form, for a full PRS model. If you want me to send you the final product, comment "PRS". Or if there is something else you think is a bigger time suck and/or a better idea to program, message me or leave a comment. Thanks all!
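A toy illustration of that deterministic clean-up step (my own sketch, not the actual Rent Roll Wizard code): the same messy row always maps to the same standard row, every run.

```python
def normalize_row(raw: dict) -> dict:
    """Map one messy rent-roll row onto a fixed schema.

    The alias sets and field names here are hypothetical examples.
    """
    aliases = {
        "tenant": {"tenant", "tenant name", "lessee"},
        "unit": {"unit", "suite", "premises"},
        "annual_rent": {"annual rent", "rent pa", "passing rent"},
    }
    out = {}
    for key, value in raw.items():
        k = key.strip().lower()
        if isinstance(value, str):
            value = value.strip()
        for field, names in aliases.items():
            if k in names:
                if field == "annual_rent":
                    # "£12,500" or "12,500.00" -> 12500.0
                    value = float(str(value).replace("£", "").replace(",", ""))
                out[field] = value
    return out

print(normalize_row({"Tenant Name": " Acme Ltd ", "Suite": "3B", "Rent PA": "£12,500"}))
# {'tenant': 'Acme Ltd', 'unit': '3B', 'annual_rent': 12500.0}
```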
I spent my evening running a single Python file that trains an entire GPT from scratch. No PyTorch. No TensorFlow. No NumPy. Just plain Python.

Andrej Karpathy wrote it: 200 lines, that's it. He called it MicroGPT, and honestly the tagline says everything: "This file is the complete algorithm. Everything else is just efficiency." That hit different.

So what does it actually do? It reads 32,000 human names, learns the patterns in them, and then makes up new ones. Names that don't exist but sound like they should. Some of the names it came up with: kamon, jaire, vialan, keylen, alerin. Tell me those don't sound real.

The wild part is you can literally read every line and understand what's happening. The autograd engine, the attention mechanism, the optimizer: it's all there in front of you. No hidden magic behind library calls.

I've read papers, watched lectures, built projects with frameworks. But this one file gave me more clarity on how transformers actually work than most of those combined.

If you're learning ML, or even just curious about what's behind ChatGPT, run this. It takes 5 minutes to set up (literally just python3 microgpt.py) and you'll walk away understanding the core algorithm. Sometimes the best way to learn something complex is to see it at its simplest.

Link in comments 👇
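The "autograd engine" mentioned above, at its absolute smallest. This is a rough sketch in the spirit of Karpathy's micrograd, not his code: each Value remembers how it was made, so gradients can flow backwards through the chain rule.

```python
class Value:
    """A scalar that tracks its parents so gradients can flow back."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = backward
        return out

    def backprop(self):
        # Topological order, then apply the chain rule from the output back.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
loss = a * b + a        # d(loss)/da = b + 1 = 4, d(loss)/db = a = 2
loss.backprop()
print(a.grad, b.grad)   # 4.0 2.0
```

Add more operations (tanh, matmul over lists) and a loop that nudges each `data` by `-lr * grad`, and you have the training core of a from-scratch GPT.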
"Python is slow." I hear this a lot. But if that's true, why is Python everywhere in AI?

The truth is, AI is not a race for raw speed. It's a race for faster learning and faster experimentation. In machine learning, you're not building one perfect solution and shipping it. You're:
• trying different model designs
• adjusting hyperparameters
• cleaning messy data
• running experiment after experiment

This process requires flexibility and speed in development, not just fast execution. And that's where Python shines. It's simple. It's readable. It lets you build, test, and modify ideas quickly. When you can move faster, you learn faster. And in AI, that matters more than saving a few milliseconds.

Also, here's something many people overlook: Python usually isn't doing the heavy math alone. When you work with tools like NumPy, TensorFlow, or PyTorch, the intense computations run underneath in optimized C/C++ code, often using GPUs through CUDA. Python mainly coordinates everything. It acts like a manager directing powerful workers behind the scenes. That design is intentional.

On top of that, Python has grown together with AI. The libraries, tools, community, tutorials, research support: everything is deeply connected and mature. That ecosystem advantage is huge.

So yes, Python may not be the fastest language in pure benchmarks. But in AI, what really wins is: speed of learning + strong ecosystem + powerful back-end performance. And that's why Python continues to lead the AI space.

#Python #ArtificialIntelligence #MachineLearning #DeepLearning #DataScience #AIEngineering #TechCareers #Developers #Coding #Innovation
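The "manager directing workers" split can be seen without any third-party library at all: the same reduction, once as a pure-Python loop and once delegated to the built-in sum(), which runs in C inside the interpreter (a miniature of what NumPy and PyTorch do at much larger scale).

```python
import timeit

data = list(range(100_000))

def manual_sum():
    # Pure-Python loop: every iteration goes through the interpreter.
    total = 0
    for x in data:
        total += x
    return total

loop_t = timeit.timeit(manual_sum, number=20)
c_t = timeit.timeit(lambda: sum(data), number=20)  # loop runs in C

assert manual_sum() == sum(data)  # identical result...
print(f"python loop: {loop_t:.4f}s | built-in sum: {c_t:.4f}s")  # ...different speed
```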