↩️ I didn’t expect Python to send me straight back to 1999.

A few weeks ago, I started diving deeper into Python. What I didn’t expect was rediscovering patterns from the Smalltalk systems I worked on in 1999. Yes, Smalltalk. Back then I was knee‑deep in large, object‑heavy systems. I thought I had safely archived those memories in the “nice, but let’s not go back there” drawer. But suddenly, Python’s dynamic nature and object model gave me a familiar déjà‑vu. Did Python just wink at me?

🈹 My Vibe Coding Expedition (a.k.a. Pair Programming with a Hyperactive Alien)

Then I ventured into Vibe Coding — sketching code interactively with an LLM. Fun? Absolutely. Predictable? Not quite.

😵‍💫 Ask for a design idea → get a chaotic buffet of functions you never ordered.
😵‍💫 As the project grows, the LLM starts to drift — forgetting its own methods like someone who walked into the kitchen and can’t remember why.
😵‍💫 Keeping the architecture clean requires long, surprisingly philosophical negotiations.
😵‍💫 And yes, sometimes the LLM becomes… stubborn. It happens.

I learned quickly: CONTEXT IS EVERYTHING! So I started asking the LLM to write Markdown summaries of our decisions. Otherwise, every new session felt like onboarding a colleague with complete amnesia.

🈴 And Then Came Skills, RAG, MCP & Tools (a.k.a. “Congratulations, you’re now building agentic software.”)

Somewhere along the way, I stumbled into the broader ecosystem:

👽 Skills vs. RAG — realizing that sometimes the model doesn’t need more data; it needs a well‑defined capability.
👽 MCP (Model Context Protocol) — suddenly I’m designing structured tool interfaces like I’m negotiating API contracts with an alien species.
👽 Tools — because of course the LLM requires a toolbox now. Why wouldn’t it?

At this point it became clear: I’m not “just coding” anymore. I’m orchestrating a small team of invisible interns with questionable attention spans.
📜 The Re‑Discovery: Spec‑Driven Development (SDD)

Eventually I circled back to Spec‑Driven Development (shoutout to Birgitta Böckeler's article on Martin Fowler's site: "Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl"). And I had to smile. Because in 1999, this was simply called:

😶 Doing your job properly. 😶

* You write the spec.
* You implement the spec.
* If you change the code, you change the spec first.

🙎‍♂️ Back then it felt bureaucratic.
🤖 Today - in an AI‑assisted world - it suddenly feels like the future.

🧐 The spec becomes the primary artifact.
🧐 The code becomes the byproduct.

A final realization about working with LLMs:

👺 Their answers always make some sense — In Some Sense 👺

#Python #AI #VibeCoding #SoftwareArchitecture #Smalltalk #AgenticAI #MCP #RAG #SpecDrivenDevelopment #TechHistory
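For readers wondering what those Markdown decision summaries look like in practice: here is a purely hypothetical sketch of the kind of log I ask the LLM to maintain between sessions. The file name, headings, and entries are illustrative, not a standard.

```markdown
# DECISIONS.md — session context for the LLM (hypothetical example)

## Persistence layer
- Decision: use SQLite, not Postgres, until multi-user support is needed.
- Rationale: zero-ops local development.
- Affected modules: storage layer only.

## Error handling
- Decision: raise domain-specific exceptions; no bare `except`.
- Rationale: keeps LLM-generated code reviewable.

## Open questions
- Retry policy for tool calls (revisit next session).
```

The point is not the format but the ritual: the spec/decision log is read at the start of every session, so the “colleague with amnesia” gets re-onboarded in seconds.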
Thomas Grill’s Post
More Relevant Posts
Exciting news on the horizon! Introducing OptiScan — OptiRefine's Python AST-powered code analysis engine. This tool focuses on deterministic static analysis that interprets your code the same way a compiler does, without any AI or guesswork. Here’s what OptiScan offers:

- **Big-O Complexity Analysis**: It parses your Python source into an Abstract Syntax Tree (AST) to estimate time and space complexity, identifying nested loops, O(n²) patterns, and exponential bottlenecks before they reach production.
- **Automated Refactoring Engine**: Beyond flagging issues, it rewrites them. It includes HashSet pattern injection, N+1 ORM query resolution via .select_related(), and async httpx rewrites, along with Counter-based frequency optimizations, all generated programmatically.
- **Cyclomatic Complexity Scoring**: Each function in your codebase receives a score based on decision-point density: LOW (1–5), MEDIUM (6–10), HIGH (11+). This allows you to pinpoint functions that may become unmaintainable ahead of your next code review.
- **Dead Code Detection**: A two-pass AST scan uncovers functions and variables that are defined but never referenced, resulting in cleaner codebases and reduced cognitive load.
- **Memory & Resource Auditing**: It identifies unclosed file handles, unbounded list accumulation in loops, and unnecessary materialization of generator expressions, catching patterns that lead to silent memory growth in long-running services.
- **DevSecOps Static Analysis**: Hardcoded secrets, unsafe eval()/exec() calls, and risky module imports are flagged automatically before they reach PR review.
- **In-Browser PyTest Generation**: OptiScan generates PyTest scaffolding for every detected function and executes it directly in your browser via WebAssembly (Pyodide), requiring zero setup and no local environment.

OptiScan is built on libcst, a concrete syntax tree library that preserves your source code losslessly during transformation.
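For a feel of what "AST-powered" means in practice, here is a minimal, hypothetical sketch — not OptiScan's code — that walks a parse tree to measure loop-nesting depth, the raw signal behind nested-loop / O(n²) detection. The function name `max_loop_depth` is invented for illustration.

```python
import ast

def max_loop_depth(source: str) -> int:
    """Return the deepest nesting of for/while loops in the given source —
    a rough indicator of polynomial-time hot spots."""
    def depth(node: ast.AST, current: int = 0) -> int:
        # Entering a loop node increases the nesting level for its subtree.
        if isinstance(node, (ast.For, ast.AsyncFor, ast.While)):
            current += 1
        # The answer for this subtree is the deepest level any child reaches.
        return max(
            (depth(child, current) for child in ast.iter_child_nodes(node)),
            default=current,
        )
    return depth(ast.parse(source))

src = "def f(xs):\n    for x in xs:\n        for y in xs:\n            print(x, y)\n"
print(max_loop_depth(src))  # nested pair of loops -> 2
```

A real analyzer would attach line numbers and thresholds, but the core is exactly this: a recursive walk over `ast` nodes, no execution and no guesswork.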
I built a library. ~900 downloads in one month. No marketing. No funding. Just <300 lines of Python.

Here's what I learned building 𝗔𝗴𝗲𝗻𝘁𝗞𝘂𝗯𝗲-𝗠𝗶𝗻𝗶:

Most people think agent orchestration is magic. It's not. It's a task list that knows which tasks depend on which. That's it.

I was tired of reading agent framework docs that hid everything behind abstractions. You use it, it works, but you have no idea why. So I built the smallest possible version that actually ships. 300 lines. Zero dependencies. Open source.

It does four things:
- Defines agents and their dependencies as a DAG
- Runs independent tasks in parallel automatically
- Emits events at every step so you can see exactly what's happening
- Shares memory so downstream agents use upstream outputs

That's the whole engine. No magic. Just graph traversal and a scheduler.

The moment it clicked for me was when engineers started using it as a teaching tool, not just a production tool. "𝗜 𝗳𝗶𝗻𝗮𝗹𝗹𝘆 𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝗟𝗮𝗻𝗴𝗚𝗿𝗮𝗽𝗵 𝗶𝘀 𝗱𝗼𝗶𝗻𝗴 𝘂𝗻𝗱𝗲𝗿 𝘁𝗵𝗲 𝗵𝗼𝗼𝗱." That's the comment I keep seeing.

AgentKube-Mini is not trying to beat LangGraph. Use LangGraph when you need tool loops, human-in-the-loop, or state persistence. It's genuinely better for that. The real unlock? Run your LangGraph sub-agents INSIDE an AgentKube-Mini DAG. Best of both worlds.

900 downloads taught me one thing: engineers are hungry to understand, not just use. Are you building on top of frameworks, or do you actually know what's underneath? Drop it below. 👇
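Those four bullets really do fit in a few dozen lines. What follows is not AgentKube-Mini's actual code — just a hypothetical sketch of "a task list that knows which tasks depend on which" (the name `run_dag` and the toy agents are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps):
    """tasks: {name: fn(memory) -> result}; deps: {name: [upstream names]}."""
    memory = {}   # shared memory: downstream agents read upstream outputs here
    done = set()
    while len(done) < len(tasks):
        # Every task whose dependencies are satisfied can run in parallel.
        ready = [n for n in tasks
                 if n not in done and all(d in done for d in deps.get(n, []))]
        if not ready:
            raise ValueError("cycle detected in dependency graph")
        with ThreadPoolExecutor() as pool:
            for name, result in zip(ready, pool.map(lambda n: tasks[n](memory), ready)):
                print(f"event: finished {name}")   # emit an event at every step
                memory[name] = result
        done.update(ready)
    return memory

out = run_dag(
    tasks={"fetch":  lambda m: [1, 2, 3],
           "sum":    lambda m: sum(m["fetch"]),
           "count":  lambda m: len(m["fetch"]),
           "report": lambda m: f"{m['sum']}/{m['count']}"},
    deps={"sum": ["fetch"], "count": ["fetch"], "report": ["sum", "count"]},
)
print(out["report"])  # -> 6/3
```

Swap the lambdas for LLM calls and you have the skeleton of the "no magic, just graph traversal and a scheduler" claim.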
I switched from n8n to Python + Claude Code mid-project. Best call I made all quarter. Here's the honest comparison.

n8n is not the automation tool you think it is. It's perfect for 3-step workflows. It becomes a debugging nightmare past that. I've built workflows in both — here's the honest breakdown.

n8n wins when:
→ The workflow is small (under 5 nodes)
→ Speed to first result matters more than everything else
→ The person building it isn't a developer

But complexity changes the math fast. A 20-node workflow breaks. You open the visual editor to find the problem. Half your afternoon is gone. And the AI token cost while building medium to large flows? Every tweak, every node adjustment burns more than you'd expect. It compounds quietly.

That's where OpenClaw (or Claude Code) + Python changes everything. For medium to large workflows:
→ Debugging is just reading code — no visual maze
→ Building is faster, with less back-and-forth with the AI
→ Token usage drops significantly

The visual layer feels like a feature when you start. It becomes friction when the workflow grows. Code doesn't have that problem.

My rule now:
→ Quick, simple automations → n8n
→ Everything from medium up → Python + Claude Code

(And I am NOT a Python developer! I can just about understand the generated code. But that is not the point: I only have to specify what I want, and if anything breaks, say what broke and how it is supposed to behave. With n8n, on the other hand, debugging is a nightmare. Try it out!)

The tool you prototype with isn't always the one you should scale with.

Follow me for more honest takes on AI tooling. What's your experience been? Drop your thoughts below.
"If you are an experienced software engineer, you can learn Python in a few hours." Don't believe it!

After 10+ (if not 20+) years of writing Java, I’ve spent the last year diving deep into Python. Sure, I could write a for loop in an hour, but writing truly idiomatic, type-safe Python? That is a different journey entirely. We are still in a transition phase where we have to review code carefully, especially vibe code, and the "simple" way isn't always the "right" way. Mastering the nuances of the type system is what separates a script from a production-grade system.

Take this evolution of a simple intent label as an example (a real story from work):

The "Just-do-it" approach (generic):
label: str = Field(description="Must be one of: fully_understand, partial_understand, or not_understand")

The problem: the LLM might "hallucinate" and send "mostly_understand" or just "understand". Your code won't catch it until it's too late.

The "Pythonic Master" approach (strict):
label: Literal["fully_understand", "partial_understand", "not_understand"] = Field(description="intent understanding label")

This uses constrained decoding. It doesn’t just "suggest" a value to the LLM; it mathematically restricts the output, turning a runtime guessing game into a validation-time guarantee. This is a common task when building AI agents: turning the non-deterministic into the deterministic.

Syntax is easy. Semantics and type-safety are where the real work happens. Never stop learning, respect the complexity of the craft. Aim for the masterpiece!

#SoftwareEngineering #Python #Java #VibeCoding #LLMs #TypeSafety #Pythonic #Agent #AIAgent
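The Field snippets above assume pydantic, which enforces the Literal at validation time. As a dependency-free sketch of the same principle — the allowed values live in the type itself, not in a prose description — here is a hypothetical helper (`parse_label` is an invented name):

```python
from typing import Literal, get_args

IntentLabel = Literal["fully_understand", "partial_understand", "not_understand"]

def parse_label(raw: str) -> str:
    """Reject anything outside the Literal's values instead of trusting
    the model to follow instructions written in a description string."""
    allowed = get_args(IntentLabel)  # the single source of truth
    if raw not in allowed:
        raise ValueError(f"unexpected label {raw!r}; expected one of {allowed}")
    return raw

print(parse_label("partial_understand"))  # -> partial_understand
```

The same `IntentLabel` type then does double duty: mypy checks it statically, and the runtime guard catches a hallucinated "mostly_understand" the moment it arrives.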
Most people rush to write code. Very few pause to understand what code actually is. Python, at its core, is not just a programming language; it’s a structured way of thinking.

🔹 Take comments. They are ignored by the machine, yet essential for humans. That alone reveals something important: not everything valuable in a system is meant for execution; some things exist purely to create clarity and shared understanding.

🔹 Variables may look simple, but they represent abstraction: the ability to assign meaning to data. Naming rules are not arbitrary; they enforce discipline. Clean names often reflect clean thinking, while messy names usually signal unclear logic.

🔹 Then come data types: integers, floats, strings, booleans. These are not just categories; they are constraints. And constraints are what make systems predictable and reliable. A language that distinguishes between "12" and 12 is a language that demands precision in thought.

🔹 Even string indexing carries a deeper idea: any structure can be accessed, sliced, and interpreted differently depending on perspective, forward or backward. It’s a reminder that how you look at something changes what you see.

🔹 Type conversion introduces another subtle lesson. Sometimes transformation happens automatically (implicit), and sometimes it requires intent (explicit). Knowing when each occurs is the difference between control and assumption.

🔹 And then there is truth: in Python, only a small set of values evaluate to false; everything else is true. That’s not just syntax, it is a model of evaluation: clear, minimal, and consistent.

🔹 Finally, Python’s execution model — bytecode and the Python Virtual Machine — reminds us that what we write is never what the machine directly understands. There’s always a layer of translation. What feels simple at the surface is powered by deeper abstraction underneath.

At this level, programming stops being about syntax. It becomes about systems, logic, constraints, and clarity of thought.
#Python #PythonProgramming #Programming #Coding #SoftwareDevelopment #ComputerScience #Tech #TechThinking #LogicBuilding #ProblemSolving #Abstraction #DataTypes #Variables #LearnPython #CodingJourney #DevCommunity #SoftwareEngineering #BackendDevelopment #FullStackDevelopment #ComputerScienceStudents #DeveloperLife #CleanCode #CodeNewbie #TechEducation #ProgrammingFundamentals
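Two of the claims above — the small set of falsy values, and "forward or backward" access — can be verified in a few lines:

```python
# Python's falsy values form a short, fixed list; everything else is truthy.
falsy = [None, False, 0, 0.0, "", [], (), {}, set()]
assert not any(bool(v) for v in falsy)

# The string "0" and the list [0] are non-empty, so they are truthy —
# the language distinguishes "12" from 12, and "" from 0.
assert all([bool("0"), bool([0]), bool(-1)])

# The same structure, read from two perspectives:
s = "Python"
assert s[0] == "P" and s[-1] == "n"   # forward and backward indexing
assert s[::-1] == "nohtyP"            # a slice that walks the string in reverse
```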
Two years ago, Sam Thach, Caleb Hart, Joshua Aguayo, and I took on a formidable project: 𝐁𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐚 𝐬𝐲𝐬𝐭𝐞𝐦 𝐭𝐡𝐚𝐭 𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐞𝐬 𝐦𝐞𝐚𝐧𝐢𝐧𝐠𝐟𝐮𝐥 𝐝𝐨𝐜𝐬𝐭𝐫𝐢𝐧𝐠𝐬 𝐟𝐨𝐫 𝐏𝐲𝐭𝐡𝐨𝐧 𝐟𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬.

𝐖𝐡𝐲 𝐭𝐡𝐢𝐬 𝐩𝐫𝐨𝐣𝐞𝐜𝐭? We explored several NLP-based ideas such as translation systems, auto-documentation, and text generation, but ultimately landed on a Python Code Commenter because it solved a problem we all had firsthand experience with: 𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐥𝐞𝐠𝐚𝐜𝐲, 𝐩𝐨𝐨𝐫𝐥𝐲 𝐝𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐞𝐝 𝐜𝐨𝐝𝐞.

𝐓𝐡𝐞 𝐠𝐨𝐚𝐥 𝐰𝐚𝐬 𝐬𝐭𝐫𝐚𝐢𝐠𝐡𝐭𝐟𝐨𝐫𝐰𝐚𝐫𝐝: Take a file of uncommented Python functions and return the same code with autogenerated docstrings that explain what each function does.

𝐖𝐡𝐚𝐭 𝐰𝐞 𝐛𝐮𝐢𝐥𝐭:
• Fine-tuned a T5 transformer model using PyTorch
• Trained on a non‑proprietary dataset from Hugging Face
• Framed the problem correctly as a code-to-text translation task
• Generated docstrings only for valid function definitions to ensure reliability
• Designed and implemented a Tkinter GUI so the tool was usable by non-ML users

By the end, we had a functional prototype that could process large volumes of uncommented code and meaningfully document function definitions, landing an average accuracy score of ~1.43/2 across independent evaluations (with 2 being amazing).

𝐒𝐜𝐨𝐩𝐞 𝐜𝐡𝐚𝐧𝐠𝐞𝐬 & 𝐫𝐞𝐚𝐥-𝐰𝐨𝐫𝐥𝐝 𝐜𝐨𝐧𝐬𝐭𝐫𝐚𝐢𝐧𝐭𝐬: This project was a great lesson in adapting plans to reality.

𝘞𝘩𝘢𝘵 𝘴𝘵𝘢𝘳𝘵𝘦𝘥 𝘢𝘴:
• Line-by-line comments
• CodeBERT
• Broad scope

𝘌𝘷𝘰𝘭𝘷𝘦𝘥 𝘪𝘯𝘵𝘰:
• Function-level docstrings
• Switching from CodeBERT to T5
• A narrower, more robust and defensible solution

𝘈𝘭𝘰𝘯𝘨 𝘵𝘩𝘦 𝘸𝘢𝘺, 𝘸𝘦 𝘥𝘦𝘢𝘭𝘵 𝘸𝘪𝘵𝘩:
• School-imposed security restrictions
• Insufficient hardware and delayed access to Data Science machines
• Shared environment issues
• Version control growing pains
• Team availability constraints during a compressed timeline

None of these stopped the project, but 𝐚𝐥𝐥 𝐨𝐟 𝐭𝐡𝐞𝐦 𝐟𝐨𝐫𝐜𝐞𝐝 𝐮𝐬 𝐭𝐨 𝐜𝐨𝐦𝐦𝐮𝐧𝐢𝐜𝐚𝐭𝐞 𝐛𝐞𝐭𝐭𝐞𝐫, 𝐫𝐞𝐩𝐫𝐢𝐨𝐫𝐢𝐭𝐢𝐳𝐞 𝐜𝐨𝐧𝐬𝐭𝐚𝐧𝐭𝐥𝐲, 𝐚𝐧𝐝 𝐦𝐚𝐤𝐞 𝐩𝐫𝐚𝐠𝐦𝐚𝐭𝐢𝐜 𝐭𝐞𝐜𝐡𝐧𝐢𝐜𝐚𝐥 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧𝐬.
𝐖𝐡𝐚𝐭 𝐈 𝐭𝐨𝐨𝐤 𝐚𝐰𝐚𝐲: Beyond the technical skills, this project reinforced lessons I still apply today:
1. Scope management matters!
2. “Crunch” is real, and planning for it is essential
3. Many problems already have partial solutions; understanding them is half the job
4. Framing the problem correctly can unlock progress

𝐌𝐨𝐬𝐭 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭𝐥𝐲, 𝐢𝐭 𝐫𝐞𝐦𝐢𝐧𝐝𝐞𝐝 𝐦𝐞 𝐡𝐨𝐰 𝐦𝐮𝐜𝐡 𝐬𝐭𝐫𝐨𝐧𝐠𝐞𝐫 𝐨𝐮𝐭𝐜𝐨𝐦𝐞𝐬 𝐚𝐫𝐞 𝐰𝐡𝐞𝐧 𝐚 𝐭𝐞𝐚𝐦 𝐬𝐭𝐢𝐜𝐤𝐬 𝐭𝐡𝐫𝐨𝐮𝐠𝐡 𝐮𝐧𝐜𝐞𝐫𝐭𝐚𝐢𝐧𝐭𝐲 𝐚𝐧𝐝 𝐟𝐫𝐢𝐜𝐭𝐢𝐨𝐧 𝐢𝐧𝐬𝐭𝐞𝐚𝐝 𝐨𝐟 𝐚𝐛𝐚𝐧𝐝𝐨𝐧𝐢𝐧𝐠 𝐭𝐡𝐞 𝐩𝐫𝐨𝐛𝐥𝐞𝐦.

If you’re curious, the full project can be found here: 🔗 https://lnkd.in/gPNaKjv3
PYTHON NO LONGER ENDS WITH CODE. It begins where the architecture of intelligence begins.

For years, Python was seen as a programming language. A practical tool. A clean syntax. A fast way to build software. But that description is no longer enough.

TODAY, PYTHON IS BECOMING SOMETHING FAR GREATER. It is turning into a language of orchestration: of models, of tools, of agents, of reasoning chains, of decision layers, of context, and of action.

Not long ago, a developer wrote functions. NOW, MORE AND MORE OFTEN, A DEVELOPER DESIGNS BEHAVIOR. That is a profound shift. Because the real question is no longer: Can you write code? The real question is: CAN YOU BUILD A SYSTEM IN WHICH CODE, MODEL, DATA, MEMORY, AND CONTEXT BEGIN TO WORK AS ONE?

This is exactly why Python is not disappearing in the age of AI. Quite the opposite. ITS STRATEGIC ROLE IS GROWING. Because very few languages combine so much at once: simplicity, abstraction, integration, automation, experimentation, and the ability to move from idea to working system with extraordinary speed.

And that is why the future will not belong to those who merely write code. IT WILL BELONG TO THOSE WHO CAN DESIGN THE ARCHITECTURE OF DECISION. The engineer of the coming years will not be judged only by syntax. Not only by frameworks. Not only by whether a script runs. They will be judged by whether they can create structures in which intelligence becomes usable, directed, and real.

PYTHON IS NO LONGER JUST A LANGUAGE OF SOFTWARE. IT IS BECOMING A LANGUAGE OF AGENCY. A language for building systems that do not merely execute instructions, but coordinate meaning, logic, memory, and response.

So the real question is no longer: Should people still learn Python? The real question is: CAN YOU USE IT TO BUILD SYSTEMS THAT THINK WITH YOU, ACT WITH YOU, AND EXTEND HUMAN CAPABILITY?

That is where the game is now. And many still do not see it.

#Python #AI #LLM #MachineLearning #SoftwareArchitecture #Agents #Automation #FutureOfWork
Most agent frameworks tightly couple workflow logic with Python code. AgentSPEX is a dedicated specification language for LLM-agent workflows.

Instead of burying control flow inside Python scripts, AgentSPEX makes it explicit. Typed steps, branching, loops, parallel execution, reusable submodules, and state management all live in a readable spec — separate from the execution layer. The agent harness underneath handles tool access, sandboxed environments, checkpointing, and verification. It's the difference between editing a blueprint and rewiring a building.

The team evaluated AgentSPEX across 7 benchmarks and ran a user study comparing it against a popular existing agent framework. Users found AgentSPEX workflows significantly more interpretable and easier to author. The project also ships with ready-to-use agents for deep research and scientific research tasks, plus a visual editor that synchronizes graph and workflow views in real time.

The practical upside here is maintainability. Current orchestration tools like LangGraph, DSPy, and CrewAI give you structure, but modifying a workflow still means modifying code. A dedicated spec language means non-engineers can inspect, edit, and verify agent behavior without touching the runtime.

The real question: will teams adopt a new language when Python already works? If the interpretability gains hold up in production, the answer might be yes — especially when debugging a failing 15-step agent pipeline at 2 AM.

↓ 𝐖𝐚𝐧𝐭 𝐭𝐨 𝐤𝐞𝐞𝐩 𝐮𝐩? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
Shipping Python code shouldn’t feel like rolling dice in production. Modern tooling has quietly changed the game — not by adding complexity, but by removing entire classes of bugs before they ever exist.

In my latest Towards Data Science article I break down how a lightweight but powerful toolchain can turn your dev pipeline into a safety net:

black → zero-effort format consistency
ruff → lightning-fast linting
pytest → confidence through real, maintainable tests
mypy → catching type-related bugs before runtime
py-spy → understanding performance without touching code
pre-commit → enforcing all of the above automatically

The real takeaway isn’t the tools themselves — it’s how combining them creates a feedback loop that catches issues early, standardizes quality, and speeds up development instead of slowing it down.

If your pipeline still relies on “we’ll catch it in review” or “we’ll fix it later”… this is worth your time.

Read the full breakdown and setup guide: https://lnkd.in/ewuXn6NF
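As a minimal sketch of wiring several of these tools together, here is a starting-point pre-commit config. The repo URLs and hook ids are the tools' official pre-commit mirrors; the `rev` values are placeholders — pin them yourself (e.g. via `pre-commit autoupdate`).

```yaml
# .pre-commit-config.yaml — hypothetical starting point; pin revs yourself.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0        # placeholder version
    hooks:
      - id: black
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9        # placeholder version
    hooks:
      - id: ruff
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2       # placeholder version
    hooks:
      - id: mypy
```

Run `pre-commit install` once, and every commit is formatted, linted, and type-checked before it ever reaches review.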
For those interested in the deep dive on SDD, here is the Birgitta Böckeler article I mentioned (via Martin Fowler's site): https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html