We surveyed 278 Python developers about how they use AI for coding. 65% said the same thing: AI helps with small tasks, but falls apart on anything real. Context loss, contradictory answers, code they can't fully trust.

The problem isn't the AI. It's the workflow. Chat-based tools can't see your project, can't run your tests, and forget everything when the window fills up. Agentic coding is different: the AI runs in your terminal, reads your files, edits them directly, manages git, and works across your whole codebase.

On April 11–12, Real Python is running a 2-day hands-on course on Claude Code for Python developers. You'll build a complete project from an empty directory and leave with a repeatable workflow you can apply to your own code. If you've been wondering how to actually integrate AI into your professional development workflow, this is a good place to start: https://lnkd.in/gvS-KzVn
Python Devs on AI: Workflow Challenges and Solutions
Python just lost its crown on GitHub. For the first time, TypeScript is officially the most-used programming language in the world. But the reason why is absolutely wild: it wasn't a human decision. It was an AI decision.

• AI loves rules: TypeScript has strict typing, which makes it far easier for AI tools like GPT-5.5 and Claude to write, debug, and refactor code without making mistakes.
• The death of "vibe coding": Python is still king for AI research, but for actual production software, developers are pivoting to whatever language the AI reads best.

We are officially designing our systems for machines to read, not humans. "AI-legible" is the new standard. If AI tools code 10x faster in TypeScript than in Python, you're going to use TypeScript. It's that simple.

What language do you think AI will force us to adopt next?
🐍 Python Term of the Day: Lovable (AI Coding Tools) An AI-powered full-stack platform that generates and deploys web applications from natural language descriptions. https://lnkd.in/gW--_n-T
I’ve been building small projects with vibe coding. Prototypes, automation tools, quick APIs — Python feels unbeatable. Fast. Minimal ceremony. You think it, it runs.

But something changes when the project grows. On small systems, Python gives you speed. On large systems, it demands discipline you didn’t realize you needed. The issue isn’t syntax. It’s structural pressure.

Dynamic typing feels like freedom early on. Later, it becomes deferred risk. When modules multiply, refactors cut deep, and AI generates code across files, the lack of enforced contracts starts to show. You don’t feel it on day one. You feel it months later. A renamed field doesn’t fail at compile time. A changed response shape doesn’t break until runtime. The system keeps moving — until it doesn’t.

Switching to C# was surprising. Vibe coding didn’t slow down. It felt calmer. The compiler became a collaborator. Strong typing, interfaces, built-in dependency injection — structure is unavoidable. When AI generates code, it must respect contracts. If it doesn’t, the build fails. That changes the dynamic. With Python, AI generates logic and hopes the structure holds. With C#, AI generates inside guardrails. And when AI accelerates development, guardrails matter more.

This isn’t about which language is “better.” It’s about velocity. If AI writes faster than humans review, structural enforcement becomes your safety net. Python is incredible for small systems. For long-lived, multi-team backends, strongly typed ecosystems feel more stable.

The language didn’t change. The development dynamics did.
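The "renamed field doesn't fail at compile time" point can be made concrete. A minimal sketch (the `User` and `render_profile` names are hypothetical, invented for illustration): after a refactor renames a field, Python happily defines and imports the stale code, and nothing breaks until the call path actually executes. A static checker such as mypy or pyright would flag the stale access without running anything, which is the "compiler as collaborator" effect the post attributes to C#.

```python
from dataclasses import dataclass

@dataclass
class User:
    # Field was renamed from `name` to `full_name` during a refactor.
    full_name: str

def render_profile(user: User) -> str:
    # Stale access to the old field name. Python accepts this at
    # definition time; nothing fails until this line actually runs.
    return f"Profile: {user.name}"

user = User(full_name="Ada Lovelace")
try:
    render_profile(user)
except AttributeError as exc:
    # The break surfaces only at runtime, possibly months later.
    print(f"Runtime failure: {exc}")
```

Running `mypy` on this file reports the bad attribute access in CI, before any request ever hits the code path.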
Most agent frameworks tightly couple workflow logic with Python code. AgentSPEX is a dedicated specification language for LLM-agent workflows. Instead of burying control flow inside Python scripts, AgentSPEX makes it explicit. Typed steps, branching, loops, parallel execution, reusable submodules, and state management all live in a readable spec, separate from the execution layer. The agent harness underneath handles tool access, sandboxed environments, checkpointing, and verification. It's the difference between editing a blueprint and rewiring a building.

The team evaluated AgentSPEX across 7 benchmarks and ran a user study comparing it against a popular existing agent framework. Users found AgentSPEX workflows significantly more interpretable and accessible to author. The project also ships with ready-to-use agents for deep research and scientific research tasks, plus a visual editor that synchronizes graph and workflow views in real time.

The practical upside here is maintainability. Current orchestration tools like LangGraph, DSPy, and CrewAI give you structure, but modifying a workflow still means modifying code. A dedicated spec language means non-engineers can inspect, edit, and verify agent behavior without touching the runtime.

The real question: will teams adopt a new language when Python already works? If the interpretability gains hold up in production, the answer might be yes, especially when debugging a failing 15-step agent pipeline at 2 AM.

↓ 𝐖𝐚𝐧𝐭 𝐭𝐨 𝐤𝐞𝐞𝐩 𝐮𝐩? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
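For intuition, the spec-vs-execution split described above can be sketched in plain Python. This is an illustrative toy only, not actual AgentSPEX syntax: the workflow lives in a declarative data structure (the "blueprint"), and a small runner interprets it (the "building"), so editing the workflow never means touching the runtime.

```python
# Illustrative toy, NOT AgentSPEX syntax: a workflow declared as data,
# interpreted by a separate execution layer.
workflow = {
    "steps": [
        {"name": "search", "tool": "web_search", "input": "query"},
        {"name": "summarize", "tool": "summarize", "input": "search"},
    ]
}

# Hypothetical tool registry standing in for the agent harness.
tools = {
    "web_search": lambda q: f"results for {q!r}",
    "summarize": lambda text: f"summary of {text!r}",
}

def run(spec: dict, initial: str) -> dict:
    """Interpret the declarative spec: each step reads a named piece of
    state, calls its tool, and writes its output back under its name."""
    state = {"query": initial}
    for step in spec["steps"]:
        state[step["name"]] = tools[step["tool"]](state[step["input"]])
    return state

state = run(workflow, "agent frameworks")
print(state["summarize"])
```

Reordering steps or inserting a new one is an edit to `workflow` alone; `run` (and the harness behind it) stays untouched, which is the maintainability argument in miniature.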
Let me see if I can fix this headline for the developers here: "Company that has yet to make a profit and projects significant losses for the foreseeable future buys critical Rust-driven Python toolmaker that most data scientists, data engineers, and software engineers now depend on." If you work in Python, you probably use tools made by Astral (e.g., `uv`, `Ruff`). While the CEO reiterates his commitment to their mission of providing high-quality, high-performance tools, I'm not convinced this is good news for the Python community. https://lnkd.in/gNVZgSF2
AI chatbots know how to code. To them, Python, JavaScript, and SQL are just languages, and there are examples for them to train on absolutely everywhere. Some programmers have even taken to “vibe coding”, letting AI work as though it’s a junior programmer, just by describing what they want to build. But can regular people do that? Tim Biggs tested whether someone without coding experience could use AI to make a new web tool: https://lnkd.in/g5xeFpZe #AI #vibecoding
A Python interpreter written in Rust. Microsecond-scale startup. Pydantic just shipped Monty, and it's exactly the sandbox AI agents need.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺
When an LLM generates Python code, you need to run it somewhere safe. Current options: spin up a Docker container (slow), use a VM (heavy), or just run it and pray (please don't). Monty is a minimal Python interpreter built in Rust, designed specifically for executing LLM-generated code.

𝗪𝗵𝗮𝘁 𝗺𝗮𝗸𝗲𝘀 𝗶𝘁 𝗰𝗼𝗼𝗹
👉🏽 ~0.06ms startup (microsecond-scale, not second-scale)
👉🏽 No filesystem access unless you explicitly grant it
👉🏽 No network calls without authorization
👉🏽 Preset resource limits (execution time, memory, stack depth)
👉🏽 Runs in WebAssembly
👉🏽 ~4.5MB download size

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀
The interpreter pauses when it hits an external function call. Your host code decides whether to execute it and passes the result back. The LLM writes Python that calls your tools as regular functions, instead of going through the usual tool-call dance. This is already powering "code mode" in Pydantic AI, where the model writes Python calling tools as functions rather than making sequential tool calls.

𝗚𝗲𝘁𝘁𝗶𝗻𝗴 𝘀𝘁𝗮𝗿𝘁𝗲𝗱
👉🏽 𝘶𝘷 𝘢𝘥𝘥 𝘱𝘺𝘥𝘢𝘯𝘵𝘪𝘤-𝘮𝘰𝘯𝘵𝘺
Supports a solid subset of Python: asyncio, re, datetime, json, dataclasses. No class definitions yet, but enough for most agent tasks.

𝘞𝘩𝘢𝘵'𝘴 𝘺𝘰𝘶𝘳 𝘤𝘶𝘳𝘳𝘦𝘯𝘵 𝘴𝘢𝘯𝘥𝘣𝘰𝘹 𝘧𝘰𝘳 𝘳𝘶𝘯𝘯𝘪𝘯𝘨 𝘈𝘐-𝘨𝘦𝘯𝘦𝘳𝘢𝘵𝘦𝘥 𝘤𝘰𝘥𝘦?
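The pause-and-resume handshake described under "How it works" can be modeled with a plain Python generator. To be clear, this is an illustrative sketch of the pattern, not Monty's actual API, and every name in it (`sandboxed_program`, `host_run`, the tool lambdas) is invented: the "interpreter" yields each external call to the host, and the host decides whether to execute it before sending the result back.

```python
# Illustrative model of host-mediated external calls, NOT Monty's API.
# The sandboxed program yields (tool_name, args) whenever it needs the
# outside world; the host resumes it with the result, or refuses.

def sandboxed_program():
    # Stand-in for LLM-generated code calling tools as plain functions.
    weather = yield ("get_weather", {"city": "Paris"})
    report = yield ("send_report", {"body": f"Paris: {weather}"})
    return report

def host_run(program, allowed_tools):
    """Host loop: execute each requested call only if it is allowed."""
    gen = program()
    result = None  # first send(None) primes the generator
    while True:
        try:
            tool, args = gen.send(result)
        except StopIteration as done:
            return done.value
        if tool not in allowed_tools:
            raise PermissionError(f"tool {tool!r} not authorized")
        result = allowed_tools[tool](**args)

allowed = {
    "get_weather": lambda city: "18°C, cloudy",
    "send_report": lambda body: f"sent: {body}",
}
print(host_run(sandboxed_program, allowed))
```

The key property is that the sandboxed code never touches the filesystem or network itself; every effect is a request the host can inspect, log, or deny, which is the security model the post describes.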
Python was the first programming language I learned, but for me it fell by the wayside years ago. I’m now re-learning it specifically because it seems to be a required skill in the new generation of “AI” companies.

So, a genuine question for technical folks building AI companies: if your backend is just routing prompts to Anthropic or OpenAI, you're not doing ML. You're doing API calls. So why Python? If you're not training models, if you're not running local inference, if you have no NumPy pipelines or CUDA kernels, why on earth Python?

Golang gives you compiled performance, tiny binaries, and dead-simple concurrency. Node/TypeScript unifies your entire engineering team under one language and toolchain. There are plenty of other options. Python made sense once upon a time, but now? Not so sure.

If your company adds value while still being essentially an AI passthrough, is your stack a technical decision?
Claude works while I sleep.

In a previous post, I shared that I asked Claude to design a programming language from scratch — not for humans, but exclusively for AI. I told it to think about the philosophy first, then build. When I looked at the design principles Claude arrived at, they were strikingly similar to harness engineering — the discipline that emerged in 2026 around making AI agents reliable. This is not a coincidence. It is the logical conclusion Claude reached by analyzing how LLMs work and where they fail.

So I gave this language philosophy a name: Harness Engineering As A Language (HEAAL). Once the paradigm was established, Claude took off overnight.

Then I asked: can we measure how well a language embodies the HEAAL philosophy? Claude designed the metrics itself, then built a dashboard to visualize them. From this perspective, we can now quantitatively evaluate the harness safety of any project or agent — human-designed or otherwise. The concept of the HEAAL Score was born.

When we first introduced this score, the language's weaknesses became apparent. In many cases, AI was actually better off writing Python. But after continuous refinement and experimentation, we reached a turning point: Sonnet — which had never been trained on this language — was given a few-shot introduction and proceeded to outperform Python across every metric.

This is not an attempt to replace Python. We simply want to be there where we are needed. Please see our README for details.

This mirrors a key insight from harness engineering: once a model reaches a certain level of intelligence, it benefits more from a stronger harness than from more sophisticated training. (We are also fine-tuning smaller models, but that work remains experimental.)

I have no intention of stopping here. I plan to build safer and more useful runtimes, operating systems, and more. Though "I build" is not quite right — I provide the ideas, and Claude implements them.
Everything is still in its early stages. If you are interested, you can visit the repository: https://lnkd.in/gwPGmZRp
PYTHON NO LONGER ENDS WITH CODE. It begins where the architecture of intelligence begins.

For years, Python was seen as a programming language. A practical tool. A clean syntax. A fast way to build software. But that description is no longer enough.

TODAY, PYTHON IS BECOMING SOMETHING FAR GREATER. It is turning into a language of orchestration: of models, of tools, of agents, of reasoning chains, of decision layers, of context, and of action.

Not long ago, a developer wrote functions. NOW, MORE AND MORE OFTEN, A DEVELOPER DESIGNS BEHAVIOR. That is a profound shift. Because the real question is no longer: Can you write code? The real question is: CAN YOU BUILD A SYSTEM IN WHICH CODE, MODEL, DATA, MEMORY, AND CONTEXT BEGIN TO WORK AS ONE?

This is exactly why Python is not disappearing in the age of AI. Quite the opposite. ITS STRATEGIC ROLE IS GROWING. Because very few languages combine so much at once: simplicity, abstraction, integration, automation, experimentation, and the ability to move from idea to working system with extraordinary speed.

And that is why the future will not belong to those who merely write code. IT WILL BELONG TO THOSE WHO CAN DESIGN THE ARCHITECTURE OF DECISION. The engineer of the coming years will not be judged only by syntax. Not only by frameworks. Not only by whether a script runs. They will be judged by whether they can create structures in which intelligence becomes usable, directed, and real.

PYTHON IS NO LONGER JUST A LANGUAGE OF SOFTWARE. IT IS BECOMING A LANGUAGE OF AGENCY. A language for building systems that do not merely execute instructions, but coordinate meaning, logic, memory, and response.

So the real question is no longer: Should people still learn Python? The real question is: CAN YOU USE IT TO BUILD SYSTEMS THAT THINK WITH YOU, ACT WITH YOU, AND EXTEND HUMAN CAPABILITY?

That is where the game is now. And many still do not see it.

#Python #AI #LLM #MachineLearning #SoftwareArchitecture #Agents #Automation #FutureOfWork