erlang_python 1.2.0 - two releases later

Quick follow-up on the 1.0 announcement. Two new releases based on real-world usage.

Keep state between calls
ML models are expensive to load. Now you can keep them in memory and reuse them across requests. Load once, predict many times. Faster responses, lower costs.

Better concurrency
Python threads can now talk back to Erlang without blocking. This matters when you're running parallel ML workloads or batch processing.

Nested workflows
Python can call Erlang, which calls Python, which calls Erlang... as deep as you need. Useful for complex AI pipelines where orchestration and inference need to talk to each other.

Shared data
Workers can share cached results - embeddings, configs, intermediate computations. No need for external caching infrastructure.

The goal stays the same: bring Python's AI/ML ecosystem into your Erlang or Elixir backend without adding infrastructure complexity. No separate services, no message queues, no API layers to maintain.

https://lnkd.in/eHh9txfe

#erlang #elixir #python #ml #ai
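The "keep state between calls" section above is the easiest one to picture in code. Here is a minimal sketch of the Python side of the pattern, assuming the worker's interpreter stays resident between calls; erlang_python's actual registration API isn't shown in the post, and `_FakeModel` is a stand-in for a real, expensive-to-load model.

```python
import time

_model = None  # module-level state; persists across calls while the
               # worker's Python interpreter stays resident

class _FakeModel:
    """Stand-in for an expensive-to-load ML model (illustrative only)."""

    def __init__(self):
        time.sleep(1.0)  # simulate slow weight loading

    def run(self, payload):
        return {"input": payload, "score": 0.42}

def predict(payload):
    """Entry point an Erlang/Elixir caller would invoke repeatedly."""
    global _model
    if _model is None:          # only the first call pays the load cost
        _model = _FakeModel()
    return _model.run(payload)  # later calls reuse the cached model
```

The first call pays the simulated one-second load; every call after that returns immediately, which is the "load once, predict many times" claim in miniature.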
More Relevant Posts
AI agents are powerful, but they’re far more effective and efficient when they can reason over resolved code instead of raw text. OpenRewrite’s LST models Java and JavaScript to enable large-scale analysis and reviewable transformation for both developers and AI-assisted workflows. Now, with Python joining that model, more of the stack, including data and AI systems, can be analyzed and evolved with the same semantic foundation. https://lnkd.in/g2t_m-tS
For the past few days, I immersed myself in Python at every layer of the language. I revisited what makes Python distinct, what it truly means for it to be interpreted, and how compilation to bytecode fits into execution. I examined how modules differ from packages and why structure matters in real systems. Even concepts like indentation, scope, and case sensitivity felt less like rules and more like deliberate design choices.

I strengthened my understanding of core data behavior: lists versus tuples beyond mutability, sets and why they are unordered, dictionaries and how they contrast structurally with sets. Slicing, indexing, negative indexing, and what actually happens in memory. Type conversion, operator behavior, and the subtle differences between modulus, division, and floor division.

Control flow became sharper: the real distinction between for and while, the behavioral impact of break, continue, and pass, list comprehensions as expressive logic, lambda functions, and flexible parameter handling with *args and **kwargs.

Object-oriented principles took deeper shape: inheritance, including multiple inheritance and method resolution order; method overriding versus overloading; abstraction and encapsulation as design discipline; the true role of self in managing instance state; and the difference between instance, class, and static methods.

Memory management demanded serious attention: reference counting, garbage collection, why memory is not fully deallocated on exit, shallow versus deep copy and how object references behave, and how arguments are passed within Python’s model.

I also explored performance and architecture: the GIL and its implications for multithreading, generators and decorators as powerful structural tools, the performance difference between Python lists and NumPy arrays, and the internal mechanics behind sort() and sorted().

On the data side, I worked with Pandas, clarifying the difference between Series and DataFrames. I revisited file handling, JSON manipulation, exception handling patterns, and package management with pip. I examined architectural contrasts between Django and Flask from a structural perspective.

Then I reinforced everything with algorithmic practice: a Fibonacci implementation; linked list traversal, including finding the middle element in one pass; time complexity reasoning; Longest Common Substring; 0/1 Knapsack; and stateful problems like Asteroid Collision, which require stack simulation and disciplined thinking.

What stood out to me is how interconnected everything becomes when studied this way. Memory influences object behavior. Object behavior influences algorithm design. Algorithm design influences performance. Architecture influences scalability.

The deeper I go into Python, the more intentional it feels. And I am eager to keep building from here.
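One item from that list, shallow versus deep copy, is worth seeing in actual code, since the behavior surprises most people the first time. This uses only the standard-library copy module, nothing project-specific:

```python
import copy

matrix = [[1, 2], [3, 4]]

shallow = copy.copy(matrix)    # new outer list, but the inner lists are shared
deep = copy.deepcopy(matrix)   # recursively copies the inner lists as well

matrix[0][0] = 99
print(shallow[0][0])  # 99 -- the shallow copy sees the mutation
print(deep[0][0])     # 1  -- the deep copy is fully independent
```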
I'm transferring GuardSpine's verification kernel from Python to Lean 4. The migration's painful enough that I built a toolkit to make it repeatable. So I'm giving it away.

Quick context. GuardSpine is my open-source AI governance framework — 16 repos, SHA-256 hash chains, Apache 2.0 licensed. It answers one question: "Who authorized this semantic change?"

The core kernel works. But it's Python. Python is great for getting something running. It's terrible when you need to prove that something is correct.

Lean 4 is a formal proof language. The compiler mathematically verifies your code. Not "it passed the tests" — the compiler won't let you ship anything that isn't provably correct.

We're in the process of moving critical verification components over now. It's slow. It's tedious. And the tooling gap between Python and Lean is brutal — so I built lean-python-migration-kit to bridge it. It's on GitHub.

This matters beyond my project. AI agents don't just write code for humans anymore. They write code for other agents. Agent A generates a function. Agent B calls it. Agent C chains it into a workflow nobody reviewed. Who's verifying any of this?

Right now — mostly nobody. Maybe some unit tests. Maybe a human glances at it. That's not going to hold when agents are autonomously composing systems at scale.

Zero-trust isn't just a network security concept. It's becoming an AI architecture requirement. Every artifact an agent produces — code, configs, documents — needs cryptographic proof of integrity before another agent should touch it.

The research backs this up. VeriBench (AI4Math workshop, ICML 2025) found Claude 3.7 Sonnet could only compile about 12.5% of formal verification challenges in Lean 4. But a self-optimizing agent architecture hit nearly 90%. Agents with iterative self-correction are already dramatically better at proving code correct than single-shot models.

The money's following. Harmonic has raised nearly $300M building "hallucination-free" AI on Lean 4's backbone — valued at $1.45B as of late 2025. Every AI system that hit medal-level performance at the International Math Olympiad used Lean. Google DeepMind, ByteDance, Mistral — all building on it. Proof code isn't academic anymore. It's infrastructure.

My bet: within 3 years, "unverified agent output" will sound as reckless as "unencrypted database." The governance layer between agents won't be API keys and permissions. It'll be mathematical proof.

That's why GuardSpine needs a formally verified kernel. Still early. The migration is ongoing and the toolkit is rough in places. But the direction is clear.

If you're building agent infrastructure or thinking about AI governance — it's free: https://lnkd.in/eyTVWWe8

#AIGovernance #FormalVerification #Lean4 #OpenSource #GuardSpine
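For readers who haven't met hash chains before: the post doesn't show GuardSpine's actual record format, but as a rough, hypothetical illustration of what a SHA-256 hash chain buys you (tamper with any entry and every later hash stops verifying), it works roughly like this:

```python
import hashlib
import json

def chain_append(chain, artifact, authorized_by):
    """Append a record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"artifact": artifact,
              "authorized_by": authorized_by,
              "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = digest
    chain.append(record)
    return record

def chain_verify(chain):
    """Recompute every hash; tampering anywhere breaks all later links."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev_hash or digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
chain_append(chain, "def f(x): return x + 1", "agent-A")
chain_append(chain, "retries: 3", "agent-B")
print(chain_verify(chain))              # True
chain[0]["artifact"] = "def f(x): return x - 1"
print(chain_verify(chain))              # False -- tampering is detected
```

This only proves integrity and attribution of the record, not correctness of the code inside it; the correctness half is what the Lean 4 migration is for.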
Python is way too slow for AI at scale.

TM Dev Lab just published an MCP Server Performance Benchmark, and their conclusion about Python is blunt: "Not Recommended For: Any production high-load scenario (31x slower than Go/Java)."

See the full benchmark: https://lnkd.in/ekMuK5hp

Here's what stood out to me:
📊 Memory performance: Go #1, Java close behind
📊 CPU performance: Java #1, Go close behind
📊 Overall winner: Go, but I'd add an important caveat

Go won in a vacuum, but most medium-to-large enterprises have far more Java talent, infrastructure, and libraries than Go. For most organizations, Java is the smarter tradeoff.

This isn't about Python being a bad language. I recommend Python and TypeScript to new developers. But the strengths that make Python ideal for ML prototyping become liabilities when you need enterprise integration and performance at scale. I wrote about this last year in "Python is Not the Language of AI": https://lnkd.in/ebmWaQQD

The usual caveats: Benchmarks are never the full story, and MCP servers are just one segment of AI deployments. But when the performance gap is measured in orders of magnitude, it should give you pause before deploying Python for AI at scale.

What's your production AI stack built on?
Ed Donner, I remember your course had an example where you ported Python to C++ and it improved performance by multiple orders of magnitude. It escaped me then why we are doing ML in Python. The benchmark post above seemed relevant, but C++ doesn't figure in it. Any reason?
I should keep my mouth shut, but I can't. This is the type of content that can lead to misinformation, causing leaders to take wrong decisions without proper care.

First, let's dissect the problems with the benchmark (https://lnkd.in/dyJEr7fp):

1. From the linked code, the benchmark was executed with concurrent users calling tools sequentially, which means the Fibonacci computation was impacting the other users' tool calls, making it impossible to correctly determine the performance of each tool individually, since the measurements were contaminated (the benchmark.js mcpSession method initializes and calls all tools sequentially).

2. Docker Compose was sending healthcheck requests, which compete with the benchmark requests.

3. The warm-up is just 10 requests to the /mcp endpoint, which does not allow the interpreters to perform any JIT compilation in the actual tool endpoints. Also, just 10 requests with no time constraint prevents the garbage collector from running, so the warm-up is not really a warm-up (run_benchmark.sh warmup).

4. Each simulated tool call initializes a new session, which is not how a production implementation would look: you want to initialize the session once and reuse it to call the tools individually. We have known that since 1997, when HTTP/1.1 was initially released.

5. The Python, Java, and Go code use a recursive Fibonacci algorithm; the Node.js code runs an iterative algorithm (the two shapes are sketched right after this post). For non-technical readers: recursion is a lot slower than iteration, which means Node.js has an advantage here. It gets worse: since Python is not compiled, it cannot perform the optimizations that Java and Go can to eliminate some of the costs of recursion, meaning Python is severely penalized by this implementation.

6. The 3.9 million requests figure is the sum of the three rounds, not each individual one, which is also misleading.

But let's say the benchmark was actually done right. What do the results actually tell us? The answer is -> nothing <-. Nothing that we didn't already know: Java runs faster than Python, so what?

Now let me ask you: after you saved 20ms, what are you going to do about the 2 SECONDS spent in the actual LLM call? That's a 100x difference between your actual problem and the time you saved.

AI is a tool; so are Python, Go, and Java, and all of them have a role in how we work. But you need to know which tool to use and how, otherwise you will be making the same mistakes I highlighted here. Or at least hire someone who will guide you in this journey.

If I got something wrong, I'm more than happy to correct myself. Please use the comments and let's discuss.
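To make point 5 concrete, here are the two algorithm shapes side by side. This is a sketch of the shapes, not the benchmark's actual code:

```python
def fib_recursive(n):
    """Naive recursion: exponential call count.

    Reportedly what the Python, Java, and Go servers ran.
    """
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Linear iteration: n additions, no call overhead.

    Reportedly what the Node.js server ran.
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib_recursive(35) makes roughly 30 million function calls, while
# fib_iterative(35) does 35 additions, so the two implementations are
# not comparable workloads even though they return the same value.
print(fib_recursive(20), fib_iterative(20))  # 6765 6765
```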
I have been studying artificial intelligence and large language models for the past 4 months. Although I have more experience with Spring/Java and Node.js, I have used Python because of the examples in the book. The problem with Python is that it is pretty unstructured, and I can see a lot of technical debt accumulating, making the code more difficult to test and modify as complexity increases. Even the book I am using, which is excellent from an AI point of view, does not seem to reuse or encapsulate code, so I had to refactor the book examples into classes and objects. Python is used, I assume, because of its extensive libraries for dealing with tensors, which are the heart of deep learning. I see that Java 25 and Spring AI have recently been released, and I am wondering if anyone has used them and how those new libraries stack up against Python. #SpringAI #Java25 #SoftwareArchitecture #EnterpriseAI #LLM #ModernJava #TechDebt #SpringBoot #AIdevelopment #PrincipalEngineer #SoftwareEngineering #JavaDevelopment #Python #MachineLearning #GenerativeAI
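The kind of refactor described, wrapping loose script code behind a small class so the LLM call can be stubbed in tests, might look like this. All names here are illustrative, not from the book or any particular SDK:

```python
class ChatClient:
    """Wraps a raw completion call behind a small, testable interface.

    The `complete` callable is injected, so tests can pass a stub
    instead of a real network-bound LLM client.
    """

    def __init__(self, complete, system_prompt: str = ""):
        self._complete = complete
        self._system_prompt = system_prompt

    def ask(self, question: str) -> str:
        prompt = f"{self._system_prompt}\n{question}".strip()
        return self._complete(prompt)

# In a test, inject a stub instead of a real API call:
client = ChatClient(complete=lambda p: f"echo: {p}",
                    system_prompt="Be brief.")
print(client.ask("What is a tensor?"))
```

Dependency injection like this is the same discipline Spring encourages, which may be why the book's flat scripts feel unstructured to a Java developer.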
LandingAI just open-sourced ade-python — a Python SDK that enables agentic document extraction, turning complex documents into structured data for AI workflows. 🚀 🔗 https://lnkd.in/gWmAhXCb
The "Python monopoly" on AI agents is officially breaking. 🧊🔨 According to the latest from The New Stack, the Java ecosystem is no longer just "catching up"—it’s providing a superior path for production-grade AI agents. While Python is great for the lab, the JVM is where AI goes to work. Here’s why Java (and Kotlin!) are becoming the secret weapons for AI agents in 2026: 1️⃣ Determinism over "Prompt Magic" 🔮➡️✅ New frameworks like Embabel are introducing Goal-Oriented Action Planning. Instead of hoping an LLM follows instructions, the framework uses deterministic logic to ensure agents are predictable and explainable. In the enterprise, "I don't know why the bot did that" is no longer an acceptable answer. 2️⃣ The "Real-World" Advantage (Fault Tolerance) 🛡️ AI agents are long-running systems. They fail, they time out, they need to be restarted. Koog (a Kotlin-native framework from JetBrains) is built on the premise that an AI agent is just another microservice. It brings the decades of JVM fault tolerance and database integration to the "messy" world of LLMs. 3️⃣ LangChain4j: The Enterprise Heavyweight 🏗️ It’s unopinionated, lightweight, and now has the backing of Microsoft. Whether you’re on Quarkus, Spring Boot, or Micronaut, LangChain4j provides the secure, type-safe foundation that Python-based alternatives struggle to match at scale. 4️⃣ Performance is No Longer "Optional" ⚡ As AI agent usage explodes, the ability to handle massive concurrency becomes critical. Python’s global interpreter lock (GIL) is a bottleneck; Java’s Virtual Threads (Project Loom) are the solution. 💡 The Big Takeaway: Python built the prototypes. The JVM is building the Infrastructure. If your AI agent needs to interact with a real database, authorize real users, and run with 99.9% uptime, you don't need a new stack—you need the one you already have. Are you sticking with Python for your agents, or are you moving to a type-safe, JVM-based future? Let’s hear your thoughts! 👇 #Java #Kotlin #AIAgents #SoftwareEngineering #LangChain4j #JVM #SpringAI #TechTrends #JetBrains #Python
🌶️ 💪 Modern API workloads aren’t “one user, one request” anymore—they’re bursts of concurrent traffic, mixed fast/slow calls, and unforgiving tail-latency expectations. That’s why I’m excited to share our new post on Select AI for Python 1.3 and a major step forward for production-grade concurrency: connection pooling. https://lnkd.in/eze4sUCb

With 1.3, developers can now pool connections using:
select_ai.create_pool()
select_ai.create_pool_async()

In the blog, learn what changed from standalone connections, what we measured by integrating pooling into a FastAPI service, and how to think about choosing a pool size that fits your workload. The results: better throughput, improved p95/p99 latency, and more predictable behavior under load—exactly what matters in real-world services.

If you’re running (or planning) concurrent Python services with Select AI, this is one of the simplest, highest-impact upgrades you can make.

#Oracle #Database #SelectAI #OracleAI #Python #FastAPI #Concurrency #ConnectionPooling
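A minimal sketch of the intended usage, based only on what the post names: select_ai.create_pool() and select_ai.create_pool_async() come from the announcement itself, but pool-sizing arguments and the connection checkout API are assumptions not shown here, so check the linked blog for the real signatures.

```python
import select_ai  # Select AI for Python 1.3+

# Create the pool once at process startup rather than opening a new
# standalone connection per request, so concurrent handlers (e.g. in a
# FastAPI service) reuse warm connections instead of paying connection
# setup cost on every call.
pool = select_ai.create_pool()

# In an async service, the async variant avoids blocking the event loop
# during pool creation:
#     pool = await select_ai.create_pool_async()
```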
Incredible - yet another upgrade to an amazing tool, quietly launched in our tiny, vital corner of the world, that will likely augment millions of people's lives... eventually. Then again, eventually consistent does seem to be our MO. 😄 Keep crushing it, Benoit. 🙏