I’m genuinely passionate about the possibilities of Python. In data, machine learning, and AI, it’s an incredibly powerful and flexible tool. And when we bring LLMs and RAG into the picture, it gets even more exciting. But real-world work teaches an important lesson: what pays the bills is the client’s actual need.

Lately, I’ve been spending time migrating and integrating applications from Python to Java, and honestly, I expected the process to be far more complex. Instead, I’ve been pleasantly surprised by how mature and productive the Java AI ecosystem has become.

Tools that really stood out to me:
- LangChain4j
- Spring AI
- DJL (Deep Java Library)

At the end of the day, language choice is just a means to an end. What truly matters is delivering value through solid architecture, pragmatic decisions, and technology serving the problem, not the other way around. Always learning, adapting, and evolving.

#Python #Java #AI #LLM #RAG #SoftwareEngineering #Architecture #ContinuousLearning
Python to Java Migration: Delivering Value with Solid Architecture
More Relevant Posts
Python is way too slow for AI at scale.

TM Dev Lab just published an MCP Server Performance Benchmark, and their conclusion about Python is blunt: "Not Recommended For: Any production high-load scenario (31x slower than Go/Java)." See the full benchmark: https://lnkd.in/ekMuK5hp

Here's what stood out to me:
📊 Memory performance: Go #1, Java close behind
📊 CPU performance: Java #1, Go close behind
📊 Overall winner: Go, but I'd add an important caveat

Go won in a vacuum, but most medium-to-large enterprises have far more Java talent, infrastructure, and libraries than Go. For most organizations, Java is the smarter tradeoff.

This isn't about Python being a bad language. I recommend Python and TypeScript to new developers. But the strengths that make Python ideal for ML prototyping become liabilities when you need enterprise integration and performance at scale. I wrote about this last year in "Python is Not the Language of AI": https://lnkd.in/ebmWaQQD

The usual caveats: benchmarks are never the full story, and MCP servers are just one segment of AI deployments. But when the performance gap is measured in orders of magnitude, it should give you pause before deploying Python for AI at scale.

What's your production AI stack built on?
Ed Donner, I remember your course had an example where you ported Python to C++ and it improved performance by multiple orders of magnitude. It then escaped me why we are doing ML in Python. The linked article seemed relevant, but C++ doesn't figure in it. Any reason?
I should keep my mouth shut, but I can't. This is the type of content that can lead to misinformation, causing leaders to make the wrong decisions without proper care.

First, let's dissect the problems of the benchmark (https://lnkd.in/dyJEr7fp):

1. From the linked code, it was executed with concurrent users calling tools sequentially, which means the Fibonacci computation was impacting the other users' tool calls. That makes it impossible to determine the performance of each tool individually, since the measurements were contaminated (benchmark.js/mcpSession initializes and calls all tools sequentially).
2. Docker Compose was sending healthcheck requests, which compete with the benchmark requests.
3. The warm-up is just 10 requests to the /mcp endpoint, which does not allow the interpreters to perform any JIT compilation in the actual tool endpoints. Also, just 10 requests with no time constraint prevents the garbage collector from running, so the warm-up is not really a warm-up (run_benchmark.sh/warmup).
4. Each simulated tool call initializes a new session, which is not how a production implementation would look. You want to initialize the session once and reuse it to call the tools individually; we have known that since 1997, when HTTP/1.1 was initially released.
5. The Python, Java, and Go code use a recursive Fibonacci algorithm; the Node.js code runs an iterative algorithm. For non-technical readers: recursion is a lot slower than iteration, which gives Node.js an advantage here. It gets worse: since Python is not compiled, it cannot perform the optimizations that Java and Go can to eliminate some of the costs of recursion, so Python is severely penalized by this implementation.
6. The 3.9 million requests figure is the sum of the three rounds, not an individual round, which is also misleading.

But let's say the benchmark was actually done right. What do the results tell us? The answer is: nothing. Nothing we didn't already know: Java runs faster than Python. So what? Now let me ask you: after you save 20 ms, what are you going to do about the 2 SECONDS spent in the actual LLM call? That's a 100x difference between your actual problem and the time you saved.

AI is a tool, and so are Python, Go, and Java; all of them have a role in how we work. But you need to know which tool to use and how, otherwise you will make the same mistakes I highlighted here. Or at least hire someone who will guide you on this journey.

If I got something wrong, I'm more than happy to correct myself; please use the comments and let's discuss.
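Point 5 is easy to see concretely. Below is a minimal Python sketch (my own illustration, not the benchmark's actual code) of the two Fibonacci variants being compared against each other:

```python
def fib_recursive(n: int) -> int:
    """Naive recursion: exponentially many calls (the variant Python/Java/Go ran)."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)


def fib_iterative(n: int) -> int:
    """Simple loop: n additions (the variant Node.js ran)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both produce identical values, but fib_recursive(30) makes over a million function calls while fib_iterative(30) performs thirty additions. Timing servers that run different variants measures the algorithm, not the language.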
The "Python monopoly" on AI agents is officially breaking. 🧊🔨

According to the latest from The New Stack, the Java ecosystem is no longer just "catching up": it's providing a superior path for production-grade AI agents. While Python is great for the lab, the JVM is where AI goes to work.

Here's why Java (and Kotlin!) are becoming the secret weapons for AI agents in 2026:

1️⃣ Determinism over "Prompt Magic" 🔮➡️✅
New frameworks like Embabel are introducing Goal-Oriented Action Planning. Instead of hoping an LLM follows instructions, the framework uses deterministic logic to ensure agents are predictable and explainable. In the enterprise, "I don't know why the bot did that" is no longer an acceptable answer.

2️⃣ The "Real-World" Advantage (Fault Tolerance) 🛡️
AI agents are long-running systems. They fail, they time out, they need to be restarted. Koog (a Kotlin-native framework from JetBrains) is built on the premise that an AI agent is just another microservice. It brings decades of JVM fault tolerance and database integration to the "messy" world of LLMs.

3️⃣ LangChain4j: The Enterprise Heavyweight 🏗️
It's unopinionated, lightweight, and now has the backing of Microsoft. Whether you're on Quarkus, Spring Boot, or Micronaut, LangChain4j provides the secure, type-safe foundation that Python-based alternatives struggle to match at scale.

4️⃣ Performance Is No Longer "Optional" ⚡
As AI agent usage explodes, the ability to handle massive concurrency becomes critical. Python's global interpreter lock (GIL) is a bottleneck; Java's virtual threads (Project Loom) are the solution.

💡 The Big Takeaway: Python built the prototypes. The JVM is building the infrastructure. If your AI agent needs to interact with a real database, authorize real users, and run with 99.9% uptime, you don't need a new stack; you need the one you already have.

Are you sticking with Python for your agents, or are you moving to a type-safe, JVM-based future? Let's hear your thoughts!
👇 #Java #Kotlin #AIAgents #SoftwareEngineering #LangChain4j #JVM #SpringAI #TechTrends #JetBrains #Python
"Implementing ML algorithms in Python is a library call. Implementing them in C++ is a math lesson you can't escape."

The conflict (the exploding gradient): while building linear and logistic regression from scratch, I hit a classic wall: exploding gradients. One minute my loss was decreasing, the next my weights were hitting NaN and my model was flying off the rails. In C++, you don't have the safety nets of high-level frameworks. If your feature scaling is off or your learning rate is a fraction too high, the double-precision variables will let you know immediately.

The technical insight: I learned that numerical stability is just as important as the algorithm itself. To fix the divergence, I had to look closely at:
- Weight initialization: starting too far from the local minimum.
- Feature scaling: ensuring the input space didn't warp the gradient.
- Learning rate scheduling: realizing that a static alpha isn't always the answer.

My loss curves aren't perfect yet, and my C++ pointer management is still a work in progress, but seeing the cost function finally trend downward without a library's help is incredibly satisfying. I'm still learning the nuances of optimization and memory efficiency in systems-level ML, but the "from scratch" journey is where the real growth happens.

To my fellow C++ or ML devs: what was the most frustrating NaN or overflow error you've ever had to debug? Check out the progress (and the struggles) in the repo here: https://lnkd.in/gb-Tcu3N
erlang_python 1.2.0 - two releases later

Quick follow-up on the 1.0 announcement. Two new releases based on real-world usage.

Keep state between calls: ML models are expensive to load. Now you can keep them in memory and reuse them across requests. Load once, predict many times. Faster responses, lower costs.

Better concurrency: Python threads can now talk back to Erlang without blocking. This matters when you're running parallel ML workloads or batch processing.

Nested workflows: Python can call Erlang, which calls Python, which calls Erlang... as deep as you need. Useful for complex AI pipelines where orchestration and inference need to talk to each other.

Shared data: workers can share cached results - embeddings, configs, intermediate computations. No need for external caching infrastructure.

The goal stays the same: bring Python's AI/ML ecosystem into your Erlang or Elixir backend without adding infrastructure complexity. No separate services, no message queues, no API layers to maintain.

https://lnkd.in/eHh9txfe

#erlang #elixir #python #ml #ai
Imagine a future where high-level languages like Python, Java, or Rust no longer exist.

There's a fascinating theory circulating in tech: as agentic IDEs evolve, they might bypass human-readable code entirely, generating optimized binary directly for the machine. Intent ➡️ Execution. No syntax wars. No framework fatigue. Pure efficiency.

But here's the paradox: this doesn't kill engineering. It elevates it. If the machine handles the "how," humans must master the "what." In this future, engineers transition from writers of code to architects of intent.

When you can no longer read the source code to spot a bug, the hardest problems become:
• How precisely can we describe the problem?
• How do we verify safety and performance without reading the output?
• How do we design constraints that an AI cannot hallucinate its way out of?

The bottleneck of the future isn't typing speed or syntax knowledge. It's clarity of thought. We're moving toward a world where the ability to ask the right question is infinitely more valuable than knowing the right syntax.

Are we ready to stop being coders and start being architects?

#FutureOfTech #AI #SoftwareEngineering #GenerativeAI #DevCommunity
Why Python Is Losing the Production Battle to Compiled Languages

Python is the undisputed king of AI research and prototyping. But in a high-scale production environment? The story changes.

As a developer building deep-search AI tools, I've seen where Python hits the wall:
- Global Interpreter Lock (GIL): a major bottleneck for true multi-threading.
- Execution speed: when processing thousands of image-metadata requests, milliseconds matter.
- Memory overhead: native batch processing in Python often lacks the efficiency of lower-level languages.

In my recent projects, I've been shifting toward Kotlin Multiplatform (KMP) and Swift for the edge, while using serverless Go/Node for the backend. The goal is simple: AI that is fast, not just smart.

What's your stack for AI in production?

#AI #SoftwareEngineering #Python #KMP #CloudComputing
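The GIL point is easy to demonstrate. Here is a small sketch (a toy workload of my own; exact timings vary by machine and Python build) showing that threads do not speed up CPU-bound pure-Python work on standard CPython, because only one thread holds the interpreter lock at a time:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def cpu_task(n: int) -> int:
    """Pure-Python CPU-bound loop; the GIL lets only one thread run it at a time."""
    total = 0
    for i in range(n):
        total += i * i
    return total


N, WORKERS = 200_000, 4

start = time.perf_counter()
sequential = [cpu_task(N) for _ in range(WORKERS)]
t_seq = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    threaded = list(pool.map(cpu_task, [N] * WORKERS))
t_thr = time.perf_counter() - start

# On standard (non-free-threaded) CPython, t_thr is roughly equal to t_seq:
# the four threads take turns holding the GIL instead of running in parallel.
print(f"sequential: {t_seq:.3f}s  threaded: {t_thr:.3f}s")
```

For I/O-bound work the picture flips (threads release the GIL while waiting), which is why the GIL bites hardest in exactly the compute-heavy paths this post describes.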
🐍 Why Python Is the Unrivalled King of Machine Learning

In the world of AI, speed and simplicity are everything. I'm often asked why I chose Python for my self-study journey in machine learning instead of other languages like C++ or Java. The answer isn't just about it being "easy." It's about the ecosystem.

Here is why Python remains the industry standard:

1️⃣ The Powerhouse Ecosystem (Libraries)
Python provides specialised tools for every stage of the AI pipeline.
🌟 NumPy: for high-performance N-dimensional array computing.
🌟 Pandas: for seamless data manipulation and analysis.
🌟 Matplotlib/Seaborn: for visualising complex data patterns.
🌟 Scikit-learn/PyTorch: for building and deploying actual ML models.

2️⃣ Focus on Logic, Not Syntax
As a developer, I want to spend my time solving mathematical problems and optimising neural networks, not fighting with complex memory management or syntax errors. Python's readability enables us to translate mathematical concepts into code almost instantly.

3️⃣ Community & Support
From Stack Overflow to GitHub, the AI community speaks Python. If you hit a bug in your "Neural-Math-Engine," someone, somewhere has already solved it in Python.

4️⃣ Seamless Integration
Python acts as a "glue language." It can easily trigger high-performance C/C++ code in the background (which is how libraries like NumPy stay so fast), giving us the best of both worlds.

My take: as I work through my "2026 AI Roadmap," Python has been the bridge between complex calculus and real-world implementation.

What do you think? Is Python's dominance here to stay, or do you see languages like Julia or Mojo taking over in the future? Let's discuss! 👇

#Python #MachineLearning #ArtificialIntelligence #DataScience #Coding #SelfLearning #TechTrends #SoftwareDevelopment #Roadmap2026 #ITUM
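The "glue language" point can be seen in a single measurement. A quick sketch, assuming NumPy is installed, contrasting an interpreted Python loop with the same element-wise addition dispatched to NumPy's compiled kernels:

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Pure-Python loop: each addition runs through the interpreter.
start = time.perf_counter()
slow = [x + y for x, y in zip(a.tolist(), b.tolist())]
t_loop = time.perf_counter() - start

# NumPy ufunc: one call, and the loop itself runs in compiled C.
start = time.perf_counter()
fast = a + b
t_numpy = time.perf_counter() - start

print(f"python loop: {t_loop:.4f}s  numpy: {t_numpy:.4f}s")
```

On a typical machine the NumPy version is one to two orders of magnitude faster. The takeaway is that "writing Python" for ML mostly means orchestrating compiled code, which is how the ecosystem stays both readable and fast.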
🐍 Why Python? It's not just a language, it's a shortcut.

If you look at the most impactful AI projects, they almost all have one thing in common: they are written in Python.

Why? It's not because Python is the fastest language (C++ wins that). It's because Python is the most human language. Here are the 3 reasons Python won the data science war:

1. The Ecosystem
In other languages, you have to build your own tools. In Python, someone has already built them for you. Want to handle matrices? Use NumPy. Want to clean a messy CSV? Use Pandas. Want to build a neural network? Use PyTorch. You aren't writing code from scratch; you are assembling powerful blocks to solve real-world problems.

2. The Readability
Python code reads like English. This is huge for collaboration. When I build a credit risk model for a microfinance firm, I want the lead analyst to be able to read my logic even if they aren't a "coder."

3. The Production Bridge
Python is the only language that is great for research AND production. You can explore data in a Jupyter Notebook in the morning and deploy that same logic as a web API using FastAPI or Django in the afternoon. There is no translation needed. What you build is what you ship.

We don't choose Python because we love the syntax. We choose it because it reduces the time between having an idea and delivering impact.

#Python #DataScience #MachineLearning #ZambiaTech #FinTech