Running Python inside the JVM is cool. I'm not convinced it's useful. Yet.

I tried Project Detroit, and technically, it delivers. You can call Python from Java. Python can call back into Java. Same process. No network hop. That part works.

But the thing that actually matters right now? No pip. No NumPy. No PyTorch. No real ML stack. Detroit isn't solving AI integration today. It's redefining what the boundary between runtimes could look like.

And that raises some uncomfortable questions: do we actually want to remove the process boundary? Or does that boundary exist for good reasons: isolation, scaling, failure control? Because once everything runs in one process, you don't just gain performance. You also inherit each other's problems.

Right now: not production-ready. Not even close. But if they crack the Python ecosystem problem, this stops being a demo and starts becoming architecture.
Rishabh Nigam’s Post
Every insight Potpie delivers about your codebase starts with parsing. Our context engine builds a complete map of your codebase (its structure, its components, its relationships) into a knowledge graph that agents can use to navigate and query code faster. It currently supports more than 15 languages, including Python, TypeScript, and Java.

We understand that parsing large repositories is inherently time-consuming, but it shouldn't slow you down. That's exactly why we rebuilt this critical functionality in Rust: to make understanding your codebase faster, without compromising what the engine delivers. Our benchmarks show approximately 30% faster parsing for repositories with over 1M lines of code, and we expect the gap to grow significantly for larger codebases.

Looking ahead, we plan to parallelize the parsing pipeline, using multi-core threading to process files simultaneously, free of Python's GIL constraints. That will let us index enterprise-scale repositories in under a minute.
Maybe you've heard the hype about Rust being "memory-safe", but what does that actually mean?

Most languages handle memory one of two ways:
→ You manage it manually (C/C++): fast, but one mistake means a security vulnerability
→ A garbage collector does it (Go, Java, Python): safer, but you're paying a runtime cost you don't always control

Rust does neither. Instead, it uses a system of ownership. Here are three rules the compiler enforces before your code ever runs.

1. Every value has exactly one owner. One variable. One piece of data. No ambiguity about who's responsible for it.
2. When the owner goes out of scope, the value is dropped. No free(). No garbage collector pause. Memory is released at compile-determined points.
3. You can have many immutable borrows, or exactly one mutable borrow, never both simultaneously. The compiler ensures these never conflict.

The payoff: entire categories of bugs (use-after-free, dangling pointers, data races) caught before your code ever runs.

Is Rust the right tool for every project? Definitely not. The learning curve is real, and for some projects, the tradeoff isn't worth it. But if you're working on systems where performance and safety both matter, it's worth understanding why Rust manages memory the way it does.
For use cases where ownership and borrowing work, it's fine. But as soon as you need to handle cycles, like the very common parent/child pattern (e.g. order/order-lines), Rust falls back on Arc, just like Swift's ARC. Marketing overstates ownership and borrowing; it only covers part of the use cases!
🐍 Access Modifiers in Python: What Every Developer Should Know!

Coming from Java or C++? You might expect Python to have strict private and public keywords. It doesn't, and that's by design. 🎯

Python uses a naming convention to signal access intent:

1️⃣ Public → self.name
Accessible from anywhere. The default for all attributes and methods.

2️⃣ Protected → self._name
Single underscore. A gentle signal for "internal use": still accessible, but handle with care. 🔓

3️⃣ Private → self.__name
Double underscore triggers name mangling → _ClassName__name. Harder (but not impossible) to access from outside. 🔒

💡 Python's philosophy: "We're all consenting adults here." It trusts developers to respect conventions rather than enforcing hard rules. Understanding this is key to writing clean, maintainable, Pythonic OOP code.

#Python #PythonProgramming #OOP #ObjectOrientedProgramming #SoftwareEngineering #CodeNewbie #PythonDeveloper #Programming #TechLearning #CleanCode #100DaysOfCode #LearnPython #BackendDevelopment #DevTips #PythonTips
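The three levels can be seen in a minimal sketch (the `Account` class here is a made-up example, not from any library):

```python
class Account:
    def __init__(self):
        self.name = "public"    # public: accessible from anywhere
        self._balance = 100     # protected by convention: single underscore
        self.__pin = 1234       # "private": mangled to _Account__pin

acct = Account()
print(acct.name)            # public attribute: works anywhere
print(acct._balance)        # works too, but the underscore says "internal use"

# acct.__pin raises AttributeError: name mangling hides the attribute
print(hasattr(acct, "__pin"))     # False
print(acct._Account__pin)         # 1234 -- the mangled name is still reachable
```

Note that mangling is a collision-avoidance mechanism, not security: the attribute is always reachable through its mangled name.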
Stop writing Python wrappers. Build AI Control Planes. Python is unmatched for training models and writing data scripts. But when you move to production, you aren't just calling an LLM—you are building a Control Plane. In my current Travel Agent RAG system, a single user query requires parallel calls to fetch hotel data, flight APIs, and vector embeddings before hitting the LLM. Doing this efficiently requires strict thread management and fail-fast concurrent structures. Building this in a lightweight script often leads to bottlenecked event loops. Building it with Java 21, Spring Boot, and Virtual Threads gives you massive throughput without sacrificing readability. Are you orchestrating your LLM calls synchronously in your backend, or relying on async task queues? Let’s talk architecture below. 👇
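For comparison, the fan-out shape described above can be sketched in Python with asyncio. The fetchers below are stand-ins with simulated latency, not real hotel/flight/embedding APIs:

```python
import asyncio

async def fetch_hotels(query: str) -> list[str]:
    await asyncio.sleep(0.01)          # simulated I/O latency
    return [f"hotel for {query}"]

async def fetch_flights(query: str) -> list[str]:
    await asyncio.sleep(0.01)
    return [f"flight for {query}"]

async def fetch_embeddings(query: str) -> list[float]:
    await asyncio.sleep(0.01)
    return [0.1, 0.2]

async def build_context(query: str) -> dict:
    # gather with the default return_exceptions=False is fail-fast:
    # the first exception propagates immediately to the caller
    # (the other tasks keep running unless you cancel them).
    hotels, flights, vectors = await asyncio.gather(
        fetch_hotels(query), fetch_flights(query), fetch_embeddings(query)
    )
    return {"hotels": hotels, "flights": flights, "vectors": vectors}

ctx = asyncio.run(build_context("paris"))
```

Whether this or virtual threads wins in practice depends on how much blocking, CPU-bound work sneaks into the event loop, which is exactly the bottleneck the post warns about.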
Python nodes are fine until you need to reason about executor timing.

Most ROS2 teams start in Python. It is faster to iterate, the API is cleaner, and rclpy works well for the majority of nodes. Then at some point a callback is late, a timer drifts, a high-frequency publisher starts dropping messages, and suddenly you are reading the Global Interpreter Lock (GIL) documentation at 11pm.

The split I have usually seen in practice:
- Write Python for: launch files (you have no choice), testing with launch_testing, diagnostic scripts, parameter tuning nodes, and any node that runs at low frequency and does not touch hardware directly.
- Write C++ for: hardware interfaces in ros2_control, any node with timing guarantees, high-frequency publishers above ~50Hz, anything that shares data between callbacks without wanting to think about the GIL, and Nav2 and MoveIt plugins.

The non-obvious part: it is not really about speed. A Python subscriber at 10Hz is fine. The problem is that Python's executor behavior under load is harder to reason about. A C++ node with a MultiThreadedExecutor and proper callback groups gives you explicit control over what runs concurrently. rclpy gives you the same API, but the GIL means your mental model of "these callbacks run in parallel" is not always true.

Senior engineers on most teams I know write production logic in C++ and use Python for everything else. Not because they prefer C++, but because the failure modes are easier to find.

What is your current split? And has the GIL ever caught you off guard in a ROS2 node? I would love to hear about your experience!
Today I had such an awesome conversation with a "powerful" model. I asked it which was better, Java or Python. At first it wouldn't answer, so I pressed it. Of course it answered Python. :)

Funny, anecdotal arguments followed. "Java has a compile step," it said. I replied, "Everyone uses an IDE. They click the green play button. No one really has a compile step." It said, "You're right. I meant the Python REPL." I said, "Java has lots of REPLs: BeanShell, JBang, Groovy..." Answer: "You're right, I was speaking anecdotally. They both have REPLs."

Man, this thing was struggling. I moved on to the favorite Python argument: "the concise code and easy language." Then I hit it with, "Python's packaging is a mess. You have to handcraft __init__.py to bandaid a broken language." "Python's packaging isn't a mess, it is 'consistently inconsistent,'" quoth the powerful model.

People talk about amazing "summarization" and "thinking" skills. It's generally just feeding you hollow arguments. "Consistently inconsistent."
Stop writing clunky Python code. 🐍

Most developers treat Python like C++ or Java, but the "Zen of Python" reminds us: Simple is better than complex. I just finished Benjamin Bennett Alexander's latest guide, and these 3 "tricks" are absolute game-changers for your daily workflow:

1️⃣ Merge dictionaries with 1 character: forget .update(). Since Python 3.9, the | operator is the cleanest way to combine data.

2️⃣ Stop guessing memory usage: use sys.getsizeof() to see exactly how much RAM your objects are eating. (Spoiler: tuples are almost always more efficient than lists.)

3️⃣ Speed up string joins: stop using + to concatenate. Using .join() is significantly faster because Python only has to create one string object in memory.

Python isn't just about making it work; it's about making it "Pythonic." Which of these is new to you? Let's discuss in the comments. 👇

#PythonProgramming #SoftwareDevelopment #CodingTips #Technology #AI
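All three tricks fit in one short snippet (the config/words values are just illustrative data):

```python
import sys

# 1. Merge dictionaries with | (Python 3.9+); the right-hand side wins on conflicts.
defaults = {"retries": 3, "timeout": 10}
overrides = {"timeout": 30}
config = defaults | overrides      # {'retries': 3, 'timeout': 30}

# 2. Inspect the shallow memory footprint of an object.
list_size = sys.getsizeof([1, 2, 3])
tuple_size = sys.getsizeof((1, 2, 3))
print(tuple_size < list_size)      # True on CPython: tuples are leaner

# 3. Build one string in a single pass instead of repeated + concatenation.
words = ["join", "beats", "plus"]
sentence = " ".join(words)         # "join beats plus"
```

One caveat worth knowing: sys.getsizeof() reports only the container itself, not the objects it references, so it understates the footprint of nested structures.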
Software news: teiphy v.0.1.24 is now available at https://lnkd.in/gjXnatFh! I've made some Dependabot-informed dependency updates (which unfortunately required me to drop Python 3.9 support); conversion methods now include a progress bar; and BEAST 2.7 XML outputs are more streamlined to reduce unnecessary computation. As always, you can check out the source code directly on GitHub, or you can install the latest version easily with pip via pip install teiphy
The Python HTTP client space is in a weird place right now. requests is everywhere, but it is 10+ years old. httpx looked like the future for async, but development has slowed and the direction is no longer clear. So teams either stick with something outdated or build workarounds around its limitations.

Zapros is one of the projects trying to rethink this layer. What is interesting is not the library itself, but the idea behind it. Instead of tying the client to a specific transport implementation, it separates the layers and builds everything around abstractions. That opens things like:
- switching transports without rewriting the client logic
- composing behavior through middlewares (retries, caching, etc.)
- supporting both sync and async in a cleaner way

Will it replace existing tools? Hard to say. Still, it is a good signal that people are trying to rethink this layer, not just patch it. And this is usually where bigger architectural shifts start.

Curious what others use today for HTTP in Python. Still requests? Moved to httpx? Or something else entirely?
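The transport/middleware separation idea can be sketched in a few lines of plain Python. Every name here is illustrative; this is not Zapros's actual API, just the general pattern:

```python
from typing import Callable

# The abstraction: a transport is any callable mapping a URL to a response body.
Transport = Callable[[str], str]

def with_retries(transport: Transport, attempts: int = 3) -> Transport:
    """Middleware: wraps any transport with simple retry-on-OSError behavior."""
    def wrapped(url: str) -> str:
        last_err: OSError | None = None
        for _ in range(attempts):
            try:
                return transport(url)
            except OSError as err:
                last_err = err
        raise last_err
    return wrapped

class Client:
    """Client logic depends only on the Transport abstraction,
    so swapping urllib for httpx, sockets, or a test double
    requires no changes here."""
    def __init__(self, transport: Transport):
        self._transport = transport

    def get(self, url: str) -> str:
        return self._transport(url)

# A flaky fake transport stands in for a real HTTP layer in this sketch.
calls = {"n": 0}
def flaky_transport(url: str) -> str:
    calls["n"] += 1
    if calls["n"] < 2:
        raise OSError("transient network error")
    return f"ok:{url}"

client = Client(with_retries(flaky_transport))
body = client.get("https://example.test/data")   # succeeds on the 2nd attempt
```

The same shape extends to caching, tracing, or auth middlewares, and an async variant would mirror it with `Callable[[str], Awaitable[str]]`.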
(Detroit repo: github.com/openjdk/detroit-python. You'll need JDK 25 and Python 3.14, exactly those versions.)