I built a programming language from scratch in Rust. Then I benchmarked it against Python. The results broke my expectations.

🔬 THE BENCHMARK
We ran 74 functions across 15 categories, testing Vitalis v9.0 against pure Python on identical workloads (100K elements, zero overhead). Vitalis won 13/13 benchmarks, averaging 7.5x faster than Python.

The highlights:
🚀 Cosine Distance → 29.1x faster
🚀 Batch ReLU → 10.3x faster
🚀 Std Deviation → 9.2x faster
🚀 MSE Loss → 8.1x faster

Even with Python→Rust FFI overhead included, our Science and Math modules are hitting over 1.5 million ops/sec.

⚡ WHAT IS VITALIS?
It is not a wrapper around an LLM. It is a full, hand-written JIT-compiled language pipeline:

Source (.sl) → Lexer → Parser → AST → Type Checker → SSA IR → Cranelift JIT → Native x86-64

We bypassed LLVM and GCC entirely, emitting native machine code through Cranelift.

🏗️ THE ECOSYSTEM
We built 14 algorithm libraries right into the language, fully equipped with FFI and Python bindings:
🔐 Cryptography (SHA-256, HMAC)
⚛️ Quantum Simulator (Statevector, Bell states)
🕸️ Graph Algorithms (Dijkstra, PageRank)
📈 Statistics & Advanced Math
...plus Signal Processing, Compression, Security, and more.

💡 WHY THIS MATTERS
Most modern "AI languages" are just prompt wrappers. Vitalis actually compiles to native x86-64 machine code, evolves its own functions at runtime, and radically outperforms Python on real ML/AI workloads. It's cross-platform, CI/CD ready, and fully documented.

If you are interested in compiler design, code evolution, or what a hand-crafted Rust compiler looks like, the repo is live and open-source.
🔗 https://lnkd.in/eXcnej-m

#RustLang #CompilerDesign #MachineLearning #Python #OpenSource #DeepTech #SoftwareEngineering #Cranelift
Vitalis Rust Language Outperforms Python in 13/13 Benchmarks
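For context on what a pure-Python baseline for one of these workloads looks like, here is an illustrative cosine-distance implementation. This is a sketch of the kind of per-element loop such benchmarks measure, not code from the Vitalis repo:

```python
import math

def cosine_distance(a, b):
    """Cosine distance: 1 - (a.b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Every element here passes through the bytecode interpreter, which is exactly the overhead a JIT-compiled language can eliminate.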
-
Your build server is running out of space. You run df -h. 94% full. Great. So you du -sh your way through directories like it's 2005, mentally adding up numbers, until you finally find the 6 .venv folders nobody cleaned up.

There's a better way — and building it yourself is half the point.

In my latest article, I walk through building pydusk: a terminal disk usage analyzer in Python, inspired by ncdu. Keyboard-driven, non-blocking, with a delete confirmation flow and a clean TUI — all in a single file with two dependencies.

Stack: Textual · Typer · os.scandir

Things worth stealing from this project even if you never run it:
→ Why os.scandir beats os.walk for disk traversal
→ Textual's @work(thread=True) pattern for background tasks
→ ModalScreen[T] + dismiss() for confirmation dialogs

https://lnkd.in/dmKvDSgH

#Python #Textual #CLI #OpenSource #Developer
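The os.scandir point can be sketched in a few lines. This is a minimal recursive size counter, not pydusk itself; it shows why scandir-based traversal is cheaper than os.walk plus per-file os.stat calls:

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path.

    os.scandir yields DirEntry objects that cache type and stat
    information obtained while reading the directory, so
    entry.is_dir() and entry.stat() usually avoid an extra
    system call per file.
    """
    total = 0
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                total += dir_size(entry.path)
            elif entry.is_file(follow_symlinks=False):
                total += entry.stat(follow_symlinks=False).st_size
    return total
```

In a TUI like pydusk this would run inside a background worker so the interface stays responsive during the scan.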
-
Imagine being able to write C code using Python. That is now a reality with the PythoC library. PythoC is a Domain-Specific Language (DSL) compiler that allows developers to write C-like programs and create .exe files using standard Python syntax.

You can call the compiled library directly from Python like this:

# test1.py
from pythoc import compile, i32

@compile
def add(x: i32, y: i32) -> i32:
    return x + y

# Compile to native code
@compile
def main() -> i32:
    return add(10, 20)

result = main()
print(result)

Then run it like this:

python test1.py  # prints 30

Or you can create a stand-alone .exe to run. My Towards Data Science article takes you through the details. Read it for free below.

https://lnkd.in/eWAku8Wg
-
Python GIL explained: why it exists and how it affects multithreading

Many developers hear about the Python GIL and assume one thing: "Python can't use multiple CPU cores." But the reality is a bit more nuanced.

What the GIL actually is
The Global Interpreter Lock (GIL) is a mutex inside CPython that ensures only one thread executes Python bytecode at a time. Even if your program has multiple threads, only one can run Python code at any given moment.

Example:
Thread A → running
Thread B → waiting
Thread C → waiting

This means Python supports threads, but they take turns executing code.

Why Python has the GIL
The main reason is memory management. CPython relies heavily on reference counting for garbage collection. Every Python object tracks how many references point to it. When the count reaches zero, the object can be safely removed from memory. If multiple threads updated these counters simultaneously, memory corruption could occur. The GIL prevents this by ensuring only one thread modifies Python objects at a time.

Trade-offs it buys:
→ simpler interpreter design
→ safer memory management
→ faster single-thread performance

How the GIL affects multithreading
The limitation mainly impacts CPU-bound workloads. Example:

def compute():
    for i in range(10_000_000):
        pass

Running this across multiple threads does not give real parallelism, because the threads must wait for the GIL:

Thread A → holds GIL
Thread B → waiting
Thread C → waiting

The workload becomes effectively serialized.

Why Python threads still work well for web servers
Python threads still scale well for I/O-bound tasks, for example:
→ API requests
→ database queries
→ file operations

While a thread waits for I/O, the interpreter releases the GIL, allowing another thread to run. That's why frameworks like FastAPI and Django can handle many concurrent requests.

Common ways developers work around the GIL
1️⃣ Multiprocessing: each process has its own interpreter and its own GIL.
2️⃣ Native extensions: libraries like NumPy or PyTorch run heavy computations in C/C++ and release the GIL.
3️⃣ Async I/O: libraries like asyncio allow thousands of concurrent tasks in a single thread.

Interesting development: Python without the GIL
PEP 703 was accepted, and Python 3.13 ships an experimental free-threaded build that removes the GIL entirely, so truly parallel threads in CPython are on the way.

Simple visual model:
Threads → Queue → GIL → CPU
Only one thread can pass through the GIL and execute Python bytecode at a time.

Question for the community: how do you handle CPU-bound workloads in Python?
→ multiprocessing
→ distributed workloads
→ native extensions

#Python #PythonProgramming #BackendDevelopment #SoftwareEngineering
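The multiprocessing workaround above can be sketched in a few lines. This is a minimal illustration of CPU-bound work spread across processes, each with its own interpreter and GIL:

```python
from multiprocessing import Pool

def compute(n):
    """CPU-bound work: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Each worker is a separate process with its own interpreter
    # and its own GIL, so the four chunks truly run in parallel.
    with Pool(processes=4) as pool:
        results = pool.map(compute, [100_000] * 4)
    print(sum(results))
```

The trade-off versus threads: processes do not share memory, so the inputs and results are pickled and sent between them, which only pays off when the computation dominates the transfer cost.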
-
Write C Code Without Learning C: The Magic of PythoC I came across an interesting library the other day that I hadn't heard of before. PythoC is a Domain-Specific Language (DSL) compiler that allows developers to write C programs using standard Python syntax. It takes a statically-typed subset of Python code and compiles it directly down to native machine code via LLVM IR (Low Level Virtual Machine Intermediate Representation). LLVM IR is a platform-independent code format used internally by the LLVM compiler framework....
-
Most teams do not reach for #PyFlink because Python feels nicer. They reach for it after paying the production cost of splitting one ML system across two ecosystems: Python for training, Java for prediction, and months lost to subtle feature mismatches, latency issues, and debugging. That is the real adoption driver. New on ML-Affairs: "If the real source of friction in your system is that your training, feature logic, and model-adjacent code live naturally in Python, then 'just use Java #Flink' is not a neutral suggestion. It is an architectural trade, and often an expensive one."
-
Machine Learning data visualization using Multicore t-SNE #machinelearning #datascience #datavisualization #multicoretsne This is a multicore modification of Barnes-Hut t-SNE by L. Van der Maaten, with Python CFFI-based wrappers. It also runs faster than sklearn.TSNE on a single core (as of scikit-learn 0.18). https://lnkd.in/gFmdZxJu
-
This is a short post about why plain Python logging is not enough once you move to real-world distributed systems. It shows how we went from simple text logs to structured JSON and basic context passing, so that debugging and observability actually work. #Python #Logging #DevOps #Observability #SoftwareEngineering #Backend #PythonDev #Kubernetes #ELK #Datadog https://lnkd.in/eRXbuSP9
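The structured-JSON step described above can be sketched with only the stdlib. This is a minimal formatter that emits one JSON object per log line and carries a request ID as context; the post's actual setup (ELK, Datadog) may differ:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Context attached by callers via the `extra` kwarg;
            # None when the caller passed no request_id.
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment processed", extra={"request_id": "abc-123"})
```

Because each line is valid JSON, a log aggregator can index fields like request_id directly instead of regex-parsing free text.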
-
3 Performance Mistakes Python Developers Make in Production

Your code works locally. It passes tests. It even gets deployed. But in production? It slows down. Here are 3 common mistakes I keep seeing:

1. Using a List Instead of a Set for Lookups

if x in my_list:

Lists search one by one → O(n). If lookup is frequent, use:

my_set = set(my_list)
if x in my_set:

Sets use hashing → O(1) average time. Small change. Massive impact at scale.

2. Ignoring Time Complexity
Nested loops feel harmless… until data grows 100x. Quadratic logic on small datasets becomes a production bottleneck. If you don't know the Big-O of your solution, you're coding blind.

3. Ignoring Memory Usage
Creating unnecessary copies (new_list = old_list[:]), loading huge datasets fully into memory instead of streaming, using lists where generators would work. Performance isn't just speed — it's also memory efficiency.

Real Engineering Insight: production performance problems rarely come from "bad Python." They come from weak algorithmic thinking. Code that works is beginner level. Code that scales is professional level.

Which performance mistake did you learn the hard way?

#Python #Performance #SoftwareEngineering #DSA #Programming #Developers #CleanCode
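The list-vs-set point is easy to measure yourself. A quick sketch with timeit, probing for a value that is absent (the worst case for a list, which must scan every element):

```python
import timeit

data = list(range(100_000))
as_list = data
as_set = set(data)
missing = -1  # not in the data: the list scans all 100K elements

t_list = timeit.timeit(lambda: missing in as_list, number=200)
t_set = timeit.timeit(lambda: missing in as_set, number=200)

print(f"list lookup: {t_list:.4f}s, set lookup: {t_set:.4f}s")
```

On typical hardware the set lookup is several orders of magnitude faster; exact numbers vary by machine, so run it rather than trust any single figure.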
-
9 Modern Python Libraries You Must Know in 2026

Python is evolving fast. If you're still using only the "classic stack," you're missing out on some powerful modern tools. Master these and you'll be ahead of most Python developers in 2026.

1. Polars – lightning-fast DataFrame library (often much faster than pandas)
2. Pydantic v2 – powerful data validation & settings management
3. FastAPI – high-performance APIs with automatic docs
4. LangChain – build AI & LLM applications
5. PyTorch 2 – next-gen deep learning framework
6. DuckDB – analytics database that runs directly in Python
7. Typer – build beautiful CLI apps effortlessly
8. Rich – stunning terminal formatting & dashboards
9. Pandera – schema validation for DataFrames

Which one are you already using?