Dark Programming Language (Darklang): Deployless Backends, Language Design, and the Open-Source Reboot

At Techtide Solutions, we’ve learned the hard way that “backend development” is rarely about business logic alone. Infrastructure decisions, deployment choreography, observability plumbing, and permission models tend to sprawl until a simple feature request feels like a mini-migration. Meanwhile, ex...
Darklang: Simplifying Backend Development with Deployless Architecture
More Relevant Posts
-
Today, we're open-sourcing our flagship project: Maat -- a Turing-complete programming language designed from the ground up for writing zero-knowledge proofs (ZKPs).

Rust syntax, Rust semantics, ZK-only execution. If you write Rust, you already know Maat. And every construct that's illegal in a zero-knowledge circuit -- floating-point arithmetic, global mutable state, raw pointers, dynamic dispatch, I/O -- is a compile-time error in Maat. There is no "standard mode." Every program that compiles is provable by construction.

We started with a single question: what if a language was Rust-native but ZK-constrained from day one? That question became 11 releases, 14 compiler crates, and a production-quality pipeline:

- A logos DFA lexer and winnow combinator parser
- Hindley-Milner type inference (Algorithm W) with generics, algebraic data types, and exhaustive pattern matching
- Structs, enums, traits, impl blocks, `Option<T>`, `Result<T, E>`, and the `?` operator
- A file-based module system with dependency resolution, cross-module type checking, and visibility enforcement
- A stack-based bytecode VM with 44 opcodes and deterministic execution
- A standard library with higher-order methods on collections, typed numeric parsing, and comparison utilities
- Security hardening: `#![forbid(unsafe_code)]` across all 14 crates, checked arithmetic on every operation, 7M+ fuzz runs with zero crashes, 9 property-based tests verifying type soundness and execution determinism, and a published threat model

This is where Proof-Driven Development (PDD) begins -- software development where formal verification and mathematical proofs replace trust assumptions.

The compiler frontend is complete. Next is the ZK backend. Version 0.12 will introduce a trace-generating VM that records execution traces suitable for STARK proof generation via the FRI protocol. Same compiled bytecode, dual backends -- one for development and testing, one for proving.
We're targeting the Winterfell library first, with the architecture designed for Plonky3 and Stwo swappability. No trusted setup. Post-quantum secure. Transparent proofs.

Beyond that, the roadmap includes native field element arithmetic (`Felt` type), linear/affine types to prevent unconstrained witness bugs, an effect system for provable functional purity, STARK-to-SNARK wrapping for compact on-chain verification, dependent types for expressing constraint invariants, and ultimately self-hosting.

Maat is early stage, not yet audited, and not production-ready. But the foundation is solid and the direction is clear. If you're working in zero-knowledge cryptography, formal verification, or language design -- or if you're simply a Rust developer curious about what provable computation looks like -- we'd love for you to take a look. Star the repo. Try the REPL. Execute custom programs/examples. Break things. Tell us what you think: https://lnkd.in/dyYGQ2gj
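For readers who haven't built one, the "stack-based bytecode VM" in the pipeline above is a simple idea. Here's a minimal sketch in Java of how such a machine evaluates a program; the opcode names and encoding are invented for illustration, not Maat's actual 44-opcode set:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stack-machine sketch: PUSH/ADD/MUL over an operand stack.
// Opcodes here are hypothetical, not Maat's real instruction set.
public class MiniVm {
    enum Op { PUSH, ADD, MUL }

    record Instr(Op op, int operand) {}

    static int run(Instr[] program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (Instr i : program) {
            switch (i.op()) {
                case PUSH -> stack.push(i.operand());
                // Math.addExact/multiplyExact throw on overflow instead of
                // silently wrapping, echoing "checked arithmetic on every operation".
                case ADD -> { int b = stack.pop(), a = stack.pop(); stack.push(Math.addExact(a, b)); }
                case MUL -> { int b = stack.pop(), a = stack.pop(); stack.push(Math.multiplyExact(a, b)); }
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // Bytecode for (2 + 3) * 4
        Instr[] prog = {
            new Instr(Op.PUSH, 2), new Instr(Op.PUSH, 3), new Instr(Op.ADD, 0),
            new Instr(Op.PUSH, 4), new Instr(Op.MUL, 0),
        };
        System.out.println(run(prog)); // prints 20
    }
}
```

Because execution is a pure function of the bytecode and the stack, the same program always produces the same trace, which is exactly the determinism a proving backend needs.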
-
I'm very pleased to share what our team at Maat Labs has been cooking up over the past few months... Maat: a Rust-native, Turing-complete programming language for writing zero-knowledge proofs (ZKPs). First milestone achieved: a complete compiler frontend. Next is the ZK backend. We can now confidently build in the open. Star it, try it, break it! Your feedback and contributions are very welcome.
-
🚨 STOP treating LLMs like COMPILERS!

There's a narrative that equates:
- Code 👉 NL (natural language) specs
- Compiler 👉 LLM

Yes, compilers eliminated the need to read, write, and optimize assembly or machine code. But the analogy falls apart once you take a closer look:

👉 NL is fuzzy and open to [mis]interpretation. A spec is NOT code. Programming languages remain the most accurate way to specify HOW a machine should behave. No amount of NL can reach that level of precision, even with a 100M-token context window, and LLMs will continue to hallucinate until a breakthrough happens.

👉 Compilers are predictable and their output is reproducible. Given the same compiler version and execution environment, the same code always produces the exact same output, down to the bit. That's why we confidently DON'T check our binaries into the repo. LLMs are unpredictable by design.

I know, it's a bummer to learn that creating serious software still requires professional software engineers. These tools do a good job on DIY projects: personal software, throwaway code, and demos. But let me ask you this: do you really want to fly in an airplane whose autopilot was vibe coded? Do you even want to drive on a street where there's a 1% chance of encountering a car running spec-driven software?

If it's too good to be true, it probably is. If everyone can do it, it's probably not that valuable. Ask your closest engineer about the strengths and limitations of these tools, because in engineering there are no silver bullets -- only trade-offs, variables, measurements, workarounds, systems thinking, and accuracy.

PS. No AI was used to write this post (as usual). Image from Wikipedia.
-
🔥 After Loom, do we even need WebFlux?

For 10 years, we accepted reactive programming as a "necessary evil." We traded readable stack traces and simple debugging for the promise of high scalability. We built complex pipelines just to keep our threads from choking. But Java 21 changed the ROI calculation forever.

The reality check: virtual threads might kill reactive for 80% of enterprise use cases. Here's why the architectural landscape has shifted:

🧵 1. Thread-per-request is BACK
We spent a decade shouting "never block the event loop!" With Project Loom, blocking is cheap. You can now write simple, imperative code that is easy to read, easy to test, and actually debuggable. The JVM handles the "parking" of virtual threads under the hood, so you don't have to.

🥊 2. The "blocking" debate is over
Loom doesn't eliminate blocking; it makes it efficient. For standard CRUD, REST APIs, and microservice orchestration, the mental tax of Mono, Flux, and flatMap is no longer a justifiable expense.

🏆 3. Where reactive still holds the crown
WebFlux isn't dead -- it's just becoming a specialist tool again. It still wins in:
- True streaming: infinite data with complex backpressure.
- Edge gateways: high-concurrency proxying with a tiny footprint.
- Event composition: complex windowing or sampling of async events.

💡 The principal engineer's verdict: unless you are building a high-throughput streaming engine, start with virtual threads. Developer velocity and debuggability are your most expensive resources. Don't waste them on a complex paradigm you no longer need.

What's your move? Are you sticking with the Flux, or are you happy to be back to basics with Loom? 👇 Let's debate in the comments.

#Java #SoftwareArchitecture #ProjectLoom #Spring #WebFlux #Programming #BackendDevelopment #Reactive #NonBlocking
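To make "thread-per-request is back" concrete, here's a minimal sketch assuming Java 21+. The blocking call is a stand-in for a JDBC query or downstream HTTP call; everything else is plain imperative code, with no Mono or Flux in sight:

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// One virtual thread per task: blocking code at massive concurrency.
public class VirtualThreadsDemo {
    // Stand-in for a blocking call (database query, downstream service, ...).
    static String handleRequest(int id) {
        try {
            Thread.sleep(Duration.ofMillis(10)); // blocking is cheap: the JVM parks the virtual thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response-" + id;
    }

    public static void main(String[] args) throws Exception {
        // Java 21: a new virtual thread per submitted task.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> results = IntStream.range(0, 10_000)
                    .mapToObj(i -> executor.submit(() -> handleRequest(i)))
                    .toList();
            System.out.println(results.get(0).get()); // prints response-0
        }
    }
}
```

Ten thousand "requests," each blocking for 10 ms, finish in well under a second because the carrier threads are released while each virtual thread sleeps. The platform-thread version of this would either exhaust the pool or force you into a reactive pipeline.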
-
Ever struggled to apply functional programming concepts you've read about to real-world code? John Todd faced a gnarly procedural method with repetitive error checking and turned to an LLM as a programming partner to refactor it functionally.

What makes this post valuable:
💡 Real learning journey: follow along as John works through type mismatches, monad confusion, and C#'s limitations - not a polished tutorial, but an honest problem-solving session.
🔧 Practical functional patterns: learn how to use a context record to accumulate state through a pipeline.
🤝 AI as pair programmer: this isn't "AI writes all the code." It's collaborative refinement - John catches type errors, challenges naming decisions, and improves his functional library iteratively.
📚 The gap C# leaves: see the same pipeline in F# at the end and understand why functional programming feels harder in C# (no higher-kinded types, Task isn't a monad, async is outside the type system).

Read now → https://lnkd.in/dfaVmaZr

#FunctionalProgramming #CSharp #FSharp #DotNet #AIPairProgramming #SoftwareDevelopment #CleanCode #LLM #DevLearning
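The "context record accumulating state through a pipeline" pattern translates to most languages with records. Here's a rough Java sketch of the general idea; the original post's code is C#/F#, and every name below is invented for illustration:

```java
import java.util.function.Function;

// Railway-style pipeline: an immutable context record flows through steps,
// and once a step fails, later steps are skipped. Names are hypothetical.
public class PipelineDemo {
    // Carries accumulated state (input, parsed value) or a failure message.
    record Ctx(String input, Integer parsed, String error) {
        static Ctx of(String input) { return new Ctx(input, null, null); }
        boolean failed() { return error != null; }
        // Bind: apply the next step only if nothing has failed yet.
        Ctx then(Function<Ctx, Ctx> step) { return failed() ? this : step.apply(this); }
    }

    static Ctx parse(Ctx c) {
        try {
            return new Ctx(c.input(), Integer.parseInt(c.input().trim()), null);
        } catch (NumberFormatException e) {
            return new Ctx(c.input(), null, "not a number: " + c.input());
        }
    }

    static Ctx validatePositive(Ctx c) {
        return c.parsed() > 0 ? c : new Ctx(c.input(), c.parsed(), "must be positive");
    }

    public static void main(String[] args) {
        Ctx ok  = Ctx.of(" 42 ").then(PipelineDemo::parse).then(PipelineDemo::validatePositive);
        Ctx bad = Ctx.of("abc").then(PipelineDemo::parse).then(PipelineDemo::validatePositive);
        System.out.println(ok.parsed()); // prints 42
        System.out.println(bad.error()); // prints not a number: abc
    }
}
```

Java shares C#'s limitation here: without higher-kinded types, `then` must be redefined per context type rather than written once as a generic monadic bind.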
-
Most developers believe that nested loops are unavoidable. Often, though, they are a design mistake.

When your code includes:
- A loop inside another loop
- Repeated scanning of the same collection
- O(n²) complexity

you are not solving the problem efficiently; you are following a habit.

In my research, I explored:
- Why nested loops occur
- Their underlying root causes
- How indexing can reduce complexity from O(n²) to O(n)

The most significant realization? Performance is determined before the code is written, by how the data is structured. Nested loops are not merely a coding issue; they are a thinking problem.

I have shared the full research as a document and would appreciate your thoughts. How frequently do you encounter nested loops in production code?

#SoftwareEngineering #Performance #CleanCode #Java #Backend #SystemDesign #Developers #TechInsights #JavaDevelopment #NestedLoops #CodeOptimization #ProgrammingTips #JavaTips #PerformanceTuning #EfficientCoding #DeveloperCommunity #CodingBestPractices #SoftwareDevelopment #JavaProgramming #TechOptimization #DevLife
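A minimal sketch of the indexing idea, using made-up order/shipment data: both methods answer "which order IDs also appear in the shipped list?", but one rescans the second list for every element of the first, while the other structures the data into a hash index up front:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Same question, two data-access strategies: nested rescan vs. hash index.
public class IndexingDemo {
    // O(n^2): for every order, scan the entire shipped list again.
    static List<Integer> nested(List<Integer> orders, List<Integer> shipped) {
        List<Integer> out = new ArrayList<>();
        for (int o : orders)
            for (int s : shipped)
                if (o == s) { out.add(o); break; }
        return out;
    }

    // O(n): build the index once, then each lookup is O(1) on average.
    static List<Integer> indexed(List<Integer> orders, List<Integer> shipped) {
        Set<Integer> index = new HashSet<>(shipped); // the "structure the data first" step
        List<Integer> out = new ArrayList<>();
        for (int o : orders)
            if (index.contains(o)) out.add(o);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> orders = List.of(1, 2, 3, 4), shipped = List.of(2, 4, 9);
        System.out.println(nested(orders, shipped));  // prints [2, 4]
        System.out.println(indexed(orders, shipped)); // prints [2, 4]
    }
}
```

The behavior is identical; only the shape of the data changed. That is the sense in which performance is decided before the loop is written.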
-
Programming languages are more than technical choices — they are strategic business decisions. The right stack influences development speed, scalability, security, and the ability to evolve a product over time. In 2026, software architectures are increasingly polyglot. Python drives AI and automation layers, JavaScript and TypeScript power interactive user experiences, Go supports scalable infrastructure, and Rust is emerging for high-performance and security-sensitive systems. Forward-thinking organisations no longer choose a single language — they design ecosystems where each technology solves a specific problem efficiently. Understanding these shifts helps founders make better product decisions, allocate budgets wisely, and avoid expensive rebuilds later. #SoftwareDevelopment #TechStack #Programming #Innovation #TechEurope
-
Designers fight over ideas... 🎨😤 Programmers share Stack Overflow links. 😎

One thing I really like about the programming community is how knowledge is shared. In many fields, similar ideas lead to arguments over who created something first. In software development, it's often different: developers learn from each other, reuse solutions, read other people's code, and build on top of what already exists. That's basically the spirit behind open source.

Good developers don't just write code; they learn from code written by others.

#programming #coding #developer #softwareengineering #softwaredeveloper #python #backend #devlife #codinglife #tech #technology #webdevelopment #fullstack #developers #computerscience #opensource #automation #ai #machinelearning #programacao #tecnologia #desenvolvimento #desenvolvedores #engenhariadesoftware
-
Hi! Beyond Benchmarks: Building High-Performance Distributed Systems with Modern Systems Programming Languages

In the past decade, the term "high-performance distributed system" has become a buzzword for everything from real-time ad bidding platforms to large-scale telemetry pipelines. The temptation to prove a system's worth with a single micro-benchmark -- say, "10 µs latency on a 1 KB payload" -- is strong, but those numbers rarely survive the chaos of production. Real-world workloads contend with variable network conditions, evolving data schemas, memory pressure, and the unavoidable need for observability and safety.

Modern systems programming languages such as Rust, Go, Zig, and the latest C++20/23 standards have entered the arena with promises of zero-cost abstractions, strong static guarantees, and ergonomic concurrency models. Yet the decision to adopt one of these languages cannot be reduced to "which one runs fastest in a benchmark." Instead, engineers must examine how language features interact with the entire stack: networking, serialization, scheduling, and even deployment pipelines.

This article goes beyond the allure of raw numbers. We'll explore a holistic approach to building high-performance distributed systems, discuss the strengths and trade-offs of contemporary systems languages, and walk through concrete, side-by-side implementations of a simple distributed key-value store in Rust and Go. By the end, you should have a mental checklist that moves you from "benchmark-centric" to "system-centric" performance engineering.

Read the full guide: https://lnkd.in/ddpCu9PS

#distributedsystems #systemsprogramming #performanceengineering #Rust #Go
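Whatever the language, a distributed key-value store starts from a single-node core: a thread-safe map with get/put/delete, wrapped later in networking, replication, and persistence. The article's implementations are in Rust and Go; this Java sketch (with invented names) only shows the shape of that in-memory engine:

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Single-node core of a KV store, before any networking or replication.
// A language-neutral sketch, not code from the linked article.
public class KvStore {
    private final ConcurrentHashMap<String, byte[]> data = new ConcurrentHashMap<>();

    public void put(String key, byte[] value) { data.put(key, value); }

    public Optional<byte[]> get(String key) { return Optional.ofNullable(data.get(key)); }

    // Returns true if the key existed and was removed.
    public boolean delete(String key) { return data.remove(key) != null; }

    public static void main(String[] args) {
        KvStore store = new KvStore();
        store.put("user:1", "alice".getBytes());
        System.out.println(new String(store.get("user:1").orElseThrow())); // prints alice
        System.out.println(store.delete("user:1")); // prints true
        System.out.println(store.get("user:1").isPresent()); // prints false
    }
}
```

The "system-centric" questions the article raises all live in the layers around this core: how requests reach it, how writes are serialized and replicated, and what happens under memory pressure.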