Benoit Chesneau’s Post

erlang_python 1.2.0 - two releases later

Quick follow-up on the 1.0 announcement. Two new releases based on real-world usage.

Keep state between calls
ML models are expensive to load. Now you can keep them in memory and reuse them across requests. Load once, predict many times. Faster responses, lower costs.

Better concurrency
Python threads can now talk back to Erlang without blocking. This matters when you're running parallel ML workloads or batch processing.

Nested workflows
Python can call Erlang, which calls Python, which calls Erlang... as deep as you need. Useful for complex AI pipelines where orchestration and inference need to talk to each other.

Shared data
Workers can share cached results - embeddings, configs, intermediate computations. No need for external caching infrastructure.

The goal stays the same: bring Python's AI/ML ecosystem into your Erlang or Elixir backend without adding infrastructure complexity. No separate services, no message queues, no API layers to maintain.

https://lnkd.in/eHh9txfe

#erlang #elixir #python #ml #ai
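The "keep state between calls" idea can be sketched in plain Python, independent of erlang_python's actual API (which the post does not show). The `Model` class and `get_model` helper below are hypothetical stand-ins: the point is only that a cached loader pays the expensive load once and reuses the same in-memory instance on every later call.

```python
from functools import lru_cache

class Model:
    """Hypothetical stand-in for an expensive-to-load ML model
    (not part of the erlang_python API)."""
    load_count = 0

    def __init__(self, name):
        Model.load_count += 1   # simulate the one-time load cost
        self.name = name

    def predict(self, x):
        return f"{self.name} -> {x}"

@lru_cache(maxsize=None)        # cache keeps the loaded model in worker memory
def get_model(name):
    return Model(name)

# Only the first call pays the load; later calls reuse the same instance.
outputs = [get_model("sentiment").predict(i) for i in range(3)]
```

In a long-lived Erlang-managed Python worker, the same pattern means the model survives across requests instead of being reloaded per call, which is where the latency and cost savings come from.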

Incredible - yet another upgrade to an amazing tool, quietly launched in our tiny, vital corner of the world, that will likely augment millions of people's lives... eventually. Then again, eventually consistent does seem to be our MO. 😄 Keep crushing it, Benoit. 🙏
