I wanted to share this: a look at the Agent Development Kit (ADK) Samples repository, a collection of ready-to-use AI agents designed to help developers speed up their work. The repository includes samples for both Python and Java, covering everything from simple conversational bots to complex multi-agent workflows. If you're working with AI agents, it's a very useful resource for getting started and seeing how they're built. You can find the full repository here: https://lnkd.in/gDRux4qe
Agent Development Kit Samples: A Useful Resource for AI Developers
More Relevant Posts
Pass-by-reference vs pass-by-value — explained with a 🍌 banana. And why you might be wrong about how Python works. There are two core ways to pass data to functions: 0/ By value: a copy is created and passed to the function. 1/ By reference: a reference to the same data is passed to the function. Languages like C/C++ make you choose explicitly: a pointer or a value. But what about other languages? 👽 JavaScript & Java are pass-by-value... BUT: object types are handled as references, so you pass a copy of a reference to the function, which looks like pass-by-reference. 🐍 What about Python? By reference, right?... Python has "names" that refer to "values". Everything is referenced by a name, and many names can reference the same value. You pass references to functions. Where it gets confusing: for immutable types (numbers, strings, tuples, ...), assigning a new value rebinds the name rather than changing the value. So you're binding the name to a new object, and it looks like pass-by-value:

def gotcha(y):
    y = 3

x = 1
gotcha(x)
print(x)  # prints 1

🔁 Repost for other devs to see this... 👋 Follow Miko Pawlikowski 🎙️ for dev insights that don't 🍌 slip past the details.
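A minimal sketch of the rebinding behavior described above, contrasting the post's immutable-int example with a mutable list (the `mutate` helper is my addition for illustration):

```python
def gotcha(y):
    # Rebinds the LOCAL name y to a new object; the caller's binding is untouched.
    y = 3

def mutate(items):
    # Mutates the shared object in place; the caller sees the change.
    items.append(3)

x = 1
gotcha(x)
print(x)  # 1 — only the local name was rebound; the int was never modified

nums = [1, 2]
mutate(nums)
print(nums)  # [1, 2, 3] — both names reference the same list object
```

Same passing mechanism in both cases; the difference is whether the function rebinds a name or mutates the object the name refers to.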
Just released a new Python module: "py-rtoon". Since Johann Schopplich released the TOON format this month, TOON implementations have appeared in many languages, including Python. Pure Python, however, is known for slow execution and high resource consumption, so I built an alternative Python TOON module backed by Rust, on top of the existing Rust crate `rtoon`. This speeds up encoding and decoding, in the same spirit as `Polars` and `orjson`. There is a trade-off in importing a Rust-backed module from Python: non-standard runtimes like `Pyodide` need extra work, and I haven't figured out how to compile for them yet, so for those I strongly recommend a Python-native TOON library instead. For ordinary runtimes like official CPython, the module is tested from 3.9 to the latest release on Windows, Mac, and Linux in GitHub Actions pipelines, with more than 86 test cases. It also works in a Kaggle notebook: https://lnkd.in/gFqB_j_j I aim for py-rtoon to be light, fast, accurate, and token-efficient. If you want to try it, install with `pip install py-rtoon` or `uv add py-rtoon`, or take a look at https://lnkd.in/gDfy4YWH Contributors welcome. Next plans:
- Add performance benchmarking against other TOON tools <- need contributors
- Add LLM accuracy benchmarking <- need contributors
- Add more data type support (Pydantic/ORM/dict/Pandas/Polars)
- Ensure framework compatibility (LangChain/LangGraph/CrewAI/etc.)
- Migrate rtoon to toon-rust (the owner has moved the code base)
- Add a code checker to the CI pipeline
Special thanks: Shreyas S Bhat (https://lnkd.in/gC9qc-WT) for the rtoon Rust crate. Link to my repo: https://lnkd.in/gDfy4YWH
Python — Asyncio: Write Faster I/O Without Threads Want concurrency without the headache of threads? If you’ve ever tried using threads in Python to speed up your program, you probably ran into synchronization issues, race conditions, or the infamous Global Interpreter Lock (GIL). Fortunately, Python offers a cleaner and more efficient way to achieve concurrency—asyncio. Asyncio introduces an event loop, which enables cooperative multitasking. Instead of running multiple threads that compete for CPU time, asyncio allows your program to pause one task when it’s waiting for I/O (like a network request or file read) and resume another in the meantime. This happens seamlessly using the await keyword. When a coroutine (an async function) hits an await, it yields control back to the event loop. This means your program isn’t blocked while waiting for data to arrive—it’s busy doing something else useful. That’s why asyncio is perfect for I/O-bound applications such as web servers, API clients, chat apps, or database connectors. You can scale to thousands of concurrent tasks without creating thousands of threads, making it far more memory-efficient. Combine it with libraries like aiohttp for async web requests or asyncpg for PostgreSQL database operations, and you’ll see dramatic performance improvements. Here’s the catch: asyncio isn’t magic for everything. It won’t speed up CPU-bound workloads like image processing or complex calculations, since the GIL still applies. For that, use the multiprocessing module or offload heavy work to a separate process or thread pool. To keep your async code elegant and bug-free: Use async with and async for consistently. Avoid mixing sync and async code arbitrarily. Wrap blocking calls (like traditional file I/O or CPU work) with run_in_executor() to prevent freezing the event loop. Asyncio gives you concurrency with clarity—no tangled threads, no shared-state chaos, just smooth, cooperative execution. 
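A minimal, self-contained sketch of the cooperative multitasking described above. Here `fetch` simulates an I/O-bound call with `asyncio.sleep` (a stand-in for a real network request via something like aiohttp), and `asyncio.gather` runs all the tasks concurrently on one event loop:

```python
import asyncio
import time

async def fetch(i: int) -> str:
    # Simulated I/O: while this coroutine awaits, the event loop runs the others.
    await asyncio.sleep(0.1)
    return f"result-{i}"

async def main() -> list:
    # All ten "requests" wait concurrently, so total time is ~0.1s, not ~1s.
    return await asyncio.gather(*(fetch(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), f"{elapsed:.2f}s")
```

Ten sequential calls would block for about a second; gathered, they finish in roughly the time of one, which is the whole point of cooperative scheduling for I/O-bound work.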
Pro Tip: Don't fight the event loop—embrace it. Follow and subscribe to my newsletter for more practical Python performance tips and async coding patterns. #Python #Programming #Asyncio #Threading #Concurrency #Pythonperformance
Requests worked fine for years. Then async happened. Now 100 HTTP calls take 0.19 seconds instead of 10. The gap isn't small. I've watched this shift happen in real time. HTTPX isn't just another Python library. It's solving problems that Requests simply can't. The numbers tell the story: • HTTPX: 100 concurrent requests in 0.19 seconds • Requests: the same task takes over 10 seconds • Even in sync mode, HTTPX runs nearly twice as fast But speed isn't everything. HTTPX brings features that matter: 🚀 Native async/await support 🔗 HTTP/2 capabilities ⚡ Better connection pooling 🎯 Near drop-in compatibility with Requests HTTPX comes from Encode, the team behind Django REST Framework. They know what modern Python applications need. Requests still works great for simple tasks. But if you're building anything that handles multiple HTTP calls, HTTPX makes sense. One library. Both sync and async. Future-proof. The performance difference in concurrent operations isn't marginal. It's an order of magnitude better. What's holding you back from making the switch? #Python #AsyncProgramming #Async #WebDevelopment Source: https://lnkd.in/eV2KkxUR
Python gained a natural first-mover advantage in AI agent development that wasn't quite earned. Python is a great language whose intuitiveness and low ceremony are an asset to ML, but while ML is about computation and experimentation, AI is about context and structure. This is why statically typed languages proven in the enterprise, like Java, Kotlin, C#, and TypeScript, are better suited to AI than Python. But what if we didn't have to choose? After all, the prerequisite for successful AI is a data strategy with the governance to know everything you have and how to access it. It would be amazing to leverage the rich Python ecosystem, including the vast library of Hugging Face models and its outstanding Transformers framework, to implement that strategy in a way that integrates seamlessly with a more enterprise-friendly technology like Java. We're getting there. This GraalPy Spring Boot Summarization Demo on GitHub (link in comments) shows how you can leverage GraalPy to run the #Python libraries markitdown and Transformers along with the HuggingFaceTB/SmolLM2-360M model to process PDFs in a Spring Boot app written in #Java. This is super cool and I can't wait to see what's next.
Testfixtures 10 is out! 🎉 If you've ever struggled with writing clear, maintainable test assertions in Python - comparing complex objects, checking API responses, or validating database results - testfixtures can help. 🧵 https://lnkd.in/e6SRzQce

🎯 like() - partial object comparisons:

from testfixtures import compare, like

compare(api.get_users(), expected=[
    like(User, email='alice@example.com', role='admin'),
])

Don't worry about attributes you don't care about!

✅ contains() - check only that specific items are present:

from testfixtures import contains

compare(event_log, expected=contains([
    Event(type='user.login'),
    Event(type='purchase.completed'),
]))

For logging, though, check out LogCapture: https://lnkd.in/emKa_jyu

🔄 unordered() - order-independent exact matching. Database queries don't guarantee order? No problem:

from testfixtures import unordered

compare(query_results, expected=unordered([
    User(id=1, name='Alice'),
    User(id=3, name='Charlie'),
]))

📊 sequence() - flexible sequence comparisons, with full control over ordering and partial matching:

from testfixtures import sequence

compare(results, expected=sequence(partial=True, ordered=False)(
    Record(id=3),
    Record(id=5),
))

...oh I wish LinkedIn supported posting code snippets properly :-/
Python 3.14: Finally, You Can Disable the GIL! Big news for Python devs: Python 3.14 lets you turn off the Global Interpreter Lock (GIL) - a historic step for the language. --- What's the GIL? The Global Interpreter Lock (GIL) prevents true multi-threading in standard Python: even with multiple threads, only one executes Python code at a time. It has long been a pain for devs building high-performance or parallel apps. What's new in Python 3.14? • You can now run Python without the GIL - free-threaded builds first shipped as experimental in 3.13, and 3.14 promotes free-threading to officially supported. • Multiple threads can finally run real Python code in parallel on multiple CPU cores. Which means... • Multi-threaded code (e.g., concurrent web servers, data crunching, agent apps) gets a major speedup - no more C extensions/hacks needed. • You can make better use of multi-core hardware, just like Java, C++, and Go. --- How to use it (very simply): • With Python 3.14, the default interpreter build remains the traditional GIL-enabled version, so existing Python code and libraries work as before. • If you're working on new parallel or CPU-bound threading workloads, you can optionally install or build the free-threaded (GIL-disabled) version of Python. Caveats: not all third-party libraries are fully compatible with the GIL-free build yet, and single-threaded workloads may run slightly slower on it, so the benefit is primarily for multi-threaded, core-saturating tasks. --- Overall: Python 3.14 lets you choose: classic simplicity or full-power concurrency. It makes Python more future-proof for fast, modern applications. ♻️ Share it with your network if you find it useful, and follow Mayank Sultania for more practical AI tips. Video by: DailyDoseofDS.com #Python #Concurrency #GIL #Python314 #Developers #Performance
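A minimal sketch of what this looks like in practice: detect whether the running interpreter is a free-threaded build (via the `Py_GIL_DISABLED` build config var, present in 3.13+), then split a CPU-bound sum across threads. On the classic build the threads interleave under the GIL; on the free-threaded build they can run on separate cores. The workload and chunk sizes are illustrative only:

```python
import sysconfig
from concurrent.futures import ThreadPoolExecutor

def sum_squares(lo: int, hi: int) -> int:
    # Pure-Python CPU-bound work: runs in parallel only on a free-threaded build.
    return sum(i * i for i in range(lo, hi))

# Py_GIL_DISABLED is 1 on free-threaded builds, 0 or None otherwise.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print(f"free-threaded build: {free_threaded}")

# Split the range into four chunks and sum them on four threads.
chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(lambda bounds: sum_squares(*bounds), chunks))

print(total)  # same answer on either build; only the parallelism differs
```

The same code runs unchanged on both builds, which is exactly the "choose your build" story above: correctness is identical, only the threading behavior differs.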
🚀 Piton v0.5.0: Modernizing the Bridge Between Elixir & Python I'm excited to announce a major upgrade to Piton, the open-source library that lets you run Python code from Elixir while bypassing the GIL! After months of work, v0.5.0 is here with a completely modernized stack. 🎉 🔧 The Modernization: We've brought Piton into 2025 with: ✅ Elixir 1.19 + OTP 27 support ✅ Python 3 only (Python 2 retired) ✅ Built-in JSON - removed Poison dependency ✅ GitHub Actions CI/CD - automated testing & publishing ✅ Latest dependencies - erlport 0.11, ex_doc 0.39 All 13 tests passing ✅ | Fully automated | Production ready 💡 Why This Matters: The real power isn't just the tech stack - it's what you can build with it. Real-world scenarios where Piton shines: 🔹 ML/AI in Phoenix Apps Run TensorFlow or PyTorch models directly from your LiveView without blocking the BEAM 🔹 Data Science Pipelines Leverage NumPy, Pandas, and SciPy while maintaining Elixir's fault-tolerance 🔹 Legacy Python Integration Migrate to Elixir gradually - wrap existing Python services without rewriting everything 🔹 Parallel Processing True parallelism - run multiple Python algorithms concurrently, bypassing the GIL using Erlang's process model 🔹 API Enrichment Call Python NLP libraries, image processing tools, or scientific computing packages from your Phoenix APIs 🎯 The Elixir + Python Sweet Spot: You get: •🏃♂️ Elixir's concurrency without the GIL limitation •🐍 Python's rich ecosystem (350K+ packages) •🛡️ Fault tolerance - Python crashes won't take down your app •⚡ Performance - modern OTP 27 optimizations •🤖 DevOps ready - full CI/CD automation Whether you're building ML-powered Phoenix apps, migrating Python workloads, or just want the best of both worlds - Piton v0.5.0 is ready. 
📦 Get it: https://lnkd.in/ecarHYk 📚 Docs: https://hexdocs.pm/piton 💻 GitHub: https://lnkd.in/dkk9W8M #Elixir #Python #OpenSource #MachineLearning #AI #WebDevelopment #Phoenix #DataScience #SoftwareDevelopment #DevOps #ElixirLang #FunctionalProgramming