Build Microsoft Foundry Agents with Python SDK: Part 1

AGENTIC AI: Set up agents from scratch, add tools, install guardrails, and more.

Introduction

As the title suggests, we will be learning how to create agents using Foundry's Python SDK. Before we begin, some of you may ask: what's wrong with using the Foundry UI to create agents? The simple answer is: nothing, really. In fact, I urge you to try creating an agent with a few clicks in the Foundry UI to understand where to see:

* the instructions (and use AI to write them instead of starting from scratch). For instance, here I am explicitly forcing a guided approach for my researcher-agent, where the agent must identify the topics and get them approved before it starts generating the report.

[Image: Use AI to write the instructions for an agent]

* the version (and compare different agent versions to see which offers the best output). On the left, the latest v7 version: the agent uses the web search tool and follows our instructions to confirm the topics with the end user before it starts generating the research report. In the older v1 version on the right, the agent lacked access to any tool and would start producing output from the get-go, without a human-in-the-loop to verify the topics first.

[Image: Compare different versions of an agent with the same input]

* the run trace, to assess the input, output, tool use, etc. This is highly beneficial for debugging agent responses when they lack quality or decent latency, or show signs of hallucination.

[Image: Run traces for each agent interaction]

Once you've had a good look and feel of the playground, you are ready to move on to the next step. While you can keep using the platform's UI for manual agent creation, the Foundry SDK allows for automation, scalability, and integration into your own application. In more practical terms, there are a few things that can only be done via the SDK:

* implementing custom functions as tools (for example: get_user_info() to let the agent fetch a user's record before further processing); a sketch of this pattern follows below.

* enriching traces with metadata (for example: programmatically adding custom attributes like customer_tier: "gold" so you can filter traces in the portal and find out specifically why "gold" tier users are experiencing high latency).

* logging client-side traces (for example: user_id, app_version, retries, queuing, time spent waiting before you even call responses.create()) and merging them with server-side traces (response time, error rate, intermediate agent steps, tool calls, etc.). The UI portal gives out-of-the-box access to server-side traces only, i.e. what's happening on the cloud platform. Without the SDK, your app's logic and the Foundry portal's traces are two separate puzzles that don't fit together (for example: when something goes wrong, Foundry's server trace won't tell you which user workflow, which app version, or which feature flag produced the call).

Later on in Part 2,…

#genai #shared #ai
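A minimal sketch of the "custom function as a tool" idea mentioned above. The get_user_info() helper and the in-memory user table are hypothetical, and the JSON-schema-style tool description only illustrates the general function-calling pattern; the exact call for registering a tool differs by Foundry SDK version, so treat this as a sketch rather than the SDK's API.

import json

# Hypothetical user store; in a real app this would hit a database or CRM API.
_USERS = {
    "u-123": {"name": "Avery", "tier": "gold", "open_tickets": 2},
}

def get_user_info(user_id: str) -> str:
    """Custom function the agent can call to fetch a user's record."""
    user = _USERS.get(user_id)
    return json.dumps(user if user else {"error": f"unknown user {user_id}"})

# Generic JSON-schema-style description of the tool (the shape most
# function-calling agent APIs expect); how you attach it to a Foundry
# agent depends on the SDK version you are using.
GET_USER_INFO_TOOL = {
    "name": "get_user_info",
    "description": "Fetch a user's record before further processing.",
    "parameters": {
        "type": "object",
        "properties": {"user_id": {"type": "string"}},
        "required": ["user_id"],
    },
}

if __name__ == "__main__":
    print(get_user_info("u-123"))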
More Relevant Posts
Most tutorials about async Python show you how to use asyncio. Almost none of them show you how to decide what should be async in the first place.

I've been working on a backend pipeline that processes data-driven workflows: intake, classify, transform, store. When I inherited it, the whole thing was synchronous. Every API call, every database write, every LLM classification step waited in line. The throughput was fine for small volumes. At scale, it was a bottleneck hiding in plain sight.

The temptation was to slap async on everything. That would have been a mistake. Here's the decision framework I actually used.

Map the dependency graph first. Draw every operation and draw arrows between the ones that depend on each other's output. The operations with no arrows between them are your parallelization candidates. Everything else stays sequential. This sounds obvious, but I've seen entire teams skip it and end up with race conditions they spend weeks debugging.

I/O-bound waits are the real wins. An LLM API call that takes 800ms while your CPU does nothing is the perfect async candidate. A CPU-heavy data transformation that takes 200ms? Making that async buys you almost nothing and adds complexity. I was ruthless about only converting the I/O operations: external API calls, database queries, file reads. The compute stayed synchronous.

Batch where the API allows it. Some of the biggest gains didn't come from async at all. They came from batching: sending ten classification requests in one call instead of ten sequential calls. Batching and async together is where the real throughput jumps live, but batching alone often gets you 80% of the way there.

Add backpressure before you add speed. The first time I parallelized the pipeline without a semaphore, it worked beautifully for thirty seconds and then overwhelmed the downstream API with concurrent requests. Rate limiting, semaphores, and bounded queues aren't optional; they're the difference between a fast system and one that takes itself down.

The result was a 20% throughput improvement. Not by rewriting the system, but by identifying the six operations that were waiting unnecessarily and letting them run concurrently while everything else stayed exactly the same.

Async isn't a feature you add to a codebase. It's a scalpel you apply to the specific places where waiting is the bottleneck.

#Python #AsyncIO #Backend #SoftwareEngineering #AIEngineering #SystemDesign #BuildInPublic #AppliedAI
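A minimal sketch of the pattern described in that post (the pipeline, its stages, and the concurrency limit here are illustrative, not the author's actual system): only the I/O-bound call is async, a semaphore provides backpressure, and independent items run concurrently with asyncio.gather while the CPU-bound step stays synchronous.

import asyncio
import random

MAX_CONCURRENCY = 5  # backpressure: bound the number of in-flight downstream requests

async def classify(item: str, sem: asyncio.Semaphore) -> str:
    """Stand-in for an I/O-bound LLM/API call; the sleep simulates network wait."""
    async with sem:
        await asyncio.sleep(random.uniform(0.2, 0.8))  # pretend 200-800ms of I/O
        return f"{item}:classified"

def transform(result: str) -> str:
    """CPU-bound step stays synchronous; making it async would add complexity for no gain."""
    return result.upper()

async def run_pipeline(items: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    # Items have no dependencies on each other, so they are parallelization candidates.
    classified = await asyncio.gather(*(classify(i, sem) for i in items))
    return [transform(r) for r in classified]

if __name__ == "__main__":
    print(asyncio.run(run_pipeline([f"doc-{n}" for n in range(10)])))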
*🚀 Problem Solving with DSA - Day 47: Under the Hood | Internal Working of Python Dictionaries 🛠️🐍*

Welcome to Day 47 of my 60-Day DSA Challenge! Today, we are cracking open the Python Dictionary to see how it achieves that lightning-fast O(1) speed.

🏗️ The Architecture: Buckets & Key-Value Pairs
In Python, a dictionary is essentially a Hash Table. When you do my_dict["name"] = "Sriman", here is what happens:
-> Hashing: Python calls the hash() function on your key ("name"). This generates a large integer.
-> Indexing: It takes that integer and performs a modulo operation with the current size of the hash table: index = hash("name") % array_size.
-> Storage: It stores the key, the hash value, and the actual value in a "Bucket" at that index.

⚔️ Collision Handling in Python: Open Addressing
-> Unlike Java (which uses Chaining/Linked Lists), Python uses Open Addressing with a special probing technique.
-> If a collision occurs (two keys map to the same index), Python doesn't create a list.
-> Instead, it looks for another empty slot using a pseudo-random probing sequence. This keeps the data "flat" and cache-friendly.

📈 Dynamic Resizing: The Load Factor
-> A hash table works best when it's not too full.
-> Load Factor: the ratio of (number of items) / (table size).
-> Resizing: When the dictionary gets about 2/3 full, Python automatically creates a larger table (usually 2x or 4x the size) and re-hashes all existing keys into the new table. This ensures operations stay O(1).

💻 Python Code: Simulating the Logic

# How Python sees your data
key = "Day47"
value = "HashMap Internal"

# 1. Get hash
h = hash(key)

# 2. Map to index (simplified)
capacity = 8
index = h & (capacity - 1)  # Efficient bitwise way to do modulo when capacity is a power of 2

print(f"Key: {key} hashes to index: {index}")

📊 Python Dict Optimization (Compact Dict):
-> Since Python 3.6+, dictionaries are ordered by default. They use a split-table design (an indices array and an entries array) which saves a lot of memory.

🧠 Challenge of the Day:
-> "If Python uses Open Addressing, what happens to the search time if we keep adding elements without ever resizing the table? Why is the 2/3 threshold important?"

📈 Progress Tracking:
Current Topic: Dictionary Internals
Status: Day 47/60 ✅ (78% Complete!)
Next Up: Handling Collisions

Understanding the 'Internal Magic' makes you a better developer, not just a coder, mama! Ready to apply this speed to some real interview problems tomorrow? 👇

#60DaysOfCode #Python #HashMap #Hashing #SoftwareEngineering #InternalWorking #BigO #DataStructures #Algorithms #PlacementPrep #TechEducation
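To make the open-addressing idea above concrete, here is a tiny simulation of my own (not CPython's actual implementation: real dicts use a perturb-based probe sequence and a compact split-table layout). It shows how a collision triggers probing to the next free slot, and hints at why lookups degrade as the table fills up without resizing.

CAPACITY = 8
slots = [None] * CAPACITY  # each slot holds (key, value) or None

def probe(key):
    """Yield candidate indices. CPython uses a perturb-based sequence;
    simple linear probing is used here just to illustrate open addressing."""
    start = hash(key) & (CAPACITY - 1)
    for step in range(CAPACITY):
        yield (start + step) & (CAPACITY - 1)

def put(key, value):
    for idx in probe(key):
        if slots[idx] is None or slots[idx][0] == key:
            slots[idx] = (key, value)
            return idx
    raise RuntimeError("table full: this is why resizing at ~2/3 load matters")

def get(key):
    for idx in probe(key):
        if slots[idx] is None:
            raise KeyError(key)
        if slots[idx][0] == key:
            return slots[idx][1]
    raise KeyError(key)

print(put("name", "Sriman"), put("day", 47))
print(get("name"), get("day"))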
UNLEASHED THE PYTHON! 1.5, 2, & 3!!! Python API wrapper for rapid integration into any pipeline & the header-only C++ core for speed. STRIKE FIRST; THEN SPEED!! NO MERCY!!! 11 of 14

Copy & paste AI. This is the complete overview of the libcyclic41 project, a mathematical engine designed to bridge the gap between complex geometric growth and simple, stable data loops.

Project Overview: The Cyclic41 Engine

1. Introduction: The Core Intent
The goal of this project was to create a mathematical library that can scale data dynamically while remaining perfectly predictable. Most "growth" algorithms eventually spiral into numbers too large to manage. libcyclic41 solves this by using a 123/41 hybrid model. It allows data to grow geometrically through specific ratios, but anchors that growth to a "modular ceiling" that forces a clean reset once a specific limit is reached.

2. Summary: How It Works
The engine is built on four main pillars:
* The Base & Anchor: We use 123 as our starting "seed" and 41 as our modular anchor. These numbers provide the mathematical foundation for every calculation.
* Geometric Scaling: To simulate expansion, the engine uses ratios of 1.5, 2.0, and 3.0. This is the "Predictive Pattern" that drives the data forward.
* The Reset Loop: We identified 1,681 (41²) as the absolute limit. No matter how many millions of times the data grows, the engine uses modular arithmetic to "wrap" the value back around, creating a self-sustaining cycle.
* Precision Balancing: To prevent the "decimal drift" common in high-speed computing, we integrated a stabilizer constant of 4.862 (derived from the ratio 309,390 / 63,632).

3. The "Others-First" Architecture
To make this useful for the developer community, we designed the library with two layers:
1. The Python Wrapper: Prioritizes ease of use. It allows a developer to drop the engine into a project and start scaling data with just two lines of code.
2. The C++ Core: Prioritizes speed. It handles the heavy lifting, allowing the engine to process millions of data points per second for real-time applications like encryption keys or data indexing.

4. Conclusion: The Result
libcyclic41 is more than just a calculator; it is a stable environment for dynamic data. It proves that with the right modular anchors, you can have infinite growth within a finite, manageable space. Whether it's used for securing data streams or generating repeatable numerical sequences, the 123/41 logic remains consistent, collision-resistant, and incredibly fast.

*So now I am heading towards the end of my material, which is exactly where I started. Make sense? kNOw? KnoW! Stop thinking! "42" 11 of 14
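The post describes the growth-and-wrap logic but includes no code, so here is a small illustrative sketch of that logic as described (grow by a ratio, wrap modulo 41² = 1,681). This is my own reading of the description, not the actual libcyclic41 library or its API.

SEED = 123                 # base "seed" described in the post
ANCHOR = 41                # modular anchor
CEILING = ANCHOR ** 2      # 1,681: the "modular ceiling" the value wraps around
RATIOS = (1.5, 2.0, 3.0)   # geometric scaling ratios

def cyclic_growth(steps: int, seed: int = SEED) -> list[int]:
    """Grow the value geometrically, wrapping it back under the ceiling each step."""
    value = seed
    history = []
    for i in range(steps):
        value = value * RATIOS[i % len(RATIOS)]
        value = int(value) % CEILING  # the "reset loop": the value never exceeds 1,680
        history.append(value)
    return history

if __name__ == "__main__":
    print(cyclic_growth(10))  # a bounded, repeatable sequence regardless of step count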
Python for the Brain, .NET for the Nervous System. 🧠⚙️

Most AI models are born in Python, but the ones that survive at enterprise scale are increasingly running on .NET. While Python is the undisputed king of research and prototyping, it often hits a wall when it meets the brutal demands of a production environment. If you're building high-performance, scalable, and mission-critical AI systems, here is why .NET (C#) is outshining the competition:

1. The Performance Gap (Compiled > Interpreted)
Python is interpreted; .NET is JIT-compiled. In the world of real-time AI inference, every millisecond of latency matters. .NET's native multithreading allows it to handle massive concurrent loads without being throttled by Python's infamous Global Interpreter Lock (GIL).

2. Enterprise-Grade Reliability
* Static Typing: Catch errors at compile time, not at 3:00 AM in your production logs.
* Memory Management: The Common Language Runtime (CLR) provides more efficient garbage collection, preventing the "latency spikes" that plague Python under heavy loads.
* Security & Monitoring: .NET offers mature, built-in tools for authorization and API boundaries that are often an afterthought in Python POCs.

3. The "Hybrid" Winning Strategy 🏆
The best teams aren't choosing one over the other; they are using a split approach:
* Python: Used as the "Experimental Brain" for training and model R&D.
* ONNX Runtime & .NET: Used as the "Production Nervous System." By exporting models to ONNX, you get the best of both worlds: research flexibility and high-speed, type-safe execution.

Why the shift to .NET for production?
* Execution Speed: High performance via compiled/JIT execution vs. Python's slower interpreted nature.
* Concurrency: Excellent native threading capabilities, whereas Python remains bottlenecked by the GIL.
* System Robustness: A static type system that ensures stability, compared to the dynamic prototyping focus of Python.
* Scalability: Built specifically for the "Nervous System" of an enterprise, while Python excels as the "Experimental Brain."

The Verdict: If you want to build a cool demo, use Python. If you want to build a resilient, multi-tenant AI platform that integrates seamlessly with the Azure ecosystem, it's time to look at .NET.

Are you moving AI into production this year? What's your stack of choice? Let's debate in the comments. 👇

#DotNet #CSharp #AI #SoftwareEngineering #MachineLearning #Azure #Python #TechArchitecture #ProductionAI
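The Python half of the hybrid strategy above usually boils down to an export step. A minimal sketch, assuming a PyTorch model (the model, input shape, and file name are placeholders): export to ONNX in Python, and any ONNX Runtime host, including a .NET service, can then load model.onnx for inference without calling back into Python.

import torch
import torch.nn as nn

# Placeholder model standing in for whatever the research team trained.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 16)  # example input that fixes the graph's shapes

# Export the trained model to ONNX; the production side loads "model.onnx"
# with ONNX Runtime instead of embedding a Python interpreter.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["logits"],
)
print("exported model.onnx")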
• Day 30/30

Today I learned about Python file handling methods, which are used to work with files: reading, writing, updating, and managing file content. File handling is very important because it allows us to store data permanently instead of keeping it only in memory while the program runs.

🔸 Common Python File Methods

• open(): The open() method opens a file and returns a file object. The mode ("r", "w", etc.) decides what we can do with the file.
file = open("sample.txt", "r")
print(file)
file.close()

• read(): The read() method is used to read the entire content of a file at once. It is useful when we want to access all the text stored inside a file.
with open("sample.txt", "r") as file:
    print(file.read())

• readline(): The readline() method reads one line at a time from the file. It is useful when working with large files.
with open("sample.txt", "r") as file:
    print(file.readline())

• readlines(): The readlines() method reads all lines of a file and stores them in a list. Each line becomes a separate list element. This is helpful when we want to process file data line by line using loops.
with open("sample.txt", "r") as file:
    print(file.readlines())

• write(): The write() method is used to write data into a file. If the file is opened in write mode, it will overwrite the existing content. This method is useful for saving text or program output into a file.
with open("sample.txt", "w") as file:
    file.write("Hello Python")

• writelines(): The writelines() method is used to write multiple lines into a file at once. It takes a list of strings and writes them to the file. This is useful when saving structured text data.
lines = ["Python\n", "File Handling\n", "Methods\n"]
with open("sample.txt", "w") as file:
    file.writelines(lines)

• close(): The close() method is used to close a file after use. It is important because it ensures that resources are released properly.
file = open("sample.txt", "r")
file.close()
print("File closed")

• flush(): The flush() method forces the file buffer to write data to the file immediately, without closing it. This is useful when we want to ensure that the data is saved instantly.
file = open("sample.txt", "w")
file.write("Data saved")
file.flush()
file.close()

• seek(): The seek() method moves the file pointer to a specific position. This allows us to read or write from a particular point in the file. It is useful for random file access.
with open("sample.txt", "r") as file:
    file.seek(0)
    print(file.read())

• tell(): The tell() method returns the current position of the file pointer. It is useful for checking where the next read or write will happen.
with open("sample.txt", "r") as file:
    print(file.tell())

When opening a file, Python uses different modes depending on the operation:
"r" → Read mode
"w" → Write mode
"a" → Append mode
"x" → Create mode
"b" → Binary mode
"t" → Text mode (default)

#Python #File_Handling #BengaluruStudents #BangaloreIT #BTMLayout #fortunecloud Fortune Cloud Technologies Private Limited
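A small end-to-end example tying several of these methods together (the file name and text are just placeholders): write a file with writelines(), then reopen it and use tell() and seek() to watch the file pointer move.

# Write a small file first ("notes.txt" is just an example name).
lines = ["first line\n", "second line\n"]
with open("notes.txt", "w") as f:
    f.writelines(lines)

# Reopen and watch the pointer move as we read.
with open("notes.txt", "r") as f:
    print(f.tell())        # 0: the pointer starts at the beginning
    print(f.readline())    # reads "first line"
    print(f.tell())        # the pointer is now just past the first line
    f.seek(0)              # jump back to the start
    print(f.read())        # reads the whole file again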
Python Prototypes vs. Production Systems: Lessons in Logic Rigor 🛠️

This week, I stopped trying to write code that "just works" and started writing code that refuses to crash.

As an aspiring Data Scientist, I'm learning that stakeholders don't just care about the output; they care about uptime. If a single "typo" from a user kills your entire analytics pipeline, your system isn't ready for the real world.

Here are the 4 "Industry Veteran" shifts I made to my latest Python project:

1. EAFP over LBYL (Stop "Looking Before You Leap")
In Python, we often use if statements to check every possible error (Look Before You Leap). But a "senior" approach often favors EAFP (Easier to Ask for Forgiveness than Permission) using try/except blocks.
Why? if statements become "spaghetti" when checking for types, ranges, and existence all at once.
Rigor: A try block handles the "ABC" input in a float field immediately, keeping the logic clean and the performance high.

2. The .get() Method: Killing the KeyError
Directly indexing a dictionary with prices[item] is a ticking time bomb. If the key is missing, the program dies.
The Fix: I've switched to .get(item, 0.0). This allows for a default-value fallback in a single line, preventing "dictionary sparsity" from breaking my calculations.

3. Preventing the "System Crush"
Stakeholders hate downtime. I implemented a while True loop combined with try/except for all user inputs.
The Goal: The program should never end unless the user explicitly chooses to "Quit." Every "bad" input now triggers a helpful re-prompt instead of a system failure.

4. Precision in Data Type Conversion
Logic errors often hide in the "conversion chain." I focused on the transition from string (from input()) to int (for indexing).
The Off-by-One Risk: Users think in 1-based counting, but Python is 0-based. I've made it a rule to always subtract 1 from the integer input immediately, to ensure the correct data point is retrieved every time.

The Lesson: Coding is about the architecture of the "Why" just as much as the syntax of the "What."

[https://lnkd.in/gvtiAKUb]

#Python #DataScience #CodingJourney #CleanCode #BuildInPublic #SoftwareEngineering #SeniorDataScientist #TechMentor
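A minimal sketch that combines all four shifts in one place (the menu items and prices are made up for illustration, not from the linked project): EAFP conversion, .get() with a default, a re-prompting loop, and the 1-based to 0-based adjustment.

prices = {"coffee": 3.5, "tea": 2.0, "scone": 4.25}
menu = list(prices)

while True:  # 3. never crash on bad input; only "q" exits
    raw = input(f"Pick an item 1-{len(menu)} (or 'q' to quit): ").strip()
    if raw.lower() == "q":
        break
    try:
        index = int(raw) - 1          # 4. users count from 1, Python counts from 0
        if index < 0:
            raise IndexError
        item = menu[index]
    except (ValueError, IndexError):  # 1. EAFP: try it, handle the failure
        print("Please enter a number from the menu.")
        continue
    price = prices.get(item, 0.0)     # 2. .get() with a default; no KeyError
    print(f"{item} costs ${price:.2f}")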
🚀 Understanding Python Classes, Methods & self (With a Real Example)

If you're learning Python OOP, this example will make everything click 👇

🔹 The Code

class DataValidator:
    def __init__(self):
        self.errors = []

    def validate_email(self, email):
        if "@" not in email:
            self.errors.append(f"Invalid email: {email}")
            return False
        return True

    def validate_age(self, age):
        if age < 0 or age > 150:
            self.errors.append(f"Invalid age: {age}")
            return False
        return True

    def get_errors(self):
        return self.errors

validator = DataValidator()
validator.validate_email("bad-email")
validator.validate_age(200)
validator.validate_email("another-bad-email")
validator.validate_age(150)
print(validator.get_errors())

🔹 Step-by-Step Explanation

✅ 1. Class (Blueprint)
DataValidator is a class: a blueprint for creating validation objects.

✅ 2. Constructor (__init__)
def __init__(self):
    self.errors = []
Runs automatically when the object is created and initializes an empty list to store errors.

✅ 3. Methods (Functions inside the class)
👉 validate_email(self, email): checks if the email contains "@"; if invalid, adds an error to the list.
👉 validate_age(self, age): checks if the age is between 0 and 150; if invalid, stores an error.
👉 get_errors(self): returns all collected errors.

🔹 The Magic of self
💡 self = the current object (instance)
When you write:
validator.validate_email("bad-email")
Python internally does:
DataValidator.validate_email(validator, "bad-email")
👉 That's why we don't pass self manually.

🔹 Instance (Real Object)
validator = DataValidator()
This creates an object. Each object has its own errors list.

🔹 Output Explained
['Invalid email: bad-email', 'Invalid age: 200', 'Invalid email: another-bad-email']
✔ Invalid email → no "@"
✔ Invalid age → 200 > 150
✔ Valid age (150) → ignored

🔥 Key Takeaways
Class = Blueprint 🏗️
Instance = Real object 🎯
Method = Action (function inside a class) ⚙️
self = current object reference 🧠
Objects can store state (like the errors list) 💬

This is how real-world systems validate data in forms, APIs, and apps. If you understand this, you're officially stepping into real OOP development 🚀

#Python #OOP #Programming #Coding #Developers #LearnToCode #SoftwareEngineering
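One point above ("Each object has its own errors list") is worth seeing directly. A quick follow-up snippet, reusing the DataValidator class defined in the post (the instance names below are just illustrative), showing that two instances keep completely separate state:

# Assumes the DataValidator class from the post is already defined.
signup_check = DataValidator()
profile_check = DataValidator()

signup_check.validate_email("no-at-sign")
profile_check.validate_age(-5)

print(signup_check.get_errors())   # ['Invalid email: no-at-sign']
print(profile_check.get_errors())  # ['Invalid age: -5']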
Task Holberton Python: Mutable vs Immutable Objects

During this trimester at Holberton, we started by learning the basics of the Python language. Then, as time went on, both the difficulty and our knowledge gradually increased. We also learned how to create and manipulate databases using SQL and NoSQL, what Server-Side Rendering is, how routing works, and many other things. This post will only show you a small part of everything we learned in Python during this trimester, as covering everything would be quite long. Enjoy your reading 🙂

Understanding how Python handles objects is essential for writing clean and predictable code. In Python, every value is an object with an identity (memory address), a type, and a value.

Identity & Type
x = 10
print(id(x))
print(type(x))

Mutable Objects
Mutable objects (like lists, dicts, sets) can change without changing their identity.
lst = [1, 2, 3]
lst.append(4)
print(lst)  # [1, 2, 3, 4]

Immutable Objects
Immutable objects (like int, str, tuple) cannot be changed. Any modification creates a new object.
x = 5
x = x + 1  # new object

Why It Matters
With mutable objects, changes affect all references:
a = [1, 2]
b = a
b.append(3)
print(a)  # [1, 2, 3]

With immutable objects, they don't:
a = "hi"
b = a
b += "!"
print(a)  # "hi"

Function Arguments
Python uses "pass by object reference".

Immutable example:
def add_one(x):
    x += 1

n = 5
add_one(n)
print(n)  # 5

Mutable example:
def add_item(lst):
    lst.append(4)

l = [1, 2]
add_item(l)
print(l)  # [1, 2, 4]

Advanced Notes
- Shallow vs deep copy matters for nested objects (see the sketch after this post)
- Beware of aliasing: matrix = [[0]*3]*3

Conclusion
Mutable objects can change in place, while immutable ones cannot. This impacts how Python handles variables, memory, and function arguments: key knowledge for avoiding bugs.
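A short sketch of the two "Advanced Notes" above, since the aliasing trap with [[0]*3]*3 bites almost everyone once: the outer list holds three references to the same inner list, and copy.copy() vs copy.deepcopy() behave differently for exactly that reason.

import copy

# Aliasing trap: the outer list repeats one inner list three times.
matrix = [[0] * 3] * 3
matrix[0][0] = 1
print(matrix)  # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]: every row changed

# Shallow copy duplicates the outer list but still shares the inner lists.
original = [[0] * 3 for _ in range(3)]  # independent rows this time
shallow = copy.copy(original)
shallow[0][0] = 9
print(original[0])  # [9, 0, 0]: the inner row is shared with the copy

# Deep copy duplicates the nested lists too, so the original is untouched.
original = [[0] * 3 for _ in range(3)]
deep = copy.deepcopy(original)
deep[0][0] = 9
print(original[0])  # [0, 0, 0]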
pywho - A debugging painkiller for Python developers

What is pywho?
pywho is a zero-dependency Python CLI that explains your environment, traces imports, and detects module shadowing. It supports JSON output and runs cross-platform.

Pain point: Debugging Python issues usually means checking the interpreter, virtualenv, sys.path, pip, and import resolution separately. That is slow, repetitive, and often leads to "works on my machine" problems.

Target audience: All Python developers.

GitHub repo: https://lnkd.in/dMvz9PYM
PyPI package: https://lnkd.in/dM72_8rs
Docs site: https://lnkd.in/dCvUBAeu

♻️ Resharing to support the community

#Python #PythonDeveloper #Debugging Python Valley Python #DeveloperTools #OpenSource Python Coding Python #SoftwareEngineering #BackendDevelopment Python Software Foundation #DevTools
Backend Engineer @ Bonial Germany | Python • 2 x AWS • Java • Microservices | Python Instructor @ ReDI
🐍 I built a Python CLI tool (fully powered by AI) that solves a problem every developer has faced.

You know the drill:
❌ "Works on my machine", but it breaks everywhere else
❌ "which python" → points to the wrong interpreter
❌ "import json" silently loads your "json.py" instead of the real one
❌ "Is my venv even active? Which one? What type?"
❌ Debugging environment issues by running 6 different commands and piecing together the puzzle

These are the exact pain points that made me build pywho.

🔧 One command. Full picture.
pip install pywho

pywho gives you:
✅ Which Python interpreter you're running (version, path, compiler, architecture)
✅ Virtual environment status: detects venv, virtualenv, uv, conda, poetry, pipenv
✅ Package manager detection
✅ Full "sys.path" with index numbers
✅ All "site-packages" directories

🔍 Import tracing: ever wondered WHY "import requests" loaded that file?
pywho trace requests
Shows you the exact search order Python followed, which paths it checked, and where it finally found the module.

⚠️ Shadow scanning: the silent bug killer
pywho scan .
Scans your entire project for files like "json.py", "math.py", or "logging.py" that accidentally shadow stdlib or installed packages. These bugs can take hours to debug. pywho finds them in seconds.

💡 What makes it different?
I looked for existing tools and found:
- "pip inspect" → JSON-only, no shadow detection, no import tracing
- "python -v" → unreadable verbose output
- "flake8-builtins" → only catches builtin name shadowing
- "ModuleGuard" → academic research tool, not a practical CLI
- Linters like "pylint" → catch some shadows but don't trace resolution paths

No tool combines all three:
• Environment inspection
• Import tracing
• Shadow scanning
pywho is the first to bring them together.

🏗 Built with quality in mind
- 🧪 149 tests, 98% branch coverage
- 💻 Cross-platform: Linux, macOS, Windows
- 🐍 Python 3.9 – 3.14
- 📦 Zero dependencies (pure stdlib)
- ⚡ CI with 20 automated checks per PR
- 🔒 Read-only: no filesystem writes, no network calls

The best debugging tool is the one you don't have to think about. Next time someone says "it works on my machine", just ask them to run:
pywho
…and paste the output. Done. 🎯

⭐ GitHub: https://lnkd.in/dMvz9PYM

Would love your feedback! What other pain points do you hit with Python environments? 👇

#Python #OpenSource #DevTools #CLI #DeveloperTools #SoftwareEngineering #Debugging #PythonDev #pywho
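The shadowing problem described in the post's shadow-scanning section is easy to demonstrate with plain Python (this sketch is my own illustration and is unrelated to pywho's internals): the script's directory normally sits first on sys.path, so a stray json.py silently wins over the standard library.

import pathlib
import sys
import tempfile

# Create a directory containing a file that shadows the stdlib "json" module.
workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "json.py").write_text("MESSAGE = 'I am not the real json module'\n")

# A project's own directory usually sits at sys.path[0], which is exactly
# how an accidental json.py in your repo wins over the standard library.
sys.path.insert(0, str(workdir))
sys.modules.pop("json", None)  # forget any previously imported json

import json  # resolves to workdir/json.py, not the stdlib
print(json.__file__)            # .../json.py inside the temp directory
print(getattr(json, "MESSAGE", None))
print(hasattr(json, "loads"))   # False: the real API is gone, silently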