Spent some time today building an AI agent in Python with the OpenAI API, using the ReAct framework. Was a very satisfying experience. Please find the video explanation of the entire Python code if you are interested. Basic ideas: 1. Give the LLM short-term memory by appending the full conversation history. 2. Give the LLM tools it can use to fulfill the user query. 3. Run the agent in a loop of Thought, Action, PAUSE, Observation to reach the final answer. https://lnkd.in/dwV24J8j
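The loop described in the post can be sketched in a few lines. This is a minimal illustration, not the author's code: the `scripted_llm` below is an offline stand-in for the real OpenAI chat call, and `calculate` is an assumed demo tool.

```python
import re

class Agent:
    """ReAct agent: short-term memory is just the full message history."""
    def __init__(self, llm, system=""):
        self.llm = llm  # callable: list of message dicts -> reply string
        self.messages = []
        if system:
            self.messages.append({"role": "system", "content": system})

    def __call__(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = self.llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Demo tool only; never eval untrusted input in real code.
TOOLS = {"calculate": lambda expr: str(eval(expr))}

ACTION_RE = re.compile(r"^Action: (\w+): (.*)$", re.MULTILINE)

def run(agent, question, max_turns=5):
    """Thought -> Action -> PAUSE -> Observation, until a final answer."""
    prompt = question
    for _ in range(max_turns):
        reply = agent(prompt)
        match = ACTION_RE.search(reply)
        if not match:                       # no Action line: final answer
            return reply
        tool, arg = match.groups()
        observation = TOOLS[tool](arg)      # run the tool during PAUSE
        prompt = f"Observation: {observation}"
    return reply

# Scripted stand-in for the OpenAI call, so the sketch runs offline:
def scripted_llm(messages):
    if any(m["content"].startswith("Observation:") for m in messages):
        return "Answer: 4"
    return "Thought: I need arithmetic.\nAction: calculate: 2 + 2\nPAUSE"
```

Swapping `scripted_llm` for a real `client.chat.completions.create` call (plus a system prompt teaching the Thought/Action/Observation format) gives the agent from the video.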
Amit Deshpande’s Post
More Relevant Posts
8.7% of Python package hallucinations by AI models match valid JavaScript package names. Think about that. An AI suggests a Python package that happens to share a name with a real JS package. The developer doesn't find it on PyPI, gets confused, and either installs the wrong thing or an attacker registers it on PyPI first. Cross-ecosystem confusion is a real and growing attack vector.
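A defensive check against this failure mode is cheap. A sketch, where the registry name sets are placeholders (a real check would query the PyPI and npm registry APIs before trusting a suggested dependency):

```python
def vet_suggested_package(name, pypi_names, npm_names):
    """Flag an AI-suggested Python dependency before installing it.
    pypi_names / npm_names stand in for real registry lookups."""
    if name in pypi_names:
        return "ok"
    if name in npm_names:
        # The cross-ecosystem case: a real npm name hallucinated as a
        # Python package, and a prime squatting target on PyPI.
        return "suspect: exists on npm, not on PyPI"
    return "suspect: not in either registry"
```

Either "suspect" result is a signal to stop and verify before `pip install`.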
Google ADK: Build Your First AI Agent in Python. A step-by-step guide to building, testing, and deploying AI agents with the Google Agent Development Kit. https://lnkd.in/gUwM5YDF
Most data isn’t behind a paywall… it’s behind a login. 🔐 And that’s where many scraping workflows stop. This video shows how to scrape authenticated pages using Python, with Crawlbase supporting reliable data extraction so you can access what’s actually behind the login, not just what’s publicly visible. What you will learn: 🔹 How to handle login-based scraping workflows 🔹 How session management works in real scenarios 🔹 Why authenticated scraping is critical for real data access 🔹 How Crawlbase supports more consistent data extraction If you’re working with protected data sources, this is a practical walkthrough worth watching. 👉 Watch full video: https://lnkd.in/gxfJPxGH #Crawlbase #WebScraping #Python #Automation #DataEngineering #Developers #APIs #TechTools #PythonTutorial
How to Scrape Data Behind Login Pages Using Python
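The core of login-based scraping is session management: the cookies set at login must persist across later requests. A standard-library sketch of that workflow (`requests.Session` is the usual, shorter route); the URLs, form field names, and the `csrf_token` field are assumptions that vary per site:

```python
import re
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

def extract_csrf(html, field="csrf_token"):
    """Pull the hidden CSRF token many login forms require
    (field name is an assumption; inspect the real form)."""
    match = re.search(rf'name="{field}"\s+value="([^"]+)"', html)
    return match.group(1) if match else None

def fetch_behind_login(login_url, target_url, username, password):
    # One opener sharing a CookieJar = session management:
    # cookies set at login persist for every later request.
    jar = CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    with opener.open(login_url) as resp:      # 1. load form, get session cookie
        token = extract_csrf(resp.read().decode())
    form = urllib.parse.urlencode({
        "username": username,
        "password": password,
        "csrf_token": token or "",
    }).encode()
    opener.open(login_url, data=form)         # 2. POST credentials
    with opener.open(target_url) as resp:     # 3. authenticated page
        return resp.read().decode()
```

Services like Crawlbase layer proxy rotation and anti-bot handling on top of this same session idea.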
"BREAKING: China has open-sourced AgentScope, a massive Python framework for building AI agents. Built around Agent-Oriented Programming, it lets you build AI agents visually with MCP tools, memory, RAG, and reasoning capabilities. 100% open source." ➡️ Helps your workflow? https://lnkd.in/gAknPxGE
Today I made a simple Python Debugger Agent. How it works: paste your Python code and it will analyze the execution, catch errors, and suggest fixes. Scope: debugging scripts, explaining errors, and guiding users interactively. The link is given below. You can also build your own custom AI agents. https://lnkd.in/d985cKZV #gensparkai #ai
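The execute-and-catch core of such an agent can be sketched with the standard library. This is an assumed skeleton, not the linked agent's code; the LLM layer that turns the error report into a suggested fix is omitted:

```python
import traceback

def debug_run(code):
    """Execute a snippet, catch any error, and report where it happened.
    An LLM would take this report and propose a fix interactively."""
    env = {}
    try:
        exec(code, env)  # run the pasted snippet in an isolated namespace
        return {"ok": True, "error": None, "line": None}
    except Exception as exc:
        frames = traceback.extract_tb(exc.__traceback__)
        # Last frame belongs to the executed snippet itself.
        line = frames[-1].lineno if frames else None
        return {"ok": False,
                "error": f"{type(exc).__name__}: {exc}",
                "line": line}
```

Note `exec` runs arbitrary code, so a real tool would sandbox this step.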
Behind every working feature, there are: • 10 failed attempts • 20 Google searches • 50 lines of debug logs But that’s the process. Building scalable systems, fixing issues, and improving performance — that’s what makes development interesting. Enjoying the journey of solving problems one step at a time. #DeveloperLife #CodingJourney #Python #APIs
Rebuilt Bayesian Optimization from Python/BoTorch to native C++/CUDA. 33x faster end-to-end on a V100 — from 8.1s to 243ms per iteration. Optimizes trading strategy parameters across 19 dimensions with 20 parallel L-BFGS restarts on separate CUDA streams, all sub-second end-to-end. What made the biggest difference: - Batching cuBLAS dtrsm (one call with 4096 RHS vs per-candidate) - GPU-side perturbation generation (eliminated 95% of host-device transfers) - Shared-memory parallel reduction for MC sampling (23x kernel speedup) - Multi-stream parallel restarts (10 concurrent L-BFGS optimizations) What's your experience moving Python ML workloads to native CUDA?
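The batching win generalizes beyond CUDA. A NumPy sketch of the first bullet's idea (not the author's code): one solve over all candidate right-hand sides instead of a per-candidate loop, which is exactly what batching the cuBLAS `dtrsm` call into a single call with thousands of RHS columns achieves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 512  # matrix size, number of candidate RHS (4096 in the post)

# Well-conditioned lower-triangular system, as in a Cholesky-based GP solve.
L = np.tril(rng.normal(size=(n, n))) + n * np.eye(n)
B = rng.normal(size=(n, m))

# Per-candidate: one solve per RHS column (analogous to per-candidate dtrsm).
X_loop = np.column_stack([np.linalg.solve(L, B[:, j]) for j in range(m)])

# Batched: one call handles every RHS at once; the BLAS kernel amortizes
# its setup and keeps the matrix hot instead of re-reading it m times.
X_batch = np.linalg.solve(L, B)

assert np.allclose(X_loop, X_batch)
```

(`np.linalg.solve` doesn't exploit the triangular structure the way `dtrsm` does; the point here is the one-call-many-RHS shape, which is where the throughput comes from on both CPU and GPU.)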
Your Model Card is just as important as your Python script. Documentation should never be treated as an afterthought or a "nice-to-have" feature. A well-written Model Card explaining the intended use, data limitations, and ethical considerations is vital for transparency. It provides the necessary context for stakeholders to trust the "intelligence" you’ve built. If it isn't documented, it isn't production-ready. #ResponsibleAI #MLOps #Documentation
Built a Web Scraper with Pagination using Python. Features: 1. Scrapes quotes and authors from multiple pages 2. Implements pagination to fetch data beyond a single page 3. Allows the user to control the number of results 4. Handles invalid inputs and stops when pages end This helped me understand: 1. How pagination works in web scraping 2. Loop-based data collection across pages 3. Structuring scraped data for better readability. Cognifyz Technologies
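The pagination loop at the heart of such a scraper can be sketched generically. The HTML fetching and quote parsing are abstracted into a `fetch_page` callable (an assumption; in the real scraper it would request page N and parse quotes and authors out of the HTML):

```python
def scrape_paginated(fetch_page, max_results):
    """Collect items across pages until the user-chosen limit is hit
    or the site runs out of pages.
    fetch_page(n) -> (items_on_page_n, has_next_page)"""
    results, page = [], 1
    while len(results) < max_results:
        items, has_next = fetch_page(page)
        results.extend(items)
        if not has_next:              # stop when pages end
            break
        page += 1
    return results[:max_results]      # honor the result limit exactly
```

Keeping fetching separate from the loop also makes the pagination logic testable without any network access.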