No LangChain, no Python, no fancy framework. Just C, curl, and a local LLaMA model running with llama.cpp on my machine. Why? I wanted to really feel what “agentic” AI looks like at the lowest level:

1) I spin up llama.cpp locally with a quantized Meta LLaMA model, exposing an OpenAI-style /v1/chat/completions endpoint on localhost.
2) My C program opens a simple terminal loop, reads user input, and sends it as JSON over HTTP using libcurl.
3) The response is parsed directly from the raw JSON and printed back to the console – no SDKs, no helpers.
4) Every turn (User: / Bot:) is appended to a memory.txt log, so the agent has a persistent, readable conversation history I can inspect right inside my editor.
5) On top of that, I keep a sliding window of recent messages in memory and send them with every request, so the model can actually “remember” context during the session, not just answer one-off prompts.

It’s a tiny project, but it was a fun reminder of what’s actually happening under all the layers of modern tooling: you’re just sending structured text to a model, getting structured text back, and deciding how to manage state and side-effects around it (see the sketch below).

Code: https://lnkd.in/ggdYXaZH
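For reference, a minimal Python sketch of the same request/response loop the C program implements (the endpoint path comes from the post; the port, model name, and window size are assumptions about a typical llama.cpp setup):

import json
import urllib.request

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Sliding window: keep the system prompt plus the last 8 messages
    window = history[:1] + history[1:][-8:]
    payload = json.dumps({"model": "llama", "messages": window}).encode()
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # port 8080 is an assumption
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply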
More Relevant Posts
-
📣 Shiny for Python 1.6 is now available on PyPI! This release ships two major additions:

**Toolbars** — A new family of compact components (`ui.toolbar()`, `ui.toolbar_input_button()`, `ui.toolbar_input_select()`) designed to fit controls into tight spaces. Place them in card headers and footers, inline with input labels, or directly inside `input_submit_textarea()` for AI chat interfaces. The same toolbar components are also available in bslib for R. (Thank you, Liz Nelson!) A rough usage sketch follows below.

**OpenTelemetry** — Built-in observability support with zero changes to your app code. Set a single environment variable (`SHINY_OTEL_COLLECT=reactivity`), point Shiny at any OTLP-compatible backend, and get full traces of session lifecycles, reactive update cascades, and individual reactive expressions. This is particularly useful for GenAI apps where you need to understand whether slowdowns are in model calls, tool execution, or downstream reactive calculations.

Full release notes and examples: https://lnkd.in/e8CtZahi
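A sketch of how the new toolbar components might slot into a card header, based only on the component names above (the argument names and structure here are assumptions, not the confirmed 1.6 signatures):

from shiny import ui

card = ui.card(
    ui.card_header(
        "Chat",
        # Toolbar components named in the release notes; arguments are guesses
        ui.toolbar(
            ui.toolbar_input_button("clear", "Clear"),
            ui.toolbar_input_select("model", choices=["gpt-4o", "local-llama"]),
        ),
    ),
    ui.output_ui("messages"),
)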
-
⛔ If you're new to Python, please stop doing this! 💨 Do NOT modify a list while iterating over it. When you modify a list mid-iteration, the iterator gets confused: it doesn't know that the list has changed under its feet. For example, this code is broken:

items = [1, 2, 2, 3, 4]
for item in items:
    if item == 2:
        items.remove(item)
print(items)  # Output: [1, 2, 3, 4]

Here we use the remove() method to remove the 2s from the list. But if you look at the output, a 2 is still there. When you remove an item from a list, everything after it shifts one position left, while the loop keeps moving forward. So here's what really happens: the first 2 is found and removed, the second 2 shifts into its position, the loop advances to the next index, and that second 2 gets skipped. The best way to do it is to iterate over a copy:

for item in items[:]:
    if item == 2:
        items.remove(item)
print(items)  # Output: [1, 3, 4]

When you iterate over a (shallow) copy, you remove items from the original list while the iterator walks the untouched copy, so nothing shifts under it. Even better, you can use a list comprehension:

items = [x for x in items if x != 2]

👑 Never, ever modify a list (or any collection) while iterating over it directly. The iterator doesn't handle structural changes gracefully. It will skip elements, process the same element twice, or raise a RuntimeError (for some collections, like dictionaries). It's a bad practice.
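The dictionary case fails loudly instead of silently skipping. A quick illustration:

d = {"a": 1, "b": 2}
for key in d:
    if d[key] == 1:
        del d[key]  # RuntimeError: dictionary changed size during iteration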
-
Let's demystify Python for backend in 60 seconds—with code that matters.

❌ Myth: "Python is too slow for production."
✅ Truth: Python is the glue that holds scalable systems together.

Here's the pattern I use for every backend feature:

# 1. The universal data pattern: a list of dictionaries
users = [
    {"id": 1, "name": "Alice", "email": "a@x.com", "active": True},
    {"id": 2, "name": "Bob", "email": "b@x.com", "active": False},
]

# 2. Filter + transform with for + if (the backend's heartbeat)
active_users = [user["email"] for user in users if user["active"]]

Why this pattern scales:
✅ Readable by humans (onboarding, debugging, audits)
✅ Testable in isolation (unit tests, CI/CD)
✅ Extendable without breaking (open/closed principle)
✅ Graceful under failure (error handling, logging)
(A testable version of this snippet is sketched after the post.)

Why this matters for AI engineering:
- Model endpoints = functions with clear contracts
- Feature pipelines = list-of-dicts transformations
- Evaluation systems = filter + aggregate patterns
- MLOps = Python + infrastructure + observability

Master the pattern. Scale the impact.

🔧 What's your go-to pattern for processing backend data? List comprehensions? Pandas? Something custom? 👇

#Python #BackendDevelopment #SoftwareEngineering #AIInfrastructure #CleanCode
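To back the "testable in isolation" point, a minimal sketch of the same filter-and-transform factored into a unit-testable function (the function name is illustrative, not from the original post):

def active_emails(users: list[dict]) -> list[str]:
    """Return the emails of active users."""
    return [user["email"] for user in users if user["active"]]

# Unit test in isolation, no framework required
sample = [
    {"id": 1, "name": "Alice", "email": "a@x.com", "active": True},
    {"id": 2, "name": "Bob", "email": "b@x.com", "active": False},
]
assert active_emails(sample) == ["a@x.com"]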
-
UNLEASHED THE PYTHON! 1.5, 2, & three!!! 14 of 14 (B of B) copy & paste AI

Headline: Revolutionizing Data Streams with the 'Cyclic41' Hybrid Engine, libcyclic41. A library that offers the best of both worlds—Geometric Growth for expansion and Modular Arithmetic for stability.

Most data growth algorithms eventually spiral into unmanageable numbers. I wanted to build a library that offers the best of both worlds—Geometric Growth for expansion and Modular Arithmetic for stability.

The Math Behind the Engine: Using a base of 123 and a modular anchor of 41, the engine scales data through ratios of 1.5, 2, and 3. What makes it unique is its "Predictive Reset"—the sequence automatically and precisely wraps around at 1,681 (41²), ensuring the system never overflows.

Key Technical Highlights:
- Ease of Use: A Python API wrapper for rapid integration into any pipeline.
- Raw Speed: A header-only C++ core designed for millions of operations per second.
- Zero-Drift Precision: An integrated 4.862 stabilizer maintains bit-level accuracy across 10M+ iterations.

Whether you're working on dynamic encryption keys, real-time data indexing, or predictive modeling, libcyclic41 provides a self-sustaining mathematical loop that is both collision-resistant and incredibly efficient.

🚀 Get Started with libcyclic41 in seconds! For those who want to test the 123/41 loop in their own projects, here is the basic implementation:

1️⃣ Install the library: pip install cyclic41 (or clone the C++ header from the repo below!)

2️⃣ Initialize & Grow:

from cyclic41 import CyclicEngine

# Seed with the base 123
engine = CyclicEngine(seed=123)

# Grow the stream by the 1.5 ratio
# The engine handles the 1,681 reset automatically
val = engine.grow(1.5)

# Extract your stabilized sync key
key = engine.get_key()

Your Final Project Checklist:
- The Math: Verified 100% across all ratios (1.5, 2, 3).
- The Logic: Stable through 10M+ iterations.
- The Visuals: Infinity-loop diagram ready for the main post.
- The Code: Hybrid Python/C++ structure is developer-ready.

14 of 14 (B of B). Not the end. NOT THE END.
-
UNLEASHED THE PYTHON! 1.5, 2, & three!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick from the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!! 8 of 14 copy & paste AI

Packaging the library for distribution & refining the 4.862 constant to ensure it's rock-solid for users.

1. Refining the "4.862" Constant
Based on my calculation (309,390 / 63,632 = 4.86217…), the library should use high-precision floating points. This ensures that when the library scales, the "drift" doesn't break the encryption or the data sync. With help from AI, I will hard-code this as a High-Precision Constant in the engine.

2. The Library Structure (GitHub Ready)
To make this easy for others to download & use, we follow the standard structure for a high-performance Python/C++ hybrid library.

Project Name: libcyclic41

File structure:

libcyclic41/
├── src/
│   └── engine.hpp        # The high-speed C++ core
├── cyclic41/
│   ├── __init__.py       # Python entry point
│   └── wrapper.py        # Ease-of-use API
├── tests/
│   └── test_cycles.py    # Stress-test for the 1,681 limit
├── setup.py              # Installation script (pip install .)
└── README.md             # Documentation for "others"

3. The Installation Script (setup.py)
This is what makes it "easy" for others: they can run one command to install the mathematical engine (a generic sketch follows below). 8 of 14
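For completeness, a generic minimal setup.py matching that layout (the version and metadata here are placeholders, not the author's actual file):

from setuptools import setup, find_packages

setup(
    name="libcyclic41",
    version="0.1.0",  # placeholder version
    packages=find_packages(include=["cyclic41", "cyclic41.*"]),
    description="Hybrid geometric-growth / mod-41 cycle engine",
)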
-
Python developers just received a serious upgrade from Meta. They released 𝗣𝘆𝗿𝗲𝗳𝗹𝘆 to transform how you write code.

This tool is a blazing-fast static type checker and language server. 𝗣𝘆𝗿𝗲𝗳𝗹𝘆 is designed to handle massive codebases efficiently. It automatically infers types for your variables and return values. The engine understands your control flow to provide precise contextual insights. You can catch critical bugs instantly, before your application ever runs (see the toy example below). It integrates perfectly into your terminal or your favorite IDE.

Time to ditch 𝗽𝘆𝗿𝗶𝗴𝗵𝘁 and 𝗺𝘆𝗽𝘆 hehe.

🔗 Link to repo: github(.)com/facebook/pyrefly

---
♻️ Found this useful? Share it with another builder.
➕ For daily practical AI and Python posts, follow Banias Baabe.
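A toy illustration of the kind of bug a static checker flags without running the program (this shows generic type-checking behavior, not a Pyrefly-specific feature):

def total(prices: list[float]) -> float:
    return sum(prices)

total("3.50")  # flagged by the checker: str is not list[float], caught before runtime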
-
Type inference is one of the things a Data Engineer can use to catch bugs and exceptions before things break at run time! Cool stuff from Meta for Python devs. I need to see how this does vs. what mypy has been offering 🤔

For those who come from a Scala/Spark background, this should be a tad nostalgic! DataFrame and Dataset schema specs while ingesting raw flat files 😌😇

#staticChecks #dataengineering #pythonDE
-
🚀 Why uv is replacing pip in modern Python workflows

For years, pip has been the default tool for installing Python packages. It works—but it was never designed to handle today's complexity around environments, reproducibility, and speed. That's where uv comes in.

🔹 1. Speed that actually matters
uv is written in Rust and is insanely fast—often 10–100x faster than pip.
👉 Example: installing a heavy stack like pandas + numpy + scikit-learn
- pip → noticeable wait time
- uv → installs in seconds
For data scientists and ML engineers, this alone is a game changer.

🔹 2. One tool instead of many
With pip, you usually combine:
- venv (for environments)
- pip (for installs)
- pip-tools/poetry (for dependency management)
👉 uv replaces all of these in a single unified tool. No more juggling multiple commands and tools.

🔹 3. Better dependency resolution
pip can sometimes:
- install conflicting versions
- behave inconsistently across machines
uv provides more reliable and deterministic installs, reducing "works on my machine" issues.

🔹 4. Built-in lockfiles (reproducibility)
uv generates lockfiles to ensure:
- same versions
- same environment
- same results
This is critical in ML experiments, production pipelines, and team collaboration. (A pin-and-sync example follows after the post.)

🔹 5. Easy migration (drop-in replacement)
You don't need to relearn everything.
👉 Same workflow:
uv pip install numpy
uv pip install -r requirements.txt
So you get better performance without changing habits much.

🔹 6. Real-world workflow comparison
👉 Using pip:
python -m venv env
source env/bin/activate
pip install -r requirements.txt
👉 Using uv:
uv venv
uv pip install -r requirements.txt
Cleaner. Faster. Simpler.

💡 Final Thoughts
pip isn't "bad"—it's just outdated for modern workflows. If you're working in Data Science, AI/ML, or backend Python, switching to uv can save time, reduce friction, and improve reliability.

⚡ Bottom line: uv is not just an alternative—it's an upgrade.

#Python #DataScience #AI #MLOps #SoftwareEngineering #Developers #Productivity
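One way to get the lockfile workflow from point 4, using uv's pip-tools-compatible interface (this assumes your top-level dependencies live in a requirements.in file, which is a convention rather than a requirement):

uv pip compile requirements.in -o requirements.txt   # resolve and pin exact versions
uv pip sync requirements.txt                         # install exactly those pins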
-
What if your code could think? That's LangChain.

LangChain is a framework that lets you build apps powered by LLMs (like GPT or Claude) - with memory, tools, and logic. Here's how simple it is to build a chatbot with memory in Python:

from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

llm = ChatOpenAI(model="gpt-4")
memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)

chain.predict(input="My name is Virat")
chain.predict(input="What's my name?")  # → "Your name is Virat."

Without memory → every message is a fresh conversation.
With memory → the model remembers context across turns.

LangChain also lets you:
🔹 Connect LLMs to your own documents (RAG)
🔹 Give the model tools — search, calculator, APIs (see the tool sketch after the post)
🔹 Build multi-step AI agents that reason and act
🔹 Chain prompts together for complex workflows

#LangChain #Python #LLM #MachineLearning #BackendDevelopment #LearningInPublic #Java #SpringBoot #AI
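A minimal sketch of the "tools" bullet, using LangChain's @tool decorator and bind_tools (the add function is an illustrative toy, not from the post):

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm = ChatOpenAI(model="gpt-4")
llm_with_tools = llm.bind_tools([add])
# The model can now respond with a structured add(a=2, b=3) tool call
response = llm_with_tools.invoke("What is 2 + 3?")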
-
OpenAI just bought Astral: the team behind ruff, uv and ty (Python's best tools). The deal will bring Astral's team into OpenAI's Codex effort. Codex now has more than 2 million users, a number that has tripled since the start of the year.

What does this mean in practice?
1/ ruff, uv and ty will likely remain open source, but their roadmap may shift toward AI-assisted developer workflows.
2/ The Python tooling ecosystem just got a major investor: more resources and faster development, at least in the short term.
3/ A question worth asking: will the community maintain control over these critical tools if a single company owns the team?

The Python ecosystem is changing fast. Whether this is good news depends on your perspective, but it's definitely news worth following. Personally, I don't think open-source ecosystems should depend too heavily on a single private player. What do you think?