We all know git blame — but it was missing one thing: a sense of humor.

Introducing gitblame-ai 🔥

It scans your repo, identifies the most "interesting" (read: suspicious) lines of code using custom heuristics, and sends them to Claude AI for a brutally honest critique.

🚢 Features:
Roast Mode: Savage senior engineer critiques.
Corporate Mode: Passive-aggressive "circling back" on your variables.
Pirate Mode: "Arrr, this indentation be shallower than a coral reef!"
Smarter Scanning: Automatically respects your .gitignore.

Perfect for team standups, Friday afternoon vibes, or just holding your friends accountable in the funniest way possible.

🚀 Try it now: pip install gitblame-ai
⭐ Star it on GitHub: https://lnkd.in/dtww5zyG

#Python #OpenSource #AI #Git #ClaudeAI #DeveloperTools #BuildInPublic
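The post doesn't publish its heuristics, but a minimal sketch of the "interesting line" scoring idea might look like this. The rules below are my own guesses at plausible heuristics, not gitblame-ai's actual logic:

```python
import re

def interest_score(line: str) -> int:
    """Score how 'roastable' a line of code looks. Higher = more suspicious."""
    score = 0
    if len(line) > 120:                           # unreadably long line
        score += 2
    if re.search(r"\b(TODO|FIXME|HACK|XXX)\b", line):
        score += 3                                # confession left in a comment
    if re.search(r"except\s*:", line):            # bare except swallows everything
        score += 3
    indent = len(line) - len(line.lstrip(" "))
    if indent >= 16:                              # deeply nested logic
        score += 2
    return score

def most_interesting(lines, top_n=3):
    """Return the top_n (line_number, line) pairs, most suspicious first."""
    ranked = sorted(enumerate(lines, start=1),
                    key=lambda pair: interest_score(pair[1]),
                    reverse=True)
    return ranked[:top_n]
```

The highest-scoring lines would then be sent to the model for the actual roast.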
Gitblame-ai: Humorous Code Critique Tool
I added memory to my chatbot — and what I expected to be a minor upgrade turned into a bit of a "wait, this is actually cool" moment.

Version 1 was straightforward: ask a question, get an answer, start fresh next time. Useful, but cold. Like a vending machine that also talks.

Version 2 remembers you. Not in some fancy way — it's literally reading and writing to a .txt file. But because LangChain feeds that history back into the model at every turn, the conversation has continuity. You can say "remember what I told you earlier?" and it actually can.

Building it made me realize: memory isn't just a feature. It's the thing that makes an AI feel like it's actually *with* you instead of just responding *to* you.

The stack is still simple — Python, LangChain, Ollama running LLaMA 3.2 locally. No external APIs, no data leaving my machine.

Where I want to take it:
— Smarter memory with a vector database
— Distinguishing between what to remember long-term vs short-term
— A proper UI so it doesn't live only in a terminal

It's still early. But it's starting to feel like I'm building something, not just tinkering. Code link is in the comments. 👇

#AI #MachineLearning #Python #LangChain #Chatbot #BuildInPublic #GenAI
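The actual code is linked in the post's comments; as a rough sketch of the file-backed memory idea, the core mechanism is just "prepend the stored history to every prompt." Names here are illustrative, and the LLM call (LangChain + Ollama in the real app) is left out:

```python
from pathlib import Path

class FileMemory:
    """Naive long-term memory: the whole chat history lives in one .txt file."""

    def __init__(self, path="chat_history.txt"):
        self.path = Path(path)

    def load(self) -> str:
        return self.path.read_text() if self.path.exists() else ""

    def append(self, role: str, text: str) -> None:
        with self.path.open("a") as f:
            f.write(f"{role}: {text}\n")

def build_prompt(memory: FileMemory, user_message: str) -> str:
    """Prepend the stored history so the model sees past turns on every call."""
    history = memory.load()
    return f"{history}User: {user_message}\nAssistant:"
```

Because the full history rides along in the prompt, "remember what I told you earlier?" works without any model-side state.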
The leap from "Chatbot" to "Agent" starts with a single primitive: Tool Calling.

Nyalalabs is excited to host a technical workshop featuring Abel Chin, an indie AI engineer and ex-ASEAN scholar. Abel’s journey is proof that it’s never too late to start building—having truly kicked off his builder journey in mid-2025, he’s now deep in the trenches of Agentic AI.

In this session, we aren’t just talking about theory. We are writing code. 💻

Key Takeaways:
Master the definition of strict JSON schemas to ensure model reliability.
Learn how to bridge the gap between LLM reasoning and API execution.
Walk away with a functional, runnable Python script that actually does things.

If you’re ready to break LLMs out of their text-only boxes, this is for you.

📅 When: April 17th, 8:00 PM - 10:00 PM
🔗 Register via Luma: https://luma.com/2z5ak2pj

#ArtificialIntelligence #LLMOps #Python #AgenticAI #TechCommunity #SingaporeAI #SoftwareDevelopment
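The workshop code isn't public, but the tool-calling primitive it describes can be sketched in plain Python: a strict JSON schema tells the model what it may call, and the application parses the model's JSON reply and executes the matching function. The schema and function below are invented for illustration:

```python
import json

# A strict JSON schema describing one tool the model is allowed to call.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}

def get_weather(city: str) -> str:
    """Stand-in for a real API call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def execute_tool_call(raw: str) -> str:
    """Bridge LLM reasoning to API execution: parse the model's JSON tool
    call, check required arguments, then run the Python function."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    args = call["arguments"]
    required = GET_WEATHER_SCHEMA["parameters"]["required"]
    missing = [k for k in required if k not in args]
    if missing:
        raise ValueError(f"missing required args: {missing}")
    return fn(**args)
```

In a real agent loop the function's return value is fed back to the model as the next message, which is where the "agent" behavior emerges.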
Most RAG systems work great in demos. They fall apart in production. Here are the real failure points nobody talks about enough:

𝗕𝗮𝗱 𝗰𝗵𝘂𝗻𝗸𝗶𝗻𝗴 𝗸𝗶𝗹𝗹𝘀 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗯𝗲𝗳𝗼𝗿𝗲 𝗶𝘁 𝘀𝘁𝗮𝗿𝘁𝘀. If your chunks split context awkwardly, the embedding never captures the right meaning. Garbage in, garbage out.

𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝘀 𝗻𝗼𝘁 𝗺𝗮𝗴𝗶𝗰. Embeddings fail on domain-specific language, acronyms, and numerical data. Your retriever returns confident, wrong results.

𝗧𝗼𝗽-𝗞 𝗶𝘀 𝗻𝗼𝘁 𝗮 𝘁𝘂𝗻𝗶𝗻𝗴 𝗮𝗳𝘁𝗲𝗿𝘁𝗵𝗼𝘂𝗴𝗵𝘁. Too low and you miss the answer. Too high and you dilute it. Most teams never revisit this after the first deploy.

𝗧𝗵𝗲 𝗟𝗟𝗠 𝗴𝗲𝘁𝘀 𝗯𝗹𝗮𝗺𝗲𝗱 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗲𝗿'𝘀 𝗺𝗶𝘀𝘁𝗮𝗸𝗲𝘀. When the final answer is wrong, engineers debug the prompt. The real issue is almost always upstream in the retrieval pipeline.

Building a RAG system takes a weekend. Making it reliable in production takes months.

What failure have you hit that surprised you the most?

#RAG #LLM #AIEngineering #LangChain #GenerativeAI #Python
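To make the first point concrete, here's a minimal sliding-window chunker. The sizes are illustrative, and real pipelines usually split on sentence or section boundaries rather than raw characters; the overlap is what keeps a fact from being cut exactly at a chunk edge:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50):
    """Split text into overlapping chunks so context that straddles a
    boundary still appears whole in at least one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```

If a sentence with the answer sits at character 195, a non-overlapping chunker splits it in half and neither embedding captures it; with overlap, the second chunk contains it intact.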
I’ve been building a side project: a web-based combat tracker for a custom TTRPG. You can check out the repo here: https://lnkd.in/dZrM-mhe.

I ran the full delivery loop, from requirements through tests, while tightening agentic pipelines so they could run on trial-tier models and still land close to what I'd get from heavier ones. The bet was that clearer prompts and smaller scopes would do more than burning tokens, and that's where most of the learning actually happened.

On the app itself: I drafted and refined requirements and scope in markdown in the repo (requirements-done, backlog notes) so changes could be checked against written intent. I used those pipelines to turn ideas into small, agent-ready stories. For design, Stitch let me iterate on layout and tone early; screens were then built as Flask templates and static assets so they still matched real routes, forms, and Socket.IO events.

The stack is Flask + SQLAlchemy + SQLite, with Socket.IO for live updates. I added pytest where it helped, plus browser automation only where it paid off, and a one-command DB init so a fresh clone isn’t blocked on missing tables. The Python backend is mine line by line, with AI used in a teaching / review mode rather than "write the app for me" mode, which for me beat a generic paid course.

This isn't evidence that agents replace engineers. It's one more example of using AI as leverage on a loop you still own. If you're trying something similar, the README and branch layout are meant to read without insider context; you're welcome to reuse the Skills in the repo if they help. If you’re using Cursor or similar tools, the practical suggestion is the same: treat AI as leverage on that loop, not as a substitute for thinking.

#Python #Flask #Cursor #AgenticAI #OpenSource #TTRPG
Give me your API key. I'll give you a chatbot in 10 lines of Python. Period.

For me? This is volunteer work. I genuinely enjoy building this stuff.

For you:
→ 10,000 conversations ≈ $1
→ 1 million tokens ≈ $0.15 (input) / $0.60 (output)

No complex setup. No months of dev. Just 5-10 lines of Python, your API key, and a chatbot that actually works.

What do you need? Your API key + a clear idea of what you want.
What do you get? A working chatbot. Period.

I'm not selling anything. Just helping folks who want to leverage AI without the headache. DM me if you want one built. 🚀

#OpenAI #Python #Chatbots #AI #Automation #NoFluff #GenerativeAI #APIIntegration #TechInnovation #LowCode #AIAgents #MachineLearning #DeveloperLife #BuildInPublic #CostEffectiveAI
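For reference, a "chatbot in ~10 lines" usually means something like the sketch below, built on the OpenAI Python SDK's chat-completions shape. The model name and system prompt are placeholders; the client is passed in as an argument so any compatible provider could be swapped in:

```python
def chat_once(client, history, user_message, model="gpt-4o-mini"):
    """One turn of a minimal chatbot: append the user message, call the
    chat-completions endpoint, append and return the assistant reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage (requires an OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   history = [{"role": "system", "content": "You are a helpful assistant."}]
#   print(chat_once(OpenAI(), history, "Hello!"))
```

The `history` list is the whole trick: because each call resends prior turns, the bot keeps context for the cost of the extra input tokens.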
🛠️ Giving LLMs Hands and Feet: Mastering #LangChain #Tools & #Agents

An #LLM on its own is a brilliant thinker, but it’s "locked in a room" with no way to touch the real world. It can tell you how to book a flight, but it can't actually book it—unless you give it Tools.

In my latest guide, I dive deep into #LangChain #Tools— the bridge between reasoning and action.

The "Tooling" Hierarchy:
#BuiltInTools: Ready-made connectors for Google Search, Wikipedia, and Python REPL. 🔌
#CustomTools (@tool): Turning any Python function into an LLM-callable action with just one decorator. 🐍
#StructuredTools (Pydantic): Production-grade tools with strict schema validation for complex APIs. ✅
#Toolkits: Grouping related actions (like a "Google Drive Toolkit") for modular agent design. 🧳

The Secret Sauce: Tool Binding & Calling
The magic isn't just in the tool itself; it's in the Reasoning Loop. The LLM decides which tool to use and what arguments to send. As developers, we execute that call and feed the result back, creating a loop of autonomous intelligence.

Are you building passive chatbots or active agents? Let's discuss the future of AI agency below! 👇

#GenerativeAI #LangChain #GenerativeAIUsingLangChain #AIAgents #Python #LLMOps #SoftwareEngineering #Automation #Pydantic #Innovation #Tools #ToolsCreation #ToolsBinding #ToolsCalling #ToolsExecution
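To see what the @tool decorator buys you, here is a toy re-implementation of the pattern in plain Python. This is not LangChain's real API (there you would import the decorator from `langchain_core.tools`); it just shows what gets captured, namely the function, its docstring as the LLM-facing description, and its argument names:

```python
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Toy version of the @tool idea: register the function along with
    the metadata an LLM needs to decide when and how to call it."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": inspect.getdoc(fn),
        "args": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

def run_tool(name, **kwargs):
    """What the agent loop does after the LLM picks a tool and its args."""
    return TOOL_REGISTRY[name]["fn"](**kwargs)
```

The registry's descriptions and argument lists are what get serialized into the prompt during tool binding; `run_tool` is the execution half of the loop, whose result is fed back to the model.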
LangChain Just Released Deep Agents — And It Changes How You Build AI Systems (April 2026)

"There’s a pattern I’ve watched repeat itself across almost every team that gets serious about building agents. It’s not that LangGraph is bad. It’s extremely powerful. But it’s a runtime — a low-level primitive — and most people are using it as if it’s an application framework. LangChain noticed this, and deepagents is their answer.

deepagents is a standalone Python library — installable with pip install deepagents — that sits on top of LangChain and LangGraph. The LangChain docs describe it as an "agent harness": it provides the same core tool-calling loop as other frameworks, but with a set of built-in capabilities baked in so you don't have to reinvent them.

The Five Capabilities That Make This Different
1. Built-in Planning with write_todos
2. A Virtual Filesystem
3. Subagent Spawning
4. Automatic Context Compression and Summarization
5. Long-term Memory Across Conversations"

https://lnkd.in/eZfihAmm
Generate datasets and use them immediately without breaking your workflow.

With the DataCreator AI SDK, you can:
• Generate conversational datasets directly in Python
• Save as JSONL (fine-tuning ready)
• Train your model in the same notebook

No switching tools. No manual data prep. You can go from: idea → dataset → training in one flow. That’s how it should be.

If you’re working with LLMs, try integrating this into your pipeline and see if it actually saves you time. Also, if something breaks or feels off, please add it under issues on our GitHub repository. Feedback (and bugs) are welcome.

Repo + details in the comments 👇

#ai #sdk #generativeai #syntheticdata
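The SDK's own API isn't shown in the post, but "fine-tuning ready JSONL" conventionally means one JSON object per line in the chat-messages format; a hand-rolled writer/reader for that shape looks like this (the exact format the SDK emits is an assumption on my part):

```python
import json

def save_jsonl(conversations, path):
    """Write conversations as JSONL: one {"messages": [...]} object per line."""
    with open(path, "w") as f:
        for messages in conversations:
            f.write(json.dumps({"messages": messages}) + "\n")

def load_jsonl(path):
    """Read a JSONL file back into a list of dicts, skipping blank lines."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

JSONL's line-per-record layout is what makes "train in the same notebook" convenient: most fine-tuning tooling streams it without loading the whole dataset into memory.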
We are on GitHub! Follow, Star, and Fork the DataCreator AI SDK and generate conversational datasets quickly, in the same notebook where your training code resides. #ai #generativeai #github
Hey everyone! Been diving deep into Retrieval-Augmented Generation (RAG) lately, and it's really exciting!

Think of it like this: AI models are amazing, but sometimes they need a little help remembering specific facts. RAG gives them that boost. Instead of just relying on what the AI already knows, RAG lets it search for relevant information from a database or website *while* it's answering your question. So, it's more accurate and can give you answers based on the latest data.

I'm using Python and Django to build systems that use RAG, and it's a great way to make AI tools more useful for everyone. It's also super helpful for automating tasks with N8N by connecting AI to real-world data.

What are your thoughts on RAG? Have you experimented with it? Let's chat!

#AI #Python #Django #RAG #N8N #Automation #SoftwareDevelopment
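The retrieve-then-answer flow described above can be sketched without any framework. Here a toy keyword-overlap retriever picks the most relevant document and builds the augmented prompt; real systems rank by embedding similarity instead, but the pipeline has the same shape:

```python
import re

def _words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (a toy stand-in
    for embedding similarity) and return the top_k matches."""
    q = _words(query)
    scored = sorted(documents, key=lambda d: len(q & _words(d)), reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The model then answers from the supplied context rather than from memory alone, which is what makes the output checkable against your own data.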