I turned Claude into my personal workflow automation engine using nothing but slash commands and markdown. The gist: you design complex workflows as custom Claude Code commands that guide you through multi-step processes, pulling data from some systems, updating others, and handling tasks that need human judgment, all without tab-switching into oblivion. Here’s how I’m building these:

1 - Sketch the workflow first
I use Mermaid diagrams. Not just because I love diagrams, but because I can feed them directly to the agent to help it orchestrate better. Visual structure = better execution.

2 - Break big workflows into Lego blocks
Learned this the hard way. I started with one massive workflow file. Total mess, impossible to test. Now I break things down. My ideation workflow? It's actually three smaller workflows that call each other:
- Gather insights and analytics, then prompt for ideas based on real problems
- Deep dive on the promising ones
- Design quick tests to de-risk before building
Way more flexible. Way less brittle.

3 - Keep steps dead simple
Each step does ONE thing. When a step starts doing two things, split it. That makes debugging 10x easier when something inevitably breaks.

4 - Structure everything with markdown & XML
Sounds nerdy, but it works. I use XML properties to annotate steps and shift the LLM's behavior for each one. Sometimes I want the LLM to act as a facilitator when executing a step, prompting me for input and guiding me toward a better result. Other times, I just want it to do something, like grab data from other systems.

5 - Let the LLM update its own workflows
Meta, but practical. Since everything's in Mermaid and structured text, I can ask it to refine its own workflow based on what's working. Saves me tons of time.

6 - Version control everything
Git isn't just for code. When you inevitably break a working workflow prompt at 4 pm on a Friday, you'll thank yourself for that commit history.

The result?
Over the past few weeks, we’ve run several ideation sessions and saved hours pulling data and creating tickets in Vistaly and GitHub. I’ve also started sharing these commands with customers and have started seeing them run the workflows and make their own updates. So cool. Who else is building custom workflows like this? What's the most complex thing you've automated with your LLM/MCP setup? Drop a comment or DM me if you want to swap workflow files. I'm building a small library of these things.
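As an illustration of the markdown-plus-XML idea in point 4, a workflow command file in this style could look something like the sketch below. The filename, the `<step>` tag, the `mode` attribute, and the step contents are all hypothetical, not the author's actual commands:

```markdown
<!-- .claude/commands/ideate.md (hypothetical example) -->
# Ideation workflow

<step mode="fetch">
Pull last week's analytics from Vistaly and summarize the top 3 user problems.
</step>

<step mode="facilitate">
Ask me one question at a time until we have 5 candidate ideas grounded in
those problems. Do not generate ideas yourself until I have answered.
</step>

<step mode="fetch">
Create one GitHub issue per idea we agree to pursue.
</step>
```

The point of the `mode` annotation is exactly the behavior shift described above: "facilitate" steps interview you, "fetch" steps just go do the work.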
Avoiding Busywork With LLM Tools
Explore top LinkedIn content from expert professionals.
Summary
Avoiding busywork with LLM tools means using large language models (LLMs), such as ChatGPT or Claude, to automate repetitive and tedious tasks—especially those that eat up valuable time and distract from meaningful work. By creating custom workflows, automating knowledge management, or delegating routine coding chores to AI, professionals can focus on more creative or strategic projects while AI handles the grunt work.
- Automate repetitive tasks: Let LLMs handle data extraction, document summarization, ticket creation, or basic information queries so you can spend less time on manual processes and more on impactful projects.
- Build smarter workflows: Use structured commands, visual diagrams, and modular approaches to design AI-driven workflows that guide you through complex tasks without constant context switching or manual oversight.
- Organize knowledge efficiently: Set up LLM-powered wikis or persistent knowledge bases to keep information up-to-date and accessible, reducing time spent searching for answers or updating documentation.
-
I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts, or manually providing context? The latest paradigm is Agent Driven Development, & here are some tips that have helped me get good at taming LLMs to generate high-quality code.

1. Clear & focused prompting
❌ "Add some animations to make the UI super sleek"
✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.

2. Keep it simple, stupid
❌ "Add a new page to manage user settings, also replace the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable, & also ensure the mobile view works, right now there is weird overlap"
✅ "Add a new page to manage user settings, ensure only editable settings can be changed."
Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.

3. Don't argue
❌ "No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!"
✅ "Instead of using package xyz, can you recreate the functionality using the standard library?"
When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-on prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track, & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.

4. Embrace agentic coding
AI coding assistants have access to a ton of different tools, can do a lot of reasoning on their own, & don't require nearly as much hand-holding. You may feel like a babysitter instead of a programmer at first, but your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.

5. Verify
With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the code generated is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.

6. Send options, thx
I had a boss who would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & is an opportunity to learn.

7. Have fun
I love coding; I've been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust, & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!
-
Focusing on AI’s hype might cost your company millions… (Here’s what you’re overlooking)

Every week, new AI tools grab attention, whether it’s copilot assistants or image generators. While helpful, these often overshadow the true economic driver for most companies: AI automation.

AI automation uses LLM-powered solutions to handle tedious, knowledge-rich back-office tasks that drain resources. It may not be as eye-catching as image or video generation, but it’s where real enterprise value will be created in the near term.

Consider ChatGPT: at its core there is a large language model (LLM) such as GPT-3 or GPT-4, designed to be a helpful assistant. These same models can be fine-tuned to perform a variety of tasks, from translating text to routing emails, extracting data, and more. The key is their versatility. By leveraging custom LLMs for complex automations, you unlock capabilities that simply weren’t feasible before. Tasks like looking up information, routing data, extracting insights, and answering basic questions can all be automated with LLMs, freeing up employees and generating ROI on your GenAI investment.

Starting with internal process automation is a smart way to build AI capabilities, resolve issues, and track ROI before external deployment. As infrastructure becomes easier to manage and costs decrease, the potential for AI automation continues to grow.

For business leaders, the first step is identifying bottlenecks that are tedious for employees and prone to errors. Then apply LLMs and AI solutions to streamline these operations. Remember, LLMs go beyond text: they can be used in voice, image recognition, and more. For example, Ushur is using LLMs to extract information from medical documents and feed it into backend systems efficiently, a task that was historically difficult for traditional AI systems. (Link in comments)

In closing, while flashy AI demos capture attention, real productivity gains come from automating tedious tasks. This is a straightforward way to see returns on your GenAI investment and justify it to your executive team.
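A routing task like the one described above is mostly prompt construction plus response parsing; the LLM call itself can be any provider. A minimal sketch, assuming a fixed label set and a JSON reply (the route names and function names are illustrative):

```python
import json

ROUTES = ["billing", "claims", "support", "sales"]

def build_routing_prompt(email_body: str) -> str:
    # Constrain the model to a fixed label set and a machine-readable reply.
    return (
        "Classify this email into exactly one of "
        f"{ROUTES}. Reply with JSON like {{\"route\": \"billing\"}}.\n\n"
        f"Email:\n{email_body}"
    )

def parse_route(llm_reply: str) -> str:
    # Fall back to a human queue whenever the model's reply is malformed
    # or names a route we don't recognize.
    try:
        route = json.loads(llm_reply)["route"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return "human-review"
    return route if route in ROUTES else "human-review"
```

The fallback queue is the important design choice: automation like this pays off only when malformed model output degrades to a human hand-off instead of a misrouted email.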
-
I used this guide to build 10+ AI agents. Here are my 10 actionable items:

1. Turn your agent into a note-taking machine
→ Dump plans, decisions, and results into state objects outside the context window
→ Use scratchpad files or runtime state that persists during sessions
→ Stop cramming everything into messages; treat state like external storage

2. Be ridiculously picky about what gets into context
→ Use embeddings to grab only the memories that matter for the current task
→ Keep simple rules files (like CLAUDE.md) that always load
→ Filter tool descriptions with RAG so agents aren't confused by irrelevant tools

3. Build a memory system that remembers useful stuff
→ Create semantic, episodic, and procedural memory buckets for facts, experiences, and instructions
→ Use knowledge graphs when embeddings fail for relationship-based retrieval
→ Avoid ChatGPT's mistake of pulling random location data into unrelated requests

4. Compress like your context window costs $1000 per token
→ Set auto-summarization at 95% context capacity, with no exceptions
→ Trim old messages with simple heuristics: keep recent, dump the middle
→ Post-process heavy tool outputs immediately; search results don't live forever

5. Split your agent into specialized mini-agents
→ Give each sub-agent one job and its own isolated context window
→ Hand off context with quick summaries, not full message histories
→ Run sub-agents in parallel when possible for isolated exploration

6. Sandbox the heavy stuff away from your LLM
→ Execute code in environments that isolate objects from context
→ Store images, files, and complex data outside the context window
→ Only pull summary info back; full objects stay in the sandbox

7. Make summarization smart, not just chronological
→ Train models specifically for agent context compression
→ Preserve critical decision points while compressing routine chatter
→ Use different strategies for conversations vs. tool outputs

8. Prune context like you're editing a novel
→ Implement trained pruners that understand relevance, not just recency
→ Filter based on task relevance while maintaining conversational flow
→ Adjust pruning aggressiveness based on task complexity

9. Monitor token usage like a hawk
→ Track exactly where tokens burn in your agent pipeline
→ Set real-time alerts when context utilization hits dangerous levels
→ Build dashboards correlating context management with success rates

10. Test everything, or admit you're just guessing
→ A/B test different context strategies and measure the performance differences
→ Create evaluation frameworks testing before/after context-engineering changes
→ Set up continuous feedback loops that auto-adjust context parameters

Last but not least, be open to new ideas and keep learning. Check out 50+ AI agent tutorials on my profile 👋
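The keep-recent/dump-the-middle heuristic from item 4 can be sketched in a few lines. The 4-characters-per-token estimate and the placeholder stub are stand-ins (a real agent would call a tokenizer and a summarizer model), not any specific framework's API:

```python
def estimate_tokens(messages):
    # Crude stand-in: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def trim_context(messages, budget_tokens, keep_recent=4):
    """Keep the system prompt and the most recent turns; replace the
    middle of the conversation with a one-line placeholder once the
    token budget is exceeded."""
    if estimate_tokens(messages) <= budget_tokens:
        return messages
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    dropped, kept = rest[:-keep_recent], rest[-keep_recent:]
    stub = {"role": "system",
            "content": f"[{len(dropped)} earlier messages summarized away]"}
    return system + [stub] + kept
```

In production the stub would hold an actual LLM-written summary of the dropped turns, which is where item 7's "smart, not just chronological" summarization comes in.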
-
Every time you ask your AI a question, it forgets everything it learned the last time you asked. That's how most AI knowledge systems work today. They're called RAG systems: Retrieval-Augmented Generation. You upload documents. The AI searches fragments on every query, builds an answer from scratch, closes the session. Next question? Same process. Zero accumulation.

Andrej Karpathy, the person who built Tesla's AI vision system and co-founded OpenAI, published something quietly last week that might change this fundamentally (https://lnkd.in/dNB5swS8). He calls it "LLM Wiki."

The concept: Instead of searching your documents every time, the LLM builds and maintains a persistent knowledge base, a structured wiki of markdown files with cross-references, concept pages, and contradiction flags. New document arrives? The LLM doesn't just index it. It reads it, extracts the key information, updates existing pages, notes where new data contradicts old claims, and strengthens the evolving synthesis.

His key insight: "The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping." Cross-references. Keeping summaries current. Noting when new data overrides old claims. That's exactly what nobody on your team has time for, and exactly what LLMs don't get bored doing.

What this looks like in practice:
Step 1 — Pick one knowledge domain. Not everything. One area where your team wastes time re-finding information: customer onboarding, product specs, compliance requirements.
Step 2 — Set up the structure. Claude Projects, or Obsidian + Claude Code, as the wiki layer. Raw sources go in one folder; the LLM-maintained wiki lives in another.
Step 3 — Feed sources one at a time. Let the LLM summarize, cross-reference, and file. You review, redirect, and ask follow-up questions. The wiki grows with every session.
Step 4 — Query against the wiki, not the raw documents. Answers are faster, more contextual, and cite specific pages.

The open-source project Graphify already implements this pattern, claiming 70x fewer tokens needed to answer questions compared to raw-folder RAG.

For now, this is still a developer concept. No plug-and-play enterprise solution exists yet. The benchmarks are anecdotal, not peer-reviewed. The wiki could drift or hallucinate if not curated. But the direction is clear: AI that builds knowledge, not AI that searches it.

Save this for when your team's knowledge base fails you again.
—
📌 Save this post for later
♻️ Share it to inspire your network
Follow Hartmut Hübner, PhD for AI insights that work.
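The "bookkeeping" being delegated here, filing a new summary and cross-linking it to existing pages, is mechanical enough to sketch. The page structure below is an illustration only, not Karpathy's or Graphify's actual format:

```python
def file_into_wiki(wiki: dict, title: str, summary: str) -> dict:
    """wiki maps page title -> {"summary": str, "links": set of titles}.
    Adds the new page and cross-links any existing page whose title is
    mentioned in the new summary (the tedious part a human skips)."""
    links = {t for t in wiki if t.lower() in summary.lower()}
    wiki[title] = {"summary": summary, "links": links}
    for t in links:
        # Back-reference from the cited page to the new one.
        wiki[t]["links"].add(title)
    return wiki
```

In the real pattern an LLM would also rewrite the cited pages and flag contradictions; the sketch just shows why the link maintenance is a bookkeeping job rather than a thinking job.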
-
I’ve been building and managing data systems at Amazon for the last 8 years. Now that AI is everywhere, the way we work as data engineers is changing fast. Here are 5 real ways I (and many in the industry) use LLMs to work smarter every day as a Senior Data Engineer:

1. Code Review and Refactoring
LLMs help break down complex pull requests into simple summaries, making it easier to review changes across big codebases. They can also identify anti-patterns in PySpark, SQL, and Airflow code, helping you catch bugs or risky logic before it lands in prod. If you’re refactoring old code, LLMs can point out where your abstractions are weak or naming is inconsistent, so your codebase stays cleaner as it grows.

2. Debugging Data Pipelines
When Spark jobs fail or SQL breaks in production, LLMs help translate ugly error logs into plain English. They can suggest troubleshooting steps or highlight what part of the pipeline to inspect next, helping you zero in on root causes faster. If you’re stuck on a recurring error, LLMs can propose code-level changes or optimizations you might have missed.

3. Documentation and Knowledge Sharing
Turning notebooks, scripts, or undocumented DAGs into clear internal docs is much easier with LLMs. They can help structure your explanations, highlight the “why” behind key design choices, and make onboarding or handover notes quick to produce. Keeping platform wikis and technical documentation up to date becomes much less of a chore.

4. Data Modeling and Architecture Decisions
When you’re designing schemas, deciding on partitioning, or picking between technologies (like Delta, Iceberg, or Hudi), LLMs can offer quick pros and cons, highlight trade-offs, and provide code samples. If you need to visualize a pipeline or architecture, LLMs can help you draft Mermaid or PlantUML diagrams for clearer communication with stakeholders.

5. Cross-Team Communication
When collaborating with PMs, analytics, or infra teams, LLMs help you draft clear, focused updates, whether it’s a Slack message, an email, or a JIRA comment. They’re useful for summarizing complex issues, outlining next steps, or translating technical decisions into language that business partners understand.

LLMs won’t replace data engineers, but they’re rapidly raising the bar for what you can deliver each week. Start by picking one recurring pain point in your workflow, then see how an LLM can speed it up. This is the new table stakes for staying sharp as a data engineer.
-
Want to use GPT or Claude to help with something complicated and loosely defined, like building a comms plan for a company-wide initiative? Here’s a pattern that leveled up my prompt-fu like there's no tomorrow.

✅ Step 1: Set the stage, don’t trigger the model (yet)
“I’m working on [insert project]. I’ll upload the background material. Don’t do anything until I say I’m ready and give you further instructions.”
This gives the model time to ingest, not assume. If you don't do this, it’ll start guessing what you want, and usually guess wrong. This saves me tons of backtracking.

✅ Step 2: Kick off the interaction with clear context and a defined role
“You’re an internal comms consultant helping the Chief Product & Tech Officer of a public company roll out a major change initiative. Interview me one question at a time until you’re 95% sure you have what you need.”
This flips the default dynamic. Instead of hallucinating, the model starts by asking smart, clarifying questions, and only switches to generation once it knows enough to do the job right.

This simple two-step pattern has leveled up how I work with LLMs, especially on open-ended, executive-level tasks. 🚀 It’s cut out something like 95% of my frustration with these tools. Curious if others are doing something similar, or better? What’s your go-to prompting move? #promptengineering #worksmarter #LLM #AIworkflow
-
I work at Airbnb, where I write 99% of my code with LLMs. One thing you need to understand is they only write shit code if you let them.

When you're building high-quality production software, writing code is always the 𝗹𝗮𝘀𝘁 𝘀𝘁𝗲𝗽. Your first step is to understand the problem that needs to be solved. Then ideate solutions, consider alternatives, explore tradeoffs, and refine your exploration into a concrete plan. Even as you implement the plan task by task, you should not be coding stream-of-consciousness. That leads to bad code design. You should be considering the architecture of the code and its abstractions, and coming up with a clean way to write it. Only after all this upfront design and planning work do you start manually typing code with your fingers.

That last step is not necessary to do manually anymore. Whenever I think of coding, I immediately reach for an LLM, because I use it like a power tool. A carpenter does not leave their power drill on the table when they need to drive a screw. Why would you not use an LLM to execute on your plan? You are in the driver's seat, providing direct technical guidance at every step. 𝗬𝗼𝘂𝗿 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲 𝗮𝗻𝗱 𝘀𝗸𝗶𝗹𝗹 𝗹𝗲𝘃𝗲𝗹 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗶𝗺𝗽𝗮𝗰𝘁 𝗵𝗼𝘄 𝗴𝗼𝗼𝗱 𝘁𝗵𝗲 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗶𝘀. No, this is not slower than doing it without LLMs.

You should also use LLMs as power tools for research, planning, and architecture. This will get you even higher-quality software than without them. It allows you to go far beyond due diligence and truly explore, analyze, and refine your design fully before a single line of code is written. I use the following workflow to research, design, and plan the feature I want to build as a conversation, which then gets converted into a formal spec that the LLM can implement task by task:

1. Explain the problem to the LLM.
2. Give it your ideas for the initial solution.
3. Tell it explicitly: “Propose an approach first. Show alternatives to my solution, highlight tradeoffs. Do not write code until I approve.”
4. Review the proposal, poke holes in it, iterate.
5. Tell it to write the plan to disk as a spec so you can hand off to another session later.
6. Lastly, let it generate code.

This is an excerpt from my article “Writing High Quality Production Code With LLMs Is A Solved Problem”; the full article is here on LinkedIn: https://lnkd.in/d3v-i9iK
-
LLMs are the single fastest way to make yourself indispensable and give your team a 30‑percent productivity lift. Here is the playbook.

Build a personal use‑case portfolio
Write down every recurring task you handle for clients or leaders: competitive‑intelligence searches, slide creation, meeting notes, spreadsheet error checks, first‑draft emails. Rank each task by time cost and by the impact of getting it right. Start automating the items that score high on both.

Use a five‑part prompt template
Role, goal, context, constraints, output format. Example: “You are a procurement analyst. Goal: draft a one‑page cost‑takeout plan. Context: we spend 2.7 million dollars on cloud services across three vendors. Constraint: plain language, one paragraph max. Output: executive‑ready paragraph followed by a five‑row table.”

Break big work into a chain of steps
Ask first for an outline, then for section drafts, then for a fact‑check. Steering at each checkpoint slashes hallucinations and keeps the job on track.

Blend the model with your existing tools
Paste the draft into Excel and let the model write formulas, then pivot. Drop a JSON answer straight into Power BI. Send the polished paragraph into PowerPoint. The goal is a finished asset, not just a wall of text.

Feed the model your secret sauce
Provide redacted samples of winning proposals, your slide master, and your company style guide. The model starts producing work that matches your tone and formatting in minutes.

Measure the gain and tell the story
Track minutes saved per task, revision cycles avoided, and client feedback. Show your manager that a former one‑hour job now takes fifteen minutes and needs one rewrite instead of three. Data beats anecdotes.

Teach the team
Run a ten‑minute demo in your weekly stand‑up. Share your best prompts in a Teams channel. Encourage colleagues to post successes and blockers. When the whole team levels up, you become known as the catalyst, not the cost‑cutting target.

If every person on your team gained back one full day each week, what breakthrough innovation would you finally have the bandwidth to launch? What cost savings could you achieve? What additional market share could you gain?
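The five-part template above is easy to standardize as a small helper so every prompt in your team's library has the same shape. A sketch; the field order follows the playbook, the function name is illustrative:

```python
def build_prompt(role, goal, context, constraints, output_format):
    # Five parts, always in the same order:
    # role, goal, context, constraints, output format.
    return "\n".join([
        f"You are {role}.",
        f"Goal: {goal}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output: {output_format}",
    ])

# The procurement example from the playbook:
prompt = build_prompt(
    "a procurement analyst",
    "draft a one-page cost-takeout plan",
    "we spend 2.7 million dollars on cloud services across three vendors",
    "plain language, one paragraph max",
    "executive-ready paragraph followed by a five-row table",
)
```

Keeping the template in code rather than in people's heads is also what makes the "share your best prompts in a Teams channel" step practical: everyone fills in the same five slots.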
-
I just vibe-coded an LLM-powered AI/ML conference deadline tracker website.
🔗 Link: https://lnkd.in/g9DjgB6d

After wrapping up my ICML submission a couple of days ago, I realized I had wasted a lot of time repeating the same workflow: finding submission links, checking page limits and LaTeX templates, figuring out when the next conference is coming even if deadlines haven't been announced yet, and checking key dates by clicking through conference websites or Googling. So I built a site that uses agentic LLMs with Gemini 2.0 Flash to crawl and curate conference data automatically. Everything is open source (except my API key 😅). I hope it helps the community.

You can click on each card to see more details like the page limit, LaTeX template, author guidelines, and desk-rejection reasons. You can also view deadlines in a calendar, which gives you a bird's-eye view, and filter conferences by research area.

Before building this, I did look at two existing deadline trackers but felt they did not do what I was expecting: 1) @huggingface AI Deadlines: https://lnkd.in/gt-vqHdu 2) https://aideadlin.es/. For example, they do not show upcoming conferences that haven't been announced yet, even though estimating timelines from last year is often quite accurate and very helpful for planning. Also, for useful links like page limits or LaTeX templates, you still have to search across different pages or Google to find them. And I didn't like their UI.

Under the hood, this avoids native NLP-parsing-based scrapers and instead uses an LLM to do the job. Conference websites all follow different formats, and sometimes even break their own patterns. LLMs handle this noisy situation and extract the useful information. It currently uses Gemini 2.0 Flash, but in the future I will move to OpenRouter free models to cut costs. Also, GitHub Actions run weekly to update the conference deadlines.

If you spot missing conferences, wrong info, or want to improve things, feedback and contributions are very welcome 🙌
🔗 GitHub: https://lnkd.in/gtjDFiv2
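For reference, a weekly GitHub Actions refresh like the one described typically looks something like this. A generic sketch: the script path, schedule, and secret name are assumptions, not this repo's actual workflow:

```yaml
# .github/workflows/refresh-deadlines.yml (hypothetical)
name: refresh-deadlines
on:
  schedule:
    - cron: "0 6 * * 1"    # every Monday at 06:00 UTC
  workflow_dispatch: {}     # allow manual runs too
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/update_conferences.py   # LLM-backed crawler
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      - run: |
          git config user.name "deadline-bot"
          git config user.email "bot@users.noreply.github.com"
          git commit -am "weekly deadline refresh" || echo "no changes"
          git push
```

Keeping the API key in repository secrets is what lets the rest of the project stay open source.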