✨ 𝐄𝐯𝐞𝐫𝐲𝐨𝐧𝐞 𝐭𝐚𝐥𝐤𝐬 𝐚𝐛𝐨𝐮𝐭 “𝐰𝐫𝐢𝐭𝐢𝐧𝐠 𝐛𝐞𝐭𝐭𝐞𝐫 𝐩𝐫𝐨𝐦𝐩𝐭𝐬,” 𝐛𝐮𝐭 𝐫𝐞𝐚𝐥 𝐩𝐫𝐨𝐦𝐩𝐭𝐬 𝐡𝐚𝐯𝐞 𝐚𝐧 𝐚𝐫𝐜𝐡𝐢𝐭𝐞𝐜𝐭𝐮𝐫𝐞 𝐦𝐨𝐬𝐭 𝐧𝐞𝐯𝐞𝐫 𝐬𝐞𝐞. A good prompt isn’t just a clear sentence, it’s a set of instructions you quietly engineer behind the scenes. Here’s my go-to checklist for prompts that actually deliver: 1. Set the role (Who’s answering?) Are you asking for advice from a career coach or an output from a Python script? Assigning a role instantly upgrades the relevance and depth of the answer. 2. Define the goal (What do you want?) The best prompts spell out what “useful” looks like. Do you want a summary, sample code, a strategic plan, or just raw ideas? Be precise about the win. 3. Add context (What’s the backstory?) Even top models can’t read your mind. Two sentences of context, why you’re asking, what’s happened already, and who’s involved, make the answer 10x smarter. 4. Set constraints (Boundaries, not handcuffs) Short? Formal? Bullet points only? Want to avoid clichés or “as an AI language model” disclaimers? State your non-negotiables up front. 5. Give feedback & iterate The real magic is in versions 2, 3, and 7. Tweak the prompt, rerun it, tighten up until it nails what you need. Don’t settle for the first swing. One common misconception is that better prompts are always longer however, it is not always the case. The best are well-framed, not just wordy. Prompting isn’t about scripting the perfect sentence, it’s about thinking like a designer and building clarity before chasing creativity. What’s one prompt tweak that’s changed your results? #AI #productivity #LLM
Key Prompting Strategies for Small LLMs
Explore top LinkedIn content from expert professionals.
Summary
Key prompting strategies for small LLMs focus on designing clear instructions and frameworks that guide AI models to generate reliable, meaningful responses. Small language models (LLMs) can provide smarter answers when prompts are thoughtfully structured, making the most of their limited computing capacity.
- Assign a role: Specify who or what the model should pretend to be, such as a teacher or a planner, to get more relevant and focused answers.
- Add examples: Include a few sample inputs and outputs within your prompt to help the model understand the kind of response you want.
- Test and refine: Try different versions of your prompt and make adjustments until the responses meet your needs, especially for complex tasks.
-
-
LLMs don’t just respond to What you ask—they respond to How you ask. If you’re still relying on basic prompting, you’re leaving a lot of performance on the table. Here’s how people are systematically optimizing prompts for higher accuracy, robustness, and efficiency in AI apps: ⭐ Few-Shot Prompting – Improve precision in classification tasks by including example inputs/outputs (e.g., for detecting jailbreak attempts, spam, or misinformation). ⭐ Meta Prompting – Use an LLM to refine its own prompts (e.g., "Given this input/output, how would you rewrite this prompt for better performance?"). This works especially well for text generation and retrieval tasks. ⭐ Gradient Prompt Optimization (GPO) – Treat prompts like trainable parameters, embedding them and optimizing with loss gradients. Think of it as fine-tuning without modifying the model itself—a game-changer for low-resource AI applications. ⭐ Prompt Optimization Libraries – Tools like DSPy automate prompt refinement, evaluating variations systematically. For production AI systems, this makes tuning scalable. The Takeaway? Prompt Optimization is a Continuous Process Real-world data shifts. New failure modes emerge. Just like model retraining, prompts need continuous iteration. What’s your go-to method for improving AI prompts?
-
In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs’ performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques. Yet, while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y In this AI Tidbits Deep Dive, I outline six of the best and recent prompting methods: (1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements (2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the “Take a deep breath” instruction that improved LLMs’ performance by 9%. (3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy (4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM (5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning (6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential. Full blog post https://lnkd.in/g7_6eP6y
-
I consider prompting techniques some of the lowest-hanging fruits one can use to achieve step-change improvement with their model performance. This isn’t to say that “typing better instructions” is that simple. As a matter of fact, it can be quite complex. Prompting has evolved into a full discipline with frameworks, reasoning methods, multimodal techniques, and role-based structures that dramatically change how models think, plan, analyse, and create. This guide that breaks down every major prompting category you need to build powerful, reliable, and structured AI workflows: 1️⃣ Core Prompting Techniques The foundational methods include few-shot, zero-shot, one-shot, style prompts. They teach the model patterns, tone, and structure. 2️⃣ Reasoning-Enhancing Techniques Approaches like Chain-of-Thought, Graph-of-Thought, ReAct, and Deliberate prompting help LLMs reason more clearly, avoid shortcuts, and solve complex tasks step-by-step. 3️⃣ Instruction & Role-Based Prompting Define the task clearly or assign the model a “role” such as planner, analyst, engineer, or teacher to get more predictable, domain-focused outputs. 4️⃣ Prompt Composition Techniques Methods like prompt chaining, meta-prompting, dynamic variables, and templates help you build multi-step, modular workflows used in real agent systems. 5️⃣ Tool-Augmented Prompting Combine prompts with vector search, retrieval (RAG), planners, executors, or agent-style instructions to turn LLMs into decision-making systems rather than passive responders. 6️⃣ Optimization & Safety Techniques Guardrails, verification prompts, bias checks, and error-correction prompts improve reliability, factual accuracy, and trustworthiness. These are essential for production systems. 7️⃣ Creativity-Enhancing Techniques Analogy prompts, divergent prompts, story prompts, and spatial diagrams unlock creative reasoning, exploration, and alternative problem-solving paths. 8️⃣ Multimodal Prompting Use images, audio, video, transcripts, diagrams, code, or mixed-media prompts (text + JSON + tables) to build richer and more intelligent multimodal workflows. Modern prompting has fully evolved to designing thinking systems. When you combine reasoning techniques, structured instructions, memory, tools, and multimodal inputs, you unlock a level of performance that avoids costly fine tuning methods. What best practices have you used when designing prompts for your LLM? #LLM
-
Small variations in prompts can lead to very different LLM responses. Research that measures LLM prompt sensitivity uncovers what matters, and the strategies to get the best outcomes. A new framework for prompt sensitivity, ProSA, shows that response robustness increases with factors including higher model confidence, few-shot examples, and larger model size. Some strategies you should consider given these findings: 💡 Understand Prompt Sensitivity and Test Variability: LLMs can produce different responses with minor rephrasings of the same prompt. Testing multiple prompt versions is essential, as even small wording adjustments can significantly impact the outcome. Organizations may benefit from creating a library of proven prompts, noting which styles perform best for different types of queries. 🧩 Integrate Few-Shot Examples for Consistency: Including few-shot examples (demonstrative samples within prompts) enhances the stability of responses, especially in larger models. For complex or high-priority tasks, adding a few-shot structure can reduce prompt sensitivity. Standardizing few-shot examples in key prompts across the organization helps ensure consistent output. 🧠 Match Prompt Style to Task Complexity: Different tasks benefit from different prompt strategies. Knowledge-based tasks like basic Q&A are generally less sensitive to prompt variations than complex, reasoning-heavy tasks, such as coding or creative requests. For these complex tasks, using structured, example-rich prompts can improve response reliability. 📈 Use Decoding Confidence as a Quality Check: High decoding confidence—the model’s level of certainty in its responses—indicates robustness against prompt variations. Organizations can track confidence scores to flag low-confidence responses and identify prompts that might need adjustment, enhancing the overall quality of outputs. 📜 Standardize Prompt Templates for Reliability: Simple, standardized templates reduce prompt sensitivity across users and tasks. For frequent or critical applications, well-designed, straightforward prompt templates minimize variability in responses. Organizations should consider a “best-practices” prompt set that can be shared across teams to ensure reliable outcomes. 🔄 Regularly Review and Optimize Prompts: As LLMs evolve, so may prompt performance. Routine prompt evaluations help organizations adapt to model changes and maintain high-quality, reliable responses over time. Regularly revisiting and refining key prompts ensures they stay aligned with the latest LLM behavior. Link to paper in comments.
-
I recently went through the Prompt Engineering guide by Lee Boonstra from Google, and it offers valuable, practical insights. It confirms that getting the best results from LLMs is an iterative engineering process, not just casual conversation. Here are some key takeaways I found particularly impactful: 1. 𝐈𝐭'𝐬 𝐌𝐨𝐫𝐞 𝐓𝐡𝐚𝐧 𝐉𝐮𝐬𝐭 𝐖𝐨𝐫𝐝𝐬: Effective prompting goes beyond the text input. Configuring model parameters like Temperature (for creativity vs. determinism), Top-K/Top-P (for sampling control), and Output Length is crucial for tailoring the response to your specific needs. 2. 𝐆𝐮𝐢𝐝𝐚𝐧𝐜𝐞 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬: Zero-shot, One-shot, and Few-shot prompting aren't just academic terms. Providing clear examples within your prompt is one of the most powerful ways to guide the LLM on desired output format, style, and structure, especially for tasks like classification or structured data generation (e.g., JSON). 3. 𝐔𝐧𝐥𝐨𝐜𝐤𝐢𝐧𝐠 𝐑𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠: Techniques like Chain of Thought (CoT) prompting – asking the model to 'think step-by-step' – significantly improve performance on complex tasks requiring reasoning (logic, math). Similarly, Step-back prompting (considering general principles first) enhances robustness. 4. 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 𝐚𝐧𝐝 𝐑𝐨𝐥𝐞𝐬 𝐌𝐚𝐭𝐭𝐞𝐫: Explicitly defining the System's overall purpose, providing relevant Context, or assigning a specific Role (e.g., "Act as a senior software architect reviewing this code") dramatically shapes the relevance and tone of the output. 5. 𝐏𝐨𝐰𝐞𝐫𝐟𝐮𝐥 𝐟𝐨𝐫 𝐂𝐨𝐝𝐞: The guide highlights practical applications for developers, including generating code snippets, explaining complex codebases, translating between languages, and even debugging/reviewing code – potential productivity boosters. 6. 𝐁𝐞𝐬𝐭 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞𝐬 𝐚𝐫𝐞 𝐊𝐞𝐲: Specificity: Clearly define the desired output. Ambiguity leads to generic results. Instructions > Constraints: Focus on telling the model what to do rather than just what not to do. Iteration & Documentation: This is critical. Documenting prompt versions, configurations, and outcomes (using a structured template, like the one suggested) is essential for learning, debugging, and reproducing results. Understanding these techniques allows us to move beyond basic interactions and truly leverage the power of LLMs. What are your go-to prompt engineering techniques or best practices? Let's discuss! #PromptEngineering #AI #LLM
-
The SUPER prompt vs prompt chaining for LLM agents Imagine you are building a customer support agent to handle a wide range of customer inquiries. You might want features like: - Ability to do Q&A over a knowledge base (RAG) - Access to a bunch of tools like changing a user address, processing a refund, canceling an order, etc. - Ability to partake in friendly chit chat - Have guardrails in place to stay professional and not mention competitors There are two routes an AI Engineer might take to build this agent: The SUPER Prompt: Put everything in a single system prompt. The LLM gets all the tools, guardrails, and instructions to handle everything the end user is going to throw at it. Prompt Chaining: Decompose the agent into smaller prompts and chain them together. Most commonly this involves an intent router that classifies the user's message and invokes prompts tailored for each specific intent. Guardrails may also be broken out and run in parallel. Which is better? What I have seen work best in production is to take a prompt chaining approach: 1. Prompt chaining gives you the ability to improve different parts of the Agent independently. Changes to a prompt can often have subtle and surprising effects. In a SUPER prompt, you might add an instruction for a new tool, but that could have unintended effects on some other aspect of the agent. As the SUPER prompt grows in complexity, it becomes more brittle and challenging to work with. 2. Different parts of the prompt chain can be run by different models, providing more flexibility in optimizing cost and latency. SUPER prompts usually require using frontier level models due to the complexity and large number of instructions. This comes with increased cost and latency. 3. Prompt chaining provides natural places for observability. At each node in the chain, you can look at what the LLM is outputting. This can help inform debugging and further improvements. With a SUPER prompt, you may only see the initial inputs and final outputs. In defense of the SUPER prompt, it is a more straightforward approach with less overhead. As LLMs become more capable, longer prompts will be less brittle, which would diminish some of the benefits of prompt chaining. For now, I recommend teams that have large, growing prompts - particularly those for agentic systems - consider moving to a prompt chaining approach. #ai #genai #llm
-
𝗣𝗿𝗼𝗺𝗽𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗜𝘀𝗻’𝘁 𝗗𝗲𝗮𝗱—𝗜𝘁’𝘀 𝗚𝗿𝗼𝘄𝗶𝗻𝗴 𝗨𝗽 I keep hearing: “Prompt engineering is over.” With today’s powerful models, why bother writing careful instructions? But here’s the thing: prompt engineering still matters. It’s just changed completely. 👉 𝗥𝗲𝗺𝗲𝗺𝗯𝗲𝗿 𝘁𝗵𝗲 𝗛𝗮𝗰𝗸𝘀? (𝗕𝗮𝗰𝗸 𝗶𝗻 2021–2022): jailbreaks (“act as DAN”), role-playing (“You are a CEO”), “Let’s think step by step,” and few-shot priming. These went viral because they worked—but they also showed how fragile early models were. 👉 𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴 𝗡𝗼𝘄: 𝗥𝗲𝗮𝗹 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 (𝗧𝗼𝗱𝗮𝘆) • System prompts & guardrails for tone + safety • Structured instructions & JSON schemas • Orchestration of RAG, APIs, and agents • Evaluation & versioning (treating prompts like code) 𝗧𝗵𝗲 𝘀𝗵𝗶𝗳𝘁: 𝗳𝗿𝗼𝗺 𝗯𝗿𝗶𝘁𝘁𝗹𝗲 𝗵𝗮𝗰𝗸𝘀 → 𝗿𝗲𝗽𝗿𝗼𝗱𝘂𝗰𝗶𝗯𝗹𝗲 𝗱𝗲𝘀𝗶𝗴𝗻 𝗮𝗿𝘁𝗶𝗳𝗮𝗰𝘁𝘀 𝗕𝗲𝘀𝘁 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲: 𝗙𝗼𝗿𝗰𝗲𝗱 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲: To make a prompt reliable, you force it into a structure. You don't just ask a question. You make the output a contract. Look at this simple JSON example. We're not just asking for a category. We are demanding the output be in this specific, easy-to-use format. { "Instruction": "Classify the user's request using the tags below.", "Context": "The user is asking for the gold price.", "Schema": { "output_category": ["Finance", "News", "Support"], "confidence": "A number from 0.0 to 1.0" }, "Refusal Criteria": "Do not answer questions about illegal stuff." } 𝗔𝗻𝗱 𝘄𝗵𝗮𝘁’𝘀 𝗻𝗲𝘅𝘁? • Prompting becomes part of the LLM stack: generating synthetic data, orchestrating agents, and being tested like real code. • Prompts make data for fine-tuning. A good prompt can help you generate high-quality synthetic data. That data can train a smaller, cheaper model to do the job. • It's the conductor for tools. Prompts are what tell the model when to stop talking and start searching a database, or when to call an agent. 𝗣𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝘀𝗻’𝘁 𝗱𝗲𝗮𝗱. 𝗜𝘁’𝘀 𝘀𝗶𝗺𝗽𝗹𝘆 𝗴𝗿𝗼𝘄𝗶𝗻𝗴 𝘂𝗽—𝗮𝗻𝗱 𝗴𝗲𝘁𝘁𝗶𝗻𝗴 𝗮 𝗿𝗲𝗮𝗹 𝗷𝗼𝗯. We stopped looking for magic tricks. Now we're focused on building reliable, safe systems that actually scale. The teams that do this are the ones who will build the best LLM products.
-
🧠 Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one. Here are the three key components of an effective 🔁 Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely. 🧾 Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved." 🧰 Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing. 🧾 Example:" If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers." 🧠 Planning: Prompt it to plan before actions and reflect afterward. Prevent reactive tool calls with no strategy. 🧾 Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving." 💡 I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability. 👉 Takeaway: Agentic prompting turns a passive assistant into an active problem solver. The difference is in the details. Are you using these techniques in your prompts? I would love to hear what's working for you; leave a comment, or let's connect! #PromptEngineering #AgenticPrompting #LLM #AIWorkflow
-
Here's a practical hack for improving your prompt engineering skills: Prompting is messy at first. Often, you start with something vague, then continuously refine it, correcting the model, adding context mid-conversation, and clarifying your intent until you finally get the output you're looking for. But there's a step most people skip: Once the model finally gets to the desired output, ask it: “Based on everything we've worked through in this conversation, what's the best prompt I could have given you upfront to reach this result immediately?” Then test this refined prompt in a fresh chat. It might not be perfect yet, but it typically gets you much closer. Repeat this feedback loop until your refined prompt reliably produces the output each time. I used this process extensively while preparing live trainings for the Australian Water School, where multiple participants had to follow along with me and we all needed to arrive at a very similar output. It's also how I refined prompts for examples in my "ChatGPT for Water Resources Engineers" course. Ultimately, this approach pays off most when you're building reusable prompts or instructions for custom GPTs. You will need to embrace the iterative nature of working with AI. Provide feedback and additional context, working collaboratively with AI rather than just treating it as a tool. After doing this several times, you will have a better understanding of the information required by the LLM and your initial prompts will get much better, and the need for iteration will decrease.
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Employee Experience
- Healthcare
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development