How to Improve AI User Experience with Prompt Engineering

Explore top LinkedIn content from expert professionals.

  • View profile for Edward Frank Morris
    Edward Frank Morris is an Influencer

    Forbes. LinkedIn Top Voice for AI.

    35,759 followers

    A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?” And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts. So if you’ve ever felt that ChatGPT, Copilot, Gemini, or any of those AI agents are more "artificial" than "intelligent," rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void.
    1. Lead with intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…” This orients the model instantly.
    2. Scope and constraints first. Define boundaries up front: length limits, style guides, data sources, even forbidden terms.
    3. Format your output. Specify a JSON schema, markdown headers, or table columns. Models handle explicit structure better than free-form prose.
    4. Provide minimal, high-quality examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time.
    5. Isolate subtasks. Break complex workflows into discrete prompts (prompt chaining). One prompt per action: analyze, summarize, critique, then assemble.
    6. Anchor with delimiters. Use triple backticks or XML tags to fence inputs; it dramatically cuts hallucinations.
    7. Inject domain signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth.
    8. Iterate rapidly. Version your prompts like code. A/B test variations and track which phrasing yields the cleanest output.
    9. Tune the “why.” Always ask for reasoning steps. Always.
    10. Template and automate. Build parameterized prompt templates in your repo.
    Still with me? Good. Bonus tips:
    1. Token economy awareness. Place critical context in the first 200 tokens; anything beyond 1,500 risks context drift.
    2. Temperature vs. prompt depth. Higher temperature amplifies creativity, but only if your prompt is concise. Otherwise you get noise.
    3. Use a “chain of questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus.
    4. Mirror the LLM’s own language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts.
    5. Treat prompts as living docs. Embed metrics in comments: note output quality, error rates, hallucination frequency. Keep iterating until the ROI justifies the effort.
    And finally, the bit no one wants to hear: you get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic. You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️
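Several of these fundamentals (lead with intent, scope and constraints, output format, delimiters, templating) can be combined into a single parameterized template. A minimal Python sketch; the role, keys, and XML tag names are illustrative choices, not anything prescribed by the post:

```python
def build_prompt(role, task, source_text, max_words=150):
    """Parameterized prompt template (fundamental 10) that also applies
    intent (1), constraints (2), output format (3), and delimiters (6)."""
    return (
        f"You are {role}.\n"                       # 1. lead with intent
        f"{task}\n"
        f"Constraints: no more than {max_words} words; "
        "use only the fenced text below.\n"        # 2. scope & constraints first
        'Respond as a JSON object with keys "summary" and "key_points".\n'
        f"<document>\n{source_text}\n</document>"  # 6. anchor with delimiters
    )

prompt = build_prompt(
    role="an expert financial analyst",
    task="Summarize this quarterly report excerpt.",
    source_text="Revenue grew 12% year over year...",
)
```

Versioning a function like this in your repo (fundamental 8) is what makes A/B testing phrasings practical.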

  • View profile for Laura Jeffords Greenberg

    General Counsel at Worksome | Building AI-Native Legal Functions | Board Member & Speaker

    18,314 followers

    Most people don’t realize: AI can coach you on how to prompt it better. Here’s how to turn AI into your personal prompt coach, so you get better results and learn how to use AI faster. Try this two-step fix:
    1. State your goal and context.
    2. Ask one of these questions:
    ➡️ "How would you rewrite my prompt to get more [specific, creative, detailed, etc.] responses?"
    ➡️ "If you were trying to get [desired outcome], how would you modify this prompt?"
    ➡️ "If this were your prompt, what would you change to make it more effective?"
    ➡️ "What elements are missing from my prompt that would help you generate better responses?"
    ➡️ "How might you enhance this prompt to avoid common pitfalls or misinterpretations?"
    ➡️ Or simply: "Improve my prompt."
    Before: "Explain force majeure clauses."
    After: "Analyze how courts in California have interpreted force majeure clauses in commercial leases since COVID-19, focusing on what constitutes 'unforeseeable circumstances' and the burden of proof required to invoke these provisions."
    The difference? A broad, non-jurisdiction-specific, superficial overview vs. actionable legal insights for commercial leases in California. Not only will you get better outcomes, but you will learn how to improve your prompting in the process. What are your go-to strategies or favorite prompts to optimize AI responses?
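The two-step fix can be wrapped in a small helper so the coaching question travels with every draft prompt. A minimal Python sketch; the function and variable names are illustrative:

```python
COACH_QUESTIONS = [
    "How would you rewrite my prompt to get more {quality} responses?",
    "If this were your prompt, what would you change to make it more effective?",
    "What elements are missing from my prompt that would help you "
    "generate better responses?",
]

def coaching_request(goal, context, draft_prompt, quality="specific"):
    """Two-step fix: (1) state your goal and context,
    (2) ask the model one of the coaching questions above."""
    question = COACH_QUESTIONS[0].format(quality=quality)
    return (
        f"My goal: {goal}\n"
        f"Context: {context}\n"
        f'My current prompt: "{draft_prompt}"\n'
        f"{question}"
    )
```

Sending the returned string to any chat model turns the model itself into the prompt coach.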

  • View profile for Om Nalinde

    Building & Teaching AI Agents to Devs | CS @IIIT

    158,302 followers

    Anthropic’s “Prompting 101” is one of the best real-world tutorials I’ve seen lately on how to actually build a great prompt. Not a toy example. They showcase a real task: analyzing handwritten Swedish car accident forms. Here’s the breakdown:
    1. Stop treating prompts like playground experiments
    > Prompting is iterative engineering, not creative writing
    > Test, observe, refine - just like product development
    > One-shot prompts are amateur hour nonsense
    2. Structure isn't optional - it's everything
    > Task context prevents dangerous model hallucinations
    > Static knowledge belongs in system prompts
    > Step-by-step instructions eliminate unpredictable outputs
    3. Your model will lie without constraints
    > Claude hallucinated skiing accidents from car forms
    > Context and rules are your only defense
    > Trust but verify is dead - verify first
    4. Examples are your secret weapon
    > Few-shot learning steers model behavior precisely
    > XML tags create structured reasoning pathways
    > Concrete examples beat abstract instructions always
    5. Order of operations determines success
    > Analyze forms before sketches - sequence matters
    > Human reasoning patterns should guide model flow
    > Random instruction order produces random results
    6. Output formatting is non-negotiable
    > Structured JSON/XML enables downstream processing
    > Parsing requirements must be baked in
    > Pretty responses don't integrate with databases
    7. System prompts are your knowledge base
    > Static information belongs in system context
    > Prompt caching makes this economically viable
    > Domain expertise must be explicitly encoded
    8. Extended thinking reveals model reasoning
    > Thinking tags expose decision-making processes
    > Analyze transcripts to improve prompt engineering
    > Model introspection beats guessing every time
    9. The prompt IS the program
    > Language interfaces replace traditional APIs completely
    > Production teams version control their prompts
    > Treat prompts like mission-critical infrastructure code
    10. Most "AI failures" are prompt failures
    > Garbage prompts produce garbage AI agents
    > Proper prompt engineering eliminates 80% of issues
    > Your AI is only as good as your instructions
    Link to the tutorial is in the comments.
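A rough sketch of what such a structured prompt could look like in Python (illustrative only; this is not Anthropic's actual tutorial code): task context first, rules against guessing, few-shot examples in XML tags, then the input and a required JSON output format.

```python
def accident_form_prompt(form_text, examples):
    """Structured prompt sketch: context, rules, XML-tagged few-shot
    examples, then the input, ending with an explicit output format."""
    example_block = "\n".join(
        f"<example>\n<form>{form}</form>\n"
        f"<analysis>{analysis}</analysis>\n</example>"
        for form, analysis in examples
    )
    return (
        "You analyze handwritten Swedish car accident report forms.\n"  # task context
        "Rules: describe only what the form states; if a field is "
        'illegible, output "unknown" rather than guessing.\n'           # constraints
        f"<examples>\n{example_block}\n</examples>\n"                   # few-shot
        f"<form>\n{form_text}\n</form>\n"
        "Analyze the form before any sketch, then respond as JSON with "
        'keys "vehicle_a_action" and "vehicle_b_action".'               # output format
    )
```

The JSON key names here are made up; the point is that parsing requirements are baked into the prompt rather than hoped for afterwards.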

  • View profile for Lindsay McGregor

    Author of Primed To Perform; Founder and CEO, Factor.AI and Vega Factor

    9,808 followers

    Is your company scaling digital garbage? We’ve run hundreds of experiments across real organizations, and here’s what we consistently see: AI-generated output hits 52% quality when used like a typical corporate user with a generic prompt and no context. That output jumps to 94% quality when teams use expert prompts and robust, relevant context. 𝗠𝗼𝗱𝗲𝗹 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝗵𝗮𝘀 𝗳𝗮𝗿 𝗹𝗲𝘀𝘀 𝗶𝗺𝗽𝗮𝗰𝘁 𝘁𝗵𝗮𝗻 𝘆𝗼𝘂’𝗱 𝘁𝗵𝗶𝗻𝗸. 𝗜𝘁’𝘀 𝘁𝗵𝗲 𝗶𝗻𝗽𝘂𝘁𝘀 𝘁𝗵𝗮𝘁 𝗱𝗿𝗶𝘃𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲.
    So what’s the formula? Prompt + Context > Model
    Here’s what that looks like in practice:
    Bad prompt: “Write a summary of this meeting”
    Better prompt: “Write a 3-bullet executive summary of this customer call for the product team. Highlight churn risk, requested features, and tone.”
    Bad context: [Nothing provided]
    Better context:
    • CRM note: Customer churned last year due to onboarding
    • Call transcript
    • Churn forecast model
    • Slack thread on feature roadmap
    The teams getting real value from AI don’t throw it at blank canvases. They design it like a product:
    • Requirements = prompts
    • User data = context
    • Performance = measurable
    Full article + experiment results: https://lnkd.in/gJnBV9Rq
    If you want to get a better idea of how to harness AI in your org, let's talk. My inbox is always open 😊
    #GenAI #PromptEngineering #AIstrategy #DigitalTransformation #AIProductDesign #KnowledgeWork Factor.AI
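The Prompt + Context > Model formula can be mechanized: attach labeled context blocks to the expert prompt before it ever reaches a model. A minimal Python sketch; the labels and separator format are illustrative:

```python
def assemble_request(prompt, context_items):
    """Pair an expert prompt with labeled, relevant context blocks
    instead of sending the prompt against a blank canvas."""
    context = "\n\n".join(
        f"[{label}]\n{content}" for label, content in context_items
    )
    return f"{prompt}\n\n--- Context ---\n{context}"

request = assemble_request(
    "Write a 3-bullet executive summary of this customer call for the "
    "product team. Highlight churn risk, requested features, and tone.",
    [
        ("CRM note", "Customer churned last year due to onboarding."),
        ("Call transcript", "(paste transcript here)"),
        ("Slack thread", "Feature roadmap discussion."),
    ],
)
```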

  • View profile for Jonathan M K.

    VP of GTM Strategy & Marketing - Momentum | Founder GTM AI Academy & Cofounder AI Business Network | Business impact > Learning Tools | Proud Dad of Twins

    43,297 followers

    Everyone's talking about AI agents wrong. Here's what Google's groundbreaking whitepaper actually tells us:
    𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 = 𝗠𝗼𝗱𝗲𝗹 + 𝗧𝗼𝗼𝗹𝘀 + 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻
    But here's what everyone misses. Without proper prompt engineering, you get:
    - An intelligent model that can't understand your goals
    - Powerful tools that get misused or ignored
    - An orchestration layer that can't coordinate effectively
    - Wasted computing resources and development time
    - Frustrated users and failed implementations
    Think about this. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝗵𝗮𝘃𝗲 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝗮𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺 𝗮𝗻𝗱 𝘀𝘁𝗶𝗹𝗹 𝗳𝗮𝗶𝗹 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗽𝗿𝗼𝗽𝗲𝗿 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴!
    Prompt engineering is crucial for:
    - Accurate task interpretation and goal alignment
    - Efficient tool selection and coordination
    - Seamless multi-agent system communication
    - Autonomous decision-making capabilities
    - Dynamic context adaptation
    - Real-time error handling and recovery
    And you need to consider:
    - Environmental context management
    - Complex error handling scenarios
    - Task decomposition strategies
    - System constraints and limitations
    - User intent interpretation
    - Safety and reliability protocols
    - Performance optimization
    🚩 𝗣𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝘀𝗻'𝘁 𝗷𝘂𝘀𝘁 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝘀𝗸𝗶𝗹𝗹 🚩
    𝗜𝘁'𝘀 𝘁𝗵𝗲 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀
    𝗟𝗲𝘁'𝘀 𝗹𝗼𝗼𝗸 𝗮𝘁 𝘄𝗵𝗮𝘁 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗽𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗱𝗲𝗹𝗶𝘃𝗲𝗿𝘀:
    In Model Integration:
    - Crystal-clear task understanding
    - Contextual awareness
    - Consistent output quality
    - Reduced hallucinations
    - Better reasoning capabilities
    In Tool Usage:
    - Optimal tool selection
    - Efficient resource allocation
    - Reduced API costs
    - Enhanced functionality
    - Better integration
    In Orchestration:
    - Seamless workflow management
    - Dynamic task prioritization
    - Intelligent error recovery
    - Adaptive behavior patterns
    - Improved system reliability
    The Real Impact:
    - 𝟯𝘅 better task completion rates
    - 𝟱𝘅 fewer error scenarios
    - 𝟮𝘅 faster development cycles
    - 𝟰𝘅 improved user satisfaction
    - 𝟲𝘅 better resource utilization
    So why are we still treating prompt engineering as an afterthought when it's clearly the cornerstone of successful AI agent implementation? The future of AI agents isn't just about having the best models or the most tools. It's about mastering the art and science of each piece, including prompt engineering, to make it all work together. Does everyone need to know how to prompt? No. If you want to build agents, do you need to understand it? Yes. Come hang with us in the GTM AI Academy and let's dig in.
    #AIAgents #PromptEngineering #GoogleAI #ArtificialIntelligence #TechTrends #FutureOfAI #AIInnovation
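The Model + Tools + Orchestration equation can be sketched as a tiny loop. This is an illustrative stub of the pattern, not code from Google's whitepaper; every name here is made up:

```python
def run_agent(model, tools, goal, max_steps=5):
    """Minimal agent loop: the Model decides, the Tools execute,
    and the Orchestration layer keeps state between steps."""
    history = []
    for _ in range(max_steps):
        decision = model(goal, history)                # Model: choose next action
        if decision["action"] == "final":
            return decision["answer"]
        result = tools[decision["action"]](**decision.get("args", {}))  # Tools
        history.append((decision["action"], result))   # Orchestration: keep state
    return None  # step budget exhausted

def toy_model(goal, history):
    # Stand-in for an LLM that emits tool calls as dicts.
    if not history:
        return {"action": "search", "args": {"query": goal}}
    return {"action": "final", "answer": history[-1][1]}

tools = {"search": lambda query: f"results for {query!r}"}
```

Prompt engineering lives inside `model`: the quality of the instructions it receives determines whether the loop picks the right tool or burns the step budget.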

  • View profile for Aparna Dhinakaran

    Founder - CPO @ Arize AI ✨ we're hiring ✨

    35,311 followers

    LLMs don’t just respond to What you ask—they respond to How you ask. If you’re still relying on basic prompting, you’re leaving a lot of performance on the table. Here’s how people are systematically optimizing prompts for higher accuracy, robustness, and efficiency in AI apps:
    ⭐ Few-Shot Prompting – Improve precision in classification tasks by including example inputs/outputs (e.g., for detecting jailbreak attempts, spam, or misinformation).
    ⭐ Meta Prompting – Use an LLM to refine its own prompts (e.g., "Given this input/output, how would you rewrite this prompt for better performance?"). This works especially well for text generation and retrieval tasks.
    ⭐ Gradient Prompt Optimization (GPO) – Treat prompts like trainable parameters, embedding them and optimizing with loss gradients. Think of it as fine-tuning without modifying the model itself—a game-changer for low-resource AI applications.
    ⭐ Prompt Optimization Libraries – Tools like DSPy automate prompt refinement, evaluating variations systematically. For production AI systems, this makes tuning scalable.
    The takeaway? Prompt optimization is a continuous process. Real-world data shifts. New failure modes emerge. Just like model retraining, prompts need continuous iteration. What’s your go-to method for improving AI prompts?
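The first technique, few-shot prompting, amounts to building the prompt from labeled examples. A minimal Python sketch with made-up data; the label set and examples are illustrative:

```python
def few_shot_classifier_prompt(labels, examples, text):
    """Few-shot classification: labeled input/output pairs steer the
    model toward consistent, single-label answers."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return (
        f"Classify each input as one of: {', '.join(labels)}. "
        "Reply with the label only.\n"
        f"{shots}\n"
        f"Input: {text}\nLabel:"
    )

prompt = few_shot_classifier_prompt(
    labels=["spam", "not_spam"],
    examples=[("WIN A FREE CRUISE!!!", "spam"),
              ("Lunch at noon tomorrow?", "not_spam")],
    text="Claim your prize now",
)
```

Ending the prompt at `Label:` nudges the model to complete with just the label, which keeps downstream parsing trivial.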

  • View profile for Vani P.

    Transforming the Enterprise through AI Implementation | Bridging CX & EX with Generative AI, Cloud Strategy, and Digital Automation | VP, AI and Digital Solutions @ Pronix Inc

    5,818 followers

    Prompt Engineering Isn’t About Fancy Jargon—It’s About Asking Better Questions
    We’ve all had that moment: you open ChatGPT, type something in, and the answer feels… meh. Then you rephrase, add a bit of context, and suddenly the response is ten times better. That’s the magic of prompt engineering—and it’s less about “engineering” and more about learning how to talk to AI in a way it understands. Think of it like giving directions. If you tell someone, “Drive me somewhere nice,” you’ll get wildly different results. But if you say, “Take me to a coffee shop within 10 minutes that’s quiet and has Wi-Fi,” you’ll probably end up exactly where you want to be. AI works the same way. The quality of your output depends on the clarity of your input. So, how do you get better at it? Here are three practical tips with CX and EX use cases:
    1️⃣ Set the scene for CX. A retailer asked AI: “Help us handle customer complaints.” The output was too general. We reframed it as: “Draft empathetic responses for customers asking about late deliveries, offering a status update and a discount code for future orders.” Now the AI produced replies that were not only polite but also improved customer satisfaction.
    2️⃣ Break big tasks into steps for EX. An HR team asked AI: “Create an employee onboarding plan.” The result was broad and not useful. We broke it into steps:
    Step 1: Draft a welcome email for new hires.
    Step 2: Create a 30-day checklist of tasks for managers.
    Step 3: Suggest 3 ways to collect feedback after 60 days.
    This gave HR a clear, structured plan they could use immediately.
    3️⃣ Use examples for CX + EX. A bank wanted AI to generate FAQs for both customers and employees. Their first prompt: “Write FAQs for credit cards.” The results felt generic. So they gave AI an example:
    “Q: How do I increase my credit limit?
    A: Log into your account, click ‘Manage Credit Limit,’ and submit your request in minutes.”
    With that style guide, the AI created clear, consistent FAQs for both customer self-service portals (CX) and internal helpdesk systems (EX).
    Why does this matter? Because we’re moving into a world where knowing how to use AI tools will be as important as knowing how to send an email. Leaders who master prompting will move faster, think bigger, and execute better. It’s not about becoming an AI engineer. It’s about becoming a better communicator for both your customers and your employees.
    Takeaway: Prompt engineering is just smart communication. The clearer your input, the more valuable your output—whether that’s a faster customer response or a smoother employee experience. Now I’m curious—if you had AI draft one thing today to improve CX or EX in your business, what would it be?
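The step decomposition in tip 2 can be scripted as a chain of prompts, one per sub-task, with each result feeding the next. A minimal Python sketch; the `llm` callable is a placeholder for any real model call, not a specific API:

```python
ONBOARDING_STEPS = [
    "Draft a welcome email for new hires.",
    "Create a 30-day checklist of tasks for managers.",
    "Suggest 3 ways to collect feedback after 60 days.",
]

def run_steps(llm, steps):
    """One prompt per sub-task; each step sees the previous result,
    so the plan stays coherent instead of broad and unusable."""
    outputs, previous = [], ""
    for step in steps:
        prompt = step if not previous else (
            f"Previous work:\n{previous}\n\nNext task: {step}"
        )
        previous = llm(prompt)
        outputs.append(previous)
    return outputs
```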

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,719 followers

    I often say that in an AI world metacognition is the master capability. This applies at all levels, especially in framing work, but also in interacting with AI. Research reveals specific approaches that yield better outcomes in working with GenAI. Very pleased that Microsoft Research has a significant focus on metacognition, with numerous papers on the topic. One of these, "The Metacognitive Demands and Opportunities of Generative AI", has some particularly instructive findings on both system design and usage:
    🧩 Make the task explicit before you prompt. Most prompting interfaces expect you to state clear goals and break work into sub-tasks (e.g., “condense to two paragraphs,” “update the tone”). This metacognitive step is not optional—users who specify goals and decompose tasks gain better control over outputs.
    🧠 Treat prompting as a metacognitive exercise. Effective use requires two abilities during iteration: calibrating your confidence (“is it my prompt, parameters, or model randomness?”) and flexibly switching strategies (retry, refine, or decompose further).
    🛞 Choose the right interaction mode for control vs. ease. Giving explicit instructions is felt to be harder than inline edits, but it gives more control. Users often struggle at “getting started,” especially when many adjustable parameters are exposed.
    🧪 Expect heavier evaluation work when AI generates long content. GenAI outputs (full emails, presentations, or code) shift effort from writing to judging, increasing cognitive load compared to simple auto-complete. People also tend to “eyeball” generated code, risking over-confidence in correctness.
    ⚡ Watch for fluency-driven overconfidence. Fast, fluent answers can inflate your confidence in both the output and your own evaluation, even when accuracy hasn’t improved. Higher felt confidence then reduces the effort you invest in checking, shortening thinking time and lowering willingness to revise.
    🗺️ Use planning aids to improve prompts. Built-in planning support (goal setting + task decomposition) helps users craft better prompts; “prompt chaining” (multi-step sub-tasks) made participants “think through the task better” and target edits more precisely.
    🧭🛠️ Reduce demand with explainability and customizability. Surface the right controls (e.g., temperature, shortlist size, output length) and adapt complexity to user state. This can improve self-awareness, confidence, and satisfaction.
    🕹️ Support self-evaluation and self-management in the UI. Proactive, neutral nudges based on prior behavior (e.g., “you typically add 15 follow-ups after vague summaries”) can guide users to specify goals up front and reduce rework.
    ⚖️ Manage cognitive load while improving metacognition. Interventions (decomposition steps, reflections, explanations) add information to process, but studies show metacognitive support can improve outcomes without raising overall load; adapt or fade prompts as skills grow.

  • View profile for NIKHIL NAN

    Global Procurement Strategy, Analytics & Transformation Leader | Cost, Risk & Supplier Intelligence at Enterprise Scale | Data & AI | MBA (IIM U) | MS (Purdue) | MSc AI & ML (LJMU, IIIT B)

    7,954 followers

    From Prompt Engineering to Context Engineering
    Anthropic’s new guide reframes AI system design: the shift is from writing prompts to engineering context. As agents grow more capable, their attention remains finite. The goal is to curate the smallest, highest-value set of tokens—system prompts, tools, examples, and data—that keep behavior reliable.
    Key principles:
    • Tokens are scarce. Keep context concise and high-signal.
    • Structure prompts. Use clear, sectioned system instructions instead of verbose or vague text.
    • Retrieve just in time. Pull information only when needed.
    • Compact history. Summarize, store decisions externally, reload selectively.
    • Streamline tools. Use fewer, well-defined functions to prevent overlap and confusion.
    In short: consistent AI performance depends less on creative phrasing and more on disciplined context management—the emerging foundation of effective agent design.
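The "compact history" principle can be sketched in a few lines. Here `summarize` stands in for an LLM summarization call, and the message-count thresholds are illustrative (real systems would budget by tokens):

```python
def compact_history(messages, summarize, keep_last=4, budget=12):
    """When the transcript exceeds the budget, summarize the older
    turns into one system note and keep only recent turns verbatim."""
    if len(messages) <= budget:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    note = {"role": "system",
            "content": f"Summary of earlier turns: {summarize(older)}"}
    return [note] + recent
```

The decisions worth preserving long-term would be stored externally and reloaded selectively, per the principle above; this sketch only covers the in-context compaction step.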

  • View profile for Maher Khan

    AI-Powered Social Media Strategist | Adobe Ambassador | LinkedIn Top Voice (N. America) | MBA (Marketing) | AI Generalist

    6,620 followers

    Stop blaming ChatGPT, Claude, or Grok for bad outputs when you're using them wrong. Here's the brutal truth: 90% of people fail at AI because they confuse prompt engineering with context engineering. They're different skills, and mixing them up kills your results.
    The confusion is real: people write perfect prompts but get terrible outputs, then blame the AI. Plot twist: your prompt was fine. Your context was garbage.
    Here's the breakdown:
    PROMPT ENGINEERING = The Ask
    CONTEXT ENGINEERING = The Setup
    Simple example:
    ❌ Bad Context + Good Prompt: "Write a professional email to increase our Q4 sales by 15% targeting enterprise clients with personalized messaging and clear CTAs."
    The AI gives generic corporate fluff because it has zero context about your business.
    ✅ Good Context + Good Prompt: "You're our sales director. We're a SaaS company selling project management tools. Our Q4 goal is 15% growth. Our main competitors are Monday.com and Asana. Our ideal clients are 50-500 employee companies struggling with team coordination. Previous successful emails mentioned time-saving benefits and included customer success metrics. Now write a professional email to increase our Q4 sales by 15% targeting enterprise clients with personalized messaging and clear CTAs."
    Same prompt. Different universe of output quality.
    Why people get this wrong: they treat AI like Google search. Fire off questions. Expect magic. But AI isn't a search engine. It's a conversation partner that needs background.
    The pattern:
    • Set context ONCE at conversation start
    • Engineer prompts for each specific task
    • Build on previous context throughout the chat
    Context engineering mistakes:
    • Starting fresh every conversation
    • No industry/role background provided
    • Missing company/project details
    • Zero examples of desired output
    Prompt engineering mistakes:
    • Vague requests: "Make this better"
    • No format specifications
    • Missing success criteria
    • No tone/style guidance
    The game-changer: master both. Context sets the stage. Prompts direct the performance.
    Quick test: if you're explaining your business/situation in every single prompt, you're doing context engineering wrong. If your outputs feel generic despite detailed requests, you're doing prompt engineering wrong.
    Bottom line: stop blaming the AI. Start mastering the inputs. Great context + great prompts = consistently great outputs. The AI was never the problem. Your approach was.
    #AI #PromptEngineering #ContextEngineering #ChatGPT #Claude #Productivity #AIStrategy
    Which one have you been missing? Context or prompts? Share your biggest AI struggle below.
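The "set context once, prompt per task" pattern maps directly onto the messages list that chat APIs accept. A minimal Python sketch reusing the post's own example context; the helper names are illustrative:

```python
def new_session(business_context):
    """Context engineering: set the background ONCE, as a system message."""
    return [{"role": "system", "content": business_context}]

def ask(session, prompt):
    """Prompt engineering: each ask is specific, but rides on the
    context already in the session. Pass the returned list to your
    chat-completion API of choice."""
    session.append({"role": "user", "content": prompt})
    return session

session = new_session(
    "You're our sales director. We're a SaaS company selling project "
    "management tools. Our Q4 goal is 15% growth. Our main competitors "
    "are Monday.com and Asana."
)
messages = ask(session, "Write a professional email to increase our Q4 "
                        "sales by 15% targeting enterprise clients.")
```

Because the session list persists, later calls to `ask` build on everything before them instead of starting fresh each conversation.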
