Prompt engineering is the new consulting superpower. Most haven't realized it yet.

Over the last couple of days, I reviewed the latest guides from Google, Anthropic, and OpenAI. Some of the key recommendations to improve output:

→ Be very specific about the expertise level you're requesting
→ Use structured instructions or meta-prompts
→ Explicitly reference project documents in the prompt
→ Ask the model to "think step by step"

Based on the guides, here are four ways to immediately level up your prompting skill set as a consultant:

1. Define the expert persona precisely
"You're a specialist with 15 years in retail supply chain optimization who has worked with Target and Walmart."
Why it matters: The model draws on deeper technical patterns, not just general concepts.

2. Structure the deliverable explicitly
"Provide 3 key insights and their implications, then support each with data-driven evidence."
Why it matters: You get structured material that needs minimal editing.

3. Set distinctive success parameters
"Focus on operational inefficiencies that competitors typically overlook."
Why it matters: You push the model beyond obvious answers to genuine competitive insights.

4. Establish the decision context
"This is for a CEO with a risk-averse investor applying pressure to improve gross margins."
Why it matters: The recommendations align with stakeholder realities and urgency.

Those were the takeaways from the guides I found most helpful. Run these prompts against generic statements and you'll see a massive difference in quality and relevance.

Bonus tips that are working for me:
→ Create prompt templates using the four elements
→ Test different expert personas against the same problem (I regularly use "senior McKinsey partner" to counter my position and detect gaps in my thinking)
→ Ask the model to identify contradictions or gaps in the data before finalizing any recommendations
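The four elements above can be assembled programmatically. A minimal sketch in Python; the function and parameter names are illustrative, not taken from any of the guides:

```python
def consulting_prompt(persona, deliverable, success_criteria, decision_context, question):
    """Combine the four elements (persona, deliverable, success
    parameters, decision context) plus the actual question into
    one prompt string."""
    return (
        f"{persona}\n\n"
        f"Deliverable: {deliverable}\n"
        f"Success criteria: {success_criteria}\n"
        f"Decision context: {decision_context}\n\n"
        f"Question: {question}"
    )
```

Filling in the four slots per engagement is what "create prompt templates using the four elements" looks like in practice.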
We’re only scratching the surface of what these “intelligence partners” can offer. Getting better at prompting may be one of the most asymmetric skill opportunities any of us has today.

Share your favourite prompting tip below!

P.S. Was this post helpful? Should I share one post per week on how I’m improving my AI-related skills?
How to Use Prompt Engineering for AI Projects
Summary
Prompt engineering is the practice of crafting clear, detailed instructions for AI tools to help them produce accurate and valuable responses. When working on AI projects, shaping your prompts thoughtfully can turn basic outputs into customized insights that meet your goals.
- Define your intent: Start by specifying the purpose and context behind your request, so the AI understands exactly what you need.
- Set boundaries: Include constraints like length, tone, or format to guide the AI toward responses that fit your project requirements.
- Tailor to the tool: Adjust your prompting style to match the strengths and personality of each AI platform for best results.
A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?”

And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts.

So, if you've ever felt like ChatGPT, Copilot, Gemini, or any of those AI agents are more "artificial" than "intelligent," rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void.

1. Lead with intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…” This orients the model instantly.
2. Scope and constraints first. Define boundaries up front: length limits, style guides, data sources, even forbidden terms.
3. Format your output. Specify a JSON schema, markdown headers, or table columns. Models respond better to explicit structure than to free-form prose.
4. Provide minimal, high-quality examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time.
5. Isolate subtasks. Break complex workflows into discrete prompts (prompt chaining). One prompt per action: analyze, summarize, critique, then assemble.
6. Anchor with delimiters. Use triple backticks or XML tags to fence inputs; it noticeably cuts hallucinations.
7. Inject domain signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth.
8. Iterate rapidly. Version your prompts like code. A/B test variations and track which phrasing yields the cleanest output.
9. Tune the “why.” Always ask for reasoning steps. Always.
10. Template and automate. Build parameterized prompt templates in your repo.

Still with me? Good. Bonus tips.

1. Token economy awareness. Place critical context in the first 200 tokens. Anything beyond 1,500 risks context drift.
2. Temperature vs. prompt depth. Higher temperature amplifies creativity, but only if your prompt is concise. Otherwise you get noise.
3. Use a “chain of questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus.
4. Mirror the LLM’s own language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts.
5. Treat prompts as living docs. Embed metrics in comments: note output quality, error rates, hallucination frequency. Keep iterating until the ROI justifies the effort.

And finally, the bit no one wants to hear: you get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic.

You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️
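Several of the fundamentals above (constraints up front, explicit output format, fenced input, parameterized templates) combine naturally into one reusable template. A hypothetical sketch; the template text and names are my own, not from the post:

```python
# XML tags fence the untrusted input (fundamental #6); the role,
# constraints, and JSON format spec are parameterized (#10).
TEMPLATE = """You are {role}.

Constraints: respond in at most {max_words} words; use only the fenced input below.

Return a JSON object with the keys "summary" and "risks".

<input>
{text}
</input>"""

def render(role, max_words, text):
    """Fill the parameterized template for one request."""
    return TEMPLATE.format(role=role, max_words=max_words, text=text)
```

Because it is just a versionable string in your repo, A/B testing phrasings (#8) becomes a diff, not a rewrite.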
-
Most people don’t realize: AI can coach you on how to prompt it better. Here’s how to turn AI into your personal prompt coach, so you get better results and learn how to use AI faster.

Try this two-step fix:
1. State your goal and context.
2. Ask one of these questions:
➡️ "How would you rewrite my prompt to get more [specific, creative, detailed, etc.] responses?"
➡️ "If you were trying to get [desired outcome], how would you modify this prompt?"
➡️ "If this were your prompt, what would you change to make it more effective?"
➡️ "What elements are missing from my prompt that would help you generate better responses?"
➡️ "How might you enhance this prompt to avoid common pitfalls or misinterpretations?"
➡️ Or simply: "Improve my prompt."

Before: "Explain force majeure clauses."
After: "Analyze how courts in California have interpreted force majeure clauses in commercial leases since COVID-19, focusing on what constitutes 'unforeseeable circumstances' and the burden of proof required to invoke these provisions."

The difference? A broad, non-jurisdiction-specific, superficial overview vs. actionable legal insights for commercial leases in California.

Not only will you get better outcomes, but you will learn how to improve your prompting in the process. What are your go-to strategies or favorite prompts to optimize AI responses?
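The two-step fix above is easy to wrap in a helper so the coaching question is never forgotten. A minimal sketch; the list contents come from the post, the function name is illustrative:

```python
# Coaching questions from the two-step fix; extend the list as needed.
COACH_QUESTIONS = [
    "How would you rewrite my prompt to get more detailed responses?",
    "What elements are missing from my prompt that would help you "
    "generate better responses?",
    "Improve my prompt.",
]

def coaching_request(goal, draft_prompt, question=COACH_QUESTIONS[0]):
    """Step 1: state goal and context. Step 2: append a coaching question."""
    return (
        f"My goal: {goal}\n"
        f"My current prompt: {draft_prompt}\n\n"
        f"{question}"
    )
```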
-
The difference between poor AI outputs and great ones? It's not the tool. It's how you prompt it.

After working with teams across multiple industries on AI adoption, I've noticed this pattern: most people write prompts. The best people architect them.

Here's what a typical prompt looks like: "Write me an email about our new product." That's just a task. You've given the AI 20% of what it needs.

Here's the 5-part Universal Prompt Architecture that works across ChatGPT, Claude, Gemini, Copilot, and any platform:

1. CONTEXT: Who you are + what the AI needs to know
2. TASK: The specific output you need
3. CONSTRAINTS: Your non-negotiables (tone, length, what to avoid)
4. OUTPUT FORMAT: Show the structure, don't make the AI guess
5. QUALITY CHECK: How you'll validate the output

When you use all 5 parts together:
✅ Output quality jumps 50%+
✅ Revision cycles drop dramatically
✅ It works across every major AI platform

I've trained hundreds of people on this framework. It sticks because it forces you to think before you prompt.

The copy-paste template is pinned in the comments 📌👇

This is Week 1 of my 5-part series, "AI That Ships." Every Tuesday for the next 5 weeks, I'm sharing practical AI frameworks that actually work across tools, teams, and industries. Follow me to get the full series 🔔

What's the one thing you struggle with when prompting AI?

#AIThatShips #AIinMarketing #PromptEngineering
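The 5-part architecture above maps cleanly onto a small data structure, which also makes missing parts obvious. A sketch under my own naming, not the author's template:

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """One field per part of the 5-part Universal Prompt Architecture."""
    context: str
    task: str
    constraints: str
    output_format: str
    quality_check: str

    def render(self) -> str:
        """Emit the five labeled sections in order."""
        return "\n\n".join([
            f"CONTEXT: {self.context}",
            f"TASK: {self.task}",
            f"CONSTRAINTS: {self.constraints}",
            f"OUTPUT FORMAT: {self.output_format}",
            f"QUALITY CHECK: {self.quality_check}",
        ])
```

Because every field is required, you cannot construct a `PromptSpec` that gives the AI only "20% of what it needs."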
-
Most people prompt every AI the same way. That’s why their outputs are mediocre.

I’ve tested hundreds of prompts across every major AI platform. The difference between average and exceptional outputs isn’t prompt length. It’s prompt style matched to the tool. This framework breaks it down:

ChatGPT → Prompt like an instructor.
Start with a role assignment: “Act as a productivity coach.” Define the specific task. Ask for step-by-step action plans with timelines. Specify your desired format: table, outline, bullet list. Request tool recommendations. ChatGPT excels at structured guidance and task planning. Give it constraints and it delivers.

Perplexity → Prompt like a research analyst.
Lead with specific information requests. Include relevant keywords, timeframes, and geographies. Ask for cited sources and reference links for verification. Request trend summaries with citations. Follow up with comparison questions that require data-backed reasoning. Perplexity is built for evidence-based analysis. Treat it like a junior analyst who needs clear research parameters.

Grok → Prompt like a candid friend.
Use a conversational tone: “Hey Grok, what do you think about…” Add emotional context. Ask for honest, unfiltered feedback and alternative perspectives. Request comparisons or opposing viewpoints to challenge your assumptions. Ask for common pitfalls and mistakes to avoid. Grok thrives on casual brainstorming and identifying blind spots others miss.

Gemini → Prompt like a project planner.
Explain the overall project goal upfront. Define expected outputs: tasks, subtasks, timelines. Ask about Google Workspace integrations. Request detailed weekly or daily action plans. Ask for dependency breakdowns and milestones. Request formatted outputs like tables and charts. Gemini is optimized for project management and collaborative workflows.

Why this matters: each model has a personality bias baked into its training data and architecture. ChatGPT leans toward structured helpfulness. Perplexity toward verification and sourcing. Grok toward irreverence and contrarianism. Gemini toward organizational workflows. When you fight these tendencies, you get generic outputs. When you lean into them, you unlock capabilities most users never see.

The tactical shift: stop copying prompts between platforms. Start adapting your communication style to each tool’s strengths. Same question, different framing = dramatically different quality.

One prompt style for all tools is lazy. Adapted prompting is leverage.
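"Same question, different framing" can be operationalized as a per-tool prefix table. A hypothetical sketch; the prefix strings paraphrase the styles above and are not official guidance from any vendor:

```python
# One framing prefix per tool, paraphrasing the styles described above.
# Placeholders ({role}, {timeframe}, ...) are filled per request.
STYLE_PREFIXES = {
    "chatgpt": "Act as {role}. Produce a step-by-step plan with timelines, "
               "formatted as a table.",
    "perplexity": "Research request ({timeframe}, {region}): cite sources "
                  "and reference links for every claim.",
    "grok": "Hey, give me your honest, unfiltered take, including common "
            "pitfalls and opposing viewpoints:",
    "gemini": "Project goal: {goal}. Break it into tasks, subtasks, "
              "milestones, and dependencies.",
}

def frame(tool, question, **slots):
    """Wrap the same question in the framing suited to the given tool."""
    prefix = STYLE_PREFIXES[tool.lower()].format(**slots)
    return f"{prefix}\n\n{question}"
```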
-
The most underrated skill for 2025? (Not code. Not ads. Not funnels.) It's knowing how to talk to AI. Seriously.

Prompt writing is becoming the new leverage skill, and almost no one is teaching it properly.

I’ve built AI workflows for content, marketing, and growth. They save me 10+ hours/week and cut down on team overhead. The key? 👉 It’s not just asking ChatGPT questions. It’s knowing how to structure your prompts with frameworks like these.

Here are 4 frameworks I use to get 🔥 outputs in minutes:

1. R-T-F → Role → Task → Format
“Act as a copywriter. Write an Instagram ad script. Format it as a conversation.”

2. T-A-G → Task → Action → Goal
“Review my website copy. Suggest changes. Goal: Boost conversion by 15%.”

3. B-A-B → Before → After → Bridge
“Traffic is low. I want 10k monthly visitors. Give me a 90-day SEO plan.”

4. C-A-R-E → Context → Action → Result → Example
“We’re launching a podcast. Write a guest outreach email. Goal: Book 10 experts.”

You’re not just prompting. You’re building AI systems. Mastering this skill will:
✅ 10x your productivity
✅ Reduce dependency on agencies
✅ Help you scale solo (or with a lean team)

The AI era belongs to the strategic communicators. Learn how to prompt, and you won’t need to hire half as much.

📌 Save this post.
🔁 Repost if you believe AI is a partner, not a replacement.

#ChatGPT #PromptEngineering
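The four frameworks above are simple enough to encode as one helper each. A minimal sketch; the sentence templates are my own phrasing of each framework, not canonical wording:

```python
def rtf(role, task, fmt):
    """R-T-F: Role -> Task -> Format."""
    return f"Act as {role}. {task} Format the output as {fmt}."

def tag(task, action, goal):
    """T-A-G: Task -> Action -> Goal."""
    return f"{task} {action} Goal: {goal}"

def bab(before, after, bridge):
    """B-A-B: Before -> After -> Bridge."""
    return f"Current state: {before} Desired state: {after} {bridge}"

def care(context, action, result, example):
    """C-A-R-E: Context -> Action -> Result -> Example."""
    return f"Context: {context} {action} Desired result: {result} Example: {example}"
```

Keeping the frameworks as functions means the structure is enforced every time instead of remembered sometimes.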
-
Monday Technical Deep Dive: Prompting for Precision

You've probably heard about AI everywhere, but are you prompting it right to get the best results? Getting useful output from models like Gemini or ChatGPT isn't magic; it's a skill called prompt engineering. If your prompt is weak, your output will be too.

I recently attended Google’s Generative AI Leader Program, and it solidified a core principle: better inputs = better outputs. Here are three simple techniques to immediately improve your results:

1. Zero-Shot Prompting (The Baseline)
The simplest approach: you give the model no examples, just the instruction.
Example: "Explain the concept of API idempotency."
When to use it: For basic questions, definitions, or tasks where the model already has extensive knowledge. It's your starting point.

2. Few-Shot Prompting (The Teacher)
Here you give the model a few examples of the desired input/output format before asking your actual question. You are essentially teaching it your style.
Example: "Here are three examples of how I write a professional email closing: [Example 1], [Example 2], [Example 3]. Now, write an email to a recruiter following this style."
When to use it: When the output needs to match a specific format, tone, or structure (e.g., code functions, marketing copy, or technical documentation).

3. Chain-of-Thought (CoT) Prompting (The Analyst)
The most powerful technique for complex tasks: you instruct the model to explain its reasoning step by step before providing the final answer.
Example: "Before giving the final answer, first list and explain the security risks associated with deploying this new cloud function. Then, suggest three mitigation strategies."
When to use it: For complex analysis, multi-step problem-solving, or debugging. For me, this is essential when working on AI and security concepts, where you need verifiable reasoning.

Prompting is a skill that will only grow in importance. Which of these techniques are you going to test today? Let me know your results!

#GenerativeAI #PromptEngineering #TechnicalDeepDive #SoftwareEngineering #AI
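Few-shot prompting (technique 2) is usually implemented by interleaving example pairs as prior chat turns, in the role/content message format most chat-style LLM APIs accept. A sketch, assuming that common message shape:

```python
def few_shot_messages(instruction, examples, query):
    """Build a chat-message list: a system instruction, then each
    (input, output) example as a user/assistant turn pair, then the
    real query as the final user turn."""
    messages = [{"role": "system", "content": instruction}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": query})
    return messages
```

Two or three high-quality pairs are typically enough to pin down format and tone, matching the email-closing example above.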
-
Want better prompts? Try these 10 tactics.

Most people treat prompting like a keyword search. They plug in a few words and expect miracles to happen. However, AI isn't a search engine. If you want great output, you need to design your prompts with intention, context, and clarity.

In UX and product work, we spend years learning how to design for users. Now we need to learn how to design for AI. I've found that design principles can be applied to prompting. Here's what that looks like:

→ 𝗗𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝗿𝗼𝗹𝗲 – Tell the AI who it is before you tell it what to do. Perspective changes everything.
→ 𝗖𝗹𝗮𝗿𝗶𝗳𝘆 𝘁𝗵𝗲 𝗴𝗼𝗮𝗹 – Don't just describe the task. Explain the purpose, audience, and outcome you need.
→ 𝗙𝗲𝗲𝗱 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗹𝗶𝗸𝗲 𝗮 𝗱𝗲𝘀𝗶𝗴𝗻 𝗯𝗿𝗶𝗲𝗳 – Give it audience, constraints, and intent. AI can't prioritize if you don't.
→ 𝗦𝗲𝘁 𝘁𝗵𝗲 𝘁𝗼𝗻𝗲 – Art direct your prompts. Tell it your brand personality, voice, and who you're speaking to.
→ 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝘆𝗼𝘂𝗿 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 – Speak in flows, not fragments. "List ideas → categorize → summarize" creates better results than vague asks.
→ 𝗔𝗻𝗰𝗵𝗼𝗿 𝗶𝗻 𝗿𝗲𝗮𝗹 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 – Give it physical, emotional, and environmental details.
→ 𝗗𝗲𝗳𝗶𝗻𝗲 𝘀𝘂𝗰𝗰𝗲𝘀𝘀 – Tell it what "good" looks like for your brand or team.
→ 𝗧𝗲𝗹𝗹 𝗶𝘁 𝗵𝗼𝘄 𝘁𝗼 𝘁𝗵𝗶𝗻𝗸 – Specify reasoning. "Apply design thinking to identify root causes" creates smarter outputs.
→ 𝗣𝗿𝗼𝘁𝗼𝘁𝘆𝗽𝗲 𝘁𝗵𝗲 𝗼𝘂𝘁𝗽𝘂𝘁 – Show it the structure you want. Wireframe your answer.
→ 𝗜𝘁𝗲𝗿𝗮𝘁𝗲 𝗹𝗶𝗸𝗲 𝗮 𝗱𝗲𝘀𝗶𝗴𝗻𝗲𝗿 – The best prompt isn't your first one. Version, test, refine.

Prompting is quickly becoming the new design language. The same principles that make great UX (empathy, iteration, clarity) make great AI collaboration. So stop treating AI like a tool you command. Start treating it like a teammate you design with.

---
💡 Share if this helps others
➕ Follow Jason Moccia for more tech and leadership insights
-
Most PMs wouldn't give a developer a Post-it note saying "build something good." Yet they're perfectly happy giving AI exactly that.

Vague prompts get vague results. AI isn't lazy; it's starving for clear context.

𝐆𝐫𝐞𝐚𝐭 𝐩𝐫𝐨𝐦𝐩𝐭𝐬 𝐬𝐡𝐨𝐮𝐥𝐝 𝐥𝐨𝐨𝐤 𝐥𝐢𝐤𝐞 𝐏𝐑𝐃𝐬, 𝐧𝐨𝐭 𝐭𝐰𝐞𝐞𝐭𝐬.

Instead of a vague "give me ideas," here's how to get value from AI on real product tasks:

① Rich product context: clear goals, real constraints, competitive insights.
② Genuine personas: "Gen Z freelancers who abandon carts due to uncertain cash flow," not "young users."
③ Clear reasoning: explicitly ask for the thinking behind the ideas.
④ Explicit reasoning steps: "Evaluate friction points → Suggest solutions aligned with personas → Prioritize by effort & impact."

At Zentrik, we treat prompts like structured docs. The result? AI stopped guessing and started acting like an actual teammate.

Prompting isn't magic, but it can feel like it if you stop writing tweets and start writing prompts that read like requirements. (Tip: Get help from Zentrik or Anthropic Workbench to get your context together.)

Stop feeding your AI scraps on Post-its. Give it real context, and watch it become your favorite thought partner.

For those who want to go deeper, we co-created a full guide on prompt engineering for product teams with Matvey, Shamsher, and Product Map. Tons of real-world examples and templates you can adapt!

🔗 Link in the comments
💬 PMs, designers, devs => how structured are your AI prompts right now?
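A PRD-style prompt with the four ingredients above can be generated from structured fields. A hypothetical sketch of what "prompts like structured docs" might look like in code; the function and section names are mine, not Zentrik's:

```python
def prd_prompt(goal, constraints, persona, reasoning_steps, request):
    """Render a PRD-style prompt: context, constraints, persona,
    then numbered reasoning steps the model must follow in order."""
    lines = [
        f"Product goal: {goal}",
        f"Constraints: {constraints}",
        f"Target persona: {persona}",
        "Reasoning steps (follow in order, and show your thinking):",
    ]
    lines += [f"  {i}. {step}" for i, step in enumerate(reasoning_steps, 1)]
    lines.append(f"Request: {request}")
    return "\n".join(lines)
```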
-
Anthropic’s “Prompting 101” is one of the best real-world tutorials I’ve seen lately on how to actually build a great prompt. Not a toy example: they showcase a real task, analyzing handwritten Swedish car accident forms.

Here’s the breakdown:

1. Stop treating prompts like playground experiments
> Prompting is iterative engineering, not creative writing
> Test, observe, refine - just like product development
> One-shot prompts are amateur hour nonsense

2. Structure isn't optional - it's everything
> Task context prevents dangerous model hallucinations
> Static knowledge belongs in system prompts
> Step-by-step instructions eliminate unpredictable outputs

3. Your model will lie without constraints
> Claude hallucinated skiing accidents from car forms
> Context and rules are your only defense
> Trust but verify is dead - verify first

4. Examples are your secret weapon
> Few-shot learning steers model behavior precisely
> XML tags create structured reasoning pathways
> Concrete examples beat abstract instructions always

5. Order of operations determines success
> Analyze forms before sketches - sequence matters
> Human reasoning patterns should guide model flow
> Random instruction order produces random results

6. Output formatting is non-negotiable
> Structured JSON/XML enables downstream processing
> Parsing requirements must be baked in
> Pretty responses don't integrate with databases

7. System prompts are your knowledge base
> Static information belongs in system context
> Prompt caching makes this economically viable
> Domain expertise must be explicitly encoded

8. Extended thinking reveals model reasoning
> Thinking tags expose decision-making processes
> Analyze transcripts to improve prompt engineering
> Model introspection beats guessing every time

9. The prompt IS the program
> Language interfaces replace traditional APIs completely
> Production teams version control their prompts
> Treat prompts like mission-critical infrastructure code

10. Most "AI failures" are prompt failures
> Garbage prompts produce garbage AI agents
> Proper prompt engineering eliminates 80% of issues
> Your AI is only as good as your instructions

Link to the tutorial is in the comments.