ChatGPT Usage Tips

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,472,831 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

    You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks, including producing code, writing text, and answering questions.
    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:

    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
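The generate → critique → rewrite loop described in the post can be sketched in a few lines of Python. This is a minimal sketch, not a definitive implementation: `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and the critique prompt follows the post's wording.

```python
def reflect_and_revise(task, call_llm, rounds=2):
    """Generate a draft, then alternate critique and rewrite passes."""
    draft = call_llm(f"Write code to carry out this task: {task}")
    for _ in range(rounds):
        # Ask the model to criticize its own previous output.
        critique = call_llm(
            f"Here is code intended for task: {task}\n\n{draft}\n\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        # Feed the prior code plus the feedback back in for a rewrite.
        draft = call_llm(
            f"Task: {task}\n\nPrevious code:\n{draft}\n\n"
            f"Feedback:\n{critique}\n\nUse the feedback to rewrite the code."
        )
    return draft
```

Each round costs two extra model calls, so in practice one or two rounds is usually enough before returns diminish.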

  • View profile for Ruben Hassid

    Master AI before it masters you.

    836,144 followers

    STOP asking ChatGPT to "make it better". Here's how to prompt it better instead:

    ☑ Clearly Identify the Issue
    Rather than a vague "make it better," specify the exact element that needs to change. For example: "Rewrite the second paragraph so it includes three concrete examples of our product's benefits. The tone must be formal and persuasive. Remove any informal language or redundant phrases."

    ☑ Divide the Task into Discrete Steps
    Break the overall revision into a sequence of manageable tasks. For example: "Go through my instructions, step by step. – Step 1: Summarize the text in one sentence. – Step 2: Identify two specific weaknesses. – Step 3: Rewrite the text to address these weaknesses, incorporating specific data or examples."

    ☑ Specify the Format and Level of Detail
    Define exactly how the final output should look. For example: "Provide the final revised text as a numbered list where each item contains 2–3 sentences. Each item must include at least one statistical fact or concrete example, and the overall response should not exceed 250 words."

    ☑ Request a Chain-of-Thought Explanation
    Ask the model to detail its reasoning process before giving the final output. For example: "Before providing the final revised text, explain your reasoning step by step. Identify which parts need improvement and how your changes will enhance clarity and professionalism. Then, present the final revised version."

    ☑ Add Conditional Instructions to Enforce Compliance
    Add if/then conditions to ensure all requirements are met. For example: "If the revised text does not include at least two concrete examples, then add a sentence with a real-world statistic. Otherwise, finalize the response as is."

    ☑ Consolidate All Instructions into One Prompt
    Integrate all the detailed instructions into a single, comprehensive prompt. For example: "First, identify the section of the text that needs improvement and explain why it is lacking. Next, summarize the current text in one sentence and list two specific weaknesses. Then, rewrite the text to address these weaknesses, ensuring the revised version includes three concrete examples, uses a formal and persuasive tone, and is structured as a numbered list with each item containing 2–3 sentences. Each list item must include at least one statistical fact or example, and the overall response must be no longer than 250 words. Before providing the final text, explain your reasoning step by step. If the revised text does not include at least two concrete examples, add an additional sentence with a real-world statistic."

    ___

    Why This Works

    People never give enough context. And once ChatGPT answers, they never correct it enough. Think about it like an intern. Deep prompting is all about precision: give clear instructions, context, and the right corrections.

    PS: Don't forget to try the new o3-mini model. In my experience it outperforms every other model – yes, even DeepSeek.

  • View profile for Allie K. Miller

    #1 Most Followed Voice in AI Business (2M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 350K+ students - Link in Bio

    1,641,711 followers

    "What AI skill should my team and I actually learn right now?" I will scream this from the rooftops of NYC.

    ➡️ Learn agent delegation

    Target a dedicated workflow or task. Assign an AI agent that role, define the outcome, set constraints, and schedule review gates. Treat it like a junior teammate: give it work while monitoring so you can review for accuracy.

    Here's my do-this-now stack, and how I'd run it with a team ⏬

    If you're a beginner: Start with ChatGPT Agent Mode. Open a new ChatGPT chat and change the dropdown to 'Agent Mode'. It can plan tasks, execute steps, and return cited outputs for market scans, vendor comparisons, executive briefs, and decision memos. Kick off the job, let it run, WATCH IT RUN, and then review the completion.

    If you're more technical or ops-heavy: Use Claude Code when the work requires operating UIs or your computer – clicking through portals, filling forms, wrangling spreadsheets, saving documents. Expect more upfront setup and ownership, so keep a step-by-step prompt checklist, add automatic reruns for failing steps, and update the checklist only when the site's labels or paths change.

    If you're living in Google Workspace: Turn on Google connectors (Drive, Gmail, Calendar) inside ChatGPT or Claude. Ask the model to find your team's file, summarize threads, compare document versions, prepare for and schedule meetings, or draft from past emails. This lets your agent pull context and act on it without manual hunting.

    How to turn this into outcomes in 30 days ⏬

    → Twice a week, use Agent Mode to produce a one-page brief with citations and a recommendation on a real business question. Track cycle time and data/citation quality, and, where relevant, use Claude Code to automate in parallel. At the end of the month, you should know where a few agents can tackle real work and have the data to support what to scale.

    #AIinWork

  • View profile for Dan Martell

    📘 Bestselling Author (Buy Back Your Time) 🚀 Building AI startups @Martell Ventures ⚙️ 3x Software Exits • $100M+ HoldCo 💬 DM "COACH" if you're looking to scale

    182,003 followers

    A few weeks ago I told my team that AI needs to do 92% of their work or they'll get left behind. Here's how we're doing it (and why):

    Step 1: Get ChatGPT Plus/Pro

    Step 2: Create your master prompt
    • Tell AI: "I'm [your role] at [company type]. Create a master prompt for me. Ask me every question you need to give me the most context possible."
    • Spend 30–45 minutes answering everything it asks
    • Save the output as a PDF
    • Upload this to every new chat so AI knows your full context

    Step 3: Build system prompts
    Master prompts tell AI who you are. System prompts tell AI HOW to work. Here's the process:
    • Ask AI to create any output (email, ad, report)
    • Keep refining until it's perfect (3–6 iterations)
    • Then ask: "Write the system prompt that would have generated this output"
    • Save that prompt – it's now your intellectual property
    Now you have the exact formula to get that quality every time.

    Step 4: Use project folders
    Think of these like rooms in your office with all context on the walls.
    • Create a project for each major area of your life/business
    • Upload your master prompt + all relevant documents
    • Every conversation builds on previous context
    • Share folders with your team for instant knowledge transfer
    I use this for investment decisions, business strategy, even family planning.

    Step 5: Set your custom instructions
    This makes AI remember how you like outputs formatted. Go to Settings → Personalization → Custom Instructions:
    • Tell it your communication style (short, bullet points, no fluff)
    • Remove AI language like "delve" and "moreover"
    • Set your default tone and format preferences
    Never repeat formatting requests again.

    Step 6: Turn everything into custom GPTs
    These are your AI employees that do specific tasks consistently.
    • Take your best system prompts
    • Create custom GPTs for each repeatable task
    • Share them with your team
    • Update once, everyone gets the improvement
    I have custom GPTs for: emails, content creation, financial analysis, hiring, strategy docs.

    Step 7: Refine and improve
    Use AI to teach you AI.
    • Ask it to create your master prompt
    • Ask it to write your system prompts
    • Ask it to suggest custom instructions
    • Ask it to help you build better prompts

    Here's what 92% actually looks like:
    - Content: AI does research, outlines, first drafts. You edit and add your voice.
    - Operations: AI creates SOPs, analyzes processes, suggests improvements. You decide.
    - Finance: AI analyzes reports, creates models, finds insights. You make decisions.
    - Strategy: AI processes information, suggests options. You choose direction.

    The 8% that stays human: Vision, taste, final decisions, and emotional intelligence.

    My team went from thinking AI was "kind of helpful" to saying it's their most valuable employee. It could be yours too.

    -DM

    P.S. If you want my complete prompting template and the 7 system prompts that save me 15+ hours per week, MESSAGE ME the word "AI" and I'll send it over. My gift to you 👊

  • View profile for Greg Coquillo

    AI Infrastructure Product Leader | Scaling GPU Clusters for Frontier Models | Microsoft Azure AI & HPC | Former AWS, Amazon | Startup Investor | Linkedin Top Voice | I build the infrastructure that allows AI to scale

    229,043 followers

    Prompting is an important technique that can help users of tools such as ChatGPT tap into their full potential. However, most users stop at "Write me a blog post" or "Summarize this text," then wonder why the output feels flat or generic.

    But here's the truth: the difference between an average prompt and a powerful one is the structure of your thinking. ChatGPT mirrors the clarity and depth of the question you ask. If you guide it like a collaborator instead of a command box, it starts to think with you, not for you.

    This is where advanced prompting frameworks come in handy. These are the same techniques used by AI power users, researchers, and operators who consistently get strategic, context-rich results. Here are 6 of them that can change how you work with AI 👇

    1.🔸Iterative Refinement – Don't expect perfection on the first try. Refine, re-ask, and build progressively.
    2.🔸Contextual Memory – Keep continuity across chats by referencing previous prompts and discussions.
    3.🔸Multi-Turn Dialogues – Treat your prompt like a conversation, not a one-liner. Layer your questions.
    4.🔸Task-Specific Prompts – Write differently for code, translation, or summarization. Precision wins.
    5.🔸Guided Exploration – Narrow AI's focus to deep-dive into one concept instead of surface-level replies.
    6.🔸Prompt Chaining – Sequence multiple prompts logically, where each response feeds the next.

    Great prompt engineering means thinking like a teacher guiding a very smart student. Once you understand this, AI stops being a tool and starts becoming a true thinking partner.

    Are there other techniques you can add? #AI #PromptEngineering
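Of these six, Prompt Chaining is the most mechanical and the easiest to automate. A minimal sketch, assuming a hypothetical `call_llm` function standing in for your model client:

```python
def chain_prompts(steps, call_llm):
    """Run prompts in sequence, feeding each response into the next step."""
    previous = ""
    outputs = []
    for step in steps:
        # Prepend the prior step's result so each prompt builds on the last.
        prompt = f"Previous result:\n{previous}\n\n{step}" if previous else step
        previous = call_llm(prompt)
        outputs.append(previous)
    return outputs
```

A typical chain might be `["Outline the post", "Draft it from the outline", "Tighten the draft"]`: each response becomes the context for the next prompt.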

  • View profile for Basia Kubicka

    AI PM • AI Agents • Rapid Prototyping • Vibe coding

    49,011 followers

    Most people still prompt GPT-5.2 like GPT-4. The misconception: "Smarter model → less work on prompting." With GPT-5.2, it's the opposite. The model is more capable, but also more programmable. If you don't lean into that, you leave reliability, latency, and trust on the table.

    You don't need fancier prompts. You need prompt contracts: small, reusable spec blocks you standardize across your workflows. At minimum, you want these 5 written down:

    1️⃣ Output & verbosity spec
    Define exactly how you want responses to look.
    > Default length (e.g. 3–6 sentences, or ≤5 bullets)
    > Special rules for simple vs complex tasks
    > How to format changes, risks, next steps
    This alone cuts noise, back-and-forth, and review time.

    2️⃣ Scope & constraints
    Prevent scope creep before it happens.
    > "Do ONLY what was requested"
    > "No extra ideas unless asked"
    > "Align to existing guidelines, don't invent new ones"
    You're telling the model what not to do, which matters more than you think.

    3️⃣ Long-context handling
    Long threads and documents are where things quietly break.
    > Skim and outline key sections before answering
    > Re-state user constraints (timeframe, audience, domain)
    > Anchor claims to specific sections instead of talking generically
    This turns "lost in the scroll" into predictable recall.

    4️⃣ Uncertainty & hallucination guardrails
    Don't expect the model to "just be careful."
    > Call out ambiguity explicitly
    > Offer 1–3 clarifying questions or labeled assumptions
    > For high-risk answers, add a quick self-check step
    You're designing how the model behaves when it doesn't know.

    5️⃣ Tool & structure rules
    Most real value is in tools + structured output.
    > When to use tools vs internal knowledge
    > How to batch or parallelize calls to save time
    > JSON schemas with required vs optional fields
    > "Set missing fields to null, don't guess"
    This is how you get repeatable, testable behavior instead of vibes.
    Once you treat these as shared contracts, a few things happen:
    - Model upgrades (like moving to GPT-5.2) stop breaking flows
    - Evaluation becomes meaningful because behavior is stable
    - New use cases reuse the same specs instead of reinventing prompts
    - You can treat failures as contract violations, not "the model is weird"

    If you use GPT for serious work, your job isn't to write pretty prompts. It's to design a small library of prompt contracts your whole team can lean on. If you don't have these 5 written down yet, that's the homework.

    Full guide here: https://lnkd.in/e48_XPQf

    ----
    ♻️ Repost if your network needs to see this transformation
    ➕ Follow me (Basia Kubicka) for more AI insights
    🔔 Subscribe to my newsletter for deep dives: https://air-scale.kit.com/

    Opinions expressed are my own and do not represent the views, policies, or positions of my employer.
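One way to make these contracts concrete is to keep each one as data and render it into every prompt. A sketch under stated assumptions: the field names below are invented for illustration, and the rule wording follows the post's examples.

```python
# A hypothetical shared contract; field names and wording are illustrative.
REVIEW_CONTRACT = {
    "output_spec": "At most 5 bullets; list changes, risks, and next steps separately.",
    "scope": "Do ONLY what was requested. No extra ideas unless asked.",
    "uncertainty": "If ambiguous, ask up to 3 clarifying questions or label assumptions.",
    "structure": "Return JSON with required fields; set missing fields to null, don't guess.",
}

def render_contract(contract, request):
    """Prepend the contract rules to a user request as one reusable preamble."""
    rules = "\n".join(f"- {name}: {rule}" for name, rule in contract.items())
    return f"Follow this contract:\n{rules}\n\nRequest: {request}"
```

Because the contract is plain data, the same block can be versioned, shared across a team, and checked in evaluations: a response that ignores a rule is a contract violation, not a vibe.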

  • View profile for Dr. Martha Boeckenfeld

    Human-Centric AI & Future Tech | Keynote Speaker & Board Advisor | Healthcare + Fintech | Generali Ch Board Director· Ex-UBS · AXA

    151,026 followers

    Are your ChatGPT prompts falling flat? Want to create prompts that are engaging, authentic, and convincing? Here's a simple 5-step guide to writing better prompts with ChatGPT:

    1️⃣ Talk to ChatGPT like it's your assistant or even a friend, not a robot! Forget programming vibes; give it a name – let's call it "Genie" for fun. Share stories, ask, re-ask, make it a real convo. Trust me, it works like magic.

    2️⃣ Set the stage, spill the deets! Before asking a question, paint a picture. If you're prepping for a marathon, don't just ask how. Share your marathon dreams, like, "I'm a newbie wanting to conquer a marathon in 6 months. Tips, please!" More deets, more spot-on answers!

    3️⃣ Let ChatGPT wear different hats! It's a chameleon! Ask it to think like a teacher, marketer, or even Shakespeare. You won't believe the diverse perspectives you get.

    4️⃣ Keep your Genie in check! Sometimes, Genie gets a bit wild. Ask her to justify her thoughts or gently nudge her back to the topic. It's like steering a friendly chat – guide her, don't boss her!

    5️⃣ Play around, have a blast! Don't be shy to experiment. Throw out quirky prompts like "Describe a day in an ant colony from an ant's view." You'll be amazed at the creative whirlwind Genie can whip up! Nice bedtime stories for your kids.

    Ready to ChatGPT like a pro? Dive in and share your experiences! #chatGPT #aichat #techtalks

  • View profile for Bhavishya Pandit

    Turning AI into enterprise value | $XX M in Business Impact | Speaker - MHA/IITs/NITs | Google AI Expert (Top 300 globally) | 50 Million+ views | MS in ML - UoA

    85,281 followers

    You hit rate limits because your token use keeps sneaking up on you 💯

    And honestly? It's not just "you being bad at prompts." Most modern LLMs are tuned to be extra helpful: more explaining, more context, more "let me walk you through it." That sounds nice, but it habituates you to over-explanations and can bias you toward using thinking mode more often. And big tech doesn't exactly hate that outcome: longer chats = more usage/tokens, more load, more 💰.

    So let's make this practical yet simple. Token quota = input tokens + output tokens. Meaning: the more you paste in, and the more you ask it to produce, the faster you hit limits. So when you panic-switch between LLMs, you're basically changing taxis while still sitting in traffic. Let's fix the real problem: make the model do less work (on purpose).

    How to use fewer output tokens (without begging ChatGPT to "be concise")? Give the model hard boundaries instead:
    ✅ Cap the amount: "Give 5 bullets."
    ✅ Cap the size: "Each bullet max 10 words."
    ✅ Remove filler: "No intro, no summary, no examples."
    ✅ Force a stop: "Stop after bullet 5."
    Why it works: you're not hoping it's concise… you're boxing it in.

    How to use fewer input tokens (the hidden token leak)? Most token waste is YOU pasting a whole novel. Try this instead 👇
    ✅ Don't paste everything. Paste only what changes the answer.
    ✅ Replace long context with "5 facts you must use."
    ✅ Use labels so it doesn't burn tokens guessing what matters. Example labels: GOAL, AUDIENCE, FACTS, RULES, FORMAT, etc.

    Extra moves that save tokens:

    1) Ask for steps, not the whole meal. Better: "Write only section 1." "List options, I'll pick one." "Give an outline only." After this, you can migrate to other LLMs without worrying about context/memory loss. This cuts output and reduces rework.

    2) Control the "thinking out loud" habit. Models love to "explain" unless you forbid it. Add: "No reasoning, no explanations. Just the answer."

    3) Use "one format only". If you don't specify format, it may give headings, examples, summaries, disclaimers, a life story… Say: "Return only a table." "Return only bullets." "Return only JSON." Format control = token control.

    4) Put tight constraints in one line. Long rules create long compliance behavior. Good: "5 bullets, ≤10 words each, no intro/outro."

    If you're always hitting limits at the worst time, it's usually not the model but you. Always remember: prompt engineering isn't just getting the answer in the format you want – it's getting it in that format using the fewest tokens possible 💯

    Hope this helps :) #ai #dev #meme #promptengineering
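The "input tokens + output tokens" arithmetic and the hard-boundary tricks above can be wired into a small helper. A sketch only: the 4-characters-per-token figure is a rough rule of thumb for English text (real tokenizers vary), and the label names follow the post's examples.

```python
def estimate_tokens(text):
    """Very rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def constrained_prompt(facts, task, max_bullets=5, max_words=10):
    """Pack only the facts that matter, plus hard output caps, into one prompt."""
    lines = ["FACTS:"]
    lines += [f"- {fact}" for fact in facts]
    lines += [
        f"TASK: {task}",
        f"FORMAT: exactly {max_bullets} bullets, max {max_words} words each.",
        "No intro, no summary, no examples. Stop after the last bullet.",
    ]
    return "\n".join(lines)
```

Before sending, compare `estimate_tokens` of the whole document you were about to paste against the handful of facts you actually need; the gap is usually where the quota goes.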

  • View profile for Akhil Yash Tiwari

    Building Product Space | Helping aspiring PMs to break into product roles from any background

    35,732 followers

    You asked ChatGPT to write a PRD. It gave you 3 pages in 10 seconds. You felt amazing until you actually read it. Generic features. Vague user segments. No real context. Just... fluff.

    Here's what nobody tells you: GPTs aren't mind readers. They're exam-takers who guess when you don't give them enough information. Think about it… when you didn't know an exam answer, you still cooked up something, right? That's exactly what GPT does when your prompt lacks context. It hallucinates. It fills gaps with imaginative guesses. And suddenly you're stuck in a loop, re-prompting 10 times trying to fix what started wrong.

    So how do you fix this? I have distilled 1000+ hours of my AI product experience into a simple 3-step framework for getting the output you want:

    Role & Context – Who is the AI, and what world is it operating in?
    Task & Guardrails – What should it do, and what should it NOT do?
    Output Format – What should the final answer look like?

    That's it. Now let's see what this looks like in practice.

    1/ Set the Role & Context
    Don't just say "help me with a finance app." Give the AI an identity and a life in your world.
    ❌ Don't do this: Suggest a savings feature for a personal finance app
    ✅ Do this: You are a PM at a consumer fintech app in India focused on first-jobbers (21–28) with ₹30–80k monthly income. They mostly use UPI and credit cards but feel anxious about 'where their money went' at the end of the month. We want to help them build a basic savings habit without overwhelming them with jargon.

    2/ Define the Task & Add Guardrails
    Be specific about what you want AND what you don't want. This is where you prevent hallucination.
    ❌ Don't do this: Give me user segments
    ✅ Do this: Identify the top 3 user segments for this MVP based on savings goals and spending behavior. For each: primary motivation, biggest pain point, and one feature they'd love. Don't make up statistics. If you lack research, say 'Unknown - needs validation.'

    3/ Shape the Output Format
    Show the AI exactly what structure you want, especially if you're pasting this into a doc or PRD.
    ❌ Don't do this: Compare with competitors
    ✅ Do this: Create a comparison table with exactly 3 rows and 3 columns – Feature, Competitor Offering, Gap / Opportunity for Us. Each cell should have less than 6 words. Do not add additional commentary outside the table.

    Follow for more practical AI-PM advice!
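The three-part framework maps directly onto a small template function. A minimal sketch; the section labels are my own shorthand for the post's three parts, not an official format.

```python
def build_prompt(role_context, task, guardrails, output_format):
    """Assemble a prompt from role & context, task & guardrails, and format."""
    guardrail_lines = "\n".join(f"- {g}" for g in guardrails)
    return (
        f"ROLE & CONTEXT:\n{role_context}\n\n"
        f"TASK:\n{task}\n\n"
        f"GUARDRAILS:\n{guardrail_lines}\n\n"
        f"OUTPUT FORMAT:\n{output_format}"
    )
```

The fintech PRD example above would fill the four slots: the persona paragraph as role and context, the segment request as the task, the "don't make up statistics" rules as guardrails, and the table spec as the output format.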

  • View profile for Yuzheng Sun

    I don’t worship abstract intelligence; I care about judgment meeting reality early, and systems that compound.

    34,596 followers

    "AI is getting worse…" Reality check: You're hitting the ceiling of casual usage and don't know how to break through. What feels like AI degradation is actually predictable behavior from a system you don't understand yet. Most people just get frustrated with AI and blame the technology. Advanced AI users accept AI's quirks and shortcomings and build around them.

    Here are AI's 5 critical quirks, and how to deal with them:

    1. Memory loss
    WHY THIS HAPPENS: The ChatGPT UI appends previous conversation turns to the current one. Over time, the context window gets messy, and the foundation model's attention is scattered.
    FIX: Instead of continuously adding to a long conversation history, use the edit button on an old message to generate a fresh response with new context. Check our free lightning course on the mechanism behind this fix: https://lnkd.in/gB-wmpGe

    2. Laziness
    WHY THIS HAPPENS: AI models have output limits and weren't trained extensively on very long generation tasks.
    FIX: Break large tasks into smaller, specific chunks. Work within AI's limits instead of fighting them.

    3. Confident Lies
    WHY THIS HAPPENS: AI's goal is to be "helpful", and it does so by giving you the best answer in its distribution. It doesn't know what it doesn't know.
    FIX: Give AI more context. Ask for sources and cross-reference critical information.

    4. Scattered Thinking
    WHY THIS HAPPENS: Think of AI's context window like RAM in a computer – it has limited space and processes everything at once. When that space gets cluttered with irrelevant information, performance degrades.
    FIX: Start fresh conversations for new topics. Keep context clean.

    5. Personal Skill Degradation
    WHY THIS HAPPENS: You become too reliant on AI, and it makes your work worse.
    FIX: Never use AI-generated content without significant modification that reflects your unique perspective and context. Let AI gather information and suggest options, but make the final choices yourself.
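The "context window as RAM" framing behind the memory-loss and scattered-thinking quirks suggests an obvious mitigation when you control the API calls yourself: keep only the most recent turns that fit a budget. A sketch, assuming a crude characters-divided-by-4 token estimate rather than a real tokenizer.

```python
def trim_history(messages, budget_tokens):
    """Keep the most recent messages that fit the token budget (newest first)."""
    kept, used = [], 0
    for message in reversed(messages):
        cost = max(1, len(message) // 4)  # rough ~4 chars/token estimate
        if used + cost > budget_tokens:
            break  # older messages no longer fit; drop everything earlier
        kept.append(message)
        used += cost
    return list(reversed(kept))
```

This is the programmatic version of "start fresh conversations": old, likely-irrelevant turns are dropped first, so the model's attention stays on recent context.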
    If you're interested in developing a deeper understanding of AI so you can move from a casual user to a builder, I teach a course with Yan Wang where you'll learn to:
    1. Build and deploy functioning AI prototypes
    2. Develop a durable framework for learning new concepts
    3. Become the go-to AI strategist in your role

    Our next cohort kicks off this November: https://bit.ly/4n7RlcH Remember to use "version2" to get $300 off before the end of September.
