Prioritizing Outcomes vs. Methods in LLM Projects


Summary

Prioritizing outcomes vs. methods in LLM projects means focusing on solving real business problems and defining clear goals before choosing specific AI tools or models. Instead of chasing popular technology, this approach ensures that projects deliver value and measurable results that matter to the organization.

  • Start with impact: Identify the main business challenge or opportunity and decide how success will be measured before selecting any technical solution.
  • Choose flexible tools: Adapt your technology choices based on evolving needs, and stay open to using simpler models or combinations of methods if they fit the problem best.
  • Build repeatable workflows: Create templates and frameworks for prompts and processes so outcomes can be tracked and improved consistently across projects.
Summarized by AI based on LinkedIn member posts
  • Kashif Manzoor

    GenAI Strategist | Enterprise AI Maturity & Governance | Helping Organizations & Professionals Move from Experimentation to Operational AI

    Start With the Use Case, Not the LLM

    When I first began guiding teams through Gen AI adoption, most conversations started with: “Which LLM should we use: Cohere, OpenAI, Anthropic, or Gemini?” That’s a trap I’ve seen even experienced teams fall into. The real question should be: “What business outcome are we solving for?”

    An LLM-first mindset leads to tool sprawl: multiple APIs, overlapping features, and no measurable ROI. A use-case-first approach, on the other hand, forces clarity: What process are we improving? What knowledge or data powers it? How will success be measured? Only once those questions are clear does the LLM platform matter, and often you’ll find the answer isn’t a single model but a combination of tools that best fit the workflow.

    A Simple Framework
    1. Define the Outcome: Start with a business metric: time saved, revenue increased, risk reduced.
    2. Identify the Friction Point: What’s slowing this process down? Data? Human effort? Latency?
    3. Match Platform Capabilities: Choose the AI stack (e.g., LLM, vector DB, agentic tools) that targets that friction.
    4. Prototype Fast, Measure Early: Build a thin slice of value before expanding.
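The framework above amounts to a checklist that must be complete before any model is chosen. A minimal sketch of that gate in Python; the `UseCase` fields and the `invoice_triage` example are illustrative, not from the original post:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One candidate Gen AI use case, framed outcome-first."""
    outcome_metric: str        # the business metric to move, e.g. "minutes saved"
    friction_point: str        # what is slowing the process down
    required_stack: list = field(default_factory=list)  # capabilities targeting the friction
    thin_slice: str = ""       # smallest prototype that proves value

def is_ready_to_build(uc: UseCase) -> bool:
    # Only pick a platform once every question in the framework is answered.
    return all([uc.outcome_metric, uc.friction_point, uc.required_stack, uc.thin_slice])

invoice_triage = UseCase(
    outcome_metric="minutes to route an incoming invoice",
    friction_point="manual classification of PDFs",
    required_stack=["LLM", "vector DB"],
    thin_slice="classify 100 historical invoices, measure accuracy",
)
print(is_ready_to_build(invoice_triage))  # True
```

The point of the gate is ordering: the `required_stack` field is filled in last, after the outcome and friction are pinned down.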

  • Tomasz Kulakowski

    Chairman of the Board at deepsense.ai | Angel Investor | Harvard Business School

    A week ago I was talking with a VP of AI, a colleague from my network. We ended up on a familiar topic: why so many AI projects stall. The conclusion was that it’s not usually the models. It’s the lack of prioritisation.

    Too often, someone jumps straight into the “hottest” LLM or agent idea. A few months later? Sunk costs, scattered pilots, no ROI. I’ve seen this pattern across companies.

    At one workshop, a global tech leader brought together 50+ engineers and business leaders. They generated 150+ ideas for LLM features. After scoring for feasibility and impact, only two made the cut, and those two were the ones that aligned with ROI and scaling. Another client arrived with a long wish list of AI projects across departments. After structured evaluation, they left with one clear initiative. Not the flashiest, but the one that delivered immediate productivity gains and could scale without drama.

    That’s the real inflection point: when technical depth meets business value. Get it right → you move fast, scale confidently, and win buy-in. Get it wrong → you spend months debugging projects that should never have started.

    Over the years, in different companies and finally at deepsense.ai, I’ve learned the same lesson: tech projects don’t fail because LLMs are weak. They fail when leaders skip prioritisation. This is becoming increasingly relevant: starting a project is becoming easier, but launching one successfully is becoming harder. What do you think?
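The scoring pass the post describes (150+ ideas reduced to two by feasibility and impact) can be sketched as a simple threshold filter. The idea names and scores below are made up for illustration:

```python
# Hypothetical prioritisation pass: score each idea 1-5 on impact and
# feasibility, then keep only the ideas that clear BOTH bars.
ideas = {
    "chat over internal wiki":   {"impact": 5, "feasibility": 4},
    "auto-generate ad copy":     {"impact": 2, "feasibility": 5},
    "agent that files expenses": {"impact": 3, "feasibility": 2},
    "summarise support tickets": {"impact": 4, "feasibility": 4},
}

def shortlist(ideas: dict, min_impact: int = 4, min_feasibility: int = 4) -> list:
    # An idea makes the cut only if it clears both thresholds:
    # high-impact moonshots and easy wins with no ROI are both dropped.
    return sorted(
        name for name, score in ideas.items()
        if score["impact"] >= min_impact and score["feasibility"] >= min_feasibility
    )

print(shortlist(ideas))  # ['chat over internal wiki', 'summarise support tickets']
```

Requiring both dimensions to pass, rather than summing them, is what stops a flashy-but-infeasible idea from sneaking through on impact alone.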

  • Shiva Arunachalam

    AI Platform Leader — Communications, Real-Time Systems @ Uber | LLMs | Distributed Systems | Mentor | Investor | Educator

    🚀 Some observations from our efforts to build Applied AI products at scale

    The past few months have been a masterclass for me and my team in learning how to build and ship Applied AI products in environments where both the tech and the business move at very different speeds, and at Uber scale. A few takeaways:

    1️⃣ The ecosystem is very fragmented and evolving fast. What looks like a solid choice today may feel outdated a month later. In large organizations, where decision cycles are long, this can get messy quickly. The way forward: make decisions at the right level of abstraction, anchored to the business problem rather than today’s tech capabilities, so you can adapt, mix, and evolve over time.

    2️⃣ Don’t get lost in the vendor/model rabbit hole. You can spend months mapping who does what with what model and end up with a pretty Gartner-style chart and some amazing decks, but solve zero problems. Keep the focus on the business problems first, make tech choices expendable, and once you have a viable solution, ship it. There’s no “perfect” solution out there; the same models offer different results with a lot of fine-tuning, and as long as you’re making progress on your key metrics, you’re on the right path.

    3️⃣ Leaders need a framework to cut through the noise. When explaining AI options, anchor on a simple set of dimensions: quality of outcomes, cost, feature depth, and latency. The last one is underrated: most LLMs are too slow for true real-time decisions at the quality you need. Make it faster while holding the quality bar, and the cost blows up 10x. That’s why augmenting with other ML models is essential.

    4️⃣ Define what “good” means for you. This isn’t just about picking the right model; it’s about building the internal muscle: independent datasets, human and automated evaluations, published benchmarks, and an understanding of how good an AI system is in the context of your business problem. The tradeoff matrix of quality of outcomes vs. cost is where the real decisioning happens.

    The AI world is moving crazy fast, but anchoring on business problems, building pragmatic frameworks, and defining your own defensible “good” is how you cut through the chaos.
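The framework dimensions named above (quality of outcomes, cost, feature depth, latency) can be applied as a filter over candidate models. A sketch under made-up numbers; the model names, scores, and thresholds are all illustrative, not Uber's actual stack:

```python
# Hypothetical candidates scored along the dimensions from the post.
# Numbers are invented for illustration only.
candidates = [
    {"name": "large-llm",  "quality": 0.92, "cost_per_1k": 0.0300, "p95_latency_ms": 2400},
    {"name": "small-llm",  "quality": 0.84, "cost_per_1k": 0.0020, "p95_latency_ms": 300},
    {"name": "classic-ml", "quality": 0.80, "cost_per_1k": 0.0001, "p95_latency_ms": 15},
]

def viable(c: dict, min_quality: float, max_latency_ms: int, max_cost: float) -> bool:
    """Keep only options that meet YOUR definition of 'good'."""
    return (c["quality"] >= min_quality
            and c["p95_latency_ms"] <= max_latency_ms
            and c["cost_per_1k"] <= max_cost)

# A real-time path with a tight latency budget rules out the biggest model,
# which is why augmenting with smaller or classic ML models matters.
realtime = [c["name"] for c in candidates if viable(c, 0.80, 500, 0.01)]
print(realtime)  # ['small-llm', 'classic-ml']
```

The thresholds encode the "define what good means for you" step: change the latency budget or the quality bar and a different subset of the stack survives.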

  • Kavita Ganesan

    Practical AI Strategies for Sustainable Growth • Chief AI Strategist & Architect • Keynote Speaker

    I sat in yet another strategy meeting last week where a company had already decided on their first AI project. The problem? It wasn’t the right one to solve. It was labeled "low-hanging fruit." Easy win. Quick ROI. And the solution? Of course: LLMs.

    But when I asked:
    - What makes it the right problem to solve at this time?
    - How will success be measured?
    - Is there a better AI approach than the largest LLM for this problem?
    Silence.

    The reality is that many companies are picking AI projects based on what’s trending, not what actually makes business sense. They’re forcing tools like LLMs onto problems because they are the cool tech, not because they’re the right fit. And more often than not, the problem itself hasn’t been vetted for impact, feasibility, or measurable outcomes. This is why so many enterprise AI projects fail before they even get off the ground.

    Here’s what companies should be doing instead:
    1️⃣ Define the business impact first: Not every problem is worth solving with AI.
    2️⃣ Be open to different AI approaches: LLMs are powerful, but they are not a one-size-fits-all solution. In many cases, simpler models work equally well for a problem.
    3️⃣ Validate feasibility before committing: Can this problem be solved with the available data? Is an LLM needed? What are the risks? Answer these questions first.

    AI projects in a business context shouldn’t start with technology; they should start with business value.

  • Ameya Kanitkar

    Co-founder & CTO at Larridin (a16z + Google backed) | AI ROI Measurement Platform

    Perplexity recently put out an "AI at Work" guide. It's a practical read, and it's packed with patterns you can reuse even if your team runs on ChatGPT, Claude, Gemini, or something else entirely. Here are 5 takeaways I'm adopting (with copy/paste examples):

    1) Fix the workflow friction before you chase "smart outputs." Most productivity loss isn't about model quality; it's context switching. Example: After a 45-minute meeting, paste your rough notes and ask: "Summarize decisions, open questions, and owners. Output: a Slack update + a Jira-ready task list." Now you're not re-listening, rewriting, and copying into 3 different tools.

    2) Prompt for outcomes + format, not keywords. LLMs respond better when you specify the artifact you actually need. Instead of "Help with product launch," try: "Create a 1-page launch plan for Feature X. Audience: GTM + Eng. Include timeline, owners, risks, and launch checklist. Max 350 words."

    3) Delegate like you would to a strong teammate: steps + constraints + definition of done. Single-shot prompts are fine. Multi-step prompts are dependable. Example: "Step 1: Identify 3 plausible root causes from this incident summary. Step 2: Ask me 5 clarifying questions. Step 3: Draft a postmortem with: impact, timeline, root cause, fixes, follow-ups."

    4) Standardize quality with reusable "prompt assets." The unlock here is repeatability, not clever prompting. Example: Create a "Weekly Exec Update" template: "Write in 6 bullets: outcomes, metrics, risks, asks, next week priorities, dependencies. Keep each bullet < 18 words." Reuse it every Friday. Your updates become consistent across the team.

    5) Close the loop with judgment + lightweight verification. LLMs accelerate work, but you still own correctness. After it drafts a customer email or a PRD, ask: "List assumptions you made. What could be wrong? What would you verify? Provide 3 counterarguments." This catches hallucinations and sharpens decision quality.

    Perplexity wrote the guide for their product, but these patterns feel tool-agnostic; they're really about building better workflows. See comments for the link to the full report.
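Takeaway 4, reusable "prompt assets," is straightforward to operationalize: store the vetted wording once and only substitute the variable parts. A minimal sketch using Python's standard-library `string.Template`; the function name and placeholder fields are illustrative, not from the guide:

```python
from string import Template

# Hypothetical prompt asset: one vetted template reused every week
# instead of re-writing the prompt ad hoc.
WEEKLY_EXEC_UPDATE = Template(
    "Write a weekly exec update for $team in 6 bullets: "
    "outcomes, metrics, risks, asks, next week priorities, dependencies. "
    "Keep each bullet under 18 words. Source notes:\n$notes"
)

def render_update_prompt(team: str, notes: str) -> str:
    # substitute() raises KeyError if a field is missing, so a broken
    # template fails loudly instead of producing a half-filled prompt.
    return WEEKLY_EXEC_UPDATE.substitute(team=team, notes=notes)

prompt = render_update_prompt(
    team="Platform Eng",
    notes="- shipped v2 ingest\n- p95 latency regressed 8%",
)
print(prompt)
```

Keeping the template in one shared constant is what makes updates consistent across the team: edits to the format happen in one place, and every Friday's prompt inherits them.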
