Evaluating Workflows for Efficiency

🔮 How To Prioritize UX Work (Framework) (https://lnkd.in/eGQrPm2N), a very practical guide on choosing and estimating the right level of research and UX work needed for a successful project outcome, along with the process to follow and the UX estimates to set. Kindly shared by Jeremy Bird.

🤔 Planning is typically done for the delivery phase only.
🤔 Design, research, discovery, and ideation are not planned.
🤔 Effort, estimates, roadmaps, and capacity are rare for UX work.
🚫 Not every project needs the same level of research/design.
✅ Goal: set realistic expectations for UX work in a timeframe.

Jeremy suggests estimating research and design efforts separately, across different dimensions: we assess research effort by mapping Risk against Problem Clarity, and we estimate design effort by mapping Risk against Level of Complexity (see the sketch after this post).

🔮 Clarity: Low ↔ High
New, unknown problems usually come with a lot of assumptions and very low clarity. Well-known problems, with shared understanding in the team and some extensive research behind them, have a higher degree of clarity.

🔥 Risk: Low ↔ High
Some projects are relatively easy to roll back and don't really affect business-critical workflows (low risk). Others are much more difficult to reverse and operate within users' key journeys (high risk).

🚀 Complexity: Low ↔ High
Self-contained projects in well-understood workflows are typically straightforward (low complexity). Projects that involve many systems, external dependencies, and stakeholders scattered across teams with little existing knowledge are anything but (high complexity).

✅ We start by defining a problem to solve + business impact.
✅ Then, we shape the desired user outcome and success criteria.
✅ Next, we assess design effort and research effort levels.
✅ Run a kickoff meeting to prioritize and decide the scope.
✅ Designers break down UX work, estimate it, and add it to Jira.

Personally, I always find it remarkably difficult to estimate the effort for research or design work. Even after so many years, and with a 20–30% buffer, I'm often underestimating the little nuances, blockers, constraints, and bottlenecks hidden away somewhere between complex dependencies and external stakeholders.

One thing is certain though: considering risk early is a very, very effective way to guide UX work in the right direction. High risk always requires some level of research and discovery. And early prioritization helps UX teams focus their effort where they add the most value, saving time and resources for projects that deliver value to users and businesses.

Finally: I can highly recommend considering John Cutler's Effort vs. Value curves (https://lnkd.in/evrKJUEy) for prioritization work as well. Much of the work isn't completed once it's delivered. More often than not, it will significantly add to maintenance costs over time. We'd better account for it early. #ux #design
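For illustration, here is a minimal sketch of how the two effort mappings could be expressed in code. The thresholds, labels, and effort levels are hypothetical and not part of Jeremy Bird's guide; real assessments happen in the kickoff conversation, not in a lookup table.

```python
# Hypothetical illustration only, not from the linked framework:
# Risk x Clarity -> research effort, Risk x Complexity -> design effort.

def research_effort(risk: str, clarity: str) -> str:
    """Low clarity plus high risk calls for the most research."""
    if risk == "high" and clarity == "low":
        return "full discovery: generative research + validation"
    if risk == "high" or clarity == "low":
        return "targeted research: interviews or usability tests"
    return "lightweight validation: desk research, quick checks"

def design_effort(risk: str, complexity: str) -> str:
    """High risk plus high complexity calls for the most design work."""
    if risk == "high" and complexity == "high":
        return "dedicated design workstream with explicit estimates"
    if risk == "high" or complexity == "high":
        return "focused design spike before committing to delivery"
    return "pattern reuse: design alongside delivery"

print(research_effort(risk="high", clarity="low"))
print(design_effort(risk="low", complexity="high"))
```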
-
Six months ago, a client almost pulled the plug on an AI implementation we were running. Three weeks in. Leadership was aligned. The use case was clear. The tools were live. And yet adoption had started to stall. Usage dropped. Teams quietly slipped back into old workflows. Moments like this define whether an AI project succeeds or dies.

At ALTRD, our instinct isn’t to defend the system we built. Our instinct is to investigate the system we missed. So we paused the rollout and audited what was actually happening inside the workflow. What we found was instructive.

The training had landed well. But the implementation had been designed around how leadership thought the team worked. Not how they actually worked. Two things were quietly breaking adoption.

First, we had optimized the visible workflow but missed an invisible step. There was a key handoff happening informally between two people over WhatsApp. It wasn’t documented anywhere. It never showed up in process charts. But it was where the real decision-making happened. Our redesigned workflow skipped that moment completely.

Second, there was a quiet skeptic in the system. The team lead everyone naturally looked to before trying something new had concerns she hadn’t voiced in any meeting. Not because she was resistant, but because she wasn’t convinced the workflow would hold up under real pressure. Once the team sensed that hesitation, adoption slowed down.

So we fixed the system. We remapped the actual workflow, not the documented one. Then we worked directly with the team lead. Not to sell the tool, but to understand the operational concerns and redesign parts of the system around them.

The engagement expanded. And that project ended up becoming one of the most valuable learning moments for how we implement AI today.

Two lessons we now carry into every engagement at ALTRD: Document the informal workflow, not just the official one. And find the quiet skeptic in the room early. They’re rarely the blocker. They’re usually the signal that something important hasn’t been designed properly yet.

AI implementation isn’t just a technical system. It’s a human system. And if you want adoption to stick, you have to understand both.
-
There’s a huge difference between ‘I got AI to do this amazing thing for social media points’ and ‘I got AI to do this thing that generates a lot of revenue for my business or our clients.’ Real-world AI is very different.

Most agents require small language models. Large context windows and multiple rounds of model calls turn the unit economics of foundational models negative for many use cases.

Everything we build for clients starts with local AI (see the sketch after this post). We spend no more than 2 days trying to get the workflow running on the Dell Pro Max T2 in my office. If it won’t run locally, using a frontier model rarely changes that.

We scale the agent to support a small set of early adopters. This phase is critical. An early adopter cohort has been trained to use agents at their earliest maturity phase. Most users would reject the agent in this raw form. But this phase is intended to rapidly improve the agent’s workflow integration, orchestration, and reliability. Human feedback from trained early adopters improves agent performance faster than any other approach I have found.

We iterate on more than just the LLMs. This phase fills in the knowledge graph, improves tool usage, adds guardrails, and informs the usage of more traditional machine learning models to augment the agent.

When improvements plateau, we assess the agent. It is only promoted if its impact on outcomes meets user or customer expectations. Is it valuable? How does it reorchestrate workflows? Can the business monetize it?

We roll the agent out to an alpha release cohort to scale the feedback flywheel. At this point, we know we have something valuable. We’re trying to improve its reliability and handle more workflow variations before a wider launch. We only evaluate frontier model usage at this phase. We finally know enough to make targeted decisions about where in the workflow frontier model performance could make a big enough difference to be worth considering.

The alpha release also reveals adoption barriers for the agent and the reorchestrated workflow. Most agents require us to craft an adoption journey for users and customers. That typically includes training for internal users and a phased rollout for customers.

When improvement plateaus again, the agent is ready for general release. The process takes 2–3 months, and only about 30% of the workflows we try in my office end up going the distance.

Data and information architecture make a huge difference. One client with a very mature knowledge graph is seeing a workflow success rate of over 50%. Small models perform significantly better for their use cases. #DellProMax
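As a minimal sketch of the "local first" step: a small model served on a local workstation can be queried through an OpenAI-compatible endpoint (for example via Ollama or vLLM), so the workflow code stays unchanged if a frontier model is evaluated later. The base URL, model name, and prompt below are hypothetical placeholders, not the author's actual stack.

```python
from openai import OpenAI

# Point the standard OpenAI client at a locally served small model
# (e.g. Ollama's default OpenAI-compatible endpoint on port 11434).
# Base URL, API key, and model name are placeholders for illustration.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

response = client.chat.completions.create(
    model="llama3.2:3b",  # a small local model stands in before any frontier model
    messages=[
        {"role": "system", "content": "You are a claims-triage assistant."},
        {"role": "user", "content": "Summarize this claim and flag missing fields: ..."},
    ],
)
print(response.choices[0].message.content)
```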
-
Once, we built a machine learning model that was expected to drive a 15% lift in conversions. The result? A shocking 0.01%.

What went wrong? The model worked perfectly, but the business process behind it was too long and complex. By the time the offer reached the clients, most leads were lost. And the kicker? The business case was literally giving money to the clients!

This experience taught us a crucial lesson: even the best machine learning model can fail without an aligned, efficient business process. The model had identified high-value leads, but the operational workflow to turn those leads into conversions was cumbersome and slow. It involved multiple handoffs, redundant steps, and delays that made it nearly impossible for the offer to reach the client in time. In this case, the problem wasn’t technical; it was systemic. The gap between predictive insights and actionable outcomes created friction that nullified the model's value.

When we revisited the process, we streamlined the journey from the model’s output to client interaction. By reducing the time and steps involved, we saw significant improvements, not just in conversion rates but also in the trust clients placed in the business.

This is why aligning AI models with business operations is just as critical as building accurate models. Are your machine learning projects driving real business impact, or are they stuck in the pipeline? Let’s discuss strategies to close the gap and unlock the full potential of your AI investments. Share your thoughts or experiences below!
-
The AI workflow produced great results, yet people did not feel safe relying on the output. ⛔

That was the situation I encountered in a client workshop in Brussels last week, and it is far more common than most organisations like to admit.

The team had invested time and effort into designing an AI-supported workflow. The use case was clear, the technical setup was sound, the data quality was acceptable, and the people involved had already received training on how to use AI. Despite all of this, the workflow was barely used in practice. People ran the AI step, reviewed the output, and then quietly redid the work themselves.

During the workshop, we mapped the real workflow together, step by step, focusing not on how the process was documented but on how the work actually happened on a normal working day. At one point, a participant looked at the whiteboard and said: “I only trust the result after I have checked it myself anyway.” That sentence shifted the entire conversation.

As we continued mapping the process, a pattern became visible: everyone validated AI outputs differently. Some checked everything, even low-risk drafts. Others barely checked high-risk decisions. Accountability was assumed but never explicitly defined. Human validation was happening constantly, but it was invisible, inconsistent, and highly personal.

We redesigned the workflow and introduced a simple checklist for built-in human validation. 💡 This checklist replaced individual safety habits with a shared, explicit process (see the sketch after this post).

✅ Define the risk level of the output. Clarify whether the AI output is a draft, a recommendation, or a decision with external impact.
✅ Decide if validation is required. Make it explicit which outputs require human review and which can flow through without intervention.
✅ Specify the validation moment. Define when validation happens in the workflow and before which downstream step.
✅ Assign clear responsibility. Name the role that validates the output and the role that makes the final decision.
✅ Separate generation from judgment. Ensure the AI prepares content or options, while humans remain accountable for approval and outcomes.
✅ Remove unnecessary checks. Regularly review the workflow to eliminate validation steps that add friction without reducing risk.

Once this checklist was applied, people felt much more confident about the AI output because they knew when human judgment was required.

👉 Is human validation in your AI workflows clearly designed, or is it still improvised? Let’s discuss.
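For illustration, the checklist can be made fully explicit as a routing policy, one rule per risk level. This is a hypothetical sketch, not the workshop's actual artifact; the roles, risk levels, and downstream steps are placeholders.

```python
# Hypothetical sketch: the validation checklist as an explicit policy,
# replacing ad-hoc personal habits with one shared rule per risk level.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    DRAFT = "draft"                            # internal draft, low risk
    RECOMMENDATION = "recommendation"          # informs a human decision
    EXTERNAL_DECISION = "external_decision"    # decision with external impact

@dataclass
class ValidationRule:
    requires_review: bool   # must a human look at this output?
    review_before: str      # the downstream step validation must precede
    validator_role: str     # who validates the output
    decider_role: str       # who is accountable for the final decision

POLICY = {
    Risk.DRAFT: ValidationRule(False, "-", "-", "author"),
    Risk.RECOMMENDATION: ValidationRule(True, "send_to_stakeholder", "analyst", "team_lead"),
    Risk.EXTERNAL_DECISION: ValidationRule(True, "publish", "team_lead", "department_head"),
}

def route(output_risk: Risk) -> ValidationRule:
    """Look up the explicit validation rule for an AI output."""
    return POLICY[output_risk]

print(route(Risk.RECOMMENDATION))
```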
-
📌 How to do Prioritization as a Product Manager.

Product Managers face a problem of plenty. You have so many things to do, many problems, many solutions, and many suggestions, but you are always limited by time, bandwidth, and resources. So you need to obsessively prioritize and filter ideas before you put them in the roadmap. But how do you prioritize?

The simplest yet most powerful framework that most PMs rely on is the Impact vs. Effort framework.

Impact is determined by:
- Potential revenue estimate,
- Customer value,
- Alignment with company goals,
- Demand from the market, or
- Any other relevant metrics that align with product goals.
Impact estimation is mostly the responsibility of the product manager.

Effort is determined by:
- Development complexity,
- Engineering effort,
- The time and cost required,
- Operational complexity, etc.
Effort estimation is mostly done by the delivery teams: engineering, design, ops, etc. This is a collaborative exercise.

The next step is to visualize this through an impact vs. effort matrix (see the sketch after this post). Provided that the estimations are done correctly, the low-effort, high-impact items are picked up first, and everything else is prioritized in a logical order.

📌 3 tips to take your prioritization game to the next level:

1. Consider tradeoffs at every step: some high-effort ideas can be of high strategic importance; similarly, some low-impact ideas can be critical for the customer experience. Understand the situation from all angles.
2. Look out for red flags: all ideas look high impact, or the backlog is completely filled with low-effort, low-impact items. This indicates that either the PM is not competent at impact estimation or is not considering enough ideas during product discovery before deciding on the best one.
3. Validate high-effort ideas by first converting them into low-effort experiments. For example: rather than translating your whole website into all Indian languages, translate the most popular pages into 3 popular languages, observe the results, and then decide to roll back or go all in.

📌 Other frameworks for prioritization:

There will be times when you'll need more detailed frameworks. Some other helpful ones are:

1. Kano: puts customer satisfaction at the center and distinguishes between basic expectations, performance attributes, and delighters.
2. MoSCoW: categorizes requirements into four priority levels: Must have, Should have, Could have, and Won't have.
3. RICE: adds the two dimensions of Reach and Confidence to make Impact vs. Effort more reliable and exhaustive.

✨ Prioritization is a supercritical skill for product managers, in their day-to-day work, in stakeholder management, and also in interviews.

Do you think this would be helpful for you? I share helpful insights for product managers almost every day, consider connecting here 👉🏽 Ankit Shukla to not miss out. #productmanagement #prioritization
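Here is a minimal sketch of the impact vs. effort matrix in code, assuming 1–10 scores gathered from the PM (impact) and the delivery teams (effort). The backlog items and the threshold are hypothetical.

```python
# Illustrative impact vs. effort matrix: score each item, then sort
# so low-effort, high-impact items surface first. Scores are invented.

def quadrant(impact: int, effort: int, threshold: int = 5) -> str:
    """Classify an item into one of the four matrix quadrants."""
    if impact >= threshold and effort < threshold:
        return "quick win (do first)"
    if impact >= threshold:
        return "big bet (plan carefully)"
    if effort < threshold:
        return "fill-in (do when idle)"
    return "money pit (avoid)"

backlog = [
    ("Translate top pages into 3 languages", 7, 3),
    ("Full site translation into all languages", 8, 9),
    ("Tweak empty-state copy", 2, 1),
]

# Sort by (effort - impact): the best effort-to-impact ratio comes first.
for name, impact, effort in sorted(backlog, key=lambda x: x[2] - x[1]):
    print(f"{quadrant(impact, effort):26} {name}")
```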
-
A company doesn’t stall because people are incompetent. It stalls because work is trapped inside individuals.

If progress slows down when one person is unavailable, you don’t have a capacity problem. You have a system problem.

Here’s the uncomfortable truth: high performers often become bottlenecks. Not because they want control, but because they’ve never externalised their judgment.

Real scale begins when you move from personal execution → to institutional logic.

A few hard disciplines that change everything:

Document judgment, not just steps. If you make the same decision twice, it needs criteria, not memory.

Design decision frameworks. Teams don’t need permission for every move. They need clarity on:
• What “good” looks like
• Boundaries
• Trade-offs
• Non-negotiables

Identify friction points. Where does progress stop when a senior leader is out? That is your next system to build.

Convert recurring work into structure:
- Templates.
- Checklists.
- Operating rhythms.
- Review cadences.
Consistency reduces chaos. Architecture reduces escalation.

Train for outcomes, not micro-steps. Teach intent. Let execution evolve.

The goal is not to remove leadership. It is to move leadership upward. From operator → to architect.

Systems don’t dilute impact. They compound it. And at scale, compounding beats effort, every time.

#OrganizationalDesign #LeadershipEvolution #SystemsThinking #ExecutionExcellence #ScalingUp
-
I watched my team burn hours and achieve nothing that mattered.

Emails flying. Meetings back-to-back. Tasks stacking up. We were busy. We felt productive. But the results told a different story.

The real question became: what actually moves the needle?

I started tracking every task against measurable outcomes. I aligned the team around the top three priorities. Meetings became decision-making sessions. Deep work blocks were protected. Low-impact tasks were delegated or dropped.

The transformation wasn’t overnight. But slowly, busy teams became strategic teams. Focus shifted from doing everything to doing what matters: we reclaimed hours, we saw tangible results, and the team felt calmer, more empowered, and more motivated.

If your team feels constantly busy but stretched thin, try this:
• Track what drives impact.
• Align efforts around the highest priorities.
• Protect time for focused work.

The shift from busy to strategic isn’t easy, but it’s worth it.

I’d love to hear from you: what’s one change you’ve made that helped your team focus on what truly matters?
-
Avoid the “Shiny Tool Trap” – make automation work for you!

Imagine pouring six figures into a tool that promises efficiency… only to realize it amplifies your problems instead of solving them. That’s the Shiny Tool Trap, and it’s costing companies millions. 💸

Automation can be a game-changer, but only if you have the right strategy. Here’s how to avoid the biggest pitfalls:

1. The Shiny Tool Trap
Pitfall: falling for the latest software without understanding your processes. Tools don’t fix broken workflows; they just make them fail faster.
Fix: map your processes first. Audit them ruthlessly. Ask: “Does this step add value?” If not, redesign it. Automation amplifies good processes; it doesn’t fix bad ones.

2. The Human Blind Spot
Pitfall: thinking automation is a “set it and forget it” deal. People resist change, and ignoring their concerns leads to failure.
Fix: work with your team, not just for them. Involve end-users early. Train them well. Celebrate small wins (e.g., “This bot saves us 10 hours/week!”). Change management is crucial.

3. The Feedback Black Hole
Pitfall: believing your automated process is “done.” Markets shift, regulations change, and customer needs evolve. Static automation becomes obsolete.
Fix: build feedback loops. Monitor KPIs, gather user insights, and iterate. Think of automation as a cycle, not a checkbox (see the sketch after this post).

Why this matters: process automation isn’t just about cutting costs; it’s a growth engine. But only if you avoid these traps.

At GBTEC Group, we’ve helped companies turn automation into a strategic advantage. How? By pairing tech with human-centric design and agile adaptation.

Which of these automation pitfalls have you seen firsthand?
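As a hedged sketch of fix #3: a feedback loop can be as simple as comparing a rolling KPI window against a baseline and flagging the automated process for review when it drifts. The metric, baseline, and tolerance below are hypothetical, not a GBTEC product feature.

```python
# Illustrative feedback loop for an automated process: track a rolling
# KPI window and flag the process for review when it drifts from baseline.
from collections import deque

class KpiMonitor:
    def __init__(self, baseline: float, tolerance: float = 0.15, window: int = 30):
        self.baseline = baseline      # the KPI level the automation was built for
        self.tolerance = tolerance    # relative drift that triggers a review
        self.values = deque(maxlen=window)

    def record(self, value: float) -> None:
        self.values.append(value)

    def needs_review(self) -> bool:
        if len(self.values) < self.values.maxlen:
            return False  # not enough data yet to judge drift
        avg = sum(self.values) / len(self.values)
        return abs(avg - self.baseline) / self.baseline > self.tolerance

monitor = KpiMonitor(baseline=0.92)   # e.g. straight-through processing rate
for rate in [0.91, 0.88, 0.74]:       # hypothetical daily observations
    monitor.record(rate)
print(monitor.needs_review())         # False until the window fills
```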
-
The Project Management Triangle suggests that you have to choose between speed, quality, and cost. But is this true for software, too?

Recent evidence shows that the triangle needs rethinking. High-quality code doesn't take longer to write; on the contrary. Speed and quality aren't opposing forces; in fact, quality code is the key to sustained speed, allowing you to ship more, faster.

What evidence do I have for these claims? Over the past few years, CodeScene's research team has studied the relationship between code quality and business outcomes. Here's what we found:

🎯 "Code quality" can be reliably measured through the Code Health metric (Red, Yellow, Green code).
💡 Teams deliver new features and fix bugs twice as fast in healthy (Green) code compared to problematic code.
💡 Green code reduces the risk of cost overruns by 9X, due to less time spent trying to understand the existing solution.
🐞 It also has 15X fewer defects on average than Red code, translating directly into improved customer satisfaction and less unplanned work.
🕺 Green, healthy code cuts onboarding time in half, allowing new developers to contribute faster.
﹩ And even with Green, healthy code, there's a progressive gain to improving code quality.

Given these competitive advantages, shouldn't code quality be a standard business KPI?
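For a rough feel of how a banded metric could be tracked like a business KPI, here is a toy illustration; it is not CodeScene's actual Code Health algorithm, and the module names and scores are invented.

```python
# Toy illustration (not CodeScene's algorithm): band modules into
# Red/Yellow/Green from a 1-10 health score so the bands can be
# reported and trended like any other KPI.

def band(score: float) -> str:
    if score >= 8.0:
        return "Green"   # healthy: cheap to change, few defects
    if score >= 4.0:
        return "Yellow"  # warning signs: rising cost of change
    return "Red"         # problematic: slow delivery, defect-prone

modules = {"checkout.py": 9.1, "legacy_billing.py": 2.7, "search.py": 6.3}
for name, score in modules.items():
    print(f"{name:18} {band(score)}")
```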