True human + AI collaboration means humans should participate in the AI's thinking process. A very interesting new paper proposes a "Collaborative Workshop" approach to extended chain-of-thought processes such as deep research. The approach rests on three principles:

- Transparency: the agent's reasoning, file system, and terminal outputs are fully visible in real time.
- Symmetrical Control: humans and AI have equal authority to modify the workspace. A human can edit a code file or a plan document just as easily as the agent can.
- Role Fluidity: the workflow can seamlessly shift between AI-led (autonomous) and human-led (assisted) modes.

Beyond the specifics of the approach outlined in this paper, these principles are excellent starting points for all AI interface design. The system realizes them by externalizing the agent's thinking into a visible "Plan-as-Document" markdown file (TODO.md). Users can hit "Pause," edit the TODO.md file to correct the agent's strategy, and hit "Resume." The agent then reads the updated plan and adjusts immediately. Despite being designed for collaboration, the system proves highly capable autonomously: ResearStudio achieved 74.09% on the GAIA benchmark, outperforming OpenAI's DeepResearch (67.36%) and other state-of-the-art systems. The paper gives concrete examples of how human participants in the collaborative thinking workflow produce better results. "It transforms the agent from an opaque, brittle tool into a resilient, trustworthy partner, providing the essential safeguard needed to deploy autonomous systems on complex, real-world problems." Full code is available with the paper. Image created by Nano Banana Pro
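The pause/edit/resume loop can be sketched as an agent that treats TODO.md as the single source of truth and re-reads it before every step, so human edits take effect immediately. This is a minimal illustration under assumed conventions (markdown checkboxes marking steps), not the paper's actual implementation:

```python
import threading
import time
from pathlib import Path

class PlanAsDocumentAgent:
    """Minimal sketch of a plan-driven agent loop.
    Class and method names are illustrative, not the paper's code."""

    def __init__(self, plan_path="TODO.md"):
        self.plan_path = Path(plan_path)
        self.paused = threading.Event()  # set() = paused, clear() = running

    def pending_steps(self):
        # Unchecked markdown checkboxes are treated as remaining work.
        lines = self.plan_path.read_text().splitlines()
        return [l for l in lines if l.strip().startswith("- [ ]")]

    def mark_done(self, step):
        text = self.plan_path.read_text()
        self.plan_path.write_text(
            text.replace(step, step.replace("- [ ]", "- [x]", 1), 1)
        )

    def run(self, execute):
        # Re-read the plan every iteration so a human who hit "Pause",
        # edited TODO.md, and hit "Resume" immediately changes the strategy.
        while True:
            if self.paused.is_set():
                time.sleep(0.1)  # human is editing the plan; wait
                continue
            steps = self.pending_steps()
            if not steps:
                break
            execute(steps[0])
            self.mark_done(steps[0])
```

A human pausing the agent corresponds to `agent.paused.set()`, editing the file on disk, then `agent.paused.clear()`; the next iteration picks up the revised plan.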
Implementing Collaborative Autonomy
Summary
Implementing collaborative autonomy means designing systems where humans and AI agents work together, sharing control and decision-making while maintaining transparency and trust. This approach blends human oversight with AI capabilities, allowing both to contribute and adapt as situations change.
- Build transparent workflows: Make sure users can see and understand agent decisions, actions, and reasoning in real time so that trust and accountability are maintained.
- Establish clear boundaries: Define which decisions require human judgment versus those agents can handle, and set protocols for seamless handoffs when needed.
- Align roles and principles: Encourage flexible shifts between human-led and AI-led processes, and guide actions with clear rules that support teamwork and consistent standards across departments.
You're probably one of those leaders who say they want independent teams. But in reality, everything still runs through them. You become the decision bottleneck. Your team waits instead of thinking. And autonomy becomes a buzzword rather than a capability. Left unchecked, it gets worse:

- Decisions slow down as you get busier
- Standards become inconsistent when you're absent
- People optimise for your approval, not good judgment

Fixing this doesn't require more control. It requires principle-led leadership. Here's how to actually implement it:

1. Start redirecting the questions. When asked for a decision, respond with:
→ "What principle should guide this?"
→ "What would you do if I wasn't here?"

2. Define 4–6 non-negotiable principles. Clear rules only. No vague values.
→ "Long-term trust over short-term gain"
→ "Speed matters, but not at the cost of quality"

3. Reward thinking, not just outcomes. When someone makes a call:
→ Praise alignment with principles even if the result isn't perfect
→ Correct decisions that violate standards, even if they "worked"

4. Use simple decision frameworks. Give structure to thinking:
→ Reversible vs irreversible decisions
→ Risk vs impact
→ Customer first, internal convenience second

5. Hold the line under pressure. No "just this once" shortcuts. If principles disappear when it's hard, they were never real.

6. Narrate your decisions out loud. Don't just decide; explain with context:
→ "I'm choosing A because it aligns with B principle, despite C risk"

7. Step back on purpose. Create space where decisions happen without you:
→ Rotate ownership
→ Review after, not before

You'll notice the changes when you do this consistently. Your team will start making the right call even when you're not there. That's the real point: from dependence on you to dependence on sound judgment. Share and follow Abrar S. for more content
-
Designing UX for autonomous multi-agent systems is a whole new game. These agents take initiative, make decisions, and collaborate; the old click-and-respond model no longer works. Users need control without micromanagement, clarity without overload, and trust in what's happening behind the scenes. That's why trust, transparency, and human-first design aren't optional — they're foundational.

1. Capability Discovery
One of the first barriers to adoption is uncertainty. Users often don't know what an agent can do, especially when multiple agents collaborate across domains. Interfaces must provide dynamic affordances, contextual tooltips, and scenario-based walkthroughs that answer: "What can this agent do for me right now?" This ensures users onboard with confidence, reducing trial-and-error learning and surfacing hidden agent potential early.

2. Observability and Provenance
In systems where agents learn, evolve, and interact autonomously, users must be able to trace not just what happened, but why. Observability goes beyond logs; it includes time-stamped decision trails, causal chains, and visualizations of agent communication. Provenance gives users the power to challenge decisions, audit behaviors, and even retrain agents, which is critical in high-stakes domains like finance, healthcare, or DevOps.

3. Interruptibility
Autonomy must not translate to irreversibility. Users should be able to pause, resume, or cancel agent actions with clear consequences. This empowers human oversight in dynamic contexts (e.g., pausing RCA during live production incidents) and reduces fear around automation. Temporal control over agent execution makes the system feel safe, adaptable, and co-operative.

4. Cost-Aware Delegation
Many agent actions incur downstream costs: infrastructure, computation, or time. Interfaces must make the invisible cost visible before action. For example, spawning an AI model or triggering auto-remediation should expose an estimated impact window. Letting users define policies (e.g., "Only auto-remediate when risk score < 30 and impact < $100") enables fine-grained trust calibration.

5. Persona-Aligned Feedback Loops
Each user persona, from QA engineer to SRE, will interact with agents differently. The system must offer feedback loops tailored to that persona's context. For example, a test-generator agent may ask a QA to verify coverage gaps, while an anomaly agent may provide confidence ranges and time-series correlations for SREs. This ensures the system evolves in alignment with real user goals, not just data.

In multi-agent systems, agency without alignment is chaos. These principles help build systems that are not only intelligent but intelligible, reliable, and human-centered.
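A cost-aware delegation policy like the one quoted above ("only auto-remediate when risk score < 30 and impact < $100") can be sketched as a simple guardrail check before an action runs. Field names and thresholds here are illustrative assumptions, not any product's API:

```python
from dataclasses import dataclass

@dataclass
class DelegationPolicy:
    """User-defined envelope within which an agent may act autonomously."""
    max_risk_score: float = 30.0   # agent's risk estimate, 0-100
    max_impact_usd: float = 100.0  # estimated downstream cost

@dataclass
class ProposedAction:
    name: str
    risk_score: float
    impact_usd: float

def decide(action: ProposedAction, policy: DelegationPolicy) -> str:
    """Auto-approve inside the policy envelope; otherwise hand off to a human."""
    if (action.risk_score < policy.max_risk_score
            and action.impact_usd < policy.max_impact_usd):
        return "auto-approve"
    return "escalate-to-human"
```

The point of the sketch is that the thresholds belong to the user, not the agent: widening the envelope is an explicit act of trust calibration rather than a hidden model setting.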
-
AI Agents & The Need to Consider an Agentic Spectrum Approach

We need to move beyond the fixation on fully autonomous AI Agents (for now) and explore how varying levels of agency can be integrated into everyday applications. Introducing incremental autonomy allows users to benefit from AI assistance while maintaining control over critical decisions and actions, fostering a collaborative environment where human oversight and AI capabilities complement each other seamlessly.

Currently, the performance of AI Agents falls short of expectations. For instance, the Claude AI Agent Computer Interface (ACI) achieves only 14% of human-level performance. In recent benchmarking, the top-performing AI Agent resolved just 24.0% of tasks, despite being the most expensive model, with an average cost of $6.34 per task and requiring 29.17 steps on average, indicative of high computational effort. AI Agents struggle with long-horizon tasks, highlighting the need for human oversight as a key component.

Recent research shows that integrating humans as checkpoints within agentic workflows can mitigate the limitations of fully autonomous agents. In these workflows, an agentic layer in applications orchestrates and executes sub-tasks, optimising efficiency through discovery and adaptation. By blending AI's capabilities with human oversight, these workflows ensure continuous improvement and adaptability in automated systems.
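The checkpoint pattern described above amounts to a gate between agent steps: low-stakes sub-tasks run autonomously, while flagged ones wait for human sign-off so errors can't compound over a long horizon. A minimal sketch, with all function names and callbacks assumed for illustration rather than taken from any specific framework:

```python
from typing import Callable, List

def run_with_checkpoints(
    subtasks: List[str],
    execute: Callable[[str], str],
    needs_review: Callable[[str], bool],
    human_approve: Callable[[str, str], bool],
) -> List[str]:
    """Run an agentic workflow with human checkpoints.

    Each sub-task is executed by the agent; sub-tasks flagged by
    `needs_review` are shown to a human along with their output.
    A rejection halts the workflow before later steps build on a
    bad result -- the long-horizon failure mode noted above.
    """
    results = []
    for task in subtasks:
        output = execute(task)
        if needs_review(task) and not human_approve(task, output):
            break  # human rejected: stop rather than compound the error
        results.append(output)
    return results
```

In a real application `human_approve` would surface a UI prompt; sliding the `needs_review` predicate from "everything" toward "nothing" is exactly the incremental-autonomy spectrum the post argues for.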
-
HR's Real-World AI Agent Playbook – HR Techie Corner

Salesforce reported their AI agents handle 32,000 customer conversations weekly with 83% resolution rates [Salesforce 2025]. ServiceNow executives predict autonomous agents will coordinate everything from employee onboarding to resource allocation [Deloitte 2025]. The technology works, but I regularly watch HR teams rush toward agent adoption without addressing coordination gaps that will derail implementations.

Last month, a financial services company spent three months mapping approval workflows before deploying compliance agents. They discovered different departments used conflicting criteria for identical decisions, and escalation paths often dead-ended with people lacking authority to resolve issues. This groundwork proved essential because agents amplify existing organizational patterns: strong coordination gets stronger, dysfunction spreads faster.

The pattern separating successful implementations from expensive disappointments comes down to collaborative readiness. Start by auditing how decisions actually flow through your organization, not how they're supposed to flow. Select one cross-functional process, like new-hire onboarding, and trace every handoff, approval, and communication touchpoint. Document both the formal process and what happens when departments encounter exceptions.

Then establish accountability frameworks before deployment. When an AI agent makes decisions that create problems, you need clear investigation and correction protocols ready. Salesforce calls this "trust architecture": transparency into agent decisions and control mechanisms for human intervention. This isn't about technical monitoring but governance structures balancing agent autonomy with organizational oversight. Most organizations test agent capability rather than integration complexity.
A healthcare client perfected appointment scheduling agents in isolation, then discovered different medical specialties had incompatible scheduling logic when scaling across departments. Your proof-of-concept projects should challenge cross-departmental coordination, not showcase what technology can do. The most effective HR leaders define boundaries early—what agents should not handle rather than focusing on automation opportunities. Decisions requiring human judgment, sensitive employee situations, ambiguous policy areas. Success depends on designing seamless handoffs between agents and humans rather than expecting full automation. Organizations succeeding with AI agents aren't necessarily the most technically sophisticated. They're the ones that recognize agent integration as an opportunity to strengthen overall coordination capabilities rather than viewing it as another technology implementation project. Dave Millner, Nicole Lettich, Andreas Horn, Igor Menezes, Tilman Sheets, Sebastian Shearer, Abid Hamid #genai #aigovernance #orgdesign #operatingmodel #peopleanalytics #CommercialValue
-
As a company, we've been navigating the terrain of workplace flexibility, often charting it by a roster to get teams to work together. But after years of observing what truly empowers teams, I believe the real journey companies ought to chart is the one called 'Autonomy and Ownership'. It's about trusting the pilot within each individual enough to navigate how and when they soar highest.

Think about a colleague on our team. Initially, when we shifted to a hybrid model, the person appreciated not having to commute every day. But the real productivity leap came when we moved beyond dictating schedules and empowered that person to structure their week in a way that aligned with their peak creative times. Being able to block out those early hours for deep work, without rigid meeting constraints, transformed output. That way, the person also had time for meetings during the day, having knocked off the work that otherwise holds you down. On the other hand, some people were simply not able to work at home often, facing numerous distractions, including chores to manage while work took place.

Interestingly, studies highlight that employees with high autonomy report being 12% happier and 15% more productive. It's not just about comfort; it's about unleashing potential. This isn't about abandoning structure; it's about redefining it around trust. It's acknowledging that our team members are professionals who, given clear objectives and the right tools, can expertly manage their responsibilities. I have seen this firsthand when we launched a complex project with a distributed team. Instead of imposing strict meeting times that would have been inconvenient for some, we empowered them to decide on their collaboration rhythms. They established their own communication channels and check-in points, leading to a surprisingly efficient workflow and a strong sense of collective ownership. Trust isn't just a feel-good factor; it's a performance driver.
This focus on autonomy also directly addresses the crucial aspect of employee well-being. Consider the impact on stress levels when individuals have greater control over their work-life integration. A survey revealed that 43% of professionals cite less stress and better mental health as the top benefits of workplace flexibility, with autonomy being a key enabler. It's about recognizing that life doesn't always fit neatly into a 9-to-5 box, and offering the freedom to manage personal commitments fosters a culture of respect and reduces burnout. The future of work isn't just about where the work gets done; it's about how we empower our people to own their work. By shifting our perspective to view autonomy as the core of workplace flexibility, we're not just offering a perk; we're cultivating a more engaged, productive, and human-centered way of working where everyone can truly fly. #TheFutureOfWork
-
I had a leader enamored with activity. He once told me that he expected, based on his calculations, that each CSM would enter 243 activities into our CRM per month.

When I became the leader, we completely rebuilt our Customer Success team structure. As mentioned, the old model focused on tracking activities: number of calls made, response times, training, meetings held, etc. While these metrics were easy to measure, they didn't tell us if we were helping customers achieve their goals. Our new approach centered entirely on customer outcomes. Each CSM now owned specific customer objectives, measured through concrete business results: increased product adoption, faster time-to-value, and expanded use cases.

The results exceeded our expectations:
- Team productivity improved by 35% as we eliminated low-impact activities
- Revenue expansion from existing accounts grew by 32%
- Voluntary team turnover dropped to under 5%

Shifting accountability from activities to outcomes gave CSMs full autonomy to design their customer engagement strategy (within reason). We implemented weekly outcome reviews where teams share success stories and problem-solve together. This replaced our old activity-tracking meetings, which felt more like performance reviews than collaborative sessions. Team morale was our most significant uplift. When you trust professionals to make decisions and hold them accountable for results rather than checkboxes, they rise to the challenge. Activity does not equal achievement.

For leaders considering a similar transformation (and you should): start with clear customer outcomes, give your team autonomy to achieve them, and measure what matters. The rest will follow.
-
Most regional leaders don’t fail because of market fit or poor execution. They fail in the fog of misalignment with corporate priorities. Corporate says “Think like an owner”—but every move needs approval. So how do you actually lead when autonomy is conditional? That’s exactly what a Regional President at a major real estate firm was wrestling with. He had been appointed to build the region's market. He had the network, the expertise, and a clear vision for scale. But instead of building momentum, he kept hitting friction—unclear signals from HQ and conflicting corporate priorities. Here’s the 3-part framework we used to shift the dynamic: 1. Turn Vision into Strategy Instead of informal conversations and scattered ideas, he documented his detailed market strategy, including pro-forma financials, showing a clear path to growth. This wasn't “just another deck.” It was a tangible plan in a concrete direction that could be discussed, measured, and aligned. Clarity invites alignment. 2. Create a “Conversation for Possibility” Next, he framed the conversation as a "conversation for possibilities"—an open, two-way discussion that combined his vision with the CEO's priorities. Notice what he didn't do: He didn't position it as a battle for resources or autonomy. Instead, he created the conditions for collaborative decision-making. 3. Focus on Progress Instead of aiming for a full green light, he proposed it as a first step. It opened the door to ongoing iteration, trust, and shared ownership. The result? - The CEO gained confidence in the regional strategy. - The President regained the autonomy to lead. - And together, they created a dynamic of mutual empowerment. 
So next time you feel constrained by corporate priorities: → Organize your thinking and present your strategy with specifics → Frame the conversation for possibilities → Approach it as a first step, not a final answer Your ability to bridge vision with corporate reality isn't just political savvy – it's leadership in action. What’s the hardest part of navigating that balance in your world?
-
The quality of your technology systems and products simply reflects the quality of your organization. An organization's culture, specifically its tolerance for contrary ideas and conflicting opinions, directly impacts a system's quality: its capacity to perform its core function with elegance and flexibility. Teams building these systems must be enabled, even encouraged, to discuss, debate, and push back. In organizations where negative information or contrary opinions are discouraged, architectures often proceed by fiat from the highest-paid person's opinion. Thus if that information flow is stunted, constrained, or otherwise dampened, the resulting system will emerge brittle and fragile in its implementation, unable to anticipate shocks that could overwhelm its capacity. The flow of communication among the teams building a system will be reflected in the end result. We know this from Conway's Law, long regarded in the systems community and recently popularized by our friends at Team Topologies. The form of processes and incentives further shapes the outcomes that teams are able to achieve. How work is organized, the quality of the tooling available, the carrots and sticks leveraged by management: all these impact the end result. We want to build systems that are flexible, reliable, and adaptable, "anti-fragile" even. Since cross-functional collaboration and proper team-based incentives promote a culture of quality in the systems those teams build, teams should be evaluated and rewarded collectively to encourage collaboration and innovation. Leaders should go out of their way to create fora for dissent and debate, not shrink from difficult architecture discussions but encourage them. Instead, many leaders drive for consensus too early, simply because it makes them nervous, because they are not confident enough in their own leadership to welcome differences of opinion.
Until leaders enable and empower teams to operate like flexible, creative, and cross-functional autonomous units, engaging with each other collaboratively to build the best and most flexible systems possible, we will continue to build and market substandard products, suffer massive outages, and generally waste time and money that could have been put to better use.
-
🧠 Building an Agentic AI Workforce in the Modern Enterprise

Enterprises are evolving from automation-driven organizations to intelligence-augmented ecosystems. The next leap in this evolution is the rise of the Agentic AI workforce: digital agents that reason, collaborate, and act with autonomy under human and organizational guardrails.

🏗️ The Architectural Foundation: From LLMs to Enterprise Agents
The backbone of this workforce lies in combining retrieval-based intelligence with secure action frameworks. The Retrieval-Augmented Generation (RAG) architecture ensures that agents have access to the latest, organization-specific knowledge, while the Model Context Protocol (MCP) gives them the ability to act securely within enterprise environments.

💡 Business and Functional Considerations
From a business standpoint, Agentic AI redefines workforce economics. Instead of hiring for routine data synthesis or reporting tasks, enterprises deploy agents that continuously operate at scale, bridging knowledge silos and ensuring operational continuity 24/7. Functionally, these systems must align with:
- Auditability through immutable logs and traceable decision chains
- Interoperability with legacy systems
- Ethical governance, ensuring fairness, transparency, and explainability

⚙️ The Leading Approach: Hybrid Integration
The most effective implementation model emerging globally is the Hybrid AI Workforce Framework, combining centralized governance (to ensure standards and oversight) with decentralized execution.

Key Success Factors
- Strategic Clarity – Identify which functions benefit most from Agentic AI: operational workflows, decision-support systems, security operations, etc.
- Human-Centered Design – Treat agents as collaborators, not replacements. The design must integrate human approval loops, contextual transparency, and traceable decision logs.
- Data and Access Governance – Implement zero-trust principles, immutable input control, and encryption to maintain data sovereignty.
- Iterative Maturity Model – Begin with guided agents (low autonomy), evolve toward supervised execution, and finally deploy autonomous systems for stable use cases.

📊 Measuring Success
Quantifying success requires a blend of operational, financial, and cultural metrics, such as:
- Reduction in manual task volume and turnaround time
- Employee augmentation rate (AI + human synergy metrics)
- Net productivity gain and innovation throughput

🚀 Benefits and Risks
Benefits:
- Continuous productivity and 24/7 operations
- Shortened decision cycles and faster insights
- Reduction in operational cost and cognitive burden
- Democratization of intelligence across departments

Risks:
- Over-reliance on AI judgment without human validation
- Model drift or bias amplification without periodic retraining
- Shadow AI: unsanctioned agent deployment outside governance

#AI #AgenticAI #Automation #DigitalTransformation #AIGovernance #AIWorkforce #FutureOfWork #CIO #Innovation
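The "immutable logs and traceable decision chains" requirement can be sketched as an append-only, hash-chained decision log, where each entry commits to the previous one so tampering is detectable. The record schema and class names are illustrative assumptions, not a standard:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

class DecisionLog:
    """Append-only, hash-chained log of agent decisions (sketch)."""

    def __init__(self):
        self.entries = []

    def append(self, agent: str, decision: str, context: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS
        record = {
            "agent": agent,
            "decision": decision,
            "context": context,
            "prev_hash": prev_hash,
        }
        # Hashing the record (which includes prev_hash) chains entries together.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev = GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In production this role is usually filled by a write-once store or audit service; the sketch only shows why chained hashes make decision trails auditable after the fact.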