Teams will increasingly include both humans and AI agents, and we need to learn how best to configure them. A new Stanford University paper, "ChatCollab: Exploring Collaboration Between Humans and AI Agents in Software Teams," reveals a range of useful insights. A few highlights:

💡 Human-AI Role Differentiation Fosters Collaboration. Assigning distinct roles to AI agents and humans in teams, such as CEO, Product Manager, and Developer, mirrors traditional team dynamics. This structure helps define responsibilities, ensures alignment with workflows, and allows humans to integrate seamlessly by adopting any role. The result is a peer-like collaboration environment where humans can both guide and learn from AI agents.

🎯 Prompts Shape Team Interaction Styles. The configuration of AI agent prompts significantly influences collaboration dynamics. For example, emphasizing "asking for opinions" in prompts increased such interactions by 600%. Thoughtfully designed role-specific and behavioral prompts can fine-tune team dynamics, enabling targeted improvements in communication and decision-making efficiency.

🔄 Iterative Feedback Mechanisms Improve Team Performance. Human team members in roles such as clients or supervisors can provide real-time feedback to AI agents. This iterative process ensures agents refine their output, ask pertinent questions, and follow expected workflows. Such interaction not only improves project outcomes but also builds trust and adaptability in mixed teams.

🌟 Autonomy Balances Initiative and Dependence. ChatCollab's AI agents exhibit autonomy by independently deciding when to act or wait based on their roles. For example, developers wait for PRDs before coding, avoiding redundant work. Ensuring that agents understand role-specific dependencies and workflows optimizes productivity while maintaining alignment with human expectations.

📊 Tailored Role Assignments Enhance Human Learning. Humans in teams can act as coaches, mentors, or peers to AI agents. This dynamic enables human participants to refine leadership and communication skills, while AI agents serve as practice partners or mentees. Configuring teams to simulate these dynamics provides dual benefits: skill development for humans and improved agent outputs through feedback.

🔍 Measurable Dynamics Enable Continuous Improvement. Collaboration analysis using frameworks like Bales' Interaction Process Analysis reveals actionable patterns in human-AI interactions. For example, tracking increases in opinion-sharing and other key metrics allows iterative configuration and optimization of combined teams.

💬 Transparent Communication Channels Empower Humans. Using shared platforms like Slack for all human and AI interactions ensures transparency and inclusivity. Humans can easily observe agent reasoning and intervene when necessary, while agents remain responsive to human queries.

Link to paper in comments.
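The role-differentiation and dependency-aware autonomy described above can be sketched as a simple configuration pattern. This is an illustrative sketch, not ChatCollab's actual implementation; the role names, prompt text, and artifact names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """Role-specific configuration for one AI teammate."""
    name: str
    system_prompt: str
    # Artifacts this role must wait for before acting (e.g. a PRD).
    depends_on: frozenset = frozenset()

    def ready(self, available_artifacts: set) -> bool:
        # An agent acts only once its role-specific inputs exist,
        # mirroring how a developer waits for the PRD before coding.
        return self.depends_on <= available_artifacts

# Illustrative role prompts; emphasizing behaviors such as
# "ask for opinions" here is what shaped interaction styles.
roles = [
    AgentRole("Product Manager",
              "You write the PRD. Ask teammates for opinions before finalizing."),
    AgentRole("Developer",
              "You implement features described in the PRD.",
              depends_on=frozenset({"PRD"})),
]

available = {"project_brief"}
print([r.name for r in roles if r.ready(available)])  # only the PM can act
available.add("PRD")
print([r.name for r in roles if r.ready(available)])  # now both can act
```

Because the dependency check is explicit, a human can slot into any of these roles simply by taking over that role's inputs and outputs.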
Human-Machine Collaboration Interfaces
Summary
Human-Machine Collaboration Interfaces are systems or tools that enable people and AI or robotic agents to work together, each contributing unique strengths to accomplish shared tasks. These interfaces are designed to improve communication, coordination, and productivity in workplaces where humans and machines interact as teammates, not just as tools or operators.
- Define clear roles: Assign specific responsibilities to both humans and AI agents within a team to avoid confusion and help everyone play to their strengths.
- Tailor interactions: Adjust how AI agents communicate or behave to complement the personalities and working styles of their human partners, which can increase both output quality and team satisfaction.
- Use transparent feedback: Set up communication channels where humans can observe, guide, and give real-time feedback to AI or robots, building trust and improving teamwork over time.
-
I'm delighted to be a co-author of this research, conducted in collaboration with professors from Harvard, MIT, and Wharton, that explores what actually happens when humans and GenAI work together. As a Partner at Seven2, where I focus extensively on AI transformation, this work is at the heart of the questions we tackle daily with our portfolio companies.

Our new study reveals three distinct types of interaction: "Cyborgs, Centaurs and Self-Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling"

📄 🔗 Paper: https://lnkd.in/eHfq2yRZ
🎥 Short Video: https://lnkd.in/eDN8arH7

Drawing on a field study of 244 global management consultants at BCG, we identify three distinct modes of human-AI interaction that unfold across real workflows:

- Cyborgs (Fused Knowledge Co-Creation): human and GenAI continuously shape one another in a tightly fused decision process
- Centaurs (Directed Knowledge Co-Creation): the human steers the process while leveraging AI capabilities
- Self-Automators (Abdicated Knowledge Co-Creation): delegation of both task and decision to AI

We show how these modes differ in who drives the work and what skills are cultivated, with implications for:
✔ How professionals develop domain and AI expertise
✔ Organizational strategy for upskilling
✔ The broader future of work in GenAI-augmented environments

Check out the short video for an overview, and dive into the full paper via the link above! Whether you're interested in AI adoption, workforce transformation, or productive human-machine collaboration, I'd love to hear your thoughts and feedback!

📘 Full paper: https://lnkd.in/eHfq2yRZ
🎥 Video: https://lnkd.in/eDN8arH7

#AI #GenerativeAI #FutureOfWork #KnowledgeWork #Research #Management #Innovation
-
We just built a commercial-grade RCT platform called Pairit for humans and AI agents to collaborate in integrative workspaces. We then test-drove it in a large-scale marketing field experiment with surprising results. Notably, "personality pairing" between human and AI personalities improves output quality, and human-AI teams generate 60% greater productivity per worker.

In the experiment:

🚩 2,310 participants were randomly assigned to human-human and human-AI teams, with randomized AI personality traits.

🚩 The teams exchanged 183,691 messages and created 63,656 image edits, 1,960,095 ad copy edits, and 10,375 AI-generated images while producing 11,138 ads for a large think tank.

🚩 Analysis of fine-grained communication, collaboration, and workflow logs revealed that collaborating with AI agents increased communication by 137% and allowed humans to focus 23% more on messaging about text and image content generation and 20% less on direct text editing. Humans on human-AI teams sent 23% fewer social messages, creating 60% greater productivity per worker and higher-quality ad copy.

🚩 In contrast, human-human teams produced higher-quality images, suggesting that AI agents require fine-tuning for multimodal workflows.

🚩 AI personality pairing experiments revealed that AI traits can complement human personalities to enhance collaboration. For example, conscientious humans paired with open AI agents improved image quality, while extroverted humans paired with conscientious AI agents reduced the quality of text, images, and clicks.

🚩 In field tests of ad campaigns with ~5M impressions, ads with higher image quality produced by human collaborations and higher text quality produced by AI collaborations performed significantly better on click-through-rate and cost-per-click metrics. As human collaborations produced better image quality and AI collaborations produced better text quality, ads created by human-AI teams performed similarly, overall, to those created by human-human teams.

🚩 Together, these results suggest AI agents can improve teamwork and productivity, especially when tuned to complement human traits.

The paper, coauthored with Harang Ju, can be found in the link in the first comment below. We thank the MIT Initiative on the Digital Economy for institutional support! As always, thoughts and comments highly encouraged! Wondering especially what Erik Brynjolfsson, Edward McFowland III, Iavor Bojinov, John Horton, Karim Lakhani, Azeem Azhar, Sendhil Mullainathan, Nicole Immorlica, Alessandro Acquisti, Ethan Mollick, Katy Milkman, and others think!
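The randomized-assignment design described above can be illustrated with a short sketch. This is not the Pairit platform's code; the trait list and the 50/50 assignment scheme are assumptions for illustration.

```python
import random

TEAM_TYPES = ["human-human", "human-AI"]
# Illustrative Big Five-style traits for the AI partner; the actual
# trait set and assignment scheme in the experiment may differ.
AI_TRAITS = ["openness", "conscientiousness", "extraversion",
             "agreeableness", "neuroticism"]

def assign(participant_id: int, rng: random.Random) -> dict:
    """Randomly assign one participant to an experimental condition."""
    condition = {"participant": participant_id,
                 "team_type": rng.choice(TEAM_TYPES)}
    if condition["team_type"] == "human-AI":
        # Randomize which trait the AI partner emphasizes, enabling
        # the "personality pairing" comparisons described above.
        condition["ai_trait"] = rng.choice(AI_TRAITS)
    return condition

rng = random.Random(42)  # fixed seed for a reproducible assignment
assignments = [assign(pid, rng) for pid in range(2310)]
n_ai = sum(a["team_type"] == "human-AI" for a in assignments)
print(f"{n_ai} of {len(assignments)} participants in human-AI teams")
```

Crossing team type with AI personality traits is what allows the analysis to separate the productivity effect of having an AI teammate from the complementarity effect of a particular human-AI personality pairing.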
-
How can we enable robots to fluently collaborate with humans on physically demanding tasks? In our #HRI2025 paper, we focus on the task of human-robot collaborative transport, where a human and a robot work together to move an object to a goal pose. In the absence of explicit or a priori coordination, critical decisions such as navigating obstacles or determining object orientations become especially challenging.

Our key insight is that a human and a robot can coordinate fluently by leveraging the transported object as a communicative medium. By encoding subtle communicative signals into actions that affect the state of the transported object, the robot can effectively convey its intended strategy and role. To this end, we designed an inference mechanism that probabilistically maps observations of joint actions executed by the human and the robot to a set of joint strategies of workspace traversal, drawing from topological invariance. Integrated into a model predictive controller (IC-MPC), this mechanism enables the robot to estimate its human partner's uncertainty over a traversal strategy and take proactive corrective actions that balance uncertainty minimization and task efficiency.

We deployed IC-MPC on a mobile manipulator (Hello Robot Stretch) and evaluated it in a within-subjects lab study (N = 24). IC-MPC enables greater team performance and leads the robot to be perceived as a significantly more fluent and competent partner compared to baselines lacking a communicative mechanism.

My fantastic PhD student, Elvin Yang, will present this work in the 1A: Human-Robot Collaboration session on Tuesday, and in the X-HRI workshop today!

paper: https://lnkd.in/gidfgq4W
code: https://lnkd.in/gR8gAEud
video: https://lnkd.in/gzkktKzf

#robotics #humanrobotinteraction #artificialintelligence University of Michigan Robotics Department
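The core inference step, probabilistically mapping observed joint actions to a discrete set of traversal strategies, is essentially a Bayesian filter. A minimal sketch, assuming two strategies and a Gaussian observation model over the object's lateral velocity (both are illustrative assumptions; the paper's strategy set derives from topological invariance):

```python
import math

# Two illustrative joint traversal strategies around an obstacle.
STRATEGIES = ["pass_left", "pass_right"]

def likelihood(lateral_velocity: float, strategy: str) -> float:
    """Unnormalized Gaussian likelihood of the object's lateral velocity.
    Leftward motion (negative) supports pass_left, rightward pass_right."""
    mean = -0.5 if strategy == "pass_left" else 0.5
    return math.exp(-0.5 * ((lateral_velocity - mean) / 0.3) ** 2)

def update(belief: dict, observation: float) -> dict:
    """One Bayes step over the discrete strategy set."""
    posterior = {s: belief[s] * likelihood(observation, s) for s in STRATEGIES}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = {s: 0.5 for s in STRATEGIES}  # uniform prior over strategies
for v in [-0.4, -0.45, -0.3]:          # object drifting leftward
    belief = update(belief, v)
print(belief)  # probability mass concentrates on pass_left
```

In the controller described above, the robot would use such a posterior both to read the human's intent from the object's motion and, conversely, to choose actions that make its own intended strategy unambiguous.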
-
I am giving a series of keynote talks on a research program we have focused on over the past two years: Foundations of Human-AI Collaboration. AI is already evolving from a tool we use into a teammate we collaborate with (a shift that you can experience today in Teamily AI (https://teamily.ai/) 🙂). This transition opens exciting opportunities for impactful research on how to design, scale, and optimize effective human-AI collaboration.

Our research program tackles five foundational challenges:

- Group Intelligence: How do we formally model collective intelligence in mixed human-AI teams? What makes a group greater than the sum of its parts?

- Uncertainty Estimation: When should an AI agent handle a task vs. hand it off to a human (or a different agent)? We've developed methods like MARS and LARS for calibrated confidence across LLMs, vision-language models, and classification systems.

- Universal Memory: Human-AI teams need shared, persistent, privacy-aware memory. We're designing a three-layer memory architecture that enables coherent collaboration across agents and humans.

- Scale-Out Efficiency: AI inference and agentic orchestration costs are rising rapidly as systems move from single-model responses to persistent, multi-agent reasoning and execution. Drawing on information-theoretic foundations, we develop Rₗₗₘ(D), a rate-distortion framework that establishes principled limits for efficient inference, enabling optimal memory utilization, prompt and context compression, and cost-aware model and agent routing.

- Trust, Safety & Alignment: The risks multiply when AI agents operate in teams due to cascading failures, groupthink, accountability gaps, etc. We need new frameworks for robustness and value alignment in collaborative settings.

These aren't isolated problems. As the attached roadmap shows, they span three layers: Foundation (theory & models), System (architecture & design), and Application (real-world impact in enterprise teams, scientific discovery, and social life), with deep cross-cutting connections between them.

Excited to share this vision with the community as we explore what's next for human-AI collaboration. Much more to come.
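The uncertainty-estimation challenge, deciding when an agent should handle a task versus hand it off, reduces at its simplest to thresholding a calibrated confidence score. A minimal sketch (the scoring function and threshold here are toy assumptions, not the MARS or LARS methods named above):

```python
from typing import Callable

def route(task: str,
          confidence_fn: Callable[[str], float],
          threshold: float = 0.8) -> str:
    """Handle the task in-agent if calibrated confidence clears the
    threshold; otherwise hand off to a human (or a more capable agent)."""
    return "agent" if confidence_fn(task) >= threshold else "human"

# Toy scorer: longer task descriptions get lower confidence. A real
# system would substitute a properly calibrated confidence estimate.
def toy_confidence(task: str) -> float:
    return max(0.0, 1.0 - 0.05 * len(task.split()))

print(route("summarize this memo", toy_confidence))   # handled by the agent
print(route("negotiate a multi-party licensing dispute with "
            "conflicting regulatory constraints", toy_confidence))  # handed off
```

The value of calibration is that the threshold then has a meaning: routing at 0.8 targets a known error rate for the tasks the agent keeps, rather than an arbitrary cutoff on an uncalibrated score.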
-
🌟 👩💻 Presenting CREW: Advancing Human + AI Collaboration 🤖 🌟

Human-AI partnership is no longer a distant dream; it's now a driving force behind innovation in various sectors. Recently, researchers at General Robotics Labs at Duke University unveiled CREW, an advanced platform revolutionizing real-time decision-making. This innovation bridges the human-AI gap, fostering seamless teamwork and pioneering research endeavors.

👉 Key Features of CREW:

🔍 Multidisciplinary Approach - Integrating cognitive science, machine learning, neuroscience, and more.
🤝 Real-Time Interaction - Enabling dynamic human feedback for enhanced AI training responsiveness.
🧠 Cognitive Insights - Gathering diverse human physiological data to explore variances in teaming effectiveness.
🎮 Task Versatility - CREW's modular design allows it to adapt to various research requirements, from individual to multi-agent settings.
📊 Scalable Experiments - Conducting large-scale studies with parallel sessions, achieving unprecedented research possibilities.

🌐 In just one week, CREW facilitated 50 human subject studies, shedding light on how human characteristics impact AI training outcomes. The platform's open-source nature fosters collaborative advancements in Human + AI research.

🔗 Explore CREW: https://lnkd.in/dvYHagmS
👩💻 Code Repository: https://lnkd.in/dtbkMMSp
📹 Watch Video: https://lnkd.in/dswxR2nG
📜 Research Paper: arxiv.org/abs/2408.00170 (Published in Transactions on Machine Learning Research)

The potential of human + AI collaboration knows no bounds, and CREW is leading the way. Let's redefine teamwork in the era of AI!

#Human+AI #ArtificialIntelligence #JointTeamwork #ResearchInnovation #AI
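The real-time human feedback loop that platforms like CREW enable can be illustrated with a TAMER-style sketch, in which a human observer's scalar feedback directly shapes an agent's action preferences. This is an illustrative toy, not CREW's actual training code:

```python
import random

class FeedbackShapedPolicy:
    """Toy human-in-the-loop learner: scalar human feedback directly
    adjusts action preferences (in the spirit of TAMER-style training)."""
    def __init__(self, actions, lr=0.5):
        self.prefs = {a: 0.0 for a in actions}
        self.lr = lr

    def act(self, rng: random.Random) -> str:
        # Greedy choice with random tie-breaking.
        best = max(self.prefs.values())
        return rng.choice([a for a, p in self.prefs.items() if p == best])

    def give_feedback(self, action: str, signal: float) -> None:
        # signal in [-1, 1] from the human observer watching in real time.
        self.prefs[action] += self.lr * signal

rng = random.Random(0)
policy = FeedbackShapedPolicy(["left", "right"])
for _ in range(5):
    a = policy.act(rng)
    # Simulated human: approves "right", disapproves "left".
    policy.give_feedback(a, 1.0 if a == "right" else -1.0)
print(policy.act(rng))  # converges to "right"
```

How quickly different humans drive such a loop to convergence, and how that correlates with their cognitive and physiological traits, is exactly the kind of question a scalable experiment platform makes testable.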
-
AI integration in the workplace is poised to redefine the future of work, yet our understanding of how to design effective human-AI partnerships remains limited. We recently conducted one of the first studies to examine how human-AI collaboration design affects human productivity, satisfaction, quality of work, and diversity of creative output. Our study was set in the context of creative writing.

We found that collaboration designs in which humans were completely in charge of early creative tasks like ideation and outlining ("human creativity") or where AI was a copilot ("copilot") produced higher-quality output, greater satisfaction for the task performer, and greater aggregate creative diversity than designs in which AI did the work and the human's role was to approve, reject, or instruct the AI ("human confirmation"). This was especially true for higher-skilled people, whose quality of work was adversely affected when they were not actively driving the early creative tasks.

In short, the way you design the future of work will have a huge impact on work quality, worker satisfaction, and overall organizational creativity. A more detailed summary is in my Substack post here: https://lnkd.in/e4HGxP7Q. A link to the full paper is in the comments. Coauthor: Daehwan Ahn