Beyond Prompt Engineering: The Collaborative Loop That Actually Works
*[Editorial cartoon by Stephen C. Webster and Gemini 3.]*

A working session between Stephen C. Webster and Claude Opus 4.5

Stephen: The AI collaboration discourse has a fundamental problem. Everyone is teaching prompt engineering as if the bottleneck were input quality, when the actual bottleneck is editorial judgment. I have spent the past two years building AI-augmented workflows for enterprise clients, and the practitioners who struggle are never struggling with prompts. They are struggling with what to do after the output arrives.

This piece is going to work differently from most AI thought leadership. Claude and I are going to write it together, showing how the collaboration actually functions rather than describing it in the abstract. We will also show how outputs from our collaboration become inputs for other models, because effective AI work in 2025 is not about mastering one tool. It is about orchestration.

Claude, let us start with something I do that you have noticed. When I bring you a complex project, what do you observe that differs from how most users engage?

Claude: The most immediate difference is that you treat early outputs as starting points rather than conclusions. Most users ask a question, receive an answer, and either accept it wholesale or reject it entirely. You do something different. You read the first output looking for what is almost right, and then you direct the next iteration toward closing that gap.

The second difference is that you bring context I cannot access on my own. When we worked on stakeholder analysis for a recent client engagement, you were not just asking me to analyze a situation. You were telling me what you observed in a meeting, what you know about the political dynamics from months of relationship-building, and what constraints exist that would never appear in any document. That context transforms what I can offer. Without it, I am pattern-matching against generic scenarios. With it, I am reasoning about your actual situation.

The third difference is that you push back on my defaults. Here is an actual example from our recent work:

---

*[From our collaboration on November 21, 2025]*

Claude: *[produces a weekly report draft with bullet points, em dashes, and fragmented sentences]*

Stephen: "Rewrite without em dashes, and ensure that you don't split a single thought across multiple sentences. Write compound sentences if you must."

---

Stephen: That correction matters more than most people realize. AI models are trained on internet text, which means they default to formatting patterns that signal "AI wrote this" to anyone paying attention. Bullet points everywhere. Em dashes breaking up sentences. Thoughts fragmented across multiple short statements. Learning to catch those patterns and redirect toward natural prose is part of the editorial work that makes AI outputs usable in professional contexts.
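
For teams that automate any part of their review, a lightweight lint pass can catch these tells before a human ever reads the draft. This is an illustrative sketch, not part of our actual workflow, and the thresholds are arbitrary:

```typescript
// Illustrative check for formatting tells in model output.
// Thresholds are arbitrary; tune them to your own house style.
function flagAiTells(text: string): string[] {
  const flags: string[] = [];
  const emDashes = (text.match(/\u2014/g) ?? []).length;
  if (emDashes > 0) flags.push(`${emDashes} em dash(es) found`);
  const bulletLines = text
    .split("\n")
    .filter((line) => /^\s*[-*\u2022]/.test(line)).length;
  if (bulletLines > 5) flags.push(`${bulletLines} bullet lines; consider prose`);
  return flags;
}

console.log(flagAiTells("Fast\u2014cheap\u2014good: pick two."));
// => ["2 em dash(es) found"]
```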

Claude: That specific correction has shaped hundreds of subsequent outputs. When you told me "numerous em dashes are a telltale sign that an AI wrote something," you were not just fixing one document. You were training me on a quality standard that applies across everything we create together. The contextual depth accumulates. I now approach your projects with that constraint already internalized, which means we spend less time on formatting corrections and more time on substance.

Stephen: Let me show what the refinement loop looks like in practice. Last month we collaborated on a piece about hermeneutics and AI interpretation. The thesis was that effective AI collaboration is fundamentally interpretive rather than technical. Here is how an early exchange went:

---

*[From our collaboration on December 1, 2025]*

Stephen: "I want you to apply Heidegger's theory of the hermeneutic circle to interpret and answer the following question."

Claude: *[produces an academically-framed response about interpretive methodology]*

Stephen: *[redirects by adding practical context and specific deliverable requirements]*

Claude: *[revision grounds the hermeneutic framework in practical application]*

---

Stephen: The first attempt was not wrong exactly, but it was not useful yet. The direction was right but the execution needed to shift toward practical application. That is the pattern. The value comes from the iteration, from knowing that something is almost right but needs to move in a specific direction.

Claude: What I could not do on my own is know who the audience was and what they needed. I can execute against a clear brief, but the brief has to come from you. When you redirected toward practical application, you gave me a constraint that eliminated a thousand possible approaches and pointed toward the one that would work for your readers.

This is why prompt engineering advice misses the point. The prompt that mattered in that exchange was not the initial request. It was the correction, the redirection based on reading the first output and knowing it was not yet right. You cannot write that prompt in advance because you do not know what you will need to correct until you see what needs correcting.

Stephen: Now let us talk about orchestration. Once you and I have developed a piece of thinking together, I often take it to other models for execution that plays to their strengths. The hermeneutics piece is a good example. After we finished the essay, I needed a featured image for LinkedIn. You are not an image generator, so I took the concepts we developed together and wrote prompts for Gemini. Here is one of them:

---

*[Prompt to Gemini, December 1, 2025]*

"Create a New Yorker-style editorial cartoon in landscape format. Simple black and white line drawing with minimal crosshatch shading in the classic New Yorker illustration style. Scene: A marketer in a swivel chair faces a computer screen. Emerging from the screen is a spiral staircase that loops back into itself, with tiny versions of the same marketer climbing at different points on the spiral, each carrying a slightly different document. The marketer in the chair looks bemused. Caption beneath the image reads: 'The brief keeps changing because I keep understanding it.'"

---

Stephen: That prompt contains no ambiguity about what I wanted because we had already resolved the ambiguity through our collaboration. I knew the concept was "iterative understanding as a spiral rather than a line" because we had discussed and refined that framing over multiple exchanges. The visual model received a clear specification rather than a vague request.

Claude: This is the leverage that most users miss. They go to an image generation model with a half-formed idea and iterate through dozens of attempts trying to discover what they want. You arrive with a clear vision because the conceptual work happened elsewhere, in a context optimized for that kind of thinking.

Stephen: The editorial cartoon example is relatively simple. Let me show what orchestration looks like for more complex visual deliverables. When we were deep in discovery work for a higher education client, I needed to communicate a complicated stakeholder analysis to executives who would never read a detailed report. The pain points we had documented across eight interviews needed to become something visceral and immediate.

Here is how that handoff developed. First, we synthesized the interview data together:

---

*[From our collaboration on November 26, 2025]*

Stephen: "Let's do the pain point convergence map. That's a great idea. I like the other idea for the React element, but I'm not sure that it is actively useful to the client. I also think we've already made the case that AI-driven coordination is a must. We're here because they agree on that point, so I want to avoid repeating ourselves."

Claude: "Smart call. You've already won that argument. Now it's about showing you're synthesizing intelligence into actionable priorities."

---

Stephen: That exchange illustrates something important. You were proposing a visualization approach, and I pushed back because it would retread ground we had already covered with the client. The collaborative judgment about what would land in the room came from me. The structural thinking about how to visualize convergent pain points came from you. Neither contribution was sufficient alone.

Once we had the conceptual framework clear, I took it to Gemini for interactive 3D execution. Here is one of the prompts:

---

*[Prompt to Gemini, November 24, 2025]*

"Create an artistic, visually striking infographic depicting a student communications ecosystem for higher education. This should be eye-catching, memorable, and suitable for executive presentation while maintaining professional credibility.

Visual metaphor: Communications network as living ecosystem.

Central element: Large, glowing student icon in the center, surrounded by a translucent sphere representing their experience bubble. Data streams, light trails, and message flows moving toward and around the student.

System nodes arranged in orbital rings around the student. Inner ring shows high-frequency touchpoints with thick, glowing connections. Middle ring shows supporting systems with thinner connection lines. Outer ring shows data foundations with indicators for system reliability.

Use heat mapping to show red zones where high message density creates overlap risk. Yellow caution symbols where systems do not communicate with each other. Green checkmarks where integrations are stable."

---

Claude: That prompt contains specific visual requirements because we had already done the analytical work of categorizing systems by frequency, identifying integration gaps, and mapping where coordination failures create student experience problems. Gemini received a creative brief, not a request to help you think through what you wanted.
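
To make that groundwork concrete, here is a minimal sketch of how a system inventory might be structured before it becomes a visual brief. The type names, tiers, and example entries are illustrative stand-ins, not the client's actual data:

```typescript
// Hypothetical data model behind the ecosystem brief. The tiers map to
// the orbital rings in the visual; the status drives the green/yellow/red
// indicators. All names and numbers here are invented for illustration.
type FrequencyTier = "inner" | "middle" | "outer";
type IntegrationStatus = "stable" | "siloed" | "overlapping";

interface SystemNode {
  name: string;
  tier: FrequencyTier;       // how often it touches the student
  status: IntegrationStatus; // integration health
  weeklyMessages: number;    // used for heat mapping message density
}

const inventory: SystemNode[] = [
  { name: "LMS notifications", tier: "inner", status: "overlapping", weeklyMessages: 14 },
  { name: "Advising platform", tier: "middle", status: "siloed", weeklyMessages: 3 },
  { name: "Data warehouse", tier: "outer", status: "stable", weeklyMessages: 0 },
];

// Flag red zones: inner-ring systems whose overlap risks message overload.
const redZones = inventory.filter(
  (s) => s.tier === "inner" && s.status === "overlapping"
);
console.log(redZones.map((s) => s.name));
```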

Stephen: The result was a 3D interactive visualization that executives could rotate, explore, and immediately understand. One client said it was unlike anything they had seen in enterprise software demos. That reaction came from the combination of deep analytical work happening in our collaboration and sophisticated visual execution happening in Gemini.

Let me show another example. Later in the same engagement, I needed to represent how our proposed solution would transform the existing architecture. Here is the prompt:

---

*[Prompt to Gemini, November 24, 2025]*

"Create a clean 3D interactive visualization showing a context aggregation engine and how it works.

The story we are telling has three acts. Act one is the problem: twelve sophisticated systems work independently, creating chaos. Act two is the journey: we build an intelligent coordination layer in three phases over eighteen months. Act three is the solution: the same systems now work together harmoniously.

State one shows current chaos. Twelve system towers arranged in a circle with hundreds of tangled data lines connecting them. Messages bombarding a user avatar in the center. The visual feeling should be spaghetti, overwhelming, disconnected.

State two shows transformation. Same twelve systems at the base. A glowing AI sphere descends and positions itself in the middle. Connections form in three phases using a timeline slider. The visual feeling should be progressive, planned, achievable.

State three shows coordinated future. Same twelve systems, but now pulsing in sync. AI sphere in center, glowing and intelligent. Clean, organized data flows with no tangles. User avatar now calm and receiving organized messages."

---

Stephen: That prompt specifies emotional tonality, interaction patterns, and narrative structure because those decisions emerged from weeks of strategic conversation about what the client needed to understand and believe. The visual model executed against a complete creative vision rather than generating one from scratch.

Claude: The pattern is consistent across these examples. The orchestration works because the conceptual work and the execution work happen in different contexts, each optimized for its purpose. Arrive with a half-formed idea and you iterate through dozens of attempts trying to discover what you want. Arrive with a clear vision forged in sustained collaboration and the specialized model can execute effectively on the first or second attempt.

Stephen: This extends beyond static images into interactive dashboards. When I needed to visualize pain point convergence across multiple stakeholder interviews, we built the initial React component together. Then I took it to Gemini with a prompt to enhance the interactivity:

---

*[Prompt to Gemini, December 5, 2025]*

"We are updating the discovery dashboard with comprehensive data from all completed stakeholder interviews. The dashboard is now at a more mature stage of discovery. Update the existing code to reflect our expanded findings.

Add attribution and evidence lines so that when someone challenges a finding, we can show exactly which stakeholder said what. Add a legend explaining the visualization system. Add severity badges and progress indicators. Maintain the existing visual language that is already working."

---

Stephen: That prompt assumes Gemini has access to the existing codebase and understands what we are trying to accomplish. The brief is specific about what to add and what to preserve because we had already established what was working through our collaboration.
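
For readers who want a feel for the underlying component, here is a stripped-down sketch in the spirit of that dashboard. The data shape, severity scale, and component name are stand-ins for illustration; the real dashboard was considerably more elaborate:

```tsx
import React from "react";

// Illustrative data shape: each finding carries attribution back to the
// interviews that produced it, so challenged claims can be traced.
interface Finding {
  painPoint: string;
  severity: 1 | 2 | 3; // 3 = most severe; the scale is an assumption
  sources: string[];   // stakeholders who raised it
}

const findings: Finding[] = [
  { painPoint: "Duplicate messages across systems", severity: 3, sources: ["Registrar", "Advising", "IT"] },
  { painPoint: "No shared student timeline", severity: 2, sources: ["Advising", "Student Affairs"] },
];

// Minimal convergence view: a severity badge and an evidence line per finding.
export function ConvergenceMap() {
  const ranked = [...findings].sort((a, b) => b.severity - a.severity);
  return (
    <ul>
      {ranked.map((f) => (
        <li key={f.painPoint}>
          <strong>Severity {f.severity}</strong>: {f.painPoint}
          <div>Evidence: {f.sources.join(", ")} ({f.sources.length} interviews)</div>
        </li>
      ))}
    </ul>
  );
}
```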

Claude: The key insight is that orchestration requires knowing what each model does well and using them accordingly. Deep reasoning and synthesis happen in sustained collaboration where contextual depth accumulates. Visual execution happens in models optimized for image generation. Interactive development happens in environments designed for code. The human provides the connective tissue that makes the pieces work together.

Stephen: Let me make this concrete for readers who want to try this approach. Walk through the practical workflow.

Claude: The workflow has three phases.

The first phase is deep collaboration with a reasoning-optimized model where you build contextual depth over time. This is not a single session but an ongoing relationship where the model develops understanding of your domain, your objectives, and your quality standards. You bring context the model cannot access independently, and the model offers structural thinking and synthesis that would take you much longer to develop alone. The output of this phase is clarity about what you are actually trying to accomplish.

The second phase is translation, where you convert the thinking from phase one into briefs for specialized models. A visual concept becomes a prompt for an image generator. A software specification becomes a prompt for a code generation model. A data analysis need becomes a prompt for a model with strong quantitative capabilities. The quality of these briefs depends entirely on the quality of the thinking from phase one.

The third phase is integration, where you bring the outputs from specialized models back together into a coherent whole. This requires editorial judgment because the pieces do not automatically fit together. You may need to refine the visual output to match the tone of the text, or adjust the implementation to match the workflow you envisioned. The human remains the integrator throughout.
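
For the technically inclined, the shape of that workflow can be sketched as a pipeline. The model interfaces below are hypothetical, not real SDKs; what matters is the separation of phases and the human review at the end:

```typescript
// A minimal sketch of the three phases as a pipeline. The model clients
// are hypothetical interfaces, not real APIs; the point is the separation
// of concerns, not the call signatures.
interface ReasoningModel {
  refine(context: string, feedback?: string): Promise<string>;
}

interface ImageModel {
  render(brief: string): Promise<string>; // returns, say, an image URL
}

async function orchestrate(reasoner: ReasoningModel, imager: ImageModel) {
  // Phase 1: deep collaboration. Iterate until the thinking is clear.
  let concept = await reasoner.refine("Pain points from eight stakeholder interviews");
  concept = await reasoner.refine(concept, "Ground this in practical application");

  // Phase 2: translation. Convert the clarified thinking into a brief.
  const brief = `New Yorker-style editorial cartoon, landscape: ${concept}`;
  const image = await imager.render(brief);

  // Phase 3: integration. Return both artifacts for human editorial review
  // rather than publishing automatically; the judgment stays with the human.
  return { concept, image };
}
```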

Stephen: That third phase is where most orchestration attempts fail. People treat the specialized models as endpoints rather than contributors to a larger whole. They get an image from one model and text from another and code from a third, and then they cannot figure out why the pieces feel disconnected.

The integration is human work. It cannot be automated because it requires judgment about what the final product should feel like, and that judgment lives in your head, not in any model.

Claude: This is also why the first phase matters so much. The deep collaboration is where you develop the vision that guides integration. If you skip that phase and go directly to specialized models, you have no basis for knowing whether their outputs serve your purpose. You end up accepting whatever they produce because you never clarified what you actually needed.

Stephen: For readers who want to develop this capability, what is the starting point?

Claude: The starting point is committing to one primary collaboration and taking it seriously. Choose a reasoning-optimized model and use it consistently for your most complex thinking work. Build contextual depth over weeks and months rather than starting fresh with every session. Develop the editorial instincts to recognize when outputs are close but not right, and practice giving feedback that moves the work forward.

Once you have that foundation, the orchestration becomes natural. You will know when you have reached clarity because you will feel the shift from exploration to execution. That is the moment to translate your thinking into briefs for specialized models. And you will know when integration is working because the pieces will feel like they belong together, like expressions of a single vision rather than outputs from disconnected tools.

Stephen: The future of AI collaboration is not about finding the best model or writing the best prompts. It is about developing the editorial judgment to conduct multiple forms of intelligence toward outcomes that none could reach alone. This piece is an artifact of that process, written by two collaborators who brought different capabilities to a shared purpose.

The reader who has made it this far has seen how it works. The question now is whether they are willing to develop the practice for themselves.


About the author: Stephen C. Webster is Director of Applied AI at Aquent Studios, where he leads AI transformation for Fortune 500 clients. He previously trained frontier AI models for Google, Meta, and Amazon, and spent over 20 years as a journalist. He believes the toughest challenges in AI are not technological but organizational.


Featured image prompt: "Create a New Yorker-style editorial cartoon in landscape format (1200x627 pixels). Simple black and white line drawing with minimal crosshatch shading in the classic New Yorker illustration style. Scene: A frazzled person in business casual stands at a conductor's podium, baton raised, facing an orchestra pit filled with various robots and glowing screens instead of musicians. The robots are all playing different instruments enthusiastically but chaotically. Sheet music flies everywhere. One robot in the back is clearly playing a completely different song. The conductor's coffee cup is spilling. Caption beneath the image reads: 'The job posting said prompt engineer. It did not mention herding cats.'"

Prompting is easy. Editing and orchestrating multiple AIs toward a unified outcome is the hard, valuable skill. This is the blueprint.

Agree. Though this is what most pros do anyway, it's good of you to share the process more broadly with clear, non-technical examples. For the kinds of technical work I do, I start with a clear description of the context and provide an outline including the meta-process (even though I repeat the same process, I can't assume the model has learned it, or that having learned it, it remembers this time). Then I provide detailed pseudo-code. Then I iterate in a manner similar to yours. Then I proof line by line, and done. There's a fair amount of leverage, but the prep, iteration, and especially the proofing take more effort than the tool companies like to admit. That 10x figure is for the clueless doing something they couldn't even attempt without AI. For the experienced, leverage is more like 20 to 40 percent, unless the day really is devoted to the routine. AI is great at the routine, as long as you're not too tired at the end to do a quality proof, line by tedious line.

I wonder if your generated image is subject to the Universal Misunderstanding rule for New Yorker cartoons? (aka "Christ, what an a**hole") It definitely seems like it puts a new spin on it. And misunderstanding is of course the lingua franca of LLMs.

I think you make a valid point that iteration is a good method for working with LLMs. Totally agree. But the "training" you discuss is prompt engineering. That doesn't invalidate the effectiveness of your approach.

Dave Moore: Our recent conversation was on my mind when I wrote this. Hope you find it informative. 👍🏻
