Your team's AI training from six months ago is now roughly as current as a 2017 travel guide to downtown Minneapolis. AI skills depreciate every 3-4 months. The tool your team learned in January has a different interface, different capabilities, and different best practices by April. A one-time training event gives people just enough confidence to use yesterday's approach on today's problem, which is arguably worse than no training at all.

48% of employees say they'd use AI more with training. But training-as-an-event has near-zero lasting impact when the technology moves faster than any annual learning calendar can track.

The companies with sustained adoption gave up on workshops. They embedded learning into how work happens — monthly capability refreshes, internal champions tracking what changed, AI integrated into daily workflows instead of treated as a separate tool to open when you remember it exists.

45% of adoption comes from workflow integration. Not from a training deck. Not from an all-hands demo where nobody asked questions.

If your AI learning plan is annual, it was obsolete three months after it launched.
Compoze Labs
Software Development
Minneapolis, Minnesota · 1,604 followers
AI, data, and app development, engineered to reach production.
About us
Compoze Labs is a Minneapolis-based software consulting and development firm focused on AI, data, and app development. We take projects from strategy and architecture through to code that runs in production.

What we do:
- AI strategy, RAG, agents, and workflow automation
- Data engineering, integration, and analytics
- Mobile and web application development
- Systems modernization and legacy migrations

Our clients are technical and business leaders who need a team that can think strategically, write production code, and translate between the two.
- Website
- https://www.compozelabs.com/
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Minneapolis, Minnesota
- Type
- Privately Held
- Founded
- 2018
- Specialties
- Artificial Intelligence, Machine Learning, AI Agents, Generative AI, AI Consulting, Retrieval-Augmented Generation, Data Engineering, Data Analytics, Business Intelligence, Application Development, Custom Software Development, Mobile App Development, Web Application Development, Systems Modernization, Cloud Engineering, Software Architecture, API Development, Workflow Automation, DevOps, and Product Strategy
Locations
-
Primary
Minneapolis, Minnesota, US
Updates
-
Governance frameworks. Workflow redesign. Measuring business outcomes instead of model accuracy. The companies doing this work are pulling ahead, and the distance is getting wider every quarter.

PwC surveyed 4,454 CEOs this year. 56% see zero cost or revenue improvement from AI. CEO confidence in revenue growth is at a 4-year low. And 25% of planned AI spend is being pushed to 2027 because CFOs are done writing checks on faith.

The 12% reporting both lower costs and higher revenue embedded AI in operations from day one instead of running it out of an innovation lab. They had governance before they had models. They tracked P&L impact, not accuracy benchmarks.

What's the biggest thing standing between your AI investment and measurable results?
-
8.6% of organizations feel fully data-ready for AI. The other 91.4% are evaluating LLMs anyway. 52% say data quality is their biggest barrier. And yet most AI roadmaps start with model selection.

There's a team out there right now on month four of an LLM evaluation while the data pipeline that feeds it breaks every time someone updates a shared spreadsheet. The average enterprise runs 12+ siloed systems. The data that made the pilot look good was hand-curated. Production needs governance, lineage, ownership rules, and quality standards that nobody has built.

MIT Sloan put it well: most AI failures are data failures in disguise.

If you're three months into comparing LLMs and you haven't assessed whether your data can support any of them in production, you might be solving the fun problem instead of the important one.
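A data-readiness gate does not have to be elaborate to be useful. Here is a minimal sketch of the idea — every field name, threshold, and owner label below is invented for illustration, not taken from any real pipeline:

```python
# Illustrative readiness gate (all names hypothetical): before any model
# evaluation, fail fast if the feed that will ground the LLM misses basic
# quality standards -- completeness plus a named owner for governance.

def readiness_report(records, required_fields, owner):
    """Summarize simple quality signals for a batch of records."""
    total = len(records)
    # Completeness: share of records with every required field populated.
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields)
        for r in records
    )
    return {
        "rows": total,
        "completeness": complete / total if total else 0.0,
        "has_owner": bool(owner),  # someone must own the feed
    }

def gate(report, min_completeness=0.95):
    """Block the pipeline unless the batch clears the quality bar."""
    return report["has_owner"] and report["completeness"] >= min_completeness

batch = [
    {"customer_id": "c1", "region": "mn", "updated": "2025-04-01"},
    {"customer_id": "c2", "region": "", "updated": "2025-04-01"},
]
report = readiness_report(batch, ["customer_id", "region", "updated"], owner="data-eng")
print(gate(report))  # one empty region field -> 50% complete -> blocked
```

The point of a gate like this is that it runs on every batch, so "the pilot data was hand-curated" stops being discoverable only in production.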
-
Nobody has ever gotten excited about change management at an all-hands. But it's what separates the 12% seeing AI returns from the other 88%.

The companies getting returns spent 70% of their AI budget on change management, workflow redesign, and data governance — and 30% on the technology. Most companies allocate the opposite, which is why the model works fine in the demo and then meets your org's 12+ siloed systems, undocumented processes, and tribal knowledge for the first time and everything stalls.

95% of IT leaders say integration is their #1 barrier. But the integration problem is almost never technical. It's that your company has been running on ambiguity and human workarounds for decades. Before AI, that was fine. Processes existed as habits. Exceptions got handled by whoever had been there longest. "Good enough" was the operating standard.

AI has no patience for any of that. Ambiguity becomes hallucination. Undocumented processes can't be automated. "Good enough" becomes measurably wrong.

So yeah. Change management. Workflow redesign. The boring stuff. That's where the 70% goes, and that's where the results come from.
-
Six teams. Six briefs. Half a day to build something real.

Yesterday the Compoze team ran our first internal AI hackathon — building tools we actually want to use ourselves:

→ A feedback requestor that figures out who you should be asking and what about
→ A project audit skill that flags where work is drifting before it becomes a problem
→ A template factory that turns scattered examples into something on-brand
→ An issue-to-PR resolver that turns small backlog items into ready-to-review branches
→ A handoff package generator that keeps context intact between sales and delivery
→ A codebase onboarder that builds a real mental model in an hour, not a week

By demo time, every team had something working. Shoutout to all our amazing teams: Jazzy Lemur, Cuddly Octopus, Dapper Manatee, Fluffy Barnacle, Velvet Raccoon, and Turbo Narwhal. A solid afternoon of work!
-
About 70–80% of a traditional software system's lifetime cost comes after the initial build: the ongoing maintenance, the enhancements, the "who owns this now" conversations. When AI compresses build time from weeks to minutes, that lifecycle cost doesn't disappear. It compounds.

Eric Carr and Andrew Larsen just recorded a 45-minute session breaking down what's happening as the cost of code drops by orders of magnitude. They cover why 88% of orgs are using AI but only ~6% are capturing strong returns, what the maturity model looks like for moving past vibe coding, and where the bottlenecks have shifted now that generating code is the easy part.

If your org is building more software than it can absorb, this one's worth your time.

Access the recording → https://lnkd.in/g5NGaipF
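The compounding is easy to see with toy numbers. This is back-of-envelope arithmetic using the ~75% post-build share mentioned above; every other figure is invented for illustration, not data from the session:

```python
# Back-of-envelope model, illustrative numbers only: if ~75% of a system's
# lifetime cost lands after the build, maintenance runs ~3x the original
# build effort. AI shrinks the build term; it does not shrink ownership.

def portfolio_cost(systems, build_cost, maintenance_cost):
    """Total lifetime cost: every shipped system still has to be owned."""
    return systems * (build_cost + maintenance_cost)

# Before AI assistance: 10 systems, 100 units each to build, 300 each to
# maintain (300 = 3x build, i.e. 75% of lifetime cost is post-build).
before = portfolio_cost(systems=10, build_cost=100, maintenance_cost=300)

# With AI: build cost drops 10x, so the org ships 10x as many systems.
# Maintenance tracks what exists, not what it cost to generate.
after = portfolio_cost(systems=100, build_cost=10, maintenance_cost=300)

print(before, after)  # 4000 vs 31000 -- cheaper builds, far larger total bill
```

The design point of the toy model: maintenance scales with the number of systems you own, not with what they cost to write, which is why cheaper generation makes the absorption problem worse.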
-
Today at noon CT! Eric Carr and Andrew Larsen on the Hypertail — what happens when AI makes software almost free to build, and how the right engineering foundations turn that speed into a sustainable competitive advantage. Still time to register. Even if you catch the replay, it's worth it. Save Your Spot → https://lnkd.in/g5NGaipF
-
The teams getting the most out of AI coding tools aren't the ones with the best prompts. They're the ones with the best engineering systems around the prompts.

We worked with one team that had developers writing detailed, carefully crafted prompts for every task — spending 15 minutes setting up context before each generation. Output quality was decent, but the time savings were marginal because the prompt engineering itself was eating the productivity gains.

Compare that to a team that spent three weeks building context rules into their development environment — architecture documentation that feeds into the AI tool automatically, constraint files that enforce their coding standards, and an evaluation step that catches common issues before code reaches review. Their developers write simple, short prompts. The system provides the context. The code comes out consistent. And because the quality gates are automated, senior engineers only review the decisions that actually require human judgment — not boilerplate pattern enforcement.

The first team has skilled prompt engineers. The second team has a production-grade AI engineering system. The difference in sustained output is significant.

Invest in the system, not just the skill. A developer who writes great prompts helps one team. An engineering system that provides context, enforces constraints, and evaluates output automatically helps every team — including the ones you haven't hired yet.
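To make the "constraint files plus an evaluation step" idea concrete, here is a minimal sketch of an automated pre-review gate. The rule names and regex checks are hypothetical placeholders, not any client's actual standards; a real setup would load rules from a shared file and run far richer checks:

```python
# Hypothetical sketch of "system over prompts": constraints live in one
# machine-readable place and an automated check runs on generated code
# before any human review. All rules below are invented for illustration.
import re

# Content a team might keep in a constraint file in the repo (YAML/JSON);
# inlined here so the sketch is self-contained.
CONSTRAINTS = [
    ("no_print_statements", re.compile(r"\bprint\(")),
    ("no_wildcard_imports", re.compile(r"from \w+ import \*")),
    ("no_todo_left_behind", re.compile(r"#\s*TODO")),
]

def evaluate(generated_code):
    """Return the list of violated rules; an empty list means the code
    can go to human review for judgment calls only."""
    return [name for name, pattern in CONSTRAINTS if pattern.search(generated_code)]

snippet = "from os import *\nresult = compute()\nprint(result)  # TODO tidy\n"
print(evaluate(snippet))  # all three placeholder rules fire
```

Because the gate is deterministic and shared, it scales to every developer and every prompt, which is the asymmetry the post is pointing at.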
-
At some point you have to admire the commitment to running the same play and expecting different results 🥴

"We need an AI strategy" → buy tool → find use case → pilot → stall → pretend it never happened → repeat next quarter with a different vendor. Meanwhile the board keeps asking about the AI strategy, and the most accurate answer is a slide deck nobody's opened since Q3.

The numbers back this up. 88% of organizations are using AI. 6% qualify as high performers seeing measurable returns. 95% of GenAI pilots never make it past the experimental stage. Of the 5% that reach production, fewer than half move the P&L. $2.5 trillion in global AI spend this year, and most of it is funding that loop.

The 6% doing well skipped it entirely. They started with a specific dollar amount they were losing to a specific problem and built from there. Less exciting in a board meeting. More useful everywhere else.

What was the last AI pilot at your org that made it to production?
-
Eric Carr and Andrew Larsen spend most of their time inside enterprise engineering organizations building the systems that make AI-assisted development work in production.

On April 15th, they're stepping back to talk about the bigger picture: what happens when creation costs drop by an order of magnitude — and how the organizations that move first on governance, data readiness, and maintenance infrastructure will capture a massive advantage. They'll walk through the three forces driving this shift and what engineering leaders need to build now.

Save Your Spot → https://lnkd.in/g5NGaipF