We just open-sourced 36 AI skills for architects. Here's how it works.

I see Anthropic's Claude Code as very similar to Grasshopper. It opens the door for deep experts to package knowledge and logic in ways that are highly shareable and reusable across their firms. Grasshopper did it for computational design. "Skills" for Claude Code do it for workflow automation: zoning rules, spec conventions, product research, site analysis, all encoded as small skills that anyone in your studio can install and use.

We've been testing this idea for the past few months with my friend Mick McConnell. Then I tested it by rebuilding some of Canoa's features for a friend. Then again with other friends at a small firm that just needed the help.

We started building skills one at a time. A skill for zoning analysis. One for EPD parsing. One for product research. Eventually we started tying them together with agents that orchestrate these skills and rules that govern the output. Before we knew it we had a pretty expansive set of tasks and wanted to see how far we could take it. How many tasks can be automated this way? The short answer: a lot.

Today we're open-sourcing the result. The open source Architecture Studio project is a plugin for Claude Code that runs in your terminal. It gives Claude architecture-specific rules, packages common tasks into skills, and routes requests through specialist agents. Type /studio and describe what you need. The router figures out the rest.

A few of the agents:

- Site Planner - give it an address and it researches climate, flood zones, seismic risk, transit, demographics, and neighborhood history, then synthesizes a site brief. For a recent project, work that usually takes a day of pulling from NOAA, FEMA, USGS, and Census came back in a few minutes.
- Product & Materials Researcher - give it a brief or a rep's PDF and it extracts specs, tags products by category and material, finds alternatives, and writes everything to a shared Google Sheet.
- Sustainability Specialist - give it your materials and it finds EPDs from EC3, compares embodied carbon side by side, and checks LEED eligibility.

Every output follows a transparency rule: every number links to its public source, and every calculation shows its inputs and its formula. If we reference a building code, you get a link to the government-published version. Our view is that if you can't see the math, the tool is hiding something.

It's all MIT licensed and actively growing. It's very New York-centric on zoning and due diligence right now, but the architecture works for any jurisdiction. Try it if you feel like it. Fork it if you want to make it yours. The contributing guide is in the repo, and the link to the repo is in the comments below.

#architecture #opensource #AEC #claudecode #workplacedesign #sustainabledesign #interiordesign
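The transparency rule described above (every number traceable to a public source, every calculation showing its inputs and formula) can be sketched as a small data structure. This is an illustrative sketch, not code from the project; the class name, the example zoning numbers, and the source URL are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TracedValue:
    """A number that carries its provenance: the public source it came from
    and, if it was derived, the formula and inputs used to compute it."""
    value: float
    unit: str
    source_url: str              # link to the public source for this number
    formula: str = ""            # e.g. "FAR = gross_floor_area / lot_area"
    inputs: dict = field(default_factory=dict)

    def report(self) -> str:
        line = f"{self.value} {self.unit} (source: {self.source_url})"
        if self.formula:
            line += f"\n  formula: {self.formula}\n  inputs: {self.inputs}"
        return line

# Hypothetical example: a derived floor-area ratio that shows its math
far = TracedValue(
    value=12.0, unit="FAR",
    source_url="https://example.gov/zoning-code",
    formula="FAR = gross_floor_area / lot_area",
    inputs={"gross_floor_area": 120_000, "lot_area": 10_000},
)
print(far.report())
```

Anything the tool cannot attach a `source_url` or `formula` to would simply not be reported as fact, which is the point of the rule.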
Automated Design Task Management
Summary
Automated design task management refers to using AI-powered tools and workflows that handle and coordinate repetitive or routine tasks in design projects, freeing up human creativity for higher-level decision making. This approach is rapidly transforming how teams plan, execute, and refine design work across product development and experimentation.
- Integrate smart tools: Connect AI agents and automation tools so design tasks, specs, and updates move smoothly between different phases and team members.
- Streamline handoffs: Reduce delays and miscommunication by letting automation carry context and requirements from one stage to the next—no more manual copy-paste or lost information.
- Free up creativity: Let AI handle time-consuming setup and assembly work so designers can focus their time on problem-solving and creative refinement.
Speed is your ultimate advantage in the era of AI.

But most companies don't see improvements. They buy Copilot licenses. Give them to developers. And expect results. It doesn't work. Because the process remains the same.

That's why we created the Agentic Product Development Blueprint. Whether you're:
• Accelerating existing product delivery
• Modernizing legacy systems
• Building something new
This framework applies.

───

The real problem isn't the tools. It's the handoffs.

A product requirement written with AI gets manually copied into architecture docs. Then copied again into tickets. Then interpreted by developers who never saw the original. Each handoff loses information. Each handoff adds delay.

───

The fix: connect the entire process.

Instead of isolated AI tools, deploy AI agents that pass work from phase to phase with full context.

Key principles that actually work:
1/ Context first: create specs, architecture, and design before coding
2/ Context compounds: each phase builds on validated artifacts from the previous one
3/ Agent handover at every step
4/ Spec-driven development with granular task decomposition
5/ Human approval gates between phases

───

Here's how it works across four phases:

DEFINE: What should we build? The Product Manager Agent reads your customer feedback, research, and existing docs. It writes the requirements and user stories. Then pushes them directly into your issue tracker.

DESIGN: How should it work? The Architect Agent picks up those requirements automatically. It already knows your codebase structure. It writes the technical spec.

BUILD: Make it real. The Developer Agent reads the spec and the designs. Writes the code. Creates the pull request. The QA Agent reads everything that came before (requirements, designs, code), then writes the tests. Finds the bugs.

───

Why this works: each agent passes forward the artifact, the context, and the acceptance criteria. The next agent starts with full information. No handoff meetings. No lost context.

Results: 30% to 70% productivity gains depending on the phase. On average, our clients see a 2.2x speed increase.

───

But here's what most companies miss: you can't build this by optimizing development alone. Speed isn't a dev problem. It's an end-to-end problem. The bottleneck might be in product. Or design. Or QA. Or deployment approvals. You'll never see it if each department optimizes in isolation.

Where does work wait longest in your organization?

♻️ Repost to help your network. Follow Alex Barády, founder of ENDGAME.
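The handoff pattern above (each agent passes forward the artifact, the full upstream context, and the acceptance criteria, with a human approval gate between phases) can be sketched in a few lines. This is a toy illustration of the pattern, not the blueprint's actual implementation; all names and the stub agents are made up.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Artifact:
    phase: str
    content: str
    context: list = field(default_factory=list)       # everything produced upstream
    acceptance_criteria: list = field(default_factory=list)

def approval_gate(artifact: Artifact, approve: Callable[[Artifact], bool]) -> Artifact:
    """Human approval gate between phases: block the handover until sign-off."""
    if not approve(artifact):
        raise ValueError(f"{artifact.phase} artifact rejected; revise before handover")
    return artifact

def run_pipeline(request: str, agents: dict, approve) -> Artifact:
    """Each agent receives the full upstream context, so nothing is lost at handoff."""
    artifact = Artifact(phase="request", content=request)
    for phase, agent in agents.items():   # e.g. define -> design -> build -> qa
        artifact = approval_gate(agent(artifact), approve)
    return artifact

def make_agent(phase: str):
    """Stub agent: produces an artifact that carries all upstream artifacts along."""
    def agent(upstream: Artifact) -> Artifact:
        return Artifact(
            phase=phase,
            content=f"{phase} output based on: {upstream.content}",
            context=upstream.context + [upstream],   # context compounds phase by phase
            acceptance_criteria=[f"{phase} reviewed"],
        )
    return agent

result = run_pipeline(
    "Add export-to-CSV to the reports page",
    {p: make_agent(p) for p in ["define", "design", "build", "qa"]},
    approve=lambda a: True,   # stand-in for a human reviewer
)
print(result.phase, len(result.context))  # the qa artifact carries all upstream context
```

The structural point is that the QA step starts with four artifacts of context rather than a ticket summary, which is what "no lost context" means in practice.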
I’m the founder of a $3,000,000+ ARR staffing agency. Here are the tools I swear by for creating, automating, and delegating processes (save this post):

- Mural: a digital whiteboard tool I use to create flowcharts. It helps me break down tasks and document each step visually so I can create processes. It’s a fantastic tool for mapping out your thought process, and it also comes in handy for collaborative brainstorming sessions.
- Loom: a video recording tool that helps me create step-by-step training videos. All I do is hit record, walk through one of my processes, then send the link to whoever I want to delegate it to.
- ChatGPT: we often ask ChatGPT to create a job description or an event description. I also use it to transcribe and summarize my Loom recordings (see above) to create SOPs.
- Notion: we use Notion to write detailed task descriptions, along with checklists that help us track task completion step by step. It can also be used as a centralized workspace for sharing educational resources.
- Zapier: we use Zapier to automate repetitive tasks that don’t need to be done by a human. It connects and streamlines a lot of our other tools. The basic idea: you have a trigger and succeeding actions. So if, say, someone signs up for your event, you could set up Zapier to automatically move them into your CRM or ping your SDR to give them a call.
- Monday: a powerful project management tool that helps you monitor progress visually. Realistically, it eliminates the need for a lot of other software, such as Google Docs (document writing), Notion (task tracking), Slack (internal comms), and a dedicated CRM. It can be a one-stop shop if you want it to be. I highly recommend it.
- Templates: we’ve developed various templates to help us save time and stay consistent. That includes Gmail and Superhuman templates for email and Canva templates for graphics and presentations.

Any tool you’d add to the list?
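The trigger-and-actions idea behind Zapier can be pictured as a tiny event dispatcher: a trigger fires once, and every action registered against it runs with the event's payload. This is a toy illustration of the concept, not Zapier's API; the trigger name, handlers, and payload are hypothetical.

```python
from typing import Callable

# Registry mapping a trigger name to the actions that should run when it fires.
actions: dict[str, list[Callable[[dict], None]]] = {}

def on(trigger: str):
    """Register an action to run whenever `trigger` fires."""
    def register(fn):
        actions.setdefault(trigger, []).append(fn)
        return fn
    return register

def fire(trigger: str, payload: dict):
    """Fire a trigger: run each registered action in order with the payload."""
    for action in actions.get(trigger, []):
        action(payload)

crm, notifications = [], []

@on("event_signup")
def add_to_crm(payload):
    crm.append(payload["email"])          # stand-in for a CRM API call

@on("event_signup")
def ping_sdr(payload):
    notifications.append(f"Call {payload['email']}")  # stand-in for a Slack ping

fire("event_signup", {"email": "lead@example.com"})
```

One trigger, two downstream actions, no human in the loop, which is exactly the shape of the event-signup example in the post.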
The design bottleneck in experimentation is about to break open, and it's changing fast.

Two people on our team (Tales Sampaio and Laszlo Zagyva) presented their AI design workflows this week, and I want to share what they're finding because I think it maps to where a lot of experimentation teams are headed.

The core problem with AI design tools for CRO/testing has been twofold: outputs are off-brand (tools don't know your fonts, components, spacing) and outputs are static (wrong font? back to the prompt, ping-pong until you get it right).

Tales found a tool called Alloy that solves most of this. It's a Chrome extension that scans the actual live page... not just screenshots; it analyzes the code, fonts, colors, and component patterns. Then you prompt the change you want. He replaced a scrollable product carousel with a static 8-product grid on a client PLP. On-brand result in two minutes. Exported to Figma. Test variant ready for dev.

The unlock isn't "AI can design." It's that for straightforward test ideas where you already know what you want, a PM or strategist can produce a usable mockup without pulling dedicated design resources.

Laszlo went further. He built a Claude skill that connects to our Airtable base, reads the test brief, and generates a Figma file in the background. No designer involvement for initial setup. It produces wireframes of the control and treatment, sets up pages, and cross-references previous tests in the base. If someone forgot to include the URL, it finds the right page on the client's site from the touchpoint description alone.

Then the agentic layer. Laszlo demoed prompting an AI agent to make edits directly on a design canvas. Asked to add a secondary CTA, the agent went to the client's live site, looked up how secondary CTAs are styled, and applied that styling. It did the research a designer would do.

Right now maybe 20% of our design process is automated. Laszlo thinks that once the live control can be pulled into Figma programmatically (they're working on this with the plugin developers), that jumps to ~70%. The designer comes in for the final ~30% of creative refinement and QA.

I keep coming back to the velocity framing. If learning rate is the primary metric for experimentation programs (and I think it is), then compressing the design bottleneck without sacrificing quality directly increases your program's learning rate. Less assembly work for designers. More creative problem-solving.

This isn't replacing designers. It's changing what they spend their time on.

Curious what other experimentation teams are seeing here. Is AI changing the design layer of your testing workflow yet, or still mostly on the analysis and ideation side?