Software 3.0 - The Developer Tooling Revolution

We're not just getting better tools; we're witnessing a full replatforming of software development. Bessemer's "Software 3.0" roadmap frames it as a fundamental shift in how software is built, not an incremental upgrade.

The headline that should grab every developer's attention: AI is expected to write 95%+ of code by 2030. This isn't speculation. GitHub Copilot already drives over 40% of GitHub's $2B annual revenue.

Five shifts reshaping developer tooling:

1. English is the new programming language
Natural language is becoming the primary interface. Describe what you want, and AI builds it. The barrier between "technical" and "non-technical" is dissolving.

2. Developers spend time differently now
AI takes the grunt work (debugging, code reviews, docs, environment setup) so developers can focus on architecture, creative problem-solving, and high-impact features; MTTR drops from hours to minutes.

3. New infrastructure for AI-native development
Just as Auth0 eliminated authentication complexity and Stripe abstracted payments, foundational layers are emerging: memory-as-a-service, AI-native frameworks, and runtime infrastructure that removes GPU headaches.

4. Context engineering becomes critical
Managing what information AI models can access, and how, becomes a competitive advantage. Organizations will invest in context pipelines the same way they invested in data engineering.

5. Agent Experience replaces Developer Experience
Tools are being redesigned for AI agents as first-class users. The best developer platforms won't just serve humans; they'll enable AI agents to operate autonomously.

What this means for your team:
Winning teams don't just adopt AI; they redesign their workflows. Small squads now deliver department-scale output as cycles compress from months to minutes. AI agents handle the repetitive work; developers focus on architecture, creativity, and domain judgment. Not replaced, amplified.
The uncomfortable truth: if your team isn't experimenting with AI-native development workflows right now, you're not just behind on tooling; you're operating with a fundamentally different cost structure and velocity than your competition. The entire developer tooling landscape is being rebuilt from first principles. The question isn't whether to adapt; it's how quickly you can reorient around this new paradigm.

💬 How is AI changing your development workflow? Are you seeing productivity gains or still experimenting?

#AIDevelopment #DeveloperTools #FutureOfCoding #AIEngineering #SoftwareDevelopment #TechTrends #AIAgents #NexaLabs
"Software 3.0: AI's Impact on Developer Tooling"
More Relevant Posts
-
🚀 Code Execution with MCP: The Game-Changer for AI Agent Efficiency

As AI agents scale to work with hundreds of tools, we're hitting a critical bottleneck: context window overload. Here's how code execution with the Model Context Protocol (MCP) is solving this, with real benchmarks.

Traditional MCP implementations load ALL tool definitions upfront, creating massive token overhead. When your agent connects to 50+ servers with 1,000+ tools, you're burning tokens before even reading the user's request.

Real-World Scenario: Google Drive → Salesforce Sync
Task: Download a 2-hour meeting transcript from Google Drive and attach it to a Salesforce lead.

❌ Flow 1: Traditional Direct Tool Calling
1. Load 150,000 tokens of tool definitions into context
2. Call gdrive.getDocument() → 50,000-token transcript
3. Pass the full transcript through model context
4. Call salesforce.updateRecord() → rewrite 50,000 tokens again
5. Total: ~250,000 tokens processed

Problems:
- Expensive: processing 250K tokens per request
- Slow: multiple model roundtrips with massive payloads
- Error-prone: the model must manually copy a 50K-token transcript
- Context limits: may exceed the window size entirely

✅ Flow 2: Code Execution with MCP

```typescript
// Agent only loads 2 tool definitions (2,000 tokens)
import * as gdrive from './servers/google-drive';
import * as salesforce from './servers/salesforce';

const transcript = (await gdrive.getDocument({ documentId: 'abc123' })).content;
await salesforce.updateRecord({
  objectType: 'SalesMeeting',
  recordId: '00Q5f000001abcXYZ',
  data: { Notes: transcript },
});
```

Results: just 2,000 tokens loaded (a 98.7% reduction!)
Data flows directly between services and never enters model context. Single execution step, zero manual data copying. Works with documents of ANY size.

📊 The Benchmark:
- Token reduction: 150,000 → 2,000 (98.7% savings)
- Cost impact: 75x cheaper per operation
- Latency: eliminates multiple model roundtrips
- Scalability: works with 1,000+ tools without context bloat

The Bottom Line: Code execution with MCP isn't just an optimization; it's a fundamental shift in how we build AI agents. By treating MCP servers as code APIs instead of direct tools, we unlock:
✅ Progressive tool discovery (load what you need)
✅ Context-efficient data processing
✅ Privacy-preserving workflows (PII never touches the model)
✅ Persistent state and reusable skills
✅ Familiar programming patterns (loops, conditionals, error handling)

The MCP community has already built thousands of servers, and this approach makes it feasible to connect agents to ALL of them without token explosion.

Are you building with MCP? How are you handling context efficiency at scale?

#AI #MachineLearning #LLM #MCP #ModelContextProtocol #AIAgents #SoftwareEngineering #CloudComputing #Anthropic

Read the full technical deep-dive: https://lnkd.in/guQ4SpAh
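The "progressive tool discovery" idea above can be sketched in a few lines. This is a toy model, not MCP code: `ToolDef`, `registry`, and `contextCost` are invented names, and the token counts are illustrative.

```typescript
// Toy model of progressive tool discovery (illustrative names, not MCP APIs).
type ToolDef = { name: string; schemaTokens: number };

// Imagine ~1,000 definitions across 50+ servers; only two are shown here.
const registry: Record<string, ToolDef> = {
  'gdrive.getDocument': { name: 'gdrive.getDocument', schemaTokens: 1000 },
  'salesforce.updateRecord': { name: 'salesforce.updateRecord', schemaTokens: 1000 },
};

// Load only the definitions the current task needs, and count the context cost.
function contextCost(needed: string[]): number {
  return needed.reduce((sum, name) => sum + registry[name].schemaTokens, 0);
}

const lazyCost = contextCost(['gdrive.getDocument', 'salesforce.updateRecord']);
console.log(lazyCost); // 2000 tokens, no matter how large the registry grows
```

The point of the benchmark is exactly this shape: `lazyCost` stays flat as you connect more servers, while the upfront-loading cost grows with every tool definition in the registry.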
-
Google just released a detailed 60-page manual on how to build production-ready AI agents. Here's a condensed version for startup founders who want to move fast but build right:

1. Move Past Basic Chatbots
-> AI agents aren't just for answering questions. They think, reason, and act.
-> Using frameworks like ReAct, agents combine reasoning, tool usage, and observation to handle multi-step tasks.

2. Pick the Right Development Track
-> Code-first: Use the Agent Development Kit (ADK) for fine-grained control.
-> No-code: Try Google Agentspace to get started quickly.
-> Pre-built: Use Gemini Code Assist or Cloud Assist for specific use cases.

3. Know the Four Pillars of Agent Architecture
-> Models: Flash for speed, Pro for advanced reasoning, Flash-Lite for efficiency.
-> Tools: Integrations with APIs, databases, and external services.
-> Orchestration: The brain that manages reasoning and workflow.
-> Runtime: Deploy with Vertex AI Agent Engine or Cloud Run.

4. Use Grounding to Keep Agents Accurate
-> RAG brings in real documents.
-> GraphRAG maps relationships.
-> Agentic RAG lets agents actively explore and reason over new information.

5. Design Thoughtful Memory Systems
-> Long-term: Knowledge stored in Vertex AI Search.
-> Working memory: Cached in Memorystore.
-> Transactional: Logged in Cloud SQL for traceability.

6. Treat AgentOps Like DevOps
-> Testing should be systematic, not intuitive.
-> Use the Agent Starter Pack for automated testing, versioning, and CI/CD pipelines.

7. Build for Interoperability
-> MCP links any data source.
-> A2A (Agent-to-Agent) enables collaboration between agents.
-> Design once, deploy everywhere.

8. Embed Safety and Security from the Start
-> Add guardrails for inputs and outputs.
-> Follow least-privilege access for all services.
-> Keep full audit logs for compliance and debugging.

9. Plan for Scale Early
-> Start managed with Vertex AI Agent Engine.
-> Use containers when you need flexibility.
-> Add observability, logging, and performance tracking from day one.

10. Your Startup Toolkit
-> ADK: Write custom logic in Python or Java.
-> Agent Starter Pack: Ready-to-deploy infra templates.
-> Firebase Studio: End-to-end app development with AI integration.
-> Gemini CLI: Quick experiments straight from the terminal.

Full guide link available in the comments.
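The ReAct pattern in point 1 (reason → act → observe, in a loop) can be sketched without any Google tooling. This is an illustration only: `plan` stands in for a model call, and the tool names are invented.

```typescript
// Minimal ReAct-style loop (illustrative sketch; not the ADK API).
type Tool = (input: string) => string;
type Step = { thought: string; action?: { tool: string; input: string } };

// The "reasoning" step: a scripted planner standing in for an LLM call.
function plan(task: string, observations: string[]): Step {
  if (observations.length === 0) {
    return { thought: 'Need data first', action: { tool: 'lookup', input: task } };
  }
  // Once we have an observation, conclude.
  return { thought: `Done: ${observations[observations.length - 1]}` };
}

function runAgent(task: string, tools: Record<string, Tool>, maxSteps = 5): string {
  const observations: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = plan(task, observations);        // reason
    if (!step.action) return step.thought;        // no action needed: finished
    observations.push(tools[step.action.tool](step.action.input)); // act + observe
  }
  return 'Step budget exhausted';
}

const result = runAgent('capital of France', {
  lookup: (q) => (q.includes('France') ? 'Paris' : 'unknown'),
});
console.log(result); // "Done: Paris"
```

The multi-step character comes from the loop: each iteration feeds fresh observations back into the reasoning step, which is what lets agents handle tasks a single prompt cannot.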
-
When microservices became the industry standard, I watched organizations rush to adopt polyglot architectures. Every team could choose its own programming language. Maximum flexibility, maximum autonomy. The result was organizational chaos.

I'm seeing the same pattern with AI adoption in software teams. Without a clear strategy, some engineers are delegating functional requirements to AI, writing entire Jira tickets from vague prompts. Others refuse to touch the tools at all, becoming increasingly skeptical while their colleagues generate entire codebases with zero architectural understanding. New job titles are emerging on LinkedIn: "I fix AI slop for companies."

This creates real problems. Engineers who want to maintain code quality get frustrated watching generated code pile up without review. Some become AI haters, which isn't good, because there is genuine value in these tools when used correctly.

The inefficiencies multiply. Teams that over-rely on AI lose architectural control. Teams that under-use it waste time on tasks that could be automated. When every team uses different tools without coordination, nobody shares learnings or builds common practices. The microservices effect taught us something important: hype without strategy creates problems, not solutions.

What actually works is simpler than most organizations think. A basic spreadsheet with five columns: business lines (stable products versus pilots versus innovation projects), use cases for each line (refining requirements, coding integrations, building features), control levels you're applying (pilot, co-pilot, or autopilot), which tools you're using, and examples of what's working that other teams can learn from.

Co-create it with representatives from different teams. Assign an owner who coordinates regular meetings. Run experiments, evaluate what's actually delivering value, then either expand successful approaches or roll back failed ones.
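The five-column registry described above is simple enough to capture as a typed record. A minimal sketch, assuming nothing beyond the columns in the post; every field name and value here is illustrative.

```typescript
// Illustrative sketch of the five-column AI-adoption registry.
type ControlLevel = 'pilot' | 'co-pilot' | 'autopilot';

interface AiUseEntry {
  businessLine: 'stable-product' | 'pilot' | 'innovation'; // column 1
  useCase: string;           // column 2: e.g. refining requirements
  controlLevel: ControlLevel; // column 3
  tools: string[];           // column 4
  workingExamples: string[]; // column 5: learnings other teams can reuse
}

const aiRegistry: AiUseEntry[] = [
  {
    businessLine: 'stable-product',
    useCase: 'coding integrations',
    controlLevel: 'co-pilot',
    tools: ['GitHub Copilot'],
    workingExamples: ['generated API client, human-reviewed before merge'],
  },
];

// One evaluation pass the owner might run: which experiments produced
// learnings worth expanding to other teams?
const proven = aiRegistry.filter((e) => e.workingExamples.length > 0);
console.log(proven.length);
```

The value isn't in the data structure; it's that making the columns explicit forces each team to state its control level and evidence, which is exactly what the spreadsheet ritual is for.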
Did you learn from the microservices mistakes, or are you about to repeat them? Watch the complete clip: https://lnkd.in/dN_G_7cu Full webinar: https://lnkd.in/dzM-zpmH
-
I just built & shipped to production a social media scheduling app in 12 hours. Not a prototype. A real app with...

→ OAuth for Twitter & LinkedIn
→ Smart retry logic
→ Template systems
→ Encrypted token storage
→ Real-time updates
→ Analytics dashboard

Three years ago? This would've taken me 2-3 months.

🚨 THE PROBLEM
Most developers are still thinking in the "pre-AI era." The real breakthrough isn't the tools; it's building a SYSTEM that leverages AI at every stage.

⚙️ MY 3-PART SYSTEM

1️⃣ Lean, AI-First Starter Kit
Those bloated boilerplates with 50 features? They're no longer necessary. With AI, you build exactly what you need, when you need it. My stack: Next.js + Convex + Better Auth + TypeScript. That's it.

2️⃣ BMAD Method & Specialized Agents
An agile framework designed for AI-driven development. Specialized agents and workflows that adapt based on project complexity.
→ https://lnkd.in/eSw9w9gM

3️⃣ AI Agent Orchestration
Claude Code + 84 specialized agent systems. Not simple prompts; sophisticated workflows coordinating multiple AI agents. Architecture. Security auditing. Test generation. Code review. Documentation. All orchestrated.
→ https://lnkd.in/eJMW7tj4

🎁 TWO FREE RESOURCES

🤖 AI Starter Kit
Production-ready Next.js + Convex + Better Auth template. Everything you need, nothing you don't.
→ https://lnkd.in/ezQaMGB9
→ https://lnkd.in/ehzTpkTm - plug and play features as you need them

📝 SocialPost Source Code
The complete app I built in 12 hours. Clone it, deploy it, and cancel your subscriptions 😎
→ https://lnkd.in/etUmwBev

🤔 WHY I'M SHARING THIS
I've taught 50,000+ developers through HowToCode.io. I'm in the Top 1% of TypeScript engineers on GitHub. But everything about "the right way" to build software needs refactoring for the AI era.

🧠 I've spent over three years figuring this out.
Now I'm launching Refactoring AI, a weekly newsletter sharing:
→ Real build logs (shipping features in hours, full apps in days)
→ Deep dives into AI-first systems and frameworks
→ Tool reviews, when to use them, and why
→ Mistakes I've made and how to avoid them

No fluff. Just battle-tested approaches I use every day as a professional software engineer.

💬 I NEED YOUR HELP
What's your #1 challenge using AI to build apps? Is it:
- Not knowing where to start?
- AI giving you broken code?
- Spaghetti code 🍝
- Too many tools, don't know which to use?
- Can't go from tutorial to real project?
- Something else?

Drop a comment. I respond to every single reply. If you're tired of taking months to ship ideas that could take days, this newsletter is for you.

Subscribe: https://lnkd.in/eEfdvVuY

Let's refactor how we build software. 🚀

P.S. In case you are wondering... yes, I did use SocialPost to post this 😁

#AIEngineering #SoftwareDevelopment #WebDevelopment #AITools #TypeScript #NextJS #IndieHacker #SoloPreneur
-
The Shower, The AI, and The Death of the Developer Job (As We Knew It)

TL;DR: Model horsepower isn't the limit; direction is. Use a file-driven board, enforce tests and a "stop rule" (human-in-the-loop), and run an orchestrator that turns vision into tickets and schedules coding agents. The result: parallel, auditable, high-quality delivery, with no hallucinated architectures.

A few weeks ago I stepped out of the shower, grabbed my phone, and said: "Create a ticket that explores three architectures for blind-spot video detection. Compare cost, latency, failure rates. Include trade-offs, risks, unanswered questions."

By the time my coffee was ready, the ticket existed. Not a stub or a to-do: a reasoned brief with decision points, risk tables, and stakeholder questions. I didn't open an editor or write a spec; I didn't even sit down. That's when I realized the job had changed.

Copilot whispers; cloud agents build. Copilot feels like pair-programming: you guide and nudge. Cloud coding agents don't collaborate. They deploy like a self-directed unit. Wrong repo? They won't question you. They'll spend 30 minutes on the wrong problem, touch files, invent patterns you never asked for, and hand back irrelevance. AI doesn't need supervision. It needs guardrails, memory, gates.

So I built the simplest coordination system. No Jira, Linear, or plugins. Just folders + markdown:

/docs/board/ → backlog → todo → in-progress → review → done
/docs/messages/ → awaiting-decision → resolved

Each ticket includes: problem summary, constraints, trade-offs & risks, decisions required, progress log (AI-written), reasoning trail, open questions. Agents never guess. At a boundary, they stop and ask. I answer in plain text. They continue.

Suddenly, coding felt like orchestration, not chaos. My day shifted from "implement/debug" to "is this the right problem, which trade-off, who signs off, how to split missions." I became a Product Manager of AI engineers.

Not because I wanted to, but because the bottleneck isn't code. It's clarity. The new skill isn't coding; it's direction-setting.

The future developer isn't the best coder but the one who can:
• Frame problems precisely
• Translate outcomes into machine-safe missions
• Define decision boundaries
• Evaluate trade-offs
• Design systems that think in parallel
• Own intent, not implementation

We're moving from Software-Led Product to Product-Led Software. From "build this" to "should we build this at all?" AI doesn't replace engineers; it changes how engineering is executed. Humans move upstream, toward meaning, judgment, direction. Some hybrid of product thinker, systems designer, constraint definer, decision owner, AI orchestrator. Someone who debugs direction, not the system. Someone who writes missions, not methods.

We're no longer watching software eat the world. We're watching intent eat implementation. And the winners won't be the best at writing code; they'll be the best at writing clarity.
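The "stop rule" on the file-driven board can be sketched in code. This assumes nothing beyond the board states and ticket fields from the post; the `advance` helper itself is illustrative, not a real tool.

```typescript
// Illustrative sketch of the board's "stop rule": an agent may only move a
// ticket forward when no human decision is pending.
type BoardState = 'backlog' | 'todo' | 'in-progress' | 'review' | 'done';

type Ticket = {
  id: string;
  state: BoardState;
  openQuestions: string[]; // decisions only a human may make
  progressLog: string[];   // AI-written reasoning trail
};

function advance(ticket: Ticket): Ticket | 'awaiting-decision' {
  // Stop rule: at a decision boundary, park the ticket and ask the human.
  if (ticket.openQuestions.length > 0) return 'awaiting-decision';

  const order: readonly BoardState[] = ['backlog', 'todo', 'in-progress', 'review', 'done'];
  const next = order[Math.min(order.indexOf(ticket.state) + 1, order.length - 1)];
  return { ...ticket, state: next, progressLog: [...ticket.progressLog, `moved to ${next}`] };
}

const parked = advance({
  id: 'T-1', state: 'in-progress',
  openQuestions: ['Which trade-off?'], progressLog: [],
});
console.log(parked); // 'awaiting-decision': the agent stops and asks instead of guessing
```

The design choice worth noting: the gate is structural (a field on the ticket), not behavioral (a prompt asking the agent to be careful), which is what makes the trail auditable.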
-
I recently paid double for hosting because I trusted AI-generated code. It looked fine, but the resource usage was off for my scale.

This reminds me of rushing to my advisor's office, excited about a breakthrough paper. Within minutes, he'd identified multiple fundamental flaws I'd missed. Today, I'm seeing the same blind-trust pattern with AI-generated code, and the consequences are far worse than a rejected thesis. Production code is being riddled with bugs because developers hit 'accept' without applying critical thinking.

*Avoid Auto-Accepting Generated Code*
Entire customer databases have been erased because developers trusted AI-generated migration scripts without tracing through the logic. Before any AI-generated code touches production, trace through the execution path manually.

*Know Your Codebase Better Than the AI Does*
LLMs tend to recreate solutions from scratch rather than checking existing code. Your team might have a perfect user validation function, but the AI will write you a brand new one. Codebases can easily double purely from duplicated functionality that developers didn't catch.

*Domain Knowledge Guides Efficiency*
I caught an AI suggestion that would have dramatically increased the complexity of a function that can run thousands of times per second. The code looked clean and would have worked in development. In production, it would have crushed our servers. Understanding your system's constraints lets you guide AI toward optimal solutions instead of accepting the first thing it generates.

*Debug With System Understanding*
AI excels at identifying symptoms but struggles with underlying causes. Teams spend days chasing AI-identified issues that turn out to be configuration problems. AI spots patterns but has a hard time grasping the broader system context causing the problem.

*Watch Your API Costs*
Accepting AI-generated code with API calls without reviewing the implications has cost teams far more than intended. The AI might suggest calling an expensive API inside a loop, making thousands of requests where you expected dozens.

*Treat AI as Your Most Talented Junior Developer*
Treat AI-generated code like code from a talented junior developer: skilled, helpful, but requiring supervision. Build team practices that preserve critical thinking through code reviews focused on business logic and debugging processes that start with system understanding. When you combine AI efficiency with human oversight, you get both productivity gains and system reliability.

What's the most dangerous AI code suggestion you've caught before it hit production? Share your close calls; we need more honest conversations about these real-world challenges.

#AI #SoftwareDevelopment #CodeReview #TechnicalDebt #ResponsibleAI #CriticalThinking
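The loop-versus-batch cost trap is easy to demonstrate with a toy priced API. `fetchUser` and `fetchUsers` here are stand-ins, not a real SDK; the counter just makes the billable-request difference visible.

```typescript
// Toy model: each function call below represents one billable API request.
let apiCalls = 0;
const fetchUser = (id: number) => { apiCalls++; return { id }; };
const fetchUsers = (ids: number[]) => { apiCalls++; return ids.map((id) => ({ id })); };

const ids = Array.from({ length: 1000 }, (_, i) => i);

// The pattern AI often generates: one request per item.
apiCalls = 0;
ids.forEach((id) => fetchUser(id));
const naiveCalls = apiCalls; // 1,000 billable requests

// What a reviewer should push for: one batched request.
apiCalls = 0;
fetchUsers(ids);
const batchedCalls = apiCalls; // 1 billable request

console.log(naiveCalls, batchedCalls);
```

Both versions "work" and return the same data, which is exactly why this slips through review: the bug is in the cost profile, not the output.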
-
🚀 New Article Published! 🚀

I've just published a deep dive on Specification-Driven Development (SDD) and how GitHub's Spec Kit is redefining how we build software in the era of AI.

🔍 In the article, I break down:
⦿ Why SDD is a step beyond TDD/BDD, and why the spec is now the code
⦿ How AI accelerates development by continuously auto-generating plans and code directly from specifications
⦿ The practical commands and workflows of Spec Kit (/specify, /plan, /tasks)
⦿ A hands-on learning roadmap for anyone to practice modern spec creation and build robust, production-grade apps
⦿ The unique challenges and opportunities of SDD for today's teams

Whether you're a CTO, architect, or developer, this is the next step in writing scalable, maintainable, and AI-powered applications.

📝 Read the full article here: https://lnkd.in/ga7ZJG_b

If you're interested in learning how intent-driven, AI-accelerated workflows can transform your engineering, let's connect or start a discussion in the comments!

#SDD #SpecificationDrivenDevelopment #AI #SoftwareEngineering #GitHub #SpecKit #DevOps #TechLeadership
-
🤖 "AI agents are reshaping the future of developer tooling." - Satya Nadella

As a developer who's spent over a decade in the .NET ecosystem, this statement really hits home. We've seen IDEs, compilers, CI/CD, and cloud pipelines evolve, but now we're entering a stage where AI isn't just assisting development... it's becoming part of the toolchain itself.

Imagine:
- PRs reviewed by context-aware AI code reviewers
- Architecture suggestions generated from system diagrams
- Refactors and optimizations proposed before bottlenecks occur

Tools like GitHub Copilot, IntelliCode, and AI-powered code analyzers are only the beginning. In the next few years, the biggest leap might not be in frameworks or languages; it'll be in how developers collaborate with AI.

The question for us as senior engineers isn't "Will AI replace developers?" It's "How do we train it to think like us?"

#AI #DotNet #CSharp #SoftwareEngineering #DeveloperTools #CodingFuture #SoftwareArchitecture #MachineLearning #GitHubCopilot #TechTrends #Innovation #DevelopersLife #Microsoft

https://lnkd.in/ewNVxWBJ
-
Attention software engineers & team leads: Generative AI isn’t just an experiment anymore — it’s a core part of modern workflows. In my latest guide “GenAI Tools for Developers: The Ultimate Playbook” I walk through how AI-powered tools are enhancing productivity, improving code quality, and automating parts of the dev lifecycle you didn’t even realise could be automated. 👉 Read the full article: https://lnkd.in/gCmmsVxZ Whether you’re working solo or leading a team, this is the kind of resource you’ll want bookmarked. Let’s build smarter together. #GenAI #AIinEngineering #DevOps #SoftwareEngineering #ProductivityTools
-
I built an AI-powered system that automatically maintains documentation for fast-moving codebases. Here's what happened in the first production run:

- 27 documentation chapters analyzed and updated
- 94 fixes applied automatically
- Zero manual documentation updates required
- Caught a critical 10x default-value error that human review likely would have missed

The Problem: Documentation drift costs organizations time, credibility, and customer satisfaction. For Debtmap (my Rust-based tech debt analyzer), rapid iteration meant docs were outdated within days of writing them.

The Solution: I built a workflow using Prodigy (an AI orchestration system influenced by infrastructure-as-code tools) that:
1. Scans the codebase to build a feature inventory (ground truth)
2. Orchestrates parallel AI agents to detect drift in each chapter
3. Applies fixes while preserving writing quality
4. Validates that everything builds and integrates correctly

The workflow uses a map-reduce architecture with isolated git worktrees, allowing multiple agents to work concurrently without conflicts. It averaged 4.5 analysis passes per chapter to ensure comprehensive coverage.

Business Impact:
- Documentation no longer blocks releases
- Lower support costs from accurate docs
- 8-16 hours saved per release cycle
- Scalable quality as the codebase grows

The approach is generalizable to any documentation maintenance challenge. The key is treating docs as code and applying modern automation practices.

Read the full case study with implementation details and metrics: https://lnkd.in/ga2d_auk
For a complete setup guide, see the Prodigy docs (also generated by this workflow): https://lnkd.in/g4z6V62v

Open source projects:
- Debtmap: https://lnkd.in/gtajsv28
- Prodigy: https://lnkd.in/gcA-iyZZ
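The map-reduce shape of a workflow like this (map: each agent checks one chapter against a ground-truth inventory; reduce: aggregate the fixes) can be sketched with pure functions. This is an illustration of the architecture only, not Prodigy's actual API; the inventory, chapter names, and the 10x default mismatch are made-up stand-ins.

```typescript
// Illustrative map-reduce sketch of doc-drift detection (not Prodigy's API).
type Chapter = { name: string; documentedDefault: number };
type Fix = { chapter: string; note: string };

// Ground truth from scanning the codebase (hard-coded here for illustration).
const codeDefaults: Record<string, number> = { timeoutSecs: 30 };

// Map step: one "agent" checks a single chapter against the inventory.
function detectDrift(ch: Chapter): Fix[] {
  const actual = codeDefaults['timeoutSecs'];
  return ch.documentedDefault === actual
    ? []
    : [{ chapter: ch.name, note: `code default is ${actual}, docs say ${ch.documentedDefault}` }];
}

// Reduce step: aggregate fixes across all chapters.
const chapters: Chapter[] = [
  { name: 'config', documentedDefault: 300 }, // a 10x-wrong documented default
  { name: 'quickstart', documentedDefault: 30 },
];
const fixes = chapters.flatMap(detectDrift);
console.log(fixes); // one fix, flagging the 'config' chapter
```

Because each map step only reads the shared inventory and its own chapter, the steps are independent, which is what makes running them in parallel (e.g. in isolated git worktrees) safe.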