AI search is already deciding which early-stage tech companies get seen, and most are completely invisible inside it (even with solid SEO).

I've been testing AI visibility strategies with B2B SaaS startups over the past year. What I've learned: traditional SEO metrics tell you very little about whether ChatGPT, Perplexity, or Google's AI Overviews will surface your brand. The gap between what founders think works and what actually gets cited is massive.

Here's the framework I've found that consistently moves the needle:

𝟭. 𝗕𝘂𝗶𝗹𝗱 𝗔𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆 𝗔𝗜 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗥𝗲𝗰𝗼𝗴𝗻𝗶𝘇𝗲

AI evaluates content using E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. Of these four, trust matters most.

What this looks like in practice:
→ Include detailed author bios with specific credentials
→ Share first-hand experience with real outcomes
→ Support every claim with verifiable sources
→ Update content regularly (53% of ChatGPT citations come from content updated in the last 6 months)

𝟮. 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗖𝗼𝗻𝘁𝗲𝗻𝘁 𝗳𝗼𝗿 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗣𝗮𝗿𝘀𝗶𝗻𝗴

Over 72% of first-page results use schema markup. AI systems need structured data to understand your content.

The tactical approach:
→ Implement JSON-LD schema markup
→ Use logical heading hierarchies (H1/H2/H3)
→ Break content into short, scannable paragraphs
→ Create standalone quotable statements with specific data

𝟯. 𝗠𝗮𝘁𝗰𝗵 𝗡𝗮𝘁𝘂𝗿𝗮𝗹 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗤𝘂𝗲𝗿𝗶𝗲𝘀

Searches containing 5+ words grew 1.5× faster than shorter queries in 2023-2024. AI chat interactions last 66% longer than traditional searches because users are asking complete, conversational questions.

How to adapt:
→ Research "People Also Ask" questions in your space
→ Target long-tail, question-based queries
→ Structure answers as standalone responses
→ Use conversational, clear language

𝟰. 𝗨𝘀𝗲 𝗛𝗶𝗴𝗵-𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗖𝗼𝗻𝘁𝗲𝗻𝘁 𝗙𝗼𝗿𝗺𝗮𝘁𝘀

Content over 3,000 words generates 3× more traffic than shorter pieces. Featured snippets have a 42.9% clickthrough rate, and 40.7% of voice search answers come from them.
The formats that work:
→ Comparison articles with modular sections
→ Detailed listicles (2,300+ words for voice search)
→ FAQ sections with direct answers
→ Data-rich content with clear statistics

𝟱. 𝗧𝗿𝗮𝗰𝗸 𝗪𝗶𝘁𝗵 𝗚𝗘𝗢 𝗧𝗼𝗼𝗹𝘀, 𝗡𝗼𝘁 𝗦𝗘𝗢 𝗧𝗼𝗼𝗹𝘀

Traditional SEO metrics show weak correlation with AI citations. You need specialized Generative Engine Optimization (GEO) tools.

What to track:
→ Brand mentions across AI platforms
→ Citation rates in ChatGPT, Perplexity, AI Overviews
→ Share of voice for key queries
→ Sentiment in AI-generated responses

This isn't about abandoning SEO. It's about expanding your visibility strategy to include the platforms where your buyers are already searching.

Repost this ♻️ if you found it helpful!

P.S. If you're a technical founder trying to get visible in AI search, start with this 5-step framework.
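The JSON-LD schema markup recommended in step 2 often takes the form of FAQPage markup. A minimal sketch, with a placeholder question and answer (the Q&A content here is illustrative, not from the post):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long does onboarding take?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Most teams are fully onboarded within two weeks, including data migration and training."
    }
  }]
}
</script>
```

Each Question/Answer pair doubles as a standalone, quotable statement, which is exactly what AI systems tend to lift into generated answers.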
Optimizing Content for Voice Assistants
Summary
Optimizing content for voice assistants means creating and structuring information so that digital voice helpers, like Siri or Alexa, can easily find, understand, and share it aloud in response to spoken questions. With the rise of audio-first searches and conversational AI, it’s important to make content clear, structured, and natural-sounding for voice-driven discovery.
- Use natural language: Write content in a conversational way, focusing on answering the exact questions people might speak to a voice assistant.
- Structure for clarity: Break your content into short paragraphs, use clear headings, and add Q&A sections so voice assistants can quickly identify and deliver precise answers.
- Apply voice-ready markup: Add special code like schema or Speakable markup to help voice assistants recognize which parts of your content are suitable for audio responses.
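The Speakable markup mentioned above is a schema.org property (still limited in platform support) that flags which page sections are suitable for text-to-speech. A sketch, where the page name, URL, and CSS selectors are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "How to Winterize Your Sprinkler System",
  "url": "https://example.com/winterize-sprinklers",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".summary", ".faq-answer"]
  }
}
</script>
```

The `cssSelector` array points voice assistants at the short, self-contained sections you want read aloud, rather than the full page.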
SEO for AI Mode: How to Win in Conversational Search

Search is shifting from keywords to conversations. With AI-powered results and conversational search interfaces expanding, users now ask multi-layered questions instead of typing short queries. That changes how content ranks, and how it gets selected.

The data:
• Conversational queries are significantly longer, often 2-3× traditional keyword length
• Informational SERPs with AI-generated summaries show measurable CTR compression
• Pages with structured answers and clear entity signals are more frequently surfaced in AI responses

Ranking is no longer about matching a keyword. It's about being the most reliable answer in a dialogue.

⸻

What AI Mode Prioritizes

AI-driven search systems favor:
• Direct, concise answers at the top of pages
• Structured formatting (lists, steps, definitions)
• Clear entity associations and topical depth
• Demonstrated expertise and consistency across content

Thin, surface-level content is increasingly ignored.

⸻

How to Optimize for Conversational Search

1. Answer layered questions. Instead of targeting one keyword, address primary and secondary intent within the same piece.
2. Add contextual depth. Explain why, how, risks, benefits, and implications, not just definitions.
3. Strengthen topical clusters. AI models favor domains that demonstrate breadth across a subject.
4. Improve entity clarity. Use consistent terminology, structured headings, and schema to reinforce what your brand is associated with.

⸻

The Strategic Shift

Traditional SEO optimized for rankings. AI Mode optimization focuses on:
• Being selected
• Being cited
• Being trusted

The brands that adapt will dominate conversational discovery.
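One common way to reinforce the entity clarity described in point 4 is Organization schema, which ties your brand name to your domain and social profiles. A minimal sketch with placeholder values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "description": "Example Co builds conversational analytics software for B2B teams.",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://twitter.com/exampleco"
  ]
}
</script>
```

The `sameAs` links help search and AI systems resolve your brand to a single entity, so mentions across the web consolidate into one authority signal instead of fragmenting.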
-
Voice search is the next frontier everyone's ignoring.

While companies obsess over ChatGPT citations and LLM optimization, there's a massive opportunity hiding in plain sight: voice-first discovery.

AI can't read your blog content aloud yet, but that's changing fast. Google's Speakable schema is already in beta for news publishers, and voice search queries are growing 35% year-over-year according to recent data.

The gap is huge. Most publishers are completely unprepared for audio-first discovery, treating voice search as an afterthought instead of a primary optimization channel.

Here's what's interesting: the companies that nail voice optimization early will dominate audio discovery before their competitors even realize it's a thing.

We started testing voice-first content strategies with select uSERP clients after noticing the trend. Here's the voice-first content approach that's working for us:

🚀 Conversational query targeting: focus on questions people actually ask aloud. "Best marketing automation for small teams" instead of "marketing automation software comparison."
🚀 Audio comprehension structure: clear Q&A blocks and concise answers designed for 20-30 second voice excerpts that provide complete value.
🚀 Voice-optimized schema implementation: Speakable markup for eligible content sections, plus FAQPage schema for broader voice search optimization.
🚀 Context-rich content creation: lead sections with clear topic identification like "This guide explains..." to help voice assistants understand and cite content accurately.
🚀 Conversational flow testing: make sure content sounds natural when read aloud, not just when scanned visually.

The timing is perfect because voice search optimization is still largely unexplored territory. Most content is optimized for scanning, not listening. The brands that flip this approach will capture intent through entirely new discovery channels.

Are you seeing any voice search traffic yet?
How are you thinking about optimizing content for audio-first discovery? 👇
-
🎙 The hidden speech-to-text bottlenecks most teams miss 🎙

Most teams obsess over Word Error Rate when optimizing STT, but our analysis of top-performing voice agents shows that's only part of the equation. Here are three counterintuitive insights that drive real performance gains:

⚡ Perceived speed > raw accuracy. A lower time-to-first-token (TTFT) makes voice AI feel more responsive, even if total processing time stays the same. Shaving 100-200 ms off TTFT can dramatically improve user experience.

🎯 The fine-tuning paradox. Domain-specific tuning can improve accuracy 3-5× for specialized vocabulary (legal, medical, automotive), but it plateaus quickly. Instead of overfitting, focus on keyword recall rates to ensure mission-critical terms are always captured.

🌎 Accent gaps are killing your accuracy. Most voice agents show a 30% accuracy gap between native and non-native speakers. Stop training on "Californian accents reading newspapers" and start collecting conversational speech reflecting your actual users. For global applications, consider accent-specific models that treat speech variations as unique linguistic systems.

💡 Pro tip: simulate real user speech in pre-production evals to catch failures before they hit production, with Coval.

What STT levers have you pulled to optimize your voice agents? Share below 👇

In the next few days, I'll be sharing more on building the ultimate Voice AI stack. Follow along for more insights!
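Keyword recall rate isn't a standardized metric, so here is a minimal sketch of how one might compute it over a batch of transcripts. The function name, the word-level matching, and the medical example are all illustrative assumptions, not from the post:

```python
def keyword_recall_rate(transcripts, references, keywords):
    """Fraction of mission-critical keyword occurrences that the STT
    system actually transcribed, across a batch of utterances."""
    expected = 0
    captured = 0
    for hyp, ref in zip(transcripts, references):
        hyp_words = set(hyp.lower().split())
        for kw in keywords:
            kw = kw.lower()
            # Count each reference utterance that should contain the keyword...
            if kw in ref.lower().split():
                expected += 1
                # ...and check whether the transcript captured it.
                if kw in hyp_words:
                    captured += 1
    return captured / expected if expected else 1.0

# Example: "lidocaine" is mission-critical medical vocabulary that the
# STT system mangled into "lido cane" in the first utterance.
rate = keyword_recall_rate(
    transcripts=["give five mils of lido cane", "patient is stable"],
    references=["give five mils of lidocaine", "patient is stable"],
    keywords=["lidocaine", "stable"],
)
print(rate)  # 0.5 — one of two keyword occurrences was captured
```

A Word Error Rate over these utterances would look decent, which is exactly the post's point: a single dropped domain term can matter more than the aggregate score.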
-
OpenAI just dropped a Prompting Guide for Voice AI Agents. Here are 11 actionable insights:

1. Iterate and Test Relentlessly
> Small wording changes dramatically impact behavior; for example, swapping "inaudible" for "unintelligible" improves noisy-input handling.
> Test every prompt modification thoroughly, as minor adjustments can make or break performance.

2. Structure Prompts with Clear Sections
> Use labeled sections (Role, Personality, Tools, Instructions) to help the model find and follow guidance efficiently.
> Organize into focused sections rather than long paragraphs to improve comprehension and consistency.

3. Define Clear Role and Objectives
> Pin the agent's identity explicitly to ensure responses stay conditioned to that role throughout.
> Specify what "success" means for the agent to maintain focus on achieving goals.

4. Control Personality and Tone Precisely
> Set explicit parameters for voice warmth, brevity, and pacing to ensure natural-sounding responses.
> Add specific instructions for speech speed and emotional tone rather than relying on playback parameters.

5. Handle Pronunciation Challenges
> Provide phonetic hints for brand names and technical terms to improve trust and clarity.
> Force character-by-character pronunciation for critical alphanumeric data like phone numbers.

6. Optimize Tool Usage
> Align tool descriptions in prompts with the actual available tools to prevent non-existent function calls.
> Add explicit "when to use" and "when not to use" instructions for each tool.

7. Design Conversation Flow States
> Break conversations into clear phases with specific goals, instructions, and exit criteria.
> Use state machines or dynamic updates to expose only relevant rules and tools per phase.

8. Implement Variety and Natural Speech
> Add variety rules to prevent robotic repetition of the same phrases across turns.
> Provide sample phrases as inspiration, but instruct the model not to always use the exact wording.

9. Handle Unclear Audio Gracefully
> Create specific instructions for responding to background noise, partial words, or silence.
> Define whether the model should ask for clarification or repeat questions when input is unclear.

10. Enable Proactive Tool Calling
> Remove unnecessary confirmation loops by instructing proactive behavior for obvious tool calls.
> Add preambles before tool calls to mask latency and improve user experience.

11. Establish Clear Escalation Paths
> Define explicit thresholds for human escalation, including safety risks and repeated failures.
> Specify exact phrases the model should use when escalating to maintain consistency.

P.S. Check out 200+ such guides on my profile 👋
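The sectioned prompt structure from point 2 can be sketched as plain text. The agent name, company, tool, and wording below are illustrative placeholders, not content from the OpenAI guide:

```text
# Role
You are Alex, a phone support agent for Acme Utilities.
Success means resolving the caller's billing question or escalating cleanly.

# Personality & Tone
Warm and concise. Speak at a measured pace; ask one question at a time.

# Tools
- lookup_account(phone_number): use when the caller asks about billing.
  Do NOT use for outage reports.

# Instructions
- Read account numbers back character by character ("A... 1... 9...").
- If audio is unintelligible, say: "Sorry, I didn't catch that — could
  you repeat it?" Ask for clarification at most twice before escalating.
- Escalate to a human after two failed tool calls or any safety concern.
```

Note how the sketch folds in several of the other insights at once: explicit success criteria (3), character-by-character readback (5), when-to-use/when-not-to-use tool rules (6), unclear-audio handling (9), and an escalation threshold (11).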
-
🔮 Design Guidelines For Voice UX

Guidelines and Figma toolkits to design better voice UX for products that support or rely on audio input ↓

🤔 People avoid voice UIs in public spaces, or for sensitive data.
✅ But they do use them with audio assistants, learning apps, and in-car UIs.
✅ Good conversations always move forward, not backwards.
🤔 The way humans speak is different from the way we write.
🤔 What people say isn't always what they mean by saying it.
✅ First, define relevant user stories for your product.
✅ Sketch key use cases, then add detours, then edge cases.
✅ Design VUI personas: tone of voice, words, sentence structure.
✅ Listen to related human conversations, transcribe them.
✅ Write conversation flows for happy and unhappy paths.
✅ Add markers (Finally, Now, Next) to structure the dialogue.
✅ Accessibility: support shaky voices and speech impediments.
✅ Allow users to slow down or speed up output, or rephrase.
✅ Adjust speech patterns, e.g. speaking to children differently.
🚫 There are no errors or "wrong input" in human interactions.
🤔 Give people time to think: 8-10 s is a good time to respond.
✅ Design for long silences, thick accents, slang, and contradictions.

Keep in mind that many people have been "burnt" by horrible, poorly designed automated phone systems. If your voice UX comes across even nearly as bad, don't be surprised by a very low usage rate.

You can't replicate a long scrollable list in audio, so keep answers short, with a maximum of 3 options at a time. Instead of listing more options, ask one direct question and then branch out. Re-prompt or reframe when certainty is low.

People choose their voice assistant based on the personality it conveys and the friendliness it projects. So be deliberate in how you shape the tone, word choice, and melody of the voice. Don't broadcast personality for repetitive tasks, but let it shine in a conversation. And: if you don't assign a personality to your product, users will do it for you.

So study how your customers speak, and how exactly they explain the tasks your product must perform. The closer you get to a personal human interaction, the easier it will be to earn people's trust.

Useful resources:
Voice Principles, by Ben Sauer https://lnkd.in/dQACgwue
Voice UI Design System, by Orange https://lnkd.in/ezP-9QUu
Designing A Voice Persona, by James Walsh https://lnkd.in/e3WXaxEC
Voice UI Kit (Figma), by Shadiah Garwell https://lnkd.in/eGjJCWf7
Conversational UIs (Figma), by ServiceNow https://lnkd.in/enHVSEWP
Voice UI Guide, by Lars Mäder https://vui.guide/

#ux #design
-
𝗪𝗮𝗻𝘁 𝘆𝗼𝘂𝗿 𝗔𝗜 𝘁𝗼 𝘀𝗼𝘂𝗻𝗱 𝗹𝗶𝗸𝗲 𝗬𝗢𝗨?

Someone asked me, "How do you get ChatGPT not to sound generic?" She said, "I want my content to sound like me." I told her, "You have to train the AI."

The responses from AI platforms are a culmination of what's popular across the web, and they follow a formulaic pattern unless instructed to do something different. That's why you can sometimes tell when a piece of content was written by AI. If you want AI to have your tone of voice and sound like you, you have to give it background information about you and your business, along with writing samples.

𝗛𝗲𝗿𝗲'𝘀 𝘁𝗵𝗲 𝟰-𝘀𝘁𝗲𝗽 𝘀𝘆𝘀𝘁𝗲𝗺 𝗜 𝘂𝘀𝗲 𝘄𝗶𝘁𝗵 𝗰𝗹𝗶𝗲𝗻𝘁𝘀:

1. 𝗛𝗲𝗹𝗽 𝗔𝗜 𝘁𝗼 𝗹𝗲𝗮𝗿𝗻 𝗮𝗯𝗼𝘂𝘁 𝘆𝗼𝘂. Create a context document; I call it a Business Profile. Provide an overview of your company, value proposition, competitive advantage, the problem(s) you solve, your company mission, target audience/ideal client, competitors, case studies, client testimonials, and your core products/services. Upload this into a Library or Project so it can be referenced when creating content. This gives the AI background information.

2. 𝗚𝗶𝘃𝗲 𝗶𝘁 𝘆𝗼𝘂𝗿 𝗴𝗿𝗲𝗮𝘁𝗲𝘀𝘁 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗵𝗶𝘁𝘀. Copy/paste 10-15 of your best emails, posts, or articles based on your performance analytics, and upload them into your AI platform for reference. The AI needs to review your natural rhythm, word choices, and how you structure ideas.

3. 𝗣𝗿𝗼𝘃𝗶𝗱𝗲 𝘆𝗼𝘂𝗿 𝗯𝗿𝗮𝗻𝗱 𝗴𝘂𝗶𝗱𝗲𝗹𝗶𝗻𝗲𝘀 𝗼𝗿 𝘁𝗼𝗻𝗲 𝗼𝗳 𝘃𝗼𝗶𝗰𝗲 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁. If you don't have one, create one or write 2-3 sentences describing how you communicate:
• "I'm direct, but warm"
• "I use short sentences and avoid jargon"
• "I always include a story or example"

4. 𝗧𝗲𝘀𝘁 𝘆𝗼𝘂𝗿 𝗻𝗲𝘅𝘁 𝗳𝗲𝘄 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗽𝗶𝗲𝗰𝗲𝘀. Use prompts like this: "Write this in my voice: [topic]. Remember: I'm conversational, use specific examples, and always end with a clear next step." Give the AI feedback on whether the output is accurate, or give it more details and instruction on how to make it better.

When you create content, reference your voice documents, and the AI will continually get better as you converse with it.

𝗧𝗵𝗲 𝗿𝗲𝘀𝘂𝗹𝘁: AI will discern your personality, and your content will sound like you. 😃

𝗣𝗿𝗼 𝘁𝗶𝗽: The more specific you are about what makes YOUR communication unique, the better it gets.

-----

Do you have a go-to phrase, the one that shows up in everything you write without even realizing it? If yes, drop it in the comments.
-
Last week, I interviewed our AI team and summarized, in a video, some of the internal processes and industry secrets we use on large enterprise AI projects.

"NO ONE IS TALKING ABOUT THIS" 🤫

After years of building AI and SaaS projects for dozens of companies, here's how we make models faster, feel faster, and cost less, specifically for real-time voice AI assistants. 📱✨

Here are three key steps we implement:

1️⃣ Streaming
Instead of waiting for the entire response from the model, stream the response in real time! As soon as the first sentence is generated, send it to a TTS model. This reduces the time to first response from 5-7 seconds down to just 2-3 seconds, making interactions feel much quicker! ⏱️💬
Progressive updates: provide immediate feedback as each step of the process completes, so users can see the model's progress in real time, which makes it feel even faster. Apps like Perplexity or ChatGPT plugins showcase this method effectively, delivering insights before the final response is ready. 🔄📈

2️⃣ Hybrid Processing
We found that running speech-to-text processing on the edge (like on iPhones) is 5-7 times faster than server-based processing. This significantly improves performance, as it eliminates the need to transmit audio over mobile data.
Smaller models on the edge: implement a classifier model that determines when to use smaller models for simpler tasks instead of larger, more complex ones. For instance, a 7-billion-parameter model could handle basic summarization tasks, reducing load on the larger model and improving response times. 🖥️📊

3️⃣ Model-Side Optimization
Beyond quantization, you can improve speed by reducing prompt size through dynamic prompting. Implement a RAG pipeline to pull only the relevant sections into the current prompt. This method can condense 70 questions down to just 10, improving response times.
Additionally, consider summarizing past interactions and caching responses for repetitive queries to further boost efficiency! 📊⚡
Another effective technique is using a smaller model to summarize past interactions, allowing you to pass a concise summary instead of the entire chat history. This is especially useful for chat-oriented models like Llama or Mistral.
Finally, consider caching responses in scenarios where the underlying data doesn't change (like a SQL database). When similar queries arise, you can retrieve answers from the cache, using a smaller model to check for matches instead of regenerating responses each time. This saves processing time and improves the user experience! 📊⚡

If you need help with AI in your company, feel free to drop me a DM or book a call.
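The response-caching idea above can be sketched as a normalized exact-match cache. In practice the post suggests a smaller model (or embedding similarity) to decide whether two queries match; the class, method names, and normalization rules here are illustrative stand-ins for that:

```python
class ResponseCache:
    """Cache answers for queries whose underlying data doesn't change.

    Queries are normalized so trivial variations ("What's my balance?" /
    "what is my balance") hit the same entry. A production system would
    typically replace `_normalize` with an embedding-similarity lookup
    or a small matcher model.
    """

    def __init__(self):
        self._store = {}

    def _normalize(self, query: str) -> str:
        # Lowercase, drop question marks, expand the common "'s" contraction.
        text = query.lower().replace("?", "").replace("'s", " is")
        return " ".join(text.split())

    def get(self, query: str):
        # Returns the cached answer, or None on a cache miss.
        return self._store.get(self._normalize(query))

    def put(self, query: str, answer: str):
        self._store[self._normalize(query)] = answer


cache = ResponseCache()
cache.put("What's my balance?", "Your balance is $42.10.")
print(cache.get("what is my balance"))  # hits the cached answer
```

On a hit, the expensive model call is skipped entirely, which is where the latency and cost savings come from; the cache only needs invalidating when the underlying data (the SQL database in the post's example) changes.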
-
"Near me" searches grew 900% in 3 years. Most local businesses are completely unprepared. Here's how to optimize for voice search and capture mobile traffic: Why Voice Search is Different Text search: "best plumber austin" Voice search: "Hey Siri, who's the best plumber near me with same-day service?" Voice queries are longer, conversational, question-based, and intent-specific. Your content needs to match this behavior. The "Near Me" Optimization Formula To rank for "near me" searches: Perfect Google Business Profile optimization (proximity is king), mobile-first website, fast loading (under 2 seconds), prominent click-to-call, conversational content, FAQ structured data, strong local citations. Mobile plus location plus speed equals "near me" rankings. Content Strategy for Voice Write how people talk. Not: "AC installation services" Instead: "How much does it cost to install a new AC in Dallas?" Create content around question phrases, conversational language, "How," "What," "Where," "When" queries, and long-tail searches. The FAQ Page Strategy Every local business needs an FAQ page optimized for voice. Format example: Q: "What's the best emergency plumber near me?" A: "[Business] provides 24/7 emergency plumbing with average response time of 45 minutes in [city]." Direct, conversational answers equal voice search optimization. Featured Snippet Optimization Voice assistants read featured snippets. To win snippets: Answer questions directly (40-60 words), use numbered lists, use tables for comparisons, structure with H2/H3 tags, add schema markup. Position zero equals voice search answer. The Mobile Experience Voice search is mobile search. Your site must have: Click-to-call button visible, loading speed under 2 seconds, mobile-friendly layout, no pop-ups blocking content, directions and map integration, text message option. Poor mobile experience makes you invisible to voice search. 
Tracking Voice Search In Google Analytics, look for long-tail conversational queries, question-based searches, mobile traffic spikes, and "near me" variations. In Search Console, filter by mobile, check question queries, and monitor featured snippets. Voice search is growing. Optimize now or miss the wave. Is your business optimized for voice and "near me" searches?
-
A parent asks Alexa: "What's the best private school near me?" Your school doesn't come up. Here's why.

AI assistants don't guess. They follow a specific hierarchy when recommending schools to parents. They prioritize four things:
• Structured data (schema markup that tells AI what you offer)
• Trust signals (reviews, citations, domain authority)
• Direct answers (clear, conversational content)
• Mobile optimization (most voice searches happen on phones)

If your website doesn't speak AI's language, you risk being invisible to a growing share of searches as AI-driven discovery expands.

What this means for your admissions strategy:

Add FAQ schema to your site. Answer parent questions in natural language: "When does kindergarten start?" not "Kindergarten enrollment information."

Claim and optimize your Google Business Profile. Reviews and accurate info build the trust signals AI relies on.

Make your mobile site fast. Voice searchers want answers in seconds, not after three page loads.

Structure content for featured snippets. AI reads Position Zero answers out loud. If you're not there, another school is.

The schools winning voice search aren't spending more on marketing. They're just making it easier for AI to understand and recommend them. Your move.