Got an email from a colleague I've known for three years. Drinks after conferences. Inside jokes. His daughter plays soccer. Subject line: Strategic Alignment for Q3. Flawless formatting. Perfect grammar. Professionally upbeat. Every bullet precisely spaced.

I felt absolutely nothing. Closed it without responding.

Here's what's actually happening: for decades, polish was proof of effort. A well-written message meant someone cared enough to craft it. AI severed that connection completely. Now a perfect email could be 30 minutes of real thought or 3 seconds of prompting, and the recipient cannot tell. So we don't trust any of it. Not dramatically. Not consciously. But in the slow, cumulative way that hollows out working relationships over time. Each frictionless message becomes a little harder to take seriously. Each exchange feels more like a transaction, less like a conversation.

There's a concept in evolutionary biology called costly signaling. A peacock's tail is trusted precisely because it's expensive to grow. Cheap signals carry no weight. AI communication costs nearly zero to produce. The recipient, consciously or not, values it accordingly.

And when everyone in an org uses the same tools, something stranger happens: the voices converge. AI is a probability engine. It gravitates toward average phrasing, standard structure, safest tone. Use it to smooth your communication and you're not saving time, you're deleting your own fingerprint.

Before your next important message, ask one question: is there a single sentence here that could only have come from me? If not, the message might land. But it won't build anything.

The polished email costs nothing to produce. That's precisely why it costs everything to trust.

Link to the full essay in the comments below.
AI Tools For Communication
-
Anthropic just shipped Skills, Microsoft 365 integration, and enterprise search for Claude. After talking to dozens of enterprise companies this year, I think they're solving the right problems.

💰 Context tax is killing enterprise AI adoption. Most AI tools require you to manually gather information before asking useful questions. You're copying emails, uploading documents, explaining organizational context. The AI might be smart, but you're doing all the integration work.

Claude's Microsoft 365 connector changes this. Direct access to SharePoint, Outlook, Teams, and OneDrive means the AI already knows what your organization knows. Ask about Q3 strategy, and it pulls from the actual discussions, documents, and decisions.

They also launched Skills — reusable instruction bundles that work across Claude's web app, API, and command-line tool. Think of these as expertise packages: instructions, scripts, and resources Claude loads on demand.

And lastly, the new Enterprise search is a shared project that searches multiple connected tools simultaneously. One query pulls information from HR docs in SharePoint, email discussions in Outlook, and team guidelines from various sources, then synthesizes it into a single answer.

Model providers like Anthropic and OpenAI are realizing that enterprise AI needs to be operational, not just conversational. Less chatbot, more sidekick that accesses your actual systems and takes action.
-
Have you ever asked ChatGPT to write an email and got back something so painfully formal it made you cringe? Or watched it turn your friendly check-in into a corporate memo that sounds like it was written by a robot from the 1950s?

Most people are absolutely butchering AI email writing. But it's not the AI's fault. It's yours.

**The real problem (and it's not what you think)**

Everyone obsesses over the actual writing bit, but that's not where most people mess up. The real issue? Context.

Think about it: would you walk up to a brand new EA and say "Write me an email to Sarah"? Of course not. You'd explain the relationship, share previous conversations, mention your communication style. But with AI? We just fire off a prompt and expect magic.

Without context, AI starts guessing. Maybe it thinks you're best mates who've worked together forever. Maybe it assumes it's first contact. Maybe it reckons you're discussing this project for the first time when you've actually been collaborating for three months. And AI's guessing game usually ends badly.

**A solution that actually works**

Step 1: Create a briefing document. Think company background, your role, key projects, and relationships. Everything a good EA would need to know about your work life.

Step 2: Build a voice and style guide. Upload some of your best emails and ask AI to identify your writing patterns. Are you warm and conversational? Short and punchy? Do you use humour or stay formal? The AI will create a profile of your writing style that you can reuse forever.

Step 3: Set up your AI workspace properly. Every platform has a different name for the same thing:
• ChatGPT or Claude: Create a "project"
• Microsoft Copilot: Create an "agent"
• Google Gemini: Create a "gem"

Upload your briefing document and style guide, and suddenly your AI knows who you are and how you communicate.
**When to use AI for emails (and when not to)**

Skip AI for:
• Quick confirmations ("Yes, Tuesday works")
• Simple responses that take 30 seconds to write
• Anything that takes longer to set up than to write

Use AI for:
• Email introductions (those tricky "let me introduce you to..." messages)
• Customer support responses
• Sales emails
• Repetitive queries (HR questions, product info, hotel bookings)
• Anything where tone matters and you need to get it just right

**Your action plan**

1. Pick one type of email you write regularly
2. Write a one-page brief about your role and key context
3. Upload 3-5 of your best emails to AI and ask it to create your writing profile
4. Create a project/agent/gem and upload your documents
5. Test and refine

The difference between AI that writes like a robot and AI that writes like you? Context, context, context.

Want to master this stuff? Inventium's GenAI Productivity Upgrade starts October 15. https://lnkd.in/gfeKDvWb
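For anyone doing this through an API rather than a chat UI, the same "context first" idea can be sketched as a small prompt-assembly helper. This is a minimal illustration, not from the post: the function name, sample briefing, and style guide are invented, and the system/user message format is the common chat structure used by ChatGPT- and Claude-style APIs.

```python
# Sketch: put the briefing document and style guide into the system
# prompt so the model has context before it drafts anything.
# All names and sample text here are illustrative assumptions.

def build_email_messages(briefing: str, style_guide: str, request: str) -> list[dict]:
    """Assemble chat messages: organizational context first, then the drafting request."""
    system = (
        "You are my email-drafting assistant.\n\n"
        "## Briefing (role, projects, relationships)\n" + briefing + "\n\n"
        "## Voice and style guide\n" + style_guide
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": request},
    ]

messages = build_email_messages(
    briefing="I'm head of partnerships at Acme. Sarah is a client of 3 months.",
    style_guide="Warm, conversational, short sentences, no corporate jargon.",
    request="Draft a check-in email to Sarah about the Q3 rollout.",
)
```

The point of the structure is that the briefing and style guide are written once and reused for every draft, which is exactly what a project/agent/gem does behind the scenes.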
-
Everyone has the same AI tools, and that's the problem. Clay. Apollo. Seamless.ai. Jason AI... The list goes on. Your competitors are using the exact same "personalization" tools you are.

Result? Your prospects are getting 47 emails that all mention their recent LinkedIn post about Q4 planning. All "hyper-personalized." All generated by AI. All sounding exactly the same. We democratized the tools but not the strategy.

This is happening across company stages: teams think buying better AI tools will fix their outbound problem. It won't.

Here's why AI-first outbound is broken:

1. Same inputs = same outputs. Everyone's scraping the same LinkedIn posts, company news, and tech stack data. Your "unique" insight isn't unique.

2. AI optimizes for volume, not relationship. These tools help you send 1000 emails that sound personal. They don't help you have 10 conversations that matter.

3. Recipients can smell automation. When your "personalized" email mentions their job change but gets their new company name wrong, you're done.

The solution isn't better AI. It's better human-AI collaboration. Here's what could actually work for you going forward 👇🏼

Use AI for research, humans for insight. Let AI pull the data. But you need to interpret what it means for their business.

Use AI for first drafts, humans for authenticity. AI can write the structure. You add the perspective that only comes from real experience.

Use AI for scale, humans for key accounts. Automate the mass outreach. But your biggest opportunities deserve human attention.

Unfortunately, most teams use AI to avoid the hard work of understanding their prospects. They'd rather send 100 AI-generated emails than spend 30 minutes researching 5 key accounts.

But here's what separates winners from spammers:
→ Winners use AI to do the research faster, then apply human judgment to create genuine insights.
→ Spammers use AI to avoid thinking altogether.

Your prospects can tell the difference.
When everyone has the same tools, execution becomes the differentiator. Not the email you send. The thinking behind it.
-
Your emails are dumb data. They sit there. Unstructured. Forgotten. Meanwhile, you're drowning in threads, missing critical messages, losing context. Here's how to build an Agentic Email Manager that actually thinks.

Regular email tools follow rules you set. Agentic systems make decisions. Learn patterns. Take autonomous action. The difference? Data structure.

I built this with three-layer data storage in Supabase:
- document_metadata - Email properties
- document_rows - Structured, queryable data
- documents - Vector embeddings for semantic understanding

Your agent doesn't just read emails. It builds a knowledge graph.

Email arrives. Agent analyzes with Gemini. Extracts: category, priority, keywords, requires_action flag. Stores in THREE different formats. Now the agent can:
- SQL query: "Show finance emails from last quarter"
- Semantic search: "Find discussions about pricing concerns"
- Memory recall: "What did John complain about previously?"

Multiple retrieval paths = actual intelligence.

This is where it gets interesting. The agent maintains long-term memory. Detects duplicate information. Builds context over time. Not just processing emails. Building institutional knowledge.

Without structured data, your agent is just a chatbot with email access. With structure:
- Agent knows which emails need immediate action
- Can spot patterns humans miss
- Makes connections across conversations
- Actually learns from your communication

At Brainforge, we believe: "Data is a structure. When you have structure, you have better AI." That's not just a tagline. It's the foundation of everything we build. Your AI is only as smart as how you organize its inputs.

Phase 1: Structured categorization. Just get emails into proper schema.
Phase 2: Add vector search. Enable semantic understanding.
Phase 3: Memory layer. Build persistent context.
Phase 4: Autonomous actions. Let the agent make decisions.

"How many support tickets mentioned billing?" SQL query. Instant answer.
"What's the customer sentiment trending?" Semantic analysis across all emails. "Should I prioritize this?" Agent checks history, patterns, context. Makes recommendation. Your email becomes queryable intelligence. Not just a message archive. Look at your current email setup. If it's not creating structured, queryable data... Your "AI" is just keyword matching with extra steps. Build the data layer first. Then watch your agent actually think. How are you structuring data for your AI systems?
-
I think I worked out why AI writes the way it does, and the answer involves colonial Britain and underpaid workers in Nigeria.

I've been quietly going nuts. Not literally, but the word "quietly" is in almost every piece of AI-generated text I read and now I can't stop noticing it. Companies "quietly launched" things. People "quietly became" influential. Strategies were "quietly implemented." Everything is, apparently, quiet. ChatGPT writes like it's scripting a BBC period drama!

Anyway, I went down a rabbit hole on this and the explanation is genuinely bizarre, but also makes perfect sense. When OpenAI needed humans to rate ChatGPT's outputs (picking which response was better, millions of times over), it seems that they outsourced most of this "Reinforcement Learning from Human Feedback" (or RLHF) work to Nigeria. This makes sense. Nigeria has a lot of well-educated, skilled English speakers and they're way cheaper than hiring Americans or Europeans to click buttons all day.

But here's the bit I suspect nobody thought through. Nigerian English is incredibly formal. Linguists call it "bookish." It comes from colonial education standards built on 19th-century British literature and very rigid classroom teaching. Words like "quietly" and "delve" and "meticulously" are high-value descriptors in formal writing.

So thousands of annotators kept picking the more formal, more literary response as the best response. The model kept learning (it's called reinforcement, after all) and now ChatGPT writes like it's trying to get a first at Oxford in 1887!

The numbers are well documented*. "Delve" went up 654% in medical papers after ChatGPT launched. "Meticulously researched" spiked almost 3,900%.

So now you know! Oh, and the best/worst bit? Students from the country whose English taught ChatGPT how to write are now getting their own writing flagged as AI-generated!
If anyone needs me I'll be quietly delving into writing a meticulously researched paper on the subject ;-) *source: Scientific American / arXiv
-
Do you trust an AI with your email? You may swap privacy for productivity.

Google's Deep Research can already read through Gmail, Drive, and Chat to generate research summaries. Microsoft's equivalent can do the same inside 365. And now, with ChatGPT's new Connectors, you can link your Gmail or Outlook account directly to the model. Three of the biggest players now have AI systems that can read your inbox, summarize your messages, and draft replies for you.

That's powerful. But it also raises some questions:
- What are you giving access to?
- How long will it keep your data?
- Who else can see what the AI reads?
- Can you fully revoke access later?

The line between "help me find an invoice" and "let me index your inbox forever" depends entirely on defaults and consent.

It's true that many of us have already given companies the ability to access our emails for years. So what's the difference now? AI changes the nature, scale, and meaning of that access in 4 big ways:

1. Access shifts from passive storage to active reading. Historically, providers kept your email and scanned for security or spam. AI systems can read, interpret, summarize, analyze, and extract patterns across your entire inbox. That is a different level of capability and a different level of risk.

2. It is no longer about single messages. It is about your entire history. AI can instantly synthesize:
- years of conversations
- relationships & power dynamics
- financial patterns
- behavioral signals
- health clues
- personal habits
Providers always held this data, but AI can now understand it.

3. The level of access expands massively. In the old model, scanning happened behind the scenes for narrow purposes. In the AI model, third-party apps, plugins, and assistant features can access emails in real time for tasks you initiate. That creates new: (1) data pipelines, (2) risk points, (3) trust dependencies. It's not a closed internal system. It's an ecosystem.

4. The intent changes from operational support to intelligence generation. This is the biggest shift. Old email access = spam filtering, security, categorization. New AI access = generating insights, making decisions, writing on your behalf. The AI is not just holding your data. It is thinking with it.

Bottom line: it's not simply that Google & Microsoft already have email access. The question is: what does it mean when companies with email access can now deeply analyze it, model behavior, and generate intelligence from it instantly?

That's the leap. That's the risk. That's why consent and design matter more than ever. Speed is easy to sell. But trust… that's the product people actually want.

What would you trade for convenience?
-
Lately, I've been deeply researching something that's becoming increasingly relevant for students, researchers, founders, and content creators: how do AI detectors actually work? And how do models "decide" whether text feels human or AI-generated? Here's what I've learned 👇

Most AI detectors don't "understand" meaning the way humans do. Instead, they analyze statistical patterns in text. One of the key concepts behind detection is perplexity. In simple terms, perplexity measures how predictable the next word is.
• If a sentence follows very common, highly probable word sequences, it has low perplexity.
• If the wording is more surprising, irregular, or less statistically predictable, it has higher perplexity.

Large language models are trained to predict the most likely next word. So naturally, AI-generated text often:
• Uses highly probable word patterns
• Maintains smooth, consistent sentence structure
• Avoids unusual phrasing
• Stays statistically "safe"

Humans, on the other hand, are messy writers. We:
• Change tone mid-paragraph
• Break structure
• Use uncommon transitions
• Repeat oddly
• Insert emotion, bias, or randomness
• Choose less probable words based on context, memory, or mood

Human writing isn't optimized for probability — it's driven by thought, feeling, and imperfection.

That's why some academic writing (especially very polished, formal writing) can accidentally get flagged. Highly structured, predictable, grammatically consistent writing can statistically resemble AI output.

Detection models often combine:
• Perplexity scoring
• Burstiness analysis (variation in sentence length)
• Token probability distribution
• Stylometric features (writing style fingerprints)

But here's the truth: no detector is 100% accurate. They are probability models judging other probability models. And as AI improves, detection becomes a cat-and-mouse game between generation and statistical analysis.

This research has completely changed how I look at writing.
Instead of asking, "Is this AI?", a better question might be: how statistically predictable is this text compared to human baselines?

If you're working with AI-generated content and want to both detect and humanize it more effectively, you can try "aitextools".

Curious to hear your thoughts — do you think AI detection will ever be fully reliable?

#aidetector #aihumanizer #humanizeai
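The perplexity idea above fits in one formula: it is the exponential of the average negative log-probability a model assigns to each token. The sketch below illustrates this with invented probability values; a real detector would take them from a language model's next-token distribution.

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """exp(-mean(log p)) over the per-token probabilities the model assigned."""
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

# Invented per-token probabilities for illustration only.
predictable = [0.9, 0.8, 0.9, 0.85]   # the model "saw each word coming"
surprising  = [0.2, 0.05, 0.1, 0.3]   # unusual phrasing, low probabilities

# Lower perplexity = more statistically "safe" text, which is the
# signal detectors associate with AI-generated writing.
```

A handy sanity check: if every token has probability 1/4, the perplexity is exactly 4, i.e. the model was effectively choosing among four equally likely words at each step.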
-
We need to talk about AI detectors. Because right now, according to some of these tools, the only "real humans" left are the ones who can't write grammatically correct English. Yes, seriously.

These detectors don't understand creativity, nuance, or originality. They rely on simplistic pattern matching. So the more structured, formal, or articulate your writing is… the more likely it is to get flagged as "AI-generated."

The Preamble of the Australian Constitution was labeled 97% AI-generated. Not because a time-traveling chatbot wrote it in 1900, but because detectors are trained on standardized, formal patterns, the same patterns found in legal and academic writing. When these models conflate "clear, polished English" with "AI," false positives are inevitable.

And that's the real problem:
🔹 Scholars are being accused of using AI… for their own original work.
🔹 Faculty are forced to defend writing they themselves produced.
🔹 Publishing high-quality papers is getting harder, not because of poor research, but because detectors keep breaking.

Meanwhile, companies behind these tools make millions selling technology that's still unreliable, opaque, and far too influential in academic and professional spaces.

Here's the part that concerns me the most: we're evaluating writing based on who (or what) might have produced it, rather than whether it's meaningful, persuasive, personal, or impactful. That's the opposite of how writing should be judged.

If a piece of work isn't thoughtful, connective, or communicative, it's weak, no matter if it came from a human or a machine. If it is strong, then why should a flawed detector get the final say?

It's time for the research and academic communities to push back. This isn't a niche issue; it affects scholars, educators, editors, students, and anyone who writes professionally. A formal petition, or at least a collective public stance, may be overdue.

✨ We deserve tools that support good writing, not ones that punish it.
✨ We deserve assessments based on quality, not questionable probabilities.
✨ We deserve better than "97% AI" being used as a verdict.

Until then, keep writing boldly, clearly, intelligently, even if the detectors can't recognize it as human.

Image Source: Harshit Kumar Kushwaha
-
I've yet to meet a supplier who looks forward to logging into a customer portal. Most procurement leaders will tell you the same. And yet, portals keep getting implemented.

Read the report: https://lnkd.in/gTnCt6NC (featuring Knorr-Bremse, BraunAbility, and others).

A mid-sized supplier works with 50, 100, sometimes 200+ customers. Each customer has their own portal, their own login, their own way of doing things. The portal works well for the customer. For the supplier, it's 200 extra jobs on top of their actual job. So they simply send an email.

EDI was supposed to fill the gap. But it only works well for strategic partners, as it requires IT investments on both sides and takes months to implement per supplier. One company I spoke with integrated SAP Ariba to standardize confirmations for direct materials. Within months, teams drifted back to emailing the moment an exception came up.

The pattern is consistent across the industry: EDI handles strategic partners. Portals handle a fraction of the rest. Email handles everyone else. The suppliers are too small to justify an EDI integration, too low-priority for a portal rollout, or too transactional to warrant the effort. For both sides, the cost of integration outweighs the cost of just sending an email. Email becomes the default because it's the only tool that works for every supplier: it requires zero training and handles exceptions.

The question was never how to get suppliers off email. The problem is what happens after it lands in the inbox: a human reading unstructured data, inferring context, and manually updating the ERP. That's where the time goes.

AI eliminates exactly that. An AI agent operates within the same inbox buyers already use, reads supplier replies in any format, cross-references them against the original PO, flags mismatches, follows up autonomously if something is off, and updates the ERP in real time.
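The cross-referencing step the post describes can be sketched as a small extract-and-compare routine. Everything here is an illustrative assumption: a production agent would use an LLM rather than regexes for extraction, and would write results back to the ERP; the field names, sample PO, and sample email are invented.

```python
import re

def parse_reply(body: str) -> dict:
    """Pull quantity and delivery date out of an unstructured supplier email.
    Regex extraction is a stand-in for LLM-based parsing."""
    qty = re.search(r"(\d+)\s*(?:units|pcs)", body, re.I)
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", body)
    return {"qty": int(qty.group(1)) if qty else None,
            "date": date.group(1) if date else None}

def flag_mismatches(po: dict, reply: dict) -> list[str]:
    """Compare the confirmed values against the original PO; return fields that differ."""
    return [field for field in ("qty", "date")
            if reply[field] is not None and reply[field] != po[field]]

po = {"qty": 500, "date": "2025-11-03"}          # invented sample PO
email = "Hi, confirming 500 units, but delivery slips to 2025-11-10."
issues = flag_mismatches(po, parse_reply(email))
```

With this shape, an empty `issues` list means the ERP can be updated automatically, and a non-empty one (here, a slipped date) is exactly the case where the agent follows up with the supplier.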