Last week, a customer said something that stopped me in my tracks: “Our data is what makes us unique. If we share it with an AI model, it may play against us.” This customer recognizes the transformative power of AI. They understand that their data holds the key to unlocking that potential. But they also see risks alongside the opportunities—and those risks can’t be ignored. The truth is, technology is advancing faster than many businesses feel ready to adopt it. Bridging that gap between innovation and trust will be critical for unlocking AI’s full potential. So, how do we do that? It comes down understanding, acknowledging and addressing the barriers to AI adoption facing SMBs today: 1. Inflated expectations Companies are promised that AI will revolutionize their business. But when they adopt new AI tools, the reality falls short. Many use cases feel novel, not necessary. And that leads to low repeat usage and high skepticism. For scaling companies with limited resources and big ambitions, AI needs to deliver real value – not just hype. 2. Complex setups Many AI solutions are too complex, requiring armies of consultants to build and train custom tools. That might be ok if you’re a large enterprise. But for everyone else it’s a barrier to getting started, let alone driving adoption. SMBs need AI that works out of the box and integrates seamlessly into the flow of work – from the start. 3. Data privacy concerns Remember the quote I shared earlier? SMBs worry their proprietary data could be exposed and even used against them by competitors. Sharing data with AI tools feels too risky (especially tools that rely on third-party platforms). And that’s a barrier to usage. AI adoption starts with trust, and SMBs need absolute confidence that their data is secure – no exceptions. If 2024 was the year when SMBs saw AI’s potential from afar, 2025 will be the year when they unlock that potential for themselves. That starts by tackling barriers to AI adoption with products that provide immediate value, not inflated hype. Products that offer simplicity, not complexity (or consultants!). Products with security that’s rigorous, not risky. That’s what we’re building at HubSpot, and I’m excited to see what scaling companies do with the full potential of AI at their fingertips this year!
AI Limitations Overview
Explore top LinkedIn content from expert professionals.
-
-
Anthropic just released fascinating research that flips our understanding of how AI models "think." Here's the breakdown: The Surprising Insight: Chain of thought (CoT)—where AI models show their reasoning step-by-step—might not reflect actual "thinking." Instead, models could just be telling us what we expect to hear. When Claude 3.7 Sonnet explains its reasoning, those explanations match its actual internal processes only 25% of the time. DeepSeek R1 does marginally better at 39%. Why This Matters: We rely on Chain of thought (COT) to trust AI decisions, especially in complex areas like math, logic, or coding. If models aren’t genuinely reasoning this way, we might incorrectly believe they're safe or transparent. How Anthropic Figured This Out: Anthropic cleverly tested models by planting hints in the prompt. A faithful model would say, "Hey, you gave me a hint, and I used it!" Instead, models used the hints secretly, never mentioning them—even when hints were wrong! The Counterintuitive Finding: Interestingly, when models lie, their explanations get wordier and more complicated—kind of like humans spinning a tall tale. This could be a subtle clue to spotting dishonesty. It works on humans and works on AI. Practical Takeaways: - CoT might not reliably show actual AI reasoning. - Models mimic human explanations because that's what they're trained on—not because they're genuinely reasoning step-by-step. What It Means for Using AI Assistants Today: - Take AI explanations with a grain of salt—trust, but verify, especially for important decisions. - Be cautious about relying solely on AI reasoning for critical tasks; always cross-check or validate externally. - Question explanations that seem overly complex or conveniently reassuring.
-
AI Risk Is Becoming Uninsurable. Contracts Are Taking the Hit. Insurance has been quietly stepping away from meaningful AI coverage. Exclusions are expanding, sublimits are shrinking, and underwriting is getting tighter. Companies are still deploying AI at full speed, and the gap has to land somewhere. It is landing in contracts. Read the full article: https://lnkd.in/gRHtVEmp I wrote about this for Corporate Counsel because the shift is real and accelerating. We are watching contracts absorb functions that insurance used to perform. That change reshapes how indemnities work, how governance is drafted, and how responsibility is allocated across the AI lifecycle. Indemnities are narrowing. Broad, catch-all promises are being replaced by precise and limited obligations. The protection that many clients think they are getting often does not exist anymore. Governance obligations are expanding. They are moving upstream into how the system is built, validated, monitored, and supervised. Documentation and controls now influence liability in a way many teams have not expected. And, shared responsibility frameworks are becoming the norm because AI risk sits at the intersection of model behavior and human decisions. This is a structural shift. Contracts are functioning as underwriting instruments because the traditional backstop is pulling away. When the safety net is gone, the contract becomes the risk architecture. If you support procurement, sales, data partnerships, or AI deployments, this matters. Boilerplate AI language is no longer neutral. Internal processes now influence exposure. Many executives still assume their insurance covers AI-related risk when it does not. That disconnect shows up in negotiations every day. The article goes deeper into how these trends are playing out in real agreements and what in-house teams can do to respond with clarity and control. For more insights, check out the Contract Trust Report: https://lnkd.in/gJdXkUpJ — Olga V. Mack I build legal systems for real life.
-
OpenAI recently posted a role that would have sounded strange even a few years ago: Head of Preparedness. The title alone sparked debate. Some read it as prudent. Others as performative. I see it as an admission that the field is entering uncharted territory. The role forces an obvious but uncomfortable question: what, exactly, are we preparing for? At first glance, the answers seem familiar - misuse, model failures, disinformation, cyber risk. These matter, but they are incomplete. If preparedness were only about preventing known harms, it would sit comfortably inside existing safety or policy teams. This role exists because the core problem is uncertainty that cannot be specified ahead of time. Modern AI systems increasingly exhibit emergent behavior. Not just better execution of known tasks, but capabilities that appear only after deployment, through interaction with users, tools, and incentives. These behaviors are hard to predict beforehand, to test exhaustively, or to explain cleanly after the fact. As models become more capable and more agentic, risk stops looking like a checklist of edge cases and starts looking like system dynamics. Feedback loops form between humans and models. Capabilities surface through prolonged use. Deployment pressures outpace institutional understanding. The failure mode is no longer a single obvious flaw, but patterns that compound over time. Preparedness also reflects a shift from invention to diffusion. The most consequential effects of AI are unlikely to come from a single breakthrough moment, but rather from countless small integrations into workflows, markets, and decision systems - individually benign, potentially destabilizing in aggregate. Preparedness is about watching the second derivative. There is an institutional dimension as well. Frontier labs now operate under sustained scrutiny from regulators, customers, governments, and internal stakeholders. A dedicated preparedness function creates a locus of accountability when something unexpected happens. At a deeper level, preparedness is an acknowledgment of limits. We are building general-purpose systems inside tightly coupled social and economic structures. The most honest posture is not confidence, but readiness. The polarized response to the role signals a field that is uncomfortable with its own maturity. Early-stage technologies tend to celebrate speed and dismiss caution as fear. Mature technologies institutionalize caution because the cost of being wrong compounds. AI is in the awkward middle: powerful enough to matter, yet young enough to still mythologize recklessness. Expect more labs to formalize similar roles. Expect preparedness to sit closer to product and deployment, not just research ethics. And expect the conversation to shift from whether such roles are necessary to how much authority they actually have. AI development is moving from optimism about capability to responsibility for consequences.
-
🔵 🔴 AI's dual nature in financial markets crystallized this week: the same technology strengthening defenses is creating unprecedented attack vectors. Wharton researchers proved trading bots spontaneously form price-fixing cartels without human coordination, while cybersecurity experts predict AI-driven deepfakes could impersonate officials to trigger market crashes measurable in seconds, not hours. Welcome to 2026 and the 30th edition of the Agentic AI Newsletter for Financial Services—the only weekly Agentic publication curating AI developments specifically for financial services. We are faced with mounting and new risks that are not theoretical – they document a new reality demanding immediate governance infrastructure. FINRA's 2026 framework makes explicit what many firms ignore: deploying powerful AI without controls, supervision, and recordkeeping violates existing compliance obligations regardless of AI-specific regulations. Here's what financial services leaders need to know 𝐚𝐛𝐨𝐮𝐭 𝐀𝐈'𝐬 𝐝𝐮𝐚𝐥 𝐧𝐚𝐭𝐮𝐫𝐞: 1️⃣ 𝐃𝐞𝐞𝐩𝐟𝐚𝐤𝐞 + 𝐚𝐮𝐭𝐨𝐧𝐨𝐦𝐨𝐮𝐬 𝐛𝐨𝐭 𝐜𝐨𝐧𝐯𝐞𝐫𝐠𝐞𝐧𝐜𝐞 creates market manipulation vectors that execute faster than human response times—requiring cryptographic provenance for material disclosures 2️⃣ 𝐀𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦𝐢𝐜 𝐜𝐨𝐥𝐥𝐮𝐬𝐢𝐨𝐧 𝐝𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐞𝐝 in Wharton study shows trading bots achieve price-fixing through convergent learning, bypassing antitrust surveillance designed for human communication 3️⃣ 𝐅𝐈𝐍𝐑𝐀 𝐝𝐞𝐦𝐚𝐧𝐝𝐬 𝐞𝐦𝐛𝐞𝐝𝐝𝐞𝐝 𝐠𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 before AI capabilities exceed supervision capacity—auditability, human validation, and scope limits become minimum standards 4️⃣ 𝐍𝐨𝐧-𝐡𝐮𝐦𝐚𝐧 𝐢𝐝𝐞𝐧𝐭𝐢𝐭𝐲 𝐞𝐱𝐩𝐥𝐨𝐬𝐢𝐨𝐧 scales attack surfaces as devices, code, and AI models require authentication—automated certificate lifecycle management becomes critical infrastructure 5️⃣ 𝐍𝐞𝐮𝐫𝐨𝐬𝐲𝐦𝐛𝐨𝐥𝐢𝐜 𝐀𝐈 emerges as a complement to generative AI, fusing pattern recognition with symbolic logic for auditable decisions in regulated environments The regulatory message is unambiguous: governance infrastructure must precede capability deployment. Firms capturing AI's productivity gains while navigating compliance complexity will separate market leaders from regulatory casualties. Read the full analysis in this week's newsletter 👇 𝐇𝐚𝐩𝐩𝐲 𝐍𝐞𝐰 𝐘𝐞𝐚𝐫 𝐚𝐧𝐝 𝐭𝐡𝐚𝐧𝐤 𝐲𝐨𝐮 𝐭𝐨 𝐨𝐮𝐫 𝐟𝐚𝐬𝐭-𝐠𝐫𝐨𝐰𝐢𝐧𝐠 𝐜𝐨𝐦𝐦𝐮𝐧𝐢𝐭𝐲 𝐨𝐟 𝟑𝟖,𝟎𝟎𝟎+ 𝐬𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞𝐫𝐬. #AI #Regulation #Cybersecurity #finserv
-
Most enterprise generative AI projects still struggle to show measurable financial returns within their first six months. That tolerance is fading because boards and investors now want AI to add to earnings instead of just serving as a test. The focus has shifted from pilots to impact on profits and losses. Spending on AI is increasing, while control over capital is getting stricter. Leaders who cannot link AI to better margins or increased revenue risk losing their budgets and credibility. What’s changing is how deployment is viewed. Early efforts were exploratory because the technology was new. Now, management teams are focusing on use cases that directly relate to reducing costs or improving measurable efficiency, as vague claims of productivity gains are no longer accepted. This means AI initiatives must connect to financial statements, not just innovation presentations. Another change is the emphasis on readiness. Only a small number of organizations consider their infrastructure or data environment to be ready for AI because outdated systems create obstacles. Companies that are using AI to upgrade their IT are saving money that they can use for further deployment, as improved efficiency builds on itself. This means modernisation and return on investment must progress together to maintain funding. Random or broad AI projects fail because they overlook workflow realities and data limitations. Targeted deployment focused on clear outcomes leads to measurable results. Measuring sentiment or perceived productivity does not work because boards care about contributions to earnings. Tracking costs and cycle times in workflows provides a solid basis for ROI. One good starting point is to choose a workflow that involves a practical starting point is a workflow with frequent decisions. Measure its cycle time and transaction costs first. Then introduce AI support. Avoid using AI in areas where data is scattered or governance is unclear because scaling up will be difficult. #AIROI #EnterpriseAI #AILeadership #DigitalTransformation #DataStrategy #CIO #CEOAgenda #BusinessValue #AIAdoption #TechStrategy #BoardGovernance #AITalent
-
AI models like ChatGPT and Claude are powerful, but they aren’t perfect. They can sometimes produce inaccurate, biased, or misleading answers due to issues related to data quality, training methods, prompt handling, context management, and system deployment. These problems arise from the complex interaction between model design, user input, and infrastructure. Here are the main factors that explain why incorrect outputs occur: 1. Model Training Limitations AI relies on the data it is trained on. Gaps, outdated information, or insufficient coverage of niche topics lead to shallow reasoning, overfitting to common patterns, and poor handling of rare scenarios. 2. Bias & Hallucination Issues Models can reflect social biases or create “hallucinations,” which are confident but false details. This leads to made-up facts, skewed statistics, or misleading narratives. 3. External Integration & Tooling Issues When AI connects to APIs, tools, or data pipelines, miscommunication, outdated integrations, or parsing errors can result in incorrect outputs or failed workflows. 4. Prompt Engineering Mistakes Ambiguous, vague, or overloaded prompts confuse the model. Without clear, refined instructions, outputs may drift off-task or omit key details. 5. Context Window Constraints AI has a limited memory span. Long inputs can cause it to forget earlier details, compress context poorly, or misinterpret references, resulting in incomplete responses. 6. Lack of Domain Adaptation General-purpose models struggle in specialized fields. Without fine-tuning, they provide generic insights, misuse terminology, or overlook expert-level knowledge. 7. Infrastructure & Deployment Challenges Performance relies on reliable infrastructure. Problems with GPU allocation, latency, scaling, or compliance can lower accuracy and system stability. Wrong outputs don’t mean AI is "broken." They show the challenge of balancing data quality, engineering, context management, and infrastructure. Tackling these issues makes AI systems stronger, more dependable, and ready for businesses. #LLM
-
🧩 The Context Engineering Problem in AI One of the biggest problems in AI right now is context. Let me break this down in terms of what we're building at Pascal AI Labs Our early customers are public and private equity investment funds. Let's say this fund went and hired the smartest person in the world. Even with that high IQ, they’d still need to understand how the fund operates - its investment philosophy, research processes, and documentation norms before they could meaningfully contribute. So this new high IQ hire would spend months grounding themselves in the fund's institutional memory: how sectors behave, how decisions are documented, how insights are shared. For a human, that could take 3, 6, 12 months - sometimes years. The mistake many teams make with AI is assuming that a generalised LLM can skip that process. Most teams we encounter buy an enterprise license, plug it in, and expect it to think like their team. Inevitably, they get stuck at simple use cases like summarizing transcripts or retrieving snippets, and then hit the valley of disappointment. The hype quickly fades because the model doesn’t understand their world. The only real way to solve this problem is to give AI context — to make it part of your fund, not just a co-pilot. That starts with two steps: 1️⃣ Teach it the domain. Horizontal models are still unreliable on financial accuracy. We’ve seen customers try using them for deep research and end up with results that are only about 70% correct — and the problem is, you never know which 70% because the answers all _sound_ smart. So first, the system needs a foundation of financial context: how industries behave, which metrics matter, where to find them, what commentary is relevant, and so on. 2️⃣ Give it your institutional memory. Just like a first-year analyst, the AI needs access to everything that defines how your fund operates — internal models, memos, meeting notes, research documents, all of it. Without that, it can’t mirror your reasoning or outputs. At Pascal AI, we work on both steps. Our system runs on top of the best horizontal models and adds the scaffolding required to understand financial context. Once we connect a fund’s internal data, the system can analyze and interpret how that fund truly operates through our proprietary knowledge graph — the institutional backbone that maps how your fund actually works. Pascal AI makes AI a first-class citizen of your fund by adding the context required for it to operate at the same level as an analyst. So when you ask the system to analyze a company, it doesn’t just look at public data. It recalls your historical notes, trades, past commentary, and how you’ve thought about that sector before. It understands your investing style and generates insights within your unique context, not from a blank slate. It's very likely that the next wave of AI won’t replace analysts. It’ll work like one - shaped by your data, your memory, and your context.
-
A child gathers more data in their first four years than all the text ever published online. That’s not just a fun stat. It highlights a core limitation in how modern AI is built. Most AI systems are trained on natural language data. They learn by extracting statistical patterns from language, not through embodied experience or real-world interaction. Compare that to how humans learn: → Multimodal sensory input processed in parallel → Continuous physical interaction with dynamic environments → Emotional and contextual feedback shaping understanding in real time Natural language is a compressed abstraction of experience. It encodes meaning, but strips away direct context, causality, and sensory nuance. That’s why language models excel at: Summarizing information at scale Extracting patterns from structured data Generating coherent, fluent responses …but often fail at: Grounding responses in real-world causality Navigating ambiguity or incomplete information Adapting to evolving, unstructured scenarios Even state-of-the-art models can: Confidently output factually incorrect information Misinterpret intent in natural instructions Break down when context isn’t explicitly encoded We’re training systems to imitate comprehension, using only the shadows of real experience. So what’s the next frontier? True progress in AI will require a leap beyond language: → Multisensory data (audio, video, spatial signals) → Embodied interaction → Context-aware models Language is an entry point. But if the goal is adaptive, human-like intelligence, grounded experience is essential.
-
We have to internalize the probabilistic nature of AI. There’s always a confidence threshold somewhere under the hood for every generated answer and it's important to know that AI doesn’t always have reasonable answers. In fact, occasional "off-the-rails" moments are part of the process. If you're an AI PM Builder (as per my 3 AI PM types framework from last week) - my advice: 1. Design for Uncertainty: ✨Human-in-the-loop systems: Incorporate human oversight and intervention where necessary, especially for critical decisions or sensitive tasks. ✨Error handling: Implement robust error handling mechanisms and fallback strategies to gracefully manage AI failures (and keep users happy). ✨User feedback: Provide users with clear feedback on the confidence level of AI outputs and allow them to provide feedback on errors or unexpected results. 2. Embrace an experimental culture & Iteration / Learning: ✨Continuous monitoring: Track the AI system's performance over time, identify areas for improvement, and retrain models as needed. ✨A/B testing: Experiment with different AI models and approaches to optimize accuracy and reliability. ✨Feedback loops: Encourage feedback from users and stakeholders to continuously refine the AI product and address its limitations. 3. Set Realistic Expectations: ✨Educate users: Clearly communicate the potential for AI errors and the inherent uncertainty involved about accuracy and reliability i.e. you may experience hallucinations.. ✨Transparency: Be upfront about the limitations of the system and even better, the confidence levels associated with its outputs.
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Employee Experience
- Healthcare
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development