Navigating AI Competition

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,471,710 followers

    The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

    About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.)

    Here’s what I think DeepSeek has caused many people to realize: China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead.

    I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

    Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. [...] [Reached length limit. Full text: https://lnkd.in/grbFH4D6 ]
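The "nearly 30x" gap falls straight out of the two list prices quoted above. A quick back-of-envelope check (prices as quoted in the post, not current pricing):

```python
# Price gap between the two models, per 1M output tokens, as quoted in the post.
o1_price = 60.00   # USD, OpenAI o1
r1_price = 2.19    # USD, DeepSeek-R1

ratio = o1_price / r1_price
print(f"o1 costs ~{ratio:.1f}x more per million output tokens")  # ~27.4x

# What generating 50M output tokens would cost at each list price:
for name, price in [("o1", o1_price), ("R1", r1_price)]:
    print(f"{name}: ${price * 50:,.2f}")
```

The exact ratio is about 27.4x, which the post rounds to "nearly 30x".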

  • Ruben Hassid

    Master AI before it masters you.

    835,256 followers

    This is the most underrated way to use Claude: (and it has nothing to do with writing or coding)
    It's competitive intelligence. Using data that's free, public, and updated every single week.
    Here's my exact step-by-step guide:
    Step 1. Go to claude.ai.
    Step 2. Select the new Claude "Opus 4.6."
    Step 3. Turn on "Extended Thinking."
    Step 4. Pick a competitor. Go to their careers page.
    Step 5. Copy every open job listing into one doc. (Title. Team name. Location. Full description)
    Step 6. Save it as one .txt or .docx file.
    Step 7. Search the company at EDGAR (sec.gov).
    Step 8. Download its recent 10-K or 10-Q filing. (Official strategy, risks, and financials - all public.)
    Step 9. Upload both files to Claude Opus 4.6.
    Step 10. Paste this exact prompt:
    "You are a competitive intelligence analyst at a rival company. I've uploaded [Company]'s complete current job listings and their most recent SEC filing. Perform a strategic intelligence analysis:
    → Cluster these roles by what they suggest is being built. Don't use the team names they've listed. Infer the actual product initiatives from the skills, tools, and responsibilities described.
    → Identify capabilities or teams that appear entirely new — not mentioned anywhere in the SEC filing. These are unreleased bets.
    → Find roles where seniority is disproportionately high for a new team. This signals executive-level priority.
    → Cross-reference the SEC filing's Risk Factors and Strategy sections with hiring patterns. Where are they investing against a stated risk? Where did they flag a risk but have zero hiring to address it?
    → Predict 3 product launches or strategic moves this company will make in the next 6-12 months. State your confidence level and cite specific job titles and filing sections as evidence.
    Format this as a 1-page competitive intelligence briefing for a CMO."
    What you'll find:
    → Products that don't exist yet but will in 6 months.
    → Priorities that contradict what the CEO said.
    → Risks they told the SEC about but aren't addressing.
    This is what consulting firms charge $200K for. It took me 10 minutes.
    I used the new Claude 'Opus 4.6' for a reason:
    ✦ It read 60 job listings and a 200-page filing together.
    ✦ It connects the dots across both.
    ✦ It is superior in thinking and context retrieval.
    That's why I didn't use ChatGPT for this.
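Steps 9-10 can also be run programmatically instead of through the claude.ai UI. A minimal sketch using the Anthropic Python SDK, assuming the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set; the model id, file names, and company name below are hypothetical placeholders, and the prompt is abridged (paste the post's full prompt in practice):

```python
# Sketch of Steps 9-10 as an API call. Model id and file names are placeholders.
PROMPT_TEMPLATE = (
    "You are a competitive intelligence analyst at a rival company. "
    "I've uploaded {company}'s complete current job listings and their most "
    "recent SEC filing. Perform a strategic intelligence analysis. "
    # ...abridged: paste the full prompt from the post here...
    "Format this as a 1-page competitive intelligence briefing for a CMO."
)

def build_message(company: str, jobs_text: str, filing_text: str) -> list:
    """Pack both documents and the analysis prompt into one user message."""
    body = (
        f"=== JOB LISTINGS ===\n{jobs_text}\n\n"
        f"=== SEC FILING (10-K/10-Q) ===\n{filing_text}\n\n"
        + PROMPT_TEMPLATE.format(company=company)
    )
    return [{"role": "user", "content": body}]

def analyze(company: str, jobs_path: str, filing_path: str) -> str:
    """Send the packed message to Claude and return the briefing text."""
    import anthropic  # lazy import so the helper above works without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder: use your account's Opus model
        max_tokens=2000,
        messages=build_message(
            company,
            open(jobs_path, encoding="utf-8").read(),    # Steps 5-6 output
            open(filing_path, encoding="utf-8").read(),  # Steps 7-8 output
        ),
    )
    return reply.content[0].text
```

Usage would look like `analyze("Acme Corp", "competitor_jobs.txt", "competitor_10k.txt")`, with all three arguments swapped for your own competitor and files.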

  • Shelly Palmer

    Professor of Advanced Media in Residence at S.I. Newhouse School of Public Communications at Syracuse University

    383,033 followers

    Yesterday, Reuters reported that OpenAI finalized a cloud deal with Google in May. This might look like routine tech news. It is not. This is a strategic inflection point in the AI infrastructure wars. OpenAI, whose ChatGPT threatens the core of Google Search, is now paying Google billions of dollars to power its growth. This was not a partnership of choice. It was a partnership of necessity. Since ChatGPT launched in late 2022, OpenAI has struggled to meet soaring demand for computing power. Training and inference workloads have outpaced what Microsoft’s Azure alone can support. OpenAI had to expand. Google Cloud was the solution. For OpenAI, the deal reduces its dependency on Microsoft. For Google, it is a calculated win. Google Cloud generated $43 billion in revenue last year, about 12 percent of Alphabet’s total. By serving a direct competitor, Google is positioning its cloud business as a neutral, high-performance platform for AI at scale. The market responded. Alphabet shares rose 2.1 percent on the news. Microsoft fell 0.6 percent. There are only a handful of true hyperscalers in the U.S. AWS, Azure, and GCP dominate, with Oracle and IBM trailing behind. The appetite for compute is growing faster than any one company can satisfy. In this new phase of the AI era, exclusivity is a luxury no one can afford. Collaboration across competitive lines is inevitable. -s

  • Eric Schmidt

    Former CEO and Chairman, Google; Chair and CEO of Relativity Space

    93,493 followers

    Artificial intelligence is reshaping the world. The question is not whether that transformation will happen, but who shapes it and under what conditions. The past year has made clear that the AI race ahead is not a single competition, but multiple overlapping contests unfolding at once. The United States continues to lead in frontier systems, investing heavily in models that push toward artificial general intelligence. That leadership matters. The capabilities being built today could redefine economic productivity and global power. China is pursuing a different strategy. Through its AI+ initiative, the country is embedding AI across manufacturing and key sectors with extraordinary speed. While the U.S. builds the most advanced systems, China is focused on broadly deploying AI to power its economy. Meanwhile, in 2024 the European Union adopted the first comprehensive AI law, seeking to lead through governance rather than innovation. Yet uneven enforcement and expanding exemptions risk slowing the transformation it intends to guide. Saudi Arabia and the UAE are also investing hundreds of billions of dollars in data centers to become key players in the global AI economy. This is why I’ve said the greatest risk America faces is winning the AI frontier and still losing the AI era. Leadership in this moment requires more than breakthrough models. It requires solving energy constraints, scaling infrastructure, upskilling workers, and accelerating adoption across the entire economy. Building the frontier is essential, but converting that advantage into sustained economic strength will determine who leads the era. #SchmidtSights

  • Amanda Natividad

    Founder, Zero Click Marketing | VP Marketing, SparkToro

    63,488 followers

    All of our feeds: "AI is killing Google Search!" But what does the actual data show? Our new research reveals: → Google Search grew 21.64% in 2024 → Google receives ~373x more searches than ChatGPT → Even if you combined ALL AI tools, they'd represent < 2% of the search market Here's what a lot of marketers get wrong about AI hype: We confuse our tech-forward bubble with mainstream behavior. Those 20 people in your Twitter feed who "never use Google anymore"? They're not representative of the 14 billion searches happening on Google every day. But here's what a lot of us REALLY get wrong: We can't hold the tension of 2 competing truths in our mind. Because our new research doesn't mean you should ignore AI tools. If your audience is early-adopting and tech-forward, being visible on platforms like ChatGPT makes perfect sense. But for most brands? Your audience is still primarily using traditional search. Smart marketers follow their audience data, not media headlines. Full research on the SparkToro blog, with all the math showing exactly how we calculated these comparisons. Link below in the comments (and hopefully, soon, in the AI Overview 😏).
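The post's headline ratios can be sanity-checked with simple arithmetic, using the figures as quoted (~14 billion Google searches per day and a ~373x Google-to-ChatGPT ratio). This is an illustrative sketch, not SparkToro's exact methodology:

```python
# Implied ChatGPT volume and market share, from the post's quoted figures.
google_daily = 14_000_000_000   # ~14B Google searches per day (as quoted)
ratio = 373                     # Google receives ~373x more searches (as quoted)

chatgpt_daily = google_daily / ratio
chatgpt_share = chatgpt_daily / (google_daily + chatgpt_daily)

print(f"Implied ChatGPT searches/day: {chatgpt_daily:,.0f}")     # ~37.5M
print(f"ChatGPT share of combined volume: {chatgpt_share:.2%}")  # ~0.27%
```

Even multiplying that share across several AI tools stays under the post's ~2% ceiling.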

  • Howard Yu

    IMD Business School, LEGO® Professor | 2025 Thinkers50 Top 50 | Director, Center for Future Readiness

    57,897 followers

    Trump wants 15% of NVIDIA's China revenue. Beijing wants zero dependence on American chips. DeepSeek now trains on Huawei hardware. Alibaba built its own AI processor. The real challenge for NVIDIA isn't Washington. It's irrelevance. The chip containment strategy isn't working. For most Chinese companies, switching from NVIDIA still means accepting worse performance. But that's changing. Once you combine software breakthroughs with local hardware, the gap shrinks fast. DeepSeek shocked everyone with R1, achieving OpenAI performance at a fraction of the cost through algorithmic innovations. Now they're moving to Huawei chips for R2, showing the hybrid approach works. The numbers tell the real story: China produces 23,695 AI papers annually vs America's 6,378. They file 35,423 AI patents vs 2,678 from US, UK, Canada, Japan, and South Korea combined. Half the world's AI researchers are in China, creating most leading open-source models. To compete, America needs to invest in fundamentals, not restrictions. Quantum computing, nuclear-powered data centers, attracting global talent. These take decades, not election cycles. DeepSeek's shift to Huawei isn't just one company's decision. It's a preview. Alibaba's new chip works with NVIDIA's CUDA platform today, but that's transitional. Cambricon's revenue hit $247 million last quarter on domestic demand alone. Their market cap exceeds $87 billion despite warnings about "irrational exuberance." When chips are "good enough" and software is clever enough, dependence becomes choice. Jensen Huang said it best: "To win the AI race, U.S. industry must earn the support of developers everywhere, including China." He estimates China's AI market at $50 billion this year, growing 50% annually. Trump wants 15% of that. Beijing wants 0% dependence. When you block the front door, innovation finds the back window. TAKEAWAY Getting to technological supremacy is the promised land for superpowers. 
    Washington wants quick wins, usually through restrictions that backfire. China isn't trying to match NVIDIA anymore. They're changing what "good enough" means. When half the world's AI researchers decide Huawei chips running clever algorithms IS good enough, being "the best" becomes irrelevant. America knew the fundamentals playbook once. But we're debating export controls while they're shipping products. P.S. The biggest problem with export controls is their reverse network effect. The more restrictions you add, the faster alternatives develop. When "good enough" becomes the new standard, being the best becomes irrelevant. (See my first comment for why this pattern was inevitable...)

  • Brij Kishore Pandey

    AI Architect & Engineer | AI Strategist

    720,706 followers

    We are entering a phase where 𝘬𝘯𝘰𝘸𝘪𝘯𝘨 AI isn’t enough — 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 and 𝗱𝗲𝗽𝗹𝗼𝘆𝗶𝗻𝗴 powerful, responsible AI systems will set you apart. To help navigate this rapidly evolving landscape, here’s a structured 𝟵-𝘀𝘁𝗮𝗴𝗲 𝗷𝗼𝘂𝗿𝗻𝗲𝘆 to mastering Generative AI in 2025:
    → 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 𝗼𝗳 𝗔𝗜: Understand the real differences between AI, ML, and DL. Master the fundamentals like optimizers, activation functions, and gradient descent.
    → 𝗗𝗮𝘁𝗮 & 𝗣𝗿𝗲𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴: High-performing AI starts with high-quality data. Learn how to clean, normalize, tokenize, engineer features, and balance datasets for better model accuracy.
    → 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀 (𝗟𝗟𝗠𝘀): Go deeper than just using GPTs. Study how transformers work, what positional encoding means, and how scaling laws govern large models.
    → 𝗣𝗿𝗼𝗺𝗽𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴: Learn how to design effective prompts, create structured prompt chains, manage token budgets, and optimize model outputs systematically.
    → 𝗙𝗶𝗻𝗲-𝘁𝘂𝗻𝗶𝗻𝗴 & 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴: Master advanced techniques like PEFT, LoRA, and RLHF to fine-tune and optimize models with minimal data and efficient resource usage.
    → 𝗠𝘂𝗹𝘁𝗶𝗺𝗼𝗱𝗮𝗹 & 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗠𝗼𝗱𝗲𝗹𝘀: Expand beyond text to images, audio, video, and cross-modal generation. Understand diffusion models, captioning, and multimodal search.
    → 𝗥𝗔𝗚 & 𝗩𝗲𝗰𝘁𝗼𝗿 𝗗𝗮𝘁𝗮𝗯𝗮𝘀𝗲𝘀: Learn how retrieval-augmented generation (RAG) systems ground models with external knowledge. Explore vector databases like Pinecone, ChromaDB, and FAISS.
    → 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 & 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲 𝗔𝗜: Identify biases, ensure transparency, and integrate responsible AI practices into your systems — because trust and accountability are not optional.
    → 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 & 𝗥𝗲𝗮𝗹-𝗪𝗼𝗿𝗹𝗱 𝗨𝘀𝗲: Turn prototypes into production-grade systems. Focus on API serving, scaling, inference optimization, logging, and setting usage controls.
    Each stage is mapped with the most relevant 𝘁𝗼𝗼𝗹𝘀, 𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝘀, and 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 to focus on. The world does not just need more AI models. It needs 𝗯𝗲𝘁𝘁𝗲𝗿, 𝘀𝗮𝗳𝗲𝗿, and 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱-𝗿𝗲𝗮𝗱𝘆 AI systems. Built by those who deeply understand the full lifecycle from idea to deployment.
    → 𝗦𝗮𝘃𝗲 𝘁𝗵𝗶𝘀 𝗿𝗼𝗮𝗱𝗺𝗮𝗽.
    → 𝗥𝗲𝗳𝗹𝗲𝗰𝘁 𝗼𝗻 𝗶𝘁.
    → 𝗨𝘀𝗲 𝗶𝘁 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱 𝘀𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗺𝗲𝗮𝗻𝗶𝗻𝗴𝗳𝘂𝗹 𝗶𝗻 𝟮𝟬𝟮𝟱 𝗮𝗻𝗱 𝗯𝗲𝘆𝗼𝗻𝗱.
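For the "RAG & Vector Databases" stage in the roadmap above, the core retrieval step can be sketched in a few lines of plain Python. This is a toy illustration (term-frequency vectors instead of learned embeddings, a Python list instead of Pinecone/ChromaDB/FAISS), just to show the mechanics:

```python
# Toy RAG retrieval step: bag-of-words vectors + cosine similarity.
# Real systems use learned embeddings and a vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Crude 'embedding': term-frequency counts over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "LoRA fine-tunes large models with low-rank adapter matrices",
    "Diffusion models generate images by iterative denoising",
    "Vector databases index embeddings for similarity search",
]
print(retrieve("how do vector databases search embeddings", docs))
```

A production pipeline would then prepend the retrieved passages to the model prompt, which is what grounds the generation in external knowledge.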

  • Lo Toney

    Founding Managing Partner at Plexo Capital

    116,973 followers

    Two months ago, the consensus was that Apple had "lost" the AI race. The narrative was that their lack of a frontier model was a failure of innovation. On CNBC in November last year, I argued the opposite: Apple’s silence was not weakness. It was discipline. While competitors were locking themselves into massive capital expenditure cycles to build intelligence, Apple was waiting for the market to mature enough to buy it. As I noted in this clip: "The companies racing ahead on AI may be running faster...but Apple is the only one not running into a margin trap." Last week's news that Apple will license Gemini for ~$1B validates that strategy. They effectively swapped tens of billions in CapEx risk for a predictable, fixed-cost OpEx line item. They did not lose the race. They just refused to run a race that did not make economic sense. Tomorrow, I am publishing a full breakdown of this new dynamic, which is a concept I call "Reverse TAC"...and why the Apple-Google deal marks the end of the "Training Era" and the beginning of the "Inference Economy." Start with the clip below. The math drops tomorrow. #Apple #Google #AI #InferenceEconomics #Strategy #TechInvesting

  • Jason Saltzman

    Insights @ a16z | Former Professional 🚴♂️

    36,299 followers

    The real AI war is being fought in the deployment layer. While everyone obsesses over GPT and Claude, these platforms, which process billions of inferences daily, are quietly determining who actually wins in AI. In just 12 months, the AI deployment landscape transformed more dramatically than cloud infrastructure did in 5 years. Big one-year changes in Mosaic scores (company health and trajectory metric) across the model deployment & serving market signal a fundamental reshaping of the AI infrastructure landscape. Market leaders redefining AI deployment: → Databricks dominates with the highest Mosaic score and $100B valuation, reaching $2.6B revenue with 60%+ growth → Baseten’s rapid rise just attracted a fresh $150M in funding, driven by their serverless GPU infrastructure → Together AI capitalized on generative AI demand, raising $533.5M at a $3.3B valuation with in-house LLMs using reinforcement learning → VESSL AI and Modal are winning with pay-per-use GPU compute models Current market leaders are split into distinct camps that will likely converge or consolidate sooner than we all expect. → Infrastructure specialists like Together AI and Fireworks AI focus on serverless inference for production environments. → Platform plays like Databricks leverage their existing enterprise relationships and massive resources to both build and buy innovation. → Developer-centric players like Modal attract startups with zero fixed costs. The winners share proven technical foundations driving their success: ↳Scale: Hugging Face hosts 500,000+ models for 5 million developers ↳Architecture: Serverless infra eliminates DevOps complexity (Baseten, Modal, Fireworks) ↳Business Model: Pay-per-use pricing removes barriers for growing startups ↳AI-Native: 80% of Databricks’ new databases are now AI-created vs. 
30% last year ↳Generative AI Focus: Together AI and Fireworks built specifically for LLM inference demands
    Critically, these platforms combine efficient compute, intelligent orchestration, and developer-friendly abstractions – creating defensible moats against hyperscaler competition. In turn, this makes these companies prime acquisition targets for the established cloud leaders. With 96% of enterprises deploying AI models (up from 25% in 2023), infrastructure choice has become strategic. Massive YoY revenue growth numbers across both the hyperscalers and emerging players demonstrate the market's trajectory. While the world debates which LLM is smartest, the companies controlling how those models actually reach users are building the real moats. Incredible recent funding rounds and major acquisitions (Nvidia acquiring OctoAI) will define which platforms become the dominant AI infrastructure players. P.S. Want more insights on the companies powering AI deployments? Drop "deploying" in the comments for *free* access to CB Insights' data and insights on the model deployment & serving market.

  • Pratik Thakker

    CEO at INSIDEA | Times 40 Under 40 | HubSpot Elite Partner

    248,580 followers

    Clicks are no longer the competition. Inclusion in AI-generated answers is. During a recent test with a generative search tool for a niche B2B service, the output was simple. A clean summary. A few companies mentioned. None of them were top-ranked in traditional search, and one had no visible paid presence. What stood out was not visibility, but clarity. This reflects a broader shift. AI now shapes discovery before a buyer ever reaches a website or speaks to sales. It interprets, filters, and presents brands based on how well their information is structured and understood. If that structure is weak, the brand does not just rank lower. It gets excluded. Publishing more content does not solve this. Structured clarity does. This week’s newsletter explores why semantic consistency, knowledge frameworks, and disciplined metadata are becoming real advantages. It also unpacks why volume without cohesion is starting to work against teams, not for them. For teams responsible for growth, brand, or go-to-market strategy, this shift is already in motion. The full piece dives deeper: AI Systems Mediate All Discovery.
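One common way to implement the "disciplined metadata" described above is schema.org JSON-LD embedded in each page, so machine readers can parse the brand unambiguously. A minimal sketch; all company details below are hypothetical placeholders:

```python
# Build a schema.org Organization snippet as JSON-LD (hypothetical values).
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example B2B Services Co.",             # placeholder
    "url": "https://example.com",                   # placeholder
    "description": "Niche B2B service provider.",   # keep identical site-wide
    "sameAs": ["https://www.linkedin.com/company/example"],  # placeholder
}

# Embed in the page <head> so crawlers and AI systems can read it.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```

The semantic-consistency point is that the same `name` and `description` strings should appear everywhere the brand is described, so retrieval systems resolve them to one entity.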
