AI holds great potential for the semiconductor industry and will kick-start the next round of innovation for faster, cheaper and more energy-efficient computation – that was my message today at SPIE Advanced Lithography + Patterning. I discussed the potential and the challenges that AI holds for our industry. The potential is clearly huge. AI is rapidly being integrated into applications, and high-performance compute is expected to underpin growth towards $1 trillion of semiconductor sales by 2030.

The challenges are around the computing needs of AI models and the related energy consumption. The compute workload of training a leading AI model has increased 16x every 2 years in recent years – much faster than the increase in computing power delivered by Moore’s law, which is about 2x every 2 years. The energy needed to train a leading model has not grown as steeply, but has still risen 10x every 2 years. This computing need has been met by building supercomputers and massive data centers. If you extrapolate these trends, training a leading AI model would need the entire worldwide electricity supply in about 10 years. That’s clearly not realistic, so the trend has to break: training algorithms must become more efficient, and so must the chips they run on. In other words, the needs of AI will stimulate immense innovation in chip design and manufacturing – and the potential value of AI to our society will put urgency and funding behind that drive.

As a consequence, chip makers are pulling all levers to accelerate semiconductor scaling. This includes lithographic “2D” scaling: shrinking the dimensions of transistors to pack more into a square millimeter. It will also include “3D” integration, with innovations like backside power delivery, transistor designs like gate-all-around, as well as stacking chips in the package, where holistic lithography will play a critical role in delivering performance requirements.
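A quick back-of-envelope check of that extrapolation, with deliberately rough, assumed figures (on the order of 10 GWh for one leading training run today, and roughly 30,000 TWh of worldwide annual electricity supply):

```python
import math

train_energy_gwh = 10.0            # assumed: energy of one leading training run (GWh)
world_supply_gwh = 30_000 * 1_000  # assumed: ~30,000 TWh/year worldwide, in GWh

growth_per_2yr = 10.0              # training energy grows ~10x every 2 years

# Solve train_energy * growth^(years/2) = world_supply for years.
years = 2 * math.log(world_supply_gwh / train_energy_gwh, growth_per_2yr)
print(f"one training run would match annual world supply in ~{years:.0f} years")
```

With these assumptions the crossover lands around 13 years out; shifting the assumed starting energy by a full order of magnitude moves the answer by only about 2 years, which is why the “about 10 years” conclusion is robust to the exact starting point.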
ASML will support these trends through a comprehensive, holistic lithography portfolio. Our 0.33 NA/0.55 NA EUV lithography systems allow chip makers to shrink dimensions at the lowest possible cost on their critical layers, while tightly matched and highly productive DUV systems will continue to reduce cost. More than ever, metrology and inspection tools – whose data is fed into lithography control solutions that keep the patterning process operating within tight specs to deliver the highest possible production yields – will be essential to deliver 2D scaling and 3D integration processes. 3D integration requires wafer-to-wafer bonding, and we have demonstrated the capability to map the stresses and distortions that bonding creates and to compensate for them, reducing overlay errors for post-bonding patterning by 10x or more.

It was a pleasure catching up with the industry’s lithography and patterning experts in San Jose. I’m excited to see our collective innovation power having a go at these challenges. Together, we will push technology forward.
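The map-then-compensate loop described here can be sketched generically: measure overlay errors at marks across the bonded wafer, fit a smooth distortion model by least squares, and subtract the modeled field before the next patterning step. A minimal illustration on synthetic data (not ASML’s actual algorithm; the polynomial model and all numbers are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic overlay measurements at 200 alignment marks across a wafer:
# positions in mm, overlay errors in nm. The "bonding distortion" is a
# smooth field plus measurement noise (all values illustrative).
x = rng.uniform(-150, 150, 200)
y = rng.uniform(-150, 150, 200)
true_dx = 2.0 + 0.05 * x - 0.0004 * x * y        # smooth distortion field (nm)
measured_dx = true_dx + rng.normal(0, 0.5, x.size)

# Fit a low-order polynomial distortion model by least squares.
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, measured_dx, rcond=None)

# Pre-compensate: subtract the modeled distortion before patterning.
residual = measured_dx - A @ coef
print(f"raw overlay RMS:         {np.std(measured_dx):.2f} nm")
print(f"post-correction RMS:     {np.std(residual):.2f} nm")
```

In practice the fitted field would feed the scanner’s exposure corrections; the order-of-magnitude overlay reduction quoted above corresponds to the model capturing the smooth, systematic part of the bonding distortion, leaving mainly measurement noise behind.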
Advancing AI Development
-
Deep stuff! We uncovered a startling link between #entropy, a bedrock concept in #physics, and how #AI can discover new ideas without stagnating. In an era where reasoning models can reflect on problems for days at a time (rather than generating quick, single-step solutions), our study shows how semantic entropy (the spread of meanings) and structural entropy (how evenly the links between the concepts the AI generates are distributed) together hold the secret to ongoing exploration as the model thinks through a problem. Specifically, we measured structural entropy using Von Neumann graph entropy (applied to the graph Laplacian), while semantic entropy came from a similarity matrix over deep language embeddings. The key insight? Although semantic entropy consistently outpaces structural entropy, they remain in a near-critical balance—fueling "surprising edges" that introduce relationships between distant concepts. This mirrors physical systems on the brink of a phase transition, where a little bit of "disorder" keeps the process dynamic yet avoids chaos. The result is an AI that doesn’t just keep pace with known solutions but actively creates new pathways of thought over extended “thinking” sessions. As reasoning models become ever more capable—undertaking extended, multi-day "thought processes"—understanding these fundamental principles is crucial. By weaving these insights into reinforcement learning strategies, we can reward models not just for correctness, but for venturing into novel conceptual ground. This opens the door to AI systems that actively cultivate new insights, rather than settling into narrow patterns or endlessly rehashing the same knowledge.

Going Deeper: When physicists describe entropy, they refer to the measure of "disorder" in a system: the number of ways particles can rearrange without altering the system’s energy. Yet entropy transcends molecules and heat.
In this research, it emerges as the engine that drives AI reasoning models to keep generating fresh ideas over extended periods. The observed dynamics as the AI thinks about a problem reflect self-organized criticality—a state where systems hover between rigid order and random chaos. Much like a sand pile teetering on the edge of collapse, the AI preserves enough organizational structure to remain coherent, yet stays flexible enough to generate unexpected leaps in meaning. The fraction of "surprising edges" remains stable, offering evidence that the model naturally integrates new, distant ideas without toppling into confusion.
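For readers who want the structural-entropy measurement made concrete: Von Neumann graph entropy treats the trace-normalized Laplacian spectrum as a probability distribution and takes its Shannon entropy. A minimal sketch on a toy concept graph (the graph and values are illustrative, not the study’s data):

```python
import numpy as np

def von_neumann_graph_entropy(adj: np.ndarray) -> float:
    """Shannon entropy of the trace-normalized graph Laplacian spectrum."""
    lap = np.diag(adj.sum(axis=1)) - adj   # combinatorial Laplacian L = D - A
    lap = lap / np.trace(lap)              # eigenvalues now sum to 1
    eig = np.linalg.eigvalsh(lap)
    eig = eig[eig > 1e-12]                 # drop (near-)zero modes: 0*log 0 -> 0
    return float(-np.sum(eig * np.log(eig)))

# Toy concept graph: a chain of ideas 0-1-2-3-4 ...
n = 5
chain = np.zeros((n, n))
for i in range(n - 1):
    chain[i, i + 1] = chain[i + 1, i] = 1.0

# ... plus one "surprising edge" linking the two most distant concepts.
with_edge = chain.copy()
with_edge[0, n - 1] = with_edge[n - 1, 0] = 1.0

h_chain = von_neumann_graph_entropy(chain)
h_edge = von_neumann_graph_entropy(with_edge)
print(f"chain: {h_chain:.3f}   with surprising edge: {h_edge:.3f}")
```

On this toy graph the long-range edge raises the structural entropy, i.e. it spreads the Laplacian spectrum more evenly, which is the sense in which surprising edges keep exploration alive.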
-
🚨 Elon Musk and Nvidia CEO Jensen Huang are urging students to look beyond just learning how to code. As AI gets better at handling repetitive tasks, both believe the real advantage will come from understanding how the world works: through physics and math. Jensen Huang recently said that if he were graduating today, he'd focus on physics. He explained that future AI systems will need to work with the physical world, not just digital spaces. This means knowing how things move, how forces interact, and how systems behave in real life. Elon Musk has echoed the same idea. When asked about useful skills for the future, he pointed to physics, backed by math. At Tesla and SpaceX, his thinking is rooted in solving problems from the ground up using core principles, not just following existing methods. They’re not saying coding is useless. It still matters. But the next big opportunities will go to people who understand the systems AI is meant to model, control, and improve. In simple terms, learn how the world really works. Study the tough stuff. Physics and math build the kind of thinking that machines can’t easily replace. ------- Do you agree?
-
A new World Economic Forum report, written in collaboration with Accenture, offers one of the clearest pictures of what it takes to move from AI experimentation to real impact. What stands out most is how sharply the gap is widening between organizations that are still running pilots and those that are now delivering measurable business value. The differentiator isn’t model performance or access to technology. It’s whether leaders can align their organizations around AI as a core capability, not a side project. The companies pulling ahead are doing a few things differently. They’re embedding AI into strategic decision‑making, redesigning workflows so people and AI can collaborate meaningfully, and investing in the foundations that make scale possible: data, platforms, responsible governance, and modern engineering practices. They treat AI less as a promise and more as a system they are actively building. This is exactly what we’re seeing with Copilot across customers of every size. When strategy, data, security, operations, and culture all move together, AI creates compounding value. See the full report:
-
I recently spoke to Gartner about what is next in #AI. Here are my thoughts: We have seen impressive progress in #llm by scaling data and compute. Will this continue to hold? Yes, I believe so, but most of those gains will be in reasoning tasks, where we have precise metrics to measure uplift, the ability to generate synthetic data for further training, and the freedom to trade off computation for accuracy at test time. This is seen in the recent o1 model. For reasoning tasks, we will also be able to remove hallucination when we can construct accurate verifiers that can certify every statement an #llm makes. We have been doing this in our Leandojo project for mathematical theorem proving. However, there is one area of reasoning where #llm will never be good enough: understanding the physical world. This is because language encodes only high-level knowledge and cannot simulate the complex physical phenomena needed in many applications. For instance, LLMs can talk about playing tennis or look up a weather app, but they cannot internally simulate any of these processes. While images and videos can help improve their knowledge of the physical world, models like Sora learn physics by accident and hence still produce physically wrong outputs. How can we overcome this? By teaching AI physics from the ground up. We are building AI models that are trained in a physics-informed manner at multiple scales. They are several orders of magnitude faster than traditional simulations, and can also generate novel designs that are physically valid. You can watch some of those examples in my recent TED talk.
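"Physics-informed" training, in its simplest form, adds a loss term penalizing the residual of a governing equation, so the model is rewarded for being physically consistent rather than just fitting data. A toy sketch with the ODE du/dx = -u (illustrative only; the multi-scale models referenced above are far richer):

```python
import numpy as np

def physics_residual(u, x, h=1e-5):
    """Finite-difference residual of the governing equation du/dx + u = 0."""
    dudx = (u(x + h) - u(x - h)) / (2 * h)   # central difference
    return dudx + u(x)

x = np.linspace(0.0, 1.0, 50)                # collocation points
candidates = {
    "exp(-x)": lambda t: np.exp(-t),         # exact solution of the ODE
    "1 - x":   lambda t: 1.0 - t,            # close to the data near x=0, wrong physics
}

losses = {}
for name, u in candidates.items():
    # Physics-informed loss: mean squared equation residual at the points.
    losses[name] = float(np.mean(physics_residual(u, x) ** 2))
    print(f"{name}: physics loss = {losses[name]:.2e}")
```

During training this residual term is added to the ordinary data loss, steering the model toward physically valid outputs; here the exact solution scores many orders of magnitude lower than the plausible-looking but unphysical candidate.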
-
If you follow the news, you’ve probably seen it all: AI is booming. AI is overhyped. AI will save us. AI will destroy jobs. The Stanford University AI Index 2025 cuts through all of it. Produced by the Institute for Human-Centered Artificial Intelligence, it’s one of the most respected and data-driven reports on the state of AI today. More than 400 pages of concrete insights, from technical benchmarks and real-world adoption to policy shifts, economic impact, education, and public sentiment. The 2025 edition dropped last week. Here are 12 key takeaways:

1. Benchmarks are being crushed. ➝ AI performance on complex reasoning and programming tasks surged by up to 67 percentage points in just one year.
2. AI is no longer stuck in the lab. ➝ 223 FDA-approved AI medical devices. Over 150,000 autonomous rides weekly from Waymo. This is mainstream adoption.
3. Business is going all-in. ➝ $109B in U.S. private AI investment. 78% of organizations using AI. Productivity gains are no longer theoretical.
4. The U.S. leads in quantity; China’s catching up on quality. ➝ Chinese models now rival U.S. models on MMLU, HumanEval, and more. Global AI is becoming a multi-polar game.
5. Responsible AI is lagging behind innovation. ➝ Incidents are rising, but standardized RAI benchmarks and audits are still rare. Governments are stepping in faster than vendors.
6. Global optimism is rising, but not evenly. ➝ 83% of people in China are optimistic about AI. In the U.S., that number is just 39%.
7. AI is getting cheaper, smaller, and faster. ➝ The cost of GPT-3.5-level inference dropped 280x in two years. Open-weight models are nearly matching closed ones.
8. Governments are regulating and investing. ➝ From Canada’s $2.4B to Saudi Arabia’s $100B push, states aren’t watching from the sidelines anymore.
9. Education is expanding, but readiness lags. ➝ Access is improving, but infrastructure gaps and lack of teacher training still limit global reach.
10. Industry is dominating model development. ➝ 90% of top AI models now come from companies, not academia. The gap between top players is shrinking fast.
11. AI is shaping science. ➝ AI-driven breakthroughs in physics, chemistry, and biology are earning Nobel Prizes and Turing Awards.
12. Complex reasoning remains the ceiling. ➝ Despite all the progress, models still struggle with logic-heavy tasks. Precision is still a challenge.

You can download the full report FREE here: https://lnkd.in/dzzuE5tN
-
Change is rarely blocked by technology. It is usually blocked by what the technology implies. Over the past year, I have been observing how different industries respond to AI. The pattern is consistent. People closest to learning and reinvention tend to move first. People closest to reputation and responsibility tend to pause. From my research, I have learned that most hesitation is not a knowledge gap. It is a risk calculation, wrapped in a story.

Reaction sounds like: “This feels like hype.” “We are doing fine without it.” “Why buy a Ferrari if the current car, even though we can afford better, still runs?”

Presence sounds like: “What problem are we solving?” “What would make this safe to scale?” “How do we keep trust while we move faster?”

When you look at the data, the same blockers appear repeatedly: skills gaps, poor data quality, privacy and security concerns, integration challenges, ethics, and unclear regulation. Here is what shifts adoption from stalled to steady:

Treat AI as a capability, not an experiment.
↳ If it remains a side project, it stays fragile.
Start with one clear use case.
↳ Resistance drops when the value is specific.
Make data readiness unglamorous and non-negotiable.
↳ AI is only as reliable as the information it depends on.
Lower the fear of “getting it wrong”.
↳ People do not experiment when mistakes feel career-limiting.
Name the real worry.
↳ In many cases, the unspoken question is, “Where do I fit if this works?”
Choose the right model for the job.
↳ Sometimes that is a smaller, more controllable model; sometimes it is a larger one with stronger safeguards. The point is fit, not fashion.
Put governance on the same timeline as delivery.
↳ Speed without guardrails creates backlash later.
Invest in AI literacy across the organisation.
↳ Not everyone needs to build models, but everyone should understand the limits and responsible use.

The organisations that move fastest are not the most aggressive. They are the calmest.
They create clarity, make learning safe, and treat trust as part of the design. That is what composure looks like when the world changes. Sources I drew on (for the data points and recurring barriers). ➕ Follow (Jyothish Nair) for reflections on AI, change, and human-centred AI. #ArtificialIntelligence #AIAdoption #DigitalTransformation #FutureOfWork
-
From CES this week, one thing is clear: we are moving into the era of physical AI — intelligence that operates in the real world. Robots, including humanoid and non-humanoid systems, are getting a lot of attention right now. This is familiar territory for Autodesk. We have decades of experience working with manufacturing, AI, and industrial design leaders who build in the physical world. MarketWatch recently explored this momentum and included some of my perspective: https://lnkd.in/e_DN9HwC Progress will not come from machines that just look like us, nor just language. It will come from AI that understands physics, objects, and three-dimensional space. That’s why work on world models, like what Fei-Fei Li and others are doing, matters. These systems learn from sensory data to build a usable understanding of their environment. Physical AI will change how every industry that makes things designs, simulates, and executes. That is core to Autodesk’s mission, and I am optimistic about what is ahead. Who is ready to put physical AI to work across everything we design and build?
-
In this latest Forbes article, I draw a compelling line from Ada Lovelace’s 19th-century foresight to today’s AI-driven enterprise transformations. Lovelace envisioned machines augmenting human creativity—a vision now realized as #generativeAI reshapes industries. Accenture's experience with over 2,000 gen AI projects reveals that only 13% of companies achieve significant enterprise-wide value, while 36% are scaling AI for industry-specific solutions. Success in this new era hinges on more than just technology investment. Companies must also invest in their people, prioritize industry-specific AI applications, and embed responsible AI practices from the outset. Organizations adopting agentic architecture (digital teams comprising orchestrator, super, and utility agents) are 4.5 times more likely to realize enterprise-level value.

Here are five key lessons we’ve learned:
1. Lead with value from the top: Executive sponsorship is crucial. Companies with CEO sponsorship achieve 2.5 times higher ROI from their #AI investments.
2. Invest in people, not just technology: Empower your workforce with the skills to harness AI. Organizations excelling in AI transformation invest in broad AI upskilling, adopt dynamic workforce models, and enable human + agent collaboration.
3. Prioritize industry-specific AI solutions: Tailor AI applications to your sector’s unique needs. Companies creating enterprise-level value are 2.9 times more likely to have a comprehensive data strategy to support their AI efforts.
4. Design and embed AI responsibly from the start: Ensure ethical and effective AI integration. Organizations creating enterprise-level value are 2.7 times more likely to have responsible AI principles and governance in place across the AI lifecycle.
5. Reinvent continuously: Stay adaptable in the face of ongoing change. Companies with advanced change capabilities are 2.1 times more likely to achieve successful transformations.
These lessons should serve as a practical playbook for navigating the complexities of #AI integration and achieving sustainable growth. Please read the full article to explore how Lovelace’s visionary ideas are shaping the future of business through #generativeAI. https://lnkd.in/gEVzQeRA
-
The AI race isn’t just about smarter models anymore—it’s about who controls the silicon and the stack.

Google, NVIDIA, and a shifting center of gravity
Google’s Gemini 3 launch, backed by in-house Tensor ASICs, has forced even Nvidia and OpenAI to publicly tip their hats—an unusual moment of mutual acknowledgement in a fiercely competitive market. At the same time, Google’s stock jumped while Nvidia’s dipped, underscoring how capital markets are already repricing what “AI leadership” might look like when hyperscalers own more of the hardware narrative.

ASICs vs GPUs: control vs versatility
Nvidia and AMD still dominate with GPUs that serve broad, complex workloads and are wrapped in a mature software and data center ecosystem that is very hard to displace. Google’s Tensor chips, as ASICs, trade that general-purpose versatility for efficiency on narrower, highly optimized AI tasks—enough to attract interest from Meta and Anthropic, but not yet enough to unseat Nvidia’s platform-scale advantage.

Ecosystems, not winners, will define value
Gemini 3 now tops many public benchmarks across text and image tasks, but other models outperform it on search and specialized use cases—a reminder that “best model” is becoming context-dependent. The more interesting story is ecosystem interdependence: Google is both a rival and a major Nvidia customer, and enterprises are increasingly assembling multi-model, multi-cloud, multi-chip strategies rather than betting on a single winner.

What this means for leaders
For executives, the real strategic questions are shifting from “Which model is best?” to:
⚫ Where do we need tight vertical integration (data + model + chip) versus flexible, multi-vendor optionality?
⚫ How do we avoid over-dependence on a single GPU vendor while not underestimating the cost of moving away from a mature platform?
⚫ Which workloads justify ASIC-style optimization, and which demand GPU-style breadth and agility?
If your current AI roadmap doesn’t explicitly address hardware strategy, ecosystem risk, and a multi-model future, it’s time to revisit it. Bring your product, infra, and finance leaders into the same room and pressure-test your AI stack assumptions for the next 3–5 years—before the chip layer, not the model layer, becomes your biggest strategic constraint. Read More 👉 https://lnkd.in/g7C5nzd2 #AI #GenAI #GoogleGemini #Nvidia #AIChips #CloudComputing #Developers #AIInfrastructure #TechStrategy #EnterpriseAI