Most people don’t know her name. And yet almost every device you touch, from your phone to your laptop to your Wi-Fi router, depends on a few hundred lines of code she wrote in 1985.

Back then, computer networks had a fatal flaw: backup paths created loops. Data would enter those loops and spin forever. Packets multiplied, systems froze, entire networks crashed. It was like sending cars onto a roundabout with no exits. Eventually, everything jammed. The internet of the 1980s could not grow unless someone solved this.

Radia Perlman did. Working at DEC in the mid-1980s, she created the Spanning Tree Protocol. Her idea let switches talk to each other, detect loops, disable the redundant paths that caused them, and re-route traffic when a primary path failed. She taught networks how to heal themselves.

Those few hundred lines of code became part of the backbone of the modern internet, running silently in offices, data centers, and across continents. As you read this in 2025, her algorithm is quietly protecting global networks from failure.

But Radia Perlman walked into rooms where she was mistaken for an assistant. Her work was overlooked, attributed to others, relegated to footnotes. When people later called her the “Mother of the Internet,” it was both a compliment and an irony, because great engineering is often invisible. And so was she.

She kept creating anyway. Over the 1990s and 2000s she earned more than 100 patents, wrote textbooks that shaped generations of engineers, and developed new network security methods. She was inducted into the Internet Hall of Fame in 2014. All of it built on the same philosophy: make systems that survive, that keep going, that quietly hold the world together.

Today, in her seventies, Radia Perlman is still working, and the protocol she wrote almost 40 years ago still runs beneath our digital lives. The internet was built to withstand failure. So was she.

And maybe that’s the lesson: sometimes the people who change the world aren’t loud, or famous, or celebrated. Sometimes they’re just… invisible. But their work holds everything up.

#INTERNET #inspiration #motivation #wisdom #computer #computerscience
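For the technically curious, the core move of her protocol can be sketched in a few lines: pick a root switch, keep one loop-free set of links reaching every switch, and block the rest. Below is a minimal Python illustration of that loop-breaking idea. It is a toy sketch, not the real STP, which elects the root by exchanging BPDU messages between bridges and reconverges dynamically when links fail.

```python
from collections import deque

def spanning_tree(links, root):
    """Toy illustration of STP's core idea: keep a loop-free tree of
    links rooted at a chosen switch, and block every redundant link."""
    # Build an adjacency map from the list of switch-to-switch links.
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)

    active, blocked = set(), set()
    seen, queue = {root}, deque([root])
    while queue:                      # breadth-first walk from the root
        sw = queue.popleft()
        for nb in neighbors[sw]:
            edge = frozenset((sw, nb))
            if nb not in seen:        # first path found to nb: keep this link
                seen.add(nb)
                active.add(edge)
                queue.append(nb)
            elif edge not in active:  # redundant link: it would form a loop
                blocked.add(edge)
    return active, blocked

# Three switches wired in a triangle: one link must be blocked.
links = [("A", "B"), ("B", "C"), ("C", "A")]
active, blocked = spanning_tree(links, root="A")
print("forwarding:", active)   # the A-B and A-C links
print("blocked:   ", blocked)  # the redundant B-C link
```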
Influential Tech Leaders
Explore top LinkedIn content from expert professionals.
-
What if you could save millions of lives by thinking differently about machines? Alan Turing did exactly that.

In 1936, Turing was a 24-year-old mathematician at Cambridge asking a question that seemed purely theoretical: could there be a universal computing machine capable of solving any problem that could be described as a series of logical steps? His answer, published in a paper titled On Computable Numbers, laid the mathematical foundations for every computer that would ever be built. Most people saw it as abstract mathematics with no practical application. Turing saw the future of computation.

Then World War II changed everything. In 1939, Turing joined the Government Code and Cypher School at Bletchley Park. His mission: breaking Enigma, the encryption machine Nazi Germany used for military communications. The challenge was staggering. Enigma had 159 million million million possible settings; checking them manually would take longer than the war would last. Turing designed a machine that worked through the problem logically, eliminating impossible settings until the correct one emerged. The Bombe, as the machine was called, could break Enigma codes in hours instead of millennia. By the end of the war, Bletchley Park was decrypting thousands of messages daily. Historians estimate that breaking Enigma shortened the war by at least two years and saved millions of lives.

But Turing was not satisfied with wartime applications. After the war, he continued developing the theory of machine intelligence. In 1950, he published Computing Machinery and Intelligence, proposing what became known as the Turing Test. Can machines think? If you cannot tell whether you are conversing with a human or a machine, does the distinction matter? This paper became a foundation of artificial intelligence research.

Turing never saw the impact of his work. In 1954, at age 41, he died from cyanide poisoning. The inquest ruled it suicide, though some historians question that conclusion. Today, the Alan Turing Institute serves as the UK's national institute for data science and artificial intelligence, carrying his legacy forward.

The tools you use every day, from smartphones to AI assistants, exist because one mathematician asked whether machines could think and then proved they could. When you encounter someone whose thinking seems too different, too unconventional, too far ahead of current understanding, ask yourself whether you are dismissing the next Alan Turing.

What unconventional thinking in your organisation gets dismissed because it does not fit established patterns?

#AlanTuring #AI #Innovation #Legacy #BritishHistory
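A footnote for the technically curious: the Bombe's logical core (assume a setting, test it against a scrap of expected plaintext, a "crib", and discard it on contradiction) can be illustrated with a deliberately tiny toy cipher. This Python sketch uses a Caesar shift as a stand-in for Enigma's vastly larger key space; weather reports really were a favoured crib source, but the cipher here is purely illustrative.

```python
def encrypt(text, shift):
    """Toy stand-in for Enigma: a simple Caesar shift over A-Z."""
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def eliminate(ciphertext, crib):
    """Bombe-style search: try every key in the space and keep only
    the settings consistent with the known plaintext (the crib)."""
    survivors = []
    for shift in range(26):  # the whole (tiny) key space
        if encrypt(crib, shift) == ciphertext[:len(crib)]:
            survivors.append(shift)
    return survivors

ciphertext = encrypt("WETTERBERICHT", 7)  # "weather report" in German,
                                          # the kind of crib codebreakers expected
print(eliminate(ciphertext, "WETTER"))    # -> [7]: one setting survives
```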
-
Navigators of the Next Web. Where the web ends, they begin. The old internet cracked open. Power shifted. Code evolved. New minds stepped into the breach. They chart the edge no map can show.

📌 Dr. Martha Boeckenfeld: The boardroom futurist guiding Web3 with precision. Trains execs in AI, DeFi, and digital trust. Built “Marthaverse” as a strategic map, not hype. Turns C-suites into future-fluent navigators.

📌 Dr. Rumman Chowdhury: The algorithm auditor holding AI to account. Built tools to test AI for bias and harm. Led Twitter’s META team on ethical AI. Turns black boxes into transparent systems.

📌 Cathy Hackl: The metaverse whisperer brands call before they leap. Advises LVMH, Nike, and beyond on virtual worlds. Built strategies at Magic Leap and HTC Vive. Translates tech hype into usable futures.

📌 Tavonia Evans: The crypto founder coding wealth for the diaspora. Launched Guapcoin to empower Black communities. Builds DeFi tools for financial independence. Turns blockchain into economic self-determination.

📌 Nanjira Sambuli: The policy strategist decoding tech for the people. Shapes digital rights across Africa and beyond. Advises on tech, equity, and global governance. Builds futures where access meets accountability.

📌 Shivani Siroya: The fintech founder who lends without a credit score. Built Tala to serve the underbanked globally. Uses mobile data for financial access and trust. Rewrites who qualifies in the digital economy.

📌 Erika Wykes-Sneyd: The brand strategist who dropped Adidas into Web3. Led the “Into the Metaverse” NFT launch. Connects culture, commerce, and digital identity. Builds brand loyalty block by block(chain).

📌 Kay Firth-Butterfield: The lawyer coding ethics into artificial minds. Led AI policy at the World Economic Forum. Advises nations on safe, inclusive AI futures. Builds guardrails before systems go rogue.

📌 Brenda Darden Wilkerson: The inclusion architect wiring equity into tech. Leads AnitaB.org to close tech’s gender gap. Builds pathways for women in STEM leadership. Designs the future’s talent pipeline, diverse by default.

📌 Wahiba Chair: The cyber guardian building trust into tomorrow’s web. Advises on security for Web3 and IoT systems. Focuses on digital resilience and data defense. Builds safety where code meets chaos.

📌 Sandy Carter: The Web3 exec helping enterprises cross the chasm. Drives adoption at Unstoppable Domains and beyond. Bridges Web2 giants into decentralized futures. Builds strategy where legacy meets the ledger.

📌 Meltem Demirors: The crypto strategist bridging capital with code. Led strategy at CoinShares, shaping digital finance. Launched Crucible to fund Web3 infrastructure. Advises globally on digital asset policy.

New code. New questions. Who sets the course from here?
-
11 lines of code. That’s all it took to save the Apollo 11 mission. Margaret Hamilton wrote them, and in doing so redefined software as a discipline.

During Apollo 11's lunar descent, the guidance computer started throwing critical alarms. The mission was seconds away from disaster. Then Hamilton's software kicked in, prioritising only the most essential tasks and allowing the Eagle to land safely on the Moon.

But this is what makes Hamilton's story essential for every entrepreneur: Hamilton is credited with coining the term "software engineering." In the 1960s, software wasn't taken seriously; hardware engineers dominated NASA. Hamilton fought to legitimise software as real engineering, demanding the same respect given to rocket scientists.

Her approach was revolutionary:
→ Built systems that expected failure and recovered in real time
→ Designed for absolute reliability when there was zero margin for error
→ Created human-centered software that anticipated user mistakes
→ Treated people, process, and technology as one integrated system

When Hamilton's young daughter accidentally crashed a simulator, she wanted to add safeguards against human error. NASA officials dismissed her, saying: "Astronauts are trained never to make mistakes." Hamilton built the safeguards anyway. Months later, during an actual mission, an astronaut made the exact same mistake, and Hamilton's "unnecessary" code saved the day.

This is the mindset every founder needs: Don't just solve today's problems; solve the problems others refuse to acknowledge. Don't just build products; build systems that work when everything else fails. Don't just follow best practices; create the practices that become industry standards.

Hamilton's real legacy isn't the Moon landing. It's proving that when you refuse to accept "that's not how we do things," you can quite literally reach for the stars. You can build the next IMPOSSIBLE.
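Hamilton's overload strategy, shedding everything except the highest-priority work instead of failing wholesale, is simple enough to sketch. Here is a toy Python version; the task names and costs are hypothetical, and the real Apollo executive restarted and rescheduled jobs by priority rather than running a loop like this.

```python
def run_cycle(tasks, capacity):
    """Toy priority executive: when demand exceeds capacity, run the
    most critical tasks and explicitly shed the rest, rather than
    letting everything fail together."""
    # Most critical first (priority 1 is highest).
    tasks = sorted(tasks, key=lambda t: t["priority"])
    executed, shed, used = [], [], 0
    for task in tasks:
        if used + task["cost"] <= capacity:
            executed.append(task["name"])
            used += task["cost"]
        else:
            shed.append(task["name"])  # degrade gracefully
    return executed, shed

# Hypothetical workload: guidance must survive, telemetry can wait.
tasks = [
    {"name": "guidance",     "priority": 1, "cost": 50},
    {"name": "engine_ctrl",  "priority": 1, "cost": 30},
    {"name": "radar_ingest", "priority": 3, "cost": 40},  # roughly the 1202-alarm culprit
    {"name": "telemetry",    "priority": 4, "cost": 20},
]
print(run_cycle(tasks, capacity=90))
# (['guidance', 'engine_ctrl'], ['radar_ingest', 'telemetry'])
```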
-
Driving innovation goes beyond technological advancement: it starts with people.

In the era of AI and rapid technological change, it's easy to fixate on the tools themselves. But true innovation begins with a focus on people, both customers and employees. At RingCentral, we're committed to a people-first AI strategy, designing solutions that support and unify our customers and employees. Whether it’s empowering customer experience professionals with an AI-powered contact center or helping teams have more collaborative meetings, our goal is to create technology that functions as a people enhancer.

Innovation is not just about what we build but why we build it. By taking a people-first approach, leaders can create innovative solutions that make the world a more connected and collaborative place.

I’m eager to hear how other leaders are putting people first in their innovation processes. How do you prioritize the needs and goals of your employees and customers?
-
John von Neumann died in early 1957, but much of his work was absolutely fundamental to the evolution of computing and AI. Today some of it is more relevant than ever.

The von Neumann computer architecture, consisting of a central processing unit (CPU), memory, and input/output mechanisms, is still the structure used today. He also pointed to the potential of parallel computing architectures, which have underpinned AI compute, especially since 2010. His book 'The Computer and the Brain' discussed the potential of brain-inspired computing, which influenced the development of neural networks. He co-authored the seminal text on game theory, 'Theory of Games and Economic Behavior', which underpins many branches of AI, including the algorithms behind AlphaGo and related models. His work on stochastic methods, including Monte Carlo analysis, is fundamental to many AI techniques, including Bayesian inference and Markov chains. His work on self-reproducing automata shaped cellular automata, generative algorithms, and Generative Adversarial Networks. Today his theoretical constructs on self-reproduction are exceptionally relevant as agentic AI rapidly rises.

I was aware of bits and pieces of this, but it was all brought into focus by reading the slightly fictionalized story of von Neumann and his contemporaries, 'The Maniac' by Benjamín Labatut, an absolutely fascinating yarn. Highly recommended.
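To make the Monte Carlo point concrete: the method estimates quantities by random sampling rather than closed-form analysis. The classic toy example, estimating pi by throwing random points at a quarter circle, in Python:

```python
import random

def estimate_pi(samples=1_000_000):
    """Monte Carlo in miniature: sample random points in the unit
    square and count how many land inside the quarter circle.
    The hit ratio converges to pi/4 as the sample count grows."""
    hits = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * hits / samples

print(estimate_pi())  # ~3.14, improving with more samples
```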
-
📰 THE MAN WHO WARNED US ABOUT BLOATED SOFTWARE WAS BORN TODAY – AND WE DIDN’T LISTEN. Read below for the full story ↓↓↓

#OnThisDay - 15th February 1934 - Niklaus Wirth was born.

If you’ve spent time around structured programming, Pascal textbooks, or clean academic-style languages, you’ve felt his influence. Wirth wasn’t shipping syntax experiments. He shaped how generations of developers think about structure, clarity, and restraint. He designed Pascal in 1970, but also ALGOL W, Modula, Modula-2, Oberon, and more. Each wasn’t trying to be bigger. They were trying to be cleaner. More disciplined. More intentional.

Then in 1995 he gave us something that still stings: “Software is getting slower more rapidly than hardware becomes faster.” Now known as Wirth’s Law.

When I started in the 1980s, that wasn’t theory. Memory was tight. Storage was expensive. CPUs were modest. If your program was bloated, it simply didn’t run. Efficiency wasn’t pride, it was survival. You thought about structure because structure reduces bugs. You thought about memory because it was visible and finite. You simplified because complexity had a measurable cost. There was nowhere to hide.

Today we have abundance. Gigabytes are cheap. Processors are absurdly fast. Frameworks abstract the plumbing. Yet much software feels heavier than it needs to be. Layers of abstraction. Deep dependency chains. That’s where Wirth’s Law feels uncomfortable again. He wasn’t saying hardware progress is wasted. He was arguing for lean software. Software that respects constraints even when they’re generous. Software that doesn’t assume power will mask poor design.

And here’s something I’ve never quite explained: Pascal always felt glamorous to me. Disciplined. Proper. Elegant. Maybe it was the academic heritage. Maybe it was the enforced structure. It felt serious, not hacked together.

Fast forward to today, and we’re in another pivot. AI is generating code at scale. Boilerplate vanishes. Features appear in seconds. Powerful? Absolutely. But AI doesn’t automatically produce lean systems. It produces plausible ones. It optimises for patterns it has seen. Without strong engineering judgement, we risk amplifying complexity faster than ever before.

Which brings us back to Wirth. If he believed in clarity and tight design in an era of scarcity, what would he argue for in an era of AI abundance?

After nearly 40 years in this industry, from 8-bit machines to AI-assisted development, I’m convinced of one thing: tools change. Discipline doesn’t expire.

So here’s the question: Is software today genuinely better designed than it was in the 80s and 90s, or are we compensating for inefficiency with faster hardware and smarter tools? And in the age of AI… are we building leaner systems, or just building faster?

👨🏻‍💻 Still coding, still learning, still adapting
📘 Writing debugdeployrepeat.com - a long-view look at software careers
🎮 Building a retro-inspired game world at orebituary.com
-
David Harold Blackwell (April 24, 1919 – July 8, 2010) was one of the most influential mathematicians and statisticians of the 20th century, whose peer-reviewed research now underpins modern finance, artificial intelligence, economics, and decision science. He also shattered barriers in American academia, becoming the first Black member of the National Academy of Sciences (1965) and the first tenured Black professor at the University of California, Berkeley.

Born in Centralia, Illinois, Blackwell demonstrated exceptional mathematical ability early. He earned his BA (1938), MA (1939), and PhD (1941) from the University of Illinois at Urbana-Champaign, completing his doctorate at just 22 years old, making him only the seventh African American in U.S. history to earn a PhD in mathematics. During his early academic career, he authored more than 20 peer-reviewed papers in top-tier journals, contributing original theorems and frameworks that continue to be cited decades later.

Blackwell’s career spanned institutions and generations. From 1944 to 1954, he taught at Howard University, rising to full professor and department chair while publishing extensively. In 1954, he joined UC Berkeley’s statistics department, later serving as chair, mentoring over 60 PhD students, and helping establish Berkeley as a global center for statistics until his retirement in 1988.

The game-changing impact of Blackwell’s work lies in how it transformed decision-making under uncertainty. He independently developed foundational ideas in dynamic programming, advanced game theory, and reshaped Bayesian statistics. His contributions include Theory of Games and Statistical Decisions (1954), Basic Statistics (1969), and the Rao–Blackwell theorem, which permanently improved statistical estimation by showing that conditioning an estimator on a sufficient statistic never makes it worse. These concepts are now core to machine learning, AI optimization, economics, and systems engineering.

That influence reached a powerful modern milestone in 2024, when NVIDIA named its next-generation Blackwell GPU architecture in his honor, explicitly recognizing the mathematician whose peer-reviewed theories help power today’s AI supercomputers and large language models.

Dr. Blackwell received the John von Neumann Theory Prize in 1979, earned 13 honorary doctorates, and was posthumously awarded the National Medal of Science by President Barack Obama in 2012. Dr. Blackwell is proof that one mind can change how the world thinks, decides, and builds the future.

https://lnkd.in/euvNUqnh
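For readers curious what the Rao–Blackwell theorem actually says, here is a compact statement: take any estimator, condition it on a sufficient statistic, and the result is never worse in mean squared error.

```latex
% Rao--Blackwell theorem, compact form.
% \hat{\theta}: any estimator of \theta;  T: a sufficient statistic.
\theta^{*} \;=\; \mathbb{E}\bigl[\hat{\theta} \mid T\bigr]
% The conditioned estimator never has larger mean squared error:
\mathbb{E}\bigl[(\theta^{*} - \theta)^{2}\bigr]
  \;\le\;
\mathbb{E}\bigl[(\hat{\theta} - \theta)^{2}\bigr]
```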
-
Technology isn’t a cost center, it’s your competitive edge. If you can’t shift this perspective, you won’t be able to innovate 📈

Over the past decade, I've guided numerous companies, from pharmaceutical giants to financial leaders, on their journey to becoming product-led. One common mistake I've seen is treating technology as a mere expense rather than a strategic advantage. This view undermines transformation efforts. Here's how it unfolds and what to do about it.

In many organizations, technology is seen as a cost center. When executives talk strategy, it's often about cost reduction. They can articulate market differentiators well but stumble when asked, "How does your tech vision enhance your competitiveness?" Silence. The competitive edge dulls as rivals who leverage tech strategically catch up. This approach is like playing corporate whack-a-mole: solving cost issues while missing opportunities. What if the process you streamlined wasn't needed at all? Or if you could innovate beyond traditional methods?

Many transformations start with Agile to address slow development cycles. But speed alone doesn't equate to success. Agile without product thinking can lead to an output-focused mindset: success measured by backlog clearance rather than by solving real business problems. Transformations stall when teams build features quickly without building the right ones.

It's crucial to view software products as strategic enablers, not just tools to "run the business." Without this shift, product strategies remain uninspiring. Even if your software isn't sold, it can be a major strategic differentiator. Consider Capital One's journey: disrupting the banking sector by using data analytics for credit risk models and improving customer experiences by eliminating unnecessary processes.

What about internal tools? For pharmaceutical companies, bringing drugs to market is essential. Instead of merely speeding up processes, your tech could identify study participants and predict outcomes better than competitors can. It's about asking the right questions. "How do I make this cheaper?" leads to outdated solutions. "How do we re-imagine this process for an exceptional experience?" drives innovation.

If you're on this journey, start by changing the conversation. Ask "why?" and "what if?" Shift from cost-cutting to value creation, from outputs to outcomes, from project management to product thinking. Real transformation isn't about new processes or team reorgs. Those are secondary. The core shift is viewing technology as a strategic asset driving business value. That’s where the real transformation begins.
-
Jeff Dean sat for this portrait in Google’s Building 43 on an August afternoon in 2025. The room was quiet, the light falling in a way that made his eyes appear both steady and amused. He has the look of someone who has been through countless problem-solving sessions but still finds joy in the process.

When he arrived at Google in 1999, the company had fewer than a hundred employees. Search was already straining the capacity of existing systems. Dean and his longtime collaborator Sanjay Ghemawat began sketching solutions on whiteboards, filling them with boxes and arrows that hinted at ways to divide work across thousands of machines. Out of those sessions came MapReduce, which allowed Google to process massive data sets in a fraction of the time. Bigtable followed, giving the company a storage system that could keep pace with its ambitions. Engineers still remember the first time they saw queries finish in hours instead of days.

Stories about Dean inside Google often return to the way he approaches code reviews. He is known to mark up a colleague’s code with detailed comments, sometimes line by line, always pushing for clarity. Yet the tone is never dismissive. Younger engineers recall feeling surprised that someone of his stature took their work so seriously. It gave them confidence to tackle harder problems.

By the middle of the 2010s Dean turned his focus to artificial intelligence. As head of Google Brain, he encouraged the team to take bold steps, even when success was uncertain. TensorFlow, the open-source library they released, was built with that same spirit. It made sophisticated machine learning accessible, and within a few years it was being used by researchers in medicine, climate science, and language technology. Dean liked to point out that the best ideas often came from unexpected places once the tools were in wide circulation.

His colleagues describe a leadership style that is more invitation than command. In meetings he often starts with a quiet question that reframes the problem, steering the discussion toward fundamentals. He is patient with complexity, willing to sit through hours of debate if it means arriving at a solution that will endure. His amusement shows itself in small ways, often in a half-smile when a tough question lands on the table.

To photograph him is to see both sides at once. His arms are folded, his expression serious, yet his eyes suggest he is ready to lean forward into the next idea. The systems he has helped build are vast and nearly invisible, woven into daily life for billions of people. But in person he is disarmingly affable, a reminder that behind the abstractions of code and scale is a human being who still loves to puzzle out how things work.
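The whiteboard idea behind MapReduce is simple enough to sketch: a "map" step that processes each shard of data independently, and therefore in parallel across machines, and a "reduce" step that merges the partial results. Below is a single-machine toy version in Python; the real system adds distribution, shuffling, and fault tolerance.

```python
from collections import Counter
from functools import reduce

# Map: each shard of input is processed independently (in production,
# on a different machine) into partial word counts.
def map_shard(shard):
    return Counter(shard.split())

# Reduce: merge the partial results into one final count.
def reduce_counts(a, b):
    a.update(b)
    return a

shards = [
    "the quick brown fox",
    "the lazy dog",
    "the fox jumps",
]
partials = [map_shard(s) for s in shards]  # the parallelizable step
totals = reduce(reduce_counts, partials, Counter())
print(totals.most_common(2))  # [('the', 3), ('fox', 2)]
```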