Over the last 7 years, IBM has been quietly building something quite deliberate. Not a single product. Not a one-off platform. But a set of capabilities that, taken together, form the operating backbone for enterprise AI. You can see the pattern when you step back:
- Foundation: Red Hat
- Performance: IBM Instana and IBM Turbonomic
- Governance: Apptio, an IBM Company, and HashiCorp
- Integration and data: webMethods and DataStax
- Flow: now strengthened with Confluent
Individually, each of these solves a specific problem. Together, they start to look more like a system. For telecom operators, that matters. Telcos are not short of data. They are not short of platforms. What they are often dealing with is fragmentation, latency between systems, and the challenge of turning insight into action at scale. AI only works in that environment if a few things are true:
- Data moves in real time
- Systems are observable
- Resources are optimised continuously
- Governance is built in, not bolted on
That is where this kind of architecture becomes relevant. Not as a “data fabric” concept, but as a way of running complex, distributed environments where decisions need to be made inside the operational loop, not after the fact. In telecoms, that translates into very practical outcomes:
- Better network performance
- Faster issue resolution
- More efficient use of infrastructure
- Lower cost to serve
The interesting question now is not whether the components exist. It’s whether operators bring them together in a way that actually changes how their business runs. Because in telecoms, AI will be judged by how deeply it is embedded into the operating model and how it influences performance, efficiency and outcomes over time, not how impressive it looks in isolation. #Telecoms #AI #DataStreaming #Observability #FinOps #Cloud #TelcoTransformation Alison Clegg James Stewart Kash Hussain Callum Simpson Alexander Verdi Elke Kunde Begüm Daşkaya Gökhan Yılmaz Chantelle Govender Titus Masike
Telecommunication Engineering Breakthroughs
Explore top LinkedIn content from expert professionals.
-
Telcos, Stop Renting Intelligence. Start Building Brains. 🧠 As of 2025, fewer than 1% of telecom operators own a foundation AI model. SoftBank, SK Telecom, and China Mobile are examples of telcos that have trained their own models on their network data. The rest rely on third-party APIs from OpenAI, Google, or Anthropic for analytics, operations, and automation. This dependency will have economic consequences.
The problem for telcos is the cost:
- A telco-grade foundation model costs about $60–100 million to train once.
- Renting equivalent intelligence through APIs costs $5–10 million per year, but without data control, model access, or customisation.
- Over five years, the cost is similar, except that your intelligence is owned by someone else.
The alternative is co-training: several operators share compute and data under an encrypted federation. Each retains sovereignty, yet all benefit from collective learning. The structure already exists in telecom; the industry has shared towers, cores, and standards for decades. The next telecom standard should not be about air interfaces, but about how networks learn. Owning the cognition layer determines who controls optimisation, automation, and cost efficiency in the next cycle. Three operators have proved it’s technically possible. The question now is whether the rest will keep renting cognition or start building it. https://lnkd.in/gMJsk2Cy
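The co-training idea described above follows the federated-learning pattern: each operator trains on its own network data locally, and only model parameters are shared and averaged. A minimal federated-averaging sketch with numpy, assuming three hypothetical operators each holding a private dataset (all data and dimensions here are illustrative, not drawn from any real deployment):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One operator's local training: gradient descent on squared error.
    Raw data (X, y) never leaves the operator; only weights are shared."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, datasets):
    """One FedAvg round: each party trains locally, then the weights are
    averaged, weighted by each party's sample count."""
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    local_ws = [local_update(w_global, X, y) for X, y in datasets]
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([3.0, -1.0])

# Three hypothetical operators, each with a private (X, y) sample set
datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    datasets.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, datasets)
```

The "encrypted federation" the post mentions would add secure aggregation on top of this averaging step, so no single party ever sees another's raw weights, but the learning loop is the same.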
-
Breakthrough for the #quantum internet: For the first time, a major telco provider has successfully conducted entangled photon experiments - on its own infrastructure. ➡️ 30 kilometers, 17 days, 99 per cent fidelity. Our teams at T-Labs have successfully transmitted entangled photons over a fiber-optic network, over a distance comparable to travelling from Berlin to Potsdam. The system automatically compensated for changing environmental conditions in the network. Together with our partner Qunnect, we have demonstrated that quantum entanglement works reliably. The goal: a quantum internet that supports applications beyond secure point-to-point networks. That requires distributing entangled photons: the qubits used for #QuantumComputing, sensors, or memory. Polarization qubits, like the ones used for this test, are highly compatible with many quantum devices, but they are difficult to stabilize in fibers. From the lab to the streets of Berlin: this success is a decisive step towards the quantum internet. 🔬 It shows how existing telecommunications infrastructure can support the quantum technologies of tomorrow. This opens the door to new forms of communication. Why does this matter for people and society?
🗨️ Improved communications: The quantum internet promises faster and more efficient long-distance communications.
🔐 Maximum security: Entanglement can be used in quantum key distribution protocols, enabling ultra-secure communication links for enterprises and government institutions.
💡 Technological advancement: High-precision time synchronization for satellite networks and highly accurate sensing in industrial IoT environments will need entanglement.
Developing quantum technologies isn’t just a technical challenge. A #humancentered approach asks how these systems can be built to serve real needs and be part of everyday infrastructure.
With 2025 designated as the International Year of Quantum Science and Technology, now is the time to move from research to readiness. Matheus Sena, Marc Geitz, Riccardo Pascotto, Dr. Oliver Holschke, Abdu Mudesir
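The 99 per cent fidelity figure quantifies how close the distributed two-photon state is to an ideal Bell state, F = ⟨Φ+|ρ|Φ+⟩. A minimal numpy sketch of that calculation, using a Werner-state (white-noise) model purely as an illustrative assumption — the post does not specify T-Labs' actual noise model:

```python
import numpy as np

# Ideal Bell state |Φ+> = (|00> + |11>) / sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ideal = np.outer(phi_plus, phi_plus)

def werner_state(p):
    """Bell state mixed with white noise: rho = p|Φ+><Φ+| + (1-p) I/4."""
    return p * rho_ideal + (1 - p) * np.eye(4) / 4

def fidelity(rho):
    """Fidelity with the ideal Bell state: F = <Φ+| rho |Φ+>."""
    return float(np.real(phi_plus @ rho @ phi_plus))

# Analytically, F = p + (1 - p)/4 for a Werner state,
# so F ≈ 0.99 corresponds to p ≈ 0.9867
rho = werner_state(0.9867)
```

For reference, a Werner state is entangled for p > 1/3 and violates a Bell inequality for p > 1/√2, so 99 per cent fidelity sits far inside the usefully entangled regime.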
-
At MWC Barcelona this year, we launched the GSMA Open-Telco LLM Benchmarks to unite a community tackling the unique challenges of telecom AI. The first results were clear: out-of-the-box AI models simply aren’t fit for telco-specific needs. Now, with version 2.0, this effort has evolved into a thriving, open-source collaboration. The findings point to a hybrid architecture as the most effective path forward - combining the broad reasoning of foundation models with the precision of specialised components. In addition to providing clear direction for AI in telecom, what’s really exciting is the unprecedented level of industry collaboration. Operators including AT&T, China Telecom Global, Deutsche Telekom, du, KDDI Corporation, KPN, Liberty Global, Orange, Telefónica, Turkcell, Swisscom, and Vodafone are joined by research and technology partners - Adaptive AI, Datumo, Huawei GTS, Hugging Face, The Linux Foundation, Khalifa University, NetoAI, Universitat Pompeu Fabra - Barcelona (UPF), The University of Texas at Dallas and Queen's University - to build a shared ecosystem for experimentation, validation, and learning. Read more in our latest blog: https://lnkd.in/eTDH5PBX
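The hybrid architecture the benchmark results point to can be pictured as a thin router in front of two models: a general foundation model for broad reasoning and a telecom-specialised component for domain queries. A toy Python sketch of that routing idea; the keyword list and both model stubs are illustrative assumptions, not part of the GSMA benchmark itself:

```python
# Toy hybrid routing: domain queries go to a telecom specialist,
# everything else to a general foundation model. Both "models" are
# plain stubs; in practice these would be inference or API calls.
TELCO_TERMS = {"rsrp", "cqi", "handover", "gnb", "ran", "3gpp", "pdcp"}

def specialist_model(query: str) -> str:
    return f"[telco-specialist] {query}"

def general_model(query: str) -> str:
    return f"[foundation-model] {query}"

def route(query: str) -> str:
    """Send the query to the specialist if it contains telecom vocabulary."""
    words = set(query.lower().replace("?", "").split())
    if words & TELCO_TERMS:
        return specialist_model(query)
    return general_model(query)
```

A production system would use a trained classifier or retrieval step instead of a keyword set, but the division of labour — broad reasoning in the foundation model, precision in the specialised component — is the same one the benchmark findings describe.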
-
“I don’t need GPU programming. Telecom doesn’t use that stuff.” A senior architect told me this years ago when I mentioned I was learning CUDA. And honestly? He wasn’t wrong… at the time. It reminded me of telecom in the 4G era: everything was boxes, hardware, fixed functions, and purpose-built silicon. Why learn parallel computing when the network ran on appliances? But here’s what fixed-function networks can’t do:
→ Scale AI-native workloads
→ Run real-time inference
→ Handle massive MIMO at 6G levels
→ Simulate entire networks as digital twins
→ Accelerate UPF, beamforming, LDPC, and sensing
And here’s the twist… CUDA is what makes all of that possible. Because CUDA is basically a superpower: it lets you use a GPU not just for graphics, but for running thousands of operations at the same time. Perfect for AI workloads, DSP workloads, packet processing, RAN PHY acceleration, and 6G research and simulation. That’s why operators chasing 6G, such as NTT DoCoMo, SK Telecom, Vodafone, AT&T, and Deutsche Telekom, are already building compute-native networks powered by GPUs. And here’s the best part… You don’t need a C++ background to start. Modern tools let telecom engineers learn GPU acceleration with Python:
→ CUDA Python
→ Triton
→ PyTorch
→ NVIDIA Aerial for RAN/Core
→ TensorRT for real-time inference
Because CUDA isn’t just a programming model; it’s how telecom turns AI, DSP, and packet processing into accelerated, scalable, software-defined systems. 6G won’t be built by people who “configure networks.” It will be built by people who understand telecom + compute + AI.
Resources to Begin
🔗 CUDA Basics https://lnkd.in/g4Rkpj22
🔗 CUDA Python (no C++ needed) https://lnkd.in/gJ8ErQX6
🔗 NVIDIA Aerial (Telecom Acceleration) https://lnkd.in/gFkMKcxF
🔗 Beginner Parallel Programming Guide https://lnkd.in/gVP_gswD
#tahasajid #6g #GPU #CUDA #telecom
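To make the "Python first" point concrete, here is a tiny DSP-style example: frequency-domain channel equalization written as vectorized array math with numpy. CuPy exposes a nearly identical API, so swapping the import for `import cupy as np` moves the same code onto a GPU; the signal dimensions and channel model below are illustrative assumptions only:

```python
import numpy as np  # swap for `import cupy as np` to run on a GPU via CuPy

rng = np.random.default_rng(42)

# Toy frequency-domain model: Y = H * X + noise, over many subcarriers.
n_symbols, n_subcarriers = 1000, 64
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
X = rng.choice(qpsk, size=(n_symbols, n_subcarriers))
H = rng.normal(size=n_subcarriers) + 1j * rng.normal(size=n_subcarriers)
Y = H * X + 0.01 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

# Zero-forcing equalization: one vectorized divide across all symbols,
# exactly the kind of data-parallel operation a GPU accelerates well.
X_hat = Y / H

# Hard-decision demodulation back to the QPSK constellation
X_dec = np.sign(X_hat.real) + 1j * np.sign(X_hat.imag)
symbol_error_rate = np.mean(X_dec != X)
```

Nothing in the loop body is per-element Python; it is all array arithmetic, which is why the identical source runs unchanged on CPU (numpy) or GPU (CuPy) and why this style is a gentle on-ramp before writing raw CUDA kernels.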
-
𝐃𝐢𝐝 𝐲𝐨𝐮 𝐤𝐧𝐨𝐰 𝐭𝐡𝐚𝐭 𝐠𝐥𝐨𝐛𝐚𝐥 𝐦𝐨𝐛𝐢𝐥𝐞 𝐝𝐚𝐭𝐚 𝐭𝐫𝐚𝐟𝐟𝐢𝐜 𝐢𝐬 𝐞𝐱𝐩𝐞𝐜𝐭𝐞𝐝 𝐭𝐨 𝐫𝐞𝐚𝐜𝐡 𝐚 𝐬𝐭𝐚𝐠𝐠𝐞𝐫𝐢𝐧𝐠 77.5 𝐞𝐱𝐚𝐛𝐲𝐭𝐞𝐬 𝐩𝐞𝐫 𝐦𝐨𝐧𝐭𝐡 𝐛𝐲 2027? This explosion of data presents both a challenge and a massive opportunity for telecommunication companies. But are they equipped to handle it? The telecommunications industry is undergoing a seismic shift. Why should you care? Because this transformation impacts how we connect, communicate, and experience the digital world. A recent study showed that poor network performance can lead to a 30% increase in customer churn.
👉 In today's hyper-connected world, customer expectations are higher than ever, and telcos need to leverage data to stay ahead of the curve.
👉 Traditional data management systems struggle to keep pace with the sheer volume, velocity, and variety of data generated by modern telecom networks. Sifting through massive datasets to gain actionable insights is like finding a needle in a haystack.
👉 This makes it difficult to optimize network performance, personalize customer experiences, and develop innovative new services. Telcos need a new approach to data management to unlock the true potential of their data.
𝐓𝐡𝐞 𝐬𝐨𝐥𝐮𝐭𝐢𝐨𝐧?
👉 Deutsche Telekom, one of the world's leading telecommunications providers, is leading the charge by designing the telco of tomorrow with BigQuery.
👉 By leveraging BigQuery's powerful data warehousing and analytics capabilities, Deutsche Telekom is able to ingest and analyze massive datasets in real time. This enables them to gain valuable insights into network performance, customer behavior, and market trends.
👉 They can now proactively identify and resolve network issues, personalize offers and services for individual customers, and develop new revenue streams.
𝐊𝐞𝐲 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲𝐬:
👉 Real-time insights: BigQuery enables real-time analysis of massive datasets, allowing telcos to react quickly to changing network conditions and customer needs.
👉 Improved customer experience: By understanding customer behavior and preferences, telcos can personalize services and offers, leading to increased customer satisfaction and loyalty.
👉 Innovation and growth: Access to rich data insights empowers telcos to develop innovative new services and explore new business models.
👉 Scalability and flexibility: Cloud-based solutions like BigQuery offer the scalability and flexibility needed to handle the ever-growing data demands of the telecommunications industry.
This journey highlights the transformative power of data in the telecommunications industry. By embracing cloud-based data solutions, telcos can unlock valuable insights, improve customer experiences, and drive innovation. The future of telecom is data-driven, and companies that embrace this reality will be the leaders of tomorrow. Follow Omkar Sawant for more. #telecommunications #bigdata #cloud #digitaltransformation #datanalytics
-
Silicon Photonics in 2026: The Shift From Trend to Transition
LightCounting forecasts that over 50% of optical transceiver sales will use silicon-photonics modulators in 2026, up from 10% in 2018 — a dramatic industry inflection. This shift is being driven by four major forces:
✅ 1. Explosive Bandwidth Demand from AI Clusters
AI workloads (ChatGPT-class models, large-scale training clusters, hyperscale inference) require:
• 800G → 1.6T optical transceivers
• low-power, low-latency interconnects
• tight integration between compute and optics
Electrical interconnects saturate around a few centimeters at >100 Gbps. Silicon photonics eliminates these physical limits, enabling co-packaged optics and eventually optical I/O directly integrated with advanced packaging.
✅ 2. Foundries Reconfiguring Their Roadmaps for SiPh
The foundry landscape is shifting from small experimental lines to full commercial 300 mm manufacturing.
✅ 3. Wafer Transition: 200 mm → 300 mm
This is one of the biggest structural shifts. Why 300 mm matters:
• Better uniformity of waveguides and modulators
• Higher yield for photonic components
• Economies of scale similar to CMOS
• Better compatibility with advanced packaging
As transceiver volumes scale with AI datacenters, 200 mm lines (like Tower's current base) cannot meet hyperscale demand. Most commercial deployment in 2026 and beyond will rely on 300 mm.
✅ 4. Packaging Becomes the Real Battlefield
Silicon photonics alone is not a complete system; the real bottleneck is packaging and fiber alignment. Three major approaches are emerging:
1. Co-packaged optics (CPO): optical engines integrated beside switch ASICs. TSMC and NVIDIA are pushing this.
2. Pluggable transceivers using SiPh: still dominant today (800G / 1.6T). GlobalFoundries and Intel lead here.
3. Optical I/O / optical chiplets: the future vision, with optical communication directly connected to compute tiles. This requires:
• ultra-low-loss coupling
• integrated lasers or hybrid bonding
• photonic and electronic co-design
Expect early pilot deployments around 2027–2028.
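The "ultra-low-loss coupling" requirement above comes down to simple link-budget arithmetic: a link closes when launch power minus all losses still exceeds receiver sensitivity. A back-of-envelope sketch in Python; every number here (launch power, per-facet coupling loss, fiber attenuation, sensitivity) is an assumed illustrative value, not vendor data:

```python
def link_margin_db(launch_dbm, coupling_loss_db, n_facets,
                   fiber_db_per_km, length_km, sensitivity_dbm):
    """Optical power budget: margin = launch - total losses - sensitivity.
    All quantities in dB/dBm, so losses simply add."""
    total_loss = coupling_loss_db * n_facets + fiber_db_per_km * length_km
    return launch_dbm - total_loss - sensitivity_dbm

# Illustrative intra-datacenter link: 0 dBm launch, 1.5 dB loss at each of
# 2 coupling facets, 0.35 dB/km fiber over 2 km, -8 dBm receiver sensitivity
margin = link_margin_db(launch_dbm=0.0, coupling_loss_db=1.5, n_facets=2,
                        fiber_db_per_km=0.35, length_km=2.0,
                        sensitivity_dbm=-8.0)
```

With these assumed numbers the coupling facets eat 3 dB of a roughly 8 dB budget, which is why shaving tenths of a dB per facet, rather than fiber length, dominates short-reach SiPh packaging economics.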
-
🚦 **Reflections from NVIDIA GTC Washington, D.C. 2025**
Last week's GTC made one thing clear: AI-native infrastructure is evolving fast, and telecom is being invited to the table. But amid the excitement, it's worth taking a balanced look at what's real today versus what's aspirational.
📡 Telecom in the Spotlight
- **Nokia and NVIDIA** announced work on *AI-native 6G RAN nodes* using the Aerial/ARC-Pro platform, a promising signal of how compute and connectivity are converging.
- Huang emphasized that *telecom is the nervous system of the economy*, calling for greater technology independence and domestic innovation.
- Panels on "AI for Telecommunications" showcased prototypes of intelligent RAN optimization, edge analytics, and network planning powered by machine learning.
⚖️ Signals vs. Substance
- **Early days**: Many of these initiatives are still in the *proof-of-concept* phase. Integrating AI models into live RAN environments will require years of testing, spectrum-policy clarity, and vendor alignment.
- **Cost and complexity**: Embedding GPUs and AI accelerators into network nodes could shift the economics of telecom infrastructure; it's a good idea, but not a trivial retrofit. And we have been here before with the whole MEC concept, which failed.
- **Governance**: As sovereign-tech conversations grow louder, telcos will need to navigate new compliance, data-sovereignty, and security frameworks before large-scale deployment.
💭 My Take
AI-enabled wireless is an exciting frontier; it promises smarter, more adaptive networks. But for now, the prudent path is **experimentation with guardrails**: pilot at the edge, validate the economics, and align architecture standards before scaling. If you're in telecom or enterprise network architecture, this is a space to watch closely and approach thoughtfully. #NVIDIAGTC #Telecom #AI #6G #RAN #EdgeComputing #NetworkTransformation
-
📡 Day 3 – Telecom Data Types You Must Know for AI/ML in 5G
Welcome to the 21-Day AI/ML for 5G Learning Series 💡 Before building AI models, you need to understand the fuel that powers them: TELECOM DATA 📊
5G Physical Layer Course link: https://lnkd.in/gnj4PtAZ
Here are the most critical data types used in ML-driven telecom systems:
🔹 1️⃣ RSRP (Reference Signal Received Power)
• Measures the received power of reference signals.
• Used for coverage estimation and handover decisions.
🧠 ML Use: Predicting poor coverage zones and user handover failures.
🔹 2️⃣ CQI (Channel Quality Indicator)
• The UE reports channel conditions on a 0–15 scale.
• Helps the gNB determine the MCS (modulation and coding scheme).
🧠 ML Use: Adaptive bitrate control, link adaptation prediction.
🔹 3️⃣ Call Traces & CDRs (Call Detail Records)
• Include session start/end time, IMSI, cell ID, and QoS parameters.
• Great for user behavior modeling and mobility prediction.
🧠 ML Use: Clustering, anomaly detection, churn analysis.
🔹 4️⃣ Logs (Layer 1–3 Protocol Logs)
• Include MAC, RLC, PDCP, RRC, and NAS events.
• Used for root cause analysis and failure pattern mining.
🧠 ML Use: Auto-classification of failures, RCA automation, testing.
🎯 These data types are the foundation for AI/ML success in telecom. You can't optimize what you don't measure, and this is where data becomes power. #5G
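As a toy illustration of the RSRP use case above, here is a short numpy sketch that flags cells where a large share of measurement samples indicate poor coverage. The -110 dBm threshold, the 50% share, and the synthetic measurements are illustrative assumptions, not 3GPP-mandated values:

```python
import numpy as np

POOR_RSRP_DBM = -110.0  # assumed threshold for "poor coverage"

def poor_coverage_cells(cell_ids, rsrp_dbm, min_share=0.5):
    """Return cells where at least `min_share` of RSRP samples fall
    below the poor-coverage threshold."""
    cell_ids = np.asarray(cell_ids)
    rsrp_dbm = np.asarray(rsrp_dbm, dtype=float)
    flagged = []
    for cell in np.unique(cell_ids):
        samples = rsrp_dbm[cell_ids == cell]
        if np.mean(samples < POOR_RSRP_DBM) >= min_share:
            flagged.append(int(cell))
    return flagged

# Synthetic measurement report: (cell_id, RSRP in dBm)
cells = [1, 1, 1, 2, 2, 2, 3, 3, 3]
rsrp  = [-95.0, -98.0, -101.0,    # cell 1: healthy
         -112.0, -115.0, -104.0,  # cell 2: mostly poor
         -90.0, -93.0, -111.0]    # cell 3: one weak sample
```

A real pipeline would replace the fixed threshold with a trained model over RSRP, RSRQ, and SINR time series, but the shape of the task — aggregate per cell, score, flag — is the same.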
-
Optical fiber prices have surged ~3.3× in the last 18 months. And the reason might surprise many. The explosion in AI infrastructure and GPU-dense data centers is driving unprecedented demand for optical connectivity. Compared to traditional CPU setups, GPU racks require 16–36× more optical fiber to handle the massive data transfer between GPUs. At the same time, the industry is facing shortages of fiber cables and manufacturing capacity, creating a powerful tailwind for optical fiber manufacturers. A few Indian companies positioned to benefit:
1. Sterlite Technologies (STL)
• Among the top 3 global optical fiber companies (≈8% market share ex-China)
• The only Indian player with full integration from preform to fiber to cable
• Data center revenue already ~20% of total, targeting 30% in the next 12–18 months
• GPU-dense AI racks require up to 36× more fiber, directly benefiting STL's IBR and 160-micron fiber products
• Recently secured ₹500 crore in AI/data center orders
2. HFCL
• Manufactures high fiber-count cables (up to 6,912 fibers per cable) used in hyperscale AI data centers
• One of the few companies globally with this manufacturing capability
• Currently turning down orders due to capacity constraints
• Expanding OFC capacity from 30.5 mn fkm to 42.36 mn fkm by June 2026
• Targeting ₹3,500 crore OFC revenue by FY27
3. Finolex Cables
• Optical fiber cable volumes up ~33% YoY in Q3
• Global fiber prices rising from $3 to $5 per fkm due to data center demand
• Doubling fiber draw capacity from 4 mn to 8 mn km by Q1 FY27
• OFC EBIT margins currently ~2.5%, targeting 8–9% as scale and preform integration kick in
• Long-term OFC revenue potential of ₹600–700 crore
The bigger picture: AI isn't just about GPUs and semiconductors. The invisible infrastructure, optical fiber, may quietly become one of the biggest bottlenecks in the AI supply chain. Sometimes the biggest opportunities sit one layer deeper in the stack. I share investment insights on my Substack.
Join 550+ investors here for free: https://lnkd.in/d5cr-eDj #AI #DataCenters #OpticalFiber #Infrastructure #Investing
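The figures quoted in the post lend themselves to quick growth arithmetic. A back-of-envelope Python sketch using only numbers stated above; the 10% rack-conversion share is an assumption for illustration, and the company figures are as quoted in the post, not independently verified:

```python
# Growth implied by figures quoted in the post
hfcl_capacity_growth = 42.36 / 30.5 - 1   # 30.5 → 42.36 mn fkm, ≈ +39%
fiber_price_growth = 5 / 3 - 1            # $3 → $5 per fkm, ≈ +67%
finolex_draw_growth = 8 / 4 - 1           # 4 → 8 mn km, +100%

# If a GPU rack needs 16–36× the fiber of a CPU rack, converting even a
# modest share of racks multiplies total fiber demand substantially:
gpu_share = 0.10  # assumed: 10% of racks become GPU-dense
demand_multiplier_low  = (1 - gpu_share) + gpu_share * 16
demand_multiplier_high = (1 - gpu_share) + gpu_share * 36
```

With these inputs, a 10% conversion alone lifts overall fiber demand roughly 2.5–4.5×, while HFCL's planned capacity grows only about 39%, which is the supply-demand gap the post is pointing at.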