Hugging Face has become a leading platform in artificial intelligence, serving as a central hub for models, datasets, and tools that work across multiple frameworks. Its broad compatibility with major AI frameworks, such as PyTorch, Keras, JAX, and TensorFlow, has been a key factor in its widespread adoption.
Strategic Shift in Framework Support
Recently, Lysandre Debut, Chief Open Source Officer at Hugging Face, announced on LinkedIn that the company will discontinue support for TensorFlow and JAX. Instead, Hugging Face will maintain strong integration with PyTorch and plans to introduce new support for Keras in an upcoming release. The decision to deprecate support is partly driven by the technical debt and maintenance burden of supporting multiple frameworks as the field rapidly evolves. This move signals a strategic focus on streamlining development and enhancing user experience with PyTorch as the primary framework.
PyTorch’s official LinkedIn page shared Debut’s announcement, expressing enthusiasm for Hugging Face’s decision:
We're excited to see this bet on PyTorch with a focus on keeping code simple!
Hugging Face’s decision to deprecate support for TensorFlow and JAX in its flagship Transformers library marks a pivotal shift in the open-source AI landscape. This move, which refocuses the platform’s development on PyTorch (with continued Keras support), is set to reshape workflows, tooling, and competitive dynamics across the AI ecosystem.
Immediate and Disruptive Consequences for a Vast User Base
The most immediate and acute impact would be felt by the countless developers and researchers who rely on the seamless interplay between Hugging Face and these frameworks.
- Broken Workflows and Increased Friction: The transformers library, a flagship Hugging Face project, offers a unified and straightforward interface for loading, training, and deploying models across different frameworks. The simple from_pretrained() and push_to_hub() commands that work effortlessly with TensorFlow and JAX would cease to function, breaking countless production and development workflows. Developers would be forced to abandon this streamlined approach and invest significant time and effort in learning and implementing alternative methods for model loading, saving, and sharing.
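To make the loss concrete, here is a minimal, framework-neutral sketch of the kind of manual weight handling a TensorFlow or JAX user might fall back on once the one-line from_pretrained()/push_to_hub() convenience is gone. The helper names and the .npz export format are illustrative choices, not a Hugging Face API:

```python
import os
import tempfile

import numpy as np

# Hypothetical stand-in for the convenience transformers provides:
# export weights to a framework-neutral .npz archive and reload them
# manually on the other side, instead of a single from_pretrained() call.

def save_weights_npz(path, weights):
    """weights: dict mapping parameter names to NumPy arrays."""
    np.savez(path, **weights)

def load_weights_npz(path):
    """Reload the archive back into a plain name -> array dict."""
    with np.load(path) as data:
        return {name: data[name] for name in data.files}

# Round-trip a toy weight dict.
weights = {"kernel": np.ones((2, 3)), "bias": np.zeros(3)}
path = os.path.join(tempfile.gettempdir(), "ckpt.npz")
save_weights_npz(path, weights)
restored = load_weights_npz(path)
```

Every step here (naming parameters, choosing a serialization format, keeping metadata in sync) is work that the unified transformers interface previously absorbed.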
- Loss of a Centralized Model Hub: The Hugging Face Hub hosts a vast collection of pre-trained and fine-tuned models for JAX and TensorFlow. The removal of support would render these models inaccessible through the familiar Hugging Face interface, effectively creating a "model graveyard." While the model weights might still exist in their respective repositories, the invaluable metadata, documentation, and community-driven discoverability would be lost. Continued Keras support keeps a door open to some of these models, but using them from JAX or TensorFlow would still require writing custom wrappers.
- Tooling and Infrastructure Obsolescence: Beyond the transformers library, other crucial tools in the Hugging Face ecosystem, such as datasets, accelerate, and evaluate, are designed for interoperability. The removal of support would lead to a cascade of breakages and deprecations, forcing developers to find and integrate new solutions for data loading, distributed training, and model evaluation.
- A Keras Lifeline: Keras remains supported and is now multi-backend, integrating with PyTorch, TensorFlow, and JAX. This may mitigate some of the disruption, but the need for custom wrappers and the loss of direct TensorFlow and JAX support in Transformers remain a significant change.
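As a rough illustration of what such a custom wrapper might involve, the sketch below renames and transposes dense-layer weights from PyTorch conventions to Keras conventions. The function torch_to_keras_dense and the layer names are hypothetical, and a real model would need far more than dense layers:

```python
import numpy as np

def torch_to_keras_dense(state_dict):
    """Map PyTorch-convention dense-layer weights to Keras conventions.

    Assumes a flat dict of NumPy arrays, e.g. exported from a PyTorch
    state_dict. Handles only Linear/Dense layers.
    """
    converted = {}
    for name, array in state_dict.items():
        if name.endswith(".weight"):
            # PyTorch stores dense kernels as (out_features, in_features);
            # Keras Dense expects (in_features, out_features), so transpose.
            converted[name[: -len(".weight")] + "/kernel"] = array.T
        elif name.endswith(".bias"):
            converted[name[: -len(".bias")] + "/bias"] = array
    return converted

# Toy example: one dense layer with 8 inputs and 4 outputs.
state = {"fc1.weight": np.zeros((4, 8)), "fc1.bias": np.zeros(4)}
keras_weights = torch_to_keras_dense(state)
```

Multiplied across attention blocks, normalization layers, and embedding tables, this conversion overhead is exactly what the unified library previously hid from users.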
- Hindrance to Reproducibility and Collaboration: The Hugging Face Hub is a vital platform for researchers to share their work, ensuring reproducibility and fostering collaboration. The ability to easily download and build upon existing models in their framework of choice would be severely hampered. This would slow down the pace of innovation as researchers grapple with the technical overhead of accessing and adapting models from a fragmented landscape of sources.
- Increased Barrier to Entry: For newcomers to the field, Hugging Face has significantly lowered the barrier to entry by providing easy access to state-of-the-art models and tutorials. Without this unified platform, aspiring researchers and developers working in JAX or TensorFlow would face a much steeper learning curve, potentially discouraging them from entering the field.
NVIDIA's GPUs: Cementing Market Dominance
For NVIDIA, this scenario would be exceptionally beneficial, reinforcing its market leadership and strengthening its software moat.
- Increased Demand and Reinforced Dominance: PyTorch runs best on NVIDIA GPUs, with deep integration into the CUDA software stack. By funneling the entire Hugging Face ecosystem (from students to large enterprises) exclusively onto PyTorch, the de facto hardware choice becomes NVIDIA GPUs. Workloads that might have run on TPUs or other accelerators would migrate to NVIDIA's data center products (like the H100 and its successors), boosting demand and solidifying their dominant market position.
- Strengthening the CUDA Moat: NVIDIA's true power lies in its CUDA software ecosystem. Directing more developers to the PyTorch-NVIDIA stack would further entrench CUDA as the unquestioned industry standard. The network effect would intensify: more developers optimizing for CUDA would lead to more tools and better performance, making it even harder for competitors like AMD and Intel to attract a critical mass of AI developers.
- Accelerated Innovation: With a larger, more concentrated user base on the PyTorch-NVIDIA stack, the feedback loop for innovation would accelerate. Both NVIDIA and the PyTorch community would likely double down on optimizations (from low-level libraries like cuDNN and NCCL to high-level performance enhancements) further widening their performance and feature lead over competitors.
Google's TPUs: A Major Setback
The removal of support for JAX and TensorFlow would be a significant blow to Google's TPU ecosystem, as these are the primary frameworks optimized for its hardware.
- Drastic Drop in Demand: JAX is the native language of TPUs, offering the best performance and scalability on this hardware. TensorFlow also has first-class support. Without Hugging Face providing a gateway for developers to use models in these frameworks, the primary funnel of workloads to TPUs would be severely constricted. This would likely lead to a sharp decline in the demand for and utilization of Google's cloud TPU resources, especially among the broader research and startup communities.
- Isolation of the JAX Ecosystem: JAX's adoption has been fueled by its use in cutting-edge research, with models often shared via Hugging Face. Losing this central platform would isolate the JAX community, making it harder to share research and collaborate. This could slow JAX's growth, potentially relegating it to a niche framework used primarily within Google and its close affiliates, thereby weakening the main value proposition of TPUs to the wider market.
- Pressure on Google's AI Hardware Strategy: Google has invested billions in developing the JAX+TPU stack as a key competitive differentiator. A Hugging Face shift would undermine this strategy by steering the mainstream AI community away from it. Google would be forced to either invest massively in making PyTorch truly first-class and performant on TPUs (a significant technical challenge) or risk its specialized hardware becoming underutilized.
Long-Term Ecosystem-Wide Ramifications
The long-term consequences would extend beyond individual workflows and impact the very structure of the open-source AI community.
- Community Fragmentation: Hugging Face has acted as a powerful centralizing force, bringing together developers and researchers from different framework-specific communities. The removal of support for JAX and TensorFlow would likely lead to a re-fragmentation of the community. Users would migrate to framework-specific hubs, such as TensorFlow Hub, or other emerging platforms, leading to siloed pockets of innovation and reduced cross-pollination of ideas.
- The Rise of Alternative Hubs: In the vacuum left by Hugging Face, platforms like TensorFlow Hub would see a significant surge in adoption. We could also witness the emergence of new, community-driven model hubs dedicated to JAX and Keras. While this would provide new avenues for model sharing, it would also contribute to the aforementioned fragmentation.
- A Blow to Framework Interoperability: Hugging Face has championed the vision of a multi-framework future where developers can seamlessly switch between backends. This move would be a significant retreat from that vision, potentially leading to a more competitive and less collaborative relationship between the major deep-learning frameworks.
- Advantage PyTorch: Given that PyTorch is the other major framework heavily supported by Hugging Face, a decision to drop JAX and TensorFlow would invariably be perceived as a strategic alignment with PyTorch. This could inadvertently accelerate the migration of developers and researchers towards the PyTorch ecosystem, further solidifying its market-leading position.
Consequences for Hugging Face
Such a move would not be without significant repercussions for Hugging Face as well.
- Loss of a Substantial User Base: The JAX and TensorFlow communities represent a massive and active user base. Alienating these users would lead to a significant decline in platform engagement, contributions, and overall relevance.
- Reputational Damage: Hugging Face's reputation as a neutral, community-focused, and framework-agnostic platform would be severely tarnished. The perception of favoritism towards a single framework could erode the trust it has built within the open-source community.
Conclusion
Hugging Face’s deprecation of TensorFlow and JAX support in Transformers is a watershed moment for the open-source AI ecosystem. While it streamlines development and aligns with industry trends favoring PyTorch, it introduces significant challenges for developers, researchers, and hardware vendors invested in alternative frameworks. The move is likely to accelerate PyTorch’s dominance, reinforce NVIDIA’s market position, and fragment the once-unified AI community, with lasting implications for innovation and collaboration.
On the upside, build and deployment should become simpler and more lightweight with a single primary framework.
Note: Written with help from Perplexity & Gemini for rephrasing sentences and research.