Discover the groundbreaking projects from the GTC Vibe Hack winners. Over 100 teams competed, and the winning developers will showcase how they leveraged NVIDIA Nemotron models and the NVIDIA AI ecosystem to create high-impact, production-ready generative AI applications. Hear from Tony Shackford, creator of Book Hook, an AI-driven application that uses a multi-agent system and Nemotron vision-language models to analyze bookshelf images, identify titles, value books using market data, and locate their physical coordinates. Also hear from Tariq Shams and Sergei Voronstov, creators of Esports Hype Narrator, a dynamic AI narration tool that uses Nemotron-Nano VI, Mistral-Nemotron, and Kokoro for realistic speech synthesis to transform live event descriptions into tailored, brand-aligned media with a specific voice. They will share their design process, technical deep dives, and lessons learned.
About us
Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.
- Website: http://nvda.ws/2nfcPK3
- Industry: Computer Hardware Manufacturing
- Company size: 10,001+ employees
- Headquarters: Santa Clara, CA
Updates
RL post-training is hitting a rollout bottleneck. This new paper from #NVIDIAResearch shows how speculative decoding in NeMo-RL + vLLM can accelerate rollouts losslessly, with 1.8x higher throughput at 8B and projected 2.5x end-to-end speedup at 235B. Read the full paper: https://nvda.ws/4t8gPcw
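The mechanics behind that lossless speedup can be shown with a toy sketch (my own illustration, not the NeMo-RL or vLLM implementation): a cheap draft model proposes several tokens, the large target model verifies them in one pass and keeps the longest agreeing prefix, then emits one token itself. With greedy decoding the output is identical to running the target model alone, so the acceleration costs no quality. The dict-based "models" here are placeholders for real networks.

```python
# Toy speculative decoding sketch (illustrative only; not the NeMo-RL/vLLM
# code). Each "model" is a next-token lookup table standing in for a network.
DRAFT = {"the": "cat", "cat": "sat", "sat": "on", "on": "a", "a": "mat"}
TARGET = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def greedy(model, start, max_len):
    """Plain autoregressive greedy decoding: one model call per token."""
    seq = [start]
    while len(seq) < max_len and seq[-1] in model:
        seq.append(model[seq[-1]])
    return seq

def speculative(start, max_len, k=3):
    """Draft proposes k tokens; target verifies them in one batched step."""
    seq = [start]
    while len(seq) < max_len and seq[-1] in TARGET:
        # Cheap draft model proposes up to k tokens autoregressively.
        draft = greedy(DRAFT, seq[-1], k + 1)[1:]
        # Target keeps the longest prefix it agrees with (one big pass
        # in a real system; walked token by token here for clarity).
        cur = seq[-1]
        for t in draft:
            if TARGET.get(cur) != t:
                break
            seq.append(t)
            cur = t
        # Target then emits one token itself: a correction on rejection,
        # or a free bonus token when the whole draft was accepted.
        if cur in TARGET:
            seq.append(TARGET[cur])
    return seq[:max_len]
```

Because every emitted token is either verified or produced by the target model, `speculative` reproduces the target's greedy output exactly while amortizing several tokens per expensive target call.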
Some of the most important conversations at CVPR don't happen in session rooms. We’re hosting a reception for an evening of networking, ideas, and celebration with the AI research community. Giveaways + more. Limited spots 👉 https://nvda.ws/3QACtZF
If you're a student, professor, or researcher, this one's for you. We're hosting a series of virtual sessions where you can get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You'll get practical guidance on integrating agents with academic datasets and course materials to enhance research productivity and classroom workflows.
📅 Session lineup:
May 12: Build an Academic Planner With Agentic AI
May 14: Turn Agent Into Research Assistant
May 19: Make Claws Collaborate as a Research Team
May 22: AI Teaching Assistants
Register now to secure your spot 👉 https://nvda.ws/4cXHW3Y
Discover the groundbreaking projects from the SJSU Agents for Impact Hackathon! With over 100 teams competing during GTC, the winning student developers will showcase how they leveraged the power of NVIDIA Nemotron models and NVIDIA NIM microservices to create AI agents that solve real-world problems in sustainability, education, and accessibility. Hear directly from the creators of CarbonSense AI (an agent for carbon-aware AI operations), LoominAi (a physics-based 3D simulation learning sandbox), and AccessAudit (a platform for ADA-compliant accessibility assessments) as they share their design process, technical deep dives, and lessons learned in building high-impact, production-ready generative AI applications.
Dev Community Live: SJSU Hackathon Winners – Building Impactful Agents
NVIDIA AI reposted this
Today we are releasing Holotron 3 Nano. Built by post-training NVIDIA Nemotron 3 Nano Omni on H Company's proprietary data mixture, Holotron 3 Nano is our latest model for computer-use agents. It is designed for one thing: execution. Agents need to see interfaces, understand context, decide what to do next, and act reliably across real software environments. That requires speed, strong visual grounding, and long-context reasoning.
Holotron 3 Nano brings:
• 30B total parameters, with ~3B active per token
• Up to 256k tokens of native context
• C-RADIOv4 for sharper screen understanding
• 76.7% on OSWorld
• Lower latency in HoloTab, from 2.0s to 1.7s per LLM step
Holotron 3 Nano puts us at the top of the computer-use benchmark race, with a model that is faster, leaner, and ready for real deployment. And this is not the finish line. It is the warm-up lap.
The model is now live in HoloTab and available on Hugging Face under the NVIDIA Open Model License.
👉 Links in comments
#AI #MachineLearning #NVIDIA #HCompany #ComputerUseAgent #Holotron3 #EnterpriseAI #Innovation
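The "30B total parameters, ~3B active per token" shape is characteristic of mixture-of-experts models: a router scores the experts for each token and only the top-k highest-scoring experts run, so per-token compute scales with the active slice rather than the full model. The sketch below is a generic illustration of that arithmetic under the assumption of equal-sized experts; it is not Holotron 3 Nano's actual architecture or code.

```python
# Generic top-k mixture-of-experts routing sketch (an assumption about the
# "30B total / ~3B active" shape; not the actual Holotron 3 Nano design).

def top_k_experts(router_scores, k):
    """Indices of the k highest-scoring experts; only these run per token."""
    return sorted(range(len(router_scores)), key=lambda i: -router_scores[i])[:k]

def active_params(total_params, num_experts, k, shared_fraction=0.0):
    """Rough parameter count touched per token, assuming equal-sized experts.

    shared_fraction covers layers every token uses (embeddings, attention).
    """
    expert_params = total_params * (1.0 - shared_fraction)
    return total_params * shared_fraction + expert_params * k / num_experts

# With 10 equal experts, top-1 routing, and no shared layers, a 30B-parameter
# model touches roughly 3B parameters per token.
```

Real MoE models add shared layers and load-balancing losses on the router, so the active count is approximate, but the headline ratio follows directly from k over the expert count.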
NVIDIA AI reposted this
Faster models are smarter models. Chris Alexiuk, Product Research Engineer at NVIDIA, explains how he sees the race towards AGI evolving and what type of models will win the coming iterations. In the latest episode of The Merge, we talk about the importance of open-source contributions like Nemotron and what the next frontier for the ecosystem will unlock!
Congrats to the Mistral AI team on launching Mistral Medium 3.5! This new single 128B dense text-vision model merges instruction-following, reasoning, and coding. Plus, it scored 77.6% on SWE-Bench Verified. The best models run on NVIDIA. Try it out today on build.nvidia.com or scale with our containerized inference microservice NVIDIA NIM. 🔗 https://lnkd.in/gKTtfjDZ
Coding agents have mostly lived on your laptop. Today we're moving them to the cloud, where they run on their own, in parallel, and notify you when they're done. You can now start them from the Mistral Vibe CLI or directly in Le Chat, offloading a coding task without leaving the conversation. Powering this is Mistral Medium 3.5 in public preview, our new default model in Mistral Vibe and Le Chat, built to run for long stretches on coding and productivity work. The new Work mode in Le Chat (Preview) extends this with a powerful agent for complex, multi-step tasks like research, analysis, and cross-tool actions.
Highlights:
🚀 Mistral Medium 3.5, a new flagship model that merges instruction-following, reasoning, and coding into a single 128B dense model, and specializes in long-horizon agentic tasks. Released as open weights, under a modified MIT license.
💪 Strong real-world performance at a size that runs self-hosted on as few as four GPUs.
⚡ Mistral Vibe remote agents for async coding: sessions run in the cloud, can be spawned from the CLI or Le Chat, and a local CLI session can be teleported up to the cloud.
👨‍💻 Start Mistral Vibe coding tasks in Le Chat. Sessions run on the same remote runtime and keep going while you step away.
🛠️ Work mode in Le Chat runs on a new agent, powered by Mistral Medium 3.5, that works through multi-step tasks, calling tools in parallel until the job is done.
Learn more here: https://lnkd.in/eG4bsCvr