In the age of AI, an open format alone isn’t enough. At Snowflake, we’re working with the community to enable true interoperability across every layer:
• Data: Apache Iceberg™ v3 support for semi-structured data, CDC, and more
• Governance: Apache Polaris™ to make fine-grained access controls portable across engines
• Semantics: Open Semantic Interchange (OSI) to standardize metrics and business logic
Plus, innovations like pg_lake are bridging transactional and analytical systems, so data can be accessed where it lives. The goal: give you full agency over your data, without moving it, breaking governance, or losing context. The future of the lakehouse is open, interoperable, and built with the community. Read more: https://bit.ly/4c0fPlI
Snowflake Enables Interoperability Across Data Layers
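To make "accessed where it lives" concrete, here is a minimal sketch of reading an Iceberg table through a Polaris-style REST catalog with PyIceberg. The catalog URI, credential format, and table name are illustrative assumptions, not details from the post:

```python
# Hedged sketch: query an Iceberg table in place via an Iceberg REST catalog
# (the protocol Apache Polaris speaks). All endpoints and names are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "polaris",
    **{
        "type": "rest",
        "uri": "https://polaris.example.com/api/catalog",  # assumed endpoint
        "credential": "<client_id>:<client_secret>",       # assumed auth scheme
    },
)

# Load the table and scan it into pandas without copying data into a new system.
table = catalog.load_table("analytics.events")
df = table.scan(row_filter="event_date >= '2025-01-01'").to_pandas()
print(df.head())
```

The point of the sketch: because the catalog is open, the same table is reachable from any engine that speaks the Iceberg REST protocol, not just the one that wrote it.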
More Relevant Posts
-
Is anyone else reaching a point where their gen AI use needs a real data layer? So far, Claude’s connector to Notion has become my go-to. I’ve also got a project aggregating JSON files directly inside Claude. It works, but it feels like a hack. Either these tools start to incorporate structured data stores, or we all end up with personal data layers in Snowflake. How are you handling it, and where do you think this lands?
-
Day 1 of building an AI Data Copilot
Got the backend working:
– FastAPI setup
– CSV upload endpoint
– Reading data with pandas
– Simple output: rows + columns
Nothing fancy yet. Just building the foundation right. Next step: detect anomalies in the data.
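For readers who want to follow along, here is a minimal sketch of what that Day 1 backend might look like, assuming the endpoint simply parses the CSV and reports its shape (the route name and response format are illustrative, not from the post):

```python
# Minimal sketch of the described backend: FastAPI + CSV upload + pandas summary.
# Requires python-multipart for form uploads. Route and response are assumptions.
import io

import pandas as pd
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/upload-csv")
async def upload_csv(file: UploadFile):
    # Read the uploaded bytes and parse them with pandas.
    contents = await file.read()
    df = pd.read_csv(io.BytesIO(contents))
    # "Simple output: rows + columns", as in the post.
    return {"rows": len(df), "columns": list(df.columns)}
```

Run it with `uvicorn main:app --reload` and POST a CSV to `/upload-csv`; the anomaly-detection step would then build on the parsed DataFrame.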
-
If you haven't looked at TextQL yet, it's worth your time. We tested their new Dashboards feature on Citi Bike NYC data. Starting from zero familiarity with the dataset, the AI agent explored the schema, found and pulled in weather data from an external API on its own, and built an interactive multi-tab dashboard. Under four minutes. We wrote up the full experience, including what worked and what still needs polish. https://lnkd.in/gDqsGVqp
-
Graph analytics often means moving data between systems and managing extra infrastructure. In this demo, Neo4j runs alongside Snowflake data with agents exploring relationships, ranking influence, and clustering patterns. Check it out ⬇️
NODES AI 2026 - Ghost-busting with Neo4j Graph Analytics in Snowflake
https://www.youtube.com/
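As a rough idea of what "ranking influence" can look like in practice, here is a hedged sketch using the Neo4j Graph Data Science PageRank procedure from the Python driver. The connection details, node label, relationship type, and projected graph name are all assumptions; the demo's actual Snowflake-side setup will differ:

```python
# Hedged sketch: rank node influence with Neo4j GDS PageRank.
# URI, credentials, labels, and the projected graph name are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Project an in-memory graph over assumed Person/KNOWS data.
    session.run("CALL gds.graph.project('people', 'Person', 'KNOWS')")
    # Stream PageRank scores and list the most influential nodes.
    result = session.run(
        """
        CALL gds.pageRank.stream('people')
        YIELD nodeId, score
        RETURN gds.util.asNode(nodeId).name AS name, score
        ORDER BY score DESC LIMIT 10
        """
    )
    for record in result:
        print(record["name"], record["score"])

driver.close()
```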
-
Your AI doesn't have trust issues. It has data silo issues. 🤖 Most enterprise RAG and agentic AI workflows fail not because the models are bad, but because the data feeding them is fragmented, inconsistent, or just… a mess. On April 21, Teradata is hosting a live demo showing how Enterprise Vector Store unifies structured and unstructured data at massive scale so your AI can actually do its job. We'll walk through a real healthcare example: hybrid search that delivers qualitative insight AND quantitative precision from the same data. Because in healthcare (and honestly, everywhere), intent and terminology accuracy aren't nice-to-haves. They're the whole ballgame. If you're building enterprise-ready RAG pipelines or agentic AI workflows and want to see what "production-scale" actually looks like, this one's for you. 📅 April 21 | Live Demo 👉 Register here: https://bit.ly/3OkqqyN #AI #GenerativeAI #RAG #EnterpriseAI #DataStrategy #Teradata #VectorSearch #AgenticAI
Join our April 21 live demo showcasing how Teradata’s Enterprise Vector Store unifies structured and unstructured data at massive scale to power next-generation agentic AI and RAG workflows. Learn from a healthcare example how hybrid search delivers both qualitative insight and quantitative precision from the same data, ensuring intent and terminology accuracy: a critical foundation for enterprise-ready, scalable RAG. 👉 Register today! https://bit.ly/3OkqqyN
Scaling Agentic RAG with New Enterprise Vector Store Capabilities
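For readers new to the concept, here is a generic, illustrative sketch of hybrid search scoring in plain Python. It is not Teradata's Enterprise Vector Store API; it only shows the underlying idea of blending a lexical match score (terminology accuracy) with vector similarity (intent):

```python
# Illustrative hybrid search: combine keyword overlap with vector similarity.
# Generic sketch of the technique, not any vendor's API.
import numpy as np

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear verbatim in the document."""
    terms = query.lower().split()
    text = doc.lower()
    return sum(t in text for t in terms) / max(len(terms), 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # alpha balances lexical precision (exact terminology)
    # against semantic recall (query intent).
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

In a domain like healthcare, the lexical term keeps exact clinical terminology from being washed out by purely semantic matches.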
-
We are building the self-healing and self-learning harnesses and frameworks you need to let any employee who can use a chat interface deploy production-ready, enterprise-grade agents. Each agent is backed by a robust, multi-tiered Golden Eval Set and gated by layers of rigorous regression testing, variance stability testing, and more. Once deployed, a monitoring agent continuously scans user patterns, feedback, and agent telemetry to improve the original agent. Oh, and it also performs regular regression testing and semantic drift detection, and will automatically roll back faulty agents in production. Check out Omar’s post below, and hit me up if you’d like to learn more.
We're using agents to ship data agents (Talk to your Data) to production faster than ever. This is helping our clients rapidly accelerate the value they get from Snowflake Intelligence. Our team has developed a series of Skills that integrate with Cortex Code to deploy these agents. But we don't stop there – our agents also deploy: – Observability dashboards for performance/feedback monitoring – A full suite of AI evaluation metrics – Continuous learning pipelines so that the agents improve with feedback And we've deployed these skills to phData Forge to empower our team across all client engagements. Check out this incredible blog post from Omar Abid to learn more! https://lnkd.in/dG5i8vE9 #phDataForge #CortexCode #coco
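As a rough sketch of the eval-gating idea both posts describe (this is not phData's actual framework; the golden evals, metrics, and thresholds below are invented for illustration), a deployment gate might look like this:

```python
# Illustrative eval gate: promote an agent only if it clears a golden eval set,
# is stable across runs, and does not regress against the deployed baseline.
# All names, prompts, and thresholds are hypothetical.
import statistics

GOLDEN_EVALS = [  # hypothetical golden eval set: (prompt, expected substring)
    ("total revenue last quarter?", "revenue"),
    ("top 5 customers by spend?", "customers"),
]

def run_golden_evals(agent, runs: int = 3):
    """Score an agent (a prompt -> str callable) over repeated runs."""
    scores = []
    for _ in range(runs):
        hits = sum(expected in agent(prompt) for prompt, expected in GOLDEN_EVALS)
        scores.append(hits / len(GOLDEN_EVALS))
    # Mean accuracy plus variance as a crude stability signal.
    return statistics.mean(scores), statistics.pvariance(scores)

def gate_deployment(agent, baseline: float,
                    min_accuracy: float = 0.9, max_variance: float = 0.01) -> bool:
    accuracy, variance = run_golden_evals(agent)
    # Fail the gate on low quality, instability, or regression vs. baseline;
    # a failed gate is where an automatic rollback would trigger.
    return (accuracy >= min_accuracy
            and variance <= max_variance
            and accuracy >= baseline)
```

A production harness would add far richer metrics and semantic drift checks, but the gate-then-rollback control flow is the core of the pattern.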
-
See how HTP and others are now able to extract value from their data at a scale and speed they could not achieve without TextQL.
If you haven't looked at TextQL yet, it's worth your time. We tested their new Dashboards feature on Citi Bike NYC data. Starting from zero familiarity with the dataset, the AI agent explored the schema, found and pulled in weather data from an external API on its own, and built an interactive multi-tab dashboard. Under four minutes. We wrote up the full experience, including what worked and what still needs polish. https://lnkd.in/gDqsGVqp
-
What if agent traces were not just observable, but part of your context graph? This isn't just storing traces. You can create a queryable record of how agent decisions connect to infrastructure behavior, cost, and customer outcomes. With Arize AI Data Fabric, teams can sync agent traces into BigQuery using open Iceberg tables, then join that data with billing, infra, and business datasets in SQL. That makes it possible to answer questions like:
👉 which prompt or tool decisions are driving cost
👉 whether latency issues are model-related or infra-related
👉 which agent behaviors correlate with customer outcomes
https://google.smh.re/5TbR
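To make the "join traces with billing data in SQL" idea concrete, here is a hedged sketch using the BigQuery Python client. The dataset, table, and column names are invented for illustration; the actual Arize trace schema will differ:

```python
# Hedged sketch: join synced agent traces with billing data in BigQuery.
# Table and column names are hypothetical placeholders, not Arize's schema.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT
  t.tool_name,
  COUNT(*) AS calls,
  SUM(b.cost_usd) AS total_cost_usd,
  AVG(t.latency_ms) AS avg_latency_ms
FROM `my_project.agent_traces.spans` AS t   -- assumed Iceberg-backed trace table
JOIN `my_project.finops.billing` AS b
  ON t.request_id = b.request_id
GROUP BY t.tool_name
ORDER BY total_cost_usd DESC
"""

# Answers the first question above: which tool decisions are driving cost?
for row in client.query(QUERY).result():
    print(row.tool_name, row.calls, row.total_cost_usd, row.avg_latency_ms)
```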
-