What is Compatibility Mode in Databricks Unity Catalog & How Does It Enable Cross-Platform Reads?

Compatibility Mode lets us read Databricks Unity Catalog tables from external systems. It creates a read-only, synced version of the table in a location we choose, supporting Delta Lake v1 and Iceberg metadata formats. We can then read the data from Athena, Snowflake, Microsoft Fabric, Apache Spark / Trino, and the Unity REST API.

How does it enable cross-platform reads?
- Works only for Unity Catalog managed tables, streaming tables & materialized views
- Creates a synced table copy in external storage
- Auto-refresh happens hourly by default for managed tables (configurable to near real-time with the 0 MINUTES setting); streaming tables and materialized views refresh after every commit by default
- Fully automated using Predictive Optimization

Steps to enable:
- Ensure an external location exists & you have CREATE EXTERNAL TABLE permission
- Enable Compatibility Mode using table properties
- Confirm status using DESC EXTENDED

Real-Life Example: Imagine we maintain customer data in Unity Catalog. Our analytics team uses Databricks, but the finance team analyzes data in Athena or Snowflake. Instead of exporting data again and again:
- We just enable Compatibility Mode
- Finance continues using Athena/Snowflake
- Everyone reads the same fresh data
- No duplicate pipelines or data drift

This reduces engineering effort, storage duplication, and, most important of all, cross-platform conflicts!
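The enabling steps in the post above can be sketched in Python as it might look from a Databricks notebook. This is a minimal sketch, not confirmed syntax: the table name, refresh interval, and the `TBLPROPERTIES` keys below are illustrative assumptions — check the Databricks documentation for the exact property names your workspace expects, and run each statement with `spark.sql(...)` on a cluster.

```python
# Sketch of enabling Compatibility Mode on a Unity Catalog managed table.
# NOTE: the table name and the TBLPROPERTIES keys below are illustrative
# assumptions, not confirmed Databricks syntax; verify against the docs.

def compatibility_mode_statements(table: str, refresh_minutes: int = 60) -> list:
    """Build the SQL for enabling and then verifying Compatibility Mode."""
    return [
        # Step 2 from the post: enable Compatibility Mode via table
        # properties (hypothetical property names shown for illustration).
        f"ALTER TABLE {table} SET TBLPROPERTIES ("
        f"'delta.compatibility.mode' = 'enabled', "
        f"'delta.compatibility.refreshInterval' = '{refresh_minutes} MINUTES')",
        # Step 3 from the post: confirm status.
        f"DESC EXTENDED {table}",
    ]

# refresh_minutes=0 corresponds to the near-real-time "0 MINUTES" setting.
for stmt in compatibility_mode_statements("main.sales.customers", refresh_minutes=0):
    print(stmt)
    # On a Databricks cluster: spark.sql(stmt)
```

After the `ALTER TABLE`, the `DESC EXTENDED` output is where you would confirm the synced copy's external location and refresh status.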
Cross-Platform Data Aggregators
Explore top LinkedIn content from expert professionals.
Summary
Cross-platform data aggregators are tools or technologies that combine and unify data from multiple systems, databases, or platforms, making it easier to view, analyze, and manage information without manual exports or patchwork solutions. These solutions help businesses streamline their workflows, reduce duplication, and gain clearer insights across different tools or environments.
- Streamline data access: Set up automatic syncs or connections so you can view and analyze data from all your platforms in one dashboard, rather than jumping between tools.
- Simplify collaboration: Use centralized reports or unified tables to share up-to-date information with different teams, making communication and analysis much smoother.
- Reduce manual work: Choose aggregators that support real-time updates and flexible integrations, cutting down on tedious exports and manual data blending.
👀 Got a sneak preview of Ascend.io's upcoming Agentic Data Engineering launch, and they're not holding back.

As a data engineer turned Snowflake solutions architect, I've seen more than my share of flashy AI launches. But this one stuck with me for a different reason. Yes, the AI features are 🔥 (agentic pipelines, smarter automation, error explainers). But what really impressed me? How smooth and familiar the experience still is for classic data engineering workflows.

Even running Ascend on top of Snowflake, I was still able to connect to BigQuery and Databricks – all within a single project. No clunky hand-offs, no stitching together brittle pipelines across environments. Just clean, composable data flows that meet you where you are.

What's more: Ascend makes it easy to optimize and centralize your existing processes, so when you're ready to go agentic, the leap feels more like a natural step than a full rebuild. That kind of gradual, guided transition is rare, and it makes a big difference.

This is the kind of tooling consolidation I know people are looking for. Keep everything in Snowflake, but still interoperate with the rest of your ecosystem? That's a big win. If you're in the data engineering trenches and tired of duct-taping platforms together, this might be worth a look.

💬 Curious: How are you all thinking about cross-platform data engineering right now? Are you consolidating tools or adding more?

#DataEngineering #Snowflake #AI #AgenticDataEngineering #Ascend #ModernDataStack #Interoperability #SnowflakeNativeApp
-
I’ve just published a new article exploring strategies to unify data sharing across Snowflake, Databricks, and Microsoft Fabric. While consolidating onto a single platform is often ideal, the reality for many large enterprises is more complex. Team autonomy, legacy investments, and strategic diversification often lead to multi-cloud and multi-product environments. Can your cross-platform integration architecture become a strategic advantage?

The article focuses on options to share delta parquet and iceberg format storage amongst the three platforms: https://lnkd.in/gs4nS8Tt

In the real world, very few large organizations are unified on a single data and analytics platform. Snowflake, Databricks, and Microsoft Fabric are all very popular products with widespread adoption. All three offer lakehouse architecture tools, but what are your options if you have data in more than one of these products? How do you share data amongst the platforms in a way that minimizes replication, is cost efficient, and has low latency?

This post is the first in a three-part series focusing on interoperability amongst Snowflake, Databricks, and Microsoft Fabric.

#Snowflake #Databricks #AzureDatabricks #MicrosoftFabric
-
Analyzing ads across channels shouldn't be chaotic, but sometimes it is… Manual exports, fragmented views, metric inconsistency, and other things we "love" make cross-platform analysis unbearable.

Solution? A live report that combines key indicators from the ad platforms you use. Example? A PPC multi-channel dashboard in Looker Studio, providing marketers with:

✅ Clarity across platforms: check the performance of Google Ads, Microsoft Ads, and even social ads on one screen.
✅ Real-time analytics: data syncs automatically with Coupler.io no-code connectors to reflect the current state of things. Unlike a standard Looker Studio connector, these have no limits on the number of ad sources to blend, and you can reuse your data with other BI tools.
✅ Customizability: it's a white-label template that you can quickly set up and share with clients or your team.

Ad analysis shouldn't take hours. It must (and can) be simple and clear. For those tired of cross-channel reporting patchwork, give this template a try:

🔗 Link in the first comment.

#PPC #GoogleAds #MicrosoftAdvertising #Ad
-
PostgreSQL's Foreign Data Wrappers (FDW) are awesome, and I really wish they were more widely known and used.

Many companies distribute their production data across multiple database instances or types—whether for single-tenant architecture, sharding, or optimizing the DB choice for specific workloads. Usually, though, these separations are not logically "clean." Sooner or later, use cases emerge that require aggregating and correlating data across databases—like "super tenants" needing access to multiple tenant DBs or internal dashboards pulling from different sources.

How can PostgreSQL help in these scenarios? FDWs to the rescue!

PostgreSQL's Foreign Data Wrappers (FDWs) allow users to abstract away a complex database architecture and expose to the backend a single, simple PostgreSQL instance that represents many databases "behind it." With FDWs, users can create "virtual" tables that act as seamless proxies for real tables in remote databases. When you query a "virtual" FDW table, PostgreSQL automatically retrieves the data from the remote database and returns it, making it function just like a native table in your own instance.

What’s really awesome is just how well PostgreSQL implemented them. PostgreSQL can push down filters and aggregations to remote servers to minimize data transfer, fetch data in parallel, support both reads and writes, and just works as expected.

Long live PostgreSQL and all the awesome (and sometimes unknown) features it offers!
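As a minimal sketch of the `postgres_fdw` setup the post describes: the DDL below wires a local PostgreSQL instance to one remote tenant database. The server name, host, database, credentials, and schema names are placeholders; the Python wrapper only assembles the statements, which you would execute via `psql` or a driver such as `psycopg2`.

```python
# Minimal postgres_fdw setup sketch. Host, dbname, user, and schema names
# are placeholders; run the generated DDL against your local PostgreSQL.

def fdw_setup_ddl(server: str, host: str, dbname: str,
                  remote_user: str, local_schema: str) -> list:
    """DDL to expose a remote PostgreSQL database as local foreign tables."""
    return [
        # One-time: load the FDW extension.
        "CREATE EXTENSION IF NOT EXISTS postgres_fdw;",
        # Register the remote server.
        f"CREATE SERVER {server} FOREIGN DATA WRAPPER postgres_fdw "
        f"OPTIONS (host '{host}', port '5432', dbname '{dbname}');",
        # Map the local role to remote credentials.
        f"CREATE USER MAPPING FOR CURRENT_USER SERVER {server} "
        f"OPTIONS (user '{remote_user}', password 'change-me');",
        # Pull in every remote table as a local foreign table.
        f"CREATE SCHEMA IF NOT EXISTS {local_schema};",
        f"IMPORT FOREIGN SCHEMA public FROM SERVER {server} INTO {local_schema};",
    ]

for stmt in fdw_setup_ddl("tenant_a_srv", "tenant-a.internal",
                          "tenant_a", "readonly", "tenant_a"):
    print(stmt)
```

After the `IMPORT FOREIGN SCHEMA` step, every table in the remote `public` schema is queryable locally (e.g. `SELECT * FROM tenant_a.orders`), and PostgreSQL pushes eligible filters down to the remote server.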
-
Most B2B marketing teams have data in 5+ platforms. 😅 So getting a cross-platform answer still takes hours!

"Which Google Ads keywords drive leads that actually close in HubSpot?" That's a 2-hour spreadsheet exercise.

We spent the last few weeks connecting all 5 platforms to Claude AI using MCP (Model Context Protocol) and documenting everything. Here's what we found:

→ 3 methods work: open-source servers, no-code connectors, and unified extensions like GrowthSpree's Marketing AI MCP.
→ Single-platform connections are useful. All platforms connected together is transformative.
→ The highest-value questions are cross-platform by nature.

We wrote a full breakdown as an article, covering what MCP is, how each method compares, and how we built a free MCP extension at GrowthSpree that connects all 6 platforms, leading to the exact cross-platform queries that changed how our team operates daily.
-
When working with multiple APIs, there are many situations where combining them, including their individual requests and responses, becomes a necessary and important task. These requirements vary widely in complexity, from straightforward, easily manageable tasks to intricate challenges that demand specialized skills and knowledge. With organizations now managing tens, hundreds, and even thousands of APIs, a robust understanding of API composition, aggregation, and orchestration has become critically important for efficient operations and strategic advantage.

Regardless of the naming conventions different organizations use, the fundamental task remains the same: combining APIs and endpoints to achieve a functional outcome. API aggregation is a technique used to merge several individual requests into a unified request for streamlined processing and efficiency.

𝗧𝗵𝗲 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 𝗲𝘅𝗮𝗺𝗽𝗹𝗲𝘀 𝘀𝗵𝗼𝘄 𝘁𝗵𝗲𝗶𝗿 𝘂𝘀𝗲 𝗮𝗰𝗿𝗼𝘀𝘀 𝘃𝗮𝗿𝗶𝗼𝘂𝘀 𝗱𝗼𝗺𝗮𝗶𝗻𝘀 𝗮𝗻𝗱 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗲𝘀.

1. E-commerce Product Details Aggregation: Combine product descriptions, pricing, reviews, and availability from multiple suppliers.
2. Travel and Hospitality Flight and Hotel Search: Consolidate data from multiple booking platforms for flights, hotels, and car rentals.
3. Financial Services Account Aggregation: Display data from multiple bank accounts or credit cards in one view.
4. Social Media Unified Feed: Aggregate posts, tweets, and videos from different platforms.
5. Health and Fitness Patient Health Records: Combine data from wearable devices, medical tests, and healthcare provider systems.
6. Real Estate Property Listings: Merge property data from various real estate platforms.
7. Entertainment Streaming Guide: Aggregate movie or show listings from various streaming platforms.
8. Logistics Shipment Tracking: Combine tracking details from multiple courier services.
9. Education and Learning Course Aggregation: Combine online courses from multiple providers.
10. General Utility Weather Forecasting: Combine data from multiple weather APIs for a more accurate forecast.

𝗛𝗼𝘄 𝗧𝗵𝗲𝘀𝗲 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀 𝗛𝗲𝗹𝗽 𝗖𝘂𝘀𝘁𝗼𝗺𝗲𝗿𝘀

1. Unified Experience: Customers get consolidated information without multiple API calls.
2. Improved Performance: Fewer client-side API requests result in faster load times.
3. Customizations: Easier to apply business logic or transformations on aggregated data.

𝗪𝗵𝗲𝗿𝗲 𝘆𝗼𝘂 𝘄𝗼𝘂𝗹𝗱 𝗻𝗲𝗲𝗱 𝘁𝗵𝗶𝘀?

1. Request Aggregation
2. Response Aggregation
3. Combine API
4. Real-Time Data Aggregation
5. Multi-Layer Aggregation
6. Analytics and Monitoring
7. Orchestrated Workflows

#api #technology #engineering
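The response-aggregation pattern the post describes can be illustrated with a short, self-contained Python sketch: three hypothetical provider calls (stubbed here as local functions standing in for real HTTP requests) are fanned out concurrently and merged into one response, the way an aggregating endpoint would serve a single client request.

```python
# Self-contained sketch of API response aggregation. The three "providers"
# are stand-ins for real HTTP calls (e.g. pricing, reviews, inventory APIs).
from concurrent.futures import ThreadPoolExecutor

def fetch_pricing(sku: str) -> dict:
    return {"sku": sku, "price": 19.99}

def fetch_reviews(sku: str) -> dict:
    return {"sku": sku, "rating": 4.6, "review_count": 128}

def fetch_inventory(sku: str) -> dict:
    return {"sku": sku, "in_stock": True}

def aggregate_product(sku: str) -> dict:
    """Fan one client request out to several backends and merge the results."""
    sources = (fetch_pricing, fetch_reviews, fetch_inventory)
    merged = {}
    # Fan out in parallel so total latency tracks the slowest backend,
    # not the sum of all of them.
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        for partial in pool.map(lambda fetch: fetch(sku), sources):
            merged.update(partial)
    return merged

print(aggregate_product("SKU-42"))
# → {'sku': 'SKU-42', 'price': 19.99, 'rating': 4.6, 'review_count': 128, 'in_stock': True}
```

This is the "Unified Experience" and "Improved Performance" point in miniature: the client makes one call, and the aggregator absorbs the fan-out and the merge logic.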