Real Time Dispatch Edition 10: Mastering Data Export in Eventhouse: From Eventstream to OneLake and SQL
First, I can't believe we're already at 10 editions! I'm super excited to see how much this has grown, and I'm really looking forward to next year. This will be the last edition of the year, so stay tuned for some exciting things I'm working on for early 2026! With all of the Ignite announcements as well, there has been a lot to keep up with.
Mastering data export in Eventhouse
Over the past few weeks, I’ve worked with two different customers who shared a common challenge: how to operationalize curated Eventhouse data for analytics, reporting, and cross‑team sharing without giving broad access to the source Eventhouse.
Although their use cases were different, the architectural pattern ended up being the same.
These scenarios highlight a broader truth:
Operational analytics frequently require pushing Eventhouse data downstream into other engines (OneLake, ADLS, SQL, or external systems) to enable integration without duplicating processing or broadening security boundaries.
Today’s article walks through the practical pattern for doing exactly that using the .export command. We’ll use the Bicycle sample dataset (from Eventstream) and focus purely on the management commands and process.
Why Export from Eventhouse?
Eventhouse gives teams a unified, high-performance environment for real-time data shaping. But operational systems often need that data outside Eventhouse for analytics, reporting, cross-team sharing, and integration with downstream systems.
Exporting becomes the bridge.
For this walkthrough, we’ll work from a table populated from Eventstream’s Bicycle sample data, containing bike station locations, usage, and operational metrics.
Reference docs for setup: https://learn.microsoft.com/en-us/fabric/real-time-intelligence/overview
The .export command moves Eventhouse data to storage in multiple formats: CSV, TSV, JSON, and Parquet.
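Switching formats is just a matter of changing the keyword after "to". Here's a minimal sketch of a Parquet export using the same placeholder OneLake destination we'll build up below (the h prefix on the connection string is explained in the next section):

//Minimal Parquet export; the with (...) options shown later are optional here.
.export to parquet (
    h@"https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/EventhouseExtracts/;impersonate"
)
<| bicyclesampleraw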
Handling Sensitive Values: Obfuscated String Literals
When exporting to storage that uses access keys or connection strings, never expose keys in logs.
Kusto supports obfuscated string literals: prefix a string with h to ensure the value is masked in telemetry:
h'MySuperSecretString'
H"MySuperSecretString"
"ThisIsMy"h'SuperSecretString'
Notice that you can apply the prefix to the entire string or to only a portion of it: adjacent string literals are concatenated, and only the h-prefixed segment is masked.
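A quick way to see that the prefix changes only how the value is logged, not the value itself, is a sketch like this:

//Both columns hold identical text; only the h-prefixed literal is masked
//in query logs and telemetry.
print plain = "MySuperSecretString", masked = h"MySuperSecretString"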
When exporting the data to OneLake, use the structure below. You can grab both GUIDs directly from your Lakehouse URL in Fabric. The folder should already exist in your Lakehouse in Fabric.
https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/<folder>/
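For illustration, a filled-in path might look like the following (these GUIDs are placeholders, not real workspace or lakehouse IDs):

https://onelake.dfs.fabric.microsoft.com/11111111-2222-3333-4444-555555555555/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/Files/EventhouseExtracts/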
You can also send the data to ADLS; the command is almost exactly the same. For OneLake, note that Impersonate is the only supported authentication method, but for ADLS you can use any of the methods listed in the docs. Because the commands are essentially the same, I put them in the same code sample below:
//export examples: the OneLake destination is commented out; uncomment the one you need
.export to csv
    //(h@"https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<LakehouseGuid>/Files/EventhouseExtracts/;impersonate") //OneLake
    ("https://<MyStorageAccount>.blob.core.windows.net/containername/"h';impersonate') //ADLS: only the h-prefixed portion is obfuscated
with (
    sizeLimit=10000,
    namePrefix="export",
    includeHeaders="all",
    encoding="UTF8NoBOM"
)
<|
//This is the query you want to export. You can export the whole table,
//rows added since the last export, or only rows received in a recent time window.
bicyclesampleraw
| take 100
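As a concrete example of that last option, here's a sketch of a time-windowed export query. It relies on ingestion_time(), which assumes the table's ingestion-time policy is enabled (it is by default on new tables):

//Swap this in after the <| to export only rows ingested in the last hour
//instead of the whole table.
bicyclesampleraw
| where ingestion_time() > ago(1h)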
Export Data to SQL
Alternatively, instead of sending the data to storage, you may want to export the data to SQL. To export data to a SQL database, use the .export command with the to sql option. This works on any cloud version of SQL; I did not test it against on-prem, but I don’t believe anything would stop you as long as you can authenticate to the instance. At any rate, here is a good code sample you can use to get started.
//export to SQL
.export to sql ['dbo.EventhouseExtracts']
    //Obfuscated (h-prefixed) connection string so it isn't written to logs.
    h@"Server=tcp:MyServer.database.windows.net,1433;Authentication=Active Directory Integrated;Initial Catalog=MyDatabaseName;Connection Timeout=30;"
with (
    //Create dbo.EventhouseExtracts if it doesn't already exist.
    createifnotexists="true"
)
<|
bicyclesampleraw
//Name each projected column explicitly; unnamed expressions get
//auto-generated names like Column1, which would become the SQL column names.
| project BikepointID = tostring(BikepointID),
          Street = tostring(Street),
          Neighbourhood = tostring(Neighbourhood),
          Latitude = tostring(Latitude),
          Longitude = tostring(Longitude),
          No_Bikes = toint(No_Bikes),
          No_Empty_Docks = toint(No_Empty_Docks)
| take 100
The project clause is important here: SQL requires strict type mapping, so cast each column explicitly and give it a name so the target table gets a predictable schema.
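For larger extracts, you may not want to hold the client connection open while the export runs; the same command also works asynchronously. A minimal sketch, assuming the same table and connection string as above:

//Async variant: returns an operation ID immediately instead of blocking.
//Track progress afterwards with: .show operations <the returned ID>
.export async to sql ['dbo.EventhouseExtracts']
    h@"Server=tcp:MyServer.database.windows.net,1433;Authentication=Active Directory Integrated;Initial Catalog=MyDatabaseName;Connection Timeout=30;"
with (
    createifnotexists="true"
)
<|
bicyclesampleraw
| project BikepointID = tostring(BikepointID), No_Bikes = toint(No_Bikes)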
Once you have your .export command defined, you can operationalize it through Fabric data pipelines, scheduled jobs that run the command on a timer, or Kusto's built-in continuous export.
This enables repeatable, automated delivery of curated Eventhouse data without granting consumers direct access to the source Eventhouse.
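If you'd rather have the engine handle the scheduling itself, Kusto's continuous-export feature runs the export on an interval and tracks which records have already been shipped. A minimal sketch, assuming an external table named ExternalBicycleExtracts has already been defined over your OneLake or ADLS path (the export name and interval here are illustrative):

//Runs roughly every hour; each run exports only records that arrived
//since the previous run.
.create-or-alter continuous-export BicycleExport
over (bicyclesampleraw)
to table ExternalBicycleExtracts
with (intervalBetweenRuns=1h)
<| bicyclesampleraw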
Thanks all, happy holidays, and looking forward to our next edition!
Hey Chris, great write-up! I can see how the .export pattern helps in scenarios like the one I’m running into, where teams need Eventhouse tables available in a Lakehouse, but we don’t want to grant access to the primary Eventhouse tables. The challenge is that this pattern requires creating and scheduling a separate export job for every table. For organizations with dozens or hundreds of curated tables, that becomes significant operational overhead. What I’d really love to see is an Eventhouse-native mechanism where Eventhouse can automatically write table output to OneLake (similar to how OneLake Availability works today), without requiring manual .export job management and without widening security boundaries. That would allow Lakehouse teams to pick up the data without needing direct primary Eventhouse permissions. Is anything like that on the roadmap?