Put together a small Python package: duckrun. With it, you can:

1- Connect to a lakehouse (either from your laptop or inside Fabric), and optionally point it at a folder of #SQL / #Python files:

import duckrun
con = duckrun.connect("workspace_name/lakehouse_name.lakehouse", "sql_folder")

2- Define a pipeline:

pipeline = [("download.py",), ("table1.sql", "append"), ("table2.sql", "overwrite")]
con.run(pipeline)

Data will be written as Delta in #onelake.

Alternatively, you can just write:

con.sql("select 42").write.mode("overwrite").saveAsTable("test")

Repo: https://lnkd.in/gmgfE-zf

It's nothing groundbreaking; after all, it is just a wrapper around #DuckDB and #delta_rs. But the main lesson I took away: separating transformation logic (SQL and Python) from the notebook itself makes workflows a lot cleaner and more reusable. Python is great for working with files, but once you have some form of tabular data, SQL is just too good.

Claude is awesome!!! And finally I understand why people like dbt. I get it :)

#MicrosoftFabric #Notebook

👉 Would love to hear feedback, ideas, or suggestions!
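For readers curious about the core idea, here is a minimal sketch of the "folder of SQL files as a pipeline" pattern the post describes. Everything here is hypothetical and not duckrun's actual API: `run_pipeline` is a made-up helper, it handles only the SQL steps (not the `.py` step), and `sqlite3` stands in for DuckDB/delta-rs so the sketch is self-contained. Each file name doubles as the target table name, and the mode decides whether the query result replaces or extends the table.

```python
import sqlite3
from pathlib import Path

def run_pipeline(con, sql_folder, pipeline):
    """Run each (filename, mode) step against the connection.

    Hypothetical sketch: the file's stem is used as the table name,
    the file's contents as the SELECT that feeds it. mode is either
    "overwrite" (rebuild the table) or "append" (insert into it).
    """
    for name, mode in pipeline:
        table = Path(name).stem
        query = (Path(sql_folder) / name).read_text()
        if mode == "overwrite":
            con.execute(f"DROP TABLE IF EXISTS {table}")
            con.execute(f"CREATE TABLE {table} AS {query}")
        else:  # append
            con.execute(f"INSERT INTO {table} {query}")
        con.commit()
```

The point of the pattern is exactly the lesson in the post: the notebook shrinks to a connection plus a list of (file, mode) pairs, while the transformation logic lives in version-controllable SQL files.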
Wait until you find SQLMesh 😉
Excellent work! Can’t wait to try this out.
This reads excellently.
Very cool! And great to see you 10x-ing with Claude as well :) It must be permitted now that you can use it in Excel and Word ;)
Very cool!
Nice package name and logo 😉
The logo is 🔥!
We may have something under the hood that you would be interested to test, Mim. cc Thierry Jean