New blog post on dynamic vertical scaling in Microsoft Fabric Python notebooks. It’s a nice trick that can be really useful, especially with unpredictable workloads. It isn’t new, but it wasn’t really documented, which is another reason why Fabric pipelines are awesome. Tested it with 158 GB of CSV. https://lnkd.in/gpKysemw #onelake #python #Microsoftfabric #pipeline #polars #duckdb #lakesail
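The dynamic sizing idea could be sketched roughly like this: pick a session size from the input data volume in the pipeline, then pass it to the notebook as a parameter. The function name and vCore tiers below are my own illustration, not the Fabric API, and the thresholds are made up:

```python
# Hypothetical sketch of dynamic vertical scaling: map data volume to a
# notebook session size. Tier thresholds here are illustrative assumptions,
# not Fabric defaults.

def pick_vcores(data_size_gb: float) -> int:
    """Return a vCore tier for a given input size (illustrative thresholds)."""
    if data_size_gb <= 10:
        return 2
    if data_size_gb <= 50:
        return 8
    if data_size_gb <= 200:
        return 16
    return 32

# A pipeline could compute this once and feed it to the notebook activity,
# where the notebook applies it (e.g. via a session-configuration cell).
vcores = pick_vcores(158)  # the 158 GB CSV test from the post
print(vcores)  # 16
```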
Wow. It’s like they have plopped AWS ECS Fargate as a simple config onto a Python notebook 🤯
Thanks for pointing this out; it’s a useful setting that I apply in most of my notebooks. It would be even better to have a CU consumption benchmark next to the Duration, as it would help with making better choices.
Useful trick Mimoune Djouallah
We need something similar for Spark notebooks, so we can decide dynamically in the pipeline (based on the data volume, for example) which cluster size to run the job on. Any experience with this?