AUTO LOADER
Auto Loader incrementally and efficiently processes new data files as they arrive in cloud storage without any additional setup.
Auto Loader provides a Structured Streaming source called cloudFiles. Given an input directory path on the cloud file storage, the cloudFiles source automatically processes new files as they arrive, with the option of also processing existing files in that directory.
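A minimal sketch of the cloudFiles source, assuming a Databricks notebook where spark is predefined; the input path, file format, and schema location below are illustrative placeholders, not from the original text:

```python
# Minimal sketch of the cloudFiles source (Databricks only); the input
# path, format, and schema location are illustrative assumptions.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    # Needed for schema inference when no explicit schema is given.
    .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
    # Also process files already present in the directory (default: true).
    .option("cloudFiles.includeExistingFiles", "true")
    .load("/mnt/landing/events")
)
```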
You can use Auto Loader to process billions of files to migrate or backfill a table.
Auto Loader scales to support near real-time ingestion of millions of files per hour.
How Does Auto Loader Track Ingestion Progress?
- As files are discovered, their metadata is persisted in a scalable key-value store (RocksDB) in the checkpoint location of your Auto Loader pipeline.
- This key-value store ensures that data is processed exactly once.
- In case of failures, Auto Loader can resume from where it left off using the information stored in the checkpoint location, and it continues to provide exactly-once guarantees when writing data into Delta Lake (see the sketch below).
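Continuing the sketch above, the write side is where the checkpoint location comes in; the paths and table name remain illustrative placeholders:

```python
# The checkpoint location is where Auto Loader persists its RocksDB
# file-discovery state, which is what enables exactly-once resume
# after a failure. Paths and table name are placeholders.
(df.writeStream
   .option("checkpointLocation", "/mnt/checkpoints/events")
   .trigger(availableNow=True)  # process all available files, then stop
   .toTable("events_bronze"))
```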
Databricks recommends Auto Loader in Delta Live Tables for incremental data ingestion.
Auto Loader with Delta Live Tables
- We can use Auto Loader in Delta Live Tables pipelines.
- Delta Live Tables extends functionality in Apache Spark Structured Streaming and allows you to write just a few lines of declarative Python or SQL to deploy a production-quality data pipeline.
NOTE: In Delta Live Tables you do not need to provide a schema or checkpoint location, because Delta Live Tables (DLT) automatically manages these settings for your pipelines.
Auto Loader syntax for DLT
In Python
The @dlt.table decorator tells Delta Live Tables to create a table that contains the result of a DataFrame returned by a function.
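A minimal sketch of the Python form, assuming an illustrative table name and a sample dataset path:

```python
import dlt

# Minimal sketch; the table name, source path, and format are
# illustrative assumptions. Note: no schema or checkpoint location is
# set here, since DLT manages both.
@dlt.table
def customers():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .load("/databricks-datasets/retail-org/customers/")
    )
```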
Or in SQL you can write:
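A matching sketch of the SQL form, with the same illustrative table name and path; cloud_files() is the Auto Loader source in DLT SQL:

```sql
-- Minimal sketch; the table name and source path are illustrative
-- placeholders.
CREATE OR REFRESH STREAMING LIVE TABLE customers
AS SELECT * FROM cloud_files(
  "/databricks-datasets/retail-org/customers/",
  "csv"
);
```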
Source: Microsoft Azure Databricks documentation.