I recently built a data pipeline that automatically tracks and visualizes real-time weather data. The project follows an ELT (Extract, Load, Transform) workflow to keep data moving quickly and accurately from the source to the final dashboard.

How it works:
• Data Collection: A Python script pulls live weather data from an API every 5 minutes.
• Storage: The raw data is immediately loaded into a PostgreSQL database.
• Cleaning and Sorting: I use dbt to transform raw data into structured tables for analysis:
  • stg_weather_data: The staging table where raw API data is cleaned, validated, and prepared for further processing.
  • weather_report: A refined table designed for real-time monitoring with clear, analysis-ready weather insights.
  • daily_average: An aggregated table that summarizes daily weather metrics to track trends over time.
• Automation: Apache Airflow orchestrates the entire process.
• Live Dashboard: Apache Superset displays results with a 5-minute auto-refresh.
• Setup: Fully containerized using Docker for easy deployment.

Key Features:
• Near-Real-Time: Data updates every 5 minutes.
• Reliable: Prevents duplicates and ensures high-quality data.
• Efficient: ELT enables scalable transformations inside the database.

This project helped me build a complete, automated data system from scratch.

#DataEngineering #ELT #Python #SQL #Airflow #Docker #DataPipeline #WeatherUpdate
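A minimal sketch of the collection-and-load step described above (the API endpoint, response keys, and table name are illustrative assumptions, not the project's actual code):

    # Sketch only: pull one reading and append it to the raw table.
    # URL, params, payload keys, and schema are hypothetical placeholders.
    import requests
    import psycopg2

    API_URL = "https://api.example.com/weather/current"   # placeholder endpoint
    DSN = "dbname=weather user=etl host=localhost"        # placeholder connection

    def extract_and_load() -> None:
        resp = requests.get(API_URL, params={"city": "London"}, timeout=10)
        resp.raise_for_status()
        record = resp.json()

        # A unique key on (city, observed_at) plus ON CONFLICT keeps the
        # 5-minute schedule idempotent, matching the "prevents duplicates" claim.
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO raw_weather (city, observed_at, payload)
                VALUES (%s, %s, %s)
                ON CONFLICT (city, observed_at) DO NOTHING
                """,
                (record["city"], record["observed_at"], resp.text),
            )

    if __name__ == "__main__":
        extract_and_load()

Under this design, Airflow would call the script on a */5 schedule and dbt would build stg_weather_data, weather_report, and daily_average on top of raw_weather.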
-
I recently worked on an ETL pipeline built around UK regional carbon intensity data. The pipeline extracts 24-hour regional data from the Carbon Intensity API, transforms the nested JSON response into a structured tabular format, aggregates the 30-minute interval readings into daily regional summaries, and loads the output into PostgreSQL for analysis.

On the transformation side, the workflow flattens both the carbon intensity values and the generation mix data across fuel sources, then uses Pandas to produce daily region-level metrics. On the database side, the final output is stored in PostgreSQL tables designed for reporting, with date-based partitioning applied to the fact tables to support cleaner storage management and better scalability as the data grows.

The result is a query-ready pipeline that turns raw API data into structured daily carbon intensity and generation mix data that can be used for downstream analysis and reporting.

Tech used: Python, Pandas, PostgreSQL, SQLAlchemy, YAML

#DataEngineering #ETL #Python #PostgreSQL #SQL #DataPipeline #DatabaseDesign #AnalyticsEngineering
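A condensed sketch of the flatten-and-aggregate step (the nesting below approximates the Carbon Intensity API's regional response; treat the exact field names and the target table name as assumptions):

    import pandas as pd
    from sqlalchemy import create_engine

    def flatten(api_json: dict) -> pd.DataFrame:
        # One row per (30-minute window, region), with the generation mix
        # spread across one column per fuel source.
        rows = []
        for period in api_json["data"]:            # 48 half-hour windows
            for region in period["regions"]:
                row = {
                    "region": region["shortname"],
                    "from_ts": period["from"],
                    "intensity_forecast": region["intensity"]["forecast"],
                }
                for fuel in region["generationmix"]:
                    row[f"pct_{fuel['fuel']}"] = fuel["perc"]
                rows.append(row)
        return pd.DataFrame(rows)

    def daily_summary(df: pd.DataFrame) -> pd.DataFrame:
        # Collapse the half-hourly readings into daily regional means
        df["date"] = pd.to_datetime(df["from_ts"]).dt.date
        return df.groupby(["region", "date"], as_index=False).mean(numeric_only=True)

    def load(df: pd.DataFrame) -> None:
        engine = create_engine("postgresql+psycopg2://user:pass@localhost/carbon")  # placeholder
        df.to_sql("daily_regional_intensity", engine, if_exists="append", index=False)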
-
New project unlocked 🔓

I just finished building a Customer Lifetime Value (CLV) Prediction System.

The starting question: how much revenue will each customer generate over their lifetime in our business?

Using the PostgreSQL DVD Rental dataset, I built an end-to-end pipeline:
- Designed an ETL pipeline that processes ~14,000 transactions from 9 tables into a customer-level OLAP star schema
- Engineered RFM-based features (Recency, Frequency, Monetary) for CLV modeling
- Trained and compared multiple ML models (Linear Regression, Random Forest, Gradient Boosting) using a chronological split and TimeSeriesSplit to avoid data leakage
- Deployed everything into an interactive Django web app with a prediction form and business recommendations

The final model (Gradient Boosting) achieved strong performance, with R² close to 0.99 and low prediction error.

One insight that came out of the analysis: customers who rent frequently, even at lower spend per transaction, often generate more lifetime value than occasional high spenders. Frequency matters more than monetary average!

One limitation is that the dataset is static (historical DVD rental data), so the model reflects past behavior patterns rather than real-time customer activity. Additionally, some features like recency and tenure showed very low importance, likely due to the limited time range of the dataset, but they were still kept to ensure the model remains interpretable, aligned with business logic, and more generalizable to real-world scenarios beyond this dataset.

This project helped me understand how data engineering, machine learning, and business thinking come together in a real system, not just a model.

🖇️ GitHub → https://lnkd.in/g4k7iQuy

Would love any feedback or thoughts! 🖖🏻

#DataAnalytics #MachineLearning #Django #Python #PostgreSQL #PortfolioProject
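A sketch of the RFM features plus chronological split described above (column names, the data source, and the cutoff date are assumptions; the real project compares several models and adds more features):

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    def rfm_features(tx: pd.DataFrame, cutoff: pd.Timestamp) -> pd.DataFrame:
        # Features are computed ONLY from transactions before the cutoff,
        # so the model never sees the period it is asked to predict.
        past = tx[tx["payment_date"] < cutoff]
        return past.groupby("customer_id").agg(
            recency_days=("payment_date", lambda s: (cutoff - s.max()).days),
            frequency=("payment_date", "count"),
            monetary_avg=("amount", "mean"),
        )

    tx = pd.read_parquet("transactions.parquet")  # placeholder extract
    cutoff = pd.Timestamp("2005-07-01")           # placeholder split date

    X = rfm_features(tx, cutoff)
    # Target: revenue each customer generates AFTER the cutoff
    future = tx[tx["payment_date"] >= cutoff]
    y = future.groupby("customer_id")["amount"].sum().reindex(X.index, fill_value=0.0)

    model = GradientBoostingRegressor(random_state=42).fit(X, y)

The chronological split is the key leakage guard here: a random split would let post-cutoff behavior leak into the features.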
-
Just wrapped up an end-to-end data engineering project as part of DataTalksClub’s Data Engineering Zoomcamp.

Built a pipeline to process GitHub Events data using Python, BigQuery, dbt, and Airflow. Some key things I worked on:
- Designed an ELT pipeline using GCS → BigQuery → dbt
- Implemented dimensional modelling (fact & dimension tables)
- Orchestrated workflows using Apache Airflow (Dockerised)
- Optimised performance using Parquet and partitioning
- Reduced query costs after initially scanning ~200GB per run

One of the biggest learnings for me was how small design decisions (like partitioning and materialisation strategy) can have a huge impact on performance and cost. Also got to debug real-world issues like Airflow setup problems, inefficient dbt models, and I/O bottlenecks — which made the learning much more practical.

Dashboard: https://lnkd.in/gxgQYVkH
Github Repo: https://lnkd.in/gVtVvWR9

I will continue improving this project by adding new features and optimisations over time. Would love to hear any feedback! 🙂

#DataEngineering #BigQuery #ApacheAirflow
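The partitioning win uses BigQuery's standard mechanism: queries that filter on the partition column prune whole partitions instead of scanning the full table. A sketch of a load step under that design (bucket, dataset, and column names are assumptions):

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        # Day-partition on the event timestamp so downstream queries that
        # filter on created_at scan only the matching partitions.
        time_partitioning=bigquery.TimePartitioning(
            type_=bigquery.TimePartitioningType.DAY,
            field="created_at",
        ),
    )

    job = client.load_table_from_uri(
        "gs://example-bucket/github_events/*.parquet",  # placeholder GCS path
        "my-project.github_events.events_raw",          # placeholder table id
        job_config=job_config,
    )
    job.result()  # block until the load completes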
-
🚀 Built an End-to-End Data Pipeline using API & SQL Server!

Excited to share my recent hands-on project where I built a complete data pipeline from scratch 👇

🔹 What I did:
1. Source Database (SQL Server)
2. Create API using FastAPI
3. Expose endpoint (/data)
4. Call API using Python (requests)
5. Get data in JSON format
6. Connect to Target SQL Server
7. Auto-create table (if not exists)
8. Insert data into target table
9. Verify data in SSMS

🔹 Tech Stack: Python | FastAPI | SQL Server | pyodbc | requests

🔹 Key Learnings:
💡 How APIs act as a bridge between systems
💡 Converting JSON data into structured format
💡 Building real-world ETL pipelines
💡 Automating data movement without manual intervention

This project helped me understand how real-world data engineering pipelines work — from data extraction to loading 🚀

Looking forward to building more such projects and improving my skills!

#DataEngineering #Python #FastAPI #SQLServer #ETL #DataPipeline #LearningInPublic #100DaysOfData #BuildingInPublic
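A compressed sketch of both halves of that flow (connection strings, table, and columns are placeholders, not the original project's values):

    import pyodbc
    import requests
    from fastapi import FastAPI

    app = FastAPI()
    SOURCE = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src;DATABASE=sales;Trusted_Connection=yes"
    TARGET = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=tgt;DATABASE=staging;Trusted_Connection=yes"

    @app.get("/data")
    def get_data():
        # Steps 1-5: read from the source DB and expose rows as JSON
        with pyodbc.connect(SOURCE) as conn:
            cur = conn.cursor()
            cur.execute("SELECT id, name, amount FROM dbo.orders")
            cols = [c[0] for c in cur.description]
            return [dict(zip(cols, row)) for row in cur.fetchall()]

    def sync() -> None:
        # Steps 6-8: call the API, auto-create the target table, insert rows
        rows = requests.get("http://localhost:8000/data", timeout=30).json()
        with pyodbc.connect(TARGET) as conn:
            cur = conn.cursor()
            cur.execute(
                "IF OBJECT_ID('dbo.orders_copy') IS NULL "
                "CREATE TABLE dbo.orders_copy "
                "(id INT, name NVARCHAR(100), amount FLOAT)"
            )
            cur.executemany(
                "INSERT INTO dbo.orders_copy (id, name, amount) VALUES (?, ?, ?)",
                [(r["id"], r["name"], r["amount"]) for r in rows],
            )
            conn.commit()

Step 9 is then just a SELECT in SSMS to confirm the row counts match.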
-
Built Clarity because data teams were drowning in tools.

One tool for SQL. Another for ETL. Another for dashboards. Another for reporting. None of them talk to each other.

So we built one workspace that does it all — and made it AI-native from day one.

→ Full data lineage — trace every metric back to its source
→ Governed pipelines with audit trails and role-based access
→ A semantic layer your whole org trusts as the single source of truth
→ Query in SQL or plain English — every result is reproducible

Full demo coming soon. Built with #Flutter #FastAPI #ClickHouse #Python

#FlutterWeb #DataPlatform #Analytics #BuildInPublic #DataScience #SaaS #DataGovernance #RealTimeAnalytics #DataTransparency #DataQuality #TechStartup #DataOps #DataEngineering #DataDriven
-
Still using cron jobs to run your data pipelines?

Honest question: how do you handle retries, task dependencies, or debugging a failure that happened at 3 AM?

That's exactly where Apache Airflow comes in. Our latest article on Data Engineering Byte breaks down Airflow in the simplest way possible: no jargon overload, no assumptions.

Here's what you'll walk away with:
→ Why cron falls short (dependencies, retries, branching — it can't do any of it well)
→ What a DAG actually is (and why it's called "acyclic")
→ Your first DAG in under 20 lines of Python:

    with DAG(
        dag_id="simple_example",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t1 >> t2

→ What catchup=True vs False really means
→ How tasks talk to each other using XComs (think: passing sticky notes)
→ Full Docker setup to run Airflow 3 locally in minutes

One thing that trips up beginners: Airflow does NOT store data. It only orchestrates. Your DAG tells tasks what to run, in what order, and when — that's it.

Whether you're a data engineer, analyst stepping into pipelines, or just Airflow-curious — this 5-minute read will get you from zero to running your first DAG.

✍️ Written by Shrividya Hegde (Shri): AI Data Engineer, Apache Airflow Champion, and Women in Data Chapter Lead.

🔗 Link in comments 👇

Subscribe to Data Engineering Byte for more hands-on, no-fluff data engineering tutorials every week.

#ApacheAirflow #DataEngineering #ETL #Python #DataPipelines #Airflow #DataEngineeringByte
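For readers who want t1 and t2 to be real tasks, here is one way to flesh the snippet out with Airflow's TaskFlow API (shown with Airflow 2.x-style imports; Airflow 3 also ships these decorators). The return value travels between tasks via XCom, the "sticky note" mechanism the article mentions:

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(
        dag_id="simple_example",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,  # don't backfill runs between start_date and now
    )
    def simple_example():
        @task
        def extract() -> dict:
            return {"rows": 42}  # return value is pushed to XCom

        @task
        def transform(payload: dict) -> None:
            print(f"received {payload['rows']} rows")  # pulled from XCom

        transform(extract())  # wires the t1 >> t2 dependency implicitly

    simple_example()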
-
Excited to join the Data Engineering Byte team as a Content Expert! 🎉

I'll be sharing practical insights on data engineering topics through Substack, starting with a beginner-friendly deep dive into Apache Airflow. If you've been curious about what Airflow is and why it matters, this first article covers the core fundamentals to get you started. More articles are on the way.

Thanks for the opportunity; I can't wait to give back to this community!
-
PySpark vs Spark SQL: I don’t choose based on preference, I choose based on the job.

It’s easy to turn this into a “which is better” debate. In practice, both are useful, just for different reasons.

And one thing is often misunderstood: Spark doesn’t execute “Python” or “SQL” the way people think. It executes a logical plan -> optimized plan -> physical plan. So a lot of the time, the real difference isn’t performance, it’s how clearly you express intent and how maintainable the pipeline is.

When Spark SQL wins
• The work is mostly select, join, filter, aggregate
• Logic needs to be readable by more people (analysts + engineers)
• I want quick iteration and clear intent
• Performance tuning is easier because the query shape is obvious

When PySpark wins
• I need custom logic that’s awkward in SQL
• Complex parsing, nested structures, arrays/maps, JSON-heavy work
• Reusable functions and cleaner code structure (modules, unit tests)
• Integration steps around the transformation (validation, file handling, etc.)

The real trade-off
• SQL usually optimizes for clarity.
• PySpark usually optimizes for flexibility.

The best pattern I’ve seen
• Use SQL for the core transformations (joins/aggregations)
• Use PySpark for the edges (validation, enrichment, complex business rules)
• Keep one “source of truth” so business logic doesn’t get duplicated

Takeaway: Choosing PySpark vs Spark SQL isn’t a style choice. It’s a maintainability and delivery choice.

Drop your go-to rule for choosing between them in the comments.

#PySpark #SparkSQL #DataEngineering #Databricks #BigData #SQL #AnalyticsEngineering #DataPipelines
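The "same plan" point is easy to verify yourself; a quick sketch with illustrative data and column names:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sql-vs-df").getOrCreate()

    df = spark.createDataFrame(
        [("UK", 120.0), ("UK", 80.0), ("FR", 95.0)],
        ["region", "amount"],
    )
    df.createOrReplaceTempView("sales")

    # The same aggregation, expressed two ways
    via_sql = spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
    via_df = df.groupBy("region").agg(F.sum("amount").alias("total"))

    # Catalyst compiles both into the same optimized and physical plan,
    # so the choice really is clarity vs flexibility, not speed.
    via_sql.explain()
    via_df.explain()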
-
Most data analysts on my team spent more time writing SQL than actually analysing data. So I built a fix — without touching our existing Superset setup.

It's called a Text-to-SQL Sidecar: a standalone FastAPI microservice that sits alongside Apache Superset and turns plain English into validated, safe SQL. You ask: "which products had the highest return rate last quarter?" It generates, validates, and executes the SQL — then hands the results back.

A few things I was deliberate about:
→ AST-level SQL validation (not string matching — trivially bypassable)
→ Per-database table allowlists so the LLM can only touch what it's supposed to
→ Schema caching so we're not hammering the DB on every request
→ LLM-agnostic design — swap the endpoint URL, change the model
→ Reasoning traces returned alongside SQL so analysts can actually trust the output

Superset never needs to know it exists. It just receives SQL.

I wrote up the full implementation — architecture, code walkthrough, and the design decisions that make it production-ready. Link in the comments 👇

#DataEngineering #AI #SQL #FastAPI #ApacheSuperset #LLM #Python
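The post doesn't name its parser, but here is one way the AST-level validation plus allowlist check could look, sketched with sqlglot (the allowlist contents and the dialect are assumptions):

    import sqlglot
    from sqlglot import exp

    ALLOWED_TABLES = {"orders", "products", "returns"}  # placeholder per-DB allowlist

    def validate(sql: str) -> str:
        # Parse to an AST; malformed SQL raises here instead of reaching the DB
        tree = sqlglot.parse_one(sql, read="postgres")

        # Reject anything that isn't a plain SELECT (no INSERT/UPDATE/DDL)
        if not isinstance(tree, exp.Select):
            raise ValueError("only SELECT statements are allowed")

        # Walk the tree: every referenced table must be on the allowlist
        for table in tree.find_all(exp.Table):
            if table.name not in ALLOWED_TABLES:
                raise ValueError(f"table not allowlisted: {table.name}")

        return tree.sql(dialect="postgres")  # normalized SQL, safe to hand off

This is what makes it stronger than string matching: comment tricks and creative whitespace disappear at parse time, so only the actual statement shape and table references get judged.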