✅ *Python for Data Science: Complete Roadmap* 🐍📊

🔰 *Step 1: Learn Python Basics*
- Variables & Data Types (int, float, string, bool)
- Operators (arithmetic, logical, comparison)
- Conditional Statements (`if`, `elif`, `else`)
- Loops (`for`, `while`)
- Functions & Scope
- Lists, Tuples, Dictionaries, Sets
- Input/Output & basic file handling
🛠 Practice: Write small programs (calculator, number guessing, etc.)

🧰 *Step 2: Master Python for Data Handling*
- *Libraries:*
  - `NumPy` → arrays, vectorized operations, broadcasting
  - `Pandas` → DataFrames, Series, data manipulation
- Reading/writing CSV, Excel, JSON
- Data cleaning: handling missing values, duplicates, renaming, filtering
🛠 Practice: Clean sample datasets from Kaggle or UCI

📈 *Step 3: Data Visualization*
- *Matplotlib* → basic plots (line, bar, scatter)
- *Seaborn* → statistical plots (heatmaps, boxplots, violin plots, etc.)
- Customizing plots (titles, legends, colors)
🛠 Practice: Create dashboards or EDA (Exploratory Data Analysis) reports

🧠 *Step 4: Statistics & Probability*
- Mean, median, mode, standard deviation, variance
- Probability basics
- Distributions: normal, binomial, Poisson
- Hypothesis testing (t-test, chi-square)
- Correlation vs. causation
🛠 Use: `scipy.stats`, `statsmodels`, `numpy`

📊 *Step 5: Exploratory Data Analysis (EDA)*
- Analyze data distributions
- Handle outliers
- Examine feature relationships
- Detect trends
🛠 Do EDA on the Titanic, Iris, or a sales dataset

🤖 *Step 6: Introduction to Machine Learning*
- *Using Scikit-learn:*
  - Supervised (linear regression, logistic regression, decision trees)
  - Unsupervised (k-means, PCA)
  - Train/test split
  - Model evaluation (accuracy, precision, recall, F1)
🛠 Practice on classification, regression, and clustering tasks

🧩 *Step 7: Projects & Practice*
- Real-world datasets (Kaggle, Google Dataset Search)
- Ideas:
  - Movie recommendation system
  - House price prediction
  - Sentiment analysis
  - Sales forecasting
- Host on GitHub or build dashboards with *Streamlit*

🧠 Tools to Learn Alongside:
- Jupyter Notebook
- Google Colab
- Git & GitHub
- Virtual environments (`venv`, `conda`)
- APIs (optional, for live data)

🔥 *Stay consistent, build projects, and apply what you learn!*
Data Science Resources: https://lnkd.in/g6Kgerxr
Learn Python: https://lnkd.in/gsMtMnp8
💬 *Tap ❤️ for more!*
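Step 2's cleaning tasks (missing values, duplicates, renaming, filtering) can be sketched in a few lines of pandas; the DataFrame below is invented sample data:

```python
import pandas as pd

# Invented sample data: one missing score and one duplicated row
df = pd.DataFrame({
    "name": ["Ada", "Ben", "Ben", "Cara"],
    "score": [91.0, None, None, 78.0],
})

df = df.drop_duplicates()                              # drop the repeated "Ben" row
df["score"] = df["score"].fillna(df["score"].mean())   # impute missing with the mean
df = df.rename(columns={"score": "exam_score"})        # rename a column
passed = df[df["exam_score"] >= 80]                    # filter rows

print(passed)
```

The same four operations cover most of the day-to-day cleaning you'll do on Kaggle or UCI datasets, just at larger scale.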
Matplotlib Data Visualization Guide

Start learning Python for data science → https://lnkd.in/dw3T2MpH
Learn data visualization with Python → https://lnkd.in/d6Afxpjh
Explore full data science roadmap → https://lnkd.in/dbmuZd97

⬇️ Import Matplotlib
→ import matplotlib.pyplot as plt
Used to create figures, charts, and visualizations.

⬇️ Basic Plot
→ plt.plot(x, y)
→ plt.show()
Creates a line chart.

⬇️ Default X Values
If x values are not provided, Matplotlib automatically uses 0, 1, 2, 3, …
Example:
→ import numpy as np
→ y = np.array([2, 4, 1, 5])
→ plt.plot(y)

⬇️ Format Strings
Control the appearance of a plot: marker, line style, and color.
Example: plt.plot(y, 'o:r') → circle marker + dotted line + red color

⬇️ Change Line Color
Example: plt.plot(y, 'r')
Common color codes: r red, g green, b blue

⬇️ Marker Options
Highlight points on a chart.
Example: plt.plot(y, marker='o')
Change marker size: ms=15
Change marker colors: mec (edge color), mfc (fill color)

⬇️ Titles and Labels
Add chart descriptions:
→ plt.title("Title")
→ plt.xlabel("x axis")
→ plt.ylabel("y axis")

⬇️ Grid Lines
→ plt.grid()
Axis-specific grid: plt.grid(axis='x') or plt.grid(axis='y')

⬇️ Multiple Plots
Create several charts in one figure.
→ plt.subplot(1, 2, 1)
→ plt.subplot(1, 2, 2)
Alternative: fig, ax = plt.subplots()

⬇️ Common Plot Types
Line plot → plt.plot(x, y)
Bar chart → plt.bar(x, y)
Horizontal bar → plt.barh(x, y)
Scatter plot → plt.scatter(x, y)

⬇️ Customize Charts
Change bar colors → plt.bar(x, y, color=['r', 'g', 'b'])
Change scatter size → plt.scatter(x, y, s=200)

⬇️ Legend
Display labels for multiple datasets → plt.legend(['Dataset1', 'Dataset2'])

#Python #Matplotlib #DataVisualization #DataScience #Programming
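The snippets above combine into one runnable script. The data values are invented, and the figure is saved to a file rather than shown so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
import numpy as np

y = np.array([2, 4, 1, 5])  # x defaults to 0, 1, 2, 3

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Left: line plot with format-string styling (circle marker, dotted line, red)
ax1.plot(y, 'o:r', ms=10, mec='k', mfc='w')
ax1.set_title("Line")
ax1.set_xlabel("x axis")
ax1.set_ylabel("y axis")
ax1.grid(axis='y')

# Right: bar chart with per-bar colors and a legend
ax2.bar(range(len(y)), y, color=['r', 'g', 'b', 'r'])
ax2.set_title("Bar")
ax2.legend(['Dataset1'])

fig.savefig("demo.png")  # in an interactive session, plt.show() instead
```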
April 4, 2026. Day 2 of the new month. Still moving. Introduction to Data Visualization with Matplotlib — 4 hours — DataCamp. First course in the Data Visualization in Python track. And I want to talk about visualization honestly. Because there's a conversation here that goes deeper than charts and graphs. I've been visualizing data for a while now. Matplotlib has been in my toolkit. I've used it in projects — plotted distributions, drawn correlation matrices, built figures for EDA reports. So technically, I've been here before. But here's what I've come to understand about revisiting tools you think you already know: familiarity is not the same as fluency. I could produce a chart. I couldn't always produce the right chart, built the right way, communicating the right thing with intention and precision. There's a difference. Matplotlib is one of those libraries that rewards depth. On the surface it looks straightforward — you call a function, a plot appears. But underneath, it has a full object-oriented architecture. Figures. Axes. Artists. A structured way of thinking about every visual element as something you can control deliberately. Most people — myself included at earlier stages — use Matplotlib like a blunt instrument when it's actually a precision tool. This course made me slow down and learn the precision. And as someone who has spent over 10 years in a classroom drawing diagrams on a board — sketching graphs of quadratic functions, plotting velocity-time relationships in Physics, drawing titration curves in Chemistry — I know what it means to make a visual land. I know the difference between a graph that confuses and a graph that clarifies. I know that the choice of scale, label, color, and emphasis changes what a student — or a stakeholder — takes away completely. That teaching instinct is now being formalized into code. And it feels right. 
I'm also stepping into this new track — **Data Visualization in Python** — with a clear sense of where it fits in the bigger picture. Visualization is not decoration. It's not the thing you do after the "real" analysis. It IS part of the analysis. It's how you find patterns before you can name them. It's how you communicate what the data revealed after you've named them. Yesterday I completed the Data Manipulation in Python track — NumPy and pandas, the engine and the structure. Today, Matplotlib — the voice. The way data speaks to people who weren't in the room when it was collected. These things connect. Deliberately. That's the whole point. April is already demanding. But so am I. 📊 #Matplotlib #DataVisualization #Python #DataCamp #DataVisualizationInPython #DataScience #DataAnalysis #ContinuousLearning #3MTT #DeepTechReady #Nigeria #RealTalk #BuildingInPublic #April #TheGrind
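The object-oriented architecture the post describes (a Figure containing Axes containing Artists) looks like this in code, a minimal sketch with invented data:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()  # a Figure and one Axes, created explicitly
line, = ax.plot([0, 1, 2], [1, 4, 2], color="tab:blue")  # the line is an Artist
ax.set_title("Explicit control")
ax.set_xlabel("t")
ax.set_ylabel("v")

# Every visual element is an object you can inspect and adjust deliberately
line.set_linewidth(3)
print(type(fig).__name__, type(line).__name__)
```

This is the "precision tool" mode: instead of the implicit `plt.plot(...)` state machine, every element is addressed by name.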
🚀 Python for Data Science: Beyond the Basics with Seaborn

Data visualization is not just about plotting graphs—it's about extracting meaningful insights from data. While working with Seaborn, I compiled a quick revision of core concepts along with a few advanced additions that are often overlooked.

🔹 Core Seaborn Concepts
- Statistical visualization built on Matplotlib
- High-level API for attractive and informative plots
- Common workflow:
  1. Prepare data
  2. Set aesthetics
  3. Plot
  4. Customize

📊 Key Plot Types
- Categorical: stripplot, swarmplot, barplot, countplot
- Distribution: histplot, kdeplot (distplot is deprecated; use histplot or displot)
- Regression: regplot, lmplot
- Matrix: heatmap
- Axis grids: FacetGrid, PairGrid, JointGrid

🎨 Customization Essentials
- Styles: whitegrid, darkgrid
- Context: talk, paper, notebook
- Color palettes for better storytelling
- Axis control, labels, and layout tuning

💡 Additional Important Concepts (Advanced Layer)

🔸 1. Seaborn vs Matplotlib
- Seaborn = high-level (quick insights)
- Matplotlib = low-level (full control)
- Best practice: use Seaborn, then customize with Matplotlib

🔸 2. Wide-form vs Long-form Data
- Wide-form: columns represent variables
- Long-form: each row = one observation (preferred in Seaborn)

🔸 3. Statistical Estimation
- Seaborn automatically computes means and confidence intervals (CI)
- Example: barplot() shows the mean + CI, not raw values

🔸 4. Faceting (Very Important for Analysis)
- Split data across dimensions using FacetGrid with col, row, and hue
- Enables multi-dimensional analysis

🔸 5. KDE (Kernel Density Estimation)
- Smooth representation of a distribution
- Often better than a histogram for understanding probability density

🔸 6. Pairwise Relationships
- pairplot() for quick EDA
- Reveals correlations, trends, and outliers

🔸 7. Heatmaps for Correlation
- Essential for feature selection in ML
- Works well with correlation matrices

⚠️ Common Mistakes
- Using the wrong plot type for the data
- Ignoring data format (wide vs long)
- Misinterpreting confidence intervals
- Overloading plots with unnecessary styling

📌 Takeaway
Seaborn is not just a plotting library—it's a statistical visualization tool. Mastering it means understanding both visualization and the underlying data distribution. If you're into Data Science or Machine Learning, strong visualization skills will significantly improve your analytical thinking and model interpretation.

#DataScience #Python #Seaborn #MachineLearning #DataVisualization #EDA #AI #Programming #Analytics
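Point 7 in practice: the correlation matrix itself is a one-line pandas computation, and seaborn's heatmap just renders it. A sketch with invented data (the sns.heatmap call is commented out so the snippet runs without a display):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# Invented feature table: one strong relationship, one pure-noise column
df = pd.DataFrame({
    "feature_a": x,
    "feature_b": 2 * x + rng.normal(scale=0.1, size=200),  # tightly tied to feature_a
    "noise": rng.normal(size=200),                          # unrelated
})

corr = df.corr()  # Pearson correlation matrix
print(corr.round(2))

# To visualize:
# import seaborn as sns
# sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
```

For feature selection, near-1 off-diagonal cells flag redundant features; near-0 cells against the target flag weak ones.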
You have been learning Python for months. But can you load a messy CSV and tell me what the business should do next?

If not - you are learning the wrong things.

I have seen candidates spend months learning algorithms and data structures - then freeze when I ask them to load a CSV and answer a basic business question from it. That is not a Python problem. That is a direction problem.

Here is the exact Python roadmap for data analysts, from someone who interviews them:

𝗦𝘁𝗮𝗴𝗲 𝟭 - 𝗧𝗵𝗲 𝗕𝗮𝘀𝗶𝗰𝘀
Variables, data types, loops, conditionals, and functions. Do not spend more than 2 weeks here.
Resource: CS50P by Harvard - free at cs50.harvard.edu/python

𝗦𝘁𝗮𝗴𝗲 𝟮 - 𝗣𝗮𝗻𝗱𝗮𝘀 & 𝗡𝘂𝗺𝗣𝘆
This is where data analyst Python actually starts.
-- Load data with pd.read_csv()
-- Explore with head(), info(), describe()
-- Clean with fillna(), dropna(), drop()
-- Summarize with groupby(), pivot_table(), value_counts()
-- Combine with merge() and join()
If you cannot do this on a messy dataset without Googling - you are not ready for an interview.
Resource: Kaggle Learn - free at kaggle.com/learn

𝗦𝘁𝗮𝗴𝗲 𝟯 - 𝗗𝗮𝘁𝗮 𝗖𝗹𝗲𝗮𝗻𝗶𝗻𝗴 & 𝗘𝗗𝗔
This is what most of a real analyst's job looks like. Handle missing values with context. Remove duplicates. Detect outliers. Convert data types. Explore distributions and trends. Clean data is the foundation of every insight.
Resource: Keith Galli - youtube.com/@KeithGalli

𝗦𝘁𝗮𝗴𝗲 𝟰 - 𝗗𝗮𝘁𝗮 𝗩𝗶𝘀𝘂𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻
-- Matplotlib for basic charts
-- Seaborn for statistical visuals
-- Plotly for dashboards
Can you take messy data and create a visualization that answers a business question - without being told which chart to use? That judgment is the skill.
Resource: freeCodeCamp - https://lnkd.in/gvKw8x2W

𝗦𝘁𝗮𝗴𝗲 𝟱 - 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀
-- rolling() and cumsum() for time series
-- apply() and lambda for custom logic
-- SQL + Python together
-- Automate reports
This is what gets you promoted.

𝗦𝘁𝗮𝗴𝗲 𝟲 - 𝗔𝗜 + 𝗣𝘆𝘁𝗵𝗼𝗻
-- Use Claude to pressure-test your analysis
-- Use it to draft summaries
-- Use GitHub Copilot to speed up code
Python without AI in 2026 is like knowing SQL but refusing to use indexes.

You do not need to know all of Python. You need to know the 20% that does 80% of the work - deeply.

The candidates I hire are not the ones who learned the most. They are the ones who can clean, analyze, visualize, and explain what the business should do.

That is the roadmap. Everything else is noise.

Where are you on this right now?

♻️ Repost to help someone learning Python for data analytics
💭 Tag someone learning Python without direction
📩 Get my full data analytics career guide: https://lnkd.in/gjUqmQ5H
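The Stage 2 operations fit in one short sketch, with an in-memory string standing in for the messy CSV file:

```python
import io
import pandas as pd

# A small messy "CSV" standing in for a real file on disk
raw = io.StringIO(
    "order_id,region,amount\n"
    "1,North,100\n"
    "2,South,\n"       # missing amount on order 2
    "3,North,250\n"
    "4,South,80\n"
)
orders = pd.read_csv(raw)                      # load
print(orders.head())                           # explore
orders["amount"] = orders["amount"].fillna(0)  # clean

# Summarize: revenue per region
revenue = orders.groupby("region")["amount"].sum()

# Combine: attach (invented) region managers from a second table
managers = pd.DataFrame({"region": ["North", "South"],
                         "manager": ["Asha", "Ben"]})
report = orders.merge(managers, on="region", how="left")
print(revenue)
```

The business answer falls straight out of `revenue`: which region is carrying the week, and whose orders are arriving without amounts.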
Putting this roadmap and its attached resources into practice builds the practical skills you need, especially when you amplify them with AI-based capabilities.
🚀 𝗪𝗵𝘆 𝗣𝘆𝘁𝗵𝗼𝗻 𝗶𝘀 𝗮 𝗚𝗮𝗺𝗲-𝗖𝗵𝗮𝗻𝗴𝗲𝗿 𝗶𝗻 𝗧𝗼𝗱𝗮𝘆’𝘀 𝗧𝗲𝗰𝗵 𝗪𝗼𝗿𝗹𝗱

In a world driven by technology and data, Python stands out as one of the most powerful and in-demand programming languages. Its simplicity, flexibility, and wide range of applications make it an essential skill for modern developers.

🔹 🧠 𝗘𝗮𝘀𝘆 𝘁𝗼 𝗟𝗲𝗮𝗿𝗻 & 𝗨𝘀𝗲
Python’s simple and readable syntax makes it ideal for beginners and efficient for professionals.
- Focus more on problem-solving than on complex syntax
- Clean code improves understanding and collaboration
- Easier debugging and long-term maintenance

🔹 🌍 𝗩𝗲𝗿𝘀𝗮𝘁𝗶𝗹𝗲 𝗔𝗰𝗿𝗼𝘀𝘀 𝗗𝗼𝗺𝗮𝗶𝗻𝘀
Python is a multi-purpose language used across industries.
- 💻 Web development
- 📊 Data science & analytics
- 🤖 Artificial intelligence & machine learning
- ⚙️ Automation & scripting
➡️ One language, multiple career paths

🔹 📈 𝗛𝗶𝗴𝗵 𝗜𝗻𝗱𝘂𝘀𝘁𝗿𝘆 𝗗𝗲𝗺𝗮𝗻𝗱
Python is one of the most sought-after skills in today’s job market.
- Used by top global companies
- Opens roles like Developer, Data Analyst, and ML Engineer
- Strong demand across industries

🔹 🧰 𝗣𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗟𝗶𝗯𝗿𝗮𝗿𝗶𝗲𝘀 & 𝗧𝗼𝗼𝗹𝘀
Python’s ecosystem makes complex tasks easier and faster.
- NumPy, Pandas → data handling
- TensorFlow, Scikit-learn → machine learning
- Django, Flask → web development
➡️ Build advanced applications with less effort

🔹 ⚡ 𝗕𝗼𝗼𝘀𝘁𝘀 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆
Python allows developers to achieve more with minimal code.
- Faster development cycles
- Easy testing and debugging
- Ideal for rapid prototyping

🔹 🤝 𝗦𝘁𝗿𝗼𝗻𝗴 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 𝗦𝘂𝗽𝗽𝗼𝗿𝘁
Python has a massive global community that supports learning and growth.
- Thousands of tutorials and resources
- Quick solutions to problems
- Continuous updates and innovation

🔹 💻 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗜𝗻𝗱𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝘁
Python follows a “write once, run anywhere” approach.
- Works on Windows, macOS, and Linux
- Flexible and adaptable across environments

🔹 🔮 𝗙𝘂𝘁𝘂𝗿𝗲-𝗣𝗿𝗼𝗼𝗳 𝗦𝗸𝗶𝗹𝗹
Python is leading the future of technology.
- A core language in AI, data science, and automation
- Growing demand every year
- A reliable long-term career skill

✨ 𝗣𝘆𝘁𝗵𝗼𝗻 is not just a programming language—it’s a gateway to innovation and endless opportunities.

🌟 My Python Journey with Camerin - Indian Institute Of Upskill
Learning Python with Camerinfolks has been a great experience. It helped me understand programming in a simple way. Thankful for the support and guidance. 🙏 Still learning and improving every day 🚀
I take a university class one night a week. Next week's homework had a bonus task: create a Medallion architecture using Spark. I wrote some Python, started a Spark session, and copied a SQL data warehouse to Parquet: from raw files (bronze), through transforms (silver), to some dims and facts (gold).

When I write Python, I like to make it modular and reusable, so I tend to write functions with parameters. But what do you do when you get to the silver layer? Every project will have wide variations, which means writing unique functions for every table transformation.

I am also not a big fan of typing. So I edited my SQL ↔ DAX translator into a SQL → Spark one. It's pretty nifty. Point and click to choose the bronze table and the silver-table goal, then click the conversion button. The AI analyzes the SQL schemas and converts them to Spark syntax for the bronze-to-silver transformation. Then click Copy, and there's portable code to insert into a silver.py. There may be times it has to be tweaked a little; not an issue, the AI leaves notes. The tables have to exist, but don't necessarily need to have data. This could be very useful in migrations.

I don't think it saves much on figuring out the Spark syntax; it just reduces typing lol. I suppose if you don't know Spark it could be helpful. But take a look at one of the lines:

F.col("id").cast("int").alias("ID")

All it really means is: take column "id", make it an integer, and rename it "ID". It's definitely not "English" as we know it, but it's not completely foreign, and this sort of operation is easy to pick up.

I may make a "rolodex" for Python apps, if I can figure it out. I am slowly creating a bunch of useful AI tools. I don't normally have repetitive tasks, like receiving Excels or sending them out, or sending regular emails. No major everyday repetitiveness. So I usually fall back on a small library of Python scripts I made, which I think of as my Swiss Army knife (collection). Because out of several hundred there's usually one that fits whatever I am doing. Well, now they're evolving into AI tools. They should go in a toolbox or rolodex. I know, most people would put them on their taskbar or start menu.
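For readers without a Spark session handy, a rough pandas equivalent of that select expression looks like this (data and column names are invented):

```python
import pandas as pd

# Bronze-layer frame with a string id column (invented sample data)
bronze = pd.DataFrame({"id": ["1", "2", "3"],
                       "city": ["Oslo", "Lima", "Pune"]})

# Spark:  silver = bronze.select(F.col("id").cast("int").alias("ID"), "city")
# pandas: cast the column to int and rename it
silver = bronze.assign(ID=bronze["id"].astype("int64"))[["ID", "city"]]
print(silver.dtypes)
```

Same three ideas in both APIs: pick a column, cast it, rename it. Once that clicks, the Spark syntax stops looking foreign.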
I just finished cleaning data with Python.

You know how a rough, scattered schedule makes it almost impossible to be productive? Like, even if you have 24 hours in a day, a messy plan makes it feel like you have none. That's exactly what dirty data does to a data scientist. You can have a million rows of data, but if it's messy, you're not getting anything meaningful out of it.

Now here's what's funny. We always say we "clean data" before doing any real work. But have you ever stopped to ask: what exactly is dirty data? What are we even cleaning?

Let me break it down:

1. Missing values — like a contact list where half the phone numbers are just... blank. You know someone was there. But who?
2. Duplicate entries — same person registered twice because they forgot they already signed up. Classic.
3. Inconsistent formatting — one row says "Nigeria", another says "NG", another says "nigeria". Same country. Three personalities.
4. Wrong data types — a column that's supposed to hold numbers but someone snuck in an "N/A" and now the whole thing is treated as text.
5. Outliers that don't make sense — like someone entering their age as 700. Sir, are you Methuselah?
6. Extra whitespace — "Lagos " and "Lagos" look the same to the human eye. Python begs to differ.
7. Inconsistent capitalization — "male", "Male", "MALE". All the same. All treated differently.
8. Merged columns that shouldn't be — first name and last name crammed into one cell like they're sharing a studio apartment.
9. Placeholder values — someone typed "N/A", "none", "null", "0", and "–" all to mean the same thing: no data. One dataset, five languages.
10. Date format chaos — 04/17/2026. Or is it 17/04/2026? Or April 17, 2026? Or 2026-04-17? Yes. All of these. In the same column.

Cleaning data isn't glamorous. Nobody's writing songs about it. But it's the difference between insights that mean something and charts that lie.

The more I grow in data science, the more I realize: the real skill isn't just in the models or the visualizations. It's in how well you understand your data before you ever touch it.

Also... it's Friday. I finished a course AND cleaned some data today. I'm going to go ahead and count that as a win. 😄

Happy TGIF, everyone.

#DataScience #Python #DataCleaning #TGIF #DataEngineering #PythonForDataScience #GrowthMindset #Datacamp
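Several of the ten culprits above have one-or-two-line pandas fixes. A sketch on an invented mini-dataset:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": ["Lagos ", "lagos", "LAGOS"],  # extra whitespace + inconsistent caps
    "gender": ["male", "Male", "MALE"],
    "age": ["34", "N/A", "700"],           # placeholder value + impossible outlier
})

# Items 6 & 7: strip whitespace, normalize case
df["city"] = df["city"].str.strip().str.title()
df["gender"] = df["gender"].str.lower()

# Items 9 & 4: unify placeholder values to missing, then coerce to numeric
df["age"] = pd.to_numeric(df["age"].replace(["N/A", "none", "null", "–"], np.nan))

# Item 5: treat impossible ages as missing
df.loc[df["age"] > 120, "age"] = float("nan")

# Item 10: in pandas 2.x, pd.to_datetime(col, format="mixed") handles mixed date formats
print(df)
```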
Day 24 - Automate KPI Reports with Python

I turned 3 hours of weekly KPI reporting into 90 seconds using Python + SQL + AI.

```python
import pandas as pd
import pyodbc
from openai import OpenAI
from datetime import datetime

conn = pyodbc.connect("DSN=your_db;UID=user;PWD=pass")

query = """
SELECT metric_name, current_value, target_value,
       ROUND((current_value / target_value) * 100, 1) AS pct_of_target
FROM kpi_dashboard
WHERE report_week = DATEPART(week, GETDATE())
"""
df = pd.read_sql(query, conn)

df['status'] = df['pct_of_target'].apply(
    lambda x: '🔴 Below' if x < 80 else ('🟡 At Risk' if x < 95 else '🟢 On Track')
)

kpi_table = df[['metric_name', 'current_value', 'target_value', 'status']].to_string(index=False)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a senior business analyst. Write concise, professional executive summaries."
        },
        {
            "role": "user",
            "content": f"""Write a 4-sentence executive KPI summary.
KPI Data:
{kpi_table}
Report Date: {datetime.today().strftime('%B %d, %Y')}"""
        }
    ]
)

print(response.choices[0].message.content)
print(kpi_table)
```

Example output:

This week the team achieved strong results in customer acquisition (103% of target) and delivery time (98%). Revenue per user is at risk at 82% of target; pricing adjustments are recommended before month-end. Churn remains the top concern at 71% of target; immediate customer success outreach is advised.

No more staring at spreadsheets trying to write summaries. Your Monday mornings just got easier.

Which part would you use first:
A) SQL pull
B) Status flagging
C) AI narrative
D) All of it

#Python #KPIReporting #DataAutomation #SQL #OpenAI #AIEngineer #BusinessIntelligence