Our database ran out of connections at 3 AM. Every pipeline stopped. Every report failed. My phone was ringing at 3:15 AM.

The cause? I had been leaking database connections for 3 months. Every pipeline run opened a new connection. None of them ever closed.

The fix was 2 lines of Python. I just didn't know they existed. 👇

────────────────

What was happening:

# BEFORE — connection never closes if code crashes
conn = get_db_connection()
cursor = conn.cursor()
cursor.execute("SELECT * FROM orders")
results = cursor.fetchall()
# if ANYTHING crashes above — conn stays open forever
# 100 pipeline runs = 100 open connections
conn.close()  # never reached on error

────────────────

The fix — Python context manager:

from contextlib import contextmanager

@contextmanager
def get_connection(db_config):
    conn = get_db_connection(db_config)
    try:
        yield conn  # your code runs here
    finally:
        conn.close()  # ALWAYS runs — crash or success

# Now use it with the 'with' keyword
with get_connection(config) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM orders")
    results = cursor.fetchall()
# connection closed here — automatically
# even if cursor.execute() crashes halfway

────────────────

Why this works:

The finally block runs no matter what.
Success → closes connection.
Crash → closes connection.
Timeout → closes connection.

The with keyword is Python's way of saying: "Use this resource. I'll handle the cleanup."

────────────────

4 places every data engineer should use this:

→ Database connections (never leave open)
→ File handles (always close after reading)
→ Spark sessions (release cluster resources)
→ Temp directories (auto-cleanup after processing)

────────────────

That 3 AM call cost us 4 hours of downtime. Two lines of Python would have prevented all of it.

Context managers are not advanced Python. They are basic production hygiene.

What's your most painful Python mistake in prod? Drop it below 👇

#Python #DataEngineering #ETL #DataEngineer #PythonProgramming #DataPipeline #BestPractices #SoftwareEngineering #TechTips #OpenToWork #DataCommunity #HiringDataEngineers #100DaysOfPython #Databricks
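Two of the four cases in that list already ship as context managers in the standard library, so you don't even need to write your own. A minimal sketch; the file names are placeholders, and the sketch creates its own input so it runs standalone:

import csv
import shutil
import tempfile

# File handles: open() is already a context manager, so the handle is closed
# even if the parsing raises. "orders.csv" is a placeholder file created here
# so the sketch runs on its own.
with open("orders.csv", "w", newline="") as f:
    csv.writer(f).writerows([["order_id", "amount"], ["1", "9.99"]])

with open("orders.csv", newline="") as f:
    rows = list(csv.reader(f))

# Temp directories: the directory and everything in it is removed when the
# block exits, success or crash.
with tempfile.TemporaryDirectory() as tmp_dir:
    staging_file = shutil.copy("orders.csv", tmp_dir)
    # ... process staging_file here ...
# tmp_dir no longer exists at this point
print(rows)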
Preventing Database Connection Leaks with Python Context Managers
More Relevant Posts
Day 32: File Handling — Making Data Permanent 💾

To work with files, Python needs to know where the file is (The Path) and how you want to use it (The Mode).

1. The Roadmap: Absolute vs. Relative Paths

Before you can open a file, you have to tell Python its address.

Absolute Path: The full address starting from the root of your hard drive.
Windows: C:\Users\Name\Project\data.txt
Mac/Linux: /Users/Name/Project/data.txt

Relative Path: The address relative to where your Python script is currently running.
. (Single Dot): The current folder.
.. (Double Dot): Move one folder up (the parent folder).

💡 The Engineering Lens: Always prefer Relative Paths in your code. If you use an absolute path and send your code to a friend, it will crash because they don't have your exact username or folder structure.

2. File Operations: The Lifecycle

Working with a file follows a strict three-step process: Open → Operate → Close.
open(): Connects your script to the file.
read() / write(): The actual work.
close(): Disconnects the file.

Crucial: If you forget to close a file, it can become "locked" or data might not be saved correctly.

The "Senior" Way: The with Statement

Instead of manually calling .close(), engineers use a Context Manager:

with open("notes.txt", "r") as file:
    content = file.read()
# File is automatically closed here, even if an error occurs!

3. File Modes: How are we opening it?

When you open a file, you must specify your intent. Using the wrong mode can accidentally delete your data!

📌 File Opening Modes
🔹 r → Read 👉 Default mode. Opens file for reading ⚠️ Error if file doesn't exist
🔹 w → Write 👉 Overwrites the entire file 👉 Creates file if it doesn't exist
🔹 a → Append 👉 Adds data to the end of the file ✅ Safe – doesn't delete existing content
🔹 r+ → Read + Write 👉 Opens file for both reading and writing

💡 Choosing the right mode prevents accidental data loss!

4. Reading and Writing Methods

file.read(): Grabs the entire file as one giant string.
file.readline(): Grabs just one line.
file.write("text"): Puts text into the file (no automatic newline).
file.writelines(list): Takes a list of strings and writes them all at once.

#Python #SoftwareEngineering #FileHandling #ProgrammingTips #LearnToCode #TechCommunity #PythonDev #DataStorage #CleanCode
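A small runnable sketch tying the modes and methods above together (the file name and its contents are just placeholders):

# Write, append, then read back. "notes.txt" is a throwaway example file.
lines = ["first line\n", "second line\n"]

with open("notes.txt", "w") as f:      # "w" creates or overwrites
    f.writelines(lines)

with open("notes.txt", "a") as f:      # "a" appends without touching existing data
    f.write("third line\n")            # write() adds no newline on its own

with open("notes.txt", "r") as f:      # "r" is the default mode
    first = f.readline()               # one line, including the trailing "\n"
    rest = f.read()                    # everything that is left, as one string

print(first, rest)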
𝗦𝗽𝗮𝗿𝗸 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝘀 #2: 𝗨𝗗𝗙𝘀 — 𝗧𝗵𝗲 𝗦𝗺𝗮𝗿𝘁 𝗖𝗼𝗱𝗲 𝗧𝗵𝗮𝘁 𝗕𝗿𝗲𝗮𝗸𝘀 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲

I used to think UDFs were the cleanest way to write Spark code. Clean. Reusable. Easy.

Until a discussion with my architect changed my perspective: "This is why your job is slow."

Then I looked under the hood… and everything clicked.

👉 UDFs (User Defined Functions) look powerful, but inside Spark they break optimization.

⚠️ What actually happens when you use a #UDF:
❌ #Spark treats it as a black box
❌ #CatalystOptimizer can't analyze your logic
❌ No predicate pushdown
❌ No #WholeStageCodegen

And it gets worse…
💥 #JVM ↔ #Python serialization overhead
💥 Execution becomes row-by-row
💥 Vectorized (batch) execution is lost

🧠 What's happening internally (the real reason)

Spark doesn't execute your Python code directly. It follows this pipeline:
1️⃣ Build Logical Plan
2️⃣ Optimize using Catalyst
3️⃣ Convert to Physical Plan
4️⃣ Generate JVM bytecode (WholeStageCodegen)
5️⃣ Execute in a distributed manner

✅ With built-in functions:
Spark understands expressions like when, filter, join, agg, so it can:
✔ Apply rule-based + cost-based optimization
✔ Push filters down to the data source
✔ Reorder joins
✔ Eliminate unnecessary columns
✔ Combine multiple operations into a single stage
👉 Result: fewer stages + less I/O + faster execution

❌ With a UDF:
Your logic lives in Python, so to Spark it becomes an opaque expression. Spark doesn't know what the function does. Because of this:
🚫 Catalyst cannot rewrite or optimize it
🚫 Filters cannot be pushed below the UDF
🚫 Column pruning stops at the UDF boundary
🚫 WholeStageCodegen cannot include it

💥 The real bottleneck: crossing the JVM ↔ Python boundary

Spark runs in the JVM, but the UDF runs in Python. So for every row (or batch):
→ Serialize data (JVM → Python)
→ Execute the function
→ Deserialize back (Python → JVM)

This causes:
💥 High CPU overhead
💥 Serialization cost
💥 Loss of #vectorizedexecution
💥 More GC pressure

🔍 Example

❌ Using a UDF:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def categorize(age):
    return "minor" if age < 18 else "adult"

df = df.withColumn("category", udf(categorize, StringType())(df.age))

✅ Using built-in functions:

from pyspark.sql.functions import when

df = df.withColumn(
    "category",
    when(df.age < 18, "minor").otherwise("adult")
)

💡 Same logic. Completely different execution plan.
✔ Built-in → optimized DAG + codegen + vectorized
❌ UDF → isolated, row-based, non-optimizable

🚀 What to do instead:
✔ Prefer #SparkSQL functions (when, expr, concat)
✔ Think in columnar transformations
✔ Use #PandasUDF only when unavoidable

🧠 Spark is not a Python engine. It's a distributed SQL engine with a Python interface.

Happy to share more Databricks tutorials & Spark insights — just DM me

#ApacheSpark #SparkInternals #DataEngineering #Databricks #BigData
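When the logic genuinely cannot be expressed with built-ins, a Pandas UDF at least moves data across the JVM ↔ Python boundary in Arrow batches instead of row by row. A minimal sketch, assuming Spark 3.x with PyArrow installed; df and the age column come from the example above:

import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("string")
def categorize_batched(age: pd.Series) -> pd.Series:
    # Runs on a whole Arrow batch at once instead of one row at a time
    return age.apply(lambda a: "minor" if a < 18 else "adult")

df = df.withColumn("category", categorize_batched(df.age))

Note that Catalyst still treats this as a black box, so the built-in when/otherwise version above remains the better choice wherever it fits.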
SQL or Python for Data Cleaning? Why Not Both?

I see this debate all the time on LinkedIn: which is better for data cleaning, SQL or Python (Pandas)?

The answer is neither. They are both incredibly powerful tools, and the best engineers know how to use the right tool for the job. It's not about being a "SQL person" or a "Python person." It's about being an impact-driven engineer.

Here's my mental framework after 9+ years:

🛠️ The SQL Sweet Spot (Building the Foundation)
SQL is king for initial heavy lifting. When you're dealing with massive datasets, the closer you can do your cleaning to the source (the database), the better.
When to use SQL: Filtering out missing values (WHERE col IS NOT NULL), casting data types, and dealing with duplicates with a simple SELECT DISTINCT.
The Advantage: It's super fast, and you avoid transferring uncleaned, bloated data across the network. Simple, well-designed systems win.

🐍 The Python Sweet Spot (Finishing Touches)
Python (Pandas) shines when you need flexibility and complex logic. Once your data is pre-filtered and at a more manageable size, you can do sophisticated cleaning on your local machine.
When to use Python: Imputing missing values with the mean/median, dealing with tricky datetime formats, complex text string manipulation, and sophisticated outlier detection (like the IQR example in the cheat sheet).
The Advantage: The flexibility is unmatched. You have a full programming language at your fingertips to handle any edge case. Making data usable, not impressive, is the goal.

My advice to new joiners: Don't limit yourself. Learn both. Use SQL to get the data to a "usable" state, and then use Python to give it that final, clean, production-ready polish. The most valuable engineer is the one who can seamlessly move between both worlds.

What's your default tool for data cleaning? Are you a SQL-first or Python-first kind of engineer? Let me know in the comments! 👇

#DataEngineering #CareerAdvice #TechTalk #RealTalk #ExperienceMatters #SQL #Python #Pandas
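A minimal sketch of the Pandas "finishing touches" half (median imputation plus IQR outlier removal); the toy data stands in for rows that SQL already pre-filtered, and the column names are hypothetical:

import pandas as pd

df = pd.DataFrame({
    "amount": [12.0, 15.5, 14.2, 980.0, 13.1],
    "discount": [0.1, None, 0.2, 0.1, None],
})

# Impute the remaining gaps with the median
df["discount"] = df["discount"].fillna(df["discount"].median())

# IQR outlier detection on the amount column
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
df_clean = df[df["amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]
print(df_clean)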
🕶️ Do you want to know what Python really is? (Or how to find the exit from the Excel Matrix)

Remember that scene where Morpheus offers Neo a choice? 🔵🔴

In logistics and supply chain planning, most of us choose the blue pill every single day:
You copy the same data over and over.
You build a VLOOKUP that crashes because you've hit 50,000 rows.
You keep believing that "this is just how it has to be."

But if you're reading this, it means you're looking for the red pill. You want to see how deep the automation rabbit hole goes. 🐇

💊 Where to find the code (and avoid becoming Agent Smith)

People fear that the Matrix (read: Python) requires memorizing thousands of commands. Nonsense! Even "The One" didn't know everything at once—he simply "downloaded" the programs he needed into his head. 💿

Here are your data-loading ports:

1. Libraries (The Kung-Fu Programs): You don't spend 20 years learning to fight. You type import pandas as pd and suddenly: "I know Kung-Fu" (translation: your data sorts, merges, and cleans itself). Libraries are pre-built move sets that someone else has already mastered for you.

2. Stack Overflow (The Oracle): If your code throws an error, don't panic. You type that error into Google and visit the Oracle. You'll always find someone who already fixed it years ago. Copying code isn't a glitch in the Matrix—it's the fastest way to the goal!

3. Documentation (The Source Code): This is the manual for the world. You don't read it like a novel. You dip in only when you need to know how to "bend the spoon" (or how to reformat dates across 100 files at once).

✨ Your mission for today: Stop trying to jump across skyscrapers in one leap. Find one small, boring task that eats up 15 minutes of your day. Search for a Python "spell" to fix it.

Remember: The system relies on your sacrificed time. Python lets you take that time back.

The question is: Which pill are you taking today? 🔵 (Stay in the Excel Matrix) or 🔴 (Start your first script)?

#PythonMatrix #DataNeo #SupplyChainRevolution #AutomationMagic #PandasPower #CareerChoice #LogisticsTech
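One of those "15-minute tasks" from the post, fixing date formats across a folder of files, might look roughly like this; the folder, file pattern, and column name are hypothetical:

import glob
import pandas as pd

for path in glob.glob("exports/*.csv"):
    df = pd.read_csv(path)
    # Parse whatever format came in, then write a consistent ISO date back out
    df["delivery_date"] = pd.to_datetime(df["delivery_date"], dayfirst=True).dt.strftime("%Y-%m-%d")
    df.to_csv(path, index=False)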
🚨 Every data team has that one Python script.

You know the one. Someone wrote it "just for now" two years ago. It's still running in production. No retries. No logging. Hardcoded credentials. And every time it breaks at 3 AM, someone has to SSH into a server and pray.

I just published a new article on what actually separates a script from a pipeline. Spoiler: it's not complexity. It's whether the code was designed to fail gracefully.

In the article, I cover:
⚙️ Why idempotency is the single most important property your pipeline can have (and how to test it in 30 seconds)
🔁 How to handle transient vs permanent errors the right way
🔐 The Twelve-Factor config test: could you open source your codebase right now without leaking credentials?
📊 Why print() is not observability, and what to log instead
🧪 The uncomfortable truth about data testing: only 3% of tests are business logic tests
🚫 The notebook trap and other anti-patterns killing your pipelines in production

If your team is stuck between "it works on my laptop" and "production grade," this one is for you.

Read it here 👉 https://lnkd.in/dwMDTUSD
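A rough illustration of three of those ideas (retrying transient errors with backoff, logging instead of print(), credentials from the environment); the names are hypothetical and the linked article may structure this differently:

import logging
import os
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

DB_PASSWORD = os.environ["DB_PASSWORD"]  # fail fast if config is missing, never hardcode it

def with_retries(func, attempts=3, base_delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except TimeoutError as exc:  # transient: worth retrying
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
        # anything else (bad SQL, missing table) is permanent: let it fail loudly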
𝗕𝗿𝗶𝗱𝗴𝗶𝗻𝗴 𝘁𝗵𝗲 𝗚𝗮𝗽: 𝗦𝗤𝗟 𝘁𝗼 𝗣𝘆𝘁𝗵𝗼𝗻 𝗳𝗼𝗿 𝗗𝗮𝘁𝗮 𝗣𝗿𝗼𝗳𝗲𝘀𝘀𝗶𝗼𝗻𝗮𝗹𝘀 🐍📊

Navigating the world of data often involves working with both SQL and Python. Understanding how to translate common SQL operations into Python can significantly streamline your data analysis and manipulation workflows. This quickstart guide offers a handy reference for common tasks, from filtering and ordering data to handling missing values and merging datasets.

𝗞𝗲𝘆 𝗧𝗿𝗮𝗻𝘀𝗹𝗮𝘁𝗶𝗼𝗻𝘀:
• 𝗙𝗶𝗹𝘁𝗲𝗿𝗶𝗻𝗴: `WHERE column = 'value'` → `df[df['column'] == 'value']`
• 𝗢𝗿𝗱𝗲𝗿𝗶𝗻𝗴: `ORDER BY column ASC` → `df.sort_values(by='column', ascending=True)`
• 𝗥𝗲𝗺𝗼𝘃𝗶𝗻𝗴 𝗗𝘂𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝘀: `SELECT DISTINCT col1, col2` → `df.drop_duplicates(subset=['col1', 'col2'])`
• 𝗙𝗶𝗹𝗹𝗶𝗻𝗴 𝗠𝗶𝘀𝘀𝗶𝗻𝗴 𝗩𝗮𝗹𝘂𝗲𝘀: `COALESCE(col, 'xxx')` → `df['column'].fillna('xxx')`
• 𝗖𝗵𝗮𝗻𝗴𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗧𝘆𝗽𝗲𝘀: `CAST(col AS INTEGER)` → `df['column'].astype(int)`
• 𝗥𝗲𝗻𝗮𝗺𝗶𝗻𝗴 𝗖𝗼𝗹𝘂𝗺𝗻𝘀: `SELECT col AS new_col` → `df.rename(columns={'col': 'new_col'})`
• 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗶𝗼𝗻𝘀: `SUM()`, `AVG()`, `MIN()`, `MAX()`, `COUNT()` → `.sum()`, `.mean()`, `.min()`, `.max()`, `.count()`
• 𝗠𝗲𝗿𝗴𝗶𝗻𝗴 𝗗𝗮𝘁𝗮𝘀𝗲𝘁𝘀: `JOIN` → `pd.merge(table1, table2, on='key')`
• 𝗔𝗽𝗽𝗲𝗻𝗱𝗶𝗻𝗴 𝗗𝗮𝘁𝗮𝘀𝗲𝘁𝘀: `UNION ALL` → `pd.concat([table1, table2])`

Mastering these translations can unlock greater efficiency and flexibility in your data projects. What are your favorite SQL to Python translation tips? Share them in the comments below! 👇

♻️ Repost if you find it helpful

#SQL #Python #DataAnalysis #DataScience #DataEngineering #Programming #Coding #Pandas
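A few of those translations run end to end on a toy DataFrame (the column names and values are illustrative only):

import pandas as pd

df = pd.DataFrame({"col1": ["a", "a", "b"], "col2": [10.0, 10.0, None]})

filtered = df[df["col1"] == "a"]                       # WHERE col1 = 'a'
deduped = df.drop_duplicates(subset=["col1", "col2"])  # SELECT DISTINCT col1, col2
filled = df["col2"].fillna(0)                          # COALESCE(col2, 0)
renamed = df.rename(columns={"col2": "amount"})        # SELECT col2 AS amount
print(filtered, deduped, filled, renamed, sep="\n\n")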
🎲 𝑫𝒂𝒚 𝟔 𝒐𝒇 𝟓𝟎: 𝑵𝒖𝒎𝑷𝒚 & 𝑫𝒂𝒕𝒂 𝑨𝒏𝒂𝒍𝒚𝒔𝒊𝒔 — 𝑻𝒉𝒆 𝑫𝒂𝒕𝒂 𝑺𝒄𝒊𝒆𝒏𝒄𝒆 𝑷𝒉𝒂𝒔𝒆 𝑩𝒆𝒈𝒊𝒏𝒔!

Switched gears today from Django to NumPy. 🤯 I moved from building web applications with Django to analyzing data with NumPy — and honestly, this transition feels like unlocking a completely new dimension of programming.

𝐖𝐡𝐚𝐭 𝐈 𝐁𝐮𝐢𝐥𝐭: 🛠️
A comprehensive Stock Market Analysis System that processes real historical data, calculates key statistics, identifies trading patterns, and generates professional analysis reports. All of it. No complex libraries. Just NumPy doing what it does best.

𝐖𝐡𝐲 𝐍𝐮𝐦𝐏𝐲 𝐂𝐡𝐚𝐧𝐠𝐞𝐝 𝐇𝐨𝐰 𝐈 𝐓𝐡𝐢𝐧𝐤: ⚡
Before NumPy, I was writing loops for everything. Calculating averages. Filtering data. Finding patterns. Lines and lines of code for simple operations. NumPy eliminated all of that.

𝑶𝒏𝒆 𝒇𝒖𝒏𝒄𝒕𝒊𝒐𝒏 𝒄𝒂𝒍𝒍 𝒓𝒆𝒑𝒍𝒂𝒄𝒆𝒔 𝟐𝟎 𝒍𝒊𝒏𝒆𝒔 𝒐𝒇 𝒄𝒐𝒅𝒆. 𝑶𝒏𝒆 𝒍𝒊𝒏𝒆 𝒐𝒇 𝒍𝒐𝒈𝒊𝒄 𝒓𝒆𝒑𝒍𝒂𝒄𝒆𝒔 𝒆𝒏𝒕𝒊𝒓𝒆 𝒍𝒐𝒐𝒑𝒔. 𝑻𝒉𝒂𝒕'𝒔 𝒏𝒐𝒕 𝒋𝒖𝒔𝒕 𝒆𝒇𝒇𝒊𝒄𝒊𝒆𝒏𝒄𝒚 — 𝒕𝒉𝒂𝒕'𝒔 𝒂 𝒇𝒖𝒏𝒅𝒂𝒎𝒆𝒏𝒕𝒂𝒍𝒍𝒚 𝒅𝒊𝒇𝒇𝒆𝒓𝒆𝒏𝒕 𝒘𝒂𝒚 𝒐𝒇 𝒕𝒉𝒊𝒏𝒌𝒊𝒏𝒈 𝒂𝒃𝒐𝒖𝒕 𝒅𝒂𝒕𝒂.

𝐖𝐡𝐚𝐭 𝐈 𝐌𝐚𝐬𝐭𝐞𝐫𝐞𝐝 𝐓𝐨𝐝𝐚𝐲: 💪
📊 𝐒𝐭𝐚𝐭𝐢𝐬𝐭𝐢𝐜𝐚𝐥 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 — Mean, standard deviation, percentiles at scale
🔍 𝑫𝒂𝒕𝒂 𝑭𝒊𝒍𝒕𝒆𝒓𝒊𝒏𝒈 — Conditional selection without a single loop
📈 𝑻𝒓𝒆𝒏𝒅 𝑨𝒏𝒂𝒍𝒚𝒔𝒊𝒔 — Moving averages for pattern recognition
⚡ 𝑽𝒆𝒄𝒕𝒐𝒓𝒊𝒛𝒆𝒅 𝑶𝒑𝒆𝒓𝒂𝒕𝒊𝒐𝒏𝒔 — Processing millions of data points instantly
💾 𝑴𝒆𝒎𝒐𝒓𝒚 𝑬𝒇𝒇𝒊𝒄𝒊𝒆𝒏𝒄𝒚 — Handling big data without performance bottlenecks

𝐓𝐡𝐞 𝐀𝐡𝐚 𝐌𝐨𝐦𝐞𝐧𝐭: 💡
When a calculation that should take 10+ lines of code runs in a single elegant function call — 𝒕𝒉𝒂𝒕'𝒔 𝒘𝒉𝒆𝒏 𝒚𝒐𝒖 𝒖𝒏𝒅𝒆𝒓𝒔𝒕𝒂𝒏𝒅 𝒘𝒉𝒚 𝑵𝒖𝒎𝑷𝒚 𝒅𝒐𝒎𝒊𝒏𝒂𝒕𝒆𝒔 𝒅𝒂𝒕𝒂 𝒔𝒄𝒊𝒆𝒏𝒄𝒆. It's not just about writing less code. It's about thinking at a higher level.

𝑾𝒉𝒚 𝑻𝒉𝒊𝒔 𝑨𝒄𝒕𝒖𝒂𝒍𝒍𝒚 𝑴𝒂𝒕𝒕𝒆𝒓𝒔: 🌍
NumPy isn't just a library. It's the foundation that everything in data science builds on — Pandas, Scikit-Learn, TensorFlow. Master NumPy and you're not just learning a tool. You're building the instincts that make a great data scientist.

Django taught me to build for users. NumPy is teaching me to understand data at scale.

𝟔 𝒐𝒇 𝟓𝟎 𝒄𝒐𝒎𝒑𝒍𝒆𝒕𝒆. 𝑻𝒉𝒆 𝒋𝒐𝒖𝒓𝒏𝒆𝒚 𝒄𝒐𝒏𝒕𝒊𝒏𝒖𝒆𝒔.

#DataScience #NumPy #Python #DataAnalysis #50DayChallenge #LearningInPublic #MachineLearning #PythonDeveloper #TechJourney #BuildInPublic
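A small taste of the operations described above, on synthetic closing prices (the data is made up for the sketch, not the "real historical data" from the post):

import numpy as np

rng = np.random.default_rng(0)
prices = 100 + rng.normal(0, 2, 250).cumsum()   # fake one year of daily closes

mean, std = prices.mean(), prices.std()          # statistical analysis
p95 = np.percentile(prices, 95)
up_days = prices[1:][np.diff(prices) > 0]        # conditional selection, no loop
moving_avg = np.convolve(prices, np.ones(20) / 20, mode="valid")  # 20-day moving average

print(f"mean={mean:.2f} std={std:.2f} p95={p95:.2f} up_days={up_days.size}")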
🚀 SQL vs Python — The Core Skills Every Data Analyst Needs

In the world of data, mastering just one tool is not enough. The real advantage comes when you understand how tools complement each other.

👉 SQL is the foundation for working with structured data
👉 Python (especially with Pandas) enables deeper analysis, automation, and scalability

While SQL is designed for querying and manipulating data directly inside databases, Python extends those capabilities by allowing analysts to build complex logic, perform advanced transformations, and integrate with multiple systems.

🔍 Translating SQL concepts into Python
Understanding how both tools align makes learning faster and more practical:

🔹 Filtering rows
SQL: SELECT * FROM users WHERE city = 'Tokyo';
Python: df[df['city'] == 'Tokyo']

🔹 Counting records
SQL: SELECT COUNT(*) FROM users;
Python: df.shape[0] or df['column'].count()

🔹 Grouping and aggregation
SQL: SELECT city, AVG(age) FROM users GROUP BY city;
Python: df.groupby('city')['age'].mean()

🔹 Sorting results
SQL: ORDER BY age DESC;
Python: df.sort_values('age', ascending=False)

🔹 Joining datasets
SQL: JOIN operations
Python: pd.merge(df1, df2, on='id', how='inner')

🔹 Updating values
SQL: UPDATE users SET age = age + 1;
Python: df['age'] = df['age'] + 1

🔹 Combining datasets
SQL: UNION ALL
Python: pd.concat([df1, df2])

⚙️ Where each tool stands out

✔ SQL excels in:
Extracting data efficiently from large databases
Performing quick aggregations and filtering
Working directly within data warehouses

✔ Python excels in:
Data cleaning and transformation
Advanced analytics and statistical operations
Automation and pipeline building
Integration with machine learning workflows

💡 Key Insight
SQL and Python are not competitors — they are complementary. SQL helps you access and retrieve the right data, while Python helps you process, analyze, and scale that data into meaningful insights.

For anyone working in data, the ability to move seamlessly between SQL queries and Python logic is what turns basic analysis into impactful decision-making.

#DataAnalytics #SQL #Python #Pandas #DataEngineering #Analytics #CareerGrowth
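The grouping-and-aggregation translation from the post, run end to end on a toy users table (the data is made up for illustration):

import pandas as pd

users = pd.DataFrame({
    "city": ["Tokyo", "Tokyo", "Osaka"],
    "age": [31, 25, 40],
})

# SELECT city, AVG(age) FROM users GROUP BY city ORDER BY AVG(age) DESC;
avg_age = (
    users.groupby("city")["age"].mean()
         .sort_values(ascending=False)
         .reset_index(name="avg_age")
)
print(avg_age)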
Excited to share my latest article on modern data processing!

I recently published "Polars: A High-Performance DataFrame Library in Python", where I dive into how Polars is emerging as a powerful alternative to traditional data manipulation libraries.

As datasets continue to grow in size and complexity, performance becomes critical. In this article, I explore how Polars addresses these challenges with a highly efficient architecture built on Apache Arrow, enabling faster computation and reduced memory usage.

Here's what I discuss in the article:
▪️ What Polars is and why it's gaining traction in the data ecosystem
▪️ Its core design principles, including lazy execution, which optimizes queries before execution
▪️ Built-in parallel processing, allowing operations to run significantly faster compared to traditional approaches
▪️ How Polars handles large datasets more efficiently with lower memory overhead
▪️ Practical examples showcasing its performance benefits in real-world data workflows

One of the most interesting aspects I found is how Polars shifts the mindset from step-by-step execution to an optimized query plan, making data pipelines not just faster, but smarter.

If you're working in data science, data engineering, or analytics, and dealing with performance bottlenecks, Polars is definitely worth exploring.

I'd love to hear your thoughts: have you tried Polars yet? How does it compare with your current tools?

#Python #DataScience #BigData #Analytics #Polars #MachineLearning

Read the full article here:
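To make the lazy-execution point concrete, a minimal sketch; the file and column names are placeholders, and it assumes a recent Polars release (where the lazy API uses scan_csv and group_by):

import polars as pl

result = (
    pl.scan_csv("trades.csv")                 # builds a query plan, reads nothing yet
      .filter(pl.col("symbol") == "AAPL")     # filter can be pushed into the scan
      .group_by("trade_date")
      .agg(pl.col("price").mean().alias("avg_price"))
      .collect()                              # the plan is optimized and executed here
)
print(result)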
Open to Data Engineer roles — 4.5 yrs with Python, PySpark, Azure and Snowflake at GfK NIQ. DM me or tag someone hiring!