My Data Science Journey — Python Tuple, Set, Dictionary & the Collections Library

Today’s focus was on Python’s core data structures — Tuples, Sets, and Dictionaries — along with the powerful collections module that enhances their functionality for real-world use cases.

𝐖𝐡𝐚𝐭 𝐈 𝐋𝐞𝐚𝐫𝐧𝐞𝐝:

Tuple
– Ordered, immutable, allows duplicates
– Single-element tuples require a trailing comma → ("cat",)
– Supports packing and unpacking → x, y = 10, 30
– Cannot be modified after creation (raises TypeError by design)
– Faster than lists in certain operations
– Used for fixed records such as geographic coordinates
– Can be used as dictionary keys (unlike lists)

Set
– Unordered, mutable, stores unique elements only
– No indexing or slicing support
– An empty set must be created with set() — {} creates a dict
– .remove() raises KeyError if the element is not found; .discard() removes safely without error
– Supports union, intersection, difference, symmetric_difference
– issubset(), issuperset(), isdisjoint() help in set comparisons
– frozenset provides an immutable version of a set
– Offers O(1) average time complexity for membership checks

Dictionary
– Key-value pair structure: ordered, mutable, and keys must be unique
– Built on hash tables for fast lookups
– user["key"] raises KeyError if the key is missing
– user.get("key", default) gives safe access with a fallback
– keys(), values(), items() for iteration
– pop(), popitem(), update(), clear(), and the del statement for modifications
– Widely used for real-world data such as APIs and JSON responses
– Common pattern: a list of dictionaries for structured datasets

Collections Library
– namedtuple → tuple with named fields for better readability
– deque → double-ended queue with O(1) operations on both ends
– ChainMap → combines multiple dictionaries without merging copies
– OrderedDict → maintains order with additional utilities like move_to_end()
– UserDict, UserList, UserString → useful for customizing built-in behavior with validation and extensions

Performance Insight (membership lookup)
– List → O(n)
– Tuple → O(n)
– Set → O(1) average
– Dictionary → O(1) average

𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭: Understanding when to use each data structure — and how collections enhances them — is crucial for writing efficient, scalable, and clean Python code.

Read the full breakdown with examples on Medium 👇
https://lnkd.in/gvv5ZBDM

#DataScienceJourney #Python #Tuple #Set #Dictionary #Collections #Programming #DataStructures
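A quick runnable sketch of the points above (the values and names are illustrative, not from the article):

```python
from collections import namedtuple, deque

# Tuple: immutable fixed record, usable as a dict key (unlike a list)
point = (28.61, 77.21)               # latitude, longitude
lookup = {point: "New Delhi"}        # tuples can key a dict

# Set: duplicates collapse, O(1) average membership checks
tags = {"python", "data", "python"}
tags.discard("missing")              # safe: no KeyError, unlike .remove()

# Dictionary: .get() gives a fallback instead of a KeyError
user = {"name": "Asha"}
city = user.get("city", "unknown")

# collections: named fields, and O(1) appends/pops on both ends
Point = namedtuple("Point", ["lat", "lon"])
p = Point(28.61, 77.21)
queue = deque([1, 2])
queue.appendleft(0)                  # O(1) at the left end
```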
📊 Handling Missing Values in Python - The First Real Data Problem You’ll Face

You’ve loaded your dataset. Everything looks fine… until you notice this:
👉 Some values are missing. Blank cells. NaN values. Incomplete records.

And here’s the truth:
👉 Almost every real-world dataset has missing data.

🔹 What Are Missing Values?
Missing values are simply gaps in your dataset — places where data should exist but doesn't. In Python, they usually appear as:
NaN  # Not a Number — most common in pandas
None # Python's version of "no value"

🔹 Why Do Missing Values Matter?
Because they can silently break your analysis.
❌ Wrong averages
❌ Incorrect insights
❌ Errors in calculations
❌ Poor model performance
👉 Ignoring missing data = trusting wrong results

🔹 Simple Real-Life Example
Imagine you’re analyzing employee salaries and some entries are missing. If you calculate the average salary now:
👉 Your result will be misleading
But once you handle the missing values properly:
👉 Your analysis becomes accurate and reliable

🔹 How to Detect Missing Values
In Python, it’s very simple:
df.isnull().sum()
👉 This shows how many values are missing in each column

🔹 How to Handle Missing Values
There is no “one right way” — it depends on the situation. But commonly, analysts use:
✔ Remove missing data
df.dropna()
✔ Fill with the mean (for numerical data)
df['salary'] = df['salary'].fillna(df['salary'].mean())
✔ Fill with the mode (for categorical data)
df['city'] = df['city'].fillna(df['city'].mode()[0])
✔ Forward fill (for time-based data)
df = df.ffill()  # fillna(method='ffill') is deprecated in recent pandas

🔹 One Rule to Always Remember
Missing % → What to do
– Less than 5% → safely drop the rows
– 5% to 30% → fill with mean, median, or mode
– More than 30% → investigate — the column may be unreliable

🔹 When Should You Handle Missing Values?
Always:
✔ Right after loading your dataset
✔ Before doing calculations
✔ Before building any model
👉 Cleaning comes before analysis.

🚀 Final Thought
Dirty data is not the problem. Not knowing how to clean it — that is.
Every professional dataset has missing values. What separates a good analyst from a great one is knowing exactly how to handle them. 💡

#DataAnalytics #Python #MissingValues #DataCleaning #pandas #DataAnalyst #LearningInPublic #PythonForData #AnalyticsJourney #DataScience
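The mean-fill idea above can be sketched without pandas: compute the mean of the known values, then substitute it for each gap (the salary figures are made-up):

```python
salaries = [50000, None, 62000, 58000, None]

# Mean of only the values that actually exist
known = [s for s in salaries if s is not None]
mean_salary = sum(known) / len(known)

# Fill each gap with the mean, mirroring fillna(df['salary'].mean())
filled = [s if s is not None else mean_salary for s in salaries]
```

This is exactly what `fillna` does column-wide: the known entries are untouched, and every gap receives the same fallback value.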
Day 12 of My Data Science Journey — Python Lists: Methods, Comprehension & Shallow vs Deep Copy

Today’s focus was on one of the most essential data structures in Python — Lists. From data storage to manipulation, lists are used everywhere in real-world applications and data science workflows.

𝐖𝐡𝐚𝐭 𝐈 𝐋𝐞𝐚𝐫𝐧𝐞𝐝:

List Properties
– Ordered, mutable, allows duplicates, and supports mixed data types

Accessing Elements
– Indexing, negative indexing, slicing, and stride for flexible data access

List Methods
– append(), extend(), insert() for adding elements
– remove(), pop() for deletion
– sort(), reverse() for ordering
– count(), index() for searching and analysis

Shallow vs Deep Copy
– Direct assignment does not create a new copy: both names point to the same list
– copy(), list(), and slicing create shallow copies, which are fine for flat lists
– For nested data a shallow copy still shares the inner lists, so copy.deepcopy() is needed for full independence

List Comprehension
– Concise, efficient code combining loops and conditions in a single readable line

Built-in Functions
– sum(), len(), min(), max() for quick data insights

Additional Useful Tools
– clear(), plus built-ins like sorted(), zip(), filter(), map(), any(), all()

𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭: Understanding how lists work — especially copying and comprehension — is critical for writing efficient and bug-free Python code. Lists are not just a data structure; they are a core tool for solving real-world problems.

Read the full breakdown with examples on Medium 👇
https://lnkd.in/gFp-nHzd

#DataScienceJourney #Python #Lists #Programming
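The assignment-vs-shallow-vs-deep distinction above in a minimal sketch (the nested list is illustrative):

```python
import copy

matrix = [[1, 2], [3, 4]]

alias = matrix                 # assignment: same object, no copy at all
shallow = matrix.copy()        # new outer list, but shared inner lists
deep = copy.deepcopy(matrix)   # fully independent copy

matrix[0][0] = 99              # mutate a nested element

# The alias and the shallow copy both see the change,
# because they share the inner lists; only the deep copy is unaffected.
```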
Day 12/30 - Nested Data Structures in Python

Today everything clicked. Lists, dicts, tuples. They don't live separately. Real data nests them together.

What is Nesting?
Nesting means placing one data structure inside another. A list can contain dictionaries. A dictionary can contain lists. A dictionary can even contain other dictionaries. This is how Python represents complex, real-world data - the same structure used in JSON APIs, databases, and config files.

Four Common Nesting Patterns
- List inside Dict -> a dictionary key holds a list as its value, e.g. a student's list of scores
- Dict inside List -> a list contains multiple dictionaries, e.g. a list of student records
- Dict inside Dict -> a key holds another dictionary, e.g. a user with a nested address object
- List inside List -> a list contains other lists, e.g. rows and columns in a grid or table

How to Access Nested Data
You access nested data by chaining brackets, one for each level you go deeper:
data["student"]["scores"][0]  -> open the outer dict, go to the "scores" key, grab index 0
Rule: count the levels of nesting, then use that many brackets to reach the value.

Looping Through Nested Structures
When your data is a list of dictionaries, use a for loop to go through each dictionary, then use bracket notation to pull out values. This is the most common real-world pattern - reading records from an API or database.
Code Example 1: List Inside a Dict

student = {
    "name": "Obiageli",
    "scores": [88, 92, 75, 95],
    "passed": True
}

print(student["scores"])      # [88, 92, 75, 95]
print(student["scores"][0])   # 88
print(student["scores"][-1])  # 95

Key Learnings
☑ Nesting = placing one data structure inside another
☑ Access nested data by chaining brackets, one bracket per level
☑ A list of dictionaries is the most common pattern - it's how API and database data looks
☑ Use a for loop to go through a list of dicts and pull values from each record
☑ Nested structures are the foundation of JSON - master this and real-world data won't feel foreign

My Takeaway
Nested data structures are where all the previous days connect. Lists, tuples, sets, dictionaries - they don't live in isolation. Real data combines all of them. Today I started seeing data the way Python sees it.

#30DaysOfPython #Python #LearnToCode #CodingJourney #WomenInTech
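The "list of dictionaries" pattern from the key learnings, looped with a for loop (the records are made-up examples):

```python
# A list of dicts: the shape most APIs and databases return
students = [
    {"name": "Obiageli", "scores": [88, 92]},
    {"name": "Amara", "scores": [70, 95]},
]

# Loop through each record, then chain brackets to go deeper
names = []
best_scores = []
for record in students:
    names.append(record["name"])            # one bracket: top level
    best_scores.append(max(record["scores"]))  # bracket + function on the nested list
```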
Day 2 of Learning Python – And I Just Built My First Real Data Audit System 📊🐍

Today I didn’t just “learn Python”… I used it to analyze structured company-style audit data and built a Mistake Scoring System that automatically evaluates performance. And honestly, it felt like stepping into real business intelligence work.

💡 What I built today:
Using Pandas, I processed an audit dataset and generated insights like:
📌 Total deals per responsible person
📌 Pipeline distribution per team member
📌 Mistake scoring based on missing actions (follow-ups, updates, documents)
📌 Final performance summary ranking everyone by errors

⚙️ The idea behind the system:
Instead of manually checking performance, I created a logic-based scoring system where:
- Missing documents = +1 error
- No follow-up = +1 error
- No comment update = +1 error
- Unresolved status = +3 heavy penalty
This turns raw data into actionable performance insights.

💻 Code I used:

import pandas as pd

file_path = r"insert your Excel file path here"

Note: The r before the file path means it is a raw string, which helps Python correctly read the path without treating backslashes as escape characters. Also, make sure your Excel file is saved in the same folder as your Python script, or provide the correct full file path.
df = pd.read_excel(file_path)

# CLEAN DATA
df.columns = df.columns.str.strip()
df = df.fillna("No")

# MISTAKE SCORE SYSTEM
df["Mistake Score"] = 0
df.loc[df["Document/RF Request"] == "No", "Mistake Score"] += 1
df.loc[df["Comment Updates"] == "No", "Mistake Score"] += 1
df.loc[df["Follow up"] == "No", "Mistake Score"] += 1
df.loc[df["Status"].str.lower() == "unresolved", "Mistake Score"] += 3

# ANALYSIS
print(df["Responsible"].value_counts())
print(df.groupby(["Responsible", "Pipeline"]).size())

mistakes = df.groupby("Responsible")["Mistake Score"].sum().sort_values(ascending=False)
print(mistakes)

summary = df.groupby("Responsible").agg(
    Total_Deals=("Responsible", "count"),
    Total_Mistakes=("Mistake Score", "sum")
)
print(summary.sort_values("Total_Mistakes", ascending=False))

🚀 Key takeaway:
Even simple Python + Excel data can be transformed into a decision-making system that highlights performance gaps instantly.

Day 2 of learning — and I’m already seeing how powerful data can be in real business environments. Can’t wait to build dashboards and automate even more next 🔥

#Python #DataAnalysis #Pandas #LearningInPublic #DataScience #Automation #BusinessIntelligence #CareerGrowth
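The same scoring logic, sketched without pandas as a plain loop over dict records so the rule set is easy to see (the field names mirror the post; the two deals are made-up):

```python
deals = [
    {"Responsible": "Ali", "Follow up": "No", "Comment Updates": "Yes", "Status": "Resolved"},
    {"Responsible": "Sara", "Follow up": "Yes", "Comment Updates": "No", "Status": "Unresolved"},
]

scores = {}
for deal in deals:
    score = 0
    if deal["Follow up"] == "No":
        score += 1            # missing follow-up
    if deal["Comment Updates"] == "No":
        score += 1            # missing comment update
    if deal["Status"].lower() == "unresolved":
        score += 3            # heavy penalty for unresolved status
    scores[deal["Responsible"]] = scores.get(deal["Responsible"], 0) + score
```

The pandas version above does the same thing vectorized: each `df.loc[...] += n` line is one of these `if` branches applied to every row at once.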
🐍 Data Types & Type Casting in Python (Small Concept, Big Impact)

When working with data in Python, one mistake beginners often make is ignoring data types. And trust me, this small thing can break your entire analysis.

When you load a dataset in Python, it doesn't always read your data the way you expect. A column full of numbers might be stored as text. A date column might be treated as a random string. A true/false column might come in as an object. And if you don't fix this early, your entire analysis will give you wrong results.

🔹 So What Are Data Types?
Every value in Python has a type - it tells Python what kind of data this is and what you can do with it. The most common ones in data analysis:
- int → whole numbers → 25, 100, -5
- float → decimal numbers → 3.14, 99.9, -0.5
- str → text → "John", "Mumbai", "Yes"
- bool → True or False
- datetime → dates & times → 2024-01-15
👉 Think of data types as the language your data speaks. If you misunderstand it, your analysis goes wrong.

🔹 Why Data Types Matter in Data Analysis
Because Python behaves differently based on data types. Example:
👉 "100" + "20" → "10020" (string concatenation)
👉 100 + 20 → 120 (numeric addition)
Same values. Different result.

🔹 A Simple Real-Life Example
Imagine a salary column in your dataset. You try to calculate the average:
df['salary'].mean()
But Python throws an error. You check the data type and see that salary is stored as object (string), not a number. Python literally can't do math on it. That's where Type Casting comes in.

🔹 What is Type Casting?
Type casting means converting one data type into another. If your salary column is stored as "50000" (a string), every calculation you run will give wrong results or fail completely.

After type casting:

# Convert salary column to number
df['salary'] = df['salary'].astype(float)

# Now calculate average salary
df['salary'].mean()  # works perfectly

# Convert joining date to datetime
df['join_date'] = pd.to_datetime(df['join_date'])

# Convert employment status to boolean
df['is_active'] = df['is_active'].astype(bool)

Now Python understands your data — and you can calculate average salaries, find top earners, compare departments, and build models correctly.

🔹 Why This Matters in Real Projects
Wrong data types silently break your analysis.
- Calculations fail on string columns
- Sorting dates goes wrong if stored as text
- Visualizations won't plot numeric data stored as objects
- Machine learning models reject incorrect types completely
Checking and fixing data types is not optional — it is one of the first things a professional analyst does.

🔹 When Should You Always Check Data Types?
✔ Right after loading your dataset
✔ Before doing any calculations
✔ During data cleaning
df.dtypes  # check all column types at once

One wrong data type = one wrong insight. And in salary analysis, one wrong insight can mislead an entire business decision.

#DataAnalytics #Python #DataTypes #TypeCasting #pandas
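The string-vs-number behavior above in plain Python: same digits, different types, different results (the salary list is illustrative):

```python
# Strings concatenate; numbers add
as_text = "100" + "20"      # string concatenation
as_numbers = 100 + 20       # numeric addition

# Type casting: float() converts text to numbers before doing math,
# just like df['salary'].astype(float) does for a whole column
salaries_raw = ["50000", "62000", "58000"]   # loaded as strings
salaries = [float(s) for s in salaries_raw]
average = sum(salaries) / len(salaries)
```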
🐍 𝐇𝐨𝐰 𝐃𝐨 𝐘𝐨𝐮 𝐌𝐚𝐤𝐞 𝐚 𝐂𝐥𝐚𝐬𝐬 𝐉𝐒𝐎𝐍 𝐒𝐞𝐫𝐢𝐚𝐥𝐢𝐳𝐚𝐛𝐥𝐞 𝐢𝐧 𝐏𝐲𝐭𝐡𝐨𝐧?

If you’ve worked with APIs or data storage in Python, you’ve likely encountered this error:
👉 “𝘖𝘣𝘫𝘦𝘤𝘵 𝘰𝘧 𝘵𝘺𝘱𝘦 𝘟 𝘪𝘴 𝘯𝘰𝘵 𝘑𝘚𝘖𝘕 𝘴𝘦𝘳𝘪𝘢𝘭𝘪𝘻𝘢𝘣𝘭𝘦”
It’s a common challenge — but also an important concept to understand for real-world development. Let’s break it down in a simple and practical way.

⚙️ 𝐖𝐡𝐲 𝐓𝐡𝐢𝐬 𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐎𝐜𝐜𝐮𝐫𝐬
Python’s built-in json module can only serialize basic data types such as:
- dict
- list
- str, int, float, bool
👉 Custom class objects? Not directly supported. That’s why you need to convert your object into a serializable format.

🚀 𝐌𝐞𝐭𝐡𝐨𝐝 1: 𝐂𝐨𝐧𝐯𝐞𝐫𝐭 𝐎𝐛𝐣𝐞𝐜𝐭 𝐭𝐨 𝐃𝐢𝐜𝐭𝐢𝐨𝐧𝐚𝐫𝐲
The simplest approach is to convert your class object into a dictionary.

class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

user = User("John", 22)

import json
json_data = json.dumps(user.__dict__)

✔ Quick and easy
✔ Works well for simple classes

🧠 𝐌𝐞𝐭𝐡𝐨𝐝 2: 𝐂𝐫𝐞𝐚𝐭𝐞 𝐚 𝐂𝐮𝐬𝐭𝐨𝐦 𝐒𝐞𝐫𝐢𝐚𝐥𝐢𝐳𝐞𝐫
You can define a function to handle serialization:

def serialize(obj):
    return obj.__dict__

json_data = json.dumps(user, default=serialize)

✔ Flexible for multiple classes
✔ Cleaner and reusable

🧩 𝐌𝐞𝐭𝐡𝐨𝐝 3: 𝐔𝐬𝐞 𝐚 𝐂𝐮𝐬𝐭𝐨𝐦 𝐉𝐒𝐎𝐍𝐄𝐧𝐜𝐨𝐝𝐞𝐫
For more control, extend Python’s encoder:

import json

class UserEncoder(json.JSONEncoder):
    def default(self, obj):
        return obj.__dict__

json_data = json.dumps(user, cls=UserEncoder)

✔ Ideal for larger applications
✔ Centralized control over serialization

⚡ 𝐌𝐞𝐭𝐡𝐨𝐝 4: 𝐔𝐬𝐞 𝐃𝐚𝐭𝐚𝐜𝐥𝐚𝐬𝐬𝐞𝐬 (𝐌𝐨𝐝𝐞𝐫𝐧 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡)
Python’s dataclasses make serialization cleaner:

from dataclasses import dataclass, asdict
import json

@dataclass
class User:
    name: str
    age: int

user = User("John", 22)
json_data = json.dumps(asdict(user))

✔ Cleaner syntax
✔ Built-in support for conversion

🔥 𝐁𝐨𝐧𝐮𝐬: 𝐓𝐡𝐢𝐫𝐝-𝐏𝐚𝐫𝐭𝐲 𝐋𝐢𝐛𝐫𝐚𝐫𝐢𝐞𝐬
Libraries like pydantic and marshmallow provide:
✔ Validation + serialization
✔ Better structure for APIs
✔ Production-ready solutions

⚖️ 𝐂𝐨𝐦𝐦𝐨𝐧 𝐌𝐢𝐬𝐭𝐚𝐤𝐞
👉 Trying to directly serialize complex objects without conversion.
This leads to:
- Errors
- Broken APIs
- Debugging frustration

💡 𝐁𝐞𝐬𝐭 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞
✔ Always convert objects to dictionaries
✔ Use dataclasses for clean design
✔ Use libraries for scalable applications

🧠 𝐅𝐢𝐧𝐚𝐥 𝐓𝐡𝐨𝐮𝐠𝐡𝐭𝐬
Serialization is not just about converting data — it’s about making your application communicate effectively. Mastering this concept will:
✔ Improve your API development
✔ Make your code more robust
✔ Help you handle real-world data efficiently

#Python #JSON #Serialization #BackendDevelopment #SoftwareEngineering #Programming #Developers #API #DataEngineering #TechTips #LearningToCode #FullStackDeveloper #CareerGrowth #LinkedInTech
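Methods 1 and 4 side by side in one runnable sketch, showing that both produce the same JSON (the User class mirrors the post's example):

```python
import json
from dataclasses import dataclass, asdict

# Method 1: plain class, serialize via __dict__
class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

json_plain = json.dumps(User("John", 22).__dict__)

# Method 4: dataclass, serialize via asdict()
@dataclass
class UserDC:
    name: str
    age: int

json_dc = json.dumps(asdict(UserDC("John", 22)))

# Both round-trip back to the same dictionary
assert json.loads(json_plain) == json.loads(json_dc)
```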
Day 11/30 - Python Dictionaries

Today I learned the most powerful data structure in Python. And honestly it changed how I think about storing data.

What is a Dictionary?
A dictionary is an ordered, mutable collection of key-value pairs defined using curly braces {}. Unlike lists, which use index numbers, dictionaries use keys: meaningful labels to access each value. Think of it like a real dictionary: you look up a word (key) to get its definition (value).

Three core traits:
- Ordered — from Python 3.7+, dictionaries remember insertion order
- Mutable — you can add, update, and remove pairs after creation
- No duplicate keys — if you add the same key twice, the second value overwrites the first

Syntax Breakdown
my_dict = {"key1": value1, "key2": value2}
- "key" -> the label used to look up a value - must be unique and immutable
- value -> the data stored - can be any type: string, int, list, even another dict
- { } -> curly braces wrap the whole dictionary - pairs separated by commas

Accessing Values
- dict.get("key") → returns None safely if the key is missing
- dict.get("key", "default") → returns your fallback value instead of None
Rule: Use dict["key"] when you're sure the key exists. Use dict.get() when you're not — it's always the safer choice.

Adding & Updating Items
- Add new key: dict["new_key"] = value
- Update existing key: dict["key"] = new_value
- Update multiple: dict.update({"key1": val, "key2": val})
- Remove a key: dict.pop("key")

Code Example

student = {
    "name": "Obiageli",
    "course": "Machine learning",
    "year": 2024,
    "gpa": 4.5
}

print(student["name"])     # Obiageli
print(student.get("gpa"))  # 4.5

student["age"] = 22   # add a new key
student["gpa"] = 4.7  # update an existing key

Key Learnings
☑ A dictionary stores data as key-value pairs: keys are labels, values are the data
☑ Use dict.get("key") over dict["key"] when unsure a key exists
☑ Keys must be unique and immutable — values can be any data type
☑ Use .keys(), .values(), .items() to loop through dictionaries effectively
☑ Dictionaries are the foundation of JSON — the format every web API sends data in

Why It Matters
Every time an app stores a user profile, an API sends data, or a program reads a config file, that data is almost always in dictionary format. Mastering dictionaries means you can work with real-world data right now.

My Takeaway
Lists store things in a line; dictionaries store things with meaning. Once I started thinking in key-value pairs, data started making a lot more sense. It's not just storage; it's structured storage.

#30DaysOfPython #Python #LearnToCode #CodingJourney #WomenInTech
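Looping with .items(), as the key learnings suggest (the student dict follows the post's example):

```python
student = {"name": "Obiageli", "course": "Machine learning", "gpa": 4.5}

# .items() yields (key, value) pairs, in insertion order
lines = []
for key, value in student.items():
    lines.append(f"{key}: {value}")

# .get() with a fallback never raises KeyError for a missing key
age = student.get("age", "unknown")
```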
If anyone is interested in developing their skills in object-oriented languages, a quick thought based on my experience that might be helpful. 💬

Here are some tips for developing this skill:

1. Think in terms of behavior, not just data
Python style: Use properties to protect attributes when needed, but don’t write trivial getters/setters.

# Bad (Java-style)
class Person:
    def get_name(self):
        return self._name
    def set_name(self, name):
        self._name = name

# Good Pythonic OOP
class Person:
    def __init__(self, name):
        self.name = name  # plain attribute

    @property
    def formal_name(self):
        return f"Ms./Mr. {self.name}"  # behavior

2. Tell, Don't Ask
Python style: Pass messages; avoid peeking inside other objects.

# Avoid
if cart.items_count() > 0:
    apply_shipping(cart)

# Prefer
cart.checkout()  # cart decides if shipping applies

3. Polymorphism over conditionals
Python example: use subclasses with a common interface.

# Instead of:
def play(sound_file):
    if sound_file.ext == 'mp3':
        play_mp3(sound_file)
    elif sound_file.ext == 'wav':
        play_wav(sound_file)

# Do:
class AudioFile: ...
class MP3File(AudioFile):
    def play(self): ...
class WAVFile(AudioFile):
    def play(self): ...

# Then:
sound_file.play()

Python’s duck typing also allows polymorphism without formal inheritance: just implement the same method names.

4. Study small, well-designed OOP systems in Python
Look at:
- collections.abc (abstract base classes: MutableSequence, Iterable)
- datetime module (e.g., datetime, timedelta, tzinfo)
- pathlib.Path, an excellent example of OOP design with operator overloading (/ for joining paths)

5. Refactor a procedural script into OOP
Take a script that reads a CSV, processes rows, and outputs a report. Create classes like:
- DataReader (reads, yields rows)
- ReportGenerator (processes, formats)
- FileSystemHandler (writes output)

6. SOLID principles in Python
Example of Single Responsibility:

# Bad (reporting + saving)
class Report:
    def generate(self): ...
    def save_to_db(self): ...

# Good
class Report:
    def generate(self): ...

class ReportSaver:
    def save(self, report): ...

Open-Closed via subclassing or using abc.ABC and @abstractmethod.

7. Unit tests drive design
Use unittest or pytest. Write tests early to force clean interfaces:

def test_bank_account_deposit():
    acc = BankAccount()
    acc.deposit(100)
    assert acc.balance == 100

This prevents you from making deposit depend on some hidden global state.

8. Get feedback with Python-specific examples
Post a small class (e.g., a ShoppingCart with add_item, total, apply_discount) to r/learnpython or Stack Overflow. Ask: “How would you reduce coupling here? Should I use composition or inheritance?”
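Tip 3's duck typing in a runnable form: no shared base class, just the same method name (the class and method names are illustrative):

```python
# No inheritance needed: anything with a .play() method works
class MP3File:
    def play(self):
        return "playing mp3"

class WAVFile:
    def play(self):
        return "playing wav"

def play_all(files):
    # Polymorphic dispatch replaces the if/elif chain on file extension
    return [f.play() for f in files]

results = play_all([MP3File(), WAVFile()])
```

Adding a new format means adding a new class; `play_all` never changes, which is the Open-Closed idea from tip 6 in miniature.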
Python Prototypes vs. Production Systems: Lessons in Logic Rigor 🛠️

This week, I stopped trying to write code that "just works" and started writing code that refuses to crash. As an aspiring Data Scientist, I’m learning that stakeholders don’t just care about the output; they care about uptime. If a single "typo" from a user kills your entire analytics pipeline, your system isn't ready for the real world.

Here are the 4 "Industry Veteran" shifts I made to my latest Python project:

1. EAFP over LBYL (Stop "Looking Before You Leap")
In Python, we often use if statements to check every possible error (Look Before You Leap). But a "Senior" approach often favors EAFP (Easier to Ask for Forgiveness than Permission) using try/except blocks.
Why? if statements become "spaghetti" when checking for types, ranges, and existence all at once.
Rigor: A try block handles an "ABC" input in a float field immediately, keeping the logic clean and the performance high.

2. The .get() Method: Killing the KeyError
Directly indexing a dictionary with prices[item] is a ticking time bomb. If the key is missing, the program dies.
The Fix: I’ve switched to .get(item, 0.0). This allows a default-value fallback in a single line, preventing missing keys from breaking my calculations.

3. Preventing the "System Crash"
Stakeholders hate downtime. I implemented a while True loop combined with try/except for all user inputs.
The Goal: The program should never end unless the user explicitly chooses to "Quit." Every "bad" input now triggers a helpful re-prompt instead of a system failure.

4. Precision in Data Type Conversion
Logic errors often hide in the "Conversion Chain." I focused on the transition from String (from input()) to Int (for indexing).
The Off-by-One Risk: Users think in 1-based counting, but Python is 0-based. I’ve made it a rule to always subtract 1 from the integer input immediately to ensure the correct data point is retrieved every time.

The Lesson: Coding is about the architecture of the "Why" just as much as the syntax of the "What."

[https://lnkd.in/gvtiAKUb]

#Python #DataScience #CodingJourney #CleanCode #BuildInPublic #SoftwareEngineering #SeniorDataScientist #TechMentor
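Shifts 1, 2, and 4 combined into a small sketch (the prices and menu are made-up; user input is simulated with a plain string argument instead of input()):

```python
prices = {"latte": 3.5, "mocha": 4.0}
menu = ["latte", "mocha"]

def lookup_choice(raw: str) -> float:
    """EAFP: try the conversion, recover on failure instead of pre-checking."""
    try:
        index = int(raw) - 1        # users count from 1; Python counts from 0
        item = menu[index]
    except (ValueError, IndexError):
        return 0.0                  # bad input falls back instead of crashing
    return prices.get(item, 0.0)    # .get(): a missing key never raises KeyError
```

Wrapped in a `while True` re-prompt loop, this is the "never crash on user input" pattern: `lookup_choice("2")` finds the price, while `"ABC"` or `"99"` safely return the 0.0 fallback.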