Day 2 of Learning Python – And I Just Built My First Real Data Audit System 📊🐍

Today I didn't just "learn Python"… I used it to analyze structured company-style audit data and built a Mistake Scoring System that automatically evaluates performance. Honestly, it felt like stepping into real business intelligence work.

💡 What I built today:
Using Pandas, I processed an audit dataset and generated insights like:
📌 Total deals per responsible person
📌 Pipeline distribution per team member
📌 Mistake scoring based on missing actions (follow-ups, updates, documents)
📌 Final performance summary ranking everyone by errors

⚙️ The idea behind the system:
Instead of manually checking performance, I created a logic-based scoring system where:
Missing documents = +1 error
No follow-up = +1 error
No comment update = +1 error
Unresolved status = +3 heavy penalty
This turns raw data into actionable performance insights.

💻 Code I used:

import pandas as pd

file_path = r"insert your Excel file path here"

Note: The r before the file path means it is a raw string, which helps Python correctly read the path without treating backslashes as escape characters. Also, make sure your Excel file is saved in the same folder where your Python script is located, or provide the correct full file path.

df = pd.read_excel(file_path)

# CLEAN DATA
df.columns = df.columns.str.strip()
df = df.fillna("No")

# MISTAKE SCORE SYSTEM
df["Mistake Score"] = 0
df.loc[df["Document/RF Request"] == "No", "Mistake Score"] += 1
df.loc[df["Comment Updates"] == "No", "Mistake Score"] += 1
df.loc[df["Follow up"] == "No", "Mistake Score"] += 1
df.loc[df["Status"].str.lower() == "unresolved", "Mistake Score"] += 3

# ANALYSIS
print(df["Responsible"].value_counts())
print(df.groupby(["Responsible", "Pipeline"]).size())

mistakes = df.groupby("Responsible")["Mistake Score"].sum().sort_values(ascending=False)
print(mistakes)

summary = df.groupby("Responsible").agg(
    Total_Deals=("Responsible", "count"),
    Total_Mistakes=("Mistake Score", "sum")
)
print(summary.sort_values("Total_Mistakes", ascending=False))

🚀 Key takeaway:
Even simple Python + Excel data can be transformed into a decision-making system that highlights performance gaps instantly.

Day 2 of learning — and I'm already seeing how powerful data can be in real business environments. Can't wait to build dashboards and automate even more next 🔥

#Python #DataAnalysis #Pandas #LearningInPublic #DataScience #Automation #BusinessIntelligence #CareerGrowth
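If you want to hand the ranking to someone outside Python, one small optional extension is exporting the summary back to Excel. This is just a sketch: the output filename is an example, and pandas needs openpyxl installed to write .xlsx files.

# Save the ranked summary as a shareable report (example filename)
summary.sort_values("Total_Mistakes", ascending=False).to_excel("performance_report.xlsx")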
My Data Science Journey — Python Tuple, Set, Dictionary & the Collections Library

Today's focus was on Python's core data structures — Tuples, Sets, and Dictionaries — along with the powerful collections module that enhances their functionality for real-world use cases.

𝐖𝐡𝐚𝐭 𝐈 𝐋𝐞𝐚𝐫𝐧𝐞𝐝:

Tuple
– Ordered, immutable, allows duplicates
– Single-element tuples require a trailing comma → ("cat",)
– Supports packing and unpacking → x, y = 10, 30
– Cannot be modified after creation (TypeError by design)
– Faster than lists in certain operations
– Used in scenarios like geographic coordinates and fixed records
– Can be used as dictionary keys (unlike lists)

Set
– Unordered, mutable, stores unique elements only
– No indexing or slicing support
– An empty set must be created using set() ({} creates a dict)
– .remove() raises KeyError if the element is not found
– .discard() removes safely without error
– Supports operations like union, intersection, difference, symmetric_difference
– Methods like issubset(), issuperset(), isdisjoint() help in set comparisons
– frozenset provides an immutable version of a set
– Offers O(1) average time complexity for membership checks

Dictionary
– Key-value pair structure, ordered, mutable, and keys must be unique
– Built on hash tables for fast lookups
– user["key"] → raises KeyError if missing
– user.get("key", default) → safe access with fallback
– Methods: keys(), values(), items() for iteration
– pop(), popitem(), update(), clear(), del for modifications
– Widely used in real-world data like APIs and JSON responses
– Common pattern: list of dictionaries for structured datasets

Collections Library
– namedtuple → tuple with named fields for better readability
– deque → efficient queue with O(1) operations on both ends
– ChainMap → combines multiple dictionaries without merging copies
– OrderedDict → maintains order with additional utilities like move_to_end()
– UserDict, UserList, UserString → useful for customizing built-in behaviors with validation and extensions

Performance Insight (average lookup)
– List → O(n)
– Tuple → O(n)
– Set → O(1)
– Dictionary → O(1)

𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭:
Understanding when to use each data structure — and how collections enhances them — is crucial for writing efficient, scalable, and clean Python code.

Read the full breakdown with examples on Medium 👇
https://lnkd.in/gvv5ZBDM

#DataScienceJourney #Python #Tuple #Set #Dictionary #Collections #Programming #DataStructures
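A quick sketch of the collections tools listed above (the names and sample values here are illustrations, not from the original post):

from collections import namedtuple, deque, ChainMap, OrderedDict

# namedtuple: tuple with named fields
Point = namedtuple("Point", ["x", "y"])
p = Point(10, 30)
print(p.x, p.y)            # 10 30

# deque: O(1) appends and pops on both ends
q = deque([1, 2, 3])
q.appendleft(0)
q.append(4)
print(q)                   # deque([0, 1, 2, 3, 4])

# ChainMap: search several dicts without merging copies
defaults = {"theme": "light", "lang": "en"}
user_prefs = {"theme": "dark"}
settings = ChainMap(user_prefs, defaults)
print(settings["theme"])   # dark (the first match wins)

# OrderedDict: extra ordering utilities like move_to_end()
od = OrderedDict(a=1, b=2, c=3)
od.move_to_end("a")
print(list(od))            # ['b', 'c', 'a']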
🐍 Data Types & Type Casting in Python (Small Concept, Big Impact)

When working with data in Python, one mistake beginners often make is ignoring data types. And trust me, this small thing can break your entire analysis.

When you load a dataset in Python, it doesn't always read your data the way you expect. A column full of numbers might be stored as text. A date column might be treated as a random string. A true/false column might come in as an object. And if you don't fix this early, your entire analysis will give you wrong results.

🔹 So What Are Data Types?
Every value in Python has a type — it tells Python what kind of data this is and what you can do with it.

The most common ones in data analysis:
int → Whole numbers → 25, 100, -5
float → Decimal numbers → 3.14, 99.9, -0.5
str → Text → "John", "Mumbai", "Yes"
bool → True or False → True, False
datetime → Dates & times → 2024-01-15

👉 Think of data types as the language your data speaks. If you misunderstand it, your analysis goes wrong.

🔹 Why Data Types Matter in Data Analysis
Because Python behaves differently based on data types.

Example:
👉 "100" + "20" → "10020" (string concatenation)
👉 100 + 20 → 120 (numeric addition)
Same values. Different result.

🔹 A Simple Real-Life Example
Imagine a salary column in your dataset. You try to calculate the average:

df['salary'].mean()

But Python throws an error. You check the data type and you see that salary is stored as object (string), not a number. Python literally can't do math on it. That's where Type Casting comes in.

🔹 What is Type Casting?
Type casting means converting one data type into another. Your salary column is stored as "50000" (a string). Every calculation you run will give wrong results or fail completely.

After type casting:

# Convert salary column to number
df['salary'] = df['salary'].astype(float)

# Now calculate average salary
df['salary'].mean()  # works perfectly

# Convert joining date to datetime
df['join_date'] = pd.to_datetime(df['join_date'])

# Convert employment status to boolean
df['is_active'] = df['is_active'].astype(bool)

Now Python understands your data — and you can calculate average salaries, find top earners, compare departments, and build models correctly.

🔹 Why This Matters in Real Projects
Wrong data types silently break your analysis.
- Calculations fail on string columns
- Sorting dates goes wrong if stored as text
- Visualizations won't plot numeric data stored as objects
- Machine learning models reject incorrect types completely

Checking and fixing data types is not optional — it is one of the first things a professional analyst does.

🔹 When Should You Always Check Data Types?
✔ Right after loading your dataset
✔ Before doing any salary calculations
✔ During data cleaning

df.dtypes  # check all column types at once

One wrong data type = one wrong insight. And in salary analysis, one wrong insight can mislead an entire business decision.

#DataAnalytics #Python #DataTypes #TypeCasting #pandas
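The post casts with astype(float); when a numeric column might contain stray text, a more forgiving pattern is pd.to_numeric with errors='coerce', which turns unparseable values into NaN instead of raising. A sketch with made-up toy data:

import pandas as pd

df = pd.DataFrame({"salary": ["50000", "62000", "not provided"]})  # toy data

# astype(float) would raise on "not provided"; to_numeric coerces it to NaN
df["salary"] = pd.to_numeric(df["salary"], errors="coerce")
print(df["salary"].mean())  # 56000.0, since mean() skips NaN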
📊 Handling Missing Values in Python — The First Real Data Problem You'll Face

You've loaded your dataset. Everything looks fine… until you notice this:
👉 Some values are missing. Blank cells. NaN values. Incomplete records.

And here's the truth:
👉 Almost every real-world dataset has missing data.

🔹 What Are Missing Values?
Missing values are simply gaps in your dataset — places where data should exist but doesn't.

In Python, they usually appear as:
NaN   # Not a Number — most common in pandas
None  # Python's version of empty

🔹 Why Do Missing Values Matter?
Because they can silently break your analysis.
❌ Wrong averages
❌ Incorrect insights
❌ Errors in calculations
❌ Poor model performance
👉 Ignoring missing data = trusting wrong results

🔹 Simple Real-Life Example
Imagine you're analyzing employee salaries and some entries are missing. If you calculate the average salary now:
👉 Your result will be misleading
But once you handle missing values properly:
👉 Your analysis becomes accurate and reliable

🔹 How to Detect Missing Values
In Python, it's very simple:

df.isnull().sum()

👉 This shows how many values are missing in each column.

🔹 How to Handle Missing Values
There is no "one right way" — it depends on the situation. But commonly, analysts use:

✔ Remove missing data
df.dropna()

✔ Fill with mean (for numerical data)
df['salary'] = df['salary'].fillna(df['salary'].mean())

✔ Fill with mode (for categorical data)
df['city'] = df['city'].fillna(df['city'].mode()[0])

✔ Forward fill (for time-based data)
df.ffill()  # in older pandas: df.fillna(method='ffill'), now deprecated

🔹 One Rule to Always Remember
Less than 5% missing → safely drop the rows
5% to 30% missing → fill with mean, median, or mode
More than 30% missing → investigate — the column may be unreliable

🔹 When Should You Handle Missing Values?
Always:
✔ Right after loading your dataset
✔ Before doing calculations
✔ Before building any model
👉 Cleaning comes before analysis.

🚀 Final Thought
Dirty data is not the problem. Not knowing how to clean it — that is. Every professional dataset has missing values. What separates a good analyst from a great one is knowing exactly how to handle them. 💡

#DataAnalytics #Python #MissingValues #DataCleaning #pandas #DataAnalyst #LearningInPublic #PythonForData #AnalyticsJourney #DataScience
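Putting the detection and fill steps together, a minimal runnable sketch (the DataFrame here is toy data, purely for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "salary": [50000, np.nan, 62000, 58000],
    "city": ["Mumbai", "Delhi", None, "Mumbai"],
})

print(df.isnull().sum())  # one missing value in each column

df["salary"] = df["salary"].fillna(df["salary"].mean())  # numeric: fill with mean
df["city"] = df["city"].fillna(df["city"].mode()[0])     # categorical: fill with mode
print(df)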
Day 12/30 - Nested Data Structures in Python

Today everything clicked. Lists, dicts, tuples - they don't live separately. Real data nests them together.

What is Nesting?
Nesting means placing one data structure inside another. A list can contain dictionaries. A dictionary can contain lists. A dictionary can even contain other dictionaries. This is how Python represents complex, real-world data - the same structure used in JSON APIs, databases, and config files.

Four Common Nesting Patterns
List inside Dict -> a dictionary key holds a list as its value, e.g. a student's list of scores
Dict inside List -> a list contains multiple dictionaries, e.g. a list of student records
Dict inside Dict -> a key holds another dictionary, e.g. a user with a nested address object
List inside List -> a list contains other lists, e.g. rows and columns in a grid or table

How to Access Nested Data
You access nested data by chaining brackets, one for each level you go deeper:

data["student"]["scores"][0]  # open the dict, go to the scores key, grab index 0

Rule: count the levels of nesting, then use that many brackets to reach the value.

Looping Through Nested Structures
When your data is a list of dictionaries, use a for loop to go through each dictionary, then use bracket notation to pull out values. This is the most common real-world pattern - reading records from an API or database. (A short sketch of this pattern follows at the end of this post.)

Code Example 1: List Inside a Dict

student = {
    "name": "Obiageli",
    "scores": [88, 92, 75, 95],
    "passed": True
}

print(student["scores"])      # [88, 92, 75, 95]
print(student["scores"][0])   # 88
print(student["scores"][-1])  # 95

Key Learnings
☑ Nesting = placing one data structure inside another
☑ Access nested data by chaining brackets, one bracket per level
☑ A list of dictionaries is the most common pattern - it's how API and database data looks
☑ Use a for loop to go through a list of dicts and pull values from each record
☑ Nested structures are the foundation of JSON - master this and real-world data won't feel foreign

My Takeaway
Nested data structures are where all the previous days connect. Lists, tuples, sets, dictionaries - they don't live in isolation. Real data combines all of them. Today I started seeing data the way Python sees it.

#30DaysOfPython #Python #LearnToCode #CodingJourney #WomenInTech
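As promised above, a small sketch of the list-of-dictionaries pattern (the records are made up for illustration):

students = [
    {"name": "Obiageli", "score": 88},
    {"name": "Ada", "score": 92},
    {"name": "Chi", "score": 75},
]

# Loop over each record (a dict), then pull values with bracket notation
for record in students:
    print(record["name"], "scored", record["score"])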
Lists are used everywhere: apps, APIs, databases, analytics.

Day 3 — Lists (Python Arrays for Real Data Handling)

1. Concept (Real-World Understanding)
A list is a collection of multiple values stored in a single variable. Think of it like a container that holds multiple items in order.

fruits = ["apple", "banana", "mango"]

Key Properties:
Ordered → items keep their position
Mutable → you can modify them
Allows duplicates
Can store different data types

data = ["Rahul", 25, True, 99.5]

Real-Life Analogy
A list is like a shopping cart: you can add items, remove items, check items, update items.

2. Coding Examples (Real World)

Example 1: Accessing Elements
fruits = ["apple", "banana", "mango"]
print(fruits[0])   # apple
print(fruits[-1])  # mango

Example 2: Modifying a List
fruits = ["apple", "banana", "mango"]
fruits[1] = "orange"
print(fruits)
Output: ['apple', 'orange', 'mango']

Example 3: Adding Items
cart = ["laptop", "mouse"]
cart.append("keyboard")
print(cart)

Example 4: Removing Items
cart = ["laptop", "mouse", "keyboard"]
cart.remove("mouse")
print(cart)

Example 5: Looping Through a List (VERY IMPORTANT)
items = ["pen", "book", "bag"]
for item in items:
    print(item)
This is used in almost every real project.

3. Important List Operations

1. Length
numbers = [10, 20, 30]
print(len(numbers))  # 3

2. Check Item Exists
fruits = ["apple", "banana"]
print("apple" in fruits)  # True

3. Extend List
a = [1, 2]
b = [3, 4]
a.extend(b)
print(a)
Output: [1, 2, 3, 4]

4. Pop (Remove by Index)
nums = [10, 20, 30]
nums.pop(1)
print(nums)  # [10, 30]

4. Practice Problems

Problem 1
Create a list of 5 numbers and:
Print the first element
Print the last element

Problem 2
Given:
numbers = [10, 20, 30, 40]
Add:
50 at the end
5 at the beginning

Problem 3
Remove duplicate values:
[1, 2, 2, 3, 4, 4]

Problem 4
Loop through a list and print only the even numbers

5. Mini Challenge (Real World)
Build a Shopping Cart System (one possible solution is sketched after this post)
cart = []
Operations:
Add items
Remove item
Show cart
Example Output:
Your cart contains: ['laptop', 'mouse']

Bonus Challenge
Calculate total price:
prices = [100, 200, 300]
Output: Total = 600

6. Common Beginner Mistakes

Confusing append vs extend:
a = [1, 2]
a.append([3, 4])
Output: [1, 2, [3, 4]]  (append adds the whole list as a single element)

Removing items while looping:
for n in nums:
    nums.remove(n)  # Wrong
Leads to unexpected behavior (elements get skipped).

7. Takeaway From This Concept
Lists store multiple values in one place
Lists are mutable (can change)
You can:
Add (append)
Remove (remove, pop)
Loop (for)
Lists are used in: APIs, databases, user inputs, data processing

#Day3 #Python
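One possible solution sketch for the mini challenge and the bonus (just one way to do it; the item names and prices are illustrative):

cart = []

cart.append("laptop")    # add items
cart.append("mouse")
cart.append("keyboard")
cart.remove("keyboard")  # remove an item
print("Your cart contains:", cart)  # Your cart contains: ['laptop', 'mouse']

# Bonus: total price
prices = [100, 200, 300]
print("Total =", sum(prices))       # Total = 600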
🚀 Day 7 of My Python Learning Journey | String Methods | Business Analyst Aspirant

Continuing my Python journey to strengthen my skills for a Business Analyst role 📊

Today, I worked on String Methods in Python, which are extremely useful for data cleaning, transformation, and preprocessing — key tasks in real-world analytics.

💻 Topic: String Methods in Python

# Remove spaces
text1 = " hello python learners "
print("Clean text:", text1.strip())

# Upper & lower case
print("Upper:", text1.upper().strip())
print("Lower:", text1.lower().strip())

# Replace text
print("Replace:", text1.replace("python", "SQL").strip())

# Count occurrences
print("Count of 'o':", text1.count("o"))

# Check start
print("Starts with hello:", text1.strip().startswith("hello"))

# Check numeric
mobile = "9876543210"
print("Is numeric:", mobile.isnumeric())

# Split & join
msg = "Welcome to python Course"
words = msg.split()
print("Words list:", words)
joined_text = "_".join(words)
print("Joined text:", joined_text)

# Find position
print("Index of 'p':", msg.find("p"))

# Extract domain
email = "student@example.com"
domain = email[email.find("@") + 1:]
print("Domain:", domain)

# Data cleaning example (price)
price_text = "Price : ₹3500/-"
clean_price = price_text.replace("Price :", "")\
                        .replace("₹", "")\
                        .replace("/-", "")\
                        .strip()
print("Clean price:", clean_price)

💡 Key Learnings:
Cleaned raw text data using strip() and replace()
Transformed text using upper(), lower(), split(), and join()
Extracted useful information (like the email domain)
Practiced real-world data cleaning (price formatting)

📌 These skills are directly applicable in:
✔ Data cleaning
✔ Excel / SQL transformations
✔ Power BI datasets

I'm learning Python through Satish Dhawale sir's course (SkillCourse) and practicing daily 💻
🔥 Next step: applying these concepts to real datasets and analytics projects

Let's connect if you're also learning Python or Data Analytics 🤝

#Python #StringMethods #DataCleaning #BusinessAnalyst #DataAnalytics #LearningJourney #SkillDevelopment #SatishDhawale #SkillCourse #UpGrad
𝗖𝗮𝗻 𝗦𝗤𝗟 𝗱𝗼 𝗳𝗲𝗮𝘁𝘂𝗿𝗲 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀?

We usually do feature analysis in Python, but what if we cannot load millions of rows into Python? Can we do it with SQL?

To figure this out, I took the problem of customer churn and tried to understand why customers are leaving and what we can do about it. For this, I studied the behavior of churned customers across the different groups of each feature. For example, does a high number of support calls lead to churning?

To study customer behavior, I calculated the churn rate across the groups of each feature using AVG() in SQL. I used churn rate because it allows comparison irrespective of group size.

For numerical features like payment delay, I first divided the feature into groups. I identified thresholds where the churn rate changes sharply between two adjacent values, labeled the resulting groups with a CASE conditional statement, and then aggregated with GROUP BY. For categorical features, the churn rate can be calculated directly.

To decide which features are important, I used these criteria:
1. The churn rate difference must be significant for at least one group compared to the others. This suggests that this threshold is the breaking point of customer behavior.
2. The pattern should be stable, to avoid random noise.
3. Group sizes should be comparable.

Example: Issue Level (Support Calls)
+-------------+------------+
| Issue Level | Churn Rate |
+-------------+------------+
| Low         | 0.10       |
| Medium      | 0.25       |
| High        | 0.80       |
+-------------+------------+

The churn rate stays relatively stable across the low and medium levels but increases sharply at the high issue level. Customers wait patiently while their support calls are at the low or medium issue level; once the threshold is crossed, 80% of the customers leave. That means one should respond to support calls before they reach the high issue level; otherwise, the customer will leave.

In this customer churn dataset, the features are: Age, Gender, Tenure, Usage Frequency, Support Calls, Payment Delay, Subscription Type, Contract Length, Total Spend, Last Interaction, and Churn.

For a more detailed analysis, check out the GitHub repo (Notebooks/SQL_Analysis folder): https://lnkd.in/gUx9vgyE

#SQL #FeatureAnalysis #CustomerChurn #DataAnalytics #DataScience #SQLAnalytics #ChurnAnalysis #DataEngineering #BehavioralAnalysis #AnalyticsEngineering #BigData #DataCommunity
Behind the Scenes of the .pkl File: How Python "Freezes" Your Data 🥒📦

If you work with Python for Machine Learning, QSAR, or Data Engineering, you've definitely seen .pkl files. But have you ever wondered what's actually happening under the hood when you save one?

Unlike a CSV or JSON, which only stores raw text and numbers, a Pickle file stores the soul of your Python object.

🧠 How it Works: The Magic of Serialization
The process behind a .pkl file is called Serialization (or "Pickling"):
1. Memory Mapping: When you create a complex model or a chemical database, Python organizes it in your RAM with a sophisticated web of pointers and references.
2. The Byte Stream: The pickle library traverses that complex structure and flattens it into a linear stream of bytes (a sequence of 0s and 1s).
3. Perfect Reconstruction: When you use pickle.load, Python reads that stream and rebuilds the object with the exact same structure, data types, and attributes it had before.

It's like disassembling a LEGO castle, labeling every piece, and perfectly reassembling it in a different room.

📁 What does it save that a CSV can't?
While a text file "forgets" the properties of an object, a .pkl preserves:
- Exact typing: If your data was a 64-bit float or a specific NumPy array type, it stays that way.
- Object relationships: If you have a dictionary pointing to a list of SMILES strings, those internal links remain intact.
- Learned parameters: For Machine Learning, it saves the weights and coefficients your algorithm spent hours (or days) learning.

🛠️ The Syntax: "wb" and "rb"
In your code, you will always see these modes:
'wb' (Write Binary): Necessary because you aren't writing "text," you are writing raw machine data.
'rb' (Read Binary): Necessary to translate those bytes back into a Python object you can interact with.

⚖️ When should you use it?
✅ YES for: Saving trained models, pre-computed molecular fingerprints, or the state of a long-running experiment.
❌ NO for: Public data sharing (use JSON or Parquet for security) or when you need to open the file in another language like R or Julia.

Understanding your file formats is the first step toward building more robust, reproducible research workflows! 🚀

#Python #DataScience #MachineLearning #Pickle #Programming #TechInsights #QSAR #Bioinformatics #CodingTips
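A minimal sketch of the 'wb'/'rb' round trip described above (the object and filename are examples, not from the original post):

import pickle

data = {"smiles": ["CCO", "c1ccccc1"], "weights": [46.07, 78.11]}  # example object

with open("data.pkl", "wb") as f:  # write binary: raw bytes, not text
    pickle.dump(data, f)

with open("data.pkl", "rb") as f:  # read binary: rebuild the object
    restored = pickle.load(f)

print(restored == data)  # True: same structure, types, and values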
📊 Detecting & Treating Outliers in Python — The Data Points That Can Mislead You

You've cleaned missing values. Your dataset looks fine. But there's one more hidden problem most beginners miss: outliers. And sometimes, just one outlier can completely distort your analysis.

🔹 Why Do Outliers Matter?
Because they can quietly break your results:
❌ Skew averages
❌ Mislead insights
❌ Affect visualizations
❌ Reduce model accuracy
👉 One extreme value = one wrong conclusion

What is an Outlier?
An outlier is a data point that is significantly different from the rest of the data. It can be extremely high or extremely low. Either way, it does not represent the typical pattern.

Examples from real data:
An employee with a salary of ₹500 in a company where the average salary is ₹60,000
A customer who ordered 9,000 units when everyone else ordered between 5 and 50
An age value of 150 in a health dataset
These are not just unusual — they are dangerous to your analysis if left untreated.

Step 1 — Detect Outliers Visually
Always start by looking at the data.

import seaborn as sns

# Box plot to spot outliers visually
sns.boxplot(x=df['salary'])

A box plot immediately shows you which values fall far outside the normal range. Any dot beyond the whiskers — that is your outlier.

Step 2 — Detect Outliers Using the IQR Method
The IQR (Interquartile Range) method is the most reliable way to detect outliers mathematically.

Q1 = df['salary'].quantile(0.25)
Q3 = df['salary'].quantile(0.75)
IQR = Q3 - Q1

lower = Q1 - 1.5 * IQR
upper = Q3 + 1.5 * IQR

# Find outliers
outliers = df[(df['salary'] < lower) | (df['salary'] > upper)]
print(outliers)

Anything below the lower limit or above the upper limit is flagged as an outlier.

Step 3 — Treat the Outliers
Now you have three choices depending on your situation.

Remove them — when the outlier is clearly an error.
df = df[(df['salary'] >= lower) & (df['salary'] <= upper)]

Cap them — replace extreme values with the boundary limit.
df['salary'] = df['salary'].clip(lower=lower, upper=upper)

Replace with median — when you want to keep the row but fix the value.
median = df['salary'].median()
df['salary'] = df['salary'].apply(
    lambda x: median if x < lower or x > upper else x
)

How to Decide Which Method to Use
Value is a data entry error → remove it
Value is extreme but possible → cap it
You cannot afford to lose rows → replace with the median

Here is the truth no one tells beginners: outliers are not always mistakes. Sometimes they are the most interesting part of your data — the customer who spends the most, the employee who performs the best, the product that sells far beyond expectations. Your job is not to blindly remove them. Your job is to understand them first — then decide. That is what separates a careful analyst from a careless one. 💡

#DataAnalytics #Python #DataCleaning #Outliers #DataAnalyst #LearningData
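To see the IQR steps end to end, a self-contained sketch with toy salaries (the values are made up so the math is easy to follow):

import pandas as pd

df = pd.DataFrame({"salary": [55000, 58000, 60000, 61000, 63000, 500000]})  # toy data

Q1 = df["salary"].quantile(0.25)
Q3 = df["salary"].quantile(0.75)
IQR = Q3 - Q1
lower, upper = Q1 - 1.5 * IQR, Q3 + 1.5 * IQR

print(df[(df["salary"] < lower) | (df["salary"] > upper)])  # flags the 500000 row
df["salary"] = df["salary"].clip(lower=lower, upper=upper)  # cap instead of drop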
Day 4 of My Data Analyst Journey – Data Cleaning in Python

Today, I practiced data cleaning techniques using Python, focusing on handling real-world messy text data.

Problem Statement:
I had a dataset of customer feedback containing:
• Extra spaces
• Mixed casing (UPPER/lower)
• Punctuation (., !, ?)

Objective: Clean and standardize the feedback text for better analysis.

What I implemented:
Removed punctuation using .replace()
Converted text to lowercase
Removed leading & trailing spaces using .strip()
Handled lists inside a dictionary

Python Code:

feedback_data = {
    'S_No': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'Name': ['Ravi', 'Meera', 'Sam', 'Anu', 'Raj', 'Divya', 'Arjun', 'Kiran', 'Leela', 'Nisha'],
    'Feedback': [
        ' Very GOOD Service!!!',
        'poor support, not happy ',
        'GREAT experience! will come again.',
        'okay okay...',
        ' not BAD',
        'Excellent care, excellent staff!',
        'good food and good ambience!',
        'Poor response and poor handling of issue',
        'Satisfied. But could be better.',
        'Good support... quick service.'
    ],
    'Rating': [5, 2, 5, 3, 2, 5, 4, 1, 3, 4]
}

punctuation = ".,!?"
cleaned_feedbackdata = {}

for key, value in feedback_data.items():
    if isinstance(value, list):
        new_list = []
        for item in value:
            if isinstance(item, str):
                item = item.strip().lower()
                for p in punctuation:
                    item = item.replace(p, "")
            new_list.append(item)
        cleaned_feedbackdata[key] = new_list
    else:
        cleaned_feedbackdata[key] = value

print(cleaned_feedbackdata)

Outcome:
Cleaned and structured feedback data ready for analysis like sentiment detection, keyword extraction, and insights generation.

Key Learning:
Data cleaning is one of the most important steps in data analysis — clean data = better insights!

#Python #DataCleaning #DataAnalytics #LearningJourney #BeginnerToPro #CodingPractice #100DaysOfCode
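Since the same cleaning often happens in pandas, here is a hedged sketch of an equivalent vectorized version (assuming the feedback_data dict from the post is loaded into a DataFrame):

import pandas as pd

df = pd.DataFrame(feedback_data)  # reuse the dict defined above

# Strip, lowercase, and drop the same punctuation in one vectorized chain
df["Feedback"] = (
    df["Feedback"]
    .str.strip()
    .str.lower()
    .str.replace(r"[.,!?]", "", regex=True)
)
print(df[["Feedback", "Rating"]])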