Tired of using long, complex if-else chains in Python? Stop right now and start leveraging dictionary mapping instead.

______________

For example, suppose a messy data column has values like "mgr", "manager", "Manager", "Sr Manager", "MGR".

Old technique (lengthy to write and difficult to maintain in the long run):

if x == "mgr":
    role = "Manager"
elif x == "Manager":
    role = "Manager"
elif x == "MGR":
    role = "Manager"
...

New technique (dict mapping):

1. First create a mapping:

mapping = {
    "mgr": "Manager",
    "manager": "Manager",
    "sr manager": "Senior Manager"
}

2. Map the correct role:

role = mapping.get(x.lower().strip(), "Unknown")

Done!!

_______________

This small implementation reduces both the effort and the complexity of the code.

#python #coding_ideas #data #dataengineering #datascientist #dataanalyst #python_learning
Replace If-Else Chains with Dictionary Mapping in Python
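The mapping approach can be wrapped in a small helper; a minimal sketch (the `normalize_role` name and the "Unknown" fallback are illustrative, not from the original post):

```python
def normalize_role(raw, mapping, default="Unknown"):
    """Normalize a messy role string via a dictionary lookup."""
    # lower() + strip() collapses case and whitespace variants first.
    return mapping.get(raw.lower().strip(), default)

mapping = {
    "mgr": "Manager",
    "manager": "Manager",
    "sr manager": "Senior Manager",
}

print(normalize_role("  MGR ", mapping))      # Manager
print(normalize_role("Sr Manager", mapping))  # Senior Manager
print(normalize_role("intern", mapping))      # Unknown
```

One dictionary now replaces every branch of the if-else chain, and adding a new spelling is a one-line change.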
More Relevant Posts
I’ve published the second part of my "Mini Query Engine in Python" series. This time it’s all about the data model: how a table actually looks inside the engine and how it lives in memory. I talk about why I picked Apache Arrow as the internal format, how I model schemas and fields, what a column abstraction looks like, and how everything comes together as a DataBatch (the minimal unit of data that will later flow through the plan). Read for free: https://lnkd.in/eicsgcDA #queryengine #python #apachearrow #dataengineering
🛠️ Scraping Data from Wikipedia Using Python

Today, I worked on a simple but powerful task: extracting structured data from Wikipedia using Python.

With the right approach, Wikipedia becomes a rich data source for:
📊 analysis
📈 visualization
🤖 machine learning practice

Using Python libraries like:
Requests – to fetch the webpage
BeautifulSoup – to parse HTML tables
Pandas – to clean and structure the data

I was able to convert raw web content into a clean, analysis-ready dataset.

This is a reminder that data is everywhere — the real skill is knowing how to collect it responsibly and transform it into insight. Web scraping is not about copying data. It’s about automating data collection, ensuring accuracy, and saving time.

If you’re learning data analytics or Python, projects like this sharpen:
✔️ data wrangling skills
✔️ automation thinking
✔️ real-world problem solving

On to the next dataset 🚀

#Python #WebScraping #DataAnalytics #BeautifulSoup #Pandas #DataEngineering #LearningInPublic #TechSkills
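The parse-and-clean step described above can be sketched roughly like this. The HTML snippet is inlined here instead of a live `requests.get(url).text` call to keep the sketch self-contained, and the table contents are illustrative; it assumes BeautifulSoup (`bs4`) is installed:

```python
from bs4 import BeautifulSoup

# In a real run this would be: html = requests.get(url).text
html = """
<table class="wikitable">
  <tr><th>Country</th><th>Population</th></tr>
  <tr><td>India</td><td>1,428,627,663</td></tr>
  <tr><td>China</td><td>1,425,671,352</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", class_="wikitable")

# Header row -> column names; remaining rows -> list of dicts.
headers = [th.get_text(strip=True) for th in table.find_all("th")]
rows = []
for tr in table.find_all("tr")[1:]:
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    rows.append(dict(zip(headers, cells)))

# Cleaning step: strip thousands separators so the column is numeric.
for row in rows:
    row["Population"] = int(row["Population"].replace(",", ""))

print(rows)
```

From here, `pandas.DataFrame(rows)` gives an analysis-ready table.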
Following up on my earlier post about Python graph optimization.

One thing that became very clear after sharing the work publicly is how important evidence-backed engineering is—especially when discussing performance. After publishing the case study, I went back and revalidated every claim against actual execution logs, not assumptions or theoretical estimates. The README was regenerated directly from benchmark output to ensure alignment between documentation and reality.

What the data consistently shows:
Single-source shortest paths: ~3.5× speedup
Bidirectional shortest path queries: ~70× speedup
Connected components: ~1× (near parity, as expected for full graph scans)
Compilation cost: ~50–70 ms, paid once
Correctness: validated against NetworkX for every run

This reinforced an important lesson: optimization is not about rewriting code—it’s about understanding data layout, access patterns, and workload shape.

NetworkX is excellent for flexibility and research. But in read-heavy, static-graph production systems, preprocessing and amortization can fundamentally change performance characteristics—even in pure Python.

I’m continuing to focus on:
Python performance engineering
Algorithmic efficiency
Benchmarking rigor
Production-oriented tradeoffs

If you’re working on latency-sensitive systems, backend services, or algorithm-heavy workloads, I’d be glad to exchange notes.

Code + benchmarks remain available here: https://lnkd.in/ezkRivF4

#Python #PerformanceEngineering #SystemsEngineering #Backend #Optimization #Algorithms
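The preprocessing-and-amortization idea can be illustrated in pure Python: pay a one-time cost to build an adjacency structure, then answer many cheap BFS queries against it. This is a generic sketch of the principle, not code from the benchmarked repository; the graph data and function names are illustrative:

```python
from collections import deque

def build_adjacency(edges):
    """One-time preprocessing: edge list -> adjacency dict."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    return adj

def shortest_path_len(adj, src, dst):
    """BFS over the prebuilt structure; the build cost is amortized
    across every query served from the same adjacency dict."""
    if src == dst:
        return 0
    seen = {src}
    q = deque([(src, 0)])
    while q:
        node, dist = q.popleft()
        for nbr in adj.get(node, ()):
            if nbr == dst:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                q.append((nbr, dist + 1))
    return None  # unreachable

edges = [(1, 2), (2, 3), (3, 4), (1, 5), (5, 4)]
adj = build_adjacency(edges)          # paid once
print(shortest_path_len(adj, 1, 4))   # 2 (via 1 -> 5 -> 4)
```

For a read-heavy, static graph, every query after the first reuses the same structure, which is exactly where the amortization wins come from.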
PYTHON JOURNEY - Day 38 / 50..!!
TOPIC – Python Sets

Today I explored Sets — a unique data collection type in Python that is all about uniqueness and mathematical operations!

1. Creating a Set
Sets use curly braces {} just like dictionaries, but they only contain single values, not pairs.

numbers = {1, 2, 3, 4, 5, 5, 5}
print(numbers)
# Output: {1, 2, 3, 4, 5} (Duplicates are automatically removed!)

2. Unordered & Unindexed
Unlike lists, sets do not have a fixed order. You cannot access items using an index like [0].

fruits = {"Apple", "Banana", "Cherry"}
# print(fruits[0])  # This will cause an ERROR

3. Set Operations (The Power of Math)
Python sets allow you to perform powerful operations like Union and Intersection.

set1 = {1, 2, 3}
set2 = {3, 4, 5}
print(set1.union(set2))         # Output: {1, 2, 3, 4, 5} (Combines both)
print(set1.intersection(set2))  # Output: {3} (Items present in both)

Why Use Sets?
Remove Duplicates: The easiest way to clean a list of repeating items is to convert it to a set.
Membership Testing: Checking if an item exists in a set is much faster than in a list.
Data Comparison: Perfect for finding what two groups of data have in common (or what makes them different).

Mini Task
Write a program that:
1. Creates a list with duplicate numbers: [1, 2, 2, 3, 4, 4, 5].
2. Converts that list into a set to automatically remove the duplicates.
3. Creates a second set {4, 5, 6, 7} and prints the intersection between the two sets.

#Python #PythonLearning #50DaysOfPython #DailyCoding #LearnPython #CodingJourney #PythonForBeginners #LinkedInLearning #DeveloperCommunity
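One possible solution to the mini task above (the variable names are my own):

```python
numbers = [1, 2, 2, 3, 4, 4, 5]

# Converting to a set removes the duplicates automatically.
unique_numbers = set(numbers)
second_set = {4, 5, 6, 7}

print(unique_numbers)                           # {1, 2, 3, 4, 5}
print(unique_numbers.intersection(second_set))  # {4, 5}
```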
🚀 Day-9 — Sets in Python

Sets are a powerful built-in data structure in Python used to store unique elements. They are especially useful when duplicate values must be automatically removed.

🔹 What is a Set?
A set is a collection of items that is:
✔ Unordered
✔ Unindexed
✔ Mutable (can be changed)
✔ Stores only unique values

Sets are defined using curly braces { }.

📝 Example:
numbers = {1, 2, 3, 4, 4, 5}
print(numbers)
📌 Output: {1, 2, 3, 4, 5}

🔹 Important Characteristics
Duplicates are automatically removed
No indexing or slicing (because sets are unordered)
Elements must be immutable (int, str, tuple allowed; list not allowed)

🔹 Creating a Set
set1 = {10, 20, 30}
set2 = set([1, 2, 3, 4])
print(set1)
print(set2)

⚠ Empty set:
empty_set = set()  # Correct
empty_set = {}     # ❌ This creates a dictionary

🔥 Adding & Removing Elements
fruits = {"apple", "banana"}
fruits.add("cherry")
fruits.remove("banana")
print(fruits)

Other useful methods:
discard() → removes an element without raising an error if it is missing
pop() → removes an arbitrary element
clear() → removes all elements

🔹 Set Operations
Set operations are very useful for comparisons.
▶ Union
a = {1, 2, 3}
b = {3, 4, 5}
print(a | b)
▶ Intersection
print(a & b)
▶ Difference
print(a - b)
▶ Symmetric Difference
print(a ^ b)

🔹 Looping Through a Set
for item in a:
    print(item)
⚠ Order is not guaranteed.

⚠ Common Beginner Mistakes
❌ Trying to access set elements using an index
❌ Expecting the order to remain the same
❌ Confusing {} with an empty set
❌ Adding mutable elements like lists

🌱 Best Practices
Use sets when uniqueness matters
Use set operations for fast comparisons
Avoid relying on the order of elements

Sets are extremely efficient for handling unique values and comparisons. Once mastered, they simplify logic that would otherwise need complex loops.

#Python #PythonProgramming #CodingJourney #LearnTogether #CodeDaily #ProgrammingBasics #TechCommunity
𝗗𝗮𝘆 𝟭𝟬: 𝗦𝘁𝗿𝗶𝗻𝗴 𝗠𝗲𝘁𝗵𝗼𝗱𝘀 𝗶𝗻 𝗣𝘆𝘁𝗵𝗼𝗻 🐍

String methods help manipulate and analyze text data efficiently. Here are the most important string methods you must know.

𝟭) 𝗹𝗼𝘄𝗲𝗿() / 𝘂𝗽𝗽𝗲𝗿() - convert to lowercase or uppercase
"PYTHON".lower() → python
"python".upper() → PYTHON

𝟮) 𝘁𝗶𝘁𝗹𝗲() - capitalizes each word
"hello world".title() → Hello World

𝟯) 𝗰𝗮𝗽𝗶𝘁𝗮𝗹𝗶𝘇𝗲() - capitalizes the first letter
"hello".capitalize() → Hello

𝟰) 𝘀𝘁𝗿𝗶𝗽() - removes leading and trailing spaces
" hi ".strip() → hi

𝟱) 𝗿𝗲𝗽𝗹𝗮𝗰𝗲(𝗮, 𝗯) - replaces a substring
"hi there".replace("hi", "hello") → hello there

𝟲) 𝘀𝗽𝗹𝗶𝘁() / 𝗷𝗼𝗶𝗻() - split string ↔ join list
"a,b,c".split(",") → ['a','b','c']
",".join(['a','b','c']) → a,b,c

𝟳) 𝗳𝗶𝗻𝗱() - finds the index of a substring
"python".find("t") → 2

𝟴) 𝗰𝗼𝘂𝗻𝘁() - counts occurrences
"banana".count("a") → 3

𝟵) 𝘀𝘁𝗮𝗿𝘁𝘀𝘄𝗶𝘁𝗵() / 𝗲𝗻𝗱𝘀𝘄𝗶𝘁𝗵() - checks the start or end
"hello".startswith("he") → True
"hello".endswith("lo") → True

𝟭𝟬) 𝗶𝘀𝗱𝗶𝗴𝗶𝘁() / 𝗶𝘀𝗮𝗹𝗽𝗵𝗮() - checks for digits or alphabetic characters
"1234".isdigit() → True

#Python #LearningInPublic #DataScience #Programming
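These methods compose naturally when cleaning text, since each one returns a new string. A quick sketch chaining several of them:

```python
raw = "  Hello, World!  "

# strip -> lower -> replace, each returning a fresh string.
cleaned = raw.strip().lower().replace(",", "")

print(cleaned)                   # hello world!
print(cleaned.split())           # ['hello', 'world!']
print(cleaned.startswith("he"))  # True
print(cleaned.count("l"))        # 3
```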
Python with Machine Learning — Chapter 2 📘
Topic: Python Data Types 🔍

Let's keep building your foundation. Data types tell Python what kind of value you're working with. Mastering them helps you avoid bugs and write cleaner code.

Here are 5 essential types we'll use in ML:
1. Integer — whole numbers → counts, indices, labels
2. Float — decimal numbers → prices, measurements, probabilities
3. String — text → names, messages, file paths
4. Boolean — True or False → conditions, decisions
5. None → missing values or placeholders

# Integers and Floats
age = 25
pi = 3.14

# Strings
name = "Alice"

# Boolean
is_active = True

# None
missing_value = None

print(age, pi, name, is_active, missing_value)

Methods and functions
• String methods: name.lower(), name.upper(), name.strip()
• Type conversion: int(), float(), str(), bool()

Quick tips
• Use type() to check a variable's type
• Start simple and build confidence with small experiments

You're doing great. Keep practicing. Next up: Lists
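The type-conversion functions listed above in action, continuing the same example values:

```python
age = 25
pi = 3.14

print(type(age))            # <class 'int'>
print(float(age))           # 25.0 (int -> float is lossless)
print(int(pi))              # 3 (truncates toward zero, does not round)
print(str(age) + " years")  # 25 years (convert before concatenating)
print(bool(""), bool("x"))  # False True (empty values are falsy)
```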
Today’s Python focus was 𝗗𝗶𝗰𝘁𝗶𝗼𝗻𝗮𝗿𝗶𝗲𝘀 and 𝗧𝘂𝗽𝗹𝗲𝘀. I spent time understanding how Python handles structured data using key-value pairs and fixed collections, and how this differs from lists.

𝗪𝗵𝗮𝘁 𝗜 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝗱 𝘁𝗼𝗱𝗮𝘆:
• Creating dictionaries to store related data using meaningful keys
• Accessing values using keys and using get() to avoid runtime errors
• Updating existing values and adding new key-value pairs
• Deleting entries and checking for key existence
• Iterating through dictionaries using keys and items()
• Extracting only keys and only values when needed
• Working with nested dictionaries to represent structured data
• Iterating through nested dictionaries for multi-level data
• Using dictionaries to model real examples like contact details and revenue by region

𝗞𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀:
• Dictionaries store data as key-value pairs, making lookups fast and clear
• Dictionaries are mutable, so values can be updated without recreating the structure
• get() is safer than direct key access when keys may not exist
• Nested dictionaries are useful for representing hierarchical data
• Iterating through dictionaries helps process structured datasets efficiently

I also revisited 𝘁𝘂𝗽𝗹𝗲𝘀 conceptually and understood where they fit:
• Tuples are ordered and immutable
• They are useful when data should not change
• Often used for fixed records, configuration values, or safe data grouping

Working with dictionaries made it clear how real-world data like contacts, configurations, and reports are represented in Python.

If you are learning Python as well, which data structure are you currently focusing on?

#Python #PythonLearning #DictionariesInPython #TuplesInPython #ProgrammingBasics #LearningInPublic #DataAnalytics #Upskilling
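A sketch of the patterns practiced above, using an illustrative contacts example (the names and structure are my own, not from the original post):

```python
contacts = {
    "alice": {"email": "alice@example.com", "city": "Pune"},
    "bob": {"email": "bob@example.com", "city": "Delhi"},
}

# get() avoids a KeyError when the key may not exist.
print(contacts.get("carol", "not found"))  # not found

# Iterating through a nested dictionary with items().
for name, details in contacts.items():
    for field, value in details.items():
        print(name, field, value)

# A tuple for a fixed record that should not change.
location = (18.5204, 73.8567)  # latitude, longitude
print(location)
```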
Why do we have different data structures in Python?

Because different problems need different ways of storing and accessing data. No single structure is best for everything. Choosing the right one makes your code cleaner, faster, and more meaningful.

Here are simple use case scenarios from building real applications, beyond the formal concepts we tend to memorize about data structures.

Lists
Imagine you are building a music streaming app and you need to store the list of songs in a user’s playlist. The order of songs matters because they are played sequentially. Users can add new songs, remove existing ones, or rearrange the playlist at any time. A list is useful here because it maintains order and allows frequent modifications to the data.

Tuples
Suppose you are defining image dimensions for a computer vision model or returning latitude and longitude from a function. These values should never change during execution. A tuple is useful here because it protects fixed data from accidental modification. Tuples are commonly used for configuration values and function outputs that should remain constant.

Sets
Imagine you are tracking users who have already visited a website or students who have submitted an assignment. You do not want duplicate entries and you only care whether an element exists or not. A set is ideal in this scenario. Sets are also very useful when comparing datasets, such as finding common skills between two resumes or common users between two platforms.

Dictionaries
Consider building a user profile system where each user has a name, email, and score. You want to access data using meaningful keys instead of positions. A dictionary fits naturally here. Dictionaries are heavily used in machine learning for storing model parameters, feature names with values, and JSON-like API responses.

The Bigger Picture
Different data structures exist because data behaves differently in real-world problems. Understanding when and why to use them is more important than memorizing syntax.

Feel free to comment with real-world use cases of data structures you have encountered in your projects.

#Python #DataStructures #ProgrammingFundamentals #SoftwareDevelopment #MachineLearning #DataScience #LearningToCode
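The four scenarios above, condensed into one illustrative sketch (all the sample data is made up):

```python
# List: an ordered, mutable playlist.
playlist = ["Song A", "Song B"]
playlist.append("Song C")

# Tuple: fixed image dimensions that must not change.
dimensions = (224, 224)

# Set: unique visitors; duplicate adds are ignored.
visitors = set()
visitors.add("user1")
visitors.add("user1")  # no effect, still one entry

# Dict: a profile accessed by meaningful keys, not positions.
profile = {"name": "Asha", "email": "asha@example.com", "score": 92}

print(len(playlist), dimensions, len(visitors), profile["score"])
```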
"Python patterns I actually use as a data person (Series Intro – Part 1)" I’m starting a short Python mini-series focused on how Python is actually used in analytics and data engineering — not tutorials, but real patterns that show up in production data work. After working on fraud detection and compliance pipelines, one thing became clear to me: -> Python becomes powerful when analysis is structured like a pipeline, not a one-off script. In real projects, a few repeatable patterns matter far more than clever tricks: • Using functions to encapsulate steps like loading, cleaning, feature engineering, and exporting so logic can be reused across projects. • Keeping configuration (file paths, table names, parameters) outside core logic using config files or environment variables. • Exploring in notebooks first, then refactoring stable logic into .py modules that can be scheduled, versioned, and run automatically. These patterns make it much easier to move from a “quick analysis” to a reliable workflow that teams can trust and reuse. Over the next few posts, I’ll share practical Python lessons from real data work — including unstructured data extraction, data validation, performance tuning, and production mistakes I learned the hard way. 👉 If you work with data and care about writing Python that scales beyond a notebook, follow along — next post drops soon. #Python #DataAnalytics #AnalyticsEngineering #DataEngineering #CareersInData