🚀 Python for Data Analyst – Understanding Sets in Python (Part 1) – (Post 7)

🔹 What is a Set?
A set in Python is:
- unordered
- unindexed
- mutable
- stores unique elements only (duplicates are not allowed)
- can contain elements of different (hashable) data types

Example:
```python
my_set = {1, 2, 3, 4, 5}
print(my_set)
```

🔹 Key characteristics of sets
- Order is not guaranteed
- Duplicates are removed automatically
- You cannot access items by index like s[0]
- Sets are implemented internally using hash tables
- Sets are useful for duplicate removal and fast membership checking

🔹 Creating Sets

1. Using curly braces
```python
s = {1, 2, 3, 4}
print(s)
```

2. Creating an empty set
```python
s = set()
print(type(s))
```
Important: `{}` creates an empty dictionary, not a set.

3. Using set() with other iterables
```python
print(set([1, 2, 2, 3, 4]))
print(set((1, 1, 2, 3)))
print(set("GeeksForGeeks"))
print(set(range(3, 8)))
```

4. Converting a dictionary to a set
```python
d = {'x': 1, 'y': 2, 'z': 3}
print(set(d))
```
Important: when a dictionary is passed to set(), only its keys are taken.

🔹 Duplicate Removal
One of the best uses of sets is duplicate removal.
```python
lst = [1, 2, 2, 3, 4, 4, 5]
unique_vals = set(lst)
print(unique_vals)
```

🔹 Can Sets Contain Any Type?
Sets can only contain hashable (immutable) elements, such as int, float, string, tuple, and None. They cannot contain mutable, unhashable types like list, dictionary, or set. Reason: sets use hashing internally, so elements must have a stable hash.

🔹 Accessing Set Elements
Because sets are unordered and unindexed, this is invalid:
```python
s[0]  # ❌ TypeError
```

Correct ways:

1. Using a loop
```python
s = {"Geeks", "For", "Python"}
for item in s:
    print(item)
```

2. Using the membership operator
```python
print("Geeks" in s)
print("Java" in s)
```

🔹 Adding Elements

add() → add a single element
```python
s = {1, 2, 3}
s.add(4)
print(s)
```
If the element already exists, nothing changes:
```python
s.add(4)
print(s)
```

update() → add multiple elements from an iterable
```python
s.update([5, 6])
print(s)
```
update() works with any iterable: list, tuple, set, string, and so on.

Example:
```python
s.update("hi")
print(s)
```
Each character is added separately.

🔹 Removing Elements

remove() → removes a given element; raises KeyError if it is not present.
```python
s = {1, 2, 3, 4, 5}
s.remove(3)
print(s)
```

discard() → removes an element if present; no error is raised if it is not.
```python
s.discard(10)
print(s)
```

pop() → removes and returns an arbitrary element.
```python
val = s.pop()
print(val)
print(s)
```
Important: because sets are unordered, we cannot predict which element will be removed.

clear() → removes all elements, leaving an empty set.
```python
s.clear()
print(s)  # set()
```

🔹 Membership Testing
Sets are excellent for fast membership checks.
```python
my_set = {1, 2, 3, 4, 5}
print(3 in my_set)
print(10 in my_set)
```

🔹 Practical Use Case
Counting unique words:
```python
text = "In this tutorial we are discussing about sets"
words = text.split()
unique_words = set(words)
print(unique_words)
print(len(unique_words))
```

#Python #PythonLearning #DataAnalytics #Sets #LearningInPublic
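As a quick companion to the duplicate-removal section above: `set()` loses the original order, while the `dict.fromkeys()` idiom keeps it (valid because dicts preserve insertion order from Python 3.7+). A small sketch:

```python
lst = [1, 2, 2, 3, 4, 4, 5]

# set() removes duplicates but does not guarantee the original order
unique_unordered = set(lst)
print(unique_unordered)

# dict.fromkeys() removes duplicates while keeping first-seen order,
# because dict keys are unique and insertion-ordered (Python 3.7+)
unique_ordered = list(dict.fromkeys(lst))
print(unique_ordered)  # [1, 2, 3, 4, 5]
```

For small integers the set may happen to print in sorted order, but that is an implementation detail; only the `dict.fromkeys()` version is guaranteed to preserve the input order.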
Python Sets: Understanding and Usage
More Relevant Posts
*Python Data Structures interview questions with answers:*

📍 *1. What are the main built-in data structures in Python?*
*Answer:* Python provides four primary built-in data structures:
– *List*: Ordered, mutable, allows duplicates
– *Tuple*: Ordered, immutable, allows duplicates
– *Set*: Unordered, mutable, no duplicates
– *Dictionary*: Key-value pairs, mutable, insertion-ordered from Python 3.7+
Each structure serves different use cases based on performance, mutability, and uniqueness.

📍 *2. What is the difference between a list and a tuple in Python?*
*Answer:*
– *List*: Mutable, can be modified after creation
– *Tuple*: Immutable, cannot be changed once defined
Lists are used when data may change; tuples are preferred for fixed collections or as dictionary keys.
```python
my_list = [1, 2, 3]
my_tuple = (1, 2, 3)
```

📍 *3. What is the difference between a set and a frozenset?*
*Answer:*
– *Set*: Mutable, supports add/remove operations
– *Frozenset*: Immutable, hashable, can be used as dictionary keys or set elements
Use frozensets when you need a fixed, unique collection that won't change.
```python
my_set = {1, 2, 3}
my_frozenset = frozenset([1, 2, 3])
```

📍 *4. What are common dictionary methods in Python?*
*Answer:*
– `get(key)`: Returns the value or a default
– `keys()`, `values()`, `items()`: Access dictionary contents
– `update()`: Merges another dictionary
– `pop(key)`: Removes a key and returns its value
– `clear()`: Empties the dictionary
```python
person = {"name": "Alice", "age": 30}
print(person.get("name"))
print(person.items())
```

📍 *5. How do you iterate over different data structures in Python?*
*Answer:*
– *List/Tuple*: Use `for item in sequence`
– *Set*: Same as a list, but the order is not guaranteed
– *Dictionary*: Use `for key, value in dict.items()`
You can also use `enumerate()` for index-value pairs and `zip()` to iterate over multiple sequences.
```python
for key, value in person.items():
    print(key, value)
```

*Double Tap ❤️ For More*
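Question 2 above notes that tuples can serve as dictionary keys while lists cannot; a short sketch of that difference (the city names and distances are purely illustrative):

```python
# Tuples are hashable, so they work as dictionary keys
distances = {("NYC", "Boston"): 306, ("NYC", "DC"): 365}
print(distances[("NYC", "Boston")])  # 306

# Lists are mutable and unhashable, so they are rejected as keys
try:
    bad = {["NYC", "Boston"]: 306}
except TypeError as e:
    print("Lists cannot be keys:", e)
```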
🐍 Python Data Structures: The "Big Four" explained in 60 seconds. ⏲️

Mastering data structures is the first step toward writing efficient Python code. Here is a quick breakdown of the Big Four:

👉 List: an ordered collection of values that can hold different data types.
🖊️ Ordered: maintains insertion order.
🖊️ Changeable: mutable, so items can be modified at any time.
🖊️ Duplicates: can hold duplicate values.
🖊️ Heterogeneous: can hold items of different data types.
▶️ my_list = ['Hello', 9000, 3.20, [2, 5, 8]]

👉 Dictionary: an ordered collection of key-value pairs with unique keys.
🖊️ Ordered: insertion-ordered (Python 3.7+), but items have no index; values are accessed by key.
🖊️ Unique: every item has a unique key.
🖊️ Mutable: items can be added, modified, or deleted after creation.
▶️ my_dictionary = {'name': 'Jason', 'position': 'Manager', 'experience': 10}

👉 Set: an unordered, unindexed collection of unique values. The set itself is mutable, but its elements must be immutable (hashable).
🖊️ Unique: stores unique values only.
🖊️ Unindexed: individual items cannot be accessed by index.
🖊️ Unordered: does not maintain insertion order.
🖊️ Mutable set, immutable elements: items can be added and removed, but not modified in place. To "modify" an item, remove the old value and add the new one.
▶️ my_set = {1, 2, 4, 6, 7, 9}

👉 Tuple: an ordered, immutable collection that allows duplicate values.
🖊️ Ordered: maintains insertion order.
🖊️ Immutable: values cannot be modified after creation.
🖊️ Duplicates: can hold duplicate values.
🖊️ Indexed: items can be accessed by index.
▶️ my_tuples = ('apple', 'banana', 'orange', 'banana', 'cherry')

#Python #PythonProgramming #SoftwareEngineer #PythonTips #LearnToCode
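The set notes above say items cannot be modified in place, only removed and re-added; a minimal sketch of that remove-then-add pattern:

```python
my_set = {1, 2, 4, 6, 7, 9}

# There is no way to "edit" an element in place; to change 4 into 5,
# remove the old value and add the new one
my_set.remove(4)
my_set.add(5)
print(my_set)  # contains 5, no longer contains 4
```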
🐍 Data Types & Type Casting in Python (Small Concept, Big Impact)

When working with data in Python, one mistake beginners often make is ignoring data types. And trust me, this small thing can break your entire analysis.

When you load a dataset in Python, it doesn't always read your data the way you expect. A column full of numbers might be stored as text. A date column might be treated as a random string. A true/false column might come in as an object. If you don't fix this early, your entire analysis will give you wrong results.

🔹 So What Are Data Types?
Every value in Python has a type; it tells Python what kind of data this is and what you can do with it. The most common ones in data analysis:
- int → whole numbers → 25, 100, -5
- float → decimal numbers → 3.14, 99.9, -0.5
- str → text → "John", "Mumbai", "Yes"
- bool → True or False
- datetime → dates & times → 2024-01-15

👉 Think of data types as the language your data speaks. If you misunderstand it, your analysis goes wrong.

🔹 Why Data Types Matter in Data Analysis
Because Python behaves differently based on data types. Example:
👉 "100" + "20" → "10020" (string concatenation)
👉 100 + 20 → 120 (numeric addition)
Same values. Different result.

🔹 A Simple Real-Life Example
Imagine a salary column in your dataset. You try to calculate the average:
```python
df['salary'].mean()
```
But Python throws an error. You check the data type and see that salary is stored as object (string), not a number. Python literally can't do math on it. That's where type casting comes in.

🔹 What is Type Casting?
Type casting means converting one data type into another. If your salary column is stored as "50000" (a string), every calculation you run will give wrong results or fail completely.

After type casting:
```python
# Convert salary column to numbers
df['salary'] = df['salary'].astype(float)

# Now calculate the average salary
df['salary'].mean()  # works

# Convert joining date to datetime
df['join_date'] = pd.to_datetime(df['join_date'])

# Convert employment status to boolean
df['is_active'] = df['is_active'].astype(bool)
```
Now Python understands your data, and you can calculate average salaries, find top earners, compare departments, and build models correctly.

🔹 Why This Matters in Real Projects
Wrong data types silently break your analysis:
- Calculations fail on string columns
- Sorting dates goes wrong if they are stored as text
- Visualizations won't plot numeric data stored as objects
- Machine learning models reject incorrect types completely
Checking and fixing data types is not optional; it is one of the first things a professional analyst does.

🔹 When Should You Always Check Data Types?
✔ Right after loading your dataset
✔ Before doing any salary calculations
✔ During data cleaning
```python
df.dtypes  # check all column types at once
```
One wrong data type = one wrong insight. And in salary analysis, one wrong insight can mislead an entire business decision.

#DataAnalytics #Python #DataTypes #TypeCasting #pandas
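The string-vs-number behaviour above can also be seen without pandas. A small standard-library sketch (the sample salaries are made up) of what `astype(float)` effectively does to each value:

```python
# Same "values", different types, different results
print("100" + "20")  # '10020' (string concatenation)
print(100 + 20)      # 120    (numeric addition)

# Casting a text column by hand: convert each string to a float,
# then the math works as expected
salaries_raw = ["50000", "62000", "48500"]
salaries = [float(s) for s in salaries_raw]
print(sum(salaries) / len(salaries))  # 53500.0
```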
My Data Science Journey — Python Tuple, Set, Dictionary & the Collections Library

Today's focus was on Python's core data structures (Tuples, Sets, and Dictionaries) along with the powerful collections module that enhances their functionality for real-world use cases.

𝐖𝐡𝐚𝐭 𝐈 𝐋𝐞𝐚𝐫𝐧𝐞𝐝:

Tuple
– Ordered, immutable, allows duplicates
– Single-element tuples require a trailing comma → ("cat",)
– Supports packing and unpacking → x, y = 10, 30
– Cannot be modified after creation (TypeError by design)
– Faster than lists in certain operations
– Used for data like geographic coordinates and fixed records
– Can be used as dictionary keys (unlike lists)

Set
– Unordered, mutable, stores unique elements only
– No indexing or slicing support
– An empty set must be created with set() ({} creates a dict)
– .remove() raises KeyError if the element is not found
– .discard() removes safely without an error
– Supports union, intersection, difference, symmetric_difference
– issubset(), issuperset(), isdisjoint() help with set comparisons
– frozenset provides an immutable version of a set
– Offers O(1) average time complexity for membership checks

Dictionary
– Key-value pair structure: ordered, mutable, keys must be unique
– Built on hash tables for fast lookups
– user["key"] raises KeyError if the key is missing
– user.get("key", default) gives safe access with a fallback
– keys(), values(), items() for iteration
– pop(), popitem(), update(), clear(), del for modifications
– Widely used for real-world data such as APIs and JSON responses
– Common pattern: a list of dictionaries for structured datasets

Collections Library
– namedtuple → tuple with named fields for better readability
– deque → efficient queue with O(1) operations on both ends
– ChainMap → combines multiple dictionaries without merging copies
– OrderedDict → maintains order with extra utilities like move_to_end()
– UserDict, UserList, UserString → useful for customizing built-in behaviors with validation and extensions

Performance Insight (average membership/lookup time)
– List → O(n)
– Tuple → O(n)
– Set → O(1)
– Dictionary → O(1)

𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭: Understanding when to use each data structure, and how collections enhances them, is crucial for writing efficient, scalable, and clean Python code.

Read the full breakdown with examples on Medium 👇
https://lnkd.in/gvv5ZBDM

#DataScienceJourney #Python #Tuple #Set #Dictionary #Collections #Programming #DataStructures
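A few of the collections utilities listed above in action; a minimal stdlib sketch (the field names and sample values are illustrative):

```python
from collections import namedtuple, deque, ChainMap

# namedtuple: a tuple with named fields for readability
Point = namedtuple("Point", ["x", "y"])
p = Point(10, 30)
print(p.x, p.y)  # 10 30

# deque: O(1) appends and pops at both ends
q = deque([1, 2, 3])
q.appendleft(0)
q.append(4)
print(q.popleft(), q.pop())  # 0 4

# ChainMap: search several dicts in order without merging them
defaults = {"theme": "light", "lang": "en"}
user_prefs = {"theme": "dark"}
config = ChainMap(user_prefs, defaults)
print(config["theme"], config["lang"])  # dark en
```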
Day 2 of Learning Python – And I Just Built My First Real Data Audit System 📊🐍

Today I didn't just "learn Python"… I used it to analyze structured company-style audit data and built a Mistake Scoring System that automatically evaluates performance. Honestly, it felt like stepping into real business intelligence work.

💡 What I built today:
Using Pandas, I processed an audit dataset and generated insights like:
📌 Total deals per responsible person
📌 Pipeline distribution per team member
📌 Mistake scoring based on missing actions (follow-ups, updates, documents)
📌 Final performance summary ranking everyone by errors

⚙️ The idea behind the system:
Instead of manually checking performance, I created a logic-based scoring system where:
- Missing documents = +1 error
- No follow-up = +1 error
- No comment update = +1 error
- Unresolved status = +3 heavy penalty
This turns raw data into actionable performance insights.

💻 Code I used:
```python
import pandas as pd

file_path = r"insert your Excel data file path here"

df = pd.read_excel(file_path)

# CLEAN DATA
df.columns = df.columns.str.strip()
df = df.fillna("No")

# MISTAKE SCORE SYSTEM
df["Mistake Score"] = 0
df.loc[df["Document/RF Request"] == "No", "Mistake Score"] += 1
df.loc[df["Comment Updates"] == "No", "Mistake Score"] += 1
df.loc[df["Follow up"] == "No", "Mistake Score"] += 1
df.loc[df["Status"].str.lower() == "unresolved", "Mistake Score"] += 3

# ANALYSIS
print(df["Responsible"].value_counts())
print(df.groupby(["Responsible", "Pipeline"]).size())

mistakes = df.groupby("Responsible")["Mistake Score"].sum().sort_values(ascending=False)
print(mistakes)

summary = df.groupby("Responsible").agg(
    Total_Deals=("Responsible", "count"),
    Total_Mistakes=("Mistake Score", "sum"),
)
print(summary.sort_values("Total_Mistakes", ascending=False))
```

Note: the r before the file path means it is a raw string, which helps Python read the path correctly without treating backslashes as escape characters. Also, make sure your Excel file is saved in the same folder as your Python script, or provide the correct full file path.

🚀 Key takeaway:
Even simple Python + Excel data can be transformed into a decision-making system that highlights performance gaps instantly. Day 2 of learning, and I'm already seeing how powerful data can be in real business environments. Can't wait to build dashboards and automate even more next 🔥

#Python #DataAnalysis #Pandas #LearningInPublic #DataScience #Automation #BusinessIntelligence #CareerGrowth
🚀 Day 7 of My Python Learning Journey | String Methods | Business Analyst Aspirant

Continuing my Python journey to strengthen my skills for a Business Analyst role 📊 Today, I worked on String Methods in Python, which are extremely useful for data cleaning, transformation, and preprocessing: key tasks in real-world analytics.

💻 Topic: String Methods in Python
```python
# Remove spaces
text1 = "  hello python learners  "
print("Clean text:", text1.strip())

# Upper & lower case
print("Upper:", text1.upper().strip())
print("Lower:", text1.lower().strip())

# Replace text
print("Replace:", text1.replace("python", "SQL").strip())

# Count occurrences
print("Count of 'o':", text1.count("o"))

# Check start
print("Starts with hello:", text1.strip().startswith("hello"))

# Check numeric
mobile = "9876543210"
print("Is numeric:", mobile.isnumeric())

# Split & join
msg = "Welcome to python Course"
words = msg.split()
print("Words list:", words)
joined_text = "_".join(words)
print("Joined text:", joined_text)

# Find position
print("Index of 'p':", msg.find("p"))

# Extract domain
email = "student@example.com"
domain = email[email.find("@") + 1:]
print("Domain:", domain)

# Data cleaning example (price)
price_text = "Price : ₹3500/-"
clean_price = price_text.replace("Price :", "") \
                        .replace("₹", "") \
                        .replace("/-", "") \
                        .strip()
print("Clean price:", clean_price)
```

💡 Key Learnings:
- Cleaned raw text data using strip() and replace()
- Transformed text using upper(), lower(), split(), and join()
- Extracted useful information (like an email domain)
- Practiced real-world data cleaning (price formatting)

📌 These skills are directly applicable in:
✔ Data Cleaning
✔ Excel / SQL transformations
✔ Power BI datasets

I'm learning Python through Satish Dhawale sir's course (SkillCourse) and practicing daily 💻
🔥 Next step: applying these concepts to real datasets and analytics projects.
Let's connect if you're also learning Python or Data Analytics 🤝

#Python #StringMethods #DataCleaning #BusinessAnalyst #DataAnalytics #LearningJourney #SkillDevelopment #SatishDhawale #SkillCourse #UpGrad
Python for Developers | Step 3 — Data Structures (Q&A Series)

Dictionaries: not just "key-value pairs"

At first, a dictionary looks like a simple mapping:
```python
my_dict = {"Mahmoud": 100}
```
But internally it behaves very differently from lists, and that difference directly affects performance, correctness, and even bugs.

What is a dictionary really?
What: A dictionary is a hash table, not just a collection of pairs.
Why: Instead of searching linearly, Python computes hash(key), maps it to an index in memory, and stores or retrieves the value directly.
Consequence: Lookup (d[key]) is O(1) on average; performance depends on hashing, not position.

Why must keys be immutable?
What: Keys must be hashable (effectively immutable).
Why: The hash of a key determines where it is stored; if the key changed, its hash would change and the stored location would become invalid.
Consequence:
```python
d = {[1, 2]: 10}  # TypeError
```
Mutable objects (like lists) are rejected, which prevents silent data corruption.

What happens with duplicate keys?
```python
d = {"a": 1, "a": 2}
```
What: Only one entry exists.
Why: Keys must be unique, so the second insertion overwrites the first.
Consequence: d == {"a": 2}. No error is raised; the earlier value is discarded immediately.

Why is lookup "fast", and when is it not?
What: Dictionary operations are O(1) on average.
Why: Direct index access via hashing.
Consequence: Fast lookups, until collisions happen.

What is a hash collision?
What: Two different keys map to the same index.
Why: The hash space is finite, so collisions are unavoidable.
Consequence: Python must resolve the collision, which means extra work and slower operations.

How does Python resolve collisions?
What: Using probing (open addressing).
Why: If a slot is occupied, Python searches for another one.
Consequence: A lookup may require multiple steps; too many collisions degrade performance toward O(n).

Why do dictionaries resize?
What: A dictionary expands when it becomes too full.
Why: A high load factor means more collisions, so more space is needed to keep O(1) behavior.
Consequence: A temporary cost (rehashing all keys) that restores performance.

Do dictionaries store values directly?
What: They store references to objects, not copies.
Why: Consistent with Python's memory model.
Consequence:
```python
a = {"x": []}
b = a.copy()
b["x"].append(1)
```
Both dictionaries change; the inner object is shared (shallow copy).

What do .keys(), .values(), .items() return?
What: They return view objects, not lists.
Why: To avoid copying data and to provide real-time access.
Consequence:
```python
k = d.keys()
d["new"] = 1
```
k updates automatically, but it cannot be modified directly.

Views are not independent:
```python
k = d.keys()
d.clear()
```
Consequence: k becomes empty; it reflects the source, not a snapshot.

Final Question: If dictionaries are "O(1)" but collisions and probing exist, at what point does a dictionary stop behaving like O(1), and what kind of key patterns could cause that degradation in real systems?
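The shallow-copy and view-object consequences described above can be verified in a few lines:

```python
# Shallow copy: the inner list is shared between both dicts
a = {"x": []}
b = a.copy()
b["x"].append(1)
print(a)  # {'x': [1]} -- the original changed too

# Views reflect the dictionary live; they are not snapshots
d = {"a": 1}
k = d.keys()
d["b"] = 2
print(list(k))  # ['a', 'b'] -- the view saw the new key
d.clear()
print(list(k))  # [] -- the view is now empty as well
```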
𝗖𝗮𝗻 𝗦𝗤𝗟 𝗱𝗼 𝗳𝗲𝗮𝘁𝘂𝗿𝗲 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀?

We usually do feature analysis in Python, but what if we cannot load millions of rows into Python? Can we do it with SQL?

To figure this out, I took the problem of customer churn and tried to understand why customers are leaving and what we can do about it. I studied the behavior of churned customers across the groups of each feature. For example, does a high number of support calls lead to churning?

To study customer behavior, I calculated churn rates across the groups of each feature using AVG() in SQL. I used churn rate because it allows comparison irrespective of group size.

For numerical features like payment delay, I first divided the feature into groups using GROUP BY, looking for sudden differences in churn rate between adjacent values. This identified the thresholds of behavioral change, and I labeled the groups with a CASE conditional statement. For categorical features, the churn rate can be calculated directly.

To decide which features are important, I used these criteria:
1. The churn rate difference must be significant for at least one group compared to the others. This suggests that threshold is the breaking point of customer behavior.
2. The pattern should be stable, to avoid random noise.
3. Group sizes should be comparable.

Example: Issue Level (Support Calls)
```
+-------------+------------+
| Issue Level | Churn Rate |
+-------------+------------+
| Low         | 0.10       |
| Medium      | 0.25       |
| High        | 0.80       |
+-------------+------------+
```
The churn rate stays relatively stable across low and medium but increases sharply at the high issue level. Customers wait patiently while support calls are at the low or medium issue level; once the threshold is crossed, 80% of the customers leave. That means support calls should be addressed before they reach the high issue level; otherwise, the customer will leave.

In this customer churn dataset, the features are: Age, Gender, Tenure, Usage Frequency, Support Calls, Payment Delay, Subscription Type, Contract Length, Total Spend, Last Interaction, and Churn. For a more detailed analysis, check out the GitHub repo (Notebooks/SQL_Analysis folder): https://lnkd.in/gUx9vgyE

#SQL #FeatureAnalysis #CustomerChurn #DataAnalytics #DataScience #SQLAnalytics #ChurnAnalysis #DataEngineering #BehavioralAnalysis #AnalyticsEngineering #BigData #DataCommunity
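The approach described above (bucket a numeric feature with CASE, then take AVG of the 0/1 churn flag per bucket) can be sketched end-to-end with Python's built-in sqlite3. The table, column names, thresholds, and toy rows below are illustrative, not taken from the original dataset:

```python
import sqlite3

# Toy churn table: support_calls per customer, churned as 0/1
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (support_calls INTEGER, churned INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, 0), (2, 0), (1, 0), (4, 0), (4, 1), (5, 0), (6, 1), (7, 1), (8, 1), (8, 0)],
)

# CASE buckets the numeric feature; AVG(churned) is the churn rate per
# group, because the mean of a 0/1 flag equals the fraction of ones
rows = conn.execute("""
    SELECT
        CASE
            WHEN support_calls <= 2 THEN 'Low'
            WHEN support_calls <= 5 THEN 'Medium'
            ELSE 'High'
        END AS issue_level,
        ROUND(AVG(churned), 2) AS churn_rate,
        COUNT(*) AS group_size
    FROM customers
    GROUP BY issue_level
    ORDER BY churn_rate
""").fetchall()

for level, rate, size in rows:
    print(level, rate, size)
```

With these toy rows, the query reports a low churn rate for the 'Low' bucket and a sharply higher one for 'High', mirroring the pattern in the table above.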
🚀 Python for Data Analyst – Advanced Set Concepts in Python (Part 3) – (Post 9)

These are small concepts individually, but together they make sets very powerful in real-world Python work.

1️⃣ issubset()
Checks whether all elements of one set are present in another.
```python
s1 = {1, 2, 3, 4, 5}
s2 = {4, 5}
print(s2.issubset(s1))  # True
```
More examples:
```python
A = {1, 2, 3}
B = {1, 2, 3, 4, 5}
C = {1, 2, 4, 5}
print(A.issubset(B))  # True
print(B.issubset(A))  # False
print(A.issubset(C))  # False
print(C.issubset(B))  # True
```

2️⃣ issuperset()
Checks whether one set contains all elements of another.
```python
A = {4, 1, 3, 5}
B = {6, 0, 4, 1, 5, 3}
print(A.issuperset(B))  # False
print(B.issuperset(A))  # True
```

3️⃣ isdisjoint()
Checks whether two sets have no common elements.
```python
s1 = {1, 2, 3}
s2 = {4, 5, 6}
print(s1.isdisjoint(s2))  # True
```
If there is at least one common value:
```python
set1 = {2, 4, 5, 6}
set3 = {1, 2}
print(set1.isdisjoint(set3))  # False
```
It also works with lists, tuples, dictionaries, and strings. Important: for dictionaries, only the keys are checked.

4️⃣ copy()
Creates a shallow copy of a set.
```python
set1 = {1, 2, 3, 4}
set2 = set1.copy()
print(set2)
```
copy() is useful because direct assignment (set2 = set1) makes both variables point to the same set. With copy(), modifications to the copied set do not affect the original:
```python
first = {'g', 'e', 'k', 's'}
second = first.copy()
second.add('f')
print(first)
print(second)
```

5️⃣ frozenset
A frozenset is like a set, but immutable. Once created, you cannot add, remove, or update elements.
```python
fs = frozenset([1, 2, 3, 4, 5])
print(fs)  # frozenset({1, 2, 3, 4, 5})
```
Useful when you need a set-like structure that should not change.

6️⃣ Typecasting into Sets
The set() constructor can convert a list, tuple, string, range, or dictionary.
```python
print(set([1, 2, 2, 3]))
print(set((1, 1, 2, 3)))
print(set("GeeksforGeeks"))
print(set(range(3, 8)))
print(set({'x': 1, 'y': 2}))
```
Important: when converting a dictionary to a set, only the keys are included.

7️⃣ set() Function Summary
Syntax: set(iterable)
- removes duplicates automatically
- creates an empty set if no argument is passed
- accepts only iterables
```python
set()
set([4, 5, 5, 6])
set((1, 1, 2, 3))
set("hello")
```

8️⃣ min() and max() with Sets
You can find the minimum and maximum values in a set.
```python
s1 = {4, 12, 10, 9, 13}
print(min(s1))  # 4
print(max(s1))  # 13
```
⚠️ For heterogeneous sets like {"Geeks", 11}, min() and max() raise TypeError because Python cannot compare different types.

9️⃣ Using sorted() with Sets
sorted() works on any iterable, including sets. It returns a new sorted list and does not modify the original set.
```python
s = {5, 3, 9, 1, 7}
sorted_s = sorted(s)
print(sorted_s)  # [1, 3, 5, 7, 9]
```
Descending order:
```python
print(sorted(s, reverse=True))
```
Sorting strings in a set:
```python
A = {'ab', 'ba', 'cd', 'dz'}
print(sorted(A))
```

#Python #PythonLearning #DataAnalytics #Sets #LearningInPublic
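One practical consequence of frozenset's immutability (point 5️⃣ above): it is hashable, so it can be used as a dictionary key or as an element of another set, which a plain set cannot. A small sketch (the team names are illustrative):

```python
# frozenset is hashable, so it works as a dictionary key
groups = {
    frozenset({"alice", "bob"}): "team A",
    frozenset({"carol"}): "team B",
}

# Element order does not matter for the lookup, only membership
print(groups[frozenset({"bob", "alice"})])  # team A

# A plain (mutable) set is rejected as a key
try:
    bad = {{"alice", "bob"}: "team A"}
except TypeError:
    print("plain sets are unhashable")
```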
Day 12/30 – Nested Data Structures in Python

Today everything clicked. Lists, dicts, tuples: they don't live separately. Real data nests them together.

What is Nesting?
Nesting means placing one data structure inside another. A list can contain dictionaries. A dictionary can contain lists. A dictionary can even contain other dictionaries. This is how Python represents complex, real-world data: the same structure used in JSON APIs, databases, and config files.

Four Common Nesting Patterns
- List inside Dict → a dictionary key holds a list as its value, e.g. a student's list of scores
- Dict inside List → a list contains multiple dictionaries, e.g. a list of student records
- Dict inside Dict → a key holds another dictionary, e.g. a user with a nested address object
- List inside List → a list contains other lists, e.g. rows and columns in a grid or table

How to Access Nested Data
You access nested data by chaining brackets, one for each level you go deeper:
```python
data["student"]["scores"][0]  # open the dict, go to the scores key, grab index 0
```
Rule: count the levels of nesting, then use that many brackets to reach the value.

Looping Through Nested Structures
When your data is a list of dictionaries, use a for loop to go through each dictionary, then use bracket notation to pull out values. This is the most common real-world pattern: reading records from an API or database.

Code Example 1: List Inside a Dict
```python
student = {
    "name": "Obiageli",
    "scores": [88, 92, 75, 95],
    "passed": True,
}
print(student["scores"])      # [88, 92, 75, 95]
print(student["scores"][0])   # 88
print(student["scores"][-1])  # 95
```

Key Learnings
☑ Nesting = placing one data structure inside another
☑ Access nested data by chaining brackets, one bracket per level
☑ A list of dictionaries is the most common pattern; it's how API and database data looks
☑ Use a for loop to go through a list of dicts and pull values from each record
☑ Nested structures are the foundation of JSON. Master this and real-world data won't feel foreign.

My Takeaway
Nested data structures are where all the previous days connect. Lists, tuples, sets, dictionaries: they don't live in isolation. Real data combines all of them. Today I started seeing data the way Python sees it.

#30DaysOfPython #Python #LearnToCode #CodingJourney #WomenInTech
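Building on Code Example 1, the list-of-dicts looping pattern described above can be sketched like this (the second record is made up for illustration):

```python
# A list of dictionaries: the shape API and database records usually take
students = [
    {"name": "Obiageli", "scores": [88, 92, 75, 95]},
    {"name": "Ada", "scores": [70, 85, 90]},
]

# Loop over each record, then chain brackets to reach the nested values
for s in students:
    best = max(s["scores"])
    print(s["name"], "best score:", best)
# Obiageli best score: 95
# Ada best score: 90
```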