Hey there, data fam! 👋 I’m kicking off a new series today to help you master Python in its most simplified form: 🆅🅰🆁🅸🅰🅱🅻🅴🆂 🅰🅽🅳 🅽🅰🅼🅸🅽🅶 🆁🆄🅻🅴🆂 🐍

📦 What is a Variable?
In Python, variables are symbolic names that act as references to objects stored in memory. In the simplest terms, a variable is just a container for storing data values. You can think of it as a labeled box where you put something important so you can find it later.

🏠 The Real-World Example:
Imagine you are moving into a new apartment. You have dozens of cardboard boxes. If you put your kitchen plates in a box but forgot to label it, you'll be digging through 50 boxes just to eat dinner. But if you write "Kitchen_Plates" on the side, you know exactly what is inside and where to find it. Simply put: in Python, the label is the variable name, and the plates are the data.

📝 𝐓𝐡𝐞 𝐍𝐚𝐦𝐢𝐧𝐠 𝐑𝐮𝐥𝐞𝐬
We can't just name a variable anything. Python has some ground rules to keep things organized.
👉 Start with a letter or underscore: name or _name is fine. 1name is a big no-no.
👉 No spaces: Python hates spaces in names. Use an underscore instead (user_age, not user age).
👉 Case sensitive: Age, age, and AGE are three completely different variables!
👉 No special characters: @, $, %, and & are a no-no.
👉 Reserved words: We can't use keywords Python already uses for itself, like if, else, or class. (Strictly speaking, print is a built-in function rather than a reserved word in Python 3, but shadowing it is still a bad idea.)

#python #variables #namingrules #pythonsimplified
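The rules above can be checked directly in Python. A minimal sketch (the variable names are just illustrations):

```python
# Valid names: start with a letter or underscore, no spaces.
name = "Alice"
_name = "hidden"
user_age = 30

# Case sensitivity: these are three different variables.
Age, age, AGE = 1, 2, 3

# Invalid names fail with SyntaxError at parse time, e.g.:
#   1name = "nope"     # cannot start with a digit
#   user age = 30      # no spaces allowed
#   if = 5             # 'if' is a reserved keyword

# The keyword module lists reserved words programmatically:
import keyword
print(keyword.iskeyword("if"))     # True
print(keyword.iskeyword("print"))  # False: print is a built-in, not a keyword
```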
Mastering Python: Variables and Naming Rules
More Relevant Posts
Day 10/365: Building a List from User Input & Finding Basic Stats 🔢📥

Today I wrote a Python program that takes numbers from the user, stores them in a list, and then calculates some basic statistics: sum, average, minimum, and maximum.

What the code does step by step:
- First, I ask the user how many elements they want to enter and store that in n.
- I create an empty list l and a variable total to keep track of the sum.
- Using a for loop, I take n inputs from the user: each number is added to the list using append(), and at the same time I keep adding it to total to calculate the sum.
- After the loop, I print the full list, print the sum using the total variable, then calculate the average as total / n and print it.
- To find the minimum and maximum, I start by assuming both are the first element of the list, then loop through the list and update min if I find a smaller value and max if I find a larger one.
- In the end, I print the minimum and maximum numbers in the list.

What I learned from this exercise:
- How to take multiple inputs from a user and store them in a list.
- How to maintain a running sum while taking inputs.
- How to manually compute average, minimum, and maximum without using built-in functions like sum(), min(), or max().
- How loops and variables can work together to build simple but useful statistics — a basic idea used a lot in data analysis.

Day 10 done ✅ 355 more to go.

If you have ideas like extending this to find median, mode, or standard deviation, send them to me — I’d love to try them next.

#100DaysOfCode #365DaysOfCode #Python #LogicBuilding #Lists #UserInput #CodingJourney #LearnInPublic #AspiringDeveloper
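The steps above can be sketched roughly as follows. The function name compute_stats is mine; the original version reads the numbers from input() instead, as shown in the comment:

```python
def compute_stats(numbers):
    """Manually compute sum, average, min, and max without built-ins."""
    total = 0
    for x in numbers:
        total += x                    # running sum, as described in the post
    average = total / len(numbers)

    smallest = largest = numbers[0]   # assume the first element for both
    for x in numbers:
        if x < smallest:
            smallest = x
        if x > largest:
            largest = x
    return total, average, smallest, largest

# In the post the list is built from user input, roughly:
#   n = int(input("How many elements? "))
#   l = []
#   for _ in range(n):
#       l.append(int(input("Enter a number: ")))
print(compute_stats([4, 7, 1, 9]))  # (21, 5.25, 1, 9)
```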
𝗗𝗮𝘆 𝟲 𝗼𝗳 𝘀𝗵𝗮𝗿𝗶𝗻𝗴 𝗺𝘆 𝗷𝗼𝘂𝗿𝗻𝗲𝘆 ✨

After working with Python in data analysis, one thing became clear:
𝗬𝗢𝗨 𝗗𝗢𝗡’𝗧 𝗡𝗘𝗘𝗗 𝗧𝗢 𝗞𝗡𝗢𝗪 𝗘𝗩𝗘𝗥𝗬𝗧𝗛𝗜𝗡𝗚. 𝗬𝗢𝗨 𝗡𝗘𝗘𝗗 𝗧𝗢 𝗞𝗡𝗢𝗪 𝗪𝗛𝗔𝗧 𝗔𝗖𝗧𝗨𝗔𝗟𝗟𝗬 𝗚𝗘𝗧𝗦 𝗨𝗦𝗘𝗗.

Here are the Python concepts I rely on regularly:

🔹 𝗣𝗮𝗻𝗱𝗮𝘀 (𝘁𝗵𝗲 𝗯𝗮𝗰𝗸𝗯𝗼𝗻𝗲)
→ Filtering & slicing data
→ groupby() for aggregations
→ Handling missing values

🔹 𝗪𝗿𝗶𝘁𝗶𝗻𝗴 𝗰𝗹𝗲𝗮𝗻𝗲𝗿 𝗰𝗼𝗱𝗲
→ List comprehensions
→ Functions (reusable logic)
→ Lambda functions

🔹 𝗗𝗮𝘁𝗮 𝗖𝗹𝗲𝗮𝗻𝗶𝗻𝗴 (𝗺𝗼𝘀𝘁 𝘁𝗶𝗺𝗲 𝗴𝗼𝗲𝘀 𝗵𝗲𝗿𝗲)
→ fillna()
→ dropna()
→ Fixing messy data

🔹 𝗕𝗮𝘀𝗶𝗰 𝗩𝗶𝘀𝘂𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻
→ Matplotlib & Seaborn
→ Spotting trends & patterns

💡 𝗕𝗶𝗴 𝗿𝗲𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: 𝗜𝘁’𝘀 𝗻𝗼𝘁 𝗮𝗯𝗼𝘂𝘁 𝗺𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗮𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗣𝘆𝘁𝗵𝗼𝗻. 𝗜𝘁’𝘀 𝗮𝗯𝗼𝘂𝘁 𝘂𝘀𝗶𝗻𝗴 𝘀𝗶𝗺𝗽𝗹𝗲 𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝘀 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲𝗹𝘆.

That’s where the real impact comes from.

What do you use the most in your workflow? 👇

#Python #DataAnalytics #Pandas #CareerGrowth #DataScience
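A minimal sketch of how a few of these pieces fit together on toy data (the column names and values here are invented for illustration):

```python
import pandas as pd

# Toy sales data with a missing value.
df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "sales":  [100, None, 300, 200],
})

# Handling missing values: fill with 0.
df["sales"] = df["sales"].fillna(0)

# groupby() for aggregations.
totals = df.groupby("region")["sales"].sum()
print(totals["East"])  # 400.0

# Filtering & slicing.
east = df[df["region"] == "East"]

# A list comprehension for a quick transformation.
doubled = [x * 2 for x in df["sales"]]
```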
Day 15/365: Merging Two Dictionaries with Summed Values in Python 🧮🔗

Today I worked on a very common real-world task: merging two dictionaries where overlapping keys should have their values added together.

🧠 What this code does:
I start with two dictionaries:
d1 = {1: 10, 2: 20, 3: 30}
d2 = {3: 40, 5: 50, 6: 60}

Each key can represent something like a product ID with its total sales, a student ID with total marks, or a user ID with total points.

The goal is to combine d2 into d1:
- If a key from d2 already exists in d1, I add the values.
- If the key doesn’t exist in d1, I insert it.

Step by step, I loop over each key i in d2 (for i in d2:). For each key:
- If i is already a key in d1, I update d1[i] by adding d2[i] to it.
- Otherwise, I create a new entry in d1 with that key and its value from d2.

After the loop finishes, d1 contains the merged result. For the given dictionaries:
- Key 3 exists in both, so its values are added: 30 + 40 = 70.
- Keys 5 and 6 only exist in d2, so they are added as new keys.

Final output: {1: 10, 2: 20, 3: 70, 5: 50, 6: 60}

💡 What I learned:
- How to merge two dictionaries manually using a loop and conditions.
- How to update values in a dictionary when keys overlap.
- How this pattern appears in real data tasks like combining monthly reports, merging user activity stats, and aggregating counts from multiple sources.

Next, I’d like to explore:
- Handling much larger dictionaries efficiently.
- Using dictionary methods like update() or Counter from collections to compare approaches.
- Trying the same logic with string keys (like product names) instead of numbers.

Day 15 done ✅ 350 more to go.

Got any other dictionary + loop problems (like counting frequencies from multiple sources or merging configs)? Drop them in the comments—I’d love to try them next.

#100DaysOfCode #365DaysOfCode #Python #Dictionaries #DataStructures #LogicBuilding #CodingJourney #LearnInPublic #AspiringDeveloper
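The merge loop described above, plus the Counter alternative mentioned as a next step, in runnable form:

```python
d1 = {1: 10, 2: 20, 3: 30}
d2 = {3: 40, 5: 50, 6: 60}

# Merge d2 into d1, summing values on overlapping keys.
for i in d2:
    if i in d1:
        d1[i] += d2[i]   # key exists in both: add the values
    else:
        d1[i] = d2[i]    # new key: insert it

print(d1)  # {1: 10, 2: 20, 3: 70, 5: 50, 6: 60}

# The same result with collections.Counter, which adds values on +:
from collections import Counter
merged = Counter({1: 10, 2: 20, 3: 30}) + Counter({3: 40, 5: 50, 6: 60})
```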
🚀 Python Series – Day 14: File Handling (Read & Write Files)

Yesterday, we explored advanced concepts in functions. Today, let’s learn something super practical — how Python works with files 📂

🧠 What is File Handling?
File handling allows you to:
✔️ Read data from files
✔️ Write data to files
✔️ Store information permanently
👉 Used in real-world projects like logs, data storage, reports, etc.

📂 Step 1: Open a File
file = open("demo.txt", "r")
👉 Modes:
"r" → Read
"w" → Write (overwrites file)
"a" → Append
"x" → Create new file

📖 Step 2: Read a File
file = open("demo.txt", "r")
print(file.read())
file.close()

✍️ Step 3: Write to a File
file = open("demo.txt", "w")
file.write("Hello, Python!")
file.close()

➕ Step 4: Append Data
file = open("demo.txt", "a")
file.write("\nLearning File Handling 🚀")
file.close()

🔥 Best Practice (Important!)
Use the with statement (auto-closes the file):
with open("demo.txt", "r") as file:
    data = file.read()
    print(data)

🎯 Why This Is Important:
✔️ Used in data science (CSV, logs)
✔️ Used in real-world applications
✔️ Helps manage large data

⚠️ Pro Tip: Always close files OR use with
👉 Otherwise buffered writes may not be flushed, and open file handles can leak

📌 Tomorrow: Exception Handling (Handle Errors Like a Pro!)
Follow me to master Python step-by-step 🚀

#Python #Coding #Programming #DataScience #LearnPython #100DaysOfCode #Tech #MustaqeemSiddiqui
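Putting the steps together with the recommended with statement (demo.txt is the example file name from the post; the text written is slightly simplified):

```python
# End-to-end sketch of the write / append / read steps, using `with` throughout.
with open("demo.txt", "w") as f:    # "w" overwrites (or creates) the file
    f.write("Hello, Python!")

with open("demo.txt", "a") as f:    # "a" appends to the end
    f.write("\nLearning File Handling")

with open("demo.txt", "r") as f:    # "r" reads; the file closes automatically
    data = f.read()

print(data)
# Hello, Python!
# Learning File Handling
```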
I used to think Data Structures were just a hard exam topic. Then StemLink lectures showed me how they actually work in Python — and everything changed. 🐍

Here's what I learned, broken down simply 👇

🔷 The 4 main data structures in Python:

📋 Lists → ordered, mutable, most used
mylist = []
mylist.append("item")  # add to end
mylist.sort()          # sort in place

📖 Dictionaries → key-value pairs, O(1) lookup
user = {"name": "Abiya", "age": 20}

🔸 Tuples → like lists but NOT mutable
coords = (6.9, 79.8)  # can't change this

🔹 Sets → unique values only, no duplicates
tags = {"python", "dsa", "python"}  # stores only once

🔷 How data lives inside structures:
Everything stored inside these is called an element. Elements are kept in order and accessed using indexes.
mylist[0]    # first item
mylist[-1]   # last item
mylist[0:3]  # slice from 0 to 2

🔷 How we move through data:
Loops + indexes work together to iterate through elements.
for elem in mylist:  # loop through every item
    print(elem)
mylist[i] = x  # modify by assigning to an integer index i

🔷 The big insight:
Lists are the most popular data structure in Python. They connect everything — elements, loops, indexes, methods, and assignment — all in one place. Once you understand Lists deeply, the rest makes sense.

These fundamentals go directly into the projects I build. Not just theory. Real code.

Which Python data structure do you use the most? 👇

#Python #DSA #DataStructures #StemLink #IITColombo #LearnToCode #CS #StudentDeveloper #BuildInPublic #Programming
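The indexing, slicing, and set-deduplication claims above can be verified directly:

```python
mylist = ["a", "b", "c", "d"]

assert mylist[0] == "a"                 # first item
assert mylist[-1] == "d"                # last item
assert mylist[0:3] == ["a", "b", "c"]   # slice from index 0 to 2

# Modify by assigning to an integer index:
mylist[1] = "B"

# Sets deduplicate automatically:
tags = {"python", "dsa", "python"}
assert len(tags) == 2                   # "python" is stored only once
```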
💡 Rearranging a Linked List In-Place — A Clean Trick I Learned Today

Worked on the Odd-Even Linked List problem today, and it turned out to be a great reminder that pointer manipulation doesn’t have to be messy.

The task sounds simple:
👉 Group all odd-indexed nodes together followed by even-indexed nodes

But the challenge is to do it:
- In-place (no extra space)
- In one pass

🔑 Approach that clicked for me:
- Maintain two pointers: odd → tracks odd-position nodes, even → tracks even-position nodes
- Keep a reference to the head of the even list (even_head)
- Rewire pointers while traversing
- Finally, connect the odd list to the even list

✨ What I liked:
- No extra data structures
- Pure pointer manipulation
- O(n) time and O(1) space

Python code: https://lnkd.in/gCB9ZGaq

🧠 Biggest takeaway: Linked list problems often look tricky, but once you clearly track pointers and their roles, the solution becomes surprisingly elegant.

Have you faced a linked list problem that looked hard but turned out simple after breaking it down? 👇

#LinkedList #DataStructures #CodingInterview #Python #ProblemSolving #LeetCode Rajan Arora
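A sketch of the rewiring described above, following the standard odd-even approach (the ListNode class and helper names here are my own, not the code behind the link):

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def odd_even_list(head):
    """Group odd-indexed nodes before even-indexed ones, in place, one pass."""
    if head is None:
        return None
    odd = head               # tracks odd-position nodes
    even = head.next         # tracks even-position nodes
    even_head = even         # remember the head of the even list
    while even and even.next:
        odd.next = even.next   # link odd node to the next odd node
        odd = odd.next
        even.next = odd.next   # link even node to the next even node
        even = even.next
    odd.next = even_head     # finally, connect odd list to even list
    return head

def to_list(node):
    """Collect node values into a plain list (for display only)."""
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

head = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5)))))
print(to_list(odd_even_list(head)))  # [1, 3, 5, 2, 4]
```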
This SATURDAY. Yes, SATURDAY. The most requested video - "Python Advanced Tutorial For Data Domain" - goes live at 6:00 PM!

A few days ago, I shared some facts about why you need to know way more than just the basics, and you guys literally showed so much LOVE in the comments. It’s clear: everyone knows they need to learn Python (beyond just the FUNDAMENTALS).

So, in this tutorial, we are going deep into the Python that is actually used in the industry. Quickly brush up on your basics (loops, functions, if-else) because we are diving into:
✅ Classes & Objects
✅ Constructors & Decorators
✅ Inheritance & Encapsulation
✅ Test Cases With PyTest
✅ Parallel Processing
✅ Incremental Data Loading
✅ Async Python
✅ Fetching Data From APIs
...And a lot of hands-on practical stuff!

I literally put my HEART into this tutorial because I know how important it is for YOU. People were asking me to make this a paid course, but I chose to keep it a FREE tutorial for my DATA FAM. I want everyone on YouTube to be able to LEARN and GROW without barriers.

Are you EXCITEDDD??? If yes, simply write "We love Python" in the comments!

📍 See you this Saturday at 6:00 PM!
Technical post: I've been posting some graphs on here, talking about functions and "equivalence". This all started while porting an MLOps framework from Python 3.10 to 3.12, and all the "dependency hell" one has to go through. Then the question naturally arose: "What are the boundaries of one project to another, in terms of functions being called, etc.?"

This led me down the rabbit hole (not too deep) of what happens when I do something like python -m <module> <somescript>. Specifically, what is a "no-op" module, and what kind of ops can we inject, thanks to Python being an interpreted language. A few years ago I'd worked on something along similar lines called TracePath, which provided a decorator to do something similar (e.g. who called whom, how long it took, etc.). So I merged these two ideas (avoid decorating every function, have an "inspector" module) and ran this on a simple pandas DataFrame creation. The resulting function-invocation graph is the image attached to this post.

When I ran it across the whole workflow (create, load, transform data, etc.), the graph had ~9000 connections. The nice thing is I can specify which modules (e.g. only pandas, or pandas and numpy) should be added to the graph.

What do you think is the next logical thing to do with something like this? What kind of graphs would well-structured software produce? How about badly written software? #graphs #swe #dependencyhell #python
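One minimal way to capture a who-called-whom graph without decorating every function is sys.setprofile. This is my own sketch of the general idea, not the author's TracePath or inspector code:

```python
import sys

edges = set()  # (caller, callee) function-name pairs

def profiler(frame, event, arg):
    # Record an edge from the caller's function to the callee's on each Python call.
    if event == "call" and frame.f_back is not None:
        edges.add((frame.f_back.f_code.co_name, frame.f_code.co_name))

def helper():
    return 42

def work():
    return helper() + 1

sys.setprofile(profiler)   # activate tracing for every subsequent call
work()
sys.setprofile(None)       # deactivate

# edges now contains the invocation edge ('work', 'helper'), among others.
# Filtering by frame.f_code.co_filename would restrict the graph to
# chosen modules (e.g. only pandas, or pandas and numpy).
```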
Most data analysts know Python. But not everyone uses it effectively.

This image covers some advanced Pandas techniques, and honestly, these are the kind of things that make a real difference in day-to-day work. Not because they’re “advanced”, but because they make your code cleaner, faster, and easier to maintain.

What stood out to me: instead of writing long, step-by-step transformations, you can chain operations for cleaner pipelines, use vectorized calculations instead of loops, and combine multiple aggregations in a single step.

Also, small things matter more than we think:
🔺 selecting only required columns
🔺 handling missing data thoughtfully
🔺 using proper joins instead of manual merges

These don’t sound fancy, but they save a lot of time in real projects.

𝐈'𝐦 𝐡𝐨𝐬𝐭𝐢𝐧𝐠 𝐚 𝐰𝐞𝐛𝐢𝐧𝐚𝐫 𝐨𝐧 𝐀𝐩𝐫𝐢𝐥 26. 𝐌𝐨𝐫𝐞 𝐝𝐞𝐭𝐚𝐢𝐥𝐬 𝐡𝐞𝐫𝐞: 👇 https://lnkd.in/gXQZCDV8

Visual Credits: Sohan Sethi

𝑾𝒂𝒏𝒕 𝒕𝒐 𝒄𝒐𝒏𝒏𝒆𝒄𝒕 𝒘𝒊𝒕𝒉 𝒎𝒆? 𝘍𝒊𝒏𝒅 𝒎𝒆 𝒉𝒆𝒓𝒆 --> https://lnkd.in/dTK-FtG3

Follow Shreya Khandelwal for more such content.

#Python #DataScience #Pandas #Analytics
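A small sketch of what chaining, vectorized calculation, and combined aggregation look like together (the data and column names are invented for illustration):

```python
import pandas as pd

# Toy data with a missing price.
df = pd.DataFrame({
    "category": ["A", "B", "A", "B", "A"],
    "price":    [10.0, 20.0, None, 40.0, 50.0],
    "qty":      [1, 2, 3, 4, 5],
})

# One chained pipeline instead of step-by-step temporaries:
summary = (
    df[["category", "price", "qty"]]                  # select only required columns
      .assign(price=lambda d: d["price"].fillna(0),   # handle missing data
              revenue=lambda d: d["price"].fillna(0) * d["qty"])  # vectorized, no loop
      .groupby("category")
      .agg(total_revenue=("revenue", "sum"),          # multiple aggregations
           avg_price=("price", "mean"))               # in a single step
)

print(summary.loc["A", "total_revenue"])  # 260.0
```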
🚀 Last month, I built and published my first Python package — Pristinizer

I wanted to solve a simple but real problem in data science:
👉 Cleaning and understanding raw datasets takes way too much time.

So I built Pristinizer, a lightweight Python package that helps streamline data cleaning + EDA in just a few lines of code.

🔍 What Pristinizer does:
• Cleans messy datasets (duplicates, missing values, column formatting)
• Generates structured dataset summaries
• Visualizes missing data (heatmap, matrix, bar chart)

⚙️ Tech Stack: Python • pandas • matplotlib • seaborn

📦 Try it out:
>> pip install pristinizer
>> import pristinizer as ps
df = ps.clean(df)
ps.summarize(df)
ps.missing_heatmap(df)

🧠 What I learned while building this:
• Designing a clean and intuitive API
• Structuring a real-world Python package
• Publishing to PyPI
• Writing proper documentation for users

📌 Next, I’m planning to add:
• Outlier detection
• Automated preprocessing pipelines
• Advanced EDA reports

Would love to hear your thoughts or feedback!

#Python #DataScience #MachineLearning #OpenSource #Pandas #EDA #Projects