Learning Data Structure And Algorithm: Tuple 👩🏾💻

Tuples are very similar to lists, but the big difference is that tuples are immutable. Once you create one, you can't change, add, or remove any element, unlike with lists. This makes them faster and more memory-efficient, especially when you want to store data that shouldn't change.

Tuples are also hashable (as long as every element inside them is hashable), which means Python can compute a stable hash value for them. This allows them to be used as keys in dictionaries or stored in sets. Lists can't do that because they can be modified.

Time and space complexity of tuples:
Time Complexity:
Accessing elements → O(1)
Searching → O(n)
Space Complexity: O(n), depending on the size of the data stored

I added more information about tuples in the image/code below. That's all for now, bye ☺️❤️

#Day6 #58DaysOfTech #PythonLearning #DSA #Tuple #TechJourney #LearnTogether #TechCommunity
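The points above can be sketched in a few lines. This is a minimal illustration (the dictionary and variable names are made up for the example): a tuple works as a dictionary key, supports O(1) indexing, and raises an error on mutation.

```python
# Tuples are immutable and hashable, so they can serve as dictionary keys.
point_names = {
    (0, 0): "origin",
    (1, 2): "point A",
}
print(point_names[(1, 2)])  # point A

# Accessing an element by index is O(1):
coords = (10, 20, 30)
print(coords[1])  # 20

# Attempting to modify a tuple raises a TypeError:
try:
    coords[0] = 99
except TypeError as e:
    print("Tuples are immutable:", e)
```

Note that a tuple containing a list, like `([1, 2], 3)`, is not hashable, because its list element can still be modified.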
Understanding Tuples in Python: Immutable and Hashable
The Kruskal-Wallis test is a non-parametric method used to determine whether there are statistically significant differences in the distributions of three or more independent groups, based on ranks. Unlike ANOVA, it does not assume that residuals are normally distributed, making it more flexible for analyzing data sets that do not meet this assumption.

Advantages of proper use:
✔️ Suitable for ordinal data or data sets where the normality assumption on residuals is not met.
✔️ Does not assume homogeneity of variance, offering more flexibility.
✔️ Can be used with small sample sizes, increasing its applicability in various research settings.

Challenges if not handled correctly:
❌ Interpretation can be complex, especially if the test is mistaken for a direct comparison of medians when the required conditions aren't met (e.g., independent samples, similarly shaped distributions).
❌ Less powerful than ANOVA when residuals are normally distributed and variances are equal.
❌ May require post-hoc tests to pinpoint which specific groups differ, adding complexity to the analysis.

🔹 Python: use the kruskal() function from the scipy.stats module for the analysis.

#Coding #RStudio #pythonforbeginners #Package #Data #database
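A minimal sketch of the test with scipy.stats.kruskal, using made-up sample data for three independent groups:

```python
from scipy.stats import kruskal

# Three independent groups (hypothetical sample data)
group_a = [12, 15, 14, 10, 13]
group_b = [22, 25, 19, 24, 21]
group_c = [9, 11, 8, 12, 10]

# kruskal() returns the H statistic and the p-value
stat, p_value = kruskal(group_a, group_b, group_c)
print(f"H statistic: {stat:.3f}, p-value: {p_value:.4f}")

if p_value < 0.05:
    print("At least one group's distribution differs from the others.")
```

A significant result only tells you that some difference exists; a post-hoc procedure is still needed to say which pairs of groups differ.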
Data Structure and Algorithm: Array 👩🏾💻

I've been using arrays for a while, but now I'm actually starting to understand how they work in memory and why their time complexity makes sense.

An array isn't just a bunch of items stored randomly. It's a contiguous block of memory where all the elements sit side by side. Because of that, the computer knows exactly where each element is stored, which is why accessing elements is really fast. For example, if you want the 5th element, the computer doesn't need to go through everything one by one; it calculates the exact position from the base memory address. That's why accessing an element is O(1), constant time. But inserting or deleting something in between is slower, O(n), because other elements may need to shift.

There are mainly two types of arrays:
1. One-dimensional array
2. Multi-dimensional array

A one-dimensional array is a straight line of elements. Think of it as a simple list like [10, 20, 30, 40]. Each element has an index (0, 1, 2, 3), which makes accessing any element easy and fast.

A multi-dimensional array, on the other hand, has more than one level, like a table (2D) or a cube (3D). A two-dimensional array feels like rows and columns in a spreadsheet. A three-dimensional array is like stacking multiple tables on top of each other: imagine a cube of data.

One thing that really stood out to me is that arrays are static in size, which means once you create them, you can't easily change their size. This is also why Python lists are more flexible: they're built on top of arrays but can grow or shrink dynamically.

Understanding how time and space complexity works made me realize how powerful arrays actually are:
Accessing an element → O(1)
Searching → O(n)
Insertion or deletion → O(n)
Traversing all elements → O(n)

I attached an image of examples of the different types of arrays below. That's all for now, bye ☺️❤️

#TechJourney #PythonLearning #TechCommunity #Array #DataStructure #DSA #Python #Programming #Algorithm
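The ideas above can be sketched with Python lists, which are array-backed (the variable names are made up for the example):

```python
# 1-D array: index -> element in constant time,
# because the position is computed from base address + index * element size
arr = [10, 20, 30, 40]
assert arr[2] == 30  # O(1) access

# 2-D array as a list of rows (rows x columns, like a spreadsheet)
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]
assert matrix[1][2] == 6  # row 1, column 2

# Insertion in the middle is O(n): every later element shifts right
arr.insert(1, 15)
assert arr == [10, 15, 20, 30, 40]
```

For a fixed-type, C-style array in Python, the standard library's array module or a NumPy ndarray is closer to the "contiguous block of memory" picture than a list of object references.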
The Cheat Sheet That Will 10x Your Speed.

The truth about data work? It's not the fancy models; it's the 20% of foundational commands you use 80% of the time. And that little moment of doubt when you need to quickly reshape an array, calculate covariance, or nail a complex multi-condition filter... that's where all the time goes.

I got fed up with bouncing between Stack Overflow and my IDE just to recall the syntax for np.linspace or df.dt.day. So I compiled this single-page, high-impact Python cheat sheet, specifically targeting the commands that separate the beginners from the power users.

This isn't your standard, fluffy list. This is the condensed power you need for:
- Linear Algebra: essential functions for ML foundations.
- Time Series Mastery: all the dt accessor methods (year, month, day) in one spot.
- Deep Aggregation: mastering groupby, agg, and the critical pivot table for reporting.

The goal is simple: stop searching, start doing.

Found this helpful? 🔃 Share it

#DataScience #Python #NumPy #Pandas #Productivity #CareerGrowth #MachineLearning
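A quick sketch of the commands named above (np.linspace, the dt accessor, groupby/agg, and pivot_table), on a small made-up sales table:

```python
import numpy as np
import pandas as pd

# Hypothetical sales data
df = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-10", "2024-02-15"]),
    "region": ["East", "West", "East", "West"],
    "revenue": [100.0, 150.0, 120.0, 90.0],
})

# dt accessor: pull time-series features out of a datetime column
df["month"] = df["date"].dt.month

# groupby + agg: several summary statistics per group in one call
summary = df.groupby("region")["revenue"].agg(["sum", "mean"])

# pivot_table: months as rows, regions as columns, summed revenue as values
pivot = df.pivot_table(values="revenue", index="month",
                       columns="region", aggfunc="sum")

# np.linspace: 5 evenly spaced points from 0 to 1 inclusive
grid = np.linspace(0, 1, 5)

print(summary)
print(pivot)
print(grid)  # [0.   0.25 0.5  0.75 1.  ]
```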
🧩 Understanding Missing Value Treatment in Data

I recently explored how to handle missing data, one of the most common challenges in any dataset. This work helped me learn various techniques for identifying and managing missing values to ensure clean and reliable data.

Key takeaways from my learning:
🔹 Detecting missing values using Pandas
🔹 Handling them with imputation, deletion, or replacement
🔹 Understanding the impact of missing data on analysis and models

This practical experience improved my understanding of data preprocessing and why it's crucial before any analysis or machine learning task.

Guided by: Ashish Sawant sir
🔗 GitHub Link: https://lnkd.in/e2tjgxKa
📁 Google Drive Link: https://lnkd.in/eyumw6Sf

#DataScience #DataCleaning #MissingValues #DataPreprocessing #Pandas #Python #MachineLearning #LearningJourney
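The three takeaways above (detect, impute/replace, delete) can be sketched with Pandas on a tiny made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, np.nan, 31, 28],
    "city": ["Pune", "Mumbai", None, "Delhi"],
})

# Detect: count missing values per column
print(df.isna().sum())

# Impute: fill a numeric column with its mean
df["age"] = df["age"].fillna(df["age"].mean())

# Replace: fill a categorical column with a placeholder
df["city"] = df["city"].fillna("Unknown")

# Delete: drop any rows that still contain missing values
clean = df.dropna()
```

Which strategy is right depends on why the data is missing; mean imputation, for instance, shrinks the column's variance and can bias downstream models.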
How Adding sort=False Made My Pandas Code 3x Faster

Just wrapped up the second phase of optimizing our data pipeline. After last week's vectorization work (20x speedup), I found another bottleneck hiding in plain sight.

The Problem: Pandas groupby operations were spending 60% of their time sorting results that we never needed sorted.

The Fix: One parameter.

# Before (slow)
df.groupby('cycle')['value'].min()

# After (fast)
df.groupby('cycle', sort=False)['value'].min()

Results:
GroupBy operations: 2-3x faster
Delta calculations: 4.3x faster
Overall aggregation: 2-4x faster
Combined with vectorization: 60x total speedup from baseline!

Key Takeaways:
Default ≠ optimal: Pandas sorts group keys by default; most use cases don't need it.
Use .values for math: df['a'].values - df['b'].values is 2-5x faster than df['a'] - df['b'].
Profile first: without profiling, I'd never have suspected sorting was the bottleneck.
Small changes can have a huge impact: 15 lines of code, 2-4x speedup, faster iteration, earlier insights.

Currently exploring Numba and Polars for the next phase. What's your favorite one-line performance boost?

#Python #Pandas #NumPy #Performance #DataEngineering
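A runnable sketch of the two tricks above, on a toy frame (column names follow the post; the data is made up). With sort=False the per-group results are identical; only the order of the group keys changes to first-occurrence order:

```python
import pandas as pd

df = pd.DataFrame({
    "cycle": [3, 1, 3, 2, 1],
    "value": [10, 5, 7, 9, 2],
})

# Default: group keys come back sorted (1, 2, 3)
sorted_min = df.groupby("cycle")["value"].min()

# sort=False: keys in first-occurrence order (3, 1, 2), skipping the sort
unsorted_min = df.groupby("cycle", sort=False)["value"].min()

# Same values per group, just a different key order
assert sorted_min.sort_index().equals(unsorted_min.sort_index())

# .values drops the index-alignment machinery for raw NumPy arithmetic
delta = df["value"].values - df["cycle"].values
```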
📊 Day 5 of My Data Analytics Journey with NumPy 🤍

Today, I explored Random Number Generation in NumPy along with Indexing & Slicing techniques. These functions are really helpful for simulations, testing, sampling, and data analysis tasks.

✨ Topics I practiced:
• np.random.randint() → Generate random integers
• np.random.rand() → Generate random floats (0 to 1)
• np.random.randn() → Generate random numbers from a normal distribution
• np.random.choice() → Random sampling from given data
• Indexing & Slicing → Accessing specific parts of arrays efficiently

💡 Learning Note: Understanding random data generation helps in mock data creation, model testing, and statistical analysis. Indexing & slicing makes data selection faster and cleaner.

Onwards with consistency 🚀

#NumPy #DataAnalytics #DataScience #Python #LearningJourney #Practice #LinkedInLearning #DailyProgress
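The five items above in one runnable snippet (seeded so results are reproducible; note that NumPy's newer `default_rng` Generator API is now recommended over these legacy `np.random` functions, but the legacy names are the ones listed):

```python
import numpy as np

np.random.seed(0)  # seed for reproducibility

ints = np.random.randint(0, 10, size=5)        # random integers in [0, 10)
floats = np.random.rand(3)                     # uniform floats in [0, 1)
normals = np.random.randn(4)                   # standard normal samples
sample = np.random.choice([10, 20, 30], size=2)  # sampling with replacement

# Indexing & slicing: start:stop:step
arr = np.arange(10)
print(arr[2:7:2])   # elements at indices 2, 4, 6
print(arr[-3:])     # last three elements
```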
💡 Day 18 of 30 – LeetCode Challenge: Maximum Depth of Binary Tree

Today's problem helped me strengthen my understanding of tree traversal and recursion in binary trees, a core concept in data structures.

🧩 Problem Summary
Given the root of a binary tree, return its maximum depth: the number of nodes along the longest path from the root down to the deepest leaf node.

⚙️ Example
Input: root = [3,9,20,null,null,15,7]
Output: 3
📖 Explanation: The longest path is 3 → 20 → 7 (or 3 → 20 → 15), which contains 3 nodes, so the maximum depth is 3.

🧠 Approach
We can solve this using recursion:
1. If the current node is None, return 0 (base case).
2. Recursively find the depth of the left and right subtrees.
3. The maximum depth is 1 + max(left_depth, right_depth).

⏱ Complexity Analysis
Time Complexity: O(n) — each node is visited once
Space Complexity: O(h) — recursion stack (h = tree height)

✨ Key Learnings
✅ Understood how recursion elegantly breaks down hierarchical structures like trees.
✅ Reinforced the concept of base cases in recursive solutions.
✅ Realized how traversal patterns (preorder, inorder, postorder) apply in different problem contexts.

#Day18 #LeetCode #BinaryTree #Recursion #DSA #CodingChallenge #Python #WomenWhoCode #LearningJourney #ProblemSolving
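The three recursive steps above translate almost line for line into Python (using the usual LeetCode-style TreeNode; the snake_case function name is my own):

```python
from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_depth(root: Optional[TreeNode]) -> int:
    if root is None:                  # base case: empty tree has depth 0
        return 0
    left_depth = max_depth(root.left)
    right_depth = max_depth(root.right)
    return 1 + max(left_depth, right_depth)

# Build the example tree [3,9,20,null,null,15,7]
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(max_depth(root))  # 3
```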
🚀 Built an Automatic Dataset Generator using Python that creates multiple realistic synthetic datasets for machine learning and data analysis — all offline! It generates 8 types of datasets, including e-commerce, customers, sales, employees, social media, weather, website analytics, and student performance, using Pandas and Faker.

GitHub repo: https://lnkd.in/grXRJm85

Perfect for EDA, analytics practice, and model testing.

#Python #DataScience #MachineLearning #Dataset #Faker #Pandas #Project #codealpha CodeAlpha
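The repo itself uses Pandas and Faker; as a dependency-free sketch of the same idea, here is a tiny customer-dataset generator using only the standard library (the column names and value pools are made up for illustration):

```python
import csv
import random

random.seed(7)  # seeded so the output is reproducible

NAMES = ["Asha", "Ben", "Chitra", "Dev", "Elena"]
CITIES = ["Pune", "Lagos", "Berlin", "Austin"]

def generate_customers(n: int, path: str = "customers.csv") -> None:
    """Write n synthetic customer rows to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "name", "city", "lifetime_value"])
        for i in range(1, n + 1):
            writer.writerow([
                i,
                random.choice(NAMES),
                random.choice(CITIES),
                round(random.uniform(10, 500), 2),  # spend in currency units
            ])

generate_customers(5)
```

Faker adds realistic names, addresses, and emails on top of this pattern; the structure (pick random values per column, write rows) stays the same.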