🚨 Python Fun Fact Day 6

You don't need a temp variable to swap values 😏

a = 5
b = 10
a, b = b, a
print(a, b)

Output: 10 5

Wait… how did that even work?

In many languages, you'd do:

temp = a
a = b
b = temp

But Python says: "3 lines? Nah… 1 is enough" 😌

What's happening under the hood:
Conceptually, Python packs the right-hand side into a tuple:

(a, b) = (b, a)

Then it unpacks the values back into the names.

Why this is powerful:
• Cleaner code
• Less chance of bugs
• Widely used in real-world Python

Bonus trick:
You can swap more than 2 variables too:

a, b, c = 1, 2, 3
a, b, c = c, a, b
print(a, b, c)

Output: 3 1 2

Python be like: "Even a shortcut should be elegant"

📌 Day 7 coming tomorrow:
Two variables that look the same… but behave totally differently 😈

Follow for more "it was THAT simple??" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
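Want to verify the "under the hood" claim yourself? A quick sketch with the standard-library dis module: for two or three targets, CPython typically emits swap/rotate instructions rather than building a real tuple (opcode names vary by version); longer right-hand sides do build and unpack one.

import dis

def swap():
    a, b = 5, 10
    a, b = b, a      # shows SWAP on CPython 3.11+, ROT_TWO on older versions
    return a, b

dis.dis(swap)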
🚨 Python Fun Fact Day 5

A tuple is immutable… right? 👀 Then explain THIS 😳

t = ([1, 2, 3], [4, 5])
t[0].append(99)
print(t)

Expected: Error, because a tuple is immutable

Actual Output: ([1, 2, 3, 99], [4, 5])

What's going on?
Yes, tuples are immutable… BUT: they only protect the structure, not the contents.

So:
• You can't change the tuple itself
• But you can modify objects inside it

In this case:
• The list inside the tuple is still mutable
• So .append() works perfectly fine

Simple breakdown:
Tuple = locked container
But items inside = free to move

Why this matters:
• Can lead to unexpected data changes
• Important in nested data structures
• Common confusion in interviews 😅

Python be like: "Strict on the outside… flexible on the inside"

📌 Day 6 coming tomorrow:
A clean Python trick that replaces 3 lines with just 1 😎

Follow for more "this is allowed??" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
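To see both halves of the rule in one snippet: mutating an object inside the tuple works, but rebinding one of the tuple's slots does not.

t = ([1, 2, 3], [4, 5])

t[0].append(99)     # fine: mutates the list object the tuple points to
print(t)            # ([1, 2, 3, 99], [4, 5])

try:
    t[0] = [0]      # not fine: tries to change the tuple's structure
except TypeError as e:
    print(e)        # 'tuple' object does not support item assignment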
🚨 Python Fun Fact Day 8

Python can remove duplicates… in ONE line 🔥

nums = [1, 2, 2, 3, 4, 4]
unique_nums = list(set(nums))
print(unique_nums)

Output (order may vary): [1, 2, 3, 4]

Wait… that's it?? Yup. set() automatically removes duplicates, because sets only store unique values.

But there's a catch:
Sets are unordered, so you might get:

[2, 1, 4, 3]

If you want order preserved:

nums = [1, 2, 2, 3, 4, 4]
unique_nums = list(dict.fromkeys(nums))
print(unique_nums)

Output: [1, 2, 3, 4]

Real-world use cases:
• Data cleaning (your daily scene 😏)
• Removing duplicate users/emails
• Preprocessing datasets for ML

Python be like: "Less code, more done"

📌 Day 9 coming tomorrow:
A comparison trick that looks illegal… but works perfectly 😳

Follow for more "it was THAT easy?" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
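One more pattern for the "removing duplicate users" case: when the items themselves are unhashable (like dicts), set() won't work, so track the keys you've seen. A minimal sketch — the "id" field is just a hypothetical illustration:

rows = [{"id": 1}, {"id": 2}, {"id": 1}]   # hypothetical records
seen, deduped = set(), []
for r in rows:
    if r["id"] not in seen:    # dicts aren't hashable, but their keys are
        seen.add(r["id"])
        deduped.append(r)
print(deduped)                 # [{'id': 1}, {'id': 2}]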
🚨 Python Fun Fact Day 12

Sometimes "is" returns True… for no obvious reason

a = 256
b = 256
print(a is b)

Output: True

Now try this:

a = 257
b = 257
print(a is b)

Output: False (…wait what??)

What's going on?
Python does a hidden optimization called integer caching: numbers from -5 to 256 are pre-stored in memory, so Python reuses the same object.

That's why:
256 is 256 → True
257 is 257 → False (new objects created)

Important note:
This behavior can vary depending on:
• Python version
• Environment (REPL, script, etc.)

So never rely on this in real code.

Golden rule:
Use == for value comparison.
Use "is" only for identity (like None):

if x is None:  # correct

Python be like: "I'll optimize… and confuse you while I'm at it"

Real-world lesson:
If you don't understand "is" vs ==, bugs will find you.

📌 Day 13 coming tomorrow:
A one-line list that creates a BIG hidden bug

Follow for more "Python, what is going on?" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
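You can watch the caching with id(). Note the 257 result depends on how the code is compiled (REPL line-by-line vs. a script), so treat it as illustrative, not guaranteed:

a, b = 256, 256
print(a is b)          # True: CPython caches small ints (-5..256)
print(id(a) == id(b))  # True: literally the same object

a, b = 257, 257
print(a == b)          # True: the values are equal
print(a is b)          # often False typed line-by-line in a REPL;
                       # may be True in a script, where constants get shared

x = None
if x is None:          # identity is the right check for None
    print("use == for values, 'is' for identity")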
Python AsyncIO Internals

You use async def and await. You know the surface. Sometimes your code deadlocks. Or it runs slow. You need a mental model to fix this.

Async Python is not parallel. It is concurrent. One coroutine runs at a time. If a coroutine does not yield, nothing else runs.

A coroutine is created from an async function. It pauses at specific points. It resumes later. The coroutine decides when to stop. The interpreter does not force it.

The event loop drives the code. It calls send() on the coroutine. The await keyword pauses the task. It yields control back to the loop.

Learn these three terms:
- Coroutine: An object created by async def. It needs a driver.
- Future: A placeholder for a value not yet ready.
- Task: A wrapper. It schedules a coroutine on the loop.

Do not block the loop. time.sleep stops the OS thread. The event loop stops too. Use asyncio.sleep instead. Use asyncio.to_thread for blocking calls; for heavy CPU work the GIL still applies, so a process pool is the safer escape hatch.

Cancellation is not a kill switch. It throws a CancelledError into the task. You must re-raise this error. If you hide it, the task stays alive.

Async Python is a single-threaded scheduler. It runs callbacks in order. Everything works when coroutines yield often. Everything breaks when something holds the thread.

Source: https://lnkd.in/gJPpwWR3
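A minimal, standard-library-only sketch of the two failure modes above — blocking the loop and swallowing cancellation:

import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)   # yields: the loop stays free
    except asyncio.CancelledError:
        print("cleaning up...")
        raise                       # re-raise, or the task never finishes as cancelled

async def main():
    task = asyncio.create_task(worker())   # Task: schedules the coroutine
    await asyncio.sleep(0.1)               # let the worker start
    task.cancel()                          # throws CancelledError into it
    try:
        await task
    except asyncio.CancelledError:
        print("worker cancelled cleanly")
    # time.sleep(5) here would freeze every task: use asyncio.sleep,
    # or asyncio.to_thread(blocking_fn) to push blocking work off the loop

asyncio.run(main())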
#Mathematics #Python #NumberTheory #Primes #Algorithms #Coding #DataScience

Why do Prime Numbers "follow" the number e?

There is a deep connection between prime numbers and the Euler constant e (2.718...). At first glance, they seem unrelated—discrete building blocks of integers vs. continuous exponential growth. But the Least Common Multiple (LCM) of the range 1 to n reveals the connection.

The LCM is the "Prime Blueprint"—the smallest number that contains the highest power of every prime building block needed to construct every integer from 1 to n. As n grows, the value of the LCM tracks with eⁿ.

Mathematically, this is expressed through two beautiful limits:
1️⃣ lim (n→∞) [ln(lcm(1...n)) / n] = 1
2️⃣ lim (n→∞) [ⁿ√lcm(1...n)] = e

To visualize this, we used the Sieve of Eratosthenes—a 2,000-year-old algorithm that finds primes by "sifting." You start at 2 and cross out all its multiples, then move to 3 and do the same. By the time you reach √n, every composite number has been filtered out.

How efficient is it? The total work is roughly O(n log log n) arithmetic operations, assuming O(1) array access. It is nearly as fast as just counting the numbers!

Key takeaways from the analysis:
✅ Infinite Recharge: This convergence implies that primes are infinite. If we ran out of primes, the LCM growth would collapse and fall behind eⁿ.
✅ The "Staircase" Effect: The LCM only jumps when n hits a prime power (2, 3, 4, 5, 7, 8, 9...). Between these jumps, the integer staircase stays flat while eⁿ pulls ahead.
✅ No Infinite Gaps: For the growth to stay linear, primes must appear frequently enough to "kick" the staircase back up to the target line.

Mathematics is the art of finding the smooth curve hidden behind a jagged set of integers.

The Python code to find the prime numbers and the LCM is in the attached report.
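The attached report isn't part of this page, so here is a minimal self-contained sketch of the idea: it rebuilds lcm(1..n) from the sieve (lcm(1..n) is the product over primes p ≤ n of the largest power pᵏ ≤ n) and checks both limits numerically.

import math

def sieve(n):
    # Sieve of Eratosthenes: all primes <= n, ~O(n log log n)
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]

def lcm_upto(n):
    # lcm(1..n) = product over primes p <= n of the largest p^k <= n
    L = 1
    for p in sieve(n):
        pk = p
        while pk * p <= n:
            pk *= p
        L *= pk
    return L

for n in (10, 100, 1000, 10000):
    L = lcm_upto(n)                 # math.log accepts arbitrarily large ints
    print(n, math.log(L) / n, math.exp(math.log(L) / n))
    # first column -> 1, second column -> e (2.718...) as n grows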
💡 Generated Subsets with Duplicates Using Recursion — But There's Room to Improve

Today I worked on the Subsets II problem: generate all possible subsets from an array that may contain duplicates.

Example: [1,2,2]
Valid output: [], [1], [2], [1,2], [2,2], [1,2,2]

⚙️ My Approach: Recursive Include / Exclude

I used classic backtracking logic. For each element:
- Exclude it from the current subset
- Include it in the current subset

To handle duplicates:
- First sort the array
- Generate all subsets
- Remove duplicate subsets at the end using set()

(A runnable sketch of this approach follows below.)

✨ Why I liked this approach:
- Very intuitive recursion pattern
- Easy to understand
- Great for learning include/exclude decisions

Python code: https://lnkd.in/gXcYNZa9

📊 Complexity:
Time: O(2^n · n) — each of the 2^n subsets gets copied and hashed
Space: O(n) recursion stack (excluding output)

🧠 But here's the real question:
👉 Can you give the best optimized solution? Instead of generating duplicates first and removing them later, how would you skip duplicates during recursion itself?

Would love to learn cleaner approaches from the community 👇

#Recursion #Backtracking #Algorithms #Python #CodingInterview #LeetCode #ProblemSolving

Rajan Arora
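Since the linked code sits behind a shortener, here's a minimal sketch of the approach described above (sort, include/exclude recursion, dedupe via a set of tuples) — a sketch, not necessarily the author's exact code:

def subsets_with_dup(nums):
    nums = sorted(nums)                # sort so duplicate subsets collide
    out = set()

    def rec(i, cur):
        if i == len(nums):
            out.add(tuple(cur))        # tuples are hashable, lists are not
            return
        rec(i + 1, cur)                # exclude nums[i]
        rec(i + 1, cur + [nums[i]])    # include nums[i]

    rec(0, [])
    return [list(t) for t in out]

print(sorted(subsets_with_dup([1, 2, 2]), key=len))
# e.g. [[], [1], [2], [1, 2], [2, 2], [1, 2, 2]]  (order within a length may vary)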
Hyperparameter Optimization for Machine Learning using SMAC

#machinelearning #datascience #hyperparameteroptimization #smac

SMAC is a tool for algorithm configuration: it optimizes the parameters of arbitrary algorithms, including hyperparameter optimization of machine learning algorithms. The core combines Bayesian optimization with an aggressive racing mechanism to efficiently decide which of two configurations performs better.

SMAC3 is written in Python 3 and continuously tested with Python 3.8, 3.9, and 3.10. Its random forest is written in C++. In what follows, "SMAC" refers to SMAC3.

https://lnkd.in/gMXPgSrP
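For a feel of the API, a minimal sketch shaped after the SMAC3 quickstart — treat the exact names (Scenario, HyperparameterOptimizationFacade) as assumptions, since the API has shifted between SMAC3 versions:

import numpy as np
from ConfigSpace import Configuration, ConfigurationSpace
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from smac import HyperparameterOptimizationFacade, Scenario  # names assumed from SMAC3 docs

iris = load_iris()

def train(config: Configuration, seed: int = 0) -> float:
    # SMAC minimizes the objective, so return 1 - accuracy
    clf = SVC(C=config["C"], random_state=seed)
    return 1 - np.mean(cross_val_score(clf, iris.data, iris.target, cv=5))

configspace = ConfigurationSpace({"C": (0.1, 1000.0)})   # one float hyperparameter
scenario = Scenario(configspace, deterministic=True, n_trials=100)

smac = HyperparameterOptimizationFacade(scenario, train)
incumbent = smac.optimize()   # best configuration found within the budget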
🧠 Understanding Activation Functions in CNNs

Think of a CNN as a machine looking at an image.
- A layer scans the image and produces some numbers.
- These numbers represent how strongly something is detected, like edges, shapes, etc.

👉 But here's the problem: those numbers are just numbers. The network doesn't yet decide what matters.

Here comes the activation function, which acts like a decision gate. It looks at each number and says: keep this, ignore this, or adjust this.

Let's say a CNN layer gives this output: [-1, -3, 0, 2, 6]. Now apply the most common activation function (ReLU). Its rules:
- if number < 0 → make it 0
- if number > 0 → keep it

As a result, weak or useless signals (negative values) are removed and important signals (positive values) are kept, so the output becomes: [0, 0, 0, 2, 6]

This is important because without activation functions every layer just does a computation like `output = input * weight`, and even with more layers it's still one big linear calculation, which means:
- No learning complex patterns
- No understanding images
- No intelligence

#DeepLearning #CNN #Learning

Here is the Python implementation.
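The attached implementation isn't visible in this scrape, so here is a minimal NumPy sketch of the same ReLU idea:

import numpy as np

def relu(x):
    # keep positive signals, zero out negative ones
    return np.maximum(0, x)

feature_map = np.array([-1, -3, 0, 2, 6])   # the layer output from the example
print(relu(feature_map))                    # [0 0 0 2 6]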
Hyperparameter Optimization for Machine Learning using shap-hypetune

#machinelearning #datascience #hyperparameteroptimization #shaphypetune

shap-hypetune: a Python package for simultaneous hyperparameter tuning and feature selection for gradient boosting models.

Overview:
Hyperparameter tuning and feature selection are two common steps in every machine learning pipeline. Most of the time they are computed separately and independently, which can lead to suboptimal performance and a more time-expensive process.

shap-hypetune aims to combine hyperparameter tuning and feature selection in a single pipeline, optimizing the number of features while searching for the optimal parameter configuration. Hyperparameter tuning and feature selection can also be carried out as standalone operations.

shap-hypetune main features:
• designed for gradient boosting models, such as LGBModel or XGBModel;
• effective in both classification and regression tasks;
• customizable training process, supporting early stopping and all the other fitting options available in the standard algorithm APIs;
• ranking feature selection algorithms: Recursive Feature Elimination (RFE) or Boruta;
• classical boosting-based feature importances or SHAP feature importances (the latter can also be computed on the eval_set);
• grid search or random search.

https://lnkd.in/g-mvcdrX
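For a feel of the workflow, a minimal sketch shaped after the shap-hypetune README — the class and argument names (BoostRFE, param_grid, eval_set) and the result attributes are assumptions taken from that README, so check them against your installed version:

from lightgbm import LGBMClassifier
from shaphypetune import BoostRFE   # name assumed from the README
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# One pass does both jobs: RFE ranks and prunes features while
# grid search explores the hyperparameter grid
model = BoostRFE(
    LGBMClassifier(),
    param_grid={"n_estimators": [100, 200], "learning_rate": [0.05, 0.1]},
    min_features_to_select=5,
    step=1,
)
model.fit(X_tr, y_tr, eval_set=[(X_va, y_va)])

print(model.best_params_, model.n_features_)   # attribute names assumed from the README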