🚨 Python Fun Fact Day 12

Sometimes "is" returns True… for no obvious reason

a = 256
b = 256
print(a is b)

Output: True

Now try this:

a = 257
b = 257
print(a is b)

Output: False (…wait, what??)

What's going on?
Python does a hidden optimization called integer caching: numbers from -5 to 256 are pre-allocated in memory, so Python reuses the same object.

That's why:
256 is 256 → True
257 is 257 → False (new objects created)

Important note: this behavior can vary depending on:
• Python version
• Environment (REPL, script, etc.)
So never rely on this in real code.

Golden rule:
Use == for value comparison.
Use "is" only for identity checks (like None):

if x is None:  # correct

Python be like: "I'll optimize… and I'll confuse you too"

Real-world lesson: if you don't understand "is" vs ==, bugs will find you.

📌 Day 13 coming tomorrow: a one-line list that creates a BIG hidden bug
Follow for more "what is Python even doing?" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
Python Integer Cache Behavior
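A quick runnable sketch of the behavior described above. This is CPython-specific: the -5..256 cache is an implementation detail, not a language guarantee.

```python
# CPython pre-allocates small ints in the range -5..256, so `is` can
# return True for them. Never rely on this in real code.
a = 256
b = 256
small_same = a is b        # True on CPython: both names point at the cached object

# The one idiomatic use of `is`: identity checks against None.
x = None
checked = x is None

# For values, always compare with ==, regardless of object identity.
big_a = 10 ** 6
big_b = 10 ** 6
values_equal = big_a == big_b
```

In short: == answers "do these hold the same value?", while is answers "are these literally the same object?".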
Graph Data libraries for graph processing, clustering, embedding, and machine learning tasks

Machine Learning on Graph Data using networkx
#machinelearning #datascience #graphdata #networkx

NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.

Software for complex networks:
• Data structures for graphs, digraphs, and multigraphs
• Many standard graph algorithms
• Network structure and analysis measures
• Generators for classic graphs, random graphs, and synthetic networks
• Nodes can be "anything" (e.g., text, images, XML records)
• Edges can hold arbitrary data (e.g., weights, time-series)
• Open source, 3-clause BSD license
• Well tested, with over 90% code coverage
• Additional benefits from Python: fast prototyping, easy to teach, multi-platform

https://lnkd.in/gxVxbYm5
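A minimal sketch of the points above (string nodes, edges holding weight data), assuming networkx is installed; the node names and weights are made up for illustration:

```python
import networkx as nx

# Nodes can be any hashable object; edges can carry arbitrary data.
G = nx.Graph()
G.add_edge("Alice", "Bob", weight=2.0)
G.add_edge("Bob", "Carol", weight=1.0)
G.add_edge("Alice", "Carol", weight=5.0)

# Standard graph algorithms work out of the box: the weighted shortest
# path goes Alice -> Bob -> Carol (total 3.0), beating the direct edge (5.0).
path = nx.shortest_path(G, "Alice", "Carol", weight="weight")
```
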
🚨 Python Fun Fact Day 6

You don't need a temp variable to swap values 😏

a = 5
b = 10
a, b = b, a
print(a, b)

Output: 10 5

Wait… how did that even work?

In many languages, you'd do:

temp = a
a = b
b = temp

But Python says: "3 lines? Nah… one is enough" 😌

What's happening under the hood:
Python creates a tuple behind the scenes:

(a, b) = (b, a)

Then it unpacks the values back into the variables.

Why this is powerful:
• Cleaner code
• Less chance of bugs
• Widely used in real-world Python

Bonus trick: you can swap more than 2 variables too:

a, b, c = 1, 2, 3
a, b, c = c, a, b
print(a, b, c)

Python be like: "Even a shortcut should be elegant"

📌 Day 7 coming tomorrow: two variables that look the same… but behave totally differently 😈

Follow for more "it was that simple??" moments 😄

#Python #Programming #Developers #Coding #AI #DataScience #LearnPython
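The tuple-unpacking mechanism described above, as a runnable sketch; the same mechanism also powers extended unpacking:

```python
# The right-hand side is evaluated first into a tuple, then unpacked
# into the left-hand names, so no temp variable is needed.
a, b = 5, 10
a, b = b, a
swapped = (a, b)           # now (10, 5)

# Extended unpacking uses the same machinery: the starred name
# collects whatever is left over into a list.
first, *rest = [1, 2, 3, 4]
```
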
🚀 Python Series – Day 6: Conditional Statements (if-else)

Till now, we learned how to take input and use operators 💻
But how does a program make decisions? 🤔
👉 Using Conditional Statements 🔥

🧠 What is a Condition?
A condition checks whether something is True or False.

✅ Basic if Statement

age = 18
if age >= 18:
    print("You are eligible to vote")

🔁 if-else Statement

age = int(input("Enter your age: "))
if age >= 18:
    print("Eligible")
else:
    print("Not Eligible")

🔄 if-elif-else (Multiple Conditions)

marks = int(input("Enter marks: "))
if marks >= 90:
    print("Grade A")
elif marks >= 75:
    print("Grade B")
elif marks >= 50:
    print("Grade C")
else:
    print("Fail")

⚠️ Important Rule
👉 Indentation matters in Python!

Incorrect (the body is not indented, so this raises IndentationError):

if age >= 18:
print("Eligible")

Correct:

if age >= 18:
    print("Eligible")

🎯 Why is this important?
✔ Used in decision making
✔ Used in real-world logic
✔ Used in every program

❓ Question for you: What will be the output?

x = 10
if x > 5:
    print("A")
elif x > 8:
    print("B")
else:
    print("C")

👉 Comment your answer 👇

📌 Tomorrow: Loops (for & while) 🔥

#Python #Coding #DataScience #Programming #LearnPython #Beginners #Tech #MustaqeemSiddiqui
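The if-elif-else chain above can be wrapped in a small function to show that branches are tested top-down and the first True branch wins (the function name grade is just for illustration):

```python
def grade(marks):
    # Branches are checked top-down; the first condition that is True
    # wins, so thresholds must go from highest to lowest.
    if marks >= 90:
        return "Grade A"
    elif marks >= 75:
        return "Grade B"
    elif marks >= 50:
        return "Grade C"
    else:
        return "Fail"

results = [grade(m) for m in (95, 75, 60, 10)]
# ["Grade A", "Grade B", "Grade C", "Fail"]
```

Note that grade(75) hits the second branch, not the third, even though 75 >= 50 is also True: only the first matching branch runs.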
Python Data Types – Quick Reference 🐍

Numeric Types
- Integer (int)
- Float (float)
- Complex Number (complex)

Boolean (bool)
- True or False values

Sequence Types (ordered collections)
- List (list): ordered, mutable
- Tuple (tuple): ordered, immutable
- Range (range): sequence of numbers (commonly used in loops)

Dictionary (dict)
- Key-value pairs, mutable (insertion-ordered since Python 3.7)

Set (set)
- Unordered, unique elements

None (NoneType)
- Represents the absence of a value
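A small sketch touching each type from the reference above:

```python
examples = {
    "int": 42,
    "float": 3.14,
    "complex": 2 + 3j,
    "bool": True,
    "list": [1, 2, 3],          # ordered, mutable
    "tuple": (1, 2),            # ordered, immutable
    "range": range(3),          # lazy sequence: 0, 1, 2
    "dict": {"key": "value"},   # key-value pairs
    "set": {1, 2, 2, 3},        # duplicates collapse to {1, 2, 3}
    "none": None,               # absence of a value
}

range_values = list(examples["range"])   # materializes [0, 1, 2]
unique = examples["set"]
```
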
Python string methods most commonly used

str.lower() - converts to lowercase
str.upper() - converts to uppercase
str.capitalize() - capitalizes the first letter
str.title() - capitalizes the first letter of each word
str.find(sub) - returns the lowest index, or -1 if not found
str.index(sub) - like find, but raises ValueError if not found
str.count(sub) - counts non-overlapping occurrences
str.replace(old, new) - replaces occurrences
str.strip() - removes leading/trailing whitespace
str.split(sep) - splits into a list
sep.join(iterable) - joins an iterable with a separator
str.format(*args, **kwargs) - formats the string
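A few of the methods above in action:

```python
s = "  Hello World  "
stripped = s.strip()                          # "Hello World"
lowered = stripped.lower()                    # "hello world"
titled = "python string methods".title()      # "Python String Methods"
count = "banana".count("an")                  # 2 (non-overlapping)
idx = "banana".find("na")                     # 2 (index of first occurrence)
parts = "a,b,c".split(",")                    # ["a", "b", "c"]
joined = "-".join(parts)                      # "a-b-c"
formatted = "{}: {}".format("count", count)   # "count: 2"
```
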
Make your Python 100x faster with async!

You've likely seen that headline and maybe even clicked it. The honest truth is that async doesn't actually make your code faster; it makes your waiting smarter.

Your CPU isn't slow. Instead, your code spends most of its time idle, waiting for a database response, an API call, or a file to load. This is known as I/O. During all that waiting, synchronous Python just sits there, frozen, blocking everything behind it.

async addresses the waiting problem, not the computing problem.

So when can async actually give you that 100x improvement? When you have 100 tasks that each spend 99% of their time waiting. Instead of processing them one by one:

Sync: each request waits for the previous one.
100 requests × 1 second each = 100 seconds.

for url in urls:
    response = requests.get(url)  # blocked. waiting. doing nothing.

With async, you can fire them all at once:

Async: all 100 requests fire simultaneously.
100 requests, all waiting together = ~1 second.

tasks = [fetch(url) for url in urls]
results = await asyncio.gather(*tasks)  # done.

You achieve the same number of requests, same network speed, and same server, but with a 100x wall-clock time difference, because you've eliminated wasted time.

The key takeaway isn't to "use async everywhere." It's to understand where your time is actually going. Is it waiting? Async wins. Profile first. Optimize second. That's how you truly make Python fast.

#Python #AsyncProgramming #SoftwareEngineering #BackendDevelopment #Programming #PythonTips
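A self-contained sketch of the concurrent pattern above, using asyncio.sleep as a stand-in for network I/O (fetch here is a made-up coroutine, not a library function). Ten tasks that each "wait" 0.1s finish together in roughly 0.1s, not 1s:

```python
import asyncio
import time

async def fetch(i):
    # Stand-in for an I/O-bound call (network, DB): we just sleep.
    await asyncio.sleep(0.1)
    return i

async def main():
    # All coroutines wait concurrently; gather preserves input order.
    return await asyncio.gather(*(fetch(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
# elapsed is ~0.1s, far below the ~1.0s a sequential loop would take
```

Note that this wins only because the tasks are waiting; CPU-bound work gains nothing from this pattern, which is exactly the "profile first" point above.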
Hyperparameter Optimization for Machine Learning using shap-hypetune
#machinelearning #datascience #hyperparameteroptimization #shaphypetune

shap-hypetune: a Python package for simultaneous hyperparameter tuning and feature selection for gradient boosting models.

Overview
Hyperparameter tuning and feature selection are two common steps in every machine learning pipeline. Most of the time they are computed separately and independently, which may result in suboptimal performance and a more time-expensive process. shap-hypetune aims to combine hyperparameter tuning and feature selection in a single pipeline, optimizing the number of features while searching for the optimal parameter configuration. Hyperparameter tuning and feature selection can also be carried out as standalone operations.

shap-hypetune main features:
- designed for gradient boosting models, such as LGBModel or XGBModel;
- effective in both classification and regression tasks;
- customizable training process, supporting early stopping and all the other fitting options available in the standard algorithm APIs;
- ranking feature selection algorithms: Recursive Feature Elimination (RFE) or Boruta;
- classical boosting-based feature importances or SHAP feature importances (the latter can also be computed on the eval_set);
- grid search or random search.

https://lnkd.in/g-mvcdrX
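As a generic illustration of one strategy mentioned above, Recursive Feature Elimination wrapped around a gradient boosting model, here is a sketch using scikit-learn's RFE rather than shap-hypetune's own API; see the linked repo for the package's actual interface.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE

# Synthetic data: 10 features, only 3 of which are informative.
X, y = make_classification(
    n_samples=200, n_features=10, n_informative=3, random_state=0
)

# RFE repeatedly fits the model and drops the least important feature
# until only n_features_to_select remain.
selector = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=3)
selector.fit(X, y)

kept = selector.support_   # boolean mask over the 10 input features
```
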
😊❤️ Today's topic: Encapsulation in Python
============

Encapsulation means restricting direct access to data and controlling how it is modified. It helps protect data and maintain integrity.

Basic Example (No Encapsulation):

class Account:
    def __init__(self, balance):
        self.balance = balance

acc = Account(1000)
acc.balance = -500  # Invalid change
print(acc.balance)

Problem: anyone can change the balance directly, even to invalid values.

Using Encapsulation:

class Account:
    def __init__(self, balance):
        self.__balance = balance  # private variable

    def deposit(self, amount):
        self.__balance += amount

    def get_balance(self):
        return self.__balance

acc = Account(1000)
acc.deposit(500)
print(acc.get_balance())

Output: 1500

Explanation:
__balance → private variable (cannot be accessed directly)
Access is controlled using methods

Accessing the private variable anyway (not recommended):

print(acc._Account__balance)

Key Points:
- Encapsulation protects data
- Use __ (double underscore) for private variables
- Access data using methods (get/set)

Interview Insight: encapsulation helps with data hiding and ensures controlled access, which is important in large applications.

Quick Question: What will happen if you try to access __balance directly using acc.__balance?

#Python #Programming #Coding #InterviewPreparation #Developers
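A runnable sketch of the pattern above, extended with a validation check in deposit (our addition, not in the original) and a demonstration of what CPython's name mangling does to direct attribute access:

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance   # stored as _Account__balance via name mangling

    def deposit(self, amount):
        if amount <= 0:            # validation check (our addition)
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def get_balance(self):
        return self.__balance

acc = Account(1000)
acc.deposit(500)
balance = acc.get_balance()        # 1500

# Direct access to __balance fails: outside the class body there is no
# attribute literally named "__balance", only the mangled name.
try:
    acc.__balance
    direct_access_failed = False
except AttributeError:
    direct_access_failed = True
```

This also answers the quick question: acc.__balance raises AttributeError, while the mangled acc._Account__balance still works (but should not be used).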
🐍 Python accepted lazy imports. 450 comments. 3 years of fights. And someone already hates it.

If you use Python daily (data scripts, ML pipelines, internal tools, automations), you've probably stared at a terminal waiting for your script to start without knowing why. That 3-second pause? Often it's Python loading libraries you don't even use in that run.

Lazy imports fix exactly that: Python only loads what it actually needs. Your tools start faster, your team waits less, your servers spend less on cold starts. You don't need to change how you think about Python. You just write lazy in front of one import line and the problem is gone.

PEP 810, explicit lazy imports (deferred module loading: Python skips loading a module until its name is first used in the code), was unanimously accepted by the Steering Council in November 2025 and is now shipping in Python 3.15 alphas.

The numbers back it up. Meta cut startup time by 70%. Hugo van Kemenade's CLI dropped from 2.4s to 0.7s (3x faster) with a two-line change. Machine learning training initialization went from 15s to 9s in internal benchmarks.

But one voice on Hacker News put it bluntly: "This will break tons of code and introduce a slew of footguns. Import statements fundamentally have side effects. When and how these side effects are applied will cause mysterious problems and breakages that will keep people up for many nights."

That's not wrong. The tradeoff is real: you're trading deterministic startup behavior for performance. Lazy imports shift errors from load time to runtime. A missing module you currently catch at startup might now blow up at 2am under a specific code path in production.

PEP 810 addresses this by making lazy loading strictly opt-in with the lazy soft keyword (syntax: lazy import module_name). You choose your surface, you own the risk.

The community was split for 3 years for good reason. The fix is now in the language. Use it surgically.
Source: https://lnkd.in/dnkyrHFS #Python #SoftwareEngineering #PEP810 #Python315
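Until Python 3.15 lands, a similar effect is available today via the stdlib's importlib.util.LazyLoader. This sketch follows the lazy-import recipe from the importlib documentation; the helper name lazy_import is ours:

```python
import importlib.util
import sys

def lazy_import(name):
    # Stdlib approximation of lazy imports: the module object is
    # created immediately, but its code only runs on first attribute
    # access (the "deferred module loading" PEP 810 makes a keyword).
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)   # does NOT execute the module body yet
    return module

json = lazy_import("json")               # fast: nothing executed yet
result = json.dumps({"a": 1})            # first use triggers the real load
```

Note the same tradeoff the post describes applies here too: a broken or missing module now fails at first use, not at import time.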
Our Python service had a memory leak… but gc.collect() said everything was fine. Our Python document parsing service (PDF → OCR → Gemini APIs) started crashing with OOMs. Memory kept increasing after every document 📈 Eventually → OOM crashes Look at the image 👇 Top = before (slow memory growth) Bottom = after (stable) The tricky part? No obvious leak. gc.collect() was already there. Profilers showed nothing. What was actually happening: • Creating a new genai.Client() per request → sockets & connection pools never released • C-libraries (PyMuPDF, PIL, OpenCV) using malloc() → glibc holds memory, doesn’t return it to OS • Cleanup missing in exception paths → leaked temp files & buffers • Large objects staying in memory too long Fixes: ✔ Reused a single client ✔ Added: ctypes.CDLL("libc.so.6").malloc_trim(0) ✔ Moved cleanup to finally ✔ Explicitly closed & deleted large objects 💡 Takeaway In Python systems using C extensions: ➡️ gc.collect() is NOT enough ➡️ Memory leaks can live outside Python ➡️ Understanding the OS allocator matters Same system. Same workload. Completely different memory behavior. #backend #python #debugging #engineering
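A guarded sketch of the malloc_trim call mentioned above. malloc_trim is glibc-specific (Linux); on other platforms the symbol doesn't exist, so the sketch skips it rather than crashing:

```python
import ctypes
import ctypes.util

# Ask glibc to return free heap pages to the OS. This targets memory
# held by the C allocator (e.g. after PyMuPDF/PIL/OpenCV work), which
# gc.collect() cannot see.
released = None
libc_name = ctypes.util.find_library("c")
if libc_name is not None:
    libc = ctypes.CDLL(libc_name)
    if hasattr(libc, "malloc_trim"):       # glibc only
        released = libc.malloc_trim(0)     # 1 if memory was released, else 0
```

As the post notes, this is a complement to (not a replacement for) reusing clients, closing large objects explicitly, and putting cleanup in finally blocks.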