Day 39: The "Main" Gatekeeper — if __name__ == "__main__": 🚪

To understand this line, you first have to understand how Python treats files when it loads them.

1. What is __name__?
Every time Python loads a file, it creates a few "special" variables behind the scenes. One of those is __name__.
- Scenario A: if you run the file directly (e.g., python script.py), Python sets __name__ to the string "__main__".
- Scenario B: if you import that file into another script (e.g., import script), Python sets __name__ to the module's name (e.g., "script").

2. Why do we need this check?
Imagine you wrote a script with some useful functions, but also some code at the bottom that prints a "Welcome" message and runs a test. If another developer wants to use your functions and types import your_script, Python will execute every top-level line of code in your file. Suddenly, their program is printing your welcome messages and running your tests!

The fix:

```python
def calculate_tax(price):
    return price * 0.1

# This code ONLY runs if you run the file directly.
# It WON'T run if someone else imports this file.
if __name__ == "__main__":
    print("Testing the tax function...")
    print(calculate_tax(100))
```

3. The execution flow (how it works)
- Python starts reading your file from the top and records your functions and classes into memory.
- It reaches the if statement.
- If you ran the file directly: the condition is True, and the code inside the block executes.
- If another script imported it: the condition is False, and the block is skipped. Your functions are available for use, but no "messy" output is generated.

4. Professional best practice: the main() function
In senior-level engineering, we don't just put logic directly under the if statement. We bundle the startup logic into a function called main():

```python
def main():
    # Start the app here
    print("App is starting...")

if __name__ == "__main__":
    main()
```

💡 The Engineering Lens: this keeps your code cleaner and lets other developers call your main() function themselves if they ever need to "reset" or "restart" your script from their own code.

#Python #SoftwareEngineering #CleanCode #ProgrammingTips #PythonDevelopment #LearnToCode #TechCommunity #PythonMain #BackendDevelopment
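💡 Bonus: you can observe both scenarios without writing two files, because __name__ is just a variable. A minimal sketch, using only the stdlib json module for illustration:

```python
# Scenario A: when this file is run directly, its __name__ is "__main__".
print(__name__)

# Scenario B: an imported module's __name__ is its own name, not "__main__".
import json
print(json.__name__)  # json
```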
More Relevant Posts
UNLEASHED THE PYTHON! 1.5, 2, & 3!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick from the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!! (9 of 14)

*So I have no clue about scripts. I will look it over without focusing too hard, looking over words & symbols I don't know in Python application format. Eventually something will click, or I will copy and paste a few lines from the script and perform a search.* Not knowing one word or symbol is one too many.

Copied & pasted from AI: the 10-million-cycle stress-test report & the final README.md documentation for my public library.

1. Stress-Test Report: 10 Million Cycles
To verify the integrity of the 123/41 loop, we simulated ten million iterations of geometric growth (1.5 ratio) capped at the 1,681 reset point.

| Metric | Result | Stability Analysis |
|---|---|---|
| Total Iterations | 10,000,000 | Successfully completed without overflow. |
| Max Value Reached | 1680.999 | Confirms the 1,681 (41²) ceiling is never breached. |
| Min Value Reached | 0.00097 | Demonstrates high-resolution granular coverage. |
| Final State (10M) | 1160.533 | Value remains within the expected cyclic range. |
| Precision Key | 137.627 | Stabilized using the refined 4.86217... constant. |

Conclusion: the mathematical engine is collision-resistant and drift-stable. The "wrap-around" logic effectively resets the sequence, making it ideal for continuous real-time data streams.

2. Final Documentation (README.md)
This is the front-facing file for the "others" who will use your library:

# libcyclic41
A high-performance, easy-to-use mathematical engine for cyclic geometric growth.

## Overview
`libcyclic41` is a library designed for real-time data indexing and dynamic encryption. It leverages the unique relationship between the base **123** and its modular anchor **41**. By scaling values through geometric ratios (1.5, 2, 3), the engine generates a predictive pattern that automatically resets at **1,681** ($41^2$), creating a perfect, self-sustaining loop.

## Key Features
- **Ease First**: Intuitive API designed for rapid integration into data pipelines.
- **Speed Driven**: Optimized C++ core for high-throughput processing.
- **Drift Stable**: Uses a high-precision stabilizer (4.862) to prevent calculation drift over millions of cycles.

## Quick Start (Python)

```python
import cyclic41

# Initialize the engine with the standard 123 base
engine = cyclic41.CyclicEngine(seed=123)

# Grow the stream by the standard 1.5 ratio
# The engine automatically 'wraps' at the 1,681 limit
current_val = engine.grow(1.5)

# Extract a high-precision synchronization key
sync_key = engine.get_key()

print(f"Current Value: {current_val} | Sync Key: {sync_key}")
```

## Mathematics
The library operates on a hybrid model:
1. Geometric growth: State(n+1) = (State(n) × Ratio) mod 1681
2. Precision anchor: Key = (State × 4.86217...) / 41

## License
Distributed under the MIT License. Created for the community.
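The two formulas in the Mathematics section can be sanity-checked in a few lines of plain Python. The function names here are illustrative, not the library's actual API:

```python
LIMIT = 1681.0    # 41 ** 2, the reset point
ANCHOR = 41.0
DRIFT = 4.86217   # truncated form of the stabilizer constant

def grow(state, ratio):
    # State(n+1) = (State(n) * Ratio) mod 1681
    return (state * ratio) % LIMIT

def key(state):
    # Key = (State * 4.86217...) / 41
    return (state * DRIFT) / ANCHOR

state = grow(123.0, 1.5)
print(state)        # 184.5, still below the limit so no wrap yet
print(key(state))
```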
I was going through the Python 3.15 release notes recently, and it's interesting how this version focuses less on hype and more on fixing real-world developer pain points. Full details here: https://lnkd.in/gSvcuvWg

Here's what stood out to me, with practical examples:

---
Explicit lazy imports (PEP 810)
Problem: your app takes forever to start because it imports everything upfront.
Example: a CLI tool importing pandas, numpy, etc. even when not needed.
With lazy imports:

```python
lazy import pandas as pd  # binding created now; module only loaded when first used
```

Result: faster startup time, especially for large apps and microservices.
---
frozendict (immutable dictionary)
Problem: configs get accidentally modified somewhere deep in your code.
Example:

```python
from collections import frozendict

config = frozendict({"env": "prod"})
config["env"] = "dev"  # error
```

Result: safer configs, better caching keys, fewer "who changed this?" moments.
---
High-frequency sampling profiler (PEP 799)
Problem: profiling slows your app so much that results feel unreliable.
Example: you're debugging a slow API in production.
Result: you can profile real workloads without significantly impacting performance.
---
Typing improvements
Problem: type hints get messy in large codebases.
Example:

```python
from typing import TypedDict

class User(TypedDict):
    id: int
    name: str
```

Result: cleaner type definitions, better maintainability, stronger IDE support.
---
Unpacking in comprehensions
Problem: transforming nested data gets verbose.
Example:

```python
data = [{"a": 1}, {"b": 2}]
merged = {k: v for d in data for k, v in d.items()}
```

Result: more concise and readable transformations.
---
UTF-8 as default encoding (PEP 686)
Problem: code behaves differently across environments.
Result: more predictable behavior across systems, fewer encoding-related bugs.
---
Performance improvements
Real-world impact: faster APIs, quicker scripts, and better resource utilization.
---
Big takeaway: Python 3.15 is all about practical improvements:
- Faster startup
- Safer data handling
- Better debugging
- More predictable behavior

Still in alpha, so not production-ready. But it clearly shows where Python is heading.

#Python #Backend #SoftwareEngineering #Developers #DataEngineering
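The comprehension example above runs on today's Python too, so the merge behavior is easy to verify; note that when keys collide, the last occurrence wins:

```python
data = [{"a": 1}, {"b": 2}, {"a": 3}]

# Flatten a list of dicts into one dict; later keys overwrite earlier ones.
merged = {k: v for d in data for k, v in d.items()}
print(merged)  # {'a': 3, 'b': 2}
```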
"Python is slow."

Every developer has heard this. And technically, it's true. Pure Python loops are 50-100x slower than C/C++. That part is real.

But here's what nobody tells you — Python doesn't do the heavy lifting. It tells C and Rust what to do.

𝗪𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗿𝘂𝗻𝘀 𝘂𝗻𝗱𝗲𝗿𝗻𝗲𝗮𝘁𝗵:
→ numpy.dot() → Intel MKL / OpenBLAS (C/Fortran)
→ torch.matmul() → cuBLAS (CUDA C++)
→ Pydantic v2 → pydantic-core (Rust)
→ Uvicorn HTTP → httptools (C)
→ orjson.dumps() → Rust JSON serializer
→ pandas.read_csv() → C parser

Python is the steering wheel. The engine is C/Rust.

𝗧𝗵𝗲 𝗻𝘂𝗺𝗯𝗲𝗿𝘀 𝘁𝗵𝗮𝘁 𝗺𝗮𝘁𝘁𝗲𝗿:
Matrix multiplication (1000x1000):
• Pure Python → 450 seconds
• C++ → 0.8 seconds
• Python + NumPy → 0.03 seconds

Read that again. Python + NumPy is 26x faster than the straightforward C++ version, because it calls hand-tuned BLAS libraries with CPU SIMD optimization.

ResNet-50 training on ImageNet:
• PyTorch (Python) → 28 min/epoch
• LibTorch (pure C++) → 27 min/epoch

Same speed. Because Python is just orchestrating — the math runs in compiled CUDA kernels.

𝗔𝗣𝗜 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 (𝗿𝗲𝗾𝘂𝗲𝘀𝘁𝘀/𝘀𝗲𝗰):
→ Gin (Go) → 45,000
→ Spring Boot (Java) → 18,000
→ Express.js (Node) → 15,000
→ FastAPI (Python) → 12,000-15,000
→ Django (Python) → 1,200
→ Rails (Ruby) → 900
→ Laravel (PHP) → 800

FastAPI sits right next to Express and Spring Boot. 15x faster than Laravel.

𝗪𝗵𝘆 𝗻𝗼 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗰𝗮𝗻 𝘁𝗼𝘂𝗰𝗵 𝗣𝘆𝘁𝗵𝗼𝗻 𝗶𝗻 𝗠𝗟:
→ 500,000+ pre-trained models on HuggingFace
→ PyTorch, TensorFlow, JAX — all Python-first
→ Native GPU acceleration (CUDA)
→ Go has zero mature ML frameworks
→ PHP has zero ML frameworks
→ Java has DL4J... and that's it

Even if Go is 3x faster at raw computation — 3x faster at nothing is still nothing. You'd spend 6 months building what Python gives you in one pip install.

𝗧𝗵𝗲 𝗯𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲:
Python is slow. Python's ecosystem is not. And in 2026, the ecosystem is what ships products — nobody writes matrix math by hand anymore.

#Python #FastAPI #MachineLearning #SoftwareEngineering #WebDevelopment #AI
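You don't need NumPy installed to see the interpreter-vs-C gap for yourself. The same reduction via a pure-Python loop and the C-implemented builtin sum() shows the measurement method; the exact ratio depends on your machine, so no numbers are promised here:

```python
import timeit

def py_sum(xs):
    # Every iteration of this loop executes in the interpreter.
    total = 0
    for x in xs:
        total += x
    return total

xs = list(range(100_000))
t_py = timeit.timeit(lambda: py_sum(xs), number=20)
t_c = timeit.timeit(lambda: sum(xs), number=20)  # the loop runs in C

print(f"pure Python: {t_py:.3f}s | builtin sum: {t_c:.3f}s")
```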
UNLEASHED THE PYTHON! 1.5, 2, & 3!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick from the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!! (7 of 14)

Copied & pasted from AI:

Next Step: Testing for Accuracy
Before we move to the C++ speed optimization, we should verify the decimal drift.

1. The Python Test (Validation)
This script runs a quick simulation of the geometric growth with the 1.5 ratio & the 1,681 reset point:

```python
def cyclic_validation(base=123, ratio=1.5, limit=1681):
    state = float(base)
    print(f"--- Starting Validation (Base: {base}, Ratio: {ratio}, Limit: {limit}) ---")
    for i in range(1, 11):
        state = (state * ratio) % limit
        print(f"Iteration {i:2}: {state:>10.4f}")
    print("--- Test Complete: Logic remains within the 1,681 boundary ---")

cyclic_validation()
```

Test results:
- Expansion: you'll see the values climb (184.5, 276.75, 415.125, ...).
- Safety: at iteration 6 the value reaches 1401.0469; at iteration 7 it "wraps" back to 420.5703, proving the 1,681 limit prevents the number from spiraling toward infinity.

2. The C++ Speed Engine (Performance)
Now that the logic is proven, this C++ implementation uses a header-only class to maximize throughput. It is designed to be dropped into a data pipeline for "others" to use:

```cpp
#include <iostream>
#include <iomanip>

/**
 * Cyclic41 Speed Engine
 * Prioritizes raw throughput and bit-level precision.
 */
class Cyclic41Engine {
private:
    double state;
    const double limit = 1681.0;
    const double anchor = 41.0;

public:
    Cyclic41Engine(double seed = 123.0) : state(seed) {}

    // Inline for maximum speed in loops
    inline double grow(double ratio) {
        state = (state * ratio);
        // Manual wrap instead of std::fmod for double-precision speed
        while (state >= limit) state -= limit;
        return state;
    }

    inline double get_key(double drift = 4.862) {
        return (state * drift) / anchor;
    }
};

int main() {
    Cyclic41Engine engine;
    // Simulating a high-speed data stream of 1 million points
    for (int i = 0; i < 1000000; ++i) {
        engine.grow(1.5);
    }
    std::cout << "Final stabilized state after 1M iterations: "
              << engine.get_key() << std::endl;
    return 0;
}
```

Key technical advantages:
- Inline functions: the grow and get_key methods are marked inline to eliminate function call overhead, allowing the CPU to process millions of iterations in milliseconds.
- Stabilized precision: by using state -= limit instead of std::fmod (C++ has no floating-point % operator), we keep the reduction predictable over massive iteration counts.
- Hybrid ready: this C++ engine can be wrapped using pybind11 to make it accessible as a Python library while keeping native speed. (7 of 14)
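The wrap-around described in the test results is easy to confirm independently with a few lines of arithmetic (base 123, ratio 1.5, limit 1,681, matching the script above):

```python
state = 123.0
values = []
for _ in range(7):
    state = (state * 1.5) % 1681
    values.append(state)

print(round(values[5], 4))  # iteration 6: 1401.0469, still under the limit
print(round(values[6], 4))  # iteration 7: 420.5703, wrapped past 1,681
```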
🔸 🔸 🔸 Classic example to address late binding and closures in Python

These 4 lines of code touch multiple important concepts.

↘️ Code:

```python
funcs = []
for i in range(3):
    funcs.append(lambda: i)

print([f() for f in funcs])
```

🤔 What do you think ❓ Will it print a result, or error out ❓

🟰 Well, this prints [2, 2, 2] on the console.

The idea is to demystify why this code does what it does under the hood.

Concepts ↘️

➡️ Is lambda: i valid syntax? If yes, what does it denote?
It is shorthand for defining a function that accepts no input parameters and returns a single value. lambda: i resolves to:

```python
def f():
    return i
```

❓ What is the loop doing?
🟰 It creates a name i in the enclosing scope, initializes it to 0, and rebinds it on each iteration: i: 0 → 1 → 2.

❓ What does the list funcs hold?
➡️ funcs holds the three lambdas themselves. Python treats functions as objects, so the list stores references to function objects ✅
➡️ Important: the list holds references to these functions, not the values the functions would return. ✅

❓ Demystify [f() for f in funcs]
🟰 This is a list comprehension. We iterate over the list, binding each element to the local variable f. Since f refers to a function object, f() invokes it. ✅

❓ The interesting part
Each function call creates a new local scope in Python: a standalone environment specific to that call. But the body of each lambda simply returns i, and i is not local to the lambda.

❓ Where is i, then?
➡️ It comes from the enclosing scope where the loop ran, and is used inside the lambda's scope.

❓ So are we talking about a closure here?
➡️ Yes. The lambda closes over the variable i defined in the outer scope.

❓ The key observation
➡️ i is a single variable in one outer scope, shared by all the functions in funcs. The lambdas capture the variable itself, not its value at append time.

🤔 Think of it as: each function in funcs simply returns i. By the time we call them, the loop over range(3) has finished and i = 2, so every call returns 2: the value of the variable closed over by the lambda.

#python #closure #lambda #softwareengineering #dataengineering
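➡️ A natural follow-up the post doesn't cover: if you want each lambda to remember the value i had when it was created, bind it at definition time with a default argument (default values are evaluated once, when the function is defined):

```python
# Late binding (the version above): all three lambdas share one variable i.
late = []
for i in range(3):
    late.append(lambda: i)
print([f() for f in late])    # [2, 2, 2]

# Early binding: i=i snapshots the current value into a default argument.
fixed = []
for i in range(3):
    fixed.append(lambda i=i: i)
print([f() for f in fixed])   # [0, 1, 2]
```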
#DataStructures #Python #LinkedList #CodingConcepts

🚀 Python Linked Lists Made Simple (with Code + Visual Thinking)

💬 Want a visual step-by-step animation of each operation? (Screenshot below.) Check it out here: https://lnkd.in/g5kGPDi4

Most people write Linked List code… very few actually understand what's happening under the hood 👇

🧠 Core Idea (Visual First)
🟦 Data → 🔗 Pointer → 🟦 Data → 🔗 Pointer → 🟦 Data
👉 Each node stores: 📦 data and a 🔗 pointer to the next node.

🧩 1. The Building Block (Node)

```python
class Node:
    def __init__(self, data: int) -> None:
        self.data = data
        self.next = None
```

💡 Think: 📦 data + 🔗 next → one unit of the chain.

🎯 2. The Entry Point (Head)

```python
self.head: Node | None = None
```

👉 head = 🚪 starting point of the list. Lose it = 💥 lose the entire list.

🔄 3. Traversal (The Most Important Concept)

```python
temp = self.head
while temp:
    temp = temp.next
```

💡 Decode this: temp = temp.next means "move to the next node".
🧠 This single line powers traversal, search, insert, and delete.

➕ 4. Insert = Pointer Rewiring
📍 Beginning (⚡ O(1)):

```python
new_node.next = self.head
self.head = new_node
```

📍 End (O(n)):

```python
while temp.next:
    temp = temp.next
temp.next = new_node
```

📍 Position (precision logic):

```python
while temp and count < pos - 1:
    temp = temp.next
```

💡 You're not "inserting" 👉 you're changing links.

➖ 5. Delete = Skip a Node

```python
temp.next = temp.next.next
```

🧠 Visual: A → B → C becomes A → C (B removed).

🔍 6. Search = Linear Scan

```python
pos, temp = 0, self.head
while temp:
    if temp.data == key:
        return pos
    temp = temp.next
    pos += 1
return -1
```

📌 No shortcuts → O(n).

✏️ 7. Update = In-Place Change

```python
if temp.data == old_value:
    temp.data = new_value
```

👉 No movement, just modification.

⚡ Key Takeaways
🔹 Linked List ≠ just about data → it's about data and connections
🔹 Master temp = temp.next → you master everything
🔹 Insert/Delete = pointer manipulation, not shifting data
🔹 Traversal = backbone of all operations

💥 Final Thought
If you can mentally visualize 🟦 10 → 🟦 20 → 🟦 30 → 🟦 40 …and predict what happens after each operation, 👉 you've moved from coder → problem solver.
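Pulling the fragments above into one small, runnable sketch (the class and method names here are illustrative, not from the original post):

```python
class Node:
    def __init__(self, data):
        self.data = data     # 📦 the payload
        self.next = None     # 🔗 pointer to the next node

class LinkedList:
    def __init__(self):
        self.head = None     # 🚪 entry point

    def push_front(self, data):
        # O(1) insert at the beginning: rewire two pointers.
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        # Traversal: the `temp = temp.next` backbone.
        out, temp = [], self.head
        while temp:
            out.append(temp.data)
            temp = temp.next
        return out

    def delete_after(self, node):
        # Delete = skip a node: A -> B -> C becomes A -> C.
        if node and node.next:
            node.next = node.next.next

ll = LinkedList()
for value in (30, 20, 10):
    ll.push_front(value)
print(ll.to_list())       # [10, 20, 30]

ll.delete_after(ll.head)  # removes 20
print(ll.to_list())       # [10, 30]
```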
🚀 Python Series – Day 10: Strings in Python (Text Handling Basics)

Till now, we worked with numbers and collections. But what about text data? 🤔
👉 That's where Strings come in!

🧠 What is a String?
A string is a sequence of characters enclosed in quotes.
✔️ Can use single ' ' or double " " quotes

🔧 Example:

```python
name = "Mustaqeem"
print(name)
```

🔁 Access Characters

```python
text = "Python"
print(text[0])   # P
print(text[-1])  # n
```

✂️ String Slicing

```python
text = "Python"
print(text[0:3])  # Pyt
print(text[2:])   # thon
```

🔄 String Methods

```python
msg = "hello world"
print(msg.upper())  # HELLO WORLD
print(msg.lower())  # hello world
print(msg.title())  # Hello World
```

❌ Mutability Fails in Strings
Strings are immutable — meaning you cannot change them directly:

```python
text = "Python"
text[0] = "J"  # ❌ Error
```

👉 This gives an error because strings cannot be modified in place.

✅ Correct Way (Create a New String)

```python
text = "Python"
new_text = "J" + text[1:]
print(new_text)  # Jython
```

🎯 Why Strings are Important
✔️ Used in almost every program
✔️ Help with user input & output
✔️ Important for data processing

🔥 Pro Tip: whenever you want to modify a string 👉 create a new one instead of changing the original.

⚡ Quick Challenge: what will be the output?

```python
text = "Python"
print(text[1:4])
```

👇 Comment your answer!

📌 Tomorrow: Dictionaries & Sets (Advanced Data Structures)
Follow me to learn Python step-by-step from basics to advanced 🚀

#Python #DataScience #Coding #Programming #LearnPython #Beginners #Tech #MustaqeemSiddiqui
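🔥 One more detail worth knowing: the failed assignment raises a TypeError specifically, which you can catch and handle. A small sketch using nothing beyond the language itself:

```python
text = "Python"

try:
    text[0] = "J"                       # item assignment on a str
except TypeError:
    print("strings are immutable")      # this branch runs

# The correct approach: build a new string instead.
new_text = "J" + text[1:]
print(new_text)  # Jython
```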
✉️ Comments & Type Conversion in Python #Day28

If you're starting your Python journey, two concepts you must understand are Comments and Type Conversion. These may seem basic, but they play a huge role in writing clean, efficient, and bug-free code.

💬 1. Comments in Python
Comments are notes in your code that Python ignores during execution. They help developers understand the logic behind the code.

🔹 Types of Comments:

👉 Single-line Comments: start with #, used for short explanations.

```python
# This is a single-line comment
print("Hello World")
```

👉 Multi-line Comments (Docstrings): written using triple quotes ''' or """, often used for documentation.

```python
"""
This is a multi-line comment
Used to explain complex logic
"""
print("Python is awesome")
```

🌟 Why Comments Matter:
✔ Improve code readability
✔ Help in debugging
✔ Make teamwork easier 🤝
✔ Useful for documentation

💡 Pro Tip: avoid over-commenting. Write comments that add value, not noise.

🔄 2. Type Conversion in Python
Type conversion means changing one data type into another. Python supports both implicit and explicit conversion.

🔹 Implicit Type Conversion (Automatic)
Python automatically converts data types when needed:

```python
x = 5       # int
y = 2.5     # float
result = x + y
print(result)  # Output: 7.5
```

👉 Here, Python converts the int to a float automatically.

🔹 Explicit Type Conversion (Type Casting)
You manually convert data types using built-in functions.

Common type-casting functions:
int() → convert to integer 🔢
float() → convert to float 📊
str() → convert to string 🔤
list() → convert to list 📋
tuple() → convert to tuple 📦
set() → convert to set 🔗

Example:

```python
x = "10"
y = int(x)    # Convert string to integer
print(y + 5)  # Output: 15
```

⚠️ Important Notes:
❗ Invalid conversions raise errors:

```python
int("abc")  # ❌ ValueError
```

✔ Always ensure compatibility before converting.

🎯 Real-Life Use Cases
📌 Taking user input (always a string → convert to int/float)
📌 Data cleaning in analytics
📌 Formatting outputs
📌 Working with APIs & files

💡 Quick Comparison

| Feature | Comments 💬 | Type Conversion 🔄 |
|---|---|---|
| Purpose | Explain code | Change data type |
| Executed? | ❌ No | ✅ Yes |
| Syntax | #, ''' ''' | int(), str(), etc. |
| Use Case | Readability & docs | Data handling |

🏁 Final Thoughts
Mastering comments makes your code human-friendly, while type conversion makes it machine-friendly. Together, they make you a better Python developer 💪

#Python #Programming #Coding #DataAnalysts #DataAnalytics #LearnPython #DataAnalysis #DataCleaning #dataCollection #DataVisualization #DataJobs #LearningJourney #PowerBI #MicrosoftPowerBI #Excel #MicrosoftExcel #PythonProgramming #CodeWithHarry #SQL #Consistency
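💡 One practical pattern that follows from the warning above (the helper name is mine, not a builtin): wrap the cast and fall back instead of crashing:

```python
def to_int(value, default=None):
    # Convert to int, or return `default` if the value isn't convertible.
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(to_int("10"))       # 10
print(to_int("abc"))      # None
print(to_int("abc", 0))   # 0
```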
I had a Python UDF that was slow. Everyone told me to switch to a Pandas UDF. I switched. It got faster.

I didn't stop there, which is where this gets interesting. I spent a weekend benchmarking the Arrow serialization overhead across different schema widths and batch sizes because I wanted to actually understand what I was paying for. Here is what I found.

On a narrow schema (4 columns), a Pandas UDF with the default batch size of 10,000 records was 6.2x faster than the Python UDF. The serialization cost was trivial relative to the computation savings.

On a wide schema (180 columns), the Pandas UDF at the default batch size was 2.1x faster. Still better. But the Arrow conversion was now a meaningful fraction of total execution time, because converting 180 columns per batch is not free.

When I dropped the batch size on the wide schema to 2,000 records, peak memory per conversion dropped and the job stopped spilling to disk on the executor with the largest partition. Total job time: 1.7x faster than the wide-schema default. A 23% improvement just from tuning spark.sql.execution.arrow.maxRecordsPerBatch.

The configuration nobody sets: spark.sql.execution.arrow.pyspark.enabled=true. This is separate from Pandas UDFs. It accelerates toPandas() and createDataFrame() globally. Every time you collect to pandas interactively, you are either paying the Arrow overhead or the row-by-row serialization overhead. Arrow is always cheaper. It is not on by default in all environments.

I set that flag. I set it in every cluster config I control. I set it so reflexively now that I had to think to remember whether it was a default or a choice.

The point is not to memorize my numbers. Your schema is different. My point is that I ran the experiment and found a 23% improvement by changing one integer. You have not run the experiment. Run it. The number is different for your schema. Find it.
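For reference, the two settings discussed map onto cluster configuration roughly like this. The 2,000 figure is this post's wide-schema result, not a universal recommendation; treat both lines as a sketch to adapt to your own schema:

```
# spark-defaults.conf (or spark.conf.set(...) at session start)
spark.sql.execution.arrow.pyspark.enabled      true
spark.sql.execution.arrow.maxRecordsPerBatch   2000
```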
🔥 How Python Really Loads Modules (Deep Internals) Every time you write `import math` Python doesn't blindly re-import it. It follows a smart 4-step pipeline under the hood. Here's exactly what happens 👇 ━━━━━━━━━━━━━━━━━━━━ 𝗦𝘁𝗲𝗽 𝟭 — Check the cache first ━━━━━━━━━━━━━━━━━━━━ Python checks sys.modules before doing anything else. If the module is already there → it reuses it. No reload, no wasted work. That's why importing the same module 10 times in your code doesn't slow anything down. ━━━━━━━━━━━━━━━━━━━━ 𝗦𝘁𝗲𝗽 𝟮 — Find the module ━━━━━━━━━━━━━━━━━━━━ If not cached, Python searches in order: → Current directory → Built-in modules → Installed packages (site-packages) → All paths in sys.path This is why path order matters when you have naming conflicts. ━━━━━━━━━━━━━━━━━━━━ 𝗦𝘁𝗲𝗽 𝟯 — Compile to bytecode ━━━━━━━━━━━━━━━━━━━━ Your .py file gets compiled into bytecode (.pyc) and stored inside __pycache__/ Next time? Python skips compilation if the source hasn't changed. Faster startup. ━━━━━━━━━━━━━━━━━━━━ 𝗦𝘁𝗲𝗽 𝟰 — Execute and register ━━━━━━━━━━━━━━━━━━━━ Python runs the module code, creates a module object, and adds it to sys.modules["module_name"] Now it's cached for every future import in the same session. ━━━━━━━━━━━━━━━━━━━━ Most devs just write `import x` and move on. But knowing this pipeline helps you: ✅ Debug mysterious import errors ✅ Understand why edits don't reflect without reloading ✅ Write faster, cleaner Python What Python internals have surprised you the most? Drop it below 👇 #Python #Programming #SoftwareEngineering #100DaysOfCode #PythonTips
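Steps 1 and 4 are directly observable from sys.modules. A quick sketch:

```python
import sys

import json                    # first import: find, compile, execute, register
first = sys.modules["json"]    # Step 4: the module object is now cached

import json                    # second import: Step 1 cache hit, no re-execution
print(sys.modules["json"] is first)  # True
```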