🚀 Day 57 of #100DaysOfCode — Text Formatting & String Manipulation

Hey everyone! 👋 Today’s challenge was all about cleaning up data: capitalizing the first letter of a string while ensuring the rest are lowercase. It’s a common task in web development and data processing to make user input look consistent.

👨‍💻 What I practiced today:
✅ Case normalization: using .lower() to standardize the entire string first.
✅ String indexing: accessing the first character with [0] to apply .upper().
✅ String concatenation: merging the transformed first character with the rest of the string using slicing [1:].

📌 Today’s Task:
✔ Input: a string like "WORLD" or "hello".
✔ Goal: return a properly formatted string with only the first letter capitalized.
✔ Example: "WORLD" → "World" | "hello" → "Hello".

🧠 Key Insight: while I manually handled the slicing and case conversion today, Python actually has a built-in method called .capitalize() that does exactly this in one step! Understanding the manual way helps me grasp how strings are immutable and how new strings are built in memory.

✨ Key Takeaway: string manipulation is a foundational skill. By breaking a string apart and reassembling it, you learn how to handle more complex text-processing tasks like title casing or custom data parsers in the future.

#100DaysOfCode #Day57 #Python #CodingJourney #DSA #Strings #CleanCode #WebDevelopment #DataCleaning #SoftwareEngineer
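Here is a minimal sketch of the manual approach the post describes, next to the built-in `.capitalize()`. The function name and the empty-string guard are my additions for illustration:

```python
# Manual version of the Day 57 task: lowercase everything,
# then uppercase only the first character via slicing.
def capitalize_first(text: str) -> str:
    """Return text with only the first letter uppercased."""
    if not text:                 # guard: slicing an empty string is fine,
        return text              # but [0] would raise IndexError
    lowered = text.lower()       # normalize the whole string first
    return lowered[0].upper() + lowered[1:]  # rebuild a new string

print(capitalize_first("WORLD"))  # World
print("hello".capitalize())       # Hello (the one-step built-in)
```

Both produce the same result here; the manual version just makes the immutability explicit, since a brand-new string is built by concatenation.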
Pradumya Gupta’s Post
More Relevant Posts
𝗗𝗮𝘆 𝟭𝟲: 𝗗𝗶𝗰𝘁𝗶𝗼𝗻𝗮𝗿𝘆 𝗶𝗻 𝗣𝘆𝘁𝗵𝗼𝗻 🐍

🔹 A dictionary is a built-in, mutable data structure that stores data in key–value pairs
🔹 Key–value pairs are separated by commas and enclosed in curly braces {}
🔹 Data is stored as key : value
🔹 Keys must be unique
🔹 Dictionaries preserve insertion order (Python 3.7+)

𝗔𝗰𝗰𝗲𝘀𝘀𝗶𝗻𝗴 𝗗𝗶𝗰𝘁𝗶𝗼𝗻𝗮𝗿𝘆 𝗜𝘁𝗲𝗺𝘀
🔹 Two main ways to access values:
1️⃣ Using keys → dict_name[key] (raises KeyError if the key is not found)
2️⃣ Using get() → dict_name.get(key) (returns None if the key is not found)
🔹 Access all values using → dict_name.values()

𝗔𝗱𝗱𝗶𝗻𝗴 & 𝗠𝗼𝗱𝗶𝗳𝘆𝗶𝗻𝗴 𝗜𝘁𝗲𝗺𝘀
➕ Using the assignment operator → dict_name[key] = value
➕ Using update() → adds or updates key–value pairs

𝗥𝗲𝗺𝗼𝘃𝗶𝗻𝗴 𝗜𝘁𝗲𝗺𝘀
❌ del dict_name[key] → removes a specific key
❌ pop(key) → removes key and returns its value
❌ popitem() → removes the last inserted key–value pair
❌ clear() → removes all items

#Python #Dictionary #LearningPython #LearningInPublic #Consistency
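The operations above in one runnable sketch (the `person` dictionary and its keys are made up for illustration):

```python
# Walk through every dictionary operation listed in the post.
person = {"name": "Asha", "role": "dev"}   # key-value pairs in {}

print(person["name"])           # Asha (would raise KeyError if missing)
print(person.get("age"))        # None (no error for a missing key)

person["age"] = 30              # add via assignment
person.update({"role": "sre"})  # update an existing key via update()

removed = person.pop("age")     # removes "age" and returns its value (30)
person.popitem()                # removes the last inserted pair ("role")
del person["name"]              # removes a specific key
person.clear()                  # removes everything that is left
print(person)                   # {}
```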
Day 17/100 – #100DaysOfLeetCode 🚀

🧩 Problem: LeetCode 303 – Range Sum Query - Immutable (Easy)

🧠 Approach 1: Brute Force
1. Store the array as it is.
2. For every sumRange(left, right) call, calculate the sum using slicing.

💻 Solution:

from typing import List

class NumArray:
    def __init__(self, nums: List[int]):
        self.nums = nums

    def sumRange(self, left: int, right: int) -> int:
        return sum(self.nums[left:right + 1])

⏱ Time | Space: O(n) per query | O(1) extra

🧠 Approach 2: Prefix Sum (Optimized)
1. Precompute prefix sums where each index stores the sum up to that position.
2. Any range sum can then be calculated in constant time.

💻 Solution:

class NumArray:
    def __init__(self, nums: List[int]):
        self.prefix = [0]
        for num in nums:
            self.prefix.append(self.prefix[-1] + num)

    def sumRange(self, left: int, right: int) -> int:
        return self.prefix[right + 1] - self.prefix[left]

⏱ Time | Space: O(1) per query | O(n) extra

📌 Key Takeaway: prefix sums transform repeated range-sum queries from O(n) to O(1) time, making them ideal for immutable array problems.

#leetcode #dsa #python #problemSolving
Recently, I built a small reporting pipeline using Python + Pandas + web scraping. The goal was simple: collect data from multiple web pages, clean it, and generate a structured report.

The process looked like this:
1. Scrape raw data (requests/BeautifulSoup)
2. Convert it to a DataFrame in Pandas
3. Clean & standardize columns (dates, prices, text)
4. Handle missing values + duplicates
5. Export the final output to Excel/CSV for reporting

Lesson learned: scraping is the easy part — the real work is making messy web data analysis-ready.

#Python #Pandas #WebScraping #DataCleaning #DataAnalytics #Automation
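A hypothetical sketch of the cleaning steps (standardize columns, handle missing values and duplicates). The column names and values below are invented for illustration; they are not from the actual project:

```python
# Clean a small scraped table: parse dates, strip currency symbols,
# normalize text, then drop missing dates and duplicate rows.
import pandas as pd

raw = pd.DataFrame({
    "date":  ["2024-01-05", "2024-01-06", None, "2024-01-05"],
    "price": ["$10.50", "11", "9.99", "$10.50"],
    "name":  [" Widget ", "widget", "Gadget", " Widget "],
})

df = raw.copy()
df["date"] = pd.to_datetime(df["date"], errors="coerce")    # missing -> NaT
df["price"] = df["price"].str.replace("$", "", regex=False).astype(float)
df["name"] = df["name"].str.strip().str.title()             # trim + case
df = df.dropna(subset=["date"]).drop_duplicates()           # steps 4

csv_text = df.to_csv(index=False)  # or df.to_csv("report.csv", index=False)
print(df)
```

The order matters: normalizing text and prices first lets `drop_duplicates()` catch rows that only differ by whitespace or a "$" sign.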
𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗶𝗻𝗴 𝗗𝗲𝗲𝗽 𝗦𝘁𝘂𝗱𝘆: 𝗔 𝗣𝘆𝘁𝗵𝗼𝗻 𝗟𝗶𝗯𝗿𝗮𝗿𝘆 𝗳𝗼𝗿 𝗦𝗶𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝗱 𝗘𝘅𝗽𝗹𝗼𝗿𝗮𝘁𝗼𝗿𝘆 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀

Deep Study is a lightweight Python package that automates exploratory data analysis and feature profiling. With just a few lines of code, it generates professional HTML reports that provide clear, actionable insights.

Key Features
▫️ Automated Feature Profiling: in-depth statistics on missing values, unique counts, data types, and memory usage
▫️ Target Variable Analysis: clear insights into feature relationships with the dependent variable
▫️ ML-Based Feature Importance: data-driven feature importance powered by Random Forest models
▫️ Professional Visualizations: clean, interactive HTML reports with high-quality charts and plots
▫️ Jupyter Notebook Integration: seamless support for interactive and collaborative workflows
▫️ Simple & Intuitive API: generate a complete EDA report in just three lines of code

Available on PyPI and GitHub, Deep Study supports Python 3.8+ and integrates with popular data science libraries.

🔗 GitHub: https://lnkd.in/gyCzX7_9
🔗 PyPI: https://lnkd.in/g9FfaJX5

#DataScience #Python #EDA #MachineLearning #OpenSource
If your Python class is mostly storing data, you probably don’t need to write all that boilerplate.

Take a simple example. Most of us used to write classes like this:
- Define __init__
- Assign every field manually
- Add __repr__ for debugging
- Implement __eq__ for comparisons

It works — but it’s repetitive. Now look at this:

from dataclasses import dataclass

@dataclass
class Car:
    make: str
    model: str
    year: int

That’s enough. Python automatically generates:
• __init__
• __repr__
• __eq__
• Optional ordering methods
• Optional hashing

Same behavior. Far less code.

Why this matters:
• Cleaner classes
• Fewer mistakes
• Built-in comparison logic
• Readable debug output
• Type hints included

And you can go further:
• Make objects immutable using frozen=True
• Enable sorting using order=True
• Avoid shared mutable defaults with field(default_factory=list)
• Add validation using __post_init__
• Convert to dict with asdict()
• Create updated copies using replace()

Dataclasses are ideal when your class is primarily a "data container". They reduce noise and make intent obvious. If you're still manually writing constructors for simple data models, there’s a cleaner way.

#Python #SoftwareEngineering #CleanCode #BackendDevelopment
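The "go further" options above in one sketch, all from the stdlib dataclasses module (the Toyota/Corolla values are just sample data):

```python
# Frozen, orderable dataclass with a safe mutable default.
from dataclasses import dataclass, field, asdict, replace

@dataclass(frozen=True, order=True)
class Car:
    make: str
    model: str
    year: int
    tags: list = field(default_factory=list)  # no shared mutable default

a = Car("Toyota", "Corolla", 2020)
b = replace(a, year=2023)   # updated copy; "a" itself cannot be mutated

print(asdict(b))  # {'make': 'Toyota', 'model': 'Corolla', 'year': 2023, 'tags': []}
print(a < b)      # True: order=True compares fields in declaration order
# a.year = 1999 would raise dataclasses.FrozenInstanceError
```

`replace()` is the idiomatic way to "change" a frozen instance: it builds a new object instead of mutating the old one.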
🚀 Day 8/30 | LeetCode Problem: Sort Colors (75)

Problem: given an array nums containing only 0s, 1s, and 2s, sort the array in-place so that colors are ordered as 🔴 0 → ⚪ 1 → 🔵 2, without using the built-in sort function.

🧠 Approach: Dutch National Flag Algorithm
We use three pointers:
- a → next position for a 0 (left)
- b → next position for a 2 (right)
- c → current index

Logic:
- If nums[c] == 0 → swap with a, move both forward
- If nums[c] == 2 → swap with b, move b backward
- If nums[c] == 1 → just move c

This sorts the array in one pass.

⏱️ Complexity
Time: O(n) | Space: O(1) (in-place)

🧾 Python Code

class Solution:
    def sortColors(self, nums):
        a = 0                # left pointer (0s)
        b = len(nums) - 1    # right pointer (2s)
        c = 0                # current index
        while c <= b:
            if nums[c] == 0:
                nums[a], nums[c] = nums[c], nums[a]
                a += 1
                c += 1
            elif nums[c] == 2:
                nums[b], nums[c] = nums[c], nums[b]
                b -= 1
            else:
                c += 1

✅ Result: Accepted | Runtime: 0 ms (Beats 100%) | In-place & optimal

🎯 Takeaway: understanding pointer-based algorithms helps solve array problems efficiently without extra space.

#LeetCode #30DaysOfLeetCode #Day8 #Python #Arrays #TwoPointers #DSA #ProblemSolving #CodingJourney #SoftwareEngineering
How do you optimize object size in Python for a high-performance application?

Problem: imagine you have an Order, Event, User, or Message service where you have to create millions of objects. At that scale, memory allocation and GC (garbage collection) pressure matter more than we think.

How can we optimize?

Solution: we can reduce object size using the __slots__ class attribute (or, in modern Python, a dataclass with slots=True).

class Order:
    def __init__(self, order_id, user_id, amount):
        self.order_id = order_id
        self.user_id = user_id
        self.amount = amount

class Order:
    __slots__ = ("order_id", "user_id", "amount")

    def __init__(self, order_id, user_id, amount):
        self.order_id = order_id
        self.user_id = user_id
        self.amount = amount

The second version consumes roughly 70% less memory than the first. Why? When we create an object from a regular class, Python also creates a per-instance dictionary under the hood; you can print it with your_instance.__dict__. When you add the __slots__ class attribute, the object is stored almost like a C struct: no __dict__ is created, which is both lighter and faster for attribute access.

Result:
• ~70% less memory per object
• Faster attribute access
• Lower GC pressure

Use __slots__ for:
• DTOs
• Domain entities
• Message/event objects

Avoid it for:
• ORM models
• Framework-controlled objects

Not every optimization matters — but hot-path object creation absolutely does.

#python #backend #performance #systems #softwareengineering
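You can sanity-check the difference yourself with sys.getsizeof. A rough sketch (getsizeof is shallow, so for the regular class we add the size of its per-instance __dict__; exact numbers vary by Python version):

```python
# Compare a regular class against a __slots__ class of the same shape.
import sys

class PlainOrder:
    def __init__(self, order_id, user_id, amount):
        self.order_id = order_id
        self.user_id = user_id
        self.amount = amount

class SlimOrder:
    __slots__ = ("order_id", "user_id", "amount")
    def __init__(self, order_id, user_id, amount):
        self.order_id = order_id
        self.user_id = user_id
        self.amount = amount

plain = PlainOrder(1, 2, 9.99)
slim = SlimOrder(1, 2, 9.99)

# The instance plus its hidden per-instance dict vs. the slotted instance.
plain_size = sys.getsizeof(plain) + sys.getsizeof(plain.__dict__)
slim_size = sys.getsizeof(slim)   # slotted instances have no __dict__

print(plain_size, "bytes vs", slim_size, "bytes")
```

Note that this shallow measurement ignores the attribute values themselves (which both versions share), so it isolates exactly the overhead __slots__ removes.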
#Pytest for #Python quick tip: using `tmp_path`

I’ve seen many people temporarily override pytest’s built-in `tmp_path` fixture with a hardcoded directory just to debug a failing test locally. I get why it happens. You want to inspect files, rerun things without everything disappearing, check desired expectations manually. But you don’t actually need to rewrite your tests to do that.

When running pytest from the command line, you can control where pytest writes its temporary files:

```
pytest tests/specific/test_you_want.py --basetemp /path/of/choice/
```

That’s it. Same `tmp_path` fixture, same test code, but now all temporary output lives somewhere predictable and inspectable.

The real tip here though: really use pytest’s built-in fixtures in the first place. Using `tempfile` directly instead often adds:
- extra boilerplate
- more manual cleanup
- more file-handling logic to maintain

`tmp_path` already gives you:
- isolated directories per test
- clean teardown
- pathlib-native paths
- easy overrides when debugging

tldr; let `pytest` do the boring work. Your tests stay cleaner, and debugging does not require rewriting half your fixtures.

https://lnkd.in/esmjaynv

#Bioinformatics #Engineering #Testing
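For anyone new to the fixture, a minimal example test; the test name, file name, and contents are invented for illustration. pytest injects `tmp_path` as a fresh per-test `pathlib.Path`:

```python
# A test that writes into pytest's per-test temporary directory.
# No setup or teardown code needed: pytest creates and cleans up tmp_path.
def test_report_is_written(tmp_path):
    out = tmp_path / "report.txt"   # tmp_path is a pathlib.Path
    out.write_text("ok")
    assert out.exists()
    assert out.read_text() == "ok"
```

Run it normally with pytest; add `--basetemp` only when you want to pin where that directory lives while debugging.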
Common #DataTypes in #Python

In #DataScience, understanding #DataTypes is the first step to working with data correctly.

#Numeric
Used for numbers and calculations
- #Integer → whole numbers (10, 25, -3)
- #Float → decimal values (12.5, 99.8)
- #Complex → numbers with real and imaginary parts

#Sequence
Used for ordered data
- #String → text values like names or labels
- #List → ordered data that can be changed
- #Tuple → ordered data that cannot be changed

#Mapping
Used to connect keys with values
- #Dictionary → stores data in key–value pairs

#Set
Used to store unique values
- #Set → removes duplicates automatically

#Boolean
Used for conditions and decisions
- #Bool → True or False

Why #DataTypes Matter
- Help in proper #DataCleaning
- Improve accuracy in #Analysis
- Prevent errors in #MachineLearning workflows

Key Takeaway: choosing the right #DataType makes data easier to manage, analyze, and trust.

#Python #DataScience #DataTypes #MachineLearning #Analytics #ProgrammingFundamentals #TechCareers #LearningJourney
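A quick tour of the types listed above, using type() to confirm each one (the sample values are arbitrary):

```python
# One literal per built-in type, checked against the expected type object.
samples = {
    int:     10,               # whole number
    float:   12.5,             # decimal value
    complex: 3 + 4j,           # real + imaginary parts
    str:     "name",           # text
    list:    [1, 2, 3],        # ordered, mutable
    tuple:   (1, 2, 3),        # ordered, immutable
    dict:    {"key": "value"}, # key-value pairs
    set:     {1, 1, 2},        # duplicates removed automatically
    bool:    True,             # condition / decision
}

for expected, value in samples.items():
    assert type(value) is expected

print({1, 1, 2})  # {1, 2}
```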
Python for loops for data visualization used to trip me up every single time. The syntax made sense in theory, but when it came to putting the right variables in the right order for a complex Seaborn or Matplotlib chart, my brain would just stall.

I’m a visual learner. Documentation is great, but sometimes I need to "see" the logic before it clicks. To break through the wall, I went back to basics: markers and paper. 🖍️

I hand-wrote the code for a loop I was building for a recent analytics project. I used different colors to map out exactly where each "piece" was being used and how the data was flowing through the loop.

Is the syntax 100% perfect? Probably not. Was it a "waste of time"? For some, maybe. But for me, it was the "Aha!" moment I needed. Now, when I’m stuck, I have this color-coded cheatsheet to ground me. The more I build, the more the official documentation starts to feel like a second language rather than a foreign one.

The takeaway: don't apologize for how you learn. Whether it's hand-drawn diagrams, rubber duck debugging, or color-coded markers—do what helps YOU grow.

#DataAnalytics #Python #LearningJourney #DataVisualization