The Magic of the Mirror: Image Flipping with NumPy! 🤳🔄 Day 85/100

Ever wondered how your phone 'mirrors' your selfies instantly? It’s just one line of array slicing! For Day 85 of my #100DaysOfCode, I explored Image Flipping and Mirroring. In Computer Vision, an image is just a matrix; to flip it, we simply reverse the order in which we read the rows or columns.

Technical Highlights:
🔄 Axis Reversal: Mastering the [::-1] slicing syntax to reverse array indices without complex loops.
🤳 Mirror Logic: Implementing horizontal flips to simulate the front-camera 'selfie' experience.
🌊 Vertical Reflection: Creating water-surface reflection effects by reversing the row order of 2D matrices.
🤖 AI Data Augmentation: Learning how flipped images are used in Machine Learning to double the size of training datasets and reduce model bias.

Do check my GitHub repository here: https://lnkd.in/d9Yi9ZsC

#100DaysOfCode #ComputerVision #NumPy #Python #BTech #IILM #AIML #ImageProcessing #DataAugmentation #SoftwareEngineering #LearningInPublic #WomenInTech
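The row/column reversal described above can be sketched in a few lines; a toy 2×3 array stands in for a real image here:

```python
import numpy as np

# Toy 2x3 "image": each row is a scanline of pixel intensities.
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# Horizontal flip (mirror / selfie view): reverse the column order.
mirrored = img[:, ::-1]

# Vertical flip (water reflection): reverse the row order.
reflected = img[::-1, :]

print(mirrored)   # [[3 2 1], [6 5 4]]
print(reflected)  # [[4 5 6], [1 2 3]]
```

The same slices work unchanged on a real (H, W, 3) RGB array, since the color axis is left untouched.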
Controlling the Light: Brightness Adjustment with NumPy! 💡🌓 Day 87/100

In Computer Vision, light isn't a feeling; it's an addition. For Day 87 of my #100DaysOfCode journey, I explored the mathematics of Exposure and Brightness. I learned that making a photo 'pop' is actually a simple matter of scalar addition across a 3D matrix. But the real skill lies in clipping: ensuring the math doesn't break the boundaries of 8-bit color depth.

Technical Highlights:
💡 Scalar Offsetting: Using NumPy broadcasting to shift the global intensity of an image by adding or subtracting constant values.
🛡️ Value Clipping: Implementing np.clip to prevent numerical overflow, ensuring pixels never exceed 255 or drop below 0.
⚡ Performance Vectorization: Avoiding slow Python loops and using direct array operations for real-time image manipulation.
🤖 Preprocessing for AI: Understanding how brightness normalization helps ML models recognize objects in varying lighting conditions.

Do check my GitHub repository here: https://lnkd.in/d9Yi9ZsC

#100DaysOfCode #ComputerVision #NumPy #Python #BTech #IILM #AIML #ImageProcessing #DataScience #SoftwareEngineering #LearningInPublic #WomenInTech
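A minimal sketch of the offset-then-clip pattern, using a toy 2-D grayscale array and an assumed offset of +60 (the same code broadcasts over a 3-D RGB array):

```python
import numpy as np

# Toy grayscale image in the 8-bit range.
img = np.array([[10, 200],
                [250, 100]], dtype=np.uint8)

# Add the offset in a wider dtype so the sum can't wrap around at 255,
# then clip back into [0, 255] and cast to uint8.
bright = np.clip(img.astype(np.int16) + 60, 0, 255).astype(np.uint8)

print(bright)  # [[ 70 255], [255 160]]
```

Note the widening cast: adding 60 directly to a uint8 array would silently overflow (250 + 60 wraps to 54), which is exactly the failure mode np.clip alone cannot fix.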
🚀 Day 45 of My Learning Journey – NumPy Shape & Reshape

Today, I explored how to work with array dimensions using NumPy, focusing on shape and reshape.

🔹 Key Learnings:
✔️ shape
Helps to identify the dimensions of an array
Example: (3, 2) → 3 rows and 2 columns
✔️ Modifying shape
We can directly change the structure of an array
Useful when reorganizing data
✔️ reshape()
Creates a new array with a different shape
Does NOT modify the original array
Very helpful in data preprocessing

🔹 Hands-on Task Completed:
Converted a list of 9 elements into a 3×3 matrix using NumPy.

💡 Takeaway: Understanding how to manipulate array dimensions is essential for data analysis, machine learning, and efficient problem-solving.

📌 Every small concept builds a stronger foundation!

#Day45 #Python #NumPy #LearningJourney #DataScience #Coding #StudentLife
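The hands-on task above (9 elements into a 3×3 matrix) can be sketched like this:

```python
import numpy as np

data = np.arange(1, 10)      # 1D array of 9 elements: 1..9
print(data.shape)            # (9,)

matrix = data.reshape(3, 3)  # new 3x3 arrangement; `data` keeps its shape
print(matrix.shape)          # (3, 3)
print(matrix)
```

As the post notes, reshape leaves the original array's shape untouched; the element count must match exactly (9 = 3 × 3), or NumPy raises an error.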
✅ Day 97 of 100 Days LeetCode Challenge

Problem:
🔹 #1281 – Subtract the Product and Sum of Digits of an Integer
🔗 https://lnkd.in/gxTAZc6U

Learning Journey:
🔹 Today’s problem involved extracting the digits of a number and performing two operations simultaneously.
🔹 I initialized two variables: one for the product (pr) and one for the sum (sm).
🔹 Using a while loop, I extracted each digit with n % 10.
🔹 Updated the product by multiplying in the digit and the sum by adding it.
🔹 Reduced the number using integer division (n //= 10) after each step.
🔹 Finally returned the difference between the product and the sum.

Concepts Used:
🔹 Digit Extraction
🔹 While Loop
🔹 Arithmetic Operations
🔹 Number Manipulation

Key Insight:
🔹 Both product and sum can be computed in a single traversal of the digits.
🔹 Efficient use of modulus and division avoids converting the number to a string.

Complexity:
🔹 Time: O(d), where d is the number of digits
🔹 Space: O(1)

#LeetCode #Algorithms #DataStructures #CodingInterview #100DaysOfCode #Python #ProblemSolving #LearningInPublic #TechCareers
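The loop described above can be sketched as follows (the function name is mine, not LeetCode's required signature):

```python
def subtract_product_and_sum(n: int) -> int:
    pr, sm = 1, 0
    while n > 0:
        digit = n % 10   # extract the last digit
        pr *= digit      # fold it into the product
        sm += digit      # fold it into the sum
        n //= 10         # drop the last digit
    return pr - sm

print(subtract_product_and_sum(234))  # 2*3*4 - (2+3+4) = 15
```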
Inspired by Ron Kohavi’s and Evan Miller’s practical experimentation principles and tooling, I built a simple A/B Test Sample Size Calculator with Plotly Dash + statsmodels.

Repo: https://lnkd.in/gkNvnkin

Built quickly through vibe coding, but grounded in solid stats: a fixed-horizon two-sample z-test for proportions, alpha/power controls, effect-size views, and runtime estimation. After years in experimentation, this automates a planning task I’ve repeated across various products and domains. It’s an example of what experience plus fast execution with AI assistance can produce.

#ABTesting #Experimentation #ProductAnalytics #Growth #DataScience #OpenSource #Python #Plotly #Dash #Statsmodels #VibeCoding
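For readers curious about the math behind such a calculator: a rough stdlib-only sketch of the standard fixed-horizon two-proportion sample-size formula (this is my own illustration, not the repo's code, which uses statsmodels):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm n for a two-sided two-sample z-test of proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)          # critical value for the test
    z_beta = z.inv_cdf(power)                   # quantile for desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)         # sum of Bernoulli variances
    return ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Detecting a lift from 10% to 12% conversion at alpha=0.05, power=0.8:
n = sample_size_per_arm(0.10, 0.12)
print(n)  # roughly 3.8k users per arm
```

This unpooled-variance form is one of several common approximations; statsmodels' power classes give slightly different numbers depending on the variance convention chosen.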
✅ Day 89 of 100 Days LeetCode Challenge

Problem:
🔹 #1480 – Running Sum of 1D Array
🔗 https://lnkd.in/gSTZrxF7

Learning Journey:
🔹 Today’s problem focused on computing the running (prefix) sum of an array.
🔹 Instead of using an extra array, I optimized the solution by modifying the input array in-place.
🔹 Starting from index 1, I updated each element as: nums[i] += nums[i-1]
🔹 This way, each index stores the cumulative sum up to that point.
🔹 Finally, returned the modified array.

Concepts Used:
🔹 Prefix Sum
🔹 In-place Computation
🔹 Array Traversal

Key Insight:
🔹 The previous element already stores the prefix sum, so we can reuse it directly.
🔹 Eliminates the need for extra space while maintaining linear time.

Complexity:
🔹 Time: O(n)
🔹 Space: O(1)

#LeetCode #Algorithms #DataStructures #CodingInterview #100DaysOfCode #Python #ProblemSolving #LearningInPublic #TechCareers
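The in-place update above fits in a few lines:

```python
def running_sum(nums: list[int]) -> list[int]:
    # In-place prefix sum: each slot reuses the cumulative total before it.
    for i in range(1, len(nums)):
        nums[i] += nums[i - 1]
    return nums

print(running_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```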
Today, I focused on working with NumPy arrays, building a solid foundation for data manipulation and analysis. Here’s what I practiced:
🔹 Created a 1D array with values from 1 to 15
🔹 Built a 2D array (3×4) filled with ones
🔹 Generated a 3×3 identity matrix
🔹 Explored key array properties like shape, type, and dimensions
🔹 Converted a regular Python list into a NumPy array

This session helped me better understand how data is structured and handled in numerical computing. Getting comfortable with arrays is a crucial step toward more advanced data analysis and machine learning tasks.

Looking forward to building on this momentum 💡

#AI #MachineLearning #Python #NumPy #DataAnalysis #M4ACE
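The exercises listed above map to one-liners:

```python
import numpy as np

a = np.arange(1, 16)            # 1D array with values 1..15
ones = np.ones((3, 4))          # 3x4 array filled with ones
eye = np.eye(3)                 # 3x3 identity matrix
from_list = np.array([1, 2, 3]) # regular Python list -> NumPy array

# Key array properties: shape, element type, number of dimensions.
print(a.shape, ones.shape, eye.ndim, from_list.dtype)
```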
✅ Day 95 of 100 Days LeetCode Challenge

Problem:
🔹 #869 – Reordered Power of 2
🔗 https://lnkd.in/gkNXaSFM

Learning Journey:
🔹 Today’s problem focused on checking whether the digits of a number can be rearranged to form a power of 2.
🔹 I created a helper function to extract and store the digits of a number.
🔹 Then I sorted the digits of the input number for comparison.
🔹 Next, I generated powers of 2 iteratively and compared their sorted digit lists with the input.
🔹 If any match was found, I returned True. Otherwise, I continued until the digit length exceeded the input's.

Concepts Used:
🔹 Digit Extraction
🔹 Sorting
🔹 Simulation of Powers of 2
🔹 Brute Force Optimization

Key Insight:
🔹 Instead of generating permutations (which is expensive), sorting digits allows quick comparison.
🔹 Any valid rearrangement must have the same digit frequency as some power of 2.

Complexity:
🔹 Time: O(log n · d log d), where d is the number of digits
🔹 Space: O(d)

#LeetCode #Algorithms #DataStructures #CodingInterview #100DaysOfCode #Python #ProblemSolving #LearningInPublic #TechCareers
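A compact sketch of the same idea; for brevity it sorts string digits rather than using the numeric digit-extraction helper the post describes:

```python
def reordered_power_of_2(n: int) -> bool:
    target = sorted(str(n))            # digit multiset of the input
    p = 1
    # Only powers of 2 with the same-or-fewer digit count can match.
    while len(str(p)) <= len(str(n)):
        if sorted(str(p)) == target:   # same digits in some order?
            return True
        p *= 2
    return False

print(reordered_power_of_2(46))  # True: 46 rearranges to 64
print(reordered_power_of_2(10))  # False
```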
Recently, I implemented both Linear Regression and Logistic Regression from scratch using Python and NumPy, emphasizing vectorization and mathematical understanding.

For both models, I utilized two feature vectors and created animated visualizations of the learning process, illustrating how the decision boundary (for Logistic Regression) and regression plane/line (for Linear Regression) evolve step by step during gradient descent. A major goal was to ensure the code was scalable, efficient, and mathematically transparent.

Key aspects of the implementation include:
🔹 Fully vectorized code, avoiding unnecessary loops
🔹 Significant speed improvement due to vectorization, enhancing training efficiency
🔹 Generalization from 2 features to n-feature input spaces
🔹 Structured for straightforward future implementation of Lasso (L1) and Ridge (L2) regularization
🔹 Visualization currently limited to 2D feature space for interpretability

Building these models from scratch deepened my understanding of the underlying mathematics, particularly gradient descent, cost functions, normalization, decision boundaries, and parameter updates. Writing the algorithm myself proved to be a more insightful learning experience than simply using a library.

Code can be found at: https://lnkd.in/ghqdDxMg

#MachineLearning #LinearRegression #LogisticRegression #LearningByBuilding
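To give a flavor of the vectorized approach, here is a minimal gradient-descent sketch for the linear case (my own toy single-feature example, not the repo's code; note the forward pass and gradients are pure matrix operations, no per-sample loops):

```python
import numpy as np

def fit_linear(X: np.ndarray, y: np.ndarray,
               lr: float = 0.1, epochs: int = 2000):
    """Vectorized gradient descent on mean squared error."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        pred = X @ w + b                # vectorized forward pass over all m samples
        err = pred - y
        w -= lr * (X.T @ err) / m       # MSE gradient w.r.t. weights
        b -= lr * err.mean()            # MSE gradient w.r.t. bias
    return w, b

# Toy data generated from y = 2x + 1.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])
w, b = fit_linear(X, y)
print(w, b)  # approaches [2.0] and 1.0
```

Because `X` is (m, n), the same function handles n-feature inputs unchanged, which is the generalization property the post highlights.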
Recently, I worked on a small machine learning project on Fitness Class Attendance Prediction. The goal was to predict whether a member would attend a class, using a complete workflow from raw data to final model evaluation.

The project included:
🔹 cleaning inconsistent data formats
🔹 handling missing values
🔹 encoding categorical variables
🔹 preparing preprocessing pipelines
🔹 training and comparing multiple models

Models tested: KNN, Decision Tree, SVM, and Naive Bayes.

What I found interesting was that the “best” model depended on how performance was judged:
🔹 Naive Bayes gave the best F1-score on the main split
🔹 SVM gave the highest accuracy
🔹 Decision Tree looked like the most stable option when the test size changed

A good reminder that model selection should not depend on one metric only.

GitHub Repo: https://lnkd.in/d8_ADgY5

Projects like this keep showing me how important it is to combine clean data, correct preprocessing, and thoughtful evaluation to reach a solid conclusion.

#MachineLearning #DataAnalytics #Python #ScikitLearn #ClassificationModels #DataScienceProjects
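The accuracy-vs-F1 tension described above is easy to reproduce on a toy imbalanced set (made-up predictions, not the project's data): a cautious model can win on accuracy while an eager model wins on F1.

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(p == 1 and t == 0 for t, p in zip(y_true, y_pred))
    fn = sum(p == 0 and t == 1 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

y_true  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 3 attendees out of 10
model_a = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # cautious: misses two attendees
model_b = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # eager: three false positives

print(accuracy(y_true, model_a), f1(y_true, model_a))  # 0.8, 0.5
print(accuracy(y_true, model_b), f1(y_true, model_b))  # 0.7, ~0.667
```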
🚀 Stop iterating through rows like it’s 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:

for i in range(len(df)):
    df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:

import numpy as np
df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn’t just a "nice to have": it’s the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we eliminated the Python overhead entirely.

If you're still using `.iterrows()` or manual loops for column transformations, it’s time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest "bottleneck" function you’ve refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
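For anyone who wants to try the pattern without a DataFrame handy, here is a self-contained toy version on plain NumPy arrays (my own illustration; pandas columns are backed by arrays like these, so np.where applies the same way):

```python
import numpy as np

# Stand-ins for the df['val'] and df['flag'] columns from the post.
val = np.array([100.0, 200.0, 300.0])
flag = np.array([True, False, True])

# Elementwise conditional, evaluated in C: where flag is set,
# boost val by 5%; otherwise keep it as-is.
score = np.where(flag, val * 1.05, val)
print(score)  # approximately [105. 200. 315.]
```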