𝐃𝐚𝐲 13 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today was about slicing and extracting meaning from structured data using NumPy.

✔️ Created arrays from lists and transposed them in a single step
✔️ Used indexing and slicing to extract specific rows and subarrays
✔️ Retrieved individual data points while preserving correct data types
✔️ Isolated related attributes (age and gender) using slicing
✔️ Extracted a single feature for visualization
✔️ Plotted a histogram from a sliced array with proper labels

Key takeaway: slicing is what turns raw arrays into focused insights and clear visualizations.

Day 13 complete. Building fluency one operation at a time.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
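The steps above can be sketched roughly as follows; the names, ages, and genders are made-up sample data, not from the original exercise:

```python
import numpy as np

# Hypothetical sample records: name, age, gender
records = [["Ada", 36, "F"], ["Grace", 45, "F"], ["Alan", 41, "M"]]

# Create the array from lists and transpose in a single step:
# rows become attributes, columns become records
data = np.array(records).T

names = data[0]               # slice out one attribute row
ages = data[1].astype(int)    # retrieve values with the correct data type
age_gender = data[1:3]        # isolate two related attributes at once

# For the histogram step, something like:
# import matplotlib.pyplot as plt
# plt.hist(ages); plt.xlabel("Age"); plt.ylabel("Count"); plt.show()
```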
𝐃𝐚𝐲 16 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today’s focus was on Pandas Series and basic DataFrame creation, exploring how Series can simplify analysis and preparation of data.

✔️ Created a Pandas Series and counted the occurrences of each item
✔️ Checked for the presence of specific values in the Series
✔️ Extracted all unique values from the Series
✔️ Updated the Series by inserting new items at specific indices
✔️ Converted the Series into a DataFrame and inspected its shape and dimensions

Key takeaway: Pandas Series provide a flexible structure for handling labeled data, and converting them to DataFrames allows for more advanced analysis.

Day 16 complete. Building fluency with Pandas step by step.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
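One way these Series operations might look; the fruit names are illustrative placeholders:

```python
import pandas as pd

s = pd.Series(["apple", "banana", "apple", "cherry"])

counts = s.value_counts()                 # occurrences of each item
has_banana = bool((s == "banana").any())  # presence check for a value
uniques = s.unique()                      # all unique values

s.loc[4] = "date"                         # insert a new item at a specific index

df = s.to_frame(name="fruit")             # convert the Series to a DataFrame
shape, ndim = df.shape, df.ndim           # inspect shape and dimensions
```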
𝐃𝐚𝐲 20 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today’s focus was on exploring data and visualizing insights using Pandas and Matplotlib.

✔️ Created a DataFrame to organize product data
✔️ Identified the most profitable product and visualized it with a bar plot
✔️ Determined the least profitable product and calculated the profit difference
✔️ Plotted costs and profits across all products using a line chart
✔️ Calculated average cost and average profit per product

Insight: using Pandas for both analysis and quick visualizations, alongside Matplotlib for more detailed plots, makes it easier to interpret data and communicate insights effectively.

Day 20 complete.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
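A compact sketch of this workflow with invented product figures (the real dataset is not shown in the post):

```python
import pandas as pd

df = pd.DataFrame({
    "product": ["A", "B", "C"],
    "cost":    [10, 25, 15],
    "profit":  [5, 12, 3],
})

most_profitable = df.loc[df["profit"].idxmax(), "product"]   # best product
least_profitable = df.loc[df["profit"].idxmin(), "product"]  # worst product
profit_gap = df["profit"].max() - df["profit"].min()         # profit difference

avg_cost = df["cost"].mean()
avg_profit = df["profit"].mean()

# Quick visuals (Pandas plotting uses Matplotlib under the hood):
# df.plot.bar(x="product", y="profit")
# df.plot.line(x="product", y=["cost", "profit"])
```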
𝐃𝐚𝐲 14 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today’s focus was on analyzing and extracting insights directly from arrays using NumPy.

✔️ Combined multiple lists into a single structured NumPy array
✔️ Filtered names and scores using conditional logic
✔️ Queried specific student records by name
✔️ Identified the highest score and the student who achieved it
✔️ Determined the longest name in the dataset and its index

Key takeaway: NumPy alone is powerful enough to perform meaningful data analysis when you understand indexing, slicing, and Boolean logic.

Day 14 complete. Onward with clarity and discipline.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
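Those array queries could look something like this; the students and scores are invented for illustration:

```python
import numpy as np

names = np.array(["Ada", "Grace", "Alan", "Katherine"])
scores = np.array([88, 95, 79, 91])

high_scorers = names[scores > 85]          # Boolean filtering
grace_score = scores[names == "Grace"][0]  # query a record by name

top_idx = scores.argmax()                  # highest score and who achieved it
top_student, top_score = names[top_idx], scores[top_idx]

name_lengths = np.char.str_len(names)      # longest name and its index
longest_idx = int(name_lengths.argmax())
longest_name = names[longest_idx]
```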
𝐃𝐚𝐲 22 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today was about doing the unseen but critical work that makes analysis reliable: preprocessing.

✔️ Counted missing values across columns to understand data quality
✔️ Compared two strategies for handling missing data: dropping vs. imputing with column means
✔️ Updated existing data by adding new attributes and reshaping it from wide to long format
✔️ Used melt() to make the dataset more analysis-friendly
✔️ Applied conditional filtering with where() to isolate valid records
✔️ Standardized column headers for consistency and readability

Key insight: preprocessing decisions directly shape the quality of insights you can extract. How you handle missing values, structure data, and standardize formats often matters more than the analysis itself.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
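A rough sketch of these preprocessing steps on a tiny invented table (column names and the 80-point validity threshold are assumptions for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Name": ["Ada", "Grace", "Alan"],
    "Math": [90, np.nan, 70],
    "Physics": [np.nan, 85, 95],
})

missing_per_col = df.isna().sum()                # missing values per column

dropped = df.dropna()                            # strategy 1: drop incomplete rows
imputed = df.fillna(df.mean(numeric_only=True))  # strategy 2: impute column means

# Reshape from wide to long with melt()
long = imputed.melt(id_vars="Name", var_name="subject", value_name="score")

# Conditional filtering with where(): invalid scores become NaN
valid_scores = long["score"].where(long["score"] >= 80)

long.columns = [c.lower() for c in long.columns]  # standardize headers
```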
𝐃𝐚𝐲 15 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today’s focus was on analyzing structured data from a CSV using NumPy, with emphasis on slicing and conditional logic.

✔️ Imported CSV data using genfromtxt() and transposed the array
✔️ Extracted names, ages, and gender into separate arrays using slicing
✔️ Filtered records based on age conditions
✔️ Counted subsets of data using Boolean logic
✔️ Isolated records by gender and calculated group averages
✔️ Computed average ages for specific names

Key takeaway: with proper slicing and conditions, NumPy can efficiently handle real datasets and answer practical analytical questions without higher-level libraries.

Day 15 complete. Momentum continues.

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
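Sketched here with an inline stand-in for the CSV file; the actual file and its columns are assumptions:

```python
import io
import numpy as np

# Assumed CSV structure: name,age,gender
csv = io.StringIO("Ada,36,F\nGrace,45,F\nAlan,41,M\nAda,30,F")

data = np.genfromtxt(csv, delimiter=",", dtype=str).T  # import and transpose

names, ages, genders = data[0], data[1].astype(int), data[2]

over_40 = names[ages > 40]                    # filter by an age condition
n_female = int(np.sum(genders == "F"))        # count a subset with Boolean logic
avg_age_female = ages[genders == "F"].mean()  # group average by gender
avg_age_ada = ages[names == "Ada"].mean()     # average age for a specific name
```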
DSA — LeetCode #760: Anagram Mapping

I worked on a classic Data Structures & Algorithms problem and added it to my LeetCode journey.

Problem Statement
Given two integer arrays s and t, where t is an anagram of s, return an array mapping such that mapping[i] is the index in t where the value s[i] appears. If a value occurs multiple times, any valid index can be returned.

Approach
- Use a hash map to store indices of elements in array t.
- Traverse t and map each value to its index (or list of indices).
- Iterate through array s and fetch the corresponding index from the map.
- Build the result array using these indices.

Example
Input: s = [16, 5, 12, 10], t = [5, 10, 12, 16]
Output: [3, 0, 2, 1]

Complexity
✅ Time Complexity: O(n)
✅ Space Complexity: O(n)

#DSA #LeetCode #ProblemSolving #Python #CodingInterview #Algorithms #DataStructures #HashMap #Anagram #CodingPractice #TechJourney #SoftwareEngineering #InterviewPrep #LearnToCode
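The approach above in code. Since any valid index is accepted, a plain dict that keeps one index per value is enough:

```python
def anagram_mappings(s, t):
    # Map each value in t to its index; for duplicate values the later
    # index overwrites the earlier one, which is fine here because the
    # problem accepts any valid index.
    index_of = {value: i for i, value in enumerate(t)}
    return [index_of[value] for value in s]

result = anagram_mappings([16, 5, 12, 10], [5, 10, 12, 16])  # → [3, 0, 2, 1]
```

One pass over t to build the map and one pass over s to read it: O(n) time and O(n) space, matching the stated complexity.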
📊 Python Data Visualization Cheat Sheet

Data tells a story — visualization is how we make it speak.

This cheat sheet brings together the most-used plots from Matplotlib and Seaborn, all in one place for quick reference and daily practice. From line plots and bar charts to heatmaps and KDEs, these are the visuals every data analyst and data scientist should feel comfortable with.

Simple concepts, strong foundations. 🚀 Save it, revisit it, and keep building clarity through visuals.

#Python #DataVisualization #Matplotlib #Seaborn #DataScience #DataAnalytics #EDA #LearningInPublic #TechSkills #Consistency
When KPIs suddenly look amazing, it’s tempting to celebrate 😅 Then my data reflex says: confirm the level of detail first.

If our data is more detailed than we think, joins/merges and aggregations can quietly multiply rows and inflate metrics with zero errors.

In PySpark/Python, I quickly check it with a groupBy(key).count() to spot duplicates, by comparing row counts before vs. after the transformation, and by sanity-checking a small sample end-to-end.

Moral of the story: celebrate after the checks, not before.

#DataEngineering #PySpark #Python #DataQuality
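The same reflex sketched in pandas as a stand-in for the PySpark version; the table and key name are invented:

```python
import pandas as pd

# We *think* this is one row per order_id -- that's the assumption to check
df = pd.DataFrame({"order_id": [1, 2, 2, 3], "amount": [10, 20, 20, 5]})

# PySpark equivalent: df.groupBy("order_id").count().filter("count > 1").show()
key_counts = df.groupby("order_id").size()
duplicate_keys = key_counts[key_counts > 1]   # keys that appear more than once

# Compare row counts before vs. after deduplicating on the key
rows_before = len(df)
rows_after = len(df.drop_duplicates(subset="order_id"))
```

If `duplicate_keys` is non-empty or the counts differ, the grain is finer than assumed and any sums or averages downstream are suspect.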
Column transformation + groupby changed how I analyze data 📊

Raw data doesn’t give insights. Prepared data does.

While working with Pandas, I realized how powerful simple column transformations are:
• Cleaning percentage columns and converting them to numeric
• Creating new logic-based columns (BONUS vs NO BONUS)
• Adding derived columns instead of touching raw data

Once the columns made sense, groupby unlocked the patterns. Grouping by department and aggregating values revealed insights that were invisible at the row level.

Big lesson:
➡️ Clean columns first
➡️ Group second
➡️ Insights follow

Question for data folks: Do you transform your columns before groupby — or learn this the hard way? 😅

#DataAnalytics #Python #Pandas #GroupBy #LearningInPublic
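A minimal version of that transform-then-group flow, with made-up HR-style data (the 4% bonus threshold is an arbitrary placeholder):

```python
import pandas as pd

df = pd.DataFrame({
    "department": ["Sales", "Sales", "IT", "IT"],
    "raise_pct":  ["5%", "2%", "8%", "3%"],
})

# Clean the percentage column into a numeric derived column (raw kept intact)
df["raise_num"] = df["raise_pct"].str.rstrip("%").astype(float)

# Logic-based column: BONUS vs NO BONUS
df["bonus"] = df["raise_num"].apply(lambda r: "BONUS" if r >= 4 else "NO BONUS")

# Group second: patterns invisible at row level
avg_raise_by_dept = df.groupby("department")["raise_num"].mean()
```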
The Modern Developer's Power Couple: Data Science & Free Hosting! 🚀

If you are looking to build and deploy data-driven applications in 2026, these two infographics cover the essentials:

- The Python Data Science Stack: from the foundations of NumPy and Pandas to advanced AI with PyTorch and TensorFlow.
- The Deployment Layer: 20 incredible platforms like Vercel, Netlify, and Railway that let you host your projects for free.

Whether you're performing complex EDA with Seaborn or deploying a fast API on Render, the barrier to entry has never been lower.

Which hosting platform is your favorite for side projects? 👇

#Python #DataScience #WebDevelopment #MachineLearning #CloudComputing #OpenSource