I recently updated a script for a rock-paper-scissors game to log every game outcome to a CSV file and ran a small exploratory analysis using pandas. Here's what I examined:
- Total rounds played
- Win counts and win percentages
- User choice distribution
This process reflects a fundamental data workflow: data generation → storage → analysis. Although the dataset is small, the emphasis was on building the habit of turning raw events into structured, analyzable data. You can find the updated repository on GitHub: https://lnkd.in/ea3fxBbi #DataScience #DataAnalytics #Python #Pandas #LearningInPublic #GitHub
Rock-Paper-Scissors Data Analysis with Python and Pandas
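A minimal sketch of that generation → storage → analysis loop. The file name and column names here are illustrative assumptions, not the repo's actual schema:

```python
import csv
import pandas as pd

# Assumed log schema: one row per round (column names are illustrative).
rows = [
    {"user_choice": "rock", "computer_choice": "scissors", "result": "win"},
    {"user_choice": "paper", "computer_choice": "scissors", "result": "loss"},
    {"user_choice": "rock", "computer_choice": "rock", "result": "tie"},
    {"user_choice": "scissors", "computer_choice": "paper", "result": "win"},
]

# Storage: append-style CSV logging of every game outcome.
with open("game_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user_choice", "computer_choice", "result"])
    writer.writeheader()
    writer.writerows(rows)

# Analysis: the three questions from the post.
df = pd.read_csv("game_log.csv")
total_rounds = len(df)                          # total rounds played
win_pct = (df["result"] == "win").mean() * 100  # win percentage
choice_dist = df["user_choice"].value_counts()  # user choice distribution

print(total_rounds, win_pct)
print(choice_dist.to_dict())
```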
More Relevant Posts
Starting my journey into Pandas for data analysis. In my first lesson, I worked hands-on with a real dataset and explored:
• Reading CSV files with Pandas
• Understanding the DataFrame structure
• Exploring columns and inspecting data
• Getting familiar with a real-world survey dataset
I documented the process and shared both the code and detailed notes:
📓 Notebook (code): https://lnkd.in/eZzEX394
📝 Notes (explanations): https://lnkd.in/efvh2ApQ
I'll continue this series and share each step as I progress. #Python #Pandas #DataAnalytics #DataScienceJourney #LearningInPublic
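Those first steps can be sketched like this. The tiny inline dataset stands in for the real survey file, which lives in the linked notebook:

```python
import io
import pandas as pd

# A tiny stand-in for the survey dataset (the real file is in the notebook link).
csv_text = """respondent_id,country,years_coding,uses_pandas
1,Germany,3,yes
2,Brazil,5,yes
3,India,1,no
"""

# Reading a CSV into a DataFrame.
df = pd.read_csv(io.StringIO(csv_text))

print(df.shape)          # (rows, columns): the DataFrame's structure
print(list(df.columns))  # exploring column names
print(df.dtypes)         # inspecting inferred data types
print(df.head(2))        # a first look at the data itself
```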
Making Head()s and Tail()s of Your Data 🐼📊 Ever feel overwhelmed when first looking at a massive dataset? You don't need to load the whole thing to get a feel for it. That's where two of my favorite functions in the pandas library come in!
df.head(): quickly shows the first 5 rows of your DataFrame by default, providing an initial glimpse into the structure and data types.
df.tail(): conversely, displays the last 5 rows, which is super helpful for checking recently added data or final entries.
It's a simple yet powerful trick every data professional uses to start their data exploration and analysis on the right foot. #DataScience #Python #Pandas #DataAnalytics #DataManipulation #SQL #MachineLearning #LearningJourney Abhishek Kumar Harsh Chalisgaonkar SkillCircle™
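Both functions in two lines, on a throwaway DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"value": range(100)})

print(df.head())   # first 5 rows by default
print(df.tail(3))  # last 3 rows (the row count n is configurable)
```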
🚀 New Video on Pandas Advanced – Part 2
I've just published the next episode in the Pandas Advanced series, focused on practical performance optimization and large-data handling: exactly how data analysts work with real datasets in the wild. In this video you'll learn how to:
• Measure memory usage
• Optimize data types
• Process large CSV files in chunks
• Use vectorized operations
• Speed up groupby & filtering
🎥 Watch here 👉 https://lnkd.in/gg7p262m
📂 Code on GitHub: https://lnkd.in/gNFk2iPa
If you're serious about Python data analytics, this advanced workflow will level up your skills. #Pandas #Python #DataAnalytics #DataScience #PyAIHub #ProfessionalDevelopment
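The techniques in that list can be sketched on synthetic data (the column names and sizes are made up for illustration; the video's own code is on GitHub):

```python
import io
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": np.random.default_rng(0).choice(["Berlin", "Lagos", "Lima"], size=10_000),
    "sales": np.random.default_rng(1).integers(0, 1000, size=10_000),
})

# 1) Measure memory usage (deep=True counts the Python strings in object columns).
before = df.memory_usage(deep=True).sum()

# 2) Optimize data types: low-cardinality strings -> category, ints -> smaller width.
df["city"] = df["city"].astype("category")
df["sales"] = df["sales"].astype("int16")
after = df.memory_usage(deep=True).sum()

# 3) Process a large CSV in chunks instead of loading it all at once.
csv_buf = io.StringIO(df.to_csv(index=False))
total = sum(chunk["sales"].sum() for chunk in pd.read_csv(csv_buf, chunksize=2_500))

# 4) Vectorized comparison (no Python loop) and a groupby aggregation.
df["high"] = df["sales"] > 500
summary = df.groupby("city", observed=True)["sales"].mean()

print(before, after, total)
```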
Last week, Pandas 3 was released 🚀👇🏼 Fun fact: the name pandas 🐼 comes from PANel DAta (and "Python data analysis").
Release highlights 🎯
✅ Dedicated string data type by default: string columns are now inferred as the new str dtype instead of object, providing better performance and type safety
✅ Consistent copy/view behaviour with Copy-on-Write (CoW), a.k.a. getting rid of the SettingWithCopyWarning: more predictable and consistent behavior for all operations, with improved performance by avoiding unnecessary copies
✅ New default resolution for datetime-like data: no longer defaulting to nanoseconds, but generally microseconds (or the resolution of the input) when constructing datetime or timedelta data, avoiding out-of-bounds errors for dates before 1678 or after 2262
✅ New pd.col syntax: initial support for pd.col() as a simplified syntax for creating callables in DataFrame.assign
Some features are potentially breaking changes. ⛓️💥
Installation: python -m pip install --upgrade pandas==3.0.0
More details are in the release notes: https://lnkd.in/gTCvwEdh #data #python #datascience
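The Copy-on-Write change is the easiest highlight to see in a few lines. A minimal sketch, assuming pandas 2.x or later (on 2.x CoW is opt-in via an option; on pandas 3 it is simply the default):

```python
import pandas as pd

# On pandas 2.x, Copy-on-Write is opt-in; on pandas 3 it is always on
# (the option may no longer exist there, so guard the assignment).
if pd.__version__.startswith("2"):
    pd.options.mode.copy_on_write = True

df = pd.DataFrame({"score": [10, 20, 30]})
subset = df[df["score"] > 10]
subset["score"] = 0  # under CoW this writes to subset's own copy, never to df

print(df["score"].tolist())  # the parent DataFrame is untouched
```

No SettingWithCopyWarning, and no ambiguity about whether `df` was mutated.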
Column transformation + groupby changed how I analyze data 📊
Raw data doesn't give insights. Prepared data does. While working with Pandas, I realized how powerful simple column transformations are:
• Cleaning percentage columns and converting them to numeric
• Creating new logic-based columns (BONUS vs NO BONUS)
• Adding derived columns instead of touching raw data
Once the columns made sense, groupby unlocked the patterns. Grouping by department and aggregating values revealed insights that were invisible at the row level.
Big lesson:
➡️ Clean columns first
➡️ Group second
➡️ Insights follow
Question for data folks: do you transform your columns before groupby, or did you learn this the hard way? 😅 #DataAnalytics #Python #Pandas #GroupBy #LearningInPublic
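A sketch of the three steps on invented data (the column names and the 85% bonus threshold are assumptions for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "department": ["Sales", "Sales", "IT", "IT"],
    "target_hit": ["75%", "90%", "60%", "95%"],
})

# Step 1: clean the percentage column and convert it to numeric.
df["target_hit_num"] = df["target_hit"].str.rstrip("%").astype(float)

# Step 2: add a derived, logic-based column instead of touching the raw data.
df["bonus"] = df["target_hit_num"].ge(85).map({True: "BONUS", False: "NO BONUS"})

# Step 3: group by department and aggregate; patterns appear above row level.
summary = df.groupby("department")["target_hit_num"].mean()
print(summary.to_dict())
```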
Stop Googling "How do I do this SQL GROUP BY in Pandas?" 🛑
SQL and Python are the twin pillars of data, but switching contexts kills productivity. I created this side-by-side cheat sheet to stop the syntax struggle. Inside:
✅ Select & Filter: the basics, translated
✅ Joins: inner, outer, left, and right made simple
✅ Aggregations: grouping logic for both
✅ Null handling: COALESCE vs .fillna()
Fluency in both is a data superpower. 🦸♂️
♻️ Repost to help a connection stop tab-switching today! Follow Mohammad Imran Hasmey for more related posts. #DataScience #SQL #Python #Coding #CheatSheet
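A few of those translations in code form, with the SQL shown as comments (table and column names are invented for the example):

```python
import pandas as pd

emps = pd.DataFrame({"id": [1, 2, 3], "dept": ["IT", "HR", None], "salary": [60, 50, 55]})
depts = pd.DataFrame({"dept": ["IT", "HR"], "floor": [2, 3]})

# SELECT id, salary FROM emps WHERE salary > 52
filtered = emps.loc[emps["salary"] > 52, ["id", "salary"]]

# SELECT * FROM emps LEFT JOIN depts USING (dept)
joined = emps.merge(depts, on="dept", how="left")

# SELECT dept, AVG(salary) FROM emps GROUP BY dept
avg = emps.groupby("dept")["salary"].mean()

# COALESCE(dept, 'Unassigned')
emps["dept"] = emps["dept"].fillna("Unassigned")

print(filtered["id"].tolist())
```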
𝐒𝐩𝐨𝐫𝐭𝐬 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐃𝐚𝐲 44: 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧
Today's analysis focused on inspecting a sports dataset: evaluating and optimizing memory usage by converting object columns to categorical types, renaming and querying specific fields, and exporting a cleaned subset for further use. It highlighted how data type management directly impacts performance and efficiency.
𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞 #Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #DataAnalytics #LearnInPublic #GitHub #Data #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #SQL #Learning #ostinatorigore
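That pipeline can be sketched end to end on a toy sports table (the dataset, column names, and output file name are assumptions):

```python
import pandas as pd

df = pd.DataFrame({
    "Team Name": ["Lions", "Lions", "Hawks", "Hawks"] * 250,
    "Pts": [88, 92, 79, 95] * 250,
})

# Inspect and optimize memory: repeated strings compress well as 'category'.
before = df["Team Name"].memory_usage(deep=True)
df["Team Name"] = df["Team Name"].astype("category")
after = df["Team Name"].memory_usage(deep=True)

# Rename fields and query a specific subset.
df = df.rename(columns={"Team Name": "team", "Pts": "points"})
strong = df.query("points > 90")

# Export the cleaned subset for further use.
strong.to_csv("strong_games.csv", index=False)
print(before, after, len(strong))
```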
Next step for the rock-paper-scissors project: simulate thousands of games automatically and visualize outcome distributions.
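A possible starting point for that simulation, assuming both players pick uniformly at random (plotting is left as a comment since it needs matplotlib):

```python
import random

import pandas as pd

CHOICES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(rng: random.Random) -> str:
    """Play one round between two uniformly random players; report the result."""
    user, computer = rng.choice(CHOICES), rng.choice(CHOICES)
    if user == computer:
        return "tie"
    return "win" if BEATS[user] == computer else "loss"

rng = random.Random(42)  # seeded for reproducibility
results = pd.Series([play_round(rng) for _ in range(10_000)])

# Outcome distribution; with fair random play each bucket should be near 1/3.
dist = results.value_counts(normalize=True)
print(dist.round(3).to_dict())
# dist.plot.bar() would visualize this (requires matplotlib).
```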