Minitask Session 12: Basic Python

From this session, I learned about:
1. Writing structure (syntax)
2. Data type flexibility, such as:
- String (text)
- Integer (numbers)
- List (arrays, e.g., ['Coding', 'Data Analysis'])
- Boolean (True/False, e.g., True)
- Nested dictionary (a dictionary inside another dictionary, e.g., the personality data)
3. How to retrieve data from a dictionary using a key

Thanks to Muslar Alibasya as my mentor, and to MySkill. #Dataanalytics #Myskill
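For instance, a minimal sketch of those data types and of retrieving values from a nested dictionary by key (the names and values here are illustrative, not from the session):

```python
# A nested dictionary: one dictionary stored inside another.
person = {
    "name": "Andi",                          # string (text)
    "age": 24,                               # integer (number)
    "skills": ["Coding", "Data Analysis"],   # list (array)
    "is_student": True,                      # boolean (True/False)
    "personality": {                         # nested dictionary
        "trait": "curious",
        "strength": "consistency",
    },
}

# Retrieve data using a key.
print(person["name"])                    # Andi
print(person["personality"]["trait"])    # curious
```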
Learning Python Basics with Minitask Session 12
More Relevant Posts
-
Working with Integers & Floating-Point Numbers

As I continue building strong Python fundamentals, I have been focusing on how numeric data types work, specifically integers (int) and floating-point numbers (float).

- Integers represent whole numbers, while floats handle decimal values. Python automatically infers the type based on the assigned value.
- Practiced core arithmetic operations such as addition, subtraction, multiplication, and division, noting that standard division always returns a float.
- Learned that mixing integers and floats in calculations automatically promotes the result to a float.
- Explored advanced numeric operations like modulo (%), floor division (//), and exponentiation (**), which are commonly used in analytical computations.
- Worked with type conversion using int() and float() to transform numbers and numeric strings into the required formats.
- Reviewed helpful built-in functions like round(), abs(), and pow() for rounding, absolute values, and exponentiation.

#PythonBasics #NumbersInPython #DataAnalyticsJourney #LearningInPublic #Upskilling
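A quick sketch of these numeric behaviors (the values are illustrative):

```python
# Python infers the numeric type from the assigned value.
a = 7        # int
b = 2.5      # float
print(type(a), type(b))   # <class 'int'> <class 'float'>

# Standard division always returns a float, even for two ints.
print(7 / 2)              # 3.5

# Mixing int and float promotes the result to float.
print(a + b)              # 9.5

# Modulo, floor division, and exponentiation.
print(7 % 2, 7 // 2, 7 ** 2)   # 1 3 49

# Type conversion, including numeric strings.
print(int(3.9))           # 3 (truncates toward zero)
print(float("2.5"))       # 2.5

# Built-in helpers for rounding, absolute values, and powers.
print(round(3.14159, 2))  # 3.14
print(abs(-4))            # 4
print(pow(2, 10))         # 1024
```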
-
Getting Comfortable with Data Types

Lately, I have been strengthening my Python fundamentals by understanding how different kinds of data are represented and handled in the language - a key concept when working with real-world data.

- Python automatically identifies data types based on assigned values, making it flexible and easy to work with.
- Explored commonly used data types such as integers, floats, strings, Booleans, lists, tuples, sets, dictionaries, range, and None.
- Learned how to inspect variable types using the built-in type() function.
- Also practiced checking data types using isinstance() to avoid unexpected runtime issues.

These basics play an important role in writing clean, error-free code and handling data effectively.

#PythonLearning #DataTypes #DataAnalyticsJourney #LearningInPublic #Upskilling
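A minimal sketch of inspecting types with type() and guarding with isinstance() (the sample values are my own):

```python
# One example of each commonly used built-in type.
samples = [42, 3.14, "hello", True, [1, 2], (1, 2), {1, 2}, {"k": "v"}, range(3), None]

for value in samples:
    print(repr(value), "->", type(value).__name__)

# isinstance() lets you check a type before acting on it,
# which avoids surprises at runtime.
x = "123"
if isinstance(x, str):
    x = int(x)               # convert only when we know it is a string
print(x, type(x).__name__)   # 123 int
```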
-
These days, I’m working on Python by breaking problems into small, logical steps instead of just writing code.

What I’m actively practicing:
• Writing clean logic using if-else
• Automating repetitive tasks with loops
• Organizing data using lists & dictionaries
• Creating reusable code with functions

The goal isn’t to memorize syntax. The goal is to think clearly, solve better, and analyze data smarter.

Slow progress, real progress.

#PythonJourney #ProblemSolving #DataAnalytics #BCA #LearningByDoing #GrowthMindset
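A tiny example that puts all four habits together (the data is made up):

```python
# Reusable function: average of a list of scores.
def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

# Organizing data: a dictionary mapping subjects to lists of scores.
grades = {"math": [78, 85, 92], "english": [88, 74]}

# A loop automates the repetitive work; if-else handles the decision.
for subject, scores in grades.items():
    avg = average(scores)
    if avg >= 80:
        print(f"{subject}: {avg:.1f} (pass)")
    else:
        print(f"{subject}: {avg:.1f} (needs work)")
```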
-
I’ve just wrapped up a Python project that dives deep into the fundamentals of data handling and system design. While libraries like Pandas are powerful, building a management system from scratch is a fantastic way to sharpen logic and understand the "why" behind the code.

Features of the System:
- Robust Error Handling: Designed the system to gracefully handle missing files, empty datasets, and invalid non-numeric entries, ensuring a crash-proof user experience.
- Object-Oriented Programming (OOP): Built a modular DataSet class to encapsulate data loading, statistical logic, and reporting into a reusable structure.
- Algorithmic Logic: Implemented custom loops and arithmetic operators to calculate totals, averages, and min/max values without relying on external packages.
- Automated Reporting: The system concludes by generating a clean, exported report file summarizing the performance analytics.

https://lnkd.in/gJWTwRV2

#Unilorinngr #AbidoyeAbdulmujeeb #GSE301
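The actual code lives behind the link above; purely as a sketch of the shape such a DataSet class might take (the class layout, method names, and file format here are my assumptions, not the project's):

```python
class DataSet:
    """Loads numeric values from a text file and reports basic statistics."""

    def __init__(self, path):
        self.path = path
        self.values = []

    def load(self):
        # Graceful handling of missing files and non-numeric entries.
        try:
            with open(self.path) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    try:
                        self.values.append(float(line))
                    except ValueError:
                        print(f"Skipping non-numeric entry: {line!r}")
        except FileNotFoundError:
            print(f"File not found: {self.path}")

    def report(self, out_path="report.txt"):
        if not self.values:   # empty dataset: report nothing, never crash
            print("No data to report.")
            return
        # Custom loop instead of built-in helpers, per the from-scratch goal.
        total, lo, hi = 0.0, self.values[0], self.values[0]
        for v in self.values:
            total += v
            if v < lo:
                lo = v
            if v > hi:
                hi = v
        stats = (
            f"Count:   {len(self.values)}\n"
            f"Total:   {total}\n"
            f"Average: {total / len(self.values)}\n"
            f"Min:     {lo}\n"
            f"Max:     {hi}\n"
        )
        with open(out_path, "w") as f:   # automated report export
            f.write(stats)
        print(stats, end="")
```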
-
𝐃𝐚𝐲 9 | 50 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐰𝐢𝐭𝐡 𝐏𝐲𝐭𝐡𝐨𝐧

Today’s focus was on analyzing one-dimensional NumPy arrays and understanding why arrays are preferred over plain Python lists in data analysis.

✔️ Created a 1D NumPy array from a list
✔️ Found the minimum value in the array
✔️ Retrieved the index of the maximum value
✔️ Calculated the average of the smallest and largest values
✔️ Computed mean, median, and standard deviation
✔️ Identified outliers using statistical logic

Key takeaway: NumPy arrays offer better performance, memory efficiency, and analytical flexibility than Python lists, which is why they’re foundational in scientific and data analysis workflows.

Day 9 complete. Steady progress continues. 📈

𝐎𝐬𝐭𝐢𝐧𝐚𝐭𝐨 𝐑𝐢𝐠𝐨𝐫𝐞

#Python #NumPy #DataAnalysis #DataScience #MachineLearning #ArtificialIntelligence #CodingJourney #LearnInPublic #GitHub #Programming #TechCommunity #DailyPractice #Consistency #DataDriven #50_days_of_data_analysis_with_python #ostinatorigore
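A compact sketch of those steps (the readings and the 2-sigma outlier rule are illustrative):

```python
import numpy as np

# 1D NumPy array created from a plain Python list.
readings = np.array([12.0, 15.5, 9.8, 14.2, 48.0, 11.3])

print(readings.min())      # minimum value: 9.8
print(readings.argmax())   # index of the maximum value: 4

# Average of the smallest and largest values.
print((readings.min() + readings.max()) / 2)   # 28.9

# Mean, median, and standard deviation.
mean, median, std = readings.mean(), np.median(readings), readings.std()
print(mean, median, std)

# One common outlier rule: more than 2 standard deviations from the mean.
print(readings[np.abs(readings - mean) > 2 * std])   # [48.]
```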
-
While learning data science, it’s easy to jump quickly into libraries and models. But I realized that many problems become simpler when the core Python logic is strong.

As part of this phase, I focused on advanced Python fundamentals - specifically control statements, loops, and functions - and practiced how they are used to build clean and flexible logic.

During this module, I worked on:
- Writing decision-based logic using if, elif, and else statements
- Using for and while loops to automate repetitive tasks and handle dynamic conditions
- Applying break and continue to control program flow effectively
- Defining and using functions to make code reusable, modular, and easier to maintain
- Understanding how functions, parameters, and return values help structure larger programs

Instead of treating these topics as syntax, I focused on how they fit together while solving problems, from simple condition checks to building reusable logic blocks using functions.

This module strengthened my understanding of how real-world data processing pipelines and analytical workflows rely heavily on well-structured Python logic before any libraries or models come into play. I’ll continue to build on this foundation as I move deeper into data analysis concepts.

The practice notebooks and examples for this module are documented here: https://lnkd.in/d5W-zHkj

#Python #Programming #DataScience #LearningJourney #ContinuousLearning
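The notebooks are at the link above; here is a small stand-alone example in the same spirit (the scenario is my own, not from the module):

```python
def first_valid_reading(readings, limit=100):
    """Return the first usable reading, combining if/elif/else,
    a for loop, and break/continue in one function."""
    for value in readings:
        if value is None:
            continue           # skip missing entries and keep looping
        elif value > limit:
            break              # stop scanning at the first out-of-range value
        else:
            return value       # found a valid reading
    return None                # nothing valid before the loop ended

data = [None, None, 42, 150, 7]
print(first_valid_reading(data))   # 42
```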
-
#Day13 of 365 Days of Code

Today I learned about queues and tuples and spent time understanding how they differ and when to use each.

Queues follow a first-in, first-out (FIFO) principle - the first item added is the first one removed. They’re useful when order matters, like task scheduling or processing requests in sequence.

Tuples, on the other hand, are ordered but immutable. Once created, their values can’t be changed, which makes them useful for storing data that shouldn’t be modified accidentally.

Learning these differences made it clearer that choosing the right data structure isn’t just about syntax - it’s about intent, safety, and how data flows through a program.

Still learning. Still showing up. On to Day 14.

#365DaysOfCode #Python #DataStructures #Queues #Tuples #LearningInPublic #Consistency #TechJourney
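A short illustration of both structures, using collections.deque as the queue (the task names are made up):

```python
from collections import deque

# Queue: first in, first out. deque gives O(1) appends and pops at both ends.
tasks = deque()
tasks.append("download report")    # enqueue
tasks.append("clean data")
print(tasks.popleft())             # "download report" leaves first (FIFO)

# Tuple: ordered but immutable.
point = (3, 4)
print(point[0])                    # indexing works: 3
try:
    point[0] = 99                  # mutation does not
except TypeError as err:
    print("TypeError:", err)
```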
-
Day 12 of Python

Today was all about going deeper into Pydantic fundamentals and understanding how data validation really works.

Today’s progress 👇
→ Pydantic foundations
→ Default conversions
→ Mixing Pydantic with typing
→ Validations with Field
→ Field & model validators

On to Day 13 ...

#pythonprogramming #pydantic #typesafety
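A minimal sketch touching each of those points, assuming Pydantic v2 (the User model is my own example):

```python
from typing import Optional
from pydantic import BaseModel, Field, field_validator, model_validator

class User(BaseModel):
    name: str
    age: int = Field(gt=0, le=120)     # constraint declared with Field
    nickname: Optional[str] = None     # mixing Pydantic with typing

    @field_validator("name")
    @classmethod
    def name_not_blank(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("name must not be blank")
        return v.strip()

    @model_validator(mode="after")
    def default_nickname(self):
        # Model-level rule that can look at several fields at once.
        if self.nickname is None:
            self.nickname = self.name.split()[0]
        return self

# Default conversion: the string "36" is coerced to the int 36.
user = User(name="Ada Lovelace", age="36")
print(user.age, user.nickname)   # 36 Ada
```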
-
File handling in Python is less about syntax and more about understanding data flow 📂

In this practice session, I worked through the complete lifecycle of a text file - creating it, reading its contents, appending new data, and then modifying specific lines by rewriting the file. The exercise reinforced how Python’s file modes (w, r, a) directly control data persistence and why careless use of write mode can overwrite existing content. Reading data as a whole versus line-by-line also highlighted how different approaches suit different use cases.

What made this exercise practical was treating the file like real data, not just text. Inserting a line at a specific position required reading into memory, modifying the structure, and writing it back - a common pattern when dealing with logs, reports, or configuration files.

This is foundational for handling larger datasets later on, especially when working with data engineering and Big Data workflows 🔄 Understanding file handling at this level builds confidence for working beyond in-memory data.

#Python #FileHandling #ProgrammingFundamentals #DataEngineeringBasics #CleanCode #LearningByDoing
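The full lifecycle in one small sketch (the file name and contents are illustrative):

```python
# "w" creates the file, and silently overwrites it if it already exists.
with open("notes.txt", "w") as f:
    f.write("line 1\nline 2\nline 3\n")

# "r": read everything at once...
with open("notes.txt", "r") as f:
    print(f.read())

# ...or line by line, which scales better for large files.
with open("notes.txt", "r") as f:
    for line in f:
        print(line.rstrip())

# "a" appends to the end without touching existing content.
with open("notes.txt", "a") as f:
    f.write("line 4\n")

# Inserting at a specific position: read into memory, modify, write back.
with open("notes.txt", "r") as f:
    lines = f.readlines()
lines.insert(1, "inserted after line 1\n")
with open("notes.txt", "w") as f:
    f.writelines(lines)
```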
-
🌟 New Blog Just Published! 🌟

📌 Python Billion-Row Data Analysis Made Easy with Vaex 🚀

📖 Imagine you receive a CSV file that contains 1.2 billion rows of sensor readings from a fleet of delivery trucks. Opening it in pandas on a laptop with 16 GB of RAM will likely trigger an......

🔗 Read more: https://lnkd.in/dyKcTYYu 🚀✨

#vaex #billion-rows #out-of-core
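The full walkthrough is in the post; as a rough sketch of the out-of-core pattern Vaex is built around (file and column names here are illustrative, not from the blog):

```python
import vaex

# convert=True streams the CSV into an HDF5 file on disk chunk by chunk,
# so the full dataset never has to fit in RAM at once.
df = vaex.from_csv("sensor_readings.csv", convert=True, chunk_size=5_000_000)

# Expressions are lazy: this defines a virtual column, nothing runs yet.
df["speed_kmh"] = df["speed_ms"] * 3.6

# Aggregations stream over the memory-mapped data.
print(df["speed_kmh"].mean())
print(df.groupby("truck_id", agg={"avg_speed": vaex.agg.mean("speed_kmh")}))
```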