🚀 I Tested Excel Logic Using Python + Pytest (Real QA Use Case)

Most people use Excel formulas. But how many actually validate them using automation? 👀

Here’s what I built 👇

🔹 Scenario: A simple nested IF condition in Excel to evaluate student performance

=IF(A2>=90,"Excellent",IF(A2>=75,"Good",IF(A2>=50,"Average","Fail")))

Instead of trusting Excel blindly… I validated it using Python 🐍

💡 Approach:
✅ Read Excel data using openpyxl
✅ Recreate the logic in Python
✅ Use pytest for validation
✅ Parameterize test cases dynamically

🧪 Full Code:

```python
# utils.py
from openpyxl import load_workbook

def get_test_data(file_path):
    wb = load_workbook(file_path)
    sheet = wb.active
    data = []
    for row in sheet.iter_rows(min_row=2, values_only=True):
        marks, expected = row
        data.append((marks, expected))
    return data
```

```python
# test_excel_validation.py
import pytest
from utils import get_test_data

def evaluate_marks(marks):
    if marks >= 90:
        return "Excellent"
    elif marks >= 75:
        return "Good"
    elif marks >= 50:
        return "Average"
    else:
        return "Fail"

@pytest.mark.parametrize("marks, expected", get_test_data("students.xlsx"))
def test_marks_evaluation(marks, expected):
    actual_result = evaluate_marks(marks)
    assert actual_result == expected, \
        f"Mismatch for {marks}: Expected {expected}, Got {actual_result}"
```

🔥 Why this matters:
• Validates business logic outside Excel
• Prevents hidden formula errors
• Demonstrates real QA automation skills
• Fully scalable (just add rows!)

💬 If you're in QA / Automation — this is the kind of project that stands out in interviews.

Want more real-world automation ideas like this? Drop a 👍 or comment "MORE"

#QA #AutomationTesting #Python #Pytest #DataDrivenTesting #SDET #Testing
Validating Excel Logic with Python and Pytest
More Relevant Posts
🐍 **Python support is coming to AI MR Reviewer — this Sunday.**

After rolling out Java and JavaScript, the next major milestone is here. This Sunday, Python joins the lineup — bringing the same fast, inline, severity-based PR reviews your team already relies on, now for Python codebases. No setup. No delays. Just actionable feedback the moment you open a PR.

**Here's what the Python analyzer covers in v1:**

🔴 **HIGH — Security & Critical Issues**
→ SQL injection via string formatting or concatenation
→ Use of `eval()` / `exec()`
→ Hardcoded secrets — API keys, tokens, passwords
→ `subprocess` calls with `shell=True`
→ Insecure deserialization (`pickle` / `yaml.load` without a safe loader)
→ Bare `except:` blocks — silent failure risks
→ Debug mode left enabled in production frameworks

🟡 **MID — Code Quality & Maintainability**
→ `print()` statements in production code
→ Broad exception handling without specificity
→ Mutable default arguments in functions
→ Long functions or too many parameters
→ Missing context managers (`open()` without `with`)
→ Deprecated libraries or patterns (basic detection)

🔵 **LOW — Clean Code & Hygiene**
→ `TODO` / `FIXME` / `HACK` comments
→ Magic numbers without named constants
→ Non-descriptive variable or function names
→ Unused imports (basic detection)
→ Inconsistent naming — PEP 8 signal detection

**Every rule is built around one principle: low noise, high signal.** Your team only sees what truly matters.

⚡ **Same experience. Extended to Python.**
✅ Inline comments directly on PR diffs
✅ Clear severity levels — HIGH / MID / LOW
✅ Instant feedback within seconds of opening a PR

🚀 Going live this Sunday, InshaAllah. If your team works with Python daily — this is built for you.

👉 Install now: https://lnkd.in/dNaHtm2J
🌐 Learn more: primeoctopus.com

#Python #CodeReview #AI #DeveloperTools #StaticAnalysis #DevEx #Automation #GitHub #OpenSource
Python/Pandas tip 🐼: pipe() for transformation pipelines

Have you ever made several transformations in a DataFrame and ended up with something like this?

```python
df = df[df["preco"] > 100]
df["total_value"] = df["preco"] * df["quantity"]
df = df.sort_values("total_value", ascending=False)
```

It works, but as the code grows, it becomes difficult to read and maintain 😅

The pipe() method allows you to chain transformations, passing the DataFrame from one step to another. It receives a function and applies this function to the current DataFrame.

📌 In the example of the image we apply three transformations: filter, create a new column, and sort the data. With pipe(), we can organize these steps as a clear and sequential pipeline.

🚀 Why use it?
⤷ improves readability
⤷ facilitates maintenance
⤷ organizes complex transformations
⤷ ideal for data pipelines

💡 Tip: pipe() makes your code much cleaner in real projects.

💬 Do you already use pipe() to replace step-by-step transformations?

#Pandas #python #datascience #dataanalysis #pythoncode
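Since the image with the pipe() version isn't visible here, the three steps could look like the following sketch (the helper function names are my own; the `preco`/`quantity` column names mirror the snippet above):

```python
import pandas as pd

def filter_expensive(df, min_price):
    # Step 1: keep only rows above the price threshold.
    return df[df["preco"] > min_price]

def add_total_value(df):
    # Step 2: derive total_value from price and quantity.
    return df.assign(total_value=df["preco"] * df["quantity"])

def sort_by_total(df):
    # Step 3: highest total first.
    return df.sort_values("total_value", ascending=False)

df = pd.DataFrame({
    "preco": [50, 120, 200],
    "quantity": [3, 2, 1],
})

# Each pipe() call receives the DataFrame produced by the previous step.
result = (
    df.pipe(filter_expensive, min_price=100)
      .pipe(add_total_value)
      .pipe(sort_by_total)
)
```

Each step is a small, named, individually testable function, which is where the readability gain comes from.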
🚀 Getting Started with Testing in Python using Pytest!

Testing is no longer optional—it's a must-have skill for anyone working with data, APIs, or production systems. One of the most powerful tools for Python testing is pytest.

🔹 What is Pytest?
pytest is a Python testing framework that allows you to write simple, scalable, and readable test cases using plain assert statements. It automatically discovers tests, runs them, and gives clear reports.

🔹 Why Pytest is Powerful
✔ Minimal boilerplate code
✔ Automatic test discovery
✔ Rich ecosystem (fixtures, plugins)
✔ Easy debugging with detailed failure output

🔹 Simple Example

```python
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
```

Run with: pytest

💡 Use Cases in Data Engineering
As a data engineer, testing is critical to ensure data quality, pipeline reliability, and system stability.

📊 Common real-world use cases:

1️⃣ ETL Pipeline Testing
Validate data extraction from APIs or databases
Ensure transformations (cleaning, filtering) are correct
Verify data loads correctly into warehouses

2️⃣ Data Validation
Check for null values, duplicates, schema mismatches
Ensure business rules are applied correctly

3️⃣ API Testing
Test data ingestion APIs
Validate response formats and status codes

4️⃣ Data Quality Checks
Compare expected vs actual datasets
Ensure no data loss during processing

🔥 Why This Matters in Industry
Companies rely on automated testing to:
✔ Prevent pipeline failures
✔ Catch bugs early
✔ Maintain trust in data systems
✔ Enable faster deployments

📌 Pro Tip: Use fixtures (conftest.py) in pytest to create reusable test data—this is exactly how large-scale systems are tested in production environments.

💬 If you're preparing for Data Engineering or Backend roles, mastering pytest can give you a strong edge in interviews and real-world projects.

#Python #Pytest #DataEngineering #Testing #Automation #ETL #SoftwareDevelopment #Backend #LearnPython #Codebasics
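To make the data-validation use case concrete, here is one possible sketch of a check pytest could run (the `validate_records` helper and its rules are my own illustration of the null/duplicate checks mentioned above, not from the post):

```python
def validate_records(records):
    """Return a list of issues found in a batch of records:
    missing ids, duplicate ids, and null values."""
    issues = []
    seen_ids = set()
    for rec in records:
        if rec.get("id") is None:
            issues.append("missing id")
        elif rec["id"] in seen_ids:
            issues.append(f"duplicate id: {rec['id']}")
        else:
            seen_ids.add(rec["id"])
        if rec.get("value") is None:
            issues.append(f"null value for id {rec.get('id')}")
    return issues

# A pytest-style test: plain assert, no boilerplate class needed.
def test_validate_records_flags_duplicates():
    records = [{"id": 1, "value": 10}, {"id": 1, "value": 20}]
    assert validate_records(records) == ["duplicate id: 1"]
```

In a real project this file would be discovered and run automatically by `pytest`, and the expected/actual values would show up in the failure report.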
🚀 **Built an Advanced Bug Tracker using Python (and learned a LOT!)**

Today I worked on a hands-on mini project: **Advanced Bug Tracker** 🐞
It might look simple, but it helped me understand some very important real-world concepts.

---

## 🔧 What I implemented:

✔️ Add Bug (id, title, severity, status)
✔️ Show only **open bugs**
✔️ Filter bugs by severity
✔️ Close bug by ID
✔️ Delete bug by ID

---

## 📚 What I learned from this project:

🔹 **Class & Object**

* Created a `Bug` class to structure data properly

🔹 **File Handling**

* Used `"a"`, `"r"`, `"w"` modes
* Stored and retrieved structured data from `bugs.txt`

🔹 **Filtering Logic**

* Implemented conditions like:
  * show only open bugs
  * filter by severity (low/medium/high)

🔹 **String Processing**

* Used `split(",")` to parse file data

---

## 🔥 New Things I Learned (Game Changer!)

### 🧨 1. Delete operation in a file (very important)

➡️ You **can’t directly delete** a specific line from a file
✔️ Learned the proper way:

* Read all lines
* Skip the target line
* Rewrite the file

💡 This was a big “aha” moment for me!

---

### 🛟 2. Backup system using `shutil`

Before deleting, I added:

```python
import shutil
shutil.copy("bugs.txt", "bugs_backup.txt")
```

👉 Now I have a backup in case something goes wrong
➡️ This felt like a **real-world production practice** 🔥

---

### 🧩 3. Debugging a confusing issue (very important lesson)

❌ Problem: Data was being stored, but when I opened `bugs.txt`, it looked empty!

✔️ What I discovered:

* Python was creating the file in a **different directory (working-directory issue)**

👉 Solved it by:

```python
import os
print(os.getcwd())
```

💡 Learned:
➡️ Always check the **current working directory**

---

### 📁 4. Proper project structure

Finally fixed everything by:

* Opening the **entire folder in VS Code**
* Running `main.py` from the correct location

✔️ Now data is correctly stored in my manually created `bugs.txt` file

---

## 🎯 Key Takeaways:

👉 File handling is trickier than it looks
👉 Small bugs can teach big concepts
👉 Debugging is where real learning happens
👉 Backup before delete = pro mindset

---

💬 Next plan:

* Add an update feature
* Build a menu-driven CLI
* Maybe convert it into a small app 😎

---

#Python #LearningByDoing #BugTracker #FileHandling #OOP #BeginnerToPro #CodingJourney
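The read-skip-rewrite pattern described above (with the backup step first) can be sketched like this. The `delete_bug` helper, the demo file, and the sample records are my own illustration of the technique, not the post's actual code:

```python
import os
import shutil
import tempfile

def delete_bug(file_path, bug_id):
    # Files can't drop a single line in place, so: back up the file,
    # read every line, skip the one whose first field matches bug_id,
    # then rewrite the whole file.
    shutil.copy(file_path, file_path + ".bak")   # backup before the destructive write
    with open(file_path) as f:
        lines = f.readlines()
    kept = [ln for ln in lines if ln.split(",")[0] != str(bug_id)]
    with open(file_path, "w") as f:
        f.writelines(kept)
    return len(lines) - len(kept)                # number of lines removed

# Demo in a temp directory with made-up records (id,title,severity,status):
path = os.path.join(tempfile.mkdtemp(), "bugs_demo.txt")
with open(path, "w") as f:
    f.write("1,Login crash,high,open\n2,Typo in footer,low,open\n")
removed = delete_bug(path, 1)
with open(path) as f:
    remaining = f.read()
```

Using an absolute or script-relative path (rather than a bare filename) also sidesteps the working-directory surprise from lesson 3.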
I've just published a new guide: "BeautifulSoup Web Scraper: A Beginner’s Guide to Scraping Web Data to CSV". Whether you're a student or a seasoned developer looking to automate data tasks, this guide shows you how to fetch, parse, and save web data efficiently using modern Python tools like uv.
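The fetch-parse-save flow the guide covers could look roughly like this sketch, assuming the `beautifulsoup4` package is installed. The HTML snippet is a made-up stand-in for a fetched page (a real script would download it first, e.g. with requests or httpx), and the CSV goes to an in-memory buffer here rather than a file:

```python
import csv
import io
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for a fetched page.
html = """
<table>
  <tr><th>Name</th><th>Price</th></tr>
  <tr><td>Widget</td><td>9.99</td></tr>
  <tr><td>Gadget</td><td>19.99</td></tr>
</table>
"""

# Parse the table rows, skipping the header row.
soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.find_all("tr")[1:]:
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    rows.append(cells)

# Write the parsed rows as CSV (swap io.StringIO for open("out.csv", "w")
# in a real script).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Name", "Price"])
writer.writerows(rows)
csv_text = buf.getvalue()
```

Separating the fetch, parse, and save stages like this makes each one easy to swap out or test on its own.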
🚀 Python Series – Day 26: JSON in Python (Handle API Data Like a Pro!)

Yesterday, we learned APIs in Python 🌐
Today, let’s learn how Python works with the most common data format used in APIs: JSON

What is JSON?
JSON stands for JavaScript Object Notation. It is a lightweight format used to store and exchange data.

📌 JSON is easy for humans to read and easy for machines to understand.

🔹 Where is JSON Used?
✔️ APIs
✔️ Web applications
✔️ Config files
✔️ Data exchange between systems

💻 Example of JSON Data

```json
{
  "name": "Mustaqeem",
  "age": 24,
  "skills": ["Python", "SQL", "Power BI"]
}
```

💻 Convert JSON to a Python Dictionary

```python
import json

data = '{"name":"Ali","age":22}'
result = json.loads(data)
print(result)
print(result["name"])
```

🔍 Output:
{'name': 'Ali', 'age': 22}
Ali

💻 Convert a Python Dictionary to JSON

```python
import json

student = {
    "name": "Sara",
    "age": 23
}
json_data = json.dumps(student)
print(json_data)
```

🔍 Output:
{"name": "Sara", "age": 23}

🎯 Why is JSON Important?
✔️ Used in almost every API
✔️ Easy data exchange format
✔️ Important for Web Development
✔️ Must-know for Data Science projects

⚠️ Pro Tip
👉 Learn dictionary concepts well, because JSON looks similar to Python dictionaries.

🔥 One-Line Summary
👉 JSON = standard format to store and exchange data

📌 Tomorrow: SQL with Python (Connect Python with Databases!)

Follow me to master Python step-by-step 🚀

#Python #JSON #API #WebDevelopment #DataScience #Coding #Programming #LearnPython #MustaqeemSiddiqui
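For the config-file use case mentioned above, json.dump and json.load work directly with file objects (loads/dumps handle strings). A small round-trip sketch, using a hypothetical config dict and a temp-directory path of my own choosing:

```python
import json
import os
import tempfile

# Hypothetical app config, illustrating the "config files" use case.
config = {"app": "demo", "debug": False, "retries": 3}

path = os.path.join(tempfile.mkdtemp(), "config_demo.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)   # serialize straight to a file

with open(path) as f:
    loaded = json.load(f)            # parse the file back into a dict
```

Note the naming convention: the s-suffixed pair (loads/dumps) works on strings, the plain pair (load/dump) on files.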
Working with Python and SQL together — a few things that made a difference for me

In most projects, SQL handles data well, and Python helps in controlling the flow and processing around it. While working with both, a few patterns consistently worked better.

🔹 Always push filtering to SQL

Instead of fetching everything and filtering in Python:

```python
rows = cursor.execute("SELECT * FROM orders")
filtered = [row for row in rows if row["status"] == "COMPLETE"]
```

Better to push it into SQL:

```sql
SELECT * FROM orders WHERE status = 'COMPLETE';
```

🔹 Use parameterized queries

Avoid building queries using string formatting:

```python
query = f"SELECT * FROM emp WHERE emp_id = {emp_id}"
```

Use bind variables instead:

```python
cursor.execute(
    "SELECT * FROM emp WHERE emp_id = :1",
    [emp_id]
)
```

🔹 Fetch data in manageable batches

Instead of loading everything at once:

```python
rows = cursor.fetchall()
```

Fetch in batches:

```python
rows = cursor.fetchmany(1000)
```

🔹 Let SQL handle data, Python handle flow

```python
cursor.execute("SELECT dept_id, COUNT(*) FROM emp GROUP BY dept_id")
for row in cursor:
    process(row)
```

SQL does the aggregation; Python handles the next step.

💡 What worked for me
Using Python and SQL together is less about replacing one with the other, and more about letting each do what it does best.

Curious to know — how do you usually split work between SQL and Python in your projects?

#Python #SQL #DataEngineering #OracleSQL #DatabaseDevelopment #CodingPractices
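The first three patterns fit in one self-contained demo. This sketch uses the stdlib sqlite3 module with made-up data (the `:1` bind style above is Oracle's; sqlite3 uses `?` placeholders, but the idea is identical):

```python
import sqlite3

# Throwaway in-memory database with sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "COMPLETE"), (2, "PENDING"), (3, "COMPLETE")],
)

# Filtering pushed into SQL, with a bound parameter instead of an f-string.
cur = conn.execute("SELECT id FROM orders WHERE status = ?", ("COMPLETE",))

# Fetch in manageable batches rather than one big fetchall().
ids = []
while True:
    batch = cur.fetchmany(2)   # tiny batch size just for the demo
    if not batch:
        break
    ids.extend(row[0] for row in batch)
```

Only matching rows ever cross the database boundary, the bind parameter rules out injection, and memory stays bounded by the batch size.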
🧠 What is a Descriptor in Python?

A descriptor is any object that defines one or more of these methods:

* `__get__()` → access an attribute
* `__set__()` → set an attribute
* `__delete__()` → delete an attribute

👉 In simple terms: descriptors control how attributes behave in a class.

Why It Matters
Whenever you use @property, you’re already using descriptors under the hood. Yes — even Django models rely heavily on descriptors.

Example:

```python
class Descriptor:
    def __get__(self, instance, owner):
        print("Getting value")
        return instance._value

    def __set__(self, instance, value):
        print("Setting value")
        instance._value = value

class MyClass:
    attr = Descriptor()

obj = MyClass()
obj.attr = 10      # calls __set__
print(obj.attr)    # calls __get__
```

What’s Happening Internally?
When you do obj.attr, Python does NOT directly fetch the value. 👉 It checks: does attr define __get__? If yes → call the descriptor logic.

Real-World Use Cases
Descriptors are used in:
✅ @property (getter/setter logic)
✅ Django ORM fields (models.CharField)
✅ Data validation frameworks
✅ Lazy loading attributes
✅ Caching values

Example (Validation):

```python
class PositiveNumber:
    def __set__(self, instance, value):
        if value < 0:
            raise ValueError("Must be positive")
        instance.__dict__['value'] = value

    def __get__(self, instance, owner):
        return instance.__dict__.get('value', 0)

class Product:
    price = PositiveNumber()

p = Product()
p.price = 100   # ✅
p.price = -10   # ❌ Error
```

👉 Clean validation without cluttering your class.

Have you ever used descriptors directly, or only via @property?

#Python #AdvancedPython #OOP #Django #SoftwareEngineering #BackendDevelopment #infosys #citi #SoftwareDevelopment
🚀 Day 2/30 — SQL + Python Deep Dive
Subqueries (Correlated vs Non-Correlated)

👉 Basics are done. Pipelines are built.
👉 Now we go deeper — into how SQL really executes and how Python scales.

You’ve used subqueries before… But do you know:
👉 how they actually run?
👉 why some are slow?

🔹 What is a Subquery?
👉 A query inside another query

🔹 1. Non-Correlated Subquery
👉 Runs once; the result is reused

```sql
SELECT name FROM employees
WHERE salary > (
    SELECT AVG(salary) FROM employees
);
```

👉 Inner query runs once → outer query uses the result

🔹 2. Correlated Subquery
👉 Runs for each row of the outer query

```sql
SELECT name FROM employees e1
WHERE salary > (
    SELECT AVG(salary) FROM employees e2
    WHERE e1.department = e2.department
);
```

👉 Inner query runs again & again 😐

🔹 Key Difference
Non-correlated → runs once ⚡
Correlated → runs per row 🐢

🔹 Why This Matters
Big impact on performance
Correlated queries can be slow
Often replaced using a JOIN or CTE

🔹 Real Insight
👉 If your query is slow… check whether it’s a correlated subquery

💡 Quick Summary
Subqueries are powerful… but execution matters more than syntax.

💡 Something to remember
Same logic… different execution → different performance.

#SQL #Python #DataEngineering #LearningInPublic #TechLearning
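The JOIN rewrite hinted at above can be tried end-to-end with the stdlib sqlite3 module. This sketch uses a made-up employees table; the derived-table JOIN computes each department's average once instead of per row, yet returns the same names as the correlated form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Asha", "eng", 120), ("Ben", "eng", 80),
     ("Cara", "sales", 90), ("Dev", "sales", 60)],
)

# Correlated subquery: conceptually, the inner AVG runs per outer row.
correlated = [r[0] for r in conn.execute("""
    SELECT name FROM employees e1
    WHERE salary > (SELECT AVG(salary) FROM employees e2
                    WHERE e1.department = e2.department)
""")]

# JOIN rewrite: compute each department's average once, then join.
joined = [r[0] for r in conn.execute("""
    SELECT e.name
    FROM employees e
    JOIN (SELECT department, AVG(salary) AS avg_sal
          FROM employees GROUP BY department) d
      ON e.department = d.department
    WHERE e.salary > d.avg_sal
""")]
```

Both queries return the employees paid above their own department's average; on large tables, the JOIN form typically scales much better.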
Python Series – Day 29: Email Automation with Python (Send Emails Automatically!)

Yesterday, we learned Excel Automation 📊
Today, let’s learn how to automate one of the most common real-world tasks:
👉 Email Automation

What is Email Automation?
👉 Email Automation means sending emails automatically using Python. Instead of sending emails manually one by one, Python can handle it for you ⚡

Where is it Used?
✔️ Sending reports 📊
✔️ Notifications 🔔
✔️ OTP / verification codes 🔐
✔️ Marketing emails 📧
✔️ Alerts & reminders ⏰

Libraries Used
✔️ `smtplib` → send emails
✔️ `email` → format email content

💻 Example: Send an Email

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("Hello, this is a test email")
msg["Subject"] = "Python Email Automation"
msg["From"] = "your_email@gmail.com"
msg["To"] = "receiver_email@gmail.com"

server = smtplib.SMTP("smtp.gmail.com", 587)
server.starttls()
server.login("your_email@gmail.com", "your_password")
server.send_message(msg)
server.quit()
```

Output: the email is sent successfully to the receiver.

🎯 Why is Email Automation Important?
✔️ Saves time
✔️ Reduces manual work
✔️ Useful in business & automation
✔️ Used in real-world applications

Pro Tip
Use an App Password instead of your real email password for security.

One-Line Summary
Email Automation = send emails automatically using Python

📌 Tomorrow: Python Interview Questions (Top Questions + Answers!)

Follow me to master Python step-by-step 🚀

#Python #Automation #EmailAutomation #smtplib #Coding #Programming #DataScience #LearnPython #MustaqeemSiddiqui