Software Testing Best Practices

Explore top LinkedIn content from expert professionals.

  • View profile for Varun Badhwar

    Founder & CEO @ Endor Labs | Creator, SVP, GM Prisma Cloud by PANW

    23,459 followers

    As an industry, we’ve poured billions into #ZeroTrust for users, devices, and networks. But when it comes to software, the thing powering every modern business, we’ve made one glaring exception: OPEN SOURCE SOFTWARE!

    Every day, enterprises ingest unvetted, unauthenticated code from strangers on the internet. No questions asked. No provenance checked. No validation enforced. We assume OSS is safe because everyone uses it. But last week’s #npm attacks should be a wake-up call. That’s not Zero Trust. That’s blind trust.

    If 80% of your codebase is open source, it’s time to extend Zero Trust to the software supply chain. That means:
    • Pin every dependency.
    • Delay adoption of brand-new versions.
    • Pull trusted versions of OSS libraries where available. #Google's Assured OSS offering is a good one for this.
    • Assess health and risk of malicious behavior before you approve a package.
    • Don’t just scan for CVEs—ask if the code is actually exploitable. Use tools that give you evidence and control, not just noise.

    I wrote more about this in the blog linked 👇 You can’t have a Zero Trust architecture while implicitly trusting 80% of your code. It's time to close the gap and mandate Zero Trust for OSS.

    #OSS #npmattacks #softwaresupplychainsecurity
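    As one concrete illustration of the first step ("Pin every dependency"), a CI job can fail the build whenever an npm manifest still uses version ranges. A minimal, illustrative Python sketch, and not a complete policy check (real enforcement would also cover lockfiles, transitive dependencies, and provenance):

    ```python
    import json
    import re
    import sys

    # Flag npm version specs that are ranges rather than exact pins.
    RANGE_SPEC = re.compile(r"^[\^~><*]|^latest$")

    def unpinned(manifest_path="package.json"):
        with open(manifest_path) as f:
            pkg = json.load(f)
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        return {name: spec for name, spec in deps.items() if RANGE_SPEC.search(spec)}

    if __name__ == "__main__":
        offenders = unpinned()
        for name, spec in offenders.items():
            print(f"unpinned dependency: {name} -> {spec}")
        # Non-zero exit fails the CI job if any range specifier is found.
        sys.exit(1 if offenders else 0)
    ```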

  • View profile for Vitaly Friedman

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,931 followers

    ☂️ Designing For Edge Cases and Exceptions. Practical design guidelines to prevent dead-ends, lock-outs and other UX failures ↓

    🚫 People are never edge cases; “average” users don’t exist.
    ✅ Exceptions will occur eventually; it’s just a matter of time.
    ✅ To prevent failure, we need to explore unhappy paths early.
    ✅ Design the full UI stack: blank, loading, partial, error, ideal states.
    ✅ Design defaults deliberately to prevent slips and mistakes.
    ✅ Start by designing the core flow, then scrutinize every part of it.
    ✅ Allow users to override validators, or add an option manually.
    ✅ Design for incompatibility: contradicting filters, prefs, settings.
    🚫 Avoid generic error messages: they are often the main blockers.
    ✅ Suggest presets, templates, starter kits for quick recovery.
    ✅ Design for extreme scales: extra long/short, wide/tall, offline/slow.
    ✅ Design irreversible actions, e.g. Delete, Forget, Cancel, Exit.
    ✅ Allow users to undo critical actions for some period of time.
    ✅ Design a recovery UX for delays, lock-outs, missing data.
    ✅ Accessibility is a reliable way to ensure design resilience.

    Good design paves happy paths for everyone, but also casts a wide safety net when things go sideways. I love to explore unhappy paths by setting up a dedicated design review to discover exceptions proactively. It can be helpful to also ask AI tooling to come up with alternate scenarios. Once we start discussing exceptions, we start thinking outside of the box. We have to actively challenge the generic expectations, stereotypes and assumptions that we as designers typically embed in our work, often unconsciously. And to me, that’s one of the most valuable assets of such discussions.

    And: whenever possible, flag any mention of “average users” in your design discussions. Such people don’t exist; often the “average user” is merely an aggregated average of assumptions and hunches. Nothing stress-tests your UX better than testing it in realistic conditions, with realistic data sets, with real people.

    Useful resources:
    How To Fix A Bad User Interface, by Scott Hurff https://lnkd.in/ecj6PGPU
    How To Design Edge Cases, by Tanner Christensen https://lnkd.in/ecs3kr8z
    How To Find Edge Cases In UX, by Edward Chechique https://lnkd.in/e2pfqqen
    Just About Everyone Is an Edge Case, by Kevin Ferris https://lnkd.in/eDdUVHyj
    Edge Cases In UX, by Krisztina Szerovay https://lnkd.in/eM2Xynba

    Recommended books:
    – Design For Real Life, by Sara Wachter-Boettcher, Eric Meyer
    – The End of Average, by Todd Rose
    – Think Like a UX Researcher, by David Travis, Philip Hodgson
    – Mismatch: How Inclusion Shapes Design, by Kat Holmes

    #ux #design

  • View profile for Swami Sivasubramanian

    VP, AWS Agentic AI

    189,947 followers

    We’ve seen customers experience this pattern: teams ask an AI agent to fix a bug, and the agent refactors three helper functions, adds defensive null checks everywhere, and rewrites code that worked fine. The core problem is that devs and the agent aren't working with the same boundary between what to fix and what to leave alone.

    We built Kiro's bug-fixing workflow around something we call property-aware code evolution. Every bug fix has dual intent: fix the buggy behavior surgically, preserve everything else. But how does this work in practice? How does Kiro know which is which?

    Kiro first proposes a bug condition—the scenarios it believes trigger the bug—and the postcondition—what should happen instead if we didn’t have a bug. Based on this, Kiro creates two testable properties: the fix property, which checks if the fixed code works correctly on buggy inputs, and the preservation property, which ensures behavior is preserved everywhere else. You can iterate with Kiro over both properties until you’re comfortable with the agent’s hypothesis.

    Once that’s in place, Kiro first tests both properties against the unfixed code. Fix-property tests should fail, reproducing the bug exactly where predicted. Preservation tests should pass, capturing baseline behavior for the non-buggy scenarios. After gathering these results on the unfixed code, Kiro applies a fix and retests both properties. If the fix worked, both kinds of property tests should now pass, letting us know that we fixed the bug without changing anything else.

    Because all this is backed by property-based tests, Kiro generates and tests hundreds of variations that cover many edge cases to narrow down the problem and test the fix comprehensively. This approach gives teams the confidence to let Kiro work more autonomously without sacrificing understanding of what it’s doing to solve the problem.

    Our team dives into property-aware code evolution in this blog. Learn how to use agents to fix complex bugs more reliably with Kiro ➡️ https://lnkd.in/gWZkBcVX
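    For readers who want to see the dual-property idea in miniature, here is an illustrative sketch using Python and the Hypothesis library; it is not Kiro's implementation, and the buggy function, bug condition (an empty list), and postcondition are hypothetical:

    ```python
    # Minimal sketch of a "fix property" and a "preservation property".
    # Bug: average() crashes on an empty list; the fix should return 0.0 there.
    from hypothesis import given, strategies as st

    def average_buggy(xs):
        return sum(xs) / len(xs)                 # ZeroDivisionError when xs == []

    def average_fixed(xs):
        return sum(xs) / len(xs) if xs else 0.0  # surgical fix for the bug condition

    # Fix property: on the buggy input, the fixed code meets the postcondition.
    # Run against the unfixed code, this test fails, reproducing the bug.
    def test_fix_property():
        assert average_fixed([]) == 0.0

    # Preservation property: outside the bug condition, behavior is unchanged.
    # Hypothesis generates many non-empty inputs to check the baseline holds.
    @given(st.lists(st.floats(allow_nan=False, allow_infinity=False), min_size=1))
    def test_preservation_property(xs):
        assert average_fixed(xs) == average_buggy(xs)
    ```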

  • View profile for Paul Meredith

    I build start-up and scale-up fintechs. I help fintech CEOs deliver annual revenue growth of £15m+, by leading and optimising the change and delivery function

    12,848 followers

    The biggest businesses can get major programmes horribly wrong. Here are 4 famous examples, the fundamental reasons for failure, and how that might have been avoided.

    Hershey: Sought to replace its legacy IT systems with a more powerful ERP system. However, due to a rushed timeline and inadequate testing, the implementation encountered severe issues. Orders worth over $100 million were not fulfilled. Quarterly revenues fell by 19% and the share price by 8%.
    Key Failures:
    ❌ Rushed implementation without sufficient testing.
    ❌ Lack of clear goals for the transition.
    ❌ Inadequate attention and resource allocation.

    Hewlett Packard: Wanted to consolidate its IT systems into one ERP. They planned to migrate to SAP, expecting any issues to be resolved within 3 weeks. However, due to the lack of proper integration between the new ERP and the old systems, 20% of customer orders were not fulfilled. Insufficient investment in change management and the absence of manual workarounds added to the problems. The entire project cost HP an estimated $160 million in lost revenue and delayed orders.
    Key Failures:
    ❌ Failure to address potential migration complications.
    ❌ Lack of interim solutions and supply chain management strategies.
    ❌ Inadequate change management planning.

    MillerCoors: Spent almost $100 million on an ERP implementation to streamline procurement, accounting, and supply chain operations. There were significant delays, leading to the termination of the implementation partner and subsequent legal action. Mistakes included insufficient research on ERP options, choosing an inexperienced implementation partner, and the absence of capable in-house advisers overseeing the project.
    Key Failures:
    ❌ Inadequate research and evaluation of ERP options.
    ❌ Selection of an inexperienced implementation partner.
    ❌ Lack of in-house expertise and oversight.

    Revlon: Another ERP implementation disaster. Inadequate planning and testing disrupted production and caused delays in fulfilling customer orders across 22 countries. The consequences included a loss of over $64 million in unshipped orders, a 6.9% drop in share price, and investor lawsuits for financial damages.
    Key Failures:
    ❌ Insufficient planning and testing of the ERP system.
    ❌ Lack of robust backup solutions.
    ❌ Absence of a comprehensive change management strategy.

    Lessons to be learned:
    ✅ Thoroughly test and evaluate new software before deployment.
    ✅ Establish robust backup solutions to address unforeseen challenges.
    ✅ Design and implement a comprehensive change management strategy during the transition to new tools and solutions.
    ✅ Ensure sufficient in-house expertise is available; consider the capacity of those people as well as their expertise.
    ✅ Plan as much as is practical and sensible.
    ✅ Don’t try to do too much too quickly with too few people.
    ✅ Don’t expect ERP implementation to be straightforward; it rarely is.

  • View profile for Armand Ruiz

    building AI systems @meta

    206,800 followers

    Defeating Nondeterminism in LLM Inference - Thinking Machines just released their first blog post and I think it is very good!

    Thinking Machines is a new AI lab founded in early 2025 by former OpenAI CTO Mira Murati and several OpenAI alums. Backed by $2B in seed funding, their mission is bold: rebuild the AI stack to be more transparent, deterministic, and customizable, starting from the kernels up.

    The blog is not a product announcement. No hype. Just a surgical teardown of a core flaw in LLM infrastructure: nondeterministic inference.

    Even with temperature 0, LLM outputs can change between runs. Most blame floating-point math or GPU randomness. But the real culprit? Batch-size-dependent numerics. Your output can shift based on how many other users hit the server. That’s not randomness; it’s poor kernel design.

    Their fix: make key ops like matmul, RMSNorm, and attention batch-invariant. It costs a bit of performance, but you gain something more important: trust.

    If this is how Thinking Machines opens, I’m watching what comes next.

    Read the blog here: https://lnkd.in/gnuxvveX
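    For intuition on why batch size can change results at all, here is a small NumPy sketch (my illustration, not code from the Thinking Machines post): float32 addition is not associative, so splitting the same reduction into different chunk sizes can change the final bits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000).astype(np.float32)

    def chunked_sum(values, chunk):
        # Sum each chunk in float32, then sum the partial results; this mimics
        # how a kernel's reduction strategy (and hence addition order) can
        # change with batch size.
        partials = [values[i:i + chunk].sum() for i in range(0, len(values), chunk)]
        return float(sum(partials))

    # Same data, different "batch sizes": the results typically differ in the
    # last few bits because the addition order differs. Batch-invariant kernels
    # fix the reduction order so the answer no longer depends on batching.
    print(chunked_sum(x, 1024))
    print(chunked_sum(x, 4096))
    ```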

  • View profile for Japneet Sachdeva

    Automation Lead | Instructor | Mentor | Check out my courses on Udemy & TopMate | Vibe Coding Cleanup Specialist

    129,936 followers

    22 Test Automation Framework Practices That Separate Good SDETs from Great Ones

    Here's what actually works:

    1. KISS Principle: Break complex tests into smaller modules. Avoid singletons that kill parallel execution. Example: a simple initBrowser() method instead of static WebDriver instances.
    2. Modular Approach: Separate test data, utilities, page objects, and execution logic. Example: a LoginPage class handles only login elements and actions.
    3. Setup Data via API/DB: Never use the UI for test preconditions. It's slow and flaky. Example: a RestAssured POST to create test users before running tests.
    4. Ditch Excel for Test Data: Use JSON, XML, or CSV. They're faster, easier to version control, and actually work. Example: Jackson ObjectMapper to read JSON into POJOs.
    5. Design Patterns: Factory to create driver instances based on browser type; Strategy to switch between different browser setups; Builder to construct complex test objects step by step.
    6. Static Code Analysis: SonarLint catches unused variables and potential bugs while you code.
    7. Data-Driven Testing: Run the same test with multiple data sets using TestNG DataProvider. Example: one login test, 10 different user credentials.
    8. Exception Handling + Logging: Log failures properly. Future you will thank present you. Example: Logger.severe() with meaningful error messages.
    9. Automate the Right Tests: Focus on repetitive, critical tests. Each test must be independent.
    10. Wait Utilities: WebDriverWait with explicit conditions. Never Thread.sleep(). Example: wait.until(ExpectedConditions.visibilityOfElementLocated())
    11. POJOs for API: Type-safe response handling using Gson or Jackson. Example: convert a JSON response directly to a User object.
    12. DRY Principle: Centralize common locators and setup/teardown in a BaseTest class.
    13. Independent Tests: Each test sets up and cleans up its own data. Enables parallel execution.
    14. Config Files: URLs, credentials, environment settings—all in external properties files. Example: a ConfigReader class to load properties.
    15. SOLID Principles: Single responsibility per class. Test logic separate from data and helpers.
    16. Custom Reporting: ExtentReports with screenshots, logs, and environment details.
    17. Cucumber Reality Check: If you're not doing full BDD, skip Cucumber. It adds complexity without value.
    18. Right Tool Selection: Choose based on project needs, not trends. Evaluate maintenance cost.
    19. Atomic Tests: One test = one feature. Fast, reliable, easy to maintain.
    20. Test Pyramid: Many unit tests (fast) → some API tests → few UI tests (slow).
    21. Clean Test Data: Create in @BeforeMethod, delete in @AfterMethod. Zero data pollution.
    22. Data-Driven API Tests: Dynamic assertions, realistic data, POJO response validation.

    Which practice transformed your framework the most?

    -x-x-
    Most asked SDET Q&A for 2025 with SDET Coding Interview Prep (LeetCode): https://lnkd.in/gFvrJVyU

    #japneetsachdeva
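    As a quick illustration of a few of these practices (7: data-driven tests, 10: explicit waits, 13: independent tests, 14: external config), here is a minimal sketch using pytest and Selenium's Python bindings; the post's own examples reference Java/TestNG, and the config file, URL, and element IDs below are hypothetical.

    ```python
    import json
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def load_config(path="config.json"):
        # Practice 14: environment settings live outside the test code.
        with open(path) as f:
            return json.load(f)

    @pytest.fixture
    def driver():
        # Practice 13: each test gets its own browser and cleans up after itself.
        d = webdriver.Chrome()
        yield d
        d.quit()

    # Practice 7: one test body, several data sets.
    @pytest.mark.parametrize("user,password,should_pass", [
        ("valid_user", "valid_pass", True),
        ("valid_user", "wrong_pass", False),
    ])
    def test_login(driver, user, password, should_pass):
        cfg = load_config()
        driver.get(cfg["base_url"] + "/login")
        # Practice 10: explicit waits instead of fixed sleeps.
        wait = WebDriverWait(driver, 10)
        wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys(user)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()
        banner = wait.until(EC.presence_of_element_located((By.ID, "banner")))
        assert ("Welcome" in banner.text) == should_pass
    ```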

  • View profile for Mukta Sharma

    | Quality Assurance | ISTQB Certified | Software Testing |

    48,266 followers

    Let’s Talk Automation Testing — The Real, Practical Stuff We Deal With Every Day.

    If you’re in QA or an SDET role, you know automation isn’t about fancy frameworks or buzzwords. It’s about making testing faster, more reliable, and easier for everyone on the team. Here’s what actually matters:

    1. Stability first. A fast test that fails randomly helps no one (I hope you’d agree). Teams trust automation only when it consistently tells the truth. Fix flakiness before writing anything new.
    2. Manual + Automation = Real Quality. Not everything needs automation. Manual testing is still crucial for user experience checks, exploratory testing, and edge cases that require human intuition. Automation supports manual testing — it doesn’t replace it.
    3. Automate with intention. Prioritize high-risk, high-usage flows. Login, checkout, search, payments — these are where automation creates real value.
    4. Keep the framework clean and maintainable (a very important step). Readable tests win. If someone new can’t understand or extend your suite, you don’t really have automation — you have tech debt.
    5. Integrate early into CI/CD. Automation only works when it’s continuous. Quick tests on every commit.
    6. Make decisions based on data. Look at failure patterns, execution time, and actual coverage. Data keeps automation aligned with the product, not just the backlog.

    At the end of the day, a good automation suite is quiet, stable, and dependable — and it frees up manual testers to do the real thinking.

    👉 What’s one practical testing tip you think every QA/SDET should follow?

    #AutomationTesting #SoftwareTesting #SDET #TestAutomation #QualityEngineering #ManualTesting

    Drop your thoughts — always great learning from others in the field. 💬🙂
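    One lightweight way to act on points 3 and 5 above (prioritize high-risk flows, run quick tests on every commit) is to tag the critical flows and run only that subset per commit. A minimal pytest sketch, with a hypothetical marker name and a stand-in checkout helper:

    ```python
    import pytest

    # Stand-in for the real checkout client; in a real suite this would call the
    # application's API to place a test order (hypothetical helper).
    def place_order(items, payment):
        return {"status": "CONFIRMED", "items": items, "payment": payment}

    @pytest.mark.critical
    def test_checkout_happy_path():
        order = place_order(items=["sku-123"], payment="test-card")
        assert order["status"] == "CONFIRMED"

    # On every commit, CI runs just the tagged suite for fast feedback:
    #   pytest -m critical
    # Register the marker in pytest.ini to avoid warnings:
    #   [pytest]
    #   markers = critical: high-risk, high-usage flows
    ```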

  • View profile for Manish Saini

    Advocating for Smarter, Scalable, and Automation-Driven Testing | Developer Advocate 🥑 | Speaker | Mentor | Author | AI & Automation Evangelist | YT - @TechUnfilteredWithManish

    20,470 followers

    Most automation engineers obsess over UI automation. But the truth is — UI is just the tip of the testing iceberg. Let’s break this down. There are multiple layers where tests should live:

    Unit Tests
    → Fast and precise.
    → Catch issues early.
    → Tools/frameworks/libraries: JUnit, NUnit, Pytest, Mocha

    Component/Module Tests
    → Validate individual pieces in isolation.
    → Especially useful in frontend frameworks.
    → Tools/frameworks/libraries: React Testing Library, Vue Test Utils

    API Tests
    → Validate business logic and service contracts.
    → Great for catching bugs before they reach the UI.
    → Tools/frameworks/libraries: Postman, Rest Assured, Jest, Pytest + Requests

    Integration Tests
    → Ensure all systems talk to each other correctly.
    → Cover the database, third-party APIs, and internal services.
    → Tools/frameworks/libraries: Pytest, TestContainers, WireMock

    Database Tests
    → Validate migrations, data constraints, and stored procedures.
    → Tools/frameworks/libraries: DBUnit, Flyway, SQLTest

    UI Tests
    → Useful, but often slow and flaky.
    → Should be minimal and well-targeted.
    → Tools: Playwright, Cypress, Selenium, Appium (for mobile)

    If your entire test suite lives only at the UI layer, you’re doing your team a disservice. Test smarter — not just at the top.

    I’ve explained how to structure and design your tests across these layers in my book Ultimate Test Design Patterns for Layered Testing. This isn't just theory — it's a blueprint for building robust, maintainable, and scalable automation.

    Want to know which test belongs where? Start by understanding the layers first.

    #TestAutomation #SDET #QualityEngineering #TestingStrategy #SoftwareTesting #TechLeadership
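    To make the layering concrete, here is a small Python sketch (pytest, plus requests for the API layer, as mentioned above); the discount function and the health-check URL are hypothetical stand-ins.

    ```python
    import requests

    # --- Unit layer: fast, no I/O, pinpoints the failing logic ---
    def apply_discount(price, pct):
        # Hypothetical business rule used only for illustration.
        if not 0 <= pct <= 100:
            raise ValueError("pct must be between 0 and 100")
        return round(price * (100 - pct) / 100, 2)

    def test_apply_discount_unit():
        assert apply_discount(200.0, 25) == 150.0

    # --- API layer: exercises the service contract over HTTP ---
    def test_health_endpoint_api():
        resp = requests.get("https://staging.example.com/api/health", timeout=5)
        assert resp.status_code == 200
        assert resp.json()["status"] == "ok"
    ```

    Unit tests like the first run in milliseconds and belong at the base of the pyramid; the API test is slower and environment-dependent, so there should be fewer of them, and fewer UI tests still.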

  • View profile for Vaibhava Lakshmi Ravideshik

    AI for Science @ GRAIL | Research Lead @ Massachusetts Institute of Technology - Kellis Lab | LinkedIn Learning Instructor | Author - “Charting the Cosmos: AI’s expedition beyond Earth” | TSI Astronaut Candidate

    20,067 followers

    Ask OpenAI models the same question twice at temperature=0, and you’d expect the same output every time. After all, greedy decoding should be deterministic. Yet in practice, it isn’t.

    The common explanation has been “floating-point math plus GPU concurrency.” But as Horace He and the team at Thinking Machines Lab argue in their recent deep dive, that story is incomplete. The real culprit is a lack of batch invariance. Most inference engines tie your output not only to your request, but also to the server’s load at that instant. A different batch size changes the reduction strategy inside kernels like RMSNorm, matmul, or attention. That shift cascades into subtle, yet real, differences in outputs, even under greedy decoding.

    Their work demonstrates batch-invariant kernels that restore bitwise reproducibility. The impact is far from academic:
    1) It enables true on-policy reinforcement learning, where training and inference remain perfectly aligned.
    2) It reframes non-determinism from an unavoidable nuisance into a fixable systems-design issue.

    Too often, the default reaction in Machine Learning is to relax tolerances and move on. This work reminds us that non-determinism isn’t just noise; it’s a bug we can eliminate with careful engineering. If reproducibility is the bedrock of science, deterministic inference should be the foundation of reliable AI. This research makes that vision tangible.

    Full article: https://lnkd.in/gPUxX8xE

    #ArtificialIntelligence #MachineLearning #DeepLearning #AIResearch #LLM #MLOps #SystemsDesign #DistributedSystems #HighPerformanceComputing #GPUComputing #DeterministicAI #ReproducibleAI #Determinism #Reproducibility #ReliableAI #AICommunity #DataScience #OpenSourceAI #EngineeringExcellence #ResearchInnovation #FutureofAI #AIEngineering #AIforScience #TechInnovation #NextGenAI

  • View profile for Nathan Benaich

    investing

    51,125 followers

    Mutation-Guided LLM-based Test Generation at Meta

    As a next step from last year's super cool Meta paper on LLMs generating tests, here we have it. Testing has, finally, moved beyond mere coverage. The guarantees are a lot stronger too, because the automated compliance hardener always gives examples of the specific kinds of faults that its tests will find (rather than just claiming more line coverage, which it can also do anyway).

    Abstract: "This paper describes Meta’s ACH system for mutation-guided LLM-based test generation. ACH generates relatively few mutants (aka simulated faults), compared to traditional mutation testing. Instead, it focuses on generating currently undetected faults that are specific to an issue of concern. From these currently uncaught faults, ACH generates tests that can catch them, thereby ‘killing’ the mutants and consequently hardening the platform against regressions. We use privacy concerns to illustrate our approach, but ACH can harden code against any type of regression. In total, ACH was applied to 10,795 Android Kotlin classes in 7 software platforms deployed by Meta, from which it generated 9,095 mutants and 571 privacy-hardening test cases. ACH also deploys an LLM-based equivalent mutant detection agent that achieves a precision of 0.79 and a recall of 0.47 (rising to 0.95 and 0.96 with simple preprocessing). ACH was used by Messenger and WhatsApp test-a-thons where engineers accepted 73% of its tests, judging 36% to be privacy relevant. We conclude that ACH hardens code against specific concerns and that, even when its tests do not directly tackle the specific concern, engineers find them useful for their other benefits."

    https://lnkd.in/dyAn3G_k
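    For readers new to mutation testing, here is a toy Python illustration of the core idea behind ACH (not Meta's code): a mutant simulates a specific fault, and a useful test is one that "kills" it by failing on the mutant while passing on the original.

    ```python
    # Toy mutation-testing illustration with a privacy-flavored example.
    def mask_email(email):
        # Original behavior: keep only the first character of the local part.
        local, domain = email.split("@")
        return local[0] + "***@" + domain

    def mask_email_mutant(email):
        # Simulated fault (mutant): forgets to mask the local part at all.
        local, domain = email.split("@")
        return local + "@" + domain

    def test_email_is_masked():
        # Passes on the original; run the same assertion against the mutant and
        # it fails, i.e. the test "kills" the mutant and hardens the code
        # against that class of privacy regression.
        assert mask_email("alice@example.com") == "a***@example.com"
    ```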
