Software Testing Basics

Explore top LinkedIn content from expert professionals.

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    720,623 followers

Demystifying Software Testing

1️⃣ Functional Testing: The Basics
- Unit Testing: Isolating individual code units to ensure they work as expected. Think of it as testing each brick before building a wall (see the sketch below).
- Integration Testing: Verifying how different modules work together. Imagine testing how the bricks fit into the wall.
- System Testing: Putting it all together and ensuring the entire system functions as designed. Now, test the whole building for stability and functionality.
- Acceptance Testing: The final hurdle! Here, users or stakeholders confirm the software meets their needs. Think of it as the grand opening ceremony for your building.

2️⃣ Non-Functional Testing: Beyond the Basics
- Performance Testing: Assessing speed, responsiveness, and scalability under different loads. Imagine testing how many people your building can safely accommodate.
- Security Testing: Identifying and mitigating vulnerabilities to protect against cyberattacks. Think of it as installing security systems and testing their effectiveness.
- Usability Testing: Evaluating how easy and intuitive the software is to use. Imagine testing how user-friendly your building is for navigation and accessibility.

3️⃣ Other Testing Avenues: The Specialized Crew
- Regression Testing: Ensuring new changes haven't broken existing functionality. Imagine checking your building for cracks after renovations.
- Smoke Testing: A quick sanity check of basic functionality before further testing. Think of turning on the lights and checking the basic systems before a deeper inspection.
- Exploratory Testing: Unstructured, creative testing to uncover unexpected issues. Imagine a detective searching for hidden clues in your building.

Have I overlooked anything? Please share your thoughts; your insights are priceless to me.
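To make the unit-testing idea concrete, here is a minimal pytest sketch; the apply_discount function and its rules are hypothetical, invented purely for illustration:

```python
# test_pricing.py -- a minimal unit-test sketch (pytest).
# The apply_discount function and its rules are made up for illustration.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    # One isolated unit, one expected behaviour.
    assert apply_discount(200.0, 25) == 150.0


def test_apply_discount_rejects_invalid_percent():
    # Unit tests should also pin down error handling.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Each test isolates one behaviour of one unit, so a failure points at the brick that broke rather than at the whole wall.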

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,931 followers

🎢 How To Roll Out New Features Without Breaking UX. Practical guidelines to keep in mind before releasing a new feature ↓

🚫 We often assume that people don't like change.
🤔 But people go through changes their entire lives.
✅ People accept novelty if they understand and value it.
✅ But: breaking changes disrupt habits and hurt efficiency.
✅ Roll out features slowly, with multiple layers of testing.
✅ First, study where a new feature fits in key user journeys.
✅ Research where different user types would find and apply it.
✅ Consider levels of proficiency: from new users to experts.
✅ Actively support existing flows, and keep them the default.
🚫 Assume a low adoption rate: don't make the feature mandatory.
✅ First, test with internal employees and company-wide users.
✅ Then, run usability testing with real users and beta testers.
✅ Then, test with users who manually opt in, and run a split test.
✅ Allow users to try a new feature, roll back, dismiss, or be reminded later.
✅ Release slowly and gradually, and track retention as you go.

As designers, we often focus on how a new feature fits into the existing UI. Yet problems typically occur not because components don't work visually, but because features are understood and applied in unexpected ways. Rather than zooming in too closely, zoom out repeatedly to see the broader scope.

Be strategic when rolling out new versions. Especially in complex environments, we need to be cautious and slow, particularly when operating on a core feature. Here is a strategy you could follow in such scenarios:

1. Seek and challenge assumptions.
2. Define how you'll measure success.
3. Have a rollback strategy in place.
4. Test with designers and developers.
5. Test with internal company-wide users.
6. Test with real users in usability testing.
7. Start releasing slowly and gradually.
8. Test with beta testers (if applicable).
9. Test with users who manually opt in.
10. Test with a small segment of customers first.
11. Split-test the change and track impact.
12. Wait and track adoption and retention rates.
13. Roll out the feature to more user segments.
14. Run UX research to track usage patterns.
15. Slowly replace deprecated flows with the new one.

With a new feature, the most dangerous thing that can happen is that loyal, experienced users suddenly lose their hard-won efficiency. It might be caused by oversimplification, or a mismatch of expectations, or, more often than not, because a feature has been designed with a small subset of users in mind.

As we work on a shiny new thing, we often get blinded by our assumptions and expectations. What really helps me is to always wear a critical hat in each design crit. Relentlessly question everything. Everything! One wrong assumption is a goldmine of disastrous decisions waiting to be excavated.

[continues in comments ↓]

  • View profile for Sougata Bhattacharjee

    Samsung (SSIR) | Ex - Intel | TEDx Speaker | ASIC Verification | Proficient in SV, UVM, OVM, SVA, Verilog | Keynote Speaker at Engineering Colleges (IITs/NITs) | Paper publication at VLSI Conferences

    55,507 followers

During the initial phase of my career in VLSI, I realised that writing testcases is just as important as testbench development. A testcase in any language, be it Verilog, VHDL, SystemVerilog, or UVM, is used not only to verify the functional correctness and integrity of the design but also to point out areas where the testbench could be improved. Below are the most critical categories of testcases:

[1] Functional Tests --> The functionality or features of an IP/module or subsystem are verified.
[2] Register-based Tests --> RW tests, RO/WO tests, default read/hard reset tests, soft reset tests, negative RO/WO tests, aliasing, broadcasting, etc.
[3] Connectivity Tests
[4] Clock and Reset Tests
[5] Boot-up, wake-up sequence, and training sequence tests. E.g., in the case of DDR: MPC training, RD DQ calibration, command bus training, write leveling, etc.
[6] Command and Sequence-based Tests
[7] Overlapping and Unallocated Region Tests
[8] Back-to-back Data Transfer Tests
[9] UPF Tests --> Power domain, level shifter, clock gating, voltage domain, etc.
[10] Code Coverage Tests --> Toggle, expression, branch, FSM, and conditional coverage holes are measured, and tests are written against those holes to exercise the DUT completely.
[11] Functional Coverage Tests --> The functionality of the DUT is measured with the help of bins. If there are coverage holes, more bins are coded to cover those areas; complex scenarios are covered with cross coverage and intersecting bins.
[12] Assertions --> Checks against the design. These are insertion points within the design that improve observability and debuggability.

The above are some of the test categories that need to be applied while checking a design. To achieve all of the above, testcases are broadly classified into the following two types:

[1] Directed Testcases: scenarios that the verification engineer can think of or anticipate.
[2] Random Testcases: scenarios where the maximum number of bugs can be caught. The random seeds hit many different use cases that cannot be anticipated in advance and are likely to expose design issues (see the toy sketch below).

Ideally, random tests can be classified into the following two categories:

[1] Corner cases --> Bugs that only surface when many different scenarios are processed together or overlap; the best way to catch them is to run repeated regressions with more seeds.
[2] Stress testing --> Tests that check the performance and scalability of the DUT under multiple concurrent activities and unpredictable scenarios.

#vlsi #asic #electricalengineering #semiconductorindustry
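As a toy, language-agnostic illustration of the directed-versus-random split (real verification flows would use SystemVerilog/UVM constrained-random stimulus, not Python), here is a sketch in which the FIFO model, its depth, and the stimulus are all invented; the point is how a fixed seed makes a random regression reproducible:

```python
# Toy illustration of directed vs. seeded-random stimulus.
# The FIFO reference model and all parameters are hypothetical.
import random


class FifoModel:
    """A tiny reference model standing in for the DUT."""
    def __init__(self, depth: int = 4):
        self.depth, self.items = depth, []

    def push(self, value: int) -> bool:
        if len(self.items) == self.depth:
            return False          # full: push rejected
        self.items.append(value)
        return True

    def pop(self):
        return self.items.pop(0) if self.items else None


def directed_test():
    """Directed: a scenario the engineer explicitly anticipates (fill, then overflow)."""
    fifo = FifoModel(depth=4)
    assert all(fifo.push(i) for i in range(4))
    assert fifo.push(99) is False          # overflow must be rejected


def random_test(seed: int, operations: int = 1000):
    """Random: a seed drives unanticipated push/pop orderings; the seed makes failures reproducible."""
    rng = random.Random(seed)
    fifo, expected = FifoModel(depth=4), []
    for _ in range(operations):
        if rng.random() < 0.5:
            value = rng.randrange(256)
            if fifo.push(value):
                expected.append(value)
        else:
            assert fifo.pop() == (expected.pop(0) if expected else None)


if __name__ == "__main__":
    directed_test()
    for seed in range(10):                 # a mini regression: more seeds, more corner cases
        random_test(seed)
    print("directed + 10 random seeds passed")
```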

  • View profile for Sebastian Rosch

    CTO at awork // We’re hiring (.NET or Angular)

    1,937 followers

We deleted our staging environment. 💥

For years, we hosted our own internal awork workspace, and that of some early test customers, in a separate staging environment. Every release would go through this environment and "soak" there for a week or two. This gave us a sense of security, as surely all sorts of issues would be identified there before going to production.

So why did we delete it? 🤔

We noticed that it actually slowed our releases down unnecessarily, as we had to wait for this arbitrary amount of time before the production release. It also added complexity and cost, as we were basically running a copy of the production infrastructure, which had to be maintained. But most importantly, that sense of security was a false one. For one, we don't use all the features we build for our customers, so we can't really expect to identify every defect just by using it. Secondly, we're not able to identify performance issues, as the load between production and staging differed drastically. We also did not have any production-level monitoring in place, as this is quite expensive.

So what are we doing instead? ☝️

🎏 Feature Flags: We are now utilizing feature flags to roll new features out gradually in production. After the code is deployed to production, features go to dedicated test workspaces first, then to our internal awork workspace as well as some beta testers, and only then to all other customers. This allows us to de-risk releases while getting better feedback early on.

🔍 Test Automation: We are investing in better automated tests that cover more cases than we could potentially trigger with a soak time in a staging environment.

⚡️ Ad-hoc Environments: We have built a capability to deploy ad-hoc environments for any feature or combination of features, so we can test them before they go to production, independent of a staging environment. This gives us a lot more flexibility and increases the quality of our releases significantly.

While this is still a fairly new approach for us, with the rollout of awork Connect we have already seen the benefits. We'll keep improving our setup to make releases faster and smoother in the future.

What is your experience with a staging environment?
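A minimal sketch of the staged feature-flag gating described above; the stage names, workspace fields, and flag logic are hypothetical, and a real setup would typically live in a feature-flag service rather than hand-rolled code:

```python
# Minimal staged feature-flag sketch; stage names, fields, and logic are invented.
from dataclasses import dataclass

# Rollout stages, from most internal to everyone, mirroring the order in the post.
STAGES = ["test_workspaces", "internal", "beta", "everyone"]


@dataclass
class Workspace:
    id: str
    is_test: bool = False
    is_internal: bool = False
    is_beta: bool = False


def flag_enabled(workspace: Workspace, current_stage: str) -> bool:
    """Return True if the feature is visible to this workspace at the current rollout stage."""
    stage = STAGES.index(current_stage)
    if workspace.is_test:
        return True                      # test workspaces always see new features first
    if workspace.is_internal:
        return stage >= STAGES.index("internal")
    if workspace.is_beta:
        return stage >= STAGES.index("beta")
    return stage >= STAGES.index("everyone")


# Usage: gate the new code path behind the flag check.
ws = Workspace(id="acme", is_beta=True)
if flag_enabled(ws, current_stage="beta"):
    print("serve new feature to", ws.id)
else:
    print("serve existing flow to", ws.id)
```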

  • View profile for Richard Seidl

    People & Tech Enthusiast | Software Quality Expert | Keynote Speaker Digitalization, AI and Humanity | Future Optimist

    14,075 followers

    "We do exploratory testing when things are unknown. But from the unknown, we get information. That makes things known." - Callum Akehurst-Ryan In this episode, I talk with Callum Akehurst-Ryan, a quality coach with nearly 20 years of experience, about why exploratory testing is far more than random button-pushing and how teams waste it by using it in all the wrong places. Callum takes us through practical exploratory testing techniques that help uncover risks in non-functional requirements like performance and security, especially when no one has bothered to document what "good" should look like. We discuss how to structure exploration with timeboxes and risk-based scopes, when to turn findings into automated tests, and why retrofitting quality into existing systems demands a different software testing strategy than most teams realize. Some Highlights: * Exploratory testing finds information about unknowns, not pass/fail confirmation of known requirements. * Scope exploration by risk and timebox sessions to avoid endless testing rabbit holes. * Document what matters: defects, conversations, knowledge, not exhaustive test evidence in unregulated contexts. The episode is available here.: https://lnkd.in/eXj6t-iM ...and in your favorite podcast store.

  • View profile for Yuvraj Vardhan

    Technical Lead | Test Automation | Ex-LinkedIn Top Voice ’24

    19,158 followers

Don’t Focus Too Much On Writing More Tests Too Soon

📌 Prioritize Quality over Quantity: Make sure the tests you have (even if that is just a single test) are useful, well-written, and trustworthy. Make them part of your build pipeline. Make sure you know who needs to act when a test fails, and who should write the next test.

📌 Test Coverage Analysis: Regularly assess the coverage of your tests to ensure they adequately exercise all parts of the codebase. Code coverage tools can help identify areas where additional testing is needed.

📌 Code Reviews for Tests: Just like code changes, tests should undergo thorough code reviews to ensure their quality and effectiveness. This helps catch issues or oversights in the testing logic before they are integrated into the codebase.

📌 Parameterized and Data-Driven Tests: Incorporate parameterized and data-driven testing techniques to increase the versatility and comprehensiveness of your tests. This allows you to test a wider range of scenarios with minimal additional effort (see the sketch after this list).

📌 Test Stability Monitoring: Monitor the stability of your tests over time to detect flakiness or reliability issues. Continuous monitoring helps identify and address recurring problems, ensuring the ongoing trustworthiness of your test suite.

📌 Test Environment Isolation: Ensure that tests run in isolated environments to minimize interference from external factors. This helps maintain consistency and reliability in test results, regardless of changes in the development or deployment environment.

📌 Test Result Reporting: Implement robust reporting mechanisms for test results, including detailed logs and notifications. This enables quick identification and resolution of failures, improving the responsiveness and reliability of the testing process.

📌 Regression Testing: Integrate regression testing into your workflow to detect unintended side effects of code changes. Automated regression tests help ensure that existing functionality remains intact as the codebase evolves, enhancing overall trust in the system.

📌 Periodic Review and Refinement: Regularly review and refine your testing strategy based on feedback and lessons learned from previous testing cycles. This iterative approach helps continually improve the effectiveness and trustworthiness of your testing process.
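As one concrete illustration of the parameterized, data-driven point above, here is a small pytest sketch; the validate_email helper and its cases are invented for illustration:

```python
# Data-driven test sketch with pytest.mark.parametrize.
# The validate_email helper and the cases below are invented for illustration.
import re
import pytest


def validate_email(address: str) -> bool:
    """Very small email check, used only to demonstrate the pattern."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None


@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),
        ("first.last@sub.example.org", True),
        ("missing-at-sign.example.com", False),
        ("user@no-tld", False),
        ("", False),
    ],
)
def test_validate_email(address, expected):
    # One test function, many data-driven cases.
    assert validate_email(address) is expected
```

Adding a new scenario is one more tuple in the list, which keeps coverage growing without multiplying test functions.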

  • View profile for Mukta Sharma
    Mukta Sharma is an Influencer

    Quality Assurance | ISTQB Certified | Software Testing

    48,270 followers

As a Manual Tester, What Should You Know About AI Agents?

Let's clear one thing first: you do NOT need deep AI knowledge or maths to start working with AI agents. Think of an AI agent as a super-fast junior tester: helpful, efficient, but not always right. Here's what every manual tester should understand:

1. Prompting matters
AI works on instructions. A vague prompt gives vague results. Clear, detailed prompts lead to usable test cases and insights.

2. Tools an AI agent can use
AI agents work with tools like browsers, APIs, and test tools. Many testers already pair AI with existing tools, for example:
- ChatGPT + Playwright for AI-assisted testing
- ChatGPT + Jira for test case and bug refinement
- ChatGPT + Postman for API understanding
The real value comes from knowing how to review and validate what AI suggests.

3. Workflows are key
AI performs best when testing is broken into clear steps: understanding requirements, identifying scenarios, creating test cases, executing tests, and reporting issues. This is already how manual testers think. AI simply follows the structure you provide.

4. Validation is your responsibility
AI can miss edge cases, assume functionality, or generate incorrect test cases. Always cross-check the output against requirements and real application behavior.

5. Bias and hallucinations are real
AI can sound confident and still be wrong. Never treat AI output as the final truth.

6. Basic API knowledge helps
You don't need to code, but you should understand what an API is, the request and response flow, common status codes, and basic JSON. This makes AI-assisted API testing far more effective (see the small sketch after this post).

7. How AI agents work
AI agents do not think like humans. They predict the most likely answer based on patterns, not real understanding.

8. AI agent workflow levels
Basic: single prompt and single response. Intermediate: multi-step workflow with tools. Advanced: autonomous end-to-end testing. As a manual tester, focusing on the basic and intermediate levels is more than enough to start.

Final rule to remember: always double-check the response you get from a prompt. AI is an assistant, not a replacement. Your testing mindset, judgment, and validation skills are still the real superpower.

Please let me know if my simplified AI posts are helpful. Your one comment will serve as motivation. Thank you so much! ✌️🙏 Also, read & save my articles on Medium. Follow: muktaqa12 on Medium.

#SoftwareTesting #QualityEngineering #AITesting #FutureOfQA #TestersOfLinkedIn
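To ground the basic-API-knowledge point, here is a tiny request/response check in Python; the endpoint URL and the JSON field are hypothetical, and the same assertions could equally be run in Postman:

```python
# Minimal API check: request, status code, JSON body.
# The endpoint URL and the "status" field are hypothetical examples.
import requests

response = requests.get("https://api.example.com/v1/orders/42", timeout=10)

# 1. Validate the HTTP status code.
assert response.status_code == 200, f"unexpected status: {response.status_code}"

# 2. Validate the JSON response body against the requirement.
body = response.json()
assert body.get("status") == "shipped", f"unexpected order status: {body.get('status')}"

print("order endpoint behaves as expected")
```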

  • View profile for Pan Wu
    Pan Wu is an Influencer

    Senior Data Science Manager at Meta

    51,371 followers

In modern software development, writing code is only half the job; testing it is just as critical. But as codebases grow, maintaining strong unit test coverage becomes increasingly challenging. A recent engineering blog from The New York Times explores an interesting approach: using generative AI tools to help scale unit test creation across a large frontend codebase.

- The team built an AI-assisted workflow that systematically identifies gaps in test coverage and generates unit tests to fill them. Using a custom coverage analysis tool and carefully designed prompts, the AI proposes new test cases while following strict guardrails, such as never modifying the underlying source code. Engineers then review and refine the generated tests before merging them.

- This human-in-the-loop approach proved surprisingly effective. In several projects, test coverage increased from the low double digits to around 80%, while the time engineers spent writing repetitive test scaffolding dropped significantly. The process also follows a simple iterative loop: measure coverage, generate tests, validate results, and repeat.

The experiment also highlighted some limitations. AI can hallucinate tests, lose context in large codebases, or produce outputs that require careful review. The takeaway: AI works best as an accelerator, not a replacement, for engineering judgment. As these tools mature, this kind of collaborative workflow may become a practical way for teams to scale reliability without slowing down development.

#DataScience #MachineLearning #SoftwareEngineering #AIinEngineering #GenerativeAI #DeveloperProductivity #SnacksWeeklyonDataScience

– – –

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gFYvfB8V
-- YouTube: https://lnkd.in/gcwPeBmR
https://lnkd.in/gj9fc322
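As a rough sketch of that measure-generate-validate loop, assuming a coverage.py-style JSON report (the report path, threshold, and the generate_tests_for stub are invented; the NYT team's actual tooling is custom):

```python
# Rough sketch of the measure -> generate -> validate loop described above.
# Assumes a coverage.py-style JSON report ("coverage json"); the threshold,
# report path, and generate_tests_for stub are hypothetical.
import json


def files_below_threshold(report_path: str, threshold: float = 80.0) -> list[str]:
    """Return source files whose line coverage is below the threshold."""
    with open(report_path) as fh:
        report = json.load(fh)
    return [
        path
        for path, data in report["files"].items()
        if data["summary"]["percent_covered"] < threshold
    ]


def generate_tests_for(path: str) -> str:
    """Placeholder for the AI step: prompt a model for tests, never touching source code."""
    return f"# TODO: AI-proposed tests for {path}, pending human review"


if __name__ == "__main__":
    for gap in files_below_threshold("coverage.json"):
        proposal = generate_tests_for(gap)
        print(proposal)   # an engineer reviews, runs, and refines before merging
```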

  • View profile for Bhavani Ramasubbu

    Director of Product Management QA Touch @DCKAP | Building Test Management & Low Code Test Automation Platform for fast-growing QA Teams | AI and SaaS Product Enthusiast

    3,213 followers

10 Testing Principles That Work (from experience)

I am sharing 10 testing principles that work, from my experience:

1. Test like a real user
Don't just follow the script; try what a real user might do. That's where the real bugs live.

2. Make bug reporting easy
The easier it is to report and retest bugs, the faster things move. Keep feedback loops short and simple.

3. Use data to test smarter
Logs, usage stats, and real errors tell you what to test more. Let the data guide you.

4. Work closely with other teams
Quality isn't just QA's job; working with the dev, product, and design teams helps catch problems early.

5. Test early, test later too
Start testing at the idea stage, and don't stop after release. Production bugs matter too.

6. Stay flexible and experiment
Be ready to adapt. Every build is different; what worked last sprint might not work this one.

7. Let testers lead
Give testers the space and trust to try new ideas and take ownership. It makes a big difference.

8. Do exploratory testing often
Some bugs only show up when you break the rules a bit. Explore, question, and be curious.

9. Good strategy > any tool
Don't rely on one tool. Tools help, but don't let them box you in.

10. Think about test upkeep
Build tests you won't dread maintaining. A few good, stable tests beat 100 flaky ones.

#testing #qa #testingprinciples #softwaretesting #qatouch #QATouch #bhavanisays

  • View profile for Aatir Abdul Rauf

    VP of Marketing @ vFairs | Newsletter: Behind Product Lines | Talks about how to build & market products in lockstep

    73,304 followers

Common launch mistake: rolling out new features to ALL customers.

Pushing out a new feature to a sizable customer base comes with risks:
- Higher support volume if things go south, affecting many.
- Lost opportunity to refine the product with a focus group.
- Difficulty in rolling back changes in certain cases.

That's why products, especially those with huge customer counts, adopt a gradual rollout strategy to mitigate risk. There are multiple options here:

✔️ Targeted roll-out: selective release to specific users or accounts.
✔️ Future-cohort facing: only new sign-ups get the feature; existing users keep the legacy version.
✔️ Canary release: test with a small group first, then expand after confirming it's safe (see the sketch below).
✔️ Opt-in beta: users voluntarily choose to try new features before official release.
✔️ A/B rollout: two different versions released to different groups to compare performance.
✔️ Switcher: everyone gets the new version by default but can temporarily switch back to the old version.
✔️ Geo-fenced: features released to specific geographic regions one at a time.

Some factors to consider:

✅ User base capabilities: How savvy is your user base? How adaptive would they be to the change you're rolling out? If you need to ease them in over time, think about a switcher or an opt-in beta.

✅ Complexity: How complex is the product update, and is it in the way of a critical path? If it's a minor update, a universal deployment will suffice. However, you might opt for an opt-in or canary release for more complex changes.

✅ Risk assessment: What's the risk profile of the update? E.g., if it's performance-intensive and could affect server load, consider using a phased release to observe patterns as you open the update up to more users.

✅ Objective: Is this a revamped version of an existing product use case? Do you want to experiment with which version works better? Strategies like canary releases or A/B testing are valuable in this scenario.

✅ Target users: Do you have different user behaviors or preferences across markets or geographies of operation? Do certain cohorts make more sense than others? Think about geo-fenced roll-outs (we used to use this a lot at Bayt when launching job seeker features).

---

What rollout strategies do you use for your product?
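As a sketch of the canary-style bucketing mentioned above; the feature name and percentage are hypothetical, and most products would delegate this to a feature-flag or experimentation service:

```python
# Sketch of a deterministic percentage rollout (canary-style bucketing).
# The feature name and percentages are hypothetical.
import hashlib


def bucket(user_id: str, feature: str) -> float:
    """Map a user to a stable value in [0, 100) so rollout decisions don't flip between requests."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF * 100


def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """True if this user falls inside the current rollout percentage."""
    return bucket(user_id, feature) < percent


# Canary release: start at 5%, widen the percentage as confidence grows.
for user in ["alice", "bob", "carol", "dave"]:
    print(user, "sees new checkout:", in_rollout(user, "new_checkout", percent=5.0))
```

Hashing the user ID keeps each user's bucket stable, so a canary or A/B split stays consistent as the rollout percentage is widened.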
