AI increases software output. It does not automatically increase software confidence.

Every stage of the development lifecycle is getting faster: code generation, requirements, test creation, delivery pipelines. But faster output without stronger validation just means you're shipping more uncertainty at higher velocity.

AI-generated code still carries duplicated logic, missed edge cases, security gaps, and architectural conflicts. AI-generated tests can inflate your suite without telling you whether critical business workflows actually work across real data, real roles, and real environments.

The question isn't how many tests you can generate. It's whether your release decision is backed by evidence or by volume. Release confidence is a risk decision, not a counting exercise.

This article covers what this shift means for QA leaders, what to measure, where AI-augmented testing helps, and where test volume creates a false sense of coverage.

#SoftwareTesting #QALeadership #AITesting #TestAutomation #ReleaseConfidence
Keysight Software Test Automation
Software Development
Santa Rosa, California · 13,038 followers
Where continuous quality meets business outcomes
About us
Accelerate Innovation to Connect and Secure the World

Keysight empowers innovators to explore, design, and bring world-changing technologies to life. As the industry’s premier global innovation partner, our software-centric solutions serve across the design and development environment, enabling engineers to deliver tomorrow’s breakthroughs at speed and with reduced risk.

Keysight is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test. Test early, test often, and take on complex and distributed deployments with confidence. Discover why so many QA testers, software developers, network operators, IT specialists, and cybersecurity teams are shifting left with Keysight. Innovators start here.

Keysight Technologies (NYSE: KEYS) is an S&P 500 technology company, headquartered in Santa Rosa, California, with offices and manufacturing worldwide. Keysight leverages its strength as the world’s leading test and measurement provider and today enables innovators to quickly solve design, emulation, and test challenges to help create the best product experiences. Keysight owns more than 3,500 patents and its ~15,000 employees work with nearly 32,000 customers worldwide to start technology revolutions. Keysight customers span the communications, industrial automation, aerospace and defense, automotive, energy, semiconductor, and general electronics markets.

www.Keysight.com
- Website
- https://www.keysight.com/us/en/products/software/software-testing.html
- Industry
- Software Development
- Company size
- 10,001+ employees
- Headquarters
- Santa Rosa, California
- Type
- Privately Held
- Founded
- 2008
- Specialties
- QA Automation, GUI test automation, Black Box, iPhone Blackberry etc, Test Flash/Flex/HTML5, Non-Invasive Testing, Technology agnostic - OS, mobile, browser, Easy, Expressive, Powerful Scripts, Easy to use, Easy to deploy, User interface testing, Automation Intelligence, Web Monitoring and Analytics, and Digital Automation Intelligence
Updates
-
AI is no longer just a buzzword in PLM... it’s becoming a strategic imperative.

Join us at PLM Road Map & PDT North America 2026, happening May 6–7, 2026 at The Westfields Marriott in Chantilly, VA, where industry leaders will explore how to turn AI disruption into real business value across the product lifecycle.

This event cuts through the hype and focuses on what works:
✔️ Real-world success stories (and lessons learned)
✔️ Practical frameworks for prioritizing AI investments
✔️ Strategies to strengthen digital threads and digital twins
✔️ Guidance on scaling AI across engineering and product development

We’re excited to be part of the conversation with our featured presentation: “Accelerating Periodic PLM Upgrades with AI-based Automated Software Validation and Testing Solutions”
Speaker: Karamveer Singh

From data governance and interoperability to AI-driven innovation, attendees will gain actionable insights to navigate implementation challenges and accelerate transformation. If you're a PLM leader, product manager, engineer, or digital transformation professional, this is where strategy meets execution.

📅 See you in Chantilly!
-
Happening today! Visual automation can feel like magic, until your image isn’t found. If you use Eggplant Functional, you’ve likely encountered visual test failures that are difficult to diagnose.

In today’s webinar, Keysight experts Lindsey Rominsky and Meghan Danielson will walk through practical techniques to troubleshoot and stabilize image-based tests, including:
- Understanding pixel tolerances
- Choosing the right search algorithm
- Editing and refining images correctly
- Using the Image Update Panel in real time
- Avoiding common causes of flaky visual tests

If you’re responsible for automation reliability, this session is for you. Join us and learn how to move from failure to fix – faster. We get started at 1:00 PM EST today.

#TestAutomation #SoftwareTesting #QA #AutomationEngineering #Eggplant #Keysight
https://lnkd.in/eeTYNJFj
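The pixel-tolerance idea behind image-based matching can be sketched in plain Python. This is an illustrative toy, not Eggplant's actual SenseTalk API or search algorithm: two pixels "match" when every color channel differs by no more than the tolerance, and an image is "found" when some position in the screen matches every pixel of the sought image within that tolerance. Function names and the naive scan are assumptions for the sake of the sketch.

```python
def pixels_match(p1, p2, tolerance=45):
    """Return True if every RGB channel of the two pixels differs by <= tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(p1, p2))

def image_found(haystack, needle, tolerance=45):
    """Naive exhaustive search: does `needle` (a grid of RGB tuples) appear
    anywhere in `haystack` with every pixel within the per-channel tolerance?"""
    hh, hw = len(haystack), len(haystack[0])
    nh, nw = len(needle), len(needle[0])
    for y in range(hh - nh + 1):
        for x in range(hw - nw + 1):
            if all(pixels_match(haystack[y + dy][x + dx], needle[dy][dx], tolerance)
                   for dy in range(nh) for dx in range(nw)):
                return True
    return False
```

The sketch shows why tolerance tuning matters for flakiness: a captured image of `(255, 255, 255)` will match an anti-aliased `(250, 252, 248)` on screen at tolerance 10 but fail at tolerance 2, so a tolerance that is too tight turns harmless rendering differences into "image not found" failures.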
-
AI is rapidly becoming embedded in healthcare systems, but validating these models is quickly becoming the responsibility of hospital IT teams. Many AI solutions remain “black boxes,” leaving health systems with difficult choices:
- Increase manual validation staff
- Accept operational risk
- Or find a scalable way to test AI safely

In our upcoming webinar, “Critical Response: How IT Can Safely Validate AI in Modern Health Systems,” we’ll explore how test automation can help healthcare organizations evaluate AI performance quickly, consistently, and objectively.

Learn how automated testing can:
- Generate high-quality inputs at scale
- Validate AI outputs against historical clinical patterns
- Reduce the burden on clinical informatics and data science teams

AI is arriving fast. The question is: is your organization driving the response, or becoming the one in need of rescue?

Register here to save your seat: https://lnkd.in/e2Mc56uR

#HealthcareIT #AIinHealthcare #HealthTech #SoftwareTesting #DigitalHealth
-
We’re excited to celebrate our membership with CIMdata and join the conversation at PLM Road Map & PDT North America, May 6–7, 2026! ✨

At the event, Karamveer Singh will present “Accelerating Periodic PLM Upgrades with AI-based Automated Software Validation and Testing Solutions,” covering ML- and AI-based validation techniques for faster, more resilient #PLM upgrades and #DigitalThread continuity across multi-app landscapes. ⚙️🤖

👉 If you’re attending, let’s connect! Learn more: https://lnkd.in/dHXUwVsa

#PLMRoadMap2026 #AI #DigitalTransformation
-
Proud to be part of the prostep ivip Association – a community working to improve how companies manage product data and develop virtual products. 👉 www.prostep.org

➡️ Looking ahead: We’ll be exhibiting at the prostep ivip Symposium, April 14–15, 2026, in Kap Europa, Frankfurt. 👉 www.symposium.de

This annual event is one of the key meeting points for industrial #digitaltransformation, covering topics like MBSE, Software-Defined Products, Digital Twins, and AI-driven development. We’ll be sharing perspectives on #AIinEngineering and #PLM, along with practical insights from real-world projects.

If you’re planning to attend, feel free to reach out – it’s always great to connect with others in the community. 👋 Alexander Käsbohrer, Hitesh Bhole, Jürgen Haas, Fabienne Kreusch, Manuela Joseph, Mart van Gijsel

#Symposium2026 #SoftwareTesting #ProductLifecycleManagement #DesignEngineeringSoftware
-
Most PLM releases don’t fail because “PLM broke.” They fail because engineering intent didn’t survive the handoffs.

CAD checks pass. PLM looks consistent. ERP transactions post. MES executes exactly what it received. And production still gets the wrong outcome because meaning changed between systems.

That’s the blind spot in many PLM regression strategies: they prove each system behaves, but not that the end-to-end workflow stayed intact across CAD, PLM, ERP, and MES.

This article summarizes the workflow failure modes regression testing won’t reveal (propagation gaps, mapping drift, permissions and visibility paths, downstream interpretation), plus what to validate instead.

Read the article and register for the session: https://lnkd.in/e_-Hqmfi

#PLM #DigitalThread #QualityEngineering #Manufacturing #EnterpriseApps
Hitesh Bhole Steve Barreto
-
Where does your PLM release really fail: inside one system, or in the handoffs no one owns?

Most teams can point to test results for CAD, PLM, ERP, and MES. What’s harder is proving that engineering intent survived the journey across them. That’s why “passed” regression tests and “ready for production” drift apart in PLM ecosystems. The failure often isn’t a crash or a missing field. It’s a workflow that stays technically consistent while the meaning changes downstream.

Common patterns that create late-stage surprises:
1. Propagation gaps: one system updates, another operates on stale truth
2. Transformation drift: mappings and rules keep data valid but change context
3. Role and permission paths: real users follow routes the test suite never covers
4. Downstream interpretation: manufacturing executes correctly against what it received, but it’s wrong versus the original intent

When that happens, diagnosis becomes forensic. The cost isn’t just rework. It’s time, trust, and release confidence.

What’s the weakest handoff in your digital thread today, and how do you know before downstream teams escalate?
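The first pattern, a propagation gap, can be made concrete with a deliberately simplified sketch. Assume each system's item master can be reduced to a mapping of part number to released revision (real PLM and ERP systems expose this through their own APIs; the dict model and function name here are hypothetical). A cross-system check then flags every part where ERP is operating on a stale revision:

```python
def find_propagation_gaps(plm_items, erp_items):
    """Flag parts whose released revision differs between PLM and ERP:
    a toy model of 'one system updates, another operates on stale truth'.
    Returns (part_number, plm_revision, erp_revision) tuples."""
    gaps = []
    for part, plm_rev in plm_items.items():
        erp_rev = erp_items.get(part)
        # Parts absent from ERP are a different failure mode (missing
        # propagation entirely), so only compare parts present in both.
        if erp_rev is not None and erp_rev != plm_rev:
            gaps.append((part, plm_rev, erp_rev))
    return gaps
```

The point of such a check is that it validates the handoff itself rather than either system in isolation: both systems can pass their own regression suites while this comparison still fails.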
-
PLM software releases often fail after everything “passes” because intent gets lost in the handoffs across CAD, PLM, ERP, and MES. This session breaks down the real workflow failure modes regression testing doesn’t reveal (propagation gaps, mapping drift, permissions/visibility paths) and what to validate instead. #PLM #DigitalThread #DigitalEngineering #ManufacturingIT #QualityEngineering #SoftwareTesting #TestAutomation #EnterpriseApps #ERP #MES #CAD
-
PLM releases don’t usually fail inside PLM. They fail in the gaps between systems.

In most organizations, CAD, PLM, ERP, and MES are owned by different teams, using different tools, with different visibility. So when something breaks downstream, it’s rarely obvious where the failure was introduced. And by the time you see the impact, it’s already expensive to unwind.

That’s the real failure mode:
➡️ The “work” looks fine in the system you’re in
➡️ The next team can’t see what you changed (or what changed on the way)
➡️ The issue surfaces late, and root cause becomes a forensic exercise

So, do you actually know which handoff introduces the risk in your digital thread, or do you only find out after downstream teams escalate?

We’re covering this exact reality in the webinar (real scenarios, not theory): what failure looks like in practice, why it gets introduced without anyone noticing, and why it takes so long to trace back.

If you want to attend, register here: https://lnkd.in/e_-Hqmfi

#PLM #DigitalThread #SoftwareTesting #TestAutomation #EndToEndTesting #QualityEngineering #ReleaseManagement #EnterpriseSoftware
Hitesh Bhole Steve Barreto