Even after twenty years of coding professionally, I constantly find refinements and new patterns. In just the last few months, I figured out how to write tests that make cleaning up dark-launch feature flags graceful:

1. Add a context where the feature flag is “off”.
2. Copy-paste the existing tests under that new context.
3. Remove from the outer context any tests that should no longer pass after the feature flag launches.
4. Initialize the feature flag to “on” for all other tests.
5. Start implementation by adding your first test for the new behavior in the top-level context.

Now, as long as the tests are green and you don’t change the pre-existing tests, you won’t break the existing contract when the feature flag rolls out. If the flag rolls out, cleaning up the feature flag is graceful:

1. Delete the test context where the feature flag is “off”.
2. Delete the conditional logic (conveniently, that will be the bits of code the tests no longer run).
3. Delete the method that checks the feature flag.

This way, even your test cleanup can be TDDed! If you instead need to remove the new logic and a simple revert is insufficient, there are a few extra steps:

1. Delete all the tests for the unwanted behavior that should never happen again.
2. Initialize the feature flag to “off” for the entire test suite.
3. Inline the now-redundant “when TheFeatureFlag is off” context.
4. Remove any duplicate tests.
5. Delete the conditional logic.
6. Delete the code that checks the feature flag.

The problem with nesting both the old and new behavior under their own contexts is that it produces diffs full of indentation changes, making it harder for readers to verify the change. It also means that unexpected changes to other parts of the contract can go unnoticed, because the unit’s other tests run in only one of the two states. In this case, copy-paste leads to more graceful code, rather than less.
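To make the shape of this concrete, here is a minimal sketch of the unit under test during a dark launch (the Pricing class, the rounding behavior, and the flag supplier are illustrative stand-ins, not from the post). The top-level tests construct the unit with the flag on; the copy-pasted “off” context constructs it with the flag off; at cleanup time, step 2 deletes the else branch and step 3 deletes the flag supplier:

```java
import java.util.function.BooleanSupplier;

// Illustrative unit under test during a dark launch.
class Pricing {
    private final BooleanSupplier newRoundingEnabled; // the feature-flag check

    Pricing(BooleanSupplier newRoundingEnabled) {
        this.newRoundingEnabled = newRoundingEnabled;
    }

    long totalCents(double dollars) {
        if (newRoundingEnabled.getAsBoolean()) {
            return Math.round(dollars * 100); // new behavior: tested in the top-level context
        }
        return (long) (dollars * 100); // legacy behavior: tested only in the "off" context
    }
}
```

Once the flag has rolled out, the tests in the “off” context are deleted, the else branch dies with them (it is exactly the code the remaining tests no longer run), and the constructor parameter goes away.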
Automation Strategies for Dev Test Cleanup
Explore top LinkedIn content from expert professionals.
Summary
Automation strategies for dev test cleanup refer to the use of tools and techniques that automatically clear out temporary test data, environments, and code after development testing is done. These methods help prevent clutter, reduce costs, and maintain reliable testing conditions without manual effort.
- Automate environment removal: Set up your testing pipelines to automatically create and destroy temporary testing environments once tests are complete, using tools like Terraform or serverless functions.
- Use cleanup methods: Include database cleanup routines and framework-specific actions, such as using @After or @AfterClass annotations, to ensure test data is deleted after every test run.
- Track and alert resources: Integrate monitoring and alert systems to identify unused or idle resources, prompting teams to clean up or confirm ongoing usage to avoid waste.
Post 25: Real-Time Cloud & DevOps Scenario

Scenario: Your organization creates ephemeral cloud environments for testing using IaC, but costs are rising because environments are left running too long. As a DevOps engineer, you must optimize these environments for cost savings without impacting development.

Step-by-Step Solution:

Automate ephemeral environments: Provision environments from your CI/CD pipeline with Terraform or Pulumi on pull-request creation, and destroy them after testing completes.

Set TTL (Time-to-Live) tags: Tag resources with an expiry (e.g., DestroyAfter) for auto-cleanup, and use scheduled jobs or Lambda/Azure Functions to detect expired resources and terminate them.

Centralize environment management: Maintain a dashboard or service catalog (e.g., ServiceNow, Backstage) where teams can request ephemeral environments, and track each environment’s status, owner, and expiration date to avoid orphaned resources.

Use lightweight services: Deploy only essential services in ephemeral environments to minimize resource usage. For complex dependencies (e.g., databases), consider shared or pre-existing test instances where feasible.

Leverage containers and serverless architectures: Use Docker containers or serverless functions (e.g., AWS Lambda, Azure Functions) to reduce overhead. Smaller, short-lived services keep costs low and limit the blast radius of resource sprawl.

Monitor and alert on idle resources: Integrate cloud monitoring tools (e.g., CloudWatch, Azure Monitor) to detect resources with negligible CPU, memory, or network usage, and send automated alerts to resource owners to clean up or confirm continued usage.

Enforce resource limits in IaC: Define quotas and limits (e.g., CPU, memory, instance types) in your IaC templates to prevent excessive resource allocation, and use Terraform’s count or for_each to scale resources dynamically to each environment’s needs.
Track costs and report usage: Use AWS Cost Explorer, Azure Cost Management, or third-party tools (e.g., CloudHealth) to break down ephemeral-environment costs by tag, and provide regular cost reports to teams to encourage responsible usage and budgeting.

Educate and enforce best practices: Train developers on the importance of tearing down unneeded environments, document the ephemeral-environment process, and hold reviews to ensure adherence to cost-saving guidelines.

Outcome: Ephemeral environments are automatically created and terminated, minimizing resource waste. Transparent cost tracking and proactive alerts help teams stay on budget while maintaining development agility.

💬 How do you manage ephemeral environments and control cloud costs in your organization? Let’s share insights in the comments!

✅ Follow Thiruppathi Ayyavoo for daily real-time scenarios in Cloud and DevOps. Together, we’ll build efficient and scalable solutions!

#DevOps #CloudComputing #Terraform #careerbytecode #thirucloud #linkedin #USA
CareerByteCode
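The TTL-tag sweep described in the scenario above reduces to comparing each resource’s DestroyAfter timestamp with the current time; everything else is provider plumbing. A minimal sketch of that core decision (resource ids and the tag semantics are illustrative; a real scheduled job would read the tags via the cloud SDK and call the terminate API on the result):

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;

class TtlSweep {
    // Returns the ids of resources whose DestroyAfter tag is in the past,
    // i.e. the candidates a scheduled cleanup job should terminate.
    static List<String> expired(Map<String, Instant> destroyAfterByResourceId, Instant now) {
        return destroyAfterByResourceId.entrySet().stream()
                .filter(e -> e.getValue().isBefore(now))
                .map(Map.Entry::getKey)
                .sorted()
                .toList();
    }
}
```

Keeping the expiry test pure like this makes the sweep itself trivially unit-testable, independent of any one cloud provider.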
---
Keeping Your Tests Clean: Best Practices for Test Data Cleanup in Selenium (Java)

Ensuring a clean testing environment is crucial for reliable, repeatable Selenium tests. Test data clutter can lead to unexpected behaviour and mask actual bugs. Let's dive into best practices for test data cleanup using Selenium in Java, along with a code example to illustrate!

Best Practices:

Database isolation: Use a separate database instance dedicated to testing. This allows easy data manipulation without affecting the production environment. Consider tools like DBUnit for database backup and restoration before/after test runs.

Test data seeding: Pre-populate the test database with known data relevant to your test cases. Utilize tools like JPA or Hibernate for data manipulation within your tests.

Test cleanup methods: Implement methods that clean up test data after each test execution, deleting the test users, orders, or other entries created during the test.

Utilize testing frameworks: Leverage annotations like @AfterMethod from TestNG or @After from JUnit so that cleanup is executed regardless of the test outcome.

Code Example (TestNG):

    @Test
    public void testLogin() {
        // Login logic using Selenium
        // ...
    }

    // alwaysRun = true ensures cleanup executes even if the test fails
    @AfterMethod(alwaysRun = true)
    public void cleanUp() {
        // Delete test user data from the database
        // ...
    }

#SeleniumTesting #JavaAutomation #TestAutomationFramework #DatabaseTesting #TestNg #JUnit #CleanCode
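One wrinkle a cleanUp method like the one above glosses over: if deleting the first record throws, the remaining records leak. A stdlib-only sketch of a fault-tolerant sweep that attempts every deletion and reports what failed (class and method names are illustrative, not from the post):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class SafeCleanup {
    // Attempts every cleanup action even if some throw, and returns the
    // items whose deletion failed so the test run can report them.
    static <T> List<T> cleanAll(List<T> items, Consumer<T> deleter) {
        List<T> failed = new ArrayList<>();
        for (T item : items) {
            try {
                deleter.accept(item);
            } catch (RuntimeException e) {
                failed.add(item);
            }
        }
        return failed;
    }
}
```

A teardown hook can then call cleanAll over everything the test created and fail loudly only after every deletion has been attempted.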
---
We’re coming up on our 20th test automation project as a company. Here are three ways we've managed test data in different scenarios:

1) Basic: setup and teardown methods (BeforeAll, AfterAll, BeforeEach, AfterEach)

In some of our less complex projects, where dependencies between test cases were minimal, we've used BeforeAll, AfterAll, BeforeEach, and AfterEach methods to set up and clean up test data. It's a straightforward, convenient way to manage data in simple scenarios. However, as our projects grew in complexity and scale, this approach started showing its weaknesses: data setup failures could compromise entire test suites, and maintaining consistency between test cases became a significant challenge.

2) Seeded databases

For projects that required consistent, repeatable data across multiple test runs, we've leveraged seeded databases. By seeding a test database with known data before running our tests, we could ensure greater reliability and reproducibility. Yet maintaining the seed data became a task in itself, especially with frequent schema changes in our agile development environment, and seeding was time-consuming for extensive datasets. While it served us well for certain projects, it wasn't the most scalable solution for all scenarios.

3) Static images

In projects with large datasets and complex, interdependent test cases, we've found a static image of the database effective. With this strategy, we'd take a snapshot of our database in a known good state and restore that snapshot before each test run. The static-image method gave us complete control over our test data, reduced setup time, and brought down the number of tests failing due to data issues. However, creating and managing the snapshots was a significant up-front time investment, and as our application evolved we had to periodically update the snapshots to reflect changes in the schema or data.
---

Each of these methods has its pros and cons, and each served us well under different circumstances. The key lesson we learned is that the right test data management strategy depends largely on your specific project's needs and constraints. There are plenty of other strategies for managing test data, such as data factories. What do you think is best? #testautomation #testdata #qualityassurance
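The teardown bookkeeping behind strategy 1 can be factored out of any one framework: each test registers a cleanup action for every entity it creates, and the AfterEach-style hook drains the registry in reverse order so dependent records are deleted before their parents. A stdlib-only sketch (the class and its names are illustrative, not from the post):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

class CleanupRegistry {
    private final Deque<Runnable> actions = new ArrayDeque<>();

    // Record how to delete `created`, then hand it back for use in the test.
    <T> T track(T created, Consumer<T> deleter) {
        actions.push(() -> deleter.accept(created));
        return created;
    }

    // Call from an AfterEach/teardown hook: runs cleanups newest-first (LIFO).
    void drain() {
        while (!actions.isEmpty()) {
            actions.pop().run();
        }
    }
}
```

Because creation order is reversed on drain, an order created after its user is deleted first, which avoids foreign-key failures without any per-test ordering logic.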