Dynamic Test Loading to Reduce Compile and Runtime

Introduction

In the previous post we talked about how you can use save-restore to avoid re-running parts of the simulation that have already been run.  When combined with UCLI, you can take this a step further and run experiments and explore solutions to problems much faster.  However, save-restore still has the limitation that you cannot recompile the code if you want to restore a snapshot.  Dynamic Test Loading (DTL) removes this restriction and allows you to recompile certain things (tests or sequences) while still using a saved snapshot.  This lets you modify what stimulus the test generates, or which checks are performed, while still reusing the saved snapshot.  Moreover, the recompile does not need to regenerate the simv executable and so is much faster than partition compile.

DTL Flow Overview

Figure 1 contrasts save-restore and DTL, where the parts in green are what are possible using just save-restore, and the purple shapes show what DTL adds.  Note, however, that adding these extra capabilities (the ability to add or modify tests/sequences) introduces some restrictions on when the snapshot can be taken.  Specifically, we have to take the snapshot at a UVM phase boundary – typically this is done in the “post_reset” phase.  This is required because when we load the dynamic package, we’ll need to do a UVM phase jump to allow everything in the TB to continue where we left off.

Figure 1 DTL Flow Overview.  Green - possible with just save-restore.  Purple - capabilities added by DTL

Static and Dynamic Parts

With DTL, you need to separate your testbench into two parts:

  • Static part – mature, stable code that rarely changes
  • Dynamic part – tests/sequences/stimulus that are modified during testcase development or coverage closure

The separation of these parts is shown in Figure 2.


Figure 2 Separation of testbench into static and dynamic parts

The static part is compiled and run only once, while the dynamic part can be recompiled and run many times without recompiling the static part or re-running the simulation up to the snapshot point.

DTL Modes

DTL has two main use modes:

  • Development mode – focus on individual tests
  • Regression mode – focus on entire regression suite

Development Mode

In development mode, the primary focus is on creating, debugging, or refining a single test or a small set of tests. The goal is to ensure that the test behaves as expected and meets its intended objectives.

Typically, you are working on a specific feature or functionality, and the changes are localized to the test under development.  Development mode often involves iterative debugging and refinement of the test. You may run the test multiple times, make changes, and re-run it to verify the behavior.

Regression Mode

In regression mode, the goal is to analyze the coverage and results of the entire regression suite, which includes multiple tests. The objective is to ensure that the overall verification goals are met, such as achieving high coverage or identifying gaps in the test suite.

In regression mode, you may need to tweak, modify, or even add tests in the suite based on coverage analysis. For example, if a specific coverage target is not being hit, you might adjust an existing test or add a new one to address the gap.

DTL Mechanics

This section describes how to enable DTL.

Encapsulation of Tests:

As described above, static tests (base tests/sequences) are encapsulated in a static package, while dynamic tests are encapsulated in dynamic packages.
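As a sketch, the split might look like the following, where the package, class, and file names are all hypothetical (only the static/dynamic structure is the point):

```systemverilog
// Static package: compiled once, contains the stable base classes.
package static_test_pkg;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class base_test extends uvm_test;
    `uvm_component_utils(base_test)
    // Common initialization (reset, configuration) lives here, so a
    // snapshot taken after post_reset is valid for every derived test.
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
  endclass
endpackage

// Dynamic package: recompiled freely and loaded into a restored snapshot.
package my_feature_test_pkg;
  import uvm_pkg::*;
  import static_test_pkg::*;

  class my_feature_test extends static_test_pkg::base_test;
    `uvm_component_utils(my_feature_test)
    // Feature-specific stimulus and checks go here; this is the code
    // you expect to iterate on without touching the static part.
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
  endclass
endpackage
```

The key design constraint is that dependencies only flow one way: the dynamic package may import the static one, never the reverse.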

Compilation:

Static packages are compiled with the -enable_dynamic_tb option to generate a partition database and simulator executable.  With DTL, you must also enable partition compile, so you need to add the -partcomp option.

Dynamic packages are compiled with the -dynamic_tb option, reusing the partition database from the static package.  The dynamic compile needs to be pointed to the static compile’s partition directory using the -shardlib option.
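Putting these options together, a compile flow might look like the following sketch.  The file names and the partition directory name are hypothetical, and exact option placement may differ – check the VCS and DTL documentation for your version:

```shell
# Static compile: done once.  -partcomp enables partition compile and
# -enable_dynamic_tb generates the partition database and the simv executable.
vcs -sverilog -partcomp -enable_dynamic_tb \
    -f static_filelist.f

# Dynamic compile: fast, rebuilds only the dynamic test package.
# -shardlib points at the static compile's partition directory so the
# static part is reused rather than recompiled.
vcs -sverilog -dynamic_tb -shardlib partitionlib \
    my_feature_test_pkg.sv
```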

Simulation:

The base test is run, and the simulation state is saved after the common initialization sequence (e.g., the reset phase).  This is done by running the simulation under UCLI with the following command:

stop -in uvm_component::post_reset_phase -once -continue -command {run 0; save post_reset_snap ; run}        

Subsequent tests are dynamically loaded at runtime using the saved state, avoiding the need to reinitialize the simulation.  The +dtl_add_pkg option is used to specify the dynamic test package to be loaded at runtime.  Additionally, the following UCLI commands need to be run to restore the snapshot.

restore post_reset_snap
call \$dtl_load
call top.refresh
run        

Note that the code above calls “top.refresh”; see the DTL user’s guide for details on what this does.
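Tying the two simulation steps together, the overall run flow might look like this sketch.  The snapshot name, test and package names, and the UCLI script file names are illustrative, and the exact +dtl_add_pkg argument syntax should be confirmed against the DTL user’s guide:

```shell
# Run 1: run the base test under UCLI and save a snapshot after reset.
./simv +UVM_TESTNAME=base_test -ucli -do save_snap.do
#   where save_snap.do contains:
#     stop -in uvm_component::post_reset_phase -once -continue \
#          -command {run 0; save post_reset_snap ; run}

# Runs 2..N: restore the snapshot and load the recompiled dynamic package.
./simv +UVM_TESTNAME=my_feature_test +dtl_add_pkg=my_feature_test_pkg \
    -ucli -do restore_snap.do
#   where restore_snap.do contains:
#     restore post_reset_snap
#     call \$dtl_load
#     call top.refresh
#     run
```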

Experiences

This technology is still fairly new for us, and so our experience is limited, but so far it has been very helpful in preliminary testing.  We’ve only used it in development mode, but in multiple cases it has reduced turnaround times from hours or days down to minutes.  The main limiting factor is how long it takes to run the dynamic code, but we have had a lot of success with generating a snapshot that starts very close to when the code being developed would start to run.

Conclusion

In this blog post we’ve talked about Dynamic Test Loading (DTL), a technology that lets you reuse a snapshot while still being able to recompile parts of the testbench.  By using DTL you can realize a significant reduction in turnaround time across a wider range of cases than save-restore alone can achieve.
