DoD Digital Thread Digital Engineering Use Case
You’ve probably heard about Digital Engineering (DE), but it may feel a lot like hearing about electric vehicles. Is it for you? Is it for your organization? Maybe it works for some people, but would it work for you? Is it just another management fad, or does it offer real potential to add value to your people, process, technology, and contractual approaches?
The OUSDR&E has landed firmly on the perspective that DE offers tremendous value in acquiring, deploying, and employing solutions at a faster pace, with higher quality, and at reduced cost (see Refs 1 & 2). Faster, Better, Cheaper is a recurring theme in these Digital Engineering articles because that is how DE should look and function in practice. If you’re interested in Faster, Better, Cheaper, then stay up to date on DE tech, themes, and trends in these articles and you’ll rapidly understand why and how your organization should and can employ DE capabilities.
Use Cases are the So What of Digital Engineering. In this series of articles I’m going to break down a half dozen of the most popular DE use cases. If the workflows you use at work (or would actually like to use) are on this list then it might pay dividends to get on the DE bus! The following graphic depicts the 6 use cases that we’re going to survey, starting with a fresh look at Acquisition Requirements Management and Digital Threads.
With this title, what came to mind? If you’re part of the federal government then you likely immediately thought of DOORS. From a product cradle-to-grave perspective this might be the right answer to your challenges of managing product requirements. However, if you’ve followed me for any length of time then you know that I’m going to shake up the snow globe. Let’s step back and think about when the solution life cycles actually begin and how agile practices should impact our Digital Thread approaches.
When do Digital Threads Start?
Digital Threads are often thought of as the management of product specifications of all types over the “cradle to grave” lifecycle. However, when does the product lifecycle actually begin? I’ll assert that it begins as soon as a person or a team starts thinking about how a type of solution might practically solve a problem or resolve a pain point. This might be long before the cradle. It might begin at a point more like the first date of the parents of the solution child.
The National Aeronautics and Space Administration (NASA) tackled this subject decades ago by defining a set of nine Technology Readiness Levels (TRLs) as shown in the following graphic (Ref 3). TRLs 1-3 are classically the domain of “basic” university-type research. TRLs 3-5 are “applied” research and development. TRLs 5-6 are technology demonstrations or experiments. Technologies usually leave Research and Development (R&D) laboratories at TRL 6. TRLs 7-9 are usually part of a formal acquisition program and the gap between TRL 6 and 7 is often referred to as the “valley of death.”
So where does the product life cycle begin? Technically it could begin as early as asking whether an advance in some materials or science knowledge at TRL 1 would be beneficial to a theoretical future solution. This is a practical question to potentially ask and answer before investing in the basic science. If you have $1 to invest and requests for $6 in research funding, how do you make this judgment call? How do you back up such a decision with data?
The assertion is that it is never too early to product-ify a potential application or solution and begin assessing how it would solve the challenge at hand. We’re going to talk about Modeling, Simulation & Analysis (MS&A) in a future article, but imagine that you set up a simulation which assesses the ability of a technology (at any TRL) to solve a particular problem. Include as much detail as you know at the time and assess it. If it potentially works well enough to justify a make/buy/rely decision, then start tracking it and re-assess it periodically as more information becomes available. If you find that you don’t have enough information to assess its potential utility, then you’ve still benefited from the approach. Your unknown unknowns just became known unknowns.
Practically, tracking a potential solution’s Digital Thread from this early stage is straightforward with Model-Based Systems Engineering (MBSE) data models: Block Definition Diagrams (BDDs) describe the system’s ontology and functionality, and instances of each block track the knowledge evolution over time. Don’t delete the previous entries; add another instance each time the knowledge evolves and push the “Re-Assess” MS&A button. (We’ll discuss MBSE integration with MS&A tools in a subsequent article.) This kind of “additive MBSE” approach provides a mechanism to track the thought evolution of a solution and progress (or not) towards the goal. It also provides a mechanism to update the goals as they change; because they will! History is replete with examples of technologies that “failed” at some moment in time only to be revived when new goals aligned with what the failed tech was able to do. Helpful to know when you already have a rockstar on the shelf. See my recent post on this at Ref 4.
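To make the “additive MBSE” idea concrete, here is a minimal sketch in Python. The class and function names (`SolutionBlock`, `InstanceSnapshot`, `reassess`) are illustrative assumptions, not the API of any MBSE tool; the point is the append-only history of block instances plus a re-assessment step that reads the latest knowledge.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class InstanceSnapshot:
    """One point-in-time record of what we know about a technology."""
    as_of: date
    trl: int
    params: dict  # whatever performance knowledge exists at the time
    note: str = ""

@dataclass
class SolutionBlock:
    """Stand-in for an MBSE block; history is additive, never deleted."""
    name: str
    history: list = field(default_factory=list)

    def add_knowledge(self, snapshot: InstanceSnapshot) -> None:
        self.history.append(snapshot)  # additive: keep every prior entry

    def latest(self) -> InstanceSnapshot:
        return self.history[-1]

def reassess(block: SolutionBlock, required_range_km: float) -> bool:
    """Stand-in for pushing the 'Re-Assess' MS&A button."""
    return block.latest().params.get("range_km", 0.0) >= required_range_km

# Hypothetical worked example: knowledge about a battery tech evolves.
battery = SolutionBlock("solid-state-battery")
battery.add_knowledge(InstanceSnapshot(date(2022, 1, 1), trl=3, params={"range_km": 120.0}))
battery.add_knowledge(InstanceSnapshot(date(2024, 6, 1), trl=5, params={"range_km": 310.0}))

print(len(battery.history))      # full thought evolution is retained → 2
print(reassess(battery, 250.0))  # latest knowledge meets the goal → True
```

Because old snapshots are never overwritten, a technology that “fails” today stays on the shelf with its full history, ready to be re-assessed if the goals change.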
This kind of additive MBSE, solution assessment, and solution evolution tracking can begin at any point in the technology readiness process—basic research, applied research, technology demonstration, force designs, analysis of alternatives, engineering design trades, etc. This will collectively happen in R&D organizations, acquisition orgs, commercial orgs, etc. The sooner it begins, the higher the potential payoffs. And the more open the information exchange, the higher the potential payoffs. Digital Engineering is all about the share-ability of digitized information to ensure that the data-to-decision processes enable decision makers to make the best-informed decisions. So think through the kind of data interoperability that would maximize your outcomes and go after it.
In some cases information needs to flow digitally from government R&D and acquisition orgs to industry: “We’re interested in something that would solve problem X,” along with insight into how the solution will be assessed. Pull the potential industry teammates into the solutioning discussion. Imagine a world in which this type of digitized data exchange began early and industry was kept apprised of the potential use cases, threats, scale (we’re interested in 10 of these or 10,000 of these), environmental concerns, etc. What if industry had access to the government MS&A assessment tools and could perform their own analyses of their technical solutions’ performance, resilience, and cost? What if they noted that the assessment was not appropriately quantifying success? What could we do better together if we had DE people, process, technology, and contractual solutions which actively enabled the timely cross-flow of information to decision makers? The Digital Engineering Ecosystems (DEEs) noted in DODI 5000.97 (Ref 2) are intended to fulfill this role. What would RFIs (Requests for Information) look like in a DE-enabled world? How would these then shape RFPs (Requests for Proposal)? This is the intended DE realm of Better, Faster, Cheaper.
Impact of Agile Solutioning on Digital Threads
Let’s pivot now to the impact of agile product development approaches on acquisition requirements management. And before you say, “Hey, I work on hardware. That agile stuff only works on software,” please see my recent post on agile product development and testing of the Starship and Super Heavy Booster (Ref 5). If agile can work for an 11,000,000 lb space launch system, then it can likely work for your team too.
With agile approaches to product development we toss out the idea of building the objectively correct solution on the first iteration and introduce the idea that any solution will evolve over time as technologies, goals, financial constraints, changes in priorities, etc. re-shape the ideal solution. Every solution iteration is both the current instance of the solution and the chance to acquire insight on what is needed next. Requirements are expected to evolve as often as each sprint and certainly with each increment.
In other words, it is basically the same challenge that we described above, but it is happening during the solution acquisition part of the life cycle. Adding a strategic agile overlay to acquisition planning can do several important things. It can help pull the initial capability launch to the left on the schedule (earlier, by defining the Minimum Viable Product (MVP) for the initial launch), and it provides a mechanism to articulate how the solution will evolve over time. Typically this happens with an Objectives and Key Results (OKR) approach (see Ref 6). The Objectives provide the high-level goals and the Key Results provide a mechanism to measure progress. Then lower-level agile development epics, user stories, tasks, etc. can be focused within each sprint to achieve these OKRs.
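A minimal sketch of what quantified OKRs might look like in code, assuming a simple ratio-based progress measure. The structure, field names, and example targets are illustrative placeholders, not a prescribed OKR format.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A measurable result with a quantified target."""
    description: str
    target: float
    current: float

    def progress(self) -> float:
        # Fraction of the target achieved, capped at 100%.
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    """A high-level goal measured by its Key Results."""
    goal: str
    key_results: list

    def progress(self) -> float:
        krs = self.key_results
        return sum(kr.progress() for kr in krs) / len(krs)

# Hypothetical MVP objective for the fleet-conversion example below.
mvp = Objective(
    goal="Launch MVP electric-fleet capability",
    key_results=[
        KeyResult("Chargers installed on pilot routes", target=10, current=6),
        KeyResult("Pilot routes running on electric tractors", target=2, current=2),
    ],
)
print(round(mvp.progress(), 2))  # (0.6 + 1.0) / 2 → 0.8
```

Because each Key Result is a number, progress can be rolled up automatically each sprint instead of being reported subjectively.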
Since requirements are expected to evolve over the course of the project, a key to this approach is that the project must be configured to make flexible use of lessons learned in each sprint and solution increment. This configurability might be technical, programmatic, contractual, financial, etc. There can be a fear factor to this when teams move from waterfall to agile, but generally the plan to start small and grow helps bring the team along, and this approach tends to maximize bottom-up innovation ideas as individuals or teams learn things that can benefit the entire project.
With many/most systems these Key Results can be quantified as parametric representations of future solution states. And if they can be quantified, then they can be assessed via an MS&A approach (as noted above) to virtually test and verify that the OKRs will take us from where we are to where we need to be. Early increment implementations and virtual testing can also uncover unintended or emergent situations which have to be traversed over a series of solution sprints or iterations. As an example, what is the performance and cost assessment of the conversion of a transportation fleet from diesel-powered tractors to electric tractors? What are the OKRs for this transition in terms of charging infrastructure, new tractors, retiring the old tractors, integrating the billing of new costs with the enterprise systems, etc.? How do the cost and performance curves vary over the implementation? How much will be invested before the first route can be piloted? What future elements depend upon lessons learned from the earliest implementation sprints? Are there uncertainties that have to be quantified by experience and data collection?
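As a back-of-envelope illustration of treating future project states parametrically, here is a sketch of the fleet-conversion cost curve by increment. Every unit cost and increment size below is an invented placeholder; the point is that once the plan is expressed as parameters, it can be swept, re-assessed, and revised each increment as lessons come in.

```python
# Assumed placeholder unit economics (not real figures).
CHARGER_COST = 150_000   # installed cost per charger, USD
TRACTOR_COST = 400_000   # cost per electric tractor, USD
DIESEL_RESALE = 60_000   # resale value per retired diesel tractor, USD

def increment_outlay(new_chargers: int, new_tractors: int, retired_diesels: int) -> int:
    """Net cash outlay for one project increment."""
    return (new_chargers * CHARGER_COST
            + new_tractors * TRACTOR_COST
            - retired_diesels * DIESEL_RESALE)

# Agile-style plan: small pilot first, then scale with lessons learned.
# Each tuple is (new chargers, new tractors, diesels retired).
plan = [
    (2, 2, 0),    # increment 1: pilot a short-haul and a long-haul route
    (4, 6, 4),    # increment 2: scale up, start retiring diesels
    (8, 12, 10),  # increment 3: fleet-wide rollout
]

cumulative = 0
for i, (chargers, tractors, diesels) in enumerate(plan, start=1):
    cumulative += increment_outlay(chargers, tractors, diesels)
    print(f"Increment {i}: cumulative outlay ${cumulative:,}")
```

The first increment’s total answers “how much will be invested before the first route can be piloted,” and because later increments are just parameters, the plan can be re-run with updated charger counts or locations when real-world data (e.g. short-haul mileage economy) comes back from the pilots.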
A waterfall approach to this would (for example) design the complete charging infrastructure and the total fleet size, buy it all, and roll from one infrastructure to the other. The challenge is that mistakes made in this rollout would be systemic and expensive to roll back or modify. An agile approach would more likely look like piloting several routes which provide the best opportunities to learn lessons (e.g. a short-haul multi-stop route and a long-haul route). Do the needed analyses, implement low-cost pilots to collect data and learn lessons in real-world conditions, and then implement the lessons at the next scale of implementation in following sprints and increments. In reality, such implementations take time anyway, so employing an agile approach that starts small, learns lessons, refines requirements, and then scales up with the lessons in mind may take roughly the same amount of time, but will incorporate lessons, increase the likelihood of success, minimize total expenditures (don’t buy what you realize you don’t need), and minimize re-work costs. It also allows for commercial solution component evolution over the course of the project, e.g. new higher-capacity chargers available at the same cost.
Now let’s roll this idea back into the agile management of the project’s digital thread. Future project states can be managed parametrically as a function of project increments (each increment being composed of team sprints). As lessons are learned in early increments, the flexible implementations in later increments can be adapted. Again, the key in this approach is that the implementation must be planned in such a way as to be adaptable in later increments. Given the choice between a hardware-defined solution and a software-defined solution, usually software-defined with over-the-network updates provides the most flexibility. Is the total number of units required for a capability flexible? Is some minimum capability needed for the initial ROI? Can more users be accommodated by simply adding additional units? This flexibility can also be contractual. In our electric transportation conversion example, we might have a contractual plan to install a set number of chargers per increment or per month, with the locations and charger types expected to be flexible. Then, when the start/stop mileage economy of short-haul routes requires additional chargers along the routes (an unexpected lesson learned), the locations, etc. can be redefined flexibly during the project. Again, the key is that these flexibilities are built into the solution’s technical, programmatic, contractual, financial, etc. approaches.
I hope these thoughts about the broad application of Digital Threads across the complete TRL range of a solution lifecycle, and about pairing Digital Threads with agile project implementation methodologies, help you rock your current or next project! Stay tuned for more Digital Engineering and Innovation topics.
References:
1. Digital Engineering Strategy, OUSDR&E, Jun 2018
2. DODI 5000.97, Digital Engineering, OUSDR&E, Dec 21, 2023