DoD Modeling, Simulation & Analysis and Trade Study
Digital Engineering Use Case:
How much MS&A, Trade Study Capability does my organization need?

I was recently invited by a US Space Force futures assessment leader to come check out his organization’s analytical capabilities and recommend next steps for building those capabilities out into what his org actually requires. As I thought about that conversation, it hit me that it involves a huge list of non-trivial considerations, so I’m going to use this article to get the discussion started; over a number of future articles I’ll break down various parts of it in more detail.


Conceptually, organizations implement Modeling, Simulation, and Analysis (MS&A) capabilities because they perceive a gap between what their operators will be able to achieve and what they must achieve. Describing the need for MS&A in this way helps us meaningfully scope how much MS&A capability we might need. In general, I find that organizations suffer from inadequate MS&A capabilities that cannot virtually fill the assessment gap between what operators can do and what operators must do. We virtually fight the last war because that is what our MS&A describes well. We assess People, Process, Policy, Technology, etc. impacts at too low a level on the MS&A pyramid and fail to understand how much of something is actually required. We’re forced to look for our glasses under the streetlight because of inherent limitations on the scale of our MS&A approaches. We employ antiquated legacy analytical approaches because of MS&A limitations.

In this article I’m going to break down what MS&A is, how trade studies extend the scale and impact of MS&A, what the MS&A “Pyramid” is, and how much of the pyramid must be represented in our MS&A approaches to assess the gap between what operators will achieve and what they must achieve.

Let’s start with some definitions. Often in conversations people will rattle off “Modeling, Simulation & Analysis” and then proceed to treat all three of these words as interchangeable. We’ll start by leveling up our notions of these three terms:

Modeling: For our current use case, models describe the functionality or behaviors of systems. These can be defined at the sub-system, system, or platform level. Platforms are discrete warfighting entities like aircraft, tanks, ships, satellites, etc. Systems are components of the platforms, e.g. sensors, weapons, communication terminals, etc. Sub-systems are a further decomposition of systems into smaller building blocks, e.g. a communications terminal can be decomposed into its modem and antenna. Common model types are: Mover, Weapon, Vulnerability, Sensor, Signature, Communication, PNT (Position, Navigation, Timing), Battle Management (behavior), Command & Control (behavior), etc. In other MS&A communities, models can be defined differently. One example is the hydrocodes used to simulate weapons detonation; in that case the models describe specific material behaviors.
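To make the platform/system/sub-system decomposition concrete, here is a minimal sketch in Python. The class names, model-type strings, and the example platform are illustrative assumptions on my part, not tied to any specific framework:

```python
from dataclasses import dataclass, field

@dataclass
class SubSystem:
    """Smallest building block, e.g. a modem or antenna."""
    name: str
    model_type: str  # e.g. "Mover", "Sensor", "Communication"

@dataclass
class System:
    """A platform component, e.g. a sensor or comm terminal."""
    name: str
    model_type: str
    sub_systems: list[SubSystem] = field(default_factory=list)

@dataclass
class Platform:
    """A discrete warfighting entity, e.g. an aircraft or satellite."""
    name: str
    systems: list[System] = field(default_factory=list)

# A communications terminal decomposed into its modem and antenna,
# mounted on a notional aircraft platform.
comm_terminal = System(
    "SATCOM Terminal", "Communication",
    sub_systems=[SubSystem("Modem", "Communication"),
                 SubSystem("Antenna", "Communication")],
)
aircraft = Platform("Notional RPA", systems=[comm_terminal])
```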

Models can be defined across a wide fidelity spectrum from low to high. A simple example is an aircraft mover model. A low-fidelity version might use only LLA (Latitude, Longitude, Altitude) waypoints to define motion. This level of fidelity is often adequate for Remotely Piloted Aircraft (RPAs). A high-fidelity version might include a full 6-DOF (Degree of Freedom) model that can describe all aspects of aircraft motion in a complicated aerial dogfight.
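As a hedged illustration of the low-fidelity end of that spectrum, the sketch below moves an aircraft by linear interpolation between timed LLA waypoints. The route values and the function itself are made up for the example; a real mover would use great-circle paths, turn dynamics, and climb profiles:

```python
# Minimal low-fidelity mover: linear interpolation between LLA waypoints.
# Coordinates are degrees/meters; times are seconds. Illustrative only --
# straight-line interpolation of lat/lon is itself a simplification.

def lla_position(waypoints, t):
    """waypoints: list of (time_s, lat_deg, lon_deg, alt_m), time-ordered."""
    if t <= waypoints[0][0]:
        return waypoints[0][1:]
    for (t0, *p0), (t1, *p1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    return waypoints[-1][1:]

route = [(0, 34.0, -117.0, 6000.0), (600, 34.5, -116.2, 7500.0)]
print(lla_position(route, 300))  # halfway: (34.25, -116.6, 6750.0)
```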

The correct level of model fidelity for an analytical use is always the level that is “appropriate” for the analysis being performed. In other words, it must be adequate for the need, and one size never fits all. If a model is sufficiently detailed and accurate to enable evaluation of the effect(s) of interest, then it is likely appropriate. This is a function of both the algorithmic quality of a model and the data used to represent a specific system. For the sake of computational efficiency and effectiveness, don’t use a model that is more complicated than absolutely necessary.

It is also important that the models in the integrated play of the analysis provide an “appropriately representative” view of the campaign/operations, missions, platforms, systems, etc. When current systems are in play, an experienced operator ought to be able to watch the play and validate that this is indeed how the systems would be employed and that these are the outcomes they would produce. That gut check must also be borne out in the quantified interactions and measures assessed in the analysis.

With this description of models, it might suddenly feel overwhelming to consider the large number of models, across a spectrum of fidelities, required to accomplish an analysis. I have found that several of the critical errors made in establishing an MS&A capability are underestimating the quantity of work associated with model data curation, the need for professional data curation tools and methodologies, and the need for performant team-of-teams data curation processes. Data curation is often one-third to one-half of the total analytical level of effort.

Simulation: The simulation provides both a virtual/constructive environment in which the various platforms, systems, etc. interact (a spatial and temporal reference frame) and the analytical context. This is the next higher level of assembly for the models. Usually a specific “framework” is employed as the interaction environment, and this framework will come with a nominal set of models which can be customized to represent various platforms, systems, etc. Examples of frameworks are AFSIM, RAPTR, SEAS, EADSIM, Brawler, etc. The context is often provided by a larger scenario which bounds, constrains, and sets the success criteria for the analysis being performed. For the DoD, scenarios can be things like JFOSs (Joint Force Operational Scenarios), which are defined for particular theater-level problems in various parts of the world at defined epochs (i.e., the year in which the theoretical scenario is supposed to take place).

Analysis: The analysis is the full data-to-decision flow of analytical thought and number crunching, from the framing of the analytical questions, to the scenario/missions/capabilities in play, to the final quantified answers and trends (typically focused on performance, resilience, and cost) which inform decisions. The analysis must be “appropriately” representative and accurate, enable assessment of the needed scenario and measures without shortcuts which compromise the analytical integrity, and be accomplished at a pace which provides actionable insights at the rate that decisions must be made. Formally, whatever data and behaviors are employed in the analysis are de facto “Accredited” as being the appropriate data for the analysis. Data accreditation is enabled by performant Verification & Validation (V&V) processes.

Trade Studies: In recent analytical work supporting the US Space Force, the analytical team was examining how a proliferated Low Earth Orbit (LEO) constellation could help enable a space-based communication capability. In the specified analysis, which supported terrestrial combat operations, the constellation had a particular orbit altitude, number of orbit planes, number of satellites per plane, antennas which could support platforms within a nadir conical region, and a number of platforms with simultaneous data transfer with each satellite. With this base analysis established, the team decided to examine variations on the constellation in a Trade Study, varying each of these 5 parameters across 8 values. The total number of variations is therefore 8^5, or 32,768. Two key takeaways here. First, performing a trade study on the base scenario and analysis is a very practical idea: proliferated LEO constellations are a rather new concept, so what would the best approach look like? Second, it is easy to see how the total number of trade study options adds up quickly, so MS&A scalability is critical. If this is the trade study that needs to be performed, then you really need the analytical power to just go and do it. It costs less to accelerate analytical teams with cloud-deployed MS&A capabilities than it costs to suffer with inadequate MS&A capabilities that either compromise the analytics’ integrity or are too slow to deliver results at the pace of need.
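The combinatorics of such a trade study are easy to reproduce in a few lines. In the sketch below, the parameter names and candidate values are illustrative stand-ins, not the actual study values:

```python
from itertools import product

# Five constellation parameters, eight candidate values each (illustrative).
trade_space = {
    "orbit_altitude_km":      [400, 450, 500, 550, 600, 650, 700, 750],
    "num_planes":             [4, 6, 8, 10, 12, 14, 16, 18],
    "sats_per_plane":         [8, 12, 16, 20, 24, 28, 32, 36],
    "antenna_cone_halfangle": [20, 25, 30, 35, 40, 45, 50, 55],  # degrees
    "simultaneous_links":     [2, 4, 6, 8, 10, 12, 14, 16],
}

cases = list(product(*trade_space.values()))
print(len(cases))  # 8**5 = 32768 simulation cases to run

# Each case is one constellation design to push through the simulation:
first_case = dict(zip(trade_space, cases[0]))
```

Even at, say, one minute of runtime per case, a serial run of 32,768 cases is over three weeks of compute, which is why cloud-scale parallelism matters here.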

MS&A “Pyramid”: Once you start working on MS&A topics then you’ll see the Pyramid again and again. For today’s discussion the two main points that I want to convey are what each level is focused on and the spectrum of levels needed in most analyses.

[Figure: The MS&A Pyramid]

Platforms, Systems, Engineering: This level goes by several different names and is sometimes broken down into multiple levels. You might also see it called the “Engagement Level” or “1-v-1” because at this level we’re very often focused on how individual systems perform. For example, we might be concerned with how a particular radar is able to detect or track an object with a defined signature. As noted above, a variety of model types can be employed, from low-computational-cost geometric models to very high-fidelity models which account for waveform modulations, etc. It is usual for platforms to be defined via a set of components with Mover, Weapon, Vulnerability, etc. models.
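For a sense of what the low-cost end of an engagement-level sensor model can look like, here is the textbook monostatic radar range equation in Python. The parameter values are notional, not tied to any real system:

```python
import math

def radar_max_range_m(p_t_w, gain, wavelength_m, rcs_m2, p_min_w):
    """Textbook monostatic radar range equation: the maximum range at
    which a target of the given RCS returns the minimum detectable power.
    R_max = [ P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min) ]^(1/4)
    """
    return (p_t_w * gain**2 * wavelength_m**2 * rcs_m2
            / ((4 * math.pi)**3 * p_min_w)) ** 0.25

# Notional X-band radar against a 1 m^2 target (all values illustrative).
r = radar_max_range_m(
    p_t_w=1e6,            # 1 MW peak transmit power
    gain=10**(35 / 10),   # 35 dB antenna gain
    wavelength_m=0.03,    # ~10 GHz
    rcs_m2=1.0,           # target signature (radar cross-section)
    p_min_w=1e-13,        # minimum detectable signal power
)
print(f"{r / 1000:.0f} km")  # ~82 km for these inputs
```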

Capabilities & Missions: For a quick primer on Missions and Capabilities, please see my previous article (Ref 1). Capabilities are things that warfighters must do, and Missions are created by a linked set of Capabilities with a full OODA loop (Observe, Orient, Decide, Act). Capabilities are things like the abilities to: detect entities in the battlespace, track entities, characterize entities, communicate, etc. Several well-known mission types are OCA (Offensive Counter Air), LRIF (Long Range Indirect Fires), PFS-M (Provide Fire Support-Maritime), etc. Many missions are not pointy-tip-of-the-spear; they can also be Logistics, Ammo, Maintenance, etc. Each of these can have its own constituent capabilities and OODA loop. Capabilities are typically assessed via measures which are particular to the Capability, and Missions are assessed in terms of a Probability of Mission Success (P_S).
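One deliberately simplified way to see how a mission-level measure can be built up from capability measures is a serial rollup that treats each OODA-linked capability as an independent step. The capability names and probabilities below are invented for illustration, and the independence assumption is a simplification; real assessments derive these dependencies in the simulation itself:

```python
import math

# Illustrative capability success probabilities for one mission thread
# (detect -> track -> characterize -> communicate -> engage).
# Independence is a simplifying assumption; real capability outcomes
# are usually correlated and assessed in the integrated simulation.
capability_p = {
    "detect": 0.95,
    "track": 0.90,
    "characterize": 0.85,
    "communicate": 0.98,
    "engage": 0.80,
}

p_mission_success = math.prod(capability_p.values())
print(f"P_S = {p_mission_success:.3f}")  # ~0.570 for these inputs
```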

Most “futures” capability assessments are performed at the Mission level, a.k.a. “many-v-many,” because at this level we’re usually focused on some significant fraction of a theater-level fight. Again, see Ref 1 for a discussion of Mission-Based Capability Assessment. The significance of this level goes beyond simply having lots of Red & Blue platforms interacting. They must be organized, battle managed, and commanded & controlled in a representative way to perform their intended offensive and defensive missions. And the behavior models should be both representative and context-specific, so they are able to act and respond in sensible ways which reflect how actual operators and automated systems would have made decisions.

Operations: The Operations level is composed of numerous missions. A typical Operational Outcome might be “Achieve Air Superiority.” Numerous missions from multiple services might be required to achieve an Operational Outcome. Operational Outcomes are usually assessed in higher-level summary campaign terms such as the time to achieve the operational outcome, munitions and fuel expended, platform losses, etc. Such measures basically assess whether the Blue/partner forces can afford to win the fight. For this to be assessed, a significant amount of detail must be known about the overall operational scenario, OPLAN (Operational Plan), etc., and how individual missions are generated and support the Operational Objectives. This is usually the required analysis level for major acquisition decisions, but in practice few teams are able to process analyses at this level.

Campaign: This is the highest usual level of analytical assessment. It represents a theater or regional Major Combat Operation (MCO) and is assessed using the metrics noted above in the Operations description. In Operations-level assessments the typical approach is to generate discrete missions and roll up the success rates to quantify the operations measures noted above. Campaign-level assessments are generally performed via unit-on-unit play, not mission-level play. Traditionally this made a lot of sense because MCOs were usually accomplished via unit-on-unit battles, e.g. mobile armor battalions engaging each other. The “moves” in such battles are rendered into statistical lookup tables and losses are assessed in each move. However, this assessment approach makes a lot less sense when non-GEO space-based capabilities are major players in an MCO. LEO and MEO space capabilities orbit through the battlespace at high rates of speed (a LEO satellite passes from horizon to horizon in roughly 7 to 12 minutes, depending on altitude and elevation mask), so the specific satellites supporting an MCO vary quickly. For these reasons, the recommended best practice is to use an Operations approach to assess Campaign-level measures. Again, this approach assesses success as a function of the missions’ success and the specific roles of the various missions in achieving their related operational outcomes.
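That pass-duration figure is easy to sanity-check from first principles. The sketch below computes the maximum (directly overhead) pass duration for a circular orbit over a non-rotating Earth; the altitude and elevation-mask values are illustrative:

```python
import math

MU = 3.986004418e14   # Earth gravitational parameter, m^3/s^2
R_E = 6.371e6         # mean Earth radius, m

def max_pass_duration_s(altitude_m, min_elevation_deg=0.0):
    """Max (overhead) pass duration for a circular orbit, ignoring
    Earth rotation. With a ground-site elevation mask eps, the Earth
    central angle swept while visible is
        lam = acos((R_E / r) * cos(eps)) - eps,
    and the pass lasts 2 * lam / (orbital angular rate)."""
    r = R_E + altitude_m
    period = 2 * math.pi * math.sqrt(r**3 / MU)
    eps = math.radians(min_elevation_deg)
    lam = math.acos((R_E / r) * math.cos(eps)) - eps
    return 2 * lam * period / (2 * math.pi)

# A ~550 km orbit: ~12 min horizon-to-horizon, ~8 min above a 10 deg
# elevation mask -- a handful of minutes of support per satellite pass.
print(max_pass_duration_s(550e3, 0) / 60)   # ~12.2 min
print(max_pass_duration_s(550e3, 10) / 60)  # ~7.9 min
```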

At the start of this article I commented that most organizations suffer from inadequate MS&A capabilities which cannot virtually fill the gap between what can be done and what must be done. In most cases, all of the MS&A Pyramid levels from Platforms to Missions must be represented to assess acquisition, CONOPS, policy, etc. decisions which impact Capabilities and Missions. More generally, the MS&A Pyramid levels from Engineering up to Operations are likely required to assess such decisions. In practice, this is rarely achieved due to the scale required in both MS&A and Trade Study capabilities. Thus, it makes a lot of sense to practically size up what your teams need to assess and ensure that you properly organize and resource your MS&A people, process, technology, and contractual approaches to achieve your stated goals. A potential key to this is establishing collaborative relationships in which each org shoulders part of the load and all contributing orgs can achieve their objectives. There are numerous additional dimensions to the organization and resourcing of MS&A efforts. I’ll tackle these in future articles, so stay tuned!

References:

1. https://www.garudax.id/posts/william-cooper-90459718_the-future-of-dod-capability-acquisition-activity-7187141650769436672-q7Rd?utm_source=share&utm_medium=member_desktop


