Difficulty of Dive: Scoring the Complexity of Your Business Applications

Anyone can tell you that business and IT transformation projects are anything but simple. Transformation involves changes to business models, roles, processes, tooling, and business intelligence, and in some cases may take years to complete. While decomposing the program or tooling changes into incremental chunks should be part of the strategy, it is critical to have a high-level understanding of the magnitude of change before embarking. Otherwise, the program or product runs a high risk of missing the mark.

In the Olympics, a perfect swan dive (while elegant) wouldn't win the competition when other divers are attempting far more complex maneuvers with multiple twists and turns. In other words: Difficulty of dive matters.

Likewise, in the enterprise, not all line-of-business tools are created equal. Is it possible to accelerate the build phase? Is it feasible to go big-bang rather than a rolling deployment? How soon after launch should we expect funding to shift from "grow" to "run"? These questions are much harder to answer without profiling a tool to understand how complex it is, or will be.

Here are 10 scoring dimensions I've found to be helpful:

  1. Number of business processes enabled/supported: 1 = Few processes, such as single-purpose tool; 10 = Many processes and sub-processes spanning across an organization
  2. Complexity of processes: 1 = Simple, straightforward processes, generally agreed-upon with high adherence; 10 = High volume of processes & sub-processes, many permutations, currently evolving, low adherence
  3. Number of business rules in requirements: 1 = Few business rules, simple requirements, standardized, broadly accepted; 10 = High number of business rules, recent changes, lack of comprehensive agreement
  4. Organization and role maturity risk: 1 = Stable business model, org, and roles, years of high maturity/stability; 10 = Roles and org actively undergoing changes, not fully adopted or landed, challenges with role clarity/standardization/readiness
  5. Alternate viable solutions other than tool adoption: 1 = Low risk of tool adoption issues, usage is mandatory and no ability to sidestep; 10 = High risk that users find alternate, viable options for tool usage for some or all of the processes enabled
  6. Business criticality: 1 = Low importance to the overall functioning of the business, could be down for one week with little financial impact; 10 = Mission critical, high availability is necessary for business operations
  7. Number of targeted users: 1 = Few users, targeting a dedicated team; 10 = Large enterprise scale, over 10,000 end users
  8. Number of targeted personas: 1 = Few personas or user groups; 10 = More than 10 discrete user groups or personas
  9. Number of integrations: 1 = Few integrations, or stand-alone tool; 10 = Over a dozen upstream or downstream dependencies
  10. Technology risk: 1 = Usage of proven technology, ample availability of skills required to support; 10 = Usage of new, cutting-edge technology, proprietary, or not yet proven, scarce knowledge/skills

Each app can be scored 1-10 on each dimension, and the results rolled up into a heatmap across a portfolio. I usually use equal weighting, though the model can accommodate heavier weighting for some dimensions. Apps that average in the lower range of 3 or 4 - generally single-purpose tools for a smaller audience - may tolerate more aggressive design, build, and rollout schedules. But anything approaching 8 and up is reserved for seriously complex construction projects: use caution and reserve plenty of time for the unknowns that will invariably pop up out of the blue. And when someone asks why we can't just design, build, and ship it tomorrow, you can reply, "This is no swan dive."
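For the spreadsheet-inclined, the scoring model above is easy to sketch in a few lines of code. This is a minimal illustration, not a prescribed implementation: the dimension names, the optional weights, and the band thresholds (4 and 8, matching the rough guidance above) are my own illustrative assumptions.

```python
# Sketch of the 10-dimension "difficulty of dive" score: each app gets a
# 1-10 score per dimension, combined as a (weighted) average, then mapped
# to a rough planning band. Names and thresholds are illustrative.

DIMENSIONS = [
    "processes_enabled", "process_complexity", "business_rules",
    "org_role_maturity", "alternate_solutions", "business_criticality",
    "targeted_users", "targeted_personas", "integrations", "technology_risk",
]

def difficulty_score(scores, weights=None):
    """Weighted average of 1-10 dimension scores (equal weights by default)."""
    if weights is None:
        weights = {d: 1.0 for d in scores}
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

def complexity_band(score):
    """Map an average score to a rough planning band."""
    if score <= 4:
        return "simple: aggressive design/build/rollout may be feasible"
    if score < 8:
        return "moderate: plan incrementally, watch the riskiest dimensions"
    return "complex: reserve ample contingency for unknowns"

# Example app: mostly simple, but integration-heavy on a risky stack.
app = {d: 3 for d in DIMENSIONS}
app["integrations"] = 9       # many upstream/downstream dependencies
app["technology_risk"] = 8    # unproven or proprietary technology

print(round(difficulty_score(app), 1))   # -> 4.1
print(complexity_band(difficulty_score(app)))
```

Scoring every app in the portfolio this way, then sorting or color-coding by the average, produces the heatmap view described above; the per-dimension scores remain useful for spotting which one or two dimensions are driving the total.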


