What is the focus of analysis: problem or solution?

Is the purpose of an analysis model understanding the problem or proposing a solution? I have discussed this a few times with different people. This is how I used to see it:

  • Analysis deals with understanding the problem domain and requirements in detail
  • Design deals with actually addressing those (functional and non-functional) requirements
  • A detailed design model can be automatically transformed into a working implementation
  • An analysis model can’t, since in the general case it is not possible to automatically derive a solution from the statement of a problem.

Rumbaugh, Blaha et al. in “Object-Oriented Modeling and Design” (one of the first OO modeling books) state that the purpose of analysis in OO is to model the real-world system so it can be understood, and that the outcome of analysis is an understanding of the problem in preparation for design.

Jacobson, Booch and Rumbaugh (Rumbaugh again, now with the other two “amigos”) in “The Unified Software Development Process” state that “an analysis model yields a more precise specification of the requirements than we have in the results from requirements capture” and that “before one starts to design and implement, one should have a precise and detailed understanding of the requirements”.

Ok, so I thought I was in good company there. However, while reading the excellent “Model-Based Development: Applications”, to my great surprise I found H. S. Lahman clearly stating that, contrary to structured development, where the focus of analysis is problem analysis, in the object-oriented paradigm problem analysis is done during requirements elicitation; the goal of object-oriented analysis is to specify the solution in terms of the problem space, addressing functional requirements only, in a way that is independent of the actual computing environment. Lahman also states that the OOA model is the same as the platform-independent model (PIM) in MDA lingo, so it can actually be automatically translated into running code.

That is the first time I have seen this position defended by an expert. I am not very familiar with the Shlaer-Mellor method, but I wouldn’t be surprised if it took a similar view of analysis, given that Lahman’s method is derived from Shlaer-Mellor. Incidentally, Mellor and Balcer’s “Executable UML: A Foundation for Model-Driven Architecture” is not in the least concerned with the software lifecycle: it briefly mentions use cases as a way of textually gathering requirements, and focuses heavily on solution modeling.

My suspicion is that for the Shlaer-Mellor/Executable UML camp, since models are fully executable, one can start solving the problem (in a way that is removed from the actual concrete implementation) from the very beginning, so there is nothing to be gained by strictly separating the problem from a high-level, problem-space-focused solution. Of course, other aspects of the solution, concerned with non-functional requirements or somehow tied to the target computing environment, are still left to be addressed during design.

And now I see how that all makes sense. I struggled myself with how to name what you are doing when you model a solution in Cloudfier. We have been calling it design, based on the more traditional view of analysis vs. design: since Cloudfier models specify a (highly abstract) solution, it couldn’t be analysis. But now I think I understand: for approaches based on executable modeling, the divide between understanding the problem and specifying a high-level solution is so narrow and so cheap to cross that both activities can and should be brought closer together, and the result of analysis in such approaches is indeed a model that is ready to be translated automatically into a running application (and can be quickly validated by the customer).
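To make that concrete, here is a minimal sketch of what a solution expressed purely in problem-space terms might look like. This is plain Python rather than Cloudfier’s TextUML notation, and the expense-approval domain, its class and its rules are hypothetical, invented for illustration: the point is that the model captures domain state and functional rules only, with no persistence, UI or other computing-environment concerns, yet is executable and thus immediately checkable with a domain expert.

    # Hypothetical example: a platform-independent "solution in problem-space
    # terms". Domain state and business rules only -- no database, UI or
    # framework code; those are design concerns tied to a computing environment.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Status(Enum):
        DRAFT = auto()
        SUBMITTED = auto()
        APPROVED = auto()

    @dataclass
    class Expense:
        description: str
        amount: float
        status: Status = Status.DRAFT

        def submit(self) -> None:
            # Functional rule: only a draft expense can be submitted.
            assert self.status is Status.DRAFT, "only drafts can be submitted"
            self.status = Status.SUBMITTED

        def approve(self) -> None:
            # Functional rule: only a submitted expense can be approved.
            assert self.status is Status.SUBMITTED, "only submitted expenses can be approved"
            self.status = Status.APPROVED

    # Because the model is executable, its behavior can be validated with the
    # customer right away, before any platform-specific design exists.
    expense = Expense("team lunch", 120.0)
    expense.submit()
    expense.approve()
    assert expense.status is Status.APPROVED

Turning such a model into a deployed application (the PIM-to-code step, in MDA terms) then becomes a largely mechanical concern, which is exactly Lahman’s point.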

But for everybody else (the vast majority of software development practitioners, since executable modeling is still not well known and seldom practiced), that is just not true, and the classical interpretation still applies: there is value in thoroughly understanding the requirements before building a solution, given that the turnaround between problem comprehension, solution building and validation is so damn expensive.

For those of you thinking that this smells of BigDesignUpFront, and that it is not an issue with agile or iterative approaches in general: I disagree, at least as far as typical iterative approaches go, where iterations need to span all or several phases of the software development life cycle before they can deliver results that non-technical stakeholders can validate. As such, they are still very wasteful (the use of the word “agile” feels like a bad joke to me).

Approaches based on executable modeling, on the other hand, almost eliminate the chasm between problem analysis, conceptual solution modeling and user acceptance, allowing for much more efficient and seamless collaboration between the problem domain expert and the solution expert. Iterations become so action-packed that they are hardly discernible. Instead of iterations taking weeks to allow for customer feedback, and a project taking months to fully cover all functional requirements, you may get a fully specified solution after locking a customer and a modeler in a boardroom for just a day, or maybe a week for bigger projects.

So, long story short: the answer to the question posed at the beginning of this post is both, but only if you are following an approach based on executable modeling.

What is your view? Do you agree with that? Are you an executable modeling believer or skeptic?

This post appeared a while ago on the Abstratt Blog.

Rafael, thanks for the retrospective, and the introspection that follows reading your post. If you look at the methods you list, each approach has a background that dictates its approach: S-M seeing the world in objects and states, the amigos with their more defined, evolutive approaches, Lahman with his MDA involvement. Like many, I started all this from a "pure" engineering perspective, where a lot of analysis work was done up front to ensure that the final system worked. Moving more into Agile, I realised that analysis occurs all the time: you analyse requirements with the stakeholders, you analyse the possible architectural solutions with architects, you analyse the various design decisions with the developers. I've had some interesting discussions with agilists on this topic, as I'm sure others have (especially when you throw in models with agile...). In the end, you end up analysing both the problems and the solutions, at each step, to keep moving and evolving. With modeling, the earlier you can "execute" your design/analysis/requirements, the earlier you can validate your work. At each step, there are different things you can do to validate your model (e.g., parametric validation in SysML, formal methods for early "analysis" models, full execution for later "design" models). But really, isn't this a continuum? Do we still need, in this day of agile approaches, to have these labels for the various phases? I don't have a ready answer, and I suspect that we will keep on having this discussion for some time to come. So thank you for this topic and for getting people, maybe indirectly and especially me, thinking about evolution.

Rafael, I found it informative to look at the metamodel of User Stories and BDD (http://www.ebpml.org/blog2/index.php/2013/04/26/reinventing-agile-from-value-to). What you can see is that user stories seem to be expressed from the point of view of the solution, rather than of the problem definition. A major issue, if you ask me. On the other hand, BDD's metamodel indicates that (desired) behavior is expressed from the problem point of view. I have also explored more abstractly how to define a problem statement and articulate the solution (pp. 13-16 of this document: http://www.xgenio.com/bolt-introduction.pdf).
