Deploying Analytics: a flexible, repeatable methodology
(This is a repost of a previous article that I accidentally deleted)
Building on a previous post ('A 10-Step checklist of Deploying an Analytics Centre of Excellence'), I want to use this article to elaborate on the approach I have adopted to build a reliable, repeatable analytics deployment framework.
Before I get started, let me try to address some misconceptions about the nature of project management and analytics:
- 'Analytics is special: it is an iterative process that follows an unpredictable path based on experimentation.' Whilst this might be true of the 'question - answer - next question' part of analytics, it does not hold for why and how you deploy the capability: building a car and driving it are two different things.
- 'Frameworks and methodologies are only for big projects.' On the contrary, they should be scalable enough to fit a project of any size: in my view, this makes them less a checklist and more a structured way of thinking and acting.
- 'Fixed methodologies like these are not fit for Agile projects.' Also not true: many agile projects fail to deliver their full potential because they do not plan adequately or early enough and lose their way; they deliver in parts, but the 'whole' fails.
So, with the above in mind, I use my deployment framework thus:
The Border - Scope, Terms of Reference (ToR), Resources and Constraints: these set the boundaries of the programme, the tools you have available and what you cannot do.
Vision, Goals and Objectives - set the measures and metrics against which the programme will be judged. If these are not quantified (at least in outline) at the beginning, how will you prioritise your actions, what will you test against and how will you know if you have succeeded?
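As an illustration of what 'quantified, at least in outline' might look like, here is a minimal sketch in Python; the objective, metric, figures and deadline are hypothetical examples of mine, not prescriptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Objective:
    """A quantified objective: a measure you can prioritise, test and judge against."""
    description: str
    metric: str
    baseline: float
    target: float
    deadline: date

    def achieved(self, measured: float) -> bool:
        # A simple 'did we hit the target?' test; a real programme would track trends too.
        return measured >= self.target

# Hypothetical example: every figure here is an illustrative assumption.
obj = Objective(
    description="Embed analytics in pricing decisions",
    metric="% of pricing decisions informed by the analytics service",
    baseline=10.0,
    target=60.0,
    deadline=date(2026, 6, 30),
)
print(obj.achieved(measured=45.0))  # False: not there yet
```

However it is recorded (a spreadsheet works just as well), the point is that each objective carries a measure, a target and a date that the programme can later be tested against.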
Within the Border - this is the bulk of the programme itself. At its core is a classic 'as-is' to 'to-be' business transformation process with a 'right-to-left' planning cycle, based on formal assessment and road-mapping tools and the PMI's PMBoK project management standards (suitably tailored):
- Project Initiation
(As-is Assessment) - getting a formal assessment of where you are today (I find an adapted version of Tom Davenport's DELTA model - Data, Enterprise, Leadership, Targets, Analysts - helps here).
(To-Be Analysis) - partly derived from a decomposition of the Vision, Goals and Objectives, but within programme boundaries; experience helps distinguish between the theoretically possible and the practicably probable and valuable.
- Project Planning
(Requirements Analysis) - using the road-mapping counterpart to the assessment tool, you can identify potential sub-projects that deliver against specific objectives. This should also feed back into a refinement of the Objectives.
(Gap Analysis) - manages the difference between the capabilities you have and those you need to meet the business requirements (a minimal sketch of this step appears after this list). This, in combination with a strategic overview, prioritisation schedule and roadmap, then informs:
(Plans, Milestones / KPIs and Actions) - identifying what will be done, by whom and when. The plans consist of specific actions to fill the capability gaps within set timeframes.
- Execution
(Implementation) - whilst I have placed this outside the Border, the actual delivery of the programme is clearly defined by the strategy-to-planning phases, but also by the production methodology of the implementation team (worthy of a whole article in its own right).
- Project Control
There is an expression that 'no plan survives the battlefield', and when implementing analytical projects, the sponsors and the governance framework have to accommodate change. Project control helps to manage the overall programme activities, budget, schedule, risks, etc., and to keep the stakeholders aligned.
- Project Close
(interface to Adoption Support) - thinking about how the analytical capabilities will be deployed is critical; numerous projects fall at the last hurdle because insufficient thought was given to how to deploy in practice: doing analytics to an unprepared organisation is fraught with risk.
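To make the as-is / to-be / gap / plan chain concrete, here is a minimal sketch in Python. The dimension names follow Davenport's DELTA model; the maturity scores, weights and five-point scale are illustrative assumptions of mine, not part of the framework itself:

```python
# A minimal, illustrative sketch of the as-is -> to-be -> gap -> plan flow.
# Dimension names follow Davenport's DELTA model; all scores, weights and
# the 1-5 maturity scale are hypothetical assumptions for illustration.

AS_IS = {"Data": 2, "Enterprise": 1, "Leadership": 3, "Targets": 2, "Analysts": 2}
TO_BE = {"Data": 4, "Enterprise": 3, "Leadership": 4, "Targets": 4, "Analysts": 3}

# Business priority weights, e.g. derived from the Vision, Goals and Objectives.
WEIGHTS = {"Data": 3, "Enterprise": 2, "Leadership": 1, "Targets": 3, "Analysts": 2}

def prioritised_gaps(as_is, to_be, weights):
    """Rank capability gaps by (gap size x business weight), largest first."""
    gaps = {dim: to_be[dim] - as_is[dim] for dim in as_is}
    return sorted(
        ((dim, gap, gap * weights[dim]) for dim, gap in gaps.items() if gap > 0),
        key=lambda item: item[2],
        reverse=True,
    )

for dim, gap, score in prioritised_gaps(AS_IS, TO_BE, WEIGHTS):
    print(f"{dim}: close a {gap}-level gap (priority score {score})")
```

Each prioritised gap then becomes a candidate sub-project on the roadmap, with its own milestones and KPIs.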
Below the Border - this is predominantly where an adapted ITIL Service Management life-cycle framework comes into play. I had not previously addressed this, as implementation and in-service support were not within my direct purview, but I get asked about it so often that I have added it to my canon:
- Service Strategy - understanding the objectives and customer requirements; engaging those who will do the actual work, and their managers, is key to making use-cases real.
- Service Design - conversion of objectives and strategies into plans; a natural extension of the programme plan is to carry it forward into deployment (and accommodate feedback).
- Service Transition (Adoption Support) - manages the move from test/acceptance (or successful PoC) to an operational environment, including any technical change, phases, etc.
- Service Operation - day-to-day operation of the analytical function and management of any problems: in my experience, good analytical projects rarely stand still.
- Continual Service Improvement (Improvement / Innovation) - day-to-day incremental improvements combined with / supporting periodic strategic reviews (especially if your programme has a roadmap spanning multiple phases).
In conclusion, time spent (in proportion to the case at hand) planning your analytics deployment is time well spent, and it does not have to be onerous: focused attention on the requirements improves the outcome and reduces risk.
In my next article, I will focus on taking the first steps: developing the vision, goals and objectives and interfacing to Project Initiation.
Meanwhile, I look forward to your questions / comments.