Activity Analysis: Misconceptions & Misuses That Limit the Tool's Value Potential

Activity Analysis is the term the Construction Industry Institute (CII) uses for the Work Sampling tool it includes as a best practice for improving craft productivity on construction projects. The tool comes out of CII's Research Team 252 (RT-252), which was chartered to research tools for improving craft productivity. Work Sampling is an Industrial Engineering practice designed decades ago that has more recently been adapted to construction. You can learn more about CII's position on Activity Analysis at this link.

CII has gone further by offering a training and implementation guide. The link is provided below.

These documents make the case for using Activity Analysis and offer an implementation guide for a basic Activity Analysis program. The training guide is 64 pages long, including all graphics and indexes. Nothing in its content is inaccurate, but it is designed to help members who may have other responsibilities perform Activity Analysis on their own. Designing to that end means the training must focus on the minimum requirements and protocols, and it may fall short in detailing some of the potential pitfalls of implementing the tool.

While the protocols of the tool are easily taught, these pitfalls, in the form of misconceptions and misuses, can not only undermine the tool's objectives but also create credibility problems with the tool itself, driving underutilization and perpetuating a negative reputation built on the unwanted consequences of misuse. The CII document omits these items for good reason: many more pages would have to be dedicated to each, effectively transforming a simple protocol document into a textbook. Doing so would work against the goal of providing a simple, non-threatening, usable tool for CII member personnel, most of whom come from other areas of expertise with no prior background or experience with the tool. This article addresses four of the more impactful of those pitfalls so that the tool can reach its value potential throughout the industry.

Bias: Bias can be a lethal pitfall in implementing an internal Activity Analysis program. Since the statistical data used in the process is derived entirely through human observation, bias can easily enter the equation. Consider the original use of Activity Analysis, or Work Sampling, as defined in Chapter 17.3 of Maynard's Industrial Engineering Handbook. Originally designed for manufacturing applications, Work Sampling calls for a "random sampling" of observations, with a random number table or some other random number generator determining when sampling data is taken throughout the duration of the study. In charts of shift production such as the one below, because production is done by machines instead of people, expected productivity is much more stable, with any variation being largely random. Compare that with shift production in construction and you will notice a more predictable pattern of variation, with consistently lower and higher times of day. Why? Because machines produce at a much more consistent rate over the course of a shift than construction workers, who are human beings, not machines.
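The classic random-number-table step can be sketched in a few lines. This is a minimal illustration, not Maynard's published procedure; the 7:00–15:30 shift window and the function name are my own assumptions for the example.

```python
import random

def random_tour_times(n, shift_start=7 * 60, shift_end=15 * 60 + 30, seed=None):
    """Draw n random observation times (minutes after midnight) within the
    shift, mimicking the random-number-table approach of classic Work
    Sampling, and return them as HH:MM strings in chronological order."""
    rng = random.Random(seed)
    minutes = sorted(rng.randint(shift_start, shift_end) for _ in range(n))
    return [f"{m // 60:02d}:{m % 60:02d}" for m in minutes]

tour_times = random_tour_times(6, seed=7)  # e.g., six sampling tours in a shift
```

A seeded generator like this simply replaces the paper random number table; the key property is that observation times are not chosen by the observer.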

Construction productivity over the course of a shift forms a "double camel's back" profile. Workers typically work less productively in the hours just before and after a scheduled break, and at the beginning and end of a shift. Conversely, mid-morning and mid-afternoon typically show as productivity peaks. Some projects show a flatter "camel's back," but the basic shape holds.

Where bias rears its ugly head is in the random sampling protocols. One could make composite performance look better by taking a disproportionately high number of observations in the mid-morning and mid-afternoon periods, when production is at a high. The strategy used to offset this pitfall is something PER calls a "deliberately random sampling scheme." It uses electronic tools to help ensure data collection ends with an appropriate balance of observations throughout the shift. Each hour of the shift has a certain number of "available productive minutes." For example, if the site has a morning break from 9:00 to 9:10 am, then the 9 o'clock hour has 50 available productive minutes. Each hour's available productive minutes are calculated, and the balancing tool ensures the final rounds of tours are focused on the times of the shift that produce a proportional balance of observations across the entire shift. Sometimes bias is unintentional, or even a subconscious reaction to internal pressure to improve; that happens when management focuses on improving the number instead of on what can be improved to drive the number in the desired direction. Once bias enters the data set, that data set becomes unusable, even dangerous. Therein lies the problem with this issue.
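The available-productive-minutes arithmetic can be sketched as follows. PER's actual balancing tool is not published; this is a minimal illustration, and the shift schedule (7:00–15:00 hours, a 10-minute break at 9:00, a 30-minute lunch at 12:00) is an assumption for the example.

```python
# Illustrative shift: clock hour -> non-productive minutes in that hour.
BREAKS = {9: 10, 12: 30}
SHIFT_HOURS = list(range(7, 15))   # hours 7:00 through 14:00

def available_minutes(hour):
    """Productive minutes available in a given clock hour (e.g., the
    9 o'clock hour has 60 - 10 = 50 after the morning break)."""
    return 60 - BREAKS.get(hour, 0)

def observation_targets(total_obs):
    """Allocate a study's observations across hours in proportion to each
    hour's available productive minutes, so no period is over-weighted."""
    minutes = {h: available_minutes(h) for h in SHIFT_HOURS}
    total = sum(minutes.values())
    return {h: round(total_obs * m / total) for h, m in minutes.items()}

targets = observation_targets(400)   # per-hour targets for a 400-observation study
```

Comparing actual observation counts against these per-hour targets late in a study shows which hours the remaining tours should concentrate on.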

Alignment with Project Controls Data: The industrial construction industry spends millions on Project Controls every year. Most large firms invest in developing their own proprietary models, compatible with their time and attendance systems and using terminology and formulas unique to their organizations. Since this data is grounded in the project estimate, its accuracy is only as good as the estimate itself. Regardless of the basis, however, the upward and downward movement of the information should track reasonably closely with that of Activity Analysis, because Activity Analysis is affected by a subset of the variables that affect Project Controls data. That is also why the two measures sometimes diverge, which allows a data-savvy manager to zero in on the root causes of poor performance faster than using Project Controls data alone. What has established Project Controls as the predominant methodology for measuring productivity is that it factors in all of the variables that affect productivity. That composite nature is also one of its weaknesses, making it harder to identify root causes across the wide range of possible contributors. For example, consider a steel-heavy project whose estimate assumes all steel is delivered from the fabricator with faceplates already attached. If the fabricator instead delivers the faceplates in boxes, the ironworkers are forced to burn hours bolting them on. This is a case where Activity Analysis data and Project Controls data can diverge: the crews are burning hours without earning hours, because attaching faceplates is not a task in the estimate, yet from an Activity Analysis standpoint the ironworkers are not constrained from working, so their Direct Activity can remain high.
Sometimes, instead of looking at what is causing the divergence, Activity Analysis is discredited for not aligning with Project Controls, when in fact the story that divergence tells can quickly point Project Management to the root cause. If you want to build credibility with Project Management and Project Controls, demonstrate and articulate the methods you are using to ensure they are getting accurate, timely, and unbiased data.
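The faceplate example can be put in numbers. These figures are invented for illustration only, and the metric names are simplified stand-ins for whatever a firm's proprietary Project Controls model actually uses.

```python
# Illustrative numbers only, not from any real project: the estimate
# assumed faceplates arrive shop-installed, so bolting them on in the
# field burns hours that earn nothing against the estimate.

estimated_work_hours = 800   # hours earned for tasks that ARE in the estimate
faceplate_hours = 200        # field hours for a task NOT in the estimate

hours_earned = estimated_work_hours
hours_burned = estimated_work_hours + faceplate_hours

# Project Controls view: the productivity factor degrades.
productivity_factor = hours_earned / hours_burned   # 800 / 1000 = 0.8

# Activity Analysis view: crews were unconstrained and working, so the
# share of "Direct Activity" observations stays high (sample counts).
direct_activity = 460 / 800                         # 57.5% direct
```

The point of the sketch is the divergence itself: the productivity factor drops while Direct Activity holds steady, which tells a manager the problem is unearned work, not idle crews.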

Lack of Calibration: In spite of all the efforts to make Activity Analysis a completely objective process, at the end of the day human observation is a vital contributor to its success or failure. Calibration means getting multiple observers to collect data that comes as close to the same results as if a single observer had collected it all. There are two ways to do that: first, make sure all category definitions are clear, distinguishable from each other, and appropriate to the type of work being studied; then, calibrate all observers to those definitions both before data collection begins and periodically throughout current and future studies. Also, always calibrate new personnel brought on to collect data. Doing so ensures Project Management consistently receives the best available information from the Activity Analysis effort. Without it, the improvement impact may not be as large as you expect, or you may even end up causing more harm than good, and nothing will kill an Activity Analysis program faster than doing more harm than good.
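One simple way to check calibration is to have two observers categorize the same set of observations and measure how often they agree. This is a generic agreement check, not a procedure from the CII guide; the category names and any acceptance threshold are illustrative assumptions.

```python
def percent_agreement(obs_a, obs_b):
    """Fraction of paired observations that two observers placed in the
    same activity category; a rough calibration check between observers."""
    if len(obs_a) != len(obs_b):
        raise ValueError("observers must categorize the same observation set")
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

# Two observers categorize the same six crew observations.
observer_1 = ["direct", "travel", "wait", "direct", "direct", "tools"]
observer_2 = ["direct", "travel", "direct", "direct", "direct", "tools"]
score = percent_agreement(observer_1, observer_2)   # 5 of 6 agree
```

A low score flags either an ambiguous category definition or an observer who needs recalibration before more data is collected.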

Lack of Craft Receptivity: Once the craft workforce believes you are against them and "in management's pocket," your chances of getting a truly accurate assessment of the project's craft utilization efficiency are slim. If the relationship is bad enough, the workforce will work harder to appear busy than it would take to actually be busy, just to make the study meaningless. Even the savviest data collectors cannot distinguish all the fake work from the real work if mistrust is pervasive throughout the workforce. The best, and almost only, way to fix craft receptivity problems is to head them off before they occur. Do not wait until there is a problem; be proactive. Have a handout for the foremen that explains what you are doing and what you are NOT doing. Encourage the foremen at every opportunity to tell their crews they are welcome to approach the Analyst and see what is being collected. The Analyst should take the time to give them a general overview and thank them for being interested enough to ask, and finish by asking them to let the Analyst know about any roadblocks that are frustrating them. If you have a smoker on the data collection team, have them talk to workers in the smoke areas. Analysts have gotten excellent feedback through craft comments, a great addition to the raw data set that adds perspective or validates what the data is telling you. However, if you have a bad relationship with the craft, you cannot know how much of that information is real and how much is made up to throw you off. The best and only policy is to take a positive, proactive posture with the craft; you will finish with both an accurate report and a craft that feels listened to rather than beaten up to be more productive.

There are other items, but these are the ones at the top of the impact list. Activity Analysis is a positive and powerful tool when projects avoid these pitfalls and a few others. To be clear, and to repeat: nothing RT-252 produced was inaccurate, and the information that team published was created "fit-for-purpose." As a professional practitioner of Activity Analysis, I wrote this in hopes of adding value by providing some insights to those who have chosen self-implementation, because doing so supports the long-term success of a process I have a lot of passion for. Stay safe out there!!

More articles by Chris Buck
