Analytics and Patterns of Performance Data to Your Rescue
Today, most modern DBMSs have built-in functionality that tracks every aspect of the workload and maintains many operational counters in real time, giving you the opportunity to analyze queries, packages or any other part of the workload running at any given moment. In DB2, this functionality is known as Query Monitor (QM) on z/OS and as the monitor switches (at the database instance level) on LUW; Oracle offers Real-Time SQL Monitoring and ADDM; MySQL/MariaDB provide the Performance Schema. These features measure the CPU consumed, the number of SQL calls/executions, the total getpages and other operational counters for a query or package at the chosen time of interest.
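As a minimal sketch of what querying such counters looks like, the example below builds an in-memory SQLite table that mimics the shape of MySQL's `performance_schema.events_statements_summary_by_digest` (the column names are real; the rows are invented for illustration) and asks the usual first question: which statements consume the most total time?

```python
import sqlite3

# Illustrative sketch: this table mimics the shape of MySQL's
# performance_schema.events_statements_summary_by_digest.
# The column names are real; the data is invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events_statements_summary_by_digest (
        digest_text     TEXT,      -- normalized statement text
        count_star      INTEGER,   -- number of executions
        sum_timer_wait  INTEGER    -- total elapsed time (picoseconds in MySQL)
    )
""")
conn.executemany(
    "INSERT INTO events_statements_summary_by_digest VALUES (?, ?, ?)",
    [
        ("SELECT * FROM orders WHERE id = ?",            120_000,  9_000_000_000),
        ("UPDATE stock SET qty = qty - ? WHERE sku = ?",  30_000, 21_000_000_000),
        ("SELECT name FROM customers WHERE id = ?",      450_000,  4_500_000_000),
    ],
)

# Top statements by total elapsed time -- the heaviest part of the workload.
top = conn.execute("""
    SELECT digest_text, count_star, sum_timer_wait
    FROM events_statements_summary_by_digest
    ORDER BY sum_timer_wait DESC
    LIMIT 2
""").fetchall()
for digest, execs, total_wait in top:
    print(f"{total_wait:>15}  {execs:>8}  {digest}")
```

The same ORDER BY/LIMIT pattern applies against the real Performance Schema table; only the connection and the time unit change.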
Additionally, the data can be collected in more detail by using analytical queries to distinguish between online and batch workloads, or by grouping it by the workstations/servers from which the applications originate and/or by users. In doing so, the information can be applied to your SLAs and/or to the bills you send your customers, depending on the services your company provides (e.g. DBaaS).
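A small sketch of that grouping step, assuming hypothetical harvested rows of (user, origin host, CPU milliseconds); the same rollup per (user, origin) pair is what you would map onto SLAs or customer bills:

```python
from collections import defaultdict

# Hypothetical samples harvested from the monitor: (user, origin host, cpu_ms).
samples = [
    ("batch_user", "batchsrv01", 5200),
    ("web_user",   "appsrv01",    340),
    ("web_user",   "appsrv02",    410),
    ("batch_user", "batchsrv01", 4800),
]

# Roll the raw samples up per (user, origin) pair.
cpu_by_origin = defaultdict(int)
for user, host, cpu_ms in samples:
    cpu_by_origin[(user, host)] += cpu_ms

for (user, host), cpu_ms in sorted(cpu_by_origin.items()):
    print(f"{user:<12} {host:<12} {cpu_ms} ms CPU")
```

In practice the grouping would be a GROUP BY in the analytical query itself; the point is simply that origin and user are the dimensions worth keeping.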
However, ask yourself: are these features activated in your DBMS? In most cases they are not used adequately, because they are not switched on by default.
Using the previously discussed functionality gives you better opportunities to discover and analyze the workload behavioral patterns occurring in your database system, both in real time and at any given point in the past. That is, as long as you harvest and collect this data. Like any other data, the collected data grows, and it can grow even more rapidly than your regular application data, so be aware of this and take appropriate action, as I pointed out in my previous post. By harvesting and collecting the data at regular intervals over time, you and your company will benefit from it in many ways. As I pointed out earlier, summarizing the data hourly, daily, weekly, monthly and yearly reveals various behavioral patterns of your database workload in increasing detail as you gradually configure and collect it.
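The summarization step can be sketched as follows, using hypothetical hourly snapshots of per-package CPU consumption rolled up into daily totals; the same pattern extends to weekly, monthly and yearly rollups:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly snapshots: (timestamp, package, cpu_seconds).
hourly = [
    (datetime(2023, 5, 1, 9),  "PKG_ORDERS", 12.0),
    (datetime(2023, 5, 1, 10), "PKG_ORDERS", 15.5),
    (datetime(2023, 5, 2, 9),  "PKG_ORDERS", 11.0),
]

# Summarize the hourly samples into daily totals per package.
daily = defaultdict(float)
for ts, package, cpu in hourly:
    daily[(ts.date(), package)] += cpu

for (day, package), cpu in sorted(daily.items()):
    print(day, package, cpu)
```

Keeping the raw hourly data alongside the rollups lets you drill back down when a daily or monthly anomaly shows up.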
As a result, the data can become an immensely beneficial asset in the analysis of your database workload. Specifically, it gives you a powerful resource to present to your developers, as it sheds light on any unsatisfactory performance a database package or application shows in your workload. They can then apply a fix or redesign the logic appropriately, and the DBA can add new indexes or table structures that benefit the situation. Most importantly, because the data can demonstrate and predict the need for hardware renewal, it may foster a more positive attitude from management toward investing in the necessary hardware. Above all, the data can raise your coworkers' awareness of how much the overall workload performance can be improved by package and/or application changes.
So if you are facing application performance problems or high CPU costs, or simply want to gain more knowledge of your database workload, and you are not yet collecting these real-time workload statistics, then I urge you to activate them and begin collecting the data in a suitable manner. It will aid your development team and your staff in pinpointing recurring workload problems and lowering CPU and other operational costs at any particular time of the year, bringing your database workload to an optimal state.
May I suggest an investment in managing this data :-)