SAM Inventory/Discovery & Data Quality
So last week I did an interview with Martin Thompson from the ITAM Review on Inventory, which is part of their 12 box grid. Well, I am not as good in front of a camera as I am either writing my thoughts down or having a face-to-face discussion with a fellow professional.
So I thought I would put my thoughts and take on Inventory into words. As usual, please feel free to chip in on the conversation; the more thoughts and opinions shared, the better for end users and consultants alike. For ease, when I talk about discovery I am also talking about inventory; the terms are interchangeable in this instance and it will save on words.
Everyone needs a discovery tool, from small companies through to enterprises. The only difference most of the time is the number of discovery tools you require. Typically, the larger and more complex the environment, the greater the requirement on discovery, and the larger the organisation, the more discovery tools are already in place.
Discovery tools are complex and there are a wide variety of them, such as DDMI, SNOW, ADDM, DDMA, ServiceNow, SCCM, Frontrange, Spiceworks, Express Metrix, Manage Engine etc. Most discovery tools gather the same type of information, with some nuances specific to each vendor. If I ever get the chance I would love to assess the capabilities of them all.
In SAM we all understand that the completeness and accuracy of the data gathered by discovery or contained within inventory tools, along with the completeness and accuracy of the entitlement data, is what brings us to a compliance position.
The biggest blocker in getting to that compliance position, and what ends up consuming so much of our valuable time, is the inability to get the data required at the quality required.
One of the major contributors to this is the people responsible for deployment and management of the discovery tools. Internal IT teams are more likely to achieve a higher percentage of coverage with discovery agents, and are more receptive to change when challenged on the inventory tools, than companies relying on their managed service provider (MSP), because of the costs to get the work completed to a satisfactory level.
If you don't have a discovery tool (one that scans a machine to determine what is installed on it and how that machine is configured), you will want to look at implementing one. If you are a large enterprise then you probably already have one or a number of discovery tools, in which case you may want to analyse your needs and assess the need for additional tooling to fill the gap.
The only data we as SAM professionals should be left to track down is the manual data that we can't gather by using these tools. The question here is "What data can't we gather using our current inventory/discovery tools?".
Ok, we all know some of the data that can't be gathered using these tools, and I could start calling it out, but actually it's different for everyone, so doing that doesn't answer the question for each specific end user.
So, whether you are looking to implement a discovery tool or doing a gap analysis on your current tools to see if another is required, I would start by looking at your known entitlement. (There is always an element of unknown entitlement where disparate parts of the business or acquisitions have purchased software that the SAM team may not know about or have recorded entitlement for.) This way you match what needs to be captured as part of your contractual responsibilities to what you can capture using a tool. Be aware, though, that this will only deal with software you have purchased; discovered software could be a lot more, which may lead to future gaps. Once you have mapped the products and entitlement, going to the business and the architects will fill in some of the blanks on software for which you may not have discovered entitlement.
With the product list showing the metric and its definition, you can determine the data required to meet the definition, and once you know the data requirements you can map them to the discovery/inventory tools that you have in order to determine the gap.
E.g. hostname (DDMI, SCCM, SNOW, ADDM etc.), environment prod/dev (CMDB, ServiceNow), users (AD, SAP HR, Oracle HR), PVU/core factor (Aspera, Flexera).
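The mapping above can be sketched as a simple set difference: the fields a licence metric needs, minus the union of fields the current tools can capture. This is a minimal sketch with hypothetical metric, field, and tool names, not a definitive model.

```python
# Hypothetical data: fields each licence metric needs, and fields each
# discovery tool already in the estate can provide. All names are illustrative.
required_fields = {
    "IBM PVU": {"hostname", "core_count", "pvu_factor"},
    "Named User": {"hostname", "user_id"},
}
tool_coverage = {
    "SCCM": {"hostname", "core_count"},
    "AD": {"user_id"},
}

# Union of everything the current tools can capture.
covered = set().union(*tool_coverage.values())

# Per-metric gap: fields no current tool provides (candidates for the RFP).
gaps = {metric: fields - covered for metric, fields in required_fields.items()}
```

Here `gaps` would show that the PVU factor is uncollected, so it either needs a new tool (the RFP) or becomes one of the managed manual exceptions.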
This gap is what you use to run an RFP, in order to ensure that you are automating as much of the data gathering as possible. The remnants are your manual data requirements, which you will have to manage through, but these should be the exception rather than the rule.
So now you know your gap or your requirements, and you have run an RFP to gather the required data to meet your contractual requirements. You also have a list of the manual data you need to collect, and have put processes in place to gather that data in the manner and at the time intervals you require.
This should be the end of it, but this is where data quality comes into play. In a perfect world data quality would be 100%, even if your coverage was only 95%, and you would have all of the entitlement to hand for every piece of software deployed. We, though, live in the real world and rarely embark on SAM with a start-up company. As such we normally have the pre-SAM era to deal with, which can be a mountain to climb. Let's look at some of the data issues that assail us on a daily basis.
- AD - naming conventions, missing data (full name, surname), missing organisation, incorrect fields filled in
- SCCM - package names, installation names changed, coverage of machines
- HR - part-time/full-time status of employees, contractor information
- ADDM/DDMI - naming conventions, Chinese or Russian characters, missing data fields
- IASTA - procurement data with no contract details, scanned-in copies of contracts with no OCR, missing fields such as end dates, missing contracts
- IT operations team - JML process, IMAC, SLAs
The list goes on, but this doesn't mean the tool is incorrect. It often means the configuration of the tool and its implementation have had constant change and iteration aligned with business change, and have left a legacy of multiple orphaned records, data fields with missing documentation relating to changes and business decisions, etc. You do have to make a decision on what data is worth collecting, as sometimes getting and cleaning the data is more trouble than the value you get from it. E.g. packaging, when done really well, can have great value, but too often it's done at the convenience of the technical resource and not that of the business or the SAM function, sometimes making this data very hard to consume.
This leaves IT and the SAM function having to spend a lot of time and money trying to resolve the gaps and the quality of data within those fields. Once you know the data fields being used, you then have to monitor them to ensure that they stay populated at the data quality level you have set, and determine another data source that will fill in the gaps or that can be used as an alternative if the data declines to an unacceptable level.

This only deals with the inventory side of the equation. The entitlement side can be more problematic, as it can require determining not only the operational owners but also the business owners and contract managers who may have had a hand in negotiating the software contract, in order to track down contracts, purchases or amendments that are missing from the central procurement database. These can be missing because of a clear-out of old contracts, misinformation about how long the business needs to retain them, or multiple channels of procurement. I would say that in the majority of cases, no matter how thorough people have been in removing legacy contracts, there is always one copy buried in a share somewhere or in someone's email; finding it is the issue.
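Monitoring that fields "stay populated at the determined data quality level" can be as simple as a periodic fill-rate check against a threshold. A minimal sketch, with hypothetical records and an assumed 90% threshold:

```python
# Hypothetical inventory records; None represents a missing/blank field.
records = [
    {"hostname": "srv01", "owner": "it-ops"},
    {"hostname": "srv02", "owner": None},
    {"hostname": None, "owner": None},
]
THRESHOLD = 0.9  # assumed quality level: 90% of records must have the field filled

# Fill rate per monitored field.
fill_rates = {}
for field in ("hostname", "owner"):
    filled = sum(1 for r in records if r.get(field) is not None)
    fill_rates[field] = filled / len(records)

# Fields that have declined below the acceptable level and need an
# alternative data source (or remediation with the owning team).
below_threshold = [f for f, rate in fill_rates.items() if rate < THRESHOLD]
```

Run on a schedule, a check like this flags declining fields before they quietly undermine the compliance position.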
I would say too much of my time, and that of those I speak to, is spent resolving issues with data quality, data completeness, data accuracy and entitlement, and I am sure I am not the only one. This has led me to search for solutions, and although I have heard of successes, I am still evaluating and searching for more, especially regarding entitlement.
There are a number of solutions that deal with data quality, one of which is the utilisation of a License Management (LM) tool that can take in multiple data feeds and resolve data issues as part of its recognition and normalisation engine. Now, I have not assessed every LM tool (I would like to, though), but the ones I have seen that accept multiple data sources are limited. You either have to pick the fields you want from each data source, and the tool will overwrite whatever data was originally in that field with the new input, and/or you can layer the data sources primary, secondary, tertiary etc., using each data source to fill in the blanks left by the prior one. (Please comment if there are LM tools out there that have a true data analytics engine.)
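The layering approach can be sketched in a few lines: sources are ordered by priority, and a later source may only fill fields that are still blank. This is a toy illustration with hypothetical field values, not any specific LM tool's logic.

```python
# Hypothetical records for one host from three sources, in priority order.
sources = [
    {"hostname": "srv01", "ip": None, "os": "RHEL 8"},        # primary (e.g. ADDM)
    {"hostname": "srv01", "ip": "10.0.0.5", "os": "RHEL 7"},  # secondary (e.g. SCCM)
    {"hostname": None, "ip": "10.0.0.5", "owner": "finance"}, # tertiary (e.g. CMDB)
]

merged = {}
for record in sources:  # earlier sources win; later ones only fill blanks
    for field, value in record.items():
        if value is not None and merged.get(field) is None:
            merged[field] = value
```

Note how the secondary source's conflicting `os` value ("RHEL 7") is simply discarded rather than investigated, which is exactly the drawback discussed below: the contradiction might have been the correct data.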
This is all fine and well, and I much prefer this method to relying on a single data source; LM tools that do this have advantages over other LM tools.
The first drawback, though, is that they are diverging into other areas without the focus needed to do data analytics well, which either takes focus away from the strengths of the LM tool or becomes a poor second fiddle that is used by the sales team as a primary USP.
Secondly, they miss out on lots of useful data, because data that doesn't fill a hole is purged. You could have 100 data fields from a data source, but if the tool only uses the IP address because that is what was missing, the other 99 go to waste, even though they may validate the primary (or secondary) source or contradict it; the contradiction may well be the correct data.
I have found and spoken to three specialist companies that resolve data issues, allowing me to increase the number of data sources and thus the useful information I can use. Two deal with inventory data and one deals with entitlement data; I will deal with entitlement first. (Again, please comment on this article with any other companies that work in this area.)
- Seal Software [Entitlement]
Seal Software is a tool that searches the network, people's drives, storage devices etc. (anywhere files can be stored) and analyses those files to determine whether they have the characteristics of a contract. It has a built-in engine that determines what contracts are, but you then feed the tool your procurement or contract data/database and educate it on what your organisation classes as a contract, and on specialist areas you want to extract from contracts (such as audit clauses).
I will do a detailed review of this product in a future article, but although this isn't a cheap tool (charged on the number of documents found, no matter how many are duplicates), it can be invaluable and could easily be cost neutral to larger organisations. It also has practical uses outside of SAM, such as commercial management and procurement, as well as a governance role: people cannot circumvent internal processes by signing SaaS contracts for small parts of the organisation without involving SAM/Procurement or the commercial teams, as the contract would appear within Seal Software as soon as the first draft arrives on the corporate network. If entitlement is a stumbling block for your organisation then I suggest you check them out.
Use the link below.
http://www.seal-software.com/
- Blazent [Inventory]
I had the good fortune to meet with Blazent a couple of years ago when I inquired about their software. Blazent's solution (in a nutshell) is an analytical engine that takes in all the data sources and produces what they call a golden record. This golden record is produced by building up each field, analysing all of the data for that field from each separate data source against rules and logic either inherent in the tool or built into it by the client. This is really powerful not just for a SAM function; it can then be used to populate other tools, as well as providing good-quality data for commercial teams to determine support, finance teams to calculate depreciation, or hardware asset management and service operations teams.
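To contrast this with the simple layering approach: a golden record engine can weigh all sources' values per field instead of letting one source always win. As a toy illustration only (this is not Blazent's actual logic), here is a per-field majority vote across sources, with hypothetical field values:

```python
from collections import Counter

# Toy illustration: the same host reported by three sources, with one
# source disagreeing on the OS version. All values are hypothetical.
sources = [
    {"hostname": "srv01", "os": "RHEL 8"},
    {"hostname": "srv01", "os": "RHEL 7"},
    {"hostname": "srv01", "os": "RHEL 8"},
]

golden = {}
fields = {f for record in sources for f in record}
for field in fields:
    # Count each non-blank value for this field and keep the most common one.
    votes = Counter(r[field] for r in sources if r.get(field) is not None)
    golden[field] = votes.most_common(1)[0][0]
```

A real engine would apply per-field rules and source trust weightings rather than a bare vote, but the key difference stands: every source's value contributes evidence instead of being purged.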
I was really impressed with this tool when I first saw it, and although it has been a couple of years since I assessed it, I am sure it has only improved with time.
Again, this is another tool that isn't particularly cheap, but it's well worth the investment: the time saved for the SAM team, technical teams not having to manage multiple data sources as actively, the additional quality of service that this allows SAM to deliver in conjunction with a good LM tool, and lastly, coupled with the value it drives into other parts of the business, the short- and long-term ROI could be compelling. Use the link to talk to them and find out more.
http://www.blazent.com/
- BDNA [Inventory]
I have had a number of conversations with BDNA over the past few years. Their solution, Technopedia, is more of a SaaS offering where you pass data to them and they return it in a normalised form. How they do this I have never got into in detail, and some of my discussions with them were around their LM tool capabilities, but I learned that SAP use them in their internal SAM team for a data analytical function. They do, though, have enhancement services that plug into the data returned to the end client, such as end-of-support information on software and hardware, as well as a catalogue that allows them to normalise the data and perform analytics, so it is more of a hybrid model.
That being said, the additional data can be very useful for an organisation, allowing it to choose the best time to upgrade hardware as well as software, which helps keep support costs down.
Again use the link to talk to people and find out more.
http://www.bdna.com/
My preference is to work with best-in-breed tooling/services. Now, you may not be able to implement best in breed in all areas, and compromising where tools cover multiple areas is still a better solution than trying to crank everything manually. Maybe I will get around to writing a proper review of some of these tools, from discovery through to reporting and everything in between, in my own time.
Please feel free to comment and add information that would assist all to get a better SAM service/function.
If your goal is inventory management of software or hardware, the industry standard for discovery is, I believe, in the mid-70% range, meaning about a quarter of your assets are "lost" (offline). Discovery for inventory management without HAM is like locking all your doors and windows but leaving the back door wide open.
Hello Daniel, interesting post. Keep in mind that most of the discovery tools you mention are not necessarily designed for software licensing, but rather for IT operations or maybe service management, which have quite different requirements to SLM. For example, these tools do not automatically discover installed IBM and Oracle server software, nor do they discover usage on the Oracle options and packs. Since server software is often the biggest pot of the software budget, this is critical information, and also cannot be consistently discovered manually. They also do a very poor job of discovering expensive engineering software such as ESRI and AutoCAD and usage on any applications. The idea that the more tools the better, should come with a large caveat. How will you know what data makes sense to use and what to toss? There are tools that purport to automatically tell you this, but if the data is not there, no amount of data normalization is going to create it either. Our suggestion is to find a discovery tool that works for your software licensing needs and tell the Ops guys to stick with their own systems. Regards, Sumin