Configuration management databases (CMDBs) are great tools for storing and managing information about the hardware and software configuration items needed to provide a service, but they are not designed to be asset management tools. Critically, they often fail to provide the requisite information and context during investigations.
Security analysts require access to multiple data sources about assets and users during investigations, particularly historical representations of how devices and users were characterised at the time of an incident. A centralised asset inventory built from diverse data sources, with lookback snapshots, is the most efficient way to answer questions at pivotal points in an investigation.
A useful inventory should be complete, comprehensive in its data characteristics, deduplicated, updated in near real time, and, importantly, easy to query. CMDBs fall short on many of these dimensions: they often lack key data elements, contain incorrect or stale information, and are time-consuming to navigate to a conclusion.
Without a central authoritative inventory consisting of diverse data sources, security teams must traverse the byzantine process of gaining access to multiple data silos, rationalising disparate naming conventions and independent variables, and stitching together timelines and asset context.
Until recently, the process for cobbling together this type of inventory was extremely difficult to achieve. Several approaches have been tried in the past, each fraught with a series of deficiencies.
Traditional methods include:
- Agents – a singular point of view, not installed uniformly across all assets, easily broken, corrupted or disabled
- Scanning – snapshot in time representing stale data, frequently missing ephemeral devices and providing a limited singular point of view of the asset
- Network discovery – costly to deploy and maintain, invasive, not uniformly deployed, blind spots with cloud and remote users, lacks context
The only method that overcomes all obstacles is aggregation. Aggregation solves the following problems:
Identify All Assets
Simply put, some assets are only known to one data source. Without incorporating multiple data sources, the inventory will be incomplete. Workloads and VDIs are only represented by the infrastructure in which they are spawned.
Unmanaged IoT devices may only be found in network sensor or vulnerability scanner sources. Mobile devices are generally represented only in an MDM. Cloud IaaS platforms are often the single source for storage buckets, virtual load balancers, and WAF devices.
Provide Context
Some data characteristics about a device are only found in one source. For instance, the delegation policy of a device will only exist in Active Directory. Firewall policies applied to a device as it traverses a network segment are only found in the firewall data set.
The IP lease for a device is only found in an IPAM silo. Patches and installed software often only exist in the endpoint management solution and open ports and vulnerabilities can only be found in the vulnerability scan data. Context is king during security investigations, but these critical details may be missed if teams are not tapping into all data sources.
Deduplicate Assets
Arriving at a unique inventory requires correlation between sources. Without it, the same asset will be represented multiple times in different sources, creating a time sink for investigators to dig through disparate, partially overlapping sources.
Aggregation of the sources into one platform provides the opportunity to normalise the data to a common naming convention which is used to correlate each data source. The result is one asset record or data plane for each unique asset, with all relevant characteristics in one place, and easily searchable.
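As a minimal sketch of that correlation step, the snippet below merges records from two hypothetical sources into one record per asset, keyed on a normalised hostname. The source names (`edr`, `scanner`), field names, and the normalisation rule are all illustrative assumptions, not any particular product's schema.

```python
from collections import defaultdict

def normalise_hostname(record):
    # Hypothetical normalisation rule: lower-case and strip the domain suffix,
    # so "LAPTOP-01.corp.example.com" and "laptop-01" correlate to one key.
    return record.get("hostname", "").lower().split(".")[0]

def merge_sources(*sources):
    """Correlate records from several (name, records) sources into one
    record per unique asset, preserving which source said what."""
    merged = defaultdict(dict)
    for source_name, records in sources:
        for record in records:
            key = normalise_hostname(record)
            if key:
                merged[key][source_name] = record
    return dict(merged)

# The same laptop seen by an agent and by a vulnerability scanner:
edr = [{"hostname": "LAPTOP-01.corp.example.com", "agent_version": "7.2"}]
scanner = [{"hostname": "laptop-01", "open_ports": [443]}]
inventory = merge_sources(("edr", edr), ("scanner", scanner))
# One searchable record for "laptop-01" holding both sources' fields.
```

Keeping each source's record nested under its own name, rather than flattening immediately, preserves provenance for the deconfliction step that follows.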
Deconflict Data
Arriving at the truth for an asset is difficult. Each source uses different naming conventions for each field of data. Each field is also populated in non-standard ways. Certain data elements are simply best guesses or calculated values based on incomplete variables available to the source technology.
Uncredentialed vulnerability scan data is filled with varying degrees of probabilities, leaving data efficacy unreliable and difficult to trust. Network-based sensors make best guess approximations based on limited insight into the asset. These differences result in conflicting answers for the same data element. Aggregate normalised data allows for re-aggregation and deconfliction of the sometimes wildly divergent answers from different sources.
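One common way to deconflict, sketched below, is a per-field source precedence: for each data element, trust the most authoritative source that has a value. The precedence table, source names, and field names here are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical precedence: which source to trust first for each field.
# An agent's OS reading beats an uncredentialed scanner's best guess;
# IPAM is treated as authoritative for IP leases.
FIELD_PRECEDENCE = {
    "os": ["mdm", "edr", "scanner"],
    "ip_address": ["ipam", "scanner"],
}

def deconflict(asset_records):
    """asset_records maps source name -> that source's record for one asset.
    Returns one resolved value per field, following the precedence table."""
    resolved = {}
    for field, precedence in FIELD_PRECEDENCE.items():
        for source in precedence:
            value = asset_records.get(source, {}).get(field)
            if value is not None:
                resolved[field] = value
                break
    return resolved

records = {
    "scanner": {"os": "Windows 10 (guess)", "ip_address": "10.0.0.5"},
    "edr": {"os": "Windows 11"},
}
resolved = deconflict(records)
# -> {'os': 'Windows 11', 'ip_address': '10.0.0.5'}
```

The scanner's OS guess loses to the agent's reading, but its IP address survives because no more-trusted source supplied one.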
Frequently Update Data
By the nature of what they do, security teams live in a near-real-time world. When an alert fires, they need the most current information about an IP address or device: the latest open ports, vulnerabilities, device users, and installed software.
Change is the only constant for an IT asset and the inventory must account for these changes in near real-time, otherwise investigators will draw faulty conclusions.
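The lookback requirement can be sketched as a series of timestamped snapshots, queried for the state at or before the incident time. This is a minimal illustration assuming snapshots are recorded in chronological order; the class and field names are hypothetical.

```python
from bisect import bisect_right
from datetime import datetime

class AssetHistory:
    """Timestamped snapshots of one asset, so an investigator can ask
    'what did this device look like at the time of the incident?'"""

    def __init__(self):
        self._times = []      # snapshot timestamps, appended in order
        self._snapshots = []  # state captured at each timestamp

    def record(self, ts, state):
        self._times.append(ts)
        self._snapshots.append(state)

    def as_of(self, ts):
        # Latest snapshot taken at or before the given time, else None.
        idx = bisect_right(self._times, ts) - 1
        return self._snapshots[idx] if idx >= 0 else None

history = AssetHistory()
history.record(datetime(2022, 6, 1), {"user": "alice", "ip": "10.0.0.5"})
history.record(datetime(2022, 6, 10), {"user": "bob", "ip": "10.0.0.9"})
incident_view = history.as_of(datetime(2022, 6, 5))
# -> {'user': 'alice', 'ip': '10.0.0.5'}: how the device looked mid-incident,
# even though its user and IP have since changed.
```

Without this kind of history, an investigator querying today's inventory would attribute the incident-era IP address to the wrong user.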
Conducting investigations is a necessary and frequent security function. A complete, comprehensive, deduplicated, and always up-to-date inventory drives efficiency, lowering incident response times and, ultimately, the associated costs. Aggregated, diverse source data is the key ingredient to achieving these results.
By Patrick Kelley, VP of Enterprise Sales at Axonius