In nearly every sector, companies are overhauling their data infrastructure to meet emerging industry, regulatory, and customer demands. In the financial services industry, banks, asset managers, brokerages, and hedge funds are all engaging in strategic transformation initiatives designed to enhance their ability to consume available information, synthesize insights, and improve decision making.
The insurance industry is no exception. As evidenced by the rise of terms like “Insurtech,” these companies are maturing digitally to extend traditional business lines and pursue new opportunities. However, much of this focus is applied to the insurance, or liability, side of the business. Phone apps, modern policy structures, and competitive pricing are consumer-facing innovations, built on technology and data, aimed at driving increased policy revenues through better client segmentation and sales channel diversification.
Just as important for an insurance company, if not more so, is the investment management side of the business. The entire insurance business model rests on the ability of these companies to deploy capital in a manner that captures a yield spread above projected liabilities. Yet this side of the business is still plagued by data and technology challenges that inhibit the accurate, timely, and complete flow of information to Chief Investment Officers and their teams. As a result, firms struggle to make informed, optimized investment decisions.
Improved information, industry-wide, has the potential to alter the way a significant pool of assets, all tied to future liabilities, is invested. Improved investment decisions, fueled by data best practices, can increase the yield on this capital, widening spreads over liabilities and thereby strengthening the capital position of these insurance companies. It also has the potential to increase these companies’ ability to compete on price, delivering economic benefits back to customers.
This eBook will explore five of the challenges that insurance companies face in relation to their investment data and offer thoughts on possible solution designs to overcome them.
Chart 1: Historical U.S. Insurance Industry Total Cash and Invested Assets, Year-End 2012-2021 (includes affiliated and unaffiliated investments). Source: https://content.naic.org/sites/default/files/capital-markets-special-reports-asset-mix-ye2021.pdf
Data Rules for IBOR/ABOR
Integration of the firm’s Investment Book of Record (IBOR) with its Accounting Book of Record (ABOR) is consistently one of the most complex and important data management challenges for insurance investment organizations. Properly unified information from each of these systems is critical to investment decision-making by front-office personnel.
IBOR and ABOR systems occupy their own space within the insurer’s technology stack and cater to different user groups with distinct needs, knowledge, and skill sets. IBOR systems provide portfolio managers with a robust security master enriched with market data, security analytics, risk calculations, compliance, and trading optimization tools. ABOR systems emphasize tax lot calculations such as book values, accretion, and gains/losses. Depending on each system’s capabilities, insurance companies must also account for regulatory reporting, NAIC ratings, capital charges, and agency reporting, all of which require an integration of ABOR and IBOR data.
Integration of these data sets is a common obstacle for insurance companies given the varying levels of granularity and the lack of a one-to-one relationship across data sources. Many-to-one and one-to-many data relationships require specific methodologies for averages, weighted averages, and other aggregation calculations. Standard aggregation methodologies are not well defined across the industry; therefore, firms must define their own policies and programmatically automate the calculations to ensure those policies are adopted at an enterprise level within the organization.
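To illustrate one possible approach, the sketch below rolls granular ABOR tax lots up to the position level, attaches IBOR analytics, and computes a market-value-weighted portfolio duration. The field names, identifiers, and figures are illustrative assumptions, not a reference to any specific vendor’s schema or a prescribed methodology.

```python
# Minimal sketch: rolling up ABOR tax lots (many) to the IBOR position level (one)
# and computing a market-value-weighted duration. All values are illustrative.
from collections import defaultdict

tax_lots = [
    # (position_id, book_value, market_value)
    ("POS-001", 1_020_000.0, 1_015_500.0),
    ("POS-001",   498_000.0,   501_250.0),
    ("POS-002",   750_000.0,   748_900.0),
]

ibor_analytics = {
    # position_id -> security-level duration from the IBOR (assumed field)
    "POS-001": 6.42,
    "POS-002": 3.17,
}

def aggregate(tax_lots, ibor_analytics):
    """Sum accounting values per position and attach IBOR analytics,
    producing one consolidated row per position."""
    totals = defaultdict(lambda: {"book_value": 0.0, "market_value": 0.0})
    for position_id, book_value, market_value in tax_lots:
        totals[position_id]["book_value"] += book_value
        totals[position_id]["market_value"] += market_value

    return [
        {"position_id": position_id, **values,
         "duration": ibor_analytics.get(position_id)}
        for position_id, values in totals.items()
    ]

if __name__ == "__main__":
    rows = aggregate(tax_lots, ibor_analytics)
    for row in rows:
        print(row)

    # Market-value-weighted portfolio duration across consolidated positions.
    total_mv = sum(r["market_value"] for r in rows)
    weighted_duration = sum(r["duration"] * r["market_value"] for r in rows) / total_mv
    print(f"Portfolio MV-weighted duration: {weighted_duration:.2f}")
```

The key design point is that the aggregation rule (here, a simple market-value weighting) is written down once and applied programmatically, rather than re-implemented in spreadsheets or reports.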
In cases where both systems do exhibit matched granularity, there is still a challenge to appropriately map tax lots between systems. This is just as much a workflow and process consideration as it is a technology issue. Proper procedures must be enacted, governed, and followed to ensure that required data is entered into respective systems at each stage of the trade lifecycle, and that data flows appropriately and consistently between systems.
There are many options to address these considerations, depending on the resources available to the organization. The crux of the discussion typically revolves around the need for business logic: how it should be constructed and maintained, and where it should be deployed within a larger data pipeline. In general, the preferred approach centralizes all the necessary logic in a single location that has access to all the necessary data inputs. In many organizations, this takes the form of a data warehouse, which has the benefit of codifying centralized logic into data structures, archiving all inputs and outputs, providing robust auditing, and supporting downstream processes that deliver files or populate reports.
Other options, such as encapsulating business logic in the reporting layer, desktop tools, or manual tasks, are sub-optimal in almost any scenario. One key capability enabled by centralizing business logic in a holistic data warehouse is the robust maintenance of data source hierarchies. ABOR and IBOR each have broad sets of data unique to their system, but there is considerable overlap between the two as well. Because some data fields may exist in both systems, it is imperative to have a mechanism that dictates the primary, secondary, tertiary, and subsequent sources for each data element. Records that are cleansed and processed in this manner are typically referred to as “gold copy” data.
Diagram: Simplified investment data flows, exhibiting the differentiation of data sets, the need for a “hub”, in this case a data warehouse, and the requirement to further disseminate that data downstream.
In insurance portfolios, where accounting drives much of the investment process, ABOR is likely to be the primary source for most inputs covered by the system, while IBOR would be the primary source for the rest. In a consolidated dataset from these two sources, a single data element could have lineage back to either system. When users work with this data, they should understand the provenance of these outputs, elevating the need for reporting metadata alongside any report data set.
Example: This example shows how a data source hierarchy can be established for any field (e.g. Duration and Rating), with the hierarchy value indicating the order of evaluation.
The primary source for Duration is IBOR, and it has provided a value, so that value is selected. ABOR is the primary choice for Rating, but no value is available there, so the logic moves to the second choice, which is IBOR.
The source of the data values can change over time depending on the availability of data. It is critical to report the metadata explaining the lineage of each field, as illustrated in the third table.
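A minimal sketch of how this field-level hierarchy and lineage reporting could be implemented is shown below. The system names, field names, and values simply mirror the illustrative Duration and Rating example above and are not drawn from any particular platform.

```python
# Field-level source-hierarchy resolution with lineage metadata.
# Duration prefers IBOR; Rating prefers ABOR but falls back to IBOR
# when ABOR has no value. All names and values are illustrative.

HIERARCHY = {
    "duration": ["IBOR", "ABOR"],   # IBOR is the primary source for Duration
    "rating":   ["ABOR", "IBOR"],   # ABOR is the primary source for Rating
}

source_records = {
    "IBOR": {"duration": 6.42, "rating": "A+"},
    "ABOR": {"duration": None, "rating": None},  # Rating unavailable in ABOR
}

def resolve(field, hierarchy, records):
    """Walk the hierarchy in order and return (value, source) for the
    first source that supplies a non-missing value."""
    for source in hierarchy[field]:
        value = records.get(source, {}).get(field)
        if value is not None:
            return value, source
    return None, None

gold_copy, lineage = {}, {}
for field in HIERARCHY:
    value, source = resolve(field, HIERARCHY, source_records)
    gold_copy[field] = value
    lineage[field] = source   # metadata reported alongside the value

print(gold_copy)  # {'duration': 6.42, 'rating': 'A+'}
print(lineage)    # {'duration': 'IBOR', 'rating': 'IBOR'}
```

Reporting the lineage dictionary alongside the gold copy values is what allows users to see, for any given business date, which system actually supplied each field.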
External Managers
Another area that complicates the insurance investment data ecosystem is the use of external managers for targeted sleeves or asset classes within the portfolio. Outsourcing asset management to external managers does not eliminate the data requirements for those assets. In fact, new workflows must be developed to integrate data sets arriving from those external managers into the in-house systems of the insurer.
Each of these external managers will have their own respective IBOR and ABOR systems supported by internal workflows and policies. Data extracted from those systems will be provided in unique formats and often at inconvenient times relative to the daily processing of an internally managed book. Furthermore, insurers often have many external managers, significantly increasing the number of inbound data sources, and further amplifying the challenge to create a single version of the truth.
Rarely will a single IBOR or ABOR system be prepared to deal with the confluence of all these data sets. Often, this scenario requires a centralized master hub that cleans, standardizes, and unifies the data to provide an enterprise perspective prior to distribution to downstream systems and data consumers. Lack of an enterprise view leads to data quality issues, inconsistent reporting, and ultimately legal and regulatory risk.
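As a simplified illustration, the sketch below shows one way a hub could translate manager-specific extracts into a canonical position schema while retaining lineage. The manager names, column mappings, and field names are hypothetical assumptions rather than any particular manager’s file layout.

```python
# Hub-style normalization step: each external manager's extract uses its own
# column names, and the hub maps them into one canonical position schema
# before downstream distribution. All names and values are illustrative.
from datetime import date

CANONICAL_FIELDS = ["portfolio_id", "security_id", "quantity", "market_value", "as_of"]

# Per-manager mapping from the manager's column names to the canonical schema.
MANAGER_MAPPINGS = {
    "ManagerA": {"Acct": "portfolio_id", "CUSIP": "security_id",
                 "Qty": "quantity", "MV_USD": "market_value", "Date": "as_of"},
    "ManagerB": {"portfolio": "portfolio_id", "identifier": "security_id",
                 "par": "quantity", "marketValue": "market_value", "asOfDate": "as_of"},
}

def normalize(manager, raw_rows):
    """Translate a manager-specific extract into canonical position records."""
    mapping = MANAGER_MAPPINGS[manager]
    normalized = []
    for raw in raw_rows:
        record = {canonical: raw.get(source) for source, canonical in mapping.items()}
        record["source_manager"] = manager      # retain lineage for auditing
        missing = [f for f in CANONICAL_FIELDS if record.get(f) is None]
        if missing:
            raise ValueError(f"{manager} record missing fields: {missing}")
        normalized.append(record)
    return normalized

rows_a = [{"Acct": "EXT-01", "CUSIP": "912828ZQ6", "Qty": 1_000_000,
           "MV_USD": 987_500.0, "Date": date(2023, 12, 29)}]
print(normalize("ManagerA", rows_a))
```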
This is another case where overlapping data elements from multiple systems may require hierarchical approaches to define the “golden copy.” The same security may be held in both an externally and an internally managed portfolio, potentially producing conflicting security setup information. Since security terms and conditions should be viewed consistently across all held portfolios, a robust data source hierarchy with master record management and data translation capabilities is needed to derive a single version of the truth for each data domain, including portfolios, securities, positions, transactions, and other ABOR/IBOR-specific data elements.
The data hub’s responsibility is to handle both the variety of file formats and the resolution of their contents upstream of the insurer’s primary investment systems, ensuring that the data is transformed into a context aligned with the insurer’s view of the world.
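One way such record-level mastering might work is sketched below: when the same security arrives from an internal system and from external managers, a ranked source preference determines which record becomes the golden copy. The source names, identifiers, and ranking are illustrative assumptions, not a prescribed policy.

```python
# Record-level mastering of security terms and conditions: keep one golden
# record per security_id based on a ranked source preference.
# Source names, securities, and the ranking itself are illustrative.

SOURCE_RANK = {"INTERNAL_IBOR": 0, "ManagerA": 1, "ManagerB": 2}

inbound_securities = [
    {"security_id": "912828ZQ6", "coupon": 0.625, "maturity": "2025-05-31",
     "source": "ManagerA"},
    {"security_id": "912828ZQ6", "coupon": 0.625, "maturity": "2025-05-31",
     "source": "INTERNAL_IBOR"},
    {"security_id": "38141GXZ2", "coupon": 3.500, "maturity": "2028-01-15",
     "source": "ManagerB"},
]

def master_securities(records):
    """Keep the highest-ranked source's record for each security_id."""
    golden = {}
    for record in records:
        key = record["security_id"]
        current = golden.get(key)
        if current is None or SOURCE_RANK[record["source"]] < SOURCE_RANK[current["source"]]:
            golden[key] = record
    return golden

for security_id, record in master_securities(inbound_securities).items():
    print(security_id, "->", record["source"])   # internal record wins where present
```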