Data Quality: An Achilles' heel in Pharma

By DataZymes Team

30 October, 2017

Pharma companies have spent millions of dollars on initiatives to improve master data management (MDM) and data quality processes. Despite these efforts, the problem is far from solved. If anything, it is growing: newer, larger and more complex datasets keep arriving on one hand, while on the other, business users adopting more sophisticated analytical approaches demand ever more accurate and clean data.

This situation stems from a combination of factors: technology and process weaknesses, coupled with the diverging motivations of IT and business teams. It is further aggravated by the lack of data exchange standards, limited interoperability of data across systems, and an explosion in the volume and complexity of datasets such as patient data, claims data and EMR/EHR sources.

Inaccuracy and incompleteness of account masters pose a huge threat to segmentation and targeting in Pharma. This flows down to incentive compensation processes where sales need to be tracked.

The problem is more pronounced in the Specialty Pharma market, a rapidly growing segment that makes up over 20% of overall drug spend today and is poised to grow to 50% in the years to come. This emerging segment comes with its own set of challenges, especially around data collection and consistency. The complex distribution model for Specialty drugs, which involves Specialty Pharmacies, Alternative Sites of Care, Specialty Physicians and distributors, has resulted in multiple points of data generation with no standards for data exchange.

The specialty distribution model also comprises full-line wholesalers (such as Amerisource and McKesson), specialty distributors (such as ASD Healthcare and Curascript SD), specialty pharmacy chains large and small, hospital pharmacies and physician clinics. While designing a specialty distribution model is important for the successful launch and continued success of a specialty drug, setting the data strategy for specialty drugs is equally important.

A sound data strategy across this distribution network is one of the key success drivers for a Specialty Pharma company.

The data collection points in the Specialty data distribution model include syndicated data providers, infusion centers, wholesalers/specialty pharmacy distributors, specialty pharmacy hubs, individual specialty pharmacies, physicians, co-pay partners and call centers. Mastering data across so many entities and maintaining data quality is a huge challenge for all Pharma companies.

Analytics maturity is therefore limited in many Specialty Pharma companies, owing to these challenges in data management and quality. Data ownership usually sits with either the IT teams or the business teams, depending on how the organization is structured. IT teams, however, are not specialized in handling specialty data. Despite the transformation from Primary Care to Specialty Pharma, IT teams continue to use trusted and tested legacy systems and processes that have been developed and perfected on Primary Care data. These systems and processes often fall short when handling specialty data.

There are many differences between Specialty and Primary Care data that warrant different processes. One key difference is the need in Specialty Pharma for an account master detailed down to the suite-number level, while Primary Care relies more on physician masters. Physician mastering is relatively easy, since standard identifiers such as NPI, state license and DEA are readily available. With account mastering, the only identifiers, when present at all, are HIN and DEA. Even then their accuracy is in doubt, since it depends on the data collection points – the distributors in this case. In Specialty Pharma, every vial counts: these are high-value, low-volume products, and incentivizing sales reps for each vial is crucial.
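As an illustration, account matching in the absence of reliable identifiers often falls back on fuzzy comparison of name, address and suite number. The sketch below is a minimal, hypothetical Python example (the field names and the 0.85 threshold are assumptions, not a production rule): it prefers a hard identifier (HIN, then DEA) when one is present, and otherwise scores string similarity.

```python
from difflib import SequenceMatcher

def normalize(record):
    """Build a comparable key from name, address and suite number."""
    parts = (record.get("name", ""), record.get("address", ""), record.get("suite", ""))
    return " ".join(p.strip().lower() for p in parts if p)

def match_account(incoming, master, threshold=0.85):
    """Return the best master record for an incoming account record.

    Prefer a hard identifier (HIN, then DEA) when one is present;
    otherwise fall back to fuzzy name/address/suite similarity.
    Field names and threshold are illustrative assumptions.
    """
    for id_field in ("hin", "dea"):
        code = incoming.get(id_field)
        if code:
            for rec in master:
                if rec.get(id_field) == code:
                    return rec
    key = normalize(incoming)
    best, best_score = None, 0.0
    for rec in master:
        score = SequenceMatcher(None, key, normalize(rec)).ratio()
        if score > best_score:
            best, best_score = rec, score
    return best if best_score >= threshold else None
```

In practice the threshold and the attributes compared would be tuned per data source, and uncertain matches would be routed to a data steward rather than auto-merged.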


One way companies try to cope with this problem is by relying on their sales teams. This creates a new problem: maintaining accuracy and defining trumping rules become exponentially more complex, which means a highly manual stewardship process, at least until rules and logic are established for mastering account-level data. Another issue with this approach is that contracting teams are constantly challenged with inaccurate data. Contracting teams must classify accounts as 340B or non-340B depending on the type of patients visiting these accounts for infusion. This requires accurate patient flow data and harmonization of data across 867, DDD, CARS (or other accounting systems) and ex-factory feeds. The 340B versus non-340B classification is a serious compliance requirement, and errors in reporting this data can lead to compliance violations.
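Trumping rules are, at heart, per-attribute source-precedence rules: when sources disagree on an account attribute, the highest-ranked source wins. A minimal survivorship sketch, with illustrative source names and an assumed (not recommended) precedence order, might look like this:

```python
# Source precedence per attribute: earlier sources "trump" later ones.
# Source names and ordering here are illustrative assumptions only.
TRUMP_ORDER = {
    "address": ["sales_rep", "distributor_867", "syndicated"],
    "class_of_trade": ["syndicated", "distributor_867", "sales_rep"],
}

def survivorship(attribute, values_by_source):
    """Pick the surviving value for one attribute from competing sources.

    Returns (value, winning_source), or (None, None) if no trusted
    source supplied a value.
    """
    for source in TRUMP_ORDER.get(attribute, []):
        value = values_by_source.get(source)
        if value:
            return value, source
    return None, None
```

The point of the sketch is the shape of the problem: every new source multiplies the conflicts to adjudicate, which is why stewardship stays manual until such rules are codified.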

The need for accurate, mastered data also extends to the managed care, medical affairs and sales teams. Specialty Pharma claims are usually medical claims, since most of these products must be administered by a provider and are buy-and-bill products – which is where the Hubs come into the picture. Hub data is another source of external data. Medical affairs, managed care and sales teams need to view integrated data across claims (SHA or IMS), Hub and sales (DDD/867/852) sources.
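Conceptually, that integrated view is a join of each feed on the mastered account key. The following sketch assumes each feed has already been keyed by a mastered account ID; the feed and field names are placeholders, not any vendor's schema:

```python
def integrated_view(accounts, claims, hub, sales):
    """Assemble one row per mastered account, joining claims, Hub and
    sales feeds. Each feed is a dict keyed by account_id; names are
    illustrative placeholders."""
    return [
        {
            "account_id": account_id,
            "name": account.get("name"),
            "claims": claims.get(account_id, []),
            "hub_cases": hub.get(account_id, []),
            "sales": sales.get(account_id, []),
        }
        for account_id, account in accounts.items()
    ]
```

The join itself is trivial; the hard part, as the preceding paragraphs argue, is producing the mastered account key that all three feeds can agree on.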

Specialty Pharma companies are currently handling these challenges through various methods and approaches, most of which are temporary fixes bound to create larger scalability issues. Companies with high IT spend are investing in more IT manpower and in platforms such as Reltio and VEEVA, while companies with low IT spend are finding workarounds and using external integration and mastering vendors such as LiquidHub and ValueCentric.

The problem with these approaches is that they are temporary, short-term fixes. They either multiply the number of vendors – and with them the challenge of managing internal and external syndicated data sources across those vendors – or create a heavy dependency on a single vendor that becomes cost-prohibitive over time. These challenges are here to stay until companies tackle them in a more radical manner, planned and executed with the long term in mind.