June 25, 2018
by Sanji Bhal, Director, Marketing & Communications, ACD/Labs
Chemical R&D generates a deluge of instrumental analytical data on a daily basis. As critical R&D decisions and regulatory submissions are based on this data, the need for quality data management is more important than ever before. A lot has changed since the days when paper notebooks were the leading data management 'platform' among scientists. Advancements in research and instrument hardware continue to increase the amount of data we are able to produce and process. Standardized data formats are increasingly discussed as a way to help ensure the efficient delivery of data throughout organizations and with partners. However, these formats are not consistent across organizations, or across the industry as a whole.
I sat down with Graham McGibbon, director of strategic partnerships at ACD/Labs, to discuss his outlook on the industry and the pressing need for better management of analytical chemistry data in R&D.
What obstacles do organizations face when managing their data?
The variety of analytical data is the most significant obstacle organizations face, though its complexity and sheer volume also contribute to the challenge. Between the days of paper notebooks and modern informatics solutions, the industry experienced major technological development. The heterogeneity of analytical data formats became a natural by-product of these advancements, as instrument vendors created proprietary data constructs to store the bytes-to-petabytes of analytical data generated by their hardware. R&D organizations have invested in various informatics systems to help manage the data generated in R&D for IP purposes, to meet regulatory requirements, and to support their scientists' daily decision-making, but few software systems have been designed to natively manage the entire ecosystem of analytical data across the full lifecycle of R&D. While the proliferation of data formats fuels the desire to integrate them all into one standard, this remains a challenge for many organizations. When various instruments and techniques are required for product and process characterization, it becomes increasingly important to be able to assemble key data and related interpretations into standard presentation interfaces for review and decision-making, without compromising access to all the underlying analytical data.
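To make the idea of normalizing heterogeneous vendor data concrete, here is a minimal sketch of a common record type that different instrument outputs could be mapped into. The schema and names (`AnalyticalRecord`, `normalize_vendor_file`) are purely illustrative assumptions; real platforms and ontologies, such as those developed by ACD/Labs or the Allotrope Foundation, are far richer, and parsing a vendor's proprietary binary format is vendor-specific and omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticalRecord:
    """Hypothetical common record for analytical data of any technique."""
    sample_id: str
    technique: str            # e.g., "LC-MS", "NMR", "IR"
    vendor_format: str        # name of the original proprietary format
    metadata: dict = field(default_factory=dict)
    data_points: list = field(default_factory=list)  # normalized (x, y) pairs

def normalize_vendor_file(sample_id: str, technique: str,
                          vendor_format: str, raw_pairs: list) -> AnalyticalRecord:
    """Map an already-parsed vendor file into the common record.

    We assume raw_pairs has been extracted from the proprietary file
    as (x, y) tuples by a vendor-specific reader.
    """
    return AnalyticalRecord(
        sample_id=sample_id,
        technique=technique,
        vendor_format=vendor_format,
        data_points=[(float(x), float(y)) for x, y in raw_pairs],
    )

# Two instruments, two source formats, one queryable shape:
record = normalize_vendor_file("S-1042", "LC-MS", "vendor_a.raw",
                               [(0.5, 120.0), (1.0, 340.0)])
print(record.technique, len(record.data_points))  # LC-MS 2
```

The point of such a layer is that downstream review and decision-making tools only ever see `AnalyticalRecord`, regardless of which instrument produced the data.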
What are the benefits of data standardization in analytical chemistry?
The benefits of standardizing any type of data are ensuring data quality through consistency, supporting data integrity, and improving data shareability (i.e., collaborative research). For analytical data that is varied but often used together to reach decisions and move projects forward, being able to bring the heterogeneous data together (by technique and format) would allow scientists to effectively gain insights that lead to corporate scientific intelligence. From the lab to the boardroom, standardized data maintains the 'data-to-information-to-knowledge' lineages that enable scientists and managers alike to make strategic decisions, limit risks, and effectively share data within and outside their organizations. Take food and beverage companies dealing with pesticides, for example. If a company finds out that a pesticide is becoming regulated and must alter its product development process, unstructured data leads to a drawn-out hazard assessment of its commercial products and raw material supply chains, driven by the re-gathering and re-analyzing of samples. If the company had a standardized, central data system in place, this process would be a simple query: they could easily arrive at the "point of pesticide" in their product development and show regulators they are compliant with new laws.
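The pesticide scenario above can be sketched with a toy example. Assuming a hypothetical standardized table of analytical results (the table, column names, analyte, and threshold are all illustrative, not any real company's schema), the hazard assessment collapses from re-sampling and re-analysis into a single query:

```python
import sqlite3

# In-memory stand-in for a central, standardized analytical data store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE analyses (
    sample_id  TEXT,
    product    TEXT,
    stage      TEXT,
    analyte    TEXT,
    amount_ppm REAL)""")
conn.executemany(
    "INSERT INTO analyses VALUES (?, ?, ?, ?, ?)",
    [("S-001", "Juice A", "raw material",  "chlorpyrifos", 0.03),
     ("S-002", "Juice A", "final product", "chlorpyrifos", 0.01),
     ("S-003", "Juice B", "final product", "glyphosate",   0.00)])

# When a pesticide becomes regulated, finding the "point of pesticide"
# across products and supply-chain stages is one query, not a re-analysis:
rows = conn.execute(
    "SELECT product, stage, amount_ppm FROM analyses "
    "WHERE analyte = ? AND amount_ppm > ?",
    ("chlorpyrifos", 0.02)).fetchall()
print(rows)  # [('Juice A', 'raw material', 0.03)]
```

With unstructured or siloed data, answering the same question means locating, re-gathering, and re-measuring samples product by product.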
How has an evolving regulatory landscape influenced the need for data standardization?
Underpinning every regulatory submission and decision is data generated by a scientist in the laboratory. The necessity of accurate data is recognized at both the beginning and the end of the data life cycle, leaving no room for error or risk. Organizations such as the FDA increasingly require drug manufacturers and pharmaceutical companies to develop drugs that are both efficacious and safe, with quality designed into the development and manufacturing process. Quality by testing is no longer an acceptable practice. Data are at the essence of well-designed control strategies for quality processes and products. The data supplied for characterizing those drugs must meet guidelines according to specific data integrity standards. Failure to meet these rising standards can result in serious consequences, including product recalls.
How are organizations contributing to improved data management?
Today more than ever, organizations are actively seeking ways to manage their data more effectively. Ten years ago, as MS product manager at ACD/Labs, my colleagues and I would often go into a customer or prospect meeting and talk about databasing and managing knowledge, and while the customer would theoretically buy into the idea, few were willing to go through the pain of organizing data, tweaking workflows, and changing daily routines to realize the benefits. This is an important thing to keep in mind: the rewards of effectively managed, accessible data can be immediate and may continue into the future, but much work has to be done to get there. I have lost count of the number of conversations I've had recently with people actively seeking data management tools. It seems to have taken a long time for analytical data to be recognized as an essential source of decisions that requires structured management. As I mentioned before, the variety and volume of analytical data make it a special challenge.
One example of the contribution of R&D organizations, in this case particularly Pharma, is the formation of the Allotrope Foundation. The Allotrope Foundation is an international association of pharmaceutical and biotech companies dedicated to building a multi-part data and software architecture to improve the handling of laboratory data. While ACD/Labs develops informatics solutions that enable the management of data from a variety of sources, the Foundation is empowering a broader standardization of analytical data ontologies, and our partnership embodies our commitment to continuously improving our products to meet the challenges of chemical and pharmaceutical R&D today. I would be surprised if the efforts of the industry consortium working together didn't help identify additional possibilities for a future ecosystem of improved analytical data standards, and we're pleased to be members of this community for years to come.
How is ACD/Labs contributing to this conversation? How is ACD/Labs helping customers?
The one-and-done data life cycle, where data is captured and frozen in a variety of different formats, is a major inefficiency plaguing the analytical chemistry industry. Data is difficult to search and retrieve and nearly impossible to manipulate and reanalyze, which significantly increases the cost and time of chemical R&D. For the past 20 years, ACD/Labs has developed a vendor-agnostic platform that enables the management of analytical data coming from a variety of proprietary and open data formats. We're equipping users with an effective system to process and interpret data to increase knowledge and reduce risk, ultimately fulfilling the widespread industry need for the efficient delivery of data throughout the organization.