Conference

ASMS – 71st Conference on Mass Spectrometry and Allied Topics

June 4-8, 2023

George R. Brown Convention Center, Houston, TX, USA

Booth #: 809

Join us at the 71st ASMS in Houston, Texas, and see how our software tools are helping mass spectrometrists do more with their analytical data.

Join us for breakfast to hear:

  • How scientists like you are using ACD/Labs mass spectrometry software to curate centralized databases to simplify and accelerate their structure elucidation and verification processes.
  • What’s new and coming soon in ACD/Labs’ software for mass spectrometry and chromatographic data handling.
  • How our browser-based xC/UV/MS data processing software delivers simple, accessible data processing for open-access labs and casual users.

Browse our poster presentations, sign up for the Breakfast Seminar, and book a meeting with one of our expert staff to discuss current challenges and how our software can support your workflows.

Symposium Agenda

Monday, June 5th, 7:00 - 8:15 AM   Room 371 CF
7:00 AM
Breakfast Buffet
7:15 AM
Welcome
7:20 AM
Evolution of Spectrus Data Management: Legacy to Cloud

Richard Lee, Director, Core Technology Innovation and Informatics Strategy, ACD/Labs

7:30 AM
Small Molecule Databasing, Now and for the Future

Sarah Robinson, Senior Principal Scientist, Genentech

7:45 AM
Increasing Accuracy in Non-Targeted Analysis: from Monitoring Data Quality to Reducing False Positives

Christine O'Donnell, Staff Fellow, FDA

8:00 AM
Enhancing Performance and Expanding Capabilities: A Look at Our Latest Mass Spectrometry Software Updates

Anne Marie Smith, Product Manager, Mass Spectrometry and Chromatography

8:10 AM
Open Q&A and Closing Remarks

Conference Agenda

Poster Sessions
Tuesday, June 6th, 2023  
10:30 AM-12:30 PM & 1:30-2:30 PM
Quantitative Workflow for xC/UV/MS Data using a Single Vendor-Neutral Interface
Topic area: Drug Discovery: Qualitative and Quantitative Analysis II

Anne Marie Smith, Sofya Chudova, Vitaly Lashin
Introduction
Quantitation by xC/UV/MS is a critical and ubiquitous workflow spanning service labs and the chemical, food, environmental, and biotechnology industries. All major instrument vendors have their own applications supporting their specific systems and proprietary data formats. These applications work well for their respective systems; however, this is a critical limitation in modern multi-vendor laboratories, where chemists must be familiar with several applications and interfaces. Here, we present a vendor-neutral, platform-agnostic application for the quantitation of xC/UV/MS data that supports all major data formats in a single interface.
Methods
Quantitation experiments can be designed within the application. Once data is acquired, setup involves selecting the instrument type (xC/UV/MS), sample types (calibration, samples, blanks, replicate samples, system suitability), and compound(s) for quantitation. The application processes the data based on peak integration and processing parameters, and quantifies the compound based on the mass of the compound (XIC extraction), the chromatogram at a specified wavelength, or an auto-extracted wavelength (DAD). The processed data, including individual sample files and calibration results, are automatically saved into a database. Any changes made by the user are also recorded in an audit trail to ensure data integrity. The results can be viewed in customizable reports including chromatograms, spectra, tabulated results, and a calibration plot.
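To illustrate the XIC-extraction step mentioned above, here is a minimal, hypothetical sketch (not the product's implementation): intensities within an m/z window around the target mass are summed per scan to build the extracted-ion chromatogram used for quantitation. The scan data and tolerance below are invented for illustration.

```python
# Sketch only: build an extracted-ion chromatogram (XIC) by summing
# intensities within an m/z tolerance of the target mass, scan by scan.
def extract_xic(scans, target_mz, tol=0.01):
    """scans: list of (retention_time, [(mz, intensity), ...])."""
    xic = []
    for rt, peaks in scans:
        intensity = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
        xic.append((rt, intensity))
    return xic

# Hypothetical centroided scan data:
scans = [
    (0.10, [(180.063, 150.0), (181.067, 20.0)]),
    (0.12, [(180.064, 900.0), (200.100, 35.0)]),
    (0.14, [(180.062, 300.0)]),
]
print(extract_xic(scans, 180.0634, tol=0.01))
```

The resulting (retention time, intensity) trace is what a peak-detection and integration routine would then operate on.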
Preliminary Data  
All major instrument vendors have developed their own chromatography instruments and supporting software packages, in which quantitation is a key workflow. However, each vendor's software supports only its own proprietary data format, increasing the burden on scientists to learn multiple platforms. This software application overcomes that limitation: it works with all major vendor formats and applies the same processing workflow to all data.

Quantitation studies are designed within the application post-acquisition, where users can designate the appropriate samples and identify peaks for quantitation by retention time. The trace for quantitation is defined, and the application processes the data. Peak integration/detection parameters are applied, and a calibration curve with a line of best fit (supporting first- and second-order fit equations), standard deviations, and r² is calculated.

The results are saved to a database, where they can be viewed and modified, with changes tracked for data integrity purposes. Modifications can include alteration of the processing parameters or the addition of unknowns to be quantified. Users can also add a residuals plot, which allows for the visualization of potential problems in the calibration data that are not always visible on calibration scatter plots. Multi-user access to the databases allows for easy collaboration on projects. Customizable reports can be created based on user preferences or saved as a company/site-wide template.
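The calibration workflow described above can be sketched generically. This is a hedged, first-order-only illustration with invented standards, not the application's algorithm: fit a least-squares line to calibration standards, report r², and compute the residuals that a residuals plot would display.

```python
# Sketch only: linear (first-order) calibration fit with r^2 and residuals.
def fit_linear_calibration(concs, responses):
    """Least-squares fit of response = slope * conc + intercept."""
    n = len(concs)
    mean_x = sum(concs) / n
    mean_y = sum(responses) / n
    sxx = sum((x - mean_x) ** 2 for x in concs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, responses))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    predicted = [slope * x + intercept for x in concs]
    ss_res = sum((y - p) ** 2 for y, p in zip(responses, predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in responses)
    r2 = 1 - ss_res / ss_tot
    residuals = [y - p for y, p in zip(responses, predicted)]
    return slope, intercept, r2, residuals

# Hypothetical calibration standards (concentration, peak area):
concs = [1.0, 2.0, 5.0, 10.0, 20.0]
areas = [102.0, 198.0, 505.0, 1010.0, 1985.0]
slope, intercept, r2, residuals = fit_linear_calibration(concs, areas)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r2={r2:.4f}")
```

Plotting the residuals against concentration reveals curvature or outliers that are easy to miss on the calibration scatter plot itself, which is the motivation for the residuals-plot feature.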
MS-related innovation

Integrated quantitative workflow within vendor-agnostic software to quantify all xC/UV/MS data in a single interface.

TP196 - Topic area: Drug Discovery: Qualitative and Quantitative Analysis II

Anne Marie Smith, Product Manager, Mass Spectrometry and Chromatography, ACD/Labs

Monday, June 5th, 2023  
10:30-11:30 AM & 12:30-2:30 PM
All-in-One Design and Analysis of Quantitative Studies for High Throughput Experiments

Nikki Dare, Richard Lee, Karl Demmans, Anne Marie Smith

Introduction  

High throughput experiments (HTE) are complex studies used for product screening and reaction optimization.  Although reaction success can be assessed simply by the presence of the synthesized product, chemists want to know the extent of success and the quantity of product generated.  Specialized Windows-based applications are often used to design these experiments; however, these applications do not incorporate analytical data analysis or quantitation workflows, and require separate data analysis solutions.  Moreover, these applications are tied to specific workstations, requiring the user to be present in the same laboratory.  Here, we present an all-in-one, browser-based application that supports quantitation for HTE, including automation for data retrieval and data processing.

Methods 

To support HTE, data processing and quantitation modules were developed within the HTE design application to support all major vendor LC/UV/MS systems. Quantitation experiments can be designed within the application, including the selection of instrument type (vendor LC, LC/DAD, LC/UV/MS), sample definition (calibration, samples, blanks, system suitability), and target lists.  The application automatically processes the data based on the target list and processing parameters, and the user can visualize the results, including chromatograms, spectra, tabulated results, and calibration plots.  Data can also be reprocessed individually within the browser-based application, or users can change processing parameters and initiate batch reprocessing.  Results are stored in a database, and any user-initiated changes are stored as part of the audit trail.

Preliminary Data  

High throughput experiments, along with the subsequent quantitation studies, are designed within the browser-based application, bypassing the CDS software for design and data management.  This allows the user to create these experiments from any location, independent of where the data resides.  Users can designate the appropriate samples, including calibration samples, blanks, and system suitability.  Compound target lists can be entered in several ways: 1) manual text entry; 2) compound lookup, if integrated with an internal chemical database; and 3) SD file (drag and drop).  The application also includes a chemical structure drawing program where users can draw structures, load mol files, or enter InChI notation.  Retention times can also be entered for each target, if desired, to confirm component identity.

As the study layout is defined, a sequence file is generated for the selected CDS.  The system provides a unique name for each sample, so each data file has a unique identifier that the system uses to reconcile sample information to the data file.  The application incorporates automation services to retrieve and automatically process data based on user-set parameters as the data files are acquired.  Regression analysis is performed based on user preferences, and plots are generated that can be viewed within the application for review.
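The sample-naming idea above can be sketched in a few lines. The naming scheme and plate layout here are assumptions for illustration, not the product's actual convention: each well receives a unique name combining an experiment identifier and the well position, so acquired data files can later be reconciled to their wells.

```python
# Sketch only: generate one unique sample name per well for a sequence file,
# so each acquired data file maps unambiguously back to its well.
def make_sequence(experiment_id, rows, cols):
    """Return {well: sample_name} for a rows x cols plate."""
    seq = {}
    for r in rows:
        for c in range(1, cols + 1):
            well = f"{r}{c}"
            seq[well] = f"{experiment_id}_{well}"
    return seq

# Hypothetical 2 x 3 plate for experiment "HTE042":
seq = make_sequence("HTE042", rows="AB", cols=3)
print(seq["A1"], seq["B3"])  # HTE042_A1 HTE042_B3
```

Because the name is embedded in the data file's identifier at acquisition time, reconciliation after acquisition is a simple dictionary lookup rather than manual association.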

Users can review the processed chromatograms and spectra within the browser application, where they can alter data processing parameters and initiate batch reprocessing on the entire dataset or a subset.  Moreover, for more granular data processing needs, the application incorporates a manual data processing interface where the user has full control to adjust baseline, peak detection, and integration.

The browser-based design of the application allows it to be platform independent and the underlying technology enables it to be compatible with all major LC/UV/MS vendors.

Novel Aspect  

Vendor-neutral, browser-based application supporting quantitation of LC, LC/UV, and LC/UV/MS datasets for high throughput experiments.

MP309 - Topic area: High Throughput MS I

Karl Demmans, Application Scientist, ACD/Labs

Tuesday, June 6th, 2023  
10:30 AM-12:30 PM & 1:30-2:30 PM
Fast Component Identification & Automated Well-to-Data Connectivity for High Throughput Workflows

Richard Lee, Nikki Dare, Anne Marie Smith

Introduction  

High throughput synthesis, generally referred to as high throughput experimentation (HTE), has, with the help of robotics and automation, allowed scientists to significantly increase experimental output.  This includes the amount of analytical data generated, in particular LC/UV/MS data. Current solutions for analytical data processing in HTE require tedious data processing and error-prone manual association of analytical results with experiments.  Here, we present a unique system that not only integrates with existing hardware and software to bring design, planning, and execution into a single interface, but also automates fast LC/UV/MS batch data processing and manual singleton data processing within a browser-based interface.

Methods 

HTE were designed within a browser-based application, where reaction schemes and instructions were devised by the user and sample names were automatically created by the system, for reaction and analytical plates of up to 1536 wells.  Sample names served as identifiers for the automated association of analytical data with experimental wells.  Large volumes of LC/UV/MS data were stored in a specially designed data container providing high-speed access when automated batch reprocessing was required.  For manual data manipulation, LC/UV/MS datasets were accessed from the specialized data container by a unique data processing server for visualization within the browser interface.  Raw LC/UV/MS data from all major instrument vendors are supported by the system.

Preliminary Data  

Advancements in automation and robotics have allowed HTE to accelerate synthetic screening, from target synthesis to reaction optimization.  To evaluate the success of these reactions, LC/UV/MS is the analytical technique of choice.  However, the software systems that support these experiments often cannot handle the significant amount of LC/UV/MS data also generated, especially when data processing and reprocessing are required.

Upon data acquisition, automation services transfer the data to a specially designed data container for fast access.  A target list is automatically generated from the materials used and applied for component identification.  Monoisotopic mass and RT (if available) are used as the parameters for component identification, along with a variety of other parameters for chromatographic peak detection and integration.  If the initial data processing is not satisfactory, users can change processing parameters and initiate automated batch reprocessing from within the browser application.  The specialized data container permits facile data reprocessing, enabling a 96-well plate of low-resolution LC/UV/MS data to be reprocessed in less than 5 minutes.
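The component-identification step described above can be sketched generically. This is a hedged illustration with invented masses, tolerances, and names, not the system's algorithm: detected peaks are matched against the target list by monoisotopic mass within a ppm tolerance, with an optional retention-time window when an RT is provided.

```python
# Sketch only: match detected peaks to a target list by monoisotopic mass
# (ppm tolerance) and, when available, a retention-time window.
def match_targets(peaks, targets, ppm_tol=10.0, rt_tol=0.2):
    """peaks: [(observed_mass, rt)]; targets: [(name, mono_mass, expected_rt or None)]."""
    hits = []
    for name, mono_mass, expected_rt in targets:
        for obs_mass, rt in peaks:
            ppm_err = abs(obs_mass - mono_mass) / mono_mass * 1e6
            rt_ok = expected_rt is None or abs(rt - expected_rt) <= rt_tol
            if ppm_err <= ppm_tol and rt_ok:
                hits.append((name, obs_mass, rt, round(ppm_err, 2)))
    return hits

# Hypothetical detected peaks and target list:
peaks = [(180.0634, 2.41), (254.1569, 5.10)]
targets = [("glucose", 180.0634, 2.4), ("unknown-A", 254.1566, None)]
print(match_targets(peaks, targets))
```

Wide versus narrow ppm and RT tolerances trade sensitivity against false positives, which is why such parameters are typically user-adjustable alongside the peak detection and integration settings.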

Data can also be re-analyzed on a per-file basis (singleton processing).  Users can review the data within the browser application and initiate data retrieval automatically from the browser.  Data is extracted from the high-speed data container by the specialized processing server and opened in a separate window or tab.  The manual data processing interface is a full data processing application that supports all fundamental functionality for manual data interpretation, including peak detection, baseline correction, integration, and chemical structure support (import/draw).  Once manual processing is complete, the system transfers the results back to the data container, maintaining the data integrity of the HTE.

Novel Aspect  

Vendor-agnostic browser-based application to support HTE, with high-speed batch and singleton processing of LC/UV/MS datasets.

TP374 - Topic area: Informatics: Workflow and Data Management

Richard Lee, Director, Core Technology Innovation and Informatics Strategy, ACD/Labs

Meet Our Staff

Grace Kennedy

Sales Operations Manager

Karl Demmans

Application Scientist

Baljit Bains

Marketing Communications Specialist

Anne Marie Smith

Product Manager, Mass Spectrometry and Chromatography

Richard Lee

Director, Core Technology Innovation and Informatics Strategy