
Creating Custom Digital Assistants for the Scientific Laboratory using the HelixAI Platform

Open AccessPublished:May 15, 2022DOI:https://doi.org/10.1016/j.slast.2022.05.002

      Abstract

      Voice technology and fully virtual digital assistants are becoming increasingly prevalent in many industries, including the scientific laboratory. This environment can benefit greatly from hands-free digital assistants because scientists regularly need access to tools and information while performing bench work. A digital assistant in this environment has the potential to streamline laboratory work and reduce the chances of contamination and of human error caused by context switching between experiments and information storage media. Because the protocols and reagents used by each laboratory often differ, there is a need to create custom digital assistants for individual laboratories. In this technical brief we describe a custom software and web application, referred to as the HelixAI platform, that can be used to create digital assistants for individual scientific laboratories. Digital assistants created with this platform can be accessed through any Alexa-enabled smart speaker device. Here we describe the process by which labs can use this platform to create their own digital assistants, along with a description of the underlying technology. An assistant containing information from the scientific company New England Biolabs (NEB) was created using this software and serves as an example throughout this paper.


      Introduction

      While voice technology has become a part of daily life for many people during the last two decades, it has been an important area of technological advancement for far longer. With the development of Siri by Apple and smart speaker devices such as Amazon's Alexa and Google's Google Home, the ability of electronic devices to receive, understand, and respond to verbal queries is becoming ingrained in society [1]. Many industries, such as healthcare, consumer sales, and the automotive industry, have begun to utilize voice technology in both internal and consumer-facing applications [2,3,4]. As natural language processing has improved, many businesses are turning to fully electronic, voice-activated digital assistants to automate and streamline operations and customer interactions. In light of precautions taken to prevent the spread of SARS-CoV-2, digital solutions, especially hands-free solutions that utilize voice, have become increasingly important and popular [5].
      An industry that is slowly beginning to adopt voice technology and digital assistants is the scientific industry. The laboratory environment can benefit greatly from voice technology-driven digital assistants because scientists regularly need access to both hands-on tools and reagent/protocol information, often at the same time and in a time-sensitive manner. In recognition of this, manufacturers of scientific equipment are beginning to integrate voice technology into their products. Thermo Fisher Scientific, for example, has recently included Alexa-enabling software in two models of real-time PCR machines [6]. Additionally, it has been shown that laboratory equipment can be altered to become part of the Internet of Things (IoT), paving the way for voice technology to be used in the lab to acquire important data from these machines [7].
      Researchers and private companies are beginning to create mobile applications or Amazon Alexa "skills" that bring voice technology into the laboratory for the purpose of accessing scientific information and taking notes in the lab [8,9,10,11,12]. LabTwin, for example, is accessed by a lab member through their smartphone and can record notes relayed verbally while a scientist is working. Gene Teller is an Alexa skill that can access publicly available databases and provide scientists with information about genes of interest. Because different scientific laboratories typically have a very specific focus, laboratory members often need access to particular tools, data, and lab-specific content (e.g., reagents and protocols). To make this information accessible through an Alexa smart speaker device, custom skills can be created using the specific content that is of interest to a particular lab. The Science Family set of skills is designed to relay lab-specific information to scientists but requires lab members to build their own skills using publicly available code. This technical brief describes the use and underlying technology of a free, publicly available software platform called HelixAI that can be used to build custom digital assistants for the laboratory, accessible through any of the Alexa family of smart speaker devices. The HelixAI platform is designed to make it simple for scientists to build their own custom digital assistant without any computer science skills. This platform was used in collaboration with New England Biolabs (NEB) to create a customized skill called myNEB that allows access to NEB reagent information, protocols, and calculators. Examples from the myNEB skill are included throughout this brief to demonstrate the use of the HelixAI platform in creating a custom digital assistant.

      Methods and Materials

      Overview

      Lab members can create an account and begin building their own custom digital assistants by visiting http://www.helix.ai. The HelixAI platform is a custom software application written in NodeJS v10.13.0 and Express v4.13.4 and hosted on the Heroku cloud platform (www.heroku.com). This software interfaces with Amazon's cloud-based Alexa platform using the Alexa Skills Kit (ASK) and the Skills Management API (SMAPI) to create custom Alexa-hosted skills by analyzing data stored in multiple collections within a MongoDB database. The HelixAI software is responsible for generating the interaction model that defines the voice interface for the skill and for fulfilling end-user queries by accepting requests from the Alexa platform and sending responses containing natural-language text back to the platform for final playback through an Alexa device. Skills generated through the HelixAI platform can be used with any Amazon Alexa-enabled device, including Echos, Echo Dots, Echo Shows, and Fire tablets. Figure 1 depicts the general flow of data through the system for generating the voice interaction model and responding to user queries.
      Figure 1
      Figure 1. Overview of the flow of information between users and digital assistants created using the HelixAI platform. Gray arrows depict the input of information into the digital assistant. White arrows indicate the flow of a query from a user to the HelixAI platform. Black arrows show the flow of information answering the user's query back to the user. Gray arrows: Users such as lab members and scientists add information for their digital assistant through a web application, which stores the information in collections within a MongoDB database. This information is taken by the HelixAI platform to create an interaction model for the digital assistant, which is uploaded to the Alexa Platform. White arrows: When users make requests for information to their digital assistant, the request is received by their Alexa device and taken to the Alexa platform, which uses the interaction model to determine the intent of the request. The Alexa Platform then relays the query to the HelixAI platform as a JSON-formatted request. Black arrows: After receiving the query, the HelixAI platform creates a JSON-formatted response for the user, which is relayed back to the Alexa Platform and sent to the Alexa device for final playback to the user.
      Within the HelixAI platform, a web application was developed that allows lab managers or members to set up new skills, modify certain aspects of skills, add and update skill information and data, and deploy updated interaction models to the Alexa platform. The web application was developed using NodeJS v10.13.0, Express v4.13.4, and ReactJS v6.8.0 and hosted on the Heroku platform. This interface allows labs to create their custom Alexa skill and keep it up to date with new information, protocols, and other content useful to lab members. The following sections provide a detailed description of how to use the HelixAI platform to create a voice-enabled digital laboratory assistant, including examples from the creation of the Alexa skill myNEB.

      Creating Your Digital Assistant

      Before creating a digital lab assistant, it is necessary to identify the types of information to include in the skill. Multiple types of information and tools can be added to a lab assistant, but it is often best to identify information that provides lab members the most benefit when it can be accessed hands-free at the bench. Reagent information, protocols/solution recipes, and inventories are especially well suited for this type of technology and are the current focus of the HelixAI platform. In the case of the myNEB skill, scientists can access NEB product information and common tools used in laboratories. This skill contains multiple pieces of data on the 250+ restriction enzymes (REs) in NEB's product catalog, step-by-step protocol instructions for single and double digests, and common lab math calculators. Examples and details pertaining to these functionalities, along with sample questions and answers, are included in Table 1 for the purpose of demonstrating acceptable data to include in a custom skill.
      Table 1. Description of data and tools accessible through myNEB, including examples of user questions and skill responses.

      Data Category: Restriction Endonuclease Product Information
      Data pieces/properties included: recommended buffer; double digest recommended buffer; catalog number; concentration; cut site; description; High-Fidelity version available; heat inactivation temperature; incubation temperature; methylation sensitivity; percent activity in various buffers; SAM required; Time-Saver qualified
      Example query: "What is the incubation temperature for AatII?"
      Example response: "The incubation temperature for AatII is 37°C."
      Example query: "What is the recommended buffer for a double digest with XbaI and AscI?"
      Example response: "The recommended buffer for a double digest with XbaI and AscI is CutSmart Buffer."
      Example query: "Is EcoRI Time-Saver qualified?"
      Example response: "Yes, EcoRI is Time-Saver qualified."

      Data Category: Single and Double Enzyme Digest Protocols
      Data included: step-by-step instructions for carrying out restriction enzyme digests with any single restriction enzyme or any combination of two restriction enzymes
      Example query: "What is the protocol for a digest with MscI?"
      Example response: "To digest with MscI, set up a 50 μl reaction: add 1 μg of DNA and 5 μl of CutSmart Buffer, and bring the volume to 49 μl with nuclease-free water. Add 1 μl of MscI. Mix well and spin down. Finally, incubate at 37°C for 1 hour."

      Data Category: Calculations
      Calculators included: dilution calculation; ligation calculation; molarity calculation; dsDNA and ssDNA mass to moles; dsDNA and ssDNA moles to mass; dsRNA and ssRNA mass to moles; dsRNA and ssRNA moles to mass
      Example query: "I need to make a ligation calculation"
      Example response (follow-up prompt): "What is your insert DNA length?"
      Example query: "I need to calculate a dilution."
      Example response (follow-up prompt): "What is the concentration of your solution?"
      After identifying skill content, a custom Alexa skill can be created using the HelixAI web application. When users sign up with HelixAI, they are first directed to a page where they set up their skill by defining the skill name, invocation phrase, and description. These attributes determine how the skill is invoked from an Alexa device and displayed within the Alexa mobile application. All skill definition attributes are stored in a single JSON document in a designated "Application" collection within the MongoDB database. For the myNEB skill, the skill name was defined as "myNEB", the invocation phrase was defined as "my n.e.b.", and the short description was "myNEB - Digital Lab Assistant". Note that the invocation phrase defined for the skill should be a phonetic representation of what lab members will say to invoke the skill. In the case of myNEB, the skill is invoked using the letters "n.", "e.", "b.", as indicated by the "." after each letter. After a custom skill is created, it must be certified by Amazon through an automated process, after which it becomes publicly available in the Amazon Alexa Skill Store. Currently, the HelixAI platform only supports creating digital assistants that interact in English.
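As a concrete illustration, the skill-definition document stored in the "Application" collection might look roughly like the following. The field names are hypothetical; the values are those described above for myNEB:

```javascript
// Hypothetical shape of a skill-definition JSON document in the
// "Application" collection; field names are illustrative, values are
// the ones defined for the myNEB skill.
const skillDefinition = {
  skillName: 'myNEB',
  // Phonetic representation of what lab members say to invoke the skill.
  invocationPhrase: 'my n.e.b.',
  description: 'myNEB - Digital Lab Assistant',
  // The platform currently supports English-language assistants only.
  locale: 'en-US',
};
```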

      Adding Content to Your Digital Assistant

      Once a custom skill has been created and certified, the information and data to be included in the skill must be transferred from underlying data sources into the HelixAI web application, where they are stored in a designated "Entities" collection within the MongoDB database. In the web application, a sidebar provides different categories of information that can be added to the skill and retrieved by the digital assistant. These categories include Inventories, Recipes, Protocols, and general product and reagent information (called the Knowledge Base). Within each category, users can create folders for different subcategories of information. Information specific to a scientist's lab can be added to a folder manually, or a bulk upload can be executed with information stored in Excel spreadsheets. For the creation of myNEB, all data included in the skill was either acquired from the NEB website using custom web-scraping scripts or provided in Excel spreadsheets by a collaborator at NEB and uploaded through the bulk upload feature.
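For illustration, a single reagent entry in the "Entities" collection might resemble the following sketch. The document shape and field names are hypothetical; the property values are taken from the myNEB examples in Table 1:

```javascript
// Hypothetical Knowledge Base entry for one restriction enzyme, as it
// might be stored in the "Entities" collection after a bulk Excel upload.
// Field names are illustrative; values come from the myNEB examples.
const entity = {
  category: 'Knowledge Base',
  folder: 'Restriction Enzymes',
  name: 'AatII',
  properties: {
    incubationTemperature: '37°C',
    recommendedBuffer: 'CutSmart Buffer',
    timeSaverQualified: true,
  },
};
```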

      Intents, Utterances, and Synonyms

      After lab-specific information has been entered into the web application, the HelixAI software defines the intents and utterances that will be supported by the assistant. Intents represent the types of actions that can be handled by the assistant. Each intent is mapped to a comprehensive list of utterances representing the spoken phrases that trigger the intent. Currently, for any skill created using HelixAI, four intents are included: an "InventoryIntent", a "LookUpIntent", a "ProtocolIntent", and a "CalculationIntent". Each intent is associated with a particular category of information within the web application; for example, "InventoryIntent" targets requests to the Inventory folder and "LookUpIntent" targets requests to the Knowledge Base folder. The "CalculationIntent" can be used to perform dilution and molarity calculations. Utterances make use of slot placeholders and slot values to represent variable information within a spoken phrase. The HelixAI platform defines the slot placeholders that are used in each utterance, and the slot values are determined by the content added to the skill through the web application. All intents, utterances, slot placeholders, and slot values are stored in a designated "Intents" collection within the MongoDB database. In the myNEB skill, all the properties of each RE are stored in the Knowledge Base, and the "LookUpIntent" allows for the recall of a specific property for a particular RE. In the utterances that trigger this intent, a slot placeholder for the property ("Property") and a slot placeholder for the RE name ("Entity") were defined. The slot values for the "Property" slot consist of all available properties of each RE that a user can query, and those for the "Entity" slot consist of all available REs. Examples of properties in myNEB can be found in Table 1.
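In the Alexa Skills Kit interaction-model format, an intent with its slots and sample utterances might be expressed roughly as follows. This is a simplified sketch of the "LookUpIntent" described above, not the platform's exact output; the slot-type names and the particular utterance phrasings are illustrative:

```javascript
// Sketch of the "LookUpIntent" portion of an Alexa interaction model.
// Sample utterances embed slot placeholders in curly braces, following
// the Alexa Skills Kit format; type names and phrasings are illustrative.
const lookUpIntent = {
  name: 'LookUpIntent',
  slots: [
    { name: 'Property', type: 'PROPERTY_TYPE' }, // e.g. "incubation temperature"
    { name: 'Entity', type: 'ENTITY_TYPE' },     // e.g. "AatII"
  ],
  samples: [
    'what is the {Property} for {Entity}',
    'tell me the {Property} of {Entity}',
  ],
};
```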
      To ensure that a digital assistant can understand and respond to queries asked in any variation, the assistant must be optimized by adding synonyms that account for alternate pronunciations or homophones that might be spoken in a query. When adding content to the HelixAI web application, lab members will need to consider and include all the possible variations used to request a particular product, protocol, or item. For example, when adding ethanol to an Inventory folder, the synonym "e.t.o.h." should be included. When creating myNEB, a collaborator at NEB provided audio files of all possible pronunciations for each RE and other terms used in the skill. Using these audio files, a comprehensive list of synonyms was created that includes alternative phrases, nicknames of REs and protocols, and homophones or near homophones for each term. In the myNEB skill, for example, the synonyms "e.c. o.r.i.", "e.c.o.r.1.", "echo.r.i.", and "echo.r.1." were all included for the RE EcoRI to represent all common pronunciations of that enzyme.
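In the interaction model, synonyms are attached to slot values. The EcoRI example above might be represented in Alexa Skills Kit style roughly as follows (the surrounding structure is simplified; the synonym list is exactly the set of pronunciations given in the text):

```javascript
// A slot-type value with synonyms, in Alexa Skills Kit style. The
// canonical value is the enzyme name; the synonyms are the common
// pronunciations listed for EcoRI in the myNEB skill.
const ecoRIValue = {
  name: {
    value: 'EcoRI',
    synonyms: ['e.c. o.r.i.', 'e.c.o.r.1.', 'echo.r.i.', 'echo.r.1.'],
  },
};
```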

      Building the Interaction Model

      Speech technology platforms contain automated speech recognition (ASR) and natural language understanding (NLU) technologies that enable them to interpret and respond to lab members’ queries. In order to train the underlying ASR and NLU of the Alexa platform for a skill, an interaction model describing the words and phrases that can be spoken to the assistant is created and processed by the platform. Using data stored in the MongoDB database, the HelixAI software programmatically creates an interaction model by analyzing the content of Applications, Intents, and Entities collections. The interaction model is then programmatically uploaded to the Alexa platform using SMAPI. Whenever content is changed or added to a skill through the web application, the skill's interaction model needs to be updated. A feature within the HelixAI web application allows lab members to easily update the interaction model for their digital assistant after content has been added or modified.
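The assembly step described above can be sketched as a function that merges documents from the three collections into one interaction model before upload. This is a hypothetical simplification: the function and field names are illustrative, only a single slot type is shown, and the real HelixAI software handles far more detail before sending the model to the Alexa platform via SMAPI:

```javascript
// Hypothetical sketch: merge documents from the Applications, Intents,
// and Entities collections into a single Alexa interaction model.
// Field names are illustrative; the real software handles much more.
function buildInteractionModel(application, intents, entities) {
  return {
    interactionModel: {
      languageModel: {
        invocationName: application.invocationPhrase,
        intents: intents.map((i) => ({
          name: i.name,
          slots: i.slots,
          samples: i.utterances,
        })),
        types: [
          {
            name: 'ENTITY_TYPE',
            values: entities.map((e) => ({
              name: { value: e.name, synonyms: e.synonyms || [] },
            })),
          },
        ],
      },
    },
  };
}
```

Whenever content changes in the web application, rerunning a function like this and re-uploading its output is what "updating the interaction model" amounts to.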

      Using Your Digital Assistant

      Once a custom lab skill has been created, certified, and contains content, the skill must be enabled on an Alexa smart speaker device before use. This requires an Amazon account and a smartphone with the Alexa mobile application installed. The skill can be enabled from the Alexa mobile application by searching for the skill by name in the “Skills and Games” section of the application. As all skills created through the HelixAI platform are public, any lab member can enable the skill on any device associated with their Amazon account. An alternative to this is to create a new Amazon account for the laboratory specifically for enabling the lab's custom skill. This account can be logged into the Alexa mobile application from any smartphone and used to enable the lab's custom skill on any devices associated with the laboratory's Amazon account.
      After enabling the skill, a launch phrase consisting of the device wake word, a launch command, and the skill's invocation phrase is used to open the skill. The pre-programmed wake word for all Alexa-enabled devices is "Alexa", although the wake word can be changed to a limited number of options. The invocation phrase is the custom phrase defined when creating the skill through the HelixAI web application. For the myNEB skill, the launch phrase is "Alexa, launch my n. e. b." or "Alexa, open my n. e. b." Upon hearing the launch phrase, Alexa responds by speaking a short welcome message and prompting the user for an initial query.
      As described above, each digital assistant has four predefined categories for lab information: Knowledge Base, Inventories, Recipes, and Protocols. Each category is associated with different utterances that can be spoken by a lab member to request information or assistance. The digital assistant recognizes those particular utterances and associates them with the specific intents needed to answer queries. Figure 2 provides example interactions between a scientist and their assistant to demonstrate some of these utterances and the subsequent responses. For certain intents, such as the Knowledge Base and Inventories, the interaction follows a more traditional question and answer format. The Protocol and Calculation intents, however, allow the scientists to engage in a dialogue with their digital assistant to receive assistance. A Supplemental Information file provides a comprehensive list of all utterances and phrases that can be used with a HelixAI-created digital assistant, organized by information category.
      Figure 2
      Figure 2Example interactions between a laboratory member and a digital assistant. Panels A and B demonstrate a question/answer interaction flow as typically seen with Knowledge Base (A) and Inventory (B) queries. Panels C and D show a dialogue flow associated with Protocols (C) and Calculators (D).
      Scientists using HelixAI-created digital assistants have found that a visual display supplementing the assistant's auditory response greatly enhances the experience for laboratory users. This has been done using an Echo Show device. Additional options for customizing the home screen for a lab's digital assistant are displayed during setup. A minor issue noted by scientists when using these devices in the lab is that Echo Show devices typically have a timeout feature that causes the screen to revert to the home screen after approximately 15 seconds of display. To remedy this problem, a feature has been added to the HelixAI platform that alters the Echo Show timeout behavior, allowing the visual display to remain on the screen for 24 hours or until cued by a user. For example, when a scientist is using myNEB to execute a protocol, the Echo Show device will continue showing the first step until the user prompts it to proceed to the next step. An example interaction flow using an Echo Show is included as Figure 3.
      Figure 3
      Figure 3Example of a protocol flow using an Echo Show. Each display shows a step involved in the process of completing a single enzyme digest with the restriction enzyme AatII. When users are ready for their next step, they prompt their assistant, which then reads the next step verbally while it is displayed on the screen of the Echo Show device. This example is of a protocol included in myNEB.

      Results and Discussion

      Testing Your Digital Assistant

      HelixAI digital assistants have been shown to respond to user queries with a high degree of accuracy. To fully assess the accuracy of myNEB, the ability of the skill to correctly understand user requests and its ability to return the correct response were tested independently. To test skill understanding, all utterances supported by the myNEB skill were spoken to both Echo Dot and Echo Show devices. If an utterance yielded unexpected results, the interaction model was updated by adding additional synonyms through the HelixAI web application and tested again. Once a custom skill has been created, it is suggested that lab members verbally assess the ability of the skill to understand especially unique or challenging words or phrases and update their assistant through the web application accordingly. Skill understanding of myNEB was first tested internally for four weeks before being deployed to NEB's internal laboratories. Once set up there, myNEB was tested over a six-week period by 24 different users. Users were assessed by survey during and at the completion of the testing period.
      To ensure accurate responses are returned upon request, a suite of automated unit tests was created to programmatically assert that the correct responses were retrieved from the database for a given intent and request details. The results of these tests were programmatically inspected to confirm they matched the expected responses. Additionally, users surveyed at NEB were asked to record whether the skill returned the correct response during testing. All tests passed successfully. Therefore, when scientists use a digital assistant created through the HelixAI platform, they can be confident that the system is accessing their content in the correct context for their requests with a high degree of accuracy.

      Benefits of Using Your Digital Assistant

      Scientific laboratories are well suited for voice technology-driven digital assistants because laboratory workers need to access varied types of information, often stored in disparate locations, while performing complex activities. Digital assistants have the potential to streamline lab activities by providing a hands-free way for scientists to access the information they need while performing their work, thereby creating a more efficient laboratory environment, lessening the chances of contamination and error, and improving lab safety. The HelixAI platform allows laboratories to easily create digital assistants customized with the particular information and data that meet their needs.

      Privacy and Safety

      A common concern associated with the use of voice technology and digital assistants is security. Specifically, for HelixAI-created digital assistants, concerns regarding who can access a skill to retrieve potentially sensitive information are most prevalent. The current implementation of HelixAI digital assistants only allows for creating public skills that are available to any user through the Alexa skills store. However, several models have been identified for future development that would reduce or eliminate concerns regarding the accessibility of skills to unwanted parties. First, an access control policy requiring lab members to authenticate their identity using unique credentials before accessing the skill would effectively limit who can retrieve information from the assistant. In this case, the skill would still be deployed publicly and available for download to any user, however, the skill would require authentication before responding to a query. Second, skills can be deployed privately using an additional Amazon service called Alexa for Business. In this case, skills are deployed directly to specific devices within a laboratory and are not available in the Amazon skills store. Lastly, advances in the underlying technology of voice assistants indicate the use of voice identification as a potential solution to this concern in the future. Voice identification would make use of voice profiles that reliably identify who is speaking to a device and would limit responsiveness only to known voice profiles.

      Future Directions

      Currently, the HelixAI platform can be used to customize an assistant with many different types of lab-specific information. The first version of this platform focuses on common laboratory activities such as executing protocols and making solutions, along with typical databases such as reagent information and inventories. In the future, the HelixAI platform will build upon these existing functionalities, while also integrating new features.
      A common use case for voice technology in the home is ordering and recording of household goods. Similarly, future versions of HelixAI will supplement the current inventory functionalities with support for ordering and reordering reagents and laboratory supplies. These new features will focus on collecting verbal requests to order items and allow for the collected lists to be integrated into a laboratory's current inventory fulfillment process. The system will include an option to send weekly reminder emails to designated lab members with reorder requests and updated inventory information.
      As timers and alerts are features within Alexa, the current release of the HelixAI platform did not add any custom timer-based functionalities. However, the use of timers and countdowns has numerous applications in the scientific workspace. Future versions of HelixAI intend to provide an improved user experience while executing protocols by integrating timers and countdowns into steps of a protocol that contain timed components. Timers would be automatically created for the associated time as specified in the protocol and scientists would receive audible alerts from the Alexa device when the time has elapsed.
      Reducing the burden of note taking and data collection while performing hands-on benchwork is an upcoming goal of the HelixAI platform. Future versions of HelixAI will allow for collecting data and data points either verbally from the scientist or by initiating a request to an IoT-connected instrument on a larger network of connected laboratory devices. Future versions will make all data collected through the platform available through the associated HelixAI web application and provide for exporting the collected data to popular electronic laboratory notebooks.
      In conclusion, voice technology and the HelixAI platform have the potential to become an essential part of the modern laboratory workspace. As we turn towards the lab of the future, a HelixAI-created digital assistant can become a central hub for a connected laboratory by integrating lab equipment, software systems, augmented reality devices, and robotics.

      Declaration of Competing Interest

      None.

      Acknowledgements

      The authors would like to thank TechStars, the Amazon Alexa Accelerator, and the Alexa Fund for their help and guidance in developing our platform, along with our NEB collaborators Penny Devoe and Andrew Bertera for their significant contributions to the development and distribution of myNEB. This work has been funded by New England Biolabs and the Alexa Fund.

      Appendix. Supplementary materials

      References

      1. Creative, V. A brief history of voice assistants, 2020. https://www.theverge.com/ad/17855294/a-brief-history-of-voice-assistants (accessed Sep 22, 2021).
      2. Johnson, M.; Lapkin, S.; Long, V.; et al. A Systematic Review of Speech Recognition Technology in Health Care. BMC Med Inform Decis Mak. 2014; 14: 94.
      3. Lo, V. E.-W.; Green, P. A. Development and Evaluation of Automotive Speech Interfaces: Useful Information from the Human Factors and the Related Literature. International Journal of Vehicular Technology. 2013; 2013: 1-13.
      4. Fernandes, T.; Oliveira, E. Understanding Consumers' Acceptance of Automated Technologies in Service Encounters: Drivers of Digital Voice Assistants Adoption. Journal of Business Research. 2021; 122: 180-191.
      5. Swoboda, C. COVID-19 Is Making Alexa And Siri A Hands-Free Necessity, 2020. https://www.forbes.com/sites/chuckswoboda/2020/04/06/covid-19-is-making-alexa-and-siri-a-hands-free-necessity/ (accessed Feb 26, 2022).
      6. Perkel, J. M. Alexa, Do Science! Voice-Activated Assistants Hit the Lab Bench. Nature. 2020; 582: 303-304.
      7. Austerjost, J.; Porr, M.; Riedel, N.; et al. Introducing a Virtual Assistant to the Lab: A Voice User Interface for the Intuitive Control of Laboratory Instruments. SLAS Technology. 2018; 23: 476-482.
      8. Meet your new lab assistant, 2020. https://cen.acs.org/articles/95/i19/Meet-your-new-lab-assistant.html (accessed Sep 22, 2021).
      9. Hill, J. D. Gene Teller: An Extensible Alexa Skill for Gene-Relevant Databases. Bioinformatics. 2020.
      10. Alexa and your phone are getting schooled in chemistry, 2020. https://cen.acs.org/business/informatics/Alexa-phone-getting-schooled-chemistry/97/i36 (accessed Sep 22, 2021).
      11. Using Voice-Activated Technology to Deliver Industry News, 2020. https://associationsnow.com/2019/02/using-voice-technology-to-deliver-industry-news/ (accessed Sep 22, 2021).
      12. Lubiana-Alves, T.; Gonçalves, A. A. N. A.; Nakaya, H. I. Science Family Skills: An Alexa Assistant Tailored for Laboratory Routine. bioRxiv. 2018; 484147.