As the battle for control of the voice-controlled computing sector gains pace, Amazon has fired another shot across the bows of its competitors.
Amazon Web Services announced that the artificial intelligence technology behind Alexa is available to any developer who wants to create voice- or text-based conversational interfaces. According to an Amazon press release, Amazon Lex—which has been in preview mode with select customers since the end of 2016—will speed up the process of building and testing conversational apps within a fully managed service.
Voice-controlled interfaces are the undoubted flavor of the month for tech companies, which makes Amazon’s decision to offer its AI platform as a service an intriguing one. Developers will now have access to the same automatic speech recognition and natural language understanding capabilities that are at the heart of Alexa, Amazon said.
As with its other cloud-based services, Amazon will charge developers based on the number of text or voice requests that Lex actually processes.
“Thousands of machine learning and deep learning experts across Amazon have been developing AI technologies for years, and Amazon Alexa includes some of the most sophisticated and powerful deep learning technologies in existence,” said AWS vice president of Databases, Analytics and AI Raju Gulabani. “We thought customers might be excited to use the same technology that powers Alexa to build conversational apps, but we’ve been blown away by the customer response to our preview.”
Speech recognition and natural language understanding are not easy skills to master, mainly because deep learning algorithms require not only training but also huge amounts of data and infrastructure. Making Amazon Lex widely available removes the “heavy lifting” involved in building voice-activated apps or chatbots, with developers able to test and launch apps that can perform a variety of conversational tasks.
Building a conversational app via Amazon Lex is relatively simple, Amazon said. Developers give Amazon Lex sample phrases that describe a user's intent ("book a flight," for example), the information the AI needs to fulfill that intent (travel date and destination), and the follow-up questions used to gather any missing details ("Where and when do you want to go?").
“Amazon Lex takes care of the rest by building a machine learning model that parses the speech or text input from the user, understands the intent behind the conversation, and manages the conversation (e.g., if the travel date is already known, the app will skip that question and ask for the destination),” Amazon said. “Developers can then publish the conversational app to mobile and Internet of Things devices, web applications, and chat services such as Facebook Messenger, Slack or Twilio.”
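The intent-and-slot flow Amazon describes can be sketched in plain Python. This is a toy illustration of the concept only—the dictionary structure, names, and `next_prompt` helper are invented for this example and are not the real Amazon Lex API or schema:

```python
# A toy model of the Lex building blocks described above: an intent, sample
# utterances, and the slots (with prompts) needed to fulfill it. The structure
# and names are illustrative, not the actual Amazon Lex schema.
book_flight_intent = {
    "name": "BookFlight",
    "sampleUtterances": ["Book a flight", "I want to fly somewhere"],
    "slots": [
        {"name": "TravelDate", "prompt": "When do you want to travel?"},
        {"name": "Destination", "prompt": "Where do you want to go?"},
    ],
}

def next_prompt(intent, known_values):
    """Return the next question to ask, skipping slots already filled.

    If every slot is filled, return None: the intent is ready to fulfill.
    """
    for slot in intent["slots"]:
        if slot["name"] not in known_values:
            return slot["prompt"]
    return None

# If the travel date is already known, the bot skips that question and
# moves straight on to asking for the destination, as in Amazon's example.
print(next_prompt(book_flight_intent, {"TravelDate": "next Friday"}))
```

The point of the sketch is the conversation-management behavior Amazon highlights: the model tracks which pieces of information it already has and only elicits what is missing.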
Making natural language understanding and speech recognition available to the developer community could be a significant step forward. Developers would be able to build a variety of bots for everyday consumer requests and for every industry sector. The key point is that Amazon Lex makes AI easy to use and should (in theory) help Amazon cement its lead in the voice-control market.
Amazon Lex Removes The Gruntwork
Developers don’t need to provision or manage infrastructure, and Amazon Lex handles the authentication required by each of the platforms it publishes to.
In addition, Amazon Lex scales automatically as traffic increases. Developers will also be able to build apps that collect data from enterprise applications or integrate business logic. As an added bonus, developers can use analytics provided by the service to measure and improve their apps’ performance over time.
Developers who want to get started with Amazon Lex can do so here.
The emergence of Amazon Lex from preview mode was timed to coincide with a cloud-computing summit hosted by the company in San Francisco on April 19. Reuters reported that Amazon is keen to catch up with the real-world data collection of Google and Apple devices, with such feedback deemed an invaluable resource.
“There’s massive acceleration happening here,” said Amazon CTO Werner Vogels, in an interview with the news source. “The cool thing about having this running as a service in the cloud instead of in your own data center or on your own desktop is that we can make Lex better continuously by the millions of customers that are using it.”