What are the common conversational design patterns used by voice-based assistants like Siri, Alexa, Google Assistant and Cortana?
Conversational UI is common these days in IoT and other connected devices, not to mention smartphones and computers. The main players in the market are Siri from Apple, Alexa from Amazon, Google Assistant from Google, and Cortana from Microsoft.
It is rare for someone to own products from all of these vendors, so I wonder: do these assistants share common conversational design patterns in the way they interact with the user, or is each designed to behave differently to suit its particular target market or group? If the latter, where do these differences lie?
The specific areas I am thinking about, from a user experience point of view, are:
- Choice of default voice (and the variety available for customization)
- Language used to trigger specific actions
- Type of language the voice assistant uses to respond
- Type of audio cues and indicators for specific actions/statuses