Part of the “Who.Where.What?” feature we initially launched at CES 2020 was a set of voice commands for shopping the scene. This video gives a sense of what is possible with these commands.
We took a lot of care in curating a list of natural, logical ways a user would interact with the datasets our API provides. Part of developing that list was building a prototype, which we then user tested.
The prototype was a web page navigated with the keyboard or a remote control, with a UI modeled on a popular smart TV app. Using the annyang speech recognition library, we gave the interface functional voice control so we could test each of the candidate commands.
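To give a sense of how little glue annyang needs, here is a minimal sketch of the kind of wiring such a prototype uses. The command phrases and handler functions below are illustrative stand-ins, not our actual command list:

```html
<!-- annyang is a small wrapper around the browser's Web Speech API -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/annyang/2.6.1/annyang.min.js"></script>
<script>
  // Hypothetical handlers standing in for the real prototype UI code
  function showShoppableItems() { /* open the shop-the-scene overlay */ }
  function focusItem(item) { /* move focus to the product card for `item` */ }

  if (annyang) {
    annyang.addCommands({
      // A fixed phrase maps directly to a handler
      'what can I shop': showShoppableItems,
      // ':item' captures a single spoken word and passes it as an argument
      'show me the :item': focusItem,
      // '*query' captures the rest of the utterance, however many words
      'search for *query': (query) => focusItem(query),
    });
    // Keep listening across pauses, recognizing one phrase at a time
    annyang.start({ autoRestart: true, continuous: false });
  }
</script>
```

Because annyang rides on the browser's built-in speech recognition, a prototype like this runs in a plain Chrome tab with no server-side speech stack, which made it practical for quickly user testing different command phrasings.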