Google Assistant was a giant leap forward a few years ago, and at Google I/O 2021 the big G presented LaMDA, the next step for AI conversations, search, and much more, taking them to a level not seen before.
LaMDA (Language Model for Dialogue Applications) is a natural language understanding AI model trained to converse about any topic without losing the thread. In the future, it will be used in products such as Google Assistant, Search, and Workspace.
Google Assistant is the clearest example of what the LaMDA integration will look like: it will allow richer conversations, with a more extensive vocabulary and a more 'intelligent' grasp of what is being discussed.
Sundar Pichai showed a first demo in which LaMDA took on the roles of the planet Pluto and of a paper airplane. In both conversations, the model followed the questions naturally, putting itself in the shoes of that planet or that plane to answer appropriately.
A series of answers that left us speechless at the possibilities LaMDA opens up. And it will not be limited to text: Google intends to extend it to images, audio, and video, so that, for example, we could ask Google Maps for a route that passes through spectacular landscapes.
Like many recent language models, including BERT and GPT-3, LaMDA is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017.
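To get a rough feel for what that architecture does (this is an illustrative sketch, not LaMDA's actual code, which Google has not released), the heart of a Transformer is scaled dot-product self-attention: every word in a sentence weighs every other word when building its own representation, which is what lets these models keep track of long conversations. A minimal NumPy version of a single attention step might look like this:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the core operation of a Transformer.

    X: (seq_len, d_model) matrix of token embeddings.
    Wq, Wk, Wv: learned projection matrices of shape (d_model, d_k).
    Returns a (seq_len, d_k) matrix where each row mixes information
    from every position in the sequence, weighted by relevance.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Production models like LaMDA stack dozens of these attention layers, with multiple heads per layer and billions of learned parameters, but the weighting mechanism above is the same basic idea.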
It remains to be seen what will happen to the rest of the competition once assistants move beyond simple voice commands and conversations between machine and user become almost natural.
For now, LaMDA has arrived with just a few brushstrokes of what it will bring to a wide variety of Google applications and services.