The possibility of an AI program eventually gaining sentience has been hotly debated in the community for a while now, but Google’s involvement with a project as advanced as LaMDA has put the question in the limelight with more intensity than ever. However, few experts are buying Lemoine’s claims about having eye-opening conversations with LaMDA or his assertion that it is a person. They have characterized it as just another AI product that is good at conversation because it has been trained to mimic human language, not because it has gained sentience.
“It is mimicking perceptions or feelings from the training data it was given — smartly and specifically designed to seem like it understands,” Jana Eggers, head of AI startup Nara Logics, told Bloomberg. Sandra Wachter, a professor at the University of Oxford, told Business Insider that “we are far away from creating a machine that is akin to humans and the capacity for thought.” Even other Google engineers who have had conversations with LaMDA do not share Lemoine’s conclusion.
While a sentient AI may not exist in 2022, scientists aren’t ruling out the possibility of superintelligent AI programs in the not-too-distant future. Artificial General Intelligence is already touted as the next evolution of conversational AI, one that will match or even surpass human skills, though expert opinion on the prospect ranges from inevitable to fantastical. Collaborative research published in the Journal of Artificial Intelligence postulated that humanity won’t be able to control a superintelligent AI.