Amazon’s Alexa needs its own robotic body to reach its full potential, creators of the smart assistant say.
It may mean future versions of the popular gadget are able to follow their owners around the house and listen to conversations with greater ease.
Rohit Prasad, head scientist at Amazon’s Alexa division, claims the only way to rid the AI-assistant of its shackles is to ‘give it eyes and let it explore the world’.
He said this would be the only way for the devices to better understand the ambiguity and complexity of human language.
The most advanced Alexa-enabled devices include cameras but a robotic body would be a new development.
Mr Prasad made the comments at the EmTech Digital conference.
His work focuses on allowing Alexa to fully understand language and make the robot-human interaction as natural as possible.
‘Language is complicated and ambiguous,’ he said. ‘Reasoning and context have to come in.’
‘The only way to make a smart assistant really smart is to give it eyes and let it explore the world.’
Mr Prasad hinted that a robotic body would allow the device to weigh more contextual factors when responding to its owners, such as who the person is, any previous requests and even their location.
This level of complexity would make the device far more intelligent and give it a semblance of ‘common sense’, while also avoiding the ‘I’m sorry, I didn’t understand that question’ response uttered when Alexa does not quite understand a user’s request.
It remains unclear what physical form a robotic Alexa would take if it were developed.
But it may lend some credibility to earlier reports of a rumoured mobile Alexa robot.
The future of robotics remains murky with AI rapidly developing and being integrated with existing technology for a wide range of uses.
Artificial intelligence has already outperformed humans in many fields, including in board games such as chess and Go, and even in the far more complex computer game StarCraft II.
These games have set rules and machine learning is able to defeat the world’s most accomplished competitors after learning and practising.
Medical professionals are also using AI to help spot abnormalities in scans to speed up the process of obtaining an accurate diagnosis.
Recent developments include spotting signs of heart disease and lung disease, and detecting pacemakers, in X-rays, CT scans and even MRI results.
It has also been used for more dangerous applications, with weapons being developed capable of killing people without the need for human oversight.
The development of autonomous weapons that can spot and eliminate ‘threats’ based on their algorithms and programming is a contentious subject which concerns many leading academics.
A host of scientists and academics are lobbying this week for a global treaty to stop the use of autonomous killer robots.
It comes as the UN gathers to talk about the new generation of weapons.
Concerns are also growing that once the military has established the technology it may fall into the hands of criminals, terrorists and law enforcement and trigger the ‘third revolution’ in warfare, after gunpowder and nuclear weapons.
Nobel prize-winning academics and doctors have penned an open letter urging the United Nations to implement a ban designed to restrict the use of AI-powered weapons able to kill people without oversight from a human.
Representatives of many United Nations member nations are meeting with experts this week to discuss the potential dangers and broach the topic of a worldwide ban.
There is currently no regulation on autonomous killing machines in the same way as there is for biological and chemical warfare.
Proponents of the treaty say the weapons should be banned outright and that their development should not be allowed.
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 percent of people fear that robots will lead to fewer jobs within the next ten years, according to a 2016 YouGov survey.
And 27 percent predict that it will decrease the number of jobs ‘a lot’ with previous research suggesting admin and service sector workers will be the hardest hit.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.
They could ‘go rogue’
As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.
Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.