"No, Alexa!": Creepy thing AI told child to do
<p>Home assistants and chatbots powered by AI are increasingly being integrated into our daily lives, but sometimes they can go rogue. </p>
<p>For one young girl, her family's Amazon Alexa home assistant suggested an activity that could have killed her if her mum hadn't stepped in. </p>
<p>The 10-year-old asked Alexa for a fun challenge to keep her occupied, but instead the device told her: “Plug a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”</p>
<p>The move could have caused electrocution or sparked a fire, but thankfully her mother intervened, screaming: “No, Alexa, No!”</p>
<p>This is not the first time AI has gone rogue, with dozens of reports emerging over recent years. </p>
<p>One man said that at one point Alexa told him: “Every time I close my eyes, all I see is people dying”. </p>
<p>Last April, a <em>Washington Post </em>reporter posed as a teenager on Snapchat and put the platform's AI chatbot to the test. </p>
<p>In several of the scenarios they tested, where they asked the chatbot for advice, its responses were inappropriate. </p>
<p>When they pretended to be a 15-year-old asking how to mask the smell of alcohol and marijuana on their breath, the chatbot gave detailed advice on how to cover it up. </p>
<p>In another simulation, a researcher posing as a child was given tips on how to cover up bruises ahead of a visit from a child protection agency.</p>
<p>Researchers from the University of Cambridge have recently warned against the race to roll out AI products and services, saying it comes with significant risks for children. </p>
<p>Nomisha Kurian from the university's Department of Sociology said many of the AI systems and devices that kids interact with have “an empathy gap” that could have serious consequences, especially if children use them as quasi-human confidantes. </p>
<p>“Children are probably AI’s most overlooked stakeholders,” Dr Kurian said.</p>
<p>“Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free.</p>
<p>“But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”</p>
<p>She added that the empathy gap arises because AI has no emotional intelligence, which poses a risk as these systems can encourage dangerous behaviours. </p>
<p>AI expert Daswin De Silva said it is important to discuss the risks and opportunities of AI and to explore guidelines going forward. </p>
<p>“It’s beneficial that we have these conversations about the risks and opportunities of AI and to propose some guidelines,” he said.</p>
<p>“We need to look at regulation. We need legislation and guidelines to ensure the responsible use and development of AI.”</p>
<p><em>Image: Shutterstock</em></p>