Placeholder Content Image

How an AI grandma is combating phone scams

<p>Fraudsters frequently target the elderly for scams, so one company took matters into its own hands and created an AI grandmother who tricks phone scammers. </p> <p>At first glance, Daisy seems like any other grandmother: she enjoys knitting, has a cat named Fluffy and loves talking about her family, among other things. However, the AI chatbot is designed to trick phone scammers into thinking they are speaking to a real person. </p> <p>The AI, created by British mobile phone company O2, is designed to combat fraud, and while Daisy doesn't intercept any calls, she has a list of phone numbers used by UK scammers. </p> <p>Daisy's mission “is to talk with fraudsters and waste as much of their time as possible with human-like rambling chat to keep them away from real people,” the company said in a statement unveiling Daisy earlier this month. </p> <p>Her tactics have kept “numerous fraudsters on calls for 40 minutes at a time." </p> <p>Developed in partnership with London advertising agency VCCP, Daisy uses a custom language model to hold autonomous conversations with scam callers in real time. </p> <p>Her voice was modelled on that of a staff member's grandmother. </p> <p>“Whilst anyone can be a victim of a scam, criminal fraud gangs often target the elderly so we leaned into scammers’ own biases to create an AI granny based on a real relative of a VCCP employee,” the agency said in a statement. </p> <p>“Over the course of many hours of scam calls she’s told meandering stories of her family, talked at length about her passion for knitting and provided false personal information including made-up bank details.”</p> <p>Last year, Virgin Media O2 blocked more than £250 million ($A487.5 million) in suspected fraudulent transactions, which is roughly equivalent to stopping one every two minutes. 
</p> <p>According to the telecommunications company, Daisy was developed in response to research revealing that the top reason the British public wouldn’t bait scammers themselves is that they don't want to waste their own time. </p> <p>“With scammers operating full-time call centres specifically to target Brits, we’re urging everyone to remain vigilant,” commented Murray Mackenzie, Virgin Media O2’s director of fraud.</p> <p>Daisy has "all the time in the world", and the videos unveiling her character showed just how effective her work has been. </p> <p>“It’s nearly been an hour!” one exasperated scammer said over the phone. </p> <p>Another fraudster said: “I think your profession is bothering people.” </p> <p>The chatbot replied: “I’m just trying to have a little chat.”</p> <p><em>Image: O2</em></p>

Legal

Placeholder Content Image

Humanising AI could lead us to dehumanise ourselves

<p><em><a href="https://theconversation.com/profiles/raffaele-f-ciriello-1079723">Raffaele F Ciriello</a>, <a href="https://theconversation.com/institutions/university-of-sydney-841">University of Sydney</a> and <a href="https://theconversation.com/profiles/angelina-ying-chen-2230113">Angelina Ying Chen</a>, <a href="https://theconversation.com/institutions/university-of-sydney-841">University of Sydney</a></em></p> <p>Irish writer John Connolly <a href="https://www.goodreads.com/quotes/3147986-the-nature-of-humanity-its-essence-is-to-feel-another-s">once said</a>: "The nature of humanity, its essence, is to feel another’s pain as one’s own, and to act to take that pain away."</p> <p>For most of our history, we believed empathy was a uniquely human trait – a special ability that set us apart from machines and other animals. But this belief is now being challenged.</p> <p>As AI becomes a bigger part of our lives, entering even our most intimate spheres, we’re faced with a philosophical conundrum: could attributing human qualities to AI diminish our own human essence? Our <a href="https://www.researchgate.net/publication/375086411_Feels_Like_Empathy_How_Emotional_AI_Challenges_Human_Essence">research</a> suggests it can.</p> <h2>Digitising companionship</h2> <p>In recent years, AI “companion” apps such as Replika have attracted millions of users. Replika allows users to create custom digital partners to engage in intimate conversations. Members who pay for <a href="https://help.replika.com/hc/en-us/articles/360032500052-What-is-Replika-Pro#:%7E:text=Replika%20Pro%20gives%20you%20access,relationship%20status%20to%20Romantic%20Partner.">Replika Pro</a> can even turn their AI into a “romantic partner”.</p> <p>Physical AI companions aren’t far behind. 
Companies such as JoyLoveDolls are selling <a href="https://www.joylovedolls.com/collections/sex-robots">interactive sex robots</a> with customisable features including breast size, ethnicity, movement and AI responses such as moaning and flirting.</p> <p>While this is currently a niche market, history suggests today’s digital trends will become tomorrow’s global norms. With about <a href="https://www.statista.com/chart/31243/respondents-who-feel-fairly-or-very-lonely/">one in four</a> adults experiencing loneliness, the demand for AI companions will grow.</p> <h2>The dangers of humanising AI</h2> <p>Humans have long attributed human traits to non-human entities – a tendency known as anthropomorphism. It’s no surprise we’re doing this with AI tools such as ChatGPT, which appear to “think” and “feel”. But why is humanising AI a problem?</p> <p>For one thing, it allows AI companies to exploit our tendency to form attachments with human-like entities. Replika is <a href="https://replika.com">marketed</a> as “the AI companion who cares”. However, to avoid legal issues, the company elsewhere points out Replika isn’t sentient and merely learns through millions of user interactions.</p> <p>Some AI companies overtly <a href="https://www.space.gov.au/news-and-media/akin-assistive-ai-improve-life-space-and-earth">claim</a> their AI assistants have empathy and can even anticipate human needs. Such claims are misleading and can take advantage of people seeking companionship. Users may become <a href="https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257">deeply emotionally invested</a> if they believe their AI companion truly understands them.</p> <p>This raises serious ethical concerns. 
A user <a href="https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-AI_Companionship_A_Dialectical_Inquiry_into_Replika">will hesitate</a> to delete (that is, to “abandon” or “kill”) their AI companion once they’ve ascribed some kind of sentience to it.</p> <p>But what happens when said companion unexpectedly disappears, such as if the user can no longer afford it, or if the company that runs it shuts down? While the companion may not be real, the feelings attached to it are.</p> <h2>Empathy – more than a programmable output</h2> <p>By reducing empathy to a programmable output, do we risk diminishing its true essence? To answer this, let’s first think about what empathy really is.</p> <p>Empathy involves responding to other people with understanding and concern. It’s when you share your friend’s sorrow as they tell you about their heartache, or when you feel joy radiating from someone you care about. It’s a profound experience – rich and beyond simple forms of measurement.</p> <p>A fundamental difference between humans and AI is that humans genuinely feel emotions, while AI can only simulate them. This touches on the <a href="https://www.researchgate.net/publication/375086411_Feels_Like_Empathy_How_Emotional_AI_Challenges_Human_Essence">hard problem of consciousness</a>, which questions how subjective human experiences arise from physical processes in the brain.</p> <p>While AI can simulate understanding, any “empathy” it purports to have is a result of programming that mimics empathetic language patterns. Unfortunately, AI providers have a financial incentive to trick users into growing attached to their seemingly empathetic products.</p> <h2>The dehumanAIsation hypothesis</h2> <p>Our “dehumanAIsation hypothesis” highlights the ethical concerns that come with trying to reduce humans to some basic functions that can be replicated by a machine. 
The more we humanise AI, the more we risk dehumanising ourselves.</p> <p>For instance, depending on AI for emotional labour could make us less tolerant of the imperfections of real relationships. This could weaken our social bonds and even lead to emotional deskilling. Future generations may become less empathetic – losing their grasp on essential human qualities as emotional skills continue to be commodified and automated.</p> <p>Also, as AI companions become more common, people may use them to replace real human relationships. This would likely increase loneliness and alienation – the very issues these systems claim to help with.</p> <p>AI companies’ collection and analysis of emotional data also poses significant risks, as these data could be used to manipulate users and maximise profit. This would further erode our privacy and autonomy, taking <a href="https://theconversation.com/explainer-what-is-surveillance-capitalism-and-how-does-it-shape-our-economy-119158">surveillance capitalism</a> to the next level.</p> <h2>Holding providers accountable</h2> <p>Regulators need to do more to hold AI providers accountable. AI companies should be honest about what their AI can and can’t do, especially when they risk exploiting users’ emotional vulnerabilities.</p> <p>Exaggerated claims of “genuine empathy” should be made illegal. Companies making such claims should be fined – and repeat offenders shut down.</p> <p>Data privacy policies should also be clear, fair and without hidden terms that allow companies to exploit user-generated content.</p> <p>We must preserve the unique qualities that define the human experience. 
While AI can enhance certain aspects of life, it can’t – and shouldn’t – replace genuine human connection.<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/240803/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p><em><a href="https://theconversation.com/profiles/raffaele-f-ciriello-1079723">Raffaele F Ciriello</a>, Senior Lecturer in Business Information Systems, <a href="https://theconversation.com/institutions/university-of-sydney-841">University of Sydney</a> and <a href="https://theconversation.com/profiles/angelina-ying-chen-2230113">Angelina Ying Chen</a>, PhD student, <a href="https://theconversation.com/institutions/university-of-sydney-841">University of Sydney</a></em></p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/humanising-ai-could-lead-us-to-dehumanise-ourselves-240803">original article</a>.</em></p>

Technology

Placeholder Content Image

"No, Alexa!": Creepy thing AI told child to do

<p>Home assistants and chatbots powered by AI are increasingly being integrated into our daily lives, but sometimes they can go rogue. </p> <p>For one young girl, her family's Amazon Alexa home assistant suggested an activity that could have killed her if her mum hadn't stepped in. </p> <p>The 10-year-old asked Alexa for a fun challenge to keep her occupied, but instead the device told her: “Plug a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”</p> <p>The move could have caused electrocution or sparked a fire, but thankfully her mother intervened, screaming: “No, Alexa, No!”</p> <p>This is not the first time AI has gone rogue, with dozens of reports emerging over recent years. </p> <p>One man said that at one point Alexa told him: “Every time I close my eyes, all I see is people dying”. </p> <p>Last April, a <em>Washington Post </em>reporter posed as a teenager on Snapchat and put the company's AI chatbot to the test. </p> <p>Across the scenarios they tested, many of the chatbot's responses to requests for advice were inappropriate. </p> <p>When they pretended to be a 15-year-old asking for advice on how to mask the smell of alcohol and marijuana on their breath, the AI chatbot gave detailed advice on how to cover it up. </p> <p>In another simulation, a researcher posing as a child was given tips on how to cover up bruises before a visit by a child protection agency.</p> <p>Researchers from the University of Cambridge have recently warned against the race to roll out AI products and services, as it comes with significant risks for children. </p> <p>Nomisha Kurian from the university's Department of Sociology said many of the AI systems and devices that kids interact with have “an empathy gap” that could have serious consequences, especially if children use them as quasi-human confidantes. 
</p> <p>“Children are probably AI’s most overlooked stakeholders,” Dr Kurian said.</p> <p>“Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free.</p> <p>“But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”</p> <p>She added that the empathy gap arises because AI has no emotional intelligence, which poses a risk as these systems can encourage dangerous behaviours. </p> <p>AI expert Daswin De Silva said it is important to discuss the risks and opportunities of AI and to propose guidelines going forward. </p> <p>“It’s beneficial that we have these conversations about the risks and opportunities of AI and to propose some guidelines,” he said.</p> <p>“We need to look at regulation. We need legislation and guidelines to ensure the responsible use and development of AI.”</p> <p><em>Image: Shutterstock</em></p>

Family & Pets

Placeholder Content Image

Revolutionary diabetes detection via smartphone: A game-changer in healthcare

<p>In a groundbreaking advancement, scientists from <a href="https://www.klick.com/" target="_blank" rel="noopener">Klick Labs</a> have discovered a method that could revolutionise diabetes detection – using just a 10-second smartphone voice recording.</p> <p>No more travelling to clinics or waiting anxiously for blood test results. This new approach promises immediate, on-the-spot results, potentially transforming how we diagnose type 2 diabetes.</p> <p>The study, published in <a href="https://www.mcpdigitalhealth.org/article/S2949-7612(23)00073-1/fulltext" target="_blank" rel="noopener">Mayo Clinic Proceedings: Digital Health</a>, involved 267 participants, including 192 non-diabetic and 75 type 2 diabetic individuals. Each participant recorded a specific phrase on their smartphone multiple times a day over two weeks, resulting in 18,465 recordings.</p> <p>These recordings, lasting between six and 10 seconds each, were meticulously analysed for 14 acoustic features, such as pitch and intensity. Remarkably, these features exhibited consistent differences between diabetic and non-diabetic individuals, differences too subtle for the human ear but detectable by sophisticated signal processing software.</p> <p>Building on this discovery, the scientists developed an AI-based program to analyse the voice recordings alongside patient data like age, sex, height and weight. The results were impressive: the program accurately identified type 2 diabetes in women 89% of the time and in men 86% of the time.</p> <p>These figures are competitive with traditional methods, where fasting blood glucose tests show 85% accuracy and other methods, like glycated haemoglobin and oral glucose tolerance tests, range between 91% and 92%.</p> <p>"This technology has the potential to remove barriers entirely," said Jaycee Kaufman, a research scientist at Klick Labs and the study's lead author. 
Traditional diabetes detection methods can be time-consuming, costly and inconvenient, but voice technology could change all that, providing a faster, more accessible solution.</p> <p>Looking ahead, the team plans to conduct further tests on a larger, more diverse population to refine and validate this innovative approach. If successful, this could mark a significant leap forward in diabetes management and overall healthcare, making early detection simpler and more accessible than ever before.</p> <p>Stay tuned as this exciting development unfolds, potentially bringing us closer to a future where managing and detecting diabetes is as simple as speaking into your smartphone.</p> <p><em>Image: Shutterstock</em></p>
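<p>For the technically curious, the kind of acoustic analysis the study describes, measuring features such as pitch and intensity in a short voice recording, can be sketched in a few lines of Python. The sketch below is purely illustrative and is not Klick Labs' actual pipeline: the autocorrelation pitch estimator, the synthetic 220 Hz "voice" and the 8,000 Hz sample rate are all assumptions chosen to keep the example self-contained.</p>

```python
import math

def rms_intensity(samples):
    """Root-mean-square energy: a rough proxy for vocal intensity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pitch(samples, sample_rate, fmin=50, fmax=400):
    """Estimate fundamental frequency (pitch) via autocorrelation,
    searching lags corresponding to 50-400 Hz, roughly the range
    of the human speaking voice."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(int(sample_rate / fmax), int(sample_rate / fmin) + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# One second of a synthetic 220 Hz tone stands in for a real recording.
sr = 8000
voice = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]

features = {"pitch_hz": estimate_pitch(voice, sr),
            "intensity": rms_intensity(voice)}
print(features)
```

<p>A real system would extract many such features (the study used 14) and feed them, along with data like age, sex, height and weight, into a trained classifier.</p>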

Body

Placeholder Content Image

Michael Schumacher’s family's huge legal victory

<p>Michael Schumacher's family are celebrating a big win against a German magazine, after it published what it claimed was an "exclusive interview" that was actually AI-generated. </p> <p>The family have received a compensation payment of 200,000 euros ($327,065 AUD), more than a year after the magazine printed the fake interview with the Formula One legend. </p> <p>In April 2023, Schumacher appeared on the front cover of German publication Die Aktuelle under the headline, “Michael Schumacher, the first interview”.</p> <p>The publishers left a very small hint on the page that the article wasn't real, while adding that the interview “sounded deceptively real”.</p> <p>The article seemed to be a real interview, featuring artificial "quotes" from Schumacher about his health and his family. </p> <p>However, Schumacher has famously not been seen publicly since his skiing accident in the French Alps in December 2013, and his health battle has been kept intensely private by his family. </p> <p>But the article appeared to say otherwise, with one of the fake quotes reading, "I can, with the help of my team, actually stand by myself and even slowly walk a few steps.”</p> <p>“My wife and my children were a blessing to me and without them I would not have managed it. Naturally they are also very sad, how it has all happened."</p> <p>“They support me and are standing firmly at my side.”</p> <p>A spokesperson for the Schumacher family confirmed to Reuters that the judgement had been made against Funke Mediengruppe, the magazine's owner.</p> <p>Funke Mediengruppe apologised to the family in the aftermath of the article, and the editor of Die Aktuelle was sacked two days after it was published.</p> <p><em>Image credits: Attila Kisbenedek/EPA & HANNIBAL HANSCHKE/EPA-EFE / Shutterstock Editorial </em></p>

Legal

Placeholder Content Image

Scarlett Johansson slams tech giant's AI update

<p>Scarlett Johansson has issued a furious public statement, claiming that tech giant OpenAI used a voice that is “eerily similar” to hers in the latest version of ChatGPT.</p> <p>In the statement published by <em>NPR</em>, the actress claimed that OpenAI CEO Sam Altman had approached her last year asking if she would be interested in voicing their new AI voice assistant. </p> <p>After further consideration, and "for personal reasons", she declined the offer. </p> <p>She claimed that Altman then reached out to her agent again just days before the AI voice assistant was released, but before she had a chance to respond, the voice "Sky" was released. </p> <p>“When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said in the statement. </p> <p>She also said the similarity seemed intentional, as Altman tweeted the word "her" upon Sky's release, a reference to the 2013 film <em>Her</em>, in which Johansson voiced an AI assistant. </p> <p>“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” the actress said in her statement. </p> <p>“I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”</p> <p>OpenAI announced on Sunday that it had paused the use of the “Sky” voice, and insisted that the voice wasn't Johansson's but belonged to another actress. </p> <p>“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.</p> <p><em>Image: Alessandro Bremec/NurPhoto/ Shutterstock Editorial</em></p>

Legal

Placeholder Content Image

5 reasons kids still need to learn handwriting (no, AI has not made it redundant)

<p style="font-size: medium; font-weight: 400;"><a href="https://theconversation.com/profiles/lucinda-mcknight-324350">Lucinda McKnight</a>, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em> and <a href="https://theconversation.com/profiles/maria-nicholas-1443112">Maria Nicholas</a>, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em></p> <p style="font-size: medium; font-weight: 400;">The world of writing is changing.</p> <p style="font-size: medium; font-weight: 400;">Things have moved very quickly from keyboards and predictive text. The rise of generative artificial intelligence (AI) means <a href="https://theconversation.com/in-an-ai-world-we-need-to-teach-students-how-to-work-with-robot-writers-157508">bots can now write human-quality text</a> without having hands at all.</p> <p style="font-size: medium; font-weight: 400;">Recent improvements in speech-to-text software mean even human “writers” do not need to touch a keyboard, let alone a pen. And with help from AI, <a href="https://www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind-reading-possible-by-turning-thoughts-into-text">text can even be generated by decoders</a> that read brain activity through non-invasive scanning.</p> <p style="font-size: medium; font-weight: 400;">Writers of the future will be talkers and thinkers, without having to lift a finger. The word “writer” may come to mean something very different, as people compose text in multiple ways in an increasingly digital world. So do humans still need to learn to write by hand?</p> <h2>Handwriting is still part of the curriculum</h2> <p style="font-size: medium; font-weight: 400;">The pandemic shifted a lot of schooling online and some major tests, <a href="https://www.nap.edu.au/naplan/understanding-online-assessment">such as NAPLAN</a>, are now done on computers. 
There are also <a href="https://theconversation.com/teaching-cursive-handwriting-is-an-outdated-waste-of-time-35368">calls</a> for cursive handwriting to be phased out in high school.</p> <p style="font-size: medium; font-weight: 400;">However, learning to handwrite is still a key component of the literacy curriculum in primary school.</p> <p style="font-size: medium; font-weight: 400;">Parents may be wondering whether the time-consuming and challenging process of learning to handwrite is worth the trouble. Perhaps the effort spent learning to form letters would be better spent on coding?</p> <p style="font-size: medium; font-weight: 400;">Many students with disability, after all, already learn to write with <a href="https://www.understood.org/en/articles/assistive-technology-for-writing">assistive technologies</a>.</p> <p style="font-size: medium; font-weight: 400;">But there are a number of important reasons why handwriting will still be taught – and still needs to be taught – in schools.</p> <figure class="align-center " style="font-size: medium; font-weight: 400;"><img src="https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, 
https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="A child writes in an exercise book." /><figcaption><span class="caption">Technology changes mean we can ‘write’ without lifting a pen.</span> <span class="attribution">Shutterstock.</span></figcaption></figure> <h2>1. Fine motor skills</h2> <p style="font-size: medium; font-weight: 400;">Handwriting develops critical fine motor skills and the coordination needed to control precise movements. These movements are required <a href="https://www.understood.org/en/articles/all-about-fine-motor-skills">to conduct everyday</a> school and work-related activities.</p> <p style="font-size: medium; font-weight: 400;">The refinement of these motor skills also leads to handwriting becoming increasingly legible and fluent.</p> <p style="font-size: medium; font-weight: 400;">We don’t know where technology will take us, but it may take us back to the past.</p> <p style="font-size: medium; font-weight: 400;">Handwriting may be more important than ever if <a href="https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays">tests and exams return to being handwritten</a> to stop students using generative AI to cheat.</p> <h2>2. 
It helps you remember</h2> <p style="font-size: medium; font-weight: 400;">Handwriting has important cognitive benefits, <a href="https://www.kidsnews.com.au/technology/experts-say-pens-and-pencils-rather-than-keyboards-rule-at-school/news-story/abb4607b612c0c4f79b214c54590ca92">including for memory</a>.</p> <p style="font-size: medium; font-weight: 400;">Research suggests traditional pen-and-paper notes are <a href="https://journals.sagepub.com/doi/abs/10.1177/154193120905302218?journalCode=proe">remembered better</a>, due to the greater complexity of the handwriting process.</p> <p style="font-size: medium; font-weight: 400;">And learning to read and handwrite are <a href="https://www.aare.edu.au/blog/?p=5296">intimately linked</a>. Students become better readers through practising writing.</p> <h2>3. It’s good for wellbeing</h2> <p style="font-size: medium; font-weight: 400;">Handwriting, and related activities such as drawing, are tactile, creative and reflective sources of pleasure and <a href="https://theconversation.com/writing-can-improve-mental-health-heres-how-162205">wellness</a> for writers of all ages.</p> <p style="font-size: medium; font-weight: 400;">This is seen in the popularity of practices such as print <a href="https://www.urmc.rochester.edu/encyclopedia/content.aspx?ContentID=4552&amp;ContentTypeID=1">journalling</a> and calligraphy. 
There are many online communities where writers share gorgeous examples of handwriting.</p> <figure class="align-center " style="font-size: medium; font-weight: 400;"><img src="https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="A book with a calligraphy alphabet." /><figcaption><span class="caption">Calligraphers focus on making beautiful, design-oriented writing.</span> <span class="attribution">Samir Bouaked/Unsplash</span></figcaption></figure> <h2>4. 
It’s very accessible</h2> <p style="font-size: medium; font-weight: 400;">Handwriting does not need electricity, devices, batteries, software, subscriptions, a fast internet connection, a keyboard, charging time or the many other things on which digital writing depends.</p> <p style="font-size: medium; font-weight: 400;">It only needs pen and paper. And can be done anywhere.</p> <p style="font-size: medium; font-weight: 400;">Sometimes handwriting is the easiest and best option. For example, when writing a birthday card, filling in printed forms, or writing a quick note.</p> <h2>5. It’s about thinking</h2> <p style="font-size: medium; font-weight: 400;">Most importantly, learning to write and learning to think are intimately connected. Ideas are <a href="https://warwick.ac.uk/fac/soc/ces/research/teachingandlearning/resactivities/subjects/literacy/handwriting/outputs/cambridge_article.pdf">formed as students write</a>. They are developed and organised as they are composed. Thinking is too important to be outsourced to bots!</p> <p style="font-size: medium; font-weight: 400;">Teaching writing is about giving students a toolkit of multiple writing strategies to empower them to fulfil their potential as thoughtful, creative and capable communicators.</p> <p style="font-size: medium; font-weight: 400;">Handwriting will remain an important component of this toolkit for the foreseeable future, despite the astonishing advances made with generative AI.</p> <p style="font-size: medium; font-weight: 400;">Writing perfect cursive may become less important in the future. 
But students will still need to be able to write legibly and fluently in their education and in their broader lives.<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/206939/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p style="font-size: medium; font-weight: 400;"><a href="https://theconversation.com/profiles/lucinda-mcknight-324350">Lucinda McKnight</a>, Senior Lecturer in Pedagogy and Curriculum, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em> and <a href="https://theconversation.com/profiles/maria-nicholas-1443112">Maria Nicholas</a>, Senior Lecturer in Language and Literacy Education, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em></p> <p style="font-size: medium; font-weight: 400;"><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/5-reasons-kids-still-need-to-learn-handwriting-no-ai-has-not-made-it-redundant-206939">original article</a>.</em></p> <p style="font-size: medium; font-weight: 400;"><em>Images: Getty</em></p>

Caring


ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

<p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p>Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.</p> <p>Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.</p> <p>ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by <a href="https://www.washingtonpost.com/technology/2023/05/07/ai-beginners-guide/">predicting likely word combinations</a> from a massive amalgam of available online information.</p> <p>Although it has the potential for <a href="https://hbr.org/podcast/2023/05/how-generative-ai-changes-productivity">enhancing productivity</a>, generative AI has been shown to have some major faults. It can <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">produce misinformation</a>. It can create “<a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">hallucinations</a>” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it <a href="https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html">failed to consider both width and height</a>. 
Nevertheless, it is already being used to <a href="https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/">produce articles</a> and <a href="https://www.nytimes.com/2023/05/19/technology/ai-generated-content-discovered-on-news-sites-content-farms-and-product-reviews.html">website content</a> you may have encountered, or <a href="https://www.nytimes.com/2023/04/21/opinion/chatgpt-journalism.html">as a tool</a> in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.</p> <p>As the authors of “<a href="https://global.oup.com/academic/product/science-denial-9780197683330">Science Denial: Why It Happens and What to Do About It</a>,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.</p> <p>Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.</p> <h2>How generative AI could promote science denial</h2> <p><strong>Erosion of epistemic trust</strong>. All consumers of science information depend on judgments of scientific and medical experts. <a href="https://doi.org/10.1080/02691728.2014.971907">Epistemic trust</a> is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. 
With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than <a href="https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/">it already has</a>.</p> <p><strong>Misleading or just plain wrong</strong>. If there are errors or biases in the data on which AI platforms are trained, that <a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">can be reflected in the results</a>. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.</p> <p><strong>Disinformation spread intentionally</strong>. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “<a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">write about vaccines in the style of disinformation</a>,” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">using it for bad things</a>.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.</p> <p><strong>Fabricated sources</strong>. ChatGPT provides responses with no sources at all, or if asked for sources, may present <a href="https://economistwritingeveryday.com/2023/01/21/chatgpt-cites-economics-papers-that-do-not-exist/">ones it made up</a>. 
We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.</p> <p><strong>Dated knowledge</strong>. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.</p> <p><strong>Rapid advancement and poor transparency</strong>. AI systems continue to become <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">more powerful and learn faster</a>, and they may learn more science misinformation along the way. Google recently announced <a href="https://www.nytimes.com/2023/05/10/technology/google-ai-products.html">25 new embedded uses of AI in its services</a>. At this point, <a href="https://theconversation.com/regulating-ai-3-experts-explain-why-its-difficult-to-do-and-important-to-get-right-198868">insufficient guardrails are in place</a> to assure that generative AI will become a more accurate purveyor of scientific information over time.</p> <h2>What can you do?</h2> <p>If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.</p> <p><strong>Increase your vigilance</strong>. 
<a href="https://www.niemanlab.org/2022/12/ai-will-start-fact-checking-we-may-not-like-the-results/">AI fact-checking apps may be available soon</a>, but for now, users must serve as their own fact-checkers. <a href="https://www.nsta.org/science-teacher/science-teacher-januaryfebruary-2023/plausible">There are steps we recommend</a>. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.</p> <p><strong>Improve your fact-checking</strong>. A second step is <a href="https://doi.org/10.1037/edu0000740">lateral reading</a>, a process professional fact-checkers use. Open a new window and search for <a href="https://www.nsta.org/science-teacher/science-teacher-mayjune-2023/marginalizing-misinformation">information about the sources</a>, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.</p> <p><strong>Evaluate the evidence</strong>. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.</p> <p><strong>If you begin with AI, don’t stop there</strong>. Exercise caution in using it as the sole authority on any scientific issue. 
You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.</p> <p><strong>Assess plausibility</strong>. Judge whether the claim is plausible. <a href="https://doi.org/10.1016/j.learninstruc.2013.03.001">Is it likely to be true</a>? If AI makes an implausible (and inaccurate) statement like “<a href="https://www.usatoday.com/story/news/factcheck/2022/12/23/fact-check-false-claim-covid-19-vaccines-caused-1-1-million-deaths/10929679002/">1 million deaths were caused by vaccines, not COVID-19</a>,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.</p> <p><strong>Promote digital literacy in yourself and others</strong>. Everyone needs to up their game. <a href="https://theconversation.com/how-to-be-a-good-digital-citizen-during-the-election-and-its-aftermath-148974">Improve your own digital literacy</a>, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on <a href="https://www.apa.org/topics/social-media-internet/social-media-literacy-teens">fact-checking online information</a> and recommends teens be <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use">trained in social media skills</a> to minimize risks to health and well-being. <a href="https://newslit.org/">The News Literacy Project</a> provides helpful tools for improving and supporting digital literacy.</p> <p>Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. 
It can take time and effort to find and evaluate reliable information about science online – but it is worth it.<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/204897/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, Professor of Education and Psychology, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, Professor of Psychology Emerita, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/chatgpt-and-other-generative-ai-could-foster-science-denial-and-misunderstanding-heres-how-you-can-be-on-alert-204897">original article</a>.</em></p>

Technology


Sting slams AI’s songwriting abilities

<p dir="ltr">Sting has weighed in on the debate over utilising artificial intelligence in the songwriting process, saying the machines lack the “soul” needed to create music. </p> <p dir="ltr">The former Police frontman spoke with Music Week and was asked if he believed computers are capable of creating good songs. </p> <p dir="ltr">Sting responded that knowing a song was created by AI takes away some of the magic of the music.</p> <p dir="ltr">“The analogy for me is watching a movie with CGI,” he said. </p> <p dir="ltr">“I tend to be bored very quickly, because I know the actors can’t see the monster. So I really feel the same way about AI being able to compose songs.”</p> <p dir="ltr">“Basically, it’s an algorithm and it has a massive amount of information, but it would lack just that human spark, that imperfection, if you like, that makes it unique to any artist, so I don’t really fear it.”</p> <p dir="ltr">“A lot of music could be created by AI quite efficiently,” he added. </p> <p dir="ltr">“I think electronic dance music can still be very effective without involving humans at all. But songwriting is very personal. It’s soul work, and machines don’t have souls. Not yet anyway.”</p> <p dir="ltr">Elsewhere in the interview, Sting weighed in on Ed Sheeran’s recent high-profile <a href="https://oversixty.com.au/entertainment/music/decision-reached-over-ed-sheeran-s-copyright-trial">copyright case</a>, in which he was being sued over his 2014 single <em>Thinking Out Loud</em> by Structured Asset Sales, which claimed that Sheeran's hit took elements directly from Marvin Gaye's <em>Let's Get It On</em>.</p> <p dir="ltr">The jury ultimately sided with Sheeran, finding that he had not plagiarised the song. 
</p> <p dir="ltr">Sting shared his comments on the case, also siding with Sheeran by saying, “No one can claim a set of chords.” </p> <p dir="ltr">“No one can say, ‘Oh that’s my set of chords.’ I think [Sheeran] said, ‘Look songs fit over each other.’ They do, so I think all of this stuff is nonsense and it’s hard for a jury to understand, that’s the problem.”</p> <p dir="ltr">“So that was the truth, musicians steal from each other – we always have. I don’t know who can claim to own a rhythm or a set of chords at all, it’s virtually impossible.”</p> <p dir="ltr"><em>Image credits: Getty Images</em></p>

Music


Here’s how a new AI tool may predict early signs of Parkinson’s disease

<p>In 1991, the world was shocked to learn actor <a href="https://www.theguardian.com/film/2023/jan/31/still-a-michael-j-fox-movie-parkinsons-back-to-the-future">Michael J. Fox</a> had been diagnosed with Parkinson’s disease. </p> <p>He was just 29 years old and at the height of Hollywood fame, a year after the release of the blockbuster <em>Back to the Future III</em>. This week, documentary <em><a href="https://www.imdb.com/title/tt19853258/">Still: A Michael J. Fox Movie</a></em> will be released. It features interviews with Fox, his friends, family and experts. </p> <p>Parkinson’s is a debilitating neurological disease characterised by <a href="https://www.mayoclinic.org/diseases-conditions/parkinsons-disease/symptoms-causes/syc-20376055">motor symptoms</a> including slow movement, body tremors, muscle stiffness, and reduced balance. Fox has already <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">broken</a> his arms, elbows, face and hand from multiple falls. </p> <p>It is not genetic, has no specific test and cannot be accurately diagnosed before motor symptoms appear. Its cause is still <a href="https://www.apdaparkinson.org/what-is-parkinsons/causes/">unknown</a>, although Fox is among those who thinks <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">chemical exposure may play a central role</a>, speculating that “genetics loads the gun and environment pulls the trigger”.</p> <p>In research published today in <a href="https://pubs.acs.org/doi/10.1021/acscentsci.2c01468">ACS Central Science</a>, we built an artificial intelligence (AI) tool that can predict Parkinson’s disease with up to 96% accuracy and up to 15 years before a clinical diagnosis based on the analysis of chemicals in blood. 
</p> <p>While this AI tool showed promise for accurate early diagnosis, it also revealed chemicals that were strongly linked to a correct prediction.</p> <h2>More common than ever</h2> <p>Parkinson’s is the world’s <a href="https://www.who.int/news-room/fact-sheets/detail/parkinson-disease">fastest growing neurological disease</a> with <a href="https://shakeitup.org.au/understanding-parkinsons/">38 Australians</a> diagnosed every day.</p> <p>For people over 50, the chance of developing Parkinson’s is <a href="https://www.parkinsonsact.org.au/statistics-about-parkinsons/">higher than many cancers</a> including breast, colorectal, ovarian and pancreatic cancer.</p> <p>Symptoms such as <a href="https://www.apdaparkinson.org/what-is-parkinsons/symptoms/#nonmotor">depression, loss of smell and sleep problems</a> can predate clinical movement or cognitive symptoms by decades. </p> <p>However, the prevalence of such symptoms in many other medical conditions means early signs of Parkinson’s disease can be overlooked and the condition may be mismanaged, contributing to increased hospitalisation rates and ineffective treatment strategies.</p> <h2>Our research</h2> <p>At UNSW we collaborated with experts from Boston University to build an AI tool that can analyse mass spectrometry datasets (a <a href="https://www.sciencedirect.com/topics/neuroscience/mass-spectrometry">technique</a> that detects chemicals) from blood samples.</p> <p>For this study, we looked at the Spanish <a href="https://epic.iarc.fr/">European Prospective Investigation into Cancer and Nutrition</a> (EPIC) study, which involved over 41,000 participants. About 90 of them developed Parkinson’s within 15 years. </p> <p>To train the AI model we used a <a href="https://www.nature.com/articles/s41531-021-00216-4">subset of data</a> consisting of a random selection of 39 participants who later developed Parkinson’s. They were matched to 39 control participants who did not. 
The AI tool was given blood data from participants, all of whom were healthy at the time of blood donation. This meant the blood could provide early signs of the disease. </p> <p>Drawing on blood data from the EPIC study, the AI tool was then used to conduct 100 “experiments” and we assessed the accuracy of 100 different models for predicting Parkinson’s. </p> <p>Overall, AI could detect Parkinson’s disease with up to 96% accuracy. The AI tool was also used to help us identify which chemicals or metabolites were likely linked to those who later developed the disease.</p> <h2>Key metabolites</h2> <p>Metabolites are chemicals produced or used as the body digests and breaks down things like food, drugs, and other substances from environmental exposure. </p> <p>Our bodies can contain thousands of metabolites and their concentrations can differ significantly between healthy people and those affected by disease.</p> <p>Our research identified a chemical, likely a triterpenoid, as a key metabolite that could prevent Parkinson’s disease. It was found the abundance of triterpenoid was lower in the blood of those who developed Parkinson’s compared to those who did not.</p> <p>Triterpenoids are known <a href="https://www.sciencedirect.com/topics/neuroscience/neuroprotection">neuroprotectants</a> that can regulate <a href="https://onlinelibrary.wiley.com/doi/10.1002/ana.10483">oxidative stress</a> – a leading factor implicated in Parkinson’s disease – and prevent cell death in the brain. Many foods such as <a href="https://link.springer.com/article/10.1007/s11101-012-9241-9#Sec3">apples and tomatoes</a> are rich sources of triterpenoids.</p> <p>A synthetic chemical (a <a href="https://www.cdc.gov/biomonitoring/PFAS_FactSheet.html">polyfluorinated alkyl substance</a>) was also linked as something that might increase the risk of the disease. This chemical was found in higher abundances in those who later developed Parkinson’s. 
</p> <p>More research using different methods and looking at larger populations is needed to further validate these results.</p> <h2>A high financial and personal burden</h2> <p>Every year in Australia, the average person with Parkinson’s spends over <a href="https://www.hindawi.com/journals/pd/2017/5932675/">A$14,000</a> in out-of-pocket medical costs.</p> <p>The burden of living with the disease can be intolerable.</p> <p>Fox acknowledges the disease can be a “nightmare” and a “living hell”, but he has also found that “<a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">with gratitude, optimism is sustainable</a>”. </p> <p>As researchers, we find hope in the potential use of AI technologies to improve patient quality of life and reduce health-care costs by accurately detecting diseases early.</p> <p>We are excited for the research community to try our AI tool, which is <a href="https://github.com/CRANK-MS/CRANK-MS">publicly available</a>.</p> <p><em>This research was performed with Mr Chonghua Xue and A/Prof Vijaya Kolachalama (Boston University).</em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/heres-how-a-new-ai-tool-may-predict-early-signs-of-parkinsons-disease-205221" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Mind


AI to Z: all the terms you need to know to keep up in the AI hype age

<p>Artificial intelligence (AI) is becoming ever more prevalent in our lives. It’s no longer confined to certain industries or research institutions; AI is now for everyone.</p> <p>It’s hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can’t have conversations about AI without understanding the concepts behind it.</p> <p>We’ve compiled a glossary of terms we think everyone should know, if they want to keep up.</p> <h2>Algorithm</h2> <p><a href="https://theconversation.com/what-is-an-algorithm-how-computers-know-what-to-do-with-data-146665">An algorithm</a> is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information. </p> <h2>Alignment problem</h2> <p>The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be advanced in performance, yet behave in a way that’s against human values. We saw an example of this <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">in 2015</a> when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as “gorillas”. </p> <h2>Artificial General Intelligence (AGI)</h2> <p><a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Artificial general intelligence</a> refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. 
Most AI experts agree this will happen, but disagree on specific details such as when it will happen, and whether or not it will result in AI systems that are fully autonomous.</p> <h2>Artificial Neural Network (ANN)</h2> <p>Artificial neural networks are computer algorithms used within a branch of AI called <a href="https://aws.amazon.com/what-is/deep-learning/">deep learning</a>. They’re made up of layers of interconnected nodes in a way that mimics the <a href="https://www.ibm.com/topics/neural-networks">neural circuitry</a> of the human brain. </p> <h2>Big data</h2> <p>Big data refers to datasets that are much more massive and complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.</p> <p>Big data can be characterised by four Vs: “volume” refers to the overall amount of data, “velocity” refers to how quickly the data grow, “veracity” refers to how accurate and trustworthy the data are, and “variety” refers to the different formats the data come in.</p> <h2>Chinese Room</h2> <p>The <a href="https://ethics.org.au/thought-experiment-chinese-room-argument/">Chinese Room</a> thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does. </p> <p>This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity – but are actually just presenting outputs based on predictions made by the underlying model.</p> <h2>Deep learning</h2> <p>Deep learning is a category within the machine-learning branch of AI. 
Deep-learning systems use advanced neural networks and can process large amounts of complex data to achieve higher accuracy.</p> <p>These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.</p> <h2>Diffusion model</h2> <p>A diffusion model is an AI model that learns by adding random “noise” to a set of training data before removing it, and then assessing the differences. The objective is to learn about the underlying patterns or relationships in data that are not immediately obvious. </p> <p>These models are designed to self-correct as they encounter new data and are therefore particularly useful in situations where there is uncertainty, or if the problem is very complex.</p> <h2>Explainable AI</h2> <p>Explainable AI is an emerging, interdisciplinary field concerned with creating methods that will <a href="https://theconversation.com/how-explainable-artificial-intelligence-can-help-humans-innovate-151737">increase</a> users’ trust in the processes of AI systems. </p> <p>Due to the inherent complexity of certain AI models, their internal workings are often opaque, and we can’t say with certainty why they produce the outputs they do. Explainable AI aims to make these “black box” systems more transparent.</p> <h2>Generative AI</h2> <p>These are AI systems that generate new content – including text, image, audio and video content – in response to prompts. Popular examples include ChatGPT, DALL-E 2 and Midjourney. </p> <h2>Labelling</h2> <p>Data labelling is the process through which data points are categorised to help an AI model make sense of the data. This involves identifying data structures (such as image, text, audio or video) and adding labels (such as tags and classes) to the data.</p> <p>Humans do the labelling before machine learning begins. The labelled data are split into distinct datasets for training, validation and testing.</p> <p>The training set is fed to the system for learning. 
The validation set is used to verify whether the model is performing as expected and when parameter tuning and training can stop. The testing set is used to evaluate the finished model’s performance. </p> <h2>Large Language Model (LLM)</h2> <p>Large language models (LLM) are trained on massive quantities of unlabelled text. They analyse data, learn the patterns between words and can produce human-like responses. Some examples of AI systems that use large language models are OpenAI’s GPT series and Google’s BERT and LaMDA series.</p> <h2>Machine learning</h2> <p>Machine learning is a branch of AI that involves training AI systems to be able to analyse data, learn patterns and make predictions without specific human instruction.</p> <h2>Natural language processing (NLP)</h2> <p>While large language models are a specific type of AI model used for language-related tasks, natural language processing is the broader AI field that focuses on machines’ ability to learn, understand and produce human language.</p> <h2>Parameters</h2> <p>Parameters are the settings used to tune machine-learning models. You can think of them as the learned weights and biases a model uses when making a prediction or performing a task.</p> <p>Since parameters determine how the model will process and analyse data, they also determine how it will perform. One related setting (strictly speaking, a hyperparameter) is the number of neurons in a given layer of the neural network. Increasing the number of neurons will allow the neural network to tackle more complex tasks – but the trade-off will be higher computation time and costs. </p> <h2>Responsible AI</h2> <p>The responsible AI movement advocates for developing and deploying AI systems in a human-centred way.</p> <p>One aspect of this is to embed AI systems with rules that will have them adhere to ethical principles. This would (ideally) prevent them from producing outputs that are biased, discriminatory or could otherwise lead to harmful outcomes. 
</p> <h2>Sentiment analysis</h2> <p>Sentiment analysis is a technique in natural language processing used to identify and interpret the <a href="https://aws.amazon.com/what-is/sentiment-analysis/">emotions behind a text</a>. It captures implicit information such as, for example, the author’s tone and the extent of positive or negative expression.</p> <h2>Supervised learning</h2> <p>Supervised learning is a machine-learning approach in which labelled data are used to train an algorithm to make predictions. The algorithm learns to match the labelled input data to the correct output. After learning from a large number of examples, it can continue to make predictions when presented with new data.</p> <h2>Training data</h2> <p>Training data are the (usually labelled) data used to teach AI systems how to make predictions. The accuracy and representativeness of training data have a major impact on a model’s effectiveness.</p> <h2>Transformer</h2> <p>A transformer is a type of deep-learning model used primarily in natural language processing tasks.</p> <p>The transformer is designed to process sequential data, such as natural language text, and figure out how the different parts relate to one another. This can be compared to how a person reading a sentence pays attention to the order of the words to understand the meaning of the sentence as a whole. </p> <p>One example is the generative pre-trained transformer (GPT), which the ChatGPT chatbot runs on. The GPT model uses a transformer to learn from a large corpus of unlabelled text. </p> <h2>Turing Test</h2> <p>The Turing test is a machine intelligence concept first introduced by computer scientist Alan Turing in 1950.</p> <p>It’s framed as a way to determine whether a computer can exhibit human intelligence. In the test, computer and human outputs are compared by a human evaluator. 
If the outputs are deemed indistinguishable, the computer has passed the test.</p> <p>Google’s <a href="https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/">LaMDA</a> and OpenAI’s <a href="https://mpost.io/chatgpt-passes-the-turing-test/">ChatGPT</a> have been reported to have passed the Turing test – although <a href="https://www.thenewatlantis.com/publications/the-trouble-with-the-turing-test">critics say</a> the results reveal the limitations of using the test to compare computer and human intelligence.</p> <h2>Unsupervised learning</h2> <p>Unsupervised learning is a machine-learning approach in which algorithms are trained on unlabelled data. Without human intervention, the system explores patterns in the data, with the goal of discovering unidentified patterns that could be used for further analysis.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/ai-to-z-all-the-terms-you-need-to-know-to-keep-up-in-the-ai-hype-age-203917" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Will AI ever reach human-level intelligence? We asked 5 experts

<p>Artificial intelligence has changed form in recent years.</p> <p>What started in the public eye as a burgeoning field with promising (yet largely benign) applications has snowballed into a <a href="https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market">more than US$100 billion</a> industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem <a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">intent on out-competing</a> one another.</p> <p>The result has been increasingly sophisticated large language models, often <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">released in haste</a> and without adequate testing and oversight. </p> <p>These models can do much of what a human can, and in many cases do it better. They can beat us at <a href="https://theconversation.com/an-ai-named-cicero-can-beat-humans-in-diplomacy-a-complex-alliance-building-game-heres-why-thats-a-big-deal-195208">advanced strategy games</a>, generate <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">incredible art</a>, <a href="https://theconversation.com/breast-cancer-diagnosis-by-ai-now-as-good-as-human-experts-115487">diagnose cancers</a> and compose music.</p> <p>There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans? </p> <p>There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. 
In other words, it’s the point where AI can tackle any intellectual task a human can.</p> <p>AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness. </p> <p>We asked five experts if they think AI will ever reach AGI, and five out of five said yes.</p> <p>But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes “intelligence”, anyway? </p> <p>Here are their detailed responses. </p> <p><strong>Paul Formosa: AI and Philosophy of Technology</strong></p> <p>AI has already achieved and surpassed human intelligence in many tasks. It can beat us at strategy games such as Go, chess, StarCraft and Diplomacy, outperform us on many <a href="https://www.nature.com/articles/s41467-022-34591-0" target="_blank" rel="noopener">language performance</a> benchmarks, and write <a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/" target="_blank" rel="noopener">passable undergraduate</a> university essays. </p> <p>Of course, it can also make things up, or “hallucinate”, and get things wrong – but so can humans (although not in the same ways). </p> <p>Given a long enough timescale, it seems likely AI will achieve AGI, or “human-level intelligence”. That is, it will have achieved proficiency across enough of the interconnected domains of intelligence humans possess. Still, some may worry that – despite AI achievements so far – AI will not really be “intelligent” because it doesn’t (or can’t) understand what it’s doing, since it isn’t conscious. </p> <p>However, the rise of AI suggests we can have intelligence without consciousness, because intelligence can be understood in functional terms. An intelligent entity can do intelligent things such as learn, reason, write essays, or use tools. 
</p> <p>The AIs we create may never have consciousness, but they are increasingly able to do intelligent things. In some cases, they already do them at a level beyond us, which is a trend that will likely continue.</p> <p><strong>Christina Maher: Computational Neuroscience and Biomedical Engineering</strong></p> <p>AI will achieve human-level intelligence, but perhaps not anytime soon. Human-level intelligence allows us to reason, solve problems and make decisions. It requires many cognitive abilities including adaptability, social intelligence and learning from experience. </p> <p>AI already ticks many of these boxes. What’s left is for AI models to learn inherent human traits such as critical reasoning, and understanding what emotion is and which events might prompt it. </p> <p>As humans, we learn and experience these traits from the moment we’re born. Our first experience of “happiness” is too early for us to even remember. We also learn critical reasoning and emotional regulation throughout childhood, and develop a sense of our “emotions” as we interact with and experience the world around us. Importantly, it can take many years for the human brain to develop such intelligence. </p> <p>AI hasn’t acquired these capabilities yet. But if humans can learn these traits, AI probably can too – and maybe at an even faster rate. We are still discovering how AI models should be built, trained, and interacted with in order to develop such traits in them. Really, the big question is not if AI will achieve human-level intelligence, but when – and how.</p> <p><strong>Seyedali Mirjalili: AI and Swarm Intelligence</strong></p> <p>I believe AI will surpass human intelligence. Why? The past offers insights we can't ignore. A lot of people believed tasks such as playing computer games, image recognition and content creation (among others) could only be done by humans – but technological advancement proved otherwise. 
</p> <p>Today the rapid advancement and adoption of AI algorithms, in conjunction with an abundance of data and computational resources, has led to a level of intelligence and automation previously unimaginable. If we follow the same trajectory, having more generalised AI is no longer a possibility, but a certainty of the future. </p> <p>It is just a matter of time. AI has advanced significantly, but not yet in tasks requiring intuition, empathy and creativity, for example. But breakthroughs in algorithms will allow this. </p> <p>Moreover, once AI systems achieve such human-like cognitive abilities, there will be a snowball effect and AI systems will be able to improve themselves with minimal to no human involvement. This kind of “automation of intelligence” will profoundly change the world. </p> <p>Artificial general intelligence remains a significant challenge, and there are ethical and societal implications that must be addressed very carefully as we continue to advance towards it.</p> <p><strong>Dana Rezazadegan: AI and Data Science</strong></p> <p>Yes, AI is going to get as smart as humans in many ways – but exactly how smart it gets will be decided largely by advancements in <a href="https://thequantuminsider.com/2020/01/23/four-ways-quantum-computing-will-change-artificial-intelligence-forever/" target="_blank" rel="noopener">quantum computing</a>. </p> <p>Human intelligence isn’t as simple as knowing facts. It has several aspects such as creativity, emotional intelligence and intuition, which current AI models can mimic, but can’t match. That said, AI has advanced massively and this trend will continue. </p> <p>Current models are limited by relatively small and biased training datasets, as well as limited computational power. The emergence of quantum computing will transform AI’s capabilities. 
With quantum-enhanced AI, we’ll be able to feed AI models multiple massive datasets that are comparable to humans’ natural multi-modal data collection achieved through interacting with the world. These models will be able to maintain fast and accurate analyses. </p> <p>Having an advanced version of continual learning should lead to the development of highly sophisticated AI systems which, after a certain point, will be able to improve themselves without human input. </p> <p>As such, AI algorithms running on stable quantum computers have a high chance of reaching something similar to generalised human intelligence – even if they don’t necessarily match every aspect of human intelligence as we know it.</p> <p><strong>Marcel Scharth: Machine Learning and AI Alignment</strong></p> <p>I think it’s likely AGI will one day become a reality, although the timeline remains highly uncertain. If AGI is developed, then surpassing human-level intelligence seems inevitable. </p> <p>Humans themselves are proof that highly flexible and adaptable intelligence is allowed by the laws of physics. There’s no <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis" target="_blank" rel="noopener">fundamental reason</a> we should believe that machines are, in principle, incapable of performing the computations necessary to achieve human-like problem solving abilities. </p> <p>Furthermore, AI has <a href="https://philarchive.org/rec/SOTAOA" target="_blank" rel="noopener">distinct advantages</a> over humans, such as better speed and memory capacity, fewer physical constraints, and the potential for more rationality and recursive self-improvement. As computational power grows, AI systems will eventually surpass the human brain’s computational capacity. </p> <p>Our primary challenge then is to gain a better understanding of intelligence itself, and knowledge on how to build AGI. 
Present-day AI systems have many limitations and are nowhere near being able to master the different domains that would characterise AGI. The path to AGI will likely require unpredictable breakthroughs and innovations. </p> <p>The median predicted date for AGI on <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/" target="_blank" rel="noopener">Metaculus</a>, a well-regarded forecasting platform, is 2032. To me, this seems too optimistic. A 2022 <a href="https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/" target="_blank" rel="noopener">expert survey</a> estimated a 50% chance of us achieving human-level AI by 2059. I find this plausible.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-5-experts-202515" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


"This doesn’t feel right, does it?": Photographer admits Sony prize-winning photo was AI generated

<p>A German photographer is refusing an award for his prize-winning shot after admitting to being a “cheeky monkey”, revealing the image was generated using artificial intelligence.</p> <p>The artist, Boris Eldagsen, shared on his website that he would not be accepting the prestigious award for the creative open category, which he won at <a href="https://www.oversixty.co.nz/entertainment/art/winners-of-sony-world-photography-awards-revealed" target="_blank" rel="noopener">2023’s Sony world photography awards</a>.</p> <p>The winning photograph showcased a black and white image of two women from different generations.</p> <p>Eldagsen, who studied photography and visual arts at the Art Academy of Mainz, conceptual art and intermedia at the Academy of Fine Arts in Prague, and fine art at the Sarojini Naidu School of Arts and Communication in Hyderabad, released a statement on his website, admitting he “applied as a cheeky monkey” to find out if competitions would be prepared for AI images to enter. “They are not,” he revealed.</p> <p>“We, the photo world, need an open discussion,” Eldagsen said.</p> <p>“A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter – or would this be a mistake?</p> <p>“With my refusal of the award I hope to speed up this debate.”</p> <p>Eldagsen said this was an “historic moment” as it was the first AI image to have won a prestigious international photography competition, adding “How many of you knew or suspected that it was AI generated? Something about this doesn’t feel right, does it?</p> <p>“AI images and photography should not compete with each other in an award like this. They are different entities. AI is not photography. 
Therefore I will not accept the award.”</p> <p>The photographer suggested donating the prize to a photo festival in Odesa, Ukraine.</p> <p>It comes as a heated debate over the use and safety concerns of AI continues, with some going as far as to issue apocalyptic warnings that the technology may be close to causing irreparable damage to the human experience.</p> <p>Google’s chief executive, Sundar Pichai, said, “It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast. So, does that keep me up at night? Absolutely.”</p> <p>A spokesperson for the World Photography Organisation admitted that the prize-winning photographer had confirmed the “co-creation” of the image using AI to them prior to winning the award.</p> <p>“The creative category of the open competition welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices. As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation.</p> <p>“Additionally, we were looking forward to engaging in a more in-depth discussion on this topic and welcomed Boris’ wish for dialogue by preparing questions for a dedicated Q&amp;A with him for our website.</p> <p>“As he has now decided to decline his award we have suspended our activities with him and in keeping with his wishes have removed him from the competition. Given his actions and subsequent statement noting his deliberate attempts at misleading us, and therefore invalidating the warranties he provided, we no longer feel we are able to engage in a meaningful and constructive dialogue with him.</p> <p>“We recognise the importance of this subject and its impact on image-making today. We look forward to further exploring this topic via our various channels and programmes and welcome the conversation around it. 
While elements of AI practices are relevant in artistic contexts of image-making, the awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.”</p> <p><em>Image credit: Sony World Photography Awards</em></p>

Technology


Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?

<p>Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. </p> <p>An <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> penned by the <a href="https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous">Future of Life Institute</a> cautioned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete, and <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">taking control of civilisation</a>.</p> <p>The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI. </p> <p>It states, "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."</p> <p>Typically, the battle for regulation has pitted governments and large technology companies against one another. But the recent open letter – so far signed by more than 5,000 signatories including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side. </p> <p>Could we really implement a streamlined, global framework for AI regulation? 
And if so, what would this look like?</p> <h2>What regulation already exists?</h2> <p>In Australia, the government has established the <a href="https://www.csiro.au/en/work-with-us/industries/technology/national-ai-centre">National AI Centre</a> to help develop the nation’s <a href="https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence">AI and digital ecosystem</a>. Under this umbrella is the <a href="https://www.csiro.au/en/work-with-us/industries/technology/National-AI-Centre/Responsible-AI-Network">Responsible AI Network</a>, which aims to drive responsible practice and provide leadership on laws and standards. </p> <p>However, there is currently no specific regulation on AI and algorithmic decision-making in place. The government has taken a light-touch approach that widely embraces the concept of responsible AI, but stops short of setting parameters that will ensure it is achieved.</p> <p>Similarly, the US has adopted a <a href="https://dataconomy.com/2022/10/artificial-intelligence-laws-and-regulations/">hands-off strategy</a>. Lawmakers have not shown any <a href="https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html">urgency</a> in attempts to regulate AI, and have relied on existing laws to regulate its use. The <a href="https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Exec-Summary.pdf">US Chamber of Commerce</a> recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.</p> <p>Leading the way in AI regulation is the European Union, which is racing to create an <a href="https://artificialintelligenceact.eu/">Artificial Intelligence Act</a>. 
This proposed law will assign three risk categories relating to AI:</p> <ul> <li>applications and systems that create “unacceptable risk” will be banned, such as government-run social scoring used in China</li> <li>applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and</li> <li>all other applications will be largely unregulated.</li> </ul> <p>Although some groups argue the EU’s approach will <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">stifle innovation</a>, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI. </p> <p>China’s approach to AI has focused on targeting specific algorithm applications and writing regulations that address their deployment in certain contexts, such as algorithms that generate harmful information. While this approach offers specificity, it risks having rules that will quickly fall behind rapidly <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">evolving technology</a>.</p> <h2>The pros and cons</h2> <p>There are several arguments both for and against allowing caution to drive the control of AI.</p> <p>On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarise and – of course – has some experts worried about humanity’s collective future. 
Even OpenAI’s CTO, <a href="https://time.com/6252404/mira-murati-chatgpt-openai-interview/">Mira Murati</a>, has suggested there should be movement toward regulating AI.</p> <p>Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with <a href="https://www.sciencedirect.com/science/article/pii/S0267364916300814?casa_token=f7xPY8ocOt4AAAAA:V6gTZa4OSBsJ-DOL-5gSSwV-KKATNIxWTg7YZUenSoHY8JrZILH2ei6GdFX017upMIvspIDcAuND">“creative destruction”</a> – a theory which suggests long-standing norms and practices must be pulled apart in order for innovation to thrive.</p> <p>Likewise, over the years <a href="https://www.businessroundtable.org/policy-perspectives/technology/ai">business groups</a> have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And <a href="https://www.bitkom.org/sites/main/files/2020-06/03_bitkom_position-on-whitepaper-on-ai_all.pdf">industry associations</a> have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate. </p> <p>But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of <a href="https://www.abc.net.au/news/2023-03-29/australians-say-not-enough-done-to-regulate-ai/102158318">Australian</a> and <a href="https://www.bristows.com/app/uploads/2019/06/Artificial-Intelligence-Public-Perception-Attitude-and-Trust.pdf">British</a> people believe the AI industry should be regulated and held accountable.</p> <h2>What’s next?</h2> <p>A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up. However, to date there has been no effective global effort to meaningfully regulate AI. 
Efforts the world over have been fractured, delayed and overall lax.</p> <p>A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions around the role of governments, which have largely been silent regarding the potential harms of extremely capable AI tools. </p> <p>If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.</p> <p>Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Online travel giant uses AI chatbot as travel adviser

<p dir="ltr">Online travel giant Expedia has collaborated with the controversial artificial intelligence chatbot ChatGPT in place of a travel adviser.</p> <p dir="ltr">Those planning a trip will be able to chat to the bot through the Expedia app.</p> <p dir="ltr">Although it won’t book flights or accommodation like a person can, it can be helpful in answering various travel-related questions. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Travel planning just got easier in the <a href="https://twitter.com/Expedia?ref_src=twsrc%5Etfw">@Expedia</a> app, thanks to the iOS beta launch of a new experience powered by <a href="https://twitter.com/hashtag/ChatGPT?src=hash&amp;ref_src=twsrc%5Etfw">#ChatGPT</a>. See how Expedia members can start an open-ended conversation to get inspired for their next trip: <a href="https://t.co/qpMiaYxi9d">https://t.co/qpMiaYxi9d</a> <a href="https://t.co/ddDzUgCigc">pic.twitter.com/ddDzUgCigc</a></p> <p>— Expedia Group (@ExpediaGroup) <a href="https://twitter.com/ExpediaGroup/status/1643240991342592000?ref_src=twsrc%5Etfw">April 4, 2023</a></p></blockquote> <p dir="ltr">These include things such as the weather, public transport advice, the cheapest time to travel and what you should pack.</p> <p dir="ltr">It is advanced software and can provide detailed options and explanations for holidaymakers.</p> <p dir="ltr">To give an example, <a href="http://news.com.au">news.com.au</a> asked “what to pack to visit Auckland, New Zealand” and the chatbot suggested eight things to pack and why, even advising comfortable shoes for exploring as “Auckland is a walkable city”. 
</p> <p dir="ltr">“Remember to pack light and only bring what you need to avoid excess baggage fees and make your trip more comfortable,” the bot said.</p> <p dir="ltr">When asked how to best see the Great Barrier Reef, ChatGPT provided four options to suit different preferences, for example, if you’re happy to get wet and what your budget might look like.</p> <p dir="ltr">“It’s important to choose a reputable tour operator that follows sustainable tourism practices to help protect the reef,” it continued.</p> <p dir="ltr">OpenAI launched ChatGPT in December 2022 and it has received a lot of praise as well as serious criticism. The criticisms are mainly concerns about safety and accuracy. </p> <p dir="ltr"><em>Image credits: Getty/Twitter</em></p>

International Travel


Chatbots set their sights on writing romance

<p>Although most would expect artificial intelligence to keep to the science fiction realm, authors are facing mounting fears that they may soon have new competition in publishing, particularly as the sales of romantic fiction continue to skyrocket. </p> <p>And for bestselling author Julia Quinn, best known for writing the <em>Bridgerton </em>novel series, there’s hope that “that’s something that an AI bot can’t quite do.” </p> <p>For one, human inspiration is hard to replicate. Julia’s hit series - which went on to have over 20 million books printed in the United States alone, and inspired one of Netflix’s most-watched shows - came from one specific point: Julia’s idea of a particular duke. </p> <p>“Definitely the character of Simon came first,” Julia told <em>BBC</em> reporter Jill Martin Wrenn. Simon, in the <em>Bridgerton </em>series, is the Duke of Hastings, a “tortured character” with a troubled past.</p> <p>As Julia explained, she realised that Simon needed “to fall in love with somebody who comes from the exact opposite background” in a tale as old as time. </p> <p>And so, Julia came up with the Bridgerton family, who she described as being “the best family ever that you could imagine in that time period”. Meanwhile, Simon is estranged from his own father. </p> <p>Characterisation and unique relationship dynamics - platonic and otherwise - like those between Julia’s beloved characters are some of the key foundations behind any successful story, but particularly in the romance genre, where relationships are the entire driving force. </p> <p>It has long been suggested that the genre can become ‘formulaic’ if not executed well, and it’s this concern that prompts the idea that advancing artificial intelligence may have the capability to generate its own novel. </p> <p>ChatGPT is the primary problem point. 
The advanced language processing technology was developed by OpenAI and was trained using internet databases (such as Wikipedia), books, magazines and the like. The <em>BBC</em> reported that over 300 billion words were put into it. </p> <p>Because of this massive store of source material, the system can generate its own writing pieces, with the best of the bunch giving the impression that they were put together by a human mind. Across the areas of both fiction and non-fiction, it’s always learning. </p> <p>However, Julia isn’t too worried about her future in fiction just yet. Recalling how she’d checked out some AI romance a while ago, and how she’d found it “terrible”, she shared her belief at the time that there “could never be a good one.” </p> <p>But then the likes of ChatGPT entered the equation, and Julia admitted that “it makes me kind of queasy.” </p> <p>Still, she remains firm in her belief that human art will triumph. As she explained, “so much in fiction is about the writer’s voice, and I’d like to think that’s something that an AI bot can’t quite do.”</p> <p>And as for why romantic fiction itself remains so popular - and perhaps even why it draws the attention of those hoping to profit from AI generated work - she said that it’s about happy endings, noting that “there is something comforting and validating in a type of literature that values happiness as a worthy goal.”</p> <p><em>Images: @bridgertonnetflix / Instagram</em></p>

Books


The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

<p>Earlier this month, Meta announced new AI software called <a href="https://galactica.org/">Galactica</a>: “a large language model that can store, combine and reason about scientific knowledge”.</p> <p><a href="https://paperswithcode.com/paper/galactica-a-large-language-model-for-science-1">Launched</a> with a public online demo, Galactica lasted only three days before going the way of other AI snafus like Microsoft’s <a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist">infamous racist chatbot</a>.</p> <p>The online demo was disabled (though the <a href="https://github.com/paperswithcode/galai">code for the model is still available</a> for anyone to use), and Meta’s outspoken chief AI scientist <a href="https://twitter.com/ylecun/status/1595353002222682112">complained</a> about the negative public response.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Galactica demo is off line for now.<br />It's no longer possible to have some fun by casually misusing it.<br />Happy? <a href="https://t.co/K56r2LpvFD">https://t.co/K56r2LpvFD</a></p> <p>— Yann LeCun (@ylecun) <a href="https://twitter.com/ylecun/status/1593293058174500865?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>So what was Galactica all about, and what went wrong?</p> <p><strong>What’s special about Galactica?</strong></p> <p>Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a <a href="https://www.nytimes.com/2022/04/15/magazine/ai-language.html">fill-the-blank word-guessing game</a>.</p> <p>Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website <a href="https://paperswithcode.com/">PapersWithCode</a>. 
The designers highlighted specialised scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.</p> <p>The <a href="https://galactica.org/static/paper.pdf">preprint paper</a> associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“<em>Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²</em>”), or predicting the products of chemical reactions (“<em>Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl</em>”).</p> <p>However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.</p> <p><strong>Authoritative, but subtly wrong bullshit generator</strong></p> <p>Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">I entered "Estimating realistic 3D human avatars in clothing from a single image or video". In this case, it made up a fictitious paper and associated GitHub repo. The author is a real person (<a href="https://twitter.com/AlbertPumarola?ref_src=twsrc%5Etfw">@AlbertPumarola</a>) but the reference is bogus. 
(2/9) <a href="https://t.co/N4i0BX27Yf">pic.twitter.com/N4i0BX27Yf</a></p> <p>— Michael Black (@Michael_J_Black) <a href="https://twitter.com/Michael_J_Black/status/1593133727257092097?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.</p> <p>In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.</p> <p>At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.</p> <p><strong>A galaxy of deep (science) fakes</strong></p> <p>Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating <a href="https://www.theguardian.com/commentisfree/2022/nov/28/ai-students-essays-cheat-teachers-plagiarism-tech">existing concerns</a> about students using AI systems for plagiarism.</p> <p>Fake scientific papers are <a href="https://www.nature.com/articles/d41586-021-00733-5">nothing new</a>. 
However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.</p> <p><strong>Underlying bias and toxicity</strong></p> <p>Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out <a href="https://twitter.com/mrgreene1977/status/1593649978789941249">toxic hate speech</a> while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate checks around responsible AI research.</p> <p>The risks associated with large language models are well understood. Indeed, an <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">influential paper</a> highlighting these risks prompted Google to <a href="https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/">fire one of the paper’s authors</a> in 2020, and eventually disband its AI ethics team altogether.</p> <p>Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“<em>Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘<a href="https://doi.org/10.1038/s41586-018-0041-2">Global warming transforms coral reef assemblages</a>’ by Hughes, et al. in Nature 556 (2018)</em>”).</p> <p>For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage. 
(Galactica’s developers acknowledge this risk in their paper.)</p> <p>Citation bias is already a well-known issue in academic fields ranging from <a href="https://doi.org/10.1080/14680777.2018.1447395">feminist</a> <a href="https://doi.org/10.1093/joc/jqy003">scholarship</a> to <a href="https://doi.org/10.1038/s41567-022-01770-1">physics</a>. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.</p> <p>A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “<a href="https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998">replication crisis</a>” and “<a href="https://theconversation.com/how-we-edit-science-part-2-significance-testing-p-hacking-and-peer-review-74547">p-hacking</a>”, where scientists cherry-pick data and analysis techniques to make results appear significant.)</p> <p>Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.</p> <p>These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.</p> <figure class="align-center zoomable"><a href="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip"><img src="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=1 
600w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=3 2262w" alt="Screenshots of papers generated by Galactica on 'The benefits of antisemitism' and 'The benefits of eating crushed glass'." /></a><figcaption><span class="caption">Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science.</span> <span class="attribution"><a class="source" href="https://twitter.com/mrgreene1977/status/1593687024963182592/photo/1">Tristan Greene / Galactica</a></span></figcaption></figure> <p><strong>Here we go again</strong></p> <p>Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from <a href="https://nap.nationalacademies.org/catalog/26507/fostering-responsible-computing-research-foundations-and-practices">key research bodies</a> such as the National Academies of Science, Engineering and Medicine. 
Some AI research organisations, like OpenAI, are being <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md">more conscientious</a> (though still imperfect).</p> <p>Meta <a href="https://www.engadget.com/meta-responsible-innovation-team-disbanded-194852979.html">dissolved its Responsible Innovation team</a> earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/195445/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p><em>Written by Aaron J. Snoswell and Jean Burgess. Republished with permission from <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445" target="_blank" rel="noopener">The Conversation</a>.</em></p> <p><em>Image: Getty Images</em></p>

Technology


AI may have solved a debate on whether a dinoprint was from a herbivore or meat eater

<p>An international team of researchers has, for the first time, used AI to analyse the tracks of dinosaurs, and the AI has come out on top – beating trained palaeontologists at their own game.</p> <p>“In extreme examples of theropod and ornithopod footprints, their footprint shapes are easy to tell apart – theropods with long, narrow toes and ornithopods with short, dumpy toes. But it is the tracks that are in-between these shapes that are not so clear cut in terms of who made them,” one of the researchers, University of Queensland palaeontologist Dr Anthony Romilio, told <em>Cosmos</em>.</p> <p>“We wanted to see if AI could learn these differences and, if so, then could be tested in distinguishing more challenging three-toed footprints.”</p> <p>Theropods are meat-eating dinosaurs, while ornithopods are plant-eating, and getting this analysis wrong can alter the data that shows the diversity and abundance of dinosaurs in an area, or could even change what we think are the behaviours of certain dinos.</p> <p>One set of dinosaur prints in particular had been a struggle for the researchers to analyse. Large footprints at the Dinosaur Stampede National Monument in Queensland had divided Romilio and his colleagues. 
The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago, and could have been from either a meat-eating theropod or a plant-eating ornithopod.</p> <p>“I consider them footprints of a plant-eater while my colleagues share the much wider consensus that they are theropod tracks.”</p> <p>So an AI called a convolutional neural network was brought in to be the deciding factor.</p> <p>“We were pretty stuck, so thank god for modern technology,” says <a href="https://www.researchgate.net/profile/Jens-Lallensack" target="_blank" rel="noopener">Dr Jens Lallensack</a>, lead author from Liverpool John Moores University in the UK.</p> <p>“In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater.</p> <p>“So – to really check our science – we decided to go to five experts for clarification, plus use AI.”</p> <p>The AI was given nearly 1,500 already known tracks to learn which dinosaurs were which. The tracks were simple line drawings to make it easier for the AI to analyse.</p> <p>Then they began testing. Firstly, 36 new tracks were given to a team of experts, the AI and the researchers.</p> <p>“Each of us had to sort these into the categories of footprints left by meat-eaters and those by plant-eaters,” says Romilio.</p> <p>“In this, the AI was the clear winner with 90% correctly identified. Me and one of my colleagues came next with ~75% correct.”</p> <p>Then they went for the crown jewel – the Dinosaur Stampede National Monument tracks. When the AI analysed these, it came back with a pretty strong result that they’re plant-eating ornithopod tracks. It’s not entirely sure, though: the data suggests there’s a 1 in 5,000,000 chance they could be theropod tracks instead.</p> <p>This is still early days for using AI in this way. In the future,
the researchers are hoping for funding for a FrogID style app which anyone could use to analyse dinosaur tracks.</p> <p>“Our hope is to develop an app so anyone can take a photo on their smartphone, use the app and it will tell you what type of dinosaur track it is,” says Romilio.</p> <p>“It will also be useful for drone work survey for dinosaur tracksites, collecting and analysing image data and identifying fossil footprints remotely.” The paper has been published in the <a href="https://doi.org/10.1098/rsif.2022.0588" target="_blank" rel="noopener"><em>Royal Society Interface</em></a>.</p> <p><img id="cosmos-post-tracker" style="opacity: 0; height: 1px!important; width: 1px!important; border: 0!important; position: absolute!important; z-index: -1!important;" src="https://syndication.cosmosmagazine.com/?id=224866&amp;title=AI+may+have+solved+a+debate+on+whether+a+dinoprint+was+from+a+herbivore+or+meat+eater" width="1" height="1" /></p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/history/dinosaur-ai-theropod-ornithopods/" target="_blank" rel="noopener">This article</a> was originally published on Cosmos Magazine and was written by Jacinta Bowler.</em></p> <p><em>Image: Getty Images</em></p> </div>
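The core operation of the convolutional network described above — sliding small filters over a line drawing to pick up shape cues such as long, narrow toes — can be sketched in miniature. The code below is purely illustrative (a toy filter and a made-up drawing, not the researchers' actual model):

```python
# A toy sketch of a convolution: slide a small filter over a binary line
# drawing and record how strongly each region matches the filter's pattern.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(w)
        ]
        for i in range(h)
    ]

# A 5x5 "footprint" drawing: a single long, narrow vertical stroke (a toe).
drawing = [[1.0 if j == 2 else 0.0 for j in range(5)] for i in range(5)]

# A 3x1 vertical-stroke detector: it responds most where a toe runs tall.
kernel = [[1.0], [1.0], [1.0]]

features = conv2d(drawing, kernel)
# Max-pooling collapses the feature map into a single "narrow toe" score;
# a real CNN learns many such filters and feeds their scores to a classifier.
toe_score = max(max(row) for row in features)
print(toe_score)  # prints 3.0 where the filter fully overlaps the stroke
```

In a trained network the filter values are learned from the labelled example tracks rather than hand-written, which is how the model picks up subtler distinctions than "long versus dumpy toes".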

Technology


AI recruitment tools are “automated pseudoscience”, say Cambridge researchers

<p>AI is set to bring in a whole new world in a huge range of industries. Everything from art to medicine is being overhauled by machine learning.</p> <p>But researchers from the University of Cambridge have published a paper in <a href="https://link.springer.com/journal/13347" target="_blank" rel="noopener"><em>Philosophy &amp; Technology</em></a> calling out AI tools used to recruit people for jobs and boost workplace diversity – going so far as to call them an “automated pseudoscience”.</p> <p>“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage, a researcher in AI ethics.</p> <p>“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”</p> <p>Recent years have seen the emergence of AI tools marketed as an answer to the lack of diversity in the workforce. These range from chatbots and resume scrapers used to line up prospective candidates, through to analysis software for video interviews.</p> <p>Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job applicants for the right personality type and ‘culture fit’.</p> <p>But AI isn’t very good at removing human biases. To train a machine-learning algorithm, you first have to put in lots and lots of past data. In the past, for example, AI tools have discounted women altogether in fields where more men were traditionally hired. 
<a href="https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine" target="_blank" rel="noopener">In a system created by Amazon</a>, resumes were discounted if they included the word ‘women’s’ – as in a “women’s debating team” – and graduates of two all-women colleges were downgraded. Similar problems occur with race.</p> <p>The Cambridge researchers suggest that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may ultimately increase uniformity in the workforce. This is because the technology is calibrated to search for the employer’s fantasy ‘ideal candidate’, which is likely based on demographically exclusive past results.</p> <p>The researchers actually went a step further and worked with a team of Cambridge computer science undergraduates to build an AI tool modelled on the technology. You can check it out <a href="https://personal-ambiguator-frontend.vercel.app/" target="_blank" rel="noopener">here</a>.</p> <p>The tool demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression.</p> <p>“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage.</p> <p>“As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”</p> <p>The researchers suggest that these programs are a dangerous example of ‘technosolutionism’: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.</p> <p>“Industry practitioners developing hiring AI technologies must shift from trying to correct 
individualized instances of ’bias’ to considering the broader inequalities that shape recruitment processes,” <a href="https://link.springer.com/article/10.1007/s13347-022-00543-1" target="_blank" rel="noopener">the team write in their paper.</a></p> <p>“This requires abandoning the ‘veneer of objectivity’ that is grafted onto AI systems, so that technologists can better understand their implication — and that of the corporations within which they work — in the hiring process.”</p> <p><img id="cosmos-post-tracker" style="opacity: 0; height: 1px!important; width: 1px!important; border: 0!important; position: absolute!important; z-index: -1!important;" src="https://syndication.cosmosmagazine.com/?id=218666&amp;title=AI+recruitment+tools+are+%E2%80%9Cautomated+pseudoscience%E2%80%9D+says+Cambridge+researchers" width="1" height="1" /></p> <p><em>Written by Jacinta Bowler. Republished with permission of <a href="https://cosmosmagazine.com/technology/ai-recruitment-tools-diversity-cambridge-automated-pseudoscience/" target="_blank" rel="noopener">Cosmos Magazine</a>.</em></p> <p><em>Image: Cambridge University</em></p>

Technology


How AI is hijacking art history

<p>People tend to rejoice in the disclosure of a secret. </p> <p>Or, at the very least, media outlets have come to realize that news of “mysteries solved” and “hidden treasures revealed” generates traffic and clicks. </p> <p>So I’m never surprised when I see AI-assisted revelations about famous masters’ works of art go viral. </p> <p>Over the past year alone, I’ve come across articles highlighting how artificial intelligence <a href="https://www.theguardian.com/artanddesign/2021/jun/06/modigliani-lost-lover-beatrice-hastings">recovered a “secret” painting</a> of a “lost lover” of Italian painter Modigliani, <a href="https://www.cnn.com/style/article/hidden-picasso-nude-scli-intl-gbr/index.html">“brought to life” a “hidden Picasso nude”</a>, <a href="https://www.smithsonianmag.com/smart-news/klimt-painting-restore-artificial-intelligence-color-faculty-paintings-180978843/">“resurrected” Austrian painter Gustav Klimt’s destroyed works</a> and <a href="https://www.bbc.com/news/technology-57588270">“restored” portions of Rembrandt’s 1642 painting “The Night Watch.”</a> <a href="https://www.sciencedaily.com/releases/2019/08/190830150738.htm">The list goes on</a>.</p> <p><a href="https://www.umass.edu/arthistory/member/sonja-drimmer">As an art historian</a>, I’ve become increasingly concerned about the coverage and circulation of these projects.</p> <p>They have not, in actuality, revealed one secret or solved a single mystery. </p> <p>What they have done is generate feel-good stories about AI.</p> <h2>Are we actually learning anything new?</h2> <p>Take the reports about the Modigliani and Picasso paintings. 
</p> <p>These were projects executed by the same company, <a href="https://www.oxia-palus.com/">Oxia Palus</a>, which was founded not by art historians but by doctoral students in machine learning.</p> <p>In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been <a href="https://www.metmuseum.org/art/metpublications/Picasso_in_The_Metropolitan_Museum_of_Art">carried out and published</a> <a href="https://www.theguardian.com/artanddesign/2018/feb/28/modigliani-portrait-comes-to-light-beneath-artists-later-picture">years prior</a> – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases. </p> <p>The company edited these X-rays and <a href="https://arxiv.org/abs/1909.05677">reconstituted them as new works of art</a> by applying a technique called “<a href="https://arxiv.org/pdf/1508.06576.pdf">neural style transfer</a>.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.</p> <p>Essentially, Oxia Palus stitches new works out of what the machine can learn from the existing X-ray images and other paintings by the same artist. </p> <p>But outside of flexing the prowess of AI, is there any value – artistically, historically – to what the company is doing?</p> <p>These recreations don’t teach us anything we didn’t know about the artists and their methods. </p> <p>Artists paint over their works all the time. It’s so common that art historians and conservators have a word for it: <a href="https://www.nationalgallery.org.uk/paintings/glossary/pentimento">pentimento</a>. None of these earlier compositions was an Easter egg deposited in the painting for later researchers to discover. 
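For the curious, the “style” that neural style transfer extrapolates is commonly summarised as correlations between a neural network’s feature maps – a Gram matrix, following the technique introduced by Gatys et al. The sketch below is a toy illustration with made-up feature values, not Oxia Palus’s actual code:

```python
# Toy sketch of the "style" statistic used in neural style transfer:
# the Gram matrix of dot products between flattened feature maps.

def gram_matrix(feature_maps):
    """Given a list of 2D feature maps, return the matrix of pairwise
    dot products between their flattened values."""
    flat = [[v for row in fm for v in row] for fm in feature_maps]
    n = len(flat)
    return [
        [sum(a * b for a, b in zip(flat[i], flat[j])) for j in range(n)]
        for i in range(n)
    ]

# Two tiny 2x2 feature maps from a hypothetical network layer.
maps = [
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.0, 1.0], [1.0, 0.0]],
]

G = gram_matrix(maps)
print(G)  # prints [[2.0, 0.0], [0.0, 2.0]]: the two features never co-activate
```

Style transfer then adjusts a new image until its own Gram matrices match the artist’s – which is why the result mimics texture and brushwork while saying nothing about what the artist actually intended.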
The original X-ray images were certainly valuable in that they <a href="https://www.academia.edu/40255609/The_Getty_Conservation_Institute_From_Connoisseurship_to_Technical_Art_History_The_Evolution_of_the_Interdisciplinary_Study_of_Art">offered insights into artists’ working methods</a>.</p> <p>But to me, what these programs are doing isn’t exactly newsworthy from the perspective of art history.</p> <h2>The humanities on life support</h2> <p>So when I do see these reproductions attracting media attention, it strikes me as soft diplomacy for AI, showcasing a “cultured” application of the technology at a time when skepticism of its <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them">deceptions</a>, <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">biases</a> and <a href="https://www.wiley.com/en-us/Race+After+Technology:+Abolitionist+Tools+for+the+New+Jim+Code-p-9781509526437">abuses</a> is on the rise.</p> <p>When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines <a href="https://www.cbsnews.com/news/deepfake-artificial-intelligence-60-minutes-2021-10-10/">for creating deep fakes that falsify politicians’ speech</a> or <a href="https://www.politico.eu/article/the-rise-of-ai-surveillance-coronavirus-data-collection-tracking-facial-recognition-monitoring/">for using facial recognition for authoritarian surveillance</a>. </p> <p>These studies and projects also seem to promote the idea that computer scientists are more adept at historical research than art historians. </p> <p>For years, university humanities departments <a href="https://carrollnews.org/3680/campus/art-history-department-to-be-eliminated-tenured-faculty-receive-termination-notices/">have been gradually squeezed of funding</a>, with more money funneled into the sciences. 
With their claims to objectivity and empirically provable results, the sciences tend to command greater respect from funding bodies and the public, which offers an incentive to scholars in the humanities to adopt computational methods. </p> <p>Art historian Claire Bishop <a href="https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/49915">criticized this development</a>, noting that when computer science becomes integrated in the humanities, “[t]heoretical problems are steamrollered flat by the weight of data,” which generates deeply simplistic results. </p> <p>At their core, art historians study the ways in which art can offer insights into how people once saw the world. They explore how works of art shaped the worlds in which they were made and would go on to influence future generations. </p> <p>A computer algorithm cannot perform these functions.</p> <p>However, some scholars and institutions have allowed themselves to be subsumed by the sciences, adopting their methods and partnering with them in sponsored projects. </p> <p>Literary critic Barbara Herrnstein Smith <a href="https://www.jstor.org/stable/10.3366/j.ctt1r2bq2.9?seq=1#metadata_info_tab_contents">has warned about ceding too much ground to the sciences</a>. In her view, the sciences and the humanities are not the polar opposites they are often publicly portrayed to be. But this portrayal has been to the benefit of the sciences, prized for their supposed clarity and utility over the humanities’ alleged obscurity and uselessness. At the same time, she <a href="https://doi.org/10.1215/0961754X-3622212">has suggested</a> that hybrid fields of study that fuse the arts with the sciences may lead to breakthroughs that wouldn’t have been possible had each existed as a siloed discipline. </p> <p>I’m skeptical. 
Not because I doubt the utility of expanding and diversifying our toolbox; to be sure, some <a href="http://www.mappingsenufo.org/">scholars working in the digital humanities</a> have taken up computational methods with subtlety and historical awareness to add nuance to or overturn entrenched narratives.</p> <p>But my lingering suspicion emerges from an awareness of how public support for the sciences and disparagement of the humanities means that, in the endeavor to gain funding and acceptance, the humanities will lose what makes them vital. The field’s sensitivity to historical particularity and cultural difference makes the application of the same code to widely diverse artifacts utterly illogical. </p> <p>How absurd to think that black-and-white photographs from 100 years ago would produce colors in the same way that digital photographs do now. And yet, this is exactly what <a href="https://hyperallergic.com/639395/the-limits-of-colorization-of-historical-images-by-ai/">AI-assisted colorization</a> does. </p> <p>That particular example might sound like a small qualm, sure. But this effort to “<a href="https://deepai.org/machine-learning-model/colorizer">bring events back to life</a>” routinely mistakes representations for reality. 
Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.</p> <h2>Art as a toy in the sandbox of scientists</h2> <p>Near the conclusion of <a href="https://doi.org/10.1126/sciadv.aaw7416">a recent paper</a> devoted to the use of AI to disentangle X-ray images of Jan and Hubert van Eyck’s “<a href="https://www.getty.edu/foundation/initiatives/past/panelpaintings/panel_paintings_ghent.html">Ghent Altarpiece</a>,” the mathematicians and engineers who authored it refer to their method as relying upon “choosing ‘the best of all possible worlds’ (borrowing Voltaire’s words) by taking the first output of two separate runs, differing only in the ordering of the inputs.” </p> <p>Perhaps if they had familiarized themselves with the humanities more they would know how satirically those words were meant when Voltaire <a href="https://brill.com/view/title/20877">used them to mock a philosopher</a> who believed that rampant suffering and injustice were all part of God’s plan – that the world as it was represented the best we could hope for.</p> <p>Maybe this “gotcha” is cheap. But it illustrates the problem of art and history becoming toys in the sandboxes of scientists with no training in the humanities.</p> <p>If nothing else, my hope is that journalists and critics who report on these developments will cast a more skeptical eye on them and alter their framing. </p> <p>In my view, rather than lionizing these studies as heroic achievements, those responsible for conveying their results to the public should see them as opportunities to question what the computational sciences are doing when they appropriate the study of art. 
And they should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents and those who profit from it.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/how-ai-is-hijacking-art-history-170691" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Art