
How can we improve public health communication for the next pandemic? Tackling distrust and misinformation is key

<p><em><a href="https://theconversation.com/profiles/shauna-hurley-203140">Shauna Hurley</a>, <a href="https://theconversation.com/institutions/monash-university-1065">Monash University</a> and <a href="https://theconversation.com/profiles/rebecca-ryan-1522824">Rebecca Ryan</a>, <a href="https://theconversation.com/institutions/la-trobe-university-842">La Trobe University</a></em></p> <p>There’s a common thread linking our <a href="https://www.visualcapitalist.com/history-of-pandemics-deadliest/">experience of pandemics</a> over the past 700 years. From the black death in the 14th century to COVID in the 21st, public health authorities have put emergency measures such as isolation and quarantine in place to stop infectious diseases spreading.</p> <p>As we know from COVID, these measures upend lives in an effort to save them. In both the <a href="https://www.thinkglobalhealth.org/article/pandemic-protests-when-unrest-and-instability-go-viral">recent</a> and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3559034/">distant past</a> they’ve also given rise to collective unrest, confusion and resistance.</p> <p>So after all this time, what do we know about the role public health communication plays in helping people understand and adhere to protective measures in a crisis? And more importantly, in an age of misinformation and distrust, how can we improve public health messaging for any future pandemics?</p> <p>Last year, we published a <a href="https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD015144/full">Cochrane review</a> exploring the global evidence on public health communication during COVID and other infectious disease outbreaks including SARS, MERS, influenza and Ebola. Here’s a snapshot of what we found.</p> <h2>The importance of public trust</h2> <p>A key theme emerging in analysis of the COVID pandemic globally is public trust – or lack thereof – in governments, public institutions and science.</p> <p>Mounting evidence suggests <a href="https://www.washingtonpost.com/world/2022/02/01/trust-lancet-covid-study/">levels of trust in government</a> were <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(22)00172-6/fulltext">directly proportional</a> to fewer COVID infections and higher vaccination rates across the world. It was a crucial factor in people’s willingness to follow public health directives, and is now a key focus for future pandemic preparedness.</p> <p>Here in Australia, public trust in governments and health authorities steadily eroded over time.</p> <p>Initial information from governments and health authorities about the unfolding COVID crisis, personal risk and mandated protective measures was generally clear and consistent across the country. 
The establishment of the <a href="https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp/rp1920/Quick_Guides/AustralianCovid-19ResponseManagement#_Toc38973752">National Cabinet</a> in 2020 signalled a commitment from state, territory and federal governments to consensus-based policy and public health messaging.</p> <p>During this early phase of relative unity, <a href="https://theconversation.com/inflation-covid-inequality-new-report-shows-australias-social-cohesion-is-at-crossroads-195198">Australians reported</a> higher levels of belonging and trust in government.</p> <p>But as the pandemic wore on, public trust and confidence fell on the back of conflicting state-federal pandemic strategies, blame games and the <a href="https://theconversation.com/we-lost-the-plot-on-covid-messaging-now-governments-will-have-to-be-bold-to-get-us-back-on-track-186732">confusing fragmentation</a> of public health messaging. The divergence between <a href="https://www.theaustralian.com.au/nation/tale-of-two-cities-gripped-by-covid-fear-outbreak/news-story/cf1b922610aeb0b0ee9b0b53486bf640">lockdown policies and public health messaging</a> adopted by <a href="https://www.theage.com.au/national/victoria/a-tale-of-two-cities-that-doesn-t-seem-fair-20211012-p58z79.html">Victoria and New South Wales</a> is one example, but there are plenty of others.</p> <p>When state, territory and federal governments have conflicting policies on protective measures, people are easily confused, lose trust and become harder to engage with or persuade. Many tune out from partisan politics. Adherence to mandated public health measures falls.</p> <p>Our research found clarity and consistency of information were key features of effective public health communication throughout the COVID pandemic.</p> <p>We also found public health communication is most effective when authorities work in partnership with different target audiences. In Victoria, the case brought against the state government for the <a href="https://www.abc.net.au/news/2023-07-24/melbourne-public-housing-tower-covid-lockdown-compensation/102640898">snap public housing tower lockdowns</a> is a cautionary tale underscoring how essential considered, tailored and two-way communication is with diverse communities.</p> <h2>Countering misinformation</h2> <p>Misinformation is <a href="https://reutersinstitute.politics.ox.ac.uk/hydroxychloroquine-australia-cautionary-tale-journalists-and-scientists">not a new problem</a>, but has been supercharged by the advent of <a href="https://theconversation.com/health-misinformation-is-rampant-on-social-media-heres-what-it-does-why-it-spreads-and-what-people-can-do-about-it-217059">social media</a>.</p> <p>The much-touted “miracle” drug <a href="https://www.vox.com/future-perfect/22663127/ivermectin-covid-treatments-vaccines-evidence">ivermectin</a> typifies the extraordinary traction unproven treatments gained locally and globally. Ivermectin is an anti-parasitic drug, with no evidence it is effective against viral diseases like COVID.</p> <p>Australia’s drug regulator was forced to <a href="https://www.theguardian.com/australia-news/2021/sep/10/australian-drug-regulator-bans-ivermectin-as-covid-treatment-after-sharp-rise-in-prescriptions">ban ivermectin prescriptions</a> for anything other than its intended use after a <a href="https://www.theguardian.com/world/2021/aug/30/australian-imports-of-ivermectin-increase-10-fold-prompting-warning-from-tga">sharp increase</a> in people seeking the drug sparked national shortages. 
Hospitals also reported patients <a href="https://www.theguardian.com/australia-news/2021/sep/02/sydney-covid-patient-in-westmead-hospital-after-overdosing-on-ivermectin-and-other-online-cures">overdosing on ivermectin</a> and cocktails of COVID “cures” promoted online.</p> <p>The <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(22)01585-9/fulltext">Lancet Commission</a> on lessons from the COVID pandemic has called for a coordinated international response to countering misinformation.</p> <p>As part of this, it has called for more accessible, accurate information and investment in scientific literacy to protect against misinformation, including that shared across social media platforms. The World Health Organization is developing resources and recommendations for health authorities to address this “<a href="https://www.who.int/health-topics/infodemic#tab=tab_1">infodemic</a>”.</p> <p>National efforts to directly tackle misinformation are vital, in combination with concerted efforts to raise health literacy. The Australian Medical Association has <a href="https://www.ama.com.au/media/action-needed-tackle-health-misinformation-internet-social-media">called on the federal government</a> to invest in long-term online advertising to counter health misinformation and boost health literacy.</p> <p>People of all ages need to be equipped to think critically about who and where their health information comes from. With the rise of AI, this is an increasingly urgent priority.</p> <h2>Looking ahead</h2> <p>Australian health ministers recently <a href="https://www.cdc.gov.au/newsroom/news-and-articles/australian-health-ministers-reaffirm-commitment-australian-cdc">reaffirmed their commitment</a> to the new Australian Centre for Disease Control (CDC).</p> <p>From a science communications perspective, the Australian CDC could provide an independent voice of evidence and consensus-based information. This is exactly what’s needed during a pandemic. But full details about the CDC’s funding and remit have been the subject of <a href="https://www.croakey.org/federal-budget-must-deliver-on-climate-health-and-the-centre-for-disease-control-sector-leaders-warn/">some conjecture</a>.</p> <p>Many of our <a href="https://www.cochraneaustralia.org/articles/covidandcommunications">key findings</a> on effective public health communication during COVID are not new or surprising. They reinforce what we know works from previous disease outbreaks across different places and points in time: tailored, timely, clear, consistent and accurate information.</p> <p>The rapid rise, reach and influence of misinformation and distrust in public authorities bring a new level of complexity to this picture. 
Countering both must become a central focus of all public health crisis communication, now and in the future.</p> <p><em>This article is part of a <a href="https://theconversation.com/au/topics/the-next-pandemic-160343">series on the next pandemic</a>.</em></p> <p><em><a href="https://theconversation.com/profiles/shauna-hurley-203140">Shauna Hurley</a>, PhD candidate, School of Public Health, <a href="https://theconversation.com/institutions/monash-university-1065">Monash University</a> and <a href="https://theconversation.com/profiles/rebecca-ryan-1522824">Rebecca Ryan</a>, Senior Research Fellow, Health Practice and Management; Head, Centre for Health Communication and Participation, <a href="https://theconversation.com/institutions/la-trobe-university-842">La Trobe University</a></em></p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/how-can-we-improve-public-health-communication-for-the-next-pandemic-tackling-distrust-and-misinformation-is-key-226718">original article</a>.</em></p>

Technology


How people get sucked into misinformation rabbit holes – and how to get them out

<p><em><a href="https://theconversation.com/profiles/emily-booth-715018">Emily Booth</a>, <a href="https://theconversation.com/institutions/university-of-technology-sydney-936">University of Technology Sydney</a> and <a href="https://theconversation.com/profiles/marian-andrei-rizoiu-850922">Marian-Andrei Rizoiu</a>, <a href="https://theconversation.com/institutions/university-of-technology-sydney-936">University of Technology Sydney</a></em></p> <p>As misinformation and radicalisation rise, it’s tempting to look for something to blame: the internet, social media personalities, sensationalised political campaigns, religion, or conspiracy theories. And once we’ve settled on a cause, solutions usually follow: do more fact-checking, regulate advertising, ban YouTubers deemed to have “gone too far”.</p> <p>However, if these strategies were the whole answer, we should already be seeing a decrease in people being drawn into fringe communities and beliefs, and less misinformation in the online environment. We’re not.</p> <p>In new research <a href="https://doi.org/10.1177/14407833241231756">published in the Journal of Sociology</a>, we and our colleagues found radicalisation is a process of increasingly intense stages, and only a small number of people progress to the point where they commit violent acts.</p> <p>Our work shows the misinformation radicalisation process is a pathway driven by human emotions rather than the information itself – and this understanding may be a first step in finding solutions.</p> <h2>A feeling of control</h2> <p>We analysed dozens of public statements from newspapers and online in which former radicalised people described their experiences. We identified different levels of intensity in misinformation and its online communities, associated with common recurring behaviours.</p> <p>In the early stages, we found people either encountered misinformation about an anxiety-inducing topic through algorithms or friends, or they went looking for an explanation for something that gave them a “bad feeling”.</p> <p>Regardless, they often reported finding the same things: a new sense of certainty, a new community they could talk to, and feeling they had regained some control of their lives.</p> <p>Once people reached the middle stages of our proposed radicalisation pathway, we considered them to be invested in the new community, its goals, and its values.</p> <h2>Growing intensity</h2> <p>It was during these more intense stages that people began to report more negative impacts on their own lives. This could include the loss of friends and family, health issues caused by too much time spent on screens and too little sleep, and feelings of stress and paranoia. To soothe these pains, they turned again to their fringe communities for support.</p> <p>Most people in our dataset didn’t progress past these middle stages. However, their continued activity in these spaces kept the misinformation ecosystem alive.</p> <p>When people did move further and reach the extreme final stages in our model, they were doing active harm.</p> <p>In their recounting of their experiences at these high levels of intensity, individuals spoke of choosing to break ties with loved ones, participating in public acts of disruption and, in some cases, engaging in violence against other people in the name of their cause.</p> <p>Once people reached this stage, it took pretty strong interventions to get them out of it. 
The challenge, then, is how to intervene safely and effectively when people are in the earlier stages of being drawn into a fringe community.</p> <h2>Respond with empathy, not shame</h2> <p>We have a few suggestions. For people who are still in the earlier stages, friends and trusted advisers, like a doctor or a nurse, can have a big impact by simply responding with empathy.</p> <p>If a loved one starts voicing possible fringe views, like a fear of vaccines, or animosity against women or other marginalised groups, a calm response that seeks to understand the person’s underlying concern can go a long way.</p> <p>The worst response is one that might leave them feeling ashamed or upset. It may drive them back to their fringe community and accelerate their radicalisation.</p> <p>Even if the person’s views intensify, maintaining your connection with them can turn you into a lifeline that will see them get out sooner rather than later.</p> <p>Once people reached the middle stages, we found third-party online content – not produced by government, but by regular users – could reach people without backfiring. Considering that many people in our research sample had their radicalisation instigated by social media, we also suggest the private companies behind such platforms should be held responsible for the effects of their automated tools on society.</p> <p>By the middle stages, arguments on the basis of logic or fact are ineffective. It doesn’t matter whether they are delivered by a friend, a news anchor, or a platform-affiliated fact-checking tool.</p> <p>At the most extreme final stages, we found that only heavy-handed interventions worked, such as family members forcibly hospitalising their radicalised relative, or individuals undergoing government-supported deradicalisation programs.</p> <h2>How not to be radicalised</h2> <p>After all this, you might be wondering: how do you protect <em>yourself</em> from being radicalised?</p> <p>As much of society becomes more dependent on digital technologies, we’re going to get exposed to even more misinformation, and our world is likely going to get smaller through online echo chambers.</p> <p>One strategy is to foster your critical thinking skills by <a href="https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(23)00198-5">reading long-form texts from paper books</a>.</p> <p>Another is to protect yourself from the emotional manipulation of platform algorithms by <a href="https://guilfordjournals.com/doi/10.1521/jscp.2018.37.10.751">limiting your social media use</a> to small, infrequent, purposefully directed pockets of time.</p> <p>And a third is to sustain connections with other humans, and lead a more analogue life – which has other benefits as well.</p> <p>So in short: log off, read a book, and spend time with people you care about. 
<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/223717/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p><em><a href="https://theconversation.com/profiles/emily-booth-715018">Emily Booth</a>, Research assistant, <a href="https://theconversation.com/institutions/university-of-technology-sydney-936">University of Technology Sydney</a> and <a href="https://theconversation.com/profiles/marian-andrei-rizoiu-850922">Marian-Andrei Rizoiu</a>, Associate Professor in Behavioral Data Science, <a href="https://theconversation.com/institutions/university-of-technology-sydney-936">University of Technology Sydney</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/how-people-get-sucked-into-misinformation-rabbit-holes-and-how-to-get-them-out-223717">original article</a>.</em></p>

Mind


There is, in fact, a ‘wrong’ way to use Google

<p>I was recently reading comments on a post related to COVID-19, and saw a reply I would classify as misinformation, bordering on conspiracy. I couldn’t help but ask the commenter for evidence.</p> <p>Their response came with some web links and “do your own research”. I then asked about their research methodology, which turned out to be searching for specific terms on Google.</p> <p>As an academic, I was intrigued. Academic research aims to establish the truth of a phenomenon based on evidence, analysis and peer review.</p> <p>On the other hand, a search on Google provides links with content written by known or unknown authors, who may or may not have knowledge in that area, based on a ranking system that either follows the preferences of the user, or the collective popularity of certain sites.</p> <p>In other words, Google’s algorithms can penalise the truth for not being popular.</p> <p><a href="https://www.google.com/search/howsearchworks/algorithms" target="_blank" rel="noopener">Google Search’s</a> ranking system has a <a href="https://youtu.be/tFq6Q_muwG0" target="_blank" rel="noopener">fraction of a second</a> to sort through hundreds of billions of web pages and index them to find the most relevant and (ideally) useful information.</p> <p>Somewhere along the way, mistakes get made. And it’ll be a while before these algorithms become foolproof – if ever. Until then, what can you do to make sure you’re not getting the short end of the stick?</p> <p><strong>One question, millions of answers</strong></p> <p>There are around <a href="https://morningscore.io/how-does-google-rank-websites/" target="_blank" rel="noopener">201 known factors</a> on which a website is analysed and ranked by Google’s algorithms. Some of the main ones are:</p> <ul> <li>the specific key words used in the search</li> <li>the meaning of the key words</li> <li>the relevance of the web page, as assessed by the ranking algorithm</li> <li>the “quality” of the contents</li> <li>the usability of the web page</li> <li>and user-specific factors such as their location and profiling data taken from connected Google products, including Gmail, YouTube and Google Maps.</li> </ul> <p><a href="https://link.springer.com/article/10.1007/s10676-013-9321-6" target="_blank" rel="noopener">Research has shown</a> users pay more attention to higher-ranked results on the first page. And there are known ways to ensure a website makes it to the first page.</p> <p>One of these is “<a href="https://en.wikipedia.org/wiki/Search_engine_optimization" target="_blank" rel="noopener">search engine optimisation</a>”, which can help a web page float into the top results even if its content isn’t necessarily of high quality.</p> <p>The other issue is Google Search results <a href="https://mcculloughwebservices.com/2021/01/07/why-google-results-look-different-for-everyone/" target="_blank" rel="noopener">are different for different people</a>, sometimes even if they have the exact same search query.</p> <p>Results are tailored to the user conducting the search. In his book <a href="https://www.penguin.co.uk/books/181/181850/the-filter-bubble/9780241954522.html" target="_blank" rel="noopener">The Filter Bubble</a>, Eli Pariser points out the dangers of this – especially when the topic is of a controversial nature.</p> <p>Personalised search results create alternate versions of the flow of information. 
Users receive more of what they’ve already engaged with (which is likely also what they already believe).</p> <p>This leads to a dangerous cycle which can further polarise people’s views, and in which more searching doesn’t necessarily mean getting closer to the truth.</p> <p><strong>A work in progress</strong></p> <p>While Google Search is a brilliant search engine, it’s also a work in progress. Google is <a href="https://ai.googleblog.com/2020/04/a-scalable-approach-to-reducing-gender.html" target="_blank" rel="noopener">continuously addressing various issues</a> related to its performance.</p> <p>One major challenge relates to societal biases <a href="https://www.kcl.ac.uk/news/artificial-intelligence-is-demonstrating-gender-bias-and-its-our-fault" target="_blank" rel="noopener">concerning race and gender</a>. For example, searching Google Images for “truck driver” or “president” returns images of mostly men, whereas “model” and “teacher” returns images of mostly women.</p> <p>While the results may represent what has <em>historically</em> been true (such as in the case of male presidents), this isn’t always the same as what is <em>currently</em> true – let alone representative of the world we wish to live in.</p> <p>Some years ago, Google <a href="https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai" target="_blank" rel="noopener">reportedly</a> had to block its image recognition algorithms from identifying “gorillas”, after they began classifying images of black people with the term.</p> <p>Another issue highlighted by health practitioners relates to people <a href="https://www.healthline.com/health/please-stop-using-doctor-google-dangerous" target="_blank" rel="noopener">self-diagnosing based on symptoms</a>. It’s estimated about <a href="https://onlinelibrary.wiley.com/doi/full/10.5694/mja2.50600" target="_blank" rel="noopener">40% of Australians</a> search online for self-diagnosis, and there are about 70,000 health-related searches conducted on Google each minute.</p> <p>There can be serious repercussions for those who <a href="https://www.medicaldirector.com/press/new-study-reveals-the-worrying-impact-of-doctor-google-in-australia" target="_blank" rel="noopener">incorrectly interpret</a> information found through “<a href="https://www.ideas.org.au/blogs/dr-google-should-you-trust-it.html" target="_blank" rel="noopener">Dr Google</a>” – not to mention what this means in the midst of a pandemic.</p> <p>Google has delivered a plethora of COVID misinformation related to unregistered medicines, fake cures, mask effectiveness, contact tracing, lockdowns and, of course, vaccines.</p> <p>According to <a href="https://www.ajtmh.org/view/journals/tpmd/103/4/article-p1621.xml" target="_blank" rel="noopener">one study</a>, an estimated 6,000 hospitalisations and 800 deaths during the first few months of the pandemic were attributable to misinformation (specifically the false claim that <a href="https://www.abc.net.au/news/2020-04-28/hundreds-dead-in-iran-after-drinking-methanol-to-cure-virus/12192582" target="_blank" rel="noopener">drinking methanol can cure COVID</a>).</p> <p>To combat this, <a href="https://misinforeview.hks.harvard.edu/article/how-search-engines-disseminate-information-about-covid-19-and-why-they-should-do-better/" target="_blank" rel="noopener">Google eventually prioritised</a> authoritative sources in its search results. 
But there’s only so much Google can do.</p> <p>We each have a responsibility to make sure we’re thinking critically about the information we come across. What can you do to make sure you’re asking Google the best question for the answer you need?</p> <p><strong>How to Google smarter</strong></p> <p>In summary, a Google Search user must be aware of the following facts:</p> <ol> <li> <p>Google Search will bring you the top-ranked web pages which are also the most relevant to your search terms. Your results will be as good as your terms, so always consider context and how the inclusion of certain terms might affect the result.</p> </li> <li> <p>You’re better off starting with a <a href="https://support.google.com/websearch/answer/134479?hl=enr" target="_blank" rel="noopener">simple search</a>, and adding more descriptive terms later. For instance, which of the following do you think is a more effective question: “<em>will hydroxychloroquine help cure my COVID?</em>” or “<em>what is hydroxychloroquine used for?</em>”</p> </li> <li> <p>Quality content comes from verified (or verifiable) sources. While scouring through results, look at the individual URLs and think about whether that source holds much authority (for instance, is it a government website?). Continue this process once you’re in the page, too, always checking for author credentials and information sources.</p> </li> <li> <p>Google may personalise your results based on your previous search history, current location and interests (gleaned through other products such as Gmail, YouTube or Maps). You can use <a href="https://support.google.com/chrome/answer/95464?hl=en&amp;co=GENIE.Platform%3DDesktop" target="_blank" rel="noopener">incognito mode</a> to prevent these factors from impacting your search results.</p> </li> <li> <p>Google Search isn’t the only option. And you don’t just have to leave your reading to the discretion of its algorithms. There are several other search engines available, including <a href="https://www.bing.com/" target="_blank" rel="noopener">Bing</a>, <a href="https://au.yahoo.com/" target="_blank" rel="noopener">Yahoo</a>, <a href="https://www.baidu.com/" target="_blank" rel="noopener">Baidu</a>, <a href="https://duckduckgo.com/" target="_blank" rel="noopener">DuckDuckGo</a> and <a href="https://www.ecosia.org/" target="_blank" rel="noopener">Ecosia</a>. Sometimes it’s good to triangulate your results from outside the filter bubble.</p> </li> </ol> <p><em><a href="https://theconversation.com/profiles/muneera-bano-398400" target="_blank" rel="noopener">Muneera Bano</a>, Senior Lecturer, Software Engineering, <a href="https://theconversation.com/institutions/deakin-university-757" target="_blank" rel="noopener">Deakin University</a></em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. 
Read the <a href="https://theconversation.com/there-is-in-fact-a-wrong-way-to-use-google-here-are-5-tips-to-set-you-on-the-right-path-179099" target="_blank" rel="noopener">original article</a>.</em></p> <p><em>Image: Getty Images</em></p>
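<p>To make the ranking problem concrete, here is a minimal sketch of how a score that blends relevance with popularity can push an inaccurate page above an accurate one. The pages, weights and numbers below are invented for illustration; Google’s real system weighs hundreds of signals.</p>
<pre><code># Toy ranking model: a page's score blends topical relevance with
# popularity. All pages, weights and numbers here are hypothetical,
# not Google's actual algorithm.

W_RELEVANCE = 0.5   # invented weight
W_POPULARITY = 0.5  # invented weight

def score(relevance: float, popularity: float) -> float:
    """Combine the two signals into a single rank score."""
    return W_RELEVANCE * relevance + W_POPULARITY * popularity

pages = [
    # (title, relevance 0-1, popularity 0-1, accurate?)
    ("Health department guidance", 0.90, 0.30, True),
    ("Viral 'miracle cure' blog",  0.85, 0.95, False),
]

for title, rel, pop, accurate in sorted(
        pages, key=lambda p: score(p[1], p[2]), reverse=True):
    print(f"{score(rel, pop):.2f}  {title}  (accurate: {accurate})")

# Prints the viral blog first (0.90) and the accurate guidance page
# second (0.60): popularity, not accuracy, decided the order.
</code></pre>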

Technology


Spotify’s response to Rogan-gate falls short of its ethical and editorial obligations

<p>Audio streaming giant <a href="https://www.spotify.com/au/" target="_blank" rel="noopener">Spotify</a> is getting a crash course in the tension between free speech and the need to protect the public from harmful misinformation.</p><p>The Swedish-founded platform, which has 400 million active users, has faced a hail of criticism over misinformation broadcast on its <a href="https://variety.com/2021/digital/news/joe-rogan-experience-most-popular-podcast-news-roundup-1235123361/" target="_blank" rel="noopener">most popular podcast</a>, the Joe Rogan Experience.</p><p>Rogan, a former ultimate fighting commentator and television presenter, has <a href="https://variety.com/2021/digital/news/joe-rogan-anti-vaccine-podcast-spotify-1234961803/" target="_blank" rel="noopener">argued</a> healthy young people should not get a COVID vaccination. This is contrary to medical advice from governments all over the world, not to mention the <a href="https://www.who.int/emergencies/diseases/novel-coronavirus-2019/covid-19-vaccines/advice" target="_blank" rel="noopener">World Health Organization</a>.</p><p>A recent episode of his podcast, featuring virologist Robert Malone, drew <a href="https://www.theguardian.com/technology/2022/jan/14/spotify-joe-rogan-podcast-open-letter" target="_blank" rel="noopener">criticism from public health experts</a> over its various conspiracist claims about COVID vaccination programs.</p><p>There were widespread calls for Spotify to deplatform Rogan and his interviewees. Rock legend Neil Young issued an ultimatum that Spotify could broadcast Rogan or Young, but not both.</p><p>Spotify made its choice: the Joe Rogan Experience is still on the air, while Young’s <a href="https://www.theguardian.com/commentisfree/2022/jan/28/joe-rogan-neil-young-spotify-streaming-service" target="_blank" rel="noopener">music</a> is gone, along with <a href="https://www.abc.net.au/news/2022-01-29/joni-mitchell-take-songs-off-spotify-solidarity-with-neil-young/100790200" target="_blank" rel="noopener">Joni Mitchell</a> and <a href="https://www.rollingstone.com/music/music-news/nils-lofgren-spotify-neil-young-1292480/" target="_blank" rel="noopener">Nils Lofgren</a>, who removed their content in solidarity.</p><p><strong>Spotify’s response</strong></p><p>Spotify co-founder Daniel Ek has since <a href="https://newsroom.spotify.com/2022-01-30/spotifys-platform-rules-and-approach-to-covid-19/" target="_blank" rel="noopener">promised</a> to tag controversial COVID-related content with links to a “hub” containing trustworthy information. But he stopped short of pledging to remove misinformation outright.</p><p>In a statement, Ek <a href="https://newsroom.spotify.com/2022-01-30/spotifys-platform-rules-and-approach-to-covid-19/" target="_blank" rel="noopener">said</a>:</p><blockquote><p>We know we have a critical role to play in supporting creator expression while balancing it with the safety of our users. In that role, it is important to me that we don’t take on the position of being content censor while also making sure that there are rules in place and consequences for those who violate them.</p></blockquote><p><strong>Does it go far enough?</strong></p><p>Freedom of expression is important, but so is prevention of harm. When what is being advocated is likely to cause harm or loss of life, a line has been crossed. 
Spotify has a moral obligation to restrict speech that damages the public interest.</p><p>In response to the controversy, Spotify also publicly shared its <a href="https://newsroom.spotify.com/2022-01-30/spotify-platform-rules/" target="_blank" rel="noopener">rules of engagement</a>. They are comprehensive and proactive in helping to make content creators aware of the lines that must not be crossed, while allowing for freedom of expression within these constraints.  </p><p>Has Spotify fulfilled its duty of care to customers? If it applies the rules as stated, provides listeners with links to trustworthy information, and refuses to let controversial yet profitable content creators off the hook, this is certainly a move in the right direction.</p><p><strong>Platform or publisher?</strong></p><p>At the crux of the problem is the question of whether social media providers are <a href="https://socialmediahq.com/if-social-media-companies-are-publishers-and-not-platforms-that-changes-everything/" target="_blank" rel="noopener">platforms or publishers</a>.</p><p>Spotify and other Big Tech players claim they are simply providing a platform for people’s opinions. But <a href="https://www.zdnet.com/article/scott-morrison-says-social-media-platforms-are-publishers-if-unwilling-to-identify-users/" target="_blank" rel="noopener">regulators</a> are beginning to say no, they are in fact publishers of information, and like any publisher must be accountable for their content.</p><figure class="align-center "><img src="https://images.theconversation.com/files/443600/original/file-20220201-19-1kyj1oy.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" alt="Logos of big tech platforms" /><figcaption><span class="caption">Tech platforms like to claim they’re not publishers.</span> <span class="attribution"><span class="source">Pixabay</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/" target="_blank" rel="noopener">CC BY</a></span></figcaption></figure><p>Facebook, YouTube, Twitter and other platforms <a href="https://www.brookings.edu/blog/techtank/2021/06/01/addressing-big-techs-power-over-speech/" target="_blank" rel="noopener">have significant power</a> to promote particular views and limit others, thereby influencing millions or even <a href="https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#:%7E:text=How%20many%20users%20does%20Facebook,the%20biggest%20social%20network%20worldwide." target="_blank" rel="noopener">billions</a> of users.</p><p>In the United States, these platforms have immunity from civil and criminal liability under a <a href="https://www.eff.org/issues/cda230" target="_blank" rel="noopener">1996 federal law</a> that shields them from liability as sites that host user-generated content. Being US corporations, their actions are primarily based on US legislation.</p><p>It is an ingenious business model that allows Facebook, for example, to turn a steady stream of free user-posted content into <a href="https://www.statista.com/statistics/277963/facebooks-quarterly-global-revenue-by-segment/" target="_blank" rel="noopener">US$28 billion in quarterly advertising revenue</a>.</p><p>Established newspapers and magazines also sell advertising, but they pay journalists to write content and are legally liable for what they publish. 
It’s little wonder they are <a href="https://www.theguardian.com/commentisfree/2020/apr/24/newspapers-journalists-coronavirus-press-democracy" target="_blank" rel="noopener">struggling</a> to survive, and little wonder the tech platforms are keen to avoid similar responsibilities.</p><p>But the fact is that social media companies do make editorial decisions about what appears on their platforms. So it is not morally defensible to hide behind the legal protections afforded to them as platforms, when they operate as publishers and reap considerable profits by doing so.</p><p><strong>How best to combat misinformation?</strong></p><p>Misinformation in the form of fake news, intentional disinformation and misinformed opinion has become a crucial issue for democratic systems around the world. How to combat this influence without compromising democratic values and free speech?</p><p>One way is to cultivate “news literacy” – an ability to discern misinformation. This can be done by making a practice of sampling news from across the political spectrum, then averaging out the message to the moderate middle. Most of us confine ourselves to the echo chamber of our preferred source, avoiding contrary opinions as we go.</p><p>If you are not sampling at least three reputable sources, you’re not getting the full picture. Here are the <a href="https://libguides.ucmerced.edu/news/reputable" target="_blank" rel="noopener">characteristics</a> of a reputable news source.</p><p>Social media, meanwhile, should invest in artificial intelligence (AI) tools to sift the deluge of real-time content and flag potential fake news. Some progress in this area has been made, but there is room for improvement.</p><p>The tide is turning for the big social media companies. Governments around the world are formulating laws that will oblige them to be more responsible for the content they publish. They won’t have long to wait.</p><p><em><a href="https://theconversation.com/profiles/david-tuffley-13731" target="_blank" rel="noopener">David Tuffley</a>, Senior Lecturer in Applied Ethics &amp; CyberSecurity, <a href="https://theconversation.com/institutions/griffith-university-828" target="_blank" rel="noopener">Griffith University</a></em></p><p><em>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/spotifys-response-to-rogan-gate-falls-short-of-its-ethical-and-editorial-obligations-176022" target="_blank" rel="noopener">original article</a>.</em></p><p><em>Image: Getty Images</em></p>

Technology


Finding climate misinformation

<div> <div class="copy"> <p>We learnt only last month that <a rel="noreferrer noopener" href="https://cosmosmagazine.com/people/behaviour/trolling-abuse-of-scientists-during-the-pandemic/" target="_blank">scientists have been abused</a> on social media for telling the truth during the COVID pandemic.</p> <p>Now, an international team of researchers has delved into a related phenomenon – climate misinformation – and found that attacks on the reliability of climate science is the most common form of misinformation, and that misinformation targeting climate solutions is on the rise.</p> <p>Monash University research fellow Dr John Cook and colleagues from the University of Exeter, UK, and Trinity College Dublin, Ireland, trained a machine-learning model to automatically detect and categorise climate misinformation.</p> <p>Then they reviewed 255,449 documents from 20 prominent conservative think-tank (CTT) websites and 33 climate change denial blogs to build a two-decade history of climate misinformation and find common topics, themes, peaks, and changes over time.</p> <p>It’s the largest content analysis to date on climate misinformation, with findings <a rel="noreferrer noopener" href="https://doi.org/10.1038/s41598-021-01714-4" target="_blank">published</a> today in in the <em>Nature </em>journal <em>Scientific Reports</em>.</p> <p>“Our study found claims used by such think-tanks and blogs focus on attacking the integrity of climate science and scientists, and, increasingly, challenged climate policy and renewable energy,” Cook says.</p> <p>“Organised climate change contrarianism has played a significant role in the spread of misinformation and the delay to meaningful action to mitigate climate change.”</p> <p>As a result of their analysis, the researchers developed a taxonomy to categorise claims about climate science and policy used by opponents of climate action.</p> <p>They found the five major claims about climate change used by CTTs and blogs were:</p> <ol type="1"> <li>It’s not happening</li> <li>It’s not us</li> <li>It’s not bad</li> <li>Solutions won’t work</li> <li>Climate science/scientists are unreliable</li> </ol> <p>Within these were a number of sub-claims providing a detailed delineation of specific arguments.</p> <p>The researchers say climate misinformation leads to a number of negative outcomes, including reduced climate literacy, public polarisation, cancelling out accurate information and influencing how scientists engage with the public.</p> <p>“The problem of misinformation is so widespread, practical solutions need to be scalable to match the size of the problem,” Cook says.</p> <p>“Misinformation spreads so quickly across social networks, we need to be able to identify misinformation claims instantly in order to respond quickly. Our research provides a tool to achieve this.”</p> <!-- Start of tracking content syndication. 
<div id="contributors"> <p><a href="https://cosmosmagazine.com/earth/climate/finding-climate-misinformation/">This article</a> was originally published on <a href="https://cosmosmagazine.com">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/dr-deborah-devis">Deborah Devis</a>. Deborah Devis is a science journalist at Cosmos. She has a Bachelor of Liberal Arts and Science (Honours) in biology and philosophy from the University of Sydney, and a PhD in plant molecular genetics from the University of Adelaide.</p> <p><em>Image: Yasin Ozturk/Anadolu Agency via Getty Images</em></p> </div> </div>

International Travel


Is it even possible to regulate Facebook effectively? Time and again, attempts have led to the same outcome

<p>The Australian government’s <a href="https://theconversation.com/this-is-why-australia-may-be-powerless-to-force-tech-giants-to-regulate-harmful-content-169826">recent warning</a> to Facebook over misinformation is just the latest salvo in the seemingly constant battle to hold the social media giant to account for the content posted on its platform.</p> <p>It came in the same week as the US Senate heard <a href="https://www.bbc.com/news/world-us-canada-58805965">whistleblowing testimony</a> in which former Facebook executive Frances Haugen alleged the company knew of harmful consequences for its users but chose not to act.</p> <p>Governments all over the world have been pushing for years to make social media giants more accountable, both in terms of the quality of information they host, and their use of users’ data as part of their business models.</p> <p>The Australian government’s <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_LEGislation/Bills_Search_Results/Result?bId=r6680">Online Safety Act</a> will <a href="https://perma.cc/95A5-T79H">come into effect in January 2022</a>, giving the eSafety Commissioner unprecedented powers to crack down on abusive or violent content, or sexual images posted without consent.</p> <p>But even if successful, this legislation will only deal with a small proportion of the issues that require regulation. On many such issues, social media platforms have attempted to regulate themselves rather than submit to legislation. But whether we are talking about legislation or self-regulation, past experiences do not engender much confidence that tech platforms can be successfully regulated, or that regulation can be put into action easily.</p> <p>Our <a href="https://aisel.aisnet.org/ecis2021_rip/35">research</a> has examined previous attempts to regulate tech giants in Australia. We analysed 269 media articles and 282 policy documents and industry reports published from 2015 to 2021. Let’s discuss a couple of relevant case studies.</p> <h2>1. Ads and news</h2> <p>In 2019, the Australian Competition and Consumer Commission (ACCC) <a href="https://www.accc.gov.au/publications/digital-platforms-inquiry-final-report">inquiry into digital platforms</a> described Facebook’s algorithms, particularly those that determine the positioning of advertising on Facebook pages, as “opaque”. It concluded media companies needed more assurance about the use of their content.</p> <p>Facebook initially welcomed the inquiry, but then <a href="https://www.accc.gov.au/system/files/Facebook_0.pdf">publicly opposed it</a> when the government argued the problems related to Facebook’s substantial market power in display advertising, and Facebook and Google’s dominance of news content generated by media companies, were too important to be left to the companies themselves.</p> <p>Facebook argued there was <a href="https://www.accc.gov.au/system/files/Facebook.pdf">no evidence of an imbalance of bargaining power</a> between it and news media companies, adding it would have no choice but to withdraw news services in Australia if forced to pay publishers for hosting their content. 
The standoff resulted in Facebook’s <a href="https://theconversation.com/facebook-has-pulled-the-trigger-on-news-content-and-possibly-shot-itself-in-the-foot-155547">infamous week-long embargo on Australian news</a>.</p> <p>The revised and amended News Media Bargaining Code was <a href="https://www.accc.gov.au/system/files/Final%20legislation%20as%20passed%20by%20both%20houses.pdf">passed by the parliament in February</a>. Both the government and Facebook declared victory, the former having managed to pass its legislation, and the latter ending up striking its own bargains with news publishers without having to be held legally to the code.</p> <h2>2. Hate speech and terrorism</h2> <p>In 2015, to deal with violent extremism on social media, the Australian government initially worked with the tech giant to develop joint AI solutions to improve the technical processes of content identification and counter violent extremism.</p> <p>This voluntary solution worked brilliantly, until it did not. In March 2019, mass shootings at mosques in Christchurch were live-streamed on Facebook by an Australian-born white supremacist terrorist, and the recordings subsequently circulated on the internet.</p> <p>This brought to light <a href="https://www.stuff.co.nz/national/christchurch-shooting/111473473/facebook-ai-failed-to-detect-christchurch-shooting-video">the inability of Facebook’s artificial intelligence algorithms</a> to detect and remove the live footage of the shooting and how fast it was shared on the platform.</p> <p>The Australian government responded in 2019 by <a href="https://www.ag.gov.au/crime/abhorrent-violent-material">amending the Criminal Code</a> to require social media platforms to remove abhorrent or violent material “in reasonable time” and, where relevant, refer it to the Australian Federal Police.</p> <h2>What have we learned?</h2> <p>These two examples, while strikingly different, both unfolded in a similar way: an initial dialogue in which Facebook proposes an in-house solution involving its own algorithms, before a subsequent shift towards mandatory government regulation, which is met with resistance or bargaining (or both) from Facebook, and the final upshot which is piecemeal legislation that is either watered down or only covers a subset of specific types of harm.</p> <p>There are several obvious problems with this. The first is that only the tech giants themselves know how their algorithms work, so it is difficult for regulators to oversee them properly.</p> <p>Then there’s the fact that legislation typically applies at a national level, yet Facebook is a global company with billions of users across the world and a platform that is incorporated into our daily lives in all sorts of ways.</p> <p>How do we resolve the impasse? One option is for regulations to be drawn up by independent bodies appointed by governments and tech giants to drive the co-regulation agenda globally. But relying on regulation alone to guide tech giants’ behaviour against potential abuses might not be sufficient. There is also the need for self-discipline and appropriate corporate governance, potentially enforced by these independent bodies.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article first appeared on <a rel="noopener" href="https://theconversation.com/is-it-even-possible-to-regulate-facebook-effectively-time-and-again-attempts-have-led-to-the-same-outcome-169947" target="_blank">The Conversation</a>.</em></p>

Technology


The sneaky way anti-vaxx groups are remaining undetected on Facebook

<p><span style="font-weight: 400;">Anti-vaccination groups on Facebook are relying on an interesting tactic to avoid detection from those who don’t share their beliefs. </span></p> <p><span style="font-weight: 400;">The groups are changing their names to euphemisms like ‘dance party’ or ‘dinner party’ to skirt rules put in place by the social media giant.</span></p> <p><span style="font-weight: 400;">Harsher bans were put in place by Facebook to crack down on dangerous misinformation about COVID-19 and subsequent vaccines. </span></p> <p><span style="font-weight: 400;">The groups are largely private and difficult to find on the social networking site, but still retain a large user base and have learned how to swap out detectable language to remain unseen. </span></p> <p><span style="font-weight: 400;">One major ‘dance party’ group has over 40,000 followers and has stopped allowing new users to join due to public backlash.</span></p> <p><span style="font-weight: 400;">The backup group for ‘Dance Party’, known as ‘Dinner Party’ and created by the same moderators, has more than 20,000 followers.</span></p> <p><span style="font-weight: 400;">Other anti-vaxx influencers on Instagram have adopted similar tactics, such as referring to vaccinated people as ‘swimmers’ and the act of vaccination as joining a ‘swim club’.</span></p> <p><span style="font-weight: 400;">These devious tactics have been recognised by governments internationally, as there is mounting pressure for officials to increase pressure on the social media platforms to do more to contain vaccine misinformation.</span></p> <p><span style="font-weight: 400;">An administrator for the ‘Dance Party’ wrote that beating Facebook’s moderating system “feels like a badge of honour”, as they urged users to stay away from ‘unapproved words’. </span></p> <p><span style="font-weight: 400;">Using code words and euphemisms is not new among the anti-vaxx community, as it borrows from a playbook used by extremists on Facebook and other social networking sites for many years.</span></p> <p><em><span style="font-weight: 400;">Image credit: Shutterstock</span></em></p>

Technology


Peddlers of fake news to be punished by Facebook

<p><span style="font-weight: 400;">People sharing false or misleading information on Facebook could soon be penalised.</span></p> <p><span style="font-weight: 400;">The social media giant has announced it will be cracking down on fake news by doling out harsher punishments for individual accounts repeatedly sharing misinformation.</span></p> <p><span style="font-weight: 400;">Under the new rules, Facebook will “reduce the distribution of all posts” from people guilty of doing this to make it harder for their content to be seen by other users.</span></p> <p><span style="font-weight: 400;">Though this already happens for Pages and Groups that post misinformation, it hasn’t extended that to individuals until now.</span></p> <p><span style="font-weight: 400;">Facebook does limit the reach of posts made by individual users that have been flagged by fact-checkers, but the new policy will act as a broader penalty for account holders sharing misinformation.</span></p> <p><span style="font-weight: 400;">But, Facebook has not specified how many times a user’s posts will have to be flagged before they are punished.</span></p> <p><span style="font-weight: 400;">The company will also start showing users pop-messages if they click the “like” button of a page that frequently shares misinformation to alert users that fact-checkers have previously flagged posts from the page.</span></p> <p><span style="font-weight: 400;">“This will help people make an informed decision about whether they want to follow the Page,” the company wrote on a blog post.</span></p> <p><span style="font-weight: 400;">The rules are the company’s latest effort to curb fake news. This comes after Facebook has continued to struggle with controlling the rumours and misleading posts from nearly 3 billion users, despite creating dedicated information hubs for topics such as the pandemic and climate change to present users with reliable information.</span></p>

Legal


How to talk to someone you believe is misinformed about the coronavirus

<p>The medical evidence is clear: The coronavirus global health threat is not an elaborate hoax. Bill Gates did not create the coronavirus to sell more vaccines. Essential oils are <a href="https://nccih.nih.gov/health/in-the-news-in-the-news-coronavirus-and-alternative-treatments">not effective</a> at protecting you from coronavirus.</p> <p>But those facts have not stopped contrary claims from spreading both on and offline.</p> <p>No matter the topic, people often hear conflicting information and must decide which sources to trust. The internet and the fast-paced news environment mean that information travels quickly, leaving little time for fact-checking.</p> <p>As a <a href="https://scholar.google.com/citations?user=Li4FgBUAAAAJ&amp;hl=en">researcher</a> interested in science communication and controversies, I study how scientific misinformation spreads and how to correct it.</p> <p>I’ve been very busy lately. Whether we are talking about the coronavirus, climate change, vaccines or something else, <a href="https://www.cnn.com/2020/03/05/tech/facebook-google-who-coronavirus-misinformation/index.html">misinformation abounds</a>. Maybe you have shared something on Facebook that turned out to be false, or retweeted something before <a href="https://theconversation.com/4-ways-to-protect-yourself-from-disinformation-130767">double-checking the source</a>. <a href="https://www.unlv.edu/news/article/future-alternative-facts">This can happen</a> to anyone.</p> <p>It’s also common to encounter people who are misinformed but don’t know it yet. It’s one thing to double-check your own information, but what’s the best way to talk to someone else about what they think is true – but which is not true?</p> <p><strong>Is it worth engaging?</strong></p> <p>First, consider the context of the situation. Is there enough time to engage them in a conversation? Do they seem interested in and open to discussion? Do you have a personal connection with them where they value your opinion?</p> <p>Evaluating the situation can help you decide whether you want to start a conversation to correct their misinformation. Sometimes we interact with people who are closed-minded and not willing to listen. <a href="https://rightingamerica.net/when-the-juice-is-not-worth-the-squeeze-distinguishing-between-productive-and-unproductive-conversations/">It’s OK</a> not to engage with them.</p> <p>In interpersonal interactions, correcting misinformation can be helped by the strength of the relationship. For example, it may be easier to correct misinformation held by a family member or partner because they are already aware that you care for them and you are interested in their well-being.</p> <p><strong>Don’t patronize</strong></p> <p>One approach is to engage in a back-and-forth discussion about the topic. This is often called a <a href="https://theconversation.com/understanding-christians-climate-views-can-lead-to-better-conversations-about-the-environment-115693">dialogue</a> approach to communication.</p> <p>That means you care about the person behind the opinion, even when you disagree. It is important not to enter conversations with a patronizing attitude. 
For example, when talking to climate change skeptics, the <a href="https://www.npr.org/2017/05/09/527541032/there-must-be-more-productive-ways-to-talk-about-climate-change">attitude</a> that the speaker holds toward an audience affects the success of the interaction and can lead to conversations ending before they’ve started.</p> <p>Instead of treating the conversation as a corrective lecture, treat the other person as an equal partner in the discussion. One way to create that common bond is to acknowledge the shared struggles of locating accurate information. Saying that there is a lot of information circulating can help someone feel comfortable changing their opinion and accepting new information, instead of <a href="https://bigthink.com/age-of-engagement/study-warns-of-boomerang-effects-in-climate-change-campaigns">resisting and sticking to</a> their previous beliefs to avoid admitting they were wrong.</p> <p>Part of creating dialogue is asking questions. For example, if someone says that they heard coronavirus was all a hoax, you might ask, “That’s not something I’d heard before, what was the source for that?” By being interested in their opinion and not rejecting it out of hand, you open the door for conversation about the information and can engage them in evaluating it.</p> <p><strong>Offer to trade information</strong></p> <p>Another strategy is to introduce the person to new sources. In my <a href="https://www.routledge.com/Communication-Strategies-for-Engaging-Climate-Skeptics-Religion-and-the/Bloomfield/p/book/9781138585935">book</a>, I discuss a conversation I had with a climate skeptic who did not believe that scientists had reached a 97% consensus on the existence of climate change. They dismissed this well-established number by referring to nonscientific sources and blog posts. Instead of rejecting their resources, I offered to trade with them. For each of their sources I read, they would read one of mine.</p> <p>It is likely that the misinformation people have received is not coming from a credible source, so you can propose an alternative. For example, you could offer to send them an article from the <a href="http://cdc.gov/">Centers for Disease Control</a> for medical and health information, the <a href="https://www.ipcc.ch/">Intergovernmental Panel on Climate Change</a> for environmental information, or the reputable debunking site <a href="http://snopes.com/">Snopes</a> to compare the information. If someone you are talking to is open to learning more, encourage that continued curiosity.</p> <p>It is sometimes hard, inconvenient, or awkward to engage someone who is misinformed. But I feel very strongly that opening ourselves up to have these conversations can help to correct misinformation. To ensure that society can make the best decisions about important topics, share accurate information and combat the spread of misinformation.
</p> <p><em><a href="https://theconversation.com/profiles/emma-frances-bloomfield-712710">Emma Frances Bloomfield</a>, Assistant Professor of Communication Studies, <a href="https://theconversation.com/institutions/university-of-nevada-las-vegas-826">University of Nevada, Las Vegas</a></em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/how-to-talk-to-someone-you-believe-is-misinformed-about-the-coronavirus-133044">original article</a>.</em></p>

Relationships