In today’s world, we expect everything fast, including our news and information—which we quickly convert into opinions and beliefs. While speed has its benefits, it also poses a risk of misinformation or disinformation. The threat of falling prey to false claims continues to rise as communication grows more rapid, news becomes a game of “telephone” in which rumor may trump fact, and technology becomes more readily accessible to those with nefarious plans.
Misinformation—the spread of inaccurate information—and disinformation—the deliberate creation and dissemination of false information—have firmly taken root as widespread challenges to society. From election interference to COVID conspiracy theories to climate change denial, misinformation and disinformation pose polarizing threats to how we live our lives.
These considerable concerns will be addressed during the 2023 IEEE Computer Society (CS) Tech Forum Digital Platforms and Societal Harms, taking place 2-3 October at American University in Washington, D.C. During the event, academics, policy experts, computing professionals, and civil society leaders will join together to discuss the challenges of hate speech, extremism, exploitation, misinformation, and disinformation on digital platforms.
While they benefit society in numerous ways, computing solutions such as AI also harness enormous potential to spread false information to users online. The New York Times reported in February that generative technology such as ChatGPT could make disinformation cheaper and easier for conspiracy theorists to produce on a large scale. A September TechCrunch post emphasized that AI has already been used to spread false election information, citing “recent evidence of how new AI technology is already being used and impacting politics today, particularly when it comes to election campaigns.” And Axios reports that AI-generated content could soon account for “99% or more” of all information on the Internet.
As part of the forum, experts will discuss the challenges in addressing these information threats. Moderated by Andre Oboler of Australia’s Online Hate Prevention Institute, the global panel includes:
As Albert Einstein famously said, “We cannot solve our problems with the same level of thinking that created them.” We need diverse perspectives to combat the negative impacts of misinformation and disinformation, and these issues can only be addressed through collaboration. By bringing the computer science and engineering community together with policy leaders, concrete steps can be taken to address the collective societal challenges of misinformation and disinformation.
For more information or to register for this year’s event (to attend either in person or online), visit https://tech-forum.computer.org/societal-harms-2023/.
See Dr. Andre Oboler’s presentation on last year’s misinformation and disinformation panel:
https://www.computer.org/csdl/video-library/video/1MhZ1x6Tfiw

See Chris Cooper’s presentation on last year’s misinformation and disinformation panel:
https://www.computer.org/csdl/video-library/video/1MhXPV6SMTu