OpenAI, the creator of ChatGPT, has announced plans to deploy tools to counter disinformation in anticipation of numerous elections this year across countries representing half the global population.
The rapid rise of the text generator ChatGPT has sparked a worldwide wave of interest in artificial intelligence. At the same time, concerns have grown that such tools could flood the web with disinformation and sway voters. OpenAI said on Monday that it would restrict the use of its technology, including ChatGPT and the image generator DALL-E 3, in political campaigns.
OpenAI made it clear in a blog post that it is committed to preventing its technology from being used in ways that could harm democratic processes. The company acknowledged that it does not yet fully understand how effective its tools are at personalized persuasion and, until it does, it will not permit them to be used to build applications for political campaigning and lobbying.
The World Economic Forum recently highlighted AI-driven disinformation and misinformation as significant short-term global risks capable of undermining newly elected governments in major economies.
OpenAI aims to address concerns by developing tools that provide reliable attribution for text generated by ChatGPT and enable users to discern whether an image was created using DALL-E 3. The company plans to implement digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) early this year.
This cryptographic approach encodes content provenance details, enhancing methods for identifying and tracing digital content. C2PA’s members include industry leaders such as Microsoft, Sony, Adobe, Nikon, and Canon.
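Real C2PA manifests are considerably more elaborate (they use X.509 certificate chains and CBOR/JUMBF binary structures), but the core idea described above, binding provenance metadata to a hash of the content and protecting both with a cryptographic signature, can be sketched in a few lines. The function names and the use of HMAC below are illustrative stand-ins, not the actual C2PA specification:

```python
import hashlib
import hmac
import json

def sign_manifest(content: bytes, metadata: dict, key: bytes) -> dict:
    """Bind provenance metadata to a content hash and sign the result.

    Illustrative only: real C2PA credentials are signed with X.509
    certificates, not a shared-key HMAC over JSON.
    """
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,  # e.g. generator name, creation time
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Check that the content matches its recorded hash and that the
    manifest itself has not been altered since signing."""
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was modified after signing
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

The point of the scheme is that any change to either the content or the attached provenance record invalidates the signature, which is what makes AI-generated media traceable back to the tool that produced it.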
OpenAI emphasized its commitment to responsible use, with ChatGPT offering authoritative information when asked procedural questions about US elections, directing users to reliable websites. Additionally, DALL-E 3 incorporates “guardrails” preventing the generation of images featuring real people, including candidates.
This announcement aligns with efforts by tech giants Google and Meta (Facebook’s parent company) to curb election interference, particularly through AI applications. OpenAI’s proactive measures contribute to ongoing industry initiatives addressing the challenges posed by AI-driven disinformation, deepfakes, and potential threats to the integrity of democratic processes.