Officials in the European Union have discussed additional measures that would make artificial intelligence (AI) tools, such as OpenAI’s ChatGPT, more transparent to the public.
On June 5, Vera Jourova, the European Commission’s vice president for values and transparency, told the media that companies deploying generative AI tools with the “potential to generate disinformation” should label their content in an effort to combat “fake news.”
“Signatories who have services with a potential to disseminate AI generated disinformation should, in turn, put in place technology to recognize such content and clearly label this to users.”
Jourova also said companies that integrate generative AI into their services — such as Microsoft’s Bing Chat and Google’s Bard — should build “safeguards” to prevent malicious actors from using them to spread disinformation.
The EU first created its Code of Practice on Disinformation in 2018. It serves as both an agreement and a self-regulatory framework for players in the tech industry to combat disinformation.
Major tech companies, including Google, Microsoft and Meta Platforms, have already signed onto the strengthened 2022 version of the code. Jourova said those companies and others should report on new AI safeguards this July.
She also highlighted Twitter’s withdrawal from the code of practice, saying the company should anticipate more scrutiny from regulators.
“By leaving the Code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”
These statements from the vice president come as the EU prepares its forthcoming Artificial Intelligence Act, a comprehensive set of rules governing the public use of AI and the companies deploying it.
Although the law is not scheduled to take effect for another two to three years, European officials have urged companies to adopt a voluntary code of conduct for generative AI developers in the meantime.