US competition watchdog issues generative AI warning

The US Federal Trade Commission (FTC) has issued a warning to companies over the use of artificial intelligence (AI) to manipulate people’s behaviour for commercial gain.

In a blog post published on 1 May 2023, FTC advertising practices attorney Michael Atleson said firms are already using the new wave of generative AI tools in ways that can influence people’s beliefs, emotions and behaviour.

This includes the use of chatbots designed to provide information, advice, support or companionship, many of which Atleson says “are effectively built to persuade” by answering queries in confident language, even when those answers are complete fiction.

He added that there is also a tendency for people to trust the output of such generative AI tools, due to a mixture of automation bias (whereby people are unduly trusting of a machine because of its appearance of neutrality or impartiality) and anthropomorphism (whereby the chatbot is designed to appear more human through its use of language or emojis, for example).

“People could easily be led to think they’re conversing with something that understands them and is on their side,” said Atleson.

“Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust.

“Concern about their malicious use goes well beyond FTC jurisdiction,” he said. “But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing and employment.”

Atleson added that any companies considering the use of generative AI in their advertising practices should be aware that design elements intended to “trick” people are a common feature of FTC cases.

He further warned that companies could start placing ads within a generative AI feature, which could also lead to deceptive or unfair practices.

“Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic and what is paid,” said Atleson.

“People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship,” he said. “And, certainly, people should know if they’re communicating with a real person or a machine.”

Referring to news from March 2023 that Microsoft had laid off an entire team dedicated to AI ethics and safety – despite being one of the leading firms pushing generative AI through its partnership with OpenAI – Atleson noted that “it’s perhaps not the best time for firms” to be reducing the personnel working on these issues.

“If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look,” he said. “Among other things, your risk assessment and mitigations should factor in foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed.”

Earlier warnings

The FTC also recently warned companies against making exaggerated or unsubstantiated claims about their AI products, and about the potential for generative AI tools to be used in cyber attacks and fraud.

“We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts,” said Atleson in the earlier blog post, published in late February 2023. “But the fact is that some products with AI claims might not even work as advertised in the first place. In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that – for FTC enforcement purposes – false or unsubstantiated claims about a product’s efficacy are our bread and butter.”

Since the start of 2023, a spate of legal challenges has been initiated against generative AI companies – including Stability AI, Midjourney and OpenAI – over alleged breaches of copyright law arising from their use of potentially protected material to train their models.

On 16 March 2023, the US Copyright Office published a policy statement on generative AI and copyright, which noted that “public guidance is needed” because people are already trying to register copyrights for work containing AI-generated content.

However, the statement focuses exclusively on whether material produced by AI, where the “technology determines the expressive elements of its output”, can be protected by copyright, rather than on generative AI firms’ access to others’ copyrighted material.

The UK government, on the other hand, has committed to creating a code of practice to facilitate AI companies’ access to copyrighted material, and to following up with specific legislation if a satisfactory agreement cannot be reached between AI firms and those in the creative sectors.

In late April 2023, prime minister Rishi Sunak announced £100m of funding to support generative AI initiatives. The money will go towards the creation of a Foundation Model Taskforce, which is intended to promote the safe and reliable use of AI across the economy and ensure the UK remains globally competitive in this strategic technology.
