Generative AI Creates New Risk Exposures
Published on Dec 11, 2023
Over the last year, some of the most impressive technological advancements have come from the development of generative AI. Businesses around the world are figuring out how they can leverage this technology to increase efficiency and profits. At the same time, it’s important to identify and control related exposures.
The Potential of Generative AI
Generative AI refers to AI algorithms and deep-learning models that can generate text, images, video, audio, and code. Many generative AI programs have text-based interfaces that enable the user to type a simple prompt telling the AI what to create. Popular examples include DALL-E (AI-generated images) and ChatGPT (AI-generated text), but numerous other programs have also become available.
According to McKinsey & Company, recent breakthroughs in generative AI could change the way people approach content generation. However, since the technology is very new, there are limitations and risks to consider.
Copyright Risks
Some of the stickiest issues surrounding AI-generated content involve copyright.
Generative AI models are trained on vast amounts of content, much of it copyrighted. According to Reuters, this has led to lawsuits pitting AI companies, which claim fair use, against copyright holders, who claim infringement. There is also concern that AI models may generate content that closely resembles existing copyrighted works, thereby infringing on the original creator’s intellectual property. A report from the U.S. Congressional Research Service says both the AI user and the AI company could potentially be held liable for infringing a copyright holder’s exclusive rights when they create AI-generated materials.
Even if AI-generated content does not infringe on anyone’s rights, companies creating it may be unable to claim ownership of it. The U.S. Copyright Office has determined that AI-generated works are made without the creative contribution of a human actor and therefore can’t be copyrighted. However, other countries may reach different conclusions. According to Herbert Smith Freehills, the UK’s Copyright, Designs and Patents Act 1988 provides copyright protection for computer-generated works with no human author, but there are nonetheless issues of originality and authorship to consider.
Disinformation and Defamation
It has never been easier to create factually incorrect but highly convincing material. According to Vice, Adobe has been caught selling user-submitted AI-generated images of the violence in Gaza and Israel. Some of these images may end up online without being labeled as AI-generated, creating a huge potential for disinformation. With AI, anyone can produce realistic images of people doing things they never did and of events that never happened.
Even when AI users don’t set out to spread disinformation, they may end up generating falsehoods. This is largely due to AI’s propensity to “hallucinate,” or make up facts, which can be a major problem when companies use AI to write articles or conduct research. For example, The Law Society Gazette says New Zealand’s Law Society has received multiple requests from lawyer members for cases that were cited by ChatGPT but don’t actually exist. Research published in the Cureus Journal of Medical Science likewise found that ChatGPT-generated content has high rates of inaccurate or fabricated references, and Gizmodo reports that CNET had to review the accuracy of its AI-written articles after several significant inaccuracies required corrections. According to Bloomberg News, AI hallucinations have even led to a defamation lawsuit.
Just as companies are determining how to use AI to their advantage, criminals are doing the same. For example, CNBC says generative AI tools are fueling a massive increase in malicious phishing emails. Scammers can also use generative AI to create fake videos and audio of real people, known as deepfakes. According to The Next Web, deepfake fraud increased by 3,000% in 2023.
Data Privacy and Security
Generative AI has led to several concerns over data privacy and security. According to Reuters, Italy went so far as to ban ChatGPT, although the country did lift the ban after OpenAI (the company behind ChatGPT) addressed its data privacy concerns. Business Insider says several companies – including Apple, Verizon, and Wells Fargo – have banned or restricted the use of ChatGPT, largely due to data privacy risks.
Companies that use ChatGPT to write code may also encounter risks. Research published on arXiv found that ChatGPT often generates code that is not robust against certain attacks, even though the chatbot appears to be aware of the relevant vulnerabilities.
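The specific weaknesses vary by task, but SQL injection is a common example of the class of flaw that can slip into AI-generated code. The sketch below is purely illustrative (the table and function names are invented for this example, not taken from the research): it contrasts a vulnerable pattern, where user input is spliced directly into a query string, with the parameterized form that code reviewers should insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: the username is interpolated directly into the
    # SQL string, so input like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # returns every row: 2
print(len(find_user_safe(conn, malicious)))    # returns no rows: 0
```

Reviewing AI-generated code for patterns like the first function, rather than assuming the chatbot applied the safeguards it can describe, is exactly the kind of check the research suggests is necessary.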
Controlling Your AI Risks
Despite the risks, many companies are eager to harness the potential of generative AI. To avoid complications, they should exercise caution.
- Watch for evolving regulations. Governments are constantly reacting to new developments. New legislation may impact copyright and data protection.
- Don’t use AI-generated content without thoroughly reviewing it first. Code may not be secure, and text may be inaccurate.
- Train your team to be alert for phishing and deepfake risks. Since AI-powered scams may be more convincing, everyone needs to look for threats.
- Establish clear company policies regarding AI use, including when workers can use AI and what information they can include in AI prompts.
- Determine how your insurance coverage would respond to AI-related claims, such as copyright infringement, data breaches, and defamation.
Generative AI has brought about major change and many of your clients’ risk management teams are still catching up. Costero Brokers can help you secure smart protection for your clients. Contact us.