Management, compliance & auditing

Is AI cybersecurity in your policies?

November 18, 2023 by Ellen Pincus

AI has quickly matured from trendy tech to a ubiquitous force in business, recreation, shopping and nearly every other aspect of life. But while it's tempting to dive headfirst into AI, taking advantage of the productivity gains it offers, now is the perfect time to take a step back and examine how your policies account for AI cybersecurity issues. 

What's driving the AI boom? 

The AI boom is primarily driven by the productivity benefits it's bringing to individuals and organizations. This has been the bridge spanning the gap between toy and tool. And the benefits are compelling enough to touch a wide range of industries. 

For example, a study by the Nielsen Norman Group found that AI improved employee productivity by 66%, with the impact spread across several disparate groups. Support agents, who typically rely heavily on distinctly human interpersonal skills, can handle 13.8% more customer questions per hour. Unsurprisingly, programmers see an even more dramatic leap in productivity and can complete 126% more coding projects each week. 

Cybersecurity concerns AI has introduced 

As well-meaning employees reach for AI tools to help with their daily tasks, those tools may open your organization to several potential risks. Here are some concerns that proactive businesses keep top of mind. 

Sharing personally identifiable or sensitive information with large language models 

It can be tempting to use a large language model (LLM) to generate well-worded, well-organized documents, but doing so introduces the risk of submitting information that others could later access. The danger is especially acute when it comes to personal and company information. 

A large language model, such as ChatGPT, may retain the information entered into it and use it when answering other people's queries. For example, suppose you took notes during your company's latest meeting about its financial picture, and your team leader asked you to turn those notes into a report so those who weren't there could benefit from the discussion. 

You take your notes, which include the company name and various financial stats, paste them into ChatGPT and ask it to produce a report. Because ChatGPT now holds your company's financial data on its servers, there's a chance it could surface that data to someone else who asks about your organization. You could also run afoul of privacy obligations by sharing clients' personally identifiable information. 
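One practical safeguard is to scrub obvious identifiers from notes before they ever reach an LLM. The Python sketch below shows the idea; the regex patterns and the scrub helper are illustrative assumptions, not a complete PII filter.

```python
import re

# Illustrative patterns only; a production filter would cover far more
# identifier types (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the
    text is submitted to any external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

notes = "Contact Jane at jane.doe@example.com or 555-867-5309 re: Q3 numbers."
print(scrub(notes))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED] re: Q3 numbers.
```

Even a basic filter like this, applied as a required step in your workflow, forces employees to pause and consider what they're about to share.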

AI systems can be hacked 

ChatGPT, the forerunner of the generative AI movement, recently suffered a security incident that exposed user data. During the window between when the exposure began and when OpenAI resolved it, here's what could happen, according to the company: "It was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time." 

This underscores the need to avoid sharing personal information with any AI system, especially because attackers could harvest information that should have stayed secure. 

Fake AI systems 

Hackers have found ways of exploiting the AI craze by creating fake AI apps designed to lure unsuspecting users into downloading malware. The attack vector is straightforward: you see an ad for a hot new generative AI tool, download it and end up with malware on your computer. That malware can do anything from running ransomware to exfiltrating sensitive information and sending it to attackers. 
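One simple habit that blunts this attack vector is verifying a download against the checksum the vendor publishes before running it. The Python sketch below assumes the vendor posts a SHA-256 hash on its official download page; the installer filename and hash placeholder are hypothetical.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the installer's filename and the SHA-256 hash
# the vendor publishes on its official download page.
installer = "ai-tool-setup.exe"
published = "copy-the-vendor-published-sha256-here"

if sha256_of(installer) != published:
    sys.exit("Checksum mismatch: do not run this installer.")
print("Checksum verified; proceed with your normal review process.")
```

A matching checksum doesn't prove an app is safe, but a mismatch is a strong signal that the file isn't what the vendor shipped.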

How to include AI in your cybersecurity policy 

The good news is you can systematically create an AI-safe space in your organization by following a few action steps. 


1. Establish a clear path for AI adoption 

A study by the Marketing AI Institute revealed that only 22% of organizations have AI policies in place. If you're in the remaining 78%, now's a great time to set up AI adoption protocols. 

Your adoption governance system could include: 

  • Requiring employees to run new AI tools by security teams before using them 

  • Establishing an AI allowlist of apps you've vetted as safe, and updating it regularly (see the sketch after this list) 

  • Setting up clear rules regarding the functions AI cannot be used for, such as emailing high-value clients or generating sensitive reports 
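To make the allowlist idea concrete, here's a minimal Python sketch of how a security team might encode and check an approved-tools list. The tool names, the 90-day review window and the is_approved helper are all illustrative assumptions, not a prescribed implementation.

```python
from datetime import date

# Hypothetical allowlist: tools the security team has vetted, plus a
# review date so stale entries get re-checked.
AI_ALLOWLIST = {
    "chatgpt-enterprise": {"approved": date(2023, 10, 1), "owner": "secops"},
    "copilot-business": {"approved": date(2023, 11, 6), "owner": "secops"},
}

def is_approved(tool: str, max_age_days: int = 90) -> bool:
    """A tool is usable only if it's on the list and its review is
    recent enough; anything else goes back to the security team."""
    entry = AI_ALLOWLIST.get(tool)
    if entry is None:
        return False
    return (date.today() - entry["approved"]).days <= max_age_days

print(is_approved("chatgpt-enterprise"))  # True only if reviewed recently
print(is_approved("random-new-ai-app"))   # False: not vetted yet
```

Whether the list lives in code, a spreadsheet or an endpoint management tool matters less than having a single authoritative source employees can check.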

2. Create an AI working group 

An AI working group can assess the best ways to take advantage of AI technologies and how to reduce the risks. Someone on the cybersecurity team should have input, because cybersecurity professionals understand the vulnerabilities AI can introduce or exploit. 

The working group should be forward-thinking, looking for ways to best leverage AI. At the same time, the team should take a "safety first" approach. If an AI implementation is risky, it may be better to do things the old-fashioned way. 

3. Establish copyright guidelines and expectations 

When someone uses AI to generate content, the copyright implications can get murky. For example, some would consider the content to be the property of the human who used the AI to create it. Others, depending on the AI used, may feel like the content is the intellectual property of the organization or person who created the AI. 

For example, suppose one of your programmers uses an AI-powered API to build a content creation app designed specifically for your company. Who owns what the app creates? The company? The programmer? The employee who prompts it? And how does someone's employment with the company affect their rights? For instance, if they leave your organization, do they retain the copyright to the material they created? 

It's best to generate clear AI guidelines with your legal team and make them available to everyone in the company. This can eliminate confusion and trepidation, helping people feel free to use AI to boost productivity without worrying about copyright entanglements. 

4. Educate employees about the dangers of using AI 

Since AI is so new, it's a good idea to presume that your workforce knows nothing about its dangers. Using this presumption as a foundation, you can build an educational system that can: 

  • Present clear warnings about current AI-related cyber threats and vulnerabilities in the wild 

  • Shine a light on the dangers that are most likely to impact your company 

  • Teach employees what to do if they think they've downloaded malware disguised as an AI tool 

  • Make sure they understand the danger of using AI to compose emails and other high-value messaging (for example, the text may sound unnatural or it may include uncited, plagiarized content) 

Weaving AI policies into your cybersecurity fabric benefits everyone 

In addition to fostering a safer work environment, incorporating AI policies into your security policies empowers your people to use AI with confidence. People want to use AI, and for good reason. By scaffolding their AI use with safety protocols, you establish clear boundaries within which they can fully leverage it. 

This way, you avoid hindering your workforce and falling behind your competition. You also avoid a surge in shadow IT, where employees end up using potentially dangerous tech without the input or support of your IT team. 

Infosec can provide the latest insights on the safe use of AI and how it should intersect with your cybersecurity culture. To learn how, set up a meeting with Infosec today. 

Ellen Pincus

Ellen Pincus is a communication and marketing professional with over a decade of creative experience helping innovative organizations differentiate their voice. As the content marketing specialist for Infosec, she enjoys empowering cyber professionals and students with skills and knowledge to advance their careers and outsmart cybercrime.