As more companies embrace artificial intelligence (AI), the technology offers many benefits, including increased productivity and improved data-driven decision-making. In fact, a survey from the SMB Group found that 53% of small and medium-sized business (SMB) respondents were “very interested” in the impact of AI on their business operations.
Despite the benefits, AI can also pose security risks. According to research from Verizon, 43% of SMB leaders in a recent survey said they’re worried AI tools will expose them to cybersecurity risks. Because of this, business leaders need to take a thoughtful, strategic approach to adopting AI safely in their organizations. While AI tools may help employees become more efficient and productive, it is critical that employees understand the risks and know how to protect the company’s data.
Cybersecurity risks associated with AI
A study from Deep Instinct found that cybersecurity professionals saw a 75% increase in attacks over the course of 12 months, with 85% attributing this rise to bad actors leveraging generative AI. Here’s how AI can increase cybersecurity risks for businesses.
Sophisticated Phishing Emails
More sophisticated phishing emails are perhaps the top cybersecurity concern with AI. Recent research from SlashNext, focused on cybercriminals leveraging generative AI tools and chatbots, found that malicious phishing emails increased 1,265% from Q4 2022 to Q3 2023. Some of the ways AI can be used for phishing emails include:
- Improving email personalization by gathering information from online sources, such as social media and public records, making it more likely for users to open or click on phishing emails.
- Writing error-free, grammatically correct emails using generative AI to appear legitimate.
- Better targeting of individuals in certain industries, professions, or job types.
- Leveraging deepfakes to create fake audio or video recordings that impersonate trusted individuals within an organization.
More Effective Ransomware Attacks
AI can be used by threat actors to identify targets for ransomware attacks more effectively, automate phishing campaigns, and even negotiate ransom demands. Sophisticated AI can also be used to develop new strains of ransomware that are more difficult to detect and remediate. Why does this matter? The Deep Instinct research cited earlier shows that 46% of survey respondents identify ransomware as the greatest threat to their organization’s data security.
Exposing Sensitive Data to AI Tools
In addition to potential attacks from external threat actors, your own employees may unknowingly expose your organization to risks by using AI tools. In fact, research from Cyberhaven found that approximately 11% of information shared by employees with ChatGPT is sensitive data.
It’s important for business leaders and individuals alike to know whether the information they share with AI tools is being saved, either as stored prompts or as training data for the model. If it is, customer data or other confidential information could be exposed if the AI tool suffers a data breach, or simply be discovered by a third party using the same tool. OpenAI has already had to address at least one major glitch that allowed some ChatGPT users to see the titles of other users’ conversations.
You also need to consider your litigation risks. If you are using an AI tool, is it saving all your conversations? Can you manage their retention? If you find yourself in a legal matter requiring e-discovery, how has AI increased the volume of discoverable data? Managing these legal and privacy implications means understanding the service provider’s data-retention policies, considering how these tools affect the volume of discoverable data, and ensuring compliance with relevant regulations and legal requirements.
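One practical safeguard against accidental exposure is to screen prompts for sensitive patterns before they ever leave your network. The sketch below is a minimal illustration of that idea in Python; the pattern names, regexes, and `screen_prompt` function are hypothetical examples, not part of any particular product. A real deployment would use a data loss prevention (DLP) tool with patterns tuned to your organization’s data.

```python
import re

# Illustrative patterns only. Real DLP rules are broader and tuned
# to the organization (account numbers, client names, and so on).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches and report which pattern types were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

redacted, found = screen_prompt(
    "Customer jane.doe@example.com reported an issue; SSN on file is 123-45-6789."
)
print(found)     # categories of sensitive data detected
print(redacted)  # the prompt that would actually be sent to the AI tool
```

A check like this could run in a browser extension, an internal chat proxy, or a pre-submit hook, so employees get a warning (or an automatic redaction) before confidential data reaches an external model.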
There is also a real risk of attacks on the AI tool itself, known as “adversarial machine learning attacks,” as identified in this CSO Online article. These attacks can alter the responses an AI tool gives, or help attackers predict how it works in order to avoid detection.
Risk mitigation strategies and AI's role
While AI can pose some cybersecurity risks, by leveraging the right tools and having risk mitigation strategies in place, business leaders can support safe AI adoption in their organizations.
Start With the Basics
The basics matter and can make a big difference. These 10 Cybersecurity Best Practices for SMBs are the foundation for protecting your organization from any threat. Managing the risks from AI starts with your employees: update your Acceptable Use and Information Security policies to address the risks of AI in your workplace.
Leveraging the Power of AI to Spot Threats
Deloitte has reported that the global market for cyber-AI tools will grow to $19 billion by 2025. While this represents a big investment, we are still in the early stages, where human eyes and intervention play a significant part in cybersecurity services. One concern highlighted in the Arctic Wolf article “The Human – AI Partnership” is the current skills gap: security professionals who are equally versed in cybersecurity and AI are hard to find.
Keep in mind that the criminal only has to get it right once to succeed in their attack, while security professionals must strive to get it right every time. AI holds the promise of getting closer to this ideal and will drive investments to do so.
Another area where AI is making an impact is security awareness training. Historically, this has been static content viewed by employees annually or after failing a phishing simulation. The new generation of solutions is moving into “Human Risk Management,” where AI identifies individual risk factors and delivers targeted guidance to that individual, at that moment, in order to change their behavior.
While most SMBs will not develop AI-powered cybersecurity solutions themselves, they should look for these capabilities in the security solutions they purchase.
Audit Your Data Estate
An important step toward reducing risk is knowing where your sensitive data is located and who has access to it. This process involves identifying, classifying, and applying role-based permissions to all of the data your business collects, stores, and processes. The goal is to restrict sensitive data to those in your organization who need it. This reduces not only the risk of a cybercriminal finding it, but also the risk of accidental or malicious access by an unauthorized employee. Implementing AI in your organization will only increase this risk if you have not properly secured the data beforehand.
Maintain Compliance
As with adopting any new technology, maintaining compliance is essential to reducing risk. Not only do you need to operate within your existing compliance regulations, but new ones specific to AI are also coming. Given the rapid uptick in AI usage and the benefits and risks it presents, President Biden recently issued an Executive Order to define guardrails for the proliferation of artificial intelligence. While it primarily applies to the technology companies building these systems, it is good guidance for what any organization will want to know about the AI it adopts in the coming years.
The National Institute of Standards and Technology (NIST) has issued an Artificial Intelligence Risk Management Framework (AI RMF 1.0) to help organizations designing, developing, deploying, or using AI systems manage the many risks of AI and promote trustworthy, responsible development and use of AI systems. Adhering to NIST guidance as you adopt AI is a sound starting point.
Reap the Benefits of AI While Managing the Risks
We’re still in the early stages of AI adoption and evolution. While we cannot predict the level of gains from AI in the coming years, the impact is likely to be significant. My advice when it comes to securely adopting AI is to be cautious but not afraid. As a business leader, you still need to do your due diligence on any vendor and their solutions. More importantly, you need to make sure your data, whether in files or in a database, is properly secured with the correct access permissions.
If you’re a small- or medium-sized business leader, you may have limited resources or employees to effectively evaluate and manage different AI tools. Systems Engineering’s managed IT services can provide expert human oversight to help you identify and leverage solutions that move your business forward – without sacrificing security or productivity – while staying on top of the evolving technology landscape.
If you are a Systems Engineering client and have questions about this blog, please reach out to your Account Manager. Others, please connect with us at firstname.lastname@example.org or call 888.624.6737.
Mark Benton is the Director of Product Management at Systems Engineering with 30+ years of experience in Information Technology. Mark is responsible for overseeing the onboarding of new products and services for Systems Engineering and its clients.