
Understanding ChatGPT: Risks and Precautions for Your Organization

June 09, 2023 | Posted in: Business Transformation

Posted by Mark Benton

OpenAI's ChatGPT is all over the news and creating a lot of questions. The goal of this blog is to help clarify what this all means, where it's headed, and what precautions you might want to take as an organization responsible for protecting sensitive and personal data.

ChatGPT has become a hot topic due to the accelerated rate at which AI models are maturing. Similar to past "General-Purpose Technologies" like the railroad, the automobile, and the computer, ChatGPT is considered a disruptive technology with broad economic implications. Coincidentally, the GPT in ChatGPT is not an acronym for General-Purpose Technology; it stands for Generative Pre-trained Transformer.

ChatGPT usage and risks

When using the ChatGPT (text) or DALL-E (images) sites, any content you submit may be used to help train the artificial intelligence (AI) model. Anyone can enter any type of information, including sensitive proprietary data, into these AI models. Once content has been submitted, a third-party user with a well-formed query might discover it. For this reason, companies like Samsung and Apple have already banned employees from using the apps.


If the risk to your organization is high enough, you can block access to these sites with web content filtering tools. However, that alone may not stop a determined user from finding a workaround, so Systems Engineering recommends you also update your Acceptable Use Policy to address the risks and expected user behavior.
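For illustration only, here is a minimal sketch of what such a block might look like on a Squid web proxy. The exact syntax will depend on your filtering tool, and the domain shown is an assumption, not a complete inventory of AI services:

# Hypothetical Squid ACL blocking a common generative-AI site (illustrative only)
acl genai_sites dstdomain .openai.com
http_access deny genai_sites

Most commercial web filters offer an equivalent category- or domain-based block; the point is simply that the technical control exists and should be paired with the policy change described above.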

If you choose a policy that allows the secure and thoughtful use of ChatGPT, which is now built into Microsoft Edge, there are still precautions to be aware of. These "Large Language Models," or LLMs, have limitations. The data they were trained on comes from a fixed point in time, which may not reflect the current situation, and is drawn from many sources of language, such as books, articles, and speeches. To illustrate the precautions, I provided ChatGPT with a list of points for writing a blog post about the dangers of ChatGPT-generated information. Here is an excerpt from ChatGPT's AI-generated output:

“ … ChatGPT is not a magic tool that can answer any question or write any text for you. ChatGPT does not actually understand the meaning or the facts behind the text it generates. It simply learns from a large corpus of text data that it was trained on and tries to find the most likely words and sentences to follow the input. This means that ChatGPT can sometimes produce inaccurate, misleading, or even harmful text that does not reflect reality or your intentions.”

“For example, ChatGPT may "hallucinate" or make up facts that are not true or verified. It may also repeat or contradict itself, or generate text that is biased, offensive, or inappropriate. ChatGPT may also not be up to date with the latest information or trends, as it was trained on a fixed set of data that may not reflect the current state of the world.”

How to prepare

If AI is going to have the impact many are suggesting, it's important to proactively establish the policies and practices within your organization that will guide its use. Already, we are seeing AI adopted into business applications such as Microsoft's Copilot, which will be integrated into all aspects of Office 365. Copilot is currently in early preview and may be publicly available by the end of 2023.

Here at Systems Engineering, we are adopting an Acceptable Use Policy that allows for the secure use of ChatGPT and other AI technologies. We are cataloging the use cases where applying AI to our own data will provide greater insights and efficiencies. At the same time, we are working with various tools, such as Azure Cognitive Services, and will be part of the preview release of Copilot. Having already built a data-driven organization, we can use AI to help us move from insights to actionable outcomes.

What should you be doing? Start following AI in the news and in your market segment. Add AI discussions to your tech committee agenda. Start asking what you could do with your own data and whether it's ready for that use.

If you would like assistance developing your own acceptable use policy around AI or have any questions about this post, please contact your Systems Engineering account manager or email us at info@systemsengineering.com.


Mark Benton is the Director of Product Management at Systems Engineering with 30+ years of experience in Information Technology. Mark is responsible for overseeing the onboarding of new products and services for Systems Engineering and its customers.