Until Threads, ChatGPT was the fastest-growing app in history, and with all of its success it is no surprise that users are turning to the Artificial Intelligence (AI) model for help at work. ChatGPT can create sales projections, find flaws in code, write presentations and fine-tune business proposals; but to do so, it needs you to ask the right question, or to input the data.
According to a recent study cited in CyberNews, ‘15% of workers are using ChatGPT and other generative AI tools at work, and nearly 25% of those visits include a data paste.’ In other words, employees are copying business data and pasting it into ChatGPT to organise, correct or create something new, and this is where problems can arise.
There are two main issues with using ChatGPT at work: accuracy and confidentiality.
Why ChatGPT can’t be (entirely) trusted
ChatGPT generates its responses to user queries by looking at similar questions it has been asked before, as well as the data it was trained on. According to ChatGPT, ‘...it generates responses by predicting what comes next in a sequence of text, drawing from its understanding of language and context.’ ChatGPT’s answers also depend on the user asking the question: how much detail that person provides, and how specific their query is. Put simply, ChatGPT is excellent at giving the answer it believes the user wants, which doesn’t always mean that the response is accurate.
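The ‘predicting what comes next’ idea can be illustrated with a toy sketch. This hypothetical bigram model is vastly simpler than ChatGPT’s actual architecture (which uses neural networks over tokens, and whose details are not public); it only shows the basic mechanic of choosing the most likely next word from training data.

```python
from collections import Counter, defaultdict

# A drastically simplified, hypothetical sketch of next-word prediction.
# It counts which word most often follows each word in the training text,
# then predicts by frequency alone.
def train_bigrams(text):
    words = text.lower().split()
    table = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

def predict_next(table, word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = table.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → "cat" ("cat" follows "the" twice)
```

Note that nothing in this process checks whether the predicted continuation is factually correct; the model simply reproduces patterns in its training data, which is one reason a fluent, confident-sounding answer can still be wrong.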
Having been created and trained by people, ChatGPT can reproduce the same biases that real people have. It can generate misinformation, incomplete responses and incorrect answers, and in some cases will fabricate information and present it as fact. If you’re using ChatGPT at work, this means that you could be depending on information that is false or misleading.
ChatGPT is bad at keeping secrets
By default, all the information you give to ChatGPT (every prompt, question or comment) goes towards improving ChatGPT and other AI models. This means that the information you upload to ChatGPT, whether that’s code you’re trying to fix, sales figures you’re processing or personally identifiable data, is automatically stored by OpenAI.
OpenAI has introduced Data Controls, found in ChatGPT’s settings. Here you can turn off Chat History & Training so that the information you give to ChatGPT isn’t used to train its AI models. OpenAI is also working on ChatGPT Business, which will ‘opt end-users out of model training by default’. At this time, it’s unclear whether this will be a secure, GDPR-compliant platform for businesses.
Even if you opt out of Chat History & Training, uploading sensitive or confidential data to ChatGPT carries the risk of a data breach in transmission. Hackers, third parties and OpenAI employees could gain access to user conversations. You could also be in breach of GDPR, ICO guidelines, your organisation’s security policies or your client’s security policies. Ultimately, if information is confidential, sensitive or includes personally identifiable data, it should not be shared with ChatGPT.
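If text must be pasted into a tool like ChatGPT at all, one practical safeguard is to redact obvious identifiers first. Below is a minimal sketch, assuming simple regex patterns (for email addresses and UK-style phone numbers) are representative of the data involved; real redaction should use a proper data-loss-prevention tool rather than hand-rolled patterns.

```python
import re

# Illustrative redaction sketch: masks email addresses and UK-style phone
# numbers before a prompt leaves the organisation. The patterns are
# assumptions for demonstration, not a substitute for a real DLP solution.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text):
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Chase up jane.doe@example.com on 07700 900123 about the Q3 figures."
print(redact(prompt))
# → "Chase up [EMAIL REDACTED] on [PHONE REDACTED] about the Q3 figures."
```

Redaction like this reduces, but does not remove, the risk: context alone can still identify people or deals, so the safest policy remains not pasting confidential material at all.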
Should businesses have a ChatGPT policy?
Some businesses are so concerned about the potential for data breaches through ChatGPT that they’ve prohibited employees from using the AI altogether. After an ‘accidental leak of sensitive internal source code by an engineer’, Samsung Electronics banned the use of ChatGPT and any other ‘generative AI’ tools. The alternative is to set out a clear policy for employees using apps like ChatGPT, clarifying what kind of information can and cannot be uploaded to the AI model, or to wait for ChatGPT Business.