How to stop data leaks when workers use tools like ChatGPT
ChatGPT has become the fastest-growing app in history, estimated to have reached 100 million monthly active users in January 2023, just two months after launching. Across many workplaces, ChatGPT and similar generative AI tools have been quickly adopted to streamline work - enabling workers to write content faster, brainstorm ideas instantly, create images without the need for photographers or designers, conduct research and analysis, and even write, test, or document code. However, the risk of confidential data loss has proven significant, and many organizations are now thinking through how to manage this threat.
Accelerating Workers
With their potential to transform the way we work and significantly increase productivity, generative AI tools like ChatGPT have become very appealing to organizations. Shakked Noy and Whitney Zhang, doctoral students at MIT, conducted a randomized, controlled trial on experienced professionals in fields such as human resources and marketing. They found that tasks that typically took 20-30 minutes to complete could be done up to 10 minutes faster with the help of ChatGPT - a substantial increase in productivity. GitHub also studied the impact on software developers and found that generative AI tools enabled developers to complete entry-level tasks 55% faster than those who did them manually. Equipping workers with generative AI tools like ChatGPT demonstrably lets organizations do more with less - but at what cost?
Data Leaks
Tools like ChatGPT require users to enter prompts and supporting information to generate the output they need; the better the input, the better the tool performs. However, the information fed into ChatGPT to get great responses may also be used as training data for future responses, and consequently could surface to users (including malicious actors) outside the organization. By March 2023, more than 4% of workers had put sensitive corporate data into ChatGPT, and large firms have started to take action: Apple recently joined the likes of Verizon, JPMorgan, Deutsche Bank, and Samsung in banning the use of generative AI tools in the workplace. Preventing employees from inadvertently leaking confidential information while using generative AI tools is a crucial next step for organizations that want to realize the productivity gains these tools enable.
Preventing ChatGPT data leaks
Training workers on the risks of generative AI tools and how to avoid leaks is important, but it is not enough on its own. To stop sensitive data from reaching tools like ChatGPT, organizations need to ensure that the apps used to do work don't allow sensitive information to be downloaded or copied. By preventing information from leaving the source application in the first place, organizations can stop workers from inadvertently sharing information that must be kept confidential. Sonet.io allows organizations to put fine-grained security policies in place without degrading worker productivity.
How to prevent ChatGPT data leaks with Sonet.io
Here are some examples of how you can prevent data leaks with Sonet.io:
Copy and Paste Control
Block workers from copying sensitive data from apps they use to do their work. Set up fine-grained content inspection policies that detect data such as source code, personally identifiable information (PII), or content marked sensitive or confidential. Eliminate the possibility of someone inadvertently copying restricted content that could then be pasted into ChatGPT or other generative AI tools.
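To make the idea concrete, here is a minimal sketch of what pattern-based content inspection can look like. This is illustrative only, not Sonet.io's actual API: the pattern set, policy names, and function name are hypothetical, and production DLP systems layer on many more detectors (checksums, exact-data matching, ML classifiers).

```python
import re

# Hypothetical detection patterns for this sketch; real policies are broader.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
    ),
    "source_code": re.compile(r"(\bdef |\bclass |\bimport |#include|function\s*\()"),
}

def inspect(text: str) -> list[str]:
    """Return the names of every pattern the text matches."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

violations = inspect("Contact jane@corp.com, SSN 123-45-6789 - CONFIDENTIAL")
if violations:
    print("Copy blocked, matched policies:", ", ".join(violations))
```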
Data Download Control
Prevent workers from downloading confidential information with fine-grained content inspection policies that can analyze content in files including PDFs and image files.
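As a rough sketch of what inspecting a file download can involve, the snippet below extracts text from a PDF with the pypdf library and scans it for confidentiality markers. The file name and markers are placeholders, and scanning image files would additionally require OCR (for example, with pytesseract):

```python
import re
from pypdf import PdfReader  # pip install pypdf

CONFIDENTIAL = re.compile(
    r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
)

def pdf_violates_policy(path: str) -> bool:
    """Extract the text of each page and scan it for confidentiality markers."""
    reader = PdfReader(path)
    return any(
        CONFIDENTIAL.search(page.extract_text() or "") for page in reader.pages
    )

# "quarterly_report.pdf" is a placeholder file name for this example.
if pdf_violates_policy("quarterly_report.pdf"):
    print("Download blocked: file contains confidential markers")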
User Session Recording
Recorded user sessions provide a video record of user activity, making it possible to investigate suspected data leaks in detail.
Logging and monitoring
All user and application activity is logged for analysis and monitoring. Know when suspicious activity occurs and spot patterns that might indicate data is being leaked to generative AI tools.
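As an illustration of what this kind of monitoring might involve, the sketch below scans a JSON-lines activity log for paste or upload events targeting known generative AI domains. The log schema, field names, and file name are invented for this example, not Sonet.io's actual log format:

```python
import json

# Domains associated with generative AI tools; extend as needed.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "bard.google.com"}

def suspicious_events(log_path: str):
    """Yield paste/upload events whose destination is a generative AI domain."""
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if (event.get("action") in {"paste", "file_upload"}
                    and event.get("destination_domain") in GENAI_DOMAINS):
                yield event

for event in suspicious_events("activity.log"):
    print(event["timestamp"], event["user"], event["action"],
          "->", event["destination_domain"])
```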
By using Sonet.io, companies can prevent sensitive data and PII from being entered into generative AI tools like ChatGPT, protecting their corporate data. With these safeguards in place, companies can realize the productivity gains these new AI tools provide while keeping confidential information confidential. If you're interested in learning more about how Sonet.io can help secure your remote work environment, please contact us to schedule a demo.