ChatGPT may seem like a useful tool for a range of work tasks. But before you share anything with the chatbot, keep in mind that anything you submit could be used to train the system and could surface in its responses to other users.
That is a risk several Samsung employees apparently overlooked before sharing confidential information with the chatbot.
According to The Economist Korea, after Samsung's semiconductor division allowed engineers to use ChatGPT, workers leaked confidential information to it on at least three occasions. One employee asked the chatbot to check sensitive database source code for errors, another requested code optimization, and a third fed a recorded meeting into ChatGPT and asked it to generate minutes.
After discovering the lapses, Samsung reportedly tried to limit the scope of future breaches by restricting the length of employees' ChatGPT prompts to a kilobyte, or 1,024 characters of text. The company is also said to be investigating the three employees involved and building its own chatbot to prevent similar incidents.
Samsung has yet to respond to a request for comment.
ChatGPT's data policy states that user prompts are used to train its models unless users explicitly opt out. OpenAI, the chatbot's owner, urges users not to share confidential information with ChatGPT, as it is unable to delete specific prompts from a user's history. The only way to remove personally identifiable information from ChatGPT is to delete the account entirely, a process that can take up to four weeks.
The Samsung incident is another reminder to exercise caution when using chatbots, as you should with all your online activity. You never know where your data might end up.