Samsung Cracks Down on Staff Use of AI, Citing Security Risks

Samsung points to ChatGPT as a security risk following an accidental source-code leak

Samsung has issued a memo banning employee use of generative AI services for the indefinite future, citing security risks as the chief reason after some of the company's own engineers unintentionally leaked source code by uploading it to ChatGPT last month.

The memo, first reported by Bloomberg News, does not specify what the leaked code contained. All that is known is that the breach included material Samsung did not want entering ChatGPT's data pool, where information cannot easily be retrieved, redacted, or otherwise stopped from spreading.

The generative AI ban extends across internal networks and company-owned devices, including tablets and phones. Employees may still use AI tools on personal devices, so long as that use does not risk exposing Samsung intellectual property. Anyone caught violating the new policy faces disciplinary action, up to and including termination.

To offset the ban, Samsung is developing its own internal AI tools for translation and document summarization, and it is working on ways to prevent company data from being uploaded to external services.

The AI ban does not appear to be a long-term solution, as evidenced by a passage in the memo: “HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI.”

In Samsung’s case, the concern is that sensitive information could be shared externally and compromise business plans. But generative AI is causing ripples elsewhere in the tech world, especially in the consumer space.

While major corporations have one set of worries about ChatGPT and similar tools, generative AI is also stirring the pot on social media, pressuring apps like TikTok to find ways to label which content is AI-generated and which is authentically produced by humans. Without such labels, the average listener has no way of knowing whether Drake actually sang a song or whether the track was the work of an AI-born Fake Drake.
