Don’t tell a chatbot anything you want to keep private
By Catherine Thorbecke, CNN
Updated 10:46 AM EDT, Thu April 6, 2023

New York (CNN) —

As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

A ‘black box’ of data
“The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

With ChatGPT, which launched to the public in late November, users can generate essays, stories and song lyrics simply by typing in prompts.

Google and Microsoft have since rolled out AI tools of their own, which work in much the same way and are powered by large language models trained on vast troves of online data.

When users input information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly high concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

Steve Mills, the chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

If the data people input is being used to further train these AI tools, as many of the companies behind the tools have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

A 2,000-word privacy policy
OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people who use its services. It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

The privacy policy states that OpenAI may provide personal information to third parties without further notice to the user, unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that is largely because such policies have become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.

OpenAI also published a new blog post Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”

“These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”

Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts and delete their Bard conversations. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.

“We’re still sort of learning exactly how all this works,” Mills told CNN. “You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”

Mills added that users and developers sometimes don’t realize the privacy risks lurking in new technologies until it’s too late. He cited early autocomplete features, some of which had unintended consequences such as completing a Social Security number a user had begun typing in, often to the user’s alarm and surprise.

Ultimately, Mills said, “My view of it right now, is you should not put anything into these tools you don’t want to assume is going to be shared with others.”
