The explosion of consumer-facing tools that offer generative AI has created plenty of debate: These tools promise to transform the ways in which we live and work while also raising fundamental questions about how we can adapt to a world in which they’re extensively used for just about anything.
As with any new technology riding a wave of initial popularity and interest, it pays to be careful in the way you use these AI generators and bots—in particular, in how much privacy and security you’re giving up in return for being able to use them.
It’s worth putting some guardrails in place right at the start of your journey with these tools, or indeed deciding not to deal with them at all, based on how your data is collected and processed. Here’s what you need to look out for and the ways in which you can get some control back.
Always Check the Privacy Policy Before Use
Checking the terms and conditions of apps before using them is a chore but worth the effort—you want to know what you’re agreeing to. As is the norm everywhere from social media to travel planning, using an app often means giving the company behind it rights to everything you put in—and sometimes to everything it can learn about you, and then some.
The OpenAI privacy policy, for example, is published on the company’s website, along with a separate page covering data collection. By default, anything you talk to ChatGPT about could be used to help its underlying large language model (LLM) “learn about language and how to understand and respond to it,” although personal information is not used “to build profiles about people, to contact them, to advertise to them, to try to sell them anything, or to sell the information itself.”
Personal information may also be used to improve OpenAI’s services and to develop new programs and services. In short, it has access to everything you do on DALL-E or ChatGPT, and you’re trusting OpenAI not to do anything shady with it (and to effectively protect its servers against hacking attempts).
It’s a similar story with Google’s privacy policy, which is published on its website. A separate notice covers Google Bard: The information you input into the chatbot will be collected “to provide, improve, and develop Google products and services and machine learning technologies.” As with any data Google collects about you, Bard data may be used to personalize the ads you see.
Watch What You Share
Essentially, anything you input into or produce with an AI tool is likely to be used to refine the AI further, and may then be used as the developer sees fit. With that in mind—and the constant threat of a data breach that can never be fully ruled out—it pays to be circumspect about what you enter into these engines.
By Wired, July 16, 2023