The explosion of consumer-facing tools that offer generative AI has created plenty of debate: These tools promise to transform the ways in which we live and work while also raising fundamental questions about how we can adapt to a world in which they’re extensively used for just about anything.
As with any new technology riding a wave of initial popularity and interest, it pays to be careful in the way you use these AI generators and bots—in particular, in how much privacy and security you’re giving up in return for being able to use them.
It’s worth putting some guardrails in place right at the start of your journey with these tools, or indeed deciding not to deal with them at all, based on how your data is collected and processed. Here’s what you need to look out for and the ways in which you can get some control back.
Checking the terms and conditions of apps before using them is a chore but worth the effort—you want to know what you’re agreeing to. As is the norm everywhere from social media to travel planning, using an app often means giving the company behind it the rights to everything you put in, and sometimes everything they can learn about you and then some.
OpenAI's privacy policy, for example, notes that personal information may be used to improve its services and to develop new programs and services. In short, the company has access to everything you do in DALL-E or ChatGPT, and you're trusting OpenAI not to do anything shady with it (and to effectively protect its servers against hacking attempts).
Watch What You Share
Essentially, anything you input into or produce with an AI tool is likely to be used to refine the AI further and then in whatever ways the developer sees fit. With that in mind—and given the constant threat of a data breach, which can never be fully ruled out—it pays to be circumspect about what you enter into these engines.
By Wired, July 16, 2023