That, according to David Foster, partner at Applied Data Science Partners, a data science and AI consultancy based in London, will be “critical” for getting companies to use the API.
Foster thinks the fear that clients' personal information or business-critical data could be swallowed up into ChatGPT's training data had kept companies from adopting the tool to date. "It shows a lot of commitment from OpenAI to basically state, 'Look, you can use this now, risk-free for your company. You're not going to find your company's data turning up in that general model,'" he says.
This policy change means that companies can feel in control of their data, rather than having to trust a third party—OpenAI—to manage where it goes and how it's used, according to Foster. "You were building this stuff effectively on somebody else's architecture, according to somebody else's data usage policy," he says.
This, combined with the falling price of access to large language models, means that there will likely be a proliferation of AI chatbots in the near future.
API access to ChatGPT (or, as OpenAI officially calls it, GPT-3.5) is 10 times cheaper than access to OpenAI's lower-powered GPT-3 API, launched in June 2020, which could generate convincing language when prompted but lacked ChatGPT's conversational strength.
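The scale of that price drop can be illustrated with a quick back-of-the-envelope calculation. The per-token rates below are OpenAI's published prices at the March 2023 launch (gpt-3.5-turbo at $0.002 and the GPT-3-tier text-davinci-003 at $0.02 per 1,000 tokens); the traffic figure is an invented example, not from the article.

```python
# Rough cost comparison using OpenAI's published per-1K-token prices
# at the ChatGPT API launch (March 2023). Prices may have changed since.
GPT35_TURBO_PRICE = 0.002  # USD per 1K tokens, gpt-3.5-turbo
DAVINCI_PRICE = 0.02       # USD per 1K tokens, text-davinci-003 (GPT-3 tier)

def monthly_cost(tokens_per_month: int, price_per_1k: float) -> float:
    """Estimate monthly spend in USD for a given token volume."""
    return tokens_per_month / 1000 * price_per_1k

# A hypothetical chatbot handling 50 million tokens a month:
tokens = 50_000_000
print(monthly_cost(tokens, DAVINCI_PRICE))      # 1000.0 — GPT-3 pricing
print(monthly_cost(tokens, GPT35_TURBO_PRICE))  # 100.0 — ChatGPT API pricing
```

At the same traffic, the ChatGPT API bill comes out to a tenth of the GPT-3 one, which is the 10x gap driving the rush of integrations the article describes.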
“It’s much cheaper and much faster,” says Alex Volkov, founder of the Targum language translator for videos, which was built unofficially off the back of ChatGPT at a December 2022 hackathon. “That doesn’t happen usually. With the API world, usually prices go up.”
That could change the economics of AI for many businesses, and could spark a new rush of innovation.
“It’s an amazing time to be a founder,” QuickVid’s Habib says. “Because of how cheap it is and how easy it is to integrate, every app out there is going to have some type of chat interface or LLM [large language model] integration … People are going to have to get very used to talking to AI.”
By Wired, March 25, 2023