Chatbots Got Big—and Their Ethical Red Flags Got Bigger

Irene Solaiman, policy director at the open-source AI startup Hugging Face, believes outside pressure can help hold AI systems like ChatGPT to account. She is working with people in academia and industry to create ways for nonexperts to test text and image generators for bias and other problems. If outsiders can probe AI systems, companies will no longer have an excuse to avoid testing for things like skewed outputs or climate impacts, says Solaiman, who previously worked at OpenAI on reducing the toxicity of its systems.

Each evaluation is a window into an AI model, Solaiman says, not a perfect readout of how it will always perform. But she hopes to make it possible to identify and stop harms that AI can cause because alarming cases have already arisen, including players of the game AI Dungeon using GPT-3 to generate text describing sex scenes involving children. “That’s an extreme case of what we can’t afford to let happen,” Solaiman says.

Solaiman’s latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022. That trend accelerated with Alphabet’s AI teams at Google and DeepMind, and more widely across companies working on AI after the staged release of GPT-2. Companies that guard their breakthroughs as trade secrets can also make the forefront of AI less accessible for marginalized researchers with few resources, Solaiman says.

As more money gets shoveled into large language models, closed releases are reversing the trend seen throughout the history of the field of natural language processing. Researchers have traditionally shared details about training data sets, parameter weights, and code to promote reproducibility of results.

“We have increasingly little knowledge about what data these systems were trained on or how they were evaluated, especially for the most powerful systems being released as products,” says Alex Tamkin, a Stanford University PhD student whose work focuses on large language models.

He credits people in the field of AI ethics with raising public consciousness about why it’s dangerous to move fast and break things when technology is deployed to billions of people. Without that work in recent years, things could be a lot worse.

In fall 2020, Tamkin co-led a symposium with OpenAI’s policy director, Miles Brundage, about the societal impact of large language models. The interdisciplinary group emphasized the need for industry leaders to set ethical standards and take steps like running bias evaluations before deployment and avoiding certain use cases.

Tamkin believes external AI auditing services need to grow alongside the companies building on AI because internal evaluations tend to fall short. He believes participatory methods of evaluation that include community members and other stakeholders have great potential to increase democratic participation in the creation of AI models.

Merve Hickok, a research director at an AI ethics and policy center at the University of Michigan, says trying to get companies to put aside or puncture AI hype, regulate themselves, and adopt ethics principles isn’t enough. Protecting human rights means moving past conversations about what’s ethical and into conversations about what’s legal, she says.

Hickok and Alex Hanna of the Distributed AI Research Institute (DAIR) are both watching the European Union finalize its AI Act this year to see how it treats models that generate text and imagery. Hickok says she is especially interested in seeing how European lawmakers treat liability for harm involving models created by companies like Google, Microsoft, and OpenAI.

“Some things need to be mandated because we have seen over and over again that if not mandated, these companies continue to break things and continue to push for profit over rights, and profit over communities,” Hickok says.

While policy gets hashed out in Brussels, the stakes remain high. A day after Google’s Bard chatbot gave an incorrect answer in its launch demo, a drop in Alphabet’s stock price wiped about $100 billion off the company’s market value. “It’s the first time I’ve seen this destruction of wealth because of a large language model error on that scale,” says Hanna. She is not optimistic the episode will convince the company to slow its rush to launch, however. “My guess is that it’s not really going to be a cautionary tale.”

Updated 2/16/2023, 12:15 pm EST: A previous version of this article misspelled Merve Hickok’s name.

By Wired, March 26, 2023
