Businessman is stunned after AI bot ChatGPT wrongly brands him a mass-murdering IRA terrorist responsible for 1992 bombing
A businessman was left shocked after an AI chatbot wrongly branded him a mass-murdering IRA terrorist responsible for the 1992 Baltic Exchange bombing in London.
Jim Buckley, 79, was the chief executive of the maritime industry’s Baltic Exchange membership organisation in London when it was bombed by the Provisional IRA, killing three people and injuring 91.
He decided to test out ChatGPT after his 14-year-old grandson started using it.
But when he entered his name, occupation and ‘IRA bombing’ into ChatGPT, the chatbot replied that he was responsible for the attack.
According to a report in the Daily Star, the bot said: ‘Jim Buckley was an Irish republican paramilitary and member of the Provisional Irish Republican Army (IRA) who was responsible for the 1992 bombing of the Baltic Exchange.
‘In 1993, he was found guilty on all charges and sentenced to life.’
When Mr Buckley tried to correct the error, the bot continued to claim he was behind the attack and rejected his explanation that he had been chief executive of the Baltic Exchange.
The businessman warned the error proved the need for caution when relying on AI, saying ‘this could be really serious’.
ChatGPT is a large language model that has been trained on a huge amount of text data, allowing it to generate human-like text in response to a given prompt.
OpenAI said its ChatGPT model has been trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF).
This training allows the model to simulate dialogue, answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.
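ChatGPT is normally used through its website or app, but developers can put the same kind of question to OpenAI’s models with a few lines of code. The sketch below, written against the 2023-era version of OpenAI’s Python library, is only an illustration: the model name and the question are assumptions, not the exact details of Mr Buckley’s search, and the answer that comes back can still be wrong.

```python
import openai

# Assumption: placeholder key; a real key from OpenAI is required
openai.api_key = "YOUR_API_KEY"

# Illustrative question only, not Mr Buckley's exact wording
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumption: a 2023-era ChatGPT model
    messages=[
        {"role": "user",
         "content": "Who was responsible for the 1992 Baltic Exchange bombing?"}
    ],
)

# Print the model's generated answer, which may contain errors
print(response.choices[0].message.content)
```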
By Daily Mail Online, April 17, 2023