Still Waters Posted August 8 #1
In recent months, many people will have experimented with chatbots like ChatGPT. Useful though they can be, there's also no shortage of examples of them producing, ahem, erroneous information. Now, a group of scientists from the University of Oxford is asking: is there a legal pathway by which we could require these chatbots to tell us the truth?
https://www.iflscience.com/can-we-legally-require-ai-chatbots-to-tell-the-truth-75461
The study is published in the journal Royal Society Open Science.
Essan Posted August 8 #2
[As far as I know] AIs like ChatGPT only reproduce what someone else has posted on the internet. They cannot think for themselves, nor can they differentiate between truth and falsehood.
L.A.T.1961 Posted August 8 #3
I recently asked it for the new UK interest rate, and it said the rate had gone up to 5%, when in fact it had come down by 0.25% to 5%. How can it get something as basic as that wrong?