Mixing Guardian content into the training material for ChatGPT (Letters, 4 September) would be like putting some healthy food into a poisonous batch of baby food in the hope that it will become safe to eat. The only solution is to get rid of the poison. In practice, this may spell the end of ChatGPT as a serious intellectual tool, since poisonous content is impossible to exclude from its training material. ChatGPT may be an unstoppable force, but maybe not. Let's keep a clear head about it. There are many other uses of AI that have clear benefits.
ChatGPT cannot be trusted. To test the accuracy of the tool, I thought I would conduct my own research using the data upon which I am the world’s expert: my own life and work.
Results include: I was born and raised in Nottingham (I wasn't), I studied for a bachelor's degree in psychology at the University of Nottingham (I didn't), my PhD was in nursing (it wasn't), and finally, I tragically passed away in 2019 (oops – NB retirement is not a euphemism for dying).
ChatGPT was flattering with regard to my contribution to society, but I sincerely hope my family do not use it to write my obituary when the time (eventually) comes.
Dr Theo Stickley
AI is a passing fad – we will be looking at these nonsense predictions about it becoming the next big thing and laughing about what a wild time the early 2020s were once the pandemic cleared off. Most Facebook users I’ve spoken to recognise on some level what a toxic platform it is – I doubt that they would want to uniformly hand that power to anything else again for fear of it being even worse. Congratulations on blocking AI scrapers from your website.