‘Sycophantic’ AI chatbots tell users what they want to hear, study shows
Scientists warn of ‘insidious risks’ of increasingly popular technology that affirms even harmful behaviour

Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when harmful. (…)
Source: The Guardian (South & Central Asia)