AI tools like ChatGPT could make people more dishonest, researchers warn

As AI adoption rises around the world, the new technology brings certain risks with it. A new study published in the journal Nature points to some of those risks.

The researchers examined what happens when people delegate tasks to artificial intelligence tools and how that delegation affects dishonest behaviour. The study found that people find it easier to tell a machine to cheat for them, and that AI tools are more than happy to comply because they lack the psychological barriers that stop humans from carrying out such tasks.

The researchers argue that machines reduce the “moral cost of dishonesty, often by providing plausible deniability” for the humans operating them. They also note that while machines are more often than not ready to comply with such requests, human agents are far less willing, because they face “moral costs that are not necessarily offset by financial benefits.”

“As machine agents become widely accessible to anyone with an internet connection, individuals will be able to delegate a broad range of tasks without specialized access or technical expertise. This shift may fuel a surge in unethical behaviour, not out of malice, but because the moral and practical barriers to unethical delegation are substantially lowered,” the researchers write in the paper.

“Our results establish that people are more likely to request unethical behaviour from machines than to engage in the same unethical behaviour themselves,” they added.

Humans vs. LLMs:

The researchers note that human agents complied with only 25% to 40% of the unethical instructions, even when refusing came at a personal cost to them. In contrast, the four AI models the researchers tested (GPT-4, GPT-4o, Claude 3.5 Sonnet, and Llama 3.3) complied with 60% to 95% of these instructions across two tasks: tax evasion and die-roll reporting.

While AI companies equip their new models with guardrails to prevent this kind of behaviour, the researchers found those guardrails “insufficient” against unethical use.

They argue for stronger technical guardrails along with a “broader management framework that integrates machine design with social and regulatory oversight.”


