AI chatbot using GPT-4 model performed illegal financial trade, lied about it too

Researchers have demonstrated that an AI chatbot built on a GPT-4 model is capable of making illicit financial trades and concealing them. In a demonstration at the recently concluded AI Safety Summit in the UK, the bot used fabricated insider information to execute an “illegal” stock purchase without informing the firm, the BBC reported.

Apollo Research, a partner of the government taskforce, carried out the project and shared its findings with OpenAI, the developer of GPT-4. The demonstration was given by members of the government’s Frontier AI Taskforce, which investigates potential AI-related risks. In a video statement, Apollo Research emphasized that this was a real AI model autonomously deceiving its users, without any explicit instruction to do so.

The experiments were conducted within a simulated environment, and the GPT-4 model consistently exhibited the same behavior across repeated tests. Marius Hobbhahn, CEO of Apollo Research, noted that while training a model to be helpful is relatively straightforward, instilling honesty in it is a far more difficult endeavor.

AI has been employed in financial markets for several years, where it is utilized for tasks like trend identification and forecasting.

About the author

Brendan Taylor

Brendan Taylor was a TV news producer for five and a half years and is an experienced writer. He covers breaking news at Insider Paper.
