AI Agents that Automate Investments Carry Serious Security RisksDate:
05/14/2025Tag: #ai #aiagent #investments #psd #powerelectronics AI Agents that Automate Investments Carry Serious Security RisksWe’ve been slowly ceding more of our responsibilities (and decision-making) to AI, so if this surprises you, I’d like to join you under that rock you’ve been living under. ArsTechnica points to research that lays out a rather disturbing scenario – we’ve already begun automating stock and crypto sales, along with software-defined contracts. And AI adds a new dimension to it, introducing a degree of artificial judgment to our investment portfolios. AI bots could do all that and more, calculating risk and weighing percentages far better than we ever could, and squaring that with our individual, bespoke habits. But what happens when hackers change a line or two of code, compelling the bots to make deposits in external accounts? An especially malicious adversary could even intentionally tank a company’s market value (or damage the market, itself, with malignant trades) – and with a more complex paper trail than contemporary market manipulation. Take ElizaOS, which allows AI agents to perform blockchain-based transactions based on a user’s predefined rules. And sadly, it’s incredibly vulnerable to hacks and abuse – malicious actors could initiate large language model attacks known as prompt injections to implant false memories in these AI agents. “Our findings show that while existing prompt-based defenses can mitigate surface-level manipulation, they are largely ineffective against more sophisticated adversaries capable of corrupting stored context,” said researchers from Princeton University. “These vulnerabilities are not only theoretical but carry real-world consequences, particularly in multi-user or decentralized settings where agent context may be exposed or modifiable.” While agents like this are relatively new, and future versions may fix some of the intrinsic vulnerabilities, they currently present some serious security concerns (particularly since so many users can interact with ElizaOS at one). Amongst other things, the researchers suggest restricting the AI agents’ capabilities to a small set of pre-approved actions. |