Bitget and SlowMist Unveil Emerging Security Risks in AI Trading Execution

Bitget and SlowMist's research report reveals new security risks posed by AI systems in autonomous trading, highlighting systemic security challenges and response strategies in the era of intelligent agents.

Bitget and SlowMist have jointly released a research report exploring the risks that arise as artificial intelligence systems begin to autonomously execute trades. As trading enters this "intelligent agent" phase, systems are not limited to analysis but also involve actual operations, creating a new category of risk that traditional security models are unprepared for.

The report emphasizes that when AI transitions from an advisory role to execution, errors and vulnerabilities are no longer isolated incidents; they can trigger immediate and irreversible financial consequences. In the crypto market, trades settle instantly, and once an agent is compromised or misled, it can act faster than any human could intervene.

“AI is no longer just interpreting the market; it is participating in it,” said Bitget CEO Gracy Chen. “This fundamentally changes the nature of risk. The key is no longer how intelligent these systems are, but how secure they are in operation.”


According to the research, agent-based systems introduce new attack surfaces at multiple levels, from model inputs to execution paths. Prompt injection can influence decision-making, malicious plugins can alter behavior, and overly permissive APIs can expose capital to unexpected risks.

These risks are exacerbated by the continuous operation of autonomous agents, which can function without direct user oversight.

The report views these as systemic risks rather than isolated vulnerabilities. Security in the agent era must go beyond application-level protections and delve into the architecture of how AI systems interact with capital.


Bitget's response reflects this shift. The platform separates intelligence, execution, and asset authorization into different layers, reducing the likelihood of unexpected trades triggered by any single point of failure. The permission structure follows the principle of least privilege, introducing trade simulation and validation processes before final execution. These controls aim to ensure that even when AI agents operate autonomously, their operational scope remains defined and limited.
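The layered approach described above can be sketched in code. This is an illustrative example only: the names (`TradeIntent`, `PermissionScope`, and so on) are hypothetical and do not reflect Bitget's actual implementation; it simply shows how a least-privilege scope check and a pre-execution simulation step can gate an agent's trades.

```python
from dataclasses import dataclass

@dataclass
class TradeIntent:
    symbol: str
    side: str        # "buy" or "sell"
    notional: float  # trade size in quote currency

@dataclass
class PermissionScope:
    allowed_symbols: set
    max_notional: float

def validate(intent: TradeIntent, scope: PermissionScope) -> bool:
    """Least-privilege check: reject anything outside the granted scope."""
    return (intent.symbol in scope.allowed_symbols
            and intent.side in ("buy", "sell")
            and 0 < intent.notional <= scope.max_notional)

def simulate(intent: TradeIntent) -> bool:
    """Placeholder for a dry-run against the venue before real execution."""
    return intent.notional > 0

def execute_if_safe(intent: TradeIntent, scope: PermissionScope) -> str:
    """Run the trade only if both the scope check and the dry-run pass."""
    if not validate(intent, scope):
        return "rejected: outside permission scope"
    if not simulate(intent):
        return "rejected: failed simulation"
    return f"executed {intent.side} {intent.notional} {intent.symbol}"

scope = PermissionScope(allowed_symbols={"BTCUSDT"}, max_notional=1_000.0)
print(execute_if_safe(TradeIntent("BTCUSDT", "buy", 500.0), scope))
print(execute_if_safe(TradeIntent("DOGEUSDT", "buy", 500.0), scope))
```

The point of the sketch is that even a compromised decision layer cannot exceed the capital and asset boundaries granted to the execution layer.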

SlowMist's analysis further underscores the necessity of a closed-loop security model, which must address risks before, during, and after execution. Continuous monitoring, limited permissions, and verifiable transaction processes form the foundation of this framework, transforming security from a passive process into an embedded system design.
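A closed-loop model of this kind can be expressed as a wrapper that checks a trade before, during, and after execution. The sketch below is a loose illustration under assumed names, not SlowMist's framework: `pre_check` stands in for permission and limit checks, `execute` for the actual fill, and `post_verify` for reconciling the outcome against the original intent.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

def closed_loop_trade(intent, pre_check, execute, post_verify):
    """Gate a trade before execution and verify its outcome afterward."""
    if not pre_check(intent):            # before: permission / limit checks
        log.warning("blocked pre-execution: %s", intent)
        return None
    result = execute(intent)             # during: the actual (or simulated) fill
    if not post_verify(intent, result):  # after: did execution match intent?
        log.error("post-trade mismatch: %s vs %s", intent, result)
        raise RuntimeError("execution diverged from intent")
    return result

# Toy usage with in-memory stand-ins for a real venue.
intent = {"symbol": "ETHUSDT", "qty": 1.0}
result = closed_loop_trade(
    intent,
    pre_check=lambda i: i["qty"] <= 2.0,
    execute=lambda i: {"symbol": i["symbol"], "filled": i["qty"]},
    post_verify=lambda i, r: r["filled"] == i["qty"],
)
print(result)
```

Embedding the monitoring and verification steps in the execution path itself, rather than bolting them on afterward, is what makes the loop "closed": no trade reaches the market without passing through all three stages.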

The findings point to a broader reality: as AI agents become more deeply integrated into trading, asset management, and on-chain activity, the boundary between user intent and system execution grows increasingly blurred. In this environment, reliability is determined not only by performance but also by the system's ability to operate within controlled constraints.

As financial activities become more automated and interconnected, infrastructure must be designed not only for speed and accessibility but also for control and resilience.

This joint report provides important reference points for platforms and developers.
