AI agents are becoming persistent, autonomous, and deeply integrated into everyday workflows. But once they can act on our behalf, harder questions arise: who controls the data, execution, and trust layer?
—
Today, $NEAR AI has offered an answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously without exposing sensitive data, credentials, or user intent.
A runtime built for autonomous AI, without blind trust
IronClaw builds on the original OpenClaw vision but fundamentally strengthens it with cryptographic guarantees. Written in Rust and deployed inside an encrypted trusted execution environment (TEE), the $NEAR AI Cloud runtime lets AI agents access tools, maintain memory, and perform actions on your behalf, all within a tightly controlled security perimeter.
Rather than asking users to trust an opaque platform, IronClaw shifts the trust model to verifiable execution: data and inference remain protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security through architecture, not add-ons
IronClaw is designed around the core principle of defense in depth.
All untrusted and third-party tools run in their own sandbox, restricted to only the resources they are explicitly allowed to access. Network calls are limited to approved destinations. Sensitive credentials are injected only at runtime and are never exposed directly to tools or external services.
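The deny-by-default pattern described above can be sketched as a simple per-tool policy check. This is a generic illustration in Python, not IronClaw's actual API or policy format (the class and method names here are hypothetical):

```python
from urllib.parse import urlparse

class SandboxPolicy:
    """Illustrative per-tool policy: explicit allowlists for network
    destinations and local resources; everything else is denied."""

    def __init__(self, allowed_hosts, allowed_resources):
        self.allowed_hosts = set(allowed_hosts)
        self.allowed_resources = set(allowed_resources)

    def check_network(self, url):
        # Deny by default: only explicitly approved hosts pass.
        host = urlparse(url).hostname
        if host not in self.allowed_hosts:
            raise PermissionError(f"network destination not approved: {host}")

    def check_resource(self, name):
        if name not in self.allowed_resources:
            raise PermissionError(f"resource not granted: {name}")

# Example: a third-party tool may only reach its own API and read the calendar.
policy = SandboxPolicy(allowed_hosts={"api.example.com"},
                       allowed_resources={"calendar:read"})
policy.check_network("https://api.example.com/v1/events")  # allowed
policy.check_resource("calendar:read")                     # allowed
```

Any destination or resource outside the allowlist raises an error, which is the essence of the sandbox model: capabilities are granted, never assumed.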
Agent activity is continuously monitored to detect exploits, including protection against prompt injection attacks and unauthorized resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. What matters is what IronClaw does not collect: no telemetry or analytics, guaranteeing that execution remains fully private.
Full audit logs give users visibility into every interaction with the tool, providing transparency without surveillance.
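One common way to make such audit logs trustworthy is a tamper-evident hash chain, where each entry commits to the one before it. The sketch below is a generic illustration of that pattern; IronClaw's actual log format is not specified here:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log: each entry embeds the previous entry's hash,
    so modifying any entry breaks verification of the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def record(self, actor, action, detail):
        entry = {"ts": time.time(), "actor": actor,
                 "action": action, "detail": detail, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)

    def verify(self):
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("agent-1", "tool_call", "calendar:read")
log.record("agent-1", "network", "GET https://api.example.com/v1/events")
```

With this structure, a user (or auditor) can replay the chain and detect any retroactive edit, which is what makes audit logs useful as a transparency mechanism rather than a mere activity feed.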
Deploy privacy-first AI now
IronClaw launches with a free starter tier that includes one hosted agent instance running under the hood on $NEAR AI's secure environment and inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.
The goal is not just to make agents safer, but to make them genuinely deployable without forcing teams to choose between convenience and control.
Why this matters
As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw points in a different direction: local control, verifiable execution, and privacy by default.
Illia Polosukhin, co-founder of the $NEAR protocol and founder of $NEAR AI, describes IronClaw as an "agent harness designed for security": a full-stack trust model reaching from $NEAR's blockchain infrastructure to the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw builds security into the runtime itself, combining confidential inference, cryptographic verification, and hardware-backed execution in one system.
The foundation of responsible agentic AI
George Zeng, Chief Product Officer and General Manager of $NEAR AI, puts the announcement more bluntly:
"AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw aims to fill that gap, giving developers and enterprises the confidence to deploy always-on agents without giving up transparency or control."
IronClaw is available now, and the code can be accessed on the $NEAR AI GitHub.
As AI moves from tool to actor, IronClaw takes a clear position: autonomy should not come at the expense of privacy, nor should intelligence require blind trust.
