Industry observers believe that Ritual, an AI infrastructure startup, could be instrumental in tackling emerging applications in the crypto space, such as autonomously adjusting risk parameters for lending protocols based on current market conditions.
The decentralized artificial intelligence network recently unveiled a $25 million Series A funding round led by Archetype, with participation from Balaji Srinivasan, Accomplice, Robot Ventures, Accel, Dialectic, Anagram, Avra, and Hypersphere. The funding will go toward expanding Ritual's developer network.
Ease of access is key
While AI adoption continues to rise across business sectors, challenges such as high computational costs, limited hardware availability, and reliance on centralized APIs prevent the current AI stack from reaching its full potential.
Ritual's overarching vision is to serve as the central hub for AI within the Web3 ecosystem. The project aims to grow into a modular suite of execution layers that interoperate with other foundational infrastructure components, allowing any blockchain protocol or application to use Ritual as an AI co-processor.
The link between AI and crypto
Integrating AI models into the crypto realm could enable novel applications, such as automatically adjusting risk parameters for lending protocols in response to real-time market conditions. To that end, Ritual has published a protocol diagram showing adaptable execution layers built around AI models. A GMP layer, spanning layer 1s, rollups, and sovereign chains, bridges existing blockchains to the Ritual Superchain, which functions as an AI co-processor for all blockchains.
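To make the lending example concrete, here is a minimal, hypothetical sketch of the kind of logic an AI co-processor could run off-chain: estimate market volatility from recent prices, then recommend a tighter loan-to-value cap when markets turn turbulent. The function names, parameters, and thresholds are illustrative assumptions, not part of Ritual's published interface.

```python
import math

# Hypothetical sketch: tuning a lending protocol's risk parameters from
# market data. All names and thresholds are illustrative assumptions.

def realized_volatility(prices):
    """Annualized volatility estimate from a series of daily closing prices."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / len(returns)
    return math.sqrt(variance) * math.sqrt(365)

def recommend_ltv(volatility, base_ltv=0.80, floor=0.40):
    """Scale the maximum loan-to-value ratio down as volatility rises,
    never dropping below a protocol-defined floor."""
    ltv = base_ltv * (1.0 - min(volatility, 1.0))
    return max(round(ltv, 2), floor)
```

In practice, an output like the recommended loan-to-value cap would be delivered back on-chain, where the protocol could apply it through governance or an automated keeper.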
Lacking knowledge
The lack of clarity in the recent executive order on AI safety issued by the Biden administration has raised concerns among the AI community. The order introduced six new standards for AI safety and security, including broad mandates such as requiring companies developing any foundation model that poses significant risks to national security, economic security, or public health and safety to share safety test results with authorities. It also calls for accelerating the development of privacy-preserving techniques.
Still, many believe the Biden administration is ill-equipped to keep pace with the rapid technological advancements the world is going through. Congressman Ted Lieu stated that the US government is still in learning mode when it comes to new technologies like AI, urging caution to avoid stifling innovation in the country.