Self-Sovereignty Is NEAR: A Vision for Our Ecosystem
As a kid growing up in Ukraine in the ’90s, after the dissolution of the USSR, I remember watching the price of bread go from 1,000 karbovanets, to 10,000, to 100,000 in less than five years (until that currency was abandoned altogether and replaced with the hryvnia). When I started working as a software developer in my teens, I kept my earnings in cash in my room, because I already understood that we couldn’t trust corrupt banks with our money.
Between 2014 and 2016 alone, 77 banks failed in Ukraine. My grandparents still have the bank books tracking the savings they put away during the USSR years, but those savings no longer exist. So even something that is yours, that you rightfully own, can disappear if the system you’re a part of fails. The same thing is happening, of course, to millions of people living under hyperinflation or dictatorship, or in war zones across the world. And while these may seem like abstract or distant problems that will never arrive at your doorstep, let me tell you from my own experience: nothing is guaranteed.
Every system is as fragile as the rules holding it together. And the rules can change. They’re changing around us right now, and I believe we are approaching a point of no return.
The Need for Digital Self-Sovereignty
We need to create new economic opportunities for people everywhere through self-sovereignty, which should be a universal right, and which technology, rather than nation-states alone, can now provide for the first time in history. For citizens of nations that have enjoyed economic security and a high degree of sovereignty, this may not seem like an immediate-term issue. But it is.
The economics of tech companies inevitably push them to corrupt their original product or vision for the sake of profit in order to maintain growth, and, more importantly, to create barriers against anyone who might disrupt them. Governments, in order to maintain their power, will use pressure and ingenuity to control their populations, too often to the point of violating human rights in the name of safety or security.
We all use our phones and computers a thousand times a day, prioritizing convenience over self-sovereignty because until now, we haven’t had a choice. We are now approaching a tipping point towards a dystopian future that we may not be able to come back from, brought on not just by governments but by the economics of tech companies. What happens when these incentives increasingly collide and push each other deeper into the lives of individuals for the sake of maintaining control and profit?
That’s right about where we are today.
Changing the Stakes with Generative AI
Before founding NEAR, I was an AI researcher. I worked at Google, where I contributed to TensorFlow and eventually, with a handful of colleagues, published a paper called “Attention Is All You Need.” That paper introduced the Transformer architecture that powers ChatGPT, Bard, and most of the well-known LLMs behind last year’s explosive growth in AI.
I first became interested in AI because of the 2001 movie “A.I. Artificial Intelligence.” Changing how we interact with computers and augmenting human intelligence to maximize our potential was, and still is, very appealing to me. And I still think AI has the potential to make human lives, organizations, even governments better. But like any other technology, in the hands of the wrong people or with the wrong incentives, it also has the potential to make our lives terrible.
Generative AI creates a method of control and manipulation that is both universal and personal at scale. In practice, it means your social feed and search results can be tuned to ensure that you buy specific products or form a specific opinion. This will start in the form of commercial improvements that lead to more profit for tech giants: Netflix will generate a movie script that can shape your opinion, Facebook will reinforce that opinion by showing you more of it, and so on. It could even happen at a more fundamental level, such as flooding training data with specific information in order to influence every model trained on it.
If this granular information and such a personal vector of manipulation can be extracted or bought, they will be, and they will then become a tool for control. If this data is stored somewhere centralized and hackable, it will be stolen; we see this constantly with Web2 giants as it is. And if governments can get access to it, they will use it to maintain or grow their power.
The true danger generative AI introduces is that this exploitation won’t just happen at the level of systems or populations: it will become personal and incredibly specific. The depth of potential control and manipulation reaches each and every human, no matter where they live, no matter where they keep their money. Such a powerful technology simply cannot remain in the hands of centralized companies, nor be too easy for governments to take over.
So What Should We Do About It?
So if people don’t yet feel a sense of urgency to build new systems that uphold self-sovereignty, what will make it real for them? Changes in collective values are always driven by economic opportunity. The major revolutions of history started because of economic failures: American independence from Britain, the French Revolution, the collapse of the USSR, and so on. If people see ways to create better economic realities for themselves and their families, they will turn values into actions.
Creating new opportunities for people via self-sovereignty is what NEAR is about. Complete self-sovereignty has been the NEAR vision since day one: we want to build a world where all people control their own assets, data, and power of governance. This sovereignty must apply not only to individuals but also to the organizations and communities they create, and eventually to whole societies.
Self-sovereignty is a new primitive that hasn’t existed before today. Until now, one always needed to rely on some power of violence to ensure rules were followed, most recently that of nation-states. One of the core principles of digital self-sovereignty is the ability to choose and switch between any service providers. There is no lock-in. There are no middlemen like banks or government agencies that can lose or steal assets, or change the rules on you out of nowhere.
Importantly, this must also apply to AI. People need to own their data so that they know what it’s being used for and can actively consent to personalized experiences they believe will improve their lives. Models must be governed transparently, in public, with clear rules and monitoring to manage risk proactively, and with reputation systems that bring clarity and traceability to information. Web3 can help uphold, scale, and manage such systems, ensuring AI is a force for good while preventing it from being too exploitable.
Another major challenge, especially clear in governance but applicable to corporations as well, is that when we select someone to represent our ideas as our delegate, they will always have their own interests and motivations in the mix. They don’t necessarily have nefarious intentions; it’s just a natural tendency. This is the “principal-agent problem”: the person elected behaves differently than the people who elected or pay them would prefer, guided by their own interests. This is where AI governance systems can help, by introducing neutral agents: unbiased AI agents governed directly by a community can act on its behalf in a more reliable way. With transparent governance and monitoring, AI can be a force for good in individual lives as well as for the collective.
A Vision for the NEAR Future
Despite my concerns about where the traditional tech paradigm is heading, I remain a techno-optimist. I wouldn’t be doing this work if I didn’t think it was for the good of everyone, and I’ve read enough sci-fi to know that the outcomes of science and technology depend much more on what people do with them than on the tech itself. If we want something, we should build it.
I would like NEAR to become a fully sovereign operating system, equipped with a personal AI assistant that optimizes for the user’s needs without leaking private information about their data or assets. It should also be able to interact and transact with other people’s AIs and with community AIs, peer-to-peer. I call this “user-owned AI.”
We also need shared community AIs, governed by the members of the communities they serve. They represent the mix of needs and knowledge of all of their members, whether that community is a small club or startup, a city, a nation-state, or the whole world. There is always the option to fork a community’s AI and create new ones. The community governs which data goes into training its model, and inference (running live data through the model) can be performed privately, so that only the user sees the input and output while still receiving a proof that the selected model was used; a minimal sketch of that verification flow follows below.
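To make the “proof that the selected model was used” idea concrete, here is a minimal Rust sketch under stated assumptions: the registry, receipt, and proof types are hypothetical illustrations rather than any actual NEAR API, and a standard-library hasher stands in for a real cryptographic commitment (in practice, a ZK proof or a TEE attestation).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy commitment: hash several byte strings together. A real system would use
// a cryptographic hash plus a ZK proof or TEE attestation, not DefaultHasher.
fn commit(parts: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts {
        p.hash(&mut h);
    }
    h.finish()
}

/// On-chain record the community governs: which model (by content hash) is approved.
struct CommunityModelRegistry {
    approved_model_hash: u64,
}

/// What an inference node returns: the output stays between the node and the
/// user; the proof binds (model, input, output) together.
struct InferenceReceipt {
    output: String,
    proof: u64, // stand-in for a ZK proof or TEE attestation
}

/// Hypothetical inference node: runs a (toy) model and emits a bound receipt.
fn run_inference(model_hash: u64, input: &str) -> InferenceReceipt {
    let output = format!("answer to '{input}'"); // toy model output
    let proof = commit(&[
        &model_hash.to_le_bytes()[..],
        input.as_bytes(),
        output.as_bytes(),
    ]);
    InferenceReceipt { output, proof }
}

/// User-side check: was the community-approved model actually used on my input?
fn verify(registry: &CommunityModelRegistry, input: &str, r: &InferenceReceipt) -> bool {
    let expected = commit(&[
        &registry.approved_model_hash.to_le_bytes()[..],
        input.as_bytes(),
        r.output.as_bytes(),
    ]);
    expected == r.proof
}

fn main() {
    let registry = CommunityModelRegistry {
        approved_model_hash: commit(&[&b"model-weights-v1"[..]]),
    };
    let input = "what should our community fund next?";
    let receipt = run_inference(registry.approved_model_hash, input);
    assert!(verify(&registry, input, &receipt));
    println!("verified output: {}", receipt.output);
}
```

The design point is the binding: because the proof commits to the model hash, the input, and the output together, an inference node cannot silently substitute a different model without the user’s check failing.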
To facilitate this vision, a lot of pieces need to come together:
- Economic and technological opportunity to enable users to onboard en masse.
- Open source software across the stack, from blockchain tech to AI models.
- Blockchains must get abstracted away from the user so they are not barriers to entry or participation. I call this the principle of Chain Abstraction (see the sketch after this list).
- Applications must provide a novel value unlock: for example, Cosmose and Sweat. These apps reward users and serve as an economic gateway into a broader ecosystem of opportunities.
- On-edge AI models, meaning hyperlocal ones, that are usable by individuals (and free of manipulation).
- Community-owned AI models with governance and economic opportunity, replacing everything from business ops to government agencies. Self-governance by the people, for the people, at scale with the help of technology and decentralized peer-to-peer systems.
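As a rough illustration of the Chain Abstraction principle mentioned above, here is a minimal Rust sketch; the Wallet trait and both wallet types are hypothetical stand-ins, not NEAR’s actual interfaces. The point is that application code talks to one interface and never needs to know which chain settles the transaction.

```rust
// Hypothetical interface, not NEAR's actual API: the app depends on one Wallet
// trait, and the chain underneath becomes a routing detail rather than a barrier.
trait Wallet {
    fn transfer(&self, to: &str, amount: u64) -> Result<String, String>;
}

struct NearWallet;
struct EthereumWallet;

impl Wallet for NearWallet {
    fn transfer(&self, to: &str, amount: u64) -> Result<String, String> {
        // A real implementation would build, sign, and submit a NEAR transaction.
        Ok(format!("near tx: {amount} yoctoNEAR -> {to}"))
    }
}

impl Wallet for EthereumWallet {
    fn transfer(&self, to: &str, amount: u64) -> Result<String, String> {
        // A real implementation would build, sign, and submit an Ethereum transaction.
        Ok(format!("eth tx: {amount} wei -> {to}"))
    }
}

// Application logic is chain-agnostic: it never names a specific chain.
fn pay(wallet: &dyn Wallet, to: &str, amount: u64) {
    match wallet.transfer(to, amount) {
        Ok(receipt) => println!("sent: {receipt}"),
        Err(e) => eprintln!("failed: {e}"),
    }
}

fn main() {
    pay(&NearWallet, "alice.near", 1_000);
    pay(&EthereumWallet, "0xabc123", 1_000);
}
```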
Blockchains, peer-to-peer payments, Web3, zero-knowledge proofs, very large language models, and on-edge AI models: these are not separate technology verticals, but rather interconnected facets of a new digital paradigm of self-sovereignty.
We have the tools to remake how we provide for ourselves, how we work together and govern ourselves, and how we consume and generate information. Without gatekeepers, fair and open to everyone. And this is not a futuristic vision: it’s possible to start experimenting and building now, before our fragile and outdated systems and structures get weaker or fail, before too much centralization leads to the worst outcomes instead of the ones we all design and share together.
––Illia Polosukhin, Co-Founder of NEAR and CEO of NEAR Foundation