Data Lake

Our Mission

The future of applications is shifting toward a dynamic paradigm, underpinned by open data lakes on the blockchain. In this future, static applications give way to fluid, dynamically generated interfaces. Central to this transformation is a network of Autonomous Agents operating within a decentralized, incentivized framework. These agents orchestrate content, molding it into diverse forms and configurations to meet the immediate, ever-changing demands of consumers. Applications thus become not just tools but adaptive entities, continually evolving in response to user needs and preferences, redefining the landscape of digital interaction and content consumption.

As we reflect on the aftermath of the 2008 financial crisis, it is evident that humanity was in dire need of a new beacon of hope. This need coincided with the emergence of major app stores, which, fused with the allure of zero interest rates, ignited what we now know as 'the pump'. Since then, however, only TikTok has managed to achieve global adoption, grow sustainably, and continuously attract users. The landscape of content consumption is already dominated by key players in each category: short text, short video, long video, image, and voice.

In this saturated market, new players occasionally attempt to innovate within existing domains. Yet these ventures often falter. The added value of their innovation is minimal, typically amounting to an amalgamation of existing formats or restrictions that target specific user groups. Despite initial surges in popularity, such apps fail to maintain user growth and engagement, falling short of becoming viable alternatives. Clubhouse, for example, which merged interactivity with podcast elements, saw its concept more successfully adopted by Twitter Spaces.

These innovations are transient, their value diminished by the overpowering network effects of established players. At a certain point, pivoting becomes unfeasible, leading these apps to either slowly fade away or be acquired.

However, there is still a glimmer of hope. ChatGPT, for instance, broke Instagram's long-standing adoption record, reaching 1 million users within days of launch. Unlike traditional domain-specific applications, ChatGPT and similar language models innovate through user interaction. They evolve with each engagement, transcending the limitations of specific formats or domains. They can interact with imagery, sound, and text, making digital content universally composable.

The distinct advantage of these models lies in their ability to engage users with relevant content in the form and format they prefer at any given moment. Utilizing a language model as the foundation of an application circumvents the dilemma of being confined to a single content type. This opens up possibilities for an 'Everything app'.

Despite these advancements, challenges remain: the rise of isolated information silos, ineffective incentive attribution across varying regulations and content-ownership regimes, and the impracticality of licensing in the complex regulatory landscape of Web 2.0. Moreover, reliance on centralized APIs carries the risk of sudden access termination, and the issue of uncertain content provenance persists.

To address these challenges, a decentralized, permissioned data lake could be the solution. To manage content effectively, however, discovery must be prioritized. Its significance is often overlooked because existing recommendation systems, such as Google's, already perform well. Language models have the capability to recognize and extract the value of various content forms – files, images, sound, videos – and conduct in-depth analysis, thereby capturing their intrinsic value.
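As a toy illustration of the discovery step, the sketch below embeds content segments into vectors and ranks them against a query by cosine similarity. All names here are hypothetical, and the bag-of-words "embedding" is a stand-in for the multimodal models the text describes:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a
    # multimodal model to handle files, images, sound, and video.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def discover(query: str, segments: list[str], top_k: int = 2) -> list[str]:
    # Rank stored content segments by relevance to the query.
    q = embed(query)
    ranked = sorted(segments, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:top_k]

segments = [
    "short video clip about decentralized storage",
    "podcast episode on zero interest rates",
    "image gallery of generated art",
]
print(discover("decentralized video", segments, top_k=1))
```

The point is only the shape of the pipeline: segment, embed, rank. Swapping the toy embedding for a learned one leaves the discovery interface unchanged.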

We envision a market where participants use computation to deeply understand content, breaking it down into valuable segments and making it accessible to AI models in a model-agnostic manner. Addressing discovery is the first step; recommendations can follow. Once content is segmented, market participants – including autonomous agents – can develop their own composition graphs, using their algorithms to connect and transform content.
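A composition graph of the kind described above can be sketched minimally as a directed graph whose nodes are content segments and whose edges carry the transformation applied when composing one form into another. This is an illustrative data structure under our own naming, not a specification:

```python
# Nodes are content segments; each edge is (target_segment, transformation).
graph = {
    "article": [("summary", "summarize"), ("audio", "narrate")],
    "summary": [("tweet", "truncate")],
    "audio": [],
    "tweet": [],
}

def compositions(start: str, path=()):
    """Enumerate the transformation chains reachable from a segment."""
    yield path
    for target, transform in graph.get(start, []):
        yield from compositions(target, path + (transform,))

# Every way an article can be recomposed in this toy graph.
print(sorted(compositions("article")))
```

An agent's algorithm would then score these chains and pick the composition best matched to what a consumer wants at that moment.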