Insights from the MIT Decentralized AI Summit

--

MIT Decentralized AI Summit

October 2024

Attending the MIT Decentralized AI Summit was a transformative experience. In just the first two hours, I absorbed more about decentralized AI than I had in months of prior research. The event brought together leading experts from Dell, Intel, MIT, and Nvidia, as well as prominent AI-focused venture capital firms and innovative startups. Each session offered a fresh perspective on the current challenges and opportunities in decentralized AI, and more importantly, where the technology is heading in the near future.

In the sections that follow, I’ve distilled the key insights and discussions from the day into a concise “cliff notes” summary, capturing what I felt were the most critical takeaways from each speaker and panel.

Rob Lincourt — Dell: The Age of AI PC

Rob Lincourt from Dell kickstarted the day by emphasizing the importance of reducing parameters in large language models (LLMs) to minimize memory usage. He demonstrated how memory requirements could be slashed from 16GB to just 4GB, making it possible to run AI models on smaller edge devices. This reduction is a game-changer, as it enables AI to operate in more decentralized environments, improving latency, throughput, and overall accessibility.
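To put rough numbers on that reduction, here is a quick back-of-the-envelope sketch in Python. The ~8-billion-parameter model size and the specific precisions are my own illustration, not figures Lincourt cited:

```python
# Rough memory footprint of model weights at different precisions.
# Illustrative only: the ~8B-parameter model size is an assumption, not from the talk.

PARAMS = 8_000_000_000  # ~8 billion weights

BYTES_PER_PARAM = {
    "fp32": 4.0,   # full precision
    "fp16": 2.0,   # half precision, common for GPU inference
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gigabytes:.0f} GB of weights")

# fp16: ~16 GB  ->  int4: ~4 GB, roughly the reduction described in the talk.
# (Activations, KV cache, and runtime overhead add more on top of this.)
```

At that footprint, the weights fit comfortably in the memory of a well-equipped laptop or edge box rather than requiring a data-center GPU.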

His key takeaway: Decentralized AI is essential because there is no “God model” — no single system can handle all tasks. Instead, decentralization allows for specialized models to coexist and operate where they are needed most.

Mark Castleman — Intel: Evolution in Computing

Mark Castleman, a Managing Director for Intel’s AI cloud, spoke about the rapid acceleration of AI software relative to hardware development. While hardware struggles to keep pace, the cost of AI inference has plummeted by 100x in just one year.

Mark showing how the industry has Karate Chopped the cost of Inferencing!

He talked briefly about Intel’s Tiber AI Cloud, a platform that promises to bridge the gap by optimizing for both training and inference. Castleman emphasized the speed of innovation and how it’s pushing the boundaries of what AI can achieve. One of the most significant trends: AI software outpacing hardware, indicating a need for more specialized solutions in decentralized infrastructures.

Ramesh Raskar — MIT: What is Decentralized AI?

In one of the standout sessions of the day, Professor Ramesh Raskar tackled one of the summit’s central themes: what exactly is decentralized AI? Raskar proposed the idea of a “GOD AI” — a Global Orchestration of Decentralized AI, drawing parallels with how we already surrender location data to systems like Google Maps in exchange for valuable information like current traffic patterns.

“GOD AI” — a Global Orchestration of Decentralized AI

Side note: I of course chuckled when he talked about the notion of a GOD AI (before I understood it was an acronym), as the previous presenter had just stated there was no such thing as a God model. That’s one of the best things about these kinds of summits: different viewpoints from different people that don’t always have to align. But I digress…

He envisioned a future where billions of AI agents collaborate, growing from simple tasks to more complex problem-solving scenarios. Raskar’s ideas about AI transitioning from assistants to engineers — and even scientists — highlighted the potential of decentralized systems in shaping industries such as healthcare and e-commerce.

Centralized vs Decentralized AI from Assistant to Scientist

Professor Raskar closed out his session speaking about some of the trends in decentralized AI, like how we now have compute power at the edges — PC power is growing and there are millions of these devices throughout the world. We will eventually have billions of AI agents; it will start with simple agents that work together and create complex agents in what will become the new “Internet of Agents”.

Current Trends in Decentralized AI
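To make the idea of simple agents combining into more complex ones a bit more concrete, here is a toy sketch. The agent names and tasks are entirely my own illustration, not anything Raskar presented:

```python
# Toy illustration of simple agents composing into a more capable one.
# The agents and tasks here are hypothetical, purely for illustration.

from typing import Callable, List

Agent = Callable[[str], str]  # an agent maps an input string to an output string

def search_agent(query: str) -> str:
    # Placeholder: a real agent might query a search API or a local index.
    return f"results for '{query}'"

def summarize_agent(text: str) -> str:
    # Placeholder: a real agent might call a small on-device LLM.
    return f"summary of [{text}]"

def orchestrate(agents: List[Agent], task: str) -> str:
    """Chain simple agents so their composition handles a larger task."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

print(orchestrate([search_agent, summarize_agent], "current traffic patterns"))
# -> summary of [results for 'current traffic patterns']
```

An “Internet of Agents” pushes the same principle much further, with agents owned by different people and running on different edge devices cooperating over a network.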

Venture Capital Panel: Advancements in AI

The VC panel, featuring Doug Levin (HBS), Dave Blundin (Link Ventures), Casey Caruso (Topology Ventures), and Sam Lehman (Symbolic Capital), provided a deep dive into the evolving investment landscape of AI and decentralized technologies. Doug Levin, who expertly moderated, opened the panel with a discussion of the critical importance of product-market fit and solid business use cases for AI startups.

Dave Blundin shared an important announcement: the upcoming launch of Liquid AI, which promises 10x faster inference than current transformer models. This development signals the accelerating timeline for AI advancements. According to Blundin, we are fast approaching a point where AI could handle any intellectual task a human can perform. He believes entire industries will be completely managed by AI agents, with customer service being one of the first to transition within the next 2–5 years. However, he emphasized that most startups pivot multiple times, making it crucial for VCs to prioritize the strength of the team and their adaptability over the initial technology when making investment decisions.

Sam Lehman pointed to the intersection of decentralized AI and blockchain, stating that blockchain could play a vital role in verifying and tracking AI data models. He noted that despite the challenges facing the crypto market, decentralized AI represents one of the few bright spots for investors this year. Lehman explained that he looks for founders with strong AI expertise, considering crypto knowledge as secondary, since he believes that AI agents hold the real potential for value creation in this space.

Casey Caruso added her perspective on the intersection of AI and crypto with three key points:

  1. Data is the most valuable asset — and crypto can enhance data integrity and provenance.
  2. Compute power shortages — supply and demand will push AI models toward crowd-based decentralized systems. Caruso mentioned Akash, a decentralized cloud computing marketplace, as a project that exemplifies this trend.
  3. Autonomous AI agents — in the near future she sees billion-dollar companies entirely run by AI agents that utilize crypto rails and stablecoins for transactions.

Caruso emphasized the need for founders who are not only brilliant but also capable of dedicating themselves fully to their projects, particularly as decentralized models and autonomous agents rise in prominence.

A personal side note from this session: I was excited when Casey mentioned Akash, as it is one of my favorite projects in this space and the reason I recently interviewed its founder; see DePIN and AI: Bridging the Future.

Song Han — MIT: Edge AI with Compressed Models

One of the most information-dense sessions of the day came from Professor Song Han of MIT, who presented his groundbreaking work on efficient AI computing. As neural networks grow too large for traditional memory, Han emphasized the importance of compressing models before deployment to ensure efficient use of hardware. He highlighted the need for systems that can handle these compressed models at both the training and inference stages.

Han’s work with TinyML has been particularly impactful, as it has produced a hierarchy of inference models that can be deployed across a wide range of devices. These range from larger cloud-based systems to small IoT sensors, significantly advancing the capabilities of on-device AI. This approach not only maximizes performance but also broadens the accessibility of AI by allowing more compact, efficient models to run on edge devices.
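As a rough illustration of what “compressing a model before deployment” can look like, here is a minimal NumPy sketch of magnitude pruning followed by 8-bit quantization of a single weight matrix. This is my own toy example, not Professor Han’s actual techniques or tooling:

```python
# Toy model compression: magnitude pruning + 8-bit quantization of one weight matrix.
# Illustrative only; real toolchains (e.g. TinyML stacks) are far more sophisticated.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)  # a stand-in layer

# 1) Magnitude pruning: zero out the 80% of weights with the smallest absolute value.
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2) 8-bit quantization: map the remaining float weights to int8 with a single scale.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)

# Dense fp32 storage vs. int8 storage (ignoring extra savings from sparse formats).
print(f"fp32 bytes: {weights.nbytes}, int8 bytes: {quantized.nbytes}")  # 4x smaller
print(f"nonzero weights kept: {np.count_nonzero(quantized) / weights.size:.0%}")

# Dequantize at inference time on hardware without native int8 support.
dequantized = quantized.astype(np.float32) * scale
```

Real compression pipelines go much further, but the basic trade-off is the same: fewer, smaller numbers in exchange for a model that fits the target device.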

Anna Kazlauskas — Vana: User Owned Data

Anna Kazlauskas highlighted one of the most critical issues in AI today: the Data Wall, emphasizing that open-source AI is not the same as user-owned AI. With less than 0.1% of the internet publicly scrapable, the vast majority of AI models are trained on data controlled by large corporations. These corporate “walled gardens” create a Data Wall, making it difficult for others to compete. Foundational data models are becoming a source of truth and economic power, and as Kazlauskas stressed, whoever controls AI data holds tremendous power.

I have to insert another personal note here, as Anna’s comments eerily echoed something Dan Keller mentioned in my recent AI panel, DePIN and AI: Bridging the Future, when he said that “it’s all about the resources and control” when it comes to AI.

However, Kazlauskas argued that we don’t have to accept this Data Wall. Users have access to large amounts of data that can be exported, and the compute power needed to train their own models. This approach could lead to the rise of collective data models. She cited the example of the Reddit Data DAO, which has over 140,000 real Reddit users contributing to the training of a user-owned LLM. Though Reddit tried to stop this initiative, the users ultimately succeeded in exporting, combining and selling their own data. Kazlauskas emphasized the need for strong regulatory protections in this space to support such efforts. Further reading: Vana launches DAO letting Reddit users control their personal data.

Vana’s top Data DAOs

Her company, Vana, is pioneering the concept of user-owned data. They don’t act as custodians of the data, which helps break down the Data Wall. Instead, they offer proof of contribution, which Kazlauskas believes will lead to a library of decentralized collective data models that could reshape how AI is trained and owned.
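Kazlauskas didn’t go into implementation details, but the general shape of a “proof of contribution” can be sketched: contributors commit to their data with a hash instead of handing the raw data to a custodian, and that record is what earns them attribution. The following is purely my own simplified illustration, not Vana’s actual protocol:

```python
# Hypothetical sketch of "proof of contribution": users commit to the data they
# contribute via hashes, so attribution can be verified later without a central
# data custodian. My own simplified illustration, not Vana's actual design.

import hashlib
import json
import time

def contribution_record(user_id: str, data: bytes) -> dict:
    """Create a verifiable record of a contribution without storing the raw data."""
    return {
        "user": user_id,
        "data_hash": hashlib.sha256(data).hexdigest(),  # commitment to the data
        "size_bytes": len(data),
        "timestamp": time.time(),
    }

def verify_contribution(record: dict, data: bytes) -> bool:
    """Anyone holding the raw data can check it matches the recorded commitment."""
    return hashlib.sha256(data).hexdigest() == record["data_hash"]

exported = b'{"posts": ["..."]}'          # e.g. data a user exported from a platform
record = contribution_record("user_123", exported)
print(json.dumps(record, indent=2))
print("verified:", verify_contribution(record, exported))
```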

Kavita Gupta — Delta Blockchain Fund: Blockchain’s Role in AI

In a fireside chat, Kavita Gupta discussed the interplay between blockchain and AI, acknowledging that while AI is ready for blockchain, blockchain is not quite ready for AI. One of the main obstacles is data fragmentation across multiple blockchains, which makes it difficult for AI models to operate seamlessly. She noted that decentralized storage will be a crucial component of the future infrastructure needed to support AI.

Michael Demeo — NVIDIA: Democratizing Innovation

Michael Demeo introduced us to NVIDIA’s Omniverse ecosystem and how it’s reshaping AI development through the use of digital twins. These digital worlds allow for the simulation and testing of AI models in a photorealistic environment that can even include avatars — simulated humans — interacting with AI.

NVIDIA’s Omniverse ecosystem

The Omniverse provides a safe, accurate, and collaborative space for AI models to train and refine their capabilities. This environment allows for decentralized AI to be tested and iterated in a non-destructive manner before deployment in real-world scenarios. Demeo emphasized that this kind of simulation is crucial for improving the accuracy and safety of AI systems, which is essential as we move toward a future where AI agents and decentralized systems become more autonomous.

Michael Casey — DAIS: Incentives in Decentralized AI

Michael Casey addressed one of the most pressing challenges in decentralized AI: the dominance of Digital Oligarchs. He highlighted that six major companies, with a collective market cap of roughly $15 trillion — representing 33% of the S&P 500 — control vast amounts of data, effectively dictating the AI landscape. Casey argued that the Web2 economic model is broken because services that seem free, such as Google and Facebook, come with hidden human costs, including the extraction of our data, time, and knowledge. These Web2 giants are exploiting this information, making humans the product rather than the customer.

Digital Oligarchs capturing data

Decentralized AI, according to Casey, is key to shifting power away from a few tech giants and putting more value on humanity. He believes that decentralized systems will fragment control, and open-source models will enable trusted inputs and outputs, transforming users from products into customers. This shift will incentivize the development of trusted personal AI agents, and monetary incentives will play a role, but in a more decentralized, bazaar-like economy rather than a centrally controlled “cathedral.”

The case for Decentralized AI

However, Casey also recognized the challenges. Decentralized AI still faces obstacles in efficiency and latency when managing dispersed data sets. The tension between privacy and transparency requires intricate governance systems that will take time to establish. He emphasized that education is critical for mass adoption and noted the significant advantage that Web2 companies have, thanks to their regulatory capture and head start over decentralized models.

Decentralized AI Challenges

AI Pioneers Panel: Charting the Path Forward

Kirk Compton from SambaNova discussed how his company is leading the charge in reducing inference costs through multi-tenant AI systems. SambaNova’s expertise in chip-making and reduced power consumption for generative AI has positioned them as a leader in tackling the rising costs of large-scale inference. Compton explained that inference at scale is becoming prohibitively expensive, and SambaNova’s solution is to provide multiple models on a single chip — essentially creating multi-tenant AI inference. Their current focus is on deploying open-source decentralized AI models for their customers, allowing for more efficient and cost-effective AI deployment.
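Compton didn’t walk through implementation details, but the basic shape of multi-tenant inference (several models served from shared hardware, with each request routed to the right one) can be sketched roughly like this. It’s a hypothetical illustration, not SambaNova’s actual stack:

```python
# Toy sketch of multi-tenant inference: several models share one serving process
# and requests are dispatched by model name. Hypothetical; not SambaNova's stack.

from typing import Callable, Dict

class MultiTenantServer:
    def __init__(self) -> None:
        self._models: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, model_fn: Callable[[str], str]) -> None:
        """Load a model once; all tenants share the underlying hardware."""
        self._models[name] = model_fn

    def infer(self, name: str, prompt: str) -> str:
        """Route a request to the requested tenant model."""
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        return self._models[name](prompt)

server = MultiTenantServer()
server.register("summarizer", lambda p: f"[summary] {p[:20]}...")
server.register("classifier", lambda p: "positive" if "good" in p else "neutral")

print(server.infer("summarizer", "Decentralized AI lowers inference cost at scale."))
print(server.infer("classifier", "This is a good development."))
```

The economics come from consolidation: instead of one underutilized accelerator per model, many models share the same silicon and its power budget.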

Shivani Mitra of Nous Research emphasized the critical need for transparency in AI models. She highlighted that decentralization and open-source models will be key in creating a more accessible and equitable AI ecosystem. According to Mitra, the push for decentralized AI is not just about technological advancement but about ensuring that AI systems are built with transparency and fairness at their core.

Jarrod Barnes — NEAR Foundation: Community Driven Innovation

Jarrod Barnes from the NEAR Foundation introduced the concept of the decentralized AI triangle — Performance, Verifiability, and Privacy — as the foundation for future AI models, especially as AI becomes increasingly invisible to end users. These three pillars are seen as the core building blocks of user-owned AI, driving innovation in a way that prioritizes control and security for users rather than corporations.

Decentralized AI triangle

Barnes highlighted NEAR’s vision of creating a user-owned internet through their Layer 1 blockchain, where the decentralized AI triangle would serve as a guidepost for development. He noted that data fuels the entire pipeline of decentralized AI, but we are only at the beginning stages, likening the current state of decentralized AI to the internet in 1995.

One of the major hurdles Barnes pointed out is the computational and data power required to train models like LLaMA 3, which is largely controlled by a small number of corporations. This concentration of resources underscores the importance of NEAR’s mission to create technology that excels in the areas of Performance, Verifiability, and Privacy. As AI becomes “invisible” to users, he explained, these components will only grow in importance.

Barnes shared that NEAR is focused on building the most compelling ecosystem for decentralized AI that transcends Web3 and is capable of providing value to enterprises. He also emphasized the biggest opportunities in the space, particularly AI agents and the development of the internet of agents — autonomous systems capable of interacting and collaborating without human oversight.

NEAR Ecosystem Framework, it’s like a claw

Beyond the sessions I’ve covered, the afternoon was packed with rapid-fire presentations from over 20 Web3 and AI startups. The pace was so intense that my fingers couldn’t keep up, and my brain was nearing its limit. I’ll do my best to capture some of these groundbreaking innovations in future blog posts.

In closing, this summit painted a clear picture: Decentralized AI isn’t just a passing trend — it’s a necessary evolution in the way we build, use, and interact with artificial intelligence. Each speaker offered compelling reasons why decentralization will shape the future of AI, and the innovations emerging from this space are nothing short of revolutionary.

--

Written by Marz Rock: AI & Web3 Strategy and Consulting

AI & Web3 Consultant and Enthusiast. Follow us on X here: https://x.com/themarzrock Learn more about our offerings here: https://www.marzrock.com/
