Liquid AI Unveils Revolutionary LFMs: Launch Event Recap


The Liquid AI launch event was a landmark moment in the evolution of artificial intelligence, gathering thought leaders and innovators to explore the transformative potential of Liquid AI. With high-profile speakers and thought-provoking discussions, the event examined how Liquid AI is poised to reshape industries, improve efficiencies, and push the boundaries of what’s possible.

MIT’s Kresge Auditorium Jam-Packed with AI-Heads

To set the scene, the event was held at the Kresge Auditorium on MIT’s campus. It was a packed house; the energy was palpable, and the audience was engaged throughout. Below are some of the key highlights I took away from each session. While I mostly wrote this myself, I did use the Liquid Labs AI Playground to help summarize my notes and express my thoughts.

Ramin Hasani, Liquid AI CEO
Hasani opened the event with a compelling vision of AI’s transformative power, touching on its applications in drug discovery, scientific research, and beyond. He highlighted the urgent need to mitigate AI’s environmental impact, emphasizing Liquid AI’s commitment to responsible innovation. Addressing the challenges faced by enterprises, Hasani pointed out that companies are seeking to utilize AI effectively, scale their operations, measure return on investment (ROI), personalize services, and address privacy concerns. He introduced Liquid Foundation Models (LFMs) as the solution, underscoring their ability to deliver high-quality AI at every scale, from 2 billion to 40 billion parameters, while remaining energy-efficient.

Liquid AI’s Goal to build AI at every scale

Hasani announced plans to release audio and vision LFMs with multiple billions of parameters, along with a full library of LFMs tailored for various domains, including bioinformatics, autonomous driving, time series analysis, and more. He positioned Liquid AI as a foundation model company with a unique architecture, poised to lead the next wave of AI innovation.

Jimmy Smith, Founding Scientist, & Maxime Labonne, Head of Post-Training, Liquid AI

Liquid AI prioritizes both Quality and Efficiency

Smith and Labonne provided an in-depth look at LFMs’ quality and efficiency, backed by rigorous benchmarking and iterative evaluations. They explained how LFMs prioritize knowledge capacity, multi-step instructions, and long-context capabilities while maintaining efficiency. They described how, through both human judging and automated AI judging, they constantly feed evaluation results back into their models, aiming to surpass the quality and efficiency of competing models. Their presentations were a testament to Liquid AI’s commitment to excellence, showcasing LFMs’ ability to deliver top-tier performance without compromise.

Alexander Amini, Chief Science Officer, Liquid AI
Amini led his presentation by reiterating that Liquid AI is not a language model company; it is a foundation model company, releasing a suite of LFMs intended to revolutionize multiple industries. He then captivated the audience with LFMs’ diverse applications: Bio LFM generating brand-new proteins efficiently and accurately, Drive LFM simulating physical environments to predict scenarios and improve automation, and Transaction LFM safeguarding financial security by identifying fraudulent transactions. This showcase assured the audience that Liquid Foundation Models are truly multimodal.

Multimodal nature of LFMs

Liquid DevKit: Build-Scale-Explain
Given my developer background, the introduction of the Liquid DevKit was a standout moment, offering developers a streamlined way to build, scale, and explain LFMs. Its intuitive design and powerful features, including the model.explain(x) command that surfaces detailed explanations of model behavior, promise to democratize AI development and extend Liquid AI into a platform with a full-fledged ecosystem.

Look Code — YEAH!

The DevKit includes several abstractions, starting with operators at the base, progressing to blocks as the next level, and culminating in backbones that can combine multiple blocks, providing developers with a comprehensive toolkit for LFM development.
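To make the layering concrete, here is a minimal Python sketch of that operator → block → backbone idea, with an explain method loosely mirroring the model.explain(x) demo. To be clear, none of these class or method names come from the actual Liquid DevKit; they are hypothetical stand-ins illustrating how such abstraction levels could compose.

```python
from typing import Callable, List

# An "operator" is the smallest unit: a named function applied to a value.
class Operator:
    def __init__(self, name: str, fn: Callable[[float], float]):
        self.name = name
        self.fn = fn

    def __call__(self, x: float) -> float:
        return self.fn(x)

# A "block" is the next level up: a named chain of operators.
class Block:
    def __init__(self, name: str, operators: List[Operator]):
        self.name = name
        self.operators = operators

    def __call__(self, x: float) -> float:
        for op in self.operators:
            x = op(x)
        return x

# A "backbone" combines multiple blocks, and can explain its own
# computation by tracing the value through every operator it contains.
class Backbone:
    def __init__(self, blocks: List[Block]):
        self.blocks = blocks

    def __call__(self, x: float) -> float:
        for block in self.blocks:
            x = block(x)
        return x

    def explain(self, x: float) -> List[str]:
        trace = []
        for block in self.blocks:
            for op in block.operators:
                x_new = op(x)
                trace.append(f"{block.name}.{op.name}: {x} -> {x_new}")
                x = x_new
        return trace

scale = Operator("scale", lambda x: 2 * x)
shift = Operator("shift", lambda x: x + 1)
model = Backbone([Block("stem", [scale]), Block("head", [shift])])

print(model(3.0))          # 7.0
print(model.explain(3.0))  # step-by-step trace through each operator
```

The appeal of this layered design, as I understood it from the talk, is that the same explain-style introspection can work at any level of the stack, since each layer is just a composition of the one below it.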

Mathias Lechner, CTO, Liquid AI
Lechner’s session focused on LFMs’ remarkable deployment capabilities, demonstrating their adaptability across various platforms and contexts. From running on resource-constrained devices like a $60 Raspberry Pi to handling sophisticated tasks like voice conversations, LFMs proved their mettle in real-world scenarios. He demonstrated that their 1.3B-parameter model can run on a Raspberry Pi and that their 40B-parameter model can run on a single GPU. In a live voice conversation with the AI, Lechner asked for tips on networking at a technology event. The AI’s advice: smile, look people directly in the eye, and ask open-ended questions. Did I use this advice later in the day? ABSOLUTELY!

I’ll be going as the Deep Learning Droid this year

Lechner showcased LFMs’ offline mode functionality, proving their reliability even without internet access. He demonstrated LFMs suggesting AI-themed Halloween costumes and providing customer support scenarios, both online and offline, with nearly identical results.

At this point, Ramin Hasani returned to the stage to discuss LFMs’ private deployment options and their high quality and efficiency. He emphasized Liquid AI’s commitment to co-developing and deploying AI in the most cost-efficient manner, inviting partnerships to harness LFMs’ potential fully. To that end, the second half of the program focused on fireside chats and panels with external supporters, partners, and investors.

Fireside Chat with MA Governor Maura Healey
Governor Healey spoke passionately about Massachusetts’ role in fostering innovation and its commitment to supporting AI development. She praised Liquid AI for its revolutionary approach, noting that it embodies the state’s legacy of pioneering technological advancements and its leadership in sustainability. Healey expressed her excitement about Liquid AI’s potential to drive economic growth and position Massachusetts as a global hub for generative AI. She announced that Massachusetts has a $100M fund for applied AI companies, signaling the state’s dedication to nurturing AI startups and fostering a vibrant AI ecosystem.

She raised a good point about the geography of AI companies, which got me thinking: is Liquid AI Boston’s answer to San Francisco-based OpenAI and ChatGPT? I am thinking the answer will be yes.

Ranjit Bawa, Chief Strategy and Technology Officer, Deloitte:
Bawa noted that Deloitte leaders use AI every day and have been doing so for a couple of years now. He emphasized the leadership team’s commitment to leveraging AI effectively, continuously evaluating what works well and what doesn’t. Bawa discussed ongoing efforts to disseminate AI knowledge and usage throughout the organization, pushing AI adoption down to the rank and file. He identified three significant challenges businesses face in adopting AI: focusing on real return on investment (ROI), ensuring trustworthy and observable AI systems, and embracing change without feeling threatened by AI. Bawa expressed optimism about AI’s impact on the future of work, highlighting its potential to boost productivity and create new job opportunities. He acknowledged the challenges businesses face in adapting to AI, stressing the importance of agility and flexibility. Looking ahead, Bawa envisions a future where AI plays a central role in corporate leadership, suggesting we may one day see AI CEOs and CTOs, just as we already have CTO co-pilots.

Raymond Liao, Founding Managing Director, Samsung Next
Liao highlighted the potential of generative AI as a major opportunity for Samsung, surpassing even the cloud era. He discussed the need for increased computational power closer to users, envisioning a future where wearable compute runs private AI for individuals and businesses.

State of Generative AI Panel:
The panelists discussed the current state of AI, noting the remarkable progress made in the past 18 months. They identified three key trends: scaling up data and models, improving efficiency through scaling down, and enhancing models’ ability to think critically before providing answers. They predicted we will see AI continually learn and get better and better. They noted we are just at the beginning of seeing AI learn the way humans learn, by having more conversations and gaining experience, just at a faster scale. The panelists emphasized the importance of ethical considerations and continuous learning in AI development.

Ichiro Tsuge, CEO TSC, AI in Japan
Tsuge discussed the burgeoning interest in AI among Japanese enterprises, emphasizing its potential to drive productivity and innovation amid initial concerns over privacy. He noted that Japan is transitioning towards a growth economy, necessitating strategic investments to counteract the decline in the working-age population. Tsuge argued that AI represents a crucial avenue for growth, with applications spanning diverse industries, from agriculture to finance. Although the current rate of AI adoption remains low, Tsuge predicted a surge in AI integration across various sectors in the coming year. He underscored the significance of collaboration between Liquid AI and Japanese companies to maximize the effectiveness of LFMs and tailor solutions to local needs.

Keith Williams, CTO, Capgemini
Williams shared Capgemini’s extensive experience with LFMs, highlighting their application in self-driving cars and energy efficiency improvements in telecommunications. He noted that product engineering is often constrained by two primary factors: correctness and constraints. In the automotive sector, for instance, a guess is insufficient when designing self-driving cars; precision and reliability are paramount. Capgemini has been actively seeking AI models that can enhance correctness while addressing constraints such as computer memory, policies, and regulations. Williams provided a concrete example of an LFM developed by Capgemini for self-driving cars, demonstrating LFMs’ capability to meet stringent requirements while tackling complex problems.

LFM Edge Driving Cars

Additionally, he discussed how Capgemini has assisted telecommunications companies in reducing energy waste by predicting optimal times to switch off towers without compromising service quality to users. This exemplifies LFMs’ potential to solve intricate challenges while adhering to strict standards.

Stuart Schreiber, CEO, Arena BioWorks
Schreiber emphasized the critical role of problem-first thinking in biotechnology, advocating for the use of LFMs to analyze vast amounts of biological data and pinpoint key factors in disease treatment. He highlighted the complexity of the human body, noting that while numerous measurements can be taken, determining how to utilize this data effectively remains a challenge. Schreiber argued for the need for relative generative AI, capable of narrowing down relevant data points to identify effective treatments for diseases. While this approach is currently largely theoretical, Schreiber expressed confidence that it will soon become a reality. His vision for LFMs in healthcare was both ambitious and inspiring, underscoring their potential to revolutionize medical research and patient care.

Ralph Wittig, Head of Research, AMD
Wittig addressed the pressing challenge of scaling AI while maintaining energy efficiency, discussing AMD’s efforts to develop specialized processors optimized for AI workloads. He noted that data centers are approaching the gigawatt barrier, with hundreds or even thousands of GPUs in operation. Despite this, AMD is committed to building processors for AI training and inference that prioritize energy efficiency. Wittig expressed enthusiasm for LFMs’ role in advancing AI scalability, recognizing their potential to drive significant breakthroughs in the field.

Stephen Pagliuca, Co-owner of the Celtics and Senior Advisor at Bain Capital:
As a surprise guest to close out the day, “Pags” came on stage with his ENORMOUS Celtics 2024 Championship Ring. He shared his investment rationale for Liquid AI, citing LFMs’ transformative potential and their ability to address the power and explainability concerns prevalent in other AI models. He also shared his ring with Ramin (I couldn’t get a good pic from my vantage point).

Ramin saying WOOO can I keep this? Pags is not amused

His confidence in the Liquid AI team and their vision for LFMs was palpable, and he emphasized their commitment to making AI ubiquitous and accessible.

Conclusion:
The event concluded with a real sense of excitement and possibility, as speakers and attendees alike recognized LFMs’ potential to reshape industries and drive meaningful change. Liquid AI’s commitment to accessibility, efficiency, and innovation was evident throughout the event.

Liquid AI Team on stage to close out the event
