Note: This is an in-person meetup @LlamaIndex HQ in SF!
Stop by our meetup to learn about the latest innovations in building production-grade retrieval-augmented generation (RAG) engines for your company, with speakers from Activeloop, LlamaIndex, and Tryolabs.
We all know by now that getting RAG systems from a shiny demo on X to production is hard. RAG can be inaccurate and inconsistent, and often insufficient to answer the deeper, more mission-critical questions companies want to answer in production.
Vanilla RAG may not be enough, but there are multiple strategies that go beyond it: rethinking how you organize, store, and retrieve your data, and combining RAG with fine-tuning of the LLM to resolve the 'last mile' problem.
Come learn from industry experts from Tryolabs, LlamaIndex, and Activeloop about the latest advancements in GenAI workflows. Network with industry peers at a free event with drinks and food!
Activeloop: Memory Infrastructure for RAG Engines
Presented by: Mikayel Harutyunyan
Learn how we fine-tuned Llama 3 (8B) and used advanced retrieval techniques like Activeloop's Deep Memory with RAG to surpass GPT-4 in knowledge retrieval.
Tryolabs: Building Reliable, Production-Grade GenAI Apps
Presented by: Diego Kiedanski
As AI continues to reshape industries, the challenge lies in efficiently deploying and scaling these solutions. MLOps, the fusion of machine learning and operational practices, offers a strategic guide for AI-driven enterprises. In this talk, we will outline the process and challenges of building reliable GenAI products, and how to overcome them.
LlamaIndex: RAG and Agents in 2024
Presented by: Laurie Voss
RAG is currently the main paradigm that enables developers to connect LLMs to external data sources. At the same time, the AI space is advancing dramatically: customers are building more complex reasoning workflows, and LLMs are getting better, faster, and cheaper, with much longer context windows. This talk covers what it means to build RAG and agents this year. We will discuss the best practices that have emerged, and present ideas for how the field will evolve, both in response to evolving user needs for automated knowledge synthesis and in response to long-context LLM advancements.