Building & Evaluating an Advanced Query Engine over your Data with LlamaIndex
What to expect?
Setting up a Retrieval-Augmented Generation (RAG) system around a Large Language Model (LLM) has become a popular way to build LLM-powered search engines and chatbots. LlamaIndex provides the tools to build both basic and advanced RAG query engines over your data. However, observability and evaluation remain major pain points: how do we understand what is going on inside a RAG pipeline (especially as the systems get more complex), and how do we properly measure its performance? In this talk, we show how using LlamaIndex in combination with W&B can help you not only build an advanced query engine over your data, but also observe and evaluate it.
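To make the setup concrete, here is a minimal sketch of the kind of pipeline discussed in the talk: a basic LlamaIndex query engine over local documents, with traces logged to W&B. It assumes a LlamaIndex release that ships the WandbCallbackHandler integration and an OpenAI API key in the environment; the project name and data directory are placeholders, and exact import paths may differ across LlamaIndex versions.

```python
# Sketch: build a simple RAG query engine and send traces to a W&B run.
# Assumes a LlamaIndex version with the W&B callback integration and
# OPENAI_API_KEY set in the environment. Names like "llamaindex-rag-demo"
# and "./data" are placeholders.
from llama_index import SimpleDirectoryReader, VectorStoreIndex, ServiceContext
from llama_index.callbacks import CallbackManager, WandbCallbackHandler

# Route LlamaIndex events (indexing, retrieval, LLM calls) to a W&B run.
wandb_callback = WandbCallbackHandler(run_args={"project": "llamaindex-rag-demo"})
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([wandb_callback])
)

# Load documents from disk and build a vector-store index over them.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Ask a question; the retrieval and synthesis steps appear as a trace in W&B.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What are the main topics covered in these documents?")
print(response)

wandb_callback.finish()  # close the W&B run
```

The same callback-based tracing applies unchanged as the query engine grows more advanced (rerankers, sub-question decomposition, and so on), which is what makes it useful for debugging the more complex RAG setups covered later in the talk.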