OpenLLMetry - Observing the Quality of LLMs with Nir Gazit
Description
It's only been a year since ChatGPT was introduced. Since then we have seen LLMs (Large Language Models) and generative AI being integrated into everyday software applications. Developers have...
Tune in to this session where we have Nir Gazit, CEO and co-founder of Traceloop, educating us about how to observe and quantify the quality of LLMs. Besides performance and cost, engineers need to look into quality attributes such as accuracy, readability, and grammatical correctness.
Nir introduces us to OpenLLMetry, a set of open-source extensions built on top of OpenTelemetry that provide automated observability into LLM usage, helping developers understand how to optimize it. His advice to every developer: start measuring the quality of your LLMs on day one and continuously evaluate as you change your model, your prompt, and the way you interact with your LLM stack!
If you have more questions about LLM observability, check out the following links:
OpenLLMetry GitHub Page: https://github.com/traceloop/openllmetry
Traceloop Website: https://www.traceloop.com/
OpenLLMetry Documentation: https://traceloop.com/docs/openllmetry
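To give a feel for what "automated observability" means in practice, here is a minimal sketch of wiring OpenLLMetry into a Python application, based on the quickstart in the documentation linked above. The app name, model, and prompt are placeholders, and the exact `Traceloop.init` parameters may vary between SDK versions, so treat this as an illustration rather than a reference.

```python
# Minimal sketch: instrumenting an OpenAI call with OpenLLMetry (traceloop-sdk).
# Install with: pip install traceloop-sdk openai
from openai import OpenAI
from traceloop.sdk import Traceloop

# Initialize OpenLLMetry. This registers OpenTelemetry instrumentation for
# supported LLM client libraries so prompts, completions, and token usage
# are captured as spans. "demo-app" is a placeholder application name.
Traceloop.init(app_name="demo-app", disable_batch=True)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# This call is traced automatically; no extra tracing code is required.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize OpenTelemetry in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the traces are standard OpenTelemetry spans, they can be exported to Traceloop or to any OpenTelemetry-compatible backend you already use.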
Information
Author: PurePerformance
Organization: PurePerformance