To try everything Brilliant has to offer—free—for a full 30 days, visit https://brilliant.org/AllAboutAI . You’ll also get 20% off an annual premium subscription.
37% Better Output with 15 Lines of Code - Llama 3 8B (Ollama) & 70B (Groq)
In this video I try to improve a known problem when using RAG with local models like Llama 3 8B on Ollama. This local RAG system was improved by adding just around 15 lines of code. Feel free to share and rate it on GitHub :)
00:00 Llama 3 Improved RAG Intro
02:01 Problem / Solution
03:05 Brilliant.org
04:26 How This Works
12:05 Llama 3 70B Groq
15:12 Conclusion