Using LangSmith to Support Fine-tuning

Summary

We created a guide for fine-tuning and evaluating LLMs, using LangSmith for dataset management and evaluation. We did this both with an open-source LLM (trained on CoLab with HuggingFace) and with OpenAI's new fine-tuning service. As a test case, we fine-tuned LLaMA2-7b-chat and gpt-3.5-turbo on an extraction task (knowledge graph triple extraction), using training data exported from LangSmith, and then evaluated the results with LangSmith. The CoLab guide is here.
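To make the export step concrete, here is a minimal sketch (not the guide's actual code) that pulls a LangSmith dataset with the langsmith Python client and writes it out in OpenAI's chat fine-tuning JSONL format. The dataset name "knowledge-graph-triples", the system prompt, and the "input"/"output" example keys are assumptions; adjust them to whatever schema your own LangSmith dataset uses.

# Sketch: export a LangSmith dataset and format it for OpenAI fine-tuning.
import json
from langsmith import Client

client = Client()  # reads the LangSmith API key from the environment

SYSTEM_PROMPT = (
    "Extract knowledge graph triples (subject, relation, object) "
    "from the user's text."
)

records = []
# Assumed dataset name and example keys; replace with your own.
for example in client.list_examples(dataset_name="knowledge-graph-triples"):
    records.append(
        {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": example.inputs["input"]},
                {"role": "assistant", "content": example.outputs["output"]},
            ]
        }
    )

# One JSON object per line, the format expected for a fine-tuning file upload.
with open("triples_train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

From here, the JSONL file can be uploaded to OpenAI's fine-tuning service, or the same exported examples can be reshaped into a HuggingFace dataset for the CoLab / LLaMA2-7b-chat path; in both cases the resulting model can then be evaluated against a LangSmith dataset, as described in the guide.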
