Multi-Vector Retriever for RAG on tables, text, and images
Summary: Seamless question-answering across diverse data types (images, text, tables) is one of the holy grails of RAG. We’re releasing three new cookbooks that
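The core idea behind a multi-vector retriever can be sketched in plain Python: several derived representations (a summary, a table or image description) all point back to one parent document, and any match resolves to that parent. The class and method names below are illustrative, not LangChain's actual API, and the word-overlap "similarity" is a stand-in for real embeddings.

```python
# Toy sketch of the multi-vector retriever idea: derived representations
# (summaries, table/image descriptions) are indexed alongside the id of
# their parent document, and a hit on any of them returns the full parent.
# Names are illustrative, not LangChain's actual API.

class ToyMultiVectorStore:
    def __init__(self):
        self.index = []  # (derived_text, parent_id) pairs

    def add(self, parent_id, derived_texts):
        for text in derived_texts:
            self.index.append((text, parent_id))

    def retrieve(self, query, docs):
        # Naive "similarity": count shared lowercase words. A real system
        # would use embeddings and a vector store here.
        def score(text):
            return len(set(query.lower().split()) & set(text.lower().split()))
        best_text, parent_id = max(self.index, key=lambda pair: score(pair[0]))
        return docs[parent_id]  # resolve the hit back to the parent document

docs = {
    "doc1": "Full report with a large revenue table ...",
    "doc2": "Full architecture writeup with a component diagram ...",
}
store = ToyMultiVectorStore()
store.add("doc1", ["summary: quarterly revenue figures by region"])
store.add("doc2", ["summary: system architecture with an image of components"])

print(store.retrieve("what were the revenue figures", docs))
```

The point of the indirection is that the representation you embed (a concise summary of a table or image) can be much easier to match against a question than the raw parent content you ultimately hand to the LLM.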
LangServe Playground and Configurability
Last week we launched LangServe, a way to easily deploy chains and agents in a production-ready manner. Specifically, it takes a chain and easily spins
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text has been
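One way to read the technique: declare a function whose arguments are (subject, relation, object) triples, let the model "call" it, and parse the returned arguments into graph edges. Below is a minimal sketch of such a schema in the OpenAI function-calling format and a parser for the arguments; the function name, helper, and sample payload are our own illustrations, not the post's actual code.

```python
import json

# Sketch of a function schema for knowledge-graph triple extraction, in the
# OpenAI function-calling format. The name "extract_triples" and the helper
# below are illustrative, not from the post or the Neo4j integration.
EXTRACT_TRIPLES_SCHEMA = {
    "name": "extract_triples",
    "description": "Extract (subject, relation, object) triples from text.",
    "parameters": {
        "type": "object",
        "properties": {
            "triples": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "subject": {"type": "string"},
                        "relation": {"type": "string"},
                        "object": {"type": "string"},
                    },
                    "required": ["subject", "relation", "object"],
                },
            }
        },
        "required": ["triples"],
    },
}

def parse_function_call(arguments_json):
    """Turn the model's function-call arguments (a JSON string) into tuples."""
    args = json.loads(arguments_json)
    return [(t["subject"], t["relation"], t["object"]) for t in args["triples"]]

# What the arguments of a model's function call might look like:
raw = '{"triples": [{"subject": "Tomaz Bratanic", "relation": "WORKS_AT", "object": "Neo4j"}]}'
print(parse_function_call(raw))
```

Because the schema constrains the output shape, the parsing step stays trivial; the triples can then be written to a graph database as nodes and relationships.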
A Chunk by Any Other Name: Structured Text Splitting and Metadata-enhanced RAG
There's something of a structural irony in the fact that building context-aware LLM applications typically begins with a systematic process of decontextualization, wherein
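The remedy the title hints at can be sketched simply: split at the document's own structural boundaries (headings) and carry the heading into each chunk as metadata, so retrieved chunks keep some of the context naive fixed-size splitting throws away. LangChain ships real splitters for this; the function below is a self-contained toy illustration, not the library's implementation.

```python
# Toy sketch of structure-aware splitting: chunk a markdown document at
# headings and attach the governing heading to each chunk as metadata.
# Illustrative only; LangChain provides production splitters for this.

def split_by_headers(markdown_text):
    chunks, current_header, current_lines = [], None, []
    for line in markdown_text.splitlines():
        if line.startswith("#"):
            # A new heading closes the previous chunk.
            if current_lines:
                chunks.append({"metadata": {"header": current_header},
                               "text": "\n".join(current_lines).strip()})
            current_header, current_lines = line.lstrip("# ").strip(), []
        else:
            current_lines.append(line)
    if current_lines:
        chunks.append({"metadata": {"header": current_header},
                       "text": "\n".join(current_lines).strip()})
    return chunks

doc = "# Setup\nInstall the package.\n# Usage\nCall the function."
for chunk in split_by_headers(doc):
    print(chunk["metadata"]["header"], "->", chunk["text"])
```

At retrieval time, the header metadata can be filtered on or prepended to the chunk text, re-attaching the context that splitting removed.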
You.com x LangChain
Editor's Note: The following is a guest blog post from our friends at You.com. We've seen a lot of interesting
The Prompt Landscape
Context: Prompt engineering can steer LLM behavior without updating the model weights. A variety of prompts for different use cases have emerged (e.g., see @dair_
Test Run Comparisons
One pattern I noticed is that great AI researchers are willing to manually inspect lots of data. And more than that, they build infrastructure that
Testing Fine-Tuned Open Source Models in LangSmith
Editor's Note: This blog post was written by Ryan Brandt, the CTO and Cofounder of ChatOpenSource, a business specializing in enterprise AI chat
How to design an Agent for Production
Editor's Note: This post was written by Dexter Storey, Sarim Malik, and Ted Spare from the Rubric Labs team.
Important Links:
* GitHub repository
Building LLM-Powered Web Apps with Client-Side Technology
The initial version of this blog post was a talk for Google’s internal WebML Summit 2023, which you can check out here. It’s
Introducing LangServe, the best way to deploy your LangChains
We think the LangChain Expression Language (LCEL) is the quickest way to prototype the brains of your LLM application. The next exciting step is to
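The composition idea behind LCEL can be illustrated in a few lines: each step is a callable, and the `|` operator chains steps into a pipeline with a single `.invoke()`. This is a toy sketch of the concept only, not LangChain's actual `Runnable` classes, and the "LLM" here is a stand-in function.

```python
# Toy illustration of pipe composition, the idea behind LCEL: each step
# exposes .invoke(), and `|` composes steps left to right into one pipeline.
# A sketch of the concept, not LangChain's real Runnable implementation.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # self | other: run self first, feed its output to other.
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda topic: f"Tell me a joke about {topic}.")
fake_llm = Step(lambda text: f"LLM response to: {text}")

chain = prompt | fake_llm
print(chain.invoke("bears"))
```

Because composition itself returns a `Step`, pipelines nest arbitrarily, which is what makes a declaratively composed chain easy for a server like LangServe to introspect and expose.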
Fine-tuning ChatGPT: Surpassing GPT-4 Summarization Performance – A 63% Cost Reduction and 11x Speed Enhancement using Synthetic Data and LangSmith
Editor's Note: This post was written by Charlie George, machine learning engineer at Elicit.
Summary:
* Fine-tuned ChatGPT beats GPT-4 for news article summarization