LangChain is a Python library that provides a unified interface for working with a wide range of language models. It is designed to simplify integrating these models into your applications, making it easier to switch between providers and experiment with different approaches.
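For a flavor of that unified interface, here is a minimal sketch, assuming the classic `langchain` package layout (as described in the documentation summarized here) and API keys for OpenAI and Cohere set in the environment; the prompt and provider choices are placeholders.

```python
# A minimal sketch of LangChain's unified LLM interface (assumes the classic
# `langchain` package layout and valid OPENAI_API_KEY / COHERE_API_KEY values).
from langchain.llms import OpenAI, Cohere

llms = {
    "openai": OpenAI(temperature=0),   # reads OPENAI_API_KEY from the environment
    "cohere": Cohere(temperature=0),   # reads COHERE_API_KEY from the environment
}

prompt = "Summarize LangChain in one sentence."

# The same call works regardless of which provider backs the model,
# so swapping models is a one-line change.
for name, llm in llms.items():
    print(name, "->", llm(prompt))
```

Because every wrapper exposes the same call signature, switching providers or comparing their outputs side by side requires no changes to the surrounding application code.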
Getting Started with LangChain
The documentation includes a Quickstart Guide to help you get up and running quickly, a Concepts section to familiarize you with the fundamental ideas behind LangChain, and a collection of Tutorials that walk through specific tasks and use cases.
Modules in LangChain
LangChain is organized into several modules, each focusing on a different aspect of working with language models:
Models: This module provides interfaces for working with various types of language models, including Large Language Models (LLMs). It includes guides on using the async API for LLMs, writing a custom LLM wrapper, using the fake LLM for testing, and more.
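As a rough illustration of what the custom-wrapper guide covers, the sketch below subclasses the classic `langchain.llms.base.LLM` base class; the `EchoLLM` name and its `n` parameter are invented for this example, and a real wrapper would call out to an actual model or API inside `_call`.

```python
# A rough sketch of a custom LLM wrapper (assumes the classic
# `langchain.llms.base.LLM` base class; EchoLLM is a made-up example).
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM


class EchoLLM(LLM):
    """A toy LLM that simply echoes the last few characters of the prompt."""

    n: int = 20  # hypothetical parameter: how many characters to echo back

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # A real wrapper would invoke your model or a remote API here.
        return prompt[-self.n:]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"n": self.n}


llm = EchoLLM()
print(llm("Hello from a custom LLM wrapper"))
```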
Integrations: LangChain supports a wide range of integrations with various language model providers and platforms, including AI21, Aleph Alpha, Anyscale, Aviary, Azure OpenAI, Banana, Baseten, Beam, Bedrock, CerebriumAI, Cohere, C Transformers, Databricks, DeepInfra, ForefrontAI, Google Cloud Platform Vertex AI PaLM, GooseAI, GPT4All, Hugging Face Hub, Hugging Face Pipeline, Huggingface TextGen Inference, Jsonformer, Llama-cpp, Manifest, Modal, MosaicML, NLP Cloud, OpenAI, OpenLM, Petals, PipelineAI, Prediction Guard, PromptLayer OpenAI, ReLLM, Replicate, Runhouse, SageMaker Endpoint, StochasticAI, Writer, and many more.
Chat Models: This module provides tools for working with chat models, including guides on using few-shot examples and streaming responses.
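Here is a minimal sketch of streaming a chat response, assuming the classic `langchain.chat_models` module, the stdout callback handler shipped with the library, and an `OPENAI_API_KEY` in the environment; the message content is a placeholder.

```python
# A minimal sketch of streaming a chat model response (assumes the classic
# `langchain.chat_models` module and an OPENAI_API_KEY in the environment).
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

chat = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens as they arrive
    temperature=0,
)

# Each token is written to stdout by the callback as the model generates it.
chat([HumanMessage(content="Write a haiku about language models.")])
```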
Text Embedding Models: This module provides interfaces for working with text embedding models, including guides on how to use models from Aleph Alpha, Amazon Bedrock, Azure OpenAI, Cohere, DashScope, DeepInfra, Elasticsearch, Embaas, Fake Embeddings, Google Vertex AI PaLM, Hugging Face Hub, HuggingFace Instruct, Jina, Llama-cpp, MiniMax, ModelScope, MosaicML, OpenAI, SageMaker Endpoint, Self Hosted Embeddings, Sentence Transformers, Tensorflow Hub, and more.
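To give a flavor of the embedding interface, here is a short sketch using the OpenAI wrapper from the classic `langchain.embeddings` module; any of the providers listed above could stand in, and the sample texts are placeholders.

```python
# A short sketch of the text embedding interface (assumes the classic
# `langchain.embeddings` module and an OPENAI_API_KEY in the environment).
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# embed_query returns a single vector; embed_documents returns one vector per input.
query_vector = embeddings.embed_query("What is LangChain?")
doc_vectors = embeddings.embed_documents(
    ["LangChain provides a unified LLM interface.", "It also wraps embedding models."]
)

print(len(query_vector), len(doc_vectors), len(doc_vectors[0]))
```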
LangChain is a powerful tool for anyone working with language models in Python. Its modular design and wide range of integrations make it a flexible and versatile library that can adapt to a variety of use cases. Whether you're a researcher experimenting with different models, a developer integrating language models into your application, or a data scientist exploring the capabilities of language models, LangChain has something to offer you.