How to return structured data from a model

Prerequisites

This guide assumes familiarity with the following concepts:

  • Chat models
  • Function/tool calling

It is often useful to have a model return output that matches a specific schema. One common use case is extracting data from arbitrary text to insert into a traditional database or use with some other downstream system. This guide will show you a few different strategies you can use to do this.

The .with_structured_output() method

There are several strategies that models can use under the hood. For some of the most popular model providers, including OpenAI, Anthropic, and Mistral, LangChain implements a common interface that abstracts away these strategies called .with_structured_output.

When you invoke this method (passing in a JSON schema or a Pydantic model), it adds whatever model parameters and output parsers are necessary to get back structured output matching the requested schema. If the model supports more than one way to do this (e.g., function calling vs. JSON mode), you can configure which one is used via the method= argument.

You can find the current list of models that support this method here.

Let's look at some examples of this in action! We'll use Pydantic to create a simple response schema.

pip install -qU langchain-openai
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


structured_llm = llm.with_structured_output(Joke)

structured_llm.invoke("Tell me a joke about cats")
Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None)

The result is a Pydantic model. Note that the name of the model and the names and provided descriptions of its parameters are very important, as they help guide the model's output.
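For example, the fields you declared are available as ordinary attributes on the returned object:

joke = structured_llm.invoke("Tell me a joke about cats")

# Each field declared on Joke is available as an attribute of the result.
print(joke.setup)
print(joke.punchline)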

You can also pass in an OpenAI-style JSON schema dict if you prefer not to use Pydantic. This dict should contain three properties:

  • name: The name of the schema to output.
  • description: A high level description of the schema to output.
  • parameters: The nested details of the schema you want to extract, formatted as a JSON schema dict.

In this case, the response is also a dict:

structured_llm = llm.with_structured_output(
    {
        "name": "joke",
        "description": "Joke to tell user.",
        "parameters": {
            "title": "Joke",
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "The setup for the joke"},
                "punchline": {"type": "string", "description": "The joke's punchline"},
            },
            "required": ["setup", "punchline"],
        },
    }
)

structured_llm.invoke("Tell me a joke about cats")
{'setup': 'Why was the cat sitting on the computer?',
 'punchline': 'To keep an eye on the mouse!'}

Choosing between multiple schemas

If you have multiple schemas that are valid outputs for the model, you can use Pydantic's Union type:

from typing import Union

from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")


class ConversationalResponse(BaseModel):
    response: str = Field(description="A conversational response to the user's query")


class Response(BaseModel):
    output: Union[Joke, ConversationalResponse]


structured_llm = llm.with_structured_output(Response)

structured_llm.invoke("Tell me a joke about cats")
Response(output=Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!'))
structured_llm.invoke("How are you today?")
Response(output=ConversationalResponse(response="I'm just a collection of code, so I don't have feelings, but thanks for asking! How can I assist you today?"))

If you are using JSON Schema, you can take advantage of other more complex schema descriptions to create a similar effect.
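For instance, here is a rough sketch of what this could look like using JSON Schema's anyOf keyword to accept either a joke or a conversational response. The schema below is illustrative only, and how faithfully anyOf is honored can vary by provider:

structured_llm = llm.with_structured_output(
    {
        "name": "response",
        "description": "Either a joke or a conversational response to the user.",
        "parameters": {
            "title": "Response",
            "type": "object",
            "properties": {
                "output": {
                    # The model may return either of these two shapes.
                    "anyOf": [
                        {
                            "type": "object",
                            "properties": {
                                "setup": {"type": "string"},
                                "punchline": {"type": "string"},
                            },
                            "required": ["setup", "punchline"],
                        },
                        {
                            "type": "object",
                            "properties": {"response": {"type": "string"}},
                            "required": ["response"],
                        },
                    ]
                }
            },
            "required": ["output"],
        },
    }
)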

You can also use tool calling directly to allow the model to choose between options, if your chosen model supports it. This involves a bit more parsing and setup. See this how-to guide for more details.
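As a rough sketch of that direct tool-calling approach, assuming your model supports OpenAI-style tool calling (the wiring below is illustrative; see the linked guide for the full treatment):

from langchain_core.output_parsers.openai_tools import PydanticToolsParser

# Bind both schemas as tools so the model can pick whichever fits the query.
llm_with_tools = llm.bind_tools([Joke, ConversationalResponse])

# Parse any resulting tool calls back into Pydantic objects.
chain = llm_with_tools | PydanticToolsParser(tools=[Joke, ConversationalResponse])

chain.invoke("Tell me a joke about cats")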

Specifying the output method (Advanced)

For models that support more than one means of outputting data, you can specify the preferred one like this:

structured_llm = llm.with_structured_output(Joke, method="json_mode")

structured_llm.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!')

In the above example, we use OpenAI's alternate JSON mode capability along with a more specific prompt. Note that with JSON mode, the schema is not passed to the model automatically, so the desired keys must be spelled out in the prompt itself.

For specifics about the model you choose, peruse its entry in the API reference pages.

Prompting techniques

You can also prompt models to output information in a given format. This approach relies on designing good prompts and then parsing the model's output. It is the only option for models that don't support .with_structured_output() or other built-in approaches.

Using PydanticOutputParser

The following example uses the built-in PydanticOutputParser to parse the output of a chat model prompted to match the given Pydantic schema. Note that we are adding format_instructions directly to the prompt from a method on the parser:

from typing import List

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Set up a parser
parser = PydanticOutputParser(pydantic_object=People)

# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
        ),
        ("human", "{query}"),
    ]
).partial(format_instructions=parser.get_format_instructions())

Let’s take a look at what information is sent to the model:

query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.format_prompt(query=query).to_string())
System: Answer the user query. Wrap the output in `json` tags
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:

{"description": "Identifying information about all people in a text.", "properties": {"people": {"title": "People", "type": "array", "items": {"$ref": "#/definitions/Person"}}}, "required": ["people"], "definitions": {"Person": {"title": "Person", "description": "Information about a person.", "type": "object", "properties": {"name": {"title": "Name", "description": "The name of the person", "type": "string"}, "height_in_meters": {"title": "Height In Meters", "description": "The height of the person expressed in meters.", "type": "number"}}, "required": ["name", "height_in_meters"]}}}

Human: Anna is 23 years old and she is 6 feet tall

And now let's invoke it:

chain = prompt | llm | parser

chain.invoke({"query": query})
People(people=[Person(name='Anna', height_in_meters=1.8288)])

For a deeper dive into using output parsers with prompting techniques for structured output, see this guide.

Custom Parsing

You can also create a custom prompt and parser with LangChain Expression Language (LCEL), using a plain function to parse the output from the model:

import json
import re
from typing import List

from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Output your answer as JSON that "
            "matches the given schema: ```json\n{schema}\n```. "
            "Make sure to wrap the answer in ```json and ``` tags",
        ),
        ("human", "{query}"),
    ]
).partial(schema=People.schema())


# Custom parser
def extract_json(message: AIMessage) -> List[dict]:
    """Extracts JSON content from a string where JSON is embedded between ```json and ``` tags.

    Parameters:
        message (AIMessage): The message containing the JSON content.

    Returns:
        list: A list of parsed JSON objects.
    """
    text = message.content

    # Define the regular expression pattern to match JSON blocks
    pattern = r"```json(.*?)```"

    # Find all non-overlapping matches of the pattern in the string
    matches = re.findall(pattern, text, re.DOTALL)

    # Parse each matched JSON string, stripping any leading or trailing whitespace
    try:
        return [json.loads(match.strip()) for match in matches]
    except Exception:
        raise ValueError(f"Failed to parse: {message}")

Here is the prompt sent to the model:

query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.format_prompt(query=query).to_string())
System: Answer the user query. Output your answer as JSON that matches the given schema: ```json
{'title': 'People', 'description': 'Identifying information about all people in a text.', 'type': 'object', 'properties': {'people': {'title': 'People', 'type': 'array', 'items': {'$ref': '#/definitions/Person'}}}, 'required': ['people'], 'definitions': {'Person': {'title': 'Person', 'description': 'Information about a person.', 'type': 'object', 'properties': {'name': {'title': 'Name', 'description': 'The name of the person', 'type': 'string'}, 'height_in_meters': {'title': 'Height In Meters', 'description': 'The height of the person expressed in meters.', 'type': 'number'}}, 'required': ['name', 'height_in_meters']}}}
```. Make sure to wrap the answer in ```json and ``` tags
Human: Anna is 23 years old and she is 6 feet tall

And here's what it looks like when we invoke it:

chain = prompt | llm | extract_json

chain.invoke({"query": query})
[{'people': [{'name': 'Anna', 'height_in_meters': 1.8288}]}]
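Since the custom parser returns plain dicts rather than Pydantic objects, you could optionally validate the result against the schema afterwards. This extra step is our own addition rather than part of the original chain:

# Optional: validate the first parsed JSON block against the People schema.
people = People.parse_obj(chain.invoke({"query": query})[0])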

Next steps

Now you've learned a few methods to make a model output structured data.

To learn more, check out the other how-to guides in this section, or the conceptual guide on tool calling.

