Magentic: Seamless LLM Integration for Python Developers

Project Description

Magentic is a Python library for seamlessly integrating Large Language Models (LLMs) into Python code. Its @prompt and @chatprompt decorators turn ordinary function signatures into LLM queries that return structured output. The library also supports combining LLM queries and tool use with traditional Python code to build complex agentic systems.

Usage Instructions

  1. Installation:
    pip install magentic
    # or using uv
    uv add magentic
    
  2. Configuration: Set your OpenAI API key via the OPENAI_API_KEY environment variable. Other providers and models can be selected globally or per function; see the configuration sketch after this list and the documentation for provider-specific details.
  3. Basic Usage with @prompt:
    from magentic import prompt
    
    @prompt('Add more "dude"ness to: {phrase}')
    def dudeify(phrase: str) -> str: ...
    
    dudeify("Hello, how are you?")
    # Returns: "Hey, dude! What's up? How's it going, my man?"
    
  4. Structured Output with @prompt:
    from magentic import prompt
    from pydantic import BaseModel
    
    class Superhero(BaseModel):
        name: str
        age: int
        power: str
        enemies: list[str]
    
    @prompt("Create a Superhero named {name}.")
    def create_superhero(name: str) -> Superhero: ...
    
    create_superhero("Garden Man")
    # Returns: Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])
    
  5. Chat Prompting with @chatprompt:
    from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
    from pydantic import BaseModel
    
    class Quote(BaseModel):
        quote: str
        character: str
    
    @chatprompt(
        SystemMessage("You are a movie buff."),
        UserMessage("What is your favorite quote from Harry Potter?"),
        AssistantMessage(
            Quote(
                quote="It does not do to dwell on dreams and forget to live.",
                character="Albus Dumbledore",
            )
        ),
        UserMessage("What is your favorite quote from {movie}?"),
    )
    def get_movie_quote(movie: str) -> Quote: ...
    
    get_movie_quote("Iron Man")
    # Returns: Quote(quote='I am Iron Man.', character='Tony Stark')
    
  6. Function Calling with FunctionCall:
    from typing import Literal
    from magentic import prompt, FunctionCall
    
    def search_twitter(query: str, category: Literal["latest", "people"]) -> str:
        """Searches Twitter for a query."""
        print(f"Searching Twitter for {query!r} in category {category!r}")
        return "<twitter results>"
    
    @prompt(
        "Use the appropriate search function to answer: {question}",
        functions=[search_twitter],
    )
    def perform_search(question: str) -> FunctionCall[str]: ...
    
    output = perform_search("What is the latest news on LLMs?")
    # output: FunctionCall(search_twitter, 'LLMs', 'latest')
    output()
    # Prints: Searching Twitter for 'LLMs' in category 'latest'
    # Returns: '<twitter results>'
    
  7. Prompt Chaining with @prompt_chain:
    from magentic import prompt_chain
    
    def get_current_weather(location: str, unit: str = "fahrenheit") -> dict:
        """Get the current weather in a given location."""
        # Stubbed response for demonstration; a real implementation would call a weather API
        return {"temperature": "72", "forecast": ["sunny", "windy"]}
    
    @prompt_chain(
        "What's the weather like in {city}?",
        functions=[get_current_weather],
    )
    def describe_weather(city: str) -> str: ...
    
    describe_weather("Boston")
    # Returns: 'The current weather in Boston is 72°F and it is sunny and windy.'
    
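For step 2 above: beyond OPENAI_API_KEY, the backend and model can be chosen globally through environment variables or overridden per function. A minimal sketch, assuming the MAGENTIC_BACKEND / MAGENTIC_OPENAI_MODEL settings and the OpenaiChatModel class described in the documentation:

    from magentic import OpenaiChatModel, prompt
    
    # Globally, the backend and default model can be set with environment
    # variables, e.g. MAGENTIC_BACKEND=openai and MAGENTIC_OPENAI_MODEL=gpt-4o
    
    # Per function, pass a model instance to the decorator instead:
    @prompt("Say hello to {name}.", model=OpenaiChatModel("gpt-4o-mini"))
    def greet(name: str) -> str: ...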

Key Features

  • Structured Outputs: Return Pydantic models and built-in Python types directly from LLM queries.
  • Streaming: Stream structured outputs and function calls while they are being generated (see the StreamedStr sketch after this list).
  • LLM-Assisted Retries: When output fails validation against the return type, the error is fed back to the LLM so it can correct itself (sketched below).
  • Observability: Integrates with OpenTelemetry for logging and tracing, including native Pydantic Logfire integration (sketched below).
  • Type Annotations: Decorated functions carry standard type hints, so they work well with linters and IDEs.
  • Configuration: Options for multiple LLM providers, including OpenAI, Anthropic, and Ollama.
  • Chat Prompting: Enables multi-turn conversations and few-shot prompting.
  • Parallel Function Calling: The LLM can request several function calls in a single response (sketched below).
  • Vision: Send images as part of prompts to vision-capable models.
  • Formatting: Tools for controlling output format.
  • Asyncio Support: Decorated functions can be async, allowing concurrent LLM queries (sketched below).
  • Function Chaining: LLM-powered functions can be supplied as tools to other LLM functions, enabling complex agentic flows (sketched below).
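
Streaming works by changing the return annotation. A minimal sketch using the library's StreamedStr type (the prompt and topic are illustrative):

    from magentic import prompt, StreamedStr
    
    @prompt("Tell me about {topic}.")
    def describe(topic: str) -> StreamedStr: ...
    
    # Print the response chunk by chunk as it is generated
    for chunk in describe("quantum computing"):
        print(chunk, end="")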
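
LLM-assisted retries are opt-in via the max_retries parameter: if the output fails Pydantic validation, the validation error is sent back to the model so it can try again. A sketch, with an illustrative validator that forces a retry:

    from typing import Annotated
    
    from magentic import prompt
    from pydantic import AfterValidator, BaseModel
    
    def assert_uppercase(value: str) -> str:
        """Reject names that are not fully uppercase."""
        assert value.isupper(), "Must be uppercase"
        return value
    
    class Country(BaseModel):
        name: Annotated[str, AfterValidator(assert_uppercase)]
    
    @prompt("Return a country.", max_retries=3)
    def get_country() -> Country: ...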
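
The Logfire integration typically needs only a few lines. A sketch assuming the logfire package and an OpenAI backend (instrument_openai traces the underlying client):

    import logfire
    
    from magentic import prompt
    
    logfire.configure()          # Send traces to Pydantic Logfire
    logfire.instrument_openai()  # Capture the underlying OpenAI API calls
    
    @prompt("Say hello to {name}.")
    def greet(name: str) -> str: ...
    
    greet("world")  # This call now shows up in the Logfire trace view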
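
Parallel function calling uses the ParallelFunctionCall return type, which bundles the requested FunctionCall objects and executes them all when invoked. A sketch with two illustrative tools:

    from magentic import prompt, ParallelFunctionCall
    
    def plus(a: int, b: int) -> int:
        return a + b
    
    def minus(a: int, b: int) -> int:
        return a - b
    
    @prompt(
        "Sum {a} and {b}. Also subtract {b} from {a}.",
        functions=[plus, minus],
    )
    def plus_and_minus(a: int, b: int) -> ParallelFunctionCall[int]: ...
    
    output = plus_and_minus(2, 3)
    output()  # Executes both calls, e.g. returning (5, -1)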
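
With asyncio support, a decorated function declared async can be awaited, so independent LLM queries can run concurrently. A minimal sketch (the prompt and topics are illustrative):

    import asyncio
    
    from magentic import prompt
    
    @prompt("Write a one-line summary of {topic}.")
    async def summarize(topic: str) -> str: ...
    
    async def main() -> None:
        # Both queries are issued concurrently rather than sequentially
        print(await asyncio.gather(summarize("asyncio"), summarize("pydantic")))
    
    asyncio.run(main())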
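
Function chaining falls out of the pieces already shown: a @prompt-decorated function is a regular callable, so it can be passed in the functions list of another prompt. A sketch (both prompts are illustrative):

    from magentic import FunctionCall, prompt
    
    @prompt("Write a haiku about {topic}.")
    def write_haiku(topic: str) -> str:
        """Write a haiku about the given topic."""
        ...
    
    @prompt(
        "Use the appropriate tool to answer: {request}",
        functions=[write_haiku],
    )
    def assistant(request: str) -> FunctionCall[str]: ...
    
    call = assistant("Please write me a haiku about autumn")
    print(call())  # Executes write_haiku, which itself queries the LLM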

Target Users

Developers and engineers who want to integrate Large Language Models into their Python applications, especially those focused on:

  • Building agentic systems.
  • Generating structured data from LLMs.
  • Automating tasks with LLM-powered functions.
  • Leveraging LLMs for dynamic function calls based on natural language input.

Application Scenarios

  • Data Extraction and Transformation: Extracting structured information (e.g., names, addresses, product details) from unstructured text.
  • Automated Content Generation: Generating dynamic content like product descriptions, summaries, or creative writing with specific formats.
  • Intelligent Agents and Chatbots: Developing conversational agents that can understand user intent, call external tools (APIs, databases), and return structured responses.
  • Workflow Automation: Creating automated workflows where LLMs make decisions or execute actions by calling predefined Python functions.
  • Natural Language Interfaces: Building interfaces where users can interact with complex systems using natural language, and Magentic translates these inputs into function calls.
  • Code Generation Assistance: Potentially used to generate code snippets or function definitions based on prompts, leveraging the structured output capabilities.
