Building Modular AI Workflows with Anthropic’s Claude and LangGraph
This guide walks through implementing LangGraph, a lightweight framework for building AI workflows on top of Anthropic’s Claude API. By following the tutorial, developers will learn how to construct and visualize workflows that answer questions, analyze responses, and compose technical content.
1. Setting Up Your Environment
Before you start, ensure you have the necessary libraries installed. Here’s how to set up your environment securely:
from getpass import getpass
import os
anthropic_key = getpass("Enter your Anthropic API key: ")
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
print("Key set:", "ANTHROPIC_API_KEY" in os.environ)
This code prompts for your Anthropic API key without echoing it to the screen and stores it in an environment variable, so later API calls can read the key without it being hard-coded in your notebook.
2. Importing Required Libraries
Next, import the essential libraries needed for building and visualizing AI workflows:
import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output
Together, these libraries cover the moving parts of the framework: requests and json for calling the Claude API and handling its responses, dataclasses and typing for node definitions, networkx for building the dependency graph, and matplotlib with the IPython display utilities for visualization.
3. Creating the LangGraph Class
The LangGraph class serves as a framework for constructing and executing AI workflows. It allows users to define modular nodes, which can be Claude-powered prompts or custom functions, and visualize the entire process.
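The methods in the following subsections belong to this class. The constructor itself is not reproduced here; a minimal sketch, assuming only the three attributes those methods rely on (nodes, graph, and state), could look like this:
import networkx as nx

class LangGraph:
    def __init__(self):
        self.nodes = {}            # registry of NodeConfig objects keyed by node name
        self.graph = nx.DiGraph()  # directed graph of data dependencies between nodes
        self.state = {}            # shared state dictionary populated during execution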
Defining Node Configurations
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
This structure enables modular and reusable node definitions for your AI tasks.
Adding Nodes to the Graph
def add_node(self, node_config: NodeConfig):
    self.nodes[node_config.name] = node_config
    self.graph.add_node(node_config.name)
    for input_node in node_config.inputs:
        if input_node in self.nodes:
            self.graph.add_edge(input_node, node_config.name)
    return self
This method adds a node to the graph and establishes dependencies based on input nodes.
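Because add_node returns self, node definitions can be chained. A small illustrative wiring (the node names and functions here are hypothetical, not part of the examples later in this guide) might look like this:
graph = LangGraph()
graph.add_node(NodeConfig(
    name="loader",
    function=lambda state, **kwargs: "raw text",   # hypothetical source node
    outputs=["raw_text"],
)).add_node(NodeConfig(
    name="cleaner",
    function=lambda state, raw_text=None, **kwargs: raw_text.strip(),
    inputs=["raw_text"],                            # creates the loader -> cleaner edge
    outputs=["clean_text"],
))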
4. Visualizing the Workflow
To visualize your workflow, use the following code:
def visualize(self):
    plt.figure(figsize=(10, 6))
    pos = nx.spring_layout(self.graph)
    nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
            node_size=1500, arrowsize=20, font_size=10)
    plt.title("LangGraph Flow")
    plt.tight_layout()
    plt.show()
This function creates a visual representation of your workflow, making it easier to understand data flow and task dependencies.
5. Executing the Workflow
To execute the graph in the correct order, use this method:
def execute(self, initial_state: Dict[str, Any] = None):
    self.state = initial_state or {}
    execution_order = self._get_execution_order()
    for node_name in execution_order:
        node = self.nodes[node_name]
        inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
        result = node.function(self.state, **inputs)
        if len(node.outputs) == 1:
            self.state[node.outputs[0]] = result
        elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
            for i, output_name in enumerate(node.outputs):
                self.state[output_name] = result[i]
    return self.state
This method ensures each node executes in the correct order, passing necessary inputs and storing results effectively.
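Note that execute relies on a helper, _get_execution_order, which is not shown above. Since the dependencies live in a NetworkX directed graph, a minimal sketch can simply defer to a topological sort:
def _get_execution_order(self):
    # A topological sort runs every node after the nodes it depends on
    # and raises an error if the graph contains a cycle.
    return list(nx.topological_sort(self.graph))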
6. Example Workflows
Simple Question-Answering Example
Here’s how to run a basic question-answering workflow:
def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    graph = LangGraph()
    graph.transform_node(name="question_provider", transform_fn=lambda state, **kwargs: question, outputs=["user_question"])
    graph.claude_node(name="question_answerer", prompt_template="Answer this question clearly and concisely: {user_question}", inputs=["user_question"], outputs=["answer"])
    graph.claude_node(name="answer_analyzer", prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}", inputs=["user_question", "answer"], outputs=["analysis"])
    graph.visualize()
    result = graph.execute()
    return graph
This example illustrates how to build a simple workflow that answers a question and analyzes the response.
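Both examples call two convenience methods, transform_node and claude_node, that wrap add_node. Their implementations are not shown in the excerpts above; the sketch below assumes claude_node fills the prompt template from the shared state and calls the Anthropic Messages API directly with requests, and the model name is an assumption you may need to change:
def transform_node(self, name, transform_fn, inputs=None, outputs=None):
    # Wrap a plain Python function as a graph node.
    return self.add_node(NodeConfig(name=name, function=transform_fn,
                                    inputs=inputs or [], outputs=outputs or []))

def claude_node(self, name, prompt_template, inputs=None, outputs=None):
    # Build a node whose function fills the prompt template from state
    # and sends it to Claude via the Messages API.
    def call_claude(state, **kwargs):
        prompt = prompt_template.format(**{k: state.get(k, "") for k in (inputs or [])})
        response = requests.post(
            "https://api.anthropic.com/v1/messages",
            headers={
                "x-api-key": os.environ["ANTHROPIC_API_KEY"],
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
            json={
                "model": "claude-3-5-sonnet-20240620",  # assumed model name
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            },
        )
        response.raise_for_status()
        return response.json()["content"][0]["text"]
    return self.add_node(NodeConfig(name=name, function=call_claude,
                                    inputs=inputs or [], outputs=outputs or []))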
Advanced Blog Post Creation Example
For a more complex example, here’s how to generate a complete blog post:
def run_advanced_example():
    graph = LangGraph()
    graph.transform_node(name="topic_selector", transform_fn=lambda state, **kwargs: "Graph-based AI systems", outputs=["topic"])
    graph.claude_node(name="outline_generator", prompt_template="Create a brief outline for a technical blog post about {topic}.", inputs=["topic"], outputs=["outline"])
    graph.claude_node(name="intro_writer", prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}", inputs=["topic", "outline"], outputs=["introduction"])
    graph.claude_node(name="conclusion_writer", prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}", inputs=["topic", "outline"], outputs=["conclusion"])
    graph.transform_node(name="content_assembler", transform_fn=lambda state, introduction, outline, conclusion, **kwargs: f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}", inputs=["topic", "introduction", "outline", "conclusion"], outputs=["final_content"])
    graph.visualize()
    result = graph.execute()
    return graph
This advanced example demonstrates how to orchestrate multiple nodes to create a complete blog post, showcasing LangGraph’s flexibility.
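Because execute stores every node’s output in the shared state, the assembled post can be read back from the returned graph object:
graph = run_advanced_example()
print(graph.state["final_content"])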
Conclusion
In summary, this tutorial has walked through implementing LangGraph with Anthropic’s Claude API. By breaking AI workflows into modular nodes and visualizing the dependencies between them, developers can build maintainable, scalable systems whose data flow is explicit and easy to debug.
Further Resources
For hands-on experience, check out the Colab Notebook.