
Master Chain-of-Thought Reasoning with Mirascope: A Guide for AI Enthusiasts and Data Scientists

Understanding the Target Audience for o1 Style Thinking

The target audience for o1 Style Thinking, especially in the context of Chain-of-Thought (CoT) reasoning using the Mirascope library, includes business professionals, data scientists, and AI enthusiasts. These individuals are eager to enhance their problem-solving skills through advanced reasoning techniques. Typically, they are tech-savvy and understand how AI can impact business processes.

Pain Points

  • Lack of clarity in multi-step problem-solving.
  • Difficulties in ensuring accuracy and reliability in AI-generated outputs.
  • Challenges in visualizing complex reasoning processes.

Goals

  • To enhance decision-making through structured reasoning.
  • To improve the accuracy of AI models in business applications.
  • To develop a deeper understanding of AI workflows and their implications.

Interests

  • Latest advancements in AI and machine learning.
  • Practical applications of AI in business management.
  • Methods for improving cognitive processes through technology.

Communication Preferences

This audience prefers clear, concise, and structured content. They are interested in technical details that translate easily into business contexts and appreciate interactive, engaging tutorials that provide practical insights.

Implementing Chain-of-Thought Reasoning with Mirascope

In this section, we will explore how to implement Chain-of-Thought (CoT) reasoning using the Mirascope library and the Groq-hosted Llama 3.3 model. This approach encourages the model to break a problem down into logical steps, improving accuracy and transparency on complex, multi-step tasks.

Setting Up the Environment

To begin, ensure you have the necessary dependencies installed:

!pip install "mirascope[groq]"

Note that the datetime module used later in this tutorial is part of the Python standard library, so it does not need to be installed separately.

Obtaining a Groq API Key

To use the Groq API, you will need an API key, which you can generate in the Groq Console.
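Once you have a key, make it available to the client. A minimal sketch, assuming the underlying Groq client reads the GROQ_API_KEY environment variable, is to set it at the start of your notebook session:

import os
from getpass import getpass

# Prompt for the key rather than hard-coding it in the notebook.
# Assumption: the Groq client picks up the GROQ_API_KEY environment variable.
if "GROQ_API_KEY" not in os.environ:
    os.environ["GROQ_API_KEY"] = getpass("Enter your Groq API key: ")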

Importing Libraries and Defining a Pydantic Schema

Import the required libraries (including datetime, which is used later to time each reasoning step) and define a Pydantic model to structure the reasoning steps:

from datetime import datetime
from typing import Literal

from mirascope.core import groq
from pydantic import BaseModel, Field

# Conversation history shared across runs; only the final answer of each run is appended.
history: list[dict] = []

class COTResult(BaseModel):
    title: str = Field(..., description="The title of the step")
    content: str = Field(..., description="The output content of the step")
    next_action: Literal["continue", "final_answer"] = Field(..., description="The next action to take")
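Because the cot_step call defined below uses json_mode=True with response_model=COTResult, Mirascope parses the model's JSON output directly into this schema. Purely as an illustration of the schema itself (no API call involved, and assuming Pydantic v2, which Mirascope builds on), you can validate a JSON payload by hand:

example_json = '{"title": "Set up equations", "content": "Let t be the time in hours...", "next_action": "continue"}'
step = COTResult.model_validate_json(example_json)
print(step.title, "->", step.next_action)  # Set up equations -> continue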

Defining Step-wise Reasoning and Final Answer Functions

These functions form the core of the Chain-of-Thought (CoT) reasoning workflow:

@groq.call("llama-3.3-70b-versatile", json_mode=True, response_model=COTResult)
def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:
    return f"""
    You are an expert AI assistant that explains your reasoning step by step.
    For this step, provide a title that describes what you're doing, along with the content.
    Decide if you need another step or if you're ready to give the final answer.

    Guidelines:
    - Use AT MOST 5 steps to derive the answer.
    - Be aware of your limitations as an LLM and what you can and cannot do.
    - In your reasoning, include exploration of alternative answers.
    - Consider you may be wrong, and if you are wrong in your reasoning, where it would be.
    - Fully test all other possibilities.
    - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining
        - Actually re-examine, and use another approach to do so.
        - Do not just say you are re-examining.

    This is step number {step_number}.

    Question: {prompt}

    Previous steps:
    {previous_steps}
    """
@groq.call("llama-3.3-70b-versatile")
def final_answer(prompt: str, reasoning: str) -> str:
    return f"""
    Based on the following chain of reasoning, provide a final answer to the question.
    Only provide the text response without any titles or preambles.
    Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.

    Question: {prompt}

    Reasoning:
    {reasoning}

    Final Answer:
    """

Generating and Displaying Chain-of-Thought Responses

Define two key functions to manage the full Chain-of-Thought reasoning loop:

def generate_cot_response(user_query: str) -> tuple[list[tuple[str, str, float]], float]:
    steps: list[tuple[str, str, float]] = []
    total_thinking_time: float = 0.0
    step_count: int = 1
    reasoning: str = ""
    previous_steps: str = ""

    while True:
        start_time: datetime = datetime.now()
        cot_result = cot_step(user_query, step_count, previous_steps)
        end_time: datetime = datetime.now()
        thinking_time: float = (end_time - start_time).total_seconds()

        steps.append((f"Step {step_count}: {cot_result.title}", cot_result.content, thinking_time))
        total_thinking_time += thinking_time

        reasoning += f"\n{cot_result.content}\n"
        previous_steps += f"\n{cot_result.content}\n"

        if cot_result.next_action == "final_answer" or step_count >= 5:
            break

        step_count += 1

    # Generate final answer
    start_time = datetime.now()
    final_result: str = final_answer(user_query, reasoning).content
    end_time = datetime.now()
    thinking_time = (end_time - start_time).total_seconds()
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_result, thinking_time))

    return steps, total_thinking_time

def display_cot_response(steps: list[tuple[str, str, float]], total_thinking_time: float) -> None:
    for title, content, thinking_time in steps:
        print(f"{title}:")
        print(content.strip())
        print(f"**Thinking time: {thinking_time:.2f} seconds**\n")

    print(f"**Total thinking time: {total_thinking_time:.2f} seconds**")

Running the Chain-of-Thought Workflow

The following function initiates the CoT reasoning process:

def run() -> None:
    question: str = "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
    print("(User):", question)
    steps, total_thinking_time = generate_cot_response(question)
    display_cot_response(steps, total_thinking_time)

    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": steps[-1][1]})  # Add only the final answer to the history

# Run the function
run()
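As a reference point for judging the model's output, the example question can be checked with plain arithmetic, no API call needed:

# Sanity check of the train question (pure arithmetic).
head_start = 60 * 1                   # km covered by the first train between 9:00 and 10:00
gap = 300 - head_start                # 240 km separating the trains at 10:00 AM
closing_speed = 60 + 90               # 150 km/h combined approach speed
hours_to_meet = gap / closing_speed   # 1.6 h = 1 h 36 min after 10:00 AM
print(hours_to_meet)                  # 1.6 -> the trains meet at 11:36 AM

A sound chain of reasoning from the model should arrive at the same 11:36 AM answer.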

Conclusion

Implementing Chain-of-Thought reasoning with the Mirascope library can significantly enhance the accuracy and transparency of AI outputs. By breaking down complex problems into manageable steps, users can improve their decision-making processes and gain deeper insights into AI workflows. This structured approach not only addresses common pain points but also aligns with the goals and interests of professionals in the AI and business sectors.

FAQs

  • What is Chain-of-Thought reasoning? Chain-of-Thought reasoning is a method where an AI model breaks down problems into logical steps to enhance understanding and accuracy.
  • How can I set up the Mirascope library? You can set up the Mirascope library by installing it via pip and obtaining a Groq API key from the Groq Console.
  • What are the benefits of using structured reasoning in AI? Structured reasoning helps improve decision-making, enhances the accuracy of AI models, and provides clarity in complex problem-solving.
  • Can I use Chain-of-Thought reasoning for real-world business problems? Yes, this method is particularly useful for tackling real-world business challenges by providing a clear framework for analysis.
  • What should I do if I encounter errors in reasoning? The model is designed to re-examine its reasoning and explore alternative answers, allowing for a more robust problem-solving approach.

Vladimir Dyachkov, Ph.D
Editor-in-Chief itinai.com

I believe that AI is only as powerful as the human insight guiding it.
