
“Secure AI Workflow: Build a Memory-Enabled Cipher with Dynamic LLM Selection”

Creating a Secure Cipher Workflow for AI Agents

In the ever-evolving field of artificial intelligence, establishing a secure and efficient workflow is paramount. This guide walks through building a Cipher-based system that adaptively switches between large language model (LLM) providers such as OpenAI, Google (Gemini), and Anthropic. By the end, you'll have a robust setup that securely manages API keys and retains memory of key project decisions using a memory-enabled AI agent.

1. Securing Your API Key

The first step in our workflow is to ensure the security of your API key, particularly when working within environments like Google Colab. We’ll use Python’s getpass module to input the Gemini API key securely. This approach prevents the key from being visible in the code or the user interface.

import os, getpass
import pathlib, shlex, subprocess, tempfile, time  # used by the helpers below
import requests  # used later to poll the API server's health endpoint

# Read the key without echoing it on screen or storing it in the notebook.
os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()

2. Dynamic LLM Selection

Next, we need a function that selects an LLM based on whichever API keys are available, checking providers in a fixed preference order: OpenAI first, then Gemini, then Anthropic. This fallback ensures the system uses the best available model without any manual configuration.

def choose_llm():
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")
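As a quick sanity check, the selection order can be exercised in a scratch environment. This is a minimal sketch: the function body is repeated so the snippet runs standalone, and the key value is a placeholder, not a real credential.

```python
import os

def choose_llm():
    # Preference order: OpenAI first, then Gemini, then Anthropic.
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")

# Simulate an environment where only a Gemini key is present.
for var in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    os.environ.pop(var, None)
os.environ["GEMINI_API_KEY"] = "placeholder-key"

provider, model, key_env = choose_llm()
```

Because OpenAI and Anthropic keys are absent, the Gemini tuple is returned; setting an OpenAI key as well would flip the result to `gpt-4o-mini`.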

3. Command Execution Helpers

To interact with the Cipher CLI, we create a helper function to execute shell commands seamlessly. This function provides transparency by displaying both output and error messages during execution.

def run(cmd, check=True, env=None):
    """Run a shell command, echoing the command and its captured output."""
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p
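A minimal sketch of the helper in action (a self-contained copy; `echo` and `exit 1` are just harmless shell commands chosen to demonstrate the success and failure paths):

```python
import subprocess

def run(cmd, check=True, env=None):
    # Echo the command, run it through the shell, and surface its output.
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p

ok = run("echo hello")     # succeeds; stdout is captured on the result
try:
    run("exit 1")          # non-zero exit raises when check=True
except RuntimeError as e:
    failure = str(e)
```

Returning the `CompletedProcess` lets callers inspect `stdout` or `returncode` even when they don't need the raised error.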

4. Environment Setup

Before diving into the memory agent configuration, we need the necessary software installed. This step sets up Node.js and installs the Cipher CLI globally via npm; the apt-get commands assume a Debian-based environment such as Google Colab.

def ensure_node_and_cipher():
    # Install Node.js and npm; check=False tolerates environments where
    # apt is unavailable or the packages are already present.
    run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
    run("npm install -g @byterover/cipher")

5. Generating the Configuration File

Now we create a cipher.yml file that configures the memory agent: which LLM to use, a system prompt instructing the agent to retain prior decisions, and an MCP filesystem server. Note that the apiKey field is written as an environment-variable reference (for example, $GEMINI_API_KEY), so the key itself is never written to disk.

def write_cipher_yml(workdir, provider, model, key_env):
    cfg = f"""
llm:
  provider: {provider}
  model: {model}
  apiKey: ${key_env}
systemPrompt:
  enabled: true
  content: |
    You are an AI programming assistant with long-term memory of prior decisions.
embedding:
  disabled: true
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y','@modelcontextprotocol/server-filesystem','.']
"""
    (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
    (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "\n")
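For the Gemini path, the generated memAgent/cipher.yml would look roughly like this (the values come from choose_llm; Cipher resolves $GEMINI_API_KEY from the environment at run time rather than storing the key on disk):

```yaml
llm:
  provider: gemini
  model: gemini-2.5-flash
  apiKey: $GEMINI_API_KEY
systemPrompt:
  enabled: true
  content: |
    You are an AI programming assistant with long-term memory of prior decisions.
embedding:
  disabled: true
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y','@modelcontextprotocol/server-filesystem','.']
```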

6. Executing Cipher Commands

To log decisions and retrieve memories, we define a function that allows us to execute commands through the Cipher CLI efficiently.

def cipher_once(text, env=None, cwd=None):
    # Pass the message to the Cipher CLI as a single shell-quoted argument.
    cmd = f'cipher {shlex.quote(text)}'
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
    print("Cipher says:\n", p.stdout or p.stderr)
    return p.stdout.strip() or p.stderr.strip()

7. Starting the API Server

With everything set up, we can launch Cipher in API mode, which exposes a live HTTP endpoint for further interactions. The helper polls the /health endpoint on port 3000 until the server reports ready.

def start_api(env, cwd):
    # Launch the server in the background, then poll until it responds.
    proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    for _ in range(30):
        try:
            r = requests.get("http://127.0.0.1:3000/health", timeout=2)
            if r.ok:
                print("API /health:", r.text)
                break
        except requests.RequestException:
            pass  # server not accepting connections yet; retry
        time.sleep(1)
    return proc
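The readiness loop above generalizes to a small polling helper. The sketch below factors it out so the retry logic can be exercised without a live server; `wait_until` and `fake_health` are illustrative names, not part of Cipher.

```python
import time

def wait_until(probe, attempts=30, delay=0.1):
    """Call `probe` until it returns truthy, up to `attempts` times."""
    for _ in range(attempts):
        try:
            if probe():
                return True
        except Exception:
            pass  # equivalent to "server not up yet"; retry
        time.sleep(delay)
    return False

# Stand-in for the /health request: succeeds on the third call.
calls = {"n": 0}
def fake_health():
    calls["n"] += 1
    return calls["n"] >= 3

ready = wait_until(fake_health, attempts=10, delay=0.01)
```

In start_api, the probe would be the requests.get call against /health; bounding the attempts keeps the script from hanging forever if the server never comes up.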

8. Putting It All Together

The main function orchestrates the whole workflow, ensuring each component operates in sequence and correctly logs decisions during the session.

def main():
    provider, model, key_env = choose_llm()
    ensure_node_and_cipher()
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
    write_cipher_yml(workdir, provider, model, key_env)
    env = os.environ.copy()

    # Log two project decisions to long-term memory.
    cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
    cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))

    # Recall them to verify the memory agent is working.
    cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))

    # Briefly run the API server, then shut it down.
    api_proc = start_api(env, str(workdir))
    time.sleep(3)
    api_proc.terminate()

if __name__ == "__main__":
    main()

Conclusion

This guide outlined a practical and secure Cipher workflow: it keeps API keys out of your code, selects an LLM provider dynamically, and configures a memory-enabled agent. By following these steps, you can build AI systems that retain project decisions and adapt over time.

FAQs

  • What is the purpose of a memory-enabled AI agent?
    It allows the AI to retain previous interactions, improving the quality and relevance of responses based on past decisions.
  • How does dynamic LLM selection work?
    The system checks which API keys are available and selects the best model automatically, ensuring optimal performance.
  • Why is securing the API key important?
    Exposing API keys can lead to unauthorized access, potentially incurring costs or data breaches.
  • Can I customize the memory prompts?
    Yes, you can modify the system prompts in the configuration file to tailor the AI’s responses to specific needs.
  • What happens if none of the API keys are set?
    The system will raise an error, prompting you to set an API key before proceeding.

Vladimir Dyachkov, Ph.D
Editor-in-Chief itinai.com

I believe that AI is only as powerful as the human insight guiding it.
