🔗 LangChain Multi-Agent Architecture

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#4A90A4', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2d5a6b', 'lineColor': '#5C6BC0', 'secondaryColor': '#81C784', 'tertiaryColor': '#FFB74D', 'background': '#fafafa', 'fontSize': '16px'}}}%%

flowchart TB
    USER["🧑‍💻 Employee"]
    
    subgraph AWS_CLOUD["☁️ AWS CLOUD INFRASTRUCTURE"]
        direction TB
        
        subgraph API_LAYER["🔌 API Gateway Layer"]
            API["📡 Internal API<br/>(Receives Requests)"]
        end
        
        subgraph ORCHESTRATION_LAYER["🎯 ORCHESTRATION LAYER - LangChain"]
            direction TB
            LC["🔗 LangChain<br/>Multi-Agent Orchestrator"]
            
            subgraph AGENT_CHAIN["⛓️ Agent Chain (Sequential Orchestration)"]
                direction LR
                A1["🤖 Agent 1<br/>Task: Initial Processing"]
                A2["🤖 Agent 2<br/>Task: Analysis"]
                A3["🤖 Agent 3<br/>Task: Final Synthesis"]
            end
            
            LC --> A1
            A1 -->|"Output 1"| A2
            A2 -->|"Output 2"| A3
        end
    end
    
    subgraph EXTERNAL_LLMS["🌐 EXTERNAL LLM API PROVIDERS"]
        direction TB
        
        subgraph OPENAI["OpenAI Cloud"]
            GPT["🧠 ChatGPT API<br/>(GPT-4)"]
        end
        
        subgraph ANTHROPIC["Anthropic Cloud"]
            CLAUDE["🧠 Claude API<br/>(Claude 3)"]
        end
        
        subgraph GOOGLE["Google Cloud"]
            GEMINI["🧠 Gemini API<br/>(Gemini Pro)"]
        end
    end
    
    USER -->|"1️⃣ Request"| API
    API -->|"2️⃣ Forward to Orchestrator"| LC
    
    A1 <-->|"API Call"| GPT
    A2 <-->|"API Call"| CLAUDE
    A3 <-->|"API Call"| GEMINI
    
    A3 -->|"3️⃣ Final Output"| API
    API -->|"4️⃣ Response"| USER

    style AWS_CLOUD fill:#232F3E,stroke:#FF9900,stroke-width:3px,color:#fff
    style ORCHESTRATION_LAYER fill:#1a237e,stroke:#7986CB,stroke-width:3px,color:#fff
    style AGENT_CHAIN fill:#283593,stroke:#5C6BC0,stroke-width:2px,color:#fff
    style EXTERNAL_LLMS fill:#37474F,stroke:#78909C,stroke-width:2px,color:#fff
    style OPENAI fill:#10a37f,stroke:#fff,stroke-width:2px,color:#fff
    style ANTHROPIC fill:#d4a27f,stroke:#fff,stroke-width:2px,color:#fff
    style GOOGLE fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
    style API_LAYER fill:#37474F,stroke:#78909C,stroke-width:2px,color:#fff
    
    style LC fill:#7C4DFF,stroke:#fff,stroke-width:2px,color:#fff
    style A1 fill:#26A69A,stroke:#fff,stroke-width:2px,color:#fff
    style A2 fill:#26A69A,stroke:#fff,stroke-width:2px,color:#fff
    style A3 fill:#26A69A,stroke:#fff,stroke-width:2px,color:#fff
    style GPT fill:#10a37f,stroke:#fff,stroke-width:2px,color:#fff
    style CLAUDE fill:#d4a27f,stroke:#fff,stroke-width:2px,color:#fff
    style GEMINI fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
    style USER fill:#FF7043,stroke:#fff,stroke-width:2px,color:#fff
    style API fill:#42A5F5,stroke:#fff,stroke-width:2px,color:#fff
                
Original Architecture: LangChain orchestrates 3 agents, each calling external LLM APIs (OpenAI → Anthropic → Google)
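The sequential orchestration in the diagram above can be sketched as plain Python. This is an illustrative stand-in, not real LangChain code: `call_gpt`, `call_claude`, and `call_gemini` are hypothetical placeholders for the three provider clients, and in actual LangChain the same pipeline shape would typically be composed with LCEL (`chain1 | chain2 | chain3`).

```python
# Minimal sketch of the sequential agent chain shown above.
# Each call_* function is a hypothetical stand-in for a real provider
# client (OpenAI, Anthropic, Google) -- no network calls are made here.

def call_gpt(prompt: str) -> str:    # Agent 1: initial processing
    return f"processed({prompt})"

def call_claude(text: str) -> str:   # Agent 2: analysis
    return f"analyzed({text})"

def call_gemini(text: str) -> str:   # Agent 3: final synthesis
    return f"synthesized({text})"

def run_chain(request: str) -> str:
    """Orchestrator: each agent's output feeds the next agent."""
    out1 = call_gpt(request)   # Output 1
    out2 = call_claude(out1)   # Output 2
    return call_gemini(out2)   # Final output, returned via the internal API

print(run_chain("employee request"))
# → synthesized(analyzed(processed(employee request)))
```

The point of the sketch is the data flow, not the model calls: the orchestrator owns the sequencing, and each agent is just a callable that receives the previous agent's output.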
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#4A90A4', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2d5a6b', 'lineColor': '#5C6BC0', 'secondaryColor': '#81C784', 'tertiaryColor': '#FFB74D', 'background': '#fafafa', 'fontSize': '16px'}}}%%

flowchart TB
    USER["🧑‍💻 Employee"]
    
    subgraph AWS_CLOUD["☁️ AWS CLOUD INFRASTRUCTURE"]
        direction TB
        
        subgraph API_LAYER["🔌 API Gateway Layer"]
            API["📡 Internal API"]
        end
        
        subgraph ORCHESTRATION_LAYER["🎯 ORCHESTRATION LAYER - LangChain STILL REQUIRED"]
            direction TB
            LC["🔗 LangChain Orchestrator"]
            
            subgraph AGENT_CHAIN["⛓️ Agent Chain"]
                direction LR
                A1["🤖 Agent 1"]
                A2["🤖 Agent 2 - CHANGED"]
                A3["🤖 Agent 3"]
            end
            
            LC --> A1
            A1 -->|"Output 1"| A2
            A2 -->|"Output 2"| A3
        end
        
        subgraph EC2_INFRASTRUCTURE["🖥️ AWS EC2 - Custom LLM Deployment"]
            direction TB
            FASTAPI["⚡ FastAPI Proxy"]
            OLLAMA["🦙 Ollama Server"]
            MODEL["🧠 Custom Model"]
            
            FASTAPI --> OLLAMA
            OLLAMA --> MODEL
        end
    end
    
    subgraph EXTERNAL_LLMS["🌐 EXTERNAL LLM APIs"]
        direction TB
        
        subgraph OPENAI["OpenAI Cloud"]
            GPT["🧠 ChatGPT API"]
        end
        
        subgraph GOOGLE["Google Cloud"]
            GEMINI["🧠 Gemini API"]
        end
    end
    
    USER -->|"1 Request"| API
    API -->|"2 Forward"| LC
    
    A1 <-->|"API Call"| GPT
    A2 <-->|"API Call"| FASTAPI
    A3 <-->|"API Call"| GEMINI
    
    A3 -->|"3 Final Output"| API
    API -->|"4 Response"| USER

    style AWS_CLOUD fill:#232F3E,stroke:#FF9900,stroke-width:3px,color:#fff
    style ORCHESTRATION_LAYER fill:#1a237e,stroke:#7986CB,stroke-width:3px,color:#fff
    style AGENT_CHAIN fill:#283593,stroke:#5C6BC0,stroke-width:2px,color:#fff
    style EXTERNAL_LLMS fill:#37474F,stroke:#78909C,stroke-width:2px,color:#fff
    style OPENAI fill:#10a37f,stroke:#fff,stroke-width:2px,color:#fff
    style GOOGLE fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
    style API_LAYER fill:#37474F,stroke:#78909C,stroke-width:2px,color:#fff
    style EC2_INFRASTRUCTURE fill:#FF6F00,stroke:#FFD54F,stroke-width:3px,color:#fff
    
    style LC fill:#7C4DFF,stroke:#fff,stroke-width:2px,color:#fff
    style A1 fill:#26A69A,stroke:#fff,stroke-width:2px,color:#fff
    style A2 fill:#FF6F00,stroke:#FFD54F,stroke-width:3px,color:#fff
    style A3 fill:#26A69A,stroke:#fff,stroke-width:2px,color:#fff
    style GPT fill:#10a37f,stroke:#fff,stroke-width:2px,color:#fff
    style GEMINI fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
    style USER fill:#FF7043,stroke:#fff,stroke-width:2px,color:#fff
    style API fill:#42A5F5,stroke:#fff,stroke-width:2px,color:#fff
    style FASTAPI fill:#009688,stroke:#fff,stroke-width:2px,color:#fff
    style OLLAMA fill:#795548,stroke:#fff,stroke-width:2px,color:#fff
    style MODEL fill:#E65100,stroke:#fff,stroke-width:2px,color:#fff
                
Key Insight: Ollama + FastAPI replaces only Agent 2's LLM backend (Claude → custom model). LangChain is still required for orchestration. Ollama ≠ LangChain replacement!
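The backend swap can be made concrete with a short sketch. Everything here is illustrative: the proxy URL, function names, and string outputs are assumptions, and in production Agent 2's callable would POST to the FastAPI proxy in front of the Ollama server rather than return a canned string. The key point is that only one entry in the backend mapping changes; the chain logic the orchestrator runs is untouched.

```python
# Sketch of the backend swap: the orchestration loop is unchanged,
# only the callable behind Agent 2 is rebound from the Claude API to
# the in-house FastAPI → Ollama endpoint.

OLLAMA_PROXY_URL = "http://internal-ec2-host:8000/v1/chat"  # hypothetical proxy address

def claude_backend(text: str) -> str:
    return f"claude({text})"          # stand-in for the Anthropic API call

def ollama_backend(text: str) -> str:
    # In production this would POST `text` to OLLAMA_PROXY_URL, which
    # forwards to the Ollama server hosting the custom model.
    return f"custom_model({text})"

# The orchestrator only sees this mapping; swapping a backend does not
# touch the chain logic, which is why LangChain is still required.
backends = {
    "agent1": lambda t: f"gpt({t})",
    "agent2": claude_backend,
    "agent3": lambda t: f"gemini({t})",
}

def run_chain(request: str) -> str:
    out = request
    for agent in ("agent1", "agent2", "agent3"):
        out = backends[agent](out)
    return out

backends["agent2"] = ollama_backend   # the only change in the new architecture
print(run_chain("req"))
# → gemini(custom_model(gpt(req)))
```

This mirrors the two diagrams: the first runs `gpt → claude → gemini`, the second runs `gpt → custom_model → gemini`, and the orchestrator code is identical in both.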