AI Agent Architecture in Agentic Frameworks
In this newsletter we explore AI agent architecture in agentic frameworks, look at LLM observability tools, and learn how to install DeepSeek on AWS and compare it with other foundation models.
Have you ever wondered how different Agentic frameworks handle AI agents? Here's a quick guide to some of the popular ones:
LangChain
→ Component-based architecture focused on chains
→ Modular design connecting LLMs, memory, and tools
→ Great for building sequential workflows and basic agent interactions
LangGraph (LangChain's sibling)
→ State machine-based architecture
→ Explicit workflow control with graph structures
→ Perfect for complex, non-linear processes with multiple decision points
AutoGen
→ Multi-agent conversation architecture
→ Agents can autonomously collaborate through chat
→ Excels at complex problem-solving through agent-to-agent communication
CrewAI
→ Role-based team architecture
→ Simulates human team structures (like researcher, writer, reviewer)
→ Built for complex tasks requiring multiple specialized agents
Agno (formerly Phidata)
→ Application-centric, assistant-based workflows
→ Scalable, production-ready AI application orchestration
→ Function-based tool system for enterprise-level tasks
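Under the hood, most of these frameworks build on the same core pattern: an agent loop that asks the model to pick a tool, calls the tool, and feeds the result back in. Here's a minimal, framework-agnostic sketch of that loop (the `search` tool and the stubbed `llm_decide` policy are illustrative, not any framework's real API):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A minimal tool-using agent: the core loop most frameworks build on."""
    tools: dict[str, Callable[[str], str]]
    decide: Callable[[str], tuple[str, str]]  # state -> (tool_name, tool_input)
    history: list[str] = field(default_factory=list)

    def run(self, task: str, max_steps: int = 5) -> str:
        state = task
        for _ in range(max_steps):
            tool_name, tool_input = self.decide(state)
            if tool_name == "finish":           # the model signals it is done
                return tool_input
            observation = self.tools[tool_name](tool_input)
            self.history.append(f"{tool_name}({tool_input}) -> {observation}")
            state = observation                 # feed the result back in
        return state

# Stub "LLM" policy for illustration: search once, then finish.
def llm_decide(state: str) -> tuple[str, str]:
    if state.startswith("task:"):
        return ("search", state.removeprefix("task:"))
    return ("finish", state)

agent = Agent(tools={"search": lambda q: f"results for {q}"}, decide=llm_decide)
print(agent.run("task:agent frameworks"))  # -> results for agent frameworks
```

The frameworks above differ mainly in how that loop is orchestrated: LangGraph makes the state transitions explicit, AutoGen and CrewAI put multiple such agents in conversation with each other.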
Each of these frameworks shines in a different scenario:
Need a basic LLM app? → LangChain
Complex, multi-step workflows? → LangGraph
Multi-agent collaboration? → AutoGen
Role-based team simulation? → CrewAI
Task automation focus? → Agno
Comment below which one you work with 👇
Why does LLM observability matter?
RAG applications and AI agents involve complex reasoning chains, and errors can occur at any stage: embedding, retrieval, API calls, and tool calls. This is why LLM observability tools are crucial for:
→ Troubleshooting issues
→ Performance monitoring
→ Prompt optimization
→ Hallucination detection
→ Cost optimization
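At their core, most of these tools wrap every LLM call with timing, token, and cost metadata, then export the traces to a backend. A minimal sketch of the idea in plain Python (the per-token price and the stubbed model function are illustrative, not real rates or a real API):

```python
import time
from functools import wraps

TRACES: list[dict] = []  # a real tool would export these to a backend

def traced(price_per_token: float):
    """Record latency, token count, and estimated cost for each LLM call."""
    def decorator(llm_call):
        @wraps(llm_call)
        def wrapper(prompt: str) -> str:
            start = time.perf_counter()
            response = llm_call(prompt)
            tokens = len(prompt.split()) + len(response.split())  # crude count
            TRACES.append({
                "latency_s": time.perf_counter() - start,
                "tokens": tokens,
                "cost_usd": tokens * price_per_token,
                "prompt": prompt,
            })
            return response
        return wrapper
    return decorator

@traced(price_per_token=0.000002)  # illustrative price, not a real rate card
def fake_llm(prompt: str) -> str:
    return "stub answer"

fake_llm("What is RAG?")
print(TRACES[0]["tokens"])  # prompt (3 words) + response (2 words) = 5
```

With traces like this in place, troubleshooting, cost tracking, and prompt comparison all become queries over the same data.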
Some popular LLM observability tools at the moment include LangSmith and Helicone (YC W23). What other tools do you use for LLM tracing and monitoring? Comment below 👇
Install DeepSeek on AWS Cloud
DeepSeek is now available for deployment on Amazon Web Services (AWS).
If you want to install DeepSeek on Ollama, Google Colab, or AWS, I’ve put together a video guide covering:
→ Installing DeepSeek on Ollama (local setup)
→ Running it on Google Colab (cloud-based)
→ Deploying it on AWS (scalable solution)
Which setup is best for you? I compare all three in the video so you can choose the right one!
📌 Check out the full video here.
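For the local Ollama route, once the model is pulled (e.g. `ollama pull deepseek-r1`), you can query it over Ollama's local HTTP API. A minimal standard-library sketch, assuming a default local Ollama install and that model name:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt and return the model's response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with the model pulled:
#   ollama pull deepseek-r1
# print(ask("deepseek-r1", "Explain RAG in one sentence."))
```

The Colab and AWS setups trade this local simplicity for more compute and scalability; the video walks through those trade-offs.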
Compare Two Foundation Models on Amazon Bedrock Playground and SageMaker Playground
If you caught my previous post on how to install DeepSeek on AWS Cloud, here is a quick video on how to compare two foundation models on Amazon Bedrock Playground and SageMaker Playground.
Choosing the right foundation model for your project is always important, so consider key factors such as:
→ Business use cases
→ Cost
→ Latency
→ Accuracy
→ Hardware requirements
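Latency, at least, is easy to measure side by side. Here's a framework-agnostic sketch that times two model callables on the same prompts (the stub lambdas stand in for real Bedrock or SageMaker invocations, which would need the AWS SDK and credentials):

```python
import time
from typing import Callable

def compare_models(models: dict[str, Callable[[str], str]],
                   prompts: list[str]) -> dict[str, dict]:
    """Run each prompt through each model; report avg latency and outputs."""
    report: dict[str, dict] = {}
    for name, model in models.items():
        outputs, start = [], time.perf_counter()
        for prompt in prompts:
            outputs.append(model(prompt))
        report[name] = {
            "avg_latency_s": (time.perf_counter() - start) / len(prompts),
            "outputs": outputs,
        }
    return report

# Stubs for illustration; swap in real Bedrock/SageMaker invocation calls.
report = compare_models(
    {"model-a": lambda p: p.upper(), "model-b": lambda p: p[::-1]},
    ["hello world"],
)
print(report["model-a"]["outputs"])  # ['HELLO WORLD']
```

Accuracy and cost need domain-specific evals and pricing data on top of this, but a harness like this gives you an apples-to-apples starting point.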
Check out the full video here.