Building Your AI Research Squad with Agno, Streamlit, and uv
Learn how to create a powerful team of specialized AI agents using Agno, Streamlit, and uv. This comprehensive guide walks you through setting up your own research assistant team that can search the web, analyze YouTube videos, crawl websites, and more!
Agno Agents Tutorials
Part 2 of 4
Table of Contents
- Prerequisites and Environment Setup with uv
- Understanding Agno Agents - Core Concepts
- Specialized Agents - Creating Each Team Member
- The Internet Searcher - Your Web Detective
- The Web Crawler - Your Content Extractor
- The YouTube Analyst - Your Video Interpreter
- The Email Assistant - Your Communications Expert
- The GitHub Researcher - Your Code Explorer
- The HackerNews Monitor - Your Tech Trend Tracker
- The Generalist - Your Synthesis Expert
- Common Agent Features Explained
- Coordinating with Team Mode - Building the Whole Squad
- Streamlit Integration - Giving Your Team a Face
- Adding Memory and Session Management
- Run the Team of Agents:
- Troubleshooting and Best Practices
- Conclusion - Your AI Research Team in Action
Remember that scene in Ocean's Eleven where George Clooney assembles a specialized team, each member with unique skills for the perfect heist? That's essentially what we're doing today, except instead of breaking into casinos, we're breaking into the world of knowledge. And instead of risking prison time, we're just risking a higher cloud computing bill!
In the rapidly evolving AI landscape, single-purpose agents are giving way to coordinated teams of AI specialists. These teams can accomplish complex tasks that would be difficult for a single agent to handle effectively. Think of it as the difference between asking a general practitioner about a rare neurological condition versus consulting with a team of specialists. The collective intelligence always wins.
Agno, a lightweight Python library for building AI agents, makes this multi-agent approach remarkably accessible. When combined with Streamlit for beautiful interfaces and uv (a lightning-fast Python package manager), you get a toolkit that's both powerful and practical. For more background, see our Agno getting-started article.
By the end of this tutorial, you'll have a team of AI specialists that can:
- Search the web for up-to-date information
- Extract and analyze content from websites
- Break down YouTube videos
- Send professional emails
- Explore GitHub repositories
- Track trends on Hacker News
- Synthesize information from all these sources
The best part? Your users will interact with this team through a clean, intuitive Streamlit interface that you can deploy anywhere.
Let's get building!
Prerequisites and Environment Setup with uv
Before we dive into agent creation, let's set up our development environment. We'll use uv, the turbo-charged alternative to pip that's up to 100x faster and built in Rust (because everything cool these days seems to be built in Rust).
Why uv?
Imagine waiting for a pizza delivery. pip is like that delivery guy who gets lost, takes wrong turns, and delivers your pizza lukewarm an hour later. uv is the delivery rocket that has your pizza at your doorstep before you even finish placing the order. It's that fast.
Installing uv
For macOS/Linux:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
For Windows:
```powershell
irm https://astral.sh/uv/install.ps1 | iex
```
Setting Up Your Project
Let's create a fresh project and install our dependencies:

```bash
mkdir ai-research-team
cd ai-research-team

# Create and activate a virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies at warp speed
uv add agno streamlit python-dotenv duckduckgo-search crawl4ai youtube-transcript-api resend pygithub hackernews
```
Environment Variables
Our agent team needs API keys to access various services. Create a .env file in your project directory:
```bash
OPENROUTER_API_KEY=your_openrouter_key
EMAIL_FROM=your_email@example.com
EMAIL_TO=recipient@example.com
GITHUB_ACCESS_TOKEN=your_github_token
RESEND_API_KEY=your_resend_key
```
You can obtain these keys from:
- OpenRouter - For accessing various language models
- GitHub - For GitHub repository access
- Resend - For email capabilities
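It pays to fail fast if a key is missing. The full listing later in this tutorial validates the environment inline; the helper below is a hypothetical refactor of that same idea (the function name `find_missing_keys` is ours, not part of any library) that you would call right after `load_dotenv()`:

```python
import os

REQUIRED_KEYS = ["OPENROUTER_API_KEY", "EMAIL_FROM", "EMAIL_TO",
                 "GITHUB_ACCESS_TOKEN", "RESEND_API_KEY"]

def find_missing_keys(names, env=os.environ):
    """Return the names of variables that are unset or empty in `env`."""
    return [name for name in names if not env.get(name)]

# After python-dotenv's load_dotenv() has populated os.environ,
# anything reported here needs to be added to your .env file:
missing = find_missing_keys(REQUIRED_KEYS)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Checking up front beats discovering a missing Resend key halfway through a research session.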
Understanding Agno Agents - Core Concepts
Before we start building our dream team, let's understand what makes Agno agents tick. Think of Agno as the talent scout, trainer, and manager for your AI squad: it handles all the complex machinery so you can focus on creating agents with superpowers.
What Makes an Agno Agent?
At its core, an Agno agent consists of four essential components:
- **Model**: The brain of your agent. This is typically a large language model (LLM) like OpenAI's models or, in our case, models accessed via OpenRouter like Quasar Alpha.
- **Tools**: Special abilities your agent can use to interact with the world. These range from web searches (DuckDuckGo) to sending emails (Resend) or analyzing YouTube videos.
- **Instructions**: The playbook for your agent. These are specific guidelines that shape how the agent approaches problems.
- **Memory**: The agent's ability to remember previous interactions, which can be stored in databases like SQLite.
The Agent Lifecycle
When a user sends a query to an Agno agent, a fascinating process unfolds:
- Input Processing: The agent receives the user's message.
- Context Assembly: The agent gathers relevant context, including its instructions and history.
- Tool Selection: The agent decides if and which tools to use (like searching the web).
- Response Generation: The LLM generates a response based on all available information.
- Memory Update: The interaction is stored in the agent's memory for future reference.
This cycle happens seamlessly behind the scenes, giving users the impression of conversing with a knowledgeable entity rather than a complex piece of software.
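The cycle above can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Agno's actual code: `run_turn`, the keyword-based tool trigger, and the `generate` stand-in for the LLM are all our own toy constructs.

```python
def run_turn(user_message, memory, tools, generate):
    """One pass through the agent lifecycle, in miniature.

    `tools` maps tool names to callables; `generate` stands in for the LLM.
    """
    # Context assembly: prior memory plus the new user message
    context = memory + [{"role": "user", "content": user_message}]
    # Tool selection: a crude keyword trigger (a real agent lets the LLM decide)
    tool_result = None
    if "search:" in user_message and "search" in tools:
        tool_result = tools["search"](user_message.split("search:", 1)[1].strip())
    # Response generation from context plus any tool output
    reply = generate(context, tool_result)
    # Memory update: store the turn for future reference
    memory.append({"role": "user", "content": user_message})
    memory.append({"role": "assistant", "content": reply})
    return reply

# Toy stand-ins for the model and a search tool:
memory = []
tools = {"search": lambda q: f"results for '{q}'"}
generate = lambda ctx, tool: tool or "no tool needed"
reply = run_turn("search: agno teams", memory, tools, generate)
```

Agno runs this loop for you on every query; the point is simply that context, tools, generation, and memory are distinct, pluggable steps.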
Agent vs. Team Modes
Agno supports two primary ways to organize your AI workforce:
- **Individual Agents**: Specialized entities focused on specific tasks. Like hiring an expert consultant.
- **Teams**: Collections of agents coordinated to tackle complex tasks. Like assembling a specialized task force.

Our project will use the "coordinate" team mode, where a team leader (coordinator) breaks down complex tasks, assigns them to specialists, and synthesizes their outputs into a cohesive whole. It's like having a project manager who knows exactly which team member to tap for each subtask.
| Mode | Best For | Real-World Analogy |
|---|---|---|
| Individual Agent | Focused tasks with clear boundaries | Solo consultant |
| Team (Coordinate) | Complex tasks requiring multiple specialties | Project team with manager |
Now that we understand the foundations, let's start building our specialized agents!
Specialized Agents - Creating Each Team Member
Now comes the fun part: assembling our dream team of AI specialists! Think of this as casting for an Ocean's Eleven-style heist, but instead of stealing diamonds, we're extracting knowledge. Let's meet our crew of digital specialists, each with unique skills and a well-defined role.
The Internet Searcher - Your Web Detective
First up is our web detective, capable of finding the latest information across the internet. This agent is essential for real-time data that isn't in our knowledge base.
```python
search_agent = Agent(
    name="InternetSearcher",
    model=model,
    tools=[DuckDuckGoTools(search=True, news=False)],
    add_history_to_messages=True,
    num_history_responses=3,  # Limit history passed to agent
    description="Expert at finding information online.",
    instructions=[
        "Use duckduckgo_search for web queries.",
        "Cite sources with URLs.",
        "Focus on recent, reliable information."
    ],
    add_datetime_to_instructions=True,  # Add time context
    markdown=True,
    exponential_backoff=True  # Add robustness
)
```
Key Features:
- `DuckDuckGoTools`: Our agent's magnifying glass for investigating the web
- add_history_to_messages: Keeps track of previous search results
- exponential_backoff: Handles rate limits gracefully (because even digital detectives need coffee breaks)
The Web Crawler - Your Content Extractor
Next is our data extraction specialist, who can pull detailed content from specific websites when you need more than just search results.
```python
crawler_agent = Agent(
    name="WebCrawler",
    model=model,
    tools=[Crawl4aiTools(max_length=None)],  # No content length limit
    add_history_to_messages=True,
    num_history_responses=3,
    description="Extracts content from specific websites.",
    instructions=[
        "Use web_crawler to extract content from provided URLs.",
        "Summarize key points and include the URL."
    ],
    markdown=True,
    exponential_backoff=True
)
```
Key Features:
- Crawl4aiTools: A specialized tool for extracting web content
- max_length=None: Gets the full content without truncation
The YouTube Analyst - Your Video Interpreter
Our media specialist can watch and analyze YouTube videos, extracting both captions and metadata for comprehensive insights.
```python
youtube_agent = Agent(
    name="YouTubeAnalyst",
    model=model,
    tools=[YouTubeTools()],
    add_history_to_messages=True,
    num_history_responses=3,
    description="Analyzes YouTube videos.",
    instructions=[
        "Extract captions and metadata for YouTube URLs.",
        "Summarize key points and include the video URL."
    ],
    markdown=True,
    exponential_backoff=True
)
```
Key Features:
- YouTubeTools: Extracts captions and metadata from videos
- Access to both what was said and video information
The Email Assistant - Your Communications Expert
Need to share findings via email? This agent handles professional communications with style and precision.
```python
email_agent = Agent(
    name="EmailAssistant",
    model=model,
    tools=[ResendTools(from_email=EMAIL_FROM, api_key=RESEND_API_KEY)],
    add_history_to_messages=True,
    num_history_responses=3,
    description="Sends emails professionally.",
    instructions=[
        "Send professional emails based on context or user request.",
        f"Default recipient is {EMAIL_TO}, but use recipient specified in the query if provided.",
        "Include URLs and links clearly.",
        "Ensure the tone is professional and courteous."
    ],
    markdown=True,
    exponential_backoff=True
)
```
Key Features:
- ResendTools: Professional email sending capabilities
- Configurable sender and default recipient
The GitHub Researcher - Your Code Explorer
For technical research, our GitHub specialist can dive into repositories, pull requests, and code discussions.
```python
github_agent = Agent(
    name="GitHubResearcher",
    model=model,
    tools=[GithubTools(access_token=GITHUB_ACCESS_TOKEN)],
    add_history_to_messages=True,
    num_history_responses=3,
    description="Explores GitHub repositories.",
    instructions=[
        "Search repositories or list pull requests based on user query.",
        "Include repository URLs and summarize findings concisely."
    ],
    markdown=True,
    exponential_backoff=True,
    add_datetime_to_instructions=True
)
```
Key Features:
- GithubTools: Access to GitHubās vast ecosystem
- Time-aware instructions for relevance
The HackerNews Monitor - Your Tech Trend Tracker
To stay on top of tech discussions and innovations, our HackerNews specialist monitors trending stories and discussions.
```python
hackernews_agent = Agent(
    name="HackerNewsMonitor",
    model=model,
    tools=[HackerNewsTools()],
    add_history_to_messages=True,
    num_history_responses=3,
    description="Tracks Hacker News trends.",
    instructions=[
        "Fetch top stories using get_top_hackernews_stories.",
        "Summarize discussions and include story URLs."
    ],
    markdown=True,
    exponential_backoff=True,
    add_datetime_to_instructions=True
)
```
Key Features:
- HackerNewsTools: Access to the pulse of tech discussions
- Time-aware for tracking trending topics
The Generalist - Your Synthesis Expert
Finally, our jack-of-all-trades handles general queries and synthesizes information from the specialists.
```python
general_agent = Agent(
    name="GeneralAssistant",
    model=model,
    add_history_to_messages=True,
    num_history_responses=5,  # More history for context
    description="Handles general queries and synthesizes information from specialists.",
    instructions=[
        "Answer general questions or combine specialist inputs.",
        "If specialists provide information, synthesize it clearly.",
        "If a query doesn't fit other specialists, attempt to answer directly.",
        "Maintain a professional tone."
    ],
    markdown=True,
    exponential_backoff=True
)
```
Key Features:
- No specific tools: this agent is all about synthesis and general knowledge
- Access to more history for comprehensive context
Common Agent Features Explained
Let's break down some configuration options that appear across our agents:

| Parameter | Purpose | Benefit |
|---|---|---|
| `add_history_to_messages` | Includes chat history in context | Maintains conversation flow |
| `num_history_responses` | Limits history length | Prevents context overflow |
| `markdown` | Enables formatted output | Better readability |
| `exponential_backoff` | Retry strategy for failures | Improves reliability |
| `add_datetime_to_instructions` | Adds timestamp to instructions | Time-aware responses |
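To see why limiting history matters, here is a plain-Python sketch of the trimming idea. This illustrates the concept, not Agno's internal implementation; `trim_history` is a hypothetical helper of ours.

```python
def trim_history(messages, num_responses):
    """Keep only the last `num_responses` user/assistant exchanges.

    `messages` is a chronological list of {"role": ..., "content": ...} dicts;
    one exchange = one user turn plus one assistant turn.
    """
    keep = num_responses * 2  # each exchange contributes two messages
    return messages[-keep:] if keep > 0 else []

history = [
    {"role": "user", "content": "Q1"}, {"role": "assistant", "content": "A1"},
    {"role": "user", "content": "Q2"}, {"role": "assistant", "content": "A2"},
    {"role": "user", "content": "Q3"}, {"role": "assistant", "content": "A3"},
]
context = trim_history(history, num_responses=2)  # drops the oldest Q1/A1 pair
```

Without a cap like this, every new turn would make the prompt longer, slower, and more expensive until it overflowed the model's context window.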
With our specialized team members defined, we're ready for the next step: bringing them together under a coordinated team structure!
Coordinating with Team Mode - Building the Whole Squad
We have our specialized agents ready to go, but they're just individual experts without a way to collaborate. Now it's time to bring them together under Agno's "coordinate" team mode: think of it as appointing a project manager who knows exactly which specialist to call for each part of a complex task.
Creating the Research Team
Here's where we define our team structure and how the agents will work together:
```python
# --- Team Initialization (in Session State) ---
def initialize_team():
    """Initializes or re-initializes the research team."""
    return Team(
        name="ResearchAssistantTeam",
        mode="coordinate",
        model=model,
        members=[
            search_agent,
            crawler_agent,
            youtube_agent,
            email_agent,
            github_agent,
            hackernews_agent,
            general_agent
        ],
        description="Coordinates specialists to handle research tasks.",
        instructions=[
            "Analyze the query and assign tasks to specialists.",
            "Delegate based on task type:",
            "- Web searches: InternetSearcher",
            "- URL content: WebCrawler",
            "- YouTube videos: YouTubeAnalyst",
            "- Emails: EmailAssistant",
            "- GitHub queries: GitHubResearcher",
            "- Hacker News: HackerNewsMonitor",
            "- General or synthesis: GeneralAssistant",
            "Synthesize responses into a cohesive answer.",
            "Cite sources and maintain clarity.",
            "Always check previous conversations in memory before responding.",
            "When asked about previous information or to recall something mentioned before, refer to your memory of past interactions.",
            "Use all relevant information from memory when answering follow-up questions."
        ],
        success_criteria="The user's query has been thoroughly answered with information from all relevant specialists.",
        enable_agentic_context=True,  # Coordinator maintains context
        share_member_interactions=True,  # Members see previous member interactions in context
        show_members_responses=False,  # Don't show raw member responses in final output
        markdown=True,
        show_tool_calls=False,  # Don't show raw tool calls in final output
        enable_team_history=True,  # Pass history between coordinator/members
        num_of_interactions_from_history=5  # Limit history passed
    )

if "team" not in st.session_state:
    st.session_state.team = initialize_team()
```
How Team Coordination Works
Let's break down what's happening in this "coordinate" mode:
- **Team Creation**: We create a `Team` object with a collection of specialized agents as members.
- **Coordinator Role**: The team operates in "coordinate" mode, where the specified model (in our case, the same `model` we used for individual agents) acts as a coordinator.
- **Task Delegation**: When a user query comes in, the coordinator analyzes it and decides which specialist(s) to involve.
- **Information Flow**: The coordinator sends sub-tasks to the appropriate agents, collects their responses, and synthesizes a final answer.
- **Memory Management**: With `enable_team_history=True`, both the coordinator and members have access to conversation history, making follow-up questions seamless.
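Conceptually, the delegation step boils down to mapping a query to a specialist. In Agno the coordinator LLM makes this decision itself from the instructions above; the toy keyword router below (entirely our own construction) just makes the routing idea concrete:

```python
# Keyword rules standing in for the coordinator's judgment.
# Order matters: the first matching rule wins.
ROUTES = [
    ("youtube.com", "YouTubeAnalyst"),
    ("github", "GitHubResearcher"),
    ("hacker news", "HackerNewsMonitor"),
    ("email", "EmailAssistant"),
    ("http", "WebCrawler"),       # bare URLs go to the crawler
    ("search", "InternetSearcher"),
]

def pick_specialist(query: str) -> str:
    """Return the first specialist whose keyword appears in the query."""
    q = query.lower()
    for keyword, specialist in ROUTES:
        if keyword in q:
            return specialist
    return "GeneralAssistant"  # fallback for everything else
```

The real coordinator is far more flexible (it can split one query across several specialists and reason about ambiguous cases), but the shape of the decision is the same: query in, specialist(s) out.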
Team Configuration Options Explained
Let's explore the key configuration options that make our team effective:
| Parameter | Purpose | Impact |
|---|---|---|
| `mode="coordinate"` | Sets the team operation pattern | Creates a hierarchical structure with a coordinator |
| `enable_agentic_context` | Gives the coordinator persistent context | Maintains awareness across interactions |
| `share_member_interactions` | Shares specialist outputs between members | Creates collaborative awareness |
| `show_members_responses` | Controls raw output visibility | Set to False for clean final responses |
| `enable_team_history` | Enables history access for all | Creates memory continuity for follow-ups |
The "Success Criteria" Explained
One of the most powerful features of Agno's team mode is the ability to define success criteria. This gives the coordinator clear guidance on when a task is considered complete:

```python
success_criteria="The user's query has been thoroughly answered with information from all relevant specialists."
```

This simple statement has a profound impact: it tells the coordinator to keep working (and delegating) until it has gathered enough information from the right specialists to provide a comprehensive answer.
Think of it as setting the standard for what constitutes a "job well done" for your AI team. Without this, the coordinator might rush to conclusions or miss important specialist input.
With our team structure defined, we're ready to create the interface that will bring this powerful AI squad to life. Let's build our Streamlit app!
Streamlit Integration - Giving Your Team a Face
Now that we have a powerful research team humming under the hood, it's time to build an intuitive UI with Streamlit. Think of this as giving your AI Ocean's Eleven crew a sleek command center, or at the very least, a chat window that doesn't look like it's from 1995.
Building the Streamlit UI
Streaming is the name of the game here: users want to see responses appearing in real-time, just like in ChatGPT or Claude. Let's set up our Streamlit app to deliver that experience:
```python
# --- Streamlit UI ---
st.title("🤖 Research Assistant Team")
st.markdown("""
This team coordinates specialists to assist with:
- Web searches
- Website content extraction
- YouTube video analysis
- Email drafting/sending
- GitHub repository exploration
- Hacker News trends
- General queries and synthesis
""")

# Display chat messages from history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Handle user input
user_query = st.chat_input("Ask the research team anything...")
if user_query:
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_query})
    # Display user message
    with st.chat_message("user"):
        st.markdown(user_query)
    # Display team response (streaming)
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        try:
            # Use stream=True for the team run
            response_stream: Iterator[RunResponse] = st.session_state.team.run(user_query, stream=True)
            for chunk in response_stream:
                # Check if content is present and a string
                if chunk.content and isinstance(chunk.content, str):
                    full_response += chunk.content
                    message_placeholder.markdown(full_response + "▌")  # Add cursor effect
            message_placeholder.markdown(full_response)  # Final response without cursor

            # Update memory debug information for display
            if hasattr(st.session_state.team, 'memory') and hasattr(st.session_state.team.memory, 'messages'):
                try:
                    # Extract only role and content safely
                    st.session_state.memory_dump = [
                        {"role": m.role if hasattr(m, 'role') else 'unknown',
                         "content": m.content if hasattr(m, 'content') else str(m)}
                        for m in st.session_state.team.memory.messages
                    ]
                except Exception as e:
                    st.session_state.memory_dump = f"Error accessing memory messages: {str(e)}"
            else:
                st.session_state.memory_dump = "Team memory object or messages not found/accessible."

            # Add the final assistant response to Streamlit's chat history
            st.session_state.messages.append({"role": "assistant", "content": full_response})
        except Exception as e:
            st.exception(e)  # Show full traceback in Streamlit console for debugging
            error_message = f"An error occurred: {str(e)}\n\nPlease check your API keys and tool configurations. Try rephrasing your query."
            st.error(error_message)
            message_placeholder.markdown(f"⚠️ {error_message}")
            # Add error message to history for context
            st.session_state.messages.append({"role": "assistant", "content": f"Error: {str(e)}"})
```
The Sidebar - Configuration and Debugging
Every great app needs a sidebar for configuration options and debugging information. Here's how we've structured ours:
```python
# --- Sidebar ---
with st.sidebar:
    st.title("Team Settings")

    # Memory debug section
    if st.checkbox("Show Team Memory Contents", value=False):
        st.subheader("Team Memory Contents (Debug)")
        if "memory_dump" in st.session_state:
            try:
                # Use pformat for potentially complex structures
                memory_str = pformat(st.session_state.memory_dump, indent=2, width=80)
                st.code(memory_str, language="python")
            except Exception as format_e:
                st.warning(f"Could not format memory dump: {format_e}")
                st.json(st.session_state.memory_dump)  # Fallback to JSON
        else:
            st.info("No memory contents to display yet. Interact with the team first.")

    st.markdown(f"**Session ID**: `{st.session_state.team_session_id}`")
    st.markdown(f"**Model**: {model_name}")

    # Memory information
    st.subheader("Team Memory")
    st.markdown("This team remembers conversations within this browser session. Clearing the chat resets the memory.")

    # Clear chat button
    if st.button("Clear Chat & Reset Team"):
        st.session_state.messages = []
        st.session_state.team_session_id = f"streamlit-team-session-{int(time.time())}"  # New ID for clarity
        st.session_state.team = initialize_team()  # Re-initialize the team to reset its state
        if "memory_dump" in st.session_state:
            del st.session_state.memory_dump  # Clear the dump
        st.rerun()

    st.title("About")
    st.markdown("""
**How it works**:
- The team coordinator analyzes your query.
- Tasks are delegated to specialists (Searcher, Crawler, YouTube Analyst, Email, GitHub, HackerNews, General).
- Responses are synthesized into a final answer.
- Team memory retains context within this session.

**Example queries**:
- "What are the latest AI breakthroughs?"
- "Crawl agno.com and summarize the homepage."
- "Summarize the YouTube video: https://www.youtube.com/watch?v=dQw4w9WgXcQ"
- "Draft an email to contact@example.com introducing our research services."
- "Find popular AI repositories on GitHub created in the last month."
- "What's trending on Hacker News today?"
- "What was the first question I asked you?" (tests memory)
""")
```
How Streamlit and Agno Work Together
Let's break down the integration points between Streamlit and our Agno team:
| Streamlit Feature | Purpose | Integration with Agno |
|---|---|---|
| `st.session_state` | Maintains app state across interactions | Stores team instance and conversation history |
| `st.chat_message` | Creates chat bubbles for conversation | Displays user queries and team responses |
| `st.empty()` | Creates placeholder for streaming | Updated chunk by chunk with team's streamed response |
| Sidebar components | Provides configuration and debug options | Shows team memory and allows session reset |
The magic happens in the streaming response loop. When a user submits a query:
- The query is added to Streamlit's chat history
- It's passed to the Agno team via `team.run(query, stream=True)`
- As chunks of the response arrive, they're added to the placeholder, giving that satisfying real-time effect
- The final response is stored in session history for future context
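The chunk-accumulation pattern is easy to see in isolation. This sketch fakes the stream with a generator (our `fake_stream` stands in for `team.run(..., stream=True)`) and records what the placeholder would show after each chunk:

```python
def fake_stream(chunks):
    """Stand-in for team.run(query, stream=True): yields objects with .content."""
    class Chunk:
        def __init__(self, content):
            self.content = content
    for c in chunks:
        yield Chunk(c)

full_response = ""
frames = []  # what the placeholder would display after each chunk
for chunk in fake_stream(["The ", "answer ", "is 42."]):
    if chunk.content and isinstance(chunk.content, str):
        full_response += chunk.content
        frames.append(full_response + "▌")  # cursor effect while streaming
final = full_response  # rendered one last time without the cursor
```

In the real app, each `frames` entry corresponds to one `message_placeholder.markdown(...)` call, which is what produces the typing effect on screen.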
Error Handling - When Things Go Sideways
We've built in robust error handling to ensure your users don't see cryptic stack traces:
- API key issues, rate limits, or tool failures are caught and displayed as friendly error messages
- The team's session remains intact, allowing users to try again with a different query
- Debug information is available in the sidebar for troubleshooting
This resilient approach means your Streamlit app won't crash even if one of your specialist agents encounters an issue. The show must go on!
Adding Memory and Session Management
We've built a powerful team and a slick UI, but there's one more crucial ingredient: memory. Just like Ocean's team would be pretty useless if they forgot the casino layout halfway through the heist, our AI team needs to remember previous interactions to be truly effective.
Session State: Streamlit's Secret Weapon
Streamlit provides a built-in session state system that persists across interactions within a browser session. We're using this to store three key elements:
- Team Instance: The entire research team with all its member agents
- Message History: All previous exchanges with the user
- Session ID: A unique identifier for this particular conversation
Here's how we initialize these components:
```python
# --- Session State Initialization ---
# Initialize team_session_id for this specific browser session
if "team_session_id" not in st.session_state:
    st.session_state.team_session_id = f"streamlit-team-session-{int(time.time())}"

# Initialize chat message history
if "messages" not in st.session_state:
    st.session_state.messages = []
```
This simple initialization ensures that each new browser session gets a fresh team instance and message history, while maintaining continuity within the session.
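The `not in` guard is what makes this safe: Streamlit re-executes the entire script on every interaction, and the guard ensures initialization runs only once per browser session. Simulating `st.session_state` with a plain dict (our own stand-in, just for illustration) shows the effect:

```python
session_state = {}  # stands in for st.session_state

def run_script(state):
    """One simulated Streamlit rerun of the initialization block."""
    if "messages" not in state:
        state["messages"] = []         # only happens on the very first run
    state["messages"].append("rerun")  # per-run work; history is preserved

run_script(session_state)
run_script(session_state)  # a second rerun does NOT reset the list
```

Without the guard, every rerun would wipe the chat history, and the team would greet each message with total amnesia.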
Agno's Memory Architecture
Agno provides three types of memory for our team:
- Chat History: The sequence of interactions between the user and the team
- Agentic Context: The coordinator's understanding of the ongoing conversation
- Team History: Shared context across all team members
Let's look at the memory-specific settings in our team configuration:
```python
enable_agentic_context=True,        # Coordinator maintains context
share_member_interactions=True,     # Members see previous member interactions
enable_team_history=True,           # Pass history between coordinator/members
num_of_interactions_from_history=5  # Limit history passed
```
The Memory Flow in Action
When a user submits a query, an elegant memory dance begins:
- The query is added to Streamlit's session state messages
- It's passed to the Agno team, which accesses its own history
- The coordinator examines the query in the context of previous interactions
- Individual agents receive relevant portions of the history when assigned tasks
- The final response is added back to session state messages
This continuous loop ensures that conversations feel natural and coherent. Ask "What was my first question?" and the team will actually know!
Balancing Memory and Performance
Memory is powerful, but it comes with a cost. We've implemented several optimizations to keep things running smoothly:
| Strategy | Implementation | Benefit |
|---|---|---|
| Limited History | `num_of_interactions_from_history=5` | Prevents context overflow |
| Selective Display | `show_members_responses=False` | Cleaner output, smaller history |
| Debug Toggle | Sidebar checkbox for memory inspection | On-demand memory visibility |
| Reset Button | "Clear Chat & Reset Team" | Fresh start when needed |
These strategies ensure our team stays quick and responsive even in long conversations.
Run the Team of Agents:
Complete Code:
Below is the complete code; save it as app.py:
# app.py
import os
import streamlit as st
from dotenv import load_dotenv
import time
from pprint import pformat
from typing import Iterator # Added for type hinting
# Agno Imports
from agno.agent import Agent
from agno.models.openrouter import OpenRouter
from agno.team import Team
from agno.run.response import RunResponse # Added for type hinting
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.crawl4ai import Crawl4aiTools
from agno.tools.youtube import YouTubeTools
from agno.tools.resend import ResendTools
from agno.tools.github import GithubTools
from agno.tools.hackernews import HackerNewsTools
# --- Configuration ---
# Load environment variables from .env file
load_dotenv()
# Check for essential API keys
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
EMAIL_FROM = os.getenv("EMAIL_FROM")
EMAIL_TO = os.getenv("EMAIL_TO") # Default recipient
GITHUB_ACCESS_TOKEN = os.getenv("GITHUB_ACCESS_TOKEN")
RESEND_API_KEY = os.getenv("RESEND_API_KEY") # ResendTools requires this
# Simple validation for required keys
required_keys = {
    "OPENROUTER_API_KEY": OPENROUTER_API_KEY,
    "EMAIL_FROM": EMAIL_FROM,
    "EMAIL_TO": EMAIL_TO,
    "GITHUB_ACCESS_TOKEN": GITHUB_ACCESS_TOKEN,
    "RESEND_API_KEY": RESEND_API_KEY
}
missing_keys = [name for name, key in required_keys.items() if not key]
if missing_keys:
    st.error(f"Missing required environment variables: {', '.join(missing_keys)}. Please set them in your .env file or system environment.")
    st.stop()  # Stop execution if keys are missing

# Set Streamlit page configuration
st.set_page_config(
    page_title="Research Assistant Team",
    page_icon="🧠",
    layout="wide"
)
# --- Model Initialization ---
# Initialize OpenRouter model only, no fallback
try:
    model = OpenRouter(id="openrouter/optimus-alpha", api_key=OPENROUTER_API_KEY)
    model_name = "OpenRouter (openrouter/optimus-alpha)"
    st.sidebar.info(f"Using model: {model_name}")
except Exception as e:
    st.error(f"Failed to initialize OpenRouter model: {e}")
    st.stop()

# --- Session State Initialization ---
# Initialize team_session_id for this specific browser session
if "team_session_id" not in st.session_state:
    st.session_state.team_session_id = f"streamlit-team-session-{int(time.time())}"

# Initialize chat message history
if "messages" not in st.session_state:
    st.session_state.messages = []
# --- Agent Definitions ---
# Define specialized agents
search_agent = Agent(
name="InternetSearcher",
model=model,
tools=[DuckDuckGoTools(search=True, news=False)],
add_history_to_messages=True,
num_history_responses=3, # Limit history passed to agent
description="Expert at finding information online.",
instructions=[
"Use duckduckgo_search for web queries.",
"Cite sources with URLs.",
"Focus on recent, reliable information."
],
add_datetime_to_instructions=True, # Add time context
markdown=True,
exponential_backoff=True # Add robustness
)
crawler_agent = Agent(
name="WebCrawler",
model=model,
tools=[Crawl4aiTools(max_length=None)], # Consider setting a sensible max_length
add_history_to_messages=True,
num_history_responses=3,
description="Extracts content from specific websites.",
instructions=[
"Use web_crawler to extract content from provided URLs.",
"Summarize key points and include the URL."
],
markdown=True,
exponential_backoff=True
)
youtube_agent = Agent(
name="YouTubeAnalyst",
model=model,
tools=[YouTubeTools()],
add_history_to_messages=True,
num_history_responses=3,
description="Analyzes YouTube videos.",
instructions=[
"Extract captions and metadata for YouTube URLs.",
"Summarize key points and include the video URL."
],
markdown=True,
exponential_backoff=True
)
email_agent = Agent(
name="EmailAssistant",
model=model,
tools=[ResendTools(from_email=EMAIL_FROM, api_key=RESEND_API_KEY)], # Pass required args
add_history_to_messages=True,
num_history_responses=3,
description="Sends emails professionally.",
instructions=[
"send professional emails based on context or user request.",
f"Default recipient is {EMAIL_TO}, but use recipient specified in the query if provided.",
"Include URLs and links clearly.",
"Ensure the tone is professional and courteous."
],
markdown=True,
exponential_backoff=True
)
github_agent = Agent(
    name="GitHubResearcher",
    model=model,
    tools=[GithubTools(access_token=GITHUB_ACCESS_TOKEN)],  # Pass required args
    add_history_to_messages=True,
    num_history_responses=3,
    description="Explores GitHub repositories.",
    instructions=[
        "Search repositories or list pull requests based on the user query.",
        "Include repository URLs and summarize findings concisely."
    ],
    markdown=True,
    exponential_backoff=True,
    add_datetime_to_instructions=True
)
hackernews_agent = Agent(
    name="HackerNewsMonitor",
    model=model,
    tools=[HackerNewsTools()],
    add_history_to_messages=True,
    num_history_responses=3,
    description="Tracks Hacker News trends.",
    instructions=[
        "Fetch top stories using get_top_hackernews_stories.",
        "Summarize discussions and include story URLs."
    ],
    markdown=True,
    exponential_backoff=True,
    add_datetime_to_instructions=True
)
# Generalist Agent (No KB in this version)
general_agent = Agent(
    name="GeneralAssistant",
    model=model,
    add_history_to_messages=True,
    num_history_responses=5,  # Can access slightly more history
    description="Handles general queries and synthesizes information from specialists.",
    instructions=[
        "Answer general questions or combine specialist inputs.",
        "If specialists provide information, synthesize it clearly.",
        "If a query doesn't fit other specialists, attempt to answer directly.",
        "Maintain a professional tone."
    ],
    markdown=True,
    exponential_backoff=True
)
# --- Team Initialization (in Session State) ---
def initialize_team():
    """Initializes or re-initializes the research team."""
    return Team(
        name="ResearchAssistantTeam",
        mode="coordinate",
        model=model,
        members=[
            search_agent,
            crawler_agent,
            youtube_agent,
            email_agent,
            github_agent,
            hackernews_agent,
            general_agent
        ],
        description="Coordinates specialists to handle research tasks.",
        instructions=[
            "Analyze the query and assign tasks to specialists.",
            "Delegate based on task type:",
            "- Web searches: InternetSearcher",
            "- URL content: WebCrawler",
            "- YouTube videos: YouTubeAnalyst",
            "- Emails: EmailAssistant",
            "- GitHub queries: GitHubResearcher",
            "- Hacker News: HackerNewsMonitor",
            "- General or synthesis: GeneralAssistant",
            "Synthesize responses into a cohesive answer.",
            "Cite sources and maintain clarity.",
            "Always check previous conversations in memory before responding.",
            "When asked about previous information or to recall something mentioned before, refer to your memory of past interactions.",
            "Use all relevant information from memory when answering follow-up questions."
        ],
        success_criteria="The user's query has been thoroughly answered with information from all relevant specialists.",
        enable_agentic_context=True,  # Coordinator maintains context
        share_member_interactions=True,  # Members see previous member interactions in context
        show_members_responses=False,  # Don't show raw member responses in final output
        markdown=True,
        show_tool_calls=False,  # Don't show raw tool calls in final output
        enable_team_history=True,  # Pass history between coordinator/members
        num_of_interactions_from_history=5  # Limit history passed
    )

if "team" not in st.session_state:
    st.session_state.team = initialize_team()
# --- Streamlit UI ---
st.title("🤖 Research Assistant Team")
st.markdown("""
This team coordinates specialists to assist with:
- 🔍 Web searches
- 🌐 Website content extraction
- 📺 YouTube video analysis
- 📧 Email drafting/sending
- 💻 GitHub repository exploration
- 📰 Hacker News trends
- 🧠 General queries and synthesis
""")
# Display chat messages from history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
# Handle user input
user_query = st.chat_input("Ask the research team anything...")
if user_query:
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_query})
    # Display user message
    with st.chat_message("user"):
        st.markdown(user_query)
    # Display team response (streaming)
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        try:
            # Use stream=True for the team run
            response_stream: Iterator[RunResponse] = st.session_state.team.run(user_query, stream=True)
            for chunk in response_stream:
                # Check that content is present and a string
                if chunk.content and isinstance(chunk.content, str):
                    full_response += chunk.content
                    message_placeholder.markdown(full_response + "▌")  # Add cursor effect
            message_placeholder.markdown(full_response)  # Final response without cursor
            # Update memory debug information for display
            if hasattr(st.session_state.team, 'memory') and hasattr(st.session_state.team.memory, 'messages'):
                try:
                    # Extract only role and content safely
                    st.session_state.memory_dump = [
                        {"role": m.role if hasattr(m, 'role') else 'unknown',
                         "content": m.content if hasattr(m, 'content') else str(m)}
                        for m in st.session_state.team.memory.messages
                    ]
                except Exception as e:
                    st.session_state.memory_dump = f"Error accessing memory messages: {str(e)}"
            else:
                st.session_state.memory_dump = "Team memory object or messages not found/accessible."
            # Add the final assistant response to Streamlit's chat history
            st.session_state.messages.append({"role": "assistant", "content": full_response})
        except Exception as e:
            st.exception(e)  # Show full traceback in Streamlit for debugging
            error_message = f"An error occurred: {str(e)}\n\nPlease check your API keys and tool configurations. Try rephrasing your query."
            st.error(error_message)
            message_placeholder.markdown(f"⚠️ {error_message}")
            # Add error message to history for context
            st.session_state.messages.append({"role": "assistant", "content": f"Error: {str(e)}"})
# --- Sidebar ---
with st.sidebar:
    st.title("Team Settings")
    # Memory debug section
    if st.checkbox("Show Team Memory Contents", value=False):
        st.subheader("Team Memory Contents (Debug)")
        if "memory_dump" in st.session_state:
            try:
                # Use pformat for potentially complex structures
                memory_str = pformat(st.session_state.memory_dump, indent=2, width=80)
                st.code(memory_str, language="python")
            except Exception as format_e:
                st.warning(f"Could not format memory dump: {format_e}")
                st.json(st.session_state.memory_dump)  # Fall back to JSON
        else:
            st.info("No memory contents to display yet. Interact with the team first.")
    st.markdown(f"**Session ID**: `{st.session_state.team_session_id}`")
    st.markdown(f"**Model**: {model_name}")
    # Memory information
    st.subheader("Team Memory")
    st.markdown("This team remembers conversations within this browser session. Clearing the chat resets the memory.")
    # Clear chat button
    if st.button("Clear Chat & Reset Team"):
        st.session_state.messages = []
        st.session_state.team_session_id = f"streamlit-team-session-{int(time.time())}"  # New ID for clarity
        st.session_state.team = initialize_team()  # Re-initialize the team to reset its state
        if "memory_dump" in st.session_state:
            del st.session_state.memory_dump  # Clear the dump
        st.rerun()
    st.title("About")
    st.markdown("""
**How it works**:
- The team coordinator analyzes your query.
- Tasks are delegated to specialists (Searcher, Crawler, YouTube Analyst, Email, GitHub, HackerNews, General).
- Responses are synthesized into a final answer.
- Team memory retains context within this session.

**Example queries**:
- "What are the latest AI breakthroughs?"
- "Crawl agno.com and summarize the homepage."
- "Summarize the YouTube video: https://www.youtube.com/watch?v=dQw4w9WgXcQ"
- "Draft an email to contact@example.com introducing our research services."
- "Find popular AI repositories on GitHub created in the last month."
- "What's trending on Hacker News today?"
- "What was the first question I asked you?" (tests memory)
""")
Run the Team of Agents:
uv run streamlit run main.py
Troubleshooting and Best Practices
Even the best-planned heists encounter unexpected challenges, and your AI research squad is no exception. Let's talk about some common issues and how to overcome them.
API Key Management
The most common setup issue is missing or invalid API keys. We've built in robust validation to catch these early:
# Simple validation for required keys
required_keys = {
    "OPENROUTER_API_KEY": OPENROUTER_API_KEY,
    "EMAIL_FROM": EMAIL_FROM,
    "EMAIL_TO": EMAIL_TO,
    "GITHUB_ACCESS_TOKEN": GITHUB_ACCESS_TOKEN,
    "RESEND_API_KEY": RESEND_API_KEY
}
missing_keys = [name for name, key in required_keys.items() if not key]
if missing_keys:
    st.error(f"Missing required environment variables: {', '.join(missing_keys)}. Please set them in your .env file or system environment.")
    st.stop()  # Stop execution if keys are missing
Connection and Rate Limit Handling
When working with multiple external APIs, you'll occasionally hit rate limits or connection issues. Our solution is the `exponential_backoff` parameter, which we've added to all our agents:
exponential_backoff=True # Add robustness
This simple addition implements a sophisticated retry strategy that waits progressively longer between attempts, dramatically improving reliability.
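If you're curious what that retry strategy looks like under the hood, here's a minimal stdlib sketch of exponential backoff with jitter. This is our own illustration of the general technique, not Agno's actual implementation:

```python
import random
import time

def with_exponential_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call`, doubling the wait after each failure (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # Out of retries: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: a flaky call that succeeds on the third attempt
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("rate limited")
    return "ok"

print(with_exponential_backoff(flaky, base_delay=0.01))  # -> ok
```

Doubling the delay (1s, 2s, 4s, ...) gives a rate-limited API time to recover, while the small random jitter keeps many clients from retrying in lockstep.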
Model Fallback Strategies
Depending solely on one model provider can be risky. A more resilient approach is to configure model fallbacks:
# Illustrative alternative (not in the current code); verify that your
# Agno version's OpenRouter class supports a fallback parameter like this
model = OpenRouter(
    id="openrouter/optimus-alpha",
    api_key=OPENROUTER_API_KEY,
    fallback_models=[
        "openai/gpt-4-turbo",
        "anthropic/claude-3-opus"
    ]
)
With fallbacks configured, your team can switch to an alternative model when the primary one is unavailable.
Memory Debugging
When conversation history seems off, use the debug toggle in the sidebar to inspect the team's memory:
# Memory debug section
if st.checkbox("Show Team Memory Contents", value=False):
    st.subheader("Team Memory Contents (Debug)")
    if "memory_dump" in st.session_state:
        try:
            # Use pformat for potentially complex structures
            memory_str = pformat(st.session_state.memory_dump, indent=2, width=80)
            st.code(memory_str, language="python")
        except Exception as format_e:
            st.warning(f"Could not format memory dump: {format_e}")
            st.json(st.session_state.memory_dump)  # Fall back to JSON
    else:
        st.info("No memory contents to display yet. Interact with the team first.")
Optimizing Team Design
If your team feels sluggish or uncoordinated, consider these optimizations:
- Specialized Tools: Ensure each agent has only the tools it truly needs
- Clear Instructions: Revisit agent instructions to avoid overlapping responsibilities
- Success Criteria: Set specific success criteria for the team coordinator
- History Limits: Adjust `num_of_interactions_from_history` to balance context and speed
- Stream Responses: Always use `stream=True` for a more responsive user experience
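Put together, a leaner configuration might look like this. It is an illustrative fragment reusing the Team parameters and agents defined earlier; the trimmed member list and query are placeholders:

```python
lean_team = Team(
    name="ResearchAssistantTeam",
    mode="coordinate",
    model=model,
    members=[search_agent, general_agent],  # Only the specialists this workload needs
    success_criteria="The user's query has been answered with cited sources.",
    enable_team_history=True,
    num_of_interactions_from_history=3,  # Smaller window = less context, faster runs
)

# Stream the answer instead of blocking on the full response
for chunk in lean_team.run("What's trending in AI?", stream=True):
    print(chunk.content, end="")
```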
Conclusion - Your AI Research Team in Action
Congratulations! You've just built a sophisticated AI research team that would make Danny Ocean proud. Your squad isn't just a collection of chatbots; it's a coordinated team of specialists that can search the web, crawl websites, analyze YouTube videos, communicate via email, explore GitHub, track tech trends, and synthesize information into cohesive responses.
Let's recap what we've accomplished:
- Environment Setup: A lightning-fast development environment with `uv`
- Specialized Agents: A crew of AI specialists, each with unique tools and abilities
- Team Coordination: A sophisticated delegation system that routes tasks to the right expert
- Sleek UI: A responsive Streamlit interface with real-time streaming responses
- Memory Management: Persistent context that enables natural, ongoing conversations
What Makes This Solution Special
The power of this approach lies in its modularity and extensibility. Need another specialist? Add a new agent with the right tools. Want to switch LLM providers? Swap out OpenRouter for another model. The architecture adapts to your needs without breaking what already works.
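For example, a hypothetical paper-research specialist slots in without touching the rest of the team. The `ArxivTools` import path below is an assumption; verify it for your Agno version:

```python
from agno.tools.arxiv import ArxivTools  # Assumed import path; check your Agno version

paper_agent = Agent(
    name="PaperResearcher",
    model=model,
    tools=[ArxivTools()],
    description="Finds and summarizes academic papers.",
    instructions=["Search arXiv and cite paper URLs in every answer."],
    markdown=True,
    exponential_backoff=True
)
# Then add it to the Team's members list alongside the existing specialists,
# and give the coordinator one extra delegation rule for paper queries.
```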
Compared to single-agent solutions, our team approach offers:
| Aspect | Single Agent | Agent Team |
|---|---|---|
| Specialization | Jack of all trades | Domain experts |
| Tool Usage | One agent switching between tools | Right tool for each agent |
| Response Quality | Generic, broader knowledge | Deep expertise in specific areas |
| Adaptability | Limited to one thinking pattern | Multiple approaches to problems |
Next Steps and Expansions
Now that you have your research team up and running, here are some exciting ways to enhance it:
- Add More Specialists: Create agents for social media monitoring, data analysis, or language translation
- Persistent Database: Switch from SQLite to PostgreSQL for production-grade storage
- Knowledge Bases: Add vector stores to give agents specialized knowledge in their domains
- Custom UI: Build a branded interface with Streamlit Components or graduate to a web framework
- Feedback Loop: Implement user ratings to help agents improve over time
The Future of AI Teams
As AI continues to evolve, the multi-agent approach will become increasingly powerful. By building your research team with Agno and Streamlit today, you're ahead of the curve in a rapidly advancing field. The combination of specialized knowledge, coordinated teamwork, and human-like memory creates an AI experience that feels less like a tool and more like a true research partner.
So go ahead: ask your team something complex and watch as it splits the work, gathers information, and crafts a response that draws on multiple sources of expertise. It's not just impressive; it's a glimpse into the future of AI assistance. Your research squad is ready for action, and the possibilities are limited only by your imagination.
Now that's a heist worth celebrating! 🎉