Building a Report-Writing Agent Using CrewAI
Build a powerful multi-agent report generation workflow with CrewAI and LangDB. This guide walks through the full setup: from configuring your agents to sharing a public execution trace.
Goal
Create a report-writing AI system where:
A Researcher Agent gathers up-to-date information using web tools like Tavily Search.
An Analyst Agent processes and synthesizes the findings.
A Report Writer Agent generates a clean, markdown-formatted report.
LangDB enables seamless model routing, tracing, and observability across this pipeline, including full visibility into MCP tool calls like Tavily Search used by the Researcher Agent.
Project Structure
report-writing-agent/
├── configs
│ ├── agents.yaml
│ └── tasks.yaml
├── main.py
├── pyproject.toml
├── README.md
├── report.md
└── utils.py
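The agent definitions live in configs/agents.yaml. A minimal sketch of what that file might contain — the role names, goals, and backstories below are illustrative, not the exact project config:

```yaml
# configs/agents.yaml — illustrative sketch; adjust roles and wording to taste
researcher:
  role: "Research Specialist"
  goal: "Gather up-to-date information on {topic} using web search"
  backstory: "An expert at finding and vetting current online sources."

analyst:
  role: "Data Analyst"
  goal: "Synthesize the research findings on {topic} into key insights"
  backstory: "Skilled at spotting patterns and distilling raw findings."

writer:
  role: "Report Writer"
  goal: "Produce a clean, markdown-formatted report on {topic}"
  backstory: "A technical writer who turns insights into readable reports."
```

A matching configs/tasks.yaml would describe one task per agent in the same style.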
Getting Started
To enable the Researcher Agent to retrieve fresh, real‑time information and ensure every search query is recorded for auditing and debugging, we configure a Virtual MCP Server and attach it to a Virtual Model. This setup provides:
Live Web Search: Integrate external search capabilities directly into your agent.
Traceability: All MCP tool calls (search queries, parameters, responses) are logged in LangDB for observability and troubleshooting.
Consistency: Using a dedicated MCP Server ensures uniform search behavior across runs.
Steps to Create a Virtual MCP
In LangDB UI, navigate to Projects → MCP Servers.
Click + New Virtual MCP Server:
Name: web-search-mcp
Underlying MCP: Tavily Search MCP
Requires API Key: ensure your Tavily API key is set in your environment so the server can authenticate.
Navigate to Models → + New Virtual Model:
Name: report-researcher
Base Model: GPT-4.1 or similar
Attach web-search-mcp as the search tool.
Copy the model identifier (e.g. openai/langdb/report-researcher) and use it for the Researcher agent.

LangDB will log all MCP calls for traceability.
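The steps above assume the required API keys are already available in your environment. A typical setup exports them before launching the workflow; the LangDB variable names below are assumptions — confirm them in your LangDB project settings:

```shell
# Placeholder values — replace with your real keys.
# TAVILY_API_KEY authenticates the Tavily Search MCP server.
export TAVILY_API_KEY="tvly-your-key-here"
# Assumed names for the LangDB credentials; check your project settings.
export LANGDB_API_KEY="your-langdb-api-key"
export LANGDB_PROJECT_ID="your-project-id"
```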
Custom Model Usage
You are free to plug any supported LLM into this pipeline. To customize models, simply update the create_llm() calls in Agent Configuration with your preferred model identifiers:
# Example: swap in Claude or custom fine-tune
tool_llm = create_llm("openai/langdb/report-researcher", "research")
analysis_llm = create_llm("openai/gpt-4o", "analysis")
writer_llm = create_llm("openai/google/gemini-2.5-pro", "writer")
Ensure the model string matches a valid LangDB or OpenAI namespace. All routing, tracing, and MCP integrations remain identical regardless of the model.
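The create_llm() helper itself is defined in utils.py. The snippet below is a minimal, stdlib-only sketch of what such a helper might do — assemble the LangDB-routed model identifier, a label for distinguishing trace spans, and the credentials — before handing them to CrewAI's LLM wrapper. The field names and the LANGDB_API_KEY variable are assumptions, not the project's actual implementation:

```python
import os

def create_llm(model: str, label: str) -> dict:
    """Sketch of a create_llm() helper: gathers the settings that would be
    passed on to CrewAI's LLM wrapper. Field names are illustrative."""
    return {
        "model": model,    # e.g. "openai/langdb/report-researcher"
        "label": label,    # tag used to tell trace spans apart in LangDB
        "api_key": os.environ.get("LANGDB_API_KEY", ""),  # assumed env var name
    }

tool_llm = create_llm("openai/langdb/report-researcher", "research")
print(tool_llm["model"])  # openai/langdb/report-researcher
```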
When you create a new Virtual Model in LangDB, it generates a unique model name (for example, openai/langdb/report-researcher@v1). Be sure to replace the example model name in your main.py and in your agent config files with the actual model name generated for your project.
Running the Agent
Execute the workflow by passing a topic:
python main.py "The Impact of AI on Social Media Marketing in 2024"
The CLI will prompt for a topic if none is provided.
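Internally, the topic handling in main.py can be as simple as reading sys.argv and falling back to an interactive prompt. A sketch (the real main.py may differ):

```python
import sys

def get_topic(argv: list[str]) -> str:
    """Return the report topic from CLI args, or prompt the user if absent."""
    if len(argv) > 1:
        # Everything after the script name is treated as the topic.
        return " ".join(argv[1:]).strip()
    return input("Enter a report topic: ").strip()

if __name__ == "__main__":
    # Demo with an explicit argv; in main.py you would pass sys.argv instead.
    print(get_topic(["main.py", "The Impact of AI on Social Media Marketing in 2024"]))
```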
Conclusion
Below is a real, shareable example of a generated report and full execution trace using this pipeline: