An advanced MCP server that provides intelligent web search using OpenAI's reasoning models. Ideal for AI assistants that need up-to-date information combined with smart reasoning.
- 🧠 Reasoning Model Support: Full compatibility with OpenAI's latest reasoning models (gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini)
- ⚡ Smart Effort Control: Intelligent `reasoning_effort` defaults based on use case
- 🔄 Multi-Mode Search: Fast iterations with gpt-5-mini or deep research with gpt-5
- 🌍 Localized Results: Support for location-based search customization
- 📝 Rich Descriptions: Complete parameter documentation for easy integration
- 🔧 Flexible Configuration: Environment variable support for easy deployment
```shell
OPENAI_API_KEY=sk-xxxx uvx --with openai-websearch-mcp openai-websearch-mcp-install
```
Replace `sk-xxxx` with your OpenAI API key from the OpenAI Platform.
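Before launching, it can help to confirm the key is actually exported in the shell that will run the server. A quick sanity check (not part of the installer):

```shell
# Sanity check: confirm OPENAI_API_KEY is visible to child processes
if [ -n "${OPENAI_API_KEY}" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing" >&2
fi
```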
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}
```
Add to your MCP settings in Cursor:
- Open Cursor Settings (`Cmd/Ctrl + ,`)
- Search for "MCP" or go to Extensions → MCP
- Add server configuration:
```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}
```
Claude Code automatically detects MCP servers configured for Claude Desktop. Use the same configuration as above for Claude Desktop.
For local testing, use the absolute path to your virtual environment:
```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "/path/to/your/project/.venv/bin/python",
      "args": ["-m", "openai_websearch_mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini",
        "PYTHONPATH": "/path/to/your/project/src"
      }
    }
  }
}
```
Intelligent web search with reasoning model support.
| Parameter | Type | Description | Default |
|---|---|---|---|
| `input` | string | The search query or question to search for | Required |
| `model` | string | AI model to use. Supports gpt-4o, gpt-4o-mini, gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini | `gpt-5-mini` |
| `reasoning_effort` | string | Reasoning effort level: `minimal`, `low`, `medium`, `high` | Smart default |
| `type` | string | Web search API version | `web_search_preview` |
| `search_context_size` | string | Context amount: `low`, `medium`, `high` | `medium` |
| `user_location` | object | Optional location for localized results | `null` |
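Internally, these parameters presumably end up in a request to OpenAI's Responses API. The sketch below (not the server's actual code; `build_request` is a hypothetical helper) shows one plausible mapping, including the rule that reasoning parameters are applied only to reasoning-capable models:

```python
# Sketch (hypothetical): map the tool parameters above onto a Responses API payload.
# Parameter names mirror the table; the server's real implementation may differ.
REASONING_MODELS = {"gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"}

def build_request(input, model="gpt-5-mini", reasoning_effort=None,
                  type="web_search_preview", search_context_size="medium",
                  user_location=None):
    tool = {"type": type, "search_context_size": search_context_size}
    if user_location is not None:
        tool["user_location"] = user_location
    payload = {"model": model, "input": input, "tools": [tool]}
    # Only reasoning-capable models accept reasoning.effort (see Troubleshooting)
    if reasoning_effort and model in REASONING_MODELS:
        payload["reasoning"] = {"effort": reasoning_effort}
    return payload

# gpt-4o is not a reasoning model, so the effort setting is dropped silently
req = build_request("quantum computing news", model="gpt-4o", reasoning_effort="low")
```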
Once configured, simply ask your AI assistant to search for information using natural language:
"Search for the latest developments in AI reasoning models using openai_web_search"
"Use openai_web_search with gpt-5 and high reasoning effort to provide a comprehensive analysis of quantum computing breakthroughs"
"Search for local tech meetups in San Francisco this week using openai_web_search"
The AI assistant will automatically use the `openai_web_search` tool with appropriate parameters based on your request.
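Under the hood, the assistant issues a standard MCP `tools/call` request. For the first example above, it might look roughly like this (argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "openai_web_search",
    "arguments": {
      "input": "latest developments in AI reasoning models",
      "model": "gpt-5-mini",
      "reasoning_effort": "low"
    }
  }
}
```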
- Recommended: `gpt-5-mini` with `reasoning_effort: "low"`
- Use Case: Fast iterations, real-time information, multiple quick queries
- Benefits: Lower latency, cost-effective for frequent searches
- Recommended: `gpt-5` with `reasoning_effort: "medium"` or `"high"`
- Use Case: Comprehensive analysis, complex topics, detailed investigation
- Benefits: Multi-round reasoned results, no need for agent iterations
| Model | Reasoning | Default Effort | Best For |
|---|---|---|---|
| gpt-4o | ❌ | N/A | Standard search |
| gpt-4o-mini | ❌ | N/A | Basic queries |
| gpt-5-mini | ✅ | low | Fast iterations |
| gpt-5 | ✅ | medium | Deep research |
| gpt-5-nano | ✅ | medium | Balanced approach |
| o3 | ✅ | medium | Advanced reasoning |
| o4-mini | ✅ | medium | Efficient reasoning |
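The "Default Effort" column amounts to a simple lookup. A sketch of the implied smart-default logic (the server's actual implementation may differ):

```python
# Sketch of the smart defaults implied by the table above (hypothetical helper).
DEFAULT_EFFORT = {
    "gpt-5-mini": "low",
    "gpt-5": "medium",
    "gpt-5-nano": "medium",
    "o3": "medium",
    "o4-mini": "medium",
}

def default_effort(model):
    # Non-reasoning models (gpt-4o, gpt-4o-mini) take no effort setting at all
    return DEFAULT_EFFORT.get(model)
```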
```shell
# Install and run directly
uvx openai-websearch-mcp

# Or install globally (uvx itself has no install subcommand; use uv tool)
uv tool install openai-websearch-mcp
```
```shell
# Install from PyPI
pip install openai-websearch-mcp

# Run the server
python -m openai_websearch_mcp
```
```shell
# Clone the repository
git clone https://github.com/yourusername/openai-websearch-mcp.git
cd openai-websearch-mcp

# Install dependencies
uv sync

# Run in development mode
uv run python -m openai_websearch_mcp
```
```shell
# Clone and setup
git clone https://github.com/yourusername/openai-websearch-mcp.git
cd openai-websearch-mcp

# Create virtual environment and install dependencies
uv sync

# Run tests
uv run python -m pytest

# Install in development mode
uv pip install -e .
```
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key | Required |
| `OPENAI_DEFAULT_MODEL` | Default model to use | `gpt-5-mini` |
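Resolving these comes down to ordinary environment lookups. A sketch (variable names match the table; the fallback mirrors the documented default; `load_config` is a hypothetical helper):

```python
import os

def load_config(env=os.environ):
    """Resolve server configuration from environment variables (sketch)."""
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required")
    return {
        "api_key": api_key,
        # Falls back to the documented default model
        "model": env.get("OPENAI_DEFAULT_MODEL", "gpt-5-mini"),
    }
```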
```shell
# For uvx installations
npx @modelcontextprotocol/inspector uvx openai-websearch-mcp

# For pip installations
npx @modelcontextprotocol/inspector python -m openai_websearch_mcp
```
Issue: "Unsupported parameter: 'reasoning.effort'"
Solution: This occurs when a non-reasoning model (gpt-4o, gpt-4o-mini) is combined with the `reasoning_effort` parameter. The server handles this automatically by applying reasoning parameters only to compatible models.
Issue: "No module named 'openai_websearch_mcp'"
Solution: Ensure the package is installed correctly and that your Python path includes its location.
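A quick way to check where (or whether) Python can find the package — a generic diagnostic, not specific to this project:

```python
import importlib.util

def locate(module_name):
    """Return the file path Python would import the module from, or None."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# Prints None if the package is not importable from this environment
print(locate("openai_websearch_mcp"))
```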
This project is licensed under the MIT License - see the LICENSE file for details.
- 🤖 Generated with Claude Code
- 🔥 Powered by OpenAI's Web Search API
- 🛠️ Built on the Model Context Protocol