✨ Try Quick Star CLI now! An intelligent AI agent with an elegant command-line interface, real-time streaming responses, and powerful code generation capabilities.
🎯 Play the snake game! This fully functional game was created entirely through natural language conversations with Quick Star CLI, showcasing:
- 🧠 Intelligent Code Generation - Complete game logic from simple descriptions
- 🔧 Real-time Debugging - Iterate and improve code instantly
- 📁 Smart File Management - Handle complex project structures
- 🎨 Interactive Applications - Create engaging user experiences
Ready to build something amazing? Follow the installation guide below and start creating with Quick Star CLI!
This project demonstrates the progressive development of AI agents, from basic tool calling to advanced streaming agents with history control. Each chapter builds upon the previous one, showing incremental improvements and new features.
├── .env # Environment configuration (shared by all chapters)
├── .env.example # Environment template
├── requirements.txt # Python dependencies
├── chapter1_tool_call_api/ # Basic tool calling examples (Native Function Call & XML Tool Call)
├── chapter2_ReAct_agent/ # Basic ReAct agent implementation
├── chapter3_stream_agent/ # Streaming agent with real-time responses
├── chapter4_history_control/ # Advanced agent with conversation history management
├── chapter5_smart_context/ # Smart context management with intelligent cropping
└── chapter6_to_do_write/ # Task management with TodoWrite tool [NEW]
- Native Function Call: Standard OpenAI JSON Schema interface with type safety
- XML Tool Call: Universal XML format compatible with any text model
- Comparison and use cases for both approaches
- Foundation for understanding tool calling patterns
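The two tool-calling formats above can be contrasted in a minimal sketch (the `run_command` tool name and its schema are illustrative, not the chapter's actual definitions): a Native Function Call is a typed JSON Schema object, while an XML Tool Call is plain text that any model can emit and a few lines of parsing can recover.

```python
import xml.etree.ElementTree as ET

# Native Function Call: OpenAI-style JSON Schema tool definition (type-safe)
native_tool = {
    "type": "function",
    "function": {
        "name": "run_command",
        "description": "Execute a shell command",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}

# XML Tool Call: the same invocation as plain text any model can produce
xml_call = "<run_command><command>ls -la</command></run_command>"

def parse_xml_call(text):
    """Parse an XML tool call into (tool_name, arguments_dict)."""
    root = ET.fromstring(text)
    return root.tag, {child.tag: child.text for child in root}

name, args = parse_xml_call(xml_call)
print(name, args)  # run_command {'command': 'ls -la'}
```

The JSON form gets schema validation from the API; the XML form works with any text model at the cost of doing your own parsing.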
- ReAct pattern: Think-Act-Observe cycle for intelligent agents
- Recursive conversation handling for continuous AI interactions
- User approval system for dangerous operations with safety controls
- Singleton conversation manager for consistent state management
- Complete tool execution framework with error handling
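The Think-Act-Observe cycle can be sketched as a small loop. This is a simplified illustration, not the chapter's actual `BaseAgent` code; `call_model` and `execute_tool` are hypothetical callables standing in for the real API client and tool manager.

```python
def react_loop(messages, call_model, execute_tool, max_steps=10):
    """Minimal ReAct loop: Think (call the model), Act (run the requested
    tool), Observe (feed the result back), until a final answer appears."""
    for _ in range(max_steps):
        reply = call_model(messages)                          # Think
        if reply.get("tool") is None:
            return reply["content"]                           # Final answer
        result = execute_tool(reply["tool"], reply["args"])   # Act
        messages.append({"role": "tool", "content": result})  # Observe
    return "step limit reached"

# Demo with a fake model: it requests one tool call, then answers.
def fake_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"tool": None, "content": "done"}
    return {"tool": "echo", "args": {"text": "hi"}}

print(react_loop([{"role": "user", "content": "go"}],
                 fake_model, lambda tool, args: args["text"]))  # done
```

The real implementation adds user approval before dangerous tool calls and routes history through the singleton conversation manager.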
- Character-by-character streaming for immediate AI response visibility
- Streaming tool calls that work seamlessly with tool execution
- Configuration externalization with .env file management
- Graceful degradation with auto-fallback to standard mode
- Improved user experience with no waiting for complete responses
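The consumer side of streaming can be sketched as follows. The chunk shape mirrors what an OpenAI-compatible `stream=True` response yields after extracting each delta string, but this is an illustrative helper, not the chapter's actual code; any iterable of text fragments works.

```python
def consume_stream(stream):
    """Print streamed delta fragments the moment they arrive, then return
    the accumulated full reply (fragments may be None, e.g. a final chunk)."""
    parts = []
    for delta in stream:
        if delta:
            print(delta, end="", flush=True)  # immediate visibility
            parts.append(delta)
    print()
    return "".join(parts)

reply = consume_stream(iter(["Hel", "lo", None]))  # prints "Hello"
```

Graceful degradation means wrapping the streaming call in a try/except and re-issuing the request in standard (non-streaming) mode if the provider rejects streaming.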
- Auto history compression with smart multi-session and single-session strategies
- Real-time token monitoring with context usage percentage display
- Comprehensive cost tracking with model-specific pricing and session summaries
- Preservation guarantees for system messages and recent context
- Performance optimization for long-running conversations
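The compression trigger can be sketched like this. It is a simplified single-session strategy under assumed names (`count_tokens` is a stand-in for the real tokenizer), not the chapter's implementation: once context usage crosses the threshold, older messages are folded into a placeholder while the system message and recent turns are preserved.

```python
def maybe_compress(messages, count_tokens, max_tokens,
                   threshold=0.8, keep_recent=4):
    """Compress history when token usage exceeds threshold * max_tokens,
    always keeping the system message and the most recent turns intact."""
    used = sum(count_tokens(m["content"]) for m in messages)
    if used < threshold * max_tokens or len(messages) <= keep_recent + 1:
        return messages  # under budget: leave history untouched
    system, rest = messages[0], messages[1:]
    dropped, kept = rest[:-keep_recent], rest[-keep_recent:]
    # In the real system the dropped turns are summarized, not just counted.
    summary = {"role": "system",
               "content": f"[Compressed {len(dropped)} earlier messages]"}
    return [system, summary, *kept]
```

The `used / max_tokens` ratio is also what drives the real-time context usage percentage shown in the UI.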
- Precision context control with TOP/BOTTOM message cropping strategies
- Smart Context Cropper tool for manual conversation management
- Safety guarantees protecting latest user messages and system prompts
- Summary support for cropped content to maintain context continuity
- Integration with existing auto-compression and cost tracking systems
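The TOP/BOTTOM cropping idea can be sketched as follows. This is an illustrative simplification of the Smart Context Cropper, assuming the last message is the latest user turn; the real tool also lets the user supply a summary for the cropped span.

```python
def crop_context(messages, n, strategy="TOP"):
    """Crop n messages from the croppable region of the history.
    Safety guarantee: the system prompt (first message) and the latest
    user message (last message) are never removed."""
    if n <= 0 or len(messages) < 3:
        return messages
    system, *body, latest = messages
    n = min(n, len(body))
    if strategy == "TOP":          # drop the oldest croppable messages
        cropped, kept = body[:n], body[n:]
    else:                          # BOTTOM: drop the newest croppable ones
        cropped, kept = body[-n:], body[:-n]
    summary = {"role": "system", "content": f"[Cropped {len(cropped)} messages]"}
    return [system, summary, *kept, latest]
```

Because the cropped span is replaced by a summary marker rather than vanishing silently, downstream auto-compression and cost tracking still see a coherent history.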
- TodoWrite tool for automated task organization and progress tracking
- Intelligent workflow breakdown converting complex requests into structured lists
- Real-time state management with pending/in_progress/completed lifecycle
- Quality assurance gates preventing premature task completion
- Context awareness deciding when todo lists add value vs. simple execution
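The pending/in_progress/completed lifecycle can be sketched with a small state machine. The class and method names here are illustrative, not the chapter's actual TodoWrite API; the point is that a task cannot jump straight to completed, which is what enforces the quality gate.

```python
from dataclasses import dataclass, field

STATES = ("pending", "in_progress", "completed")

@dataclass
class TodoItem:
    text: str
    state: str = "pending"

@dataclass
class TodoList:
    items: list = field(default_factory=list)

    def write(self, text):
        """Add a new task; every task starts as pending."""
        self.items.append(TodoItem(text))

    def advance(self, index):
        """Move a task one step: pending -> in_progress -> completed.
        Skipping states (or advancing a completed task) is rejected."""
        item = self.items[index]
        next_i = STATES.index(item.state) + 1
        if next_i >= len(STATES):
            raise ValueError(f"task already completed: {item.text}")
        item.state = STATES[next_i]
```

Context awareness is the judgment call layered on top: the agent only builds such a list when a request decomposes into multiple steps, and executes trivial requests directly.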
- Python 3.8 or higher
- Conda (recommended) or pip
- OpenAI-compatible API access (OpenRouter, OpenAI, etc.)
git clone https://github.com/woodx9/build-your-claude-code-from-scratch.git
cd build-your-claude-code-from-scratch
# Create new conda environment
conda create -n agentLearning python=3.11
# Activate environment
conda activate agentLearning
You can install dependencies using any of the following methods:
# Install all required packages
pip install -r requirements.txt
# or cd into another chapter
cd chapter5_smart_context
pip install -e .
quickstar
❯ quickstar
══════════════════════════════════════════════════
✦ ✦ ✦ ✦ ✦ ✧ ✧ ✧ ✧ ✧
★ Welcome to Quick Star ★
✧ ✧ ✧ ✧ ✧ ✦ ✦ ✦ ✦ ✦
══════════════════════════════════════════════════
👤
Enter input: hello, reply one
🤖
Hello! Nice to meet you. How can I help you today?
👤
Enter input:
1. Copy the example environment file:
   ```
   cp .env.example .env
   ```
2. Edit the `.env` file with your API credentials:
   ```
   # OpenAI API Configuration
   OPENAI_API_KEY=your_api_key_here
   OPENAI_BASE_URL=https://openrouter.ai/api/v1
   OPENAI_MODEL=anthropic/claude-sonnet-4
   # The unit is k
   MODEL_MAX_TOKENS=200
   COMPRESS_THRESHOLD=0.8
   ```
| Variable | Description | Example |
|---|---|---|
| `OPENAI_API_KEY` | Your API key | `sk-or-v1-...` |
| `OPENAI_BASE_URL` | API endpoint URL | `https://openrouter.ai/api/v1` |
| `OPENAI_MODEL` | Model to use | `anthropic/claude-sonnet-4` |
| `MODEL_MAX_TOKENS` | Max tokens for responses (in thousands) | `200` |
| `COMPRESS_THRESHOLD` | History compression threshold (0.0-1.0) | `0.8` |
cd chapter1_tool_call_api
# Run Native Function Call example
python native_function_call.py
# Run XML Tool Call example
python xml_tool_call.py
This project supports any OpenAI-compatible API. Tested providers include:
- OpenRouter (recommended): Provides access to multiple models
- OpenAI: Official OpenAI API
- Local LLM servers: Any server implementing OpenAI API format
- Sign up at openrouter.ai
- Get your API key from the dashboard
- Use `https://openrouter.ai/api/v1` as the base URL
- Choose from available models like:
  - `anthropic/claude-sonnet-4`
  - `openai/gpt-4`
  - `meta-llama/llama-3.1-70b-instruct`
- Get API key from platform.openai.com
- Use `https://api.openai.com/v1` as the base URL
- Use models like `gpt-4`, `gpt-3.5-turbo`
chapter_X/
├── src/
│ ├── core/
│ │ ├── api_client.py # API client with environment config
│ │ └── conversation.py # Conversation management
│ ├── tools/
│ │ ├── base_agent.py # Base agent implementation
│ │ ├── tool_manager.py # Tool management
│ │ └── cmd_runner.py # Command execution tool
│ └── main.py # Entry point
├── pyproject.toml # Project configuration
└── readme.md # Chapter-specific documentation
Note: Chapter 1 has a simpler structure with direct Python files demonstrating tool calling concepts.
- APIClient: Singleton pattern client with environment variable configuration
- BaseAgent: Core agent logic implementing ReAct pattern
- ToolManager: Manages available tools and their execution
- ConversationManager: Handles conversation history and context
The project includes comprehensive error handling:
- Missing environment variables throw descriptive errors
- API failures are caught and reported
- Tool execution errors are handled gracefully
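The "descriptive errors for missing environment variables" pattern can be sketched in a few lines. This is an illustrative version, not the project's `APIClient` code (which raises its message in Chinese); `require_env` is an assumed helper name.

```python
import os

def require_env(name):
    """Return a required environment variable, raising a descriptive
    error (naming the variable and pointing at .env) when it is missing."""
    value = os.getenv(name, "").strip()
    if not value:
        raise ValueError(
            f"Environment variable {name} is not set or empty; "
            f"check the configuration in your .env file."
        )
    return value
```

Failing fast with the variable's name makes misconfiguration obvious at startup instead of surfacing later as an opaque API error.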
1. Environment variables not found
   ```
   ValueError: 环境变量 OPENAI_API_KEY 未设置或为空。请检查 .env 文件中的配置。
   ```
   (The message means: "Environment variable OPENAI_API_KEY is not set or empty. Check the configuration in the .env file.")
   Solution: Ensure the `.env` file exists and contains all required variables.
2. API connection failed
   ```
   API请求失败: Connection error
   ```
   ("API请求失败" means "API request failed".)
   Solution: Check your internet connection and API endpoint URL.
3. Invalid API key
   ```
   API请求失败: Unauthorized
   ```
   Solution: Verify your API key is correct and has sufficient credits.
# Test if environment variables are loaded correctly
python -c "
import sys
sys.path.append('chapter2_ReAct_agent/src')
from core.api_client import APIClient
client = APIClient()
print('✅ Environment loaded successfully!')
print(f'Using model: {client.model}')
"
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT license
For issues and questions:
- Check the troubleshooting section
- Review chapter-specific README files
- Open an issue on the repository
Note: This project demonstrates progressive AI agent development. Start with Chapter 1 to understand basic tool calling concepts, then move to Chapter 2 for ReAct patterns, Chapter 3 for streaming capabilities, Chapter 4 for advanced history management, Chapter 5 for smart context control, and Chapter 6 for task management with TodoWrite.