---
title: Medical Assistant Bot
emoji: 🏥
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.29.0
app_file: app.py
pinned: false
---
A conversational AI medical assistant that can analyze symptoms, identify possible conditions, and provide health information using LangChain and OpenAI.
While today's LLMs are capable of impersonating medical professionals on their own, this agentic multimodal system demonstrates a more disciplined, secure, and scalable design:
It constrains the model's outputs to only the information it has been explicitly given, in both content and context.
This architecture enables:
- Domain-specific tuning for medical scenarios
- Reduced hallucinations through contextual grounding
- Intelligent interaction with both textual symptoms and medical images
The result? A smarter, safer, and more controllable diagnostic assistant, purpose-built for trust.
The project walks through:
- Market Analysis / Requirements Gathering for "Medical Chatbots"
- Data Extract, Transform, Load
- High-Level Architecture
- Live Demo
Features:
- Symptom & Image Analysis: Understands natural-language input and images using OpenAI Vision
- Disease Matching: Uses FAISS vector search to find matching conditions (see the sketch after this list)
- Severity Assessment: Evaluates the severity of reported symptoms
- Medical Information: Provides descriptions and precautions for conditions
- Conversational Interface: Maintains context throughout the conversation
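A minimal sketch of the matching step, assuming symptoms are embedded with all-MiniLM-L6-v2 and searched against a FAISS index of disease profiles (the disease strings and query below are illustrative, not the project's actual data):

```python
import faiss
from sentence_transformers import SentenceTransformer

# Illustrative disease profiles; the real knowledge base is larger.
diseases = [
    "Migraine: headache nausea light sensitivity",
    "Common Cold: cough sneezing sore throat",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(diseases).astype("float32")

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 nearest-neighbor search
index.add(vectors)

query = model.encode(["throbbing headache and nausea"]).astype("float32")
distances, ids = index.search(query, k=1)
print(diseases[ids[0][0]])  # -> the closest disease profile
```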
Usage:
- Describe your symptoms in the chat interface or upload an image
- Ask follow-up questions about conditions, precautions, or severity
- Clear the conversation when starting a new consultation
Tech Stack:
- Frontend: Streamlit
- LLM: OpenAI GPT-3.5 Turbo / GPT-4.1 (vision)
- Framework: LangChain (a tool-wiring sketch follows this list)
- Vector Search: FAISS
- Embeddings: Sentence Transformers (all-MiniLM-L6-v2)
- Image Support: OpenAI Vision
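How the pieces might fit together, as a hedged sketch: the matching function is exposed to the agent as a LangChain Tool. The function and tool names below are illustrative, not the actual API of `src/medical_tools.py`:

```python
from langchain.tools import Tool

def match_diseases_tool(symptom_text: str) -> str:
    """Hypothetical wrapper around the FAISS search shown above."""
    # ... embed symptom_text, query the index, format the top matches ...
    return "Closest match: Migraine"

disease_matcher = Tool(
    name="disease_matcher",
    func=match_diseases_tool,
    description="Given free-text symptoms, return the closest known conditions.",
)
# The agent (see src/medical_agent_langchain.py) decides when to call this
# tool, which keeps its answers grounded in the knowledge base.
```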
The medical knowledge base used here is small but usable; it includes:
- Disease-symptom relationships
- Symptom severity scores
- Disease descriptions
- Precautionary measures
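For illustration, the four record types might look like the following (hypothetical rows and field names, not the project's actual schema):

```python
# Hypothetical knowledge-base records; the real dataset files may differ.
disease_symptoms = {
    "Migraine": ["headache", "nausea", "visual_disturbances", "irritability"],
}
symptom_severity = {  # numeric severity weights (scale assumed)
    "headache": 3,
    "nausea": 5,
    "visual_disturbances": 4,
}
disease_description = {
    "Migraine": "A neurological condition causing recurrent, often one-sided headaches.",
}
disease_precautions = {
    "Migraine": ["rest in a dark room", "stay hydrated", "consult a doctor if persistent"],
}
```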
Privacy:
- Your OpenAI API key is stored only in your session
- Conversations are not permanently stored
- No personal health information is collected
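A minimal sketch of session-scoped key handling, assuming the app collects the key via a Streamlit widget (the actual app.py may differ):

```python
import streamlit as st

# Keep the key in st.session_state, which lives only for the browser session;
# nothing is written to disk or logged.
if "openai_api_key" not in st.session_state:
    st.session_state["openai_api_key"] = ""

key = st.text_input("OpenAI API key", type="password")
if key:
    st.session_state["openai_api_key"] = key
```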
Run locally:

```bash
# Clone the repository
git clone https://github.com/wbott/medical-assistant-bot
cd medical-assistant-bot

# Install dependencies
pip install -r requirements.txt

# Set your OpenAI API key:
# On Linux/macOS:
export OPENAI_API_KEY="your-key-here"
# On Windows CMD:
set OPENAI_API_KEY=your-key-here
# On Windows PowerShell:
$env:OPENAI_API_KEY="your-key-here"

# Run the app
streamlit run app.py
```
For containerized deployment with Docker:

```bash
# Quick start with Docker Compose
cp .env.example .env  # Edit with your OPENAI_API_KEY
docker-compose up medical-assistant

# Or build and run manually
./scripts/docker/build.sh
docker run -p 8501:8501 -e OPENAI_API_KEY=your-key medical-assistant-bot:latest
```

Development mode with hot reload:

```bash
docker-compose --profile dev up medical-assistant-dev
```

Access the application:
- Streamlit UI: http://localhost:8501
- Development mode: http://localhost:8502
Deploy to AWS SageMaker for production-scale inference:

```bash
# Build SageMaker-compatible image
docker build --target sagemaker -t medical-assistant-sagemaker:latest .

# Deploy to SageMaker (requires AWS credentials)
python scripts/docker/deploy-sagemaker.py \
    --image-uri <your-ecr-uri> \
    --endpoint-name medical-assistant-endpoint \
    --instance-type ml.t2.medium
```
SageMaker Features:
- REST API endpoints (`/ping`, `/invocations`, `/health`)
- Auto-scaling based on traffic
- Multiple instance types (ml.t2.medium to ml.c5.xlarge)
- Enterprise security with IAM roles
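A minimal sketch of calling the deployed endpoint with boto3 (the request and response payload shapes are assumptions; check the handler in the SageMaker image for the actual contract):

```python
import json
import boto3

# Invoke the live endpoint; the name matches the deploy step above.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="medical-assistant-endpoint",
    ContentType="application/json",
    Body=json.dumps({"query": "I have a headache and blurred vision"}),  # assumed payload shape
)
print(json.loads(response["Body"].read()))
```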
Test SageMaker deployment:

```bash
# Test local SageMaker container
python scripts/docker/test-sagemaker.py --endpoint-type local

# Test live SageMaker endpoint
python scripts/docker/test-sagemaker.py --endpoint-type sagemaker
```
For detailed deployment instructions, see CLAUDE.md.
The project includes a comprehensive testing suite with three different testing frameworks to ensure reliability and quality across all components.
Pytest: Comprehensive Python unit and integration testing for core functionality.
Coverage includes:
- Core medical analysis functions (`src/medical_tools.py`)
- Image processing capabilities (`src/vision_tools.py`)
- Agent functionality (`src/medical_agent_langchain.py`)
- JSON serialization and data validation
- Error handling and edge cases
- Parametrized tests for multiple scenarios

Test files:
- `tests/test_medical_tools.py` - Medical analysis functions
- `tests/test_vision_tools.py` - Vision and image processing
- `tests/test_medical_agent.py` - Agent and chat functionality
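For flavor, a parametrized unit test with a mocked external dependency might look like this (the `match_diseases` and `search_index` names are hypothetical, not the project's actual API):

```python
import pytest
from unittest.mock import patch

# Hypothetical import; the real module layout may differ.
from src.medical_tools import match_diseases


@pytest.mark.parametrize(
    "symptoms,expected",
    [
        (["headache", "nausea"], "Migraine"),
        (["fever", "cough"], "Common Cold"),
    ],
)
def test_match_diseases_returns_best_candidate(symptoms, expected):
    # Mock the vector search so no index or API key is needed.
    with patch("src.medical_tools.search_index") as mock_search:
        mock_search.return_value = [expected]
        assert match_diseases(symptoms)[0] == expected
```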
Robot Framework: Keyword-driven testing for complete user workflows and system integration.
Coverage includes:
- System health checks and file validation
- Complete symptom analysis workflows
- Image upload and processing validation
- Conversation memory and context testing
- Error handling and recovery scenarios
- Performance benchmarking
- Medical disclaimer verification
Test files:
- `tests/robot/medical_assistant.robot` - Main test suite
- `tests/robot_keywords/MedicalAssistantKeywords.py` - Custom keywords
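Robot Framework custom keywords are plain Python methods on a library class; a minimal sketch (the method names here are illustrative, not the suite's actual keywords):

```python
# A Robot Framework keyword library: each public method becomes a keyword,
# e.g. `Analyze Symptoms` in the .robot suite.
class MedicalAssistantKeywords:
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def analyze_symptoms(self, symptom_text):
        """Hypothetical keyword: run symptom analysis and return the result."""
        from src.medical_tools import match_diseases  # assumed helper
        return match_diseases([symptom_text])

    def result_should_mention(self, result, expected):
        """Hypothetical assertion keyword used from the .robot file."""
        if expected not in str(result):
            raise AssertionError(f"Expected '{expected}' in result: {result}")
```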
JUnit (Java): Cross-language integration testing for system-level validation.
Coverage includes:
- Python-Java process execution testing
- HTTP API interaction validation
- Performance and memory usage monitoring
- Cross-platform compatibility testing
- System integration scenarios
Test files:
- `tests/java/MedicalAssistantIntegrationTest.java` - Integration tests
- `pom.xml` - Maven configuration
```bash
# Install testing dependencies
pip install -r requirements.txt

# The following testing packages are included:
# - pytest>=7.0.0
# - pytest-html>=3.1.0
# - pytest-cov>=4.0.0
# - pytest-mock>=3.10.0
# - robotframework>=6.0.0
# - robotframework-requests>=0.9.0
```
Run Pytest (Python Unit Tests):

```bash
# Basic test run
pytest tests/

# With verbose output and coverage
pytest tests/ -v --cov=src --cov-report=html

# Run specific test file
pytest tests/test_medical_tools.py -v

# Run with HTML report
pytest tests/ --html=test-results/pytest-report.html --self-contained-html
```
Run Robot Framework (End-to-End Tests):

```bash
# Basic test run
robot tests/robot/medical_assistant.robot

# With custom output directory
robot --outputdir test-results/robot tests/robot/medical_assistant.robot

# Run specific test tags
robot --include smoke tests/robot/medical_assistant.robot

# Dry run (syntax check only)
robot --dryrun tests/robot/medical_assistant.robot
```
Run JUnit Tests (requires Java 11+ and Maven):

```bash
# Install Java and Maven first
sudo apt update
sudo apt install openjdk-11-jdk maven

# Run all tests
mvn test

# Run with specific profile
mvn test -P integration

# Fast tests only (skip slow tests)
mvn test -P fast
```
Run all testing frameworks at once:

```bash
# Execute complete test suite with reporting
python test_runner.py
```
This will:
- Run all pytest tests with coverage reporting
- Execute Robot Framework syntax validation
- Check Java/Maven environment availability
- Generate a comprehensive test report in `test-results/comprehensive-test-report.md`
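As a rough sketch of such a runner (behavior inferred from the list above; the real test_runner.py may differ), it just shells out to each framework and collects results:

```python
import subprocess
from pathlib import Path

# Run each framework, capture pass/fail, and write a combined Markdown report.
steps = {
    "pytest": ["pytest", "tests/", "--cov=src", "--tb=short"],
    "robot dry run": ["robot", "--dryrun", "tests/robot/medical_assistant.robot"],
    "maven check": ["mvn", "--version"],  # availability check only
}

results = {}
for name, cmd in steps.items():
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = "PASS" if proc.returncode == 0 else "FAIL"
    except FileNotFoundError:
        results[name] = "SKIPPED (tool not installed)"

report = Path("test-results/comprehensive-test-report.md")
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("\n".join(f"- {name}: {status}" for name, status in results.items()))
```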
Configuration files:
- `pytest.ini` - Pytest configuration with coverage settings
- `robot.yaml` - Robot Framework configuration
- `pom.xml` - Maven configuration for Java tests
All test results are saved to the `test-results/` directory:

```
test-results/
├── pytest-report.html             # Pytest HTML report
├── coverage-html/                 # Coverage report
├── robot/                         # Robot Framework reports
│   ├── report.html
│   ├── log.html
│   └── output.xml
├── junit/                         # JUnit test reports
└── comprehensive-test-report.md   # Combined report
```
The testing framework is designed for CI/CD integration:
```bash
# CI-friendly commands (non-interactive)
pytest tests/ --tb=short --junit-xml=test-results/junit.xml
robot --outputdir test-results/robot --exitonfailure tests/robot/
```
When adding new features:
- Write unit tests first (pytest) for individual functions
- Add integration tests (Robot Framework) for user workflows
- Include error handling tests for edge cases
- Update test documentation in this section
- Run full test suite before committing changes
Mock Testing:
- Tests include comprehensive mocking for external dependencies
- API keys are not required for most unit tests
- Offline testing capabilities for development environments
For detailed testing architecture and guidelines, see CLAUDE.md.
Medical Disclaimer:
- Always consult with qualified healthcare providers for proper diagnosis
- This tool provides general information only
- Do not make medical decisions based solely on this tool's output
- In case of emergency, contact emergency services immediately