Releases: mindverse/Second-Me
v1.0.1 Bug Fixes and New Features 🎉
Key Changes
Feature Enhancements
- Train L1 Exposure Functionality (#319)
  - Added frontend training progress components
  - Implemented new exposure model training components
  - Added backend support with L1_exposure_manager
  - Optimized the training progress tracking system
- Meta Exposure Features (#313)
  - Updated TrainingTags support
  - Enhanced upload client functionality
  - Optimized upload routes
Bug Fixes
- Model-related Fixes
  - Optimized the model tokenizer implementation and removed redundant code
  - Fixed errors during the GraphRAG process (#347)
  - Resolved model deployment issues in CUDA + Windows environments
- System Optimizations
  - Code optimization: removed redundant code in route_l2.py (#332), reducing the file by approximately 550 lines
Technical Details
- 11 commits included
- 15 files modified
- Approximately 492 lines added, 631 lines removed
Testing Status
- Verified on both Windows and Linux environments
- CUDA environment deployment testing passed
- Training process integrity tests passed
Notes
This release includes a series of important features and critical fixes. We recommend that team members conduct comprehensive testing after merging, especially for model training and deployment-related functionality.
v1.0.0 - First Release 🎉
🎉 First Release: Second Me v1.0.0
We’re excited to announce our first official release — the comeback version!
🚀 Deployment
- Cross-platform support: deploy across Mac, Linux, and Docker!
- Docker support: easily deploy using Docker for a streamlined setup experience.
- Non-Docker deployment: Mac and Linux are now supported. We recommend using uv or a similar environment-isolation tool to avoid package conflicts.
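For the non-Docker path, the uv recommendation above can look like the following sketch. The commands are standard uv usage; the dependency file name is an assumption about the repo layout, so substitute whatever the project actually ships.

```shell
# Create an isolated virtual environment with uv and activate it
uv venv .venv
source .venv/bin/activate

# Install dependencies into the isolated environment
# (requirements.txt is assumed here; use the repo's actual dependency file)
uv pip install -r requirements.txt
```

Because uv resolves and installs into its own `.venv`, the project's packages never collide with system-wide Python packages.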
🧠 New Feature – Thinking Mode (Beta)
- We've introduced Thinking Mode, now available in the Playground environment!
- This mode enhances the chain of thought for better reasoning — but heads-up, responses will be slightly slower.
Requirements:
- Recommended for models with 3B parameters or more.
- Currently requires a DeepSeek API (DeepSeek is the only supported provider for now).
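Since DeepSeek exposes an OpenAI-compatible API, wiring up the required provider amounts to sending a standard chat-completion request to DeepSeek's endpoint. A minimal sketch of building such a request follows; the endpoint, model name, and helper function are illustrative, not taken from the Second-Me codebase.

```python
import json

# DeepSeek's public OpenAI-compatible chat endpoint (assumption: the app
# targets this standard endpoint; check your configuration for the real URL).
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Build the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarize my notes")
print(json.dumps(body))
```

The actual call would POST this body with an `Authorization: Bearer <DeepSeek API key>` header; any OpenAI-compatible client library can be pointed at the same base URL.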
⚡ CUDA Support Is Here (Huge Thanks to the Community!)
CUDA training is now supported!
- Works on A100 and consumer-grade GPUs of the same generation.
- Training runs on CUDA, while inference still uses llama.cpp (CPU).
- So far, testing has been primarily on Linux + GPU environments.
💡 This major contribution was made possible by @zpitroda — huge thanks for pushing this forward! 🙏
→ #228: Added CUDA support
📜 What's Changed
- fix: collection reference bug in document chunking logic by @0xsenty in #214
- feat: Implement model downloading from ModelScope by @bluechanel in #213
- feat: Add Dimension Mismatch Handling for ChromaDB by @PStarH in #207
- fix: incorrect script paths in stop.sh and start.sh by @mdqst in #203
- feat: update Docker Compose command for Windows compatibility by @Undertone0809 in #181
- fix: Find correct llama-server location in #186
- fix: Wrong encoding on reading backend.log by @GoForceX in #171
- feat: Adapting the docker compose command by @co0ontty in #168
- Tunning README for Docker Desktop configuration by @Garyyyyyyyyyyy in #163
- fix: use 127.0.0.1 instead of localhost for Ollama connections by @CXL-edu in #156
- Fix tiktoken compatibility with non-OpenAI models by @CXL-edu in #155
- fix: implement dynamic top_p parameter adjustment for LLM API calls by @Airmomo in #104
- fix: add m1 homebrew conda.sh path by @llshicc in #97
- fix: handle long text in embedding requests properly by @Airmomo in #90
- fix: Replace file path handling with pathlib to resolve f-string syntax error by @Airmomo in #86
- fix: download qwen model to the correct directory by @xyb in #85
- Fix model download step being skipped during training by @vijaythecoder in #84
- feat: WeChat Bot Integration for Second-Me by @Zero-coder in #81
- Refactor: Implement 'with' statement for file handling by @mahdirahimi1999 in #74
- Fix typos by @omahs in #60
- security! by @umutcrs in #62
- Add Star History by @xs10l3 in #41
- CONTRIBUTING.md: Seond Me → Second Me by @david-dong828 in #25
🙏 Special Thanks
We sincerely thank all our early contributors for supporting Second Me during this journey!
Special appreciation to everyone who submitted PRs, tested features, reported bugs, and helped shape this release.
You are part of this story. 🚀
📣 Get Involved
- ⭐ Star the repo to support us!
- 💬 Come hang out with us on Discord: https://discord.com/invite/GpWHQNUwrg
- 🗣️ Join the discussions and help shape the future of AI Identity infrastructure.