Train LoRAs with guided notebooks instead of confusing command lines
This is a user-friendly LoRA training system based on proven methods from popular Google Colab notebooks. Instead of typing scary commands, you get helpful widgets that walk you through each step. Works on your own computer or rented GPU servers.
- ✨ What You Get
- 🚀 Quick Start
- 📖 How to Use
- 🔧 Architecture
- 🐛 Troubleshooting
- 🏆 Credits
- 🔒 Security
- 📄 License
- 🤝 Contributing
- 🎓 Beginner-friendly: Helpful explanations and step-by-step guidance
- 🧮 Training calculator: Shows exactly how long training will take (see the step-math sketch below)
- 🛠️ Easy setup: Works with VastAI, RunPod, and local computers
- 📊 Dataset tools: Auto-tag images, upload files, manage captions
- 🚀 Multiple options: SDXL, SD 1.5, various optimizers and LoRA types
All in simple notebooks - no command line required!
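As a concrete example, the calculator's arithmetic follows the usual kohya-style step formula. Here is a minimal sketch of that math with illustrative numbers (not the widget's exact code):

```python
# Kohya-style step math (sketch; the calculator widget does this for you)
images = 40       # training images
repeats = 5       # repeats per image per epoch
epochs = 10
batch_size = 2

steps_per_epoch = (images * repeats) // batch_size  # 100
total_steps = steps_per_epoch * epochs              # 1000
print(f"~{total_steps} total training steps")
```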
- GPU: NVIDIA (8GB+ VRAM) OR AMD GPU (16GB+ VRAM recommended for RDNA2/3)
- Python: Version 3.10.6 (compatible with Kohya-ss training)
- Platform: Windows or Linux-based operating systems
- Device: Local GPU or rented cloud GPU instances (not Google Colab)
✅ Recommended (Easy Setup):
- VastAI: PyTorch containers with Python 3.10 (NVIDIA + select AMD GPUs)
- RunPod: CUDA development templates (NVIDIA GPUs)
- Local NVIDIA: Anaconda/Miniconda with Python 3.10.6 + CUDA
- Local AMD (Linux): Anaconda/Miniconda with Python 3.10.6 + ROCm 6.2+
🧪 Experimental AMD Support:
- Local AMD (Windows): ZLUDA or DirectML acceleration
- Cloud AMD: Limited availability on popular GPU rental platforms.
⚠️ NO SUPPORT for local Macintosh ARM (M1-M4) machines. We are currently researching how to support Macs, Intel or otherwise.
Check your Python version first:
python --version
# Need: Python 3.10.6 (other versions may break dependencies)
If you don't have Python 3.10.6:
# Create conda environment (recommended)
conda create -n lora-training python=3.10.6 -y
conda activate lora-training
# Or install Python 3.10.6 directly from python.org
Always activate your environment before installation:
conda activate lora-training # If using conda
Prerequisites: Git (for downloading) and Python 3.10.6
Quick Git Check:
git --version # If this fails, install Git first
Install Git if needed:
- Windows: Download from git-scm.com
- Mac: run xcode-select --install in Terminal
- Linux: sudo apt install git (Ubuntu/Debian)
Download and Setup:
# 1. Clone the repository
git clone https://github.com/Ktiseos-Nyx/Lora_Easy_Training_Jupyter.git
cd Lora_Easy_Training_Jupyter
# 2. Run the installer (downloads ~10-15GB)
python ./installer.py
# Alternative for Mac/Linux:
chmod +x ./jupyter.sh && ./jupyter.sh
- Open Jupyter (if not already running):
jupyter notebook  # Or: jupyter lab
- Use the notebooks in order:
  1. Dataset_Maker_Widget.ipynb - Prepare images and captions
  2. Lora_Trainer_Widget.ipynb - Configure and run training
  3. LoRA_Calculator_Widget.ipynb - Calculate optimal steps (optional)
Open Dataset_Maker_Widget.ipynb and run the cells in order:
# Cell 1: Environment setup (if needed)
from shared_managers import create_widget
setup_widget = create_widget('setup')
setup_widget.display()
# Cell 2: Dataset preparation
dataset_widget = create_widget('dataset')
dataset_widget.display()
Upload your images (ZIP files work great!) and the system will auto-tag them for you.
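If your images are sitting in a folder, you can create that ZIP with two lines of standard-library Python before uploading:

```python
import shutil

# Produces my_dataset.zip in the current directory; upload it in the widget
shutil.make_archive("my_dataset", "zip", root_dir="path/to/your/images")
```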
To use custom models or VAEs, you need to provide a direct download link. Here’s how to find them on popular platforms:
On Civitai:
Method 1: Using the Model Version ID
This is the easiest method if a model has multiple versions.
- Navigate to the model or VAE page.
- Look at the URL in your browser's address bar. If it includes ?modelVersionId=XXXXXX, you can copy the entire URL and paste it directly into the widget.
- If you don't see this ID, try switching to a different version of the model and then back to your desired version. The ID should then appear in the URL.
Method 2: Copying the Download Link
Use this method if the model has only one version or if a version has multiple files.
- On the model or VAE page, scroll down to the "Files" section.
- Right-click the Download button for the file you want.
- Select "Copy Link Address" (or similar text) from the context menu.
On Hugging Face:
Method 1: Using the Repository URL
- Go to the main page of the model or VAE repository you want to use.
- Copy the URL directly from your browser's address bar.
Method 2: Copying the Direct File Link
- Navigate to the "Files and versions" tab of the repository.
- Find the specific file you want to download.
- Click the "..." menu to the right of the file size, then right-click the "Download" link and copy the link address.
Open Lora_Trainer_Widget.ipynb and run the cells to start training:
# First, set up your environment
from widgets.setup_widget import SetupWidget
setup_widget = SetupWidget()
setup_widget.display()
# Then configure training
from widgets.training_widget import TrainingWidget
training_widget = TrainingWidget()
training_widget.display()
Core modules:
- core/managers.py: SetupManager, ModelManager for environment setup
- core/dataset_manager.py: Dataset processing and image tagging
- core/training_manager.py: Hybrid training manager with advanced features
- core/utilities_manager.py: Post-training utilities and optimization

Widget modules:
- widgets/setup_widget.py: Environment setup and model downloads
- widgets/dataset_widget.py: Dataset preparation interface
- widgets/training_widget.py: Training configuration with advanced mode
- widgets/utilities_widget.py: Post-training tools
🔥 AMD GPU Training is now supported through multiple acceleration methods:
- Requirements: Linux, AMD RDNA2/3 GPU, ROCm 6.1+ drivers
- Installation: Automatic via setup widget "Diagnose & Fix" button
- Performance: Native AMD acceleration, best compatibility
- Setup Command:
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.0
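After installing, a quick sanity check confirms the ROCm build sees your GPU (PyTorch's ROCm wheels reuse the torch.cuda API):

```python
import torch

print(torch.__version__)          # ROCm wheels include a "+rocm" suffix
print(torch.cuda.is_available())  # True if the AMD GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```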
- Requirements: AMD RDNA2+ GPU, ZLUDA runtime libraries
- Installation: Manual - download from ZLUDA GitHub
- Performance: CUDA-to-AMD translation layer, experimental but promising
- Status: Some limitations with matrix operations, actively developed
- Requirements: Windows, any DirectX 12 compatible AMD GPU
- Installation:
pip install torch-directml
- Performance: Lower performance but broader compatibility
- Limitations: Limited LoRA training support
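A quick way to confirm torch-directml is working before training (a sketch; the default device index is assumed):

```python
import torch
import torch_directml

dml = torch_directml.device()      # default DirectML device
x = torch.randn(2, 3, device=dml)  # allocate a tensor on the AMD GPU
print(x.device)                    # e.g. "privateuseone:0"
```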
- RDNA2/3: 16GB+ VRAM recommended (RX 6800 XT, RX 7900 XTX)
- Older Cards: May work with reduced settings
- Memory Optimization: Enable gradient checkpointing for large models
- Batch Size: Start with 1, increase gradually
- Resolution: 768x768 recommended vs 1024x1024 for NVIDIA
- Optimizer: CAME optimizer saves significant VRAM
- Mixed Precision: fp16 may have compatibility issues, try bf16
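Pulling the recommendations above together, an AMD-friendly starting point might look like this (field names are illustrative, not the widget's exact options):

```python
# Illustrative AMD starting values; set the equivalents in the training widget
amd_starting_point = {
    "train_batch_size": 1,           # start with 1, increase gradually
    "resolution": "768,768",         # vs 1024x1024 typical on NVIDIA
    "optimizer": "CAME",             # saves significant VRAM
    "mixed_precision": "bf16",       # fp16 may have compatibility issues
    "gradient_checkpointing": True,  # trades speed for memory headroom
}
```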
- The Flux_SD3_Training/ folder contains work-in-progress Flux and SD3.5 LoRA training
- May not function correctly; still under active development
- Use at your own risk, for testing purposes only
- Docker/VastAI users: Triton compiler may fail with AdamW8bit optimizer
- Symptoms: "TRITON NOT FOUND" or "triton not compatible" errors
- Solution: System will auto-fallback to AdamW (uses more VRAM but stable)
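You can check whether Triton is importable before picking an optimizer (a small diagnostic sketch):

```python
try:
    import triton  # noqa: F401
    print("Triton found - AdamW8bit should be usable")
except ImportError:
    print("Triton missing - use AdamW (the system falls back automatically)")
```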
- ONNX Runtime: Dependency conflicts possible between onnxruntime-gpu and open-clip-torch
- Support: Non-NVIDIA cards are currently untested and in development
- Symptoms: Untested on cards with under 24 GB of VRAM
- Solution: We will gather users who can help test this
- Note: Will NOT work on Intel iMacs or Mac Metal machines
- DoRA, GLoRA, BOFT (Butterfly): May not function correctly yet
- Status: Currently under testing and validation
- Recommendation: Use standard LoRA or LoCon for stable results
- More testing: Additional compatibility testing is ongoing
- GitHub Issues: Report bugs and feature requests
- Documentation: Check tooltips and explanations in widgets
- Community: Share your LoRAs and experiences!
🙏 Built on the Shoulders of Giants
This project builds upon and integrates the excellent work of:
- Jelosus2's LoRA Easy Training Colab - Original Colab notebook that inspired this adaptation
- Derrian-Distro's LoRA Easy Training Backend - Core training backend and scripts
- HoloStrawberry's Training Methods - Community wisdom and proven training techniques
- Kohya-ss SD Scripts - Foundational training scripts and infrastructure
- Linaqruf - Pioneer in accessible LoRA training, creator of influential Colab notebooks and training methods that inspired much of this work
- AndroidXXL, Jelosus2 - Additional Colab notebook contributions that made LoRA training accessible
- ArcEnCiel - Ongoing support and testing
- Civitai - Platform for sharing LoRAs
- LyCORIS Team - Advanced LoRA methods (DoRA, LoKr, etc.)
Special thanks to these creators for making LoRA training accessible to everyone!
Found a security issue? Check our Security Policy for responsible disclosure guidelines.
MIT License - Feel free to use, modify, and distribute. See LICENSE for details.
We welcome contributions! Check out our Contributing Guide for details on how to get involved. Feel free to open issues or submit pull requests on GitHub.
Made with ❤️ by the community, for the community.