Steps for model fine-tuning or resuming training #561
Unanswered
cosalexelle asked this question in Q&A
Replies: 1 comment 4 replies
-
Hello, did you figure anything out?
-
Hi!
Great work with the fork!
I have two models, G_1200 and D_1200, trained on a dataset, and I want to continue training them with the same speaker and dataset, using these as a base. How would we do that?
I no longer have the tensorboard outputs, only the two G/D models.
Would the correct method be to run
svc pre-resample
svc pre-config
svc pre-hubert
on dataset_raw as usual, then, before training, place the two G/D models in logs/44k/ and rename them to G_0 and D_0? That's how I am running it now.
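For what it's worth, the checkpoint-placement step above can be sketched as a plain shell sequence (logs/44k/ is the fork's default log directory; the touch lines just stand in for the real checkpoint files):

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# Stand-ins for the previously trained checkpoints
touch G_1200.pth D_1200.pth

# Place them where the trainer looks for checkpoints, renamed so they
# are picked up as the starting G/D pair
mkdir -p logs/44k
mv G_1200.pth logs/44k/G_0.pth
mv D_1200.pth logs/44k/D_0.pth

ls logs/44k   # lists D_0.pth and G_0.pth
```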
The metric "loss/g/total" seems to start slightly higher than where the previous training left off: it begins around 36.0 and after a few minutes drops to approximately 31.5. This is higher than expected, but could be due to the dataset being re-preprocessed, so different items may end up in the train/test/val splits.
Edit: tensorboard also shows starting at approx step 7000, which I assume is embedded in the models?
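In case anyone wants to verify the same thing: assuming the fork keeps the usual VITS-style checkpoint layout (a dict with "model", "iteration", "optimizer", and "learning_rate" keys; that layout is an assumption on my part), the stored step can be read directly with torch:

```python
import torch

def checkpoint_step(path: str) -> int:
    """Return the global step recorded inside a VITS-style checkpoint."""
    ckpt = torch.load(path, map_location="cpu")
    return ckpt["iteration"]

# Demo with a minimal fake checkpoint shaped like what the trainer saves;
# point it at logs/44k/G_0.pth to inspect a real one.
torch.save({"model": {}, "iteration": 7000,
            "optimizer": {}, "learning_rate": 1e-4}, "G_demo.pth")
print(checkpoint_step("G_demo.pth"))
```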
Are there any other recommendations on how to fine-tune the models?
(on an unrelated note, I am creating a new Google Colab notebook with some enhancements which I may submit a pull request for later)