Added masked loss function from 10.7.5 to Seq2Seq #2656


Open
wants to merge 2 commits into master
Conversation

nickmcgreivy

Description of changes:

In Section 10.7, a masked loss function is added to the Seq2Seq class defined locally in the notebook via the decorator @d2l.add_to_class(Seq2Seq). However, that loss function was never added to the d2l.Seq2Seq class shipped in the library, so code in Chapter 11 that imports d2l.Seq2Seq trains with an unmasked loss and reports incorrect training and validation errors.
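For context, the add_to_class decorator simply registers a free function as a method on an already-defined class, which is why a method added this way in the notebook does not automatically appear in the library's copy of the class. A minimal sketch of the pattern (an assumption based on the book's usage, not necessarily the library's exact implementation):

```python
def add_to_class(Class):
    """Decorator that attaches the decorated function to Class as a method."""
    def wrapper(obj):
        setattr(Class, obj.__name__, obj)
    return wrapper

class Seq2Seq:  # stand-in for the notebook-local class
    pass

@add_to_class(Seq2Seq)
def loss(self, Y_hat, Y):
    # Placeholder body; the real method computes the masked loss.
    return None
```

After the decorator runs, Seq2Seq instances have a loss method, but only on the class object the decorator was handed; a separately packaged d2l.Seq2Seq is untouched, which is exactly the gap this PR closes.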

In this pull request, I simply added the masked loss function to the d2l.Seq2Seq class.
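For readers unfamiliar with Section 10.7.5: the idea of the masked loss is to compute per-token cross-entropy and then zero out (mask) the positions that correspond to padding tokens before averaging, so padding does not contribute to the loss. A self-contained sketch of this technique, with argument names assumed for illustration rather than taken from the library:

```python
import torch
import torch.nn.functional as F

def masked_loss(Y_hat, Y, pad_token):
    """Cross-entropy averaged over non-padding target positions only.

    Y_hat: (batch, steps, vocab) unnormalized logits.
    Y:     (batch, steps) target token indices.
    """
    # Per-position loss, no reduction yet.
    l = F.cross_entropy(
        Y_hat.reshape(-1, Y_hat.shape[-1]), Y.reshape(-1), reduction='none')
    # 1.0 at real tokens, 0.0 at padding.
    mask = (Y.reshape(-1) != pad_token).float()
    return (l * mask).sum() / mask.sum()
```

Without this masking, the loss averages over padded positions as well, which distorts both training gradients and the reported validation error, matching the symptom described above for Chapter 11.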

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

@nickmcgreivy
Author

I also fixed a minor off-by-one bug in the _tokenize function of the MTFraEng class (Section 10.5): previously it returned max_examples + 1 tokenized sentences; it now returns exactly max_examples.
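An off-by-one like this typically comes from a loop that breaks only once the index strictly exceeds the cap, so one extra item slips through. A hedged, simplified sketch of the fix (function and variable names are illustrative, not the library's exact code):

```python
def tokenize_lines(lines, max_examples=None):
    """Split each line on whitespace, keeping at most max_examples lines."""
    out = []
    for i, line in enumerate(lines):
        # Buggy version: `if max_examples is not None and i > max_examples:`
        # which lets index max_examples through, yielding max_examples + 1 items.
        if max_examples is not None and i >= max_examples:
            break
        out.append(line.split())
    return out
```

With `>=`, the loop stops as soon as max_examples items have been collected, matching the behavior the fix describes.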
