feat: Add tests for - argocd admin import command #22780

Open · wants to merge 71 commits into master

Conversation

yaten2302

Fixes #21895

This PR adds unit tests for the argocd admin import command.


Checklist:

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • The title of the PR states what changed and the related issues number (used for the release note).
  • The title of the PR conforms to the Toolchain Guide
  • I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
  • I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • Does this PR require documentation updates?
  • I've updated documentation as required by this PR.
  • I have signed off all my commits as required by DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My build is green (troubleshooting builds).
  • My new feature complies with the feature status guidelines.
  • I have added a brief description of why this PR is necessary and/or what this PR solves.
  • Optional. My organization is added to USERS.md.
  • Optional. For bug fixes, I've indicated what older releases this fix should be cherry-picked into (this may or may not happen depending on risk/complexity).

@yaten2302 yaten2302 requested a review from a team as a code owner April 24, 2025 11:58

bunnyshell bot commented Apr 24, 2025

❌ Preview Environment undeployed from Bunnyshell



codecov bot commented Apr 24, 2025

Codecov Report

Attention: Patch coverage is 54.54545% with 30 lines in your changes missing coverage. Please review.

Project coverage is 60.33%. Comparing base (54b3c95) to head (255dcc4).
Report is 1 commit behind head on master.

Files with missing lines | Patch % | Lines
cmd/argocd/commands/admin/backup.go | 54.54% | 22 Missing and 8 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #22780      +/-   ##
==========================================
+ Coverage   60.27%   60.33%   +0.06%     
==========================================
  Files         345      345              
  Lines       59097    59111      +14     
==========================================
+ Hits        35618    35665      +47     
+ Misses      20612    20566      -46     
- Partials     2867     2880      +13     

☔ View full report in Codecov by Sentry.

@nitishfy nitishfy self-requested a review April 25, 2025 11:26
Member

@nitishfy nitishfy left a comment


Is this PR ready for review? I only see one test case.

@yaten2302
Author

@nitishfy, added 5 test cases 👍
All of them are passing.

Member

@nitishfy nitishfy left a comment


You're required to add test cases that check these scenarios:

  • Resource should be pruned when --prune and missing from backup
  • Resource should be created when missing from live
  • Resource tracking annotation should not be updated when --ignore-tracking
  • Application operation should be removed when --stop-operation
  • Update operation should retry on conflict when --override-on-conflict
  • Resource should not be pruned if present in backup and backup resource matches --skip-resources-with-label
  • Resource should not be pruned if missing from backup, but live matches --skip-resources-with-label
  • Test application-namespace feature
  • Test applicationset-namespace feature
  • Dry run should not modify any resources
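
Scenarios like these lend themselves to a table-driven test around a small, pure decision helper. The sketch below is purely illustrative (the shouldPrune helper and its field names are not from this PR); it only shows the pattern of enumerating the flag/state combinations:

```go
package main

import "fmt"

// pruneInput captures the facts that drive the prune decision.
// These names are hypothetical, not taken from the actual PR.
type pruneInput struct {
	pruneFlag      bool // --prune was passed
	inBackup       bool // resource exists in the backup
	matchesSkipLbl bool // backup or live object matches --skip-resources-with-label
}

// shouldPrune returns true when a live resource should be deleted:
// pruning is enabled, the resource is absent from the backup, and
// the skip label does not match.
func shouldPrune(in pruneInput) bool {
	return in.pruneFlag && !in.inBackup && !in.matchesSkipLbl
}

func main() {
	cases := []struct {
		name string
		in   pruneInput
		want bool
	}{
		{"pruned when --prune and missing from backup", pruneInput{true, false, false}, true},
		{"kept when present in backup", pruneInput{true, true, false}, false},
		{"kept when skip label matches", pruneInput{true, false, true}, false},
		{"kept when --prune not set", pruneInput{false, false, false}, false},
	}
	for _, c := range cases {
		got := shouldPrune(c.in)
		fmt.Printf("%s: got=%v want=%v\n", c.name, got, c.want)
	}
}
```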

@nitishfy
Member

I'd suggest learning about these things one at a time. For example: what is the use of the import command, why do we use the prune flag, and what is the application namespace feature? Pick one test case at a time. Try to understand what the feature means, look at the code to see how it is written, and then write a test case to validate it. You may even need to put that logic in a new function and call that function in the test case to validate the scenario.

@yaten2302
Author

You're required to add test cases that check these scenarios:

  • Resource should be pruned when --prune and missing from backup
  • Resource should be created when missing from live
  • Resource tracking annotation should not be updated when --ignore-tracking
  • Application operation should be removed when --stop-operation
  • Update operation should retry on conflict when --override-on-conflict
  • Resource should not be pruned if present in backup and backup resource matches --skip-resources-with-label
  • Resource should not be pruned if missing from backup, but live matches --skip-resources-with-label
  • Test application-namespace feature
  • Test applicationset-namespace feature
  • Dry run should not modify any resources

Yes, sure, will add with the flags as well 👍
Actually, I first just wanted to confirm whether the tests I'm creating should be structured like this or whether I should change something in them; that's why I didn't add the flags yet.
NVM, will push a commit for this 👍
Thanks for your suggestions!

@yaten2302
Author

I'd suggest learning about these things one at a time. For example: what is the use of the import command, why do we use the prune flag, and what is the application namespace feature? Pick one test case at a time. Try to understand what the feature means, look at the code to see how it is written, and then write a test case to validate it. You may even need to put that logic in a new function and call that function in the test case to validate the scenario.

Yes, I've already read the entire import command code, and I now have a good understanding of it.
(Previously, I had some trouble understanding it, but now I'm quite familiar with it.) I'm checking how I can add tests for the flags, like --prune, --stop-operation, etc.
I'll also try to put that logic in a separate func (as suggested) to test the flags in Test_importResource 👍

@yaten2302
Author

Hey @nitishfy, could you kindly have a look at the tests for the --prune flag?
The tests are passing; should I add any more test cases?

Member

@agaudreault agaudreault left a comment


I like the structure of the tests, but reading a few of the scenario names, I am not sure what is actually being tested. For instance, "It should update the live object according to the backup object if the backup object doesn't have the skip label" tests the happy path. Where is the test that validates that it should not update the resource when it matches the label?

Similar comment for the following tests: they all seem to be testing the happy path, not the conditional scenarios.

  • "Spec should be updated correctly in live object according to the backup object"
  • "It should update the data of the live object according to the backup object"
  • "It should update live object's spec according to the backup object"

@yaten2302
Author

Yes @agaudreault , actually I forgot to update the names of the test cases. Pushing the changes for the same 👍

Member

@nitishfy nitishfy left a comment


Looks good! I'd wait for @agaudreault's review before approving.

@yaten2302
Author

@agaudreault , a friendly follow up on this PR :)

@agaudreault agaudreault added this to the v3.2 milestone Jun 17, 2025

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
bakObj := decodeYAMLToUnstructured(t, tt.args.bak)
Member


Extract all the test setup code into a function; it should be generic enough that it accepts a liveObj and returns only the client. You can then provide the fakeClient to the "import" method (needs refactoring) and validate the result by doing a Get on the liveObj after the import.

Author


@agaudreault, I didn't get this completely; could you kindly elaborate a bit more, please?

Currently, I've created the decodeYAMLToUnstructured() func, which accepts the YAML of the backup and the live object and decodes them to Unstructured format.

You said it should accept the liveObj and return only the client; I wasn't able to follow that exactly.

Author


Hi @agaudreault, I wanted to confirm that I've understood this correctly: I should extract this import test setup code into a new func, say setupImportTest(), and then call it from Test_importResources().
Have I understood this correctly?

dynClient = dynamicClient.Resource(secretResource).Namespace(bakObj.GetNamespace())
}

if slices.Contains(tt.applicationNamespaces, bakObj.GetNamespace()) || slices.Contains(tt.applicationsetNamespaces, bakObj.GetNamespace()) {
Member

@agaudreault agaudreault Jun 30, 2025


This test seems to re-implement the logic of the run function.

The unit test should be able to test a unit of code for which it can mock its dependencies. In this case, we want to test the NewImportCommand Run function. However, it is quite complex to provide a "mock" of the Kubernetes dependency this way. Instead, extract a function that you can test, that will receive a fake Kubernetes client and the command arguments. This way, you can unit test that code.

Example: func (opts *importOpts) executeImport(client *dynamic.DynamicClient)

Member


+1

Author


Hi @nitishfy @agaudreault, if I've understood this correctly, the current behaviour is: "I'm rewriting the Run() func in backup_test.go and then testing it."

The expected behaviour is: "Instead of rewriting the Run() func, the test should pass a fake Kubernetes client and the command args, like argocd admin import --dry-run import_file.txt, to an extracted function, which then produces the result against the fake client."

Have I understood this correctly?

@yaten2302
Author

Hi @nitishfy @agaudreault, just an update: I haven't had time to work on this PR yet. I'll push the changes soon (most probably by this Saturday or Sunday).

@yaten2302
Author

Hi @nitishfy @agaudreault , I've pushed a commit according to the suggestions.

Since I didn't have any further reviews on how to proceed, I tried refactoring the Run func: I broke it down into an executeImport() func and then unit tested it in Test_executeImport.

Kindly have a look and let me know if I'm going in the correct direction.

Thanks


Successfully merging this pull request may close these issues.

Add unit test for argocd admin export/import CLI command
3 participants