test(e2e+machine): add in machine tests #1212
base: charliedmcb/addBYOMachinePoolE2EBaseStructure
Conversation
```go
	bK8sVersion := lo.Must(semver.Parse(*b.KubernetesVersion))
	return aK8sVersion.GT(bK8sVersion)
}).KubernetesVersion
upgradedMC := env.ExpectSuccessfulUpgradeOfManagedCluster(kubernetesUpgradeVersion)
```
According to the implementation, this might unintentionally wait for PUT to be done, while we want PUT to be ongoing?
Agree - I don't see how the cluster is still in Upgrading at the end of this test, given the above.
Yeah, good catch.
I think I had a local fix for this that changed the func to be async, with an optional wait for polling afterward; this test could then skip the wait, while the regular k8s upgrade test would still use it. It must not have gotten pushed, and it's lost in the codespace now. Sorry about that.
Should be updated now.
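For reference, a minimal sketch of that split, assuming the helper wraps the ARM client's `BeginCreateOrUpdate` (`env.managedClusterClient`, `env.ResourceGroup`, and `env.ClusterName` are assumed names, not necessarily what's in the PR):

```go
// Sketch only: begin the upgrade and hand back the poller, so each caller
// decides whether to wait. Assumes this file's existing imports
// (runtime, containerservice, lo, Gomega).
func (env *Environment) BeginUpgradeOfManagedCluster(kubernetesUpgradeVersion *string) *runtime.Poller[containerservice.ManagedClustersClientCreateOrUpdateResponse] {
	GinkgoHelper()
	mc := env.ExpectGetManagedCluster()
	mc.Properties.KubernetesVersion = kubernetesUpgradeVersion
	poller, err := env.managedClusterClient.BeginCreateOrUpdate(env.Context, env.ResourceGroup, env.ClusterName, mc, nil)
	Expect(err).ToNot(HaveOccurred())
	return poller
}

// This test can leave the PUT ongoing; the regular KubernetesUpgrade test waits:
// upgradedMC := lo.Must(poller.PollUntilDone(env.Context, nil)).ManagedCluster
```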
```yaml
git_ref: ${{ inputs.git_ref }}
location: ${{ inputs.location }}
provisionmode: ${{ inputs.provisionmode }}
identity_type: ${{ inputs.suite == 'Machines' && 'UserAssigned' || 'SystemAssigned' }}
```
Why do we need machine identity to be UserAssigned? Can we leave a comment here or in create-cluster/action.yaml explaining?
This is required for handling the kubelet update/upgrade.
You can't do the user-assigned kubelet update test without the cluster being in a user-assigned identity state.
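Something like this next to the expression (or in create-cluster/action.yaml) would capture that; the wording is just a sketch:

```yaml
# The Machines suite exercises the user-assigned kubelet identity update test,
# which requires the cluster itself to be created with a UserAssigned identity.
identity_type: ${{ inputs.suite == 'Machines' && 'UserAssigned' || 'SystemAssigned' }}
```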
```go
	})
}

func (env *Environment) ExpectGetManagedCluster() containerservice.ManagedCluster {
```
Suggested change:
```diff
-func (env *Environment) ExpectGetManagedCluster() containerservice.ManagedCluster {
+func (env *Environment) ExpectGetManagedCluster() *containerservice.ManagedCluster {
```
```go
func (env *Environment) WarnIfClusterNotInExpectedProvisioningState(expectedProvisioningState string) containerservice.ManagedCluster {
	GinkgoHelper()
	managedCluster := env.ExpectGetManagedCluster()
	Expect(managedCluster.Properties.ProvisioningState).ToNot(BeNil())
```
minor: assumes Properties is not nil
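A minimal guard, reusing the names from the diff above:

```go
managedCluster := env.ExpectGetManagedCluster()
// Guard the intermediate pointer before dereferencing it below.
Expect(managedCluster.Properties).ToNot(BeNil())
Expect(managedCluster.Properties.ProvisioningState).ToNot(BeNil())
```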
```go
// WarnIfClusterNotInExpectedProvisioningState checks if the clusters provisioning state is equal to the
// given expected provisioning state.
func (env *Environment) WarnIfClusterNotInExpectedProvisioningState(expectedProvisioningState string) containerservice.ManagedCluster {
```
Suggested change:
```diff
-func (env *Environment) WarnIfClusterNotInExpectedProvisioningState(expectedProvisioningState string) containerservice.ManagedCluster {
+func (env *Environment) WarnIfClusterNotInExpectedProvisioningState(expectedProvisioningState string) *containerservice.ManagedCluster {
```
```diff
 fail-fast: false
 matrix:
-  suite: [ACR, BYOK, Chaos, Consolidation, Drift, GPU, InPlaceUpdate, Integration, KubernetesUpgrade, NodeClaim, Scheduling, Spot, Subnet, Utilization]
+  suite: [ACR, BYOK, Chaos, Consolidation, Drift, GPU, InPlaceUpdate, Integration, KubernetesUpgrade, NodeClaim, Scheduling, Spot, Subnet, Utilization, Machines]
```
minor: Alphabetical?
TBH it might be a good idea at this point to just break this up and put one suite per line; see the sketch below.
This is YAML, after all; we don't need to mash everything into a single line AFAIK.
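E.g. (alphabetized, one suite per line):

```yaml
matrix:
  suite:
    - ACR
    - BYOK
    - Chaos
    - Consolidation
    - Drift
    - GPU
    - InPlaceUpdate
    - Integration
    - KubernetesUpgrade
    - Machines
    - NodeClaim
    - Scheduling
    - Spot
    - Subnet
    - Utilization
```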
```go
Expect(node.Labels).To(HaveKeyWithValue("kubernetes.azure.com/azure-cni-overlay", "true"))
Expect(node.Labels).To(HaveKeyWithValue("kubernetes.azure.com/podnetwork-type", consts.NetworkPluginModeOverlay))

// Note: these labels we only check their keys since, the values are dynamic
```
Suggested change:
```diff
-// Note: these labels we only check their keys since, the values are dynamic
+// Note: these labels we only check their keys since the values are dynamic
```
```go
})

// NOTE: ClusterTests modify the actual cluster itself, which means that preforming tests after a cluster test
// might not have a clean environment, and might produce unexpected results. Ordering of cluster tests is important
```
minor: Should we add a note here that this is safe in CI because each E2E runs on its own cluster?
So, it's safe so long as the suite itself is set up correctly to handle the ordering. However, there is leakage between test cases within the suite.
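If we want that intra-suite ordering contract to be explicit rather than implicit, Ginkgo's `Ordered` decorator is one option (a sketch; the spec names are placeholders):

```go
// Specs in an Ordered container run strictly in declaration order, which
// documents the dependency between cluster-mutating tests.
var _ = Describe("Machines cluster tests", Ordered, func() {
	It("upgrades the managed cluster", func() { /* ... */ })
	It("updates the kubelet identity", func() { /* ... */ })
})
```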
```go
// updatedKubeletIdentityResourceID := env.GetKubeletIdentityResourceID(env.Context)

// TODO: check if we want to have this possibly logged
// Expect(updatedKubeletIdentityResourceID).To(Equal(lo.FromPtr(newIdentity.ID)), "Expected updatedKubeletIdentityResourceID to match new kubelet resource id")
```
Do we not want this assert? Not sure I follow why not
This was a light concern about leaking the identity info. If there is no concern, then yes, this assert should be there.
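If the only worry is Gomega echoing the IDs in a failure message, comparing before the matcher keeps the assert while keeping the values out of the logs (a sketch, reusing the names from the commented-out code above):

```go
updatedKubeletIdentityResourceID := env.GetKubeletIdentityResourceID(env.Context)
// A bare boolean match means a failure never prints the resource IDs themselves.
Expect(updatedKubeletIdentityResourceID == lo.FromPtr(newIdentity.ID)).
	To(BeTrue(), "expected the kubelet identity resource ID to be updated to the new identity")
```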
```go
}).WithTimeout(3 * time.Minute).Should(Succeed())

By("expecting nodes to drift")
env.EventuallyExpectDriftedWithTimeout(15*time.Minute, nodeClaims...)
```
15m seems like a long time?
especially since this is just detection of drift, not actual drift? Does this really take 15m? Shouldn't it take more like 30s?
Yeah, I think this came from the other drift tests. Agreed it can be shortened; though I feel like it'd be worth looking at the drift logic, or some metrics, to make sure it's not being shortened too far.
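E.g., tightening just the detection window (the 2m value here is a guess and should be validated against the drift controller's sync interval before committing to it):

```go
By("expecting nodes to drift")
env.EventuallyExpectDriftedWithTimeout(2*time.Minute, nodeClaims...)
```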
Fixes #
Description
Built off this underlying framework:
#1200
These tests cover:
How was this change tested?
Tested with this PR:
NOTE: didn't get fully successful runs, and I do expect the kubelet test to fail.
Does this change impact docs?
Release Note