templates/upgrade_main/r1154upgradeselfhostedagentvm.j2
|**Metadata**|**Description** |
|--|--|
|Doc Title| MVM v3: Manage Self-hosted agent VM|
|Navigation|[WIKI Home Page](https://dev.azure.com/mvmprodeus2/MVM/_wiki/wikis/documentation/1/documents-home#)|
|Tracking| Document Number: VPE-5512-004|
|Author| Graeme Thomson (gt163y) |
| Agreement Number | 24252.S.005 |
***
**Notices**
Copyright © 2025 Metaswitch Networks. All rights reserved.
This manual is Confidential Information of Metaswitch Networks subject to the confidentiality terms
of the Agreement 01019223 as amended between AT&T and Metaswitch Networks.
It is issued on the understanding that no part of the product code or documentation (including this manual)
will be copied or distributed without prior agreement in writing from Metaswitch Networks and Alianza, Inc.
Metaswitch Networks and Alianza reserve the right to, without notice, modify or revise all or part of
this document and/or change product features or specifications and shall not be responsible for any
loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and
products referenced herein are the trademarks or registered trademarks of their respective holders.
Product(s) and features documented in this manual handle various forms of data relating to your
users. You must comply with all laws and regulations applicable to your deployment, management,
and use of said product(s), and you should take all appropriate technical and organizational
measures to ensure you are handling this data appropriately according to any local legal and
regulatory obligations.
You are responsible for determining whether said product(s) or feature(s) is/are appropriate for
storage and processing of information subject to any specific law or regulation and for using said
product(s) or feature(s) in a manner consistent with your own legal and regulatory obligations. You
are also responsible for responding to any request from a third party regarding your use of said
product(s), such as a request to take down content under the U.S. Digital Millennium Copyright Act
or other applicable laws.
Metaswitch Networks
399 Main Street
Los Altos
CA 94022
<http://www.metaswitch.com>
***
***Table of Contents***
[[_TOC_]]
# 1. Document History
| **Issue** | **Issue Date** | **Author(s)** | **Identification** **of** **Changes** |
|-|-|-|-|
| 1| 06/10/2024| Gthomson| initial draft |
| 2| 09/30/2024| Gthomson| updates based on Ops feedback |
| 3| 11/26/2024| Gthomson| Correct name of pipeline configuration file to use when deleting downlevel VMs |
| 4| 11/26/2024| Gthomson| Update default_vars.yml using sed |
# 2. Versions
| **Version #** | **Editor** | **Comments** |
|-|-|-|
| 1| Gthomson| initial draft |
| 2| Gthomson| updates based on Ops feedback |
| 3| Gthomson| Correct name of pipeline configuration file to use when deleting downlevel VMs |
| 4| Gthomson| Update default_vars.yml using sed |
# 3. Integrated Solution Approach v1 (ISA v1)
| **Version #** | **Editor** | **Comments** |
|-|-|-|
| 1| Gthomson| initial draft |
| 2| Gthomson| updates based on Ops feedback |
| 3| Gthomson| Correct name of pipeline configuration file to use when deleting downlevel VMs |
| 4| Gthomson| Update default_vars.yml using sed |
# 4. MOP Impact Scope / General Information
## 4.1 Description
The self-hosted agent VMs are used to run the various pipelines that are part of MVM. The VMs are built from an AT&T-provided golden image.
The security scans that AT&T runs can detect vulnerabilities in these VMs; these vulnerabilities are corrected by upgrading to a newer version of the golden image.
This MOP describes the process for replacing the existing self-hosted agent VMs with VMs built on the new image.
## 4.2 Site Specific Description
| **Originator** | **Date** | **Time** |
|-|-|-|
| **Deployment Location(s)** | |
| **Description** | This MOP applies to the MVM V3 on Azure deployment, Release R11.5.4 | |
## 4.3 Service Impact
Service impact is not expected during this procedure. New resources are created and verified before the existing resources are destroyed.
## 4.4 Coordination
This MOP has no interactions outside of the MVM subscription.
# 5. Prerequisite/Dependencies/Entrance Criteria of MOP
This MOP is one of several that need to be run to execute the process to upgrade an existing deployment to an 11.5.4 release/patch.
Please refer to the corresponding *R11.5.4 Release Upgrade Overview* document for guidance on the order in which to run these MOPs.
## 5.1 Required parameters
The following parameter values are required to run this MOP
| **Identifier** | **Description** |
|-|-|
| **AZURE_REGION** | The Azure region, e.g. `eastus2`. |
| **AGENT_POOL_NAME** | The name of the self-hosted agent pool |
| **DOWNLEVEL_PIPELINE_CONFIGURATION_NAME** | Name of the downlevel pipeline configuration file for the region without the .yml suffix (e.g. `vars_eus2_11400`). |
| **DOWNLEVEL_SHA_FIRST_INDEX** | The index of the first self-hosted agent VM currently instantiated |
| **DOWNLEVEL_SHA_GOLDEN_IMAGE** | Golden image index used to instantiate the current self-hosted agent VMs |
| **DOWNLEVEL_SHA_INSTANCE_COUNT** | Number of self-hosted agent VMs currently instantiated |
| **GIT_AUTOMATION_REPOSITORY** | Name of the automation Azure DevOps repository. |
| **GIT_AUTOMATION_URL** | URL of the automation git repository.|
| **GIT_PASSWORD** | Password used to access the Azure DevOps repositories if you are using HTTPS to manage the local copy of the repository. |
| **ORGANIZATION_NAME** | Name of the Azure DevOps organization. |
| **PROJECT** | Name of the Azure DevOps project. |
| **REGIONAL_PIPELINE_KEY_VAULT_NAME** | Name of the regional pipeline key vault |
| **REGIONAL_PIPELINE_KEY_VAULT_RG** | Name of the resource group that contains the key vault **REGIONAL_PIPELINE_KEY_VAULT_NAME** |
| **REGION_SHORTNAME** | The short (4-characters maximum) DNS label for the region |
| **SELFHOST_VM_RESOURCE_GROUP** | Resource Group containing the Self-hosted Agents |
| **UPLEVEL_MVM_FILESHARE** | Name of the fileshare containing the Uplevel Release (**This is specified in the release note**) |
| **UPLEVEL_PIPELINE_CONFIGURATION_NAME** | Name of the uplevel pipeline configuration file for the region without the .yml suffix (e.g. `vars_eus2_11500`). |
| **UPLEVEL_SHA_FIRST_INDEX** | The index of the first self-hosted agent VM to instantiate with the new golden image |
| **UPLEVEL_SHA_GOLDEN_IMAGE** | Golden image index used to instantiate the new self-hosted agent VMs |
| **UPLEVEL_SHA_INSTANCE_COUNT** | Number of self-hosted agent VMs to instantiate |
## 5.2 Required files
The following file from the config repository must be downloaded to your CloudShell session before starting this MOP:
- `scripts/upload_secrets.sh`
# 6. Assumptions
The target audience for this procedure is the AT&T Engineer who will be performing the task. They will need to be familiar with Azure and have a working knowledge of the Azure CLI and Linux.
# 7. Material Requirements
## 7.1 Required Documents
## 7.2 Tools
| **Tool** | **Description** | **Quantity** |
|-|-|-|
| Laptop or Desktop PC | PC with at least 1 GB of memory and a network communications software application such as Procomm, Reflections, or PuTTY | 1 |
| Azure connectivity PC | CloudShell Connectivity is required to the azure subscription. This can be accessed via [My Dashboard - Microsoft Azure](https://portal.azure.com/#cloudshell/) | |
# 8. Pre Maintenance Check, Precautions and Preparations
## 8.1 Precautions and Preparation
## 8.2 Precautions
> This procedure may cause a partial outage during implementation. Use executable script files to minimize down time and typing errors. Familiarize yourself with back-out procedures prior to starting the procedure.
| **Ask Yourself Principle** | **Yes** | **No** | **N/A** |
|-|-|-|-|
| 1. Do I have the proper ID and appropriate building access permissions for the environment I am about to enter? | | |
| 2. Do I know why I'm doing this work? | | |
| 3. Have I identified and notified everybody - customers and internal groups - who will be directly affected by this work? | | |
| 4. Can I prevent or control service interruption? | | |
| 5. Is this the right time to do this work? | | |
| 6. Am I trained and qualified to do this work? | | |
| 7. Are the work orders, MOPs, and supporting documentation current and error-free? | | |
| 8. Do I have everything I need to quickly and safely restore service if something goes wrong? | | |
| 9. Have I walked through the procedure? | | |
| 10. Have I made sure the procedure includes proper closure including obtaining clearance and release for the appropriate work center? | | |
| **E911 Ask Yourself** | **Yes** | **No** | **N/A** |
|-|-|-|-|
| 1. Does this work impact E911? | | |
| 2. Do I know how this work could impact 911/e911? | | |
| 3. Do I know what 911/e911 phase is required? | | |
| 4. Have I identified potential risks to 911/e911 and taken all measures to minimize? | | |
| 5. Does this work affect 15+ sites? | | |
| 6. Can I prevent or control service Interruptions to 911/e911? | | |
| 7. Is this the right time to do the work? | | |
| 8. Is the individual performing the work trained and qualified to do this work? | | |
| 9. Are MOPs and supporting documents current and error free? | | |
| 10. Does the MOP include a 911/e911 test plan? | | |
## 8.3 Pre-Maintenance Check Tools/System
Tier 2 needs to identify which tools they will use. This doesn't necessarily need to be included in the MOP but OPS needs to know which tools they will run.
(NEED TO USE STANDARD TOOLS) TIER 2
## 8.4 Pre-Maintenance Check Manual (Non-Automated Requirements)
These will be identified by the Tier 3 MOP developer where required.
(MANDATORY CHECK REQUIRE FOR THE MOP) TIER 3
## 8.5 MOP Certification Environment
Examples: PSL certified. OR This MOP was paper certified by ATS engineers.
## 8.6 ATS Bulletin
**ATS Bulletin Check**
| **Step** | **Action** | **Results/Description** | **Timeline** |
|-|-|-|-|
| 1. | No Applicable bulletins | | |
## 8.7 Emergency Contacts
The following emergency contact numbers are to be used in the event provisioning support is required.
In the event a service interruption is encountered the AT&T Implementation Engineer will:
- Cease all work immediately.
- Notify the AT&T Voicemail TRC.
- Escalate to the next level of support.
| **Organization** | **Contact Name** | **Contact Number** |
|-|-|-|
| Voicemail TRC | SANRC | 877-662-7674, opt 3 |
# 9. Implementation
## 9.1 Preliminary Implementation
Pre-check tasks are completed the night of the cutover at least one hour prior to cutover activities.
1. Connect to the DevOps Portal
1. Start a browser session to <https://dev.azure.com/>. This will be required to manage the pipelines
1. Select the project associated with MVM v3
1. Connect to the Azure Portal
1. Start a browser session to <https://portal.azure.com/>. This will be required to manage Azure resources
and access the log analytics workspace (LAW)
1. If prompted, complete the log in process
1. Connect to Azure Cloud Shell
1. Start a CloudShell session by connecting a browser to <https://shell.azure.com/>
1. If the menu at the top left indicates PowerShell, select Bash from the menu and confirm at the prompt

1. Upload any files and directories outlined in section 5.2 to your Cloud Shell account as they will be needed later
## 9.2 Implementation
### 9.2.1 Prepare the automation Git repository
This is the Git repository that holds the pipelines, Terraform scripts etc.
These commands are run from the CloudShell session created above
1. Set the following environment variables:
```
BRANCH={{ CHANGE_ID | default('<CHANGE_ID>') }}_{{ UPLEVEL_MVM_VERSION | default('<UPLEVEL_MVM_VERSION>') }}_update_sha_vm
```
Export the correct form of the URL to access the git repository
- If using https to interact with the git repository
```
GIT_AUTOMATION_URL=https://{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}@dev.azure.com/{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}/{{ PROJECT|default('<PROJECT>') }}/_git/{{ GIT_AUTOMATION_REPOSITORY | default('<GIT_AUTOMATION_REPOSITORY>') }}
```
- If using ssh to interact with the git repository
```
GIT_AUTOMATION_URL=git@ssh.dev.azure.com:v3/{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}/{{ PROJECT|default('<PROJECT>') }}/{{ GIT_AUTOMATION_REPOSITORY | default('<GIT_AUTOMATION_REPOSITORY>') }}
```
1. Change to an appropriate working directory in Cloud shell. Your Git repository will live in a subdirectory off of this path.
```
cd ~
mkdir automation_repo
cd automation_repo
```
1. Clone the existing Azure DevOps Git repository with **<GIT_AUTOMATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
```
git clone ${GIT_AUTOMATION_URL} .
```
(note the space and the trailing period after the URL; the period clones the repository into the current directory)
- If using HTTPS:
- When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
- If using SSH:
- You will not be prompted for a password.
This will create a local copy of the repository in the current working directory.
1. Create a new working branch by running the command
```
git checkout -b ${BRANCH}
```
The branch currently exists only in your local clone - it will be pushed to the DevOps repository in a later step
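The branch naming and checkout behaviour can be rehearsed in a scratch repository before working on the real clone. The path, identity, and CHANGE_ID/version values below are hypothetical placeholders:

```shell
# Scratch repository purely to illustrate the check; the real MOP steps
# operate on the clone in your working directory.
mkdir -p /tmp/branch_check && cd /tmp/branch_check
git init -q .
git config user.email "engineer@example.com"   # placeholder identity
git config user.name "MOP Engineer"
git commit -q --allow-empty -m "base"

# Hypothetical CHANGE_ID and UPLEVEL_MVM_VERSION values.
BRANCH=CHG0001_11540_update_sha_vm
git checkout -q -b ${BRANCH}

# The branch exists locally only until it is pushed with --set-upstream.
git branch --show-current
```

`git branch --show-current` should print the new branch name, confirming the checkout succeeded before any changes are committed.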
### 9.2.2 Create a new pipeline configuration file
> This is an optional step that is only required if the release note and/or upgrade overview document indicates that the pipeline configuration file needs to be changed as part of the upgrade process.
>
> This will be the case if the upgrade process does ***NOT*** include an update of the pipeline infrastructure. In that case the uplevel pipeline configuration file will already have been created as part of that MOP.
1. Copy the existing pipeline configuration file to a new configuration file by running the command:
```
cp pipelines/configuration/{{ DOWNLEVEL_PIPELINE_CONFIGURATION_NAME | default('<DOWNLEVEL_PIPELINE_CONFIGURATION_NAME>') }}.yml \
pipelines/configuration/{{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}.yml
```
1. Update the secrets monitoring configuration file by running the command:
```
sed -i '/^ default_vars/c\ default_vars_file: {{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}.yml' pipelines/configuration/default_vars.yml
```
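If you want to confirm the sed expression behaves as expected before touching the repository copy, you can rehearse it on a scratch file. The file layout below is an assumption inferred from the pattern in the command above, and the uplevel file name is a hypothetical example:

```shell
# Scratch copy mimicking the assumed default_vars.yml layout.
mkdir -p /tmp/sed_check
printf ' default_vars_file: vars_eus2_11400.yml\n' > /tmp/sed_check/default_vars.yml

# Same substitution as the MOP step, with a hypothetical uplevel file name.
sed -i '/^ default_vars/c\ default_vars_file: vars_eus2_11500.yml' /tmp/sed_check/default_vars.yml

# The file should now reference the uplevel configuration file.
cat /tmp/sed_check/default_vars.yml
```

If the output still shows the downlevel file name, the indentation in the real file does not match the `/^ default_vars/` pattern and the pattern should be adjusted before running the edit for real.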
1. Add the new file to the repository by running the command:
```
git add -A
```
1. Commit the change to the local branch by running the command:
```
git commit -a -m "Create new pipeline configuration file"
```
### 9.2.3 Update the VM related parameters
1. Edit the file `pipelines/configuration/{{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}.yml` and make the following changes.
- Record the value of `selfhost_first_instance_index` in **DOWNLEVEL_SHA_FIRST_INDEX**
- Record the value of `selfhost_instance_count` in **DOWNLEVEL_SHA_INSTANCE_COUNT**
- Record the value of `selfhost_golden_image` in **DOWNLEVEL_SHA_GOLDEN_IMAGE**
- Set `selfhost_first_instance_index` to the start of the new self-hosted agent VM range **<UPLEVEL_SHA_FIRST_INDEX>**
- (optional) update `selfhost_instance_count` to **<UPLEVEL_SHA_INSTANCE_COUNT>** if you want to create a different number of VMs
- Set `selfhost_golden_image` to `{{ UPLEVEL_SHA_GOLDEN_IMAGE | default('<UPLEVEL_SHA_GOLDEN_IMAGE>') }}`
All other parameters should remain unchanged.
We are saving the current self-hosted VM settings for use later and updating the existing parameters to use the new values.
> Note that the new range of self-hosted agent VMs (**<UPLEVEL_SHA_FIRST_INDEX>** through **<UPLEVEL_SHA_FIRST_INDEX>** + **<UPLEVEL_SHA_INSTANCE_COUNT>**) must not overlap with the existing range of VMs (**DOWNLEVEL_SHA_FIRST_INDEX** through **DOWNLEVEL_SHA_FIRST_INDEX** + **DOWNLEVEL_SHA_INSTANCE_COUNT**).
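The non-overlap requirement can be checked mechanically with shell arithmetic. The index and count values below are hypothetical examples; substitute the values you recorded:

```shell
# Hypothetical example values; substitute the recorded parameters.
DOWNLEVEL_SHA_FIRST_INDEX=1
DOWNLEVEL_SHA_INSTANCE_COUNT=2
UPLEVEL_SHA_FIRST_INDEX=3
UPLEVEL_SHA_INSTANCE_COUNT=2

# First index past the end of each range (count-based, end-exclusive).
DOWN_END=$((DOWNLEVEL_SHA_FIRST_INDEX + DOWNLEVEL_SHA_INSTANCE_COUNT))
UP_END=$((UPLEVEL_SHA_FIRST_INDEX + UPLEVEL_SHA_INSTANCE_COUNT))

# Ranges are disjoint when one starts at or after the other ends.
if [ "${UPLEVEL_SHA_FIRST_INDEX}" -ge "${DOWN_END}" ] || \
   [ "${DOWNLEVEL_SHA_FIRST_INDEX}" -ge "${UP_END}" ]; then
  RANGES_OK=yes
else
  RANGES_OK=no
fi
echo "ranges disjoint: ${RANGES_OK}"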
1. Commit the change to the local branch by running the command
```
git commit -a -m "Update SHA details in pipeline configuration file"
```
### 9.2.4 Commit the changes to the remote repository
1. Push the change to the DevOps repository by running the command:
```
git push --set-upstream origin ${BRANCH}
```
1. Merge the change into the main branch via the 'pull request' mechanism.
### 9.2.5 Remove the local copy of the git repository as we have finished with it
1. Tidy up by running the command:
```
cd ~
rm -rf automation_repo
```
### 9.2.6 Create a Personal Access Token (PAT) and add it to the regional pipeline key vault
>This is an optional step that is only required if the PAT in the regional pipeline key vault is expired
>or if there is no PAT in the regional pipeline key vault
1. Create a PAT (This step is run from the DevOps Portal session created in section 9.1)
A PAT is required as part of the configuration process. It is not used for day-to-day operation of the pipelines
1. Sign in with the user account you plan to use in your Azure DevOps organization
1. From the home page, open your user settings, and then select Personal access tokens.
1. Create a token.
1. Check Custom defined and Select the Show all scopes link
1. Check Agent Pools: Read & Manage
1. Make sure all the other boxes are unchecked.
1. Select Create
Success or failure is reported on screen.
Remember to copy the created token now, as this is the only time it will be visible.
1. Add PAT to regional pipeline key vault (This step is run from the CloudShell session created in section 9.1).
1. Allow your current session to access the key vault by running the following commands:
```
KEY_VAULT_NAME={{ REGIONAL_PIPELINE_KEY_VAULT_NAME | default('<REGIONAL_PIPELINE_KEY_VAULT_NAME>') }}
KEY_VAULT_RG={{ REGIONAL_PIPELINE_KEY_VAULT_RG | default('<REGIONAL_PIPELINE_KEY_VAULT_RG>') }}
USER_IP=$(curl -s http://ipinfo.io/json | jq -r '.ip')
az keyvault network-rule add \
--name ${KEY_VAULT_NAME} \
--resource-group ${KEY_VAULT_RG} \
--ip-address ${USER_IP}
```
1. Create a file, `/tmp/secrets.txt`, to contain the new secrets.
Copy the following text into it and fill in the various substitution parameters.
```
# Set <GENERATED_PAT> to the value of the PAT that was just generated.
agent-install-pat=<GENERATED_PAT>
```
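The secrets file is a simple `name=value` list in which `#` comment lines are ignored. As an illustration of this assumed format (this is not the actual contents of `upload_secrets.sh`), such a file can be parsed in shell like so:

```shell
# Build a demo secrets file in the assumed name=value format.
printf '# demo comment\nagent-install-pat=abc123\n' > /tmp/secrets_demo.txt

# Collect the secret names, skipping comments and blank lines.
NAMES=""
while IFS='=' read -r name value; do
  case "$name" in ''|\#*) continue ;; esac
  NAMES="${NAMES}${name} "
done < /tmp/secrets_demo.txt

echo "secrets to upload: ${NAMES}"
rm /tmp/secrets_demo.txt
```

Each collected name would correspond to one secret created in the key vault; the actual upload is performed by the `upload_secrets.sh` script from the config repository.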
1. Upload the secrets
Run the following command to upload the secrets file to the key vault:
```
{{ CLOUD_SHELL_LOCATION | default('<CLOUD_SHELL_LOCATION>') }}/upload_secrets.sh \
--keyvault ${KEY_VAULT_NAME} \
--secrets-file /tmp/secrets.txt
```
1. Verify that the secret has been uploaded by running the following command:
```
az keyvault secret list \
--vault-name ${KEY_VAULT_NAME} \
--query '[].name'
```
Check that `agent-install-pat` appears in the output.
1. Remove the current session from the key vault access list by running the following command:
```
az keyvault network-rule remove \
--name ${KEY_VAULT_NAME} \
--resource-group ${KEY_VAULT_RG} \
--ip-address ${USER_IP}
```
1. Remove the secrets file as it is no longer required by running the command
```
rm /tmp/secrets.txt
```
1. Add the regional key vault variable to pipeline regional variable group (This step is run from the DevOps Portal session created in section 9.1)
1. Return to your Azure DevOps project
- Select **Pipelines** on the left sidebar
- Select **Library**
1. Select the variable group `mvm-{{ REGION_SHORTNAME | default('<REGION_SHORTNAME>') }}-vault`
1. In the resultant window, select **+ Add** located at the bottom of the existing variables
1. Add `agent-install-pat` and select **Ok**
1. Select **Save** to commit the changes
### 9.2.7 Create the up-level self-hosted agent VMs
1. Run the `mvmselfhost` pipeline
1. Select pipelines from the Left hand menu
1. Select pipelines from the sub menu
1. Select "All" from the resultant page
1. Expand pipelines
1. Select the pipeline `mvmselfhost`
1. Select Run pipeline
1. In the resultant Run pipeline menu
- Use a self-hosted agent pool (the default)
- Check Deploying into an AT&T subscription (the default)
- Set 'Release to install packages from' to `{{ UPLEVEL_MVM_FILESHARE | default('<UPLEVEL_MVM_FILESHARE>') }}`
- Set the name of the config file to `{{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}`
- Select Run
Selecting Run will cause an approval email to be sent to the reviewers list. The pipeline will stall until the reviewers have approved the request. Once approval has been given the pipeline will run.
The progress of the pipeline can be followed by clicking on the jobs list.
Once the pipeline has completed, success or failure is reported on screen.
1. Follow the [**Test Plan**](#testplan) to verify the Self-hosted agent functionality.
### <a id=testfail></a>9.2.8 New VM functionality test failure
Only run this section of the MOP if the new self-hosted agent VMs are NOT working correctly. In this case we need to delete the **new, up-level VMs**.
All of the commands in this section are run from the DevOps Portal session
1. Remove the up-level VMs from the agent pool
Disable and delete the up-level agents from the agent pool as follows:
- Sign in with the user account you plan to use in your Azure DevOps organization
- From the home page, select the Project settings (the gear wheel at the bottom left corner)
- Select Agent Pools from the Pipelines sub menu
- Select `{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}`
- Select the Agent tab
- Repeat the following steps for each of the up-level self-hosted agents
- Use the slider to disable the agent
- Select the More Options menu (kebab menu) at the end of the agent entry and select Delete.
- Select Delete on the subsequent confirmation window
When the operation completes, the agent should have disappeared from **{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}**.
1. Select pipelines from the Left hand menu
1. Select pipelines from the sub menu
1. Select "All" from the resultant page
1. Expand pipelines
1. Select the pipeline `mvmselfhosttidy`
1. Select Run pipeline
1. In the resultant Run pipeline menu
- Use a self-hosted agent pool (the default)
- Set the name of the config file to `{{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}`
- Set the index of the first VM to tidy up to be `<UPLEVEL_SHA_FIRST_INDEX>`
- Set the number of VMs to delete to `<UPLEVEL_SHA_INSTANCE_COUNT>`
- Select Run
Selecting Run will cause an approval email to be sent to the reviewers list. The pipeline will stall until the reviewers have approved the request. Once approval has been given the pipeline will run.
The progress of the pipeline can be followed by clicking on the jobs list.
Once the pipeline has completed, success or failure is reported on screen.
1. Restore the original pipeline configuration file following the instructions in the backout procedures
Proceed to [**Backout Procedure**](#backout)
### <a id=testpass></a>9.2.9 New VM functionality test success
Only run this section of the MOP if the new self-hosted agent VMs are working correctly. In this case we need to delete the **existing, down-level VMs**.
All of the commands in this section are run from the DevOps Portal session
1. Delete the down-level VMs from the agent pool
Disable and delete the down-level agents from the agent pool as follows:
- Sign in with the user account you plan to use in your Azure DevOps organization
- From the home page, select the Project settings (the gear wheel at the bottom left corner)
- Select Agent Pools from the Pipelines sub menu
- Select `{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}`
- Select the Agent tab
- Repeat the following steps for each of the down-level self-hosted agents
- Use the slider to disable the agent
- Select the More Options menu (kebab menu) at the end of the agent entry and select Delete.
- Select Delete on the subsequent confirmation window
When the operation completes, the agent should have disappeared from **{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}**.
1. Select pipelines from the Left hand menu
1. Select pipelines from the sub menu
1. Select "All" from the resultant page
1. Expand pipelines
1. Select the pipeline `mvmselfhosttidy`
1. Select Run pipeline
1. In the resultant Run pipeline menu
- Use a self-hosted agent pool (the default)
- Set the name of the config file to `{{ DOWNLEVEL_PIPELINE_CONFIGURATION_NAME | default('<DOWNLEVEL_PIPELINE_CONFIGURATION_NAME>') }}`
- Set the index of the first VM to tidy up to be `<DOWNLEVEL_SHA_FIRST_INDEX>`
- Set the number of VMs to delete to `<DOWNLEVEL_SHA_INSTANCE_COUNT>`
- Select Run
Selecting Run will cause an approval email to be sent to the reviewers list. The pipeline will stall until the reviewers have approved the request. Once approval has been given the pipeline will run.
The progress of the pipeline can be followed by clicking on the jobs list.
Once the pipeline has completed, success or failure is reported on screen.
## <a id=testplan></a> 9.3 Test Plan
### 9.3.1 Verify Agent is up and in the agent pool
1. Configure this VM to be the only active VM in the agent pool by doing the following:
- Sign in with the user account you plan to use in your Azure DevOps organization
- From the home page, select the Project settings (the gear wheel at the bottom left corner)
- Select Agent Pools from the Pipelines sub menu
- Select `{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}`
- Select the Agent tab
- Verify the following for the VM under test:
- It appears in the list of valid agents
- It is online
- It is enabled
- Disable all the other agents in the pool
This ensures that the pipeline we run will run on the agent under test
### 9.3.2 Run a pipeline on the VM
1. Select pipelines from the Left hand menu
1. Select pipelines from the sub menu
1. Select "All" from the resultant page
1. Expand pipelines
1. Select the pipeline to validate the configuration
This will be the name that was assigned to the pipeline when it was manually added to the pipeline suite.
The file used to create this pipeline is `azure-pipelines.yml`
1. Select Run pipeline
To validate that the VM under test is functional, we only need to verify that the pipeline runs to completion.
The actual result of the run is not relevant to this test, although if it reports an error that should be investigated separately.
### 9.3.3 Restore Redundancy to the agent pool
1. Enable all VMs in the agent pool as follows:
- Sign in with the user account you plan to use in your Azure DevOps organization
- From the home page, select the Project settings (the gear wheel at the bottom left corner)
- Select Agent Pools from the Pipelines sub menu
- Select `{{ AGENT_POOL_NAME | default('<AGENT_POOL_NAME>') }}`
- Select the Agent tab
- Enable all agents in the pool
This restores redundancy to the agent pool
### 9.3.4 Proceed with the upgrade
1. Depending on the results of the testplan, return to [**New VM functionality test failure**](#testfail)
or [**New VM functionality test success**](#testpass) to continue the upgrade process.
## <a id=backout></a> 9.4 Backout Procedure
### 9.4.1 Revert the changes
These commands are run from the CloudShell session created above
1. Set the following environment variables:
```
BRANCH={{ CHANGE_ID | default('<CHANGE_ID>') }}_{{ UPLEVEL_MVM_VERSION | default('<UPLEVEL_MVM_VERSION>') }}_revert_sha_vm
```
Export the correct form of the URL to access the git repository
- If using https to interact with the git repository
```
GIT_AUTOMATION_URL=https://{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}@dev.azure.com/{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}/{{ PROJECT|default('<PROJECT>') }}/_git/{{ GIT_AUTOMATION_REPOSITORY | default('<GIT_AUTOMATION_REPOSITORY>') }}
```
- If using ssh to interact with the git repository
```
GIT_AUTOMATION_URL=git@ssh.dev.azure.com:v3/{{ ORGANIZATION_NAME | default('<ORGANIZATION_NAME>') }}/{{ PROJECT|default('<PROJECT>') }}/{{ GIT_AUTOMATION_REPOSITORY | default('<GIT_AUTOMATION_REPOSITORY>') }}
```
1. Change to an appropriate working directory in Cloud shell. Your Git repository will live in a subdirectory off of this path.
```
cd ~
mkdir automation_repo
cd automation_repo
```
1. Clone the existing Azure DevOps Git repository with **<GIT_AUTOMATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
```
git clone ${GIT_AUTOMATION_URL} .
```
(note the space and the trailing period after the URL; the period clones the repository into the current directory)
- If using HTTPS:
- When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
- If using SSH:
- You will not be prompted for a password.
This will create a local copy of the repository in the current working directory.
1. Create a new working branch by running the command
```
git checkout -b ${BRANCH}
```
The branch currently exists only in your local clone - it will be pushed to the DevOps repository in a later step
1. Edit the file `pipelines/configuration/{{ UPLEVEL_PIPELINE_CONFIGURATION_NAME | default('<UPLEVEL_PIPELINE_CONFIGURATION_NAME>') }}.yml` and make the following changes.
- Set `selfhost_first_instance_index` to **DOWNLEVEL_SHA_FIRST_INDEX**
- Set `selfhost_instance_count` to **DOWNLEVEL_SHA_INSTANCE_COUNT**
- Set `selfhost_golden_image` to **DOWNLEVEL_SHA_GOLDEN_IMAGE**
All other parameters should remain unchanged
1. Commit the change to the local branch by running the command
```
git commit -a -m "Revert SHA details in pipeline configuration file"
```
### 9.4.2 Commit the changes to the remote repository
1. Push the change to the DevOps repository by running the command:
```
git push --set-upstream origin ${BRANCH}
```
1. Merge the change into the main branch via the 'pull request' mechanism.
### 9.4.3 Remove the local copy of the git repository as we have finished with it
1. Tidy up by running the command:
```
cd ~
rm -rf automation_repo
```
# 10. Post checks
[System healthchecks]
# 11. Risk Assessment Score
1 - TBD
# 12. Execute MOP clean up if required
# 13. End of Document MOP
# 14. Service Assurance/Monitoring
# A. Appendix and Tables
# B. Approvers
# C. Peer Reviewers
# D. References for Other Documents
# E. Additional Appendices (If required)