|**Metadata**|**Description**  |
|--|--|
|Doc Title|  MVM v3: Upgrade SIMon SI|
|Navigation|[WIKI Home Page](https://dev.azure.com/mvmprodeus2/MVM/_wiki/wikis/documentation/1/documents-home#)|
|Tracking| Document Number: VPE-5512-014|
|Author| Graeme Thomson (gt163y) |
| Agreement Number | 24252.S.005 |

***
**Notices**

Copyright © 2025 Metaswitch Networks.  All rights reserved.

This manual is Confidential Information of Metaswitch Networks subject to the confidentiality terms
of the Agreement 01019223 as amended between AT&T and Metaswitch Networks.

It is issued on the understanding that no part of the product code or documentation (including this manual)
will be copied or distributed without prior agreement in writing from Metaswitch Networks and Microsoft.

Metaswitch Networks and Microsoft reserve the right to, without notice, modify or revise all or part of
this document and/or change product features or specifications and shall not be responsible for any
loss, cost, or damage, including consequential damage, caused by reliance on these materials.

Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and
products referenced herein are the trademarks or registered trademarks of their respective holders.

Product(s) and features documented in this manual handle various forms of data relating to your
users. You must comply with all laws and regulations applicable to your deployment, management,
and use of said product(s), and you should take all appropriate technical and organizational
measures to ensure you are handling this data appropriately according to any local legal and
regulatory obligations.

You are responsible for determining whether said product(s) or feature(s) is/are appropriate for
storage and processing of information subject to any specific law or regulation and for using said
product(s) or feature(s) in a manner consistent with your own legal and regulatory obligations. You
are also responsible for responding to any request from a third party regarding your use of said
product(s), such as a request to take down content under the U.S. Digital Millennium Copyright Act
or other applicable laws.


Metaswitch Networks
399 Main Street
Los Altos
CA 94022
<http://www.metaswitch.com>


***
***Table of Contents***
[[_TOC_]]

# 1. Document History

| **Issue** | **Issue Date** | **Author(s)** | **Identification of Changes** |
|-|-|-|-|
| 1| 06/10/2024| Gthomson|  initial draft |
| 2| 09/30/2024| Gthomson|  updates based on Ops feedback |

# 2. Versions

| **Version #** | **Editor** | **Comments** |
|-|-|-|
| 1| Gthomson|  initial draft |
| 2| Gthomson|  updates based on Ops feedback |

# 3. Integrated Solution Approach v1 (ISA v1)

| **Version #** | **Editor** | **Comments** |
|-|-|-|
| 1| Gthomson|  initial draft |
| 2| Gthomson|  updates based on Ops feedback |

# 4. MOP Impact Scope / General Information

## 4.1 Description

The SIMon SI is the application SI responsible for monitoring and reporting the state of the MVM deployment.

This MOP describes the process to upgrade one or more SIMon SIs.

## 4.2 Site Specific Description

| **Originator** | **Date** | **Time** |
|-|-|-|
| **Deployment Location(s)** | | |
| **Description** | This MOP applies to the MVM V3 on Azure deployment, Release R11.5.3 | |

## 4.3 Service Impact

Service impact is not expected during this procedure.

## 4.4 Coordination

This MOP has no interactions outside of the MVM subscription.

# 5. Prerequisite/Dependencies/Entrance Criteria of MOP

This MOP is one of several that need to be run to execute the process to upgrade an existing deployment to an 11.5.3 release/patch.

Please refer to the corresponding *R11.5.3 Release Upgrade Overview* document for guidance on the order in which to run these MOPs.

## 5.1 Required parameters

The following parameter values are required to run this MOP:

| **Identifier** | **Description** |
|-|-|
| **AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP** | The resource group that contains the MVM Azure Traffic Manager profiles |
| **CONFIG_VERSION** | The name of the config set directory containing the current SI configuration. Config sets are located in the `config` directory |
| **DNS_ZONE** | Name of the global DNS zone |
| **GIT_CONFIGURATION_REPOSITORY** | Name of the configuration Azure DevOps repository. |
| **GIT_PASSWORD** | Password used to access the Azure DevOps repositories if you are using https to manage the local copy of the repository. |
| **ORGANIZATION_NAME** | Name of the Azure DevOps organization. |
| **PROJECT** | Name of the Azure DevOps project. |
| **REGION_LAW** | Name of the Log Analytics Workspace (LAW) associated with the region |
| **REGION_SHORTNAME** | The short (4-characters maximum) DNS label for the region |
| **SUBSCRIPTION_ID** | Azure subscription identifier.  |
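
Later steps consume several of these values as shell variables. As a convenience they can be set once in the Cloud Shell session before starting. Every value below is a hypothetical placeholder, not a real deployment value, and must be replaced with the value for your deployment:

```shell
# Hypothetical placeholder values -- replace each with the real value for
# your deployment before running any later commands.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP="mvm-traffic-manager-rg"
CONFIG_VERSION="11.5.3+1-1"
DNS_ZONE="mvm.example.com"
GIT_CONFIGURATION_REPOSITORY="configuration"
ORGANIZATION_NAME="exampleorg"
PROJECT="MVM"
REGION_LAW="example-region-law"
REGION_SHORTNAME="eus2"
```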

## 5.2 Required files

No additional files are required to run this MOP.

# 6. Assumptions

The target audience for this procedure is the AT&T Engineer who will be performing the task. They will need to be familiar with Azure and have a working knowledge of the Azure CLI and Linux.

# 7. Material Requirements

## 7.1 Required Documents

## 7.2 Tools

| **Tool** | **Description** | **Quantity** |
|-|-|-|
| Laptop or Desktop PC | PC with at least 1 GB of memory and a network communications application such as Procomm, Reflection or PuTTY | 1 |
| Azure connectivity PC | Cloud Shell connectivity to the Azure subscription is required. This can be accessed via [My Dashboard - Microsoft Azure](https://portal.azure.com/#cloudshell/) | |

# 8. Pre Maintenance Check, Precautions and Preparations

## 8.1 Precautions and Preparation

## 8.2 Precautions

> This procedure may cause a partial outage during implementation. Use executable script files to minimize down time and typing errors. Familiarize yourself with back-out procedures prior to starting the procedure.

| **Ask Yourself Principle** | **Yes** | **No** | **N/A** |
|-|-|-|-|
| 1. Do I have the proper ID and appropriate building access permissions for the environment I am about to enter? | | |
| 2. Do I know why I'm doing this work? | | |
| 3. Have I identified and notified everybody - customers and internal groups - who will be directly affected by this work? | | |
| 4. Can I prevent or control service interruption? | | |
| 5. Is this the right time to do this work? | | |
| 6. Am I trained and qualified to do this work? | | |
| 7. Are the work orders, MOPs, and supporting documentation current and error-free? | | |
| 8. Do I have everything I need to quickly and safely restore service if something goes wrong? | | |
| 9. Have I walked through the procedure? | | |
| 10. Have I made sure the procedure includes proper closure including obtaining clearance and release for the appropriate work center? | | |


| **E911 Ask Yourself** | **Yes** | **No** | **N/A** |
|-|-|-|-|
| 1. Does this work impact E911? | | |
| 2. Do I know how this work could impact 911/e911? | | |
| 3. Do I know what 911/e911 phase is required? | | |
| 4. Have I identified potential risks to 911/e911 and taken all measures to minimize? | | |
| 5. Does this work affect 15+ sites? | | |
| 6. Can I prevent or control service Interruptions to 911/e911? | | |
| 7. Is this the right time to do the work? | | |
| 8. Is the individual performing the work trained and qualified to do this work? | | |
| 9. Are MOPs and supporting documents current and error free? | | |
| 10. Does the MOP include a 911/e911 test plan? | | |
     

## 8.3 Pre-Maintenance Check Tools/System

Tier 2 needs to identify which tools they will use. These don't necessarily need to be included in the MOP, but Ops needs to know which tools will be run.

(NEED TO USE STANDARD TOOLS) TIER 2


## 8.4 Pre-Maintenance Check Manual (Non-Automated Requirements)

These will be identified by the Tier 3 MOP developer where required.

(MANDATORY CHECK REQUIRE FOR THE MOP) TIER 3


## 8.5 MOP Certification Environment

Examples: PSL certified, or this MOP was paper certified by ATS engineers.

## 8.6 ATS Bulletin

**ATS Bulletin Check**
| **Step** | **Action** | **Results/Description** | **Timeline** |
|-|-|-|-|
| 1. | No Applicable bulletins | | |


## 8.7 Emergency Contacts

The following emergency contact numbers are to be used in the event provisioning support is required.

In the event a service interruption is encountered the AT&T Implementation Engineer will:
- Cease all work immediately.
- Notify the AT&T Voicemail TRC.
- Escalate to the next level of support.


| **Organization** | **Contact Name** | **Contact Number** |
|-|-|-|
| Voicemail TRC | SANRC | 877-662-7674, opt 3 |

# 9. Implementation

## 9.1 Preliminary Implementation
Pre-check tasks are completed the night of the cutover at least one hour prior to cutover activities.

1. Connect to the DevOps Portal
   1. Start a browser session to <https://dev.azure.com/>. This will be required to manage the pipelines
   1. Select the project associated with MVM v3
1. Connect to the Azure Portal
   1. Start a browser session to <https://portal.azure.com/>. This will be required to manage Azure resources
      and access the log analytics workspace (LAW)
   1. If prompted, complete the log in process
1. Connect to Azure Cloud Shell
   1. Start a CloudShell session by connecting a browser to <https://shell.azure.com/>
   1. If the menu at the top left indicates PowerShell, select Bash from the menu and confirm at the prompt

      ![screenshot](images/powershell.jpg)
1. Upload any files and directories outlined in section 5.2 to your Cloud Shell account as they will be needed later


## 9.2 Implementation

### 9.2.1 Set the default subscription to the MVM subscription

1. Set the default subscription by running the command:

   ```
   az account set --subscription "<SUBSCRIPTION_ID>"
   ```
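
   Optionally, confirm the active subscription before continuing. The helper below is a sketch using the standard `az account show` command; `check_subscription` is a hypothetical name, not part of the MOP tooling.

   ```shell
   # Print the name of the currently active subscription so you can confirm
   # that the default was set correctly.
   check_subscription() {
     az account show --query name --output tsv
   }

   # Run manually after 'az account set':
   # check_subscription
   ```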

### 9.2.2 Prepare the configuration git repository

This is the git repository that holds the code and configuration for MVM.

These commands are run from the CloudShell session created above

1. Set the following environment variables:

   ```
   CONFIG_VERSION=<CONFIG_VERSION>
   BRANCH=<CHANGE_ID>_<UPLEVEL_VERSION>_upgrade_simon
   ```

   Here `<CHANGE_ID>` is the Change ID used as the prefix for any git branch created in the MOPs, and `<UPLEVEL_VERSION>` is the version number of the uplevel release, e.g. `11.5.0+1`.

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://<ORGANIZATION_NAME>@dev.azure.com/<ORGANIZATION_NAME>/<PROJECT>/_git/<GIT_CONFIGURATION_REPOSITORY>
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/<ORGANIZATION_NAME>/<PROJECT>/<GIT_CONFIGURATION_REPOSITORY>
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing `.` after the URL, which clones into the current directory)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local repository - it will be pushed to the DevOps repository in a later step

### 9.2.3 Check the configuration for the new SIs

Before starting any additional SIs, we need to ensure that `Global GUIs` access is disabled on the uplevel SIs.

1. Run the following command to display the current state of the deployment:

   ```
   mvm-config-manager show-sis --si-type=simon --detailed
   ```

   This returns detailed output on all SIMon SIs. An example output is shown below

   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.0+2-2   |  m92z1  |  SIMon  | 1  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

   The SIMon SIs that are not currently active are the SIs that will be upgraded.
   These will be referred to as Uplevel SIs in the rest of this document.

   The SIMon SIs that are currently active are the SIs that will be replaced / stopped
   later in the MOP. These will be referred to as Downlevel SIs in the rest of this document.


1. Disable `global GUIs` on any uplevel SIMon SI by running the following commands:

   ```
   UPLEVEL_SIMON_NAME=<SI Name from the table above>
   mvm-config-manager disable-global-guis --si-name ${UPLEVEL_SIMON_NAME}
   ```
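
   If several uplevel SIMon SIs need `global GUIs` disabled, the two commands above can be wrapped in a small loop. This is a sketch: `disable_global_guis` is a hypothetical helper name, and the SI names in the example invocation come from the sample table above, so substitute the uplevel SI names from your own `show-sis` output.

   ```shell
   # Disable global GUIs on each named uplevel SIMon SI in turn.
   disable_global_guis() {
     local si
     for si in "$@"; do
       mvm-config-manager disable-global-guis --si-name "${si}"
     done
   }

   # Example invocation (replace with your uplevel SI names):
   # disable_global_guis m92z1 m92z2 m92z3
   ```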

### 9.2.4 Apply the latest configuration to the inactive (uplevel) SIs

1. Apply the latest configuration version to each inactive SI by running the command:

   ```
   mvm-config-manager apply-config \
      --si-type simon
   ```

   This will output a message confirming that the operation was successful, the config
   version that was applied and a list of the SIs it was applied to, e.g.:

   ```
   Applied config version 11.5.0+2-2 to SIs with type SIMon.

   Upgraded SIs:
   m92z1
   m92z2
   m92z3
   ```

### 9.2.5 Add the SIs to the list of SIs managed by SIMPL

1. Update the SIMPL managed SI configuration by running the following commands for each SI that you want to start:

   ```
   UPLEVEL_SIMON_NAME=<SIMon SI to upgrade>
   mvm-config-manager deploy-si --si-name ${UPLEVEL_SIMON_NAME}
   ```

   This will output a message confirming that the operation was successful e.g.
   `Deployed SI ${UPLEVEL_SIMON_NAME}.`

   Additionally, if the command `mvm-config-manager show-sis --si-type=simon --detailed` is run
   then any SI that is queued to start will have its active status changed from `false` to `true`
   and its config version updated. For example, if we had deployed `m92z1` then the
   corresponding output would be as shown below


   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z1  |  SIMon  | 1  |  true  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)

### 9.2.6 Monitor the SI creation

When the file is committed and pushed to the SIMPL DevOps repository, the resources 
will be automatically pulled into SIMPL, which will trigger a Job to run Terraform 
to create the Service Instance.

You can watch progress of this job from the Log Analytics Workspace (LAW) associated with the region 

1. Connect to the LAW

   - Through the portal, select the **<REGION_LAW>** resource (the Log Analytics Workspace associated with the region)
   - Select **Logs** from the menu
   - From the resultant **Queries** page
     - Enter `SIMPL jobs - Details` in the search box, this will match one query
     - Select **Run**. This will load the query into the editor and run the query

   The query shows the log entries that SIMPL has written for the apply / delete jobs, 
   with the latest entry displayed first. The container name column contains the name 
   of the apply job, `apply-si-<SI NAME>`. It can take a few minutes after the merge 
   has been completed before the first log entry appears. 

   Sample outputs are shown below. Keep rerunning the query until the log entry 
   `Apply Complete!` is displayed. At this point the SIMPL job has completed. 

   If multiple deletes and adds are occurring at the same time then, for clarity, edit 
   the clause at the end of the query to enter the SI name before selecting Run. 
   
   For example, if the SI Name was x01z1 then the modified query would be as shown below 

   ```
   let startTimestamp = ago(7d);
   (
    KubePodInventory
    | where TimeGenerated > startTimestamp
    | where ContainerName has_cs "apply-si"
        or ContainerName has_cs "delete-si"
        or ContainerName has_cs "apply-dc"
        or ContainerName has_cs "delete-dc"
    | distinct ContainerID, ContainerName, ClusterName, ClusterId
   )
   | join
   (
     ContainerLog
     | where TimeGenerated > startTimestamp
   ) on ContainerID
   | project TimeGenerated, LogEntry, Container = split(ContainerName, "/")[1], ClusterName, ClusterId = split(ClusterId, "/")[4]
   | project-rename ResourceGroup=ClusterId
   | sort by TimeGenerated desc
   // Uncomment this line to pick out the logs for a particular service instance.
   | where Container has "x01z1"
   ```


   **Query results for a running apply job**
   ![Running Apply](images/runningapply.jpg)
 
   **Query results for a completed apply job**
   ![Completed Apply](images/completedapply.jpg)

### 9.2.7 Verify SIMon functionality

1. Follow the [**Test Plan**](#testplan) to verify SIMon functionality.

### <a id=testpass></a>9.2.8 SIMon function verification successful

Prior to destroying the downlevel SIMon SI we need to transfer responsibility for
compacting the Thanos metrics that SIMon collects from the downlevel SIMon SI
to the uplevel SI. There is a restriction that means we can only have the compaction
container running on at most one SIMon SI per Availability Zone (AZ). It is
also permissible to have no compaction running in an AZ if it is only for a short
period of time. Switching of the compaction is a multi-step process as follows:

- Disable compaction on the downlevel SIMon SI
- Verify that compaction is no longer running
- Enable compaction on the uplevel SIMon SI
- Verify that compaction is again running

### 9.2.9 Stop Thanos compaction on downlevel SI

Only one SIMon SI per availability zone (AZ) can be configured to run the Thanos
compaction process. This step moves that process from the downlevel SIMon SI to the
uplevel SIMon SI.

1. Set the following environment variables:

   ```
   BRANCH=<CHANGE_ID>_<UPLEVEL_VERSION>_disable_thanos
   ```

   Here `<CHANGE_ID>` is the Change ID used as the prefix for any git branch created in the MOPs, and `<UPLEVEL_VERSION>` is the version number of the uplevel release, e.g. `11.5.0+1`.

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://<ORGANIZATION_NAME>@dev.azure.com/<ORGANIZATION_NAME>/<PROJECT>/_git/<GIT_CONFIGURATION_REPOSITORY>
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/<ORGANIZATION_NAME>/<PROJECT>/<GIT_CONFIGURATION_REPOSITORY>
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing `.` after the URL, which clones into the current directory)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local repository - it will be pushed to the DevOps repository in a later step



1. Run the following command to display the current state of the deployment:

   ```
   mvm-config-manager show-sis --si-type=simon --detailed
   ```

   This returns detailed output on all SIMon SIs. An example output is shown below

   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |       true       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z1  |  SIMon  | 1  |  true  |        |      false       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

   This shows that we have two SIMon SIs running in AZ1, m91z1 (the downlevel SI)
   and m92z1 (the uplevel SIMon SI).

   We want to disable the Thanos compactor function on the downlevel SI, m91z1.

1. Disable Thanos compaction by running the following command:
   ```
   mvm-config-manager disable-thanos-compactor --si-name <DOWNLEVEL_SIMON_NAME>
   ```

   In the example above, `DOWNLEVEL_SIMON_NAME` would be set to m91z1

   The command will return output similar to
   ```
   WARNING: SI m91z1 is currently running. You must ensure that there is another Thanos compactor running in this availability zone as soon as possible.

   Disabled Thanos compactor for SI m91z1
   ```

   This is expected and the warning can be ignored as we will be starting a new
   compactor below.


1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)

### 9.2.10 Verify that the Thanos compactor is disabled


1. Set the following environment variables:

   ```
   DOWNLEVEL_SIMON_NAME=<DOWNLEVEL_SIMON_NAME>
   SUBSCRIPTION_ID=<SUBSCRIPTION_ID>
   ```

   In the example above, `<DOWNLEVEL_SIMON_NAME>` would be m91z1

1. Run the following command:
   ```
   az aks command invoke \
      --name ${DOWNLEVEL_SIMON_NAME}-k8s \
      --resource-group ${DOWNLEVEL_SIMON_NAME}-rg \
      --subscription "${SUBSCRIPTION_ID}" \
      --command "kubectl get pods -n simon"
   ```

   This should **NOT** return any pods whose names begin with `simon-thanos-global-compactor`.

   If it does, wait 60 seconds and repeat the command

   (it can take up to 5 minutes to destroy the pod)
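
   The wait-and-repeat check above can be scripted around the same `az aks command invoke` call. This is a sketch; `compactor_running` is a hypothetical helper name, and it assumes `DOWNLEVEL_SIMON_NAME` and `SUBSCRIPTION_ID` are set as above.

   ```shell
   # Succeed (exit 0) only if a simon-thanos-global-compactor pod is still
   # present on the given SI's cluster.
   compactor_running() {
     az aks command invoke \
        --name "${1}-k8s" \
        --resource-group "${1}-rg" \
        --subscription "${SUBSCRIPTION_ID}" \
        --command "kubectl get pods -n simon" \
      | grep -q "simon-thanos-global-compactor"
   }

   # Poll every 60 seconds until the pod is gone (can take up to 5 minutes):
   # while compactor_running "${DOWNLEVEL_SIMON_NAME}"; do sleep 60; done
   ```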



### 9.2.11 Start Thanos compaction on uplevel SI

1. Set the following environment variables:

   ```
   BRANCH=<CHANGE_ID>_<UPLEVEL_VERSION>_enable_thanos
   ```

   Here `<CHANGE_ID>` is the Change ID used as the prefix for any git branch created in the MOPs, and `<UPLEVEL_VERSION>` is the version number of the uplevel release, e.g. `11.5.0+1`.

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://<ORGANIZATION_NAME>@dev.azure.com/<ORGANIZATION_NAME>/<PROJECT>/_git/<GIT_CONFIGURATION_REPOSITORY>
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/<ORGANIZATION_NAME>/<PROJECT>/<GIT_CONFIGURATION_REPOSITORY>
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing `.` after the URL, which clones into the current directory)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local repository - it will be pushed to the DevOps repository in a later step



1. Run the following command to display the current state of the deployment:

   ```
   mvm-config-manager show-sis --si-type=simon --detailed
   ```

   This returns detailed output on all SIMon SIs. An example output is shown below

   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z1  |  SIMon  | 1  |  true  |        |      false       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

   This shows that we have two SIMon SIs running in AZ1, m91z1 (the downlevel SI)
   and m92z1 (the uplevel SIMon SI).

   We want to enable the Thanos compactor on the uplevel SI, m92z1.

1. Enable Thanos compaction by running the following command:
   ```
   mvm-config-manager enable-thanos-compactor --si-name <UPLEVEL_SIMON_NAME>
   ```

   In the example above, `UPLEVEL_SIMON_NAME` would be set to m92z1

   The command will return output similar to
   ```
   Enabled Thanos compactor for SI m92z1
   ```

1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)

### 9.2.12 Verify that the Thanos compactor is enabled

1. Set the following environment variables:

   ```
   UPLEVEL_SIMON_NAME=<UPLEVEL_SIMON_NAME>
   SUBSCRIPTION_ID=<SUBSCRIPTION_ID>
   ```

   In the example above, `<UPLEVEL_SIMON_NAME>` would be m92z1

1. Run the following command:
   ```
   az aks command invoke \
      --name ${UPLEVEL_SIMON_NAME}-k8s \
      --resource-group ${UPLEVEL_SIMON_NAME}-rg \
      --subscription "${SUBSCRIPTION_ID}" \
      --command "kubectl get pods -n simon"
   ```

   This should return a single pod whose name begins with `simon-thanos-global-compactor`.

   If it does not, wait 60 seconds and repeat the command

   (it can take upto 5 minutes to create the pod)
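   The wait-and-repeat can be scripted with a small retry helper; a sketch
   (the helper name, try count, and interval are illustrative):

   ```shell
   # Run a command repeatedly until it succeeds, sleeping between attempts.
   # Returns non-zero if the command never succeeds within the given tries.
   wait_for() {
     local tries="$1" interval="$2"; shift 2
     local i
     for i in $(seq 1 "$tries"); do
       "$@" && return 0
       sleep "$interval"
     done
     return 1
   }

   # Usage (illustrative): poll for up to ~5 minutes for the compactor pod.
   # wait_for 5 60 sh -c \
   #   'az aks command invoke --name "${UPLEVEL_SIMON_NAME}-k8s" \
   #      --resource-group "${UPLEVEL_SIMON_NAME}-rg" \
   #      --subscription "${SUBSCRIPTION_ID}" \
   #      --command "kubectl get pods -n simon" \
   #    | grep -q simon-thanos-global-compactor'
   ```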


### 9.2.13 Remove the downlevel SIMon SI
***
This step must be carried out at least 4 hours after the previous step. Failure
to do so may result in a gap in historical metrics.
***

1. Set the following environment variables:

   ```
   BRANCH=Change ID, used as the prefix for any git branch created in the MOPs_The version number of the uplevel release, e.g. 11.5.0+1_destroy_simon
   ```

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://Name of the Azure DevOps organization.@dev.azure.com/Name of the Azure DevOps organization./Name of the Azure DevOps project./_git/Name of the configuration Azure DevOps repository.
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/Name of the Azure DevOps organization./Name of the Azure DevOps project./Name of the configuration Azure DevOps repository.
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing period after the URL)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local clone; it will be pushed to the DevOps repository in a later step

1. Run the following command to display the current state of the deployment:

   ```
   mvm-config-manager show-sis --si-type=simon --detailed
   ```

   This returns detailed output on all SIMon SIs. An example output is shown below:

   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z1  |  SIMon  | 1  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

   This shows that we have two SIMon SIs running in AZ1, m91z1 (the downlevel SI)
   and m92z1 (the uplevel SIMon SI).

   We want to destroy the downlevel SI, m91z1.

1. Destroy the downlevel SI by running the following command:
   ```
   mvm-config-manager destroy-si --si-name <DOWNLEVEL_SIMON_NAME>
   ```

   In the example above, `DOWNLEVEL_SIMON_NAME` would be set to `m91z1`.

   The command will return output similar to:
   ```
   Destroyed SI m91z1
   ```
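   Before pushing, it can be worth reviewing what the command changed in the
   working tree; a sketch (the helper name is illustrative, and the exact files
   touched depend on your configuration repository layout):

   ```shell
   # Summarize the pending change before it is pushed: which files were
   # added, deleted, or modified, and the overall diff size.
   review_pending_change() {
     git status --short
     git diff --stat
   }
   ```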

1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)

### 9.2.14 Monitor the SI deletion

When the file is committed and pushed to the SIMPL DevOps repository, the resources
will be automatically pulled into SIMPL, which will trigger a Job to run Terraform
to delete the Service Instance.

You can watch the progress of this job from the Log Analytics Workspace (LAW) associated with the region.

1. Connect to the LAW

   - Through the portal, select the resource **Name of the Log Analytics Workspace (LAW) associated with the region**
   - Select **Logs** from the menu
   - From the resultant **Queries** page
     - Enter `SIMPL jobs - Details` in the search box, this will match one query
     - Select **Run**. This will load the query into the editor and run the query

   The query shows the log entries that SIMPL has written for the apply / delete jobs,
   with the latest entry displayed first. The Container column contains the name
   of the delete job, `delete-si-<SI NAME>`. It can take a few minutes after the
   merge has completed before the first log entry appears.

   Sample outputs are shown below. Keep rerunning the query until the log entry
   `Destroy Complete!` is displayed. At this point the SIMPL job has completed.

   If multiple deletes and adds are occurring at the same time then, for clarity,
   edit the `where` clause at the end of the query to specify the SI name before
   selecting **Run**.

   For example, if the SI name was x01z1 then the modified query would be as shown below:

   ```
   let startTimestamp = ago(7d);
   (
    KubePodInventory
    | where TimeGenerated > startTimestamp
    | where ContainerName has_cs "apply-si"
        or ContainerName has_cs "delete-si"
        or ContainerName has_cs "apply-dc"
        or ContainerName has_cs "delete-dc"
    | distinct ContainerID, ContainerName, ClusterName, ClusterId
   )
   | join
   (
     ContainerLog
     | where TimeGenerated > startTimestamp
   ) on ContainerID
   | project TimeGenerated, LogEntry, Container = split(ContainerName, "/")[1], ClusterName, ClusterId = split(ClusterId, "/")[4]
   | project-rename ResourceGroup=ClusterId
   | sort by TimeGenerated desc
   // Uncomment this line to pick out the logs for a particular service instance.
   | where Container has "x01z1"
   ```


   **Query results for a running destroy job**
   ![Running destroy](images/runningdestroy.jpg)

   **Query results for a completed destroy job**
   ![Completed destroy](images/completeddestroy.jpg)

### 9.2.15 Update dashboards and alerts

When SIMon SIs are upgraded, the up-level SIMon SIs use up-level dashboards and alerting
rules, which may differ from the down-level dashboards and alerting rules.

Once the alerting rules have been updated for an SI, any alerts raised by the old
rules must be removed.

If an alert is still valid, a new one will be generated by the new alerting rules.

1. Log into any up-level Alerta via
`https://alerta-<UPLEVEL_SIMON_NAME>.The short (4-characters maximum) DNS label for the region.Name of the global DNS zone`

   Identify any old rule by comparing the value in the Last Receive Time column with
   the time that the rules were updated (which was when the up-level SIMon SI was deployed).

   If the Last Receive Time is before the update occurred, then this is an alert based on an old rule.

   Remove the old alert by highlighting it in Alerta and clicking the trashcan icon
   on the far right of the alert. When the dialog appears asking `Are you sure you want
   to delete this item?`, click **OK** to delete the alert
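   Old alerts can also be listed programmatically. A sketch using the Alerta
   REST API and `jq`; the endpoint path and field names follow Alerta's
   documented API, but verify them against your deployment, and the cutoff
   value is illustrative:

   ```shell
   # From an Alerta /api/alerts JSON response on stdin, print the ids of
   # alerts whose lastReceiveTime is before the given cutoff. ISO-8601 UTC
   # timestamps sort lexically, so a plain string comparison is enough.
   stale_alert_ids() {
     local cutoff="$1"
     jq -r --arg cutoff "$cutoff" \
       '.alerts[] | select(.lastReceiveTime < $cutoff) | .id'
   }

   # Usage (illustrative):
   # curl -s -H "Authorization: Key <API_KEY>" \
   #   "https://alerta-${UPLEVEL_SIMON_NAME}.<region>.<zone>/api/alerts" \
   #   | stale_alert_ids "<time the up-level SI was deployed>"
   ```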

## <a id=testplan></a> 9.3 Test Plan

   The test plan makes reference to two SIMon SIs:
   - **UPLEVEL_SIMON_NAME**. The name of the SIMon SI that is running the uplevel software
   - **DOWNLEVEL_SIMON_NAME**. The name of the SIMon SI that is running the current, or downlevel, software

### 9.3.1 Watch the pods come up
1. Set the following environment variables:

   ```
   UPLEVEL_SIMON_NAME=<UPLEVEL_SIMON_NAME>
   SUBSCRIPTION_ID=Azure subscription identifier for the MVM subscription.
   ```
1. Verify that all pods start successfully by running the command:
   ```
   az aks command invoke \
      --name ${UPLEVEL_SIMON_NAME}-k8s \
      --resource-group ${UPLEVEL_SIMON_NAME}-rg \
      --subscription "${SUBSCRIPTION_ID}" \
      --command "kubectl get pods -A"

   ```

   You should see pods come up in the following namespaces:
   - `connaisseur`
   - `csi-driver`
   - `gitops`
   - `istio-system`
   - `kube-system`
   - `simon`

   If any of the pods enter a failed state (anything other than `Init`,
   `PodInitializing` or `Running`), see the Troubleshooting section of the
   *Deployment Guide* for troubleshooting guidance.

   
   If any pods get stuck in the `Init` state, and
   ```
   az aks command invoke \
      --name ${UPLEVEL_SIMON_NAME}-k8s \
      --resource-group ${UPLEVEL_SIMON_NAME}-rg \
      --subscription "${SUBSCRIPTION_ID}" \
      --command "kubectl describe pod -n <namespace of initializing pod> <name of initializing pod>"
   ```
   reports errors of the form `AADSTS700213: No matching federated identity record found for presented assertion subject`,
   there was an error with Entra ID initialization for the cluster.
   Follow the backout procedure to destroy the service instance, then follow the MOP again to re-create the service instance.


   Depending on your Azure configuration, you might be prompted to re-login to
   Azure (including MFA) the first time the `kubectl` command executes. This is
   normal and to be expected.
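   A quick way to spot problem pods in the listing is to filter the
   `kubectl get pods -A` output; a sketch (the helper name is illustrative,
   and the allowed-state list mirrors the states above):

   ```shell
   # From 'kubectl get pods -A' output on stdin, print the name and status
   # of any pod that is not initializing or running. Column 4 is STATUS in
   # the default output (NAMESPACE NAME READY STATUS RESTARTS AGE).
   failed_pods() {
     awk 'NR > 1 && $4 !~ /^(Init|PodInitializing|Running)/ { print $2, $4 }'
   }

   # Usage (illustrative):
   # az aks command invoke ... --command "kubectl get pods -A" | failed_pods
   ```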

### 9.3.2 Verify that the SI is being monitored by SIMon

1. Log into Grafana

   Use the global URL `https://grafana-global.Name of the global DNS zone`

1. Select the **Azure Kubernetes Service health monitoring** dashboard

   Verify that the SI **<UPLEVEL_SIMON_NAME>** appears in the list of SIs in the Overview Health panel

### 9.3.3 Verify Alerta access
1. Log into Alerta

   Use the per SI URL `https://alerta-<UPLEVEL_SIMON_NAME>.The short (4-characters maximum) DNS label for the region.Name of the global DNS zone`

   Alerta will display any alerts raised in your deployment. There may be a
   Microsoft Entra login prompt before Alerta can be accessed

2. Verify that there are no active alerts associated with **<UPLEVEL_SIMON_NAME>**

### 9.3.4 Verify Grafana access
1. Log into Grafana

   Use the per SI URL `https://grafana-<UPLEVEL_SIMON_NAME>.The short (4-characters maximum) DNS label for the region.Name of the global DNS zone`

1. Select the **Azure Kubernetes Service health monitoring** dashboard

   Verify that the **<UPLEVEL_SIMON_NAME>** SI appears in the list of SIs in the Overview Health panel

### 9.3.5 Update global GUI access

This section disables global access for the **<DOWNLEVEL_SIMON_NAME>** SI and
enables it for the **<UPLEVEL_SIMON_NAME>** SI.

1. Set the following environment variables:

   ```
   BRANCH=Change ID, used as the prefix for any git branch created in the MOPs_The version number of the uplevel release, e.g. 11.5.0+1_update_global_gui
   AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP=The resource group that contains the MVM Azure Traffic Manager profiles
   ```

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://Name of the Azure DevOps organization.@dev.azure.com/Name of the Azure DevOps organization./Name of the Azure DevOps project./_git/Name of the configuration Azure DevOps repository.
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/Name of the Azure DevOps organization./Name of the Azure DevOps project./Name of the configuration Azure DevOps repository.
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository
will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**.
The repository can be cloned using either ssh or https. In both cases you will
run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing period after the URL)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local clone; it will be pushed to the DevOps repository in a later step

1. Enable uplevel SI access via the global URLs by running the following command:
   ```
   mvm-config-manager enable-global-guis --si-name <UPLEVEL_SIMON_NAME>
   ```

1. Disable downlevel SI access via the global URLs by running the following command:
   ```
   mvm-config-manager disable-global-guis --si-name <DOWNLEVEL_SIMON_NAME>
   ```

1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)


1. Verify that the global Alerta DNS pool has been updated by running the command:
   ```
   az network traffic-manager endpoint list \
     --profile-name alerta-secure-weighted-ext-mt-oam-atmp \
     --resource-group ${AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP} \
     -o table

   ```

   A sample output is shown below
   ```
   AlwaysServe    EndpointMonitorStatus    EndpointStatus    Name       Priority    ResourceGroup           Target                               Weight
   -------------  -----------------------  ----------------  ---------  ----------  ----------------------  -----------------------------------  --------
   Disabled       Degraded                 Enabled           m92z1-tme  18          ra-mvm-eus2-dev-atm-rg  alerta-m92z1.scus.ra.mvmlab.att.net  100
   Disabled       Degraded                 Enabled           m01z1-tme  19          ra-mvm-eus2-dev-atm-rg  alerta-m01z1.eus2.ra.mvmlab.att.net  100

   ```

   **<UPLEVEL_SIMON_NAME>** should appear in the list of targets and **<DOWNLEVEL_SIMON_NAME>** should not

   If this is not the case, wait a minute and reissue the command (it can take
   several minutes for the change to propagate through the system)


1. Verify that the global Grafana DNS pool has been updated by running the command:
   ```
   az network traffic-manager endpoint list \
     --profile-name grafana-secure-weighted-ext-mt-oam-atmp \
     --resource-group ${AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP} \
     -o table

   ```

   A sample output is shown below
   ```
   AlwaysServe    EndpointMonitorStatus    EndpointStatus    Name       Priority    ResourceGroup           Target                                Weight
   -------------  -----------------------  ----------------  ---------  ----------  ----------------------  ------------------------------------  --------
   Disabled       Degraded                 Enabled           m92z1-tme  18          ra-mvm-eus2-dev-atm-rg  grafana-m92z1.scus.ra.mvmlab.att.net  100
   Disabled       Degraded                 Enabled           m01z1-tme  19          ra-mvm-eus2-dev-atm-rg  grafana-m01z1.eus2.ra.mvmlab.att.net  100

   ```

   **<UPLEVEL_SIMON_NAME>** should appear in the list of targets and **<DOWNLEVEL_SIMON_NAME>** should not

   If this is not the case, wait a minute and reissue the command (it can take
   several minutes for the change to propagate through the system)
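   Both pool checks can be scripted; a sketch that reads the endpoint table on
   stdin and confirms the swap (the helper name is illustrative):

   ```shell
   # Verify the endpoint table lists the uplevel SI and no longer lists the
   # downlevel SI. Reads 'az network traffic-manager endpoint list -o table'
   # output on stdin; returns non-zero if either condition fails.
   check_endpoint_swap() {
     local up="$1" down="$2" out
     out=$(cat)
     printf '%s\n' "$out" | grep -q "$up" \
       || { echo "uplevel $up not in endpoint list"; return 1; }
     if printf '%s\n' "$out" | grep -q "$down"; then
       echo "downlevel $down still in endpoint list"; return 1
     fi
     echo "endpoint list OK"
   }

   # Usage (illustrative):
   # az network traffic-manager endpoint list \
   #   --profile-name grafana-secure-weighted-ext-mt-oam-atmp \
   #   --resource-group ${AZURE_TRAFFIC_MANAGER_RESOURCE_GROUP} \
   #   -o table | check_endpoint_swap "${UPLEVEL_SIMON_NAME}" "${DOWNLEVEL_SIMON_NAME}"
   ```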


### 9.3.6 Proceed with the upgrade

1. If the test plan succeeded, proceed to [**SIMon function verification successful**](#testpass)
   to continue the upgrade process, otherwise proceed to [**Backout**](#backout)


## <a id=backout></a> 9.4 Backout Procedure

The backout process is to delete the SIMon Service Instance that was just created.

### 9.4.1 Set the default subscription to the MVM subscription

1. Set the default subscription by running the command:

   ```
   az account set --subscription "Azure subscription identifier for the MVM subscription."
   ```

### 9.4.2 Prepare the configuration git repository

1. Set the following environment variables:

   ```
   BRANCH=Change ID, used as the prefix for any git branch created in the MOPs_The version number of the uplevel release, e.g. 11.5.0+1_destroy_uplevel_simon
   ```

   Export the correct form of the URL to access the git repository
   -  If using https to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=https://Name of the Azure DevOps organization.@dev.azure.com/Name of the Azure DevOps organization./Name of the Azure DevOps project./_git/Name of the configuration Azure DevOps repository.
      ```

   -  If using ssh to interact with the git repository
      ```
      GIT_CONFIGURATION_URL=git@ssh.dev.azure.com:v3/Name of the Azure DevOps organization./Name of the Azure DevOps project./Name of the configuration Azure DevOps repository.
      ```

1. Change to an appropriate working directory in Cloud shell. Your git repository will live in a subdirectory off of this path.

   ```
   cd ~
   mkdir configuration_repo
   cd configuration_repo
   ```

1. Clone the existing Azure DevOps git repository with **<GIT_CONFIGURATION_URL>**. The repository can be cloned using either ssh or https. In both cases you will run the following command:
   ```
   git clone ${GIT_CONFIGURATION_URL} .
   ```
   (note the space and the trailing period after the URL)

   -  If using HTTPS:
      - When prompted, input the password, **<GIT_PASSWORD>**, that you specified when the repository was first created
   - If using SSH:
      - You will not be prompted for a password.

   This will create a local copy of the repository in the current working directory.

1. Create a new working branch by running the command
   ```
   git checkout -b ${BRANCH}
   ```

   The branch currently exists only in your local clone; it will be pushed to the DevOps repository in a later step

### 9.4.3 Delete the SIMon SI

1. Run the following command to display the current state of the deployment:

   ```
   mvm-config-manager show-sis --si-type=simon --detailed
   ```

   This returns detailed output on all SIMon SIs. An example output is shown below:

   ```
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   | Config Version | SI Name | SI Type | AZ | Active | Weight | Thanos Compactor | Global GUIs |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z1  |  SIMon  | 1  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z1  |  SIMon  | 1  |  true  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z2  |  SIMon  | 2  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z2  |  SIMon  | 2  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.4.1+1-4   |  m91z3  |  SIMon  | 3  |  true  |        |       true       |     true    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+
   |   11.5.0+2-2   |  m92z3  |  SIMon  | 3  | false  |        |      false       |    false    |
   +----------------+---------+---------+----+--------+--------+------------------+-------------+

   ```

   This shows that we have two SIMon SIs running in AZ1, m91z1 (the downlevel SI)
   and m92z1 (the uplevel SIMon SI).

   We want to destroy the uplevel SI, m92z1.

1. Destroy the uplevel SI by running the following command:
   ```
   mvm-config-manager destroy-si --si-name <UPLEVEL_SIMON_NAME>
   ```

   In the example above, `UPLEVEL_SIMON_NAME` would be set to `m92z1`.

   The command will return output similar to:
   ```
   Destroyed SI m92z1
   ```

1. Push the change to the DevOps repository by running the command:

   ```
   git push --set-upstream origin ${BRANCH}

   ```

1. Merge the change into the main branch via the 'pull request' mechanism

1. Tidy up by running the command:

   ```
   cd ~
   rm -rf configuration_repo

   ```

   (We have finished with the local copy of the repository)

### 9.4.4 Monitor the SI deletion

When the file is committed and pushed to the SIMPL DevOps repository, the resources
will be automatically pulled into SIMPL, which will trigger a Job to run Terraform
to delete the Service Instance.

You can watch the progress of this job from the Log Analytics Workspace (LAW) associated with the region.

1. Connect to the LAW

   - Through the portal, select the resource **Name of the Log Analytics Workspace (LAW) associated with the region**
   - Select **Logs** from the menu
   - From the resultant **Queries** page
     - Enter `SIMPL jobs - Details` in the search box, this will match one query
     - Select **Run**. This will load the query into the editor and run the query

   The query shows the log entries that SIMPL has written for the apply / delete jobs,
   with the latest entry displayed first. The Container column contains the name
   of the delete job, `delete-si-<SI NAME>`. It can take a few minutes after the
   merge has completed before the first log entry appears.

   Sample outputs are shown below. Keep rerunning the query until the log entry
   `Destroy Complete!` is displayed. At this point the SIMPL job has completed.

   If multiple deletes and adds are occurring at the same time then, for clarity, edit
   the clause at the end of the query to enter the SI name before selecting Run.

   For example, if the SI Name was x01z1 then the modified query would be as shown below

   ```
   let startTimestamp = ago(7d);
   (
    KubePodInventory
    | where TimeGenerated > startTimestamp
    | where ContainerName has_cs "apply-si"
        or ContainerName has_cs "delete-si"
        or ContainerName has_cs "apply-dc"
        or ContainerName has_cs "delete-dc"
    | distinct ContainerID, ContainerName, ClusterName, ClusterId
   )
   | join
   (
     ContainerLog
     | where TimeGenerated > startTimestamp
   ) on ContainerID
   | project TimeGenerated, LogEntry, Container = split(ContainerName, "/")[1], ClusterName, ClusterId = split(ClusterId, "/")[4]
   | project-rename ResourceGroup=ClusterId
   | sort by TimeGenerated desc
   // Uncomment this line to pick out the logs for a particular service instance.
   | where Container has "x01z1"
   ```


   **Query results for a running destroy job**
   ![Running destroy](images/runningdestroy.jpg)

   **Query results for a completed destroy job**
   ![Completed destroy](images/completeddestroy.jpg)

# 10. Post checks

[System healthchecks]

# 11. Risk Assessment Score

1 - TBD

# 12. Execute MOP clean up if required

# 13. End of Document MOP

# 14. Service Assurance/Monitoring

# A. Appendix and Tables

# B. Approvers

# C. Peer Reviewers

# D. References for Other Documents

# E. Additional Appendices (If required)