Azure

Azure BLOB Storage and PowerShell: The Hard Way

Shared Key Authentication Scheme

In a previous post I covered my general love/hate affair with PowerShell, particularly with respect to the Microsoft cloud. For the majority of you that cannot be bothered to read it, I expressed a longstanding grudge against the Azure Cmdlets, rooted in the Switch-AzureMode fiasco. As an aside, those of you enjoying the Azure Stack technical previews may notice a similar problem arising with the 'AzureRM Profile', but I digress. More importantly, there was a general theme of understanding the abstractions placed in front of you as an IT professional. By now, most of you should be familiar with the OAuth Bearer tokens used throughout the Microsoft cloud. They are nearly ubiquitous, with the exception of a few services, most importantly storage. The storage service is authenticated with either a Shared Key or a Shared Access Signature; I will be focusing on the former.

Anatomy of the Signature

The Authorization header of the HTTP requests backing the Azure Storage Services takes the following form:

Authorization: SharedKey <Storage Account Name>:<AccessSignature>

The access signature is an HMAC-SHA256 encoded string (Signature) which is constructed mostly from the components of the backing HTTP request. The gritty details are (somewhat) clearly documented on MSDN, but, as an example, the string to be signed for getting the list of blobs in a container looks something like this:


GET
x-ms-date:Mon, 08 May 2017 23:28:20 GMT
x-ms-version:2016-05-31
/nosaashere/certificates
comp:list
restype:container

Let's examine the properties of a request for creating a BLOB Snapshot.

PUT https://nosaashere.blob.core.windows.net/nosaashere/managedvhds/Provisioned.vhdx?comp=snapshot

PUT                                          VERB
x-ms-date:Mon, 08 May 2017 23:28:21 GMT      Canonical Date Header
x-ms-version:2016-05-31                      Canonical Header
/nosaashere/managedvhds/Provisioned.vhdx     Canonical Resource
comp:snapshot                                Canonical Resource Query

A more advanced request (like this example for writing a range of pages to a Page BLOB) shows how additional headers come into scope, as we include an MD5 hash to verify the content, a content length, and other required API headers.


PUT
4096000
32qczJv1wUlqnJPQRdBUzw==
x-ms-blob-type:PageBlob
x-ms-date:Mon, 08 May 2017 23:28:39 GMT
x-ms-page-write:Update
x-ms-range:bytes=12288000-16383999
x-ms-version:2016-05-31
/nosaashere/managedvhds/Provisioned.vhdx
comp:page
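
As an aside, the Content-MD5 value shown above can be pre-computed from the page bytes before signing; a minimal sketch (the file path and offsets are illustrative):


# Illustrative values only: read the 4,096,000 byte page range and hash it
$Stream = [System.IO.File]::OpenRead('C:\temp\Provisioned.vhdx')
$Stream.Position = 12288000
$Buffer = New-Object byte[] 4096000
[void]$Stream.Read($Buffer, 0, $Buffer.Length)
$Stream.Close()
$MD5 = [System.Security.Cryptography.MD5]::Create()
# Base64 of the MD5 digest goes into the Content-MD5 header and the string to sign
$ContentMD5 = [System.Convert]::ToBase64String($MD5.ComputeHash($Buffer))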

The general idea is that the verb, standard and custom request headers, canonical headers, canonical resource, and query are presented as a newline-delimited string. This string is signed using the HMAC-SHA256 algorithm with the storage account key. The resulting base64-encoded string is used to craft the Authorization header, which is passed along with the other headers used to sign the request. If the server is able to match the signature, the request is authenticated.

Putting this in some PoSh

First things first, we need to generate the string to sign. This function takes parameters for the desired HTTP request (URI, verb, query, headers) and creates the string described above.


Function GetTokenStringToSign
{
    [CmdletBinding()]     
    param
    (
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [ValidateSet('GET','PUT','DELETE')]
        [string]$Verb="GET",
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName = $true)]
        [System.Uri]$Resource,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [long]$ContentLength,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [String]$ContentLanguage,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [String]$ContentEncoding,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [String]$ContentType,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [String]$ContentMD5,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [long]$RangeStart,
        [Parameter(Mandatory = $false,ValueFromPipelineByPropertyName = $true)]
        [long]$RangeEnd,
        [Parameter(Mandatory = $true,ValueFromPipelineByPropertyName = $true)]
        [System.Collections.IDictionary]$Headers
    )

    $ResourceBase=($Resource.Host.Split('.') | Select-Object -First 1).TrimEnd("`0")
    $ResourcePath=$Resource.LocalPath.TrimStart('/').TrimEnd("`0")
    $LengthString=[String]::Empty
    $Range=[String]::Empty
    if($ContentLength -gt 0){$LengthString="$ContentLength"}
    if($RangeEnd -gt 0){$Range="bytes=$($RangeStart)-$($RangeEnd-1)"}

    $SigningPieces = @($Verb, $ContentEncoding,$ContentLanguage, $LengthString,$ContentMD5, $ContentType, [String]::Empty, [String]::Empty, [String]::Empty, [String]::Empty, [String]::Empty, $Range)
    foreach ($item in $Headers.Keys)
    {
        $SigningPieces+="$($item):$($Headers[$item])"
    }
    $SigningPieces+="/$ResourceBase/$ResourcePath"

    if ([String]::IsNullOrEmpty($Resource.Query) -eq $false)
    {
        $QueryResources=@{}
        $QueryParams=$Resource.Query.Substring(1).Split('&')
        foreach ($QueryParam in $QueryParams)
        {
            $ItemPieces=$QueryParam.Split('=')
            $ItemKey = ($ItemPieces|Select-Object -First 1).TrimEnd("`0")
            $ItemValue = ($ItemPieces|Select-Object -Last 1).TrimEnd("`0")
            if($QueryResources.ContainsKey($ItemKey))
            { 
                $QueryResources[$ItemKey] = "$($QueryResources[$ItemKey]),$ItemValue"    
            }
            else
            {
                $QueryResources.Add($ItemKey, $ItemValue)
            }
        }
        $Sorted=$QueryResources.Keys|Sort-Object
        foreach ($QueryKey in $Sorted)
        {
            $SigningPieces += "$($QueryKey):$($QueryResources[$QueryKey])"
        }
    }

    $StringToSign = [String]::Join("`n",$SigningPieces)
    Write-Output $StringToSign 
}

Once we have the string to sign, it is a simple step to create the required HMAC-SHA256 hash using the storage account key. The following function takes the two arguments and returns the encoded signature.


Function EncodeStorageRequest
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        [String[]]$StringToSign,
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]
        [String]$SigningKey
    )
    PROCESS
    {         
        foreach ($item in $StringToSign)
        {
            $KeyBytes = [System.Convert]::FromBase64String($SigningKey)
            $HMAC = New-Object System.Security.Cryptography.HMACSHA256
            $HMAC.Key = $KeyBytes
            $UnsignedBytes = [System.Text.Encoding]::UTF8.GetBytes($item)
            $KeyHash = $HMAC.ComputeHash($UnsignedBytes)
            $SignedString=[System.Convert]::ToBase64String($KeyHash)
            Write-Output $SignedString 
        }     
    }
}

Now that we can create a signature, it is time to pass it on to the storage service API; the following examples will focus on BLOB storage. Let's return to the first example: retrieving a list of the BLOBs in the certificates container of the nosaashere storage account. This only requires the date and version API headers. The request takes the following format:

GET https://nosaashere.blob.core.windows.net/certificates?restype=container&comp=list
x-ms-date:Mon, 08 May 2017 23:28:20 GMT
x-ms-version:2016-05-31

To create the signature we can use the above function.


$StorageAccountName='nosaashere'
$ContainerName='certificates'
$AccessKey="WMTyrXNLHL+DF4Gwn1HgqMrpl3s8Zp7ttUevo0+KN2adpByHaYhX4OBY7fLNyzw5IItopGDAr8iQDxrhoHHiRg=="
$BlobContainerUri="https://$StorageAccountName.blob.core.windows.net/$ContainerName?restype=container&comp=list"
$BlobHeaders= @{
    "x-ms-date"=[DateTime]::UtcNow.ToString('R');
     "x-ms-version"='2016-05-31'; 
}
$UnsignedSignature=GetTokenStringToSign -Verb GET -Resource $BlobContainerUri -Headers $BlobHeaders
$StorageSignature=EncodeStorageRequest -StringToSign $UnsignedSignature -SigningKey $AccessKey
#Now we should have a 'token' for our actual request. 
$BlobHeaders.Add('Authorization',"SharedKey $($StorageAccountName):$($StorageSignature)") 
$Result=Invoke-RestMethod -Uri $BlobContainerUri -Headers $BlobHeaders -UseBasicParsing

If you make your call without using the -OutFile parameter you will find a weird looking string rather than the nice friendly XmlDocument you were expecting.

<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://nosaashere.blob.core.windows.net/" ContainerName="certificates">
    <Blobs>
        <Blob>
            <Name>azurestackroot.as01.cer</Name>
            <Properties>
                <Last-Modified>Fri, 05 May 2017 20:31:33 GMT</Last-Modified>
                <Etag>0x8D493F5B8410E96</Etag>
                <Content-Length>1001</Content-Length>
                <Content-Type>application/octet-stream</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>O2/fcFtzb9R6alGEgXDZKA==</Content-MD5>
                <Cache-Control />
                <Content-Disposition />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
                <ServerEncrypted>false</ServerEncrypted>
            </Properties>
        </Blob>
        <Blob>
            <Name>azurestackroot.as02.cer</Name>
            <Properties>
                <Last-Modified>Wed, 03 May 2017 22:54:49 GMT</Last-Modified>
                <Etag>0x8D4927767174A24</Etag>
                <Content-Length>1001</Content-Length>
                <Content-Type>application/octet-stream</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>arONICHXLfRUr61IH/XHbw==</Content-MD5>
                <Cache-Control />
                <Content-Disposition />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
                <ServerEncrypted>false</ServerEncrypted>
            </Properties>
        </Blob>
        <Blob>
            <Name>azurestackroot.as03.cer</Name>
            <Properties>
                <Last-Modified>Wed, 15 Mar 2017 19:43:50 GMT</Last-Modified>
                <Etag>0x8D46BDB9AB84CFD</Etag>
                <Content-Length>1001</Content-Length>
                <Content-Type>application/octet-stream</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>sZZ30o/oMO57VMfVR7ZBGg==</Content-MD5>
                <Cache-Control />
                <Content-Disposition />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
                <ServerEncrypted>false</ServerEncrypted>
            </Properties>
        </Blob>
        <Blob>
            <Name>azurestackroot.as04.cer</Name>
            <Properties>
                <Last-Modified>Wed, 26 Apr 2017 22:45:41 GMT</Last-Modified>
                <Etag>0x8D48CF5F7534F4B</Etag>
                <Content-Length>1001</Content-Length>
                <Content-Type>application/octet-stream</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>rnkI6VPz9i1pXOick4qDSw==</Content-MD5>
                <Cache-Control />
                <Content-Disposition />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
                <ServerEncrypted>false</ServerEncrypted>
            </Properties>
        </Blob>
    </Blobs>
    <NextMarker />
</EnumerationResults>

What, pray tell, is this? In a weird confluence of events, there is a long-standing 'issue' with the Invoke-RestMethod and Invoke-WebRequest Cmdlets and the UTF-8 byte order mark (BOM). Luckily, .NET has lots of support for this stuff. Generally, most people just use the -OutFile parameter and pipe the file along to the Get-Content Cmdlet. If you are like me, we'll look for the UTF-8 preamble and strip it from the string.


$UTF8ByteOrderMark=[System.Text.Encoding]::Default.GetString([System.Text.Encoding]::UTF8.GetPreamble())
if($Result.StartsWith($UTF8ByteOrderMark,[System.StringComparison]::Ordinal))
{
    $Result=$Result.Remove(0,$UTF8ByteOrderMark.Length)
}
[Xml]$ResultXml=$Result

Now you'll see something you should be able to work with:


PS C:\Users\chris> $ResultXml.EnumerationResults
ServiceEndpoint                           ContainerName Blobs NextMarker
---------------                           ------------- ----- ----------
https://nosaashere.blob.core.windows.net/ certificates Blobs
PS C:\Users\chris> $ResultXml.EnumerationResults.Blobs.Blob
Name                    Properties
----                    ----------
azurestackroot.as01.cer Properties 
azurestackroot.as02.cer Properties 
azurestackroot.as03.cer Properties
azurestackroot.as04.cer Properties

All storage service requests return a good deal of information in the response headers. Enumeration-style operations, like the previous example, return the relevant data in the response body. Many operations, like retrieving container or BLOB metadata, return the relevant data only in the response headers. Let's modify our previous request, noting the change in the query parameter. You will also need to use the Invoke-WebRequest Cmdlet (or your other favorite method) so that you can access the response headers.


$BlobContainerUri="https://$StorageAccountName.blob.core.windows.net/$ContainerName?restype=container&comp=metadata"
$BlobHeaders= @{ "x-ms-date"=[DateTime]::UtcNow.ToString('R'); "x-ms-version"='2016-05-31'; }
$UnsignedSignature=GetTokenStringToSign -Verb GET -Resource $BlobContainerUri -Headers $BlobHeaders
$StorageSignature=EncodeStorageRequest -StringToSign $UnsignedSignature -SigningKey $AccessKey
$BlobHeaders.Add('Authorization',"SharedKey $($StorageAccountName):$($StorageSignature)")
$Response=Invoke-WebRequest -Uri $BlobContainerUri -Headers $BlobHeaders -UseBasicParsing
$ContainerMetadata=$Response.Headers

We should have the resulting metadata key-value pairs present in the form x-ms-meta-<Key Name>.


C:\Users\chris> $ContainerMetaData
Key                      Value
---                      ----- 
Transfer-Encoding        chunked
x-ms-request-id          5f15423e-0001-003d-066d-ca0167000000
x-ms-version             2016-05-31
x-ms-meta-dispo          12345
x-ms-meta-stuff          test
Date                     Thu, 11 May 2017 15:41:16 GMT
ETag                     "0x8D4954F4245F500"
Last-Modified            Sun, 07 May 2017 13:45:01 GMT
Server                   Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
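
If you just want the user-defined metadata out of that header collection, a quick filter does the trick (illustrative only):


# Keep only the x-ms-meta-* pairs and strip the prefix
$ContainerMetadata.Keys | Where-Object { $_ -like 'x-ms-meta-*' } |
    ForEach-Object { "{0} = {1}" -f $_.Substring('x-ms-meta-'.Length), $ContainerMetadata[$_] }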

Where to go from here?

With the authentication scheme in hand, you can now access all of the storage services. This includes creating snapshots and uploading and downloading files. If you are not inclined to do things the hard way, feel free to check out a module supporting most of the BLOB service functionality on the PowerShell Gallery or GitHub.

Replace/Fix Unicode characters created by ConvertTo-Json in PowerShell for ARM Templates


Often there will be instances where you may want to make a programmatic change to an ARM template using a scripting language like PowerShell. Parsing XML or JSON type files in the past wasn’t always the easiest thing via some scripting languages. Those of us who remember the days of parsing XML files using VBScript still occasionally wake up in the middle of the night in a cold sweat.

Thankfully, PowerShell makes this relatively easy for even non-developers through the use of some native conversion cmdlets that allow you to quickly take a JSON file and convert it to and from a custom PowerShell object that you can easily traverse and modify. You can do this with the ConvertFrom-Json cmdlet like so:

[powershell]$myJSONobject = Get-Content -raw "c:\myJSONfile.txt" | ConvertFrom-Json[/powershell]

Without digging into all the different ways that you can use this to tweak an ARM template to your heart’s satisfaction, it’s reasonable to assume that at some point you’d like to convert your newly modified object back into a plain ole’ JSON file with the intention of submitting it back into Azure, and that’s when things go awry. When you use the intuitively named ConvertTo-Json cmdlet to convert your object back into a flat JSON text block, your gorgeous block of JSON content that once looked like this:

Now looks like this:

See the problem? Here it is again with some helpful highlighting:
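
A minimal illustration of the behavior, using a throwaway object rather than a real ARM template (the escaped output shown in the comment is approximate):

[powershell]
# Illustrative only: quotes, angle brackets and ampersands come back as \uXXXX escapes
@{ expression = "[concat('<', variables('name'), '&', '>')]" } | ConvertTo-Json
# "expression":  "[concat(\u0027\u003c\u0027, variables(\u0027name\u0027), \u0027\u0026\u0027, \u0027\u003e\u0027)]"
[/powershell]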

All your special characters get replaced by Unicode escape characters. While PowerShell’s ConvertFrom-Json cmdlet and other parsers handle these codes just fine when converting from text, Azure will have none of this nonsense and your template will fail to import/deploy/etc. Fortunately, the fix is very simple. The Regex class in the .NET framework has an “unescape” method:

https://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regex(v=vs.110).aspx

So instead of using:

[powershell]$myOutput = $myJSONobject | ConvertTo-Json -Depth 50[/powershell]

Use:

[powershell]$myOutput = $myJSONobject | ConvertTo-Json -Depth 50 | % { [System.Text.RegularExpressions.Regex]::Unescape($_) }[/powershell]

The output from the second command should generate a perfectly working ARM template. That is, of course, assuming that you correctly built your ARM template in the first place, but I take no responsibility for that. ;)

How to calculate Azure VHDs used space


One of the most hotly debated topics in the Azure world is estimating how much storage is currently used by deployed VMs. I have intentionally used the word "used" and not "allocated" because when VHDs are stored in a Standard Storage Account, they behave like "thin" (dynamically expanding) disks: you aren't really using all of the allocated space, and the Azure portal confirms this. The image below shows how an Azure VHD of 127 GB used as an OS disk is seen from a Windows VM.

The image below shows how the Azure portal calculates the used space for the same disk.

For reporting and billing reasons, you may need to get this information for all VMs deployed in a specific subscription.

This article will show how to retrieve this information for VHDs stored in both Standard and Premium Storage Accounts.

A little Azure theory…

Standard Storage Account:

When a new Azure Storage Account is created, by default, some hidden tables are created, and one of these is "$MetricsCapacityBlob". This table shows blob capacity values.

Note: There are other hidden tables which contain additional information related to an Azure Storage Account, such as its transactions.

Premium Storage Account:

From the Microsoft web site: "Billing for a premium storage disk/blob depends on the provisioned size of the disk/blob. Azure maps the provisioned size (rounded up) to the nearest premium storage disk option as specified in the table given in the Scalability and Performance Targets when using Premium Storage section. Each disk will map to one of the supported provisioned sizes and will be billed accordingly. Billing for any provisioned disk is prorated hourly using the monthly price for the Premium Storage offer. For example, if you provisioned a P10 disk and deleted it after 20 hours, you are billed for the P10 offering prorated to 20 hours. This is regardless of the amount of actual data written to the disk or the IOPS/throughput used."

From a reporting point of view, this means that the size of a deployed VHD matches the allocated space, and you are billed for its size "regardless of the amount of actual data written to the disk or the IOPS/throughput used".
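
To make the proration concrete, here is a tiny illustrative calculation; the monthly price below is a placeholder, not a real P10 price:

[powershell]
# Placeholder numbers only: prorate a monthly disk price over the hours the disk existed
$monthlyPrice = 20.0
$hoursProvisioned = 20
$hoursInMonth = 730
$billed = [math]::Round($monthlyPrice * ($hoursProvisioned / $hoursInMonth), 2)
[/powershell]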

Before starting to write some PowerShell code, you need to prepare your workstation to run the Azure storage report:

  • If the OS is older than Windows Server 2016 or Windows 10, download and install PowerShell 5.0 from here
  • Install ReportHTML module from PowerShell Gallery: Open a PowerShell console as administrator and execute the following code

[powershell]

Install-Module -Name ReportHTML

[/powershell]

Let's begin to write some PowerShell code

Note: Most of the functions below come from the Get Billable Size of Windows Azure Blobs (w/Snapshots) in a Container or Account script developed by the Windows Azure Product Team. Their code has been updated to work with the latest Azure PowerShell module and to support this script's purpose.

Open a PowerShell editor and create a new file called Module-Azure.ps1

This file will contain all functions invoked by the main script

[powershell]

function global:Connect-Azure {

Login-AzureRmAccount

$subName = Get-AzureRmSubscription | select SubscriptionName | Out-GridView -Title "Select a subscription" -OutputMode Single | select -ExpandProperty SubscriptionName

Select-AzureRmSubscription -SubscriptionName $subName

$global:azureSubscription = Get-AzurermSubscription -SubscriptionName $subName

}

function global:Calculate-BlobSpace {
    param(
        # The name of the storage account to enumerate.
        [Parameter(Mandatory = $true)]
        [string]$StorageAccountName,
        # The name of the storage container to enumerate.
        [Parameter(Mandatory = $false)]
        [ValidateNotNullOrEmpty()]
        [string]$ContainerName,
        # The name of the storage account resource group.
        [Parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$StorageAccountRGName
    )

    # Following modifies the Write-Verbose behavior to turn the messages on globally for this session
    $VerbosePreference = "Continue"

    $storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $StorageAccountRGName -Name $StorageAccountName -ErrorAction SilentlyContinue
    if ($storageAccount -eq $null) { throw "The storage account specified does not exist in this subscription." }

    # Instantiate a storage context for the storage account.
    $storagePrimaryKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccountRGName -Name $StorageAccountName)[0]).Value
    $storageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $storagePrimaryKey

    # Get a list of containers to process.
    $containers = New-Object System.Collections.ArrayList
    if ($ContainerName.Length -ne 0) {
        Get-AzureStorageContainer -Context $storageContext -Name $ContainerName -ErrorAction SilentlyContinue |
            ForEach-Object { $containers.Add($_) } | Out-Null
    }
    else {
        Get-AzureStorageContainer -Context $storageContext -ErrorAction SilentlyContinue |
            ForEach-Object { $containers.Add($_) } | Out-Null
    }

    # Calculate size.
    $sizeInBytes = 0
    if ($containers.Count -gt 0) {
        foreach ($container in $containers) {
            $result = Get-ContainerBytes $container.CloudBlobContainer
            $sizeInBytes += $result.containerSize
            Write-Verbose ("Container '{0}' with {1} blobs has a size of {2:F2}MB." -f $container.CloudBlobContainer.Name, $result.blobCount, ($result.containerSize / 1MB))
        }
        $sizeInGB = [math]::Round($sizeInBytes / 1GB)
        return $sizeInGB
    }
    else {
        Write-Warning "No containers found to process in storage account '$StorageAccountName'."
        $sizeInGB = 0
        return $sizeInGB
    }
}

function global:Get-BlobBytes {
    param (
        [Parameter(Mandatory=$true)]
        [Microsoft.WindowsAzure.Commands.Common.Storage.ResourceModel.AzureStorageBlob]$Blob
    )
    # Base + blob name
    $blobSizeInBytes = 124 + $Blob.Name.Length * 2
    # Get size of metadata
    $metadataEnumerator = $Blob.ICloudBlob.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext()) {
        $blobSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + $metadataEnumerator.Current.Value.Length
    }
    # Block blobs: count the committed block list; page blobs: count the allocated page ranges
    if ($Blob.BlobType -eq [Microsoft.WindowsAzure.Storage.Blob.BlobType]::BlockBlob) {
        $blobSizeInBytes += 8
        $Blob.ICloudBlob.DownloadBlockList() | ForEach-Object { $blobSizeInBytes += $_.Length + $_.Name.Length }
    }
    else {
        $Blob.ICloudBlob.GetPageRanges() | ForEach-Object { $blobSizeInBytes += 12 + $_.EndOffset - $_.StartOffset }
    }
    return $blobSizeInBytes
}

function global:Get-ContainerBytes {
    param (
        [Parameter(Mandatory=$true)]
        [Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer]$Container
    )
    # Base + name of container
    $containerSizeInBytes = 48 + $Container.Name.Length * 2
    # Get size of metadata
    $metadataEnumerator = $Container.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext()) {
        $containerSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + $metadataEnumerator.Current.Value.Length
    }
    # Get size for Shared Access Policies
    $containerSizeInBytes += $Container.Permission.SharedAccessPolicies.Count * 512
    # Calculate size of all blobs ($storageContext is inherited from the calling function's scope)
    $blobCount = 0
    $blobs = Get-AzureStorageBlob -Context $storageContext -Container $Container.Name
    foreach ($blobItem in $blobs) {
        $containerSizeInBytes += Get-BlobBytes $blobItem
        $blobCount++
    }
    return @{ "containerSize" = $containerSizeInBytes; "blobCount" = $blobCount }
}

function global:ListBlobCapacity([System.Array]$arr, $StgAccountName, $stgAccountRGName) {
    $Delimiter = ','
    $Today = Get-Date
    $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $stgAccountRGName -Name $StgAccountName)[0]).Value
    $StorageCtx = New-AzureStorageContext -StorageAccountName $StgAccountName -StorageAccountKey $StorageAccountKey
    $metrics = Get-AzureStorageServiceMetricsProperty -Context $StorageCtx -ServiceType "Blob" -MetricsType Hour -ErrorAction "SilentlyContinue"
    # if storage account has Monitoring turned on, get the Capacity for the configured nbr of Retention Days
    if ( $metrics.MetricsLevel -ne "None" ) {
        $RetentionDays = $metrics.RetentionDays
        if ( $RetentionDays -eq $null -or $RetentionDays -eq '' ) { $RetentionDays = 0 }
        $table = GetTableReference $StgAccountName $StorageAccountKey '$MetricsCapacityBlob'
        # loop over days
        for ( $d = $RetentionDays; $d -ge 0; $d = $d - 1) {
            $date = (Get-Date $Today.AddDays(-$d) -format 'yyyyMMdd')
            $partitionKey = $date + "T0000"
            $result = $table.Execute([Microsoft.WindowsAzure.Storage.Table.TableOperation]::Retrieve($partitionKey, "data"))
            if ( $result.HttpStatusCode -eq "200") {
                $arr += CreateRowObject $StgAccountName (Get-Date $Today.AddDays(-$d)).ToString("d")
            }
        }
    }
    return $arr
}

function global:GetBlobsCurrentCapacity($StgAccountName, $stgAccountRGName) {
    $Delimiter = ','
    $Today = Get-Date
    $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $stgAccountRGName -Name $StgAccountName)[0]).Value
    $StorageCtx = New-AzureStorageContext -StorageAccountName $StgAccountName -StorageAccountKey $StorageAccountKey
    $metrics = Get-AzureStorageServiceMetricsProperty -Context $StorageCtx -ServiceType "Blob" -MetricsType Hour -ErrorAction "SilentlyContinue"
    # if storage account has Monitoring turned on, read yesterday's capacity row from $MetricsCapacityBlob
    if ( $metrics.MetricsLevel -ne "None" ) {
        $table = GetTableReference $StgAccountName $StorageAccountKey '$MetricsCapacityBlob'
        $date = (Get-Date $Today.AddDays(-1) -format 'yyyyMMdd')
        $partitionKey = $date + "T0000"
        $result = $table.Execute([Microsoft.WindowsAzure.Storage.Table.TableOperation]::Retrieve($partitionKey, "data"))
        if ( $result.HttpStatusCode -eq "200") {
            $rowObj = CreateRowObject $StgAccountName (Get-Date $Today.AddDays(-1)).ToString("d")
        }
    }
    return $rowObj
}

# setup access to Azure Table $TableName
function global:GetTableReference($StgAccountName, $StorageAccountKey, $TableName) {
    $accountCredentials = New-Object "Microsoft.WindowsAzure.Storage.Auth.StorageCredentials" $StgAccountName, $StorageAccountKey
    $storageAccount = New-Object "Microsoft.WindowsAzure.Storage.CloudStorageAccount" $accountCredentials, $true
    $tableClient = $storageAccount.CreateCloudTableClient()
    $table = $tableClient.GetTableReference($TableName)
    return $table
}

function global:CreateRowObject($StgAccountName, $DateTime) {
    $row = New-Object System.Object
    $row | Add-Member -type NoteProperty -name "StorageAccountName" -Value $StgAccountName
    $row | Add-Member -type NoteProperty -name "DateTime" -Value $DateTime
    # $result and $Delimiter are inherited from the calling function's scope
    foreach ( $key in $result.Result.Properties.Keys ) {
        $val = $result.Result.Properties[$key].PropertyAsObject
        if ( $Delimiter -eq ",") { $val = $val -replace ",","." }
        $row | Add-Member -type NoteProperty -name $key -Value $val
    }
    return $row
}

function global:get-PremiumBlobGBSize {
    param (
        [Microsoft.WindowsAzure.Commands.Common.Storage.ResourceModel.AzureStorageBlob]$blobobj
    )
    $blobGBSize = [math]::Truncate(($blobObj.Length / 1GB))
    return $blobGBSize
}

[/powershell]

Some comments:

  • All functions have been declared as global so that they can be invoked from the main script if required
  • Connect-Azure: Allows you to select the Azure subscription against which the reporting script will run and establishes a connection
  • Calculate-BlobSpace: This is the function invoked by the main script which returns the sum of the space allocated to VHDs for a given Standard Storage Account (a standalone usage sketch follows this list)
  • Get-PremiumBlobGBSize: This is the function invoked by the main script which returns the allocated size of a single VHD; the main script sums these per Premium Storage Account
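
For example, a quick standalone check of a single Standard Storage Account could look like this (the account and resource group names are placeholders):

[powershell]
# Hypothetical usage: load the module, connect, then measure one storage account
.\Module-Azure.ps1
Connect-Azure
$usedGB = Calculate-BlobSpace -StorageAccountName 'mystandardsa01' -StorageAccountRGName 'rg-storage'
Write-Host "Blob space used: $usedGB GB"
[/powershell]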

Now save Module-Azure.ps1 and, in the same directory where it has been saved, create a new PowerShell file called "Generate-AzureReport.ps1". This will be the main file, which will invoke the Module-Azure functions.

Open Generate-AzureReport.ps1 with a PowerShell editor and paste the following code:

 

[powershell]

$ScriptDir = $PSScriptRoot

Write-Host "Current script directory is $ScriptDir"

Set-Location -Path $ScriptDir

.\module-azure.ps1

Connect-Azure

if (!(Get-Module ReportHTML)) {
    if (!(Get-Module ReportHTML -ListAvailable)) {
        Write-Host "Please install the ReportHTML module from the PowerShell Gallery"
    }
    else {
        Write-Host "Importing ReportHTML module"
        Import-Module ReportHTML
    }
}
else {
    Write-Host "ReportHTML module is already installed"
}

$subname = $azureSubscription.SubscriptionName

$billingReportFolder = "C:\temp\billing"

if ( !(test-path $billingReportFolder) ) { New-Item $billingReportFolder -ItemType Directory }

# Analyzing Standard Storage Account consumption from the Azure Storage hidden table $MetricsCapacityBlob

$sa = Find-AzureRmResource -ResourceType Microsoft.Storage/storageAccounts | Where-Object {$_.Sku.tier -ne "Premium" }

$saConsumptions = @()

foreach ($saItem in $sa) { $blobObj = GetBlobsCurrentCapacity -StgAccountName $saItem.Name -stgAccountRGName $saItem.ResourceGroupName

$blobCapacityGB = [math]::Truncate(($blobObj.Capacity / 1GB))

$blobSpaceItem = '' | select StorageAccountName,Allocated_GB

$blobSpaceItem.StorageAccountName = $saItem.Name

$blobSpaceItem.Allocated_GB = $blobCapacityGB

$saConsumptions += $blobSpaceItem

}

$saPremium = Find-AzureRmResource -ResourceType Microsoft.Storage/storageAccounts | Where-Object {$_.Sku.tier -eq "Premium" }

$saPremiumUsage = @()

foreach ($saPremiumItem in $saPremium) { $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $saPremiumItem.ResourceGroupName -Name $saPremiumItem.Name)[0]).Value

$StorageCtx = New-AzureStorageContext -StorageAccountName $saPremiumItem.Name -StorageAccountKey $StorageAccountKey

$containers = Get-AzureStorageContainer -Context $StorageCtx

$saPremiumUsageItem = '' | select StorageAccountName,Allocated_GB

$saPremiumUsageItem.StorageAccountName = $saPremiumItem.Name

$saPremiumUsageItem.Allocated_GB = 0

foreach ($container in $containers) { $blobs = Get-AzureStorageBlob -Context $StorageCtx -Container $container.Name

foreach ($blobItem in $blobs) { $blobsize = get-PremiumBlobGBSize ($blobItem)

$saPremiumUsageItem.Allocated_GB = $saPremiumUsageItem.Allocated_GB + $blobsize } }

$saPremiumUsage += $saPremiumUsageItem

}

# Calculate Totals

$saConsumptionsTotal = 0

foreach ($saConsumptionsItem in $saConsumptions) { $saConsumptionsTotal = $saConsumptionsTotal + $saConsumptionsItem.Allocated_GB }

$saPremiumUsageTotal = 0

foreach ($saPremiumUsageItem in $saPremiumUsage) { $saPremiumUsageTotal = $saPremiumUsageTotal + $saPremiumUsageItem.Allocated_GB }

# Generate Reports

$Rpt = @()

$TitleText = "Azure Usage Report "

$Rpt += Get-HTMLOpenPage -TitleText $TitleText -LeftLogoName "sample"

##

$Rpt += Get-HtmlContentOpen -HeaderText "Standard Storage Accounts Consumptions (GBs)"

$saConsumptionsTableStyle = Set-TableRowColor ($saConsumptions | Sort-Object -Property StorageAccountName) -Alternating

$Rpt += Get-HTMLContentTable ($saConsumptionsTableStyle) -Fixed

$Rpt += Get-HtmlContentClose

##

$Rpt += Get-HtmlContentOpen -HeaderText "Total of Standard Storage space allocated on Azure"

$Rpt += Get-HTMLContentText -Heading "Total (GB)" -Detail "$saConsumptionsTotal"

$Rpt += Get-HtmlContentClose

##

if ( $saPremiumUsage -ne $null) {

$Rpt += Get-HtmlContentOpen -HeaderText "Premium Storage Accounts Consumptions (GBs)"

$saPremiumUsageTableStyle = Set-TableRowColor ($saPremiumUsage | Sort-Object -Property StorageAccountName) -Alternating

$Rpt += Get-HTMLContentTable ($saPremiumUsageTableStyle) -Fixed

$Rpt += Get-HtmlContentClose

} ##

$Rpt += Get-HtmlContentOpen -HeaderText "Total of Premium Storage space allocated on Azure"

$Rpt += Get-HTMLContentText -Heading "Total (GB)" -Detail "$saPremiumUsageTotal "

$Rpt += Get-HtmlContentClose

##

$Rpt += Get-HTMLClosePage

$date = Get-Date -Format yyyy.MM.dd.hh.mm

$reportName = $subname + "_" + $date

Write-Host "Output folder is: C:\temp\Billing"

Write-Host "Report file name is : " $reportName

$file = Save-HTMLReport -ReportContent $rpt -ShowReport -ReportPath "C:\temp\Billing" -ReportName $reportName

[/powershell]

Save it

Some comments:

  • Line 7 executes Module-Azure, making its functions available
  • Line 9 invokes the Connect-Azure function, which is declared in Module-Azure as global
  • Lines 12 to 28 check whether the ReportHTML module is installed
  • Line 42 retrieves all Standard Storage Accounts available in the selected subscription
  • Lines 44 to 60 calculate the space allocated to VHDs stored in Standard Storage Accounts
  • Line 62 retrieves all Premium Storage Accounts available in the selected subscription
  • Lines 64 to 95 calculate the space allocated to VHDs stored in Premium Storage Accounts
  • Lines 97 to 112 calculate the totals across all Standard Storage Accounts and all Premium Storage Accounts
  • Lines 115 to 168 format the report in HTML using ReportHTML functions
  • Line 174 saves the report to the default location and opens the default browser to show it

It's time to run the script and get some reports!

From a PowerShell editor or a PowerShell console, run Generate-AzureReport.ps1

Provide Azure credentials

Select target Azure subscription and click on OK button

Sample of Azure Report

Sample of PowerShell output console

Note:

  • Output folder is the folder path where the report has been saved
  • Report file name is the name of the report file

Thanks for your patience. Any feedback is appreciated.

Moving VHDs from one Storage Account to Another (Part 2) - Updated 2017 08 18


This article will show how to automatically copy VHDs from a source storage account to a new one without hardcoding values, and then how to create a new VM with the disks in the new Storage Account, reusing the same values as the original VM. The first thing to do is create a PowerShell module file that keeps all the functions that will be invoked by the main script.

Ideally, this module could be reused for other purposes and new functions should be added according to your needs.

Open your preferred PowerShell editor and create a new file called "Module-Azure.ps1".

Note: all functions will be declared as global in order to be available to other scripts.

The first function to be added is called Connect-Azure and it will simplify Azure connection activities.

[powershell]

function global:Connect-Azure {
    Login-AzureRmAccount
    $global:subName = (Get-AzureRmSubscription | select SubscriptionName | Out-GridView -Title "Select a subscription" -OutputMode Single).SubscriptionName
    Select-AzureRmSubscription -SubscriptionName $subName
}

[/powershell]

The above function, using the Out-GridView cmdlet, will show all Azure subscriptions associated with your account and allow you to select the one against which to execute the script.

The second function to be added is called CopyVHDs. It will take care of copying all VHDs from the selected source Storage Account to the selected destination Storage Account.

[powershell]

function global:CopyVHDs {
    param (
        $sourceSAItem,
        $destinationSAItem
    )

$sourceSA = Get-AzureRmStorageAccount -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName

$sourceSAContainerName = "vhds"

$sourceSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName)[0].Value

$sourceSAContext = New-AzureStorageContext -StorageAccountName $sourceSAItem.StorageAccountName -StorageAccountKey $sourceSAKey

$blobItems = Get-AzureStorageBlob -Context $sourceSAContext -Container $sourceSAContainerName

$destinationSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $destinationSAItem.ResourceGroupName -Name $destinationSAItem.StorageAccountName)[0].Value

$destinationContainerName = "vhds"

$destinationSAContext = New-AzureStorageContext -StorageAccountName $destinationSAItem.StorageAccountName -StorageAccountKey $destinationSAKey

foreach ( $blobItem in $blobItems) {

# Copy the blob
Write-Host "Copying " $blobItem.Name " from " $sourceSAItem.StorageAccountName " to " $destinationSAItem.StorageAccountName

$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationSAContext -SrcBlob $blobItem.Name -Context $sourceSAContext -SrcContainer $sourceSAContainerName

$blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState

[int] $i = 0;

while ( $blobCopyStatus.Status -ne "Success") { Start-Sleep -Seconds 180

$i = $i + 1

$blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState

Write-Host "Blob copy status is " $blobCopyStatus.Status Write-Host "Bytes Copied: " $blobCopyStatus.BytesCopied Write-Host "Total Bytes: " $blobCopyStatus.TotalBytes

Write-Host "Cycle Number $i" }

Write-Host "Blob " $blobItem.Name " copied"

}

return $true }

[/powershell]

 

This function basically executes the same commands that were shown in the first article. Of course, the difference is that it takes as input two objects which contain the information required to copy VHDs between the two Storage Accounts. A couple of notes:

  • Because it is unknown how many VHDs need to be copied, there is a foreach loop that iterates over all of the VHDs to be copied
  • In order to minimize any side effects, the aforementioned foreach contains a while loop that ensures the copy activity has really completed before returning control (an alternative is sketched just after this list)
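
As an alternative to the manual polling loop, Get-AzureStorageBlobCopyState exposes a -WaitForComplete switch that blocks until the server-side copy finishes; a minimal sketch:

[powershell]
# Sketch only: let the cmdlet block until the copy completes instead of polling in a while loop
Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext |
    Get-AzureStorageBlobCopyState -WaitForComplete
[/powershell]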

The third function to be added is called Create-AzureVMFromVHDs. It will take care of creating a new VM using the existing VHDs. In order to provide a PoC of what can be achieved, the following assumptions have been made:

  • New VM will be deployed in an existing vnet / subnet
  • New VM will have the same size of the original VM
  • New VM will be deployed in a new Resource Group
  • New VM will be deployed in the same location of the (destination) Azure Storage Account where VHDs have been copied
  • New VM will have the same credentials of the source one
  • New VM will have assigned a new dynamic public IP
  • All VHDs copied from source Storage Account (which were attached to the source VM) will be attached to the new VM

[powershell]

function global:Create-AzureVMFromVHDs {
    param (
        $destinationVNETItem,
        $destinationSubnetItem,
        $destinationSAItem,
        $sourceVMItem
    )

$destinationSA = Get-AzureRmStorageAccount -Name $destinationSAItem.StorageAccountName -ResourceGroupName $destinationSAItem.ResourceGroupName

$Location = $destinationSA.PrimaryLocation

$destinationVMItem = '' | select name,ResourceGroupName

$destinationVMItem.name = ($sourceVMItem.Name + "02").ToLower()

$destinationVMItem.ResourceGroupName = ($sourceVMItem.ResourceGroupName + "02").ToLower()

$InterfaceName = $destinationVMItem.name + "-nic"

$destinationResourceGroup = New-AzureRmResourceGroup -location $Location -Name $destinationVMItem.ResourceGroupName

$sourceVM = get-azurermvm -Name $sourceVMItem.Name -ResourceGroupName $sourceVMItem.ResourceGroupName

$VMSize = $sourceVM.HardwareProfile.VmSize

$sourceVHDs = $sourceVM.StorageProfile.DataDisks

$OSDiskName = $sourceVM.StorageProfile.OsDisk.Name

$publicIPName = $destinationVMItem.name + "-pip"

$sourceVMOSDiskUri = $sourceVM.StorageProfile.OsDisk.Vhd.Uri

$OSDiskUri = $sourceVMOSDiskUri.Replace($sourceSAItem.StorageAccountName,$destinationSAItem.StorageAccountName)

# Network Script
$VNet = Get-AzureRMVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName
$Subnet = Get-AzureRMVirtualNetworkSubnetConfig -Name $destinationSubnetItem.Name -VirtualNetwork $VNet

#Public IP script
$publicIP = New-AzureRmPublicIpAddress -Name $publicIPName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $location -AllocationMethod Dynamic

# Create the Interface
$Interface = New-AzureRMNetworkInterface -Name $InterfaceName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PublicIpAddressId $publicIP.Id

#Compute script
$VirtualMachine = New-AzureRMVMConfig -VMName $destinationVMItem.name -VMSize $VMSize

$VirtualMachine = Add-AzureRMVMNetworkInterface -VM $VirtualMachine -Id $Interface.Id
$VirtualMachine = Set-AzureRMVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption Attach -Windows

$VirtualMachine = Set-AzureRmVMBootDiagnostics -VM $VirtualMachine -Disable

#Adding Data disk

if ( $sourceVHDs.Length -gt 0) { Write-Host "Found Data disks"

foreach ($sourceVHD in $sourceVHDs) { $destinationDataDiskUri = ($sourceVHD.Vhd.Uri).Replace($sourceSAItem.StorageAccountName,$destinationSAItem.StorageAccountName)

$VirtualMachine = Add-AzureRmVMDataDisk -VM $VirtualMachine -Name $sourceVHD.Name -VhdUri $destinationDataDiskUri -Lun $sourceVHD.Lun -Caching $sourceVHD.Caching -CreateOption Attach

}

} else { Write-Host "No Data disk found" }

# Create the VM in Azure
New-AzureRMVM -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -VM $VirtualMachine

Write-Host "VM created. Well Done !!"

}

[/powershell]

A couple of notes:

  • The URIs of the VHDs copied to the destination Storage Account are calculated by replacing the source Storage Account name with the destination Storage Account name in the original URIs
  • Destination VHDs will be attached in the same order (LUN) as the source VHDs

Module-Azure.ps1 should have a structure like this:

Now it's time to create another file called Move-VM.ps1, which should be stored in the same folder as Module-Azure.ps1.

Note: if you want to store it in a different folder, then update line 7.

Paste following code:

[powershell]

$ScriptDir = $PSScriptRoot

Write-Host "Current script directory is $ScriptDir"

Set-Location -Path $ScriptDir

.\Module-Azure.ps1

Connect-Azure

$vmItem = Get-AzureRmVM | select ResourceGroupName,Name | Out-GridView -Title "Select VM" -OutputMode Single

$sourceSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Source Storage Account" -OutputMode Single

$destinationSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Destination Storage Account" -OutputMode Single

# Stop VM

Write-Host "Stopping VM " $vmItem.Name

get-azurermvm -name $vmItem.Name -ResourceGroupName $vmItem.ResourceGroupName | stop-azurermvm

Write-Host "Stopped VM " $vmItem.Name

CopyVHDs -sourceSAItem $sourceSAItem -destinationSAItem $destinationSAItem

$destinationVNETItem = Get-AzureRmVirtualNetwork | select Name,ResourceGroupName | Out-GridView -Title "Select Destination VNET" -OutputMode Single

$destinationVNET = Get-AzureRmVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName

$destinationSubnetItem = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $destinationVNET | select Name,AddressPrefix | Out-GridView -Title "Select Destination Subnet" -OutputMode Single

Create-AzureVMFromVHDs -destinationVNETItem $destinationVNETItem -destinationSubnetItem $destinationSubnetItem -destinationSAItem $destinationSAItem -sourceVMItem $vmItem

[/powershell]

Comments:

  • Line 7: Module-Azure is executed, making its functions available
  • Line 9: the Connect-Azure function (declared in Module-Azure) is invoked. This is possible because it has been declared as global
  • From Line 11 to Line 15: a subset of source VM, source Storage Account and destination Storage Account info is retrieved. It will be used later
  • Lines 19-23: the source VM is stopped
  • Line 25: the CopyVHDs function (declared in Module-Azure) is invoked. This is possible because it has been declared as global. Note that we're just passing the previously retrieved parameters
  • From Line 27 to Line 31: the VNET and subnet to which the new VM will be attached are retrieved
  • Line 33: the Create-AzureVMFromVHDs function (declared in Module-Azure) is invoked. This is possible because it has been declared as global. Note that we're just passing already retrieved parameters

Following screenshots shows an execution of Move-VM script:

Select Azure subscription

Select source VM

Select source Storage Account

Select Destination Storage Account

Confirm to stop VM

Select destination VNET

Select destination Subnet

Output sample #1

Output sample #2

Source VM Resource Group

Destination VM RG

Destination Storage Account RG

Source VHDs

Destination VHDs

Thanks for your patience. Any feedback is appreciated.

Note: The above script has been tested with Azure PS 3.7.0 (March 2017).

Starting from Azure PS 4.x, the Get-AzureRmSubscription cmdlet returns an array of objects with the following properties: Name, Id, TenantId and State.

The Connect-Azure function uses the SubscriptionName value, which is no longer available. This is the reason why some people saw an empty window.

Connect-Azure function should be modified as follows to work with Azure PS 4.x:

[powershell]

function global:Connect-Azure {
    Login-AzureRmAccount
    $global:subName = (Get-AzureRmSubscription | select Name | Out-GridView -Title "Select a subscription" -OutputMode Single).Name
    Select-AzureRmSubscription -SubscriptionName $subName
}

[/powershell]

ExpressRoute Migration from ASM to ARM and legacy ASM Virtual Networks


I recently ran into an issue where an ExpressRoute had been migrated from Classic (ASM) to the new portal (ARM); however, legacy Classic Virtual Networks (VNets) were still in operation. These VNets refused to be deleted through either portal or PowerShell. Disconnecting the old VNet's gateway through the Classic portal would report success, but it would stay connected.

There’s no option to disconnect an ASM gateway in the ARM portal, only a delete option. Gave this a shot and predictably, this was the result:


Ok, let’s go to PowerShell and look for that obstinate link. Running Get-AzureDedicatedCircuitLink resulted in the following error:

PS C:\> get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

get-AzureDedicatedCircuitLink : InternalError: The server encountered an internal error. Please retry the request.

At line:1 char:1

+ get-AzureDedicatedCircuitLink -ServiceKey xxxxxx-xxxx-xxxx-xxxx-xxx...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo          : CloseError: (:) [Get-AzureDedicatedCircuitLink], CloudException

+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitLinkCommand

I couldn’t even find the link. Not only was modifying the circuit an issue, but reads were failing, too.

Turned out to be a simple setting change. When the ExpressRoute was migrated, as there were still Classic VNets, a final step of enabling the circuit for both deployment models was needed. Take a look at the culprit setting here, after running Get-AzureRMExpressRouteCircuit:

"serviceProviderProperties": {

"serviceProviderName": "equinix",

"peeringLocation": "Silicon Valley",

"bandwidthInMbps": 1000

},

"circuitProvisioningState": "Disabled",

"allowClassicOperations": false,

"gatewayManagerEtag": "",

"serviceKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",

"serviceProviderProvisioningState": "Provisioned"

AllowClassicOperations set to “false” blocks ASM operations from any access, including a simple “get” from the ExpressRoute circuit. Granting access is straightforward:

# Get details of the ExpressRoute circuit

$ckt = Get-AzureRmExpressRouteCircuit -Name "DemoCkt" -ResourceGroupName "DemoRG"

#Set "Allow Classic Operations" to TRUE

$ckt.AllowClassicOperations = $true
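
One note of caution: assigning the property only changes the local object, so the change presumably still needs to be written back to the circuit:

# Commit the change back to Azure

Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $ckt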

More info on this here.

But we still weren’t finished. I could now get a successful response from this:

get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

However this still failed:

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

So reads worked, but no modify. Ah—I remembered the ARM portal lock feature, and sure enough, a Read-Only lock on the Resource Group was inherited by the ExpressRoute (more about those here). Once the lock was removed, voila, I could remove the stubborn VNets no problem.

# Remove the Circuit Link for the Vnet

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

# Disconnect the gateway

Set-AzureVNetGateway -Disconnect -VNetName $Vnet -LocalNetworkSiteName <LocalNetworksitename>

# Delete the gateway

Remove-AzureVNetGateway -VNetName $Vnet

There's still no command to remove a single VNet; you have to use the portal (either will work), or you can use PowerShell to edit the NetworkConfig.xml file and then import it.
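
If you go the PowerShell route, a rough sketch of that export/edit/import cycle looks like this (the file path is illustrative):

# Export the current ASM network configuration

Get-AzureVNetConfig -ExportToFile "C:\temp\NetworkConfig.xml"

# Edit the file, remove the <VirtualNetworkSite> element for the VNet, then import it

Set-AzureVNetConfig -ConfigurationPath "C:\temp\NetworkConfig.xml"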

Once our legacy VNets were cleaned up, I re-enabled the Read-Only lock on the ExpressRoute.

In summary, nothing was “broken”, just an overlooked setting. I would recommend cleaning up your ASM/Classic VNets before migrating your ExpressRoute; it's so much easier and cleaner. But if you must leave some legacy virtual networks in place, remember to set the ExpressRoute “allowClassicOperations” setting to “true” after the migration is complete.

And don’t forget those pesky ARM Resource Group locks.