
Reporting on Resource Group Tags in Azure


You might have seen either of Mike's blog posts on resource groups or resource tagging, or you may simply be looking to generate a report on resource group tags in Azure; either way, you're in the right place.  Yesterday we were taking a look at our subscription and looking to clean up some resources.  We needed a report to review Azure resource groups and their tags.  While this is relatively easy to do with PowerShell, getting a report that you can share easily was a little harder, so I took some time and wrote a PowerShell script that generates a report using the ReportHTML PowerShell module.

Resource Group Tag Report Generated with ReportHTML

Just like most things in IT, there were a few bumps in the road, mainly that tag names are stored in a hashtable and that they are case sensitive.  I wrote some code to auto-discover key names; it prefixes each key name with a number so you can find all case variations of a tag and correct them if needed. The report also includes a hyperlink to take you directly to the resource group in the Azure portal.
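To see why the case sensitivity matters, a quick check like the sketch below (assuming the AzureRM module is loaded and you are logged in) lists any tag key that appears with more than one casing across your resource groups:

[powershell]
# Group tag keys case-insensitively and report keys that appear with multiple casings
$RGs = Get-AzureRmResourceGroup
$RGs.Tags.Keys |
    Group-Object |
    Where-Object { ($_.Group | Select-Object -Unique).Count -gt 1 } |
    ForEach-Object { "Tag '$($_.Name)' appears as: $(($_.Group | Select-Object -Unique) -join ', ')" }
[/powershell]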

Once you know the tag names you want to report on, you can specify them as an array and pass that in as a parameter. If you specify the tag names array, the first two tag names are used to generate the pie charts shown above, e.g. -KeyNames @("Owner","Solution").  By default, the report is generated in your temp directory; you can use the -ReportOutputPath parameter to specify a different output path.  There is also a -YouLogoHereURLString parameter for your logo URL; it must point to a small image.

You can view and install this report script from the PowerShell Gallery here, using the following: Install-Script -Name run-ReportAzureResourceGroupTags
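For example (the tag names and output path below are illustrative; Install-Script drops the script into a Scripts folder that is normally on your path):

[powershell]
# Install the report script from the PowerShell Gallery
Install-Script -Name run-ReportAzureResourceGroupTags

# Generate the report for two tag names (these drive the pie charts) and write it to a specific folder
run-ReportAzureResourceGroupTags.ps1 -KeyNames @("Owner","Solution") -ReportOutputPath 'C:\Temp\Reports'
[/powershell]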

Or here is the code.

[powershell]
Param (
    [parameter(Mandatory=$false,ValueFromPipeline=$true)][Array]$KeyNames,
    [parameter(Mandatory=$false)][String]$ReportOutputPath,
    [parameter(Mandatory=$false)][String]$YouLogoHereURLString
)

[switch]$AutoKeyName = $false
$m = Get-Module -ListAvailable ReportHTML
if (!$m) { "Can't locate module ReportHTML. Use Install-Module ReportHTML"; break } else { Import-Module ReportHTML }

if ([string]::IsNullOrEmpty($(Get-AzureRmContext).Account)) { Login-AzureRmAccount }

$RGs = Get-AzureRmResourceGroup
if ($KeyNames.Count -eq 0) {
    [switch]$AutoKeyName = $true
    $KeyNames = ($RGs.Tags.Keys | Select-Object -Unique)
}

$SubscriptionRGs = @()
foreach ($RG in $RGs) {
    $myRG = [PSCustomObject]@{
        ResourceGroupName = $RG.ResourceGroupName
        Location          = $RG.Location
        Link              = ("URL01" + "https" + "://" + "portal.azure.com/#resource" + $RG.ResourceId + "URL02" + ($RG.ResourceId.Split('/') | Select-Object -Last 1) + "URL03")
    }
    $i = 0
    foreach ($KeyName in $KeyNames) {
        if ($AutoKeyName) {
            $myRG | Add-Member -MemberType NoteProperty -Name ([string]$i + "_" + $KeyName) -Value $RG.Tags.($KeyName)
            $i++
        } else {
            $myRG | Add-Member -MemberType NoteProperty -Name $KeyName -Value $RG.Tags.($KeyName)
        }
    }
    $SubscriptionRGs += $myRG
}

$rpt = @()
if (![string]::IsNullOrEmpty($YouLogoHereURLString)) {
    $rpt += Get-HTMLOpenPage -TitleText "Azure Resource Groups" -LeftLogoString $YouLogoHereURLString -RightLogoString ("https" + "://" + "azurefieldnotesblog.blob.core.windows.net/wp-ontent/2017/02/ReportHTML.png")
} else {
    $rpt += Get-HTMLOpenPage -TitleText "Azure Resource Groups"
}

if (!$AutoKeyName) {
    $Pie1 = $SubscriptionRGs | Group-Object $KeyNames[0]
    $Pie2 = $SubscriptionRGs | Group-Object $KeyNames[1]
    $Pie1Object = Get-HTMLPieChartObject -ColorScheme Random
    $Pie2Object = Get-HTMLPieChartObject -ColorScheme Generated

    $rpt += Get-HTMLContentOpen -HeaderText "Pie Charts"
    $rpt += Get-HTMLColumnOpen -ColumnNumber 1 -ColumnCount 2
    $rpt += Get-HTMLPieChart -ChartObject $Pie1Object -DataSet $Pie1
    $rpt += Get-HTMLColumnClose
    $rpt += Get-HTMLColumnOpen -ColumnNumber 2 -ColumnCount 2
    $rpt += Get-HTMLPieChart -ChartObject $Pie2Object -DataSet $Pie2
    $rpt += Get-HTMLColumnClose
    $rpt += Get-HTMLContentClose
}

$rpt += Get-HTMLContentOpen -HeaderText "Complete List"
$rpt += Get-HTMLContentDataTable -ArrayOfObjects $SubscriptionRGs
$rpt += Get-HTMLContentClose
$rpt += Get-HTMLClosePage

if ([string]::IsNullOrEmpty($ReportOutputPath)) {
    Save-HTMLReport -ShowReport -ReportContent $rpt -ReportName ResourceGroupTags
} else {
    Save-HTMLReport -ShowReport -ReportContent $rpt -ReportName ResourceGroupTags -ReportPath $ReportOutputPath
}
[/powershell]

There is a lot more that can be done with this code, so please feel free to share your ideas and code below for others. If you want to add your own logos or edit the style of the report, check out the help file here or run Get-HTMLReportHelp with the module installed.  I hope you find this helpful.

Enjoy

Using Multiple Azure Identities Simultaneously


Many Azure end users and developers have to deal with the challenges of holding multiple Microsoft and/or Azure Active Directory identities.  At a minimum, you might be like me and have an MSDN account as well as one or more corporate accounts.  There may also be situations where you have development or test tenants that use separate logins, or where you are testing with different users in different roles (e.g. admin user, basic user, user with no access).  In these situations, it can be painful (or at least annoying) to switch contexts, since a web browser can only log you into one identity at a time on sites such as portal.azure.com.  Have you ever gone to the Azure portal only to realize you last logged in with a different account, and then had to log out and back in with different credentials?  This is a common situation for me, and although it only takes 10 seconds or so to log in with different credentials, the frequency with which it happens makes it quite a hassle.

One solution is to use different browsers for different identities (i.e. one login in Firefox, one login in Chrome).  This may work for 2 or 3 different identities, but it's not ideal since every browser will behave differently and may have different conventions.

The solution I use, which I will detail below, is to utilize named profiles within Chrome, which allows logging into as many identities as needed at the same time.  No more logout/login hassle!

Step-by-step Guide

Here are the steps to add additional profiles to Chrome:

  1. Within Chrome, click your named profile and select Manage people
  2. Click Add Person on the dialog
  3. Type a name for the profile, select an identifying icon if desired, check or uncheck creating a desktop shortcut and then save
  4. Repeat for as many profiles as you wish to utilize.  For example, I have my default profile which uses my corporate production login, a secondary corporate development login, as well as my MSDN/Microsoft login
  5. Now when you click on your profile, you have the option of opening a new window for each profile, and each window maintains its own set of cookies, browser history, etc.
  6. Here is an example of all 3 of my profiles logged into the Azure portal at the same time

Hope you find this useful!

Azure Service Bus Monitoring and Alerting using Azure Function and Application Insights


Having designed and architected solutions for our clients on the Azure cloud for many years, we know that Service Bus plays an integral part in most application architectures whenever a messaging layer is involved. At the same time, we also know that there is no straight answer when customers ask us about the native monitoring and alerting capabilities of Service Bus. For visual dashboards, you need to drill down to the overview section of the queue blade.

For diagnostics, only operational logs are available natively.

There are a few 3rd party products on the market that have built a good story around monitoring and alerting for Azure Service Bus, but they come at an additional cost.

In the quest to answer our customers' question of how to get monitoring and alerting capabilities for Azure Service Bus, I found that the answer lies within Azure itself. This blog post illustrates a proof-of-concept solution built as part of one of our customer engagements. The PoC solution uses native Azure services, including:

  • Service Bus
  • Functions
  • Application Insights
  • Application Insight Analytics
  • Application Insight Alerts
  • Dashboard

The only service that would add cost to your monthly Azure bill is Functions (assuming Application Insights is already part of your application architecture). You would need to weigh the cost of purchasing a 3rd party monitoring product against the Functions cost.

Let’s dive into the actual solution.

Step 1: Create an Azure Service Bus Queue

This is of course a prerequisite, since we will be monitoring and alerting on this queue. For the PoC, I created a queue (named queue2) under a Service Bus namespace with a root managed key. I also filled up the queue using one of my favorite tools, “Service Bus Explorer”.
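If you would rather script this step than use the portal, something along these lines should work with the AzureRM.ServiceBus module; the resource group and namespace names are placeholders, and the parameter names have changed between module versions, so check Get-Help New-AzureRmServiceBusQueue for yours:

# Create the PoC queue (placeholder resource group and namespace names)
New-AzureRmServiceBusQueue -ResourceGroupName 'my-resource-group' -Namespace 'my-sb-namespace' -Name 'queue2'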

Step 2: Create an Azure Function

The next step is to create a function. The function's logic is to:

  1. Query the service bus to fetch all the queues and topics available under it.
  2. Get the count of active and dead letter messages
  3. Create custom telemetry metrics
  4. Finally, log the metrics to Application Insights

I chose to use C#, but other languages are available. I also configured the function to trigger every 5 seconds, so it is almost real time.

Step 3: Add Application Insights to the Function

Application Insights will be used by the function to log the Service Bus telemetry. Create or reuse an Application Insights instance and use its instrumentation key in the C# code. I have pasted the function code used in my PoC below. The logging part of the code relies on the custom metrics concept of Application Insights. For the PoC, I created 2 custom metrics: “Active Message Count” and “Dead Letter Count”.

Sample Function:

#r "Microsoft.ServiceBus"
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using System.Text.RegularExpressions;
using System.Net.Http;
using static System.Environment;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    var namespaceManager = NamespaceManager.CreateFromConnectionString(
        Env("ServiceBusConnectionString"));

    // Enumerate every topic subscription in the namespace and log its message counts
    foreach (var topic in await namespaceManager.GetTopicsAsync())
    {
        foreach (var subscription in await namespaceManager.GetSubscriptionsAsync(topic.Path))
        {
            await LogMessageCountsAsync(
                $"{Escape(topic.Path)}.{Escape(subscription.Name)}",
                subscription.MessageCountDetails, log);
        }
    }

    // Do the same for every queue in the namespace
    foreach (var queue in await namespaceManager.GetQueuesAsync())
    {
        await LogMessageCountsAsync(Escape(queue.Path),
            queue.MessageCountDetails, log);
    }
}

private static async Task LogMessageCountsAsync(string entityName,
    MessageCountDetails details, TraceWriter log)
{
    var telemetryClient = new TelemetryClient();
    telemetryClient.InstrumentationKey = "YOUR INSTRUMENTATION KEY";

    // Attach the counts to a trace and also track them as custom metrics
    var telemetry = new TraceTelemetry(entityName);
    telemetry.Properties.Add("Active Message Count", details.ActiveMessageCount.ToString());
    telemetry.Properties.Add("Dead Letter Count", details.DeadLetterMessageCount.ToString());
    telemetryClient.TrackMetric(new MetricTelemetry("Active Message Count", details.ActiveMessageCount));
    telemetryClient.TrackMetric(new MetricTelemetry("Dead Letter Count", details.DeadLetterMessageCount));
    telemetryClient.TrackTrace(telemetry);
}

private static string Escape(string input) => Regex.Replace(input, @"[^A-Za-z0-9]+", "_");
private static string Env(string name) => GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);

Step 4: Test your function

The next step is to test your function by running it. If everything is set up right, you should start seeing the telemetry in Application Insights. When you select one of the traces, you should be able to view the “Active Message Count” and “Dead Letter Count” under custom data. In the screenshot below, my queue2 has 17 active messages and 0 dead-letter messages.

Step 5: Add an Application Insight Analytics Query

The next step is to use AI Analytics to render a Service Bus chart for monitoring. From the AI blade, click the Analytics icon; AI Analytics is a separate portal with a query window. You need to write a query that renders a time chart for a queue based on those custom metrics. You can use the sample query below as a start.

Sample Query:

traces
| where message has 'queue2'
| extend activemessagecount = todouble( customDimensions.["Active Message Count"])
| summarize avg(timestamp) by activemessagecount
| order by avg_timestamp asc
| render timechart

Step 6: Publish the Chart to a Dashboard

The AI Analytics chart can be published (via the pin icon) to an Azure dashboard, which enables monitoring users to actively watch the Service Bus metrics when they log in to the Azure portal. This removes the need to drill down into the Service Bus blade.

Refer to this article to learn more about creating and publishing charts to dashboards.

Step 7: Add Alerts on the Custom Counters

The last step is to create Application Insights alerts. For the PoC, I created 2 alerts, on “Active Message Count” and “Dead Letter Message Count”, each with a threshold. These alert monitoring users by email if the message count exceeds the threshold limit. You can also send these alerts to external monitoring tools via a webhook.

Attached is a sample email from an Azure AI alert:

I hope these steps at least give you an idea that the custom solution above, built from native Azure services, can provide basic monitoring and alerting capabilities for Service Bus, and for that matter other Azure services as well. The key is to define the custom metrics you want to monitor against and then set up the solution.

Azure Table Storage and PowerShell, The Hard Way

In my previous post I gave a quick overview of the Shared Key authentication scheme used by the Azure storage service and demonstrated how to authenticate to and access the BLOB storage API through PowerShell.  The file and queue services follow an authentication scheme that aligns with the BLOB requirements; however, the table service is a bit different.  I felt it might help the more tortured souls out there (like myself) if I tried to describe the nuances.

Azure Storage REST API, Consistently Inconsistent

Like the REST of all things new Microsoft (read Azure), the mantra is consistency.  From a modern administrative perspective you should have a consistent experience across whatever environment and toolset you require.  If you are a traditional administrator/engineer of the Microsoft stack, the tooling takes the form of PowerShell cmdlets.  If you use Python, bash, etc., there is effectively equivalent tooling available.  My gripes notwithstanding, I think Microsoft has done a tremendous job in this regard.  I also make no claim that my preferences are necessarily the correct ones.  The 'inconsistencies' I will be discussing are not really issues for you if you use the mainline SDK(s).  As usual, I'll be focusing on how things work behind the scenes and my observations.

Shared Key Authentication, but Not All Are Equal

In exploring the shared key authentication to the BLOB REST API, we generated and encoded the HTTP request signature.  The string we needed to encode looked something like this:

GET                                         /*HTTP Verb*/
                                            /*Content-Encoding*/
                                            /*Content-Language*/
                                            /*Content-Length (include value when zero)*/
                                            /*Content-MD5*/
                                            /*Content-Type*/
                                            /*Date*/
                                            /*Range*/
x-ms-date:Sun, 11 Oct 2009 21:49:13 GMT
x-ms-version:2009-09-19                     /*CanonicalizedHeaders*/
/myaccount/mycontainer
comp:metadata
restype:container
timeout:20                                  /*CanonicalizedResource*/

The table service takes a much simpler and yet arcane format that is encoded in an identical fashion.

GET
application/json;odata=nometadata
Mon, 15 May 2017 17:29:11 GMT
/billing73d55f68/fabriclogae0bced538344887a4021ae5c3b61cd0GlobalTime(PartitionKey='407edc6d872271f853085a7a18387784',RowKey='02519075544040622622_407edc6d872271f853085a7a18387784_0_2952_2640')

In this case there are far fewer headers and query parameters to deal with; however, there are now fairly rigid requirements. A Date header must be specified, as opposed to the BLOB case where either Date or x-ms-date (or both) may be used.  A Content-Type header must also be specified as part of the signature, and no additional header details are required.  The canonical resource component is very different from the BLOB service.  The canonical resource still takes the format <storage account name>/<table name>/<query parameters>, but at the table service level only the comp query parameter is to be included.  As an example, to query the table service properties for the storage account the request would look something like https://myaccount.table.core.windows.net?restype=service&comp=properties. The canonical resource would be /myaccount/?comp=properties.

Generating the Signature with PowerShell

We will reuse our encoding function from the previous post and include a new method for generating the signature.
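The examples below call that new helper as GetTableTokenStringToSign; a minimal sketch of it, assuming the table string-to-sign format described above (Verb, an empty Content-MD5, Content-Type, Date, and the canonicalized resource), might look like this:

Function GetTableTokenStringToSign
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)][String]$Verb,
        [Parameter(Mandatory=$true)][String]$Resource,
        [Parameter(Mandatory=$true)][String]$Date,
        [Parameter(Mandatory=$true)][String]$ContentType
    )
    $ResourceUri = New-Object System.Uri($Resource)
    # Canonical resource: /<account>/<resource path>; query parameters other than comp are excluded
    # (comp handling is omitted here for brevity - append '?comp=...' for service-level requests)
    $AccountName = $ResourceUri.Host.Split('.')[0]
    $CanonicalizedResource = "/$AccountName$($ResourceUri.AbsolutePath)"
    Write-Output "$Verb`n`n$ContentType`n$Date`n$CanonicalizedResource"
}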


Function EncodeStorageRequest
{     
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        [String[]]$StringToSign,
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]
        [String]$SigningKey
    )     
    PROCESS
    {         
        foreach ($item in $StringToSign)
        {             
            $KeyBytes = [System.Convert]::FromBase64String($SigningKey)
            $HMAC = New-Object System.Security.Cryptography.HMACSHA256
            $HMAC.Key = $KeyBytes
            $UnsignedBytes = [System.Text.Encoding]::UTF8.GetBytes($item)
            $KeyHash = $HMAC.ComputeHash($UnsignedBytes)
            $SignedString=[System.Convert]::ToBase64String($KeyHash)
            Write-Output $SignedString
        }     
    } 
}

$AccountName='myaccount'
$AccessKey='vyAEEzbcnIAkLKti1leDbfrAOQBu5bx52zyCkW0fGIBCsS+DDGXpfidOeAWyg7do8ujft1mFhnz9kmliycmiXA=='
$Uri="https://$AccountName.table.core.windows.net/tables"
$SignatureParams=@{
    Resource=$Uri;
    Date=[DateTime]::UtcNow.ToString('R');
    Verb='GET';
    ContentType='application/json;odata=nometadata';
}
$RequestSignature=GetTableTokenStringToSign @SignatureParams
$TableToken=EncodeStorageRequest -StringToSign $RequestSignature -SigningKey $AccessKey
$TableHeaders=[ordered]@{
    'x-ms-version'= '2016-05-31';
    'DataServiceVersion'='3.0;Netfx';
    'Accept-Charset'='UTF-8';
    'Accept'='application/json;odata=fullmetadata';
    'Date'=$SignatureParams.Date;
    'Authorization'="SharedKey $($AccountName):$($TableToken)"
}
$RequestParams=@{
    Uri=$SignatureParams.Resource;
    Method=$SignatureParams.Verb;
    Headers=$TableHeaders;
    ContentType=$SignatureParams.ContentType;
    ErrorAction='STOP'
}
$Response=Invoke-WebRequest @RequestParams -Verbose
$Tables=$Response.Content | ConvertFrom-Json | Select-Object -ExpandProperty value


PS C:\WINDOWS\system32> $Tables|fl
odata.type     : acestack.Tables
odata.id       : https://acestack.table.core.windows.net/Tables('provisioninglog')
odata.editLink : Tables('provisioninglog')
TableName      : provisioninglog

The astute reader will notice we had to pass some different headers along.  All table requests require a DataServiceVersion or MaxDataServiceVersion header (or both).  These values align with maximum versions of the REST API, which I won't bother belaboring.  We also retrieved JSON rather than XML; there are a number of formats available, dictated by the Accept header.  In the example we retrieved it with full OData metadata; other valid types include minimalmetadata and nometadata (atom/xml is returned by earlier data service versions).  In another peculiarity, XML is the only format returned when retrieving service properties or stats.

Putting It to Greater Use With Your Old Friend OData

You likely want to actually read some data out of tables.  Now that authorizing the request is out of the way, it is a 'simple' matter of applying the appropriate OData query parameters.  We will start with retrieving a list of all entities within a table.  This will return a maximum of 1000 results (unless limited using the $top parameter) and a link to any subsequent pages of data will be returned in the response headers.  In the following example we will query all entities in the fabriclogaeGlobalTime table in the fabrixstuffz storage account.  In the interest of brevity I will limit this to 3 results.
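The query follows the same pattern as the list-tables request; here is a minimal sketch of it (the table name is illustrative, and $AccountName/$AccessKey are reused from above):

# Query the first three entities from the table (illustrative table name)
$TableName='fabriclogaeGlobalTime'
$Uri="https://$AccountName.table.core.windows.net/$TableName()?`$top=3"
$SignatureParams=@{
    Resource=$Uri;
    Date=[DateTime]::UtcNow.ToString('R');
    Verb='GET';
    ContentType='application/json;odata=nometadata';
}
$RequestSignature=GetTableTokenStringToSign @SignatureParams
$TableToken=EncodeStorageRequest -StringToSign $RequestSignature -SigningKey $AccessKey
$TableHeaders=[ordered]@{
    'x-ms-version'='2016-05-31';
    'DataServiceVersion'='3.0;Netfx';
    'Accept-Charset'='UTF-8';
    'Accept'='application/json;odata=nometadata';
    'Date'=$SignatureParams.Date;
    'Authorization'="SharedKey $($AccountName):$($TableToken)"
}
$RequestParams=@{
    Uri=$SignatureParams.Resource;
    Method=$SignatureParams.Verb;
    Headers=$TableHeaders;
    ContentType=$SignatureParams.ContentType;
    ErrorAction='STOP'
}
$Response=Invoke-WebRequest @RequestParams
$Entities=$Response.Content | ConvertFrom-Json | Select-Object -ExpandProperty value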

This should yield a result looking like this.


Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/json;odata=nometadata;streaming=true;charset=utf-8
Server: Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 56afccf3-0002-0104-0285-d382b4000000
x-ms-version: 2016-05-31
X-Content-Type-Options: nosniff
x-ms-continuation-NextPartitionKey: 1!44!NDA3ZWRjNmQ4NzIyNzFmODUzMDg1YTdhMTgzODc3ODQ-
x-ms-continuation-NextRowKey: 1!88!MDI1MTkwNjc4NDkwNDA1NzI1NjlfNDA3ZWRjNmQ4NzIyNzFmODUzMDg1YTdhMTgzODc3ODRfMF8yOTUyXzI2NDA-
Date: Tue, 23 May 2017 05:27:28 GMT
{
    "value":  [
                  {
                      "PartitionKey":  "407edc6d872271f853085a7a18387784",
                      "RowKey":  "02519067840040580939_407edc6d872271f853085a7a18387784_0_2952_2640",
                      "Timestamp":  "2017-05-23T05:25:55.6307353Z",
                      "EventType":  "Time",
                      "TaskName":  "FabricNode",
                      "dca_version":  -2147483648,
                      "epoch":  "1",
                      "localTime":  "2017-05-23T05:21:07.4129436Z",
                      "lowerBound":  "2017-05-23T05:19:56.173659Z",
                      "upperBound":  "2017-05-23T05:19:56.173659Z"
                  },
                  {
                      "PartitionKey":  "407edc6d872271f853085a7a18387784",
                      "RowKey":  "02519067843040711216_407edc6d872271f853085a7a18387784_0_2952_2640",
                      "Timestamp":  "2017-05-23T05:20:53.9265804Z",
                      "EventType":  "Time",
                      "TaskName":  "FabricNode",
                      "dca_version":  -2147483648,
                      "epoch":  "1",
                      "localTime":  "2017-05-23T05:16:07.3678218Z",
                      "lowerBound":  "2017-05-23T05:14:56.1606307Z",
                      "upperBound":  "2017-05-23T05:14:56.1606307Z"
                  },
                  {
                      "PartitionKey":  "407edc6d872271f853085a7a18387784",
                      "RowKey":  "02519067846040653329_407edc6d872271f853085a7a18387784_0_2952_2640",
                      "Timestamp":  "2017-05-23T05:15:52.7217857Z",
                      "EventType":  "Time",
                      "TaskName":  "FabricNode",
                      "dca_version":  -2147483648,
                      "epoch":  "1",
                      "localTime":  "2017-05-23T05:11:07.3406081Z",
                      "lowerBound":  "2017-05-23T05:09:56.1664211Z",
                      "upperBound":  "2017-05-23T05:09:56.1664211Z"
                  }
              ]
}

You should recognize a relatively standard OData response, with our desired values present within an array as the value property. There are two response headers to note here; x-ms-continuation-NextPartitionKey and x-ms-continuation-NextRowKey. These headers are the continuation token for retrieving the next available value(s). The service will return results in pages with a maximum length of 1000 results, unless limited using the $top query parameter like the previous example. If one were so inclined, they could continue to send GET requests, including the continuation token(s) until all results are enumerated.
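If you wanted to automate that enumeration, a rough sketch (reusing the helper functions above) could pass the continuation values back as the NextPartitionKey and NextRowKey query options until the continuation headers disappear:

$AllEntities=@()
$NextPartition=$null
$NextRow=$null
do {
    $PageUri="https://$AccountName.table.core.windows.net/$TableName()"
    if ($NextPartition) {
        # Pass the previous response's continuation headers back as query options (URL-encode if needed)
        $PageUri+="?NextPartitionKey=$NextPartition&NextRowKey=$NextRow"
    }
    $SignatureParams=@{
        Resource=$PageUri;
        Date=[DateTime]::UtcNow.ToString('R');
        Verb='GET';
        ContentType='application/json;odata=nometadata';
    }
    $Token=EncodeStorageRequest -StringToSign (GetTableTokenStringToSign @SignatureParams) -SigningKey $AccessKey
    $Headers=[ordered]@{
        'x-ms-version'='2016-05-31';
        'DataServiceVersion'='3.0;Netfx';
        'Accept'='application/json;odata=nometadata';
        'Date'=$SignatureParams.Date;
        'Authorization'="SharedKey $($AccountName):$($Token)"
    }
    $Response=Invoke-WebRequest -Uri $PageUri -Method GET -Headers $Headers -ContentType $SignatureParams.ContentType -ErrorAction Stop
    $AllEntities+=($Response.Content | ConvertFrom-Json).value
    if ($Response.Headers.ContainsKey('x-ms-continuation-NextPartitionKey')) {
        $NextPartition=$Response.Headers['x-ms-continuation-NextPartitionKey']
        $NextRow=$Response.Headers['x-ms-continuation-NextRowKey']
    }
    else {
        $NextPartition=$null
    }
} while ($NextPartition)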

Creating (or updating) table entities is a slightly different exercise, which can become somewhat convoluted (at least in PowerShell or other scripts).  Conceptually, all that is required to create an entity is a POST request to the table resource URI with a body containing the entity and the appropriate required headers.  The complexity is primarily a result of the metadata overhead associated with the server OData implementation. We'll examine this by inserting an entity into a fictional customers table.

$TableName='fakecustomers'
$Uri="https://$AccountName.table.core.windows.net/$TableName"
$SignatureParams=@{
    Resource=$Uri;
    Date=[DateTime]::UtcNow.ToString('R');
    Verb='POST';
    ContentType='application/json;odata=nometadata';
}
$RequestSignature=GetTableTokenStringToSign @SignatureParams
$TableToken=EncodeStorageRequest -StringToSign $RequestSignature -SigningKey $AccessKey
$TableHeaders=[ordered]@{
    'x-ms-version'= '2016-05-31'
    'DataServiceVersion'='3.0;Netfx'
    'Accept-Charset'='UTF-8'
    'Accept'='application/json;odata=fullmetadata';
    'Date'=$SignatureParams.Date;
    'Authorization'="SharedKey $($AccountName):$($TableToken)"
}
$PartitionKey='mypartitionkey'
$RowKey='row771'
$TableEntity=New-Object PSObject -Property @{
    "Address"="Mountain View";
    "Name"="Buckaroo Banzai";
    "Age"=33;
    "AmountDue"=200.23;
    "FavoriteItem"="oscillation overthruster";
    "CustomerCode@odata.type"="Edm.Guid";
    "CustomerCode"="c9da6455-213d-42c9-9a79-3e9149a57833";
    "CustomerSince@odata.type"="Edm.DateTime";
    "CustomerSince"="2008-07-10T00:00:00";
    "IsActive"=$true;
    "NumberOfOrders@odata.type"="Edm.Int64"
    "NumberOfOrders"="255";
    "PartitionKey"=$PartitionKey;
    "RowKey"=$RowKey
}
$RequestParams=@{
    Uri=$SignatureParams.Resource;
    Method=$SignatureParams.Verb;
    Body=$($TableEntity|ConvertTo-Json);
    Headers=$TableHeaders;
    ContentType=$SignatureParams.ContentType;
    ErrorAction='STOP'
}
$Response=Invoke-WebRequest @RequestParams

You should end up receiving the inserted object as a response:


PS C:\Windows\system32> $Response.Content | ConvertFrom-Json
PartitionKey : mypartitionkey
RowKey : row772
Timestamp : 2017-05-23T06:17:53.7244968Z
CustomerCode : c9da6455-213d-42c9-9a79-3e9149a57833
FavoriteItem : oscillation overthruster
AmountDue : 200.23
IsActive : True
CustomerSince : 2008-07-10T00:00:00
Name : Buckaroo Banzai
NumberOfOrders : 255
Age : 33
Address : Mountain View 

You should notice that the object we submitted had some extra properties not present on the inserted entity. The API requires that for any entity property where the (.NET) data type cannot be automatically inferred, a type annotation must be specified. In this case CustomerCode=c9da6455-213d-42c9-9a79-3e9149a57833 is a GUID (as opposed to a string), so it requires the companion property CustomerCode@odata.type=Edm.Guid.  If you would like a more complete explanation, the format is detailed here.

Three ways to do the same thing

You've got to give it to Microsoft, they certainly keep things interesting.  In the above example, I showed one of three ways that you can insert an entity into a table.  The service supports Insert, Insert or Merge (Upsert), and Insert or Replace operations (there are also individual Replace and Merge operations).  In the following example I will show the Upsert operation using the same table and entity as before.


$Uri="https://$AccountName.table.core.windows.net/$TableName(PartitionKey='$PartitionKey',RowKey='$RowKey')"
$SignatureParams=@{
    Resource=$Uri;
    Date=[DateTime]::UtcNow.ToString('R');
    Verb='MERGE';
    ContentType='application/json;odata=nometadata';
} 
$RequestSignature=GetTableTokenStringToSign @SignatureParams
$TableToken=EncodeStorageRequest -StringToSign $RequestSignature -SigningKey $AccessKey
$TableEntity | Add-Member -MemberType NoteProperty -Name 'NickName' -Value 'MrMan'
$TableHeaders=[ordered]@{
    'x-ms-version'= '2016-05-31'
    'DataServiceVersion'='3.0;Netfx'
    'Accept-Charset'='UTF-8'
    'Accept'='application/json;odata=fullmetadata';
    'Date'=$SignatureParams.Date;
    'Authorization'="SharedKey $($AccountName):$($TableToken)"
}
$RequestParams = @{
    Method= 'MERGE';
    Uri= $Uri;
    Body= $($TableEntity|ConvertTo-Json);
    Headers= $TableHeaders;
    ContentType= $SignatureParams.ContentType  # must match the Content-Type used in the string to sign
}
$Response=Invoke-WebRequest @RequestParams 

This should yield a response with the meaningful details of the operation in the headers.


PS C:\Windows\system32> $Response.Headers
Key                    Value
---                    -----  
x-ms-request-id        48489e3d-0002-005c-6515-d545b8000000
x-ms-version           2016-05-31 
X-Content-Type-Options nosniff
Content-Length         0
Cache-Control          no-cache
Date                   Thu, 25 May 2017 05:08:58 GMT
ETag                   W/"datetime'2017-05-25T05%3A08%3A59.5530222Z'"
Server                 Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0

Now What?

I'm sure I've bored most of you enough already so I won't belabor any more of the operations, but I hope that I've given you a little more insight into the workings of another key element of the Azure Storage Service(s). As always, if you don't have a proclivity for doing things the hard way, feel free to check out a module supporting most of the Table (and BLOB) service functionality on the Powershell Gallery or GitHub.