Capturing and using API queries from Azure in PowerShell with Fiddler
This is a walkthrough for using Fiddler to capture traffic to Azure from a browser, then writing and running that query in PowerShell. I wrote this because I don't like posts that skip over a key step and explain the entire thing with a wave of the hand. Although this article stands on its own, it is a key step in another series.
Similar newer content: Using Developer Tools to get the Payload to Create Azure Budget Alerts for Action Groups (New-AzConsumptionBudget)
Install & Configure Fiddler
Download Fiddler from https://www.telerik.com/fiddler. We need to decrypt traffic from Azure; basically, you're letting Fiddler get in the middle of the conversation you're having with Azure and take a look at the traffic. After installation, select Tools -> Options, open the HTTPS tab, and enable capturing and decrypting HTTPS traffic.
You need to close and reopen Fiddler. From the File menu, you can start or stop capturing internet traffic. Ensure capture is on and then refresh the Azure page you want to query. In this case, I want to capture data from the cost analysis page for a resource group. This is easier when you close most browser windows except the one you want to investigate. You can also apply filters and capture limitations, but I'll let you figure that out.
Capture Your Query
I want to capture the cost for a resource group with daily totals so we can track the cost over the month based on resource group tagging. Browse to a resource group and select Cost analysis while capturing traffic.
For the next part, you'll just need to search through the captured sessions for the request you're after. Select JSON in the response pane to see the data returned.
In the results above, we can see the rows of cost data in the JSON response; however, the other value in each row is the resource type, not the date.
This looks better: the columns JSON field shows the Cost, Date, and Currency, and we can even see some rows with the right data, so we have the query. Now to recreate it in PowerShell.
Create Query in PowerShell
First, grab the header and then create a few parameters from it. Note this is a POST request.
Raw Header
POST /subscriptions/11111111-4444-8888-9999-222222222222/resourceGroups/YourResourceGroupName/providers/Microsoft.CostManagement/query?api-version=2018-08-31&$top=40000 HTTP/1.1
Converted
$SubscriptionGUID = '11111111-4444-8888-9999-222222222222'
$ResourceGroupName = 'YourResourceGroupName'
$usageUri = "https://management.azure.com/subscriptions/$SubscriptionGUID/resourceGroups/$ResourceGroupName/providers/Microsoft.CostManagement/query?api-version=2018-08-31"
We need to create the JSON object that is passed with the POST; the raw JSON below is what we need to recreate.
Select Raw in Fiddler and capture the text in the brackets. It takes a little effort to convert this into a PowerShell object with variables:
commas , become semicolons ;
each opening brace { needs an @ in front of it: @{
colons : become equals signs =
RAW
{"type":"Usage","timeframe":"Custom","timePeriod":{"from":"2018-10-01T00:00:00+00:00","to":"2018-10-31T23:59:59+00:00"},"dataSet":{"granularity":"Daily","aggregation":{"totalCost":{"name":"PreTaxCost","function":"Sum"}},"sorting":[{"direction":"ascending","name":"UsageDate"}]}}
Converted
$year = (Get-Date).Year
$month = (Get-Date).Month
$DaysInMonth = [DateTime]::DaysInMonth($year, $month)
$monthPadded = $month.ToString('00')  # the API expects two-digit months, e.g. 2018-10-01
$Body = @{
    "type" = "Usage";
    "timeframe" = "Custom";
    "timePeriod" = @{
        "from" = "$($year)-$($monthPadded)-01T00:00:00+00:00";
        "to" = "$($year)-$($monthPadded)-$($DaysInMonth)T23:59:59+00:00"
    };
    "dataSet" = @{
        "granularity" = "Daily";
        "aggregation" = @{ "totalCost" = @{ "name" = "PreTaxCost"; "function" = "Sum" } };
        "sorting" = @(@{ "direction" = "ascending"; "name" = "UsageDate" })
    }
}
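As a quick sanity check, you can render the hashtable back to JSON and compare it by eye against the raw capture from Fiddler:

$Body | ConvertTo-Json -Depth 100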
Bearer Token
Since we aren't logged in with PowerShell, we need a bearer token to access this data. Luckily, someone has written a helpful function to capture the bearer token from your existing session: https://gallery.technet.microsoft.com/scriptcenter/Easily-obtain-AccessToken-3ba6e593.
function Get-AzureRmCachedAccessToken() {
    $ErrorActionPreference = 'Stop'

    if (-not (Get-Module AzureRm.Profile)) {
        Import-Module AzureRm.Profile
    }
    $azureRmProfileModuleVersion = (Get-Module AzureRm.Profile).Version
    # refactoring performed in AzureRm.Profile v3.0 or later
    if ($azureRmProfileModuleVersion.Major -ge 3) {
        $azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
        if (-not $azureRmProfile.Accounts.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    } else {
        # AzureRm.Profile < v3.0
        $azureRmProfile = [Microsoft.WindowsAzure.Commands.Common.AzureRmProfileProvider]::Instance.Profile
        if (-not $azureRmProfile.Context.Account.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    }

    $currentAzureContext = Get-AzureRmContext
    $profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)
    Write-Debug ("Getting access token for tenant " + $currentAzureContext.Subscription.TenantId)
    $token = $profileClient.AcquireAccessToken($currentAzureContext.Subscription.TenantId)
    $token.AccessToken
}
If we include this function in our code and write a few more lines, we are ready to put it all together. We create the headers section and use Invoke-RestMethod with POST, passing the body converted to JSON with -Depth 100; without that, nested data gets chopped out (ConvertTo-Json only serializes two levels deep by default).
$token = Get-AzureRmCachedAccessToken
$headers = @{"authorization" = "bearer $token"}
$results = Invoke-RestMethod $usageUri -Headers $headers -ContentType "application/json" -Method Post -Body ($body | ConvertTo-Json -Depth 100)
Final Script
$SubscriptionGUID = '11111111-4444-8888-9999-222222222222'
$ResourceGroupName = 'YourResourceGroupName'
function Get-AzureRmCachedAccessToken() {
    $ErrorActionPreference = 'Stop'

    if (-not (Get-Module AzureRm.Profile)) {
        Import-Module AzureRm.Profile
    }
    $azureRmProfileModuleVersion = (Get-Module AzureRm.Profile).Version
    # refactoring performed in AzureRm.Profile v3.0 or later
    if ($azureRmProfileModuleVersion.Major -ge 3) {
        $azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
        if (-not $azureRmProfile.Accounts.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    } else {
        # AzureRm.Profile < v3.0
        $azureRmProfile = [Microsoft.WindowsAzure.Commands.Common.AzureRmProfileProvider]::Instance.Profile
        if (-not $azureRmProfile.Context.Account.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    }

    $currentAzureContext = Get-AzureRmContext
    $profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)
    Write-Debug ("Getting access token for tenant " + $currentAzureContext.Subscription.TenantId)
    $token = $profileClient.AcquireAccessToken($currentAzureContext.Subscription.TenantId)
    $token.AccessToken
}
$year = (Get-Date).Year
$month = (Get-Date).Month
$DaysInMonth = [DateTime]::DaysInMonth($year, $month)
$monthPadded = $month.ToString('00')  # two-digit month for the date strings

$token = Get-AzureRmCachedAccessToken
$headers = @{"authorization" = "bearer $token"}
$Body = @{
    "type" = "Usage";
    "timeframe" = "Custom";
    "timePeriod" = @{
        "from" = "$($year)-$($monthPadded)-01T00:00:00+00:00";
        "to" = "$($year)-$($monthPadded)-$($DaysInMonth)T23:59:59+00:00"
    };
    "dataSet" = @{
        "granularity" = "Daily";
        "aggregation" = @{ "totalCost" = @{ "name" = "PreTaxCost"; "function" = "Sum" } };
        "sorting" = @(@{ "direction" = "ascending"; "name" = "UsageDate" })
    }
}

$usageUri = "https://management.azure.com/subscriptions/$SubscriptionGUID/resourceGroups/$ResourceGroupName/providers/Microsoft.CostManagement/query?api-version=2018-08-31"
$results = Invoke-RestMethod $usageUri -Headers $headers -ContentType "application/json" -Method Post -Body ($Body | ConvertTo-Json -Depth 100)

$results.properties.columns
$results.properties.rows
Results
The output shows the two selected properties: columns and rows.
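The columns and rows come back as parallel arrays. If you want friendlier output, here is a minimal sketch (assuming the response shape shown above) that pairs each row value with its column name:

$columns = $results.properties.columns.name
$usage = foreach ($row in $results.properties.rows) {
    $item = [ordered]@{}                      # preserve column order
    for ($i = 0; $i -lt $columns.Count; $i++) { $item[$columns[$i]] = $row[$i] }
    [pscustomobject]$item
}
$usage | Format-Table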
Good luck creating your own queries, I hope you found this helpful.
You can find another similar article by Microsoft here.
Resource Tagging Best Practices Applied (Part 2 – Enforcement)
The Real Problem
This post follows on from part 1 about resource tagging on resource groups, where we set up Azure policies to look for the existence of resource tags on resource groups. While this is helpful to understand the scale of the problem, the real problem is getting people to tag their resource groups when they create them. I work with a bunch of misfits and mavericks, and while all brilliant in their own right, asking them to do anything as simple as tagging their stuff is about as futile as yelling at the rain to stop.
The Solution
Since asking failed, let's try telling them. As in part one, let's assign a policy, but this time one that forces tags to be applied during object creation. You can set the tag value to the text wildcard *.
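As a hedged sketch, the same assignment can be made from PowerShell using the built-in 'Require tag and its value' definition (the display name may differ in your tenant, and $SubscriptionGUID is a placeholder for your subscription ID):

# Hypothetical example: enforce the Owner tag with a wildcard value at subscription scope.
$definition = Get-AzureRmPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Require tag and its value' }
New-AzureRmPolicyAssignment -Name 'Enforce Owner Tag' -Scope "/subscriptions/$SubscriptionGUID" `
    -PolicyDefinition $definition -PolicyParameterObject @{ tagName = 'Owner'; tagValue = '*' }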
While this will work 100% of the time, it does come with a few issues. This list is by no means exhaustive, and I will update it if and when we find more. If you have tried or are trying this and find other issues arising from enforcing tags on resource groups, please comment and I can explore them and add the content to this post.
Can't use the Portal
At the time of writing, unfortunately, Azure does not ask you to add tags during the creation of resource groups through the UI, so you simply get an error.
You have to use PowerShell or ARM templates to create resource groups.
New-AzureRmResourceGroup -Name "Blahdedah" -Location "WestUS" -Verbose -Force -Tag @{Owner="matt quickenden"; Solution="IoT Testing"}
Adding a template to Azure
So you're thinking you could upload a template with parameters for tags to Azure Templates so you could keep a UI experience?
{ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#", "contentVersion": "1.0.0.1", "parameters": { "ResourceGroupName": { "type": "string" }, "ResourceGroupLocation": { "type": "string" }, "OwnerTag": { "type": "string" }, "SolutionTag": { "type": "string" } }, "variables": { "tags": { "Owner": "[parameters('OwnerTag')]", "Solution": "[parameters('SolutionTag')]" } }, "resources": [ { "type": "Microsoft.Resources/resourceGroups", "apiVersion": "2018-05-01", "location": "[parameters('ResourceGroupLocation')]", "name": "[parameters('ResourceGroupName')]", "properties": {}, "tags": "[variables('tags')]" } ], "outputs": {} }
Close enough. You could limit the location to actual Azure locations and so on, but let's check if it works.
Interestingly, Azure creates a resource group first, before trying to execute your code. This could work for creating a blank resource group using ARM, but PowerShell is probably easier, or just include the tags in your main ARM template.
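For reference, here is a minimal sketch of deploying the subscription-level template above from PowerShell with New-AzureRmDeployment (the template file name is hypothetical):

# Subscription-scope deployment; parameter names match the template above.
New-AzureRmDeployment -Location 'WestUS' `
    -TemplateFile '.\create-taggedResourceGroup.json' `
    -TemplateParameterObject @{
        ResourceGroupName     = 'Blahdedah'
        ResourceGroupLocation = 'WestUS'
        OwnerTag              = 'matt quickenden'
        SolutionTag           = 'IoT Testing'
    }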
Visual Studio Deployments
Deploying quickstart templates from Visual Studio fails.
The workaround is to open the Deploy-AzureResourceGroup.ps1 script inside the project and scroll down to the line that starts with New-AzureRmResourceGroup. It should be around line 93.
Edit that line so your tags are added at deployment. That line should look something like this after you have edited it:
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Tag @{Owner="harry potter"; Solution="Deploy with tags "} -Verbose -Force
Azure Backup
Using an Azure Backup vault to create a backup of a VM: if you already have backups set up, it seems to be just fine; however, when you attempt to create a new backup, it seems to fail.
Taking a look at the activity log reveals that 'Update resource group' failed.
Cutting through the rest, we can find the status error message:
"statusMessage": "{\"error\":{\"code\":\"RequestDisallowedByPolicy\",\"target\":\"AzureBackupRG_westus_1\",\"message\":\"Resource 'AzureBackupRG_westus_1' was disallowed by policy. Policy identifiers: '[{\\\"policyAssignment\\\":{\\\"name\\\":\\\"Enforce Resource Group Tag Like Value SOLUTION Pattern\\\",\\\"id\\\":\\\"/subscriptions/....
Adding an exclusion to the enforcement policy for the resource group seems to have done the trick. New backups to this backup vault can be created and continue to run without any issues.
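If you script your assignments, a hedged sketch of the exclusion is to (re)create the assignment with -NotScope ($enforceDefinition and $SubscriptionGUID are placeholders for your own definition and subscription):

# Exclude the backup resource group named in the error above from enforcement.
New-AzureRmPolicyAssignment -Name 'Enforce Resource Group Tags' `
    -Scope "/subscriptions/$SubscriptionGUID" `
    -NotScope "/subscriptions/$SubscriptionGUID/resourceGroups/AzureBackupRG_westus_1" `
    -PolicyDefinition $enforceDefinition `
    -PolicyParameterObject @{ tagName = 'Solution'; tagValue = '*' }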
Next Steps
While this enforcement has created some problems, there aren't any show-stoppers at the moment. If it really is an issue for a particular use case or project and you can't simply add an exclusion, you can disable the policy temporarily and re-enable it later. Some deployments might get through without tags, but we can hunt those people down through the activity logs. This is more of a catch-all tool, so I still consider it useful and functional, and we will proceed.
Next up we will take a look into getting at the data and trying to get closer to the ultimate goal of putting data in an email targeted at the resource group owners.
Resource Tagging Best Practices Applied (Part 1 - Auditing)
Our most popular blog post was about resource tagging best practices. I thought I would follow up that post with some real-world application of tagging best practices in our own environment with the explicit purpose of tracking down Azure spend and getting that spend information into people's inboxes so they can take action to reduce costs.
The Environment
Our group pays one bill and we don't charge back the cost of Azure spend, so we technically don't need to track charge codes. A person is responsible for objects, and those objects are part of a solution or project, so we have two attributes we are interested in capturing.
We have two subscriptions to separate our environments, so we don't need an environment tag. The two environments are:
Critical Infrastructure
Labs
We are using only two tags, at the resource group level:
Owner
Solution
Azure Policy & Policy Definitions
Azure Policy has a number of built-in policies; however, it doesn't have one for auditing resource tags. Thankfully, we have a quick win: https://github.com/Azure/azure-policy/tree/master/samples/ResourceGroup/audit-resourceGroup-tags. You will need to be a subscription owner to create this policy definition.
$definition = New-AzureRmPolicyDefinition -Name "audit-resourceGroup-tags" `
    -DisplayName "Audit resource groups missing tags" `
    -Description "Audit resource groups that don't have a particular tag" `
    -Policy 'https://raw.githubusercontent.com/Azure/azure-policy/master/samples/ResourceGroup/audit-resourceGroup-tags/azurepolicy.rules.json' `
    -Parameter 'https://raw.githubusercontent.com/Azure/azure-policy/master/samples/ResourceGroup/audit-resourceGroup-tags/azurepolicy.parameters.json' `
    -Mode All
$definition
Let's do the next step through the UI. Go to Policy, Assignments, Assign Policy, and select your subscription. You can also select resource groups for exclusion (more on that later); for audit purposes I would like to target the entire subscription.
Next, select the Policy Definition and search for the word 'tag'. Here we can see the built-in definitions and the custom definition we have just uploaded.
Policy Assignment
Once selected, you can complete the remaining fields. We need to create policy assignments auditing the Owner and Solution tags for both subscriptions.
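If you'd rather script the assignment, here is a minimal sketch reusing $definition from above (the tagName parameter name is assumed from the sample's parameters file):

$scope = "/subscriptions/$((Get-AzureRmContext).Subscription.Id)"
New-AzureRmPolicyAssignment -Name 'audit-owner-tag' `
    -DisplayName 'Audit Owner tag on resource groups' `
    -Scope $scope `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ tagName = 'Owner' }

Repeat with tagName = 'Solution' (and against the other subscription) for the remaining assignments.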
Once complete, you should be able to see the following:
Compliance
If we select Compliance, we can see a summary of all the policies.
If we select one of the audits, we can see the items that have failed to match the assigned policy: resource groups that do not have the Owner resource tag.
While this helps find resource groups that are not tagged, the problem is that if someone spins up some resources and destroys them, that usage data has no tags associated with it, and therefore we can't track who provisioned it. I was using the activity log to try to find who was working with the resources or had created them.
Defining an Initiative
Alternatively, you can combine these policies into an Initiative, basically a group of policies.
In this case, I have defined the values in the initiative, but you can also use parameters. You then have to assign the initiative to a subscription.
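For completeness, here is a hedged sketch of building the same initiative in PowerShell with New-AzureRmPolicySetDefinition, reusing $definition from earlier (the initiative name and values are illustrative):

$policySet = @"
[
  { "policyDefinitionId": "$($definition.PolicyDefinitionId)", "parameters": { "tagName": { "value": "Owner" } } },
  { "policyDefinitionId": "$($definition.PolicyDefinitionId)", "parameters": { "tagName": { "value": "Solution" } } }
]
"@
New-AzureRmPolicySetDefinition -Name 'audit-required-tags' -DisplayName 'Audit required resource group tags' -PolicyDefinition $policySet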
Here you can see the two policies that are part of this initiative,
and then the compliance is summarized.
Migrate your project from Jira to Azure DevOps
My team recently had the need to migrate two of our project boards from Jira to Azure DevOps (formerly VSTS). There were a whole lot of suggestions when I googled with Bing, but not a whole lot of sample code that I could start with. This is completely understandable. With both systems being highly customizable and the needs of your team being unique, it would be near impossible to come up with a complete solution that works flawlessly for everyone. So, I decided to provide one.
Just kidding.
I hacked together pieces from all over to come up with a solution that worked for my project. It is by no means robust and bulletproof, but it does get the job done and is open for improvement and tailoring. In short, it is a good starting point for anyone needing to do this type of migration.
It is done as a console app without any trappings of a UI. This is a process that is usually executed once and therefore having a UI is not necessary. So, it is designed to be run using the debugger, which has the added benefit of being able to be monitored and paused whenever you want.
I had a few things that I was interested in. This may or may not line up with your requirements.
Migrate two Jira projects/boards into a single Azure DevOps project
Each Jira project's work items would be configured as children of an Azure DevOps epic.
Jira epics are mapped to features
Jira PBIs are mapped to PBIs
Jira tasks and sub-tasks are mapped to tasks
You can absolutely go nuts migrating all the history of PBIs. If that is your case, it might be better to find someone who specializes in this type of migration. In my case, I wanted some limited history. Here is what I was hoping to migrate:
Created by and Created date
Assigned To
work item hierarchy
title and description
Status (ToDo, done, etc)
priority
attachments
comments
tags
You'll notice that I did not migrate anything to do with sprints. In my case, both Jira projects had a different number of completed sprints, and it wasn't important enough to keep the sprint history to deal with this inconsistency. If you have that need, good luck!
I am using the Azure DevOps Scrum template for my project. It should work for other templates as well, but I have not tested it, so your mileage may vary.
Code
Enough already. Show me the code! Ok, ok.
Nuget Packages
You'll need 3 nuget packages:
Install-Package Atlassian.SDK
Install-Package Microsoft.VisualStudio.Services.Client
Install-Package Microsoft.TeamFoundationServer.Client
Credentials
You'll need to configure the connection to Jira and Azure DevOps. The todo block at the top contains some constants for this.
You'll need an Azure DevOps personal access token. See this for more information about personal access tokens.
You'll also need a local user account for Jira. Presumably, you could connect using an OpenId account. However, the SDK did not seem to provide an easy way to do this and, in the end, it was easier to create a temporary local admin account.
Field Migrations
Some fields, like title and attachments, migrate just fine. Others need a little massaging. For example, rich text in Jira uses Markdown, while rich text in Azure DevOps (at this point) uses HTML. In my case, I decided to punt on converting between Markdown and HTML. It wasn't worth spending the time, and Azure DevOps is likely to support Markdown rich text in the future.
Another place that needs massaging is work item statuses. They are close enough that, if you haven't customized your Azure DevOps status, the provided mapping should work pretty well.
Lastly, username conversion is completely unimplemented. You'll have to provide your own mapping. In my case, we only had a dozen developers and stakeholders, so I just created a static mapping. If your Jira usernames naturally map to your Azure DevOps usernames (ours didn't), you could probably just tack on your @contoso.com and call it a day. Unfortunately, our Jira instance used a completely different AAD tenant than our Azure DevOps organization. There were also some inconsistencies in usernames between the two systems.
Idempotency
You'll notice that the migration keeps a log of everything that has been migrated so far. This accomplishes two things:
An easy way to look up the completed mapping of Jira items to Azure DevOps items. This is essential to keep the Jira hierarchy.
Allows you to resume after an inevitable exception without re-importing everything again. If you do need to start over, simply delete the migrated.json file in the project's root directory.
That's It
Good luck in your migration! I hope this helps.
using Atlassian.Jira;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.VisualStudio.Services.WebApi.Patch.Json;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace JiraMigration
{
    class Program
    {
        // TODO: Provide these
        const string VstsUrl = "https://{AzureDevOps Organization}.visualstudio.com";
        const string VstsPAT = "{AzureDevOps Personal Access Token}";
        const string VstsProject = "{AzureDevOps Project Name}";

        const string JiraUserID = "{Jira local username}";
        const string JiraPassword = "{Jira local password}";
        const string JiraUrl = "{Jira instance url}";
        const string JiraProject = "{Jira Project abbreviation}";
        // END TODO

        // These provide the ability to resume a migration if an error occurs.
        //
        static string MigratedPath = Path.Combine(Environment.CurrentDirectory, "..", "..", "migrated.json");
        static Dictionary<string, int> Migrated = File.Exists(MigratedPath)
            ? JsonConvert.DeserializeObject<Dictionary<string, int>>(File.ReadAllText(MigratedPath))
            : new Dictionary<string, int>();

        static void Main(string[] args) => Execute().GetAwaiter().GetResult();

        static async Task Execute()
        {
            var vstsConnection = new VssConnection(new Uri(VstsUrl), new VssBasicCredential(string.Empty, VstsPAT));
            var witClient = vstsConnection.GetClient<WorkItemTrackingHttpClient>();

            var jiraConn = Jira.CreateRestClient(JiraUrl, JiraUserID, JiraPassword);

            var issues = jiraConn.Issues.Queryable
                .Where(p => p.Project == JiraProject)
                .Take(Int32.MaxValue)
                .ToList();

            // By default this will root the migrated items at the root of the Vsts project.
            // Uncomment this line and provide an epic id if you want everything to be
            // a child of a Vsts epic.
            //
            //AddMigrated(JiraProject, {VstsEpic Id});

            foreach (var feature in issues.Where(p => p.Type.Name == "Epic"))
                await CreateFeature(witClient, feature);
            foreach (var bug in issues.Where(p => p.Type.Name == "Bug"))
                await CreateBug(witClient, bug, JiraProject);
            foreach (var backlogItem in issues.Where(p => p.Type.Name == "Story"))
                await CreateBacklogItem(witClient, backlogItem, JiraProject);
            foreach (var task in issues.Where(p => p.Type.Name == "Task" || p.Type.Name == "Sub-task"))
                await CreateTask(witClient, task, JiraProject);
        }

        static Task CreateFeature(WorkItemTrackingHttpClient client, Issue jira)
            => CreateWorkItem(client, "Feature", jira, jira.Project,
                jira.CustomFields["Epic Name"].Values[0],
                jira.Description ?? jira.Summary,
                ResolveFeatureState(jira.Status));

        static Task CreateBug(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey)
            => CreateWorkItem(client, "Bug", jira,
                jira.CustomFields["Epic Link"]?.Values[0] ?? defaultParentKey,
                jira.Summary, jira.Description,
                ResolveBacklogItemState(jira.Status));

        static Task CreateBacklogItem(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey)
            => CreateWorkItem(client, "Product Backlog Item", jira,
                jira.CustomFields["Epic Link"]?.Values[0] ?? defaultParentKey,
                jira.Summary, jira.Description,
                ResolveBacklogItemState(jira.Status),
                new JsonPatchOperation { Path = "/fields/Microsoft.VSTS.Scheduling.Effort", Value = jira.CustomFields["Story Points"]?.Values[0] });

        static Task CreateTask(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey)
            => CreateWorkItem(client, "Task", jira,
                jira.ParentIssueKey ?? defaultParentKey,
                jira.Summary, jira.Description,
                ResolveTaskState(jira.Status));

        static async Task CreateWorkItem(WorkItemTrackingHttpClient client, string type, Issue jira, string parentKey, string title, string description, string state, params JsonPatchOperation[] fields)
        {
            // Short-circuit if we've already processed this item.
            //
            if (Migrated.ContainsKey(jira.Key.Value))
                return;

            var vsts = new JsonPatchDocument
            {
                new JsonPatchOperation { Path = "/fields/System.State", Value = state },
                new JsonPatchOperation { Path = "/fields/System.CreatedBy", Value = ResolveUser(jira.Reporter) },
                new JsonPatchOperation { Path = "/fields/System.CreatedDate", Value = jira.Created.Value.ToUniversalTime() },
                new JsonPatchOperation { Path = "/fields/System.ChangedBy", Value = ResolveUser(jira.Reporter) },
                new JsonPatchOperation { Path = "/fields/System.ChangedDate", Value = jira.Created.Value.ToUniversalTime() },
                new JsonPatchOperation { Path = "/fields/System.Title", Value = title },
                new JsonPatchOperation { Path = "/fields/System.Description", Value = description },
                new JsonPatchOperation { Path = "/fields/Microsoft.VSTS.Common.Priority", Value = ResolvePriority(jira.Priority) }
            };

            // Note: replace the hardcoded organization url below with your own.
            if (parentKey != null)
                vsts.Add(new JsonPatchOperation { Path = "/relations/-", Value = new WorkItemRelation { Rel = "System.LinkTypes.Hierarchy-Reverse", Url = $"https://ciappdev.visualstudio.com/_apis/wit/workItems/{Migrated[parentKey]}" } });
            if (jira.Assignee != null)
                vsts.Add(new JsonPatchOperation { Path = "/fields/System.AssignedTo", Value = ResolveUser(jira.Assignee) });
            if (jira.Labels.Any())
                vsts.Add(new JsonPatchOperation { Path = "/fields/System.Tags", Value = jira.Labels.Aggregate("", (l, r) => $"{l}; {r}").Trim(';', ' ') });

            foreach (var attachment in await jira.GetAttachmentsAsync())
            {
                var bytes = await attachment.DownloadDataAsync();
                using (var stream = new MemoryStream(bytes))
                {
                    var uploaded = await client.CreateAttachmentAsync(stream, VstsProject, fileName: attachment.FileName);
                    vsts.Add(new JsonPatchOperation { Path = "/relations/-", Value = new WorkItemRelation { Rel = "AttachedFile", Url = uploaded.Url } });
                }
            }

            var all = vsts.Concat(fields)
                .Where(p => p.Value != null)
                .ToList();
            vsts = new JsonPatchDocument();
            vsts.AddRange(all);

            var workItem = await client.CreateWorkItemAsync(vsts, VstsProject, type, bypassRules: true);
            AddMigrated(jira.Key.Value, workItem.Id.Value);

            await CreateComments(client, workItem.Id.Value, jira);

            Console.WriteLine($"Added {type}: {jira.Key} {title}");
        }

        static async Task CreateComments(WorkItemTrackingHttpClient client, int id, Issue jira)
        {
            var comments = (await jira.GetCommentsAsync())
                .Select(p => CreateComment(p.Body, p.Author, p.CreatedDate?.ToUniversalTime()))
                .Concat(new[] { CreateComment($"Migrated from {jira.Key}") })
                .ToList();
            foreach (var comment in comments)
                await client.UpdateWorkItemAsync(comment, id, bypassRules: true);
        }

        static JsonPatchDocument CreateComment(string comment, string username = null, DateTime? date = null)
        {
            var patch = new JsonPatchDocument
            {
                new JsonPatchOperation { Path = "/fields/System.History", Value = comment }
            };
            if (username != null)
                patch.Add(new JsonPatchOperation { Path = "/fields/System.ChangedBy", Value = ResolveUser(username) });
            if (date != null)
                patch.Add(new JsonPatchOperation { Path = "/fields/System.ChangedDate", Value = date?.ToUniversalTime() });

            return patch;
        }

        static void AddMigrated(string jira, int vsts)
        {
            if (Migrated.ContainsKey(jira))
                return;

            Migrated.Add(jira, vsts);
            File.WriteAllText(MigratedPath, JsonConvert.SerializeObject(Migrated));
        }

        static string ResolveUser(string user)
        {
            // Provide your own user mapping.
            //
            switch (user)
            {
                case "anna.banana": return "anna.banana@contoso.com";
                default: throw new ArgumentException("Could not find user", nameof(user));
            }
        }

        static string ResolveFeatureState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom task states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "New";
                case "Ready for Review": return "In Progress";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "New";
                case "In Progress": return "In Progress";
                case "Backlog": return "New";
                case "Selected for Development": return "New";
                case "Open": return "New";
                case "To Do": return "New";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static string ResolveBacklogItemState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom task states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "New";
                case "Ready for Review": return "Committed";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "New";
                case "In Progress": return "Committed";
                case "Backlog": return "New";
                case "Selected for Development": return "Approved";
                case "Open": return "Approved";
                case "To Do": return "New";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static string ResolveTaskState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom task states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "To Do";
                case "Ready for Review": return "In Progress";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "To Do";
                case "In Progress": return "In Progress";
                case "Backlog": return "To Do";
                case "Selected for Development": return "To Do";
                case "Open": return "To Do";
                case "To Do": return "To Do";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static int ResolvePriority(IssuePriority priority)
        {
            switch (priority.Name)
            {
                case "Low-Minimal business impact": return 4;
                case "Medium-Limited business impact": return 3;
                case "High-Significant business impact": return 2;
                case "Urgent- Critical business impact": return 1;
                default: throw new ArgumentException("Could not find priority", nameof(priority));
            }
        }
    }
}
Creating an Azure Stack AD FS SPN for use with az CLI
Following on from my previous blog post on filling in the gaps for AD FS on Azure Stack integrated systems, here are some more complete instructions on creating a Service Principal on Azure Stack systems using AD FS as the identity provider. Why do you need this? Well, check out the following scenarios as taken from https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-integrate-identity#spn-creation:
There are many scenarios that require the use of a service principal name (SPN) for authentication. The following are some examples:
CLI usage with AD FS deployment of Azure Stack
System Center Management Pack for Azure Stack when deployed with AD FS
Resource providers in Azure Stack when deployed with AD FS
Various third party applications
You require a non-interactive logon
I’ve highlighted the first point, ‘CLI usage with AD FS deployment of Azure Stack’. This is significant, as AD FS only supports interactive login. At this point in time, the AZ CLI does not support interactive mode, so you must use a service principal.
There are a few areas that weren’t clear to me at first, so I worked it all out and tried to simplify the process.
At a high level, these are the tasks:
Create an X509 certificate (or use an existing one) to use for authentication
Create a new Service Principal (Graph Application) on the internal Azure Stack domain via PEP PowerShell session
Return pertinent details, such as Client ID, cert thumbprint, Tenant ID and relevant external endpoints for the Azure Stack instance
Export the certificate as a PFX (for use on clients using PowerShell) and a PEM file including the private key (for use with Azure CLI)
Give the Service Principal permissions to the subscription
Here’s the link to the official docs: https://docs.microsoft.com/en-gb/azure/azure-stack/azure-stack-create-service-principals#create-service-principal-for-ad-fs
I’ve automated the process by augmenting the script provided in the link above. It creates a self-signed cert, AD FS SPN and files required to connect. It needs to be run on a system that has access to the PEP and also has the Azure Stack PowerShell module installed.
The script includes the steps to export the PFX (so you can use it with PowerShell on other systems) and PEM files, and it outputs all the relevant info you will need to connect via AZ CLI / PowerShell.
# Following code taken from https://github.com/mongodb/support-tools/blob/master/ssl-windows/Convert-PfxToPem.ps1
Add-Type @'
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Collections.Generic;
using System.Text;
public class Cert_Utils
{
public const int Base64LineLength = 64;
private static byte[] EncodeInteger(byte[] value)
{
var i = value;
if (value.Length > 0 && value[0] > 0x7F)
{
i = new byte[value.Length + 1];
i[0] = 0;
Array.Copy(value, 0, i, 1, value.Length);
}
return EncodeData(0x02, i);
}
private static byte[] EncodeLength(int length)
{
if (length < 0x80)
return new byte[1] { (byte)length };
var temp = length;
var bytesRequired = 0;
while (temp > 0)
{
temp >>= 8;
bytesRequired++;
}
var encodedLength = new byte[bytesRequired + 1];
encodedLength[0] = (byte)(bytesRequired | 0x80);
for (var i = bytesRequired - 1; i >= 0; i--)
encodedLength[bytesRequired - i] = (byte)(length >> (8 * i) & 0xff);
return encodedLength;
}
private static byte[] EncodeData(byte tag, byte[] data)
{
List<byte> result = new List<byte>();
result.Add(tag);
result.AddRange(EncodeLength(data.Length));
result.AddRange(data);
return result.ToArray();
}
public static string RsaPrivateKeyToPem(RSAParameters privateKey)
{
// Version: (INTEGER)0 - v1998
var version = new byte[] { 0x02, 0x01, 0x00 };
// OID: 1.2.840.113549.1.1.1 - with trailing null
var encodedOID = new byte[] { 0x30, 0x0D, 0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01, 0x05, 0x00 };
List<byte> privateKeySeq = new List<byte>();
privateKeySeq.AddRange(version);
privateKeySeq.AddRange(EncodeInteger(privateKey.Modulus));
privateKeySeq.AddRange(EncodeInteger(privateKey.Exponent));
privateKeySeq.AddRange(EncodeInteger(privateKey.D));
privateKeySeq.AddRange(EncodeInteger(privateKey.P));
privateKeySeq.AddRange(EncodeInteger(privateKey.Q));
privateKeySeq.AddRange(EncodeInteger(privateKey.DP));
privateKeySeq.AddRange(EncodeInteger(privateKey.DQ));
privateKeySeq.AddRange(EncodeInteger(privateKey.InverseQ));
List<byte> privateKeyInfo = new List<byte>();
privateKeyInfo.AddRange(version);
privateKeyInfo.AddRange(encodedOID);
privateKeyInfo.AddRange(EncodeData(0x04, EncodeData(0x30, privateKeySeq.ToArray())));
StringBuilder output = new StringBuilder();
var encodedPrivateKey = EncodeData(0x30, privateKeyInfo.ToArray());
var base64Encoded = Convert.ToBase64String(encodedPrivateKey, 0, (int)encodedPrivateKey.Length);
output.AppendLine("-----BEGIN PRIVATE KEY-----");
for (var i = 0; i < base64Encoded.Length; i += Base64LineLength)
output.AppendLine(base64Encoded.Substring(i, Math.Min(Base64LineLength, base64Encoded.Length - i)));
output.Append("-----END PRIVATE KEY-----");
return output.ToString();
}
public static string PfxCertificateToPem(X509Certificate2 certificate)
{
var certBase64 = Convert.ToBase64String(certificate.Export(X509ContentType.Cert));
var builder = new StringBuilder();
builder.AppendLine("-----BEGIN CERTIFICATE-----");
for (var i = 0; i < certBase64.Length; i += Cert_Utils.Base64LineLength)
builder.AppendLine(certBase64.Substring(i, Math.Min(Cert_Utils.Base64LineLength, certBase64.Length - i)));
builder.Append("-----END CERTIFICATE-----");
return builder.ToString();
}
}
'@
# Credential for accessing the ERCS PrivilegedEndpoint typically domain\cloudadmin
$creds = Get-Credential
$pepIP = "172.16.101.224"
$date = (get-date).ToString("yyMMddHHmm")
$appName = "appSPN"
$PEMFile = "c:\temp\$appName-$date.pem"
$PFXFile = "c:\temp\$appName-$date.pfx"
# Creating a PSSession to the ERCS PrivilegedEndpoint
$session = New-PSSession -ComputerName $pepIP -ConfigurationName PrivilegedEndpoint -Credential $creds
# This produces a self signed cert for testing purposes. It is preferred to use a managed certificate for this.
$cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject "CN=$appName" -KeySpec KeyExchange
$ServicePrincipal = Invoke-Command -Session $session {New-GraphApplication -Name $args[0] -ClientCertificates $args[1]} -ArgumentList $appName,$cert
$AzureStackInfo = Invoke-Command -Session $session -ScriptBlock { get-azurestackstampinformation }
$session|remove-pssession
# For Azure Stack development kit, this value is set to https://management.local.azurestack.external. We will read this from the AzureStackStampInformation output of the ERCS VM.
$ArmEndpoint = $AzureStackInfo.TenantExternalEndpoints.TenantResourceManager
$AdminEndpoint = $AzureStackInfo.AdminExternalEndpoints.AdminResourceManager
# For Azure Stack development kit, this value is set to https://graph.local.azurestack.external/. We will read this from the AzureStackStampInformation output of the ERCS VM.
$GraphAudience = "https://graph." + $AzureStackInfo.ExternalDomainFQDN + "/"
# TenantID for the stamp. We will read this from the AzureStackStampInformation output of the ERCS VM.
$TenantID = $AzureStackInfo.AADTenantID
# Register an AzureRM environment that targets your Azure Stack instance
Add-AzureRMEnvironment -Name "azurestacktenant" -ArmEndpoint $ArmEndpoint
Add-AzureRMEnvironment -Name "azurestackadmin" -ArmEndpoint $AdminEndpoint
# Set the GraphEndpointResourceId value
Set-AzureRmEnvironment -Name "azurestacktenant" -GraphAudience $GraphAudience -EnableAdfsAuthentication:$true
Add-AzureRmAccount -EnvironmentName "azurestacktenant" `
    -ServicePrincipal `
    -CertificateThumbprint $ServicePrincipal.Thumbprint `
    -ApplicationId $ServicePrincipal.ClientId `
    -TenantId $TenantID
# Output details required to pass to PowerShell or AZ CLI
write-host "ClientID : $($ServicePrincipal.ClientId)"
write-host "Cert Thumbprint : $($ServicePrincipal.Thumbprint)"
write-host "Application Name : $($ServicePrincipal.ApplicationName)"
write-host "TenantID : $TenantID"
write-host "ARM EndPoint : $ArmEndpoint"
write-host "Admin Endpoint : $AdminEndpoint"
write-host ""
write-host "PEM Cert path : $PEMFile"
write-host "PFX Cert Path : $PFXFile"
# Export the cert to a PEM file for use with Azure CLI
$result = [Cert_Utils]::PfxCertificateToPem($cert)
$parameters = ([Security.Cryptography.RSACryptoServiceProvider] $cert.PrivateKey).ExportParameters($true)
$result += "`r`n" + [Cert_Utils]::RsaPrivateKeyToPem($parameters);
$result | Out-File -Encoding ASCII -ErrorAction Stop $PEMFile
# Now Export the cert to PFX
$pw = Read-Host "Enter PFX Certificate Password" -AsSecureString
Export-PfxCertificate -cert $cert -FilePath $PFXFile -Password $pw
Here is an example of the output produced:
Next, connect to the Tenant Portal and give the Service Principal access to the subscription you want it to have access to:
Once you’ve done the above, here are the high-level steps to use the Service Principal account with Azure CLI:
Trust the Azure Stack CA Root Certificate (if using Enterprise CA / ASDK) within AZ CLI (Python). This is a one-time operation per system you’re running AZ CLI on.
Register Azure Stack environment (either tenant/user or admin)
Set the active cloud environment for CLI
Set the CLI to use Azure Stack compatible API version
Sign into the Azure Stack environment with service principal account
For reference, here are the official links with the information on how to do it. It works well, so just follow those:
https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-version-profiles-azurecli2
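The first step (trusting the CA root certificate) isn't covered by the commands below. A minimal sketch, assuming a default Windows install of AZ CLI and a hypothetical path to your exported root certificate, is to append the PEM to the CLI's certifi bundle:

# Paths vary by install; this is the default AZ CLI location on Windows.
$caBundle = 'C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem'
Get-Content 'C:\temp\azurestack-ca-root.pem' | Add-Content $caBundle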
(Angle-bracket values are placeholders; substitute the details output by the script above.)
az cloud register -n AzureStackUser --endpoint-resource-manager 'https://management.<region>.<fqdn>' --suffix-storage-endpoint '<region>.<fqdn>' --suffix-keyvault-dns '.vault.<region>.<fqdn>'
az cloud register -n AzureStackAdmin --endpoint-resource-manager 'https://adminmanagement.<region>.<fqdn>' --suffix-storage-endpoint '<region>.<fqdn>' --suffix-keyvault-dns '.vault.<region>.<fqdn>'
az cloud set -n AzureStackUser
az cloud update --profile 2017-03-09-profile
az login --tenant <TenantID> --service-principal -u <ClientID> -p <path to PEM file>
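If the login succeeds, a quick check is to list the resource groups the service principal can see:

az group list --output table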