Azure, Azure Active Directory, and PowerShell. The Hard Way
In my opinion, a fundamental shift for Windows IT professionals occurred with the release of Exchange 2007. This established PowerShell as the tool for managing and configuring Microsoft enterprise products and systems going forward. I seem to remember hearing a story at the time that a mandate was established for every enterprisey product going forward: each GUI action would have a corresponding PowerShell execution. If anyone remembers the Exchange 2007 console, you could see that in action. I won’t bother corroborating this story, because the end results are self-evident. I can’t stress enough how important this was. Engineers and administrators with development and advanced scripting skills were spared the further indignity of committing crimes against Win32 and COM+ across a hodgepodge of usually awful languages. For Windows administrators for whom automation and scripting only meant batch files, a clear path forward was presented.
PowerShell and Leaky Abstractions
For roughly two years now, the scope of my work has mostly comprised Azure integration and automation. Azure proved to be no exception to the PowerShell new world order. I entered with wide-eyed optimism and quickly discovered a great deal of things, usually of a more advanced nature, that could not be done in the portal and purportedly could only be done via PowerShell. As I continue to receive product briefings, I have developed a bit of a pedantic pet peeve. PowerShell is always front and center in the presentations when referencing management, configuration, and automation. However, I continue to see a general hand wave given to the underlying technologies (e.g. WMI/CIM, REST API) and requirements. I absolutely understand the intent: PowerShell has always been meant to provide a truly powerful environment in a manner that is highly accessible and friendly to the IT professional. It has been a resounding success in that regard. A general concern I have is that of too much abstraction. There is a direct correlation between your frustration level and how shallow your understanding of what is really going on is when an inevitable edge case is hit and the abstraction leaks.
Getting Back to the Point
All of that is a really long preface to the actual point of this post. I’ve never been a fan of the Azure Cmdlets for a number of reasons, though for most of them I don’t necessarily impugn the decisions made by Microsoft. To be honest, I think both Switch-AzureMode (for those that remember) and the rapid release cadence, which has introduced many understandably unavoidable breaking changes, have really prejudiced me; as a result I tend to use the REST API almost exclusively. The fact is, modern systems, and especially all of the micro-service architectures being touted, are powered by REST APIs. In the case of the Microsoft cloud, with only a few notable exceptions, authentication and authorization are handled via Azure Active Directory. It behooves the engineer or developer focused on Microsoft technologies to have at least a cursory understanding. Azure Active Directory, Azure, and Office 365 are intrinsically linked. Every Azure and/or Office 365 subscription is linked with an Azure AD tenant as the primary identity provider. The modern web seems to have adopted OAuth as an authorization standard, and Azure AD can greatly streamline the authorization of web applications and APIs. The management and other API surfaces of Azure (and Azure Stack) and Office 365 have always taken advantage of this. The term you’ve likely heard thrown around is Bearer Token. That is more accurately described as an authorization header on the HTTP request containing a JWT (JSON Web Token). My largest issue with Azure and PowerShell automation has been the necessity to jump through hoops simply to obtain that token via PowerShell. In 2016 a somewhat disingenuously named Cmdlet, Get-AzureStackToken in the AzureRM.AzureStackAdmin module, finally appeared. I’m certain a large portion of the potential reading audience has used a tool like Fiddler, Postman, or even more recently resources.azure.com to either inspect or interact with these services. Those who have can feel free to skip straight to where this applies to PowerShell.
There are two types of applications you can create within Azure AD, each of which is identified by a unique Client Id and valid redirect URI(s); those are the most relevant properties we’ll focus on.
Web Applications
Web applications in Azure Active Directory are OAuth2 confidential clients and likely the most appropriate option for modern (web) use cases.
Tokens are obtained on behalf of a user using the OAuth2 authorization grant flow. An authorization code or id token will be supplied to the specified redirect URI.
If needed, client credentials (a rolling secret key) can be used to obtain tokens on behalf of the user or on its own from the web application itself.
Native Applications
Native applications in Azure Active Directory are OAuth2 public clients (e.g. an application on a desktop or mobile device).
These applications can obtain a token directly (with managed organizational accounts) or use the authorization grant flow, but application-level permissions are not applicable.
Getting to the PowerShell
I will focus primarily on the Native application type as it is most relevant to PowerShell. Most of the content will use Cmdlets from a module that will be available with this post. The module is heavily derived from and inspired by the ADAL libraries, has no external dependencies, and accepts a friendly PSCredential (with the appropriate rights) for any user authentication. The Azure Cmdlets use a Native application with a Client Id of 1950a258-227b-4e31-a9cf-717495945fc2 and a redirect URI of urn:ietf:wg:oauth:2.0:oob (the prescribed default for native applications). We’ll use this for our first attempt at obtaining a token for use against Azure Resource Manager or the legacy Service Management API. A peculiar detail of Azure management is that this is one of the scenarios where a token is fungible across disparate endpoints. I always use https://management.core.windows.net as my audience regardless of whether I will be working with ARM or SM. A token obtained for that audience will work the same as one for https://management.azure.com.
If all you would like is a snippet to obtain a token using the Azure Cmdlets’ well-known Client Id, I’ll offer you a chance to bail out now:
$Resource='https://management.core.windows.net'
$PoshClientId="1950a258-227b-4e31-a9cf-717495945fc2"
$TenantId="yourdomain.com"
$UserName="username@$TenantId"
$Password="asecurepassword"|ConvertTo-SecureString -AsPlainText -Force
$Credential=New-Object pscredential($UserName,$Password)
Get-AzureStackToken -Resource $Resource -AadTenantId $TenantId -ClientId $PoshClientId -Credential $Credential -Authority "https://login.microsoftonline.com/$TenantId"
A good deal of the functionality around provisioning applications and service principals has come to the Azure Cmdlets. You can now create applications, service principals from the applications, and role assignments to the service principals. To create an application, in this case one that would own a subscription, you would write something like this:
$ApplicationSecret="ASuperSecretPassword!"
$TenantId='e05b8b95-8c85-49af-9867-f8ac0a257778'
$SubscriptionId='bc3661fe-08f5-4b87-8529-9190f94c163e'
$AppDisplayName='The Subscription Owning Web App'
$HomePage='https://azurefieldnotes.com'
$IdentifierUris=@('https://whereeveryouwant.com')
$NewWebApp=New-AzureRmADApplication -DisplayName $AppDisplayName -HomePage $HomePage `
-IdentifierUris $IdentifierUris -StartDate (Get-Date) -EndDate (Get-Date).AddYears(1) `
-Password $ApplicationSecret
$WebAppServicePrincipal=New-AzureRmADServicePrincipal -ApplicationId $NewWebApp.ApplicationId
$NewRoleAssignment=New-AzureRmRoleAssignment -ObjectId $WebAppServicePrincipal.Id -RoleDefinitionName 'owner' -Scope "/subscriptions/$SubscriptionId"
$ServicePrincipalCred=New-Object PScredential($NewWebApp.ApplicationId,($ApplicationSecret|ConvertTo-SecureString -AsPlainText -Force))
Add-AzureRmAccount -Credential $ServicePrincipalCred -TenantId $TenantId -ServicePrincipal

For those that stuck around, let’s take a look at obtaining JWT(s), inspecting them, and putting them to use.
I added a method for decoding the tokens, so we will have a look at the access token. A JWT is composed of a header, payload, and signature. I will leave explaining the claims within the payload to identity experts.
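If you want to peek at the payload without the module, the following sketch shows the general idea, assuming $Token holds the raw encoded access token string; the module's ConvertFrom-EncodedJWT wraps the same base64url decoding:
#Rough sketch: decode the JWT payload by hand ($Token is assumed to be the raw encoded token)
$Payload=$Token.Split('.')[1].Replace('-','+').Replace('_','/')
switch ($Payload.Length % 4) { 2 {$Payload+='=='} 3 {$Payload+='='} }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($Payload))|ConvertFrom-Json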

Now that we have a token, let's use it for something useful; in this case we will ask Azure (ARM) for our associated subscriptions.
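A minimal sketch of that call, again assuming $Token holds the raw access token and that the 2015-01-01 api-version is still accepted for listing subscriptions:
#List the subscriptions visible to the token bearer via ARM
$Headers=@{Authorization="Bearer $Token"}
$Subscriptions=Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2015-01-01' -Headers $Headers
$Subscriptions.value|Select-Object subscriptionId,displayName,state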

Examining the OAuth2 Flow
If you are not interested in what is going on behind the scenes feel free to skip ahead. Each application exposes a standard set of endpoints; I will not discuss the v2.0 endpoint as I do not have enough experience using it. There are two endpoints in particular to make note of, https://login.microsoftonline.com/{tenantid}/oauth2/authorize and https://login.microsoftonline.com/{tenantid}/oauth2/token, where {tenantid} represents the tenant id (guid or domain name), e.g. yourcompany.com, or common for multi-tenant applications. Azure AD obviously supports federation, and directing traffic to the appropriate authorization endpoint is guided by a user realm detection API of various versions at https://login.microsoftonline.com/common/UserRealm. If we inspect the result for a fully managed Azure AD account we see general tenant detail.
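Inspecting that result is a one-liner; the sketch below assumes a UPN in $UserName (defined earlier) and the api-version=1.0 form of the endpoint, and the exact properties returned vary by version:
#Query user realm detection for the account (property names vary by api-version)
Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/UserRealm/$($UserName)?api-version=1.0"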

If we take a look at a federated user we will see a slight difference: the AuthURL property.

This shows us the location of our federated authentication endpoint. The token will actually be requested via a SAML user assertion that is received from an STS, in this case ADFS.
The OAuth specification uses the request parameter collection for token and authorization code responses. A username and password combination can be used to directly request a token in the fully managed, public client scenario.
A POST request can go directly to the Token endpoint with the following parameters:
client_id: The Application Id
resource: The Resource URI to access
grant_type: password
username: The username
password: The password
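As a rough sketch of that request (and roughly what the module's Get-AzureADUserToken does for a fully managed account), reusing the variables from the earlier snippet:
#Sketch of the resource owner password credential grant against the token endpoint
$TokenUri="https://login.microsoftonline.com/$TenantId/oauth2/token"
$Body=@{
    client_id=$PoshClientId
    resource=$Resource
    grant_type='password'
    username=$UserName
    password=$Credential.GetNetworkCredential().Password
}
$TokenResponse=Invoke-RestMethod -Uri $TokenUri -Method Post -Body $Body
$TokenResponse.access_token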
The ADFS/WS-Trust case entails sending a SOAP request to the WS-Trust endpoint to authenticate and using that response to create the assertion that is exchanged for an access token. Through user realm detection we can find the ADFS username/password endpoint. A SOAP envelope can be sent to that endpoint to receive a security token response containing the assertions needed.
A POST request is sent to the Username/Password endpoint for ADFS with the following envelope, with notable values encased in {}:
<s:Envelope xmlns:s='http://www.w3.org/2003/05/soap-envelope'
xmlns:a='http://www.w3.org/2005/08/addressing'
xmlns:u='http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd'>
<s:Header>
<a:Action s:mustUnderstand='1'>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
<a:messageID>urn:uuid:{Unique Identifier for the Request}</a:messageID>
<a:ReplyTo>
<a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
</a:ReplyTo> <!-- The Username Password WSTrust Endpoint -->
<a:To s:mustUnderstand='1'>{Username/Password Uri}</a:To>
<o:Security s:mustUnderstand='1'
xmlns:o='http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd'> <!-- The token length requested -->
<u:Timestamp u:Id='_0'>
<u:Created>{Token Start Time}</u:Created>
<u:Expires>{Token Expiry Time}</u:Expires>
</u:Timestamp> <!-- The username and password used -->
<o:UsernameToken u:Id='uuid-{Unique Identifier for the Request}'>
<o:Username>{UserName to Authenticate}</o:Username>
<o:Password>{Password to Authenticate}</o:Password>
</o:UsernameToken>
</o:Security>
</s:Header>
<s:Body>
<trust:RequestSecurityToken xmlns:trust='http://docs.oasis-open.org/ws-sx/ws-trust/200512'>
<wsp:AppliesTo xmlns:wsp='http://schemas.xmlsoap.org/ws/2004/09/policy'>
<a:EndpointReference>
<a:Address>urn:federation:MicrosoftOnline</a:Address>
</a:EndpointReference>
</wsp:AppliesTo>
<trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
<trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
</trust:RequestSecurityToken>
</s:Body>
</s:Envelope>
The token response is inspected for SAML assertion types (urn:oasis:names:tc:SAML:1.0:assertion or urn:oasis:names:tc:SAML:2.0:assertion) to find the matching token used for the OAuth token request.
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
xmlns:a="http://www.w3.org/2005/08/addressing"
xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<s:Header>
<a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</a:Action>
<o:Security s:mustUnderstand="1"
xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<u:Timestamp u:Id="_0">
<u:Created>2016-01-03T01:34:41.640Z</u:Created>
<u:Expires>2016-01-03T01:39:41.640Z</u:Expires>
</u:Timestamp>
</o:Security>
</s:Header>
<s:Body>
<trust:RequestSecurityTokenResponseCollection xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512"> <!-- Our Desired Token Response -->
<trust:RequestSecurityTokenResponse>
<trust:Lifetime>
<wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2016-01-03T01:34:41.622Z</wsu:Created>
<wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2016-01-03T02:34:41.622Z</wsu:Expires>
</trust:Lifetime>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>urn:federation:MicrosoftOnline</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<trust:RequestedSecurityToken> <!-- The Assertion -->
<saml:Assertion MajorVersion="1" MinorVersion="1" AssertionID="_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b" Issuer="http://adfs.howtopimpacloud.com/adfs/services/trust" IssueInstant="2016-08-03T01:34:41.640Z"
xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
<saml:Conditions NotBefore="2016-01-03T01:34:41.622Z" NotOnOrAfter="2016-01-03T02:34:41.622Z">
<saml:AudienceRestrictionCondition>
<saml:Audience>urn:federation:MicrosoftOnline</saml:Audience>
</saml:AudienceRestrictionCondition>
</saml:Conditions>
<saml:AttributeStatement>
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">130WEAH65kG8zfGrZFNlBQ==</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Attribute AttributeName="UPN" AttributeNamespace="http://schemas.xmlsoap.org/claims">
<saml:AttributeValue>chris@howtopimpacloud.com</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute AttributeName="ImmutableID" AttributeNamespace="http://schemas.microsoft.com/LiveID/Federation/2008/05">
<saml:AttributeValue>130WEAH65kG8zfGrZEFlBQ==</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>
<saml:AuthenticationStatement AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password" AuthenticationInstant="2016-08-03T01:34:41.607Z">
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">130WEAH65kG8sfGrZENlBQ==</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
</saml:AuthenticationStatement>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
<ds:Reference URI="#_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
<ds:DigestValue>itvzbQhlzA8CIZsMneHVR15FJlY=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>gBCGUmhQrJxVpCxVsy2L1qh1kMklVVMoILvYJ5a8NOlezNUx3JNlEP7wZ389uxumP3sL7waKYfNUyVjmEpPkpqxdxrxVu5h1BDBK9WqzOICnFkt6JPx42+cyAhj3T7Nudeg8CP5A9ewRCLZu2jVd/JEHXQ8TvELH56oD5RUldzm0seb8ruxbaMKDjYFuE7X9U5sCMMuglU3WZDC3v6aqmUxpSd9Kelhddleu33XEBv7CQNw84JCud3B+CC7dUwtGxwv11Mk/P0t1fGbfs+I6aSMTecKq9YmscqP9tB8ZouD42jhjhYysOQSdulStmUi6gVzQz+c2l2taa5Amd+JCPg==</ds:SignatureValue>
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<X509Data>
<X509Certificate>MIIC4DCDAcigAwIBAgIQaYQ6QyYqcrBBmOHSGy0E1DANBgkqhkiG9w0BAQsFADArMSkwJwYDVQQDEyBBREZTIFNpZ25pbmcgLSBhZGZzLmNpLmF2YWhjLmNvbTAgFw0xNjA2MDQwNjA4MDdaGA8yMTE2MDUxMTA2MDgwN1owKzEpMCcGA1UEAxMgQURGUyBTaWduaW5nIC0gYWRmcy5jaS5hdmFoYy5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDH9J6/oWYAR8Y98QnacNouKyIBdtZbosEz0HyJVyrxVqKq2AsPvCEO3WFm9Gmt/xQN9PuLidZpgICAe8Ukuv4h/NldgmgtD64mObFNuEM5pzAPRXUv6FWlVE4fnUpIiD1gC0bbQ7Tzv/cVgfUChCDpFu3ePDTs/tv07ee22jXtoyT3N7tsbIX47xBMKgF9ItN9Oyqi0JyQHZghVQ1ebNOMH3/zNdl0WcZ+Pl+osD3iufoH6H+qC9XY09B5YOWy8fJoqf+HFeSWZCHH5vJJfsPTsSilvLHCpMGlrMFaTBKqmv+m9Z3FtbzOcnKHS5PJVAymqLctkH+HbFzaDblaSRhhAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAFB0E2Cj+O24aPM61JsCXLIAB28q4h4qLxMwV+ypYjFxxcQ5GzgqaPJ7BARCnW1gm3PyvNfUut9RYrT9wTJlBVY9WDBoX33jsS87riMj+JONXJ7lG/zAozxs0xIiW+PNlFdOt7xyvYstrFgPJS1E05jhiZ2PR8MS20uSlMNkVPinpz4seyyMQeM+1GbpbDE1EwwtEVKgatJN7t6nAn9mw8cHIk1et7CYOGeWCnMA9EljzNiD8wEwsG51aKfuvGrPK8Q8N/G89SPgstpe0Te5+EtWT6latXfpCwdNWxvinH49SKKa25l1VoLLNwKiQF6vK1Iw0F7dP7QkO5YdE7/MTDU=</X509Certificate>
</X509Data>
</KeyInfo>
</ds:Signature>
</saml:Assertion>
</trust:RequestedSecurityToken>
<trust:RequestedAttachedReference>
<o:SecurityTokenReference k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1"
xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
xmlns:k="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<o:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b</o:KeyIdentifier>
</o:SecurityTokenReference>
</trust:RequestedAttachedReference>
<trust:RequestedUnattachedReference>
<o:SecurityTokenReference k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1"
xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
xmlns:k="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<o:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b</o:KeyIdentifier>
</o:SecurityTokenReference>
</trust:RequestedUnattachedReference>
<trust:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</trust:TokenType>
<trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
<trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
</trust:RequestSecurityTokenResponse>
</trust:RequestSecurityTokenResponseCollection>
</s:Body>
</s:Envelope>
A POST request is sent to the Token endpoint with the following parameters:
client_id: The Application Id
resource: The Resource URI to access
assertion: The base64 encoded SAML token
grant_type: urn:ietf:params:oauth:grant-type:saml1_1-bearer or urn:ietf:params:oauth:grant-type:saml2-bearer
scope: openid
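A rough sketch of that exchange, assuming $SamlAssertion holds the raw saml:Assertion XML pulled from the security token response above and reusing $PoshClientId, $Resource, and $TenantId from the earlier snippets:
#Exchange the SAML 1.1 assertion returned by ADFS for an OAuth access token
$AssertionBytes=[System.Text.Encoding]::UTF8.GetBytes($SamlAssertion)
$Body=@{
    client_id=$PoshClientId
    resource=$Resource
    assertion=[System.Convert]::ToBase64String($AssertionBytes)
    grant_type='urn:ietf:params:oauth:grant-type:saml1_1-bearer'
    scope='openid'
}
Invoke-RestMethod -Uri "https://login.microsoftonline.com/$TenantId/oauth2/token" -Method Post -Body $Body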
A GET request is sent to the Authorize endpoint with some similar query parameters:
client_id: The Application Id
redirect_uri: The location within the application to handle the authorization code
response_type: code
prompt: login, consent, or admin_consent
scope: optional scope for access (app uri or openid scope)
The endpoint should redirect you to the appropriate login screen via user realm detection. Once the user login is completed, the code is added to the redirect address as either query parameters (default) or a form POST. Once the code is retrieved it can be exchanged for a token. A POST request is sent to the Token endpoint as demonstrated before with some slightly different parameters:
client_id: The Application Id
resource: The Resource URI to access
code: The authorization code
grant_type: authorization_code
scope: previous scope
client_secret: required if confidential client
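A rough sketch of the redemption, assuming $ClientId, $RedirectUri, and $AuthCode hold the application id, the redirect URI the code was issued to, and the authorization code (and reusing $Resource and $TenantId); note that Azure AD expects the redirect_uri to be repeated on the exchange:
#Exchange an authorization code for an access token
$Body=@{
    client_id=$ClientId
    resource=$Resource
    code=$AuthCode
    grant_type='authorization_code'
    redirect_uri=$RedirectUri
}
Invoke-RestMethod -Uri "https://login.microsoftonline.com/$TenantId/oauth2/token" -Method Post -Body $Body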
Tying it All Together
To try to show some value for your reading time, let's explore how this can be used as the solutions you support and deploy become more tightly integrated with the Microsoft cloud. We'll start by creating a new Native application in the legacy portal.


I used https://itdoesnotmatter here, but you might as well follow the guidance of using urn:ietf:wg:oauth:2.0:oob. We will now grant permissions to Azure Active Directory and Azure Service Management (for ARM too).



I will avoid discussing configuring the application to be multi-tenant, as the processes I outline are identical; it is simply a matter of the targeted tenant. You should end up with something looking like this.

Let's now try to go get a token for our new application and put it to use. This should look exactly the same as retrieving the previous token.
$AuthResult=Get-AzureADUserToken -Resource $Resource -ClientId $NewClientId -Credential $Credential -TenantId $TenantId

Epic failure! Unfortunately we run into a common annoyance: the application must be consented to interactively. I do not know of any tooling that exists to make this easy, so I added a function to make it a little easier; it supports an AdminConsent switch to approve the application for all users within the tenant. We can call it and step through the consent process to receive an authorization code.
$AuthCode=Approve-AzureADApplication -ClientId $NewClientId -RedirectUri 'https://itdoesnotmatter/' -TenantId sendthewolf.com -AdminConsent


Once the authorization code is obtained it can be exchanged for a token, for which I provided another function. That token can now be used in the exact same manner as the Azure Cmdlet application.
$TokenResult=Get-AzureADAccessTokenFromCode 'https://management.core.windows.net/' -ClientId $NewClientId -RedirectUri 'https://itdoesnotmatter/' -TenantId sendthewolf.com -AuthorizationCode $AuthCode

If you want to handle some Azure Active Directory objects, you can target a different audience and execute actions appropriate to the account's privilege level. In the following example we will create a new user.
$GraphBaseUri="https://graph.windows.net/"
$GraphUriBuilder=New-Object System.UriBuilder($GraphBaseUri)
$GraphUriBuilder.Path="$TenantId/users"
$GraphUriBuilder.Query="api-version=1.6"
$NewUserJSON=@"
{
"accountEnabled": true,
"displayName": "Johnny Law",
"mailNickName" : "thelaw",
"passwordProfile": {
"password": "Password1234!",
"forceChangePasswordNextLogin": false
},
"userPrincipalName": "johhny.law@$TenantId"
}
"@
$AuthResult=Get-AzureADUserToken -Resource $GraphBaseUri -ClientId $NewClientId -Credential $Credential -TenantId $TenantId
$AuthHeaders=@{Authorization="Bearer $($AuthResult.access_token)"}
$NewUser=Invoke-RestMethod -Uri $GraphUriBuilder.Uri -Method Post -Headers $AuthHeaders -Body $NewUserJSON -ContentType "application/json"
If we want to continue the “fun” with Office 365, we can apply the exact same approach with the Office 365 SharePoint Online application permissions. In the interest of moving along, and with no regard for constraining access, we will configure the permissions in the following manner.

We’ll now do some querying of the Office 365 SharePoint video API with some more script.
$SharepointUri='https://yourdomain.sharepoint.com/'
$SpUriBuilder=New-Object System.UriBuilder($SharepointUri)
$SpUriBuilder.Path="_api/VideoService.Discover"
$AuthResult=Get-AzureADUserToken -Resource $SharepointUri -ClientId $NewClientId -Credential $Credential
$Headers=@{Authorization="Bearer $($AuthResult.access_token)";Accept="application/json";}
$VideoDisco=Invoke-RestMethod -Uri $SpUriBuilder.Uri -Headers $Headers
$VideoDisco|Format-List
$VideoChannelId="306488ae-5562-4d3e-a19f-fdb367928b96"
$VideoPortalUrl=$VideoDisco.VideoPortalUrl
$ChannelUrlBuilder=New-Object System.UriBuilder($VideoPortalUrl)
$ChannelUrlBuilder.Path+="/_api/VideoService/Channels"
$ChannelOData=Invoke-RestMethod -Uri $ChannelUrlBuilder.Uri -Headers $Headers
$ChannelRoot=$ChannelUrlBuilder.Path
foreach ($Channel in $ChannelOData.Value)
{
$VideoUriBuilder=New-Object System.UriBuilder($Channel.'odata.id')
$VideoUriBuilder.Path+="/Videos"
Invoke-RestMethod -Uri $VideoUriBuilder.Uri -Headers $Headers|Select-Object -ExpandProperty value
}
We should see some output that looks like this:

I’ve had Enough! Please Just Show me the Code.
For those who have endured or even skipped straight here, I present the following module for any use you dare apply. The standard liability waiver applies and it is presented primarily for educational purposes. It came from a need to access the assortment of Microsoft cloud APIs in environments where we could not always ensure the plethora of correct Cmdlets were installed. Initially, being a .Net guy, I just wrapped standard use cases around ADAL .Net. I really wanted to make sure that I understood OAuth and OpenId Connect authorization flows as they relate to Azure Active Directory. The entire theme of this lengthy tome is to emphasize the importance of having a relatively advanced understanding of these concepts. Regardless of your milieu, if it has a significant Microsoft component, the demand to both integrate and support the integration(s) of numerous offerings will only grow larger. The module is primarily targeted at the Native Client application type; however, there is support for the client secret and implicit authorization flows. There are also a few utility methods that are exposed as they may have some diagnostic or other use. The module exposes the following methods, all of which support Get-Help:
Approve-AzureADApplication: Approves an Azure AD application interactively and returns the authorization code
ConvertFrom-EncodedJWT: Converts an encoded JWT to an object representation
Get-AzureADAccessTokenFromCode: Retrieves an access token from a consent authorization code
Get-AzureADClientToken: Retrieves an access token as an OAuth confidential client
Get-AzureADUserToken: Retrieves an access token as an OAuth public client
Get-AzureADImplicitFlowToken: Retrieves an access token interactively for a web application with the OAuth implicit flow enabled
Get-AzureADOpenIdConfiguration: Retrieves the OpenId Connect configuration for the specified application
Get-AzureADUserRealm: Retrieves the aggregate user realm data for the specified user principal name(s)
Get-WSTrustUserRealmDetails: Retrieves the WSFederation details for a given user principal name
Get it here: Azure AD Module
I hope you find it useful and remember not to fear doing things the hard way every so often.
Exposing Azure Stack TP1 to the Public Internet
My previous two posts were relatively trivial modifications to the constraints and parameters exposed by the Azure Stack TP1 PoC installer. There was a contrived underlying narrative leading us to here. As TP1 made it into the wild, my team was tasked with being able to rapidly stand up PoC environments. We eventually landed on an LTI (Lite Touch Installation) type deployment solution which allows us to deploy and re-deploy fresh PoC environments within a couple of hours. This requirement, as expected, evolved into exposing the public surfaces of Azure Stack (ARM, the Portal, Storage Accounts, etc.). I’ll attempt to outline the steps required in as concise a manner as possible (hat-tip to AzureStack.eu). The way Azure Stack is currently glued together carries the expectation that the Active Directory DNS name of your deployment will match the DNS suffix used for the publicly accessible surfaces (e.g. portal.azurestack.local). The primary reasons for this are certificates and DNS resolution by the service fabric and clients alike. I’ll expound on this a little later. It was relatively simple to automate exposing a new environment by allowing configuration to follow a consistent convention. As a practical example, we have a number of blade chassis within our datacenters. In each chassis where we host PoC environments, each blade is on a different subnet with a specific set of IP addresses that will be exposed to the internet (if needed) via 1-to-1 NAT.
Each blade is on a different VLAN and subnet for a reason. Azure Stack's use of VXLAN to talk between hosts in a scale-out configuration means that if you try to stand up two hosts on the same layer 2 segment, you will have IP address conflicts between them on the private Azure Stack range. There are ways in the config to change the VXLAN IDs and avoid this, but it's simpler to just use separate VLANs. Each blade is on a dedicated private space VLAN, and is then NAT'd to the public internet through a common router/firewall with 1:1 NATs on the appropriate addresses. This avoids having to put the blade directly on the internet, and a static NAT avoids any sort of port address translation.
Prerequisites
You will need at least 5 public IP addresses and access to manage the DNS zone for the domain name you will be using. Our domain used for the subsequent content will be yourdomain.com (replace with your domain name), and I will assume that the ‘Datacenter’ network (considered the public network for Stack) for your Azure Stack installation will apply 1-to-1 NAT to the appropriate IP addresses exposed to the public internet.
We’ll use a hypothetical Internal IP address set of 172.20.10.33-172.20.10.39 and External IP address set of 38.x.x.30-38.x.x.34 for this example. The process I will describe will require static IP addressing on the NAT public interface. These options are required parameters of the installation along with the desired custom Active Directory DNS Domain Name.
Networking
Azure Stack Virtual IP Addresses
| Role | Internal VIP | ‘Datacenter’ Network IP | External IP |
| Portal and API | 192.168.133.74 | 172.20.10.33 | 38.x.x.30 |
| Gallery | 192.168.133.75 | 172.20.10.34 | 38.x.x.31 |
| CRP, NRP, SRP | 192.168.133.31 | 172.20.10.35 | 38.x.x.32 |
| Storage Account | 192.168.133.72 | 172.20.10.36 | 38.x.x.33 |
| Extensions | 192.168.133.71 | 172.20.10.37 | 38.x.x.34 |
Azure Stack VM IP Addresses
| Role | Internal IP | ‘Datacenter’ Network IP |
| ADVM | 192.168.100.2 | 172.20.10.38 |
| ClientVM | 192.168.200.101 | 172.20.10.39 |
DNS
We will need to start a new deployment with our desired domain name. Please see my previous post Changing Azure Stack’s Active Directory DNS Domain to something other than AzureStack.local on how to make this an installation parameter. Please note, modifications to the additional resource provider (e.g. Sql and Web Apps) installers will also be required. We just changed instances of azurestack.local to $env:USERDNSDOMAIN as they are primarily executed within the Azure Stack Active Directory context.
To be honest, I probably should have addressed this in the previous post. If the Active Directory DNS domain name you wish to use will be resolvable by the host while deploying the stack I recommend you simply add an entry to the HOSTS file prior to initiating installation. We ended up adding this to our deployment process as we wanted to leave public DNS entries in place for certain environments. In this version we know the resultant IP address of the domain controller that will be created.
Add an entry like such:
192.168.100.2 yourdomain.com
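If you would rather script the HOSTS entry than edit the file by hand, a minimal sketch (run elevated on the PoC host, using the values above) looks like this:
[powershell]
$HostsPath=Join-Path $env:SystemRoot 'System32\drivers\etc\hosts'
Add-Content -Path $HostsPath -Value "192.168.100.2`tyourdomain.com"
[/powershell]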
You will need to add a number of A records within your public DNS zone.
| A Record | IP Address |
| api.yourdomain.com | 38.x.x.30 |
| portal.yourdomain.com | 38.x.x.30 |
| gallery.yourdomain.com | 38.x.x.31 |
| srp.yourdomain.com | 38.x.x.32 |
| nrp.yourdomain.com | 38.x.x.32 |
| crp.yourdomain.com | 38.x.x.32 |
| xrp.yourdomain.com | 38.x.x.32 |
| *.compute.yourdomain.com | 38.x.x.32 |
| *.storage.yourdomain.com | 38.x.x.32 |
| *.network.yourdomain.com | 38.x.x.32 |
| *.blob.yourdomain.com | 38.x.x.33 |
| *.queue.yourdomain.com | 38.x.x.33 |
| *.table.yourdomain.com | 38.x.x.33 |
| *.adminextensions.yourdomain.com | 38.x.x.34 |
| *.tenantextensions.yourdomain.com | 38.x.x.34 |
Installation
The NAT IP addressing and Active Directory DNS domain name need to be supplied to the PoC installation. A rough script running in the context of the installation folder will look something like this:
[powershell]
$ADDomainName="yourdomain.com" $AADTenant="youraadtenant.com" $AADUserName="yourStackUser@$AADTenant" $AADPassword=ConvertTo-SecureString -String "ThisIsYourAADPassword!" -AsPlainText -Force $AdminPassword=ConvertTo-SecureString -String "ThisIsYourStackPassword!" -AsPlainText -Force $NATVMStaticIP="172.20.10.30/24" $NATVMStaticGateway="172.20.10.1" $AADCredential=New-Object pscredential($AADUserName,$AADPassword) $DeployScript= Join-Path $PSScriptRoot "DeployAzureStack.ps1" &$DeployScript -ADDomainName $ADDomainName -AADTenant $AADTenant ` -AdminPassword $AdminPassword -AADCredential $AADCredential ` -NATVMStaticIP $NATVMStaticIP -NATVMStaticGateway $NATVMStaticGateway -Verbose -Force
[/powershell]
Configuring Azure Stack NAT
The public internet access, both ingress and egress, is all controlled via Routing and Remote Access and networking within the NATVM virtual machine. There are effectively three steps you will need to undertake that will require internal VIP addresses within the stack and the public IP addresses you wish to associate with your deployment. This process can be completed through the UI, however, given the nature of our lab, doing this in a scriptable fashion was an imperative. If you must do this by hand, start by opening the Routing and Remote Access MMC on the NATVM in order to modify the public NAT settings.
Create a Public IP Address Pool within the RRAS NAT configuration.
Create IP Address reservations for all of the IP addresses for which the public DNS entries were created within the RRAS NAT configuration. Note incoming sessions are required for all the Azure Stack public surfaces.
Associate the additional Public IP addresses with the NATVM Public Interface in the NIC settings. (This is a key step that isn't covered elsewhere)
I told you that we needed to script this and unfortunately, as far as I can tell, the easiest way is still through the network shell netsh.exe. The parameters for our script are the same for every deployment and a script file for netsh is dynamically created. If you already have a large amount of drool on your keyboard from reading this tome, I'll cut to the chase and give you a sample to drive netsh. In the following example Ethernet 2 is the 'Public' adapter on the NATVM and our public IP addresses are in the 172.20.27.0/24 network and will be mapped according to the previous table.
[text]
#Set the IP Addresses
pushd interface ipv4
add address name="Ethernet 2" address=172.20.27.38 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.31 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.33 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.35 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.34 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.37 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.36 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.32 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.39 mask=255.255.255.0
popd
#Configure NAT
pushd routing ip nat
#NAT Address Pool
add addressrange name="Ethernet 2" start=172.20.27.31 end=172.20.27.39 mask=255.255.255.0
#NAT Pool Reservations
add addressmapping name="Ethernet 2" public=172.20.27.38 private=192.168.100.2 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.31 private=192.168.5.5 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.33 private=192.168.133.74 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.35 private=192.168.133.31 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.34 private=192.168.133.75 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.37 private=192.168.133.71 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.36 private=192.168.133.72 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.32 private=192.168.200.101 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.39 private=192.168.100.12 inboundsessions=disable
popd
[/text]
If I haven’t lost you already and you would like to achieve this with PowerShell, the following script uses PowerShell Direct to configure NAT in the previously demonstrated manner, taking a PSCredential (an Administrator on the NATVM) and an array of very simple objects representing the (for now) constant NAT mappings.
They can easily be stored in JSON (and I insist that is pronounced like the given name Jason) for easy serialization/de-serialization.
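A hypothetical mappings file might look like the following; the property names mirror what the script block reads (name, publicIP as the last octet of the public address, internalIP, allowInbound) and the values are just a subset drawn from the earlier tables:
[powershell]
$MappingJson=@"
[
    {"name":"Portal and API","publicIP":33,"internalIP":"192.168.133.74","allowInbound":true},
    {"name":"Gallery","publicIP":34,"internalIP":"192.168.133.75","allowInbound":true},
    {"name":"ADVM","publicIP":38,"internalIP":"192.168.100.2","allowInbound":false}
]
"@
$Mappings=$MappingJson|ConvertFrom-Json
[/powershell]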
[powershell]
#requires -Modules NetAdapter,NetTCPIP -Version 5.0
[CmdletBinding()]
param
(
    [Parameter(Mandatory=$true)][Object[]]$Mappings,
    [Parameter(Mandatory=$false)][Int32]$PoolSize=8,
    [Parameter(Mandatory=$false)][String]$NatVmInternalIP="192.168.200.1",
    [Parameter(Mandatory=$true)][pscredential]$Credential,
    [Parameter(Mandatory=$false)][String]$NatVmName="NATVM"
)
$ActivityId=33
$NatSetupScriptBlock={
[CmdletBinding()]
param()
$VerbosePreference=$Using:VerbosePreference
$ParentId=$Using:ActivityId
$NatVmInternalIP=$Using:NatVmInternalIP
$Mappings=@($Using:Mappings)
$AddressPoolSize=$Using:PoolSize
#region Helper Methods
<# .SYNOPSIS Converts a CIDR or Prefix Length as string to a Subnet Mask .PARAMETER CIDR The CIDR or Prefix Length to convert #> Function ConvertTo-SubnetMaskFromCIDR { [CmdletBinding()] [OutputType([String])] param ( [Parameter(Mandatory=$true,ValueFromPipeline=$true)] [String] $CIDR ) $NetMask="0.0.0.0" $CIDRLength=[System.Convert]::ToInt32(($CIDR.Split('/')|Select-Object -Last 1).Trim()) Write-Debug "Converting Prefix Length $CIDRLength from input $CIDR" switch ($CIDRLength) { {$_ -gt 0 -and $_ -lt 8} { $binary="$( "1" * $CIDRLength)".PadRight(8,"0") $o1 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "$o1.0.0.0" break } 8 {$NetMask="255.0.0.0"} {$_ -gt 8 -and $_ -lt 16} { $binary="$( "1" * ($CIDRLength - 8))".PadRight(8,"0") $o2 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "255.$o2.0.0" break } 16 {$NetMask="255.255.0.0"} {$_ -gt 16 -and $_ -lt 24} { $binary="$("1" * ($CIDRLength - 16))".PadRight(8,"0") $o3 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "255.255.$o3.0" break } 24 {$NetMask="255.255.255.0"} {$_ -gt 24 -and $_ -lt 32} { $binary="$("1" * ($CIDRLength - 24))".PadRight(8,"0") $o4 = [convert]::ToInt32($binary.Trim(),2) $NetMask= "255.255.255.$o4" break } 32 {$NetMask="255.255.255.255"} } return $NetMask }
Function Get-ExternalNetAdapterInfo { [CmdletBinding()] param ( [Parameter(Mandatory=$true)] [String] $ExcludeIP )
$AdapterInfos=@() $NetAdapters=Get-NetAdapter -Physical foreach ($NetAdapter in $NetAdapters) { if($NetAdapter.Status -eq 'Up') { $AdapIpAddress=$NetAdapter|Get-NetIPAddress -AddressFamily IPv4 $AdapterInfo=New-Object PSObject -Property @{ Adapter=$NetAdapter; IpAddress=$AdapIpAddress; } $AdapterInfos+=$AdapterInfo } }
$DesiredAdapter=$AdapterInfos|Where-Object{$_.IPAddress.IPAddress -ne $ExcludeIP} return $DesiredAdapter }
Function Get-NetshScriptContent { [CmdletBinding()] param ( [Parameter(Mandatory=$true)] [Object] $ExternalAdapter, [Parameter(Mandatory=$true)] [Object[]] $NatMappings, [Parameter(Mandatory=$true)] [Int32] $PoolSize )
#Retrieve the Network Adapters $ExternalAddress=$ExternalAdapter.IPAddress.IPAddress $ExternalPrefixLength=$ExternalAdapter.IPAddress.PrefixLength $ExternalAdapterName=$ExternalAdapter.Adapter.Name Write-Verbose "External Network Adapter [$ExternalAdapterName] $($ExternalAdapter.Adapter.InterfaceDescription) - $ExternalAddress" $IpPieces=$ExternalAddress.Split(".") $LastOctet=[System.Convert]::ToInt32(($IpPieces|Select-Object -Last 1)) $IpFormat="$([String]::Join(".",$IpPieces[0..2])).{0}" $PublicCIDR="$IpFormat/{1}" -f 0,$ExternalPrefixLength
$AddressPoolStart=$IpFormat -f ($LastOctet + 1) $AddressPoolEnd=$IpFormat -f ($LastOctet + 1 + $PoolSize) $ExternalNetMask=ConvertTo-SubnetMaskFromCIDR -CIDR $PublicCIDR Write-Verbose "Public IP Address Pool Start:$AddressPoolStart End:$AddressPoolEnd Mask:$ExternalNetMask"
$TargetIpEntries=@() $ReservationEntries=@() foreach ($Mapping in $NatMappings) { Write-Verbose "Evaluating Mapping $($Mapping.name)" $TargetPublicIp=$IpFormat -f $Mapping.publicIP $TargetIpEntry="add address name=`"{0}`" address={1} mask={2}" -f $ExternalAdapterName,$TargetPublicIp,$ExternalNetMask $TargetIpEntries+=$TargetIpEntry if($Mapping.allowInbound) { $InboundEnabled="enable" } else { $InboundEnabled="disable" } $ReservationEntry="add addressmapping name=`"{0}`" public={1} private={2} inboundsessions={3}" -f $ExternalAdapterName,$TargetPublicIp,$Mapping.internalIP,$InboundEnabled $ReservationEntries+=$ReservationEntry }
$NetshScriptLines=@() #IP Addresses $NetshScriptLines+="#Set the IP Addresses" $NetshScriptLines+="pushd interface ipv4" $TargetIpEntries|ForEach-Object{$NetshScriptLines+=$_} $NetshScriptLines+="popd" #NAT $NetshScriptLines+="#Configure NAT" $NetshScriptLines+="pushd routing ip nat" $NetshScriptLines+="#NAT Address Pool" $NetshScriptLines+="add addressrange name=`"{0}`" start={1} end={2} mask={3}" -f $ExternalAdapterName,$AddressPoolStart,$AddressPoolEnd,$ExternalNetMask $NetshScriptLines+="#NAT Pool Reservations" $ReservationEntries|ForEach-Object{$NetshScriptLines+=$_} $NetshScriptLines+="popd"
return $NetshScriptLines }
#endregion
#region Execution
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Detecting Public Adapter" -PercentComplete 5 $ExternalNetAdapter=Get-ExternalNetAdapterInfo -ExcludeIP $NatVmInternalIP Write-Progress -Activity "Publishing Azure Stack" ` -Status "External Network Adapter [$($ExternalNetAdapter.Adapter.Name)] $($ExternalNetAdapter.Adapter.InterfaceDescription) - $($ExternalNetAdapter.IPAddress.IPAddress)" -PercentComplete 10 if($ExternalNetAdapter -eq $null) { throw "Unable to resolve the public network adapter!" } Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Creating Network Script" -PercentComplete 25 $NetshScript=Get-NetshScriptContent -ExternalAdapter $ExternalNetAdapter -NatMappings $Mappings -PoolSize $AddressPoolSize #Save the file.. $NetShScriptPath=Join-Path $env:TEMP "configure-nat.txt" $NetShScriptExe= Join-Path $env:SystemRoot 'System32\netsh.exe'
#Run NetSh Set-Content -Path $NetShScriptPath -Value $NetshScript -Force Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Created Network Script $NetShScriptPath" -PercentComplete 50
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Configuring NAT $NetShScriptExe -f $NetShScriptPath" -PercentComplete 70 $NetShProcess=Start-Process -FilePath $NetShScriptExe -ArgumentList "-f $NetShScriptPath" -PassThru -Wait Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Restarting RRAS.." -PercentComplete 90 Restart-Service -Name RemoteAccess Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -PercentComplete 100 -Completed
#endregion
$ConfigureResult=New-Object PSObject -Property @{ NetSHProcess=$NetShProcess; NetSHScript=$NetShScript; } return $ConfigureResult
}
Write-Progress -Id $ActivityId -Activity "Configuring NAT" -Status "Publishing Azure Stack TP1 PoC $NatVmName as $($Credential.UserName)..." -PercentComplete 10
$result=Invoke-Command -ScriptBlock $NatSetupScriptBlock -Credential $Credential -VMName $NatVmName
Write-Progress -Id $ActivityId -Activity "Configuring NAT" -PercentComplete 100 -Completed
return $result
[/powershell]
Certificates
You will need to export the root certificate from the CA for your installation for importing on any clients that will access your deployment. I will leave the use of publicly trusted certificates outside the scope of this post, nor will I address the use of a CA other than the one deployed with Active Directory. The expanse of the aforementioned DNS entries should give you an idea of how many certificates you would need to obtain to avoid this step on your clients. In the case of operating public Azure, this is one place where it must be very nice being a Trusted Root Certificate Authority.
Exporting the root certificate is very simple as the PoC host system is joined to the domain which hosts the Azure Stack CA; both roles reside on the ADVM. To export the root certificate to your desktop, run this simple one-liner in the PowerShell console of your host system (the same command will work from the ClientVM).
[powershell]
Get-ChildItem -Path Cert:\LocalMachine\Root| `
    Where-Object{$_.Subject -like "CN=AzureStackCertificationAuthority*"}| `
    Export-Certificate -FilePath "$env:USERPROFILE\Desktop\$($env:USERDOMAIN)RootCA.cer" -Type CERT
[/powershell]
The process for importing this certificate on your client will vary depending on the OS version; as such I will avoid giving a scripted method.
Right click the previously exported certificate.
Choose Current User for most use-cases.
Select Browse for the appropriate store.
Select Trusted Root Certification Authorities
Confirm the Import Prompt
Enjoying the Results
If we now connect to the portal, we should be prompted to log in via Azure Active Directory, and upon authentication we should be presented with a familiar portal.

You can also connect with the Azure Cmdlets if that is more your style. We'll make a slight modification to the snippet from the post by Mike De Luca How to Connect to Azure Stack via PowerShell.
[powershell]
$ADDomainName="yourdomain.com"
$AADTenant="youraadtenant.com"
$AADUserName="yourStackUser@$AADTenant"
$AADPassword=ConvertTo-SecureString -String "ThisIsYourAADPassword!" -AsPlainText -Force
$AADCredential=New-Object PSCredential($AADUserName,$AADPassword)
$StackEnvironmentName="Azure Stack ($ADDomainName)"
$StackEnvironment=Add-AzureRmEnvironment -Name $StackEnvironmentName `
    -ActiveDirectoryEndpoint ("https://login.windows.net/$AADTenant/") `
    -ActiveDirectoryServiceEndpointResourceId "https://$ADDomainName-api/" `
    -ResourceManagerEndpoint ("https://api.$ADDomainName/") `
    -GalleryEndpoint ("https://gallery.$($ADDomainName):30016/") `
    -GraphEndpoint "https://graph.windows.net/"
Add-AzureRmAccount -Environment $StackEnvironment -Credential $AADCredential
[/powershell]
Your Azure Stack TP1 PoC deployment should now be available to serve all clients from the public internet in an Azure Consistent manner.
[Unsupported]
Changing Azure Stack’s DNS and AD Domain to something other than AzureStack.local
This is another installer modification for the Azure Stack TP1 PoC that unfortunately will require editing more than one file. I find the fact that this edit is necessary puzzling. Once again we will start by mounting MicrosoftAzureStackPOC.vhdx, then move on to PocDeployment\Test-AzureStackDeploymentParameters.ps1 at line 68:
[powershell]$ADDomainName = "AzureStack.local"[/powershell]
Why is this not a parameter? We will also ignore the fact we are editing signed scripts and go ahead and make it one, first deleting that line and subsequently modifying the parameter block (leaving the default as azurestack.local).
[powershell] [CmdletBinding()] Param ( [string] [Parameter(Mandatory = $true)] $PackagePath,
[SecureString] [Parameter(Mandatory = $false)] $AdminPassword,
[PSCredential] [Parameter(Mandatory = $false)] $AADCredential,
[string] [Parameter(Mandatory = $false)] $AADTenant,
[PSCredential] [Parameter(Mandatory = $false)] $TIPServiceAdminCredential,
[PSCredential] [Parameter(Mandatory = $false)] $TIPTenantAdminCredential,
[Parameter(Mandatory = $false)] [Nullable[bool]] $UseAADChina,
[String] [Parameter(Mandatory = $false)] $NATVMStaticIP,
[String] [Parameter(Mandatory = $false)] $NATVMStaticGateway,
[String] [Parameter(Mandatory = $false)] $PublicVLan = $null,
[Parameter(Mandatory = $false)] [string] $ProxyServer,
[Parameter(Mandatory = $false)] [string] $ADDomainName = "AzureStack.local",
[Switch] $Force ) [/powershell]
We will also update yet another hard coded value, this time in PocDeployment\Invoke-DeploymentLogCollection.ps1. Look to line 106 and you will find a line like this:
[powershell] ('PublicIPAddresses','GatewayPools','GateWays','loadBalancers','loadBalancerMuxes','loadBalancerManager/config','networkInterfaces','virtualServers','Servers','credentials','macPools','logicalnetworks','accessControlLists') | % { JSONGet -NetworkControllerRestIP "NCVM.azurestack.local" -path "/$_" -Credential $credential | ConvertTo-Json -Depth 20 > "$destination\NC\$($_ -replace '/','').txt" } [/powershell]
Replace the hard coded azurestack.local value with the existing!!! parameter:
[powershell] ('PublicIPAddresses','GatewayPools','GateWays','loadBalancers','loadBalancerMuxes','loadBalancerManager/config','networkInterfaces','virtualServers','Servers','credentials','macPools','logicalnetworks','accessControlLists') | % { JSONGet -NetworkControllerRestIP "NCVM.$($parameters.ADDomainName)" -path "/$_" -Credential $credential | ConvertTo-Json -Depth 20 > "$destination\NC\$($_ -replace '/','').txt" } [/powershell]
Finally we need to modify the main installer script (in duplicate). DeployAzureStack.ps1 is located both in the root of the Azure Stack TP1 zip file you downloaded and the Installer directory within MicrosoftAzureStackPOC.vhdx. You can modify the file once and copy it to the other location in whatever order you choose. We are going to start by adding a parameter, $ADDomainName, for the Active Directory DNS name (again leaving the default as azurestack.local):
[powershell] [CmdletBinding()] param ( [SecureString] [Parameter(Mandatory = $false)] $AdminPassword,
[PSCredential] [Parameter(Mandatory = $false)] $AADCredential,
[string] [Parameter(Mandatory = $false)] $AADTenant,
[PSCredential] [Parameter(Mandatory = $false)] $TIPServiceAdminCredential,
[PSCredential] [Parameter(Mandatory = $false)] $TIPTenantAdminCredential,
[Parameter(Mandatory = $false)] [Nullable[bool]] $UseAADChina,
[String] [Parameter(Mandatory = $false)] $NATVMStaticIP = $null, #eg: 10.10.10.10/24
[String] [Parameter(Mandatory = $false)] $NATVMStaticGateway = $null, #eg: 10.10.10.1
[String] [Parameter(Mandatory = $false)] $PublicVLan = $null, #eg: 305
[String] [Parameter(Mandatory = $false)] $ProxyServer,
[String] [Parameter(Mandatory=$false)] $ADDomainName="azurestack.local",
[Switch] $Force,
[Switch] $NoAutoReboot ) [/powershell]
Modify line 102 to accommodate the parameter we’ve created in this and Test-AzureStackDeploymentParameters.ps1. The original line will look like this:
[powershell] $Parameters = & "$DeploymentScriptPath\Test-AzureStackDeploymentParameters.ps1" -PackagePath $PSScriptRoot -AdminPassword $AdminPassword -AADCredential $AADCredential -AADTenant $AADTenant -TIPServiceAdminCredential $TIPServiceAdminCredential -TIPTenantAdminCredential $TIPTenantAdminCredential -UseAADChina $UseAADChina -NATVMStaticIP $NATVMStaticIP -NATVMStaticGateway $NATVMStaticGateway -PublicVLan $PublicVLan -ProxyServer $ProxyServer -Force:$Force [/powershell]
Add the ADDomainName parameter:
[powershell] $Parameters = & "$DeploymentScriptPath\Test-AzureStackDeploymentParameters.ps1" -PackagePath $PSScriptRoot -AdminPassword $AdminPassword -AADCredential $AADCredential -AADTenant $AADTenant -TIPServiceAdminCredential $TIPServiceAdminCredential -TIPTenantAdminCredential $TIPTenantAdminCredential -UseAADChina $UseAADChina -NATVMStaticIP $NATVMStaticIP -NATVMStaticGateway $NATVMStaticGateway -ADDomainName $ADDomainName -PublicVLan $PublicVLan -ProxyServer $ProxyServer -Force:$Force [/powershell]
Unmount the VHD and install to a new domain if you so desire.
[Unsupported]
Modifying Azure Stack POC Install Constraints
Azure Stack’s hardware requirements, specifically RAM and storage, may prevent one from being able to install on available kit. This is a pretty well-known “hack”; however, this is enterprise IT, so redundancy is a good thing. The constraints are pretty simple to modify for your particular situation.
Once again, we’ll start by mounting the MicrosoftAzureStackPOC.vhdx (I won’t bother covering how to do that).
All of the hardware constraints are enforced through functions in PocDeployment\Invoke-AzureStackDeploymentPrecheck.ps1.
If you look at line 62 within the function CheckDisks you will find a statement that looks like this:
[powershell]$physicalDisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true -and ($_.BusType -eq 'RAID' -or $_.BusType -eq 'SAS' -or $_.BusType -eq 'SATA') }[/powershell]
You can choose to add another allowed bus type, e.g. ISCSI, or if you are really adventurous just remove the entire AND clause.
[powershell]$physicalDisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true -and ($_.BusType -eq 'RAID' -or $_.BusType -eq 'SAS' -or $_.BusType -eq 'SATA' -or $_.BusType -eq 'ISCSI') }[/powershell]
or
[powershell]$physicalDisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true}[/powershell]
To modify the RAM constraints look further down; at line 98 within CheckRAM you will find a very simple test:
[powershell]
if ($totalMemoryInGB -lt 96) {
    throw "Check system memory requirement failed. At least 96GB physical memory is required."
}
[/powershell]
Modify the value as appropriate:
[powershell]
if ($totalMemoryInGB -lt 64) {
    throw "Check system memory requirement failed. At least 64GB physical memory is required."
}
[/powershell]
Unmount the VHD, and you are done. It is worth reiterating that these constraints exist for a reason, and in adjusting or bypassing them you should set your expectations for performance and/or stability accordingly.
[Unsupported]
Updating the version of the Azure Cmdlets on the Azure Stack ClientVM
This is a trivial fix for an annoying extra post-installation step. The version of the Azure Cmdlets installed to the ClientVM by the TP1 installer is not the “approved” version for Azure Stack. In order to experience any success with the additional Resource Providers you need to update the Cmdlets to version 1.2.1 (Download Here).
Once you have obtained the updated MSI, mount the Installer VHD.

Once mounted navigate to the Dependencies folder.

Rename the MSI you downloaded earlier to azure-powershell.msi and copy it over the existing file.
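If you prefer to script that step, a minimal sketch (assuming the MSI was saved to your Downloads folder and the VHD is mounted as D:) would be:
[powershell]
$NewMsi=Join-Path $env:USERPROFILE 'Downloads\azure-powershell.1.2.1.msi'   #assumed download location and file name
Copy-Item -Path $NewMsi -Destination 'D:\Dependencies\azure-powershell.msi' -Force
[/powershell]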
Unmount the image and copy the updated .vhdx to the install source you use.