#Unsupported

Azure Stack TP3 Stability (Reboot the XRP VM)

Stack_logo.png

If you have been deploying and using Azure Stack TP3, you may have noticed that after a few days the portal starts responding more slowly, and in my experience after closer to a week it stops working altogether. How quickly this happens will vary depending on what you're doing and on your hardware. Looking at the guest VMs, you will notice that the XRP VM is consuming all of its memory. While you could give the machine more memory, that interferes with the expected infrastructure sizing and, eventually, it will consume whatever memory you give it. This will hopefully be addressed soon; in the meantime, a simple workaround is to reboot the XRP VM. And why do anything manually when you can script it? This very simple script creates a scheduled task that runs on Sunday night at 1 AM. The task stops and starts the XRP VM and then triggers the existing ColdStartMachine task, which makes sure all the Azure Stack services are running.

[powershell]
#Run on the host server as the AzureStackAdmin user
$AzureStackAdminPassword = 'YOURPASSWORD'

$Action = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument '-command "Get-VM MAS-Xrp01 | Stop-VM -Force; Get-VM MAS-Xrp01 | Start-VM; Sleep 180; Stop-ScheduledTask ColdStartMachine; Start-ScheduledTask ColdStartMachine"'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 1am
Register-ScheduledTask -Action $Action -Trigger $Trigger -TaskName "XRPReboot" -Description "Restart XRP VM weekly" -RunLevel Highest -User "$env:USERDOMAIN\$env:USERNAME" -Password $AzureStackAdminPassword
[/powershell]
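To sanity-check the workaround without waiting for Sunday, the task can be listed and kicked off on demand. This is just a quick verification sketch and not part of the original script:

[powershell]
# Quick verification sketch (not part of the original script): confirm the task registered,
# run it once on demand, and check the XRP VM came back up.
Get-ScheduledTask -TaskName 'XRPReboot' | Get-ScheduledTaskInfo
Start-ScheduledTask -TaskName 'XRPReboot'
Get-VM MAS-Xrp01 | Select-Object Name, State, MemoryAssigned
[/powershell]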


Publishing Microsoft Azure Stack TP3 on the Internet via NAT

TP3Stack.png

As you may know, Azure Stack TP3 is here. This post outlines how to publish your Azure Stack instance on the internet, using NAT rules to map your external IP addresses to the stack's internal "external" IPs. Our group published an article on how to do this for TP2; this is the updated version for TP3.

Starting Point

This article assumes you have a host ready for installation, with the TP3 VHDX loaded onto the host, and that you are familiar with the Azure Stack installation process. The code in this article is extracted from a larger process but should be enough to get you through end to end.

Azure Stack Installation

First things first: I like to install a few other tools to help me edit code and access the portal. This is not required.

[powershell]
iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
choco install notepadplusplus -y
choco install googlechrome -y --ignore-checksums
choco install visualstudiocode -y
choco install beyondcompare -y
choco install baretail -y
choco install powergui -y --ignore-checksums
[/powershell]

Next, you want to open up this file C:\clouddeployment\setup\DeploySingleNode.ps1

Editing these values allows you to use different internal naming and a different external namespace. As you can see, the ExternalDomainFQDN is made up of the region and the external suffix.

This is a lot easier now that the domain parameters are all consumed from the same place; there is no need to hunt down domain names in individual files.

[powershell]
$AdminPassword = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$AadAdminPass = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$aadCred = New-Object PSCredential('stackadmin@poc.xxxxx.com', $AadAdminPass)

. c:\clouddeployment\setup\InstallAzureStackPOC.ps1 -AzureEnvironment "AzureCloud" `
    -AdminPassword $AdminPassword `
    -PublicVLanId 97 `
    -NATIPv4Subnet '172.20.51.0/24' `
    -NATIPv4Address '172.20.51.51' `
    -NATIPv4DefaultGateway '172.20.51.1' `
    -InfraAzureDirectoryTenantAdminCredential $aadCred `
    -InfraAzureDirectoryTenantName 'poc.xxxxx.com' `
    -EnvironmentDNS '172.20.11.21'
[/powershell]

Remember to have only one NIC enabled. We also have slightly less than the minimum space required for the OS disk, so we simply edit the XML file at C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml and change the value of the Role.PrivateInfo.ValidationRequirements.MinimumSizeOfSystemDiskGB node. The rest is over to the TP3 installation; so far our experience is that TP3 installs much more reliably, needing just the occasional rerun using

[powershell]InstallAzureStackPOC.ps1 -rerun[/powershell]
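As an aside, the OneNodeRole.xml edit for the OS disk size mentioned above can also be scripted. A minimal sketch, assuming MinimumSizeOfSystemDiskGB is a plain element at the node path quoted earlier and that 130 GB is the value you want; verify against your build before using it:

[powershell]
# Hedged sketch: lower the OS-disk validation threshold programmatically.
# The node path comes from the text above; the value 130 is just an example.
$oneNodePath = 'C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml'
[xml]$oneNode = Get-Content $oneNodePath
$oneNode.Role.PrivateInfo.ValidationRequirements.MinimumSizeOfSystemDiskGB = '130'
$oneNode.Save($oneNodePath)
[/powershell]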

Once the installation completes, check that you can access the portal. I use Chrome, as it asks far fewer questions when confirming the portal is running. We use a JSON file, defined by a larger automation script, to deploy these NAT rules. Here I will simply share a portion of the resulting JSON file, which is saved to C:\CloudDeployment\Setup\StackRecord.json.

[xml] { "Region": "SV5", "ExternalDomain": "AS01.poc.xxxxx.com", "nr_Table": "192.168.102.2:80,443:172.20.51.133:3x.7x.xx5.133", "nr_Queue": "192.168.102.3:80,443:172.20.51.134:3x.7x.xx5.134", "nr_blob": "192.168.102.4:80,443:172.20.51.135:3x.7x.xx5.135", "nr_adfs": "192.168.102.5:80,443:172.20.51.136:3x.7x.xx5.136", "nr_graph": "192.168.102.6:80,443:172.20.51.137:3x.7x.xx5.137", "nr_api": "192.168.102.7:443:172.20.51.138:3x.7x.xx5.138", "nr_portal": "192.168.102.8:13011,30015,13001,13010,13021,13020,443,13003,13026,12648,12650,12499,12495,12647,12646,12649:172.20.51.139:3x.7x.xx5.139", "nr_publicapi": "192.168.102.9:443:172.20.51.140:3x.7x.xx5.140", "nr_publicportal": "192.168.102.10:13011,30015,13001,13010,13021,13020,443,13003,12495,12649:172.20.51.141:3x.7x.xx5.141", "nr_crl": "192.168.102.11:80:172.20.51.142:3x.7x.xx5.142", "nr_extensions": "192.168.102.12:443,12490,12491,12498:172.20.51.143:3x.7x.xx5.143", }

[/xml]

This file is consumed by the following script, which is also saved to the setup folder.

[powershell]
param (
    $StackBuildJSONPath = 'C:\CloudDeployment\Setup\StackRecord.json'
)

$server = 'mas-bgpnat01'
$StackBuild = Get-Content $StackBuildJSONPath | ConvertFrom-Json

[scriptblock]$ScriptBlockAddExternal = {
    param($ExIp)
    $NatSetup = Get-NetNat
    Write-Verbose "Adding External Address $ExIp"
    Add-NetNatExternalAddress -NatName $NatSetup.Name -IPAddress $ExIp -PortStart 80 -PortEnd 63356
}

[scriptblock]$ScriptblockAddPorts = {
    param(
        $ExIp,
        $natport,
        $InternalIp
    )
    Write-Verbose "Adding NAT Mapping $($ExIp):$($natport)->$($InternalIp):$($natport)"
    Add-NetNatStaticMapping -NatName $NatSetup.Name -Protocol TCP -ExternalIPAddress $ExIp -InternalIPAddress $InternalIp -ExternalPort $natport -InternalPort $NatPort
}

$NatRules = @()
$NatRuleNames = ($StackBuild | Get-Member | ? {$_.Name -like "nr_*"}).Name
foreach ($NATName in $NatRuleNames) {
    $NatRule = '' | select Name, Internal, External, Ports
    $NatRule.Name = $NATName.Replace('nr_','')
    $rules = $StackBuild.($NATName).split(':')
    $NatRule.Internal = $rules[0]
    $NatRule.External = $rules[2]
    $NatRule.Ports = $rules[1]
    $NatRules += $NatRule
}

$session = New-PSSession -ComputerName $server

foreach ($NatRule in $NatRules) {
    Invoke-Command -Session $session -ScriptBlock $ScriptBlockAddExternal -ArgumentList $NatRule.External
    $NatPorts = $NatRule.Ports.Split(',').Trim()
    foreach ($NatPort in $NatPorts) {
        Invoke-Command -Session $session -ScriptBlock $ScriptblockAddPorts -ArgumentList $NatRule.External, $NatPort, $NatRule.Internal
    }
}

Remove-PSSession $session
[/powershell]
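Running it looks something like this. The script name is illustrative (use whatever you saved it as), and a quick remote check confirms the mappings landed on the BGPNAT VM:

[powershell]
# Usage sketch; Add-StackNatRules.ps1 is an illustrative name for the script above.
.\Add-StackNatRules.ps1 -StackBuildJSONPath 'C:\CloudDeployment\Setup\StackRecord.json' -Verbose
Invoke-Command -ComputerName mas-bgpnat01 -ScriptBlock {
    Get-NetNatStaticMapping | Sort-Object ExternalIPAddress, ExternalPort
}
[/powershell]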

Next, you need to publish your DNS records. You can do this by hand if you know your NAT mappings; for reference, you can open up the DNS server on MAS-DC01.

However, here are some scripts I have created to help automate this process. I do run this from another machine but have edited it to run in the context of the AzureStack Host. First, we need a couple of reference files.

DNSMappings C:\clouddeployment\setup\DNSMapping.json

[xml] [ { "Name": "nr_Table", "A": "*", "Subdomain": "table", "Zone": "RegionZone.DomainZone" }, { "Name": "nr_Queue", "A": "*", "Subdomain": "queue", "Zone": "RegionZone.DomainZone" }, { "Name": "nr_blob", "A": "*", "Subdomain": "blob", "Zone": "RegionZone.DomainZone" }, { "Name": "nr_adfs", "A": "adfs", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_graph", "A": "graph", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_api", "A": "api", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_portal", "A": "portal", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_publicapi", "A": "publicapi", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_publicportal", "A": "publicportal", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_crl", "A": "crl", "Subdomain": "RegionZone", "Zone": "DomainZone" }, { "Name": "nr_extensions", "A": "*", "Subdomain": "vault", "Zone": "RegionZone.DomainZone" }, { "Name": "nr_extensions", "A": "*", "Subdomain": "vaultcore", "Zone": "RegionZone.DomainZone" } ]

[/xml]

ExternalMapping C:\clouddeployment\setup\ExternalMapping.json

This is a smaller file that contains only the NAT mappings referenced in this example.

[xml] [ { "External": "3x.7x.2xx.133", "Internal": "172.20.51.133" }, { "External": "3x.7x.2xx.134", "Internal": "172.20.51.134" }, { "External": "3x.7x.2xx.135", "Internal": "172.20.51.135" }, { "External": "3x.7x.2xx.136", "Internal": "172.20.51.136" }, { "External": "3x.7x.2xx.137", "Internal": "172.20.51.137" }, { "External": "3x.7x.2xx.138", "Internal": "172.20.51.138" }, { "External": "3x.7x.2xx.139", "Internal": "172.20.51.139" }, { "External": "3x.7x.2xx.140", "Internal": "172.20.51.140" }, { "External": "3x.7x.2xx.141", "Internal": "172.20.51.141" }, { "External": "3x.7x.2xx.142", "Internal": "172.20.51.142" }, { "External": "3x.7x.2xx.143", "Internal": "172.20.51.143" } ] [/xml]

Bringing it all together with this script:

[powershell]
Param (
    $StackJSONPath = 'c:\clouddeployment\setup\StackRecord.json'
)

$stackRecord = Get-Content $StackJSONPath | ConvertFrom-Json
$DNSMappings = Get-Content c:\clouddeployment\setup\DNSMapping.json | ConvertFrom-Json
$ExternalMapping = Get-Content c:\clouddeployment\setup\ExternalMapping.json | ConvertFrom-Json

$DNSRecords = @()
foreach ($DNSMapping in $DNSMappings) {
    $DNSRecord = '' | select Name, A, IP, Subdomain, Domain
    $DNS = $stackRecord.($DNSMapping.Name).split(':')
    $DNSRecord.IP = ($ExternalMapping | ? {$_.Internal -eq $DNS[2]}).External
    $DNSRecord.Name = $DNSMapping.Name
    $DNSRecord.A = $DNSMapping.A
    $DNSRecord.Subdomain = $DNSMapping.Subdomain.Replace("RegionZone", $stackRecord.Region.ToLower()).Replace("DomainZone", $stackRecord.ExternalDomain.ToLower())
    $DNSRecord.Domain = $DNSMapping.Zone.Replace("RegionZone", $stackRecord.Region.ToLower()).Replace("DomainZone", $stackRecord.ExternalDomain.ToLower())
    $DNSRecords += $DNSRecord
}
#Here you can use this array to do what you need; two examples follow

#CSV host file for import
$DNSRecords | select A, IP, Subdomain, Domain | ConvertTo-Csv -NoTypeInformation | Set-Content c:\clouddeployment\setup\DNSRecords.csv

$SubDomains = $DNSRecords | group Subdomain
foreach ($SubDomain in ($SubDomains | Where {$_.Name -ne ''})) {
    Write-Output ("Records for " + $SubDomain.Name)
    foreach ($record in $SubDomain.Group) {
        # Initialize
        $resourceAName = $record.A
        $PublicIP = $record.IP
        $resourceSubDomainName = $record.Subdomain
        $zoneName = $record.Domain
        $resourceName = $resourceAName + "." + $resourceSubDomainName + "." + $zoneName

        Write-Output ("Record for $resourceName ")
        #Create individual DNS records here
    }
}
[/powershell]

The array will give you the records you need to create.
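If your public zone happens to live on a Windows DNS server (a lab scenario, not a requirement), the "#Create individual DNS records here" placeholder could be filled with something like this hedged sketch; most deployments will instead hand the CSV to their external DNS provider.

[powershell]
# Hedged sketch only: create the A records on a Windows DNS server.
# 'your-dns-server' is a placeholder; $DNSRecords is the array built above.
foreach ($record in $DNSRecords) {
    Add-DnsServerResourceRecordA -ComputerName 'your-dns-server' `
        -ZoneName $record.Domain `
        -Name "$($record.A).$($record.Subdomain)" `
        -IPv4Address $record.IP
}
[/powershell]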

All things being equal, and with a little bit of luck...

To access this external Azure Stack instance via PowerShell you will need a few details and IDs. Most of this is easy enough to find; however, to get your $EnvironmentID from the deployment host, open c:\ecetore\ and find your deployment XML (approximately 573 KB). Inside this file, search for 'DeploymentGuid'; this is your Environment ID. Alternatively, you can run this code on the host (you may need to change the $DeploymentFile parameter).

[powershell]
param (
    $DeploymentFile = 'C:\EceStore\403314e1-d945-9558-fad2-42ba21985248\80e0921f-56b5-17d3-29f5-cd41bf862787'
)

[Xml]$DeploymentStore = Get-Content $DeploymentFile | Out-String
$InfraRole = $DeploymentStore.CustomerConfiguration.Role.Roles.Role | ? Id -eq Infrastructure
$BareMetalInfo = $InfraRole.Roles.Role | ? Id -eq BareMetal | Select -ExpandProperty PublicInfo
$PublicInfoRoles = $DeploymentStore.CustomerConfiguration.Role.Roles.Role.Roles.Role | Select Id, PublicInfo | Where-Object PublicInfo -ne $null
$DeploymentDeets = @{
    DeploymentGuid = $BareMetalInfo.DeploymentGuid;
    IdentityApplications = ($PublicInfoRoles.PublicInfo | ? IdentityApplications -ne $null | Select -ExpandProperty IdentityApplications | Select -ExpandProperty IdentityApplication | Select Name, ResourceId);
    VIPs = ($PublicInfoRoles.PublicInfo | ? Vips -ne $null | Select -ExpandProperty Vips | Select -ExpandProperty Vip);
}
$DeploymentDeets.DeploymentGuid
[/powershell]

Plug all the details into this connection script to access your stack instance. Well-commented code, credit to Chris Speers.

[powershell]
#Random per install
$EnvironmentID = 'xxxxxxxx-xxxx-4e03-aac2-6c2e2f0a517a'
#The DNS Domain used for the Install
$StackDomain = 'sv5.as01.poc.xxxxx.com'
#The AAD Domain Name (e.g. bobsdomain.onmicrosoft.com)
$AADDomainName = 'poc.xxxxx.com'
#The AAD Tenant ID
$AADTenantID = 'poc.xxxxx.com'
#The Username to be used
$AADUserName = 'stackadmin@poc.xxxxx.com'
#The Password to be used
$AADPassword = 'SuperSecret!' | ConvertTo-SecureString -Force -AsPlainText
#The Credential to be used. Alternatively could use Get-Credential
$AADCredential = New-Object PSCredential($AADUserName, $AADPassword)
#The AAD Application Resource URI
$ApiAADResourceID = "https://api.$StackDomain/$EnvironmentID"
#The ARM Endpoint
$StackARMUri = "https://api.$StackDomain/"
#The Gallery Endpoint
$StackGalleryUri = "https://portal.$($StackDomain):30016/"
#The OAuth Redirect Uri
$AadAuthUri = "https://login.windows.net/$AADTenantID/"
#The MS Graph API Endpoint
$GraphApiEndpoint = "graph.$($StackDomain)"

$ResourceManager = "https://api.$($StackDomain)/$($EnvironmentID)"
$Portal = "https://portal.$($StackDomain)/$($EnvironmentID)"
$PublicPortal = "https://publicportal.$($StackDomain)/$($EnvironmentID)"
$Policy = "https://policy.$($StackDomain)/$($EnvironmentID)"
$Monitoring = "https://monitoring.$($StackDomain)/$($EnvironmentID)"

#Add the Azure Stack Environment
Get-AzureRmEnvironment -Name 'Azure Stack AS01' | Remove-AzureRmEnvironment
Add-AzureRmEnvironment -Name "Azure Stack AS01" `
    -ActiveDirectoryEndpoint $AadAuthUri `
    -ActiveDirectoryServiceEndpointResourceId $ApiAADResourceID `
    -ResourceManagerEndpoint $StackARMUri `
    -GalleryEndpoint $StackGalleryUri `
    -GraphEndpoint $GraphApiEndpoint

#Add the environment to the context using the credential
$env = Get-AzureRmEnvironment -Name 'Azure Stack AS01'
Add-AzureRmAccount -Environment $env -Credential $AADCredential -Verbose
Login-AzureRmAccount -EnvironmentName 'Azure Stack AS01'

Get-AzureRmContext
Write-Output "ResourceManager"
Write-Output $ResourceManager
Write-Output "`nPortal"
Write-Output $Portal
Write-Output "`nPublicPortal"
Write-Output $PublicPortal
Write-Output "`nPolicy"
Write-Output $Policy
Write-Output "`nMonitoring"
Write-Output $Monitoring
[/powershell]

Returning something like this.
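If you prefer a programmatic check over eyeballing the output, any simple ARM read call will confirm the endpoints and credential are working; a quick sketch:

[powershell]
# Not part of the original script: quick sanity checks against the new environment.
Get-AzureRmContext
Get-AzureRmLocation
Get-AzureRmResourceGroup | Select-Object ResourceGroupName, Location
[/powershell]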

Thanks for reading. Hopefully this helped you in some way.


Azure Stack TP2 Hacks: Custom Domain Names and Exposing to the Internet

ss.png

In some previous posts, we covered some "hacks" to Azure Stack TP1, primarily enabling a customized domain name and exposing the stack to the internet. If you have not noticed yet, the installation has changed greatly. The process is now driven by ECEngine and should be far more indicative of how the final product gets deployed. While the installer has changed greatly, fortunately the process to expose the stack publicly has only changed in a few minor ways. Without getting too involved in how it works, the installation operates from a series of PowerShell modules and Pester tests tied to a configuration composed from a number of XML configuration files. The configuration files support the use of variables and parameters to drive most of the PowerShell action. As with TP1, the stack is wired so that the DNS domain name for Active Directory must match the public DNS domain name (think certificates and host headers). This is a much less involved change in TP2; it mostly requires replacing a couple of straggling hard-coded entries with variables in some of the OneNode XML config files and changing the installer bootstrapper to use them. Once again, I will admonish you that this is wholly unsupported.

There are six files that need minor changes; we will start with the XML config files.

Config Files

C:\CloudDeployment\Configuration\Roles\Fabric\IdentityProvider\OneNodeRole.xml Line 11 From

[xml] <IdentityApplication Name="Deployment" ResourceId="https://deploy.azurestack.local/[Deployment_Guid]" DisplayName="Deployment Application" CertPath="{Infrastructure}\ASResourceProvider\Cert\Deployment.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Deployment.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

To

[xml] <IdentityApplication Name="Deployment" ResourceId="https://deploy.[DOMAINNAMEFQDN]/[Deployment_Guid]" DisplayName="Deployment Application" CertPath="{Infrastructure}\ASResourceProvider\Cert\Deployment.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Deployment.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

C:\CloudDeployment\Configuration\Roles\Fabric\KeyVault\OneNodeRole.xml Line 12 From

[xml] <IdentityApplication Name="KeyVault" ResourceId="https://vault.azurestack.local/[Deployment_Guid]" DisplayName="AzureStack KeyVault" CertPath="{Infrastructure}\ASResourceProvider\Cert\KeyVault.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\KeyVault.IdentityApplication.Configuration.json" > <AADPermissions> <ApplicationPermission Name="ReadDirectoryData" /> </AADPermissions> <OAuth2PermissionGrants> <FirstPartyApplication FriendlyName="PowerShell" /> <FirstPartyApplication FriendlyName="VisualStudio" /> <FirstPartyApplication FriendlyName="AzureCLI" /> </OAuth2PermissionGrants> </IdentityApplication> [/xml]

To

[xml] <IdentityApplication Name="KeyVault" ResourceId="https://vault.[DOMAINNAMEFQDN]/[Deployment_Guid]" DisplayName="AzureStack KeyVault" CertPath="{Infrastructure}\ASResourceProvider\Cert\KeyVault.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\KeyVault.IdentityApplication.Configuration.json" > <AADPermissions> <ApplicationPermission Name="ReadDirectoryData" /> </AADPermissions> <OAuth2PermissionGrants> <FirstPartyApplication FriendlyName="PowerShell" /> <FirstPartyApplication FriendlyName="VisualStudio" /> <FirstPartyApplication FriendlyName="AzureCLI" /> </OAuth2PermissionGrants> </IdentityApplication> [/xml]

Line 26 From

[xml] <AzureKeyVaultSuffix>vault.azurestack.local</AzureKeyVaultSuffix> [/xml]

To

[xml] <AzureKeyVaultSuffix>vault.[DOMAINNAMEFQDN]</AzureKeyVaultSuffix> [/xml]

C:\CloudDeployment\Configuration\Roles\Fabric\WAS\OneNodeRole.xml Line(s) 96-97 From

[xml] <IdentityApplication Name="ResourceManager" ResourceId="https://api.azurestack.local/[Deployment_Guid]" HomePage="https://api.azurestack.local/" DisplayName="AzureStack Resource Manager" CertPath="{Infrastructure}\ASResourceProvider\Cert\ResourceManager.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\ResourceManager.IdentityApplication.Configuration.json" Tags="MicrosoftAzureStack" > [/xml]

To

[xml] <IdentityApplication Name="ResourceManager" ResourceId="https://api.[DOMAINNAMEFQDN]/[Deployment_Guid]" HomePage="https://api.[DOMAINNAMEFQDN]/" DisplayName="AzureStack Resource Manager" CertPath="{Infrastructure}\ASResourceProvider\Cert\ResourceManager.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\ResourceManager.IdentityApplication.Configuration.json" Tags="MicrosoftAzureStack" > [/xml]

Line(s) 118-120 From

[xml] <IdentityApplication Name="Portal" ResourceId="https://portal.azurestack.local/[Deployment_Guid]" HomePage="https://portal.azurestack.local/" ReplyAddress="https://portal.azurestack.local/" DisplayName="AzureStack Portal" CertPath="{Infrastructure}\ASResourceProvider\Cert\Portal.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Portal.IdentityApplication.Configuration.json" > [/xml]

To

[xml] <IdentityApplication Name="Portal" ResourceId="https://portal.[DOMAINNAMEFQDN]/[Deployment_Guid]" HomePage="https://portal.[DOMAINNAMEFQDN]/" ReplyAddress="https://portal.[DOMAINNAMEFQDN]/" DisplayName="AzureStack Portal" CertPath="{Infrastructure}\ASResourceProvider\Cert\Portal.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Portal.IdentityApplication.Configuration.json" > [/xml]

Line 129 From

[xml] <ResourceAccessPermissions> <UserImpersonationPermission AppURI="https://api.azurestack.local/[Deployment_Guid]" /> </ResourceAccessPermissions> [/xml]

To

[xml] <ResourceAccessPermissions> <UserImpersonationPermission AppURI="https://api.[DOMAINNAMEFQDN]/[Deployment_Guid]" /> </ResourceAccessPermissions> [/xml]

Line 133 From

[xml] <IdentityApplication Name="Policy" ResourceId="https://policy.azurestack.local/[Deployment_Guid]" DisplayName="AzureStack Policy Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Policy.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Policy.IdentityApplication.Configuration.json" > [/xml]

To

[xml] <IdentityApplication Name="Policy" ResourceId="https://policy.[DOMAINNAMEFQDN]/[Deployment_Guid]" DisplayName="AzureStack Policy Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Policy.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Policy.IdentityApplication.Configuration.json" > [/xml]

Line 142 From

[xml] <IdentityApplication Name="Monitoring" ResourceId="https://monitoring.azurestack.local/[Deployment_Guid]" DisplayName="AzureStack Monitoring Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Monitoring.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Monitoring.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

To

[xml] <IdentityApplication Name="Monitoring" ResourceId="https://monitoring.[DOMAINNAMEFQDN]/[Deployment_Guid]" DisplayName="AzureStack Monitoring Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Monitoring.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Monitoring.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

C:\CloudDeployment\Configuration\Roles\Fabric\FabricRingServices\XRP\OneNodeRole.xml Line 114 From

[xml] <IdentityApplication Name="Monitoring" ResourceId="https://monitoring.azurestack.local/[Deployment_Guid]" DisplayName="AzureStack Monitoring Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Monitoring.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Monitoring.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

To

[xml] <IdentityApplication Name="Monitoring" ResourceId="https://monitoring.[DOMAINNAMEFQDN]/[Deployment_Guid]" DisplayName="AzureStack Monitoring Service" CertPath="{Infrastructure}\ASResourceProvider\Cert\Monitoring.IdentityApplication.ClientCertificate.pfx" ConfigPath="{Infrastructure}\ASResourceProvider\Config\Monitoring.IdentityApplication.Configuration.json" > </IdentityApplication> [/xml]

Scripts

Now we will edit the installation bootstrapping scripts.

We will start by adding two new parameters ($ADDomainName and $DomainNetbiosName) to C:\CloudDeployment\Configuration\New-OneNodeManifest.ps1 and have the manifest generation use them.

[powershell] param ( [Parameter(Mandatory=$true)] [Xml] $InputXml,

[Parameter(Mandatory=$true)] [String] $OutputFile,

[Parameter(Mandatory=$true)] [System.Guid] $DeploymentGuid,

[Parameter(Mandatory=$false)] [String] $Model,

[Parameter(Mandatory=$true)] [String] $HostIPv4Address,

[Parameter(Mandatory=$true)] [String] $HostIPv4DefaultGateway,

[Parameter(Mandatory=$true)] [String] $HostSubnet,

[Parameter(Mandatory=$true)] [bool] $HostUseDhcp,

[Parameter(Mandatory=$true)] [string] $PhysicalMachineMacAddress,

[Parameter(Mandatory=$true)] [String] $HostName,

[Parameter(Mandatory=$true)] [String] $NatIPv4Address,

[Parameter(Mandatory=$true)] [String] $NATIPv4Subnet,

[Parameter(Mandatory=$true)] [String] $NatIPv4DefaultGateway,

[Parameter(Mandatory=$false)] [Int] $PublicVlanId,

[Parameter(Mandatory=$true)] [String] $TimeServer,

[Parameter(Mandatory=$true)] [String] $TimeZone,

[Parameter(Mandatory=$true)] [String[]] $EnvironmentDNS,

[Parameter(Mandatory=$false)] [String] $ADDomainName='azurestack.local',

[Parameter(Mandatory=$false)] [String] $DomainNetbiosName='azurestack',

[Parameter(Mandatory=$true)] [string] $AADDirectoryTenantName,

[Parameter(Mandatory=$true)] [string] $AADDirectoryTenantID,

[Parameter(Mandatory=$true)] [string] $AADAdminSubscriptionOwner,

[Parameter(Mandatory=$true)] [string] $AADClaimsProvider
)

$Xml.InnerXml = $Xml.InnerXml.Replace('[PREFIX]', 'MAS')
$Xml.InnerXml = $Xml.InnerXml.Replace('[DOMAINNAMEFQDN]', $ADDomainName)
$Xml.InnerXml = $Xml.InnerXml.Replace('[DOMAINNAME]', $DomainNetbiosName)
[/powershell]

The final edit(s) we need to make are to C:\CloudDeployment\Configuration\InstallAzureStackPOC.ps1. We will start by adding the same parameters to this script.

[powershell] [CmdletBinding(DefaultParameterSetName="DefaultSet")] param ( [Parameter(Mandatory=$false, ParameterSetName="RerunSet")] [Parameter(Mandatory=$true, ParameterSetName="AADSetStaticNAT")] [Parameter(Mandatory=$true, ParameterSetName="DefaultSet")] [SecureString] $AdminPassword,

[Parameter(Mandatory=$false)] [PSCredential] $AADAdminCredential,

[Parameter(Mandatory=$false)] [String] $AdDomainName='azurestack.local',

[Parameter(Mandatory=$false)] [String] $DomainNetbiosName='AzureStack',

[Parameter(Mandatory=$false)] [String] $AADDirectoryTenantName,

[Parameter(Mandatory=$false)] [ValidateSet('Public Azure','Azure - China', 'Azure - US Government')] [String] $AzureEnvironment = 'Public Azure',

[Parameter(Mandatory=$false)] [String[]] $EnvironmentDNS,

[Parameter(Mandatory=$true, ParameterSetName="AADSetStaticNAT")] [String] $NATIPv4Subnet,

[Parameter(Mandatory=$true, ParameterSetName="AADSetStaticNAT")] [String] $NATIPv4Address,

[Parameter(Mandatory=$true, ParameterSetName="AADSetStaticNAT")] [String] $NATIPv4DefaultGateway,

[Parameter(Mandatory=$false)] [Int] $PublicVlanId,

[Parameter(Mandatory=$false)] [string] $TimeServer = 'time.windows.com',

[Parameter(Mandatory=$false, ParameterSetName="RerunSet")] [Switch] $Rerun ) [/powershell]

The next edit occurs at lines 114-115. From

[powershell]
$FabricAdminUserName = 'AzureStack\FabricAdmin'
$SqlAdminUserName = 'AzureStack\SqlSvc'
[/powershell]

To

[powershell] $FabricAdminUserName = "$DomainNetbiosName\FabricAdmin" $SqlAdminUserName = "$DomainNetbiosName\SqlSvc" [/powershell]

Finally, we will modify the last statement of the script, at line 312, to pass the new parameters.

[powershell]
& $PSScriptRoot\New-OneNodeManifest.ps1 -InputXml $xml `
    -OutputFile $outputConfigPath `
    -Model $model `
    -DeploymentGuid $deploymentGuid `
    -HostIPv4Address $hostIPv4Address `
    -HostIPv4DefaultGateway $hostIPv4Gateway `
    -HostSubnet $hostSubnet `
    -HostUseDhcp $hostUseDhcp `
    -PhysicalMachineMacAddress $physicalMachineMacAddress `
    -HostName $hostName `
    -NATIPv4Address $NATIPv4Address `
    -NATIPv4Subnet $NATIPv4Subnet `
    -NATIPv4DefaultGateway $NATIPv4DefaultGateway `
    -PublicVlanId $PublicVlanId `
    -TimeServer $TimeServer `
    -TimeZone $timezone `
    -EnvironmentDNS $EnvironmentDNS `
    -AADDirectoryTenantName $AADDirectoryTenantName `
    -AADDirectoryTenantID $AADDirectoryTenantID `
    -AADAdminSubscriptionOwner $AADAdminSubscriptionOwner `
    -AADClaimsProvider $AADClaimsProvider `
    -ADDomainName $AdDomainName `
    -DomainNetbiosName $DomainNetbiosName
[/powershell]

NAT Configuration

So, you now have customized the domain for your one-node Azure Stack install and want to get it on the internet. This process is almost identical to TP1, save for two changes. In TP1 there were separate BGPVM and NATVM machines; there is now a single machine, MAS-BGPNAT01. The BGPNAT role only exists in the one-node (HyperConverged) installation. The other change is the type of Remote Access installation. TP1 used the "legacy" RRAS for NAT, where all configuration was UI or netsh based. TP2 has transitioned to "modern" Remote Access that is only really manageable through PowerShell. To enable the appropriate NAT mappings we will need three PowerShell cmdlets: Get-NetNat, Add-NetNatExternalAddress, and Add-NetNatStaticMapping. I use a script to create all the mappings, which takes a simple object, in our use case deserialized from JSON. This file is a simple collection of the NAT entries and mappings to be created.

[javascript] { "Portal": { "External": "172.20.40.39", "Ports": [ 80,443,30042,13011,30011,30010,30016,30015,13001,13010,13021,30052,30054,13020,30040,13003,30022,12998,12646,12649,12647,12648,12650,53056,57532,58462,58571,58604,58606,58607,58608,58610,58613,58616,58618,58619,58620,58626,58627,58628,58629,58630,58631,58632,58633,58634,58635,58636,58637,58638,58639,58640,58641,58642,58643,58644,58646,58647,58648,58649,58650,58651,58652,58653,58654,58655,58656,58657,58658,58659,58660,58661,58662,58663,58664,58665,58666,58667,58668,58669,58670,58671,58672,58673,58674,58675,58676,58677,58678,58679,58680,58681,58682,58683,58684,58685,58686,58687,58688,58689,58690,58691,58692,58693,58694,58695,58696,58697,58698,58699,58701 ], "Internal": "192.168.102.5" }, "API": { "External": "172.20.40.38", "Ports": [ 80,443,30042,13011,30011,30010,30016,30015,13001,13010,13021,30052,30054,13020,30040,13003,30022,12998,12646,12649,12647,12648,12650,53056,57532,58462,58571,58604,58606,58607,58608,58610,58613,58616,58618,58619,58620,58626,58627,58628,58629,58630,58631,58632,58633,58634,58635,58636,58637,58638,58639,58640,58641,58642,58643,58644,58646,58647,58648,58649,58650,58651,58652,58653,58654,58655,58656,58657,58658,58659,58660,58661,58662,58663,58664,58665,58666,58667,58668,58669,58670,58671,58672,58673,58674,58675,58676,58677,58678,58679,58680,58681,58682,58683,58684,58685,58686,58687,58688,58689,58690,58691,58692,58693,58694,58695,58696,58697,58698,58699,58701 ], "Internal": "192.168.102.4" }, "DataVault": { "External": "172.20.40.43", "Ports": [80,443], "Internal": "192.168.102.3" }, "CoreDataVault": { "External": "172.20.40.44", "Ports": [80,443], "Internal": "192.168.102.3" }, "Graph": { "External": "172.20.40.40", "Ports": [80,443], "Internal": "192.168.102.8" }, "Extensions": { "External": "172.20.40.41", "Ports": [ 80,443,30042,13011,30011,30010,30016,30015,13001,13010,13021,30052,30054,13020,30040,13003,30022,12998,12646,12649,12647,12648,12650,53056,57532,58462,58571,58604,58606,58607,58608,58610,58613,58616,58618,58619,58620,58626,58627,58628,58629,58630,58631,58632,58633,58634,58635,58636,58637,58638,58639,58640,58641,58642,58643,58644,58646,58647,58648,58649,58650,58651,58652,58653,58654,58655,58656,58657,58658,58659,58660,58661,58662,58663,58664,58665,58666,58667,58668,58669,58670,58671,58672,58673,58674,58675,58676,58677,58678,58679,58680,58681,58682,58683,58684,58685,58686,58687,58688,58689,58690,58691,58692,58693,58694,58695,58696,58697,58698,58699,58701 ], "Internal": "192.168.102.7" }, "Storage": { "External": "172.20.40.42", "Ports": [80,443], "Internal": "192.168.102.6" } } [/javascript]

In the one-node TP2 deployment, 192.168.102.0 is the subnet for "Public" IP addresses, and as you may notice all the VIPs for the stack reside on that subnet. We have 1-to-1 NAT for all the "External" addresses we associate with a given Azure Stack instance.

[powershell]
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [psobject] $NatConfig
)

#There's only one NAT; could also do Get-NetNat -Name BGPNAT...
$NatSetup = Get-NetNat

$NatConfigNodeNames = $NatConfig | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name

foreach ($NatConfigNodeName in $NatConfigNodeNames) {
    Write-Verbose "Configuring NAT for Item $NatConfigNodeName"
    $ExIp = $NatConfig."$NatConfigNodeName".External
    $InternalIp = $NatConfig."$NatConfigNodeName".Internal
    $NatPorts = $NatConfig."$NatConfigNodeName".Ports
    Write-Verbose "Adding External Address $ExIp"
    Add-NetNatExternalAddress -NatName $NatSetup.Name -IPAddress $ExIp -PortStart 80 -PortEnd 63356
    Write-Verbose "Adding Static Mappings"

    foreach ($natport in $NatPorts) {
        #TCP
        Write-Verbose "Adding NAT Mapping $($ExIp):$($natport)->$($InternalIp):$($natport)"
        Add-NetNatStaticMapping -NatName $NatSetup.Name -Protocol TCP `
            -ExternalIPAddress $ExIp -InternalIPAddress $InternalIp `
            -ExternalPort $natport -InternalPort $NatPort
    }
}
[/powershell]
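Invocation is straightforward; a usage sketch with illustrative file and script names, run on MAS-BGPNAT01 itself:

[powershell]
# Usage sketch: load the JSON shown earlier and feed it to the script above.
$NatConfig = Get-Content 'C:\Temp\TP2NatConfig.json' -Raw | ConvertFrom-Json
.\Set-StackNatMappings.ps1 -NatConfig $NatConfig -Verbose
[/powershell]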

DNS Records

The final step will be adding the requisite DNS entries, which have changed slightly as well. In the interest of simplicity, assume the final octet of the IP addresses on the 172.20.40.0 subnet has a 1-to-1 NAT mapping to 38.77.x.0 (e.g. 172.20.40.40 -> 38.77.x.40).

A Record IP Address
api 38.77.x.38
portal 38.77.x.39
*.blob 38.77.x.42
*.table 38.77.x.42
*.queue 38.77.x.42
*.vault 38.77.x.43
data.vaultcore 38.77.x.44
control.vaultcore 38.77.x.44
xrp.tenantextensions 38.77.x.44
compute.adminextensions 38.77.x.41
network.adminextensions 38.77.x.41
health.adminextensions 38.77.x.41
storage.adminextensions 38.77.x.41
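The convention above is purely mechanical, so deriving the public address from a 'Datacenter' address is a one-liner; a small sketch (38.77.x is the placeholder prefix from the table):

[powershell]
# Sketch of the 1-to-1 convention: reuse the last octet on the public prefix.
$datacenterIP = '172.20.40.40'
$publicPrefix = '38.77.x'
'{0}.{1}' -f $publicPrefix, $datacenterIP.Split('.')[-1]   # -> 38.77.x.40
[/powershell]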


Connecting to the Stack

You will need to export the root certificate from the CA for your installation for importing on any clients that will access your deployment.   Exporting the root certificate is very simple as the host system is joined to the domain which hosts the Azure Stack CA. To export the Root certificate to your desktop run this simple one-liner in the PowerShell console of your Host system (the same command will work from the Console VM).

[powershell]
Get-ChildItem -Path Cert:\LocalMachine\Root | `
    Where-Object {$_.Subject -like "CN=AzureStackCertificationAuthority*"} | `
    Export-Certificate -FilePath "$env:USERPROFILE\Desktop\$($env:USERDOMAIN)RootCA.cer" -Type CERT
[/powershell]

The process for importing this certificate on your client will vary depending on the OS version; as such I will avoid giving a scripted method.
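That said, on recent Windows clients a scripted import is a one-liner if you want one; a hedged sketch, assuming the exported .cer has been copied to the client (adjust the file name to whatever the export above produced):

[powershell]
# Hedged sketch: import the exported root CA into the current user's Trusted Root store.
$cerPath = "$env:USERPROFILE\Desktop\AzureStackRootCA.cer"   # adjust to your exported file name
Import-Certificate -FilePath $cerPath -CertStoreLocation Cert:\CurrentUser\Root
[/powershell]

If you prefer the GUI, the manual steps follow.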

Right click the previously exported certificate.

installcert.png

Choose Current User for most use-cases.

imprt2_thumb.png

Select Browse for the appropriate store.

imprt3.png

Select Trusted Root Certification Authorities

imprt4.png

Confirm the Import Prompt

To connect with PowerShell or REST API you will need the deployment GUID. This can be obtained from the host with the following snippet.

[powershell]
[xml]$deployinfo = Get-Content "C:\CloudDeployment\Config.xml"
$deploymentguid = $deployinfo.CustomerConfiguration.Role.Roles.Role.Roles.Role | % {$_.PublicInfo.DeploymentGuid}
[/powershell]

This value can then be used to connect to your stack.

[powershell]
#Deployment GUID
$EnvironmentID = '4bc6f444-ff15-4fd7-9bfa-5495891fe876'
#The DNS Domain used for the Install
$StackDomain = "yourazurestack.com"
#The AAD Domain Name (e.g. bobsdomain.onmicrosoft.com)
$AADDomainName = 'youraadtenant.com'
#The AAD Tenant ID
$AADTenantID = "youraadtenant.com"
#The Username to be used
$AADUserName = "username@$AADDomainName"
#The Password to be used
$AADPassword = 'P@ssword1' | ConvertTo-SecureString -Force -AsPlainText
#The Credential to be used. Alternatively could use Get-Credential
$AADCredential = New-Object PSCredential($AADUserName, $AADPassword)
#The AAD Application Resource URI
$ApiAADResourceID = "https://api.$StackDomain/$EnvironmentID"
#The ARM Endpoint
$StackARMUri = "https://api.$StackDomain/"
#The Gallery Endpoint
$StackGalleryUri = "https://portal.$($StackDomain):30016/"
#The OAuth Redirect Uri
$AadAuthUri = "https://login.windows.net/$AADTenantID/"
#The MS Graph API Endpoint
$GraphApiEndpoint = "https://graph.windows.net/"

#Add the Azure Stack Environment
Add-AzureRmEnvironment -Name "Azure Stack" `
    -ActiveDirectoryEndpoint $AadAuthUri `
    -ActiveDirectoryServiceEndpointResourceId $ApiAADResourceID `
    -ResourceManagerEndpoint $StackARMUri `
    -GalleryEndpoint $StackGalleryUri `
    -GraphEndpoint $GraphApiEndpoint

#Add the environment to the context using the credential
$env = Get-AzureRmEnvironment 'Azure Stack'
Add-AzureRmAccount -Environment $env -Credential $AADCredential -Verbose
[/powershell]

Note: You will need a TP2-specific version of Azure PowerShell for many operations. Enjoy, and stay tuned for more.

Writing your own custom data collection for OMS using SCOM Management Packs

OMS-Cover-Photo.png

[Unsupported] OMS is a great data collection and analytics tool, but at the moment it also has some limitations. Microsoft has been releasing Integration Pack after Integration Pack and adding features at a decent pace, but unlike the equivalent SCOM Management Packs, the IPs are somewhat of a black box. A while back, I got frustrated with the lack of configuration options in OMS (then Op Insights) and decided to "can opener" some of the features. I figured the best way to do this was to examine some of the management packs that OMS installs into a connected SCOM management group and start digging through their contents. Granted, lately the OMS team has added a lot more customization options than existed when I originally traced this out. You can now add custom performance collection rules, custom log collections, and more, all from within the OMS portal itself. However, there are still several advantages to being able to create your own OMS collection rules in SCOM directly. These include:

  • Additional levels of configuration customization beyond what’s offered in the OMS portal, such as the ability to target any class you want or use more granular filter criteria than is offered by the portal.

  • The ability to migrate your collection configuration from one SCOM instance to another. OMS doesn’t currently allow you to export custom configuration.

  • The ability to do bulk edits through the myriad of tools and editors available for SCOM management packs (it’s much easier to add 50 collection rules to an existing SCOM MP using a simple RegEx find/replace than it is to hand enter them into the OMS SaaS portal).

Digging into the management packs automatically loaded into a connected SCOM instance from OMS, we find that there are quite a few. A lot of them still bear the old "System Center Advisor" filenames and display strings from before Advisor got absorbed into OMS, but the IPs also add a bunch of new MPs that include "IntelligencePacks" in their IDs, making them easier to filter by. Many of the type definitions are stored in a pack named (unsurprisingly) Microsoft.IntelligencePacks.Types.
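If you want to do the same digging, exporting the packs from a connected management group is the easiest way to get at their XML; a hedged sketch (run where the OperationsManager module is available, output folder is illustrative):

[powershell]
# Hedged sketch: export the OMS/Advisor packs so they can be opened in an editor.
Import-Module OperationsManager
New-Item -ItemType Directory -Path 'C:\Temp\OMSPacks' -Force | Out-Null
Get-SCOMManagementPack |
    Where-Object { $_.Name -like '*IntelligencePack*' -or $_.Name -like '*Advisor*' } |
    Export-SCOMManagementPack -Path 'C:\Temp\OMSPacks'
[/powershell]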

Here is the Event Collection Write Action Module Type Definition:

Event Collector Type

Now let’s take a look at one of the custom event collection rules I created in the OMS portal to grab all Application log events. These rules are contained in the Microsoft.IntelligencePack.LogManagement.Collection MP:

Application Log Collection Rule

We can see that the event collection rule for OMS looks an awful lot like a normal event collection rule in SCOM. The ID is automatically generated according to a naming convention that OMS keeps track of: Microsoft.IntelligencePack.LogManagement.Collection.EventLog. followed by a unique ID string that identifies each specific rule to OMS. The only real difference between this rule and a standard event collection rule is the write action, which is the custom write action we saw defined in the Types pack and which writes the event to OMS instead of to the SCOM Data Warehouse. So all you need to do to create your own custom event data collection rule in SCOM is add a reference to the Types MP in your custom MP, like:

<Reference Alias="IPTypes"> <ID>Microsoft.IntelligencePacks.Types</ID> <Version>7.0.9014.0</Version> <PublicKeyToken>31bf3856ad364e35</PublicKeyToken> </Reference>

…and then either replace the write action in a standard event collection rule with the following, or add it as an additional write action (you can actually collect to both databases using a single rule):

<WriteAction ID="HttpWA" TypeID="IPTypes!Microsoft.SystemCenter.CollectCloudGenericEvent" />

Now granted, OMS lets you create your own custom event log collection rules using the OMS portal, but at the moment the level of customization and filtering available in the portal is pretty limited. You can specify the name of the log and you can select any combination of three event levels (ERROR, WARNING, and INFORMATION). By authoring the rules yourself, you can filter them down based on any additional criteria you can express in a standard SCOM management pack. In large enterprises, this can help keep your OMS consumption costs down by leaving out events that you know you do not need to collect.

If we look at Performance Data next, we see that there are two custom Write Action types that are of interest to us:

Perf Data Collector Type

…and…

Perf Agg Data Collector Type

…which are the Write Action Modules used for collecting custom performance data and the aggregates for that performance data, respectively. If we take a look at some of the performance collection rules that use these types, we can see how we can use them ourselves. Surprisingly, in the current iteration of OMS they get stored in the Microsoft.IntelligencePack.LogManagement.Collection MP along with the event log collection rules. Here’s an example of the normal collection rule generated in SCOM by adding a rule in OMS:

Perf Data Collector Rule

And just like what we saw with the Event Collection rules, the only difference between this rule and a normal SCOM Performance Collection rule is that instead of the write action to write the data to the Operations Manager DB or DW, we have a “write to cloud” Write Action. So all we need to do in order to add OMS performance collection to existing performance rules is add a reference to the Types MP:

<Reference Alias="IPTypes"> <ID>Microsoft.IntelligencePacks.Types</ID> <Version>7.0.9014.0</Version> <PublicKeyToken>31bf3856ad364e35</PublicKeyToken> </Reference>

And then add the custom write action to the Write Actions section of any of our existing collection rules. Like with the Event Collection rules, we can use multiple write actions so a single rule is capable of writing to Operations Manager database, the data warehouse, and OMS.

<WriteAction ID="WriteToCloud" TypeID="IPTypes!Microsoft.SystemCenter.CollectCloudPerformanceData_PerfIP" />

Now in addition to the standard collection rule we also have an aggregate collection rule that looks like this:

App Log Agg Collection Rule

This rule looks almost exactly like the previous collection rule, except for two big differences. One is that it uses a different write action:

<ConditionDetection ID="Aggregator" TypeID="IPPerfCollection!Microsoft.IntelligencePacks.Performance.PerformanceAggregator">

…and there is an additional Condition Detection that wasn’t present in the standard collection rule for the aggregation:

<ConditionDetection ID="Aggregator" TypeID="IPPerfCollection!Microsoft.IntelligencePacks.Performance.PerformanceAggregator"> <AggregationIntervalInMinutes>30</AggregationIntervalInMinutes> <SamplingIntervalSeconds>10</SamplingIntervalSeconds> </ConditionDetection>

Changing the value for AggregationIntervalInMinutes allows you to change the aggregation interval, which is something you cannot do in the OMS portal. Otherwise, the native custom performance collection feature of OMS is pretty flexible and allows you to use any Object/Counter/Instance combination you want. However, if your organization already uses SCOM, there's a good chance you already have a set of custom SCOM MPs for performance data collection. Adding a single write action to 100 pre-existing rules, plus an optional aggregation rule for each, is likely easier for an experienced SCOM author than hand-entering 100 custom performance collection rules into the OMS portal. The other benefits of doing it this way include the ability to bulk edit (changing the threshold for all the counters, for example, would be a simple find/replace instead of manually changing each rule) and the ability to export this configuration. OMS lets you export data, but not configuration. Any time you spend hand-entering data into the OMS portal would have to be repeated for any other OMS workspace you want to use that configuration in. A custom SCOM MP, however, can be put into source control and re-used in as many different environments as you like.
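As a rough illustration of the bulk-edit point, the write action shown earlier can be appended to every rule in one of your own unsealed MP files with a single find/replace; a hedged sketch that assumes the file contains only performance collection rules and already carries the IPTypes reference shown above (back the file up first, the path is illustrative):

[powershell]
# Hedged sketch of the bulk-edit idea: append the OMS perf write action (shown earlier)
# before every closing </WriteActions> tag in your own unsealed MP export.
$mpPath  = 'C:\MPs\My.Custom.PerfCollection.xml'
$cloudWA = '<WriteAction ID="WriteToCloud" TypeID="IPTypes!Microsoft.SystemCenter.CollectCloudPerformanceData_PerfIP" />'
(Get-Content $mpPath -Raw) -replace '</WriteActions>', "$cloudWA</WriteActions>" |
    Set-Content $mpPath
[/powershell]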

Note: When making modifications to any rules, do not make changes to the unsealed OMS managed MPs in SCOM. While these changes probably won’t break anything, OMS is the source of record for the content of those MPs. If you make a change in SCOM, it will be disconnected from the OMS config and will likely be overwritten the next time you make a change in OMS.

One last thing. Observant readers may have noticed that every rule I posted is disabled by default. OMS does this for every custom rule and then enables the rules through an override contained within the same unsealed management pack, so this is normal. This is presumably because adjusting an override to enable/disable something is generally considered a "lighter" touch than editing a rule directly, although I don't see any options to disable any of the collection rules (only delete them).

Exposing Azure Stack TP1 to the Public Internet

portal_thumb.png

My previous two posts covered relatively trivial modifications to the constraints and parameters exposed by the Azure Stack PoC TP1 installer. There was a contrived underlying narrative leading us here. As TP1 made it into the wild, my team was tasked with being able to rapidly stand up PoC environments. We eventually landed on an LTI (Lite Touch Installation) type deployment solution which allows us to deploy and re-deploy fresh PoC environments within a couple of hours. This requirement, as expected, evolved into exposing the public surfaces of Azure Stack (ARM, the portal, storage accounts, etc.). I'll attempt to outline the steps required in as concise a manner as possible (hat-tip to AzureStack.eu). The way Azure Stack is currently glued together carries the expectation that the Active Directory DNS name of your deployment will match the DNS suffix used for the publicly accessible surfaces (e.g. portal.azurestack.local). The primary reasons for this are certificates and DNS resolution, by the service fabric and clients alike. I'll expound on this a little later. It was relatively simple to automate exposing a new environment by having the configuration follow a consistent convention. As a practical example, we have a number of blade chassis within our datacenters. In each chassis where we host PoC environments, each blade is on a different subnet with a specific set of IP addresses that will be exposed to the internet (if needed) via 1-to-1 NAT.

Each blade is on a different VLAN and subnet for a reason. Azure Stack's use of VXLAN to talk between hosts in a scale-out configuration means that if you try to stand up two hosts on the same layer 2 segment, you will get IP address conflicts between them on the private Azure Stack range. There are ways in the config to change the VXLAN IDs and avoid this, but it's simpler to just use separate VLANs. Each blade is on a dedicated private-space VLAN and is then NATed to the public internet through a common router/firewall with 1:1 NATs on the appropriate addresses. This avoids having to put the blade directly on the internet, and a static NAT avoids any sort of port address translation.

Prerequisites

You will need at least 5 public IP addresses and access to manage the DNS zone for the domain name you will be using. The domain used for the subsequent content will be yourdomain.com (replace with your domain name), and I will assume that the 'Datacenter' network (considered the public network for Stack) for your Azure Stack installation will have 1-to-1 NAT applied to the appropriate IP addresses exposed to the public internet.

We'll use a hypothetical internal IP address set of 172.20.10.33-172.20.10.39 and external IP address set of 38.x.x.30-38.x.x.34 for this example. The process I will describe requires static IP addressing on the NAT public interface. These options are required parameters of the installation, along with the desired custom Active Directory DNS domain name.

Networking

Azure Stack Virtual IP Addresses

Role Internal VIP ‘Datacenter’ Network IP External IP
Portal and API 192.168.133.74 172.20.10.33 38.x.x.30
Gallery 192.168.133.75 172.20.10.34 38.x.x.31
CRP, NRP, SRP 192.168.133.31 172.20.10.35 38.x.x.32
Storage Account 192.168.133.72 172.20.10.36 38.x.x.33
Extensions 192.168.133.71 172.20.10.37 38.x.x.34

Azure Stack VM IP Addresses

Role Internal IP ‘Datacenter’ Network IP
ADVM 192.168.100.2 172.20.10.38
ClientVM 192.168.200.101 172.20.10.39


DNS

We will need to start a new deployment with our desired domain name.  Please see my previous post Changing Azure Stack’s Active Directory DNS Domain to something other than AzureStack.local on how to make this an installation parameter.  Please note, modifications to the additional resource provider (e.g. Sql and Web Apps) installers will also be required.  We just changed instances of azurestack.local to $env:USERDNSDOMAIN as they are primarily executed within the Azure Stack Active Directory context.

To be honest, I probably should have addressed this in the previous post. If the Active Directory DNS domain name you wish to use will be resolvable by the host while deploying the stack, I recommend you simply add an entry to the HOSTS file prior to initiating installation. We ended up adding this to our deployment process as we wanted to leave public DNS entries in place for certain environments. In this version we know the resultant IP address of the domain controller that will be created.

Add an entry like such:

192.168.100.2                  yourdomain.com
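If you are scripting the deployment anyway, the same HOSTS entry can be added like this (a minimal sketch; run elevated on the host):

[powershell]
# Sketch: append the HOSTS override used above.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.100.2`tyourdomain.com"
[/powershell]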

You will need to add a number of A records within your public DNS zone.

A Record IP Address
api.yourdomain.com 38.x.x.30
portal.yourdomain.com 38.x.x.30
gallery.yourdomain.com 38.x.x.31
crp.yourdomain.com 38.x.x.32
nrp.yourdomain.com 38.x.x.32
srp.yourdomain.com 38.x.x.32
xrp.yourdomain.com 38.x.x.32
*.compute.yourdomain.com 38.x.x.32
*.storage.yourdomain.com 38.x.x.32
*.network.yourdomain.com 38.x.x.32
*.blob.yourdomain.com 38.x.x.33
*.queue.yourdomain.com 38.x.x.33
*.table.yourdomain.com 38.x.x.33
*.adminextensions.yourdomain.com 38.x.x.34
*.tenantextensions.yourdomain.com 38.x.x.34


Installation

The NAT IP addressing and Active Directory DNS domain name need to be supplied to the PoC installation.  A rough script running in the context of the installation folder will look something like this:

[powershell]

$ADDomainName="yourdomain.com" $AADTenant="youraadtenant.com" $AADUserName="yourStackUser@$AADTenant" $AADPassword=ConvertTo-SecureString -String "ThisIsYourAADPassword!" -AsPlainText -Force $AdminPassword=ConvertTo-SecureString -String "ThisIsYourStackPassword!" -AsPlainText -Force $NATVMStaticIP="172.20.10.30/24" $NATVMStaticGateway="172.20.10.1" $AADCredential=New-Object pscredential($AADUserName,$AADPassword) $DeployScript= Join-Path $PSScriptRoot "DeployAzureStack.ps1" &$DeployScript -ADDomainName $ADDomainName -AADTenant $AADTenant ` -AdminPassword $AdminPassword -AADCredential $AADCredential ` -NATVMStaticIP $NATVMStaticIP -NATVMStaticGateway $NATVMStaticGateway -Verbose -Force

[/powershell]

Configuring Azure Stack NAT

The public internet access, both ingress and egress, is all controlled via Routing and Remote Access and the networking within the NATVM virtual machine. There are effectively three steps you will need to undertake, and they require the internal VIP addresses within the stack and the public IP addresses you wish to associate with your deployment. This process can be completed through the UI; however, given the nature of our lab, doing this in a scriptable fashion was an imperative. If you must do this by hand, start by opening the Routing and Remote Access MMC on the NATVM in order to modify the public NAT settings.

rras.png

Create a Public IP Address Pool within the RRAS NAT configuration.

rraspool.png

Create IP Address reservations for the all of the IP addresses for which the public DNS entries were created within the RRAS NAT configuration. Note incoming sessions are required for all the Azure Stack  public surfaces.

reservations.png

Associate the additional Public IP addresses with the NATVM Public Interface in the NIC settings. (This is a key step that isn't covered elsewhere)

ipad.png

I told you that we needed to script this, and unfortunately, as far as I can tell, the easiest way is still through the network shell, netsh.exe. The parameters for our script are the same for every deployment, and a script file for netsh is dynamically created. If you already have a large amount of drool on your keyboard from reading this tome, I'll cut to the chase and give you a sample to drive netsh. In the following example, Ethernet 2 is the 'Public' adapter on the NATVM, and our public IP addresses are in the 172.20.27.0/24 network and will be mapped according to the previous table.

[text]

#Set the IP Addresses
pushd interface ipv4
add address name="Ethernet 2" address=172.20.27.38 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.31 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.33 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.35 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.34 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.37 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.36 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.32 mask=255.255.255.0
add address name="Ethernet 2" address=172.20.27.39 mask=255.255.255.0
popd
#Configure NAT
pushd routing ip nat
#NAT Address Pool
add addressrange name="Ethernet 2" start=172.20.27.31 end=172.20.27.39 mask=255.255.255.0
#NAT Pool Reservations
add addressmapping name="Ethernet 2" public=172.20.27.38 private=192.168.100.2 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.31 private=192.168.5.5 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.33 private=192.168.133.74 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.35 private=192.168.133.31 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.34 private=192.168.133.75 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.37 private=192.168.133.71 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.36 private=192.168.133.72 inboundsessions=enable
add addressmapping name="Ethernet 2" public=172.20.27.32 private=192.168.200.101 inboundsessions=disable
add addressmapping name="Ethernet 2" public=172.20.27.39 private=192.168.100.12 inboundsessions=disable
popd

[/text]
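netsh can execute a script file like the one above directly, so applying it is a single call; a hedged sketch using PowerShell Direct from the host, assuming the file has been copied into the NATVM at an illustrative path:

[powershell]
# Hedged sketch: run the netsh script inside the NATVM via PowerShell Direct.
Invoke-Command -VMName NATVM -Credential (Get-Credential) -ScriptBlock {
    netsh -f 'C:\Temp\configure-nat.txt'
}
[/powershell]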

If I haven't lost you already and you would like to achieve this with PowerShell, the following script uses PowerShell Direct to configure NAT in the previously demonstrated manner, taking a PSCredential (an Administrator on the NATVM) and an array of very simple objects representing the (for now) constant NAT mappings.

They can easily be stored in JSON (and I insist that is pronounced like the given name Jason) for easy serialization/de-serialization.

mappings
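The 'mappings' screenshot is not reproduced here; judging from the properties the script below reads (name, publicIP as a last octet on the public subnet, internalIP, allowInbound), the objects look roughly like this hedged sketch:

[powershell]
# Hedged sketch of the mapping objects; example values taken from the tables earlier in this post.
$Mappings = @(
    [pscustomobject]@{ name = 'PortalApi'; publicIP = 33; internalIP = '192.168.133.74'; allowInbound = $true  }
    [pscustomobject]@{ name = 'ADVM';      publicIP = 38; internalIP = '192.168.100.2';  allowInbound = $false }
)
[/powershell]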

[powershell]
#requires -Modules NetAdapter,NetTCPIP -Version 5.0
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [Object[]] $Mappings,
    [Parameter(Mandatory=$false)]
    [Int32] $PoolSize = 8,
    [Parameter(Mandatory=$false)]
    [String] $NatVmInternalIP = "192.168.200.1",
    [Parameter(Mandatory=$true)]
    [pscredential] $Credential,
    [Parameter(Mandatory=$false)]
    [String] $NatVmName = "NATVM"
)

$ActivityId = 33

$NatSetupScriptBlock= { [CmdletBinding()] param()

$VerbosePreference=$Using:VerbosePreference $ParentId=$Using:ActivityId $NatVmInternalIP=$Using:NatVmInternalIP $Mappings=@($Using:Mappings) $AddressPoolSize=$Using:PoolSize

#region Helper Methods

<# .SYNOPSIS Converts a CIDR or Prefix Length as string to a Subnet Mask .PARAMETER CIDR The CIDR or Prefix Length to convert #> Function ConvertTo-SubnetMaskFromCIDR { [CmdletBinding()] [OutputType([String])] param ( [Parameter(Mandatory=$true,ValueFromPipeline=$true)] [String] $CIDR ) $NetMask="0.0.0.0" $CIDRLength=[System.Convert]::ToInt32(($CIDR.Split('/')|Select-Object -Last 1).Trim()) Write-Debug "Converting Prefix Length $CIDRLength from input $CIDR" switch ($CIDRLength) { {$_ -gt 0 -and $_ -lt 8} { $binary="$( "1" * $CIDRLength)".PadRight(8,"0") $o1 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "$o1.0.0.0" break } 8 {$NetMask="255.0.0.0"} {$_ -gt 8 -and $_ -lt 16} { $binary="$( "1" * ($CIDRLength - 8))".PadRight(8,"0") $o2 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "255.$o2.0.0" break } 16 {$NetMask="255.255.0.0"} {$_ -gt 16 -and $_ -lt 24} { $binary="$("1" * ($CIDRLength - 16))".PadRight(8,"0") $o3 = [System.Convert]::ToInt32($binary.Trim(),2) $NetMask = "255.255.$o3.0" break } 24 {$NetMask="255.255.255.0"} {$_ -gt 24 -and $_ -lt 32} { $binary="$("1" * ($CIDRLength - 24))".PadRight(8,"0") $o4 = [convert]::ToInt32($binary.Trim(),2) $NetMask= "255.255.255.$o4" break } 32 {$NetMask="255.255.255.255"} } return $NetMask }

Function Get-ExternalNetAdapterInfo
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)]
        [String]
        $ExcludeIP
    )

    $AdapterInfos=@()
    $NetAdapters=Get-NetAdapter -Physical
    foreach ($NetAdapter in $NetAdapters)
    {
        if($NetAdapter.Status -eq 'Up')
        {
            $AdapIpAddress=$NetAdapter|Get-NetIPAddress -AddressFamily IPv4
            $AdapterInfo=New-Object PSObject -Property @{
                Adapter=$NetAdapter;
                IpAddress=$AdapIpAddress;
            }
            $AdapterInfos+=$AdapterInfo
        }
    }

    $DesiredAdapter=$AdapterInfos|Where-Object{$_.IPAddress.IPAddress -ne $ExcludeIP}
    return $DesiredAdapter
}

Function Get-NetshScriptContent
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)]
        [Object]
        $ExternalAdapter,
        [Parameter(Mandatory=$true)]
        [Object[]]
        $NatMappings,
        [Parameter(Mandatory=$true)]
        [Int32]
        $PoolSize
    )

    #Retrieve the Network Adapters
    $ExternalAddress=$ExternalAdapter.IPAddress.IPAddress
    $ExternalPrefixLength=$ExternalAdapter.IPAddress.PrefixLength
    $ExternalAdapterName=$ExternalAdapter.Adapter.Name
    Write-Verbose "External Network Adapter [$ExternalAdapterName] $($ExternalAdapter.Adapter.InterfaceDescription) - $ExternalAddress"
    $IpPieces=$ExternalAddress.Split(".")
    $LastOctet=[System.Convert]::ToInt32(($IpPieces|Select-Object -Last 1))
    $IpFormat="$([String]::Join(".",$IpPieces[0..2])).{0}"
    $PublicCIDR="$IpFormat/{1}" -f 0,$ExternalPrefixLength

    $AddressPoolStart=$IpFormat -f ($LastOctet + 1)
    $AddressPoolEnd=$IpFormat -f ($LastOctet + 1 + $PoolSize)
    $ExternalNetMask=ConvertTo-SubnetMaskFromCIDR -CIDR $PublicCIDR
    Write-Verbose "Public IP Address Pool Start:$AddressPoolStart End:$AddressPoolEnd Mask:$ExternalNetMask"
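    # For instance (illustrative): if the external adapter were 172.20.27.30/24 with PoolSize 8,
    # the pool would span 172.20.27.31 - 172.20.27.39 with mask 255.255.255.0,
    # matching the netsh example earlier in the post.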

    $TargetIpEntries=@()
    $ReservationEntries=@()
    foreach ($Mapping in $NatMappings)
    {
        Write-Verbose "Evaluating Mapping $($Mapping.name)"
        $TargetPublicIp=$IpFormat -f $Mapping.publicIP
        $TargetIpEntry="add address name=`"{0}`" address={1} mask={2}" -f $ExternalAdapterName,$TargetPublicIp,$ExternalNetMask
        $TargetIpEntries+=$TargetIpEntry
        if($Mapping.allowInbound)
        {
            $InboundEnabled="enable"
        }
        else
        {
            $InboundEnabled="disable"
        }
        $ReservationEntry="add addressmapping name=`"{0}`" public={1} private={2} inboundsessions={3}" -f $ExternalAdapterName,$TargetPublicIp,$Mapping.internalIP,$InboundEnabled
        $ReservationEntries+=$ReservationEntry
    }

    $NetshScriptLines=@()
    #IP Addresses
    $NetshScriptLines+="#Set the IP Addresses"
    $NetshScriptLines+="pushd interface ipv4"
    $TargetIpEntries|ForEach-Object{$NetshScriptLines+=$_}
    $NetshScriptLines+="popd"
    #NAT
    $NetshScriptLines+="#Configure NAT"
    $NetshScriptLines+="pushd routing ip nat"
    $NetshScriptLines+="#NAT Address Pool"
    $NetshScriptLines+="add addressrange name=`"{0}`" start={1} end={2} mask={3}" -f $ExternalAdapterName,$AddressPoolStart,$AddressPoolEnd,$ExternalNetMask
    $NetshScriptLines+="#NAT Pool Reservations"
    $ReservationEntries|ForEach-Object{$NetshScriptLines+=$_}
    $NetshScriptLines+="popd"

    return $NetshScriptLines
}

#endregion

#region Execution

Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Detecting Public Adapter" -PercentComplete 5
$ExternalNetAdapter=Get-ExternalNetAdapterInfo -ExcludeIP $NatVmInternalIP
Write-Progress -Activity "Publishing Azure Stack" `
    -Status "External Network Adapter [$($ExternalNetAdapter.Adapter.Name)] $($ExternalNetAdapter.Adapter.InterfaceDescription) - $($ExternalNetAdapter.IPAddress.IPAddress)" -PercentComplete 10
if($ExternalNetAdapter -eq $null)
{
    throw "Unable to resolve the public network adapter!"
}
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Creating Network Script" -PercentComplete 25
$NetshScript=Get-NetshScriptContent -ExternalAdapter $ExternalNetAdapter -NatMappings $Mappings -PoolSize $AddressPoolSize
#Save the file..
$NetShScriptPath=Join-Path $env:TEMP "configure-nat.txt"
$NetShScriptExe= Join-Path $env:SystemRoot 'System32\netsh.exe'

#Run NetSh
Set-Content -Path $NetShScriptPath -Value $NetshScript -Force
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Created Network Script $NetShScriptPath" -PercentComplete 50

Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Configuring NAT $NetShScriptExe -f $NetShScriptPath" -PercentComplete 70
$NetShProcess=Start-Process -FilePath $NetShScriptExe -ArgumentList "-f $NetShScriptPath" -PassThru -Wait
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -Status "Restarting RRAS.." -PercentComplete 90
Restart-Service -Name RemoteAccess
Write-Progress -ParentId $ParentId -Activity "Publishing Azure Stack" -PercentComplete 100 -Completed

#endregion

$ConfigureResult=New-Object PSObject -Property @{
    NetSHProcess=$NetShProcess;
    NetSHScript=$NetShScript;
}
return $ConfigureResult

}

Write-Progress -Id $ActivityId -Activity "Configuring NAT" -Status "Publishing Azure Stack TP3 PoC $NatVmName as $($Credential.UserName)..." -PercentComplete 10
$result=Invoke-Command -ScriptBlock $NatSetupScriptBlock -Credential $Credential -VMName $NatVmName
Write-Progress -Id $ActivityId -Activity "Configuring NAT" -PercentComplete 100 -Completed

return $result
[/powershell]
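The post doesn't include an example invocation, so here is a minimal sketch. The file name Publish-AzureStackNat.ps1, the mappings path, and the credential prompt are assumptions for illustration. Run it on the Azure Stack host as an administrator, since PowerShell Direct (Invoke-Command -VMName) only works from the Hyper-V host; if your NAT VM is not named NATVM, pass the name via -NatVmName.

[powershell]
# Illustrative invocation - the script file name and mappings path are assumptions.
# Run on the Azure Stack host in an elevated console.
$NatAdminCred = Get-Credential -Message "Administrator account on the NAT VM"
$Mappings = Get-Content -Path "C:\Stack\mappings.json" -Raw | ConvertFrom-Json
.\Publish-AzureStackNat.ps1 -Mappings $Mappings -Credential $NatAdminCred -Verbose
[/powershell]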

Certificates

You will need to export the root certificate from your installation's CA so it can be imported on any clients that will access your deployment.  The use of publicly trusted certificates, or of a CA other than the one deployed with Active Directory, is outside the scope of this post.  The number of DNS entries mentioned earlier should give you an idea of how many certificates you would need to obtain to spare your clients this step.  When operating public Azure, this is one place where it must be very nice to be a trusted root certificate authority.

Exporting the root certificate is very simple, as the PoC host system is joined to the domain that hosts the Azure Stack CA (both roles reside on the ADVM).  To export the root certificate to your desktop, run this simple one-liner in the PowerShell console of your host system (the same command will work from the ClientVM).

[powershell]
Get-ChildItem -Path Cert:\LocalMachine\Root| `
    Where-Object{$_.Subject -like "CN=AzureStackCertificationAuthority*"}| `
    Export-Certificate -FilePath "$env:USERPROFILE\Desktop\$($env:USERDOMAIN)RootCA.cer" -Type CERT
[/powershell]

The process for importing this certificate on your client will vary depending on the OS version; as such I will avoid giving a scripted method.

Right-click the previously exported certificate.

installcert.png

Choose Current User for most use-cases.

imprt2_thumb.png

Select Browse for the appropriate store.

imprt3.png

Select Trusted Root Certification Authorities.

imprt4.png

Confirm the Import Prompt

Enjoying the Results

If you now connect to the portal, you should be prompted to log in via Azure Active Directory, and upon authentication you should be presented with a familiar portal.

You can also connect with the Azure cmdlets if that is more your style.  We'll make a slight modification to the snippet from Mike De Luca's post, How to Connect to Azure Stack via PowerShell.

[powershell]
$ADDomainName="yourdomain.com"
$AADTenant="youraadtenant.com"
$AADUserName="yourStackUser@$AADTenant"
$AADPassword=ConvertTo-SecureString -String "ThisIsYourAADPassword!" -AsPlainText -Force
$AADCredential=New-Object PSCredential($AADUserName,$AADPassword)
$StackEnvironmentName="Azure Stack ($ADDomainName)"
$StackEnvironment=Add-AzureRmEnvironment -Name $StackEnvironmentName `
    -ActiveDirectoryEndpoint ("https://login.windows.net/$AADTenant/") `
    -ActiveDirectoryServiceEndpointResourceId "https://$ADDomainName-api/" `
    -ResourceManagerEndpoint ("Https://api.$ADDomainName/") `
    -GalleryEndpoint ("Https://gallery.$($ADDomainName):30016/") `
    -GraphEndpoint "https://graph.windows.net/"
Add-AzureRmAccount -Environment $StackEnvironment -Credential $AADCredential
[/powershell]
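Once Add-AzureRmAccount succeeds, a quick sanity check (illustrative, using standard AzureRM cmdlets) confirms the environment and subscription are wired up:

[powershell]
# Illustrative verification after connecting.
Get-AzureRmEnvironment -Name $StackEnvironmentName
Get-AzureRmSubscription
Get-AzureRmResourceGroup | Select-Object ResourceGroupName, Location
[/powershell]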

Your Azure Stack TP3 PoC deployment should now be available to serve clients from the public internet in an Azure-consistent manner.

 

[Unsupported]