An Azure Virtual Machine is a simulated computer (known as a guest) hosted within a physical computer (known as the host). Virtual machines have been around for decades, but their technical capabilities have advanced greatly in recent years and they are now a significant commodity in hosted infrastructures. A virtual machine behaves like an actual physical computer, but it shares the host's physical pool of resources (memory, buses, processing power and storage) with other virtualised infrastructure. The end user can connect to their virtual machine and it will look and feel as if it were a physical computer. The host computer runs a very specialised, reduced operating system (the hypervisor) which takes care of the security, sharing and scheduling of all the guest operating systems running on top of it.
Azure Virtual Machines allow IT architects to build infrastructure that supports the business. An organization can easily set up temporary, and therefore cost-effective, environments for development and testing, or transfer business-critical applications from on-premises servers to more advanced, reliable and economical hardware. It allows organizations to try new ventures in a safe way, such as trialling new operating systems like Linux or open-source application software. It allows businesses to stretch and flex in a ‘fail fast’ way; if the business project or need is no longer relevant, the environment can be switched off or even deleted without leaving redundant hardware.
Azure Virtual Machines are created through the Azure portal, which can be found at https://portal.azure.com or through programming interfaces such as PowerShell. The simplest way to create an Azure Virtual Machine is using the portal; a browser-based user interface for interacting with Azure. It’s a straightforward process to create and configure Azure Virtual Machines and there’s even a Quick Start so that your Virtual Machine is up and running within minutes.
The difference between an Azure Virtual Machine and an on-premises Virtual Machine is that, in Azure, the IT architect does not control the host machine or its operating system. All of the configuration is done through the cloud platform, whether via the browser-based portal or programming interfaces such as PowerShell. In this example, we will create a new SQL Server Virtual Machine in Azure, using an image from the Azure gallery.
In the portal, open the New window. Select the Compute option and then select See all.
In the search field, type SQL Server 2017, and press ENTER.
To narrow the results, click the Filter icon and select the Windows SQL Server image published by Microsoft.
Select the image named SQL Server 2017 Developer on Windows Server 2016.
Under Select a deployment model, ensure that Resource Manager is selected.
There will be a number of options for configuring the Virtual Machine, such as its size, location, and security information. Once you have selected the relevant options, select Deploy. The Virtual Machine will take a few moments to deploy.
Once the deployment has completed, you can connect to the VM remotely using Remote Desktop Connection on your PC or, in the case of our SQL Server installation, through the SQL Server enterprise tools.
When creating an Azure Virtual Machine, you will be presented with a wide choice of size codes, from A0 to M128s. These represent the intended use and configuration of your virtual machine: essentially, how many cores, and how much RAM and storage, it includes, although there are other intricacies as well. Your choice depends on the workloads you want to run on the virtual machine. The most important thing is that you understand what the virtual machine will be used for. Once this decision is made, the IT architect can select the series and the size of virtual machine.
How does Virtual Machine selection differ from sizing on-premises Virtual Machines? The machine will need as much RAM, CPU and disk as your operating system and applications will consume, and in this respect selecting an Azure Virtual Machine is identical to selecting the size and configuration of on-premises physical or virtual machines today.
One key aspect of Virtual Machine selection that is different, however, is that the Azure cloud environment allows the IT architect to scale. With some restrictions, you can scale your virtual machine up to a more powerful instance or down to a less powerful and cheaper virtual machine. Azure Virtual Machines also offer high availability (HA) via scale-out. For the on-premises architecture, this would require densely packed hardware and the IT team would have to take care of the Virtual Machine hosts, networks and storage whilst also thinking about redundancy and ensuring that the virtual machines were running at all times. Azure is different because the cloud takes care of that work for the IT team and offers high availability as part of that process.
Azure allows organizations to be cost-effective by setting up a group of smaller machines which share workloads and can be turned on or off according to demand or on a timed schedule. Effectively, Azure charges for the compute power you are using when the virtual machines are turned on and doesn’t charge for virtual machines that are turned off. The organization is only paying for any persistent storage or networking of the virtual machines when they are powered off, but not for unused compute power.
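The pay-for-what-you-use model described above can be sketched as a simple calculation. Here is a minimal illustration in Python; the hourly compute and monthly storage rates are hypothetical placeholders, not real Azure prices.

```python
# Sketch of the billing model: compute is charged only for the
# hours the VM is running, while persistent storage is charged
# regardless of power state. All rates below are illustrative.
def monthly_cost(hours_on, compute_rate_per_hour,
                 storage_gb, storage_rate_per_gb_month):
    compute = hours_on * compute_rate_per_hour          # billed only while on
    storage = storage_gb * storage_rate_per_gb_month    # billed even when off
    return compute + storage

# A VM run only during working hours (8h x 22 days = 176h)
# versus the same VM left on 24/7 (~720h a month):
part_time = monthly_cost(176, 0.10, 128, 0.05)
always_on = monthly_cost(720, 0.10, 128, 0.05)
```

Even with placeholder rates, the comparison shows why scheduling machines off outside working hours can cut the bill substantially: the storage component stays fixed while the compute component scales with hours on.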
Selecting a Virtual Machine Series
To select the correct Virtual Machine series, the IT architect will need to know the intended workload. Each virtual machine type is optimised to run a different workload, so it’s essential that this planning is done first. For example, if you are looking for a virtual machine that can work with Big Data solutions, then the organization should select a virtual machine from the High Performance Compute VM series. At the time of writing, Microsoft offers six virtual machine types:
• General Purpose – Balanced CPU-to-memory ratio
• Compute Optimised – High CPU-to-memory ratio
• Memory Optimised – High memory-to-CPU ratio
• Storage Optimised – High disk throughput and IO
• GPU – Specialised virtual machines for heavy graphics rendering and video editing
• High Performance Compute – Fastest, most powerful CPU with optional high-throughput network interfaces (RDMA)

Once the series has been selected, the IT architect can choose the virtual machine size.
Selecting a Virtual Machine Size
One key piece of advice to note is that if the organization believes that they may need to move up to another larger virtual machine in the future, then it is best to check that the larger machine is available in the same hosting region (e.g. UK South, West US) as the original virtual machine. Otherwise, the organization will have to move the virtual machine to the new region. Although it’s not an onerous task to move a virtual machine from one region to another, it is best to avoid if possible.
The following table will help the IT architect to identify the correct size of virtual machine for the requirements.
To summarise, choosing an Azure Virtual Machine is a crucial part of the transition to cloud. There is a good choice available and you have the ability, with some restrictions, to switch in the future as your needs change.
The Azure Pricing Calculator, located at https://azure.microsoft.com/en-gb/pricing/calculator helps you to predict the estimated monthly Azure bill for any Azure workload. Once you have Azure services running, the Azure Portal helps you to monitor actual costs that you have incurred.
Figure 1 Azure Pricing Calculator website
The Azure Pricing Calculator helps you understand the costs of moving your technical estate to Azure, and to estimate pricing once your data and applications are in Azure. The calculator allows you to view the price for different sizes and configurations of your Azure Virtual Machines in terms of the machine’s CPU, memory, storage, location and hours in use. You can add any combination of Azure services to the calculator and view the pricing for a complete solution. This allows you to make better decisions on your move to the cloud by expediting the cost component of the decision.
The calculator is also useful in determining if you have all of the crucial resources in place for a successful migration to the cloud as relevant Azure services will be suggested when you add a component. For example, if you add a virtual machine, you will typically require storage so the calculator helpfully adds that component into the pricing.
Since the Azure Pricing Calculator allows you to mix your configurations before you make your purchase, the cloud migration process becomes clearer. This facility is particularly valuable when the technical estate of the cloud infrastructure is in a constant state of change; Microsoft Azure releases new updates and features monthly. This flexibility means there are many different choices to be made, and the calculator not only helps you plan for your costs but can even help reduce them, by overcoming the challenge of comparing your existing costs with the cost impact of moving to Azure.
Azure has a great deal of choice but, in some ways, too much choice can be a difficult problem to have! The Azure Pricing Calculator helps navigate the complexities of the Azure migration and choose the optimal configuration and pricing for your environment. By proactively playing with the Azure Pricing Calculator, you can simulate various scenarios amongst the various Azure instances, types and features that are available.
Often, it can be perceived that organisations need to move all of their estate to the cloud but in reality, this is not always the case. When onboarding your technical and data infrastructure to the cloud, it can be a good idea to start small in order to set yourself up for success. The Azure Pricing calculator can help you to price up different scenarios to help you to navigate hybrid architectures as well as full cloud architectures.
Microsoft Azure is a cloud computing platform and infrastructure created by Microsoft, and the Azure Portal is one way for administrators to work with the cloud-based services and resources held in Azure. It’s extremely straightforward and, as it’s browser-based, doesn’t require any new client software to be installed.
The portal can be found at portal.azure.com and is sometimes known as the Azure Resource Manager (ARM) portal. The Azure Portal allows users to carry out a range of activities in Azure, including creating and browsing resources, configuring settings for services such as Virtual Machines, and monitoring resources while they are in operation.
Due to the range of activities available on the portal, a detailed description is beyond the scope of a brief article but the main activities of the portal are very easy to use. To log in to the Microsoft Azure portal, open a browser and navigate to https://portal.azure.com. Log in with your Azure subscription account or if you don’t have one yet, you can set one up using the link on the portal page.
Once you are logged in, you can see the Azure dashboard. There is a good search facility, which means that developers and IT architects can find what they need quickly. You can also see your account information in the top right-hand corner. The portal itself is free to access and use.
It’s possible to bring your existing knowledge to bear on Azure. For example, the portal has its own Bash functionality, and you can deploy JSON templates and your existing web apps via the portal. Azure offers a wide range of services through the portal, but everything is located in one place. This unified approach means that people can find what they want quickly, rather than having to use different interfaces or applications for different things.
Like most administrative tasks, once your Azure deployments are established, well-known and documented, it’s more likely the Azure API or PowerShell interface will be used to provide ongoing automated operations and functions. For example, a PowerShell script could spin up a new instance of a pre-configured virtual machine with SQL Server for a marketing team who want to store the results of a campaign. It is straightforward to include this in your operations workflow, rather than expecting an IT administrator to log into the portal and create the virtual machine by hand.
From the Finance perspective, you can access billing information through the portal, so it’s possible to keep an eye on costs for each service. User rights can be set to allow IT administrators access to the Azure services but not the subscription or billing information, and vice versa for finance users. The Azure portal uses Power BI to provide context and clarity to the billing information, as well as other types of data such as service and maintenance information. From the users’ point of view, experience gained in the Azure portal therefore transfers easily to Power BI, another interesting and useful data visualisation and reporting technology from Microsoft.
To summarize, the Azure portal is a unified window into Microsoft Azure. It’s an easy, one-stop-shop to everything Azure.
But why do we need containers? What do containers provide that virtual machines can’t?
For developers, containers unlock the ability to build an application, package it within a container, and deploy it, knowing that wherever you deploy that container it will run without modification, whether that is on-premises, in a service provider’s datacenter or in a public cloud such as Azure. You can also have complex multi-tier applications, with each tier packaged in a container.
So that’s containers in a nutshell. What about Nano Server? Is that a special edition for my granny?
If you are hosting lots of VOSEs (virtual operating system environments), the last thing you want is for the host OS to reboot, because that means everything running on that host either needs to migrate to another server or also reboot. You want to minimise what’s running to reduce the resources used and the surface area open to bugs and attacks. Yes, I used the B word.
Windows Server 2008 introduced Server Core, a hugely reduced installation intended to support just specific workloads such as hosting VOSEs. Windows Server 2012 improved Server Core so it was more modular: you could install and configure the server and then switch into Server Core, whereas in 2008 it was an either-or choice at installation.
Windows Server 2016 goes further with Nano Server. Just to give you an idea of the scale here, the charts below compare setup time, disk footprint and VHD size between the already trimmed Server Core installation and the new Nano Server.
Now the big question here is how do you licence Nano Server?
Well, Nano Server is a deployment option within Windows Server 2016. It’s included as part of the licensing of both Standard and Datacenter editions so there is no unique or separate licensing for Nano Server. Good news.
Look Like an Expert with these Extra Facts
Q – Will the Core Infrastructure Suite SKU also be core based licensing?
A – Yes, Core Infrastructure Suite is a single SKU incorporating both Windows Server and System Center at a discount. It will also move to core-based licensing when Windows Server 2016 and System Center 2016 are released.
Q – Is the Windows Server External Connector available at the release of Windows Server 2016?
A – Yes, the Windows Server External Connector license will still be available for external users’ access to Windows Server. Just like it is today, an external connector is required for each Windows Server the external user is accessing.
Q – How should I think about hyper-threading in the core based licensing?
A – Just count the physical cores. Windows Server and System Center 2016 are licensed by physical cores, not virtual cores. So you only need to inventory and license the physical cores on the processors.
Q – If processors (and therefore cores) are disabled from Windows use, do I still need to license the cores?
A – No, if the processor is disabled for use by Windows, the cores on that processor don’t need to be licensed. For example, if 2 processors in a 4 processor server (with 8 cores per processor) were disabled and not available for Windows Server use, only 16 cores would need to be licensed. However, disabling hyper threading or disabling cores for specific programs does not relieve the need for a Windows Server license on the physical cores.
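The arithmetic in this answer can be captured in a short sketch. The function below is a hypothetical helper for illustration, not part of any Microsoft tooling.

```python
# Only cores on processors enabled for Windows use need licensing.
# Hyper-threading is ignored: count physical cores only.
def licensable_cores(cores_per_processor, total_processors,
                     processors_disabled_for_windows=0):
    enabled = total_processors - processors_disabled_for_windows
    return enabled * cores_per_processor

# The example from the answer: a 4-processor server with 8 cores per
# processor, 2 processors disabled for Windows use.
licensable_cores(8, 4, processors_disabled_for_windows=2)  # -> 16
```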
Don’t Forget CALs
Windows Server Standard and Datacenter editions will continue to require Windows Server CALs for every user or device accessing a server (See the Product Use Rights for exceptions).
Some additional or advanced functionality will continue to require the purchase of an additive CAL. These are CALs that you need in addition to the Windows Server CAL to access functionality, such as Remote Desktop Services or Active Directory Rights Management Services.
Feel free to contact us if you have any questions – we love to hear from you!
If the customer tries to create a new D series VM in the same VNet or cloud service, they will also receive the following warning message, telling them the cloud service doesn’t support those compute units.
If you create an A series VM in a new cloud service, Azure’s cloud fabric will host that VM in a cluster that currently may only support the A series. That’s why you’ll see the behaviour our customer experienced.
It is also not possible to move a VM between cloud services, so even if you had a service already hosting D series VMs, the customer would need to delete their VM (choosing the option to keep the attached disks) and recreate it from the attached disks in the other cloud service.
So our little trick would be for this customer to create the VM as a D series initially and as soon as it’s created, scale the VM down to an A2. That way Azure will host the VM in a cluster capable of supporting both A and D series compute units. The customer can scale up, down and mix VMs of A and D series to their heart’s content (with the exception of the A8-A11 compute sizes). The image below shows a cloud service with both A and D series compute units.
This doesn’t work with G series currently but at present they can only be hosted in the West US and East US 2 data centres anyway. Of course the feature release cadence of Azure is rapid so it’s likely this will be possible at some point in the future.
How would the customer have known to create the D series first to avoid this trap? We’d recommend utilising a Microsoft partner with experience in Azure services or attend one of our training courses; that’s what we’re here for.
The tool can download current Azure pricing with the click of a button and works in multiple currencies (24 at the time of writing). You can also generate a report on the detailed infrastructure cost broken down by compute, bandwidth, data, support, etc. Scenarios can be exported to XML, but unfortunately there’s no way yet to use this generated file with PowerShell to automate the set-up of a particular package.
The scan agent supports Microsoft technologies (Hyper-V, SCVMM), VMware technologies (vCenter, ESXi) and physical environments (Windows and Linux). Future updates may include XenServer support and the option to import workloads from MAP and vSphere.
Download the tool today and, if you have any useful feedback or suggestions, please email feedbackAzureCalc@microsoft.com.
We’ll cover a technical look at RemoteApp in an upcoming blog post but in this post we examine what Azure RemoteApp provides and how to licence it.
Why is RemoteApp Useful?
According to Microsoft, around 75% of employees bring technology of their own to work and nearly 30% of employees use three or more devices at work. These employees clearly want to access corporate resources from their devices. One way for IT to provide this is through desktop and application virtualization, where the device is merely used as a ‘window’ to the user’s full Windows desktop running remotely on a server somewhere. So a user could be sitting in their favourite coffee shop, using their iPad, viewing and interacting with their company PC desktop and applications.
There may be times when the employee doesn’t need access to an entire desktop session but just wants to run a business application virtually. Azure RemoteApp allows IT to deliver virtual application sessions from the cloud. If the distinction isn’t quite clear, imagine sitting in front of your PC or laptop and seeing your Windows Start button, background picture and the huge number of icons and shortcuts on your messy desktop (unless you’re one of those tidy-desktop people). Now imagine doing exactly the same from a different device, such as your home PC, iPad or Windows tablet. You’re seeing your entire desktop, from which you would then run applications.
Now imagine using your iPad, home PC or Windows tablet, where you have a shortcut to a business application that you need for work. You run that application and see the application’s window on your device as if it were a native application installed locally. That’s RemoteApp.
You should now understand the first advantage; IT don’t need to virtualize and expose entire desktops, but just collections of applications.
Secondly, at the time of writing, Azure Virtual Machines (VMs) are primarily for hosting middle-tier applications. You wouldn’t spin up an Azure VM, pop client software on it and allow lots of users to remote desktop into it. Technically it can work, but an Azure VM only includes 2 Remote Desktop Services (RDS) licences, so any more than two people connecting at a time requires additional RDS licences. Azure VMs are good for hosting the middle-tier applications that client (front-end) applications connect to; the front end might be a Windows application or a web-based application.
Azure RemoteApp is designed to virtualise a client application for multiple users from the cloud, and all the necessary infrastructure licences are included, including all the RDS licences.
So could a customer deploy Microsoft Office 365 ProPlus onto Azure and deliver it virtually to users via RemoteApp? Yes; in fact, here’s a nice little webcast from the Office team stating just that. Office on-premises is still licensed per device and doesn’t allow licence mobility, so Office licences acquired on-premises can’t be used with the Azure RemoteApp service; it’s Office 365 ProPlus only.
We must be clear here about the applications that are supported; RemoteApp delivers applications running on Windows Server in Microsoft Azure. Applications must therefore be compatible with Windows Server 2012 R2.
Azure RemoteApp has a selection of pre-built application collections to choose from, or IT can upload template images to the Azure management portal. Users obtain the appropriate Azure RemoteApp client for their device via http://remoteapp.azure.com. When they launch the client, they are prompted to log in, choosing to authenticate with their corporate credentials, a Microsoft account (e.g. Outlook.com) or their Azure Active Directory account. After authenticating, the user will see the applications their IT Admin has given them access to and can then launch whichever application they require.
Each user has 50GB for persistent user data and because Microsoft is using RemoteFX technology here, users will get a great experience: applications will support keyboard, mouse, local storage, touch and some plug-and-play peripherals on Windows client devices. Other platforms will only support keyboard, mouse and touch. Local USB storage devices, smartcard readers, local and network printers are supported and the RemoteApp application will be able to utilize multiple monitors of the client the same way a local application can.
How is Azure RemoteApp Priced?
In order to get started with Azure RemoteApp, you will need an Azure account. Azure RemoteApp is priced per user and is billed on a monthly basis.
The service is offered at two tiers: Basic and Standard. Basic is designed for lighter weight applications (e.g. for task workers). Standard is designed for information workers to run productivity applications (e.g. Office).
The service price includes the required licensing cost for Windows Server and Remote Desktop Services but it doesn’t include the application licence, for example you still need an Office licence if you wish to use that. The bandwidth used to connect to the remote applications (both in and out) as well as bandwidth used by the applications themselves is also included with the service.
Each service has a starting price that includes 40 hours of connectivity per user. Thereafter, a per-hour charge is applied for each hour up to a capped price per user. You won’t pay for any additional usage after the capped price in a given month. Azure RemoteApp billing is pro-rated per day in case you remove a user’s access part-way through a month.
As we mentioned, you create app collections which contain the applications you wish to run, and you can assign these collections to a set of users. Currently you can create up to 3 app collections per customer, and each app collection is billed for a minimum of 20 users. If you have fewer users, you’ll still be billed for 20. Hopefully this will change, as it’s a bit of an Achilles’ heel for small businesses. RemoteApp Basic scales to 400 users per collection and RemoteApp Standard to 250. If you want to extend any of these limits, or if you want users to access more than one app collection, you’ll need to contact Azure support.
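The billing rules described above (40 included hours per user, an hourly overage charge up to a capped price, and the 20-user minimum per collection) can be modelled with a short sketch. All prices here are placeholders, not real Azure rates.

```python
# Illustrative model of the RemoteApp billing rules described above:
# a base price includes 40 hours, overage is charged per hour up to a
# per-user cap, and each collection bills a minimum of 20 users.
def remoteapp_monthly_estimate(users, avg_hours_per_user,
                               base_price, hourly_rate, price_cap,
                               included_hours=40, min_billed_users=20):
    billed_users = max(users, min_billed_users)      # 20-user minimum
    extra_hours = max(avg_hours_per_user - included_hours, 0)
    per_user = min(base_price + extra_hours * hourly_rate, price_cap)
    return billed_users * per_user

# 25 users averaging 60 hours, with placeholder prices:
estimate = remoteapp_monthly_estimate(25, 60, base_price=10,
                                      hourly_rate=0.2, price_cap=17)
```

Note how a collection with, say, 12 users still bills for 20; small teams pay the same as a 20-user team until the minimum changes.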
We must reiterate that the customer is responsible for complying with the use rights of the applications they bring onto the RemoteApp service. This includes Office and, as you can see at the bottom of this table, Office ProPlus can be utilized as one of the installs for licensed users, treated as Shared Computer Activation.
Most existing 32-bit or 64-bit Windows-based applications run “as is” in RemoteApp but there is a difference between running and running well. There’s guidance on the RemoteApp documentation pages at azure.com.
So in summary:
• Azure RemoteApp is priced per user per month
• The service is offered at two tiers: Basic and Standard
• Basic is designed for light-weight applications
• Standard is designed to run productivity applications
• Each service has a starting price that includes 40 hours of service per user
• Thereafter, an hourly charge is applied for each user hour, up to a capped price per user
• No charge for any additional usage above the capped price in a given month
Just when you think Microsoft licensing is straightforward and you’ve got a pretty good grasp on it, along comes SQL Server, which has historically been the exception to the licensing rules. However, with SQL Server 2012, Microsoft has done a great deal of simplification, so the basics are easy to understand. You’ll approach licensing differently depending on whether you’re deploying SQL Server in a physical or virtual environment.
SQL Server Licensing in a physical environment.
SQL Server is available in three main editions: Standard, Business Intelligence and Enterprise. The Enterprise edition is licensed per core (no CALs required), Business Intelligence is licensed per server plus client access licences (CALs), and the Standard edition can be licensed using either method. This is summarised below.
Before I present a little flowchart which might make your decision easier, let me clarify a few things about per-core licensing. We are talking per core here and not per physical processor, unlike Windows Server 2012. Currently SQL Server 2012 and BizTalk Server 2013 are the only Microsoft products licensed per core.
To find out the appropriate number of cores you need to licence, simply count the number of cores in each physical processor in the physical server. Software partitioning doesn’t reduce the number of cores you need to licence. Once you have that you need to remember three things:
1. You need a minimum of four core licences per processor. So for a dual-core, dual-processor machine you would count each processor as having four cores and purchase licences for eight cores, despite the server only having four cores in total.
2. SQL Server 2012 core licences are sold in packs of two: each SKU covers two cores. So in our example above we would purchase four SQL core licence SKUs to cover eight cores.
3. Certain AMD processors need to have a multiplication factor of 0.75 applied to the core count. See this link for the processors in question and what to do.
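The three rules above reduce to a small calculation. The function below is illustrative only; its name and structure are our own, not Microsoft tooling.

```python
import math

# Physical per-core licensing sketch based on the three rules above.
# core_factor models the 0.75 multiplier applied to certain AMD
# processors; it defaults to 1.0 for unaffected processors.
def sql_core_licences(cores_per_processor, processors, core_factor=1.0):
    # Rule 1: minimum of four core licences per processor.
    per_proc = max(cores_per_processor, 4)
    cores = math.ceil(per_proc * processors * core_factor)
    # Rule 2: licences are sold in two-core packs (SKUs).
    skus = math.ceil(cores / 2)
    return cores, skus

# The dual-core, dual-processor example: counted as 2 x 4 = 8 cores,
# purchased as four two-core SKUs.
sql_core_licences(2, 2)  # -> (8, 4)
```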
For server and CAL, SQL Server works in the same way as any other Microsoft server + CAL product. Licence the server(s), determine the number of unique users and/or devices accessing the SQL Server and purchase the appropriate number and type of CALs. SQL 2012 CALs will allow access to all previous versions of SQL Server. Also you don’t require a separate CAL for every SQL Server; a SQL Server 2012 CAL allows access to all the SQL Servers within the organisation.
A simple way of determining the edition and licensing of SQL Server 2012 is below.
SQL Server Licensing in a virtual environment.
Regular readers of the licensing blog will be saying “I bet this has something to do with Software Assurance (SA)”. Well, you’re partly correct. I’m going to assume you’re running Windows Server 2012 Datacenter edition on these boxes just for simplicity and I haven’t included details of the VOSE OS.
For SQL Server Standard and Business Intelligence editions you can licence individual virtual machines (VMs) using the server + CAL model. Simply purchase one server license for each VM running SQL Server software, regardless of the number of virtual processors allocated to the VM. Then purchase the appropriate number of CALs.
For example, a customer who wants to deploy the Business Intelligence edition running in six VOSEs, each allocated with four virtual cores, would need to assign six SQL Server 2012 Business Intelligence server licences to that server, plus the CALs to allow access.
For SQL Server Standard and Enterprise editions you can licence individual VMs using the per-core model. As with physical OSEs, all virtual cores supporting virtual OSEs that are running instances of SQL Server 2012 must be licensed. Customers must purchase a core licence for each virtual core (aka virtual processor, virtual CPU, virtual thread) allocated to the VOSE. Again, you are subject to the four-core minimum, this time per VOSE. For licensing purposes, a virtual core maps to a hardware thread. When licensing individual VMs, core factors (i.e. the AMD 0.75 factor) do not apply.
Two examples are shown below for clarification: SQL Server core licences required for a single VOSE on a dual, four-core processor server and for two VOSEs.
With the SQL Server 2012 Enterprise edition (note: not Standard edition), if you licence all the physical cores on the server, you can run an unlimited number of instances of SQL Server, physically or virtually, as long as the number of OSEs running SQL doesn’t exceed the number of licensed cores. For example, a four-processor server with four cores per processor provides sixteen physical cores. If you licence all sixteen cores, you can run SQL Server in up to sixteen VOSEs (or the physical OS and fifteen VOSEs), regardless of the number of virtual cores allocated to each VM. What if you want to run more than sixteen VOSEs in this case? Well, you are permitted to assign additional core licences to the server; this is known as licence stacking.
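The per-VM and licence-stacking arithmetic described above can be sketched as follows; these helpers are illustrative, not official tooling.

```python
# Per-VM core licensing: each VOSE needs a licence per virtual core,
# subject to the four-core minimum per VOSE. Core factors (the AMD
# 0.75 multiplier) do not apply when licensing individual VMs.
def vm_core_licences(virtual_cores_per_vm):
    return sum(max(v, 4) for v in virtual_cores_per_vm)

# Two VOSEs with 2 and 6 virtual cores -> 4 + 6 = 10 core licences.
vm_core_licences([2, 6])  # -> 10

# Enterprise edition on the host: licensing all physical cores allows
# one OSE running SQL per licensed core; stacking additional core
# licences raises the permitted count.
def enterprise_max_oses(physical_cores, stacked_core_licences=0):
    return physical_cores + stacked_core_licences

# Sixteen licensed physical cores -> up to sixteen OSEs running SQL;
# stacking eight more core licences raises that to twenty-four.
enterprise_max_oses(16)     # -> 16
enterprise_max_oses(16, 8)  # -> 24
```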
Here’s where Software Assurance comes into play. Licence all the physical cores with SQL Server 2012 Enterprise edition and Software Assurance, and your licence rights are expanded to allow any number of instances of the software to run in any number of OSEs (physical or virtual). This SA benefit enables customers to deploy an unlimited number of VMs to handle dynamic workloads and fully utilize hardware computing capacity. As with most SA benefits, this licence right ends if SA coverage on the SQL core licences expires.
Licensing for maximum virtualization can be an ideal solution if you’re looking to deploy SQL Server in private-cloud scenarios with high VM density, if Hyper-Threading is in use so you’re looking at a lot of virtual cores to licence, or if you’re using dynamic provisioning and de-provisioning of VM resources and don’t want the headache of adjusting the licence count. As you can see in the diagram below, this can be very cost-effective.
We run a regular licensing spotlight call on behalf of Microsoft where we cover this and other topics in more detail. Please join us for the next call and you can view archived calls here.