The Need for Azure Stack (Part 1): Business Value

Part 1 - Business value

Introduction

This wiki post consists of two parts. The first part highlights the business aspect that lays the foundation for Azure Stack. The second part introduces Azure Stack as a technical product, briefly explaining all functionalities and scenarios. It is highly recommended to read this first part to really understand the product's full potential.

If you do not have time to read the whole article, head over to the Conclusions section of this part and then to Part 2.

http://www.ruudborst.nl/wp-content/uploads/2016/03/software-defined_data_center_stacks4-2.jpg

The goal of this article is to explain why Microsoft- and even Linux-oriented IT organizations need Azure Stack in the data center. As a technical person, I can tell you everything that is great about Azure Stack, but that would only cover the technical aspect. It would not feel complete, especially when you want to make non-technical people aware of its full potential, and of why it is important for cloud businesses to start as soon as possible. As with every product, there is a business aspect behind it that cannot be left underexposed; certainly not in a new cloud era, fueled by the Software-Defined Data Center (SDDC) and containerized Platform as a Service (PaaS) offerings. A second cloud era, in which applications are re-invented.

Not only does the Azure Stack slogan, "Azure brought to your data center", sound incredible, but so does the whole concept behind it! It is an entirely new way of doing business in the cloud, opening up a whole new ball game and dramatically changing clouds and the way companies invest in IT. After some extensive research, things became clear enough to write this article and share my thoughts and findings with you. It is not easy to get a complete picture of the cloud landscape; each time you scratch another surface. Not only because of all the time needed to research things and their enormous vastness, but also because of the rapid developments happening every day. So please don't sue me if you find anything incomplete, inaccurate, or outdated :) This article is solely here to share my vision, with substantiated information, about the cloud revolution happening today. I hope it inspires you enough to research or play with these exciting technologies yourself, and perhaps to visualize what this revolution could mean for your business and for you as a professional.

Back to top

Business value

Cloud and application landscape

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA1.png

The application landscape is changing rapidly, with endless new cloud-hosted possibilities. As a result, more and more traditional applications are transformed into cloud-native ones. This means that businesses, and in particular independent software vendors (ISVs), are utilizing new technology that their developers have adopted to simplify application compatibility, development, and, more importantly, deployment. These innovations fuel a consistent, streamlined application delivery model, where deploying, testing, and bringing an application to production is a matter of hours instead of weeks or even months.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA2.png

Making use of these new cloud application deployment technologies, combined with agile DevOps processes supporting a rapid delivery model, provides enormous efficiency and flexibility. More and more traditional (monolithic) client-server applications are turning into cloud-service-compatible ones. They can be managed by the IT admin as an ISV-specific Software as a Service (SaaS) application, or deployed by developers as a custom Platform as a Service (PaaS) application in the public or hosted cloud. These PaaS services offer various development application programming interfaces (APIs) and highly automated resource orchestration managers, which provision and manage all resources needed by the application, completely taking care of the underlying infrastructure.

By deploying applications to a service in the public or hosted cloud, companies no longer have to rely on their traditional, complex infrastructure. That infrastructure would otherwise be used to host their monolithic client-server application, either on older physical infrastructure on-premise or at a cloud service provider using Infrastructure as a Service (IaaS). Deploying an application there, with all of its supporting tiered systems for compute, network, security, and storage, can take ages. Updating one system means updating every system that depends on it, and thus the whole tier; scaling a system that has become a bottleneck means scaling the rest of the tier with it. Scripting and manual configuration are standard practice in these environments for scaling, maintaining, and deploying systems, which makes consistent, reproducible deployments all but impossible.

The complexity involved for just a single application already makes this an outdated and very expensive approach. Expenses pile up in hardware, support, maintenance, and the qualified IT Pros managing the infrastructure, as well as in the supporting staff servicing and supervising the IT (infrastructure) solution; they guarantee the quality and availability of the product to customers or stakeholders.

Still, a large portion of these traditional monolithic applications is hosted on complex infrastructure, in on-premise environments and in private clouds at cloud service providers. Everything is maintained by an IT workforce with potentially high staff turnover. Sounds scary, right? It is also very costly and, in the end, that is what it is all about: costly operations that no longer add value, certainly not in today's new cloud landscape.

Happening now

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA3.jpg

From an outsider's perspective, these developments may seem like something for the foreseeable future. When you are a cloud provider or an ISV, however, they have already been taken into account. Most managing boards are fully aware of the rapid technological and cultural shifts in the cloud application landscape over the past two years, in which the center of gravity has moved from traditional IaaS to PaaS in a Hybrid Cloud: an elastic and consistent, cloud-application-first model, as opposed to building the complex infrastructure first and deploying the application second.

It is all about the modern, easily scalable and deployable, infrastructure-independent cloud application, deployed in the Hybrid Cloud: an on-premise environment connected to a public and a private/hosted cloud.

Companies have to adjust their former IT strategy in time to survive in this imminent new cloud era, or face the consequences in the long run and risk being overtaken by bigger fish in the pond that have already adopted the new way of doing cost-effective business in today's changing cloud and application landscape.

Shifts

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA4.png

The technology shift is primarily caused by new container and virtualization technology, changing all other technology on its path to success. It is accompanied by a high degree of exceptional automation by cloud providers, who make the whole package available as a service and offer customers a rich, simplified deployment experience that their developers have adopted. This experience ultimately changes business requirements within companies, causing the cultural shift. Customers now want to scale and deliver an application quickly and efficiently, following the latest deployment technologies and standards, and in the process make better use of compute, network, storage, and application resources. By combining this deployment experience with agile processes following the latest DevOps practices, they get shorter development cycles and rapid deployments, continuously updating and adding application functionality along the way; eventually they control and reduce costs, stay ahead in the IT space, and differentiate themselves from the competition.

Back to top

Competition

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA5.png

There is much competition between the public cloud giants, offering the newest features and technology with their services. They make it easier for customers to host applications in the cloud without manually deploying any infrastructure, offering automated delivery models and new PaaS services, backed by highly automated resource managers that provision and orchestrate all resources needed for a particular application.

Competition between cloud providers pushes cloud technology innovations further along; service maturity is reached in months instead of years. Most features offered in the public cloud can't be made available on-premise or in other clouds due to tight integration with related public cloud services and their complex hyper-scale setup. These features compete directly with IT solutions on-premise or in other smaller clouds, grabbing a large share of revenue from IT companies and smaller cloud providers. Eventually, over time, this results in mass consolidation of a large portion of IT cloud solutions to well established and adopted cloud services in the hosted or public cloud.

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA6.jpg

Competition comes not only from giants like Microsoft, Amazon, and Google, but also from hardware vendors overtaken by network, storage, and hardware virtualization, who are changing course and heading into the software-defined-everything cloud themselves. They grab a piece of the pie by offering their virtual software and hardware services in a software-defined data center (SDDC) in the private or hosted cloud, and are now competing more aggressively against the established public cloud providers who are partly responsible for their change in strategy in the first place.

However, public cloud providers and even former hardware vendors (HP, Dell, EMC, etc.) also have to work together, defining common standards for new cloud services and improving application experience and interoperability between clouds. One example is the Open Container Project, where industry leaders are setting new standards for how containerized applications should function, be provisioned, and be maintained. By competing against each other, but also working together, large cloud companies propel the cloud and application landscape into a whole new cloud era.

Fast pace, keep up

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA7.png

So we can see the advantages of hosting an application in the cloud without bothering with the underlying infrastructure. Of course, other businesses, and in particular independent software vendors (ISVs), see them too. They benefit the most by turning their former monolithic client-server applications, otherwise deployed on-premise, into cloud-native ones, offering the customer a flexible, pay-per-use application in a SaaS cloud. ISVs have to compete with other vendors that are also adopting the new cloud application deployment strategy, driving up the pace even further. This, in turn, wakes up smaller companies with in-house developed applications, and other smaller ISVs unaware of the changing cloud landscape, who think they are relatively safe until they or their customers are overtaken by competitors already embracing the cost-effective, resilient, scalable, elastic, and consistent cloud.

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA8.jpg

In the coming years, we are going to see customers care less and less about the IT operations behind the application or service they use. They want it instantly available at all times, as SaaS or PaaS, without any performance degradation. If cloud businesses do not adapt early enough to offer this experience, their customers will surely look elsewhere. The days of committing to a particular IT company or (hosted) cloud provider are over. Application and infrastructure landscapes are shifting between on-premise, private, hosted, and public clouds at a much higher pace than we have ever witnessed before, resulting in ever-changing customer needs; needs that have to be closely monitored in these turbulent, evolutionary cloud times!

All aboard

Big public cloud providers sponsor this evolution in technology in a big way, investing an average of 30 billion dollars each year in a growing public cloud industry worth around 140 billion today, with an astounding 500 billion predicted by 2020 [1]. In 2016, 11% of the IT budget otherwise spent on-premise goes to cloud computing as a new delivery model [2]. By 2017, 35% of new applications will use a cloud-enabled continuous delivery model, streamlining the roll-out of new features and business innovations [2]. In 2015, Amazon made 7 billion dollars in cloud revenue against Microsoft Azure's 5 billion [3]. An analyst at FBR Capital Markets predicts that Microsoft will break 8 billion in cloud revenue in 2016, catching up with Amazon [3].

Sources: [1] Bessemer Venture Partners, State of the Cloud report; [2] IDC FutureScape predictions; [3] InfoWorld, "2016: The year we see the real cloud leaders emerge".

Adopting these innovations in a timely manner, and, more importantly, adopting the philosophy, the mindset, and the new default way of doing cloud business, can be crucial for providers and businesses to survive in this new cloud era. If you are not actively working on what your business can do with these developments, then your competitor is. With every big change there are winners and losers. Winners learn, adapt, and thrive. Losers resist; they continue to feel comfortable in their own (IaaS) cloud bubble until it is too late to change.

Back to top  

DevOps movement

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA10.png

 

As mentioned, developer software is being updated with all kinds of new application development functionality and cloud deployment features. These features make delivery and deployment relatively easy, without requiring any regular IT staff or manual infrastructure involvement. IaaS disappears into the background and becomes an invisible part of the cloud service; it is now deployed as Infrastructure as Code (IaC) and is part of the overall application template.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA11.png

These new development features also help developers scale the application based on company resource, availability, and redundancy needs. Deployment management is orchestrated from a single cockpit view in their cloud-compatible developer software of choice.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA12.png

From there, they implement new DevOps practices, utilizing the new cloud deployment approach, which requires just a single cloud subscription and gives them direct access to all cloud service offerings: a playground with endless application hosting possibilities and combinations. They can deploy a web app in an app service, a microservice-compatible application in a service fabric, a containerized application in a container service, or a more traditional (monolithic) application in a full Infrastructure as Code (IaC) environment. The application and all its dependencies are deployed from their development cockpit with a single template containing the various resources and services: compute, network, storage, load balancers, security, VPN, routes, endpoints, or authentication.
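
As a rough sketch of that single-template idea, here is what such an application template could look like, modeled as a plain Python dictionary. The resource types, names, and the deploy() helper are invented for illustration and do not represent a real Azure SDK or ARM schema.

```python
# Illustrative sketch only: one template declaring every resource an
# application needs, deployed in a single operation. All resource types
# and the deploy() helper are hypothetical placeholders.

app_template = {
    "name": "webshop",
    "resources": [
        {"type": "network/virtualNetwork", "name": "shop-vnet"},
        {"type": "network/loadBalancer", "name": "shop-lb"},
        {"type": "storage/account", "name": "shopstorage"},
        {"type": "compute/webApp", "name": "shop-frontend", "instances": 2},
        {"type": "database/sql", "name": "shop-db"},
    ],
}

def deploy(template: dict) -> None:
    """Walk the template and 'provision' each declared resource."""
    print(f"Deploying application '{template['name']}'")
    for resource in template["resources"]:
        print(f"  provisioning {resource['type']} -> {resource['name']}")

deploy(app_template)
```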

Agile

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA13.png

True agile development processes aim for short, flexible development cycles that continuously add functionality along the way, instead of the long, dreadful cycles that depend on almost everything. They allow development teams to work independently of each other on different application features (microservices): one team develops the mobile device experience while another does the order processing, without interfering with each other or with the application as a whole. The magic combination of new automated deployment models, DevOps culture, and agility enables companies to build out an entire infrastructure in the time it would previously have taken just to design the equivalent infrastructure for a monolithic client-server application.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA14.png

 

Jobs

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA15.jpg

The transition to cost-effective, agile, elastic, and scalably deployable applications requires IT staff to learn a whole new skill set, bringing operations and development even closer together in the ever-growing DevOps movement. Deploying infrastructure in the cloud is now more about code than about directly touching the underlying supporting systems, requiring staff to work bottom-up instead of the traditional top-down: the approach where the developer waits for the architect to finish designing and for the IT Pro to deploy the infrastructure, and finally tries to squeeze the application into that inflexible, bulky framework.

Infrastructure design, configuration, and manual deployments are becoming less significant with each passing year. Knowledge of cloud design, consultancy, and automated application deployment is taking their place. The gap between developer and IT Pro is shrinking, requiring the developer to learn more about the deployment side of things and the IT Pro to learn more about the cloud automation supporting the application deployment process.

Back to top

Cloud adoption

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA16.jpg

Cloud adoption still grows rapidly each year. It directly relates to cloud safety, cost, and maturity: companies must feel confident and comfortable enough to host their valuable services and data in the cloud. A research study conducted at the end of 2015 by North Bridge among 1,000 survey respondents showed that security (45.2%), regulatory/compliance/policy (36%), privacy (28.7%), vendor lock-in (25.8%), and complexity (23.1%) are the factors holding cloud adoption back.

Due to cloud maturity, public cloud adoption inhibitors are becoming less significant each year. For instance, outages and security issues no longer have a huge impact: outages fall well within the SLA of the specific service, and security issues on the platform are almost non-existent.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA18.png

Outages and security are strongly tied to reputation, which is a big motivator for public cloud providers to invest heavily in these sensitive areas.

Privacy, regulatory, compliance, and policy issues are areas of concern for enterprise companies. Public cloud providers are catching up by conforming to almost all global and industry standards regarding security, privacy, and compliance. Most enterprises have already compared their internal policies with public cloud provider policies and sorted these legal issues out; they know where they stand in these matters and how to deal with them. The policies that once held cloud adoption back are therefore fading away and becoming less relevant. The remaining adoption inhibitors are complexity and vendor lock-in, which are more persistent and difficult to tackle. They often relate to existing (more complex) infrastructure and require new technology, expertise, investment, and time to resolve; an issue that a cloud consultancy company, or a trusted local hosted cloud provider with advisory services, could assist with.

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA19.png

Smaller companies and start-ups care less about these adoption issues.

They begin with websites, mail, and office applications in the cloud, gradually expanding their cloud efforts to other, more comprehensive services as they grow. They are discovering and gradually adopting the new default way of doing cost-effective (IT) cloud business: new-age, lean-and-mean businesses with flexible pay-per-use subscriptions, competing against the established rigid order.

Size matters

 

The bigger a company gets, the more infrastructure resources it needs. More resources result in more complex environments, making migration to the cloud more difficult and delaying adoption. Major enterprises hosting complex environments are bulky and therefore reluctant to move their valuable data into a changing cloud landscape. They also have to account for the return on investment (ROI) on existing infrastructure before investing in new technology. However, they do use the public cloud for new greenfield deployments, getting familiar with the cloud-first model and gaining better insight into how their existing complex on-premise infrastructure can integrate with it. In the long term, they will transition this existing, viable infrastructure to the cloud once it is more mature and affordable for them.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA17.png

Big, leading-edge technology enterprises, whose IT drives their core business, represent a more active force behind most heavy cloud investments and innovations. They need the scalable and elastic cloud to instantly deploy their resource-intensive infrastructure on, requiring that their distributed application models are managed, scaled, and orchestrated effectively. Examples of these leading-edge cloud enterprises are Netflix, Amazon, Google, IBM, LinkedIn, Nike, PayPal, Spotify, and Twitter. Netflix is the first real modern-time microservices adopter and inspiration, bringing its code and lessons learned to the public. More enterprise companies have since followed its microservices strategy, turning their existing tier-layered infrastructure into a microservice-oriented one.

Back to top

Security

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA20.png

When you talk about cloud adoption, you also have to deal with the security aspect, which plays a bigger role every year and is something every cloud provider, small and large, throws money at nowadays. In the end, a breach damages the cloud provider's reputation and damages cloud adoption in general. Most breaches are not due to the market being flooded with new PaaS- or SaaS-hosted applications; on the contrary, many occur because companies are running outdated legacy monolithic applications, mixed with other monolithic applications, likely on badly maintained and scarcely updated server and network infrastructure components overtaken by new, superseding technology. The same technological innovations allow hackers to automate attacks and exploit newly discovered vulnerabilities. These attacks, with possible breaches and the accompanying security threats, urge companies to update their environments. The recognition that the cloud may be more secure than their own data center drives cloud adoption even further.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA21.png

Companies renewing their security reassess the application or solution against the latest security standards, very likely turning it into a new PaaS- or SaaS-based cloud application at a trusted (public) cloud provider. The cloud provider now manages the underlying infrastructure used by thousands of customers. Public cloud innovation, agility, standardization, and multitenancy happen at unthinkable hyper-scale, and go along with constant security hardening and penetration testing by the company's red (attack) and blue (defend) teams. These solid, battle-hardened cloud services are less likely to be a risk factor.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA22.png

Therefore, a breach at public cloud giants like Azure, Amazon, and Google is less likely to occur than in environments hosted on-premise or in smaller clouds. Company focus can then shift to the security of the application itself, because many hacking attempts and breaches happen at the application level, caused by unpatched or unknown vulnerabilities in the code.

Security thus becomes a key enabler for cloud adoption.

Back to top

Formula

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA23.png

So the cloud adoption formula can be derived from business size, infrastructure complexity (lock-in), the prospective ROI, and company data policies (regulation/privacy), weighed against the potential data/security risk of hosting the application in the public cloud, along with what competitors are doing in the industry. Put all that into a business case for a product, and discover what is achievable toward greater cloud adoption.
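
As a back-of-the-envelope illustration (not a real methodology), the factors above could be combined into a simple score. Every weight and input here is an invented placeholder that a real business case would replace.

```python
# Conceptual sketch of the cloud adoption "formula" above. Every factor
# and weight is invented for illustration only.

def adoption_score(competitor_pressure, policy_fit,
                   company_size, complexity, remaining_roi, security_risk):
    """All inputs are 0..1 ratings; a higher score favors cloud adoption."""
    drivers = 0.5 * competitor_pressure + 0.5 * policy_fit
    inhibitors = (0.2 * company_size + 0.3 * complexity +
                  0.2 * remaining_roi + 0.3 * security_risk)
    return drivers - inhibitors

# A small ISV: high competitive pressure, little legacy infrastructure
# left to write off, so the case for the cloud is strong.
print(adoption_score(competitor_pressure=0.9, policy_fit=0.8,
                     company_size=0.2, complexity=0.3,
                     remaining_roi=0.1, security_risk=0.4))
```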

Game changers

Technological innovations are driving the shift towards PaaS in a Hybrid Cloud. Let's highlight the most important ones and fit them together into the PaaS puzzle of today. This section is written from an Azure perspective; after all, this article is about the need for Azure Stack. However, it relates to recent cloud developments in general and overlaps with public cloud offerings from competitors like AWS and Google.

Back to top

Microservices

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA24.png

Microservices have existed for a very long time but have never gained the kind of momentum we are experiencing today. They are loosely coupled services running independently of each other, as illustrated on the right, yet functioning as a whole, each providing functionality to the application they belong to. Each microservice plays its role in the bigger picture. Development teams can work independently on different functionality (microservices) for the same application, without updating the application itself. Each has its own development cycle, allowing teams to work quickly and efficiently.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA25.png

Microservices are deployed in clusters. You give a microservices orchestrator a cluster of resources, deploy the microservice-compatible application to it, and the orchestrator figures out how to place the services. It takes care of the health of those applications, and it takes care of scaling.
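
The toy sketch below illustrates that idea in a few lines of Python: place replicas on the least-loaded nodes and reschedule instances that fail a (simulated) health check. Real orchestrators are vastly more sophisticated; all names here are illustrative.

```python
# Toy orchestrator: naive placement plus a simulated health loop.
import random

cluster = {"node-1": [], "node-2": [], "node-3": []}

def place(service: str, replicas: int) -> None:
    """Schedule each replica on the node with the fewest instances."""
    for i in range(replicas):
        node = min(cluster, key=lambda n: len(cluster[n]))
        cluster[node].append(f"{service}-{i}")

def health_loop() -> None:
    """Replace any instance that fails its (simulated) health check."""
    for node, instances in cluster.items():
        for inst in list(instances):
            if random.random() < 0.1:  # simulated failure
                instances.remove(inst)
                target = min(cluster, key=lambda n: len(cluster[n]))
                cluster[target].append(inst)
                print(f"{inst} failed on {node}, rescheduled to {target}")

place("orders", 3)
place("payments", 2)
health_loop()
print(cluster)
```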

You can manage clusters and resource complexity yourself with an orchestrator, and be more flexible and cloud-vendor independent. Alternatively, you can use an out-of-the-box PaaS solution, powered by a service fabric and by resource managers doing all of the complex orchestration, management, and provisioning for you, as Azure did with its Service Fabric service.

Hyper-scale, service-fabric-hosted microservice clusters are the foundation for most Azure services. The quote below, from Azure's Service Fabric documentation, illustrates very well how they are doing this today.

"Just as an order-of-magnitude increase in density is made possible by moving from VMs to containers, a similar order of magnitude in density becomes possible by moving from containers to microservices. For example, a single Azure SQL Database cluster, which is built on Service Fabric, comprises hundreds of machines running tens of thousands of containers hosting a total of hundreds of thousands of databases. (Each database is a Service Fabric stateful microservice.)"http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA26.png

Illustrated on the right, we can see Azure's older service model compared to the new one using microservices in a service fabric. The service fabric handles all orchestration and automation, with the help of resource managers managing the stateful and stateless microservice clusters along with all supporting resources.
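
To make the order-of-magnitude claim in the quote concrete, here is a quick back-of-the-envelope calculation. The absolute numbers are illustrative picks from within the quoted ranges ("hundreds of machines", "tens of thousands of containers", "hundreds of thousands of databases"), not published figures.

```python
machines = 500        # "hundreds of machines"
containers = 50_000   # "tens of thousands of containers"
databases = 500_000   # "hundreds of thousands of databases"

print(containers / machines)   # ~100 containers per machine
print(databases / containers)  # ~10 microservices (databases) per container
```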

Back to top

Containers

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA27.png

Containers are the foundation of the microservices revolution. Container technology is fundamentally operating-system virtualization: the ability to isolate a piece of code so that it cannot interfere with other code on the same system. You can scale up instantly and scale down again, and you can dev-test very quickly; so, once again, developers using new technology in their developer software are driving this within companies, forcing them to change their IT strategy from a monolithically deployed application infrastructure to a microservice-oriented one using containers. New microservices offerings supported by containers will revolutionize the IT and cloud application landscape.

Until recently, only web/mobile or custom cloud provider applications could be hosted in Azure's PaaS services. A lot has changed with the introduction of container virtualization, enabling a new wave of microservices at unimaginable scale and making it easier to support a whole range of new cloud applications from different environments. Container standardization is largely made possible by Docker, which offers consistent deployment technology with a unified packaging format, making it very easy to deploy containers to different types of environments.
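
As a small sketch of that unified packaging format in practice, the snippet below drives the standard docker build and docker run commands from Python. It assumes Docker is installed and a Dockerfile exists in the current directory; the image tag and port mapping are placeholders.

```python
# Minimal sketch of Docker's build-and-run workflow.
import subprocess

IMAGE = "myshop/web:1.0"  # placeholder image tag

# Package the application and its dependencies into one portable image.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# The same image now runs unchanged on a laptop, on-premise, or in any
# cloud that speaks Docker.
subprocess.run(["docker", "run", "-d", "-p", "8080:80", IMAGE], check=True)
```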

Back to top

Container Resource Clusters

Containers allow PaaS providers to create clusters of containers. Each container cluster offers a resource needed for a PaaS-compatible application to function, such as web, database, storage, or authentication functionality. Combining these clustered application resources with the clever automation provided by the PaaS service allows non-microservice-oriented applications to be hosted as PaaS applications; applications otherwise dependent on a traditional infrastructure rollout.

Back to top

Container Service

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA28-1.png

Microservice-compatible applications, which are most of the time also container-compatible applications, do not need that kind of service management and are handled differently from regular PaaS applications. They require orchestration engines and tooling to be managed effectively.

There is much development in this space, with a lot of automation done by public cloud providers, making the experience of managing and deploying large distributed clusters of microservice containers more developer and IT admin friendly, but also more interoperable, making it easier for customers to host the same container application elsewhere.

A fine example of a container service today is the Azure Container Service, based on Apache Mesos and technology from Docker. Yes, it is a Unix solution.

However, as you may know, Microsoft has already made Windows Server and Hyper-V containers available in Windows Server 2016.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA29-1.png

That makes it, at the time of writing (March 2016), a matter of months until Windows and Hyper-V container availability in the Azure Container Service is announced, making it available to the huge Microsoft developer base and truly unleashing a revolution in the container space. This is further substantiated in a blog post by Azure program manager Ross Gardler, with the following statement:

"Microsoft has committed to providing Windows Server Containers using Docker and Apache Mesos is being ported to Windows. This work will allow us to add Windows Server Container support to Azure Container Service in the future"

Drumroll! If you looked closely at the Azure Container Service slide above (presented at AzureCon by Scott Guthrie), you will have seen two significant words: AZURE STACK! So it seems they also intend to bring the Azure Container Service as an additional PaaS service to your data center, without it having been mentioned in any Azure Stack blog post or roadmap.

Back to top

PaaS

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA30-1.jpg

Platform as a Service (PaaS) has also existed for quite a long time and is provider dependent, each provider offering a solution based on its own software. PaaS offers an application deployment experience without the need to worry about the underlying infrastructure; the cloud provider arranges that for you. PaaS services are now being reinvented with microservices and containers. New PaaS services and features will be made available over time with microservices, containers, and highly automated processes as their foundation, making it easier for companies to host existing and new applications in the cloud without bothering about infrastructure. Infrastructure otherwise deployed for an application, such as IaaS, is now deployed as an invisible part of the PaaS service. IaaS spending is shifting towards PaaS as the new way of delivering an application in the cloud.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA31.png

A PaaS service allows you to host your web/mobile, custom app, container, or even microservice-compatible application, instantly providing compute, network, security, and redundancy at hyper-scale. On top of that, PaaS services can also provide stateful resources to your application, such as databases, authentication, application logic, device access/compatibility, and storage, taking care of resource orchestration, scaling, redundancy, and monitoring.

ISVs currently operating in their own private cloud and providing SaaS to their customers can also leverage PaaS functionality by re-envisioning their SaaS service and placing it on top of PaaS, gaining the same advantages in scalability and automation of application and infrastructure resources. Everything is taken care of by the PaaS service, which manages and orchestrates resources on the 'invisible' infrastructure in the background.

Back to top

Microsoft Azure

Azure services offering application-based PaaS are the new Container Service, the new Service Fabric, and the revamped App Service. The Container Service lets customers run container clusters with a high level of manageability and interoperability; it is more complex and requires more intervention, but easily allows customers to run the same containerized application elsewhere. The Service Fabric is a microservice-based solution that allows stateless and stateful applications to run as custom microservice-oriented applications; shared resources, monitoring, usage, and scaling are handled by the service. Supported applications are custom, Service Fabric-tailored, microservice-compatible applications, which can be created, deployed, and locally tested in a cluster with Visual Studio. The App Service allows customers to deploy native PaaS applications such as web (websites), mobile, API, logic, or custom apps. New functionality and support for new applications are added on a monthly basis.

Back to top

Landscape

http://www.ruudborst.nl/wp-content/uploads/2016/03/APaaS-MQ-Graphic-2015.jpghttp://www.ruudborst.nl/wp-content/uploads/2016/03/APaaS-MQ-Graphic-2016.jpg

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA33.jpg

If you look at the left Gartner Magic Quadrant 2015 picture above, you'll see a public cloud PaaS landscape dominated by Salesforce with Microsoft on its tail. Things changed as of March 2016, as seen in the right Magic Quadrant picture: Microsoft took over and is now the new application PaaS leader! It clearly illustrates the rapid development, power, and adoption of Azure's PaaS services. This adoption is backed by a large, loyal (Windows) developer base using Visual Studio with C, C++, and C#, as seen on the right in the picture illustrating the top 10 programming languages of 2015.

Back to top

Azure adoption

The slide on the right shows Azure cloud consumption by customers. With 2 million developers using Visual Studio and Visual Studio Online (Team Services) to collaborate on application development in Azure, the foundations for developer adoption and PaaS market share are rock solid.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA35.png

New container and microservices offerings, in public as well as hosted clouds, can count on large-scale adoption. Developers only need to update their software, update application compatibility with the new PaaS service, and, last but not least, deploy the application through their development software to the hosted or public cloud. No infrastructure to worry about and no costly manual tasks required.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA36.png

Back to top

Pay-per-use

So you can imagine how easy and efficient deploying a PaaS application is, especially when you do not have to worry about the underlying components. It is not only easily deployed and managed; it also invites you to try it out without making any investment. Most cloud offers are based solely on a resource consumption model, which is ideal for a first test-drive in the public cloud of one of the providers above. Capital costs transform into flexible, variable costs. When that test-drive is over, it is fairly easy to deploy the same application with all its resources in a production environment; the blueprint is already there. Just modify a couple of variables and scale accordingly!
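
A quick arithmetic sketch of how capital cost turns into variable cost under such a consumption model; every rate and number here is an invented placeholder.

```python
hourly_rate = 0.12      # assumed price per instance-hour
instances = 3
test_drive_hours = 40   # a short proof-of-concept run

print(f"test drive: ${hourly_rate * instances * test_drive_hours:.2f}")

# Scaling the same blueprint to production only changes the variables:
prod_instances = 10
prod_hours = 24 * 30    # one month
print(f"production month: ${hourly_rate * prod_instances * prod_hours:.2f}")
```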

Back to top

Software defined datacenter (SDDC)

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA37.png

We have not yet talked about software-defined networking, storage, and compute innovations, which are the driving force behind rapid cloud developments in private and hosted clouds. That is because we were approaching recent events from a public cloud provider perspective, where we did not have to deploy any physical hardware. There, deployment, provisioning, configuration, and operation of the entire physical infrastructure are abstracted from the hardware and implemented through software as a service; everything is taken care of by the public cloud provider's own fabric layer, offering a full software stack with virtual services on top of physical hardware. Provisioning starts with a request via the APIs or a self-service portal, after which virtual resources such as storage, network, and compute are delivered as the foundation of the new virtual service.
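
Conceptually, such a provisioning request can be as small as the sketch below. The endpoint, credential, and payload schema are entirely hypothetical stand-ins for whatever API a given fabric layer exposes.

```python
# Hedged sketch of programmatic self-service provisioning: one API call
# describing the virtual resources, and the fabric layer does the rest.
import requests

request_body = {
    "resourceGroup": "demo-rg",
    "resources": [
        {"type": "virtualMachine", "size": "medium", "count": 2},
        {"type": "virtualNetwork", "addressSpace": "10.0.0.0/16"},
        {"type": "storageAccount", "tier": "standard"},
    ],
}

response = requests.put(
    "https://cloud.example.com/api/provision",    # placeholder endpoint
    json=request_body,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
)
response.raise_for_status()
print(response.json())
```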

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA38.png

However, when hosting on-premise, in a private cloud, or in a hosted cloud with a cloud service provider, you have to rely on your own provisioning system, or the provider's, to deploy or change network, storage, or compute. This often results in significant delays and a non-standardized way of delivering resources. Lacking an interoperable way of provisioning, or of exporting configuration and resources elsewhere (to another cloud or, even better, a hybrid cloud), this complexity can result in cloud vendor lock-in, making the transition to a multi- or hybrid cloud a painful and costly process.

The dependency on cloud service providers using custom automated systems, or even manual operations, is going to change in the years ahead. Knowledge gained by delivering services and automation in a virtual, software-defined way, from hyper-scale public, open-source, or hardware vendor clouds, is being brought to customers as a complete SDDC solution with service provider functionality for your data center, where all IT infrastructure (compute, storage, and networking) is virtualized and instantly delivered as a service. Deployment, provisioning, monitoring, and management of data center resources are carried out by the SDDC's own automated software processes, offering consistent and interoperable deployment models and further closing the big innovation gap between public, hosted, and private clouds.

We all know compute virtualization, carried out by hypervisors; storage and network virtualization go along with it and are less well known. In the last two years, however, they have taken a huge development leap, providing enterprise-grade features normally offered by expensive hardware. Storage and network are abstracted from the hardware and defined as code, creating virtual resources on top of standard off-the-shelf servers. They are an integrated part of the modern SDDC; an SDDC that is now mature enough to replace existing, older (physical) infrastructure in customer data centers.

Back to top

Software Defined Storage

Software-defined storage (SDS) is, in essence, virtual storage. You can compare it with a RAID array, where you define a logical volume with a RAID type. With SDS, data is not striped across local disks by a RAID controller, but across servers instead: standard off-the-shelf servers with SDS software installed take over the role of the RAID controller. SDS handles all storage replication and management information between a cluster of servers. Server-local or even shared disks are used in stretched pools together with disks from other member servers, and virtual disks are created and replicated to all nodes from the combined storage in these pools.

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA42.png

Storage functionality is abstracted from the physical storage device (a SAN, for instance) and placed as software on commodity servers. You have the choice to use any disk compatible with your server, and you benefit from tiered storage: use cheap, slow SATA disks for cold data and fast NVMe or SSD disks for hot data. SDS also offers enterprise features normally available only with traditional SAN hardware. With software-defined storage, you get better redundancy, easy deployments, and frequent updates. Patching can now be done without downtime or significant performance loss, because the nodes replicate to each other and form a highly redundant cluster, compared to a (single) dedicated shared storage unit.
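
A toy model of the replication idea, to make it concrete: blocks of a virtual disk are copied to several commodity nodes, so one node can be patched without losing data. This is purely conceptual and omits everything a real SDS solution does (tiering, erasure coding, rebalancing).

```python
from itertools import cycle

nodes = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 3

def place_blocks(num_blocks: int) -> dict:
    """Assign each block to REPLICAS distinct nodes, round-robin."""
    ring = cycle(range(len(nodes)))
    layout = {}
    for block in range(num_blocks):
        start = next(ring)
        layout[block] = [nodes[(start + r) % len(nodes)]
                         for r in range(REPLICAS)]
    return layout

layout = place_blocks(6)
for block, replicas in layout.items():
    print(f"block {block}: {replicas}")

# Taking node-a down for patching still leaves two copies of every block.
survivors = {b: [n for n in r if n != "node-a"] for b, r in layout.items()}
assert all(len(r) >= 2 for r in survivors.values())
```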

Microsoft

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA39.png

Microsoft Azure's battle-hardened software-defined virtual network and storage are brought to the data center via Microsoft's server software. Microsoft will release Storage Spaces Direct (S2D) in the third quarter of 2016 with its new server family. This software-defined storage (SDS) solution allows you to use local or attached non-shared storage and offer it in a cluster as redundant virtual disks. VMware's Virtual SAN and EMC's ScaleIO are similar, already well-known SDS solutions. They leave the traditionally expensive and complex SAN far behind as a legacy redundant storage solution from the past. As opposed to its competitors, Microsoft's Storage Spaces Direct ships as part of Server 2016, making it available to a ridiculously broad audience, with functionality reserved not only for large enterprises but also for smaller ones. Small and medium-sized enterprises (SMEs) in particular can benefit the most, using their own local storage to create a highly available storage solution without investing tens of thousands of dollars in a traditional SAN. Coming from the battle-hardened Azure cloud, backed by a large user base, and delivered with proven server software, Microsoft's new Storage Spaces Direct solution will reach maturity, adoption, and trust soon after its release, becoming a strong competitor to traditional storage solutions and already established SDS solutions.

Back to top

Software Defined Networking

Virtual networking came along with hypervisors and compute support, offering virtual networks for routing traffic between VMs, and outside the host with NVGRE. It has taken another leap in recent years by also virtualizing enterprise network functionality, like firewalling, load balancing, VPN gateways, and routing with VXLAN and BGP support, moving logic and code away from hardware-based devices into pieces of virtualized network software deployed on the virtual or physical servers controlling the network. Network consumption now moves from a very complex, traditional model to this new software model. Apart from savings on complexity and hardware, there are more serious advantages to this new software-defined networking (SDN) model.

 

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA41.png

Automation

The first is automation: it reduces the time to deploy network and security almost to zero. These complex network operations were otherwise handled by networking staff configuring every device involved in the network stack; they have to plan, document, and deploy the changes and deliver them back to the customer, which can take a considerable amount of time and, needless to say, is very costly. The same process in an SDDC scenario using network virtualization can now be done directly by the customer, manually or through a pre-configured template.

HA and DR

High availability (HA) and disaster recovery (DR) can be another advantage (depending on your setup), where you deploy the same network topology and configuration in multiple clouds. Redeploying, or exporting configuration for documentation purposes, is standard functionality, done with a single click or script, and allows you to build out a second copy in another cloud for HA or DR. This kind of network automation and deployment clearly provides a whole range of new business benefits and opportunities otherwise constrained by physical hardware rollout.
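
As a sketch of why this matters for DR: define the topology once as data, then point the same template at a second cloud. The deploy() helper and all names are illustrative stand-ins for a real SDN API.

```python
topology = {
    "virtualNetwork": {"name": "prod-vnet", "addressSpace": "10.1.0.0/16"},
    "subnets": [
        {"name": "web", "prefix": "10.1.1.0/24"},
        {"name": "db", "prefix": "10.1.2.0/24"},
    ],
    "loadBalancer": {"name": "prod-lb", "frontendPort": 443},
    "firewallRules": [{"allow": "443/tcp", "to": "web"}],
}

def deploy(cloud: str, template: dict) -> None:
    """Pretend to push the topology template to a given cloud."""
    print(f"[{cloud}] creating {template['virtualNetwork']['name']}")
    for subnet in template["subnets"]:
        print(f"[{cloud}]   subnet {subnet['name']} = {subnet['prefix']}")

deploy("primary-cloud", topology)  # production site
deploy("dr-cloud", topology)       # identical DR copy, one call
```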

Security

Last, and certainly not least, is security. Network virtualization runs alongside every virtual machine, so you now have the ability to apply security services to every packet within your SDDC solution. In the traditional model, security sits on the perimeter network and only sees traffic leaving the data center, which is only a very small part of all the packets being sent and received in your topology. It also gives you full network data center monitoring insight into your solution or application in one single view. Apply machine learning and analytics to these statistics and you get even more insight into how your application behaves; feeding these insights back into the SDDC or the product itself generates even more business value and benefits.

Microsoft

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA40.png

Also part of Microsoft's Server 2016 is the new Network Controller role. In a clustered setup it offers, just like VMware's NSX, first-class software-defined networking (SDN) virtualization solutions, like a network load balancer, a distributed firewall, a VPN gateway, and much more. Microsoft assures that all code in the components is mature, a guarantee they can vouch for because all network components are battle-hardened by hundreds of thousands of customers in Azure; the Network Controller code brought from Azure to your data center is an exact copy of the same software-defined network components those customers are using in Azure. The Network Controller can be managed with Virtual Machine Manager (VMM) or with Microsoft's new SDDC, Azure Stack!

Back to top

SDDC Landscape

http://www.ruudborst.nl/wp-content/uploads/2016/03/031716_1543_TheneedforA44.png

Because of all these innovations, and in particular the availability of containers and PaaS offerings in hosted clouds, companies are now able to host all their cloud services at an affordable cost in their own data centers using an SDDC stack, seriously threatening public cloud revenue. In 2016, the software-defined storage and networking markets started to grow exponentially, gaining on traditional enterprise solutions like the SAN and hardware load balancing. SDDC in general is expected to grow from 21.7 billion in 2015 to 77.18 billion in five years. The SDDC hosted cloud market is currently occupied by Nutanix (watch this one), VMware, CloudStack, OpenStack, and OmniStack.

VMware

New hybrid SDDC solutions focusing on the end-user and on PaaS services are giving traditional, established IaaS management products, like VMware's vCenter, a very tough time. vSphere primarily comforted the IT Pro and the organization for years; a complete SDDC solution comforts not only the IT Pro but also the IT admin, the end-user, and, last but certainly not least, the developers. That is why VMware had to change strategy and also jumped onto the PaaS bandwagon, adding PaaS solutions like Pivotal's Cloud Foundry and Docker containers to its products, and shifting management and provisioning efforts away from its vSphere flagship towards SDDC PaaS solutions with vCloud Air (public) and vCloud Director (hosted). Customer adoption is quite low compared to well-established Microsoft solutions, which can count on huge on-premise Windows and developer support. VMware is still struggling with this transition; breaking out of a traditional way of doing business focused on the IT admin is not easy when VMware has become a rocking boat in the midst of the Dell and EMC merger.

Stacks

http://www.ruudborst.nl/wp-content/uploads/2016/03/topprio_centralit.png

Not all SDDC solutions are easy to manage. Several companies, like ZeroStack, Atomia, and Cisco with its MetaPod, have combined their own automated software solutions on top of these stacks to make them more complete, easy, and user-friendly. Not only management solutions are built on top of these software-defined data center stacks, but also services, in particular PaaS services from several open-source vendors. Cloud Foundry is the biggest player and is embraced by Azure and many other vendors, like Cisco and VMware. It is an open-source cloud computing PaaS service, independent of the SDDC; it uses the underlying IaaS to offer its containerized PaaS services on, and it is compatible with AWS, VMware, Cisco, and OpenStack. However, let's not forget: it is Unix-based and open-sourced. Before losing any business to other hosted or hybrid cloud SDDC offerings, Microsoft had to step in fast with its own SDDC stack, and what better competitive and innovative way than to bring an exact copy of software-defined Azure to your data center!

Back to top

Conclusions

We are in the middle of rapid cloud application landscape shifts, consisting of a technological and a cultural shift, where the center of gravity is moving from traditional Infrastructure as a Service (IaaS) towards Platform as a Service (PaaS) in a Hybrid Cloud. This is pushing the industry into the second cloud era: an elastic and consistent, cloud-application-first era, where infrastructure comes second.

Containers, software-defined virtual resources, and a high degree of service provider automation are enabling the technological shift. Developer software is regularly updated with all kinds of new functionality, in particular compatibility, development, and, more importantly, cloud deployment features. Development software creates the bridge to the infrastructure-independent PaaS services, from which developers re-architect their older, monolithic, tier-dependent client-server applications into cloud-native ones. Developers using the newest technologies are driving the transition towards flexible and efficient cloud usage in companies and, as a result, changing company application business requirements, causing the cultural shift. New business requirements and technical innovations go along with DevOps practices and agile processes aiming for shorter development cycles and rapid deployments, offering companies enormous flexibility and efficiency. By also using a variable cloud consumption model, they are now able to effectively control and reduce their costs and stay ahead in their industry.

Businesses now expect the full cloud experience, where applications and dependent resources like compute, network, and storage are deployed instantly: a truly elastic and consistent cloud experience offered in the public, hosted, or hybrid cloud, where the hosted cloud is managed by a trusted local cloud provider offering virtual services from a Software-Defined Data Center (SDDC) solution; an SDDC brought from the public, open-source, and hardware vendor clouds to local cloud providers and enterprises. Pushing aside traditional storage and network solutions running on dedicated physical hardware as inflexible, outdated solutions from the past, the SDDC replaces these older technologies and offers the IT organization a complete, multi-tenant, automated, out-of-the-box service provider data center solution.

Cloud adoption is maturing from puberty to adolescence. Security, often referred to as a cloud inhibitor, is transforming into a key enabler of cloud adoption, as customers recognize that the cloud may be more secure than their own data center. Other cloud adoption issues, regarding data placement (regulatory, privacy), vendor lock-in, and complexity, fade away when companies choose a consistent hybrid multi-cloud strategy: a hybrid cloud in which they are able to 'lift and shift' their applications and workloads between their clouds based on the company policies and requirements applicable at the time.

Competition between public cloud providers, but also with hardware vendors in private/hosted clouds and on-premise, is causing unprecedented growth in newly offered cloud services and technologies, at a much higher pace than ever witnessed before, pushing the cloud industry even further and faster into a new, hybrid, application-driven cloud era.

 

I hope I could shed some light on these interesting developments impacting the way business is done in this new cloud era, on how Azure Stack fits in, and on what it could mean for your business and for you as a professional. With this bright new light in mind, where do you see yourself in five years' time?

Contemplate this, and head over to Part 2, which explains how Azure Stack, as a technical product, fits into this changing, application-driven cloud landscape.

Part 2 - Not your average Stack

Azure Stack

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA1.png

With the shifting cloud landscape and recent application developments addressed in Part 1 of this article, we can finally talk about Azure Stack's impact and advantages in the big cloud game. It is Microsoft's own software-defined data center (SDDC) stack with Azure services, brought to your private or hosted cloud. Yes, you heard that right: along with all the software-defined goodness, they also bring their Azure services to your data center!

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA3.png

These are services still missing from other SDDC solutions, and that is just one of many reasons why Azure Stack is going to leave the other SDDCs behind.

Azure Stack provides virtual machine, storage, website, application, database, network, security, authentication, RBAC, gallery, monitoring, and usage services in an out-of-the-box service provider solution. It is an exact copy of software running in Azure today: Azure's glorified code, used by millions, in your own data center. Think about it... years of evolution in software, consolidated and battle-tested in a hyper-scale cloud, available for everyone to use.

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA2.png

If someone asks what Azure Stack is, don't throw any technical yibyab at them. Let them do the thinking and simply answer, 'Azure in your own data center.' It really is as simple as that. If they know Azure, you're safe; if they don't, hide! Or quickly show the insane numbers on the right to give them a glimpse of Azure's hyper-scale computing.

Back to top

Services

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA5.png

'Azure Stack, a clone of Azure in your data center' is the most common answer you can give. But, of course, when you lift the hood, there is more to it. Yes, it is an exact software copy of every software-defined aspect of Azure, with virtual storage, load balancer, firewalling, VPN gateway, and so on, but not all of Azure's cloud services are included. The reason is simply how the services interact and integrate in Azure: they often require Azure's hyper-scale setup and depend on other services. These Azure services are now brought to customer data centers and need to be made compatible with these smaller environments. Every service and piece of functionality offered to customers has to be supported and documented with regular Microsoft support. Microsoft also has to provide code samples for these services on GitHub, and has to update IT Pro and developer software to interact with them. So imagine all the work and investment required to get just one service, with all of its dependencies and functionality, to customers. Therefore, the Azure services coming to Azure Stack are prioritized based on customer needs and compatibility with other clouds.

IaaS and PaaS

With `general availability' (GA) in Q4 come, of course, the well-known `Infrastructure as a Service' (IaaS) services and the much-anticipated `Platform as a Service' (PaaS) services, consisting of `Service Fabric' and the `App Service', bringing Service Fabric microservice-compatible applications and `App Service' web, mobile, custom, and logic apps to developers. And that's not all! We can expect a lot more in the near future, like container support with the Azure Container Service (see the container service section in Part 1) and `Internet of Things' (IoT) services. Getting excited? Read on!

Third-party

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA4.pngAzure Stack's extensible service framework delivers not only new Azure services but also services from third parties, through custom resource providers. This is an enormous benefit for cloud providers and ISVs already offering services in the cloud: they can now offer their existing services alongside Azure services through Azure Stack in a consistent and centralized way.

Commitment

Do you have any doubts about Microsoft's commitment to bringing services from Azure to Azure Stack, even after reading about all the changes happening in public and hosted clouds outlined in Part 1 of this article? Then please read the `Azure Stack' whitepaper Microsoft published. It explains Azure Stack as a functional product and Microsoft's consistent hybrid cloud vision for the coming years. The statements made are well-founded and backed by the notion that Microsoft also has to compete in the software-defined (data center) hosted cloud. The picture below sums up all of the services in Azure and shows which ones will be available when Azure Stack goes GA in Q4. There are quite a lot of them and, believe me, you need all the time you can spare to get acquainted with each one, especially with integrating portal functionality, like authentication, resource usage, billing, and provisioning, into your current business model.

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA6.png

Back to top

PaaS

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA7.png

Azure Stack's shining gems are its application PaaS offerings, brought to you by the `App Service', the expected Container Service, and `Service Fabric' (illustrated below).

PaaS services manage all resources needed for an application; infrastructure deployment and configuration are handled by the service, making the infrastructure invisible. Application resources include storage, networking, and compute, but also logic, authentication, databases, mobile device support, workflows, scaling, deployments, and diagnostics; moreover, they can contain any other stateless or stateful resource for your container, microservice, web, mobile, API, logic, or custom application. The sketch below makes this concrete.
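
A hedged sketch (PowerShell, AzureRM module) of that idea: provisioning a web application without touching a single VM. All names and the "local" region are placeholders for your own environment.

# Create a resource group, an App Service plan, and a web app; the PaaS
# service takes care of all the underlying infrastructure.
New-AzureRmResourceGroup -Name "MyAppRG" -Location "local"
New-AzureRmAppServicePlan -ResourceGroupName "MyAppRG" -Name "MyPlan" -Location "local" -Tier "Standard"
New-AzureRmWebApp -ResourceGroupName "MyAppRG" -Name "MyWebApp" -Location "local" -AppServicePlan "MyPlan"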

PaaS services, with future container integration, will further simplify resource and application orchestration; they are going to play a more important role in the SDDC proposition than you might think. There will be heavy competition in the PaaS space, as outlined in `Part 1`, and from the looks of it, Microsoft once again played their cards right. Microsoft already has millions of developers building PaaS solutions in Azure; they embraced Docker containers; they are delivering microservice-compatible services; and, last but not least, they already have a huge on-premise Windows presence within companies, unlike AWS or Google. Azure Stack is Microsoft's missing puzzle piece: it provides the bridge to one consistent hybrid ecosystem across clouds. Businesses are now able to deploy their application in the cloud that suits their needs.

You can have a well-orchestrated and automated SDDC with only IaaS. However, if you do not have the integrated, consistent PaaS (application) services to which customers are gradually transitioning in the long term, then you do not have a future-proof SDDC solution at all. Managing individual VMs and infrastructure components is very costly and time-consuming; why invest in old technology when you can host the same application, with little developer effort, in a flexible, mature PaaS service? And if the application is not compatible enough, why not start to re-envision it and move individual features or functions of the application to PaaS?

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA8.png

Cloud application/service first

PaaS customers primarily want their application deployed instantly in a trustworthy, performant public or hosted cloud at a trusted cloud provider: a PaaS service in a cloud that is compatible with, and best suited to, the technical needs of their application, without having to bother with any dependent infrastructure they would otherwise have to manage and invest in. They view the deployment from a top-down perspective, where the cloud application comes first.

Once the compatible PaaS cloud requirement is satisfied, they will look at the SDDC stack with service provider functionality that supports their application from a business perspective. It has to align with their business requirements and model. The SDDC allows them to provision, manage, control, monitor, and update their application following a transparent pay-per-use model.

There will be differences between cloud providers offering Azure Stack in the hosted cloud, in how they deliver availability, redundancy, support, security, regions, storage, SLAs, networking, and hybrid scenarios. Customers already acquainted with Azure expect the same experience with Azure Stack. They need a trusted local cloud provider supporting and advising them in their hybrid SDDC PaaS experience.

Cross-Platform

PaaS service functionality is key and, at the moment, very vendor-specific. Of course, a Microsoft .NET application is much more compatible with a Microsoft PaaS cloud. The same applies to a Linux application, although this has changed a lot since `Microsoft loves Linux'; support for several Linux-oriented programming languages and databases is now available in the `App Service'. You have already been able to run .NET on Linux for quite some time, and Microsoft recently announced SQL Server on Linux and Bash (Ubuntu) on Windows. Who would have thought that two years ago? It's not about being Linux- or Microsoft-oriented anymore; it's about doing both and thinking cross-platform. Microsoft just isn't an OS company anymore; nowadays, it's cloud first, Windows second. Microsoft even has a cultural battle going on, led by `Jeffrey Snover' (the PowerShell inventor), to remove Windows from `Windows Server'. So don't be surprised if `Server 2016', just like Nano, rolls out without Windows in its name. Companies have to find the right functional fit for their application in the cloud of choice, without defining and confining their solution as a Microsoft or Linux one.

Lift and shift

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA19.png

Microsoft is currently a leader in the enterprise PaaS market, backed by a huge developer base using Azure. These developers already deploy applications in the cloud and collaborate with each other in Visual Studio Online. Their existing Azure applications can now be deployed with the same code in Azure Stack: true lift-and-shift applications, spreading workloads between clouds.
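
A sketch of the lift-and-shift idea (PowerShell, AzureRM module): register the Azure Stack endpoint as an extra environment and log in to it exactly as you would to public Azure. The endpoint URL is a placeholder, and the exact parameter set may vary by module version.

# Register the Azure Stack ARM endpoint as a named environment and log in.
Add-AzureRmEnvironment -Name "AzureStack" -ArmEndpoint "https://management.local.azurestack.external"
Login-AzureRmAccount -EnvironmentName "AzureStack"
# From here on, the same scripts and ARM templates you run against Azure apply.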

Moreover, when the public cloud is a step too far, on-premise or private cloud customers can choose to start in the hosted or hybrid cloud, at a local trusted cloud provider offering Azure and `Azure Stack', with managed and advisory services to assist them on their cloud journey. Even smaller enterprise companies can download the Azure Stack solution and deploy it on minimal hardware. This is great for developers testing an application in a local on-premise dev-test environment before deploying it to production.

Considering the PaaS and hybrid cloud shifts mentioned in `Part 1` of this article, along with what Azure Stack has to offer, we can conclude that its PaaS services are going to play a significant role in Azure Stack's success.

Back to top

Software-defined datacenter Stack

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA9.pnghttp://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA10.png

Azure Stack is a software-defined-everything stack: a true, battle-tested software-defined data center (SDDC) straight from Azure's hyper-scale public cloud. Microsoft further closes the gap between its public cloud and customer private/hosted clouds with a consistent application delivery experience. Customers using Azure today can use the same code to deploy their solution to Azure Stack in their private or hosted cloud.

Deploying the same application code from Azure in your own data center gives you a consistent experience with exactly the same service offerings, with the opportunity to integrate both solutions into a hybrid cloud with all its management, authentication, network, backup, and disaster recovery advantages. There is no difference in software functionality between Azure and Azure Stack; the only difference is that you're using your own hardware and infrastructure, giving you more benefits and flexibility by offering custom SLAs and pricing to your customers.

Microsoft made it even easier for developers and IT Pros by sharing deployment code, apps, components, templates, OS images, and documentation on GitHub. Head over to the excellent and elaborate blog by `Marc van Eijk' if you want to get started with GitHub and Azure Resource Manager (ARM).
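
Illustrative only: deploying one of those shared GitHub templates through ARM. The URI below is a placeholder for a template in one of the shared repositories, and "local" stands in for your region.

# Deploy a template straight from a GitHub raw URL into a resource group.
New-AzureRmResourceGroup -Name "DemoRG" -Location "local"
New-AzureRmResourceGroupDeployment -ResourceGroupName "DemoRG" -TemplateUri "https://raw.githubusercontent.com/<org>/<repo>/master/azuredeploy.json"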

Back to top

Framework

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA11.png

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA12.pngAzure Stack's software-defined data center solution offers a stack of separate layers, through which all operations flow. Provisioning operations start from the `user-facing services', which consist of a self-service portal for the IT admin or business owner and hubs with deployment and service management APIs for DevOps. New deployments end up almost instantly as virtual machines, microservices (containers), and web, mobile, custom, or logic apps. Virtual service provisioning is executed and orchestrated by `Azure Resource Manager' (ARM). ARM communicates with the core management providers, adding the necessary management configuration and multitenancy around the virtual service, such as monitoring, RBAC, authorization, security, and usage. When the multitenant framework foundation is ready, ARM creates virtual resources on the storage, network, and compute fabrics by invoking the corresponding resource providers. Finally, ARM ties them together in this new virtual framework and stores the configuration. A new virtual service with tied virtual resources is born.
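
A minimal, hypothetical ARM template to make that flow concrete: ARM reads the declared resource and routes it to its resource provider (here Microsoft.Storage), which creates it on the storage fabric. The name and "local" location are placeholders.

# Write a one-resource template to disk and hand it to ARM for deployment.
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "demostorage001",
      "apiVersion": "2015-06-15",
      "location": "local",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
'@
Set-Content -Path .\azuredeploy.json -Value $template
New-AzureRmResourceGroupDeployment -ResourceGroupName "DemoRG" -TemplateFile .\azuredeploy.json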

Fabrics

Azure Stack resources can be expanded by adding servers to the fabric. In a consolidated, hyper-converged model, each server holds all of the resources and is thus responsible for compute, network, and storage in one single cluster. In a disaggregated scenario, servers are added to a dedicated compute, network, or storage fabric cluster.

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA13.png

The network fabric consists of a cluster of machines with the `network controller' role installed, offering virtual network functionality such as firewalling, load balancing, and VPN gateway support.

The storage fabric consists of a cluster of `scale-out file servers' (SOFS), providing virtual disks from attached storage (JBOD) or from mirrored local disks striped across nodes using the new `Storage Spaces Direct' (S2D).
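
A hedged sketch (PowerShell, Windows Server 2016) of the S2D side, run against an existing cluster: pool the nodes' local disks and carve out a mirrored, cluster-shared volume. Pool and volume names are placeholders.

# Enable S2D on the cluster, then create a resilient volume from the pool.
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -Size 1TB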

Last but not least is the compute fabric: a cluster of Hyper-V servers offering VMs, and Hyper-V containers through nested virtualization.

The magic of these fabrics is that each fabric can contain both physical and virtual machines, or only virtual machines; there is no physical server requirement. You can offer virtual storage from a physical or virtual disk, network services from a connected virtual or physical network, and compute from a physical hypervisor or a virtual one using `nested virtualization'. You can even install `Azure Stack' on a `Windows 10' laptop supporting nested virtualization and carry your own private data center (SDDC) lab with you!
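
Illustrative (PowerShell, Hyper-V module): exposing virtualization extensions to a VM so it can itself run Hyper-V, which is what makes such a nested lab possible on recent Windows 10 and Server 2016 builds. "AzureStackHost" is a placeholder VM name.

# Nested virtualization also requires dynamic memory off and MAC address
# spoofing for the nested VMs' networking to function.
Set-VMProcessor -VMName "AzureStackHost" -ExposeVirtualizationExtensions $true
Set-VMMemory -VMName "AzureStackHost" -DynamicMemoryEnabled $false
Get-VMNetworkAdapter -VMName "AzureStackHost" | Set-VMNetworkAdapter -MacAddressSpoofing On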

The above fabric illustration is from the excellent `Azure Stack - The Fabric Layer' blog by `Darryl van der Peijl'. Head over to Darryl's blog for more information about the fabrics and how they work together within Azure Stack. And see `Part 1` of this blog if you want to know more about Storage Spaces Direct, the network controller, or the SDDC; it explains the `software-defined data center' (SDDC), storage (SDS), and network (SDN) in greater detail.

Back to top

Hybrid Cloud

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA14.pngA first step towards a hybrid cloud could be moving low-risk, public customer-facing websites or applications to Azure's public cloud, leaving important (backend) data in your Azure Stack cloud hosted at a trusted (local) cloud provider. A hybrid cloud scenario is even more valid when you want to use services in Azure that are not yet available in Azure Stack.

Backup and disaster recovery

Offering Azure Stack in a hybrid model gives you even more advantages. It gives you an additional (local) location for redundancy, backup, or disaster recovery (ASR) purposes, controlled and managed in a consistent way across clouds using one ecosystem. If disaster strikes one location, the entire environment can be brought back up at the other. http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA15.pngDeployment templates, scripts, and images are all the same; IT staff can implement it in the same ecosystem without learning a completely new skill set and bring the entire environment back up very quickly.

Dev-test

A dev-test infrastructure is also a real motivator for an elastic hybrid cloud. For instance, test in Azure and bring the application to Azure Stack, or vice versa. Shift, lift, and spread applications and workloads based on their current needs and benefit from the flexibility of both clouds.

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA16.jpg

Multi-cloud connectivity

With ExpressRoute between Azure and Azure Stack, cloud providers can offer customers a very rich and super-fast application experience. A customer's on-premise environment, connected directly to the provider's network, connects to their hosted cloud subscription on Azure Stack. The cloud provider's network then connects their Azure Stack subscription to Azure or Office 365 over a direct ExpressRoute connection, creating a fast, reliable, and secure private connection. Mobile workers or remote sites can connect through VPN with either Azure's or Azure Stack's VPN gateway. Office 365 workers experience the same fast, reliable, and secure advantages through ExpressRoute. They are able to work with and connect directly to resources and services offered in the company's Azure Stack subscription. Being connected to the hosted cloud subscription on Azure Stack, they are also able to connect back to the on-premise network. And there you have it: a complete hybrid circle spanning multiple clouds. Think about the possibilities; a true hybrid cloud experience every IT company dreams of.

Monitoring

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA17.pngOf course, we also need monitoring in a hybrid multi-cloud scenario. This is where Microsoft Operations Management Suite (OMS) comes into play. Microsoft's new cloud-based SaaS monitoring system monitors all assets across on-premise, hosted, and public clouds, giving you a single pane of glass and a consistent experience across all your clouds. It does not require you to set up and update a complex monitoring platform like SCOM; Microsoft already did that for you, redundantly monitoring clouds from multiple regions in Azure.
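
A sketch (PowerShell, AzureRM.OperationalInsights module) of the starting point: creating an OMS workspace in Azure that agents on on-premise, hosted, and Azure Stack machines can then report to. Names, location, and SKU are placeholders.

# Create the cloud-hosted workspace; agents are connected to it afterwards.
New-AzureRmOperationalInsightsWorkspace -ResourceGroupName "MonitoringRG" -Name "HybridOmsWs" -Location "West Europe" -Sku "Standard"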

Back to top

PoC hardware requirements

At the time of writing, Azure Stack is in public preview, with the TP1 release available here.

You can try it out in a PoC environment, with the deployment steps and recommended hardware mentioned here. If you do not have the minimum required hardware specs and want to run Azure Stack on lesser hardware, read `Daniel Neuman's' blog post, which describes how to tweak the PowerShell deployment scripts. Of course, this reduces Azure Stack's resources, so always make sure you have enough IOPS; otherwise, memory and CPU become a bottleneck. Also, be sure you can fit additional VMs when you want to install the PaaS services, or other future service functionality requiring additional VMs.
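
Not the actual deployment script, just an illustrative pre-check (PowerShell) of the host against the kind of minimums the TP1 PoC documentation lists.

# Report the host's total RAM and logical core count before deploying.
$cs = Get-CimInstance Win32_ComputerSystem
$cores = (Get-CimInstance Win32_Processor | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
"{0:N0} GB RAM, {1} logical cores" -f ($cs.TotalPhysicalMemory / 1GB), $cores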

Back to top

Conclusion

http://www.ruudborst.nl/wp-content/uploads/2016/03/032216_1650_TheneedforA18.png

Making this move sets Microsoft right in front of the game by undermining Amazon, Google, and VMware: players without their own software-defined data center (SDDC) solution delivering a consistent hybrid cloud experience with PaaS to customers. Again, the application landscape is changing rapidly, and customers are going to think less and less about IaaS and more about what PaaS can do for their business. Microsoft is setting new standards for cloud computing and gaining real momentum in the hybrid cloud space. It offers new PaaS services in a complete, out-of-the-box service provider solution, connected to Azure in one big consistent ecosystem, with Hyper-V as the hypervisor of choice and Azure Stack as the consistent PaaS and IaaS delivery model across clouds, winning customers from hosted and public cloud providers with Azure's battle-tested software deployed anywhere.

If you want to know more about Azure Stack, head over to the excellent Azure Stack Wiki, compiled by `Hans Vredevoort', or to `Mark Scholman's' AzureStack.eu, which provides in-depth Azure Stack articles. I also recommend the links in the references section, which we used as sources for this article. Finally, a big shout-out to Darryl van der Peijl for dotting the i's and crossing the t's in this part.

We strongly advise everyone to read `Part 1` of this wiki post, highlighting the business aspect and value of Azure Stack, PaaS services, and the `software-defined data center' in general.

Back to top