This article provides guidance on implementing the Reliable Web App pattern. This pattern outlines how to modify (replatform) web apps for cloud migration. It offers prescriptive architecture, code, and configuration guidance aligned with the principles of the Well-Architected Framework.
Why the Reliable Web App pattern for Java?
The Reliable Web App pattern is a set of principles and implementation techniques that define how you should replatform web apps when migrating to the cloud. It focuses on the minimal code updates you need to make to be successful in the cloud. The following guidance uses the reference implementation as an example throughout and follows the replatform journey of the fictional company, Contoso Fiber, to provide business context for your journey. Before implementing the Reliable Web App pattern for Java, Contoso Fiber had a monolithic, on-premises Customer Account Management System (CAMS) that used the Spring Boot framework.
Tip
There's a reference implementation (sample) of the Reliable Web App pattern. It represents the end-state of the Reliable Web App implementation. It's a production-grade web app that features all the code, architecture, and configuration updates discussed in this article. Deploy and use the reference implementation to guide your implementation of the Reliable Web App pattern.
How to implement the Reliable Web App pattern
This article includes architecture, code, and configuration guidance to implement the Reliable Web App pattern. Use the following links to navigate to the specific guidance you need:
- Business context: Align this guidance with your business context and learn how to define immediate and long-term goals that drive replatforming decisions.
- Architecture guidance: Learn how to select the right cloud services and design an architecture that meets your business requirements.
- Code guidance: Implement three design patterns to improve the reliability and performance efficiency of your web app in the cloud: the Retry, Circuit Breaker, and Cache-Aside patterns.
- Configuration guidance: Configure authentication and authorization, managed identities, rightsized environments, infrastructure as code, and monitoring.
Business context
The first step in replatforming a web app is to define your business objectives. You should set immediate goals, such as service level objectives and cost optimization targets, as well as future goals for your web application. These objectives influence your choice of cloud services and the architecture of your web application in the cloud. Define a target SLO for your web app, such as 99.9% uptime. Calculate the composite SLA for all the services that affect the availability of your web app.
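To illustrate the composite SLA calculation with hypothetical numbers: if a request path depends on three services with SLAs of 99.95%, 99.99%, and 99.95%, the composite SLA is 0.9995 × 0.9999 × 0.9995 ≈ 99.89%. That result falls short of a 99.9% SLO and signals that you need added redundancy, such as a second region, to close the gap.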
For example, Contoso Fiber wanted to expand their on-premises Customer Account Management System (CAMS) web app to reach other regions. To meet the increased demand on the web app, they established the following goals:
- Apply low-cost, high-value code changes
- Reach a service level objective (SLO) of 99.9%
- Adopt DevOps practices
- Create cost-optimized environments
- Improve reliability and security
Contoso Fiber determined that their on-premises infrastructure wasn't a cost-effective solution for scaling the application. So, they decided that migrating their CAMS web application to Azure was the most cost-effective way to achieve their immediate and future objectives.
Architecture guidance
The Reliable Web App pattern has a few essential architectural elements. You need DNS to manage endpoint resolution, a web application firewall to block malicious HTTP traffic, and a load balancer to protect and route inbound user requests. The application platform hosts your web app code and makes calls to all the backend services through private endpoints in a virtual network. An application performance monitoring tool captures metrics and logs to understand your web app.
Figure 1. Essential architectural elements of the Reliable Web App pattern.
Design the architecture
Design your infrastructure to support your recovery metrics, such as recovery time objective (RTO) and recovery point objective (RPO). The RTO affects availability and must support your SLO. Determine an RPO and configure data redundancy to meet it.
Choose infrastructure reliability. Determine how many availability zones and regions you need to meet your availability needs. Add availability zones and regions until the composite SLA meets your SLO. The Reliable Web App pattern supports multiple regions for an active-active or active-passive configuration. For example, the reference implementation uses an active-passive configuration to meet an SLO of 99.9%.
For a multi-region web app, configure your load balancer to route traffic to the second region to support either an active-active or active-passive configuration depending on your business need. The two regions require the same services except one region has a hub virtual network that connects the regions. Adopt a hub-and-spoke network topology to centralize and share resources, such as a network firewall. If you have virtual machines, add a bastion host to the hub virtual network to manage them securely (see figure 2).
Figure 2. The Reliable Web App pattern with a second region and a hub-and-spoke topology.
Choose a network topology. Choose the right network topology for your web app and networking requirements. If you plan to have multiple virtual networks, use a hub-and-spoke network topology. It provides cost, management, and security benefits, with hybrid connectivity options to on-premises networks and other virtual networks.
Pick the right Azure services
When you move a web app to the cloud, you should select Azure services that meet your business requirements and align with the current features of the on-premises web app. The alignment helps minimize the replatforming effort. For example, use services that allow you to keep the same database engine and support existing middleware and frameworks. The following sections provide guidance for selecting the right Azure services for your web app.
For example, before the move to the cloud, Contoso Fiber's CAMS web app was an on-premises, monolithic Java web app. It's a Spring Boot app with a PostgreSQL database. The web app is a line-of-business support app. It's employee-facing. Contoso Fiber employees use the application to manage support cases from their customers. The web app suffered from common challenges in scalability and feature deployment. This starting point, their business goals, and SLO drove their service choices.
Application platform: Use Azure App Service as your application platform. Contoso Fiber chose Azure App Service as the application platform for the following reasons:
- Natural progression: Contoso Fiber deployed a Spring Boot jar file on their on-premises server and wanted to minimize the amount of rearchitecting for that deployment model. App Service provides robust support for running Spring Boot apps, and it was a natural progression for Contoso Fiber to use App Service. Azure Container Apps is also an attractive alternative for this app. For more information, see What is Azure Spring Apps? and Java on Azure Container Apps overview.
- High SLA: It has a high SLA that meets the requirements for the production environment.
- Reduced management overhead: It's a fully managed hosting solution.
- Containerization capability: App Service works with private container image registries like Azure Container Registry. Contoso Fiber can use these registries to containerize the web app in the future.
- Autoscaling: The web app can rapidly scale up, down, in, and out based on user traffic.
Identity management: Use Microsoft Entra ID as your identity and access management solution. Contoso Fiber chose Microsoft Entra ID for the following reasons:
- Authentication and authorization: The application needs to authenticate and authorize call center employees.
- Scalable: It scales to support larger scenarios.
- User-identity control: Call center employees can use their existing enterprise identities.
- Authorization protocol support: It supports OAuth 2.0 for managed identities.
Database: Use a service that allows you to keep the same database engine. Use the data store decision tree. Contoso Fiber chose Azure Database for PostgreSQL and the flexible-server option for the following reasons:
- Reliability: The flexible-server deployment model supports zone-redundant high availability across multiple availability zones. This configuration maintains a warm standby server in a different availability zone within the same Azure region. The configuration replicates data synchronously to the standby server.
- Cross-region replication: It has a read replica feature that allows you to asynchronously replicate data to a read-only replica database in another region.
- Performance: It provides predictable performance and intelligent tuning to improve your database performance by using real usage data.
- Reduced management overhead: It's a fully managed Azure service that reduces management obligations.
- Migration support: It supports database migration from on-premises single-server PostgreSQL databases. They can use the migration tool to simplify the migration process.
- Consistency with on-premises configurations: It supports different community versions of PostgreSQL, including the version that Contoso Fiber currently uses.
- Resiliency: The flexible-server deployment automatically creates server backups and stores them in zone-redundant storage (ZRS) within the same region. They can restore their database to any point in time within the backup retention period. The backup and restore capability creates a better RPO (acceptable amount of data loss) than Contoso Fiber could achieve on-premises.
Application performance monitoring: Use Application Insights to analyze telemetry on your application. Contoso Fiber chose to use Application Insights for the following reasons:
- Integration with Azure Monitor: It provides the best integration with Azure Monitor.
- Anomaly detection: It automatically detects performance anomalies.
- Troubleshooting: It helps you diagnose problems in the running app.
- Monitoring: It collects information about how users are using the app and allows you to easily track custom events.
- Visibility gap: The on-premises solution didn't have an application performance monitoring solution. Application Insights provides easy integration with the application platform and code.
Cache: Choose whether to add a cache to your web app architecture. Azure Cache for Redis is Azure's primary cache solution. It's a managed in-memory data store based on the Redis software. Contoso Fiber added Azure Cache for Redis for the following reasons:
- Speed and volume: It has high-data throughput and low latency reads for commonly accessed, slow-changing data.
- Diverse supportability: It's a unified cache location that all instances of the web app can use.
- External data store: The on-premises application servers performed VM-local caching. This setup didn't offload highly frequented data, and it couldn't invalidate data.
- Nonsticky sessions: The cache allows the web app to externalize session state and use nonsticky sessions. Most Java web apps running on premises use in-memory, client-side caching. In-memory, client-side caching doesn't scale well and increases the memory footprint on the host. By using Azure Cache for Redis, Contoso Fiber has a fully managed, scalable cache service to improve scalability and performance of their applications. Contoso Fiber was using a cache abstraction framework (Spring Cache) and only needed minimal configuration changes to swap out the cache provider. It allowed them to switch from an Ehcache provider to the Redis provider.
Load balancer: Web applications using PaaS solutions should use Azure Front Door, Azure Application Gateway, or both based on web app architecture and requirements. Use the load balancer decision tree to pick the right load balancer. Contoso Fiber needed a layer-7 load balancer that could route traffic across multiple regions. Contoso Fiber needed a multi-region web app to meet the SLO of 99.9%. Contoso Fiber chose Azure Front Door for the following reasons:
- Global load balancing: It's a layer-7 load balancer that can route traffic across multiple regions.
- Web application firewall: It integrates natively with Azure Web Application Firewall.
- Routing flexibility: It allows the application team to configure ingress needs to support future changes in the application.
- Traffic acceleration: It uses anycast to reach the nearest Azure point of presence and find the fastest route to the web app.
- Custom domains: It supports custom domain names with flexible domain validation.
- Health probes: The application needs intelligent health probe monitoring. Azure Front Door uses responses from the probe to determine the best origin for routing client requests.
- Monitoring support: It supports built-in reports with an all-in-one dashboard for both Front Door and security patterns. You can configure alerts that integrate with Azure Monitor. It lets the application log each request and failed health probes.
- DDoS protection: It has built-in layer 3-4 DDoS protection.
- Content delivery network: It positions Contoso Fiber to use a content delivery network. The content delivery network provides site acceleration.
Web application firewall: Use Azure Web Application Firewall to provide centralized protection from common web exploits and vulnerabilities. Contoso Fiber used Azure Web Application Firewall for the following reasons:
- Global protection: It provides improved global web app protection without sacrificing performance.
- Botnet protection: The team can monitor and configure settings to address security concerns related to botnets.
- Parity with on-premises: The on-premises solution was running behind a web application firewall managed by IT.
- Ease of use: Web Application Firewall integrates with Azure Front Door.
Secrets manager: Use Azure Key Vault if you have secrets to manage in Azure. Contoso Fiber used Key Vault for the following reasons:
- Encryption: It supports encryption at rest and in transit.
- Managed identity support: The application services can use managed identities to access the secret store.
- Monitoring and logging: It facilitates audit access and generates alerts when stored secrets change.
- Integration: It provides native integration with the Azure configuration store (App Configuration) and web hosting platform (App Service).
Endpoint security: Use Azure Private Link to access platform-as-a-service solutions over a private endpoint in your virtual network. Traffic between your virtual network and the service travels across the Microsoft backbone network. Contoso Fiber chose Private Link for the following reasons:
- Enhanced security communication: It lets the application privately access services on the Azure platform and reduces the network footprint of data stores to help protect against data leakage.
- Minimal effort: The private endpoints support the web app platform and database platform the web app uses. Both platforms mirror existing on-premises configurations for minimal change.
Network security: Use Azure Firewall to control inbound and outbound traffic at the network level. Use Azure Bastion to connect to virtual machines securely without exposing RDP/SSH ports. Contoso Fiber adopted a hub and spoke network topology and wanted to put shared network security services in the hub. Azure Firewall improves security by inspecting all outbound traffic from the spokes to increase network security. Contoso Fiber needed Azure Bastion for secure deployments from a jump host in the DevOps subnet.
Code guidance
To successfully move a web app to the cloud, you need to update your web app code with the Retry pattern, Circuit-Breaker pattern, and Cache-Aside design pattern.
Figure 3. Role of the design patterns.
Each design pattern provides workload design benefits that align with one or more pillars of the Well-Architected Framework. Here's an overview of the patterns you should implement:
Retry pattern: The Retry pattern handles transient failures by retrying operations that might fail intermittently. Implement this pattern on all outbound calls to other Azure services.
Circuit Breaker pattern: The Circuit Breaker pattern prevents an application from retrying operations that aren't transient. Implement this pattern in all outbound calls to other Azure services.
Cache-Aside pattern: The Cache-Aside pattern adds data to and retrieves data from a cache more often than from the datastore. Implement this pattern on requests to the database.
Design pattern | Reliability (RE) | Security (SE) | Cost Optimization (CO) | Operational Excellence (OE) | Performance Efficiency (PE) | Supporting WAF principles |
---|---|---|---|---|---|---|
Retry pattern | ✔ | | | | | RE:07 |
Circuit-Breaker pattern | ✔ | | | | ✔ | RE:03 RE:07 PE:07 PE:11 |
Cache-Aside pattern | ✔ | | | | ✔ | RE:05 PE:08 PE:12 |
Implement the Retry pattern
Add the Retry pattern to your application code to address temporary service disruptions. These disruptions are called transient faults. Transient faults usually resolve themselves within seconds. The Retry pattern allows you to resend failed requests. It also allows you to configure the request delays and the number of attempts before failure is conceded.
Use Resilience4j, a lightweight fault-tolerance library, to implement the Retry pattern in Java. For example, the reference implementation adds the Retry pattern by decorating the Service Plan Controller's listServicePlans method with Retry annotations. The code retries the call to get a list of service plans from the database if the initial call fails. The reference implementation configures the retry policy, including maximum attempts, wait duration, and which exceptions should be retried, in application.properties.
@GetMapping("/list")
@PreAuthorize("hasAnyAuthority('APPROLE_AccountManager')")
@CircuitBreaker(name = SERVICE_PLAN)
@Retry(name = SERVICE_PLAN)
public String listServicePlans(Model model) {
List<ServicePlanDto> servicePlans = planService.getServicePlans();
model.addAttribute("servicePlans", servicePlans);
return "pages/plans/list";
}
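The exact policy values depend on your workload's transient-fault profile. The following is a rough programmatic sketch of the same kind of policy with plain Resilience4j; the values, exception type, and class name are illustrative, not the reference implementation's configuration.

import java.net.SocketTimeoutException;
import java.time.Duration;
import java.util.List;
import java.util.function.Supplier;

import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import io.github.resilience4j.retry.RetryRegistry;

public class RetryPolicySketch {

    public static List<String> fetchWithRetry(Supplier<List<String>> databaseCall) {
        // Illustrative policy: up to 3 attempts with a 2-second wait, retrying only
        // exception types that usually indicate a transient fault.
        RetryConfig config = RetryConfig.custom()
            .maxAttempts(3)
            .waitDuration(Duration.ofSeconds(2))
            .retryExceptions(SocketTimeoutException.class)
            .build();

        Retry retry = RetryRegistry.of(config).retry("SERVICE_PLAN");

        // Wrap the call so failed attempts are transparently retried.
        return Retry.decorateSupplier(retry, databaseCall).get();
    }
}

In a Spring Boot app that uses the annotations shown earlier, the equivalent settings normally live under the resilience4j.retry.instances.* keys in application.properties.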
Implement the Circuit Breaker pattern
Use the Circuit Breaker pattern to handle service disruptions that aren't transient faults. The Circuit Breaker pattern prevents an application from continuously attempting to access a nonresponsive service. It releases the application and avoids wasting CPU cycles so the application retains its performance integrity for end users.
Use Spring Circuit Breaker and the Resilience4j documentation to implement the Circuit Breaker pattern. For example, the reference implementation applies the Circuit Breaker pattern by decorating methods with the @CircuitBreaker annotation, as shown in the listServicePlans method earlier.
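A fallback method gives the application somewhere to route calls while the circuit is open. The following minimal sketch uses the Resilience4j annotation with a hypothetical downstream client; the names and the empty-list fallback are assumptions, not code from the reference implementation.

import java.util.Collections;
import java.util.List;

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;

@Service
public class ServicePlanClient {

    // After repeated failures, the circuit opens and calls go straight to the fallback,
    // so the app stops burning CPU cycles on a nonresponsive dependency.
    @CircuitBreaker(name = "SERVICE_PLAN", fallbackMethod = "listPlansFallback")
    public List<String> listPlans() {
        return callDownstreamService();
    }

    // Invoked when the call fails or the circuit is open; returns a safe default.
    private List<String> listPlansFallback(Throwable cause) {
        return Collections.emptyList();
    }

    private List<String> callDownstreamService() {
        throw new UnsupportedOperationException("placeholder for a real remote call");
    }
}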
Implement the Cache-Aside pattern
Add the Cache-Aside pattern to your web app to improve in-memory data management. The pattern assigns the application the responsibility of handling data requests and ensuring consistency between the cache and a persistent storage, such as a database. It shortens response times, enhances throughput, and reduces the need for more scaling. It also reduces the load on the primary datastore, improving reliability and cost optimization. To implement the Cache-Aside pattern, follow these recommendations:
Configure the application to use a cache. To enable caching, add the spring-boot-starter-cache package as a dependency in your pom.xml file. This package provides default configurations for Redis cache.

Cache high-need data. Apply the Cache-Aside pattern on high-need data to amplify its effectiveness. Use Azure Monitor to track the CPU, memory, and storage of the database. These metrics help you determine whether you can use a smaller database SKU after applying the Cache-Aside pattern. To cache specific data in your code, add the @Cacheable annotation. This annotation tells Spring which methods should have their results cached (see the sketch after these recommendations).

Keep cache data fresh. Schedule regular cache updates to sync with the latest database changes. Determine the optimal refresh rate based on data volatility and user needs. This practice ensures the application uses the Cache-Aside pattern to provide both rapid access and current information. The default cache settings might not suit your web application. You can customize these settings in the application.properties file or the environment variables. For instance, you can modify the spring.cache.redis.time-to-live value (expressed in milliseconds) to control how long data should remain in the cache before it's evicted.

Ensure data consistency. Implement mechanisms to update the cache immediately after any database write operation. Use event-driven updates or dedicated data management classes to ensure cache coherence. Consistently synchronizing the cache with database modifications is central to the Cache-Aside pattern.
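The following sketch shows the shape of these recommendations with Spring's cache abstraction. The service, repository, cache name, and eviction strategy are assumptions for illustration, not code from the reference implementation; it also assumes @EnableCaching is set on a configuration class and a Redis cache provider is on the classpath.

import java.util.List;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical data-access interface standing in for the app's real repository.
interface PlanRepository {
    List<String> findAllPlanNames();
    void save(String planName);
}

@Service
public class PlanCatalogService {

    private final PlanRepository planRepository;

    public PlanCatalogService(PlanRepository planRepository) {
        this.planRepository = planRepository;
    }

    // Cache-aside read: the first call loads from the database; later calls return the cached result.
    @Cacheable(cacheNames = "plans")
    public List<String> getPlanNames() {
        return planRepository.findAllPlanNames();
    }

    // Write path: evict the cached entry so the next read repopulates the cache with fresh data.
    @CacheEvict(cacheNames = "plans", allEntries = true)
    public void addPlan(String planName) {
        planRepository.save(planName);
    }
}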
Configuration guidance
The following sections provide guidance on implementing the configuration updates. Each section aligns with one or more pillars of the Well-Architected Framework.
Configuration | Reliability (RE) | Security (SE) | Cost Optimization (CO) | Operational Excellence (OE) | Performance Efficiency (PE) | Supporting WAF principles |
---|---|---|---|---|---|---|
Configure user authentication and authorization | | ✔ | | ✔ | | SE:05 OE:10 |
Implement managed identities | | ✔ | | ✔ | | SE:05 OE:10 |
Right size environments | | | ✔ | | | CO:05 CO:06 |
Implement autoscaling | ✔ | | ✔ | | ✔ | RE:06 CO:12 PE:05 |
Automate resource deployment | | | | ✔ | | OE:05 |
Implement monitoring | ✔ | | | ✔ | ✔ | OE:07 PE:04 |
Configure user authentication and authorization
When you migrate web applications to Azure, configure user authentication and authorization mechanisms. Follow these recommendations:
Use an identity platform. Use the Microsoft Identity platform to set up web app authentication. This platform supports both single-tenant and multi-tenant applications, allowing users to sign in with their Microsoft identities or social accounts.
The Spring Boot Starter for Microsoft Entra ID streamlines this process, utilizing Spring Security and Spring Boot for easy setup. It offers varied authentication flows, automatic token management, and customizable authorization policies, along with integration capabilities with Spring Cloud components. This enables straightforward Microsoft Entra ID and OAuth 2.0 integration into Spring Boot applications without manual library or settings configuration.
For example, the reference implementation uses the Microsoft identity platform (Microsoft Entra ID) as the identity provider for the web app. It uses the OAuth 2.0 authorization code grant to sign in a user with a Microsoft Entra account. The following XML snippet defines the two required dependencies of the OAuth 2.0 authorization code grant flow. The dependency com.azure.spring:spring-cloud-azure-starter-active-directory enables Microsoft Entra authentication and authorization in a Spring Boot application. The dependency org.springframework.boot:spring-boot-starter-oauth2-client supports OAuth 2.0 authentication and authorization in a Spring Boot application.

<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-active-directory</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
Create an app registration. Microsoft Entra ID requires an application registration in the primary tenant. The application registration ensures that users who get access to the web app have identities in the primary tenant. For example, the reference implementation uses Terraform to create a Microsoft Entra ID app registration along with an app-specific Account Manager role.
resource "azuread_application" "app_registration" { display_name = "${azurecaf_name.app_service.result}-app" owners = [data.azuread_client_config.current.object_id] sign_in_audience = "AzureADMyOrg" # single tenant app_role { allowed_member_types = ["User"] description = "Account Managers" display_name = "Account Manager" enabled = true id = random_uuid.account_manager_role_id.result value = "AccountManager" } }
Enforce authorization in the application. Use role-based access controls (RBAC) to assign least privileges to application roles. Define specific roles for different user actions to avoid overlap and ensure clarity. Map users to the appropriate roles and ensure they only have access to necessary resources and actions. Configure Spring Security to use Spring Boot Starter for Microsoft Entra ID. This library allows integration with Microsoft Entra ID and helps you ensure that users are authenticated securely. Configuring and enabling the Microsoft Authentication Library (MSAL) provides access to more security features. These features include token caching and automatic token refreshing.
For example, the reference implementation creates app roles reflecting the types of user roles in Contoso Fiber's account management system. Roles translate into permissions during authorization. Examples of app-specific roles in CAMS include the account manager, Level one (L1) support representative, and Field Service representative. The Account Manager role has permissions to add new app users and customers. A Field Service representative can create support tickets. The PreAuthorize annotation restricts access to specific roles.

@GetMapping("/new")
@PreAuthorize("hasAnyAuthority('APPROLE_AccountManager')")
public String newAccount(Model model) {
    if (model.getAttribute("account") == null) {
        List<ServicePlan> servicePlans = accountService.findAllServicePlans();
        ServicePlan defaultServicePlan = servicePlans.stream().filter(sp -> sp.getIsDefault() == true).findFirst().orElse(null);
        NewAccountRequest accountFormData = new NewAccountRequest();
        accountFormData.setSelectedServicePlanId(defaultServicePlan.getId());
        model.addAttribute("account", accountFormData);
        model.addAttribute("servicePlans", servicePlans);
    }
    model.addAttribute("servicePlans", accountService.findAllServicePlans());
    return "pages/account/new";
}
...
To integrate with Microsoft Entra ID, the reference implementation uses the OAuth 2.0 authorization code grant flow. This flow enables a user to sign in with a Microsoft account. The following code snippet shows you how to configure the SecurityFilterChain to use Microsoft Entra ID for authentication and authorization.

@Configuration(proxyBeanMethods = false)
@EnableWebSecurity
@EnableMethodSecurity
public class AadOAuth2LoginSecurityConfig {
    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.apply(AadWebApplicationHttpSecurityConfigurer.aadWebApplication())
            .and()
            .authorizeHttpRequests()
            .requestMatchers(EndpointRequest.to("health")).permitAll()
            .anyRequest().authenticated()
            .and()
            .logout(logout -> logout
                .deleteCookies("JSESSIONID", "XSRF-TOKEN")
                .clearAuthentication(true)
                .invalidateHttpSession(true));
        return http.build();
    }
}
...
Prefer temporary access to storage. Use temporary permissions to safeguard against unauthorized access and breaches, such as shared access signatures (SASs). Use User Delegation SASs to maximize security when granting temporary access. It's the only SAS that uses Microsoft Entra ID credentials and doesn't require a permanent storage account key.
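As a sketch of what temporary, identity-based access can look like in Java with the Azure Storage SDK (the storage account, container, blob, and one-hour lifetime are hypothetical; scope and lifetime should match your own data):

import java.time.OffsetDateTime;

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.models.UserDelegationKey;
import com.azure.storage.blob.sas.BlobSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;

public class TemporaryAccessSketch {

    public static String createReadOnlySasUrl() {
        // Authenticate with Microsoft Entra ID; no storage account key is involved.
        BlobServiceClient serviceClient = new BlobServiceClientBuilder()
            .endpoint("https://contosostorage.blob.core.windows.net") // hypothetical account
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient();

        // Request a short-lived user delegation key from the service.
        OffsetDateTime expiry = OffsetDateTime.now().plusHours(1);
        UserDelegationKey delegationKey =
            serviceClient.getUserDelegationKey(OffsetDateTime.now(), expiry);

        // Grant read-only access to a single blob until the expiry time.
        BlobClient blobClient = serviceClient
            .getBlobContainerClient("invoices")   // hypothetical container
            .getBlobClient("statement.pdf");      // hypothetical blob
        BlobSasPermission permission = new BlobSasPermission().setReadPermission(true);
        BlobServiceSasSignatureValues sasValues =
            new BlobServiceSasSignatureValues(expiry, permission);

        return blobClient.getBlobUrl() + "?" + blobClient.generateUserDelegationSas(sasValues, delegationKey);
    }
}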
Enforce authorization in Azure. Use Azure RBAC to assign least privileges to user identities. Azure RBAC determines what Azure resources identities can access, what they can do with those resources, and what areas they have access to.
Avoid permanent elevated permissions. Use Microsoft Entra Privileged Identity Management to grant just-in-time access for privileged operations. For example, developers often need administrator-level access to create/delete databases, modify table schemas, and change user permissions. With just-in-time access, user identities receive temporary permissions to perform privileged tasks.
Implement managed identities
Use Managed Identities for all Azure services that support managed identities. A managed identity allows Azure resources (workload identities) to authenticate to and interact with other Azure services without managing credentials. Hybrid and legacy systems can keep on-premises authentication solutions to simplify the migration but should transition to managed identities as soon as possible. To implement managed identities, follow these recommendations:
Pick the right type of managed identity. Prefer user-assigned managed identities when you have two or more Azure resources that need the same set of permissions. This setup is more efficient than creating system-assigned managed identities for each of those resources and assigning the same permissions to all of them. Otherwise, use system-assigned managed identities.
Configure least privileges. Use Azure RBAC to grant only the permissions that are critical for the operations, such as CRUD actions in databases or accessing secrets. Workload identity permissions are persistent, so you can't provide just-in-time or short-term permissions to workload identities. If Azure RBAC doesn't cover a specific scenario, supplement Azure RBAC with Azure-service level access policies.
Secure remaining secrets. Store any remaining secrets in Azure Key Vault. Load secrets from Key Vault at application startup instead of during each HTTP request. High-frequency access within HTTP requests can exceed Key Vault transaction limits. Store application configurations in Azure App Configuration.
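As a minimal sketch of loading a secret once at startup with a managed identity (the vault URL and secret name are hypothetical; Spring apps can also pull Key Vault secrets in through the Spring Cloud Azure property source instead of calling the SDK directly):

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class SecretLoaderSketch {

    // Read the secret once at startup rather than on every HTTP request,
    // which keeps the app well under Key Vault transaction limits.
    public static String loadApiKey() {
        SecretClient secretClient = new SecretClientBuilder()
            .vaultUrl("https://contoso-cams-kv.vault.azure.net") // hypothetical vault
            .credential(new DefaultAzureCredentialBuilder().build()) // resolves to the managed identity in Azure
            .buildClient();

        return secretClient.getSecret("external-api-key").getValue(); // hypothetical secret name
    }
}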
Right size environments
Use the performance tiers (SKUs) of Azure services that meet the needs of each environment without excess. To right-size your environments, follow these recommendations:
Estimate costs. Use the Azure pricing calculator to estimate the cost of each environment.
Cost optimize production environments. Production environments need SKUs that meet the service level agreements (SLA), features, and scale needed for production. Continuously monitor resource usage and adjust SKUs to align with actual performance needs.
Cost optimize preproduction environments. Preproduction environments should use lower-cost resources, disable unneeded services, and apply discounts such as Azure Dev/Test pricing. Ensure preproduction environments are sufficiently similar to production to avoid introducing risks. This balance ensures that testing remains effective without incurring unnecessary costs.
Define SKUs using infrastructure as code (IaC). Implement IaC to dynamically select and deploy the correct SKUs based on the environment. This approach enhances consistency and simplifies management.
For example, the reference implementation has an optional environment parameter that instructs the Terraform template to select either development or production SKUs. The following command sets the parameter so the deployment uses production SKUs.
azd env set APP_ENVIRONMENT prod
Implement autoscaling
Autoscaling ensures that a web app remains resilient, responsive, and capable of handling dynamic workloads efficiently. To implement autoscaling, follow these recommendations:
Automate scale-out. Use Azure autoscale to automate horizontal scaling in production environments. Configure autoscaling rules to scale out based on key performance metrics, so your application can handle varying loads.
Refine scaling triggers. Begin with CPU utilization as your initial scaling trigger if you're unfamiliar with your application’s scaling requirements. Refine your scaling triggers to include other metrics such as RAM, network throughput, and disk I/O. The goal is to match your web application's behavior for better performance.
Provide a scale-out buffer. Set your scaling thresholds to trigger before reaching maximum capacity. For example, configure scaling to occur at 85% CPU utilization rather than waiting until it reaches 100%. This proactive approach helps maintain performance and avoid potential bottlenecks.
Automate resource deployment
Use automation to deploy and update Azure resources and code across all environments. Follow these recommendations:
Use infrastructure as code. Deploy infrastructure as code through continuous integration and continuous delivery (CI/CD) pipelines. Azure has premade Bicep, ARM (JSON), and Terraform templates for every Azure resource.
Use a continuous integration/continuous deployment (CI/CD) pipeline. Use a CI/CD pipeline to deploy code from source control to your various environments, such as test, staging, and production. Utilize Azure Pipelines if you're working with Azure DevOps or GitHub Actions for GitHub projects.
Integrate unit testing. Prioritize the execution and passing of all unit tests within your pipeline before any deployment to App Services. Incorporate code quality and coverage tools like SonarQube to achieve comprehensive testing coverage.
Adopt mocking framework. For testing involving external endpoints, utilize mocking frameworks. These frameworks allow you to create simulated endpoints. They eliminate the need to configure real external endpoints and ensure uniform testing conditions across environments.
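For example, a test can stand up a simulated HTTP endpoint with a stubbing tool such as WireMock (one common choice; the port, path, and payload below are hypothetical):

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class ExternalEndpointStubSketch {

    public static WireMockServer startStub() {
        // Simulated endpoint: tests don't need the real external service,
        // so they behave the same way in every environment.
        WireMockServer server = new WireMockServer(8089); // hypothetical local port
        server.start();
        server.stubFor(get(urlEqualTo("/api/service-plans"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("[{\"id\":1,\"name\":\"Standard\"}]")));
        return server;
    }
}

Point the code under test at the stub's base URL (http://localhost:8089 in this sketch) and stop the server when the test finishes.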
Perform security scans. Employ static application security testing (SAST) to find security flaws and coding errors in your source code. Additionally, conduct software composition analysis (SCA) to examine third-party libraries and components for security risks. Tools for these analyses are readily integrated into both GitHub and Azure DevOps.
Configure monitoring
Implement application and platform monitoring to enhance the operational excellence and performance efficiency of your web app. To implement monitoring, follow these recommendations:
Collect application telemetry. Use autoinstrumentation in Azure Application Insights to collect application telemetry, such as request throughput, average request duration, errors, and dependency monitoring, with no code changes. Spring Boot registers several core metrics in Application Insights such as Java virtual machine (JVM), CPU, Tomcat, and others. Application Insights automatically collects from logging frameworks such as Log4j and Logback. For example, the reference implementation enables Application Insights through Terraform as part of the App Service's app_settings configuration (see the following code).

app_settings = {
    APPLICATIONINSIGHTS_CONNECTION_STRING      = var.app_insights_connection_string
    ApplicationInsightsAgent_EXTENSION_VERSION = "~3"
    ...
}
Create custom application metrics. Implement code-based instrumentation to capture custom application telemetry by adding the Application Insights SDK and using its API.
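One option is to publish custom metrics through Micrometer, which Spring Boot includes and which the Application Insights Java agent collects automatically; the following sketch uses a hypothetical metric name and component, not code from the reference implementation.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class SupportCaseMetrics {

    private final Counter casesCreated;

    public SupportCaseMetrics(MeterRegistry registry) {
        // Custom business metric; the Application Insights Java agent forwards
        // Micrometer metrics as custom metrics.
        this.casesCreated = Counter.builder("cams.support_cases.created") // hypothetical metric name
            .description("Number of support cases created")
            .register(registry);
    }

    public void recordCaseCreated() {
        casesCreated.increment();
    }
}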
Monitor the platform. Enable diagnostics for all supported services and send diagnostics to the same destination as the application logs for correlation. Azure services create platform logs automatically but only store them when you enable diagnostics. Enable diagnostic settings for each service that supports diagnostics. The reference implementation uses Terraform to enable Azure diagnostics on all supported services. The following Terraform code configures the diagnostic settings for the App Service.
# Configure Diagnostic Settings for App Service
resource "azurerm_monitor_diagnostic_setting" "app_service_diagnostic" {
  name                       = "app-service-diagnostic-settings"
  target_resource_id         = azurerm_linux_web_app.application.id
  log_analytics_workspace_id = var.log_analytics_workspace_id
  #log_analytics_destination_type = "AzureDiagnostics"

  enabled_log {
    category_group = "allLogs"
  }

  metric {
    category = "AllMetrics"
    enabled  = true
  }
}
Deploy the reference implementation
The reference implementation guides developers through a simulated migration from an on-premises Java application to Azure, highlighting necessary changes during the initial adoption phase. This example uses a Customer Account Management System (CAMS) web application for the fictional company Contoso Fiber. Contoso Fiber set the following goals for their web application:
- Implement low-cost, high-value code changes
- Achieve a service level objective (SLO) of 99.9%
- Adopt DevOps practices
- Create cost-optimized environments
- Enhance reliability and security
Contoso Fiber determined that their on-premises infrastructure wasn't a cost-effective solution to meet these goals. They decided that migrating their CAMS web application to Azure was the most cost-effective way to achieve their immediate and future goals. The following architecture represents the end-state of Contoso Fiber's Reliable Web App pattern implementation.
Figure 4. Architecture of the reference implementation. Download a Visio file of this architecture.