Speaker:
Karthik Krishnaswamy, Sr Product Marketing Manager
NGINX, Inc.
About the Webinar
Technology is revolutionizing our lives – we hail cabs, order food, and stream video right from apps on our mobile devices. At the heart of this disruption are enterprises that embrace digital transformation – according to a survey from IDC sponsored by NGINX, 75% of businesses will have completed their digital transformation by 2030.
To support digital transformation, enterprises are modernizing their applications and underlying infrastructure by adopting a variety of new processes and technologies such as DevOps, cloud, open source, and microservices. Software load balancing is central to accelerating digital transformation efforts. That’s where NGINX comes in – it’s a lightweight, flexible, and portable load balancer that helps you gain agility and simplifies your architecture.
In this webinar we cover how to gracefully migrate your BIG-IP deployment to NGINX, the ADC built for modern applications.
You may have heard that F5 has recently acquired NGINX.
Today I will share the incremental value that F5 and NGINX will deliver to customers as a combined entity – for your developers, DevOps teams, and IT operations.
Let’s talk about the age of digital transformation. Every aspect of enterprise software – development, deployment, culture, and processes – is going through a transformation.
It’s about going from the old world to the new world: from legacy, monolithic applications to cloud-based, microservices applications. The new world is lightweight and API-based, not heavyweight like the older app environments. The new world is flexible, running on containers, VMs, or both – not the old inflexible world where software is coupled directly to hardware. The new world is software-defined and cloud-centric, with automated infrastructure, unlike the old fixed and static appliance model. And the new world is about continuous delivery and DevOps, not the old big-bang waterfall approach with silos between dev, test, and operations.
Application delivery infrastructure needs to be designed from the beginning to operate in this new world if it’s to overcome the bottlenecks and thrive in the cloud and digital eras.
Let’s dive deeper into DevOps. Let me give you a formal definition from Amazon: DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity. This speed enables organizations to better serve their customers and compete more effectively in the market.
So it’s an approach that breaks down silos between dev and ops teams. Dev, IT, and operations work very closely during app development as well as during the rollout of releases and deployments. What are the benefits of this approach? Because of this close collaboration, teams are able to introduce new features very quickly. Amazon deploys code every 11.7 seconds! Nordstrom went from twice per year to monthly for their mobile apps.
No more submitting tickets to IT for infrastructure needs. Through the use of tools such as Chef or Puppet, and by automating common configuration, deployment, and testing tasks, teams are able to improve agility and respond quickly to the needs of their customers.
This also results in greater stability and reliability. Bugs are detected early and resolved rapidly, thanks to a continuous feedback loop and, again, close collaboration between Dev & Ops.
You also improve efficiency. Time spent by Dev & Ops teams is optimized. Mundane, repetitive tasks, such as provisioning a new server from a pre-defined template, are replaced by automation. Your teams are able to focus on innovation and building the new features your customers want.
It results in reduced costs and reduced management complexity. Put this all together and you save time and effort, and you deliver a product your customers want.
So to summarize, these are the characteristics of DevOps. It’s about moving away from a siloed culture where there isn’t much collaboration between development, QA, and IT operations teams. It’s about empowering these teams to make decisions quickly, with a high degree of autonomy. And it’s about embracing automation, wherein these teams employ Continuous Integration/Continuous Delivery (CI/CD) processes to get new builds into the hands of users as quickly as possible.
Thus DevOps results in higher feature velocity. New features address user requests – whether that’s new functionality, a better user experience, or fixes for bugs that cause a lot of pain. Because of a mature feedback loop, features address what users want – and they are delivered quickly.
The Accelerate State of DevOps Report represents five years of work surveying over 30,000 technical professionals worldwide. Cluster analysis helps teams benchmark themselves against the industry as elite, high, medium, or low performers, and predictive analysis identifies specific strategies that teams can use to improve their performance.
Let’s go through some key metrics to measure DevOps effectiveness.
Lead time for changes: how long does it take to go from code commit to code successfully running in production?
Deployment frequency: how often does code get deployed? It’s worth noting that four deploys per day is a conservative estimate when compared against companies such as CapitalOne that report deploying 50 times per day, or companies such as Google and Netflix that deploy thousands of times per day (aggregated over the hundreds of services that comprise their production environments).
Lead time for changes and deployment frequency together determine your throughput.
Time to restore service: how long does it generally take to restore service when a service incident occurs (e.g., an unplanned outage)?
Change failure rate: what percentage of changes results either in degraded service or subsequently requires remediation (e.g., leads to service impairment or a service outage, or requires a hotfix, rollback, fix forward, or patch)?
A common industry practice is to approach throughput and stability as a trade-off. But this research (DORA) consistently finds that the highest performers achieve excellence in both throughput and stability without making trade-offs. In fact, throughput and stability enable one another.
Time to restore service and change failure rate together determine your stability.
The load balancer plays a key role in achieving high throughput and stability, as it is the infrastructure that helps you deploy and deliver your apps. Load balancers also ensure the reliability and performance of your applications.
Efficiency: on-demand instances let you pay for compute capacity by the hour or second (with a minimum of 60 seconds), with no long-term commitments. You can access as much or as little as you need, and scale up and down as required with only a few minutes’ notice.
Scale IT resources up and down based on demand, and pay only for the IT resources consumed.
Agility: in a cloud computing environment, new IT resources can be obtained and deployed with just a few clicks, which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
It’s also easy to improve scale. There’s no need for an initial investment in a data center to handle scale, and you can easily manage sudden spikes in demand or account for seasonality.
Microservices is an approach to software architecture that builds a large, complex application from multiple small components that each perform a single function, such as authentication, notification, or payment processing. Each microservice is a distinct unit within the software development project, with its own codebase, infrastructure, and database. The microservices work together, communicating through web APIs or messaging queues to respond to incoming events.
You break down a monolith into a number of microservices, each performing a single function. What are the benefits of this approach?
Resilience: better fault isolation – if one microservice fails, the others continue to work, so the whole system is not impacted or taken down by errors in an individual part of the system. The Circuit Breaker pattern wraps a protected function call in a circuit breaker object, which monitors for failures. Once failures cross a threshold, the circuit breaker trips, and all further calls return an error immediately, without the protected call being made at all, for a configured timeout (see the sketch after this list).
Reusability and scalability: better scaling – different parts of the system can be scaled independently.
Improved agility: software built as microservices can be broken down into multiple component services, so that each service can be deployed and redeployed independently without compromising the integrity of the application. That means a microservice architecture gives developers the freedom to independently develop and deploy services. Different teams can work on different components simultaneously without having to wait for one team to finish a chunk of work before starting theirs. This shortens cycle times.
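As a rough illustration, NGINX’s passive health checks give you circuit-breaker-like behavior at the load-balancer level: after enough failures, an unhealthy server is taken out of rotation for a timeout period. This is a minimal sketch only – the upstream name, hosts, and thresholds are hypothetical.

    # Sketch: circuit-breaker-like behavior via NGINX passive health checks.
    upstream payment_service {
        # After 3 failed attempts within 30s, take the server out of
        # rotation ("trip the breaker") for the next 30s.
        server payment1.example.com:8080 max_fails=3 fail_timeout=30s;
        server payment2.example.com:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location /payments/ {
            proxy_pass http://payment_service;
            # Retry another server instead of failing the client request
            # when one instance errors out or times out.
            proxy_next_upstream error timeout http_500;
        }
    }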
NGINX is a very popular web server.
NGINX Plus is the core data plane – a load balancer, API gateway, WAF, reverse proxy, and content cache. NGINX Plus delivers high performance for your applications while remaining highly resource efficient, with very low resource utilization.
Controller provides configuration, monitoring, and troubleshooting capabilities when you deploy NGINX Plus instances as load balancers. Controller offers full API lifecycle management – define and publish APIs, manage API traffic, and monitor and analyze API usage. A service mesh provides governance, security, and control for environments with many microservices. An upcoming NGINX Controller module will manage and monitor NGINX Plus service meshes, apply microservices traffic policies, and simplify workflows.
HTTP, TCP, and UDP load balancing
Layer 7 request routing using URI, cookie, args, and more
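To make Layer 7 routing concrete, here’s a minimal sketch that routes on URI prefix and on a cookie; the upstream names, addresses, and cookie name are hypothetical.

    # Sketch: Layer 7 request routing on URI and cookie.
    upstream api_servers  { server 10.0.0.11:8080; }
    upstream web_servers  { server 10.0.0.21:8080; }
    upstream beta_servers { server 10.0.0.31:8080; }

    # Choose a pool based on a "beta" cookie; default to the main pool.
    map $cookie_beta $pool {
        "1"     beta_servers;
        default web_servers;
    }

    server {
        listen 80;
        # Route by URI prefix.
        location /api/ { proxy_pass http://api_servers; }
        # Route everything else by cookie, via the map above.
        location /     { proxy_pass http://$pool; }
    }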
NGINX Plus adds:
Session persistence based on cookie *: NGINX Plus can identify user sessions and send all requests in a client session to the same upstream server. This can avoid fatal errors that might otherwise result when app servers store state locally and a load balancer sends an in‑progress user session to a different server. Session persistence can also improve performance when applications share information across a cluster.
Active health checks on status code and response body *: NGINX Plus performs out-of-band application health checks (also known as synthetic transactions) and offers a slow‑start feature to gracefully add new and recovered servers into the load‑balanced group.
These features enable NGINX Plus to detect and work around a much wider variety of problems, significantly improving the reliability of your HTTP and TCP/UDP applications.
Service discovery using DNS *: NGINX servers resolve DNS names when they start up and cache the resolved values persistently. When you identify a group of upstream servers with a domain name (such as example.com), NGINX Plus periodically re‑resolves the name in DNS. If the associated list of IP addresses has changed, NGINX Plus immediately starts load balancing across the updated group of servers.
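Putting these three NGINX Plus features together, a minimal sketch might look like this; the domain name, zone size, probe URI, and intervals are hypothetical. (Matching on a specific status code or response body is done with a match block, omitted here for brevity.)

    # Sketch: session persistence, active health checks, and DNS-based
    # service discovery in one NGINX Plus configuration.
    resolver 10.0.0.2 valid=10s;             # re-resolve names every 10s

    upstream app_backend {
        zone app_backend 64k;                # shared memory for runtime state
        server app.example.com:8080 resolve; # DNS-based service discovery
        sticky cookie srv_id expires=1h;     # cookie-based session persistence
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
            # Out-of-band probe; servers that fail are removed from
            # rotation until they pass again.
            health_check uri=/healthz interval=5s;
        }
    }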
The key‑value store provides a wealth of dynamic configuration solutions.
Sample use cases include:
Dynamic IP blacklisting (see the NGINX Plus Admin Guide)
Managing lists of permitted URIs per user
You can use the NGINX Plus API to create, modify, and remove key‑value pairs on the fly in one or more “keyval” shared memory zones. The value of each key‑value pair can then be evaluated as a variable for use by other NGINX Plus features.
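For instance, the dynamic IP blacklisting use case might be sketched as follows; the zone name, state file path, and backend address are hypothetical.

    # Sketch: dynamic IP blacklisting with the NGINX Plus key-value store.
    keyval_zone zone=blacklist:1m state=/var/lib/nginx/blacklist.json;
    keyval $remote_addr $is_blocked zone=blacklist;  # look up the client IP

    upstream app_backend { server 10.0.0.11:8080; }

    server {
        listen 80;
        location /api/ {
            api write=on;  # NGINX Plus API; restrict access in production
        }
        location / {
            if ($is_blocked) { return 403; }  # key present => block
            proxy_pass http://app_backend;
        }
    }

An address can then be blocked on the fly by POSTing a JSON object such as {"203.0.113.7":"1"} to the keyvals endpoint of the NGINX Plus API (the exact path depends on your API version).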
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle.
As you prepare a new version of your software, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the LB configuration so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
Use the NGINX Plus API to update upstream configurations and key‑value stores on the fly with zero downtime. You can add and remove upstream servers, as well as change load-balancer behavior, to handle more scale or deploy new features.
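One way to wire this up is to drive the blue-green switch from the key-value store, so the live environment can be flipped with a single API call. This is a sketch only – the upstream names, key, and addresses are hypothetical.

    # Sketch: blue-green switching driven by the key-value store.
    upstream blue  { zone blue 64k;  server 10.0.0.10:8080; }
    upstream green { zone green 64k; server 10.0.0.20:8080; }

    keyval_zone zone=live:64k state=/var/lib/nginx/live.json;
    keyval "env" $live_env zone=live;  # one well-known key names the live env

    map $live_env $live_upstream {
        "green" green;
        default blue;                  # Blue serves until the key says otherwise
    }

    server {
        listen 80;
        location /api/ { api write=on; }  # restrict access in production
        location /     { proxy_pass http://$live_upstream; }
    }

Cutting over to Green is then a single API call that sets the env key to green; setting it back rolls the change back just as quickly.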
Being software, NGINX Plus can operate in any environment, from bare metal to VMs to containers. We don’t need to QA and qualify every environment: if you can run Linux, you can run NGINX, and it will just work.
Not just across infrastructure, but the same NGINX software that runs in production can also run in staging and development environments without incurring additional capital costs.
Keeping the different environments in sync as much as possible is an industry best practice and helps to reduce issues where it worked in dev but broke in production.
With NGINX Plus enterprises can easily eliminate this potential gap in the deployment process.
It provides the following capabilities:
- Simplifies configuration of load balancers at scale
- Enables a policy-driven approach to configuration to ensure consistency and prevent misconfigurations
- Helps you avoid performance issues by providing preemptive recommendations
- Helps you meet your SLAs by enabling you to root-cause and troubleshoot performance and security issues quickly
- Integrates with your toolchain
With BIG-IP, configuration is more complex: you must allocate an IP address to create each new virtual server. NGINX uses Layer 7 routing to multiplex connections for many services on a single IP address (see the sketch below).
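As a minimal sketch (hostnames and addresses are hypothetical), two virtual servers can share one IP and port, with the Host header – or SNI, for TLS – selecting which one handles each request:

    upstream shop_servers { server 10.0.0.11:8080; }
    upstream api_servers  { server 10.0.0.12:8080; }

    # Two virtual servers on the same IP and port; NGINX picks one
    # per request based on the Host header.
    server {
        listen 192.0.2.10:80;
        server_name shop.example.com;
        location / { proxy_pass http://shop_servers; }
    }
    server {
        listen 192.0.2.10:80;
        server_name api.example.com;
        location / { proxy_pass http://api_servers; }
    }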
In containerized environments, BIG-IP requires a separate container that programs BIG-IP to route traffic. NGINX, by contrast, is less than 2 MB in size.
Bluestem Brands is an ecommerce retailer that offers a robust selection of consumer products. It is the parent company of 13 fast‑growing ecommerce retail brands, and runs a separate site for each brand where customers shop for apparel, shoes, gifts, home accessories, and more.
They were experiencing serious challenges with managing traffic. The machines that handled production traffic for the sites hosted both an Apache web server and a Tomcat application server. During peak shopping periods, such as the holiday season, the web servers quickly hit capacity. Even just a couple hundred connections on a machine caused it to stop accepting connections. Clients would retry their requests on another machine, which soon became overloaded and stopped accepting connections, and so on – until all machines were affected.
Bluestem Brands sees traffic more evenly distributed. With the old load‑balancing solution, numbers always looked off and it wasn’t clear what was causing the issue. Sometimes a machine would get 70 or 80 more connections than another, whereas with NGINX Plus the variance is just 1 or 2 connections. This provides confidence that the solution is working correctly, which eases the tension at the holidays and allows Bluestem Brands to move on to optimizing other parts of its business.
DevOps engineers set up the NGINX Plus instances so that changes to the configuration can be deployed via Jenkins. Developers can adjust the configuration on their own, as needed, and because deployments are automated, the changes can be live in production within seconds.
In addition, Bluestem Brands is already doing blue‑green deployments and is moving towards doing canary releases with NGINX Plus. Chamberlain explains, “With the way we have NGINX Plus set up, it makes it easy for our developers to adopt a canary release model. NGINX Plus easily integrates with Jenkins, which means we have a simple push button to enable continuous delivery and to help us evolve our applications for our customers.”