Industry Use Case of Kubernetes/OpenShift
In industry, we launch and terminate operating systems thousands of times. Launching one the traditional way is slow: roughly 10 minutes to boot the OS, 30 minutes to install it, and another 10 minutes to configure it. Think how much time that consumes in a real company. If we need the same setup again, we waste another hour configuring the same system, and if we launch and terminate 100 such systems each day, our entire time goes into provisioning operating systems alone. How can we run a business that way?
To solve this, we need a tool that can launch and terminate operating systems across all our systems at the same time and takes far less time to configure.
Enter a tool that can launch or terminate an OS in about one second.
Docker is one such tool: it can launch or terminate an operating system environment in roughly a second, because it is built on pure containerization technology.
But there is one challenge: if a Docker container fails, the workload it runs goes down, and our business goes down with it. Still, Docker solved one of the most critical challenges, which is launching and terminating operating system environments quickly.
What if we had a tool that could also manage Docker containers? That would be hugely beneficial for our company, and in fact many such tools exist. These tools are generally known as container management tools, and the most famous of them is Kubernetes.
Kubernetes is by far the most popular container orchestration tool, yet the complexities of managing the tool have led to the rise of fully-managed Kubernetes services over the past few years.
Although Azure supports multiple container tools, it's now going all-in on Kubernetes and will deprecate its original offerings this year. The great part about cloud-based managed Kubernetes services like Azure Kubernetes Service (AKS) is that they integrate natively with other Azure services, and we don't have to worry about managing the availability of our underlying clusters, autoscaling, or patching the underlying VMs.
In this blog post, we'll review industry use cases of Kubernetes and OpenShift.
Why Use Kubernetes?
When running containers in a production environment, they need to be managed to ensure they are operating as expected and that there is no downtime.
- Container Orchestration: Without container orchestration, if a container were to go down and stop working, an engineer would need to notice the failure and manually start a new one. Wouldn't it be better if this was handled automatically by its own system? Kubernetes provides a robust declarative framework to run your containerized applications and services resiliently.
- Cloud Agnostic: Kubernetes has been designed and built to be used anywhere (public/private/hybrid clouds)
- Prevents Vendor Lock-In: Your containerized application and Kubernetes manifests will run the same way on any platform with minimal changes
- Increase Developer Agility and Faster Time-to-Market: Spend less time scripting deployment workflows and more time developing. Kubernetes provides declarative configuration that lets engineers define how their service should be run; Kubernetes then ensures the desired state of the application is maintained
- Cloud Aware: Kubernetes understands and supports a number of clouds, such as Google Cloud, Azure, and AWS. This allows Kubernetes to instantiate various public cloud resources, such as instances, VMs, load balancers, public IPs, and storage.
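The declarative, self-healing model above can be sketched with a minimal Deployment manifest (the names and image here are placeholders, not from any real deployment): you declare the desired number of replicas, and if a container fails, Kubernetes starts a replacement automatically.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical name
spec:
  replicas: 3              # desired state: keep 3 pods running at all times
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image would do here
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f` hands responsibility to Kubernetes: engineers describe the end state rather than scripting the deployment steps.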
Basics of Azure Kubernetes Services
Azure Kubernetes Service (AKS) is a fully-managed service that allows you to run Kubernetes in Azure without having to manage your own Kubernetes clusters. Azure manages all the complex parts of running Kubernetes, and you can focus on your containers. Basic features include:
- Pay only for the nodes (VMs)
- Easier cluster upgrades
- Integrated with various Azure and OSS tools and services
- Kubernetes RBAC and Azure Active Directory Integration
- Enforce rules defined in Azure Policy across multiple clusters
- Kubernetes can scale your nodes using the cluster autoscaler
- Expand your scale even greater by scheduling your containers on Azure Container Instances
Organizations adopt AKS to solve the following business use cases:
- Achieve portability across on-prem and public clouds
- Accelerate containerized application development
- Unify development and operational teams on a single platform
- Take advantage of native integration into the Azure ecosystem to easily achieve:
- Enterprise-Grade Security
- Azure Active Directory integration
- Track, validate, and enforce compliance across the Azure estate and AKS clusters
- Hardened OS images for nodes
The customer’s architecture includes many of the common best practices to ensure we can meet the customer’s business and operational requirements:
Cluster Multi-Tenancy
SDLC environments are split across two clusters, isolating production from lower-level SDLC environments such as dev and stage. The use of namespaces provides the same operational benefits while saving cost and operational complexity by not deploying a separate AKS cluster per SDLC environment.
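As an illustration of this pattern (the namespace names are hypothetical), the lower-level environments can each be carved out as a namespace on the non-production cluster:

```yaml
# Two namespaces on the same non-production cluster,
# giving dev and stage logical isolation without a second cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: stage
```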
Scheduling and Resource Quotas
Since multiple SDLC environments and other applications share the same cluster, it’s imperative that scheduling and resource quotas are established to ensure applications and the services they depend on get the resources required for operation. When combined with the cluster autoscaler, we can ensure that our applications get the resources they need and that compute infrastructure is scaled out when they need it.
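A ResourceQuota caps what each tenant namespace can consume; a sketch for the dev namespace might look like this (the quota name, namespace, and limits are illustrative assumptions, not prescriptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota     # hypothetical name
  namespace: dev      # applies only to workloads in this namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU all pods may request
    requests.memory: 8Gi    # total memory all pods may request
    limits.cpu: "8"         # ceiling on CPU limits across the namespace
    limits.memory: 16Gi     # ceiling on memory limits
    pods: "20"              # maximum number of pods
```

With quotas in place, one noisy environment cannot starve the others of cluster resources.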
Azure AD integration
Azure AD is leveraged to authenticate and authorize users to access and initiate CRUD (create, read, update, and delete) operations against AKS clusters. AAD integration makes it convenient to unify the layers of authentication (Azure and Kubernetes) and give the right personnel the level of access they require to meet their responsibilities while adhering to the principle of least privilege.
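With AAD integration enabled, Kubernetes RBAC can bind roles directly to AAD groups. A sketch of such a binding (the binding name, namespace, and group object ID are placeholders) grants a team the built-in `edit` ClusterRole, scoped to one namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit   # hypothetical binding name
  namespace: dev        # scope: access applies only inside this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit            # built-in role: read/write most namespaced resources
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "00000000-0000-0000-0000-000000000000"  # AAD group object ID (placeholder)
```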
Pod Identities
Instead of hardcoding static credentials within our containers, Pod Identity is deployed into the default namespace and dynamically assigns Managed Identities to the appropriate pods determined by label. This provides our example application the ability to write to Cosmos DB and our CI/CD pipelines the ability to deploy containers to production and stage clusters.
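With the AAD Pod Identity project, the label-to-identity mapping described above looks roughly like this (the identity and selector names are hypothetical); any pod carrying the matching `aadpodidbinding` label is dynamically assigned the Managed Identity:

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: cosmos-writer-binding   # hypothetical name
spec:
  azureIdentity: cosmos-writer  # an AzureIdentity resource wrapping a Managed Identity
  selector: cosmos-writer       # pods labeled aadpodidbinding: cosmos-writer get this identity
```

The application pod then needs only the label `aadpodidbinding: cosmos-writer` in its template; no credentials ever appear in the container image or manifests.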
Ingress Controller
Ingress controllers bring traffic into the AKS cluster by creating ingress rules and routes, providing application services with reverse proxying, traffic routing/load balancing, and TLS termination. This allows us to evenly distribute traffic across our application services to ensure scalability and meet reliability requirements.
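An ingress rule tying these pieces together might be sketched as follows (the hostname, Secret, and Service names are placeholders, and an NGINX ingress controller is assumed to be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes an NGINX ingress controller
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls                  # TLS cert stored as a Kubernetes Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app                # hypothetical backing Service
            port:
              number: 80
```

TLS terminates at the ingress controller, which then load-balances plain HTTP across the Service's pods.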
Monitoring
Naturally, monitoring the day-to-day performance and operations of our AKS clusters is key to maintaining uptime and proactively solving potential issues. Using AKS’ toggle-based implementation, application services hosted on the AKS cluster can easily be monitored and debugged using Azure Monitor.
Summary about AKS
Azure Kubernetes Service is a powerful service for running containers in the cloud. Best of all, we only pay for the VMs and other resources consumed, not for AKS itself, so it’s easy to try out. With the best practices described in this post and the AKS Quickstart, we should be able to launch a test cluster in under an hour and see the benefits of AKS for ourselves.
Containers vs. Virtual Machines
Containers
- Typically provides lightweight isolation from the host and other containers, but doesn’t provide as strong a security boundary as a VM. (You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM.)
- Runs the user-mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.
- Runs on the same operating system version as the host (Hyper-V isolation enables you to run earlier versions of the same OS in a lightweight VM environment)
- Deploy individual containers by using Docker via the command line; deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.
- Use Azure Disks for local storage for a single node, or Azure Files (SMB shares) for storage shared by multiple nodes or servers.
Virtual machines
- Provides complete isolation from the host operating system and other VMs. This is useful when a strong security boundary is critical, such as hosting apps from competing companies on the same server or cluster.
- Runs a complete operating system including the kernel, thus requiring more system resources (CPU, memory, and storage).
- Runs just about any operating system inside the virtual machine
- Deploy individual VMs by using Windows Admin Center or Hyper-V Manager; deploy multiple VMs by using PowerShell or System Center Virtual Machine Manager.
- Use a virtual hard disk (VHD) for local storage for a single VM, or an SMB file share for storage shared by multiple servers.
What is OpenShift?
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform — an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.
Red Hat OpenShift is a multifaceted, open source container application platform from Red Hat, Inc. for the development, deployment, and management of applications.
OpenShift Features
- Pod autoscaling
- High availability
- Cloud infrastructure choice
- Responsive web console
- Rich command-line tool set
- IDE integration
- Open Source
- OperatorHub
- CI/CD
- Service Mesh
- Serverless
- Application Topology
- Quay 3.2
- Over The Air Update
Podman
Podman is an open-source project that is available on most Linux platforms and is hosted on GitHub. Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on your Linux system. Podman provides a Docker-compatible command-line front end that can simply alias the Docker CLI: alias docker=podman.