Serverless is a development model for cloud-native applications that avoids any server management tasks and requirements, letting developers focus on delivering applications. These applications follow an event-driven architecture to communicate between services.
In the serverless paradigm, servers are fully managed by the cloud provider, which takes care of provisioning, maintaining, and scaling them. The main advantage of this model is minimal resource usage, which translates directly into cost reduction: because you are charged only for the resources you consume, nothing is charged when no resources are used.
A Kubernetes cluster can run any kind of workload and adapts to almost any distributed architecture, but it requires substantial configuration and tooling to set up properly.
The Knative project (https://knative.dev) is a set of services designed to run on a Kubernetes cluster, which takes care of all the additional steps required to run serverless applications in a cluster.
You can think of Knative as an abstraction layer on top of Kubernetes, which allows developers to start deploying applications in a standardized way. This avoids all the burden that initial deployment in Kubernetes requires, and simplifies application deployment and upgrade, service interconnection, traffic routing, and autoscaling.
Knative achieves this by using two components: Knative Serving and Knative Eventing.
Each component consists of a series of components packaged as containers and a set of Custom Resource Definitions (CRD) that take care of each responsibility independently.
Knative runs in any Kubernetes distribution including Red Hat OpenShift.
Knative Serving is the component responsible for:
Deploying and managing serverless workloads
Automatically scaling services up and down, including down to zero
Routing traffic to services
Keeping a revision, a point-in-time snapshot, of each deployed version
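As a sketch of how little configuration this requires, a service can be deployed with a single kn command. The service name and image below are illustrative, not from the original text:

```shell
# Deploy a container image as a Knative service; Knative Serving
# creates the underlying Deployment, Route, and autoscaling setup.
kn service create greeter \
  --image quay.io/example/greeter:latest

# List deployed services and the URLs Knative assigned to them.
kn service list
```

Running this requires a Kubernetes cluster with Knative Serving installed and the kn CLI configured against it.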
Knative Serving also handles upgrades seamlessly: each new version of a service produces a new revision, backed by its own deployment. Pods created by the new deployment receive traffic as they become ready.
When there is no traffic, Knative Serving scales the application pods down to zero and redirects incoming requests to a component of the Autoscaler called the Activator. The Activator proxies and buffers requests until an application pod is ready. Once the activation process finishes, the Autoscaler routes traffic directly to the new pod, and the Activator is not used again until the application scales down to zero.
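The scale-to-zero behavior described above can be tuned per service. A minimal sketch, assuming the greeter service from earlier and the scale flags available in recent kn releases (flag names have varied across versions):

```shell
# Allow the service to scale to zero when idle, and cap it at five
# replicas under load. Equivalent settings can be expressed as
# autoscaling.knative.dev/* annotations in YAML.
kn service update greeter \
  --scale-min 0 \
  --scale-max 5
```

With --scale-min set to 0, the first request after an idle period is buffered by the Activator while a new pod starts, so expect a brief cold-start delay.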
The Knative Eventing component is responsible for asynchronously interconnecting the different parts of your distributed application.
With Knative Eventing, instead of making the parts of the application depend directly on each other, applications emit events that other components react to and process. Knative provides an event Broker and Channels to let services communicate asynchronously.
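As a hedged sketch of the Broker-based flow, the following creates a Broker and subscribes a service to one event type via a Trigger. The broker, trigger, event type, and sink names are assumptions for illustration:

```shell
# Create a Broker that producers send events to.
kn broker create default

# Create a Trigger that filters events on the Broker by CloudEvents
# type and delivers matching events to the order-processor service.
kn trigger create order-created \
  --broker default \
  --filter type=com.example.order.created \
  --sink ksvc:order-processor
```

Producers only need to POST CloudEvents to the Broker; they never address the consuming service directly, which is what decouples the components.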
You can use Knative either declaratively, with YAML documents that describe the desired state of your serverless application, or imperatively, through the kn command-line interface (CLI). The examples cover the kn CLI, but you can achieve the same results with the YAML approach.
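For reference, the declarative equivalent of a simple kn service create is a Knative Service manifest applied with kubectl. The name and image below are illustrative:

```yaml
# greeter.yaml -- apply with: kubectl apply -f greeter.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest
```

Both approaches produce the same Service object in the cluster; the YAML form is better suited to version control and GitOps workflows.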