Thursday, August 06, 2020

How being able to write custom Kubernetes components changes the game

There are two common approaches to scaling, described below.


On the left, we have the traditional model of scaling: spin up a Deployment with a scalable number of replicas fronted by a Kubernetes Service, and use this to process your workload.
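The traditional model can be sketched as a pair of manifests: a Deployment whose replica count is the scaling knob, and a Service that load-balances across those replicas. This is a minimal illustration built as plain Python dicts; the `worker` name, image, and ports are hypothetical placeholders, not from the post.

```python
def scaled_deployment(name, image, replicas):
    """Build the traditional scaling pair: a Deployment with N replicas
    and a Service that load-balances across them via a label selector."""
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # the scaling knob
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            # Routes to ANY ready replica -- requests share the pool
            "selector": {"app": name},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }
    return deployment, service

dep, svc = scaled_deployment("worker", "example.com/worker:v1", replicas=5)
```

Scaling up or down is just a change to `replicas`; the Service keeps routing to whichever pods match the label.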

On the right, we have scalability at the function level: spin up many instances of a function to process your workload.

Being able to "Program Kubernetes" provides a third, more powerful, option.   

This is particularly valuable when it comes to debugging. As I discussed in my previous post on Laplace's Demon, eliminating data entropy with respect to calculations promotes debuggability.

If you can dynamically create sets of Kubernetes resources that are targeted to a single problem with a single data set, then debugging and auditing become trivial.

Instead of scaling the way apps have traditionally scaled outside of Kubernetes, using Services to load-balance across instances, route each request to an isolated set of resources. This provides predictability with respect to every aspect of the calculation.
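One way to sketch this third option: for each incoming request, stamp out a dedicated ConfigMap (a frozen copy of that request's data) and a Job that mounts it, with a request ID label tying the set together for auditing. The names, image, and label keys here are illustrative assumptions, not an API the post prescribes.

```python
import json
import uuid

def isolated_resource_set(request_id, image, input_data):
    """Build a per-request ConfigMap + Job pair. Each calculation gets its
    own immutable input and its own pod -- no shared state, no entropy."""
    name = f"calc-{request_id}"
    labels = {"app": "calc", "request-id": request_id}
    configmap = {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "labels": labels},
        # Snapshot of the input: auditing later means reading this back
        "data": {"input.json": json.dumps(input_data)},
    }
    job = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "calc",
                        "image": image,
                        # The Job sees only its own frozen data set
                        "volumeMounts": [{"name": "input",
                                          "mountPath": "/data"}],
                    }],
                    "volumes": [{"name": "input",
                                 "configMap": {"name": name}}],
                },
            },
        },
    }
    return configmap, job

request_id = uuid.uuid4().hex[:8]
cm, job = isolated_resource_set(request_id, "example.com/calc:v1", {"x": 42})
```

Because every resource in the set carries the same `request-id` label, a single label selector retrieves everything that touched one calculation, which is what makes the debugging and auditing story trivial.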






