Wednesday, July 31, 2019

Power potential of Kubernetes Native applications

Much of what I have been talking about regarding "mechanical sympathy" builds on the ideas in this post on mechanical sympathy and the cloud:

https://infrastructure-as-code.com/book/2015/03/23/mechanical-sympathy.html

I have stretched the meaning of "mechanical sympathy" to encompass any code that is aware of its execution context and takes steps to maximize its power based on that knowledge. This goes back to Jackie Stewart's original intent for the term: understanding the car made you a better racer.



If you allow yourself to decouple the concept of "mechanical sympathy" from its traditional hardware context, you can see that there is a "higher-order mechanical sympathy" in an application that can programmatically arrange compute resources to work hand in hand with its computational goals. This second form would be measured not by performance gains but by the increase in operational power.

Just as, at a lower level, you are concerned with the system making efficient use of things like hard drives and memory, a higher-order mechanical sympathy would be concerned with making efficient use of Kubernetes resources. Mechanical sympathy is an expression of a component's awareness of the context in which it resides.

Harnessing the power of the Kubernetes api-machinery gives you increased leverage, and this has proven useful in many different contexts. From Operators that help manage day-two operations to full CI/CD systems, building on top of the api-machinery of Kubernetes is a growing trend in computing.

1. At the lowest level, you have the basic usage of Kubernetes.
2. At the next level, you have Operators that help manage the operational state of your deployments.
3. At the third level, you have sets of Custom Resource Definitions (CRDs) that act together to provide the base infrastructure for your compute environment.
4. At the final level, you have CRDs that can dynamically alter a compute environment based on runtime conditions and communication with the application itself.
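A "level 3" design is typically expressed through CustomResourceDefinitions. As a minimal sketch (the group, kind, and fields below are hypothetical, invented for illustration), a platform team might declare a new infrastructure resource like this:

```yaml
# Hypothetical CRD: a declarative routing resource a platform team
# could register so the cluster understands a new infrastructure concept.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: meshroutes.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: meshroutes
    singular: meshroute
    kind: MeshRoute
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                host:
                  type: string
                weight:
                  type: integer
```

Once a CRD like this is registered, `kubectl` and controllers can create and watch `MeshRoute` objects just like built-in resources, which is what lets sets of CRDs act together as base infrastructure.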

An interesting discussion on the origins of Istio inside of Google sheds light on this trend towards using the base Kubernetes machinery to solve common infrastructure needs.

https://podcasts.apple.com/us/podcast/kubernetes-podcast-from-google/id1370049232?i=1000441965650

I would say that the implementation of Istio is an example of a "level 3" design, where a set of custom CRDs is used to enhance system design at the infrastructure layer.

I think it's possible to go one step further and exploit the power of the Kubernetes api-machinery at a higher level by creating Kubernetes CRDs that can communicate with an application's business logic. For example, the application could actively manage sets of Kubernetes Services programmatically. This would be a "level 4" design, the level at which this higher-order mechanical sympathy is achieved.
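To make the "level 4" idea concrete, here is a minimal sketch of business logic that computes the set of Services it wants to exist based on its own runtime state. Everything here (the `ShardRouter` name, the shard labels) is hypothetical; actually submitting the manifests to the API server would go through a client library such as the official `kubernetes` Python package, which is stubbed out below.

```python
def service_manifest(shard_id: int, port: int = 8080) -> dict:
    """Build a Service manifest routing traffic to one shard's pods.

    Labels and names are illustrative; a real application would use its
    own naming scheme.
    """
    name = f"shard-{shard_id}"
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name, "labels": {"app": "shard-router"}},
        "spec": {
            "selector": {"app": "shard-router", "shard": str(shard_id)},
            "ports": [{"protocol": "TCP", "port": port, "targetPort": port}],
        },
    }


class ShardRouter:
    """Business logic that decides at runtime which Services it needs."""

    def __init__(self):
        self.desired = {}  # service name -> manifest

    def scale_to(self, shard_count: int) -> list:
        # Recompute the desired set of Services from application state.
        # A real implementation would diff this against the cluster and
        # create/delete Services via the Kubernetes API server.
        self.desired = {
            f"shard-{i}": service_manifest(i) for i in range(shard_count)
        }
        return sorted(self.desired)


if __name__ == "__main__":
    router = ShardRouter()
    print(router.scale_to(3))  # → ['shard-0', 'shard-1', 'shard-2']
```

The point is not the specific sharding scheme but the shape of the design: the application itself, not an external operator, owns the mapping from business state to Kubernetes resources.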

