Saturday, October 31, 2020

The ability to cycle through resources phase 2

Code for a compute engine is generally seen as the syntax needed to perform operations.  The realization that modules of this compute syntax can have a physical presence changes things a bit.  Breaking ideas down into pieces is a recurring goal in computer science; these modules of compute can be the pieces. As it turns out, by giving these items a physical presence, you can start to manage them as resources.  Generally, resources are managed through deployment and execution.  This is the impact of containerization.

If only there were a way to capitalize on the new nature of compute, and as it turns out, there is.  The programmability of the actual infrastructure can be realized via Kubernetes.  The job now is to code at the second level... that of orchestration.  Just as first-level developers create algorithms to accomplish technical goals, second-level coders develop algorithms to efficiently execute that first-level logic.  This is the world of the Kubernetes Operator.
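At the heart of an Operator is a reconcile loop: compare the state you declared against the state you observe, and compute the actions that close the gap. Here is a minimal sketch of that second-level algorithm in plain Python; the resource names and the create/update/delete action tuples are invented stand-ins, and a real Operator would issue actual Kubernetes API calls through a framework such as controller-runtime or kopf.

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Compare desired state to observed state and emit corrective actions."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))   # missing resource
        elif observed[name] != spec:
            actions.append(("update", name, spec))   # drifted resource
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))   # orphaned resource
    return actions

# Second-level logic: keep running reconcile until the observed world
# converges on the declared intent.  (Hypothetical resource names.)
desired = {"worker-a": {"replicas": 3}, "worker-b": {"replicas": 1}}
observed = {"worker-a": {"replicas": 2}, "worker-c": {"replicas": 5}}
plan = reconcile(desired, observed)
```

The point of the sketch is the shape of the work: the first-level code lives inside each worker, while the second-level code reasons only about which resources should exist and what state they should be in.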

I have described it as multi-level chess: just as there are patterns for moving chess pieces across a board, there are patterns for moving ideas across a graph of resources.  That is the second level of the chess board.  Organizing multiple systems is the next higher level: logic to control armies of logic; at the next level, control the armies, and so on.  It should be noted that on each further iteration we are essentially doing the same type of work, with the one exception of the transition from first to second: there we have shifted into the resource world.



Cycling through a set of resources is a common way to increase the power of lower-level code.  This is a pattern that exists at both the first and second levels: you have iteration at each level.  Whereas iteration at the first level revolves around algorithmic goals, iteration at this level involves exposing resources to a repeatable stream of data.   This appears to be an interesting application of cycling: the ability to digest reality.  Time slice by time slice, computationally consistent environments may be more effective than mutable ones.  Bring in the power of the resource, and you have the path to greater precision. Precision stems from the ability to isolate operations; power comes from the ability to replicate.
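To make the two kinds of iteration concrete, here is a toy sketch (the pod names are invented) of second-level cycling: a repeatable stream of data is dealt out across a pool of identical resources, while each resource still runs its own first-level loop over the items it receives.

```python
from itertools import cycle
from collections import defaultdict

# Second-level iteration: cycle a stream of data across a pool of
# resources.  The pool members are hypothetical stand-ins for pods.
resources = ["pod-0", "pod-1", "pod-2"]
stream = range(9)  # a repeatable stream of data

assignments = defaultdict(list)
for item, resource in zip(stream, cycle(resources)):
    assignments[resource].append(item)  # second level: pick the resource

# First-level iteration still happens inside each resource, over the
# items it was dealt.
totals = {resource: sum(items) for resource, items in assignments.items()}
```

The isolation/replication trade is visible even here: each pod's list is an isolated unit of work, and adding pods to the pool replicates capacity without touching the first-level logic.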

Each time slice is expressed through a graph of resources; the shape of the machinery needed to process the data matches the shape of the data.  It's like adding a dimension: time.  As you want greater precision, you take smaller and smaller time slices; the shorter the time between slices, the greater the number of resources needed to slice the stream into computationally exact environments.  Computationally exact? Every slice is not only its own thread; it's on its own resource.  Data coming into this type of compute structure needs a way to make it to the correct location.  This is the phase shift that the network enables: the key to breaking through to compute ingesting reality.  You can't really be precise if you don't send the right data to the right spot. Service mesh technologies make this graph of resources addressable; they allow the creation of a graph of resources representing an instance of reality.  The shape of the data matches the network, which matches the deployment. Data can be cached at any point along each path of the resource graph, and the slicing leads to the concept of "type safety in data".
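The routing step above can be sketched as a function from timestamps to addresses: slice the stream into fixed windows, give each window its own resource, and send every event to the address owning its slice. The addressing scheme (`slice-<n>.example.svc`) is invented for illustration; in practice a service mesh would supply real addressable names.

```python
from collections import defaultdict

SLICE_MS = 100  # shorter slices -> more resources needed

def address_for(timestamp_ms: int) -> str:
    """Map a timestamp to the address of the resource owning its slice."""
    window = timestamp_ms // SLICE_MS
    return f"slice-{window}.example.svc"

# A toy timestamped stream (hypothetical readings).
events = [(ts, f"reading-{ts}") for ts in (5, 42, 105, 130, 220)]

routed = defaultdict(list)
for ts, payload in events:
    routed[address_for(ts)].append(payload)  # right data, right spot
```

Shrinking `SLICE_MS` is the precision knob: each halving doubles the number of windows, and therefore the number of computationally exact environments the deployment has to provide.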
