Sunday, December 15, 2019

why don't I code as much as I think? - the year ahead

The path that thought takes through a set of computations can be realized in many different languages. It can even be realized as the flow of control through a fabric of compute resources. The thought itself is what is important, not the implementation, which can change.

As long as thought patterns can be channeled into standard, repeatable patterns, it should be possible, in effect, to "think in code".

Picture Neo in The Matrix. He was able to overcome the limitations of "reality" by realizing it was a game. In the same way, you can free your mind from the limitations of what is currently produced and move into a purer form of code: thought itself. Code is thought and thought is code - it's bi-directional.

Flows of thought in your head can be directed into verifiable reproductions in code.  Knowing this fact, it is then possible to connect thought patterns into larger systems at a much higher velocity and, with some degree of predictability, assess the likelihood of success.

How do you develop a toolkit for this advanced level of thought? You learn, from a feature perspective, what each research item could contribute to your knowledge base. Then it's just a matter of mapping the features available in the knowledge base into a solution - a solution realized in the form of an Execution Pattern.

I have described this in earlier articles as multi-level football: many different games being played at once, but only running offense. Just as a quarterback directs the flow of a game, quarterback components can be created that direct execution patterns.

What is an execution pattern? It is something of a functional concept: you have an input dataset, a series of goals, and a method of communicating results between them. Now add to that the underlying resources this set of goals has access to. How do you meld both the flow of ideas and the access to these resources?
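
To make the idea concrete, here is a minimal Go sketch of that functional concept. The names (Goal, Run) are my own illustrations, not an existing API: an input dataset is channeled through a series of goals, with channels as the method of communicating results between them.

package main

import "fmt"

// Goal is one step in an execution pattern: it consumes values
// from an input channel and emits results on an output channel.
type Goal func(in <-chan int) <-chan int

// Run wires a series of goals together, channeling the input
// dataset through each goal in turn and returning the final results.
func Run(dataset []int, goals ...Goal) <-chan int {
	in := make(chan int)
	go func() {
		defer close(in)
		for _, v := range dataset {
			in <- v
		}
	}()
	out := (<-chan int)(in)
	for _, g := range goals {
		out = g(out)
	}
	return out
}

func main() {
	// A sample goal that doubles each value it receives.
	double := func(in <-chan int) <-chan int {
		out := make(chan int)
		go func() {
			defer close(out)
			for v := range in {
				out <- v * 2
			}
		}()
		return out
	}
	for v := range Run([]int{1, 2, 3}, double, double) {
		fmt.Println(v) // 4, 8, 12
	}
}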

Enter Kubernetes operators, and coding at a level where this is now possible. Each operator can not only provide resources but also optimize the runtime environment in response to system-level conditions - a kind of horizontal scaling, but with aggregations of Kubernetes primitives expressed as Custom Resource Definitions.
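
As a sketch of what such an aggregation might look like, here is a hypothetical custom resource type in Go, the style in which operators are typically written. The ExecutionPattern name and its fields are my own illustration, not an existing resource:

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ExecutionPatternSpec aggregates Kubernetes primitives behind one
// custom resource that an operator can scale and optimize at runtime.
type ExecutionPatternSpec struct {
	// Goals lists the steps of the pattern, executed as containers.
	Goals []string `json:"goals"`
	// Replicas is the horizontal scale the operator maintains.
	Replicas int32 `json:"replicas"`
}

// ExecutionPatternStatus reports what the operator has reconciled.
type ExecutionPatternStatus struct {
	ReadyReplicas int32 `json:"readyReplicas"`
}

// ExecutionPattern is the custom resource an operator would watch,
// providing resources and tuning them to system-level conditions.
type ExecutionPattern struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ExecutionPatternSpec   `json:"spec,omitempty"`
	Status ExecutionPatternStatus `json:"status,omitempty"`
}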

Recently, I have been devoting time to understanding how to "think" like this. I need to visualize some scenarios with Volcano.sh, GoFlow, and Operators. How will batch operations be executed? What is the mechanism of scaling inside a container vs. across Pods vs. with Services? Can this be expressed generically, so you can move from container to pod to service scaling under one API?
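
One speculative answer, sketched in Go: a single interface whose implementations map a desired scale onto each scope's own mechanism. The names here are assumptions for illustration, not an existing API.

package scaling

import "context"

// Scope identifies where scaling happens; the open question is
// whether one API can cover all three.
type Scope int

const (
	ContainerScope Scope = iota // goroutines inside one container
	PodScope                    // replicas across Pods
	ServiceScope                // capacity behind a Service
)

// Scaler is a hypothetical unified interface: each implementation
// translates a desired level into its scope-specific mechanism
// (worker pools, Deployment replicas, Service endpoints).
type Scaler interface {
	Scope() Scope
	ScaleTo(ctx context.Context, desired int) error
}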

Having identified the different flow variants, I went on to determine that the standard Golang channel patterns could be realized in the DSL as orchestration components. Fan-out, fan-in, and other patterns from Bill Kennedy's work were envisioned as providing the implementation for this new flow DSL. Execution components would take the form of Docker containers conforming to a standard generic input-output paradigm. Containers could communicate within pods via the local file system, and pods could communicate first with a local host database and then across nodes via Kubernetes services.
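
For reference, a minimal Go sketch of the fan-out/fan-in channel patterns those orchestration components would wrap:

package main

import (
	"fmt"
	"sync"
)

// fanOut starts n workers reading from a shared input channel,
// each applying work and emitting on its own output channel.
func fanOut(in <-chan int, n int, work func(int) int) []<-chan int {
	outs := make([]<-chan int, n)
	for i := 0; i < n; i++ {
		out := make(chan int)
		go func() {
			defer close(out)
			for v := range in {
				out <- work(v)
			}
		}()
		outs[i] = out
	}
	return outs
}

// fanIn merges the workers' result channels back into one stream.
func fanIn(ins ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(ins))
	for _, in := range ins {
		go func(in <-chan int) {
			defer wg.Done()
			for v := range in {
				out <- v
			}
		}(in)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		defer close(in)
		for i := 1; i <= 5; i++ {
			in <- i
		}
	}()
	square := func(v int) int { return v * v }
	for v := range fanIn(fanOut(in, 3, square)...) {
		fmt.Println(v)
	}
}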

The answers to these questions lie ahead in the coming months. Looking forward to 2020.
