https://kubernetes.io/docs/concepts/configuration/scheduling-framework/
Need to evaluate how this fits into kube-batch. It appears there is some overlap.
Example of implementing scoring:
func ScoreNode(_ *v1.Pod, n *v1.Node) (int, error) {
	// getBlinkingLightCount is a placeholder helper, assumed to return (int, error).
	return getBlinkingLightCount(n)
}
However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point:

func NormalizeScores(scores map[string]int) {
	// Find the highest raw score.
	highest := 0
	for _, score := range scores {
		highest = max(highest, score)
	}
	// Rescale every score so the best node ends up at NodeScoreMax.
	for node, score := range scores {
		scores[node] = score * NodeScoreMax / highest
	}
}
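For a concrete sense of the rescaling, a minimal sketch (assumptions: NodeScoreMax is 100 here, fmt is imported, and the raw counts are made up):

const NodeScoreMax = 100 // stand-in for whatever maximum the framework defines

func demoNormalize() {
	// Raw blinking-light counts per node.
	scores := map[string]int{"node-a": 2, "node-b": 5, "node-c": 1}
	NormalizeScores(scores)
	fmt.Println(scores) // map[node-a:40 node-b:100 node-c:20]
}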
It appears that creating a plugin is a fairly simple process. It should be possible to use the existence of a DataPond with the correct classification on a node as the scoring mechanism. Additional scoring could be based on how busy a node is, etc.
Note: scoring could also take into account the location of items in the node's local filesystem, in addition to the category of the database.
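A minimal sketch of what such a DataPond-aware scorer could look like, in the same shape as the ScoreNode example above. Everything here is an assumption for illustration: the annotation/label keys, the weights, and the idea of advertising a DataPond's classification and local data presence via node labels are hypothetical, not an existing API.

import (
	v1 "k8s.io/api/core/v1"
)

// Hypothetical keys; a real integration would define its own conventions.
const (
	classificationKey = "example.com/datapond-classification" // pod annotation and node label
	localDataKey      = "example.com/has-local-data"          // set on nodes that already hold the relevant items
)

// ScoreNodeForDataPond prefers nodes that host a DataPond with the
// classification the pod asks for, with a smaller bonus when the relevant
// items are already on the node's local filesystem. A busyness penalty
// (e.g. from node metrics) could be subtracted here as well. Raw scores
// would then be rescaled by NormalizeScores as above.
func ScoreNodeForDataPond(p *v1.Pod, n *v1.Node) (int, error) {
	score := 0
	if want := p.Annotations[classificationKey]; want != "" && n.Labels[classificationKey] == want {
		score += 100 // matching DataPond classification dominates
	}
	if n.Labels[localDataKey] == "true" {
		score += 50 // data already present locally
	}
	return score, nil
}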