I've been musing a bit about complicated service orchestration of late. For some situations, it's as simple as SSH'ing into one or more boxes in a predefined order and executing some actions; sometimes it involves copying files around, calling random APIs, adding/removing nodes from load balancing proxies, launching new VMs in a cloud and waiting for them to be ready or bringing them down again.

This has of course led me down the road of workflows and state machines, and has me wondering about the kind of interface that I would like to have for designing orchestrations as well as for executing them. I think it'd be nice to kick a flow off from the command line and then query its current progress or 'tail' it. You could also execute flows via a web-based job scheduler and have visual feedback.
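Just to make that concrete, here's a rough sketch of the command-line surface I have in mind, in Python with argparse. The subcommands and names are purely hypothetical, not an existing tool:

```python
# Hypothetical CLI for an orchestration engine: start a flow, query its
# progress, or follow it the way `tail -f` follows a log.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="flow")
    sub = parser.add_subparsers(dest="command", required=True)

    start = sub.add_parser("start", help="kick off a named flow")
    start.add_argument("flow_name")

    status = sub.add_parser("status", help="query a running flow's progress")
    status.add_argument("flow_id")

    tail = sub.add_parser("tail", help="stream step-by-step progress")
    tail.add_argument("flow_id")

    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)  # dispatch to the orchestration engine would go here
```

So something like `flow start deploy_webapp` would kick a flow off, `flow status <id>` would report where it's up to, and `flow tail <id>` would follow it live.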

As for design, I'm less interested in formal BPM workflows since they seem like overkill, although they can also be applied to service orchestration. A state machine model might be a better fit. I could imagine something similar to Puppet's model, with orchestration steps instead of resources, running until the dependencies of the finished state have been satisfied. There are a lot of possibilities, and I'm still looking for a model that would work for me.
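Roughly what I mean, sketched in Python; the step names and the tiny runner are made up for illustration. Each step declares what it requires, and the runner keeps executing whatever is ready until the dependencies of the 'finished' step are satisfied:

```python
# Dependency-driven runner: execute any step whose requirements are met,
# and stop once the target step ("finished") has been reached.
def run(steps, target="finished"):
    """steps: {name: {"requires": [names...], "action": callable}}"""
    done = set()
    while target not in done:
        ready = [n for n, s in steps.items()
                 if n not in done and all(r in done for r in s["requires"])]
        if not ready:
            raise RuntimeError("no runnable steps; unsatisfied dependencies")
        for name in ready:
            steps[name]["action"]()
            done.add(name)

steps = {
    "provision_vm": {"requires": [], "action": lambda: print("launch VM")},
    "deploy_app":   {"requires": ["provision_vm"], "action": lambda: print("copy files")},
    "add_to_lb":    {"requires": ["deploy_app"], "action": lambda: print("register with proxy")},
    "finished":     {"requires": ["add_to_lb"], "action": lambda: None},
}
run(steps)
```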

As far as I've thought about it, a given flow would comprise a set of discrete steps executed under certain conditions. There would have to be some state associated with the flow itself, but the conditions wouldn't have to limit themselves to that state (i.e. a conditional could call a custom function which looks up external state). The discrete steps could be code, built-in functions and decision logic, or other flows. I'd like to be able to express these steps succinctly using a simple data structure, but turn to a real programming language to do the more complicated work.
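A minimal sketch of that idea in Python, with made-up step names and a hypothetical `node_is_healthy` helper standing in for a lookup against external state. The flow itself is a plain data structure; each step carries an optional condition, and conditions are free to call into real code:

```python
def node_is_healthy(state):
    # Placeholder for a check against external state, e.g. a monitoring API.
    return state.get("health_checks_passed", False)

# The flow: an ordered list of steps, each with an action and an optional
# "when" condition that decides whether the step runs.
flow = [
    {"name": "drain_node", "action": lambda s: s.update(drained=True)},
    {"name": "deploy",     "action": lambda s: s.update(deployed=True),
     "when": lambda s: s.get("drained")},
    {"name": "re_enable",  "action": lambda s: s.update(enabled=True),
     "when": node_is_healthy},
]

def execute(flow, state=None):
    state = state or {}
    for step in flow:
        condition = step.get("when", lambda s: True)
        if condition(state):
            step["action"](state)
            state.setdefault("completed", []).append(step["name"])
    return state

print(execute(flow, {"health_checks_passed": True}))
```

The data structure stays declarative and easy to read, while anything complicated (health checks, API calls, decision logic) lives in ordinary functions.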

— by Robert Thomson, created 5th Apr, 2013, last modified 5th Apr, 2013 | Tags: Tech