We have a Batch Processor (BP) that starts a Headless Client (Foreman) that in turn spawns 4 more Headless Clients (Workers).
The Foreman has a list of things to do, and goes through it, sending these requests one at a time to the 4 Workers. When a Worker is done with a particular request it returns the result to the Foreman (via the provided callback method) and the Foreman then issues another request to that Worker.
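Roughly, the flow I'm after looks like this (just a minimal sketch, assuming the standard headlessclient plugin calls like createClient and queueMethod; the method and variable names are placeholders, and the bit about event.source handing back the finished Worker is an assumption, noted in the comments):

```javascript
// Sketch of the Foreman; names like startForeman, dispatchNext, onWorkerDone
// and 'worker_solution' are hypothetical.
var workers = [];         // JSClient handles for the 4 Workers
var pendingRequests = []; // the Foreman's list of things to do

function startForeman() {
	for (var i = 0; i < 4; i++) {
		workers.push(plugins.headlessclient.createClient('worker_solution', 'user', 'pass', null));
	}
	// hand the first request to each Worker
	for (var j = 0; j < workers.length; j++) {
		dispatchNext(workers[j]);
	}
}

function dispatchNext(worker) {
	if (pendingRequests.length > 0 && worker.isValid()) {
		var request = pendingRequests.shift();
		// onWorkerDone is the callback the Worker's result should come back through
		worker.queueMethod(null, 'processRequest', [request], onWorkerDone);
	}
}

function onWorkerDone(event) {
	// assumption: event.data holds the Worker's return value and event.source
	// identifies the JSClient that just finished
	application.output('Worker finished: ' + event.data);
	dispatchNext(event.source);
}
```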
That is how it is supposed to work anyhow.
What I have found is that when a Worker returns, the callback method in the Foreman is not getting called, so the Worker never gets freed up to do more work.
This same behavior happens if, instead of a Batch Processor starting a Headless Foreman, I open a normal Client and run the method that starts the Foreman from there.
However, if I start a normal Client and have that run the Foreman method (so I’m making the normal Client the Foreman, instead of it being Headless), then everything works wonderfully. The Headless Workers do their work and return, the callback method in the Client Foreman gets called, the Headless Workers get freed up to do more work, and everyone is happy.
So…after all this…should Headless Clients work as I want them to (started from a Batch Processor or another Headless Client), or am I crazy for wanting such things?
I have a similar situation, but I use the “Foreman” a little bit differently. I don’t have it completely implemented yet, but essentially the Foreman just keeps an array of the Worker client IDs and does a round-robin queue. So when a normal client wants a Worker, it asks the Foreman for the next client ID in the queue. Each request has a priority passed along with it, so high priority will start up a new Worker outside of the queue, and low priority will just work within the queue. The only downside is that it’s round-robin, so if you have a few long-running requests in there, the order or priority isn’t always what you may have wanted. However, doing it this way the callback goes back to the smart client instead of the batch processor, but I’m not sure if that is what you want.
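Something along these lines, in case it helps picture it (just a sketch, not my actual code; getWorker and the priority values are made up, and it assumes plugins.headlessclient.createClient and JSClient.getClientID):

```javascript
// Hypothetical round-robin Foreman: workerIds would be filled in when the
// Workers are first started up.
var workerIds = []; // client IDs of the already-running Workers
var nextIndex = 0;

// a normal client asks the Foreman for a Worker to send its request to
function getWorker(priority) {
	if (priority == 'high') {
		// high priority: spin up a fresh Worker outside the round-robin queue
		var extra = plugins.headlessclient.createClient('worker_solution', 'user', 'pass', null);
		return extra.getClientID();
	}
	// low priority: hand out the next Worker in the queue, wrapping around
	var id = workerIds[nextIndex];
	nextIndex = (nextIndex + 1) % workerIds.length;
	return id;
}
```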
Not to ask you a stupid question, but in your testing, is the Batch Processor opening the same solution that you are testing with when you have it working with a normal client?
FYI, I’ve also requested essentially the feature you’ve described from Paul Bakker. If you take a look at the WebServices plugin, it handles maintaining a pool of headless clients. Then if you take a look at the HeadlessClient plugin, that has all the stuff to get a single headless client. If you were to pull some source code from the two, you should be able to make the HeadlessClient plugin manage a pool of headless clients. And the WebServices plugin has the code in it which implements blocking, so you should just be able to keep queuing up the requests. It’s a feature I want. Maybe we should work on it? Sounds like a new ServoyForge project!
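Something like this is what I have in mind for the pooled version (purely a sketch, not pulled from either plugin’s source; all the names are made up, and it assumes each Worker echoes its pool index back through event.data so the pool knows who finished):

```javascript
// Hypothetical pool manager with a simple FIFO queue for the "blocking" part.
var pool = [];    // JSClient instances created up front
var idle = [];    // indexes of Workers that are not currently busy
var waiting = []; // requests queued while all Workers are busy

function submit(request) {
	if (idle.length > 0) {
		runOn(idle.shift(), request);
	} else {
		waiting.push(request); // hold it until a Worker frees up
	}
}

function runOn(index, request) {
	// the Worker's processRequest would return an object like {index: index, result: ...}
	pool[index].queueMethod(null, 'processRequest', [request, index], poolCallback);
}

function poolCallback(event) {
	var index = event.data.index; // assumption: the Worker echoed its index back
	if (waiting.length > 0) {
		runOn(index, waiting.shift());
	} else {
		idle.push(index);
	}
}
```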