Recently we implemented a multi-pod cluster in our production environment to handle resource demands more effectively. Since then we have run into an issue: when batch processing is enabled on all of the pods, each job that executes gets run multiple times, once per pod. After some research it looks like there are a few tools out there to handle communication and track job execution between pods (Redis, Valkey, RabbitMQ). I was just curious whether anyone else has encountered this before and how they handled it.
First try the simplest approach: if you have multiple batch processors, say BP1, BP2, BP3, and so on, then run only BP1 on Pod 1, only BP2 on Pod 2, and so forth, so that no processor is active on more than one pod.
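The pinning idea above can be sketched as a simple lookup each pod performs at startup. This is a minimal illustration, not Servoy API; the pod names, processor names, and the `POD_NAME` environment variable are assumptions for the example.

```python
import os

# Hypothetical static mapping of batch processors to pods.
# In practice this could live in config or the shared database.
ASSIGNMENTS = {
    "pod-1": ["BP1"],
    "pod-2": ["BP2"],
    "pod-3": ["BP3"],
}

def processors_for(pod_name):
    """Return the batch processors this pod should start.

    A pod not present in the mapping starts nothing, so a job
    is never picked up by more than one pod.
    """
    return ASSIGNMENTS.get(pod_name, [])

# Each pod would call this once at startup, e.g.:
# for bp in processors_for(os.environ.get("POD_NAME", "")): start(bp)
```

Each pod then starts only its own processors, and duplicate execution disappears as long as the mapping stays disjoint.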
If that does not satisfy your needs, you can try the next approach. Since Servoy only allows one database to be shared by all the pods, you can maintain a table that registers these batch processors on a first-come, first-served basis, recording the client and server ID from each server against the job types. Every time a batch processor starts to process, it must first check whether it is the one registered to process that job.
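The registration-table idea can be sketched with a unique constraint: the first pod to insert a claim row for a job wins, and every other pod's insert fails. This is a hedged sketch using SQLite for illustration; the table name, columns, and job/pod identifiers are assumptions, and in a real Servoy deployment you would run the equivalent insert against the shared production database.

```python
import sqlite3

def claim_job(conn, job_id, pod_id):
    """Try to register this pod as the runner for job_id.

    The PRIMARY KEY on job_id guarantees only the first insert
    succeeds; every later pod gets an IntegrityError and skips
    the job instead of running it a second time.
    """
    try:
        with conn:  # commit on success, roll back on error
            conn.execute(
                "INSERT INTO job_claims (job_id, pod_id) VALUES (?, ?)",
                (job_id, pod_id),
            )
        return True
    except sqlite3.IntegrityError:
        return False

# Shared registration table (one per cluster, in the shared DB).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE job_claims (job_id TEXT PRIMARY KEY, pod_id TEXT NOT NULL)"
)
```

Because the database enforces the constraint, the check-and-register step is atomic even when several pods wake up for the same job at the same moment; a separate "check then insert" in two statements would still race.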
If you cannot manage it this way, then you should look at external tools like RabbitMQ, which is needed anyway in a multi-pod Servoy setup to ensure data broadcasting between pods.