How do I get rid of dead Batch Processors?

Servoy 6.1.3
Java Web Start 10.9.2.05
Using JRE version 1.7.0_09-b05 Java HotSpot™ 64-Bit Server VM

Hi all,

My batch processors seem to detach themselves, so I can no longer stop them from the ‘Batch Processor’ page. The ‘Batch Processor’ page claims the batch process is not running, although I can still see them on the ‘Client’ page. Sometimes the batch processor keeps running fine and does its tasks; other times it dies (the ‘Client idle since’ value freezes).

function background_processing(event) {
	if (application.getApplicationType() == APPLICATION_TYPES.HEADLESS_CLIENT) {

		while (true) {
			var d = new Date();

			// code to do stuff here

			// only run once a minute, so sleep for the remainder if the work took less than a minute
			var elapsed = (new Date()) - d;
			if (elapsed < 60000) application.sleep(60000 - elapsed);
		}
	}
}

Could the sleep be causing this?

C

Am I the only one with this issue?

Apparently. Maybe the lack of sleep is the problem. I don’t know whether your ‘do stuff’ code can take longer than a minute; if it does, the loop never sleeps, so the batch client can’t respond to Servoy’s keep-alive requests or something like that. We use the scheduler plugin to run our batch tasks.
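
For reference, wiring a batch method to the scheduler plugin looks roughly like this (just a sketch; the job name, the cron string and globals.run_batch_tasks are placeholders for your own):

function startup(event) {
	if (application.getApplicationType() == APPLICATION_TYPES.HEADLESS_CLIENT) {
		// Quartz-style cron: fire at second 0 of every minute
		plugins.scheduler.addCronJob('batch_dispatcher', '0 * * * * ?', globals.run_batch_tasks);
	}
}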

I was using the scheduler to run about 12 tasks at different intervals: every minute, every few hours, every day, every week.

For various reasons I switched to a different design with a single loop running every minute or so, checking whether to run each task.

We keep track of when tasks have been started and when they have been completed successfully in a database table. This means that if the batch processor falls over, I can simply start it again and it picks up where it left off, instead of having to reschedule tasks or manually run tasks that should have run during the downtime.
Also, this design lets me slice long-running tasks into small chunks.
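
Roughly, the once-a-minute loop body dispatches the tasks like this (a sketch only; the task names and the helper functions around the tracking table are made up for illustration):

function run_batch_tasks() {
	var tasks = ['sync_orders', 'send_reminders', 'cleanup_temp']; // hypothetical task names
	for (var i = 0; i < tasks.length; i++) {
		if (!isTaskDue(tasks[i])) continue;  // checks the tracking table: last completion vs. the task's interval
		markTaskStarted(tasks[i]);           // write a start timestamp to the tracking table
		runTask(tasks[i]);                   // the actual work, sliced into a small chunk
		markTaskCompleted(tasks[i]);         // write a completion timestamp; a missing one means the task fell over mid-run
	}
}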

It works really well, except for showing as ‘not running’.

The best way is to use the scheduler, and then also wrap your method body in try/catch/finally.

This way, a method can still stop in some circumstances, for example if you get a null somewhere that you don’t expect, but at least the batch processor keeps running and will fire the method again the next time…
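
Something like this, so one bad run can’t take the method down for good (a sketch; globals.do_batch_work stands in for the real task method):

function scheduled_batch_method() {
	try {
		globals.do_batch_work();
	} catch (e) {
		// log and swallow the error so the scheduler simply fires the method again next time
		application.output('Batch run failed: ' + e, LOGGINGLEVEL.ERROR);
	} finally {
		// cleanup that must always happen, e.g. releasing locks or temporary records
	}
}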

OK, but this still doesn’t answer my question of how to kill dead batch processors.

The Servoy admin page tells me the batch processor is ‘not running’. If I go to the Clients page and click the ‘x’ to kill it, nothing happens.
I can start more, but then I end up using all my licenses. This seems to be a new problem in 6.1.3.

If you kill a client it should go away, especially one that runs on the server (batch/web). Is there nothing in the log when you try to kill it?
You just can’t remove them at all through the admin page?
If you have a sample solution for a batch processor where we can see this happen, could you share it?
The thing is, if you really are sleeping the client itself (application.sleep()) or something like that, then maybe you can’t kill it because we can’t just wake it up…
You shouldn’t do that. Do not try to loop forever in a script.

Hi Johan,

I have removed the forever loop, and I can now remove the client from the Admin page again.
I’m running exactly the same scripts, but on a cron every 2 minutes. The problem I had was that I did not want to start multiple scripts if the previous run took more than two minutes, which it occasionally does. I store a start and end time in a table. At the top of my script I check whether the script has been started in the last 5 minutes and has not yet finished; if so, I exit.
Seems to work fine.
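
The check at the top of the script is roughly this (a sketch; the server name 'mydb', the batch_run table and its columns are illustrative):

function cron_entry_point() {
	// skip this run if a previous run started in the last 5 minutes and has not finished yet
	var fiveMinutesAgo = new Date(new Date().getTime() - 5 * 60 * 1000);
	var sql = 'select count(*) from batch_run where start_time > ? and end_time is null';
	var ds = databaseManager.getDataSetByQuery('mydb', sql, [fiveMinutesAgo], 1);
	if (ds.getValue(1, 1) > 0) {
		return; // previous run is still busy; try again on the next cron tick
	}
	// ... insert a batch_run row with start_time, do the work, then fill in end_time ...
}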

Having spent quite a bit of time working on a solution that implements a similar idea, I can recommend that instead of merely scheduling a process that runs every two minutes, you:

  1. Schedule the process to run in two minutes.
  2. When the process starts, remove the scheduled job.
  3. Do whatever work needs to be done. This should be wrapped in a try/catch so that if anything fails, your entire server process doesn’t just die.
  4. After the process completes, reschedule the job for two minutes from that point.

You really don’t want scheduled jobs stacking up if the work starts to take a little longer than expected.
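
A sketch of that pattern with the scheduler plugin (the job name 'batch_job' and the globals methods are placeholders):

function process_batch() {
	// step 2: the job has fired, so remove it before doing any work
	plugins.scheduler.removeJob('batch_job');
	try {
		// step 3: do the actual work; a failure here must not kill the headless client
		globals.do_batch_work();
	} catch (e) {
		application.output('Batch work failed: ' + e, LOGGINGLEVEL.ERROR);
	} finally {
		// step 4: reschedule for two minutes from now, whether the work succeeded or not
		var next = new Date(new Date().getTime() + 2 * 60 * 1000);
		plugins.scheduler.addJob('batch_job', next, globals.process_batch);
	}
}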

That is a nice way of doing it.

I have some quick tasks and some long-running ones. The long-running ones are made to stop after a couple of minutes to give the other tasks a chance to run. The next time the script runs, the long-running tasks pick up where they left off and process another chunk. I avoid deep stacking by checking whether the long-running task has been started but not completed in the last 5 minutes.
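
For the chunking part, each run of a long-running task only processes a slice and then stops, so the next run carries on (a sketch; the 'mydb' server, the pending_items table and the processItem helper are made up):

function process_chunk() {
	var started = new Date();
	var fs = databaseManager.getFoundSet('mydb', 'pending_items'); // hypothetical queue of work items
	fs.loadAllRecords();
	for (var i = 1; i <= fs.getSize(); i++) {
		processItem(fs.getRecord(i));
		// stop after a couple of minutes so the quick tasks get their turn
		if (new Date().getTime() - started.getTime() > 2 * 60 * 1000) {
			break; // the next scheduled run picks up the remaining items
		}
	}
}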

Rescheduling is more elegant.