Large datasets speed/memory diff between Servoy 3.5 and 5.2

I have been struggling with how to describe the speed problem I’m having. It’s hard to benchmark in the main area of the system where I encounter it, partly because the code is really old (four years) and very complex, including stored procs and hundreds of lines of code. But I’ve just found a second area of my system, where the code is very simple, in which the same 5-8 second speed problem occurs.

What I’m seeing is that Servoy completes all my lines of code rapidly (including a stored proc call), and then takes 5-8 seconds before displaying the screen and giving the user control of the mouse again. These are parts of my software that did not have that 5-8 second limbo when I was on 3.5.12, where I can run the same code and have no delay once all my lines of code are complete.

I have timers wrapped around my code, so I get a report each time it runs that my code is taking 3500 ms, and the stored proc call is nearly instantaneous (included in the 3500 ms). The timer reports to the status area as my last line of code. Then I wait. The bigger the dataset, the longer I wait, but it’s usually no less than 5 seconds – which is noticeable, and which my client has said is unacceptable for an upcoming budget period we’re entering in December. And when I say big datasets – we’re talking hundreds of rows, not thousands.

Help?

This problem exists on Mac and PC, in the smart client and in developer’s client. It’s more noticeable at my client’s office than it is on my MacBook Pro, which seems to run everything about four times faster even though it’s connecting to SQL Server in VMware. But it’s there.

It might be the garbage collector. Not unlikely when dealing with large datasets. Is the client configured with enough memory?
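For example, a quick way to get a rough idea of the client’s heap from a Servoy method (just a sketch; it assumes the smart client allows Java access from Rhino via Packages.*):

// rough heap check from inside a Servoy method (assumes Java access via Packages.*)
var rt = Packages.java.lang.Runtime.getRuntime();
application.output('max heap (MB): ' + Math.round(rt.maxMemory() / (1024 * 1024)));
application.output('used heap (MB): ' + Math.round((rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)));

If maxMemory looks too small, the smart client memory settings can be raised on the admin pages.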

Hi Ellen,

Not sure if this applies to your system:

In 5.1 and 5.2.2 I ran into problems with adding payments in the invoicing part of one of my systems. The user would experience 10-30 second delays, but not every time. When looking at it in the debugger I found that calculations were firing like crazy when they did not need to. I solved the problem by changing a few calculations to normal fields and keeping them updated using table events. I have 600,000+ line-items.
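The table event version looks roughly like this (just a sketch; line_total, quantity and unit_price are made-up placeholder columns, not names from my system):

// hypothetical table event (onRecordInsert/onRecordUpdate) on the line-item table:
// keeps a plain stored column in sync instead of using a calculation,
// so the value is only computed when the record actually changes
function updateLineTotal(record)
{
	record.line_total = record.quantity * record.unit_price;
	return true; // let the insert/update go through
}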

Also, I have improved speeds in some places by telling Servoy the order in which to save stuff.
Assuming invoice and customer are records…

databaseManager.saveData(invoice);  // save the invoice record first
databaseManager.saveData(customer); // then the customer record
databaseManager.saveData();         // save the rest

Ellen,

Have a look at the Profiler View in developer; maybe that gives a clue about where the time is spent.

Rob

Also, in that time that you wait, do you see activity in the performance monitor tab of the admin pages? Do you see queries happening?

Ok, attached is all the data I’ve collected for your questions (the profiler and the performance statistics for just this process). I am no closer to solving it, though I have been constantly gathering performance information for the past few weeks. I have removed all the calculations from the main table attached to the form, along with any I can think of that would be attached via a relationship. There aren’t a huge number of calculations left even on the other tables, which presumably are not involved. (Aside: I used calcs until about 2008 and then switched to virtual columns with if(false){return 1} if I thought I needed a calc.)

Here is profiler:
Picture 10.png

I am looking to solve this slowness in a different part of the system than the one the attachment relates to, but the attachment is a perfect example of what is happening in a cleaner area of the system. I have seen these 10-20 second time gaps throughout the system since I switched to Servoy 5. The code for GatherProc, which the profiler says is taking all the time, is as simple as this:

function GatherProc()
{
	// find and delete any previously gathered records for this user
	if (forms.ProjectsSpendingActualsStaffLabor.controller.find())
	{
		userid = globals.gStaff
		controller.search()
		controller.deleteAllRecords()
	}

	// call the stored procedure through the rawSQL plugin
	var maxReturnedRows = 5000
	var procedure_declaration = '{call ud_RptProjectTrans(?,?,?,?,?,?,?,?)}'
	var args = [globals.gProject, 0, 0, 1, 1, 0, 0, globals.ActualsPeriod]
	var directionArray = [0, 0, 0, 0, 0, 0, 0, 0] // parameter in/out directions
	globals.SPError = null
	var dataset = plugins.rawSQL.executeStoredProcedure(forms.AdminScreen.connex, procedure_declaration, args, directionArray, maxReturnedRows);
	globals.StoredProcedureErrorHandler();

	// copy each returned row into a new record in this form's foundset
	for (var i = 1; i <= dataset.getMaxRowIndex(); i++)
	{
		dataset.rowIndex = i;
		foundset.newRecord()
		userid = globals.gStaff ? globals.gStaff : 1
		proj_nbr = dataset[1]
		sproj = dataset[2]
		proj_desc = dataset[3]
		sproj_desc = dataset[4]
		pm = dataset[5]
		item_name = dataset[6]
		hours = dataset[7]
		phs = dataset[8]
		phs_desc = dataset[9]
		month_nbr = dataset[10]
		year_desc = dataset[11]
		cc = dataset[12]
		tier2_desc = dataset[13]
	}
	dataset.rowIndex = 1;
	Sort();
	Totals();
}

In that method that takes so long, add your own profiling.

So at the beginning of that method, store a date or something:

var x = new Date();

Then, every now and then (for example just before and after the stored procedure call), do:

application.output(new Date().getTime() - x.getTime())

so that you can see where the jump is.
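Sprinkled through GatherProc it could look something like this (just a sketch; the output labels are made up):

var start = new Date();
// ... find/delete block ...
application.output('before proc: ' + (new Date().getTime() - start.getTime()) + ' ms');
// ... executeStoredProcedure call ...
application.output('after proc: ' + (new Date().getTime() - start.getTime()) + ' ms');
// ... loop over the dataset, Sort(), Totals() ...
application.output('end of method: ' + (new Date().getTime() - start.getTime()) + ' ms');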

I have done that and made a performance log – but for another, more complicated part of the system. It finishes my code fast, then the huge delay comes, even with calcs removed. I will apply the same timers to this simpler feature and post the results here.

Attached is my performance log for the feature described in my previous entry, with the profiler showing over 30 seconds to run it. But my timers – one spanning the first and last line of my code (Time Body in the attached image), and the other wrapped around the stored proc call (Time Proc) – say a maximum of 10 seconds. This is my experience throughout my solution in several areas: 5-20 seconds of lost time after the last line of my code is reached, before I get control of my mouse again. Some users have reported their hard drives churning during this, but I haven’t seen that.

Picture 13.png

It’s weird that the profiler shows you different timings than the ones you record yourself on the first and last line of the method, because that is exactly what the profiler does – only just after the last line and just before the first line.

That you lose extra time after the profiler or your last timing log statement could be caused by events that are still waiting to be executed (like painting events and so on that result from what you did in the method).

But it is hard to see exactly what is happening here; I need to see it in a client myself so that I can profile it myself.

Is there a way to do that – get you to watch while I show you the difference between 3.5 and 5.2 on this one section of my system? My system is behind firewalls galore, so the best I can think of is to show you on GoToMeeting or the like… I am completely stumped.

ellen

Maybe, but then I would need access to that system and to run an extra tool (jvisualvm) that is part of a JDK (not JRE) install of Java 6, so that I can profile a smart client and see where the CPU time is consumed.

By the way, why would GoToMeeting work but not a smart client (configured, for example, in tunnel mode)? For both you need some outside access.