Performance

Observation in a deployed Servoy solution on the Servoy application server (after performance complaints from our customers):

I have some unsaved data (150 inserts divided over four tables) in memory and I issue databaseManager.saveData().
This Servoy command takes 20 seconds to complete.
The corresponding (PostgreSQL) time on the performance data admin page is < 1 second.

:cry:

The Servoy time stays the same with the smart client and the web client.
When tested in Servoy Developer the Servoy time drops to a couple of seconds…

Any ideas?

Regards,

Are we the only ones having this problem?

Regards,

Differences between Developer and the server: start by assuming it’s not a bug in the server and rule out all differences. JDBC driver, database versions (even minor point releases have messed us up!), installed plugins, etc. Then move to server settings: flags, memory, operating system, whether the repository database is transactional, etc. Then call us.

Hi David,

Thanks for sharing your experiences!

We are now looking at the application server’s data broadcasting, which I believe is not active in Developer (is there any info or documentation on this?).

Is there any performance data or research available on Servoy application server data broadcasting?
Is the application server’s data broadcasting asynchronous with respect to the client that saves the data? (We think not.)

In the past we rewrote our generic XML data import to rawSQL, which resulted in a great performance improvement.

Regards,

lwjwillemsen:
In the past we rewrote our generic XML data import to rawSQL, which resulted in a great performance improvement.

This is a given due to the nature of the beasts. Another example: http://www.servoymagazine.com/home/2013 … lugin.html

Data broadcasting is not your issue (and Developer does data broadcast). Your issue could be as simple as getting the various database and Servoy settings correct for the load. If the memory assigned to the Servoy server is low and you have a lot of web clients, game over. If you have a lot of users doing a lot of inserts, the database could be thrashing the crappy disk it’s on. If the database is on another machine, what is the I/O? Etc.

Look at your logs, watch your processes when things slow down, set up a completely different testing server, and so on; your issue should not be hard to find. If you’re on a hosted service, that is also a clue (i.e. we’ve had an experience with a top European VPS that was just awful; I swear their disks were driven by hamsters).

I’ve never had this problem, and I have done large sets of inserts. You mention the time is the same in Smart and Web clients. That is highly unlikely, as code executing on the server is always faster than code executed remotely, unless you’re testing a Smart Client on localhost on the server itself. We use the Headless Client plugin for these types of processes, to run the code server-side.

If it is still slow, even on the Web client, then look at other things you have going on. For example, do you use database transactions in your solution, and do you have a lot of open transactions?

When you say it’s faster in Developer, is your Developer connected to the exact same database server? I’d expect not, since that is the same server your application server is connected to. So take your application server down, point your Developer at that database server, and give it another try. Then you can see whether it’s something specific to that server, like a misconfiguration or a hardware issue.

A few other things: do you have any table events defined in Servoy for those inserts? They could all be running on your insert (also take a look at triggers in your DB, if you use those, to see if they are slowing anything down). And do you have the audit log turned on? Doing 150 records in one batch would cause a big hit on the audit log.

If you do think it’s data broadcasting, you can easily turn that off on the Servoy Admin page and test it, but I’d suspect that isn’t your problem.

Hi, I also use PostgreSQL and have seen a similar problem sporadically in the past. I don’t think the issue is Postgres-specific. My work-around has been to do manual saves of specific objects in a specific order. It seemed Servoy sometimes spent a lot of time working out what to save first. So I would write code like this:

databaseManager.saveData(address);
databaseManager.saveData(customer_record);
// in case anything unsaved is left
databaseManager.saveData();

instead of simply this

databaseManager.saveData();

I would try doing different things and check the performance for each variation.

When making a large number of changes in a for loop or similar, I sometimes add a save every 20 or 50 iterations. Assuming the counter variable is called i:

if(i % 20 == 0) databaseManager.saveData();

Hope this helps,

Christian

Hi Christian,

My work-around has been to do manual saves of specific objects in a specific order. It seemed Servoy sometimes spent a lot of time working out what to save first.

Now we are getting somewhere! When I inspect the records from databaseManager.getEditedRecords() I see (a lot of) records from the same table with the same content.
It’s not an easy job to distinguish between an insert and an update action. I don’t know if Servoy itself has more info than what is available through databaseManager.getEditedRecords(),
but if not, it looks like you have to search the records first for the inserts and then again for the updates…
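As a first experiment we may try forcing the save order per table; a rough sketch (the datasources and their order are made-up examples, not our real tables):

// Save edited records table by table, in a fixed order (example datasources)
var saveOrder = ['db:/crm/address', 'db:/crm/customer', 'db:/crm/order', 'db:/crm/order_line'];
for (var t = 0; t < saveOrder.length; t++) {
    var edited = databaseManager.getEditedRecords();
    for (var r = 0; r < edited.length; r++) {
        if (edited[r].getDataSource() == saveOrder[t]) {
            databaseManager.saveData(edited[r]);
        }
    }
}
// catch anything not covered by the list above
databaseManager.saveData();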

Thanks a lot all,

Regards,

swingman:
When making a large number of changes in a for loop or similar, I sometimes add a save every 20 or 50 iterations. Assuming the counter variable is called i:

if(i % 20 == 0) databaseManager.saveData();

Just to add onto this: in our experience, around 25 records is the sweet spot (it might depend on table width), and we will also do a foundset.clear() after each save.
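For illustration, the pattern looks roughly like this (the datasource, column and the rows input array are placeholders, and 25 is just our starting batch size):

// Create records in batches, save and clear the foundset after each batch
var rows = [{name: 'a'}, {name: 'b'}, {name: 'c'}]; // placeholder input data
var fs = databaseManager.getFoundSet('db:/mydb/mytable'); // placeholder datasource
for (var i = 1; i <= rows.length; i++) {
    var rec = fs.getRecord(fs.newRecord());
    rec.name = rows[i - 1].name; // placeholder column
    if (i % 25 == 0) {
        databaseManager.saveData(fs);
        fs.clear();
    }
}
// save and clear whatever is left in the last, partial batch
databaseManager.saveData(fs);
fs.clear();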

I concur on the foundset.clear() after each save. Of course it also depends on your use case, but it surely keeps your memory usage down.

Another thing to keep in mind is that when you don’t use a database transaction, PostgreSQL will use an implicit transaction for each insert/update you do. So for each insert/update it starts a transaction, does the insert/update, and commits.
All this has to do with PostgreSQL’s MVCC implementation. To mitigate the overhead of starting and committing a transaction per statement on large inserts/updates, you should start a database transaction yourself and commit when the whole batch is finished.
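A minimal sketch of that, using the standard databaseManager transaction calls (the failure handling is just an example):

// Wrap the whole batch in one explicit transaction instead of one implicit
// transaction per insert/update
databaseManager.startTransaction();
try {
    // ... create or modify your records here ...
    if (!databaseManager.saveData()) {
        throw 'saveData failed';
    }
    databaseManager.commitTransaction();
} catch (e) {
    databaseManager.rollbackTransaction();
    throw e;
}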

Hope this helps.

I’ve written a Servoy app that sends and receives UDP packets to/from approximately 1,000 devices worldwide communicating over cellular networks. The app is running as a headless client on a Mac OS X server running Servoy 7.4.5, Sybase SQL Anywhere 16 and Java 7 build 67.

I’ve started running into serious performance problems with this app, and I’ve narrowed the problem down to the databaseManager.saveData() command I use to save each packet record. That command takes a little longer to execute each time it runs. At startup it takes around 4 ms per saveData() and then gradually increases over time; after 23,000 calls it’s averaging around 60 ms per saveData(). After about 24 hours the app’s performance becomes so bad that it can no longer keep up with the packet traffic.

Some details:

  • The packets table is pretty big - 18 million rows. (Another headless client process automatically deletes records older than 90 days.)
  • When inserting a new packet I use packetFoundSet.newRecord(), where packetFoundSet is a global variable of type MEDIA.
  • I perform a .clear() just before every newRecord() command, so I know that foundset never grows beyond one record.
  • Based on a suggestion in this thread, I always specify the record I want to save, i.e. I do databaseManager.saveData(packetFoundSet.getSelectedRecord()) rather than databaseManager.saveData(). (The whole insert path is sketched below this list.)
  • The Admin Console performance stats show that the SQL insert commands take only 1 ms on average and stay steady at that duration, so the problem doesn’t appear to be the database.
  • I use the userManager plugin to monitor memory usage for this app, and free memory remains steady the whole time the app is running.
  • If I stop and restart the headless client, the run times for my saveData() commands go back down to 4 ms and the whole process starts all over again.
  • I’m using the jdbc3 driver from Sybase (jconn3.jar) to connect to the DB. (I tried upgrading to jdbc4 but ran into a problem described in this thread.)
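Boiled down, the insert path is roughly this (the datasource and column names here are placeholders, not the real table):

// Simplified sketch of the per-packet insert path (placeholder names)
var packetFoundSet = databaseManager.getFoundSet('db:/tracking/packets');

function storePacket(deviceId, payload) {
    packetFoundSet.clear(); // keep the foundset at a single record
    var rec = packetFoundSet.getRecord(packetFoundSet.newRecord());
    rec.device_id = deviceId; // placeholder columns
    rec.payload = payload;
    rec.received_at = new Date();
    // this is the call that slowly gets more expensive over time
    databaseManager.saveData(packetFoundSet.getSelectedRecord());
}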

Any help or suggestions would be much appreciated. Possible workarounds I’m aware of:

  • I could do one saveData() for every 25 newRecord() commands, as suggested in this thread, but that would require a significant rewrite, and I suspect I’d still have the same problem, just to a lesser degree. I’d rather fix the root of the problem.
  • I could try doing these inserts using rawSQL instead of newRecord()/saveData(), but then I’d lose data broadcasting of new packets, which I rely on in other apps running on this server.

Thank you.

Hi Adrian,

Were you able to replicate the issue in an isolated test environment?
Could you isolate all the queries executed for a single iteration (comparing one at start time with a slow iteration later on)?
Why do you use a foundset as a global variable instead of getting a new foundset each time?
The issue as you describe it seems related to a foundset which keeps growing over the execution time… but you say that you clear it after each saveData.
Are you sure that there are no other foundset objects referring to all the created records?
What happens if you get a new foundset object to create the record for each insert, instead of using the global variable?
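For example, something along these lines (the datasource and columns are placeholders):

// Get a fresh, local foundset for each insert instead of reusing a global one
function insertPacket(deviceId, payload) {
    var fs = databaseManager.getFoundSet('db:/tracking/packets'); // placeholder datasource
    var rec = fs.getRecord(fs.newRecord());
    rec.device_id = deviceId; // placeholder columns
    rec.payload = payload;
    databaseManager.saveData(rec);
    // fs goes out of scope here, so nothing keeps a reference to the saved record
}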

Thanks Paulo for your quick response. I’ve answered your questions below:

Were you able to replicate the issue in an isolated test environment?

No - I’ll do that and let you know what happens.

Could you isolate all the queries executed for a single iteration (comparing one at start time with a slow iteration later on)?

I started by timing entire functions, then sub-portions of those functions, and I continued this narrowing-down process until it was clear the saveData(newPacketRecord) commands are the problem. I am recording the time it takes for just the saveData(newPacketRecord) calls, and it is clearly those times that are increasing.
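Roughly how I’m capturing the timings (newPacketRecord is the record just created; application.output() goes to the log):

// Time a single saveData() call and log the duration
var start = new Date().getTime();
databaseManager.saveData(newPacketRecord);
application.output('saveData took ' + (new Date().getTime() - start) + ' ms');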

Why do you use a foundset as a global variable instead of getting a new foundset each time? What happens if you get a new foundset object to create the record for each insert, instead of using the global variable?

I have tested this using local vars for the foundsets and it made no difference. I got the idea to use a global var from this article on Gary Dotzlaw’s website. It seemed smart to have Servoy create the foundset object once and reuse it rather than make Servoy create and destroy it thousands of times.

The issue as you describe it seems related to a foundset which keeps growing over the execution time… but you say that you clear it after each saveData.

Correct

Are you sure that there are no other foundset objects referring to all the created records?

I’m sure. The only foundset on the packets table is the global one, which I clear after each save.

Also, I just tried turning off data broadcasting to see whether it would help, but it made no difference.

Do you think it could be related to the JDBC driver? As I said, I’m using the jdbc3 driver, and I’m working on getting the jdbc4 driver working.

Hi,

I don’t think it is related to your JDBC driver.

We use an Oracle 10g driver and have similar issues on large import jobs.
We have a CRM solution which can import companies and people. During the import several tables get new records (addresses, communication, bank accounts, etc.).
At the beginning the import is fast for each record, but databaseManager.saveData() consumes more and more time as the run goes on. We also use database transactions and clear foundsets once they reach 200 records.
When we stop the import and start it again, everything is fast again.

It would be nice if we could find a solution for this issue.

Thanks!
Alex

I think the way to go here is to build a small sample solution with steps on how this can be reproduced (saveData becoming increasingly slow on similar operations, with no query-time differences, until restart), and then create a case in the support system with that and link it to this forum thread.

Hi Adrian,

I have my doubts about the JDBC driver as well; I am not sure they can help you with this situation.
I agree with Andrei: the best way forward is to try to isolate the issue in a sample solution.

Alex, I am not sure how you execute the import, but if you iterate over a first foundset and then create new records in a second foundset, the slowness during the import may also be caused by the iteration over the first foundset.
If you iterate over a very big foundset using a normal iteration such as forEach() or for (i <= foundset.getSize()), the execution time of each iteration may increase.
If this is the case, you can solve it by loading the data in chunks on the first foundset instead of going over the whole table at once. In order to do so you need a sortable (ideally unique) index:

  1. Load the first 1000 records (sorted by the index) and iterate over them.
  2. Load the next 1000 records (sorted by the index, with index > the last index value retrieved), and repeat.

// Load 1000 records per chunk instead of iterating the whole table at once.
// Assumes a sortable column yourIndex on your_table and pk as the primary key.
var index = null;
var record;
foundset.loadRecords('select pk from your_table order by yourIndex asc limit 1000');
do {
    for (var i = 1; i <= foundset.getSize(); i++) {
        record = foundset.getRecord(i);
        // do your operation with the record
        if (i % 200 == 0) databaseManager.saveData();
        index = record.yourIndex;
    }
    databaseManager.saveData(); // save the tail of this chunk
    // load the next chunk, starting after the last index value processed
    foundset.loadRecords('select pk from your_table where yourIndex > ? order by yourIndex asc limit 1000', [index]);
} while (foundset.getSize() > 0);