I can’t find what we are doing wrong, but:
We lock current row using databaseManager.acquireLock(foundset, 0) before editing it.
Then we open a transaction using databaseManager.startTransaction().
As soon as the user modifying the row (and holding a lock on it) clicks on the layout background (thus firing auto-save), or begins to edit a child record of the locked one after having modified a column, all other clients hang … until the transaction is committed or rolled back.
This nasty behaviour is fully reproducible on a customer server and on our development server (Servoy 2.2.1, Java 1.5).
As nobody else seems to complain about such trouble in multi-user setups, maybe we don't use record locking and transactions the same way you do.
Does anybody else experience the same trouble? Your comments are welcome.
What database are you using? It sounds like MS SQL Server?
No, we get this using plain Sybase, on win2k3 and on MacOSX server.
But as we will sooner or later have to install our solutions with MS SQL, we are very interested in knowing about possible troubles with record locking on this platform.
I have tested it, and this is just a database thing.
When you start updating a row, other clients can't access (query) the row again;
they have to wait until the transaction is finished.
What you can do is turn auto-save off:
application.setFocusLostSaveEnabled(false)
when you start a transaction,
then call saveData() yourself and call endTransaction() when you are done.
Then it will go better, but not all the way yet.
That will be introduced in 3.0. Then you will be able to:
startTransaction()
setAutoSave(false);
// user can do everything he wants, switch forms, records
// but no save will be done as long as you as developer don’t do it.
saveData()
endTransaction()
We just tested it on PostgreSQL, and that one works the way I expect it to:
you can read committed data, but you can't write it (start two transactions on the same row).
Did you test (on PostgreSQL) with Servoy version 2 or 3?
It could be a workaround for current solutions while waiting for the 3.0 final.
There is currently no change for that in 3.0.
It must still be built.
This is purely a database question: how databases handle transactions.
It doesn't have much to do with Servoy. We can only try to work around it a bit better and give developers in Servoy more control over when what gets saved.
jcompagner:
We just tested it on PostgreSQL, and that one works the way I expect it to:
you can read committed data, but you can't write it (start two transactions on the same row).
Is this a consequence of Multiversion Concurrency Control (MVCC), where each transaction sees the database in a consistent state?
To quote the postgres site “We use a multiple row data storage strategy called MVCC to make PostgreSQL extremely responsive in high volume environments. The leading proprietary database vendor uses this technology as well, for the same reasons.”

Johan,
As a matter of fact, it would be interesting having full control on saves.
But please note that we will still be forced to save data in order to get the pk when using database sequences.
Scenario (pseudo code):
startTransaction;
disableSaveData;
create_newParentRecord;
enter data;
saveData; // <======hangs up other clients
create_newChildRecord;
enter data;
saveData;
commitTransaction;
When using database sequences, you are forced to save data before creating the child record.
As database sequences are mandatory in lots of cases we have a BIG problem here.
What can we do ?
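For what it's worth, the forced save doesn't have to end the transaction. Here is a minimal sketch outside Servoy, using Python's standard sqlite3 module (the invoice-style table and column names are invented for the example): the parent insert happens inside the still-open transaction, its generated key is readable right away, and locks are only released at the final commit.

```python
import os
import sqlite3
import tempfile

# Sketch (not Servoy): the generated parent pk is available inside
# the still-open transaction, before anything is committed.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # manage transactions manually
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY AUTOINCREMENT,"
             " parent_id INTEGER, name TEXT)")

conn.execute("BEGIN")
cur = conn.execute("INSERT INTO parent (name) VALUES ('invoice')")
parent_pk = cur.lastrowid  # the sequence value is known before commit
conn.execute("INSERT INTO child (parent_id, name) VALUES (?, 'line 1')",
             (parent_pk,))
conn.commit()  # locks are only held until here

child_parent = conn.execute("SELECT parent_id FROM child").fetchone()[0]
```

The point of the sketch is only that "save to get the pk" and "commit" are separate steps; whether other clients hang in between is up to the database, as discussed below.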
olivier melet:
startTransaction;
disableSaveData;
create_newParentRecord;
enter data;
saveData; // <======hangs up other clients
create_newChildRecord;
enter data;
saveData;
commitTransaction;
Hi, I can confirm this behaviour.
I have very similar code to this, except I don't disable save data.
I tried launching two clients and started creating an invoice on each one; as soon as one of the two clients does a saveData inside the transaction, the other client hangs until the transaction is committed/rolled back…
This is with PostgreSQL 8.
If you need database sequences, then you need to save data.
Then even 3.0 will not help you.
You would need a database where you can read committed data even while there is uncommitted data from another transaction.
The PostgreSQL we tested yesterday did not block that way.
It did of course block when both clients tried to update the same record (as it should), but it didn't block reading.
What exact version did you test against? What was your connect URL, and which driver version did you use?
I dug in a bit deeper.
About that PostgreSQL test you did:
did both clients start a transaction?
If they did, then yes, they should block, because that is the whole point of the READ_COMMITTED isolation level: when you are in a transaction you can't read dirty data. When you are not in a transaction you should be able to read the data (but this is something most databases also don't do).
jcompagner:
I dug in a bit deeper.
About that PostgreSQL test you did:
did both clients start a transaction?
If they did, then yes, they should block, because that is the whole point of the READ_COMMITTED isolation level: when you are in a transaction you can't read dirty data. When you are not in a transaction you should be able to read the data (but this is something most databases also don't do).
Yes, they both started transactions; I had that little green icon at the bottom of the window in both. Now, I'm trying to create brand-new records in both clients, so there is no read on my side…
Can I change the isolation level?
I had a look in my PostgreSQL books…
The conclusion is that I should take away the transaction when building the invoice,
and put a transaction around the code which assigns the invoice numbers,
to avoid irregularities in the numbering sequence.
Thank you everybody for your help.
PostgreSQL will be a solution for some smaller customers, but big companies have standards which are almost impossible to change.
Does anybody know if another database supports reading non-committed data?
I think there are databases that support the READ_UNCOMMITTED isolation level.
But that is not what you want, because then you already see changes from others that they didn't commit.
The strange thing is that we are missing an isolation level between these two:
READ_UNCOMMITTED
READ_COMMITTED
Because READ_COMMITTED is interpreted by databases and implemented however they think it should work:
should it block if there is uncommitted data, or should it just read the committed data and not care whether uncommitted data exists or not?
Something like this:
READ_UNCOMMITTED
READ_COMMITTED_NON_BLOCKING
READ_COMMITTED_BLOCKING