ptalbot:
The problem is that, according to the Progress knowledge base, this implements dirty reads, meaning it also retrieves uncommitted data. So it's not safe and could lead to clients' caches being out of sync with the actual state of the database (in the case of concurrent transaction rollbacks)?
In the rare case of concurrent transaction rollbacks, yes, that's possible. The thing is, there is no magic option to make an old database like Progress suddenly become ACID compliant for multiple users over JDBC, so you have to pick your poison.

With Progress, it seems to lock rows even when doing selects through the legacy app, so the act of a user just sitting on a record can lock a row. This might be application specific, so you should test what happens in the legacy app when accessing Progress. In the testing we've done with our clients, it locked when doing reads, but the legacy apps themselves did very short transactions (i.e., they had an edit/save type button, and on save would start/commit/end the transaction), so there was little chance of concurrent transaction rollbacks. Again, this might differ based on what the legacy apps do against the Progress db.
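For reference, "dirty reads" is what JDBC calls the READ UNCOMMITTED isolation level. Here's a minimal sketch of requesting it explicitly on a plain JDBC connection, just to show what the behavior corresponds to (the URL, credentials, and table are placeholders, not tied to any particular driver):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DirtyReadDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL/credentials -- substitute your own Progress/OpenEdge JDBC settings.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:datadirect:openedge://dbhost:5000;databaseName=sports", "user", "pass")) {
            // READ UNCOMMITTED allows dirty reads: SELECTs don't wait on row locks,
            // but may see data from transactions that later roll back.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, name FROM customer")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```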
ptalbot:
Wouldn’t a better solution be to implement catch/retry when reading data?
I don’t think so. From what I saw, Servoy already does a try/catch. If I remember correctly from going over the logs, it would try 3 times (but it’s been a while, so you should verify that). You might also have legacy Progress apps that start a transaction when the edit button is clicked, and then the user goes off to lunch, which could lock out the rest of the users for a long period of time.
It’s also more complex than just one record in one table. In most Servoy apps, a form loads lots of related data and has value lists based on other tables, so it hits many tables just to show the record the user clicks on. Waiting in a try/catch for each of these would be disastrous and make the Servoy app unusable.
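To make that concrete, here is roughly what such a catch/retry wrapper looks like. This is a hypothetical sketch, not Servoy's actual internals; the attempt count and delay are assumptions. The point is that when a legacy user holds a lock, every table a form touches pays this stall in turn:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RetryingReader {
    private static final int MAX_ATTEMPTS = 3;      // assumed; verify the real count in the logs
    private static final long RETRY_DELAY_MS = 1000; // assumed delay between attempts

    // Runs the query, retrying when a lock error surfaces as a SQLException.
    // If a legacy user holds the row lock for minutes, every attempt blocks or
    // fails, so a form loading N tables pays this cost N times over.
    // Caller is responsible for closing the returned ResultSet's statement.
    static ResultSet queryWithRetry(Connection conn, String sql)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                PreparedStatement ps = conn.prepareStatement(sql);
                return ps.executeQuery();
            } catch (SQLException e) {
                last = e;
                Thread.sleep(RETRY_DELAY_MS); // every retry adds user-visible latency
            }
        }
        throw last;
    }
}
```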
Also, keep in mind you still have the general Servoy caching issue to deal with. The legacy app that connects to Progress is probably still actively used and inserting/updating/deleting data while the Servoy app is in use, so you also have to implement background workers that scan those tables by modification date and broadcast the changes to Servoy users. So even if a dirty read did happen, it will probably be corrected during this process, updating the users’ caches correctly. Of course, you could also implement real-time updates over a web service that the legacy app calls, but most people don’t want to deal with that.
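A background worker of that kind can be as simple as a scheduled poll. Here's a sketch under two assumptions: that each table carries a modification-date column maintained by the legacy app (here called mod_date), and that broadcastChange is a hypothetical hook standing in for whatever mechanism pushes the refresh out to connected Servoy clients:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheSyncWorker {
    private final Connection conn;
    private final List<String> tables; // tables the Servoy app caches
    private Timestamp lastRun = new Timestamp(System.currentTimeMillis());

    CacheSyncWorker(Connection conn, List<String> tables) {
        this.conn = conn;
        this.tables = tables;
    }

    void start() {
        // Poll interval is an assumption; tune it to how fresh the cache must be.
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(this::scanOnce, 30, 30, TimeUnit.SECONDS);
    }

    private void scanOnce() {
        Timestamp now = new Timestamp(System.currentTimeMillis());
        for (String table : tables) {
            // Assumes the legacy app keeps a mod_date column up to date on every write.
            String sql = "SELECT id FROM " + table + " WHERE mod_date > ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, lastRun);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        broadcastChange(table, rs.getInt("id"));
                    }
                }
            } catch (SQLException e) {
                e.printStackTrace(); // a real worker would log and keep going
            }
        }
        lastRun = now;
    }

    // Hypothetical hook: push the change to connected clients so their
    // cached records get refreshed (e.g. via Servoy's data-broadcast mechanism).
    private void broadcastChange(String table, int id) {
        System.out.println("changed: " + table + " #" + id);
    }
}
```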
So, if you combine our driver with the background workers for cache updates, it will be correct most of the time, and even if a dirty read does happen, it will only persist until the background worker runs and broadcasts the change. I think that’s the best possible approach to get Progress to scale with Servoy to a large number of users while still keeping a nice fast user experience.