You said it has a specific order by some db field, but you were not explicit about indexes.
So I do not know at the moment if you guys use indexes to support fast data retrieval.
In any case, a look at http://use-the-index-luke.com may speed things up.
Actually, we have filters, and these filters are very slow to retrieve results. Sometimes a filter needs to hit very old records, and then the list loads very slowly.
My feeling is that the slow performance is not related to Servoy's pkChunkSize, but rather to the filtering.
I do not know your exact use case, so I cannot tell whether the 300K table could be split into an archive table and a "current_data_table" to make it faster.
In any case, the filtering should be optimized with indexes, if that has not been done yet.
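For example (table and column names below are hypothetical — adapt them to your schema), a composite index that covers the filter columns and the sort column often lets the database avoid a full table scan:

```sql
-- Hypothetical: records filtered by status and date, sorted by creation date.
-- A composite index supports both the WHERE clause and the ORDER BY.
CREATE INDEX idx_orders_status_created
    ON orders (status, created_at);
```

Check the actual execution plan (e.g. with EXPLAIN in PostgreSQL or MySQL) to confirm the index is really used by your filter queries.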
You could also try whether direct SQL makes sense for some filters: instead of using Servoy's find mode, select the needed records into your foundset with an optimized SQL SELECT. Also test the namedFoundset=empty property, so that the table form starts up fast with an empty foundset and waits until the user tells it which records should be shown.
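To sketch the direct-SQL idea (this only runs inside a Servoy client, and the table and column names are placeholders — `loadRecords` expects a SELECT that returns the table's primary key column):

```
// Hypothetical Servoy method: load only the matching PKs into the foundset
// instead of letting find mode build the query.
// 'orders' / 'order_id' / 'order_date' are placeholder names - adapt to your schema.
var query = 'select order_id from orders where order_date >= ? order by order_date desc';
var args = [new Date(2020, 0, 1)];
foundset.loadRecords(query, args); // foundset now contains only the selected records
```

Combined with namedFoundset=empty, the form opens instantly and this kind of targeted load runs only when the user actually asks for records.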
My bottom-line feeling is that performance can be optimized. If the above suggestions do not help, a consultant with a lot of Servoy experience may be of use, as he/she brings in other perspectives and experiences that could complement your own.
I just had a look at your website, and I am now curious where this 300K record table stems from, I mean what you store inside it. For a drug table it seems too big; e.g. in Germany we have 50K different drugs (pharmaceuticals).