Limiting Records

Hi All,

Is it possible to limit the records of a relation to a given number? I mean, only the first 100 records would be loaded for a relation. Is it doable?

Thanks,

If you let Servoy do that for you, it will initially load only 200 records and load more as needed (as the user scrolls through the foundset). If you really want to limit the number of records displayed, you have to load the foundset yourself: place an unrelated tab in the tab panel and, for example in the onRecordSelection event of the master record, load the correct set of related records (see the sketch below).
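
For example, something along these lines (a rough sketch; the server, table, form and column names are placeholders, and the query must return the primary keys of the detail table):

	function onRecordSelection(event)
	{
		// Primary key of the selected master record (placeholder column name).
		var masterId = foundset.project_id;

		// The last argument of getDataSetByQuery caps the result at 100 rows.
		var query = 'SELECT detail_id FROM project_detail WHERE project_id = ? ORDER BY due_datetime';
		var ds = databaseManager.getDataSetByQuery('example_server', query, [masterId], 100);

		// Show the limited set in the unrelated form placed on the tab panel.
		forms.detail_tab_form.foundset.loadRecords(ds);
	}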

Foundsets are limited to 200,

and related foundsets (shown in a portal or tabpanel) are already limited to 60, I believe…

Thanks Nicola and Harjo,

If it is 60 for a related foundset, then I think it is fine for our scenario. But why is it taking so much time to load?

Actually, I want to limit the number of records to minimize the time it takes to load the records from a relation. The relation is simple, and the form (table view) is placed on a tab panel. I have attached a rowBgColorCalculation to the form; in it I calculate the color code based on one of the fields of the record. I have optimized the rowBgColorCalculation a lot, but with no considerable performance increase. I have also run the profiler to check which processes are running in the background, and I found only the rowBgColorCalculation running.

Any thoughts on the same?

Thanks,

Look also under the Performance tab on your servoy-admin page and see if there are queries that take a long time.

Did you also test without the rowBgColor calculation?

Hi Harjo,

Yes, if I remove the rowBgColorCalculation, it loads very fast, in less than a second.

I am looking at the performance tab…

Check also for valuelist queries.
Can you post your rowBGcolor method?

Here is the code for the rowBgColorCalculation.

	// Current server time (one call to the application server per row).
	var currentDateTime = application.getServerTimeStamp();
	var curProjectDueDateTime = curRecord.due_datetime;

	// No due date set: fall through to the "beyond 4 hours" color.
	if (curProjectDueDateTime == null) {
		return globals.row_bg_projects_due_beyond_4_hours;
	}

	var millisUntilDue = curProjectDueDateTime.getTime() - currentDateTime.getTime();

	if (millisUntilDue < 0) {
		return globals.row_bg_projects_over_due;
	}
	if (millisUntilDue <= 900000) { // 15 minutes
		return globals.row_bg_projects_due_within_15_minutes;
	}
	if (millisUntilDue <= 7200000) { // 2 hours
		return globals.row_bg_projects_due_within_2_hours;
	}
	if (millisUntilDue <= 14400000) { // 4 hours
		return globals.row_bg_projects_due_within_4_hours;
	}
	return globals.row_bg_projects_due_beyond_4_hours;

I found the issue was with the application.getServerTimeStamp() statement. Now it is really very fast.

Thanks,

Makes sense: every time you call application.getServerTimeStamp(), the client needs to poll the application server, which means once for every row. That is quite a lot of overhead.
Glad you solved it, sometimes thinking aloud really helps ;)

Infop:
I found the issue was with the application.getServerTimeStamp() statement. Now it is really very fast.

Thanks,

Hi Infop,

Can you share with us how you worked around this?

Harjo:
Can you share with us how you worked around this?

Yes, sure.

I have defined a global variable to store the current timestamp and started a cron job that updates that global variable at a fixed interval. One minute was fine for me, so I scheduled the job to run every minute. I then use this global variable in the rowBgColorCalculation to get the current timestamp.
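
In outline it looks something like this (a rough sketch; the method, job and global names are placeholders, and it assumes the Servoy scheduler plugin with Quartz-style cron timings):

	// Global method run by the scheduled job: refresh the cached server time.
	function updateServerTime()
	{
		globals.current_server_time = application.getServerTimeStamp();
	}

	// At solution startup: seed the global, then schedule the job to fire every minute.
	function startServerTimeJob()
	{
		globals.current_server_time = application.getServerTimeStamp();
		plugins.scheduler.addCronJob('refresh_server_time', '0 * * * * ?', updateServerTime);
	}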

Thanks,

You could make it simpler: at client startup, evaluate the difference between application.getServerTimeStamp() and new Date() to get the offset between the server and client clocks and store it in a global; then in the onRowBGcolor method just add that offset to new Date(). No need to have a cron job running every minute.
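
Something like this (a sketch; globals.server_clock_offset is just an illustrative name):

	// At client startup: remember how far the server clock is ahead of the client clock (in ms).
	globals.server_clock_offset = application.getServerTimeStamp().getTime() - new Date().getTime();

	// In the rowBgColorCalculation: compute "server now" locally, no server round trip per row.
	var serverNowMillis = new Date().getTime() + globals.server_clock_offset;
	if (curProjectDueDateTime != null && curProjectDueDateTime.getTime() < serverNowMillis) {
		return globals.row_bg_projects_over_due;
	}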

Thanks Nicola. This is much simpler and easier to implement.

Thanks for sharing.

You’re welcome!