I have some strange behaviour in a calculation: I iterate over a related foundset, but the number of records in the for loop is too big. From a certain index on, the records are duplicated. My code in the calculation is:
application.output(budget_positions_charged_by_bookings.getSize()); // Outputs a size of 105 records; the correct size is 75
for (var i = 1; i <= budget_positions_charged_by_bookings.getSize(); i++) {
// Do something
}
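One thing worth checking with a loop like the above: getSize() is re-evaluated on every iteration, so if the foundset changes underneath the loop, the loop itself grows. A minimal sketch of caching the size once, using a stand-in foundset object (the mock below is made up for illustration and is not the real Servoy JSFoundSet API):

```javascript
// Stand-in foundset object (not the real Servoy JSFoundSet API).
const foundset = {
  records: [10, 20, 30],
  getSize() { return this.records.length; },
  getRecord(i) { return this.records[i - 1]; }, // 1-based, like Servoy
};

// Cache the size once instead of re-reading it every iteration.
const sizeBefore = foundset.getSize();
for (let i = 1; i <= sizeBefore; i++) {
  // Do something with foundset.getRecord(i)
}
const sizeAfter = foundset.getSize();
console.log(sizeBefore === sizeAfter); // true when the foundset was stable
```

Comparing the cached size with getSize() after the loop at least makes a mid-loop size change observable.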
Do you get the same records before and after you call loadAllRecords()?
(so records 1-75 are correct and the same in both cases)
But I can't see why this would happen; getSize() returns the size of the pk array of the foundset.
And yes, loadAllRecords() will reset that list by doing a query… But what if that list is incorrect and gives you a size for which you really can't ask for a record?
What does record 76 give you?
I reproduced the error with a foundset of size 65 (the correct size). See the attached screenshot: the footer shows that record 70 is selected out of 65 records. (The footer text is %%selectedIndex%% / %%numberOfBookings%% %%i18n:hades.bdg.lbl.bookings%%, where numberOfBookings is an aggregation that counts the number of records.) The last five entries are duplicated, as you can see in the screenshot.
The foundset's size is wrong after the for loop. I did an application.output() before and after the loop, outputting the foundset size and the amount of the 70th record. I found that the calculation is called three times; after the first loop, the size is correct (65). It seems that the calculation is called a second time before the first call has finished. Could this be the problem? The output was:
Foundset size before loop: 60
Foundset size before loop: 60
Amount for 70th record before loop:
–Size after loop: 65
–70th after loop:
Amount for 70th record before loop: 290
–Size after loop: 70
–70th after loop: 290
Foundset size before loop: 70
Amount for 70th record before loop: 290
–Size after loop: 70
–70th after loop: 290
Any idea?
Thanks and regards
Birgit
PS: The solution I had yesterday (loadAllRecords) is not a good one. I found today that it produces a recursion problem: loading the records probably calls the calculation again, which loads the records again…
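One common way to break such a recursion is a re-entrancy guard. This is only a sketch under the assumption that the calculation can be protected with a module-level flag; the function name, the flag, and the foundset mock below are all made up for illustration, not the real Servoy API:

```javascript
// Hypothetical re-entrancy guard for a calculation that iterates a foundset.
let calculating = false;

function sumChargedBookings(foundset) {
  if (calculating) {
    // Re-entrant call: bail out instead of touching the foundset again,
    // which could retrigger the calculation recursively.
    return 0;
  }
  calculating = true;
  try {
    let total = 0;
    const size = foundset.getSize(); // cache once; don't re-read mid-loop
    for (let i = 1; i <= size; i++) {
      total += foundset.getRecord(i).amount;
    }
    return total;
  } finally {
    calculating = false; // always reset, even if the loop throws
  }
}

// Mock foundset for illustration (not the Servoy JSFoundSet API)
const mock = {
  records: [{ amount: 10 }, { amount: 20 }, { amount: 30 }],
  getSize() { return this.records.length; },
  getRecord(i) { return this.records[i - 1]; }, // 1-based, like Servoy
};
console.log(sumChargedBookings(mock)); // 60
```

The try/finally ensures the guard flag is cleared even when the body throws, so a single failure does not leave the calculation permanently disabled.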
This is not really that easy to reproduce… Please try this in a sample solution to see if it happens there.
I don't know what exactly you do in the calculation and what it all depends on.
If I look at the 70/65 picture, the last 5 records (65->70) seem to be exact duplicates of records 60->65…
What are the primary keys of those? Is it really the case that there are duplicate primary keys in the pk list of that related foundset?
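To check for that, one could dump the pks and look for repeats. A minimal sketch (the helper name and pk values are made up for illustration; the real pk list would come from the foundset):

```javascript
// Find primary keys that occur more than once in a pk list.
function findDuplicatePKs(pks) {
  const seen = new Set();
  const dups = [];
  for (const pk of pks) {
    if (seen.has(pk)) dups.push(pk);
    else seen.add(pk);
  }
  return dups;
}

// Example: records 61-65 appearing a second time as rows 66-70
const pks = [61, 62, 63, 64, 65, 61, 62, 63, 64, 65];
console.log(findDuplicatePKs(pks)); // the five duplicated pks: 61, 62, 63, 64, 65
```

An empty result would rule out duplicate pks and point the investigation back at getSize() instead.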
I made a sample solution, and the problem is reproducible with it. I filed a case. I'm not sure whether the screenshots were uploaded into the support system as well, so I'm adding them here too.
I hope you can now reproduce it as well, and fix it. Why is nobody else having this problem?
I'm not sure how I should read "threading issue". If "thread" means:
part of a process / a timing issue: You didn't know about this problem before. Then I'm very happy that you could reproduce it with my sample solution, correct it, and deliver the fix with the next release! I know that debugging is difficult, especially when the bug is not reproducible. I understand the necessity of sample solutions.
a known issue in the support system / a forum thread: Debugging this strange behaviour was not easy; it took me many hours. Breakpoints made the bug disappear, and since it is a timing problem, the bug was not always visible. If you had already known about this timing problem, it would make me a little upset. I would have expected you to point me to the known bug. You even asked me for a sample solution, and you know how time-consuming it is to create one. And often, bugs "disappear" in a simple environment. I really hope that you are all informed about "known issues" and that you recognize from the descriptions in the forum when a topic matches a known bug. By the way: is there a list of known bugs available somewhere? If this had been known already, it would have saved me many hours of debugging and explaining in the forum.
It was the first option.
The sample solution was very good. It did not always show the issue (as happens with race conditions in threading issues), but it did most of the time; without it we could not have fixed it.