jaleman:
leesnover:
I.e., go to the 100,000th record, scroll through a few records, then jump to the 200,000th record and scroll through a few more. Jump to the end, scroll back a few records, etc.
If you look at the thread Harjo linked to, we are still searching for people who can explain exactly why you would want to do this. Perhaps you can ask your ‘power user’ what exactly he is doing that requires jumping from record 100,000 to record 200,000?
Jan:
I didn’t want to get into that for fear you might just think I’m being a PITA, but since you asked…
In our case, we get very varied data in: specifically, insurance claims data for long-term and short-term injuries. We never quite know what we are going to see in the records, so in effect we take a random sample and start looking at the characteristics of the records. They typically come in in chronological order over many years. Our people are very experienced, and by examining a few records from each year, certain patterns start to jump out. For these folks, scrolling through the records is a very intuitive process, and you would be genuinely amazed at what they can glean from the data just by popping around in the database. Again, not to beat a dead horse, but this is a feature they have enjoyed and used in FileMaker for many years, and taking it away is akin to taking the bottle away from the baby.
Yes, we could develop more “sophisticated” sampling techniques that would limit the found set, but honestly, what they do is a very intuitive and subjective process. They may notice something while scanning the first few dozen records for each year; maybe they see nothing and move on to the next year, or maybe they see a pattern and continue moving through the current year to look at more records. When certain patterns stick out, they will go in and run more refined queries and subtotals to see whether what they saw in the sample holds up in the database.
These are not programmers or technical people, but they gain immense insight simply by “flipping through the book”, if you will, based on years of experience with similar data. I don’t think they could easily quantify what they are looking for; each circumstance is often quite different. They truly are “knowledge workers”, and what they are able to discover through this review process is truly amazing. Trying to boil what they do down into a set of algorithms would be quite challenging, though we may try some day. ;-)
I’d love for you to be able to come spend a couple of days with us; it would be a great sharing experience. Maybe we are not “typical”, but I think many people do use FileMaker in this manner, and many of these folks will find this a limitation in Servoy. Again, I’m aware of why you have implemented things the way you have from a technical standpoint, and of the limitations of a Java “client” with a server. But there are ways to work around the issue, and I will have to come up with some strategies for handling it.
I did something similar in Omnis, which has many of the same constraints as Servoy. What I did was build a scrollable index list, then just let them scroll through that limited index and load the full record data only as they stopped on individual records. Servoy’s handling seems to be to always load the entire record as you jump through the set, which adds a great deal of overhead and slows things down.
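For the sake of discussion, here is a rough sketch of that Omnis-style pattern in plain Java/JDBC terms (only because Servoy itself is Java-based; the table and column names claims, claim_id, and claim_date are made up for illustration). The point is simply that the index query stays narrow and the full row is fetched only when the user stops on a record:

```java
import java.sql.*;
import java.util.*;

// Sketch of the "scrollable index list" pattern: hold only primary keys in
// memory, fetch a full record on demand. Table/column names are hypothetical.
public class LazyRecordBrowser {
    private final Connection conn;
    private final List<Long> index = new ArrayList<>(); // one pk per record

    public LazyRecordBrowser(Connection conn) throws SQLException {
        this.conn = conn;
        // Load only the primary keys up front: a narrow, fast query even
        // across hundreds of thousands of rows.
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT claim_id FROM claims ORDER BY claim_date")) {
            while (rs.next()) index.add(rs.getLong(1));
        }
    }

    public int size() { return index.size(); }

    // Fetch the full record only when the user actually stops on row n,
    // e.g. "jump to the 100,000th record".
    public Map<String, Object> recordAt(int n) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM claims WHERE claim_id = ?")) {
            ps.setLong(1, index.get(n));
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                ResultSetMetaData md = rs.getMetaData();
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++)
                    row.put(md.getColumnName(i), rs.getObject(i));
                return row;
            }
        }
    }
}
```

Jumping from record 100,000 to record 200,000 then costs one single-row query rather than a transfer of everything in between.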
If I could use a view, or a subset of a full table, to build the primary form record, and load the full record only for the records they choose to ‘jump to’ or ‘stop on’, I might be able to handle it a little more effectively. The inability to use views is somewhat limiting. Again, I’m sure you have some techniques I might employ to get around this, but I have yet to learn them. ;-)
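To make the view idea concrete, something like this is what I have in mind (again, all names are hypothetical, and this assumes the backend permits creating views):

```java
import java.sql.*;

// Hypothetical "narrow view" for browsing: it carries only the key plus a
// couple of identifying columns, so scrolling the index stays cheap. The
// full row is fetched from the base table only on a "stop", as in the
// sketch above. All names (claims_index, claims, ...) are made up.
public class ClaimsIndexView {
    public static void create(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(
                "CREATE VIEW claims_index AS " +
                "SELECT claim_id, claim_date, claimant_name FROM claims");
        }
    }
}
```

The primary form would bind to claims_index for scrolling, and only a ‘jump to’ or ‘stop on’ would trigger the wide SELECT against the base table.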
Sincerely,
Lee Snover