Transfer data

What is the best way to move data from one foundset (the order detail table, for example) to another foundset (the invoice detail table)?
We have a huge foundset (more than 5,000 rows per order) and going through each record takes time. Right now I use this function:

function copyfields(sourceFoundset, destFoundset, foreignField, foreignValue) {
	for (var index = 1; index <= sourceFoundset.getSize(); index++) {
		/** @type {JSRecord} */
		var srcRecord = sourceFoundset.getRecord(index);
		/** @type {JSRecord} */
		var destRecord = destFoundset.getRecord(destFoundset.newRecord(false));
		databaseManager.copyMatchingFields(srcRecord, destRecord, true);

		if (foreignField != null) {
			destRecord[foreignField] = foreignValue;
		}
	}
}

Take a look at the ‘code optimisation’ blog referenced in this thread: viewtopic.php?f=17&t=20023

Hi,
It's a very nice tutorial, I will definitely use it. But my problem is how to speed up the process of loading data from one foundset into another. Because the tables are different I cannot use
fsInvoice.loadRecords(fsOrder), and if I go through a loop it is very slow.

Hi leonid,

the point in the tutorial is that you need to load all the records in your source foundset before looping through it.
That'll definitely speed things up, but it'll never be as fast as a direct SQL insert.

So you could give that a try using the rawSQL plugin (which is definitely way faster).
The downside is that, in order to make Servoy aware of this rawSQL action, you need to execute the ‘flushAllClientsCache’ function, which can be quite expensive.
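As a hedged sketch of that idea: since both tables share matching column names, you can generate a single INSERT ... SELECT statement and hand it to the rawSQL plugin. The helper below (`buildInsertSelect`) and the table/column names are illustrative, not from a real schema; the `plugins.rawSQL` calls in the trailing comments are the ones this thread describes.

```javascript
// Hypothetical helper: build an INSERT ... SELECT statement from a shared
// column list, so one SQL statement replaces the record-by-record loop.
function buildInsertSelect(destTable, srcTable, columns, whereClause) {
	var cols = columns.join(', ');
	var sql = 'INSERT INTO ' + destTable + ' (' + cols + ') ' +
		'SELECT ' + cols + ' FROM ' + srcTable;
	if (whereClause) {
		sql += ' WHERE ' + whereClause;
	}
	return sql;
}

var sql = buildInsertSelect('invoice_detail', 'order_detail',
	['product_id', 'quantity', 'price'], 'order_id = ?');
// sql is now:
// "INSERT INTO invoice_detail (product_id, quantity, price)
//  SELECT product_id, quantity, price FROM order_detail WHERE order_id = ?"

// Inside Servoy you would then execute it and flush, roughly like this
// (server and table names are placeholders):
// var ok = plugins.rawSQL.executeSQL('my_server', 'invoice_detail', sql, [orderId]);
// if (ok) plugins.rawSQL.flushAllClientsCache('my_server', 'invoice_detail');
```

The `?` placeholder keeps the foreign-key value (your foreignValue) out of the SQL string itself, which is both safer and easier to reuse.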

5,000 source records processed per minute (splitting one table up into four related tables) with this technique: http://www.servoymagazine.com/home/2013 … lugin.html.

Still slower than a bunch of SQL insert statements triggered from the command line or the rawSQL plugin, but it doesn’t require huge changes to your code.

Hi Leonid,
I think when I had this problem, I ended up using rawSQL ‘INSERT INTO…’ statements and then flushing caches.
Even though this might be ‘expensive’ in database terms, it is SO much faster than any other way I could find.
You can still pass parameters/arguments to the SQL statements (like your foreignValue).
It will take you a little bit of time to write the SQL and choose each column, but it is worth it.
Also, to make your code more readable, you can put a \ at the end of a line inside the SQL ‘string’ and continue it on the next line, e.g.

function archive_stock_take ()
{
	// copy stock take records to archive

	// Execute any SQL, returns true if successful.
	var $st_no = 35 ; // this could be an argument you pass
	var done = plugins.rawSQL.executeSQL ( "rnoh_estock", 
											"es_stock_take_archive", 
											"INSERT INTO es_stock_take_archive (\
											date_created,\
											date_edited,\
											gln,\
											id_created_by,\
											id_edited_by,\
											id_location,\
											id_product,\
											id_stock_item,\
											id_supplier,\
											line_total,\
											price,\
											quantity,\
											stock_item_uuid_string,\
											stock_take_no,\
											vat_rate,\
											vat\
											) \
											SELECT date_created,\
											date_edited,\
											gln,\
											id_created_by,\
											id_edited_by,\
											id_location,\
											id_product,\
											id_stock_item,\
											id_supplier,\
											line_total,\
											price,\
											quantity,\
											stock_item_uuid_string,\
											stock_take_no,\
											vat_rate,\
											vat\
											FROM es_stock_take ;", null ) ;
	if ( done )
	{
		//flush is required when changes are made in db
		plugins.rawSQL.flushAllClientsCache ( "rnoh_estock", "es_stock_take_archive" )
	}
	else
	{
		var msg = plugins.rawSQL.getException ( ).getMessage ( ); //see exception node for more info about the exception obj
		plugins.dialogs.showErrorDialog ( 'Error', 'SQL exception: ' + msg, 'Ok' )
	}
}
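Note that `$st_no` in the example above is declared but never actually used in the SQL. To wire it in, the rawSQL plugin accepts `?` placeholders with an arguments array as the fourth parameter of `executeSQL`. A minimal sketch, reusing the table names from the example above with a shortened, illustrative column list (`makeArchiveQuery` is a hypothetical helper):

```javascript
// Hypothetical helper: return the SQL and its argument array together,
// so the ? placeholder and its value stay in sync.
function makeArchiveQuery(stockTakeNo) {
	return {
		sql: 'INSERT INTO es_stock_take_archive (stock_take_no, quantity, price) ' +
			'SELECT stock_take_no, quantity, price FROM es_stock_take ' +
			'WHERE stock_take_no = ?',
		args: [stockTakeNo]
	};
}

var q = makeArchiveQuery(35);
// Inside Servoy, the arguments array is passed as the fourth parameter:
// plugins.rawSQL.executeSQL('rnoh_estock', 'es_stock_take_archive', q.sql, q.args);
```

This way only the rows for one stock take are archived per call, instead of the whole table.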

(this is SQL for MS SQL Server, make sure to use correct SQL syntax for your back end SQL db)

Hope this helps.

Rafi

Hi Rafi,
How do you handle the primary key on the second table? We use Postgres and a Servoy sequence.

In these cases you have to use ‘db identity’ (let the SQL database generate the PKs).
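On PostgreSQL that means either giving the PK column a sequence default and omitting it from the INSERT, or calling the sequence explicitly in the SELECT list. A sketch (table, column, and sequence names here are illustrative):

```sql
-- Option 1: PK column has DEFAULT nextval('invoice_detail_id_seq'),
-- so simply leave it out of the column list:
INSERT INTO invoice_detail (order_id, product_id, quantity)
SELECT order_id, product_id, quantity FROM order_detail;

-- Option 2: generate the PK explicitly in the SELECT list:
INSERT INTO invoice_detail (id, order_id, product_id, quantity)
SELECT nextval('invoice_detail_id_seq'), order_id, product_id, quantity
FROM order_detail;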

I agree with Rafi,

When you have to move that much data around within the same database, it’s usually much better to do it all at the database level with a single SQL query.
The data doesn’t have to go over the wire (twice or more), and the speed advantage outweighs the cost of flushing the clients’ cache for the specific table(s).
But as with everything, you have to check whether this is a good option for your particular use case.