
Copying rows between two SQL tables located on separate SQL servers requires attention to several points, but with InDriver, the copy task can be defined in fewer than 25 lines of JavaScript code.

Whether the SQL servers are merely separate instances or come from different vendors (e.g., PostgreSQL and MS SQL Server), the copy algorithm must extract a portion of data (rows) from the source table and insert it into the destination table.

While the copy algorithms may vary between cases, there are common considerations:

  1. Obtain the last row index copied to the destination table when the copying algorithm starts up.

  2. Maintain the last copied row marker (index or timestamp) from the last successful copy procedure.

  3. Safeguard the algorithm against selection or insertion failures – advance the marker only when the copy routine succeeds.

  4. Balance the copying frequency against the size of a single copy portion – the number of rows. Data must be copied faster than it is generated in the source table, accumulated rows should be copied within an acceptable time, and the SQL server load should stay reasonable.
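The considerations above can be sketched, independently of InDriver, as a generic batched copy loop. In this sketch, `selectBatch` and `insertBatch` are hypothetical stand-ins for real source and destination database calls, demonstrated here against in-memory arrays:

```javascript
// Generic batched copy loop (sketch). selectBatch/insertBatch are
// hypothetical stand-ins for real source/destination database calls.
function makeCopier(selectBatch, insertBatch, batchSize, startId) {
  let lastCopiedId = startId; // marker kept across runs (considerations 1 & 2)
  return function copyOnce() {
    const rows = selectBatch(lastCopiedId, batchSize);
    if (rows === null) return "select failed";     // consideration 3: do not advance
    if (rows.length === 0) return "nothing to copy";
    if (!insertBatch(rows)) return "insert failed"; // consideration 3: do not advance
    lastCopiedId = rows[rows.length - 1].id;        // advance only on success
    return rows.length + " rows copied up to id " + lastCopiedId;
  };
}

// Demo with an in-memory "source table" and "destination table"
const source = [{ id: 1 }, { id: 2 }, { id: 3 }];
const dest = [];
const copyOnce = makeCopier(
  (after, limit) => source.filter(r => r.id > after).slice(0, limit),
  rows => { dest.push(...rows); return true; },
  2, 0);
copyOnce(); // copies ids 1, 2
copyOnce(); // copies id 3
```

Batch size 2 here corresponds to `copy_size` in the InDriver task, and each `copyOnce()` call corresponds to one onHook cycle.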

All of these aspects are addressed in the example below.

Example of copying a log table:
Copy SQL Tables between databases
Let's consider a log table named 'inapi_log' with columns id (SERIAL PRIMARY KEY), driver (text), task (text), type (text), ts (timestamptz), and msg (text).

This log table will be continuously copied to another table with the same columns, named 'copy_inapi_log', located on a separate SQL server, at a rate of 1000 rows per second. Both servers in this example are PostgreSQL.

The InStudio screenshot below illustrates the InDriver configuration, featuring the defined 'copy table' task and its accompanying JavaScript code.
InStudio configuration with SQL table copy task.
The table copying task is defined by two code blocks - onStartup and onHook, as presented below:

  • onStartup

// Create destination (copy) table and indexes if they do not exist - public.copy_inapi_log
InDriver.sqlExecute("azureserver", "CREATE TABLE IF NOT EXISTS public.copy_inapi_log (id SERIAL PRIMARY KEY, driver text, task text, type text, ts timestamp with time zone, msg text);\
CREATE INDEX IF NOT EXISTS copy_inapi_log_driver_index ON public.copy_inapi_log USING BTREE (driver);\
CREATE INDEX IF NOT EXISTS copy_inapi_log_task_index ON public.copy_inapi_log USING BTREE (task);\
CREATE INDEX IF NOT EXISTS copy_inapi_log_ts_index ON public.copy_inapi_log USING BTREE (ts);\
CREATE INDEX IF NOT EXISTS copy_inapi_log_type_index ON public.copy_inapi_log USING BTREE (type);")

// Define a single copy portion size (1000 rows, matching the 1000 rows/s rate)
var copy_size = 1000

// Copy frequency 1/1s (the onHook cycle period of the task)

// Select the last copied row id from the destination table; 0 if no rows copied yet
var last_copied_row_id = 0
let q = JSON.parse(InDriver.sqlExecute("azureserver", "SELECT id FROM public.copy_inapi_log ORDER BY id DESC LIMIT 1"))
if (q.length === 1)
    last_copied_row_id = q[0].id
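The startup logic above relies on `InDriver.sqlExecute` returning a JSON string: inferred from the code, that is an array of row objects on success, empty when the table has no rows. A small standalone sketch of how the starting id resolves for the two possible results (the JSON strings below are illustrative):

```javascript
// Resolve the starting row id from a JSON result string, as onStartup does:
// an empty array means nothing has been copied yet, so start from 0.
function resolveLastCopiedId(jsonResult) {
  const q = JSON.parse(jsonResult);
  return (Array.isArray(q) && q.length === 1) ? q[0].id : 0;
}

resolveLastCopiedId("[]");             // → 0 (empty destination table)
resolveLastCopiedId('[{"id":41230}]'); // → 41230 (resume after this id)
```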


  • onHook



function copy() {
    // Copy a portion of rows of size copy_size, starting after last_copied_row_id,
    // from the source table on "localserver"
    let copy = JSON.parse(InDriver.sqlExecute("localserver",
        ["SELECT id, driver, task, type, ts, msg FROM public.inapi_log WHERE id > ", last_copied_row_id, " ORDER BY id LIMIT ", copy_size]))

    // Check the select result; on error do not advance last_copied_row_id
    if (copy.hasOwnProperty("Error")) {
        InDriver.log("Copy failed at id: " + last_copied_row_id + " Source error") // exact InDriver logging API name assumed
        return
    }
    if (!copy.length) {
        InDriver.log("No more rows to copy from id: " + last_copied_row_id)
        return
    }

    // Build the SQL insert query
    let sql = ["INSERT INTO public.copy_inapi_log (id, driver, task, type, ts, msg) VALUES "]
    // Iterate over the copied rows to build the inserted values list
    for (var i = 0; i < copy.length; i++) {
        let row = copy[i]
        if (i > 0)
            sql.push(",")
        sql.push("(", row.id, ",'", row.driver, "','", row.task, "','", row.type, "','", row.ts, "',$magic$", row.msg, "$magic$)")
    }

    // Execute the SQL insert query on the destination server (where onStartup created the copy table)
    let q = JSON.parse(InDriver.sqlExecute("azureserver", sql))
    if (!q.hasOwnProperty("Error")) {
        // Success - advance the last copied row id
        let last = copy[copy.length - 1].id
        InDriver.log(copy.length + " rows copied from id: " + last_copied_row_id + " to " + last)
        last_copied_row_id = last
    } else {
        InDriver.log("Copy failed at id: " + last_copied_row_id)
    }
}

copy()

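The query-building loop in onHook accumulates SQL fragments in an array that is later joined into one statement. A standalone sketch of that rendering for a sample row, assuming the fragments are simply concatenated; `$magic$ ... $magic$` is PostgreSQL dollar-quoting, which spares the msg column from quote escaping:

```javascript
// Build the INSERT statement for a batch of rows, mirroring the onHook loop.
// $magic$...$magic$ is PostgreSQL dollar-quoting: quotes inside msg need no escaping.
function buildInsert(rows) {
  let sql = ["INSERT INTO public.copy_inapi_log (id, driver, task, type, ts, msg) VALUES "];
  for (let i = 0; i < rows.length; i++) {
    const row = rows[i];
    if (i > 0) sql.push(",");
    sql.push("(", row.id, ",'", row.driver, "','", row.task, "','", row.type,
             "','", row.ts, "',$magic$", row.msg, "$magic$)");
  }
  return sql.join("");
}

const stmt = buildInsert([
  { id: 1, driver: "d1", task: "t1", type: "info",
    ts: "2024-01-01 00:00:00+00", msg: "it's ok" },
]);
// stmt renders as:
// INSERT INTO public.copy_inapi_log (id, driver, task, type, ts, msg) VALUES
//   (1,'d1','t1','info','2024-01-01 00:00:00+00',$magic$it's ok$magic$)
```

Note how the apostrophe in "it's ok" would break an ordinary single-quoted literal but is harmless inside the dollar-quoted block.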
As shown in the screenshot below, a batch of 1000 rows is transferred to the destination table every second.
Monitoring the performance of the copy algorithm