Hello,
I have the following PERFORMANCE issue:
I have created a job that fills and updates a database from a source db with the same structure.
The problem is that performance was good at the beginning, but now the source and destination dbs are very large and the import/update time is too long.
The INSERT code compares the pk column in the source and dest db; the rows missing from the destination are inserted there.
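The insert is roughly like this (table and column names are just placeholders, the real tables have more columns):
INSERT INTO tableADest (colpk, col1, col2)
SELECT source.colpk, source.col1, source.col2
FROM tableASource source
WHERE NOT EXISTS (SELECT 1 FROM tableADest dest WHERE dest.colpk = source.colpk)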
UPDATE: the job checks col by col, and if any value has changed it performs the update with the following statement (one per table in the DB):
UPDATE dest
SET col1 = source.col1, col2 = source.col2, ... coln = source.coln
FROM tableADest dest
INNER JOIN tableASource source
ON dest.colpk = source.colpk
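The col-by-col check can also be folded into the statement itself, for example with EXCEPT, so unchanged rows are not touched (only col1 and col2 shown as placeholders; this needs SQL Server 2005 or later, on older versions the comparison has to stay as explicit <> / IS NULL checks per column):
UPDATE dest
SET col1 = source.col1, col2 = source.col2
FROM tableADest dest
INNER JOIN tableASource source
ON dest.colpk = source.colpk
WHERE EXISTS (SELECT source.col1, source.col2 EXCEPT SELECT dest.col1, dest.col2)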
The main problem is that I cannot identify which rows were updated in the source db; every time I have to compare the whole equivalent tables (source and dest db), because there is no timestamp, last-updated column, or any other useful column that would give me a subset of data to find the new or updated rows.
How can I increase performance and reduce the import time?
Thanks
|||You can try the replication service instead of doing it manually; it will be faster than the DTS job.
If you want to do it manually, run the following operations in this order for each table (see the sketch after the list):
1. Delete
2. Update
3. Insert
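The update and insert steps are the statements you already have; a sketch of the missing delete step, using the same placeholder names, could be:
DELETE dest
FROM tableADest dest
WHERE NOT EXISTS (SELECT 1 FROM tableASource source WHERE source.colpk = dest.colpk)
Running the delete first means the update and insert do not waste time on rows that will be removed anyway.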
|||Hello,
thanks, but we tried replication at the beginning and ran into some problems, so we decided to use other methods such as the one above.
Now I am thinking of creating a trigger on each big table to identify which records were newly inserted or updated. The trigger populates a table that holds only those pk values, so I can build a subset of records that are only new or updated.
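A minimal sketch of that idea, assuming an INT pk and made-up names for the change table and trigger (the trigger goes on the source table):
CREATE TABLE tableASource_changes (colpk INT NOT NULL PRIMARY KEY);
GO
CREATE TRIGGER trg_tableASource_changes
ON tableASource
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- log the pk of every inserted or updated row, skipping pks already logged
    INSERT INTO tableASource_changes (colpk)
    SELECT i.colpk
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM tableASource_changes c WHERE c.colpk = i.colpk);
END;
GO
The import job would then join tableASource_changes to tableASource to process only the logged rows, and clear the change table after a successful run.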