I'm building a new package to move data from one table to another table that's on a different server. There are over 1 million rows. Is there a better way than using a simple data flow task for this? I'm also using a derived column transform, since the source and destination schemas differ slightly.
Do all 1 million rows get loaded into memory at once when the package executes?
Is there a better way or best practice for this type of activity?