I am trying to import a tab-delimited text file with >11,000 rows into a SQL table. My filters, etc. are all working properly. Here is my data flow:
(Screenshot of the SSIS data flow was attached here.)
As you can see, 10,693 rows are sent to the OLE DB destination, which simply inserts into my SQL table (and 10,693 is the correct count). However, when I look at the SQL table afterward, rows are missing, and different rows are missing after every run. This is the only data flow in the job, so nothing outside this flow is modifying the rows. I've also added an error output to the OLE DB destination (not pictured) just to confirm nothing is erroring out.
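For reference, this is roughly the check I run after each execution; dbo.ImportTarget is just a placeholder for my actual destination table:

```sql
-- Count what actually landed in the destination table after the package runs.
-- The data flow reports 10,693 rows sent to the destination, but this count
-- comes back short by a different amount on each run.
SELECT COUNT(*) AS RowsLoaded
FROM dbo.ImportTarget;
```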
The strange thing is that if I add a data viewer at the circled spot, look at the data, and then let the process run, the correct number of rows is added to the table. It's as if the data viewer adds enough of a delay that a buffer somewhere has time to flush and send everything through to the destination; without it, some rows seem to be dropped before they ever reach the destination.
Advice would be appreciated.