Hi,
I'm loading data into my fact table every day: about 200,000 rows per day, which is small. The table has more than 400 million rows today.
Because it's a small incremental insert, I don't disable the indexes before the load; rebuilding them for just 200K rows doesn't seem worth it.
The table uses page-level compression.
The insert process is very slow: about 4,000 rows/sec (I expected it to be roughly 10 times faster).
If I disable the indexes, the insert is faster.
But I think the load was much faster before I enabled compression.
So is the compression the root cause? Are the indexes compressed too?
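I guess I can check that per index with a query like this (dbo.MyFactTable is just a placeholder for my table name):

-- Compression setting of each index, per partition
SELECT i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.indexes AS i
  ON i.object_id = p.object_id
 AND i.index_id  = p.index_id
WHERE p.object_id = OBJECT_ID('dbo.MyFactTable');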
How can I identify which part of the insert job impacts the bulk insert? (Is it the compression? The index maintenance during the commit? ...)
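For instance, I imagine I could compare the waits of the load session and look at the page-compression counters (session_id 55 and dbo.MyFactTable are placeholders; sys.dm_exec_session_wait_stats needs SQL Server 2016 or later):

-- Waits accumulated by the load session (run from another connection)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = 55
ORDER BY wait_time_ms DESC;

-- How often SQL Server attempted (and managed) page compression during inserts
SELECT i.name AS index_name,
       os.leaf_insert_count,
       os.page_compression_attempt_count,
       os.page_compression_success_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.MyFactTable'), NULL, NULL) AS os
JOIN sys.indexes AS i
  ON i.object_id = os.object_id
 AND i.index_id  = os.index_id;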
I think my loads currently don't sort the data before the insert; I need to implement that.
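Something like this is what I have in mind, sorting the staging rows by the clustered index key so the new pages are filled in key order (dbo.MyFactTable, dbo.Staging, and the column names are placeholders):

-- Feed the insert in the same order as the clustered index key
INSERT INTO dbo.MyFactTable (DateKey, ProductKey, Amount)
SELECT DateKey, ProductKey, Amount
FROM dbo.Staging
ORDER BY DateKey, ProductKey;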
But any advice on how to identify the root cause of this type of issue is welcome.