Channel: SQL Server Integration Services forum

SSIS Expiration


I'm using just SSIS on a server to run a .dtsx package. It's currently an Evaluation edition of 2017 Standard. All that is installed is SSIS, no SQL Server database engine.

Q: How do I find out when my evaluation edition ends?


Export from on-prem SQL Server to Azure DB for PostgreSQL?


Greetings. Is it possible to use SSIS to export data from an on-prem SQL Server to Azure DB for PostgreSQL? If not, should Data Factory be used instead? Or write some code with a linked server? I saw this link and tried it on my laptop, but had an issue. It could be because I'm at home, while the on-prem server is at a data center and of course Azure is elsewhere. However, the link also doesn't specifically cover PostgreSQL in Azure, so who knows.
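
For illustration, here is a minimal sketch of the linked-server approach mentioned above, assuming a linked server named PGLINK has already been configured against the Azure PostgreSQL instance (for example via the PostgreSQL ODBC driver); all object names are hypothetical:

    -- Push rows from a local SQL Server table through the linked server
    -- into the remote PostgreSQL table. PGLINK and both table names are
    -- placeholders, not confirmed objects.
    INSERT INTO OPENQUERY(PGLINK, 'SELECT id, name FROM public.target_table')
    SELECT id, name
    FROM dbo.SourceTable;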


Thanks in advance! ChrisRDBA

ForEach loop container OnPostExecute event issue

Hi Gurus,

I am using a ForEach Loop Container, a Sequence Container, and 20 Execute SQL Tasks in my package. All 20 Execute SQL Tasks are in the Sequence Container, and the Sequence Container is in the ForEach Loop Container.

I have a situation where, if one Execute SQL Task fails, I want the OnError event to be fired, and on success I want the OnPostExecute event to be fired; but currently both the OnError and OnPostExecute events are being fired.


Regards
Raj

Selecting connectionManager based on IF condition and running queries


Hi.

I have 5 connection managers defined in the SSIS package. I have a variable called MyServer, which will hold the server name as input. Based on the value of MyServer, I want to select the corresponding connection manager, execute a few SELECT queries (these are also stored in variables), and export the output of each query to Excel, which the end user can download. I am not sure how to set up the condition and export the data.

An example of what I mean:

If the variable MyServer contains the value abc, then select the connection manager abc1, execute the queries stored in the variables MyQuery1, MyQuery2, MyQuery3 and MyQuery4, and export the data retrieved from each query to Excel1, Excel2, Excel3 and Excel4.

If the variable MyServer contains the value def, then select the connection manager def1, execute the same queries, and export the data in the same way.

How can I achieve this in SSIS?

Thanks

Keep accumulating values when the Excel source changes


Hello All,

I am new to SSIS. As the title says, how can I keep the values accumulating every time the Excel source changes? For example, an Excel file arrives in Outlook every day at 01:00 am; I extract it to a certain folder (using Power Automate) and the flow then deletes the file. This file has data from the day before only, not the accumulated data since the beginning of the month. I want the Aggregate component to keep counting (accumulating) the values that come in every day; the problem is that the source file changes every day. In "Aggregate 1", for example, I have a "group by" on each user with a "count" of the hundreds of logins they make per day. When the Excel file changes to 2 June, for example, I want it to keep accumulating the count. Can anyone please tell me how I can do this?
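
One common pattern for this (a sketch only; the table and column names below are hypothetical) is to land each day's file in a staging table and then fold the daily counts into a running monthly table with a MERGE, so the totals survive the daily change of source file:

    -- Fold today's per-user login counts into a running monthly total.
    -- dbo.StagingLogins and dbo.MonthlyLoginCounts are placeholder names.
    MERGE dbo.MonthlyLoginCounts AS t
    USING (SELECT UserName, COUNT(*) AS DailyLogins
           FROM dbo.StagingLogins
           GROUP BY UserName) AS s
        ON t.UserName = s.UserName
    WHEN MATCHED THEN
        UPDATE SET t.Logins = t.Logins + s.DailyLogins
    WHEN NOT MATCHED THEN
        INSERT (UserName, Logins) VALUES (s.UserName, s.DailyLogins);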

Best Regards,

David



[Data Flow Task][OnError][Move File to Error Folder][File looks still in use by the package]


Hi,

I want to move a file to an error folder in case of an error while loading, so I put a File System Task in the OnError event handler with the Data Flow Task as the executable, but I am getting the following error.

[File System Task] Error: An error occurred with the following error message: "The process cannot access the file because it is being used by another process.".

It means the file is still in use by the Flat File Source. Can someone guide me on how to release the file in case of an error, so it can be moved to the error folder?


Many Thanks, Muhammad Yasir

Create Catalog in SSIS greyed out


I am trying to deploy an .ispac file, but the option to create a catalog is greyed out and I cannot create one. Any ideas on how to resolve this?





SSMS 2016 will not Connect to Integration Services on SQL Server 2016


We have a SQL Server 2016 server with Integration Services 13.0 running on it. We are not using the Integration Services Catalog. On my workstation, in SSMS 2016 (16.5.3), when I attempt to connect to Integration Services on the server, I get: Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)

TITLE: Connect to Server
------------------------------
Cannot connect to -------.
------------------------------
ADDITIONAL INFORMATION:

Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&LinkId=20476
------------------------------

Connecting to the Integration Services service on the computer "---" failed with the following error: "The specified service does not exist as an installed service.".

This error can occur when you try to connect to a SQL Server 2005 Integration Services service from the current version of the SQL Server tools. Instead, add folders to the service configuration file to let the local Integration Services service manage packages on the SQL Server 2005 instance.
For help, click: http://go.microsoft.com/fwlink/?LinkId=506689
------------------------------
Connecting to the Integration Services service on the computer "---" failed with the following error: "The specified service does not exist as an installed service.".

This error can occur when you try to connect to a SQL Server 2005 Integration Services service from the current version of the SQL Server tools. Instead, add folders to the service configuration file to let the local Integration Services service manage packages on the SQL Server 2005 instance.

------------------------------

I cannot find any reason for SSMS 2016 to be unable to connect to Integration Services on a SQL Server 2016 server.

I have reinstalled SSMS 2016, and also installed it on a machine with a clean image applied, and I still receive the error.

These are the SSMS versions, from Help / About:

Microsoft SQL Server Management Studio: 13.0.16106.4
Microsoft Analysis Services Client Tools: 13.0.1700.441
Microsoft Data Access Components (MDAC): 10.0.18362.1
Microsoft MSXML: 3.0 6.0
Microsoft Internet Explorer: 9.11.18362.0
Microsoft .NET Framework: 4.0.30319.42000
Operating System: 6.3.18363

Where else can I look to find the reason for this error?


Export large amount of data to Excel


Hi.

I have 4 tables which contain a large volume of data. I need to transfer the data to Excel.

What is the best way to do this? Should it be done in plain C# code, or would it be better to use SSIS?

I cannot use a third-party DLL. What is the best way?

Thanks

Job never ends


Hello all,

I have the following issue in SQL server 2008:

I am trying to calculate the cost of goods per warehouse and per product using the FIFO method, and to store the results in a table.

I have a table called ITEMTRANSACTIONS, which has the transactions for all the products and all the warehouses, and another table called PRODUCTS with the transactions for some of the products.

PRODUCTS table structure (sample rows):

DATE                    | DATETIME                | ID                                   | WAREHOUSE | ITEMCODE | TRANSACTION | QTY              | PRICE
2020-03-24 00:00:00.000 | 2020-03-24 13:55:55.080 | B2467C47-69D9-C3A9-76D5-017116B2E4BA | MAIN      | 10001    | Purchase    | 940.000000000000 | 0.600000000000
2020-03-18 00:00:00.000 | 2020-03-18 15:36:31.440 | 4C9B5554-A7ED-E9AF-B7BB-01711A539B21 | SECOND    | 10002    | Sale        | 280.000000000000 | 0.298490000000
2020-03-27 00:00:00.000 | 2020-03-27 06:57:12.373 | 1365E69A-D7C4-1D8E-73AD-01711A539B28 | MAIN      | 20001    | Sale        | 3.000000000000   | 10.650000000000

I have created a job called FIFO_MAIN, which has the following steps:

  • Step 1 to 6: Updating some columns in the ITEMTRANSACTIONS table.
  • Step 7: Update the data in the PRODUCTS table (using MERGE), using the ITEMTRANSACTIONS table as the source.
  • Step 8: Delete the data in the RESULTS table.
  • Step 9: Calculate values using data from the PRODUCTS table and enter results in the RESULTS table (using INSERT INTO).
  • Step 10: Update the data type of some columns in the RESULTS table.
  • Step 11: Update the FIFO price column in the PRODUCTS table (using UPDATE ... SET).

In step 9 I use a WITH clause in which I create the following (a structural sketch follows this list):

  • sample_data, where I pull the data from the table. I use the WHERE clause to take only the rows related to the MAIN warehouse.
  • Table_1 (which pulls the data from sample_data), Table_2 and Table_3, each of which uses data from the previous one to calculate some columns.
  • The WITH clause closes and feeds an INSERT INTO (columns of the RESULTS table) ... SELECT (columns of Table_3), plus 2 calculated columns, which are a simple sum and a multiplication.
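
For clarity, here is a minimal structural sketch of step 9 as described above; the derived-column logic and the RESULTS column names are placeholders, not the actual job code:

    -- Structural sketch of step 9: chained CTEs feeding an INSERT.
    WITH sample_data AS (
        SELECT WAREHOUSE, ITEMCODE, QTY, PRICE
        FROM PRODUCTS
        WHERE WAREHOUSE = 'MAIN'
    ),
    Table_1 AS (
        SELECT WAREHOUSE, ITEMCODE, QTY, PRICE,
               QTY * PRICE AS LineValue   -- example derived column
        FROM sample_data
    ),
    Table_2 AS (
        SELECT WAREHOUSE, ITEMCODE, QTY, PRICE, LineValue
        FROM Table_1                      -- further calculations in the real job
    ),
    Table_3 AS (
        SELECT WAREHOUSE, ITEMCODE, QTY, PRICE, LineValue
        FROM Table_2                      -- further calculations in the real job
    )
    INSERT INTO RESULTS (Warehouse, ItemCode, Qty, Price, LineValue, SumCol, ProductCol)
    SELECT WAREHOUSE, ITEMCODE, QTY, PRICE, LineValue,
           QTY + LineValue,               -- the "simple sum" column
           QTY * PRICE                    -- the "multiplication" column
    FROM Table_3;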

I ran the FIFO_MAIN job at night on a daily schedule. Each night it took at most 2 hours and 36 minutes in total to complete, and the results were stored in the RESULTS table as expected.

The issue is:

I had to create another job (FIFO_SECOND), which also pulls data from the PRODUCTS table, but this time for the SECOND warehouse. All steps were the same, except for step 9, where the WHERE clause in sample_data is WAREHOUSE = 'SECOND' instead of WAREHOUSE = 'MAIN'. This time, I use the RESULTS_SECOND table for the output results. The first time the FIFO_SECOND job ran, it got stuck when it reached step 9. It appeared "in progress" in the job history for many hours, until I forced it to stop.

What I tried:

I tried clearing the cache and running the FIFO_SECOND job again, but it still got stuck at step 9. I also ran the 2 jobs separately, but I still had the same result.

In the FIFO_MAIN job, which has no issues, I tried replacing the MAIN warehouse with SECOND in the WHERE clause, since this is the only difference, and it got stuck at step 9 again. When I changed it back to how it was at the beginning, the job ran normally again.

It looks like it has to do with the change in step 9. However, when I made changes to other steps, e.g. in step 2, where I update a column of the ITEMTRANSACTIONS table, I changed the calculation method and the job never got stuck at step 2.

What could be the reason for this never-ending job?

Is there a chance that having a different source for the sample_data table in these 2 jobs is causing an issue? And/or is it because I am using the same source table (PRODUCTS)?

Any help or hint will be very much appreciated. Thank you in advance!

Getting warning: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED


Hi all,

I'm a beginner with SSIS. When I executed a package from SSIS, I got the warning: "SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors."

Then I tried updating MaximumErrorCount to 4, but I got the warning again.

My test table (it has 17 rows; this is a part of it):

Customer_code | Salary
1A            | 123
23A           | 12
3A            | 10
3A            | 15
5D            | 5
6T            |

Conditional split: Salary > 9. When it finished, I received only 2 rows (which were imported to SQL Server) instead of 4 rows. Please help me solve it.
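
For reference, the split's intent expressed in T-SQL against a hypothetical staging copy of the test table would be the following; note that TRY_CAST returns NULL for non-numeric or missing salaries (such as the blank one for customer 6T), so those rows are filtered out instead of raising conversion errors:

    -- The conditional split's condition, as a T-SQL filter.
    -- dbo.TestStaging is a placeholder table name.
    SELECT Customer_code, Salary
    FROM dbo.TestStaging
    WHERE TRY_CAST(Salary AS int) > 9;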

Thank you very much,

HongNgoc


Adding a column to say Yes if a row matches the previous month's data


Hello -

I have an SSIS package that I run on a schedule every month. A new requirement is to add a column to the destination table, OnPreviousMonth?, with the value Yes or No. The condition: if the ClaimId is in last month's data file, then the destination column OnPreviousMonth? should be "Yes", otherwise "No".

How do I achieve this? I'm confused about which transformation or process I should use.

Here is the query I have:

select
[Prod Line],
[LOB],
[Policy],
[Insured Name],
[Claim#],
[Claim Seq#],
[Claimant Name],
[On Previous Report?],
[Benefit Type],
[Previously Med Only],
[Ever Med Only],
[Last Trans Date],
[Injury Date],
[Claim Setup Date],
[Loss Setup Date],
[Closed Date],
[Reopened Date],
[Days Open],
[Litigation],
[Arbitration],
[Lien],
[Status],
[Adjuster Ofc],
[Adjuster Name],
[Adj Supervisor],
[Total Incurred Categories],
[Total Incurred],
[Total Pymts],
[Total Resrvs],
[Total Recvrs],
[Indemnity Incurred],
[Medical Incurred],
[Legal Incurred],
[Expense Incurred],
[Other Incurred],
[Loss Incurred],
[ALAE Incurred],
[ULAE Incurred],
[Indemnity Pymts],
[Medical Pymts],
[Legal Pymts],
[Expense Pymts],
[Other Pymts],
[Loss Pymts],
[ALAE Pymts],
[ULAE Pymts],
[Indemnity Resv],
[Medical Resv],
[Legal Resv],
[Expense Resv],
[Other Resv],
[Loss Resv],
[ALAE Resv],
[ULAE Resv],
[Indemnity Rcvrs],
[Medical Rcvrs],
[Legal Rcvrs],
[Expense Rcvrs],
[Other Rcvrs],
[Loss Rcvrs],
[ALAE Rcvrs],
[ULAE Rcvrs],
[RunDate],
[AsOfDate]
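
One way to express the requirement in T-SQL (a sketch only; dbo.Claims_Current and dbo.Claims_Previous are hypothetical names standing in for the current and prior month's data) is a LEFT JOIN on the claim key:

    -- Mark each current-month claim 'Yes' if the same Claim#/Claim Seq# pair
    -- appeared in last month's file, otherwise 'No'. Table names are placeholders.
    SELECT c.[Claim#],
           c.[Claim Seq#],
           CASE WHEN p.[Claim#] IS NOT NULL THEN 'Yes' ELSE 'No' END AS [OnPreviousMonth?]
    FROM dbo.Claims_Current AS c
    LEFT JOIN dbo.Claims_Previous AS p
           ON p.[Claim#] = c.[Claim#]
          AND p.[Claim Seq#] = c.[Claim Seq#];

In SSIS terms, the same check could be done with a Lookup transformation against last month's data, with the match and no-match outputs setting the new column.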

Change name of Excel file


Hi.

I am trying to export data to Excel. In the Excel connection manager, I can see that a predefined Excel file path and file name have to be given.

How can I change the file name whenever an export is made?

Thanks

Getting error on delete when the table does not have data

Hi, I have a doubt in SSIS.

I want to delete the data in the target server (Postgres) tables using an SSIS package.

Database: Postgres server
Table: emp
In the Execute SQL Task the script is: delete from emp, and the connection used is ODBC.

When I run the Execute SQL Task in the SSIS package, if the emp table has data then it works fine, but I get the following error when the emp table does not have data:

[Execute SQL Task] Error: Executing the query "delete from  emp
usin..." failed with the following error: "Error HRESULT E_FAIL has been returned from a call to a COM component.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

If I run the same query in the pgAdmin tool, it does not raise any error, even when the emp table does not have data.

Could you please tell me how to avoid this issue in the SSIS package?
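
As an illustration only (not a confirmed fix for the ODBC behaviour), the delete can be guarded on the PostgreSQL side so it only runs when rows actually exist:

    -- PL/pgSQL sketch: only issue the DELETE when emp has rows.
    DO $$
    BEGIN
        IF EXISTS (SELECT 1 FROM emp) THEN
            DELETE FROM emp;
        END IF;
    END $$;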

MySQL/ODBC write performance


I am writing records to a MySQL table using an ODBC Destination, and the performance is appalling. I've set the batch size to 1000, 10000 and 100000, but I can see that the row counter on the input feed always stops in 9777-row increments. If I use MySQL as the *source*, performance is great and I don't see any delays or batching.

Has anyone got any ideas how to resolve this? I'm guessing there's a setting in the driver or configuration that needs changing but I haven't found anything obvious.

Failing that, does anyone have any recommendations for writing to MySQL from SSIS in a performant way?

Thanks.

Data Viewer not working anymore after upgrade Visual Studio 2017 to Visual Studio 2019


Hi,

Before the upgrade, with Visual Studio 2017 Enterprise, all my data viewers worked very well. A few weeks ago I installed Visual Studio 2019 Professional, and now the data viewer is not working in any situation. The flow does not stop at the Data Viewer. I did a few searches on Google for developers with the same problem, and the only recommendation I saw was: "Install the latest update of SSIS". I have already done this, and it does not work.

My environment:

Visual Studio 2019 Professional 16.Release/16.4.5+29806.167

My SSIS Target is SQL Server 2014 

SQL Server Integration Services   15.0.2000.94

SQL Server Data Tools   16.0.62002.03150

Microsoft .NET Framework Version 4.7.03056

Can someone help me?

Best Regards,

Luis

Email log


Hi,

I have an SSIS package that sends email to users. The users keep complaining that they didn't receive the emails. I want to review the log, but I don't know where to find the email log. Can you help, please?
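
If the emails go out through SQL Server Database Mail (rather than SSIS's SMTP-based Send Mail Task, which only logs through normal package logging), the send history can be inspected in msdb, for example:

    -- Database Mail send history and errors (only applies if Database Mail,
    -- not the Send Mail Task, is doing the sending).
    SELECT TOP (100) mailitem_id, recipients, subject, sent_status, sent_date
    FROM msdb.dbo.sysmail_allitems
    ORDER BY send_request_date DESC;

    SELECT TOP (100) log_id, event_type, log_date, description
    FROM msdb.dbo.sysmail_event_log
    ORDER BY log_date DESC;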

Thanks

Data Source [2]: Cannot acquire a managed connection from the run-time connection manager (Visual Studio 2019)


Last week, our SSIS packages suddenly started to have this problem with the OData Source. The detailed error message is as follows:

SSIS package "C:\Code\Integration Services Project3\Integration Services Project3\Package1.dtsx" starting.
Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.
Error: 0xC020801F at Data Flow Task, OData Source [2]: Cannot acquire a managed connection from the run-time connection manager.
Error: 0xC0047017 at Data Flow Task, SSIS.Pipeline: OData Source failed validation and returned error code 0xC020801F.
Error: 0xC004700C at Data Flow Task, SSIS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Data Flow Task: There were errors during task validation.
SSIS package "C:\Code\Integration Services Project3\Integration Services Project3\Package1.dtsx" finished: Success.
The program '[56748] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).

To reproduce this error, just create an SSIS project (with either VS2019 or VS2017), add an OData Connection Manager using this OData feed: https://services.odata.org/OData/OData.svc/, then add a Data Flow; in the Data Flow, add an OData Source which uses the connection manager just created. Then execute the task. I have tried this simple project on a few computers, and the above error output is very repeatable.

The problem could well be my SSIS installation. This is how I installed SSIS: in VS2019 Manage Extensions, I installed SQL Server Integration Services Projects. Did I miss anything else?

This issue seems pretty old. I have searched this forum and Stack Overflow and studied their solutions, but I have had no luck yet. This could be something simple that I have overlooked.

Thank you for any help.



CDC Not configured properly


Hi All, 

I am trying to configure my first CDC setup using the docs.microsoft.com example; however, I have some configuration problems.

I would like to know how I can upload the folder or files. These are training materials; there are no security issues, as they are from the Microsoft documentation example.
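
For context, enabling CDC in the documented walkthrough comes down to the standard system procedures shown below; the database and table names are placeholders, not the exact ones from the Microsoft example:

    -- Enable CDC at the database level, then on a source table.
    -- MyDatabase and dbo.Customers are placeholder names.
    USE MyDatabase;
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Customers',
        @role_name     = NULL;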

