Channel: SQL Server Integration Services forum

Individual SSIS package deployment in SQL Server 2012


Hi,

I am developing SSIS packages in SQL Server 2012.

I want to know how we can deploy an individual SSIS package: if I edit one package, I want to deploy only that package rather than redeploying the entire project.
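For context, in the SSIS 2012 project deployment model the catalog only accepts whole-project (.ispac) deployments; incremental deployment of a single package was not added until a later release, so the usual alternatives are the package deployment model (deploying individual .dtsx files with dtutil) or redeploying the project. A minimal T-SQL sketch of a full-project deployment to the catalog, with placeholder folder, project, and file names:

-- Sketch only: deploys a whole .ispac to the SSIS catalog (SQL Server 2012 cannot
-- deploy a single package into a project-deployment-model project).
-- Folder, project, and file names below are placeholders.
DECLARE @ProjectBinary varbinary(max) =
    (SELECT BulkColumn
     FROM OPENROWSET(BULK N'C:\Deploy\MyEtlProject.ispac', SINGLE_BLOB) AS ispac);
DECLARE @OperationId bigint;

EXEC SSISDB.catalog.create_folder @folder_name = N'ETL';  -- skip if the folder already exists

EXEC SSISDB.catalog.deploy_project
     @folder_name    = N'ETL',
     @project_name   = N'MyEtlProject',
     @project_stream = @ProjectBinary,
     @operation_id   = @OperationId OUTPUT;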

Please help me on this.

Regards,

Ramu


Ramu Gade


Can you minimize logging in an SSIS data flow when the database is in a SQL Server 2012 AlwaysOn Availability Group?


We have a file, containing over 5 million rows, that we load first into a staging database and then into a production database. Both databases belong to a SQL Server 2012 AG. We would like to minimize the logging in the staging database but at the same time keep the staging database in the AG. I know about fast load and setting the buffer settings in SSIS, but I've read that this doesn't work on replicated tables, and I am assuming that applies to the AG as well.

Are there any articles, or does anyone have personal experience with this type of scenario, that could point us in the right direction and help offset some of the initial load into staging by minimizing logging?
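One thing worth confirming up front: databases in an availability group must use the FULL recovery model, which by itself rules out minimally logged bulk loads in the staging database while it stays in the AG. A quick T-SQL check (the database name is a placeholder):

-- Availability group databases must use FULL recovery, which rules out minimal logging.
-- 'Staging' is a placeholder database name.
SELECT d.name,
       d.recovery_model_desc,
       ag.name AS availability_group
FROM sys.databases AS d
LEFT JOIN sys.dm_hadr_database_replica_states AS drs
       ON drs.database_id = d.database_id
      AND drs.is_local = 1
LEFT JOIN sys.availability_groups AS ag
       ON ag.group_id = drs.group_id
WHERE d.name = N'Staging';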

Thanks,

Sue


Sue

Best practice with respect to WCF configuration files for SSIS


After reading a lot of posts and blogs on how to configure SSIS to read from configuration files, I am still not clear and would like an expert to provide a definitive stance. In my case the WCF service consumption is wrapped in a separate assembly, which I reference from an embedded C# script within my SSIS package.

When I make the helper class call to the web service, I get the "endpoint not found" WCF exception.

Keep in mind I am running this from the VS 2012 IDE, and I did the following to make sure the WCF call works:

1. Found (via searching) that you need to have the config entries in the DtsDebugHost.exe.config file, but it still did not work.

2. Put the same entries in the app.config file associated with the C# script, but it still did not work.

It seems like SSIS is very fragile with respect to consuming WCF entries in a config file. Is the best practice to just create the endpoint in code and externalize the endpoint address as an SSIS variable / XML file, or is there really a way to get these config files working?

Attached is the wcf snippet of my config file.

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="ITransactionProcessor">
        <security mode="TransportWithMessageCredential" />
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor"
              binding="basicHttpBinding"
              bindingConfiguration="ITransactionProcessor"
              contract="CyberSource.ITransactionProcessor"
              name="portXML" />
  </client>
</system.serviceModel>


SM

How do I stop the Aggregate transform from doing inappropriate rounding


We have a client that is sending us files where amounts are in different records like this:

clientID, productID, amount
123,1,24.00
123,2,28.00
123,3,29.00

We then take these three records and turn them into one record

ClientID, product1Amt, product2Amt, product3Amt
123,24.00, 28.00, 29.00

The dev before me did this by using the Aggregate transform. The problem is that the transform is somehow adding extra digits that didn't exist before. Amounts will go in looking like this:

3333.33

and come out of the transform looking like this

3333.330078125

The column comes off the flat file as a float. The destination table column is of data type money. When it hits the table, it turns into:

3333.3301

In some cases I've seen it turned into:

3333.3299

Neither is acceptable but I don't know how to control it. Any suggestions?
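For what it's worth, the extra digits are the classic symptom of the amount being carried as a single-precision float (DT_R4), which cannot represent 3333.33 exactly; defining the flat-file column (and the Aggregate's output) as a decimal or currency type avoids it. A small T-SQL illustration of the same effect, assuming the column is being read as a 4-byte real:

-- The nearest single-precision value to 3333.33 is 3333.330078125,
-- and converting that to money rounds to 3333.3301 (the value seen in the table).
SELECT CAST(CAST(3333.33 AS real) AS money)  AS via_real,     -- 3333.3301
       CAST(3333.33 AS decimal(10, 2))       AS via_decimal;  -- 3333.33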

Data Flow Task BufferSize Tuning


Hello everyone,

I have encountered a problem while tuning the performance of a Data Flow Task (DFT) recently. The DFT uses a Flat File Source and an OLE DB Destination to load data into a staging table; apart from the Flat File Source itself, it contains no asynchronous transformations.

The package runs in 64-bit runtime mode. The DFT takes about 50 minutes to process 40 million - 50 million rows.

By enabling the BufferSizeTuning log event for the DFT, I am able to obtain buffer-related information. First, I use the default settings for DefaultBufferMaxRows (10000) and DefaultBufferSize (10MB). However, in the package log there are both 10MB buffers and 64KB buffers allocated. Please see the following log entries:

User:BufferSizeTuning,TestDW,Test\administrator,Insert into Staging Table and Production Table,{feb3e0dc-f5db-400f-87f5-1561d93bc30f},{FFFC0B10-7BC3-43BC-85AC-6557CA654FA0},2014/12/15 19:34:45,2014/12/15 19:34:45,0,0x,Rows in buffer type 0 would cause a buffer size greater than the configured maximum. There will be only 3901 rows in buffers of this type.

……

User:BufferSizeTuning,TestDW,Test\administrator,Insert into Staging Table and Production Table,{feb3e0dc-f5db-400f-87f5-1561d93bc30f},{FFFC0B10-7BC3-43BC-85AC-6557CA654FA0},2014/12/15 19:34:46,2014/12/15 19:34:46,0,0x,Rows in buffer type 9 would cause a buffer size less than allocation minimum, which is 65536 bytes. There will be 1638 rows in buffers of this type.

…….

User:BufferSizeTuning,TestDW,Test\administrator,Insert into Staging Table and Production Table,{feb3e0dc-f5db-400f-87f5-1561d93bc30f},{FFFC0B10-7BC3-43BC-85AC-6557CA654FA0},2014/12/15 19:34:47,2014/12/15 19:34:47,0,0x,Rows in buffer type 10 would cause a buffer size less than allocation minimum, which is 65536 bytes. There will be 2730 rows in buffers of this type.

……

Buffer manager is throttling allocations to keep in-memory buffers around 850MB.

……

I also enabled Performance Monitor while the package runs, and I find that the number of buffers in use is around 19,000, and the number of buffers spooled is a little larger than 19,000. I understand that this happens because of RAM pressure, and the buffers are spooled to disk.

Now the questions are: 

  1. The flat file source doesn't have any BLOB-type columns, and the size of different rows should be roughly the same. Why are some buffers 10MB while others are 64KB?
  2. For the 64KB buffers (which implies that estimated row size * 10000 < 64KB), shouldn't they hold more than 10000 rows? Why are only 1638 or 2730 rows filled into the buffer?
  3. When I change DefaultBufferMaxRows or DefaultBufferSize, it only affects the number of rows filled into the large buffers (I understand this is because the size of those buffers changes). There are still 64KB buffers allocated, so I am still not clear why and how the 64KB buffers are created. In case it was because few large (10MB) memory chunks were available for allocation, I also set DefaultBufferSize to 1MB or 5MB, but that doesn't change the 64KB buffer allocations.
  4. Why is the memory used by all the buffers limited to 800-900MB? I suspect that most of the buffers are 64KB and the SSIS pipeline caps the number of buffers, so the total size of the buffers ends up even less than 1000MB.
  5. There is another package running at the same time when this package runs as a job. I changed the BufferTempStoragePath of this package to a folder other than the TEMP folder, but it doesn't seem to affect the package performance much.
  6. Could you give some suggestions to improve the performance of the package significantly without increasing the physical memory?

Thank you very much for spending some time reviewing this scenario and clearing up some of my doubts or giving some suggestions.


Mike Yin
TechNet Community Support

SQL UNION ALL vs SSIS Union All


I have four source SQL queries pointing at different databases on the same server. The results of all four queries need to be stored in a single SQL Server table on a different server. Which of the approaches below is the best?

1) Write a stored procedure that combines all four source queries with UNION ALL, and use that procedure in a single data flow task (one source and one destination); a minimal sketch of such a procedure follows this list.

2) Use four source components to execute the four SQL queries, then a Union All transformation to combine the results, and one destination.

3) Use four different data flow tasks, each pointing at a different source query and the same destination.

Please advise, considering all aspects.
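For option 1, the stored procedure would simply be the four queries glued together with UNION ALL. This is only a sketch; the database, table, and column names are placeholders:

-- Sketch of the option-1 stored procedure; names are placeholders.
CREATE PROCEDURE dbo.usp_GetCombinedRows
AS
BEGIN
    SET NOCOUNT ON;

    SELECT Id, Amount, N'DB1' AS SourceDb FROM DB1.dbo.SomeTable
    UNION ALL
    SELECT Id, Amount, N'DB2' AS SourceDb FROM DB2.dbo.SomeTable
    UNION ALL
    SELECT Id, Amount, N'DB3' AS SourceDb FROM DB3.dbo.SomeTable
    UNION ALL
    SELECT Id, Amount, N'DB4' AS SourceDb FROM DB4.dbo.SomeTable;
END;

All three approaches end up with a single insert stream into the destination table; the main difference is where the combining work happens (on the source server versus in the SSIS pipeline).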

Column "A" cannot convert between unicode and non-unicode string data types

I am following the SSIS overview video:
https://secure.cbtnuggets.com/it-training-videos/series/microsoft-sql-server-2008-business-development/6143?autostart=true
I have a flat file whose contents I want to import into a SQL database.
I created a data flow task with a flat file source and an OLE DB destination.
I am getting the following error:
"column "A" cannot convert between unicode and non-unicode string data types"
In the source file the data type comes through as string [DT_STR], and in the destination object it is "Unicode string [DT_WSTR]".
I used a Data Conversion component in between, but it doesn't work very well.
Please help; what should I do?
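As background, DT_STR corresponds to varchar and DT_WSTR to nvarchar, so this error usually means the destination column is nvarchar while the flat-file column is defined as a non-Unicode string. One way out is to align the destination column type with the file (sketched below with placeholder names); the other is a Data Conversion or Derived Column cast to DT_WSTR on the SSIS side.

-- DT_STR maps to varchar and DT_WSTR maps to nvarchar.
-- Table and column names are placeholders.
CREATE TABLE dbo.ImportTarget
(
    A varchar(50)  NULL,   -- accepts a DT_STR flat-file column directly
    B nvarchar(50) NULL    -- would need a DT_STR -> DT_WSTR conversion first
);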

How to create and update Accounts, Users and Contacts in CRM 2013 using an SSIS Script Component?


Hi all,

I am trying to create and update records for Accounts, Users and Contacts in CRM 2013 using an SSIS Script Component (with the CRM 2013 SDK). Can anyone please help me with how to proceed, as I am new to this task?


Thanks & Regards, Anil


Distance calculation with MapPoint and SQL Server Integration Services

How do I use MapPoint 2011 with SSIS 2012 to calculate the distance between two ZIP codes? Is there any way to calculate the distance in SSIS?

Need to Convert .xls file to .xlsx using SSIS


Hello,

I am facing a strange issue while loading an .xls file using SSIS 2008. When I convert that file to .xlsx, I am able to load it without any issues; likewise, if I edit the same file in .xls format and save it again as .xls, I can also load it. Can you please guide me on how I can convert the file, or edit the file and save it back, as part of the process?

Thanks

Avoid Multiple source reads in SSIS


Hi ,

I am new to SSIS. I have 5 tables that populate the data for 7 dimensions. The source tables are huge, so the table scans are taking too long. Is there any way I can read those 5 tables only once and use them for multiple dimensions, instead of reading the same table two or three times?
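One common pattern (not the only one, and only a sketch with placeholder names) is to land each large source table once into a local staging copy and point all the dimension loads at that copy, so the expensive scan happens a single time:

-- Scan the large source table once and reuse the staging copy for every dimension load.
-- Schema, table, and column names are placeholders.
IF OBJECT_ID(N'stg.Customer', N'U') IS NOT NULL
    DROP TABLE stg.Customer;

SELECT CustomerId, CustomerName, Region, ModifiedDate
INTO stg.Customer
FROM src.Customer;

Inside a single data flow, the Cache Transform and Cache Connection Manager can play a similar role for lookup-style reuse.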

Regards,


Venkata Koppula

Data quality in SSIS


Hi,

How do you handle data quality using SSIS, and what is meant by "quality of data"?

The requested OLE DB provider Microsoft.ACE.OLEDB.12.0 is not registered


Hi guys, I have a problem with a SQL Agent job and a package. The package is in a folder on the same server as my SQL instance. If I run the package manually, everything is OK (I am moving one Excel file). If I run the package from SQL Agent, it returns "The requested OLE DB provider Microsoft.ACE.OLEDB.12.0 is not registered". I looked into previous posts that talk about 32-bit vs 64-bit, and I have already set the package to run in 32-bit mode, but what I am wondering is: why does the package work when I run it manually, yet return that error when I run it from SQL Agent?

Thanks 

SSIS Missing Package Configurations using Visual Studio 2010 Shell


I'm not sure what I am missing.

I'm creating a new SSIS package using the Visual Studio 2010 Shell that was installed together with SQL Server Management Studio. I cannot find the Package Configurations option to launch the wizard.

Things I have tried:

1) Right-clicking on the Control Flow design surface. In SQL Server BIDS 2008 I can do that, and the Package Configurations ... option is there.

2) From the SSIS menu. The Package Configurations ... option is not there either.

Is it my Visual Studio 2010 installation? Or am I doing something wrong?

Edit: I have also tried http://msdn.microsoft.com/en-us/library/kwybya3w(v=vs.100).aspx - To open the Configuration Manager dialog box. The only thing I see under Build is Build <prjname>.


Rename Excel Tab


Hi Guys,

Need urgent help. 

I am working on a project in SSIS. It is a simple package: read a file from Excel, do some transformation, and insert the data into a SQL table (very simple). The problem is that the Excel file has a differently named tab every time. So what I am doing is using a Script Task (VB 2008) to rename the Excel tab to "Sheet1", and that works fine on my local machine, where Microsoft Office is installed; my VB code is below. Now I have to move this package to the Dev environment. Unfortunately, my Dev and Prod servers don't have MS Office installed, so I am having a problem running the rename of the Excel sheet there.
Two questions:
1) Is there another way to rename, or read, a dynamic Excel tab from the Excel source?
2) Is there a way to open an Excel file, rename the sheet, and save the file in VB without MS Office installed?

I am using SQL Server 2008 R2.

Please advise.

Thank You.


Imports System
Imports System.Data
Imports Microsoft.SqlServer.Dts.Runtime
Imports Microsoft.Office.Interop.Excel


<System.AddIn.AddIn("ScriptMain", Version:="1.0", Publisher:="", Description:="")> _
<System.CLSCompliantAttribute(False)> _
Partial Public Class ScriptMain
    Inherits Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase

    Enum ScriptResults
        Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success
        Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
    End Enum

    Public Sub Main()
        ' Read the full path of the workbook from the SSIS variable User::FileName.
        Dim variablesList As Variables = Nothing
        Dts.VariableDispenser.LockForRead("User::FileName")
        Dts.VariableDispenser.GetVariables(variablesList)
        Dim laPath As String = variablesList("User::FileName").Value.ToString()
        variablesList.Unlock()

        ' Excel automation requires Microsoft Office on the machine that runs the
        ' package, which is why this approach fails on servers without Office.
        Dim oMissing As Object = System.Reflection.Missing.Value
        Dim xl As New Microsoft.Office.Interop.Excel.ApplicationClass()
        Dim xlBook As Microsoft.Office.Interop.Excel.Workbook = Nothing
        Dim xlSheet As Microsoft.Office.Interop.Excel.Worksheet

        Try
            ' Open the workbook, rename its first worksheet, and save the change.
            xlBook = DirectCast(xl.Workbooks.Open(laPath, oMissing, oMissing, oMissing, oMissing, oMissing, _
                                                  oMissing, oMissing, oMissing, oMissing, oMissing, oMissing, _
                                                  oMissing, oMissing, oMissing), Workbook)
            xlSheet = DirectCast(xlBook.Worksheets.Item(1), Worksheet)
            xlSheet.Name = "data"
            xlBook.Save()
        Finally
            ' Close the workbook and quit Excel so no orphaned EXCEL.EXE process is left behind.
            xl.Application.Workbooks.Close()
            xl.Quit()
        End Try

        Dts.TaskResult = ScriptResults.Success
    End Sub

End Class


Displaying the Error column name

$
0
0

I have a flat text file.  All the columns are set to redirect on error.  But when a row is redirected, the error output only gives the error column identifier (ErrorColumn). Is there a way to display the real column name, i.e. the column which has the error?

Thanks.

 

Convert SQL script with JOINs to SSIS


I've been tasked with converting several existing SQL scripts into SSIS packages. While some of the scripts are fairly straightforward, others contain multiple JOINs across multiple tables. While I have always just used a simple Execute SQL Task or an OLE DB source with a script, I've been asked to do away with any references to other tables/databases in the source. Here is an example of one of the scripts:

INSERT INTO [dbo].[DimEngagementVehicle]
           ([EngagementVehicleSourceKey]
           ,[EngagementVehicleCode]
           ,[EngagementVehicleDesc]
           ,[EngagementVehicleCategorySourceKey]
           ,[EngagementVehicleCategoryCode]
           ,[EngagementVehicleCategoryDesc]
           ,[EngagementVehicleTypeSourceKey]
           ,[EngagementVehicleTypeCode]
           ,[EngagementVehicleTypeDesc]
           ,[GlobalEngagementVehicleKey]
           ,[GlobalEngagementVehicleDesc]
		   ,[GlobalEngagementVehicleTypeKey]
           ,[GlobalEngagementVehicleTypeDesc]
           ,[RowIsCurrent]
           ,[RowStartDate]
           ,[RowEndDate]
           ,[RowChangeReason]
           ,[InsertAuditKey]
           ,[UpdateAuditKey]
           )
 SELECT		 hs.health_screening_key
			,hs.source_key
			,hs.health_screening_desc
			,s.source_key
			,s.source_code
			,s.source_desc
			,v.rbh_vendor_key
			,v.rbh_vendor_code
			,v.rbh_vendor_name
			,3  AS [GlobalEngagementVehicleKey] -- 'Health Screening '
			,'Health Screening ' AS [GlobalEngagementVehicleDesc] -- 'Health Screening '
			,2 AS [GlobalEngagementVehicleTypeKey] -- 'Active Engagement'
			,'Active Engagement' AS [GlobalEngagementVehicleTypeDesc]
			,'Y' AS [row_is_current]
			,getdate() AS [row_start_date]
			,'12/31/9999' AS [row_end_date]
			,'N/A' AS [row_change_reason]
			,-1 AS [insert_audit_key]
			,-1 AS [update_audit_key]
FROM [RBH_DW_STAGE].[dbo].health_screening hs
INNER JOIN [RBH_DW_STAGE].[dbo].[source] s on hs.source_key = s.source_key
LEFT JOIN [RBH_DW_STAGE].[dbo].[rbh_vendor] v ON v.rbh_vendor_code = hs.vendor_screening_id
WHERE NOT EXISTS
(SELECT 1 FROM [DimEngagementVehicle]
	WHERE [GlobalEngagementVehicleTypeKey] = 2
	AND [GlobalEngagementVehicleKey] = 3
	AND [EngagementVehicleSourceKey] = hs.health_screening_key)
GO

I know that either a Lookup or a Merge Join could be used here, but I'm not quite sure how to implement it in a package.
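For what it's worth, a Lookup-based version of this data flow would typically use one Lookup per JOIN plus one more to reproduce the NOT EXISTS; the reference queries would be along these lines (same tables as the script above, sketch only):

-- Lookup replacing the INNER JOIN to [source] (join on source_key)
SELECT source_key, source_code, source_desc
FROM [dbo].[source];

-- Lookup replacing the LEFT JOIN to [rbh_vendor] (join on rbh_vendor_code);
-- configure the no-match behaviour so unmatched rows pass through with NULLs,
-- which preserves the LEFT JOIN semantics.
SELECT rbh_vendor_key, rbh_vendor_code, rbh_vendor_name
FROM [dbo].[rbh_vendor];

-- Lookup replacing the WHERE NOT EXISTS check: rows that DO find a match here
-- already exist in the dimension and can be discarded.
SELECT [EngagementVehicleSourceKey]
FROM [dbo].[DimEngagementVehicle]
WHERE [GlobalEngagementVehicleTypeKey] = 2
  AND [GlobalEngagementVehicleKey] = 3;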

Any insight or assistance would be greatly appreciated!

Thanks!


A. M. Robinson

Transform CSV file before importing into a stage table


Hi,

I have a CSV file with about 12 or so columns.  The issue with the file is that there are some value columns that I need to unpivot before landing them in the staging table.  I also have to do some other formatting on the other columns. My question is: rather than doing a straight import from the CSV to the staging table, is there a way to write a query against the CSV file that will allow me to do the unpivot and other formatting before I import it into the stage table?
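The Flat File Source itself doesn't accept a query, so one common alternative is to land the file as-is into a raw/staging table and then reshape it with T-SQL. A minimal UNPIVOT sketch with placeholder table and column names:

-- Land the CSV unchanged into dbo.RawImport first, then unpivot the value columns.
-- Table and column names are placeholders.
SELECT r.CustomerId,
       u.ProductCode,
       u.Amount
FROM dbo.RawImport AS r
UNPIVOT (Amount FOR ProductCode IN (Product1Amt, Product2Amt, Product3Amt)) AS u;

The Unpivot and Derived Column transformations can do the same reshaping inside the data flow if you'd rather not add a raw table.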

thanks

Scott

New error in Bulk Report Generation using SSIS and SSRS 2008 R2


Hello,

While practically testing the tutorial from the subject line, I am facing a new error:

Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException: An internal error occurred on the report server. See the error log for more details. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.

1) There is 18 GB of RAM installed in my local machine.

2) I restart the machine and install Windows 7 updates.

3) I run nothing but BIDS, i.e. the SSIS project.

Under these conditions I get about 5,500 PDF files created, as it should be. When SSIS starts, only about 6.5 GB of memory is in use, but it gradually and constantly grows.

Problem: if I try to run the SSIS package again, I cannot get all the files created. It fails at a certain stage; last time it failed after creating about 3,100 PDF files, with the above error popping up.

Question: how do I manage memory distribution? The ending state was as in the attached picture, and if I start SSIS now, it will fail after a random number of files have been created.

How should I proceed with further testing? At the last stage, there should be approximately 6,000 files created.

Thanks


Execute SQL Task, OLE DB, Stored Procedure, Returns unexpected value upon second run


When debugging in SSIS, the Execute SQL Task runs a stored procedure and returns the expected value. When the package is run a second time, the task returns an unexpected value. When running the procedure in the VS 2012 SQL editor, everything operates as expected.

Please help me work out how to get the stored procedure to return the same value every time the SSIS package is run. Thanks!

Here is the sequence of events and what happens:

  1. Look for a Positor that matches the Application, Host, and User.
  2. No matching PositorId is found, so a new Positor row is created and that new PositorId is returned.
  3. Use the PositorId to upload some data (everything works).
  4. Re-run/debug the SSIS package.
  5. Look for a Positor that matches the Application, Host, and User.
  6. <No clue what is happening>
  7. Returns -1 (it SHOULD BE the same PositorId value returned in step 2).

"Execute SQL Task" Setup: No edits to Result Set nor Expressions

"Execute SQL Task" Setup: GENERAL

"Execute SQL Task" Setup: PARAMETER MAPPING

SP called by "Execute SQL Task"

CREATE PROCEDURE [posit].[Return_PositorId]
AS
BEGIN

	DECLARE @PositorId	INT = [posit].[Get_PositorId]();

	IF (@PositorId IS NULL)
	BEGIN

		DECLARE @ProcedureDesc	NVARCHAR(257) = OBJECT_SCHEMA_NAME(@@PROCID) + N'.' + OBJECT_NAME(@@PROCID);
		DECLARE @PositorNote	NVARCHAR(348) = N'Automatically created by: ' + @ProcedureDesc;

		EXECUTE @PositorId = [posit].[Insert_Positor] @PositorNote;

	END;

	RETURN @PositorId;

END;

Supporting SQL Objects:

CREATE FUNCTION [posit].[Get_PositorId]
(
)
RETURNS INT
AS
BEGIN

	DECLARE @PositorId	INT = NULL;

	SELECT TOP 1
		@PositorId = [p].[PO_PositorId]
	FROM [posit].[PO_Positor] [p]
	WHERE	[p].[PO_PositorApp]		= APP_NAME()
		AND	[p].[PO_PositorHost]	= HOST_NAME()
		AND	[p].[PO_PositorUID]		= SUSER_ID();

	RETURN @PositorId;

END;
GO

CREATE PROCEDURE [posit].[Insert_Positor]
(
	@PositorNote	NVARCHAR(348) = NULL
)
AS
BEGIN

	DECLARE @ProcedureDesc	NVARCHAR(257) = OBJECT_SCHEMA_NAME(@@PROCID) + N'.' + OBJECT_NAME(@@PROCID);
	SET @PositorNote = COALESCE(@PositorNote, N'Automatically created by: ' + @ProcedureDesc);

	DECLARE @Id TABLE
	(
		[Id]	INT		NOT	NULL
	);

	INSERT INTO [posit].[PO_Positor]([PO_PositorNote])
	OUTPUT [INSERTED].[PO_PositorId]
	INTO @Id([Id])
	VALUES(@PositorNote);

	RETURN (SELECT TOP 1 [Id] FROM @Id);

END;
GO

CREATE TABLE [posit].[PO_Positor]
(
	[PO_PositorId]			INT					NOT	NULL	IDENTITY(0, 1),

	[PO_PositorApp]			NVARCHAR(128)		NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_PositorApp]	DEFAULT(APP_NAME()),
															CONSTRAINT [CL__PO_Positor_PO_PositorApp]	CHECK([PO_PositorApp] <> ''),
	[PO_PositorName]		NVARCHAR(256)		NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_PositorName]	DEFAULT(SUSER_SNAME()),
															CONSTRAINT [CL__PO_Positor_PO_PositorName]	CHECK([PO_PositorName] <> ''),
	[PO_PositorHost]		NVARCHAR(128)		NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_PositorHost]	DEFAULT(HOST_NAME()),
															CONSTRAINT [CL__PO_Positor_PO_PositorHost]	CHECK([PO_PositorHost] <> ''),

	[PO_PositorSID]			VARBINARY(85)		NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_PositorSID]	DEFAULT(SUSER_SID()),
	[PO_PositorUID]			INT					NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_PositorUID]	DEFAULT(SUSER_ID()),

	[PO_PositorNote]		VARCHAR(348)			NULL	CONSTRAINT [CL__PO_Positor_PO_PositorNote]	CHECK([PO_PositorNote] <> ''),

	[PO_tsInserted]			DATETIMEOFFSET(7)	NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_tsInserted]	DEFAULT(SYSDATETIMEOFFSET()),

	[PO_RowGuid]			UNIQUEIDENTIFIER	NOT	NULL	CONSTRAINT [DF__PO_Positor_PO_RowGuid]		DEFAULT(NEWSEQUENTIALID())	ROWGUIDCOL,
															CONSTRAINT [UX__PO_Positor_PO_RowGuid]		UNIQUE NONCLUSTERED([PO_RowGuid]),

	CONSTRAINT [UK__Positor]	UNIQUE CLUSTERED ([PO_PositorApp] ASC, [PO_PositorHost] ASC, [PO_PositorUID] ASC),
	CONSTRAINT [PK__Positor]	PRIMARY KEY ([PO_PositorId] ASC)
);
GO
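Not a diagnosis, but one commonly suggested way to take the RETURN-code plumbing out of the picture is to hand the value back through an OUTPUT parameter and map that on the task's Parameter Mapping page. A sketch against the same objects (the new procedure name is made up):

-- Sketch only: same logic as [posit].[Return_PositorId], but the value is returned
-- through an OUTPUT parameter instead of the procedure's RETURN code.
CREATE PROCEDURE [posit].[Return_PositorId_Out]
	@PositorId INT OUTPUT
AS
BEGIN

	SET @PositorId = [posit].[Get_PositorId]();

	IF (@PositorId IS NULL)
	BEGIN

		DECLARE @ProcedureDesc	NVARCHAR(257) = OBJECT_SCHEMA_NAME(@@PROCID) + N'.' + OBJECT_NAME(@@PROCID);
		DECLARE @PositorNote	NVARCHAR(348) = N'Automatically created by: ' + @ProcedureDesc;

		EXECUTE @PositorId = [posit].[Insert_Positor] @PositorNote;

	END;

END;
GO

-- With an OLE DB connection, the Execute SQL Task's SQLStatement would then be
--   EXEC [posit].[Return_PositorId_Out] ? OUTPUT
-- and the ? mapped on the Parameter Mapping page with Direction = Output.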

ssd
