My friend Slava Murygin (@SlavaSQL) recently asked a question on Twitter:
Populated query results into an object variable, successfully used it in a data flow task, but can’t use it a second time. Is there an easy way around?
Tim Mitchell (Blog | @Tim_Mitchell | Tim’s post: Temp Tables in SSIS) and I engaged. You can read the thread here. Spoiler: Tim and I agree that staging data temporarily in a work table is a good solution.
As with all SSIS solutions (and software design solutions, and life solutions), staging data temporarily in a work table is not the only solution. Why do Tim and I agree on work tables? My best answer is, it reduces the total cost of ownership.
What are the Other Solutions?
There are several alternative solutions. You could stage data temporarily in a Recordset Destination. There’s a way to make SSIS work with tempdb. You can stage to a Raw File. You can use an SSIS Cache (though I believe this remains an Enterprise-only feature). There are yet other solutions.
“Why do You Advocate for Work Tables, Andy?”
I’m glad you asked. Work tables are:
Understood by almost every SSIS developer, analyst, and DBA
A work table is a table defined in a nearby data location; either a schema in the source or target database or in a database on the same instance. I take a constraint-driven approach to work table location selection. Closer – a schema in the same database – is often better for performance.
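To make this concrete, here is a minimal sketch of the kind of work table I have in mind. The schema name (work) and the table definition are purely illustrative, not a prescription:

-- Hypothetical example: a "work" schema and work table in the target database.
-- Schema, table, and column names are illustrative only.
IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = N'work')
  EXEC (N'CREATE SCHEMA work;');

IF OBJECT_ID(N'work.Contact_Stage', N'U') IS NULL
  CREATE TABLE work.Contact_Stage
  (
    ContactID int          NOT NULL,
    LastName  nvarchar(50) NULL,
    FirstName nvarchar(50) NULL,
    LoadDate  datetime     NOT NULL
      CONSTRAINT DF_Contact_Stage_LoadDate DEFAULT (GETDATE())
  );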
I write this knowing some folks will frown at the suggestion of polluting a data source or target database with additional schemas and tables. Best practices exist for a reason. It’s helpful to maintain a list of best practices and to include in this list the reasons each practice exists. This could be a case where violating one or more best practices is justified.
In some cases – like when interacting with databases for third-party solutions – adding schemas and tables is a bad idea (or violation of an EULA). In those cases, stand up a work database on the same instance and place the work table there, unless…
Some data integration design patterns require joining the work table to a source or target table, and some relational database engines do not support three-part naming in SQL queries. My suggestion in those cases is to be creative.
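For illustration, here is the kind of three-part-name join such a pattern needs. The work table lives in a hypothetical WorkDB database on the same instance as a hypothetical TargetDB (all names are made up):

-- Hypothetical three-part-name join: the work table lives in WorkDB,
-- the target table lives in TargetDB, both on the same instance.
UPDATE tgt
SET    tgt.LastName  = stg.LastName,
       tgt.FirstName = stg.FirstName
FROM   TargetDB.dbo.Contact AS tgt
JOIN   WorkDB.work.Contact_Stage AS stg
  ON   stg.ContactID = tgt.ContactID;

When the engine cannot resolve those three-part names, that is where the creativity comes in.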
In my humble opinion, “we’ve always / never done it that way” is a warm and open invitation to explore why it’s always / never been done that way.
A work table should be used by the data integration process during data integration execution. It should only be queried occasionally, and only by development or support personnel. I refer to this state as owned, and say things like, “WorkTable1 is owned by the data integration process.” Note: ownership has security implications, even in Production.
Since the data integration process owns the work table, developers should be able to use an OLE DB Destination configured for fast load (if supported by the provider) to populate a work table. This will make staging temporary data very fast. The data integration process should be able to truncate and manipulate data in a work table based on the requirements of the load pattern.
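For example, the Execute SQL Task that runs just ahead of the Data Flow Task’s fast load might do nothing more than clear the work table (again, hypothetical names):

-- Hypothetical pre-load step: because the data integration process owns the
-- work table, it is free to truncate it before each fast load.
TRUNCATE TABLE work.Contact_Stage;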
Data loaded to a work table may be persisted between package executions. If something unfortunate happens, development and operations personnel may query the table to see data that was persisted – and the state in which it was persisted – during the previous execution.
Data in a work table is accessible using SQL. Not everyone understands SSIS. Almost everyone working around data understands SQL syntax.
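That is the payoff at 2:00 AM: a support person does not have to open an SSIS package to see what was staged. A hypothetical example of the kind of query I mean:

-- Hypothetical support query: inspect the data, and the state in which it was
-- persisted, from the previous execution of the data integration process.
SELECT TOP (100)
       ContactID,
       LastName,
       FirstName,
       LoadDate
FROM   work.Contact_Stage
ORDER BY LoadDate DESC;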
SSIS Catalog Browser version 0.8.9.0 now includes Azure-SSIS Catalog Properties for Azure Worker Agents and Azure Status. When Azure-SSIS is stopped or starting, SSIS Catalog Browser reports an Azure Status of “Not Running”:
Once even one Azure-SSIS worker agent starts, SSIS Catalog Browser reports an Azure Status of “Running” and surfaces the number of Azure Worker Agents currently running:
Once all Azure-SSIS worker agents start, SSIS Catalog Browser surfaces the number of Azure Worker Agents currently running:
Kent Bradshaw and I had a blast delivering Loading Medical Data with SSIS earlier today! If you missed the webinar, you can catch the recording below and, perhaps more importantly, grab coupon codes to save on upcoming Enterprise Data & Analytics Training.
Enjoy the video!
We demonstrated a handful of (free!) DILM (Data Integration Lifecycle Management) Suite utilities:
One of the things Eugene Meidinger found frustrating when he first learned Power BI was all of the behind-the-scenes configuration required to bring Power BI to the enterprise. It was easy to find information about charts and graphs, but difficult to learn about how all the moving parts fit together.
This course focuses on two main areas: data wrangling and administration.
Session 1: Database Theory
When creating a model, it is important to know Power BI is optimized for star schemas in particular and for filtering/aggregating in general. Within Power BI lies a columnar database with very good compression, which means Power BI’s model can handle a fair amount of flattening/denormalization gracefully.
Business users are Power BI’s target demographic. We begin our exploration of Power BI with a review of database fundamentals for business users, covering topics such as:
Power BI offers two data manipulation languages:
Power Query (M)
DAX
(3 languages if you include R!)
Session 2: Power Query (M)
Power Query is designed for business users. It started as an Excel add-in to assist users familiar with the Excel formula bar. One result: Power Query surfaces an intuitive graphical user interface (GUI), but behind the scenes M syntax is generated.
In this session, Eugene discusses and demonstrates tips and tricks for using Power Query to clean and prepare data.
Session 3: DAX
To model and add context to data, Power BI users apply DAX. DAX appears deceptively simple, very similar to Excel formulas – but DAX requires thinking in terms of columns and filters, not in terms of rows.
In this session, Eugene scales some of the steeper slopes of the DAX language learning curve.
Session 4: Data Gateways
Many enterprises practice hybrid data management in which some data and services reside in the cloud while other data and services reside on-premises. Data gateways are a way to bridge the Power BI service (in the cloud) with on-premises data.
In this session, Eugene discusses and demonstrates Data Gateway installation and configuration. Topics include:
In this session, Eugene compares and contrasts Power BI licensing scenarios, including:
Power BI Pro
Power BI Report Server
Power BI Premium
Because it can be difficult to keep up with all of the options, Eugene discusses and demonstrates several ways to deploy Power BI dashboards:
Organizational content packs
Publish to web
Power BI Premium
Power BI Embedded
Power BI Report Server
Session 6: Security and Auditing
Securing Power BI and the data it surfaces is no longer optional.
In this session, Eugene discusses and demonstrates:
Data access management
Row-level security in Power BI and SSAS
Unified Audit Log for Office 365
Data Gateway configuration
In conclusion, Eugene says, “Overall I’m pretty proud of the contents. This is the kind of course I wish I had been able to attend 3 years ago.”
Starting out as an accidental DBA and developer, Eugene Meidinger now focuses primarily on BI consulting. He is a Pluralsight course author with seven years of SQL Server experience. Eugene holds SQL Server certifications and regularly presents at community events, including SQL Saturdays and his local user group. His current focus is Power BI and related areas.
SQL Server Integration Services (SSIS) is a powerful enterprise data integration tool that ships free with Microsoft SQL Server. Join Andy Leonard – Microsoft Data platform MVP, author, blogger, and Chief Data Engineer at Enterprise Data & Analytics – and Kent Bradshaw – Database Administrator, Developer, and Data Scientist at Enterprise Data & Analytics – as they demonstrate several ways to execute enterprise SSIS.
Join this webinar and learn how to execute SSIS from:
The first answer is, “Maybe you don’t need or want to use a framework.” If your enterprise data integration consists of a small number of SSIS packages, a framework could be an extra layer of metadata-management hassle that, frankly, you can live without. We will unpack this in a minute…
“You Don’t Need a Framework”
I know some really smart SSIS developers and consultants for whom this is the answer. I know some less-experienced SSIS developers and consultants for whom this is the answer. Some smart and less-experienced SSIS developers and consultants may change their minds once they gain at-scale experience and encounter some of the problems a framework solves. Some will not.
If that last paragraph rubbed you the wrong way, I ask you to read the next one before closing this post:
One thing to consider: If you work with other data integration platforms – such as DataStage or Informatica – you will note these platforms include framework functionality built-in. Did the developers of these platforms include a bunch of unnecessary overhead in their products? No. They built in framework functionality because framework functionality is a solution for common data integration issues encountered at enterprise scale.
If your data integration consultant tells you that you do not need a framework, one of two things is true: 1. They are correct, you do not need a framework; or 2. They have not yet encountered the problems a framework solves, issues that only arise when one architects a data integration solution at scale.
– Andy, circa 2018
Data Integration Framework: Defined
A data integration framework manages three things:
This post focuses on…
If you read the paragraph above and thought, “I don’t need a framework for SSIS. I have a small number of SSIS packages in my enterprise,” I promised we would unpack that thought. You may have a small number of packages because you built one or more monolith(s). A monolith is one large package containing all the logic required to perform a data integration operation – such as staging from sources.
The monolith shown above is from a (free!) webinar Kent Bradshaw and I deliver 17 Apr 2019. It’s called Loading Medical Data with SSIS. We refactor this monolith into four smaller packages – one for each Sequence Container – and add a (Batch) Controller package to execute them in order. I can hear some of you thinking…
“Why Refactor, Andy?”
I’m glad you asked! Despite the fact that its name contains the name of a popular relational database engine (SQL Server), SQL Server Integration Services is a software development platform. If you search for software development best practices, you will find something called Separation of Concerns near the top of everyone’s list.
One component of separation of concerns is decoupling chunks of code into smaller modules of encapsulated functionality. Applied to SSIS, this means Monoliths must die:
If your SSIS package has a dozen Data Flow Tasks and one fails, you have to dig through the logs – a little, not a lot; but it’s at 2:00 AM – to figure out what failed and why. You can cut down the “what failed” part by building SSIS packages that contain a single Data Flow Task per package.
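When the packages are deployed to the SSIS Catalog, a query against the standard SSISDB catalog views shortens that 2:00 AM digging considerably. A sketch, assuming default Catalog logging:

-- Recent failed executions and their error messages from the SSIS Catalog.
SELECT e.execution_id,
       e.folder_name,
       e.project_name,
       e.package_name,
       e.start_time,
       m.message_time,
       m.message
FROM   SSISDB.catalog.executions AS e
JOIN   SSISDB.catalog.event_messages AS m
  ON   m.operation_id = e.execution_id
WHERE  e.status = 4          -- 4 = Failed
  AND  m.message_type = 120  -- 120 = Error
ORDER BY e.start_time DESC,
         m.message_time;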
If you took that advice, you are now the proud owner of a bunch of SSIS packages. How do you manage execution?
There are a number of solutions. You could:
Daisy-chain package execution by using an Execute Package Task at the end of each SSIS package Control Flow that starts the next SSIS package.
Create a Batch Controller SSIS package that uses Execute Package Tasks to execute each package in the desired order and degree of parallelism.
Delegate execution management to a scheduling solution (SQL Agent, etc.).
Use an SSIS Framework.
Some combination of the above.
None of the above (there are other options…).
Daisy-chaining package execution has some benefits:
Easy to interject a new SSIS package into the workflow: simply add the new package and update the preceding package’s Execute Package Task.
Daisy-chaining package execution has some drawbacks:
Adding a new package to daisy-chained solutions almost always requires deployment of two SSIS packages – the package before the new SSIS package (with a reconfigured Execute Package Task) along with the new SSIS package. The exception is a new first package. A new last package would also require that the “old last package” be updated.
Using a Batch Controller package has some benefits:
Relatively easy to interject a new SSIS package into the workflow. As with daisy-chain, add the new package and modify the Controller package by adding a new Execute Package Task to kick off the new package when desired.
Batch-controlling package execution has some drawbacks:
Adding a new package to a batch-controlled solution always requires deployment of two SSIS packages – the new SSIS package and the updated Controller SSIS package.
Depending on the scheduling utility in use, adding a package to the workflow can be really simple or horribly complex. I’ve seen both and I’ve also seen methods of automation that mitigate horribly-complex schedulers.
Use a Framework
I like metadata-driven SSIS frameworks because they’re metadata-driven. Why is being metadata-driven so important to me? To the production DBA or Operations people monitoring the systems in the middle of the night, SSIS package execution is just another batch process using server resources. Some DBAs and operations people comprehend SSIS really well; some do not. We can make life easier for both by surfacing as much metadata and operational logging – ETL instrumentation – as possible.
Well-architected metadata-driven frameworks reduce enterprise innovation friction by:
Reducing maintenance overhead
Batched execution, discrete IDs
Packages may live anywhere
Adding an SSIS package to a metadata-driven framework is a relatively simple two-step process (a sketch of the metadata step follows the list):
Deploy the SSIS package (or project).
Just add metadata.
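Here is what “just add metadata” might look like. The fw schema, table names, and columns below are hypothetical; every framework spells this differently, but the idea is an ordered list of packages per application (workflow):

-- Hypothetical framework metadata: register the deployed package and add it
-- to an application (workflow) at a specific execution order.
INSERT INTO fw.Packages (FolderName, ProjectName, PackageName)
VALUES (N'Staging', N'LoadMedicalData', N'StageContacts.dtsx');

INSERT INTO fw.ApplicationPackages (ApplicationName, PackageName, ExecutionOrder)
VALUES (N'Nightly Load', N'StageContacts.dtsx', 30);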
A nice bonus? Metadata stored in tables can be readily available to both production DBAs and Operations personnel… or anyone, really, with permission to view said data.
Batched Execution with Discrete IDs
An SSIS Catalog-integrated framework can overcome one of my pet peeves with using Batch Controllers. If you call packages using the Parent-Child design pattern implemented with the Execute Package Task, each child execution shares the same Execution / Operation ID with the parent package. While it’s mostly not a big deal, I feel the “All Executions” report is… misleading.
Using a Catalog-integrated framework gives me an Execution / Operation ID for each package executed – the parent and each child.
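Under the hood, a Catalog-integrated framework typically starts each child package through the SSIS Catalog itself, along the lines of the standard SSISDB stored procedures below (folder, project, and package names are illustrative), which is why each child receives its own Execution / Operation ID:

-- Start one package through the SSIS Catalog; it receives its own execution_id.
DECLARE @execution_id bigint;

EXEC SSISDB.catalog.create_execution
     @folder_name     = N'Staging',
     @project_name    = N'LoadMedicalData',
     @package_name    = N'StageContacts.dtsx',
     @use32bitruntime = 0,
     @execution_id    = @execution_id OUTPUT;

EXEC SSISDB.catalog.start_execution @execution_id;

SELECT @execution_id AS ExecutionID;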
“Dude, Where’s My Package?”
Ever try to configure an Execute Package Task to execute a package in another SSIS project? Or another Catalog folder? You cannot.* By default, the Execute Package Task in a Project Deployment Model SSIS project (also the default) cannot “see” SSIS packages that reside in other SSIS projects or that are deployed to other SSIS Catalog Folders.
“Why do I care, Andy?”
Excellent question. Another benefit of separation of concerns is it promotes code reuse. Imagine I have a package named ArchiveFile.dtsx that, you know, archives flat files once I’m done loading them. Suppose I want to use that highly-parameterized SSIS package in several orchestrations? Sure, I can Add-Existing-Package my way right out of this corner. Until…
What happens when I want to modify the packages? Or find a bug? This is way messier than simply being able to modify / fix the package, test it, and deploy it to a single location in Production where a bajillion workflows access it. Isn’t it?
Messy stinks. Code reuse is awesome. A metadata-driven framework can access SSIS packages that are deployed to any SSIS project in any SSIS Catalog folder on an instance. Again, it’s just metadata.
*I know a couple ways to “jack up” an Execute Package Task and make it call SSIS Packages that reside in other SSIS Projects or in other SSIS Catalog Folders. I think this is such a bad idea for so many reasons, I’m not even going to share how to do it. If you are here…
Azure Data Factory – ADF – is a cloud data engineering solution. ADF version 2 sports a snappy web GUI (graphical user interface) and supports the SSIS Integration Runtime (IR) – or “SSIS in the Cloud.”
Attend this session to learn:
How to build an ADF pipeline;
How to lift and shift SSIS to the Azure Data Factory Integration Runtime; and
ADF Design Patterns to execute and monitor pipelines and packages.
I’m excited to announce the next delivery of Developing SSIS Data Flows with Labs will be 17-18 Jun 2019! This two-day course takes a hands-on approach to introduce SSIS Data Flows with a combination of lecture and labs.
About Developing SSIS Data Flows with Labs
Data integration is the foundation of data science, machine learning, artificial intelligence, business intelligence, and enterprise data warehousing. This instructor-led training class is specifically designed for SQL Server Integration Services (SSIS) professionals responsible for developing data integration solutions for enterprise-scale Extract, Transform, and Load (ETL) who want to learn more about developing SSIS Data Flows.
You will learn to build data integration with SSIS Data Flows by:
– Learning SSIS Design Patterns.
– Working through hands-on lab exercises.
– Building efficient SSIS Data Flows.
1. Introduction to the SSIS Data Flow Task.
2. Designing re-executable loaders with SSIS.
3. Building an Incremental Load design pattern.
4. Data type fundamentals in a real-world scenario.
5. Managing schema changes to data sources.
6. Intermediate data staging.
7. Deriving to cleanse.
8. Repetition: Iterating file sources.