Two Recordings Available: SSIS Catalog and SSIS Lifecycle

I recently delivered two hour-long webinars: Introduction to Configuring SSIS in the SSIS Catalog and Introduction to the SSIS Lifecycle. These two webinars can be considered Part 1 and Part 2 for managing enterprise data engineering with SSIS – using the SSIS Catalog and utilities I built as part of the DILM Suite (DILM == Data Integration Lifecycle Management).

The recordings are now available and you can access them by clicking the links above.

What’s Covered

In Introduction to Configuring SSIS in the SSIS Catalog I cover deploying SSIS projects to the SSIS Catalog, configuring literal overrides, and using references to environments to externalize values via reference mappings.
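
The configuration steps covered in the webinar can also be scripted in T-SQL against SSISDB using the Catalog’s stored procedures. Here is a minimal sketch – the folder, project, environment, variable, and connection-string values are hypothetical placeholders, not the webinar demo’s actual names:

```sql
-- Create a Catalog environment and a variable holding an externalized value
-- (assumes a project named "LoadPerson" is already deployed to folder "Demo")
DECLARE @folder  sysname = N'Demo',
        @project sysname = N'LoadPerson',
        @env     sysname = N'DEV_Person';

EXEC SSISDB.catalog.create_environment
     @folder_name = @folder, @environment_name = @env;

EXEC SSISDB.catalog.create_environment_variable
     @folder_name = @folder, @environment_name = @env,
     @variable_name = N'SourceConnectionString',
     @data_type = N'String', @sensitive = 0,
     @value = N'Data Source=.;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;';

-- Create a reference from the project to the environment
DECLARE @reference_id bigint;
EXEC SSISDB.catalog.create_environment_reference
     @folder_name = @folder, @project_name = @project,
     @environment_name = @env, @reference_type = N'R',
     @reference_id = @reference_id OUTPUT;

-- The reference mapping: point a project parameter at the environment
-- variable (@value_type = 'R' means "referenced" rather than a literal)
EXEC SSISDB.catalog.set_object_parameter_value
     @object_type = 20, @folder_name = @folder, @project_name = @project,
     @parameter_name = N'ConnectionString',
     @parameter_value = N'SourceConnectionString',
     @value_type = 'R';
```

Deploying the same script with different @value settings per tier is one way to keep configuration out of the packages themselves.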

In Introduction to the SSIS Lifecycle I demonstrate gaps in SSIS Catalog lifecycle management and show the utilities I built to close those gaps – utilities available at DILM Suite. Most solutions and utilities at DILM Suite are free, and some are even open source. Some – like SSIS Catalog Compare – are not free.

  1. If your enterprise is using SSIS 2012+, you should consider the SSIS Catalog.
  2. If you are going to use the SSIS Catalog, you should consider using environments and references for configuration management.
  3. If you’re going to use environments and references for configuration management, you will hit a wall when it’s time to promote environments and references to the next tier in your lifecycle.

In these webinars, I show you the wall.
And then I show you the stairs I built over that wall.

Check out the webinars (registration required):



You Might Need an SSIS Framework

You might need an SSIS framework. “How can I tell if I need an SSIS framework, Andy?” I’m glad you asked.

Does your enterprise:

  • Practice DevOps?
  • Execute lots of SQL Server Integration Services (SSIS) packages?
  • Execute SSIS packages several times per day?
  • Execute “SSIS in the Cloud” using the Azure Data Factory version 2 Integration Runtime (ADFv2 IR)?
  • Require configuration options not available in off-the-shelf solutions?

How an SSIS Framework Can Help

One SSIS best practice is to develop small, unit-of-work packages. You can think of them as data engineering functions. Design SSIS packages with as few Data Flow Tasks as possible – optimally one.

There’s a saying in engineering (and life): “There’s no free lunch.” Applied to data engineering with SSIS, if you apply the best practice of creating small, unit-of-work packages, you end up with a bunch of SSIS packages. How do you manage executing all these packages? An SSIS framework.

An SSIS framework manages package execution, configuration, and logging.
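
As a sketch of the idea – not SSIS Framework Community Edition’s actual schema; these table and column names are invented for illustration – a minimal framework stores application-to-package mappings in metadata tables and walks them at execution time:

```sql
-- Hypothetical minimal framework metadata: an "application" is an
-- ordered list of Catalog packages to execute.
CREATE TABLE dbo.FrameworkApplication
(
    ApplicationId   int IDENTITY(1,1) PRIMARY KEY,
    ApplicationName nvarchar(128) NOT NULL UNIQUE
);

CREATE TABLE dbo.FrameworkApplicationPackage
(
    ApplicationId  int NOT NULL
        REFERENCES dbo.FrameworkApplication (ApplicationId),
    ExecutionOrder int NOT NULL,
    FolderName     sysname NOT NULL,   -- Catalog folder
    ProjectName    sysname NOT NULL,   -- Catalog project
    PackageName    sysname NOT NULL,   -- e.g. N'Load_Person.dtsx'
    PRIMARY KEY (ApplicationId, ExecutionOrder)
);
```

A stored procedure then reads the rows for one application, in ExecutionOrder, and starts each package via the Catalog – which is where configuration and logging come along for free.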

SSIS Framework Community Edition

The SSIS Framework Community Edition is part of the DILM (Data Integration Lifecycle Management) Suite. SSIS Framework Community Edition groups the execution of several SSIS packages into SSIS Applications – collections of SSIS packages configured to execute in a specific order.

“Can’t I just use Execute Package Tasks for that, Andy?”

Yes. And no. When deploying to the SSIS Catalog, the Execute Package Task can be used to execute any package as long as that package exists in the same project. What if you have a utility package – say a package that archives flat files after you’ve loaded them – that you want to store in a single SSIS Catalog folder and project but call from different processes (or SSIS applications)? SSIS Framework Community Edition can execute that package as part of an SSIS application.
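
Under the hood, a framework can start any Catalog package – in any folder or project – using the Catalog’s own stored procedures, which is how it sidesteps the Execute Package Task’s same-project restriction. A sketch (the folder, project, and package names here are hypothetical):

```sql
-- Start the shared utility package from any calling process,
-- regardless of which project the caller belongs to.
DECLARE @execution_id bigint;

EXEC SSISDB.catalog.create_execution
     @folder_name  = N'Utilities',
     @project_name = N'FileUtilities',
     @package_name = N'ArchiveFlatFiles.dtsx',
     @use32bitruntime = 0,
     @execution_id = @execution_id OUTPUT;

-- SYNCHRONIZED = 1 makes the call wait until the package completes,
-- preserving the application's execution order
EXEC SSISDB.catalog.set_execution_parameter_value
     @execution_id, @object_type = 50,
     @parameter_name = N'SYNCHRONIZED', @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id;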

SSIS Framework Community Edition is Catalog-Integrated

SSIS Framework Community Edition is integrated into the SSIS Catalog. When packages execute as part of an SSIS application, operational metadata and execution information are written to the SSIS Catalog’s tables. You can view operational metrics and metadata using the catalog reports solution built into SQL Server Management Studio (SSMS)…

…or you could view SSIS execution logs and operational metadata using Catalog Reports – a free and open-source SQL Server Reporting Services (SSRS) solution from DILM Suite.
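
If you prefer T-SQL to reports, the same operational metadata is queryable directly from the Catalog views. A quick sketch:

```sql
-- Recent executions with status and duration, straight from SSISDB.
-- status: 1=created, 2=running, 3=canceled, 4=failed, 7=succeeded
SELECT TOP (25)
       e.execution_id,
       e.folder_name,
       e.project_name,
       e.package_name,
       e.status,
       e.start_time,
       DATEDIFF(SECOND, e.start_time, e.end_time) AS duration_seconds
FROM SSISDB.catalog.executions AS e
ORDER BY e.start_time DESC;
```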

SSIS Framework Community Edition is Free. And Open-Source.

SSIS Framework Community Edition is free and open-source. In fact, the documentation walks you through building – and designing – your own SSIS framework.

SSIS Framework Community Edition is Customizable

Customization is one of the coolest features of open-source software. If you need some unique functionality, you have the source code and can code it up yourself!

If you don’t have time to code your own unique functionality, Enterprise Data & Analytics can help. It’s possible SSIS Framework Commercial or Enterprise Edition already has the functionality you seek. Compare editions to learn more.

SSIS Framework Community Edition is Cloud-Ready

I can hear you thinking, “Wait. It’s free. It’s open-source. And it runs in the cloud?” Yep, yep, and yep!

We Can Help

At Enterprise Data & Analytics, we’ve been building data integration frameworks for over 15 years. I wrote a book about Data Integration Lifecycle Management (DILM):

We built the DILM Suite – a collection of utilities and solutions, many of which are free (and some even open-source!):

We grok frameworks.

Learn more at Enterprise Data & Analytics.

Contact us today!

My Books

I’ve authored and co-authored a bunch of books!

Truth be told, I enjoy writing. I think my love of writing stems from a love of learning. I’m just trying to share the joy!

It’s a huge honor to write. I’m humbled and thrilled (all at the same time – a feeling I describe as “all giggly inside”) whenever I’m attending an event like the PASS Summit or a SQL Saturday and someone tells me they enjoy what I’ve written or learned something from something I’ve written.

Mostly I’ve co-authored books but a few I’ve written by myself. I owe all my knowledge to those who shared their knowledge with me. I am merely your scribe.

For a list and links to my books, please visit my Amazon author page.

If you’d like for me to train you or your team in the fine arts and sciences of SSIS, Biml, or both – in person or remotely – please contact me.


Dude, Where’s My Controls? (Visual Studio 2017 Behavior)

I’m coding along on the next release of SSIS Catalog Compare, happy as a clam, when I realize it’s time to add a new menu item. I open the form and… nothing.

To say my heart sank would be an understatement. I had questions. “Dude, where’s my controls?” “What happened?!” “Will I be able to recover?” “How long has this been going on?”

I react this way every time, and the reaction passes in a couple of seconds. Then I think about it and often tell myself, “You are not the first person to encounter this issue.”

And then I search for the answer.

I found the answer in less than two minutes here.

The Problem

My designer code was no longer nested beneath the form. You can see it here:

See that circled part in the image above? frmConfirmation.Designer.cs should be nested beneath the node above it named frmConfirmation.cs. It’s not, and that’s the problem. When I execute the code in the VS debugger the controls show up. I just can’t edit them.

The Solution

As two respondents mentioned in the link above, the solution is to exclude the Designer.cs file from the project:

Re-add it by right-clicking the project, hovering over Add, and then clicking Existing Item:

Navigate to the Designer.cs file and select it to re-add it to the solution:

This fixes the missing controls:


On Output…

I’m going to be a little bold in this post and suggest if you are developing for SQL Server, the screenshot to the left and above shows something that is, well, wrong. I can hear you thinking,

“What’s Wrong With That Output, Andy?”

I’m glad you asked. I will answer with a question: What just happened? What code or command or process or… whatever… just “completed successfully”? Silly question, isn’t it? There’s no way to tell what just happened simply by looking at that output.

And that’s the point of this post:

You don’t know what just happened.

Sit Back and Let Grandpa Andy Tell You a Story

I was managing a large team of ETL developers and serving as the ETL Architect for a large enterprise developing enterprise data engineering solutions for two similar clients. Things were winding down and we were in an interesting state with one client – somewhere between User Acceptance Testing (UAT) and Production. I guess you could call that state PrUAT, but I digress…

The optics got… tricksy… with the client in PrUAT. Vendors were not being paid due to the state of our solution. The vendors (rightfully) complained. One of them called the news media, and reporters showed up to cover the situation. Politicians became involved. To call the situation “messy” was accurate but did not convey the internal pressure on our teams to find and fix the issue – in addition to fixing all the other issues.

There were fires everywhere. In this case, one of the fires had caught fire.

Things. Were. Ugly.

My boss called and said, “Andy, can you fix this issue?” I replied, “Yes.” Why? Because it was my job to fix issues. Fixing issues and solving problems is still my job (it’s probably your job too…). I found and corrected the root cause in Dev. As ETL Architect, I exercised my authority to make a judgment call, promoted the code to Test, tested it, documented the test results, created a ticket, and packaged things up for deployment to PrUAT by the PrUAT DBAs.

Because this particular fire was on fire, I also followed up by calling Geoff, the PrUAT DBA I suspected would be assigned this ticket. Geoff was busy (this is important, don’t forget this part…) working on another fire-on-fire, and told me he couldn’t get to this right now.

But this had to be done.
Right now.

I thanked Geoff and hung up the phone. I then made another judgment call and exercised yet more of my ETL Architect authority. I assigned the PrUAT ticket to myself, logged into PrUAT, executed the patch, copied the output of the execution to the Notes field of the ticket (as we’d trained all DBAs and Release Management people to do), and then manually verified the patch was, in fact, deployed to PrUAT.

I closed the ticket and called my boss. “Done. And verified,” I said. My boss replied, “Good,” and hung up. He passed the good news up the chain.

A funny thing happened the next morning. And by “funny,” I mean no-fun-at-all. My boss called and asked, “Andy? I thought you said the patch was deployed to PrUAT.” I was a little stunned, grappling with the implications of the accusation. He continued, “The process failed again last night and vendor checks were – again – not cut.” I finally stammered, “Let me check on it and get back to you.”

I could ramble here. But let me cut to the chase. Remember Geoff was busy? He was working a corrupt PrUAT database issue. How do you think he solved it? Did you guess restore from backup? You are correct, if so. When did Geoff restore from backup? Sometime after I applied the patch. What happened to my patch code? It was overwritten by the restore.

I re-opened the ticket and assigned it to Geoff. Being less-busy now, Geoff executed the code, copied the output into the Notes field of the ticket (as we’d trained all DBAs and Release Management people to do), and then closed the ticket. The next night, the process executed successfully and the vendor checks were cut.

“How’d You Save Your Job, Andy?”

That is an excellent question because I should have been fired. I’m almost certain the possibility crossed the mind of my boss and his bosses. I know I would have fired me. The answer?

Output. Documented output, to be more precise.

You see, the output we’d trained all DBAs and Release Management people to copy and paste into the Notes field of the ticket before closing the ticket included enough information to verify that both Geoff and I had deployed code with similar output. It also contained date and time metadata about the deployment, which is why I was not canned.

Output Matters

Compare the screenshot at the top of this post to the one below (click to enlarge).

This T-SQL produces lots of output. That’s great. Sort of.

“There’s no free lunch” is a saying that conveys that every good thing (like lunch) costs something (“no free”). And that’s true – especially in software development. Software design is largely an exercise in balancing mutually exclusive and competing requirements and demands.

If it was easy anyone could do it.

It’s not easy. It takes experienced developers years to develop (double entendre intended) the skills required to design software – and even more years of varied experience to build the skills required to be a good software architect.

The good news: the output is awesome.
The bad news: the output is a lot of typing.

“So Why, Andy? Why Do All The Typing?”

You’ve probably heard of technical debt. This is the opposite of technical debt; this is a technical investment.

Technical investments are time and energy spent early (or earlier) in the software development lifecycle that produce technical dividends later in the software development lifecycle. (Time and energy invested earlier in the project lifecycle always costs less than investing later in the project lifecycle. I need to write more about this…) What are some examples of technical dividends? Well, not-firing-the-ETL-Architect-for-doing-his-job leaps to mind.

This isn’t the only technical dividend, though. Knowing that the code was deployed is important to the DevOps process. Instrumented code is verifiable code – whether the instrumentation supports deployment or execution. Consider the option: believing the code has been executed.

That’s not DevOps. That’s wishful thinking.

Measuring Technical Dividends

Measuring technical dividends directly is difficult but possible. It’s akin to asking the question, “How much downtime did we avoid by having good processes in place?” The answer to that question is hard to capture. You can get some sense of it by tracking the mean time to identify a fault, though – as measured by the difference between the time someone begins working the issue and the time when they identify the root cause.

Good instrumentation reduces mean time to identify a fault.
Knowing is better than guessing or believing.
The extra typing required to produce good output is worth it.
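
In T-SQL, the “extra typing” looks something like this: a deployment script that announces what it is, where it ran, when, and as whom – output you can paste into a ticket and verify later. (A hedged sketch; the patch name and body are placeholders.)

```sql
-- Instrumented deployment script: every run documents itself.
DECLARE @msg nvarchar(256);

SET @msg = N'Patch 2018-06-07.1 starting on ' + @@SERVERNAME
         + N' at ' + CONVERT(nvarchar(30), SYSDATETIME(), 121)
         + N' as ' + SUSER_SNAME();
RAISERROR (@msg, 0, 1) WITH NOWAIT;  -- emits immediately, unlike PRINT

-- ... the actual patch statements go here ...

SET @msg = N'Patch 2018-06-07.1 completed at '
         + CONVERT(nvarchar(30), SYSDATETIME(), 121);
RAISERROR (@msg, 0, 1) WITH NOWAIT;
```

Paste that output into the ticket’s Notes field and you know – rather than believe – what was deployed, where, and when.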

Good Output

In this age of automation, good output may not require extra typing. Good output may simply require another investment – one of money traded for time. There are several good tools available from vendors that surface awesome reports regarding the state of enterprise software, databases, and data. DevOps tools are maturing and supporting enterprises willing to invest the time and energy required to implement them.

One such tool is SSIS Catalog Compare which generated the second screenshot. (Full disclosure: I built SSIS Catalog Compare.)

SSIS Catalog Compare generates scripts and ISPAC files from one SSIS Catalog that are ready to be deployed to another SSIS Catalog. Treeview controls display Catalog artifacts, surfacing everything related to an SSIS Catalog project without the need to right-click and open additional windows. (You can get this functionality free by downloading SSIS Catalog Browser. Did I mention it’s free?)

In addition, SSIS Catalog Compare compares the contents of two SSIS Catalogs – like QA and PrUAT, for example. Can one compare catalogs using other methods? Yes. None are as easy, fast, or complete as SSIS Catalog Compare.


For a limited time you can get SSIS Catalog Compare for 60% off. Click here, buy SSIS Catalog Compare – the Bundle, the GUI, or CatCompare (the command-line interface) – and enter “andysblog” without the double-quotes as the coupon code at checkout.


Whether you use a tool to generate scripts or not, it’s a good idea to make the technical investment of instrumenting your code – T-SQL or other. Good instrumentation saves time and money and allows enterprises to scale by freeing-up people to do more important work.


Introduction to the SSIS Lifecycle – 7 Jun 2018

Join me Thursday, 07 Jun at noon EDT for a free webinar: Introduction to the SSIS Lifecycle!

How should an enterprise promote SSIS projects from Development to Production?
How many “levels” are required?
What are the best practices?
Do SSIS lifecycle management tools exist?

Join Andy Leonard – SSIS author, trainer, and consultant – to learn the answers to these questions. In this webinar, Andy discusses and demonstrates the SSIS lifecycle.

Register today!


“You Do Not Know What You Are Doing”

Peeves make lousy pets.

Knowing this doesn’t help; I still keep a few pet peeves. One of my pet peeves is this statement: “You don’t know what you are doing.” Why is this a pet peeve? It denies the obvious fact that every one of us, everywhere, is still learning.

“My Name is Andy and I Own and Operate a Consulting Company.”

“But Andy, you don’t know how to own or operate a consulting company.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about owning and operating a consulting company, but I can learn.

“My Name is Andy and I Built a Software Product.”

“But Andy, you don’t know how to build a software product.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about building a software product, but I can learn.

Interesting sidebar: SSIS Catalog Compare is not only the first product I’ve ever written, it’s the first complete application I’ve written in C#.

“My Name is Andy and I Co-Host a Successful Podcast”

“But Andy, you don’t know how to co-host a successful podcast.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about co-hosting a successful podcast, but I can learn.

I Can Learn

I know I can learn because I have demonstrated this fact many times over. I proved it last month (at the time of this writing – April 2018 thereafter) when I completed the Microsoft Professional Program for Big Data. I proved it by learning enough C# to write Catalog Compare, Catalog Browser, and Framework Browser.

I promise I am learning more every day about owning and operating Enterprise Data & Analytics and building and managing the software solutions and products that make up the DILM Suite – including products like SSIS Catalog Compare and the SSIS Framework – and co-hosting Data Driven with Frank La Vigne (@Tableteer).

What I Know

What is someone truly saying – what do they truly mean – when they say or write someone doesn’t know what they’re doing?

They’re making this statement about themselves: “I couldn’t so you shouldn’t.”

No one brings this point home better than Grant Cardone in his book (get the audiobook – you are welcome), Be Obsessed or Be Average, or #BOBA. The follow-up to his (awesome) book The 10X Rule, Be Obsessed or Be Average complements and completes Cardone’s thoughts on the hard work and time required to achieve success.

“What is the Point, Andy?”

When people make statements like “You don’t know what you are doing,” they are saying, “I gave up so you should give up, too,” or, “I didn’t get what I wanted so you don’t deserve what you want, either.”

This is very “fair” thinking.

When I write the word “fair,” I shudder at what “fair” has come to mean – and at the junk it’s been used to justify and the crap it’s been used to rationalize.


I am not going to quit learning.
I will continue to try to make old things work better.
I will continue to try new things.
I will fail more often than I succeed (this is how I learn).
I will not stop until I go home.

My advice, encouragement, exhortation:

  • Don’t quit.
  • Make the problems give up before you do.
  • Listen to people who have succeeded (or are succeeding).
  • Do not listen to people who have given up.

I have more to learn and I know that.


How I Learn

This is a picture of how I learn.

These are executions of an Azure Data Factory version 2 (ADFv2) pipeline. The pipeline is designed to grab data that is collected from a local weather station here in Hampden-Sydney, Virginia – just outside Farmville – and piped to Azure Blob Storage via AzCopy.

How do I learn?

  • Fail
  • Fail
  • Fail
  • Fail
  • Fail
  • Fail
  • Succeed

My advice? Keep learning!

Catalog Browser, Version

If you’ve read this blog for even a short time, you already know I have a passion for DevOps. You probably guessed I also have a passion for data engineering (my job title at Enterprise Data & Analytics is Chief Data Engineer).

I believe successful software development is a combination of a software development platform, a developer, and the developer’s skill developing on that platform. I like SSIS as a data engineering platform. While I absolutely enjoy learning about new data engineering platforms, I love SSIS!

It is in this context that I built the DILM (Data Integration Lifecycle Management) Suite.

I’m excited to announce an update to Catalog Browser, one of the (many) free utilities in the DILM Suite. In this release I improved a feature called Values Everywhere.

One thing I dislike about the Integration Services Catalogs node of the SSMS Object Explorer is how many windows I have to open to determine the value of a reference-mapped Environment Variable. Values Everywhere addresses this by placing the Environment Variable values in a subnode of the reference mapping:

Catalog Browser first displays the reference mapping in the context of the environment named DEV_Person. DEV_Person is a Catalog Environment that contains a collection of Catalog Environment Variables.

Catalog Browser next displays the reference mapping in the context of the SSIS Connection Manager named AdventureWorks2014.OLEDB that consumes the Reference between the DEV_Person environment and the Load_Person.dtsx SSIS package. Note that this Reference Mapping is displayed as <Property Name> –> <Environment Variable Name>, or “ConnectionString –> SourceConnectionString”. Why? Catalog Browser is displaying the Reference Mapping from the perspective of the Connection Manager property.

The third instance of Values Everywhere is shown in the Package Connection References node. Remember, a reference “connects” a package or project to an SSIS Environment Variable (learn more at SSIS Catalog Environments – Step 20 of the Stairway to Integration Services). From the perspective of the reference, the reference mapping is displayed as <Environment Variable Name> –> <Property Name>, or “SourceConnectionString –> ConnectionString”. Why? Catalog Browser is displaying the Reference Mapping from the perspective of the Reference.
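
The same reference-mapping information Values Everywhere surfaces can be pulled from the Catalog views with a query along these lines (a sketch; it shows non-sensitive values only, and note that joining environments by name alone assumes unique environment names across folders):

```sql
-- Reference-mapped parameters: which Environment Variable feeds which
-- property, and the (non-sensitive) value it currently holds.
SELECT p.object_name,
       p.parameter_name,
       p.referenced_variable_name,
       ev.value AS environment_variable_value,
       e.name   AS environment_name
FROM SSISDB.catalog.object_parameters      AS p
JOIN SSISDB.catalog.environment_references AS r
  ON r.project_id = p.project_id
JOIN SSISDB.catalog.environments           AS e
  ON e.name = r.environment_name
JOIN SSISDB.catalog.environment_variables  AS ev
  ON ev.environment_id = e.environment_id
 AND ev.name = p.referenced_variable_name
WHERE p.value_type = 'R';   -- 'R' = referenced (mapped), 'V' = literal
```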

SSIS Catalog References and Reference Mappings may seem complex. There’s a good reason for that: They are complex. References and Reference Mappings are also an elegant solution to externalization, which is not an easy problem to solve. While it is difficult to learn how to configure and manage references and reference mappings, it’s totally worth it. It’s the opposite of technical debt; it’s a technical investment. Once the investment is made, enterprises reap the rewards of centralized and more manageable and more-easily-supported data engineering solutions for the life of the solution. Technical investments save time and money. Are they easy? No. Most often technical debt is easier to learn and do – that’s why technical debt plagues enterprises and will always plague enterprises. Easy is expensive.

Check out the latest version of Catalog Browser and let me know what you think.