Honored to Present Faster SSIS at Triad SQL PASS BI 30 Oct!

I am honored to join my friends at the Triad SQL PASS BI Group in Greensboro, North Carolina, on 30 Oct 2018, where I will present Faster SSIS!

Abstract

Ever wonder why SSIS runs so slowly? Watch SSIS author Andy Leonard as he runs test loads using sample and real-world data, and shows you how to tune SQL Server 2016 Integration Services (SSIS 2016) packages. We’ll start by experimenting with SSIS design patterns to improve performance loading AdventureWorks data. We will implement different change detection patterns and compare execution performance for each. Then, we’ll explain a Data Flow Task’s bottleneck when loading binary large objects – or Blobs. Finally, we’ll demonstrate a design pattern that uses a Script Component in a Data Flow to boost load performance to MySql, whether on-premises or in the cloud. Prerequisites: Familiarity with SQL Server Integration Services (SSIS).

Register today! I hope to see you there.

:{>

Stairway to Biml 9 is Live!

I am pleased to share that Stairway to Biml 9 – Refactoring a Simple Custom Biml Framework is now live at SQL Server Central!

Typing that, I realize I neglected to publicize the fact that two other Stairway to Biml articles were recently published at SQL Server Central.

These three articles are taken from two chapters in The Biml Book:

  • Chapter 6: A Custom Biml Framework
  • Chapter 7: Using Biml as an SSIS Design Patterns Engine

Level 7 is focused on collecting and storing database metadata that will be used by Biml to automatically generate SSIS packages for a data integration solution.

Level 8 centers on one method for using the collected metadata to build a simple Biml-driven framework that automatically generates the SSIS packages for such a solution.

Level 9 describes and demonstrates one way to refactor this simple Biml framework, decoupling design patterns that were hard-coded in previous examples.

Enjoy!

:{>

 

“Do I Need a Biml Framework, Andy?”

No. No, you do not need a Biml Framework.

In the same way that I do not need a vehicle powered by an internal combustion engine, you do not need a Biml framework.

You see, I live about 5 miles from Farmville, Virginia. I could walk to town each time my family needs groceries. I could pack as many groceries as I could carry into a backpack and walk the 5 miles back home. There are several benefits to this, not the least of which is the exercise I would get walking to the store and then walking back – carrying a load of groceries, even!

“Is walking to the grocery store efficient, Andy?” Well, if you’re going to get all persnickety about it… I suppose not. But think of all the money I would save on vehicle maintenance and fuel – plus the benefits of exercise!

Does this reasoning sound silly to you? It does to me (and I wrote it). Let’s add a dose of reality, shall we?

The Art of Data Integration Architecture, Part 1

Like many sciences, data integration is part art and part science. The art part is just good judgment. My lovely bride, Christy, has an awesome saying about good judgment: “Good judgment comes from experience, and experience comes from bad judgment.” With that in mind, I share the following in the hope that it saves you some… experience:

Don’t Use Biml…

If you are building a data integration solution that loads data from one source to one destination, do not use a Biml Framework. Heck, don’t even use Biml for this. Build this solution manually in SSIS or T-SQL or some combination thereof. It’s quicker, easier, and less frustrating to construct a loader manually.

Use Biml…

If you are building a data integration solution that loads data from a couple dozen sources to a collection of destination tables, use Biml but do not use a Biml Framework. It takes about 40 hours to become proficient in Biml. To learn Biml (or anything, really), you first need a real-world problem to solve. Next, you need the gumption to try and solve that problem using a tool you have not yet used to solve a real-world problem.

Why wait until you need to build a couple dozen SSIS packages? It takes me about 2 hours to build and test an SSIS package that incrementally loads data. The math works:

24 packages x 2 hours = 48 hours

If I take the time to learn Biml, which will take about 40 hours, I can complete two dozen loaders in less time than it would have taken me to build those SSIS packages manually. Plus I’ve learned something. More on this in a bit…

Use a Biml Framework…

After you’ve invested the initial 40 hours in learning Biml… If you are building a data integration solution that loads data from one or more sources to a collection of destination tables, use a Biml Framework. Once your framework is built, you can decide when it makes sense to use your automation and when it makes sense to build loaders manually.

I wrote about a Biml Framework here.
You can download an early example of a Biml Framework here.

Future-Proofing

Perhaps you’re looking at all the innovation and automation happening around you and thinking, “This is just some passing fad. The djinn will be put back in the bottle soon and we’ll return to the days of carving our own integrated circuit chips out of wood.” Ok, maybe you’re not thinking precisely that. But I hope and pray you get my drift.

If the announcements of the latest Microsoft conference have taught you anything, they should have taught you that automation is following the arrow of time. Automation is on the increase in both volume and complexity. One benefit of having lived over half a century (and doing technology for most of that time) is perspective:

I promise you, the djinn no longer fits in the bottle.

If I’m scaring you, good.

I’m not arguing the merits of automation.
I am arguing the facts of automation.

I am not attempting to assess the potential good – or potential harm – of automation. I am stating that it’s here to stay – and grow.

If you want to live and work in a field where you can master a craft and simply do the same thing until you retire, technology is not the field for you. Technology is the field for you if you enjoy learning and growing.

“How much time should I spend learning new stuff, Andy?” Experience informs me the answer is “about 25% of your time should be spent learning and growing.”

“Nummers”

My brother and friend Kevin Hazzard (DevJourney | LinkedIn | (awesome and fascinating) DataDriven interview) refers to “numbers” as “nummers.” There are a nummer of reasons I like this mispronunciation… perhaps I will explain another time. I bring up nummers here because – as my friend and brother Aaron Lowe used to tell me (before he passed away earlier in 2018 – I miss him…):

Math is hard.

I once delivered a solution that contained several hundred SSIS packages. Based on the nummers I shared earlier (2 hours per package for an incremental-load SSIS design pattern), the unit of measure for my estimate was “months.”

I delivered the solution in days using Biml.

It’s possible your employer isn’t good at math.
It’s possible your employer is more interested in providing you a job than in making a profit by remaining competitive (I love it when politicians speak of providing jobs… don’t get me started…).

The nummers argue that one day things will change for your employer. When that day arrives, things will also change for you.

(Did I mention that I hope I am scaring you?)

So… Learn!

At the time of this writing, it is not possible for others to take your knowledge away from you. I pray that day never comes, but more than half a century of existence has trained me not to be surprised by anything.

So… learn!

Learn as much as you can. Consider knowledge a hedge against future changes. Add knowledge of the new and shiny to your experience. I don’t care how old or how young you are, start today.

I share this in the same spirit of you wanting me to get more exercise by walking to town to buy groceries, except that I am right about this and you were… less efficient. 😉

Start at SQL Server Central. It’s free.

Well, you have to give them your email address.

I understand that bothers some folks, but I encourage you to carefully weigh the benefits of sharing your email address with an entity that provides this much free education against falling behind in technology trends, application, and education. It’s totally your call, but you may find yourself in possession of the cleanest Inbox Zero of anyone who is uninformed and increasingly less-employable…

The Stairways are awesome.

Head over to edX. Many courses there are free. If you want, you can pay for certificates that you can post on LinkedIn. But – please – go get the knowledge.

 

If you want to dive deeper, sign up for Biml Academy or SSIS Academy or Fundamentals of Azure Data Factory! I’m still building the material so the costs for subscribing to the academies are low.

Peace.

T-SQL Tuesday: When Good Projects Go Bad

Thanks to Jeff Mlakar for hosting this month’s T-SQL Tuesday, a monthly collection of blog posts by data bloggers who work and live in the Microsoft SQL Server space.

This month’s topic is about projects that have gone wrong.

“Why Do Good Projects Go Bad, Andy?”

Deltas.

That’s all. Deltas. Allow me to explain.

In life, engineering, and projects, we experience deltas in a variety of forms.

IRL…

As a manager I learned there’s a difference between how we are perceived and how we think we are perceived. I label this the “self-awareness delta.”

In Engineering…

In engineering terms, delta is the letter of the Greek alphabet used to represent a difference or change or gap.

In Projects…

Projects that will eventually fail go astray early on, in ways that are either invisible or ignored. I’ve seen (and believe) statistics that 85% of business intelligence projects disappoint or fail. Disappointment or failure begins as some delta – some difference – between what is real and what is perceived as real.

Detecting Deltas

Think about how we discover there’s a delta in our self-awareness… it requires, well, self-awareness! It turns out that detecting deltas requires meta thinking. The risk of being too meta? We can succumb to circular logic. Thinking meta involves some circular – or, at least, circumspect – reasoning.

Our perspective is also limited by our perspective. A wise man once said it’s difficult to remove the speck from our brother’s eye while we have a plank in our own eye. (Note: Andy’s Plank Removal Service Offering currently has but a single customer…)

So how can we achieve delta detection?

I regularly reference Kathryn Schulz’s TED talk on Being Wrong. Kathryn begins her (excellent) TED talk by stating that we do not know what it feels like to be wrong. But she concludes her introduction by contradicting herself (which is totally appropriate… think about it…) and states that we do know what it feels like to be wrong; it feels like we’re right.

Ownership

I’ve seen projects fail. Heck, I’ve seen companies fail. I’ve made mistakes and when I’ve caught them, things have sometimes improved. Sometimes I catch them too late to effect a complete or partial remedy. It’s rare, but it happens.

The first step on the path to (project) health is to admit we have a problem:

“We have a problem.”

Another step is communication. I prefer over-communication.

Over-Communication

I’ve never once had a client say to me, “Andy, you are communicating too much about this project.”

Not. Once.

I’ve had projects go sideways because I failed to communicate effectively or enough. Here’s one thing I’ve learned about communication in projects:

If I communicate enough, I will communicate effectively.

Failure, By Degrees

Projects that fail first go sideways. Were we to graph the trajectory of a project failing, we would not see a 90° turn at the point where the project goes sideways. Instead we would see a gradual departure – a curving away – from the path to success. The departure would reach 90° and then – if not corrected – continue to 180°. At this point the project is irrecoverable. A project may become irrecoverable at 90°, even. It depends on the project.

Fix It!

I can hear you thinking, “How does one fix it, Andy?” I’m glad you asked.

If you catch the departures at 1° – or even 9° – it takes much less effort and time to address and correct the departure.

This sounds simple and it is, but it’s also difficult. Remember that meta thinking is required, and meta thinking is hard. Simple, yes; difficult to implement and accomplish.

DfG

I address “Deviation from Good” (DfG) by over-communication. It’s the drum I beat throughout any project I lead. How do I implement DfG?

TPS Status Reports

I loathe bureaucracy. Especially bureaucracy for bureaucracy’s sake. I learned the reason I loathe status reports is I am not naturally good at tracking status. I struggle to remember details of what I did all week at the end of a week. Being an engineer, I engineered a solution:

A few years ago, my friend (and hero) Donald Farmer (Treehive Strategy | @DonaldTreeHive | LinkedIn) shared this Notepad hack with me: if the first line of a Notepad file is the text “.LOG” (followed by a carriage return – just hit the Enter key), Notepad treats the file as a running log. Each time you open the file in Notepad, a new line is appended containing the current date and time. Try it. It’s cool.

I use the following format to log project time:

Date time (automated by Notepad)
Customer
Project Name
Start

Customer
Stop

I use this to track my time. I’ve even built an SSIS package that reads this file and stores the data in a database. I’m able to generate status reports from this data very efficiently.
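
If you want to build something similar, here is a minimal sketch of the kind of table those parsed log entries could land in – the names are illustrative, not my actual schema:

-- A hypothetical landing table for the parsed Notepad log entries:
CREATE TABLE dbo.ProjectTimeLog
(
    LogId       int IDENTITY(1,1) CONSTRAINT PK_ProjectTimeLog PRIMARY KEY
  , LogDateTime datetime2(0)  NOT NULL    -- the timestamp Notepad appends
  , Customer    nvarchar(128) NOT NULL
  , ProjectName nvarchar(128) NULL
  , Action      nvarchar(10)  NOT NULL    -- 'Start' or 'Stop'
);

-- A weekly status report then starts with a simple query:
SELECT LogDateTime, Customer, ProjectName, Action
  FROM dbo.ProjectTimeLog
 WHERE LogDateTime >= DATEADD(day, -7, SYSDATETIME())
 ORDER BY LogDateTime;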

My former boss and mentor, Bennett McEwan (LinkedIn), challenged me when I reported to him at Unisys 10+ years ago. When I complained that my status report was taking an hour or more each week Ben said, “Your status report should only take you five minutes.” My kneejerk reaction was not positive. But then I engaged to solve the problem, remembered Donald’s Notepad hack, and the rest is history.

Come To Jesus Meetings

I’ve been wanting to use “Come to Jesus Meetings” in a heading for a long time. It’s not a perfect fit here but I must confess I’ve become tired of waiting. So I shoe-horned it in right here.

I visit (if possible) or call the sponsor for each project on which Enterprise Data & Analytics is actively engaged and ask questions similar to:

  • What questions do you have for me?
  • How can I better serve you?

These conversations identify expectation deltas at 1° – or 9° – deviation… a place where almost any issue can be efficiently addressed within the confines of the current work effort.

These conversations keep a project on track.

Are these conversations pleasant?
Not always.
But they are extremely valuable.

Failure is normal.
Failing fast is almost always a good thing.

I’ve learned over-communicating is vital to mitigating project disappointment or failures.

Honored to Present Lift and Shift SSIS to ADF at #Azure DataFest Reston

I am honored to deliver Lift and Shift SSIS to ADF at the Azure DataFest in Reston, Virginia, on 11 Oct 2018!

Abstract

Your enterprise wants to use the latest cool Azure Data Analytics tools but there’s one issue: All your data are belong to the servers on-premises. How do you get your enterprise data into the cloud?

In this session, SSIS author and trainer Andy Leonard discusses and demonstrates migrating SSIS to Azure Data Factory Integration Runtime.

Register today!

:{>

Leadership and Failure

How Do People Learn?

There’s ample debate about whether people learn from mistakes. Don’t believe me? Search and see for yourself. There are compelling arguments for both learning from our mistakes and not learning from them.

In engineering and software, I think we learn from our mistakes. I can hear you thinking, “Why do you think that, Andy?” I’m glad you asked.

I think we learn from our mistakes because most of what we attempt fails. We live and breathe in a profession where failure is normal. As we grow and mature and improve as technologists, many come to divorce negative emotions from failure. That’s one trait I look for in enterprise architects – one question I seek to answer: have they separated failure from negative emotions about failure? Here’s something I’ve learned:

Good architects not only don’t feel bad when they fail, they embrace failure.

“That’s crazy talk, Andy!”

Perhaps. It’s also accurate. Why in the world would anyone embrace failure? If one believes failure is a necessary step on the path to success, one embraces failure.

Communicating Failure

In a universe that embraces S.M.A.R.T. goals, recording and reporting failure can be tricksy. I’ve written before about a very successful day of software development, but recording it against my performance metric of “lines of code written” backfired when the answer was -1,800.

In software, I believe we simply must learn from our mistakes. Failing fast is virtuous, and failing often is normal. If what we are doing is normal and virtuous, why then should we feel bad about it? The answer is:

We shouldn’t feel bad about failure.

Classification of Failures

What I am advocating is divorcing emotion from technical failures, which are one particular class of failure.

I wrote earlier about offending people. We should feel bad about failing by offending people most of the time (I included one exception there at the end of my long, rambling post…).

So I will modify my earlier statement:

We shouldn’t feel bad about technical failures.

Conclusion

See what I did there?

I failed to communicate clearly and effectively.
I tried again, adding clarification.

Do I feel bad about that? Nope.
Neither should you.

Peace.

Bad Presentations

In just a few short weeks I am attending the PASS Summit 2018 in Seattle. Whenever I attend an event like the Summit or SQL Saturday I attempt to attend presentations of interest to me. I love learning new stuff!

Good and Less-Good

Most of the presentations I attend are good. Some are really good. They are delivered by talented technologists who are also gifted orators. This is an important distinction because:

Technology and communication are entirely different skills.

I’ve watched gifted presenters misrepresent the facts about technology.
I’ve watched gifted technologists fumble demos and stumble over words.

If I have to choose one over the other, I choose great technologists over great presenters. I do so without reservation or hesitation. Is it good to have both? Goodness yes! But – this is important – we don’t always get what we want.

Some Examples of Less-Good

There are categories of bad presentations and bad presenters. Three leap to mind:

  1. Someone who does not know the topic
  2. Someone who is not a good presenter
  3. Someone who is offensive

Presenters Who Do Not Know What They Are Talking About

I wrote recently about a complaint I see leveled at myself and others from time to time: “You do not know what you are doing.” I confess, sometimes that’s a valid complaint about me. While you might find the previous statement an example of humility (or false humility), I prefer to be in the state of not knowing what I am doing. Why? Because it’s an opportunity to learn and I love to learn.

That said, there’s a time and place for everything – including learning. When presenting a class or at a conference for which attendees or their employers paid good money, I strive to share what I have already learned, not what I am learning. Attendees of a free event, even, are paying with their time. So again, I strive for excellence in presentations at free events.

My lovely bride, Christy, attends on average one of my presentations per year. Early on she shared this advice, “Andy, no one likes to watch you troubleshoot.” As an engineer I didn’t even think about it. If something was broken, it needed to be fixed. Period. Pronto. If that meant dropping everything else – in front of a paying audience, even – so be it. Christy’s advice helped me become a better presenter.

I redesigned my talks to be demo-fault-tolerant. I changed my thinking about my presentations so that I am now more mentally prepared for demos to fail. I always have something to say; I am not only aware that demos will fail, I am prepared for them to fail.

This preparation served me well when, in March 2018 at SQL Saturday Chicago, a drive on my laptop failed just before delivering a presentation that is 90% demos. What did I do? I talked through the demos. A friend approached me afterwards and said, “You looked ready for that!” I was ready for it. Nonetheless I responded (truthfully), “Thank you. I’m going to go sit down for an hour.” Even though I was prepared, it was exhausting.

Good Engineers

I can hear you thinking, “Wait, Andy. Your second item above is ‘Someone who is not a good presenter.’ You titled this corresponding section ‘Good Engineers.’ What gives?” I’m glad you asked.

I am an engineer. If you’ve read my bio you see the word “engineer” there.

I consider this a warning.

Are all good engineers bad communicators? Nope. Many – perhaps most – are, though. Take that last sentence. If English is not your first language I owe you an apology. If English is your first language, I owe you two apologies. It makes perfect sense to me, but any editor who saw that sentence in a manuscript would be compelled to add a comment; correct the sentence; or print the document, take it out back, and physically burn that sentence off the page.

Technology – or engineering – is a skill.
Communication is a skill.
Technology and communication are different skills.

As I stated earlier, I don’t care if the presenter is a bad communicator. There is at least one exception to this rule that I will cover next. But for the most part, I can learn from good technologists who stink at communication. I can learn from good technologists who are brilliant communicators but are having a bad day. How do I know I can learn from these people? Because I have, and do, almost every week.

And so do you.

You may not like the presenter.
You may not enjoy the presentation.
But do you learn? Yep. You do.

An Aside: I Hate Abstract-Writing Contests

I’ve organized community events in the past. I don’t do very much work on community events these days, although I serve as a mascot on a couple leadership committees because of my past adventures in similar endeavors. Occasionally – rarely, I would say – I offer some tidbit that helps. My role these days is mostly to encourage leaders who are on the verge of burnout.

Selecting speakers is an engine of loss. It’s bad when there are more submissions than slots. The effects are amplified if the event is popular. Speaker selection is a fantastic mechanism to irritate and isolate people. For years, in some cases. Everyone who submits believes their submission should be accepted. Otherwise, why submit in the first place? No one likes to hear, “No.” Everyone likes to hear, “Yes.”

I am the same as every other speaker in this regard.
I may be worse than most, even.

Peeves make lousy pets. Knowing this doesn’t stop me from nurturing a handful of pet peeves. One of my pet peeves is speaker selection based solely on the marketing value of the written abstract. I label this practice an “abstract-writing contest” and I believe it is one bane of a successful technical conference.

If you read that last paragraph and began composing a comment that includes a paraphrase of, “Then what would you have us do, Andy? Pick lousy abstracts?!” I feel for you. But I cannot quite reach you. </RedneckSnark>

My response is, “No. I would have you consider more than merely the abstract. And specifically more than its perceived ‘Marketing value.'” Mix things up a bit. Add some variety. Consider the popularity of the speaker’s blog or previous presentations, maybe. Again, there are exceptions (one of which I will address shortly). But here’s my point in a nutshell:

All good presenters deliver good presentations.
Not all good presenters write good abstracts.

Something must take precedence, some attribute must win. If you are organizing an event and / or selecting speakers, you probably don’t have the option of selecting great presenters who will deliver great sessions and who are awesome at writing persuasive marketing abstracts.

Should all presenters strive to write better abstracts? Goodness yes. But unless you’re planning Microsoft Ignite or Build or re:Invent or a TEDx, you’re probably not going to be in the position to choose among the great presenters with great abstract-writing skills. Even if this is so, you’re not always going to get it right. They don’t! I’ve been to enough global conferences and seen enough bad presentations to know.

So presenters, strive to write better abstracts.
And selectors, select some presenters who submit poorly-written abstracts. For goodness’ sake: If spelling errors bug you, correct the spelling before you publish anything. That’s not even hard.

</SoapBox>

Regarding Offense

I am of two minds as I begin this section. Here’s why:

  1. There are things I find offensive that, for me, go beyond the pale, crossing the border between acceptable and unacceptable in a public forum meant to convey knowledge to attendees.
  2. There are things I find offensive that I will tolerate in order to gain the knowledge I seek.

Stepping into a conference session, I sometimes do not know what to expect. I may be unfamiliar with the speaker. Perhaps their speaking style is abrasive – or at least abrasive in my opinion.

Perhaps the speaker uses excessive profanity. I can hear you thinking, “Who gets to define excessive, Andy?” The attendees get to define excessive. As an attendee, I get to define excessive.

I read about a presentation a few years ago where the speaker created a pornographic demonstration. Many in attendance found the presentation offensive, most found it unprofessional. I hope the speaker – obviously a talented technologist – learned from this mistake.

These are examples of the first type of offense, those I believe are unacceptable.

Perhaps the speaker mentions politics in jest and I disagree with their politics, so I feel belittled when those surrounding me in the room laugh at the joke. I may feel they’re laughing at me. They are certainly laughing at people like me.

Perhaps the speaker jokes about the less-sophisticated in our culture. I’ve heard the terms “hick” and “redneck” bandied about, for example – terms with which I personally identify. I have people in Appalachia, from whence my family comes. “Trailer trash” was a term I heard years ago in a presentation. My teenage years were spent living in a mobile home. In a trailer.

Do these references offend me?
They do.

Do I tolerate these offenses?
I do.

You may read that last section and think something along the lines of, “Well, they must not offend you that much if you tolerate them, Andy.” To which I respond, “That’s not your call.” You do not get to make such an assertion. You lack the ability to see inside my mind and inside my heart, so you cannot accurately render judgment as to what goes on there. Further, you do not dictate how I think or choose to respond, publicly or privately. You do not have that right of imposition; not over me.

These are examples of the second type of offense; those I choose to tolerate.

Waxing Philosophical…

“How should we then live?” is a question posed by theologian, philosopher, and pastor Francis A. Schaeffer – it’s the title of one of his books. It’s a fair question – especially in an age that considers outrage a virtue. There are, I believe, some truths:

Offense can be intended.
Offense can be unintended.
Regardless of the motive of the offender, offense must be taken in order for the offended to be, well, offended.

I believe motive counts.

Does motive excuse the offender then? Not completely, in my opinion. That said, unintended offense deserves a more mitigated response than intended offense – again, in my humble opinion.

I write this as someone who has offended others unintentionally.
I write this as someone who has been offended intentionally and unintentionally.

When I have offended others, I most often apologized. I’ve learned the earlier the apology is offered, the better. I’ve failed – sometimes for years – to apologize for some offenses I’ve caused. This is partially due to ignorance on my part – me not realizing until later that I owed someone an apology. Sometimes I’ve just been stubborn (a virtue for an engineer… which is one reason I warn people that I am an engineer…).

I will likely offend people in the future. Offending people is not my goal and certainly not my intention.

I will most likely offend when trying to make a joke – as others have offended me while trying to make a joke. This has happened to me in the past. It’s cost me relationships, both professionally and personally.

In these cases, I bear the loss and I am profoundly sorry.

Please read and understand this: There are principles – and a Person – in which (and in Whom) I believe. I value my faith more than I eschew offense. My weak, flawed, and hypocritical following of Christ will offend a handful of people, some of whom also follow Christ. Knowing this does not deter, defer, or lessen my beliefs or my commitment thereto. If this offends you, you will have to decide how you respond. If my faith offends you, I believe you will have to also respond at least once more in the future. So if my faith offends you, I pray (and I never say or write the words “I pray” without actually praying) that you take care in your current response.

I share more about my faith in the Personal section of the About Andy page.

Peace.

Viewing SSIS Configurations Metadata in SSIS Catalog Browser

SSIS Catalog Browser is a pretty neat product. “How neat is it, Andy?” I’m glad you asked.

It’s free. That makes it difficult to beat the cost.

SSIS Catalog Browser is designed to surface all SSIS Catalog artifacts and properties in a single view. “What exactly does that mean, Andy?” You’re sharp. Let’s talk about why surfacing everything in a single view is important.

Before I go on, you may read what I’m about to write here and in the companion post and think, “Andy doesn’t like the Integration Services Catalogs node in SSMS.” That is not accurate. I do like the Integration Services Catalogs node in SSMS. It surfaces enough information for the primary target user of SSMS – the Database Administrator – to see what they need to see to do their job, without “cluttering up” their interface with stuff that they rarely need to see and even more-rarely change.

In the companion post I shared this image of the windows (and pages) you need to open in SSMS to view the configured execution-time value of a parameter that is mapped via reference:

(click to enlarge)

That’s a lot of open windows.

So how does one view the same configuration metadata in SSIS Catalog Browser?

Under the Project node (LiftAndShift), we find a virtual folder that holds Project Parameters.

In Project Parameters, we find a reference mapping – indicated by underlined text decoration – between the parameter (ProjectParameter) and the SSIS Catalog Environment Variable (StringParameter).

Expanding the reference mapping node surfaces References. There are two references named env1 and env2. Since references can reference SSIS Catalog Environments in other Catalog folders, the fully-qualified path to each SSIS Catalog environment is shown in folder/environment format.

Expanding each reference node surfaces the value of the SSIS Catalog Environment Variable in each SSIS Catalog Environment.

I call this feature Values Everywhere, and I like it. A lot.
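
Under the hood, this is roughly the data Values Everywhere surfaces – something like the following query against the catalog views (folder resolution is simplified for readability):

USE SSISDB;
GO
-- The value of StringParameter in every environment referenced by the
-- LiftAndShift project:
SELECT er.environment_folder_name
     , er.environment_name
     , ev.name  AS environment_variable_name
     , ev.value AS environment_variable_value
  FROM catalog.environment_references AS er
  JOIN catalog.projects               AS p  ON p.project_id = er.project_id
  JOIN catalog.environments           AS e  ON e.name = er.environment_name
  JOIN catalog.environment_variables  AS ev ON ev.environment_id = e.environment_id
 WHERE p.name  = N'LiftAndShift'
   AND ev.name = N'StringParameter';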

Values Everywhere From the Project Reference Perspective

Values Everywhere is perspective-aware. Whereas from the perspective of an SSIS Project Parameter, Values Everywhere surfaces the reference mapping in the format parameter–>environment variable, in the Project References virtual folder, Values Everywhere surfaces the same relationship as environment variable–>parameter:

Values Everywhere perspectives follow intuition when surfacing reference mapping relationships. (Did I mention I like this feature? A lot?)

Conclusion

SSIS Catalog Browser provides a clean interface for enterprise Release Management and Configuration teams. And it’s free.

I can hear you thinking, “Why is Catalog Browser free, Andy?” I give away Catalog Browser to demonstrate the surfacing capabilities of SSIS Catalog Compare.

SSIS Catalog Compare

SSIS Catalog Compare not only surfaces two SSIS Catalogs side by side, it also lets you compare the contents of the Catalogs:

You can also script an entire SSIS Catalog, which produces T-SQL scripts and ISPAC files for every artifact in the SSIS Catalog (organized by instance and folder):

You can also deploy all artifacts contained in an SSIS Catalog Folder from one SSIS Catalog to another:

This functionality is an efficient method for Data Integration Lifecycle Management – or DevOps – with SSIS.

SSIS Catalog Compare even works with Azure Data Factory SSIS Integration Runtime, so you can use SSIS Catalog Compare to lift and shift SSIS from on-premises Catalogs to the cloud.

Viewing SSIS Configurations Metadata in SSMS

Let’s take a look at an SSIS Project in SSMS:

Demo is the SSIS Catalog folder. Demo contains three SSIS projects named 0-Monolith, EmptySSISProject, and LiftAndShift.

If we expand the LiftAndShift SSIS project we see it contains a single SSIS package named Load Customer.dtsx.

The Demo folder contains two Catalog environments named env1 and env2.

These are the artifacts surfaced via the Integration Services Catalogs node in SSMS.

Before I go on, you may read what I’m about to write here and in the companion post and think, “Andy doesn’t like the Integration Services Catalogs node in SSMS.” That is not accurate. I do like the Integration Services Catalogs node in SSMS. It surfaces enough information for the primary target user of SSMS – the Database Administrator – to see what they need to see to do their job, without “cluttering up” their interface with stuff that they rarely need to see and even more-rarely change.

But Wait, There’s More

There are configurations that are not readily surfaced in the Integration Services Catalogs node in SSMS. To view the configurations metadata for the LiftAndShift SSIS project, right-click the project and click Configure:

The Configure window displays:

From this window I can see examples of the three sources of values for SSIS project and package parameters:

  1. Design-time defaults
  2. Literals
  3. Reference mappings
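
If you prefer querying to clicking, the same metadata surfaces in the catalog.object_parameters view in the SSISDB database. A rough sketch (the WHERE clause assumes the LiftAndShift project from this post):

USE SSISDB;
GO
-- As I read the view: value_type 'V' = literal, 'R' = reference-mapped;
-- value_set = 1 means a server value has been configured to override the
-- design-time default.
SELECT op.object_type                        -- 20 = project, 30 = package
     , op.object_name
     , op.parameter_name
     , op.design_default_value               -- design-time default
     , op.default_value                      -- configuration literal, if set
     , op.value_set
     , op.value_type
     , op.referenced_environment_variable    -- mapped environment variable, if any
  FROM catalog.object_parameters AS op
  JOIN catalog.projects          AS p
    ON p.project_id = op.project_id
 WHERE p.name = N'LiftAndShift';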

Design-Time Defaults

Design-time defaults are the default values entered by the SSIS developer when building the SSIS project. These values remain with the SSIS project as it is deployed. They are not changed unless and until another version of the SSIS project is deployed, one which contains different default values for parameters.

Design-time defaults are denoted by no text decoration.

Literals

There are actually two kinds of literal values:

  1. Configuration literals
  2. Execution literals

Configuration Literals

Configuration literals are stored in the SSIS Catalog. The values are “hard-coded” to override design-time defaults when the SSIS package is executed.

Configuration literals are denoted by bold text decoration.

When the SSIS project was first deployed to the SSIS Catalog, the value of ProjectLiteralParameter was the design-time default. We can see the design-time default value of ProjectLiteralParameter if we open the Configure window and click the ellipsis beside the Value cell for the parameter:

The design-time default value for ProjectLiteralParameter is “Project Default Value” shown circled in green above. The Configuration Literal value is “Project Literal Value” and is shown inside red and blue boxes in both the Set Parameter Value dialog and the Configure project window.
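
For the script-inclined, setting that configuration literal from T-SQL is one call to catalog.set_object_parameter_value. A sketch, using the folder, project, and parameter names from this demo:

USE SSISDB;
GO
-- Store a configuration literal that overrides the design-time default
-- every time the package executes:
EXEC catalog.set_object_parameter_value
     @object_type     = 20                          -- 20 = project parameter
   , @folder_name     = N'Demo'
   , @project_name    = N'LiftAndShift'
   , @parameter_name  = N'ProjectLiteralParameter'
   , @parameter_value = N'Project Literal Value'
   , @value_type      = 'V';                        -- 'V' = literal value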

Execution Literals

Execution literals may be supplied in the Execute Package window. The Execute Package window is displayed when a user right-clicks an SSIS package and then clicks Execute:

Please note the Configuration literals – the values shown in the previous Configuration window with bold text decoration – are shown with no text decoration in the Execute Package window:

(click to enlarge)

Why are the Configuration literals not identified as such in the Execute Package window? The reason is that these values can be overridden as Execution literals.

If I click the ellipsis beside the value cell for the parameter named ProjectLiteralParameter, I can type in a value – an execution literal value – that will override the value of the ProjectLiteralParameter when the SSIS package is executed. The text decoration – in the Execute Package window – for an Execution Literal is bold:

This is a little complex so please bear with me as we examine what just happened.

When the SSIS project was first deployed to the SSIS Catalog, the value of ProjectLiteralParameter was the design-time default. We saw this value in the image above, repeated here:

The design-time default value for ProjectLiteralParameter is “Project Default Value” shown circled in green above. The Configuration Literal value is “Project Literal Value” and is shown inside red and blue boxes in both the Set Parameter Value dialog and the Configure project window.

When we open the Execute window, the Configuration literal value – “Project Literal Value” – is shown with no text decoration (as shown earlier):

(click to enlarge)

We can override the configured override for this execution of the SSIS package and this execution only by clicking the ellipsis to the right of the Value cell for ProjectLiteralParameter. The ellipsis opens the Edit Literal Value for Execution dialog into which we can enter an Execution literal value:

Please note: Execution literal values are not persisted in the SSIS Catalog. They are applied for the current execution – the execution triggered when the user clicks the OK button on the Execute Package window – and that execution only. If the user clicks the Cancel button and then re-opens the Execute Package window, all values revert to their Configured state.
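
Behind the scenes, supplying an execution literal is what catalog.set_execution_parameter_value does between catalog.create_execution and catalog.start_execution. A sketch using the demo names (the override value here is made up):

USE SSISDB;
GO
DECLARE @execution_id bigint;

-- Create an execution of the demo package (no environment reference here)...
EXEC catalog.create_execution
     @folder_name     = N'Demo'
   , @project_name    = N'LiftAndShift'
   , @package_name    = N'Load Customer.dtsx'
   , @use32bitruntime = 0
   , @reference_id    = NULL
   , @execution_id    = @execution_id OUTPUT;

-- ...override ProjectLiteralParameter for this execution, and this execution only...
EXEC catalog.set_execution_parameter_value
     @execution_id    = @execution_id
   , @object_type     = 20                          -- 20 = project parameter
   , @parameter_name  = N'ProjectLiteralParameter'
   , @parameter_value = N'Execution Literal Value';

-- ...and start it.
EXEC catalog.start_execution @execution_id = @execution_id;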

Reference Mappings

Reference mappings are a mechanism for externalizing configurations in the SSIS Catalog. References are an elegant solution for configurations and release management. As with all flexible solutions, they are complex.

Reference mappings begin with References, and references begin with SSIS Catalog Environments (did I mention this was complex?).

Not discussed: Execution literals may also be used to override parameters that are reference-mapped. More on reference mappings in a bit…

SSIS Catalog Environments

An SSIS Catalog Environment is a container that holds SSIS Catalog Environment Variables. An SSIS Catalog Environment has the following configurable properties:

  • Name
  • Description (optional)

An SSIS Catalog Environment Variable has the following configurable properties:

  • Name
  • Type
  • Description (optional)
  • Value
  • Sensitive
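
If you would rather script environments than click through SSMS, here is a sketch of the equivalent T-SQL for the env1 environment and the StringParameter variable used in this post (the variable value is made up):

USE SSISDB;
GO
-- Create the env1 environment in the Demo folder...
EXEC catalog.create_environment
     @folder_name             = N'Demo'
   , @environment_name        = N'env1'
   , @environment_description = N'Demo configuration values';

-- ...and add a String environment variable to it:
EXEC catalog.create_environment_variable
     @folder_name      = N'Demo'
   , @environment_name = N'env1'
   , @variable_name    = N'StringParameter'
   , @data_type        = N'String'
   , @sensitive        = 0
   , @value            = N'env1 String Value'
   , @description      = N'Value mapped to ProjectParameter';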

References

A reference is a relationship between an SSIS Catalog Environment and an SSIS project (or SSIS package) that is deployed to an SSIS Catalog:

References are configured in the Configuration window on the References page:
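
Scripted, creating the same reference is one call to catalog.create_environment_reference. A sketch, again with the demo names:

USE SSISDB;
GO
DECLARE @reference_id bigint;

-- Reference env1 from the LiftAndShift project. 'R' = relative: the environment
-- is resolved in the project's own folder. Use 'A' (absolute) plus
-- @environment_folder_name to reference an environment in another folder.
EXEC catalog.create_environment_reference
     @folder_name      = N'Demo'
   , @project_name     = N'LiftAndShift'
   , @environment_name = N'env1'
   , @reference_type   = 'R'
   , @reference_id     = @reference_id OUTPUT;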

Reference Mappings

A reference mapping applies (“maps”) the value of an SSIS Catalog Environment Variable – accessed via a reference – to an SSIS Project (or SSIS package) parameter.

Reference mappings are denoted by underlined text decoration:

In the image above, StringParameter is the name of the SSIS Catalog Environment Variable mapped to the parameter named ProjectParameter. This means the value contained in the SSIS Catalog Environment Variable named StringParameter will be used to override the value of the parameter named ProjectParameter at execution time.
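
In T-SQL terms, that reference mapping is one call to catalog.set_object_parameter_value with @value_type = 'R'. A sketch, using the demo names:

USE SSISDB;
GO
-- Map the StringParameter environment variable to the ProjectParameter project
-- parameter. With @value_type = 'R', @parameter_value holds the *name* of the
-- environment variable rather than a literal value.
EXEC catalog.set_object_parameter_value
     @object_type     = 20                    -- 20 = project parameter
   , @folder_name     = N'Demo'
   , @project_name    = N'LiftAndShift'
   , @parameter_name  = N'ProjectParameter'
   , @parameter_value = N'StringParameter'    -- environment variable name
   , @value_type      = 'R';                  -- 'R' = reference-mapped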

There are several components of a Reference Mapping configuration. If we start our description at the parameter, feel free to sing along to our own version of The Skeleton Song substituting the following:

  • The parameter is part of the SSIS project
  • The project references the SSIS Catalog Environment
  • The SSIS Catalog Environment contains the SSIS Catalog Environment Variable
  • The SSIS Catalog Environment Variable’s value property overrides the parameter value at execution time.

Not discussed: Multiple references and reference selection at execution time.

Viewing the Execution-Time Value

You can find the value that will be used at execution-time using SSMS.

View Design-Time Default and Configuration Literal Values

Design-time default and configuration literal values are available from the Configuration window:

Reference Mapping Values

You have to open a few windows – and a couple of pages on the same window (Configure) – to surface the execution-time value of a parameter that is mapped via reference:

(click to enlarge)
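
You can also piece the execution-time value together from the catalog views. Here is a hedged sketch – it assumes a single referenced environment and simplifies folder resolution:

USE SSISDB;
GO
-- For every reference-mapped parameter in the LiftAndShift project, surface the
-- execution-time value supplied by each referenced environment.
SELECT op.object_name
     , op.parameter_name
     , op.referenced_environment_variable
     , er.environment_folder_name
     , er.environment_name
     , ev.value AS execution_time_value
  FROM catalog.object_parameters      AS op
  JOIN catalog.projects               AS p  ON p.project_id = op.project_id
  JOIN catalog.environment_references AS er ON er.project_id = p.project_id
  JOIN catalog.environments           AS e  ON e.name = er.environment_name
  JOIN catalog.environment_variables  AS ev ON ev.environment_id = e.environment_id
                                           AND ev.name = op.referenced_environment_variable
 WHERE op.value_type = 'R'                 -- reference-mapped parameters only
   AND p.name = N'LiftAndShift';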

It’s… complicated. And it’s even more complex if we include multiple references to different SSIS Catalog Environments.

This last image is why I wrote SSIS Catalog Browser.

Catalog Browser is free and I write about how to see these same SSIS Catalog artifacts in the companion post titled Viewing SSIS Configurations Metadata in SSIS Catalog Browser.

It is Possible to Execute SSIS in a Container

As the title of this post suggests, it is possible to execute SSIS in a container. How? I’ll share more details later – promise. One short version is:

  1. Run a container with the microsoft/mssql-server-windows-developer image (assign lots of RAM).
  2. Add the SQL Server installation files to the container.
  3. Execute SQL Server setup from a command line configured to add Integration Services (this is why you need the RAM).
  4. Add SSIS.
  5. Execute from the command line (dtexec).
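
For the impatient, here is the rough shape of those steps from the command line. This is a sketch, not a verified recipe: the image name is the one from the list above, while the container name, paths, password, and package name are placeholders, and the exact setup switches depend on your installation media:

:: 1. Run a Windows SQL Server developer container with plenty of RAM:
docker run -d --name ssis-test --memory 8g ^
  -e ACCEPT_EULA=Y -e sa_password=SomeStr0ng!Password ^
  microsoft/mssql-server-windows-developer

:: 2. Copy the SQL Server installation media into the container:
docker cp C:\SQLServerSetup ssis-test:C:\SQLServerSetup

:: 3. Add Integration Services from the command line (quiet install):
docker exec ssis-test C:\SQLServerSetup\setup.exe /Q /ACTION=Install /FEATURES=IS /IACCEPTSQLSERVERLICENSETERMS

:: 4. Copy an SSIS package into the container:
docker cp C:\ssis\LoadCustomer.dtsx ssis-test:C:\LoadCustomer.dtsx

:: 5. Execute it with dtexec (140 = SQL Server 2017; adjust for your version):
docker exec ssis-test "C:\Program Files\Microsoft SQL Server\140\DTS\Binn\DTExec.exe" /FILE "C:\LoadCustomer.dtsx"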

The results?

SSIS Catalog in a Container?

I can hear you thinking, “Yeah? But what about using the SSIS Catalog in a container, Andy? Huh? HUH?”

Calm down, will ya? I’m working on it.

I’ve benefited from some great advice and help from Ned Otter (nedotter.com | @NedOtter). I caught Ned’s presentation on containers at SQL Saturday Boston 22 Sep 2018. I’d seen other presentations on containers and learned a lot. Ned’s session put me over the top.

I can hear you thinking, ” Yeah-yeah. Have you made any progress, Andy?”

Well, some:

Above is a screenshot of an SSIS 2017 Catalog hanging out in a container. In this state I cannot do much with it because I am connected using a SQL Server login (even though said login is sa).

I was able to “create” a Catalog folder in this Catalog:

I write “create” in quotes because I “created” the folder in an unconventional manner (executing pieces of SSISDB stored procedures).

I may be able to deploy an SSIS project – again, via some creative combinations of pieces of SSISDB stored procedures. If so, I doubt I will be able to execute an SSIS package in this Catalog.

The only way I can think to accomplish this is to bypass some Catalog security checks which means there is a distinct possibility that important steps will be missed or skipped.

My gut tells me this would be bad.

The best path forward is to find some way to connect to SQL Server in the container using Windows Authentication. I’ve been communicating with my brother and friend, Brian Kelley (truthsolutions.wordpress.com | @kbriankelley) about this but have yet to solve it.

I plan to publish the image once I figure out how.

Conclusion

I am excited about executing SSIS in a container from the command line, though. Very excited.

More to come…

:{>