On Output…

I’m going to be a little bold in this post and suggest that if you are developing for SQL Server, the screenshot to the left and above shows something that is, well, wrong. I can hear you thinking,

“What’s Wrong With That Output, Andy?”

I’m glad you asked. I will answer with a question: What just happened? What code or command or process or… whatever… just “completed successfully”? Silly question, isn’t it? There’s no way to tell what just happened simply by looking at that output.

And that’s the point of this post:

You don’t know what just happened.

Sit Back and Let Grandpa Andy Tell You a Story

I was managing a large team of ETL developers and serving as the ETL Architect for a large enterprise developing enterprise data engineering solutions for two similar clients. Things were winding down and we were in an interesting state with one client – somewhere between User Acceptance Testing (UAT) and Production. I guess you could call that state PrUAT, but I digress…

The optics got… tricksy… with the client in PrUAT. Vendors were not being paid due to the state of our solution. The vendors (rightfully) complained. One of them called the news media and they showed up to report on the situation. Politicians became involved. To call the situation “messy” was accurate but did not convey the internal pressure on our teams to find and fix the issue – in addition to fixing all the other issues.

There were fires everywhere. In this case, one of the fires had caught fire.

Things. Were. Ugly.

My boss called and said, “Andy, can you fix this issue?” I replied, “Yes.” Why? Because it was my job to fix issues. Fixing issues and solving problems is still my job (it’s probably your job too…). I found and corrected the root cause in Dev. As ETL Architect, I exercised my authority to make a judgment call, promoted the code to Test, tested it, documented the test results, created a ticket, and packaged things up for deployment to PrUAT by the PrUAT DBAs.

Because this particular fire was on fire, I also followed up by calling Geoff, the PrUAT DBA I suspected would be assigned this ticket. Geoff was busy (this is important, don’t forget this part…) working on another fire-on-fire, and told me he couldn’t get to this right now.

But this had to be done.
Right now.

I thanked Geoff and hung up the phone. I then made another judgment call and exercised yet more of my ETL Architect authority. I assigned the PrUAT ticket to myself, logged into PrUAT, executed the patch, copied the output of the execution to the Notes field of the ticket (as we’d trained all DBAs and Release Management people to do), and then manually verified the patch was, in fact, deployed to PrUAT.

I closed the ticket and called my boss. “Done. And verified,” I said. My boss replied, “Good,” and hung up. He passed the good news up the chain.

A funny thing happened the next morning. And by “funny,” I mean no-fun-at-all. My boss called and asked, “Andy? I thought you said the patch was deployed to PrUAT.” I was a little stunned, grappling with the implications of the accusation. He continued, “The process failed again last night and vendor checks were – again – not cut.” I finally stammered, “Let me check on it and get back to you.”

I could ramble here. But let me cut to the chase. Remember Geoff was busy? He was working a corrupt PrUAT database issue. How do you think he solved it? Did you guess restore from backup? You are correct, if so. When did Geoff restore from backup? Sometime after I applied the patch. What happened to my patch code? It was overwritten by the restore.

I re-opened the ticket and assigned it to Geoff. Being less-busy now, Geoff executed the code, copied the output into the Notes field of the ticket (as we’d trained all DBAs and Release Management people to do), and then closed the ticket. The next night, the process executed successfully and the vendor checks were cut.

“How’d You Save Your Job, Andy?”

That is an excellent question because I should have been fired. I’m almost certain the possibility crossed the mind of my boss and his bosses. I know I would have fired me. The answer?

Output.
Documented output, to be more precise.

You see, the output we’d trained all DBAs and Release Management people to copy and paste into the Notes field of the ticket before closing the ticket included enough information to verify that both Geoff and I had deployed code with similar output. It also contained date and time metadata about the deployment, which is why I was not canned.

Output Matters

Compare the screenshot at the top of this post to the one below (click to enlarge).

This T-SQL produces lots of output. That’s great. Sort of.
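What might that extra typing look like? Here is a minimal sketch of an instrumented patch script – the patch name, version, and messages are made up for illustration, and your shop’s conventions may differ:

```sql
/* Hypothetical instrumented patch script.
   The version number and message text are illustrative only. */
DECLARE @msg nvarchar(256);

SET @msg = N'Patch v2.1.7 starting on ' + @@SERVERNAME
         + N' at ' + CONVERT(nvarchar(30), SYSDATETIME(), 121)
         + N' as ' + SUSER_SNAME();
RAISERROR(@msg, 0, 1) WITH NOWAIT; -- severity 0 prints the message; NOWAIT flushes it immediately

/* ... the actual patch statements go here ... */

SET @msg = N'Patch v2.1.7 completed at '
         + CONVERT(nvarchar(30), SYSDATETIME(), 121);
RAISERROR(@msg, 0, 1) WITH NOWAIT;
```

Output like this – who ran it, where, what version, and when – is exactly the kind of metadata that belongs in the Notes field of a ticket.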

“There’s no free lunch” is a saying that conveys that every good thing (like lunch) costs something (“no free”). And that’s true – especially in software development. Software design is largely an exercise in balancing mutually exclusive and competing requirements and demands.

If it were easy, anyone could do it.

It’s not easy. It takes experienced developers years to develop (double entendre intended) the skills required to design software – and even more years of varied experience to build the skills required to be a good software architect.

The good news: the output is awesome.
The bad news: the output is a lot of typing.

“So Why, Andy? Why Do All The Typing?”

You’ve probably heard of technical debt. This is the opposite of technical debt; this is a technical investment.

Technical investments are time and energy spent early (or earlier) in the software development lifecycle that produce technical dividends later in the software development lifecycle. (Time and energy invested earlier in the project lifecycle always cost less than time and energy invested later. I need to write more about this…) What are some examples of technical dividends? Well, not-firing-the-ETL-Architect-for-doing-his-job leaps to mind.

This isn’t the only technical dividend, though. Knowing that the code was deployed is important to the DevOps process. Instrumented code is verifiable code – whether the instrumentation supports deployment or execution. Consider the alternative: believing the code has been executed.

That’s not DevOps. That’s wishful thinking.

Measuring Technical Dividends

Measuring technical dividends directly is difficult but possible. It’s akin to asking the question, “How much downtime did we avoid by having good processes in place?” The answer to that question is hard to capture. You can get some sense of it by tracking the mean time to identify a fault, though – as measured by the difference between the time someone begins working the issue and the time when they identify the root cause.

Good instrumentation reduces mean time to identify a fault.
Knowing is better than guessing or believing.
The extra typing required to produce good output is worth it.
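If your ticketing system lands in a queryable table, a rough sketch of that measurement might look like this – the table and column names here are hypothetical:

```sql
/* Hypothetical ticket table: mean time (in minutes) between
   starting work on an issue and identifying its root cause. */
SELECT
    AVG(DATEDIFF(MINUTE, t.WorkStarted, t.RootCauseIdentified)) AS MeanMinutesToIdentify
FROM dbo.IncidentTickets AS t
WHERE t.RootCauseIdentified IS NOT NULL;
```

Track that number over time and you can watch the dividend from good instrumentation accrue.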

Good Output

In this age of automation, good output may not require extra typing. Good output may simply require another investment – one of money traded for time. There are several good tools available from vendors that surface awesome reports regarding the state of enterprise software, databases, and data. DevOps tools are maturing and supporting enterprises willing to invest the time and energy required to implement them.

One such tool is SSIS Catalog Compare which generated the second screenshot. (Full disclosure: I built SSIS Catalog Compare.)

SSIS Catalog Compare generates scripts and ISPAC files from one SSIS Catalog that are ready to be deployed to another SSIS Catalog. Treeview controls display Catalog artifacts, surfacing everything related to an SSIS Catalog project without the need to right-click and open additional windows. (You can get this functionality free by downloading SSIS Catalog Browser. Did I mention it’s free?)

In addition, SSIS Catalog Compare compares the contents of two SSIS Catalogs – like QA and PrUAT, for example. Can one compare catalogs using other methods? Yes. None are as easy, fast, or complete as SSIS Catalog Compare.

Discount!

For a limited time you can get SSIS Catalog Compare for 60% off. Click here, buy SSIS Catalog Compare – the Bundle, the GUI, or CatCompare (the command-line interface) – and enter “andysblog” without the double-quotes as the coupon code at checkout.

Conclusion

Whether you use a tool to generate scripts or not, it’s a good idea to make the technical investment of instrumenting your code – T-SQL or other. Good instrumentation saves time and money and allows enterprises to scale by freeing-up people to do more important work.

:{>

Introduction to the SSIS Lifecycle – 7 Jun 2018

Join me Thursday, 07 Jun for a free webinar: Introduction to the SSIS Lifecycle!

  • How should an enterprise promote SSIS projects from Development to Production?
  • How many “levels” are required?
  • What are the best practices?
  • Do SSIS lifecycle management tools exist?

Join Andy Leonard – SSIS author, trainer, and consultant – to learn the answers to these questions. In this webinar, Andy discusses and demonstrates the SSIS lifecycle.

Register today!

:{>

“You Do Not Know What You Are Doing”

Peeves make lousy pets.

Knowing this doesn’t help; I still keep a few pet peeves. One of my pet peeves is this statement: “You don’t know what you are doing.” Why is this a pet peeve? It denies the obvious fact that every one of us, everywhere, is still learning.

“My Name is Andy and I Own and Operate a Consulting Company.”

“But Andy, you don’t know how to own or operate a consulting company.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about owning and operating a consulting company, but I can learn.

“My Name is Andy and I Built a Software Product.”

“But Andy, you don’t know how to build a software product.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about building a software product, but I can learn.

Interesting sidebar: SSIS Catalog Compare is not only the first product I’ve ever written; it’s also the first complete application I’ve written in C#.

“My Name is Andy and I Co-Host a Successful Podcast”

“But Andy, you don’t know how to co-host a successful podcast.” That may or may not be a true statement. What is a truer statement? I may not know everything there is to know about co-hosting a successful podcast, but I can learn.

I Can Learn

I know I can learn because I have demonstrated this fact many times over. I proved it last month (April 2018, as I write this) when I completed the Microsoft Professional Program for Big Data. I proved it by learning enough C# to write Catalog Compare, Catalog Browser, and Framework Browser.

I promise I am learning more every day about owning and operating Enterprise Data & Analytics and building and managing the software solutions and products that make up the DILM Suite – including  products like SSIS Catalog Compare and the SSIS Framework – and co-hosting Data Driven, with Frank La Vigne (@Tableteer).

“I couldn’t so you shouldn’t.”

What I Know

What is someone truly saying – what do they truly mean – when they say or write someone doesn’t know what they’re doing?

They’re making this statement about themselves: “I couldn’t so you shouldn’t.”

No one brings this point home better than Grant Cardone in his book (get the audio book – you are welcome), Be Obsessed or Be Average, or #BOBA. The followup to his (awesome) book, The 10X Rule, Be Obsessed or Be Average complements and completes Cardone’s thoughts on the hard work and time required to achieve success.

“What is the Point, Andy?”

When people make statements like “You don’t know what you are doing,” they are saying, “I gave up so you should give up, too,” or, “I didn’t get what I wanted so you don’t deserve what you want, either.”

This is very “fair” thinking.

When I write the word “fair” I shudder at what “fair” has come to mean – at the junk it’s been used to justify and the crap it’s been used to rationalize.

Conclusion

I am not going to quit learning.
I will continue to try to make old things work better.
I will continue to try new things.
I will fail more often than I succeed (this is how I learn).
I will not stop until I go home.

My advice, encouragement, exhortation:

  • Don’t quit.
  • Make the problems give up before you do.
  • Listen to people who have succeeded (or are succeeding).
  • Do not listen to people who have given up.

I have more to learn and I know that.

Peace,
Andy

How I Learn

This is a picture of how I learn.

These are executions of an Azure Data Factory version 2 (ADFv2) pipeline. The pipeline is designed to grab data collected from a local weather station here in Hampden-Sydney Virginia – just outside Farmville – and piped to Azure Blob Storage via AzCopy.

How do I learn?

  • Fail
  • Fail
  • Fail
  • Fail
  • Fail
  • Fail
  • Succeed

My advice? Keep learning!

Catalog Browser, Version 0.6.2.0

If you’ve read this blog for a short time, you already know I have a passion for DevOps. You probably guessed I also have a passion for data engineering (My job title at Enterprise Data & Analytics is Chief Data Engineer).

I believe successful software development is a combination of a software development platform, a developer, and the developer’s skill developing on that platform. I like SSIS as a data engineering platform. While I absolutely enjoy learning about new data engineering platforms, I love SSIS!

It is in this context that I built the DILM (Data Integration Lifecycle Management) Suite.

I’m excited to announce an update to Catalog Browser, one of the (many) free utilities in the DILM Suite. In this release I improved a feature called Values Everywhere.

One thing I dislike about the Integration Services Catalogs node of the SSMS Object Explorer is how many windows I have to open to determine the value of a reference-mapped Environment Variable. Values Everywhere addresses this by placing the Environment Variable values in a subnode of the reference mapping:

Catalog Browser first displays the reference mapping in the context of the environment named DEV_Person. DEV_Person is a Catalog Environment that contains a collection of Catalog Environment Variables.

Catalog Browser next displays the reference mapping in the context of the SSIS Connection Manager named AdventureWorks2014.OLEDB that consumes the Reference between the DEV_Person environment and the Load_Person.dtsx SSIS package. Note that this Reference Mapping is displayed as <Property Name> –> <Environment Variable Name>, or “ConnectionString –> SourceConnectionString”. Why? Catalog Browser is displaying the Reference Mapping from the perspective of the Connection Manager property.

The third instance of Values Everywhere is shown in the Package Connection References node. Remember, a reference “connects” a package or project to an SSIS Environment Variable (learn more at SSIS Catalog Environments – Step 20 of the Stairway to Integration Services). From the perspective of the reference, the reference mapping is displayed as <Environment Variable Name> –> <Property Name>, or “SourceConnectionString –> ConnectionString”. Why? Catalog Browser is displaying the Reference Mapping from the perspective of the Reference.
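If you prefer T-SQL to a treeview, the same environment variable values can be read from the SSISDB catalog views – this is a sketch, and sensitive values will come back NULL unless you hold the appropriate permissions:

```sql
/* List each Catalog Environment (e.g. DEV_Person) and its variables. */
SELECT
    e.name  AS environment_name,
    ev.name AS variable_name,
    ev.sensitive,
    ev.value
FROM SSISDB.catalog.environments AS e
JOIN SSISDB.catalog.environment_variables AS ev
    ON ev.environment_id = e.environment_id
ORDER BY e.name, ev.name;
```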

SSIS Catalog References and Reference Mappings may seem complex. There’s a good reason for that: They are complex. References and Reference Mappings are also an elegant solution to externalization, which is not an easy problem to solve. While it is difficult to learn how to configure and manage references and reference mappings, it’s totally worth it. It’s the opposite of technical debt; it’s a technical investment. Once the investment is made, enterprises reap the rewards of centralized and more manageable and more-easily-supported data engineering solutions for the life of the solution. Technical investments save time and money. Are they easy? No. Most often technical debt is easier to learn and do – that’s why technical debt plagues enterprises and will always plague enterprises. Easy is expensive.

Check out the latest version of Catalog Browser and let me know what you think.

:{>

T-SQL Tuesday 102: Giving Back

I enjoy the privilege and honor of participating in the SQL Server Community alongside many who read this blog. And I know from past experience that some reading this post will make the switch from consumers-only in our community to sharers and consumers one day.

For my part, I consume more than I contribute. I am thankful to the many folks who contribute with their blog posts, tweets, social media posts, YouTube videos, podcasts, and other media. I enjoy personal interaction most of all and have thoroughly enjoyed conversations with folks in the community at SQL Saturdays!

I learn way more than I teach.

Beyond the SQL Server Community

Many in the SQL Server Community serve their local communities in civil capacities. Some volunteer as first-responders and fire-fighters, for example. I know one person who serves on the local school board. Several are active in churches, charities, and religious organizations; and many dedicate time to local schools and/or alternative educational activities such as homeschool organizations.

I’m honored to be part of this community and to serve alongside all of you every day. We in the SQL Server Community participate in an awesome community!

:{>

Stack Overflow Wants to Change

This week Stack Overflow shared a blog post titled Stack Overflow Isn’t Very Welcoming. It’s Time for That to Change. I admire the courage and the transparency a great deal. It takes guts to admit mistakes and serious guts to publicly air the dirty laundry.

Kudos to Stack Overflow.

Why I Stopped Contributing to the Stack Overflow Conversation

I use Stack Overflow to find answers to coding questions. I searched for me (site:stackoverflow.com “Andy Leonard”) and found some hits though I’m sure not all of them refer to me personally.

I posted a few answers to SSIS questions early on. I write conversationally on message boards, answering the author as much as addressing their question. On forums of all kinds (including this blog), I typically start an answer with “Hi <author name>.”

My answers were edited to remove my greeting to the author of the original post. Although I cannot locate an example, I seem to recall one editor explaining to me something like “replying to individuals is not permitted.”

Me

I understand we come from all different walks of life; that some of us, for example, were raised in New York City while others were brought up on farms in rural Virginia. Having traveled to New York City for the first time about 10 years ago, I understand the culture shock that accompanies such a transition. One example: I learned to not speak to people I do not personally – already – know. Not even to say “Good morning.”

Where I was raised (and still, in my corner of Prince Edward County outside of Farmville, Virginia), it’s rude not to speak to people. If you can’t speak because you’re driving by, you wave. Not speaking is an affront – akin to denying they’re even people. Telling me I cannot say “Hi” to people in writing, on a site that exists to help people, just didn’t seem right.

I tried again later to answer questions at Stack Overflow. I typed “Hi <author name>” every time and then deleted it before I hit the Reply button. This sufficed for a while, but I still felt like I was being rude.

I Have a Problem

You may have read all that (in between eye-rolls of epic proportion), thinking the whole time: “Andy, you have a problem.”

You are correct and I hereby agree with you.

I, in fact, have several problems. This is merely one of them. People, though, are package deals. You don’t get to converse with, or learn from, or teach pieces of people. You interact – all the time every time – with the full person.

There’s an interesting link at the bottom of the Stack Overflow blog post. The link is to a site that contains a list of implicit bias tests. Jay Hanlon, Stack Overflow’s EVP of Culture and Experience, encourages readers of the post in footnote #2:

² If you’re shaking your head thinking, “not me,” I’d encourage you to take these implicit bias tests, specifically the Race IAT and the Gender-Career IAT. If you’re like me, they’re going to hurt.

I support the effort to reduce implicit bias – at Stack Overflow and everywhere. I believe Jay when he writes earlier that it bothered him personally when some complained that they felt left out.

Jay, I felt left out.

Eventually, typing and then deleting my greeting to the person asking the question proved… well, just too awkward for me to continue.

So I stopped answering questions at Stack Overflow.

Did Stack Overflow go under without my answers to SSIS and Biml questions? Goodness, no! They’ve managed just fine without me and will continue to do so.

You

I encourage you to disagree – and even share your disagreement in the comments. I think you’re awesome. I don’t agree with you about everything and I don’t expect you to agree with me about everything.

We’re different. And that’s ok.

People editing your greetings from an answer on a help forum may not bother you at all.

I don’t mind catching black snakes with my bare hands – though I usually wear work gloves because reptiles stink – and taking them out of my house when they come inside, but it may totally bother you. (Cogent because I think I hear one wriggling in the attic as I type this post…)

I’m ok with you being you. I’m actually more than ok with it. I’m even ok with you disagreeing with me on this and anything else.

Regarding matters on which we disagree, I believe you have reasons for believing what you believe. I may disagree with what you believe or even why you believe it, but I don’t think you’re dumb for believing it; I believe you believe it for a reason.

Conclusion

I wrote this because I like the Stack Overflow website. I like the help I receive from the answers I find there. Stack Overflow seems genuinely concerned with ways to improve the site. Maybe (probably) my discomfort with editing my greeting doesn’t rise higher than #3 on the list of things to change at Stack Overflow. I’m not sure and I’m ok if my little post is ignored, not seen, or if nothing changes in the Stack Overflow editing processes and practices. Promise.

As I stated previously, Stack Overflow will not go under without my answers to SSIS and Biml questions and I’ll keep getting answers from Stack Overflow.

I just wish I felt comfortable giving back.

Who is Exhibiting at the PASS Summit 2018? Enterprise Data & Analytics, That’s Who!

I am honored and excited to announce that Enterprise Data & Analytics will be an exhibitor at the PASS Summit 2018!

If you browse on over to the PASS Summit Sponsors page and scroll to the Exhibitors section, you’ll find us listed:

Honored and excited – that’s me!

I see – and have lived – this virtuous cycle in the SQL Server and PASS communities:

  • A person discovers the Community and is overwhelmed at our openness and genuine willingness to help others. They realize they are not alone.
  • They learn more and become better at their jobs which, in turn, positively impacts their quality of life.
  • Some desire to give back to the community, so they develop a presentation and submit it to a User Group or SQL Saturday.
  • Some are selected to deliver their presentation.
  • Some presentations are well-received and increase the visibility of the presenter in the community.
  • As presentations are honed over time, some are used as a springboard to develop and deliver other presentations, further increasing the visibility of the presenter.
  • Some presenters achieve enough visibility to become a brand.
  • Some presenters are selected to present at larger events, like the PASS Summit.
  • Some presenters use their newfound greater visibility and brand awareness to join a consultancy practice or to become independent consultants.
  • The continued care and feeding of the brand of some consultants drives business growth.
  • The businesses of some consultants grow to the point where they can become sponsors and exhibitors at events such as User Groups, SQL Saturdays, and – eventually – the PASS Summit.

This cycle can be broken (or quashed) at any point by any number of actions, inactions, missteps, mistakes, and/or competitive overreach. In fact, I promise you will make mistakes and take missteps along the way (ask me how I know), but those mistakes and failures can tear you down or build you into more than you were – and the outcome is 100% your choice.

I advocate for the next generation of presenters. I want to see you engage, learn, share, grow, build your brand, and give back – just like I did.

Go get ’em!

:{>

PS – Need some help with your data? Contact us! We are here to help and by hiring Enterprise Data & Analytics you support some great communities!

Deploying to the SSIS Catalog Changes the Protection Level

I recently answered a question on the SQL Community Slack #ssis channel about SSIS and security. Let me begin by stating that SSIS security is complex, made up of many moving parts, and not trivial. If you struggle with SSIS security configuration, you are struggling with one of the harder parts of SSIS.

Protection Level

Security for SSIS packages and projects is managed by a property called “ProtectionLevel.” Here are some facts about the ProtectionLevel property:

  • The project ProtectionLevel setting must match the ProtectionLevel setting for every package in the project.
  • If a password is supplied, the same password must be supplied for the project and each package in the project.

You may not like these features, but you will have to engineer data integration solutions for SSIS with these features in mind.

“What Does Protection Level… protect, Andy?”

I’m glad you asked. The ProtectionLevel property defines the method SSIS uses to protect values marked as Sensitive. Connection Manager Password properties are, by default, sensitive. SSIS projects developed using SSIS 2012+ may contain package or project parameters. Parameters have a developer-configurable attribute named Sensitive which, not surprisingly, allows the developer to mark a project or package parameter as Sensitive. Project Connection Managers were added in SSIS 2012 and, as we mentioned earlier, the Connection Manager Password property is Sensitive by default (this, along with the ability to mark project parameters as Sensitive, is why we need a project ProtectionLevel property).
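One way to see which parameters in a deployed project are marked Sensitive is to query the catalog views – a sketch; run it against your own SSISDB:

```sql
/* Parameters marked Sensitive across deployed projects. */
SELECT
    p.name AS project_name,
    op.object_name,
    op.parameter_name
FROM SSISDB.catalog.object_parameters AS op
JOIN SSISDB.catalog.projects AS p
    ON p.project_id = op.project_id
WHERE op.sensitive = 1;
```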

There are six ProtectionLevel settings:

  • DontSaveSensitive
  • EncryptSensitiveWithUserKey
  • EncryptSensitiveWithPassword
  • EncryptAllWithPassword
  • EncryptAllWithUserKey
  • ServerStorage

Now you may have read the list and viewed the image and thought, “Andy, I don’t see ServerStorage in the image.” You may have experienced a moment where you thought, “There are five Protection Levels!” similar to that moment experienced by Jean-Luc Picard at the end of part 2 of Chain of Command (Star Trek: The Next Generation) where he screamed at his Cardassian torturer, “There are four lights!”

You are not wrong. ServerStorage is not there. But ServerStorage is a valid ProtectionLevel. Promise.

“So, where is ServerStorage, Andy?”

You are nailing the good questions today! The ServerStorage ProtectionLevel setting is the default setting for projects and packages deployed to the SSIS Catalog. When you deploy an SSIS package or project, the Integration Services Deployment Wizard decrypts your package and/or project, and then re-encrypts them using ServerStorage. You can see it in this image – step 3 changes the ProtectionLevel property:

What does this encryption look like?

If you click that image to enlarge it, you will see a binary string. You may ask, “Andy, how do you know the binary string is encrypted?” A just question. </GrimaWormtongue> I know because I’ve looked behind the curtain (some). One SSIS Catalog stored procedure used in the Export ISPAC (more on this in a bit…) functionality is SSISDB.internal.get_project_internal. If you script the internal.get_project_internal stored procedure you will note parameters named @key_name and @certificate_name that are used to build dynamic T-SQL statements to open a symmetric key using decryption by certificate – on or about line 69 of the query (depending on how you scripted it).
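The general shape of those dynamic statements is standard SQL Server decrypt-by-certificate – the key, certificate, table, and column names below are placeholders, not the Catalog’s actual internal names:

```sql
/* Placeholder names throughout – the real key and certificate names
   arrive via parameters like @key_name and @certificate_name. */
OPEN SYMMETRIC KEY MyProjectKey
    DECRYPTION BY CERTIFICATE MyProjectCert;

SELECT DECRYPTBYKEY(p.project_data) AS decrypted_project
FROM internal.MyProjectsTable AS p; -- placeholder table

CLOSE SYMMETRIC KEY MyProjectKey;
```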

Yes, it’s binary. But it is also encrypted.

Implications

Now, the important part (and what motivated me to write this post):

If you password-protect your SSIS project, especially using EncryptAllWithPassword to protect intellectual property, your encryption – including your password – disappears forever once you deploy the project to the SSIS Catalog. Anyone with permission to export the project from the SSIS Catalog can see all you desire to hide. How hard is it to export the project? First you need to be a member of the SSIS Admin role. Then, you right-click the project in the SSMS Object Explorer Integration Services Catalogs node, and click Export:

Select a file location and file name for the exported ISPAC file:

Use a zip utility to open the compressed archive (or change the ISPAC file extension to zip, and Windows will decompress it for you):

From here, the package can be copied to disk. Using 7-Zip (or Windows if I changed the extension), I can edit the dtsx file to open the package XML in Notepad:

Here is a side-by-side comparison of the exported XML from the SSIS Catalog and the encrypted file used to develop the same SSIS package:

As you can see, using a password and one of the “WithPassword” Protection Level options will not protect your IP from folks with administrative privileges to your SSIS Catalog.

I can hear you thinking…

“Ok, Andy, You’ve Shown Us the Problem. What’s the Solution?”

I’m glad you asked. To protect intellectual property (IP), I highly recommend you encapsulate said IP in logic that resides outside of your SSIS package. How can you do that? One way is to use a custom assembly coded in .Net. The assembly can be designed to be imported into the Global Assembly Cache (GAC), and from there SSIS can access it from a Script Task (Control Flow) or Script Component (Data Flow Task).

You can also build a custom SSIS task or component to encapsulate your IP and expose configurable properties to developers. I wrote a book about one way to do all the stuff you have to do to build such a task. It’s called Building Custom Tasks for SQL Server Integration Services.

The book is not about the logic required to code your task. The book is about the things you need to know in order to author a custom Visual Studio toolbox item. There’s a non-Production-ready demo that you build throughout the book (which is pretty cool, I think, but I wrote it so I’m biased). I assume you are completely new to Visual Studio software development. The code is in Visual Basic (deal). And I used Visual Studio Community Edition, which is free.

:{>