Azure Data Studio – Extensions

Azure Data Studio supports installing extensions and has its own marketplace where you can get the full install details.

On the left-hand bar, choose the funny little Extensions icon;

It will show which extensions you already have installed as well as the marketplace. Scroll down and you’ll be able to browse through the marketplace.

Click on any that take your fancy and it’ll open a page about the extension, along with animations showing how it works (if the publisher has included them).

You can also build your own or download other extensions directly from source. Let’s go get Phil Scott’s pre-release of queryplan.show;

https://github.com/phil-scott-78/azure-data-studio-queryplan-show/releases/tag/v0.0.1

Go ahead and download the .vsix file. Once you’ve got that, open ADS, open the command palette (Ctrl+Shift+P) and find Extensions: Install from VSIX…

Navigate to your download folder and select the vsix file you’ve just downloaded.

If it’s not on the store you’ll probably see the following error message. I’m going to click Yes, but you need to know that you trust the publisher (or you like living life dangerously, you rebel).

YOLO right?

Once it’s installed you’ll have to restart Azure Data Studio to enable it.

Once it’s restarted you’ll see that your new extension is enabled.

Nice work! You can now add Master With Azure Data Studio to your C.V.

Each extension will have different install instructions that will be shown on their GitHub page. Follow these and you’ll have access to those sweet extensions.

Azure Data Studio – Command Palette

Ever wondered how many things Azure Data Studio can do? Open the command palette and have a scroll through.

Press Ctrl+Shift+P and you’ll see the command palette at the top of the screen. Your recently used commands appear at the top; the rest are scrollable;

There’s a whole section for your installed extensions, a whole bunch for source control and loads of others. Have a dig through and see what’s interesting to you.

You’ll use the command palette a lot; get comfortable with it and your life will be a lot easier.

Azure Data Studio – Execution Plans

If you’ve ever had to get involved in query performance you’ll have used execution plans. Azure Data Studio gives you execution plans too, but they’re a little tricky in places.

Let’s build a quick query and gather our execution plans. This query will work on any database;
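
Something like this will do; it joins a couple of system views that exist in every database;

SELECT o.name AS object_name,
       c.name AS column_name
FROM sys.objects AS o
INNER JOIN sys.columns AS c
    ON c.object_id = o.object_id
ORDER BY o.name, c.column_id;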

In your query editor window the obvious button is the ‘Explain’ button. This works and will give you the estimated query plan.

You’ll recognise the execution plan that you see in ADS, as it looks very much like its SSMS equivalent.

Estimated plan in Azure Data Studio

Here’s what it would have looked like in SSMS

The same plan in SSMS

One advantage ADS has over SSMS is that you can also see the Top Operations natively.

This has been possible in SentryOne Plan Explorer for a long time, but it’s now native in ADS too. It’s great when you’re looking at a massive plan and want to drill down to the major pain points quickly.

Getting the actual execution plan is a little more complicated. There’s no nice easy button, so you’ll want to get used to the shortcuts.

Press Ctrl+Shift+P to open the command palette and type ‘run’. You’ll notice the command ‘Run Current Query with Actual Plan’. That’s the ticket;

You’ll also notice that there’s an even better shortcut: Ctrl+M will execute the query and give you the actual plan.

There’s currently a gotcha with ADS actual execution plans where only the last code block gets rendered. There’s an open GitHub issue for this, so keep an eye on it; there’s new stuff being released every month.

Azure Data Studio – Server Management

Let’s look at how you can connect to your servers and group them up using Azure Data Studio. 

Look down the left navigation bar; the icon we care about is at the top and will take you to the Servers area of the app.

You may as well jump in and connect a server to see what it’s like.

You’ll be given a connection popup. Assuming you can connect using Windows credentials then you’ll just need to put the name of your instance in the server box.

You’ll see the connection appear in your server list. Have a click around; you can see the databases, security and database objects you’ll be used to from SSMS.

You’ll be able to connect to all of your servers here, just add them one at a time.

Once you’ve added a few, you’ll probably want to start organising them into folders. Go ahead and add a new server group.

You get to choose a name for the group as well as a description that pops up like a tooltip. You also get to pick a funky colour for it.

Personally, I’ve separated out Live from Dev from QA but do whatever is best in your environment.

If you have instances stacked on the same box then you can create subfolders for these. Just drag and drop folders within folders, and instances into those folders.

Look at that, all pretty and organised.

What is Your “Why?”

This month Andy Leonard has asked “What is Your ‘Why’?”.

Well, here’s my Why.

I love SQL Server and the community that surrounds it. It’s so welcoming, open and accessible.

I’ve had a sort of organic progression through Microsoft products in my career. I’ve gone Excel Developer -> Access Developer -> SQL Server Developer -> SQL Server DBA (there are some other products in there, like SSRS, but that’s the main path). I’ve never really felt comfortable with any of the communities around those other products, but SQL Server is a different kettle of fish completely.

Finding the SQL Server Community Slack was a great thing. I’m the only DBA where I work (with loads of developers) and having people to chat to about DBA stuff is such a pressure release.

Also, check out the call for speakers at most conferences. It’s not unusual to have a ‘first timers’ track for people who want to get into speaking. Doing this isn’t a necessity but it shows how inclusive the community is.

I didn’t choose to stay with SQL Server because of the technology specifically (although I do enjoy focusing on performance tuning) but rather the community around it.

Generate Test Data with Faker & Python within SQL Server

Make sure you’ve done these steps first

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. You’ve also installed Pandas using pip

Then let’s get started

We’re going to use a Python library called Faker, which is designed to generate test data. You’ll need to open a command prompt in the folder where pip is installed. In my standard installation of SQL Server 2019 it’s here (adjust for your own installation);

C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019PYTHON\PYTHON_SERVICES\Scripts

From here you want to run the following command to install Faker;
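
pip install faker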

Once that’s done, we can open SSMS and get started with our test data.

We’re going to get started with the sample queries from the official documentation but we have to add a print statement to see our results because we’re using SSMS;
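
Something along these lines does the job (Faker’s basic calls from the docs, wrapped in sp_execute_external_script);

EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'
from faker import Faker

fake = Faker()

# print() sends the output to the SSMS Messages tab
print(fake.name())
print(fake.address())
print(fake.text())
';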

If you run this in SSMS you’ll see the output in the messages window

This guy loves quality legwear

Now we know that works, let’s put this into a usable format within SQL Server.

This is going to be our block of Python;
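
Here’s a sketch; the FullName and Address column names are just my choices;

from faker import Faker
import pandas as pd

fake = Faker()

# Build 100 rows of fake names and addresses
rows = [{"FullName": fake.name(), "Address": fake.address()} for _ in range(100)]

# OutputDataSet is the data frame that sp_execute_external_script
# hands back to SQL Server as a result set
OutputDataSet = pd.DataFrame(rows, columns=["FullName", "Address"])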

For the purposes of this example, we’re going to make a temp table to store the data and view what we’ve done. Wrapping the Python script in T-SQL will give us an output like so;
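
The temp table name and column sizes below are just my choices;

DROP TABLE IF EXISTS #FakerData;
CREATE TABLE #FakerData
(
    FullName NVARCHAR(200),
    Address  NVARCHAR(500)
);

INSERT INTO #FakerData (FullName, Address)
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'
from faker import Faker
import pandas as pd

fake = Faker()
rows = [{"FullName": fake.name(), "Address": fake.address()} for _ in range(100)]
OutputDataSet = pd.DataFrame(rows, columns=["FullName", "Address"])
';

SELECT * FROM #FakerData;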

Go ahead and run it; you should see a sample of 100 names and addresses sitting in your temp table;

There are far more options when using Faker. Looking at the official documentation you’ll see the list of different data types you can generate as well as options such as region specific data.

Go have fun trying this, it’s a small setup for a large amount of time saved.

Azure Data Studio Themes

This is one of the features of Azure Data Studio that is great for accessibility as well as just being cool.

The default theme is your basic light theme. It’s fine, but you’re not stuck with it.

Use Ctrl+K Ctrl+T to open the theme options.

Have a click through and see how they look when you’re editing code. It’s a case of choosing something that suits your style. My preference is the default dark theme but go nuts and choose one you like.

Oh, and if you’re a sadist, check out the Red theme.

Code Snippets in Azure Data Studio

Azure Data Studio has a feature called Code Snippets, which lets you quickly create all of those commands you forget the syntax for.

Crack open a new query window and type in ‘sql’; you’ll see all of the default templates.

Choose any to look at and you’ll see a template with fields for you to change. sqlAddColumn looks like this
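
From memory it’s roughly along these lines; the placeholder names are the bits you tab through and replace;

-- Add a new column to an existing table
ALTER TABLE dbo.TableName
    ADD NewColumnName INT NULL;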

It gives you the fields to replace with your own query along with comments explaining what each section is for. Really handy.

It even has complicated stuff like cursors off the bat
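
It expands to the full declare/open/fetch/close/deallocate skeleton, something like this;

DECLARE @name NVARCHAR(128);

DECLARE db_cursor CURSOR FOR
    SELECT name
    FROM sys.databases;

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Do something with each row
    PRINT @name;
    FETCH NEXT FROM db_cursor INTO @name;
END;

CLOSE db_cursor;
DEALLOCATE db_cursor;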

Tell me you’d remember the syntax for a cursor without looking it up, I certainly wouldn’t.

A great thing about these snippets is that you can add your own and they can be exactly how you want them.

To get started with this, open the Command Palette with Ctrl+Shift+P and type in ‘snippets’.

Scroll down and find the SQL option. Open it and it will bring you to the sql.json file in which we’ll be storing our SQL snippets.

Here’s an example of where to start.
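
These two snippets are just mine to show the shape of the file; ‘prefix’ is what you type in the editor and ‘body’ is what gets dropped in;

{
    "Select row count": {
        "prefix": "sqlRowCount",
        "body": [
            "SELECT COUNT(*) AS NumberOfRows",
            "FROM ${1:dbo.TableName};"
        ],
        "description": "Count the rows in a table"
    },
    "Find a column": {
        "prefix": "sqlFindColumn",
        "body": [
            "SELECT t.name AS TableName, c.name AS ColumnName",
            "FROM sys.columns AS c",
            "INNER JOIN sys.tables AS t ON t.object_id = c.object_id",
            "WHERE c.name LIKE '%${1:SearchTerm}%';"
        ],
        "description": "Find which tables contain a column"
    }
}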

Paste this into your file, save it, and close sql.json. Then open a new query window (Ctrl+N), type in ‘sql’, and you’ll see the two new snippets you created.

And there you go, you’ve got custom snippets waiting for you. You can go ahead and create whatever you’d like in whatever format you like.

These snippets are based on Visual Studio Code’s; for the official documentation, head here.

Happy snipping!

Excited to be speaking at SQLBits 2019!

Now it’s been published on their website I’m excited to share that I’ve been selected to speak at SQLBits 2019!

SQLBits is ‘the largest SQL Server conference in Europe for data professionals’ and runs from the 27th of February to the 2nd of March.

My session introducing you to Azure Data Studio (SQL Operations Studio) has been selected and I’d love to be your introduction into this great tool.

Come see my session, I’ll be in Room 10 for the very last session of the very last day (4:15pm on Saturday).

https://sqlbits.com/Sessions/Event18/Introducing_Azure_Data_Studio_SQL_Operations_Studio_

I’ll be publishing some guides on Azure Data Studio over the next few weeks so if it’s interesting for you then keep an eye out.

Even if you don’t come to my session then I hope to see you at the Friday night party. Don’t forget your fancy dress 😉

See you there!

Using Reddit data with SQL Server and Python

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. Then you used pip to install PRAW
  4. You’ve also installed Pandas using pip
  5. You’ve created your Reddit API
  6. And you’ve got a working connection to Reddit

Now let’s actually gather this data and turn it into something useful inside SQL Server.

We’re going to build on our previous steps and create a Stored Procedure that we can simply execute from wherever we want and it will start populating data.

In previous steps we’ve only taken data from one subreddit but that’s a bit boring. Let’s make a list of subreddits that can be used by our SP.
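
Something like this will do; the Hits column keeps track of how often we’ve used each subreddit (the table shape and the subreddit picks are just my suggestions);

CREATE TABLE dbo.py_SubredditList
(
    Subreddit NVARCHAR(100) NOT NULL,
    Hits      INT NOT NULL DEFAULT (0)
);

INSERT INTO dbo.py_SubredditList (Subreddit)
VALUES (N'writingprompts'),
       (N'askreddit'),
       (N'tifu'),
       (N'talesfromtechsupport');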

We’re creating a new table called py_SubredditList and inserting a list of our choosing. The subreddits above are fairly good ones for getting large blocks of text. Feel free to swap in your favourites.

We’re going to grab one subreddit at a time and use it;
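
Picking the one with the fewest Hits means we rotate through the list;

DECLARE @Subreddit NVARCHAR(100);

SELECT TOP (1) @Subreddit = Subreddit
FROM dbo.py_SubredditList
ORDER BY Hits ASC;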

Let’s start working on our Python code. First thing we’ll need is somewhere to push all of our data. Python uses things called dictionaries so we’ll make one called topics_dict;
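
One list per field we want to collect;

topics_dict = {"title": [], "body": []}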

Let’s dump our data in here;
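
This uses the reddit connection from the earlier steps; SubredditName is the parameter we’ll pass in from T-SQL shortly, and the hot() listing with a limit of 100 is just my choice;

# Loop over the subreddit's submissions and stash the bits we want
for submission in reddit.subreddit(SubredditName).hot(limit=100):
    topics_dict["title"].append(submission.title)
    topics_dict["body"].append(submission.selftext)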

Dictionaries aren’t easy for us to interpret so let’s create a data frame using Pandas;
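
import pandas as pd

topics_df = pd.DataFrame(topics_dict)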

This data frame is what we’re going to return from our python block;
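
OutputDataSet is the data frame name sp_execute_external_script expects back by default;

OutputDataSet = topics_df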

We’re going to output this data set into a temporary table;
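
The temp table just needs a column per field we’re returning (names and types are my choices);

CREATE TABLE #RedditData
(
    Title NVARCHAR(MAX),
    Body  NVARCHAR(MAX)
);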

Then we’re going to execute the Python script and output the data into that temp table;
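
Assuming the Python above is sitting in an @PythonScript variable, @params is how the chosen subreddit gets through to Python as SubredditName;

INSERT INTO #RedditData (Title, Body)
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = @PythonScript,
    @params = N'@SubredditName NVARCHAR(100)',
    @SubredditName = @Subreddit;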

And we’re going to increment our subreddit Hits by one so we’re not hitting the same subreddit all the time.
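
A simple bump against the list table we made earlier;

UPDATE dbo.py_SubredditList
SET Hits = Hits + 1
WHERE Subreddit = @Subreddit;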

We’re going to create a permanent table to hold this data for later use;
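
Again, the shape here is my take: one row per piece of text, with its type and a computed length alongside;

CREATE TABLE dbo.py_RedditData
(
    RedditDataID INT IDENTITY(1, 1) PRIMARY KEY,
    Subreddit    NVARCHAR(100),
    DataType     NVARCHAR(20),
    DataValue    NVARCHAR(MAX),
    DataLength   AS LEN(DataValue),
    CollectedOn  DATETIME2 DEFAULT (SYSDATETIME())
);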

Finally we’ll put the data from our temp table into the permanent table;
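
Unpivoting titles and bodies into one row apiece;

INSERT INTO dbo.py_RedditData (Subreddit, DataType, DataValue)
SELECT @Subreddit, N'title', Title FROM #RedditData
UNION ALL
SELECT @Subreddit, N'body', Body FROM #RedditData;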

Putting together all of these elements you’ll come out with something like this;
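
Here’s a sketch of the assembled procedure, under the same assumptions as the blocks above (swap in your own Reddit API credentials);

CREATE OR ALTER PROCEDURE dbo.py_GetRedditData
AS
BEGIN
    SET NOCOUNT ON;

    -- 1. Pick the least-used subreddit
    DECLARE @Subreddit NVARCHAR(100);
    SELECT TOP (1) @Subreddit = Subreddit
    FROM dbo.py_SubredditList
    ORDER BY Hits ASC;

    -- 2. Somewhere for the Python output to land
    CREATE TABLE #RedditData (Title NVARCHAR(MAX), Body NVARCHAR(MAX));

    -- 3. Pull the submissions with PRAW
    DECLARE @PythonScript NVARCHAR(MAX) = N'
import praw
import pandas as pd

reddit = praw.Reddit(client_id="YOUR_CLIENT_ID",
                     client_secret="YOUR_CLIENT_SECRET",
                     user_agent="YOUR_USER_AGENT")

topics_dict = {"title": [], "body": []}
for submission in reddit.subreddit(SubredditName).hot(limit=100):
    topics_dict["title"].append(submission.title)
    topics_dict["body"].append(submission.selftext)

OutputDataSet = pd.DataFrame(topics_dict)
';

    INSERT INTO #RedditData (Title, Body)
    EXECUTE sp_execute_external_script
        @language = N'Python',
        @script = @PythonScript,
        @params = N'@SubredditName NVARCHAR(100)',
        @SubredditName = @Subreddit;

    -- 4. Rotate to a different subreddit next run
    UPDATE dbo.py_SubredditList
    SET Hits = Hits + 1
    WHERE Subreddit = @Subreddit;

    -- 5. Keep the results
    INSERT INTO dbo.py_RedditData (Subreddit, DataType, DataValue)
    SELECT @Subreddit, N'title', Title FROM #RedditData
    UNION ALL
    SELECT @Subreddit, N'body', Body FROM #RedditData;
END;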

FOR EVERYBODY WHO JUST WANTS THE FINISHED SP, IT’S HERE!

Installing this SP isn’t going to do anything until you execute it. Go on, live life on the edge and try it. Snoo agrees you should.

Then check out the py_RedditData table and you will have something like this;

You’ll also be able to see your data lengths, so you can filter on those if you want. The DataType field is in there for you to experiment with. We’re only pulling title and body from these submissions, but you can also pull fields such as url (the post URL), score (int) and created (datetime). Check out the PRAW documentation for all available fields.

You can call this SP however you like. My preference is to call it once per minute from an agent job and leave it running overnight so we don’t hit the Reddit API call limit, but do whatever works for you.

That’s it. You’re done. Go make a cuppa.