Azure Data Studio – Server Management

Let’s look at how you can connect to your servers and group them up using Azure Data Studio. 

Down the left-hand side you’ll see the navigation bar. The icon we care about is at the top and will take you to the Servers area of the app.

You may as well jump in and connect a server to see what it’s like.

You’ll be given a connection popup. Assuming you can connect using Windows credentials, you’ll just need to put the name of your instance in the server box.

You’ll see the connection appear in your server list. Have a click around: you can see the databases, security and database objects, all of which you’ll recognise from SSMS.

You’ll be able to connect to all of your servers here, just add them one at a time.

Once you’ve added a few, you’ll probably want to start organising them into folders. Go ahead and add a new server group.

You get to choose a name for the group as well as a description that pops up like a tooltip. You can also pick a funky colour for it.

Personally, I’ve separated out Live from Dev from QA but do whatever is best in your environment.

If you have instances stacked on the same box then you can create subfolders for these. Just drag and drop folders within folders and instances in those folders.

Look at that, all pretty and organised.

What is Your “Why?”

This month Andy Leonard has asked “What is Your ‘Why’?”.

Well, here’s my Why.

I love SQL Server and the community that surrounds it. It’s so welcoming, open and accessible.

I’ve had a sort of organic progression of Microsoft products in my career. I’ve gone Excel Developer -> Access Developer -> SQL Server Developer -> SQL Server DBA (there are some other products in there like SSRS but that’s the main path). I’ve never really felt comfortable with any of the communities around these other products but SQL Server is a different kettle of fish completely.

Finding the SQL Server Community slack channel was a great thing. I am the only DBA where I am (with loads of developers) and having people to chat to about DBA stuff is such a pressure release.

Also, check out the call for speakers at most conferences. It’s not unusual to have a ‘first timers’ track for people who want to get into speaking. Doing this isn’t a necessity but it shows how inclusive the community is.

I didn’t choose to stay with SQL Server because of the technology specifically (although I do enjoy focusing on performance tuning) but rather the community around it.

Generate Test Data with Faker & Python within SQL Server

Make sure you’ve done these steps first:

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. You’ve also installed Pandas using pip

Then let’s get started

We’re going to use a Python library called Faker which is designed to generate test data. You’ll need to open the command line for the folder where pip is installed. In my standard installation of SQL Server 2019 it’s here (adjust for your own installation);

C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019PYTHON\PYTHON_SERVICES\Scripts

From here you want to run ‘pip install Faker’ to install the Faker library;

Once the install has finished, we can open SSMS and get started with our test data.

We’re going to get started with the sample queries from the official documentation but we have to add a print statement to see our results because we’re using SSMS;
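Wrapped for SSMS, the sample might look something like this. This is a sketch, not the post’s original code, and it assumes the SQL 2019 Python services are running and Faker installed as above:

```sql
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
from faker import Faker

fake = Faker()

# print() is needed because SSMS shows Python stdout in the Messages window
print(fake.name())
print(fake.address())
';
```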

If you run this in SSMS you’ll see the output in the messages window

This guy loves quality legwear

Now we know that works, let’s put this into a useable format within SQL Server.

This is going to be our block of Python;

For the purposes of this example, we’re going to make a temp table to store the data and view what we’ve done. Wrapping this Python script in T-SQL will give us an output like so;
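A sketch of that wrapper is below. The temp table and column names are my own, and it assumes Pandas and Faker are installed on the instance; the Python block builds a 100-row DataFrame and hands it back to SQL Server as OutputDataSet:

```sql
-- Temp table to hold our generated sample (names are my own choice)
CREATE TABLE #FakeData
(
    FullName    NVARCHAR(200),
    FullAddress NVARCHAR(500)
);

INSERT INTO #FakeData (FullName, FullAddress)
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
from faker import Faker
import pandas as pd

fake = Faker()

# Build 100 rows of fake names and addresses
rows = {"FullName": [], "FullAddress": []}
for _ in range(100):
    rows["FullName"].append(fake.name())
    rows["FullAddress"].append(fake.address())

# OutputDataSet is the default result set returned to SQL Server
OutputDataSet = pd.DataFrame(rows)
';

SELECT * FROM #FakeData;
```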

Go ahead and run it, you should see a sample of 100 names and addresses that are currently stored in your temp table;

There are far more options when using Faker. Looking at the official documentation you’ll see the list of different data types you can generate as well as options such as region specific data.

Go have fun trying this, it’s a small setup for a large amount of time saved.

Azure Data Studio Themes

This is one of the features of Azure Data Studio that is great for accessibility as well as just being cool.

The default theme is your basic light theme. It’s fine but this isn’t the only theme you have to use.

Use Ctrl+K Ctrl+T to open the theme options.

Have a click through and see how they look when you’re editing code. It’s a case of choosing something that suits your style. My preference is the default dark theme but go nuts and choose one you like.

Oh, and if you’re a sadist, check out the Red theme

Code Snippets in Azure Data Studio

Azure Data Studio has a feature called Code Snippets which allows you to quickly recreate all of those commands whose syntax you can never quite remember.

Crack open a new query window and type in ‘sql’; you’ll see all of the default templates

Choose any to look at and you’ll see a template with fields for you to change. sqlAddColumn looks like this

It gives you the fields to replace with your own query along with comments explaining what each section is for. Really handy.

It even has complicated stuff like cursors off the bat

Tell me you’d remember the syntax for a cursor without looking it up, I certainly wouldn’t.

A great thing about these snippets is that you can add your own and they can be exactly how you want them.

To get started with this open the Command Palette with Ctrl+Shift+P and type in ‘snippets’.

Scroll down and find the SQL option. Open it and it will bring you to the sql.json file in which we’ll be storing our SQL snippets.

Here’s an example of where to start.
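As a starting point, here’s a hedged example of what the file could contain — the snippet names, prefixes and bodies below are my own inventions (sp_WhoIsActive is Adam Machanic’s community stored procedure), so shape them however you like:

```json
{
    "Select top 100 rows": {
        "prefix": "sqlSelectTop100",
        "body": [
            "SELECT TOP 100 *",
            "FROM ${1:TableName}"
        ],
        "description": "Select the top 100 rows from a table"
    },
    "Whats running now": {
        "prefix": "sqlWhoIsActive",
        "body": [
            "EXEC sp_WhoIsActive"
        ],
        "description": "Run sp_WhoIsActive to see current activity"
    }
}
```

Because both prefixes start with ‘sql’, they’ll appear alongside the built-in templates when you type ‘sql’ in a query window.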

Paste this into your file, save sql.json and close it, then open a new query window (Ctrl+N). Type in ‘sql’ and you’ll see the two new snippets that you created

And there you go, you’ve got custom snippets waiting for you. You can go ahead and create whatever you’d like in whatever format you like.

These snippets are based on Visual Studio Code’s; for the official documentation head here.

Happy snipping!

Excited to be speaking at SQL Bits 2019!

Now it’s been published on their website I’m excited to share that I’ve been selected to speak at SQLBits 2019!

SQLBits is ‘the largest SQL Server conference in Europe for data professionals’ and takes place from the 27th of February to the 2nd of March.

My session introducing you to Azure Data Studio (SQL Operations Studio) has been selected and I’d love to be your introduction to this great tool.

Come see my session, I’ll be in Room 10 for the very last session of the very last day (4:15pm on Saturday).

I’ll be publishing some guides on Azure Data Studio over the next few weeks so if it’s interesting for you then keep an eye out.

Even if you don’t come to my session then I hope to see you at the Friday night party. Don’t forget your fancy dress 😉

See you there!

Using Reddit data with SQL Server and Python

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. Then you used pip to install PRAW
  4. You’ve also installed Pandas using pip
  5. You’ve created your Reddit API
  6. And you’ve got a working connection to Reddit

Now let’s actually gather this data and turn it into something useful inside SQL Server.

We’re going to build on our previous steps and create a Stored Procedure that we can simply execute from wherever we want and it will start populating data.

In previous steps we’ve only taken data from one subreddit but that’s a bit boring. Let’s make a list of subreddits that can be used by our SP.

We’re creating a new table called py_SubredditList and inserting a list of our choosing. The subreddits above are fairly good sources of large blocks of text, but feel free to swap in your own favourites.
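A minimal sketch of that table follows. The schema is my own assumption: a Hits column tracks how often each subreddit has been used so the stored procedure can rotate through the list, and the seed values are just examples:

```sql
-- Subreddits for the SP to cycle through; Hits tracks usage
CREATE TABLE dbo.py_SubredditList
(
    Subreddit VARCHAR(100) NOT NULL,
    Hits      INT NOT NULL DEFAULT 0
);

INSERT INTO dbo.py_SubredditList (Subreddit)
VALUES ('AskReddit'), ('WritingPrompts'), ('tifu'), ('relationships');
```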

We’re going to grab one subreddit at a time and use it.

Let’s start working on our Python code. The first thing we’ll need is somewhere to push all of our data. Python has a structure called a dictionary, so we’ll make one called topics_dict;

Let’s dump our data into here

Dictionaries aren’t easy for us to interpret so let’s create a data frame using Pandas;

This data frame is what we’re going to return from our python block;
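The Python steps above can be sketched like this. Note the FakeSubmission class is a stand-in I’ve added so the sketch runs on its own; in the real script the submissions come from redditConnect.subreddit(...).new(limit=...) via PRAW:

```python
import pandas as pd

# Stand-in for the PRAW submission objects returned by subreddit.new();
# in the real script these come from the Reddit API
class FakeSubmission:
    def __init__(self, title, selftext):
        self.title = title
        self.selftext = selftext

submissions = [
    FakeSubmission("First post", "Body text one"),
    FakeSubmission("Second post", "Body text two"),
]

# A dictionary keyed by column name, holding one list per column
topics_dict = {"title": [], "body": []}

for submission in submissions:
    topics_dict["title"].append(submission.title)
    topics_dict["body"].append(submission.selftext)

# Dictionaries are awkward to read, so load them into a Pandas DataFrame
topics_data = pd.DataFrame(topics_dict)
print(topics_data)
```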

We’re going to output this data set into a temporary table;

Then we’re going to execute the python script and output the data into that temp table

And we’re going to increment our subreddit Hits by one so we’re not hitting the same subreddit all the time.

We’re going to create a permanent table to hold this data for later use

Finally we’ll put the data from our temp table into the permanent table

Putting together all of these elements you’ll come out with something like this;
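Assembled into a stored procedure, it might look roughly like the sketch below. The object names, table schema and placeholder credentials are mine rather than the original post’s, so swap in your own API details:

```sql
-- Permanent table for the gathered data (schema is my own sketch)
CREATE TABLE dbo.py_RedditData
(
    Title       NVARCHAR(MAX),
    Body        NVARCHAR(MAX),
    DataType    VARCHAR(20),
    TitleLength INT,
    BodyLength  INT
);
GO

CREATE OR ALTER PROCEDURE dbo.py_GetRedditData
AS
BEGIN
    DECLARE @Subreddit VARCHAR(100);

    -- Grab the least-hit subreddit from our list
    SELECT TOP (1) @Subreddit = Subreddit
    FROM dbo.py_SubredditList
    ORDER BY Hits ASC;

    CREATE TABLE #RedditData
    (
        Title NVARCHAR(MAX),
        Body  NVARCHAR(MAX)
    );

    INSERT INTO #RedditData (Title, Body)
    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
import praw
import pandas as pd

redditConnect = praw.Reddit(client_id="YourPersonalUseScript",
                            client_secret="YourSecret",
                            user_agent="YourAppName",
                            username="YourRedditUsername",
                            password="YourRedditPassword")

subreddit = redditConnect.subreddit(TargetSubreddit)

topics_dict = {"Title": [], "Body": []}
for submission in subreddit.new(limit=100):
    topics_dict["Title"].append(submission.title)
    topics_dict["Body"].append(submission.selftext)

OutputDataSet = pd.DataFrame(topics_dict)
',
        @params = N'@TargetSubreddit VARCHAR(100)',
        @TargetSubreddit = @Subreddit;

    -- Increment Hits so we rotate through the list
    UPDATE dbo.py_SubredditList
    SET Hits = Hits + 1
    WHERE Subreddit = @Subreddit;

    -- Copy into the permanent table, recording lengths for filtering
    INSERT INTO dbo.py_RedditData (Title, Body, DataType, TitleLength, BodyLength)
    SELECT Title, Body, 'Text', LEN(Title), LEN(Body)
    FROM #RedditData;
END;
```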


Creating this SP isn’t going to do anything until you execute it. Go on, live life on the edge and try it. Snoo agrees you should.

Then check out the py_RedditData table and you will have something like this;

You’ll also be able to see the data lengths, so you can filter on those if you want. The DataType field is in there for you to experiment with. We’re only pulling the title and body from these submissions, but you can also pull fields such as url (the post URL), score (int) and created (datetime). Check out the PRAW documentation for all available fields.

You can call this SP however you like. My preference is to call it once per minute from an agent job and leave it overnight so the Reddit API call limit doesn’t get hit but you can do it however you want.

That’s it. You’re done. Go make a cuppa.

Connect to Reddit from SQL Server

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. Then you used pip to install PRAW
  4. You’ve also installed Pandas using pip
  5. You’ve created your Reddit API

Fantastic, we’re about ready to connect to Reddit from within SQL Server!

Let’s crank open SSMS and start building our query in a new window, testing it as we go.

To execute our script we’re going to use dynamic SQL to make it easier to read. Let’s get started setting down some foundations for our script.

The first line is the ‘shebang’ line and just allows the system to find Python correctly; we won’t be changing this at all. We’re also loading PRAW and Pandas (which we installed previously) so we can use them later.
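A sketch of those foundations: here I’m assuming the Python is built up in an NVARCHAR(MAX) variable (my name for it, @pyScript) and handed to sp_execute_external_script:

```sql
DECLARE @pyScript NVARCHAR(MAX);

-- The shebang line plus the two libraries we installed earlier
SET @pyScript = N'
#!/usr/bin/env python3
import praw
import pandas as pd
';

EXEC sp_execute_external_script
    @language = N'Python',
    @script = @pyScript;
```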

Feel free to run this script as it is, you won’t get any results at this point but you should also not see any error messages

We’re going to build our connection string to Reddit next.

We’re creating a variable (redditConnect) and declaring the values we need. Use the connection details from your own Reddit API setup.
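Assuming the script is being built up in a variable as described above (I’m calling it @pyScript), the connection piece might look like this. The values are placeholders, and the keyword arguments are PRAW’s standard script-app credentials:

```sql
-- Append the Reddit connection to the script variable (placeholder values)
SET @pyScript = @pyScript + N'
redditConnect = praw.Reddit(client_id="YourPersonalUseScript",
                            client_secret="YourSecret",
                            user_agent="YourAppName",
                            username="YourRedditUsername",
                            password="YourRedditPassword")
';
```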

We then need to decide which Subreddit we’re going to gather data from. I’m going to use AskReddit as it’s one of the largest text-only subreddits and will be perfect for what we’re trying to do here.

We also need to decide what order we’re going to put the data into. We can sort by things like new, top, controversial etc. For this let’s grab ‘new’

subredditOrder = subreddit.new(limit=1)

We can then test our connection is working by printing the first row of our data

Our whole block of code is going to look something like this
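Here’s a hedged sketch of the whole thing — placeholder credentials and my own variable names, with AskReddit sorted by ‘new’ as discussed:

```sql
DECLARE @pyScript NVARCHAR(MAX) = N'
#!/usr/bin/env python3
import praw
import pandas as pd

# Replace the placeholder values with your own API details
redditConnect = praw.Reddit(client_id="YourPersonalUseScript",
                            client_secret="YourSecret",
                            user_agent="YourAppName",
                            username="YourRedditUsername",
                            password="YourRedditPassword")

subreddit = redditConnect.subreddit("AskReddit")
subredditOrder = subreddit.new(limit=1)

# Print the newest submission title to prove the connection works
for submission in subredditOrder:
    print(submission.title)
';

EXEC sp_execute_external_script
    @language = N'Python',
    @script = @pyScript;
```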

Go ahead and run this script (with your own info in there) and you should see a print out of the latest submission to your chosen Subreddit. Something like this.

If this doesn’t work for you then you may have to allow this connection through your firewall. In my example I disabled Windows Firewall but your method will be different depending on what you use for a firewall.

Congratulations. Next step is to actually do something with this data.


Create API Access to Reddit

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. Then you used pip to install PRAW
  4. You then installed Pandas with pip

The last thing we need to do before connecting to Reddit is to create our API access. It’s a simple process, let’s get it done.

First of all, you need to create a Reddit account. Go to the following link to create your user. Note down your username and password, we’ll need those later.

Once you have a reddit username set up make sure you’re logged in and go to the apps page;

We’re going to create our app, click the nice big button.

Choose a name for your application then select the radio button to create a script. For the redirect URL you’ll want to enter http://localhost:8080

Once you’ve done these three things, click ‘create app’. You’ll be taken to the screen with some useful information you’ll need.

From this screen you’ll need to note your ‘personal use script’ as well as the ‘secret’.

At this point, you should have 5 pieces of information;

  1. Reddit Username
  2. Reddit Password
  3. App Name
  4. Personal Use Script (14 characters)
  5. Secret (27 characters)

Once you’ve got all of these, we can use them to connect to Reddit from SQL Server (with Python).

Let’s go see how we can do this.

Installing Pandas with SQL Server and Python

  1. You’ve installed SQL Server with Python
  2. You’ve then installed pip
  3. Then you used pip to install PRAW

We’re going to use the Pandas library to format our data and make it easier to read. If you’ve already used pip to install PRAW then this is going to be very easy. Let’s get started.

First of all we need to find our Python scripts folder. My default on SQL 2019 is here.

C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019PYTHON\PYTHON_SERVICES\Scripts

Once here, open the command line at this location and run ‘pip install pandas’.

This is going to do the heavy lifting for you. You’ll just sit there and watch it install.

Once the script has finished you’ll see a screen like this.

To double check this has worked you can run the following command within SQL Server
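A check along these lines should do it — it simply imports Pandas inside SQL Server and prints the version to the Messages window:

```sql
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pandas as pd
print(pd.__version__)
';
```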

If you see an error then you’ll have to try and reinstall Pandas. If not, congratulations, you’ve done it!