Create Azure SQL Database using Terraform

Terraform is an open-source IaC (Infrastructure as Code) tool that provides a consistent CLI workflow to manage hundreds of cloud services.

If you have to create new Azure SQL databases with the same configuration over and over for testing purposes, and then decommission them later, this IaC approach is for you.

Setting up was pretty easy:

  1. Install Visual Studio Code
  2. Login to Azure in VSCode
  3. Install Azure CLI
  4. Install Terraform
  5. Create environment variable for Terraform

And done. (Check the Terraform documentation for more details on the configuration part.)
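To confirm everything is wired up before writing any Terraform, you can run a quick check from the terminal (these are standard Azure CLI and Terraform commands; the original setup did not list them, so treat this as my assumed sanity check):

az --version
terraform -version
az login

az login opens a browser to authenticate; the azurerm provider will then use those credentials when Terraform talks to Azure.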

To begin, I created a new file in VSCode, selected the language as Terraform, saved it in the location C:\Users\shamvil\Documents\ter, and named it main.tf. We can have many Terraform files in one location, but main.tf needs to hold the most important part of the syntax.
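By the end of this walkthrough, the folder will contain three files (all three names come up below):

C:\Users\shamvil\Documents\ter
├── main.tf
├── firewall.tf
└── dbgenerator.tf

Terraform treats all .tf files in a directory as one configuration, so the split is purely for readability.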

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.78.0"
    }
  }
}


provider "azurerm" {
  features {}
}

Here, we are specifying the provider details and its version, so that when we initialize Terraform it downloads the correct azurerm provider plugin to talk to Azure on our behalf.

Now, for the resources, we will start writing as below:

resource "azurerm_resource_group" "sqlserver26092021" {
  name     = "demorg"
  location = "Central India"
}

We need to create a resource group and select a location. Refer to the azurerm_resource_group documentation to understand the syntax in more detail.

Similarly, use the below to create a storage account and an Azure SQL server:

resource "azurerm_storage_account" "sqlserver26092021" {
  name                     = "sqlserver26092021sa"
  resource_group_name      = azurerm_resource_group.sqlserver26092021.name
  location                 = azurerm_resource_group.sqlserver26092021.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_mssql_server" "sqlserver26092021" {
  name                         = "sqlserver26092021-sqlserver"
  resource_group_name          = azurerm_resource_group.sqlserver26092021.name
  location                     = azurerm_resource_group.sqlserver26092021.location
  version                      = "12.0"
  administrator_login          = "myusername"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}

My main.tf file ended here. Now for the fun part.

I created separate files for the other resources to keep things clean and easily manageable.

File number 2 is called firewall.tf. In it, I wrote:

resource "azurerm_sql_firewall_rule" "fr" {
  name                = "fr"
  server_name         = azurerm_mssql_server.sqlserver26092021.name
  resource_group_name = azurerm_resource_group.sqlserver26092021.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

I used “0.0.0.0” as the IP range for my firewall because I am not locking access down to specific addresses. Note that in Azure SQL, a rule with both start and end set to 0.0.0.0 is a special case that allows access from Azure services and resources, not from arbitrary public IPs.

In case you want to allow or restrict specific clients, change the IP range as per your needs.
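For example, a rule limited to a single address range might look like this (the rule name and the addresses below are hypothetical placeholders, not values from my deployment — 203.0.113.0/24 is a reserved documentation range):

resource "azurerm_sql_firewall_rule" "office" {
  name                = "office"
  server_name         = azurerm_mssql_server.sqlserver26092021.name
  resource_group_name = azurerm_resource_group.sqlserver26092021.name
  start_ip_address    = "203.0.113.0"
  end_ip_address      = "203.0.113.255"
}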

Finally, my third file was dbgenerator.tf:

resource "azurerm_mssql_database" "test" {
  name               = "acctest-db-d"
  server_id          = azurerm_mssql_server.sqlserver26092021.id
  collation          = "SQL_Latin1_General_CP1_CI_AS"
  license_type       = "LicenseIncluded"
  max_size_gb        = 2
  geo_backup_enabled = false
  zone_redundant     = false
  sku_name           = "basic"
  sample_name        = "AdventureWorksLT"
}

Because this was for testing, with almost negligible database size needs, I opted for the Basic SKU with a max size of 2 GB.
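If you need something bigger, only a couple of attributes change. A sketch assuming a Standard S0 database (the sku_name and size here are my illustrative choices, not part of the original deployment):

resource "azurerm_mssql_database" "test" {
  name         = "acctest-db-d"
  server_id    = azurerm_mssql_server.sqlserver26092021.id
  collation    = "SQL_Latin1_General_CP1_CI_AS"
  license_type = "LicenseIncluded"
  max_size_gb  = 250
  sku_name     = "S0"
}

Check the azurerm_mssql_database documentation for the valid sku_name values and their size limits before picking one.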

Now my Terraform code is ready. Time for the easy part, the only part you will need to run every time you want to deploy this.

First, run terraform init in the folder to download the azurerm provider. Then we create a plan, which gives an overview of everything that will be created:

PS C:\Users\shamvil\Documents\ter> terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_mssql_database.test will be created
  + resource "azurerm_mssql_database" "test" {
      + auto_pause_delay_in_minutes = (known after apply)
      + collation                   = "SQL_Latin1_General_CP1_CI_AS"
      + create_mode                 = "Default"
      + creation_source_database_id = (known after apply)
      + extended_auditing_policy    = (known after apply)
      + geo_backup_enabled          = false
      + id                          = (known after apply)
      + license_type                = "LicenseIncluded"
      + max_size_gb                 = 2
      + min_capacity                = (known after apply)
      + name                        = "acctest-db-d"
      + read_replica_count          = (known after apply)
      + read_scale                  = (known after apply)
      + restore_point_in_time       = (known after apply)
      + sample_name                 = "AdventureWorksLT"
      + server_id                   = (known after apply)
      + sku_name                    = "basic"
      + storage_account_type        = "GRS"
      + zone_redundant              = false

      + long_term_retention_policy {
          + monthly_retention = (known after apply)
          + week_of_year      = (known after apply)
          + weekly_retention  = (known after apply)
          + yearly_retention  = (known after apply)
        }

      + short_term_retention_policy {
          + retention_days = (known after apply)
        }

      + threat_detection_policy {
          + disabled_alerts            = (known after apply)
          + email_account_admins       = (known after apply)
          + email_addresses            = (known after apply)
          + retention_days             = (known after apply)
          + state                      = (known after apply)
          + storage_account_access_key = (sensitive value)
          + storage_endpoint           = (known after apply)
          + use_server_default         = (known after apply)
        }
    }

  # azurerm_mssql_server.sqlserver26092021 will be created
  + resource "azurerm_mssql_server" "sqlserver26092021" {
      + administrator_login             = "myusername"
      + administrator_login_password    = (sensitive value)
      + connection_policy               = "Default"
      + extended_auditing_policy        = (known after apply)
      + fully_qualified_domain_name     = (known after apply)
      + id                              = (known after apply)
      + location                        = "centralindia"
      + name                            = "sqlserver26092021-sqlserver"
      + public_network_access_enabled   = true
      + resource_group_name             = "demorg"
      + restorable_dropped_database_ids = (known after apply)
      + version                         = "12.0"
    }

  # azurerm_resource_group.sqlserver26092021 will be created
  + resource "azurerm_resource_group" "sqlserver26092021" {
      + id       = (known after apply)
      + location = "centralindia"
      + name     = "demorg"
    }

  # azurerm_sql_firewall_rule.fr will be created
  + resource "azurerm_sql_firewall_rule" "fr" {
      + end_ip_address      = "0.0.0.0"
      + id                  = (known after apply)
      + name                = "fr"
      + resource_group_name = "demorg"
      + server_name         = "sqlserver26092021-sqlserver"
      + start_ip_address    = "0.0.0.0"
    }

  # azurerm_storage_account.sqlserver26092021 will be created
  + resource "azurerm_storage_account" "sqlserver26092021" {
      + access_tier                      = (known after apply)
      + account_kind                     = "StorageV2"
      + account_replication_type         = "LRS"
      + account_tier                     = "Standard"
      + allow_blob_public_access         = false
      + enable_https_traffic_only        = true
      + id                               = (known after apply)
      + is_hns_enabled                   = false
      + large_file_share_enabled         = (known after apply)
      + location                         = "centralindia"
      + min_tls_version                  = "TLS1_0"
      + name                             = "sqlserver26092021sa"
      + nfsv3_enabled                    = false
      + primary_access_key               = (sensitive value)
      + primary_blob_connection_string   = (sensitive value)
      + primary_blob_endpoint            = (known after apply)
      + primary_blob_host                = (known after apply)
      + primary_connection_string        = (sensitive value)
      + primary_dfs_endpoint             = (known after apply)
      + primary_dfs_host                 = (known after apply)
      + primary_file_endpoint            = (known after apply)
      + primary_file_host                = (known after apply)
      + primary_location                 = (known after apply)
      + primary_queue_endpoint           = (known after apply)
      + primary_queue_host               = (known after apply)
      + primary_table_endpoint           = (known after apply)
      + primary_table_host               = (known after apply)
      + primary_web_endpoint             = (known after apply)
      + primary_web_host                 = (known after apply)
      + resource_group_name              = "demorg"
      + secondary_access_key             = (sensitive value)
      + secondary_blob_connection_string = (sensitive value)
      + secondary_blob_endpoint          = (known after apply)
      + secondary_blob_host              = (known after apply)
      + secondary_connection_string      = (sensitive value)
      + secondary_dfs_endpoint           = (known after apply)
      + secondary_dfs_host               = (known after apply)
      + secondary_file_endpoint          = (known after apply)
      + secondary_file_host              = (known after apply)
      + secondary_location               = (known after apply)
      + secondary_queue_endpoint         = (known after apply)
      + secondary_queue_host             = (known after apply)
      + secondary_table_endpoint         = (known after apply)
      + secondary_table_host             = (known after apply)
      + secondary_web_endpoint           = (known after apply)
      + secondary_web_host               = (known after apply)
      + shared_access_key_enabled        = true

      + blob_properties {
          + change_feed_enabled      = (known after apply)
          + default_service_version  = (known after apply)
          + last_access_time_enabled = (known after apply)
          + versioning_enabled       = (known after apply)

          + container_delete_retention_policy {
              + days = (known after apply)
            }

          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + delete_retention_policy {
              + days = (known after apply)
            }
        }

      + identity {
          + identity_ids = (known after apply)
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = (known after apply)
        }

      + network_rules {
          + bypass                     = (known after apply)
          + default_action             = (known after apply)
          + ip_rules                   = (known after apply)
          + virtual_network_subnet_ids = (known after apply)

          + private_link_access {
              + endpoint_resource_id = (known after apply)
              + endpoint_tenant_id   = (known after apply)
            }
        }

      + queue_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + hour_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }

          + logging {
              + delete                = (known after apply)
              + read                  = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
              + write                 = (known after apply)
            }

          + minute_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }
        }

      + share_properties {
          + retention_policy {
              + days = (known after apply)
            }

          + smb {
              + authentication_types            = (known after apply)
              + channel_encryption_type         = (known after apply)
              + kerberos_ticket_encryption_type = (known after apply)
              + versions                        = (known after apply)
            }
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Once we are satisfied, we can run

terraform apply

At the prompt, answer “yes”, or run

terraform apply --auto-approve

to skip the prompt.

This takes time; you can step out and get yourself a coffee while Terraform does the work for you.

After waiting for another few minutes, I finally saw the “Apply complete” message. Now, time to check the portal…

…and it’s created. The portal shows the Azure resources created by Terraform, including the firewall details.

The firewall details would look different if you gave more parameters for the firewall resource and specified a proper IP range instead of “0.0.0.0”.

Now that showtime is over, we need to clean up what we made.

Terraform even makes that easy.

Simply run

terraform destroy

Again, it creates a plan for destroying the resources it created and then seeks approval.

Execution completes with 5 resources destroyed, and the resources are dropped from the portal.

Once you have created the Terraform script as per your needs, all you need to run each time is:

terraform init

terraform plan    # optional

terraform apply --auto-approve

terraform destroy

Note – Terraform can only destroy resources that it created (i.e., resources tracked in its state). So if your Azure administrator accidentally creates a new database manually, it will need to be dropped manually.
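To see exactly what Terraform is tracking (and will therefore destroy), you can inspect the state. The commands below are standard Terraform CLI, shown as an illustration rather than part of the original walkthrough; -target destroys a single tracked resource:

terraform state list

terraform destroy -target=azurerm_mssql_database.test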
