Automatic Code Generation for Imported Resources!
Config Driven Imports and Checks - New in Terraform v1.5
It finally happened! Terraform can now generate code for you for imported resources…
Here’s the link to the vendor announcement: Terraform Announcement
Let’s discuss what has changed and work through an example to test the new functionality. TL;DR: for the purpose of this example, we will stand up a basic resource in code, read out its resource ID, remove it from state tracking, delete the code block, create an import block, and then let Terraform do its magic.
Testing the New Capability
I’m going to test in Azure with a storage account. I’ll build this in code for speed; I’ve omitted the resource group and variables, which live in other supporting files.
resource "azurerm_storage_account" "june23_msdn" {
  name                      = "thedocstor1"
  resource_group_name       = azurerm_resource_group.june23_msdn.name
  location                  = azurerm_resource_group.june23_msdn.location
  account_kind              = "StorageV2"
  account_tier              = "Standard"
  account_replication_type  = "LRS"
  enable_https_traffic_only = true
  min_tls_version           = "TLS1_2"
  tags                      = var.base_tags
}
With the storage account created, we can read out the resource ID. The point of this functionality is to import resources created outside of Terraform, so ordinarily we would obtain this information from the Azure portal.
terraform state list # find the resource type and label
terraform state show azurerm_storage_account.june23_msdn # list the resource and capture its ID
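For a resource created outside Terraform there is no state to inspect, so the ID has to come from elsewhere: the portal, the Azure CLI (for example `az storage account show --query id`), or assembled by hand, since Azure storage account IDs follow a fixed pattern. A minimal sketch using this article’s example names (the subscription ID is elided here, as in the article):

```shell
# Hypothetical values from this walkthrough -- substitute your own.
SUBSCRIPTION_ID="xxx"          # elided, as elsewhere in this article
RESOURCE_GROUP="june23-rg"
STORAGE_ACCOUNT="thedocstor1"

# Azure storage account IDs follow this fixed pattern:
RESOURCE_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT}"
echo "${RESOURCE_ID}"
```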
Now we can remove this resource from state, which means Terraform will no longer track its changes or be aware of its existence. If we tried to re-apply our code at this point, we would get an error because a resource with the same name would already exist.
We will also delete the code block for the storage account; otherwise Terraform would attempt to recreate it.
terraform state rm azurerm_storage_account.june23_msdn # Remove the storage account from state
Now for the new bit: we just need to create an import block. This tells Terraform the resource type and label, and links it to the resource ID of the existing object to be imported.
import {
  # ID of the cloud resource; check provider documentation for importable resources and ID format
  id = "/subscriptions/xxx/resourceGroups/june23-rg/providers/Microsoft.Storage/storageAccounts/thedocstor1"

  # Resource address
  to = azurerm_storage_account.june23_msdn
}
Now we run the following command to generate the code. This creates a new file, in this case generated.tf. Note that we don’t have to pre-create either the file or a corresponding resource block, as we would when using terraform import.
terraform plan -generate-config-out="generated.tf"
At this point you should have a new ‘generated.tf’ file and, depending on the resource type, you may or may not get an error message. Remember this is the first official release of an experimental feature, so this is to be expected.
I did get errors pertaining to out-of-range extended attributes, for retention and queue properties you are unlikely to want to set explicitly, so I simply removed them (to accept the defaults) and my config was clean again.
The process generates what I would call literal code, so it’s unnecessarily verbose: all variables, data sources and tightly coupled resources have their absolute values hard-coded (e.g. the resource group and location in the example). On the plus side, this is a good way to evaluate a resource and see all possible arguments with example default values.
# __generated__ by Terraform
resource "azurerm_storage_account" "june23_msdn" {
  access_tier                       = "Hot"
  account_kind                      = "StorageV2"
  account_replication_type          = "LRS"
  account_tier                      = "Standard"
  allow_nested_items_to_be_public   = true
  allowed_copy_scope                = null
  cross_tenant_replication_enabled  = true
  default_to_oauth_authentication   = false
  edge_zone                         = null
  enable_https_traffic_only         = true
  infrastructure_encryption_enabled = false
  is_hns_enabled                    = false
  large_file_share_enabled          = null
  location                          = "westeurope"
  min_tls_version                   = "TLS1_2"
  name                              = "thedocstor1"
  nfsv3_enabled                     = false
  public_network_access_enabled     = true
  queue_encryption_key_type         = "Service"
  resource_group_name               = "june23-rg"
  sftp_enabled                      = false
  shared_access_key_enabled         = true
  table_encryption_key_type         = "Service"
  tags = {
    codepath          = "az/june23/msdn"
    compliance        = ""
    confidentiality   = ""
    creator           = "doc"
    description       = "brief description of your service here"
    environment       = "msdn"
    expireson         = ""
    hoursofoperation  = "working hours only"
    iacversion        = "270423"
    maintenancewindow = "evening and weekends"
    project           = ""
    service           = "june23"
  }
  blob_properties {
    change_feed_enabled           = false
    change_feed_retention_in_days = 0
    default_service_version       = null
    last_access_time_enabled      = false
    versioning_enabled            = false
  }
  network_rules {
    bypass                     = ["AzureServices"]
    default_action             = "Allow"
    ip_rules                   = []
    virtual_network_subnet_ids = []
  }
  queue_properties {
    hour_metrics {
      enabled               = true
      include_apis          = true
      retention_policy_days = 7
      version               = "1.0"
    }
    logging {
      delete                = false
      read                  = false
      retention_policy_days = 0
      version               = "1.0"
      write                 = false
    }
    minute_metrics {
      enabled               = false
      include_apis          = false
      retention_policy_days = 0
      version               = "1.0"
    }
  }
  share_properties {
    retention_policy {
      days = 7
    }
  }
}
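Because the generated block hard-codes values that were originally expressions, one sensible clean-up is to swap the literals back for references and drop arguments that just restate provider defaults. A sketch of what that might look like, assuming the resource group and tag variable from the original code are still in the same configuration:

```hcl
resource "azurerm_storage_account" "june23_msdn" {
  name                      = "thedocstor1"
  resource_group_name       = azurerm_resource_group.june23_msdn.name     # was the literal "june23-rg"
  location                  = azurerm_resource_group.june23_msdn.location # was the literal "westeurope"
  account_kind              = "StorageV2"
  account_tier              = "Standard"
  account_replication_type  = "LRS"
  enable_https_traffic_only = true
  min_tls_version           = "TLS1_2"
  tags                      = var.base_tags # was the literal tag map
  # Keep any other non-default arguments; remove the rest to accept provider defaults.
}
```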
Conclusions
I was comfortable with the traditional terraform import process and used it extensively: in summary, I would import the resource, then use the output of terraform state show as the basis of the code block I had to create manually.
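For comparison, that traditional workflow looked roughly like this (subscription ID elided, as above), and it required an empty resource block to exist in the configuration before the import would succeed:

```
# Old workflow: pre-create a matching (empty) resource block first, then:
terraform import azurerm_storage_account.june23_msdn "/subscriptions/xxx/resourceGroups/june23-rg/providers/Microsoft.Storage/storageAccounts/thedocstor1"
terraform state show azurerm_storage_account.june23_msdn # then hand-craft the code block from this output
```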
The new process certainly saves a step (and doesn’t pull through non-configurable attributes determined at runtime), but the resulting code will need to be topped and tailed as a minimum. Either way, this is a great improvement, and the functionality will only get better from here. 👏