COURSE

Terraform DevOps - Remote State

DIFFICULTY
Normal
APPROXIMATE TIME
1h00
RELATED MACHINE
webshell (Stopped)
Ubuntu Server 20.04 LTS
SSD Volume Type, 64-bit x86


XIV - Remote State



If we're building a large infrastructure with Terraform, storing information about that infrastructure becomes a crucial issue. But how do we store this information? If several people have to collaborate on updating the architecture, each person will have their own terraform.tfstate state file on their machine, which is not suitable for a collaborative Terraform project.

The Terraform backend lets us keep the state file containing all the details and tracking of resources that have been or will be provisioned with Terraform. We'll therefore see how the backend allows team members or developers to securely manage state without affecting existing resources.

1 - Terraform State and backend

We've already encountered the state file when working with Terraform: terraform.tfstate. It's essential for keeping track of the current state of the resources Terraform has deployed. When a team works on the same project, or our infrastructure grows, we need a way to store this file and share it with the whole project team.

So we have:

  • Terraform state - maps the objects declared in our Terraform configuration files to the corresponding objects in remote systems. All of this state is stored in the Terraform state file.

  • The Terraform state file - is by default stored locally, on the machine where we run Terraform commands, under the name terraform.tfstate.

Terraform state is stored in JSON format. Therefore, when we execute the terraform show or terraform output command, Terraform reads this JSON state file to produce its output.
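To get a feel for that JSON structure, here is a minimal sketch in Python. The state sample below is hand-written and heavily abbreviated for illustration (real files produced by Terraform contain many more fields), but it follows the version-4 layout with its top-level "resources" list:

```python
import json

# Hand-written, abbreviated stand-in for a terraform.tfstate file.
sample_state = """
{
  "version": 4,
  "terraform_version": "1.5.0",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "datascientest-instance",
      "instances": [{"attributes": {"id": "i-0abc123"}}]
    }
  ]
}
"""

# Parse the state and list every tracked resource address.
state = json.loads(sample_state)
for res in state["resources"]:
    print(f'{res["type"]}.{res["name"]}')  # aws_instance.datascientest-instance
```

This is essentially what tools (and terraform show -json) work with: a plain JSON document mapping resource addresses to their recorded attributes.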

Keeping the state file on our local computer (i.e. the local backend) is perfectly fine when we're working alone on a project. But when we're working in a team, storing the state file in a remote backend, such as an AWS S3 bucket, is a much better option.

While Terraform is applying changes to the infrastructure, the state file is locked. As a result, Terraform prevents anyone else from modifying the state at the same time.

2 - Defining the local Terraform backend

Before using Terraform's state and backends, we must first define them in the configuration files. We'll configure the local backend to store the state file on our local computer at a specified location.

By default, if we don't specify a backend, it will use the file called terraform.tfstate to store the infrastructure state. We can also specify a local backend in our provider.tf file to store the file under another name, as in the example below:

# Declare provider requirements and the backend
terraform {
  backend "local" {
    path = "/home/ubuntu/datascientest-backend/terraform.tfstate"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# the aws region where we want to deploy our various resources
provider "aws" {
  region = "eu-west-3
  access_key = "xxxxxxxxxxxxxxx" # the access key created for the user that will be used by terraform
  secret_key = "xxxxxxxxxxxxxxxxx" # the secret key created for the user who will be used by terraform
}


3 - Setting up a remote backend

You can store your state file locally on the machine by defining the local backend if you're working alone on a project. But when we're working with a team, we need a way to keep the state on a remote backend, by configuring an AWS S3 bucket. This allows all team members to update the Terraform state file and manage resources without affecting existing ones.

We'll create a bucket using the AWS CLI. Enter the following command to create a bucket called datascientest-bucket-terraform-s3 in the eu-west-3 region:

  Bucket names on S3 must be globally unique. Be sure to replace the bucket name with a unique value!
aws s3 mb s3://datascientest-bucket-terraform-s3 --region eu-west-3 --endpoint-url https://s3.eu-west-3.amazonaws.com

You should get a result similar to this:

make_bucket: datascientest-bucket-terraform-s3

Let's open the provider.tf file and replace the local backend configuration block with the following lines to define a remote backend instead (AWS S3):

terraform {

 backend "s3" {
   bucket = "datascientest-bucket-terraform-s3" # the bucket created is instead called datascientest-bucket
   key = "terraform.tfstate" # the file in the bucket that will guarantee the infrastructure's state
   region = "eu-west-3" # the region where the bucket is located
   access_key = "xxxxxxxxxxxxxxxx" # the access key created for the user who will be used by terraform
   secret_key = "xxxxxxxxxxxxxxxx" # the secret key created for the user who will be used by terraform
 }

  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# the aws region where we want to deploy our various resources
provider "aws" {
  region = "eu-west-3
  access_key = "xxxxxxxxxxxxxxx" # the access key created for the user that will be used by terraform
  secret_key = "xxxxxxxxxxxxxxxxx" # the secret key created for the user who will be used by terraform
}

Next, let's re-run the terraform init command in the Terraform project directory to initialize the plugins and providers required to work with the resources. This time, since we're moving the state from local storage to the AWS S3 bucket, we'll add the -migrate-state flag.

terraform init -migrate-state

Now let's apply the configuration and validate that the state file is now stored remotely:

terraform apply -auto-approve

XV - Expressions with Terraform

1 - Loops

Terraform offers several different syntaxes for setting up loops, each intended for use in a slightly different scenario:

  • the count parameter: loop over resources.

  • the for_each parameter: loop over resources and inline blocks within a resource.

  • the for expression: loop over lists and maps.

We'll talk about the count parameter and the for expression here.

2 - The count parameter

We'll go back and work on the instances.tf file to launch our datascientest-instance:

You can delete this file and recreate it if required.

resource "aws_instance" "datascientest-instance" {
  ami = var.image_id # only available in eu-west-3 region
  instance_type = var.type_instance # instance size (1vcpu, 1G ram)
  # user_data declaration in resource

  monitoring = var.monitoring
  network_interface {
    network_interface_id = aws_network_interface.interface_network_instance.id
    device_index = 0
  }
  tags = {
    Name = "datascientest" #instance tag
  }
}
# vpc creation for resources

  depends_on = [
    # security group must be created before instance is created
    aws_security_group.datascientest-sg,
  ]
}

Terraform has no for loops or other traditional procedural logic built into the language. Each Terraform resource will, however, have a meta parameter that we can use called count. This is Terraform's simplest iteration expression: it simply defines the number of copies of the resource to be created.

Therefore, we can create three datascientest-instance instances as follows:

resource "aws_instance" "datascientest-instance" {
  ami = var.image_id # only available in eu-west-3 region
  instance_type = var.type_instance # instance size (1vcpu, 1G ram)
  count = 3 # defines number of copies of resource
  # user_data declaration in the resource

  monitoring = var.monitoring
  network_interface {
    network_interface_id = aws_network_interface.interface_network_instance.id
    device_index = 0
  }
  tags = {
    Name = "datascientest" #instance tag
  }
}
# vpc creation for resources

  depends_on = [
    # security group must be created before instance is created
    aws_security_group.datascientest-sg,
  ]
}

The problem with these instructions is that all three instances will have the same name displayed on the console once the configuration has been applied. If we had access to a standard for loop, we could use its index, i, to give each instance a unique name.

To accomplish the same thing with Terraform, we could use count.index to get the index of each iteration in the loop:


resource "aws_instance" "datascientest-instance" {
  ami = var.image_id # only available in eu-west-3 region
  instance_type = var.type_instance # instance size (1vcpu, 1G ram)
  count = 3 # defines number of copies of resource
  # user_data declaration in the resource

  monitoring = var.monitoring
  network_interface {
    network_interface_id = aws_network_interface.interface_network_instance.id
    device_index = 0
  }
  tags = {
    Name = "datascientest ${count.index}" #which will display datascientest 1, datascientest 2, datascientest 3
  }
}
# vpc creation for resources

  depends_on = [
    # the security group must be created before the instance is created
    aws_security_group.datascientest-sg,
  ]
}

If we run the terraform plan command on the previous code, we'll hit the following problem:

Error: Missing resource instance key
│
│ on ebs.tf line 13, in resource "aws_volume_attachment" "datascientest_ebs_att":
│   13: instance_id = aws_instance.datascientest-instance.id
│
│ Because aws_instance.datascientest-instance has "count" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_instance.datascientest-instance[count.index]
╵
╷
│ Error: Missing resource instance key
│
│ on output.tf line 2, in output "datascientest-instance_ip_public":
│    2: value = aws_instance.datascientest-instance.public_ip # send public ip of instance datascientest
│
│ Because aws_instance.datascientest-instance has "count" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_instance.datascientest-instance[count.index]
╵
╷
│ Error: Missing resource instance key
│
│ on security.tf line 40, in resource "aws_network_interface_sg_attachment" "datascientest_sg_attachment":
│   40: network_interface_id = aws_instance.datascientest-instance.primary_network_interface_id
│
│ Because aws_instance.datascientest-instance has "count" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_instance.datascientest-instance[count.index]

The problem lies in the definition of outputs for instances and the various links between resources.

Let's take the case of outputs in the outputs.tf file as an example:

output "datascientest-instance_ip_public" {
  value = aws_instance.datascientest-instance.public_ip #returns the public ip of the datascientest instance
}

This instruction refers to a single instance. However, since we added the count argument, we also need to tell Terraform which element of the list we mean, so it knows which public IP to return. One option is a for expression that iterates over every instance.

Let's replace the code in this file with this:

 output "datascientest-instance_ip_public" {
   value = {
  for instance in aws_instance.datascientest-instance: #for all datascientest-instance instances
  instance.public_ip => instance.public_ip #considers a public ip of the occurrence of the ring as a public ip of instance
        }
}
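If it helps, this for expression behaves much like a dict comprehension in Python. Here is a rough analogy with made-up instance data (the dicts and IP addresses below are illustrative stand-ins, not real Terraform objects):

```python
# Hypothetical stand-ins for aws_instance.datascientest-instance:
# each dict plays the role of one instance object.
instances = [
    {"public_ip": "35.180.0.10"},
    {"public_ip": "35.180.0.11"},
    {"public_ip": "35.180.0.12"},
]

# Rough equivalent of the Terraform expression:
# { for instance in aws_instance.datascientest-instance :
#   instance.public_ip => instance.public_ip }
output_value = {inst["public_ip"]: inst["public_ip"] for inst in instances}
print(output_value)
```

Just as in Python, the expression produces one map entry per element of the list, so the output works no matter how many instances count creates.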

Now run the terraform plan command and then the terraform apply -auto-approve instruction.

We'll get the output:

Error: Missing resource instance key
│
│ on ebs.tf line 13, in resource "aws_volume_attachment" "datascientest_ebs_att":
│   13: instance_id = aws_instance.datascientest-instance.id
│
│ Because aws_instance.datascientest-instance has "count" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_instance.datascientest-instance[count.index]
╵
╷
│ Error: Missing resource instance key
│
│ on security.tf line 40, in resource "aws_network_interface_sg_attachment" "datascientest_sg_attachment":
│   40: network_interface_id = aws_instance.datascientest-instance.primary_network_interface_id
│
│ Because aws_instance.datascientest-instance has "count" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_instance.datascientest-instance[count.index]

We can see that this has solved the error on the outputs, but further modifications are still needed. We'll see how to improve the code in the next practical case.

Implementation

Let's fix the other two files so that we end up with working code.

We need to write a loop over the EBS disks so that we create three of them, one for each instance to mount. We'll also attach the security group to all the instances.

Write the ebs.tf file; don't forget to create the file with the touch command.


The security.tf file:

resource "aws_security_group" "datascientest-sg" {
  name = "datascientest-sg"
  description = "Authorizes incoming traffic on ports 80, 443 and 22 and all outgoing traffic"

  ingress {
    description = "Allow incoming traffic on port 443"
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Allow incoming traffic on port 80"
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Allow incoming traffic on port 22"
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1" # all protocols, to allow all outgoing traffic
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "datascientest-sg"
  }
}

resource "aws_network_interface_sg_attachment" "datascientest_sg_attachment" {
  security_group_id = aws_security_group.datascientest-sg.id
   network_interface_id = {
 for nic in aws_instance.datascientest-instance:
  nic.primary_network_interface_id => nic.primary_network_interface_id
  }
}

Let's run the terraform plan command:

terraform plan

Note that once we've used the count clause on a resource, it becomes a list of resources. Since datascientest-instance is now a list of instances, instead of using the standard syntax to read an attribute from this resource (<PROVIDER>_<TYPE>.<NAME>.<ATTRIBUTE>), we need to specify the instance we're interested in by providing its index in the list:

<PROVIDER>_<TYPE>.<NAME>[INDEX].<ATTRIBUTE>
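The indexing requirement is the same as with any list of objects. As a quick Python analogy (the dicts and values below are illustrative stand-ins, not real Terraform data):

```python
# Hypothetical stand-in: with count = 3, the resource behaves like a list
# of three objects rather than a single object.
datascientest_instance = [
    {"id": f"i-{i:03d}", "public_ip": f"35.180.0.{10 + i}"} for i in range(3)
]

# Reading an attribute requires picking an element first, just like
# aws_instance.datascientest-instance[0].public_ip in Terraform.
first_ip = datascientest_instance[0]["public_ip"]
print(first_ip)  # 35.180.0.10
```

Trying to read an attribute on the whole list (the old syntax) is exactly the "Missing resource instance key" error Terraform reported above.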

3 - Condition

Terraform offers several different ways to make loops and conditions, each intended for use in its own scenario.

To get the hang of conditions with terraform, you can read the documentation on the subject.

Conditions with parameter count

The count parameter we saw earlier allows us to make a simple loop. But we can use the same mechanism to make a basic condition. Let's start by looking at if statements, then move on to if-else statements in the next section.

  • If with the count parameter

We're going to add a variable called environment which defines the execution environment (DEV, QA, PROD) in our terraform code:

variable "environment" {
  type = string
  default = "dev"
}

Terraform doesn't support if statements as code. However, we can set up conditions using the count parameter and taking advantage of two properties:

  1. If we set count to 1 on a resource, we get one copy of that resource, whereas if we set count to 0, that resource is not created at all.
  2. Terraform supports _conditional expressions_ of the format <CONDITION> ? <TRUE_VAL> : <FALSE_VAL>. It's called ternary syntax, a common syntax in other programming languages. It will evaluate the Boolean logic in the CONDITION and if the result is true, it will return TRUE_VAL, if the result is false, it will return FALSE_VAL.
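The ternary syntax is the same idea as a conditional expression in most languages. As a quick analogy, here is what the same logic looks like in Python (the function name is ours, purely for illustration):

```python
def instance_count(environment: str) -> int:
    # Mirrors Terraform's: var.environment == "dev" ? 1 : 3
    return 1 if environment == "dev" else 3

print(instance_count("dev"))   # 1
print(instance_count("prod"))  # 3
```

The condition is evaluated first; the whole expression then takes the value of the true branch or the false branch, never both.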

By putting these two principles together, we can update the datascientest-instance resource definition as follows:

resource "aws_instance" "datascientest-instance" {
  ami = var.image_id # variable for friend definition
  instance_type = var.type_instance # instance size
  count = var.environment == "dev" ? 1 : 3 # if variable environment equals "dev", launch one instance, otherwise launch 3 instances
  # user_data declaration in resource

  monitoring = var.monitoring
  network_interface {
    network_interface_id = aws_network_interface.interface_network_instance.id
    device_index = 0
  }
  tags = {
    Name = "datascientest ${count.index}" #which will display datascientest 1, datascientest 2, datascientest 3
  }
}
# vpc creation for resources

  depends_on = [
    # the security group must be created before the instance is created
    aws_security_group.datascientest-sg,
  ]
}

If var.environment is dev, the count parameter takes the value 1; otherwise it takes the value 3, as requested. This is one way of setting up conditions in the deployment of our infrastructure with Terraform.

Let's move on to verification. Here's the complete instances.tf:

resource "aws_instance" "datascientest-instance" {
  #ami = data.aws_ami.datascientest-ami.id # image retrieval from data source
  ami = var.image_id # retrieve image returned by data source
  instance_type = var.type_instance # instance size (1vcpu, 1G ram)
  key_name = "datacientest_keypair"
 count = var.environment == "dev" ? 1 : 3 # if variable environment is equal to "dev", launch one instance, otherwise launch 3 instances
  user_data = <<EOF
    #!/bin/bash
         sudo yum update
    sudo yum install -y apache2
    sudo systemctl start apache2
    sudo systemctl enable apache2
    echo "<h1>Datascientest via TERRAFORM</h1>" | sudo tee /var/www/html/index.html
  EOF
  monitoring = var.monitoring
  network_interface {
    network_interface_id = aws_network_interface.interface_network_instance.id
    device_index = 0
  }
  tags = {
    Name = "datascientest ${count.index}" #which will display datascientest 1, datascientest 2, datascientest 3
  }
}

# vpc creation for resources
resource "aws_vpc" "datascientest_vpc" {
  cidr_block = var.cidr_block_vpc[0]

  tags = {
    Name = "datascientest_vpc"
  }
  depends_on = [
    # the security group must be created before the instance is created
    aws_security_group.datascientest-sg,
  ]
}
# create subnet for resources
resource "aws_subnet" "datascientest_subnet" {
  vpc_id = aws_vpc.datascientest_vpc.id
  cidr_block = var.cidr_block_subnet[0]
  availability_zone = var.availability_zone[0]

  tags = {
    Name = "datascientest_subnet"
  }
}

resource "aws_network_interface" "interface_network_instance" {
  subnet_id = aws_subnet.datascientest_subnet.id
  private_ips = ["172.16.10.100"]

  tags = {
    Name = "interface_network_instance"
  }
}

The variables.tf file:

variable "type_instance" {
  type = string
  default = "t3.micro"
}
variable "image_id" {
  type = string
  nullable = false
  default = "ami-064736ff8301af3ee"
}
variable "monitoring" {
  type = bool
  default = false
}
variable "ebs_size" {
  type = number
  default = "5"
}
variable "cidr_block_vpc" {
  type = list(any)
  default = ["172.16.0.0/16"]
}
variable "cidr_block_subnet" {
  type = list(any)
  default = ["172.16.10.0/24"]
}
variable "availability_zone" {
  type = list(any)
  default = ["eu-west-3a"]
}
variable "environment" {
  type = string
  default = "dev"
}

If we set the value of the environment variable to dev, we will indeed get only one instance:

##################################

  # aws_instance.datascientest-instance[0] will be created
  + resource "aws_instance" "datascientest-instance" {
      + ami = "ami-064736ff8301af3ee"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = false
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "datascientest 0"
        }
      + tags_all = {
          + "Name" = "datascientest 0"
        }
      + tenancy = (known after apply)
      + user_data = "66a7f3bc5f516a056b4d674bf326526a20f6cbf7"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
          + instance_metadata_tags = (known after apply)
        }

      + network_interface {
          + delete_on_termination = false
          + device_index = 0
          + network_card_index = 0
          + network_interface_id = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }
##################################

If we set the variable to prod, we get 3 instances:

##################################

  # aws_instance.datascientest-instance[0] will be created
  + resource "aws_instance" "datascientest-instance" {
      + ami = "ami-064736ff8301af3ee"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = false
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "datascientest 0"
        }
      + tags_all = {
          + "Name" = "datascientest 0"
        }
      + tenancy = (known after apply)
      + user_data = "66a7f3bc5f516a056b4d674bf326526a20f6cbf7"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
          + instance_metadata_tags = (known after apply)
        }

      + network_interface {
          + delete_on_termination = false
          + device_index = 0
          + network_card_index = 0
          + network_interface_id = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }

  # aws_instance.datascientest-instance[1] will be created
  + resource "aws_instance" "datascientest-instance" {
      + ami = "ami-064736ff8301af3ee"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = false
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "datascientest 1"
        }
      + tags_all = {
          + "Name" = "datascientest 1"
        }
      + tenancy = (known after apply)
      + user_data = "66a7f3bc5f516a056b4d674bf326526a20f6cbf7"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
          + instance_metadata_tags = (known after apply)
        }

      + network_interface {
          + delete_on_termination = false
          + device_index = 0
          + network_card_index = 0
          + network_interface_id = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }

  # aws_instance.datascientest-instance[2] will be created
  + resource "aws_instance" "datascientest-instance" {
      + ami = "ami-064736ff8301af3ee"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = false
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "datascientest 2"
        }
      + tags_all = {
          + "Name" = "datascientest 2"
        }
      + tenancy = (known after apply)
      + user_data = "66a7f3bc5f516a056b4d674bf326526a20f6cbf7"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
          + instance_metadata_tags = (known after apply)
        }

      + network_interface {
          + delete_on_termination = false
          + device_index = 0
          + network_card_index = 0
          + network_interface_id = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }
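The plan output above (three instances, tags like `"datascientest 2"`) is what a `count`-based resource produces. As a minimal sketch, assuming the AMI from this lesson and a hypothetical `script.sh` for the hashed `user_data` value:

```hcl
# Sketch of a configuration that would produce a plan like the one above.
# The count value and the script filename are assumptions for illustration.
resource "aws_instance" "datascientest-instance" {
  count         = 3
  ami           = "ami-064736ff8301af3ee"
  instance_type = "t2.micro"

  # Terraform shows user_data as a hash in the plan output
  user_data = file("script.sh")

  tags = {
    Name = "datascientest ${count.index}"
  }
}
```

Running `terraform plan` against such a configuration lists one `will be created` block per index, with `count.index` interpolated into the `Name` tag.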

##################################

You now know how to set up loops and conditions in Terraform.
