Learn how to design and deploy cloud infrastructure using Infrastructure as Code (IaC) with a focus on Terraform.
Estimated reading time: 15 minutes
- This document outlines a learning initiative at Brainboard focused on mastering Terraform for cloud infrastructure design and deployment.
- Key learning outcomes include understanding Infrastructure as Code (IaC), Terraform basics, and a design-first approach to building cloud infrastructure.
- The guide will cover various aspects of Terraform, including provisioning tools, advantages of IaC, and practical coding examples.
- The document invites readers to engage with tutorials and resources to enhance their understanding of Terraform and its applications.
- IT Professionals
- Cloud Architects
- DevOps Engineers
Introduction
At Brainboard, we’re deeply committed to continuous learning and collaboration. Every Friday, we dedicate time to learn from one another. Recently, I seized this opportunity to start from scratch and master the design and deployment of a fictional Azure cloud infrastructure using Brainboard. Alongside the whole team, we’ve decided to share our journey with you.
This is an invitation: join us and learn Terraform by designing.
Key Learning Outcomes:
- Understand Infrastructure as Code (IaC) and Terraform Basics
- Terraform 2.0
- Understanding Terraform Providers
- Design-First Approach to Building Cloud Infrastructure
- Insights from a newly certified Cloud Architect
- Resources: Stay alert with the Latest Computing Trends
How to Use this Terraform Guide
The following sections will cover the nine distinct areas outlined in the Terraform Review Guide. Engage with the provided documentation, participate in the tutorials, and explore the extra resources linked in each section.
The content here represents the key insights gathered for each domain, though this guide does not encompass all there is to know. Your familiarity with the specific domain knowledge will determine the extent to which you need to explore the additional resources linked in each section to fully grasp the subject matter.
Excited? Let’s go!
- Introduction
- Key Learning Outcomes:
- How to Use this Terraform Guide
- 1. Understand Terraform’s Chaos
- Evolution of the Cloud
- The major differences:
- Explain what IaC is:
- Diverse Categories of IaC Tools
- The provisioning tools landscape
- The advantages of implementing Infrastructure as Code (IaC)
- Terraform 1.0
- Terraform concepts you need to know
- Grouping
- Terraform Count versus for_each
- Setting up Terraform manually
- Ways to get started:
- Developer workflows:
- Additional tools
- GPTs to help
- Writing terraform code
- On Brainboard
- 2. Terraform 2.0
- Learn the basics
- Terraform code structure
- Improved Ways to Start with Terraform
- Design your producer/consumer model
- Deploy
- Account Set-up checklist
- Terraform Variables
- Terraform Modules
- Terraform Secrets
- Terraform Azure use case
- Code Structure and Logic:
- Azure Landing Zone Masterclass
- Terraform Best Practices on Brainboard
- 3. Terraform Provider 1.1
- Introduction of Terraform Provider
- Terraform Provider Block Example
- Commonly Used Terraform Providers List
- Mix & Match Terraform Resources
- Configure Azure on Brainboard
- Setting Up AWS Infrastructure Using Terraform
- Extra: Website on Azure Cloud with Brainboard
- Potential Gotchas
- 4. Design to Terraform Code
- Let’s start with Brainboard
- Tutorial
- Design to Code methodologies
- Shift left methodologies
- Git as source of truth?
- 5. Insights from a newly certified Cloud Architect
- Shifting trends
- Is the future AI-driven infrastructure?
- Resources
- Templates
- Links
1. Understand Terraform’s Chaos
- Evolution of the Cloud
- Explain what IaC is
- Diverse Categories of IaC Tools
- The provisioning tools landscape
- Advantages of implementing IaC
- Terraform 1.0
- Terraform concepts you need to know
- Group terraform resources
- Terraform Count versus for_each
- Setting up Terraform manually
- Ways to get started
- Developer workflows
- GPTs to help
- Writing terraform code
- On Brainboard
Evolution of the Cloud
Let’s start when the cloud started.
- Pre-cloud: If we think about what tech companies building web applications needed to do in the early 90s and 2000s, here's what it looked like: you would come up with your idea, you would then need to write the software for your application, and then you would need to go off and buy a whole bunch of servers, set up a data center somewhere, and handle all of the power management, networking, and operational overhead that comes with running your own data center. This was a very challenging process.
- Cloud: this shifted quite a bit in the 2010s. Now, again, you have your idea, you program it up on a much more modern personal computer, and then rather than provision your own servers, you deploy to the cloud. It's become pretty much the de facto standard, but you can obviously still buy your own servers and host them yourself. Welcome to the on-demand era!
The major differences:
- Infrastructure provisioned via APIs
- Servers created & destroyed in seconds
- Long-lived + mutable → short-lived + immutable
There are three main approaches for provisioning cloud resources:
- GUI: Cloud console
- API or command-line interface (CLI)
- Infrastructure as Code, that's our focus today!
Explain what IaC is:
Infrastructure as Code (IaC) represents a transformative approach to infrastructure management, allowing you to define and manage your entire infrastructure through code. This methodology ensures complete transparency and control over the infrastructure, providing an accurate overview of your environment at any given moment. — Insights from “Terraform: Up & Running: Writing Infrastructure as Code” by O’Reilly
Diverse Categories of IaC Tools
- Ad hoc Scripts: These are custom scripts written to automate specific tasks or processes within an infrastructure. They are quick to create but might lack consistency and scalability.
- Configuration Management Tools: Tools like Ansible, Puppet, and Chef are designed to automate the configuration of software and systems on existing servers. They ensure that machines are configured to an exact specification.
- Server Templating Tools: Tools such as AMIs (Amazon Machine Images) capture the entire server state, including the OS, installed software, and configurations, allowing for rapid provisioning of identical servers.
- Orchestration Tools: Kubernetes exemplifies this category, managing the deployment, scaling, and operation of containerized applications across clusters of servers.
- Provisioning Tools: These tools focus on automating the setup of servers and other infrastructure components from scratch.
The provisioning tools landscape
is divided based on the tools' cloud compatibility:
- Cloud-Specific Tools: These are tied to a particular cloud platform, such as AWS CloudFormation, Azure Resource Manager, or Google Cloud Deployment Manager. They offer deep integration with their respective clouds but do not support multi-cloud environments.
- Cloud-Agnostic Tools: Terraform and Pulumi stand out in this category, offering the flexibility to manage infrastructure across any cloud provider. This approach enables a consistent workflow and tooling, irrespective of the underlying cloud platform.
The advantages of implementing Infrastructure as Code (IaC)
are manifold, offering a transformative approach to infrastructure management and scalability.
- IaC enhances the capacity to efficiently manage and expand your infrastructure.
- It underpins DevOps methodologies, facilitating swift, uniform deployment and infrastructure management. By automating provisioning and management processes, IaC ensures routines are consistent and replicable, significantly diminishing manual interventions and minimizing the risk of human error.
- Treating infrastructure configurations as code allows for version control through platforms like Git. This practice supports change tracking, enables reverting to earlier configurations, and promotes team collaboration, streamlining the development and deployment lifecycle.
Terraform 1.0
That’s the cloud infrastructure you will be able to duplicate from the templates’ catalog on Brainboard!
Terraform is a robust tool designed for creating, modifying, and maintaining infrastructure with precision and efficiency. It incorporates best practices for application software directly into infrastructure management, ensuring a streamlined and reliable process. — HashiCorp
- Cloud Agnostic: Compatible with many clouds and services (anything with an API)
- Terraform is often used in conjunction with other DevOps tools to create a comprehensive infrastructure automation strategy. This includes Ansible for configuration management, Packer for server templating, or Kubernetes for orchestration.
- Architecture Overview: At its core, Terraform operates through Terraform Core, a powerful engine that processes configuration files (including state and configuration details). It intelligently interacts with cloud providers’ APIs to align the actual infrastructure state with the desired state outlined in configurations. This is achieved via providers, such as the AWS Terraform Provider or the Cloudflare Terraform Provider, which serve as intermediaries between Terraform and the specific cloud services.
Terraform concepts you need to know
- terraform init: Terraform reads the terraform block, downloads the code for the selected provider from the Terraform Registry, and installs it in the working directory. The lock file records the specific dependency and provider versions installed in the workspace.
- terraform modules: Essentially, modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory. They are the main way to package and reuse resource configurations with Terraform.
We’ve dedicated a section below on Terraform Modules!
Browse module section of the Terraform registry
- terraform state file: Basically, it is Terraform’s representation of the world, a JSON file containing information about every resource and data object. It also contains sensitive info (eg. database password). It can be stored locally or remotely.
- terraform plan: Takes the Terraform configuration (desired state) and compares it with the Terraform state (actual state).
- Variables & Outputs: There are different kinds of variables you can use in Terraform: input variables — var.<name>, local variables — local.<name>, and output variables, which expose values from your configuration so they can be consumed elsewhere.
- Expressions: Template strings, operators, conditionals, for, splat, dynamic blocks, constraints
- Functions: Numeric, string, collection, encoding, filesystem, date & time, Hash & crypto, IP network, type conversion.
- Meta-arguments like depends_on: Terraform automatically generates a dependency graph based on references. If two resources depend on each other (but not on each other's data), depends_on specifies that dependency explicitly to enforce ordering. For example, if software on an instance needs access to S3, creating the aws_instance would fail if attempted before the aws_iam_role_policy exists.
- Meta-arguments like count: Allows the creation of multiple resources/modules from a single block. It's useful when the necessary resources are nearly identical. The for_each meta-argument allows more control to customize each resource than count does.
- Provisioner: Allows you to perform actions locally or remotely, e.g. file, local-exec, remote-exec, or vendor provisioners (Chef or Puppet).
- Terraform Environments: Think development, staging, or production environments. There are two main approaches: (1) workspaces, which use multiple named state sections within a single backend, and (2) breaking things out into different subdirectories within your file system.
- Code Rot: Refers to the concept that over time, things change about your software systems and if you don’t test and use code, the code will degrade (out of band changes, unpinned versions, deprecated dependencies, unapplied changes)
- Static Checks: Scan your code base either (1) with built-in tools (terraform fmt, validate, plan, and custom conditions) or (2) with external tools like tflint, checkov, tfsec, terrascan, terraform-compliance, or HashiCorp Sentinel.
Grouping
This feature is designed to organize and display the structure of your project files, although it's not technically creating groups but rather categorizing files for better management. For instance, you might start with all your Terraform configurations in a single file, say main.tf. However, to improve organization, you decide to move database-related configurations to a distinct file. To achieve this, you manually select the relevant configurations, right-click, and choose the option to edit the Terraform file name, specifying that these configurations should be moved to a database.tf file. This action effectively creates what we refer to as a "group," though in reality, it's a method of file segmentation.
Following this process, your project will consist of at least two files: the original main.tf, which contains the bulk of your configurations, and the newly created database.tf for database-specific configurations. When navigating the interface, clicking on different sections allows you to view the contents of either main.tf or database.tf, providing a simplified overview of the project's file structure. This feature is particularly useful for highlighting and understanding the organization of your configurations across different "groups" or files.
Terraform Count versus for_each
In a typical scenario, an architecture configured with Terraform might include a set of Azure resources: a single Azure Function, a Service Plan, a Private Endpoint, and a Storage Account. This setup works well for small-scale applications or projects. However, as requirements evolve, there’s often a need to scale these resources. For instance, a project may require not just one, but three Azure Functions, three Service Plans, three Storage Accounts, and three Private Endpoints to support increased load or to provide high availability.
Replicating the configuration manually for each resource is not only time-consuming but also prone to errors. Moreover, it contradicts the DRY (Don’t Repeat Yourself) principle, a fundamental software development practice aimed at reducing repetition.
- Manually duplicating resources: By employing this method, I can efficiently duplicate all relevant elements in one go, and if needed, duplicate them again. However, this approach can rapidly complicate the infrastructure, particularly when scaling up to create numerous components, such as ten Azure functions or even a hundred.
- Using count for resource management: This method involves the strategic use of variables to manage and scale cloud resources efficiently. For instance, when the goal is to create multiple storage accounts, you can leverage a variable together with count to specify the desired quantity. By setting count to three, you establish a directive to create three storage accounts. Similarly, you can define a variable, say stacks, to track the number of stacks being deployed, initializing it with a default value of three.
- Using the Terraform for_each function: In essence, the count method is suited for creating a predefined number of resources that share the same configuration, with minor variations such as incrementally named identifiers (e.g., appending 1, 2, 3 to resource names). It's a straightforward approach for scaling resources uniformly, whereas for_each iterates over a map or set of strings, giving each instance its own key. Both are sketched in the example below.
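To make the difference concrete, here is a minimal sketch (the resource and variable names, quantities, and hardcoded resource group are ours, purely illustrative) showing storage accounts created first with count and then with for_each:

variable "stacks" {
  type    = number
  default = 3
}

# count: three near-identical storage accounts, addressed by index (0, 1, 2)
resource "azurerm_storage_account" "with_count" {
  count                    = var.stacks
  name                     = "examplestorage${count.index}"
  resource_group_name      = "example-resources"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# for_each: one storage account per named environment, addressed by key
resource "azurerm_storage_account" "per_env" {
  for_each                 = toset(["dev", "staging", "prod"])
  name                     = "examplestore${each.key}"
  resource_group_name      = "example-resources"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Because count is positional, removing the first instance shifts every index after it; for_each is keyed, so individual instances can be added or removed without disturbing the others.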
Read more here:
Setting up Terraform manually
Setting up Terraform manually is a step-by-step process:
- Begin by installing Terraform to set up the necessary software on your system.
- Authenticate with your cloud provider (e.g., AWS) using IAM roles to ensure Terraform has the necessary permissions for resources such as RDS, EC2, IAM, S3, DynamoDB, and Route53. Tools like the AWS Command Line Interface facilitate this process.
- Execute the configuration command to retrieve essential credentials, including the access key ID, secret access key, and default region. This step is pivotal for enabling Terraform to interact with your cloud environment.
- Develop your Terraform configuration (Tf configuration), which defines your infrastructure requirements. This includes specifying the provider, region, and resources such as EC2 instances.
- Validate and deploy your configuration using Terraform's workflow commands: init to prepare your environment, plan to preview changes, and apply to enact the changes. Afterward, the destroy command can be used to cleanly remove deployed resources. A minimal configuration to run this workflow against is sketched below.
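As a reference point for steps 4 and 5, here is a minimal sketch of such a configuration (the region, AMI ID, and instance type are placeholders, not recommendations):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"
}

With credentials in place (for example via the AWS CLI), running terraform init, plan, and apply against this file exercises the full workflow, and terraform destroy removes the instance again.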
Ways to get started:
- Local backend: The state file lives right alongside your code. Downsides include (1) sensitive values stored in plain text, (2) it is uncollaborative, and (3) it is manual.
- Remote backend: Separates the individual developer from the state file, which is stored on a remote server somewhere. Terraform Cloud will host your state files for you and manage permissions. You can also self-manage a remote backend to store those state files using something like Amazon S3 (see the sketch below). This keeps sensitive data encrypted and is collaborative and automation friendly; the downside is increased complexity.
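For the self-managed option, a backend configuration might look like the following sketch (the bucket and DynamoDB table names are hypothetical, and both must already exist before terraform init is run):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"       # pre-existing bucket holding state files
    key            = "dev/network/terraform.tfstate" # path of this project's state within the bucket
    region         = "us-east-1"
    encrypt        = true                            # keep sensitive values encrypted at rest
    dynamodb_table = "example-terraform-locks"       # state locking to prevent concurrent edits
  }
}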
Developer workflows:
- Write / update code
- Run changes locally (for dev environment)
- Create pull request
- Run tests via CI
- Deploy to staging via CD (merge to main)
- Deploy to production via CD (release)
Additional tools
to consider when working with Terraform:
- Minimize code repetition with templating systems
- Enable multi-account separation (improved isolation)
- Clean up your resources
- Prevent human errors
- CICD tools like GitHub Actions, GitLab, Azure DevOps, etc…
GPTs to help
After conducting extensive evaluations on various GPT models specifically designed for infrastructure development, we have compiled the following refined list:
Writing terraform code
resource "aws_instance" "example" {
tags = merge(var.tags, {})
instance_type = "f1.2xlarge"
ami = "ami-0c55b159cbfafe1f0"
}
# This is a single-line comment
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0" # Another comment
}
variable "instance_name" {
description = "The name of the instance" # String
default = 5 # Number
}
output "instance_ip_addr" {
value = "${aws_instance.example.public_ip}" # Expression
}
variable "subnets" {
type = list(string)
default = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
}
variable "tags" {
type = map(string)
default = {
Environment = "Dev"
Team = "Backend"
}
}
variable "image_id" {
type = string
default = "ami-0c55b159cbfafe1f0"
}
resource "aws_instance" "example" {
tags = {
Name = "Instance-${var.instance_name}" # Interpolation
}
}
resource "aws_instance" "example" {
count = var.instance_count > 0 ? var.instance_count : 0 # Conditional
}
output "lower_instance_name" {
value = lower(var.instance_name) # Function usage
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
subnet_id = aws_subnet.example.id # Dependency on another resource
}
resource "aws_subnet" "example" {
# Subnet configuration...
}
On Brainboard
That’s the terraform resource you will be able to configure with Brainboard!
2. Terraform 2.0
- Learn the basics
- Terraform code structure
- Improved ways to start with Terraform
- Design your producer/consumer model
- Deploy
- Account Set-up checklist
- Terraform Variables
- Terraform Modules
- Terraform Secrets
- Terraform Azure use case
- Azure Landing Zone Masterclass
- Terraform Best Practices on Brainboard
Learn the basics
Terraform code structure
That’s the cloud infrastructure you will be able to configure with Brainboard!
HCL is much more than just a configuration language; it is a tool to control anything in your infrastructure. Learning blocks, their types, attributes, data types, conditional statements, functions, and resource dependencies can be daunting. Here is a summary of what you should know:
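As a compact illustration of those constructs, here is a minimal sketch (all names and values are ours, purely illustrative): a typed variable, locals using a conditional and a built-in function, an implicit resource dependency, and an output.

variable "environment" {
  type    = string
  default = "dev"
}

locals {
  is_prod     = var.environment == "prod"        # conditional flag
  name_prefix = lower("ACME-${var.environment}") # built-in function + interpolation
}

resource "azurerm_resource_group" "demo" {
  name     = "${local.name_prefix}-rg"
  location = local.is_prod ? "West Europe" : "North Europe" # conditional expression
}

resource "azurerm_storage_account" "demo" {
  name                     = "acmedemostorage01"              # lowercase letters and digits only
  resource_group_name      = azurerm_resource_group.demo.name # implicit dependency on the resource group
  location                 = azurerm_resource_group.demo.location
  account_tier             = "Standard"
  account_replication_type = local.is_prod ? "GRS" : "LRS"
}

output "storage_account_id" {
  value = azurerm_storage_account.demo.id
}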
Improved Ways to Start with Terraform
Each method has its unique advantages and considerations:
- Starting From Scratch: If you’re beginning with a specific project specification or concept, this method allows you to build your infrastructure from the ground up. Tailor your architecture precisely to your needs and understand each component thoroughly as you add it. Ideal for unique or custom projects where pre-existing templates don’t fit. Requires a solid understanding of both Terraform and your infrastructure needs. It can be time-consuming but highly educational.
- Using Pre-Defined Templates: Leveraging existing Terraform templates can jumpstart your project. These templates are pre-configured pieces of code that provide a basic structure for common infrastructure setups. Great for learning and quick deployment. It helps in understanding how different Terraform components fit together in a practical scenario.
- Migrating Existing Infrastructure: This involves transitioning your existing infrastructure into Terraform. It’s about converting current setups (possibly manually managed or managed through other IaC tools) into Terraform code. Provides an opportunity to audit and optimize your current setup. You can identify redundant resources or dependencies and streamline your infrastructure. Requires a careful approach to avoid service disruption. It’s a chance to refactor and improve your infrastructure but demands a thorough understanding of both the existing setup and Terraform.
- Using AI for Cloud Architecture and Terraform Implementation: Leveraging Brainboard’s AI assistant presents a new solution to the complexities of cloud architecture design and Terraform code generation. This process begins with the user crafting a precise ChatGPT prompt, enabling Bob to conceptualize the cloud architecture. Following this, Bob generates the necessary Terraform code. This streamlines the transition straight to the cloud configuration and deployment stages of the infrastructure, markedly simplifying the entire workflow.
The choice depends on your project requirements, existing infrastructure, and familiarity with Terraform and infrastructure concepts.
Design your producer/consumer model
Deploy
Account Set-up checklist
When learning about cloud architecture and IaC, there are several universally valuable takeaways:
1. Cloud Architecture Fundamentals: Understanding how to structure and manage cloud projects, environments, and architectures is a core skill in cloud computing, applicable across various platforms and tools.
2. CI/CD and Infrastructure-as-Code (IaC): Skills in continuous integration and continuous deployment, along with IaC, are critical in modern cloud environments. These concepts are essential for efficient, scalable, and reliable software development and deployment.
3. Working with Major Cloud Providers: Knowledge of cloud services provided by major platforms like Azure, AWS, Google Cloud Platform (GCP), and Oracle Cloud Infrastructure (OCI) is valuable. Each provider has unique features and services, but understanding one deeply often provides insights into others.
4. Version Control and Code Management: Proficiency in version control systems, particularly Git, is essential in software development and DevOps. Understanding how to manage code changes, handle merge requests, and maintain version history is crucial.
5. Security Practices in Cloud Computing: Learning about securing cloud environments, managing sensitive data, and implementing compliance measures is critical. This includes understanding how to configure and use remote backends for state management, encryption, and access control.
6. Effective Documentation: The ability to create clear, comprehensive documentation for cloud architectures, code, and processes is invaluable. Good documentation ensures that projects are maintainable, scalable, and understandable by others.
7. Problem-Solving and Analytical Skills: Developing strong problem-solving and analytical skills is beneficial in any technical role. These skills help in troubleshooting, optimizing performance, and designing efficient systems.
8. Adaptability and Learning New Tools: The cloud computing field is dynamic, with new tools and technologies emerging regularly. Learning one tool or technology often makes it easier to adapt to others, as many share underlying principles.
9. Module Management and Customization: Understanding how to effectively use, customize, and manage modules (reusable code blocks) is important for efficient cloud architecture development.
10. Advanced Cloud Management Techniques: Skills like creating loops in configurations, managing dependencies, and handling multi-subscription environments are advanced techniques that enhance your ability to design and manage complex cloud architectures.
Terraform Variables
Terraform Modules
Creating a Terraform Module is like drafting the blueprint for a component of a building. It starts with understanding what resources need to be grouped and how they interact. You define these in a collection of .tf and/or .tf.json files in a directory, which includes the resources that the module will manage. By defining input variables, you can customize the module's behavior without altering the underlying code, and output values can expose a subset of the resources' attributes to other parts of your Terraform code.
To use a module, you call it from your configuration with a module block and provide values for the input variables. The module's outputs can then be used by other parts of your infrastructure, fitting seamlessly into the larger design.
When starting a new infrastructure project, it's crucial to think about how to decompose your infrastructure into reusable modules. Begin with a 'design-first' approach, considering what components will be needed across different projects and environments. This foresight is like the architect planning for different spaces within the building, each with a specific purpose and requirement, yet all working together to form a cohesive whole.
Here’s a tutorial to guide you through this process, including tips on using Git as a module registry for versioning and collaboration.
- Understanding Terraform Modules as Reusable Architecture Blocks
- Define Your Module
- Use Git for Module Versioning and Sharing
- Importing and Using Modules in Brainboard
- Leveraging Outputs for Integration
- Advanced Module Strategies
- Managing Module Versions and Dependencies
- Continuous Integration and Deployment
- Monitoring and Maintenance
Terraform Secrets
Terraform Azure use case
This diagram and code snippet demonstrate Terraform code that defines a resource group, a virtual network, and a subnet in Azure (a sketch of such a configuration follows the list below):
Code Structure and Logic:
- Provider Declaration: The provider block configures the specified provider, in this case azurerm, which is for Azure Resource Manager. The features {} block is required but can be empty for the Azure provider.
- Resource Group: The azurerm_resource_group resource defines a new resource group named "example-resources". Resource groups in Azure are a way to organize Azure assets.
- Virtual Network (VNet): The azurerm_virtual_network resource creates a virtual network. It is given a name, an address space (in CIDR notation), and it references the location and the name of the resource group created earlier. This establishes a network within which you can place further Azure services.
- Subnet: The azurerm_subnet resource creates a subnet within the virtual network. It specifies the name, associated resource group, the VNet it belongs to, and the address range for the subnet. This subnet is where you could place virtual machines or other resources.
- Resource Dependencies: Terraform automatically understands resource dependencies. The subnet depends on the virtual network, which in turn depends on the resource group. Terraform creates and manages these in the correct order.
- Variable Usage: This example hardcodes values like names and CIDR blocks. In a dynamic real-world scenario, these would typically be replaced with variables for reusability and flexibility.
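The snippet itself is not reproduced here, but a minimal sketch of the structure described above (with names and CIDR ranges hardcoded, as the last bullet notes) would look roughly like this:

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "example" {
  name                 = "example-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}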
Azure Landing Zone Masterclass
Terraform Best Practices on Brainboard
1. Self-service model: Create repeatable patterns to standardize infrastructure components with terraform modules and templates. Architectures can be turned into private or public templates.
2. Consistency Across Environments: Maintain consistency across different cloud environments by promoting architectures from one environment to another.
3. Terraform Variables: Use terraform variables & locals to ensure customization and flexibility. Define output variables to display useful information.
4. Remote Backend: Configure a shared remote storage for the Terraform state file to enable team collaboration and prevent concurrent state file edits.
5. Edit code if needed: Edit the generated terraform code on Brainboard for better flexibility.
6. Versioning: Enable versioning to keep a history of changes, allowing you to track infrastructure evolution and revert to previous versions if necessary.
7. Terraform commands: Execute Terraform commands in parallel and ensure resources are provisioned or destroyed in the correct order.
8. Respect your Standards: Build cost-efficient Terraform infrastructures, monitor for security vulnerabilities, and implement remediation strategies. Continuously test your infrastructure both before and after deployment to ensure it meets requirements and standards.
9. Webhooks: Use webhooks to trigger external actions as part of your automation strategy.
10. Code Visualization: Visualize resources when importing Terraform code and keep your Terraform code organized by grouping resources into multiple .tf files.
11. Documentation: Document your infrastructure work and any changes with Readme files, input descriptions, and tags.
3. Terraform Provider 1.1
- Introduction of Terraform Provider
- Terraform Provider Block Example
- Commonly Used Terraform Providers List
- Mix & Match Terraform Resources
- Configure Azure on Brainboard
- Setting Up AWS Infrastructure Using Terraform
- Extra: Website on Azure Cloud with Brainboard
- Potential Gotchas
Introduction of Terraform Provider
As previously mentioned, Terraform providers are plugins that enable interaction with external APIs. Concretely, they are written in Go and use the Terraform Plugin SDK. They include cloud providers and Software-as-a-Service providers. Hundreds of providers are available for Terraform, offering a versatility that platform-specific languages lack. Most providers are maintained by HashiCorp, dedicated vendor teams, or community groups.
Provider configurations should be declared in the root module of your Terraform project. They can be declared in any .tf file. I would recommend either putting them in main.tf, creating a providers.tf file specifically for the providers and nothing else, or alternatively a versions.tf file which would include the required_providers block and specify the Terraform version. Any child modules receive their provider configuration from the root module.
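For example, a versions.tf along those lines might contain something like this sketch (the version constraints are illustrative, not recommendations):

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

Pinning both the Terraform version and the provider versions keeps every collaborator, and your CI/CD pipeline, running against the same toolchain.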
Terraform Provider Block Example
Terraform providers are set up using a provider block, tailored for specific configurations like AWS access keys and regions. For AWS, replace Azure-specific configurations with AWS-related settings such as access_key, secret_key, and region. Use alias for managing resources across different AWS accounts or regions. Here’s how you’d configure it for AWS:
provider "aws" {
region = "us-west-2"
access_key = "your_access_key"
secret_key = "your_secret_key"
}
provider "aws" {
alias = "east"
region = "us-east-1"
access_key = "your_access_key"
secret_key = "your_secret_key"
}
resource "aws_instance" "example" {
provider = aws.east
# other configuration
}
Commonly Used Terraform Providers List
Many providers exist, but for now we are focusing on the major ones:
- Azure: Microsoft Azure is a comprehensive cloud platform by Microsoft, encompassing a broad array of services such as IaaS, PaaS, and SaaS, catering to diverse needs like computing, storage, and application development. In Terraform, Azure Resource Providers facilitate interaction with Azure’s various services, allowing for efficient management and provisioning of Azure resources through Terraform’s infrastructure as code approach.
- AWS: Amazon Web Services is a leading cloud platform by Amazon, offering a wide range of scalable and cost-effective cloud services for application development, deployment, and management. In Terraform, AWS services are managed through specific resource providers, facilitating the creation and control of AWS resources directly from Terraform configurations, bridging the gap between code and AWS’s infrastructure services.
- GCP: Google Cloud Platform is Google’s extensive suite of cloud services, providing diverse solutions for computing, storage, databases, machine learning, and data analytics. The Google Cloud Terraform Provider, developed by Google and HashiCorp, enables users to manage GCP infrastructure using Terraform, streamlining the deployment and management of GCP resources.
- OCI: Oracle Cloud Infrastructure is Oracle Corporation’s cloud platform offering a broad spectrum of services including IaaS, PaaS, and SaaS. Unlike many other cloud providers, the OCI Terraform Provider is managed directly by Oracle’s open source team, rather than HashiCorp, highlighting its unique integration and support for Oracle Cloud resources.
These 4 providers, with their cloud and data resources are available natively on Brainboard.
Mix & Match Terraform Resources
Configure Azure on Brainboard
Below is a more complex Terraform example that provisions an Azure Virtual Network, a couple of subnets, a Network Security Group with rules, and an Azure Virtual Machine. This diagram and Terraform configuration demonstrate a more comprehensive setup within Azure:
- A resource group for organizing related resources.
- A virtual network with an address space to host subnets.
- Two subnets for separating internal and external traffic.
- A network security group (NSG) to define a set of network security rules.
- A network security rule that allows inbound SSH traffic.
- A public IP to be used by a network interface.
- A network interface attached to the internal subnet and associated with the public IP.
- A virtual machine configured with this network interface, using an Ubuntu Server image.
In this setup, the security rule allows for SSH access (typically port 22), which is useful for initial configuration and management. The virtual machine’s OS disk is defined to be created from an image, specifying its properties like caching and the type of managed disk.
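As a small taste of the security pieces in that list, here is a hedged sketch of the NSG and its SSH rule (the names, location, and resource group are placeholders; the full VM and networking setup is omitted):

resource "azurerm_network_security_group" "example" {
  name                = "example-nsg"
  location            = "West Europe"
  resource_group_name = "example-resources"
}

resource "azurerm_network_security_rule" "ssh" {
  name                        = "allow-ssh"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22" # SSH
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "example-resources"
  network_security_group_name = azurerm_network_security_group.example.name
}

In the full setup the NSG would then be associated with the internal subnet (via azurerm_subnet_network_security_group_association) so the rule actually applies to the VM's network interface.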
Setting Up AWS Infrastructure Using Terraform
Start by exploring the GitHub repository containing Terraform configurations for AWS infrastructure:
- Backend + Provider Configuration: Configure Terraform backend using an S3 bucket for state management, enabling collaboration and state locking. Set up the AWS provider in Terraform by specifying your desired region and authentication method (e.g., IAM roles, access keys).
- EC2 Instances: Define aws_instance resources for your virtual servers, selecting an appropriate AMI and instance type, and configuring SSH access with a key pair. Attach security groups to manage network access to instances securely.
- S3 Bucket: Create an S3 bucket with aws_s3_bucket for object storage, configuring access policies and region settings as needed.
- VPC Configuration: Establish a Virtual Private Cloud (aws_vpc) with a defined CIDR block for resource isolation and network management. Within the VPC, create subnets (aws_subnet) specifying their CIDR blocks and availability zones for regional distribution.
- Security Groups + Rules: Utilize aws_security_group resources to define inbound and outbound rules, ensuring secure access to and from your AWS resources.
- Application Load Balancer (ALB): Deploy an ALB (aws_lb) to distribute incoming traffic across multiple targets, specifying the necessary subnets and security groups. Set up an ALB target group (aws_lb_target_group) and attach EC2 instances (aws_lb_target_group_attachment) to evenly distribute traffic.
- Route 53 Zone + Record: Configure a Route 53 hosted zone (aws_route53_zone) for DNS management and domain name services. Create DNS records (aws_route53_record) to link your domain to AWS resources like your ALB, facilitating easy access.
- RDS Instance: Deploy an RDS instance (aws_db_instance), selecting a database engine and configuring instance size and storage, ensuring data persistence and scalability.
For a detailed walkthrough on manually configuring AWS architecture, including practical examples and best practices, consider watching instructional videos like the one linked: Configure AWS Infrastructure.
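To give a flavor of what the VPC-related bullets translate to, here is a minimal sketch (the CIDR blocks, names, and availability zone are placeholders):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "example-vpc"
  }
}

resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_security_group" "web" {
  name   = "example-web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTPS from anywhere"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}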
Extra: Website on Azure Cloud with Brainboard
As you design, Brainboard automatically updates the Terraform code, reflecting your infrastructure’s current state.
In this tutorial, we will embark on constructing a sleek yet robust cloud architecture for hosting a website on Azure, using Brainboard.
- Azure Website Infrastructure Overview
- Infrastructure as Code with Terraform
- CI/CD Workflows and Deployment
- Monitoring and Drift Detection
- Deployment and Validation
- Public Template Available for FREE
Potential Gotchas
Here are some potential gotchas with Terraform that can lead you to have a bad day:
- Name changes when refactoring (see the moved block sketch after this list)
- Sensitive data in terraform state files
- Cloud timeouts
- Naming conflicts
- Forgetting to destroy test-infra
- Uni-directional version upgrades
- Multiple ways to accomplish same configuration
- Some Params are immutable
- Out-of-band changes
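The first gotcha deserves a concrete illustration: renaming a resource in code makes Terraform plan to destroy the old one and create a new one. Since Terraform 1.1, a moved block can record the rename instead. A minimal sketch (the resource names and AMI are hypothetical):

# Previously declared as: resource "aws_instance" "web"
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"
}

# Tells Terraform the existing state entry was renamed, not replaced
moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}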
4. Design to Terraform Code
- Let’s start with Brainboard
- Tutorial
- Design to Code methodologies
- Shift left methodologies
- Git as source of truth?
- Resources
Let’s start with Brainboard
- A tool to analyze each step of the cloud infrastructure design process.
- Same tooling for each step of the process.
- A design tool for cloud infrastructure that doesn’t drift from the actual implementation.
- A way to track both high-level and low-level designs.
- A tool or method to translate designs into code, preferably Terraform.
- Standardizations and best practices for writing infrastructure code.
- A review process for the code before it’s pushed to the repository.
- Testing tools for infrastructure as code, both locally and remotely.
- A tool or method to maintain and update local testing plugins.
- A pre-commit tool that works seamlessly and doesn’t require extensive maintenance.
- A clear review process that considers both the design and the code.
- A reliable deployment method post-approval.
- A holistic approach to building infrastructure that includes people working on it.
Tutorial
Join Chafik in this enlightening journey on Brainboard, where you’ll learn the ropes of crafting your initial cloud infrastructure setup, designed for effortless scaling down the line. This step-by-step tutorial is your golden ticket to mastering infrastructure as code, specifically tailored for users of Azure with Terraform.
Here’s what you’ll uncover:
- Function app in Azure with the private Endpoint
- Terraform Resources available with Azure provider
- Location & variables setup
- Resource group
- Linux function app + service plan & storage account
- Resources Configuration
- Terraform code autogenerated
- Embedded documentation for every idcard field
- Validate if my tf code is valid (or not)
- Terraform init & Terraform Plan
- Everything is good! Let’s continue adding the private endpoint!
- Don’t forget the subnet before!
- Brainboard automatically detected the relationship between vnet and subnet
- Add the private endpoint; Brainboard detected the Subnet ID, the resource group name, and the location, which is a variable
- Leverage the CI/CD to create a workflow that allows me to check for security, costs, naming conventions, and policies before I deploy
- Version my infrastructure
- Pull request in your preferred repository
Design to Code methodologies
The “Design First, Code When Needed” methodology revolutionizes software development by prioritizing seamless integration between design and code. It counters the traditional, compartmentalized workflow that often results in a disconnect between these two critical phases, leading to inefficiencies and a disjointed final product. This approach advocates for keeping the design and coding phases in sync, ensuring updates to the code are reflected in the design and vice versa. By doing so, it addresses a common flaw in software development: the failure to maintain updated documentation that matches the actual codebase. Emphasizing the importance of matching design needs with code changes, this methodology facilitates a more cohesive and effective development process.
The introduction of tools like drag & drop designers and Continuous Integration/Continuous Deployment (CI/CD) pipelines further streamlines this process, reducing the feedback loop and enhancing efficiency. While the cost of implementation may vary, the long-term benefits of improved efficiency and a more robust final product highlight the value of adopting this forward-thinking approach.
Shift left methodologies
The “Shift Left” methodology in cloud infrastructure management emphasizes an early and continuous focus on quality and efficiency throughout the design, coding, testing, and deployment phases. This approach encourages teams to integrate crucial practices such as planning, testing, and review early in the development cycle, thereby identifying and addressing potential issues well before deployment.
By starting with a detailed design phase, teams can visualize and understand the cloud infrastructure’s complexities upfront, reducing the risk of “design drift” — where initial designs become outdated due to rapid changes in cloud infrastructure, leading to discrepancies that are hard to reconcile later.
Transitioning from design to code, the methodology leverages tools like Terraform to translate intricate designs into executable code, emphasizing the importance of code review to ensure infrastructure robustness and reliability. Testing, an essential step in this approach, is conducted to verify security, cost-effectiveness, and functionality, employing various plugins and tools for a thorough assessment.
The “Shift Left” methodology culminates in a review and approval process, ensuring the final cloud infrastructure aligns with the initial design and operates effectively in real-world scenarios. This proactive approach to cloud infrastructure management not only improves the efficiency and reliability of the infrastructure but also significantly reduces the likelihood of issues arising post-deployment, facilitating a smoother implementation process. Through understanding and applying these steps, teams can master cloud infrastructure management, ensuring successful deployments that meet or exceed project requirements and expectations.
Git as source of truth?
Customers typically know whether they require Git integration, and they generally fall into three groups.
- A significant majority, approximately 80%, prefer not to deploy directly with Brainboard. Despite planning and designing with Brainboard, these users adhere to strict security protocols and pipeline processes that necessitate the use of Git. This integration allows them to maintain their existing workflows, including code reviews and automated deployments, by pushing their final Brainboard configurations into Git.
- A smaller segment of our user base leverages Git as a method for backup. While they conduct their entire planning and deployment process within Brainboard, they opt to synchronize their work with Git. This ensures that, in the event of Brainboard becoming unavailable or experiencing issues, they retain access to their Terraform code externally.
- There are users who bypass Git altogether, preferring to apply their configurations directly from Brainboard. This group does not engage in pull requests or external code storage, relying solely on Brainboard for their deployment needs.
5. Insights from a newly certified Cloud Architect
Shifting trends
By 2031, the DevOps market is projected to hit $27.6 billion, with a staggering growth rate of 24.59% from 2024 to 2031. Key drivers and trends include:
- Kubernetes: Continues as a cornerstone for container orchestration
- AIOps: Employing AI to automate and optimize IT operations, costs and drifts.
- MLOps: Focus on automating and streamlining the deployment and maintenance of ML models.
- DevSecOps: Integrating security into development pipelines enhances safety and efficiency.
- Multi + Hybrid: Managing applications across multiple cloud providers.
- GitOps: Using Git as a source of truth.
- Platform Engineering: Internal platforms that give teams self-service access to infrastructure and tooling.
- Serverless: Reduces operational overhead, costs, and complexity, focusing on event-triggered code execution.
- Low/No-Code Platforms: Enable non-technical users to participate in DevOps processes.
- Industry-specific: Cloud services tailored to specific industries, such as AWS Healthcare and IBM Cloud for Financial Services.
Is the future AI-driven infrastructure?
Good question. Since generative AI took off in early 2023, the way my team works has undergone a profound transformation. As a leader, my focus has shifted significantly. No longer am I entrenched in the details of writing Terraform code or immersing myself in the nitty-gritty of infrastructure tasks.
Instead, AI has empowered me to elevate my attention to strategic matters, steering my team with a broader vision and deeper insight. This technological shift has not only streamlined our operations but has also reshaped the way we approach our work, allowing us to be more innovative and forward-thinking.
Our vision with Brainboard AI is to facilitate the initial stages of your infrastructure projects, or to provide clarity in complex Terraform configurations. However, it’s important to note that while our AI can assist in these areas, it isn’t designed to autonomously generate valid Terraform code or produce perfectly detailed diagrams on its own.
We stand by the philosophy that having a starting point, even if it’s not complete, is far more advantageous than having nothing at all.
Resources
Templates
This architecture is for building a serverless operational data lake for retail using AWS. It provides a scalable and efficient solution for storing and processing large volumes of data in a cost-effective manner.
AKS Baseline Cluster is an architecture for deploying and managing Kubernetes clusters on Azure using the AKS service.
This architecture is for integrating AKS (Azure Kubernetes Service) with Cosmos DB on the Azure cloud platform. It provides a solution for deploying containerized applications with Cosmos DB as the underlying database.
This architecture is for deploying and managing Apache Airflow environments using Terraform on Amazon Web Services (AWS) through Amazon Managed Workflows for Apache Airflow (MWAA).
This architecture is for exposing on-premise applications with Azure Application Gateway, as described in the blog post and Terraform repository provided.
This architecture is a highly available 3-tier architecture designed for deployment on AWS. It provides a scalable and resilient infrastructure for hosting web applications.
This architecture diagram depicts the setup of an AWS Auto Scaling Group (ASG) and Load Balancer (LB) within a Virtual Private Cloud (VPC) environment, utilizing multiple subnets for high availability and scalability.
This architecture is for an AWS Bastion, which provides secure remote access to resources in the AWS cloud. It acts as a gateway for accessing instances in private subnets.
AWS Carbon footprint architecture aims to measure and reduce the environmental impact of AWS cloud services.
The architecture shown in the pictures is for AWS Control Tower Account Factory, which enables automated and scalable account provisioning in AWS.
This architecture is for monitoring AWS costs using Brainboard, a cloud provider management tool. It provides real-time insights and analysis of AWS costs.
This architecture diagram represents the setup of an application on AWS Elastic Beanstalk, a cloud platform provided by AWS.
This architecture depicts an AWS instance and database setup, utilizing AWS as the cloud provider.
This architecture depicts an AWS instance running inside a Virtual Private Cloud (VPC) on the AWS cloud provider.
This architecture is for an AWS Landing Zone, which is a cloud environment setup that provides a foundation for deploying, managing, and securing workloads on AWS.
This architecture diagram depicts the setup of an AWS Load Balancer, which is used for distributing incoming network traffic across multiple servers to enhance performance and reliability.
This architecture is designed to enable the creation of segregated environments for Development, Staging, and Production in AWS, ensuring best practices in access control and resource management.
This architecture is for AWS Parallel Cluster, a cloud-based service provided by AWS. It allows users to easily set up and manage high-performance computing clusters for various applications and workloads.
This architecture diagram depicts the integration between AWS S3 and API Gateway for cloud storage and API management.
This architecture is a serverless architecture implemented on AWS, utilizing services provided by the cloud provider. It aims to provide a scalable and efficient solution for hosting applications and services.
This architecture diagram represents an Azure AKS (Azure Kubernetes Service) implementation using a hub and spoke model on the Azure cloud.
This architecture diagram showcases the integration of Azure AKS (Azure Kubernetes Service) with Log Analytics for monitoring and managing containerized applications in the Azure cloud environment.
This architecture diagram represents the Azure API Management service, which is a cloud-based solution provided by Azure. It enables organizations to publish, manage, and secure APIs for their applications and services.
Azure Application Gateway is used to manage traffic to a Windows web app deployed on Azure. The architecture provides a scalable and available solution for hosting web applications.
The architecture depicted in the picture is a multi-region setup in Azure, utilizing load balancers, firewalls, availability sets, and virtual networks to create a resilient and scalable network infrastructure for hosting virtual machines.
This architecture represents the integration of Azure Data Factory with Synapse workspace, utilizing Azure as the cloud provider. It allows for seamless data orchestration and processing capabilities within the Azure ecosystem.
This architecture is for an Azure Data Landing Zone, which provides a cloud-based environment for data storage and processing.
This architecture is for an Azure disaster recovery multi-region deployment, ensuring high availability and data protection across multiple regions.
Azure Eventhub with Machine Learning is an architecture designed to leverage Azure's cloud platform for processing and analyzing data, specifically focused on event data, using machine learning algorithms.
This architecture diagram depicts an Azure Function app deployed on the Azure cloud platform, designed with Brainboard.
From the picture, it appears to be an architecture for managing secrets and sensitive information using Azure Key Vault within the Azure cloud environment.
This architecture diagram showcases the usage of Azure Key Vault in a cloud environment, utilizing the services provided by Azure. The architecture ensures secure storage and management of sensitive information, such as cryptographic keys, certificates, and secrets.
This architecture is for setting up an Azure landing zone, which provides a standardized and scalable foundation for deploying resources in the Azure cloud environment.
This architecture is a baseline for an Azure landing zone, providing a standardized setup for cloud infrastructure on the Azure platform.
This architecture diagram depicts the monitoring of a Linux web app using Azure services. It showcases the integration of Azure monitoring tools to track the performance and status of the web app.
This architecture is for setting up an Azure multi-subscription landing zone, providing a centralized and scalable foundation for managing multiple Azure subscriptions.
This architecture diagram showcases the network and security components of an Azure cloud environment.
Azure Private Endpoints enable secure and private communication between Azure services and virtual networks.
The architecture depicted in the picture is for Azure Sentinel, a cloud-based security information and event management (SIEM) solution provided by Azure.
This architecture diagram illustrates the setup of a serverless application on Azure, utilizing the services provided by the cloud provider.
This architecture is for an Azure Storage Account, which is a cloud-based storage solution provided by Azure. It is used for storing and managing various types of data in the cloud.
Azure Synapse Analytics workspace is a cloud-based platform provided by Azure for data analytics and processing. It offers a comprehensive solution for ingesting, preparing, managing, and serving data for business intelligence and machine learning purposes.
This architecture diagram represents an Azure Terraform module for managing identity resources in the Azure cloud. It provides a centralized solution for creating and managing user identities, access controls, and authentication mechanisms within an Azure environment.
This architecture diagram showcases the configuration of Azure VMs with load balancing rules in an Azure cloud environment.
This architecture represents an Azure Virtual Network (VNET) with Network Security Groups (NSGs) and a sample rule. It is used for secure networking in the Azure cloud environment.
Azure vWan is an architecture designed for connecting and managing virtual networks in the Azure cloud environment.
This architecture diagram depicts the connection between an Azure WebApp backend and a MongoDB database hosted on the cloud platform.
The architecture depicted here is for an Azure website hosted on the Azure cloud platform, with a corresponding template available on Brainboard.
This architecture is for an Azure Windows virtual machine (VM) hosted on the Azure cloud platform, available as a template on Brainboard.
This architecture is for setting up an extended SQL Server Always On backup using Azure. It involves creating a virtual network, subnet, SQL Server virtual machine, and a Recovery Services Vault in Azure.
This architecture is for the CAF (Cloud Adoption Framework) modules provided by Azure. It enables users to easily deploy infrastructure using Terraform and provides resources such as modules, examples, and documentation.
This architecture diagram depicts a cross-account AWS S3 setup. It enables the storage and sharing of data across different AWS accounts using Amazon S3.
The architecture shown in the picture is a DDOS protection architecture for Azure, designed to mitigate and prevent distributed denial-of-service attacks. It provides a secure and reliable infrastructure for protecting applications and services hosted on Azure.
This architecture is for a distributed data processing and integration network on Azure, which utilizes cloud services provided by Azure to process and integrate data efficiently.
This architecture is for deploying the Finacle Banking Suite on the AWS cloud, leveraging the capabilities of AWS services and infrastructure.
This architecture diagram illustrates the integration of GCP cloud CDN and load balancer, providing a scalable and high-performance solution for distributing and delivering content across multiple regions.
This architecture is for implementing GCP Cloud CDN and a load balancer for a cloud-based application or website, using GCP as the cloud provider and designed in Brainboard.
This architecture diagram depicts the setup of a GCP Kubernetes cluster using the Brainboard platform as the control panel.
This architecture diagram depicts the GCP Load Balancer setup for distributing incoming network traffic across multiple instances for improved availability and scalability.
This architecture is for a GCP Virtual Private Cloud (VPC) setup, which is available and managed through the Brainboard app.
This architecture is for a generic AWS SQS event handler, built on AWS and designed in Brainboard.
The architecture depicted in the picture is for a high-availability web application infrastructure with a scalable database. It utilizes Azure as the cloud provider and includes resources from the Terraform AWS VPC module.
This architecture is for highly available and zone-redundant Azure Spring Apps, utilizing Azure as the cloud provider. The details can be found at this link.
This architecture is for a Jenkins setup on the AWS cloud, using Brainboard to manage the Jenkins environment.
This architecture diagram showcases a Kubernetes provider using AKS on the Azure cloud platform. It demonstrates the integration of AKS as the Kubernetes cluster manager within the Azure infrastructure.
This architecture represents the integration of AKS (Azure Kubernetes Service) for hosting Kubernetes clusters, available on the Brainboard platform.
Multi-cloud Architecture enables the use of multiple cloud providers, with AWS as the primary provider, for enhanced scalability, flexibility, and redundancy.
This architecture is for a multi-region web app using CosmosDB on the Azure cloud provider.
This architecture is for the OCI landing zone, which is a framework for setting up a secure and compliant environment in Oracle Cloud Infrastructure.
This architecture is for a hub and spoke network design with overlapping IP addresses.
This architecture represents a PostgreSQL Flexible Server Database hosted on the Azure cloud provider. It is available for use and can be managed using the Brainboard platform.
This architecture represents a private DNS resolver implemented using Azure cloud services. It provides a secure and efficient method for resolving domain names within a private network.
This architecture is for reducing costs of S3 and RDS services in AWS by implementing specific optimizations and configurations. The details can be found in the provided blog post and GitHub repository.
This architecture represents a REST API for Amazon DocumentDB. Further details can be found at the linked template.
This architecture is for creating and managing a network infrastructure on Google Cloud Platform (GCP) using Terraform. It provides a centralized platform for creating and managing network resources.
This architecture is for deploying and managing Azure Key Vaults using Terraform, as described in the provided blog and Git repositories.
This architecture is for a VM extension with a cluster, using Azure as the cloud provider.
This architecture represents the VPC (Virtual Private Cloud) and subnets in AWS, which are used to create isolated virtual networks for deploying resources and managing network traffic.
This architecture is for a web app with a private link to an Azure SQL DB and storage. It allows for secure communication and data storage within the Azure cloud environment.
Links
These are the resources we’d like to share with you:
- Brainboard app
- IaC Terraform Best Practices (42 videos)
- Webinars: Design first, Terraform Basics, Modules, Variables, Secrets, CI/CD
- Use cases: Azure Landing Zone, AWS EKS, producer/consumer model or these 52 videos.
- Azure Active Directory, Active Directory, Azure Stack.
- HashiCorp Terraform Registry
- HashiCorp Cloud Engineer Certifications (free)
- The newly updated freeCodeCamp course by Andrew Brown (Free) 😁
- Terraform Associate Prep Tutorials
- Jumppad.dev and their Terraform-workshop repository (Free)
- The Terraform Hands On Labs Udemy course by Bryan Krausen (Paid)
- Complete Terraform Course with DevOps Directive (AWS)
- Learn Terraform with Azure with freeCodeCamp (Azure)
- Learn Terraform with GCP with freeCodeCamp (GCP)
- HashiCorp Terraform Associate Certification Course
- Top 20 Alternatives to Draw.io for Cloud Architecture
- 16 Best tools to Design your CICD Engine
- Best VS Code extensions for Cloud Engineers
- How to Detect Drift and Remediate Cloud Infrastructure Drift
- GPT for Cloud Architectures
- Terraform alternative: OpenTofu
- Best AI prompts for IaC
- DevOps Market Insights 2024
If you liked this tutorial, don’t forget to 👏 .
If you liked the tool I used, check Brainboard.
If you think Brainboard is a match for your organization, contact us.