The problem with manual infrastructure: imagine you have to configure 50 servers, each with the same software, users, and settings. Doing this by hand would take days, be error-prone, and not be reproducible. Infrastructure as Code (IaC) solves this.
What you will learn:
Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools.
Ansible is an open-source automation tool for:
Control Node (your laptop/server)
├── Ansible installation
├── Playbooks (YAML files)
├── Inventory (hosts file)
└── SSH connections
    ├── → Managed Node 1
    ├── → Managed Node 2
    └── → Managed Node 3
Key characteristics:
# Ubuntu/Debian
sudo apt update
sudo apt install ansible

# macOS
brew install ansible

# pip (all systems)
pip install ansible

# Verify the installation
ansible --version
The inventory file defines which servers Ansible manages:
# 5-iac-files/ansible/hosts
[mycloudvms]
141.144.203.33
projectwerk.vives.be
linux.vives.live
[mycloudvms:vars]
ansible_user=root
ansible_password=P@ssword123
[ubuntu-servers]
141.148.235.108
[ubuntu-servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_rsa
Inventory groups:
- [mycloudvms]: group of cloud VMs
- [ubuntu-servers]: group of Ubuntu servers
- :vars: variables that apply to the whole group

Ansible's behaviour is configured in ansible.cfg:

# ansible.cfg
[defaults]
host_key_checking = False
inventory = hosts
remote_user = root
private_key_file = ~/.ssh/id_rsa
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
Ad-hoc commands: quick one-off commands without a playbook:

# Test connectivity
ansible all -i hosts -m ping

# System information
ansible mycloudvms -i hosts -a "cat /etc/os-release"

# Package installation
ansible ubuntu-servers -i hosts -m apt -a "name=htop state=present" --become

# Service management
ansible all -i hosts -m systemd -a "name=nginx state=started enabled=yes" --become

# File operations
ansible all -i hosts -m copy -a "src=/tmp/test.txt dest=/tmp/test.txt"

# User management
ansible all -i hosts -m user -a "name=devops shell=/bin/bash groups=sudo" --become
Commonly used modules:

- ping: test connectivity
- command/shell: run commands
- apt/yum: package management
- copy/file: file operations
- user/group: user management
- systemd/service: service management

Playbooks are YAML files that define more complex tasks:
---
- name: Playbook name
  hosts: target_group
  become: yes        # sudo privileges
  vars:
    variable: value
  tasks:
    - name: Task description
      module:
        parameter: value
# 5-iac-files/ansible/playbook.yaml
---
- name: Example Playbook for Ubuntu Servers
  hosts: all
  become: yes
  vars:
    new_user: devopsuser
    ssh_pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD..."
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Upgrade all packages
      apt:
        upgrade: dist

    - name: Install essential packages
      apt:
        name:
          - git
          - curl
          - htop
          - vim
          - docker.io
        state: present

    - name: Create a new user
      user:
        name: "{{ new_user }}"
        shell: /bin/bash
        state: present
        groups: sudo

    - name: Add SSH key for new user
      authorized_key:
        user: "{{ new_user }}"
        key: "{{ ssh_pub_key }}"

    - name: Ensure UFW is installed
      apt:
        name: ufw
        state: present

    - name: Allow SSH through firewall
      ufw:
        rule: allow
        name: OpenSSH

    - name: Enable UFW
      ufw:
        state: enabled

    - name: Start and enable Docker
      systemd:
        name: docker
        state: started
        enabled: yes
# 5-iac-files/ansible/playbook-createfile.yml
---
- name: My playbook
  hosts: all
  tasks:
    - name: Leaving a mark
      command: "touch /tmp/ansible_automated_file"
# Basic run
ansible-playbook -i hosts playbook.yaml

# Increased verbosity (debugging)
ansible-playbook -i hosts playbook.yaml -vvv

# Dry run (test without making changes)
ansible-playbook -i hosts playbook.yaml --check

# Specific hosts only
ansible-playbook -i hosts playbook.yaml --limit ubuntu-servers

# With extra variables
ansible-playbook -i hosts playbook.yaml -e "new_user=milan"
vars:
  packages:
    - nginx
    - mysql-server
  mysql_root_password: "secure123"

tasks:
  - name: Install packages
    apt:
      name: "{{ packages }}"
      state: present

  - name: Configure nginx
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: restart nginx
tasks:
  - name: Copy nginx config
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf
    notify: restart nginx

handlers:
  - name: restart nginx
    systemd:
      name: nginx
      state: restarted
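The `template` task shown earlier expects a Jinja2 template next to the playbook. A minimal hypothetical `nginx.conf.j2` could look like this (the `worker_count` variable is an assumption for illustration, not part of the original course files):

```jinja
# nginx.conf.j2 - hypothetical template; worker_count is an assumed variable
user www-data;
worker_processes {{ worker_count | default(2) }};

events {
    worker_connections 1024;
}
```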
tasks:
  - name: Install Apache on Ubuntu
    apt:
      name: apache2
      state: present
    when: ansible_distribution == "Ubuntu"

  - name: Install httpd on CentOS
    yum:
      name: httpd
      state: present
    when: ansible_distribution == "CentOS"
tasks:
  - name: Create multiple users
    user:
      name: "{{ item }}"
      state: present
    loop:
      - alice
      - bob
      - charlie

  - name: Install multiple packages
    apt:
      name: "{{ item.name }}"
      state: "{{ item.state }}"
    loop:
      - { name: nginx, state: present }
      - { name: apache2, state: absent }
project/
├── ansible.cfg
├── hosts
├── group_vars/
│ ├── all.yml
│ └── webservers.yml
├── host_vars/
│ └── server1.yml
├── roles/
│ ├── webserver/
│ │ ├── tasks/main.yml
│ │ ├── handlers/main.yml
│ │ ├── templates/
│ │ └── vars/main.yml
│ └── database/
├── playbooks/
│ ├── site.yml
│ └── webserver.yml
└── inventory/
├── production
└── staging
# Create a role
ansible-galaxy init roles/webserver

# Role structure
roles/webserver/
├── tasks/main.yml      # Main tasks
├── handlers/main.yml   # Handlers
├── templates/          # Jinja2 templates
├── files/              # Static files
├── vars/main.yml       # Role variables
├── defaults/main.yml   # Default values
└── meta/main.yml       # Role metadata
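A role's `tasks/main.yml` contains plain task lists without a play header. A minimal sketch of what `roles/webserver/tasks/main.yml` could contain (package and handler names are assumptions, not taken from the course files):

```yaml
---
# roles/webserver/tasks/main.yml - hypothetical sketch
- name: Install nginx
  apt:
    name: nginx
    state: present

- name: Deploy nginx configuration
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: restart nginx   # handler defined in handlers/main.yml
```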
# Create an encrypted file
ansible-vault create secret.yml

# Run a playbook that uses the vault
ansible-playbook -i hosts playbook.yml --ask-vault-pass

# Use a vault password file
ansible-playbook -i hosts playbook.yml --vault-password-file ~/.vault_pass
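Once decrypted at runtime, vaulted variables behave like any other variable. A sketch of a playbook loading the encrypted file (the `db_password` variable name is an assumption):

```yaml
---
- name: Use vaulted secrets
  hosts: all
  vars_files:
    - secret.yml                        # the encrypted file created above
  tasks:
    - name: Use a secret without logging it
      mysql_user:
        name: appuser
        password: "{{ db_password }}"   # assumed to be defined inside secret.yml
      no_log: true                      # keep the secret out of Ansible output
```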
Terraform is an open-source Infrastructure as Code tool from HashiCorp for:
OpenTofu is a community-driven fork of Terraform:
Terraform uses a declarative configuration language (HCL, the HashiCorp Configuration Language). This means you describe WHAT you want, not HOW to achieve it.
# Declarative: "I want 3 web servers"
resource "google_compute_instance" "web" {
  count        = 3
  name         = "web-server-${count.index}"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
    }
  }
}
Characteristics of a declarative language:
Ansible takes an imperative approach in its YAML playbooks: you describe HOW to perform the steps that lead to the desired result.
# Imperative: "Run these steps"
- name: Install and configure web servers
  hosts: all
  tasks:
    - name: Update package cache
      apt:
        update_cache: yes

    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Start nginx service
      systemd:
        name: nginx
        state: started
        enabled: yes

    - name: Copy configuration file
      copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      systemd:
        name: nginx
        state: restarted
Characteristics of the imperative approach:
Terraform (Declarative):
# You say: "I want 2 servers with these specifications"
resource "google_compute_instance" "app" {
  count = 2
  name  = "app-server-${count.index}"
  # ... configuration
}

# Terraform automatically determines:
# - whether the servers already exist
# - which ones must be created
# - in which order (dependencies)
# - what must change on updates
Ansible (Imperative):
# You say: "Run these actions on these servers"
- name: Configure application servers
  hosts: app_servers
  tasks:
    - name: Check if application is installed
      stat:
        path: /opt/myapp
      register: app_installed

    - name: Download application
      get_url:
        url: "{{ app_url }}"   # variable name assumed; original placeholder was lost
        dest: /tmp/app.tar.gz
      when: not app_installed.stat.exists

    - name: Extract application
      unarchive:
        src: /tmp/app.tar.gz
        dest: /opt/
        remote_src: yes        # the archive is already on the remote host
      when: not app_installed.stat.exists
| Aspect | Terraform (Declarative) | Ansible (Imperative) |
|---|---|---|
| Best for | Infrastructure provisioning | Configuration management |
| Mindset | "What do I want?" | "How do I do it?" |
| State | Tracks state | Stateless (mostly) |
| Dependencies | Computed automatically | Defined manually |
| Updates | Plan → Apply workflow | Playbook execution |
| Rollback | Via state management | Via reverse playbooks |
| Learning curve | Steeper for beginners | More intuitive |
Use Terraform (declarative) for:
Use Ansible (imperative) for:

# 1. Declarative: create the infrastructure with Terraform
tofu apply

# 2. Imperative: configure the servers with Ansible
ansible-playbook site.yml

This combines the strengths of both approaches into a complete automation workflow.
# OpenTofu installation (recommended)
# Ubuntu/Debian
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh -o install-opentofu.sh
chmod +x install-opentofu.sh
sudo ./install-opentofu.sh

# macOS
brew install opentofu

# Verify
tofu version

# Terraform installation (alternative)
# Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
1. WRITE   →   2. PLAN   →   3. APPLY   →   4. DESTROY
    ↓              ↓              ↓              ↓
.tf files      tofu plan     tofu apply    tofu destroy
# main.tf
resource "google_compute_instance" "web" {
  name         = "web-server"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
    }
  }
}
tofu plan
# Shows: 1 to add, 0 to change, 0 to destroy
tofu apply
# Creates the actual infrastructure
⚠️ CRITICAL: why you should NEVER delete resources manually in cloud dashboards!
❌ Wrong way - deleting manually:

# NEVER DO THIS:
# - go to the GCP Console
# - delete VM instances by hand
# - delete the VPC by hand
# - delete firewall rules by hand

Problems with manual deletion:
✅ Right way - terraform destroy:

# Plan the destruction (safety check)
tofu plan -destroy

# The output shows what will be removed:
# Plan: 0 to add, 0 to change, 5 to destroy.
#
# Changes to Outputs:
#   - ip = "34.78.123.45" -> null
#
# Do you want to perform these actions?

# Run the destroy
tofu destroy
# Confirm with: yes
Why terraform destroy is better:
1. Destroy planning (safe):

# See what would be destroyed, without doing it
tofu plan -destroy

# Example output:
# Terraform will perform the following actions:
#
#   # google_compute_firewall.ssh-server will be destroyed
#   - resource "google_compute_firewall" "ssh-server" {
#       - name = "default-allow-ssh-terraform" -> null
#       # ... more details
#     }
#
#   # google_compute_instance.vm_instance will be destroyed
#   - resource "google_compute_instance" "vm_instance" {
#       - name = "opentofu-instance" -> null
#       # ... more details
#     }
#
# Plan: 0 to add, 0 to change, 4 to destroy.

2. Selective destroy (specific resources):

# Destroy only a specific resource
tofu destroy -target=google_compute_instance.vm_instance

# Destroy several specific resources
tofu destroy -target=google_compute_instance.vm_instance -target=google_compute_firewall.ssh-server

3. Force destroy (no confirmation - DANGEROUS):

# Destroy automatically, without the "yes" prompt
tofu destroy -auto-approve
# ⚠️ ONLY USE IN AUTOMATION/CI/CD!
# NEVER RUN MANUALLY IN PRODUCTION!

4. Destroy with variable files:

# If you use custom tfvars
tofu destroy -var-file="production.tfvars"

# With specific variables
tofu destroy -var="environment=staging"
1. Always plan first:

# Workflow for a safe cleanup
tofu plan -destroy    # 1. review what will happen
tofu destroy          # 2. run after review

2. Back up state before destroying:

# Back up the state before large destroys
cp terraform.tfstate terraform.tfstate.backup.$(date +%Y%m%d)
tofu destroy

3. Environment-specific destroy:

# Per environment
tofu workspace select staging
tofu destroy
tofu workspace select production
tofu plan -destroy    # EXTRA CAREFUL IN PRODUCTION!

4. Protect critical resources:

# In your .tf files - prevent accidental destroys
resource "google_compute_instance" "critical_database" {
  name = "prod-database"

  # Prevent destroy via Terraform
  lifecycle {
    prevent_destroy = true
  }
}

5. Gradual destroy for complex setups:

# Step-by-step destroy of a large infrastructure
tofu destroy -target=google_compute_instance.web_servers
tofu destroy -target=google_compute_instance.app_servers
tofu destroy -target=google_sql_database_instance.db
tofu destroy    # the rest of the infrastructure

1. Resource dependencies:

# If destroy fails because of dependencies
tofu destroy -target=dependent_resource
tofu destroy    # then the rest

2. External changes (drift):

# If resources were changed manually
tofu refresh    # sync the state with reality
tofu destroy    # then destroy

3. Stuck resources:

# If resources cannot be removed
tofu state list                               # inspect the state
tofu state rm google_compute_instance.stuck   # remove from state
# Clean up manually in the cloud console (as a last resort)

4. Import forgotten resources:

# If resources were forgotten
tofu import google_compute_instance.existing projects/PROJECT/zones/ZONE/instances/INSTANCE
tofu destroy    # now destroy can find them
1. Automated cleanup scripts:

#!/bin/bash
# cleanup-dev-environment.sh
echo "Cleaning up development environment..."
cd terraform/environments/dev
tofu destroy -auto-approve
echo "Dev environment cleaned up!"

2. Scheduled cleanup (cron):

# Automatically clean up dev environments every Friday
0 18 * * 5 /home/user/scripts/cleanup-dev-environment.sh

3. Cost alert integration:

# Monitoring resource to keep track of costs
resource "google_billing_budget" "dev_budget" {
  billing_account = var.billing_account
  display_name    = "Dev Environment Budget"

  budget_filter {
    projects = ["projects/${var.project_id}"]
  }

  amount {
    specified_amount {
      currency_code = "EUR"
      units         = "100" # 100 EUR budget
    }
  }
}

🎯 Remember: terraform destroy is your safety net against costly mistakes and orphaned resources!
# Comments start with #

# Variables
variable "instance_name" {
  description = "Name of the instance"
  type        = string
  default     = "my-instance"
}

# Resources
resource "resource_type" "resource_name" {
  argument1 = "value1"
  argument2 = var.instance_name

  nested_block {
    nested_argument = "nested_value"
  }
}

# Outputs
output "instance_ip" {
  value = resource_type.resource_name.public_ip
}
# 5-iac-files/opentofu/demoGCE/main.tf
variable "gce_ssh_user" {
  description = "SSH user for GCE instances"
}

variable "gce_ssh_pub_key_file" {
  description = "Path to SSH public key file"
}

variable "gcp_project" {
  description = "GCP Project ID"
}

variable "gcp_region" {
  description = "GCP Region"
  default     = "us-central1"
}

variable "gcp_zone" {
  description = "GCP Zone"
  default     = "us-central1-a"
}

variable "gcp_key_file" {
  description = "Path to GCP service account key file"
}

# Provider configuration
provider "google" {
  credentials = file(var.gcp_key_file)
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
}

# Static IP address
resource "google_compute_address" "static" {
  name = "ipv4-address"
}

# VPC Network
resource "google_compute_network" "vpc_network" {
  name                    = "vpc-network"
  auto_create_subnetworks = "true"
}
# Firewall rule
resource "google_compute_firewall" "ssh-server" {
  name    = "default-allow-ssh-terraform"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["ssh-server"]
}

# VM Instance
resource "google_compute_instance" "vm_instance" {
  name         = "opentofu-instance"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-focal-v20210415"
    }
  }

  network_interface {
    network = google_compute_network.vpc_network.self_link
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  metadata = {
    sshKeys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
  }

  tags = ["ssh-server"]
}

# Output values
output "ip" {
  description = "Public IP address of the instance"
  value       = google_compute_instance.vm_instance.network_interface.0.access_config.0.nat_ip
}

output "instance_name" {
  description = "Name of the instance"
  value       = google_compute_instance.vm_instance.name
}
# terraform.tfvars (create this file locally)
gce_ssh_user = "ubuntu"
gce_ssh_pub_key_file = "~/.ssh/id_rsa.pub"
gcp_project = "my-gcp-project"
gcp_region = "europe-west1"
gcp_zone = "europe-west1-b"
gcp_key_file = "path/to/service-account-key.json"
# Initialize the project
tofu init

# Validate the configuration
tofu validate

# Plan changes
tofu plan

# Save a plan
tofu plan -out=plan.tfplan

# Apply the plan
tofu apply

# Apply a saved plan
tofu apply plan.tfplan

# Inspect the current state
tofu show

# List resources in the state
tofu state list

# Import an existing resource
tofu import google_compute_instance.web projects/PROJECT/zones/ZONE/instances/my-instance

# Destroy the infrastructure
tofu destroy

# Plan a destroy (safety check)
tofu plan -destroy

# Destroy specific resources
tofu destroy -target=google_compute_instance.vm_instance

# Auto-approve destroy (automation only!)
tofu destroy -auto-approve

# Destroy with variables
tofu destroy -var-file="staging.tfvars"

# Target a specific resource
tofu apply -target=google_compute_instance.vm_instance

# Workspace management
tofu workspace new production
tofu workspace select staging
tofu workspace list
{
  "version": 4,
  "terraform_version": "1.0.0",
  "serial": 1,
  "lineage": "uuid",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "google_compute_instance",
      "name": "vm_instance",
      "instances": [...]
    }
  ]
}
# backend.tf
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket"
    prefix = "terraform/state"
  }
}

# Alternative: S3 backend
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}
# modules/webserver/main.tf
variable "instance_count" {
  description = "Number of instances"
  default     = 1
}

resource "google_compute_instance" "web" {
  count        = var.instance_count
  name         = "web-${count.index}"
  machine_type = "f1-micro"
  # ... rest of configuration
}

# main.tf (using the module)
module "webserver" {
  source         = "./modules/webserver"
  instance_count = 3
}
# Existing resource lookup
data "google_compute_image" "ubuntu" {
  family  = "ubuntu-2004-lts"
  project = "ubuntu-os-cloud"
}

resource "google_compute_instance" "vm" {
  # Use the data source
  boot_disk {
    initialize_params {
      image = data.google_compute_image.ubuntu.self_link
    }
  }
}
resource "google_compute_instance" "web" {
  # ... instance configuration

  # File provisioner
  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"

    connection {
      type = "ssh"
      user = var.ssh_user
      host = self.network_interface.0.access_config.0.nat_ip
    }
  }

  # Remote exec provisioner
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh",
    ]

    connection {
      type = "ssh"
      user = var.ssh_user
      host = self.network_interface.0.access_config.0.nat_ip
    }
  }
}
# Conditional resources
resource "google_compute_instance" "web" {
  count = var.environment == "production" ? 3 : 1
  name  = "web-${count.index}"
  # ...
}

# Functions
locals {
  instance_names = [for i in range(var.instance_count) : "web-${i}"]

  common_tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}
# main.tf - Create multiple VMs
resource "google_compute_instance" "web_servers" {
  count        = 3
  name         = "web-server-${count.index}"
  machine_type = "f1-micro"

  metadata = {
    sshKeys = "${var.ssh_user}:${file(var.ssh_public_key)}"
  }

  tags = ["web-server"]
}

# Output IP addresses for Ansible
output "web_server_ips" {
  value = google_compute_instance.web_servers[*].network_interface.0.access_config.0.nat_ip
}
# Get IPs from Terraform output
tofu output -json web_server_ips | jq -r '.[]' > ansible_hosts.txt
# Or use Terraform provider for Ansible
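The `jq` pipeline above can also be expressed in Python. This sketch parses a captured `tofu output -json web_server_ips` result and writes a simple hosts file; the JSON string here is fabricated example data, not real output:

```python
import json

# Example payload as produced by: tofu output -json web_server_ips
raw = '["34.78.1.10", "34.78.1.11", "34.78.1.12"]'

# The -json flag emits a plain JSON array for a list output
ips = json.loads(raw)

# Write one host per line, the format the ad-hoc inventory expects
with open("ansible_hosts.txt", "w") as f:
    f.write("\n".join(ips) + "\n")

print(ips)  # ['34.78.1.10', '34.78.1.11', '34.78.1.12']
```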
# playbook.yml
---
- name: Configure web servers
  hosts: all
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Start nginx
      systemd:
        name: nginx
        state: started
        enabled: yes
# 1. Create infrastructure
tofu apply
# 2. Configure with Ansible
ansible-playbook -i ansible_hosts.txt playbook.yml
# Using the Ansible provider in Terraform
resource "ansible_playbook" "configure_servers" {
  playbook = "playbook.yml"
  name     = google_compute_instance.web_servers[*].network_interface.0.access_config.0.nat_ip

  depends_on = [google_compute_instance.web_servers]
}
cd 5-iac-files/ansible
# Test connectivity
ansible all -i hosts -m ping
# Check OS version
ansible mycloudvms -i hosts -a "cat /etc/os-release"
ansible all -i hosts -m apt -a "update_cache=yes upgrade=dist" --become
ansible all -i hosts -m apt -a "name=htop,curl,git state=present" --become
ansible all -i hosts -m user -a "name=student shell=/bin/bash groups=sudo" --become
ansible all -i hosts -m copy -a "content='Hello Ansible!' dest=/tmp/hello.txt"
Write a playbook that installs a LAMP stack:
# lamp-stack.yml
---
- name: LAMP Stack Installation
  hosts: ubuntu-servers
  become: yes
  vars:
    mysql_root_password: "secure123"
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Install LAMP packages
      apt:
        name:
          - apache2
          - mysql-server
          - php
          - php-mysql
          - libapache2-mod-php
        state: present

    - name: Start Apache
      systemd:
        name: apache2
        state: started
        enabled: yes

    - name: Start MySQL
      systemd:
        name: mysql
        state: started
        enabled: yes

    - name: Create test PHP file
      copy:
        content: |
          <?php
          phpinfo();
          ?>
        dest: /var/www/html/info.php

    - name: Set MySQL root password
      mysql_user:
        name: root
        password: "{{ mysql_root_password }}"
        login_unix_socket: /var/run/mysqld/mysqld.sock
# Generate an SSH key
ssh-keygen -t rsa -b 4096 -f ~/.ssh/gcp_key

cd 5-iac-files/opentofu/demoGCE

# Create the variables file
cat > terraform.tfvars << EOF
gce_ssh_user = "ubuntu"
gce_ssh_pub_key_file = "~/.ssh/gcp_key.pub"
gcp_project = "your-project-id"
gcp_region = "europe-west1"
gcp_zone = "europe-west1-b"
gcp_key_file = "path/to/service-account.json"
EOF

# Initialize
tofu init

# Plan
tofu plan

# Apply
tofu apply
# multi-tier.tf
variable "instance_count" {
  default = {
    web = 2
    app = 2
    db  = 1
  }
}

# Web tier
resource "google_compute_instance" "web_tier" {
  count        = var.instance_count.web
  name         = "web-${count.index}"
  machine_type = "f1-micro"
  tags         = ["web-tier", "http-server"]
  # ... configuration
}

# App tier
resource "google_compute_instance" "app_tier" {
  count        = var.instance_count.app
  name         = "app-${count.index}"
  machine_type = "f1-micro"
  tags         = ["app-tier"]
  # ... configuration
}

# Database tier
resource "google_compute_instance" "db_tier" {
  count        = var.instance_count.db
  name         = "db-${count.index}"
  machine_type = "n1-standard-1"
  tags         = ["db-tier"]
  # ... configuration
}

# Load balancer
resource "google_compute_http_health_check" "web_health" {
  name = "web-health-check"
}

resource "google_compute_target_pool" "web_pool" {
  name      = "web-pool"
  instances = google_compute_instance.web_tier[*].self_link

  health_checks = [
    google_compute_http_health_check.web_health.name,
  ]
}
# site.yml
---
- import_playbook: web-tier.yml
- import_playbook: app-tier.yml
- import_playbook: db-tier.yml

# web-tier.yml
---
- name: Configure Web Tier
  hosts: web_tier
  become: yes
  roles:
    - nginx
    - ssl_certificates

# app-tier.yml
---
- name: Configure App Tier
  hosts: app_tier
  become: yes
  roles:
    - nodejs
    - application_code

# db-tier.yml
---
- name: Configure Database Tier
  hosts: db_tier
  become: yes
  roles:
    - mysql
    - database_setup
This exercise demonstrates the difference between the declarative and imperative approaches with a practical server setup.
We will set up a web server with a database in two ways, so you can experience the difference.
# infrastructure.tf
variable "server_count" {
  description = "Number of web servers"
  default     = 2
}

# Declarative: "I want 2 web servers with these specifications"
resource "google_compute_instance" "web_servers" {
  count        = var.server_count
  name         = "web-server-${count.index + 1}"
  machine_type = "f1-micro"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
    }
  }

  network_interface {
    network = google_compute_network.vpc.self_link
    access_config {}
  }

  tags = ["web-server", "http-server"]

  metadata = {
    sshKeys = "${var.ssh_user}:${file(var.ssh_public_key)}"
  }
}

# Declarative: "I want a database server"
resource "google_compute_instance" "database" {
  name         = "database-server"
  machine_type = "n1-standard-1"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
      size  = 50 # bigger disk for the database
    }
  }

  network_interface {
    network = google_compute_network.vpc.self_link
    access_config {}
  }

  tags = ["database-server"]
}

# Declarative: "I want a VPC network"
resource "google_compute_network" "vpc" {
  name                    = "web-app-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "web-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = "europe-west1"
  network       = google_compute_network.vpc.self_link
}

# Terraform automatically figures out:
# - the order (VPC → subnet → VMs)
# - the dependencies
# - what already exists vs what is new
Terraform execution:

# Terraform compares the desired state with the current state
tofu plan
# Plan: 4 to add, 0 to change, 0 to destroy
tofu apply

# Terraform automatically creates:
# 1. the VPC network
# 2. the subnet (depends on the VPC)
# 3. the web servers (depend on the subnet)
# 4. the database server

# If you want more servers later:
# change the variable to server_count = 4
tofu plan
# Plan: 2 to add, 0 to change, 0 to destroy (only the new servers!)
tofu apply
# Terraform creates only the 2 new servers
# setup-infrastructure.yml
---
- name: Setup Complete Web Application Infrastructure
  hosts: localhost
  gather_facts: no
  vars:
    servers_to_create:
      - { name: "web-server-1", type: "web" }
      - { name: "web-server-2", type: "web" }
      - { name: "database-server", type: "db" }
  tasks:
    # Step 1: Check if the VPC exists
    - name: Check if VPC network exists
      google.cloud.gcp_compute_network_info:
        filters:
          - name = "web-app-network"
      register: vpc_result

    # Step 2: Create the VPC if it does not exist
    - name: Create VPC network
      google.cloud.gcp_compute_network:
        name: "web-app-network"
        auto_create_subnetworks: false
        state: present
      when: vpc_result.resources | length == 0

    # Step 3: Check if the subnet exists
    - name: Check if subnet exists
      google.cloud.gcp_compute_subnetwork_info:
        region: "europe-west1"
        filters:
          - name = "web-subnet"
      register: subnet_result

    # Step 4: Create the subnet if it does not exist
    - name: Create subnet
      google.cloud.gcp_compute_subnetwork:
        name: "web-subnet"
        ip_cidr_range: "10.0.1.0/24"
        region: "europe-west1"
        network:
          selfLink: "projects/{{ gcp_project }}/global/networks/web-app-network"   # project variable assumed
        state: present
      when: subnet_result.resources | length == 0

    # Step 5: Check which servers already exist
    - name: Get existing instances
      google.cloud.gcp_compute_instance_info:
        zone: "europe-west1-b"
      register: existing_instances

    # Step 6: Create the servers that do not exist yet
    - name: Create web and database servers
      google.cloud.gcp_compute_instance:
        name: "{{ item.name }}"
        machine_type: "f1-micro"
        zone: "europe-west1-b"
        disks:
          - auto_delete: true
            boot: true
            initialize_params:
              source_image: "projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts"
              disk_size_gb: "10"
        network_interfaces:
          - network:
              selfLink: "projects/{{ gcp_project }}/global/networks/web-app-network"
            subnetwork:
              selfLink: "projects/{{ gcp_project }}/regions/europe-west1/subnetworks/web-subnet"
            access_configs:
              - name: External NAT
                type: ONE_TO_ONE_NAT
        tags:
          items:
            - "{{ item.type }}-server"
            - "http-server"
        state: present
      loop: "{{ servers_to_create }}"
      when: item.name not in (existing_instances.resources | map(attribute='name') | list)

    # Step 7: Configure the web servers after they are created
    - name: Wait for SSH to be available
      wait_for:
        port: 22
        host: "{{ item.name }}"   # hostname/IP of the new instance (assumed)
        delay: 30
        timeout: 300
      loop: "{{ servers_to_create }}"
      when: item.type == 'web'

    # Step 8: Install web server software
    - name: Install nginx on web servers
      apt:
        name: nginx
        state: present
        update_cache: yes
      delegate_to: "{{ item.name }}"
      become: yes
      loop: "{{ servers_to_create }}"
      when: item.type == 'web'

    # Step 9: Configure the database server
    - name: Install MySQL on database server
      apt:
        name: mysql-server
        state: present
        update_cache: yes
      delegate_to: "database-server"
      become: yes

# Ansible requires you to describe every step explicitly
Ansible execution:

# Ansible runs every step in order
ansible-playbook setup-infrastructure.yml

# If you want more servers later:
# change the servers_to_create list
# Ansible runs all the checks again
ansible-playbook setup-infrastructure.yml
# All checks are re-run; only the new servers are added
Terraform cleanup (declarative):

# Terraform knows exactly what it created
tofu plan -destroy
# Plan: 0 to add, 0 to change, 4 to destroy.
# - google_compute_instance.database
# - google_compute_instance.web_servers[0]
# - google_compute_instance.web_servers[1]
# - google_compute_network.vpc

tofu destroy
# Removes everything in reverse dependency order:
# 1. VMs first (depend on the subnet)
# 2. subnet (depends on the VPC)
# 3. VPC last
Ansible cleanup (imperative):

# cleanup-infrastructure.yml
---
- name: Cleanup Complete Infrastructure
  hosts: localhost
  tasks:
    # Step 1: Stop all services first
    - name: Stop nginx on web servers
      systemd:
        name: nginx
        state: stopped
      delegate_to: "{{ item }}"
      become: yes
      loop:
        - web-server-1
        - web-server-2
      ignore_errors: yes

    # Step 2: Delete instances in the correct order
    - name: Delete web servers first
      google.cloud.gcp_compute_instance:
        name: "{{ item }}"
        zone: "europe-west1-b"
        state: absent
      loop:
        - web-server-1
        - web-server-2

    # Step 3: Delete the database server
    - name: Delete database server
      google.cloud.gcp_compute_instance:
        name: "database-server"
        zone: "europe-west1-b"
        state: absent

    # Step 4: Wait for the instances to be fully deleted
    - name: Wait for instances to be deleted
      pause:
        seconds: 30

    # Step 5: Delete the subnet
    - name: Delete subnet
      google.cloud.gcp_compute_subnetwork:
        name: "web-subnet"
        region: "europe-west1"
        state: absent

    # Step 6: Delete the VPC
    - name: Delete VPC network
      google.cloud.gcp_compute_network:
        name: "web-app-network"
        state: absent

# You have to determine the correct order yourself!
1. Terraform exercise:

# Create the infrastructure
cd terraform-declarative/
tofu init
tofu plan
tofu apply

# Scale up (change server_count to 4)
tofu plan    # see what changes
tofu apply

# Scale down (change server_count to 1)
tofu plan    # see what will be removed
tofu apply

# Full cleanup
tofu destroy

2. Ansible exercise:

# Create the infrastructure
cd ansible-imperative/
ansible-playbook setup-infrastructure.yml

# Add servers (change the servers_to_create list)
ansible-playbook setup-infrastructure.yml

# Cleanup
ansible-playbook cleanup-infrastructure.yml
3. Comparison analysis:
| Aspect | Terraform (Declarative) | Ansible (Imperative) |
|---|---|---|
| Code length | ±50 lines | ±150+ lines |
| Dependency management | Automatic | Manual, in the right order |
| State awareness | Knows what exists | Must check on every run |
| Scaling up | Change a number → apply | Change a list → many checks |
| Scaling down | Change a number → apply | Explicit delete steps |
| Cleanup | 1 command | Multi-step playbook |
| Error handling | Built-in rollback | Manual error handling |
After this exercise you understand:
🎯 Conclusion: Terraform for the "WHAT", Ansible for the "HOW"!
terraform/
├── environments/
│ ├── dev/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── terraform.tfvars
│ ├── staging/
│ └── production/
├── modules/
│ ├── networking/
│ ├── compute/
│ └── database/
└── policies/
└── security.rego
# Remote state
terraform {
  backend "gcs" {
    bucket = "company-terraform-state"
    prefix = "environments/production"
  }

  required_version = ">= 1.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}
locals {
  name_prefix = "${var.environment}-${var.project}"

  common_tags = {
    Environment = var.environment
    Project     = var.project
    ManagedBy   = "terraform"
    Team        = var.team
  }
}

resource "google_compute_instance" "web" {
  name   = "${local.name_prefix}-web-${count.index}"
  labels = local.common_tags
}
# Variables for sensitive data
variable "database_password" {
  description = "Database root password"
  type        = string
  sensitive   = true
}

# Use data sources for existing resources
data "google_secret_manager_secret_version" "db_password" {
  secret = "database-password"
}
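The secret read above can then be referenced instead of a plain variable. A sketch (the SQL resource names are assumptions for illustration):

```hcl
# Hypothetical use of the secret value - avoid exposing it via outputs
resource "google_sql_user" "app" {
  name     = "app"
  instance = google_sql_database_instance.main.name
  password = data.google_secret_manager_secret_version.db_password.secret_data
}
```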
ansible/
├── group_vars/
│ ├── all.yml
│ ├── web.yml
│ └── db.yml
├── host_vars/
├── inventories/
│ ├── production/
│ └── staging/
├── roles/
│ ├── common/
│ ├── webserver/
│ └── database/
├── playbooks/
│ ├── site.yml
│ └── deploy.yml
└── ansible.cfg
# Encrypt secrets
ansible-vault encrypt group_vars/all/vault.yml

# Run a playbook with the vault
ansible-playbook site.yml --ask-vault-pass
# molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:20.04
provisioner:
  name: ansible
verifier:
  name: ansible
# .github/workflows/ansible.yml
name: Ansible CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run ansible-lint
        run: ansible-lint
      - name: Run molecule test
        run: molecule test
# State drift detection
terraform plan -detailed-exitcode
# State backup
terraform state pull > backup-$(date +%Y%m%d).tfstate
# ansible.cfg
[defaults]
log_path = /var/log/ansible.log
callback_whitelist = profile_tasks, timer
[callback_profile_tasks]
task_output_limit = 20
# Monitoring resources
resource "google_monitoring_uptime_check_config" "web_check" {
  display_name = "Web server uptime check"
  timeout      = "10s"

  http_check {
    path = "/"
    port = "80"
  }

  monitored_resource {
    type = "uptime_url"
    labels = {
      host       = google_compute_instance.web.network_interface[0].access_config[0].nat_ip
      project_id = var.gcp_project
    }
  }
}
export TF_LOG=DEBUG
export TF_LOG_PATH=terraform.log
tofu apply
# State list
tofu state list
# State show
tofu state show google_compute_instance.web
# Remove from state (dangerous!)
tofu state rm google_compute_instance.web
# Import existing resource
tofu import google_compute_instance.web projects/PROJECT/zones/ZONE/instances/INSTANCE
# Dependency graph
tofu graph | dot -Tsvg > graph.svg
# Basic verbosity
ansible-playbook playbook.yml -v

# Maximum verbosity
ansible-playbook playbook.yml -vvvv

# Debug a specific task
- name: Debug task
  debug:
    var: ansible_facts
# Test connectivity
ansible all -m ping -vvvv
# SSH debug
ansible all -m shell -a "whoami" --ssh-extra-args="-vvv"
# Gather all facts
ansible hostname -m setup
# Specific fact
ansible hostname -m setup -a "filter=ansible_distribution*"
Infrastructure as Code transforms IT management:
🚀 The future is Infrastructure as Code - start today!