DevOps is a rewarding, high-paying career. DevOps, often overlapping with or encompassed by SRE (Site Reliability Engineering), can lead to higher-paying roles than software development alone. Most software developers are unaware of how hostile the internet is and typically focus only on the application layer (Layer 7). The role of a DevOps engineer is to work with developers to keep code as stateless as possible, manage state in configuration files, and secure the application from Layer 7 down to Layer 1. This means a DevOps engineer must know the fundamentals of a broad range of IT skills: programming, infrastructure, networking, security, and availability. Memorizing a single workflow on a single cloud provider does no good without an understanding of the fundamentals.
DevOps is a field that requires a great deal of self-motivation and ongoing training.
Here are the fundamentals that should be covered to give students a foundation that empowers them to learn and deep-dive into topics on their own.
- What is DevOps?
- Definition and scope of DevOps
- Key principles: collaboration, automation, continuous improvement
- Relation to other roles like SysAdmin and Developer
- Understanding the origin and evolution of DevOps
- History and evolution of DevOps
- Influential methodologies: Agile, Lean, ITIL
- The DevOps movement and cultural shift
- Benefits of cross-functional teams and automation patterns in DevOps
- Cross-functional teams: roles and collaboration
- Benefits of automation in build, test, deploy
- Common automation patterns: CI/CD, Infrastructure as Code (IaC)
- Why Linux is important in DevOps
- Prevalence of Linux in servers and cloud environments
- Linux as the foundation of many DevOps tools and platforms
- Comparison with other operating systems (Windows, Mac)
- CLI Focus
- Importance of command-line interface (CLI) in server administration
- Common CLI tools and their uses
- Differences between desktop environments and server environments
- Root and User Management
- Understanding the root account and its security implications
- User management commands: adduser, usermod, passwd
- Principles of least privilege and sudo usage
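To make least privilege concrete, here is a hedged sketch of a sudoers drop-in: the username, file path, and allowed commands are hypothetical, but the pattern of granting only specific commands instead of full root is the point.

```
# /etc/sudoers.d/deploy -- hypothetical example; always edit sudoers files with visudo
# Allow the "deploy" user to manage only the nginx service, with no other root access
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx, /usr/bin/systemctl status nginx
```

A user covered by this rule can run `sudo systemctl restart nginx` but nothing else as root.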
- System Architecture
- Differences between user space and kernel space
- Role of the kernel in managing hardware and system calls
- How applications interact with the kernel
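As a small illustration of the user-space/kernel boundary, the Python snippet below uses standard-library wrappers around two system calls; the exact messages are just for demonstration.

```python
import os

# os.getpid() is a thin wrapper around the getpid() system call:
# the process asks the kernel for its own process ID.
pid = os.getpid()

# os.write() wraps the write() system call; file descriptor 1 is stdout.
# The kernel, not the application, actually delivers these bytes.
msg = f"process {pid} writing via the write() syscall\n"
os.write(1, msg.encode())
```

Even a high-level `print()` ultimately funnels down to `write()` like this; the application never touches the hardware directly.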
- System Boot Sequence
- Overview of the boot process: BIOS/UEFI, bootloader, kernel initialization
- Role of GRUB in boot management
- Understanding systemd and its role in system initialization
- Essential Commands
- Navigating the filesystem: `ls`, `cd`, `pwd`
- File manipulation: `cp`, `mv`, `rm`, `mkdir`
- File permissions: `chmod`, `chown`, `umask`
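A quick hands-on pass over these commands, run in a throwaway directory so nothing on the system is touched (the file names are made up):

```shell
# Practice in a scratch directory
mkdir -p /tmp/fs-demo/docs
cd /tmp/fs-demo
pwd                         # print the current directory

echo "hello" > note.txt     # create a file
cp note.txt docs/copy.txt   # copy it
mv note.txt docs/note.txt   # move (rename) it into docs/
ls docs                     # list the directory contents

chmod 600 docs/note.txt     # owner read/write only
content=$(cat docs/note.txt)

# Clean up the scratch directory
cd /
rm -r /tmp/fs-demo
```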
- Filesystem Hierarchy Standard (FHS)
- Key directories: `/`, `/bin`, `/etc`, `/home`, `/var`
- Purpose and contents of each directory
- Importance of directory structure in system administration
- System Runlevels and Targets
- Traditional runlevels and their purposes
- Transition to systemd targets
- Managing runlevels/targets with systemd: `systemctl`
- Text Editors
- Basics of nano: editing, saving, and exiting
- Introduction to vi/vim: modes, basic commands, editing
- Advanced vi/vim usage and neovim extensions (optional)
- Common File Systems
- Characteristics of ext4 and its usage
- Overview of ZFS and its advanced features (snapshots, replication)
- Choosing the right file system for different use cases (optional)
- Different Shells
- Overview of sh, bash
- Key features and differences
- Writing portable shell scripts and considerations for cloud automation
SSH (Secure Shell) is a powerful and secure protocol used to connect to and manage Linux and Unix-like systems remotely. Just as Remote Desktop Protocol (RDP) is commonly used to access Windows systems, SSH is the standard method for accessing Linux systems.
- Security: SSH provides strong encryption, ensuring that data transmitted between the client and server is secure and protected from eavesdropping.
- Versatility: SSH is not just for remote login; it can be used for executing commands, transferring files, and more.
- Portability: SSH can be used across different platforms and operating systems.
SSH keys are a pair of cryptographic keys (a private key and a public key) used for authenticating users in a secure manner. Using SSH keys is more secure than password-based authentication.
Creating SSH Keys:
```shell
ssh-keygen -t rsa -b 4096 -C "[email protected]"
```
This command generates a new SSH key pair. The `-t rsa` option specifies the type of key to create, `-b 4096` specifies the number of bits in the key, and `-C` adds a comment to the key.
Adding SSH Keys to the SSH Agent:
```shell
ssh-add ~/.ssh/id_rsa
```
This command adds your private key to the SSH agent, which manages your keys and makes them available for SSH sessions.
Copying SSH Keys to a Server:
```shell
ssh-copy-id user@server
```
This command copies your public key to the server, allowing you to authenticate without a password.
SSH configuration files can be used to simplify and manage multiple SSH connections.
Example `~/.ssh/config` file:
```
Host myserver
    HostName myserver.example.com
    User myusername
    IdentityFile ~/.ssh/id_rsa
    Port 22
```
This configuration allows you to connect to myserver with a simple command (`ssh myserver`) instead of specifying the hostname, username, and other details each time.
- Connecting to a Server: `ssh user@hostname`
- Executing a Command on a Remote Server: `ssh user@hostname "command"`
- Transferring Files with SCP: `scp file.txt user@hostname:/path/to/destination`
SSHFS allows you to mount a remote filesystem over SSH, enabling you to access files on a remote server as if they were on your local machine.
Mounting a Remote Filesystem:
```shell
sshfs user@hostname:/remote/path /local/mount/point
```
Unmounting a Remote Filesystem:
```shell
fusermount -u /local/mount/point
```
- `ssh-add`: Adds private key identities to the SSH agent.
- `ssh-copy-id`: Copies public keys to a remote server for key-based authentication.
- Managing Software
- Using apt/apt-get/dpkg for Debian-based systems
- Using yum/dnf for Red Hat-based systems
- Installing, updating, and removing packages
- Service Management
- Understanding init.d and the transition to systemd
- Managing services with systemd: `systemctl start`, `systemctl stop`, `systemctl enable`
- Creating and configuring systemd service units (how to make an app a service)
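As a sketch of "making an app a service", a minimal unit file might look like the following; the app name, user, and paths are hypothetical.

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My example application
After=network.target

[Service]
Type=simple
User=myapp
ExecStart=/opt/myapp/bin/myapp --config /etc/myapp/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now myapp` would register the unit and start it.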
- System Logging
- Using journalctl to query system logs
- Understanding log files and their locations
- Configuring and managing log rotation
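Log rotation is usually handled by logrotate; a hedged example drop-in (the app name and path are made up) might look like:

```
# /etc/logrotate.d/myapp -- hypothetical example
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```

This keeps fourteen daily rotations, compressing old logs and tolerating missing or empty files.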
- Network Configuration and Management
- Basics of netfilter and iptables for firewall management
- Configuring network interfaces and routing
- Network troubleshooting tools: `ping`, `netstat`, `traceroute`
- Containerization Fundamentals
- History of containers and their evolution
- Differences between containers and virtual machines
- Introduction to Docker and its architecture
- Automating Tasks with Scripts
- Basics of writing and running bash scripts
- Common scripting tasks: loops, conditionals, functions
- Transition to Python for more complex automation tasks
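The scripting basics above (functions, conditionals, loops) fit in a few lines; the service names below are made up for illustration.

```shell
#!/bin/sh
set -eu

# A function that "checks" a service; here it just pattern-matches the name
check_service() {
    name="$1"
    case "$name" in
        web*) echo "$name: ok" ;;
        *)    echo "$name: skipped" ;;
    esac
}

# Loop over a list and call the function for each entry
for svc in web1 web2 db1; do
    check_service "$svc"
done
```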
- Using LXD and Docker
- Overview of LXD for system container management
- Docker basics: images, containers, Dockerfile
- Advanced Docker usage: Compose, Swarm, Kubernetes
- Foundational Organizations
- Overview of the Cloud Native Computing Foundation (CNCF)
- Key projects and their roles (Kubernetes, Prometheus, etc.)
- The Linux Foundation and its impact on open-source software
- Version Control with Git
- Basic git commands: clone, commit, push, pull
- Branching and merging strategies
- Collaboration workflows: GitFlow, forking, pull requests
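The basic branch-and-merge cycle can be walked through in a scratch repository; this sketch assumes `git` is installed, and the file contents are arbitrary.

```shell
# Create a scratch repository under /tmp
mkdir -p /tmp/git-demo
cd /tmp/git-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# First commit on the default branch
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"
main=$(git symbolic-ref --short HEAD)   # e.g. master or main

# Create a feature branch, change the file, commit
git checkout -q -b feature/update
echo "v2" > app.txt
git commit -q -am "update app"

# Merge the feature branch back into the default branch
git checkout -q "$main"
git merge -q feature/update
cat app.txt   # v2
```

Workflows like GitFlow layer conventions (long-lived develop/release branches) on top of exactly this mechanic.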
- Continuous Integration and Continuous Deployment
- Principles of CI/CD and its benefits
- Common CI/CD tools: Jenkins, GitLab CI, CircleCI
- Setting up and configuring CI/CD pipelines
Understanding how different programming languages compile and run is crucial for effectively packaging them into containers. Here’s an overview of the compilation and runtime processes for Java, Go, Node.js, and Python, and how to package applications written in these languages into containers.
Java is a statically typed, compiled language that runs on the Java Virtual Machine (JVM).
- Source Code: Written in `.java` files.
- Compilation: Compiled into bytecode using the `javac` compiler, generating `.class` files.
- Execution: Bytecode is executed on the JVM.
- Dockerfile Example:

```dockerfile
# Use an official OpenJDK runtime as a parent image
FROM openjdk:11-jre-slim

# Set the working directory in the container
WORKDIR /app

# Copy the compiled jar file to the container
COPY target/myapp.jar /app/myapp.jar

# Command to run the jar file
ENTRYPOINT ["java", "-jar", "myapp.jar"]
```
- Build and Run:

```shell
# Build the Docker image
docker build -t my-java-app .

# Run the Docker container
docker run -d -p 8080:8080 my-java-app
```
Go (Golang) is a statically typed, compiled language known for its simplicity and performance.
- Source Code: Written in `.go` files.
- Compilation: Compiled into a standalone binary using the `go build` command.
- Execution: The binary is executed directly on the operating system.
- Dockerfile Example:

```dockerfile
# Use the official Golang image to build the application
FROM golang:1.16-alpine as builder

# Set the working directory inside the container
WORKDIR /app

# Copy the source code to the container
COPY . .

# Build the Go app
RUN go build -o myapp .

# Use a minimal base image
FROM alpine:latest

# Set the working directory inside the container
WORKDIR /app

# Copy the binary from the builder stage
COPY --from=builder /app/myapp .

# Command to run the binary
ENTRYPOINT ["./myapp"]
```
- Build and Run:

```shell
# Build the Docker image
docker build -t my-go-app .

# Run the Docker container
docker run -d -p 8080:8080 my-go-app
```
Node.js is a runtime environment that allows the execution of JavaScript on the server side.
- Source Code: Written in `.js` files.
- Execution: Run directly using the Node.js runtime.
- Dockerfile Example:

```dockerfile
# Use the official Node.js image
FROM node:14

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the application port
EXPOSE 8080

# Command to run the app
CMD ["node", "app.js"]
```
- Build and Run:

```shell
# Build the Docker image
docker build -t my-node-app .

# Run the Docker container
docker run -d -p 8080:8080 my-node-app
```
Python is an interpreted, dynamically typed language.
- Source Code: Written in `.py` files.
- Execution: Run directly using the Python interpreter.
- Dockerfile Example:

```dockerfile
# Use the official Python image
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Command to run the app
CMD ["python", "app.py"]
```
- Build and Run:

```shell
# Build the Docker image
docker build -t my-python-app .

# Run the Docker container
docker run -d -p 8080:8080 my-python-app
```
By understanding these languages' compilation and runtime processes, you can effectively package and deploy applications in containers, ensuring consistent and isolated environments.
- AWS CodePipeline Overview
- Integration with other AWS services
- Creating and managing pipelines
- Deploying applications to AWS infrastructure
- GitHub Actions Overview
- Setting up workflows and actions
- Integration with GitHub repositories
- Using the marketplace for pre-built actions
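A minimal GitHub Actions workflow illustrates the pieces above; the workflow name and the test command are placeholders.

```yaml
# .github/workflows/ci.yml -- minimal sketch
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # marketplace action to fetch the repo
      - name: Run tests
        run: echo "run your test command here"
```

Committing this file to the repository is all it takes for GitHub to start running the workflow on pushes and pull requests.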
- Comparing Platforms
- Environment integration: AWS-centric vs. GitHub-centric
- Ease of use: graphical interface vs. YAML-based configuration
- Extensibility: AWS services vs. third-party integrations
- Defining and Managing Infrastructure
- Writing procedural scripts with cloud shells and CLI tools
- Using Terraform for declarative infrastructure management
- Best practices for maintaining IaC repositories
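To contrast the declarative style with procedural CLI scripts, here is a hedged Terraform sketch; the bucket name and region are hypothetical, and real configurations would add state backends and variables.

```hcl
# main.tf -- hypothetical sketch: one S3 bucket managed declaratively
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "my-example-artifacts-bucket" # bucket names are globally unique
}
```

Running `terraform init`, then `terraform plan` and `terraform apply`, converges real infrastructure toward this declared state instead of scripting each API call.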
- Orchestrating Containers
- Kubernetes architecture: nodes, pods, services, and deployments
- Managing Kubernetes clusters and resources
- Extending Kubernetes with Helm, ArgoCD, Kustomize, and service meshes
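The core Kubernetes objects named above can be sketched in one manifest; the app name, image tag, and replica count are illustrative only.

```yaml
# deployment.yaml -- minimal sketch of a Deployment plus Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```

`kubectl apply -f deployment.yaml` would create three nginx pods and a Service load-balancing across them.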
- Understanding Data Types
- Common data types: strings, numbers, booleans, arrays, objects
- How data types are represented in JSON and YAML
- JSON Basics
- Syntax and structure of JSON
- Representing data types in JSON:
  - Strings: `"key": "value"`
  - Numbers: `"key": 123`
  - Booleans: `"key": true`
  - Arrays: `"key": ["value1", "value2"]`
  - Objects: `"key": {"subkey": "subvalue"}`
- YAML Basics
- Syntax and structure of YAML
- Representing data types in YAML:
  - Strings: `key: "value"`
  - Numbers: `key: 123`
  - Booleans: `key: true`
  - Arrays:

    ```yaml
    key:
      - value1
      - value2
    ```

  - Objects:

    ```yaml
    key:
      subkey: subvalue
    ```
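A structure using all of these data types can be round-tripped through JSON with Python's standard library (YAML would need the third-party PyYAML package, so only JSON is shown here):

```python
import json

# One structure exercising each of the types above
data = {
    "name": "demo",                   # string
    "replicas": 3,                    # number
    "enabled": True,                  # boolean
    "tags": ["web", "prod"],          # array
    "owner": {"team": "platform"},    # object
}

text = json.dumps(data, indent=2)   # serialize to JSON text
restored = json.loads(text)         # parse it back

print(text)
assert restored == data             # round-trip preserves the structure
```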
Advanced Topics
Understanding the core components of how containers work is essential. Containers are not VMs; they are more like lightweight, isolated environments or service wrappers. LXD and Docker are container management systems that rely on various Linux kernel components and user-space tools to facilitate their functionality. Here are the key Linux components that underpin LXD and Docker:
- Namespaces
- PID Namespace: Isolates process IDs so that processes inside a container have their own PID space.
- Net Namespace: Provides isolated network stacks, including interfaces, routing tables, and IP addresses.
- IPC Namespace: Isolates inter-process communication mechanisms (e.g., System V IPC, POSIX message queues).
- UTS Namespace: Allows containers to have their own hostname and domain name.
- Mount Namespace: Isolates filesystem mount points, allowing each container to have its own filesystem hierarchy.
- User Namespace: Provides user and group ID isolation, enabling containers to run as a non-root user on the host.
- Control Groups (cgroups)
- Resource Limitation: Limits the resources (CPU, memory, disk I/O) that a container can use.
- Prioritization: Sets priority levels for resource access.
- Accounting: Tracks resource usage by each container.
- Control: Freezes and resumes groups of processes, making it possible to pause and resume containers.
- Seccomp (Secure Computing Mode)
- Provides a mechanism to restrict system calls that a container can make, enhancing security.
- Capabilities
- Allows fine-grained control over the privileges that a containerized process can have, reducing the need for full root privileges.
- OverlayFS
- A type of union filesystem that allows multiple filesystems to be overlaid, enabling the creation of layers used by Docker images.
- AppArmor/SELinux
- Provides mandatory access control (MAC) to restrict what a containerized process can do, adding an extra layer of security.
- LXC (Linux Containers)
- The foundation for LXD, LXC provides tools and templates to create and manage containers using the kernel's namespaces and cgroups.
- liblxc
- A library used by LXD for managing LXC containers programmatically.
- containerd
- An industry-standard core container runtime that manages the container lifecycle (creation, execution, and destruction) and is used by Docker.
- runc
- A CLI tool for spawning and running containers according to the Open Container Initiative (OCI) specification. It is used by Docker as the default low-level container runtime.
- criu (Checkpoint/Restore In Userspace)
- Used by LXD for live migration of containers. It allows for checkpointing a running container and restoring it later or on another machine.
- LXD Daemon
- The central service for LXD that handles API requests, manages container instances, networks, storage pools, and more.
- LXD Client
  - The command-line client (`lxc`) used to interact with the LXD daemon.
- ZFS, Btrfs, and Other Filesystems
- LXD can use advanced filesystems like ZFS and Btrfs to provide efficient storage management features like snapshots and clones.
- Docker Daemon (`dockerd`)
  - The core service that manages Docker containers, images, networks, and storage.
- Docker CLI (`docker`)
  - The command-line interface used to interact with the Docker daemon.
- Docker Images
- Read-only templates used to create containers. Images are composed of multiple layers stacked using OverlayFS.
- Docker Compose
- A tool for defining and running multi-container Docker applications using a YAML file.
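As a sketch of what such a YAML file looks like, here is a hypothetical two-service Compose file; the service names and images are illustrative.

```yaml
# docker-compose.yml -- hypothetical sketch
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"     # host port 8080 -> container port 80
    depends_on:
      - cache
  cache:
    image: redis:7
```

`docker compose up -d` would start both containers on a shared network where `web` can reach `cache` by name.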
Here’s a simple example to demonstrate a basic Docker workflow:
- Pull an Image: `docker pull nginx`
- Run a Container: `docker run -d -p 80:80 --name mynginx nginx`
- List Running Containers: `docker ps`
- Stop a Container: `docker stop mynginx`
- Remove a Container: `docker rm mynginx`
Here’s a simple example to demonstrate a basic LXD workflow:
- Initialize LXD: `lxd init`
- Launch a Container: `lxc launch ubuntu:20.04 mycontainer`
- List Running Containers: `lxc list`
- Stop a Container: `lxc stop mycontainer`
- Delete a Container: `lxc delete mycontainer`
These Linux components and tools together provide the foundation for containerization, enabling systems like LXD and Docker to offer robust, isolated, and manageable environments for applications.