Merge pull request 'Switch to Ubuntu Server' (#2) from ubserv-based-installer into main

Reviewed-on: https://git.hofers.cloud/greysoh/kittehcluster/pulls/2
Greyson 2024-08-02 21:25:23 +00:00
commit c36ae6cdf0
24 changed files with 513 additions and 371 deletions

10
.gitignore vendored
View file

@@ -1,4 +1,10 @@
# Python ignore
__pycache__
# serverinfra/
.server-setup
.env
out
build.log
secrets.nix
# kubernetes/
meta

View file

@@ -1,42 +1,36 @@
# KittehCluster
This is my (work in progress, deployed but nothing production running on it *yet*) Kubernetes clustered computing setup, based on Proxmox VE and NixOS.
This is my (work in progress, deployed but nothing production running on it *yet*) Kubernetes clustered computing setup, based on Proxmox VE and Ubuntu Server.
Currently, I cannot recommend that you use this setup in production yet. I have to delete and recreate my VMs multiple times a day, until I fix everything.
Currently, I *really* cannot recommend that you use this setup in production yet. I have to delete and recreate my VMs multiple times a day, until I fix everything.
## Prerequisites
- An x86_64 computer with virtualization enabled, running NixOS
- A cluster of computers running Proxmox, with your SSH keys copied to them. These should (not required, but *highly* recommended) be connected together in Proxmox using the cluster feature.
- Cluster hypervisors' IPs next to each other (ex. node 1's Proxmox is `192.168.0.20`, node 2's is `192.168.0.21`)
- Patience (this will take a while, and it may test yours)
- A POSIX-compliant computer (preferably Unix of some sort, like macOS/Linux/*BSD, but Git Bash or Cygwin would probably work) with Python and PyYAML installed
- A cluster of computers preferably running Proxmox. These should (not required, but *highly* recommended) be connected together in Proxmox using the cluster feature.
- `kubectl` and `helm` installed on your local computer.
## Setup
### VM Setup
1. First, you'll need to fork this repository, and `git clone` it down.
2. Copy `secrets.example.nix` to `secrets.nix`.
3. Change `services.k3s.token` to be a unique token (i.e. using `uuidgen`, `head -c 500 /dev/random | sha1sum | cut -d " " -f 1`, etc)
4. Change `users.users.clusteradm.openssh.authorizedKeys.keys` to have your SSH key(s) in there.
5. Then, run `./buildall.sh`, to build all the virtual machines. This may take a long time, depending on your hardware! On a 2015 MacBook Air, this took 30 minutes. Make some tea while you wait!
6. Finally, run `BASE_IP=your_base_ip_here ./upload.sh -i -d`, with `BASE_IP` being the first IP for your Proxmox cluster.
7. Set all VMs to auto-start, then turn them all on, starting with the first node's `k3s-server`.
8. You can now connect using your SSH key to any of the nodes with the user `clusteradm`. The default password is `1234`. Be sure to change this!
2. Run `nix-shell`.
3. (optional) In `config/.env`, change `SETUP_USERNAME` to the username you want to use.
4. (optional) In `config/.env`, change `SETUP_PASSWORD` to the hashed password you want to use (generate one with `genpasswd`).
5. (Proxmox-specific, but you'll need to do a similar process on e.g. ESXi, XenServer, etc.) Go to [the Ubuntu Server page](https://ubuntu.com/download/server), and copy the download link for the minimal ISO. Go to your ISO image volume (`local` by default), click on ISO images, click "Download from URL", paste in the URL, click "Query URL", then download the file on all of your nodes.
6. Create VM(s) that use a VirtIO hard drive (i.e. drives that show up as `/dev/vdX`), with the ISO set to the Ubuntu Server installer.
7. On your main computer, run the command `./install.sh $PATH_TO_USE_FOR_INSTALL`, where `$PATH_TO_USE_FOR_INSTALL` is the server definition from `config/infrastructure.ini` that you want to install (e.g. `kitteh-node-1/server`).
8. When booting the VM, press `e` to edit the boot entry. On the line that starts with `linux` and ends with `---`, remove the `---` and add the command line arguments that `install.sh` printed (they contain your IP address and port; see the sketch after this list). Press `F10` to boot.
9. Boot it, and let it install.
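For reference, here is a hypothetical run of the install script and the boot arguments it produces; the server name, IP address, and port below are placeholders, so use exactly what your own `install.sh` prints:

```bash
# Hypothetical example; your server name, IP and port will differ.
./install.sh kitteh-node-1/server
# ...
#  - Add these command line options to Ubuntu:
#  - autoinstall "ds=nocloud-net;s=http://192.168.0.42:34567/"
# In the VM's GRUB editor, append that autoinstall string to the `linux` line
# in place of the trailing `---`, then press F10 to boot.
```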
### Kubernetes setup
1. SSH into any of the nodes. (i.e. `ssh clusteradm@kitteh-node-2-k3s-server`)
1. SSH into any of the nodes (e.g. `ssh clusteradm@kitteh-node-2-k3s-server`)
2. As root, grab `/etc/rancher/k3s/k3s.yaml`, and copy it to wherever you store your kubeconfigs (typically `~/.kube/config`); a sketch follows below.
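A minimal sketch of that copy step, assuming the first server node, that your kubeconfig lives at `~/.kube/config`, and the default k3s kubeconfig (which points at `127.0.0.1`):

```bash
# On the node: print the kubeconfig and copy the output.
ssh clusteradm@kitteh-node-1-k3s-server
sudo cat /etc/rancher/k3s/k3s.yaml

# Back on your own machine: paste it into ~/.kube/config, point it at the
# node instead of 127.0.0.1, and check that the cluster answers.
sed -i.bak 's/127.0.0.1/kitteh-node-1-k3s-server/' ~/.kube/config
kubectl get nodes
```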
## Updating (TODO)
In NixOS, instead of running `apt update; apt upgrade -y`, `pacman -Syu --noconfirm`, or similar, you "rebuild" the system.
There is a work-in-progress implementation of this (see `kittehclean`'s Git downloader), but it is not done yet.
## Updating
Run `sudo apt update` and `sudo apt upgrade -y` to update the base system. Updating Kubernetes itself is still a TODO.
## Customization
### Adding nodes
Copy `kitteh-node-2` to `kitteh-node-X`, where `X` is the server number. Change the hostname to correspond to each clustered computer (ex. the 3rd computer's k3s agent is `kitteh-node-3-k3s-agent`)
In `serverinfra/config/infrastructure.ini`, copy the role(s) from `kitteh-node-2` to a new node (ex. `kitteh-node-2/server` -> `kitteh-node-3/server`, etc.), and run the install script again; for example:
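A hypothetical third node, modeled on the existing `kitteh-node-2` entries (the hostnames are assumptions; run this from the `serverinfra/` directory):

```bash
cat >> config/infrastructure.ini <<'EOF'

[kitteh-node-3/server]
hostname = kitteh-node-3-k3s-server
upstream = kitteh-node-1/server
role = server

[kitteh-node-3/agent]
hostname = kitteh-node-3-k3s-agent
upstream = kitteh-node-1/server
role = agent
EOF

# Then install each new entry as usual:
./install.sh kitteh-node-3/server
./install.sh kitteh-node-3/agent
```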
### Custom cluster setup / Forking
This is a guide. You can change more stuff if you'd like, but this will get you started.
1. First, fork this Git repository if you haven't already.
2. If you want to change the folder names, rename the folders (i.e. kitteh-node-* to whatever-*), and change `buildall.sh`'s for loop to be `whatever-*/*`, for example.
3. If you want to change the hostname, change them all. Be sure to change `commons.agent.nix` and `commons.server.nix` to correspond to the new `kitteh-node-1-k3s-server`'s name!
4. In `commons.nix`, either remove `kittehclean` (not recommended unless you're using a private Git repository), or change the git repository it pulls down from (i.e. change `https://git.hofers.cloud/greysoh/kittehcluster` to `https://github.com/contoso/k3s-cluster`).
5. (optional) Rename `kittehclean` and change the description.
2. Modify `serverinfra/config/infrastructure.ini` to fit your needs.
## Troubleshooting
- I can't log in via SSH!
- Have you copied your SSH keys to the `clusteradm` user? If you added a new key, copy it from another computer (or the VM console) into `~/.ssh/authorized_keys` on each VM.
- Your SSH public keys are copied over automatically! If they weren't, did you generate an SSH key pair before installing? (See the example after this list.)
- Additionally, password authentication is disabled!
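The keys come from the machine that runs the installer: `merge.py` copies every `*.pub` file it finds in `~/.ssh/` into the autoinstall `authorized-keys` list. A minimal sketch, assuming you want a new ed25519 key:

```bash
# Generate a key pair before running ./install.sh; merge.py will pick up
# the resulting ~/.ssh/id_ed25519.pub automatically.
ssh-keygen -t ed25519 -C "clusteradm@kittehcluster"
```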

View file

@@ -1,17 +0,0 @@
#!/usr/bin/env bash
set -e
echo "Building '$1'..."
nix --extra-experimental-features nix-command run github:nix-community/nixos-generators -- --format proxmox --configuration "$1.nix" | tee build.log
if [ ! -d "out/" ]; then
mkdir out/
fi
echo "Copying file to the output directory..."
# Hacky!
mkdir -p out/$1
rm -rf out/$1 out/$1.vma.zst
OUT_FILE="$(sed -n '$p' build.log)"
cp -r $OUT_FILE out/$1.vma.zst

View file

@@ -1,32 +0,0 @@
#!/usr/bin/env bash
mkdir meta > /dev/null 2> /dev/null
touch meta/tagged_for_upload
for FILE in kitteh-node-*/*; do
FILE_NO_EXTENSION="${FILE/".nix"/""}"
# Hacky!
mkdir -p meta/$FILE
rm -rf meta/$FILE
sha512sum $FILE > /tmp/kt-clusterbuild_sha512sum
if [ ! -f "meta/$FILE.sha" ] || ! diff -q "/tmp/kt-clusterbuild_sha512sum" "meta/$FILE.sha" > /dev/null; then
./build.sh $FILE_NO_EXTENSION
if [ $? -ne 0 ]; then
echo "Failed to build, skipping..."
continue
fi
if ! grep -q "out/$FILE_NO_EXTENSION.vma.zst" meta/tagged_for_upload; then
echo "out/$FILE_NO_EXTENSION.vma.zst" >> meta/tagged_for_upload
fi
else
echo "Not building '$FILE_NO_EXTENSION'."
fi
mv "/tmp/kt-clusterbuild_sha512sum" "meta/$FILE.sha"
done
echo "Done building."

View file

@@ -1,50 +0,0 @@
let
pkgs = import <nixpkgs> {};
in {
imports = [
./commons.nix
];
# This is intentionally defined like this (not using braces) for updating. DO NOT CHANGE THIS.
# - greysoh
proxmox.qemuConf.memory = 8192;
proxmox.qemuConf.cores = 4;
proxmox.qemuConf.name = "k3s-agent";
proxmox.qemuConf.diskSize = pkgs.lib.mkForce "131072";
services.k3s = {
enable = true;
role = "agent";
serverAddr = "https://kitteh-node-1-k3s-server:6443";
};
virtualisation.docker.enable = true;
networking.firewall = {
enable = true;
allowedTCPPorts = [
# HTTP(s)
80
443
# Docker swarm
2377
7946
4789
# K3s
6443
2379
2380
];
allowedUDPPorts = [
# Docker swarm
7946
# K3s
8472
];
};
}

View file

@@ -1,83 +0,0 @@
let
pkgs = import <nixpkgs> {};
secret_data = builtins.readFile ./secrets.nix;
in {
imports = [
./secrets.nix
];
swapDevices = [
{
device = "/var/lib/swapfile";
size = 4 * 1024;
}
];
systemd.services.kittehclean = {
enable = true;
description = "Cleans up this Kitteh node & runs init tasks";
serviceConfig = {
Type = "simple";
ExecStart = pkgs.writeShellScript "kittehclean" ''
echo "KittehCluster: Running cleanup tasks..."
chmod -R 644 /etc/rancher 2> /dev/null > /dev/null
chmod -R 644 /var/lib/rancher 2> /dev/null > /dev/null
if [ ! -d "/etc/nixos/git" ]; then
echo "Waiting for true internet bringup..."
sleep 10
echo "Downloading configuration files..."
${pkgs.git}/bin/git clone https://git.hofers.cloud/greysoh/kittehcluster /etc/nixos/
cp -r ${pkgs.writeText "secrets.nix" secret_data} /etc/nixos/nixinfra/secrets.nix
fi
echo "Done."
'';
};
wantedBy = ["network-online.target"];
};
networking.networkmanager.enable = true;
services.openssh = {
enable = true;
settings = {
PasswordAuthentication = false;
};
};
services.avahi.enable = true;
services.avahi.openFirewall = true;
system.nssModules = pkgs.lib.optional true pkgs.nssmdns;
system.nssDatabases.hosts = pkgs.lib.optionals true (pkgs.lib.mkMerge [
(pkgs.lib.mkBefore ["mdns4_minimal [NOTFOUND=return]"]) # before resolution
(pkgs.lib.mkAfter ["mdns4"]) # after dns
]);
users.users.clusteradm = {
initialPassword = "1234";
isNormalUser = true;
extraGroups = ["sudoer" "wheel" "docker"];
packages = with pkgs; [
git
];
};
environment.systemPackages = with pkgs; [
nano
vim
bash
htop
bottom
# For some reason, after separation, this package isn't included anymore, but the services are
k3s
];
system.stateVersion = "24.05";
}

View file

@@ -1,36 +0,0 @@
let
pkgs = import <nixpkgs> {};
in {
imports = [
./commons.nix
];
# This is intentionally defined like this (not using braces) for updating. DO NOT CHANGE THIS.
# - greysoh
proxmox.qemuConf.memory = 4096;
proxmox.qemuConf.cores = 1;
proxmox.qemuConf.name = "k3s-server";
proxmox.qemuConf.diskSize = pkgs.lib.mkForce "32768";
services.k3s = {
enable = true;
role = "server";
serverAddr = "https://kitteh-node-1-k3s-server:6443";
extraFlags = "--disable servicelb";
};
# K3s settings
networking.firewall = {
enable = true;
allowedTCPPorts = [
6443
2379
2380
];
allowedUDPPorts = [
8472
];
};
}

View file

@@ -1,9 +0,0 @@
let
pkgs = import <nixpkgs> {};
in {
imports = [
../commons.agent.nix
];
networking.hostName = "kitteh-node-1-k3s-agent";
}

View file

@@ -1,41 +0,0 @@
# Because this behaves as cluster init, all the "commons.server.nix" separation
# isn't in here. However, normal commons is. Just fyi.
let
pkgs = import <nixpkgs> {};
in {
imports = [
../commons.nix
];
# This is intentionally defined like this (not using braces) for updating. DO NOT CHANGE THIS.
# - greysoh
proxmox.qemuConf.memory = 4096;
proxmox.qemuConf.cores = 1;
proxmox.qemuConf.name = "k3s-server";
proxmox.qemuConf.diskSize = pkgs.lib.mkForce "32768";
networking.hostName = "kitteh-node-1-k3s-server";
services.k3s = {
enable = true;
role = "server";
clusterInit = true;
extraFlags = "--disable servicelb";
};
# K3s settings
networking.firewall = {
enable = true;
allowedTCPPorts = [
6443
2379
2380
];
allowedUDPPorts = [
8472
];
};
}

View file

@@ -1,9 +0,0 @@
let
pkgs = import <nixpkgs> {};
in {
imports = [
../commons.agent.nix
];
networking.hostName = "kitteh-node-2-k3s-agent";
}

View file

@@ -1,9 +0,0 @@
let
pkgs = import <nixpkgs> {};
in {
imports = [
../commons.server.nix
];
networking.hostName = "kitteh-node-2-k3s-server";
}

View file

@@ -1,18 +0,0 @@
# Example secrets configuration
# There is a better way to do this, but this works.
# To get started:
# 1. Copy this file to 'secrets.nix'
# 2. Run uuidgen (or some other algorithm) to generate a shared secret, and replace services.k3s.token's value with that
# 3. Copy your SSH key(s) into the authorized_keys section.
# 4. Profit!
let
pkgs = import <nixpkgs> {};
in {
services.k3s.token = "shared.secret.here";
users.users.clusteradm.openssh.authorizedKeys.keys = [
];
}

View file

@@ -1,41 +0,0 @@
#!/usr/bin/env bash
if [ "$BASE_IP" = "" ]; then
BASE_IP=192.168.0.20
fi
IP_LAST_OCTET="${BASE_IP##*.}"
IP_MAIN_OCTET="${BASE_IP%.*}"
IP_LAST_OCTET=$((IP_LAST_OCTET-1))
BASE_ID=100
cp meta/tagged_for_upload /tmp/upload_cache
while IFS= read -r LINE; do
UPLOAD_PATH="/var/lib/vz/dump/vzdump-qemu-$(basename $LINE .vma.zst)-$(date +"%Y_%m_%d-%H_%M_%S").vma.zst"
echo "Uploading VM dump '$LINE'..."
CURRENT_NODE="$(dirname $LINE)"
CURRENT_NODE="${CURRENT_NODE##*-}"
IP="$IP_MAIN_OCTET.$((IP_LAST_OCTET+CURRENT_NODE))"
rsync --info=progress2 $LINE root@$IP:$UPLOAD_PATH
if [[ "$@" == *"--install"* ]] || [[ "$@" == *"-i"* ]]; then
echo "Installing VM dump '$LINE'..."
ssh -n root@$IP "qmrestore $UPLOAD_PATH $BASE_ID --force --unique"
BASE_ID=$((BASE_ID+1))
fi
if [[ "$@" == *"--delete"* ]] || [[ "$@" == *"-d"* ]]; then
echo "Deleting VM dump '$LINE'..."
ssh -n root@$IP "rm -rf $UPLOAD_PATH"
fi
ESCAPED_LINE=$(printf '%s\n' "$LINE" | sed -e 's/[\/&]/\\&/g')
sed -i "/$ESCAPED_LINE/d" meta/tagged_for_upload
done < /tmp/upload_cache
echo "Done."

View file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
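# merge.py prepends `export` lines for K3S_TOKEN, SERVER_HOSTNAME and (for agents
# and secondary servers) UPSTREAM_HOSTNAME before this script runs inside the
# installer target via `curtin in-target`.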
sudo apt update
sudo apt install -y curl avahi-daemon
ufw allow 6443/tcp
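# 10.42.0.0/16 and 10.43.0.0/16 are k3s's default pod and service CIDRs (assumed unchanged here).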
ufw allow from 10.42.0.0/16 to any
ufw allow from 10.43.0.0/16 to any
curl "https://get.docker.com/" -L | bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --server https://$UPSTREAM_HOSTNAME:6443 --token $K3S_TOKEN" sh -s -

View file

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
sudo apt update
sudo apt install -y curl avahi-daemon
ufw allow 6443/tcp
ufw allow from 10.42.0.0/16 to any
ufw allow from 10.43.0.0/16 to any
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init --token $K3S_TOKEN --disable servicelb" sh -s -

View file

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
sudo apt update
sudo apt install -y curl avahi-daemon
ufw allow 6443/tcp
ufw allow from 10.42.0.0/16 to any
ufw allow from 10.43.0.0/16 to any
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --server https://$UPSTREAM_HOSTNAME:6443 --token $K3S_TOKEN --disable servicelb" sh -s -

View file

@@ -0,0 +1,8 @@
K3S_TOKEN="shared.secret.here"
# NOTE: Password here is not strong! This password is '1234'.
# When changing the password, remember to escape the dollar signs!
# Example: "Hello\$world"
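# A replacement hash can be generated with e.g. `openssl passwd -6` (an assumption;
# any crypt(3) hash the Ubuntu installer accepts should work), then pasted here with
# every dollar sign escaped as shown above.
#   openssl passwd -6 'your-password-here'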
SETUP_USERNAME="clusteradm"
SETUP_PASSWORD="\$y\$j9T\$zoVys9dfUO/jrysh2Dtim1\$ZQbbt9Qw5qXw0NNCQ7ckdOaVM.QY70sxU82/cQz.siB"

View file

@@ -0,0 +1,19 @@
[kitteh-node-1/server]
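# alt_hostname_definition (optional) overrides the hostname that other nodes use to
# reach this server; merge.py prefers it when exporting UPSTREAM_HOSTNAME.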
alt_hostname_definition = 192.168.2.2
hostname = kitteh-node-1-k3s-server
role = server-init
[kitteh-node-1/agent]
hostname = kitteh-node-1-k3s-agent
upstream = kitteh-node-1/server
role = agent
[kitteh-node-2/server]
hostname = kitteh-node-2-k3s-server
upstream = kitteh-node-1/server
role = server
[kitteh-node-2/agent]
hostname = kitteh-node-2-k3s-agent
upstream = kitteh-node-1/server
role = agent

72
serverinfra/install.sh Executable file
View file

@@ -0,0 +1,72 @@
#!/usr/bin/env bash
SERVER_INSTALL_PATH="$1"
EXTERN_IP="$2"
HTTP_PORT="$((1024 + $RANDOM % 65535))"
TMPDIR="/tmp/server_http_$HTTP_PORT"
if [ "$SERVER_INSTALL_PATH" == "" ]; then
echo "You didn't pass in all the arguments! Usage:"
echo " ./install.sh \$INSTALL_KEY"
exit 1
fi
if [ "$EXTERN_IP" == "" ]; then
BASE_IPS="$(ip a | grep "inet" | grep "brd" | cut -d "/" -f 1 | cut -d " " -f 6)"
EXT_10_DOT_IP="$(echo "$BASE_IPS" | grep "10." | cut -d $'\n' -f 1)"
EXT_172_16_IP="$(echo "$BASE_IPS" | grep "172.16." | cut -d $'\n' -f 1)"
EXT_192168_IP="$(echo "$BASE_IPS" | grep "192.168." | cut -d $'\n' -f 1)"
if [ "$EXT_10_DOT_IP" != "" ]; then
EXTERN_IP="$EXT_10_DOT_IP"
fi
if [ "$EXT_172_16_IP" != "" ]; then
EXTERN_IP="$EXT_172_16_IP"
fi
if [ "$EXT_192168_IP" != "" ]; then
EXTERN_IP="$EXT_192168_IP"
fi
fi
echo "[x] initializing..."
./merge.py "$SERVER_INSTALL_PATH" "http://$EXTERN_IP:$HTTP_PORT/api/installer_update_webhook"
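# merge.py renders ubuntu-install.yml into /tmp/script.yml for this server and
# points its reporting webhook at the HTTP server started below.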
mkdir $TMPDIR
echo "#cloud-config" > $TMPDIR/user-data
cat /tmp/script.yml >> $TMPDIR/user-data
if [ "$(uname)" == "Linux" ]; then
echo "[x] stopping firewall (Linux)..."
sudo systemctl stop firewall
fi
touch $TMPDIR/meta-data
touch $TMPDIR/vendor-data
echo "[x] starting HTTP server..."
echo " - Going to listen on port $HTTP_PORT."
echo " - Unless you believe the install has gone wrong, do NOT manually kill the HTTP server,"
echo " - as it will close on its own."
echo " - Add these command line options to Ubuntu:"
echo " - autoinstall \"ds=nocloud-net;s=http://$EXTERN_IP:$HTTP_PORT/\""
echo
SERVE_SCRIPT="$PWD/serve.py"
pushd $TMPDIR > /dev/null
python3 $SERVE_SCRIPT $HTTP_PORT
popd > /dev/null
echo "[x] running cleanup tasks..."
rm -rf $TMPDIR
if [ "$(uname)" == "Linux" ]; then
echo "[x] starting firewall (Linux)..."
sudo systemctl start firewall
fi

107
serverinfra/merge.py Executable file
View file

@@ -0,0 +1,107 @@
#!/usr/bin/env python3
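# Renders ubuntu-install.yml into an autoinstall user-data file at /tmp/script.yml:
# injects the role script from base-scripts/, every SSH public key found in ~/.ssh/,
# and the hostname/username/password/token taken from config/.env and
# config/infrastructure.ini.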
from os import environ, path, listdir
from sys import argv
import configparser
import base64
import yaml
for item in ["K3S_TOKEN", "SETUP_USERNAME", "SETUP_PASSWORD"]:
if item not in environ:
print(f"ERROR: .env failed to load! (missing environment variable '{item}')")
exit(1)
if len(argv) < 3:
print("ERROR: Missing the server name or the webhook URL")
exit(1)
server_name = argv[1]
server_webhook_url = argv[2]
server_infra_contents = ""
with open("config/infrastructure.ini", "r") as f:
server_infra_contents = f.read()
infrastructure = configparser.ConfigParser()
infrastructure.read_string(server_infra_contents)
if server_name not in infrastructure:
print("ERROR: Server not found in infrastructure document")
exit(1)
infra_server = infrastructure[server_name]
ubuntu_install_contents = ""
with open("ubuntu-install.yml", "r") as f:
ubuntu_install_contents = f.read()
yaml_install_script = yaml.load(ubuntu_install_contents, Loader=yaml.CLoader)
for item in ["hostname", "role"]:
if item not in infra_server:
print(f"ERROR: Missing {item} in {server_name}")
exit(1)
custom_shell_script = "#!/usr/bin/env bash\n"
custom_shell_script += f"export K3S_TOKEN=\"{environ["K3S_TOKEN"]}\"\n"
custom_shell_script += f"export SERVER_NAME=\"{server_name}\"\n"
custom_shell_script += f"export SERVER_HOSTNAME=\"{infra_server["hostname"]}\"\n"
if "upstream" in infra_server:
upstream_name = infra_server["upstream"]
if upstream_name not in infrastructure:
print(f"ERROR: Could not find upstream server '{upstream_name}'")
exit(1)
upstream_server = infrastructure[infra_server["upstream"]]
if "hostname" not in upstream_server:
print(f"ERROR: Missing hostname in upstream '{upstream_name}'")
exit(1)
upstream_hostname = upstream_server["hostname"]
if "alt_hostname_definition" in upstream_server:
upstream_hostname = upstream_server["alt_hostname_definition"]
custom_shell_script += f"export UPSTREAM_NAME=\"{upstream_name}\"\n"
custom_shell_script += f"export UPSTREAM_HOSTNAME=\"{upstream_hostname}\"\n"
custom_shell_script += "\n"
with open(f"base-scripts/role.{infra_server["role"]}.sh", "r") as base_script:
custom_shell_script += base_script.read()
encoded_custom_shell_script = base64.b64encode(bytes(custom_shell_script, "utf-8")).decode("utf-8")
yaml_install_script["autoinstall"]["late-commands"] = []
yaml_install_script["autoinstall"]["late-commands"].append(f"bash -c \"echo \"{encoded_custom_shell_script}\" | base64 -d > /target/postinstall_script\"")
yaml_install_script["autoinstall"]["late-commands"].append("curtin in-target -- bash /postinstall_script")
yaml_install_script["autoinstall"]["late-commands"].append("rm -rf /target/postinstall_script")
yaml_install_script["autoinstall"]["ssh"]["authorized-keys"] = []
ssh_directory_contents = []
try:
ssh_directory_contents = listdir(path.expanduser("~/.ssh/"))
except FileNotFoundError:
pass
for file in ssh_directory_contents:
if file.endswith(".pub"):
with open(path.join(path.expanduser("~/.ssh/"), file), "r") as ssh_public_key:
yaml_install_script["autoinstall"]["ssh"]["authorized-keys"].append(ssh_public_key.read())
yaml_install_script["autoinstall"]["identity"]["hostname"] = infra_server["hostname"]
yaml_install_script["autoinstall"]["identity"]["username"] = environ["SETUP_USERNAME"]
yaml_install_script["autoinstall"]["identity"]["password"] = environ["SETUP_PASSWORD"]
yaml_install_script["autoinstall"]["reporting"]["hook"]["endpoint"] = server_webhook_url
ubuntu_install_contents = yaml.dump(yaml_install_script, Dumper=yaml.CDumper)
with open("/tmp/script.yml", "w") as new_install_script:
new_install_script.write(ubuntu_install_contents)

147
serverinfra/serve.py Normal file
View file

@@ -0,0 +1,147 @@
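# Small HTTP server used by install.sh: serves the generated cloud-init files from
# the current directory and pretty-prints the subiquity reporting-webhook events it
# receives during installation.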
from termcolor import colored
from datetime import datetime, timezone
from os import getcwd, environ
from pathlib import Path
import socketserver
import http.server
import socket
import json
import sys
def json_to_bytes(data: dict) -> bytearray:
return bytearray(json.dumps(data), "utf-8")
# Who needs Flask, anyways?
class HTTPHandler(http.server.BaseHTTPRequestHandler):
def send_headers(self):
self.send_header("Content-Type", "application/json")
self.end_headers()
def do_POST(self):
if self.path == "/api/installer_update_webhook":
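# The Ubuntu installer (subiquity) POSTs JSON progress events here; ubuntu-install.yml
# sets reporting.hook.type to webhook and merge.py fills in this endpoint URL.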
content_length = 0
try:
content_length = int(self.headers.get('Content-Length'))
except ValueError:
self.send_response(400)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": False,
"error": "Failed to decode Content-Length to read body",
}))
return
resp_data = self.rfile.read(content_length).decode("utf-8")
resp_decoded_data: dict = {}
try:
resp_decoded_data = json.loads(resp_data)
if type(resp_decoded_data) is not dict:
self.send_response(400)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": False,
"error": "Recieved invalid type for JSON",
}))
return
except json.JSONDecodeError:
self.send_response(400)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": False,
"error": "Failed to decode JSON",
}))
return
date_time = datetime.fromtimestamp(resp_decoded_data["timestamp"], timezone.utc)
str_formatted_time = date_time.strftime("%H:%M:%S")
result_is_safe = resp_decoded_data["result"] == "SUCCESS" if "result" in resp_decoded_data else True
output_file = sys.stdout if result_is_safe else sys.stderr
output_coloring = "light_blue"
if "result" in resp_decoded_data:
res = resp_decoded_data["result"]
if res == "SUCCESS":
output_coloring = "light_green"
elif res == "WARN":
output_coloring = "light_yellow"
elif res == "FAIL":
output_coloring = "light_red"
result_text_component = f" {resp_decoded_data["result"]} " if "result" in resp_decoded_data else " "
final_output_text = f"{str_formatted_time} {resp_decoded_data["event_type"].upper()} {resp_decoded_data["level"]}:{result_text_component}{resp_decoded_data["name"]} ({resp_decoded_data["description"]})"
print(colored(final_output_text, output_coloring), file=output_file)
self.send_response(200)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": True,
}))
if resp_decoded_data["event_type"] == "finish" and resp_decoded_data["name"] == "subiquity/Shutdown/shutdown":
print("\nSuccessfully finished installing!")
exit(0)
else:
self.send_response(404)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": False,
"error": "Unknown route"
}))
def do_GET(self):
resolved_path = str(Path(self.path).resolve())
file_path = getcwd() + resolved_path
try:
with open(file_path, "rb") as file:
file_contents = file.read()
# Only send the 200 once the file has actually been read
self.send_response(200)
self.end_headers()
self.wfile.write(file_contents)
except (FileNotFoundError, IsADirectoryError):
self.send_response(404)
self.send_headers()
self.wfile.write(json_to_bytes({
"success": False,
"error": "file not found"
}))
except Exception as exception:
# Surface unexpected errors instead of silently swallowing them
print(exception, file=sys.stderr)
def log_message(self, format: str, *args):
status_code = 0
try:
status_code = int(args[1])
except ValueError:
pass
# Suppress logging for POST requests to the /api/ endpoint unless the status code is 400 or higher
if len(args) >= 1 and args[0].startswith("POST") and self.path.startswith("/api/") and status_code < 400:
return
super().log_message(format, *args)
port = int(sys.argv[1]) if "SERVE_DEVELOP" not in environ else 10240
server = socketserver.TCPServer(("", port), HTTPHandler)
server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
print("[x] started HTTP server.")
server.serve_forever()

30
serverinfra/shell Executable file
View file

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
if [ ! -f "config/.env" ]; then
echo "# NOTE: The UUID should be automatically generated, when running nix-shell. However, if it isn't, then" > .env
echo "# run uuidgen and change the below value." >> .env
cat config/.env.example >> config/.env
# Apple moment
sed "s/K3S_TOKEN=\"shared.secret.here\"/K3S_TOKEN=\"$(uuidgen)\"/g" config/.env > config/.env.2
mv config/.env.2 config/.env
echo "INFO: Before running any installation scripts, you should look over the contents of the file '.env',"
echo "and modify the contents as needed."
echo
fi
echo "Installation usage:"
echo " - ./install.sh \$CONFIG \$OPTIONAL_IP:"
echo " Installs Ubuntu Server using configuration \$CONFIG."
echo " \$OPTIONAL_IP is the optional IP address of your computer, if it guesses your IP address wrong."
echo
echo "Have fun!"
set -a
source config/.env
set +a
bash
EXIT_CODE=$?
exit $EXIT_CODE

16
serverinfra/shell.nix Normal file
View file

@@ -0,0 +1,16 @@
{
pkgs ? import <nixpkgs> { },
}: pkgs.mkShell {
buildInputs = with pkgs; [
python312
# Packages
python312Packages.pyyaml
python312Packages.termcolor
];
shellHook = ''
./shell
exit $?
'';
}

View file

@@ -0,0 +1,60 @@
#cloud-config
# See the autoinstall documentation at:
# https://canonical-subiquity.readthedocs-hosted.com/en/latest/reference/autoinstall-reference.html
autoinstall:
apt:
disable_components: []
fallback: offline-install
geoip: true
mirror-selection:
primary:
- country-mirror
- arches: &id001
- amd64
- i386
uri: http://archive.ubuntu.com/ubuntu/
- arches: &id002
- s390x
- arm64
- armhf
- powerpc
- ppc64el
- riscv64
uri: http://ports.ubuntu.com/ubuntu-ports
preserve_sources_list: false
security:
- arches: *id001
uri: http://security.ubuntu.com/ubuntu/
- arches: *id002
uri: http://ports.ubuntu.com/ubuntu-ports
codecs:
install: false
drivers:
install: false
reporting:
hook:
type: webhook
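# endpoint is filled in by merge.py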
kernel:
package: linux-generic
keyboard:
layout: us
toggle: null
variant: ""
locale: en_US.UTF-8
oem:
install: auto
source:
id: ubuntu-server
search_drivers: false
identity:
realname: Cluster Administrator
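# hostname, username and password are filled in by merge.py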
ssh:
allow-pw: false
install-server: true
storage:
layout:
name: lvm
match:
path: /dev/vda
updates: security
version: 1