Merge pull request 'Fix divergements' (#3) from thunderstruck into main

Reviewed-on: https://git.greysoh.dev/imterah/kittehcluster/pulls/3
Tera 2024-11-24 17:12:06 +00:00
commit 66fc8bd88e
39 changed files with 1873 additions and 49 deletions

.gitignore

@@ -7,4 +7,7 @@ __pycache__
 out
 # kubernetes/
-meta
+build.log
+secrets.nix
+kubernetes/meta
+kubernetes/secrets


@@ -1,6 +1,6 @@
 # KittehCluster
 This is my (work in progress, deployed but nothing production running on it *yet*) Kubernetes clustered computing setup, based on Proxmox VE and Ubuntu Server.
 Currently, I *really* cannot recommend that you use this setup in production yet. I have to delete and recreate my VMs multiple times a day, until I fix everything.
 
 ## Prerequisites
 - A POSIX-compliant computer (preferably Unix of some sort, like macOS/Linux/*BSD, but Git Bash or Cygwin would probably work) with Python and Pyyaml
@@ -20,17 +20,19 @@ Currently, I *really* cannot recommend that you use this setup in production yet
 ### Kubernetes setup
 1. SSH into any of the nodes. (i.e `ssh clusteradm@kitteh-node-2-k3s-server`)
 2. As root, grab `/etc/rancher/k3s/k3s.yaml`, and copy it to wherever you store your k3s configurations (on macOS, this is `~/.kube/config`)
+3. Go into the `kubernetes` directory, and copy `example-secrets` to `secrets` and modify these to be your credentials.
+4. Run `./kubesync.py`. If you receive MetalLB errors while this happens, `rm -rf meta`, and try again. It should work on the second attempt. If not, report this issue.
 
 ## Updating
 Run `apt update` and `apt upgrade -y` for the base system. TODO for Kubernetes.
 
 ## Customization
 ### Adding nodes
 In `serverinfra/infrastructure.ini`, copy the role(s) from kitteh-node-2 to a new node (ex. `kitteh-node-2/server` -> `kitteh-node-3/server`, etc), and run the install script again.
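The step above can be sketched as a hypothetical fragment (the section name mirrors the `kitteh-node-2/server` -> `kitteh-node-3/server` rename; the keys inside depend on your existing file, so copy them from your real section rather than from here):

```ini
; serverinfra/infrastructure.ini (illustrative only)
[kitteh-node-3/server]
; copy the key/value pairs from your existing [kitteh-node-2/server] section here
```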
 ### Custom cluster setup / Forking
 This is a guide. You can change more stuff if you'd like, but this will get you started.
 1. First, fork this Git repository if you haven't already.
 2. Modify `serverinfra/config/infrastructure.ini` to fit your needs.
 
 ## Troubleshooting
 - I can't login via SSH!
   - Your SSH public keys are automatically copied over! If not, did you generate an SSH keyring before installing?
   - Additionally, password authentication is disabled!


@@ -5,6 +5,7 @@ format_ver = 1
 [k3s_dash_repo]
 description = Kubernetes Dashboard Repository
 mode = helm
+depends_on = traefik
 [#k3s_dash_repo/helm]
 mode = add_repo


@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: traefik-cf-creds
+data:
+  # Kubernetes base64 encodes the data
+  # By default, this is:
+  cf-email: Y2xvdWRmbGFyZUBleGFtcGxlLmNvbQ== # cloudflare@example.com
+  cf-key: a2V5 # key
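The placeholder values above are plain base64, not encryption; a quick sketch of decoding them and encoding your own credentials before pasting them into the Secret:

```python
import base64

# Decode the placeholder values to see what they stand for.
email = base64.b64decode("Y2xvdWRmbGFyZUBleGFtcGxlLmNvbQ==").decode()
key = base64.b64decode("a2V5").decode()
print(email)  # cloudflare@example.com
print(key)    # key

# Encode your own credentials the same way before putting them in the Secret's data fields.
print(base64.b64encode(b"you@example.com").decode())  # eW91QGV4YW1wbGUuY29t
```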


@@ -0,0 +1,28 @@
+## Mail server configuration
+passboltEnv.plain.EMAIL_DEFAULT_FROM=
+passboltEnv.plain.EMAIL_DEFAULT_FROM_NAME=Passbolt
+passboltEnv.plain.EMAIL_TRANSPORT_DEFAULT_HOST=smtp.gmail.com
+passboltEnv.secret.EMAIL_TRANSPORT_DEFAULT_USERNAME=
+passboltEnv.secret.EMAIL_TRANSPORT_DEFAULT_PASSWORD=
+
+## GPG Information
+passboltEnv.plain.APP_FULL_BASE_URL=https://passbolt.hofers.cloud
+passboltEnv.plain.PASSBOLT_KEY_EMAIL=noreply@passbolt.hofers.cloud
+
+## Misc domain configuration
+ingress.hosts[0].host=passbolt.hofers.cloud
+livenessProbe.httpGet.httpHeaders[0].value=passbolt.hofers.cloud
+readinessProbe.httpGet.httpHeaders[0].value=passbolt.hofers.cloud
+
+## GPG Keys
+# Private key
+gpgServerKeyPrivate=
+# Public key
+gpgServerKeyPublic=
+passboltEnv.secret.PASSBOLT_GPG_SERVER_KEY_FINGERPRINT=
+
+## JWT Information
+# Private Key
+jwtServerPrivate=
+# Public
+jwtServerPublic=


@@ -0,0 +1,14 @@
+# App Base
+gitea.config.APP_NAME=Personal Git Server
+ingress.hosts[0].host=git.greysoh.dev
+gitea.config.server.ROOT_URL=https://git.greysoh.dev
+
+# User configuration
+gitea.admin.username=example
+gitea.admin.password=test
+gitea.admin.email=greyson@hofers.cloud
+gitea.admin.passwordMode=initialOnlyNoReset
+
+# Data configuration
+postgresql.primary.persistence.size=10Gi
+persistence.size=32Gi


@@ -0,0 +1 @@
+adminPassword=password


@@ -0,0 +1,9 @@
+[meta]
+format_ver = 1
+
+[traefik_cf_credentials]
+mode = k3s
+
+[#traefik_cf_credentials/k3s]
+mode = install
+yml_path = ./cloudflare-credentials.yml


@@ -0,0 +1,2 @@
+oauth.clientId=clientId
+oauth.clientSecret=tskey-client-secret


@@ -15,7 +15,7 @@ from typing import Optional
 latest_format_ver = 1
 
-print("KubeSync (KittehCluster, v1.0.0-test)")
+print("KubeSync (KittehCluster, v1.0.1)")
 
 parser = argparse.ArgumentParser(description="Knockoff of ansible for K3s. By default, provisions only")
@@ -62,6 +62,8 @@ class HelmSettings:
     repo: Optional[str]
     namespace_name: Optional[str]
     create_namespace: bool
+    options_file: Optional[str]
+    set_vars: Optional[str]
 
 @dataclass
 class KubeSettings:
@@ -157,7 +159,9 @@ def parse_project(contents: str, workdir=os.getcwd()) -> list[Project]:
             found_project["name"],
             found_project["repo"] if "repo" in found_project else None,
             found_project["namespace"] if "namespace" in found_project else None,
-            create_namespace
+            create_namespace,
+            os.path.join(workdir, found_project["options_file"]) if "options_file" in found_project else None,
+            os.path.join(workdir, found_project["variable_file"]) if "variable_file" in found_project else None,
         )
 
         project_obj = Project(
@@ -266,7 +270,7 @@ def sort_projects(projects: list[Project]) -> list[Project]:
     while project_list_staging:
         n = project_list_staging.pop(0)
         sorted_projects.append(n)
 
         nodes_with_edges = list(filter(lambda x: n.name in x.depends_on, projects))
         for m in nodes_with_edges:
@@ -277,8 +281,9 @@ def sort_projects(projects: list[Project]) -> list[Project]:
     # Check for circular dependencies/cycles
     if any(project.depends_on for project in projects):
+        print(list(filter(lambda project: len(project.depends_on) != 0, projects)))
         raise ValueError("Found circular dependency")
 
     return sorted_projects
 
 def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
@@ -298,13 +303,14 @@ def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
     changeset_meta_id = ""
 
     for line in k3s_config_str.splitlines():
-        if line.strip().startswith("certificate-authority-data"):
-            data = line.strip()[line.strip().index(" ") + 1:]
+        stripped_line = line.strip()
+
+        if stripped_line.startswith("certificate-authority-data"):
+            data = stripped_line[stripped_line.index(" ") + 1:]
             data_in_bytes = bytearray(changeset_meta_id + data, "utf-8")
             changeset_meta_id = hashlib.md5(data_in_bytes).hexdigest()
 
     base_changeset_path = f"meta/{changeset_meta_id}"
 
     try:
         os.mkdir(base_changeset_path)
     except FileExistsError:
@@ -312,7 +318,7 @@ def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
     dir_contents = os.listdir(base_changeset_path)
     changeset_path = f"{base_changeset_path}/gen_{len(dir_contents) + 1}/"
 
     try:
         shutil.copytree(f"{base_changeset_path}/gen_{len(dir_contents)}/", changeset_path)
     except FileNotFoundError:
@@ -320,45 +326,106 @@ def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
     os.mkdir(f"{changeset_path}/k3hashes")
     os.mkdir(f"{changeset_path}/helmhashes")
     os.mkdir(f"{changeset_path}/shellhashes")
 
     for project in sorted_projects:
         match project.mode:
             case "helm":
+                if project.helm_settings == None:
+                    continue
+
                 if project.helm_settings.mode == "add_repo":
                     if project.helm_settings.repo == None or project.helm_settings.name == None:
                         print("ERROR: 'add_repo' is set but either repo or name is undefined")
                         exit(1)
 
                     data_in_bytes = bytearray(f"add_repo.{project.helm_settings.repo}_{project.helm_settings.name}", "utf-8")
                     meta_id = hashlib.md5(data_in_bytes).hexdigest()
 
                     if not os.path.isfile(f"{changeset_path}/helmhashes/{meta_id}"):
                         Path(f"{changeset_path}/helmhashes/{meta_id}").touch()
 
                         changeset_values[project.name] = [
                             f"helm repo add {project.helm_settings.name} {project.helm_settings.repo}"
                         ]
                 elif project.helm_settings.mode == "upgrade" or project.helm_settings.mode == "install":
-                    if project.helm_settings.name == None or project.helm_settings.repo == None or project.helm_settings.namespace_name == None:
-                        print("ERROR: 'upgrade' or 'install' is set but either: name, repo, or namespace_name is undefined")
+                    if project.helm_settings.name == None or project.helm_settings.repo == None:
+                        print("ERROR: 'upgrade' or 'install' is set but either: name, or repo, is undefined")
                         exit(1)
 
                     data_in_bytes = bytearray(f"install.{project.helm_settings.repo}_{project.helm_settings.name}", "utf-8")
                     meta_id = hashlib.md5(data_in_bytes).hexdigest()
 
-                    if not os.path.isfile(f"{changeset_path}/helmhashes/{meta_id}") and project.helm_settings.mode == "install":
+                    create_namespace = "--create-namespace" if project.helm_settings.create_namespace else ""
+                    namespace = f"--namespace {project.helm_settings.namespace_name}" if project.helm_settings.namespace_name else ""
+                    options_file = f"-f {project.helm_settings.options_file}" if project.helm_settings.options_file else ""
+
+                    should_still_continue = False
+                    variables = ""
+
+                    if project.helm_settings.set_vars:
+                        with open(project.helm_settings.set_vars, "r") as variable_file:
+                            contents = variable_file.read().splitlines()
+                            contents = list(map(lambda x: x.strip(), contents))
+                            contents = list(filter(lambda x: not x.startswith("#") and x != "", contents))
+
+                            for content in contents:
+                                key = content[0:content.index("=")]
+                                value = content[content.index("=")+1:]
+
+                                variables += f"--set \"{key}\"=\"{value}\" "
+
+                        variables = variables[:len(variables)-1]
+
+                    if project.helm_settings.options_file:
+                        data_in_bytes = bytearray(f"{project.helm_settings.options_file}", "utf-8")
+                        options_file_meta_id = hashlib.md5(data_in_bytes).digest().hex()
+
+                        if not os.path.isfile(f"{changeset_path}/helmhashes/{options_file_meta_id}"):
+                            file_hash = ""
+
+                            with open(project.helm_settings.options_file, "rb") as helm_options_file:
+                                data = helm_options_file.read()
+                                file_hash = hashlib.md5(data).hexdigest()
+
+                            with open(f"{changeset_path}/helmhashes/{options_file_meta_id}", "w") as helm_options_metaid_file:
+                                helm_options_metaid_file.write(file_hash)
+
+                            should_still_continue = True
+                        else:
+                            file_hash = ""
+
+                            with open(project.helm_settings.options_file, "rb") as helm_options_file:
+                                data = helm_options_file.read()
+                                file_hash = hashlib.md5(data).hexdigest()
+
+                            with open(f"{changeset_path}/helmhashes/{options_file_meta_id}", "r+") as helm_options_metaid_file:
+                                read_hash = helm_options_metaid_file.read()
+
+                                if read_hash != file_hash:
+                                    helm_options_metaid_file.seek(0)
+                                    helm_options_metaid_file.write(file_hash)
+
+                                    should_still_continue = True
+
+                    if (not os.path.isfile(f"{changeset_path}/helmhashes/{meta_id}") or should_still_continue) and project.helm_settings.mode == "install":
                         Path(f"{changeset_path}/helmhashes/{meta_id}").touch()
 
                         changeset_values[project.name] = [
                             f"helm repo update {project.helm_settings.repo[:project.helm_settings.repo.index("/")]}",
-                            f"helm upgrade --install {project.helm_settings.name} {project.helm_settings.repo} {"--create-namespace" if project.helm_settings.create_namespace else ""} --namespace {project.helm_settings.namespace_name}"
+                            f"helm upgrade --install {options_file} {variables} {project.helm_settings.name} \"{project.helm_settings.repo}\" {create_namespace} {namespace}"
                         ]
                     elif project.helm_settings.mode == "upgrade" or mode == "update":
                         changeset_values[project.name] = [
                             f"helm repo update {project.helm_settings.repo[:project.helm_settings.repo.index("/")]}",
-                            f"helm upgrade {project.helm_settings.name} {project.helm_settings.repo} {"--create-namespace" if project.helm_settings.create_namespace else ""} --namespace {project.helm_settings.namespace_name}"
+                            f"helm upgrade {options_file} {variables} {project.helm_settings.name} \"{project.helm_settings.repo}\" {create_namespace} {namespace}"
                         ]
             case "k3s":
+                if project.kube_settings == None:
+                    continue
+
+                commands_to_run = []
+
                 data_in_bytes = bytearray(f"{project.kube_settings.yml_path}", "utf-8")
                 meta_id = hashlib.md5(data_in_bytes).digest().hex()
@@ -368,7 +435,7 @@ def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
                 with open(project.kube_settings.yml_path, "rb") as kube_file:
                     data = kube_file.read()
                     file_hash = hashlib.md5(data).hexdigest()
 
                 with open(f"{changeset_path}/k3hashes/{meta_id}", "w") as kube_metaid_file:
                     kube_metaid_file.write(file_hash)
             else:
@@ -377,19 +444,20 @@ def generate_change_set(projects: list[Project]) -> dict[str, list[str]]:
                 with open(project.kube_settings.yml_path, "rb") as kube_file:
                     data = kube_file.read()
                     file_hash = hashlib.md5(data).hexdigest()
 
                 with open(f"{changeset_path}/k3hashes/{meta_id}", "r+") as kube_metaid_file:
                     read_hash = kube_metaid_file.read()
 
                     if read_hash == file_hash:
                         continue
                     else:
                         kube_metaid_file.seek(0)
                         kube_metaid_file.write(file_hash)
 
-                changeset_values[project.name] = [
-                    f"kubectl apply -f {project.kube_settings.yml_path}"
-                ]
+                # commands_to_run.append(f"kubectl delete -f {project.kube_settings.yml_path}")
+                commands_to_run.append(f"kubectl apply -f {project.kube_settings.yml_path}")
+
+                changeset_values[project.name] = commands_to_run
             case _:
                 raise Exception("Could not match project type?")
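The k3s branch above is hash-gated re-apply: remember the manifest's md5, and only emit `kubectl apply` when the file actually changed. A minimal sketch of that gate under the same scheme (function and paths are mine, simplified from the script):

```python
import hashlib
import os

def needs_apply(yml_path: str, hash_dir: str) -> bool:
    # The hash file is keyed by the md5 of the manifest *path*;
    # its content is the md5 of the manifest *contents*.
    meta_id = hashlib.md5(yml_path.encode()).hexdigest()
    hash_file = os.path.join(hash_dir, meta_id)

    with open(yml_path, "rb") as f:
        file_hash = hashlib.md5(f.read()).hexdigest()

    if os.path.isfile(hash_file):
        with open(hash_file) as f:
            if f.read() == file_hash:
                return False  # unchanged: skip kubectl apply

    # New or changed manifest: record the hash and apply.
    with open(hash_file, "w") as f:
        f.write(file_hash)
    return True
```

First call applies, a repeat call with an unchanged file is skipped, and any edit to the manifest triggers an apply again.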
@@ -400,12 +468,15 @@ def sigint_handler(signum, frame):
     if changeset_path == None:
         print("Changeset path is not set yet. Exiting...")
 
         if signum != None:
            sys.exit(0)
 
+    if changeset_path == None:
+        exit(2)
+
     shutil.rmtree(changeset_path)
 
     if signum != None:
         print("Exiting...")
         sys.exit(0)
@@ -428,11 +499,10 @@ if not projects:
 print("Generating changesets...")
 change_set = generate_change_set(projects)
 
-if not change_set:
-    print("No changes detected.")
-    exit(0)
-
 if args.dryrun_only:
+    if not change_set:
+        print("No changes detected.")
+
     sigint_handler(None, None)
 
     print("Generating changeset script (writing to stderr!)")
@@ -440,10 +510,14 @@ if args.dryrun_only:
     for project_name in change_set:
         print(f'echo "Applying changeset for \'{project_name}\'..."', file=sys.stderr)
 
         for command in change_set[project_name]:
             print(command, file=sys.stderr)
 else:
+    if not change_set:
+        print("No changes detected.")
+        exit(0)
+
     for project_name in change_set:
         print(f"Applying changeset for '{project_name}'...")
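The new `variable_file` handling in this script turns `key=value` lines (like the gitea and passbolt files earlier in this PR) into helm `--set` flags. A standalone sketch of that parsing (the function name is mine, not the script's):

```python
def build_set_flags(lines):
    # Strip whitespace, drop comments and blanks, then split on the first '='.
    cleaned = [line.strip() for line in lines]
    cleaned = [line for line in cleaned if line and not line.startswith("#")]

    flags = []
    for line in cleaned:
        key, _, value = line.partition("=")  # first '=' wins, values may contain '='
        flags.append(f'--set "{key}"="{value}"')
    return " ".join(flags)

lines = [
    "# App Base",
    "gitea.config.APP_NAME=Personal Git Server",
    "persistence.size=32Gi",
]
print(build_set_flags(lines))
# --set "gitea.config.APP_NAME"="Personal Git Server" --set "persistence.size"="32Gi"
```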


@@ -0,0 +1,12 @@
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+spec:
+  addresses:
+  - 192.168.2.10-192.168.2.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: first-pool-advertisement


@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: metallb-system
+  labels:
+    pod-security.kubernetes.io/enforce: privileged
+    pod-security.kubernetes.io/audit: privileged
+    pod-security.kubernetes.io/warn: privileged


@@ -0,0 +1,39 @@
+[meta]
+format_ver = 1
+
+[metallb_namespace]
+description = Namespace Configuration for MetalLB
+mode = k3s
+
+[#metallb_namespace/k3s]
+mode = install
+yml_path = ./metallb_namespace.yml
+
+[metallb_repo]
+description = MetalLB Repository
+mode = helm
+depends_on = metallb_namespace
+
+[#metallb_repo/helm]
+mode = add_repo
+name = metallb
+repo = https://metallb.github.io/metallb
+
+[metallb]
+description = MetalLB
+mode = helm
+depends_on = metallb_repo
+
+[#metallb/helm]
+mode = install
+name = metallb
+repo = metallb/metallb
+
+[metallb_ip_config]
+description = IPs for MetalLB
+mode = k3s
+depends_on = metallb
+
+[#metallb_ip_config/k3s]
+mode = install
+yml_path = ./metallb_ip_config.yml


@@ -0,0 +1,12 @@
+[meta]
+format_ver = 1
+
+[metallb]
+description = MetalLB
+mode = include
+path = ./metallb/project.ini
+
+[traefik]
+description = Traefik
+mode = include
+path = ./traefik/project.ini


@@ -0,0 +1,4 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: traefik-account


@@ -0,0 +1,56 @@
+[meta]
+format_ver = 1
+
+[traefik_role]
+description = Traefik role for self
+mode = k3s
+depends_on = metallb_ip_config:traefik_cf_credentials:longhorn_storage_class
+
+[#traefik_role/k3s]
+mode = install
+yml_path = ./role.yml
+
+[traefik_account]
+description = Traefik account
+mode = k3s
+depends_on = traefik_role
+
+[#traefik_account/k3s]
+mode = install
+yml_path = ./account.yml
+
+[traefik_role_binding]
+description = Traefik role binding
+mode = k3s
+depends_on = traefik_account
+
+[#traefik_role_binding/k3s]
+mode = install
+yml_path = ./role-binding.yml
+
+[traefik_pv_claim]
+description = Traefik certificate storage claim
+mode = k3s
+depends_on = traefik_role_binding
+
+[#traefik_pv_claim/k3s]
+mode = install
+yml_path = ./pv-claim.yml
+
+[traefik]
+description = Traefik
+mode = k3s
+depends_on = traefik_account
+
+[#traefik/k3s]
+mode = install
+yml_path = ./traefik.yml
+
+[traefik_dashboard]
+description = Traefik Dashboard
+mode = k3s
+depends_on = traefik
+
+[#traefik_dashboard/k3s]
+mode = install
+yml_path = ./traefik-dashboard.yml


@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: traefik-volume-claim
+  namespace: kube-system
+  labels:
+    app: traefik
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: longhorn
+  resources:
+    requests:
+      storage: 100Mi


@@ -0,0 +1,13 @@
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traefik-role-binding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: traefik-role
+subjects:
+  - kind: ServiceAccount
+    name: traefik-account
+    namespace: default # This tutorial uses the "default" K8s namespace.


@@ -0,0 +1,39 @@
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traefik-role
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - services
+      - secrets
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - discovery.k8s.io
+    resources:
+      - endpointslices
+    verbs:
+      - list
+      - watch
+  - apiGroups:
+      - extensions
+      - networking.k8s.io
+    resources:
+      - ingresses
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - extensions
+      - networking.k8s.io
+    resources:
+      - ingresses/status
+    verbs:
+      - update


@@ -0,0 +1,47 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: traefik-dashboard-service
+  annotations:
+    metallb.universe.tf/loadBalancerIPs: 192.168.2.10
+    metallb.universe.tf/allow-shared-ip: "this-is-traefik"
+spec:
+  type: LoadBalancer
+  ports:
+    - port: 8080
+      targetPort: dashboard
+  selector:
+    app: traefik
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: traefik-web-service
+  annotations:
+    metallb.universe.tf/loadBalancerIPs: 192.168.2.10
+    metallb.universe.tf/allow-shared-ip: "this-is-traefik"
+spec:
+  type: LoadBalancer
+  ports:
+    - targetPort: web
+      port: 80
+  selector:
+    app: traefik
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: traefik-web-service
+  annotations:
+    metallb.universe.tf/loadBalancerIPs: 192.168.2.10
+    metallb.universe.tf/allow-shared-ip: "this-is-traefik"
+spec:
+  type: LoadBalancer
+  ports:
+    - targetPort: web-tls
+      port: 443
+  selector:
+    app: traefik


@@ -0,0 +1,56 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: traefik-deployment
+  labels:
+    app: traefik
+spec:
+  replicas: 0
+  selector:
+    matchLabels:
+      app: traefik
+  template:
+    metadata:
+      labels:
+        app: traefik
+    spec:
+      serviceAccountName: traefik-account
+      containers:
+        - name: traefik
+          image: traefik:v3.1
+          args:
+            - "--entryPoints.web.address=:80"
+            - "--entryPoints.websecure.address=:443"
+            - "--entryPoints.websecure.http.tls.certresolver=myresolver"
+            - "--certificatesresolvers.letsencrypt.acme.email=greyson@hofers.cloud"
+            # - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
+            - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
+            - "--certificatesresolvers.letsencrypt.acme.storage=/sslcerts/cert.json"
+            # - "--api.insecure"
+            - "--providers.kubernetesingress"
+          ports:
+            - name: web
+              containerPort: 80
+            - name: web-tls
+              containerPort: 443
+            - name: dashboard
+              containerPort: 8080
+          env:
+            - name: CF_API_EMAIL
+              valueFrom:
+                secretKeyRef:
+                  name: traefik-cf-creds
+                  key: cf-email
+            - name: CF_API_KEY
+              valueFrom:
+                secretKeyRef:
+                  name: traefik-cf-creds
+                  key: cf-key
+          volumeMounts:
+            - mountPath: /ssl-certs
+              name: cert-data
+      volumes:
+        - name: cert-data
+          persistentVolumeClaim:
+            claimName: traefik-volume-claim


@@ -0,0 +1,29 @@
+[meta]
+format_ver = 1
+
+[longhorn_repo]
+mode = helm
+
+[#longhorn_repo/helm]
+mode = add_repo
+name = longhorn
+repo = https://charts.longhorn.io
+
+[longhorn]
+mode = helm
+depends_on = longhorn_repo
+
+[#longhorn/helm]
+mode = install
+name = longhorn
+repo = longhorn/longhorn
+namespace = longhorn-system
+create_namespace = true
+
+[longhorn_storage_class]
+depends_on = longhorn
+mode = k3s
+
+[#longhorn_storage_class/k3s]
+mode = install
+yml_path = ./storage-class.yml


@@ -0,0 +1,11 @@
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: longhorn
+provisioner: driver.longhorn.io
+allowVolumeExpansion: true
+parameters:
+  numberOfReplicas: "3"
+  staleReplicaTimeout: "2880" # 48 hours in minutes
+  fromBackup: ""
+  fsType: "ext4"


@@ -1,7 +1,27 @@
 [meta]
 format_ver = 1
 
+[secrets]
+description = Secret Values
+mode = include
+path = ./secrets/project.ini
+
+[longhorn]
+description = Longhorn Distributed Storage
+mode = include
+path = ./longhorn/project.ini
+
+[loadbalancer]
+description = LoadBalancer Configuration
+mode = include
+path = ./loadbalancer/project.ini
+
 [dashboard]
 description = Various Dashboards
 mode = include
 path = ./dashboard/project.ini
+
+[services]
+description = Services to Use
+mode = include
+path = ./services/project.ini


@ -0,0 +1,747 @@
# Default values for gitea.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
## @section Global
#
## @param global.imageRegistry global image registry override
## @param global.imagePullSecrets global image pull secrets override; can be extended by `imagePullSecrets`
## @param global.storageClass global storage class override
## @param global.hostAliases global hostAliases which will be added to the pod's hosts files
global:
imageRegistry: ""
## E.g.
## imagePullSecrets:
## - myRegistryKeySecretName
##
imagePullSecrets: []
storageClass: "longhorn"
hostAliases: []
# - ip: 192.168.137.2
# hostnames:
# - example.com
## @param replicaCount number of replicas for the deployment
replicaCount: 1
## @section strategy
## @param strategy.type strategy type
## @param strategy.rollingUpdate.maxSurge maxSurge
## @param strategy.rollingUpdate.maxUnavailable maxUnavailable
strategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: "100%"
maxUnavailable: 0
## @param clusterDomain cluster domain
clusterDomain: cluster.local
## @section Image
## @param image.registry image registry, e.g. gcr.io,docker.io
## @param image.repository Image to start for this pod
## @param image.tag Visit: [Image tag](https://code.forgejo.org/forgejo/-/packages/container/forgejo/versions). Defaults to `appVersion` within Chart.yaml.
## @param image.digest Image digest. Allows to pin the given image tag. Useful for having control over mutable tags like `latest`
## @param image.pullPolicy Image pull policy
## @param image.rootless Wether or not to pull the rootless version of Forgejo
## @param image.fullOverride Completely overrides the image registry, path/image, tag and digest. **Adjust `image.rootless` accordingly and review [Rootless defaults](#rootless-defaults).**
image:
registry: code.forgejo.org
repository: forgejo/forgejo
# Overrides the image tag whose default is the chart appVersion.
tag: ""
digest: ""
pullPolicy: IfNotPresent
rootless: true
fullOverride: ""
## @param imagePullSecrets Secret to use for pulling the image
imagePullSecrets: []
## @section Security
# Security context is only usable with rootless image due to image design
## @param podSecurityContext.fsGroup Set the shared file system group for all containers in the pod.
podSecurityContext:
fsGroup: 1000
## @param containerSecurityContext Security context
containerSecurityContext: {}
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# # Add the SYS_CHROOT capability for root and rootless images if you intend to
# # run pods on nodes that use the container runtime cri-o. Otherwise, you will
# # get an error message from the SSH server that it is not possible to read from
# # the repository.
# # https://gitea.com/gitea/helm-chart/issues/161
# add:
# - SYS_CHROOT
# privileged: false
# readOnlyRootFilesystem: true
# runAsGroup: 1000
# runAsNonRoot: true
# runAsUser: 1000
## @deprecated The securityContext variable has been split two:
## - containerSecurityContext
## - podSecurityContext.
## @param securityContext Run init and Forgejo containers as a specific securityContext
securityContext: {}
## @param podDisruptionBudget Pod disruption budget
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## @section Service
service:
## @param service.http.type Kubernetes service type for web traffic
## @param service.http.port Port number for web traffic
## @param service.http.clusterIP ClusterIP setting for http autosetup for deployment is None
## @param service.http.loadBalancerIP LoadBalancer IP setting
## @param service.http.nodePort NodePort for http service
## @param service.http.externalTrafficPolicy If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
## @param service.http.externalIPs External IPs for service
## @param service.http.ipFamilyPolicy HTTP service dual-stack policy
## @param service.http.ipFamilies HTTP service dual-stack familiy selection,for dual-stack parameters see official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
## @param service.http.loadBalancerSourceRanges Source range filter for http loadbalancer
## @param service.http.annotations HTTP service annotations
## @param service.http.labels HTTP service additional labels
## @param service.http.loadBalancerClass Loadbalancer class
http:
type: ClusterIP
port: 3000
clusterIP: None
loadBalancerIP:
nodePort:
externalTrafficPolicy:
externalIPs:
ipFamilyPolicy:
ipFamilies:
loadBalancerSourceRanges: []
annotations: {}
labels: {}
loadBalancerClass:
## @param service.ssh.type Kubernetes service type for ssh traffic
## @param service.ssh.port Port number for ssh traffic
## @param service.ssh.clusterIP ClusterIP setting for ssh autosetup for deployment is None
## @param service.ssh.loadBalancerIP LoadBalancer IP setting
## @param service.ssh.nodePort NodePort for ssh service
## @param service.ssh.externalTrafficPolicy If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
## @param service.ssh.externalIPs External IPs for service
## @param service.ssh.ipFamilyPolicy SSH service dual-stack policy
## @param service.ssh.ipFamilies SSH service dual-stack family selection; for dual-stack parameters see the official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
## @param service.ssh.hostPort HostPort for ssh service
## @param service.ssh.loadBalancerSourceRanges Source range filter for ssh loadbalancer
## @param service.ssh.annotations SSH service annotations
## @param service.ssh.labels SSH service additional labels
## @param service.ssh.loadBalancerClass Loadbalancer class
ssh:
type: ClusterIP
port: 22
clusterIP: None
loadBalancerIP:
nodePort:
externalTrafficPolicy:
externalIPs:
ipFamilyPolicy:
ipFamilies:
hostPort:
loadBalancerSourceRanges: []
annotations: {}
labels: {}
loadBalancerClass:
## @section Ingress
## @param ingress.enabled Enable ingress
## @param ingress.className Ingress class name
## @param ingress.annotations Ingress annotations
## @param ingress.hosts[0].host Default Ingress host
## @param ingress.hosts[0].paths[0].path Default Ingress path
## @param ingress.hosts[0].paths[0].pathType Ingress path type
## @param ingress.tls Ingress tls settings
## @extra ingress.apiVersion Specify APIVersion of ingress object. Mostly would only be used for argocd.
ingress:
enabled: true
# className: nginx
className:
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: git.example.com
paths:
- path: /
pathType: Prefix
tls: []
# - secretName: chart-example-tls
# hosts:
# - git.example.com
# Mostly for argocd or any other CI that uses `helm template | kubectl apply` or similar
# If helm doesn't correctly detect your ingress API version you can set it here.
# apiVersion: networking.k8s.io/v1
## @section deployment
#
## @param resources Kubernetes resources
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
## @param schedulerName Use an alternate scheduler, e.g. "stork"
schedulerName: ""
## @param nodeSelector NodeSelector for the deployment
nodeSelector: {}
## @param tolerations Tolerations for the deployment
tolerations: []
## @param affinity Affinity for the deployment
affinity: {}
## @param topologySpreadConstraints TopologySpreadConstraints for the deployment
topologySpreadConstraints: []
## @param dnsConfig dnsConfig for the deployment
dnsConfig: {}
## @param priorityClassName priorityClassName for the deployment
priorityClassName: ""
## @param deployment.env Additional environment variables to pass to containers
## @param deployment.terminationGracePeriodSeconds How long to wait until forcefully kill the pod
## @param deployment.labels Labels for the deployment
## @param deployment.annotations Annotations for the Forgejo deployment to be created
deployment:
env:
[]
# - name: VARIABLE
# value: my-value
terminationGracePeriodSeconds: 60
labels: {}
annotations: {}
## @section ServiceAccount
## @param serviceAccount.create Enable the creation of a ServiceAccount
## @param serviceAccount.name Name of the created ServiceAccount, defaults to release name. Can also link to an externally provided ServiceAccount that should be used.
## @param serviceAccount.automountServiceAccountToken Enable/disable auto mounting of the service account token
## @param serviceAccount.imagePullSecrets Image pull secrets, available to the ServiceAccount
## @param serviceAccount.annotations Custom annotations for the ServiceAccount
## @param serviceAccount.labels Custom labels for the ServiceAccount
serviceAccount:
create: false
name: ""
automountServiceAccountToken: false
imagePullSecrets: []
# - name: private-registry-access
annotations: {}
labels: {}
## @section Persistence
#
## @param persistence.enabled Enable persistent storage
## @param persistence.create Whether to create the persistentVolumeClaim for shared storage
## @param persistence.mount Whether the persistentVolumeClaim should be mounted (even if not created)
## @param persistence.claimName Use an existing claim to store repository information
## @param persistence.size Size for persistence to store repo information
## @param persistence.accessModes AccessMode for persistence
## @param persistence.labels Labels for the persistence volume claim to be created
## @param persistence.annotations.helm.sh/resource-policy Resource policy for the persistence volume claim
## @param persistence.storageClass Name of the storage class to use
## @param persistence.subPath Subdirectory of the volume to mount at
## @param persistence.volumeName Name of persistent volume in PVC
persistence:
enabled: true
create: true
mount: true
claimName: gitea-shared-storage
size: 10Gi
accessModes:
- ReadWriteOnce
labels: {}
storageClass:
subPath:
volumeName: ""
annotations:
helm.sh/resource-policy: keep
## @param extraVolumes Additional volumes to mount to the Forgejo deployment
extraVolumes: []
# - name: postgres-ssl-vol
# secret:
# secretName: gitea-postgres-ssl
## @param extraContainerVolumeMounts Mounts that are only mapped into the Forgejo runtime/main container, to e.g. override custom templates.
extraContainerVolumeMounts: []
## @param extraInitVolumeMounts Mounts that are only mapped into the init-containers. Can be used for additional preconfiguration.
extraInitVolumeMounts: []
## @deprecated The extraVolumeMounts variable has been split in two:
## - extraContainerVolumeMounts
## - extraInitVolumeMounts
## As an example, can be used to mount a client cert when connecting to an external Postgres server.
## @param extraVolumeMounts **DEPRECATED** Additional volume mounts for init containers and the Forgejo main container
extraVolumeMounts: []
# - name: postgres-ssl-vol
# readOnly: true
# mountPath: "/pg-ssl"
## @section Init
## @param initPreScript Bash shell script copied verbatim to the start of the init-container.
initPreScript: ""
#
# initPreScript: |
# mkdir -p /data/git/.postgresql
# cp /pg-ssl/* /data/git/.postgresql/
# chown -R git:git /data/git/.postgresql/
# chmod 400 /data/git/.postgresql/postgresql.key
## @param initContainers.resources.limits initContainers.limits Kubernetes resource limits for init containers
## @param initContainers.resources.requests.cpu initContainers.requests.cpu Kubernetes cpu resource limits for init containers
## @param initContainers.resources.requests.memory initContainers.requests.memory Kubernetes memory resource limits for init containers
initContainers:
resources:
limits: {}
requests:
cpu: 100m
memory: 128Mi
# Configure commit/action signing prerequisites
## @section Signing
#
## @param signing.enabled Enable commit/action signing
## @param signing.gpgHome GPG home directory
## @param signing.privateKey Inline private gpg key for signed internal Git activity
## @param signing.existingSecret Use an existing secret to store the value of `signing.privateKey`
signing:
enabled: false
gpgHome: /data/git/.gnupg
privateKey: ""
# privateKey: |-
# -----BEGIN PGP PRIVATE KEY BLOCK-----
# ...
# -----END PGP PRIVATE KEY BLOCK-----
existingSecret: ""
## @section Gitea
#
gitea:
## @param gitea.admin.username Username for the Forgejo admin user
## @param gitea.admin.existingSecret Use an existing secret to store admin user credentials
## @param gitea.admin.password Password for the Forgejo admin user
## @param gitea.admin.email Email for the Forgejo admin user
## @param gitea.admin.passwordMode Mode for how to set/update the admin user password. Options are: initialOnlyNoReset, initialOnlyRequireReset, and keepUpdated
admin:
# existingSecret: gitea-admin-secret
username: gitea_admin
password: r8sA8CPHD9!bt6d
email: "gitea@local.domain"
passwordMode: keepUpdated
## @param gitea.metrics.enabled Enable Forgejo metrics
## @param gitea.metrics.serviceMonitor.enabled Enable Forgejo metrics service monitor
metrics:
enabled: false
serviceMonitor:
enabled: false
# additionalLabels:
# prometheus-release: prom1
## @param gitea.ldap LDAP configuration
ldap:
[]
# - name: "LDAP 1"
# existingSecret:
# securityProtocol:
# host:
# port:
# userSearchBase:
# userFilter:
# adminFilter:
# emailAttribute:
# bindDn:
# bindPassword:
# usernameAttribute:
# publicSSHKeyAttribute:
# Either specify inline `key` and `secret` or refer to them via `existingSecret`
## @param gitea.oauth OAuth configuration
oauth:
[]
# - name: 'OAuth 1'
# provider:
# key:
# secret:
# existingSecret:
# autoDiscoverUrl:
# useCustomUrls:
# customAuthUrl:
# customTokenUrl:
# customProfileUrl:
# customEmailUrl:
## @param gitea.additionalConfigSources Additional configuration from secret or configmap
additionalConfigSources: []
# - secret:
# secretName: gitea-app-ini-oauth
# - configMap:
# name: gitea-app-ini-plaintext
## @param gitea.additionalConfigFromEnvs Additional configuration sources from environment variables
additionalConfigFromEnvs: []
## @param gitea.podAnnotations Annotations for the Forgejo pod
podAnnotations: {}
## @param gitea.ssh.logLevel Configure OpenSSH's log level. Only available for root-based Forgejo image.
ssh:
logLevel: "INFO"
## @section `app.ini` overrides
## @descriptionStart
##
## Every value described in the [Cheat
## Sheet](https://forgejo.org/docs/latest/admin/config-cheat-sheet/) can be
## set as a Helm value. Configuration sections map to (lowercased) YAML
## blocks, while the keys themselves remain in all caps.
##
## @descriptionEnd
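## For example (a sketch; `LANDING_PAGE` is one cheat-sheet key under the
## `[server]` section), the app.ini snippet
##   [server]
##   LANDING_PAGE = explore
## would be expressed in these values as:
##   server:
##     LANDING_PAGE: explore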
config:
# values in the DEFAULT section
# (https://forgejo.org/docs/latest/admin/config-cheat-sheet/#overall-default)
# are un-namespaced
## @param gitea.config.APP_NAME Application name, used in the page title
APP_NAME: "Forgejo: Beyond coding. We forge."
## @param gitea.config.RUN_MODE Application run mode, affects performance and debugging: `dev` or `prod`
RUN_MODE: prod
## @param gitea.config.repository General repository settings
repository: {}
## @param gitea.config.cors Cross-origin resource sharing settings
cors: {}
## @param gitea.config.ui User interface settings
ui: {}
## @param gitea.config.markdown Markdown parser settings
markdown: {}
## @param gitea.config.server [object] General server settings
server:
SSH_PORT: 22 # rootful image
SSH_LISTEN_PORT: 2222 # rootless image
## @param gitea.config.database Database configuration (only necessary with an [externally managed DB](https://code.forgejo.org/forgejo-helm/forgejo-helm#external-database)).
database: {}
## @param gitea.config.indexer Settings for what content is indexed and how
indexer: {}
## @param gitea.config.queue Job queue configuration
queue: {}
## @param gitea.config.admin Admin user settings
admin: {}
## @param gitea.config.security Site security settings
security: {}
## @param gitea.config.camo Settings for the [camo](https://github.com/cactus/go-camo) media proxy server (disabled by default)
camo: {}
## @param gitea.config.openid Configuration for authentication with OpenID (disabled by default)
openid: {}
## @param gitea.config.oauth2_client OAuth2 client settings
oauth2_client: {}
## @param gitea.config.service Configuration for miscellaneous Forgejo services
service: {}
## @param gitea.config.ssh.minimum_key_sizes SSH minimum key sizes
ssh.minimum_key_sizes: {}
## @param gitea.config.webhook Webhook settings
webhook: {}
## @param gitea.config.mailer Mailer configuration (disabled by default)
mailer: {}
## @param gitea.config.email.incoming Configuration for handling incoming mail (disabled by default)
email.incoming: {}
## @param gitea.config.cache Cache configuration
cache: {}
## @param gitea.config.session Session/cookie handling
session: {}
## @param gitea.config.picture User avatar settings
picture: {}
## @param gitea.config.project Project board defaults
project: {}
## @param gitea.config.attachment Issue and PR attachment configuration
attachment: {}
## @param gitea.config.log Logging configuration
log: {}
## @param gitea.config.cron Cron job configuration
cron: {}
## @param gitea.config.git Global settings for Git
git: {}
## @param gitea.config.metrics Settings for the Prometheus endpoint (disabled by default)
metrics: {}
## @param gitea.config.api Settings for the Swagger API documentation endpoints
api: {}
## @param gitea.config.oauth2 Settings for the [OAuth2 provider](https://forgejo.org/docs/latest/admin/oauth2-provider/)
oauth2: {}
## @param gitea.config.i18n Internationalization settings
i18n: {}
## @param gitea.config.markup Configuration for advanced markup processors
markup: {}
## @param gitea.config.highlight.mapping File extension to language mapping overrides for syntax highlighting
highlight.mapping: {}
## @param gitea.config.time Locale settings
time: {}
## @param gitea.config.migrations Settings for Git repository migrations
migrations: {}
## @param gitea.config.federation Federation configuration
federation: {}
## @param gitea.config.packages Package registry settings
packages: {}
## @param gitea.config.mirror Configuration for repository mirroring
mirror: {}
## @param gitea.config.lfs Large File Storage configuration
lfs: {}
## @param gitea.config.repo-avatar Repository avatar storage configuration
repo-avatar: {}
## @param gitea.config.avatar User/org avatar storage configuration
avatar: {}
## @param gitea.config.storage General storage settings
storage: {}
## @param gitea.config.proxy Proxy configuration (disabled by default)
proxy: {}
## @param gitea.config.actions Configuration for [Forgejo Actions](https://forgejo.org/docs/latest/user/actions/)
actions: {}
## @param gitea.config.other Uncategorized configuration options
other: {}
## @section LivenessProbe
#
## @param gitea.livenessProbe.enabled Enable liveness probe
## @param gitea.livenessProbe.tcpSocket.port Port to probe for liveness
## @param gitea.livenessProbe.initialDelaySeconds Initial delay before liveness probe is initiated
## @param gitea.livenessProbe.timeoutSeconds Timeout for liveness probe
## @param gitea.livenessProbe.periodSeconds Period for liveness probe
## @param gitea.livenessProbe.successThreshold Success threshold for liveness probe
## @param gitea.livenessProbe.failureThreshold Failure threshold for liveness probe
# Modify the liveness probe for your needs or completely disable it by commenting out.
livenessProbe:
enabled: true
tcpSocket:
port: http
initialDelaySeconds: 200
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
## @section ReadinessProbe
#
## @param gitea.readinessProbe.enabled Enable readiness probe
## @param gitea.readinessProbe.tcpSocket.port Port to probe for readiness
## @param gitea.readinessProbe.initialDelaySeconds Initial delay before readiness probe is initiated
## @param gitea.readinessProbe.timeoutSeconds Timeout for readiness probe
## @param gitea.readinessProbe.periodSeconds Period for readiness probe
## @param gitea.readinessProbe.successThreshold Success threshold for readiness probe
## @param gitea.readinessProbe.failureThreshold Failure threshold for readiness probe
# Modify the readiness probe for your needs or completely disable it by commenting out.
readinessProbe:
enabled: true
tcpSocket:
port: http
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
# # Uncomment the startup probe to enable and modify it for your needs.
## @section StartupProbe
#
## @param gitea.startupProbe.enabled Enable startup probe
## @param gitea.startupProbe.tcpSocket.port Port to probe for startup
## @param gitea.startupProbe.initialDelaySeconds Initial delay before startup probe is initiated
## @param gitea.startupProbe.timeoutSeconds Timeout for startup probe
## @param gitea.startupProbe.periodSeconds Period for startup probe
## @param gitea.startupProbe.successThreshold Success threshold for startup probe
## @param gitea.startupProbe.failureThreshold Failure threshold for startup probe
startupProbe:
enabled: false
tcpSocket:
port: http
initialDelaySeconds: 60
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
## @section Redis® Cluster
## @descriptionStart
## Redis® Cluster is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster) if enabled in the values.
## The complete configuration can be found on their website.
## Redis cluster and [Redis](#redis) cannot be enabled at the same time.
## @descriptionEnd
#
## @param redis-cluster.enabled Enable redis cluster
## @param redis-cluster.usePassword Whether to use password authentication
## @param redis-cluster.cluster.nodes Number of redis cluster master nodes
## @param redis-cluster.cluster.replicas Number of redis cluster master node replicas
redis-cluster:
enabled: true
usePassword: false
cluster:
nodes: 3 # default: 6
replicas: 0 # default: 1
## @section Redis®
## @descriptionStart
## Redis® is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/redis) if enabled in the values.
## The complete configuration can be found on their website.
## Redis and [Redis cluster](#redis-cluster) cannot be enabled at the same time.
## @descriptionEnd
#
## @param redis.enabled Enable redis standalone or replicated
## @param redis.architecture Whether to use standalone or replication
## @param redis.global.redis.password Required password
## @param redis.master.count Number of Redis master instances to deploy
redis:
enabled: false
architecture: standalone
global:
redis:
password: changeme
master:
count: 1
## @section PostgreSQL HA
## @descriptionStart
## PostgreSQL HA is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha) if enabled in the values.
## The complete configuration can be found on their website.
## @descriptionEnd
#
## @param postgresql-ha.enabled Enable PostgreSQL HA chart
## @param postgresql-ha.postgresql.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql-ha.global.postgresql.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql-ha.global.postgresql.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql-ha.global.postgresql.password Name for a custom password to create (overrides `auth.password`)
## @param postgresql-ha.postgresql.repmgrPassword Repmgr Password
## @param postgresql-ha.postgresql.postgresPassword postgres Password
## @param postgresql-ha.pgpool.adminPassword pgpool adminPassword
## @param postgresql-ha.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
## @param postgresql-ha.primary.persistence.size PVC Storage Request for PostgreSQL HA volume
postgresql-ha:
global:
postgresql:
database: gitea
password: gitea
username: gitea
enabled: false
postgresql:
repmgrPassword: changeme2
postgresPassword: changeme1
password: changeme4
pgpool:
adminPassword: changeme3
service:
ports:
postgresql: 5432
primary:
persistence:
size: 10Gi
## @section PostgreSQL
## @descriptionStart
## PostgreSQL is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/postgresql) if enabled in the values.
## The complete configuration can be found on their website.
## @descriptionEnd
#
## @param postgresql.enabled Enable PostgreSQL
## @param postgresql.global.postgresql.auth.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql.global.postgresql.auth.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql.global.postgresql.auth.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
## @param postgresql.primary.persistence.size PVC Storage Request for PostgreSQL volume
postgresql:
enabled: true
global:
postgresql:
auth:
password: gitea
database: gitea
username: gitea
service:
ports:
postgresql: 5432
primary:
persistence:
size: 10Gi
# By default, removed or moved settings that still remain in a user defined values.yaml will cause Helm to fail running the install/update.
# Set it to false to skip this basic validation check.
## @section Advanced
## @param checkDeprecation Set it to false to skip this basic validation check.
## @param test.enabled Set it to false to disable test-connection Pod.
## @param test.image.name Image name for the wget container used in the test-connection Pod.
## @param test.image.tag Image tag for the wget container used in the test-connection Pod.
checkDeprecation: true
test:
enabled: true
image:
name: busybox
tag: latest
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []


@@ -0,0 +1,14 @@
[meta]
format_ver = 1
[forgejo]
description = Forgejo Helm
mode = helm
depends_on = traefik:longhorn_storage_class
[#forgejo/helm]
mode = install
name = forgejo-personal
repo = oci://code.forgejo.org/forgejo-helm/forgejo
options_file = forgejo.yml
variable_file = ../../secrets/personal-forgejo.env


@@ -0,0 +1,366 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
## Dependencies configuration parameters
## Redis dependency parameters
# -- Install redis as a depending chart
redisDependencyEnabled: true
# -- Install mariadb as a depending chart
mariadbDependencyEnabled: true
# -- Install postgresql as a depending chart
postgresqlDependencyEnabled: false
global:
imageRegistry: ""
imagePullSecrets: []
# Configure redis dependency chart
redis:
auth:
# -- Enable redis authentication
enabled: true
# -- Configure redis password
password: "P4ssb0lt"
sentinel:
# -- Enable redis sentinel
enabled: true
## MariaDB dependency parameters
# Configure mariadb as a dependency chart
mariadb:
# -- Configure mariadb architecture
architecture: replication
auth:
# -- Configure mariadb auth root password
rootPassword: root
# -- Configure mariadb auth username
username: passbolt
# -- Configure mariadb auth password
password: P4ssb0lt
# -- Configure mariadb auth database
database: passbolt
# -- Configure mariadb auth replicationPassword
replicationPassword: P4ssb0ltReplica
# -- Configure parameters for the primary instance.
primary:
# -- Configure persistence options.
persistence:
# -- Enable persistence on MariaDB primary replicas using a `PersistentVolumeClaim`. If false, use emptyDir
enabled: true
# -- Name of an existing `PersistentVolumeClaim` for MariaDB primary replicas. When it's set the rest of persistence parameters are ignored.
existingClaim: ""
# -- Subdirectory of the volume to mount at
subPath: ""
# -- Primary persistent volume storage Class
storageClass: "longhorn"
# -- Labels for the PVC
labels: {}
# -- Primary persistent volume claim annotations
annotations: {}
# -- Primary persistent volume access Modes
accessModes:
- ReadWriteOnce
# -- Primary persistent volume size
size: 8Gi
# -- Selector to match an existing Persistent Volume
selector: {}
# -- Configure parameters for the secondary instance.
secondary:
# -- Configure persistence options.
persistence:
# -- Enable persistence on MariaDB secondary replicas using a `PersistentVolumeClaim`. If false, use emptyDir
enabled: true
# -- Subdirectory of the volume to mount at
subPath: ""
# -- Secondary persistent volume storage Class
storageClass: "longhorn"
# -- Labels for the PVC
labels: {}
# -- Secondary persistent volume claim annotations
annotations: {}
# -- Secondary persistent volume access Modes
accessModes:
- ReadWriteOnce
# -- Secondary persistent volume size
size: 8Gi
# -- Selector to match an existing Persistent Volume
selector: {}
## Passbolt configuration
## Passbolt container and sidecar parameters
app:
# -- Configure passbolt deployment init container that waits for the database
databaseInitContainer:
# -- Toggle the passbolt deployment init container that waits for the database
enabled: true
#initImage:
# # -- Configure passbolt deployment init container image client for database
# client: mariadb
# registry: ""
# # -- Configure passbolt deployment init image repository
# repository: mariadb
# # -- Configure passbolt deployment init image pullPolicy
# pullPolicy: IfNotPresent
# # -- Overrides the image tag whose default is the chart appVersion.
# tag: latest
image:
# -- Configure passbolt deployment image registry and repository
registry: ""
repository: passbolt/passbolt
# -- Configure pasbolt deployment image pullPolicy
pullPolicy: IfNotPresent
# Allowed options: mariadb, mysql or postgresql
database:
kind: mariadb
# -- Configure ssl on mariadb/mysql clients
# -- In case this is enabled, you will be responsible for creating and mounting the certificates and
# -- additional configutions on both the client and the server.
# ssl: off
cache:
# Use CACHE_CAKE_DEFAULT_* variables to configure the connection to redis instance
# on the passboltEnv configuration section
redis:
# -- By enabling redis the chart will mount a configuration file at /etc/passbolt/app.php
# that instructs passbolt to store sessions on redis and to use it as a general cache.
enabled: true
sentinelProxy:
# -- Inject a haproxy sidecar container configured as a proxy to redis sentinel
# Make sure that CACHE_CAKE_DEFAULT_SERVER is set to '127.0.0.1' to use the proxy
enabled: true
# -- Configure redis sentinel proxy image
image:
registry: ""
# -- Configure redis sentinel image repository
repository: haproxy
# -- Configure redis sentinel image tag
tag: "latest"
# -- Configure redis sentinel container resources
resources: {}
# -- Configure the passbolt deployment resources
extraPodLabels: {}
resources: {}
tls:
# -- If autogenerate is true, the chart will generate a secret with a certificate for APP_FULL_BASE_URL hostname
# -- if autogenerate is false, existingSecret should be filled with an existing tls kind secret name
# @ignored
autogenerate: true
#existingSecret: ""
# -- Enable email cron
cronJobEmail:
enabled: true
schedule: "* * * * *"
extraPodLabels: {}
## Passbolt environment parameters
# -- Pro subscription key in base64 only if you are using pro version
# subscriptionKey:
# -- Configure passbolt subscription key path
# subscription_keyPath: /etc/passbolt/subscription_key.txt
# -- Configure passbolt gpg directory
gpgPath: /etc/passbolt/gpg
# -- Gpg server private key in base64
gpgServerKeyPrivate: ""
# -- Gpg server public key in base64
gpgServerKeyPublic: ""
# -- Name of the existing secret for the GPG server keypair. The secret must contain the `serverkey.asc` and `serverkey_private.asc` keys.
gpgExistingSecret: ""
# -- Name of the existing secret for the JWT server keypair. The secret must contain the `jwt.key` and `jwt.pem` keys.
jwtExistingSecret: ""
# -- Configure passbolt jwt directory
jwtPath: /etc/passbolt/jwt
# -- JWT server private key in base64
jwtServerPrivate: ""
# -- JWT server public key in base64
jwtServerPublic: ""
# -- Forces overwrite JWT keys
jwtCreateKeysForced: false
jobCreateJwtKeys:
extraPodLabels: {}
jobCreateGpgKeys:
extraPodLabels: {}
passboltEnv:
plain:
# -- Configure passbolt privacy url
PASSBOLT_LEGAL_PRIVACYPOLICYURL: https://www.passbolt.com/privacy
# -- Configure passbolt to force ssl
PASSBOLT_SSL_FORCE: false
# -- Toggle passbolt public registration
PASSBOLT_REGISTRATION_PUBLIC: false
# -- Configure passbolt cake cache server
CACHE_CAKE_DEFAULT_SERVER: 127.0.0.1
# -- Configure passbolt default email service port
EMAIL_TRANSPORT_DEFAULT_PORT: 587
# -- Toggle passbolt debug mode
DEBUG: false
# -- Toggle passbolt selenium mode
PASSBOLT_SELENIUM_ACTIVE: false
# -- Configure passbolt license path
PASSBOLT_PLUGINS_LICENSE_LICENSE: /etc/passbolt/subscription_key.txt
# -- Configure passbolt jwt private key path
PASSBOLT_JWT_SERVER_KEY: /var/www/passbolt/config/jwt/jwt.key
# -- Configure passbolt jwt public key path
PASSBOLT_JWT_SERVER_PEM: /var/www/passbolt/config/jwt/jwt.pem
# -- Toggle passbolt jwt authentication
PASSBOLT_PLUGINS_JWT_AUTHENTICATION_ENABLED: true
# -- Download Command for kubectl
KUBECTL_DOWNLOAD_CMD: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
secret:
# -- Configure passbolt cake cache password
CACHE_CAKE_DEFAULT_PASSWORD: P4ssb0lt
# -- Configure passbolt default database password
DATASOURCES_DEFAULT_PASSWORD: P4ssb0lt
# -- Configure passbolt default database username
DATASOURCES_DEFAULT_USERNAME: passbolt
# -- Configure passbolt default database
DATASOURCES_DEFAULT_DATABASE: passbolt
# -- Configure passbolt server gpg key fingerprint
# PASSBOLT_GPG_SERVER_KEY_FINGERPRINT:
# -- Configure passbolt security salt.
# SECURITY_SALT:
# -- Environment variables to add to the passbolt pods
extraEnv: []
# -- Environment variables from secrets or configmaps to add to the passbolt pods
extraEnvFrom:
[]
# - secretRef:
# name: passbolt-secret
## Passbolt deployment parameters
# -- If autoscaling is disabled this will define the number of pods to run
replicaCount: 2
# Configure autoscaling on passbolt deployment
autoscaling:
# -- Enable autoscaling on passbolt deployment
enabled: false
# -- Configure autoscaling minimum replicas
minReplicas: 1
# -- Configure autoscaling maximum replicas
maxReplicas: 100
# -- Configure autoscaling target CPU utilization percentage
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
# -- Enable role based access control
rbacEnabled: true
# -- Configure passbolt container livenessProbe
livenessProbe:
# @ignore
httpGet:
port: https
scheme: HTTPS
path: /healthcheck/status.json
httpHeaders:
- name: Host
value: passbolt.hofers.cloud
initialDelaySeconds: 20
periodSeconds: 10
# -- Configure passbolt container readinessProbe
readinessProbe:
# @ignore
httpGet:
port: https
scheme: HTTPS
httpHeaders:
- name: Host
value: passbolt.hofers.cloud
path: /healthcheck/status.json
initialDelaySeconds: 5
periodSeconds: 10
# Configure network policies to allow ingress access to passbolt pods.
# networkPolicy defines which labels are allowed to reach passbolt
# and from which namespaces.
networkPolicy:
# -- Enable network policies to allow ingress access to passbolt pods
enabled: false
# -- Configure network policies label for ingress deployment
label: app.kubernetes.io/name
# -- Configure network policies podLabel for podSelector
podLabel: ingress-nginx
# -- Configure network policies namespaceLabel for namespaceSelector
namespaceLabel: ingress-nginx
# -- Configure image pull secrets
imagePullSecrets: []
# -- Value to override the chart name on default
nameOverride: ""
# -- Value to override the whole fullName
fullnameOverride: ""
serviceAccount:
# -- Specifies whether a service account should be created
create: true
# -- Annotations to add to the service account
annotations: {}
# -- Map of annotation for passbolt server pod
podAnnotations: {}
# -- Security Context configuration for passbolt server pod
podSecurityContext:
{}
# fsGroup: 2000
service:
# -- Configure passbolt service type
type: ClusterIP
# -- Annotations to add to the service
annotations: {}
# -- Configure the service ports
ports:
# -- Configure the HTTPS port
https:
# -- Configure passbolt HTTPS service port
port: 443
# -- Configure passbolt HTTPS service targetPort
targetPort: 443
# -- Configure passbolt HTTPS service port name
name: https
http:
# -- Configure passbolt HTTP service port
port: 80
# -- Configure passbolt HTTP service targetPort
targetPort: 80
# -- Configure passbolt HTTP service port name
name: http
ingress:
# -- Enable passbolt ingress
enabled: true
# -- Configure passbolt ingress annotations
annotations: {}
# -- Configure passbolt ingress hosts
hosts:
# @ignored
- host: passbolt.hofers.cloud
paths:
- path: /
port: http
pathType: ImplementationSpecific
# -- Configure passbolt deployment nodeSelector
nodeSelector: {}
# -- Configure passbolt deployment tolerations
tolerations: []
# -- Configure passbolt deployment affinity
affinity: {}
# -- Add additional volumes, e.g. for overwriting config files
extraVolumes: []
# -- Add additional volume mounts, e.g. for overwriting config files
extraVolumeMounts: []


@@ -0,0 +1,24 @@
[meta]
format_ver = 1
[passbolt_repo]
description = Passbolt Helm Repository
mode = helm
depends_on = traefik:longhorn_storage_class
[#passbolt_repo/helm]
mode = add_repo
name = passbolt
repo = https://download.passbolt.com/charts/passbolt
[passbolt]
description = Passbolt Password Manager
mode = helm
depends_on = passbolt_repo
[#passbolt/helm]
mode = install
name = mypassbolt
repo = passbolt/passbolt
options_file = passbolt.yml
variable_file = ../../secrets/passbolt.env
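The `depends_on` fields above chain deployment units with `:` as the separator (e.g. `traefik:longhorn_storage_class`). How the installer actually parses these files is not shown in this diff, but as a sketch, Python's stdlib `configparser` can read the format once the default `#` comment prefix is disabled, since section names like `[#passbolt_repo/helm]` would otherwise be swallowed as comments:

```python
import configparser

# A fragment in the same shape as the project.ini files above.
SAMPLE = """\
[meta]
format_ver = 1

[passbolt_repo]
description = Passbolt Helm Repository
mode = helm
depends_on = traefik:longhorn_storage_class

[#passbolt_repo/helm]
mode = add_repo
name = passbolt
repo = https://download.passbolt.com/charts/passbolt
"""

# Disable the default "#" comment prefix so [#.../helm] section
# headers survive parsing; ";" comments remain available.
parser = configparser.ConfigParser(comment_prefixes=(";",))
parser.read_string(SAMPLE)

# depends_on is ":"-separated, so one field can name several units.
deps = parser["passbolt_repo"]["depends_on"].split(":")
print(deps)  # ['traefik', 'longhorn_storage_class']
print(parser["#passbolt_repo/helm"]["mode"])  # add_repo
```

This is an assumption about the format's semantics, not the installer's real code; the `SAMPLE` string is copied from the passbolt project file above.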


@@ -0,0 +1,15 @@
persistentVolumeClaim:
  enabled: true
  storageClass: longhorn

serviceWeb:
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
    metallb.universe.tf/loadBalancerIPs: 192.168.2.20
  type: LoadBalancer

serviceDns:
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
    metallb.universe.tf/loadBalancerIPs: 192.168.2.20
  type: LoadBalancer
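The web and DNS Services above pin the same address, which MetalLB only permits when both carry an identical `metallb.universe.tf/allow-shared-ip` sharing key. A minimal sanity check, with plain dicts standing in for the rendered Service annotations (the actual manifest rendering is not shown in this diff):

```python
# Annotations as rendered from the pihole.yml values above.
web_annotations = {
    "metallb.universe.tf/allow-shared-ip": "pihole-svc",
    "metallb.universe.tf/loadBalancerIPs": "192.168.2.20",
}
dns_annotations = {
    "metallb.universe.tf/allow-shared-ip": "pihole-svc",
    "metallb.universe.tf/loadBalancerIPs": "192.168.2.20",
}

# MetalLB colocates two Services on one IP only when their sharing
# keys match; a mismatch leaves one Service stuck pending an address.
shared = (web_annotations["metallb.universe.tf/allow-shared-ip"]
          == dns_annotations["metallb.universe.tf/allow-shared-ip"])
print(shared)  # True
```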


@@ -0,0 +1,23 @@
[meta]
format_ver = 1
[pihole_repo]
description = Pihole Helm Repository
mode = helm
depends_on = traefik:longhorn_storage_class
[#pihole_repo/helm]
mode = add_repo
name = mojo2600pihole
repo = https://mojo2600.github.io/pihole-kubernetes/
[pihole]
mode = helm
depends_on = pihole_repo
[#pihole/helm]
mode = install
name = pihole
repo = mojo2600pihole/pihole
options_file = pihole.yml
variable_file = ../../secrets/pihole.env


@@ -0,0 +1,24 @@
[meta]
format_ver = 1
[portainer_repo]
description = Portainer Helm Repository
mode = helm
depends_on = traefik:longhorn_storage_class
[#portainer_repo/helm]
mode = add_repo
name = portainer
repo = https://portainer.github.io/k8s/
[portainer]
mode = helm
depends_on = portainer_repo
[#portainer/helm]
mode = install
name = portainer
repo = portainer/portainer
variable_file = ../../secrets/portainer.env
namespace = portainer
create_namespace = yes


@@ -0,0 +1,26 @@
[meta]
format_ver = 1
[portainer_project]
mode = include
path = ./portainer/project.ini
[passbolt_project]
mode = include
path = ./passbolt/project.ini
[forgejo_personal_project]
mode = include
path = ./forgejo/project.ini
[pihole_project]
mode = include
path = ./pihole/project.ini
[tailscale_project]
mode = include
path = ./tailscale/project.ini
[woodpecker_project]
mode = include
path = ./woodpecker-ci/project.ini


@@ -0,0 +1,12 @@
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: ts-kube
spec:
  hostname: ts-kube
  subnetRouter:
    advertiseRoutes:
      - "10.0.0.0/24"
      - "192.168.0.0/24"
      - "192.168.2.0/24"
  exitNode: true
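The Connector above advertises three LAN subnets to the tailnet, including 192.168.2.0/24, where the MetalLB LoadBalancer addresses live. A quick check with the stdlib `ipaddress` module that a given address is covered by one of the advertised routes:

```python
from ipaddress import ip_address, ip_network

# Routes advertised by the ts-kube Connector above.
advertised = [ip_network(r) for r in
              ("10.0.0.0/24", "192.168.0.0/24", "192.168.2.0/24")]

def reachable_over_tailscale(addr: str) -> bool:
    """True if addr falls inside one of the advertised subnets."""
    ip = ip_address(addr)
    return any(ip in net for net in advertised)

# The Pi-hole LoadBalancer IP sits in 192.168.2.0/24, so tailnet
# clients can reach it through this subnet router.
print(reachable_over_tailscale("192.168.2.20"))  # True
print(reachable_over_tailscale("203.0.113.5"))   # False
```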


@@ -0,0 +1,30 @@
[meta]
format_ver = 1
[tailscale_repo]
description = Tailscale Helm Repository
mode = helm
depends_on = traefik:longhorn_storage_class
[#tailscale_repo/helm]
mode = add_repo
name = tailscale
repo = https://pkgs.tailscale.com/helmcharts
[tailscale]
mode = helm
depends_on = tailscale_repo
[#tailscale/helm]
mode = install
name = tailscale
repo = tailscale/tailscale-operator
variable_file = ../../secrets/tailscale.env
[tailscale_connectors]
mode = k3s
depends_on = tailscale
[#tailscale_connectors/k3s]
mode = install
yml_path = ./connectors.yml


@@ -0,0 +1,15 @@
[meta]
format_ver = 1
[woodpecker_codeberg]
description = Woodpecker CI
mode = helm
depends_on = traefik:longhorn_storage_class
[#woodpecker_codeberg/helm]
mode = install
name = woodpecker
repo = oci://ghcr.io/woodpecker-ci/helm/woodpecker
variable_file = ../../secrets/woodpecker/codeberg.env
create_namespace = yes
namespace = woodpecker-codeberg


@@ -0,0 +1,15 @@
[meta]
format_ver = 1
[woodpecker_personal_git]
description = Woodpecker CI
mode = helm
depends_on = traefik:longhorn_storage_class
[#woodpecker_personal_git/helm]
mode = install
name = woodpecker-personal-git
repo = oci://ghcr.io/woodpecker-ci/helm/woodpecker
variable_file = ../../secrets/woodpecker/personal-git.env
create_namespace = yes
namespace = woodpecker-personal-git


@@ -0,0 +1,10 @@
[meta]
format_ver = 1
[woodpecker_personal_project]
mode = include
path = ./personal-git.ini
[woodpecker_codeberg_project]
mode = include
path = ./codeberg.ini


@@ -9,7 +9,7 @@ import socket
 import json
 import sys
 
-def json_to_bytes(str: str) -> bytearray:
+def json_to_bytes(str: dict[str, bool | str]) -> bytearray:
     return bytearray(json.dumps(str), "utf-8")
 
 # Who needs Flask, anyways?
@@ -17,11 +17,11 @@ class HTTPHandler(http.server.BaseHTTPRequestHandler):
     def send_headers(self):
         self.send_header("Content-Type", "application/json")
         self.end_headers()
 
     def do_POST(self):
         if self.path == "/api/installer_update_webhook":
             content_length = 0
 
             try:
                 content_length = int(self.headers.get('Content-Length'))
             except ValueError:
@@ -79,7 +79,7 @@ class HTTPHandler(http.server.BaseHTTPRequestHandler):
             output_coloring = "light_yellow"
         elif res == "FAIL":
             output_coloring = "light_red"
 
         result_text_component = f" {resp_decoded_data["result"]} " if "result" in resp_decoded_data else " "
         final_output_text = f"{str_formatted_time} {resp_decoded_data["event_type"].upper()} {resp_decoded_data["level"]}:{result_text_component}{resp_decoded_data["name"]} ({resp_decoded_data["description"]})"
@@ -103,7 +103,7 @@ class HTTPHandler(http.server.BaseHTTPRequestHandler):
             "success": False,
             "error": "Unknown route"
         }))
 
     def do_GET(self):
         resolved_path = str(Path(self.path).resolve())
         file_path = getcwd() + resolved_path
@@ -124,7 +124,7 @@ class HTTPHandler(http.server.BaseHTTPRequestHandler):
             }))
         except () as exception:
             exception.print_exception()
 
     def log_message(self, format: str, *args):
         status_code = 0
@@ -136,7 +136,7 @@ class HTTPHandler(http.server.BaseHTTPRequestHandler):
         # Disable logging for the /api/ endpoint for POST requests unless the error code > 400
         if len(args) >= 1 and args[0].startswith("POST") and self.path.startswith("/api/") and status_code < 400:
             return
 
         super().log_message(format, *args)
 
 port = int(sys.argv[1]) if "SERVE_DEVELOP" not in environ else 10240
@@ -144,4 +144,4 @@ server = socketserver.TCPServer(("", port), HTTPHandler)
 server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
 
 print("[x] started HTTP server.")
 server.serve_forever()


@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 
 if [ ! -f "config/.env" ]; then
-  echo "# NOTE: The UUID should be automatically generated, when running nix-shell. However, if it isn't, then" > .env
-  echo "# run uuidgen and change the below value." >> .env
+  echo "# NOTE: The UUID should be automatically generated, when running nix-shell. However, if it isn't, then" > config/.env
+  echo "# run uuidgen and change the below value." >> config/.env
   cat config/.env.example >> config/.env
 
   # Apple moment