137 Commits

Author SHA1 Message Date
6cfd02bc26 rompr new 2025-12-08 20:00:39 +01:00
0033a5a231 bogus commit for rompr 2024-10-29 10:01:12 +01:00
70ccdf43ef Merge branch 'main' of ssh://gitea.service.nr5:2222/chaos/docker-images
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build is failing
2024-10-29 09:52:10 +01:00
401acdc54f new rompr version 2024-10-29 09:47:26 +01:00
c6a8464bb2 why _?111 git status kubectl apply -n kube-system -f descheduler-cronjob.yaml
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2024-09-13 20:09:41 +02:00
d1247a3b02 listing
Some checks failed
continuous-integration/drone/push Build is failing
2024-09-13 20:07:39 +02:00
83e3907708 only apps
Some checks failed
continuous-integration/drone/push Build is failing
2024-09-13 09:58:44 +02:00
630f321651 only apps 2024-09-13 09:57:03 +02:00
65318147c7 with git/testing
Some checks failed
continuous-integration/drone/push Build is running
continuous-integration/drone Build is failing
2024-04-21 19:23:45 +02:00
5b5c21b67b does this work like this? Right? - man-db can stay
Some checks failed
continuous-integration/drone/push Build is failing
2024-04-21 17:44:07 +02:00
3dac0b92f1 does this work like this? Right? - man-db can stay
Some checks failed
continuous-integration/drone/push Build is failing
2024-04-21 17:38:45 +02:00
35ec70792c does this work like this? Right?
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-21 17:36:58 +02:00
4ccfd0d648 building testing with git
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-21 17:27:46 +02:00
ccbe462a76 building testing with git
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-04-21 17:26:41 +02:00
98234e569a WHOA Sun 21 Apr 17:23:21 CEST 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:23:21 +02:00
8c96788392 Sun 21 Apr 17:17:50 CEST 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:17:50 +02:00
60417861fc more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:13:27 +02:00
dafa848d80 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:09:07 +02:00
4579621b03 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:07:27 +02:00
542fc02720 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:05:43 +02:00
4b2f5d8c9f merged
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:02:48 +02:00
7da16def78 .gitignore 2024-04-21 17:02:29 +02:00
bcd8242061 what is happening here, for all hails sake
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:29:41 +02:00
6639d8d0c2 what's happening
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:16:12 +02:00
3ced13f704 what's happening
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:15:00 +02:00
d4f052787f cleanup
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:06:32 +02:00
d55511e84e bogus change
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:04:49 +02:00
11c3f3174d bogus change
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-04-21 15:51:03 +02:00
a770e55f47 loops and all in one pipeline 2024-04-21 12:50:23 +02:00
ac02ddcc00 Merge branch 'main' of ssh://gitea.service.nr5:2222/chaos/docker-images 2024-04-21 11:36:08 +02:00
0b93d83014 git log step
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-15 17:42:09 +02:00
0da2ea2477 removal clearer typed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 19:27:09 +02:00
5751f2c82e removing man-db in first run
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 18:58:03 +02:00
9d83926159 git in debian-stable image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 18:47:43 +02:00
dd52955602 one character less optimization 2024-04-08 18:02:10 +02:00
b451999d77 dry_run and cache_from its own image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 16:53:49 +02:00
1d84d11f37 new ROMPR Version
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 16:29:04 +02:00
3067ebd5de new rompr version 2024-03-21 21:44:59 +01:00
fb1a6e307f all images again
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build was killed
2024-02-27 18:18:39 +01:00
82d001e962 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 18:10:35 +01:00
cde42fcd56 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build is failing
2024-02-27 17:32:20 +01:00
801e76f0d3 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 17:25:55 +01:00
323f9eaff0 only openwrt image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 17:21:33 +01:00
09c98d766a only openwrt image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-27 17:21:00 +01:00
2ebc1ec635 project rename
Some checks reported errors
continuous-integration/drone Build was killed
2024-02-26 17:02:22 +01:00
67787c4fe0 enabling openwrt image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-26 16:57:24 +01:00
fef81d7c28 using our own image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:53:18 +01:00
7fbaf62415 using our own image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:51:04 +01:00
7a70000833 using our own image 2024-02-26 16:50:40 +01:00
5058b10769 openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:48:16 +01:00
3b7ac02aed openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:46:46 +01:00
fc591f4dac openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:45:25 +01:00
36c7b2d0b5 openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:43:54 +01:00
cf8ac80bc5 all packs
Some checks are pending
continuous-integration/drone/push Build is running
continuous-integration/drone Build is passing
2024-01-17 18:28:19 +01:00
5c2bded912 ENV var fix 2024-01-17 18:07:20 +01:00
55ace2881c using fpm-socket
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 18:04:49 +01:00
75edd26772 php-fpm proper version
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:48:02 +01:00
21fab1e23f php-fpm proper version
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:38:06 +01:00
45ffac4318 fewer layers in rompr image 2024-01-17 17:32:45 +01:00
e702963a01 all packs again 2024-01-17 17:26:20 +01:00
ca165f5c5e how to run 2024-01-17 17:24:59 +01:00
44ae607709 removed obsolete kubernetes stuff 2024-01-17 17:13:28 +01:00
e0824bf3c1 rompr version 2.x
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:07:42 +01:00
95e8c6f363 all packages again
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 14:09:07 +01:00
123eeddf49 using debian again, we need chmod
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 13:45:08 +01:00
5a96d89fc2 experimental features and copy chmod
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-17 12:46:58 +01:00
296ab18421 chmod before copy
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 12:13:54 +01:00
3477d59e07 from scratch and not debian
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-17 11:26:21 +01:00
0b3cbc584f chmod
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-17 11:04:53 +01:00
0075dac22d chmod?
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-16 14:38:00 +01:00
9ce1a6b610 all of them again
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-16 13:50:16 +01:00
e811e80f25 here we go
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-16 13:18:26 +01:00
397dd88ebb The right image might help
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-16 12:57:20 +01:00
da88bfdfc0 The right image might help
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-16 12:55:29 +01:00
7c94d1d7a7 all images again 2024-01-10 16:11:24 +01:00
598253193b downloading mods
Some checks reported errors
continuous-integration/drone/push Build encountered an error
continuous-integration/drone Build is failing
2024-01-10 11:34:46 +01:00
ec3e999375 lesser images
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-21 11:59:04 +01:00
b423324a75 packages as steps
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build was killed
2023-12-19 13:55:34 +01:00
a2143bfc0a as steps 2
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:46:05 +01:00
2e76ec3da9 as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:45:17 +01:00
01208f9413 as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:44:30 +01:00
c72f7b7a1c as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:43:46 +01:00
67edba2276 mosquitto prometheus exporter image build
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-19 12:46:37 +01:00
315d8bd632 apps (some of them), typo
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:52:13 +01:00
13898378cd apps (some of them)
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 18:51:23 +01:00
1815e60a37 registry typo fix
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 18:37:54 +01:00
72aeb85a2e looping again
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 18:24:39 +01:00
a6d2e03707 looping again
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 18:23:11 +01:00
da199f3fe0 new sources format, who knew?
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:12:50 +01:00
c686d6fe91 sources.list gone?
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-15 18:06:59 +01:00
86855f541a context for drone and cleanup/update
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:04:23 +01:00
3debf1dabc platform part, not an array, and it's plugins/docker
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-15 17:58:30 +01:00
af467c339e platform part, not an array
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:55:27 +01:00
47c4908ffe platform part
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:54:12 +01:00
4cb9b0c3b5 platform part
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:53:35 +01:00
f316936acc no loops for now2
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 17:40:43 +01:00
f353210a42 no loops for now
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:40:20 +01:00
eca7f86f4f no loops for now
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:39:47 +01:00
64196d7209 what
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:33:00 +01:00
065ff0a85d drone as jsonnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:55:09 +01:00
2604d026e4 drone as jsonnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:54:06 +01:00
dfd2866c06 drone as jsonnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:53:30 +01:00
5e271a7593 drone as jsonnet
Some checks reported errors
continuous-integration/drone Build encountered an error
2023-12-13 18:51:45 +01:00
77a646866d more obsolete stuff cleanup 2023-12-13 18:11:54 +01:00
e60be3ab70 removing kubernetes stuff 2023-12-13 18:09:45 +01:00
757ab5a092 removed submodules 2023-12-13 18:03:49 +01:00
2e3bb35f86 coredns update
47cbd88587 coredns / cluster upgrade 2023-01-16 18:57:58 +01:00
dd74762778 tekton PVC? required? 2023-01-12 20:54:31 +01:00
07d7f45e64 other things
536c0c4ddc flannel 0.20 upgrade 2023-01-12 20:53:23 +01:00
fcb2e69615 upgrade galore from 1.23 to 1.26, and the cluster is still at 1.25? See: Readme.md
e2e032ac94 another nfs-client provisioner
4bbf79569c another nfs-client provisioner
273fb0e252 more updates 2022-12-08 17:09:38 +01:00
62f5788742 changing output dir 2022-12-08 16:47:19 +01:00
9b2d2a9d95 php-fpm 2022-12-08 16:43:36 +01:00
b5ff289f66 stuff 2022-12-08 16:39:52 +01:00
7cb8d572e7 stuff 2022-12-08 14:03:01 +01:00
14aceae467 new version and create dirs on run 2022-12-08 13:57:10 +01:00
604d065252 new version and create dirs on run 2022-12-08 13:09:24 +01:00
b50d6de8f7 cleanup 2022-11-18 10:26:13 +01:00
79c4e5e0c7 tekton stuff and install 2022-11-18 10:24:39 +01:00
d7241c7563 removed obsolete submods 2022-11-18 10:21:37 +01:00
8fbf07efdf removed descheduler, helm is on its way 2022-10-25 14:03:10 +02:00
beb1bfe0da nginx ingress is installed via helm now 2022-10-25 14:01:34 +02:00
8b62746bcc cleanup 2022-10-12 13:20:42 +02:00
94b39a804b merged 2022-09-19 16:58:14 +02:00
43d17581b3 gitea and apt-cacher 2022-09-19 16:56:40 +02:00
180d28fe80 Merge branch 'master' of git.lan:chaos/kubernetes 2022-09-19 16:54:53 +02:00
30ba290918 don't know why this shit doesn't run anymore 2022-09-10 13:32:34 +02:00
b111463cf5 Merge branch 'master' of git.lan:chaos/kubernetes 2022-08-24 19:17:10 +02:00
c2f6c546eb gitea uses ebin02 2022-08-24 19:16:24 +02:00
748b94f069 local changes 2022-07-30 12:54:52 +02:00
59c019727d rompr version 1.61 2022-07-30 12:51:17 +02:00
17f8b2f5cb mosquitto and prometheus 2022-07-30 12:43:56 +02:00
105e051d64 grav and tekton 2022-07-30 12:33:26 +02:00
124 changed files with 251 additions and 13115 deletions

.drone.jsonnet (new file, +77)

@@ -0,0 +1,77 @@
#local dirs = ['_CI-CD', 'apps'];
local dirs = ['apps'];
local packages = ['debian-stable', 'debian-stable-build-essential', 'debian-stable-openwrt',
                  'debian-golang', 'debian-stable-php-fpm', 'debian-testing'];
#local packages = ['debian-stable-openwrt'];
local apps = ['rompr', 'apt-cacher-ng', 'curl', 'mosquitto', 'mosquitto-prometheus-exporter'];
#local apps = ['rompr'];
local build(dir, package) = {
  name: '%(package)s' % { package: package },
  image: 'plugins/docker',
  settings: {
    context: '%(dir)s/%(package)s' % { dir: dir, package: package },
    dockerfile: '%(dir)s/%(package)s/Dockerfile' % { dir: dir, package: package },
    registry: 'http://cr.wks',
    insecure: 'true',
    purge: 'false',
    experimental: 'true',
    tags: ['latest'],
    repo: 'cr.wks/%(package)s' % { package: package },
    cache_from: 'cr.wks/%(package)s:latest' % { package: package },
  },
};
[
  {
    kind: 'pipeline',
    type: 'docker',
    name: 'Build Changes',
    platform: {
      os: 'linux',
      arch: 'arm64',
    },
    steps: [
      {
        name: 'git log',
        image: 'cr.wks/debian-testing',
        commands: [ 'bin/find_changes.sh', 'ls -la' ]
      },
      # [
      #   build('_CI-CD', app)
      #   for app in packages
      # ],
      # [
      #   build('apps', app)
      #   for app in apps
      # ]
    ],
  },
  #{
  #  kind: 'pipeline',
  #  type: 'docker',
  #  name: '_CI-CD',
  #  platform: {
  #    os: 'linux',
  #    arch: 'arm64',
  #  },
  #  steps: [
  #    build('_CI-CD', pkg)
  #    for pkg in packages
  #  ],
  #},
  {
    kind: 'pipeline',
    type: 'docker',
    name: 'apps',
    platform: {
      os: 'linux',
      arch: 'arm64',
    },
    steps: [
      build('apps', app)
      for app in apps
    ],
  },
]
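For orientation: Drone renders this Jsonnet into YAML pipelines (for example with `drone jsonnet --stream` from the Drone CLI). As a sketch, under those rendering semantics the call build('apps', 'rompr') in the 'apps' pipeline expands to a step roughly like:

    # sketch: the 'rompr' step as rendered from the Jsonnet above
    name: rompr
    image: plugins/docker
    settings:
      context: apps/rompr
      dockerfile: apps/rompr/Dockerfile
      registry: http://cr.wks
      insecure: 'true'
      purge: 'false'
      experimental: 'true'
      tags:
        - latest
      repo: cr.wks/rompr
      cache_from: cr.wks/rompr:latest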

.gitignore (vendored, 2 changed lines)

@@ -1 +1 @@
-csi-s3/storage-csi-s3/cmd/s3driver/s3driver
+*.swp

.gitmodules (vendored, deleted, -51)

@@ -1,51 +0,0 @@
[submodule "kube-prometheus"]
path = kube-prometheus
url = https://github.com/coreos/kube-prometheus.git
[submodule "cluster-monitoring"]
path = cluster-monitoring
url = https://github.com/carlosedp/cluster-monitoring.git
[submodule "gluster-kubernetes"]
path = gluster-kubernetes
url = https://github.com/jayflory/gluster-kubernetes.git
[submodule "kubernetes-ingress"]
path = kubernetes-ingress
url = https://github.com/haproxytech/kubernetes-ingress.git
[submodule "pihole-kubernetes"]
path = pihole-kubernetes
url = https://github.com/MoJo2600/pihole-kubernetes.git
[submodule "pihole-helm"]
path = pihole-helm
url = https://github.com/ChrisPhillips-cminion/pihole-helm.git
[submodule "helm"]
path = helm
url = https://github.com/helm/helm.git
[submodule "docker-apt-cacher-ng"]
path = docker-apt-cacher-ng
url = https://github.com/sameersbn/docker-apt-cacher-ng.git
[submodule "mosquitto/charts"]
path = mosquitto/charts
url = https://github.com/smizy/charts.git
[submodule "csi-s3/storage-csi-s3"]
path = csi-s3/storage-csi-s3
url = https://github.com/ctrox/csi-s3.git
[submodule "csi-s3/external-attacher"]
path = csi-s3/external-attacher
url = https://github.com/kubernetes-csi/external-attacher.git
[submodule "csi-s3/external-provisioner"]
path = csi-s3/external-provisioner
url = https://github.com/kubernetes-csi/external-provisioner.git
[submodule "csi-s3/node-driver-registrar"]
path = csi-s3/node-driver-registrar
url = https://github.com/kubernetes-csi/node-driver-registrar.git
[submodule "apps/tekton/dashboard"]
path = apps/tekton/dashboard
url = https://github.com/tektoncd/dashboard.git
[submodule "_sys/haproxy-ingress"]
path = _sys/haproxy-ingress
url = https://github.com/haproxytech/kubernetes-ingress.git
[submodule "nfs-subdir-external-provisioner"]
path = nfs-subdir-external-provisioner
url = https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
[submodule "descheduler"]
path = descheduler
url = https://github.com/kubernetes-sigs/descheduler.git


@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>kubernetes</name>
<name>docker-images</name>
<comment></comment>
<projects>
</projects>


@@ -1,4 +1,4 @@
-FROM cr.lan/debian-stable
+FROM cr.wks/debian-stable
 RUN apt-get update && apt-get install -y \
 golang make git


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-debian-golang
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/debian-golang
    - name: path-to-dockerfile
      value: _CI-CD/debian-golang/Dockerfile
    - name: image-name
      value: cr.lan/debian-stable-golang
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/debian-stable-golang


@@ -1,4 +1,4 @@
-FROM cr.lan/debian-stable
+FROM cr.wks/debian-stable
 RUN apt-get update && apt-get install -y \
 dnsutils procps nmap bash iputils-ping bash \


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-debian-stable-build-essential
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/debian-stable-build-essential
    - name: path-to-dockerfile
      value: _CI-CD/debian-stable-build-essential/Dockerfile
    - name: image-name
      value: cr.lan/debian-stable-build-essential
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/debian-stable-build-essential


@@ -0,0 +1,14 @@
FROM cr.wks/debian-stable-build-essential
RUN apt update -y; \
apt install -y build-essential ccache ecj fastjar file g++ gawk \
gettext git java-propose-classpath libelf-dev libncurses5-dev \
libncursesw5-dev libssl-dev python3 python3-dev unzip wget \
python3-distutils python3-setuptools rsync subversion swig time \
xsltproc zlib1g-dev make distcc distcc-pump nfs-common clang flex bison g++ gawk \
gcc-multilib-mips-linux-gnu git libncurses-dev libssl-dev && \
apt-get remove --purge -y exim* && \
apt-get autoremove --purge -y && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/apt/*


@@ -1,11 +1,11 @@
-FROM cr.lan/debian-stable
+FROM debian:stable AS baseimage
 ENV DEBIAN_FRONTEND noninteractive
 RUN apt-get update && apt-get install -y \
 dnsutils procps nmap bash iputils-ping bash openssl \
 php-fpm php-zip php-sqlite3 php-pgsql php-mysqli php-json php-readline \
-php-xml php-ldap php-imap php-intl php-xmlrpc php-imagick php-gd php-cli php-curl \
-php-bz2 php-mbstring php-memcache php-redis
+php-xml php-intl php-xmlrpc php-imagick php-gd php-cli php-curl \
+php-bz2 php-mbstring
 #cleanup
 RUN apt-get remove -y --purge man-db ;\
@@ -14,6 +14,8 @@ RUN apt-get remove -y --purge man-db ;\
 rm -rf /var/lib/apt/lists/* ;\
 rm -rf /var/cache/apt/*
-ADD etc_php-fpm/www.conf /etc/php/7.4/fpm/pool.d
+FROM baseimage as final
+ADD etc_php-fpm/www.conf /etc/php/8.4/fpm/pool.d
 ADD docker-entrypoint.sh /
 ENTRYPOINT ["/docker-entrypoint.sh"]


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-debian-stable-php-fpm
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/debian-stable-php-fpm
    - name: path-to-dockerfile
      value: _CI-CD/debian-stable-php-fpm/Dockerfile
    - name: image-name
      value: cr.lan/debian-stable-php-fpm
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/debian-stable-php-fpm


@@ -1,14 +1,15 @@
 FROM debian:stable-slim
-RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
-sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
-apt-get update && apt-get install -y \
-dnsutils procps nmap bash iputils-ping bash && \
+RUN sed -i 's@deb.debian.org@apt-cache.service.nr5/deb.debian.org@g' /etc/apt/sources.list.d/debian.sources && \
+sed -i 's@security.debian.org@apt-cache.service.nr5/security.debian.org@g' /etc/apt/sources.list.d/debian.sources
+RUN apt-get update && apt-get install -y \
+man-db- \
+dnsutils procps nmap bash iputils-ping bash git
-RUN apt-get remove -y --purge man-db ;\
-apt-get autoremove -y --purge ;\
+RUN apt-get autoremove -y --purge ;\
 apt-get clean -y ;\
 rm -rf /var/lib/apt/lists/* ;\
 rm -rf /var/cache/apt/*
 ADD docker-entrypoint.sh /
-ENTRYPOINT ["/docker-entrypoint.sh"]
+ENTRYPOINT ["/docker-entrypoint.sh"]
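Context for the sed changes above: newer Debian base images ship their APT configuration as a DEB822 file at /etc/apt/sources.list.d/debian.sources instead of /etc/apt/sources.list, which is why the old sed targets stopped matching (the "sources.list gone?" commit in the history). Assuming the stock debian.sources layout, the rewritten mirror stanza would look roughly like this; suite names depend on the base image:

    # sketch: debian.sources after the sed, assuming the stock DEB822 stanza
    Types: deb
    URIs: http://apt-cache.service.nr5/deb.debian.org/debian
    Suites: stable stable-updates
    Components: main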


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-debian-stable
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/debian-stable
    - name: path-to-dockerfile
      value: _CI-CD/debian-stable/Dockerfile
    - name: image-name
      value: cr.lan/debian-stable
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/debian-stable


@@ -1,15 +1,15 @@
 FROM debian:testing-slim
-RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
-sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
-apt-get update && apt-get install -y \
-dnsutils procps nmap bash iputils-ping bash
+RUN sed -i 's@deb.debian.org@apt-cache.service.nr5/deb.debian.org@g' /etc/apt/sources.list.d/debian.sources && \
+sed -i 's@security.debian.org@apt-cache.service.nr5/security.debian.org@g' /etc/apt/sources.list.d/debian.sources
-RUN apt-get remove -y --purge man-db ;\
-apt-get autoremove -y --purge ;\
+RUN apt-get update && apt-get install -y \
+dnsutils procps nmap bash iputils-ping bash git
+RUN apt-get autoremove -y --purge ;\
 apt-get clean -y ;\
 rm -rf /var/lib/apt/lists/* ;\
 rm -rf /var/cache/apt/*
 ADD docker-entrypoint.sh /
-ENTRYPOINT ["/docker-entrypoint.sh"]
+ENTRYPOINT ["/docker-entrypoint.sh"]


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-debian-testing
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/debian-testing
    - name: path-to-dockerfile
      value: _CI-CD/debian-testing/Dockerfile
    - name: image-name
      value: cr.lan/debian-testing
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/debian-testing


@@ -1,4 +1,4 @@
-FROM cr.lan/debian-stable-build-essential
+FROM cr.wks/debian-stable-build-essential
 RUN apt-get update && \
 apt-get install -y \
@@ -6,8 +6,6 @@ RUN apt-get update && \
 dpkg-dev distcc ccache \
 build-essential gcc cpp g++ clang llvm
-RUN apt-get remove -y --purge man-db ;\
-apt-get autoremove -y --purge ;\
 apt-get clean -y ;\


@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: img-distcc
spec:
  pipelineRef:
    name: kaniko-pipeline
  params:
    - name: git-url
      value: http://git-ui.lan/chaos/kubernetes.git
    - name: git-revision
      value: master
    - name: path-to-image-context
      value: _CI-CD/distcc
    - name: path-to-dockerfile
      value: _CI-CD/distcc/Dockerfile
    - name: image-name
      value: cr.lan/distcc
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: tektoncd-workspaces
      subPath: tekton/distcc


@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: git-secret
type: Opaque
data:
  token: Nzk1YTFhMGQxMWQ0MDJiY2FiOGM3MjkyZDk5ODIyMzg2NDNkM2U3OQo=


@@ -1,101 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  workspaces:
    - name: output
      description: The git repo will be cloned onto the volume backing this workspace
  params:
    - name: url
      description: git url to clone
      type: string
      default: http://git-ui.lan/chaos/kubernetes.git
    - name: revision
      description: git revision to checkout (branch, tag, sha, ref…)
      type: string
      default: master
    - name: refspec
      description: (optional) git refspec to fetch before checking out revision
      default: ""
    - name: submodules
      description: defines if the resource should initialize and fetch the submodules
      type: string
      default: "true"
    - name: depth
      description: performs a shallow clone where only the most recent commit(s) will be fetched
      type: string
      default: "1"
    - name: sslVerify
      description: defines if http.sslVerify should be set to true or false in the global git config
      type: string
      default: "true"
    - name: subdirectory
      description: subdirectory inside the "output" workspace to clone the git repo into
      type: string
      default: ""
    - name: deleteExisting
      description: clean out the contents of the repo's destination directory (if it already exists) before trying to clone the repo there
      type: string
      default: "true"
    - name: httpProxy
      description: git HTTP proxy server for non-SSL requests
      type: string
      default: ""
    - name: httpsProxy
      description: git HTTPS proxy server for SSL requests
      type: string
      default: ""
    - name: noProxy
      description: git no proxy - opt out of proxying HTTP/HTTPS requests
      type: string
      default: ""
  results:
    - name: commit
      description: The precise commit SHA that was fetched by this Task
  steps:
    - name: clone
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.30.2
      script: |
        CHECKOUT_DIR="$(workspaces.output.path)/$(params.subdirectory)"
        cleandir() {
          # Delete any existing contents of the repo directory if it exists.
          #
          # We don't just "rm -rf $CHECKOUT_DIR" because $CHECKOUT_DIR might be "/"
          # or the root of a mounted volume.
          if [[ -d "$CHECKOUT_DIR" ]] ; then
            # Delete non-hidden files and directories
            rm -rf "$CHECKOUT_DIR"/*
            # Delete files and directories starting with . but excluding ..
            rm -rf "$CHECKOUT_DIR"/.[!.]*
            # Delete files and directories starting with .. plus any other character
            rm -rf "$CHECKOUT_DIR"/..?*
          fi
        }
        if [[ "$(params.deleteExisting)" == "true" ]] ; then
          cleandir
        fi
        test -z "$(params.httpProxy)" || export HTTP_PROXY=$(params.httpProxy)
        test -z "$(params.httpsProxy)" || export HTTPS_PROXY=$(params.httpsProxy)
        test -z "$(params.noProxy)" || export NO_PROXY=$(params.noProxy)
        /ko-app/git-init \
          -url "$(params.url)" \
          -revision "$(params.revision)" \
          -refspec "$(params.refspec)" \
          -path "$CHECKOUT_DIR" \
          -sslVerify="$(params.sslVerify)" \
          -submodules="$(params.submodules)" \
          -depth "$(params.depth)"
        cd "$CHECKOUT_DIR"
        RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')"
        EXIT_CODE="$?"
        if [ "$EXIT_CODE" != 0 ]
        then
          exit $EXIT_CODE
        fi
        # Make sure we don't add a trailing newline to the result!
        echo -n "$RESULT_SHA" > $(results.commit.path)


@@ -1,43 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: kaniko
spec:
  params:
    - name: git-url
    - name: git-revision
    - name: image-name
    - name: path-to-image-context
    - name: path-to-dockerfile
  workspaces:
    - name: git-source
  tasks:
    - name: fetch-from-git
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
        - name: submodules
          value: false
      workspaces:
        - name: output
          workspace: git-source
    - name: build-image
      taskRef:
        name: kaniko
      params:
        - name: IMAGE
          value: $(params.image-name)
        - name: CONTEXT
          value: $(params.path-to-image-context)
        - name: DOCKERFILE
          value: $(params.path-to-dockerfile)
      workspaces:
        - name: source
          workspace: git-source
# If you want you can add a Task that uses the IMAGE_DIGEST from the kaniko task
# via $(tasks.build-image.results.IMAGE_DIGEST) - this was a feature we hadn't been
# able to fully deliver with the Image PipelineResource!
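The closing comment hints at a follow-up consumer of the digest. A hypothetical sketch of such a task entry (the print-digest Task name is invented here, and note the result is actually named IMAGE-DIGEST in the kaniko Task below, not IMAGE_DIGEST):

    # hypothetical third pipeline task consuming the kaniko digest result
    - name: report-digest
      taskRef:
        name: print-digest   # hypothetical Task
      params:
        - name: digest
          value: $(tasks.build-image.results.IMAGE-DIGEST)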


@@ -1,67 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko
  labels:
    app.kubernetes.io/version: "0.5"
  annotations:
    tekton.dev/pipelines.minVersion: "0.17.0"
    tekton.dev/categories: Image Build
    tekton.dev/tags: image-build
    tekton.dev/displayName: "Build and upload container image using Kaniko"
    tekton.dev/platforms: "linux/arm64"
spec:
  description: >-
    This Task builds source into a container image using Google's kaniko tool.
    Kaniko doesn't depend on a Docker daemon and executes each
    command within a Dockerfile completely in userspace. This enables
    building container images in environments that can't easily or
    securely run a Docker daemon, such as a standard Kubernetes cluster.
  params:
    - name: IMAGE
      description: Name (reference) of the image to build.
    - name: DOCKERFILE
      description: Path to the Dockerfile to build.
      default: ./Dockerfile
    - name: CONTEXT
      description: The build context used by Kaniko.
      default: ./
    - name: EXTRA_ARGS
      type: array
      default: []
    - name: BUILDER_IMAGE
      description: The image on which builds will run (default is v1.5.1)
      default: gcr.io/kaniko-project/executor:v1.8.0
      #default: gcr.io/kaniko-project/executor:v1.5.1@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5
  workspaces:
    - name: source
      description: Holds the context and docker file
    - name: dockerconfig
      description: Includes a docker `config.json`
      optional: true
      mountPath: /kaniko/.docker
  results:
    - name: IMAGE-DIGEST
      description: Digest of the image just built.
  steps:
    - name: build-and-push
      workingDir: $(workspaces.source.path)
      image: $(params.BUILDER_IMAGE)
      args:
        - $(params.EXTRA_ARGS[*])
        - --dockerfile=$(workspaces.source.path)/$(params.DOCKERFILE)
        - --context=$(params.CONTEXT) # The user does not need to care the workspace and the source.
        - --destination=$(params.IMAGE)
        - --digest-file=/tekton/results/IMAGE-DIGEST
        - --snapshotMode=redo
        - --single-snapshot
        - --use-new-run
        - --skip-tls-verify
      # kaniko assumes it is running as root, which means this example fails on platforms
      # that default to run containers as random uid (like OpenShift). Adding this securityContext
      # makes it explicit that it needs to run as root.
      securityContext:
        runAsUser: 0


@@ -1,73 +0,0 @@
#!/usr/bin/python3
import kubernetes as k8s
from pint import UnitRegistry
from collections import defaultdict

__all__ = ["compute_allocated_resources"]


def compute_allocated_resources():
    ureg = UnitRegistry()
    ureg.load_definitions('kubernetes_units.txt')
    Q_ = ureg.Quantity
    data = {}
    # doing this computation within a k8s cluster
    k8s.config.load_kube_config()
    core_v1 = k8s.client.CoreV1Api()
    # print("Listing pods with their IPs:")
    # ret = core_v1.list_pod_for_all_namespaces(watch=False)
    # for i in ret.items:
    #     print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    for node in core_v1.list_node().items:
        stats = {}
        node_name = node.metadata.name
        allocatable = node.status.allocatable
        max_pods = int(int(allocatable["pods"]) * 1.5)
        # print("{} ALLOC: {} MAX_PODS: {}".format(node_name, allocatable, max_pods))
        field_selector = ("status.phase!=Succeeded,status.phase!=Failed," +
                          "spec.nodeName=" + node_name)
        stats["cpu_alloc"] = Q_(allocatable["cpu"])
        stats["mem_alloc"] = Q_(allocatable["memory"])
        pods = core_v1.list_pod_for_all_namespaces(limit=max_pods,
                                                   field_selector=field_selector).items
        # compute the allocated resources
        cpureqs, cpulmts, memreqs, memlmts = [], [], [], []
        for pod in pods:
            for container in pod.spec.containers:
                res = container.resources
                reqs = defaultdict(lambda: 0, res.requests or {})
                lmts = defaultdict(lambda: 0, res.limits or {})
                cpureqs.append(Q_(reqs["cpu"]))
                memreqs.append(Q_(reqs["memory"]))
                cpulmts.append(Q_(lmts["cpu"]))
                memlmts.append(Q_(lmts["memory"]))
        stats["cpu_req"] = sum(cpureqs)
        stats["cpu_lmt"] = sum(cpulmts)
        stats["cpu_req_per"] = (stats["cpu_req"] / stats["cpu_alloc"] * 100)
        stats["cpu_lmt_per"] = (stats["cpu_lmt"] / stats["cpu_alloc"] * 100)
        stats["mem_req"] = sum(memreqs)
        stats["mem_lmt"] = sum(memlmts)
        stats["mem_req_per"] = (stats["mem_req"] / stats["mem_alloc"] * 100)
        stats["mem_lmt_per"] = (stats["mem_lmt"] / stats["mem_alloc"] * 100)
        data[node_name] = stats
    return data


if __name__ == "__main__":
    # execute only if run as a script
    print(compute_allocated_resources())


@@ -1,20 +0,0 @@
# memory units
kmemunits = 1 = [kmemunits]
Ki = 1024 * kmemunits
Mi = Ki^2
Gi = Ki^3
Ti = Ki^4
Pi = Ki^5
Ei = Ki^6
# cpu units
kcpuunits = 1 = [kcpuunits]
m = 1/1000 * kcpuunits
k = 1000 * kcpuunits
M = k^2
G = k^3
T = k^4
P = k^5
E = k^6


@@ -1,6 +0,0 @@
Descheduler (reschedule pods)
# https://github.com/kubernetes-sigs/descheduler
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/base/rbac.yaml
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/base/configmap.yaml
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/job/job.yaml

File diff suppressed because it is too large


@@ -1,167 +0,0 @@
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        file /etc/coredns/lan.db lan
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
  lan.db: "; lan. zone file\n$ORIGIN lan.\n@ 3600 IN SOA sns.dns.icann.org.
    noc.dns.icann.org. 2021102006 7200 3600 1209600 3600\n 3600 IN NS 172.23.255.252\n\nns
    \ IN A 172.23.255.252\nsalt IN A 192.168.10.2 \nmqtt
    \ IN A 172.16.23.1\nwww-proxy IN A 172.23.255.1\ngit IN
    \ A 172.23.255.2\npostgresql IN A 172.23.255.4\nmariadb IN A
    \ 172.23.255.5\npihole IN A 172.23.255.253\nadm IN CNAME
    adm01.wks.\n\nprometheus IN CNAME www-proxy \nalertmanager IN CNAME
    www-proxy\nstats IN CNAME www-proxy\ncr-ui IN CNAME
    www-proxy\napt IN CNAME www-proxy\napt-cache IN CNAME
    www-proxy\nnodered IN CNAME www-proxy\nfoto IN CNAME
    www-proxy\nmusik IN CNAME www-proxy\nhassio IN CNAME
    www-proxy\nhassio-conf IN CNAME www-proxy \ngit-ui IN CNAME
    www-proxy\ngrav IN CNAME www-proxy\ntekton IN CNAME
    www-proxy\nnc IN CNAME www-proxy\nauth IN CNAME
    www-proxy\npublic.auth IN CNAME www-proxy \nsecure.auth IN CNAME
    www-proxy\ndocker-registry IN CNAME adm\ncr IN CNAME adm\ndr-mirror
    \ IN CNAME adm\nlog IN CNAME adm\n"
---
apiVersion: v1
kind: Service
metadata:
  name: dns-ext
  namespace: kube-system
spec:
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
  selector:
    k8s-app: kube-dns
  type: LoadBalancer
  loadBalancerIP: 172.23.255.252
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      containers:
        - args:
            - -conf
            - /etc/coredns/Corefile
          image: k8s.gcr.io/coredns/coredns:v1.8.4
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: coredns
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/coredns
              name: config-volume
              readOnly: true
      dnsPolicy: Default
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: Corefile
                path: Corefile
              - key: lan.db
                path: lan.db
            name: coredns
          name: config-volume


@@ -1,47 +0,0 @@
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: descheduler-cronjob
  namespace: kube-system
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: "Forbid"
  jobTemplate:
    spec:
      template:
        metadata:
          name: descheduler-pod
        spec:
          priorityClassName: system-cluster-critical
          containers:
            - name: descheduler
              image: k8s.gcr.io/descheduler/descheduler:latest
              volumeMounts:
                - mountPath: /policy-dir
                  name: policy-volume
              command:
                - "/bin/descheduler"
              args:
                - "--policy-config-file"
                - "/policy-dir/policy.yaml"
                - "--v"
                - "3"
              resources:
                requests:
                  cpu: "500m"
                  memory: "256Mi"
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                privileged: false
                readOnlyRootFilesystem: true
                runAsNonRoot: false
          restartPolicy: "Never"
          serviceAccountName: descheduler-sa
          volumes:
            - name: policy-volume
              configMap:
                name: descheduler-policy-configmap


@@ -1,34 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    maxNoOfPodsToEvictPerNode: 3
    strategies:
      "RemoveDuplicates":
        enabled: true
      "RemovePodsViolatingInterPodAntiAffinity":
        enabled: true
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:
              "cpu": 20
              "memory": 40
              "pods": 20
            targetThresholds:
              "cpu": 50
              "memory": 60
              "pods": 20
        #nodeFit: true
      "RemovePodsViolatingTopologySpreadConstraint":
        enabled: true
        params:
          includeSoftConstraints: false


@@ -1,10 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  #namespace: nginx-ingress
  namespace: default
data:
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
  client-max-body-size: "0"


@@ -1,674 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 172.23.255.1
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

View File

@@ -1,223 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "172.23.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.14.0
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.14.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg

View File

@@ -1,21 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: loki-data
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/loki-data
server: ebin02
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: storage-loki-0
namespace: monitoring

View File

@@ -1,12 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.23.255.1-172.23.255.254

View File

@@ -1,12 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.23.255.1-172.23.255.254

View File

@@ -1,9 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: minio-openwrt
type: Opaque
data:
username: b3BlbndydAo=
password: ZUZWbmVnOEkwOE1zRTN0Q2VCRFB4c011OU0yVjJGdnkK
endpoint: aHR0cHM6Ly9taW5pby5saXZlLWluZnJhLnN2Yy5jbHVzdGVyLmxvY2FsOjk0NDMK

View File

@@ -1,36 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd
provisioner: nfs-ssd # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd-ebin01
provisioner: nfs-ssd-ebin01 # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-hdd-ebin01
provisioner: nfs-hdd-ebin01 # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd-ebin02
provisioner: nfs-ssd-ebin02 # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain

View File

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-hdd-ebin01
namespace: live-infra
labels:
app: nfs-hdd-ebin01
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-hdd-ebin01
template:
metadata:
labels:
app: nfs-hdd-ebin01
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-hdd-ebin01
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-hdd-ebin01
- name: NFS_SERVER
value: ebin01
- name: NFS_PATH
value: /data/raid1-hdd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin01
path: /data/raid1-hdd/k8s-data

View File

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-ssd-ebin01
namespace: live-infra
labels:
app: nfs-ssd-ebin01
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-ssd-ebin01
template:
metadata:
labels:
app: nfs-ssd-ebin01
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-ssd-ebin01
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-ssd-ebin01
- name: NFS_SERVER
value: ebin01
- name: NFS_PATH
value: /data/raid1-ssd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin01
path: /data/raid1-ssd/k8s-data

View File

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-ssd-ebin02
namespace: live-infra
labels:
app: nfs-ssd-ebin02
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-ssd-ebin02
template:
metadata:
labels:
app: nfs-ssd-ebin02
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-ssd-ebin02
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-ssd-ebin02
- name: NFS_SERVER
value: ebin02
- name: NFS_PATH
value: /data/raid1-ssd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin02
path: /data/raid1-ssd/k8s-data

View File

@@ -1,65 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,19 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: live-env
---
apiVersion: v1
kind: Namespace
metadata:
name: test-env
---
apiVersion: v1
kind: Namespace
metadata:
name: live-infra
---
apiVersion: v1
kind: Namespace
metadata:
name: test-infra

View File

@@ -1,4 +1,4 @@
FROM debian:bullseye
FROM debian:stable
ENV DEBIAN_FRONTEND noninteractive
ARG DEVPKGS="git make cmake gcc g++ python-dev libsqlcipher-dev"
@@ -34,4 +34,4 @@ RUN apt-get remove -y --purge ${DEVPKGS} && \
USER almond-cloud
WORKDIR /home/almond-cloud
ENTRYPOINT ["/opt/almond-cloud/start.sh"]
ENTRYPOINT ["/opt/almond-cloud/start.sh"]

View File

@@ -1,4 +1,4 @@
FROM cr.lan/debian-stable
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
apt-cacher-ng && \
@@ -8,5 +8,5 @@ RUN apt-get update && apt-get install -y \
RUN echo 'PassThroughPattern: .*' >> /etc/apt-cacher-ng/acng.conf
EXPOSE 3142
EXPOSE 3142
CMD /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng pidfile=/var/run/apt-cacher-ng/pid SocketPath=/var/run/apt-cacher-ng/socket foreground=1

View File

@@ -76,9 +76,30 @@ kind: PersistentVolumeClaim
metadata:
name: apt-cacher-volume
spec:
storageClassName: nfs-ssd
storageClassName: nfs-ssd-ebin02
volumeName: apt-cacher-ng
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: apt-cacher-ng
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/apt-cacher-ng
server: ebin02
capacity:
storage: 40Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: apt-cacher-volume
namespace: live-infra

View File

@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: img-apt-cacher-ng
spec:
pipelineRef:
name: kaniko-pipeline
params:
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: git-revision
value: master
- name: path-to-image-context
value: apps/apt-cacher-ng
- name: path-to-dockerfile
value: apps/apt-cacher-ng/Dockerfile
- name: image-name
value: cr.lan/apt-cacher-ng
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/apt-cacher-ng

View File

@@ -1,7 +0,0 @@
FROM: https://tanzu.vmware.com/developer/guides/ci-cd/argocd-gs/
# kubectl apply -f namespace.yaml
# superseded: kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# kubectl apply -n argocd -f install.yaml (needs changes for ARM builds)
# kubectl apply -n argocd -f ingress.yaml
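The "changes for ARM builds" aren't recorded here. A minimal sketch, assuming the upstream manifests only need their image references pointed at locally built arm64 images (registry and image names below are illustrative, not taken from this repo):

```
# hypothetical: rewrite upstream image references before applying install.yaml
sed -i 's|image: quay.io/argoproj/argocd:.*|image: cr.lan/argocd:arm64|' install.yaml
sed -i 's|image: redis:.*|image: cr.lan/redis:arm64|' install.yaml
```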

View File

@@ -1,18 +0,0 @@
#https://argoproj.github.io/argo-cd/operator-manual/ingress/#kubernetesingress-nginx
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server
namespace: argocd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: argocd.lan
http:
paths:
- backend:
serviceName: argocd-server
servicePort: https

File diff suppressed because it is too large.

View File

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: argocd

View File

@@ -1,5 +1,4 @@
FROM cr.lan/debian-stable
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
curl procps && \
apt-get clean -y && \

View File

@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: img-curl
spec:
pipelineRef:
name: kaniko-pipeline
params:
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: git-revision
value: master
- name: path-to-image-context
value: apps/curl
- name: path-to-dockerfile
value: apps/curl/Dockerfile
- name: image-name
value: cr.lan/curl
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/curl

View File

@@ -20,7 +20,6 @@ spec:
spec:
containers:
- name: registry-ui
#image: cr.lan/docker-registry-ui:arm64
image: docker.io/joxit/docker-registry-ui:latest
imagePullPolicy: Always
env:

View File

@@ -45,16 +45,26 @@ spec:
value: gitea
- name: DB_PASSWD
value: giteaEu94XSS4gKpheSBoMsIs
- name: GITEA__indexer__ISSUE_INDEXER_TYPE
value: db
#- name: GITEA__indexer__ISSUE_INDEXER
#value: redis
#- name: GITEA__indexer__ISSUE_INDEXER_QUEUE_CONN_STR
#value: addrs=redis-standalone.live-env.svc.cluster.local:6379 db=1
- name: GITEA__packages__ENABLED
value: "true"
- name: GITEA__log__LEVEL
value: warn
- name: GITEA__log__MODE
value: console
- name: GITEA__queue__TYPE
value: persistable-channel
value: file
- name: GITEA__log__ROUTER
value: file
- name: GITEA__log__MACARON
value: file
#- name: GITEA__queue__TYPE
#value: redis
#- name: GITEA__queue__CONN_STR
#value: redis://redis-standalone.live-env.svc.cluster.local:6397/0
- name: GITEA__server__ROOT_URL
value: http://git-ui.lan/
volumeMounts:
- name: gitea
mountPath: /data
@@ -66,13 +76,13 @@ spec:
containerPort: 22
protocol: TCP
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 10
httpGet:
path: /
port: http
initialDelaySeconds: 300
periodSeconds: 10
httpGet:
path: /
port: http
readinessProbe:
initialDelaySeconds: 30
initialDelaySeconds: 300
periodSeconds: 10
httpGet:
path: /
@@ -96,7 +106,8 @@ metadata:
labels:
app: gitea
spec:
storageClassName: nfs-ssd
storageClassName: nfs-ssd-ebin02
volumeName: gitea
accessModes:
- ReadWriteOnce
resources:
@@ -104,6 +115,26 @@ spec:
storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: gitea
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/gitea-data
server: ebin02
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: gitea
namespace: live-env
---
apiVersion: v1
kind: Service
metadata:
name: gitea

View File

@@ -1,6 +1,6 @@
FROM cr.lan/debian-stable-php-fpm
ENV DEBIAN_FRONTEND noninteractive
ARG GRAV_VERSION=1.6.28
ARG GRAV_VERSION=1.7.34
ARG DEV_PKGS="zlib1g-dev libpng-dev libjpeg-dev libfreetype6-dev \
libcurl4-gnutls-dev libxml2-dev libonig-dev"

View File

@@ -0,0 +1,19 @@
FROM cr.wks/debian-golang AS build
ENV GOARCH=arm64
ENV GOPATH=/usr/src/gopath
ENV GOCACHE=/usr/src/gocache
RUN go env
WORKDIR /usr/src
RUN go install github.com/sapcc/mosquitto-exporter@latest
#RUN go mod download
FROM cr.wks/debian-stable
LABEL source_repository="https://github.com/sapcc/mosquitto-exporter"
COPY --from=build /usr/src/gopath/bin/mosquitto-exporter /mosquitto-exporter
RUN chmod 0755 /mosquitto-exporter
EXPOSE 9234
ENTRYPOINT [ "/mosquitto-exporter" ]

View File

View File

@@ -1,8 +1,6 @@
FROM debian:stable-slim
FROM cr.wks/debian-stable
RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
apt-get update && \
RUN apt-get update && \
apt-get install -y --no-install-recommends \
mosquitto procps && \
apt-get clean -y && \

apps/mosquitto/bla Normal file
View File

View File

@@ -1,93 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: github-mosquitto-prometheus-exporter
spec:
type: git
params:
- name: revision
value: master
- name: url
value: https://github.com/sapcc/mosquitto-exporter.git
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mosquitto-prometheus-exporter
spec:
type: image
params:
- name: url
value: cr.lan/mosquitto-prometheus-exporter
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mosquitto-prometheus-exporter
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-binary
image: cr.lan/debian-golang-stable
script: |
#!/usr/bin/env bash
cd $(resources.inputs.source.path)
ls -al
export GOARCH=arm64
export GOPATH=/usr/src/gopath
export GOCACHE=/usr/src/gocache
go env
go get github.com/sapcc/mosquitto-exporter
make -j4 build CGO_ENABLED=0
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
workspaces:
- name: usr-src
mountPath: /usr/src
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mosquitto-prometheus-exporter
spec:
taskRef:
name: build-mosquitto-prometheus-exporter
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: github-mosquitto-prometheus-exporter
outputs:
- name: builtImage
resourceRef:
name: img-mosquitto-prometheus-exporter
workspaces:
- name: usr-src
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: usr_src

View File

@@ -1,77 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mosquitto
spec:
type: image
params:
- name: url
value: cr.lan/mosquitto
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mosquitto
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/mosquitto/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/mosquitto
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mosquitto-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-mosquitto
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-mosquitto

View File

@@ -1,42 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
run: nginx-deployment
template:
metadata:
labels:
run: nginx-deployment
spec:
containers:
- image: nginx
name: nginx-webserver
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
run: nginx-deployment
ports:
- port: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-test
spec:
rules:
- host: nginx-test.lan
http:
paths:
- backend:
serviceName: nginx-service
servicePort: 80

View File

@@ -1,4 +1,4 @@
FROM debian:stable-slim
FROM cr.lan/debian-stable
#RUN echo 'Acquire::http::proxy "http://172.23.255.1:3142";' >/etc/apt/apt.conf.d/proxy
RUN apt-get update && apt-get install -y \

View File

@@ -49,7 +49,7 @@ spec:
- key: app
operator: In
values:
- promtheus
- prometheus
- loki
topologyKey: kubernetes.io/hostname
# - name: prometheus-exporter

View File

@@ -5,6 +5,7 @@ metadata:
namespace: live-env
data:
redis.conf: |-
bind * -::*
appendonly yes
maxmemory 5mb
---

View File

@@ -1,10 +1,9 @@
FROM cr.lan/debian-stable-php-fpm
FROM cr.chaos/debian-stable-php-fpm as baseimage
ARG ROMPR_VERSION=1.60.1
ARG ROMPR_VERSION=2.24
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get -y install \
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install \
nginx \
curl \
unzip
@@ -19,21 +18,24 @@ RUN mkdir -p /app /rompr
RUN unzip -d /app rompr.zip && rm rompr.zip
RUN ln -sf /rompr/prefs /app/rompr/prefs; ln -sf /rompr/albumart /app/rompr/albumart;
RUN chown -R www-data:www-data /app/rompr /rompr
RUN pwd; ls -la .;ls -la /workspace/source;
RUN pwd; ls -la .;ls -la /etc/php/
ADD files/nginx_default /etc/nginx/sites-available/default
RUN mkdir -p /run/php/
FROM baseimage as final
#Environment variables to configure php
RUN sed -ri -e 's/^allow_url_fopen =.*/allow_url_fopen = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^memory_limit =.*/memory_limit = 128M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^max_execution_time =.*/max_execution_time = 1800/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^post_max_size =.*/post_max_size = 256M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^upload_max_filesize =.*/upload_max_filesize = 8M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^max_file_uploads =.*/max_file_uploads = 50/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^display_errors =.*/display_errors = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^display_startup_errors =.*/display_startup_errors = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^allow_url_fopen =.*/allow_url_fopen = On/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^memory_limit =.*/memory_limit = 128M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^max_execution_time =.*/max_execution_time = 1800/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^post_max_size =.*/post_max_size = 256M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^upload_max_filesize =.*/upload_max_filesize = 8M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^max_file_uploads =.*/max_file_uploads = 50/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^display_errors =.*/display_errors = On/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^display_startup_errors =.*/display_startup_errors = On/g' /etc/php/8.4/fpm/php.ini
RUN echo "<?php phpinfo(); ?>" > /app/rompr/phpinfo.php
RUN update-rc.d php7.4-fpm defaults
RUN update-rc.d php8.4-fpm defaults
ADD files/run-httpd /usr/local/bin/
RUN chmod 755 /usr/local/bin/run-httpd
EXPOSE 80

View File

@@ -1,3 +1,5 @@
lighttpd is configured in etc_lighttpd
generate a configmap with:
kubectl create configmap rompr-lighttpd-config --from-file etc_lighttpd/
Run with:
```
podman run --pull=always -d --replace -p 127.0.0.1:8081:80 \
  --mount=type=bind,source=/var/lib/rompr,destination=/rompr \
  --tz=Europe/Berlin --name=rompr cr.wks/rompr:latest
```
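A quick smoke test once the container is up (port and bind address taken from the run command above):

```
curl -I http://127.0.0.1:8081/
```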

View File

@@ -1,73 +0,0 @@
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: rompr
spec:
selector:
matchLabels:
app: rompr
strategy:
type: Recreate
template:
metadata:
labels:
app: rompr
spec:
containers:
- image: cr.lan/rompr
name: rompr
imagePullPolicy: Always
ports:
- containerPort: 80
name: http
volumeMounts:
- name: rompr-data
mountPath: /rompr
volumes:
- name: rompr-data
persistentVolumeClaim:
claimName: rompr-data
---
apiVersion: v1
kind: Service
metadata:
name: rompr
spec:
ports:
- name: http
port: 80
selector:
app: rompr
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rompr
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: musik.lan
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rompr
port:
name: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rompr-data
spec:
storageClassName: nfs-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 6Gi

View File

@@ -2,6 +2,7 @@
rm -f /var/run/nginx.pid
mkdir -p /var/log/nginx
set -e
/etc/init.d/php7.4-fpm restart
mkdir -p /rompr/albumart /rompr/prefs
chown www-data:www-data -R /rompr/albumart /rompr/prefs
/etc/init.d/php8.4-fpm restart
exec /usr/sbin/nginx -g 'daemon off;'

View File

@@ -1,23 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: build-rompr
spec:
pipelineRef:
name: kaniko-pipeline
params:
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: git-revision
value: master
- name: path-to-image-context
value: apps/rompr
- name: path-to-dockerfile
value: apps/rompr/Dockerfile
- name: image-name
value: cr.lan/rompr
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/kaniko-pipelines

View File

@@ -1,8 +0,0 @@
Install:
# Pipelines: @kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml@
# Triggers: @kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml@ #https://github.com/tektoncd/triggers/blob/master/docs/install.md
# Dashboard:
## update submodule in ./dashboard
## Build: @docker build -t tekton-dashboard:arm64 -t docker-registry.lan/tekton-dashboard:arm64 --platform linux/arm64 --build-arg GOARCH=arm64 .@
## apply deployment.yaml
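Putting the dashboard steps together (file paths assumed from this directory's layout; adjust as needed):

```
git submodule update --init dashboard
docker build -t docker-registry.lan/tekton-dashboard:arm64 --platform linux/arm64 --build-arg GOARCH=arm64 ./dashboard
docker push docker-registry.lan/tekton-dashboard:arm64
kubectl apply -f deployment.yaml -f ingress.yaml
```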

View File

@@ -1,60 +0,0 @@
# Copyright 2020 Tekton Authors LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ConfigMap
metadata:
name: config-registry-cert
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
# Registry's self-signed certificate
# TODO: somehow automate this with salt
cert: |
-----BEGIN CERTIFICATE-----
MIIFujCCA6KgAwIBAgIEYsvT+zANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJE
RTEPMA0GA1UECAwGQmVybGluMQ8wDQYDVQQHDAZCZXJsaW4xFDASBgNVBAMMC3R1
bW9yLmNoYW9zMB4XDTIxMDIxMjE4MzAzM1oXDTIyMDIxMjE4MzAzM1owLzELMAkG
A1UEBhMCREUxDzANBgNVBAgMBkJlcmxpbjEPMA0GA1UEBwwGQmVybGluMIICIjAN
BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAog4t352wKHS4pflQK4NlWH6yv1FK
MnqNJiNnIgkWrNABTu9ES3cmUwdEhf+Um7MJYvQivOZFIH65wBBmOxfnYWB+NPwn
XAi/o3BcePIdbwEGs0cxgIEKbmL9fY0SCXq0pXRu8Y7WAhqdTNp6/HY2fTMx7ghX
RNQPoeNlcfAZgpsJlZdkSzMYoFpGIW+Tvj3INNuIuHo1pagckWW/hGUIqY0NuUV9
Aj8LOHhHB+vKtjbq5DMVAob4kKOPJFmq/1D6fmRh3W1YAGikowVv3V45jAmnkcBj
Z8BIEiOnBy1AyW9o8Tc5000MAGNrm9IGpRfBBTptSAApZmK1V6zKreqCiCpgOBbh
6U1Bf1L39u8aLVRxeyzQbxqBM1VTbjKxygFSIR/7rVd9BEhx6VA95EG+EdPLpKDp
mymElCcVgv2ZhKBRxtne4CAQD5ng2SoEqLdjvZdC44QNapnj+6jlaNvKRJ1q63kq
B5Y4shJxYOc6QDQp2+Eh2d7qQNiTE3FJC/aeXDNQ+dqeV7chU+PbcbMQoxnIN6ou
Zc2IdtNL87+Apgh6vqZX9pELBXUN1Nu3NI88T8tw1CdqfFfh4Z2EEBBCsPD0yZPV
UrHZsAMiHh5prRkwsBVzDBIaLYd6glf/w9W8sWxe5wceDNhxD8VAfq/ZXeuE1Pme
cTVYsBNj8idC9tECAwEAAaOBxzCBxDAMBgNVHRMBAf8EAjAAMAsGA1UdDwQEAwIF
4DAdBgNVHQ4EFgQUa7ADNR68XrDsLtLtngmdJQ9UtOswcAYDVR0jBGkwZ4AU9l9v
D1+dukLLV/uDnP3eB4i6ZyihSaRHMEUxCzAJBgNVBAYTAkRFMQ8wDQYDVQQIDAZC
ZXJsaW4xDzANBgNVBAcMBkJlcmxpbjEUMBIGA1UEAwwLdHVtb3IuY2hhb3OCBBKa
C88wFgYDVR0RBA8wDYILdHVtb3IuY2hhb3MwDQYJKoZIhvcNAQELBQADggIBAKK3
S8qKrsarBflGrDI4diG+QOcMG3/y6juARp3vxQf3fDqC6HZCl+kWAp+Cq3Sp/hU7
GKM7qraWpvGxgmDyaevAirLdFlYQBgcIl9frPI8yfLWbZHWvx3PFXNqg2Ckm98xX
vSUacPTPp/tKFBquJ5+j+/YS2U4qWWNIYYtDEI+3lswfoeh0CIEPSxDk0wHDAyfZ
Vh30ZuZhsf3F63xMggw/RpEHeTTCr0YGOAmzpb7jItcbP/EER1qTQ4T+3ExuC40C
EdOAeL377O2rr7zjcmJWk8B5FaQ8K8UdE/iQGM7tP5ieMNTVACe21KFpqIIXaIka
HqRTyvRmJGUrVf1NeXE16yKirIqAjEV/B/4S244wxYcwqweZObbI0PnbnEMn3PMF
TV+e1CUmVOKyGIxfHH7j/VKQfmH/W0jOlGWI7OkbdU5GckoX4Knjrv2MmT9i2ENy
6dID3BJVm6hK2SjJLc7SxbPXMG3I6BrlA5/3LaXzl+2fWAk5OA1jnGZz0P4XcdOO
iAulB4I3PdmNRdSYAXVRdo5OLoq/7iBcqSrCXRw1IbgJm0VlS2AI6hGEXDQvjQwP
38ijZUV/ch2lGyUZOfQymI7Ylh+Airn8ctqyMS8FeZBAyny4/t7xrhWuGO1awUzp
4p/sEjg6kqp3oLai5yhaz9S+y7Ao5XmGDdzfalWH
-----END CERTIFICATE-----

View File

@@ -1,19 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tekton-dashboard
namespace: tekton-pipelines
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: tekton.lan
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tekton-dashboard
port:
number: 9097

View File

@@ -1,526 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: extensions.dashboard.tekton.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.apiVersion
name: API version
type: string
- JSONPath: .spec.name
name: Kind
type: string
- JSONPath: .spec.displayname
name: Display name
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: dashboard.tekton.dev
names:
categories:
- tekton
- tekton-dashboard
kind: Extension
plural: extensions
shortNames:
- ext
- exts
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
x-kubernetes-preserve-unknown-fields: true
versions:
- name: v1alpha1
served: true
storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-backend
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
- apiGroups:
- tekton.dev
resources:
- clustertasks
- clustertasks/status
verbs:
- get
- list
- watch
- apiGroups:
- triggers.tekton.dev
resources:
- clustertriggerbindings
verbs:
- get
- list
- watch
- apiGroups:
- dashboard.tekton.dev
resources:
- extensions
verbs:
- create
- update
- delete
- patch
- apiGroups:
- tekton.dev
resources:
- clustertasks
- clustertasks/status
verbs:
- create
- update
- delete
- patch
- apiGroups:
- triggers.tekton.dev
resources:
- clustertriggerbindings
verbs:
- create
- update
- delete
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-dashboard
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
aggregationRule:
clusterRoleSelectors:
- matchLabels:
rbac.dashboard.tekton.dev/aggregate-to-dashboard: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-extensions
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-pipelines
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-tenant
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- dashboard.tekton.dev
resources:
- extensions
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
- pods/log
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- tekton.dev
resources:
- tasks
- taskruns
- pipelines
- pipelineruns
- pipelineresources
- conditions
- tasks/status
- taskruns/status
- pipelines/status
- pipelineruns/status
- taskruns/finalizers
- pipelineruns/finalizers
verbs:
- get
- list
- watch
- apiGroups:
- triggers.tekton.dev
resources:
- eventlisteners
- triggerbindings
- triggertemplates
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- update
- patch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- create
- update
- delete
- apiGroups:
- tekton.dev
resources:
- tasks
- taskruns
- pipelines
- pipelineruns
- pipelineresources
- conditions
- taskruns/finalizers
- pipelineruns/finalizers
- tasks/status
- taskruns/status
- pipelines/status
- pipelineruns/status
verbs:
- create
- update
- delete
- patch
- apiGroups:
- triggers.tekton.dev
resources:
- eventlisteners
- triggerbindings
- triggertemplates
verbs:
- create
- update
- delete
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-triggers
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-backend
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: v1
kind: Service
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
dashboard.tekton.dev/release: v0.11.1
version: v0.11.1
name: tekton-dashboard
namespace: tekton-pipelines
spec:
ports:
- name: http
port: 9097
protocol: TCP
targetPort: 9097
selector:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
dashboard.tekton.dev/release: v0.11.1
version: v0.11.1
name: tekton-dashboard
namespace: tekton-pipelines
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
template:
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
name: tekton-dashboard
spec:
containers:
- args:
- --port=9097
- --logout-url=
- --pipelines-namespace=tekton-pipelines
- --triggers-namespace=tekton-pipelines
- --read-only=false
- --csrf-secure-cookie=false
- --log-level=info
- --log-format=json
- --namespace=
- --openshift=false
- --stream-logs=false
- --external-logs=
env:
- name: INSTALLED_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: WEB_RESOURCES_DIR
value: /go/src/github.com/tektoncd/dashboard/web
- name: TEKTON_PIPELINES_WEB_RESOURCES_DIR
value: /go/src/github.com/tektoncd/dashboard/web
#image: gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard@sha256:744eb92d7d0365bbfb2405df4ba4d2a66c01edc26028c362bd5675e2bc1b9626
image: docker-registry.lan/tekton-dashboard:arm64
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /health
port: 9097
name: tekton-dashboard
ports:
- containerPort: 9097
readinessProbe:
httpGet:
path: /readiness
port: 9097
securityContext:
runAsNonRoot: true
runAsUser: 65532
serviceAccountName: tekton-dashboard
volumes: []
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-pipelines
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-pipelines
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-dashboard
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-dashboard
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-triggers
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-triggers
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-tenant
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-tenant
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-extensions
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-extensions
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tekton-dashboard
namespace: tekton-pipelines
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
spec:
rules:
- host: tekton.lan
http:
paths:
- backend:
serviceName: tekton-dashboard
servicePort: 9097

File diff suppressed because it is too large.

View File

@@ -1,12 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: tektoncd-workspaces
spec:
storageClassName: nfs-ssd
accessModes:
- ReadWriteMany
resources:
requests:
storage: 40Gi

bin/find_changes.sh Executable file
View File

@@ -0,0 +1,14 @@
#!/bin/bash
# Collect the directories touched by the latest commit and print the unique set.
declare -A CH
CH=()
i=0
git --version
while read -r line; do
  WHAT=$(dirname "${line}")
  echo "LIN: ${line} WHAT: ${WHAT}"
  CH[$i]=$WHAT
  i=$((i+1))   # note: i=$((i++)) would leave i unchanged
done < <(git diff-tree --no-commit-id --name-only HEAD -r | egrep '^_')
# only paths whose first component starts with '_' survive the egrep above
UNIQ=$(printf '%s\n' "${CH[@]}" | sort | uniq)
echo ${UNIQ}
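Run it from the repository root right after committing; output shape (values purely illustrative):

```
$ ./bin/find_changes.sh
git version 2.39.5
LIN: _apps/mosquitto/Dockerfile WHAT: _apps/mosquitto
_apps/mosquitto
```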

View File

@@ -1,5 +0,0 @@
from: https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md
# create new secret:
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run -oyaml > additional-scrape-configs.yaml
# add "namespace: monitoring"
# apply
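On newer kubectl the bare --dry-run flag is deprecated; --dry-run=client together with -n monitoring should emit the namespace directly, which removes the manual edit step (worth verifying against your kubectl version):

```
kubectl create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml \
  -n monitoring --dry-run=client -o yaml > additional-scrape-configs.yaml
kubectl apply -f additional-scrape-configs.yaml
```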

View File

@@ -1,7 +0,0 @@
apiVersion: v1
data:
prometheus-additional.yaml: LSBqb2JfbmFtZTogZ2l0ZWEKICBzdGF0aWNfY29uZmlnczoKICAtIHRhcmdldHM6CiAgICAtIGdpdC11aS5sYW4KLSBqb2JfbmFtZTogbXlzcWxkCiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBtYXJpYWRiLmxhbjo5MTA0Ci0gam9iX25hbWU6IG1xdHQubW9zcXVpdHRvCiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBtcXR0Lmxhbjo5MjM0CiAgICAtIG1xdHQuY2hhb3M6OTIzNAotIGpvYl9uYW1lOiBoYXByb3h5CiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBhZG0wMS53a3M6OTEwMQogICAgLSBkcnVja2kud2tzOjkxMDEKICAgIC0gYXV0bzAxLmNoYW9zOjkxMDEKICAgIC0gYXV0bzAyLmNoYW9zOjkxMDEKLSBqb2JfbmFtZToga2xpcHBlcgogIHN0YXRpY19jb25maWdzOgogIC0gdGFyZ2V0czoKICAgIC0gZHJ1Y2tpLndrczozOTAzCi0gam9iX25hbWU6IG9jdG9wcmludAogIG1ldHJpY3NfcGF0aDogL3BsdWdpbi9wcm9tZXRoZXVzX2V4cG9ydGVyL21ldHJpY3MKICBwYXJhbXM6CiAgICBhcGlrZXk6CiAgICAtIDMwRThCMDFCRkQ2NzRFNUJCRDQ0NkQwOEM0NzMwREY0CiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBkcnVja2kud2tzOjgwCi0gam9iX25hbWU6IGhhc3NpbwogIG1ldHJpY3NfcGF0aDogL2FwaS9wcm9tZXRoZXVzCiAgYmVhcmVyX3Rva2VuOiAnZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKSVV6STFOaUo5LmV5SnBjM01pT2lKaE16Qm1ZalUxWmpjeVpHRTBZemMyWW1VMk5tWTBOamxqTlRBeU1qZGpaQ0lzSW1saGRDSTZNVFl4TWpnNE16STVOeXdpWlhod0lqb3hPVEk0TWpRek1qazNmUS4xSUNzSGxpVVhSMENHNEg4dlFSWUo1alZxRndtcUtTQjBmU2NTaXRDLVE0JwogIHN0YXRpY19jb25maWdzOgogICAgLSB0YXJnZXRzOgogICAgICAtIGhhc3Npby5sYW46ODAKLSBqb2JfbmFtZTogaGFzc2lvX3Jpbmc4NgogIG1ldHJpY3NfcGF0aDogL2FwaS9wcm9tZXRoZXVzCiAgYmVhcmVyX3Rva2VuOiAnZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKSVV6STFOaUo5LmV5SnBjM01pT2lJME9HRmpaVEppTm1RM09UZzBNamMzWVdGbU1tTm1abVUxWXpjNE5URTBOQ0lzSW1saGRDSTZNVFl4TWpFNU1qazBNQ3dpWlhod0lqb3hPVEkzTlRVeU9UUXdmUS5CYklBWG05UnEwamI2b3VxZ1ZITmQ2S2VlejNOUDN5aC03d3lmdW9COFlrJwogIHN0YXRpY19jb25maWdzOgogICAgLSB0YXJnZXRzOgogICAgICAtIGF1dG8uY2hhb3M6ODAKLSBqb2JfbmFtZTogcG9zdGdyZXMKICBzdGF0aWNfY29uZmlnczoKICAgIC0gdGFyZ2V0czoKICAgICAgLSBwb3N0Z3Jlcy5saXZlLWVudi5zdmMuY2x1c3Rlci5sb2NhbDo5MTg3Ci0gam9iX25hbWU6IG5vZGUKICBzdGF0aWNfY29uZmlnczoKICAtIHRhcmdldHM6CiAgICAtIGFkbTAxLndrczo5MTAwCiAgICAtIGR1bW9udC13a3Mud2tzOjkxMDAKICAgIC0gZHJ1Y2tpLndrczo5MTAwCiAgICAtIGViaW4wMS53a3M6OTEwMAogICAgLSBlYmluMDIud2tzOjkxMDAKICAgIC0gb3NtYy53a3M6OTEwMAogICAgLSByaW90MDEud2tzOjkxMDAKICAgIC0gdHJ1aGUuY2hhb3M6OTEwMAogICAgLSBhdXRvMDEuY2hhb3M6OTEwMAogICAgLSBhdXRvMDIuY2hhb3M6OTEwMAogICAgLSBkdW1vbnQuY2hhb3M6OTEwMAogICAgLSB0dW1vci5jaGFvczo5MTAwCiAgICAtIHdvaG56LmNoYW9zOjkxMDAKICAgIC0geW9yaS5jaGFvczo5MTAwCg==
kind: Secret
metadata:
creationTimestamp: null
name: additional-scrape-configs

View File

@@ -1,30 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: prometheus-k8s
namespace: metallb-system
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: prometheus-k8s
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: prometheus-k8s
subjects:
- kind: ServiceAccount
name: prometheus-k8s
namespace: monitoring

View File

@@ -1,65 +0,0 @@
- job_name: gitea
static_configs:
- targets:
- git-ui.lan
- job_name: mysqld
static_configs:
- targets:
- mariadb.lan:9104
- job_name: mqtt.mosquitto
static_configs:
- targets:
- mqtt.lan:9234
- mqtt.chaos:9234
- job_name: haproxy
static_configs:
- targets:
- adm01.wks:9101
- drucki.wks:9101
- auto01.chaos:9101
- auto02.chaos:9101
- job_name: klipper
static_configs:
- targets:
- drucki.wks:3903
- job_name: octoprint
metrics_path: /plugin/prometheus_exporter/metrics
params:
apikey:
- 30E8B01BFD674E5BBD446D08C4730DF4
static_configs:
- targets:
- drucki.wks:80
- job_name: hassio
metrics_path: /api/prometheus
bearer_token: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhMzBmYjU1ZjcyZGE0Yzc2YmU2NmY0NjljNTAyMjdjZCIsImlhdCI6MTYxMjg4MzI5NywiZXhwIjoxOTI4MjQzMjk3fQ.1ICsHliUXR0CG4H8vQRYJ5jVqFwmqKSB0fScSitC-Q4'
static_configs:
- targets:
- hassio.lan:80
- job_name: hassio_ring86
metrics_path: /api/prometheus
bearer_token: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiI0OGFjZTJiNmQ3OTg0Mjc3YWFmMmNmZmU1Yzc4NTE0NCIsImlhdCI6MTYxMjE5Mjk0MCwiZXhwIjoxOTI3NTUyOTQwfQ.BbIAXm9Rq0jb6ouqgVHNd6Keez3NP3yh-7wyfuoB8Yk'
static_configs:
- targets:
- auto.chaos:80
- job_name: postgres
static_configs:
- targets:
- postgres.live-env.svc.cluster.local:9187
- job_name: node
static_configs:
- targets:
- adm01.wks:9100
- dumont-wks.wks:9100
- drucki.wks:9100
- ebin01.wks:9100
- ebin02.wks:9100
- osmc.wks:9100
- riot01.wks:9100
- truhe.chaos:9100
- auto01.chaos:9100
- auto02.chaos:9100
- dumont.chaos:9100
- tumor.chaos:9100
- wohnz.chaos:9100
- yori.chaos:9100

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus-k8s-db-prometheus-k8s-0
namespace: monitoring
annotations:
volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
storageClassName: fast
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi

View File

@@ -1,41 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: prometheus-db
annotations:
pv.kubernetes.io/provisioned-by: nfs-ssd
spec:
storageClassName: "nfs-ssd"
nfs:
path: /data/raid1-ssd/k8s-data/prometheus-db
server: ebin01
capacity:
storage: 40Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: prometheus-k8s-db-prometheus-k8s-0
namespace: monitoring
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana-conf
spec:
storageClassName: "nfs-ssd"
nfs:
path: /data/raid1-ssd/k8s-data/grafana-conf
server: ebin01
capacity:
storage: 40Mi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: grafana-conf
namespace: monitoring

View File

@@ -1,12 +0,0 @@
COMMON:
** git tag -l
** V=<git tag>; git checkout -b branch-$V $V
** run: build.sh dir-name
external-provisioner:
external-attacher:
node-driver-registrar:
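For example, for one of the sidecars above (tag value illustrative; build.sh expects the component checkout as a sibling directory):

```
cd external-provisioner
git tag -l                    # pick a release tag
V=v2.2.1                      # illustrative
git checkout -b branch-$V $V
cd ..
./build.sh external-provisioner
```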

View File

@@ -1,27 +0,0 @@
#!/bin/bash
APP=$1
cd $APP
VERSION=arm64 make -j8 GOARCH=arm64
docker build -t $APP:arm64 --platform linux/arm64 .
docker tag ${APP}:arm64 docker-registry.lan/${APP}:arm64
echo "=============================================="
while true; do
read -p "Push it real good? " yn
case $yn in
[Yy]* )
docker push docker-registry.lan/${APP}:arm64;
echo "-> Cheers";
echo;
break;;
[Nn]* )
echo "x> Cheers!";
echo;
exit;;
* ) echo "Please answer [y]es or [n]o.";;
esac
done
cd -

View File

@@ -1,12 +0,0 @@
# This is where the result of the go build goes
/output*/
/_output*/
/_output
# Go test binaries
*.test
# Godeps or dep workspace
/Godeps/_workspace
vendor
vendor.*

View File

@@ -1,24 +0,0 @@
image:
name: ctrox/csi-s3:test
entrypoint: [""]
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_DRIVER: overlay2
stages:
- build
- test
build:
stage: build
script:
- make build
test:
stage: test
image: docker:stable
services:
- docker:dind
script:
- docker run --rm --privileged -v $(pwd):/app --device /dev/fuse ctrox/csi-s3:test

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,37 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: test build container push clean

PROJECT_DIR=/app
REGISTRY_NAME=docker-registry.lan
IMAGE_NAME=csi-s3
VERSION ?= dev
IMAGE_TAG=$(REGISTRY_NAME)/$(IMAGE_NAME):$(VERSION)
FULL_IMAGE_TAG=$(IMAGE_TAG)-full
TEST_IMAGE_TAG=$(REGISTRY_NAME)/$(IMAGE_NAME):test

build:
	CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -a -ldflags '-extldflags "-static"' -o _output/s3driver ./cmd/s3driver

test:
	docker build -t $(TEST_IMAGE_TAG) -f test/Dockerfile .
	docker run --rm --privileged -v $(PWD):$(PROJECT_DIR) --device /dev/fuse $(TEST_IMAGE_TAG)

container: build
	docker build --platform linux/arm64 -t $(IMAGE_TAG) -f cmd/s3driver/Dockerfile .
	docker build --platform linux/arm64 -t $(FULL_IMAGE_TAG) --build-arg VERSION=$(VERSION) -f cmd/s3driver/Dockerfile.full .

push: container
	docker push $(IMAGE_TAG)
	docker push $(FULL_IMAGE_TAG)

clean:
	go clean -r -x
	-rm -rf _output
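Since `VERSION` is assigned with `?=`, it can be overridden from the command line; for example (the version number here is illustrative):

```bash
# Builds the binary and both images, then pushes
# docker-registry.lan/csi-s3:1.2.3 and docker-registry.lan/csi-s3:1.2.3-full
make push VERSION=1.2.3
```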


@@ -1,173 +0,0 @@
# CSI for S3
This is a Container Storage Interface ([CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md)) driver for S3 (or S3-compatible) storage. It can dynamically allocate buckets and mount them into any container via a FUSE mount.
## Status
This is still very experimental and should not be used in any production environment. Unexpected data loss could occur, depending on which mounter and S3 storage backend are in use.
## Kubernetes installation
### Requirements
* Kubernetes 1.13+ (CSI v1.0.0 compatibility)
* Kubernetes has to allow privileged containers
* Docker daemon must allow shared mounts (systemd flag `MountFlags=shared`)
### 1. Create a secret with your S3 credentials
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  # For AWS set it to "https://s3.<region>.amazonaws.com"
  endpoint: <S3_ENDPOINT_URL>
  # If not on AWS, set it to ""
  region: <S3_REGION>
```
The region can be empty if you are using some other S3 compatible storage.
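Assuming the manifest above is saved as `secret.yaml` (the file name is illustrative), it can be applied with:

```bash
kubectl create -f secret.yaml
```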
### 2. Deploy the driver
```bash
cd deploy/kubernetes
kubectl create -f provisioner.yaml
kubectl create -f attacher.yaml
kubectl create -f csi-s3.yaml
```
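Once applied, the components should come up in `kube-system`. A quick sanity check, using the label selectors from the manifests shown further down in this diff:

```bash
kubectl get pods -n kube-system -l app=csi-s3
kubectl get pods -n kube-system -l app=csi-provisioner-s3
kubectl get pods -n kube-system -l app=csi-attacher-s3
```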
### 3. Create the storage class
```bash
kubectl create -f storageclass.yaml
```
### 4. Test the S3 driver
1. Create a PVC using the new storage class (a sketch of a matching `pvc.yaml` is shown after these steps):
```bash
kubectl create -f pvc.yaml
```
2. Check if the PVC has been bound:
```bash
$ kubectl get pvc csi-s3-pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-s3-pvc   Bound    pvc-c5d4634f-8507-11e8-9f33-0e243832354b   5Gi        RWO            csi-s3         9s
```
3. Create a test pod which mounts your volume:
```bash
kubectl create -f pod.yaml
```
If the pod can start, everything should be working.
4. Test the mount:
```bash
$ kubectl exec -ti csi-s3-test-nginx bash
$ mount | grep fuse
s3fs on /var/lib/www/html type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
$ touch /var/lib/www/html/hello_world
```
If something does not work as expected, check the troubleshooting section below.
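The `pvc.yaml` referenced in step 1 is not shown in this diff; a minimal sketch consistent with the `kubectl get pvc` output above (illustrative, the real file may differ):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  # the test pod above expects the claim in namespace "test"
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
```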
## Additional configuration
### Mounter
As S3 is not a real file system, there are some limitations to consider. Depending on which mounter you are using, you will get different levels of POSIX compatibility. The [consistency guarantees](https://github.com/gaul/are-we-consistent-yet#observed-consistency) also vary between S3 storage backends.
The driver can be configured to use one of these mounters to mount buckets:
* [rclone](https://rclone.org/commands/rclone_mount)
* [s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [goofys](https://github.com/kahing/goofys)
* [s3backer](https://github.com/archiecobbs/s3backer)
The mounter is set as a parameter in the storage class; you can also create one storage class per mounter if you like.
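A minimal sketch of such a storage class, assuming the parameter key is `mounter` (check the `storageclass.yaml` shipped with this driver for the exact schema; the provisioner name is taken from the manifests below):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ch.ctrox.csi.s3-driver
parameters:
  # one of: rclone, s3fs, goofys, s3backer
  mounter: rclone
```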
All mounters have different strengths and weaknesses depending on your use case. Here are some characteristics which should help you choose a mounter:
#### rclone
* Almost full POSIX compatibility (depends on caching mode)
* Files can be viewed normally with any S3 client
#### s3fs
* Large subset of POSIX
* Files can be viewed normally with any S3 client
* Does not support appends or random writes
#### goofys
* Weak POSIX compatibility
* Performance first
* Files can be viewed normally with any S3 client
* Does not support appends or random writes
#### s3backer (experimental*)
* Represents a block device stored on S3
* Allows using a real file system on top of S3
* Files are not readable with other S3 clients
* Supports appends
* Supports compression before upload (not yet implemented in this driver)
* Supports encryption before upload (not yet implemented in this driver)

*s3backer is experimental at this point because volume corruption can occur quickly if a Kubernetes node or the CSI pod shuts down unexpectedly.
The s3backer binary is not bundled with the normal Docker image to keep that image as small as possible. Use the `<version>-full` image tag for testing s3backer.
For more detailed limitations, consult the documentation of the individual projects.
## Troubleshooting
### Issues while creating PVC
Check the logs of the provisioner:
```bash
kubectl logs -l app=csi-provisioner-s3 -c csi-s3
```
### Issues creating containers
1. Ensure feature gate `MountPropagation` is not set to `false`
2. Check the logs of the s3-driver:
```bash
kubectl logs -l app=csi-s3 -c csi-s3
```
## Development
This project can be built like any other Go application.
```bash
go get -u github.com/ctrox/csi-s3
```
### Build executable
```bash
make build
```
### Tests
Currently the driver is tested by the [CSI Sanity Tester](https://github.com/kubernetes-csi/csi-test/tree/master/pkg/sanity). As end-to-end tests require S3 storage and a mounter like s3fs, they are best run in a Docker container. A Dockerfile and the test script are in the `test` directory. The easiest way to run the tests is the make target:
```bash
make test
```


@@ -1,14 +0,0 @@
FROM debian:buster-slim
LABEL maintainers="Cyrill Troxler <cyrilltroxler@gmail.com>"
LABEL description="csi-s3 slim image"
#RUN echo 'Acquire::http::proxy "http://172.23.255.1:3142";' >/etc/apt/apt.conf.d/proxy
# s3fs and some other dependencies
RUN apt-get update && \
    apt-get install -y \
    s3fs curl unzip rclone && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/*
COPY ./s3driver /s3driver
ENTRYPOINT ["/s3driver"]


@@ -1,43 +0,0 @@
FROM debian:buster-slim as s3backer
ARG S3BACKER_VERSION=1.5.4
#RUN echo 'Acquire::http::proxy "http://172.23.255.1:3142";' >/etc/apt/apt.conf.d/proxy
RUN apt-get update && apt-get install -y \
    build-essential \
    autoconf \
    libcurl4-openssl-dev \
    libfuse-dev libfuse3-dev \
    libexpat1-dev \
    libssl-dev \
    zlib1g-dev \
    psmisc \
    pkg-config \
    git && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/*
# Compile & install s3backer
RUN git clone https://github.com/archiecobbs/s3backer.git /src/s3backer
WORKDIR /src/s3backer
RUN git checkout tags/${S3BACKER_VERSION}
RUN ./autogen.sh && \
    ./configure && \
    make -j8 && \
    make install

FROM debian:buster-slim
LABEL maintainers="Cyrill Troxler <cyrilltroxler@gmail.com>"
LABEL description="csi-s3 image"
COPY --from=s3backer /usr/bin/s3backer /usr/bin/s3backer
# s3fs and some other dependencies
RUN apt-get update && \
    apt-get install -y \
    libfuse3-3 gcc sqlite3 libsqlite3-dev \
    s3fs psmisc procps libcurl4 xfsprogs curl unzip rclone && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY ./_output/s3driver /s3driver
ENTRYPOINT ["/s3driver"]


@@ -1,45 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
	"flag"
	"log"
	"os"

	"github.com/ctrox/csi-s3/pkg/s3"
)

func init() {
	// Send log output to stderr (glog-style flag registered by a dependency).
	flag.Set("logtostderr", "true")
}

var (
	endpoint = flag.String("endpoint", "unix://tmp/csi.sock", "CSI endpoint")
	nodeID   = flag.String("nodeid", "", "node id")
)

func main() {
	flag.Parse()

	// Instantiate the driver for this node and serve the CSI endpoint.
	driver, err := s3.NewS3(*nodeID, *endpoint)
	if err != nil {
		log.Fatal(err)
	}
	driver.Run()
	os.Exit(0)
}
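Given the flags defined above, the driver binary can also be started locally, e.g. for sanity testing (socket path and node id are illustrative):

```bash
# Serve the CSI endpoint on a local unix socket
./_output/s3driver --endpoint=unix:///tmp/csi.sock --nodeid=$(hostname)
```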


@@ -1,90 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io
---
# needed for StatefulSet
kind: Service
apiVersion: v1
metadata:
  name: csi-attacher-s3
  namespace: kube-system
  labels:
    app: csi-attacher-s3
spec:
  selector:
    app: csi-attacher-s3
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-attacher-s3
  namespace: kube-system
spec:
  serviceName: "csi-attacher-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-attacher-s3
  template:
    metadata:
      labels:
        app: csi-attacher-s3
    spec:
      serviceAccount: csi-attacher-sa
      containers:
        - name: csi-attacher
          image: docker-registry.lan/csi-attacher:arm64
          args:
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
            type: DirectoryOrCreate


@@ -1,121 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-s3
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
subjects:
  - kind: ServiceAccount
    name: csi-s3
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-s3
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-s3
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-s3
  template:
    metadata:
      labels:
        app: csi-s3
    spec:
      serviceAccount: csi-s3
      hostNetwork: true
      containers:
        - name: driver-registrar
          image: docker-registry.lan/node-driver-registrar:arm64
          args:
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration/
        - name: csi-s3
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker-registry.lan/csi-s3:arm64
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "Always"
          #imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: fuse-device
              mountPath: /dev/fuse
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: fuse-device
          hostPath:
            path: /dev/fuse


@@ -1,17 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: test
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-pvc
        readOnly: false


@@ -1,105 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
  labels:
    app: csi-provisioner-s3
spec:
  selector:
    app: csi-provisioner-s3
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
spec:
  serviceName: "csi-provisioner-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-provisioner-s3
  template:
    metadata:
      labels:
        app: csi-provisioner-s3
    spec:
      serviceAccount: csi-provisioner-sa
      containers:
        - name: csi-provisioner
          image: docker-registry.lan/csi-provisioner:arm64
          args:
            - "--provisioner=ch.ctrox.csi.s3-driver"
            - "--csi-address=$(ADDRESS)"
            - "--v=4"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
        - name: csi-s3
          image: docker-registry.lan/csi-s3:arm64
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
      volumes:
        - name: socket-dir
          emptyDir: {}

Some files were not shown because too many files have changed in this diff.