159 Commits

Author SHA1 Message Date
6cfd02bc26 rompr new 2025-12-08 20:00:39 +01:00
0033a5a231 bogus commit for rompr 2024-10-29 10:01:12 +01:00
70ccdf43ef Merge branch 'main' of ssh://gitea.service.nr5:2222/chaos/docker-images
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build is failing
2024-10-29 09:52:10 +01:00
401acdc54f new rompr version 2024-10-29 09:47:26 +01:00
c6a8464bb2 why _?111git statuskubectl apply -n kube-system -f descheduler-cronjob.yaml
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2024-09-13 20:09:41 +02:00
d1247a3b02 listing
Some checks failed
continuous-integration/drone/push Build is failing
2024-09-13 20:07:39 +02:00
83e3907708 only apps
Some checks failed
continuous-integration/drone/push Build is failing
2024-09-13 09:58:44 +02:00
630f321651 only apps 2024-09-13 09:57:03 +02:00
65318147c7 with git/testing
Some checks failed
continuous-integration/drone/push Build is running
continuous-integration/drone Build is failing
2024-04-21 19:23:45 +02:00
5b5c21b67b does it work like this? Yeah? - man-db may stay
Some checks failed
continuous-integration/drone/push Build is failing
2024-04-21 17:44:07 +02:00
3dac0b92f1 does it work like this? Yeah? - man-db may stay
Some checks failed
continuous-integration/drone/push Build is failing
2024-04-21 17:38:45 +02:00
35ec70792c does it work like this? Yeah?
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-21 17:36:58 +02:00
4ccfd0d648 building testing with git
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-21 17:27:46 +02:00
ccbe462a76 building testing with git
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-04-21 17:26:41 +02:00
98234e569a WHOA Sun 21 Apr 17:23:21 CEST 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:23:21 +02:00
8c96788392 Sun 21 Apr 17:17:50 CEST 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:17:50 +02:00
60417861fc more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:13:27 +02:00
dafa848d80 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:09:07 +02:00
4579621b03 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:07:27 +02:00
542fc02720 more changes
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:05:43 +02:00
4b2f5d8c9f merged
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 17:02:48 +02:00
7da16def78 .gitignore 2024-04-21 17:02:29 +02:00
bcd8242061 what is happening here, for all hails sake
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:29:41 +02:00
6639d8d0c2 whats happening
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:16:12 +02:00
3ced13f704 whats happening
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:15:00 +02:00
d4f052787f cleanup
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:06:32 +02:00
d55511e84e bogus change
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-21 16:04:49 +02:00
11c3f3174d bogus change
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-04-21 15:51:03 +02:00
a770e55f47 loops and all in one pipeline 2024-04-21 12:50:23 +02:00
ac02ddcc00 Merge branch 'main' of ssh://gitea.service.nr5:2222/chaos/docker-images 2024-04-21 11:36:08 +02:00
0b93d83014 git log step
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-15 17:42:09 +02:00
0da2ea2477 removal clearer typed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 19:27:09 +02:00
5751f2c82e removing man-db in first run
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 18:58:03 +02:00
9d83926159 git in debian-stable image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 18:47:43 +02:00
dd52955602 one character less optimization 2024-04-08 18:02:10 +02:00
b451999d77 dry_run and cache_from its own image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 16:53:49 +02:00
1d84d11f37 new ROMPR Version
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-08 16:29:04 +02:00
3067ebd5de new rompr version 2024-03-21 21:44:59 +01:00
fb1a6e307f all images again
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build was killed
2024-02-27 18:18:39 +01:00
82d001e962 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 18:10:35 +01:00
cde42fcd56 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build is failing
2024-02-27 17:32:20 +01:00
801e76f0d3 distcc stuff removed
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 17:25:55 +01:00
323f9eaff0 only openwrt image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-27 17:21:33 +01:00
09c98d766a only openwrt image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-27 17:21:00 +01:00
2ebc1ec635 project rename
Some checks reported errors
continuous-integration/drone Build was killed
2024-02-26 17:02:22 +01:00
67787c4fe0 enabling openwrt image
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-02-26 16:57:24 +01:00
fef81d7c28 using our own image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:53:18 +01:00
7fbaf62415 using our own image
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:51:04 +01:00
7a70000833 using our own image 2024-02-26 16:50:40 +01:00
5058b10769 openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:48:16 +01:00
3b7ac02aed openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:46:46 +01:00
fc591f4dac openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:45:25 +01:00
36c7b2d0b5 openwrt builder
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2024-02-26 16:43:54 +01:00
cf8ac80bc5 all packs
Some checks are pending
continuous-integration/drone/push Build is running
continuous-integration/drone Build is passing
2024-01-17 18:28:19 +01:00
5c2bded912 ENV var fix 2024-01-17 18:07:20 +01:00
55ace2881c using fpm-socket
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 18:04:49 +01:00
75edd26772 php-fpm proper version
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:48:02 +01:00
21fab1e23f php-fpm proper version
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:38:06 +01:00
45ffac4318 fewer layers in rompr image 2024-01-17 17:32:45 +01:00
e702963a01 all packs again 2024-01-17 17:26:20 +01:00
ca165f5c5e how to run 2024-01-17 17:24:59 +01:00
44ae607709 removed obsolete kubernetes stuff 2024-01-17 17:13:28 +01:00
e0824bf3c1 rompr version 2.x
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 17:07:42 +01:00
95e8c6f363 all packages again
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 14:09:07 +01:00
123eeddf49 using debian again , we need chmod
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 13:45:08 +01:00
5a96d89fc2 experimental features and copy chmod
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-17 12:46:58 +01:00
296ab18421 chmod befor copy
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-17 12:13:54 +01:00
3477d59e07 from scratch and not debian
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-17 11:26:21 +01:00
0b3cbc584f chmod
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-17 11:04:53 +01:00
0075dac22d chmod?
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-16 14:38:00 +01:00
9ce1a6b610 all of them again
Some checks failed
continuous-integration/drone/push Build is failing
2024-01-16 13:50:16 +01:00
e811e80f25 here we go
All checks were successful
continuous-integration/drone/push Build is passing
2024-01-16 13:18:26 +01:00
397dd88ebb The right image might help
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-16 12:57:20 +01:00
da88bfdfc0 The right image might help
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-01-16 12:55:29 +01:00
7c94d1d7a7 all images again 2024-01-10 16:11:24 +01:00
598253193b downloading mods
Some checks reported errors
continuous-integration/drone/push Build encountered an error
continuous-integration/drone Build is failing
2024-01-10 11:34:46 +01:00
ec3e999375 lesser images
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-21 11:59:04 +01:00
b423324a75 packages as steps
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone Build was killed
2023-12-19 13:55:34 +01:00
a2143bfc0a as steps 2
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:46:05 +01:00
2e76ec3da9 as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:45:17 +01:00
01208f9413 as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:44:30 +01:00
c72f7b7a1c as steps
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-19 13:43:46 +01:00
67edba2276 mosquitto prometheus exporter image build
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-19 12:46:37 +01:00
315d8bd632 apps (some of them), typo
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:52:13 +01:00
13898378cd apps (some of them)
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 18:51:23 +01:00
1815e60a37 registry typo fix
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 18:37:54 +01:00
72aeb85a2e looping against
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 18:24:39 +01:00
a6d2e03707 looping against
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 18:23:11 +01:00
da199f3fe0 new sources format, who knew?
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:12:50 +01:00
c686d6fe91 sources.list gone?
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-15 18:06:59 +01:00
86855f541a context for drone and cleanup/update
All checks were successful
continuous-integration/drone/push Build is passing
2023-12-15 18:04:23 +01:00
3debf1dabc platform part, not an array and its plugins/docker
Some checks failed
continuous-integration/drone/push Build is failing
2023-12-15 17:58:30 +01:00
af467c339e platform part, not an array
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:55:27 +01:00
47c4908ffe platform part
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:54:12 +01:00
4cb9b0c3b5 platform part
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:53:35 +01:00
f316936acc no loops for now2
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-12-15 17:40:43 +01:00
f353210a42 no loops for now
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:40:20 +01:00
eca7f86f4f no loops for now
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:39:47 +01:00
64196d7209 what
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-15 17:33:00 +01:00
065ff0a85d drone as jsonnnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:55:09 +01:00
2604d026e4 drone as jsonnnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:54:06 +01:00
dfd2866c06 drone as jsonnnet
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-12-13 18:53:30 +01:00
5e271a7593 drone as jsonnnet
Some checks reported errors
continuous-integration/drone Build encountered an error
2023-12-13 18:51:45 +01:00
77a646866d more obsolete stuff cleanup 2023-12-13 18:11:54 +01:00
e60be3ab70 removing kubernetes stuff 2023-12-13 18:09:45 +01:00
757ab5a092 removed submodules 2023-12-13 18:03:49 +01:00
2e3bb35f86 coreddns update 2023-10-15 19:17:51 +02:00
47cbd88587 coredns / cluster upgrade 2023-01-16 18:57:58 +01:00
dd74762778 tekton PVC? required? 2023-01-12 20:54:31 +01:00
07d7f45e64 other stings 2023-01-12 20:53:46 +01:00
536c0c4ddc flannel 0.20 upgrade 2023-01-12 20:53:23 +01:00
fcb2e69615 upgrade galore from 1.23 to 1.26. and cluster ist still at 1.25? See: Readme.md 2023-01-12 20:52:46 +01:00
e2e032ac94 another nfs -client provisioner 2022-12-08 17:51:23 +01:00
4bbf79569c another nfs -client provisioner 2022-12-08 17:47:14 +01:00
273fb0e252 more updates 2022-12-08 17:09:38 +01:00
62f5788742 changing output dir 2022-12-08 16:47:19 +01:00
9b2d2a9d95 php-fpm 2022-12-08 16:43:36 +01:00
b5ff289f66 stuff 2022-12-08 16:39:52 +01:00
7cb8d572e7 stuff 2022-12-08 14:03:01 +01:00
14aceae467 new version and create dirs on run 2022-12-08 13:57:10 +01:00
604d065252 new version and create dirs on run 2022-12-08 13:09:24 +01:00
b50d6de8f7 cleanup 2022-11-18 10:26:13 +01:00
79c4e5e0c7 tekton stuff and install 2022-11-18 10:24:39 +01:00
d7241c7563 removed obsolete submods 2022-11-18 10:21:37 +01:00
8fbf07efdf removed descheduler, helm is on its way 2022-10-25 14:03:10 +02:00
beb1bfe0da nginx ingress is installed via helm now 2022-10-25 14:01:34 +02:00
8b62746bcc cleanup 2022-10-12 13:20:42 +02:00
94b39a804b merged 2022-09-19 16:58:14 +02:00
43d17581b3 gitea and apt-cacher 2022-09-19 16:56:40 +02:00
180d28fe80 Merge branch 'master' of git.lan:chaos/kubernetes 2022-09-19 16:54:53 +02:00
30ba290918 don't know why this shit doesn't run anymore 2022-09-10 13:32:34 +02:00
b111463cf5 Merge branch 'master' of git.lan:chaos/kubernetes 2022-08-24 19:17:10 +02:00
c2f6c546eb gitea uses ebin02 2022-08-24 19:16:24 +02:00
748b94f069 local changes 2022-07-30 12:54:52 +02:00
59c019727d rompr version 1.61 2022-07-30 12:51:17 +02:00
17f8b2f5cb mosquitto and prometheus 2022-07-30 12:43:56 +02:00
105e051d64 grav and tekton 2022-07-30 12:33:26 +02:00
9b92cf35e0 Merge branch 'master' of git.lan:chaos/kubernetes 2022-07-30 12:29:55 +02:00
41a2ba8c82 Dockerfile using our debian image 2022-07-30 12:29:43 +02:00
3b552f3134 my changes 2022-07-30 11:47:09 +02:00
7c778d3794 pipeline for mariadb prometheus 2022-07-29 18:42:25 +02:00
a608ac1297 mariadb pipeline 2022-07-29 18:41:01 +02:00
89c3eaac22 dolibarr and curl 2022-07-28 19:08:22 +02:00
7505262bc9 pipelinerun for nextcloud 2022-07-28 18:57:19 +02:00
9c88f4bc6c nextcloud pipelinerun 2022-07-28 18:52:51 +02:00
f96313a307 deschduler 2022-06-22 21:00:51 +02:00
1d3eb09904 deschduler 2022-06-22 21:00:18 +02:00
287458f48b gitea liveness probes and some config updates 2022-06-21 12:29:35 +02:00
5affbfd886 gitea liveness probes and some config updates 2022-06-21 12:27:57 +02:00
c1b864155e nextcloud 24 2022-05-08 11:33:56 +02:00
2827dac20c nextcloud 24 2022-05-07 10:47:51 +02:00
0c8338cd86 nextcloud 24 2022-05-06 19:44:19 +02:00
62aa39b493 descheduler still amystery 2022-03-20 11:23:40 +01:00
c626429abf more rfactoring 2022-03-16 19:58:37 +01:00
237981b8b2 multiarch-support is gone in bullseye 2022-03-16 19:28:46 +01:00
7763958f0f using another src dir 2022-03-16 19:06:17 +01:00
d904f51d20 migrated base images to pipeline runs 2022-03-16 18:33:09 +01:00
613da54d99 migrated base images to pipeline runs 2022-03-16 18:30:18 +01:00
06c173e650 refactoring 2022-03-16 18:11:11 +01:00
134 changed files with 438 additions and 14018 deletions

.drone.jsonnet (new Normal file, 77 lines added)

@@ -0,0 +1,77 @@
#local dirs = ['_CI-CD', 'apps'];
local dirs = ['apps'];
local packages = ['debian-stable', 'debian-stable-build-essential', 'debian-stable-openwrt',
'debian-golang', 'debian-stable-php-fpm', 'debian-testing'];
#local packages = ['debian-stable-openwrt'];
local apps = ['rompr', 'apt-cacher-ng', 'curl', 'mosquitto', 'mosquitto-prometheus-exporter'];
#local apps = ['rompr'];
local build(dir, package) = {
name: '%(package)s' % { package: package },
image: 'plugins/docker',
settings: {
context: '%(dir)s/%(package)s' % { dir: dir, package: package },
dockerfile: '%(dir)s/%(package)s/Dockerfile' % { dir: dir, package: package },
registry: 'http://cr.wks',
insecure: 'true',
purge: 'false',
experimental: 'true',
tags: ['latest'],
repo: 'cr.wks/%(package)s' % { package: package },
cache_from: 'cr.wks/%(package)s:latest' % { package: package },
},
};
[
{
kind: 'pipeline',
type: 'docker',
name: 'Build Changes',
platform: {
os: 'linux',
arch: 'arm64',
},
steps: [
{
name: 'git log',
image: 'cr.wks/debian-testing',
commands: [ 'bin/find_changes.sh', 'ls -la' ]
},
# [
# build('_CI-CD', app)
# for app in packages
# ],
# [
# build('apps', app)
# for app in apps
# ]
],
},
#{
# kind: 'pipeline',
# type: 'docker',
# name: '_CI-CD',
# platform: {
# os: 'linux',
# arch: 'arm64',
# },
# steps: [
# build('_CI-CD', pkg)
# for pkg in packages
# ],
# },
{
kind: 'pipeline',
type: 'docker',
name: 'apps',
platform: {
os: 'linux',
arch: 'arm64',
},
steps: [
build('apps', app)
for app in apps
],
},
]
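
The Jsonnet above expands the packages and apps lists into Drone pipeline objects at evaluation time, so the quickest way to see what CI will actually run is to render the file locally. A minimal sketch of such a render, assuming the jsonnet Python binding and PyYAML are installed (this helper is illustrative and not part of the repository; the drone CLI's 'drone jsonnet' command performs the same conversion):

#!/usr/bin/env python3
# Render .drone.jsonnet locally to inspect the generated pipelines.
# Assumes: pip install jsonnet pyyaml (illustrative helper, not part of the repo).
import json
import _jsonnet  # Python binding for google/jsonnet
import yaml

docs = json.loads(_jsonnet.evaluate_file(".drone.jsonnet"))
# The file evaluates to a list of pipeline objects; Drone reads them as a
# multi-document YAML stream, so print them the same way.
print(yaml.dump_all(docs, sort_keys=False))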

.gitignore (vendored, 2 lines changed)

@@ -1 +1 @@
csi-s3/storage-csi-s3/cmd/s3driver/s3driver
*.swp

.gitmodules (vendored, 48 lines removed)

@@ -1,48 +0,0 @@
[submodule "kube-prometheus"]
path = kube-prometheus
url = https://github.com/coreos/kube-prometheus.git
[submodule "cluster-monitoring"]
path = cluster-monitoring
url = https://github.com/carlosedp/cluster-monitoring.git
[submodule "gluster-kubernetes"]
path = gluster-kubernetes
url = https://github.com/jayflory/gluster-kubernetes.git
[submodule "kubernetes-ingress"]
path = kubernetes-ingress
url = https://github.com/haproxytech/kubernetes-ingress.git
[submodule "pihole-kubernetes"]
path = pihole-kubernetes
url = https://github.com/MoJo2600/pihole-kubernetes.git
[submodule "pihole-helm"]
path = pihole-helm
url = https://github.com/ChrisPhillips-cminion/pihole-helm.git
[submodule "helm"]
path = helm
url = https://github.com/helm/helm.git
[submodule "docker-apt-cacher-ng"]
path = docker-apt-cacher-ng
url = https://github.com/sameersbn/docker-apt-cacher-ng.git
[submodule "mosquitto/charts"]
path = mosquitto/charts
url = https://github.com/smizy/charts.git
[submodule "csi-s3/storage-csi-s3"]
path = csi-s3/storage-csi-s3
url = https://github.com/ctrox/csi-s3.git
[submodule "csi-s3/external-attacher"]
path = csi-s3/external-attacher
url = https://github.com/kubernetes-csi/external-attacher.git
[submodule "csi-s3/external-provisioner"]
path = csi-s3/external-provisioner
url = https://github.com/kubernetes-csi/external-provisioner.git
[submodule "csi-s3/node-driver-registrar"]
path = csi-s3/node-driver-registrar
url = https://github.com/kubernetes-csi/node-driver-registrar.git
[submodule "apps/tekton/dashboard"]
path = apps/tekton/dashboard
url = https://github.com/tektoncd/dashboard.git
[submodule "_sys/haproxy-ingress"]
path = _sys/haproxy-ingress
url = https://github.com/haproxytech/kubernetes-ingress.git
[submodule "nfs-subdir-external-provisioner"]
path = nfs-subdir-external-provisioner
url = https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git


@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>kubernetes</name>
<name>docker-images</name>
<comment></comment>
<projects>
</projects>


@@ -1,4 +1,4 @@
FROM cr.lan/debian-stable
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
golang make git


@@ -1,84 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-debian-golang-stable
spec:
type: image
params:
- name: url
value: cr.lan/debian-golang-stable
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-debian-golang
spec:
params:
- name: pathToContainerFile
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-golang/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-golang
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToContainerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
#workspaces:
# - name: workspace
# mountPath: /workspace
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-debian-golang
spec:
taskRef:
name: build-debian-golang
params:
- name: pathToContainerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-debian-golang-stable
# workspaces:
# - name: workspace
# persistentVolumeClaim:
# claimName: tektoncd-workspaces
# subPath: workspaces


@@ -1,4 +1,4 @@
FROM cr.lan/debian-stable
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
dnsutils procps nmap bash iputils-ping bash \


@@ -1,85 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-debian-stable-build-essential
spec:
type: image
params:
- name: url
value: cr.lan/debian-stable-build-essential
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-debian-stable-build-essential
spec:
params:
- name: pathToContainerFile
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable-build-essential/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable-build-essential
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToContainerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
#workspaces:
# - name: workspace
# mountPath: /workspace
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-debian-stable-build-essential
spec:
taskRef:
name: build-debian-stable-build-essential
params:
- name: pathToContainerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-debian-stable-build-essential
# workspaces:
# - name: workspace
# persistentVolumeClaim:
# claimName: tektoncd-workspaces
# subPath: workspaces


@@ -0,0 +1,14 @@
FROM cr.wks/debian-stable-build-essential
RUN apt update -y; \
apt install -y build-essential ccache ecj fastjar file g++ gawk \
gettext git java-propose-classpath libelf-dev libncurses5-dev \
libncursesw5-dev libssl-dev python3 python3-dev unzip wget \
python3-distutils python3-setuptools rsync subversion swig time \
xsltproc zlib1g-dev make distcc distcc-pump nfs-common clang flex bison g++ gawk \
gcc-multilib-mips-linux-gnu git libncurses-dev libssl-dev && \
apt-get remove --purge -y exim* && \
apt-get autoremove --purge -y && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/apt/*


@@ -1,11 +1,11 @@
FROM cr.lan/debian-stable
FROM debian:stable AS baseimage
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
dnsutils procps nmap bash iputils-ping bash openssl \
php-fpm php-zip php-sqlite3 php-pgsql php-mysqli php-json php-readline \
php-xml php-ldap php-imap php-intl php-xmlrpc php-imagick php-gd php-cli php-curl \
php-bz2 php-mbstring php-memcache php-redis
php-xml php-intl php-xmlrpc php-imagick php-gd php-cli php-curl \
php-bz2 php-mbstring
#cleanup
RUN apt-get remove -y --purge man-db ;\
@@ -14,6 +14,8 @@ RUN apt-get remove -y --purge man-db ;\
rm -rf /var/lib/apt/lists/* ;\
rm -rf /var/cache/apt/*
ADD etc_php-fpm/www.conf /etc/php/7.4/fpm/pool.d
FROM baseimage as final
ADD etc_php-fpm/www.conf /etc/php/8.4/fpm/pool.d
ADD docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]


@@ -1,85 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-debian-stable-php-fpm
spec:
type: image
params:
- name: url
value: cr.lan/debian-stable-php-fpm
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-debian-stable-php-fpm
spec:
params:
- name: pathToContainerFile
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable-php-fpm/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable-php-fpm
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToContainerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
#workspaces:
# - name: workspace
# mountPath: /workspace
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-debian-stable-php-fpm
spec:
taskRef:
name: build-debian-stable-php-fpm
params:
- name: pathToContainerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-debian-stable-php-fpm
# workspaces:
# - name: workspace
# persistentVolumeClaim:
# claimName: tektoncd-workspaces
# subPath: workspaces


@@ -1,12 +1,13 @@
FROM debian:stable-slim
RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
apt-get update && apt-get install -y \
dnsutils procps nmap bash iputils-ping bash && \
RUN sed -i 's@deb.debian.org@apt-cache.service.nr5/deb.debian.org@g' /etc/apt/sources.list.d/debian.sources && \
sed -i 's@security.debian.org@apt-cache.service.nr5/security.debian.org@g' /etc/apt/sources.list.d/debian.sources
RUN apt-get remove -y --purge man-db ;\
apt-get autoremove -y --purge ;\
RUN apt-get update && apt-get install -y \
man-db- \
dnsutils procps nmap bash iputils-ping bash git
RUN apt-get autoremove -y --purge ;\
apt-get clean -y ;\
rm -rf /var/lib/apt/lists/* ;\
rm -rf /var/cache/apt/*


@@ -1,85 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-debian-stable
spec:
type: image
params:
- name: url
value: cr.lan/debian-stable
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-debian-stable
spec:
params:
- name: pathToContainerFile
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-stable
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToContainerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
#workspaces:
# - name: workspace
# mountPath: /workspace
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-debian-stable
spec:
taskRef:
name: build-debian-stable
params:
- name: pathToContainerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-debian-stable
# workspaces:
# - name: workspace
# persistentVolumeClaim:
# claimName: tektoncd-workspaces
# subPath: workspaces


@@ -1,12 +1,12 @@
FROM debian:testing-slim
RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
apt-get update && apt-get install -y \
dnsutils procps nmap bash iputils-ping bash
RUN sed -i 's@deb.debian.org@apt-cache.service.nr5/deb.debian.org@g' /etc/apt/sources.list.d/debian.sources && \
sed -i 's@security.debian.org@apt-cache.service.nr5/security.debian.org@g' /etc/apt/sources.list.d/debian.sources
RUN apt-get remove -y --purge man-db ;\
apt-get autoremove -y --purge ;\
RUN apt-get update && apt-get install -y \
dnsutils procps nmap bash iputils-ping bash git
RUN apt-get autoremove -y --purge ;\
apt-get clean -y ;\
rm -rf /var/lib/apt/lists/* ;\
rm -rf /var/cache/apt/*


@@ -1,85 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-debian-testing
spec:
type: image
params:
- name: url
value: cr.lan/debian-testing
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-debian-testing
spec:
params:
- name: pathToContainerFile
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-testing/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/_CI-CD/debian-testing
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToContainerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
#workspaces:
# - name: workspace
# mountPath: /workspace
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-debian-testing
spec:
taskRef:
name: build-debian-testing
params:
- name: pathToContainerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-debian-testing
# workspaces:
# - name: workspace
# persistentVolumeClaim:
# claimName: tektoncd-workspaces
# subPath: workspaces


@@ -1,13 +1,11 @@
FROM cr.lan/debian-stable-build-essential
FROM cr.wks/debian-stable-build-essential
RUN apt-get update && \
apt-get install -y \
gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf \
multiarch-support dpkg-dev distcc ccache \
dpkg-dev distcc ccache \
build-essential gcc cpp g++ clang llvm
RUN apt-get remove -y --purge man-db ;\
apt-get autoremove -y --purge ;\
apt-get clean -y ;\


@@ -1,76 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-distcc
spec:
type: image
params:
- name: url
value: cr.lan/distcc
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-distcc
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/distcc/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/distcc
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-distcc
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-distcc
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-distcc


@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: git-secret
type: Opaque
data:
token: Nzk1YTFhMGQxMWQ0MDJiY2FiOGM3MjkyZDk5ODIyMzg2NDNkM2U3OQo=


@@ -1,101 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: git-clone
spec:
workspaces:
- name: output
description: The git repo will be cloned onto the volume backing this workspace
params:
- name: url
description: git url to clone
type: string
default: http://git-ui.lan/chaos/kubernetes.git
- name: revision
description: git revision to checkout (branch, tag, sha, ref…)
type: string
default: master
- name: refspec
description: (optional) git refspec to fetch before checking out revision
default: ""
- name: submodules
description: defines if the resource should initialize and fetch the submodules
type: string
default: "true"
- name: depth
description: performs a shallow clone where only the most recent commit(s) will be fetched
type: string
default: "1"
- name: sslVerify
description: defines if http.sslVerify should be set to true or false in the global git config
type: string
default: "true"
- name: subdirectory
description: subdirectory inside the "output" workspace to clone the git repo into
type: string
default: ""
- name: deleteExisting
description: clean out the contents of the repo's destination directory (if it already exists) before trying to clone the repo there
type: string
default: "true"
- name: httpProxy
description: git HTTP proxy server for non-SSL requests
type: string
default: ""
- name: httpsProxy
description: git HTTPS proxy server for SSL requests
type: string
default: ""
- name: noProxy
description: git no proxy - opt out of proxying HTTP/HTTPS requests
type: string
default: ""
results:
- name: commit
description: The precise commit SHA that was fetched by this Task
steps:
- name: clone
image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.30.2
script: |
CHECKOUT_DIR="$(workspaces.output.path)/$(params.subdirectory)"
cleandir() {
# Delete any existing contents of the repo directory if it exists.
#
# We don't just "rm -rf $CHECKOUT_DIR" because $CHECKOUT_DIR might be "/"
# or the root of a mounted volume.
if [[ -d "$CHECKOUT_DIR" ]] ; then
# Delete non-hidden files and directories
rm -rf "$CHECKOUT_DIR"/*
# Delete files and directories starting with . but excluding ..
rm -rf "$CHECKOUT_DIR"/.[!.]*
# Delete files and directories starting with .. plus any other character
rm -rf "$CHECKOUT_DIR"/..?*
fi
}
if [[ "$(params.deleteExisting)" == "true" ]] ; then
cleandir
fi
test -z "$(params.httpProxy)" || export HTTP_PROXY=$(params.httpProxy)
test -z "$(params.httpsProxy)" || export HTTPS_PROXY=$(params.httpsProxy)
test -z "$(params.noProxy)" || export NO_PROXY=$(params.noProxy)
/ko-app/git-init \
-url "$(params.url)" \
-revision "$(params.revision)" \
-refspec "$(params.refspec)" \
-path "$CHECKOUT_DIR" \
-sslVerify="$(params.sslVerify)" \
-submodules="$(params.submodules)" \
-depth "$(params.depth)"
cd "$CHECKOUT_DIR"
RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')"
EXIT_CODE="$?"
if [ "$EXIT_CODE" != 0 ]
then
exit $EXIT_CODE
fi
# Make sure we don't add a trailing newline to the result!
echo -n "$RESULT_SHA" > $(results.commit.path)


@@ -1,43 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: kaniko-pipeline
spec:
params:
- name: git-url
- name: git-revision
- name: image-name
- name: path-to-image-context
- name: path-to-dockerfile
workspaces:
- name: git-source
tasks:
- name: fetch-from-git
taskRef:
name: git-clone
params:
- name: url
value: $(params.git-url)
- name: revision
value: $(params.git-revision)
- name: submodules
value: false
workspaces:
- name: output
workspace: git-source
- name: build-image
taskRef:
name: kaniko
params:
- name: IMAGE
value: $(params.image-name)
- name: CONTEXT
value: $(params.path-to-image-context)
- name: DOCKERFILE
value: $(params.path-to-dockerfile)
workspaces:
- name: source
workspace: git-source
# If you want you can add a Task that uses the IMAGE_DIGEST from the kaniko task
# via $(tasks.build-image.results.IMAGE_DIGEST) - this was a feature we hadn't been
# able to fully deliver with the Image PipelineResource!


@@ -1,65 +0,0 @@
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: kaniko
labels:
app.kubernetes.io/version: "0.5"
annotations:
tekton.dev/pipelines.minVersion: "0.17.0"
tekton.dev/categories: Image Build
tekton.dev/tags: image-build
tekton.dev/displayName: "Build and upload container image using Kaniko"
tekton.dev/platforms: "linux/arm64"
spec:
description: >-
This Task builds source into a container image using Google's kaniko tool.
Kaniko doesn't depend on a Docker daemon and executes each
command within a Dockerfile completely in userspace. This enables
building container images in environments that can't easily or
securely run a Docker daemon, such as a standard Kubernetes cluster.
params:
- name: IMAGE
description: Name (reference) of the image to build.
- name: DOCKERFILE
description: Path to the Dockerfile to build.
default: ./Dockerfile
- name: CONTEXT
description: The build context used by Kaniko.
default: ./
- name: EXTRA_ARGS
type: array
default: []
- name: BUILDER_IMAGE
description: The image on which builds will run (default is v1.5.1)
default: gcr.io/kaniko-project/executor:v1.8.0
#default: gcr.io/kaniko-project/executor:v1.5.1@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5
workspaces:
- name: source
description: Holds the context and docker file
- name: dockerconfig
description: Includes a docker `config.json`
optional: true
mountPath: /kaniko/.docker
results:
- name: IMAGE-DIGEST
description: Digest of the image just built.
steps:
- name: build-and-push
workingDir: $(workspaces.source.path)
image: $(params.BUILDER_IMAGE)
args:
- $(params.EXTRA_ARGS[*])
- --dockerfile=$(workspaces.source.path)/$(params.DOCKERFILE)
- --context=$(params.CONTEXT) # The user does not need to care the workspace and the source.
- --destination=$(params.IMAGE)
- --digest-file=/tekton/results/IMAGE-DIGEST
- --snapshotMode=redo
- --skip-tls-verify
# kaniko assumes it is running as root, which means this example fails on platforms
# that default to run containers as random uid (like OpenShift). Adding this securityContext
# makes it explicit that it needs to run as root.
securityContext:
runAsUser: 0


@@ -1,73 +0,0 @@
#!/usr/bin/python3
import kubernetes as k8s
from pint import UnitRegistry
from collections import defaultdict

__all__ = ["compute_allocated_resources"]

def compute_allocated_resources():
    ureg = UnitRegistry()
    ureg.load_definitions('kubernetes_units.txt')
    Q_ = ureg.Quantity
    data = {}

    # doing this computation within a k8s cluster
    k8s.config.load_kube_config()
    core_v1 = k8s.client.CoreV1Api()

    # print("Listing pods with their IPs:")
    # ret = core_v1.list_pod_for_all_namespaces(watch=False)
    # for i in ret.items:
    #     print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

    for node in core_v1.list_node().items:
        stats = {}
        node_name = node.metadata.name
        allocatable = node.status.allocatable
        max_pods = int(int(allocatable["pods"]) * 1.5)
        # print("{} ALLOC: {} MAX_PODS: {}".format(node_name,allocatable,max_pods))
        field_selector = ("status.phase!=Succeeded,status.phase!=Failed," +
                          "spec.nodeName=" + node_name)

        stats["cpu_alloc"] = Q_(allocatable["cpu"])
        stats["mem_alloc"] = Q_(allocatable["memory"])

        pods = core_v1.list_pod_for_all_namespaces(limit=max_pods,
                                                   field_selector=field_selector).items

        # compute the allocated resources
        cpureqs, cpulmts, memreqs, memlmts = [], [], [], []
        for pod in pods:
            for container in pod.spec.containers:
                res = container.resources
                reqs = defaultdict(lambda: 0, res.requests or {})
                lmts = defaultdict(lambda: 0, res.limits or {})
                cpureqs.append(Q_(reqs["cpu"]))
                memreqs.append(Q_(reqs["memory"]))
                cpulmts.append(Q_(lmts["cpu"]))
                memlmts.append(Q_(lmts["memory"]))

        stats["cpu_req"] = sum(cpureqs)
        stats["cpu_lmt"] = sum(cpulmts)
        stats["cpu_req_per"] = (stats["cpu_req"] / stats["cpu_alloc"] * 100)
        stats["cpu_lmt_per"] = (stats["cpu_lmt"] / stats["cpu_alloc"] * 100)
        stats["mem_req"] = sum(memreqs)
        stats["mem_lmt"] = sum(memlmts)
        stats["mem_req_per"] = (stats["mem_req"] / stats["mem_alloc"] * 100)
        stats["mem_lmt_per"] = (stats["mem_lmt"] / stats["mem_alloc"] * 100)

        data[node_name] = stats

    return data

if __name__ == "__main__":
    # execute only if run as a script
    print(compute_allocated_resources())


@@ -1,20 +0,0 @@
# memory units
kmemunits = 1 = [kmemunits]
Ki = 1024 * kmemunits
Mi = Ki^2
Gi = Ki^3
Ti = Ki^4
Pi = Ki^5
Ei = Ki^6
# cpu units
kcpuunits = 1 = [kcpuunits]
m = 1/1000 * kcpuunits
k = 1000 * kcpuunits
M = k^2
G = k^3
T = k^4
P = k^5
E = k^6
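
The unit definitions above exist so that pint can parse the quantity strings returned by the Kubernetes API ('100m' of CPU, '170Mi' of memory), which is what compute_allocated_resources relies on. A small sketch of how they behave once loaded, assuming the definitions are saved as kubernetes_units.txt next to the script (illustrative, not part of the repository):

#!/usr/bin/env python3
# Quick check of the Kubernetes-style unit definitions shown above.
from pint import UnitRegistry

ureg = UnitRegistry()
ureg.load_definitions("kubernetes_units.txt")  # the definitions file above
Q_ = ureg.Quantity

cpu = Q_("500m")   # 500 millicores, as requested by the descheduler cronjob
mem = Q_("256Mi")  # 256 MiB

print(cpu.to("kcpuunits"))  # 0.5 kcpuunits, i.e. half a core
print(mem.to("Gi"))         # 0.25 Gi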


@@ -1,6 +0,0 @@
Descheduler (reschedule pods)
# https://github.com/kubernetes-sigs/descheduler
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/base/rbac.yaml
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/base/configmap.yaml
# kubectl apply -n kube-system -f https://raw.githubusercontent.com/kubernetes-sigs/descheduler/master/kubernetes/job/job.yaml

File diff suppressed because it is too large.


@@ -1,167 +0,0 @@
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
file /etc/coredns/lan.db lan
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
lan.db: "; lan. zone file\n$ORIGIN lan.\n@ 3600 IN SOA sns.dns.icann.org.
noc.dns.icann.org. 2021102006 7200 3600 1209600 3600\n 3600 IN NS 172.23.255.252\n\nns
\ IN A 172.23.255.252\nsalt IN A 192.168.10.2 \nmqtt
\ IN A 172.16.23.1\nwww-proxy IN A 172.23.255.1\ngit IN
\ A 172.23.255.2\npostgresql IN A 172.23.255.4\nmariadb IN A
\ 172.23.255.5\npihole IN A 172.23.255.253\nadm IN CNAME
adm01.wks.\n\nprometheus IN CNAME www-proxy \nalertmanager IN CNAME
www-proxy\nstats IN CNAME www-proxy\ncr-ui IN CNAME
www-proxy\napt IN CNAME www-proxy\napt-cache IN CNAME
www-proxy\nnodered IN CNAME www-proxy\nfoto IN CNAME
www-proxy\nmusik IN CNAME www-proxy\nhassio IN CNAME
www-proxy\nhassio-conf IN CNAME www-proxy \ngit-ui IN CNAME
www-proxy\ngrav IN CNAME www-proxy\ntekton IN CNAME
www-proxy\nnc IN CNAME www-proxy\nauth IN CNAME
www-proxy\npublic.auth IN CNAME www-proxy \nsecure.auth IN CNAME
www-proxy\ndocker-registry IN CNAME adm\ncr IN CNAME adm\ndr-mirror
\ IN CNAME adm\nlog IN CNAME adm\n"
---
apiVersion: v1
kind: Service
metadata:
name: dns-ext
namespace: kube-system
spec:
ports:
- name: dns-udp
protocol: UDP
port: 53
targetPort: 53
selector:
k8s-app: kube-dns
type: LoadBalancer
loadBalancerIP: 172.23.255.252
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: kube-dns
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
k8s-app: kube-dns
spec:
containers:
- args:
- -conf
- /etc/coredns/Corefile
image: k8s.gcr.io/coredns/coredns:v1.8.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 8181
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
dnsPolicy: Default
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: coredns
serviceAccountName: coredns
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
volumes:
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
- key: lan.db
path: lan.db
name: coredns
name: config-volume


@@ -1,47 +0,0 @@
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: descheduler-cronjob
namespace: kube-system
spec:
schedule: "*/15 * * * *"
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
template:
metadata:
name: descheduler-pod
spec:
priorityClassName: system-cluster-critical
containers:
- name: descheduler
image: k8s.gcr.io/descheduler/descheduler:v0.22.0
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--v"
- "3"
resources:
requests:
cpu: "500m"
memory: "256Mi"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: false
restartPolicy: "Never"
serviceAccountName: descheduler-sa
volumes:
- name: policy-volume
configMap:
name: descheduler-policy-configmap


@@ -1,35 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: descheduler-policy-configmap
namespace: kube-system
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
"RemoveDuplicates":
enabled: true
"RemovePodsViolatingInterPodAntiAffinity":
enabled: true
"RemovePodsViolatingInterPodAntiAffinity":
enabled: true
"LowNodeUtilization":
enabled: true
params:
nodeResourceUtilizationThresholds:
thresholds:
"cpu": 30
"memory": 40
"pods": 10
targetThresholds:
"cpu": 50
"memory": 60
"pods": 20
nodeFit: true
"RemovePodsViolatingTopologySpreadConstraint":
enabled: true
params:
includeSoftConstraints: false


@@ -1,10 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
#namespace: nginx-ingress
namespace: default
data:
proxy-connect-timeout: "10s"
proxy-read-timeout: "10s"
client-max-body-size: "0"


@@ -1,674 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
loadBalancerIP: 172.23.255.1
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
appProtocol: http
- name: https
port: 443
protocol: TCP
targetPort: https
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: nginx
namespace: ingress-nginx
spec:
controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
matchPolicy: Equivalent
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
namespace: ingress-nginx
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-4.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000
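
The rendered chart pins the controller Service to loadBalancerIP 172.23.255.1 (an address from the MetalLB pool defined further down) and registers the `nginx` IngressClass. As a minimal sketch of what this controller then serves — name, namespace and host below are made-up examples, not from this repo:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example             # hypothetical
  namespace: default        # hypothetical
spec:
  ingressClassName: nginx   # matches the IngressClass rendered above
  rules:
    - host: example.lan     # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc   # hypothetical backend Service
                port:
                  number: 80
```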

@@ -1,223 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "172.23.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.14.0
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.14.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
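
net-conf.json sets the pod network to 172.23.0.0/16 with the vxlan backend; this has to agree with the pod CIDR the cluster was bootstrapped with. A sketch, assuming kubeadm is used — with `kubeadm init` this is the `--pod-network-cidr` flag, or in config form:

```
# Sketch only: the pod CIDR handed to flannel must match the cluster's.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 172.23.0.0/16
```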

@@ -1,21 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: loki-data
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/loki-data
server: ebin02
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: storage-loki-0
namespace: monitoring
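
This PV is statically pre-bound: volumeName/claimRef pin it to the claim `storage-loki-0` in `monitoring`, the name a StatefulSet called `loki` with a volumeClaimTemplate `storage` would generate. A hedged sketch of the matching claim, assuming exactly that layout:

```
# Sketch of the claim the PV above pre-binds to.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-loki-0
  namespace: monitoring
spec:
  storageClassName: nfs-ssd-ebin02
  volumeName: loki-data     # pins the claim to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```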

@@ -1,12 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.23.255.1-172.23.255.254

@@ -1,12 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.23.255.1-172.23.255.254

@@ -1,9 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: minio-openwrt
type: Opaque
data:
username: b3BlbndydAo=
password: ZUZWbmVnOEkwOE1zRTN0Q2VCRFB4c011OU0yVjJGdnkK
endpoint: aHR0cHM6Ly9taW5pby5saXZlLWluZnJhLnN2Yy5jbHVzdGVyLmxvY2FsOjk0NDMK
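
The data values are base64: `username` decodes to `openwrt` and `endpoint` to `https://minio.live-infra.svc.cluster.local:9443`, each with a trailing newline — the telltale of piping through `echo` without `-n`, which some clients trip over. The same Secret is easier to review written with `stringData` (plain text, encoded on admission); the password below is a placeholder, not the decoded original:

```
apiVersion: v1
kind: Secret
metadata:
  name: minio-openwrt
type: Opaque
stringData:
  username: openwrt
  password: <placeholder>   # not the real credential
  endpoint: https://minio.live-infra.svc.cluster.local:9443
```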

@@ -1,36 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd
provisioner: nfs-ssd # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd-ebin01
provisioner: nfs-ssd-ebin01 # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-hdd-ebin01
provisioner: nfs-hdd-ebin01 # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-ssd-ebin02
provisioner: nfs-ssd-ebin02 # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
reclaimPolicy: Retain
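
Each class's `provisioner` string is matched against the PROVISIONER_NAME env of one of the nfs-client-provisioner deployments below; a claim against a class is then satisfied by a PV the matching provisioner creates on its NFS export. A minimal sketch (name and namespace are made up):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim       # hypothetical
  namespace: live-env
spec:
  storageClassName: nfs-ssd-ebin01
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```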

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-hdd-ebin01
namespace: live-infra
labels:
app: nfs-hdd-ebin01
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-hdd-ebin01
template:
metadata:
labels:
app: nfs-hdd-ebin01
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-hdd-ebin01
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-hdd-ebin01
- name: NFS_SERVER
value: ebin01
- name: NFS_PATH
value: /data/raid1-hdd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin01
path: /data/raid1-hdd/k8s-data

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-ssd-ebin01
namespace: live-infra
labels:
app: nfs-ssd-ebin01
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-ssd-ebin01
template:
metadata:
labels:
app: nfs-ssd-ebin01
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-ssd-ebin01
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-ssd-ebin01
- name: NFS_SERVER
value: ebin01
- name: NFS_PATH
value: /data/raid1-ssd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin01
path: /data/raid1-ssd/k8s-data

@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-ssd-ebin02
namespace: live-infra
labels:
app: nfs-ssd-ebin02
service: nfs
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-ssd-ebin02
template:
metadata:
labels:
app: nfs-ssd-ebin02
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-ssd-ebin02
image: quay.io/external_storage/nfs-client-provisioner-arm:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-ssd-ebin02
- name: NFS_SERVER
value: ebin02
- name: NFS_PATH
value: /data/raid1-ssd/k8s-data
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- nfs
topologyKey: kubernetes.io/hostname
volumes:
- name: nfs-client-root
nfs:
server: ebin02
path: /data/raid1-ssd/k8s-data

@@ -1,65 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: live-infra
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io

@@ -1,19 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: live-env
---
apiVersion: v1
kind: Namespace
metadata:
name: test-env
---
apiVersion: v1
kind: Namespace
metadata:
name: live-infra
---
apiVersion: v1
kind: Namespace
metadata:
name: test-infra

@@ -1,4 +1,4 @@
FROM debian:bullseye
FROM debian:stable
ENV DEBIAN_FRONTEND noninteractive
ARG DEVPKGS="git make cmake gcc g++ python-dev libsqlcipher-dev"

@@ -1,4 +1,4 @@
FROM cr.lan/debian-stable
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
apt-cacher-ng && \

@@ -76,9 +76,30 @@ kind: PersistentVolumeClaim
metadata:
name: apt-cacher-volume
spec:
storageClassName: nfs-ssd
storageClassName: nfs-ssd-ebin02
volumeName: apt-cacher-ng
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: apt-cacher-ng
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/apt-cacher-ng
server: ebin02
capacity:
storage: 40Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: apt-cacher-volume
namespace: live-infra

@@ -1,76 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-apt-cacher-ng
spec:
type: image
params:
- name: url
value: cr.lan/apt-cacher-ng
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-apt-cacher-ng
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/apt-cacher-ng/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/apt-cacher-ng
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-apt-cacher-ng
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-apt-cacher-ng
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-apt-cacher-ng

@@ -1,73 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: apt-cacher-ng-test
namespace: test
labels:
app: apt-cacher-ng-test
spec:
replicas: 1
selector:
matchLabels:
app: apt-cacher-ng-test
strategy:
type: Recreate
template:
metadata:
labels:
app: apt-cacher-ng-test
spec:
containers:
- name: apt-cacher-ng-test
image: docker-registry.lan/apt-cacher-ng:arm64
imagePullPolicy: Always
ports:
- containerPort: 3142
protocol: TCP
volumeMounts:
- mountPath: /var/cache/apt-cacher-ng
name: data
resources:
requests:
memory: 64Mi
cpu: 50m
limits:
memory: 128Mi
cpu: 100m
volumes:
- name: data
persistentVolumeClaim:
claimName: apt-cacher-volume-test
#---
#apiVersion: v1
#kind: Service
#metadata:
# name: apt-cacher-ng
# labels:
# app: apt-cacher-ng
#spec:
# type: LoadBalancer
# loadBalancerIP: 172.23.255.1
# ports:
# - name: apt-cacher-ng
# port: 3142
# targetPort: 3142
# protocol: TCP
# selector:
# app: apt-cacher-ng
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: apt-cacher-volume-test
namespace: test
#annotations:
# volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
storageClassName: csi-s3-slow
#storageClassName: fast
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 40Gi

@@ -1,7 +0,0 @@
FROM: https://tanzu.vmware.com/developer/guides/ci-cd/argocd-gs/
# kubectl apply -f namespace.yaml
# -kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml-
# kubectl apply -n argocd -f install.yaml (needs changes for ARM builds)
# kubectl apply -n argocd -f ingress.yaml

@@ -1,18 +0,0 @@
#https://argoproj.github.io/argo-cd/operator-manual/ingress/#kubernetesingress-nginx
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server
namespace: argocd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: argocd.lan
http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
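
This deleted manifest still used extensions/v1beta1, which was removed in Kubernetes 1.22. A sketch of the same Ingress on networking.k8s.io/v1 (ssl-passthrough additionally requires the controller to run with --enable-ssl-passthrough):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: argocd.lan
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: argocd-server
                port:
                  name: https
```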

File diff suppressed because it is too large.

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: argocd

@@ -1,5 +1,4 @@
FROM debian:stable-slim
FROM cr.wks/debian-stable
RUN apt-get update && apt-get install -y \
curl procps && \
apt-get clean -y && \

@@ -1,76 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-curl
spec:
type: image
params:
- name: url
value: cr.lan/curl
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-curl
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/curl/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/curl
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-curl-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-curl
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-curl

@@ -20,7 +20,6 @@ spec:
spec:
containers:
- name: registry-ui
#image: cr.lan/docker-registry-ui:arm64
image: docker.io/joxit/docker-registry-ui:latest
imagePullPolicy: Always
env:

@@ -1,76 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-dolibarr
spec:
type: image
params:
- name: url
value: cr.lan/dolibarr
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-dolibarr
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/dolibarr/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/dolibarr
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-dolibarr-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-dolibarr
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-dolibarr

@@ -1,7 +1,7 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: build-rompr
name: img-dolibarr
spec:
pipelineRef:
name: kaniko-pipeline
@@ -11,13 +11,13 @@ spec:
- name: git-revision
value: master
- name: path-to-image-context
value: apps/rompr
value: apps/dolibarr
- name: path-to-dockerfile
value: apps/rompr/Dockerfile
value: apps/dolibarr/Dockerfile
- name: image-name
value: cr.lan/rompr
value: cr.lan/dolibarr
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: usr_src/tekton-kaniko-pipelines
subPath: tekton/dolibarr

@@ -11,6 +11,8 @@ metadata:
release: latest
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: gitea
@@ -24,7 +26,6 @@ spec:
containers:
- name: gitea
image: gitea/gitea:latest
imagePullPolicy: Always
env:
- name: USER_UID
value: "1000"
@@ -32,6 +33,8 @@ spec:
value: "1000"
- name: TZ
value: "Europe/Berlin"
- name: GITEA__lfs__PATH
value: /data/git/lfs
- name: DB_TYPE
value: postgres
- name: DB_HOST
@@ -42,6 +45,26 @@ spec:
value: gitea
- name: DB_PASSWD
value: giteaEu94XSS4gKpheSBoMsIs
#- name: GITEA__indexer__ISSUE_INDEXER
#value: redis
#- name: GITEA__indexer__ISSUE_INDEXER_QUEUE_CONN_STR
#value: addrs=redis-standalone.live-env.svc.cluster.local:6379 db=1
- name: GITEA__packages__ENABLED
value: "true"
- name: GITEA__log__LEVEL
value: warn
- name: GITEA__log__MODE
value: file
- name: GITEA__log__ROUTER
value: file
- name: GITEA__log__MACARON
value: file
#- name: GITEA__queue__TYPE
#value: redis
#- name: GITEA__queue__CONN_STR
#value: redis://redis-standalone.live-env.svc.cluster.local:6397/0
- name: GITEA__server__ROOT_URL
value: http://git-ui.lan/
volumeMounts:
- name: gitea
mountPath: /data
@@ -53,20 +76,24 @@ spec:
containerPort: 22
protocol: TCP
livenessProbe:
initialDelaySeconds: 300
periodSeconds: 10
httpGet:
path: /
port: http
readinessProbe:
initialDelaySeconds: 300
periodSeconds: 10
httpGet:
path: /
port: http
resources:
requests:
memory: "200Mi"
memory: "300Mi"
cpu: "150m"
limits:
memory: "312Mi"
cpu: "500m"
memory: "512Mi"
cpu: "1000m"
volumes:
- name: gitea
persistentVolumeClaim:
@@ -79,7 +106,8 @@ metadata:
labels:
app: gitea
spec:
storageClassName: nfs-ssd
storageClassName: nfs-ssd-ebin02
volumeName: gitea
accessModes:
- ReadWriteOnce
resources:
@@ -87,6 +115,26 @@ spec:
storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: gitea
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/gitea-data
server: ebin02
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: gitea
namespace: live-env
---
apiVersion: v1
kind: Service
metadata:
name: gitea

@@ -1,6 +1,6 @@
FROM cr.lan/debian-stable-php-fpm
ENV DEBIAN_FRONTEND noninteractive
ARG GRAV_VERSION=1.6.28
ARG GRAV_VERSION=1.7.34
ARG DEV_PKGS="zlib1g-dev libpng-dev libjpeg-dev libfreetype6-dev \
libcurl4-gnutls-dev libxml2-dev libonig-dev"

@@ -1,5 +1,5 @@
# vim:set ft=dockerfile:
FROM debian:buster-slim
FROM cr.lan/debian-stable
RUN set -ex; \
apt-get update; \

@@ -1,5 +1,5 @@
# vim:set ft=dockerfile:
FROM debian:buster-slim
FROM cr.lan/debian-stable
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

@@ -1,76 +1,23 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: chaos-kubernetes-git
name: img-mariadb-prometheus-node-exporter
spec:
type: git
pipelineRef:
name: kaniko-pipeline
params:
- name: revision
value: master
- name: url
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mariadb-prometheus-exporter
spec:
type: image
params:
- name: url
value: cr.lan/mariadb-prometheus-exporter
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mariadb-prometheus-exporter
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/mariadb/mariadb-prometheus/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/mariadb/mariadb-prometheus
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mariadb-prometheus-exporter-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-mariadb-prometheus-exporter
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-mariadb-prometheus-exporter
- name: git-revision
value: master
- name: path-to-image-context
value: apps/mariadb/mariadb-prometheus
- name: path-to-dockerfile
value: apps/mariadb/mariadb-prometheus/Dockerfile
- name: image-name
value: cr.lan/mariadb-prometheus-node-exporter
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/mariadb-prometheus-node-exporter
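
These PipelineRuns all hand the same five params and a git-source workspace to a shared `kaniko-pipeline`, whose definition is not part of this diff. A hypothetical sketch of what it presumably looks like, assuming the Tekton catalog `git-clone` and `kaniko` tasks are installed:

```
# Hypothetical reconstruction — the real kaniko-pipeline lives elsewhere.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: kaniko-pipeline
spec:
  params:
    - name: git-url
      type: string
    - name: git-revision
      type: string
      default: master
    - name: path-to-image-context
      type: string
    - name: path-to-dockerfile
      type: string
    - name: image-name
      type: string
  workspaces:
    - name: git-source
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone          # Tekton catalog task (assumed installed)
      workspaces:
        - name: output
          workspace: git-source
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
    - name: build-and-push
      runAfter: ["fetch-repo"]
      taskRef:
        name: kaniko             # Tekton catalog task (assumed installed)
      workspaces:
        - name: source
          workspace: git-source
      params:
        - name: IMAGE
          value: $(params.image-name)
        - name: DOCKERFILE
          value: $(params.path-to-dockerfile)
        - name: CONTEXT
          value: $(params.path-to-image-context)
```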

@@ -1,76 +1,23 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: chaos-kubernetes-git
name: img-mariadb
spec:
type: git
pipelineRef:
name: kaniko-pipeline
params:
- name: revision
value: master
- name: url
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mariadb
spec:
type: image
params:
- name: url
- name: git-revision
value: master
- name: path-to-image-context
value: apps/mariadb/mariadb
- name: path-to-dockerfile
value: apps/mariadb/mariadb/Dockerfile
- name: image-name
value: cr.lan/mariadb
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mariadb
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/mariadb/mariadb/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/mariadb/mariadb
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mariadb-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-mariadb
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-mariadb
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/mariadb

View File

@@ -0,0 +1,19 @@
FROM cr.wks/debian-golang AS build
ENV GOARCH=arm64
ENV GOPATH=/usr/src/gopath
ENV GOCACHE=/usr/src/gocache
RUN go env
WORKDIR /usr/src
RUN go install github.com/sapcc/mosquitto-exporter@latest
#RUN go mod download
FROM cr.wks/debian-stable
LABEL source_repository="https://github.com/sapcc/mosquitto-exporter"
COPY --from=build /usr/src/gopath/bin/mosquitto-exporter /mosquitto-exporter
RUN chmod 0755 /mosquitto-exporter
EXPOSE 9234
ENTRYPOINT [ "/mosquitto-exporter" ]
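
The exporter listens on 9234; a sketch of a Prometheus scrape job for it (the Service name is a made-up example — in-cluster one would more likely use kubernetes_sd_configs):

```
scrape_configs:
  - job_name: mosquitto
    static_configs:
      - targets: ['mosquitto-exporter.live-env.svc.cluster.local:9234']  # hypothetical Service
```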

@@ -1,8 +1,6 @@
FROM debian:stable-slim
FROM cr.wks/debian-stable
RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
apt-get update && \
RUN apt-get update && \
apt-get install -y --no-install-recommends \
mosquitto procps && \
apt-get clean -y && \

apps/mosquitto/bla (new, empty file)

@@ -1,93 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: github-mosquitto-prometheus-exporter
spec:
type: git
params:
- name: revision
value: master
- name: url
value: https://github.com/sapcc/mosquitto-exporter.git
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mosquitto-prometheus-exporter
spec:
type: image
params:
- name: url
value: cr.lan/mosquitto-prometheus-exporter
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mosquitto-prometheus-exporter
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-binary
image: cr.lan/debian-golang-stable
script: |
#!/usr/bin/env bash
cd $(resources.inputs.source.path)
ls -al
export GOARCH=arm64
export GOPATH=/usr/src/gopath
export GOCACHE=/usr/src/gocache
go env
go get github.com/sapcc/mosquitto-exporter
make -j4 build CGO_ENABLED=0
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
workspaces:
- name: usr-src
mountPath: /usr/src
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mosquitto-prometheus-exporter
spec:
taskRef:
name: build-mosquitto-prometheus-exporter
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: github-mosquitto-prometheus-exporter
outputs:
- name: builtImage
resourceRef:
name: img-mosquitto-prometheus-exporter
workspaces:
- name: usr-src
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: usr_src

@@ -1,77 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-mosquitto
spec:
type: image
params:
- name: url
value: cr.lan/mosquitto
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-mosquitto
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/mosquitto/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/mosquitto
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-mosquitto-taskrun
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-mosquitto
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-mosquitto

@@ -1,4 +1,4 @@
FROM nextcloud:23-fpm
FROM nextcloud:24-fpm
#needed for some reason
ENV NEXTCLOUD_UPDATE=1
@@ -6,7 +6,7 @@ ENV NEXTCLOUD_UPDATE=1
RUN sed -i 's@deb.debian.org@apt-cache.lan/deb.debian.org@g' /etc/apt/sources.list && \
sed -i 's@security.debian.org@apt-cache.lan/security.debian.org@g' /etc/apt/sources.list && \
apt-get update && apt-get install -y \
procps bash iputils-ping libmagickcore-6.q16-6-extra
procps bash iputils-ping libmagickcore-6.q16-6-extra vim-tiny
RUN apt-get clean -y && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

@@ -3,7 +3,7 @@
// Manually deployed by yourself
//
$CONFIG = array(
'config_is_read_only' => true,
'config_is_read_only' => false,
'htaccess.RewriteBase' => '/',
'memcache.local' => '\\OC\\Memcache\\APCu',
'apps_paths' => array(
@@ -46,7 +46,7 @@ $CONFIG = array(
),
'datadirectory' => '/var/www/html/data',
'dbtype' => 'pgsql',
'version' => '23.0.0',
'version' => '24.0.0',
'overwrite.cli.url' => 'http://nc.lan',
'dbname' => 'nextcloud',
'dbhost' => 'postgres.live-env.svc.cluster.local:5432',

@@ -1,77 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-nextcloud
spec:
type: image
params:
- name: url
value: cr.lan/nextcloud
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-nextcloud
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/nextcloud/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/nextcloud
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:arm64
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-nextcloud
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: build-nextcloud
params:
- name: pathToDockerFile
value: Dockerfile
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-nextcloud

@@ -0,0 +1,23 @@
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: img-nextcloud
spec:
pipelineRef:
name: kaniko-pipeline
params:
- name: git-url
value: http://git-ui.lan/chaos/kubernetes.git
- name: git-revision
value: master
- name: path-to-image-context
value: apps/nextcloud
- name: path-to-dockerfile
value: apps/nextcloud/Dockerfile
- name: image-name
value: cr.lan/nextcloud
workspaces:
- name: git-source
persistentVolumeClaim:
claimName: tektoncd-workspaces
subPath: tekton/nextcloud

@@ -1,42 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
run: nginx-deployment
template:
metadata:
labels:
run: nginx-deployment
spec:
containers:
- image: nginx
name: nginx-webserver
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
run: nginx-deployment
ports:
- port: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-test
spec:
rules:
- host: nginx-test.lan
http:
paths:
- backend:
serviceName: nginx-service
servicePort: 80

@@ -1,4 +1,4 @@
FROM debian:stable-slim
FROM cr.lan/debian-stable
#RUN echo 'Acquire::http::proxy "http://172.23.255.1:3142";' >/etc/apt/apt.conf.d/proxy
RUN apt-get update && apt-get install -y \

@@ -49,7 +49,7 @@ spec:
- key: app
operator: In
values:
- promtheus
- prometheus
- loki
topologyKey: kubernetes.io/hostname
# - name: prometheus-exporter

apps/redis.yaml (new file, 103 lines)

@@ -0,0 +1,103 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cm
namespace: live-env
data:
redis.conf: |-
bind * -::*
appendonly yes
maxmemory 5mb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-standalone
namespace: live-env
spec:
serviceName: redis-standalone
replicas: 1
selector:
matchLabels:
app: redis-standalone
template:
metadata:
labels:
app: redis-standalone
spec:
containers:
- name: redis-standalone
image: redis
command: ["redis-server"]
args: ["/usr/local/etc/redis/redis.conf"]
resources:
limits:
memory: "128Mi"
cpu: "50m"
ports:
- containerPort: 6379
volumeMounts:
- name: redis-standalone-pv
mountPath: /data
- name: config
mountPath: /usr/local/etc/redis
volumes:
- name: config
configMap:
name: redis-cm
- name: redis-standalone-pv
persistentVolumeClaim:
claimName: redis-standalone-pv
---
apiVersion: v1
kind: Service
metadata:
name: redis-standalone
labels:
app: redis-standalone
env: live-env
spec:
selector:
app: redis-standalone
type: LoadBalancer
loadBalancerIP: 172.23.255.6
ports:
- name: redis-standalone
port: 6379
targetPort: 6379
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-standalone-pv
labels:
app: redis-standalone
spec:
storageClassName: nfs-ssd-ebin02
volumeName: redis-standalone-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-standalone-pv
spec:
storageClassName: "nfs-ssd-ebin02"
nfs:
path: /data/raid1-ssd/k8s-data/redis-standalone-pv
server: ebin02
capacity:
storage: 100Mi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: redis-standalone-pv
namespace: live-env

@@ -1,10 +1,9 @@
FROM cr.lan/debian-stable-php-fpm
FROM cr.chaos/debian-stable-php-fpm as baseimage
ARG ROMPR_VERSION=1.60.1
ARG ROMPR_VERSION=2.24
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get -y install \
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install \
nginx \
curl \
unzip
@@ -19,21 +18,24 @@ RUN mkdir -p /app /rompr
RUN unzip -d /app rompr.zip && rm rompr.zip
RUN ln -sf /rompr/prefs /app/rompr/prefs; ln -sf /rompr/albumart /app/rompr/albumart;
RUN chown -R www-data:www-data /app/rompr /rompr
RUN pwd; ls -la .;ls -la /workspace/source;
RUN pwd; ls -la .;ls -la /etc/php/
ADD files/nginx_default /etc/nginx/sites-available/default
RUN mkdir -p /run/php/
FROM baseimage as final
#Environment variables to configure php
RUN sed -ri -e 's/^allow_url_fopen =.*/allow_url_fopen = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^memory_limit =.*/memory_limit = 128M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^max_execution_time =.*/max_execution_time = 1800/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^post_max_size =.*/post_max_size = 256M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^upload_max_filesize =.*/upload_max_filesize = 8M/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^max_file_uploads =.*/max_file_uploads = 50/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^display_errors =.*/display_errors = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^display_startup_errors =.*/display_startup_errors = On/g' /etc/php/7.4/fpm/php.ini
RUN sed -ri -e 's/^allow_url_fopen =.*/allow_url_fopen = On/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^memory_limit =.*/memory_limit = 128M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^max_execution_time =.*/max_execution_time = 1800/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^post_max_size =.*/post_max_size = 256M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^upload_max_filesize =.*/upload_max_filesize = 8M/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^max_file_uploads =.*/max_file_uploads = 50/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^display_errors =.*/display_errors = On/g' /etc/php/8.4/fpm/php.ini && \
sed -ri -e 's/^display_startup_errors =.*/display_startup_errors = On/g' /etc/php/8.4/fpm/php.ini
RUN echo "<?php phpinfo(); ?>" > /app/rompr/phpinfo.php
RUN update-rc.d php7.4-fpm defaults
RUN update-rc.d php8.4-fpm defaults
ADD files/run-httpd /usr/local/bin/
RUN chmod 755 /usr/local/bin/run-httpd
EXPOSE 80

@@ -1,3 +1,5 @@
lighttpd is configured in etc_lighttpd
generate a configmap with:
kubectl create configmap rompr-lighttpd-config --from-file etc_lighttpd/
Run with:
```
podman run --pull=always -d --replace -p 127.0.0.1:8081:80 \
--mount=type=bind,source=/var/lib/rompr,destination=/rompr \
--tz=Europe/Berlin --name=rompr cr.wks/rompr:latest
```

@@ -1,73 +0,0 @@
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: rompr
spec:
selector:
matchLabels:
app: rompr
strategy:
type: Recreate
template:
metadata:
labels:
app: rompr
spec:
containers:
- image: cr.lan/rompr
name: rompr
imagePullPolicy: Always
ports:
- containerPort: 80
name: http
volumeMounts:
- name: rompr-data
mountPath: /rompr
volumes:
- name: rompr-data
persistentVolumeClaim:
claimName: rompr-data
---
apiVersion: v1
kind: Service
metadata:
name: rompr
spec:
ports:
- name: http
port: 80
selector:
app: rompr
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rompr
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: musik.lan
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rompr
port:
name: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rompr-data
spec:
storageClassName: nfs-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 6Gi

@@ -2,6 +2,7 @@
rm -f /var/run/nginx.pid
mkdir -p /var/log/nginx
set -e
/etc/init.d/php7.4-fpm restart
mkdir -p /rompr/albumart /rompr/prefs
chown www-data:www-data -R /rompr/albumart /rompr/prefs
/etc/init.d/php8.4-fpm restart
exec /usr/sbin/nginx -g 'daemon off;'

@@ -1,88 +0,0 @@
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: chaos-kubernetes-git
spec:
type: git
params:
- name: revision
value: master
- name: url
value: http://git-ui.lan/chaos/kubernetes.git
- name: submodules
value: "false"
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: img-rompr
spec:
type: image
params:
- name: url
value: cr.lan/rompr
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-rompr
spec:
params:
- name: pathToDockerFile
type: string
default: $(resources.inputs.source.path)/apps/rompr/Dockerfile
- name: pathToContext
type: string
default: $(resources.inputs.source.path)/apps/rompr
resources:
inputs:
- name: source
type: git
outputs:
- name: builtImage
type: image
results:
- name: IMAGE-DIGEST
description: Digest of the image just built.
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:latest
command:
- /kaniko/executor
args:
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --snapshotMode=redo
- --skip-tls-verify
- --digest-file=/tekton/results/IMAGE-DIGEST
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: img-rompr-taskrun2
spec:
#serviceAccountName: dockerhub-service
taskRef:
name: kaniko
params:
- name: DOCKERFILE
value: Dockerfile
- name: CONTEXT
value: apps/rompr
- name: IMAGE
value: cr.lan/rompr
- name: BUILDER_IMAGE
value: gcr.io/kaniko-project/executor:latest
resources:
inputs:
- name: source
resourceRef:
name: chaos-kubernetes-git
outputs:
- name: builtImage
resourceRef:
name: img-rompr

@@ -1,8 +0,0 @@
Install:
# Pipelines: @kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml@
# Triggers: @kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml@ #https://github.com/tektoncd/triggers/blob/master/docs/install.md
# Dashboard:
## update submodule in ./dashboard
## Build: @docker build -t tekton-dashboard:arm64 -t docker-registry.lan/tekton-dashboard:arm64 --platform linux/arm64 --build-arg GOARCH=arm64 .@
## apply deployment.yaml

@@ -1,60 +0,0 @@
# Copyright 2020 Tekton Authors LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ConfigMap
metadata:
name: config-registry-cert
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
# Registry's self-signed certificate
# TODO: somehow automate this with salt
cert: |
-----BEGIN CERTIFICATE-----
MIIFujCCA6KgAwIBAgIEYsvT+zANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJE
RTEPMA0GA1UECAwGQmVybGluMQ8wDQYDVQQHDAZCZXJsaW4xFDASBgNVBAMMC3R1
bW9yLmNoYW9zMB4XDTIxMDIxMjE4MzAzM1oXDTIyMDIxMjE4MzAzM1owLzELMAkG
A1UEBhMCREUxDzANBgNVBAgMBkJlcmxpbjEPMA0GA1UEBwwGQmVybGluMIICIjAN
BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAog4t352wKHS4pflQK4NlWH6yv1FK
MnqNJiNnIgkWrNABTu9ES3cmUwdEhf+Um7MJYvQivOZFIH65wBBmOxfnYWB+NPwn
XAi/o3BcePIdbwEGs0cxgIEKbmL9fY0SCXq0pXRu8Y7WAhqdTNp6/HY2fTMx7ghX
RNQPoeNlcfAZgpsJlZdkSzMYoFpGIW+Tvj3INNuIuHo1pagckWW/hGUIqY0NuUV9
Aj8LOHhHB+vKtjbq5DMVAob4kKOPJFmq/1D6fmRh3W1YAGikowVv3V45jAmnkcBj
Z8BIEiOnBy1AyW9o8Tc5000MAGNrm9IGpRfBBTptSAApZmK1V6zKreqCiCpgOBbh
6U1Bf1L39u8aLVRxeyzQbxqBM1VTbjKxygFSIR/7rVd9BEhx6VA95EG+EdPLpKDp
mymElCcVgv2ZhKBRxtne4CAQD5ng2SoEqLdjvZdC44QNapnj+6jlaNvKRJ1q63kq
B5Y4shJxYOc6QDQp2+Eh2d7qQNiTE3FJC/aeXDNQ+dqeV7chU+PbcbMQoxnIN6ou
Zc2IdtNL87+Apgh6vqZX9pELBXUN1Nu3NI88T8tw1CdqfFfh4Z2EEBBCsPD0yZPV
UrHZsAMiHh5prRkwsBVzDBIaLYd6glf/w9W8sWxe5wceDNhxD8VAfq/ZXeuE1Pme
cTVYsBNj8idC9tECAwEAAaOBxzCBxDAMBgNVHRMBAf8EAjAAMAsGA1UdDwQEAwIF
4DAdBgNVHQ4EFgQUa7ADNR68XrDsLtLtngmdJQ9UtOswcAYDVR0jBGkwZ4AU9l9v
D1+dukLLV/uDnP3eB4i6ZyihSaRHMEUxCzAJBgNVBAYTAkRFMQ8wDQYDVQQIDAZC
ZXJsaW4xDzANBgNVBAcMBkJlcmxpbjEUMBIGA1UEAwwLdHVtb3IuY2hhb3OCBBKa
C88wFgYDVR0RBA8wDYILdHVtb3IuY2hhb3MwDQYJKoZIhvcNAQELBQADggIBAKK3
S8qKrsarBflGrDI4diG+QOcMG3/y6juARp3vxQf3fDqC6HZCl+kWAp+Cq3Sp/hU7
GKM7qraWpvGxgmDyaevAirLdFlYQBgcIl9frPI8yfLWbZHWvx3PFXNqg2Ckm98xX
vSUacPTPp/tKFBquJ5+j+/YS2U4qWWNIYYtDEI+3lswfoeh0CIEPSxDk0wHDAyfZ
Vh30ZuZhsf3F63xMggw/RpEHeTTCr0YGOAmzpb7jItcbP/EER1qTQ4T+3ExuC40C
EdOAeL377O2rr7zjcmJWk8B5FaQ8K8UdE/iQGM7tP5ieMNTVACe21KFpqIIXaIka
HqRTyvRmJGUrVf1NeXE16yKirIqAjEV/B/4S244wxYcwqweZObbI0PnbnEMn3PMF
TV+e1CUmVOKyGIxfHH7j/VKQfmH/W0jOlGWI7OkbdU5GckoX4Knjrv2MmT9i2ENy
6dID3BJVm6hK2SjJLc7SxbPXMG3I6BrlA5/3LaXzl+2fWAk5OA1jnGZz0P4XcdOO
iAulB4I3PdmNRdSYAXVRdo5OLoq/7iBcqSrCXRw1IbgJm0VlS2AI6hGEXDQvjQwP
38ijZUV/ch2lGyUZOfQymI7Ylh+Airn8ctqyMS8FeZBAyny4/t7xrhWuGO1awUzp
4p/sEjg6kqp3oLai5yhaz9S+y7Ao5XmGDdzfalWH
-----END CERTIFICATE-----

@@ -1,19 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tekton-dashboard
namespace: tekton-pipelines
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: tekton.lan
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tekton-dashboard
port:
number: 9097
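
The kubernetes.io/ingress.class annotation used here is deprecated in favour of spec.ingressClassName; the same Ingress sketched with the current field:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
spec:
  ingressClassName: nginx   # replaces the deprecated annotation
  rules:
    - host: tekton.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tekton-dashboard
                port:
                  number: 9097
```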

@@ -1,526 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: extensions.dashboard.tekton.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.apiVersion
name: API version
type: string
- JSONPath: .spec.name
name: Kind
type: string
- JSONPath: .spec.displayname
name: Display name
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: dashboard.tekton.dev
names:
categories:
- tekton
- tekton-dashboard
kind: Extension
plural: extensions
shortNames:
- ext
- exts
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
x-kubernetes-preserve-unknown-fields: true
versions:
- name: v1alpha1
served: true
storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-backend
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
- apiGroups:
- tekton.dev
resources:
- clustertasks
- clustertasks/status
verbs:
- get
- list
- watch
- apiGroups:
- triggers.tekton.dev
resources:
- clustertriggerbindings
verbs:
- get
- list
- watch
- apiGroups:
- dashboard.tekton.dev
resources:
- extensions
verbs:
- create
- update
- delete
- patch
- apiGroups:
- tekton.dev
resources:
- clustertasks
- clustertasks/status
verbs:
- create
- update
- delete
- patch
- apiGroups:
- triggers.tekton.dev
resources:
- clustertriggerbindings
verbs:
- create
- update
- delete
- patch
- add
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-dashboard
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
aggregationRule:
clusterRoleSelectors:
- matchLabels:
rbac.dashboard.tekton.dev/aggregate-to-dashboard: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-extensions
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-pipelines
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-tenant
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- dashboard.tekton.dev
resources:
- extensions
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
- pods/log
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- tekton.dev
resources:
- tasks
- taskruns
- pipelines
- pipelineruns
- pipelineresources
- conditions
- tasks/status
- taskruns/status
- pipelines/status
- pipelineruns/status
- taskruns/finalizers
- pipelineruns/finalizers
verbs:
- get
- list
- watch
- apiGroups:
- triggers.tekton.dev
resources:
- eventlisteners
- triggerbindings
- triggertemplates
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- update
- patch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- create
- update
- delete
- apiGroups:
- tekton.dev
resources:
- tasks
- taskruns
- pipelines
- pipelineruns
- pipelineresources
- conditions
- taskruns/finalizers
- pipelineruns/finalizers
- tasks/status
- taskruns/status
- pipelines/status
- pipelineruns/status
verbs:
- create
- update
- delete
- patch
- apiGroups:
- triggers.tekton.dev
resources:
- eventlisteners
- triggerbindings
- triggertemplates
verbs:
- create
- update
- delete
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-triggers
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-backend
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: v1
kind: Service
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
dashboard.tekton.dev/release: v0.11.1
version: v0.11.1
name: tekton-dashboard
namespace: tekton-pipelines
spec:
ports:
- name: http
port: 9097
protocol: TCP
targetPort: 9097
selector:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
dashboard.tekton.dev/release: v0.11.1
version: v0.11.1
name: tekton-dashboard
namespace: tekton-pipelines
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
template:
metadata:
labels:
app: tekton-dashboard
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/name: dashboard
app.kubernetes.io/part-of: tekton-dashboard
app.kubernetes.io/version: v0.11.1
name: tekton-dashboard
spec:
containers:
- args:
- --port=9097
- --logout-url=
- --pipelines-namespace=tekton-pipelines
- --triggers-namespace=tekton-pipelines
- --read-only=false
- --csrf-secure-cookie=false
- --log-level=info
- --log-format=json
- --namespace=
- --openshift=false
- --stream-logs=false
- --external-logs=
env:
- name: INSTALLED_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: WEB_RESOURCES_DIR
value: /go/src/github.com/tektoncd/dashboard/web
- name: TEKTON_PIPELINES_WEB_RESOURCES_DIR
value: /go/src/github.com/tektoncd/dashboard/web
#image: gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard@sha256:744eb92d7d0365bbfb2405df4ba4d2a66c01edc26028c362bd5675e2bc1b9626
image: docker-registry.lan/tekton-dashboard:arm64
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /health
port: 9097
name: tekton-dashboard
ports:
- containerPort: 9097
readinessProbe:
httpGet:
path: /readiness
port: 9097
securityContext:
runAsNonRoot: true
runAsUser: 65532
serviceAccountName: tekton-dashboard
volumes: []
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-pipelines
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-pipelines
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-dashboard
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-dashboard
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-triggers
namespace: tekton-pipelines
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-triggers
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-tenant
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-tenant
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
name: tekton-dashboard-extensions
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tekton-dashboard-extensions
subjects:
- kind: ServiceAccount
name: tekton-dashboard
namespace: tekton-pipelines
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tekton-dashboard
namespace: tekton-pipelines
labels:
app.kubernetes.io/component: dashboard
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-dashboard
spec:
rules:
- host: tekton.lan
http:
paths:
- backend:
serviceName: tekton-dashboard
servicePort: 9097
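
Quick smoke test after applying the manifest above (a sketch; it assumes the current kubectl context points at this cluster and that tekton.lan resolves to the ingress controller):
kubectl -n tekton-pipelines get deployment,service,ingress -l app.kubernetes.io/part-of=tekton-dashboard
kubectl -n tekton-pipelines port-forward svc/tekton-dashboard 9097:9097 &
sleep 2
curl -s http://127.0.0.1:9097/health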

File diff suppressed because it is too large

@@ -1,12 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: tektoncd-workspaces
spec:
storageClassName: nfs-ssd
accessModes:
- ReadWriteMany
resources:
requests:
storage: 40Gi

bin/find_changes.sh Executable file

@@ -0,0 +1,14 @@
#!/bin/bash
# Collect the top-level directories touched by the HEAD commit
# (only paths starting with "_") and print the unique set.
declare -a CH
i=0
echo "$(git --version)"
while read -r line; do
  WHAT=$(dirname "${line}")
  echo "LIN: ${line} WHAT: ${WHAT}"
  CH[$i]=$WHAT
  i=$((i + 1))
done < <(git diff-tree --no-commit-id --name-only HEAD -r | grep -E '^_')
# De-duplicate the collected directories and print them on one line.
UNIQ=$(printf '%s\n' "${CH[@]}" | sort -u)
echo ${UNIQ}
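
Example run (output is illustrative only; it assumes the HEAD commit touched a file under an underscore-prefixed app directory such as _tekton/):
$ ./bin/find_changes.sh
git version 2.39.2
LIN: _tekton/tekton-dashboard.yaml WHAT: _tekton
_tekton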


@@ -1,5 +0,0 @@
from: https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md
# create the new secret (rendered to a manifest, not applied directly):
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run -o yaml > additional-scrape-configs.yaml
# add "namespace: monitoring" to the generated metadata
# apply the manifest


@@ -1,7 +0,0 @@
apiVersion: v1
data:
prometheus-additional.yaml: LSBqb2JfbmFtZTogZ2l0ZWEKICBzdGF0aWNfY29uZmlnczoKICAtIHRhcmdldHM6CiAgICAtIGdpdC11aS5sYW4KLSBqb2JfbmFtZTogbXlzcWxkCiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBtYXJpYWRiLmxhbjo5MTA0Ci0gam9iX25hbWU6IG1xdHQubW9zcXVpdHRvCiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBtcXR0Lmxhbjo5MjM0CiAgICAtIG1xdHQuY2hhb3M6OTIzNAotIGpvYl9uYW1lOiBoYXByb3h5CiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBhZG0wMS53a3M6OTEwMQogICAgLSBkcnVja2kud2tzOjkxMDEKICAgIC0gYXV0bzAyLmNoYW9zOjkxMDEKLSBqb2JfbmFtZToga2xpcHBlcgogIHN0YXRpY19jb25maWdzOgogIC0gdGFyZ2V0czoKICAgIC0gZHJ1Y2tpLndrczozOTAzCi0gam9iX25hbWU6IG9jdG9wcmludAogIG1ldHJpY3NfcGF0aDogL3BsdWdpbi9wcm9tZXRoZXVzX2V4cG9ydGVyL21ldHJpY3MKICBwYXJhbXM6CiAgICBhcGlrZXk6CiAgICAtIDMwRThCMDFCRkQ2NzRFNUJCRDQ0NkQwOEM0NzMwREY0CiAgc3RhdGljX2NvbmZpZ3M6CiAgLSB0YXJnZXRzOgogICAgLSBkcnVja2kud2tzOjgwCi0gam9iX25hbWU6IGhhc3NpbwogIG1ldHJpY3NfcGF0aDogL2FwaS9wcm9tZXRoZXVzCiAgYmVhcmVyX3Rva2VuOiAnZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKSVV6STFOaUo5LmV5SnBjM01pT2lKaE16Qm1ZalUxWmpjeVpHRTBZemMyWW1VMk5tWTBOamxqTlRBeU1qZGpaQ0lzSW1saGRDSTZNVFl4TWpnNE16STVOeXdpWlhod0lqb3hPVEk0TWpRek1qazNmUS4xSUNzSGxpVVhSMENHNEg4dlFSWUo1alZxRndtcUtTQjBmU2NTaXRDLVE0JwogIHN0YXRpY19jb25maWdzOgogICAgLSB0YXJnZXRzOgogICAgICAtIGhhc3Npby5sYW46ODAKLSBqb2JfbmFtZTogaGFzc2lvX3Jpbmc4NgogIG1ldHJpY3NfcGF0aDogL2FwaS9wcm9tZXRoZXVzCiAgYmVhcmVyX3Rva2VuOiAnZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKSVV6STFOaUo5LmV5SnBjM01pT2lJME9HRmpaVEppTm1RM09UZzBNamMzWVdGbU1tTm1abVUxWXpjNE5URTBOQ0lzSW1saGRDSTZNVFl4TWpFNU1qazBNQ3dpWlhod0lqb3hPVEkzTlRVeU9UUXdmUS5CYklBWG05UnEwamI2b3VxZ1ZITmQ2S2VlejNOUDN5aC03d3lmdW9COFlrJwogIHN0YXRpY19jb25maWdzOgogICAgLSB0YXJnZXRzOgogICAgICAtIGF1dG8uY2hhb3M6ODAKLSBqb2JfbmFtZTogcG9zdGdyZXMKICBzdGF0aWNfY29uZmlnczoKICAgIC0gdGFyZ2V0czoKICAgICAgLSBwb3N0Z3Jlcy5saXZlLWVudi5zdmMuY2x1c3Rlci5sb2NhbDo5MTg3Ci0gam9iX25hbWU6IG5vZGUKICBzdGF0aWNfY29uZmlnczoKICAtIHRhcmdldHM6CiAgICAtIGFkbTAxLndrczo5MTAwCiAgICAtIGR1bW9udC13a3Mud2tzOjkxMDAKICAgIC0gZHJ1Y2tpLndrczo5MTAwCiAgICAtIGViaW4wMS53a3M6OTEwMAogICAgLSBlYmluMDIud2tzOjkxMDAKICAgIC0gb3NtYy53a3M6OTEwMAogICAgLSByaW90MDEud2tzOjkxMDAKICAgIC0gdHJ1aGUuY2hhb3M6OTEwMAogICAgLSBhdXRvMDIuY2hhb3M6OTEwMAogICAgLSBkdW1vbnQuY2hhb3M6OTEwMAogICAgLSB0dW1vci5jaGFvczo5MTAwCiAgICAtIHdvaG56LmNoYW9zOjkxMDAKICAgIC0geW9yaS5jaGFvczo5MTAwCg==
kind: Secret
metadata:
creationTimestamp: null
name: additional-scrape-configs


@@ -1,30 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: prometheus-k8s
namespace: metallb-system
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: prometheus-k8s
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: prometheus-k8s
subjects:
- kind: ServiceAccount
name: prometheus-k8s
namespace: monitoring
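
This Role/RoleBinding only grants target discovery in metallb-system; scraping still needs a monitor object there. A hedged sketch (the pod label and metrics port name depend on the MetalLB deployment and are assumptions here):
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: metallb
  namespace: metallb-system
spec:
  selector:
    matchLabels:
      app: metallb            # assumption: MetalLB pods carry this label
  podMetricsEndpoints:
  - port: monitoring          # assumption: name of the metrics port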


@@ -1,63 +0,0 @@
- job_name: gitea
static_configs:
- targets:
- git-ui.lan
- job_name: mysqld
static_configs:
- targets:
- mariadb.lan:9104
- job_name: mqtt.mosquitto
static_configs:
- targets:
- mqtt.lan:9234
- mqtt.chaos:9234
- job_name: haproxy
static_configs:
- targets:
- adm01.wks:9101
- drucki.wks:9101
- auto02.chaos:9101
- job_name: klipper
static_configs:
- targets:
- drucki.wks:3903
- job_name: octoprint
metrics_path: /plugin/prometheus_exporter/metrics
params:
apikey:
- 30E8B01BFD674E5BBD446D08C4730DF4
static_configs:
- targets:
- drucki.wks:80
- job_name: hassio
metrics_path: /api/prometheus
bearer_token: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhMzBmYjU1ZjcyZGE0Yzc2YmU2NmY0NjljNTAyMjdjZCIsImlhdCI6MTYxMjg4MzI5NywiZXhwIjoxOTI4MjQzMjk3fQ.1ICsHliUXR0CG4H8vQRYJ5jVqFwmqKSB0fScSitC-Q4'
static_configs:
- targets:
- hassio.lan:80
- job_name: hassio_ring86
metrics_path: /api/prometheus
bearer_token: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiI0OGFjZTJiNmQ3OTg0Mjc3YWFmMmNmZmU1Yzc4NTE0NCIsImlhdCI6MTYxMjE5Mjk0MCwiZXhwIjoxOTI3NTUyOTQwfQ.BbIAXm9Rq0jb6ouqgVHNd6Keez3NP3yh-7wyfuoB8Yk'
static_configs:
- targets:
- auto.chaos:80
- job_name: postgres
static_configs:
- targets:
- postgres.live-env.svc.cluster.local:9187
- job_name: node
static_configs:
- targets:
- adm01.wks:9100
- dumont-wks.wks:9100
- drucki.wks:9100
- ebin01.wks:9100
- ebin02.wks:9100
- osmc.wks:9100
- riot01.wks:9100
- truhe.chaos:9100
- auto02.chaos:9100
- dumont.chaos:9100
- tumor.chaos:9100
- wohnz.chaos:9100
- yori.chaos:9100
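
To confirm the base64 blob in the secret above matches this plaintext, decode it back out (assumes the secret was created in the monitoring namespace as described earlier):
kubectl -n monitoring get secret additional-scrape-configs \
  -o jsonpath='{.data.prometheus-additional\.yaml}' | base64 -d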


@@ -1,14 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus-k8s-db-prometheus-k8s-0
namespace: monitoring
annotations:
volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
storageClassName: fast
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi


@@ -1,41 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: prometheus-db
annotations:
pv.kubernetes.io/provisioned-by: nfs-ssd
spec:
storageClassName: "nfs-ssd"
nfs:
path: /data/raid1-ssd/k8s-data/prometheus-db
server: ebin01
capacity:
storage: 40Gi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: prometheus-k8s-db-prometheus-k8s-0
namespace: monitoring
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana-conf
spec:
storageClassName: "nfs-ssd"
nfs:
path: /data/raid1-ssd/k8s-data/grafana-conf
server: ebin01
capacity:
storage: 40Mi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
claimRef:
kind: PersistentVolumeClaim
name: grafana-conf
namespace: monitoring
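
Both volumes are pre-bound via claimRef; once the referenced PVCs exist in monitoring, each pair should report Bound (a quick check, nothing assumed beyond the names already in the manifests):
kubectl get pv prometheus-db grafana-conf
kubectl -n monitoring get pvc prometheus-k8s-db-prometheus-k8s-0 grafana-conf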


@@ -1,12 +0,0 @@
COMMON:
** git tag -l
** V=<git-tag>; git checkout -b $V $V
** run: build.sh <dir-name>   (worked example below)
external-provisioner:
external-attacher:
node-driver-registrar:
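
A worked pass through the COMMON steps, using external-provisioner and a hypothetical tag v2.2.2 (pick a real one from git tag -l):
cd external-provisioner
git tag -l
V=v2.2.2; git checkout -b $V $V
cd ..
./build.sh external-provisioner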


@@ -1,27 +0,0 @@
#!/bin/bash
# Cross-build one app image for arm64 and optionally push it
# to the local registry.
APP=$1
cd "$APP" || exit 1
VERSION=arm64 make -j8 GOARCH=arm64
docker build -t "${APP}:arm64" --platform linux/arm64 .
docker tag "${APP}:arm64" "docker-registry.lan/${APP}:arm64"
echo "=============================================="
while true; do
  read -r -p "Push it real good? " yn
  case $yn in
    [Yy]* )
      docker push "docker-registry.lan/${APP}:arm64"
      echo "-> Cheers"
      echo
      break;;
    [Nn]* )
      echo "x> Cheers!"
      echo
      exit;;
    * ) echo "Please answer [y]es or [n]o.";;
  esac
done
cd -


@@ -1,12 +0,0 @@
# This is where the result of the go build goes
/output*/
/_output*/
/_output
# Go test binaries
*.test
# Godeps or dep workspace
/Godeps/_workspace
vendor
vendor.*

Some files were not shown because too many files have changed in this diff