GZCTF Single-Instance Quick Deployment Guide
Apr 22, 2026 · 8293 words
GZ::CTF is an open-source CTF platform built on ASP.NET Core. It uses Docker or K8s as its container backend and provides customizable challenge types, dynamic containers, and dynamic scoring.
This article walks through a quick GZCTF deployment on a single-node k3s cluster, suitable for beginners and small CTF events.
Target environment
- OS: Ubuntu 22.04+ or CentOS 7
Pre-installation setup
Ubuntu
sudo apt update
sudo apt install -y curl ca-certificates gnupg lsb-release
sudo swapoff -a
sudo sed -ri 's@^([^#].*\sswap\s+sw\s+.*)$@#\1@g' /etc/fstab
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
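The sed call above comments out swap entries in /etc/fstab so swap stays off after a reboot. You can dry-run the pattern on a sample line before it touches the real file (a sketch; the sample fstab line is made up):

```shell
# Dry-run the swap-commenting pattern (no -i, so nothing is modified);
# a matching line gains a leading '#'
sample='/swapfile none swap sw 0 0'
echo "$sample" | sed -r 's@^([^#].*\sswap\s+sw\s+.*)$@#\1@g'
# Already-commented lines pass through unchanged, thanks to the leading [^#]
echo "#$sample" | sed -r 's@^([^#].*\sswap\s+sw\s+.*)$@#\1@g'
```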
CentOS 7
sudo yum install -y curl ca-certificates iptables-services
sudo swapoff -a
sudo sed -ri 's@^([^#].*\sswap\s+sw\s+.*)$@#\1@g' /etc/fstab
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Install k3s
curl -sfL https://get.k3s.io | sh -
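The install script writes the kubeconfig to /etc/rancher/k3s/k3s.yaml with root-only permissions by default. k3s also reads an optional config file; a sketch that relaxes the kubeconfig mode at install time (create this file before running the installer, or restart k3s afterwards):

```yaml
# /etc/rancher/k3s/config.yaml
# 644 lets non-root users read /etc/rancher/k3s/k3s.yaml directly;
# on a multi-user machine, prefer copying it to ~/.kube/config instead
write-kubeconfig-mode: "644"
```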
Prepare the deployment directory
mkdir -p ~/gzctf/manifests
cd ~/gzctf
Create the base resources
First, create the namespace and RBAC objects:
cat <<'EOF' > manifests/01-base.yaml
apiVersion: v1
kind: Namespace
metadata:
name: gzctf-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gzctf-sa
namespace: gzctf-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gzctf-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gzctf-sa
namespace: gzctf-server
EOF
kubectl apply -f manifests/01-base.yaml
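Note that the ClusterRoleBinding above grants gzctf-sa cluster-admin, i.e. full control of the cluster. That is the quickest way to let GZCTF spawn challenge containers, but if you want something narrower, the sketch below shows the shape of a dedicated ClusterRole (the exact resource list is an assumption; check GZCTF's documentation for what it actually needs before swapping it in):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gzctf-role
rules:
  # assumed minimum for a platform that creates per-challenge
  # namespaces, pods and services; verify against the GZCTF docs
  - apiGroups: [""]
    resources: ["namespaces", "pods", "services", "events"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["get", "list", "watch", "create", "delete"]
```

To use it, change roleRef.name in the ClusterRoleBinding from cluster-admin to gzctf-role.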
Create the GZCTF configuration file:
cat <<'EOF' > manifests/appsettings.json
{
"ConnectionStrings": {
"Database": "Host=gzctf-db:5432;Database=ctf;Username=postgres;Password=ChangeMe_DbPass",
"RedisCache": "gzctf-garnet:6379,abortConnect=false",
"Storage": "local://"
}
}
EOF
kubectl -n gzctf-server create configmap gzctf-config \
--from-file=appsettings.json=manifests/appsettings.json
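A typo in appsettings.json only surfaces later as a crash-looping pod, so it is worth validating the JSON before loading it into the ConfigMap. A sketch using Python's standard json.tool (assumes python3 is available; the /tmp path is just for the demo):

```shell
# check_json FILE: exit 0 if FILE parses as JSON, non-zero otherwise
check_json() { python3 -m json.tool "$1" > /dev/null 2>&1; }

# demo on a throwaway file; run it against manifests/appsettings.json instead
printf '{"ConnectionStrings":{"Database":"Host=gzctf-db;Port=5432"}}' \
  > /tmp/appsettings-demo.json
check_json /tmp/appsettings-demo.json && echo valid || echo invalid
```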
Create storage
Create the host directories. Note that hostPath in a PersistentVolume must be an absolute path, since Kubernetes does not expand "~"; in the manifest, spell out the full path (e.g. /root/gzctf/files):
mkdir -p ~/gzctf/files ~/gzctf/db
sudo chmod -R 777 ~/gzctf
Create the PV and PVC:
cat <<'EOF' > manifests/02-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: gzctf-files-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
  hostPath:
    # must be an absolute path: Kubernetes does not expand "~";
    # adjust /root to the home directory you actually used
    path: /root/gzctf/files
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: gzctf-db-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
  hostPath:
    # must be an absolute path: Kubernetes does not expand "~";
    # adjust /root to the home directory you actually used
    path: /root/gzctf/db
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gzctf-files
namespace: gzctf-server
spec:
  # an explicit empty storageClassName keeps k3s's default local-path
  # provisioner from intercepting the claim, so it binds to the PV above
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: gzctf-files-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gzctf-db
namespace: gzctf-server
spec:
  # see gzctf-files above: bind to the static PV, not the default SC
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: gzctf-db-pv
EOF
kubectl apply -f manifests/02-storage.yaml
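As an alternative to hand-written hostPath PVs, k3s bundles the local-path dynamic provisioner, which creates backing directories under /var/lib/rancher/k3s/storage on demand. A sketch of the files claim in that style (no PV object needed; the db claim is analogous):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gzctf-files
  namespace: gzctf-server
spec:
  # k3s's bundled dynamic provisioner allocates a host directory
  # automatically; no PersistentVolume object is required
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```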
Deploy GZCTF
cat <<'EOF' > manifests/03-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: gzctf
namespace: gzctf-server
labels:
app: gzctf
spec:
replicas: 1
selector:
matchLabels:
app: gzctf
template:
metadata:
labels:
app: gzctf
spec:
serviceAccountName: gzctf-sa
containers:
- name: gzctf
image: registry.cn-shanghai.aliyuncs.com/gztime/gzctf:latest
imagePullPolicy: Always
env:
- name: GZCTF_ADMIN_PASSWORD
value: "ChangeMe_AdminPass"
- name: LC_ALL
value: zh_CN.UTF-8
ports:
- containerPort: 8080
name: http
volumeMounts:
- name: gzctf-files
mountPath: /app/files
- name: gzctf-config
mountPath: /app/appsettings.json
subPath: appsettings.json
volumes:
- name: gzctf-files
persistentVolumeClaim:
claimName: gzctf-files
- name: gzctf-config
configMap:
name: gzctf-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gzctf-garnet
namespace: gzctf-server
labels:
app: gzctf-garnet
spec:
replicas: 1
selector:
matchLabels:
app: gzctf-garnet
template:
metadata:
labels:
app: gzctf-garnet
spec:
containers:
- name: gzctf-garnet
image: ghcr.io/microsoft/garnet-alpine:latest
args: ["--bind", "0.0.0.0"]
ports:
- containerPort: 6379
name: garnet
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gzctf-db
namespace: gzctf-server
labels:
app: gzctf-db
spec:
replicas: 1
selector:
matchLabels:
app: gzctf-db
template:
metadata:
labels:
app: gzctf-db
spec:
      containers:
        - name: gzctf-db
          # pin the major version so image updates cannot silently
          # change the on-disk data layout
          image: postgres:16-alpine
env:
- name: POSTGRES_PASSWORD
value: "ChangeMe_DbPass"
ports:
- containerPort: 5432
name: postgres
volumeMounts:
- name: gzctf-db
mountPath: /var/lib/postgresql
volumes:
- name: gzctf-db
persistentVolumeClaim:
claimName: gzctf-db
EOF
kubectl apply -f manifests/03-deploy.yaml
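None of the Deployments above declare health checks or resource limits. For a real event it is worth adding at least a TCP probe and a memory cap to the gzctf container; a sketch to merge into its container spec (the thresholds and sizes are assumptions, tune them to your hardware):

```yaml
          # add to the gzctf container in 03-deploy.yaml
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 10
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 1Gi
```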
Create the access entrypoint
cat <<'EOF' > manifests/04-network.yaml
apiVersion: v1
kind: Service
metadata:
name: gzctf
namespace: gzctf-server
spec:
selector:
app: gzctf
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: gzctf-db
namespace: gzctf-server
spec:
selector:
app: gzctf-db
ports:
- protocol: TCP
port: 5432
targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
name: gzctf-garnet
namespace: gzctf-server
spec:
selector:
app: gzctf-garnet
ports:
- protocol: TCP
port: 6379
targetPort: 6379
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: gzctf
namespace: gzctf-server
spec:
ingressClassName: traefik
rules:
- host: ctf.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gzctf
port:
number: 8080
EOF
kubectl apply -f manifests/04-network.yaml
If you have a domain, replace ctf.example.com with your real domain and point its DNS record at the server's public IP.
If you don't have a domain, drop the host field and use the Ingress below instead; this rule routes all incoming traffic to gzctf:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: gzctf
namespace: gzctf-server
spec:
ingressClassName: traefik
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gzctf
port:
number: 8080
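With a real domain you will usually also want HTTPS. Traefik, k3s's default ingress controller, picks up a TLS secret referenced from the Ingress; a sketch assuming you already have a certificate pair (the secret name gzctf-tls and the file names are assumptions):

```yaml
# first: kubectl -n gzctf-server create secret tls gzctf-tls \
#          --cert=fullchain.pem --key=privkey.pem
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gzctf
  namespace: gzctf-server
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - ctf.example.com
      secretName: gzctf-tls
  rules:
    - host: ctf.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gzctf
                port:
                  number: 8080
```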
Verify
kubectl -n gzctf-server get pods
kubectl -n gzctf-server get svc
kubectl -n gzctf-server get ingress
kubectl -n gzctf-server logs deploy/gzctf -f
Open http://ctf.example.com in a browser (or the server IP, if you used the host-less Ingress) and complete the first login with the password set via GZCTF_ADMIN_PASSWORD.
Troubleshooting
- Pod stuck in Pending: check that the PVCs are Bound and that the node has enough disk and memory;
- GZCTF cannot reach the database: confirm POSTGRES_PASSWORD matches the password in appsettings.json;
- The page does not load: check the domain's DNS records and that port 80 is open in the firewall;
- kubectl fails with a permission error on /etc/rancher/k3s/k3s.yaml: the current user cannot read the default k3s kubeconfig. Even if ~/.kube/config already exists, the error will persist as long as kubectl is actually still reading /etc/rancher/k3s/k3s.yaml. First inspect the current behavior:
echo "$KUBECONFIG"
which kubectl
ls -l "$(which kubectl)"
If the output shows that kubectl is a symlink to k3s and KUBECONFIG is empty, the current shell has not switched to ~/.kube/config yet. Fix it with:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 600 ~/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl --kubeconfig $HOME/.kube/config get nodes
kubectl --kubeconfig $HOME/.kube/config apply -f manifests/01-base.yaml
To make this permanent, write it into your shell profile:
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes
If you must use sudo, pass the config file explicitly so kubectl does not fall back to /etc/rancher/k3s/k3s.yaml:
sudo KUBECONFIG=$HOME/.kube/config kubectl apply -f manifests/01-base.yaml