Why doesn't the Istio "Authentication Policy" example page work as expected?

Stack Overflow user
Asked on 2018-08-15 12:08:48
Answers: 1 · Views: 280 · Followers: 0 · Votes: 0

Regarding this article: https://istio.io/docs/tasks/security/authn-policy/ . Specifically, when I follow the instructions in the Setup section, I cannot connect to the httpbin instances in the foo and bar namespaces. The one in the legacy namespace works fine, so I suspect something is wrong with the sidecar proxy installation.

Below is the YAML of the httpbin pod (after injection with istioctl kube-inject --includeIPRanges "10.32.0.0/16"). I use --includeIPRanges so that the pod can still reach external IPs (to install packages such as dnsutils for debugging).

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/status: '{"version":"4120ea817406fd7ed43b7ecf3f2e22abe453c44d3919389dcaff79b210c4cd86","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
  creationTimestamp: 2018-08-15T11:40:59Z
  generateName: httpbin-8b9cf99f5-
  labels:
    app: httpbin
    pod-template-hash: "465795591"
    version: v1
  name: httpbin-8b9cf99f5-9c47z
  namespace: foo
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: httpbin-8b9cf99f5
    uid: 1450d75d-a080-11e8-aece-42010a940168
  resourceVersion: "65722138"
  selfLink: /api/v1/namespaces/foo/pods/httpbin-8b9cf99f5-9c47z
  uid: 1454b68d-a080-11e8-aece-42010a940168
spec:
  containers:
  - image: docker.io/citizenstig/httpbin
    imagePullPolicy: IfNotPresent
    name: httpbin
    ports:
    - containerPort: 8000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-pkpvf
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - httpbin
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15007
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-statsd-prom-bridge.istio-system.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    image: docker.io/istio/proxyv2:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    resources:
      requests:
        cpu: 10m
    securityContext:
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - 10.32.0.0/16
    - -x
    - ""
    - -b
    - 8000,
    - -d
    - ""
    image: docker.io/istio/proxy_init:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  nodeName: gke-tvlk-data-dev-default-medium-pool-46397778-q2sb
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-pkpvf
    secret:
      defaultMode: 420
      secretName: default-token-pkpvf
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:41:01Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:44:28Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:40:59Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://758e130a4c31a15c1b8bc1e1f72bd7739d5fa1103132861eea9ae1a6ae1f080e
    image: citizenstig/httpbin:latest
    imageID: docker-pullable://citizenstig/httpbin@sha256:b81c818ccb8668575eb3771de2f72f8a5530b515365842ad374db76ad8bcf875
    lastState: {}
    name: httpbin
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-15T11:41:01Z
  - containerID: docker://9c78eac46a99457f628493975f5b0c5bbffa1dac96dab5521d2efe4143219575
    image: istio/proxyv2:1.0.0
    imageID: docker-pullable://istio/proxyv2@sha256:77915a0b8c88cce11f04caf88c9ee30300d5ba1fe13146ad5ece9abf8826204c
    lastState:
      terminated:
        containerID: docker://52299a80a0fa8949578397357861a9066ab0148ac8771058b83e4c59e422a029
        exitCode: 255
        finishedAt: 2018-08-15T11:44:27Z
        reason: Error
        startedAt: 2018-08-15T11:41:02Z
    name: istio-proxy
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: 2018-08-15T11:44:28Z
  hostIP: 10.32.96.27
  initContainerStatuses:
  - containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
    image: istio/proxy_init:1.0.0
    imageID: docker-pullable://istio/proxy_init@sha256:345c40053b53b7cc70d12fb94379e5aa0befd979a99db80833cde671bd1f9fad
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
        exitCode: 0
        finishedAt: 2018-08-15T11:41:00Z
        reason: Completed
        startedAt: 2018-08-15T11:41:00Z
  phase: Running
  podIP: 10.32.19.61
  qosClass: Burstable
  startTime: 2018-08-15T11:40:59Z

Here is the sample command where I get an error, for sleep.legacy -> httpbin.foo:

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"

000
command terminated with exit code 7
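For context, curl exit code 7 means "failed to connect to host", and -w "%{http_code}" prints 000 when no HTTP response was ever received, so the failure above happens at the TCP level rather than as an HTTP error. The same pair of symptoms can be reproduced locally against a closed port (assuming nothing is listening on 127.0.0.1:9):

```shell
# curl exits with 7 (couldn't connect) and %{http_code} prints 000
# when the TCP connection itself fails -- no HTTP status was received.
http_code=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:9/ip)
exit_code=$?
echo "http_code=${http_code} exit_code=${exit_code}"
```

On a machine where that port is indeed closed, this prints http_code=000 exit_code=7, matching the failure above.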

Below is the sample command that returns a success status, for sleep.legacy -> httpbin.legacy:

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.legacy:8000/ip -s -o /dev/null -w "%{http_code}\n"

200

I followed the instructions to make sure that no mTLS policies are defined, etc.:

> kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
> kubectl get meshpolicies.authentication.istio.io
No resources found.
> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local
1 Answer

Stack Overflow user

Accepted answer

Answered on 2018-08-15 12:34:51

I think I found the cause: the configuration was messed up on my side. If you look at the statsd address, it is defined with an unresolvable hostname, istio-statsd-prom-bridge.istio-system.istio-system:9125 (note the duplicated namespace suffix). Consistent with that, after inspecting the proxy container I noticed it had restarted/crashed several times.
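The bad value can also be spotted mechanically: a Kubernetes in-cluster service address has the form service.namespace:port, so a hostname whose last two dot-separated labels are identical (as in istio-statsd-prom-bridge.istio-system.istio-system) has a duplicated namespace suffix. A minimal shell sketch of such a sanity check (the function name is mine, not part of any tool):

```shell
# Flag service hostnames whose namespace segment is duplicated,
# e.g. "svc.istio-system.istio-system" instead of "svc.istio-system".
has_duplicate_namespace() {
  host="${1%%:*}"                                  # strip any ":port" suffix
  last=$(echo "$host" | awk -F. '{print $NF}')     # final label
  prev=$(echo "$host" | awk -F. '{print $(NF-1)}') # label before it
  [ -n "$prev" ] && [ "$last" = "$prev" ]
}

has_duplicate_namespace "istio-statsd-prom-bridge.istio-system.istio-system:9125" \
  && echo "duplicated namespace suffix"
has_duplicate_namespace "zipkin.istio-system:9411" \
  || echo "looks fine"
```

Run against the two --statsdUdpAddress and --zipkinAddress values from the pod spec above, it flags only the statsd one.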

Votes: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/51858436
