Nginx ingress pod returns 404 on incoming websocket connection requests

Stack Overflow user
Asked on 2019-03-29 03:56:58
1 answer · 1K views · 0 followers · 0 votes

Folks, I have a bunch of services running on AWS ECS. My Kubernetes cluster is also on AWS, using EKS. I use nginx-ingress to expose my cluster to my ECS services.

One of my Node.js containers is unable to initiate a websocket connection to a backend pod. The Node.js container's logs just say that the websocket connection could not be established.

Judging from my backend pod's logs, the request never reached the backend at all.

I then looked at the logs of my nginx-ingress pod, and I do see a bunch of 404s like these:
...
{
  "time": "2019-03-28T19:39:19+00:00",
  "request_body": "-",
  "remote_addr": "",
  "x-forward-for": "1.2.3.4(public ip), 127.0.0.1",
  "request_id": "ea1f269ce703a69126d22bea28b75b89",
  "remote_user": "-",
  "bytes_sent": 308,
  "request_time": 0,
  "status": 404,
  "vhost": "abc.net",
  "request_query": "-",
  "request_length": 1084,
  "duration": 0,
  "request": "GET /wsconnect HTTP/1.1",
  "http_referrer": "-",
  "http_user_agent": "Jetty/9.4.12.v20180830",
  "header-X-Destination": "-",
  "header-Host": "abc.net",
  "header-Connection": "upgrade",
  "proxy_upstream_name": "-",
  "upstream_addr": "-",
  "service_port": "",
  "service_name": ""
}
2019/03/28 19:39:19 [info] 82#82: *13483 client 192.168.233.71 closed keepalive connection
2019/03/28 19:39:23 [info] 79#79: *13585 client closed connection while waiting for request, client: 192.168.105.223, server: 0.0.0.0:80
2019/03/28 19:39:25 [info] 84#84: *13634 client closed connection while waiting for request, client: 192.168.174.208, server: 0.0.0.0:80
2019/03/28 19:39:25 [info] 78#78: *13638 client closed connection while waiting for request, client: 192.168.233.71, server: 0.0.0.0:80
2019/03/28 19:39:33 [info] 80#80: *13832 client closed connection while waiting for request, client: 192.168.105.223, server: 0.0.0.0:80
2019/03/28 19:39:35 [info] 83#83: *13881 client closed connection while waiting for request, client: 192.168.174.208, server: 0.0.0.0:80
2019/03/28 19:39:35 [info] 83#83: *13882 client closed connection while waiting for request, client: 192.168.233.71, server: 0.0.0.0:80
2019/03/28 19:39:36 [info] 84#84: *12413 client 127.0.0.1 closed keepalive connection
...

My question is: how can I dig deeper into what exactly is causing this websocket connection request to fail? I tried setting the error log level to debug, but that produced a lot of noise.

Security groups are fine. One of my container services can talk to the backend pods in my K8s cluster; that service, however, is HTTP-based.

My ingress was set up following this guide: https://kubernetes.github.io/ingress-nginx/deploy/#aws (I deployed the ingress controller as-is).

The Service, Ingress, and ConfigMap are as follows:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: default
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600" # recommended for websocket
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "cert-arn"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: https
      port: 443
      protocol: TCP
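      # TLS terminates at the ELB (ssl-cert/ssl-ports above); the ELB then
      # forwards plain TCP with PROXY protocol to nginx's http port.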
      targetPort: http

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: default
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  enable-access-log-for-default-backend: "true"
  error-log-level: "info"
  allow-backend-server-header: "true"
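  # Must be paired with the aws-load-balancer-proxy-protocol annotation on the
  # Service above; with only one side enabled, nginx cannot parse what the ELB sends.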
  use-proxy-protocol: "true"
  log-format-upstream: '{"time": "$time_iso8601", "request_body": "$request_body", "remote_addr": "$proxy_protocol_addr","x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user":"$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status, "vhost": "$host", "request_query": "$args", "request_length": $request_length, "duration": $request_time, "request" : "$request", "http_referrer": "$http_referer", "http_user_agent":"$http_user_agent", "header-X-Destination": "$http_X_Destination", "header-Host" : "$http_Host", "header-Connection": "$http_Connection","proxy_upstream_name":"$proxy_upstream_name", "upstream_addr":"$upstream_addr", "service_port" : "$service_port", "service_name":"$service_name" }'

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-{{UUID}}
  namespace: {{NAMESPACE}}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-access-log: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    company-id: {{UUID}}
    company-name: {{ABC}}
spec:
  rules:
  - host: "{{UUID}}.k8s.dev.abc.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: {{UUID}}
          servicePort: 443
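
An aside while reading this Ingress, unrelated to the 404 itself: nginx times out idle proxied connections after 60 seconds by default, so long-lived websockets usually also need larger proxy timeouts on the ingress-nginx side. A minimal sketch of the extra annotations, mirroring the ELB idle timeout chosen in the Service above:

# Hypothetical additions to metadata.annotations of the Ingress above
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"   # keep idle websockets open
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"   # match the ELB idle timeout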
1 Answer

Stack Overflow user
Accepted answer
Answered on 2019-04-23 01:22:33

Turns out the problem was with my application. The issue was that the Host header (vhost) ended up being one of my ECS services' FQDNs, which my k8s cluster did not recognize.

To fix it, I ended up modifying the application code of my ECS service to rewrite the X-Forwarded-Host header to "k8s-backend-url.com:443", and the requests then went through.
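
For illustration only, an ingress-side alternative would have been to add a rule for the hostname the client was actually sending ("abc.net", the vhost in the 404 log above), so the controller routes the upgrade request instead of falling through to the default backend. A minimal sketch under that assumption; the service name is a placeholder:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-ecs-vhost
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "abc.net"        # the Host the Jetty client sent, per the access log
    http:
      paths:
      - path: /
        backend:
          serviceName: websocket-backend   # placeholder for the actual backend Service
          servicePort: 443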

2 votes

Original content provided by Stack Overflow: https://stackoverflow.com/questions/55405919