I am trying to collect logs from the running pods in a KOPS cluster. I run Filebeat inside the KOPS cluster to collect logs from my pods (applications) and ship them outside the cluster, where a Logstash service receives them and saves them to a file.
I noticed that Filebeat always emits logs with UTC timestamps, even though all of my nodes and pods run in the SGT timezone.
I added the add_locale processor in the Filebeat config, but it did not help.
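For reference, the receiving side is nothing special: a plain Logstash pipeline with a Beats input that writes whatever arrives to a file. Below is a minimal sketch of such a pipeline; the port matches the Filebeat output further down, while the output path and the per-pod file naming are assumptions for illustration only.

input {
  beats {
    port => 5044    # matches output.logstash hosts ["IP:5044"] in filebeat.yml below
  }
}

output {
  file {
    # hypothetical destination; one file per pod, using the metadata added by add_kubernetes_metadata
    path => "/var/log/filebeat/%{[kubernetes][pod][name]}.log"
    codec => line { format => "%{message}" }
  }
}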
Node timezone
Pod timezone
Full file - kubernetes.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Singapore
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
Log output from Filebeat
Posted on 2021-01-15 15:29:00
Unfortunately, I don't have enough reputation to add a comment, so I'm posting this as an answer. The reference documentation below mentions:
The processor adds an event.timezone value to each event.
So the log timestamp itself may not be converted to the local timezone; instead, an extra field representing the timezone is added to the event, and the application consuming the logs can use it to format them.
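If Logstash is the consumer, one way to act on that field (a sketch, not the only option) is a ruby filter that reads the offset produced by add_locale (format: offset yields values like "+08:00" for Singapore) and derives a human-readable local timestamp next to the UTC @timestamp; the local_time field name here is made up for illustration.

filter {
  ruby {
    code => '
      # event.timezone is added by Filebeat add_locale, e.g. "+08:00"
      offset = event.get("[event][timezone]")
      if offset
        local = event.get("@timestamp").time.localtime(offset)
        event.set("local_time", local.strftime("%Y-%m-%dT%H:%M:%S%z"))
      end
    '
  }
}

Beats and Logstash keep @timestamp itself in UTC, so the fix is about how the timestamp is rendered at the consuming end rather than about Filebeat's own clock.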
https://stackoverflow.com/questions/65733989