Observability helps you identify and debug problems in an application by making the interactions between the components of a distributed system visible.
In Spring Boot 2.x we could pull in Spring Cloud Sleuth to collect trace information from our services and ship it to a backend such as Zipkin.
In Spring Boot 3.x, Spring Cloud Sleuth has been replaced by Micrometer.
The following complete example walks through integrating Micrometer.
| Dependency | Version |
|---|---|
| JDK | 20 |
| Spring Boot | 3.1.2 |
Zipkin is used here to collect and display the trace data. After downloading the jar, it can be started with JDK 1.8.
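The download-and-run steps can be sketched with Zipkin's official quickstart script (the default port 9411 is assumed):

```shell
# The official quickstart script downloads the latest zipkin-server exec jar
curl -sSL https://zipkin.io/quickstart.sh | bash -s
# Start Zipkin; the UI listens on http://localhost:9411 by default
java -jar zipkin.jar
```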
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```
```yaml
management:
  endpoints:
    web:
      exposure:
        include: '*'
logging:
  pattern:
    level: '%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]'
```
Note: Sleuth adjusted the log pattern automatically, whereas Micrometer requires you to specify the pattern yourself. The MDC keys are still traceId and spanId.
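The trace export itself can also be tuned. With the dependencies above, Spring Boot auto-configures a Zipkin exporter; a minimal sketch of the relevant properties (the endpoint shown is Zipkin's default, and sampling is raised to 100% here for demo purposes, whereas the default probability is 0.1):

```yaml
management:
  tracing:
    sampling:
      probability: 1.0   # sample every request (demo setting; default is 0.1)
  zipkin:
    tracing:
      endpoint: http://localhost:9411/api/v2/spans   # Zipkin's default ingest endpoint
```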
```java
import io.micrometer.observation.annotation.Observed;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient(
        contextId = "third-part-sf",
        name = "third-part-sf",
        url = "https://openic.sf-express.com",
        configuration = SFConfig.class,
        path = "/open/api/external"
)
@Observed(name = "SFDeliveryClient")
public interface SfDeliveryClient {

    @PostMapping("/precreateorder")
    SfResult<SfPreOrderResp> preCreateOrder(@RequestBody SfPreOrderReq preOrderReq);
}
```
This differs from how Sleuth is used: Sleuth instrumented every interaction (RPC, Redis, and so on) by default, whereas Micrometer requires an explicit `@Observed` annotation to register a collection point.

In Micrometer, the creation and teardown of each observation context can be tracked with a handler:
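Besides the annotation, a collection point can also be created programmatically through the Observation API; a minimal sketch (the registry would normally be the auto-configured Spring bean, not one created by hand, and the names here mirror the ones above for illustration only):

```java
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationRegistry;

public class ManualObservationExample {
    public static void main(String[] args) {
        // Normally injected by Spring; created by hand here purely for illustration.
        ObservationRegistry registry = ObservationRegistry.create();

        // Roughly what @Observed does via AOP: open an observation, run the work, close it.
        Observation.createNotStarted("SFDeliveryClient", registry)
                .lowCardinalityKeyValue("method", "preCreateOrder")
                .observe(() -> {
                    // ... call the remote service here ...
                });
    }
}
```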
```java
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationHandler;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class SimpleLoggingHandler implements ObservationHandler<Observation.Context> {

    @Override
    public void onStart(Observation.Context context) {
        log.info("Starting context {}", context);
    }

    @Override
    public void onStop(Observation.Context context) {
        log.info("Stopping context {}", context);
    }

    @Override
    public boolean supportsContext(Observation.Context context) {
        return true;
    }
}
```
----
```java
import io.micrometer.observation.ObservationRegistry;
import io.micrometer.observation.aop.ObservedAspect;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ObservedAspectConfiguration {

    @Bean
    public ObservedAspect observedAspect(ObservationRegistry observationRegistry) {
        // Register the custom handler and enable @Observed via AOP.
        observationRegistry.observationConfig().observationHandler(new SimpleLoggingHandler());
        return new ObservedAspect(observationRegistry);
    }
}
```
Visiting http://localhost:8080/actuator/metrics returns:
```json
{
  "names": [
    "SFDeliveryClient",
    "SFDeliveryClient.active",
    "application.ready.time",
    "application.started.time",
    "disk.free",
    "disk.total",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "hikaricp.connections",
    "hikaricp.connections.acquire",
    "hikaricp.connections.active",
    "hikaricp.connections.creation",
    "hikaricp.connections.idle",
    "hikaricp.connections.max",
    "hikaricp.connections.min",
    "hikaricp.connections.pending",
    "hikaricp.connections.timeout",
    "hikaricp.connections.usage",
    "http.server.requests",
    "http.server.requests.active",
    "jdbc.connections.active",
    "jdbc.connections.idle",
    "jdbc.connections.max",
    "jdbc.connections.min",
    "jvm.buffer.count",
    "jvm.buffer.memory.used",
    "jvm.buffer.total.capacity",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.compilation.time",
    "jvm.gc.live.data.size",
    "jvm.gc.max.data.size",
    "jvm.gc.memory.allocated",
    "jvm.gc.memory.promoted",
    "jvm.gc.overhead",
    "jvm.gc.pause",
    "jvm.info",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.usage.after.gc",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "lettuce.command.completion",
    "lettuce.command.firstresponse",
    "logback.events",
    "process.cpu.usage",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "thirdChannelController",
    "thirdChannelController.active",
    "tomcat.sessions.active.current",
    "tomcat.sessions.active.max",
    "tomcat.sessions.alive.max",
    "tomcat.sessions.created",
    "tomcat.sessions.expired",
    "tomcat.sessions.rejected"
  ]
}
```
Drilling into a single metric, http://localhost:8080/actuator/metrics/SFDeliveryClient returns:
```json
{
  "name": "SFDeliveryClient",
  "baseUnit": "seconds",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 1.0
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 0.229436
    },
    {
      "statistic": "MAX",
      "value": 0.0
    }
  ],
  "availableTags": [
    {
      "tag": "method",
      "values": [
        "preCreateOrder"
      ]
    },
    {
      "tag": "error",
      "values": [
        "none"
      ]
    },
    {
      "tag": "class",
      "values": [
        "io.yujie.fast.delivery.thirdparty.sf.client.SfDeliveryClient"
      ]
    }
  ]
}
```
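The metric can also be narrowed by any of the tags listed under `availableTags`, using Actuator's `tag=KEY:VALUE` query parameter; for example:

```shell
# Drill down to a single method of the client (tag name taken from "availableTags")
curl 'http://localhost:8080/actuator/metrics/SFDeliveryClient?tag=method:preCreateOrder'
```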
Once the logs are shipped to ELK, they can be searched by traceId or spanId. Taking a fairly short log line as an example:
```
2023-08-02T13:53:53.127+08:00  INFO [eeaters-example,d50e2c42f3225b681bf1fef572dfbf0d,c2ad58fc187d2667] 14680 --- [nio-8080-exec-1] i.y.f.d.thirdparty.sf.client.SFConfig : 签名: ZGQ4MDRhNGNiMWUzYjQ0ZjcwNjhmYTY3ZmViZmJiMGM=
```
The Zipkin UI displays the trace as follows:
In Sleuth, `TraceEnvironmentPostProcessor` adjusted the logging level pattern automatically; with Micrometer the log format must be set by hand.

Original statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission. For infringement concerns, contact cloudcommunity@tencent.com for removal.