
DAY10: Reading the Streams section of CUDA Asynchronous Concurrent Execution

GPUS Lady
Published 2018-06-22 18:23:01
From the column: GPUS开发者

We are leading everyone through the English edition of the《CUDA C Programming Guide》. Today is Day 10. We are spending several days on the CUDA programming interface, the most important part of which is the CUDA C runtime. We hope that over the next 90 days you can learn CUDA from the original source and, along the way, build a habit of reading English.

This installment is about 263 words; estimated reading time 15 minutes.

Importantly —

If you have kept at it for 10 days, you have now read a total of 8,164 words!

Previous installments:

DAY5: Reading the CUDA C runtime section of the CUDA C programming interface

DAY6: Reading the CUDA C runtime section of the CUDA C programming interface

DAY7: Reading the CUDA C runtime section of the CUDA C programming interface

DAY8: Reading the Streams section of CUDA asynchronous concurrent execution

DAY9: Reading the Streams section of CUDA asynchronous concurrent execution

Today we continue with Streams in asynchronous concurrent execution. The good news is that after today the Streams part is truly finished, and we can move on:

3.2.5.5.6. Callbacks

The runtime provides a way to insert a callback at any point into a stream via cudaStreamAddCallback(). A callback is a function that is executed on the host once all commands issued to the stream before the callback have completed. Callbacks in stream 0 are executed once all preceding tasks and commands issued in all streams before the callback have completed.

The following code sample adds the callback function MyCallback to each of two streams after issuing a host-to-device memory copy, a kernel launch and a device-to-host memory copy into each stream. The callback will begin execution on the host after each of the device-to-host memory copies completes.
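The code sample referred to above did not survive this repost. A minimal sketch in the spirit of the Guide's example might look like the following; the kernel body, buffer sizes, and variable names here are illustrative placeholders, not the Guide's exact code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel; the Guide's sample only names it MyKernel.
__global__ void MyKernel(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

// Host callback: runs once all prior work in its stream has completed.
void CUDART_CB MyCallback(cudaStream_t stream, cudaError_t status, void *data) {
    printf("Inside callback %zu\n", (size_t)data);
}

int main() {
    const int n = 1 << 20;
    const size_t size = n * sizeof(float);
    cudaStream_t stream[2];
    float *hostPtr[2], *devIn[2], *devOut[2];
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&stream[i]);
        cudaMallocHost(&hostPtr[i], size);  // pinned memory, required for async copies
        cudaMalloc(&devIn[i], size);
        cudaMalloc(&devOut[i], size);
    }
    for (size_t i = 0; i < 2; ++i) {
        cudaMemcpyAsync(devIn[i], hostPtr[i], size, cudaMemcpyHostToDevice, stream[i]);
        MyKernel<<<(n + 511) / 512, 512, 0, stream[i]>>>(devOut[i], devIn[i], n);
        cudaMemcpyAsync(hostPtr[i], devOut[i], size, cudaMemcpyDeviceToHost, stream[i]);
        // The last parameter is reserved for future use and must be 0.
        cudaStreamAddCallback(stream[i], MyCallback, (void*)i, 0);
    }
    cudaDeviceSynchronize();  // waits for both streams, including the callbacks
    return 0;
}
```

Each callback fires on the host only after the device-to-host copy queued ahead of it in the same stream has finished.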

The commands that are issued in a stream (or all commands issued to any stream if the callback is issued to stream 0) after a callback do not start executing before the callback has completed. The last parameter of cudaStreamAddCallback() is reserved for future use.

A callback must not make CUDA API calls (directly or indirectly), as it might end up waiting on itself if it makes such a call, leading to a deadlock.

3.2.5.5.7. Stream Priorities

The relative priorities of streams can be specified at creation using cudaStreamCreateWithPriority(). The range of allowable priorities, ordered as [ highest priority, lowest priority ], can be obtained using the cudaDeviceGetStreamPriorityRange() function. At runtime, as blocks in low-priority streams finish, waiting blocks in higher-priority streams are scheduled in their place.

The following code sample obtains the allowable range of priorities for the current device, and creates streams with the highest and lowest available priorities.
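This code sample was also dropped in the repost; a sketch along the lines of the Guide's example follows. Note that numerically lower values mean higher priority, which is why the first output argument of cudaDeviceGetStreamPriorityRange() receives the lowest priority.

```cuda
#include <cuda_runtime.h>

int main() {
    // Query the allowable priority range for the current device.
    // leastPriority (first arg) is the numerically largest, i.e. lowest, priority.
    int priority_low, priority_high;
    cudaDeviceGetStreamPriorityRange(&priority_low, &priority_high);

    // Create one stream at each end of the range.
    cudaStream_t st_high, st_low;
    cudaStreamCreateWithPriority(&st_high, cudaStreamNonBlocking, priority_high);
    cudaStreamCreateWithPriority(&st_low, cudaStreamNonBlocking, priority_low);

    cudaStreamDestroy(st_high);
    cudaStreamDestroy(st_low);
    return 0;
}
```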

Notes / experience sharing for this article:

A callback must not make CUDA API calls (directly or indirectly), as it might end up waiting on itself if it makes such a call leading to a deadlock.

A callback must not call any CUDA API function, whether directly or indirectly. If it does, a callback that calls a CUDA function can end up waiting on itself, causing a deadlock. This is actually quite intuitive: because a CUDA stream executes in order, the next task in a stream must wait for the preceding tasks in that stream to complete. If a callback in some stream issues another task into a stream, it may well never finish waiting for that next task to complete, because the next task cannot complete until the callback returns first, while the callback is in turn waiting for the next task to complete... and so you have a deadlock.
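To make the failure mode concrete, here is a hypothetical sketch of the forbidden pattern (the function name and comments are illustrative):

```cuda
#include <cuda_runtime.h>

// ANTI-PATTERN: a callback must not touch the CUDA API.
void CUDART_CB BadCallback(cudaStream_t stream, cudaError_t status, void *data) {
    // The stream cannot make progress until this callback returns.
    // Any CUDA call here is forbidden; a blocking one deadlocks outright:
    //
    //   cudaStreamSynchronize(stream);  // waits for work in this stream,
    //                                   // which is itself waiting for this
    //                                   // callback to return -- deadlock
}
```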

If anything is unclear, please leave a comment below this article,

or post on our technical forum at bbs.gpuworld.cn

This article participates in the Tencent Cloud self-media sharing plan and is shared from the GPUS开发者 WeChat public account.
Originally published: 2018-05-13.
