
DAY 1: 学习CUDA C Programming Guide

GPUS Lady
Published 2018-06-25 15:58:57

Starting today, we will guide you through reading the English-language CUDA C Programming Guide. Over the next 100 days, we hope you will learn CUDA from the original source and, along the way, build the habit of reading technical English.

This installment is about 899 words; estimated reading time: 15 minutes.

Changes from Version 9.0

· Documented restriction that operator-overloads cannot be __global__ functions in Operator Function.

· Removed guidance to break 8-byte shuffles into two 4-byte instructions. 8-byte shuffle variants are provided since CUDA 9.0. See Warp Shuffle Functions; a brief sketch follows this list.

· Passing __restrict__ references to __global__ functions is now supported. Updated comment in __global__ functions and function templates.

· Documented CUDA_ENABLE_CRC_CHECK in CUDA Environment Variables.

· Warp matrix functions [PREVIEW FEATURE] now support matrix products with m=32, n=8, k=16 and m=8, n=32, k=16 in addition to m=n=k=16.
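To make the 8-byte shuffle item above concrete, here is a minimal sketch (my own example with illustrative names, assuming CUDA 9.0 or later): a single warp sums a double value with __shfl_down_sync, with no need to split it into two 4-byte shuffles.

__global__ void warpSumDouble(const double *in, double *out)
{
    double v = in[threadIdx.x];                  // one value per lane of a single warp
    unsigned mask = 0xffffffffu;                 // all 32 lanes participate
    for (int offset = 16; offset > 0; offset /= 2)
        v += __shfl_down_sync(mask, v, offset);  // 8-byte shuffle of a double in one call
    if (threadIdx.x == 0)
        *out = v;                                // lane 0 now holds the warp-wide sum
}

// Launched with a single warp, e.g.: warpSumDouble<<<1, 32>>>(d_in, d_out);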

1. Introduction

1.1. From Graphics Processing to General Purpose Parallel Computing

Driven by the insatiable market demand for realtime, high-definition 3D graphics, the programmable Graphic Processor Unit or GPU has evolved into a highly parallel, multithreaded, manycore processor with tremendous computational horsepower and very high memory bandwidth, as illustrated by Figure 1 and Figure 2.

Figure 1. Floating-Point Operations per Second for the CPU and GPU

Figure 2. Memory Bandwidth for the CPU and GPU

The reason behind the discrepancy in floating-point capability between the CPU and the GPU is that the GPU is specialized for compute-intensive, highly parallel computation - exactly what graphics rendering is about - and therefore designed such that more transistors are devoted to data processing rather than data caching and flow control, as schematically illustrated by Figure 3.

Figure 3. The GPU Devotes More Transistors to Data Processing

More specifically, the GPU is especially well-suited to address problems that can be expressed as data-parallel computations - the same program is executed on many data elements in parallel - with high arithmetic intensity - the ratio of arithmetic operations to memory operations. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control, and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.

Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets can use a data-parallel programming model to speed up the computations. In 3D rendering, large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.
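As a concrete sketch of this mapping (my own minimal example, not taken from the guide; names and values are illustrative), the kernel below assigns one pixel of a grayscale image to each thread and scales its brightness, with the grid sized so that every pixel gets its own thread:

__global__ void scalePixels(const unsigned char *in, unsigned char *out,
                            int width, int height, float gain)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // pixel column handled by this thread
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // pixel row handled by this thread
    if (x < width && y < height) {
        float v = in[y * width + x] * gain;          // same program, different data element
        out[y * width + x] = v > 255.0f ? 255 : (unsigned char)v;
    }
}

// Host-side launch for a width x height image, one thread per pixel:
// dim3 block(16, 16);
// dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
// scalePixels<<<grid, block>>>(d_in, d_out, width, height, 1.2f);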

1.2. CUDA®: A General-Purpose Parallel Computing Platform and Programming Model

In November 2006, NVIDIA introduced CUDA®, a general purpose parallel computing platform and programming model that leverages the parallel compute engine in NVIDIA GPUs to solve many complex computational problems in a more efficient way than on a CPU.

CUDA comes with a software environment that allows developers to use C as a high-level programming language. As illustrated by Figure 4, other languages, application programming interfaces, or directives-based approaches are supported, such as FORTRAN, DirectCompute, OpenACC.

Figure 4. GPU Computing Applications. CUDA is designed to support various languages and application programming interfaces.

1.3. A Scalable Programming Model

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.

The CUDA parallel programming model is designed to overcome this challenge while maintaining a low learning curve for programmers familiar with standard programming languages such as C.

At its core are three key abstractions - a hierarchy of thread groups, shared memories, and barrier synchronization - that are simply exposed to the programmer as a minimal set of language extensions.

These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. They guide the programmer to partition the problem into coarse sub-problems that can be solved independently in parallel by blocks of threads, and each sub-problem into finer pieces that can be solved cooperatively in parallel by all threads within the block.
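A minimal sketch of how these three abstractions surface in code (my own example, with illustrative names, assuming 256 threads per block): each block cooperatively sums its slice of an array in shared memory, synchronizing at a barrier between reduction steps, while the blocks themselves stay independent coarse-grained sub-problems:

__global__ void blockSum(const float *in, float *blockSums, int n)
{
    __shared__ float partial[256];                 // shared memory visible to the whole block
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    partial[tid] = (i < n) ? in[i] : 0.0f;         // fine-grained: one element per thread
    __syncthreads();                               // barrier: all loads finish before reducing

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            partial[tid] += partial[tid + stride]; // threads within the block cooperate
        __syncthreads();                           // barrier between reduction steps
    }

    if (tid == 0)
        blockSums[blockIdx.x] = partial[0];        // coarse-grained: one result per independent block
}

// e.g. blockSum<<<(n + 255) / 256, 256>>>(d_in, d_blockSums, n);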

This decomposition preserves language expressivity by allowing threads to cooperate when solving each sub-problem, and at the same time enables automatic scalability. Indeed, each block of threads can be scheduled on any of the available multiprocessors within a GPU, in any order, concurrently or sequentially, so that a compiled CUDA program can execute on any number of multiprocessors as illustrated by Figure 5, and only the runtime system needs to know the physical multiprocessor count.

This scalable programming model allows the GPU architecture to span a wide market range by simply scaling the number of multiprocessors and memory partitions: from the high-performance enthusiast GeForce GPUs and professional Quadro and Tesla computing products to a variety of inexpensive, mainstream GeForce GPUs (see CUDA-Enabled GPUs for a list of all CUDA-enabled GPUs).

Figure 5. Automatic Scalability

Note: A GPU is built around an array of Streaming Multiprocessors (SMs) (see Hardware Implementation for more details). A multithreaded program is partitioned into blocks of threads that execute independently from each other, so that a GPU with more multiprocessors will automatically execute the program in less time than a GPU with fewer multiprocessors.

Notes and tips on this installment:

CUDA's support for C++ is incomplete; there are various restrictions.

The ratio of arithmetic to memory operations is a metric that weighs a card's compute performance against its memory performance. It comes in two kinds of units: instructions per byte (or per 4 bytes), and instructions per instruction. In practice you do not need to memorize this ratio yourself, because the profiler will normally tell you whether you are compute-bound or memory-bound.
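As a rough worked example of this ratio (my own illustration, not from the original article): a float vector-add kernel performs one arithmetic operation per element while moving 12 bytes (two loads and one store), giving an arithmetic intensity of about 1/12 ≈ 0.08 operations per byte - low enough that such a kernel is memory-bound on any modern GPU, which is exactly the conclusion the profiler would report.

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];   // 1 arithmetic op per element, 2 loads + 1 store = 12 bytes moved
}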

If anything is unclear, please leave a comment below this article,

or post on our technical forum at bbs.gpuworld.cn.
