
Learning the DeepStream Video Analytics Course Together

Original post by GPUS Lady, from the GPUS developer column. Last updated 2020-02-13.

NVIDIA DLI has released a brand-new free course. Let's work through it together.

First, open the URL: https://courses.nvidia.com/courses/course-v1:DLI+C-IV-02+V1/about

Here is what the course covers:

You'll learn how to:

- Set up your Jetson Nano and (optional) camera

- Build end-to-end DeepStream pipelines to convert raw video input into insightful annotated video output

- Configure multiple video streams simultaneously

Note: although the course is built around the Jetson Nano, the Jetson Xavier and Jetson TX2 are supported as well!
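Before enrolling, it may help to see what "an end-to-end DeepStream pipeline" looks like in practice. Below is a minimal sketch using GStreamer's Python bindings; it assumes DeepStream is installed on the Jetson, and the input file and config name are placeholders rather than the course's exact files:

```python
# Minimal sketch of a DeepStream pipeline: file -> decode -> batch ->
# infer -> draw bounding boxes -> display. Paths are placeholders.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=/path/to/sample.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
# Block until an error or end-of-stream message arrives, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

The shape is always the same: decode, batch with nvstreammux, run inference with nvinfer, then draw the results with nvdsosd. The course notebooks build variations of exactly this chain.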

Now sign up for the course and click to enter.

Once inside, we see the course requirements:

Hardware:

- Jetson Nano Developer Kit

- Computer with Internet access and an SD card port

- A microSD memory card (32GB UHS-I minimum)

- Compatible 5V 4A power supply with 2.1mm DC barrel connector

- 2-pin jumper

- USB cable (Micro-B to Type-A)

A computer with an Internet connection and:

- The ability to flash your microSD card

- Administrative rights and the ability to install compatible media player software such as VLC

OPTIONAL: Compatible camera such as a Logitech C270 webcam

OPTIONAL: Wired Internet connection to the Jetson Nano (Ethernet port)

In short, you need a Jetson Nano Developer Kit with a microSD card of at least 32GB flashed with the system image, a 5V 4A power supply, and a jumper; a computer with Internet access and a media player such as VLC installed; and, optionally, a USB camera such as the Logitech C270. (The Jetson Nano bundles purchased from us actually include a 32GB microSD card with the system pre-flashed plus a 5V 4A power supply, and the revised Jetson Nano ships with a jumper. Even so, we recommend using a brand-new 32GB microSD card and learning to flash the system yourself by following this course.)

Now click into the course to reach the next screen, where the full course outline appears:

You can work through the course step by step. Here we jump to the final quiz section:

1. Which of the following statements are true about "bounding boxes" in the context of object detection? (Check all that apply)

- Bounding boxes are used to show target locations.

- A bounding box is a rectangular box determined with x and y coordinates of the axis.

- A bounding box is used to shrink the size of the overall image.

- In a DeepStream pipeline, the Gst-nvdsosd plugin is used to draw bounding boxes, text, and region-of-interest (RoI) polygons.

2. What is the Gst-nvinfer plugin used for? (Check all that apply)

- Performs transforms (format conversion and scaling) on the input frame based on network requirements, and passes the transformed data to the low-level library

- Performs inferencing on input data using NVIDIA TensorRT

- Sends UDP packets to the network

- Tracks objects between frames

3. What feature describes a "hardware-accelerated" plugin?

- Interacts with hardware, such as the GPU, DLA, or PVA, to deliver maximum performance

- Improves high-level software interactions, such as Python or Java

- Executes on embedded processors

4. Looking at the config file on your Jetson Nano at “/home/dlinano/deepstream_sdk_v4.0.2_jetson/sources/apps/dli_apps/deepstream-test1-rtsp_out/dstest1_pgie_config.txt”: what can we understand about the model we are using for object detection and counting? (Hint: check out the "model-file", "num-detected-classes", and "output-blob-names" keys)

- ResNet-10, Number of Classes: 4, Output Layer: conv2d_bbox

- AlexNet, Number of Classes: 100, Output Layer: conv2d_bbox

- ResNet-10, Number of Classes: 100, Output Layer: conv2d_bbox

- AlexNet, Number of Classes: 1000, Output Layer: conv2d_bbox
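If you want to answer question 4 directly from the device rather than from memory, note that DeepStream config files are INI-style, so the hinted keys can be read with Python's standard configparser (a quick sketch, assuming the course image's file layout):

```python
# Read the three hinted keys from the [property] section of the
# PGIE config file shipped on the course image.
import configparser

cfg = configparser.ConfigParser()
cfg.read("/home/dlinano/deepstream_sdk_v4.0.2_jetson/sources/apps/"
         "dli_apps/deepstream-test1-rtsp_out/dstest1_pgie_config.txt")
for key in ("model-file", "num-detected-classes", "output-blob-names"):
    print(key, "=", cfg["property"].get(key))
```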

5. Looking at the “C” file at “/home/dlinano/deepstream_sdk_v4.0.2_jetson/sources/apps/dli_apps/deepstream-test1-rtsp_out/deepstream_test1_app.c”: in line 67 we use the “NvDsBatchMeta” metadata structure. Why is it needed? (Check all that apply)

- It is not actually used.

- We need a metadata structure to hold frame, object, classifier, and label data.

- We need to access the metadata in this structure to determine how many objects are in each frame and display them.

- It is a piece of legacy code carried forward from a previous version of DeepStream.
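The file referenced in question 5 is C, but DeepStream's Python bindings (pyds) expose the same NvDsBatchMeta structure, so the idea is easy to sketch: a pad probe attached downstream of nvinfer walks the batch's frame list and counts the detected objects in each frame (a hedged sketch, not the course's code):

```python
# Pad probe sketch: walk NvDsBatchMeta -> per-frame metadata and report
# how many objects were detected in each frame.
import pyds
from gi.repository import Gst

def osd_sink_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("frame", frame_meta.frame_num,
              "objects:", frame_meta.num_obj_meta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach it to, e.g., the nvdsosd element's sink pad:
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_probe, 0)
```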

6. What type(s) of network(s) are supported by Gst-nvinfer?

- Multi-class object detection only

- Multi-label classification only

- Multi-class object detection, multi-label classification, and segmentation

7. What is Gst-nvstreammux used for? (Check all that apply)

- It forms a batch of frames from one or multiple input sources

- It collects an average of (batch-size/num-source) frames per batch from each source

- It runs inference

- It tracks objects between frames

8. How should we determine the batch size for multiple stream inputs?

- It should be equal to or proportional to the number of input streams

- Batch size is inversely proportional to the number of streams

- Batch size can be 1 even for multiple stream inputs
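Questions 7 and 8 are easier to picture with a concrete pipeline: two sources feed the same nvstreammux, and batch-size is set equal to the number of streams. A hedged sketch (the file URIs and config name are placeholders):

```python
# Two inputs batched by one nvstreammux (batch-size=2), inferred together,
# then composited into a 1x2 tile; a fakesink stands in for display.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "nvstreammux name=m batch-size=2 width=1280 height=720 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvmultistreamtiler rows=1 columns=2 ! nvvideoconvert ! nvdsosd ! fakesink "
    "uridecodebin uri=file:///path/a.mp4 ! m.sink_0 "
    "uridecodebin uri=file:///path/b.mp4 ! m.sink_1"
)
```

With batch-size=2 and two sources, nvstreammux collects on average batch-size/num-source = 1 frame per source per batch, which is exactly the behavior question 7 describes.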

9. Which of the following plugin lists are included in both Notebook #2, "Multiple Networks Application", and Notebook #3, "Multiple Stream Inputs"?

- Gst-nvinfer (for deep learning inference), Gst-nvstreammux (for batching video streams), Gst-nvtracker (tracks objects between frames)

- Gst-nvinfer (for deep learning inference), Gst-nvdsosd (for drawing bounding boxes), Gst-nvvideoconvert (for video format conversion)

- Gst-nvinfer (for deep learning inference), Gst-nvmultistreamtiler (composites a 2D tile from batched buffers), Gst-nvtracker (tracks objects between frames)

- Gst-nvvideoconvert (for video format conversion), Gst-nvstreammux (for batching video streams), Gst-nvmultistreamtiler (composites a 2D tile from batched buffers)

10. What are the DeepStream supported object detection networks? (Check all that apply)

- ResNet-10

- YOLO

- SSD

- Faster-RCNN

11. What can be understood by looking at the config file “/home/dlinano/deepstream_sdk_v4.0.2_jetson/sources/apps/dli_apps/deepstream-test3-mp4_out-yolo/dstest3_pgie_config_yolov3_tiny.txt”?

- Neural Network Model: YOLO-V3, Number of Classes: 4, Network Mode: FP32 (floating-point computations), Input format: BGR

- Neural Network Model: Tiny-YOLO-V3, Number of Classes: 80, Network Mode: FP32 (floating-point computations), Input format: RGB

- Neural Network Model: Tiny-YOLO-V3, Number of Classes: 80, Network Mode: INT8, Input format: BGR

12. Which of the following DeepStream plugins are part of the Primary Detector -> Object Tracker -> Secondary Classifier(s) sequence used in the pipeline for Multiple Network Applications in Notebook #2? (Check all that apply)

- Gst-nvinfer

- Gst-nvvideoconvert

- Gst-nvmultistreamtiler

- Gst-nvtracker
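As a hedged illustration of the question 12 sequence, the cascade can be written as a pipeline fragment: a primary nvinfer instance detects objects, nvtracker follows them between frames, and a second nvinfer instance classifies the tracked objects. The config file names and tracker library below are placeholders, not the notebook's exact settings:

```python
# Primary detector -> object tracker -> secondary classifier, as a
# gst-launch style fragment (the secondary nvinfer's config file must
# set it to secondary/classifier mode).
fragment = (
    "nvinfer config-file-path=pgie_config.txt ! "   # primary detector
    "nvtracker ll-lib-file=libnvds_mot_klt.so ! "   # tracks objects between frames
    "nvinfer config-file-path=sgie_config.txt"      # secondary classifier
)
```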

13. What kind of video container (file type) can a DeepStream video output be saved to? (Check all that apply)

- mp4

- avi

- anything GStreamer supports

14. Which of the following statements are true about DeepStream SDK? (Check all that apply)

- DeepStream SDK is based on the GStreamer framework

- DeepStream SDK is not designed to optimize performance

- DeepStream SDK is supported on systems that contain NVIDIA Jetson modules and NVIDIA dGPU adapters

- DeepStream SDK has a plugin interface for TensorRT for inferencing deep learning networks

15. Which of the following are possible use cases of DeepStream? (Check all that apply)

- Intelligent Video Analytics

- AI-based video and image understanding

- Multi-sensor processing

- Cloud-based offline processing

After finishing all the questions, click Next. On the final Survey page the "Next" button turns gray, which means you have completed the whole course. Then click "Progress" at the top of the page

and you will see the complete progress report, along with the option to print your certificate!

Now click to claim your certificate and witness the moment of magic!

Original statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

In case of infringement, please contact cloudcommunity@tencent.com for removal.

