Keyword

Learning

E-Learning: short for Electronic Learning, rendered in Chinese as "digital learning", "electronic learning", or "online learning". The different renderings reflect different emphases: one stresses learning based on the Internet; another stresses electronic delivery; a third stresses combining digital content with network resources. All three emphasize digital technology and using it to guide and reshape how education is delivered. An online learning environment contains large amounts of data, archival material, programs, teaching software, interest discussion groups, newsgroups, and other learning resources, forming a highly integrated resource library.

Related content

Federated Learning

Federated Learning (FL) provides clients with a joint modeling service that protects their own data and supports fast, low-cost iteration through remote operation.
  • Deep learning: Zero-shot Learning, One-shot Learning, Few-shot Learning

    To walk the model-training "alchemy" road faster, better, and more cheaply, practitioners began studying zero-shot, one-shot, and few-shot learning. The learning regimes are: zero-shot learning, one-shot learning, few-shot learning, and traditional learning. One-shot learning means learning from a single example; per Wikipedia, "One-shot learning is an object categorization problem in computer vision." Few-shot learning means learning from a small number of examples, with one-shot learning as the single-example case. Traditional learning is the usual deep-learning regime of massive data plus repeated training.
  • Meta Learning and Few-Shot Learning

    Meta Learning and Few-Shot Learning. 1. Meta Learning: meta-learning means enabling a machine to learn how to learn ("learning to learn"). It differs from lifelong learning: meta-learning hopes a machine can learn a model by itself on each of many different tasks, while lifelong learning hopes to learn one model that can handle many tasks. To learn faster, better, and more cheaply, researchers proposed the few-shot, one-shot, and zero-shot learning family. One-shot learning is much like few-shot learning: each class has only a small number of samples (one or a few), and a generalized mapping produces the desired result. This is an application of meta-learning to supervised learning.
  • deep learning paper

    Some highlighted papers are selected just for reference; most of them are associated with machine learning (deep learning) for 3D data. From the perspective of 3D data representation: (1) View-based: Multi-view (CVPR 2017); PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space – Qi et al.; (CVPR 2018) Dynamic Graph CNN for Learning on Point Clouds – Wang et al.; (CVPR 2018) AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation – Groueix et al.
  • Q-Learning

    Contents: What is Q-Learning? Q-learning is a value-based reinforcement learning algorithm. Suppose a robot must cross a maze and reach the endpoint; there are mines, and the robot can move only one tile at a time; if the robot steps on a mine, it dies. To learn each value of the Q-table, we use the Q-learning algorithm. The mathematical basis of Q-learning is the Q-function, which uses the Bellman equation and takes two inputs: a state (s) and an action (a). This article is translated from "An introduction to Q-Learning: reinforcement learning". Wikipedia version: Q-learning is a model-free reinforcement learning algorithm. Its goal is to learn a policy that tells an agent what action to take under what circumstances. It does not require a model of the environment (hence "model-free"), and it can handle problems with stochastic transitions and rewards without requiring adaptations. For any finite Markov decision process (FMDP), Q-learning finds an optimal policy in the sense of maximizing the expected value of the total reward over any and all successive steps, starting from the current state.
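As a concrete illustration of the Q-table update described above, here is a minimal sketch of tabular Q-learning on a hypothetical one-dimensional "maze" (the environment, state count, and hyperparameters are all invented for illustration, not taken from the article):

```python
import random

# Toy 1-D "maze": states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 ends the episode with reward +10; every move costs -1.
def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 10.0 if nxt == 4 else -1.0
    return nxt, reward, nxt == 4

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(5)]        # Q-table: 5 states x 2 actions
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2, r, done = step(s, a)
            # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = train()
greedy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(4)]  # best action per non-terminal state
```

On this toy maze the learned greedy policy moves right in every state. Real implementations typically decay epsilon over time and monitor convergence of the Q-values.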
  • Machine Learning & Deep Learning resources (Chapter 1)

    100 Best GitHub: Deep Learning. Intro: 100 Best GitHub: Deep Learning. "UFLDL": Stanford professor Andrew Ng's Deep Learning tutorial. "Deep Learning and Shallow Learning". Intro: a good article comparing Deep Learning and Shallow Learning, by Chiyuan, a Zhejiang University graduate and MIT PhD student. Deep Learning study-notes series, parts (2), (3), and (4). "Learning Spark". Intro: the ebook Learning Spark. "Learning Deep Learning". Intro: a deep learning resource page with rich material. "Learning Deep Learning". Intro: the free ebook Learning Deep Learning.
  • WHEN NOT TO USE DEEP LEARNING

    Reposted from: http://hyperparameter.space/blog/when-not-to-use-deep-learning ... I know it's a weird way to start a... In this post, I wanted to visit use cases in machine learning where deep learning would not really make sense. Deep learning has become an undeniable force in machine learning and an important tool in the arsenal... This doesn't quite happen with many other models in machine learning.
  • 03 Types of Learning

    Structured Learning: structured learning, with common examples such as sentence ⇒ structure (class of each word) (sequence labeling) and protein data ⇒ protein ... ranked by goodness. Supervised Learning: every x comes with a corresponding y. Unsupervised Learning: diverse, with possibly very different performance goals, e.g. Internet logs ⇒ intrusion alert. Semi-supervised Learning: leverage unlabeled data ... PLA can be easily adapted to the online protocol. Active Learning: when the model is unsure, hand the question to the user to obtain high-quality samples.
  • Learning Rate Decay

    In the leftmost figure, the learning rate is set too small, so finding the global minimum may take a very long time; the middle figure shows a well-chosen learning rate that finds the global minimum quickly; in the rightmost figure the learning rate is set too large, which can make the loss jump up and down and never reach the global minimum. Clearly, choosing a suitable learning rate takes some skill. As shown in the figure, a learning rate that decays automatically can speed up optimization to some degree. PyTorch provides a scheduler that implements learning rate decay: torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-8). Here patience=10 means the patience value is 10: when the loss fails to improve 10 times in a row, the scheduler starts reducing the learning rate.
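The reduce-on-plateau behaviour described above can be sketched in a few lines of plain Python. This is a simplified, hypothetical reimplementation for illustration only; PyTorch's actual ReduceLROnPlateau has more options (threshold modes, cooldown, per-group rates):

```python
class PlateauDecay:
    """Minimal sketch of reduce-on-plateau learning rate decay, loosely
    modeled on PyTorch's ReduceLROnPlateau with mode='min'."""

    def __init__(self, lr=0.1, factor=0.1, patience=10, min_lr=1e-6):
        self.lr = lr                # current learning rate
        self.factor = factor        # multiplicative decay factor
        self.patience = patience    # how many non-improving epochs to tolerate
        self.min_lr = min_lr        # floor for the learning rate
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        """Call once per epoch with the validation loss; returns the lr."""
        if loss < self.best:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0
        return self.lr
```

For example, with patience=2, a loss that stops improving triggers a tenfold learning-rate cut after three flat epochs.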
  • Machine Learning classic models: Decision Tree (DT) Learning

    This article organizes the key points of decision tree learning. Wikipedia's definition: "Decision tree learning uses a decision tree (as a predictive model) to go from observations... Decision tree learning is the construction of a decision tree from class-labeled training tuples." ... split into two nodes N1 and N2; 4) repeat steps 2-3 on N1 and N2 until every node is pure enough. References: 1) Wikipedia, https://en.wikipedia.org/wiki/Decision_tree_learning
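The "keep splitting until each node is pure enough" step above is usually driven by an impurity measure. Here is a minimal sketch using Gini impurity to pick a split on one numeric feature (the helper names and data are my own illustration, not from the article):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum_k p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Pick the threshold on one numeric feature that minimizes the
    weighted Gini impurity of the two child nodes."""
    best = (float("inf"), None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:              # skip degenerate splits
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        best = min(best, (score, t))
    return best                                # (weighted impurity, threshold)
```

A full tree builder would apply best_split recursively to the resulting child nodes (the N1/N2 of step 4) until each node is pure enough.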
  • Why, What and How Of Machine Learning?

    Why do we use machine learning? ... for learning. Artificial intelligence is a broad science of learning and acting like humans; machine learning is the... Course to start a career as a Machine Learning Engineer. How does machine learning work? ... is supervised learning. 2.) ...
  • Instance Based Learning

    Udacity Machine Learning: Instance Based Learning. Supervised learning: given a dataset, an algorithm trains a function that can then make predictions on new data. Instance-based learning skips the training step: all the data is simply stored in a database, and when new data comes in we just look it up there. Advantages: Remember: trustworthy, no smoothing or other approximation needed; Fast: no learning step; Simple. Disadvantages: Overfitting: it relies too heavily on the stored data, and it appears able to return only stored values rather than new ones. As an application note, LR is not necessarily slower than NN, because learning happens only once while queries can happen many times. In this example, compute the result for the point q under different domains and different values of k, and compare with the true value 18...
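The look-it-up-in-the-database idea above is exactly what k-nearest-neighbours classification does. A minimal sketch, with the dataset and function names invented for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest stored
    instances. `train` is a list of (feature_vector, label) pairs;
    there is no training step, just a lookup at query time."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

This makes the trade-off described above concrete: "learning" is free (just store the pairs), but every query pays the cost of scanning the stored data, and predictions can only echo the labels already stored.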
  • Learning to Rank Overview

    Learning to Rank (L2R) is a machine learning approach to building ranking models; it plays an important role in information retrieval, natural language processing, data mining, and similar settings. Among these methods, LambdaMART (an improvement on RankNet and LambdaRank) showed the best performance in the Yahoo Learning to Rank Challenge. References: Learning to Rank, Hang Li, http://www.aclweb.org/anthology/P09-5005; Learning to Rank for Information Retrieval, Tie-Yan Liu, http://www.dblab.ntua.gr/~gtsat/collection/topK/learning_to_rank_tutorial_-_www_-_2008.pdf; "A summary of basic learning-to-rank algorithms", https://zhuanlan.zhihu.com/p/26539920; "An introduction to Learning to Rank", http://www.sohu.com/a/136316308_773498
  • Meta Learning: a quick tutorial

    Original link: "Meta Learning单排小教学". Although meta-learning is very popular now, many readers still do not really understand it. This post covers the concept of meta-learning, several approaches to researching it, and where it is headed. Having said what deep learning studies, what does meta-learning study? Meta-learning studies tasks! Deep learning does its research inside a task; meta-learning works outside the task, at a higher level, where the tasks themselves are fed in as samples. We will use the few-shot learning problem to explain this. But before that, let us talk about the significance of meta-learning from the perspective of human intelligence and bionics. 2. Why study meta-learning?
  • Feasibility of Learning

    Feasibility of Learning: 0. Preface; 1. Learning is Impossible; 2. Probability to the Rescue; 3. Connection to Learning; 4. Connection to Real Learning; 5. Closing words. 0. Preface: the previous sections introduced the categories of machine learning and the binary classification problem and its algorithms. 1. Learning is Impossible? In this section Prof. Lin gives the example shown in the figure: the input feature x is binary and three-dimensional, so there are 8 possible inputs, of which the training set D contains 5. 3. Connection to Learning [transfer]: we transfer the bin-and-marbles sampling idea to machine learning. 4. Connection to Real Learning [Hoeffding's inequality in ML]: the above covers the case of a fixed h; what should we do when there are many hypotheses h?
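The Hoeffding bound behind the "probability to the rescue" step can be written out explicitly. For a fixed hypothesis $h$, with in-sample error $\nu$ estimated from $N$ samples and out-of-sample error $\mu$ (a standard statement; notation follows the bin-and-marbles setup above):

```latex
% Fixed hypothesis h: the sample estimate \nu is probably close to \mu
\mathbb{P}\left[\,|\nu - \mu| > \epsilon\,\right] \le 2\exp\left(-2\epsilon^{2}N\right)

% Many hypotheses: with M candidates, the union bound answers the
% "what if there are many h" question at the cost of a factor M
\mathbb{P}\left[\,\exists\, h \in \mathcal{H} : |\nu_h - \mu_h| > \epsilon\,\right]
  \le 2M\exp\left(-2\epsilon^{2}N\right)
```

The second inequality is why a finite hypothesis set still permits learning: the bound stays small as long as $N$ is large relative to $\log M$.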
  • The most complete deep learning resource collection (GitHub: Awesome Deep Learning)

    Compiled from: Awesome Deep Learning. Neural Networks and Deep Learning by Michael Nielsen (Dec 2014); Deep Learning by Microsoft Research (2013); Deep Learning Tutorial by LISA lab (2015); Deep Learning by Nvidia (2015); Graduate Summer School: Deep Learning, Feature Learning by Geoffrey...; Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng; Recent Developments in Deep Learning; News: Machine Learning is Fun!
  • Learning From Data Note 1

    (Since this is an English-language course, the notes are kept in English, which is also good practice.) The Learning Problem. The essence of machine learning (three components): a pattern (historical records)...; Hypothesis: g : X -> Y (the final hypothesis is the trained model; g is close to f). Learning model: learning model = hypothesis set + learning algorithm. How does it work? Perceptron Learning Algorithm (PLA): the perceptron implements h(x) = sign(w * x).
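The rule h(x) = sign(w * x) mentioned above can be sketched as follows. The toy data and the appended bias coordinate are my own illustration; on linearly separable data this is the standard PLA update:

```python
def pla(points, labels, max_iters=1000):
    """Perceptron Learning Algorithm: learns w so that sign(w . x) matches
    the labels (+1/-1). A bias term is appended as an extra coordinate."""
    dim = len(points[0])
    w = [0.0] * (dim + 1)                      # last entry is the bias weight
    def predict(x):
        s = sum(wi * xi for wi, xi in zip(w, list(x) + [1.0]))
        return 1 if s > 0 else -1
    for _ in range(max_iters):
        mistakes = 0
        for x, y in zip(points, labels):
            if predict(x) != y:
                # PLA update rule: w <- w + y * x (including the bias slot)
                for i, xi in enumerate(list(x) + [1.0]):
                    w[i] += y * xi
                mistakes += 1
        if mistakes == 0:                      # converged on separable data
            break
    return w
```

PLA is guaranteed to converge only when the data is linearly separable, which is why the course later moves on to the pocket algorithm and beyond.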
  • Q: What is the relationship between meta-learning and few-shot learning?

    ..., whereas meta-learning makes stronger assumptions. Placed inside meta-learning, this is about automatically designing the loss, or the gradient, with a lightweight process: learning to learn. I do not know meta-learning very well and have not read deeply into the related papers, but I have done some research on few-shot learning. My feeling is that few-shot learning is mainly an application scenario, a general problem, while meta-learning is a learning strategy, a framework. Approached this way, it is a subset of meta-learning.
  • A Theory of State Abstraction for Reinforcement Learning

    A Theory of State Abstraction for Reinforcement Learning. David Abel, Department of Computer Science, Brown University, david_abel@brown.edu. Abstract: Reinforcement learning presents a challenging problem: agents... To this end, the goal of my doctoral research is to characterize the role abstraction plays in reinforcement learning (RL), drawing on Information Theory, Computational Complexity, and Computational Learning Theory. My interest in this question stems from its foundational role in many aspects of learning and decision...
  • [Deep Learning] resource roundup

    ...and Aaron Courville; Neural Networks and Deep Learning by Michael Nielsen; Deep Learning by Microsoft; ...Learning and Unsupervised Feature Learning by Andrew Ng; Recent Developments in Deep Learning by Geoff...; "The Deep Learning Revolution: Rethinking Machine Learning Pipelines": the deep learning revolution; "Deep Learning and Shallow Learning": an introduction to Deep Learning vs Shallow Learning; "A First Encounter with Machine..."; "Multimodal Deep Learning": Multimodal Deep Learning papers from Stanford.
