
#Functional Programming

What does "algebra" mean in programming?

迷迷学生
I'll start with the programming-related material and then add some mathematics, keeping things as concrete and down-to-earth as possible.

Some quotes on coinduction, from http://www.cs.umd.edu/~micinski/posts/2012-09-04-on-understanding-coinduction.html :

Induction is about finite data, co-induction is about infinite data. The typical example of infinite data is the type of a lazy list (a stream). For example, let's say that we have the following object in memory. The computer can't hold all of π, because it only has a finite amount of memory! But what it can do is hold a finite program, which will produce any arbitrarily long expansion of π that you desire. As long as you only use finite pieces of the list, you can compute with that infinite list as much as you need. However, consider the following program:

  let print_third_element (k : int list) =
    match k with
    | _ :: _ :: thd :: _ -> print_int thd
    | _ -> ()

  print_third_element pi

And from http://adam.chlipala.net/cpdt/html/Coinductive.html :

In lazy functional programming languages like Haskell, infinite data structures are everywhere. Infinite lists and more exotic datatypes provide convenient abstractions for communication between parts of a program. Achieving similar convenience without infinite lazy structures would, in many cases, require acrobatic inversions of control flow.

See also the talks at http://www.alexandrasilva.org/#/talks.html

[image]

Relating the ambient mathematical context to usual programming tasks

What is "an algebra"? Algebraic structures generally look like:
1. Stuff
2. What the stuff can do

This should sound like objects with 1. properties and 2. methods. Or even better, it should sound like type signatures. Standard mathematical examples include monoid ⊃ group ⊃ vector-space ⊃ "an algebra".
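The lazy-list idea above can be sketched directly in Haskell, where lists are lazy by default. This is a minimal sketch: piDigits is a made-up stand-in for an actual expansion of π, not a real computation of its digits.

```haskell
-- An infinite, lazily evaluated stream: only the elements you
-- actually demand are ever computed.
piDigits :: [Int]
piDigits = cycle [3, 1, 4, 1, 5, 9]  -- stand-in, NOT the real digits of pi

-- Analogue of the OCaml print_third_element: the pattern match
-- forces only the first three elements of the infinite list,
-- so the program terminates despite the infinite input.
thirdElement :: [Int] -> Int
thirdElement (_ : _ : x : _) = x
thirdElement _               = error "fewer than three elements"

main :: IO ()
main = print (thirdElement piDigits)
```

Pattern matching only evaluates as much of the stream as the pattern demands, which is exactly why the quoted example works on infinite data.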
Monoids are like automata: sequences of verbs (e.g., f.g.h.h.nothing.f.g.f). A git log that always adds history and never deletes it would be a monoid but not a group. If you add inverses (e.g. negative numbers, fractions, roots, deleting accumulated history, un-shattering a broken mirror) you get a group.

Groups contain things that can be added or subtracted together. For example Durations can be added together. (But Dates cannot.) Durations live in a vector-space (not just a group) because they can also be scaled by outside numbers. (A type signature of scaling :: (Number, Duration) → Duration.)

Algebras ⊂ vector-spaces can do yet another thing: there's some m :: (T,T) → T. Call this "multiplication" or don't, because once you leave Integers it's less obvious what "multiplication" (or "exponentiation") should be. This is why people look to (category-theoretic) universal properties: to tell them what multiplication should do or be like.

[image]

Algebras → Coalgebras

Comultiplication is easier to define in a way that feels non-arbitrary than multiplication is, because to go from T → (T,T) you can just repeat the same element ("diagonal map" – like diagonal matrices/operators in spectral theory). Counit is usually the trace (sum of diagonal entries), although again what's important is what your counit does; trace is just a good answer for matrices.

The reason to look at a dual space, in general, is because it's easier to think in that space. For example it's sometimes easier to think about a normal vector than about the plane it's normal to, but you can control planes (including hyperplanes) with vectors (and now I'm speaking of the familiar geometric vector, like in a ray-tracer).
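The Duration example above can be written down as Haskell type-class instances. This is an illustrative sketch: Duration, scale, and m are invented names, not a real library API.

```haskell
-- Durations form a monoid (indeed a group: negation gives inverses),
-- and can also be scaled by outside numbers, i.e. a vector space.
newtype Duration = Duration Double
  deriving (Eq, Show)

instance Semigroup Duration where
  Duration a <> Duration b = Duration (a + b)  -- "what the stuff can do": addition

instance Monoid Duration where
  mempty = Duration 0                          -- the do-nothing duration

-- The text's scaling :: (Number, Duration) -> Duration, curried:
scale :: Double -> Duration -> Duration
scale k (Duration d) = Duration (k * d)

-- The extra operation that makes "an algebra": some m :: (T, T) -> T.
-- For durations there is no canonical choice; pointwise product is one
-- arbitrary pick, which is exactly the text's point about "multiplication".
m :: (Duration, Duration) -> Duration
m (Duration a, Duration b) = Duration (a * b)
```

Note how each layer of the hierarchy (monoid ⊃ group ⊃ vector-space ⊃ algebra) corresponds to one more operation in the type signatures.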

How can time functions exist in functional programming?

嗨喽你好摩羯座
In Haskell, side effects are handled with a construct called a monad. A monad basically means that you wrap values in a container and have some functions to chain functions from values to values in the container. If our container has the type:

  data IO a = IO (RealWorld -> (a, RealWorld))

then we can safely execute IO actions. This type means: an action of type IO is a function that takes a token of type RealWorld and returns a new token together with a result.

The idea behind this is that every IO action mutates the external state, represented by the magic RealWorld token. Using monads, multiple functions that mutate the real world can be chained together. The most important function of a monad is >>=, pronounced bind:

  (>>=) :: IO a -> (a -> IO b) -> IO b

>>= takes an action and a function that takes the result of that action and creates a new action out of it. The return type is the new action. For example, suppose there is a function now :: IO String that returns a string representing the current time. We can chain it with putStrLn to print it out:

  now >>= putStrLn

Or written in do-notation, which is more familiar to imperative programmers:

  do
    currTime <- now
    putStrLn currTime

All of this is pure, because the mutation of and information about the outside world are mapped onto the RealWorld token. So every time you run this action you will of course get a different output, but the inputs are not the same either: the RealWorld token is different...
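The RealWorld-token idea can be modeled as ordinary state-passing. This is a toy sketch under invented names (World, MyIO, bind, now), not GHC's actual IO implementation:

```haskell
-- A toy model of IO: an action is a function from a world token
-- to a result paired with the next world token.
newtype World = World Int
  deriving (Eq, Show)

newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

-- bind chains two world-mutating actions, threading the token through,
-- just like the (>>=) described in the answer.
bind :: MyIO a -> (a -> MyIO b) -> MyIO b
bind (MyIO f) k = MyIO $ \w ->
  let (a, w') = f w      -- run the first action on the incoming world
      MyIO g  = k a      -- build the next action from its result
  in g w'                -- run it on the updated world

-- A pretend clock: the "current time" is read off the world token,
-- and reading it advances the world.
now :: MyIO String
now = MyIO $ \(World t) -> ("t=" ++ show t, World (t + 1))
```

Running the same action on the same World token always gives the same answer, which is the sense in which the real IO type stays pure: different runs differ only because the token differs.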