Other answers
KL divergence was originally introduced from information theory; since the question is about its use in ML, I won't go into much detail. A brief summary: given the true probability distribution P and an approximate distribution Q, the KL divergence expresses how many extra bits per sample from P we need if we store the data using an optimal compression scheme built for Q, compared with using an optimal compression scheme built for P itself. This interpretation follows from the Kraft–McMillan theorem.
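The "extra bits" reading can be checked numerically. This is a minimal sketch with two hypothetical discrete distributions: the KL divergence (in base 2) equals the cross-entropy minus the entropy, i.e. the expected extra code length per sample.

```python
import math

# Two discrete distributions over the same support (hypothetical values).
p = [0.5, 0.25, 0.125, 0.125]   # true distribution P
q = [0.25, 0.25, 0.25, 0.25]    # approximate distribution Q

# The optimal code length for symbol i under a distribution is -log2(prob) bits.
# Expected bits when coding samples from P with a code optimal for Q:
cross_entropy = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))
# Expected bits with a code optimal for P itself:
entropy = -sum(pi * math.log2(pi) for pi in p)

# KL(P||Q) is exactly the expected number of extra bits per sample.
kl = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))
print(cross_entropy - entropy, kl)  # both equal 0.25
```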
So it is natural to use it as a statistical distance, because of its intrinsic probabilistic meaning. However, precisely because of that meaning, the asymmetry the asker mentions is unavoidable: D(P||Q) and D(Q||P) answer "distance" questions under different compression schemes.
As for statistical distances in general: indeed, there is no essential difference. More broadly, KL divergence can be seen as a special case of the phi-divergence family (with phi(t) = t log t). Note that the phi-divergence is usually defined for discrete probability distributions, but replacing the sum with an integral naturally gives the continuous version.
There is no essential difference in working with other divergences from this family, as long as phi is convex and closed,
because they all carry similar probabilistic meanings. For example, Pinsker's inequality guarantees that the KL divergence gives a tight upper bound on the total variation metric; other divergence metrics admit similar bounds, differing at most in the order and constants. Moreover, minimization problems defined with any of these divergences are convex, though the concrete computational performance can differ, which is part of why KL is still used the most.
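Pinsker's inequality states TV(P, Q) ≤ sqrt(KL(P||Q) / 2). The sketch below checks this bound on randomly generated discrete distribution pairs (the distributions and support size are arbitrary choices for illustration).

```python
import math
import random

random.seed(0)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def kl(p, q):
    # KL divergence in nats (natural log)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    # total variation distance
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Pinsker's inequality: TV(P,Q) <= sqrt(KL(P||Q) / 2), checked on random pairs.
for _ in range(1000):
    p = normalize([random.random() for _ in range(5)])
    q = normalize([random.random() for _ in range(5)])
    assert tv(p, q) <= math.sqrt(kl(p, q) / 2) + 1e-12
print("Pinsker's bound held on all sampled pairs")
```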
Reference: Bayraksan G., Love D. K. Data-Driven Stochastic Programming Using Phi-Divergences. — 覃含章
Interesting question; KL divergence is something I'm working with right now.
KL divergence KL(p||q), in the context of information theory, measures the number of extra bits (or nats, if the natural log is used) needed to describe samples from the distribution p with a coding based on q instead of on p itself. From the Kraft–McMillan theorem, we know that a coding scheme for values from a set X can be represented as a distribution q(x_i) = 2^(-l_i) over X, where l_i is the length of the code for x_i in bits.
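The correspondence between code lengths and an implied distribution can be sketched directly. The code lengths below are a hypothetical example; the Kraft–McMillan condition sum(2^-l_i) ≤ 1 must hold for a uniquely decodable code, and q(x_i) = 2^(-l_i) is the distribution for which that code is optimal.

```python
import math

# Code lengths (in bits) for four symbols — a hypothetical prefix code.
lengths = [1, 2, 3, 3]

# Kraft–McMillan: a uniquely decodable code with these lengths exists
# iff sum(2^-l_i) <= 1.
kraft_sum = sum(2.0 ** -l for l in lengths)
assert kraft_sum <= 1.0

# The implied distribution q(x_i) = 2^(-l_i): this code is optimal for
# exactly this q, assigning shorter codes to more probable symbols.
q = [2.0 ** -l for l in lengths]
print(kraft_sum, q)  # 1.0 [0.5, 0.25, 0.125, 0.125]
```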
We know that KL divergence is also the relative entropy between two distributions, and that gives some intuition as to why it's used in variational methods. Variational methods use functionals as measures in their objective functions (e.g. the entropy of a distribution takes in a distribution and returns a scalar quantity). KL divergence is interpreted as the "loss of information" when using one distribution to approximate another, which is desirable in machine learning: in models where dimensionality reduction is used, we would like to preserve as much information about the original input as possible. This is most obvious in VAEs, which use the KL divergence between the posterior q and the prior p over the latent variable z. Likewise, you can refer to EM, where we decompose
ln p(X) = L(q) + KL(q||p)
Here we maximize the lower bound L(q) by minimizing the KL divergence, which becomes 0 when q(Z) = p(Z|X). However, in many cases we wish to restrict the family of distributions and parameterize q(Z) with a set of parameters w, so that we can optimize w.r.t. w.
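The decomposition ln p(X) = L(q) + KL(q||p) can be verified exactly on a toy discrete model. All numbers below are hypothetical, chosen only to make the identity concrete: L(q) is the ELBO and KL(q||p(Z|X)) is the gap to the log marginal likelihood.

```python
import math

# Toy discrete model: joint p(x, z) for one observed x and latent z in {0,1,2}.
p_joint = [0.1, 0.25, 0.15]            # p(x, z) for each value of z
p_x = sum(p_joint)                      # marginal likelihood p(x) = 0.5
p_post = [pj / p_x for pj in p_joint]   # posterior p(z|x)

q = [0.2, 0.5, 0.3]                     # an arbitrary variational distribution q(z)

# ELBO: L(q) = sum_z q(z) ln( p(x,z) / q(z) )
L = sum(qz * math.log(pj / qz) for qz, pj in zip(q, p_joint))
# KL(q || p(z|x))
kl = sum(qz * math.log(qz / pp) for qz, pp in zip(q, p_post))

print(L + kl, math.log(p_x))  # equal: the decomposition holds exactly
```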
Note that KL(p||q) = -Σ_Z p(Z) ln(q(Z)/p(Z)), and so KL(p||q) is different from KL(q||p). This asymmetry, however, can be exploited: in cases where we wish to learn the parameters of a distribution q that over-compensates for (covers all of) p, we can minimize KL(p||q); conversely, when we wish q to capture just the main components of p, we can minimize KL(q||p). The example in Bishop's book illustrates this well.
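The asymmetry shows up starkly when q assigns zero mass somewhere p does not. In this hypothetical example, the forward KL(p||q) blows up (q must cover all of p's support), while the reverse KL(q||p) stays finite (q is free to cover only part of p):

```python
import math

def kl(p, q):
    # KL(p||q) in nats; returns inf if q has zero mass where p does not
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return math.inf
            total += pi * math.log(pi / qi)
    return total

# p spreads mass over three outcomes; q concentrates on the first two.
p = [0.4, 0.4, 0.2]
q = [0.5, 0.5, 0.0]

print(kl(p, q))  # inf: forward KL punishes q for missing any part of p
print(kl(q, p))  # finite: reverse KL lets q lock onto a subset of p's modes
```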
KL divergence belongs to the alpha family of divergences, in which the forward and reverse KL arise as separate limits of the parameter alpha. When alpha = 0, the divergence becomes symmetric and is linearly related to the Hellinger distance. There are other metrics, such as the Cauchy–Schwarz divergence, which are symmetric, but in machine learning settings where the goal is to learn simpler, tractable parameterizations of distributions that approximate a target, they may not be as useful as KL. — 热心网民
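For contrast with KL's asymmetry, here is a sketch of the Hellinger distance on two hypothetical distributions: it is symmetric in its arguments and bounded in [0, 1].

```python
import math

def hellinger(p, q):
    # Hellinger distance: (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

p = [0.7, 0.2, 0.1]
q = [0.3, 0.4, 0.3]

# Symmetric, unlike KL, and always in [0, 1].
print(hellinger(p, q), hellinger(q, p))
```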