Why is KL divergence discussed so often in statistical learning algorithms?

Asked by: Anonymous user · 5 hours ago · tag: KL divergence

As is well known, KL divergence is just one of many statistical distances. Many classic statistical learning algorithms (such as EM) and variational methods for probabilistic graphical models can be interpreted in terms of KL divergence. Can other statistical distances also be used in convergence analysis and in variational methods? Does the asymmetry of KL divergence have any drawbacks for theoretical analysis? Do these machine learning questions have an information-theoretic explanation? Related question: can anyone compare the chi-squared distance, KL divergence, and mutual information?
Other answers
KL divergence was originally introduced from information theory, but since the asker is interested in its use in ML, I won't go into much detail. In brief: given a true distribution P and an approximating distribution Q, KL divergence expresses how many extra bits are needed, per sample drawn from P, if we store the samples using an optimal compression scheme built for Q, compared with directly using an optimal compression scheme built for P. The correspondence between code lengths and distributions behind this interpretation is the content of the Kraft–McMillan theorem.
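A minimal sketch of that interpretation, assuming numpy and two made-up discrete distributions (not from the answer): the extra bits per sample equal the cross-entropy H(P, Q) minus the entropy H(P), which is exactly KL(P||Q) measured in bits.

```python
# Minimal sketch: KL(P||Q) as the expected number of extra bits per sample.
# The distributions here are illustrative examples, not from the original post.
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # "true" distribution P
q = np.array([0.4, 0.4, 0.2])   # approximating distribution Q

# An optimal code for X ~ P spends H(P) bits on average; a code built for Q
# spends the cross-entropy H(P, Q) bits on samples that actually come from P.
entropy_p = -np.sum(p * np.log2(p))          # H(P)
cross_entropy = -np.sum(p * np.log2(q))      # H(P, Q)
kl_bits = np.sum(p * np.log2(p / q))         # KL(P||Q) in bits

print(cross_entropy - entropy_p, kl_bits)    # both give the same overhead (~0.036 bits)
```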

So it can naturally be used as a statistical distance, because of this intrinsic probabilistic meaning. However, precisely because of that meaning, the asymmetry the asker mentions is unavoidable: D(P||Q) and D(Q||P) answer the "distance" question under two different compression schemes.

As for more general statistical distances: indeed, they have no essential difference. More broadly, KL divergence can be viewed as a special case of the phi-divergences, obtained by taking phi(t) = t log t. For discrete probability distributions P = (p_1, ..., p_n) and Q = (q_1, ..., q_n), the phi-divergence is D_phi(P||Q) = sum_i q_i * phi(p_i / q_i); replacing the sum with an integral gives the continuous version in the natural way.
In theory there is no essential difference in working with other divergences, as long as phi is convex and closed, because they all carry a similar probabilistic meaning. For example, Pinsker's inequality guarantees that KL divergence gives a tight bound on the total variation metric; other divergences should admit similar bounds, differing at most in the order and the constants. Moreover, the minimization problems defined with any of these divergences are convex; the concrete computational performance can differ, though, which is one reason KL is still used most often.
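A small numerical sketch of the two claims above, with arbitrary example distributions and numpy: the phi-divergence with phi(t) = t log t reduces to KL, and Pinsker's inequality TV(P,Q) <= sqrt(KL(P||Q)/2) holds.

```python
# Minimal sketch: KL as the phi-divergence with phi(t) = t*log(t), plus a
# numerical check of Pinsker's inequality TV(P,Q) <= sqrt(KL(P||Q)/2).
# The distributions are arbitrary illustrative examples.
import numpy as np

def phi_divergence(p, q, phi):
    """D_phi(P||Q) = sum_i q_i * phi(p_i / q_i) for discrete P, Q."""
    return np.sum(q * phi(p / q))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.3, 0.4, 0.3])

kl = phi_divergence(p, q, lambda t: t * np.log(t))   # phi(t) = t log t  ->  KL(P||Q)
tv = 0.5 * np.sum(np.abs(p - q))                     # total variation distance

print(kl)                        # KL(P||Q) in nats
print(tv, np.sqrt(kl / 2))       # Pinsker: the left value never exceeds the right
```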


Reference: Bayraksan G, Love DK. Data-Driven Stochastic Programming Using Phi-Divergences.
覃含章 · 5 hours ago · 0 comments
Interesting question; KL divergence is something I'm working with right now.

KL divergence KL(p||q), in the context of information theory, measures the number of extra bits (or nats) needed to describe samples from the distribution p with a coding scheme based on q instead of p itself. From the Kraft–McMillan theorem, we know that a coding scheme for values from a set X corresponds to a probability distribution over X with q(x_i) = 2^(-l_i), where l_i is the length of the code for x_i in bits.
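A minimal sketch of that Kraft–McMillan correspondence, assuming numpy and an arbitrary example distribution q: code lengths l_i = ceil(-log2 q_i) always satisfy Kraft's inequality sum_i 2^(-l_i) <= 1, and the lengths in turn imply a distribution 2^(-l_i).

```python
# Minimal sketch of the Kraft-McMillan correspondence: a prefix code with
# lengths l_i implicitly defines a distribution q(x_i) = 2^(-l_i), and
# conversely lengths l_i = ceil(-log2 q_i) always satisfy Kraft's inequality.
# The distribution q below is an arbitrary (dyadic) example.
import numpy as np

q = np.array([0.5, 0.25, 0.125, 0.125])

lengths = np.ceil(-np.log2(q)).astype(int)   # code lengths in bits
kraft_sum = np.sum(2.0 ** -lengths)          # must be <= 1 for a prefix code
implied_q = 2.0 ** -lengths                  # distribution implied by the lengths

print(lengths, kraft_sum)                    # [1 2 3 3], 1.0
print(implied_q)                             # here exactly q, since q is dyadic
```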

We know that KL divergence is also the relative entropy between two distributions, and that gives some intuition as to why it's used in variational methods. Variational methods use functionals in their objective functions (e.g., the entropy of a distribution is a functional: it takes in a distribution and returns a scalar). KL divergence is interpreted as the "loss of information" when using one distribution to approximate another, which is desirable in machine learning because, in models that rely on dimensionality reduction, we would like to preserve as much information about the original input as possible. This is most obvious in VAEs, which use the KL divergence between the approximate posterior q and the prior p over the latent variable z. Likewise, you can refer to EM, where we decompose

ln p(X) = L(q) + KL(q||p)

Here L(q) is a lower bound on ln p(X); maximizing it is equivalent to minimizing the KL divergence, which becomes 0 when q(Z) = p(Z|X). However, in many cases we wish to restrict the family of distributions and parameterize q(Z) with a set of parameters w, so that we can optimize with respect to w.
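A small numerical check of this decomposition on a toy two-state latent variable with made-up probabilities (not taken from the answer): for any choice of q(Z), the lower bound L(q) and KL(q||p(Z|X)) add up to ln p(X).

```python
# Minimal sketch of the decomposition ln p(X) = L(q) + KL(q||p(Z|X)) for a
# toy discrete latent variable Z with illustrative numbers.
import numpy as np

p_z = np.array([0.4, 0.6])            # prior p(Z)
p_x_given_z = np.array([0.7, 0.2])    # likelihood p(X = x_obs | Z) for each z

p_joint = p_z * p_x_given_z           # p(X = x_obs, Z)
p_x = p_joint.sum()                   # evidence p(X = x_obs)
posterior = p_joint / p_x             # p(Z | X = x_obs)

q = np.array([0.5, 0.5])              # some variational distribution q(Z)

L_q = np.sum(q * np.log(p_joint / q))     # lower bound L(q) = E_q[ln p(X,Z) - ln q(Z)]
kl = np.sum(q * np.log(q / posterior))    # KL(q || p(Z|X))

print(np.log(p_x), L_q + kl)          # identical: ln p(X) = L(q) + KL(q||p)
```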

Note that KL(p||q) = -Σ_Z p(Z) ln(q(Z) / p(Z)), so KL(p||q) is different from KL(q||p). This asymmetry can, however, be exploited: when we want to learn the parameters of a distribution q that covers all of p (mass-covering behavior), we minimize KL(p||q); conversely, when we want q to capture just the main mode(s) of p (mode-seeking behavior), we minimize KL(q||p). The example in Bishop's book illustrates this well.
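A rough sketch of the two behaviours, using made-up numbers and a grid discretization with numpy: a single Gaussian q is fit to a bimodal target p by brute-force search over (mu, sigma) under each direction of the KL.

```python
# Minimal sketch of the asymmetry: approximating a bimodal target p with a
# single Gaussian q. Minimizing KL(p||q) spreads q over both modes
# (mass-covering); minimizing KL(q||p) locks q onto one mode (mode-seeking).
# All distributions are discretized on a grid; numbers are illustrative.
import numpy as np

x = np.linspace(-10, 10, 2001)

def normal(mu, sigma):
    w = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return w / w.sum()                       # normalized on the grid

p = 0.5 * normal(-3, 1.0) + 0.5 * normal(3, 1.0)   # bimodal target

def kl(a, b):
    eps = 1e-300                             # avoid log(0) on the grid
    return np.sum(a * np.log((a + eps) / (b + eps)))

candidates = [(mu, sigma) for mu in np.linspace(-5, 5, 41)
                          for sigma in np.linspace(0.5, 6, 45)]

fwd = min(candidates, key=lambda ms: kl(p, normal(*ms)))   # argmin KL(p||q)
rev = min(candidates, key=lambda ms: kl(normal(*ms), p))   # argmin KL(q||p)

print("forward KL fit (mu, sigma):", fwd)   # mu ~ 0, large sigma: covers both modes
print("reverse KL fit (mu, sigma):", rev)   # mu ~ +/-3, sigma ~ 1: hugs one mode
```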

KL divergence belongs to an alpha family of divergences, where the forward and reverse KL are recovered as different limits of the parameter alpha. When alpha = 0 the divergence is symmetric and proportional to the squared Hellinger distance. There are other symmetric choices, such as the Cauchy–Schwarz divergence, but in machine learning settings where the goal is to learn simpler, tractable parameterizations of distributions that approximate a target, they may not be as useful as KL.
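A small numerical sketch, using one common parameterization of the alpha family (the one used in Bishop's PRML) and arbitrary example distributions: alpha near +1 and -1 approach the forward and reverse KL, and alpha = 0 equals four times the squared Hellinger distance.

```python
# Minimal sketch of the alpha-divergence family, in the parameterization
# D_alpha(p||q) = 4/(1 - alpha^2) * (1 - sum_i p_i^((1+alpha)/2) q_i^((1-alpha)/2)),
# on arbitrary discrete example distributions.
import numpy as np

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.3, 0.4, 0.3])

def alpha_div(p, q, a):
    return 4.0 / (1 - a ** 2) * (1 - np.sum(p ** ((1 + a) / 2) * q ** ((1 - a) / 2)))

kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))
hellinger_sq = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

print(alpha_div(p, q, 0.999), kl_pq)            # alpha -> 1: close to forward KL
print(alpha_div(p, q, -0.999), kl_qp)           # alpha -> -1: close to reverse KL
print(alpha_div(p, q, 0.0), 4 * hellinger_sq)   # alpha = 0: exactly 4 * Hellinger^2
```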
热心网民 · 5 hours ago · 0 comments
