== InvertedWishart(psi, n, I, J) ==
 
The inverted Wishart distribution represents a distribution over [[covariance]] matrices, i.e., each sample from the [[InvertedWishart]] is a covariance matrix. It is the conjugate prior for the covariance of a multivariate [[Gaussian]] distribution: it can be updated after observing data and computing the data's scatter matrix (which itself follows a [[Wishart]] distribution), and the posterior is still an [[InvertedWishart]] distribution. Because of this conjugacy property, it is usually used as a Bayesian prior distribution for a covariance matrix. The parameter «psi» must be a positive definite matrix.
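
For concreteness, here is a minimal sketch of setting up a prior over a 3×3 covariance matrix. It assumes that «n» is the degrees of freedom and that «I» and «J» index the rows and columns of the matrix, consistent with the signature above; the index names, the identity scale matrix and the object names are illustrative, not part of the library:

:<code>Index I := ['x', 'y', 'z']  { row index of the covariance matrix }</code>
:<code>Index J := ['x', 'y', 'z']  { column index of the covariance matrix }</code>
:<code>Variable Psi := If I = J Then 1 Else 0  { identity matrix, a positive definite scale parameter }</code>
:<code>Variable m := 5  { prior degrees of freedom (illustrative) }</code>
:<code>Chance PriorCov := InvertedWishart(Psi, m, I, J)  { each sample is a 3x3 covariance matrix }</code>

Evaluating <code>PriorCov</code> in Sample mode should then yield an array of covariance matrices indexed by <code>I</code>, <code>J</code> and Run.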
  
Suppose you represent the prior distribution of the covariance using an inverted Wishart distribution, <code>InvertedWishart(Psi, m)</code>. You observe some data <code>X[I, R]</code>, where <code>R := 1..N</code> indexes the datapoints and <code>I</code> is the vector dimension, and compute <code>A = Sum(X*X[I = J], R)</code>, where <code>A</code> is called the scatter matrix. The data is assumed to be generated from a zero-mean [[Gaussian]] distribution with the "true" covariance. The matrix <code>A</code> is an observation that carries information about that true covariance, so you can use it to obtain a Bayesian posterior distribution on the true covariance, given by:
 
:<code>InvertedWishart(A + Psi, N + m)</code>
 
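Continuing the sketch above, the full update might be written as follows. The sample size, the placeholder data <code>X</code> and all object names are illustrative assumptions; only the scatter-matrix expression and the form of the posterior call come from the text above:

:<code>Variable N := 100  { number of observed datapoints (illustrative) }</code>
:<code>Index R := 1..N  { datapoint index }</code>
:<code>Chance X := Normal(0, 1, Over: I, R)  { stand-in for your actual observed data X[I, R] }</code>
:<code>Variable A := Sum(X * X[I = J], R)  { scatter matrix of the data }</code>
:<code>Chance PosteriorCov := InvertedWishart(A + Psi, N + m, I, J)  { posterior over the true covariance }</code>

Roughly speaking, the prior acts like <code>m</code> pseudo-observations with scatter <code>Psi</code>: the prior scale and the data's scatter matrix are simply added, as are the degrees of freedom.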
  
== Library ==

Distribution Variations.ana
  
== See Also ==
 
* [[Wishart]]
* [[LDens_InvertedWishart]]