[[Category:Doc Status C]] <!-- For Lumina use, do not change -->
== InvertedWishart(psi, n, I, J) ==
The inverted Wishart distribution represents a distribution over [[covariance]] matrices, i.e., each sample from [[InvertedWishart]] is a covariance matrix.  It is conjugate to the [[Wishart]] distribution, which means it can be updated after observing data and computing the data's scatter matrix (unnormalized sample covariance), such that the posterior is still an InvertedWishart distribution.  Because of this conjugacy property, it is usually used as a Bayesian prior distribution for covariance.  The parameter <code>psi</code> must be a positive definite matrix.
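For example, here is a minimal sketch of drawing a random covariance matrix (the three-element indexes, the identity-matrix prior, and the degrees of freedom are purely illustrative, and <code>Distribution Variations.ana</code> must be loaded):

 Index I := 1..3            { row index of the covariance matrix }
 Index J := 1..3            { column index; must have the same length as I }
 Variable Psi := (I = J)    { identity matrix: a simple positive definite scale matrix }
 Chance Sigma := InvertedWishart(Psi, 5, I, J)   { each sample of Sigma is a 3x3 covariance matrix }

Larger degrees of freedom concentrate the distribution more tightly around (a scaled version of) <code>Psi</code>, so the second parameter acts as a confidence weight on the prior scale matrix.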
Suppose you represent the prior distribution of covariance using an inverted Wishart distribution: InvertedWishart(Psi, n).  You observe some data, X[I, R], where R := 1..N indexes each datapoint and I is the vector dimension, and compute <code>A = Sum(X*X[I = J], R)</code>, where <code>A</code> is called the scatter matrix.  The assumption is that the data is generated, by nature, from a [[Gaussian]] distribution with zero mean and the "true" covariance (zero mean because <code>A</code> is not centered around a sample mean). The matrix <code>A</code> is an observation that gives you information about the true covariance matrix, so you can use it to obtain a Bayesian posterior distribution on the true covariance, given by:
 
:<code>InvertedWishart(A + Psi, n + N)</code>
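Continuing this example, here is a sketch of the update in Analytica syntax (it assumes <code>X</code> holds the observed data indexed by <code>I</code> and <code>R</code>, that <code>Psi</code> and <code>n</code> are the prior parameters from above, and the variable names are illustrative):

 Index R := 1..100                        { one element per datapoint; 100 is illustrative }
 Variable A := Sum(X * X[I = J], R)       { scatter matrix of the observations X[I, R] }
 Variable PosteriorCov := InvertedWishart(A + Psi, n + Size(R), I, J)   { posterior on the true covariance }

Here <code>Size(R)</code> equals <code>N</code>, the number of observations, so this is the same update as the formula above.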
== Library ==
 
Distribution Variations.ana
 
== See Also ==
* [[Wishart]]
* [[LDens_InvertedWishart]]
* [[LDens_Wishart]]
* [[Gaussian]]
* [[ChiSquared]]
* [[Covariance]]