.BG
.FN me
.TL
EM for parameterized MVN mixture models
.SH DESCRIPTION
EM iteration (M-step followed by E-step) for estimating parameters in an
MVN mixture model, possibly including a Poisson noise term.
.CS
me(data, modelid, z, ...)
.PP
.RA
.AG data
matrix of observations.
.AG modelid
An integer specifying a parameterization of the MVN covariance matrix defined 
by volume, shape and orientation characteristics of the underlying clusters. 
The allowed values for `modelid' and their interpretation are as follows:
`"EI"' : uniform spherical, `"VI"' : spherical, `"EEE"' : uniform variance,
`"VVV"' : unconstrained variance, `"EEV"' : uniform shape and volume,
`"VEV"' : uniform shape.
.AG z
matrix of conditional probabilities. `z' should have a row for each observation
in `data', and a column for each component of the mixture.
.OA
.AG ...
additional arguments, as follows:
.AG eps
Tolerance for determining singularity in the covariance matrix. The precise 
definition of `eps' varies with the parameterization; each parameterization 
has its own default value.
.AG tol
The iteration is terminated if the relative error in the loglikelihood value
falls below `tol'. Default : `sqrt(.Machine$double.eps)'.
.AG itmax
Upper limit on the number of iterations. Default : `Inf' (no upper limit).
.AG equal
Logical variable indicating whether or not to assume equal mixing proportions
in the mixture. Default : `F'.
.AG noise
Logical variable indicating whether or not to include a Poisson noise term in
the model. Default : `F'.
.AG Vinv
An estimate of the inverse hypervolume of the data region (needed only if
`noise = T'). Default : determined by the function `hypvol'.
.RT
the conditional probabilities at the final iteration (information about the
iteration is included as attributes).
.SH NOTE
The reciprocal condition estimate returned as an attribute ranges in value
between 0 and 1. The closer this estimate is to zero, the more likely it is
that the corresponding EM result (and its BIC value) is contaminated by
roundoff error.
.SH REFERENCES
G. Celeux and G. Govaert, Gaussian parsimonious clustering models,
\fIPattern Recognition, \fR28:781-793 (1995).

A. P. Dempster, N. M. Laird and D. B. Rubin, Maximum Likelihood from
Incomplete Data via the EM Algorithm, \fIJournal of the Royal Statistical
Society, Series B, \fR39:1-22 (1977).

C. Fraley and A. E. Raftery, How many clusters? Which clustering method?
Answers via model-based cluster analysis. \fIComputer Journal,
\fR41:578-588 (1998).

C. Fraley and A. E. Raftery, \fIMCLUST: Software for model-based cluster
and discriminant analysis. \fRTechnical Report No. 342, Department of
Statistics, University of Washington (1998).

G. J. McLachlan and T. Krishnan, \fIThe EM Algorithm and Extensions,
\fRWiley (1997).
.SA
`mstep', `estep'
.EX
> data <- matrix(aperm(iris, c(1,3,2)), 150, 4)
> cl <- mhclass(mhtree(data, modelid = "VVV"),3)
> me( data, modelid = "EEE", ctoz(cl))
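> # A sketch (not a verified run) of the same call with the optional
> # `itmax' and `tol' arguments documented above, capping the number of
> # EM iterations and loosening the convergence tolerance:
> me(data, modelid = "EEE", ctoz(cl), itmax = 100, tol = 1.e-5)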

.KW clustering
.WR

