1) Logistic regression under sparseness priors (of the LASSO type) cannot be approached with standard algorithms, because these priors are non-differentiable. I will describe bound-optimization algorithms for sparse logistic regression (with parallel or sequential updates). Experimental results on real benchmark data show that sparse logistic regression outperforms both support vector machines and relevance vector machines. Performance bounds for this type of classifier will be briefly mentioned.
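To make the setting concrete, here is a minimal sketch of L1-penalized (LASSO-type) logistic regression. It uses proximal gradient descent with a soft-thresholding step, which is a simple stand-in for the bound-optimization updates described in the talk, not the speaker's algorithm; the function names, step sizes, and toy data are all illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrinks coefficients toward zero,
    # which is what produces exactly-sparse weight vectors.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_logreg(X, y, lam=0.05, lr=0.1, n_iter=500):
    """L1-penalized logistic regression via proximal gradient descent.
    y must be in {0, 1}; lam is the sparseness (L1) penalty weight."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted probabilities
        grad = X.T @ (p - y) / n                     # gradient of the logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)  # gradient step + shrinkage
    return w

# Toy example: only the first two of five features are informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
w = sparse_logreg(X, y)
print(np.round(w, 3))  # weights on the irrelevant features shrink toward zero
```

The soft-threshold step is where the non-differentiability of the L1 prior is handled: instead of a gradient of the penalty (which does not exist at zero), each update applies the penalty's proximal operator in closed form.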
2) In the second part of the talk, I will present a semi-supervised extension of logistic regression, which can exploit both labelled and unlabelled data. A Bayesian formulation and an EM algorithm allow the tradeoff between the contributions of the labelled and unlabelled data to be adjusted automatically. Encouraging results are presented on benchmark data.
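A rough sketch of the EM idea, under simplifying assumptions: the E-step imputes soft labels for the unlabelled points from the current model, and the M-step refits on all the data. The fixed `alpha` weight below is illustrative only; the talk's Bayesian formulation adjusts this tradeoff automatically, which this sketch does not do.

```python
import numpy as np

def fit_logreg(X, y, weights, lr=0.1, n_iter=300):
    # Weighted logistic regression via plain gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (weights * (p - y)) / weights.sum()
        w -= lr * grad
    return w

def em_semisup_logreg(Xl, yl, Xu, alpha=0.5, n_em=10):
    """EM-style semi-supervised logistic regression (illustrative).
    alpha down-weights the unlabelled contribution; here it is a fixed
    hyperparameter, unlike the automatic adjustment described in the talk."""
    X = np.vstack([Xl, Xu])
    wts = np.concatenate([np.ones(len(Xl)), alpha * np.ones(len(Xu))])
    w = fit_logreg(Xl, yl, np.ones(len(Xl)))   # initialize on labelled data only
    for _ in range(n_em):
        pu = 1.0 / (1.0 + np.exp(-Xu @ w))     # E-step: soft labels for unlabelled points
        y_all = np.concatenate([yl, pu])
        w = fit_logreg(X, y_all, wts)          # M-step: weighted refit on all data
    return w

# Toy data: few labelled points, many unlabelled ones from the same distribution.
rng = np.random.default_rng(1)
Xl = rng.standard_normal((10, 2))
yl = (Xl[:, 0] > 0).astype(float)
Xu = rng.standard_normal((200, 2))
w = em_semisup_logreg(Xl, yl, Xu)
print(np.round(w, 3))
```

With only ten labelled points, the soft-labelled unlabelled data acts as a regularizer that pulls the decision boundary toward regions of low data density.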
Time and Place: Fri., Dec. 10, at 2 pm in 4610 Engr. Hall. *** NOTE SPECIAL DAY & TIME ***
SYSTEMS SEMINAR WEB PAGE: http://homepages.cae.wisc.edu/~gubner/seminar/