Lennart Ljung received his PhD in Automatic Control from Lund Institute of Technology in 1974. Since 1976 he has been Professor of the chair of Automatic Control in Linköping, Sweden, and is currently Director of the Strategic Research Center "Modeling, Visualization and Information Integration" (MOVIII). He has held visiting positions at Stanford and MIT and has written several books on System Identification and Estimation. He is an IEEE Fellow, an IFAC Fellow and an IFAC Advisor. He is a member of the Royal Swedish Academy of Sciences (KVA), a member of the Royal Swedish Academy of Engineering Sciences (IVA), an Honorary Member of the Hungarian Academy of Engineering, an Honorary Professor of the Chinese Academy of Mathematics and Systems Science, and a Foreign Associate of the US National Academy of Engineering (NAE). He has received honorary doctorates from the Baltic State Technical University in St Petersburg, from Uppsala University, Sweden, from the Technical University of Troyes, France, from the Catholic University of Leuven, Belgium, and from Helsinki University of Technology, Finland. In 2002 he received the Quazza Medal from IFAC, in 2003 he received the Hendrik W. Bode Lecture Prize from the IEEE Control Systems Society, and he was the 2007 recipient of the IEEE Control Systems Award.
System identification is about how to build mathematical models of systems from observed input-output signals. As a subarea of Automatic Control it is about half a century old, and it takes many of its basic ideas from classical statistical techniques. Regularization is, simply put, to allow a considerable amount of freedom in the model, and then curb the flexibility by explicit penalties on the parameters. This is an old and well-known technique in the area. But vitalizing encounters with young scientific communities keep System Identification developing. Two such encounters have had important influence on the understanding of the possibilities and potentials of regularization in System Identification. One is the meeting with Machine Learning, Gaussian process regression and a renewed focus on Bayesian techniques, as well as manifold learning. Another is the meeting with sparsity, compressed sensing and convex optimization. In this talk we illustrate how these two influences have provided additional tools for, and insights into, the basic identification problems of handling the bias/variance trade-off and finding parsimonious models.
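As a minimal illustration of the regularization idea described above (not taken from the talk itself), the sketch below fits a flexible FIR model to simulated input-output data by ridge-regularized least squares. All names, the data-generating system, and the penalty weight are assumptions chosen for the example; the point is only that the penalty on the parameters curbs the variance of an over-flexible model.

```python
import numpy as np

# Hypothetical example: estimate an FIR model y[t] = sum_k g[k] u[t-k] + e[t]
# with many free parameters, then curb the flexibility with a ridge penalty
# lam * ||theta||^2 (the regularization idea in the abstract).

rng = np.random.default_rng(0)

n = 20                               # model order (deliberately flexible)
g_true = 0.8 ** np.arange(n)         # assumed true impulse response (decaying)

N = 100                              # number of input-output samples
u = rng.standard_normal(N)           # excitation input
# Regressor matrix: column k holds u delayed by k samples
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[: N - k]]) for k in range(n)]
)
y = Phi @ g_true + 0.5 * rng.standard_normal(N)   # noisy output

def ridge(Phi, y, lam):
    """Solve argmin_theta ||y - Phi theta||^2 + lam * ||theta||^2."""
    p = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

theta_ls = ridge(Phi, y, 0.0)    # plain least squares: flexible, high variance
theta_reg = ridge(Phi, y, 10.0)  # penalized estimate: shrunk toward zero
```

Increasing the penalty weight shrinks the estimate (trading variance for bias), which is exactly the bias/variance trade-off the abstract refers to; the Bayesian and Gaussian-process viewpoints mentioned in the talk give principled ways to choose this penalty.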