July 20, 01:00 - 02:00
Over the past two decades, our research group has applied information-theoretic ideas, concepts, and techniques to diverse areas of machine learning and data science. This expository presentation follows our journey from compression to estimation, structure exploitation, functional approximation, classification, and robust and reinforcement learning, with a few life lessons learned along the way.
Alon Orlitsky received B.Sc. degrees in Mathematics and Electrical Engineering from Ben Gurion University in 1980 and 1981, and M.Sc. and Ph.D. degrees in Electrical Engineering from Stanford University in 1982 and 1986. From 1986 to 1996 he was with the Communications Analysis Research Department of Bell Laboratories. He spent the following year as a quantitative analyst at D.E. Shaw and Company, an investment firm in New York City. In 1997 he joined the University of California San Diego, where he is currently a professor of Electrical and Computer Engineering and of Computer Science and Engineering. His research concerns information theory, statistical modeling, and machine learning.
From 2011 to 2014 Alon directed UCSD’s Center for Wireless Communications, and since 2006 he has directed the Information Theory and Applications Center. He was the president of the Information Theory Society in 2016. He has co-organized numerous programs on information theory, machine learning, and statistics, including the Information Theory and Applications Workshop that he started in 2006 and has helped organize since.
Alon is a recipient of the 1981 ITT International Fellowship and the 1992 IEEE W.R.G. Baker Paper Award, and co-recipient of the 2006 Information Theory Society Paper Award and the 2016 NIPS Paper Award. He co-authored two papers for which his students received student-paper awards: the 2003 Capocelli Prize and the 2010 ISIT Student Paper Award. He is a fellow of the IEEE, and holds the Qualcomm Chair for Information Theory and its Applications at UCSD.
July 12, 15:00 - 16:00 (AEST)
Information theory rests on probabilistic models of sources and channels; these models define information quantities such as entropy and capacity, along with the associated coding schemes. In reality, however, these models are unknown and may not even exist. Yet practical communication systems work under uncertainty and varying conditions. The key capability for combating uncertainty is termed “universality”: universal schemes adapt, learn, work, and can even be optimal, no matter what the model is. The universality concept will be presented through several basic communication problems.
Meir Feder received the B.Sc. and M.Sc. degrees in Electrical Engineering in 1980 and 1984 from Tel-Aviv University, and the Sc.D. degree in Electrical Engineering and Ocean Engineering in 1987 from the Massachusetts Institute of Technology (MIT) and the Woods Hole Oceanographic Institution (WHOI). After serving as a Research Associate and Lecturer at MIT, he joined the School of Electrical Engineering, Tel-Aviv University, where he is now a Chaired Professor and head of the newly established Tel-Aviv University Center for Artificial Intelligence and Data Science (TAD). He is also a Visiting Professor with the Department of EECS, MIT.
In parallel with his academic career, he is closely involved with the high-tech industry. He has founded five companies, among them Peach Networks, which developed an interactive TV solution (Acq: MSFT), and Amimon, which provided high-quality, robust, zero-delay wireless high-definition A/V connectivity (Acq: LON.VTC). Recently, with his renewed interest in machine learning and AI, he co-founded Run:ai, a virtualization, orchestration, and acceleration platform for AI infrastructure. He is also an active angel investor and serves on the board or advisory board of several US and Israeli companies.
Prof. Feder has received several academic and professional awards, including the IEEE Information Theory Society Best Paper Award for his work on universal prediction, the “creative thinking” award of the Israeli Defense Forces, and the Research Prize of the Israeli Electronic Industry, awarded by the President of Israel. For the development of Amimon’s chipset, which uses a unique MIMO implementation of joint source-channel coding for wireless video transmission, he received the 2020 Scientific and Engineering Award of the Academy of Motion Picture Arts and Sciences.
July 15, 01:00 - 02:00
We argue that, as the amount of data we need to transport, store, and protect expands, there is a growing trend to (hyper-)specialize the techniques for information flow, compression, and security by tailoring them to the task purpose. We will present examples of this trend in our work, including pliable index coding, distortion security, task-aware compression, and community-aware group testing.
Christina Fragouli is a Professor in the Electrical and Computer Engineering Department at UCLA and an IEEE Fellow. She received the B.S. degree in Electrical Engineering from the National Technical University of Athens, Athens, Greece, and the M.Sc. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles. She has worked at the Information Sciences Center, AT&T Labs, Florham Park, New Jersey, and at the National University of Athens. She has also visited Bell Laboratories, Murray Hill, NJ, and DIMACS, Rutgers University. She was on the faculty at the School of Computer and Communication Sciences, EPFL, Switzerland, where she was an Associate Professor before joining UCLA, where she directs the Laboratory for Algorithms for Networked Information (ARNI).
She has served as an Information Theory Society Distinguished Lecturer and has received recognitions for her work, including several paper awards. She has served as Associate Editor for the IEEE Transactions on Information Theory, the IEEE Transactions on Communications, and the IEEE Transactions on Mobile Computing, among others. She has also served on several IEEE committees and organized IEEE conferences. Her research interests lie at the intersection of coding theory and algorithms, with applications in network information flow, network security and privacy, wireless networks, and bioinformatics.
July 17, 01:00 - 02:00
Deep learning methodology has revealed some major surprises from the perspective of statistical complexity: even without any explicit effort to control model complexity, these methods find prediction rules that give a near-perfect fit to noisy training data and yet exhibit excellent prediction performance in practice. In this talk, we survey some recent work on this phenomenon of ‘benign overfitting.’ In the setting of linear prediction, we give a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization shows that overparameterization is essential: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. We discuss implications for deep networks and for robustness to adversarial examples, and we describe extensions to ridge regression and barriers to analyzing benign overfitting via model-dependent generalization bounds.
Peter Bartlett is a professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at the University of California at Berkeley, Associate Director of the Simons Institute for the Theory of Computing, Director of the Foundations of Data Science Institute, and Director of the Collaboration on the Theoretical Foundations of Deep Learning. His research interests include machine learning and statistical learning theory. He is the co-author, with Martin Anthony, of the book Neural Network Learning: Theoretical Foundations. He has served as an associate editor of the journals Bernoulli, Mathematics of Operations Research, the Journal of Artificial Intelligence Research, the Journal of Machine Learning Research, the IEEE Transactions on Information Theory, Machine Learning, and Mathematics of Control Signals and Systems, and as program committee co-chair for COLT and NeurIPS. He has consulted to a number of organizations, including Google, General Electric, Telstra, SAC Capital Advisors, and Sentient. He has been a Professor in Mathematical Sciences at the Queensland University of Technology (2011-2017), a Miller Institute Visiting Research Professor in Statistics and Computer Science at U.C. Berkeley (Fall 2001), a fellow, senior fellow, and professor in the Research School of Information Sciences and Engineering at the Australian National University's Institute for Advanced Studies (1993-2003), an honorary professor at the University of Queensland and a visiting professor at the University of Paris. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia in 2001, and was chosen as an Institute of Mathematical Statistics Medallion Lecturer in 2008, an IMS Fellow and Australian Laureate Fellow in 2011, and a Fellow of the ACM in 2018. He was elected to the Australian Academy of Science in 2015.
July 20, 15:00 - 16:00
This talk aims to present a (biased) summary of recent results on broadcast channels, results that owe much to novel techniques for evaluating achievable regions and outer bounds. The talk will also identify some elementary settings for which the current techniques are insufficient, or for which there are tantalising conjectures, in the hope that these will inspire future research in this area.
Chandra Nair is a Professor with the Information Engineering department at The Chinese University of Hong Kong. He also serves as the Programme Director of the undergraduate program on Mathematics and Information Engineering.
Chandra Nair received his Bachelor's degree, B.Tech (EE), from IIT Madras (India) and his Ph.D. degree from the EE department of Stanford University. He has been an associate editor for the IEEE Transactions on Information Theory and a Distinguished Lecturer of the IEEE Information Theory Society. He is a Fellow of the IEEE and a co-recipient of the 2016 Information Theory Society Paper Award.
His research interests and contributions have been in developing ideas, tools, and techniques to tackle families of combinatorial and non-convex optimization problems arising primarily in the information sciences. He has been fortunate to be involved in the resolution of a few conjectures and open problems.