Presented by Praveen Venkatesh, Carnegie Mellon University; Gabe Schamberg, Massachusetts Institute of Technology; Pulkit Grover, Carnegie Mellon University; Todd Coleman, University of California San Diego
Saturday, 17 July, 07:00 - 09:30
The goal of this tutorial is to provide attendees with a basic understanding of how information-theoretic tools and techniques can apply to problems of neural inference, both in acquiring data and in using it to obtain relevant inferences. We will use three broad yet concrete motivating problems, carefully chosen to illustrate the breadth of information-theoretic techniques that can be applied to this field. We will present the necessary neuroscientific background and identify key areas of neuroscience research that stand to benefit from the information theory perspective. Attendees should leave the tutorial feeling excited to explore new problems that are, at their core, information theory problems, yet can have a meaningful impact on how we understand, sense, and stimulate the brain. Attendees will also receive concrete pointers for starting work on these problems, including links to a) relevant publicly available data; b) new data obtained by our team; c) software and neural models to enable data analyses; and d) key researchers in the neuroscience and neurotechnology fields who may benefit from information theory collaborators. Following an overview of the tutorial material and a summary of the relevant neuroscientific background, we will present a collection of open problems at the cutting edge of neuroscience research through three motivating applications: traveling waves, inferring encoding and dynamic processes in neural circuits, and closing the loop.
Presented by Daniela Tuninetti, University of Illinois Chicago; Hua Sun, University of North Texas; Mingyue Ji, University of Utah
Saturday, 17 July, 07:00 - 09:30
Modern algorithms need to efficiently execute complex queries on massive databases in a way that minimizes the use of communication resources while preserving the privacy of the entity that initiated the query. Such queries are functions of the data points stored at remote servers. For example, linear and multivariate polynomial operations are widely used fundamental primitives for building the complex queries that support online big-data analytics and data mining procedures. In those scenarios, it is too resource-consuming to download all the input variables locally in order to compute the desired output value. Instead, it is desirable to directly download the result of the desired output function, which should also be kept private. This tutorial introduces a principled and holistic framework for the problem of privately retrieving, at distributed cache-aided nodes, the output of functions based on both data that is computed locally and data that is received from multiple servers. This problem lies at the intersection of distributed coded caching and private information retrieval. The tutorial has three major parts: (1) designing optimal coded caching schemes for user-private general function retrieval; (2) motivated by distributed settings in which a user may also be a sender, devising optimal server-private function retrieval strategies; and (3) overcoming complexity bottlenecks in practical distributed computing systems with server- and/or user-privacy. The goal of this tutorial is to provide an overview of significant results on the topic, together with a comprehensive list of relevant past contributions to distributed coded caching and private information retrieval, and to point to open research directions.
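To give a concrete feel for private retrieval (a textbook warm-up rather than material drawn from the tutorial outline above), consider the classical two-server scheme of Chor et al. for privately retrieving a single bit x_i from a database x_1, ..., x_K replicated at two non-colluding servers:

```latex
% Classical two-server PIR (Chor et al.), shown only as an illustrative warm-up.
% The user picks a uniformly random subset S of {1,...,K}, sends S to server 1
% and the symmetric difference S \triangle {i} to server 2, and XORs the answers:
\[
  A_1 = \bigoplus_{j \in S} x_j, \qquad
  A_2 = \bigoplus_{j \in S \,\triangle\, \{i\}} x_j, \qquad
  A_1 \oplus A_2 = x_i .
\]
% Each server individually sees a uniformly distributed subset, so neither learns
% the index i. The tutorial's schemes go well beyond this warm-up, treating
% functions of the data, cache-aided nodes, and privacy against users or servers.
```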
Presented by Pascal O. Vontobel, Department of Information Engineering, The Chinese University of Hong Kong
Saturday, 17 July, 13:30 - 16:00
The world we live in is a quantum world. Although our human bodies limit us to experiencing a world that we call classical, which makes it difficult for us to grasp quantum concepts, this should not stop us from designing and implementing devices that benefit from the unique features of the quantum world. Quantum information processing (QIP) is about understanding this quantum world and using it to our advantage to process, transmit, and store information.
This tutorial gives an introduction to QIP using a particular approach based on graphical models. Graphical models are a well-appreciated tool in the information and coding theory community. For example, low-density parity-check codes (which appear in the 5G telecommunications standard) and their decoding via message-passing algorithms are naturally expressed in terms of graphical models.
Various graphical notations have been introduced over the last decades to visualize QIP concepts. The graphical notation and language used in this tutorial are based on so-called normal factor graphs (NFGs). One particular advantage of this approach is that the NFGs used for QIP are compatible with the NFGs used in classical information processing (CIP), such as the decoding of low-density parity-check codes, Kalman filtering, etc. Another advantage is that the similarities and differences between the NFGs used for QIP and those used for CIP allow one to better appreciate the similarities and differences between the quantum world and the classical world.
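As a rough illustration of the central operation (our paraphrase, not material taken from the tutorial slides): in a normal factor graph, variables live on edges, and "closing a box" means summing the product of the enclosed local factors over all edges internal to the box, leaving a function of the boundary edges only:

```latex
% Closing the box around a set B of factor nodes in a normal factor graph:
% sum out the internal edge variables x_int, keep the boundary variables x_bd.
\[
  g\bigl(x_{\mathrm{bd}}\bigr)
  \;=\;
  \sum_{x_{\mathrm{int}}} \;\prod_{a \in \mathcal{B}} f_a\bigl(x_{\partial a}\bigr).
\]
% On tree-structured graphs, the sum-product algorithm computes such exterior
% functions efficiently by closing boxes one at a time (message passing).
```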
This tutorial is planned to be accessible to a broad audience within the information and coding theory community. Therefore, we only require a solid understanding of linear algebra; relevant background from other areas will be introduced and reviewed as necessary. Overall, this tutorial should allow newcomers to get started in the field and give people with some background knowledge in QIP a different perspective on it.
The tutorial will begin with a step-by-step discussion of the topics, whereas later topics will be discussed at a higher level. Nevertheless, the slides will be detailed enough that participants can study the details on their own afterwards.
Tentative topics: introduction; normal factor graphs; closing-the-box operation, opening-the-box operation; sum-product algorithm as closing-the-box operation; factor-graph transforms; classical hidden Markov models; probability mass function vs. quantum mass function; unitary evolution and non-unitary evolution; measurements; quantum teleportation, superdense coding; Bell's game; Deutsch-Jozsa algorithm, Grover's algorithm; classical channel codes vs. quantum stabilizer codes; emergence of classical correlations; classical entropy vs. quantum entropy; connections to other graphical representations such as tensor network states, matrix product states, tree tensor states, projected entangled pair states, etc.
Presented by Nihar B. Shah, Carnegie Mellon University
Saturday, 17 July, 22:00 - 00:30
A large number of applications, such as hiring, admissions, crowdsourcing, A/B testing, product ratings and recommendations, peer grading, and peer review, rely on data obtained from people. Data from people suffers from a number of issues, including biases, subjectivity, miscalibration, noise, and fraud. These issues lead to unfairness and inefficiencies in these applications, and are further amplified when this data is used to train ML/AI algorithms for downstream tasks.
In this tutorial, using peer review as a running example, we will discuss these challenges in human-provided data. We will first present insightful experiments, including analyses of thousands of submissions and reviews in machine learning conferences such as ICML and NeurIPS. We will then present mathematical models for these issues, along with proposed solutions that come with theoretical guarantees. Finally, we will highlight important open problems: this is a wide-open field where the information-theoretic perspective can contribute considerably to establishing the fundamental limits of these problems, as well as to designing algorithms that achieve these limits and make a significant real-world impact. No prior background will be assumed.
Presented by Yingbin Liang, The Ohio State University; Shaofeng Zou, University at Buffalo, the State University of New York; Yi Zhou, University of Utah
Sunday, 18 July, 07:00 - 09:30
Reinforcement learning (RL) has driven machine learning from basic data fitting to a new era of learning and planning through interaction with complex environments. Equipped with deep learning, RL has achieved tremendous success in many applications, including autonomous driving, recommendation systems, wireless communications, robotics, gaming, etc. The success of RL rests largely on the foundational development of RL algorithms, which, until recently, were not thoroughly understood, especially in terms of their finite-time convergence rates and sample complexities. This tutorial will provide a comprehensive overview of recent advances in the theoretical understanding of fundamental RL algorithms, which leverage stochastic approximation/optimization theory and exploit the Markovian structure of RL problems. The tutorial will also introduce some advanced RL algorithms and their recent developments.
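As one simple instance of the algorithms whose finite-time behavior such analyses address (an illustrative example of ours, not necessarily the tutorial's running example), temporal-difference learning TD(0) is a stochastic approximation iteration driven by Markovian samples:

```latex
% TD(0) update along a trajectory (s_t, r_t, s_{t+1}) of the Markov decision process,
% with step size alpha_t and discount factor gamma in (0,1):
\[
  V_{t+1}(s_t) \;=\; V_t(s_t) \;+\; \alpha_t \bigl( r_t + \gamma\, V_t(s_{t+1}) - V_t(s_t) \bigr).
\]
% Finite-time analyses bound the distance of V_t from the fixed point of the Bellman
% operator as a function of t, the step sizes, and the mixing of the underlying chain.
```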
Presented by Boulat A. Bash, the University of Arizona; Matthieu R. Bloch, the Georgia Institute of Technology
Sunday, 18 July, 07:00 - 09:30
Covert communication, also known as communication with low probability of detection (LPD), aims to reliably transmit a message to an intended receiver while preventing an adversary from detecting the transmission. Unlike other security approaches, such as cryptography or information-theoretic secrecy, which aim at denying the adversary access to the information content of the transmitted signals, covertness aims at hiding the presence of the transmitted signals themselves. While practical radio-frequency covert communication systems have been around since the advent of spread spectrum, the exploration of their fundamental limits is a fairly new direction in information theory. Although related ideas can be found in the steganography literature, the initial mathematical analysis of covert communications was presented at ISIT 2012 and revealed that, while the Shannon capacity of a covert communication channel is zero, it still allows the transmission of a large volume of covert data. This is because covert communication is governed by the square root law: no more than $L\sqrt{n} + o(\sqrt{n})$ covert bits can be transmitted reliably in $n$ channel uses. Subsequent works have characterized the covert capacity $L$ of discrete memoryless channels (DMCs) and additive white Gaussian noise (AWGN) channels, as well as classical-quantum channels. These papers seeded a large body of follow-on work, resulting in “Covert Communications” sessions at recent ISITs and numerous presentations at ITWs, ICCs, etc.
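Stated slightly more formally (a schematic statement; the precise covertness metric and constants are part of the tutorial material), the square root law says that the maximum number of messages $M^*(n)$ that can be sent reliably and covertly over $n$ channel uses satisfies

```latex
\[
  \log M^*(n) \;=\; L\,\sqrt{n} \;+\; o\!\bigl(\sqrt{n}\bigr),
\]
% so the number of covert bits grows without bound even though the rate
% \log M^*(n) / n tends to zero, which reconciles a zero Shannon capacity
% with the transmission of large volumes of covert data. The constant L,
% the covert capacity, depends on the channel and the covertness constraint.
```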
The goal of this tutorial is to introduce the audience to the mathematical foundations of covert communications. We will present both the classical and quantum information theory perspectives. For the classical treatment we will assume familiarity with basic information theory. While knowledge of the material from the recent ISIT tutorials on quantum information theory and the quantum limits of optical communication will certainly benefit the listener, we will not assume any prior quantum background and will introduce the necessary concepts and results. Although we will present several proofs, we will focus on key insights rather than mathematical details.
Presented by Vincent Y. F. Tan, National University of Singapore; Lei Yu, Nankai University
Sunday, 18 July, 13:30 - 16:00
We start by reviewing the classical notions of Wyner's and Gács-Körner-Witsenhausen's (GKW's) common information (CI). We then segue to their extensions. For Wyner's CI, we discuss Kumar, Li and El Gamal's exact CI and show that Wyner's CI and the exact CI are special cases of a more general family of CIs known as Rényi CIs. For GKW's CI, we unveil its connections to the non-interactive correlation distillation problem, the noise-stability problem, the isoperimetric problem, and hypercontractivity inequalities. We discuss several advances on these topics, including important conjectures and open problems and their partial solutions, e.g., the Courtade-Kumar conjecture, Mossel's mean-1/4 stability problem, and the Ordentlich-Polyanskiy-Shayevitz conjecture. We present this “zoo” of CI measures and their connections to other problems in a unified manner. Our treatment hinges on the concept of simulation of random variables. We demonstrate that the proofs of several results in recent works (by us and other authors) can be presented in a pedagogically coherent manner. Intuitive explanations will be provided by means of illustrations and examples.
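For readers meeting these quantities for the first time, the two classical definitions can be stated informally as follows (precise conditions are given in the tutorial):

```latex
% Gacs-Korner-Witsenhausen CI: the largest entropy of a random variable that
% both parties can extract exactly from their own observation.
\[
  C_{\mathrm{GKW}}(X;Y) \;=\; \max_{f,g:\; f(X)=g(Y)\ \text{a.s.}} H\bigl(f(X)\bigr),
  \qquad
  C_{\mathrm{W}}(X;Y) \;=\; \min_{P_{W|XY}:\; X - W - Y} I(X,Y;W).
\]
% Wyner's CI minimizes the rate of a variable W that renders X and Y conditionally
% independent. In general, C_GKW(X;Y) <= I(X;Y) <= C_W(X;Y).
```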
Presented by Jean-Francois Chamberland, Texas A&M University; Krishna Narayanan, Texas A&M University; Yury Polyanskiy, Massachusetts Institute of Technology
Sunday, 18 July, 22:00 - 00:30
Contemporary wireless traffic is increasingly heterogeneous, with growth coming primarily from low-rate unattended devices. Specifically, whereas human users tended to establish sustained connections, machines transmit succinct packets sporadically. This represents a formidable challenge for (a) initial access in licensed-spectrum networks (LTE, 5G, etc.) and (b) dedicated long-range unlicensed IoT networks (LoRa). The resulting demand for a redesign of the medium-access layer has prompted an exciting set of new information-theoretic and coding-theoretic problems, known as unsourced/uncoordinated multiple access (UMAC). In this tutorial, we discuss both fundamental characterizations and pragmatic algorithms. Excitingly, UMAC can be interpreted as a very high-dimensional (2^100 and more) compressed-sensing (CS) problem. Thus, UMAC theory and algorithms are equally applicable to domains where such CS problems arise, for example in estimating heavy hitters in data streams. For the latter problem, we will show how communication-inspired solutions compare to computer-science algorithms of the count-sketch family.
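For readers unfamiliar with the count-sketch family mentioned above, the following is a minimal Python sketch of the basic data structure (an illustrative implementation of ours; parameter choices and names are not tied to the tutorial):

```python
import random
from statistics import median

class CountSketch:
    """Minimal count-sketch for frequency estimation in a data stream
    (illustrative only): d rows of w signed counters."""

    def __init__(self, d=5, w=256, seed=0):
        rng = random.Random(seed)
        self.d, self.w = d, w
        self.tables = [[0] * w for _ in range(d)]
        # Independent salts for the per-row bucket hash and sign hash.
        self.salts = [(rng.getrandbits(64), rng.getrandbits(64)) for _ in range(d)]

    def _bucket(self, row, item):
        return hash((self.salts[row][0], item)) % self.w

    def _sign(self, row, item):
        return 1 if hash((self.salts[row][1], item)) % 2 else -1

    def update(self, item, count=1):
        for row in range(self.d):
            self.tables[row][self._bucket(row, item)] += self._sign(row, item) * count

    def estimate(self, item):
        # Each row's signed counter gives an unbiased estimate of the item's count;
        # the median across rows controls the effect of hash collisions.
        return median(self._sign(row, item) * self.tables[row][self._bucket(row, item)]
                      for row in range(self.d))

# Toy usage: "a" is the heavy hitter in this stream.
cs = CountSketch()
for x in ["a"] * 1000 + ["b"] * 10 + ["c"] * 3:
    cs.update(x)
print(cs.estimate("a"))  # should be close to 1000
```

Heavy hitters are then the items whose estimated counts exceed a threshold; the tutorial's observation is that UMAC decoding poses an analogous high-dimensional support-recovery problem.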