12th Int'l Symposium on Image and Signal Processing and Analysis (ISPA 2021)

13-15 September 2021, Zagreb, Croatia

Plenary Speaker: Professor Rama Chellappa

Bloomberg Distinguished Professor, Johns Hopkins University, USA

Lecture title: Design of Unbiased, Robust and Interpretable AI Systems

Abstract

Over the last decade, algorithms and systems based on deep learning and other data-driven methods have contributed to the reemergence of Artificial Intelligence, with applications in national security, defense, medicine, intelligent transportation, and many other domains. However, another AI winter may be lurking around the corner if bias or lack of fairness, lack of robustness to adversarial attacks, and lack of interpretability are not addressed when designing AI systems. In this talk, I will present our approach to bias mitigation for face recognition, the design of AI systems that are robust to a variety of adversarial attacks in object detection tasks, and methods for interpreting the behavior of AI systems.
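
The abstract does not describe the speaker's methods; purely as an illustrative aside on what "adversarial attack" means here, the sketch below applies the classic fast gradient sign method (FGSM) to a toy logistic-regression classifier. All weights, inputs, and the perturbation budget are arbitrary assumptions chosen for the demonstration, not material from the talk.

```python
# Illustrative only: FGSM-style adversarial perturbation of a toy
# logistic-regression classifier (NOT the speaker's method).
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": fixed weights of a logistic-regression classifier.
w = rng.normal(size=16)
b = 0.1

def predict_prob(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input nominally belonging to class 1.
x = rng.normal(size=16)
y = 1.0

# For logistic regression, the gradient of the cross-entropy loss
# w.r.t. the input is (p - y) * w, so FGSM perturbs along its sign.
eps = 0.25
grad_x = (predict_prob(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print("clean  p(class 1) =", round(predict_prob(x), 3))
print("attack p(class 1) =", round(predict_prob(x_adv), 3))
```

A small, bounded perturbation of the input already moves the predicted probability noticeably; robust design in the sense of the talk aims to prevent exactly this kind of degradation.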

Biography

Prof. Rama Chellappa is a Bloomberg Distinguished Professor in the Departments of Electrical and Computer Engineering and Biomedical Engineering at Johns Hopkins University (JHU). At JHU, he is also affiliated with CIS, CLSP, the Malone Center, and MINDS. Before coming to JHU in August 2020, he was a Distinguished University Professor, a Minta Martin Professor of Engineering, and a Professor in the ECE Department and the University of Maryland Institute for Advanced Computer Studies at the University of Maryland (UMD). At UMD, he was also affiliated with the Center for Automation Research and the Department of Computer Science, and he retains a non-tenured appointment as a College Park Professor in the ECE Department. During 1981-1991, he was an assistant and then associate professor in the Department of EE-Systems at the University of Southern California. He received the M.S.E.E. and Ph.D. degrees in Electrical Engineering from Purdue University, West Lafayette, IN. His current research interests span many areas of computer vision, artificial intelligence, and machine learning. Prof. Chellappa has received several awards from the IEEE, the International Association for Pattern Recognition, the University of Southern California, and the University of Maryland. He has been recognized as an Outstanding Electrical and Computer Engineer by Purdue University and received the Distinguished Alumni Award from the Indian Institute of Science. He served as the Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence, as a Distinguished Lecturer of the IEEE Signal Processing Society, and as the President of the IEEE Biometrics Council. He is a Golden Core Member of the IEEE Computer Society. He is a Fellow of AAAI, AAAS, ACM, IAPR, IEEE, NAI, and OSA, and holds eight patents.

Plenary Speaker: Professor H. Vincent Poor

Michael Henry Strater University Professor, Princeton University, USA

Lecture title: Towards 6G Wireless Communication Networks: Vision, Enabling Technologies, and New Paradigm Shifts

Abstract

Fifth generation (5G) wireless communication networks are being deployed worldwide, and further capabilities, such as massive connectivity, ultra-reliability, and low latency, are in the process of being standardized. However, 5G will not meet all requirements of the future, and sixth generation (6G) wireless networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, and greater intelligence and security. To meet these requirements, 6G networks will rely on new enabling technologies, namely new air interface and transmission technologies and novel network architectures, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architectures, and cloud/fog/edge computing. One vision of 6G is that it will involve four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, multiple spectra will be exploited to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the very large datasets generated by heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of AI-related technologies. Fourth, network security will have to be strengthened when developing 6G networks. This talk will review recent advances and future trends in these four areas.
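
As a back-of-the-envelope illustration of the second paradigm shift (not taken from the talk), the Shannon capacity C = B log2(1 + SNR) grows linearly with bandwidth B, which is why moving to mmWave and THz bands raises achievable data rates. The bandwidths and the common 10 dB SNR below are assumptions chosen only for comparison, not standardized figures.

```python
# Illustrative only: Shannon capacity C = B * log2(1 + SNR) for a few
# representative (assumed) channel bandwidths, showing why wider spectrum
# at mmWave/THz frequencies raises achievable data rates.
import math

def capacity_gbps(bandwidth_hz, snr_db):
    """Shannon capacity in Gb/s for a given bandwidth and SNR in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# Assumed example bandwidths per band.
bands = {
    "sub-6 GHz (100 MHz)": 100e6,
    "mmWave (1 GHz)":      1e9,
    "THz (10 GHz)":        10e9,
}

for name, bw in bands.items():
    # The same 10 dB SNR is assumed in every band for a like-for-like comparison.
    print(f"{name:22s} -> {capacity_gbps(bw, snr_db=10):6.1f} Gb/s")
```

In practice the higher bands suffer much harsher propagation, so the SNR would not be equal across bands; the point of the sketch is only the linear scaling with bandwidth.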

Biography

H. Vincent Poor is the Michael Henry Strater University Professor at Princeton University, where his interests include information theory, machine learning, and network science, and their applications in wireless networks, energy systems, and related fields. Prior to joining Princeton in 1990, he was on the faculty of the University of Illinois, and he has also held visiting appointments at several other universities, including Berkeley, Cambridge, Harvard, and Stanford. Among his publications is the forthcoming book Machine Learning and Wireless Communications, to be published by Cambridge University Press. Dr. Poor is a member of the U.S. National Academy of Engineering and the U.S. National Academy of Sciences, and is a foreign member of the Chinese Academy of Sciences, the Royal Society, and other national and international academies. Recent recognition of his work includes the 2017 IEEE Alexander Graham Bell Medal and honorary doctorates from a number of universities in Asia, Europe, and North America.

Plenary Speaker: Professor Ivan Dokmanić

Associate Professor for Data Analytics, University of Basel, Switzerland

Lecture title: Learning for Inverse Problems with a Little Help from Signal Processing

Abstract

State-of-the-art approaches to computed tomography, MRI, and many other inverse problems in imaging use deep neural networks. Although our understanding of neural nets is improving by the minute, they are still often more surprising than intuitive. Some of this unintuitive behavior has to do with familiar signal processing topics. Convnets are advertised as shift invariant... but they are not. Their architectures evoke textbook images of analysis–synthesis dictionaries, but they perform much better... or do they? I will show that truly invariant convnets exist (and work better!) and that good old sparsifying dictionaries can perform as well as neural nets on important inverse problems. I will then move to wave-based inverse problems and show how insights from physics lead to principled neural architectures with strong out-of-distribution generalization. The key role is again, implicitly, played by sparsity. Finally, I will address the well-posedness of learning for imaging by discussing universal injective neural networks.
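
A minimal way to see the "advertised as shift invariant... but they are not" point, as a sketch of my own and not material from the talk: convolution followed by stride-2 downsampling, the basic building block of many convnets, already produces a fundamentally different output when the input is shifted by a single sample. The 1-D signal and kernel below are arbitrary.

```python
# Illustrative only: convolution + stride-2 downsampling (the basic convnet
# building block) is not shift invariant, and not even equivariant to odd
# shifts, because downsampling discards half of the shifted samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)          # a random 1-D signal
k = np.array([0.25, 0.5, 0.25])  # a simple smoothing kernel

def conv_downsample(signal, kernel, stride=2):
    """'Same'-size convolution followed by downsampling by `stride`."""
    full = np.convolve(signal, kernel, mode="same")
    return full[::stride]

y_original = conv_downsample(x, k)
y_shifted = conv_downsample(np.roll(x, 1), k)  # input shifted by one sample

# If the layer were shift invariant the two outputs would agree; if it were
# equivariant they would agree after re-alignment. Neither holds here.
print("max |difference|              :", np.max(np.abs(y_original - y_shifted)))
print("max |difference| after realign:", np.max(np.abs(np.roll(y_original, 1) - y_shifted)))
```

The shifted input lands on the odd-indexed samples of the filtered signal while the original lands on the even-indexed ones, so no shift of the output can reconcile them; this is exactly the aliasing issue that truly invariant convnet designs set out to remove.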

Biography

Ivan Dokmanić is an Associate Professor in the Department of Mathematics and Computer Science at the University of Basel, Switzerland. From 2016 to 2019 he was an Assistant Professor in the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign, where he now holds an adjunct appointment. He received a diploma in electrical engineering from the University of Zagreb in 2007 and a PhD in computer science from EPFL in 2015, and was a postdoctoral researcher at Institut Langevin and École Normale Supérieure in Paris from 2015 to 2016. Before that he was a teaching assistant at the University of Zagreb, a codec developer for MainConcept AG, Aachen, and a digital audio effects designer for Little Endian Ltd., Zagreb. His research interests lie at the intersection of inverse problems, machine learning, and signal processing. He received the Best Student Paper Award at ICASSP 2011, a Google PhD Fellowship, an EPFL Outstanding Doctoral Thesis Award, and a Google Faculty Research Award. In 2019 the European Research Council (ERC) awarded him a Starting Grant.