Tutorial 5: Signal processing methods in Sleep Research
by Gary Garcia-Molina (Philips Research North-America, Briarcliff Manor, NY, USA and Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, USA)
Tutorial 6: Standardization in semantic image annotation and interaction
by Frederik Temmermans (Vrije Universiteit Brussel – iMinds, Belgium), Jaime Delgado (Universitat Politècnica de Catalunya (UPC), Spain) and Mario Döller (University of Applied Sciences FH Kufstein/Tirol, Austria)
Tutorial 7: Signal Processing Tools for Big Data Analytics
by Georgios B. Giannakis (University of Minnesota, USA), Konstantinos Slavakis (University of Minnesota, USA), and Gonzalo Mateos (University of Rochester, USA)
Tutorial 8: Engineering wireless full-duplex nodes and networks
by Melissa Duarte (Huawei, France) and Maxime Guillaud (Huawei, France)
The success of natural, intuitive human-robot interaction (HRI) in the future will critically depend on the responsiveness of the robot to all forms of human expression as well as on the robot’s awareness of the environment. Speech is the most effective means of communication among humans. Furthermore, acoustic signals distinctively characterize physical environments. Truly humanoid robots should therefore extract and constructively exploit auditory information from their environment for communication as well as acoustic scene analysis, as much as humans do. While vision-based HRI is well developed, current limitations in robot audition do not allow for effective, natural human-robot communication in real-world environments. These limitations are mainly due to the severe degradation of the desired acoustic signals by noise, interference and reverberation when captured by the robot’s microphones.
Natural HRI hinges on the development of effective approaches for robot audition. This tutorial provides an overview of and details the challenges, concepts, and state-of-the-art developments in embodied audition for robots. To overcome current limitations in robot audition, algorithmic approaches are presented that work towards intelligent ‘ears’ with close-to-human (or even better) auditory capabilities. Novel robot-specific microphone arrays and signal processing algorithms are introduced for the localization and tracking of multiple sound sources of interest as well as the extraction and recognition of the desired signals. It is shown how robot vision can be used to complement and enhance robot audition by fusing the acoustic and visual modalities. Finally, we discuss how the acquired scenario information from audio and audio-visual processing can be combined and fed back to the acoustic interface for auditory scene analysis.
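To give a flavor of the source-localization building blocks mentioned above, the sketch below estimates the time difference of arrival (TDOA) between two microphone signals with the classical GCC-PHAT method. The signals, sampling rate, and delay are synthetic assumptions for illustration, not material from the tutorial itself.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the time delay of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)                    # zero-pad to avoid circular wrap
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.abs(S) + 1e-12                     # PHAT weighting: keep phase only
    cc = np.fft.irfft(S, n)                    # generalized cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 16000
rng = np.random.default_rng(3)
src = rng.standard_normal(fs)                  # 1 s of broadband source signal
d = 20                                         # hypothetical delay of 20 samples
mic1 = src
mic2 = np.concatenate([np.zeros(d), src[:-d]]) # delayed copy at the second mic
tdoa = gcc_phat(mic2, mic1, fs)                # recovers d / fs seconds
```

In a real robot, the estimated TDOA between microphone pairs would feed a geometric triangulation or tracking stage; reverberation and interfering sources make this step considerably harder than in this noise-free toy case.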
Heinrich Löllmann (Friedrich-Alexander University Erlangen-Nürnberg)
Heinrich Löllmann is a Senior Researcher at the Chair of Multimedia Communications and Signal Processing of the University of Erlangen-Nürnberg, Germany. He received the Dipl.-Ing. (univ.) degree in Electrical Engineering and the Dr.-Ing. degree from RWTH Aachen University in 2001 and 2011, respectively. He joined the audio research laboratory of the Chair of Multimedia Communications and Signal Processing in 2012. Heinrich Löllmann authored one book chapter and more than 30 refereed papers in journals and conference proceedings. His research interests include filter and filterbank design as well as speech and audio signal processing with a special focus on single and multi-channel speech enhancement. For this tutorial he is representing the coordinator of the EU-funded project Embodied Audition for RobotS (EARS).
Christine Evers (Imperial College London)
Christine Evers has been a Research Associate in the Department of Electrical and Electronic Engineering at Imperial College London since 2014, where she is working on acoustic scene analysis for human-robot interaction. Her research interests are in the area of Bayesian signal processing for audio and speech applications, involving multi-speaker localisation and tracking, blind speech dereverberation, sensor fusion, and acoustic environment mapping for robot audition. She previously worked between 2010 and 2014 as a Senior Systems Engineer at Selex ES, Edinburgh, UK, and was a Research Associate at the University of Edinburgh between 2009 and 2010. Christine received her PhD in statistical signal processing from the School of Engineering, University of Edinburgh, UK, in 2010, where she focused on blind speech dereverberation using sequential Monte Carlo methods.
Radu Horaud (Institut National de Recherche en Informatique et Automatique (INRIA), Grenoble)
Radu Horaud received the B.Sc. degree in electrical engineering, the M.Sc. degree in control engineering, and the Ph.D. degree in computer science from the Institut National Polytechnique de Grenoble, Grenoble, France. Currently he holds a position of director of research with the Institut National de Recherche en Informatique et Automatique (INRIA), Grenoble Rhône-Alpes, Montbonnot, France, where he is the founder and head of the PERCEPTION team. His research interests include computer vision, machine learning, audio signal processing, audiovisual analysis, and robotics. He is an area editor of the Elsevier Computer Vision and Image Understanding, a member of the advisory board of the Sage International Journal of Robotics Research, and an associate editor of the Kluwer International Journal of Computer Vision. He was Program Cochair of the Eighth IEEE International Conference on Computer Vision (ICCV 2001). In 2013, Radu Horaud was awarded a five year ERC Advanced Grant for his project Vision and Hearing in Action (VHIA).
In recent years, 3D experiences have become more popular, acknowledging that a faithful, transparent and immersive representation of our world requires more than 2D video. In this context, the current visual representation status quo may be understood as providing efficient multiview video coding solutions only for linear, horizontal-only parallax camera arrangements, narrow baselines and a reduced viewing range. Moreover, current display solutions based on stereoscopic vision exploit only limited depth cues and have inherent accommodation/vergence conflicts.
The emergence of new 3D cameras and displays and the increasing demand for more immersive experiences have led to questioning the fundamentals of the vision process, notably the structure of the information in the light impinging on an observer of a scene. These advances and needs led to the so-called plenoptic function, which measures the intensity of light seen from any viewpoint/camera centre 3D spatial position (x,y,z), any angular viewing direction (θ,φ), over time (t) and for each wavelength (λ). In this context, new (3D) visual representation models and associated coding solutions are needed to overcome these limitations and improve the immersion and interaction experiences provided by some emerging displays. However, novel imaging representations, notably those based on the plenoptic function, require huge amounts of raw data, and thus efficient coding is a must.
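In the usual notation, the seven-dimensional plenoptic function described above can be written as:

```latex
L = P(x, y, z, \theta, \phi, t, \lambda)
```

where (x,y,z) is the observation point, (θ,φ) the viewing direction, t time and λ wavelength; practical representations such as light fields can be seen as lower-dimensional restrictions of this function.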
Nowadays, a plenoptic representation of a visual scene involving full (horizontal and vertical) parallax may be obtained using two main representation approaches: i) a so-called super multiview video (SMV) approach, where multiple, high-density views are acquired using a multi-camera array (or a single camera rig) with a certain (e.g. linear or arc) arrangement; and ii) a so-called plenoptic approach, based on a single integral/holoscopic camera using a lenticular array composed of a large number of microlenses able to acquire the light information coming from different incident angles.
Considering the relevance of the novel potential functionalities, both the MPEG (Free-viewpoint TV (FTV) Ad hoc Group) and JPEG (JPEG PLENO) standardization groups have started studying the representation and coding of the new data types associated with plenoptic imaging representations.
In this context, the main objective of this tutorial is to review and discuss the present status and trends in 3D visual data representation, notably the available coding standards and future coding solutions adopting a plenoptic representation framework.
This tutorial is intended for academics, researchers, professionals and post-graduate students with some background in the area of video coding and applications, especially those with interest in learning about recent developments in visual representation research and standardization, notably regarding 3D imaging, light fields and plenoptic imaging representation.
Fernando Pereira (Instituto Superior Técnico - Instituto de Telecomunicações, Portugal)
Fernando Pereira is with Instituto Superior Técnico and Instituto de Telecomunicações, Lisbon, Portugal. He has been a recognized researcher in video coding since the early nineties. Among his achievements and positions, it is relevant to highlight:
• IEEE Fellow in 2008 and EURASIP Fellow in 2013 for his contributions to video coding
• Area Editor of the EURASIP Signal Processing: Image Communication Journal and Associate Editor of the EURASIP Journal on Image and Video Processing
• Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Image Processing, IEEE Transactions on Multimedia, and IEEE Signal Processing Magazine
• Since January 2013, Editor-in-Chief of the IEEE Journal of Selected Topics in Signal Processing
• Member-at-Large of the Signal Processing Society Board of Governors for the 2014-2016 term
• Participation in important European projects in the area of video coding
• Chairman of the MPEG Requirements group for a few years
• Major contributor to the MPEG-4 and MPEG-7 standards
• Contributor to the JPEG PLENO standard
• ISO/IEC Award for his contributions to the development of the MPEG-4 standard
• Tens of tutorial, keynote and plenary presentations at top international conferences
• More than 250 video coding related publications in international journals and conferences
Eduardo A. B. da Silva (Universidade Federal do Rio de Janeiro, Brasil)
Eduardo A. B. da Silva was born in Rio de Janeiro, Brazil. He received his Ph.D. in Electronics Systems Engineering from the University of Essex, UK, in 1995. He has been a Professor at Universidade Federal do Rio de Janeiro since 1989.
He is co-author of the book "Digital Signal Processing - System Analysis and Design" (Cambridge University Press, editions in 2002 and 2010), which has been translated into Portuguese and Chinese.
He served as associate editor of the IEEE Transactions on Circuits and Systems - Part I in 2002, 2003, 2008 and 2009, and of the IEEE Transactions on Circuits and Systems - Part II in 2006 and 2007, and has been an associate editor of Multidimensional Systems and Signal Processing (Springer) since 2006.
He is Vice-President of Regional and Membership Activities of the IEEE Circuits and Systems Society, and was a member of its Board of Governors in 2012 and 2013. He has also been a member of the IEEE Publications Services and Products Board Strategic Planning Committee since 2013.
His research interests lie in the fields of digital signal and image processing, especially signal compression, digital television, wavelet transforms, and applications to telecommunications and the oil and gas industry.
In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from possibly noisy measurements in a variety of applications in statistics, signal processing, machine learning, etc. In particular, the advent of compressed sensing has led to a flowering of ideas and methods in this area. While the algorithms (basis pursuit, LASSO, etc.) are fairly well established, rigorous frameworks for the exact analysis of the performance of such methods are only just emerging.
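As a small illustration of the kind of recovery problem this tutorial analyzes (not code from the tutorial itself), the sketch below solves the LASSO with a plain iterative soft-thresholding loop, recovering a synthetic sparse signal from Gaussian measurements; all dimensions and parameter values are arbitrary assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))      # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# Synthetic example: recover a 5-sparse signal from 40 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista_lasso(A, y, lam=0.01)
```

The tutorial's theory addresses exactly the questions this experiment raises empirically: how many measurements m are needed, and what estimation error to expect, for a given sparsity level and regularization weight.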
The goal of this tutorial will be to develop and describe a fairly general theory for how to determine the performance (minimum number of measurements, mean-square-error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, etc.). This will allow researchers and practitioners to assess the performance of these methods before actual implementation and will allow them to optimally choose parameters such as regularizer coefficients, number of measurements, etc. The theory includes all earlier results as special cases. It builds on an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes), as well as on a non-trivial generalization due to Gordon in 1988, and introduces concepts
from convex geometry (such as Gaussian widths) in a very natural way. The tutorial will explain all this, and its various implications, in some detail.
Babak Hassibi (Caltech, USA)
Babak Hassibi is the Gordon M. Binder/Amgen professor and executive officer of electrical engineering at the California Institute of Technology, where he has been since 2001. From 1998 to 2001 he was a member of the technical staff at the Mathematical Sciences Research Center at Bell Laboratories,
Murray Hill, NJ, and prior to that he obtained his PhD in electrical engineering from Stanford University.
His research interests span different aspects of communications, signal processing and control. Among other awards, he is a recipient of the David and Lucile Packard Foundation Fellowship and the Presidential Early Career Award for Scientists and Engineers (PECASE).
Where to set the cursor between centralized and distributed decision making is one of the most debated and fascinating questions in future wireless networking. This tutorial feeds this debate by highlighting tools that allow individual devices to interact with each other in a robust manner, for the benefit of the global network.
Decentralized or "team" decision making refers to an optimization framework whereby several network nodes seek to make a common utility-maximizing decision while only observing a partial/local and noisy version of the global system state (like the propagation channels). This optimization problem differs from conventional centralized optimization problems and opens many new possibilities and interesting problems.
The tutorial topic is rooted in the sound theories of coordination and team decision making, which are hot topics within the information-theoretic, communication-theoretic and signal processing communities.
As an introduction, we will show how fundamental limitations of cellular (and other) networks can be addressed via robust decentralized decision making. This includes wide-ranging issues such as interference management, MIMO feedback design, and massive MIMO coordination.
As a second part of the tutorial, we will give an introduction to the general fields of coordination and team decision theories. We will present an overview of the different approaches, their main principles, advantages and limitations. This section provides the fundamental tools used in the rest of the tutorial, presented in a didactic manner. Importantly, coordination and team decision theories are transversal topics which are useful to other fields as well (such as artificial intelligence, control and robotics). In particular, the formulation of the optimization problem is very general and can be used to model the distributed coordination of nodes in many different scenarios.
In the third part, we review practical applications of team decision to the problem of wireless network optimization. Considering the most common and practically relevant scenarios (including resource allocation problems such as power control, scheduling, and beamforming), we show how important gains can be realized and how the obstacles initially formulated can be overcome. Practical gains for wireless networks are illustrated.
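As a concrete instance of the power-control problems mentioned above (a minimal sketch with entirely hypothetical gain and noise values, not the tutorial's own material), the classical Foschini-Miljanic iteration lets each link adapt its transmit power using only its own locally measured SINR:

```python
import numpy as np

# Hypothetical 3-link network: G[i, j] is the gain from transmitter j
# to receiver i; diagonal entries are the direct-link gains.
G = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
noise, target = 0.01, 2.0           # receiver noise power, target SINR

def sinr(p):
    """SINR of each link for the transmit power vector p."""
    interference = G @ p - np.diag(G) * p + noise
    return np.diag(G) * p / interference

p = np.ones(3)                      # initial transmit powers
for _ in range(100):
    p = p * target / sinr(p)        # each link uses only its local SINR
```

Provided the target SINR is feasible for the given gains, this purely local update converges to the power vector at which every link meets the target exactly, illustrating how a global objective can be reached from partial, local observations.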
David Gesbert (EURECOM, France)
David Gesbert (IEEE Fellow) is Professor and Head of the Mobile Communications Department, EURECOM, France, where he also heads the Communications Theory Group. He obtained the Ph.D. degree from Ecole Nationale Superieure des Telecommunications, France, in 1997. From 1997 to 1999 he was with the Information Systems Laboratory, Stanford University. In 1999, he was a founding engineer of Iospan Wireless Inc., San Jose, CA, a startup company pioneering MIMO-OFDM (now Intel). Between 2001 and 2003 he was with the Department of Informatics, University of Oslo. D. Gesbert has published about 230 papers (five of which won paper awards) and several patents, and has guest-edited 7 special issues, all in the area of signal processing, communications, and wireless networks. He co-authored the book “Space time wireless communications: From parameter estimation to MIMO systems”, Cambridge University Press, 2006. He is currently involved in the organization of IEEE ICC 2017, to be held in Paris, as a Technical Program Co-Chair. In 2014, he was named in the Thomson-Reuters List of Highly Cited Researchers in Computer Science.
Paul de Kerret (Télécom Bretagne, France)
Paul de Kerret (IEEE Member) is currently an Assistant Professor at Télécom Bretagne. He graduated in 2009 from Telecom Bretagne and obtained a diploma degree in electrical engineering from the Munich University of Technology (TUM). In 2010, he was a research assistant at the Institute for Theoretical Information Technology, RWTH Aachen University. He obtained his Ph.D. degree in 2013 from EURECOM under the supervision of David Gesbert and then pursued his work there for one year as a post-doctoral researcher. He has been working in several key European projects focused on the cooperation of transmitters in future networks. He is co-author of several articles in prestigious IEEE journals and has published 20 articles in highly selective IEEE conferences.
Occupying nearly a third of the human lifespan, sleep constitutes the main activity of the human brain. Numerous recent theories on the function of sleep confirm its beneficial role at multiple physiological levels. Together with physical activity and nutrition, sleep is at the basis of a healthy life. Despite the importance of sleep, the latest trends show that our society has been progressively curtailing sleep in favor of other activities.
The realization that we may not be getting enough sleep has motivated the emergence of numerous portable (consumer or medical type) devices that measure signals which can provide sleep-relevant information for consumers, patients, or physicians.
The types of signals able to provide sleep-relevant information span a large variety, including: movement (actigraphy, radar), muscle activity (electromyography, EMG), ocular activity (electro-oculography, EOG), cardio-respiratory activity (electrocardiogram ECG, breathing effort), electroencephalography (EEG), and photoplethysmography (PPG). Signal processing plays an important role in the analysis and interpretation of these signals in the context of sleep.
Sleep research poses interesting signal processing challenges. In offline processing, one faces the challenge of the inherent long duration of the recordings and the relatively high sampling rate necessary to capture microevents that are important to understand the function of sleep. In online processing, one faces the challenge of identifying sleep states (for instance REM or NREM stages) in real-time and with a relatively short latency without having the option to consider the overall sleep architecture.
In the first part of this tutorial lecture, the main theories about the function of sleep will be introduced along with a general presentation of sleep science.
In the second part, the focus will be on signal processing methods supporting sleep research. The analysis of the sleep macro-structure in terms of sleep stages will be presented followed by the analysis of sleep microstructure characterized by events such as spindles and slow-waves. Sleep models (sleep dissipation, circadian model, and sleep inertia) will also be presented.
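Microstructure events such as slow waves and spindles are commonly characterized through band-limited EEG power. The toy sketch below (synthetic signal and arbitrary parameters, not from the tutorial) computes the power of one 30-second epoch in the slow-wave (delta) and spindle (sigma) bands:

```python
import numpy as np

def band_power(epoch, fs, f_lo, f_hi):
    """Average spectral power of one EEG epoch in the band [f_lo, f_hi) Hz."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spec[mask].mean()

# Synthetic 30-second epoch at 100 Hz: a 1 Hz slow wave plus broadband noise.
fs, dur = 100, 30
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 1.0 * t) + 0.2 * rng.standard_normal(len(t))

delta = band_power(eeg, fs, 0.5, 4.0)    # slow-wave (delta) band power
sigma = band_power(eeg, fs, 12.0, 15.0)  # spindle (sigma) band power
```

In practice, such band-power features computed per 30-second epoch are one of the basic inputs to both offline sleep staging and the low-latency online detection discussed earlier.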
To conclude, current trends in sleep research and the signal processing tools to support this research will be presented.
Gary Garcia-Molina (Philips Research North-America, Briarcliff-Manor, NY, USA and Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, USA)
Dr. Gary Garcia Molina has worked in neuroscience research for more than a decade. In 2004, he obtained his doctoral degree from the Swiss Federal Institute of Technology Lausanne, Switzerland (EPFL).
In January 2005, Dr. Garcia joined Philips Research Europe laboratories (Eindhoven, The Netherlands) where he led research activities in the areas of EEG based Brain-Computer Interfaces (BCI) and sleep. In 2007, Gary Garcia led various work packages in the EU project BRAIN in which he developed a visual stimulation based BCI system.
In 2012, Gary Garcia joined Philips Research North America as a clinical scientist cooperating with the University of Wisconsin-Madison (UW). Dr. Garcia has an honorary fellow appointment at the UW. He is based in Madison, Wisconsin, US, and works at the world-renowned Center for Sleep and Consciousness, where he conducts research in the domain of closed-loop systems for the enhancement of sleep.
Gary Garcia has published numerous (50+) papers and book chapters and filed (20+) patent applications on signal processing, BCI, and sleep. In addition, he has given several tutorial lectures at various international conferences, including BioCAS 2008, ACII 2009, EUSIPCO 2010, ISSPA 2011, and SIP 2012.
Contemporary mobile or web-based applications interact with images in various ways. Multiple platforms or applications manipulate the same content. Too often, this leads to frustrating side effects: annotations may not be retained when an image is moved from one platform to another, or metadata might not be processed correctly because a different schema or vocabulary is used. These problems can be avoided by using the right tools and standards. This tutorial will cover three parts in order to provide insights into the domain of semantic image annotation and its interlinking, and to show how the aforementioned problems can be avoided.
The first part will give a general introduction to linked data and image annotations. It will survey the main state-of-the-art concepts in the domain of image annotation and indexing. Modern semantic image annotation representations in research will be briefly covered, including RDF/ontology-based as well as XML-based approaches. The main focus will be on Linked Media concepts in the domain of images.
The second part will highlight interoperability issues in image metadata management and introduce standardized solutions. While presenting an overview of different initiatives (JPEG, MPEG, W3C), the main focus will be on the Joint Photographic Experts Group (JPEG, ISO/IEC JTC1/SC29/WG1), which has released a new standard on next-generation image metadata. The main goal of the standard is to provide a simple and uniform way of annotating JPEG images with metadata compliant with the Linked Data principles.
The last part of the tutorial will focus on interaction with images and on how, once images are correctly annotated, to take advantage of the annotations to bridge the gap between image data and semantics. A basic type of interaction with images is image search. Semantic annotations allow language-independent search as well as enriched search by linking with external resources. The use of standardized query languages and APIs allows distributed search over multiple platforms. The concept of visual search and its relation to automated annotation will also be addressed. Again, the main focus will be on supporting standards such as JPEG’s JPSearch framework.
Frederik Temmermans (Vrije Universiteit Brussel – iMinds, Belgium)
Dr. Frederik Temmermans is a researcher at iMinds and the Department of Electronics and Informatics (ETRO) of the Vrije Universiteit Brussel (VUB). He obtained his PhD in Engineering in 2014. His research mainly focuses on medical imaging and interoperable image search. He has been involved in several research projects in the medical, mobile and cultural domains. Frederik has been an active member of the JPEG standardization committee (ISO/IEC JTC1/SC29/WG1) since 2006 and is a founder of the VUB spinoff company Universum Digitalis.
Jaime Delgado (Universitat Politècnica de Catalunya (UPC), Spain)
Prof. Jaime Delgado obtained his Ph.D. in Telecommunication Engineering in 1987, having been a Telecommunication Engineer since 1983. Since September 2006, he has been a Professor at the Department of Computer Architecture of the Universitat Politècnica de Catalunya (UPC) in Barcelona, Spain. Previously, from 1999, he was a Professor of Computer Networks and Computer Architecture at the Technology Department, Universitat Pompeu Fabra (UPF), also in Barcelona. He is head and founder of the Distributed Multimedia Applications Group (DMAG) and has been project manager of several European and national research projects in the areas of electronic commerce, Digital Rights Management, metadata, multimedia content, security and distributed applications. He has participated actively in international standardization since 1989, as co-editor of standards and co-chair of groups in ISO/IEC, EWOS, ETSI, ITU-T and CEN/ISSS, and has served as an evaluator and reviewer for the European Commission in different research programs since 1989, as well as an advisor for the Spanish Ministry of Science. He is the author of several hundred published papers and books, and a member or chair of many international conference programme committees.
Mario Döller (University of applied science FH Kufstein/Tirol, Austria)
Prof. (FH) PD Dr. habil. Mario Döller obtained his PhD from the University of Klagenfurt (Austria) in 2004 and his lecturing qualification in computer science from the University of Passau (Germany) in 2012. Currently, Dr. Döller is a full professor for multimedia and web-based information systems at the University of Applied Sciences FH Kufstein/Tirol and supervisor of one PhD student at the University of Passau. Dr. Döller is an active member of the MPEG and JPEG standardization bodies (he served as session chair for the standardization of the MPEG Query Format and is currently involved in the JPSearch project of JPEG and the Multimedia Preservation project in MPEG). In addition, he was invited as a scientific expert to the Media Annotation Working Group of W3C. Furthermore, he serves on the programme committees of numerous conferences and participated in the organization committees of the EuroPar 2002, MUE 2010 and SMPT 2010 conferences. Dr. Döller also participated in the review process of the EU FP6 program and led the University of Passau’s collaboration with Siemens AG in the German national THESEUS/MEDICO project. His main research interests cover topics within multimedia information systems, content-based retrieval, and web-based, distributed and mobile systems.
We live in an era of data deluge. Pervasive sensors collect massive amounts of information on every bit of our lives, churning out enormous streams of raw data in various formats. Mining information from unprecedented volumes of data promises to limit the spread of epidemics and diseases, identify trends in financial markets, learn the dynamics of emergent social-computational systems, and protect critical infrastructure including the smart grid and the Internet’s backbone network. While Big Data can definitely be perceived as a big blessing, big challenges also arise with large-scale datasets. The sheer volume of data often makes it impossible to run analytics using a central processor and storage; instead, distributed processing with parallelized multi-processors is preferred, while the data themselves are stored in the cloud. As many sources continuously generate data in real time, analytics must often be performed “on-the-fly” and without an opportunity to revisit past entries. Due to their disparate origins, the resultant datasets are often incomplete and include a sizable portion of missing entries. In addition, massive datasets are noisy, prone to outliers and vulnerable to cyber-attacks. These effects are amplified if the acquisition and transportation cost per datum is driven to a minimum. Overall, Big Data present challenges in which resources such as time, space and energy are intertwined in complex ways with data resources. Given these challenges, ample signal processing (SP) opportunities arise. This tutorial seeks to provide an overview of ongoing research in novel models applicable to a wide range of Big Data analytics problems, as well as algorithms and architectures to handle the practical challenges, while revealing fundamental limits and insights on the mathematical trade-offs involved.
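As one illustration of the "on-the-fly" processing constraint mentioned above (a generic sketch with synthetic data, not the tutorial's own algorithms), recursive least squares updates a linear model one datum at a time, without ever revisiting or storing past entries:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
w_true = rng.standard_normal(d)        # unknown model to be tracked

P = 1e3 * np.eye(d)                    # running inverse sample-covariance
w = np.zeros(d)                        # running estimate
for _ in range(5000):                  # data arrive as a stream, one at a time
    x = rng.standard_normal(d)
    y = w_true @ x + 0.1 * rng.standard_normal()
    Px = P @ x
    k = Px / (1.0 + x @ Px)            # RLS gain
    w = w + k * (y - w @ x)            # update using only the current datum
    P = P - np.outer(k, Px)            # rank-one update; past data never revisited
```

The per-datum cost here is O(d^2) in time and memory, independent of the number of samples seen, which is exactly the kind of resource/data trade-off the tutorial formalizes.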
Georgios B. Giannakis (University of Minnesota, USA)
Georgios B. Giannakis received his Diploma in Electrical Engineering from the National Technical University of Athens, Greece, in 1981. From 1982 to 1986 he was with the University of Southern California (USC), where he received his M.Sc. in Electrical Engineering (1983), M.Sc. in Mathematics (1986), and Ph.D. in Electrical Engineering (1986). Since 1999 he has been a professor with the University of Minnesota, where he now holds an ADC Chair in Wireless Telecommunications in the ECE Department and serves as director of the Digital Technology Center.
His general interests span the areas of communications, networking and statistical signal processing, subjects on which he has published more than 375 journal papers, 635 conference papers, 21 book chapters, two edited books and two research monographs (h-index 112). His current research focuses on sparsity and big data analytics, wireless cognitive radios, mobile ad hoc networks, renewable energy, the power grid, gene-regulatory networks, and social networks.
He is the (co-) inventor of 23 patents issued, and the (co-) recipient of 8 best paper awards from the IEEE Signal Processing (SP) and Communications Societies, including the G. Marconi Prize Paper Award in Wireless Communications. He also received Technical Achievement Awards from the SP Society (2000), from EURASIP (2005), a Young Faculty Teaching Award, the G. W. Taylor Award for Distinguished Research from the University of Minnesota, and the IEEE Fourier Technical Field Award (2015). He is a Fellow of EURASIP, and has served the IEEE in a number of posts, including that of a Distinguished Lecturer for the IEEE-SP Society.
Konstantinos Slavakis (University of Minnesota, USA)
Konstantinos Slavakis received the M.Eng. and Ph.D. degrees in Electrical and Electronic Engineering from Tokyo Institute of Technology, (TokyoTech), Japan, in 1999 and 2002, respectively. He has been a Japanese Government Scholar, a JSPS Postdoc at TokyoTech, and a PostDoc with the Dept. of Informatics and Telecommunications, University of Athens, Greece. He served as an Assistant Professor in the Dept. of Telecommunications and Informatics, at the University of Peloponnese, Greece, and he is currently a Research Associate Professor in the Dept. of ECE, and the Digital Technology Center, University of Minnesota.
His research interests include signal processing, machine learning, and big data analytics. He served the IEEE Transactions on Signal Processing as an Associate Editor (2009-2013) and as a Senior Area Editor (2010-2015). He has also delivered tutorial talks at ICASSP 2012, 2014, and 2015.
Gonzalo Mateos (University of Rochester, USA)
Gonzalo Mateos was born in Montevideo, Uruguay, in 1982. He received his B.Sc. degree in Electrical Engineering from Universidad de la Republica, Uruguay, in 2005 and the M.Sc. and Ph.D. degrees in Electrical Engineering from the University of Minnesota (UofM), Twin Cities, in 2009 and 2011. From 2004 to 2006, he worked as a Systems Engineer at Asea Brown Boveri (ABB), Uruguay. During the 2013 academic year, he was a visiting scholar with the Computer Science Dept., Carnegie Mellon University. Since 2014, he has been an Assistant Professor with the Department of Electrical and Computer Engineering at the University of Rochester, Rochester, NY.
His research interests lie in the areas of statistical learning from Big Data, network science, wireless communications and signal processing.
His current research focuses on algorithms, analysis, and applications of statistical signal processing tools to dynamic network health monitoring and to social, power-grid, and Big Data analytics. Since 2012, he has served on the Editorial Board of the EURASIP Journal on Advances in Signal Processing. He received the Best Student Paper Award at the 13th IEEE Workshop on Signal Processing Advances in Wireless Communications, 2012, held in Cesme, Turkey, and was also a finalist in the Student Paper Contest at the 14th IEEE DSP Workshop, 2011, held in Sedona, Arizona, USA. His doctoral work was recognized with the 2013 UofM Best Dissertation Award (Honorable Mention) across all Physical Sciences and Engineering areas.
A full-duplex wireless transceiver node can transmit and receive at the same time and in the same frequency band, whereas a half-duplex transceiver cannot sustain simultaneous bidirectional in-band communication. Networks in which all or some of the nodes are full-duplex capable can therefore potentially achieve higher spectral efficiency than networks where all nodes are half-duplex; this is the main motivation for deploying full-duplex nodes. However, implementing a full-duplex transceiver requires mitigating the self-interference signal, whose power is several orders of magnitude higher than that of the signal of interest received from a distant node. Recently, several research groups have demonstrated self-interference mitigation substantial enough to realize full-duplex communications with higher spectral efficiency than half-duplex systems. These demonstrations have spurred research on full-duplex wireless communications and made full-duplex a candidate technology for next-generation wireless networks. The resulting surge of research has produced a variety of self-interference mitigation methods and protocol designs for networks with full-duplex nodes. In this tutorial, we will present the state of the art in full-duplex technology and give some insight into potential applications in future communication systems, in particular in the context of fifth-generation (5G) cellular networks and 802.11ax WLAN. Attendees will learn about the challenges to be overcome at the RF level, physical layer level, and network level, and about the trade-offs between performance gains and hardware complexity associated with full-duplex operation.
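The core difficulty described above, a self-interference signal that dwarfs the signal of interest but is generated from a waveform the transceiver itself knows, is commonly attacked with adaptive cancellation in the digital domain. The following sketch (not from the tutorial; channel taps, signal scales, and the LMS step size are illustrative assumptions) shows the idea: estimate the self-interference channel with an LMS filter driven by the known transmit samples, and subtract the reconstructed self-interference from the received signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

tx = rng.standard_normal(n)               # known transmitted (self-interference) waveform
h_si = np.array([1.0, 0.5, 0.25])         # hypothetical 3-tap self-interference channel
desired = 1e-3 * rng.standard_normal(n)   # weak signal of interest, ~60 dB below the SI

# Received signal: strong self-interference plus the weak signal of interest
rx = np.convolve(tx, h_si)[:n] + desired

# LMS adaptive filter: estimate the SI channel, subtract the reconstructed SI
L, mu = 3, 0.01
w = np.zeros(L)                           # adaptive tap estimates
out = np.zeros(n)                         # residual after cancellation
for k in range(L, n):
    x = tx[k - L + 1:k + 1][::-1]         # most recent L transmit samples, newest first
    e = rx[k] - w @ x                     # residual = received minus estimated SI
    out[k] = e
    w += mu * e * x                       # LMS weight update

# Interference-plus-residual power relative to the signal of interest, in dB
before = 10 * np.log10(np.mean(rx**2) / np.mean(desired**2))
after = 10 * np.log10(np.mean(out[n//2:]**2) / np.mean(desired[n//2:]**2))
```

After convergence `w` approaches `h_si` and the residual is dominated by the signal of interest, so `after` is close to 0 dB while `before` reflects the raw self-interference level. Practical systems combine such digital cancellation with antenna isolation and analog cancellation, since each stage alone cannot absorb the full dynamic range.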
The tutorial is aimed at a broad audience, so that attendees with different backgrounds can understand both the overall challenges of full-duplex system design and its potential benefits. Currently proposed full-duplex solutions still require improvement before they can be integrated into commercial networks. The teaching aim of the tutorial is to familiarize attendees with the main results to date on full-duplex nodes and networks and to highlight some of the aspects that future research needs to address.
Melissa Duarte (Huawei, France)
Melissa Duarte received her B.Sc. degree in Electrical Engineering from the Pontificia Universidad Javeriana, Bogota, Colombia, in 2005, and her M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from Rice University, Houston, TX, in 2007 and 2012, respectively. From 2012 to 2013, she was a postdoctoral researcher at the School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland. She is currently a research engineer at the Mathematical and Algorithmic Sciences Lab, France Research Center, Huawei Technologies Co. Ltd. Her Ph.D. thesis, entitled “Full-duplex Wireless: Design, Implementation and Characterization”, received the 2012 Best Dissertation Award of the Electrical and Computer Engineering Department at Rice University. She holds two US patents, one of them on a “System and Method for Full-Duplex Cancellation”. She received the ACM MobiHoc 2013 Best Paper Award for the paper “Quantize-Map-Forward (QMF) Relaying: An Experimental Study”. Her research interests include the design and implementation of architectures for next-generation wireless communications. Her specific interests and expertise include full-duplex wireless systems, networks based on cooperative relaying, multiple-input multiple-output (MIMO) antenna systems, multi-carrier (OFDM) systems, software-defined radio (SDR), channel modeling for wireless systems, and over-the-air measurements and experiments for the evaluation of wireless networks.
Maxime Guillaud (Huawei, France)
Maxime Guillaud received the M.Sc. degree in Electrical Engineering from ENSEA, Cergy, France, in 2000, and the Ph.D. in Electrical Engineering and Communications from Telecom ParisTech, Paris, France, in 2005. From 2000 to 2001, he was a research engineer at Lucent Bell Laboratories in Holmdel, NJ, USA. From 2006 to 2010, he was a Senior Researcher at FTW in Vienna, Austria. From 2010 to 2014, he was a researcher with the Vienna University of Technology, Vienna, Austria. Since 2014, he has been a principal researcher at the France Research Center of Huawei Technologies.
Dr. Guillaud is the author of over 50 research papers and holds two patents. He received a Student Paper Award at SPAWC 2005 and was a co-recipient of the Mario Boella Business Idea Prize of the NEWCOM Network of Excellence in 2005. He has worked on transceiver architectures for multi-user cellular systems as well as on various aspects of wireless channel modeling, including sparse representations and channel-state inference methods. He introduced the principle of relative calibration for exploiting channel reciprocity. His recent interests include interference management in dense wireless systems. Dr. Guillaud is a Senior Member of the IEEE.