EUSIPCO 2015 Special Sessions
List of Special Session proposals that have been accepted (paper submission by invitation):
- Acoustic scene analysis using microphone array
- Ad-hoc transducer arrays for speech and audio processing
- Advanced computational methods for Bayesian signal processing
- Advanced signal processing for radar applications
- Advances in matrix-tensor decomposition and subspace learning
- Algorithms for distributed coordination and learning
- Bayesian non-parametrics for signal and image processing
- Bio-inspired and perceptual data processing
- Complex audio scene analysis
- Estimation and modeling of relative transfer functions between microphones in noisy environments
- Fault detection, diagnosis and prognosis: New trends for system monitoring
- Massive and cloud-based virtual MIMO: alternative, complementary or merging wireless network technologies?
- Methodologies for signal processing on graphs
- New directions in high-dimensional optimisation
- Nonlinear signal and image processing - a celebration in honour of Giovanni L. Sicuranza on his 75th birthday
- Processing of reverberant speech and audio signals
- Recent advances and applications in hyperspectral imaging
- Recent advances in biomedical signal and image processing
- Recent advances in multifractal analysis and applications
- Robust EEG signal processing towards practical Brain-Computer Interfaces
- Satellite communications in 5G networks
- Self-sustainable networks: energy harvesting and wireless energy transfer
- Sequential Monte Carlo methods for tracking in dynamical systems
- Signal processing for healthcare: applications, challenges and opportunities
- Signal processing for musical acoustics
- Sparse arrays and coprime sampling
- Sparsity, sampling and compressed sensing
- COST IC1106 Special Session: Using biometrics for forensic investigations
- Visual attention modelling
Acoustic scene analysis using microphone array
Shoji Makino, Nobutaka Ono
We are surrounded by sounds in our daily lives, and acoustic scene analysis technologies are essential for understanding the acoustic environment. Acoustic scene analysis includes (but is not limited to) event detection, audio content searching, acoustic scene classification, sound profiling, source localization, source separation, noise reduction, dereverberation, sound effect generation, virtual acoustic reproduction, and many others. These techniques form the core of state-of-the-art audio and acoustic signal processing and are indispensable to the realization of future communication for both man-machine and human-human interfaces. This special session is dedicated to recent advances in acoustic scene analysis based on microphone arrays. Its aim is to offer an opportunity to link these techniques across different areas and to find effective ways of achieving our goals. The session will stimulate interest in this challenging area and help create a growing body of high-quality research aligned with this idea.
Ad-hoc transducer arrays for speech and audio processing
Nikolay Gaubitch, Richard Hendriks
Acoustic transducer (microphone or loudspeaker) arrays offer superior performance for audio capture and rendering compared to single transducers. However, traditional transducer arrays require dedicated hardware that often needs to be configured by experts. Much of this can be overcome by ad-hoc transducer arrays, which is why they are becoming an increasingly popular research topic in the signal processing community. Many challenges remain to be resolved before such technology can be used in practice, including clock synchronization between devices, microphone localization, new enhancement or beamforming algorithms, and distributed algorithm implementations. In recent years, several interesting methods have been proposed, addressing one or more of these challenging issues. The aim of this special session is to bring together leading researchers in the field of ad-hoc transducer arrays in order to summarise the state of the art and to highlight open research questions, so as to promote this emerging area of research.
Advanced computational methods for Bayesian signal processing
David Luengo, Víctor Elvira, Luca Martino
Computational methods are often required in Bayesian signal processing to deal with intractable posterior densities. Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods have been widely used in signal processing and communications applications. Several extensions and variants of these two families of methods have been proposed in order to improve their performance: population Monte Carlo (PMC) schemes, sequential quasi-Monte Carlo (SQMC) algorithms, adaptive Monte Carlo approaches, etc. Some of these methods have found their way into the signal processing literature, but there are still many advanced computational methods developed within the statistics community that are not widely known by signal processing practitioners. Special mention goes to the so-called Approximate Bayesian Computation (ABC) framework, which is attracting a great deal of interest within the statistical community. This special session intends to bridge the gap between both communities by presenting a collection of papers that describe several advanced computational methods for signal processing applications.
Advanced signal processing for radar applications
Maria Sabrina Greco, Fulvio Gini
In the last decade, the radar world has been witnessing a new revolution, comparable to that caused by the introduction of adaptivity in the 1970s. Continuing advances in device technologies, combined with adaptive processing, present rich opportunities for new sensing methodologies and new challenges in signal processing. Complex and integrated systems, waveform diversity, and multi-mission, multi-mode operation are the new paradigms in radar theory and technology. This special session seeks to provide a venue for the publication of timely research results in advanced signal processing techniques for radar applications. As such, it will touch on a wide variety of topics, including random matrix theory, robust detection and estimation, sparse signal recovery and sub-Nyquist techniques. The scope of this special session is naturally interdisciplinary, involving contributions from experts in the areas of signal processing, array processing, radar systems, information theory, and communications theory. It will focus particularly on the possible improvements introduced by advanced signal processing techniques in radar systems operating in new, complex scenarios, with multiple sensors and very fast reaction times.
Advances in matrix-tensor decomposition and subspace learning
Sergios Theodoridis, Athanasios Rontogiannis
Nowadays, a plethora of applications in signal processing, pattern recognition and machine learning require the processing and extraction of information from high-dimensional data. However, despite the high dimensionality of the data, the relevant information often resides in low-dimensional subspaces. Principal component analysis has for many years been the main analytic tool for low-rank subspace learning. Recently, advances in compressive sensing and matrix-tensor decomposition methods have sparked new interest in the field. In this context, novel robust subspace learning methods have been developed that are able to operate under non-ideal conditions. An even more challenging situation, in the era of big data, arises when the underlying low-dimensional subspace changes over time. Under such circumstances, real-time subspace tracking algorithms are required, combining online processing capabilities, robustness, low complexity and high estimation accuracy. In this framework, this special session provides a forum for discussing the most recent advances in the field, with emphasis on, but not restricted to, online and distributed solutions, Bayesian subspace learning, robustness, optimization and convergence issues, applications, and big data analytics.
Algorithms for distributed coordination and learning
David Gesbert, Paul De Kerret
The architecture of future wireless networks is shifting toward heterogeneous designs featuring large numbers of transmitters distributed over space. Future networks have the potential to deliver very large traffic capacity, yet come with challenges related to interference between devices. In principle, such issues can be tackled efficiently via coordination paradigms; coordination methods include power control, precoding design, interference alignment, scheduling, etc. In practice, coordination and device communication optimization must be carried out on the basis of uncertain channel state information, the characteristics of which are learned, possibly on the fly, via local measurements. The design of algorithms for distributed coordination lies at the crossroads of game theory, learning, and distributed optimization. Important progress has recently been made and translated into novel practical approaches, yet many open questions remain. The session will discuss such open problems and bring together experts from different fields so as to boost novel collaborations and innovative approaches.
Bayesian non-parametrics for signal and image processing
François Caron, Pierre Chainais
Statistical methods have become more and more popular in signal and image processing over the past decades. These methods have been able to tackle various applications such as speech recognition, object tracking, image segmentation or restoration, classification, clustering, etc. The aim of this special session is to popularize the use of Bayesian nonparametric methods in statistical signal and image processing. Like Bayesian parametric methods, this class of methods is concerned with the elicitation of priors and the computation of posterior distributions, but now on infinite-dimensional parameter spaces. Although these methods have become very popular in statistics and machine learning over the last 15 years, their potential is largely underexploited in signal and image processing. The session, which gathers researchers in statistics, machine learning, and signal and image processing, seeks to bridge the gap between the research on these methods in the different communities and to stimulate deeper multidisciplinary interactions on this topic. Applications to image processing, time series, telecommunications and social networks will provide various concrete illustrations.
Bio-inspired and perceptual data processing
Lionel Fillatre, Chaker Larabi
The last decades have witnessed an increasing interest in mimicking human behavior or natural systems for various tasks, due to the limitations observed with standard techniques in various fields. Bio-inspired methods provide a set of powerful evolutionary approaches based on the principles of biological or natural systems. This class of methods, including for instance retinal filtering, networks of neural cliques, and emerging technologies, complements traditional data processing techniques and can be applied where traditional methods and approaches have encountered difficulties. Bio-inspired methods have been successfully used for the processing, encoding and pattern recognition of various types of data, ranging from images to video. Their ability to mimic the behavior of biological systems significantly increases the plausibility of the results and their similarity with the ground truth. The special session gathers timely and original research contributions on bio-inspired data processing applied to different fields. It aims at bringing together researchers and industry practitioners working on, or interested in, this hot topic, at disseminating the most recent findings, and at raising important questions about usability, extensibility and plausibility, in addition to future directions and challenges. The main topics cover data processing and implementation using bio-inspired approaches.
Complex audio scene analysis
Romain Serizel, Gaël Richard and Slim Essid
Despite tremendous progress in recent years, it is still difficult for a machine listening system to demonstrate the same capabilities as human listeners in the analysis of complex acoustic scenes. Yet the analysis of environmental sounds has been receiving growing interest from the community and is targeting an ever-increasing set of audio categories. Typical tasks are audio-based scene classification and audio event recognition. While some popular techniques developed for speech or music applications are useful for acoustic event detection and scene analysis, the wide heterogeneity of possible sounds means that novel types of signal processing and machine learning methods should be developed, including novel techniques for audio source segmentation and separation.
Most of the acoustic scene analysis approaches developed until now have focused on small datasets. However, with the growing number of audio recordings coming from various sources, additional problems arise. There is a need for new approaches that are, by design, efficient on large-scale problems and robust against signal degradation and acoustic variability.
Estimation and modeling of relative transfer functions between microphones in noisy environments
Zbynek Koldovsky, Sharon Gannot
Fault detection, diagnosis and prognosis: New trends for system monitoring
Nowadays, system health monitoring is one of the main concerns in industrial and academic research, owing to increasingly demanding safety rules and the drive to reduce maintenance costs. Typical applications include the monitoring of transportation systems (automobiles, aircraft and trains), of energy generation, transportation, storage and distribution systems (e.g. nuclear power plants, wind turbines, smart grids), and of industrial processes. In smart systems, faults are detected at an early stage and classified, and the system lifetime is predicted to optimize maintenance operations. To meet these requirements, new monitoring algorithms are continuously developed, integrating state-of-the-art signal and data analysis/processing techniques and pattern recognition approaches. This special session will focus on the application of signal analysis/processing techniques to the health monitoring of complex systems. Many approaches are relevant to this topic: quantitative approaches relying on efficient physical modeling, qualitative approaches, and data-driven ones. Both theoretical and applied works will be considered, with particular attention paid to timely applications such as renewable-energy-based systems, smart grids, and vehicular and industrial applications.
Massive and cloud-based virtual MIMO: alternative, complementary or merging wireless network technologies?
Laura Cottatellucci, Petros Elia
Massive MIMO and cloud-based virtual MIMO systems, supported by cloud architectures, are widely believed to be key enablers of future 5G networks, and are generally considered as alternative technologies. On the one hand, massive MIMO is thought to better capitalize on existing cellular infrastructures to achieve inter-cell-interference-free communications. On the other hand, cloud-based virtual MIMO is envisioned to maintain some of the benefits of massive MIMO while also transcending the cellular architecture and offering some appealing features of small cells. A comparison between these two technologies is a completely open issue, even under ideal conditions. Understanding their limitations and exploring their possible coexistence and convergence is crucial, since heterogeneity is a driving aspect in the conception of 5G. This special session gathers experts from the two fields and provides an intellectually challenging environment for exchanging ideas, results and strategic visions, identifying technical challenges, and fostering discussions towards identifying key driving concepts and developing synergies.
Methodologies for signal processing on graphs
Pierre Borgnat, Pierre Vandergheynst, Paulo Gonçalves
The emergent domain of signal processing on graphs aims at extending the notions of processing signals defined over Euclidean vector spaces (e.g. time and space) to processing signals and information (data) defined over arbitrary non-regular graphs. This faces several challenges, combining the discrete mathematics of graph theory, the statistical analysis of complex networks, and the vast methodological fields of signal and image processing. The special session is organised around contributions with a strong methodological asset, addressing fundamental questions and providing structural milestones to pave the way towards graph signal processing. For instance, one key issue is to develop methods that outperform current approaches, which do not apply to directed or weighted graphs. Some major contributions to be presented in this session will tackle such issues. Other works addressing structuring tenets of signal processing will also be put forward in this session. Among those, smoothing, oversampling, the characterization and modelling of stationary signals, and the uncertainty principle are basic notions that will be cast in the context of graph signals.
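To make the notion of smoothness on a graph concrete, the sketch below (purely illustrative, not tied to any particular toolbox) builds the combinatorial Laplacian L = D - A of a small path graph and evaluates the quadratic form x^T L x, which sums (x_i - x_j)^2 over the edges and is small exactly when the signal varies slowly across connected nodes:

```python
# Minimal sketch: measuring graph-signal smoothness with the
# Laplacian quadratic form x^T L x on a 4-node path graph.
# All names here are illustrative.

def laplacian(n_nodes, edges):
    """Combinatorial Laplacian L = D - A as a nested list."""
    L = [[0] * n_nodes for _ in range(n_nodes)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def quadratic_form(L, x):
    """x^T L x = sum over edges of (x_i - x_j)^2; small means smooth."""
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1), (1, 2), (2, 3)]          # path graph 0-1-2-3
L = laplacian(4, edges)

smooth = [1.0, 1.1, 1.2, 1.3]             # varies slowly along the path
rough  = [1.0, -1.0, 1.0, -1.0]           # alternates at every edge

print(quadratic_form(L, smooth))  # small
print(quadratic_form(L, rough))   # large
```

The same quadratic form underlies smoothing and regularization on graphs; for directed or weighted graphs, which the session highlights as an open issue, L itself must be redefined.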
New directions in high-dimensional optimisation
Marcelo Pereyra, Jean-Christophe Pesquet
High-dimensional optimisation algorithms are ubiquitous in modern signal processing, computer vision, and machine learning, where they are used to compute solutions to a wide variety of direct or inverse problems related to large-scale regression, classification, clustering, detection, prediction, restoration, and reconstruction. These algorithms are the focus of important research efforts aimed at delivering ever more computationally efficient, theoretically sound, and widely applicable mathematical optimisation tools for increasingly challenging problems and applications. Some examples of significant recent developments and promising directions of research include new primal-dual, adaptive, variable metric, distributed and stochastic methods for convex and nonconvex problems, often involving intricate objective functions (e.g. those resulting from marginal densities and expectations). This special session will provide a venue for the publication and dissemination of cutting edge research on high-dimensional optimisation methodologies and their application to problems arising in signal processing and machine learning.
Nonlinear signal and image processing - a celebration in honour of Giovanni L. Sicuranza on his 75th birthday
V.J. Mathews, G. Ramponi, A. Carini
This special session will celebrate Giovanni L. Sicuranza on his seventy-fifth birthday and honour his achievements and his dedication to the EURASIP community. The special session invites research and tutorial papers in nonlinear signal and image processing, the area that benefitted the most from Giovanni's contributions. Contributions to this special session will include the theory and applications of polynomial and linear-in-the-parameters nonlinear filters, nonlinear partial differential equations, kernel methods for estimation and modelling, nonlinear time-frequency methods, and adaptive nonlinear systems. The list is not meant to be exhaustive, and other topics can be considered.
Processing of reverberant speech and audio signals
Toon Van Waterschoot
Reverberation is the acoustic phenomenon that occurs whenever speech and audio signals are produced in an environment with reflective boundaries, such as a room or a hall. A wide range of speech and audio processing problems has been found to be much more difficult to tackle when the signals are subject to reverberation, e.g., speech enhancement, automatic speech recognition and audio analysis, and spatial audio reproduction. Moreover, reverberation is a multi-faceted phenomenon that can be approached from very different perspectives and disciplines, including room acoustics, psychoacoustics, and signal processing. Consequently, the processing of reverberant speech and audio signals is a challenging research area, which has recently received much attention in the signal processing community and beyond. In this special session, we will bring together researchers from the speech and audio processing community who have recently contributed to the problem of processing reverberant speech and audio signals in a variety of applications. The topics covered in this session include the modeling, identification, and reproduction of room acoustics, as well as speech dereverberation.
Recent advances and applications in hyperspectral imaging
Stephen Marshall, Mauro Dalla Mura and Jinchang Ren
Whilst hyperspectral imaging (HSI) has been in existence for many years, it has mainly been the preserve of the military and remote sensing communities. Recently, however, advances in technology have reduced the size, weight and cost of HSI systems, leading to their widening take-up in a whole range of fields, such as pharmaceuticals, healthcare, and food and drink, as well as traditional remote sensing applications. The research thrust consists of three distinct stages: novel devices, mainly driven by the physics and optics community; end users, who are usually specialists in their own application domain but wish to deploy HSI as a diagnostic tool; and data processing. Despite its numerous advantages and unique capabilities, HSI presents several open challenges in methodology and data processing (e.g., classification, spectral unmixing, image analysis), algorithmic development (e.g., data compression, big-data problems, efficient implementation) and applications. This special session aims at gathering experts from the signal and image processing fields working on HSI in order to highlight its potential and address some of the challenges in the analysis of this unique type of data.
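As a minimal, self-contained illustration of the spectral unmixing task mentioned above, the following sketch (with made-up four-band endmember spectra and no noise) recovers the abundances of two endmembers from a mixed pixel by solving the 2x2 normal equations of a least-squares fit:

```python
# Illustrative sketch of linear spectral unmixing: a pixel spectrum is
# modelled as a mixture of known endmember spectra, and the abundances
# are recovered by least squares. The spectra below are invented.

def unmix_two(e1, e2, pixel):
    """Solve min ||a1*e1 + a2*e2 - pixel||^2 via the 2x2 normal equations."""
    g11 = sum(a * b for a, b in zip(e1, e1))
    g12 = sum(a * b for a, b in zip(e1, e2))
    g22 = sum(a * b for a, b in zip(e2, e2))
    b1 = sum(a * b for a, b in zip(e1, pixel))
    b2 = sum(a * b for a, b in zip(e2, pixel))
    det = g11 * g22 - g12 * g12
    a1 = (g22 * b1 - g12 * b2) / det
    a2 = (g11 * b2 - g12 * b1) / det
    return a1, a2

vegetation = [0.05, 0.08, 0.45, 0.50]   # toy 4-band endmember spectra
soil       = [0.20, 0.25, 0.30, 0.35]

# Synthesize a pixel that is 70% vegetation, 30% soil, then unmix it.
pixel = [0.7 * v + 0.3 * s for v, s in zip(vegetation, soil)]
a1, a2 = unmix_two(vegetation, soil, pixel)
print(round(a1, 3), round(a2, 3))   # abundances close to 0.7 and 0.3
```

Real unmixing pipelines add non-negativity and sum-to-one constraints on the abundances, and must first estimate the endmembers themselves; this sketch only shows the core least-squares step.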
The Special Issue "Multimodal Data Fusion in Multi Dimensional (MD) Signal Processing” of the Springer journal Multidimensional Systems and Signal Processing is associated with this session.
Recent advances in biomedical signal and image processing
Denis Kouame, Adrian Basarab
Biomedical imaging has a high impact on today's society and is of particular interest to the scientific community. Biomedical signals and images present particular characteristics and specificities; as a consequence, processing techniques must be adapted and dedicated to such signals. In recent years, the complexity of medical imaging systems has grown, leading to increasingly important needs for sophisticated signal and image processing techniques. This special session, which aims to gather researchers from the various communities working on biomedical imaging, has two main objectives. The first is to provide signal processing scientists, engineers and even clinicians with an up-to-date overview of the new advances and challenges in signal and image processing developments in the area of biomedical imaging. The second is to allow the signal processing and medical imaging communities to meet and to share application opportunities and new methodological approaches.
Recent advances in multifractal analysis and applications
Patrice Abry, Herwig Wendt
Multifractal analysis provides a theoretical framework for characterizing the fluctuations of point-wise regularity and the local functional embedding properties of signals, which can be deeply related to the concepts of non-Gaussian (higher-order) statistics and dependence. It has nowadays matured into a powerful signal and image processing tool, benefiting from developments both in statistical methodology and functional analysis, and is frequently used in applications of very different natures, ranging from geophysics, finance and internet traffic to image texture and art investigation, to name but a few. This special session collects recent contributions that address both statistical and functional analysis aspects of multifractal analysis. A specific emphasis is given to the development of novel statistical signal and image processing tools and models for multifractal analysis and their use in applications.
Robust EEG signal processing towards practical Brain-Computer Interfaces
Fabien Lotte, Wojciech Samek and Cuntai Guan
Brain-Computer Interfaces (BCI) are systems that enable their users to interact with computers by means of brain activity only, typically measured by electroencephalography (EEG). BCI have proven very promising, e.g., for providing communication abilities to motor-impaired users. However, BCI are scarcely used outside laboratories, mainly due to their lack of robustness. Indeed, current BCI often incorrectly recognize the user's mental states, and several people cannot use BCI at all. Moreover, BCI require a long and tedious calibration before use. Finally, the performance of current EEG-based BCI is often degraded by environmental noise, users' motions or long-term use, among other factors. Therefore, to bring BCI outside laboratories, we need robust EEG signal processing approaches that are accurate at all times and robust to noise, artifacts and non-stationarities. They should also require calibrations that are as short as possible. This special session aims at bringing together the latest research works in these directions, to further advance this promising research field.
Satellite communications in 5G networks
Bhavani Shankar M. R., Symeon Chatzinotas, Björn Ottersten
The user-centric 5G paradigm envisions converged service delivery and ubiquitous access through multiple networks. Satellites, with their wide coverage, can enable “anytime/anywhere access” at affordable costs, thereby staking a claim in future 5G networks. In particular, the satellite component, through its inherent capacity to support the anticipated multimedia traffic growth, ubiquitous coverage, machine-to-machine communications and critical telecommunication set-ups, can augment the 5G service capability. Towards realizing this, several research challenges need to be addressed. These include the convergence of broadcast and broadband services, satellite backhauling of terrestrial data, interference management between terrestrial and satellite components, energy-efficient waveforms for machine-to-machine communications, reduced signalling overhead and latency, air interface enhancements, optical feeder links, on-board processing, and spectrum management/monitoring. The planned session focuses on signal processing techniques needed to meet the aforementioned challenges. It aims to gather researchers, to provide a platform for interaction amongst them at EUSIPCO, and to introduce an emerging area to researchers in the wider community of signal processing for communications.
Self-sustainable networks: energy harvesting and wireless energy transfer
Marco Maso, Diomidis Michalopoulos
Reducing energy consumption is one of the most challenging goals for future wireless networks, and energy harvesting is envisioned as one of the possible strategies to achieve it. A very interesting research front investigates the feasibility of transferring power from a source to a destination via RF signals, in the context of legacy wireless energy transfer or simultaneous wireless information and power transfer (SWIPT). Many research problems arise: the modeling and analysis of large-scale energy harvesting networks, relay-based energy harvesting networks, the optimization of harvesting time, the optimization of SWIPT, the scheduling of users' data transmission and energy harvesting, and the design of smart signal processing strategies that can incorporate the features and constraints characterizing this technology. This special session specifically targets the latter aspect and solicits contributions on recent advancements in signal processing in a broad sense. These include, but are not limited to, novel algorithms, architectures and technological solutions that move towards the feasibility of self-sustainable networks.
Sequential Monte Carlo methods for tracking in dynamical systems
François Desbouvries, Yohan Petetin
Tracking variables of interest from noisy observations is a ubiquitous problem in fields as different as signal processing, finance, oceanography and video tracking. Closed-form solutions of the optimal estimators are generally unavailable. However, the development of computing resources has led to the parallel development of computational statistics, among which sequential Monte Carlo (SMC) methods have become a powerful tool for solving complex tracking problems. SMC methods are still an active field of research today and have to face new challenges. In particular, despite the availability of parallelized architectures and of ever cheaper data storage, increasing flows of high-dimensional data are well known to remain a major obstacle to the applicability of available solutions. On the other hand, tracking algorithms have recently been generalized to systems where the number of variables is unknown and fluctuates with time. The special session aims at gathering researchers in signal processing and computational statistics who address various recent aspects of SMC techniques.
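A minimal bootstrap particle filter illustrates the basic SMC recipe of propagate, weight and resample. The model below, a 1-D random walk observed in Gaussian noise with all parameters invented for the example, is the simplest setting in which the method applies:

```python
# A minimal bootstrap particle filter (SMC) sketch for a 1-D random-walk
# state observed in Gaussian noise. Model and parameters are illustrative.
import math
import random

random.seed(0)

N = 1000                 # number of particles
Q, R = 0.5, 1.0          # process / observation noise std deviations

def likelihood(y, x):
    """Gaussian observation density p(y | x), up to shared constants."""
    return math.exp(-0.5 * ((y - x) / R) ** 2)

def resample(particles, weights):
    """Multinomial resampling proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# Simulate a short trajectory and filter it.
x_true, estimates, truth = 0.0, [], []
particles = [random.gauss(0.0, 1.0) for _ in range(N)]
for t in range(30):
    x_true += random.gauss(0.0, Q)                       # state transition
    y = x_true + random.gauss(0.0, R)                    # noisy observation
    particles = [p + random.gauss(0.0, Q) for p in particles]  # propagate
    weights = [likelihood(y, p) for p in particles]      # weight
    particles = resample(particles, weights)             # resample
    estimates.append(sum(particles) / N)                 # posterior mean
    truth.append(x_true)

rmse = math.sqrt(sum((e - x) ** 2 for e, x in zip(estimates, truth)) / 30)
print(rmse)   # typically well below the raw observation noise R
```

The high-dimensional and variable-dimension challenges mentioned above arise precisely when this basic scheme is scaled up: with many state dimensions the weights degenerate, which motivates the more advanced SMC variants the session targets.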
Signal processing for healthcare: applications, challenges and opportunities
The large availability of new sensors and algorithms, and the surge of mobile computing devices, represent a great source of challenges and opportunities in many different societal aspects. Healthcare is one of the sectors that can benefit most from the adoption of innovative ways of retrieving, storing, and especially analysing signals. Optimising healthcare through the use of digital health solutions will bring innovations that make care delivery cheaper, more reliable, and more pervasive. This special session focuses on applications, algorithms, and datasets that signal-processing experts are using in the digital health domain at every level, from home to hospital scenarios. The goal of the session is to provide the audience with an overview of the current research themes of signal processing applied to the digital health domain. Moreover, the session aims at stimulating new ideas and discussions on the different applications of signal processing to healthcare data.
Signal processing for musical acoustics
Musical acoustics is a well-established field of study that primarily concerns the modeling of musical instruments, whether acoustic or electronic, as well as audio processing for improving or enhancing musical sounds. This discipline dates back centuries, ever since scientists began studying how musical instruments produce their sounds and how we perceive them.
The goal of this Special Session is to present an update on this field of research, by portraying a number of examples of applications of Signal Processing to various aspects of musical acoustics, including examples of vibro-acoustic modeling of acoustic musical instruments; applications of analysis of timbral descriptors and of sound fields for the objective and perceptual assessment of timbral quality in acoustic musical instruments; techniques for the characterization of the acoustic radiance of musical instruments; solutions for the modeling and the implementation of virtual analog systems for musical audio processing; as well as methodologies for the characterization and the applications of high-level descriptors of musical instruments.
Sparse arrays and coprime sampling
Moeness Amin, P. P. Vaidyanathan
Co-prime sampling and arrays have recently been shown to improve active and passive sensing in radar and underwater acoustics using both narrowband and wideband signal platforms. This is achieved by exploiting the offerings of co-prime processing for increased aperture and improved spatial resolution, enabling separation between individual targets and clutter. Co-prime-based approaches to sampling and arrays can combat ambiguity and provide unique answers for target coordinates under forced coarse sampling in time, frequency, and space; such constraints can be dictated by cost, fast acquisition time, or the unavailability of specific frequencies and sensor locations. Over the last few years, co-prime sampling has enabled advances in RF surveillance and target geolocation, wideband radar processing for urban warfare, and nonstationary array processing utilizing Doppler signatures and instantaneous-frequency source characteristics. In the context of DOA estimation, coprime and nested arrays provide a rich set of difference-coarray elements, which can be used for the identification of many more sources than sensors. This can serve as a platform for new algorithms for such problems, in combination with state-of-the-art compressive sensing algorithms. For example, sparse recovery techniques based on a frequency grid can be used, as can grid-less techniques such as low-rank matrix recovery. Some of the more recent applications include spectrum sensing in cognitive radio. The five papers in the session are all authored by US researchers, an additional benefit for increasing EUSIPCO attendance from US institutions. The papers cover the applications of co-prime arrays to direction finding using beamforming and high-resolution methods, as well as contributions on the robustness of co-prime sampling and the effects of perturbation and mutual coupling on performance.
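The difference-coarray idea behind these gains can be checked in a few lines. The sketch below (for the illustrative coprime pair M = 3, N = 5) builds the standard two-subarray geometry, one subarray of N sensors at spacing M and one of 2M sensors at spacing N, and measures how far the contiguous run of lags extends:

```python
# Sketch of the difference coarray of a coprime pair (M, N): the two
# interleaved subarrays produce far more distinct lags than sensors.
M, N = 3, 5   # coprime pair; 10 physical sensors in total

positions = sorted({m * M for m in range(N)} | {n * N for n in range(2 * M)})
coarray = {p - q for p in positions for q in positions}

# Length of the contiguous run of non-negative lags starting at 0.
max_contig = 0
while max_contig + 1 in coarray:
    max_contig += 1

print(positions)
print(max_contig)   # contiguous lags reach MN + M - 1 = 17 here
```

So 10 physical sensors yield every lag from -17 to 17, i.e. 35 contiguous virtual elements, which is what allows more sources than sensors to be identified in DOA estimation.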
Sparsity, sampling and compressed sensing
Steve McLaughlin, Mike Davies
Compressed sensing, and the idea that we need no longer sample at the Nyquist rate, have led to an enormous amount of research since the first publications appeared in 2006. This session brings together a group of European researchers who have been heavily involved in developing this topic, to discuss recent results and novel applications that seek to exploit sparsity in signals by adopting suitable sampling methodologies.
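As a small, self-contained example of sparsity-exploiting recovery, the sketch below runs orthogonal matching pursuit, one of many possible sparse-recovery algorithms, on a synthetic problem (random sensing matrix, invented 2-sparse signal) to reconstruct a length-40 vector from only 25 measurements:

```python
# Orthogonal matching pursuit (OMP) sketch: recover a 2-sparse vector
# from 25 random projections of a length-40 signal. Entirely synthetic.
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 25, 40, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 17]] = [1.0, -2.0]                  # 2-sparse ground truth
y = A @ x_true                                 # noiseless measurements

support, residual = [], y.copy()
for _ in range(k):
    # Greedy step: pick the column most correlated with the residual.
    idx = int(np.argmax(np.abs(A.T @ residual)))
    support.append(idx)
    # Least-squares fit on the current support, then update the residual.
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(sorted(support))              # expected to find {5, 17}
print(np.round(x_hat[[5, 17]], 6))
```

With noiseless measurements and a well-conditioned random matrix, the greedy support identification almost surely succeeds and the least-squares step then recovers the coefficients exactly; noise and coherence are what the more sophisticated methods in this session address.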
COST IC1106 Special Session: Using biometrics for forensic investigations
Patrizio Campisi, Paulo Lobato Correia
This special session, promoted by COST Action IC1106 “Integrating Biometrics and Forensics for the Digital Age”, addresses the usage of biometric recognition techniques and their application in the context of forensic investigations.
Visual attention modelling
Matei Mancas, Nathalie Guyader, Olivier Le Meur
Over the last 30 years, computational saliency models, which predict human attention and, by extension, human gaze, have developed tremendously.
However, at least two issues need to be taken into account to improve existing models. The first concerns the metrics used to evaluate saliency models: depending on the metric, the ranking of existing models can change dramatically. This variability indicates a lack of robustness of current metrics, making it genuinely difficult to select the “best” model in all situations. The second issue is that most saliency models mainly involve bottom-up visual features, whereas the improvement of saliency models is closely tied to our ability to embed high-level and multi-modal information. The goal of this special session is to illustrate this move from generic to application-driven models:
• Application-driven evaluation metrics
• Application-adapted models or mixtures of existing models
• Better use of motion, multi-modal and 3D information
• Integration of dynamic gaze features (fixation/saccades behavior)