FIEEE; EURASIP Fellow, Professor, Department of Informatics, Aristotle University of Thessaloniki, Greece
Drone Vision and Deep Learning for Infrastructure Inspection
This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras, are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance, because it: a) enhances flight safety through drone localization/mapping, obstacle detection and emergency landing detection; b) enables high-quality visual data acquisition; and c) allows powerful drone/human interaction, e.g., through automatic event detection and gesture control. The drone should have: a) increased multiple-drone decisional autonomy and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight-regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). Therefore, it must be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping, b) drone and target localization, c) drone visual analysis for target/obstacle/crowd/point-of-interest detection, and d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. The primary application area is electric line inspection. Line detection and tracking and drone perching are examined, and human action recognition and co-working assistance are overviewed.
The lecture will offer an overview of all the above plus other related topics, stressing the relevant algorithmic aspects, such as: a) drone localization and world mapping, b) target detection, c) target tracking and 3D localization, and d) gesture control and co-working with humans. Some issues in embedded CNN and fast convolution computing will be overviewed as well.
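As a concrete illustration of the line detection step mentioned above (an illustrative textbook technique, not necessarily the speaker's method), a minimal Hough transform can vote for straight power-line candidates in an edge map. The grid size and the synthetic edge points below are purely hypothetical:

```python
import math

def hough_lines(points, n_theta=180, top_k=1):
    """Accumulate votes in (theta, rho) space; peaks correspond to straight lines."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal-form line equation: x*cos(theta) + y*sin(theta) = rho.
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    best = sorted(acc.items(), key=lambda kv: -kv[1])[:top_k]
    return [(math.pi * t / n_theta, rho) for (t, rho), _ in best]

# Synthetic "edge map": points along the horizontal line y = 5,
# as might be extracted from an aerial image of a power line.
pts = [(x, 5) for x in range(20)]
(theta, rho), = hough_lines(pts)
# theta is close to pi/2 (a horizontal line); rho is its distance from the origin.
```

On real imagery the points would come from an edge detector, and the accumulator would be binned more coarsely to tolerate noise.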
Verification, Trustworthiness, and Accountability of Human-Driven Autonomous Systems
Although autonomous systems science and control theory have almost 50 years of history, the community faces major challenges in ensuring the safety of fully autonomous consumer systems. These mostly concern the verification and high-fidelity operation of safety-critical systems, be it a self-driving car, a homecare robot or a surgical manipulator. The community still struggles to establish objective criteria for the trustworthiness of AI-driven, machine learning based control systems. On the one hand, we celebrate the rise of cognitive capabilities in robotic systems, leading to independent decision-making; on the other hand, decisions made in complex environments, based on multi-sensory data, will surely lead to some wrong conclusions and hazardous outcomes, jeopardizing the public trust in entire application domains. This ambiguity has led to the currently ruling safety principle of offering the possibility of a human-driven override, corresponding to Levels of Autonomy 3 and 4 in autonomous vehicles.
The aim of the development community is to establish processes and metrics to ensure the reliability of the takeover process, when the human driver or operator takes back partial or full control from the autonomous system. We have been building complex simulators and data collection systems to benchmark human decision-making against the computer. Situation Awareness (SA) has been identified as key, as it defines the level of cognitive understanding and capability of a human operator in a given environment. Efficiently assessing, maintaining and regaining SA are core elements of the relevant research projects, reviewed and compared in this talk. Based on the research at the Antal Bejczy Center for Intelligent Robotics at Óbuda University, we created an assessment method for critical handover performance, to quantitatively define the required level and components of SA with respect to the autonomous functionalities present. To improve system safety, driver assistance systems and automated driving functionalities should be collected and organized hierarchically, along with the two criteria of SA presented, as a standardized risk assessment protocol: 1) the level of SA, based on the state of the environment; 2) the components of SA, based on knowledge.
The outcome of our experiments may find its way into new verification standards through ongoing IEEE initiatives, such as P1872.1, P2817, P7000 and P7007; moreover, this systematic approach has already proven beneficial in other domains, such as medical robotics. (This presentation is joint work with Prof. Tamas Haidegger.)
Sustainable Autonomy: Challenges and Perspectives
Cutting-edge autonomous systems demonstrate outstanding results in many important tasks requiring intelligent data processing under well-known conditions. However, the performance of these systems may drastically deteriorate when the data are perturbed or the environment changes dynamically, due to either natural or man-made disturbances. The challenges are especially daunting in edge computing scenarios and on-board applications with limited resources, i.e., data/energy/computational power constraints, where decisions must be made rapidly and robustly. A neuromorphic perspective provides useful insights under such conditions. Human brains are efficient devices using 20 W of power (like a light bulb), many orders of magnitude less than the power consumption of today's supercomputers, which require megawatts to solve a specific deep learning task. Brains use spatio-temporal oscillations to implement pattern-based computing, going beyond the sequential symbol manipulation paradigm of traditional Turing machines. Neuromorphic spiking chips are gaining popularity in the field. Application examples include autonomous on-board signal processing and control, distributed sensor systems, autonomous robot navigation and control, and rapid response to emergencies.
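To make the spiking paradigm concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit implemented by most neuromorphic chips. The time constant, threshold and input current are illustrative assumptions, not parameters from the talk:

```python
def lif_spikes(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; return the spike times."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        # Membrane potential leaks toward rest and integrates the input current.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)  # emit a spike ...
            v = v_reset       # ... and reset the membrane potential
    return spikes

# Constant supra-threshold drive produces a regular spike train:
# information is carried by spike timing, not by symbol manipulation.
spike_times = lif_spikes([0.15] * 100)
```

Event-driven hardware exploits the fact that between spikes nothing needs to be computed or communicated, which is one source of the energy efficiency noted above.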
Morphogenetic Self-organization of Swarm Robots
Self-organization is one of the most important features observed in social, economic, ecological and biological systems. Distributed self-organizing systems are able to generate emergent global behaviors through local interactions between individuals, without centralized control. Such systems are supposed to be robust, self-repairable and highly adaptive. However, the design of self-organizing systems is very challenging, particularly when the emergent global behaviors are required to be predictable or controllable. This talk introduces a morphogenetic approach to self-organizing swarm robots, using the genetic and cellular mechanisms governing biological morphogenesis. We demonstrate that morphogenetic self-organizing algorithms are able to autonomously generate patterns and surround moving targets without centralized control. Finally, morphogen-based methods for the self-organization of simplistic robots that lack localization and orientation capabilities are presented.
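A toy illustration of the morphogen idea (an illustrative sketch, not the authors' algorithm): a target cell emits a diffusing morphogen, and robots with no global localization simply climb the local concentration gradient, converging on the target. All grid sizes and rates below are arbitrary choices:

```python
def diffuse(grid, source, rate=0.2, steps=200):
    """Relax a morphogen field on a 1-D ring: the source emits, morphogen diffuses."""
    n = len(grid)
    for _ in range(steps):
        grid[source] = 1.0  # the target keeps emitting morphogen
        grid = [(1 - rate) * grid[i] + rate * 0.5 * (grid[i - 1] + grid[(i + 1) % n])
                for i in range(n)]
    grid[source] = 1.0
    return grid

def climb(field, start, iters=50):
    """A robot with no localization just moves to the higher-morphogen neighbour."""
    n, pos = len(field), start
    for _ in range(iters):
        left, right = (pos - 1) % n, (pos + 1) % n
        pos = max((left, right, pos), key=lambda p: field[p])
    return pos

field = diffuse([0.0] * 30, source=7)
# Robots starting anywhere on the ring converge on the emitting target cell.
final = [climb(field, s) for s in (0, 15, 29)]
```

Purely local sensing suffices here because the diffused field encodes the direction to the target everywhere, which is exactly what makes morphogen-based schemes attractive for robots without localization or orientation.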
Improving Manipulation Capabilities of Autonomous Robots
Human-level manipulation continues to be beyond the capabilities of today’s robotic systems. Not only do current industrial robots require significant time to program a specific task, but they lack the flexibility to generalize to other tasks and be robust to changes in the environment. While collaborative robots help to reduce programming effort and improve the user interface, they still fall short on generalization and robustness. This talk will highlight recent advances in a number of key areas to improve the manipulation capabilities of autonomous robots, including methods to accurately model the dynamics of the robot and contact forces, sensors and signal processing algorithms to provide improved perception, optimization-based decision-making and control techniques, as well as new methods of interactivity to accelerate and enhance robot learning.
FIEEE; FSPIE, Professor, Department of Electrical and Computer Engineering, University of Calgary, Canada
Information Fusion and Decision Support for Autonomous Systems
In this talk we present our work on decision support analytics for autonomous systems. Decision support analytics processes the multiple streams of sensory information collected by an autonomous system, such as lidar, camera, RGB-D and acoustic data, to perform signal detection, target tracking and object recognition. As multiple sensors are involved, our system uses sensor registration, data association and fusion to combine sensory information. The next layer of the proposed decision support system organizes the processed sensory information at the feature and classification levels to perform situation assessment and threat evaluation. Based on this assessment, the decision support system recommends a decision. If the uncertainty is high, actions including resource allocation and planning are used to extract or reassess the sensory information and obtain a recommended decision with lower uncertainty. This talk will also present applications of the proposed decision support analytics in four industrial projects: 1) goal-driven net-enabled distributed sensing for maritime surveillance, 2) autonomous navigation and perception for humanoid service robots, 3) distance learning for oil and gas drilling, and 4) cognitive vehicles.
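As a small illustration of the fusion layer (a textbook inverse-variance, Kalman-style combination, not the projects' actual pipeline), independent sensor estimates can be merged so that the fused uncertainty is lower than that of any single sensor. The lidar/camera range readings below are hypothetical:

```python
def fuse(estimates):
    """Inverse-variance (Kalman-style) fusion of independent sensor estimates."""
    # Each estimate is (mean, variance); each weight is 1/variance,
    # so more certain sensors contribute more to the fused mean.
    weights = [1.0 / var for _, var in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

# Hypothetical range readings (metres): a precise lidar and a noisier
# camera-based estimator of the same target distance.
fused_mean, fused_var = fuse([(10.2, 0.04), (9.8, 0.16)])
```

The fused variance (0.032 m²) is below the lidar's 0.04 m², which is the quantitative sense in which fusion can lower the uncertainty of a recommended decision.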
Professor of Cognitive Telecommunications Systems, DITEN, University of Genova, Italy
Bayesian Emergent Self Awareness
Multisensor data fusion and perception, including signal processing, are important cognitive functionalities that can be included in artificial systems to increase their level of autonomy. However, the techniques they rely on have been developed incrementally over time, under the assumption that they would mainly support the decision tasks driving the actions of those systems. Cognitive functionalities like self-awareness have so far not been considered a primary part of the embodied knowledge of autonomous or semi-autonomous systems. One of the reasons for this was the lack of understanding of the principles that could allow an agent, even a human one, to organize successive sensorial experiences into a coherent framework of emergent knowledge by integrating signal processing, machine learning and data fusion. However, developments of the last decade in many fields have made it possible to provide integrated solutions that sketch how emergent self-awareness can be obtained by capturing the experiences of autonomous agents such as vehicles and intelligent radios. In this keynote, a Bayesian approach is presented that includes abnormality detection and incremental learning of generative predictive models as building blocks of emergent self-awareness in intelligent agents. The advantages of endowing intelligent agents with emergent self-awareness will also be discussed with respect to different aspects, e.g., the explainability of an agent's actions and the capability of imitation learning.
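A minimal sketch of the abnormality detection and incremental learning ingredients, using a toy single-Gaussian model of "normal" observations, far simpler than the generative predictive models discussed in the talk; the threshold and sample values are illustrative assumptions:

```python
import math

class OnlineAnomalyDetector:
    """Incrementally learn a Gaussian model of normal observations
    (Welford's online algorithm) and flag samples whose z-score
    exceeds a threshold as abnormal."""

    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        # One-pass update of the running mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_abnormal(self, x):
        if self.n < 2:
            return False  # not enough experience to judge yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) > self.threshold * std

det = OnlineAnomalyDetector()
for v in [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]:  # "normal" experience
    det.update(v)
normal_flag = det.is_abnormal(1.03)   # consistent with the learned model
abnormal_flag = det.is_abnormal(5.0)  # far outside it
```

The same loop of learning a predictive model from experience and flagging deviations from its predictions is, in essence, the abnormality detection brick, here with the generative model reduced to a single Gaussian for clarity.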
Enabling Trust in Autonomous Human–Machine Teaming
The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent accidents involving the Boeing 737 Max ring the alarm again about the potential risks of human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling "trust" within the human-machine team. It is even more challenging when we are facing insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This calls for the imperative need for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based, interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), will also be introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will illustrate the utility and effectiveness of these trust enablers.
FIEEE, The Kranzberg Chair Professor in Signal Processing, School of Electrical Engineering, Tel Aviv University, Israel. IEEE Global Initiative for Ethical Considerations in AI/AS
On Ethics of Autonomous and Intelligent Systems (AI/S)
In the fourth industrial revolution, under which autonomous, intelligent systems are designed, human brain capacities are delegated to machines. This brings a great opportunity to reduce the need for human intervention in daily life, together with considerable ethical challenges. The role of the designers of such systems, i.e., engineers, is most important in balancing the opportunities and the challenges.
Being a global organization with more than 450,000 members, the IEEE took responsibility and has set up the Global Initiative on Ethics of Autonomous and Intelligent Systems, whose mission is: “To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”
In this talk we will present the various activities of the Global Initiative to promote ethics in AI/S and, in particular, we will introduce Ethically Aligned Design, First Edition, a comprehensive report that combines a conceptual framework addressing universal human values, data agency, and technical dependability with a set of principles to guide A/IS creators and users through a comprehensive set of recommendations.