Looking for publications? All our latest publications are listed here, and you can also use our search functions to help find what you are looking for. Power users might also want to consider searching on the EPFL Infoscience site which provides advanced publication search capabilities.

Support Surface Estimation for Legged Robots

Authors: Homberger, Timon; Wellhausen, Lorenz; Fankhauser, Péter; Hutter, Marco

The high agility of legged systems allows them to operate in rugged outdoor environments. In these situations, knowledge about the terrain geometry is key for foothold planning to enable safe locomotion. However, on penetrable or highly compliant terrain (e.g. grass) the visibility of the supporting ground surface is obstructed, i.e. it cannot directly be perceived by depth sensors. We present a method to estimate the underlying terrain topography by fusing haptic information about foot contact closure locations with exteroceptive sensing. To obtain a dense support surface estimate from sparsely sampled footholds we apply Gaussian process regression. Exteroceptive information is integrated into the support surface estimation procedure by estimating the height of the penetrable surface layer from discrete penetration depth measurements at the footholds. The method is designed such that it provides a continuous support surface estimate even if there is only partial exteroceptive information available due to shadowing effects. Field experiments with the quadrupedal robot ANYmal show how the robot can smoothly and safely navigate in dense vegetation.
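The core fusion step the abstract describes — turning sparsely sampled foothold heights into a dense support surface via Gaussian process regression — can be sketched as below. This is a minimal illustration only: the function names, the RBF kernel choice, and the hyperparameter values are assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3, variance=0.05):
    """Squared-exponential kernel over 2-D foothold positions (a: N x 2, b: M x 2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def support_surface_gp(footholds_xy, contact_heights, query_xy, noise=1e-4):
    """Dense support-surface height estimate from sparse foothold contacts.

    footholds_xy:    N x 2 foothold positions (from haptic contact closure)
    contact_heights: N measured ground heights at those footholds
    query_xy:        M x 2 positions where a height estimate is wanted
    Returns posterior mean and variance at the query positions.
    """
    K = rbf_kernel(footholds_xy, footholds_xy) + noise * np.eye(len(footholds_xy))
    K_star = rbf_kernel(query_xy, footholds_xy)
    alpha = np.linalg.solve(K, contact_heights)
    mean = K_star @ alpha
    var = rbf_kernel(query_xy, query_xy).diagonal() - np.einsum(
        "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
    return mean, var
```

Because the GP posterior mean is defined everywhere, this kind of regressor naturally yields a continuous estimate even where exteroceptive data is missing, which matches the shadowing robustness the abstract mentions.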

Reference

  • Published in: 2019 International Conference on Robotics and Automation (ICRA)
  • DOI: 10.3929/ethz-b-000328256
  • Date: 2019
Posted on: October 21, 2019

Slasher: Stadium racer car for event camera end-to-end learning autonomous driving experiments


Authors: Hu, Yuhuang; Chen, Hong Ming; Delbruck, Tobi

Slasher is the first open 1/10-scale autonomous driving platform for exploring the use of neuromorphic event cameras for fast driving in unstructured indoor and outdoor environments. Slasher features a DAVIS event-based camera and a ROS computer for perception and control. The DAVIS camera provides the high dynamic range, sparse output, and sub-millisecond latency needed for the quick visual control required for fast driving. A race controller and a Bluetooth remote joystick are used to coordinate different processing pipelines, and a low-cost ultra-wide-band (UWB) positioning system records trajectories. The modular design of Slasher can easily integrate additional features and sensors. In this paper, we show its application in a reflexive Convolutional Neural Network (CNN) steering controller trained by end-to-end learning. We present preliminary experiments in closed-loop indoor and outdoor trail driving.

Reference

Posted on: October 21, 2019

SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning

Authors: Cieslewski, Titus; Derpanis, Konstantinos G.; Scaramuzza, Davide

A wide range of computer vision algorithms rely on identifying sparse interest points in images and establishing correspondences between them. However, only a subset of the initially identified interest points results in true correspondences (inliers). In this paper, we seek a detector that finds the minimum number of points that are likely to result in an application-dependent “sufficient” number of inliers k. To quantify this goal, we introduce the “k-succinctness” metric. Extracting a minimum number of interest points is attractive for many applications, because it can reduce computational load, memory, and data transmission. Alongside succinctness, we introduce an unsupervised training methodology for interest point detectors that is based on predicting the probability of a given pixel being an inlier. In comparison to previous learned detectors, our method requires the least amount of data pre-processing. Our detector and other state-of-the-art detectors are extensively evaluated with respect to succinctness on popular public datasets covering both indoor and outdoor scenes, and both wide and narrow baselines. In certain cases, our detector is able to obtain an equivalent number of inliers with as few as 60% of the points required by other detectors. The code and trained networks are provided at this https URL.
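Given the abstract's definition, k-succinctness for a ranked detector output with known inlier labels could be computed roughly as below; `k_succinctness` and its signature are hypothetical, intended only to make the metric concrete:

```python
def k_succinctness(inlier_flags, k):
    """Minimum number of top-ranked interest points needed to obtain k inliers.

    inlier_flags: booleans for points sorted by descending detector score,
                  True where the point produced a true correspondence.
    Returns the count of points consumed, or None if fewer than k inliers exist.
    """
    count = 0
    for n, is_inlier in enumerate(inlier_flags, start=1):
        count += is_inlier
        if count >= k:
            return n
    return None
```

A lower value means the detector ranks true correspondences higher, which is exactly what makes extracting fewer points viable.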

Reference

Posted on: October 21, 2019

Simplifying Exosuits: Kinematic Couplings in the Upper Extremity during Daily Living Tasks

Authors: Georgarakis, Anna-Maria; Wolf, Peter; Riener, Robert

In the past few years, several lightweight soft wearable robots, so-called exosuits, for upper extremity assistance have been developed. The design of exosuits is often based on a biomimetic design approach, mimicking human biomechanics. However, in the design process, the interactions of movement directions during daily living tasks have not yet been analyzed comprehensively. Therefore, the designs of exosuits might be overly complex, as movement directions that are coupled during daily life tasks were implemented independently; or lack functionality, as relevant movement directions were disregarded. In the meta-analysis presented in this paper, the maximum angles occurring during daily living tasks in the upper extremity of unimpaired individuals were examined. To identify the kinematic couplings between joint axes, the interactions between movement directions that act against gravity were analyzed. The strongest correlations were found between rotation in the plane of elevation and humeral axial rotation (R² = 0.82, p < 0.001), and between humeral elevation and humeral axial rotation (R² = 0.16, p = 0.001). Shoulder rotations and elbow flexion were not correlated. We conclude that humeral axial rotation is a relevant movement direction in the upper extremity, which, so far, has often been neglected in the design of exosuits. To simplify the design of exosuits, we propose a one-degree-of-freedom support trajectory in which rotation in the plane of elevation (at 70° and 80°) and humeral axial rotation (at 110° and 60°) are coupled to humeral elevation (continuously from 40° to 110°).

Reference

Posted on: October 21, 2019

Robots for Learning – R4L: Adaptive Learning

Authors: Johal, Wafa; Sandygulova, Anara; de Wit, Jan; de Haas, Mirjam; Scassellati, Brian

The Robots for Learning workshop series aims at advancing research topics related to the use of social robots in educational contexts. This year’s half-day workshop follows on from previous events at Human-Robot Interaction conferences focusing on efforts to design, develop, and test new robotic systems that help learners. This 5th edition of the workshop deals in particular with the potential use of robots for adaptive learning. Over the past few years, inclusive education has been a key policy in a number of countries, aiming to provide equal chances and common ground to all. In this workshop, we aim to discuss strategies to design robotic systems able to adapt to the learners’ abilities, to provide assistance, and to demonstrate long-term learning effects.

Reference

Posted on: October 21, 2019

Robot-Supported Multiplayer Rehabilitation: Feasibility Study of Haptically Linked Patient-Spouse Training

Authors: Baur, Kilian; Wolf, Peter; Klamroth-Marganska, Verena; Bierbauer, Walter; Scholz, Urte; Riener, Robert; Duarte, Jaime E.

Multiplayer environments are thought to increase and prolong active participation in robot-aided rehabilitation. We expect that environments linking patients with their spouses will particularly foster active participation. Thus, we developed two multiplayer games to link the game experience of two players: an Air Hockey game and a Haptic Kitchen game. In the competitive Air Hockey game, differences in skill levels between players were balanced by individualizing haptic guidance or damping forces. In the Haptic Kitchen game, a healthy player could support the patient’s movements using a virtual force field. The two players could control the haptic interaction since both the force field and the point of application were visualized. We tested the haptic performance balancing algorithm of the Air Hockey game and the spouse-controlled haptic support of the Kitchen game with patients post-stroke who completed both single-player (i.e., alone) and multiplayer (i.e., with spouse) training in eight therapy sessions lasting 45 min each. The mean total rating on the Intrinsic Motivation Inventory was 46.9 points (out of 63) for multiplayer modes and 42.7 points for single-player modes. The spouses applied the haptic support in the Haptic Kitchen game during 42% of the total game duration. We are currently testing more patient-spouse couples to better understand the effects of using these haptic approaches on the behavior and recovery of patients. We foresee that this approach can improve motivation during training and positively influence the at-home behavior of patients, an important goal of rehabilitation training efforts.

Reference

Posted on: October 21, 2019

Robot Identification and Localization with Pointing Gestures

Authors: Gromov, Boris; Gambardella, Luca M.; Giusti, Alessandro

We propose a novel approach to establish the relative pose of a mobile robot with respect to an operator that wants to interact with it; we focus on scenarios in which the robot is in the same environment as the operator, and is visible to them. The approach is based on comparing the trajectory of the robot, which is known in the robot’s odometry frame, to the motion of the arm of the operator, who, for a short time, keeps pointing at the robot they want to interact with. In multi-robot scenarios, the same approach can be used to simultaneously identify which robot the operator wants to interact with. The main advantage over alternatives is that our system only relies on the robot’s odometry, on a wearable inertial measurement unit (IMU), and, crucially, on the operator’s own perception. We experimentally show the feasibility of our approach using real-world robots.
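A heavily simplified version of the trajectory-comparison idea — recovering the relative pose by least-squares alignment of the robot's odometry trajectory with the trajectory inferred from the operator's pointing — can be sketched as a 2-D rigid (Kabsch) alignment. This assumes matched point sets rather than pointing rays, so it is an illustration of the principle, not the paper's actual formulation:

```python
import numpy as np

def align_2d(traj_odom, traj_pointed):
    """Least-squares 2-D rigid alignment (Kabsch) between the robot's odometry
    trajectory and a trajectory reconstructed from the operator's pointing.
    Both inputs are N x 2 arrays of matched positions.
    Returns R, t such that R @ p + t maps odometry frame -> operator frame."""
    mu_a = traj_odom.mean(0)
    mu_b = traj_pointed.mean(0)
    H = (traj_odom - mu_a).T @ (traj_pointed - mu_b)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_b - R @ mu_a
    return R, t
```

In the multi-robot case, running such an alignment against each robot's trajectory and keeping the best-fitting one would simultaneously identify which robot is being pointed at.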

Reference

Posted on: October 21, 2019

Reliable decoding of motor state transitions during imagined movement

Authors: Orset, B.; Lee, K.; Chavarriaga, R.; Millan, J. del R.

Current non-invasive brain-machine interfaces commonly rely on the decoding of sustained motor imagery activity. This approach enables a user to control brain-actuated devices by triggering predetermined motor actions. However, despite its broad range of applications, this paradigm has so far failed to allow natural and reliable control. As an alternative approach, we investigated the decoding of state transitions of an imagined movement, i.e., rest-to-movement (onset) and movement-to-rest (offset). We show that both transitions can be reliably decoded, with accuracies of 71.47% for the onset and 73.31% for the offset (N = 9 subjects). Importantly, these transitions exhibit different neural patterns and need to be decoded independently. Our results indicate that both decoders are able to capture the brain dynamics during imagined movements and that their combined use could provide benefits in terms of accuracy and time precision.

Reference

Posted on: October 21, 2019

Real-Time Dance Generation to Music for a Legged Robot

Authors: Bi, Thomas; Fankhauser, Péter; Bellicoso, Dario; Hutter, Marco

The development of robots that can dance has received considerable attention. However, they are often either limited to a pre-defined set of movements and music or demonstrate little variance when reacting to external stimuli, such as microphone or camera input. In this paper, we contribute a novel approach that allows a legged robot to listen to live music while dancing in synchronization with it in a diverse fashion. This is achieved by extracting the beat from an onboard microphone in real-time, and subsequently creating a dance choreography by picking from a user-generated dance motion library at every new beat. Dance motions include various stepping and base motions. The process of picking from the library is defined by a probabilistic model, namely a Markov chain, that depends on the previously picked dance motion and the current music tempo. Finally, delays are determined online by time-shifting a measured signal against a reference signal and minimizing the least-squares error with the time shift as parameter. Delays are then compensated for by a combined feedforward and feedback delay controller which shifts the robot's whole-body controller reference input in time. Results from experiments on a quadrupedal robot demonstrate the fast convergence and synchrony to the perceived music.
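The delay-estimation step described at the end — time-shifting a measured signal against a reference and minimizing the least-squares error — can be sketched as a search over discrete shifts. This assumes uniformly sampled signals and non-negative integer shifts; `estimate_delay` is an illustrative name, not the paper's code:

```python
import numpy as np

def estimate_delay(measured, reference, max_shift):
    """Return the delay (in samples) that minimizes the mean squared error
    between the time-shifted measured signal and the reference signal.
    Sketch: searches non-negative integer shifts up to max_shift."""
    best_shift, best_err = 0, np.inf
    n = len(reference)
    for s in range(max_shift + 1):
        err = np.mean((measured[s:n] - reference[:n - s]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

The recovered shift would then feed the feedforward part of the delay controller, advancing the whole-body reference input by that amount.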

Reference

Posted on: October 21, 2019

Proximity Human-Robot Interaction Using Pointing Gestures and a Wrist-mounted IMU

Authors: Gromov, Boris; Abbate, Gabriele; Gambardella, Luca M.; Giusti, Alessandro

We present a system for interaction between co-located humans and mobile robots, which uses pointing gestures sensed by a wrist-mounted IMU. The operator begins by pointing, for a short time, at a moving robot. The system thus simultaneously determines: that the operator wants to interact; the robot they want to interact with; and the relative pose between the two. Then, the system can reconstruct pointed locations in the robot’s own reference frame, and provide real-time feedback about them so that the user can adapt to misalignments. We discuss the challenges to be solved to implement such a system and propose practical solutions, including variants for fast flying robots and slow ground robots. We report on different experiments with real robots and untrained users, validating the individual components and the system as a whole.

Reference

Posted on: October 21, 2019