To date, most modular robotic systems lack flexibility when scaling the number of modules, due to their hard building blocks and rigid connection mechanisms. In order to improve adaptation to environmental changes, softness at the module level might be beneficial. However, coping with softness requires fundamentally rethinking the way modules are built. A major challenge is to develop a connection mechanism that does not limit the softness of the modules, does not require precise alignment, and allows for easy detachment. In this paper, we propose a soft active connection mechanism based on electroadhesion, which uses electrostatic forces to connect modules. The method is easy to implement and can be integrated into a wide range of soft module types. Based on our experimental results, we conclude that the mechanism is suitable as a connection principle for lightweight modules when effectiveness across a wide range of softness, tolerance to misalignment, and easy detachment are desired. The main contributions of this article are (i) a qualitative comparison of different connector principles for soft modular robots, (ii) the integration of electroadhesion, featuring a novel electrode pattern design, into soft modules, and (iii) the demonstration and characterization of the performance of functional soft module mockups including the connection mechanism.
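The order of magnitude of electroadhesive holding force can be sketched with the standard parallel-plate capacitor approximation. This is a generic illustration only: the electrode area, voltage, gap, and permittivity below are hypothetical placeholders, not values or a model taken from the paper.

```python
# Illustrative estimate of electroadhesive normal force via the
# parallel-plate approximation: F = eps0 * eps_r * A * V^2 / (2 * d^2).
# All parameter values below are hypothetical, not from the paper.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesion_force(area_m2, voltage_v, gap_m, eps_r=3.0):
    """Normal electrostatic force between an electrode pad and a substrate."""
    return EPS0 * eps_r * area_m2 * voltage_v**2 / (2 * gap_m**2)

# e.g. a 4 cm^2 electrode pad at 2 kV across a 50 um dielectric gap
f = electroadhesion_force(4e-4, 2000.0, 50e-6)
print(f"{f:.2f} N")  # prints "8.50 N"
```

The quadratic dependence on voltage and inverse-quadratic dependence on gap illustrate why the mechanism favors lightweight modules and conformal (soft) contact.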
For people with severe physical disabilities, low-resolution input devices, such as buttons, sip-and-puff switches, and brain-computer interfaces, provide an opportunity to interact with the world. However, it can be difficult to control assistive technology, such as wheelchairs, tele-presence robots, and robotic arms, with only a limited number of available commands and/or a lack of temporal precision in issuing them. These limitations can be overcome by employing shared control techniques, whereby the system assists the user in performing the desired task. In this study, we compare a simple discrete shared control policy with a more dynamic proportional shared control policy. We evaluate both approaches on a wheelchair operated by only two temporally-constrained discrete buttons. The experiments were performed in two different realistic indoor scenarios: an open-plan, spacious environment and a smaller, more cluttered office environment. A total of 10 healthy participants took part in this study.
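The contrast between the two policy families can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the paper's controller: the functions, the override rule, and the blending weight are all invented here.

```python
# Sketch (hypothetical, not the paper's controller) contrasting a discrete
# shared-control policy with a proportional one. The user issues sparse
# turn commands in [-1, 1]; the assistance term steers toward a goal.

def discrete_policy(user_turn, assist_turn):
    """Discrete: a user press, when present, fully overrides assistance."""
    return user_turn if user_turn != 0.0 else assist_turn

def proportional_policy(user_turn, assist_turn, alpha=0.6):
    """Proportional: continuously blend user and assistance commands."""
    return alpha * user_turn + (1 - alpha) * assist_turn

# With no button pressed, both policies follow the assistance term;
# with a left press (-1.0), discrete ignores assistance while
# proportional still blends the two.
print(discrete_policy(0.0, 0.3), proportional_policy(0.0, 0.3))
print(discrete_policy(-1.0, 0.3), proportional_policy(-1.0, 0.3))
```

The proportional variant degrades more gracefully when button presses are temporally imprecise, since a single press never fully cancels the assistance.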
In this paper, we introduce Vision Tape (VT), a novel class of flexible compound-eye-like linear vision sensor dedicated to motion extraction and proximity estimation. This novel sensor possesses intrinsic mechanical flexibility that provides a wide-range adaptive shape, allowing an adjustable field of view as well as integration with numerous substrates and curvatures. VT extracts the Optic Flow (OF) of the visual scene to calculate the motion vector, which allows proximity estimation based on the motion parallax principle. In order to validate the functionality of VT, we have designed and fabricated an exemplary prototype consisting of an array of eight photodiodes attached to a flexible PCB that acts as mechanical and electrical support. This prototype performs image acquisition and processing with an integrated microcontroller at a frequency of 1000 fps, even during bending of the sensor. With this, the effect of VT shape on motion perception and proximity estimation is studied and, in particular, the effect of pixel-to-pixel angle is discussed. The results of these experiments allow us to estimate an optimal configuration of the sensor for OF extraction. Subsequently, a method that enhances the quality of extracted OF for non-optimal configurations is proposed. The experimental results show that, by applying the proposed method to VT at a suboptimal curvature, the quality of the OF can be increased by up to 176% and the proximity estimation by 178%.
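The pipeline of 1-D optic-flow extraction followed by motion-parallax ranging can be sketched as follows. This is a generic gradient-based OF estimator on synthetic data, offered as an assumption-laden illustration; the paper's actual algorithm and parameters may differ (only the 1000 fps rate and 8-pixel array come from the abstract).

```python
import numpy as np

# Hypothetical sketch of 1-D optic-flow extraction and motion-parallax
# ranging for a linear photodiode array; the algorithm and numbers are
# illustrative, not taken from the paper.

def optic_flow_1d(frame_prev, frame_curr, dt, pixel_angle):
    """Least-squares gradient method: I_t + omega * I_theta = 0,
    so omega = -sum(I_theta * I_t) / sum(I_theta^2)."""
    i_theta = np.gradient(frame_curr) / pixel_angle  # spatial gradient, per rad
    i_t = (frame_curr - frame_prev) / dt             # temporal gradient, per s
    return -np.sum(i_theta * i_t) / np.sum(i_theta * i_theta)

def proximity(own_speed, omega):
    """Motion parallax: distance = translational speed / angular flow."""
    return own_speed / omega

# Synthetic check: a sinusoidal pattern drifting at a known angular rate.
angles = np.linspace(0.0, 0.7, 8)   # 8 pixels, ~0.1 rad pixel-to-pixel angle
true_omega, dt = 0.5, 1e-3          # rad/s; 1000 fps as in the prototype
f0 = np.sin(4.0 * angles)
f1 = np.sin(4.0 * (angles - true_omega * dt))
est = optic_flow_1d(f0, f1, dt, angles[1] - angles[0])
print(est, proximity(1.0, est))     # est close to 0.5 rad/s
```

The pixel-to-pixel angle enters directly through the spatial gradient, which is consistent with the abstract's observation that sensor curvature strongly affects OF quality.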
Technology is playing an increasing role in our society. It therefore becomes important to educate the general public, and young generations in particular, about the most common technologies. In this context, robots are excellent educational tools, for many reasons: (i) robots are fascinating and attract the attention of all population groups, (ii) because they move and react to their environment, robots are perceived as close to living beings, which makes people attracted and attached to them, (iii) robots are multidisciplinary systems and can illustrate technological principles in electronics, mechanics, computer and communication sciences, and (iv) robots have many application fields: medical, industrial, agricultural, safety, etc. While several robots exist on the market and are used for education, entertainment, or both, none fits the profile of an ideal educational tool: one that promotes creativity and learning, is entertaining, cheap, and powerful. We addressed this goal by developing the Thymio robot and distributing it during workshops over two years. This paper describes the design principles of the robot, the educational context, and the analysis made with 65 parents after two years of use. We conclude the paper by outlining the specifications of a new form of educational robot.
A current trend in robotics is to define robot tasks as a combination of superimposed motion patterns. For maximum versatility, such motion patterns should be easily and efficiently adaptable to situations beyond those for which the motion was originally designed. In this work, we show how a challenging minigolf-like task can be efficiently learned by the robot using a basic hitting motion model and a task-specific adaptation of the hitting parameters: hitting speed and hitting angle. We propose an approach to learn the hitting parameters for a minigolf field from a set of provided examples. This is a non-trivial problem, since the successful choice of hitting parameters generally represents a highly non-linear, multi-valued map from the situation representation to the hitting parameters. We show that by limiting the problem to learning one combination of hitting parameters for each input, a high-performance model of the hitting parameters can be learned using only a small set of training data. We compare two statistical methods, Gaussian Process Regression (GPR) and Gaussian Mixture Regression (GMR), in the context of inferring hitting parameters for the minigolf task. We validate our approach on the 7-degrees-of-freedom Barrett WAM robotic arm in both simulated and real environments.
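The GPR side of the comparison can be sketched with a minimal from-scratch regressor. The demonstrations, kernel parameters, and the distance-to-speed mapping below are invented for illustration; they are in the spirit of, but not taken from, the paper.

```python
import numpy as np

# Toy sketch of learning a map from the situation representation (here,
# a 1-D ball-to-hole distance) to one hitting parameter (hitting speed)
# with Gaussian Process Regression and an RBF kernel. Training data and
# hyperparameters are hypothetical.

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gpr_predict(x_train, y_train, x_query, noise=1e-4):
    """GP posterior mean: k_* @ (K + noise*I)^-1 @ y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_query, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Demonstrations: (distance in m) -> demonstrated hitting speed (m/s)
x = np.array([0.2, 0.5, 0.8, 1.1, 1.4])
y = np.array([0.6, 1.0, 1.3, 1.7, 2.0])
print(gpr_predict(x, y, np.array([0.65])))  # interpolates between 1.0 and 1.3
```

Restricting the training set to one hitting-parameter combination per input, as the abstract describes, is what makes such a single-valued regressor applicable despite the underlying map being multi-valued.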
In this paper, we present a quantitative, trajectory-based method for calibrating stochastic motion models of water-floating robots. Our calibration method is based on the Correlated Random Walk (CRW) model and consists of minimizing the Kolmogorov-Smirnov (KS) distance between the step-length and step-angle distributions of real and simulated trajectories generated by the robots. First, we validate this method by calibrating a physics-based motion model of a single 3-cm-sized robot floating at a water/air interface under fluidic agitation. Second, we extend the focus of our work to multi-robot systems by performing a sensitivity analysis of our stochastic motion model in the context of Self-Assembly (SA). In particular, we compare in simulation the effect of perturbing the calibrated parameters on the predicted distributions of self-assembled structures. More generally, we show that the SA of water-floating robots is very sensitive to even small variations of the underlying physical parameters, thus requiring real-time tracking of its dynamics.
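The core of the calibration loop, minimizing a two-sample KS distance over candidate model parameters, can be sketched as follows. The step-length distribution and candidate grid below are stand-ins, not the paper's CRW model or data.

```python
import numpy as np

# Sketch of trajectory-based calibration: choose the motion-model
# parameter whose simulated step-length distribution minimizes the
# two-sample Kolmogorov-Smirnov distance to the observed one. The
# Rayleigh step-length model and parameter grid are illustrative only.

def ks_distance(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    both = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), both, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), both, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
real_steps = rng.rayleigh(scale=2.0, size=2000)  # stand-in for tracked data

candidates = [1.0, 1.5, 2.0, 2.5, 3.0]          # candidate model parameters
scores = {s: ks_distance(real_steps, rng.rayleigh(scale=s, size=2000))
          for s in candidates}
best = min(scores, key=scores.get)
print(best)  # recovers the generating scale, 2.0
```

In the paper's setting the same criterion would be applied jointly to step-length and step-angle distributions of real versus simulated CRW trajectories.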
In the emerging field of soft robotics, there is an interest in developing new kinds of sensors whose characteristics do not affect the intrinsic compliance of soft robot components. Additionally, non-invasive shape and deflection sensors may enable improved solutions for controlling mechanical parts in these robots. Herein, we introduce a novel method for deflection sensing in which an LED and a photodiode are placed onto two substrates connected physically or virtually at a deflection point. The deflection angle between the two planes can be extracted from the LED light intensity detected at the photodiode, thanks to the bell-shaped angular intensity profile of the emitted light. The main advantage of this system is that, unlike strain gauges and similar sensing methods, the components are not in physical contact with the deflection region. The sensor is characterized over a deflection range of 105–180 degrees, showing a resolution of nearly 1 degree. The experimental data are compared to simulations modeled by ray tracing. The light intensity vs. deflection angle measurements in our setup display a maximum difference of 9% and an average difference of approximately 5% with respect to the model. Finally, a shape monitoring system has been developed using the proposed concept on a flexible PCB. The system is composed of 12 deflection sensors that operate at a frame rate of 33 Hz. This device could be applied to monitor the body shape of a soft robot.
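The sensing principle, inverting a bell-shaped angular emission profile to recover the deflection angle, can be sketched with a Gaussian lobe. The Gaussian shape and its width are assumptions for illustration; the paper characterizes the real profile against a ray-tracing model. Only the 105-180 degree range comes from the abstract.

```python
import math

# Sketch of contact-free deflection sensing: with a bell-shaped (here,
# Gaussian) angular emission profile, the photodiode reading maps back
# to the deflection angle by inverting the profile on its monotonic
# side. Lobe width and peak are hypothetical, not the paper's values.

SIGMA = 40.0  # assumed half-width of the emission lobe, degrees

def intensity(off_axis_deg, i_peak=1.0):
    """Relative intensity at a given angle off the LED axis."""
    return i_peak * math.exp(-0.5 * (off_axis_deg / SIGMA) ** 2)

def deflection_from_intensity(i_meas, i_peak=1.0):
    """Invert the profile: 180 deg = flat substrates, smaller = more bent."""
    off_axis = SIGMA * math.sqrt(-2.0 * math.log(i_meas / i_peak))
    return 180.0 - off_axis

# Round-trip check at a 140-degree deflection (within the 105-180 range):
i = intensity(180.0 - 140.0)
print(round(deflection_from_intensity(i), 1))  # prints 140.0
```

The inversion is only single-valued on one side of the lobe, which is consistent with the sensor being characterized over a bounded 105-180 degree range.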
We present a communication-based navigation algorithm for robotic swarms. It lets robots guide each other’s navigation by exchanging messages containing navigation information through the wireless network formed by the swarm. We study the use of this algorithm in two different scenarios. In the first, the swarm guides a single robot to a target, while in the second, all robots of the swarm navigate back and forth between two targets. In both cases, the algorithm provides efficient navigation while being robust to failures of robots in the swarm. Moreover, we show that in the latter case, the system lets the swarm self-organize into a robust dynamic structure. This self-organization further improves navigation efficiency and is able to find shortest paths in cluttered environments. We test our system both in simulation and on real robots.
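One common realization of such message-based guidance is a hop-count gradient: each robot learns its hop distance to the target from neighbors' broadcasts, and a navigating robot moves toward the neighbor advertising the smallest distance. This sketch is a hypothetical illustration of that general idea (the wireless network is modeled as an adjacency dict), not the paper's specific algorithm.

```python
from collections import deque

# Hypothetical hop-count-gradient sketch of communication-based swarm
# navigation. Nodes are robots; edges are wireless links.

def hop_distances(network, target):
    """Flood hop counts outward from the target; this is the fixed point
    that repeated local message exchanges converge to."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for nb in network[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def route(network, start, target):
    """Greedily follow the neighbor advertising the smallest distance."""
    dist = hop_distances(network, target)
    path, node = [start], start
    while node != target:
        node = min(network[node], key=dist.get)
        path.append(node)
    return path

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
print(route(net, "A", "E"))  # prints ['A', 'B', 'D', 'E']
```

Because the gradient is rebuilt from whatever links currently exist, the scheme degrades gracefully when individual robots fail, matching the robustness property claimed in the abstract.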
In this article, the RObject concept is first introduced, followed by a survey of applicable energy scavenging technologies. Energy is a key issue for the large-scale deployment of robotics in daily life, as recharging batteries places a considerable burden on the end-user and wastes energy, which has an overall negative impact on the limited resources of our planet. We show how the energy obtained from light, water flow, and human work could be a promising source of power for low-duty devices. To assess the feasibility of powering future RObjects with these technologies, tests were conducted on commonly available robotic vacuum cleaners. These tests established an upper bound on the power requirements of RObjects. Finally, based on these results, the feasibility of powering RObjects using scavenged energy is discussed.
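The feasibility question reduces to a duty-cycle energy budget: scavenged power must cover the time-averaged consumption. This back-of-the-envelope sketch uses invented placeholder numbers, not the article's measurements.

```python
# Back-of-the-envelope feasibility check in the spirit of the article:
# compare scavenged power against a duty-cycled load. All numbers are
# hypothetical placeholders, not the article's measurements.

def max_duty_cycle(p_harvest_mw, p_active_mw, p_sleep_mw):
    """Largest active-time fraction d with a balanced energy budget:
    p_harvest = d * p_active + (1 - d) * p_sleep."""
    return (p_harvest_mw - p_sleep_mw) / (p_active_mw - p_sleep_mw)

# e.g. 5 mW scavenged from indoor light, 200 mW active, 0.1 mW sleep
d = max_duty_cycle(5.0, 200.0, 0.1)
print(f"{d:.3%}")  # a few percent of active time
```

Such a calculation shows why scavenged sources suit low-duty RObjects: even a modest harvest sustains operation as long as the device is active only a small fraction of the time.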