At IAS, we have access to a range of excellent robots:
The robot Darias (DARmstadt IAS) is our main platform for research into bimanual and dexterous manipulation. The robot consists of a torso with two Kuka lightweight robot arms, each of which has a five-fingered DLR hand as an end effector. For observing its environment, the robot is equipped with a Kinect and connected to our OptiTrack system. The OptiTrack allows for marker-based tracking of objects and humans at a rate of 90 Hz.
Each arm has seven degrees of freedom in an anthropomorphic configuration, i.e., three shoulder joints, an elbow joint, and three wrist joints. Communication with the robot runs at 1 kHz and allows for torque control of the robot's joints. The joints are equipped with torque sensors as well as joint encoders. The robot arms are actively compliant, which allows them to be easily used for kinaesthetic teaching. The active compliance also helps the robot to safely interact with its environment and with humans.
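To illustrate the kind of active compliance described above, here is a minimal sketch of a joint-level impedance law of the form tau = K (q_des - q) - D q_dot + g(q). The gains and the gravity-compensation term are illustrative assumptions, not the robot's actual controller.

```python
def impedance_torques(q, q_des, qdot, stiffness, damping, gravity):
    """Per-joint impedance law: tau = K (q_des - q) - D qdot + g(q)."""
    return [
        k * (qd - qi) - d * qdi + gi
        for qi, qd, qdi, k, d, gi in zip(q, q_des, qdot, stiffness, damping, gravity)
    ]

# Example: a 7-DoF arm gently pulled towards a target posture.
q     = [0.0, 0.5, 0.0, 1.2, 0.0, 0.8, 0.0]   # current joint angles (rad)
q_des = [0.0, 0.6, 0.0, 1.0, 0.0, 0.8, 0.0]   # desired joint angles (rad)
qdot  = [0.0] * 7                             # joint velocities (rad/s)
K     = [50.0] * 7                            # stiffness (Nm/rad), assumed
D     = [5.0] * 7                             # damping (Nms/rad), assumed
g     = [0.0] * 7                             # gravity compensation (Nm)

tau = impedance_torques(q, q_des, qdot, K, D, g)
```

Lowering K makes the arm softer and safer around humans; raising it makes tracking stiffer, which is the trade-off impedance control exposes.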
The five-fingered hands of the robot also have an anthropomorphic design. Each finger has three active degrees of freedom, including proximal and distal joints for flexing and extending the finger, as well as a third joint in the base that allows the robot to spread its fingers apart. Similar to the arm joints, the joints of the robot's fingers provide torque information as well as joint angles. The fingers are controlled using joint impedance control, which makes them actively compliant. This compliance of the fingers, as well as of the arms, allows the robot to better handle uncertainty in its surroundings.
Some of our work with Darias:
Interested in this robot system? Please contact Rudolf Lioutikov!
We also have a floor-mounted bimanual manipulation platform based on two Kuka LBR R820 manipulators. The manipulators are similar to the LWR 4+ arms used by Darias and are impedance controlled at a rate of 1 kHz. Both arms are equipped with a SAKE gripper and an Intel RealSense D435 RGB-D camera. The robots are mounted together with our Mitsubishi PA-10 on a common frame that allows for flexible arrangements. We use six OptiTrack Flex 13 cameras for marker-based tracking.
Interested in this robot system? Please contact Oleg Arenz!
An exclusive high-speed seven-degrees-of-freedom version of the famous Barrett WAM robot has recently arrived at our lab in Darmstadt. This cable-driven robot is capable of producing extremely high accelerations and is uniquely suited for studying highly dynamic movements that lie beyond the capabilities of standard industrial robots. Our low-level torque-control interface, tightly integrated with a simulation environment as well as with an OptiTrack object-tracking system, allows for fast prototyping and rapid experimentation with the robot. A Robcom interface makes it easy to use a familiar language and environment such as Python, Matlab, or ROS for quickly testing new algorithmic ideas.
Several ongoing projects—including badminton, beerpong, and juggling—provide a great opportunity for motivated students to learn more about real-time control of nonlinear dynamical systems, as well as apply their knowledge of robot learning and machine learning in challenging control problems.
We have a full humanoid iCub robot (53 DOF), equipped with actuated cameras for stereo vision, an inertial sensor, whole-body skin (arms, legs, torso, and foot soles), tactile elements on the fingertips, six-axis force/torque sensors (arms and legs), and variable-impedance actuation in the legs (a design inherited from Coman's legs). Our version is state-of-the-art and the best configuration for whole-body motions with contacts, such as walking or getting up from a chair. It is also the best configuration for physical interaction with humans and the environment.
The setup is located in TU Darmstadt's lab, where iCub is used for the CoDyCo project.
Some of our work with iCub:
Interested in this robot system? Please contact Elmar Rueckert!
We have set up a highly advanced robot table tennis setup consisting of a high-speed, high-voltage, specially made version of the Barrett WAM robot together with eight high-speed Prosilica cameras. The WAM is torque controlled at 500 Hz via CAN bus and, owing to its special construction, can reach high accelerations almost instantaneously. The Prosilica cameras operate at 200 Hz and are used with our vision system described in Lampert, C.H.; Peters, J. (2012). Real-Time Detection of Colored Objects in Multiple Camera Streams with Off-the-Shelf Hardware Components, Journal of Real-Time Image Processing, 7, 1, pp. 31-41.
The whole setup is located at our Tuebingen lab, the Robot Learning Lab at the Department for Empirical Inference of the Max Planck Institute for Intelligent Systems. Here, we have both students and post-docs, and many of our current members in Darmstadt have spent significant time in Tuebingen.
We have used this setup for a series of motor skill learning tasks including Ball-in-a-Cup, Ball-Paddling and basic Robot Table Tennis. Please read the publications below for our work with this system:
Interested in this robot system? Please contact Jan Peters!
The BioRob arm is a compliant robotic arm which, depending on the version, has five or six degrees of freedom. Its tendon-driven design kinematically decouples the joint and motor sides and allows the heavy servo motors to be placed close to the base, the ``torso'' of the robot. The result is an extremely lightweight design, especially at the final links of the robot, that offers significant advantages for dynamic and high-speed movements. Additionally, the springs connecting the tendons provide compliance, a necessary property for striking movements such as hammering, and allow the storage and release of energy to reach even higher accelerations than the motors alone can provide. Overall, BioRob's lightweight design offers a great platform for high-speed movements while minimizing the risk of damaging its servo motors and increasing safety even in close human-robot interaction.
But these advantages of the design come at a cost: controlling the robot is a complex problem that requires sophisticated control policies. At IAS, we focus on improving the control performance of the robot in motor skill tasks. We develop novel model-based control approaches that take into account the elasticity and spring characteristics of the robot. Since models created from CAD data alone lead to inferior performance, we use model learning approaches to improve them. Additionally, we use imitation learning to incorporate expert knowledge into our control policies, and we subsequently improve the policies with reinforcement learning techniques. We have evaluated our control approaches on hitting static and moving balls, and we have also developed a two-robot setup, in which the robots compete in the game of tetherball, for further experimentation.
The Kilobots are an open-source swarm robotics platform developed by the Self-Organizing Systems Research Group at Harvard University. The design of the Kilobots is kept simple: a circular PCB with a diameter of roughly 3 cm forms the body, which is supported by three rigid legs of 2 cm length. On top of the PCB sits the battery housing. The Kilobots move based on the slip-stick motion principle using two vibration motors that are glued to the battery housing. Activating the vibration motors leads to tiny jumps of the robot, which appear as a smooth movement at a velocity of around 1 cm/s. Besides moving forward (using both vibration motors), a Kilobot can turn on one of its rear legs by activating only the motor opposite to that leg. The Kilobots can sense ambient light through a light-sensitive diode mounted on the top side of the PCB. With an infrared emitter and receiver on the underside of the PCB, the Kilobots can communicate within a neighborhood of about 10 cm. The Kilobots are programmed using an overhead controller that allows only one-way communication from a PC to the Kilobots.
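The differential use of the two vibration motors described above can be sketched as a simple mapping from high-level actions to motor activations. The action names and duty values here are illustrative assumptions; on the real robot this is done through the kilolib C API.

```python
def motor_command(action):
    """Map a high-level Kilobot action to (left, right) vibration-motor
    activations for slip-stick locomotion."""
    if action == "forward":
        return (1, 1)   # both motors: smooth forward motion (~1 cm/s)
    if action == "turn_left":
        return (0, 1)   # right motor only: pivot on the left rear leg
    if action == "turn_right":
        return (1, 0)   # left motor only: pivot on the right rear leg
    return (0, 0)       # stop

# Example command sequence for a short open-loop trajectory.
plan = ["forward", "forward", "turn_left", "forward"]
commands = [motor_command(a) for a in plan]
```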
At IAS, we use the Kilobot platform to evaluate policy search methods in the context of swarm robotics. One direction of research is to learn a controller for a common input signal that steers the robot swarm; for the Kilobot swarm, this common input signal is a light gradient, and the Kilobots are programmed to follow the gradient towards the brightest point. Another research direction is to learn a control policy directly for the agents. In this setup, the agents receive a global reward signal but execute the policy independently.
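The idea of episodic policy search from a single global reward can be sketched as follows. This is a toy stand-in, not our actual method or setup: the "swarm rollout" is a one-parameter dummy (the swarm settles at the controller parameter theta), and the search is a simple (1+1)-style hill climber on the global reward.

```python
import random

def rollout_reward(theta, target=0.8):
    """Global reward for one episode: how close the controller parameter
    steers the (toy) swarm's mean position to the target light source."""
    mean_position = theta  # toy dynamics: the swarm settles at theta
    return -(mean_position - target) ** 2

def hill_climb(reward_fn, theta0=0.0, sigma=0.1, iters=200, seed=0):
    """(1+1)-style policy search: perturb the parameter and keep the
    perturbation only if the episode's global reward improves."""
    rng = random.Random(seed)
    theta, best = theta0, reward_fn(theta0)
    for _ in range(iters):
        candidate = theta + rng.gauss(0.0, sigma)
        r = reward_fn(candidate)
        if r > best:
            theta, best = candidate, r
    return theta, best

theta, best = hill_climb(rollout_reward)
```

Real policy search methods used at IAS are far more sample-efficient than this sketch, but the interface is the same: parameters in, one scalar episode reward out.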
Interested in this robot system? Please contact Gregor Gebhardt!
The Oncilla robot is an open-source, open-hardware quadruped robot developed in the AMARSi EU project. The robot features 12 degrees of freedom (three per leg), passive compliance, and rich sensor feedback. Further information on the hardware design and open-source blueprints can be found here.
At IAS, the Oncilla robot is used to study motor skill learning, especially the learning of different walking, trotting, or running gaits and the transitions between them. In contrast to many existing studies performed in simulation, our goal is to learn these motor skills directly on the robot hardware. To this end, the Oncilla robot is placed on a treadmill that automatically adapts its speed to the speed of the robot. For motor skill learning, we will use sample-efficient and noise-robust policy search methods developed at IAS as well as biologically inspired movement primitive representations. In this context, we investigate which changes to policy search methods are needed to learn from stochastic rewards and which computational principles support safe transitions between different motor skills.
Interested in this robot system? Please contact Elmar Rückert!
Our Mitsubishi PA-10 robot is a typical industrial robot arm with seven degrees of freedom. It has an internal PD controller with high gains, so it is position controlled. In the past, we have equipped the robot with different kinds of sensors and actuators, such as a force-torque sensor, an RGBD camera, and different kinds of tactile sensors. The PA-10 robot arm is mainly in use by the grasping and manipulation lab.
Some of our work with the PA-10:
Interested in this robot system? Please contact Filipe Veiga!
The Allegro hand has four fingers with four joints each, giving the hand 16 degrees of freedom in total. This complexity enables the hand to accomplish dexterous manipulation tasks. It comes with a PD controller and is position controlled. The hand ships with sticky, sensorless rubber fingertips that can grasp a variety of objects but do not provide any sensory feedback. Thus, for in-hand manipulation tasks, we equipped the hand with BioTac sensors. These are human-inspired tactile fingertip sensors and can be seen in the picture here.
Find more information about the Allegro hand here.
Find more information about the BioTac tactile sensors here.
Some of our work with the Allegro:
Moving: 25 degrees of freedom and a humanoid shape enable him to move and adapt to the world around him. His inertial unit enables him to maintain his balance and to know whether he is standing up or lying down.
Feeling: The numerous sensors in his head, hands and feet, as well as his sonars, enable him to perceive his environment and get his bearings.
Hearing and speaking: With his four directional microphones and loudspeakers, NAO interacts with humans in a completely natural manner, by listening and speaking.
Seeing: NAO is equipped with two cameras that film his environment in high resolution, helping him to recognise shapes and objects.
Connecting: To access the Internet autonomously, NAO can use a range of different connection modes (WiFi, Ethernet).
Thinking: We can't really talk about "Artificial Intelligence" with NAO, but the robots are already able to reproduce human behaviour.
Interested in this robot system? Please contact Boris Belousov!
The QUBE-Servo robot from Quanser is an implementation of the Furuta pendulum, the prototypical example of an underactuated mechanical system often used in courses on control and reinforcement learning. In our setup, it has a straightforward Python interface similar to an OpenAI Gym environment; so, if you have an algorithm working in the Gym, you can easily run it on the real system by changing just one line of code.
Qube is perfect for quick prototyping and experimentation since the platform is extremely robust. Students can access it over the network, so it is not even necessary to be physically close to the robot to run experiments on the hardware.
Interested in this robot system? Please contact Boris Belousov!
Robotino is a small mobile robot equipped with an elephant-trunk-like arm, called the bionic handling assistant. The robot's base allows holonomic movement thanks to an omnidirectional drive. The arm is pneumatically actuated: six air chambers bend the two 'links' of the arm, and two further chambers are used to rotate its gripper, which can be opened and closed. The robot can furthermore be equipped with, e.g., a webcam, distance sensors, or other USB devices.
Since modelling the kinematics and dynamics of the arm is hard, and behaviour changes when the robot lifts a load, this robot provides interesting learning opportunities.
Some of our work with the Robotino:
Interested in this robot system? Please contact Herke van Hoof!
The Wessling Robotic Hand, produced by Wessling Robotics, is composed of five robotic fingers. The hand is designed to allow the fingers to be interchangeable. Each finger has three actuated degrees of freedom, one of which is a coupled joint controlling both the distal and proximal joints. In total, the hand offers 15 actuated degrees of freedom and 20 joints.
The fingertips of our Wessling Hand are equipped with BioTac SP sensors. These are a more recent version of the standard BioTac sensors produced by Syntouch.
For our purposes, the hand is position controlled in joint space or controlled in task space with end-effectors placed at each fingertip.
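To illustrate what task-space control at a fingertip involves, here is a toy sketch using the classic Jacobian-transpose law tau = J(q)^T f on a planar two-link "finger". The link lengths, gains, and two-joint simplification are illustrative assumptions (the real fingers have three actuated degrees of freedom).

```python
import math

L1, L2 = 0.04, 0.03  # link lengths (m), assumed

def fingertip(q):
    """Forward kinematics: fingertip position of the planar two-link finger."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def jacobian(q):
    """2x2 fingertip Jacobian of the planar finger."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def task_space_torques(q, target, kp=5.0):
    """tau = J^T f, with a proportional task-space force f = kp * error."""
    x, y = fingertip(q)
    f = (kp * (target[0] - x), kp * (target[1] - y))
    J = jacobian(q)
    return [J[0][0] * f[0] + J[1][0] * f[1],
            J[0][1] * f[0] + J[1][1] * f[1]]

q = [0.3, 0.6]
tau = task_space_torques(q, fingertip(q))  # fingertip at target: zero torque
```

The same pattern scales to the full hand by stacking one such task per fingertip end-effector.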
For more information concerning the Wessling Robotic Hand, please visit the Wessling Robotics website. Additional information on the BioTac SP tactile sensors can be found on the Syntouch website.
Interested in this robot system? Please contact Filipe Veiga!