Our Robots

At IAS, we have access to a wide range of robot platforms:

  1. a bimanual manipulation platform,
  2. an iCub with skin and Coman legs,
  3. a robot table tennis setup consisting of a Barrett WAM and high-speed cameras,
  4. three BioRob compliant arms,
  5. a platform for swarm robotics,
  6. Oncilla, a compliant quadruped,
  7. a Mitsubishi PA-10,
  8. a sensorized Allegro hand,
  9. an Aldebaran Nao humanoid robot,
  10. a Furuta pendulum,
  11. a Robotino XT,
  12. a Wessling Robotics Hand,
  13. a high-speed Barrett WAM,
  14. sensors and other devices.

Darias: our Bimanual Manipulation Platform

The robot Darias (DARmstadt IAS) is our main platform for research on bimanual and dexterous manipulation. The robot consists of a torso with two KUKA lightweight robot arms, each of which has a five-fingered DLR hand as an end effector. For observing its environment, the robot is equipped with a Kinect and connected to our OptiTrack system, which allows for marker-based tracking of objects and humans at a rate of 90 Hz.

Each arm has seven degrees of freedom in an anthropomorphic configuration, i.e., three shoulder joints, an elbow joint, and three wrist joints. Communication with the robot runs at 1 kHz and allows for torque control of the robot's joints, which are equipped with torque sensors as well as joint encoders. The robot arms are actively compliant, which allows them to be easily used for kinaesthetic teaching. The active compliance also helps the robot to interact safely with its environment and with humans.

The five-fingered hands of the robot also have an anthropomorphic design. Each finger has three active degrees of freedom, including proximal and distal joints for flexing and extending the fingers, as well as a third joint in the base that allows the robot to spread its fingers apart. Similar to the arms, the joints of the robot's fingers provide torque information as well as the joint angle. The fingers are controlled using joint impedance control, which makes them actively compliant. This compliance of the fingers, as well as of the arms, allows the robot to better handle uncertainty in its surroundings.
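The joint impedance control mentioned above can be sketched as a virtual spring-damper around each joint. The gains and dimensions below are illustrative assumptions, not the actual Darias controller parameters:

```python
import numpy as np

def joint_impedance_torque(q, dq, q_des, K, D, tau_gravity):
    """Joint impedance control law (sketch): the commanded torque pulls each
    joint toward its desired angle through a virtual spring-damper, so the
    joint yields under external forces instead of rigidly resisting them."""
    return K @ (q_des - q) - D @ dq + tau_gravity

# Toy example for a single finger with 3 joints (values are illustrative,
# not Darias parameters).
q = np.array([0.1, 0.2, 0.0])        # measured joint angles [rad]
dq = np.array([0.0, 0.0, 0.0])       # measured joint velocities [rad/s]
q_des = np.array([0.3, 0.4, 0.1])    # desired joint angles [rad]
K = np.diag([2.0, 2.0, 1.0])         # joint stiffness [Nm/rad]
D = np.diag([0.1, 0.1, 0.05])        # joint damping [Nm s/rad]
tau_g = np.zeros(3)                  # gravity compensation term

tau = joint_impedance_torque(q, dq, q_des, K, D, tau_g)
```

Lowering the stiffness matrix K makes the finger softer, which is the trade-off exploited when grasping under uncertainty.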

Some of our work with Darias:

  1. Maeda, G.; Ewerton, M.; Osa, T.; Busch, B.; Peters, J. (2017). Active Incremental Learning of Robot Movement Primitives, Proceedings of the Conference on Robot Learning (CoRL).
  2. Maeda, G.; Neumann, G.; Ewerton, M.; Lioutikov, R.; Kroemer, O.; Peters, J. (2017). Probabilistic Movement Primitives for Coordination of Multiple Human-Robot Collaborative Tasks, Autonomous Robots (AURO), 41, 3, pp.593-612.
  3. Maeda, G.; Ewerton, M.; Neumann, G.; Lioutikov, R.; Peters, J. (2017). Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration, International Journal of Robotics Research (IJRR), 36, 13-14, pp.1579-1594.
  4. Rueckert, E.; Kappel, D.; Tanneberg, D.; Pecevski, D.; Peters, J. (2016). Recurrent Spiking Networks Solve Planning Tasks, Nature PG: Scientific Reports, 6, 21142, Nature Publishing Group.
  5. Maeda, G.; Maloo, A.; Ewerton, M.; Lioutikov, R.; Peters, J. (2016). Proactive Human-Robot Collaboration with Interaction Primitives, International Workshop on Human-Friendly Robotics (HFR), Genoa, Italy.
  6. Maeda, G.; Maloo, A.; Ewerton, M.; Lioutikov, R.; Peters, J. (2016). Anticipative Interaction Primitives for Human-Robot Collaboration, AAAI Fall Symposium Series: Shared Autonomy in Research and Practice, Arlington, VA, USA.
  7. Maeda, G.; Ewerton, M.; Koert, D.; Peters, J. (2016). Acquiring and Generalizing the Embodiment Mapping from Human Observations to Robot Skills, IEEE Robotics and Automation Letters (RA-L), 1, 2, pp.784-791.
  8. Maeda, G.; Neumann, G.; Ewerton, M.; Lioutikov, R.; Peters, J. (2015). A Probabilistic Framework for Semi-Autonomous Robots Based on Interaction Primitives with Phase Estimation, Proceedings of the International Symposium of Robotics Research (ISRR).
  9. Ewerton, M.; Neumann, G.; Lioutikov, R.; Ben Amor, H.; Peters, J.; Maeda, G. (2015). Modeling Spatio-Temporal Variability in Human-Robot Interaction with Probabilistic Movement Primitives, Workshop on Machine Learning for Social Robotics, ICRA.
  10. Ewerton, M.; Maeda, G.J.; Peters, J.; Neumann, G. (2015). Learning Motor Skills from Partially Observed Movements Executed at Different Speeds, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), pp.456-463.
  11. Ewerton, M.; Neumann, G.; Lioutikov, R.; Ben Amor, H.; Peters, J.; Maeda, G. (2015). Learning Multiple Collaborative Tasks with a Mixture of Interaction Primitives, Proceedings of the International Conference on Robotics and Automation (ICRA), pp.1535-1542. Finalist: Best Paper, Best Student Paper, and Best Service Robotics Paper.
  12. Kroemer, O.; van Hoof, H.; Neumann, G.; Peters, J. (2014). Learning to Predict Phases of Manipulation Tasks as Hidden States, Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).
  13. Lioutikov, R.; Kroemer, O.; Peters, J.; Maeda, G. (2014). Learning Manipulation by Sequencing Motor Primitives with a Two-Armed Robot, Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS).
  14. Maeda, G.J.; Ewerton, M.; Lioutikov, R.; Amor, H.B.; Peters, J.; Neumann, G. (2014). Learning Interaction for Collaborative Tasks with Probabilistic Movement Primitives, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), pp.527-534.
  15. Ben Amor, H.; Vogt, D.; Ewerton, M.; Berger, E.; Jung, B.; Peters, J. (2013). Learning Responsive Robot Behavior by Imitation, Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  16. Rueckert, E.; Mundo, J.; Paraschos, A.; Peters, J.; Neumann, G. (2015). Extracting Low-Dimensional Control Variables for Movement Primitives, Proceedings of the International Conference on Robotics and Automation (ICRA).

    Mundo, J. (2014). Extracting Low-Dimensional Control Variables for Movement Primitives, Master Thesis.

Interested in this robot system? Please contact Rudolf Lioutikov!

Our iCub with skin and Coman legs

We have a full humanoid iCub robot (53 DOF), equipped with actuated cameras for stereo vision, an inertial sensor, whole-body skin (arms, legs, torso, and foot soles), tactile elements on the fingertips, six-axis force/torque sensors (arms and legs), and variable-impedance actuation in the legs (a design inherited from Coman's legs). Ours is a state-of-the-art configuration, well suited for whole-body motions with contacts, such as walking or getting up from a chair, and for physical interaction with humans and the environment.

The setup is located in TU Darmstadt's lab, where the iCub is used for the CoDyCo project.

Some of our work with iCub:

  1. Calandra, R.; Ivaldi, S.; Deisenroth, M.; Rueckert, E.; Peters, J. (2015). Learning Inverse Dynamics Models with Contacts, Proceedings of the International Conference on Robotics and Automation (ICRA).
  2. Ivaldi, S.; Nguyen, S.M.; Lyubova, N.; Droniou, A.; Padois, V.; Filliat, D.; Oudeyer, P.-Y.; Sigaud, O. (2014). Object learning through active exploration, IEEE Transactions on Autonomous Mental Development, 6, pp.56-72.
  3. Rueckert, E.; Camernik, J.; Peters, J.; Babic, J. (2016). Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control, Nature PG: Scientific Reports, 6, 28455.

Interested in this robot system? Please contact Elmar Rueckert!

Robot table tennis setup consisting of Barrett WAM and high-speed cameras

We have set up a highly advanced robot table tennis platform consisting of a high-speed, high-voltage, custom-made version of the Barrett WAM robot together with eight high-speed Prosilica cameras. The WAM is torque controlled at 500 Hz via CAN bus and, thanks to its custom design, can reach high accelerations nearly instantaneously. The Prosilica cameras operate at 200 Hz and are used with our vision system described in Lampert, C.H.; Peters, J. (2012). Real-Time Detection of Colored Objects In Multiple Camera Streams With Off-the-Shelf Hardware Components, Journal of Real-Time Image Processing, 7, 1, pp.31-41.

The whole setup is located at our Tuebingen lab, the Robot Learning Lab at the Department for Empirical Inference at the Max Planck Institute for Intelligent Systems. Here, we have both students and post-docs, and many of our current members in Darmstadt have spent significant time in Tuebingen.

We have used this setup for a series of motor skill learning tasks, including Ball-in-a-Cup, ball paddling, and basic robot table tennis. Please see the publications below for our work with this system:

  1. Muelling, K.; Kober, J.; Kroemer, O.; Peters, J. (2013). Learning to Select and Generalize Striking Movements in Robot Table Tennis, International Journal of Robotics Research (IJRR), 32, 3, pp.263-279.
  2. Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2016). Hierarchical Relative Entropy Policy Search, Journal of Machine Learning Research (JMLR), 17, pp.1-50.

    Daniel, C.; Neumann, G.; Peters, J. (2012). Hierarchical Relative Entropy Policy Search, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2012).
  3. Kober, J.; Peters, J. (2010). Imitation and Reinforcement Learning - Practical Algorithms for Motor Primitive Learning in Robotics, IEEE Robotics and Automation Magazine, 17, 2, pp.55-62.

Interested in this robot system? Please contact Jan Peters!

Three BioRob compliant robotic arms

The BioRob arm is a compliant robotic arm with, depending on the version, five or six degrees of freedom. Its tendon-driven design kinematically decouples the joint and motor side and allows the heavy servo motors to be placed close to the base, the ``torso'', of the robot. The result is a very lightweight design, especially at the distal links of the robot, which offers significant advantages for dynamic and high-speed movements. Additionally, the springs connecting the tendons provide compliance, a necessary property for striking movements such as hammering, and allow energy to be stored and released to reach even higher accelerations than the motors alone can provide. Overall, BioRob's lightweight design offers a great platform for high-speed movements while minimizing the risk of damaging its servo motors and increasing safety, even for close human-robot interaction.

These advantages of the design come at a cost: controlling the robot is a complex problem that requires sophisticated control policies. At IAS, we focus on improving the control performance of the robot on motor skill tasks. We develop novel model-based control approaches that take into account the elasticity and the spring characteristics of the robots. Since models created from CAD data alone lead to inferior performance, we use model learning approaches to improve them. Additionally, we use imitation learning to incorporate expert knowledge into our control policies, and we subsequently improve these policies with reinforcement learning techniques. We have evaluated our control approaches on hitting static and moving balls, and we have also developed a two-robot setup, in which the robots compete in a game of tetherball, for further experimentation.
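The spring characteristic mentioned above can be captured by a series-elastic joint model. The following sketch, with purely illustrative constants rather than BioRob parameters, shows how the joint torque and the stored elastic energy arise from the deflection between motor side and joint side:

```python
def series_elastic_torque(theta_motor, q_joint, k_spring):
    """Series-elastic actuation (sketch): the motor does not drive the joint
    directly; a spring between motor side and joint side transmits a torque
    proportional to their deflection. The spring can store energy during a
    wind-up phase and release it for high-acceleration strikes."""
    deflection = theta_motor - q_joint
    tau = k_spring * deflection               # torque seen at the joint [Nm]
    energy = 0.5 * k_spring * deflection**2   # elastic energy stored [J]
    return tau, energy

# Illustrative numbers: the motor leads the joint by 0.2 rad.
tau, energy = series_elastic_torque(theta_motor=0.5, q_joint=0.3, k_spring=10.0)
```

This deflection-dependent torque is exactly why CAD-based rigid-body models fall short and model learning pays off on this platform.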

References:

  1. Kollegger, G.; Ewerton, M.; Wiemeyer, J.; Peters, J. (2017). BIMROB -- Bidirectional Interaction Between Human and Robot for the Learning of Movements, in: Lames, M.; Saupe, D.; Wiemeyer, J. (eds.), Proceedings of the 11th International Symposium on Computer Science in Sport (IACSS 2017), pp.151-163, Springer International Publishing.
  2. Ewerton, M.; Kollegger, G.; Maeda, G.; Wiemeyer, J.; Peters, J. (2017). Iterative Feedback-basierte Korrekturstrategien beim Bewegungslernen von Mensch-Roboter-Dyaden, DVS Sportmotorik 2017.
  3. Kollegger, G.; Reinhardt, N.; Ewerton, M.; Peters, J.; Wiemeyer, J. (2017). Die Bedeutung der Beobachtungsperspektive beim Bewegungslernen von Mensch-Roboter-Dyaden, DVS Sportmotorik 2017.
  4. Wiemeyer, J.; Peters, J.; Kollegger, G.; Ewerton, M. (2017). BIMROB – Bidirektionale Interaktion von Mensch und Roboter beim Bewegungslernen, DVS Sportmotorik 2017.
  5. Kollegger, G.; Ewerton, M.; Wiemeyer, J.; Peters, J. (2017). BIMROB – Bidirectional Interaction between human and robot for the learning of movements – Robot trains human – Human trains robot, 23. Sport­wissenschaft­licher Hochschultag der dvs.
  6. Ewerton, M.; Maeda, G.; Neumann, G.; Kisner, V.; Kollegger, G.; Wiemeyer, J.; Peters, J. (2016). Movement Primitives with Multiple Phase Parameters, Proceedings of the International Conference on Robotics and Automation (ICRA), pp.201-206.
  7. Ewerton, M.; Maeda, G.J.; Kollegger, G.; Wiemeyer, J.; Peters, J. (2016). Incremental Imitation Learning of Context-Dependent Motor Skills, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), pp.351-358.
  8. Kollegger, G.; Ewerton, M.; Peters, J.; Wiemeyer, J. (2016). Bidirektionale Interaktion zwischen Mensch und Roboter beim Bewegungslernen (BIMROB), 11. Symposium der DVS Sportinformatik.
  9. Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS).
  10. Abdulsamad, H.; Buchholz, T.; Croon, T.; El Khoury, M. (2014). Playing Tetherball with Compliant Robots, Advanced Design Project.
  11. Ho, D.; Kisner, V. (2014). Trajectory Tracking Controller for a 4-DoF Flexible Joint Robotic Arm, Advanced Design Project.
  12. Englert, P.; Paraschos, A.; Peters, J.; Deisenroth, M.P. (2013). Probabilistic Model-based Imitation Learning, Adaptive Behavior Journal, 21, pp.388-403.

Interested in these robot systems? Please contact Simone Parisi and Marco Ewerton!

The Kilobots swarm robotics platform

The Kilobots are an open-source swarm robotics platform developed by the Self-Organizing Systems Research Group at Harvard University. Their design is kept simple: a circular PCB with a diameter of roughly 3 cm forms the body, supported by three rigid legs of 2 cm length, with the battery housing sitting on top of the PCB. The Kilobots move based on the slip-stick motion principle using two vibration motors that are glued to the battery housing. Activating the vibration motors produces tiny jumps of the robot, which appear as a smooth movement at a velocity of around 1 cm/s. Besides moving forward (using both vibration motors), a Kilobot can turn on one of its rear legs by activating only the motor opposite to that leg. The Kilobots sense the ambient light through a photodiode mounted on the top side of the PCB. With an infrared emitter and receiver on the underside of the PCB, the Kilobots can communicate within a neighborhood of about 10 cm. The Kilobots are programmed using an overhead controller that allows only one-way communication from a PC to the Kilobots.

At IAS, we use the Kilobot platform to evaluate policy search methods in the context of swarm robotics. One direction of research is to learn a controller for a common input signal that steers the robot swarm: for the Kilobot swarm, this common input signal is a light gradient, and the Kilobots are programmed to follow the light gradient towards the brightest point. Another research direction is to learn a control policy directly for the agents. In this setup, the agents receive a global reward signal but execute the policy independently.
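As a toy illustration of the first setup, the following sketch (our own simplified point-agent model, not the actual Kilobot firmware) simulates agents that each take a small step towards a movable light source; a learned controller would then steer the swarm by repositioning the light:

```python
import numpy as np

def step_swarm(positions, light_pos, speed=0.01):
    """Each agent takes one small step (roughly 1 cm per control step, as a
    stand-in for the ~1 cm/s Kilobot speed) towards the brightest point,
    i.e., up the light gradient towards the light source."""
    directions = light_pos - positions
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero exactly at the source
    return positions + speed * directions / norms

rng = np.random.default_rng(0)
swarm = rng.uniform(-0.5, 0.5, size=(20, 2))  # 20 agents in a 1 m square
light = np.array([1.0, 1.0])                  # controller places the light here

for _ in range(100):
    swarm = step_swarm(swarm, light)
# After 100 steps of 1 cm each, the swarm has converged towards the light.
```

In the actual learning problem, the interesting part is choosing the light position over time so that the swarm, as a side effect of its fixed phototaxis behavior, pushes or assembles objects.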

References:

  1. Gebhardt, G.H.W.; Daun, K.; Schnaubelt, M.; Neumann, G. (2018). Learning Robust Policies for Object Manipulation with Robot Swarms, Proceedings of the IEEE International Conference on Robotics and Automation.
  2. Gebhardt, G.H.W.; Daun, K.; Schnaubelt, M.; Hendrich, A.; Kauth, D.; Neumann, G. (2017). Learning to Assemble Objects with a Robot Swarm, Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS 17), pp.1547-1549, International Foundation for Autonomous Agents and Multiagent Systems.

Interested in this robot system? Please contact Gregor Gebhardt!

The Oncilla compliant quadruped

The Oncilla robot is an open-source, open-hardware quadruped robot developed in the AMARSi EU project. The robot features 12 degrees of freedom (three per leg), passive compliance, and rich sensor feedback. Further information on the hardware design and open-source blueprints can be found here.

At IAS, the Oncilla robot is used to study motor skill learning, especially learning different walking, trotting, or running gaits and the transitions between them. In contrast to many existing studies performed in simulation, our goal is to learn these motor skills directly on the robot hardware. To this end, the Oncilla robot is placed on a treadmill that automatically adapts its speed to the speed of the robot. For motor skill learning, sample-efficient and noise-robust policy search methods developed at IAS, as well as biologically inspired movement primitive representations, will be used. In this context, we investigate which changes to policy search methods are needed to learn from stochastic rewards and which computational principles support safe transitions between different motor skills.

Interested in this robot system? Please contact Elmar Rückert!

Mitsubishi PA-10

Our Mitsubishi PA-10 robot is a typical industrial robot arm with seven degrees of freedom. It has an internal PD controller with high gains and is therefore position controlled. In the past, we have equipped the robot with different kinds of sensors and actuators, such as a force-torque sensor, an RGB-D camera, and different kinds of tactile sensors. The PA-10 robot arm is mainly used by the grasping and manipulation lab.

Some of our work with the PA-10:

  1. Kroemer, O.; Lampert, C.H.; Peters, J. (2011). Learning Dynamic Tactile Sensing with Robust Vision-based Training, IEEE Transactions on Robotics (T-Ro), 27, 3, pp.545-557.
  2. van Hoof, H.; Kroemer, O.; Peters, J. (2014). Probabilistic Segmentation and Targeted Exploration of Objects in Cluttered Environments, IEEE Transactions on Robotics (T-Ro), 30, 5, pp.1198-1209.
  3. van Hoof, H.; Kroemer, O.; Peters, J. (2013). Probabilistic Interactive Segmentation for Anthropomorphic Robots in Cluttered Environments, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS).

Interested in this robot system? Please contact Filipe Veiga!

Allegro hand

The Allegro hand has four fingers with four joints each, giving the hand 16 degrees of freedom in total. This complexity enables the hand to accomplish dexterous manipulation tasks. It comes with a PD controller and is position controlled. The stock fingertips are sticky, sensorless rubber tips that can grasp a variety of objects but do not provide any sensory feedback. For in-hand manipulation tasks, we therefore equipped the hand with BioTac sensors. These are human-inspired tactile fingertip sensors and can be seen in the picture here.
Find more information about the Allegro hand here.
Find more information about the BioTac tactile sensors here.

Some of our work with the Allegro:

  1. van Hoof, H.; Tanneberg, D.; Peters, J. (2017). Generalized Exploration in Policy Search, Machine Learning (MLJ), 106, 9-10, pp.1705-1724.

Interested in this robot system? Please contact Daniel Tanneberg!

Aldebaran Nao

  1. Moving: 25 degrees of freedom and a humanoid shape enable NAO to move and adapt to the world around him. His inertial unit enables him to maintain his balance and to know whether he is standing up or lying down.
  2. Feeling: The numerous sensors in his head, hands, and feet, as well as his sonars, enable him to perceive his environment and get his bearings.
  3. Hearing and speaking: With his four directional microphones and loudspeakers, NAO interacts with humans in a completely natural manner, by listening and speaking.
  4. Seeing: NAO is equipped with two cameras that film his environment in high resolution, helping him to recognise shapes and objects.
  5. Connecting: To access the Internet autonomously, NAO can use a range of different connection modes (WiFi, Ethernet).
  6. Thinking: We can't really talk about "Artificial Intelligence" with NAO, but the robot is already able to reproduce human behaviour.

Interested in this robot system? Please contact Boris Belousov!

Quanser QUBE-Servo Robot (Furuta Pendulum)

The QUBE-Servo robot from Quanser is an implementation of the Furuta pendulum, the prototypical example of an underactuated mechanical system, often used in courses on control and reinforcement learning. In our setup, it has a straightforward Python interface similar to an OpenAI Gym environment; so, if you have an algorithm working in the Gym, you can easily run it on the real system by changing just one line of code.
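A minimal sketch of what such a Gym-style interaction loop looks like. The environment below is an illustrative toy model written for this example, not our actual hardware interface; only the reset()/step() calling convention matters:

```python
import numpy as np

class FurutaPendulumEnv:
    """Minimal Gym-style stand-in environment (an illustrative toy model,
    not the real Quanser driver): it exposes the usual reset()/step(action)
    interface, so the same agent code could later target the hardware by
    swapping only the environment construction."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.state = None

    def reset(self):
        # start hanging down (angle measured from upright), at rest
        self.state = np.array([np.pi, 0.0])  # [angle, angular velocity]
        return self.state.copy()

    def step(self, action):
        theta, omega = self.state
        # toy pendulum dynamics: gravity plus control torque, no arm coupling
        omega += (9.81 * np.sin(theta) + float(action)) * self.dt
        theta += omega * self.dt
        self.state = np.array([theta, omega])
        reward = float(np.cos(theta))  # +1 when upright, -1 when hanging down
        return self.state.copy(), reward, False, {}

# The familiar Gym-style interaction loop.
env = FurutaPendulumEnv()
obs = env.reset()
for _ in range(100):
    action = -0.5 * obs[1]                  # simple velocity-damping policy
    obs, reward, done, info = env.step(action)
```

Swapping `FurutaPendulumEnv()` for the hardware-backed environment is the "one line of code" mentioned above.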

The Qube is perfect for quick prototyping and experimentation since the platform is extremely robust. Students can access it over the network, so it is not even necessary to be physically close to the robot to run experiments on the hardware.

Interested in this robot system? Please contact Boris Belousov!

Robotino

Robotino is a small mobile robot equipped with an elephant-trunk-like arm, the Bionic Handling Assistant. The robot's base allows holonomic movement thanks to an omnidirectional drive. The arm is pneumatically actuated: six air chambers bend the arm's two 'links', and two further chambers are used to rotate its gripper, which can be opened and closed. The robot can furthermore be equipped with, e.g., a webcam, distance sensors, or other USB devices.

Since modelling the kinematics and dynamics of the arm is hard, and the behaviour changes when the robot lifts a load, this robot provides interesting learning opportunities.

Some of our work with the Robotino:

  1. Bischoff, B.; Nguyen-Tuong, D.; van Hoof, H.; McHutchon, A.; Rasmussen, C.E.; Knoll, A.; Peters, J.; Deisenroth, M.P. (2014). Policy Search For Learning Robot Control Using Sparse Data, Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).

Interested in this robot system? Please contact Herke van Hoof!

Wessling Robotics Hand

The Wessling Robotic Hand, produced by Wessling Robotics, is composed of five robotic fingers. The hand is designed so that the fingers are interchangeable. Each finger has three actuated degrees of freedom, one of which is a coupled joint controlling both the distal and proximal joints. In total, the hand offers 15 actuated degrees of freedom and 20 joints. The fingertips of our Wessling Hand are equipped with BioTac SP sensors, a more recent version of the standard BioTac sensors produced by SynTouch. For our purposes, the hand is position controlled in joint space or controlled in task space with end effectors placed at each fingertip.
For more information concerning the Wessling Robotic Hand, please visit the Wessling Robotics website. Additional information on the BioTac SP tactile sensors can be found on the SynTouch website.

Interested in this robot system? Please contact Filipe Veiga!

High-speed Barrett WAM

An exclusive high-speed, seven-degrees-of-freedom version of the famous Barrett WAM robot has recently arrived at our lab in Darmstadt. This cable-driven robot is capable of producing extremely high accelerations and is uniquely suited for studying highly dynamic movements that lie beyond the capabilities of standard industrial robots. Our low-level torque control interface, tightly integrated with a simulation environment as well as with an OptiTrack object tracking system, allows for fast prototyping and rapid experimentation with the robot. A Robcom interface makes it easy to use a familiar language and environment, such as Python, Matlab, or ROS, for quickly testing new algorithmic ideas.

Several ongoing projects, including badminton, beer pong, and juggling, provide a great opportunity for motivated students to learn more about real-time control of nonlinear dynamical systems, as well as to apply their knowledge of robot learning and machine learning to challenging control problems.

Interested in this robot system? Please contact Boris Belousov and Dorothea Koert!

  
