Daniel Tanneberg

Quick Info

Research Interests

(Biologically-inspired) Machine Learning, Neuroscience, Deep Learning, (Stochastic) Neural Networks, Lifelong-Learning

More Information

Curriculum Vitae | Publications | ResearchGate Network | Google Scholar | ORCID

Contact Information

Mail. Daniel Tanneberg
TU Darmstadt, FG IAS,
Hochschulstr. 10, 64289 Darmstadt
Office. Room E327, Building S2|02
Phone (work). +49-6151-16-25371

Daniel joined the Intelligent Autonomous Systems (IAS) Group at the Technische Universitaet Darmstadt in October 2015 as a Ph.D. student. His research focuses on (biologically-inspired) machine learning for robotics and neuroscience. During his Ph.D., Daniel investigates the applicability and properties of (spiking) deep neural networks for open-ended robot learning. He works on the GOAL-Robots project, which aims at developing goal-based, open-ended autonomous learning robots, i.e., building lifelong learning robots.

More generally, he is interested in the connection between machine learning, robotics, and neuroscience. Daniel is also a member of the Athena-Minerva Cybathlon team of TU Darmstadt (IAS Group) and the Max Planck Institute for Intelligent Systems, which built a Brain-Computer Interface (BCI) system and participated in the BCI Race at the Cybathlon 2016. The team and the related research aim to contribute to BCI research that seeks to give paralyzed people a way to communicate. Besides that, he was also involved in the TACMAN project, which used tactile sensing to improve the manipulation abilities and dexterity of robotic hands.

Before starting his Ph.D., Daniel completed both his Bachelor's and Master's degrees in Computer Science (the latter with honors) at the Technische Universitaet Darmstadt. During his Master's studies, he focused on machine learning and took Biological Psychology as a minor to learn about learning and information processing in humans. His Master's thesis, entitled "Spiking Neural Networks Solve Robot Planning Problems", was written under the supervision of Elmar Rueckert and Jan Peters and was awarded the Hanns-Voith-Stiftungspreis 2017.

Key References

  1. Tanneberg, D.; Peters, J.; Rueckert, E. (2018). Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks, arXiv.
  2. Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Efficient Online Adaptation with Stochastic Recurrent Neural Networks, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS).
  3. Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals, Proceedings of the Conference on Robot Learning (CoRL).
  4. Tanneberg, D.; Paraschos, A.; Peters, J.; Rueckert, E. (2016). Deep Spiking Networks for Model-based Planning in Humanoids, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS).
  5. Rueckert, E.; Kappel, D.; Tanneberg, D.; Pecevski, D.; Peters, J. (2016). Recurrent Spiking Networks Solve Planning Tasks, Scientific Reports, 6, 21142, Nature Publishing Group.

For all publications please see Publications or Google Scholar.

