Software

Team Software

Github repository of the Flowers team, including the code for most of the papers we produce: https://github.com/flowersteam

Other links:

Explauto: an autonomous exploration Python library

Github: https://github.com/flowersteam/explauto


A library to study, model and simulate curiosity-driven learning and exploration in virtual and robotic agents. It provides a common interface for the implementation of active sensorimotor learning algorithms.

Explauto provides a high-level API for an easy definition of:

  1. Virtual and robotics setups (Environment level)
  2. Sensorimotor learning iterative models (Sensorimotor level)
  3. Active choice of sensorimotor experiments (Interest level)

It is cross-platform and has been tested on Linux, Windows and Mac OS. Do not hesitate to contact us if you want to get involved! It has been released under the GPLv3 license.
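The snippet below is a minimal, self-contained Python sketch of this three-level loop. The class and method names are ours, chosen for illustration only; they are not the Explauto API, whose exact interface is documented in the repository and its tutorial notebooks.

import numpy as np

rng = np.random.default_rng(0)

class ArmEnvironment:
    """Environment level: toy 2-joint planar arm, motor command (joint angles) -> hand position."""
    def execute(self, m):
        angles = np.cumsum(m)
        return np.array([np.cos(angles).sum(), np.sin(angles).sum()]) / 2.0

class NearestNeighborModel:
    """Sensorimotor level: proposes a motor command for a sensory goal by
    perturbing the past command whose outcome was closest to that goal."""
    def __init__(self):
        self.M, self.S = [], []
    def update(self, m, s):
        self.M.append(m)
        self.S.append(s)
    def inverse(self, s_goal):
        if not self.M:
            return rng.uniform(-np.pi, np.pi, size=2)
        i = int(np.argmin(np.linalg.norm(np.array(self.S) - s_goal, axis=1)))
        return self.M[i] + rng.normal(0.0, 0.05, size=2)

class RandomInterest:
    """Interest level: chooses which sensory goal to try next (here, uniformly at random)."""
    def sample_goal(self):
        return rng.uniform(-1.0, 1.0, size=2)

env, sm_model, interest = ArmEnvironment(), NearestNeighborModel(), RandomInterest()
for _ in range(1000):
    s_goal = interest.sample_goal()   # interest level: pick an experiment
    m = sm_model.inverse(s_goal)      # sensorimotor level: goal -> motor command
    s = env.execute(m)                # environment level: observe the outcome
    sm_model.update(m, s)             # learn from the (m, s) pair

print("collected", len(sm_model.S), "sensorimotor observations through goal babbling")

Replacing the random interest model with one that favours regions of high learning progress is what turns such a loop into curiosity-driven exploration.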

KidLearn - Multi-Armed Bandits for Intelligent Tutoring Systems

 

Intelligent Tutoring Systems (ITS) are computer environments designed to guide students in their learning. By proposing different activities, they provide teaching experience, guidance and feedback to improve learning.

The FLOWERS team has developed several computational models of artificial curiosity and intrinsic motivation based on research in psychology that might have a great impact on ITS. Results showed that activities with intermediate levels of complexity, neither too easy nor too difficult but just a little more difficult than the current level, provide better teaching experiences.

B. Clement, D. Roy, P.-Y. Oudeyer, M. Lopes, Multi-Armed Bandits for Intelligent Tutoring Systems, Journal of Educational Data Mining (JEDM), 2015
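As a toy illustration of this idea (and not of the RiARiT/ZPDES algorithms evaluated in the paper), the Python sketch below treats activities as arms of a bandit whose reward is an estimate of recent learning progress on a simulated student; the student model and all constants are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

difficulties = np.linspace(0.0, 5.0, 6)    # six activities of increasing difficulty
skill = 0.0                                # hidden state of a simulated student
quality = np.ones(len(difficulties))       # optimistic initial estimates of learning progress
history = [[] for _ in difficulties]       # success history per activity

def attempt(activity):
    """Simulated student: succeeds more often on activities near its level and
    learns most from activities that are neither too easy nor too hard."""
    global skill
    p_success = 1.0 / (1.0 + np.exp(difficulties[activity] - skill))
    skill += 0.2 * p_success * (1.0 - p_success)   # largest gain around p = 0.5
    return float(rng.random() < p_success)

for step in range(300):
    # epsilon-greedy bandit over estimated learning progress
    a = int(rng.integers(len(difficulties))) if rng.random() < 0.2 else int(np.argmax(quality))
    history[a].append(attempt(a))
    window = history[a][-20:]
    half = len(window) // 2
    if half >= 2:
        # learning progress ~ absolute change of the success rate over the recent window
        quality[a] = abs(np.mean(window[half:]) - np.mean(window[:half]))

print("simulated student skill after 300 exercises:", round(skill, 2))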

website: https://github.com/flowersteam/kidlearn
license: dual-license model: GNU Affero GPL License v3 (AGPL3) and Commercial.

RLPark - Reinforcement Learning Algorithms in Java.

RLPark is a reinforcement learning framework in Java. RLPark includes learning algorithms, agent state representations, reinforcement learning architectures, standard benchmark problems, communication interfaces for robots, a framework for running experiments on clusters, and real-time visualization using Zephyr.

RLPark has been used in more than a dozen publications (see http://rlpark.github.com/publications.html for a list). Moreover, RLPark has been ported to C++ by Saminda Abeyruwan, a student at the University of Miami (United States of America).

website: http://rlpark.github.com
maturity: RLPark has been used for research since 2010.
license: Eclipse Public License – v1.0

Multimodal NMF: Multimodal learning with Non-Negative Matrix Factorization

Github: https://github.com/omangin/multimodal

A set of tools and experimental scripts used to achieve multimodal learning with nonnegative matrix factorization (NMF).

This code reproduces the experiments from the publication:

O. Mangin, P.-Y. Oudeyer, Learning semantic components from subsymbolic multimodal perception, Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Osaka, Japan, 2013.

Please consider citing this paper when re-using the code in scientific publications.

The NMF implementation used in this code is also available as third party code for the scikit-learn project.
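The sketch below illustrates the general idea in Python, not the experiments of the paper: a shared nonnegative dictionary is learned over two concatenated modalities (here with the standard scikit-learn NMF rather than the implementation shipped with this code), and the block of the dictionary covering the observed modality is then used to infer coefficients for a test sample and predict the missing modality. The data, dimensions and the nonnegative least-squares inference step are illustrative choices.

import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

n_samples, d_a, d_b, k = 300, 30, 20, 5

# Synthetic nonnegative "concepts", each with a signature in both modalities.
dict_a = rng.random((k, d_a))
dict_b = rng.random((k, d_b))
coeffs = rng.random((n_samples, k)) ** 2
X_a = coeffs @ dict_a + 0.01 * rng.random((n_samples, d_a))
X_b = coeffs @ dict_b + 0.01 * rng.random((n_samples, d_b))

# Training: joint NMF on the concatenated modalities.
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(np.hstack([X_a, X_b]))
H = model.components_                 # shared dictionary, shape (k, d_a + d_b)
H_a, H_b = H[:, :d_a], H[:, d_a:]

# Test: observe only modality A, solve a nonnegative least-squares problem for
# the internal coefficients, then predict what modality B should look like.
x_a_test = coeffs[0] @ dict_a
w_test, _ = nnls(H_a.T, x_a_test)
x_b_true = coeffs[0] @ dict_b
x_b_pred = w_test @ H_b

print("relative error on the unseen modality: %.3f"
      % (np.linalg.norm(x_b_pred - x_b_true) / np.linalg.norm(x_b_true)))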

PROPRE-Py - Deep Neural Networks for concept discovery in Python

PROPRE-Py is a software suite intended for the simulation of deep neural hierarchies using the PROPRE algorithms in various forms. Time-critical parts are written in C/OpenCV, but the high-level control of network topology and high-level learning mechanisms is done in Python, combining execution speed and ease of use.

website: http://www.gepperth.net/Downloads
maturity: PROPRE-Py is still in early development.
license: General Public License

SGIM - Socially Guided Intrinsic Motivation in Matlab

Matlab code for the algorithms developed in the Flowers team for intrinsic motivation exploration by robots: SGIM (Socially Guided Intrinsic Motivation) and SAGG-RIAC. This code corresponds to the algorithms and works described in: Nguyen (2013), A Curious Robot Learner for Interactive Goal-Babbling: Strategically Choosing What, How, When and from Whom to Learn, PhD Thesis, INRIA, France.

Downloadable from http://nguyensmai.free.fr/publication/SGIM.zip
Website: http://nguyensmai.free.fr

Maturity: The SGIM algorithm has been used in several experimental setups. The core code of the algorithm will not change; however, examples of experimental setups will be added.

Associated journal articles:

Active Choice of Teachers, Learning Strategies and Goals for a Socially Guided Intrinsic Motivation Learner
Sao Mai Nguyen; Pierre-Yves Oudeyer. Paladyn Journal of Behavioral Robotics, Springer, 2012, 3 (3), pp. 136-146.

Socially Guided Intrinsic Motivation for Robot Learning of Motor Skills
Sao Mai Nguyen; Pierre-Yves Oudeyer. Autonomous Robots, Springer, 2014, 36 (3), pp. 273-294.

LFUI - Learning From Unlabeled Interaction

Matlab code for the algorithms developed in the Flowers team for learning from unlabeled instructions. This code allows a robot to learn a new task by interacting with a human, without knowing beforehand the meaning of the human's communicative signals. The result is a Calibration-Free Human-Machine Interaction system. This code corresponds to the algorithms and works described in: Grizou, J., Iturrate, I., Montesano, L., Oudeyer, P.-Y., & Lopes, M. (2014, July). Interactive Learning from Unlabeled Instructions. In Conference on Uncertainty in Artificial Intelligence (UAI).
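The toy Python sketch below illustrates only the core intuition, not the algorithm of the paper: under each candidate task hypothesis, the observed feedback signals receive putative correct/incorrect labels, and the hypothesis under which this labeling makes the signals most self-consistent (highest likelihood under a per-label Gaussian model) is selected. The signal model and all constants are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_steps = 5, 200
true_task = 2                      # hidden target the human has in mind

def emit_signal(correct):
    """Hidden feedback channel: a 1-D feature whose distribution depends on
    whether the robot's action was correct for the (unknown) true task."""
    return rng.normal(1.0 if correct else -1.0, 1.0)

# The robot acts (here, randomly) and records (action, signal) pairs.
actions = rng.integers(0, n_tasks, size=n_steps)
signals = np.array([emit_signal(a == true_task) for a in actions])

def hypothesis_score(task):
    """Interpret the signals as if `task` were the target, fit one Gaussian per
    putative label, and return the resulting log-likelihood."""
    labels = actions == task
    score = 0.0
    for value in (True, False):
        x = signals[labels == value]
        if len(x) < 2:
            return -np.inf
        mu, sigma = x.mean(), x.std() + 1e-6
        score += np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma))
    return score

scores = [hypothesis_score(t) for t in range(n_tasks)]
print("estimated task:", int(np.argmax(scores)), "- true task:", true_task)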

Downloadable from: https://github.com/jgrizou/lfui and https://github.com/jgrizou/thesis_code
Website: http://jgrizou.com/

For more information please visit the following page: http://jgrizou.com/projects/thesis-defense/

DMP BBO: C++ library for Dynamic Movement Primitives and Black-Box Optimization

This repository provides an implementation of dynamical systems, function approximators, dynamical movement primitives, and black-box optimization with evolution strategies, in particular the optimization of the parameters of dynamical movement primitives.
This library may be useful for you if you:

  1. are new to dynamical movement primitives and want to learn about them (see the tutorial in the doxygen documentation).
  2. already know about dynamical movement primitives, but would rather use existing, tested code than brew it yourself.
  3. want to do reinforcement learning/optimization of dynamical movement primitives.

Most submodules of this project are independent of all others, so if you don’t care about dynamical movement primitives, the following submodules can still easily be integrated in other code to perform some (hopefully) useful function:

  1. functionapproximators : a module that defines a generic interface for function approximators, as well as several specific implementations (LWR, LWPR, iRFRLS, GMR)
  2. dynamicalsystems : a module that defines a generic interface for dynamical systems, as well as several specific implementations (exponential, sigmoid, spring-damper)
  3. bbo : implementation of some (rather simple) algorithms for the stochastic optimization of black-box cost functions
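Since the library itself is in C++, the Python sketch below only illustrates, under our own simplifications, the combination it is built around: a one-dimensional movement primitive (spring-damper system plus a learned forcing term) whose basis-function weights are optimized by a reward-weighted-averaging evolution strategy, in the spirit of PI^BB. Parameter names and values are illustrative and do not follow the library's API.

import numpy as np

rng = np.random.default_rng(0)

T, dt, tau = 1.0, 0.01, 1.0
alpha, beta, alpha_x = 25.0, 25.0 / 4.0, 6.0
n_basis = 8
centers = np.exp(-alpha_x * np.linspace(0.0, T, n_basis))             # basis centers in phase space
widths = 1.0 / (np.diff(centers, append=centers[-1] / 2) ** 2 + 1e-6)

def rollout(weights, y0=0.0, goal=1.0):
    """Euler integration of the spring-damper system plus the learned forcing term."""
    y, v, x, traj = y0, 0.0, 1.0, []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        forcing = x * (weights @ psi) / (psi.sum() + 1e-10)
        v += dt * (alpha * (beta * (goal - y) - v) + forcing) / tau
        y += dt * v / tau
        x += dt * (-alpha_x * x) / tau                                 # canonical (phase) system
        traj.append(y)
    return np.array(traj)

def cost(weights):
    """Task: pass through a via-point at mid-movement while keeping the weights small."""
    traj = rollout(weights)
    return (traj[len(traj) // 2] - 0.8) ** 2 + 1e-5 * np.sum(weights ** 2)

# Black-box optimization: sample weight perturbations, weight them by their
# exponentiated (negative, normalized) cost, and average them into an update.
w = np.zeros(n_basis)
for update in range(50):
    eps = rng.normal(0.0, 10.0, size=(20, n_basis))
    costs = np.array([cost(w + e) for e in eps])
    probs = np.exp(-10.0 * (costs - costs.min()) / (costs.max() - costs.min() + 1e-10))
    w += (probs[:, None] * eps).sum(axis=0) / probs.sum()

print("cost after optimization: %.5f" % cost(w))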

If you use this library in the context of experiments for a scientific paper, we would appreciate it if you could cite this library in the paper as follows:

@MISC{stulp_dmpbbo,
    author = {Freek Stulp},
    title  = {{\tt DmpBbo} -- A C++ library for black-box optimization of dynamical movement primitives.},
    year   = {2014},
    url    = {https://github.com/stulp/dmpbbo.git}
}

 

github: https://github.com/stulp/dmpbbo