15
Jun

New PLOS ONE paper: Exploiting task constraints for self-calibrated brain-machine interface control using error-related potentials.

Our work on calibration-free interaction, applied to brain-computer interfaces, has been published in PLOS ONE.

Title: Exploiting task constraints for self-calibrated brain-machine interface control using error-related potentials.

Authors: I. Iturrate, J. Grizou, J. Omedes, P-Y. Oudeyer, M. Lopes and L. Montesano

Abstract: This paper presents a new approach to self-calibrating BCIs for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach.
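For intuition, here is a minimal sketch of the hypothesis-scoring idea described in the abstract. It is our illustration, not the released code: the scalar EEG feature, the Gaussian class model, and the probability floor standing in for the paper's robust likelihood are all simplifying assumptions.

```python
# Minimal sketch (not the authors' implementation) of scoring target
# hypotheses from error-related potentials during grid reaching.
# Assumption: each EEG epoch is reduced to a scalar feature, and a move
# is "correct" under a candidate target if it gets closer to it.
import numpy as np

def induced_labels(positions, moves, target):
    """Label each past move as correct (1) or erroneous (0) under the
    hypothesis that `target` is the user's intended goal."""
    labels = []
    for p, m in zip(positions, moves):
        before = np.abs(target - p).sum()          # Manhattan distance
        after = np.abs(target - (p + m)).sum()
        labels.append(1 if after < before else 0)
    return np.array(labels)

def robust_loglik(features, labels, floor=0.05):
    """Gaussian class-conditional log-likelihood, floored so that a few
    atypical epochs cannot veto a hypothesis (a crude stand-in for the
    paper's robust likelihood)."""
    ll = 0.0
    for c in (0, 1):
        xs = features[labels == c]
        if len(xs) < 2:
            continue
        mu, sd = xs.mean(), xs.std() + 1e-6
        dens = np.exp(-0.5 * ((xs - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        ll += np.log(np.maximum(dens, floor)).sum()
    return ll

# The preferred target is the hypothesis whose induced labeling best
# explains the recorded features, with no prior calibration session.
```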


The code for replicating our experiments can be found on GitHub: https://github.com/flowersteam/self_calibration_BCI_plosOne_2015

The PDF of the paper is available at: https://github.com/flowersteam/self_calibration_BCI_plosOne_2015/releases/download/plosOne/iturrate2015exploiting.pdf

15
Dec

Calibration-Free Human-Machine Interfaces – Thesis Defense

Jonathan Grizou defended his thesis entitled Learning From Unlabeled Interaction Frames on October 24, 2014.

The video, slides, and thesis manuscript can be found at this link: http://jgrizou.com/projects/thesis_defense/

Keywords: Learning from Interaction, Human-Robot Interaction, Brain-Computer Interfaces, Intuitive and Flexible Interaction, Robotics, Symbol Acquisition, Active Learning, Calibration.

Abstract: This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides the means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics.

Our approach assumes the robot has access to a limited set of task hypotheses, which includes the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each candidate task. By building a set of hypothetical interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve can be identified as the one that best explains the history of interaction.
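To make this concrete, here is a hedged sketch of the idea, under simplifying assumptions of ours: signals as feature vectors, a nearest-centroid notion of consistency, and a hypothetical `expected_meaning` helper encoding the task constraints. The thesis itself uses probabilistic classifiers rather than this toy score.

```python
# Sketch: pick the task that best explains unlabeled teaching signals.
# Each candidate task assigns an expected meaning to every interaction
# frame; the chosen task is the one whose induced signal-label grouping
# is the most self-consistent.
import numpy as np

def consistency_score(signals, meanings):
    """Higher when signals that share a hypothesized meaning cluster
    tightly: negative summed distance to each meaning's centroid."""
    score = 0.0
    for m in set(meanings):
        idx = [i for i, mm in enumerate(meanings) if mm == m]
        group = signals[idx]
        score -= np.linalg.norm(group - group.mean(axis=0), axis=1).sum()
    return score

def best_task(signals, frames, tasks, expected_meaning):
    """`expected_meaning(task, frame)` encodes the task constraint: what
    the teacher would have meant in this frame if `task` were the goal."""
    scores = {t: consistency_score(
                  signals, [expected_meaning(t, f) for f in frames])
              for t in tasks}
    return max(scores, key=scores.get)
```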

We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, etc.). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task, at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals to learn new tasks faster. We further introduce a planning strategy that exploits the uncertainty about the task and the signals’ meanings to allow more efficient learning sessions. We present a study where several real human subjects successfully control a virtual device using their brain signals, without relying on a calibration phase. Our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals.
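As an illustration of that planning strategy, here is a hedged sketch; the disagreement criterion and all names are our assumptions rather than the exact planner from the thesis. The device favors actions whose correctness the surviving task hypotheses disagree about, since the teacher's reaction to such actions is the most informative.

```python
# Sketch: uncertainty-exploiting action selection over task hypotheses.
def disagreement(action, state, tasks, posterior, is_correct):
    """Belief-weighted probability that `action` is correct, turned into
    a variance-like score that peaks when the belief is split 50/50."""
    p = sum(posterior[t] for t in tasks if is_correct(t, state, action))
    return p * (1.0 - p)

def pick_action(state, actions, tasks, posterior, is_correct):
    """Choose the action the current task hypotheses disagree on most."""
    return max(actions, key=lambda a: disagreement(a, state, tasks,
                                                   posterior, is_correct))
```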

Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the channels of communication they can use are constrained, forcing them to invent and agree on a shared interaction protocol in order to solve the task. These constraints make it possible to analyze how a communication protocol is progressively established through the interplay and history of individual actions.


16
Jun

UAI-14 Interactive Learning from Unlabeled Instructions

We have a new paper accepted to the 2014 Conference on Uncertainty in Artificial Intelligence (UAI), to be held in July 2014 in Quebec, Canada. It is joint work with Iñaki Iturrate (EPFL) and Luis Montesano (Univ. Zaragoza).

[webpage] [pdf] [bib]

Abstract: Interactive learning deals with the problem of learning and solving tasks using human instructions. It is common in human-robot interaction, tutoring systems, and human-computer interfaces such as brain-computer ones. In most cases, learning these tasks is possible because the signals are predefined or an ad-hoc calibration procedure allows signals to be mapped to specific meanings. In this paper, we address the problem of simultaneously solving a task under human feedback and learning the associated meanings of the feedback signals. This has important practical applications, since the user can start controlling a device from scratch, without the need for an expert to define the meaning of signals or to carry out a calibration phase. The paper proposes an algorithm that simultaneously assigns meanings to signals while solving a sequential task, under the assumption that both human and machine share the same prior on the possible instruction meanings and the possible tasks. Furthermore, we show, using synthetic and real EEG data from a brain-computer interface, that taking into account the uncertainty of the task and the signal is necessary for the machine to actively plan how to solve the task efficiently.
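The toy example below, ours rather than the paper's, shows why the two uncertainties must be tracked jointly: the same observed signal supports different tasks depending on which signal-to-meaning decoder is assumed, so the belief has to live over (task, decoder) pairs.

```python
# Toy joint posterior over (task, decoder) hypotheses.
import itertools

tasks = ["goal_A", "goal_B"]
decoders = ["d1", "d2"]  # d1: s1->correct, s2->incorrect; d2: swapped
prior = {h: 0.25 for h in itertools.product(tasks, decoders)}

def decode(signal, decoder):
    if decoder == "d1":
        return "correct" if signal == "s1" else "incorrect"
    return "incorrect" if signal == "s1" else "correct"

def likelihood(signal, meaning, decoder, noise=0.1):
    """P(signal | teacher meant `meaning`), under an assumed noise rate."""
    return 1 - noise if decode(signal, decoder) == meaning else noise

def update(posterior, signal, meaning_under_task):
    post = {(t, d): p * likelihood(signal, meaning_under_task[t], d)
            for (t, d), p in posterior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# After an action that is correct for goal_A but wrong for goal_B, the
# teacher emits "s1"; `meaning_under_task` states what they must have
# meant under each task hypothesis.
posterior = update(prior, "s1", {"goal_A": "correct", "goal_B": "incorrect"})
# -> (goal_A, d1) and (goal_B, d2) end up equally likely at 0.45 each.
```

In this two-hypothesis toy the symmetry never breaks, but with richer task sets different actions separate the joint hypotheses at different rates, which is exactly what an uncertainty-aware planner can exploit.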


28
Apr

AAAI-14 Calibration-Free BCI Based Control

We have a new paper accepted to the 2014 AAAI Conference on Artificial Intelligence, to be held in July 2014 in Quebec, Canada. We present a method that allows a user to teach an agent a new task by mentally assessing the agent’s actions, without any calibration procedure. It is joint work with Iñaki Iturrate (EPFL) and Luis Montesano (Univ. Zaragoza).

[webpage] [pdf] [bib]

9
Aug

Meet us @ ICDL-Epirob 2013


Five members of the Flowers team will participate in the Third Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob). The conference takes place in Osaka, Japan, August 18-22.

There you will meet Fabien Benureau, Jonathan Grizou, Olivier Mangin, Clément Moulin-Frier, and Mai Nguyen. They will be happy to discuss the team's latest research and future projects.

8
Aug

ICDL-Epirob 2013: Robot Learning Simultaneously a Task and How to Interpret Human Instructions

Can a robot learn a new task if the task is unknown and the user is providing unknown instructions?

We explored this question in our paper Robot Learning Simultaneously a Task and How to Interpret Human Instructions, to appear in the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Osaka, Japan (2013).

In this paper we present an algorithm to bootstrap shared understanding in a human-robot interaction scenario where the user teaches a robot a new task using teaching instructions yet unknown to it. In such cases, the robot needs to estimate simultaneously what the task is and the associated meaning of the instructions received from the user. In this work, we consider a scenario where a human teacher uses initially unknown spoken words, whose associated unknown meaning is either feedback (good/bad) or guidance (go left, right, …). We present computational results, within an inverse reinforcement learning framework, showing that a) it is possible to learn the meaning of unknown and noisy teaching instructions, as well as a new task, at the same time, b) it is possible to reuse the acquired knowledge about instructions to learn new tasks faster, and c) even if the robot initially knows some of the instructions’ meanings, the use of extra unknown teaching instructions improves learning efficiency.
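Point b), reusing acquired knowledge, can be sketched as follows. This is a simplified illustration with hypothetical names, not the paper's implementation: once a signal classifier has been identified on a first task, learning the next task reduces to a plain Bayesian update over task hypotheses, because the instructions are no longer ambiguous.

```python
# Sketch: with a reused instruction decoder, task learning becomes a
# simple Bayesian filter over task hypotheses.
def task_posterior(prior, events, decode, meaning_under_task, noise=0.1):
    """`events` are (signal, frame) pairs; `decode` is the classifier
    reused from the previous task; `meaning_under_task(t, frame)` is the
    meaning the teacher would express if task `t` were the goal."""
    post = dict(prior)
    for signal, frame in events:
        post = {t: p * ((1 - noise) if decode(signal) ==
                        meaning_under_task(t, frame) else noise)
                for t, p in post.items()}
        z = sum(post.values())
        post = {t: p / z for t, p in post.items()}
    return post
```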

Learn more on my webpage: https://flowers.inria.fr/jgrizou/

10
May

King-Sun Fu Best Paper Award

At ICRA 2013, Freek Stulp received the “King-Sun Fu Best Paper Award of the IEEE Transactions on Robotics for the year 2012” for the paper “Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation” by Freek Stulp, Evangelos Theodorou, and Stefan Schaal. IEEE T-RO is one of the highest-impact journals in robotics, and we are especially honored because this is the first time this award has been given to a paper on machine learning.
