Training a gaming agent on brainwaves

"Error-related potential (ErrP) are a particular type of Event-Related Potential (ERP) elicited by a person attending a recognizable error. These Electroencephalographic (EEG) signals can be used to train a gaming agent by a Reinforcement Learning (RL) algorithm to learn an optimal policy. The...

Full description

Bibliographic Details
Main Authors: Bartolomé, Francisco, Moreno, Juan, Navas, Natalia, Vitali, José, Ramele, Rodrigo, Santos, Juan Miguel
Format: Journal articles publishedVersion
Language: English
Published: 2022
Subjects:
Online access: http://ri.itba.edu.ar/handle/123456789/3918
Contributed by:
id I32-R138-123456789-3918
record_format dspace
spelling I32-R138-123456789-39182022-12-07T13:06:56Z Training a gaming agent on brainwaves Bartolomé, Francisco Moreno, Juan Navas, Natalia Vitali, José Ramele, Rodrigo Santos, Juan Miguel CEREBRO JUEGOS ALGORITMOS APRENDIZAJE "Error-related potentials (ErrPs) are a particular type of Event-Related Potential (ERP) elicited when a person attends to a recognizable error. These Electroencephalographic (EEG) signals can be used by a Reinforcement Learning (RL) algorithm to train a gaming agent to learn an optimal policy. The experimental process consists of an observational human critic (OHC) watching a simple game scenario while their brain signals are captured. The game consists of a grid in which a blue spot has to reach a desired target in the fewest number of steps. Results show that there is an effective transfer of information and that the agent successfully learns to solve the game efficiently, improving from the initial average of 97 steps required to reach the target to the optimal number of 8 steps. Our results are threefold: (i) the mechanics of a simple grid-based game that can elicit the ErrP signal component; (ii) the verification that, because the reward function only penalizes wrong steps, type II errors (failing to identify a wrong movement) do not significantly affect the agent's learning process; and (iii) collaborative rewards from multiple observational human critics can be used to train the algorithm effectively and can compensate for low classification accuracies and the limited scope of transfer learning schemes." 2022-06-27T15:19:39Z 2022-06-27T15:19:39Z 2020-12-07 Journal articles info:eu-repo/semantics/publishedVersion http://ri.itba.edu.ar/handle/123456789/3918 en info:eu-repo/semantics/altIdentifier/doi/10.1109/TG.2020.3042900 info:eu-repo/grantAgreement/ITBACyT/2020-15/AR. Ciudad Autónoma de Buenos Aires application/pdf
institution Instituto Tecnológico de Buenos Aires (ITBA)
institution_str I-32
repository_str R-138
collection Repositorio Institucional Instituto Tecnológico de Buenos Aires (ITBA)
language English
topic CEREBRO
JUEGOS
ALGORITMOS
APRENDIZAJE
spellingShingle CEREBRO
JUEGOS
ALGORITMOS
APRENDIZAJE
Bartolomé, Francisco
Moreno, Juan
Navas, Natalia
Vitali, José
Ramele, Rodrigo
Santos, Juan Miguel
Training a gaming agent on brainwaves
topic_facet CEREBRO
JUEGOS
ALGORITMOS
APRENDIZAJE
description "Error-related potential (ErrP) are a particular type of Event-Related Potential (ERP) elicited by a person attending a recognizable error. These Electroencephalographic (EEG) signals can be used to train a gaming agent by a Reinforcement Learning (RL) algorithm to learn an optimal policy. The experimental process consists of an observational human critic (OHC) observing a simple game scenario while their brain signals are captured. The game consists of a grid, where a blue spot has to reach a desired target in the fewest amount of steps. Results show that there is an effective transfer of information and that the agent successfully learns to solve the game efficiently, from the initial 97 steps on average required to reach the target to the optimal number of 8 steps. Our results are expressed in threefold: (i) the mechanics of a simple grid-based game that can elicit the ErrP signal component, (ii) the verification that the reward function only penalizes wrong steps, which means that type II error (not properly identifying a wrong movement) does not affect significantly the agent learning process; (iii) collaborative rewards from multiple observational human critics can be used to train the algorithm effectively and can compensate low classification accuracies and a limited scope of transfer learning schemes."
format Journal articles
publishedVersion
author Bartolomé, Francisco
Moreno, Juan
Navas, Natalia
Vitali, José
Ramele, Rodrigo
Santos, Juan Miguel
author_facet Bartolomé, Francisco
Moreno, Juan
Navas, Natalia
Vitali, José
Ramele, Rodrigo
Santos, Juan Miguel
author_sort Bartolomé, Francisco
title Training a gaming agent on brainwaves
title_short Training a gaming agent on brainwaves
title_full Training a gaming agent on brainwaves
title_fullStr Training a gaming agent on brainwaves
title_full_unstemmed Training a gaming agent on brainwaves
title_sort training a gaming agent on brainwaves
publishDate 2022
url http://ri.itba.edu.ar/handle/123456789/3918
work_keys_str_mv AT bartolomefrancisco trainingagamingagentonbrainwaves
AT morenojuan trainingagamingagentonbrainwaves
AT navasnatalia trainingagamingagentonbrainwaves
AT vitalijose trainingagamingagentonbrainwaves
AT ramelerodrigo trainingagamingagentonbrainwaves
AT santosjuanmiguel trainingagamingagentonbrainwaves
_version_ 1765660880648798208