Try again till you are satisfied: Convergence, outcomes and mean-field limits

Alain Tcheukam, Tembine Hamidou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This article examines the well-known distributed algorithm "try-again-till-you're-satisfied" in an opinion formation game. It shows that a simple learning algorithm, which consists of reacting only when unsatisfied based on an on/off observation, can provide a satisfactory solution. Learning takes place during the interactions of the game, in which the agents have no direct knowledge of the payoff model. Each agent can observe only its own satisfaction/dissatisfaction state and has one-step memory. Existing results linking the outcomes to the stationary satisfactory set do not apply to this situation because of the continuous action space. We provide a direct proof of convergence of the scheme for arbitrary initial conditions and an arbitrary number of agents. As the number of iterations grows, we show the emergence of a consensus in the opinion distribution of satisfied agents. A similar result holds for the mean-field opinion formation game.

Original language: English (US)
Title of host publication: Proceedings of the 28th Chinese Control and Decision Conference, CCDC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2641-2645
Number of pages: 5
ISBN (Electronic): 9781467397148
DOIs: https://doi.org/10.1109/CCDC.2016.7531429
State: Published - Aug 3 2016
Event: 28th Chinese Control and Decision Conference, CCDC 2016 - Yinchuan, China
Duration: May 28, 2016 to May 30, 2016

Other

Other: 28th Chinese Control and Decision Conference, CCDC 2016
Country: China
City: Yinchuan
Period: 5/28/16 to 5/30/16


Keywords

  • learning algorithm
  • model-free optimization
  • opinion dynamics

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Control and Optimization
  • Statistics, Probability and Uncertainty
  • Artificial Intelligence
  • Decision Sciences (miscellaneous)

Cite this

Tcheukam, A., & Hamidou, T. (2016). Try again till you are satisfied: Convergence, outcomes and mean-field limits. In Proceedings of the 28th Chinese Control and Decision Conference, CCDC 2016 (pp. 2641-2645). [7531429] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CCDC.2016.7531429

@inproceedings{22f15ef822744d8b986c3ed74e7e76b3,
title = "Try again till you are satisfied: Convergence, outcomes and mean-field limits",
abstract = "This article examines the well-known distributed algorithm "try-again-till-you're-satisfied" in an opinion formation game. It shows that a simple learning algorithm, which consists of reacting only when unsatisfied based on an on/off observation, can provide a satisfactory solution. Learning takes place during the interactions of the game, in which the agents have no direct knowledge of the payoff model. Each agent can observe only its own satisfaction/dissatisfaction state and has one-step memory. Existing results linking the outcomes to the stationary satisfactory set do not apply to this situation because of the continuous action space. We provide a direct proof of convergence of the scheme for arbitrary initial conditions and an arbitrary number of agents. As the number of iterations grows, we show the emergence of a consensus in the opinion distribution of satisfied agents. A similar result holds for the mean-field opinion formation game.",
keywords = "learning algorithm, model-free optimization, opinion dynamics",
author = "Alain Tcheukam and Tembine Hamidou",
year = "2016",
month = "8",
day = "3",
doi = "10.1109/CCDC.2016.7531429",
language = "English (US)",
pages = "2641--2645",
booktitle = "Proceedings of the 28th Chinese Control and Decision Conference, CCDC 2016",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Try again till you are satisfied

T2 - Convergence, outcomes and mean-field limits

AU - Tcheukam, Alain

AU - Hamidou, Tembine

PY - 2016/8/3

Y1 - 2016/8/3

N2 - This article examines the well-known distributed algorithm "try-again-till-you're-satisfied" in an opinion formation game. It shows that a simple learning algorithm, which consists of reacting only when unsatisfied based on an on/off observation, can provide a satisfactory solution. Learning takes place during the interactions of the game, in which the agents have no direct knowledge of the payoff model. Each agent can observe only its own satisfaction/dissatisfaction state and has one-step memory. Existing results linking the outcomes to the stationary satisfactory set do not apply to this situation because of the continuous action space. We provide a direct proof of convergence of the scheme for arbitrary initial conditions and an arbitrary number of agents. As the number of iterations grows, we show the emergence of a consensus in the opinion distribution of satisfied agents. A similar result holds for the mean-field opinion formation game.

AB - This article examines the well-known distributed algorithm "try-again-till-you're-satisfied" in an opinion formation game. It shows that a simple learning algorithm, which consists of reacting only when unsatisfied based on an on/off observation, can provide a satisfactory solution. Learning takes place during the interactions of the game, in which the agents have no direct knowledge of the payoff model. Each agent can observe only its own satisfaction/dissatisfaction state and has one-step memory. Existing results linking the outcomes to the stationary satisfactory set do not apply to this situation because of the continuous action space. We provide a direct proof of convergence of the scheme for arbitrary initial conditions and an arbitrary number of agents. As the number of iterations grows, we show the emergence of a consensus in the opinion distribution of satisfied agents. A similar result holds for the mean-field opinion formation game.

KW - learning algorithm

KW - model-free optimization

KW - opinion dynamics

UR - http://www.scopus.com/inward/record.url?scp=84983802054&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84983802054&partnerID=8YFLogxK

U2 - 10.1109/CCDC.2016.7531429

DO - 10.1109/CCDC.2016.7531429

M3 - Conference contribution

AN - SCOPUS:84983802054

SP - 2641

EP - 2645

BT - Proceedings of the 28th Chinese Control and Decision Conference, CCDC 2016

PB - Institute of Electrical and Electronics Engineers Inc.

ER -