01 February 2021 - Yifei Xie, Miguel Lucero Mendez & Georgios Papoudakis

 

Speaker: Yifei Xie

Title: A Stochastic Programming Partitioning Optimization Model for Parallel BFT Consensus Algorithms

Abstract: The latest blockchain systems require high performance, especially at a large scale of network peers. To address performance and scalability issues, an effective solution is to partition the peer network into multiple small committees and run the consensus process in parallel. When partitioning the peer set, it is essential to decide optimally how many committees to form and how to allocate peers to each committee, since the committee configuration determines the system's transaction throughput. My work therefore focuses on applying a Stochastic Programming (SP) optimization model to partition the peer network optimally, with the aim of improving performance in the multi-committee parallel consensus system.
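The committee-size trade-off can be sketched with a toy brute-force search (an illustration of the partitioning problem only, not the talk's actual SP model; the quadratic cost model, the `base_rate` constant, and the equal-size assumption are all mine):

```python
# Toy committee-partitioning search (illustrative; not the talk's SP model).
# Assumptions: n peers split into k equal committees, intra-committee BFT
# communication cost grows with size^2, committees process transactions in
# parallel, and each committee needs >= 3f+1 peers to tolerate f faults.

def throughput(n_peers, k, base_rate=1000.0):
    """Modeled total tx/s with k equal committees."""
    size = n_peers // k
    per_committee = base_rate / (size ** 2)   # bigger committees are slower
    return k * per_committee                  # committees work in parallel

def best_partition(n_peers, f=1):
    """Brute-force the committee count that maximizes modeled throughput."""
    min_size = 3 * f + 1                      # classic BFT quorum requirement
    best_k, best_tp = 1, throughput(n_peers, 1)
    for k in range(2, n_peers // min_size + 1):
        tp = throughput(n_peers, k)
        if tp > best_tp:
            best_k, best_tp = k, tp
    return best_k, best_tp
```

Under this deterministic toy model, more committees always help until the minimum-size constraint binds; a real SP formulation would instead optimize over stochastic workload and failure scenarios.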

Bio: I am working on performance optimization of distributed systems. The project aims to adopt mathematical optimization methods to maximize the speed of Byzantine fault tolerance (BFT) and similar consensus algorithms.

 

 

Speaker: Miguel Lucero Mendez

Abstract: In a seminal book, Minsky and Papert define the perceptron as a limited implementation of what they called “parallel machines”. They showed that some binary Boolean functions, including XOR, are not definable in a single-layer perceptron due to its limited capability to learn only linearly separable functions. In this work, we propose a new, more powerful implementation of such parallel machines. This new mathematical machinery is defined by using analytic sinusoids instead of linear combinations to form an analytic signal representation of the function that we want to learn. We show that this reformulated parallel mechanism allows us to learn not only the binary XOR function but, more generally, any k-ary Boolean function in just a single layer.
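The core intuition can be shown with a minimal sketch (my own toy example, not the paper's exact analytic-signal construction): a single unit whose response is a cosine of the input sum computes XOR exactly, because the periodic nonlinearity folds the non-linearly-separable inputs onto separable phases.

```python
import math

# Toy single-unit "sinusoidal perceptron" (illustrative only): a cosine
# response over the input sum computes XOR, which no single linear-threshold
# unit can. Output is (1 - cos(pi * (x1 + x2))) / 2, rounded to an integer.

def sin_unit(x1, x2):
    """One layer, one unit with a sinusoidal activation."""
    return round((1 - math.cos(math.pi * (x1 + x2))) / 2)

for a in (0, 1):
    for b in (0, 1):
        assert sin_unit(a, b) == (a ^ b)   # matches XOR on all four inputs
```

The sum x1 + x2 takes values 0, 1, 2; the cosine maps 0 and 2 to the same output and 1 to the opposite one, which is exactly the XOR pattern.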

 

Speaker: Georgios Papoudakis

Title: Local Information Agent Modelling Using Variational Autoencoders

Abstract: Modelling the behaviours of other agents is essential for understanding how agents interact and for making effective decisions. Existing methods for agent modelling commonly assume knowledge of the local observations and chosen actions of the modelled agents during execution, which can significantly limit their applicability. We propose a new modelling technique based on variational autoencoders, which are trained to reconstruct the local actions and observations of the other agents from embeddings that depend only on the local information of the modelling agent (its observed world state, chosen actions, and received rewards). The embeddings are used to augment the modelling agent's decision policy, which is trained via deep reinforcement learning; thus, the policy does not require access to other agents' observations. We provide a comprehensive evaluation and ablation study in diverse multi-agent tasks, showing that our method achieves performance comparable to an ideal baseline with full access to the other agents' information, and significantly higher returns than a baseline that does not use the learned embeddings.
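The embedding pipeline described above can be sketched in a few lines of NumPy (all sizes, weight initialisations, and names here are my own illustrative assumptions, standing in for the paper's trained networks): an encoder maps the modelling agent's local information to a Gaussian embedding via the reparameterisation trick, and the sampled embedding is concatenated with the observation before it reaches the policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's architecture)
OBS_DIM, ACT_DIM, EMB_DIM = 8, 4, 16
IN_DIM = OBS_DIM + ACT_DIM + 1            # local obs + own action + reward

# Randomly initialised encoder weights standing in for trained parameters
W_mu = rng.normal(0, 0.1, (IN_DIM, EMB_DIM))
W_logvar = rng.normal(0, 0.1, (IN_DIM, EMB_DIM))

def encode(obs, act, reward):
    """Map the modelling agent's local information to a Gaussian embedding."""
    x = np.concatenate([obs, act, [reward]])
    mu, logvar = x @ W_mu, x @ W_logvar
    # Reparameterisation trick: sample z = mu + sigma * eps, so that in a
    # real autodiff framework gradients flow through mu and logvar.
    eps = rng.standard_normal(EMB_DIM)
    return mu + np.exp(0.5 * logvar) * eps

def policy_input(obs, act, reward):
    """Augment the policy input with the embedding; the policy itself never
    sees the other agents' observations or actions."""
    z = encode(obs, act, reward)
    return np.concatenate([obs, z])
```

During training, a decoder would reconstruct the other agents' local actions and observations from z; only the embedding path into the policy is sketched here.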

Bio: I am a PhD student in the Robotics and Autonomous Systems CDT, supervised by Dr. Stefano Albrecht. I am interested in self-supervised methods for agent modelling in multi-agent systems.

 

 

 

 



AIAI Seminar talk hosted by Yifei Xie, Miguel Lucero Mendez & Georgios Papoudakis

Online