Staff:

Prof. Dr. Yann Disser, Technische Universität Darmstadt
Dr. Max Klimm, Technische Universität Berlin

Webpage:

www.tu-berlin.de/?id=163770

Description:

The goal of this project is to deepen the understanding of algorithms that operate on very large networks and the dynamics that arise from the competition or cooperation between such algorithms. To achieve this goal, we want to combine models and techniques from the areas of graph exploration and algorithmic game theory. To date, the literature in these areas is mostly disjoint. By closing this gap, we hope to develop new insights into the important algorithmic and economic challenges faced in large networks, most prominently in those that are part of the Internet.

As a first step, we want to develop agent models tailored to software agents exploring the Internet. In particular, we consider agents that can store a small number of network nodes to which they can jump back later. It is interesting to analyze how such a model compares to existing agent models in the field of graph exploration, and whether it allows for efficient exploration algorithms. In addition, we plan to study exploration algorithms that balance exploration coverage against time and memory usage on an instance-by-instance basis.

Second, we aim to understand the benefit of cooperation between multiple agents and the difficulty of coordinating them. Again, we want to analyze how the ability to jump back affects the power of the agents. Motivated by the exploration of large networks, we plan to study collaborative exploration by small teams of agents with little memory relative to the size of the network.

As a main contribution of this project, we want to initiate the study of competitive exploration of large networks. Here, agents strive to maximize an individual objective function rather than pursuing a collective agenda. The competition between agents with conflicting interests is usually analyzed within the framework of non-cooperative games. A common assumption in the game-theoretic literature is that agents have full knowledge of all their strategic actions. This assumption is unrealistic in large data networks, where the behavior of agents is inherently local. To overcome this issue, we want to combine methodologies from graph exploration and game theory. We are interested in characterizing the circumstances under which the behavior of competing agents converges to a stable state. Further issues to be considered are the quality of the equilibria reached and the speed of convergence. We also plan to extend our analysis to settings where the payoff of an agent is received over time and the objective is to maximize the average payoff.

Going forward, we would like to combine the insights gained for cooperative and competitive exploration in order to understand the effects of competing teams of agents. In this setting, the agents of the same team cooperate on a common task and compete against other teams with conflicting interests.
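
To make the agent model sketched above more concrete, the following Python snippet is a minimal, purely illustrative sketch of a DFS-style exploration agent that may store up to k nodes as anchors and jump back to a stored anchor in a single move instead of retracing its walk edge by edge. The function name, the adjacency-dict representation of the graph, and the move-counting convention are assumptions made for this example; they are not part of the project description.

# Illustrative sketch only; the graph is an adjacency dict {node: [neighbours]}.
def explore_with_anchors(graph, start, k):
    """DFS-style exploration with at most k stored jump-back nodes (anchors)."""
    visited = {start}
    trail = [start]        # walked path from the start to the current position
    anchors = {start}      # stored nodes the agent may jump back to
    moves = 0

    while True:
        pos = trail[-1]
        fresh = [w for w in graph[pos] if w not in visited]
        if fresh:
            nxt = fresh[0]                 # walk one edge to an unvisited neighbour
            visited.add(nxt)
            trail.append(nxt)
            moves += 1
            if len(anchors) < k:
                anchors.add(nxt)           # remember this node as a jump-back target
            continue
        # Dead end: return to the most recent node on the trail that still
        # has an unvisited neighbour, either by a single jump (if that node
        # is a stored anchor) or by walking back along the trail.
        back = 0
        while trail and all(w in visited for w in graph[trail[-1]]):
            trail.pop()
            back += 1
        if not trail:
            return visited, moves          # every reachable node has been visited
        moves += 1 if trail[-1] in anchors else back

# Example: a path 0-1-2-3 with an extra branch 1-4; with k = 2 the agent
# jumps back to node 1 instead of walking back from node 3.
G = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(explore_with_anchors(G, 0, 2))       # ({0, 1, 2, 3, 4}, 5)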
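
Likewise, the convergence of competing agents to a stable state can be illustrated by best-response dynamics in a simple congestion game. The sketch below is again purely illustrative; the game, the cost functions, and the update rule are assumptions made for the example rather than a model from the project. The dynamics repeatedly let some agent switch to a resource that is cheaper for it and stop as soon as no agent can improve, i.e., at a pure Nash equilibrium.

# Illustrative sketch only: best-response dynamics in a singleton congestion
# game with agents 0..num_agents-1 and resources 0..num_resources-1, where
# cost[r] maps the number of users of resource r to the per-user cost on r.
from collections import Counter

def best_response_dynamics(num_agents, num_resources, cost):
    choice = [0] * num_agents              # every agent starts on resource 0
    load = Counter(choice)                 # load[r] = number of agents on r
    steps = 0
    improved = True
    while improved:                        # stops at a pure Nash equilibrium
        improved = False
        for i in range(num_agents):
            cur = choice[i]

            def switch_cost(r):
                # Cost agent i would pay on r (it is already counted on cur).
                return cost[r](load[r] + (0 if r == cur else 1))

            best = min(range(num_resources), key=switch_cost)
            if switch_cost(best) < switch_cost(cur):
                load[cur] -= 1             # agent i moves to the cheaper resource
                load[best] += 1
                choice[i] = best
                improved = True
                steps += 1
    return choice, steps

# Example: four agents and two identical linear resources; the dynamics
# settle into a 2/2 split after two improving moves.
print(best_response_dynamics(4, 2, {0: lambda x: x, 1: lambda x: x}))

Termination of these dynamics is guaranteed because singleton congestion games admit Rosenthal's potential function, which strictly decreases with every improving move. The questions raised above concern, among other things, how quickly such dynamics converge and how good the resulting equilibria are when agents only have local knowledge of the network.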
