Programming Framework


Editor : Peter Van Roy

(If you have comments on any of the items below, feel free to edit the page and add them inline or as a new Wiki page! This page is not meant to cover only finished work but also work in progress.)


Self Management and the Future of Software Design (paper, talk slides in PDF)

(by Peter Van Roy) This paper introduces, with many examples, the idea of programming with feedback loops and events. It was an invited talk at FACS 06; I also gave the talk at the DG INFSO on Oct. 25, 2006.

Introduction to Reliable Distributed Computing

(by Rachid Guerraoui and Luis Rodrigues, Springer 2006) This book presents important distributed algorithms using a modular and compositional formalism based on events. We consider this book a good starting point for the work in this group. For course transparencies based on this book, see the Mini-course on reliable distributed programming.

Oz/K: A kernel language for component-based distributed programming

(by Michael Lienhardt, Alan Schmitt and Jean-Bernard Stefani) (INRIA RR)

Extended abstract to be presented at GPCE 2007 (6th ACM Int. Conf. on Generative Programming and Component Engineering) (paper)

The Agoric Papers

The Agoric Papers (Markets and Computation: Agoric Open Systems; Incentive Engineering for Computational Resource Management; and Comparative Ecology: A Computational Perspective), by Mark S. Miller and K. Eric Drexler. These papers are from 1988 and were published in the volume The Ecology of Computation, edited by Bernardo Huberman. They were influential in starting the area of agoric, or market-oriented, computing.

BAR Gossip (paper)

(by Harry C. Li, Allen Clement, Edmund L. Wong, Jeff Napper, Indrajit Roy, Lorenzo Alvisi, and Michael Dahlin)

General Principles of Learning-Based Multi-Agent Systems (paper)

(by David H. Wolpert, Kevin R. Wheeler, and Kagan Tumer) This paper gives an introduction to the principles of Collective Intelligence (COIN) using the El Farol Bar problem as an example.

Structure of self-managing systems

Systems are generally organized as sets of concurrent entities ("agents") that communicate through asynchronous message passing. An entity can be seen as an instance of a concurrent component or as an active object. Systems can be classified into layers, depending on how the agents are organized. The layers are as follows, in decreasing order of adaptability and freedom:

  • Open-ended market (any agent can join); arbitrary agent behavior is tolerated.
  • Market (a system where each agent optimizes a local utility function); one "will" per agent, and the system is adaptable.
  • Feedback loop architecture; one "will" per system, and the system is adaptable.
  • Multi-agent system with no particular structure; one "will" per system, and the system is not adaptable in general.

The architecture described in the "Self Management and the Future of Software Design" paper is the third element in this ranking, namely the feedback loop architecture. It holds when there is a single designer (a "will") designing the multiple agents.

Secure systems are the first layer of this classification. The system is secure because it tolerates arbitrary agent behavior. To make this work, the system provides a simple "market infrastructure". Here is the basic idea: the system enforces a conservation law using cryptographic protocols. For example, there can be a "currency" that is conserved. Services talk to each other and trade currency for results. The overall motor of the system is the external entities (such as humans) connecting to it who want results. Basically, they ask for results in exchange for currency, and the propagation of currency inside the system drives its execution. Agents can be cooperative or malicious. Malicious agents will waste currency, but they will not endure, because this will be detected (since they do not provide results!). This works on principles similar to Axelrod's Iterated Prisoner's Dilemma: good agents will flourish. It is similar to human markets, but simplified. Secure systems built in this way are not "hacks" but are fundamentally correct and robust against malicious interference.
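The core mechanism above (a conserved currency that agents trade for results) can be sketched in a few lines. This is only an illustration of the conservation-law idea; the `Ledger` class and its account names are hypothetical, and a real system would enforce the invariant with cryptographic protocols rather than an in-process assertion.

```python
# Minimal sketch of a conserved-currency ledger (hypothetical API).
# Transfers move currency between accounts but never create or destroy it,
# so the total supply is an invariant the infrastructure can check.

class Ledger:
    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)
        self.total = sum(self.balances.values())  # the conserved quantity

    def transfer(self, payer, payee, amount):
        """Pay `amount` from payer to payee; rejects overdrafts."""
        if amount <= 0 or self.balances.get(payer, 0) < amount:
            return False  # agents cannot mint currency or overspend
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        assert sum(self.balances.values()) == self.total  # conservation law
        return True

ledger = Ledger({"user": 100, "service_a": 0, "service_b": 0})
ledger.transfer("user", "service_a", 30)       # user buys a result
ledger.transfer("service_a", "service_b", 10)  # service_a subcontracts
print(ledger.balances)  # total is still 100
```

A malicious agent can waste the currency it receives, but it cannot replicate it, which is what makes the "wasted currency is eventually detected" argument work.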

Low-level support for self management in SONs

The low-level support is for monitors and actuators in the feedback loop architecture.

  • Monitors. Boriss Mejias and Yves Jaradin have implemented a prototype of belief propagation in P2PS. Belief propagation is an algorithm for computing the marginal probability distributions of stochastic variables, given their pairwise probability distributions (which can be modeled as an undirected graph). The prototype gives some encouraging results, but it remains an open question whether the SON finger graph is meaningful as a BP graph (that is, as a graph of pairwise probability distributions).
  • Actuators. The pseudo-reliable broadcast algorithm may be a good default actuator.
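To make the belief propagation bullet concrete, here is a minimal sum-product sketch on a tree-shaped pairwise model with three binary variables. The potentials and graph are invented for illustration; on a tree (unlike a general SON finger graph, where loops make BP approximate), the computed marginals are exact.

```python
# Minimal sum-product belief propagation on a toy tree-shaped pairwise model
# (three binary variables; the potentials below are hypothetical).

edges = [(0, 1), (1, 2)]
pairwise = {  # pairwise[(i, j)][x_i][x_j]: potential between nodes i and j
    (0, 1): [[2.0, 0.5], [0.5, 1.0]],
    (1, 2): [[1.0, 0.2], [0.2, 1.0]],
}
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def psi(i, j, xi, xj):
    """Potential value, handling either edge orientation."""
    if (i, j) in pairwise:
        return pairwise[(i, j)][xi][xj]
    return pairwise[(j, i)][xj][xi]

# messages[(i, j)][x_j]: message from node i to node j, initially uniform
messages = {(i, j): [1.0, 1.0] for e in edges for (i, j) in (e, e[::-1])}

for _ in range(5):  # a few synchronous sweeps converge on a tree
    new = {}
    for (i, j) in messages:
        m = [0.0, 0.0]
        for xj in range(2):
            for xi in range(2):
                prod = 1.0
                for k in neighbors[i]:
                    if k != j:
                        prod *= messages[(k, i)][xi]  # other incoming msgs
                m[xj] += psi(i, j, xi, xj) * prod     # sum over x_i
        z = sum(m)
        new[(i, j)] = [v / z for v in m]
    messages = new

def marginal(i):
    """Belief at node i: normalized product of all incoming messages."""
    b = [1.0, 1.0]
    for k in neighbors[i]:
        b = [b[x] * messages[(k, i)][x] for x in range(2)]
    z = sum(b)
    return [v / z for v in b]

print(marginal(0))  # -> [0.625, 0.375]
```

Running BP over a SON would replace this static graph with the finger graph and the toy potentials with measured pairwise distributions, which is exactly the open question noted above.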

Gossip algorithms and the BAR model. Another approach uses gossip algorithms, in which each node exchanges data ("gossips") with randomly selected peers. It is this randomness that gives gossip algorithms their robustness. Gossip algorithms can be used both to diffuse information (as an actuator) and to monitor global behavior. The gossip approach has recently been adapted to the BAR model, which combines Byzantine nodes, Altruistic nodes (which follow the protocol), and Rational nodes (which act selfishly according to a local utility function). This model may be a realistic account of the Internet behavior of large distributed systems.
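The diffusion use of gossip can be sketched with a simple push protocol: each round, every node that knows the rumor forwards it to one peer chosen uniformly at random. The parameters below are invented for illustration; real gossip protocols (and BAR Gossip in particular) add membership, verification, and incentive mechanisms on top of this core.

```python
import random

# Toy push-gossip sketch (hypothetical parameters): node 0 starts a rumor,
# and each round every informed node pushes it to one random peer.

random.seed(1)
N = 1000
informed = {0}
rounds = 0
while len(informed) < N:
    pushes = len(informed)           # every informed node gossips once
    for _ in range(pushes):
        informed.add(random.randrange(N))  # push to a uniformly random peer
    rounds += 1
print(rounds)  # dissemination completes in O(log N) rounds w.h.p.
```

The same randomness that spreads the rumor quickly is what makes the protocol hard for a few Byzantine nodes to stall: no node is a single point of failure on the dissemination path.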

Collective Intelligence

Collective Intelligence (COIN) is a design technique for multi-agent systems that makes it possible to build systems that achieve a global goal using selfish agents, each interested only in maximizing a local utility function. The paper General Principles of Learning-Based Multi-Agent Systems (paper) explains the basics of COIN with a classic example, the El Farol bar problem. In this problem, the agents are people who would like to spend an evening at a bar. The evening is a failure if there are too few people at the bar, but also if the bar is too crowded. Each agent remembers what happened the week before and chooses one day to go in the next week. How can we maximize the global utility when each agent acts selfishly (and certainly does not cooperate with the others!)? A naive algorithm whose utility is based on how many people attended last time gives very bad results (a "Tragedy of the Commons").

It turns out that a good utility function is the "Wonderful Life" utility (named after the Frank Capra movie, a classic you should see if you have not yet). The value of the utility for agent w is the global utility minus the global utility computed as if agent w were "disabled" (as if it did not exist). To make this computable locally, a simple reinforcement learning algorithm is used. Each agent keeps a weight for each day of the week, and each week it updates one of the weights according to its experience at the bar that week. This update is the "reward"; the utility is simply the sum of rewards over all the weeks. The day for the next week is picked randomly according to the weights (technically, the algorithm implements a Boltzmann distribution, with each weight acting as a Boltzmann energy).
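The scheme above can be sketched as a toy simulation. Everything quantitative here is an assumption for illustration, not taken from the paper: the per-day world utility phi(a) = a*exp(-a/6) (so each day is best with about 6 attendees), the Boltzmann temperature, and the incremental-averaging update. What the sketch does preserve is the key idea: each agent's reward is the Wonderful Life utility, the global utility minus the global utility with that agent removed.

```python
import math, random

# Toy El Farol sketch with the Wonderful Life utility (WLU).
# Hypothetical choices: phi(a) = a*exp(-a/6) per day, Boltzmann action
# selection over 7 days, incremental averaging of rewards as the weights.

random.seed(0)
DAYS, AGENTS, TEMP = 7, 42, 0.2

def phi(a):                      # world utility contributed by one day
    return a * math.exp(-a / 6.0)

def world_utility(attendance):   # global utility G
    return sum(phi(a) for a in attendance)

weights = [[0.0] * DAYS for _ in range(AGENTS)]  # Boltzmann "energies"

def pick_day(w):
    """Sample a day from the Boltzmann distribution over the weights."""
    exps = [math.exp(v / TEMP) for v in w]
    r, acc = random.random() * sum(exps), 0.0
    for d, e in enumerate(exps):
        acc += e
        if r <= acc:
            return d
    return DAYS - 1

for week in range(300):
    choices = [pick_day(w) for w in weights]
    attendance = [0] * DAYS
    for d in choices:
        attendance[d] += 1
    g = world_utility(attendance)
    for i, d in enumerate(choices):
        # WLU: global utility minus global utility with agent i "disabled"
        without = attendance[:]
        without[d] -= 1
        wlu = g - world_utility(without)
        weights[i][d] += 0.1 * (wlu - weights[i][d])  # incremental average

print(round(world_utility(attendance), 2))
```

Note that the WLU is locally computable: the difference g - world_utility(without) only involves the day the agent actually chose, so each agent needs only that day's attendance, not the whole system state.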

Mini-course on collective intelligence

The COIN techniques seem to be a good starting point for designing a security architecture for SELFMAN. The idea is that the SELFMAN infrastructure enforces local utility functions designed with COIN techniques. For example, the "reward" can be a "currency" designed to obey a conservation law and to follow the Wonderful Life utility. This means that selfish agents (which will be the most numerous) will, by design, help the system achieve its global goals. A mini-course, Collective intelligence, will be held at ZIB on Feb. 15-16, given by Mohamed El-Beltagy; it will cover COIN and related techniques (game theory, agoric systems, the theory of collectives, and implications for SELFMAN).

The "grey goo" problem in Second Life

The application "Second Life" allows people to trade objects and services in a virtual world. Its operators have recently been having problems with "grey goo": self-replicating objects that use up resources (CPU, memory, and network). There was a discussion on the e-lang mailing list (Jan. 2007) about this problem and how to solve it. One way to solve it is to use a conservation law: pay for resources. Then the grey goo cannot afford to replicate. This is the solution we could adopt in SELFMAN (the "agoric solution"). It seems that the Second Life developers implemented another solution: dynamic rate-limiting of object creation, along with monitoring and alerting tools that let the sysops identify and kill off any goo that manages to exploit the system.
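The rate-limiting idea can be illustrated with a standard token bucket. This is a generic sketch of the technique, not Second Life's actual implementation; the class, parameters, and numbers are all hypothetical.

```python
# Generic token-bucket sketch of rate-limiting object creation.
# Tokens refill at a fixed rate; each object creation spends one token,
# so replication faster than the refill rate is denied.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Return True if one object may be created at time `now`."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # creation denied: spawning faster than the refill rate

bucket = TokenBucket(rate=2.0, capacity=5)
# A "grey goo" object trying to spawn 20 copies instantly: only the burst passes.
results = [bucket.allow(now=0.0) for _ in range(20)]
print(results.count(True))  # -> 5
```

Compared with the agoric solution, the token bucket caps the replication rate but does not make replication cost anything, which is why monitoring and manual kill-off are still needed to clean up goo that stays under the limit.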

The Second Life experience is relevant for us: I think we should observe how they manage things. They are setting up a real economy, in some sense, and they have problems with pyramid schemes and other scams. You can take a look at the Second Life Anti-Griefing Guild. "Griefers" in Second Life are like Byzantine nodes: they are so called because they create grief, doing their best to disrupt proceedings in virtual worlds, often for no other reason than that it is possible.