Distributed Estimation and Optimization

We have addressed the problem of distributed parameter estimation in a set-membership framework. In the proposed algorithm, each agent collects noisy measurements of the unknown parameter. Under the assumption of bounded noise, a feasible set containing the unknown parameter can be defined. The proposed distributed algorithm alternates between a local projection step and a consensus step that merges the estimates of neighboring nodes. It is shown that the estimates of all agents converge to the same element of the feasible set. The extension to the case of a time-varying communication graph is also considered.
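The sketch below illustrates the projection-plus-consensus idea on a toy instance, not the exact algorithm from the publications: each agent holds one linear measurement y_i = a_i^T theta + v_i with |v_i| <= eps, so its local feasible set is a slab, and the agents alternate a Euclidean projection onto that slab with a consensus averaging step. The problem data, the ring communication graph, the consensus weights, and the helper `project_slab` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n agents, each with one linear measurement
#   y_i = a_i^T theta + v_i,  |v_i| <= eps  (bounded noise),
# so agent i's local feasible set is the slab |a_i^T x - y_i| <= eps.
n, d, eps = 10, 3, 0.1
theta_true = rng.standard_normal(d)
A = rng.standard_normal((n, d))
y = A @ theta_true + rng.uniform(-eps, eps, size=n)

def project_slab(x, a, y_i, eps):
    """Euclidean projection of x onto the slab |a^T x - y_i| <= eps."""
    r = a @ x - y_i
    if abs(r) <= eps:
        return x
    return x - (r - np.sign(r) * eps) / (a @ a) * a

# Doubly stochastic consensus weights on a ring communication graph
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

# Alternate the local projection step and the consensus step
X = rng.standard_normal((n, d))        # initial local estimates
for _ in range(500):
    X = np.array([project_slab(X[i], A[i], y[i], eps) for i in range(n)])
    X = W @ X                          # each agent averages with neighbors

print("spread across agents:", np.ptp(X, axis=0).max())
viol = np.maximum(np.abs(A @ X.T - y[:, None]) - eps, 0.0)
print("max constraint violation:", viol.max())
```

On this toy instance the spread across agents and the constraint violation both shrink toward zero, consistent with the claim that all estimates converge to a common element of the feasible set.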

We have proposed a fully asynchronous and distributed approach for tackling nonconvex optimization problems. In the considered setting, each node has access only to a portion of the objective function and to a subset of the constraints. When awake, each node performs either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Switching between the descent and ascent steps is done in a fully asynchronous and distributed fashion. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian. The proposed technique has been successfully applied to extend the Learning from Constraints framework to a distributed setting in which agents connected over a network cooperate in the learning process.
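A minimal sketch of the asynchronous descent/ascent pattern is given below. It is not the algorithm from the publications: for readability it uses convex quadratic local costs (rather than a nonconvex objective) on local copies x_i coupled by consensus constraints x_i = x_{i+1} over a ring, and a randomly awakened node either takes a gradient descent step on its block of the augmented Lagrangian or an ascent step on the multiplier of the edge it owns. The problem instance, step sizes, wake-up probabilities, and the helper `grad_L` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: n nodes on a ring, each with a private cost
# f_i(x) = (x - c_i)^2 on its own copy x_i of the decision variable, and
# consensus constraints x_i = x_{i+1} handled via an augmented Lagrangian.
n, rho, alpha = 8, 0.5, 0.05
c = rng.standard_normal(n)             # private cost centers
x = np.zeros(n)                        # local primal copies
lam = np.zeros(n)                      # multiplier for edge (i, i+1)

def grad_L(i):
    """Gradient of the global augmented Lagrangian w.r.t. block x_i."""
    j, k = (i + 1) % n, (i - 1) % n    # ring neighbors of node i
    g = 2.0 * (x[i] - c[i])            # local cost gradient
    g += lam[i] + rho * (x[i] - x[j])  # edge (i, j): node i is the head
    g += -lam[k] + rho * (x[i] - x[k]) # edge (k, i): node i is the tail
    return g

for _ in range(20000):
    i = rng.integers(n)                # a random node wakes up
    if rng.random() < 0.8:             # primal descent on its local block
        x[i] -= alpha * grad_L(i)
    else:                              # dual ascent on its edge multiplier
        lam[i] += rho * (x[i] - x[(i + 1) % n])

print("consensus spread:", np.ptp(x))          # should approach 0
print("estimate vs. optimum:", x.mean(), c.mean())
```

Because the awakened node updates a single primal block or multiplier of the global augmented Lagrangian while all other variables stay fixed, the asynchronous loop behaves as a randomized block coordinate descent on that function, which is the equivalence stated above.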

You can find more information in the following publications.

References