ME597 – Homework #6 Solved

AA 597: Networked Dynamic Systems
Prof. Mehran Mesbahi
Soowhan Yi
All the code is available at the end of the document or here: https://github.com/SoowhanYi94/ME597

P1. 3.8 Consider the uniformly delayed agreement dynamics over a weighted graph, specified as

\dot{x}_i(t) = \sum_{j \in N(i)} w_{ij} (x_j(t - \tau) - x_i(t - \tau))
for some τ > 0 and i = 1, ..., n. Show that this delayed protocol is stable if

\tau < \frac{\pi}{2 \lambda_n(G)},

where λn(G) is the largest eigenvalue of the corresponding weighted Laplacian. Conclude that, for the delayed agreement protocol, there is a tradeoff between faster convergence rate and tolerance to uniform delays on the information-exchange links.
Since the sum of the state derivatives is zero (\sum_{i \in V} \dot{x}_i(t) = 0), the average of the agent values is invariant under the protocol, so when the protocol is stable it reaches average consensus. Taking the Laplace transform of the delayed agreement protocol gives

X(s) = (sI + e^{-\tau s} L)^{-1} x(0),

where L is the weighted Laplacian and the factor e^{-\tau s} is the Laplace-domain representation of the uniform delay τ. The protocol is stable exactly when every pole of the corresponding transfer function lies in the left half of the complex plane.
G(s) = (sI + e^{-\tau s} L)^{-1}

The poles are the roots of the characteristic equation. Writing the eigendecomposition L = Q \Lambda Q^{-1} (with Q orthogonal, since L is symmetric),

sI + e^{-\tau s} L = Q (sI + e^{-\tau s} \Lambda) Q^{-1},

so the characteristic roots satisfy s + e^{-\tau s} \lambda_i = 0 for each eigenvalue λi.
Using frequency-domain analysis, we can substitute s = jω on the stability boundary. Therefore,

s = -e^{-\tau s} \lambda = -e^{-j\tau\omega} \lambda = -(\cos(\tau\omega) - j \sin(\tau\omega)) \lambda.

Since the real part of s should be non-positive, we need \cos(\tau\omega) \ge 0. With \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, ..., \lambda_n), where 0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n, the critical mode is the largest eigenvalue λn. Substituting s = jω into s + e^{-\tau s} \lambda_n = 0 and taking its complex conjugate yields
j\omega + e^{-j\tau\omega} \lambda_n = 0, \qquad -j\omega + e^{j\tau\omega} \lambda_n = 0.

Multiplying these two equations yields

(j\omega + e^{-j\tau\omega} \lambda_n)(-j\omega + e^{j\tau\omega} \lambda_n) = \omega^2 + \lambda_n^2 + j\omega\lambda_n (e^{j\tau\omega} - e^{-j\tau\omega}) = \omega^2 + \lambda_n^2 - 2\omega\lambda_n \sin(\tau\omega)

\therefore (\omega - \lambda_n)^2 + 2\omega\lambda_n (1 - \sin(\tau\omega)) = 0.
Since ω ≥ 0, τ ≥ 0, and λn ≥ 0, both non-negative terms must vanish: ω = λn and \sin(\tau\omega) = 1. Therefore the smallest τ that satisfies the above equations is

\tau^* = \frac{\pi}{2 \lambda_n(G)}.

Therefore, if \tau < \frac{\pi}{2 \lambda_n(G)}, the delayed agreement protocol is stable.
Also, according to the Geršgorin disc theorem, \lambda_n \le 2 d_{\max}(G), where d_{\max}(G) is the maximum weighted degree of a node in the graph. Therefore \tau < \frac{\pi}{4 d_{\max}(G)} is a sufficient condition for stability. We can infer that tolerating a larger time delay forces a smaller maximum degree (i.e., smaller weights), which slows convergence; conversely, increasing the weights for faster convergence shrinks the tolerable delay. Therefore there is a tradeoff between a faster convergence rate and tolerance to uniform time delays.
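As a numerical sanity check on this bound (my own sketch, not part of the original problem; the 5-node path graph, unit weights, simulation horizon, and forward-Euler discretization are all assumptions):

import numpy as np
import networkx as nx

# Forward-Euler simulation of x_dot(t) = -L x(t - tau) on an assumed path graph.
graph = nx.path_graph(5)
L = nx.laplacian_matrix(graph).toarray().astype(float)
lam_n = np.max(np.linalg.eigvalsh(L))
tau_star = np.pi / (2 * lam_n)  # predicted critical delay

dt = 1e-3
for tau in (0.5 * tau_star, 1.5 * tau_star):
    d = int(round(tau / dt))                   # delay measured in time steps
    x0 = np.random.default_rng(0).uniform(0, 10, size=5)
    hist = [x0.copy() for _ in range(d + 1)]   # constant initial history
    for _ in range(int(20 / dt)):              # simulate 20 time units
        hist.append(hist[-1] + dt * (-L @ hist[-1 - d]))
    spread = np.max(hist[-1]) - np.min(hist[-1])
    print(f"tau/tau* = {tau / tau_star:.1f}: final disagreement = {spread:.2e}")

Below the bound the disagreement shrinks toward zero; above it, the states oscillate with growing amplitude.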
P2. 3.9 A matrix M is called essentially non-negative if there exists a sufficiently large µ such that M + µI is non-negative, that is, all its entries are non-negative. Show that e^{tM} for an essentially non-negative matrix M is non-negative when t ≥ 0.
If M is essentially non-negative, then there exists a sufficiently large µ such that M + µI is non-negative. Let N = M + µI ≥ 0. Since N and µI commute,

e^{tM} = e^{t(N - \mu I)} = e^{tN} e^{-\mu t}.

For t ≥ 0, e^{tN} = \sum_{k \ge 0} \frac{t^k N^k}{k!} is entrywise non-negative, since every term of the series is a product of non-negative quantities, and the scalar e^{-\mu t} > 0. Therefore e^{tM} = e^{tN} e^{-\mu t} ≥ 0.
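A quick numerical check (a sketch, assuming SciPy is available; the choice M = -L(G) is an assumption, made because a negated Laplacian has non-negative off-diagonal entries and is therefore essentially non-negative):

import numpy as np
import networkx as nx
from scipy.linalg import expm

L = nx.laplacian_matrix(nx.cycle_graph(6)).toarray().astype(float)
M = -L  # essentially non-negative: M + mu*I >= 0 for mu >= max degree
for t in (0.1, 1.0, 10.0):
    Et = expm(t * M)
    # entrywise non-negativity, up to floating-point round-off
    print(t, np.all(Et >= -1e-12))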
P3. 3.16 Consider a network of n processors, where each processor has been given an initial computational load to process. However, before the actual processing occurs, the processors go through an initialization phase, where they exchange certain fractions of their loads with their neighbors in the network. Specifically, during this phase, processor i adopts the load-update protocol
p_i(k+1) = p_i(k) - \sum_{j \in N(i)} \omega_{ij} (p_i(k) - p_j(k))

for k = 0, 1, ..., that is, it sends to each of its neighbors a fraction ω_ij of its load imbalance with that neighbor. What is the necessary and sufficient condition on the weights ω_ij in the above equation such that this initialization phase converges to a balanced load for all processors when the network is (1) a path graph, (2) a cycle graph, or (3) a star graph?
Lemma 8.1 (Necessary Condition)
Let the weight matrix be W = diag(ω_{ij}) over all pairs (i, j) ∈ E. Then it is necessary that the graph be connected and that the largest eigenvalue of the weighted Laplacian be less than 2 in order for the network to converge to a balanced load for all processors.
p_i(k+1) = p_i(k) - \sum_{j \in N(i)} \omega_{ij} (p_i(k) - p_j(k))

p(k+1) = p(k) - L_\omega(G) p(k) = (I - L_\omega(G)) p(k)

Let M_\omega(G) = I - L_\omega(G); then

p(k+1) = M_\omega(G) p(k) = M_\omega(G)^{k+1} p(0)
Since L_\omega(G) is symmetric, it admits the eigendecomposition L_\omega(G) = Q \Lambda(L_\omega(G)) Q^{-1}, where Q = [q_1, q_2, ..., q_n] is the matrix of eigenvectors and \Lambda(L_\omega(G)) = \mathrm{diag}(\lambda_1, \lambda_2, ..., \lambda_n). Then

M_\omega(G) = I - L_\omega(G) = Q (I - \Lambda(L_\omega(G))) Q^{-1},

so the eigenvalues of M_\omega(G) are \lambda_i(M_\omega(G)) = 1 - \lambda_i(L_\omega(G)), and

\lim_{k \to \infty} p(k) = \lim_{k \to \infty} Q (I - \Lambda(L_\omega(G)))^k Q^{-1} p(0).
Therefore |1 - \lambda_i(L_\omega(G))| must be at most 1 for every eigenvalue, with the value 1 attained only by the zero eigenvalue of the Laplacian (which is simple when the graph is connected, and whose eigenvector 1 carries the invariant average load). Equivalently, the largest eigenvalue of the weighted Laplacian of the graph must be less than 2; then the iteration converges to the balanced load for all processors. The step size ∆ in the original equation is omitted because ∆ = 1 in this problem.
Corollary 8.2 (Sufficient Condition)
If each edge weight is chosen as the inverse of the maximum of the degrees of the two nodes incident to that edge, the largest eigenvalue of the weighted Laplacian is guaranteed to be at most 2:

W_{jj} = (\max(d(u), d(v)))^{-1},

where edge uv is the jth edge and W_{jj} is the weight associated with that edge.
Under this rule, (1) in a path graph every edge is incident to an interior node of degree 2, so every edge receives weight 1/2 (for n > 2); (2) in a cycle graph all nodes have degree 2, so again every edge receives weight 1/2; and (3) in a star graph every edge joins the center (degree n - 1) to a leaf, so every edge receives weight 1/(n - 1). In each case the weighted Laplacian satisfies the eigenvalue bound, so the initialization phase converges to a balanced load.
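The following sketch (my own check, not from the text; n = 7 is an arbitrary choice) computes these weights for the three graphs and verifies the eigenvalue bound:

import numpy as np
import networkx as nx

n = 7
graphs = {"path": nx.path_graph(n),
          "cycle": nx.cycle_graph(n),
          "star": nx.star_graph(n - 1)}
for name, g in graphs.items():
    # max-degree weighting: w_uv = 1 / max(d(u), d(v))
    for u, v in g.edges():
        g[u][v]["weight"] = 1.0 / max(g.degree(u), g.degree(v))
    lam_max = np.max(nx.laplacian_spectrum(g, weight="weight"))
    print(f"{name}: lambda_max = {lam_max:.4f}")  # all strictly below 2 here
# Note: an even cycle attains lambda_max = 2 exactly, the marginal case.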
P4. 8.1 Let H_i, i = 1, 2, 3, be the rows of the 3 × 3 identity matrix in the observation scheme z_i = H_i x + v_i for a three-node sensor network, observing state x ∈ R^3. It is assumed that the nodes form a path graph and that v_i is a zero-mean, unit-variance, Gaussian noise. Choose the weighting matrix W (8.8) and the step size ∆ in (8.20)-(8.21), conforming to the condition (8.14). Experiment with the selection of the weights for a given value of ∆ and their effect on the convergence properties of the distributed least squares estimation (8.20)-(8.21).

[Figures: for each weight setting below, two plots are produced with ∆ = 0.1 — the node estimates \hat{\theta}(t) versus time and the error-covariance diagonals P_i(t) versus time.]

weight = [1, 1]:  ∆ρ(L(G)) = 0.3
weight = [2, 1]:  ∆ρ(L(G)) = 0.4732
weight = [3, 1]:  ∆ρ(L(G)) = 0.6646
weight = [4, 1]:  ∆ρ(L(G)) = 0.8606
weight = [5, 1]:  ∆ρ(L(G)) = 1.0583
weight = [6, 1]:  ∆ρ(L(G)) = 1.2568
weight = [7, 1]:  ∆ρ(L(G)) = 1.4557
weight = [8, 1]:  ∆ρ(L(G)) = 1.655
weight = [9, 1]:  ∆ρ(L(G)) = 1.8544
weight = [10, 1]: ∆ρ(L(G)) = 2.053
As these experiments show, increasing the disproportion between the edge weights increases ∆ρ(L(G)); convergence slows as ∆ρ(L(G)) approaches 2, and the estimates begin to diverge once it exceeds 2, consistent with the step-size condition (8.14).
P5. 10.8 If the network is connected, then the followers will end up (asymptotically) at
x_f = -A_f^{-1} B_f x_l

given the static leader positions x_l. Show that each component of x_f above is in fact given by a convex combination of the components of x_l.
By definition, a convex combination of the components of x_l is \sum_{i \in l} \alpha_i x_{l_i}, where \sum_{i \in l} \alpha_i = 1 and \alpha_i \ge 0. For a connected network, A_f is positive definite and hence invertible; moreover, A_f has non-positive off-diagonal entries, so it is a nonsingular M-matrix and its inverse A_f^{-1} is entrywise non-negative. Next consider B_f. With the incidence matrix partitioned as D = [D_f; D_l], we have B_f = D_f D_l^T, so entry (i, j) of B_f equals -1 if follower i is adjacent to leader j and 0 otherwise; the entries of row i sum to minus the number of leaders adjacent to follower i. For a connected graph at least one follower is adjacent to a leader, and every entry of B_f is either negative or 0, so -A_f^{-1} B_f is entrywise non-negative. Finally, since each row of the graph Laplacian sums to zero, A_f 1 + B_f 1 = 0, so -A_f^{-1} B_f 1 = 1: each row of -A_f^{-1} B_f sums to one. Therefore each component of x_f is a convex combination of the components of x_l.
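A quick numerical check of both properties (a sketch; the 8-node connected Watts-Strogatz graph, unit edge weights, and the follower/leader split are arbitrary assumptions):

import numpy as np
import networkx as nx

g = nx.connected_watts_strogatz_graph(8, 4, 0.3, seed=1)
L = nx.laplacian_matrix(g).toarray().astype(float)
followers, leaders = [0, 1, 2, 3, 4, 5], [6, 7]
A_f = L[np.ix_(followers, followers)]
B_f = L[np.ix_(followers, leaders)]
W = -np.linalg.solve(A_f, B_f)        # -A_f^{-1} B_f
print(np.all(W >= -1e-12))            # entrywise non-negative
print(np.allclose(W.sum(axis=1), 1))  # rows sum to one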
P6. 10.9 Consider the linear-quadratic optimal control problem
\min_u \int_0^\infty \big(u(t)^T R\, u(t) + x(t)^T Q\, x(t)\big)\, dt
where the matrices Q and R are, respectively, positive semidefinite and positive definite, and
\dot{x}(t) = -A_f x(t) - B_f u(t)
corresponds to a controllable network.
Now, just because the followers execute a decentralized control strategy it does not follow that the leaders’ optimal control strategy will be decentralized. Determine if this is the case for this infinite horizon optimal control problem.
For the linear-quadratic optimal control problem, the feedback control law is given by

u = -R^{-1} B^T P^+ x,

where P^+ is the positive-definite solution of the algebraic Riccati equation (ARE)

A^T P + P A - P B R^{-1} B^T P + Q = 0,

with A = -A_f and B = -B_f here, and the closed-loop dynamics are

\dot{x}(t) = (A - B R^{-1} B^T P^+) x(t).
The solution P^+, and hence the gain R^{-1} B^T P^+, is dense in general, so computing the control law requires the whole, global state x of the network. Therefore the leaders cannot have a decentralized optimal control strategy: it requires global information about all the nodes in the network.
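To see this concretely, a sketch (assuming SciPy is available; the path graph, the single leader, and the choices Q = I, R = I are my own) solves the ARE and inspects the resulting gain:

import numpy as np
import networkx as nx
from scipy.linalg import solve_continuous_are

L = nx.laplacian_matrix(nx.path_graph(5)).toarray().astype(float)
followers, leaders = [0, 1, 2, 3], [4]
A = -L[np.ix_(followers, followers)]   # A = -A_f
B = -L[np.ix_(followers, leaders)]     # B = -B_f
Q, R = np.eye(len(followers)), np.eye(len(leaders))
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # u = -K x
print(np.round(K, 3))                  # dense row: every follower state appears

The gain row has no zero entries, so the leader's input depends on every follower's state, not just its neighbors'.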
Code P4
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt


def main():
    num = 3
    graph = nx.path_graph(num)
    delta = 0.1
    for j in range(10):
        weight = {(0, 1): 1 + j, (1, 2): 1}
        iteration = 100  # number of iterations to simulate
        time_lim = iteration * delta
        nx.set_edge_attributes(graph, weight, "weight")
        max_eigen_val = np.max(nx.laplacian_spectrum(graph, weight="weight"))
        L_w_G = nx.laplacian_matrix(graph, weight="weight").toarray()
        del_max_eig = round(delta * max_eigen_val, 4)
        M_G = np.eye(num) - delta * L_w_G  # iteration matrix I - Delta * L_w(G)
        true_value = np.random.randint(0, 100, size=num)
        avg_true_value = np.average(true_value)
        H = np.eye(num)  # H_i are the rows of the identity
        t = np.linspace(0, time_lim, iteration)
        title = rf"$\Delta\rho(L(G)) = {del_max_eig}$, weight = {list(weight.values())}"
        for i in range(num):  # repeat for num independent noise realizations
            rand_noise = np.random.normal(0, 1, size=num)
            z = H @ true_value + rand_noise       # measurements z_i = H_i x + v_i
            P = np.eye(num * num)                 # stacked error covariance
            estimates, P_mat = [], []
            for k in range(iteration):
                z = M_G @ z                       # consensus update (8.20)-(8.21)
                P = np.kron(M_G, np.eye(num)) @ P # covariance propagation
                estimates.append(z)
                P_mat.append(np.diag(P))
            plt.figure(2 * j)
            plt.plot(t, estimates)
            plt.plot(t, [avg_true_value] * iteration)
            plt.xlabel("time t")
            plt.ylabel(r"$\hat{\theta}(t)$")
            plt.title(title)
            plt.figure(2 * j + 1)
            plt.plot(t, P_mat)
            plt.xlabel("time t")
            plt.ylabel(r"$P_i(t)$")
            plt.title(title)
    plt.show()


if __name__ == "__main__":
    main()
