Brief paper
Distributed model predictive control—Recursive feasibility under inexact dual optimization☆
Abstract
We propose a novel model predictive control (MPC) formulation that ensures recursive feasibility, stability and performance under inexact dual optimization. Dual optimization algorithms offer a scalable solution and can thus be applied to large distributed systems. Due to constraints on communication or limited computational power, most real-time applications of MPC have to deal with inexact minimization. We propose a modified optimization problem, inspired by robust MPC, which offers theoretical guarantees despite inexact dual minimization. The approach is not tied to any particular optimization algorithm, but only assumes that the feasible optimization problem can be solved with bounded suboptimality and constraint violation. In combination with a distributed dual gradient method, we obtain a priori upper bounds on the number of required online iterations. The design and practicality of this method are demonstrated with a benchmark numerical example.
Keywords
Predictive control
Control of constrained systems
Large scale systems
Distributed dual optimization
1. Introduction
Model predictive control (MPC) is a well-established control method that can be used to control complex dynamical systems and guarantee constraint satisfaction (Rawlings & Mayne, 2009). One of the main limitations of MPC is its computational burden, since at each time step an optimization problem has to be solved. In order to apply MPC to large-scale systems, we have to consider distributed approaches, which fall into the domain of distributed MPC (DMPC) (Maestre et al., 2014, Müller and Allgöwer, 2017). If we want to facilitate DMPC applications to fast (physically) interconnected networks, we typically need scalable distributed optimization algorithms with bounds on the number of required iterations.
Dual optimization algorithms such as the alternating direction method of multipliers (ADMM), dual gradient methods and proximal decomposition have been studied to solve DMPC optimization problems online (Kögel and Findeisen, 2012, Necoara and Nedelcu, 2015, Necoara and Suykens, 2008). While these algorithms enable a fully distributed implementation and asymptotically converge to the optimal central solution, real-time requirements lead to early termination and an inexact solution. Contrary to primal decomposition methods (Stewart, Venkat, Rawlings, Wright, & Pannocchia, 2010), these inexact solutions based on dual optimization do not necessarily satisfy the posed constraints (dynamic, state and input constraints) in the MPC optimization problem. This necessitates additional modifications to ensure recursive feasibility and stability of the resulting MPC scheme.
Related work
In Giselsson and Rantzer (2014) DMPC without terminal constraints is investigated and a sufficient stopping condition for the distributed iteration based on a candidate solution is presented. For this approach no prior bound on the number of required iterations can be given.
In Kögel and Findeisen (2014) a primal optimization algorithm with constraint violations in the dynamic equality constraints is investigated. Recursive feasibility is ensured with an appropriate state and input constraint tightening.
In Necoara, Ferranti, and Keviczky (2015) and Rubagotti, Patrinos, and Bemporad (2014) constraint violations in the inequality constraints due to inexact dual optimization are addressed with an appropriate (constant or adaptive) constraint tightening. Constraint violations in the posed dynamic equality constraints are avoided by using a condensed formulation (Necoara et al., 2015) or projecting the intermediate solution to the set of dynamically feasible trajectories (Rubagotti et al., 2014). Both approaches are, however, unsuited for distributed large-scale systems.
In Ferranti and Keviczky (2015) constraint violations in inequality constraints and dynamic equality constraints are considered by using an appropriate constraint tightening. Recursive feasibility is ensured by choosing the tolerance and thus the constraint tightening adaptively. As a consequence, the number of iterations can vary and global communication is required to enable this adaptation. In Doan, Keviczky, and Schutter (2011) a similar constraint tightening is used for a distributed hierarchical MPC scheme.
Contribution
We propose a new framework to ensure recursive feasibility of inexact DMPC resulting from finitely many dual iterations. It consists of a constant constraint tightening and a stabilizing controller, motivated by robust MPC (Chisci, Rossiter, & Zappa, 2001). To avoid an overly conservative constraint tightening, we propose a modified optimization problem and employ a different candidate solution that explicitly takes the inexactness into account. This yields a general procedure that is applicable to different MPC setups. By combining this framework with a distributed dual gradient algorithm, we obtain an a priori upper bound on the number of dual iterations needed to ensure recursive feasibility. Compared to Ferranti and Keviczky (2015), Giselsson and Rantzer (2014) and Necoara et al. (2015), no adaptive constraint tightening is required. Furthermore, compared to Doan et al. (2011), Ferranti and Keviczky (2015), Kögel and Findeisen (2014) and Rubagotti et al. (2014), no centralized operations are necessary, thus allowing a fully distributed implementation for large-scale systems.
Outline
The remainder of this paper is structured as follows: Section 2 presents the nominal distributed MPC formulation and explains the problem inherent in inexact dual optimization. Section 3 presents the modified formulation, derives closed-loop properties under inexact minimization and presents a corresponding distributed dual iteration scheme. Section 4 illustrates the practicality and simplicity of the proposed framework with a numerical example. Section 5 concludes the paper.
In the extended version (Köhler, Müller, & Allgöwer, 2018a), these results are generalized to MPC without terminal ingredients, unreachable setpoints and multi-step MPC, and the distributed offline computation of the terminal ingredients is detailed.
2. Distributed model predictive control
Notation
The real numbers are denoted by $\mathbb{R}$, the positive real numbers by $\mathbb{R}_{>0}$ and the natural numbers by $\mathbb{N}$. Given vectors $x_1,\dots,x_M$, we abbreviate the column vector $x=(x_1,\dots,x_M):=[x_1^\top,\dots,x_M^\top]^\top$. The quadratic norm with respect to a positive definite matrix $Q$ is denoted by $\|x\|_Q^2:=x^\top Qx$, and the minimal and the maximal eigenvalue of $Q$ are denoted by $\lambda_{\min}(Q)$ and $\lambda_{\max}(Q)$, respectively. For a polytopic constraint $Hx\le h$, we define an $\epsilon$-feasible solution as any vector $x$ that satisfies $Hx\le h+\epsilon\mathbf{1}$, with $\epsilon\ge 0$ and the vector of ones $\mathbf{1}:=[1,\dots,1]^\top$. We call a vector $x$ $\epsilon$-strictly feasible if it satisfies $Hx\le h-\epsilon\mathbf{1}$. The Minkowski sum of two sets $A$, $B$ is denoted by $A\oplus B:=\{a+b:\,a\in A,\,b\in B\}$. A distributed system is represented as a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with nodes $\mathcal{V}=\{1,\dots,M\}$ and edges $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$. Each node $i\in\mathcal{V}$ corresponds to a subsystem with local state $x_i\in\mathbb{R}^{n_i}$ and local input $u_i\in\mathbb{R}^{m_i}$. The neighborhood of a subsystem $i$ is given by $\mathcal{N}_i:=\{j\in\mathcal{V}:\,(i,j)\in\mathcal{E}\}\cup\{i\}$, and $x_{\mathcal{N}_i}$, $u_{\mathcal{N}_i}$ denote the stacked states and inputs of the subsystems in $\mathcal{N}_i$.
2.1. Problem setup
The distributed linear discrete-time system is given by
$$x_i(t+1)=\sum_{j\in\mathcal{N}_i}\big(A_{ij}x_j(t)+B_{ij}u_j(t)\big),\quad i\in\mathcal{V},\qquad(1)$$
with polytopic state and input constraints of the form
$$x_i(t)\in\mathbb{X}_i:=\{x_i\in\mathbb{R}^{n_i}:\,H_{x,i}x_i\le h_{x,i}\},\qquad(2)$$
$$u_i(t)\in\mathbb{U}_i:=\{u_i\in\mathbb{R}^{m_i}:\,H_{u,i}u_i\le h_{u,i}\},\qquad(3)$$
where $H_{x,i}$, $h_{x,i}$, $H_{u,i}$, $h_{u,i}$ define the local polytopes. We consider the general case, where the control input is given by
$$u(t)=Kx(t)+v(t),\qquad(4)$$
where $K$ is some existing distributed (structured) controller and $v$ is the input calculated using distributed MPC. If no such feedback is known, we can always set $K=0$. However, including this feedback can reduce the conservatism and mitigate the deteriorating effects of suboptimality on closed-loop stability. The overall system is given by
$$x(t+1)=A_Kx(t)+Bv(t),\qquad A_K:=A+BK,\qquad(5)$$
with the polytopic constraints $x(t)\in\mathbb{X}:=\mathbb{X}_1\times\dots\times\mathbb{X}_M=\{x:\,H_xx\le h_x\}$ and $u(t)=Kx(t)+v(t)\in\mathbb{U}:=\mathbb{U}_1\times\dots\times\mathbb{U}_M=\{u:\,H_uu\le h_u\}$. We consider a structured quadratic stage cost $\ell(x,v)=\|x\|_Q^2+\|v\|_R^2$, with block diagonal positive definite matrices $Q$ and $R$. We consider an MPC framework including a terminal cost and terminal set. To this end, we make the following assumption.
Assumption 1
There exists a terminal cost $V_f(x)=\|x\|_P^2$ with a block diagonal positive definite matrix $P$, a distributed terminal controller $v=K_fx$ with block-structured $K_f$, and a distributed compact polytopic set $\mathbb{X}_f$, such that the following conditions hold for each $x\in\mathbb{X}_f$:
$$V_f\big((A_K+BK_f)x\big)-V_f(x)\le-\ell(x,K_fx),\qquad(6a)$$
$$(A_K+BK_f)x\in\mathbb{X}_f,\qquad(6b)$$
$$x\in\mathbb{X},\quad Kx+K_fx\in\mathbb{U}.\qquad(6c)$$
Remark 2
In Conte, Jones, Morari, and Zeilinger (2016) distributed linear matrix inequalities (LMIs) are presented that can be used to compute a distributed terminal cost and an ellipsoidal terminal set. Ellipsoidal terminal constraints lead to a (distributed) quadratically constrained quadratic program (QCQP), which makes the online optimization more complex. Methods to obtain a distributed polytopic terminal set are, for example, given in Kögel and Findeisen (2012) and Trodden (2016). The offline computation of the distributed terminal ingredients is discussed in more detail in the extended version (Köhler et al., 2018a, A.1). The proposed framework can also be used without such terminal ingredients, which is discussed in Köhler et al. (2018a, A.2, A.3).
The open-loop cost of a state sequence $x(\cdot|t)$ and an input sequence $v(\cdot|t)$ with the prediction horizon $N$ is defined as
$$J_N\big(x(\cdot|t),v(\cdot|t)\big)=\sum_{k=0}^{N-1}\ell\big(x(k|t),v(k|t)\big)+V_f\big(x(N|t)\big).$$
The standard MPC optimization problem is given by
$$\begin{aligned}V_N(x(t))=\min_{x(\cdot|t),\,v(\cdot|t)}\ &J_N\big(x(\cdot|t),v(\cdot|t)\big)\\ \text{s.t. }\ &x(k+1|t)=A_Kx(k|t)+Bv(k|t),\\ &x(k|t)\in\mathbb{X},\quad Kx(k|t)+v(k|t)\in\mathbb{U},\quad k=0,\dots,N-1,\\ &x(N|t)\in\mathbb{X}_f,\qquad x(0|t)=x(t).\end{aligned}\qquad(7)$$
The solution to this optimization problem is the value function $V_N(x(t))$ and optimal state and input trajectories $x^*(\cdot|t)$, $v^*(\cdot|t)$ that satisfy the dynamic equality constraint and the state and input constraints. Problem (7) is a distributed quadratic program, the solution of which is discussed in Sections 2.2 and 3.5.
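To make the structure of Problem (7) concrete, the following sketch formulates it as a quadratic program with CVXPY. It is a minimal illustration, not the authors' implementation: the matrices $A_K$, $B$, $Q$, $R$, $P$, the polytope data $H_x$, $h_x$, $H_u$, $h_u$, $H_f$, $h_f$ and the feedback $K$ are placeholders for the quantities introduced above.

```python
# Minimal sketch of the nominal MPC problem (7) as a QP, assuming the standard
# form described in the text: pre-stabilized dynamics x+ = A_K x + B v, input
# u = K x + v, polytopic state/input constraints and a polytopic terminal set.
# All matrices are placeholders, not the values used in the paper.
import numpy as np
import cvxpy as cp

def nominal_mpc(x0, A_K, B, Q, R, P, Hx, hx, Hu, hu, Hf, hf, K, N):
    n, m = B.shape
    x = cp.Variable((N + 1, n))
    v = cp.Variable((N, m))
    cost, cons = 0, [x[0] == x0]                      # initial condition
    for k in range(N):
        cost += cp.quad_form(x[k], Q) + cp.quad_form(v[k], R)
        cons += [x[k + 1] == A_K @ x[k] + B @ v[k]]   # exact dynamic equality
        cons += [Hx @ x[k] <= hx,                     # state constraints
                 Hu @ (K @ x[k] + v[k]) <= hu]        # input constraints on u = Kx + v
    cost += cp.quad_form(x[N], P)                     # terminal cost
    cons += [Hf @ x[N] <= hf]                         # terminal set constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return x.value, v.value
```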
For the closed-loop operation, the first element of the optimal input, $u(t)=Kx(t)+v^*(0|t)$, is applied to the system (5), resulting in the following closed-loop system dynamics:
$$x(t+1)=A_Kx(t)+Bv^*(0|t).\qquad(8)$$
The following theorem is a standard result in MPC and establishes the desired properties.
Theorem 3
Rawlings & Mayne, 2009
Let Assumption 1 hold and assume that Problem (7) is feasible at $t=0$. Then Problem (7) is recursively feasible and the origin is asymptotically stable for the resulting closed-loop system (8).
2.2. Distributed (dual) optimization
In the following, we motivate why we consider inexact dual optimization and explain why it necessitates modifications to Problem (7). Most theoretical results for MPC (such as Theorem 3) assume that the optimal solution to (7) is obtained in real time, which is rarely achievable in practice.
If primal optimization methods are used, Theorem 3 remains valid with inexact optimization assuming a suitable initialization (Scokaert et al., 1999, Stewart et al., 2010). However, an application of primal optimization methods to large-scale distributed systems suffers from various difficulties, including initialization and scalability.
Thus, we consider dual optimization algorithms (Kögel and Findeisen, 2012, Necoara and Nedelcu, 2015, Necoara and Suykens, 2008), which only require neighbor-to-neighbor communication and can be implemented in a fully distributed manner. The main drawback of dual optimization is that the constraints (dynamic, state and input) are not necessarily satisfied after finite iterations. This necessitates additional modifications to enable theoretical guarantees after finite iterations, compare (Ferranti and Keviczky, 2015, Rubagotti et al., 2014). In the following, we provide a novel MPC formulation which is suitable for distributed computation and explicitly takes the inexact dynamics of approximate solutions into account.
3. Inexact distributed MPC
In the following, we consider bounds on the accuracy of the inexact solution, interpret them as disturbances and use tools from robust MPC (Chisci et al., 2001) to compensate the effects of inexact minimization. The proposed modifications are inspired by Ferranti and Keviczky (2015) and directly take the inexactness of the solver into account. By making use of an inexact candidate solution, we obtain a formulation that requires no adaptation and thus no global communication.
3.1. Inexact MPC and constraint tightening
Define an accuracy $\epsilon>0$ for the dynamic, state, input and terminal constraints and a strict feasibility margin, both given by the user. Consider the relaxation parameters in (9) and the associated error sets. We tighten the constraints using the $k$-step support function (Conte, Zeilinger, Morari, & Jones, 2013), which for a compact set $\mathcal{W}$ and a vector $c$ is defined as
$$\sigma_k(\mathcal{W},c):=\sum_{j=0}^{k-1}\max_{w\in\mathcal{W}}c^\top A_K^jw.\qquad(10)$$
The tightened state and input constraints are given by the sets $\bar{\mathbb{X}}_k$, $\bar{\mathbb{U}}_k$ obtained by reducing the right-hand sides $h_x$, $h_u$ row-wise according to (11), (12). Here, the $j$th component of $h_x$ (resp. $h_u$) is reduced by the $k$-step support function of the error set evaluated at the $j$th row of the corresponding constraint matrix, plus the additional margin from (9). The evaluation of the $k$-step support function amounts to solving a distributed linear program (LP) offline. The resulting tightened constraints preserve the distributed structure and can equally be represented with local polytopic sets.
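The exact tightening formulas (9), (11), (12) are not reproduced above, but the central computational object, the $k$-step support function (10), reduces to a sum of small LPs. The following sketch evaluates it for one constraint row, assuming the error set is a polytope $\mathcal{W}=\{w:\,H_ww\le h_w\}$; it is written centrally with SciPy for illustration, whereas the paper performs this computation as a distributed LP.

```python
# Sketch: evaluating the k-step support function (10) for one constraint row c,
# assuming the error set W = {w : Hw w <= hw} is a compact polytope and A_K is
# the pre-stabilized system matrix. Centralized illustration only.
import numpy as np
from scipy.optimize import linprog

def k_step_support(c, A_K, Hw, hw, k):
    total, A_pow = 0.0, np.eye(A_K.shape[0])
    for _ in range(k):
        # term j:  max_{w in W} ((A_K^j)^T c)^T w,  solved as an LP
        res = linprog(-(A_pow.T @ c), A_ub=Hw, b_ub=hw, bounds=(None, None))
        total += -res.fun
        A_pow = A_K @ A_pow
    return total

# The j-th row of the state constraints at prediction step k would then be
# tightened by (at least) k_step_support(Hx[j], A_K, Hw, hw, k).
```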
Assumption 4
Consider the terminal cost and controller from Assumption 1. There exists a compact tightened terminal set $\bar{\mathbb{X}}_f\subseteq\mathbb{X}_f$ such that the conditions (13a)–(13d) hold.
The tightened sets are needed to study strict recursive feasibility under inexact minimization. Simple sufficient conditions for (13b) and (13c) can be given in terms of the tightened constraint sets. Condition (13d) requires contractivity of the terminal set, despite the additive disturbance caused by the inexact dynamics.
If the terminal set in Assumption 1 is contractive, Assumption 4 can be satisfied with the following design procedure: for a fixed accuracy $\epsilon$ and prediction horizon $N$, compute the tightened constraints (11). Then scale the terminal set such that conditions (13a)–(13c) are satisfied. Finally, verify that condition (13d) is satisfied. If this is not the case, decrease $\epsilon$ and start over. In Köhler et al. (2018a), we show that the proposed framework can also be used without constructing a terminal set.
With this, we define the modified optimization problem
$$\min_{x(\cdot|t),\,v(\cdot|t)}\ J_N\big(x(\cdot|t),v(\cdot|t)\big)\qquad(14a)$$
$$\text{s.t. }\ \big|x(k+1|t)-A_Kx(k|t)-Bv(k|t)\big|\le\eta\mathbf{1},\qquad(14b)$$
$$x(k|t)\in\bar{\mathbb{X}}_k,\quad Kx(k|t)+v(k|t)\in\bar{\mathbb{U}}_k,\quad k=0,\dots,N-1,\qquad(14c)$$
$$x(N|t)\in\bar{\mathbb{X}}_f,\qquad(14d)$$
$$x(0|t)=x(t),\qquad(14e)$$
with the relaxation parameter $\eta>0$ from (9).
Compared to the original optimization Problem (7), the state and input constraints are tightened and the dynamic equality constraints are relaxed to inequality constraints. We do not try to find a solution that exactly satisfies the dynamic constraints, but only consider a relaxed dynamic constraint with the relaxation parameter $\eta$. This relaxation allows us to construct a feasible candidate solution which again does not exactly satisfy the dynamic constraints. This is the key insight and novelty in order to prove recursive feasibility and stability under inexact minimization. The resulting Problem (14) is a distributed quadratic program with linear inequality constraints.
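The following CVXPY sketch shows how the relaxation and tightening change the QP compared with the nominal sketch after (7). The scalar relaxation parameter (called eta here) and the precomputed step-dependent right-hand sides hx_t[k], hu_t[k], hf_t stand in for (9), (11), (12) and are assumptions for illustration only.

```python
# Sketch of the modified problem (14): dynamics relaxed to inequalities with a
# tolerance eta and step-dependent tightened constraint right-hand sides
# hx_t[k], hu_t[k], hf_t (placeholders standing in for (9), (11), (12)).
import numpy as np
import cvxpy as cp

def inexact_mpc(x0, A_K, B, Q, R, P, Hx, hx_t, Hu, hu_t, Hf, hf_t, K, N, eta):
    n, m = B.shape
    x = cp.Variable((N + 1, n))
    v = cp.Variable((N, m))
    cost, cons = 0, [x[0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[k], Q) + cp.quad_form(v[k], R)
        # (14b): relaxed dynamic constraint (inequality instead of equality)
        cons += [cp.abs(x[k + 1] - A_K @ x[k] - B @ v[k]) <= eta]
        # (14c): tightened, step-dependent state and input constraints
        cons += [Hx @ x[k] <= hx_t[k], Hu @ (K @ x[k] + v[k]) <= hu_t[k]]
    cost += cp.quad_form(x[N], P)
    cons += [Hf @ x[N] <= hf_t]                       # (14d): tightened terminal set
    cp.Problem(cp.Minimize(cost), cons).solve()
    return x.value, v.value
```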
To study recursive feasibility of (14) under the inexact DMPC scheme, we introduce the notion of $\epsilon$-feasible solutions.
Definition 5
State and input sequences $x(\cdot|t)$, $v(\cdot|t)$ are called an $\epsilon$-feasible solution to Problem (14) if they satisfy all constraints of Problem (14) with a componentwise violation of at most $\epsilon$, in the sense of the $\epsilon$-feasibility notation introduced in Section 2.
3.2. Feasible consolidated trajectory
In order to characterize the constraint satisfaction of the closed loop based on an $\epsilon$-feasible solution, we consider the consolidated¹ trajectory (Ferranti & Keviczky, 2015).
Proposition 6
Let Assumptions 1 and 4 hold. Given an $\epsilon$-feasible solution (15) at time $t$, the consolidated state and input trajectories
$$\bar{x}(0|t)=x(t),\quad\bar{x}(k+1|t)=A_K\bar{x}(k|t)+Bv(k|t),\quad\bar{v}(k|t)=v(k|t),\qquad(16)$$
satisfy the original state, input and terminal constraints, i.e., $\bar{x}(k|t)\in\mathbb{X}$, $K\bar{x}(k|t)+\bar{v}(k|t)\in\mathbb{U}$ for $k=0,\dots,N-1$ and $\bar{x}(N|t)\in\mathbb{X}_f$.
Proof
The inexact relaxed dynamic constraint (15) can be equivalently written as a dynamic equality constraint with an additive disturbance,
$$x(k+1|t)=A_Kx(k|t)+Bv(k|t)+w(k|t),\quad w(k|t)\in\mathcal{W},\qquad(17)$$
with the compact set $\mathcal{W}$ determined by the relaxation $\eta$ and the accuracy $\epsilon$. The consolidated trajectory (16) satisfies
$$\bar{x}(k|t)=x(k|t)-\sum_{j=0}^{k-1}A_K^{k-1-j}w(j|t),\qquad(18)$$
which implies satisfaction of the original state and input constraints, since the constraint tightening (11), (12) based on the $k$-step support function absorbs the accumulated disturbance terms.
Terminal constraint satisfaction follows by condition (13a) in combination with the characterization (18) for $k=N$. □
Proposition 6 shows that the consolidated trajectory based on the inexact optimization has all the desirable properties of the standard optimal solution to Problem (7). The closed-loop system resulting from the inexact DMPC is given by
$$x(t+1)=A_Kx(t)+Bv(0|t),\qquad(19)$$
where $v(\cdot|t)$ is the inexact (approximate) solution at time $t$. Thus, Proposition 6 implies that the closed loop based on an $\epsilon$-feasible solution satisfies the state and input constraints.
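The consolidated trajectory is cheap to evaluate: keep the inexact input sequence and re-propagate the state through the exact dynamics from the measured state. A minimal sketch, under the notation assumed above:

```python
# Sketch of the consolidated trajectory (16): the inexact input sequence is kept
# and the state is re-propagated through the exact dynamics x+ = A_K x + B v,
# starting from the measured state x(t). By Proposition 6, this trajectory
# satisfies the original (untightened) constraints.
import numpy as np

def consolidate(x_t, v_seq, A_K, B):
    x_bar = [np.asarray(x_t, dtype=float)]
    for v_k in v_seq:                      # same inputs as the inexact solution
        x_bar.append(A_K @ x_bar[-1] + B @ v_k)
    return np.array(x_bar)                 # exactly dynamically feasible states
```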
Remark 7
In order to show feasibility of the consolidated trajectory alone, the constraint tightening (11), (12) could be formulated without the additional margin and the support function could be defined based on a smaller error set, compare Ferranti and Keviczky (2015). The more restrictive constraint tightening will be crucial in order to establish recursive feasibility of Problem (14) for the closed-loop system (19) based on an $\epsilon$-feasible solution. The issue of using a more conservative constraint tightening to establish recursive feasibility is also addressed in Kögel and Findeisen (2014) and Rubagotti et al. (2014).
3.3. Recursive feasibility under inexact minimization
The following Theorem is the main contribution of this paper. It establishes recursive feasibility of Problem (14) under the inexact MPC control law with a suitable candidate solution.
Theorem 8
Let Assumptions 1 and 4 hold. Given an $\epsilon$-feasible solution (15) at time $t$, the candidate sequence (20) is an $\epsilon$-strictly feasible solution to the optimization Problem (14) at time $t+1$. Hence, Problem (14) is recursively feasible for the closed-loop system (19).
Proof
The proof is composed of three parts. First, we show strict satisfaction of the relaxed dynamic constraints. Then we show strict satisfaction of the tightened state and input constraints. Finally, we show strict satisfaction of the terminal constraint and thus establish recursive feasibility.
Part I: Show that the candidate sequence in (20) strictly satisfies the relaxed dynamic constraint (14b). The candidate input (20) is constructed by shifting the previous input sequence by one time step and appending the terminal controller $K_f$. The state sequence is shifted with an additional error term propagated through the system dynamics to ensure satisfaction of the initial state constraint (14e). Substituting (17) into the shifted state sequence shows that the candidate satisfies the dynamics up to the shifted disturbances, which, by the choice of the relaxation parameters (9), satisfy the relaxed dynamic constraint (14b) with a strict margin for $k=0,\dots,N-2$. Similarly, the last dynamic constraint ($k=N-1$) is satisfied with equality, which implies that all relaxed dynamic constraints are strictly satisfied.
Part II: Show that the candidate sequence (20) strictly satisfies the tightened state and input constraints (14c). Due to the definition of the support function² and linear superposition, the error term propagated through the dynamics is absorbed by the difference between consecutive tightened constraint sets, which implies that the shifted candidate states strictly satisfy the tightened state constraints. For the input constraints the same argument holds with the tightening (12). Given the terminal constraint of the previous solution, conditions (13b) and (13c) imply strict satisfaction of the state and input constraints at the appended step $k=N-1$.
Part III: Show that the candidate sequence strictly satisfies the tightened terminal constraint (14d). This follows from the robust invariance condition (13d), since the shifted terminal state is obtained from a point in the tightened terminal set under the terminal controller and a bounded error. Combining Parts I–III, the candidate sequence is an $\epsilon$-strictly feasible solution to Problem (14) at time $t+1$, which establishes recursive feasibility for the closed-loop system (19). □
This theorem ensures recursive feasibility under inexact dual optimization with bounded constraint violation. The candidate solution with the corresponding tightened (and shifted) constraint set is sketched in Fig. 1. The tightened constraint set is constructed such that $\epsilon$-feasibility of the solution at time $t$ implies $\epsilon$-strict feasibility of the candidate solution w.r.t. the shifted constraint set at time $t+1$, despite the error in the dynamics.
3.4. Closed-loop stability
To study stability properties of the closed-loop system, we use the following definition regarding the suboptimality of the inexact solution.
Definition 9
Given an $\epsilon$-feasible solution (Definition 5), its suboptimality w.r.t. the optimal solution of Problem (14) is defined in (21). The inexact optimal solution is given by (22), and the suboptimality with respect to this inexact optimal solution is given by (23). Solutions satisfying (15), (21), (23) are called approximate solutions.
Corresponding bounds on the suboptimality for inexact dual optimization will be established in Proposition 12. The following proposition shows that the proposed inexact DMPC approximately preserves the stability properties of nominal MPC based on exact optimization.
Proposition 10
Let Assumptions 1 and 4 hold. Given an approximate solution (Definition 9) at time $t$, the candidate sequence in Theorem 8 implies the descent property (24). Hence, the origin is practically asymptotically stable (Grüne & Pannek, 2017, Def. 2.15) for the closed-loop system (19) based on approximate solutions at each time $t$. Given a sufficiently small accuracy and suboptimality, the additional bound (25) holds with the constant according to (26).
Proof
Part I: Consolidated cost: The candidate input sequence from Theorem 8 with the corresponding consolidated state trajectory is a feasible solution to (7) (Proposition 6). Using the suboptimality according to Definition 9, this implies the descent property (24). Practical asymptotic stability follows from standard Lyapunov arguments.
Part II: Inexact optimal cost: There exist constants such that the inexact value function is lower and upper bounded by quadratic functions of the state. In the following we consider a bound on the inexact value function along the closed loop, which is recursively established at the end of the proof. This bound, in combination with the suboptimality, implies a bound on the state. The stage cost and terminal cost of the candidate solution can then be bounded accordingly, and the cost of the candidate trajectory satisfies a bound of the form (25) with the constant defined in (26). Using feasibility based on Theorem 8 and suboptimality according to Definition 9, this implies (25). The upper bound is valid recursively if the accuracy and suboptimality are sufficiently small, compare Köhler, Müller, and Allgöwer (2018b, Lemma 7, Thm. 8). □
Theorem 8 in combination with Proposition 10 ensures recursive feasibility and practical asymptotic stability under inexact dual optimization with bounded constraint violation and suboptimality. The inequalities (24) and (25) are each independently sufficient for practical asymptotic stability, with the corresponding value functions as practical Lyapunov functions. The stability analysis based on the inexact value function tends to be less conservative (compare Proposition 12) and is only possible since we explicitly refrain from adapting the accuracy online, contrary to Ferranti and Keviczky (2015), Giselsson and Rantzer (2014) and Necoara et al. (2015). This is why we also prove the technically more involved, but potentially less conservative, bound (25) on the inexact value function.
3.5. Dual distributed optimization
In the following, we describe how to obtain an approximate solution to Problem (14) with a finite number of distributed dual iterations. Problem (14) can be formulated³ as a quadratic program with a separable strongly convex objective and sparse linear inequality constraints that only couple neighboring subsystems. The gradient of the corresponding dual function is Lipschitz continuous with local Lipschitz constants $L_i$. We consider the distributed dual gradient algorithm (Necoara & Nedelcu, 2015) with the local step sizes $1/L_i$: in each iteration, every subsystem minimizes its local Lagrangian for fixed dual variables and then updates the dual variables of its local constraints with a projected gradient step.
Here, the projection is onto the nonnegative orthant (due to the inequality constraints). This is an iterative synchronous algorithm, which consists of small-scale matrix–vector multiplications and requires only local communication. The following proposition summarizes the theoretical properties of this algorithm based on Necoara and Nedelcu (2015, Thm. 4.2–4.4) and strict feasibility (Ferranti & Keviczky, 2015, Thm. 1).
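For intuition, the following sketch implements a projected dual gradient method for a condensed stand-in problem $\min_z \tfrac12 z^\top Hz+q^\top z$ subject to $Gz\le g$. It is written centrally for clarity and is not the distributed implementation of Necoara and Nedelcu (2015), which performs the analogous updates with local step sizes and neighbor-to-neighbor communication only; the global step size $1/L$ with $L=\lambda_{\max}(GH^{-1}G^\top)$ and the closed-form inner minimizer are the standard choices for this quadratic case.

```python
# Sketch of a projected dual gradient method for the QP  min 0.5 z'Hz + q'z
# s.t. G z <= g  (a condensed stand-in for Problem (14)). Centralized for
# clarity; the distributed algorithm performs the analogous updates with local
# step sizes and only neighbor-to-neighbor communication.
import numpy as np

def dual_gradient(H, q, G, g, iters):
    Hinv = np.linalg.inv(H)
    L = np.linalg.eigvalsh(G @ Hinv @ G.T).max()        # Lipschitz constant of the dual gradient
    lam = np.zeros(G.shape[0])                          # dual initialization
    for _ in range(iters):
        z = -Hinv @ (q + G.T @ lam)                     # minimizer of the Lagrangian
        lam = np.maximum(0.0, lam + (G @ z - g) / L)    # projected dual ascent step
    z = -Hinv @ (q + G.T @ lam)                         # primal iterate (possibly infeasible)
    return z, lam
```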
Proposition 12
Suppose there exists an $\epsilon$-strictly feasible solution to Problem (14) and consider a suitable initialization⁴ of the dual variables. Then, for all iteration numbers $p$ above an a priori computable bound, the primal iterate is an approximate solution (Definitions 5, 9) with accuracy and suboptimality bounds of the form (27a)–(27c), where the constants depend on the problem data, the bound on the optimal dual multipliers and the strict feasibility margin.
Proof
Part I: Given the $\epsilon$-strictly feasible solution, an upper bound on the norm of the optimal dual variables can be derived (compare Ferranti & Keviczky, 2015, Lemma 1). Based on Necoara and Nedelcu (2015, Thm. 4.2), the constraint violation of the primal iterate decreases with the number of iterations $p$, with a fixed constant according to Necoara and Nedelcu (2015, Thm. 3.2, Thm. 4.2). Correspondingly, for $p$ above the bound in (27a), the iterate is an $\epsilon$-feasible solution.
Part II: Analogous to Ferranti and Keviczky (2015, Thm. 1), we can derive a bound on the cost of the iterate relative to the optimal solution of (14), based on the dual variables and the relaxation. By combining this result with the suboptimality bound of Necoara and Nedelcu (2015, Thm. 4.4), we establish the bound (27b). The same derivations hold for (27c), using the maximal size of the constraint tightening instead of the accuracy $\epsilon$. □
Remark 13
Given a user-specified accuracy $\epsilon$, this proposition gives an a priori upper bound on the number of iterations for a given sublevel set of the value function. In combination with Theorem 8 and Proposition 10, this property holds recursively under the approximate DMPC. In closed-loop operation the value function decreases (Proposition 10) and thus the number of necessary iterations based on (27a) decreases. Using a larger tolerance $\epsilon$ leads to fewer iterations and a larger suboptimality. The bound on the suboptimality with respect to the inexact optimal solution (27c) is (typically) significantly smaller than the bound (27b), which is crucial for the stability analysis (Proposition 10). Instead of choosing a desired accuracy $\epsilon$, a user can also specify an upper bound on the number of iterations and choose a corresponding accuracy using (27a). There exists a variety of distributed dual algorithms for which similar complexity bounds can be obtained. If an alternating minimization algorithm, such as in Kögel and Findeisen (2012) and Ferranti and Keviczky (2015), is used, the relationship between the inexactness of the optimization and the resulting inexactness in the dynamic constraint changes, see Ferranti and Keviczky (2015) and Köhler, Müller, Li, and Allgöwer (2017).
The initialization and closed-loop operation of the MPC scheme are summarized in the following two algorithms.
Instead of using the (possibly conservative) a priori bound on the number of iterations, a stopping condition ensuring an $\epsilon$-feasible solution (Definition 5) can be used, which can be checked online efficiently and in a distributed fashion, compare Section 4. All the necessary offline and online computations can be accomplished in a fully distributed and scalable fashion.
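A hedged sketch of such a stopping test, for the condensed inequality form $Gz\le g$ assumed in the dual gradient sketch above: accept the current primal iterate once its largest constraint violation is at most $\epsilon$. Each subsystem can evaluate its own rows, so only a logical AND over the network is needed.

```python
# Sketch of the eps-feasibility stopping condition (Definition 5) for the
# condensed form G z <= g: stop once the largest constraint violation of the
# current primal iterate is at most eps. Each subsystem checks its own rows;
# the results are combined with a logical AND over the network.
import numpy as np

def is_eps_feasible(z, G, g, eps):
    return float(np.max(G @ z - g)) <= eps
```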
3.6. Comments
By combining Theorem 8 and Proposition 10, Proposition 12, we can ensure recursive feasibility and practical asymptotic stability with finite distributed dual iterations. While parts of the proof might be technical, the application of the proposed method is straightforward. The bounds on the suboptimality and the resulting closed-loop stability guarantees (Propositions 10, 12) tend to be conservative and should rather be interpreted as a conceptual result of how the inexact minimization affects stability.
We prove the theoretical properties of the proposed framework within the standard MPC setup including a terminal cost and a polytopic terminal set. In various applications and setups, different variations of MPC can be advantageous (such as MPC without terminal ingredients or economic MPC). The extended version (Köhler et al., 2018a) shows in detail under which conditions similar results can be derived for these different setups.
The following remark discusses similarities of the proposed framework to existing schemes and highlights the novelty based on the inexact candidate solution.
Remark 14
In Rubagotti et al. (2014) violations in the inequality constraints (state and input constraints) are considered, while the dynamic constraints are satisfied exactly. The corresponding constraint tightening can be viewed as a special case of the proposed method with no relaxation of the dynamic constraints.
In Kögel and Findeisen (2014) a constraint tightening is proposed to ensure recursive feasibility of the consolidated trajectory despite inexact dynamic constraints. The a priori complexity bounds (Proposition 12) do not hold for this formulation due to the usage of equality constraints and lack of strict feasibility.
In Giselsson and Rantzer (2014) the stopping condition is based on an explicit candidate solution for the next time step, which needs to be computed online. This requires additional online computations, and bounds on the number of iterations cannot be given (in contrast to the $\epsilon$-feasibility criterion used in Ferranti and Keviczky, 2015, Necoara et al., 2015, Rubagotti et al., 2014).
In Ferranti and Keviczky (2015) and Necoara et al. (2015) the constraints are tightened such that the consolidated trajectory is (strictly) feasible (Proposition 6). Recursive feasibility is ensured by adapting the accuracy and constraint tightening online. This adaptation requires global communication, is complex, and it is a priori unclear whether the number of online iterations increases or decreases in closed-loop operation. One of the main benefits of the proposed framework is that such an adaptation is not needed (although incorporating an optional adaptation, if possible, could be beneficial).
To the best of our knowledge, the proposed result is the first MPC result based on a dynamic inexact candidate solution. As discussed above, the use of such an inexact candidate solution is possible through relaxing the dynamic constraint (14b) and is the key ingredient for establishing recursive feasibility with a fixed constraint tightening, allowing for a fully distributed implementation of the proposed scheme with finite dual iterations.
4. Numerical example
In the following, we show the practicality of the proposed approach with the example⁵ of a chain of masses (Conte et al., 2016). We consider $M$ subsystems with randomly sampled masses, spring constants and damping constants, and use an Euler discretization with sampling time $T_s$. The stage cost is quadratic and the states and inputs are subject to polytopic constraints. The resulting overall system has coupled dynamics and constraints.
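The exact parameter ranges, horizon and constraint bounds used in the paper are not reproduced above; the following sketch only illustrates how the coupled system matrices of such a mass-spring-damper chain can be assembled with an Euler discretization, using placeholder parameter ranges.

```python
# Sketch of a chain-of-masses model: M masses coupled to their neighbors by
# springs and dampers, state (position, velocity) per mass, one force input per
# mass, Euler discretization x+ = x + Ts*(Ac x + Bc u). Parameter ranges and Ts
# are illustrative placeholders, not the values used in the paper.
import numpy as np

def chain_of_masses(M, Ts=0.1, rng=np.random.default_rng(0)):
    m = rng.uniform(0.5, 2.0, M)         # masses (placeholder range)
    k = rng.uniform(0.5, 2.0, M + 1)     # spring constants between neighbors/walls
    c = rng.uniform(0.1, 0.5, M)         # damping constants
    n = 2 * M
    Ac, Bc = np.zeros((n, n)), np.zeros((n, M))
    for i in range(M):
        p, v = 2 * i, 2 * i + 1
        Ac[p, v] = 1.0                                   # position derivative = velocity
        Ac[v, p] = -(k[i] + k[i + 1]) / m[i]             # spring forces acting on mass i
        Ac[v, v] = -c[i] / m[i]                          # damping
        if i > 0:
            Ac[v, p - 2] = k[i] / m[i]                   # coupling to left neighbor
        if i < M - 1:
            Ac[v, p + 2] = k[i + 1] / m[i]               # coupling to right neighbor
        Bc[v, i] = 1.0 / m[i]
    return np.eye(n) + Ts * Ac, Ts * Bc                  # Euler-discretized (A, B)
```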
Offline computation
In the following we detail the offline computations. We consider no additional feedback, i.e., $K=0$. We choose a fixed prediction horizon $N$ and a fixed tolerance $\epsilon$. The constraints are tightened with the $k$-step support function (11), (12). We compute a distributed terminal cost that satisfies (6a) with distributed LMIs as in Conte et al. (2016). For the terminal set, we consider decoupled local terminal sets given by symmetric local polytopes. The defining vectors are determined using the method in Trodden (2016), such that the overall set is (robust) positively invariant for the terminal dynamics, by solving a (distributed) LP. This terminal set is scaled such that conditions (13b) and (13c) are satisfied. Finally, we verify that this terminal set satisfies condition (13d) and thus Assumptions 1 and 4 are satisfied. The overall offline computations are accomplished in 60 s with an Intel Core i7.
Simulations — stability and dual initialization
In the following, the online optimization (Alg. 1) is stopped once an $\epsilon$-feasible solution (Definition 5) is obtained. We explore the effect of the dual initialization (Alg. 3) on the number of dual iterations. Simple initialization strategies are initializing the dual variables with zero, reusing the previous dual variables, or shifting the previous dual variables similarly to the shifted candidate solution in Theorem 8 (and appending zeros at the end). We consider an initial condition with random positions and zero velocities. The inexact cost and the number of dual iterations for the resulting closed loop can be seen in Fig. 2.
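A minimal sketch of the shift warm start, assuming the dual variables are stored with one block of multipliers per prediction step:

```python
# Sketch of the "shift" warm start for the dual variables, assuming lambda is an
# (N, n_c) array with one block of multipliers per prediction step: drop the
# first block, shift the rest forward and append zeros for the newly added step,
# mirroring the shifted candidate solution of Theorem 8.
import numpy as np

def shift_duals(lam_prev):
    lam_warm = np.zeros_like(lam_prev)
    lam_warm[:-1] = lam_prev[1:]         # shift blocks by one time step
    return lam_warm                      # last block initialized with zeros
```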
As expected, the predicted cost decreases and the origin is (practically) asymptotically stable (Proposition 10). Clearly, reusing the previous solution can significantly reduce the number of online iterations. Since the cost with the different initializations varies only marginally, a suitable initialization simply reduces the number of online iterations.
In the following, we quantitatively explore the effect of the tolerance $\epsilon$ and the number of subsystems $M$ on the closed-loop computational demand. We consider the initialization based on the shifted dual variables. In Fig. 3 we can see the number of online iterations at each time step for different numbers of subsystems $M$ and tolerances $\epsilon$. If we increase the number of subsystems $M$, the number of dual iterations tends to increase slightly (due to the increased cost, compare Proposition 12). In contrast, if we choose a smaller tolerance $\epsilon$, the number of dual iterations increases significantly. Thus, by choosing a larger tolerance $\epsilon$, we can consider significantly more subsystems without increasing the number of online dual iterations.
To summarize: compared to a nominal DMPC, the design procedure only requires the additional computation of the tightened constraints for the chosen tolerance $\epsilon$. With the proposed modifications, the closed loop satisfies the constraints and the effect of inexact minimization on closed-loop stability is negligible. Thus, we can significantly reduce the online computational demand by allowing for a non-vanishing tolerance $\epsilon$ without any major downside. The main limitation when considering more subsystems and a larger tolerance is the construction of the polytopic terminal set and the robust positive invariance condition (13d).
5. Conclusion
We have proposed a new formulation for DMPC based on inexact dual optimization. The online optimization can be accomplished in a fully distributed manner using standard dual distributed optimization methods and only has to obtain an approximate solution. We have established recursive feasibility, constraint satisfaction and practical stability of the closed loop based on such an approximate solution. This is possible through the usage of a reformulated optimization problem and a novel candidate solution, which both explicitly consider the inexactness of the optimization. This modified formulation enables practical applications of MPC to large-scale systems with fast dynamics, for which the underlying MPC optimization problem cannot be solved exactly in real time.
Acknowledgments
Preliminary results of this paper were derived during a research stay of the first author in Na Li's research group at SEAS, Harvard University.
References
- Chisci et al. (2001). Systems with persistent disturbances: predictive control with restricted constraints. Automatica, 37, 1019–1028.
- Conte et al. (2016). Distributed synthesis and stability of cooperative distributed model predictive control for linear systems. Automatica, 69, 117–125.
- Conte et al. (2013). Robust distributed model predictive control of linear systems. In Proc. European Control Conf. (ECC) (pp. 2764–2769).
- Doan et al. (2011). A distributed optimization-based approach for hierarchical model predictive control of large-scale systems with coupled dynamics and constraints. In Proc. 50th IEEE Conf. Decision and Control (CDC) (pp. 5236–5241).
- Ferranti and Keviczky (2015). A parallel dual fast gradient method for MPC applications. In Proc. 54th IEEE Conf. Decision and Control (CDC) (pp. 2406–2413).
- Giselsson and Rantzer (2014). On feasibility, stability and performance in distributed model predictive control. IEEE Transactions on Automatic Control, 59, 1031–1036.
- Grüne and Pannek (2017). Nonlinear model predictive control. Springer.
- Kögel and Findeisen (2012). Cooperative distributed MPC using the alternating direction multiplier method. In Proc. 8th IFAC Symposium on Advanced Control of Chemical Processes (pp. 445–450).
- Kögel and Findeisen (2014). Stabilization of inexact MPC schemes. In Proc. 53rd IEEE Conf. Decision and Control (CDC) (pp. 5922–5928).
- Köhler et al. (2018a). Inexact distributed model predictive control – recursive feasibility under inexact dual optimization. Tech. rep., University of Stuttgart.
- Köhler et al. (2018b). A novel constraint tightening approach for nonlinear robust model predictive control. In Proc. American Control Conf. (ACC) (pp. 728–734).
- Köhler et al. (2017). Real time economic dispatch for power networks: A distributed economic model predictive control approach. In Proc. 56th IEEE Conf. Decision and Control (CDC) (pp. 6340–6345).
- Maestre et al. (2014). Distributed model predictive control made easy. Springer.
- Müller and Allgöwer (2017). Economic and distributed model predictive control: Recent developments in optimization-based control. SICE Journal of Control, Measurement, and System Integration, 10, 39–52.
- Necoara et al. (2015). An adaptive constraint tightening approach to linear model predictive control based on approximation algorithms for optimization. Optimal Control Applications & Methods, 36, 648–666.
- Necoara and Nedelcu (2015). On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems. Automatica, 55, 209–216.
- Necoara and Suykens (2008). Application of a smoothing technique to decomposition in convex optimization. IEEE Transactions on Automatic Control, 53, 2674–2679.
- Rawlings and Mayne (2009). Model predictive control: Theory and design. Nob Hill Publishing.
- Rubagotti et al. (2014). Stabilizing linear model predictive control under inexact numerical optimization. IEEE Transactions on Automatic Control, 59, 1660–1666.
- Scokaert et al. (1999). Suboptimal model predictive control (feasibility implies stability). IEEE Transactions on Automatic Control, 44, 648–654.
- Stewart et al. (2010). Cooperative distributed model predictive control. Systems & Control Letters, 59, 460–469.
- Trodden (2016). A one-step approach to computing a polytopic robust positively invariant set. IEEE Transactions on Automatic Control, 61(12), 4100–4105.
Johannes Köhler received his Master degree in Engineering Cybernetics from the University of Stuttgart, Germany, in 2017. During his studies, he spent 3 months at Harvard University in Na Li’s research lab. He has since been a doctoral student at the Institute for Systems Theory and Automatic Control under the supervision of Prof. Frank Allgöwer and a member of the Graduate School Soft Tissue Robotics at the University of Stuttgart. His research interests are in the area of model predictive control.
Matthias A. Müller received a Diploma degree in Engineering Cybernetics from the University of Stuttgart, Germany, and an M.S. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, US, both in 2009. In 2014, he obtained a Ph.D. in Mechanical Engineering, also from the University of Stuttgart, Germany, for which he received the 2015 European Ph.D. award on control for complex and heterogeneous systems. He is currently working as a senior lecturer (Akademischer Oberrat) at the Institute for Systems Theory and Automatic Control at the University of Stuttgart, Germany. His research interests include nonlinear control and estimation, model predictive control, distributed control and switched systems, with application in different fields including biomedical engineering.
Frank Allgöwer studied Engineering Cybernetics and Applied Mathematics in Stuttgart and at the University of California, Los Angeles (UCLA), respectively, and received his Ph.D. degree from the University of Stuttgart in Germany. Since 1999 he has been the Director of the Institute for Systems Theory and Automatic Control and a professor at the University of Stuttgart. His research interests include networked control, cooperative control, predictive control, and nonlinear control with application to a wide range of fields including systems biology. For the years 2017–2020 he serves as President of the International Federation of Automatic Control (IFAC), and since 2012 he has served as Vice President of the German Research Foundation (DFG).
- ☆ The authors thank the German Research Foundation (DFG) for support of this work within grant AL 316/11-1 and within the Research Training Group Soft Tissue Robotics (GRK 2198/1). The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Giancarlo Ferrari-Trecate under the direction of Editor Ian R. Petersen.
- 1. The feasibility recovery scheme described in Kögel and Findeisen (2014) to obtain a (dynamically) feasible solution is comparable to the definition of the consolidated trajectory.
- 2. This would not hold if the smaller error set discussed in Remark 7 were used for the definition of the $k$-step support function.
- 3. The minimization can be further decoupled along the time axis with additional local variables, compare Ferranti and Keviczky (2015).
- 4. The following properties remain valid if the initialization of the dual variables satisfies the corresponding norm bound.
- 5. To improve the numerical conditioning, the optimization problem is suitably scaled.