Domain

Design Considerations

To use dimension reduction in practical algorithms, we need to be able to solve four problems:

  • extent – \(\displaystyle \max_{\alpha \ge 0} \alpha \text{ such that } \mathbf{x}_0+\alpha \mathbf{p}\in \mathcal{D}\)
  • closest point – \(\displaystyle \min_{\mathbf x \in \mathcal D} \| \mathbf L (\mathbf x - \mathbf y)\|_2\)
  • furthest point (corner) – \(\displaystyle \max_{\mathbf x \in \mathcal D} \mathbf p^\top \mathbf x\)
  • constrained least squares – \(\displaystyle \min_{\mathbf x \in \mathcal D} \| \mathbf A \mathbf x - \mathbf b \|_2\)

With these four operations, we can, for example, implement hit-and-run sampling, as sketched below.
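
The following is a minimal sketch of hit-and-run sampling built from these operations. It assumes the extent problem is exposed as a method extent(x0, p), as in the psdr API, and uses a BoxDomain (documented below) as a concrete domain; treat the exact signatures as assumptions rather than a definitive implementation.

    import numpy as np
    import psdr

    def hit_and_run(dom, x0, n_steps=100):
        # Sketch of hit-and-run sampling; assumes dom.extent(x, p) returns the
        # largest alpha >= 0 such that x + alpha * p remains in the domain.
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            # draw a random direction on the unit sphere
            p = np.random.randn(len(x))
            p /= np.linalg.norm(p)
            # how far we can travel forwards and backwards before leaving the domain
            alpha_fwd = dom.extent(x, p)
            alpha_bwd = dom.extent(x, -p)
            # move to a uniformly chosen point on the feasible chord
            x = x + np.random.uniform(-alpha_bwd, alpha_fwd) * p
        return x

    dom = psdr.BoxDomain(lb=[-1, -1], ub=[1, 1])
    x = hit_and_run(dom, x0=[0.0, 0.0])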

Abstract Base Class

All domains implement a similar interface that provides these operations.

class psdr.Domain[source]

Abstract base class for arbitrary domain shapes

is_box_domain

Returns True if the domain is specified only by bound constraints on each variable

is_linineq_domain

Returns True if the domain is specified by linear equality/inequality constraints

is_linquad_domain

Returns True if the domain is specified purely by linear equality/inequality and convex quadratic constraints

Deterministic Domains

Here we use the term deterministic domains to describe domains whose main function is to specify a set of constraints. These classes do have a sample method, but it simply samples uniformly at random over the domain. The classes below are listed in nesting order; e.g., a BoxDomain is a subset of a LinIneqDomain. These distinctions are important because we can often use less expensive algorithms for the more specialized subclasses.

class psdr.LinQuadDomain(A=None, b=None, lb=None, ub=None, A_eq=None, b_eq=None, Ls=None, ys=None, rhos=None, names=None, **kwargs)[source]

A domain specified by a combination of linear (in)equality constraints and convex quadratic constraints

Here we define a domain that is specified in terms of bound constraints, linear inequality constraints, linear equality constraints, and quadratic constraints.

\[\mathcal{D} := \left \lbrace \mathbf{x} : \text{lb} \le \mathbf{x} \le \text{ub}, \ \mathbf{A} \mathbf{x} \le \mathbf{b}, \ \mathbf{A}_{\text{eq}} \mathbf{x} = \mathbf{b}_{\text{eq}}, \ \| \mathbf{L}_i (\mathbf{x} - \mathbf{y}_i)\|_2 \le \rho_i \right\rbrace \subset \mathbb{R}^m\]
Parameters:
  • A (array-like (m,n)) – Matrix in left-hand side of inequality constraint
  • b (array-like (m,)) – Vector in right-hand side of the inequality constraint
  • A_eq (array-like (p,n)) – Matrix in left-hand side of equality constraint
  • b_eq (array-like (p,)) – Vector in right-hand side of equality constraint
  • lb (array-like (n,)) – Vector of lower bounds
  • ub (array-like (n,)) – Vector of upper bounds
  • Ls (list of array-likes (p,m)) – List of matrices with m columns defining the quadratic constraints
  • ys (list of array-likes (m,)) – Centers of the quadratic constraints
  • rhos (list of positive floats) – Radii of quadratic constraints
  • names (list of strings, optional) – Names for each of the parameters in the space
  • kwargs (dict, optional) – Additional parameters to be passed to cvxpy Problem.solve()
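
For instance, the sketch below constructs a LinQuadDomain describing the unit box intersected with a Euclidean ball of radius 0.75; the values are illustrative, and the sample call uses the sample method mentioned above (its draw-count argument is an assumption).

    import numpy as np
    import psdr

    # unit box intersected with the ball || I (x - 0) ||_2 <= 0.75
    dom = psdr.LinQuadDomain(
        lb=[-1, -1], ub=[1, 1],     # bound constraints
        Ls=[np.eye(2)],             # L_1 = identity
        ys=[np.zeros(2)],           # center y_1 = 0
        rhos=[0.75],                # radius rho_1
    )
    X = dom.sample(10)              # 10 points drawn uniformly from the domain
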
class psdr.LinIneqDomain(A=None, b=None, lb=None, ub=None, A_eq=None, b_eq=None, names=None, **kwargs)[source]

A domain specified by a combination of linear equality and inequality constraints.

Here we build a domain specified by three kinds of constraints: bound constraints \(\text{lb} \le \mathbf{x} \le \text{ub}\), inequality constraints \(\mathbf{A} \mathbf{x} \le \mathbf{b}\), and equality constraints \(\mathbf{A}_{\text{eq}} \mathbf{x} = \mathbf{b}_{\text{eq}}\):

\[\mathcal{D} := \left \lbrace \mathbf{x} : \text{lb} \le \mathbf{x} \le \text{ub}, \ \mathbf{A} \mathbf{x} \le \mathbf{b}, \ \mathbf{A}_{\text{eq}} \mathbf{x} = \mathbf{b}_{\text{eq}} \right\rbrace \subset \mathbb{R}^m\]
Parameters:
  • A (array-like (m,n)) – Matrix in left-hand side of inequality constraint
  • b (array-like (m,)) – Vector in right-hand side of the inequality constraint
  • A_eq (array-like (p,n)) – Matrix in left-hand side of equality constraint
  • b_eq (array-like (p,)) – Vector in right-hand side of equality constraint
  • lb (array-like (n,)) – Vector of lower bounds
  • ub (array-like (n,)) – Vector of upper bounds
  • kwargs (dict, optional) – Additional parameters to pass to solvers
chebyshev_center()[source]

Estimates the Chebyshev center using the constrained least squares approach

Solves a linear program to find the radius \(r\) and Chebyshev center \(\mathbf{x}\).

\[\begin{split}\max_{r\in \mathbb{R}^+, \mathbf{x} \in \mathcal{D}} &\ r \\ \text{such that} & \ \mathbf{a}_i^\top \mathbf{x} + r \|\mathbf{a}_i\|_2 \le b_i\end{split}\]

where we have expressed the domain in terms of the linear inequality constraints \(\mathcal{D}=\lbrace \mathbf{x} : \mathbf{A}\mathbf{x} \le \mathbf{b}\rbrace\) and \(\mathbf{a}_i^\top\) are the rows of \(\mathbf{A}\) as described in [BVNotes].

Returns:
  • center (np.ndarray(m,)) – Center of the domain
  • radius (float) – Radius of the largest ball inscribed in the domain

References

[BVNotes] https://see.stanford.edu/materials/lsocoee364a/04ConvexOptimizationProblems.pdf, page 4-19.
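
As an illustration, the snippet below builds a LinIneqDomain for the triangle \(\lbrace \mathbf{x} : x_1, x_2 \ge 0, \ x_1 + x_2 \le 1\rbrace\) and computes its Chebyshev center; the data are illustrative only.

    import numpy as np
    import psdr

    # triangle { x : x_1 >= 0, x_2 >= 0, x_1 + x_2 <= 1 }
    dom = psdr.LinIneqDomain(A=np.array([[1.0, 1.0]]), b=np.array([1.0]),
                             lb=[0, 0], ub=[1, 1])
    center, radius = dom.chebyshev_center()
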
class psdr.ConvexHullDomain(X, A=None, b=None, lb=None, ub=None, A_eq=None, b_eq=None, Ls=None, ys=None, rhos=None, names=None, tol=1e-05, **kwargs)[source]

Define a domain that is the interior of a convex hull of points.

Given a set of points \(\lbrace x_i \rbrace_{i=1}^M\subset \mathbb{R}^m\), construct a domain from their convex hull:

\[\mathcal{D} := \left\lbrace \sum_{i=1}^M \alpha_i x_i : \sum_{i=1}^M \alpha_i = 1, \ \alpha_i \ge 0 \right\rbrace \subset \mathbb{R}^m.\]

Additionally, any linear equality, linear inequality, and quadratic inequality constraints can be included.

Parameters:
  • X (array-like (M, m)) – Points from which to build the convex hull of points.
  • A (array-like (m,n)) – Matrix in left-hand side of inequality constraint
  • b (array-like (m,)) – Vector in right-hand side of the inequality constraint
  • A_eq (array-like (p,n)) – Matrix in left-hand side of equality constraint
  • b_eq (array-like (p,)) – Vector in right-hand side of equality constraint
  • lb (array-like (n,)) – Vector of lower bounds
  • ub (array-like (n,)) – Vector of upper bounds
  • Ls (list of array-likes (p,m)) – List of matrices with m columns defining the quadratic constraints
  • ys (list of array-likes (m,)) – Centers of the quadratic constraints
  • rhos (list of positive floats) – Radii of quadratic constraints
  • names (list of strings, optional) – Names for each of the parameters in the space
  • kwargs (dict, optional) – Additional parameters to be passed to cvxpy Problem.solve()
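
For example, a convex hull domain can be built from a small point cloud and further restricted by bound constraints; the data below are illustrative.

    import numpy as np
    import psdr

    # convex hull of five points in R^2, intersected with the unit box
    X = np.random.randn(5, 2)
    dom = psdr.ConvexHullDomain(X, lb=[-1, -1], ub=[1, 1])
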
class psdr.BoxDomain(lb, ub, names=None)[source]

Implements a domain specified by box constraints

Given a set of lower and upper bounds, this class defines the domain

\[\mathcal{D} := \lbrace \mathbf{x} \in \mathbb{R}^m : \text{lb} \le \mathbf{x} \le \text{ub} \rbrace \subset \mathbb{R}^m.\]
Parameters:
  • lb (array-like (m,)) – Lower bounds
  • ub (array-like (m,)) – Upper bounds
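
A minimal example with named parameters is shown below; the call to sample relies on the uniform sample method noted above, and its draw-count argument is an assumption.

    import psdr

    # box [0, 1] x [0, 1] x [-5, 5] with named parameters
    dom = psdr.BoxDomain(lb=[0, 0, -5], ub=[1, 1, 5], names=['a', 'b', 'c'])
    X = dom.sample(10)    # 10 samples drawn uniformly from the box
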
class psdr.PointDomain(x, names=None)[source]

A domain consisting of a single point

Given a point \(\mathbf{x} \in \mathbb{R}^m\), construct the domain consisting of that point

\[\mathcal{D} = \lbrace \mathbf x \rbrace \subset \mathbb{R}^m.\]
Parameters:
  • x (array-like (m,)) – Single point to be contained in the domain

Random Domains

An alternative function of domains is to provide samples from an associated sampling measure on some domain \(\mathcal{D}\). Domains with a stochastic interpretation are all subclasses of psdr.RandomDomain and implement several additional functions.

class psdr.RandomDomain(names=None)[source]

Abstract base class for domains with an associated sampling measure

pdf(X)[source]

Probability density function associated with the domain

This evaluates a probability density function \(p:\mathcal{D}\to \mathbb{R}_+\) at the requested points. By definition, this density function is normalized so that its integral over the domain is one:

\[\int_{\mathbf{x} \in \mathcal{D}} p(\mathbf{x}) \mathrm{d} \mathbf{x} = 1.\]
Parameters:
  • X (array-like, either (m,) or (N,m)) – Points at which to evaluate the density function
Returns: Evaluation of the density function
Return type: array-like (N,)
class psdr.UniformDomain(lb, ub, names=None)[source]

A randomized version of a BoxDomain with a uniform measure on the space.

class psdr.NormalDomain(mean, cov=None, truncate=None, names=None, **kwargs)[source]

Domain described by a normal distribution

This class describes a normal distribution with mean \(\boldsymbol{\mu}\in \mathbb{R}^m\) and a symmetric positive definite covariance matrix \(\boldsymbol{\Gamma}\in \mathbb{R}^{m\times m}\) that has the probability density function:

\[p(\mathbf{x}) = \frac{ e^{-\frac12 (\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Gamma}^{-1}(\mathbf{x} - \boldsymbol{\mu})} }{\sqrt{(2\pi)^m |\boldsymbol{\Gamma}|}}\]

If the parameter truncate is specified, this distribution is truncated; i.e., denoting this parameter \(\tau\), the resulting domain has measure \(1-\tau\). Specifically, given a Cholesky decomposition \(\boldsymbol{\Gamma} = \mathbf{L} \mathbf{L}^\top\), we find a \(\rho\) such that

\[\begin{split}\mathcal{D} &= \lbrace \mathbf{x}: \|\mathbf{L}^{-1}(\mathbf{x} - \boldsymbol{\mu})\|_2^2 \le \rho\rbrace ; \\ p(\mathcal{D}) &= 1-\tau.\end{split}\]

This is done so that the domain has compact support, which is necessary for several metric-based sampling strategies.

Parameters:
  • mean (array-like (m,)) – Mean
  • cov (array-like (m,m), optional) – Positive definite covariance matrix; defaults to the identity matrix
  • truncate (float in [0,1), optional) – Amount to truncate the domain to ensure compact support
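
For instance, the sketch below constructs a truncated bivariate normal domain and evaluates its density at a few samples; the covariance values are illustrative and the sample signature is an assumption.

    import numpy as np
    import psdr

    mean = np.array([1.0, -1.0])
    cov = np.array([[2.0, 0.5],
                    [0.5, 1.0]])
    # truncate so the domain retains 99.9% of the probability mass
    dom = psdr.NormalDomain(mean, cov, truncate=1e-3)
    X = dom.sample(100)   # draws from the truncated normal
    p = dom.pdf(X)        # density evaluated at those samples
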
class psdr.LogNormalDomain(mean, cov=1.0, offset=0.0, scaling=1.0, truncate=None, names=None)[source]

A one-dimensional domain described by a log-normal distribution.

Given a normal distribution \(\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Gamma})\), the log-normal is described by

\[x = \alpha + \beta e^y, \quad y \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Gamma})\]

where \(\alpha\) is an offset and \(\beta\) is a scaling coefficient.

Parameters:
  • mean (float) – mean of normal distribution feeding the log-normal distribution
  • cov (float, optional) – covariance of normal distribution feeding the log-normal distribution
  • offset (float, optional) – Shift the distribution
  • scaling (float or np.ndarray) – Scale the output of the log-normal distribution
  • truncate (float [0,1)) – Truncate the tails of the distribution
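
A short illustrative example follows; the parameter values are assumptions chosen only to show the constructor.

    import psdr

    # x = 1 + 0.5 * exp(y), where y ~ N(0, 0.25), truncated at 99.9% mass
    dom = psdr.LogNormalDomain(mean=0.0, cov=0.25, offset=1.0,
                               scaling=0.5, truncate=1e-3)
    x = dom.sample(10)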

Tensor Product Domain

One goal of the operations on domains is to compose a tensor product of sub-domains; i.e., given \(\mathcal{D}_1\) and \(\mathcal{D}_2\), to form:

\[\mathcal{D} := \mathcal{D}_1 \otimes \mathcal{D}_2.\]
class psdr.TensorProductDomain(domains=None, **kwargs)[source]

A class describing a tensor product of multiple domains

Parameters:
  • domains (list of domains) – Domains to combine into a single domain
  • **kwargs – Additional keyword arguments to pass to CVXPY
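
For example, a deterministic box and a random normal parameter might be combined as sketched below; the construction follows the parameters documented above, and the concrete values are illustrative.

    import psdr

    box = psdr.BoxDomain(lb=[0, 0], ub=[1, 1])             # two deterministic parameters
    normal = psdr.NormalDomain(mean=[0.0], truncate=1e-3)  # one random parameter
    dom = psdr.TensorProductDomain(domains=[box, normal])  # 3-dimensional product domain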

Domain Operations