# Local Linear Models

psdr.local_linear(X, fX, perplexity=None, bandwidth=None, Xt=None)

Construct local linear models at specified points

In several dimension reduction settings we want to estimate gradients using only samples. If we had freedom to place these samples anywhere we wanted, we would use a finite difference approach. As this is often not the case, we need some way to estimate gradients given a fixed and arbitrary set of data.

Local linear models provide one approach to estimating the gradient. This approach constructs a local linear model centered at each $$\mathbf{x}_t$$, with weights that decay with the distance between each sample and $$\mathbf{x}_t$$:

$\min_{a_0\in \mathbb{R}, \mathbf{a}\in \mathbb{R}^m} \sum_{i=1}^M [(a_0 + \mathbf{a}^\top \mathbf{x}_i) - f(\mathbf{x}_i)]^2 e^{-\beta_t \| \mathbf{x}_i - \mathbf{x}_t\|_2^2}.$
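The weighted least-squares problem above can be solved directly by scaling the rows of an augmented design matrix by the square roots of the weights. A minimal NumPy sketch for a single center $$\mathbf{x}_t$$ (the function name and interface here are illustrative, not the library's API):

```python
import numpy as np

def local_linear_fit(X, fX, xt, beta):
    """Fit the weighted least-squares model above at a single center xt.

    X    : (M, m) sample locations
    fX   : (M,)   function values at those locations
    xt   : (m,)   center point x_t
    beta : float  bandwidth beta_t
    Returns (a0, a): the constant term and the linear coefficients.
    """
    # Gaussian weights e^{-beta ||x_i - x_t||^2}
    w = np.exp(-beta * np.sum((X - xt) ** 2, axis=1))
    # Augment with a column of ones so a0 is fit alongside a
    V = np.hstack([np.ones((X.shape[0], 1)), X])
    # Weighted least squares: scale each row (and fX) by sqrt(w_i)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], fX * sw, rcond=None)
    return coef[0], coef[1:]
```

When $$f$$ is itself affine, the fit recovers the coefficients exactly for any bandwidth; for general $$f$$, the vector of linear coefficients serves as the gradient estimate at $$\mathbf{x}_t$$.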

The choice of $$\beta_t$$ is critical. Here we provide two main options. By default, we choose $$\beta_t$$ for each $$\mathbf{x}_t$$ such that the perplexity of the weights is $$m+1$$; other values are available by setting perplexity. The other option is to specify the bandwidth $$\beta$$ explicitly.
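Because the perplexity of the weights decreases monotonically as $$\beta_t$$ grows, a target perplexity can be found by bisection. The sketch below illustrates the idea (the library's actual root-finding procedure may differ); the shift by the minimum squared distance is a numerical-stability trick that leaves the normalized weights unchanged:

```python
import numpy as np

def select_beta(X, xt, target_perplexity, beta_lo=1e-8, beta_hi=1e8, tol=1e-6):
    """Choose beta_t so the Gaussian weights at xt have the target perplexity."""
    d2 = np.sum((X - xt) ** 2, axis=1)

    def perplexity(beta):
        # Shift by d2.min() to avoid underflowing all weights to zero
        w = np.exp(-beta * (d2 - d2.min()))
        p = w / w.sum()
        # Perplexity = exp(entropy), with entropy in nats
        H = -np.sum(p * np.log(p + 1e-300))
        return np.exp(H)

    # Perplexity is monotone decreasing in beta, so bisect (in log space)
    for _ in range(200):
        beta = np.sqrt(beta_lo * beta_hi)
        if perplexity(beta) > target_perplexity:
            beta_lo = beta
        else:
            beta_hi = beta
        if beta_hi / beta_lo < 1 + tol:
            break
    return np.sqrt(beta_lo * beta_hi)
```

At $$\beta_t \to 0$$ the weights are uniform (perplexity $$M$$); at $$\beta_t \to \infty$$ the weight concentrates on the nearest sample (perplexity near 1), so any target between these extremes is attainable.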

**Note:** The cost of this method scales quadratically in the dimension of the input space.

Parameters:

- **X** (array-like (M, m)) – Places where the function is evaluated.
- **fX** (array-like (M,)) – Value of the function at those locations.
- **perplexity** (None or float) – If None, defaults to m+1.
- **bandwidth** (None, 'xia', or positive float) – If a float, use that value as the global bandwidth. If 'xia', use the bandwidth selection heuristic of Xia mentioned in [Li18].
- **Xt** (array-like or None) – Points $$\mathbf{x}_t$$ at which to construct the local linear models; if None, the models are centered at the rows of X.

Returns:

- **A** (np.array (M, m+1)) – Matrix of coefficients of the linear models; A[:,0] is the constant term and A[:,1:m+1] contains the linear coefficients.
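Putting the pieces together, the sketch below estimates the gradient of a simple quadratic at every sample using a fixed global bandwidth, mirroring the shape of the returned coefficient matrix. The variable names and the bandwidth value are illustrative choices, not the library's defaults:

```python
import numpy as np

# Estimate gradients of f(x) = ||x||^2 / 2 at each sample; the true
# gradient at x is x itself, which gives us a reference to compare against.
rng = np.random.default_rng(2)
M, m = 200, 2
X = rng.uniform(-1, 1, (M, m))
fX = 0.5 * np.sum(X ** 2, axis=1)

beta = 20.0                       # fixed global bandwidth (illustrative)
A = np.zeros((M, m + 1))          # A[:,0] constant term, A[:,1:] gradients
V = np.hstack([np.ones((M, 1)), X])
for t in range(M):
    # Weighted least squares centered at X[t]
    w = np.exp(-beta * np.sum((X - X[t]) ** 2, axis=1))
    sw = np.sqrt(w)
    A[t], *_ = np.linalg.lstsq(V * sw[:, None], fX * sw, rcond=None)

# Largest deviation from the exact gradient across all centers
grad_err = np.max(np.abs(A[:, 1:] - X))
```

The estimates are exact only for affine functions; for the quadratic above the residual error reflects the bandwidth-dependent bias of the local linear fit, which is largest near the boundary of the sampled domain where the weighted neighborhoods are asymmetric.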