type: Post

Created date: Jun 16, 2022 01:21 PM

category: Data Science

tags: Machine Learning

status: Published

**Local Regression**


We slide a window across the predictor values, and only the observations inside that window are used to fit a regression model.

The model within each window can be a linear regression or even a polynomial model, and observations are down-weighted as they get farther from the window's center.

Every observation takes a turn as the center of its own window, and the local fits are combined into one smooth curve.

**🤬** Hard to write down as a single closed-form equation.

**😀** Does well with awkward, nonlinear relationships.

- Take a sliding window and fit a regression model to the observations inside it,

- then combine the local results by averaging.

**Assumption:**

- Around a point x, the mean of y can be approximated by a small class of parametric functions (e.g., a low-degree polynomial).

- The errors in estimating y are independent and randomly distributed with a mean of zero.

- Bias and variance are traded off by the choices for the settings of span and degree of polynomial.
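
The sliding-window idea above can be sketched in NumPy. This is a minimal illustration, not the lecture's exact method: the function name, the span value, and the tricube weights are my assumptions (tricube is the classic loess choice for down-weighting distant points).

```python
import numpy as np

def local_linear_regression(x, y, x0, span=0.3):
    """Fit a weighted linear regression in a window centered at x0.

    span is the fraction of observations included in the window.
    """
    n = len(x)
    k = max(2, int(np.ceil(span * n)))            # window size
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                       # the k nearest neighbors
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights: nearer points count more
    X = np.column_stack([np.ones(k), x[idx]])     # design matrix for a local line
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
    return beta[0] + beta[1] * x0                 # prediction at the window center

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)
grid = np.linspace(0.5, 9.5, 50)
fit = np.array([local_linear_regression(x, y, x0, span=0.15) for x0 in grid])
```

Each grid point gets its own window and its own tiny regression; the collection of window-center predictions is the smooth curve.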

**Polynomial Regression**


**Polynomial Regression** basically transforms the variable (e.g., cubing x) to make a new variable, and then fits a model on it.

**Orthogonal Polynomial Regression** makes the new variables as uncorrelated as possible with the first variable: it sets up a system of new variables that are really just polynomials, but special polynomials with additional parts added so that there is no correlation between any two of them.

**Assumption:**

- the behavior of a dependent variable *y* is explained by a linear or nonlinear (curvilinear), additive relationship between the dependent variable and a set of *k* independent variables *(xi, i = 1 to k)*,

- the relationship between the dependent variable *y* and any independent variable *xi* is linear or curvilinear (specifically polynomial),

- the independent variables *xi* are independent of each other,

- the errors are independent, normally distributed with mean zero and a constant variance (*OLS*).
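
A short NumPy sketch of both variants; the QR decomposition below is one standard way to build an orthogonal basis from the raw powers, and the variable names and simulated data are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 100)
y = 1 + 0.5 * x - 1.2 * x**2 + 0.3 * x**3 + rng.normal(0, 0.1, 100)

# Raw polynomial regression: transform x into x, x^2, x^3 and fit by least squares.
coefs = np.polyfit(x, y, deg=3)            # highest degree first

# Raw powers of x are strongly correlated with each other...
V = np.vander(x, 4, increasing=True)       # columns: 1, x, x^2, x^3
print(np.corrcoef(V[:, 1], V[:, 3])[0, 1])

# ...so orthogonal polynomial regression re-expresses them as uncorrelated
# columns (here via a QR decomposition of the Vandermonde matrix).
Q, R = np.linalg.qr(V)
print(abs(Q[:, 1] @ Q[:, 3]))              # ~0: the columns are orthogonal
beta = Q.T @ y                             # least-squares fit in the orthogonal basis
yhat = Q @ beta                            # same fitted values as the raw fit
```

Both bases span the same column space, so the fitted values agree; only the coefficients (and their interpretability/stability) differ.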

**Step Functions (Skipped in lecture 2 but used in tree models)**


Essentially, cutting the predictor's range up into chunks and fitting a separate constant in each chunk.
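
A minimal sketch of the chunking idea using NumPy's `digitize`; the cut points, helper name, and simulated data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)
y = np.where(x < 5, 1.0, 3.0) + rng.normal(0, 0.3, 300)

# Cut the predictor's range into bins and fit one constant (the mean) per bin.
cuts = np.array([2.5, 5.0, 7.5])             # interior cut points
bins = np.digitize(x, cuts)                  # bin index 0..3 for each observation
means = np.array([y[bins == b].mean() for b in range(len(cuts) + 1)])

def step_predict(x_new):
    # Prediction is just the mean of the bin the new point falls into.
    return means[np.digitize(x_new, cuts)]
```

The fitted function is piecewise constant, jumping at each cut point; this is the same device tree models use when they split a predictor.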

**Basis Functions**

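
Polynomial regression and step functions are both special cases of the basis-function approach: fix a family of transformations of x in advance and fit their coefficients by least squares. The standard formulation is:

```latex
y_i = \beta_0 + \beta_1 b_1(x_i) + \beta_2 b_2(x_i) + \cdots + \beta_K b_K(x_i) + \varepsilon_i
```

With b_j(x) = x^j this recovers polynomial regression; with b_j(x) = I(c_j ≤ x < c_{j+1}) it recovers step functions; splines (next section) are yet another choice of basis.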

**Regression Splines & Smoothing Splines**


The degrees of freedom (df) correspond to the number of knots in the natural spline model.

- For each sample, the R² increases as df increases.

- The fitted curve becomes more flexible as df increases. df = 10 appears to capture the true model's shape better.
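
A regression spline can be sketched with the truncated-power basis: the usual cubic terms plus one shifted cubic "hinge" per knot. This hand-rolled basis is an illustration (libraries typically use the numerically nicer B-spline basis), and the knot locations and data are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic regression spline:
    1, x, x^2, x^3, plus one (x - knot)^3_+ term per knot."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

knots = np.array([2.5, 5.0, 7.5])              # 3 interior knots
X = cubic_spline_basis(x, knots)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares on the basis
yhat = X @ beta
```

Adding a knot adds one basis column, so each knot buys one extra degree of freedom of flexibility, which is exactly the df-vs-flexibility trade-off described above.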

**Generalized Additive Models**


It is essentially an AUTOMATED, multi-predictor version of splines: a separate smooth function is fit to each predictor, and the results are added together.
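
The additive idea can be sketched with a simple backfitting loop: cycle over the predictors, smoothing each one against the partial residuals of the others. The moving-average smoother below is a crude stand-in for a spline smoother, and all names, window sizes, and data are illustrative assumptions:

```python
import numpy as np

def moving_average_smooth(x, r, window=25):
    """A crude smoother: average r over the `window` nearest x-values."""
    order = np.argsort(x)
    rs = r[order]
    smoothed = np.empty_like(r)
    for j, i in enumerate(order):
        lo = max(0, j - window // 2)
        hi = min(len(x), j + window // 2 + 1)
        smoothed[i] = rs[lo:hi].mean()
    return smoothed

def backfit_gam(X, y, n_iter=20):
    """Fit y ≈ intercept + f1(x1) + ... + fp(xp) by backfitting:
    repeatedly smooth each predictor against the partial residuals."""
    n, p = X.shape
    intercept = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - intercept - f.sum(axis=1) + f[:, j]
            f[:, j] = moving_average_smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()          # keep each f_j centered
    return intercept, f

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, (400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, 400)
intercept, f = backfit_gam(X, y)
yhat = intercept + f.sum(axis=1)
```

Swapping the moving-average smoother for a spline smoother gives the spline-based GAM the text refers to; the additive structure and the backfitting loop stay the same.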

#### Assumptions (continued)

2'. Homogeneity of variance (Similar variance)

3'. Normality of residuals


**Author:** Jason Siu
**URL:** https://jason-siu.com/article%2Fcae2c463-b61b-4a67-b833-0063da3f9943
**Copyright:** All articles in this blog, except for special statements, adopt the BY-NC-SA agreement. Please indicate the source!
