We illustrate the application of two linear compression algorithms in Python: principal component analysis (PCA) and least-squares feature selection. Both can be used to compress a data array, and both work by stripping out redundant columns. The two differ in that PCA operates in a particular rotated frame, while feature selection operates directly on the original columns. As we illustrate below, PCA always gives a compression at least as strong. However, the feature selection solution is often comparably strong, and its output has the benefit of being relatively easy to interpret — a virtue that is important for many applications.
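The claim that PCA compresses at least as strongly as column selection can be checked directly: the rank-k PCA reconstruction is the optimal rank-k approximation, while a least-squares fit on k retained columns is also rank k. The sketch below, using toy data of our own invention (not the stock data from this post), compares the two reconstruction errors.

```python
import itertools

import numpy as np

# Toy data: 200 samples, 5 correlated features (2 independent sources).
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(200, 3))])
X -= X.mean(axis=0)  # center columns before compressing

def pca_error(X, k):
    """Squared reconstruction error keeping the top-k principal components."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.sum((X - Xk) ** 2)

def selection_error(X, cols):
    """Squared error reconstructing all of X by least squares on a column subset."""
    S = X[:, cols]
    coef, *_ = np.linalg.lstsq(S, X, rcond=None)
    return np.sum((X - S @ coef) ** 2)

k = 2
err_pca = pca_error(X, k)
# Exhaustively search for the best k-column subset on this small example.
err_sel = min(
    selection_error(X, list(c)) for c in itertools.combinations(range(X.shape[1]), k)
)
# By the Eckart-Young theorem, err_pca <= err_sel always holds.
print(err_pca, err_sel)
```

On data like this, where a few latent sources drive all columns, both errors are small and comparable, but the PCA error is never the larger of the two.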
This is a tutorial post on our Python feature selection package, linselect. The package makes it easy to identify minimal, informative feature subsets within a given data set.
Here, we demonstrate linselect's basic API by exploring the relationship between the daily percentage lifts of 50 tech stocks over one trading year. We will be interested in identifying minimal stock subsets that can be used to predict the lifts of the others.
This is a demonstration walkthrough, with commentary and interpretation throughout. See the package's docs folder for docstrings that succinctly detail the API. The post covers:
- Load the data and examine some stock traces
- FwdSelect, RevSelect; supervised, single target
- FwdSelect, RevSelect; supervised, multiple targets
- FwdSelect, RevSelect; unsupervised
The data and a Jupyter notebook containing the code for this demo are available on our GitHub, here. The linselect package itself can also be found on our GitHub, here.