
Principal component analysis

We review the two essentials of principal component analysis (“PCA”): 1) The principal components of a set of data points are the eigenvectors of the covariance matrix of these points in feature space. 2) Projecting the data onto the subspace spanned by the first $k$ of these, listed in descending eigenvalue order, provides the best possible $k$-dimensional approximation to the data, in the sense of captured variance.
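As a concrete illustration of both points, here is a minimal numpy sketch (our illustration, not code from the post) that carries out the recipe directly:

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components.

    Minimal sketch of the recipe above: the components are the
    eigenvectors of the sample covariance matrix, taken in
    descending eigenvalue order.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]       # descending eigenvalue order
    components = eigvecs[:, order[:k]]      # first k principal components
    return Xc @ components                  # best k-dim projection

# Example: 100 points in 5 dimensions, reduced to 2.
X = np.random.randn(100, 5)
Z = pca_project(X, k=2)                     # shape (100, 2)
```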

(more…)


NBA 2015-16!!!

The NBA is back this Tuesday! The dashboard and weekly predictions are now live*, once again. Both will be updated daily, with game-winner predictions, hypothetical who-would-beat-whom daily matchup predictions, and more. For a discussion of how we make our predictions, see our first post on this topic. Note that our approach does not make use of any bookie predictions (unlike many other sites), and so provides an independent look at the game.

This season, we hope to crack 70% accuracy!

* Note that we have left last season’s completed-game results up, for review purposes. Once every team has played one game, we’ll switch the dashboard over to the current season’s results.


A review of parameter regularization and Bayesian regression

Here, we review parameter regularization, a method for improving regression models by penalizing non-zero parameter estimates. Why is this effective? Biasing parameters towards zero will (of course!) unfavorably bias a model, but it will also reduce its variance. At times the latter effect can win out, resulting in a net reduction in generalization error. We also review Bayesian regression, which in effect generalizes the regularization approach: it biases model parameters towards any specified prior estimates, not necessarily zero.
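To make the Bayesian generalization concrete, here is a small numpy sketch (our own notation and example, not the book's or the post's): under a Gaussian prior centered at $\beta_0$, the MAP estimate has the closed form below, and setting $\beta_0 = 0$ recovers ordinary ridge regression.

```python
import numpy as np

def ridge_map(X, y, lam, beta0=None):
    """MAP estimate for linear regression with a Gaussian prior.

    Sketch: a prior beta ~ N(beta0, (sigma^2 / lam) I) on the
    parameters gives
        beta_hat = (X^T X + lam I)^{-1} (X^T y + lam beta0),
    which shrinks the fit towards beta0 rather than towards zero.
    """
    n, p = X.shape
    if beta0 is None:
        beta0 = np.zeros(p)  # zero prior: ordinary ridge regression
    A = X.T @ X + lam * np.eye(p)
    b = X.T @ y + lam * beta0
    return np.linalg.solve(A, b)
```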

This is the second in a series of posts expounding on topics discussed in the text “An Introduction to Statistical Learning”. Here, we cover material from its Chapters 2 and 6. See the prior post here.

(more…)


Stochastic geometric series

Let $a_1, a_2, \ldots$ be an infinite sequence of non-negative samples drawn from a distribution $P_0(a)$, and write
$$\tag{1} \label{problem}
S = 1 + a_1 + a_1 a_2 + a_1 a_2 a_3 + \ldots.
$$
Notice that if the $a_i$ were all equal to some fixed $a < 1$, $S$ would be an ordinary geometric series, with value $S = \frac{1}{1-a}$. How will the introduction of randomness in the $a_i$ change this sum? Will $S$ necessarily converge? How is $S$ distributed? In this post, we discuss some simple techniques for answering these questions.

Note: This post covers work done in collaboration with my father, S. Landy.
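One quick way to build intuition for these questions is simulation. Below is a Monte Carlo sketch (our illustration, not the code from the post); the uniform choice for $P_0$ is an assumption made purely for the example.

```python
import numpy as np

def sample_S(n_terms=100, n_samples=50_000, rng=None):
    """Monte Carlo samples of S = 1 + a1 + a1*a2 + ..., truncated.

    Sketch: here the a_i are drawn uniformly on [0, 1) (an assumed
    P_0), so the partial products decay quickly and truncation at
    n_terms introduces negligible error.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    a = rng.uniform(0.0, 1.0, size=(n_samples, n_terms))
    partial_products = np.cumprod(a, axis=1)  # a1, a1*a2, a1*a2*a3, ...
    return 1.0 + partial_products.sum(axis=1)

S = sample_S()
# For independent a_i, E[S] = 1 / (1 - E[a]) = 2 when a ~ Uniform[0, 1).
print(S.mean(), np.percentile(S, [5, 50, 95]))
```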

(more…)


Leave-one-out cross-validation

This will be the first of a series of short posts relating to subject matter discussed in the text “An Introduction to Statistical Learning”. This is an interesting read, but it often skips over proofs of its statements; that’s where this series of posts comes in! Here, I consider the content of Section 5.1.2, which gives a lightning-quick shortcut for evaluating a regression’s leave-one-out cross-validation error. The method is applicable to any least-squares linear fit.
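In symbols, the shortcut reads $CV_{(n)} = \frac{1}{n} \sum_i \left( \frac{y_i - \hat{y}_i}{1 - h_i} \right)^2$, where $\hat{y}_i$ is the full-sample fit and $h_i$ is the $i$-th leverage (diagonal entry of the hat matrix). Here is a numpy sketch of that identity (our code, not the book's):

```python
import numpy as np

def loocv_error(X, y):
    """Leave-one-out CV error of a least-squares fit, without refitting.

    Sketch of the Section 5.1.2 shortcut: with hat matrix
    H = X (X^T X)^{-1} X^T and leverages h_i = H_ii,
        CV = (1/n) * sum_i ((y_i - yhat_i) / (1 - h_i))^2.
    Forming H in full is fine for modest n, as here.
    """
    H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix
    h = np.diag(H)                         # leverage of each point
    resid = y - H @ y                      # ordinary residuals
    return np.mean((resid / (1.0 - h)) ** 2)
```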

(more…)



How not to sort by average rating, revisited

What is the best method for ranking items that have both positive and negative reviews? Some sites, including reddit, have adopted an algorithm suggested by Evan Miller to generate their item rankings. However, this algorithm can sometimes be unfairly pessimistic about new, good items. This is especially true of items whose first few votes are negative, an issue that adversaries can exploit to “game” the rankings. In this post, we consider three alternative ranking methods that allow high-quality items to bubble up more easily. The last is the simplest, but still gives good results: one simply seeds each item’s vote count with a suitable fixed number of hidden “starter” votes.
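A sketch of that last method (the specific starter counts below are illustrative placeholders, not values from the post):

```python
def starter_vote_score(ups, downs, prior_ups=3, prior_downs=2):
    """Rank items by mean rating after seeding with hidden starter votes.

    Sketch: the fixed prior counts (3 up, 2 down here, chosen only
    for illustration) keep a new item's score from collapsing after
    its first few negative votes.
    """
    return (ups + prior_ups) / (ups + downs + prior_ups + prior_downs)

# A brand-new item with a single negative vote is not buried outright:
print(starter_vote_score(0, 1))  # 0.5, rather than 0.0
```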
(more…)


Multivariate Cramér-Rao inequality

The Cramér-Rao inequality addresses the question of how accurately one can estimate a set of parameters $\vec{\theta} = \{\theta_1, \theta_2, \ldots, \theta_m \}$ characterizing a probability distribution $P(x) \equiv P(x; \vec{\theta})$, given only some samples $\{x_1, \ldots, x_n\}$ taken from $P$. Specifically, the inequality provides a rigorous lower bound on the covariance matrix of any unbiased set of estimators of these $\{\theta_i\}$ values. In this post, we review the general, multivariate form of the inequality, including its significance and proof.
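For reference, the standard statement of the bound is as follows. Writing $I(\vec{\theta})$ for the Fisher information matrix of a single sample,
$$
I(\vec{\theta})_{ij} = E \left[ \partial_{\theta_i} \log P(x; \vec{\theta}) \, \partial_{\theta_j} \log P(x; \vec{\theta}) \right],
$$
any unbiased estimator $\hat{\theta}$ built from $n$ independent samples satisfies $\text{cov}(\hat{\theta}) \succeq \frac{1}{n} I(\vec{\theta})^{-1}$, where $\succeq$ indicates that the difference of the two sides is positive semi-definite.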
(more…)


MLB predictions take off!

[Screenshot: EFavDB’s MLB dashboard]

Announcing: EFavDB’s first major league baseball prediction project! Just as in our corresponding NBA project, we will be providing free-of-charge game-winner predictions on a weekly basis. In addition, we’ve implemented an MLB dashboard (screenshot above) where you can get a quick summary of each team’s prior results. You can also use the dashboard to check out our guesses for who would beat whom, were they to play today, etc.

The algorithm we’ll be applying this season is similar to the one discussed here for the NBA. We’ve set its parameters to generate reasonably conservative predictions, but ones that will still flag interesting potential upsets when appropriate. Unlike many other sites, our predictions do not take into account the over-under values published by bookies, and so are independent of their opinions. Looking forward to a great season!



Machine Learning Methods: Decision trees and forests

This post contains our crib notes on the basics of decision trees and forests. We first discuss the construction of individual trees, and then introduce random and boosted forests. We also discuss efficient implementations of greedy tree construction algorithms, showing that a single tree can be constructed in $O(k \times n \log n)$ time, given $n$ training examples having $k$ features each. We provide exercises on interesting related points and an appendix containing relevant Python/scikit-learn function calls.
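To see where the $O(k \times n \log n)$ bound comes from, consider scoring every split of a single feature under squared-error loss: one sort plus a prefix-sum sweep evaluates all $n - 1$ candidate thresholds. The sketch below (our illustration, not the post's code; it assumes distinct feature values) implements that step, and repeating it across the $k$ features is the core of the quoted cost.

```python
import numpy as np

def best_split(x, y):
    """Best threshold on one feature, minimizing total squared error.

    Sketch: after one O(n log n) sort, running sums let each of the
    n - 1 candidate splits be scored in O(1), since a group's SSE is
    sum(y^2) - (sum(y))^2 / count.
    """
    order = np.argsort(x)                  # O(n log n)
    xs, ys = x[order], y[order]
    n = len(ys)
    left_sum = np.cumsum(ys)[:-1]          # left-group sums of y
    left_sq = np.cumsum(ys ** 2)[:-1]      # left-group sums of y^2
    left_n = np.arange(1, n)
    right_sum = ys.sum() - left_sum
    right_sq = (ys ** 2).sum() - left_sq
    right_n = n - left_n
    sse = (left_sq - left_sum ** 2 / left_n) \
        + (right_sq - right_sum ** 2 / right_n)
    best = np.argmin(sse)                  # best split position
    return (xs[best] + xs[best + 1]) / 2.0, sse[best]
```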
(more…)


Mathematics of measles

Here, we introduce and outline a solution to a generalized SIR model for infectious disease, which is referenced in our following post on measles and vaccination rates. Our generalized SIR model differs from the original SIR model of Kermack and McKendrick in that we allow for two susceptible sub-populations, one vaccinated against the disease and one not. We conclude by presenting some Python code that integrates the equations numerically; an example solution obtained with this code appears in the full post.
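A minimal scipy sketch in the same spirit (one plausible instantiation of such a model, with parameter names of our choosing; the post's exact equations may differ) is:

```python
import numpy as np
from scipy.integrate import odeint

def sir_two_groups(state, t, beta, gamma, eps):
    """Generalized SIR with unvaccinated (Su) and vaccinated (Sv) susceptibles.

    Sketch equations (assumed form, not necessarily the post's):
    vaccinated susceptibles are infected at a reduced rate eps * beta.
    """
    Su, Sv, I, R = state
    dSu = -beta * Su * I
    dSv = -eps * beta * Sv * I
    dI = beta * (Su + eps * Sv) * I - gamma * I
    dR = gamma * I
    return [dSu, dSv, dI, dR]

t = np.linspace(0, 100, 1000)
# Fractions: 30% unvaccinated, 69% vaccinated, 1% initially infected.
sol = odeint(sir_two_groups, [0.30, 0.69, 0.01, 0.0], t,
             args=(0.5, 0.1, 0.05))  # beta, gamma, vaccine "leak" eps
```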
(more…)
