06 Oct 2018
A central part of data science is hyperparameter optimization, and it is often a difficult step to get right: the search can be computationally expensive and, if carried out poorly, prone to both underfitting and overfitting. One of the better tools for the job is randomized search with cross-validation, such as
sklearn.model_selection.RandomizedSearchCV. Randomized search is far cheaper than an exhaustive grid search and strikes a reasonable balance between overfitting and underfitting. The main difficulty in using it is choosing the distributions that parameter values are drawn from. In this blog post, I will discuss one of the biggest pitfalls of randomized search: sampling probability distributions on non-linear scales. Below, I propose a function that maps random samples of a probability distribution from a linear scale to a logarithmic scale.
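To make the idea concrete, here is a minimal sketch of such a mapping (the function name log_uniform and the bounds are my choices for illustration, not the post's actual implementation): drawing uniformly in log10 space and exponentiating makes every decade equally likely, which is what you want for parameters like learning rates or regularization strengths that span several orders of magnitude.

```python
import math
import random

def log_uniform(low, high, size=1, rng=random):
    """Sample values uniformly on a log10 scale between low and high.

    A value is drawn uniformly in [log10(low), log10(high)] and mapped
    back through 10**x, so each decade carries equal probability.
    """
    lo, hi = math.log10(low), math.log10(high)
    return [10 ** rng.uniform(lo, hi) for _ in range(size)]

# Example: candidate learning rates spanning 1e-5 to 1e-1
samples = log_uniform(1e-5, 1e-1, size=5)
```

A list like this can be passed directly as a parameter distribution to RandomizedSearchCV, which accepts either lists or objects with an rvs method; scipy.stats.loguniform provides a ready-made version of the same idea.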
21 Jul 2018
I was browsing through other blogs and reading like I normally do, and noticed that some blogs had animations on the homepage. I thought to myself, gosh, wouldn't that be cool to have on my blog, especially since physics really lends itself to cool animations. In this blog post, I'm going to make three animations: a simple pendulum, the Lennard-Jones potential, and the Maxwell construction. Without further ado, I'll dive right in.
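The simple pendulum animation rests on integrating the equation of motion theta'' = -(g/L) * sin(theta). A minimal sketch of that integration step, using a semi-implicit Euler scheme (the function name and integrator choice are mine for illustration, not necessarily what the post uses), looks like this; feeding the resulting angles frame by frame into matplotlib's FuncAnimation is what produces the animation:

```python
import math

def simulate_pendulum(theta0, length=1.0, g=9.81, dt=1e-4, t_max=10.0):
    """Integrate theta'' = -(g/L) * sin(theta) from rest at angle theta0.

    Uses semi-implicit Euler: update the angular velocity first, then the
    angle, which keeps the energy (and thus the amplitude) nearly constant.
    Returns the list of angles at each time step.
    """
    theta, omega = theta0, 0.0
    trajectory = [theta]
    for _ in range(int(t_max / dt)):
        omega += -(g / length) * math.sin(theta) * dt
        theta += omega * dt
        trajectory.append(theta)
    return trajectory

# For a small release angle, the period should be close to the
# small-angle result 2*pi*sqrt(L/g).
angles = simulate_pendulum(0.1)
```

Plotting the bob at (L*sin(theta), -L*cos(theta)) for each stored angle gives the swinging animation.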
09 Jun 2018
Some functions, such as trigonometric and special functions, are hard to compute directly, which is why approximations are frequently used; sometimes an approximation is the only practical way to evaluate them. Mathematicians have worked on such approximations for centuries, from famous ones like the small-angle approximation and Taylor series to high-precision methods like Chebyshev polynomials. For problems done by hand, however, you want an approximation that is both simple and accurate, which usually means a Taylor series or something similar. The problem is that some Taylor series require many higher-order terms to converge to the desired accuracy. What if, for many functions, there were approximations in the form of a second-degree polynomial divided by another second-degree polynomial?
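Such ratios of polynomials are known as Padé approximants, and they do exist for many functions. As a small taste of why they are attractive, here is the standard [2/2] Padé approximant of exp(x), whose coefficients are a well-known result (the function name is mine; the comparison against a degree-4 Taylor polynomial is illustrative):

```python
import math

def exp_pade22(x):
    """[2/2] Pade approximant of exp(x): a quadratic over a quadratic,
    matched to the Taylor series of exp(x) through the x**4 term."""
    num = 12 + 6 * x + x * x
    den = 12 - 6 * x + x * x
    return num / den

# Both use the same Taylor information (through x**4), but the rational
# form is noticeably more accurate at x = 1:
taylor4 = sum(1.0 / math.factorial(k) for k in range(5))
pade_err = abs(exp_pade22(1.0) - math.e)     # roughly 0.004
taylor_err = abs(taylor4 - math.e)           # roughly 0.010
```

The appeal for hand calculation is that evaluating a quadratic over a quadratic takes only a handful of multiplications and one division, yet it can outperform the Taylor polynomial built from the same number of series coefficients.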