
Here’s a function, f(x). It’s expensive to evaluate, may not have an analytic expression, and you don’t know its derivative.

Your task: find the global minimum.

This is certainly a difficult task, more difficult than many other optimization problems in machine learning. Gradient descent, for one, has access to a function’s derivatives and takes advantage of mathematical shortcuts for faster evaluation.

Alternatively, in some optimization scenarios the function is cheap to evaluate. If we can get results for hundreds of variants of an input x in a few seconds, a simple grid search can be employed with good results.
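For instance, here’s a minimal sketch of a grid search over a cheap two-parameter function (the objective and grid bounds are hypothetical, chosen just for illustration):

```python
import itertools
import numpy as np

# Hypothetical cheap objective: each evaluation costs microseconds,
# so exhaustively scanning a grid is perfectly viable.
def f(x, y):
    return (x - 1.5) ** 2 + (y + 0.5) ** 2

grid = np.linspace(-5, 5, 100)
best = min(itertools.product(grid, grid), key=lambda p: f(*p))
print(f"Best point: {best}, value: {f(*best):.4f}")
```

When each call to f takes minutes instead of microseconds, this brute-force scan collapses, which is exactly the setting described above.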



Quantum computing is a buzzword that’s been thrown around quite a bit. Unfortunately, despite its virality in pop culture and quasi-scientific Internet communities, its capabilities are still quite limited.

As a very new field, quantum computing presents a complete paradigm shift from the traditional model of classical computing. Classical bits, which can be 0 or 1, are replaced in quantum computing with qubits, which instead hold probabilities.

Relying on the quirks of physics at a very, very small level, a qubit is forced into a state of 0 or 1 with a certain probability…
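That collapse is straightforward to simulate classically. Here’s a minimal sketch in plain NumPy (not a real quantum library), with the amplitudes chosen as an illustrative assumption:

```python
import numpy as np

# A qubit |psi> = a|0> + b|1>, where |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)  # equal superposition

# Measurement forces the qubit into 0 or 1; the amplitudes set the odds.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1_000, p=[abs(a) ** 2, abs(b) ** 2])
print(f"Fraction measured as 1: {samples.mean():.3f}")  # ~0.5 here
```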



Oh no! You find out that your data is corrupted: enough training instances are attached to the incorrect label for the problem to be significant. What should you do?

If you want to be a radical optimist, you could think of this data corruption as a form of regularization, depending on the level of corruption. However, if too many labels are corrupted, or the corruption is unbalanced across classes, this view may not be very practical (if it were practical to begin with, of course).
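In fact, deliberately injecting a small amount of label noise is sometimes used in exactly this spirit. A minimal sketch, with hypothetical binary labels and an assumed 5% noise rate:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1_000)  # hypothetical binary labels

noise_rate = 0.05                   # assumed corruption level
flip = rng.random(y.shape) < noise_rate
y_noisy = np.where(flip, 1 - y, y)  # flip the selected labels
print(f"Flipped {flip.sum()} of {y.size} labels")
```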

Depending on the problem, though, the model may…



If you’re a company, you’re continually seeking ways to increase profit. When a company wants to expand or change its current business, in ways big or small, a common solution is experimentation.

Companies can use experiments to see whether a change works out or not; if a change does seem promising, they can incorporate it into their broader business. Especially for digital companies, experimentation is a driving force of innovation and growth.

A common — and relatively simple — test is the A/B test. Half of users are randomly directed towards layout A, and the other half…
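A minimal sketch of that split, using deterministic hash-based bucketing so each user always sees the same variant (the user ids are hypothetical; any stable hash works):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant A or B."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

for uid in ["user-1", "user-2", "user-3", "user-4"]:
    print(uid, assign_variant(uid))
```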



Machine learning has begun to pervade all aspects of life — even those protected by anti-discrimination law. It’s being used in hiring, credit, criminal justice, advertising, education, and more.

What makes this a particularly difficult problem, though, is that machine learning algorithms are fluid and intangible. A by-product of their complexity is that they are difficult to interpret and regulate. Machine learning fairness is a rising field that seeks to cement abstract principles of “fairness” into machine learning algorithms.
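One of the simplest such principles to operationalize is demographic parity, which asks that a model’s positive-prediction rate be similar across groups. A minimal sketch on hypothetical predictions and group labels:

```python
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                 # model outputs
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # group membership

rate_a = y_pred[group == "a"].mean()  # positive rate in group a
rate_b = y_pred[group == "b"].mean()  # positive rate in group b
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
```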

We’ll look at notions and perspectives on “fairness” in three ways. Although there’s plenty more…



2021 is here, and deep learning is as active as ever; research in the field is accelerating rapidly. There are, of course, many more fascinating and exciting deep learning advancements. To me, though, the five presented here demonstrate a central undercurrent in ongoing deep learning research: how necessary is the largeness of deep learning models?

1. GrowNet

tl;dr: GrowNet applies gradient boosting to shallow neural networks. It has been rising in popularity, yielding superior results in classification, regression, and ranking. It may point toward research favoring larger ensembles of shallower networks on non-specialized (non-image, non-sequence) data.
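To make the idea concrete, here’s a minimal sketch of the core boosting loop with shallow networks as weak learners. This is a simplification of GrowNet, which also feeds each learner the previous network’s penultimate-layer features and adds a corrective step; the data here is synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2  # synthetic regression target

learners, lr = [], 0.3
residual = y.copy()
for i in range(10):
    # Each weak learner is a shallow (one-hidden-layer) network
    # fit to whatever the ensemble so far has failed to explain.
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=i)
    net.fit(X, residual)
    learners.append(net)
    residual -= lr * net.predict(X)

prediction = lr * sum(net.predict(X) for net in learners)
print(f"Train MSE: {np.mean((prediction - y) ** 2):.4f}")
```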

Gradient boosting has proven to…



All machine learning models originate in the computer lab. They’re initialized, trained, tested, redesigned, trained again, fine-tuned, and tested yet again before they are deployed.

Afterwards, they fulfill their duty in epidemiological modelling, stock trading, shopping item recommendation, and cyber attack detection, among many other purposes. Unfortunately, success in the lab may not always mean success in the real world — even if the model does well on the test data.
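One common way that lab-to-world gap shows up is distribution shift: the features the deployed model sees drift away from what it was trained on. A minimal sketch of flagging drift with a two-sample Kolmogorov-Smirnov test, on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=5_000)  # what the model trained on
live_feature = rng.normal(0.4, 1.2, size=5_000)   # what production now sees

statistic, p_value = stats.ks_2samp(train_feature, live_feature)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3g}")  # tiny p => drift
```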

This big problem, that models developed on a computer to serve a purpose in the real world can often fail in deployment, has had little research…



The model had been training across several sessions for many days on an image recognition competition. It was relatively simple, and initially scored about 0.9 AUC, the competition metric, which ranges from 0 to 1. I didn’t expect much from it at all.

That’s why I quite literally jumped out of my seat when I began the usual routine of loading the model weights and training for several epochs…
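The routine itself is unremarkable. A minimal sketch of it in Keras, with placeholder data shapes and a hypothetical checkpoint path:

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 32).astype("float32")  # placeholder training data
y = np.random.randint(0, 2, size=(256, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

model.load_weights("checkpoint.h5")  # hypothetical path from an earlier session
model.fit(X, y, epochs=5)            # resume training
```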



The A/B test is revered across the marketing and experimentation landscapes as the gold standard of testing. While there are other testing methods, like bandits, the simplicity of the A/B test usually makes it the default.

Indeed, the A/B test is incredibly simple. It’s essentially what you did in your elementary school science fair experiment: do something to one group, do another thing to another group, and see the difference in results. It’s not really an algorithm so much as it is the process of science.
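Analyzing one is correspondingly simple. A minimal sketch of a two-proportion z-test on hypothetical conversion counts for the two groups:

```python
import numpy as np
from scipy import stats

# Hypothetical results: conversions out of visitors for each variant.
conv_a, n_a = 120, 2_400
conv_b, n_b = 156, 2_400

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)    # pooled rate under the null
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.4f}")
```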

But that simplicity can be very misleading, because it’s often presented with relatively simple examples…



If you’ve been keeping up with Kaggle news, you may know that the Mechanisms of Action competition by the Laboratory for Innovation Science at Harvard recently closed. I’m proud to say that my partner, Andy Wang, and I managed to place in the top 4%: 152nd out of 4,373 teams.

What’s interesting, though, is that we’re relatively new to Kaggle competitions. In terms of machine learning, we’re not exactly professionals; we’re both students who picked up Python and machine learning from online courses and tutorials.

We didn’t get gold, of course. That’s for the top…
