Entries by Daniel

Understanding binary cross-entropy / log loss: a visual explanation

Originally posted on Towards Data Science. Introduction If you are training a binary classifier, chances are you are using binary cross-entropy / log loss as your loss function. Have you ever thought about what exactly it means to use this loss function? The thing is, given the ease of […]
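To make the teaser concrete: below is a minimal sketch of the loss itself in plain NumPy, independent of any framework. The function name and the toy labels/probabilities are illustrative, not taken from the post.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-15):
    # Clip predicted probabilities so log() never sees exactly 0 or 1
    p = np.clip(y_pred, eps, 1.0 - eps)
    # Average negative log-likelihood of the true labels
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.6])
print(binary_cross_entropy(y_true, y_pred))  # ≈ 0.299
```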

Hyper-parameters in Action! Weight Initializers

Originally posted on Towards Data Science. Introduction This is the second post of my series on hyper-parameters. In this post, I will show you the importance of properly initializing the weights of your deep neural network. We will start with a naive initialization scheme and work out its issues, like […]
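For a rough sense of what "properly" means here, a hedged sketch comparing a naive fixed-variance initializer with Glorot/Xavier scaling; the layer sizes and helper names are made up for illustration, not the post's own code.

```python
import numpy as np

rng = np.random.default_rng(42)

def naive_init(fan_in, fan_out, sigma=1.0):
    # Naive scheme: the same standard deviation no matter how wide the layer is
    return rng.normal(0.0, sigma, size=(fan_in, fan_out))

def glorot_init(fan_in, fan_out):
    # Glorot/Xavier: scale the variance by the layer sizes to keep
    # activations from exploding or saturating as depth grows
    sigma = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, sigma, size=(fan_in, fan_out))

# Push standard-normal inputs through five tanh layers and watch the spread:
# the naive scheme drives tanh into saturation, Glorot keeps it moderate
x = rng.normal(size=(1000, 256))
for init in (naive_init, glorot_init):
    h = x
    for _ in range(5):
        h = np.tanh(h @ init(256, 256))
    print(init.__name__, round(float(h.std()), 3))
```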

Hyper-parameters in Action! Introducing DeepReplay

Originally posted on Towards Data Science. Introduction In my previous post, I invited you to wonder what exactly is going on under the hood when you train a neural network. Then I investigated the role of activation functions, illustrating the effect they have on the feature space using plots and […]
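The post revolves around the DeepReplay package: record training with a Keras callback, then replay it to build visualizations. Below is a sketch of that usage pattern; treat the class names and signatures as assumptions from the era of the post, as they may differ from the library's current API.

```python
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from deepreplay.callbacks import ReplayData          # assumed API, per the post
from deepreplay.datasets.parabola import load_data   # toy dataset shipped with the package
from deepreplay.replay import Replay

X, y = load_data()

# Record everything needed to replay training into an HDF5 file
replaydata = ReplayData(X, y, filename='training.h5', group_name='part2')

model = Sequential([Dense(2, input_dim=2, activation='sigmoid', name='hidden'),
                    Dense(1, activation='sigmoid', name='output')])
model.compile(loss='binary_crossentropy', optimizer='sgd')
model.fit(X, y, epochs=150, callbacks=[replaydata])

# Replay the recorded training to visualize the hidden layer's feature space
replay = Replay(replay_filename='training.h5', group_name='part2')
fig, ax = plt.subplots(figsize=(5, 5))
fs = replay.build_feature_space(ax, layer_name='hidden')
fs.plot(epoch=60)  # snapshot of the feature space at epoch 60
```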

Hyper-parameters in Action! Activation Functions

Introduction This is the first of a series of posts aiming to present, visually and concisely, some of the fundamental moving parts of training a neural network: the hyper-parameters. Originally posted on Towards Data Science. Motivation Deep Learning is all about hyper-parameters! Maybe this is an exaggeration, but having a sound […]
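As a taste of the comparison the post draws, here is a small self-contained sketch of three activations commonly contrasted in this context (sigmoid, tanh, ReLU); the sample grid is arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

activations = {
    'sigmoid': sigmoid,
    'tanh': np.tanh,
    'relu': lambda z: np.maximum(0.0, z),
}

z = np.linspace(-4, 4, 9)
for name, f in activations.items():
    # Each activation squashes (or clips) pre-activations differently,
    # which is what reshapes the feature space during training
    print(f'{name:>7}:', np.round(f(z), 2))
```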