HandySpark is a new Python package designed to improve the PySpark user experience, especially for exploratory data analysis, including visualization capabilities.
This author has yet to write their bio. Meanwhile, let's just say that we are proud Daniel contributed a whopping 6 entries.
Entries by Daniel
Have you ever wondered about the workflow behind getting a pizza delivered to your home? I mean, the full workflow, from the sowing of tomato seeds to the bike rider buzzing at your door! It turns out, it is not so different from a Machine Learning workflow.
Photo by G. Crescoli on Unsplash. Originally posted on Towards Data Science. Introduction: If you are training a binary classifier, chances are you are using binary cross-entropy / log loss as your loss function. Have you ever thought about what exactly it means to use this loss function? The thing is, given the ease of […]
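For readers unfamiliar with the loss function the post above refers to, here is a minimal sketch of binary cross-entropy / log loss in plain Python (the function name and the clipping epsilon are illustrative choices, not taken from the post):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-15):
    """Average log loss over binary labels and predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident, correct predictions yield a low loss (~0.105 here);
# confident wrong ones would yield a high loss.
print(binary_cross_entropy([1, 0], [0.9, 0.1]))
```

The clipping step mirrors what most libraries do internally, since a predicted probability of exactly 0 or 1 would make the logarithm blow up.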
Photo by Jesper Aggergaard on Unsplash. Originally posted on Towards Data Science. Introduction: This is the second post of my series on hyper-parameters. In this post, I will show you the importance of properly initializing the weights of your deep neural network. We will start with a naive initialization scheme and work out its issues, like […]
Photo by Immo Wegmann on Unsplash. Originally posted on Towards Data Science. Introduction: In my previous post, I invited you to wonder what exactly is going on under the hood when you train a neural network. Then I investigated the role of activation functions, illustrating the effect they have on the feature space using plots and […]
Introduction: This is the first of a series of posts aiming at presenting visually, in a clear and concise way, some of the fundamental moving parts of training a neural network: the hyper-parameters. Originally posted on Towards Data Science. Motivation: Deep Learning is all about hyper-parameters! Maybe this is an exaggeration, but having a sound […]