Consider the following case:
a = [1, 2, 3]
b = a
b = 11
print("list a:", a)
print("list b:", b)
We created a new variable b and assigned the value of list a to it. After modifying the value of b, what would be the result of list a?
Consider a second case:
a = [1, 2, 3]
def func(input_list):
    input_list = 11
    return input_list

b = func(a)
print("list a:", a)
print("list b:", b)
In this example, list a is passed in as an input to our function, and we modified the input inside the…
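To make the contrast explicit, here is a small sketch (the helper names `rebind` and `mutate` are mine, not from the article): rebinding the parameter inside a function leaves the caller's list alone, while mutating it does not.

```python
def rebind(input_list):
    input_list = 11          # rebinds the local name only
    return input_list

def mutate(input_list):
    input_list.append(4)     # mutates the shared list object in place
    return input_list

a = [1, 2, 3]
print(rebind(a), a)   # 11 [1, 2, 3]        -> a is untouched
print(mutate(a), a)   # [1, 2, 3, 4] [1, 2, 3, 4]  -> a is changed
```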
In the image recognition field, a training set can easily range from gigabytes to terabytes, and with individual images growing ever larger, there is no way to preload all images into memory for model training. In this article, we will touch on how to use some handy functions in Keras to load images in batches without busting your RAM.
We will work on a concrete data set from a competition in Kaggle and learn how to:
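The core idea behind batch loading can be sketched framework-free with a plain Python generator; each batch is produced lazily, so only one batch of images would ever need to be decoded at a time (the file names below are placeholders; in Keras this job is done for you by `ImageDataGenerator.flow_from_directory`):

```python
def batch_generator(paths, batch_size):
    """Yield successive slices of `paths`, one batch at a time,
    instead of materializing everything up front."""
    for start in range(0, len(paths), batch_size):
        yield paths[start:start + batch_size]

paths = [f"img_{i}.jpg" for i in range(10)]
for batch in batch_generator(paths, batch_size=4):
    # in a real pipeline, each path in the batch would be read,
    # decoded, and stacked into an array here
    print(batch)
```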
You have probably used convolutional functions from TensorFlow, PyTorch, Keras, or other deep learning frameworks. But in this article, I would like to implement the convolutional layers from scratch, which, I believe, can help one gain a deeper understanding of each component in the convolutional process.
We are going to implement the forward propagation in 4 steps:
Let’s start with padding.
Zero padding pads 0s at the edges of an image; its benefits include:
1. It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes…
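As a sketch, zero padding can be implemented with NumPy's `np.pad`; the function name `zero_pad` and the `(m, n_H, n_W, n_C)` batch layout are my assumptions, following a common tutorial convention. Only the height and width dimensions are padded, not the batch or channel dimensions.

```python
import numpy as np

def zero_pad(X, pad):
    """Pad a batch of images X of shape (m, n_H, n_W, n_C)
    with `pad` zeros on the height and width dimensions only."""
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  mode="constant", constant_values=0)

X = np.ones((4, 3, 3, 2))
X_pad = zero_pad(X, 2)
print(X.shape, "->", X_pad.shape)   # (4, 3, 3, 2) -> (4, 7, 7, 2)
```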
In deep learning, gradient descent is generally applied to the weights to approach the optimal value, and optimization is achieved by running many epochs over large datasets. The process is computationally intensive, and more often than not you need to fit a dataset multiple times; thus, an efficient and fast optimization method that gets you to the optimum more quickly is of great importance.
In this article, we will go through some general techniques applied in the optimization process.
Firstly, let’s learn the idea of the weighted average: it smooths a series by assigning weights to past values.
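The exponentially weighted version of this idea, the building block of optimizers such as momentum, can be sketched in a few lines (the function name and `beta` default are mine):

```python
def exp_weighted_average(values, beta=0.9):
    """v_t = beta * v_(t-1) + (1 - beta) * theta_t:
    recent values dominate, older values decay geometrically."""
    v, out = 0.0, []
    for theta in values:
        v = beta * v + (1 - beta) * theta
        out.append(v)
    return out

print(exp_weighted_average([1, 1, 1], beta=0.5))  # [0.5, 0.75, 0.875]
```

Larger `beta` means heavier smoothing: each new value moves the average less, so the curve lags further behind the raw series.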
In the last post, we coded a deep dense neural network, but to have a better and more complete neural network, we need it to be more robust and resistant to overfitting. The commonly applied methods in deep neural networks, as you might have heard, are regularization and dropout. In this article, we will understand these 2 methods together and implement them in Python.
(We will directly use the functions created in the last post; if you get confused by some of the code, you may need to check the previous post.)
Regularization helps to…
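As a minimal sketch of the two ideas, assuming NumPy, the usual inverted-dropout convention, and the conventional `lambd`/`keep_prob` parameter names (the helper names here are mine, not necessarily the post's):

```python
import numpy as np

def l2_cost(cross_entropy_cost, weights, lambd, m):
    """Add the L2 penalty (lambd / (2m)) * sum ||W_l||^2 to the plain cost."""
    l2 = sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy_cost + (lambd / (2 * m)) * l2

def dropout_forward(A, keep_prob, rng):
    """Inverted dropout: zero units with probability 1 - keep_prob,
    then rescale so the expected activation is unchanged."""
    D = rng.random(A.shape) < keep_prob   # keep mask
    return (A * D) / keep_prob, D

rng = np.random.default_rng(0)
A_drop, mask = dropout_forward(np.ones((3, 4)), keep_prob=0.8, rng=rng)
```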
In the last post, we built a 1-hidden-layer neural network with basic functions in Python. To generalize and empower our network, in this post we will build an n-layer neural network for a binary classification task, in which n is customisable (it is recommended to go over my last introduction to neural networks, as the basic theory will not be repeated here).
All images are created by me; referenced images have their sources credited.
Firstly, weights need to be initialized for the different layers. Note that, by convention, the input is not counted as a layer, but the output is.
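A minimal initialization sketch, assuming NumPy and a `layer_dims` list of layer sizes from input to output (the names and the 0.01 scaling are mine, following common tutorial practice):

```python
import numpy as np

def initialize_parameters(layer_dims, seed=1):
    """layer_dims, e.g. [n_x, n_h1, ..., n_y]: sizes of input, hidden
    and output layers. Weights get small random values; biases start at 0."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params["W" + str(l)] = rng.standard_normal(
            (layer_dims[l], layer_dims[l - 1])) * 0.01
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return params

params = initialize_parameters([2, 4, 1])
print(params["W1"].shape, params["W2"].shape)  # (4, 2) (1, 4)
```

Note that `W_l` has shape `(layer_dims[l], layer_dims[l-1])` so that `W_l @ A_(l-1)` lines up, which is also why the input is not counted as a layer: it contributes only a dimension, not parameters.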
We will build a shallow dense neural network with one hidden layer; the following structure is used for illustration purposes.
Before trying to understand this post, I strongly suggest you go through my previous implementation of logistic regression, as logistic regression can be seen as a 1-layer neural network and the basic concept is actually the same.
In the graph above, we have an input vector x = (x_1, x_2) containing 2 features, 4 hidden units a_1, a_2, a_3 and a_4, and one output value y_1 in [0, 1]. (consider …
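A forward pass through such a network can be sketched as follows, assuming a tanh hidden activation and a sigmoid output, a common choice for binary classification (the post itself may use a different hidden activation):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One hidden layer: a = tanh(W1 x + b1), y_hat = sigmoid(W2 a + b2)."""
    a = np.tanh(W1 @ x + b1)
    return sigmoid(W2 @ a + b2)

rng = np.random.default_rng(0)
x = np.array([[0.5], [-1.2]])                  # 2 input features
W1, b1 = rng.standard_normal((4, 2)), np.zeros((4, 1))  # 4 hidden units
W2, b2 = rng.standard_normal((1, 4)), np.zeros((1, 1))  # 1 output unit
y_hat = forward(x, W1, b1, W2, b2)             # a value in (0, 1)
```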
Say we are doing a classic prediction task, where we are given an input vector with $n$ variables:
And to predict 1 response variable $y$ (it may be next year's sales, a house price, etc.), the simplest form is to use linear regression to make the prediction with the formula:
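With weights $w_1, \dots, w_n$ and an intercept $b$, the standard linear regression prediction is:

```latex
\hat{y} = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b = \mathbf{w}^\top \mathbf{x} + b
```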
For regression tasks, we do not always pursue a single point-accurate prediction; in fact, our predictions are always somewhat inaccurate. So instead of looking for absolute precision, sometimes a prediction interval is required, in which case we need quantile regression, where we predict an interval estimate of our target.
Fortunately, the powerful lightGBM has made quantile prediction possible. The major difference of quantile regression from general regression lies in the loss function, called the pinball loss or quantile loss. There is a good explanation of pinball loss here; it has the formula:
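The pinball loss for a target quantile $\tau$ penalizes under- and over-prediction asymmetrically: $L_\tau(y, \hat{y}) = \tau\,(y - \hat{y})$ if $y \ge \hat{y}$, and $(1 - \tau)\,(\hat{y} - y)$ otherwise. A direct sketch in Python (the function name is mine):

```python
def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss for one prediction:
    tau * (y - y_hat) when under-predicting, (1 - tau) * (y_hat - y) otherwise."""
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff

# With tau = 0.9, under-prediction is penalised 9x more than over-prediction,
# pushing the model toward the 90th percentile:
print(pinball_loss(10, 8, 0.9))   # 1.8
print(pinball_loss(10, 12, 0.9))  # 0.2
```

In LightGBM itself, this loss is typically selected with `objective="quantile"`, passing the target quantile through the `alpha` parameter.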
The very idea of creating my own app and deploying it on the cloud so that everyone can use it is super exciting to me, and it is what inspired me to write this post. If it intrigues you too, please follow along; from this post, you will learn how to deploy a Python app step by step.
You will need to wrap your idea in an app, or rather an API, which can process calls from the internet. An example is here. This is a Flask app, where the key lies in the…