This is a simple practice session on neural networks in Python. By the end of this lesson, you should be able to identify some of the limitations of the built-in modules in scikit-learn.
Let us start with a simple example: the Boolean OR gate.
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Perceptron
import sklearn.metrics as metric
import numpy as np
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [1, 1, 1, 0]
X_testing = X_training
y_true = y_training
The following code shows how to use a Perceptron to train, predict, and measure the accuracy of the prediction. You can also inspect the weights of the neural network.
ptn = Perceptron(max_iter=500)           # set the method
ptn.fit(X_training, y_training)          # training
y_pred = ptn.predict(X_testing)          # prediction
print(y_pred)                            # show the output
accuracy = metric.accuracy_score(y_true, y_pred, normalize=True)
print('accuracy=', accuracy)             # show accuracy score
print(ptn.intercept_, ptn.coef_)         # show the synapse weights w0, w1, w2, ...
The following code shows how to use the Multi-Layer Perceptron (MLP) to train, predict, and measure the accuracy of the prediction. You can also inspect the weights of the neural network.
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(1, 1), activation='logistic')  # set the method
mlp.fit(X_training, y_training)          # training
y_pred = mlp.predict(X_testing)          # prediction
print(y_pred)                            # show the output
accuracy = metric.accuracy_score(np.array(y_true).flatten(), np.array(y_pred).flatten(), normalize=True)
print('accuracy=', accuracy)             # show accuracy score
print([coef.shape for coef in mlp.coefs_])  # shapes of the synapse weight matrices
mlp.coefs_                               # synapse weights
Tips
Instead of {0, 1}, we can also change the values to bipolar {-1, +1}. Try the following OR gate using the Perceptron and MLP.
X_training = [[ 1,  1],
              [ 1, -1],
              [-1,  1],
              [-1, -1]]
y_training = [1, 1, 1, -1]
X_testing = X_training
y_true = y_training
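A minimal sketch of how the bipolar OR gate might be trained with the Perceptron (the `random_state` is added here only to make the run repeatable; otherwise the settings mirror the binary example above):

```python
from sklearn.linear_model import Perceptron
import sklearn.metrics as metric

X_training = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
y_training = [1, 1, 1, -1]

# Bipolar OR is still linearly separable, so the Perceptron
# should reach perfect training accuracy.
ptn = Perceptron(max_iter=500, random_state=1)
ptn.fit(X_training, y_training)
y_pred = ptn.predict(X_training)
accuracy = metric.accuracy_score(y_training, y_pred)
print('accuracy=', accuracy)
```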
Train on the AND gate data below and predict it back using the Perceptron and MLP.
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [1, 0, 0, 0]
X_testing = X_training
y_true = y_training
Modify the Boolean AND training data above into bipolar form, then use the Perceptron and MLP to train on and predict back this gate.
Try to use the Perceptron and MLP to train on and predict the following dataset. What happens? Why do you think the error occurs?
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [[1, 1],
              [1, 1],
              [1, 1],
              [0, 1]]
X_testing = X_training
y_true = y_training
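One way to see where the error comes from is the sketch below (the MLP settings are just one possible choice): the Perceptron only accepts a one-dimensional target vector, whereas MLPClassifier can treat a two-column 0/1 target as a multilabel problem.

```python
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X_training = [[1, 1], [1, 0], [0, 1], [0, 0]]
y_training = [[1, 1], [1, 1], [1, 1], [0, 1]]

# The Perceptron expects a 1-D target, so a 2-D y raises an error.
perceptron_failed = False
try:
    Perceptron(max_iter=500).fit(X_training, y_training)
except ValueError as e:
    perceptron_failed = True
    print('Perceptron error:', e)

# MLPClassifier interprets a 0/1 matrix target as multilabel classification.
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(4,),
                    activation='logistic', random_state=1, max_iter=1000)
mlp.fit(X_training, y_training)
y_pred = mlp.predict(X_training)
print(y_pred)
```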
Use the Perceptron and MLP to solve the following Boolean Exclusive OR (XOR) problem. How can you improve the accuracy? What is the minimum number of hidden layers needed to reach 100% accuracy?
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [1, 0, 0, 1]
X_testing = X_training
y_true = y_training
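As a hint, the sketch below shows an MLP with a single hidden layer attempting XOR (the layer size and `random_state` are one possible choice, not the only one); compare its training accuracy with the Perceptron's.

```python
from sklearn.neural_network import MLPClassifier
import sklearn.metrics as metric

X_training = [[1, 1], [1, 0], [0, 1], [0, 0]]
y_training = [1, 0, 0, 1]

# XOR is not linearly separable, so the Perceptron cannot solve it;
# an MLP with at least one hidden layer can represent it.
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(4,),
                    activation='logistic', random_state=1, max_iter=1000)
mlp.fit(X_training, y_training)
y_pred = mlp.predict(X_training)
print('accuracy=', metric.accuracy_score(y_training, y_pred))
```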
Train the Perceptron and MLP to solve the following Boolean tautology (all outputs are 1). What happens?
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [1, 1, 1, 1]
X_testing = X_training
y_true = y_training
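A sketch of what happens with the Perceptron: with only one class in the target there is no decision boundary to learn, and scikit-learn refuses to train.

```python
from sklearn.linear_model import Perceptron

X_training = [[1, 1], [1, 0], [0, 1], [0, 0]]
y_training = [1, 1, 1, 1]

# All targets belong to a single class, so there is nothing to separate
# and fit() raises a ValueError.
failed = False
try:
    Perceptron(max_iter=500).fit(X_training, y_training)
except ValueError as e:
    failed = True
    print('Perceptron error:', e)
```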
Use the Perceptron and MLP to solve the following Boolean Exclusive NOR (XNOR) problem. How can you improve the accuracy? What is the minimum number of hidden layers needed to reach 100% accuracy?
X_training = [[1, 1],
              [1, 0],
              [0, 1],
              [0, 0]]
y_training = [0, 1, 1, 0]
X_testing = X_training
y_true = y_training
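XNOR is XOR with the labels flipped, so the same remedy applies: at least one hidden layer. The sketch below tries `tanh` as an alternative activation (the exact settings are an assumption to experiment with):

```python
from sklearn.neural_network import MLPClassifier

X_training = [[1, 1], [1, 0], [0, 1], [0, 0]]
y_training = [0, 1, 1, 0]

# Like XOR, XNOR needs a hidden layer; tanh is one alternative activation.
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(4,),
                    activation='tanh', random_state=1, max_iter=1000)
mlp.fit(X_training, y_training)
y_pred = mlp.predict(X_training)
print(y_pred)
```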
Try to train a neural network on the following data. Experiment by changing the solver, the hidden layers, and the activation function to improve the accuracy.
X_training = [[ 1,  1,  0],
              [ 1, -1, -1],
              [-1,  1,  1],
              [-1, -1,  1],
              [ 0,  1, -1],
              [ 0, -1, -1],
              [ 1,  1,  1]]
y_training = [[1, 0],
              [0, 1],
              [1, 1],
              [1, 0],
              [1, 0],
              [1, 1],
              [1, 1]]
X_testing = X_training
y_true = y_training
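As a starting point for the experiment, here is one possible configuration (the solver, layer size, and activation below are only an initial guess to vary):

```python
from sklearn.neural_network import MLPClassifier

X_training = [[ 1,  1,  0], [ 1, -1, -1], [-1,  1,  1], [-1, -1,  1],
              [ 0,  1, -1], [ 0, -1, -1], [ 1,  1,  1]]
y_training = [[1, 0], [0, 1], [1, 1], [1, 0], [1, 0], [1, 1], [1, 1]]

# A two-column 0/1 target is treated as multilabel by MLPClassifier.
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(6,),
                    activation='tanh', random_state=1, max_iter=2000)
mlp.fit(X_training, y_training)
y_pred = mlp.predict(X_training)
print(y_pred)
```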
Train the Perceptron and MLP to solve the following addition problem. What happens? Why can they not solve such a simple problem?
X_training = [[ 1,  1],
              [ 1,  2.5],
              [-1,  1.5],
              [-1, -1],
              [ 0,  1],
              [ 0, -1],
              [ 1,  1]]
y_training = [2, 3.5, 0.5, 0, 1, -1, 2]
X_testing = X_training
y_true = y_training
The following practice session comes from my Neural Network book. Suppose we have the following 10 rows of training data. The data are part of a transportation study on mode choice (bus, car, or train) among commuters along a major route in a city, gathered through a questionnaire. For simplicity and clarity, we selected only 4 attributes. 'Gender' is binary, while 'car ownership' is a quantitative integer. 'Travel cost/km' is a quantitative ratio variable that was converted into an ordinal type, and 'income level' is also ordinal. Train a neural network to predict the transportation mode of a person given the four attributes: gender, car ownership, travel cost, and income level. After training the neural network on the data below, try to predict the transportation mode choice of the following instance: a female without car ownership, willing to pay an expensive travel cost, and having a medium income level.
| Gender | Car ownership | Travel Cost | Income Level | Transportation mode |
|---|---|---|---|---|
| Male | 0 | Cheap | Low | Bus |
| Male | 1 | Cheap | Medium | Bus |
| Female | 1 | Cheap | Medium | Train |
| Female | 0 | Cheap | Low | Bus |
| Male | 1 | Cheap | Medium | Bus |
| Male | 0 | Standard | Medium | Train |
| Female | 1 | Standard | Medium | Train |
| Female | 1 | Expensive | High | Car |
| Male | 2 | Expensive | Medium | Car |
| Female | 2 | Expensive | High | Car |
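One possible sketch of this exercise is shown below. The integer encoding of the categorical attributes is an assumption (Gender: Male=0, Female=1; Travel Cost: Cheap=0, Standard=1, Expensive=2; Income: Low=0, Medium=1, High=2), as are the network settings; car ownership is already numeric.

```python
from sklearn.neural_network import MLPClassifier

# Columns: gender, car ownership, travel cost, income level
X_training = [
    [0, 0, 0, 0],  # Male,   0 cars, Cheap,     Low    -> Bus
    [0, 1, 0, 1],  # Male,   1 car,  Cheap,     Medium -> Bus
    [1, 1, 0, 1],  # Female, 1 car,  Cheap,     Medium -> Train
    [1, 0, 0, 0],  # Female, 0 cars, Cheap,     Low    -> Bus
    [0, 1, 0, 1],  # Male,   1 car,  Cheap,     Medium -> Bus
    [0, 0, 1, 1],  # Male,   0 cars, Standard,  Medium -> Train
    [1, 1, 1, 1],  # Female, 1 car,  Standard,  Medium -> Train
    [1, 1, 2, 2],  # Female, 1 car,  Expensive, High   -> Car
    [0, 2, 2, 1],  # Male,   2 cars, Expensive, Medium -> Car
    [1, 2, 2, 2],  # Female, 2 cars, Expensive, High   -> Car
]
y_training = ['Bus', 'Bus', 'Train', 'Bus', 'Bus',
              'Train', 'Train', 'Car', 'Car', 'Car']

mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(5,),
                    activation='logistic', random_state=1, max_iter=2000)
mlp.fit(X_training, y_training)

# Female, no car, expensive travel cost, medium income
mode = mlp.predict([[1, 0, 2, 1]])[0]
print(mode)
```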
last update: Nov 2017
Cite this tutorial as [Teknomo (2017) Practices Neural Network in Python](http://people.revoledu.com/kardi/tutorial/NeuralNetwork/)
See Also: Python for Data Science
Visit www.Revoledu.com for more tutorials in Data Science
Copyright © 2017 Kardi Teknomo
Permission is granted to share this notebook as long as the copyright notice is intact.