Learn TensorFlow for free.

Below is a list of free resources for learning TensorFlow:

  1. TensorFlow website: www.tensorflow.org
  2. Udacity free course: www.udacity.com
  3. Google Cloud Platform: cloud.google.com
  4. Coursera free course: www.coursera.org
  5. Machine Learning with TensorFlow by Nishant Shukla : www.tensorflowbook.com
  6. ‘First Contact With TensorFlow’ by Prof. Jordi Torres: jorditorres.org (also available to order from Amazon)
  7. Kadenze Academy: www.kadenze.com
  8. OpenShift: blog.openshift.com
  9. Tutorial by pkmital: github.com
  10. Tutorial by HyunsuLee: github.com
  11. Tutorial by orcaman: github.com
  12. Stanford CS224d: Lecture 7

I hope the above list is useful.

Numerai deep learning example.

In a previous post on Numerai, I described some very basic code to get into the world of machine learning competitions. This one is a continuation, so if you haven’t read it I recommend doing so first (here). In this post, we will add a little more complexity to the whole process. We will split out 20% of the training data as a validation set, so we can train different models and compare their performance. And we will dive into a deep neural net as the predictive model.

Ok, let’s do some machine learning…

Let’s start with the imports; this step is similar to what we did for the first model. Apart from Pandas, we import StandardScaler to preprocess the data before feeding it into the neural net, train_test_split to split out 20% of the data as a test set, and roc_auc_score, a useful metric for checking and comparing model performance. We will also need the neural net itself – that will be the Classifier from ‘scikit-neuralnetwork’ (sknn).

Imports first:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sknn.mlp import Classifier, Layer

As we have all the required imports, we can load the data from csv (remember to update the paths to the downloaded files):

train = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_training_data.csv")
test = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_tournament_data.csv")
sub = pd.read_csv("/home/m/Numerai/numerai_datasets/example_predictions.csv")

Some basic data manipulation required:

sub["t_id"] = test["t_id"]                  # carry row ids over to the submission frame
test.drop("t_id", axis=1, inplace=True)     # ids are not features
labels = train["target"]                    # separate the target...
train.drop("target", axis=1, inplace=True)  # ...from the features
train = train.values                        # convert to plain NumPy arrays
labels = labels.values

In the next four lines, we will do what is called standardization. The result of standardization (or Z-score normalization) is that the features are rescaled so that each has mean μ=0 and standard deviation σ=1, the scale of a standard normal distribution.

scaler = StandardScaler()
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
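To see what the scaler actually does, here is a minimal sketch on synthetic data (made-up numbers, since the Numerai files aren’t needed for the check): after fitting and transforming, every column should come out with mean close to 0 and standard deviation close to 1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic features with an arbitrary mean and spread (illustration only)
X = rng.normal(loc=5.0, scale=3.0, size=(1000, 4))

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Each column is now rescaled to mean ~0 and std ~1
print(X_scaled.mean(axis=0).round(6))
print(X_scaled.std(axis=0).round(6))
```

Note that, as in the post, the scaler is fitted on the training data and the same transform is then applied to the test set.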

The next line of code splits the original training set into train and test parts: we set aside 20% of the original training data so we can check out-of-sample performance and guard against overfitting.

X_train, X_test, y_train, y_test = train_test_split(train,labels, test_size=0.2, random_state=35)
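The same call on a small synthetic array (toy shapes, for illustration only) makes the 80/20 split visible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins: 100 rows, 5 features
X = np.arange(500).reshape(100, 5)
y = np.arange(100) % 2

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=35)

print(X_train.shape, X_test.shape)  # 80 rows for training, 20 held out
```

Fixing random_state makes the split reproducible, so different models are compared on exactly the same held-out rows.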

Having preprocessed the data, we are ready to define the model: the number of layers in the neural network and the number of neurons in each layer. A few lines of code do it:

nn = Classifier(
    layers=[
        Layer("Tanh", units=50),
        Layer("Tanh", units=200),
        Layer("Tanh", units=200),
        Layer("Tanh", units=50),
        Layer("Softmax")],
    learning_rule='adadelta',
    learning_rate=0.01,
    n_iter=5,
    verbose=1,
    loss_type='mcc')

“units=50” – sets the number of neurons in the layer; the number of inputs to the first layer is determined by the number of features in the data we feed in.

“Tanh” – the activation function; you can use others as well, e.g. Rectifier, ExpLin, Sigmoid, or Convolution. In the last layer the activation function is Softmax – the usual output-layer function for classification tasks. Our network has five layers with different numbers of neurons; there are no strict rules about the number of neurons and layers, so it is more art than science – you just need to try different versions and check what works best.


“learning_rule=’adadelta'” – sets the learning algorithm to ‘adadelta’; other choices are available: sgd, momentum, nesterov, adagrad, or rmsprop. Just try them and check what works best.

“learning_rate=0.01” – the learning rate. As a rule of thumb you start with the ‘default’ value of 0.01, but other values can be used, mostly anything from 0.001 to 0.1.

“n_iter=5” – the number of iterations (epochs). The higher the number, the longer learning will take; 5 is only an example. Watch the error after each epoch – at some point it stops dropping. I have seen anything from 50 to 5000, so feel free to play with it.

“verbose=1” – this parameter will let us see progress on screen.

“loss_type=’mcc’ ” – the loss function; ‘mcc’ (mean categorical cross-entropy) is typical for classification tasks.
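If you cannot get scikit-neuralnetwork installed (it is an older library), a roughly similar architecture can be sketched with scikit-learn’s own MLPClassifier. This is only an approximation, not the same model: MLPClassifier uses a single activation for all hidden layers, trains with log-loss, and does not offer ‘adadelta’, so ‘adam’ stands in here.

```python
from sklearn.neural_network import MLPClassifier

# Approximate sklearn counterpart of the sknn network above:
# four Tanh hidden layers (50, 200, 200, 50); the softmax output
# layer is implicit in MLPClassifier for classification.
nn = MLPClassifier(
    hidden_layer_sizes=(50, 200, 200, 50),
    activation="tanh",
    solver="adam",           # stand-in for 'adadelta', which sklearn lacks
    learning_rate_init=0.01,
    max_iter=5,              # corresponds to n_iter=5; raise for real training
    verbose=True,
    random_state=35)
```

The fit / predict_proba calls used later in the post work the same way on this object.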

With the model set up, we can feed in the data and train it. Depending on how powerful your PC is, this can take from seconds to days; for serious neural network training, GPU computing is recommended.

nn.fit(X_train, y_train)

The line below validates the model against the 20% of data we set aside earlier.

print('Overall AUC:', roc_auc_score(y_test, nn.predict_proba(X_test)[:,1]))
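As a quick intuition for the AUC numbers this prints: an AUC of 1.0 means the model ranks every positive above every negative, 0.5 means no skill, and in general the score equals the fraction of positive/negative pairs ranked correctly. A toy check (made-up scores):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]

# Both positives scored above both negatives -> perfect ranking
assert roc_auc_score(y_true, [0.1, 0.2, 0.8, 0.9]) == 1.0

# Exactly reversed ranking -> worst possible score
assert roc_auc_score(y_true, [0.9, 0.8, 0.2, 0.1]) == 0.0

# Mixed scores: 3 of the 4 positive/negative pairs are ranked correctly
print(roc_auc_score(y_true, [0.6, 0.2, 0.7, 0.3]))  # 0.75
```

This is why we pass predict_proba scores, not hard 0/1 predictions: AUC measures the quality of the ranking.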

Using the above code we can play around with different settings and network architectures, checking the performance. After finding the best settings, we can generate the prediction to upload to Numerai – just run the last three lines (remembering to update the path where the file is saved):

y_pred = nn.predict_proba(test)
sub["probability"]=y_pred[:,1]
sub.to_csv("/home/m/Numerai/numerai_datasets/Prediction.csv", index=False)

I hope the above text was useful and you can now start playing around with deep learning for Numerai trading predictions. If you have any comments or questions, please feel free to contact me.

Full code below:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sknn.mlp import Classifier, Layer

train = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_training_data.csv")
test = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_tournament_data.csv")
sub = pd.read_csv("/home/m/Numerai/numerai_datasets/example_predictions.csv")

sub["t_id"] = test["t_id"]
test.drop("t_id", axis=1, inplace=True)

labels = train["target"]
train.drop("target", axis=1, inplace=True)

train = train.values
labels = labels.values

scaler = StandardScaler()
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)

X_train, X_test, y_train, y_test = train_test_split(train,labels, test_size=0.2, random_state=35)

nn = Classifier(
    layers=[
        Layer("Tanh", units=50),
        Layer("Tanh", units=200),
        Layer("Tanh", units=200),
        Layer("Tanh", units=50),
        Layer("Softmax")],
    learning_rule='adadelta',
    learning_rate=0.01,
    n_iter=5,
    verbose=1,
    loss_type='mcc')

nn.fit(X_train, y_train)

print('Overall AUC:', roc_auc_score(y_test, nn.predict_proba(X_test)[:,1]))

y_pred = nn.predict_proba(test)
sub["probability"]=y_pred[:,1]
sub.to_csv("/home/m/Numerai/numerai_datasets/Prediction.csv", index=False)

Machine learning competitions – Numerai example code.

In this post, I want to share how simple it is to start competing in machine learning tournaments like Numerai. I will go step by step, line by line, explaining what does what and why it is required.

Numerai is a global artificial intelligence competition for predicting financial markets. It is a little similar to Kaggle, but with clean datasets, so we can skip the long data-cleansing process. You just download the data, build a model, and upload your predictions – that’s it. To get the most out of the data you would normally do some feature engineering first, but for the simplicity of this intro we will skip that bit. One more thing we will skip is splitting out a validation set; the main aim of this exercise is to fit a ‘machine learning’ model to the training dataset and then use the fitted model to generate a prediction. Altogether it shouldn’t take more than 14 simple lines of Python code, which you can run as one piece or part by part in interactive mode.

Let’s go, let’s do some machine learning…

The first thing to do is go to numer.ai, click ‘Download Training Data’, and download the datasets. After unzipping the archive you will have a few files, of which we are mainly interested in three. It is worth noting the path to the folder, as we will need it later.

I assume you have Python and the required libraries installed; if not, there are plenty of online tutorials on how to do it – I recommend installing the Anaconda distribution. It is time to open whatever IDE you use and start coding. The first few lines just import what we will use later: Pandas and scikit-learn.

import pandas as pd 
from sklearn.ensemble import GradientBoostingClassifier

Pandas is used to import the data from csv files and do some basic data manipulation; GradientBoostingClassifier, part of scikit-learn, is the model we will fit and use to predict. Now that the required libraries are imported, let’s use them. In the next three lines, we read the csv data into memory with the ‘read_csv’ method from pandas; all you need to do is amend the full path to each file, wherever you extracted numerai_datasets.zip.

train = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_training_data.csv")
test  = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_tournament_data.csv")   
sub  = pd.read_csv("/home/m/Numerai/numerai_datasets/example_predictions.csv")

The above code creates three data frames from the csv files we previously extracted from the downloaded numerai_datasets.zip.

‘train’ – this dataset contains all the data required to train our model: both ‘features’ and ‘labels’. You could say it has both the questions and the answers our model will ‘learn’ from.

‘test’ – this one contains the features but not the ‘labels’; it contains the questions, and our model will deliver the answers.

‘sub’ – just a template for uploading our prediction.

Let’s move on. The next line copies all unique row ids from ‘test’ to ‘sub’, to make sure each predicted value is assigned to the right set of features – like putting the question number next to our answer, so whoever marks the test will know.

sub["t_id"]=test["t_id"]

As we have copied the ids to ‘sub’, we no longer need them in ‘test’ (all rows stay in the same order), so we can get rid of them.

test.drop("t_id", axis=1,inplace=True)

In the next two lines, we separate the ‘labels’, or target values, from the train dataset.

labels=train["target"]
train.drop("target", axis=1,inplace=True)

With the ‘train’ dataset prepared, we can get our model to learn from it. First we select the model we want to use; it will be GradientBoostingClassifier from scikit-learn – no specific reason for this one, you can use whatever you like, e.g. a random forest or logistic regression…

grd = GradientBoostingClassifier()

As we have a model defined, let’s have it learn from ‘train’ data.

grd.fit(train,labels)

Ok, now our model is trained and ready to make predictions. As this is a ‘classification’ task, we will predict the probability of each set of features belonging to one of the two classes, ‘0’ or ‘1’.

y_pred = grd.predict_proba(test)
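To see why the next step picks column [:,1]: predict_proba returns one row per sample and one column per class, with the columns summing to 1. A minimal sketch on synthetic data (toy features and a made-up target rule, for illustration only):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Tiny synthetic problem: 20 rows, 3 features
rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = (X[:, 0] > 0.5).astype(int)  # arbitrary rule just to create two classes

grd = GradientBoostingClassifier().fit(X, y)
proba = grd.predict_proba(X)

print(proba.shape)  # (n_samples, n_classes) -> (20, 2)
print(proba[0])     # [P(class 0), P(class 1)] for the first row
```

Column 0 holds the probability of class ‘0’ and column 1 the probability of class ‘1’, which is the value Numerai expects.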

We have a long list of predicted probabilities called ‘y_pred’; let’s attach it to the ids we separated previously.

sub["probability"]=y_pred[:,1]

And save it in csv format, to get uploaded.

sub.to_csv("/home/m/Numerai/numerai_datasets/SimplePrediction.csv", index=False)
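The whole pipeline above can be exercised end-to-end on synthetic data, so it runs without the Numerai csv files; the frame shapes and column names below are made up to mimic the real ones.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for the three csv files (illustration only)
rng = np.random.RandomState(1)
train = pd.DataFrame(rng.rand(100, 4), columns=[f"feature{i}" for i in range(4)])
train["target"] = (train["feature0"] + rng.rand(100) > 1.0).astype(int)
test = pd.DataFrame(rng.rand(30, 4), columns=[f"feature{i}" for i in range(4)])
test.insert(0, "t_id", range(30))
sub = pd.DataFrame({"t_id": [0] * 30, "probability": [0.0] * 30})

# The same steps as in the post
sub["t_id"] = test["t_id"]
test.drop("t_id", axis=1, inplace=True)
labels = train["target"]
train.drop("target", axis=1, inplace=True)

grd = GradientBoostingClassifier()
grd.fit(train, labels)
sub["probability"] = grd.predict_proba(test)[:, 1]
print(sub.head())
```

Swapping the synthetic frames for the real read_csv calls gives exactly the submission file described above.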

The last thing to do is go back to numer.ai website and click on ‘Upload Predictions’… Good luck.

This was a very simplistic, introductory example to start playing with numer.ai competitions and machine learning. I will try to come back with gradually more complicated versions; if you have any questions, suggestions or comments, please go to the ‘About’ section and contact me directly.

The full code below:

import pandas as pd 
from sklearn.ensemble import GradientBoostingClassifier 
train = pd.read_csv("C:/Users/Downloads/numerai_datasets/numerai_training_data.csv") 
test = pd.read_csv("C:/Users/Downloads/numerai_datasets/numerai_tournament_data.csv") 
sub = pd.read_csv("C:/Users/Downloads/numerai_datasets/example_predictions.csv") 
sub["t_id"]=test["t_id"] 
test.drop("t_id", axis=1,inplace=True) 
labels=train["target"] 
train.drop("target", axis=1,inplace=True)
grd = GradientBoostingClassifier() 
grd.fit(train,labels) 
y_pred = grd.predict_proba(test) 
sub["probability"]=y_pred[:,1] 
sub.to_csv("C:/Users/Downloads/numerai_datasets/SimplePrediction.csv", index=False)