
Introduction

Recently, I came across a quote about Julia:

“Walks like Python. Runs like C.”


The line above says a lot about why I chose to write this article. I came across Julia
a while ago; even though it was still in its early stages, it was already creating ripples in the
numerical computing space. Julia comes straight out of MIT: a high-level
language with syntax as friendly as Python and performance as competitive
as C. That's not all. It also provides a sophisticated compiler, distributed parallel
execution, numerical accuracy, and an extensive mathematical function library.

But this article isn't about praising Julia; it is about how you can use it in your
workflow as a data scientist without going through the hours of confusion that usually
come with picking up a new language. Read more about Why Julia? here.

Table of Contents

1. Installation
2. Basics of Julia for Data Analysis
   1. Running your first program
   2. Julia Data Structures
   3. Loops and Conditionals in Julia
3. Exploratory Analysis with Julia
   1. Introduction to DataFrames.jl
   2. Visualisation in Julia using Plots.jl
   3. Bonus – Interactive visualizations using Plotly
4. Data Munging in Julia
5. Building a Predictive ML Model
   1. Logistic Regression
   2. Decision Tree
   3. Random Forest
6. Calling R and Python Libraries in Julia
   1. Using pandas with Julia
   2. Using ggplot2 in Julia

Installation
Before we can start our journey into the world of Julia, we need to set up our
environment with the necessary tools and libraries for data science.

Installing Julia
1. Download Julia for your specific system from https://julialang.org/downloads/

2. Follow the platform-specific installation instructions at https://julialang.org/downloads/platform.html

3. If you have done everything correctly, you'll be able to open a Julia prompt from the terminal.

Installing IJulia and Jupyter Notebook

The Jupyter notebook has become an environment of choice for data science, since it is
really useful for both fast experimentation and documenting your steps. There are
other environments for Julia too, like the Juno IDE, but I recommend sticking with the
notebook. Let's look at how we can set it up for Julia.

Go to the Julia prompt and type the following code:

julia> Pkg.add("IJulia")

Note that the Pkg.add() command downloads the package files and dependencies in the
background and installs them for you. For this, you need an active internet
connection; if your connection is slow, you might have to wait for some time.

After IJulia is successfully installed, you can type the following code to run it:

julia> using IJulia
julia> notebook()

By default, the notebook "dashboard" opens in your home directory ( homedir() ),
but you can open the dashboard in a different directory
with notebook(dir="/some/path").

There you have it: your environment is all set up. Let's install some important Julia
libraries that we'll need for this tutorial.

Installing Julia Packages

A simple way of installing any package in Julia is with the command Pkg.add("..").
Like Python or R, Julia has a long list of packages for data science. Instead
of installing all the packages together, we will install each one as
and when needed; that will give you a better sense of what each package does. So
we will be following that process for this article.

Basics of Julia for Data Analysis


Julia is a language that derives a lot of its syntax from other data analysis tools like R,
Python, and MATLAB. If you come from one of these backgrounds, it will take you
no time to get started with it. Let's learn some of the basic syntax. If you are in a
hurry, here's a cheat sheet comparing the syntax of all three languages:

https://cheatsheets.quantecon.org/

Running your first Julia program


1. Open your Jupyter notebook from the Julia prompt using the following commands:

julia> using IJulia
julia> notebook()

2. Click on New and select the Julia notebook from the dropdown.

There, you created your first Julia notebook! Just like you use a Jupyter notebook for
R or Python, you can write Julia code here, train your models, make plots and so
much more, all while staying in the familiar Jupyter environment.
A few things to note:
- You can name a notebook by simply clicking on the name – Untitled – in the
top left area of the notebook. The interface shows In [*] for inputs and Out[*]
for outputs.
- You can execute a cell by pressing "Shift + Enter", or "ALT + Enter" if you
want to insert an additional cell after it.

Go ahead and play around a bit with the notebook to get familiar.

Julia Data Structures


The following are some of the most common data structures we end up using when
performing data analysis in Julia (the sketch after this list illustrates each of them):

1. Vector (Array) – A vector is a 1-dimensional array. A vector can be created
by simply writing numbers separated by commas inside square brackets. Adding
a semicolon starts a new row. Vectors are widely used in linear algebra.
Note that in Julia indexing starts from 1, so to access the first
element of an array A you write A[1].

2. Matrix – Another data structure that is widely used in linear algebra; it can
be thought of as a multidimensional array. Basic operations such as transpose
and multiplication can be performed on a matrix.

3. Dictionary – A dictionary is an unordered set of key: value pairs, with the
requirement that the keys are unique (within one dictionary). You can create
a dictionary using the Dict() function.
Notice that the "=>" operator is used to link keys with their respective values. You
access the values of a dictionary through its keys.

4. String – Strings can be defined using double ( " ) or triple ( """ )
quotes. Like Python, strings in Julia are immutable (they can't
be changed once created); attempting to modify one raises an error.
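Here is a minimal sketch illustrating each of these structures (the variable names and values are just for illustration):

A = [1, 2, 3]                  # vector: a 1-dimensional array, comma-separated
A[1]                           # indexing starts at 1, so this returns 1

M = [1 2; 3 4]                 # 2x2 matrix; the semicolon starts a new row
M'                             # transpose
M * M                          # matrix multiplication

D = Dict("lang" => "Julia", "year" => 2012)   # the => operator links keys to values
D["lang"]                      # access a value through its key

s = "hello"                    # a string in double quotes
# s[1] = 'H'                   # would throw an error – strings are immutable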
Loops and Conditionals in Julia
Like most languages, Julia has a for loop, which is the most widely used
method for iteration. It has a simple syntax:

for i in [Julia Iterable]
    expression(i)
end

Here "[Julia Iterable]" can be a vector, a string or one of the more advanced data structures
we will explore in later sections. Let's take a look at a simple example: computing
the factorial of 5.

fact = 1
for i in 1:5
    fact = fact * i
end
print(fact)    # 120

(Julia also has a built-in factorial() function that computes the same result.)

Julia also supports the while loop, and conditionals like if and if/else for
selecting one group of statements over another based on the outcome of a
condition. Here is an example:

if N >= 0
    print("N is positive")
else
    print("N is negative")
end

The above code snippet checks N and prints whether it is a positive or
a negative number. Note that Julia is not indentation-sensitive like Python, but
indenting your code is good practice, which is why the code samples in this article
are well indented. The cheat sheet linked earlier also compares Julia's conditional
constructs with their counterparts in MATLAB and Python.
You can learn more about Julia basics here.
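Since the while loop was only mentioned in passing, here is a minimal sketch of it (a simple countdown, purely for illustration):

n = 3
while n > 0
    println(n)
    n = n - 1
end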

Now that we are familiar with Julia fundamentals, let's take a deep dive into
problem-solving. Yes, I mean making a predictive model! In the process, we will use
some powerful libraries and also come across the next level of data structures. We
will go through the 3 key phases:

1. Data Exploration – finding out more about the data we have

2. Data Munging – cleaning the data and playing with it to make it better suit
statistical modeling

3. Predictive Modeling – running the actual algorithms and having fun

Exploratory Analysis using Julia (Analytics Vidhya Hackathon)
The first step in any kind of data analysis is exploring the dataset at hand. There
are two ways to do that: the first is exploring the data tables and applying statistical
methods to find patterns in the numbers, and the second is plotting the data to find
patterns visually.

The former requires an advanced data structure that is capable of handling multiple
operations and at the same time is fast and scalable. Like many other data analysis
tools, Julia provides one such structure, called a DataFrame. You need to install the
following package to use it:

julia> Pkg.add("DataFrames")

Introduction to DataFrames.jl
A dataframe is similar to an Excel workbook – you have column names referring to
columns, and you have rows that can be accessed by row number.
The essential difference is that, in the case of dataframes, the column names and
row numbers are known as the column and row index. This is similar to pandas.DataFrame
in Python or data.table in R.
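As a quick illustration, a small dataframe can also be built by hand (the column names here are made up):

using DataFrames
df = DataFrame(ID = 1:3, Amount = [100, 150, 200])
df[:Amount]    # access the Amount column by name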

Let's work with a real problem. We are going to analyze a dataset from an Analytics
Vidhya hackathon as practice.

Practice dataset: Loan Prediction Problem

You can download the dataset from here, along with the description of its variables.
Importing libraries and the data set
In Julia, we import a library with the following command:

using <library_name>

Let's first import our DataFrames library and load the train.csv file of the data set:

using DataFrames
train = readtable("train.csv")

Quick Data Exploration


Once the data set is loaded, we do some preliminary exploration on it, such as finding the
size (number of rows and columns) of the data set, the names of the columns, and so on.
The function size(train) returns the number of rows and columns of the data set,
and names(train) returns the names of the columns (features).
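For instance (the output below reflects this particular data set):

size(train)     # (614, 13) – 614 rows and 13 columns
names(train)    # the 13 column names (features)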

The data set is not that large (only 614 rows); knowing the size of the data set
sometimes affects our choice of algorithm. We have 13 columns (features), which is
also not much; with a large number of features we would go for
techniques like dimensionality reduction. Let's look at the first 10 rows to get a
better feel for the data. The head(,n) function reads the first
n rows of a dataset.

head(train, 10)
A number of preliminary inferences can be drawn from the above table, such as:

- Gender, Married, Education, Self_Employed and Credit_History are categorical
variables with two categories each, while Property_Area has three.
- Loan_ID is just a unique identifier; it provides no information about
whether the loan gets accepted or not.
- Some columns, like LoanAmount, have missing values.

Note that these inferences are just preliminary; they will either get rejected or
refined after further exploration.

I am interested in analyzing the LoanAmount column, so let's have a closer look at it.

describe(train[:LoanAmount])

The describe() function provides the count (length), mean, median, minimum,
quartiles and maximum in its output (read this article to refresh the basic statistics
needed to understand population distributions).

Please note that we can get an idea of a possible skew in the data by comparing
the mean to the median, i.e. the 50% figure.

For the non-numerical values (e.g. Property_Area, Credit_History etc.), we can look
at the frequency distribution to understand whether they make sense or not. A
frequency table can be printed with the countmap() function from the StatsBase package:

using StatsBase
countmap(train[:Property_Area])

Similarly, we can look at the unique values of Credit_History. Note that
dataframe_name[:column_name] is the basic indexing technique for accessing a
particular column of a dataframe. A column can also be accessed by its index.
For more information, refer to the documentation.
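A minimal sketch of both access styles (the integer index below is illustrative – it depends on the column's position in the dataframe):

unique(train[:Credit_History])   # unique values, accessing the column by name
train[11]                        # the same kind of access, by column position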

Visualisation in Julia using Plots.jl

Another effective way of exploring the data is visually, using various kinds of
plots; as the saying rightly goes, "A picture is worth a thousand words".

Julia doesn't provide a plotting library of its own, but it lets you use a plotting
library of your choice from Julia programs. In order to use this functionality you
need to install the following packages:

julia> Pkg.add("Plots")
julia> Pkg.add("StatPlots")
julia> Pkg.add("PyPlot")

The package Plots.jl provides a single frontend (interface) for any plotting
library (matplotlib, plotly, etc.) you want to use in Julia. StatPlots.jl is a supporting
package for Plots.jl. PyPlot.jl is used to work with Python's matplotlib from Julia.

Distribution analysis
Now that we are familiar with the basic data characteristics, let us study the distribution
of various variables, starting with the numeric variables ApplicantIncome
and LoanAmount.

Let's begin by plotting the histogram of ApplicantIncome using the following commands:

using Plots, StatPlots  #import required packages
pyplot()                #set the backend to matplotlib's pyplot

#plot the histogram
Plots.histogram(dropna(train[:ApplicantIncome]), bins=50, xlabel="ApplicantIncome", labels="Frequency")


Here we observe that there are a few extreme values. This is also the reason why 50
bins are required to depict the distribution clearly.

Next, we look at box plots to understand the distributions. A box plot for
ApplicantIncome can be plotted with:

Plots.boxplot(dropna(train[:ApplicantIncome]), xlabel="ApplicantIncome")
This confirms the presence of a lot of outliers/extreme values. This can be
attributed to income disparity in society. Part of it may be driven by the
fact that we are looking at people with different education levels. Let us segregate
them by Education:

Plots.boxplot(train[:Education], train[:ApplicantIncome], labels="ApplicantIncome")

We can see that there is no substantial difference between the mean incomes of
graduates and non-graduates. But there is a higher number of graduates with very
high incomes, which appear to be the outliers.

Now, let's look at the histogram and box plot of LoanAmount using the following
commands:

Plots.histogram(dropna(train[:LoanAmount]), bins=50, xlabel="LoanAmount", labels="Frequency")
Plots.boxplot(dropna(train[:LoanAmount]), ylabel="LoanAmount")

Again, there are some extreme values. Clearly, both ApplicantIncome and
LoanAmount require some amount of data munging: LoanAmount has missing as
well as extreme values, while ApplicantIncome has a few extreme values, which
demand deeper understanding. We will take this up in the coming sections.

That was a lot of useful visualization. To learn more about creating visualizations in
Julia using Plots.jl, see the Plots.jl documentation.

Bonus: Interactive visualizations using Plotly


Now is when the awesomeness of Plots.jl comes into play. The visualizations
we created so far were all good, but during exploration it helps if a plot is
interactive. We can create interactive plots in Julia using Plotly as a backend. Type
the following code:

plotly()  #use plotly as the backend
Plots.histogram(dropna(train[:ApplicantIncome]), bins=50, xlabel="ApplicantIncome", labels="Frequency")

You can do much more with Plots.jl and the various backends it supports; see the
Plots.jl documentation.

Data Munging in Julia


For those who have been following along: now is when you must lace up your
shoes and start running.

Data munging – recap of the need


During our exploration of the data, we found a few problems in the data set that
need to be solved before the data is ready for a good model. This exercise is
typically referred to as "Data Munging". Here are the problems we are already aware
of:

1. There are missing values in some variables. We should estimate those
values wisely, depending on the number of missing values and the expected
importance of the variable.
2. While looking at the distributions, we saw that ApplicantIncome and
LoanAmount seemed to contain extreme values at either end. Though they
might make intuitive sense, they should be treated appropriately.

In addition to these problems with the numerical fields, we should also look at the non-
numerical fields, i.e. Gender, Property_Area, Married, Education and Dependents, to
see if they contain any useful information.

Check missing values in the dataset


Let us look at missing values in all the variables, because most models don't
work with missing data, and even for those that do, imputing the missing values helps
more often than not. So, let us check the number of nulls / NaNs in the dataset:

showcols(train)

Though the missing values are not very high in number, many variables have them,
and each one of these should be estimated and filled in.

Note: Remember that missing values may not always be NaNs. For instance, if
Loan_Amount_Term is 0, does that make sense, or would you consider it missing?
I suppose your answer is missing, and you're right. So we should also check for values
that are impractical.
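A quick sketch of such a sanity check (the exact count depends on your copy of the data):

#how many non-missing rows have a loan term of 0?
sum(dropna(train[:Loan_Amount_Term]) .== 0)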

How to fill missing values?

There are multiple ways of fixing missing values in a dataset. Take
LoanAmount, for example: there are numerous ways to fill in the
missing values, the simplest being replacement by the mean.
The other extreme would be to build a supervised learning model that
predicts the loan amount on the basis of the other variables.
We will take the simpler approach to fix missing values in this article:

#replace missing loan amount with the mean of loan amount
train[isna.(train[:LoanAmount]), :LoanAmount] = floor(mean(dropna(train[:LoanAmount])))

#replace 0.0 loan amounts with the mean of loan amount
train[train[:LoanAmount] .== 0, :LoanAmount] = floor(mean(dropna(train[:LoanAmount])))

#replace missing gender with the mode of gender values
train[isna.(train[:Gender]), :Gender] = mode(dropna(train[:Gender]))

#replace missing married with the mode value
train[isna.(train[:Married]), :Married] = mode(dropna(train[:Married]))

#replace missing number of dependents with the mode value
train[isna.(train[:Dependents]), :Dependents] = mode(dropna(train[:Dependents]))

#replace missing values of the self_employed column with the mode
train[isna.(train[:Self_Employed]), :Self_Employed] = mode(dropna(train[:Self_Employed]))

#replace missing values of loan amount term with the mode value
train[isna.(train[:Loan_Amount_Term]), :Loan_Amount_Term] = mode(dropna(train[:Loan_Amount_Term]))

#replace missing credit history values with the mode
train[isna.(train[:Credit_History]), :Credit_History] = mode(dropna(train[:Credit_History]))

I have basically replaced all missing values in the numerical columns with their means,
and in the categorical columns with their modes. Let's walk through the code a little
more closely:

train[isna.(train[:LoanAmount]), :LoanAmount] = floor(mean(dropna(train[:LoanAmount])))

- train[:LoanAmount] – access the LoanAmount column of the dataframe.
- isna.(..) – returns true or false for each entry, based on whether the value
is missing.
- train[condition, :column_name] – returns the rows of the given column
that satisfy the condition (in this case, the rows where the value is NA).
- dropna(..) – ignore NA values.
- mean(..) – mean of the column values.
- floor(..) – apply the floor operation to the value.

I hope this gives you a better understanding of the code used to fix
missing values.

As discussed earlier, there are better ways to perform data imputation and I
encourage you to learn as many as you can. Get a detailed view of different
imputation techniques through this article.

Building a predictive ML model


Now that we have fixed all the missing values, we can build a predictive machine
learning model. We will also be cross-validating it and saving it to disk for future
use. The following package is required for doing so:

julia> Pkg.add("ScikitLearn")

This package is a Julia interface to Python's scikit-learn package, so Python users are in
for a treat: you get to use the same models and functionality as in Python.
Label Encoding categorical data
Sklearn requires all data to be numeric, so let's label encode our data:

using ScikitLearn
@sk_import preprocessing: LabelEncoder

labelencoder = LabelEncoder()
categories = [2 3 4 5 6 12 13]

for col in categories
    train[col] = fit_transform!(labelencoder, train[col])
end

Those who have used sklearn before will find this code familiar: we are using
LabelEncoder to encode the categories, referring to the categorical columns by
their index.

Next, we will import the required modules and define a generic
classification function, which takes a model as input and reports the accuracy
and cross-validation score. Note that the function also scores a held-out test set,
so the test file should presumably be loaded the same way as the training file
(e.g. test = readtable("test.csv")). Since this is an introductory article and Julia code is
very similar to Python, I will not go into the details of the code. Please refer to this
article for details of the algorithms with R and Python code. It is also worth getting a
refresher on cross-validation through this article, as it is a very
important measure of model performance.

using ScikitLearn: fit!, predict, @sk_import, fit_transform!
@sk_import preprocessing: LabelEncoder
@sk_import model_selection: cross_val_score
@sk_import metrics: accuracy_score
@sk_import linear_model: LogisticRegression
@sk_import ensemble: RandomForestClassifier
@sk_import tree: DecisionTreeClassifier

function classification_model(model, predictors)
    y = convert(Array, train[:13])
    X = convert(Array, train[predictors])
    X2 = convert(Array, test[predictors])

    #Fit the model:
    fit!(model, X, y)

    #Make predictions on the training set:
    predictions = predict(model, X)

    #Print accuracy
    accuracy = accuracy_score(predictions, y)
    println("\naccuracy: ", accuracy)

    #5-fold cross-validation
    cross_score = cross_val_score(model, X, y, cv=5)

    #print cross_val_score
    println("cross_validation_score: ", mean(cross_score))

    #Refit on the full training set and return predictions for the test set
    fit!(model, X, y)
    pred = predict(model, X2)
    return pred
end

Logistic Regression
Let's make our first Logistic Regression model. One way would be to take all the
variables into the model, but this might result in overfitting (don't worry if you're
unaware of this terminology yet). In simple words, taking all the variables might make
the model learn complex relations specific to this data, so it will not
generalize well. Read more about Logistic Regression.

We can easily make some intuitive hypotheses to set the ball rolling. The chances of
getting a loan will be higher for:

1. Applicants with a credit history (remember we observed this in exploration?)
2. Applicants with higher applicant and co-applicant incomes
3. Applicants with higher education levels
4. Properties in urban areas with high growth prospects

So let's make our first model with Credit_History.

model = LogisticRegression()
predictor_var = [:Credit_History]
classification_model(model, predictor_var)

Accuracy : 80.945% Cross-Validation Score : 80.957%

#We can try a different combination of variables:
predictor_var = [:Credit_History, :Education, :Married, :Self_Employed, :Property_Area]
classification_model(model, predictor_var)

Accuracy : 80.945% Cross-Validation Score : 80.957%

Generally, we expect accuracy to increase when adding variables, but this is a
more challenging case: the accuracy and cross-validation score are not affected
by the less important variables, because Credit_History is dominating the model. We
have two options now:

1. Feature engineering: derive new information and try to predict with that. I will
leave this to your creativity.
2. Better modeling techniques. Let's explore this next.

Decision Tree
A decision tree is another method for building a predictive model, and it is known to
provide higher accuracy than a logistic regression model. Read more about Decision
Trees.

model = DecisionTreeClassifier()
predictor_var = [:Credit_History, :Gender, :Married, :Education]
classification_model(model, predictor_var)

Accuracy : 80.945% Cross-Validation Score : 76.656%

Here the model based on categorical variables is unable to have an impact
because Credit_History dominates them. Let's try a few numerical variables:

#We can try different combinations of variables:
predictor_var = [:Credit_History, :Loan_Amount_Term]
classification_model(model, predictor_var)

Accuracy : 99.345% Cross-Validation Score : 72.009%

Here we observe that although the accuracy went up on adding variables, the
cross-validation score went down. This is the result of the model overfitting the
data. Let's try an even more sophisticated algorithm and see if it helps:

Random Forest
Random forest is another algorithm for solving classification problems. Read
more about Random Forest.

An advantage of random forests is that we can make them work with all the features,
and they return a feature importance matrix which can be used to select features.

model = RandomForestClassifier(n_estimators=100)
predictors = [:Gender, :Married, :Dependents, :Education, :Self_Employed, :Loan_Amount_Term, :Credit_History, :Property_Area, :LoanAmount]
classification_model(model, predictors)

Accuracy : 100.000% Cross-Validation Score : 78.179%
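Before we deal with the overfitting below, note that the feature importance matrix mentioned above can be read off the just-fitted model. A minimal sketch, assuming the bracket-style attribute access of the PyCall version of that era (newer PyCall versions allow plain model.feature_importances_):

importances = model[:feature_importances_]    #sklearn's per-feature importances
for (feat, imp) in zip(predictors, importances)
    println(feat, " => ", imp)
end

Features with very low importance are natural candidates to drop.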

Here we see that the accuracy is 100% on the training set. This is the ultimate case
of overfitting, which can be resolved in two ways:

1. Reducing the number of predictors
2. Tuning the model parameters

The updated code would now be:

model = RandomForestClassifier(n_estimators=100, min_samples_split=25, max_depth=8, n_jobs=-1)
predictors = [:ApplicantIncome, :CoapplicantIncome, :LoanAmount, :Credit_History, :Loan_Amount_Term, :Gender, :Dependents]
classification_model(model, predictors)

Accuracy : 82.410% Cross-Validation Score : 80.635%

Notice that although the accuracy dropped, the cross-validation score improved,
showing that the model is generalizing well. Remember that random forest models
are not exactly repeatable: different runs will give slight variations because of
randomization, but the output should stay in the same ballpark.

You may have noticed that even after some basic parameter tuning of the
random forest, we have reached a cross-validation accuracy only slightly better
than that of the original logistic regression model. This exercise teaches us some very
interesting and unique lessons:

1. Using a more sophisticated model does not guarantee better results.
2. Avoid using complex modeling techniques as a black box without
understanding the underlying concepts. Doing so increases the
tendency to overfit and makes your models less interpretable.
3. Feature engineering is the key to success. Everyone can use an XGBoost
model, but the real art and creativity lie in enhancing your features to better
suit the model.

So are you ready to take on the challenge? Start your data science journey
with the Loan Prediction Problem.

Calling R and Python libraries in Julia


Julia is a powerful language with interesting libraries, but sometimes you may
want to use a library from outside Julia, for instance when functionality is missing
from existing Julia libraries (the ecosystem is still very young). For situations like this,
Julia provides ways to call libraries from R and Python. Let's see how we can do that.

Using pandas with Julia


Install the following package:

julia> Pkg.add("PyCall")

using PyCall
@pyimport pandas as pd
df = pd.read_csv("train.csv")
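Methods of the returned Python object are also available through PyCall; a hedged example using the bracket syntax of the PyCall version of that era (newer versions support plain df.head(5)):

df[:head](5)    #equivalent to df.head(5) in Python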

There is something special about using a Python library this smoothly in another
language. Pandas is a very mature and performant library, and it is certainly a
blessing that we can use it wherever the native DataFrames.jl falls short.

Using ggplot2 in Julia


Install the following packages:

julia> Pkg.add("RDatasets")
julia> Pkg.add("RCall")

using RCall, RDatasets

mtcars = dataset("datasets", "mtcars");

R"library(ggplot2)"
R"ggplot($mtcars, aes(x = WT, y = MPG)) + geom_point()"

Here the R"..." string macro runs R code from within Julia, and $mtcars interpolates
the Julia dataframe into the R session.


End Notes
I hope this tutorial helps you maximize your efficiency when starting out with data
science in Julia. I am sure it not only gave you an idea of basic data analysis
methods, but also showed you how to implement some of the more sophisticated
techniques available today.

Julia is really a great tool and is becoming an increasingly popular language
among data scientists. The reasons: it's easy to learn, integrates well with
other tools, gives C-like speed, and allows you to use libraries from existing tools like R
and Python.

So learn Julia to perform the full life-cycle of any data science project: reading,
analyzing, visualizing and finally making predictions.

Also note, all the code used in this article is available on GitHub.

Also note, all the code used in this article is available on GitHub.

If you come across any difficulty while practicing Julia, or if you have any
thoughts/suggestions/feedback on the post, please feel free to share them in the
comments below.