Data Types

Ordinal - is one where the order matters but not the difference between values.
Interval - is a measurement where the difference between two values is meaningful.
Ratio - has all the properties of an interval variable, and also has a clear definition of 0.0.
Dimensionality Reduction

Principal Component Analysis (PCA) - A statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. Plot the variance per feature and select the features with the largest variance.
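A quick, illustrative sketch of that variance inspection using scikit-learn's PCA; the random data and the decision of how many components to keep are assumptions, not part of the mindmap:

    import numpy as np
    from sklearn.decomposition import PCA

    # Toy data: 200 samples, 10 features (illustrative only)
    X = np.random.default_rng(0).normal(size=(200, 10))

    pca = PCA().fit(X)

    # Variance explained by each principal component, largest first;
    # keep only the leading components that carry most of the variance.
    print(pca.explained_variance_ratio_)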
Singular Value Decomposition (SVD) - A factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m×n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics.

Data Exploration

Variable Identification - Identify the Predictor (input) and Target (output) variables. Next, identify the data type and category of the variables.

Univariate Analysis
Continuous Features - Mean, Median, Mode, Min, Max, Range, Quartile, IQR, Variance, Standard Deviation, Skewness, Histogram, Box Plot.
Categorical Features - Frequency, Histogram.

Bi-variate Analysis - Finds out the relationship between two variables.
Scatter Plot
Correlation Plot - Heatmap
Two-way table - We can start analyzing the relationship by creating a two-way table of count and count%.
Stacked Column Chart
Chi-Square Test - This test is used to derive the statistical significance of the relationship between the variables.
Z-Test/T-Test
ANOVA

Feature Selection

Correlation - Features should be uncorrelated with each other and highly correlated to the feature we're trying to predict.
Covariance - A measure of how much two random variables change together. Math: dot(de_mean(x), de_mean(y)) / (n - 1). See the sketch after this list.

Importance
Filter Methods - Filter type methods select features based only on general metrics like the correlation with the variable to predict. Filter methods suppress the least interesting variables. The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly effective in computation time and robust to overfitting. Examples: Linear Discriminant Analysis, ANOVA (Analysis of Variance), Chi-Square.
Wrapper Methods - Wrapper methods evaluate subsets of variables, which, unlike filter approaches, allows the detection of possible interactions between variables. Their two main disadvantages are the increased overfitting risk when the number of observations is insufficient, and the significant computation time when the number of variables is large. Examples: Forward Selection, Backward Elimination, Recursive Feature Elimination, Genetic Algorithms.
Embedded Methods - Embedded methods try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously. Examples: Lasso regression, which performs L1 regularization (adding a penalty equivalent to the absolute value of the magnitude of coefficients), and Ridge regression, which performs L2 regularization (adding a penalty equivalent to the square of the magnitude of coefficients).
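A minimal Python sketch of the covariance formula quoted above; de_mean and covariance are the hypothetical helper names implied by the Math note:

    import numpy as np

    def de_mean(v):
        # Centre the vector by subtracting its mean
        return v - v.mean()

    def covariance(x, y):
        n = len(x)
        return np.dot(de_mean(x), de_mean(y)) / (n - 1)

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.0, 4.0, 6.0, 8.0])
    print(covariance(x, y))       # 3.333...
    print(np.cov(x, y)[0, 1])     # same result via NumPy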
Machine Learning Data Processing

Feature Cleaning
Missing values - One may choose to either omit elements from a dataset that contain missing values or to impute a value.
Special values - Numeric variables are endowed with several formalized special values, including ±Inf, NA and NaN. Calculations involving special values often result in special values, and need to be handled/cleaned.
Outliers - They should be detected, but not necessarily removed. Their inclusion in the analysis is a statistical decision.
Obvious inconsistencies - A person's age cannot be negative, a man cannot be pregnant, and an under-aged person cannot possess a driver's license.

Feature Encoding - Machine Learning algorithms perform Linear Algebra on Matrices, which means all features must be numeric. Encoding helps us do this.
Label Encoding
One Hot Encoding - In One Hot Encoding, make sure the encodings are done in a way that all features are linearly independent.
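A small pandas sketch of both encodings; the toy colour column is made up for illustration:

    import pandas as pd

    df = pd.DataFrame({"colour": ["red", "green", "blue", "green"]})

    # Label Encoding: one integer code per category
    df["colour_label"] = df["colour"].astype("category").cat.codes

    # One Hot Encoding: one binary column per category; drop_first=True removes
    # a redundant column so the resulting features stay linearly independent.
    one_hot = pd.get_dummies(df["colour"], prefix="colour", drop_first=True)
    print(df.join(one_hot))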
Feature Imputation
Hot-Deck - The technique finds the first missing value and uses the cell value immediately prior to the data that are missing to impute the missing value.
Cold-Deck - Selects donors from another dataset to complete missing data.
Mean-substitution - Another imputation technique involves replacing any missing value with the mean of that variable for all other cases, which has the benefit of not changing the sample mean for that variable.
Regression - A regression model is estimated to predict observed values of a variable based on other variables, and that model is then used to impute values in cases where that variable is missing.

Feature Normalisation or Scaling - Since the range of values of raw data varies widely, in some machine learning algorithms objective functions will not work properly without normalization. Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it.
Rescaling - The simplest method is rescaling the range of features to scale the range to [0, 1] or [−1, 1].
Standardization - Feature standardization makes the values of each feature in the data have zero-mean (when subtracting the mean in the numerator) and unit-variance.
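A minimal NumPy sketch of both scaling methods on a made-up feature:

    import numpy as np

    x = np.array([50.0, 60.0, 80.0, 100.0])   # one numeric feature

    # Rescaling (min-max): map the range onto [0, 1]
    rescaled = (x - x.min()) / (x.max() - x.min())

    # Standardization: zero mean and unit variance
    standardized = (x - x.mean()) / x.std()

    print(rescaled)       # [0.  0.2 0.6 1. ]
    print(standardized)   # mean 0, std 1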
Feature Engineering

Decompose - Converting 2014-09-20T20:45:40Z into categorical attributes like hour_of_the_day, part_of_day, etc.
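A possible pandas sketch of this decomposition; the part_of_day bin edges are an assumed convention, not from the mindmap:

    import pandas as pd

    ts = pd.to_datetime(pd.Series(["2014-09-20T20:45:40Z"]))

    decomposed = pd.DataFrame({"hour_of_the_day": ts.dt.hour})
    decomposed["part_of_day"] = pd.cut(
        ts.dt.hour,
        bins=[0, 6, 12, 18, 24],
        labels=["night", "morning", "afternoon", "evening"],
        right=False,
    )
    print(decomposed)   # hour_of_the_day 20 -> part_of_day "evening"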
Discretization
Continuous Features - Typically data is discretized into partitions of K equal lengths/widths (equal intervals) or K% of the total data (equal frequencies).
Categorical Features - Values for categorical features may be combined, particularly when there are few samples for some categories.
Reframe Numerical Quantities - Changing from grams to kg, and losing detail, might be both wanted and efficient for calculation.
Crossing - Creating new features as a combination of existing features. Could be multiplying numerical features, or combining categorical variables. This is a great way to add domain expertise knowledge to the dataset.

Dataset Construction
Training Dataset - A set of examples used for learning. To fit the parameters of the classifier in the Multilayer Perceptron, for instance, we would use the training set to find the "optimal" weights when using back-propagation.
Test Dataset - A set of examples used only to assess the performance of a fully-trained classifier. In the Multilayer Perceptron case, we would use the test set to estimate the error rate after we have chosen the final model (MLP size and actual weights). After assessing the final model on the test set, YOU MUST NOT tune the model any further.
Validation Dataset - A set of examples used to tune the parameters of a classifier. In the Multilayer Perceptron case, we would use the validation set to find the "optimal" number of hidden units or determine a stopping point for the back-propagation algorithm.
Cross Validation - One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
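An illustrative scikit-learn sketch of this workflow; the iris data and logistic regression model are arbitrary stand-ins:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = load_iris(return_X_y=True)

    # Hold out a test set that is never used for tuning
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # 5-fold cross-validation on the training data only
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X_train, y_train, cv=5)
    print(scores.mean())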
Motivation
Prediction - When we are interested mainly in the predicted variable as a result of the inputs, but not in the way each of the inputs affects the prediction. In a real estate example, Prediction would answer the question of: Is my house over- or under-valued? Non-linear models are very good at these sorts of predictions, but not great for inference because the models are much less interpretable.
Inference - When we are interested in the way each one of the inputs affects the prediction. In a real estate example, Inference would answer the question of: How much would my house cost if it had a view of the sea? Linear models are more suited for inference because the models themselves are easier to understand than their non-linear counterparts.

Types
Regression - A supervised problem where the outputs are continuous rather than discrete.
Classification - Inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more (multi-label classification) of these classes. This is typically tackled in a supervised way.
Clustering - A set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, making this typically an unsupervised task.
Density Estimation - Finds the distribution of inputs in some space.
Accuracy - Fraction of correct predictions. Not reliable on its own, as it is skewed when the data set is unbalanced (that is, when the number of samples in different classes varies greatly).

Non-Parametric - When we do not make assumptions about the form of our function (f). However, since these methods do not reduce the problem of estimating f to a small number of parameters, a large number of observations is required in order to obtain an accurate estimate for f. An example would be the thin-plate spline model.
Methods
Deep learning
Inductive logic programming

Selection Criteria
Prediction Accuracy vs Model Interpretability - There is an inherent tradeoff between Prediction Accuracy and Model Interpretability; that is to say, as models get more flexible in the way the function (f) is selected, they become obscured and hard to interpret. Inflexible methods are better suited for inference, and flexible methods are preferable for prediction.

ROC Curve - Receiver Operating Characteristics
Leave-one-out cross-validation
k-fold cross-validation
Holdout method
Repeated random sub-sampling validation
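A small scikit-learn sketch of computing a ROC curve; the breast-cancer dataset and logistic regression model are arbitrary choices for illustration:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, roc_curve
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    probs = model.predict_proba(X_te)[:, 1]

    # False/true positive rates at every threshold, plus a single AUC summary
    fpr, tpr, thresholds = roc_curve(y_te, probs)
    print(roc_auc_score(y_te, probs))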
Some Libraries...
Numpy - Adds support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays.
Pandas - Offers data structures and operations for manipulating numerical tables and time series.
Scikit-Learn - Features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
Tensorflow

Hyperparameters Tuning
Grid Search - The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.
Random Search - Since grid searching is an exhaustive and therefore potentially expensive method, several alternatives have been proposed. In particular, a randomized search that simply samples parameter settings a fixed number of times has been found to be more effective in high-dimensional spaces than exhaustive search.
Gradient-based optimization - For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks. Since then, these methods have been extended to other models such as support vector machines or logistic regression.
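An illustrative scikit-learn sketch of both search strategies; the SVC model and parameter ranges are assumptions:

    from scipy.stats import loguniform
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Grid search: exhaustive sweep over a manually specified grid, scored by CV
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
    grid.fit(X, y)
    print(grid.best_params_)

    # Random search: sample parameter settings a fixed number of times
    rand = RandomizedSearchCV(SVC(), {"C": loguniform(1e-2, 1e2)},
                              n_iter=20, cv=5, random_state=0)
    rand.fit(X, y)
    print(rand.best_params_)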
Machine Learning Process

Data - Collect, Explore, Clean Features, Impute Features, Engineer Features.
Model - Select an Algorithm based on the question and the data available (e.g. Reinforcement Learning answers: What should I do now?).
Cost Function - The cost function will provide a measure of how far my algorithm and its parameters are from accurately representing my training data. Sometimes referred to as the Cost or Loss function when the goal is to minimise it, or the Objective function when the goal is to maximise it.
Optimization - Having selected a cost function, we need a method to minimise the Cost function, or maximise the Objective function. Typically this is done by Gradient Descent or Stochastic Gradient Descent.
Tuning - Different Algorithms have different Hyperparameters, which will affect the algorithm's performance. There are multiple methods for Hyperparameter Tuning, such as Grid and Random search.
Direction - Analyse the performance of each algorithm and discuss results.
Scaling - How does my algorithm scale for both training and inference?
Deployment and Operationalisation - How can feature manipulation be done for training and inference in real-time? How to make sure that the algorithm is retrained periodically and deployed into production?

SaaS - Pre-built Machine Learning models
Google Cloud - Vision API, Speech API, Jobs API, Video Intelligence API, Language API, Translation API
AWS - Rekognition, Lex, Polly
... many others

Machine Learning Research
Tensorflow
MXNet
Torch
... many others
Maximum Likelihood Estimation (MLE) - In general, for a fixed set of data and underlying statistical model, the method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data, and for discrete random variables it indeed maximizes the probability of the observed data under the resulting distribution. Maximum-likelihood estimation gives a unified approach to estimation, which is well-defined in the case of the normal distribution and many other problems.

The natural logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques.
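A sketch of MLE in practice, assuming normally distributed data: minimise the negative log-likelihood with SciPy (the data, starting point and bounds are illustrative):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    data = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1000)

    def neg_log_likelihood(params):
        mu, sigma = params
        # Log-likelihood is the sum of log densities; we minimise its negative
        return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

    result = minimize(neg_log_likelihood, x0=[0.0, 1.0],
                      bounds=[(None, None), (1e-6, None)])
    print(result.x)   # close to the sample mean and standard deviation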
Eigenvectors and Eigenvalues - In linear algebra, an eigenvector or characteristic vector of a linear transformation T from a vector space V over a field F into itself is a non-zero vector that does not change its direction when that linear transformation is applied to it. http://setosa.io/ev/eigenvectors-and-eigenvalues/

Cross-Entropy - Cross entropy can be used to define the loss function in machine learning and optimization. The true probability pi is the true label, and the given distribution qi is the predicted value of the current model.
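A minimal NumPy sketch of that cross-entropy definition; the clipping epsilon is a numerical-stability assumption:

    import numpy as np

    def cross_entropy(p, q, eps=1e-12):
        # H(p, q) = -sum_i p_i * log(q_i), with q clipped to avoid log(0)
        q = np.clip(q, eps, 1.0)
        return -np.sum(p * np.log(q))

    p = np.array([0.0, 1.0, 0.0])   # true label as a one-hot distribution
    q = np.array([0.1, 0.7, 0.2])   # model's predicted distribution
    print(cross_entropy(p, q))      # -log(0.7) ≈ 0.357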
Curse of Dimensionality - When the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality.

Measures of Central Tendency
Mean
Median - Value in the middle of an ordered list, or average of the two in the middle.
Mode - Most frequent value.

Hellinger Distance - Used to quantify the similarity between two probability distributions. It is a type of f-divergence. To define the Hellinger distance in terms of measure theory, let P and Q denote two probability measures that are absolutely continuous with respect to a third probability measure λ. The square of the Hellinger distance between P and Q is defined as the quantity H^2(P, Q) = (1/2) ∫ (sqrt(dP/dλ) - sqrt(dQ/dλ))^2 dλ.
Quantile - Division of probability distributions based on contiguous intervals with equal probabilities. In short: dividing the observations in a sample list into equal-sized groups.
Range
Mean Absolute Deviation (MAD) - The average of the absolute value of the deviation of each value from the mean.
Inter-quartile Range (IQR) - Three quartiles divide the data into approximately four equally sized parts.

Dispersion
Variance - The average of the squared differences from the Mean. Formally, it is the expectation of the squared deviation of a random variable from its mean; informally, it measures how far a set of (random) numbers are spread out from their mean.

Kullback-Leibler Divergence - A measure of how one probability distribution diverges from a second, expected probability distribution. Applications include characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference. Discrete and continuous forms exist.

Itakura–Saito distance - A measure of the difference between an original spectrum P(ω) and an approximation P^(ω) of that spectrum. Although it is not a perceptual measure, it is intended to reflect perceptual (dis)similarity.

https://stats.stackexchange.com/questions/154879/a-list-of-cost-functions-used-in-neural-networks-alongside-applications
https://en.wikipedia.org/wiki/Loss_functions_for_classification
Frequentist vs Bayesian Probability
Frequentist - Basic notion of probability: # Results / # Attempts.
Bayesian - The probability is not a number, but a distribution itself.
http://www.behind-the-enemy-lines.com/2008/01/are-you-bayesian-or-frequentist-or.html
Random Variable - In probability and statistics, a random variable, random quantity, aleatory variable or stochastic variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). A random variable can take on a set of possible different values (similarly to other mathematical variables), each with an associated probability, in contrast to other mathematical variables. Types: Discrete, Continuous.

Expectation (Expected Value) of a Random Variable - For a discrete random variable, E[X] = sum over x of x * P(X = x); same, for continuous variables, with the sum replaced by an integral over the density: E[X] = ∫ x f(x) dx.

Standard Deviation - sqrt(variance)
z-score/value/factor - The signed number of standard deviations an observation or datum is above the mean.
Relationship Statistics

Correlation
Pearson - Benchmarks a linear relationship; most appropriate for measurements taken from an interval scale. It is a measure of the linear dependence between two variables.
Spearman - Benchmarks a monotonic relationship (whether linear or not). Spearman's coefficient is appropriate for both continuous and discrete variables, including ordinal variables.
Kendall - A statistic used to measure the ordinal association between two measured quantities. Contrary to the Spearman correlation, the Kendall correlation is not affected by how far from each other ranks are, but only by whether the ranks between observations are equal or not, and is thus only appropriate for discrete variables but not defined for continuous variables.
Co-occurrence - The results are presented in a matrix format, where the cross tabulation of two fields is a cell value. The cell value represents the percentage of times that the two fields exist in the same events.

Machine Learning Mathematics - Probability Concepts

Bayes Theorem (rule, law)
Simple Form: P(A|B) = P(B|A) P(A) / P(B)
With Law of Total Probability: P(A|B) = P(B|A) P(A) / (P(B|A) P(A) + P(B|¬A) P(¬A))

Marginalisation - The marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables. Discrete and continuous forms exist.

Null Hypothesis - A general statement or default position that there is no relationship between two measured phenomena, or no association among groups. The null hypothesis is generally assumed to be true until evidence indicates otherwise.
Law of Total Probability - A fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events, hence the name.

Techniques

p-value - In this method, as part of experimental design, before performing the experiment, one first chooses a model (the null hypothesis) and a threshold value for p, called the significance level of the test, traditionally 5% or 1% and denoted as α. If the p-value is less than the chosen significance level (α), that suggests that the observed data is sufficiently inconsistent with the null hypothesis that the null hypothesis may be rejected. However, that does not prove that the tested hypothesis is true. For typical analysis, using the standard α = 0.05 cutoff, the null hypothesis is rejected when p < .05 and not rejected when p > .05. The p-value does not, in itself, support reasoning about the probabilities of hypotheses, but is only a tool for deciding whether to reject the null hypothesis.
Five heads in a row Example - Suppose a researcher flips a coin five times in a row and assumes a null hypothesis that the coin is fair. The test statistic of "total number of heads" can be one-tailed or two-tailed: a one-tailed test corresponds to seeing if the coin is biased towards heads, but a two-tailed test corresponds to seeing if the coin is biased either way. The researcher flips the coin five times and observes heads each time (HHHHH), yielding a test statistic of 5. In a one-tailed test, this is the upper extreme of all possible outcomes, and yields a p-value of (1/2)^5 = 1/32 ≈ 0.03. If the researcher assumed a significance level of 0.05, this result would be deemed significant and the hypothesis that the coin is fair would be rejected. In a two-tailed test, a test statistic of zero heads (TTTTT) is just as extreme, and thus the data of HHHHH would yield a p-value of 2 × (1/2)^5 = 1/16 ≈ 0.06, which is not significant at the 0.05 level. This demonstrates that specifying a direction (on a symmetric test statistic) halves the p-value (increases the significance) and can mean the difference between the data being considered significant or not. See the quick check below.

Chain Rule - Permits the calculation of any member of the joint distribution of a set of random variables using only conditional probabilities.

Bayesian Inference - Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem. It can be applied iteratively so as to update the confidence in our hypothesis.
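A quick check of the p-values from the five-heads example above, in plain Python:

    # One- and two-tailed p-values for the "five heads in a row" example
    p_one_tailed = (1 / 2) ** 5        # 1/32 ≈ 0.03
    p_two_tailed = 2 * (1 / 2) ** 5    # 1/16 ≈ 0.06

    alpha = 0.05
    print(p_one_tailed < alpha)   # True: reject the null hypothesis
    print(p_two_tailed < alpha)   # False: fail to reject it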
p-hacking - The process of data mining involves automatically testing huge numbers of hypotheses about a single data set by exhaustively searching for combinations of variables that might show a correlation. Conventional tests of statistical significance are based on the probability that an observation arose by chance, and necessarily accept some risk of mistaken test results, called the significance.

Probability Distribution - Definition: a table or an equation that links each outcome of a statistical experiment with its probability of occurrence. When continuous, it is described by the Probability Density Function.
Central Limit Theorem - States that a random variable defined as the average of a large number of independent and identically distributed random variables is itself approximately normally distributed. http://blog.vctr.me/posts/central-limit-theorem.html
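A quick NumPy simulation of the theorem; the sample sizes are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)

    # Averages of 50 i.i.d. uniform draws, repeated 10,000 times
    sample_means = rng.uniform(0, 1, size=(10_000, 50)).mean(axis=1)

    print(sample_means.mean())   # ≈ 0.5, the mean of the uniform distribution
    print(sample_means.std())    # ≈ sqrt(1/12) / sqrt(50) ≈ 0.041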
Gradient Descent - A first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
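A minimal sketch of gradient descent on a one-dimensional quadratic; the function and learning rate are arbitrary choices:

    # Minimise f(x) = (x - 3)^2; its gradient is f'(x) = 2 * (x - 3)
    x = 0.0
    learning_rate = 0.1

    for _ in range(100):
        gradient = 2 * (x - 3)
        x -= learning_rate * gradient   # step against the gradient

    print(x)   # ≈ 3.0, the local (here also global) minimum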
Stochastic Gradient Descent (SGD) - Gradient descent uses the total gradient over all examples per update; SGD updates after only 1 or a few examples.
Mini-batch Stochastic Gradient Descent (SGD) - Updates after a small batch of examples, rather than after the full dataset or a single example.

Distributions (Density Functions)
Types: Uniform, Poisson, Normal (Gaussian)
Mean-constrained regularization - This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share similarities with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents a different person.

Information Theory
Joint Entropy
Kullback-Leibler Divergence
Kernel Density Estimation - A non-parametric method: it calculates a kernel distribution for every sample point, and then adds all the distributions. The kernel is non-negative, real-valued and symmetric. The methods are generally intended for description rather than formal inference.
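A short SciPy sketch of this, using Gaussian kernels; the data and evaluation grid are made up:

    import numpy as np
    from scipy.stats import gaussian_kde

    data = np.random.default_rng(0).normal(size=500)

    kde = gaussian_kde(data)            # one Gaussian kernel per sample point
    xs = np.linspace(-4, 4, 100)
    density = kde(xs)                   # the summed kernels, evaluated on a grid
    print(density.max())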
Generalised Linear Models (GLMs)
Link Function - Identity, Inverse

Batch Normalization - With SGD, the training proceeds in steps, and at each step we consider a mini-batch x1...m of size m. The mini-batch is used to approximate the gradient of the loss function with respect to the parameters. Using mini-batches of examples, as opposed to one example at a time, is helpful in several ways. First, the gradient of the loss over a mini-batch is an estimate of the gradient over the training set, whose quality improves as the batch size increases. Second, computation over a batch can be much more efficient than m computations for individual examples, due to the parallelism afforded by modern computing platforms.
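A minimal NumPy sketch of the batch-normalizing transform described above; gamma and beta stand in for the learned scale and shift parameters:

    import numpy as np

    def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
        # Normalise each feature of the mini-batch to zero mean and unit
        # variance, then apply the learned scale (gamma) and shift (beta).
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    batch = np.random.default_rng(0).normal(loc=5, scale=3, size=(32, 4))
    print(batch_norm(batch).mean(axis=0))   # ≈ 0 for every feature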
Linkage - complete, single, average

Backpropagation
Expectation Maximization
Sigmoid / Logistic
Softmax
Maxout
DBSCAN
Dunn Index