
Sensitivity Analysis

Cédric J. Sallaberry & Laura P. Swiler

What is Sensitivity Analysis? (1/2)

Sensitivity analysis involves the generation and exploration of a mapping from analysis inputs to analysis results:

- Analysis inputs: x = [x_1, x_2, ..., x_nX]
- Analysis results: y(x) = [y_1(x), y_2(x), ..., y_nY(x)]

How important are the individual elements of x with respect to the uncertainty in y(x)?

What is Sensitivity Analysis? (2/2)

Sensitivity analysis can be used for:

- Ranking parameters in terms of their importance relative to the uncertainty in the output
- Verification and validation of the model: it is a powerful tool to check whether the system performs as expected
- Directing further uncertainty quantification toward the parameters that really matter, in an iterative process

Sensitivity analysis is usually not used for:

- Prediction: the purpose is not to construct a meta-model
- Determining the importance of one parameter or one feature of the system to the response: it only looks at the influence of the uncertainty in the input on the uncertainty of the output

Methods for Sensitivity Analysis

Several techniques have been developed to perform sensitivity analysis. They can be classified into three groups:

- Deterministic
- Local
- Global

Deterministic Sensitivity Analysis (1/2)

The basis of such methods, such as the adjoint method or gradient-based methods, is to invert the problem.

They can be very powerful, since the influence of the input parameters is obtained immediately and is mathematically proven.

However, they can be quite hard to set up mathematically and numerically, and any change in the system may require a completely new analysis.

Deterministic Sensitivity Analysis (2/2)

In most large analyses such methods are impractical due to the complexity of the (often non-linear) equations and the potential coupling between the systems.

At a high level, the inverse problem can be seen as a minimization of the functional form derived from the original problem:

- For linear systems, linear algebra and functional analysis provide theoretical stability, leading to general existence results and efficient algorithms for solving the problem.
- For non-linear systems, most results are specific to a given problem, and it is often necessary to use optimization algorithms from a Numerical Analysis course to converge to the solution.

Local Sensitivity Analysis (1/2)

Local methods can be applied to deterministic or probabilistic problems.

As their name suggests, they calculate the sensitivity of the output at one location (or around a neighborhood) in the input hypercube.

Most of these methods calculate the derivative of the function relative to one or multiple parameters. To calculate derivatives, the Taylor series expansion is often used:

f(x + h) = f(x) + h f'(x) + (h^2 / 2!) f''(x) + ... + (h^n / n!) f^(n)(x) + o(h^n)
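The Taylor expansion above is what justifies simple finite-difference estimates of local sensitivities: subtracting the expansions of f(x + h) and f(x - h) cancels the even-order terms and leaves a second-order-accurate central difference. A minimal sketch in Python (the model f and the base point are hypothetical illustrations, not from the lecture):

```python
# Local sensitivity via central finite differences (a sketch; the model f
# and the base point are hypothetical examples, not the lecture's code).
def central_difference(f, x, i, h=1e-5):
    """Estimate df/dx_i at x with the O(h^2) central-difference formula,
    obtained by subtracting the Taylor expansions of f(x+h) and f(x-h)."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def f(x):
    # toy model: y = 10*x1 + x2^2
    return 10 * x[0] + x[1] ** 2

base = [0.5, 0.5]
grad = [central_difference(f, base, i) for i in range(2)]
print(grad)  # ≈ [10.0, 1.0]: the local sensitivities at (0.5, 0.5)
```

Note that each derivative estimate costs two new model runs per input, which is exactly the cost issue raised on the next slide.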

Local Sensitivity Analysis (2/2)

We are usually more interested in the global effect of an input variable than in its effect around a specific point in the hypercube.

- Technically, there is no assumption on the relation between any input x_i and the output y. The accuracy of this approach will depend on the method and the number of points used to estimate the derivative.
- It is also possible to trade accuracy for coverage of a larger region around the point of interest by selecting points further away to calculate the derivative.
- It is theoretically possible, although costly, to calculate higher-order derivatives, including partial derivatives over two or more input parameters.

However, new runs (to estimate the derivatives) have to be done for each instance of interest of each input variable. This method can therefore be very costly if applied to a system with many inputs and outputs.

Local Sensitivity Analysis with Reliability Methods (1/3)

One particular case of local analysis that may be useful is when dealing with reliability methods such as FORM/SORM:

- The gradients are already estimated, so no additional calculations are necessary.
- The point of the hypercube considered is of particular interest, as its stability may affect the probability of interest.
- The sensitivity indices derived not only tell which input is the most important in this area, but also which parameters of the associated distributions would be most likely to affect the output result.

Local Sensitivity Analysis with Reliability Methods (2/3)

Because of the way the derivatives are calculated (not taking into account the correlation between input variables), it is more appropriate to group the correlated variables and study the effect of such groups.

Local Sensitivity Analysis with Reliability Methods (3/3)

[Figure: sensitivity indices for the package release rate ("Taux de relâchement COLIS"), per input standard deviation ("écart-type")]

For each input, the sensitivity indices indicate how a change in the parameter distribution used would affect the result in the neighborhood of the point (and potentially the probability of interest).

Global Methods

These methods try to capture the influence of the inputs over the whole hypercube.

Working on the overall behavior of both the inputs and the output of interest, they have to consider the whole range of the distributions of the variables.

Several methods cover the whole range of uncertainty in the inputs and output of interest. They include response surfaces, variance-based decomposition, and sampling-based methods. The first two will be the subject of future lectures. The rest of this presentation will focus on tools used in sampling-based methods.

For sampling-based methods, the same sample set is used to perform both uncertainty analysis and sensitivity analysis. While, depending on the problem, the sample size may be large, there is no need to rerun the code a second time.

Complete Variance Decomposition (CVD)

This topic and two specific methods (Sobol and FAST) will be covered in more detail in a future lecture.

The driving idea of complete variance decomposition is to rewrite the function as a sum of functions depending on one or more parameters:

f(x) = f_0 + Σ_{i=1}^{k} f_i(x_i) + Σ_{i<j} f_ij(x_i, x_j) + ... + f_{1,2,...,k}(x_1, x_2, ..., x_k)

CVD will estimate the influence of each parameter and of interactions between parameters without any assumption, but at the cost of more runs.
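To make the decomposition concrete, a toy sketch (a hypothetical additive model evaluated on a grid, not an example from the lecture) can compute f_0, the first-order terms f_i, and the resulting first-order sensitivity indices Var(f_i)/Var(f):

```python
# Toy illustration of the decomposition f = f0 + f1(x1) + f2(x2); the model
# is additive, so no interaction term appears. Model and grid are
# hypothetical, not from the lecture.
import numpy as np

x1, x2 = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201), indexing="ij")
f = 10 * x1 + x2                      # additive toy model on the unit square

f0 = f.mean()                         # f0 = E[f] = 5.5 for this model
f1 = f.mean(axis=1) - f0              # f1(x1) = E[f | x1] - f0
f2 = f.mean(axis=0) - f0              # f2(x2) = E[f | x2] - f0

V = f.var()
S1, S2 = f1.var() / V, f2.var() / V   # first-order sensitivity indices
print(round(S1, 3), round(S2, 3))     # analytically 100/101 and 1/101
```

The grid averages play the role of the conditional expectations; Sobol and FAST estimate the same quantities far more efficiently for expensive models.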

Response Surface

Response surfaces, or meta-models, are used to represent the response domain with an analytical function (or at least a numerical function less expensive than the original code).

They can be used either directly, to estimate the importance of parameters, or as a surrogate of the model, making it possible to perform a greater number of runs at a lower cost (thus making a CVD possible, for instance).

This topic and some selected methods will be covered in more detail in future lectures.

Scatterplots

Scatterplots consist in plotting the relation between inputs (on the x-axis) and an output (on the y-axis).

They are the simplest way to observe any dependency between input and output without making any assumption, and they are usually enough to understand the relationship between input and output.

They are usually bidimensional (one output vs. one input) but can sometimes be tri-dimensional to show three-way interactions.

However, they may be impractical if hundreds of inputs and outputs are under consideration.

2D Scatterplot Examples

[Figure: two scatterplots of ISCSINAD (mol/kg) — at 500,000 yr vs. ISCSNS, and at 1,000,000 yr vs. TH_INFIL — with points distinguished by THERMCON = 1, 2, 3]

- Color coding or the use of different symbols may help represent the conjoint influence of discrete and continuous parameters.
- Box-plots may be a good alternative to scatterplots when dealing with discrete variables.

Correlation and Regression

When the number of inputs and outputs in the analysis is large, it is necessary to screen out most of the relations before plotting them.

Regression analysis and correlation are therefore used to determine which variables influence the uncertainty in the model result.

Several other methods can be used; some of them will be presented in future lectures. This lecture will focus on linear regression, the coefficient of correlation, and the rank transform.

Correlation and Regression: Example Setup

Inputs:

- X_1, X_2, X_3 and X_4 uniformly distributed between 0 and 1
- Corr(X_1, X_4) = 0.8; all other correlations are set to 0
- ε is an input representing random noise, which varies between -0.2 and 0.2

Outputs:

- Y_1 = 10 X_1 + X_2 + 0.1 X_3 + 0.01 X_4
- Y_2 = 10 X_1 + X_2 + 0.1 X_3 + 0.01 X_4 + ε

Sampling: Latin Hypercube Sampling (LHS) with the Iman-Conover correlation technique; sample size = 1,000.
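A sketch of this setup in Python. The slides use LHS with the Iman-Conover technique; as a simplification, the sketch below draws correlated normals and rank-maps them to uniform margins to induce the X_1-X_4 correlation, which is not the slides' exact method:

```python
# Hypothetical reconstruction of the example data set (not the slides' code;
# the correlation-inducing step is a rank-based simplification).
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def to_uniform(v):
    """Map a sample to uniform [0,1] margins via its ranks (no ties assumed)."""
    return (np.argsort(np.argsort(v)) + 0.5) / len(v)

# Correlated normal pair for (X1, X4); rank-mapping preserves the ordering,
# so the induced Pearson correlation lands close to (slightly below) 0.8.
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n)
X1, X4 = to_uniform(z[:, 0]), to_uniform(z[:, 1])
X2, X3 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
eps = rng.uniform(-0.2, 0.2, n)

Y1 = 10 * X1 + X2 + 0.1 * X3 + 0.01 * X4
Y2 = Y1 + eps
print(np.corrcoef(X1, X4)[0, 1])   # close to the 0.8 target
```

The arrays X1..X4, Y1, Y2 generated this way reproduce the qualitative behavior shown in the next slides.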

Example: Results for Y_1

[Figure: scatterplots of Y_1 against each of X_1, X_2, X_3 and X_4; the annotated slopes are 10 (X_1) and 1 (X_2)]

Example: Results for Y_2

[Figure: scatterplots of Y_2 against each of X_1, X_2, X_3 and X_4; the annotated slopes are 10 (X_1) and 1 (X_2)]

Random noise does not seem to influence the scatterplots.

Coefficient of Correlation (Pearson)

The Pearson coefficient measures the strength and direction of a linear relationship between two quantities X_i and Y:

ρ_{X_i,Y} = cov(X_i, Y) / sqrt(var(X_i) var(Y))

For a sample of size n it is computed as:

r_{X_i,Y} = [n Σ x_k y_k - (Σ x_k)(Σ y_k)] / sqrt{[n Σ x_k² - (Σ x_k)²] [n Σ y_k² - (Σ y_k)²]}

(all sums over k = 1, ..., n)

- Note 1: as the covariance is normalized by the variances of the two terms in consideration, the slope of the relation does not change the correlation value.
- Note 2: there may be dependencies amongst variables that will not be captured by correlation.

Source: http://en.wikipedia.org/wiki/Correlation
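The sample formula above can be checked directly against numpy's built-in estimator, and Note 1's slope-invariance is easy to verify numerically (the data here are hypothetical, not the slides' sample):

```python
# Pearson correlation from the sample formula, checked against numpy.
# The data are a hypothetical illustration.
import numpy as np

def pearson(x, y):
    n = len(x)
    num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    den = np.sqrt((n * np.sum(x**2) - np.sum(x)**2)
                  * (n * np.sum(y**2) - np.sum(y)**2))
    return num / den

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)
y = 10 * x + rng.uniform(-0.2, 0.2, 500)   # strong linear relation plus noise

r = pearson(x, y)
print(r)                  # agrees with np.corrcoef(x, y)[0, 1]
print(pearson(x, 5 * y))  # Note 1: rescaling y leaves the correlation unchanged
```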

Coefficient of Correlation (CC) in the Example

Results for Y_1:

         X_1     X_2     X_3     X_4     Y_1
X_1    1.000   0.040  -0.001   0.800   0.995
X_2    0.040   1.000   0.021   0.112   0.139
X_3   -0.001   0.021   1.000   0.045   0.011
X_4    0.800   0.112   0.045   1.000   0.804
Y_1    0.995   0.139   0.011   0.804   1.000

Results for Y_2 are identical (with Y_2 in place of Y_1): random noise does not seem to influence the correlation.

Even if X_4 has a negligible effect in the equation of Y, its correlation coefficient is strong due to its correlation with X_1.

Partial Correlation Coefficient (PCC)

The PCC measures the strength and direction of a linear relationship between an input X_i and an output Y_j after the linear effect of the remaining input parameters has been taken out of both X_i and Y_j.

Step 1: build linear regression models of Y_j and X_i on the remaining inputs:

Ŷ_j = a_0 + a_1 X_1 + ... + a_{i-1} X_{i-1} + a_{i+1} X_{i+1} + ... + a_n X_n
X̂_i = b_0 + b_1 X_1 + ... + b_{i-1} X_{i-1} + b_{i+1} X_{i+1} + ... + b_n X_n

Step 2: calculate the residuals:

ry_{i,j} = Y_j - Ŷ_j
rx_i = X_i - X̂_i

Step 3: calculate the correlation between ry_{i,j} and rx_i:

PCC_{X_i,Y_j} = cov(rx_i, ry_{i,j}) / sqrt(var(rx_i) var(ry_{i,j}))
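The three steps above translate directly into code. In this sketch (a hypothetical reconstruction of the example, not the slides' implementation), np.linalg.lstsq plays the role of the two auxiliary linear regressions:

```python
# PCC following the three steps above (hypothetical reconstruction of the
# slides' example; lstsq performs the auxiliary regressions).
import numpy as np

def pcc(X, y, i):
    """Partial correlation of input column i of X with output y."""
    A = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])  # Step 1 design
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]                # Step 2: residuals
    rx = X[:, i] - A @ np.linalg.lstsq(A, X[:, i], rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]                                 # Step 3

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (1000, 4))
y_clean = 10 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + 0.01 * X[:, 3]
y_noisy = y_clean + rng.uniform(-0.2, 0.2, 1000)

print([round(pcc(X, y_clean, i), 2) for i in range(4)])  # all 1.0: perfectly linear
print([round(pcc(X, y_noisy, i), 2) for i in range(4)])  # noise hides the weak inputs
```

With the clean output every PCC is 1, and with noise the PCCs of the weak inputs collapse, mirroring the table on the next slide.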

Partial Correlation Coefficient in the Example (1/2)

PCC values:

        X_1    X_2    X_3    X_4
Y_1    1.00   1.00   1.00   1.00
Y_2    1.00   0.93   0.25   0.02

- In the absence of noise, as the model is perfectly linear, each PCC is equal to 1.00. This reflects that each parameter has a perfect linear influence on the output Y_1.
- When random noise is added, the linear influence of a parameter is partially or totally hidden. Depending on its importance in the equation, a parameter may have a PCC varying from 1.00 to 0.02 in the analysis of Y_2.
- Note that if ε were in the group of input parameters, all PCCs would be equal to 1.00 for Y_2.

Partial Correlation Coefficient in the Example (2/2)

Taking X_3 as an illustration: since X_3 is uncorrelated with the other inputs, its regression on them reduces to its mean, so rx_3 = X_3 - 0.5.

- For the noise-free output, ry_{3,1} = Y_1 - Ŷ_1 = 0.1 X_3 - 0.05: the residual scatterplot of ry_{3,1} vs. rx_3 is a perfect line of slope 0.1, hence PCC = 1.00.
- For Y_2, the noise ε (which varies in [-0.2, 0.2]) is large compared with the 0.1 X_3 contribution, so the residual scatterplot of ry_{3,2} vs. rx_3 is a diffuse cloud and the PCC drops to 0.25.

[Figure: residual scatterplots of ry_{3,1} and ry_{3,2} against rx_3]

Regression Coefficient (RC)

The coefficients of the linear regression model are estimated using least squares.

Linear regression model: Ỹ_j = Σ_{i=1}^{n} α_i X_i, with X = (X_1, ..., X_n) the matrix of inputs and Y_j the output vector.

We want to select the α_i such that they minimize the square difference between the output Y_j and its linear regression (that is why it is called "least squares"). The minimum of the function is obtained when its derivative is zero:

f(α) = (Y_j - X α)^T (Y_j - X α) = min
f'(α) = -2 X^T (Y_j - X α) = 0
⟹ α = (X^T X)^{-1} X^T Y_j
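The closed form can be applied directly to a reconstruction of the Y_1 example (the samples below are hypothetical; in production code np.linalg.lstsq is preferred over the explicit inverse for numerical stability):

```python
# Least-squares coefficients via the normal equations derived above.
# Samples are a hypothetical reconstruction of the slides' example.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
X = rng.uniform(0, 1, (n, 4))
Y1 = 10 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + 0.01 * X[:, 3]

A = np.column_stack([np.ones(n), X])        # column of ones gives the intercept α0
alpha = np.linalg.inv(A.T @ A) @ A.T @ Y1   # α = (XᵀX)⁻¹ Xᵀ Y
print(np.round(alpha, 3))                   # recovers [0, 10, 1, 0.1, 0.01]
```

Since Y_1 is exactly linear in the inputs, the regression recovers the model coefficients to numerical precision.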

Regression Coefficient in the Example (1/2)

For Y_1 = 10 X_1 + X_2 + 0.1 X_3 + 0.01 X_4:

α = (X^T X)^{-1} X^T Y_1 = (α_0, α_1, α_2, α_3, α_4) = (2.6·10^{-8}, 10, 1, 0.1, 0.01)

so Ỹ_1 = 2.6·10^{-8} + 10 X_1 + X_2 + 0.1 X_3 + 0.01 X_4.

For Y_2 = 10 X_1 + X_2 + 0.1 X_3 + 0.01 X_4 + ε:

α = (X^T X)^{-1} X^T Y_2 = (0.03, 10.03, 1.02, 0.101, 0.014)

so Ỹ_2 = 0.03 + 10.03 X_1 + 1.02 X_2 + 0.101 X_3 + 0.014 X_4.

Regression Coefficient in the Example (2/2)

Note: the values are not normalized, so changing the units will change the result.

Let us change X_2 to be uniformly distributed between 0 and 0.1. Y_1 is then defined as:

Y_1 = 10 X_1 + 10 X_2 + 0.1 X_3 + 0.01 X_4

and the regression gives:

α = (X^T X)^{-1} X^T Y_1 = (2.6·10^{-8}, 10, 10, 0.1, 0.01)

The change in unit changes the value of the regression coefficient associated with X_2, while its influence does not change.

Standardized Regression Coefficient (SRC)

The standardized coefficients of the linear regression model are the linear coefficients of the standardized model. The standardization of a variable is performed by subtracting the mean and dividing the result by the standard deviation:

X*_i = (X_i - μ_{X_i}) / σ_{X_i} ;  Y*_j = (Y_j - μ_{Y_j}) / σ_{Y_j}

Standardized model: Ỹ*_j = Σ_{i=1}^{n} α*_i X*_i, with X* = (X*_1, ..., X*_n).

The standardized values will not be affected by a unit change.
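A sketch of the SRC computation on a hypothetical reconstruction of the noisy example (standardize each variable, then regress; no intercept is needed after centering):

```python
# Standardized regression coefficients: subtract the mean, divide by the
# standard deviation, then regress. Samples are a hypothetical
# reconstruction of the slides' Y2 example.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = rng.uniform(0, 1, (n, 4))
Y = 10 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + 0.01 * X[:, 3] \
    + rng.uniform(-0.2, 0.2, n)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardized inputs
Ys = (Y - Y.mean()) / Y.std()                  # standardized output
src, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)  # centering removes the intercept
print(np.round(src, 2))                        # X1 dominates, as in the SRC table
```

Because both sides are divided by their standard deviations, rescaling any variable's units leaves the SRCs unchanged.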

Standardized Regression Coefficient in the Example

SRC values:

        X_1    X_2    X_3    X_4
Y_1    0.99   0.10   0.01   0.00
Y_2    0.99   0.10   0.01   0.00

The results are the same whether there is random noise or not.

Rank Transformation (1/2)

All the previous techniques suppose that the relation between input and output is linear. This is generally NOT the case.

A simple way to relax part of this constraint is to work with RANKS:

- Raw data → (rank transform) → rank data
- SRC (Standardized Regression Coefficient) → SRRC (Standardized Rank Regression Coefficient)
- PCC (Partial Correlation Coefficient) → PRCC (Partial Rank Correlation Coefficient)
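The transform itself is just a sort. A sketch with a hypothetical monotonic-but-nonlinear model shows why ranking helps: the raw correlation is degraded by the curvature, while the rank correlation recovers the perfect monotonic relation:

```python
# Rank transform: replace each sample by its rank, then reuse the linear
# machinery on the ranks. The model below is a hypothetical illustration.
import numpy as np

def ranks(v):
    """Rank transform, 1..n (simple no-ties case)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 1000)
y = np.exp(5 * x)                                # monotonic, strongly nonlinear

raw_cc = np.corrcoef(x, y)[0, 1]                 # degraded by the curvature
rank_cc = np.corrcoef(ranks(x), ranks(y))[0, 1]  # 1: the relation is monotonic
print(round(raw_cc, 2), round(rank_cc, 2))
```

Applying the SRC or PCC machinery to the ranked data gives the SRRC and PRCC, respectively.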

Rank Transformation (2/2)

Advantages:

- It extends the relations captured from linear to monotonic patterns
- It does not involve any calculation beyond ranking
- It does not create over-fitting

Drawbacks:

- As it is a non-parametric method, it cannot be used for prediction
- It still does not capture non-monotonic patterns or the conjoint influence of multiple input parameters

Coefficient of Determination (R²)

The coefficient of determination, noted R², measures the part of the variance of the output that is explained by the regression model.

- R² ≈ 1: most of the variance of Y is explained. No other analysis is required to determine the most influential parameters.
- R² ≈ 0: most of the variance of Y is NOT explained. Some influential parameters may be missing, OR the influence of the selected parameters is misrepresented. It may thus be necessary to look at scatterplots and/or apply different regression techniques.
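A minimal sketch of the two regimes, computing R² = 1 - Var(residual)/Var(Y) for a linear fit (the two outputs below are hypothetical examples, not from the slides):

```python
# Coefficient of determination for a linear fit; two hypothetical outputs
# illustrate the two regimes discussed above.
import numpy as np

def r_squared(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()   # fraction of Var(Y) explained by the fit

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (1000, 2))
y_linear = 3 * X[:, 0] + X[:, 1] + rng.normal(0, 0.05, 1000)  # nearly linear
y_weird = np.sin(12 * X[:, 0]) + rng.normal(0, 0.5, 1000)     # badly misrepresented

print(round(r_squared(X, y_linear), 2))  # near 1: the regression explains Y
print(round(r_squared(X, y_weird), 2))   # low: fall back to scatterplots or
                                         # a different regression technique
```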

Conclusive Remarks on Regression, Correlation, Rank Transform and R²

Partial correlation coefficients (PCCs) and standardized regression coefficients (SRCs) are usually not calculated as described above, as they can all be obtained by inverting the correlation matrix.

Rank regression is a powerful tool for performing sensitivity analysis:

- It captures monotonic relations, which are the most common in the type of problem we deal with
- It is fast and can thus be applied to large sets of data
- The technique is fairly robust and does not usually over-fit (i.e., over-estimate the value of R²)

The R² is an indicator of the quality of the regression, informing us when more analyses are necessary to understand the system. Experience shows that most of the time, even when the R² is low, the most influential parameters are selected (their influence is, however, under-estimated).

General Conclusion

While several techniques exist, each bringing different aspects and information to sensitivity analysis, the most commonly used in the type of system we study are sampling-based methods:

- They are easy to set up and do not need more runs than the uncertainty analysis itself
- They are not so expensive considering that the same sample set can serve a large input set and many output parameters

Furthermore, simple regression, associated with the rank transform, will give fairly good information on the behavior of a particular output about 80% of the time.

However, that does not mean that other methods should not be used. Sampling-based methods make it possible to simplify the analysis (results can be processed automatically) and to reduce the number of cases on which more sophisticated methods need to be applied. These methods will be presented in the subsequent lectures.

References

1. J.C. Helton, J.D. Johnson, C.J. Sallaberry, and C.B. Storlie, "Survey of Sampling-Based Methods for Uncertainty and Sensitivity Analysis," Reliability Engineering and System Safety 91 (2006), pp. 1175-1209.
2. M. Ionescu-Bujor and D.G. Cacuci, "A Comparative Review of Sensitivity and Uncertainty Analysis of Large-Scale Systems, Parts I and II," Nuclear Science and Engineering, Vol. 147, No. 3, pp. 189-217.
3. "What is Sensitivity Analysis," "Local Methods," and "Sampling-Based Methods," in A. Saltelli, K. Chan, and E.M. Scott (eds.), Sensitivity Analysis (2000), New York, NY: Wiley.
