# DoE

Design of Experiments (DoE) is a systematic process for planning, running, and analysing experiments so that the best possible result for a given goal is reached with minimal effort.

## General Idea

The DoE process can be broken down into five steps.

1. Describe: the research team determines the goal. The response is defined: something of value that is measurable. The factors are analysed, listing which are held constant and which may vary.
2. Specify: find a model that accurately describes the physical situation (e.g. MANOVA, regression, linear models).
3. Collect: collect the data prescribed by the design.
4. Fit: estimate the model parameters from the collected data.
5. Predict: use the fitted model to predict the response for untried factor settings and to choose the best configuration.
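The five steps can be sketched end to end. This is a minimal illustration with simulated data; the factors, the linear model, and all numbers are hypothetical, not part of the notes above.

```python
# Minimal sketch of the five DoE steps on a hypothetical two-factor experiment.
import numpy as np

# 1. Describe: response = runtime, factors = cores (1-4) and RAM in GB (2 or 4).
cores = np.array([1, 2, 3, 4, 1, 2, 3, 4])
ram   = np.array([2, 2, 2, 2, 4, 4, 4, 4])

# 2. Specify: a linear model  y = b0 + b1*cores + b2*ram + noise.
# 3. Collect: here we simulate measurements instead of running real experiments.
rng = np.random.default_rng(0)
y = 10.0 - 1.5 * cores - 0.5 * ram + rng.normal(0, 0.1, size=8)

# 4. Fit: estimate b0, b1, b2 by least squares.
X = np.column_stack([np.ones_like(cores), cores, ram])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# 5. Predict: expected response for an untried setting (4 cores, 4 GB RAM).
y_hat = b @ [1, 4, 4]
```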

## Designing Experiments for Performance Analysis

Performance tuning means finding the parameter values that give optimal performance. This involves designing a set of experiments, developing a model that best describes the data, and analysing that model.

### Terminology

• Response variables: the outcome of the experiment.
• Factors (predictors): variables that affect the response variable.
• Levels (treatments): the values that a factor can assume.
• Replication: repetition of experiments; replication captures the variability in the response.
• Design: the number of experiments, the factor levels, and the number of replications for each experiment.
• Example: an ML algorithm can be trained on a machine with 1-4 cores and 2 or 4 GB RAM. With 5 replications of each experiment, that means `4*2*5 = 40` experiments.
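The example design above can be enumerated directly; the run list is just the cross product of the factor levels, repeated once per replication.

```python
# Enumerating the example design: cores 1-4, RAM 2 or 4 GB,
# 5 replications of each factor combination -> 4 * 2 * 5 = 40 runs.
from itertools import product

cores_levels = [1, 2, 3, 4]
ram_levels = [2, 4]   # GB
replications = 5

runs = [(c, r, rep)
        for c, r in product(cores_levels, ram_levels)
        for rep in range(replications)]

print(len(runs))  # 40
```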

### Types of Experimental Designs

To make optimal use of our time, money, and other resources, we have to look at the type of design and the variations in experiments.

• Simple design: vary one factor at a time.
  • (-) statistically inefficient
  • (-) wrong conclusions if the factors interact
• Full factorial design: run all combinations of factor levels.
  • (-) costly
  • (-) time consuming, e.g. n factors with 2 levels each require 2^n experiments
  • (+) all factor effects can be found
• Fractional factorial design: run a subset of the full factorial design.
  • (-) less information (confounding [LINK])
  • (~) not all interactions can be judged
  • (+) time and cost efficient
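The cost difference between the last two designs is easy to see in code. This sketch assumes binary factors coded as -1/+1 (a common convention not used explicitly above) and picks one possible half-fraction.

```python
# Full 2^n factorial vs. a half-fraction for n = 4 binary factors.
import math
from itertools import product

n = 4
full = list(product([-1, 1], repeat=n))   # 2^4 = 16 runs

# One common half-fraction: keep only runs whose levels multiply to +1.
# This halves the cost, but confounds some interactions with each other.
half = [run for run in full if math.prod(run) == 1]   # 2^(4-1) = 8 runs
```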

## In-depth Designs

Before creating a design, some assumptions and checks have to be made. The response variable must have an expected outcome. The model parameters are then estimated from the experimental data, and the goodness of fit is assessed by examining the errors, performing an ANOVA, and computing confidence intervals. Finally, check whether the assumptions made about the response variable and the model actually hold.

(Notation: see Figure 1.)

### One Factor Design

In a one factor design, there is one independent variable that can change.

Calculations: we can simply compute the average of each version of the experiment (the column average y̅.j) as well as the overall average y̅.. over all outcomes. The column effect is then the column average minus the overall average, αj = y̅.j − y̅.. (the formula in Figure 1).

By using an estimator, we can estimate the model parameters (Figure 2).
The response variable is modelled as yij = μ + αj + eij, where μ is the overall mean, αj the effect of version j, and eij the error term. (Figure 2: Estimator)
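The column-average calculation can be sketched directly. The three versions and their values below are hypothetical; the point is that each effect is the column mean minus the overall mean, so the effects sum to zero.

```python
# One-factor design: estimate mu and the column effects alpha_j.
# Three hypothetical versions (columns), three replications each.
data = {
    "v1": [14.0, 15.0, 16.0],
    "v2": [20.0, 21.0, 22.0],
    "v3": [11.0, 12.0, 13.0],
}

col_means = {v: sum(ys) / len(ys) for v, ys in data.items()}
n_total = sum(len(ys) for ys in data.values())
overall = sum(y for ys in data.values() for y in ys) / n_total  # mu estimate

effects = {v: m - overall for v, m in col_means.items()}  # alpha_j
```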

### 2^k factorial

In a 2^k factorial design, there are k factors, each having two levels. The effect should be unidirectional (monotonic): for example, turning a parameter down affects the result in only one direction.

#### Model

An example observation set may look like the one in Figure 3. Here, the experiment is a 2-factor experiment with 2 levels each.

Now, to turn this into a mathematical expression, we have to do some encoding. Say that if the memory (factor A) is 4MB we call it not-chosen, and for 16MB we call it chosen. We do the same for a cache size (factor B) of 1KB and 2KB, not-chosen and chosen respectively. In Figure 4, a regression is performed using this encoding. The results read as follows: choosing factor A changes performance by 20 in either direction, factor B by 10, and the interaction between both by 5. This difference is called the effect.

(Figure 3: Observations of an experiment. Figure 4: Regression solved for y. Figure 5: Solving the regression using a sign table.)
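The sign-table method can be sketched as follows. It assumes the usual -1/+1 coding for the two levels; the observation values are hypothetical, chosen so that the effects come out as the 20, 10, and 5 mentioned above.

```python
# Solving a 2^2 design with a sign table.
# Model: y = q0 + qA*xA + qB*xB + qAB*xA*xB, with xA, xB in {-1, +1}.
runs = [  # (xA, xB, y) -- hypothetical observations
    (-1, -1, 15),
    (+1, -1, 45),
    (-1, +1, 25),
    (+1, +1, 75),
]

n = len(runs)
# Each effect is the signed column sum divided by the number of runs.
q0  = sum(y for _, _, y in runs) / n            # mean performance
qA  = sum(xA * y for xA, _, y in runs) / n      # effect of factor A
qB  = sum(xB * y for _, xB, y in runs) / n      # effect of factor B
qAB = sum(xA * xB * y for xA, xB, y in runs) / n  # interaction effect
```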

## Analyzing Models

ANOVA is used for testing model accuracy and confidence. By testing the hypothesis α1 = α2 = ... = αn = 0 (no version has an effect), one can find out whether the differences between the versions are statistically significant.

Allocation of variation: the total variation in the observations is split into the part explained by the factors and the part due to errors. A factor is important if it explains a high percentage of the variation, while significance compares a factor's variation to the error variation.

If the model is good, we reject the hypothesis that all effects are zero.
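The allocation of variation can be sketched for the one-factor layout. The data below are hypothetical; the point is that the total variation splits exactly into the part explained by the factor and the error part, and "importance" is the explained fraction.

```python
# Allocation of variation for a one-factor design:
# SST (total) = SSA (explained by the factor) + SSE (error).
data = {  # hypothetical: three versions, three replications each
    "v1": [14.0, 15.0, 16.0],
    "v2": [20.0, 21.0, 22.0],
    "v3": [11.0, 12.0, 13.0],
}

all_y = [y for ys in data.values() for y in ys]
overall = sum(all_y) / len(all_y)

# Variation explained by the factor (between-column).
ssa = sum(len(ys) * (sum(ys) / len(ys) - overall) ** 2 for ys in data.values())
# Unexplained variation / error (within-column).
sse = sum((y - sum(ys) / len(ys)) ** 2 for ys in data.values() for y in ys)
# Total variation around the overall mean.
sst = sum((y - overall) ** 2 for y in all_y)

explained = ssa / sst  # "importance": fraction of variation the factor explains
```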