Analysis of a surface response design
Analyzing a surface response design lets you identify the parameter values that optimize a response. Available in Excel using the XLSTAT software.
What is the analysis of a surface response design?
The analysis of a surface response design uses the same statistical and conceptual framework as linear regression. The main difference comes from the model that is used.
Options for the Analysis of surface response design in XLSTAT
Responses optimization and desirability
When there are several responses y1, ..., ym, it is possible to optimize each response individually and then to combine them into a single desirability function whose values can be analyzed. Proposed by Derringer and Suich (1980), this approach first converts each response yi into an individual desirability function di that varies over the range 0 ≤ di ≤ 1.
When yi has reached its target, di = 1. If yi is outside an acceptable region around the target, di = 0. Between these two extremes, di takes intermediate values, as described below.
The three optimization cases for di (minimize, maximize, or reach a target) rely on the following definitions; a small computational sketch follows the list:
L = lower bound. Every value smaller than L has di = 0.
U = upper bound. Every value greater than U has di = 0.
T(L) = left target value.
T(R) = right target value. Every value between T(L) and T(R) has di = 1.
s, t = weighting parameters that define the shape of the desirability function between L and T(L), and between T(R) and U.
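As an illustration, here is a minimal Python sketch of the two-sided case, following the definitions above; the `desirability` helper, the bounds and the shape parameters are invented for the example and do not reproduce XLSTAT's internal code:

```python
import numpy as np

def desirability(y, L, U, T_left, T_right, s=1.0, t=1.0):
    """Two-sided individual desirability d_i (illustrative helper).

    L, U             : acceptability bounds; d_i = 0 outside [L, U]
    T_left, T_right  : target interval; d_i = 1 inside [T_left, T_right]
    s, t             : shape parameters of the rising and falling branches
    """
    y = np.asarray(y, dtype=float)
    d = np.zeros_like(y)

    rising = (y >= L) & (y < T_left)                    # between L and the left target
    d[rising] = ((y[rising] - L) / (T_left - L)) ** s

    on_target = (y >= T_left) & (y <= T_right)          # on target
    d[on_target] = 1.0

    falling = (y > T_right) & (y <= U)                  # between the right target and U
    d[falling] = ((U - y[falling]) / (U - T_right)) ** t

    return d

# Example: target interval [45, 55], acceptable range [30, 70], steeper penalty above the target
print(desirability([25, 35, 50, 60, 75], L=30, U=70, T_left=45, T_right=55, s=1, t=2))
```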
Interactions / Level
In the Responses tab, activate this option if you want to include interactions in the model, then enter the maximum interaction level (a value between 1 and 4).
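To give an idea of what the interaction level means, the sketch below builds the product columns for every combination of factors up to a chosen level; the `interaction_columns` helper is hypothetical and only illustrates the principle:

```python
import itertools
import numpy as np

def interaction_columns(X, names, max_level=2):
    """Append product columns for every factor combination up to max_level (hypothetical helper)."""
    cols = [X[:, i] for i in range(X.shape[1])]
    labels = list(names)
    for level in range(2, max_level + 1):
        for combo in itertools.combinations(range(X.shape[1]), level):
            cols.append(np.prod(X[:, list(combo)], axis=1))
            labels.append("*".join(names[i] for i in combo))
    return np.column_stack(cols), labels

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
Xi, labels = interaction_columns(X, ["A", "B", "C"], max_level=3)
print(labels)   # ['A', 'B', 'C', 'A*B', 'A*C', 'B*C', 'A*B*C']
```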
Results for Analysis of surface response design in XLSTAT
Variables information: This table shows information about the factors. For each factor, the short name, long name, unit and physical unit are displayed.
Responses optimization: This table gives the 5 best solutions obtained during the responses optimization.
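The ranking of candidate solutions can be sketched by combining the individual desirabilities into an overall desirability (their geometric mean, as in Derringer and Suich) and sorting; the candidate grid, the predicted responses and the simplified one-sided desirability helper below are all invented for illustration:

```python
import numpy as np

def d_larger_is_better(y, L, T, s=1.0):
    """One-sided desirability: 0 below L, 1 at or above the target T (illustrative helper)."""
    return np.clip((np.asarray(y, dtype=float) - L) / (T - L), 0.0, 1.0) ** s

# Hypothetical predicted responses on a grid of candidate factor settings
# (in practice the predictions come from the fitted response surface models)
rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(200, 2))            # 200 points, 2 coded factors
y1_hat = 50 + 5 * candidates[:, 0] - 3 * candidates[:, 1] ** 2
y2_hat = 10 + 2 * candidates[:, 0] * candidates[:, 1]

# Overall desirability: geometric mean of the individual desirabilities
d1 = d_larger_is_better(y1_hat, L=45, T=54)
d2 = d_larger_is_better(y2_hat, L=9,  T=11)
D = np.sqrt(d1 * d2)

# Keep the 5 best candidate settings, as in the Responses optimization table
best = np.argsort(D)[::-1][:5]
for rank, i in enumerate(best, start=1):
    print(rank, np.round(candidates[i], 2), round(D[i], 3))
```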
Goodness of fit statistics: The statistics relating to the fitting of the regression model are shown in this table (R², MSE, Cp, AIC, SBC, MAPE, etc.).
The analysis of variance table is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (in the least-squares sense) of the final model with the fit of the rudimentary model that includes only a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant that has been set.
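A rough NumPy/SciPy sketch of these computations on invented one-factor data: the model is fitted by ordinary least squares, compared with the mean-only model through an F test, and a few of the goodness-of-fit statistics listed above are derived:

```python
import numpy as np
from scipy import stats

# Invented one-factor data, only to illustrate the computations
rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 20)
y = 10 + 3 * x + rng.normal(0, 1, 20)

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

ss_res = np.sum((y - X @ beta) ** 2)      # residual SS of the final model
ss_tot = np.sum((y - y.mean()) ** 2)      # residual SS of the mean-only (rudimentary) model
df_model, df_resid = p - 1, n - p

# Analysis of variance: does the model explain more than the mean alone?
F = ((ss_tot - ss_res) / df_model) / (ss_res / df_resid)
p_value = stats.f.sf(F, df_model, df_resid)

# A few goodness-of-fit statistics
r2 = 1 - ss_res / ss_tot
mse = ss_res / df_resid
aic = n * np.log(ss_res / n) + 2 * p      # AIC up to an additive constant

print(f"F({df_model}, {df_resid}) = {F:.2f}, Pr > F = {p_value:.4f}")
print(f"R² = {r2:.3f}, MSE = {mse:.3f}, AIC = {aic:.2f}")
```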
If the Type I/II/III SS (SS: Sum of Squares) option is activated, the corresponding tables are displayed.
The parameters of the model table displays the estimate of each parameter, its standard error, the Student's t statistic, the corresponding probability, and the confidence interval around the estimate.
The equation of the model is then displayed to make it easier to read or re-use the model.
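As a concrete illustration, the sketch below fits a second-order (quadratic) model to invented central-composite-style data by ordinary least squares, prints a parameter table (estimate, standard error, Student's t, probability) and writes out the model equation; it is not XLSTAT output:

```python
import numpy as np
from scipy import stats

# Invented two-factor central-composite-style data (not XLSTAT output)
rng = np.random.default_rng(1)
x1 = np.array([-1, 1, -1, 1, -1.414, 1.414, 0, 0, 0, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0, -1.414, 1.414, 0, 0, 0])
y = 50 + 4*x1 - 3*x2 + 2*x1*x2 - 5*x1**2 - 2*x2**2 + rng.normal(0, 1, x1.size)

# Second-order model matrix: constant, linear, interaction and squared terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
names = ["Intercept", "x1", "x2", "x1*x2", "x1^2", "x2^2"]

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least squares
resid = y - X @ beta
dof = X.shape[0] - X.shape[1]
mse = resid @ resid / dof                            # residual mean square
cov = mse * np.linalg.inv(X.T @ X)                   # covariance of the estimates
se = np.sqrt(np.diag(cov))
t_stat = beta / se
p_val = 2 * stats.t.sf(np.abs(t_stat), dof)

print(f"{'term':>10} {'estimate':>10} {'std err':>9} {'t':>7} {'Pr>|t|':>8}")
for name, b, s_, t_, p_ in zip(names, beta, se, t_stat, p_val):
    print(f"{name:>10} {b:10.3f} {s_:9.3f} {t_:7.2f} {p_:8.4f}")

# Equation of the model, ready to read or re-use
equation = " + ".join(f"{b:.3f}*{n}" for n, b in zip(names[1:], beta[1:]))
print(f"y = {beta[0]:.3f} + {equation}")
```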
The table of standardized coefficients (also called beta coefficients) is used to compare the relative weights of the variables. The higher the absolute value of a coefficient, the more important the weight of the corresponding variable. When the confidence interval around a standardized coefficient includes 0 (this can easily be seen on the chart of standardized coefficients), the weight of the variable in the model is not significant.
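One way to see where standardized coefficients come from: fitting the model on z-scored variables directly yields the beta coefficients. The data below are invented:

```python
import numpy as np

# Invented data: one response and two explanatory variables on very different scales
rng = np.random.default_rng(2)
x1 = rng.uniform(10, 30, 20)      # e.g. temperature
x2 = rng.uniform(1, 5, 20)        # e.g. concentration
y = 3 + 0.8 * x1 - 4.0 * x2 + rng.normal(0, 1, 20)

def zscore(v):
    return (v - v.mean()) / v.std(ddof=1)

# Regressing the z-scored response on the z-scored variables gives the standardized coefficients
Z = np.column_stack([np.ones(20), zscore(x1), zscore(x2)])
beta_std, _, _, _ = np.linalg.lstsq(Z, zscore(y), rcond=None)
print("standardized coefficients:", beta_std[1:])   # intercept is ~0 by construction
```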
The predictions and residuals table shows, for each observation, its weight, the value of the qualitative explanatory variable if there is only one, the observed value of the dependent variable, the model's prediction, the residuals, the confidence intervals, together with the fitted prediction and Cook's D if the corresponding options have been activated in the dialog box. Two types of confidence interval are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a given set of values of the explanatory variables) and an interval around an isolated prediction (corresponding to the case of a single prediction for the given values of the explanatory variables). The second interval is always wider than the first, since it also accounts for the random variation of an individual observation.
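Both intervals can be sketched from the usual least-squares formulas; in the invented one-factor example below, h is the leverage of the prediction point, and the isolated-prediction interval adds the residual variance:

```python
import numpy as np
from scipy import stats

# Invented one-factor data (not XLSTAT output)
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 15)
y = 20 + 5 * x + rng.normal(0, 1, x.size)

X = np.column_stack([np.ones_like(x), x])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = X.shape[0] - X.shape[1]
mse = np.sum((y - X @ beta) ** 2) / dof
XtX_inv = np.linalg.inv(X.T @ X)
tcrit = stats.t.ppf(0.975, dof)             # 95% two-sided critical value

x0 = np.array([1.0, 0.5])                   # prediction at coded factor value 0.5
y0 = x0 @ beta
h = x0 @ XtX_inv @ x0                       # leverage of the prediction point

half_mean = tcrit * np.sqrt(mse * h)        # half-width of the interval around the mean
half_pred = tcrit * np.sqrt(mse * (1 + h))  # half-width around an isolated prediction (always wider)
print(f"prediction {y0:.2f}, mean interval ±{half_mean:.2f}, isolated interval ±{half_pred:.2f}")
```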
The charts which follow show the results mentioned above. If there is only one explanatory variable in the model, the first chart shows the observed values, the regression line and both types of confidence interval around the predictions. The second chart shows the standardized residuals as a function of the explanatory variable. In principle, the residuals should be distributed randomly around the X-axis. If there is a trend or a pattern, this indicates a problem with the model.
The three charts displayed next respectively show the evolution of the standardized residuals as a function of the dependent variable, the distance between the predictions and the observations (for an ideal model, the points would all lie on the bisector), and the standardized residuals on a bar chart. The last chart quickly shows whether an abnormal number of values fall outside the interval ]-2, 2[ which, assuming that the sample is normally distributed, should contain about 95% of the data.
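A quick check of this rule of thumb on invented standardized residuals:

```python
import numpy as np

# Invented standardized residuals (in practice, taken from the predictions and residuals table)
rng = np.random.default_rng(4)
std_resid = rng.standard_normal(50)

outside = np.abs(std_resid) > 2
print(f"{outside.sum()} of {std_resid.size} residuals outside ]-2, 2[ "
      f"({100 * outside.mean():.1f}%); roughly 5% is expected under normality")
```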
Then the contour plot is displayed, if the design has two factors and the corresponding option is activated. The contour plot is shown both as a two-dimensional projection and as a 3D chart. Using these charts, the joint effect of the two factors on the response can be analyzed.
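A matplotlib sketch of the two views for an invented fitted quadratic surface (XLSTAT draws these charts directly in Excel):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented fitted quadratic surface for two coded factors (coefficients are made up)
def y_hat(x1, x2):
    return 50 + 4*x1 - 3*x2 + 2*x1*x2 - 5*x1**2 - 2*x2**2

g = np.linspace(-1.5, 1.5, 100)
X1, X2 = np.meshgrid(g, g)
Z = y_hat(X1, X2)

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1)
cs = ax1.contourf(X1, X2, Z, levels=15, cmap="viridis")   # 2D projection
fig.colorbar(cs, ax=ax1, label="predicted response")
ax1.set_xlabel("x1 (coded)"); ax1.set_ylabel("x2 (coded)")

ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(X1, X2, Z, cmap="viridis")               # 3D view of the same surface
ax2.set_xlabel("x1"); ax2.set_ylabel("x2"); ax2.set_zlabel("predicted response")
plt.tight_layout()
plt.show()
```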
Then the trace plots are displayed, if the corresponding option is activated. For each factor, the trace plot shows the response as a function of that factor, with all other factors held at their mean value. These charts are shown in two versions: with standardized factors and with factors in their original values. Using these plots, the dependence of a response on a given factor can be analyzed.
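A matplotlib sketch of trace plots for an invented three-factor model, with the other factors held at their coded mean of 0:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented fitted model with three coded factors (coefficients are made up)
def y_hat(x1, x2, x3):
    return 50 + 4*x1 - 3*x2 + 1.5*x3 - 5*x1**2 - 2*x2**2 + 0.5*x3**2

grid = np.linspace(-1, 1, 50)
center = 0.0                                  # all other factors held at their coded mean

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
traces = [
    ("x1", lambda g: y_hat(g, center, center)),
    ("x2", lambda g: y_hat(center, g, center)),
    ("x3", lambda g: y_hat(center, center, g)),
]
for ax, (name, f) in zip(axes, traces):
    ax.plot(grid, f(grid))
    ax.set_xlabel(f"{name} (coded)")
axes[0].set_ylabel("predicted response")
plt.tight_layout()
plt.show()
```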