
Design Of Experiment (DOE)

MR-CFD experts are ready for DOE analysis, consulting, training, and CFD simulation.


Optimization process in ANSYS Workbench software

DOE

DOE is a set of measures, built around modeling a process and its related variables, that leads to increased production efficiency. A set of parameters that affect the outcome of a particular process is analyzed to obtain the best possible values for producing an optimal product. The purpose of DOE is therefore, first, to determine which input factors or parameters have a significant impact on the output or product of the process, and second, to determine what values these input factors or parameters should take to achieve the desired output.

Optimization

Optimization is the set of steps taken to find the best possible output of a process with the highest return and the lowest cost. The optimization process depends on several factors, such as the nature of the process itself and the solution methods used.

DOE in ANSYS Workbench Software

The purpose of producing a DOE is to divide the range of each input factor or parameter into a set of values, from the minimum to the maximum, according to a certain pattern. These values are called design points. A combination of design points, one per input parameter, is then placed in each row of a table, and each row corresponds to one solution run: the input parameters take the values in that row simultaneously, and the solver is executed with them. The result obtained from each solution therefore reflects the simultaneous, combined effect of those input-parameter values on the desired output parameters.

The figure below shows an example of a table of DOE design points. In this example, there are four input (independent) parameters, namely length, radius, inlet velocity, and heat flux on the wall, and two output (dependent) parameters, namely outlet temperature and pressure drop.

[Figure: DOE design-point table with four input and two output parameters]

DOE Types

Several different methods are used to produce the DOE; each method determines how the range of variation of each input factor or parameter is divided. In other words, each DOE type uses its own rule to set the number of design points and the way those points are distributed.

These methods in ANSYS Workbench are as follows:

  • Central Composite Design (CCD)
  • Box-Behnken Design (BBD)
  • Custom
  • Custom + Sampling
  • Optimal Space-Filling Design
  • Sparse Grid Initialization
  • Latin Hypercube Sampling Design

Naturally, the greater the number of generated design points, the longer the solution process and the higher the computational cost; on the other hand, more points can increase the accuracy of the optimization process. The number of input parameters or variables also affects the DOE and the RSM method: increasing the number of defined input variables prolongs DOE generation and makes it harder to build an accurate response surface, because the response surface depends on the relationship between the input variables and the output parameters. The more input parameters are defined, the harder it becomes to determine how much each of them affects the output parameters. It is therefore recommended to use as few input parameters as possible with any DOE method.

Hence, there are limits on the number of input parameters. For example, the Central Composite Design (CCD) method allows up to 20 input parameters, the BBD allows up to 12, and the LHS and Optimal Space-Filling (OSF) methods allow up to 20.

If the number of input parameters exceeds the allowable limit, the software issues a warning to reduce the number of input parameters, in which case some of them must be disabled. The less influential parameters should be disabled first; if it is difficult to decide which parameters are less important, a parameter-correlation study should be used to identify the parameters with lower correlation.

The following figure shows the DOE types in the ANSYS Workbench software.

[Figure: DOE types available in ANSYS Workbench]

Regression Equation

A regression model is a statistical model with which the value of a variable can be estimated from changes in one or more input parameters; in other words, the effect of each input parameter or variable on an output parameter or variable can be quantified. If we denote the input parameters or variables by Xn and the desired output variable by Y, the set of sample design points obtained from the solution or experiment, (Xn, Y), can be drawn as a diagram. A linear regression equation is then estimated to predict the value of the dependent variable from changes in one or more independent variables. If the variable Y is assumed to depend on changes in each of the input variables X, the following equation is obtained, in which each independent variable is multiplied by a coefficient and a constant term is added. These coefficients and the constant are obtained from the estimation process. Ԑ denotes the error of the equation, i.e., the difference between the value of the output parameter predicted by the assumed linear equation at a given value of the input parameters and the value of the output parameter obtained from the experiment at the same design point.

The following equation represents a linear multiple regression equation:

Y = β0 + β1 X1 + β2 X2 + … + βn Xn + Ԑ

The following figure shows a simple linear regression function in which the value of the output variable Y is a function of a single input variable X; the sloped line represents the estimated function used to predict the design points.

[Figure: simple linear regression line fitted through the design points]

The estimation process tries to choose the coefficients of the regression equation so that, when these values are substituted into the equation, the predicted value of the output parameter at each design point matches the available data (the design points obtained from the solution or test process) as closely as possible. In other words, the coefficients must be such that, when the value of each input parameter is substituted into the resulting equation, the output value of the equation differs as little as possible from the output value obtained from the solution process at the same input values.

One of the common estimation methods is the least-squares method, in which the sum of the squares of the differences between the values estimated by the equation and the values obtained from the software solution is minimized. We write the above equation for the error (Ԑ) in terms of the output parameter Y and the input parameters X, square it, and set its derivatives with respect to the coefficients of the equation to zero, so that after the necessary mathematics an equation with estimated coefficients and the least possible error is obtained.
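To make the least-squares idea concrete, the short sketch below (illustrative only; the design-point values are made up, not taken from the article's example) fits a multiple linear regression with NumPy and prints the residuals Ԑ at each design point.

```python
# A minimal sketch: fitting Y = b0 + b1*X1 + b2*X2 + eps by least squares.
import numpy as np

# Hypothetical design points: columns are the input parameters X1, X2
X = np.array([[720.0,  90.0],
              [720.0, 110.0],
              [880.0,  90.0],
              [880.0, 110.0],
              [800.0, 100.0]])
# Hypothetical output parameter (e.g. pressure drop) from the CFD solutions
y = np.array([12.1, 9.8, 14.3, 11.9, 12.0])

# Add a column of ones so the constant term b0 is estimated as well
A = np.column_stack([np.ones(len(X)), X])

# Least squares: minimize the sum of squared residuals ||A*b - y||^2
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2 = coeffs
print("estimated equation: Y = %.3f + %.4f*X1 + %.4f*X2" % (b0, b1, b2))

# The residual (epsilon) at each design point is the difference between the
# solver value and the value predicted by the fitted equation
print("residuals:", y - A @ coeffs)
```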

Therefore, in the DOE environment, different methods are used to divide the range of each independent input parameter, each producing a specific pattern of design points; as a result, the estimated relationship between an output parameter and one or more input parameters will vary accordingly.

Central Composite Design (CCD)

The Central Composite Design (CCD) model uses five levels to divide the range of each input factor or parameter. The five levels are -𝛂, -1, 0, +1, +𝛂, where the +𝛂 and -𝛂 levels are equal to the maximum and minimum values of each input parameter, respectively, and level 0 is equal to the mean value of that parameter. Each input parameter is therefore divided into five levels between its minimum and maximum values.

The number of design points in this model is obtained from the following equation, in which k is the number of input parameters or factors. In this formula, the number 1 represents the single case in which all factors take their mean value (level 0), the term 2k represents the cases in which one parameter takes its maximum or minimum value (level +𝛂 or level -𝛂) while the remaining parameters are held at their mean values, and the term 2^(k-f) represents the cases in which the parameters take values between the extreme and middle levels (level +1 or level -1).

N = 1 + 2k + 2^(k − f)

It should be noted that the factor f is only a quantity used to limit the number of design points to a reasonable value and to prevent an excessive increase in their number; its value depends on the number of input parameters, as given in the table below. The table covers up to the maximum number of possible input parameters, i.e., 20 parameters, and gives the number of design points with and without the limiting factor f. ANSYS Workbench uses the case with the limiting factor f to determine the number of design points. According to the formula, two-factor models therefore have 9 design points, three-factor models 15, four-factor models 25, and so on.

[Figure: table of the fractional factor f and the resulting number of CCD design points]
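The count formula above is easy to evaluate; the small sketch below (an illustrative Python snippet, not part of ANSYS Workbench) reproduces the 9, 15, and 25 design points quoted in the text for two, three, and four factors with f = 0.

```python
# A small sketch of the CCD design-point count described above:
# N = 1 (center) + 2*k (axial points) + 2**(k - f) (factorial points).
# f = 0 matches the small-k cases quoted in the text; larger k would use a
# nonzero f taken from the lookup table in the figure above.
def ccd_point_count(k: int, f: int = 0) -> int:
    return 1 + 2 * k + 2 ** (k - f)

for k in (2, 3, 4):
    print(k, "factors ->", ccd_point_count(k), "design points")
# 2 factors -> 9, 3 factors -> 15, 4 factors -> 25
```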

The following figure shows an example of a model with two input parameters. The midpoint is level 0, where all parameters take their mean values; the red star points are the level +𝛂 and level -𝛂 points, i.e., the maximum and minimum points of the two factors, corresponding to the term 2k = 2 × 2 = 4; and the blue circular points are the level +1 and level -1 points of the two factors, corresponding to the term 2^(k-f) = 2 × 2 = 4. The total number of cases for the two-factor model, as shown below, is 4 + 4 + 1 = 9.

[Figure: CCD design points for two input parameters]

In other words, in the Central Composite Design (CCD) model there is one central point in the middle of the input-parameter space; 2k points, called the axial points, lie on the axis of each input parameter at the +𝛂 and -𝛂 positions; and 2^(k-f) points, called the factorial points, lie on the diagonals of the input-parameter space at the +1 and -1 positions.

The following figure shows two examples of a central composite design (CCD): the left figure is the design-point space for a model with two input parameters, and the right figure is the design-point space for a model with three input factors or parameters. As the figure shows, each axis is divided into five sections; the number of design points is 9 for the two-factor model and 15 for the three-factor model.

[Figure: CCD design spaces for two and three input parameters]

For example, suppose there are three input parameters or variables, namely length, radius, and inlet velocity, and that the effect of their changes on the output parameter, the pressure drop, is examined using the CCD model. The following figure shows the table of design points for each input parameter.

[Figure: CCD design-point table for length, radius, and inlet velocity]

As shown in the figure above, the length parameter varies from 720 mm to 880 mm, the radius parameter from 90 mm to 110 mm, and the inlet flow velocity from 0.0009 m/s to 0.0011 m/s. The mean value is 800 mm for the length, 100 mm for the radius, and 0.001 m/s for the velocity. The maximum and minimum of each interval therefore correspond to the +𝛂 and -𝛂 levels, the central point of each interval to level 0, and the values between these middle levels and the extremes to the +1 and -1 levels.

The main advantage of the Central Composite Model (CCD) is:

  • The estimated (predicted) variance is the same for any two points at the same distance from the design center; that is, as shown in the figure, all sample points located at the same distance from the design center or midpoint have the same variance or deviation.

There are, however, two weak points in the Central Composite Design (CCD) model when setting up an optimal design:

  • First, the non-orthogonality of the regression terms (multicollinearity) can inflate the variance of the model coefficients.
  • Second, the position of the sample points in the design can be influenced by their position relative to the other input variables within a subset of the whole set of observed data.

To address these two problems of the design model, the following measures are taken:

  • To minimize the degree of non-orthogonality, a variance inflation factor (VIF) is used. To counter the inflation of the variance of the model coefficients caused by non-orthogonality, the VIF-optimality method can be applied, in which the maximum variance inflation factor is minimized. The minimum value the variance inflation factor can take is 1.
  • To minimize the influence of overly dominant sample points, a leverage value is assigned to each sample point, taken from the diagonal of a matrix. In the G-optimality method, the maximum leverage value of the sample points is minimized.

Therefore, in VIF-optimality the value of 𝛂 is selected so that the maximum variance inflation factor is minimal, and in G-optimality the value of 𝛂 is selected so that the maximum leverage value is minimal. The rotatable design is consequently a poor design in terms of the VIF and G criteria.

The design models will be described in the following.

Orthogonality

Orthogonality is the degree to which the main effects (i.e., the direct, independent effect of an input parameter on the output parameter) and the interactions (i.e., the simultaneous effects of two or more input parameters on the output parameter) can be estimated independently of each other.

For example, the following pattern is non-orthogonal, because for the two input parameters only their combined or interaction effect is measured. As is evident, the two solution runs use different values of the two input parameters, but the parameters always change together.

Run     Parameter 1    Parameter 2
Run 1   1              1
Run 2   -1             -1

The following pattern, by contrast, is orthogonal, because the independent effect of each input parameter (unrelated to the other input parameter) can be measured. For each of the two input parameters, a constant value of that parameter is combined with two different values of the other parameter in separate solution runs. Thus, by holding one parameter at a fixed value, changing the other parameter, and performing the solution, the direct, independent effect of that parameter can be measured (a small numerical check of both patterns follows the table below).

Run     Parameter 1    Parameter 2
Run 1   1              1
Run 2   1              -1
Run 3   -1             1
Run 4   -1             -1
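As a small numerical check (an illustrative sketch, not taken from the software), the dot product between the coded parameter columns of the two patterns above shows the difference: it vanishes only for the orthogonal design.

```python
# A minimal sketch: checking whether a coded (-1/+1) design is orthogonal by
# looking at the dot product between the two parameter columns.
import numpy as np

non_orthogonal = np.array([[ 1,  1],
                           [-1, -1]])          # runs 1-2

orthogonal = np.array([[ 1,  1],
                       [ 1, -1],
                       [-1,  1],
                       [-1, -1]])              # runs 1-4 (full factorial)

def column_dot(design):
    # A zero dot product between the two parameter columns means the
    # individual (main) effects can be separated from each other.
    return design[:, 0] @ design[:, 1]

print(column_dot(non_orthogonal))  # 2  -> columns move together, not orthogonal
print(column_dot(orthogonal))      # 0  -> columns are orthogonal
```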

Leverage

The leverage value indicates how strongly individual sample points can exert a disproportionate influence on the outcome of the solved process. For example, if a sample point has a very large value of one input parameter compared with another input parameter, so that the other parameter can effectively be ignored at that point, the estimated regression model is pulled toward the neighborhood of that large-valued point. A leverage (weighting) criterion is therefore used to limit such effects.
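One common way to quantify this is shown in the sketch below (an assumed illustration using the standard hat-matrix definition, not necessarily the exact quantity computed by Workbench): the leverage of each design point is a diagonal entry of H = X(XᵀX)⁻¹Xᵀ, and a point with a much larger input value than the others ends up with a leverage close to 1.

```python
# A minimal leverage sketch: diagonal of the hat matrix H = X (X^T X)^-1 X^T.
# A point whose leverage approaches 1 dominates the fitted regression near it.
import numpy as np

# Hypothetical coded design matrix with a constant column
X = np.array([[1.0,  -1.0, -1.0],
              [1.0,   1.0, -1.0],
              [1.0,  -1.0,  1.0],
              [1.0,   1.0,  1.0],
              [1.0,  10.0,  0.1]])   # one point with a very large value

H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)
print(leverage)   # the last point carries a leverage close to 1
```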

Having discussed the number of design points contained in the Central Composite Design (CCD) model, it is necessary to discuss how the value of 𝛂 is determined. In this design model, the value of 𝛂 depends on the type of central composite design selected. The central composite design model has several types:

  • Face-Centered
  • Rotatable
  • VIF-Optimality
  • G-Optimality
  • Auto Defined

The following figure shows the central composite design types in the ANSYS Workbench software.

[Figure: central composite design types in ANSYS Workbench]

Face-Centered

In the face-centered central composite model, the value of 𝛂 is taken as 1; that is, the +𝛂 and -𝛂 levels coincide with the +1 and -1 levels, respectively, and level 0 lies midway between the maximum and minimum values. In effect, this type uses three levels to divide each input parameter.

For example, suppose an input parameter has a minimum of 90 and a maximum of 110. The values 90 and 110 then indicate level -𝛂 and level +𝛂 (i.e., level -1 and level +1), respectively, and the mean value of this range, i.e., 100, indicates level 0.

level +1 level 0 level –1
110 100 90

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the Face-Centered method is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: Face-Centered design-point table for the radius parameter]

Rotatable

To calculate the value of 𝛂 in the rotatable design model, the following formula is used, in which k is the number of input parameters or factors. Thus, the value of 𝛂 is 1.414 for a two-factor model, 1.681 for a three-factor model, 2 for a four-factor model, and so on.

𝛂 = 2^(k/4)

For example, suppose an input parameter has a minimum of 90 and a maximum of 110. The values 90 and 110 then indicate level -𝛂 and level +𝛂, respectively, and the mean value of this range, i.e., 100, indicates level 0. Since the present model has three input parameters, 𝛂 = 2^(3/4) = 1.681. The values for level +1 and level -1 are therefore obtained as below, placing them a fraction 1/𝛂 of the half-range away from the center.

level +1.681 level +1 level 0 level -1 level –1.681
110 105.95 100 94.054 90

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the Rotatable method is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: Rotatable design-point table for the radius parameter]
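The level values in the rotatable example above can be reproduced with a short calculation; the sketch below (illustrative only) computes 𝛂 = 2^(k/4) and places the ±1 levels a fraction 1/𝛂 of the half-range away from the center.

```python
# A small sketch of the rotatable-CCD level calculation used in the example:
# alpha = 2**(k/4); the extreme values of the range correspond to +/- alpha.
def rotatable_levels(lo: float, hi: float, k: int):
    alpha = 2 ** (k / 4)
    center = 0.5 * (lo + hi)
    half = 0.5 * (hi - lo)          # half-range, mapped to coded +/- alpha
    step = half / alpha             # coded +/-1 expressed in real units
    return alpha, [lo, center - step, center, center + step, hi]

alpha, levels = rotatable_levels(90.0, 110.0, k=3)
print(alpha)    # 1.6817...
print(levels)   # [90.0, 94.05..., 100.0, 105.94..., 110.0]
```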

The difference between Face-Centered and Rotatable models:

  • The face-centered model has three levels for the design points; the placement pattern of the design points is not rotatable, and its advantage is that sample points are placed in all corners and on all faces of the design space.
  • The rotatable model has five levels for the design points; the placement pattern is rotatable. Its weakness is that no design points lie in the corners, and its advantage is that the predicted variance is equal for any two points of the design pattern located at the same distance from the central point.

VIF-Optimality

The Variance Inflation Factor (VIF) optimality model has five levels for each input parameter. In this model, the value of 𝛂 is calculated by minimizing the non-orthogonality measure known as the variance inflation factor. As mentioned earlier, this is a central composite method that completes the central composite design (CCD) model with respect to orthogonality.

VIF

As mentioned earlier, a regression equation for a model with several variables represents the relationship between the independent variables, or input parameters, and a dependent variable, or output parameter. Simple or multiple linear relationships are thus formed between one or more independent variables and a dependent one; but this estimated equation naturally differs from, or has errors with respect to, the actual values. The basis of linear regression is therefore to minimize the sum of the squared errors, and the regression equation with the smallest sum of squared errors is the best option for describing the relationship between the independent and dependent variables.

Changes in the dependent variable are expected to be predicted by the regression model. The share of the dependent variable's variation explained by the regression model is known as R-squared, also called the coefficient of determination. R² thus expresses the percentage of the change in the dependent variable that is explained by the independent variables.

Now, if there is a relationship among the independent parameters themselves, that is, if an independent parameter is itself a linear function of the other independent parameters, this is called multicollinearity. In this case the regression method does not give valid answers, because the estimated coefficients in the regression equation have a large variance and deviate strongly from the fitted data. To determine the amount of this deviation, or the validity of the linear regression results, an indicator called the variance inflation factor (VIF) can be used. This indicator expresses the degree of collinearity of the regression model; that is, it states how much the estimated coefficients of the linear regression deviate from, or are inflated relative to, the case in which no collinearity is present.

Only the independent variables are used to calculate this index. The coefficient of determination of an independent variable is obtained from the regression of that variable on the other independent variables, using the least-squares procedure. The variance inflation factor (VIF) of a variable is then the reciprocal of one minus its coefficient of determination with respect to the other input variables. The following equation gives the variance inflation factor of variable i, where R_i² is the coefficient of determination of the least-squares regression of variable i on the other variables (the j's).

VIF_i = 1 / (1 − R_i²)

It is therefore clear that the larger the number and degree of correlation of an independent variable with the other independent variables in the regression equation (i.e., the larger R_i² becomes), the larger the VIF of that variable, according to the above formula. In fact, as increasing correlation among the independent variables strengthens the collinearity phenomenon, the variance inflation grows as well.
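The sketch below (an illustrative NumPy implementation of the VIF definition above, with made-up input data) computes VIF_i for each input variable by regressing it on the others; the two nearly collinear columns receive large values, while the unrelated one stays near 1.

```python
# A minimal VIF sketch: VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from
# regressing input variable i on all the other input variables.
import numpy as np

def vif(X):
    """X: (n_points, n_inputs) matrix of input-variable values."""
    n, p = X.shape
    out = []
    for i in range(p):
        xi = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, xi, rcond=None)
        resid = xi - A @ beta
        r2 = 1.0 - resid.var() / xi.var()      # coefficient of determination
        out.append(1.0 / (1.0 - r2))
    return out

# Nearly collinear inputs (the second column is almost twice the first)
rng = np.random.default_rng(0)
x1 = rng.uniform(90, 110, 20)
x2 = 2 * x1 + rng.normal(0, 0.5, 20)
x3 = rng.uniform(0.0009, 0.0011, 20)
print(vif(np.column_stack([x1, x2, x3])))   # large VIF for x1 and x2, ~1 for x3
```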

For example, suppose an input parameter has a minimum of 90 and a maximum of 110. The values 90 and 110 then indicate level -𝛂 and level +𝛂, respectively, and the mean value of this range, i.e., 100, indicates level 0. Since the present model has three input parameters, the value of 𝛂 obtained from the variance-inflation-factor criterion is 1.23. The values for level +1 and level -1 are given below.

level +1.23 level +1 level 0 level -1 level –1.23
110 108.13 100 91.87 90

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the VIF-optimality method is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: VIF-optimality design-point table for the radius parameter]

G-optimality

The G-optimality model minimizes the expected prediction error, as well as the largest expected prediction variance over the target range. As mentioned earlier, this is a central composite method that completes the central composite design (CCD) model with respect to minimizing leverage.

For example, suppose an input parameter has a minimum of 90 and a maximum of 110. The values 90 and 110 then indicate level -𝛂 and level +𝛂, respectively, and the mean value of this range, i.e., 100, indicates level 0. Since the current model has three input parameters, the value of 𝛂 is 2.06. The values for level +1 and level -1 are given below.

level +2.06 level +1 level 0 level -1 level –2.06
110 104.85 100 95.146 90

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the G-optimality method is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: G-optimality design-point table for the radius parameter]

Auto-Defined

In auto-defined mode, the software automatically selects the most appropriate central composite design (CCD) type according to the number of variables or input parameters, usually choosing between the G-optimality and VIF-optimality models. It is recommended to use this automatic selection, but if the divided values of the input parameters do not show a good fit with the response surface, it is better to switch to the rotatable model.

The following figure shows the design-point space for a model with two input parameters, with the design points located according to the different patterns of the Central Composite Design (CCD) method: face-centered, rotatable, VIF-optimality, and G-optimality. Using this image, the distribution of design points in the two-parameter space can be compared.

[Figure: comparison of CCD design-point patterns (face-centered, rotatable, VIF-optimality, G-optimality)]

Box-Behnken Design (BBD)

The BBD model uses three levels to divide the range of each input parameter or factor. The three levels are -1, 0, +1; the +1 and -1 levels are equal to the maximum and minimum values of each input parameter, respectively, and level 0 is equal to the middle value of that parameter. Each input parameter is therefore divided into three levels between its minimum and maximum values, and the middle level is always the mean of the maximum and minimum values.

The number of design points in this model is obtained from the following equation, in which k is the number of input parameters or factors. In this formula, the number 1 represents the single case in which all factors take their mean value (level 0), and the term 2k(k-1) represents the cases in which a pair of parameters takes combinations of their maximum and minimum values (level +1 and level -1) while the remaining parameters are held at their mean values.

N = 1 + 2k(k − 1)

The following figure shows the design-point space of the BBD model for the case with three input parameters. As can be seen, if we consider the two horizontal axes and one vertical axis as the ranges of the design points, the mean value of each input parameter is combined with the maximum and minimum values of the other input parameters.

[Figure: BBD design-point space]

For example, suppose there are three input parameters or variables, namely length, radius, and inlet velocity, and that the effect of their changes on the output parameter, the pressure drop, is examined using the BBD model. The following figure shows the table of design points for each input parameter.

[Figure: BBD design-point table for length, radius, and inlet velocity]

The following figure compares the design spaces of three different design methods: the face-centered central composite design, the rotatable central composite design, and the Box-Behnken design (BBD). In all three methods, three input parameters or variables are considered, so all three design-point spaces have three axes spanning the ranges of the input parameters.

[Figure: comparison of face-centered CCD, rotatable CCD, and BBD design spaces]

Advantages of the BBD Method over the CCD Method:

  • It requires fewer design points and therefore less computation time.
  • Because no points are placed at the corners of the design space, extreme combinations of the input parameters are avoided.

Disadvantages of BBD Compared to CCD Method:

  • It cannot predict behavior at the corners of the design space.
  • Only three levels are considered for each input parameter, which may reduce the prediction accuracy.

Optimal Space-Filling Design

The optimal space-filling design model divides each input parameter into several values. The design is characterized by the number of sample points and by the sample type used to place them. The number of input parameters or variables in each model also determines the number of divided points.

This design model has several features:

  • The model distributes the design points evenly within the design space.
  • Its goal is to gain the maximum insight into the design using the fewest possible design points.
  • The placement of design points within the design space is not tied to a fixed pattern; unlike the previous models, the design points do not have to lie at the corners or at the midpoints.

The following figure shows a design space containing sample design points generated by the optimal space-filling design method. In this design space, two input parameters are considered, each divided into five levels.

[Figure: optimal space-filling design points for two input parameters]

 

The sample-type options of the optimal space-filling design model, which set how the sample points are distributed, include the following:

  • CCD Samples
  • Linear Model
  • Pure Quadratic Model
  • Full Quadratic Model
  • Auto-Defined

The following figure shows the sample types available for the optimal space-filling design model.

[Figure: sample types for the optimal space-filling design]

When one of the above sample types is selected, each input variable is divided into a certain number of sample points. Consider the range of each input parameter: the difference between its upper and lower limits is divided by the number of sample points, and the result is the spacing between neighbouring sample points. If the number of sample points is odd, the central sample point (the mean of the maximum and minimum) is kept, and this spacing is added repeatedly above and below the central value until the maximum and minimum of the parameter are reached. If the number of sample points is even, the central sample point is omitted; half of the spacing is added on either side of the mean value to create the two middle points, and the full spacing is then added repeatedly above and below these two points until the maximum and minimum of the parameter are reached.
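The spacing rule just described can be written down directly; the sketch below is one interpretation of that rule in Python (illustrative only, not the Workbench source) and reproduces both the 15-point and the 10-point examples worked out next.

```python
# A sketch of the spacing rule described above: split an input parameter's
# range into n sample values, with or without the mid-point depending on
# whether n is odd or even.
def sample_levels(lo: float, hi: float, n: int):
    d = (hi - lo) / n              # spacing between neighbouring sample points
    center = 0.5 * (lo + hi)
    if n % 2 == 1:                 # odd: keep the centre value
        half = (n - 1) // 2
        return [center + i * d for i in range(-half, half + 1)]
    else:                          # even: two points straddling the centre
        half = n // 2
        ups = [center + d / 2 + i * d for i in range(half)]
        downs = [center - d / 2 - i * d for i in range(half)]
        return sorted(downs) + ups

print(sample_levels(90, 110, 15))  # CCD sampling: 15 values, spacing 1.33
print(sample_levels(90, 110, 10))  # full quadratic: 91, 93, ..., 107, 109
```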

For example, suppose an input parameter has a minimum of 90 and a maximum of 110, that three input variables are defined in the software, and that the optimal space-filling design method is used. The distribution of sample points is shown for two sample types: CCD samples and the full quadratic model.

If the central composite design (CCD) sample type is used, the number of divided points for each input variable is 15. We therefore divide the interval between 90 and 110, which equals 20, by 15, giving approximately 1.33; since the number of points is odd, the midpoint of the interval, 100, is kept, and this spacing of 1.33 is added alternately on both sides of 100.

Sample points (spacing 1.33 about the center 100, within 90 to 110):
90.66, 92, 93.33, 94.66, 96, 97.33, 98.66, 100, 101.33, 102.67, 104, 105.33, 106.67, 108, 109.33

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the optimal space-filling design model with the CCD sample type is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: optimal space-filling design-point table with CCD sampling for the radius parameter]

If the full quadratic model is used, the number of divided points for each input variable is 10. We therefore divide the interval between 90 and 110, which equals 20, by 10, giving 2; since the number of points is even, we take half of this spacing, i.e., 1, and add it to both sides of the midpoint 100 to produce the two middle points 101 and 99, and then add the full spacing of 2 alternately beyond 101 and 99.

Sample points (spacing 2 about the center 100, within 90 to 110):
91, 93, 95, 97, 99, 101, 103, 105, 107, 109

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the optimal space-filling design model with the full quadratic sample type is used. The following figure shows the table created in the ANSYS Workbench environment.

[Figure: optimal space-filling design-point table with full quadratic sampling for the radius parameter]

Custom Design Model

The special feature of this design model is that it allows the user to design the DOE exactly as desired. When this model is activated, a table of input-parameter values is created in the DOE environment in which the user can enter the desired values manually. If another design model was selected before switching to this one, the previously created table remains on the page, but its values can now be changed manually. Another feature of this design model is that external files in CSV format can be imported into the DOE table; in fact, importing external data is only enabled when a Custom design model is selected.

The following figure shows a custom design model with sampling (Custom + Sampling). There are two input parameters in the design space.

[Figure: Custom + Sampling design space with two input parameters]

It should be noted that there are two methods of defining design sample points manually: the Custom model and the Custom + Sampling model. The difference between them is that in the Custom + Sampling model the total number of samples can also be set manually.

Sparse Grid Initialization Model

The sparse grid initialization model uses three levels to divide the range of each input parameter. The three levels are -1, 0, +1; the +1 and -1 levels are equal to the maximum and minimum values of each input factor or parameter, respectively, and level 0 is equal to the middle value of that parameter. Each input factor is therefore divided into three levels between its minimum and maximum values.

The number of design points in this model is obtained from the equation N = 1 + 2k, in which k is the number of input parameters or factors. The number 1 represents the single case in which all factors take their mean value (level 0), and the term 2k represents the cases in which one factor takes its maximum or minimum value (level +1 or level -1) while the remaining factors are held at their mean values.

For example, suppose an input parameter has a minimum of 90 and a maximum of 110. The values 90 and 110 then indicate level -1 and level +1, respectively, and the mean value of this range, i.e., 100, indicates level 0.

level +1 level 0 level -1
110 100 90

 

Now suppose, following the above pattern, that the radius parameter defined in the DOE table varies from 90 mm to 110 mm and the Sparse Grid Initialization design model is used. The following figure shows the table created in the ANSYS Workbench environment.

This DOE model should be used when the goal is to create a sparse grid response surface, which is a response surface driven by accuracy requirements. This type of response surface can automatically refine the matrix of design points where the gradients of the desired output parameters are higher, in order to increase the accuracy of the response surface.

Latin Hypercube Sampling Design

Latin Hypercube Sampling design is an advanced form of the Monte Carlo sampling method that prevents the sample points from clustering. The points are generated randomly within the square grid cells of the design space, but no two points share the same value of any input parameter.
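A minimal Latin hypercube generator along these lines might look like the following sketch (illustrative only, with hypothetical bounds; Workbench's own implementation is not exposed): the range of each parameter is split into n bins, one value is drawn per bin, and the bin order is permuted independently for every parameter.

```python
# A minimal Latin-hypercube sketch: one sample per bin per parameter, with the
# bins shuffled independently, so no two points share a bin of any parameter.
import numpy as np

def latin_hypercube(n_samples: int, bounds, rng=None):
    rng = rng or np.random.default_rng()
    dim = len(bounds)
    # One random value inside each of the n bins of the unit interval
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    # Permute the bin order independently for every parameter
    for j in range(dim):
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical bounds for length, radius, and inlet velocity
points = latin_hypercube(5, [(720, 880), (90, 110), (0.0009, 0.0011)])
print(points)
```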

The following figure shows the design-point space for the Latin hypercube sampling design model.

It should be noted that the Latin hypercube sampling design model selects the sample types in the same way as the optimal space-filling design model. The sample-type options are the same as before, and the spacing between sample points in the design space is computed by the same procedure. The only difference between the two methods is how the sample points of each input parameter are combined with the sample points of the other input parameters: the number and values of the sample points per input parameter are the same in both design models, but the arrangement of those points across the defined input parameters differs.

Display Graphs of Design Points

In the Charts section, graphs of the design points defined for the various input parameters can be drawn. These graphs display the values defined for each input parameter in two forms: Parameters Parallel and Design Points vs Parameter.

The following figure shows an example of a Design Points vs Parameter chart. The right and left y-axes represent the radius and length, respectively, as input parameters, and the horizontal x-axis represents the index of the design points produced by the selected design method.

The following figure shows an example of a Parameters Parallel chart, which combines the design points of all parameters. The horizontal x-axis lists the three defined input parameters, and the vertical y-axis represents the range of each of the three input parameters, from its minimum to its maximum value.

Response Surface

Response surfaces are functions of various forms that express each desired output parameter or variable in terms of the input parameters or variables. In other words, a response surface provides approximate values of the desired output variable or parameter at any point of the analyzed design space without performing a complete solution at that point.

As mentioned earlier, a number of input parameters are defined first, and in the DOE each of these input parameters is divided into several sample design points, so that a set of design points is created from the input parameters. The response surface is then built from the results of solving each of those sample design points, using different methods to define the most appropriate function for estimating the value of the desired output parameter from the values of one or more input parameters.

Response surface generation in the ANSYS Workbench optimization environment offers six different types:

  • Genetic Aggregation
  • Standard Response Surface – Full 2nd Order Polynomials
  • Kriging
  • Non-Parametric Regression
  • Neural Network
  • Sparse Grid

The following figure shows the response surface types in the ANSYS Workbench software.

Genetic Aggregation

The genetic aggregation model runs an iterative genetic algorithm to find the best type of response surface for each output variable or parameter. This method selects the best response surfaces and combines them to produce an aggregate of several response surfaces. The model therefore yields the highest-quality response surface, with different settings for each desired output parameter or variable.

The main goal of this model is to satisfy the following three criteria in order to achieve the best response surface:

  • Accuracy (high compliance with DOE points).
  • Reliability (cross-validation).
  • Smoothness (similarity with the linear model).

Genetic Algorithm

A genetic algorithm is a particular optimization technique; it searches for the best values of the input parameters or variables to achieve the best output parameter. The model follows an iterative optimization algorithm whose basis is borrowed from biology.

For example, suppose you decide to turn the people of a city into good people. One way is to identify the good people of the city, separate them from the bad ones, and then have them expand their generation by having children. By doing so, you gradually change the genetics of the population and continue the process until the entire population of the city consists of good people. A cycle can be defined based on this process: first consider the initial population of the city (initialization); then define a function that measures how good or bad each individual in the society is (fitness assignment); identify the good people according to this criterion and select them as parents who will have children (crossover); the children that are born may, however, undergo changes in their genetics and move away from the genetics of their parents (mutation); and finally a conditional stage is reached that measures whether the desired genetics has been achieved and decides whether the cycle ends or continues (stop criteria). If the desired criterion is met (true), the cycle ends; if it is not met (false), the goodness of the new generation is measured again and the cycle continues.
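The same cycle can be written as a compact skeleton; the sketch below (illustrative only, with a toy one-dimensional fitness function) runs initialization, fitness assignment, selection, crossover, mutation, and a stop criterion in a loop.

```python
# A compact genetic-algorithm skeleton following the cycle described above.
# It maximises a toy fitness function; everything here is illustrative only.
import random

def fitness(x):                       # criterion for "how good" an individual is
    return -(x - 3.0) ** 2            # best possible individual is x = 3

def run_ga(pop_size=20, generations=50, mutation_rate=0.2):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]   # initialization
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)                    # fitness assignment
        if fitness(population[0]) > -1e-6:                            # stop criterion
            break
        parents = population[: pop_size // 2]                         # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                                      # crossover
            if random.random() < mutation_rate:                        # mutation
                child += random.gauss(0, 1.0)
            children.append(child)
        population = parents + children                                # new population
    return max(population, key=fitness)

print(run_ga())   # converges towards 3
```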

The following figure shows the overall structure of a genetic algorithm in general.

It can now be said that the genetic aggregation algorithm uses the general principles of the genetic algorithm described above to identify the best response surface. The different candidate response surfaces can be thought of as the population of a city, and the criterion that measures the quality or optimality of the response surfaces corresponds to the criterion that measures the genes of the city's people.

The following figure shows the genetic aggregation algorithm. The algorithm consists of the following steps, one of which is a conditional stage; each step is described below.

Step 1: Initial population: several different response surfaces are generated, each with its own settings.

Step 2: Evaluation: the generated response surfaces are assessed using accuracy criteria; by defining a tolerance, a measure for evaluating the generated response surfaces is established.

Step 3: Selection: the quality of each response surface is determined using a cross-validation process and a smoothness measure. The best response surfaces are then selected for reproduction in the next step.

Step 4: Conditional stage: after each complete iteration, if the selected response surfaces satisfy one of the two quality requirements stated in the third step, or if the number of iterations reaches its maximum, the algorithm ends and the final results are presented as the optimum; otherwise, the surfaces selected in the third step are passed on as the best candidates and are reproduced.

Step 5: Reproduction: the best surfaces selected in the previous step, together with their settings, are chosen as parent genes to be crossed over with each other and altered by mutation.

Step 6: Cross-over: if the selected parent response surfaces are of the same type, their settings are mixed; if they are of different types, a linear combination of the two parent response surfaces is generated.

Step 7: Mutation: random changes are made to the settings of each response surface, just as a child born to its parents acquires genetic mutations over time.

Step 8: New population: in this final stage, the new response surfaces are introduced as the new generation of the city's population, which returns to the second step (quality evaluation), and the cycle continues.

[Figure: genetic aggregation algorithm flowchart]

Cross-Validation

If the number of input data for the model is too large, the complexity of the model increases and the calculations can no longer be performed easily. In such cases, cross-validation is one way to determine the optimal number of input data.

In general, there are two approaches to evaluating the efficiency of a model. In the first, the evaluation is based on the assumptions the model must satisfy; in the second, it is based on the model's ability to predict new values. The first type relies on the data that were observed and used to construct the model, for example a regression model built from existing laboratory data of the input and output parameters using the least-squares principle. Such a model, however, is only validated for the observed data on which it was built, and its efficiency cannot be judged for new data that were not seen during modeling. In cross-validation, by contrast, the evaluation relies on data that were obtained and observed but not used when the model was constructed: the purpose is to use this held-out data to measure how well the model predicts new data. To fully evaluate the efficiency of a model and its optimization, the model error must therefore be estimated on the data that were left out during cross-validation; this estimate is called the out-of-sample error. The data or design points used to estimate the output function or response surface are called learning points, and the data or design points used in cross-validation to test the estimated function or response surface are called checking points.

Cross-validation thus serves as a means of calculating the out-of-sample error. As the amount of input data increases, the error rate decreases and the model gains validity; but if the number of input data exceeds a certain value, the error estimate rises again and the validity of the model decreases. The main cross-validation methods are the leave-one-out method and the K-fold method.

Leave-One-Out

In this method, only one of the n existing design points is excluded from the response surface estimation, and the response surface is obtained from the remaining n-1 points. The single excluded design point is then used to test the quality of the response surface, so that the error of the response surface at this design point is calculated. This is repeated for every design point in the DOE.
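A minimal leave-one-out loop might look like the following sketch (illustrative only; a plain linear fit stands in for the response surface, and the data are made up): each design point is held out in turn, the fit is rebuilt on the remaining points, and the out-of-sample error at the held-out point is recorded.

```python
# A minimal leave-one-out sketch: re-fit on n-1 learning points, record the
# error at the single held-out checking point, repeat for every point.
import numpy as np

def loo_errors(X, y, fit, predict):
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])                       # n-1 learning points
        errors.append(y[i] - predict(model, X[i:i+1])[0])   # checking point
    return np.array(errors)

# A plain linear fit used here as the "response surface"
fit = lambda X, y: np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)[0]
predict = lambda b, X: np.column_stack([np.ones(len(X)), X]) @ b

rng = np.random.default_rng(1)
X = rng.uniform(90, 110, (12, 2))
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.1, 12)
print(loo_errors(X, y, fit, predict))        # out-of-sample error at each point
```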

The following figure shows an example of the leave-one-out cross-validation method. As can be seen, at each stage of the validation process one design point is held out to check the response surface.

[Figure: leave-one-out cross-validation scheme]

K-Fold

In this method, the set of design points is divided into k layers of equal size. It works in the same way as leave-one-out, except that several design points are left out at once. In the DOE environment the number k is taken as 10, so the cross-validation calculation finishes after ten repetitions.

The following figure shows an example of a K-Fold cross-validation method used for k = 10 layers.

[Figure: K-fold cross-validation with k = 10 layers]

Auto Refinement

Auto refinement automatically adds a few design points to the model so that the accuracy of the corresponding response surface reaches the range desired by the user. It is used to increase the accuracy of the response surfaces through an iterative process: at each iteration, one or more design points are automatically added to the response surface estimation.

To use it, the refinement option must be activated in the table of output-parameter response surfaces, based on tolerance, and a tolerance value must then be set for each output parameter as the criterion or limit for the process. This tolerance indicates the maximum acceptable variation of the output parameter: at each stage of the response surface refinement, the maximum value of that output parameter or variable is calculated and compared with the maxima obtained in the other refinement stages, so that the largest difference between these values is determined. Refinement points can also be defined manually in the refinement-points table.

The refinement section has several adjustment options. The output variable combinations option sets when a new refinement point is applied: the maximum output setting adds one refinement point per iteration, while the all outputs setting adds one refinement point per iteration for each output that has not yet converged. The crowding distance separation percentage option defines the minimum allowable distance between refinement points. The number of refinement points option indicates the number of design points created while the desired response surface is being formed, and the maximum number of refinement points option defines the maximum number of points that can be generated during response surface construction.

The following figure shows the settings for refinement points in the response surface section.

The following figure shows the iterative process of creating refinement points. This process continues until the desired tolerance is reached. The horizontal x-axis indicates the number of refinement points, and the y-axis indicates the ratio between the maximum predicted error and the tolerance of each output parameter. Convergence occurs when all output parameters fall within the convergence threshold.

[Figure: convergence of the refinement process]

Standard Full 2nd Order Polynomial

The standard full 2nd order polynomial model is the usual starting point when there are a large number of design points. It is based on a second-order formulation: each output parameter or variable is a second-order function of the input parameters or variables. This method gives satisfactory results when the output parameters or variables vary smoothly.

In this model, the output-parameter function is written in terms of the input variables as follows, where f is a full second-order polynomial:

Y = f(X1, …, Xn) = β0 + Σi βi Xi + Σi βii Xi² + Σi<j βij Xi Xj

Kriging Model

The kriging model is a multidimensional interpolation model that combines a polynomial part, similar to the standard response surface (representing a global model of the design space), with a local deviation term, so that the model passes through the design points.

In this model, the output-parameter function is written in terms of the input variables as follows, where f is a second-order polynomial function (representing the global behavior of the model) and z is a deviation term (representing the local behavior of the model):

Y = f(X) + z(X)

Because the kriging response surface passes through all the design points, the goodness-of-fit criteria will always look good. The kriging model gives better results than the standard response surface model when the output parameters vary strongly and nonlinearly. A weakness of this model is that the response surface may oscillate.

The following figure shows the behavioral pattern of a function in the kriging model. As can be seen, the behavior of the estimated function consists of a global function (f) combined with a local function (z).

In the kriging model, refinement can also be applied to the design points. The model can assess the accuracy of the response surfaces and determine which points are needed to increase accuracy. The refinement can be carried out in two ways: manual and auto.

The refinement section of this model has the same adjustment options as the previous model: maximum number of refinement points, crowding distance separation percentage, output variable combinations, and number of refinement points behave as before. The maximum predicted relative error option additionally defines the maximum percentage of relative error predicted during the process.

Verification points can also be activated to assess the quality of the response surface; it is recommended to activate them when creating a response surface with the kriging model. These points work by comparing the estimated values of the desired output parameter with the actual observed values of the same output parameter at different positions in the design-point space.

Verification points can be defined either automatically or manually. If they are activated, they are also added to the goodness-of-fit table.

Non-Parametric Regression

The non-parametric regression model belongs to the general class of support-vector methods among the RSM techniques. The main idea of this model is that a tolerance epsilon (Ԑ) creates a narrow envelope around the output response surface; this envelope must be constructed so that it contains all, or most, of the sample design points.

In fact, non-parametric regression estimates the regression function directly; in other words, it can examine the effect of one or more independent variables on a dependent variable without first assuming a specific functional form for the relationship between the independent and dependent variables.

The following figure shows the behavioral pattern of the function in the non-parametric regression model. As can be seen, the behavior of the estimated function consists of the main response surface function with a tolerance margin on both sides.

In general, the characteristics of a non-parametric regression model are:

  • Suitable for nonlinear responses.
  • Used when the results are noisy; in that case the model can approximate the design points while allowing a tolerance of Ԑ.
  • It usually has a low computational speed.
  • It is recommended only when the goodness-of-fit of the full 2nd order response surface model does not reach the desired level.
  • In some special cases, such as low-order polynomials, it may oscillate between the DOE design points.

Neural Network

The Neural Network model is a mathematical technique modeled on the natural neural networks of the human brain. In this model, each input parameter is connected by weighted links to hidden functions, and the weights determine whether the hidden functions are active or inactive. The hidden functions act as threshold functions that connect the set of input parameters to the desired output function; each time the process is repeated, the weights are adjusted to minimize the error between the response surfaces or output functions and the design points or inputs.
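For illustration, the sketch below (a toy one-hidden-layer network in NumPy, not the Workbench implementation) shows the structure just described: inputs connected to hidden "threshold" functions through weights, with the weights updated iteratively to reduce the error against the design-point data.

```python
# A minimal one-hidden-layer network sketch trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 2))                 # coded input parameters
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2    # output parameter to learn

n_hidden, lr = 8, 0.05
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, n_hidden);      b2 = 0.0

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                    # hidden "threshold" functions
    pred = h @ W2 + b2
    err = pred - y
    # back-propagate the error and update the weights
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y);  gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(np.mean((pred - y) ** 2))                 # training error after fitting
```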

The following figure shows the algorithm of the neural network model, which consists of input parameters, hidden functions, and output functions.

In general, the features of the Neural Network model are:

  • Successful for highly nonlinear responses.
  • Offers only limited control over the algorithm.
  • Seventy percent of the design points are used as learning (training) points and thirty percent as checking (validation) points (a sketch of this split follows this list).
  • Used when the number of input parameters, as well as the number of design points for each parameter, is high.
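A minimal sketch of this idea, assuming scikit-learn: a small network is fitted to synthetic design points with the 70/30 learning/checking split mentioned above. The data, scaling step, and network size are illustrative assumptions, not the configuration used by DesignXplorer.

```python
# Minimal sketch of a neural-network response surface with a 70/30 split of
# the design points into learning and checking sets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# columns: length (mm), radius (mm), inlet velocity (m/s) -- illustrative DOE
X = rng.uniform([720.0, 90.0, 0.0009], [880.0, 110.0, 0.0011], size=(60, 3))
y = 300.0 + 0.05 * X[:, 0] - 0.2 * X[:, 1] + 2.0e4 * X[:, 2]   # placeholder output

X_learn, X_check, y_learn, y_check = train_test_split(X, y, train_size=0.7, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
net.fit(X_learn, y_learn)                          # adjust weights on the learning points
print("checking-set R2:", net.score(X_check, y_check))
```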

Sparse Grid

The sparse grid is a kind of adaptive response surface; that is, it can continually correct itself. This model usually requires more design points than the other response surface methods and is therefore used when each design-point simulation is fast. It can be used only when the Sparse Grid Initialization method has been used to generate the design points in the DOE. Its distinguishing feature is that it refines the design points only in the directions where refinement is required; for this reason, fewer design points are needed to reach a response surface of the same quality. This model is also suitable for cases involving multiple discontinuities.

The following figure shows the behavioral pattern of a Sparse Grid model and the hierarchical interpolation within it. The first cell is located at the top left and contains a single design point. By following the cells horizontally and vertically, we can see how the cell and its design points change. A peak symbol indicates the interpolation of a design point from the two design points on either side of the cell, and a trough symbol indicates the division of a cell into several new cells at a design point. Following the changes in the horizontal direction: we start with a cell that has a design point at its center; that cell is then split at the design point into two half-cells, whose new design points lie on their boundaries; interpolation between the cell boundaries then creates new design points at the middle of each half-cell, so two design points appear in the middle of the two cells. Each new cell is again split at its design point into two further half-cells in the horizontal direction, and the same procedure continues. According to the figure, all the steps described in the horizontal direction are carried out with the same procedure in the vertical direction.
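The refinement logic can be sketched in one dimension as follows: a cell is split at its design point only where linear interpolation from the neighboring points is still too inaccurate. This is an illustrative toy, with an assumed `response` function standing in for the simulation; it is not the DesignXplorer algorithm.

```python
# Minimal sketch of hierarchical, adaptive refinement in one dimension:
# a cell is subdivided only where the interpolation error (hierarchical
# surplus) is still larger than a tolerance.
import numpy as np

def response(x):
    # hypothetical expensive simulation reduced to a cheap placeholder
    return np.sin(6.0 * x) + 0.3 * x

def refine(lo, hi, f_lo, f_hi, tol, points):
    mid = 0.5 * (lo + hi)
    f_mid = response(mid)
    surplus = abs(f_mid - 0.5 * (f_lo + f_hi))   # new point vs. linear interpolation
    points.append((mid, f_mid))
    if surplus > tol and hi - lo > 1e-3:         # split this cell further only if needed
        refine(lo, mid, f_lo, f_mid, tol, points)
        refine(mid, hi, f_mid, f_hi, tol, points)

points = [(0.0, response(0.0)), (1.0, response(1.0))]
refine(0.0, 1.0, points[0][1], points[1][1], tol=0.01, points=points)
print(len(points), "design points placed adaptively")
```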

Goodness of fit

After the response surface creation process is completed, ANSYS Workbench activates a table that provides the quality criteria for the goodness of fit of the response surfaces. These criteria examine the quality of the response surfaces generated from the solved design points. The goodness-of-fit measurement criteria are:

  • coefficient of determination or R2 measure
  • maximum relative residual
  • root mean square error
  • relative root mean square error
  • relative maximum absolute error
  • relative average absolute error

Each goodness-of-fit criterion has a Best Value. If the value of a criterion obtained for an output parameter or variable equals the best value of that criterion, the response surface for that parameter is estimated as well as possible and has the least error. When the best possible response surface is obtained for an output parameter according to a given criterion, a golden three-star symbol is displayed for that parameter. The further the computed value of a criterion is from its best value, the fewer golden stars are shown. If the computed value of a criterion for an output parameter is too far from the best value, the response surface for that parameter is estimated very poorly, and a red cross is displayed instead.

The following table shows the best value of each of the quality metrics:

quality metric | best value
coefficient of determination | 1
maximum relative residual | 0 %
root mean square error | 0
relative root mean square error | 0 %
relative maximum absolute error | 0 %
relative average absolute error | 0 %

 

Definition of coefficient of determination:

A measure of how much of the variability in the output data is captured by the response surface. The closer this criterion is to one, the higher the quality of the response surface.
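The formula, in its conventional form (the exact expression used by DesignXplorer may differ in detail), with $y_i$ the observed value of the output at the $i$-th of the $N$ design points, $\hat{y}_i$ the value predicted by the response surface, and $\bar{y}$ the mean of the observed values, is:

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}$$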

Definition of maximum relative residual:

A similar quality criterion for the response surface, expressed through an alternative mathematical form: the largest residual relative to the actual output values. The closer this criterion is to zero, the higher the quality of the response surface.
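With the same notation, a standard form of the formula (assumed here, not quoted from the software documentation) is:

$$\text{Maximum relative residual} = \max_{i}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \times 100\,\%$$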

Definition of root mean square error:

A criterion equal to the square root of the mean of the squared residuals at the DOE design points; it applies to regression methods. The closer this criterion is to zero, the higher the quality of the response surface.
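With the same notation, a standard form of the formula is:

$$\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$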

Definition of relative root mean square error:

A criterion equal to the square root of the mean of the squared residuals scaled by the actual output values at the DOE design points; it applies to regression methods. The closer this criterion is to zero, the higher the quality of the response surface.
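With the same notation, a standard form of the formula is:

$$\text{Relative RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{y_i - \hat{y}_i}{y_i}\right)^2} \times 100\,\%$$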

Definition of relative maximum absolute error:

A criterion equal to the maximum absolute residual relative to the standard deviation of the actual output data. The closer this criterion is to zero, the higher the quality of the response surface.
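With the same notation, and with $\sigma_y$ the standard deviation of the observed output values, a standard form (assumed, not quoted from the software documentation) is:

$$\text{Relative maximum absolute error} = \frac{\max_{i}\left|y_i - \hat{y}_i\right|}{\sigma_y} \times 100\,\%$$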

Definition of relative average absolute error:

A criterion equal to the average absolute residual relative to the standard deviation of the actual output data. This criterion is particularly useful when the number of design points is less than 30. The closer this criterion is to zero, the higher the quality of the response surface.
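With the same notation, a standard form (assumed, not quoted from the software documentation) is:

$$\text{Relative average absolute error} = \frac{\frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|}{\sigma_y} \times 100\,\%$$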

For example, suppose there are two output parameters or variables, the temperature-outlet and the pressure-drop, for which the effects of changes in the input parameters are captured by a response surface. This example uses the Kriging method to generate the response surface. The following figure shows the table of quality criteria for each output parameter.

Now we can evaluate the goodness of fit of the response surfaces using the dedicated goodness-of-fit chart. This chart indicates the agreement between the values obtained from the solution process at the DOE design points and the corresponding points estimated by the response surface method (RSM). On this chart, the horizontal axis represents the observed values at the design points and the vertical axis represents the values predicted from the response surface. If a 45-degree line is taken as the reference for comparing the two axes, then the more densely the output-parameter points cluster on this line and the less they scatter around it, the higher the quality of the response surfaces.

Suppose, for example, that the changes of two output parameters, the temperature-outlet and the mass flow-outlet, are investigated in terms of changes in three input parameters using the response surface method (RSM). The following figure shows the goodness-of-fit chart comparing the outlet temperature and outlet mass flow values obtained from the DOE with the values estimated from the response surface. As can be seen from the figure, the observed DOE values and the values estimated from the response surface agree very well, so the quality of the response surface is high.
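A minimal sketch of such a chart, assuming matplotlib, is shown below; the observed and predicted values are illustrative placeholders, not results from an actual simulation.

```python
# Minimal sketch of the goodness-of-fit (predicted vs. observed) chart with a
# 45-degree reference line.
import numpy as np
import matplotlib.pyplot as plt

observed = np.array([331.2, 334.8, 338.1, 341.6, 344.9])     # from the DOE solutions (illustrative)
predicted = np.array([331.5, 334.5, 338.3, 341.2, 345.1])    # from the response surface (illustrative)

fig, ax = plt.subplots()
ax.scatter(observed, predicted, label="temperature-outlet")
lims = [observed.min(), observed.max()]
ax.plot(lims, lims, "k--", label="45-degree line")            # perfect-agreement reference
ax.set_xlabel("observed from design points")
ax.set_ylabel("predicted from response surface")
ax.legend()
plt.show()
```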

response surface results

After the required response surfaces have been formed with the appropriate model, the results of the response surfaces are presented in several formats:

  • response points
  • min-max search
  • charts:
  1. standard 2D, 2D slices, 3D
  2. local sensitivity bars/pies
  3. local sensitivity curves
  4. spider

The following figure shows the sections that provide the final results of the response surfaces created in the ANSYS Workbench software.

Response Points

After the response surface production process is completed, the response surface is estimated from the design points in the DOE. The response surface itself consists of many points, and each point on it corresponds to a specific value of every input parameter. Therefore, by specifying a desired value for each input parameter or variable within its range of variation, a new point (in addition to the design points generated in the DOE) can be created, and the value of the desired output parameter at this new point is obtained from the response surface.

Suppose, for example, that the changes of two output parameters, the temperature-outlet and the mass flux-outlet, are investigated in terms of the variation of three input parameters, the geometry length, the geometry radius, and the inlet velocity, using the response surface method (RSM). The geometry length varies from 720 mm to 880 mm, the geometry radius from 90 mm to 110 mm, and the flow velocity from 0.0009 m/s to 0.0011 m/s. The following figure shows the response point section, in which any desired value of the radius, length, and inlet velocity can be typed in to generate a new design point; the outlet temperature and outlet mass flow rate at this point are then obtained.
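A minimal sketch of this workflow, assuming SciPy: a surrogate is fitted to a handful of illustrative design points and then queried at a new response point. RBF interpolation stands in here for whichever RSM model (for example, Kriging) is actually used, and all numerical values are placeholders.

```python
# Minimal sketch of evaluating a new "response point": a surrogate fitted to
# the DOE design points is queried at user-chosen input values.
import numpy as np
from scipy.interpolate import RBFInterpolator

# design points: [length (mm), radius (mm), inlet velocity (m/s)] -- illustrative
X = np.array([[720.0,  90.0, 0.0009],
              [880.0,  90.0, 0.0011],
              [720.0, 110.0, 0.0011],
              [880.0, 110.0, 0.0009],
              [800.0, 100.0, 0.0010]])
T_out = np.array([330.4, 336.9, 332.1, 335.2, 333.6])    # placeholder outlet temperatures

surrogate = RBFInterpolator(X, T_out)
new_point = np.array([[840.0, 95.0, 0.00105]])            # any values inside the parameter ranges
print("estimated temperature-outlet:", surrogate(new_point)[0])
```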

Min-Max Search

Two optimizations are performed automatically for each output parameter: one searches for its maximum value and one for its minimum value. These optimizations are carried out on the response surfaces built from the design points, so their results are as reliable as the response surfaces themselves. The process usually takes a short time, but it can take a long time if discrete values, manufacturable values, or manually defined values are specified for the input parameters, or if the number of output parameters is large.

In general, this section provides the minimum and maximum of each defined output parameter. Two tables appear, one for the minimum value and one for the maximum value of each output parameter; each table shows for which values of the input parameters the desired output parameter reaches its minimum or maximum. In addition, the maximum and minimum values obtained for each output parameter are displayed large and colored in the cell whose row and column correspond to that output parameter.

Suppose, for example, that the changes of two output parameters, the temperature-outlet and the mass flux-outlet, are investigated in terms of three input parameters, the geometry length, the geometry radius, and the velocity-inlet, using the response surface method (RSM). The following figure shows the table of the maximum and minimum values obtained for the outlet temperature and the outlet mass flow.
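A minimal sketch of such a min-max search, assuming SciPy: the fitted response surface is represented by a hypothetical `surrogate` function, and the maximum is found by minimizing its negative within the parameter bounds.

```python
# Minimal sketch of a min-max search on a fitted response surface.
import numpy as np
from scipy.optimize import minimize

def surrogate(x):
    length, radius, velocity = x
    return 300.0 + 0.05 * length - 0.2 * radius + 2.0e4 * velocity   # illustrative model

bounds = [(720.0, 880.0), (90.0, 110.0), (0.0009, 0.0011)]            # input parameter ranges
x0 = np.array([800.0, 100.0, 0.0010])                                 # mid-range starting point

res_min = minimize(surrogate, x0, bounds=bounds)                      # minimum of the output
res_max = minimize(lambda x: -surrogate(x), x0, bounds=bounds)        # maximum via negation

print("min T_out:", res_min.fun, "at", res_min.x)
print("max T_out:", -res_max.fun, "at", res_max.x)
```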

Introducing the two-dimensional and three-dimensional standard diagrams (2D, 3D)

The two-dimensional diagram shows the changes of an output parameter in terms of the changes of one input parameter: the horizontal axis (x-axis) represents the selected input parameter or variable, and the vertical axis (y-axis) represents the changes of the desired output parameter. The three-dimensional diagram shows the changes of one output parameter simultaneously in terms of the changes of two input parameters: the two in-plane axes (x-axis, y-axis) represent the two defined input parameters or variables, and the vertical axis (z-axis) represents the changes of the desired output parameter. In both cases, when the chart of an output parameter is plotted against one or two input parameters, the other input parameters defined in the response surface (RSM) process, which are not used in the chart, must be given a fixed value; this value can be chosen anywhere between the minimum and maximum of the parameter's range and can be defined manually.

Suppose, for example, that the changes of an output parameter called temperature-outlet are examined in terms of the changes of an input parameter, the geometry length, using the response surface method (RSM). The other input parameters are held constant at the average value of their ranges. The following figure shows the changes of the outlet temperature against the geometry length.

Now suppose that the changes of an output parameter called temperature-outlet are investigated against the changes of two input parameters, the geometry length and the geometry radius, using the response surface method (RSM). The other input parameters are held constant at the average value of their ranges. The following figure shows the changes of the outlet temperature in terms of the geometry length and radius.
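A minimal sketch of such a standard 3D chart, assuming matplotlib; `surrogate` is a hypothetical stand-in for the fitted response surface, and the remaining input (the inlet velocity) is held at its mid-range value.

```python
# Minimal sketch of a 3D response chart: temperature-outlet over the
# length-radius plane, with the inlet velocity fixed at mid-range.
import numpy as np
import matplotlib.pyplot as plt

def surrogate(length, radius, velocity):
    return 300.0 + 0.05 * length - 0.2 * radius + 2.0e4 * velocity    # illustrative model

length = np.linspace(720.0, 880.0, 40)
radius = np.linspace(90.0, 110.0, 40)
L, R = np.meshgrid(length, radius)
T = surrogate(L, R, 0.0010)                                            # velocity fixed at mid-range

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(L, R, T, cmap="viridis")
ax.set_xlabel("length (mm)")
ax.set_ylabel("radius (mm)")
ax.set_zlabel("temperature-outlet")
plt.show()
```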

Introducing 2D slices diagrams

The 2D slices diagram shows the changes of one output parameter simultaneously in terms of the changes of two input parameters: the horizontal axis (x-axis) represents one of the two defined input parameters or variables, the vertical axis (y-axis) represents the changes of the desired output parameter, and the other input parameter appears in the diagram as slices, one for each of a certain number of divisions of its range. As with the standard diagrams, the input parameters defined in the response surface (RSM) process that are not used in the diagram must be given a fixed value; this value can be chosen anywhere between the minimum and maximum of the parameter's range and can be defined manually.

Suppose, for example, that the changes of an output parameter called temperature-outlet are investigated in terms of the changes of two input parameters, the geometry length and radius, using the response surface method (RSM). The other input parameters are held constant at the average value of their ranges. The following figure shows the changes of the outlet temperature in terms of the geometry length and radius: the length parameter varies along the horizontal axis, while the radius parameter is compared across slices at five divided values.
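A minimal sketch of such a 2D slices chart, assuming matplotlib; `surrogate` again stands in for the fitted response surface, the radius is compared across five slice values, and the inlet velocity is held at its mid-range value.

```python
# Minimal sketch of a 2D-slices chart: temperature-outlet versus length,
# with the radius shown as five slices and the velocity fixed at mid-range.
import numpy as np
import matplotlib.pyplot as plt

def surrogate(length, radius, velocity):
    return 300.0 + 0.05 * length - 0.2 * radius + 2.0e4 * velocity    # illustrative model

length = np.linspace(720.0, 880.0, 100)
for radius in np.linspace(90.0, 110.0, 5):                            # five slices of the radius
    plt.plot(length, surrogate(length, radius, 0.0010), label=f"radius = {radius:.0f} mm")

plt.xlabel("length (mm)")
plt.ylabel("temperature-outlet")
plt.legend()
plt.show()
```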

Introducing a local sensitivity bar/pie chart

The local sensitivity bar (or pie) chart determines the rate of change of each output parameter with respect to each input parameter independently; that is, how sensitive each output parameter is to each input parameter. The sensitivity of each output parameter or variable is computed as follows: the sensitivity percentage is the ratio of the difference between the maximum and minimum values of the output parameter, obtained by varying that input parameter, to the average value of that output parameter. A positive value means that the output parameter and the input parameter are directly related, and a negative value means they are inversely related. The larger the magnitude of the sensitivity, whether positive or negative, and the closer it is to one (100 percent), the stronger the dependence of the output parameter on that input parameter.

If the local sensitivity chart is plotted as a bar chart, the horizontal axis carries the measured output parameter(s), the vertical axis represents the sensitivity percentage, and the colored bars represent the input parameters. If the chart is plotted as a pie chart, a 360-degree circle appears, and each input parameter occupies a larger angular fraction the more sensitive the desired output parameter(s) are to it.

Suppose, for example, that the changes of two output parameters, the temperature-outlet and the mass flux-outlet, are investigated in terms of three input parameters, the geometry length, the radius, and the velocity-inlet, using the response surface method (RSM). The following figure shows the local sensitivity bar chart with respect to the geometry length, the radius, and the inlet flow velocity; the three colored bars represent the three input parameters or variables, and the chart has two groups of bars because the sensitivity criterion is evaluated for both output parameters.

For the same two output parameters, the temperature-outlet and the mass flux-outlet, and the same three input parameters, the geometry length, the radius, and the velocity-inlet, investigated with the response surface method (RSM), the following figure shows the local sensitivity pie chart with respect to the geometry length, the radius, and the inlet flow velocity; the three colored sectors represent the three input parameters or variables, and the chart is drawn in two layers because the sensitivity criterion is evaluated for both output parameters.
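A minimal sketch of the sensitivity measure described above: each input is swept over its range while the others stay at a mid-range reference point, and (max - min) / mean of the output is reported. The `surrogate` function, the bounds, and the sign convention are illustrative assumptions, not the software's exact procedure.

```python
# Minimal sketch of a local sensitivity computation for each input parameter.
import numpy as np

def surrogate(x):
    length, radius, velocity = x
    return 300.0 + 0.05 * length - 0.2 * radius + 2.0e4 * velocity     # illustrative model

names = ["length", "radius", "velocity-inlet"]
bounds = np.array([[720.0, 880.0], [90.0, 110.0], [0.0009, 0.0011]])
reference = bounds.mean(axis=1)                                         # mid-range reference point

for i, name in enumerate(names):
    samples = []
    for value in np.linspace(bounds[i, 0], bounds[i, 1], 50):
        x = reference.copy()
        x[i] = value                                                    # sweep one input at a time
        samples.append(surrogate(x))
    samples = np.array(samples)
    sensitivity = (samples.max() - samples.min()) / samples.mean() * 100.0
    sign = 1.0 if samples[-1] >= samples[0] else -1.0                   # direct vs. inverse relation (assumed convention)
    print(f"{name}: {sign * sensitivity:+.2f} %")
```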

local sensitivity curves

The local sensitivity curve represents the sensitivity of one or two defined output parameters in terms of the input parameters or variables. In this type of curve, a constant value must be defined for each input parameter. The curve then shows the sensitivity to each input parameter over the range of variation of the output parameter.

If this curve is used to measure the sensitivity of a single output parameter around a set of constant values of the input variables, the vertical axis indicates the range of variation of the output parameter's value, the colored lines inside the curve represent the individual input parameters at their constant values, and the horizontal axis indicates the sensitivity for the different values of the output parameter.

If this curve is used to evaluate the sensitivity of two output parameters around a set of constant values of the input variables, both the horizontal and vertical axes indicate the ranges of variation of the two selected output parameters, and the colored lines inside the curve represent the individual input parameters at their constant values. In this case, the simultaneous changes of the two output parameters are shown together; that is, their simultaneous sensitivity to the input parameters is measured.

Suppose, for example, that the changes of the temperature-outlet output parameter are investigated in terms of three input parameters, the geometry length, the radius, and the velocity-inlet, using the response surface method (RSM). The following figure shows the local sensitivity curve of the outlet temperature with respect to the geometry length, the radius, and the inlet flow velocity; the vertical axis indicates the changes of the outlet temperature value, the horizontal axis represents the sensitivity of this parameter, and the colored lines inside the curve correspond to the individual input parameters.

Suppose now that two output parameters, the temperature-outlet and the mass flux-outlet, are changed by three input parameters, the length, the radius, and the velocity-inlet, and investigated using the response surface method (RSM). The following figure shows the local sensitivity curve of the outlet temperature and the outlet mass flow with respect to the geometry length, the radius, and the inlet flow velocity; the horizontal and vertical axes represent the simultaneous changes of the outlet temperature and the outlet mass flow rate, and the colored lines inside the curve correspond to the individual input parameters.

Introducing the spider chart

The spider chart is a visual representation with a spider-web pattern that shows the range of variation of each output parameter. For each output parameter an equivalent axis is created and divided from the parameter's minimum value to its maximum value. The colored region produced by joining these axes encloses the space of the response points.

Suppose, for example, that the changes of two output parameters, the temperature-outlet and the mass flux-outlet, are investigated in terms of three input parameters, the length, the radius, and the velocity-inlet, using the response surface method (RSM). The following figure shows the spider chart for the outlet temperature and the outlet mass flow at constant values of the input parameters. As can be seen from the figure, the two axes represent the divided range of each output parameter, and the colored region expresses the space of the response points.

MR-CFD

MR-CFD experts are ready to fulfill every Computational Fluid Dynamics (CFD) need. Our services cover both industrial and academic purposes across a wide range of CFD problems. MR-CFD offers services in three main categories: ANSYS Fluent Consultation, ANSYS Fluent Training, and ANSYS Fluent Project Simulation. MR-CFD has gathered experts from various engineering fields to ensure the quality of its CFD services. Your CFD project will be done in the shortest time, with the highest quality, and at an appropriate cost.

Service Process

Here Are the MR-CFD Service Process Steps

STEP 1

Contact Us via [email protected] or Call on WhatsApp to Share the Project Description.

STEP 2

Project Order Will Be Investigated by MR-CFD Experts.

STEP 3

An Official Contract Including Service Price, Time Span, and Terms and Conditions Will Be Set as a Service Agreement.

STEP 4

All Simulation Files, Results, Technical Report, and a Free Comprehensive Training Movie Will Be Sent to the Client as the Contract is Done.
