Views: 482 | Replies: 2
This thread has been archived.
wangwei2008 (Muchong member, Professional Writer)
[Discussion] [Share] Design of experiments
28 August 2008
Prof. Jack P.C. Kleijnen, Tilburg University (UvT) / School of Economics & Business (FEB), Netherlands
http://center.uvt.nl/staff/kleijnen/

Design Of Experiments (DOE) is needed for experiments with real-life (physical) systems in agriculture, chemical plants, industrial factories, service industries, etc. (Montgomery 2009); with deterministic simulation models of airplanes, automobiles, TVs, computer chips, etc., used in Computer Aided Engineering (CAE) and Computer Aided Design (CAD) at companies like Boeing, General Motors, and Philips (Santner, Williams, and Notz 2003; Kleijnen 2008); and with random (stochastic, discrete-event) simulation models of queuing and inventory systems (Kleijnen 2008; Law 2007). This page focuses on experiments with simulation models, either deterministic or random. These experiments may vary hundreds of factors (input variables and parameters), each with many values. Consequently, a multitude of scenarios (combinations of factor values) might be simulated, were it not that the simulation of a single scenario may require too much computer time. Simulation experiments are well suited to sequential designs (discussed below) instead of “one-shot” designs.

Classic designs are often fractional factorials; i.e., only a fraction of all conceivable scenarios is simulated. For example, if there are 7 factors and each factor is observed at only 2 values, then there are still 2^7 = 128 possible combinations; actually, only 8 combinations suffice to estimate the first-order or “main” effects of these factors. Notice that changing only 1 factor at a time also requires only 8 combinations, but it can be proven that a fractional factorial gives estimates that are less sensitive to experimental noise (usually assumed to be “white” noise; see below).
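The 8-run design for 7 two-level factors mentioned above is a 2^(7-4) fractional factorial. A minimal sketch (the simulator, its true effect values, and the noise level are all hypothetical stand-ins) shows how the 7 main effects are estimated from only 8 runs:

```python
import numpy as np

# 2^(7-4) fractional factorial: 8 runs for 7 two-level factors.
# Full factorial in factors A, B, C; generators D=AB, E=AC, F=BC, G=ABC.
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
A, B, C = base[:, 0], base[:, 1], base[:, 2]
X = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])  # 8 x 7

# Hypothetical simulation output: true main effects plus white noise.
rng = np.random.default_rng(0)
true_effects = np.array([3.0, -2.0, 0.5, 0.0, 1.5, 0.0, 0.0])
y = 10.0 + X @ true_effects + rng.normal(scale=0.1, size=8)

# Estimate the main effects by least squares; the columns of the
# design matrix are orthogonal, so the estimates are uncorrelated.
design = np.column_stack([np.ones(8), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("estimated main effects:", np.round(coef[1:], 2))
```

Because the design columns are orthogonal ±1 contrasts, each main-effect estimate averages over all 8 runs, which is why the fractional factorial is less sensitive to noise than a one-factor-at-a-time plan of the same size.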
These classic designs may also allow for interactions among factors and second-order effects. Simulation designs also use Latin Hypercube Sampling (LHS), which allows more complicated Input/Output (I/O) behavior of the underlying simulation model. LHS provides a “space-filling” design, whereas classic designs select extreme scenarios at the corners of the experimental space. Space-filling designs are often analyzed through Kriging (spatial correlation or Gaussian process) models. Kriging gives an exact interpolator; i.e., for “old” scenarios the Kriging prediction equals the observed outputs.

Which design is “best” depends on the goal of the experiment: sensitivity analysis, optimization, etc. Sensitivity analysis may serve the validation and verification of the underlying simulation model, and factor screening, which is the search for the really important factors. Optimization tries to find the “best” scenario for the simulated real system. Robust optimization allows for uncertain environmental factors.

DOE assumes white noise; i.e., normally and independently distributed noise with a constant variance. In practice, however, this assumption is often unrealistic; e.g., the variability of the experimental output (response) may change when the input combination changes. The experimenters should therefore test whether the white-noise assumption holds; if it does not, they may adapt their analysis methods. For example, Weighted Least Squares (WLS) weights the experimental outputs by their variability. Further analysis is possible through computer-intensive methods, such as “bootstrapping” and “cross-validation” (Kleijnen 2008).

Screening is related to “sparse” effects, “parsimony”, “Pareto’s principle”, “Occam’s razor”, the “20-80 rule”, and the “curse of dimensionality”. An example is provided by a specific greenhouse simulation model with 281 factors. The users (politicians) want to focus on a few really important factors.
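Both ideas, LHS and the exact-interpolation property of Kriging, can be sketched in a few lines. The 2-factor simulator, the Gaussian correlation function, and its fixed parameter `theta` are illustrative assumptions (a real Kriging analysis would estimate the correlation parameters from the data):

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d: per dimension, one point in each of n strata."""
    cols = [(rng.permutation(n) + rng.random(n)) / n for _ in range(d)]
    return np.column_stack(cols)

# Space-filling design for a hypothetical 2-factor simulation model.
X = latin_hypercube(8, 2, rng)
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2       # stand-in for simulator output

# Simple Kriging with a Gaussian correlation function and no nugget,
# so the predictor interpolates the observed scenarios exactly.
def corr(A, B, theta=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta * d2)

R = corr(X, X) + 1e-10 * np.eye(len(X))      # tiny jitter for stability
weights = np.linalg.solve(R, y)

def predict(Xnew):
    return corr(Xnew, X) @ weights

# At an "old" scenario the Kriging prediction equals the observed output.
print(np.max(np.abs(predict(X) - y)))
```

Note the contrast with a classic 2-level design: LHS spreads the 8 scenarios over the interior of the experimental space rather than placing them at its corners.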
Even a classic design for first-order effects would require the simulation of at least 282 scenarios, which would take too much computer time. Many screening designs are sequential (not one-shot); i.e., observations are analyzed before the next scenario is selected and observed. This approach implies that the design is customized, tailored, application-driven, or non-generic. For Kriging there are also sequential designs, which are indeed customized; e.g., the design selects relatively many scenarios in regions with non-linear I/O behavior.

For optimization there are many methods (Kleijnen 2008). Unfortunately, most methods for optimization in simulation assume a single output, whereas in practice simulation models give multiple outputs. A recent variant of Response Surface Methodology (RSM) allows multiple outputs and searches for the optimal input combination in a sequence of steps (using estimated local gradients). An alternative method combines Kriging and Mathematical Programming. This alternative uses (i) sequential designs to specify the simulation inputs, (ii) Kriging to analyze the global I/O behavior, and (iii) Integer Non-Linear Programming to estimate the optimal solution from the Kriging models.

Whereas most optimization methods assume known (fixed) environments, robust methods allow uncertain environmental factors. The Japanese engineer Taguchi first applied such a view; his statistical techniques have later been improved by others (Kleijnen 2008; Myers and Montgomery 1995). Robust optimization may minimize the variability of the output subject to a restriction on the expected output, to select a compromise or “robust” solution.

In conclusion: simulation allows experimentation with many factors, so screening designs are very important. Kriging allows more complicated I/O behavior of the system. Sequential designs for either sensitivity analysis or optimization are more efficient than one-shot designs. Optimization should account for multiple outputs.
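The RSM idea of stepping along estimated local gradients can be sketched as follows. The quadratic test function (standing in for an expensive simulator), the design spread `delta`, the step size, and the noise level are all illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(x):
    """Stand-in for an expensive simulation run; minimum at (1, -2)."""
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + rng.normal(scale=0.01)

def rsm_step(center, delta=0.1, step=0.25):
    """One RSM iteration: fit a local first-order model around the
    current scenario, then move along estimated steepest descent."""
    # 2^2 factorial around the center to estimate the local gradient.
    signs = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
    X = center + delta * signs
    y = np.array([simulate(x) for x in X])
    design = np.column_stack([np.ones(4), X - center])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    grad = beta[1:]
    return center - step * grad / np.linalg.norm(grad)

x = np.array([0.0, 0.0])
for _ in range(25):
    x = rsm_step(x)
print("estimated optimum near:", np.round(x, 2))
```

With a fixed step size the search eventually oscillates around the optimum; practical RSM instead shrinks the local region and switches to a second-order model near the optimum.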
Robust optimization is important in an uncertain world.
