Design Optimization Average-Based Algorithm

This article introduces a metaheuristic algorithm to solve engineering design optimization problems. The algorithm is based on the concept of diversity and independence, aggregated in the average design of a population of designs containing information dispersed through a variety of points, and on the concept of intensification, represented by the best design. The algorithm is population-based, where the individual designs of the population are randomly generated, either normally or uniformly. The algorithm may start either with randomly generated points or with a designer-preferred trial guess. The algorithm is validated using standard classical unconstrained and constrained engineering optimum design test problems reported in the literature. The results presented indicate that the proposed algorithm is a very simple alternative for solving this kind of problem, and they compare well with the analytical solutions and/or the best results achieved so far. Analytical solutions for two constrained problems, not found in the literature, are presented in annex.


Introduction
With the advent of fast, cheap and reliable computing power over the last decades, in addition to the application of classical optimization to problems of ever-larger size, new alternative algorithms operating in a different fashion have been developed. The classical optimization algorithms have shortcomings and are not suitable for all optimization problems. The new alternative algorithms allow attacking optimization problems that are either too costly for, or not amenable to, classical algorithms.
The purpose of heuristic algorithms applied to optimization problems is to search for a solution by trial and error in a reasonable amount of computing time. The optimum solution is not guaranteed, but a near-optimum solution is accepted as a good solution. Metaheuristics refers to higher-level algorithms combining lower-level techniques and tactics for exploration and exploitation of the design space. That is, these algorithms, on one hand, must be able to generate a range of points over the whole design space, including potentially optimum ones; on the other hand, they intensify the search around the neighborhood of optimum or near-optimum points (Yang, 2008).
Exploration and exploitation are the two important components of metaheuristic algorithms. They are also called diversification and intensification (Glover, Laguna, 1997). A good balance of these components is required. Too much weight on diversification risks slow convergence, with the solutions jumping around the potentially optimum ones; too much weight on intensification restricts the search to a local region and risks convergence to a local optimum (Blum, Roli, 2003). Heuristic algorithms typically start either with randomly generated guess solutions or with a designer-preferred trial solution. The diversification is gradually reduced as the algorithm proceeds; simultaneously, the intensification is increased.
One of the roles of injected randomness in stochastic search is to allow movements to unexplored areas of the search space that may contain an unexpectedly good design. This is especially relevant when the search is stalled near a local solution. Injected randomness may also be used to create simple random quantities that act like their deterministic counterparts but are much easier to obtain and more efficient to compute.
If a sufficiently numerous and diversified group of people is asked to decide on subjects of general interest, the decisions of the group are better than the decisions an isolated individual would take (Surowiecki, 2005). However well informed and sophisticated an expert is, his or her advice and predictions should be pooled with those of others to get the most out of him or her. Practical examples, simple and complex, are described in (Surowiecki, 2005), remarking on the principle of group thinking and the concept that the masses are better problem solvers, forecasters, and decision makers than any one individual. A classic example of group intelligence is the jelly-beans-in-the-jar experiment, in which the group's average estimate of the number of jelly beans in the jar is invariably superior to the vast majority of the individual guesses.
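The jelly-beans effect described above is easy to reproduce numerically. The sketch below simulates a crowd of independent, unbiased but noisy guessers; the true count, the noise level and the crowd size are invented for illustration and are not taken from the article.

```python
import random

random.seed(42)
TRUE_COUNT = 850          # hypothetical number of jelly beans in the jar
NOISE = 200               # assumed spread of an individual guess

# Independent, unbiased individual guesses (diversity + independence).
guesses = [random.gauss(TRUE_COUNT, NOISE) for _ in range(1000)]

# Aggregation mechanism: the plain average of all guesses.
group_estimate = sum(guesses) / len(guesses)
group_error = abs(group_estimate - TRUE_COUNT)

# How many individuals the averaged estimate beats.
individual_errors = [abs(g - TRUE_COUNT) for g in guesses]
beaten = sum(e > group_error for e in individual_errors)
print(f"group error: {group_error:.1f}, beats {beaten} of {len(guesses)} guessers")
```

Because averaging cancels independent errors, the group estimate typically beats well over 90% of the individual guesses, which is exactly the aggregation property the algorithm later exploits.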
The theory that groups are remarkably intelligent, and often smarter than the smartest people in them, has a significant impact on how businesses operate, how knowledge is increased, how economies are structured, and how people live their daily lives (Williams, 2006). The necessary conditions for a crowd to be wise include diversity, independence, and a specific type of decentralization. These conditions are essential to making good decisions, which are the result of disagreement and contest rather than consensus or compromise.
Diversity means individuals have some private information or their own interpretation of known facts. Diversity helps because it adds perspectives that would otherwise be absent and because it takes away, or at least weakens, some of the destructive characteristics of group decision-making. Homogeneous groups are great at doing what they do well, but they become progressively less able to investigate alternatives. Independence means freedom from the influence of others. It keeps individual mistakes from becoming correlated. Diversity is essential to preserving this independence. In the jelly-beans-in-the-jar experiment, most group members are not talking to each other or solving problems together.
Decentralization means people draw on local knowledge. It encourages individuals to make important decisions, not just in one location based only on one specific type of information, but dispersed through a variety of locations from where local knowledge is drawn and shared. The information coming out of a decentralized group must be aggregated throughout the system, to maintain a balance between local and global counterparts. Aggregation needs a mechanism that turns individual judgments into a collective decision. For instance, in a free market, the aggregating mechanism is price; in the jelly-beans-in-the-jar experiment, the individual guesses were aggregated and then averaged, i.e., the aggregating mechanism is the average guess.
The aim of the present article is to propose an algorithm based on the concepts just described. The algorithm is stochastic in the sense that it relies on random numbers, so different results may be obtained upon running the algorithm repeatedly. The algorithm is population-based, where the individual designs of the population are randomly generated. These individuals are diversified and independent, since their design variable values are chosen stochastically without any correlation.
The individual designs are also decentralized, since the design variables are chosen all over the entire design space. Finally, the different values of the design variables are aggregated as the plain or weighted average of those values. However, generating a diverse set of possible solutions is not enough. The designer, like the body of people, also has to be able to distinguish the good solutions from the bad. So, at each iteration, the present algorithm selects two designs: the best design, the one with the best objective so far; and the averaged design, the one whose design variable values are the mean of those values for the iteration.
The best design represents the intensification component of the algorithm; the averaged design represents the diversification part. In the current article, the optimization problem is understood as a minimization problem, where the function of merit to be assessed is an extended cost function that takes into account a penalization due to violated constraints. A reference design is considered as a linear combination of both the best and the averaged design variables. A simple recurrence formula, centered on the reference design, is used to update the design for the next randomly generated population. The population can be normally or uniformly generated.
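One iteration of this best/average scheme can be sketched as follows. This is a minimal illustration, not the article's exact implementation: the weight `theta`, the initial `sigma`, the shrink factor and the resampling around the reference design are assumed illustrative choices.

```python
import random

def iterate(pop, f, theta=0.85, sigma=0.5):
    """One iteration sketch of the average-concept idea.

    theta weighs the best design (intensification) against the
    averaged design (diversification); all values are illustrative.
    """
    best = min(pop, key=f)                      # best design of the iteration
    n = len(best)
    avg = [sum(x[i] for x in pop) / len(pop) for i in range(n)]   # plain-average design
    ref = [theta * b + (1.0 - theta) * a for b, a in zip(best, avg)]  # reference design
    # New population normally distributed around the reference design.
    new_pop = [[random.gauss(r, sigma) for r in ref] for _ in pop]
    return new_pop, best

# Usage sketch: minimize a simple quadratic, gradually reducing diversification.
random.seed(1)
f = lambda x: x[0] ** 2 + x[1] ** 2
pop = [[random.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(20)]
sigma, best_so_far = 0.5, float("inf")
for _ in range(500):
    pop, best = iterate(pop, f, sigma=sigma)
    best_so_far = min(best_so_far, f(best))
    sigma *= 0.99  # shrinking sigma increases intensification over time
```

Even this bare sketch drives the population toward the minimum of the quadratic, which is the balance of diversification and intensification the text describes.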
In optimization, there is traditionally a concern with developing a good stopping criterion. Unfortunately, the quest for an automatic means of stopping an algorithm with a guaranteed level of accuracy seems doomed to failure in general stochastic search problems. The fundamental reason is that, in nontrivial problems, there will always be a significant region of the design space that remains unexplored in any finite number of iterations, and there is always the possibility that the optimum lies in this unexplored region.
A danger arises in making broad claims about the performance of an algorithm based on the results of numerical studies. Performance can vary tremendously under even small changes in the form of the functions involved or in the coefficient settings within the algorithms themselves. Outstanding performance on some types of functions is consistent with poor performance on other types of functions. This is a manifestation of the no-free-lunch theorems (Spall, 2003; Wolpert, Macready, 1997).
The present algorithm is applied to several constrained and unconstrained test functions as well as to typical engineering design problems.

The optimal design problem
The optimal design problem may be formulated in a generalized fashion as: minimize the objective psi_0(x) subject to the constraints psi_j(x) <= 0, j = 1, ..., J, over the design variables x (Eq. (1)). The function of merit actually assessed is the extended cost

    phi(x) = |psi_0(x)| P    (2)

In Eq. (2), P stands for the penalty factor, whose value depends on the violation of the constraints as

    P = 1, for satisfied constraints; P = 1 + alpha * sum_{j=1}^{m} psi_j(x), for violated constraints,

where alpha > 0 and m is the number of violated constraints. In practical terms, the constraint psi_j can be considered violated if psi_j > epsilon, with epsilon a very small number. Note that the absolute value of the objective is considered in Eq. (2) in order to accommodate problems with negative objective functions.
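A minimal sketch of this extended cost follows. The multiplicative penalty form, the penalty weight `alpha` and the tolerance `eps` are assumptions made for illustration, reconstructed from the description above rather than copied from the article.

```python
def merit(cost, constraints, alpha=100.0, eps=1e-9):
    """Extended cost of Eq. (2): |cost| scaled by a penalty factor P.

    constraints holds the values psi_j; psi_j <= 0 means satisfied.
    P = 1 when all constraints are satisfied, and grows with the
    total violation otherwise (alpha > 0 is an assumed weight).
    """
    violations = [g for g in constraints if g > eps]
    P = 1.0 + alpha * sum(violations) if violations else 1.0
    return abs(cost) * P
```

For a feasible design the merit equals |psi_0|; infeasible designs are progressively penalized, so a minimizer of the merit is pushed back into the feasible region.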

The average concept algorithm
The present algorithm may be described by the flowchart of Fig. 1 as well as in the following manner. It starts with a preferred design guess b and an imposed standard deviation. At each iteration, a population of designs b_k is randomly generated, centered at the reference design b_R, which is a linear combination, weighted by theta with 0 <= theta <= 1, of the best and the averaged designs; the standard deviation vector is gradually reduced as the iterations proceed. The iterates end when the stopping criterion is met. Stop.
In order to handle tabular discrete-valued design variables, Eq. (4) is rewritten in these cases using the integer-part operator int(.), where Delta is the difference between two consecutive tabulated design variable values.
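The discrete-variable rounding can be sketched as below; the exact form of the article's modified Eq. (4) is not recoverable from the text, so this is an assumed nearest-value rounding with table origin `lo` and step `delta`.

```python
def snap(x, lo, delta):
    """Round a continuous value to the nearest tabulated value lo + k*delta.

    Uses the int() operator in the spirit of the modified Eq. (4).
    Assumes x >= lo, since int() truncates toward zero.
    """
    k = int((x - lo) / delta + 0.5)
    return lo + k * delta
```

For example, with a table of thicknesses spaced delta = 0.0625 in starting at zero, a continuous value of 0.83 snaps to the tabulated 0.8125.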

Numerical applications
In this section, unconstrained as well as constrained optimization problems are solved by applying the formulation and the algorithm presented in the previous sections. With respect to the unconstrained problems, the applications are the minimization of the following benchmark test functions: the Six-Hump Camelback, Rosenbrock and Michalewicz functions.
Concerning the constrained problems, three well-known engineering design optimization test problems are solved: the welded beam design, the pressure vessel design and the tension-compression spring design. For all the applications, normally distributed populations were used. A total of 30 independent runs were performed per problem. In Sections 4.1 to 4.6, the runs of the algorithm were performed with a particular initial seed for different population sizes and different weights in the calculation of the average and reference designs.
The results are compared with analytical solutions and/or heuristic and nonlinear programming algorithms.

Six-Hump Camelback function
We should note that the worst results occur for theta = 0, i.e., when only the average design is taken as the reference design. For theta = 1, corresponding to the selection at each iteration of the best design as the reference one, the solutions are also not as good. Table 2 shows the optimal solutions achieved for different population sizes N and different average choices in Eq. (5), IP = 0 standing for plain average and IP = 1 for weighted average, as written in step 4 of the algorithm described in Section 3, using theta = 0.85 and the given starting point. We may note that it is the population size N = 20 that needs the smallest number of function evaluations.
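For reference, the Six-Hump Camelback function minimized in this subsection has the standard benchmark form sketched below; the code is independent of the article's implementation.

```python
def six_hump_camelback(x, y):
    """Six-Hump Camelback test function (standard benchmark form).

    Global minima f ~ -1.0316 at (0.0898, -0.7126) and (-0.0898, 0.7126).
    """
    return (4.0 - 2.1 * x ** 2 + x ** 4 / 3.0) * x ** 2 \
        + x * y + (-4.0 + 4.0 * y ** 2) * y ** 2
```

The function is symmetric under simultaneous sign change of both variables, which is why it has two global minima among its six local ones.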

Rosenbrock's function
This function, also referred to as the Valley or Banana function due to the shape of its contour lines, is a popular test problem for gradient-based optimization algorithms. The function is unimodal, and its analytical solution can be obtained straightforwardly by partial differentiation, giving the optimum cost psi_0 = 0.0. The numerical solution, however, poses a particular challenge: the minimum lies inside a very deep, narrow, banana-shaped valley, which causes a lot of trouble for nonlinear programming search algorithms. Using a population of 100 samples within the design space, the optimum point is obtained after 38 iterations. For N = 20, the algorithm converges towards the same optimum point after 2604 iterations. One may say that it is the population of 100 samples that gives the smallest number of function evaluations (100 x 38).
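The two-dimensional Rosenbrock function has the standard form sketched below; its global minimum is 0 at (a, a^2), i.e., at (1, 1) for the usual parameter values.

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """Rosenbrock banana function; global minimum 0 at (a, a**2)."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2
```

The large coefficient b makes the valley floor y = x^2 extremely narrow, which is what defeats naive descent methods.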

Michalewicz's function
The third unconstrained optimization problem uses Michalewicz's function in its two-dimensional form. The function is tricky: it has several local minima and several flat areas, which make the single global minimum hard to find numerically.
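The standard form of Michalewicz's function is sketched below, with the usual steepness parameter m = 10; in two dimensions its global minimum is approximately -1.8013 near x ~ (2.20, 1.57).

```python
import math

def michalewicz(x, m=10):
    """Michalewicz test function (steepness m = 10 is the usual choice).

    The steep sin^(2m) factors create the flat regions and the
    narrow channels that make the global minimum hard to locate.
    """
    return -sum(math.sin(xi) * math.sin((i + 1) * xi ** 2 / math.pi) ** (2 * m)
                for i, xi in enumerate(x))
```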

Welded beam design
The welded beam design problem is well studied in the context of single-objective optimization. A beam A needs to be welded to another beam B and must carry a certain load P, as shown in Fig. 2. The welded beam is designed for minimum fabrication cost subject to constraints on the shear stress τ, the bending stress in the beam σ, the buckling load on the bar P_c, the end deflection of the beam δ, the cost of the weld and beam materials, and side constraints (Ragsdell, Phillips, 1975; Rao, 1996). One wants to find four design parameters: the thickness of the beam b, the width of the beam t, the length of the weld l, and the thickness of the weld h.
In order to formulate the problem in a standard form, let x = (h, l, t, b) be the design vector. The optimal results achieved with the present algorithm are shown in Table 3 for different population sizes N and different average choices in Eq. (5). The first row for each choice combination of N and IP represents the optimum at the convergence of the algorithm; the following rows are the results at intermediary iterations. For all these design points, all the constraints are satisfied.
The best minimum cost value obtained in the present article is reported in Table 3. One may observe that the algorithm solutions compare very well with the best solution presented above, even at earlier iterates of the algorithm. We may also observe that convergence is faster when the plain average is used in Eq. (5).
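For reference, the fabrication cost of this benchmark has the standard form sketched below (weld material plus beam material); the coefficients are the ones commonly used in the literature for this problem, and the constraint functions are omitted here. The evaluation point in the test is a near-optimal design often quoted for this benchmark, not necessarily the article's own result.

```python
def weld_cost(h, l, t, b):
    """Welded beam fabrication cost (standard benchmark coefficients).

    First term: cost of the weld material (volume h^2 * l).
    Second term: cost of the beam material beyond the weld.
    """
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```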

Pressure vessel design
The pressure vessel design problem has been proposed in (Kannan, Kramer, 1994). It is one of the most used test problems for validating optimization algorithms. The problem is to find the optimal design of a compressed air storage tank (Fig. 3) with a working pressure of 1000 psi and a minimum capacity volume of V_min = 1,296,000 in³. The pressure vessel is composed of a cylindrical shell capped at both ends by hemispherical heads.
Let the design variables be the shell thickness x1, the head thickness x2, the inner radius x3 and the length x4 of the cylindrical section; the thicknesses x1 and x2 should be integer multiples of 0.0625 in. The objective is to minimize the manufacturing cost (material, welding and forming costs) of the pressure vessel (Sandgren, 1990). The analytical optimum for this problem is calculated in the Annex A. The optimal results achieved with the present algorithm are shown in Table 4 for different population sizes N and different average choices in Eq. (5). For all these solutions there is no violation of the constraints; the first constraint is nearly active at the optimal point for all the different population sizes. The results for the present algorithm compare well with the analytical solution. Again, the plain average of Eq. (5) gives rise to faster convergence.
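The cost and constraints of this benchmark, in their commonly used form, can be sketched as below. The coefficients are the standard ones from the literature for this problem; the test point is the analytical optimum quoted in Annex A.

```python
import math

def vessel_cost(x1, x2, x3, x4):
    """Manufacturing cost of the pressure vessel (standard coefficients):
    shell material, head material, and two forming/welding terms."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    """Constraints in g <= 0 form."""
    return [
        -x1 + 0.0193 * x3,      # shell thickness vs. radius (ASME-type rule)
        -x2 + 0.00954 * x3,     # head thickness vs. radius
        # minimum capacity: cylinder plus two hemispherical heads
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,             # maximum length
    ]
```

At the analytical optimum the thickness and volume constraints are (numerically) active, which matches the discussion of nearly active constraints above.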

Tension/Compression spring design
The tension/compression spring design optimization problem is described in (Arora, 1989). The goal is to minimize the weight of a tension/compression spring (Fig. 4) subject to constraints on the minimum deflection, the shear stress, the surge frequency, limits on the outside diameter, and side constraints. The design variables to be considered are the wire diameter d, the mean coil diameter D and the number n of active coils.
Let us set up the vector of design variables as x = (d, D, n). The analytical solution for this problem is presented in the Annex B, where the minimum weight of the spring is obtained. The optimum results achieved with the present algorithm are shown in Table 5, for IP = 0, different population sizes N and plain average selection in Eq. (5). For all these solutions there is no violation of the constraints, the first two constraints ψ1 and ψ2 being practically active, while the last two constraints are satisfied with margin.
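The weight and constraints of this benchmark, in their commonly used form, can be sketched as follows. The coefficients are the standard ones from the literature; the point used in the test is a near-optimal design frequently quoted for this benchmark, not necessarily the article's own result.

```python
def spring_weight(d, D, n):
    """Spring weight (proportional to wire volume): (n + 2) * D * d^2."""
    return (n + 2.0) * D * d ** 2

def spring_constraints(d, D, n):
    """Constraints in g <= 0 form."""
    return [
        1.0 - D ** 3 * n / (71785.0 * d ** 4),          # minimum deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,            # shear stress
        1.0 - 140.45 * d / (D ** 2 * n),                # surge frequency
        (D + d) / 1.5 - 1.0,                            # outside diameter
    ]
```

At the near-optimal point the first two constraints are (numerically) active while the last two hold with ample margin, consistent with the discussion above.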

Concluding remarks
This article presents an average concept algorithm to solve various optimization problems, including unconstrained problems on typical benchmark functions and constrained structural engineering design test problems.
To evaluate the performance of the present algorithm, numerical applications are performed and the results are compared to those obtained analytically and/or to the best ones achieved by other optimization methods. The analytical solutions for two of the constrained problems, namely the pressure vessel design and the tension-compression spring design problems, are determined in the present article. The solutions found by the proposed algorithm compare well with those results.
We may conclude that the algorithm finds the global solution, or a near-global solution, in each problem tested. The Six-Hump Camelback function, for example, has six local minima; however, the algorithm converges to the global minimum.
Characterizing the principal advantage of the algorithm, one should emphasize the good balance between the accuracy of the solutions it achieves and its rare simplicity.


Annex A: Pressure Vessel Classical Design Optimization
The variables x3 and x4 may be minimized out subject to the volume and side constraints. The necessary Karush-Kuhn-Tucker conditions (Arora, 1989) for the problem (A.2) may then be set. Testing the second-order sufficient conditions for the only point satisfying the necessary conditions, one may use the so-called bordered Hessian (Luenberger, 1984).

Annex B: Tension-Compression Spring Classical Design Optimization
Again, let us first analyze the monotonicity of the problem (Papalambros, Wilde, 1988). One should observe that the constraint ψ1 is critical with respect to the design variable x3. The cost function ψ0 increases monotonically in the variable x3, and there is exactly one constraint, ψ1, whose monotonicity with respect to x3 is opposite to that of the objective.
The bars on the cost and constraint symbols mean that the solution is being determined, for now, with all variables continuous. Therefore, the analytical optimum point for the pressure vessel design problem is x = (0.8125, 0.4375, 42.0984456, 176.6365958), with the first and third constraints active; the present algorithm results compare well with this analytical solution.

Table 1: Optimum points and function values for different θs and starting points.

In step 4 of the algorithm, the best design is replaced if the new function value is smaller than the current best one, and a weight p_k is accounted for design b_k in the average; the weights are selected by the designer. If a plain average is chosen, then p_k = 1 for every design.

Figure 1: Flowchart of the algorithm.

Table 4: Pressure vessel optimal solutions.