
Global Journal of Technology and Optimization

ISSN: 2229-8711

Open Access

Commentary - (2022) Volume 13, Issue 7

Global Optimization Algorithms and its Parameters

Igor Litvinchev*
*Correspondence: Igor Litvinchev, Department of Computing Center Russian Academy of Sciences, Nuevo Leon State University, Ciudad, Russia, Email:

Received: 02-Jul-2022, Manuscript No. gjto-22-78603; Editor assigned: 04-Jul-2022, Pre QC No. P-78603; Reviewed: 16-Jul-2022, QC No. Q-78603; Revised: 21-Jul-2022, Manuscript No. R-78603; Published: 28-Jul-2022 , DOI: 10.37421/2229-8711.2022.13.308
Citation: Litvinchev, Igor. “Global Optimization Algorithms and its Parameters.” Glob J Tech Optim 13 (2022): 308.
Copyright: © 2022 Litvinchev I. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

Branch and bound (BB or B&B) is an algorithm design paradigm for discrete and combinatorial optimization problems. A branch-and-bound algorithm systematically enumerates candidate solutions by means of state-space search: the set of candidates is thought of as a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating a branch's candidate solutions, the branch is checked against upper and lower estimated bounds on the optimal solution, and it is discarded if it cannot yield a solution better than the best one the algorithm has found so far [1].
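As a minimal illustration of this pruning idea (not taken from the article), the sketch below applies branch and bound to the 0/1 knapsack problem, using the standard fractional relaxation as the upper bound for discarding branches:

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Branch and bound for the 0/1 knapsack problem (maximization).

    Each tree node fixes include/exclude decisions for a prefix of the
    items; the fractional (LP) relaxation gives an upper bound used to
    prune branches that cannot beat the incumbent solution.
    """
    n = len(values)
    # Sort items by value density so the fractional bound is greedy.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def upper_bound(idx, value, room):
        # Greedily fill remaining capacity, allowing one fractional item.
        for i in range(idx, n):
            if w[i] <= room:
                room -= w[i]
                value += v[i]
            else:
                return value + v[i] * room / w[i]
        return value

    best = 0
    # Max-heap on the bound: explore the most promising node first.
    heap = [(-upper_bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_bound, idx, value, room = heapq.heappop(heap)
        if -neg_bound <= best or idx == n:
            continue  # prune: this branch cannot beat the incumbent
        # Branch 1: include item idx (if it fits).
        if w[idx] <= room:
            new_val = value + v[idx]
            best = max(best, new_val)
            heapq.heappush(heap, (-upper_bound(idx + 1, new_val, room - w[idx]),
                                  idx + 1, new_val, room - w[idx]))
        # Branch 2: exclude item idx.
        heapq.heappush(heap, (-upper_bound(idx + 1, value, room),
                              idx + 1, value, room))
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

On this classic instance the optimum (items 2 and 3, value 220) is found while subtrees whose relaxation bound falls below the incumbent are never expanded.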

By utilising a variety of numerical techniques, optimization generally seeks to identify the best solution to a problem, the optimum. Of particular relevance here are methods that solve optimization problems for real, continuous, differentiable, non-linear functions. Many techniques are available, including local ones that yield a local optimum and global ones that allow a global optimum to be found. Because a maximization problem can be recast as a minimization problem, we shall look for minima here. A classic optimization problem is the travelling salesman problem: all the information (the distances between destination points) required to choose the best route is known with certainty, and the task is to compare all feasible routes and choose the one with the shortest overall distance [2]. Suppose, however, that our goal were to minimize the total time required to visit each destination rather than the total distance travelled. Since travel time is inherently variable (traffic jams, time of day, etc.), this goes beyond standard optimization, and to find the best course of action we would first want to understand the range of possible outcomes.

A simulation technique called parallel tempering, also known as replica-exchange MCMC sampling, aims to improve the dynamic properties of Markov chain Monte Carlo (MCMC) sampling methods and of Monte Carlo simulations of physical systems. The replica-exchange method was first established by Swendsen and later developed by others, including Geyer and Giorgio Parisi. Sugita and Okamoto also formulated a parallel tempering approach for molecular dynamics, commonly referred to as replica-exchange molecular dynamics, or REMD [3].
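The exhaustive route comparison described above for the travelling salesman problem can be sketched in a few lines; the distance matrix here is a hypothetical four-city example, not data from the article:

```python
from itertools import permutations

# Hypothetical symmetric distance matrix between four destinations
# (city 0 is the starting point).
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    # Total distance of the closed route: last leg returns to the start.
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(dist[a][b] for a, b in legs)

# Enumerate every route that starts at city 0 and keep the shortest.
best = min(([0] + list(p) for p in permutations([1, 2, 3])), key=tour_length)
print(best, tour_length(best))  # -> [0, 1, 3, 2] 80
```

All the inputs are known with certainty, so brute-force comparison is well defined; the catch is that the number of routes grows factorially with the number of cities.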

The varying cost of function evaluations poses a fundamental challenge in such systems. Whether we are querying a simulator, probing an actual physical system, or fitting a new complex data model, these operations consume substantial resources. GO techniques for these settings must therefore meet a specific set of criteria. Without any additional knowledge of the problem's structure, they must rely only on black-box style probes, and they must identify the best improvement within a constrained budget of function evaluations [4].
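A baseline that satisfies both criteria, black-box probes only and a hard evaluation budget, is plain random search. The objective below is a toy stand-in for an expensive simulator, an assumption for illustration only:

```python
import random

random.seed(2)

def black_box(x):
    # Stand-in for an expensive simulator query; in practice each call
    # might take minutes or hours, so the budget below is what matters.
    return (x - 0.3) ** 2

def budgeted_random_search(f, budget=50, lo=-1.0, hi=1.0):
    # Uses only black-box probes of f and a hard cap on evaluations;
    # no gradients or structural knowledge of f are assumed.
    best_x, best_val = None, float("inf")
    for _ in range(budget):
        x = random.uniform(lo, hi)
        val = f(x)  # one unit of the evaluation budget
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x, val = budgeted_random_search(black_box)
print(round(x, 3), round(val, 4))
```

More sophisticated black-box methods (e.g. surrogate-assisted or Bayesian approaches) keep this same interface but spend the budget more intelligently.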

Description

Basically, N randomly initialised replicas of the system are run at different temperatures. Configurations at different temperatures are then swapped in accordance with the Metropolis criterion. The goal of this method is to make configurations sampled at high temperatures available to the simulations at low temperatures, and vice versa. The result is a very robust ensemble that can sample both low- and high-energy configurations. This allows highly accurate computation of thermodynamic properties such as the specific heat, which are typically difficult to compute in the canonical ensemble.
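A minimal sketch of this scheme, assuming a toy one-dimensional double-well energy (not a system from the article): each replica takes Metropolis steps at its own inverse temperature, and neighbouring replicas attempt swaps accepted with probability min(1, exp[(β_i − β_j)(E_i − E_j)]):

```python
import math
import random

random.seed(0)

def energy(x):
    # Toy double-well potential; the 0.2*x tilt makes the left well
    # (near x = -1) the global minimum.
    return (x * x - 1.0) ** 2 + 0.2 * x

def metropolis_step(x, beta, step=0.5):
    # Standard Metropolis update at inverse temperature beta.
    y = x + random.uniform(-step, step)
    if random.random() < math.exp(min(0.0, -beta * (energy(y) - energy(x)))):
        return y
    return x

# N replicas at a ladder of inverse temperatures (hot to cold).
betas = [0.2, 0.5, 1.5, 5.0]
xs = [random.uniform(-2, 2) for _ in betas]

for sweep in range(2000):
    xs = [metropolis_step(x, b) for x, b in zip(xs, betas)]
    # Attempt a swap between a random pair of neighbouring temperatures,
    # accepted with the replica-exchange Metropolis criterion.
    i = random.randrange(len(betas) - 1)
    delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
    if random.random() < math.exp(min(0.0, delta)):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

# The coldest replica can still escape the shallow well via exchanges
# with the hot replicas, which explore the whole landscape.
print(round(xs[-1], 2))
```

The hot replicas cross the barrier between wells freely, and the swap moves pass those configurations down the temperature ladder, which is exactly how the cold simulations gain access to high-temperature configurations.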

Through parallel computing, high-performance computing clusters, and multi-core desktop computers, a significant amount of computational capacity has become accessible to researchers all around the world. Three research areas have benefited from this. First, the development of more sophisticated, widely applicable, nature-inspired heuristics, often known as metaheuristics. Second, substantial advancement in accurate, data-driven approximation models, also known as surrogate models, and their incorporation into the optimization procedure. Third, emerging methods known as "hyperheuristics" that combine many optimization algorithms and aim to automatically assemble and configure the best optimization technique [5].

Conclusion

To tackle such computational complexity, the researchers suggest generalised parallel computational schemes, which may incorporate multiple effective parallel global optimization algorithms. To address the demands of supercomputing systems with shared- and distributed-memory multiprocessors employing thousands of processors, the proposed schemes include techniques for multilevel decomposition of parallel computations.

The global optimum is then taken to be the best of the locally optimal solutions. Although continuous branch and bound methods carry a theoretical guarantee of convergence to the globally optimal solution, this guarantee is typically not achievable in a reasonable length of time for problems with more than a few variables. To improve performance, many continuous branch and bound algorithms therefore also incorporate some form of random or statistical sampling.
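The "best of the locally optimal solutions" strategy can be sketched as a multistart scheme: random sampling supplies starting points, a local method descends to the nearest local minimum, and the best result is kept. The objective below is a made-up multimodal test function, not one from the article:

```python
import math
import random

random.seed(1)

def f(x):
    # Multimodal objective: cosine ripples plus a shallow quadratic bowl.
    # Global minimum at x = 0 with f(0) = -1; the other valleys are shallower.
    return 0.05 * x * x - math.cos(3 * x)

def grad(x):
    return 0.1 * x + 3 * math.sin(3 * x)

def local_descent(x, lr=0.01, iters=500):
    # Plain gradient descent converges to the nearest local minimum.
    for _ in range(iters):
        x -= lr * grad(x)
    return x

# Multistart: statistical sampling of start points, local search from each,
# and the best locally optimal solution is reported as the global optimum.
starts = [random.uniform(-5, 5) for _ in range(30)]
best = min((local_descent(x0) for x0 in starts), key=f)
print(round(best, 3), round(f(best), 3))
```

With enough random starts, at least one lands in the basin of the global minimum with high probability, which is exactly the role statistical sampling plays inside practical continuous branch and bound codes.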

Acknowledgement

None.

Conflict of Interest

The author declares that there are no conflicts of interest.

References

  1. Karaboğa, Derviş and Selçuk Ökdem. "A simple and global optimization algorithm for engineering problems: Differential evolution algorithm." Turk J Electr Eng Comput Sci 12 (2004): 53-60.
  2. Papamichail, Ioannis and Claire S. Adjiman. "A rigorous global optimization algorithm for problems with ordinary differential equations." J Glob Optim 24 (2002): 1-33.
  3. Scales, John A., Martin L. Smith and Terri L. Fischer. "Global optimization methods for multimodal inverse problems." J Comput Phys 103 (1992): 258-268.
  4. Kvasov, Dmitri E. and Marat S. Mukhametzhanov. "Metaheuristic vs. deterministic global optimization algorithms: The univariate case." Appl Math Comput 318 (2018): 245-259.
  5. Trivedi, Indrajit N., Jangir Pradeep, Jangir Narottam and Kumar Arvind, et al. "Novel adaptive whale optimization algorithm for global optimization." Indian J Sci Technol 9 (2016): 319-326.
