Cloud Computing Resource Scheduling Method Based on Optimization Theory
Abstract
This paper proposes a reliable cloud computing task deployment algorithm grounded in optimization theory. Current research on cloud computing task deployment mostly focuses on only one of two goals: reliability or energy-efficient optimization. We study how to provide fault tolerance against task execution failures while minimizing the number of servers used to execute all tasks, thereby reducing energy consumption. Fault recovery is achieved through task replication: each task that makes up a job is given two instances. A task copy can be deployed either on a dedicated backup server or on the server hosting the primary task, where it shares the same computing resources and runs at a fraction xtr of the primary task's execution speed (a placement sketch follows the abstract). On this basis we propose QSRE, a reliable task deployment algorithm that is both energy-optimizing and quality-of-service aware: for users, a service's completion time is usually deadline-constrained, and a timeout causes a loss to the cloud service provider.
When tasks fail at the last possible moment, the actual completion time under the algorithm RER (xtr = 0.75) is about 2% to 10% longer than under the algorithm QSRE, and RER (xtr = 0.75) also times out more often. When tasks fail at a random time during job execution, RER (xtr = 0.75) times out with a probability of 10% to 15%, RER with a probability of 42% to 63%, and the figure for RER (xtr = 0.5) is 12% to 22% lower than that for QSRE.
We further study how to minimize the number of servers used to execute all task copies while guaranteeing service quality and providing fault tolerance, again reducing energy consumption. Task copies of different sizes are deployed to dedicated backup servers with different execution speeds, chosen so that after a failure each copy can still finish within the time the user requires; the proposed quality-of-service-aware and energy-saving reliable task-copy deployment algorithm, QSRE, realizes this deployment.
For resource-usage prediction, our method keeps the prediction error below 2%. Over historical resource-usage data whose input volume grows nonlinearly, it computes both a sample prediction value and an error prediction value: the strategy weights the individual samples and uses the error prediction value as a correction parameter for the final prediction (a sketch of this weighted prediction also follows). Compared with the original least-squares method and a HyperLogLog-based estimation algorithm, this optimization strategy reduces the prediction error by about 70% and 50%, respectively.
Finally, for the task scheduling problem in cloud computing, we propose TDP, a priority assignment algorithm based on task latency time, where the latency time of a task is the longest it can be delayed without affecting the job's completion time (a slack-computation sketch follows as well).
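To make the replication scheme concrete, here is a minimal sketch of the two placement options the abstract describes: a task copy either shares the primary's server while running at fraction xtr of its speed, or goes to a dedicated backup server. The server model, the worst-case deadline check, and all names here are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    workload: float   # work units the task must execute
    deadline: float   # user-required completion time

@dataclass
class Placement:
    primary_server: int
    backup_server: int    # equals primary_server when the copy is co-located
    backup_speed: float   # copy's speed as a fraction of the primary's speed

def place_with_copy(task: Task, speeds: list[float], xtr: float) -> Placement:
    """Choose between the two copy placements sketched in the abstract.

    Worst case assumed here: the primary fails at the very end, so the
    co-located copy (running at fraction xtr of the primary's speed)
    must absorb the whole workload and still meet the deadline.
    `speeds[i]` is server i's execution speed; server 0 hosts the primary
    and the last server acts as the dedicated backup (assumptions).
    """
    primary = 0
    colocated_finish = task.workload / (speeds[primary] * xtr)
    if colocated_finish <= task.deadline:
        # Copy shares the primary's resources at reduced speed.
        return Placement(primary, primary, xtr)
    # Otherwise fall back to a dedicated backup server at full speed.
    return Placement(primary, len(speeds) - 1, 1.0)
```

For example, place_with_copy(Task(workload=100, deadline=50), speeds=[4.0, 4.0], xtr=0.75) co-locates the copy, since 100 / (4.0 × 0.75) ≈ 33.3 ≤ 50.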
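The prediction strategy is described only at a high level: weight the historical samples, fit a model, and use a predicted error as a correction parameter for the final prediction. A minimal sketch under assumed concrete choices (exponential sample weights, a linear weighted least-squares fit via numpy, and the mean of the most recent residuals as the error prediction value):

```python
import numpy as np

def weighted_ls_predict(t: np.ndarray, y: np.ndarray, t_next: float,
                        decay: float = 0.9) -> float:
    """Predict the next resource-usage value from history (t, y).

    Recent samples receive larger weights (exponential decay), a weighted
    least-squares line is fitted, and the average of the most recent
    fitting errors is added as a correction to the raw prediction.
    The decay factor and the 3-sample error window are assumptions.
    """
    n = len(t)
    w = decay ** np.arange(n - 1, -1, -1)        # newer samples weigh more
    # Weighted least-squares fit of y ~ a*t + b (polyfit squares its weights,
    # hence the square root).
    a, b = np.polyfit(t, y, deg=1, w=np.sqrt(w))
    raw = a * t_next + b
    # Error prediction value: mean residual over the most recent samples,
    # used as the correction parameter for the final prediction.
    residuals = y - (a * t + b)
    correction = residuals[-3:].mean()
    return raw + correction
```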
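The abstract defines task latency time as the longest a task can be delayed without affecting the job's completion time, which matches the classic notion of slack in a precedence DAG. A minimal sketch of computing that slack and ordering tasks by it, in the spirit of TDP; the DAG encoding and the assumption of fixed task durations are mine, not the paper's:

```python
from functools import lru_cache

def tdp_priorities(duration: dict[str, float],
                   succ: dict[str, list[str]]) -> list[str]:
    """Order tasks by ascending latency (slack) time, per the TDP idea.

    Slack = latest finish - earliest finish: how long a task can be
    delayed without extending the job's overall completion time.
    """
    pred: dict[str, list[str]] = {t: [] for t in duration}
    for t, successors in succ.items():
        for s in successors:
            pred[s].append(t)

    @lru_cache(maxsize=None)
    def earliest_finish(t: str) -> float:
        return duration[t] + max((earliest_finish(p) for p in pred[t]),
                                 default=0.0)

    makespan = max(earliest_finish(t) for t in duration)

    @lru_cache(maxsize=None)
    def latest_finish(t: str) -> float:
        # Latest finish = min over successors of their latest start time.
        return min((latest_finish(s) - duration[s] for s in succ.get(t, [])),
                   default=makespan)

    slack = {t: latest_finish(t) - earliest_finish(t) for t in duration}
    return sorted(duration, key=lambda t: slack[t])  # tight tasks first
```

For instance, with durations {'a': 2, 'b': 5, 'c': 1} and precedences {'a': ['c'], 'b': ['c']}, tasks b and c have zero slack while a has slack 3, so a is ranked last.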
DOI: https://doi.org/10.31449/inf.v48i23.6901
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.







