TY - JOUR
T1 - Multi Objective Prioritized Workflow Scheduling Using Deep Reinforcement Based Learning in Cloud Computing
AU - Mangalampalli, Sudheer
AU - Hashmi, Syed Shakeel
AU - Gupta, Amit
AU - Karri, Ganesh Reddy
AU - Rajkumar, K. Varada
AU - Chakrabarti, Tulika
AU - Chakrabarti, Prasun
AU - Margala, Martin
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Workflow scheduling is a major challenge in the cloud paradigm, as large numbers of workflows are generated dynamically from heterogeneous resources and the task dependencies vary from one workflow to another. If a workflow with many dependencies is scheduled onto an inappropriate virtual machine, i.e., one with low processing capacity, workflow execution is delayed and makespan, cost, and energy consumption increase. To schedule such complex workflows, i.e., those with many task dependencies, effectively, we propose a novel multi-objective workflow scheduling algorithm using deep reinforcement learning. Initially, the priorities of all workflows are calculated based on their dependencies, and the priorities of VMs are calculated based on the electricity cost at the datacenters, so that workflows are mapped onto suitable VMs. These priorities are fed to a scheduler that uses a Deep Q-Network model to schedule tasks dynamically by considering both task and VM priorities. Extensive simulations were carried out in WorkflowSim using real-time scientific workflows (Montage, CyberShake, Epigenomics, LIGO). The proposed MOPWSDRL was compared against existing state-of-the-art approaches, i.e., Heterogeneous Earliest First Deadline, Cat Swarm Optimization, and Ant Colony Optimization. The results reveal that MOPWSDRL outperforms the existing state-of-the-art algorithms by minimizing makespan and energy consumption.
AB - Workflow scheduling is a major challenge in the cloud paradigm, as large numbers of workflows are generated dynamically from heterogeneous resources and the task dependencies vary from one workflow to another. If a workflow with many dependencies is scheduled onto an inappropriate virtual machine, i.e., one with low processing capacity, workflow execution is delayed and makespan, cost, and energy consumption increase. To schedule such complex workflows, i.e., those with many task dependencies, effectively, we propose a novel multi-objective workflow scheduling algorithm using deep reinforcement learning. Initially, the priorities of all workflows are calculated based on their dependencies, and the priorities of VMs are calculated based on the electricity cost at the datacenters, so that workflows are mapped onto suitable VMs. These priorities are fed to a scheduler that uses a Deep Q-Network model to schedule tasks dynamically by considering both task and VM priorities. Extensive simulations were carried out in WorkflowSim using real-time scientific workflows (Montage, CyberShake, Epigenomics, LIGO). The proposed MOPWSDRL was compared against existing state-of-the-art approaches, i.e., Heterogeneous Earliest First Deadline, Cat Swarm Optimization, and Ant Colony Optimization. The results reveal that MOPWSDRL outperforms the existing state-of-the-art algorithms by minimizing makespan and energy consumption.
UR - https://www.scopus.com/pages/publications/85182353886
UR - https://www.scopus.com/inward/citedby.url?scp=85182353886&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3350741
DO - 10.1109/ACCESS.2024.3350741
M3 - Article
AN - SCOPUS:85182353886
SN - 2169-3536
VL - 12
SP - 5373
EP - 5392
JO - IEEE Access
JF - IEEE Access
ER -