Cloud Computing Projects – ElysiumPro

Cloud Computing Projects

CSE Projects
Description
Cloud computing is a computing infrastructure that enables on-demand access to resources such as computer networks, servers, storage, applications and services. We offer projects for such systems, including cloud security projects, cloud optimization systems and other cloud-based applications.
Download Project List

Quality Factor

  • 100% Assured Results
  • Best Project Explanation
  • Tons of References
  • Cost Optimized
  • Control Panel Access


1. Privacy-Preserving Outsourced Support Vector Machine Design for Secure Drug Discovery
In this paper, we propose a framework for privacy-preserving outsourced drug discovery in the cloud, which we refer to as POD. Specifically, POD is designed to allow the cloud to securely use multiple drug formula providers' drug formulas to train the Support Vector Machine (SVM) model provided by the analytical model provider. In our approach, we design secure computation protocols to allow the cloud server to perform commonly used integer and fraction computations. To securely train the SVM, we design a secure SVM parameter selection protocol to select two SVM parameters and construct a secure sequential minimal optimization protocol to privately refresh both selected SVM parameters. The trained SVM classifier can be used to determine whether a drug chemical compound is active or not in a privacy-preserving way. Lastly, we prove that the proposed POD achieves the goal of SVM training and chemical compound classification without privacy leakage to unauthorized parties, and we demonstrate its utility and efficiency using three real-world drug datasets.

2. Secure Data Collection, Storage and Access in Cloud-Assisted IoT
Cloud-assisted Internet of Things (IoT) provides a promising solution to data-booming problems caused by the capability constraints of individual objects. However, with the leverage of the cloud, IoT faces new security challenges for data mutuality between two parties, which is introduced for the first time in this paper and is not addressed by traditional approaches. We investigate a secure cloud-assisted IoT data management method that keeps data confidential when collecting, storing and accessing IoT data with the assistance of a cloud, taking the increment of users into consideration. The proposed system novelly applies a proxy re-encryption scheme proposed in [XJW15]. Hence, a secure IoT under our method can resist most attacks from both insiders and outsiders of the IoT attempting to break data confidentiality, while keeping a constant communication cost for re-encryption against the incremental scale of the IoT. We further show that the method is practical through numerical results.
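
The abstract does not fix the exact proxy re-encryption scheme, but the classic BBS98 ElGamal-based construction conveys the idea: the cloud holds a re-encryption key and transforms a ciphertext for one user into one for another without ever seeing the plaintext. A minimal sketch with illustrative, deliberately tiny (insecure) parameters:

```python
import random

# Toy BBS98-style proxy re-encryption over a small safe-prime group.
# p = 2q + 1 is a safe prime; g = 4 generates the order-q subgroup.
# These parameters are for illustration only -- far too small to be secure.
p, q, g = 467, 233, 4

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)            # (secret key a, public key g^a)

def encrypt(pk, m):                     # m must lie in the subgroup
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p,       # c1 = m * g^r
            pow(pk, r, p))              # c2 = (g^a)^r

def decrypt(sk, c):
    c1, c2 = c
    gr = pow(c2, pow(sk, -1, q), p)     # recover g^r = c2^(1/a)
    return c1 * pow(gr, p - 2, p) % p   # m = c1 / g^r

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q  # rk = b / a  (mod q)

def reencrypt(rk, c):
    c1, c2 = c
    return (c1, pow(c2, rk, p))         # proxy turns g^{ar} into g^{br}

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, 42, p)                       # message encoded in the subgroup
c = encrypt(pk_a, m)                    # ciphertext for Alice
c_b = reencrypt(rekey(a, b), c)         # cloud re-encrypts without seeing m
assert decrypt(a, c) == m and decrypt(b, c_b) == m
```

Note that re-encryption leaves the ciphertext size unchanged, which is why the communication cost can stay constant as the set of users grows.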

3. Information Leakage in Cloud Data Warehouses
Information leakage is the inadvertent disclosure of sensitive information through correlation of records from several databases/collections of a cloud data warehouse. Malicious insiders pose a serious threat to cloud data security, and this justifies the focus on information leakage due to rogue employees or to outsiders using the credentials of legitimate employees. The discussion in this paper is restricted to NoSQL databases with a flexible schema. Data encryption can reduce information leakage, but it is impractical to encrypt large databases and/or all fields of database documents, and encryption limits the operations that can be carried out on the data in a database. It is thus critical to identify sensitive documents in a data warehouse and concentrate efforts on protecting them. The capacity of a leakage channel introduced in this work quantifies the intuitively obvious means to trigger alarms when an insider attacker uses excessive computer resources to correlate information in multiple databases. The Sensitivity Analysis based on Data Sampling (SADS) introduced in this paper balances the trade-off between higher efficiency in identifying the risks posed by information leakage and the accuracy of the results obtained by sampling very large collections of documents. The paper reports on experiments assessing the effectiveness of SADS and the use of selective disinformation to limit information leakage. Cloud services for identifying sensitive records and reducing the risk of information leakage are also discussed.

4. Catch You if You Misbehave: Ranked Keyword Search Results Verification in Cloud Computing
With the advent of cloud computing, more and more people tend to outsource their data to the cloud. As a fundamental form of data utilization, secure keyword search over encrypted cloud data has recently attracted the interest of many researchers. However, most existing research is based on the ideal assumption that the cloud server is “curious but honest”, so the search results are not verified. In this paper, we consider a more challenging model in which the cloud server may behave dishonestly. Based on this model, we explore the problem of result verification for secure ranked keyword search. Different from previous data verification schemes, we propose a novel deterrent-based scheme. With our carefully devised verification data, the cloud server cannot know which data owners, or how many data owners, exchange anchor data used for verifying the cloud server's misbehavior. With our systematically designed verification construction, the cloud server cannot know which data owners' data are embedded in the verification data buffer, or how many data owners' verification data are actually used for verification. All the cloud server knows is that, once it behaves dishonestly, it will be discovered with high probability and punished severely. Furthermore, we propose to optimize the values of the parameters used in the construction of the secret verification data buffer.

5. Modelling and Analysis of a Novel Deadline-Aware Scheduling Scheme for Cloud Computing Data Centres
User request (UR) service scheduling is a process that significantly impacts the performance of a cloud data centre. This is especially true since essential quality-of-service (QoS) performance metrics, such as the UR blocking probability and the data centre's response time, are tightly coupled to this process. This paper proposes a novel Deadline-Aware UR Scheduling Scheme (DASS) with the objective of improving the data centre's QoS performance in terms of the above-mentioned metrics. Only a minority of existing work in the literature targets the formulation of mathematical models for characterizing a cloud data centre's performance. As a contribution to covering this gap, this paper presents an analytical model developed to capture the system's dynamics and evaluate its performance when operating under DASS. The accuracy of the model's results is verified through simulations. Also, the performance of the data centre under DASS is compared to its counterpart under the more generic First-In-First-Out (FIFO) scheme. The reported results indicate that DASS outperforms FIFO by 11 to 58 percent in terms of the blocking probability and by 82 to 89 percent in terms of the system's response time.
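
The abstract does not spell out DASS's internals; the sketch below only illustrates why deadline-aware ordering (here, plain earliest-deadline-first) can beat FIFO on deadline misses, using a made-up two-request workload on a single server:

```python
# Minimal single-server illustration of deadline-aware vs. FIFO ordering.
# All requests arrive at t = 0; each request is (service_time, deadline).
# This sketches the general idea only -- it is not the DASS scheme itself.

def on_time(requests, order):
    t, met = 0, 0
    for i in order:
        service, deadline = requests[i]
        t += service                    # completion time of request i
        met += (t <= deadline)
    return met

reqs = [(5, 10), (2, 3)]                # request A, request B
fifo_order = [0, 1]                     # serve in arrival order
edf_order = sorted(range(len(reqs)), key=lambda i: reqs[i][1])

print(on_time(reqs, fifo_order))        # FIFO: B completes at t=7 > 3 -> 1
print(on_time(reqs, edf_order))         # EDF:  both meet their deadlines -> 2
```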

6. Online VM Auto-Scaling Algorithms for Application Hosting in a Cloud
We consider the auto-scaling problem for application hosting in a cloud, where applications are elastic and the number of requests changes over time. The application requests are serviced by Virtual Machines (VMs), which reside on Physical Machines (PMs) in a cloud. We aim to minimize the number of hosting PMs by intelligently packing VMs into PMs, while the VMs are auto-scaled, i.e., dynamically acquired and released, to accommodate varying application needs. We consider a shadow routing based approach for this problem. The proposed shadow algorithm employs a specially constructed virtual queueing system to dynamically produce an optimal solution that guides the VM auto-scaling and the VM-to-PM packing. The proposed algorithm runs continuously without the need to re-solve the underlying optimization problem "from scratch", and adapts automatically to the changes in the application demands. We prove the asymptotic optimality of the shadow algorithm. The simulation experiments further demonstrate the algorithm's good performance and high adaptivity.
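
The shadow algorithm itself is beyond the scope of an abstract, but the underlying VM-to-PM packing objective (minimize the number of hosting PMs) can be sketched with the classical first-fit-decreasing heuristic, shown here as an assumed baseline rather than the paper's method:

```python
# First-fit-decreasing VM-to-PM packing: a classical baseline for the
# "minimize the number of hosting PMs" objective (not the shadow algorithm).

def first_fit_decreasing(vm_sizes, pm_capacity):
    free = []                                 # residual capacity per open PM
    for size in sorted(vm_sizes, reverse=True):
        for i, room in enumerate(free):
            if size <= room:
                free[i] -= size               # pack into first PM that fits
                break
        else:
            free.append(pm_capacity - size)   # open a new PM
    return len(free)

print(first_fit_decreasing([5, 4, 3, 2, 2], 8))   # -> 2 PMs (optimal here)
```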

7. A Robust Formulation for Efficient Application Offloading to Clouds
Application offloading to clouds is the key enabler for compute-intensive applications running on mobile devices. An offloading algorithm employs estimated averages of the execution and communication costs of application modules to decide on a subset of modules to be offloaded, with the objective of minimizing a certain metric (e.g., execution time or energy). This decision is highly affected by the inherent uncertainty in the estimated cost averages due to natural fluctuations or measurement inaccuracies. In this article, we propose a novel offloading scheme that takes these uncertainties into consideration. The proposed work first formulates the offloading problem as a tractable robust optimization problem where the uncertainty in k cost parameters is incorporated by allowing these parameters to fluctuate within intervals specified by profiling the application and the network. We then show that this problem can be transformed into k+1 binary linear programs that are solved while preserving the complexity of the original problem. In contrast to existing approaches, the performance of the obtained decision is guaranteed as long as the behavior of the uncertain parameters remains within the given intervals. Performance evaluation results using a face-detection application and synthetically generated applications with a large number of modules demonstrate the robustness of the obtained offloading decisions.

8. A Planning Approach for Reassigning Virtual Machines in IaaS Clouds
Reassignment of virtual machines into clusters is an important task for the good management of cloud resources, since it decisively affects the performance of the Service Provider platform. Thus, for a successful reassignment, a clear and careful reassignment plan should be constructed in advance. In this paper, we propose a planning approach to the problem of reassigning virtual machines in IaaS Cloud platforms and prove that this problem is NP-Hard. First, we use the well-known A* algorithm to solve this planning problem. Then, we propose two algorithms, called the Direct Move Heuristic (DMH) and the Iterative Direct Move Heuristic (IDMH), to overcome the space limitations of the A* algorithm. We also present two experimental studies conducted on randomly generated problem instances. The first considers small-sized problem instances; it shows the applicability of the described modeling and assesses the efficiency of the proposed algorithms. The second focuses on large-sized problem instances and assesses the scalability of the IDMH heuristic. The obtained results show good scalability on problem instances with up to 800 virtual machines.
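
As a sketch of the planning formulation (not the paper's exact model), the following A* search finds a shortest sequence of single-VM moves between two assignments while respecting PM capacity at every intermediate state. The misplaced-VM count is an admissible heuristic (each misplaced VM needs at least one move), so the returned plan has minimum length:

```python
import heapq

# A*-style search for a VM reassignment plan: one move (vm, target_pm) per
# step, with capacity respected at every intermediate state. A sketch of the
# idea, not the paper's exact modelling.

def plan_moves(start, goal, vm_size, cap, n_pms):
    def load(state, pm):
        return sum(s for s, p in zip(vm_size, state) if p == pm)
    def h(state):                       # admissible: misplaced-VM count
        return sum(a != b for a, b in zip(state, goal))

    frontier = [(h(start), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        f, g, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan
        for vm in range(len(state)):
            for pm in range(n_pms):
                if pm == state[vm] or load(state, pm) + vm_size[vm] > cap:
                    continue            # same PM, or capacity violated
                nxt = list(state); nxt[vm] = pm; nxt = tuple(nxt)
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt,
                                    plan + [(vm, pm)]))
    return None                         # no feasible plan

# Swap two size-6 VMs between PM0 and PM1 (capacity 10): a direct swap is
# infeasible, so the plan must route one VM through the spare PM2.
plan = plan_moves(start=(0, 1), goal=(1, 0), vm_size=(6, 6), cap=10, n_pms=3)
print(len(plan))                        # -> 3 moves
```

The example shows exactly the kind of intermediate step a reassignment plan must capture: the optimal plan detours one VM through a spare PM because the direct swap would violate capacity.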

9. Efficient Traceable Authorization Search System for Secure Cloud Storage
Secure search over encrypted remote data is crucial in cloud computing to guarantee data privacy and usability. To prevent unauthorized data usage, fine-grained access control is necessary in multi-user systems. However, an authorized user may intentionally leak the secret key for financial benefit. Thus, tracing and revoking malicious users who abuse the secret key is a problem that needs to be solved urgently. In this paper, we propose an escrow-free traceable attribute-based multiple-keyword subset search system with verifiable outsourced decryption (EF-TAMKS-VOD). The key escrow-free mechanism effectively prevents the key generation centre (KGC) from unscrupulously searching and decrypting all encrypted files of users. Also, the decryption process requires only ultra-lightweight computation, a desirable feature for energy-limited devices. In addition, efficient user revocation is enabled once a malicious user is identified. Moreover, the proposed system supports a flexible number of attributes rather than a polynomial bound. A flexible multiple-keyword subset search pattern is realized, and a change in the order of the query keywords does not affect the search result. Security analysis indicates that EF-TAMKS-VOD is provably secure.

10. Publicly Verifiable Boolean Query over Outsourced Encrypted Data
Outsourcing storage and computation to the cloud has become a common practice for businesses and individuals. As the cloud is semi-trusted or susceptible to attacks, many researchers suggest that the outsourced data should be encrypted and then retrieved using searchable symmetric encryption (SSE) schemes. Since the cloud is not fully trusted, it is doubtful whether it will always process queries correctly; therefore, users need a way to verify their query results. Motivated by this, in this paper, we propose a publicly verifiable dynamic searchable symmetric encryption scheme based on the accumulation tree. We first construct an accumulation tree over the encrypted data and then outsource both to the cloud. During a search operation, the cloud generates the corresponding proof for the query result by mapping Boolean query operations to set operations, while preserving privacy and achieving the verification requirements: freshness, authenticity, and completeness. Finally, we extend our scheme by dividing the accumulation tree into smaller accumulation trees to make the scheme scalable. The security analysis and performance evaluation show that the proposed scheme is secure and practical.
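
An accumulation tree is a relative of the more familiar Merkle hash tree; the Merkle version below, built with SHA-256, shows how a server can attach a compact authenticity proof to each returned item (it does not capture the set-operation mapping the paper adds on top):

```python
import hashlib

# Merkle-tree membership proofs: a simpler cousin of the accumulation tree,
# illustrating how a server returns a compact authenticity proof with each
# result while the user keeps only a single root digest.

def H(data):
    return hashlib.sha256(data).digest()

def build_tree(leaves):                        # len(leaves): a power of two
    levels = [[H(l) for l in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels                              # levels[-1][0] is the root

def prove(levels, idx):
    proof = []
    for level in levels[:-1]:
        proof.append((level[idx ^ 1], idx % 2 == 0))  # (sibling, am-I-left?)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    h = H(leaf)
    for sibling, is_left in proof:
        h = H(h + sibling) if is_left else H(sibling + h)
    return h == root

docs = [b"doc0", b"doc1", b"doc2", b"doc3"]
tree = build_tree(docs)
root = tree[-1][0]                             # user keeps only this digest
proof = prove(tree, 2)                         # cloud proves doc2 is genuine
print(verify(root, b"doc2", proof))            # -> True
print(verify(root, b"forged", proof))          # -> False
```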

11. Assessment of the Suitability of Fog Computing in the Context of Internet of Things
This work performs a rigorous, comparative analysis of the fog computing paradigm and the conventional cloud computing paradigm in the context of the Internet of Things (IoT) by mathematically formulating the parameters and characteristics of fog computing, one of the first attempts of its kind. With the rapid increase in the number of Internet-connected devices, the increased demand for real-time, low-latency services is proving challenging for the traditional cloud computing framework. Also, our irreplaceable dependency on cloud computing demands that cloud data centers (DCs) always be up and running, which exhausts a huge amount of power and yields tons of carbon dioxide (CO2). In this work, we assess the applicability of the newly proposed fog computing paradigm to serve the demands of latency-sensitive applications in the context of IoT. We model the fog computing paradigm by mathematically characterizing the fog computing network in terms of power consumption, service latency, CO2 emission, and cost, and by evaluating its performance for an environment with a high number of Internet-connected devices demanding real-time service. A case study is performed with traffic generated from the 100 most populated cities being served by eight geographically distributed DCs.

12. Translating Algorithms to Handle Fully Homomorphic Encrypted Data on the Cloud
The cloud provides large shared resources where users (or organizations) can enjoy the facility of storing data or executing applications. In spite of the convenience of large resources, storing critical data in the cloud is not secure. Hence, cloud security is an important issue in making the cloud useful at the enterprise level. Data encryption is a primary solution for providing confidentiality to sensitive data. However, processing encrypted data requires extra overhead, since repeated encryption and decryption need to be performed for every simple operation on the data. Hence, direct processing on encrypted cloud data is advantageous, and it is supported by homomorphic encryption schemes. Fully Homomorphic Encryption (FHE) provides a method of performing arbitrary operations directly on encrypted data. This seemingly magical idea is a welcome addition to cloud computing, but there are several challenges to overcome before the technology is viable in practical applications. In this paper, we make an initial effort to highlight the problem of translating algorithms that run on unencrypted or normal data into those which operate on encrypted data. We show that although FHE provides the ability to perform arbitrary computations, its complete benefit is obtained only if it also allows the execution of arbitrary algorithms on encrypted data. In this pursuit, we provide techniques to translate basic operators (bitwise, arithmetic and relational operators) which are used in the implementation of algorithms in any high-level language like C. Subsequently, we address decision making, loop handling and related data structures, which are vital to realize when the controlling variables are encrypted.
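
One core translation the paper targets can be shown on plaintext bits: relational operators and data-dependent branches are rewritten as pure ring arithmetic (additions and multiplications), which is the only form an FHE scheme can evaluate. The identities below are standard; under FHE the same expressions would be applied to ciphertexts:

```python
# The key translation step, shown here on plaintext bits for clarity:
# control flow and bitwise/relational operators are rewritten as pure
# ring arithmetic, since that is all a homomorphic scheme can evaluate.

def xor_bit(x, y):            # bitwise XOR as arithmetic
    return x + y - 2 * x * y

def and_bit(x, y):            # bitwise AND as arithmetic
    return x * y

def eq_bit(x, y):             # relational operator x == y, as arithmetic
    return 1 - xor_bit(x, y)

def select(c, a, b):          # "a if c else b" without a branch
    return c * a + (1 - c) * b

# A data-dependent branch like "if x == y: out = p else: out = q" becomes:
x, y, p, q = 1, 1, 10, 20
out = select(eq_bit(x, y), p, q)
print(out)                    # -> 10
```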

13. SimGrid VM: Virtual Machine Support for a Simulation Framework of Distributed Systems
As real systems become larger and more complex, the use of simulation frameworks grows in our research community. By leveraging them, users can focus on the major aspects of their algorithm, run in-silico experiments (i.e., simulations), and thoroughly analyze results, even for a large-scale environment, without facing the complexity of conducting in-vivo studies (i.e., on real testbeds). Since virtual machine (VM) technology has become a fundamental building block of distributed computing environments, in particular cloud infrastructures, our community needs a full-fledged simulation framework that enables us to investigate large-scale virtualized environments through accurate simulations. To be adopted, such a framework should provide easy-to-use APIs as well as accurate simulation results. In this paper, we present a highly scalable and versatile simulation framework supporting VM environments. By leveraging SimGrid, a widely used open-source simulation toolkit, our simulation framework allows users to launch hundreds of thousands of VMs in their simulation programs and control VMs in the same manner as in the real world (e.g., suspend/resume and migrate). Users can execute computation and communication tasks on physical machines (PMs) and VMs through the same SimGrid API, which provides a seamless migration path to IaaS simulations for hundreds of SimGrid users.

14. Lightweight Fine-Grained Search over Encrypted Data in Fog Computing
Fog computing, as an extension of cloud computing, outsources encrypted sensitive data to multiple fog nodes on the edge of the Internet of Things (IoT) to decrease latency and network congestion. However, existing ciphertext retrieval schemes rarely focus on the fog computing environment, and most of them still impose high computational and storage overhead on resource-limited end users. In this paper, we first present a Lightweight Fine-Grained ciphertext Search (LFGS) system for fog computing by extending Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and Searchable Encryption (SE) technologies, which achieves fine-grained access control and keyword search simultaneously. LFGS can shift partial computational and storage overhead from end users to chosen fog nodes. Furthermore, the basic LFGS system is improved to support conjunctive keyword search and attribute update, avoiding irrelevant search results and illegal access. The formal security analysis shows that the LFGS system can resist Chosen-Keyword Attacks (CKA) and Chosen-Plaintext Attacks (CPA), and a simulation using a real-world dataset demonstrates that the LFGS system is efficient and feasible in practice.

15. SEPDP: Secure and Efficient Privacy Preserving Provable Data Possession in Cloud Storage
Cloud computing is an emergent paradigm that provides reliable and resilient infrastructure enabling data owners to store their data and data consumers to access the data from cloud servers. This paradigm reduces the storage and maintenance costs of the data owner. At the same time, the data owner loses physical control and possession of the data, which leads to many security risks. Therefore, auditing services to check data integrity in the cloud are essential. This issue is a challenge because the possession of data needs to be verified while maintaining privacy. To address these issues, this work proposes a secure and efficient privacy-preserving provable data possession scheme (SEPDP). We further extend SEPDP to support multiple owners, data dynamics and batch verification. The most attractive feature of this scheme is that the auditor can verify the possession of data with low computational overhead.
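
The abstract does not detail SEPDP's protocol; a heavily simplified keyed-tag spot check conveys the possession-checking idea. Real PDP schemes, including privacy-preserving ones, use homomorphic tags so the server can respond without shipping the challenged blocks back; the sketch below deliberately drops that refinement:

```python
import hmac, hashlib, random

# Simplified spot-check flavour of provable data possession: the owner keeps
# only a key, the server stores blocks plus keyed tags, and an auditor
# challenges random block indices and re-checks the tags.

def tag(key, idx, block):
    return hmac.new(key, idx.to_bytes(4, "big") + block,
                    hashlib.sha256).digest()

key = b"owner-secret-key"
blocks = [b"block-%d" % i for i in range(100)]
tags = [tag(key, i, blk) for i, blk in enumerate(blocks)]  # outsourced too

def audit(server_blocks, n_challenges=10):
    for i in random.sample(range(len(blocks)), n_challenges):
        # server answers with (block, tag); auditor recomputes the tag
        if not hmac.compare_digest(tag(key, i, server_blocks[i]), tags[i]):
            return False
    return True

print(audit(blocks))                                 # intact storage -> True
corrupted = list(blocks)
corrupted[7] = b"tampered"
print(audit(corrupted, n_challenges=len(blocks)))    # full audit -> False
```

Sampling only a few indices keeps the auditor's work low: a server that has dropped even a small fraction of the blocks is caught with high probability after a handful of challenges.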

16. SECURE: Self-Protection Approach in Cloud Resource Management
In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.

17. Multi-user Multi-task Computation Offloading in Green Mobile Edge Cloud Computing
Mobile Edge Cloud Computing (MECC) has become an attractive solution for augmenting the computing and storage capacity of Mobile Devices (MDs) by exploiting the available resources at the network edge. In this work, we consider computation offloading at a mobile edge cloud composed of a set of Wireless Devices (WDs), where each WD has energy harvesting equipment to collect renewable energy from the environment, and multiple MDs intend to offload their tasks to the mobile edge cloud simultaneously. We first formulate the multi-user multi-task computation offloading problem for green MECC and use the Lyapunov optimization approach to determine the energy harvesting policy (how much energy to harvest at each WD) and the task offloading schedule: the set of computation offloading requests to be admitted into the mobile edge cloud, the set of WDs assigned to each admitted offloading request, and how much workload to process at the assigned WDs. We then prove that the task offloading scheduling problem is NP-hard, and introduce centralized and distributed Greedy Maximal Scheduling algorithms to solve the problem efficiently. Performance bounds of the proposed schemes are also discussed. Extensive evaluations are conducted to test the performance of the proposed algorithms.

18. Intelligent Health Vessel ABC-DE: An Electrocardiogram Cloud Computing Service
The severe challenges of a fast-aging population and the prevalence of cardiovascular diseases highlight the need for effective solutions supporting more accurate and affordable medical diagnosis and treatment. Recent advances in cloud computing have inspired numerous designs of cloud-based health care services. In this paper, we develop a cloud-computing platform monitored by physicians, which can receive 12-lead ECG records and send diagnostic reports back to users. To lessen the physicians' workload, we implement an analysis algorithm that can identify abnormal heart rate, irregular heartbeat, abnormal amplitude, atrial fibrillation and otherwise abnormal ECG records. A large number of testing samples were used to evaluate performance. Our algorithm achieved a TPR95 (specificity under the condition that the negative predictive value equals 95%) of 68.5% and an AUC (area under the ROC curve) of 0.9317 for classification of normal and abnormal ECG records, and a sensitivity of 98.51% and specificity of 98.26% for atrial fibrillation classification, comparable to the state-of-the-art results for each task. The proposed ECG cloud computing service has been deployed in the Hunan Jinshengda Aerial Hospital Network, where it now receives and analyzes ECG records in real time.

19. A Distributed Truthful Auction Mechanism for Task Allocation in Mobile Cloud Computing
In mobile cloud computing, offloading resource-demanding applications from mobile devices to remote cloud servers can alleviate the resource scarcity of mobile devices. Recent studies show that exploiting the unused resources of nearby mobile devices for task execution can reduce energy consumption and communication latency. Nevertheless, it is non-trivial to encourage mobile devices to share their resources or execute tasks for others. To address this issue, we construct an auction model to facilitate resource trading between the owners of tasks and the mobile devices participating in task execution. Specifically, the owners of the tasks act as bidders by submitting bids to compete for the resources available at mobile devices. We design a distributed auction mechanism to fairly allocate the tasks and determine the trading prices of the resources. Moreover, an efficient payment evaluation process is proposed to guard against possible dishonest activity of the seller in the payment decision, through the collaboration of the buyers. We prove that the proposed auction mechanism achieves several desirable properties: computational efficiency, individual rationality, truthfulness of the bidders, and budget balance.
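
The paper's distributed multi-task mechanism is more involved, but truthfulness in auctions classically comes from second-price (Vickrey) payments: the winner pays the second-highest bid, which makes truthful bidding a dominant strategy. A single-resource sketch of that building block, with made-up bids:

```python
# Second-price (Vickrey) auction for one resource slot: the winner pays the
# runner-up's bid, so no bidder gains by misreporting its value. This is the
# classical truthfulness building block, not the paper's full mechanism.

def vickrey(bids):
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

bids = {"task-A": 10, "task-B": 7, "task-C": 3}   # bids for one device's slot
print(vickrey(bids))   # -> ('task-A', 7): A wins but pays B's bid
```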

20. Locality-Aware Scheduling for Containers in Cloud Computing
The state-of-the-art scheduler for containerized cloud services considers load balance as the only criterion; many other important properties, including application performance, are overlooked. In the era of Big Data, however, applications are becoming increasingly data-intensive and thus perform poorly when deployed on containerized cloud services. To that end, this paper aims to improve today's cloud services by taking application performance into account for next-generation containers. More specifically, we build and analyze a new model that respects both load balance and application performance. Unlike prior studies, our model abstracts the dilemma between load balance and application performance into a unified optimization problem and then employs a statistical method to solve it efficiently. The most challenging part is that some sub-problems are extremely complex (for example, NP-hard), so heuristic algorithms have to be devised. Last but not least, we implement a system prototype of the proposed scheduling strategy for containerized cloud services. Experimental results show that our system can significantly boost application performance while preserving a relatively high load balance.

21. An Adaptive and Fuzzy Resource Management Approach in Cloud Computing
Resource management plays a key role in a cloud environment, in which applications face dynamically changing workloads. Such dynamic and unpredictable workloads can lead to performance degradation of applications. To meet the Quality of Service (QoS) requirements based on Service Level Agreements (SLAs), resource management strategies must be taken into account. The question addressed in this research is how to reduce the number of SLA violations by optimizing resources through an autonomous control cycle and a fuzzy knowledge management system. In this paper, an adaptive and fuzzy resource management framework (AFRM) is proposed. In AFRM, the latest resource values of each virtual machine are gathered through environment sensors and sent to a fuzzy controller. AFRM then analyzes the received information to make decisions about how to reallocate the resources. All the membership functions and rules are dynamically updated based on workload changes to satisfy the defined QoS requirements. Three sets of experiments were conducted to compare AFRM with a rule-based approach. Experimental results demonstrate that AFRM outperforms the competing algorithms.
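
As a toy version of the fuzzy machinery the abstract mentions, here is a controller with triangular membership functions over CPU utilisation and weighted-average defuzzification; the breakpoints and rule outputs are invented for illustration and are not AFRM's actual values:

```python
# Minimal fuzzy resource controller: triangular membership functions over
# CPU utilisation and weighted-average defuzzification. Breakpoints and
# rule outputs are illustrative only.

def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def scale_decision(cpu_util):
    low = tri(cpu_util, -0.01, 0.0, 0.6)    # rule: low util  -> scale by 0.5
    high = tri(cpu_util, 0.4, 1.0, 1.01)    # rule: high util -> scale by 2.0
    if low + high == 0:
        return 1.0                          # no rule fires: keep allocation
    return (low * 0.5 + high * 2.0) / (low + high)   # defuzzify

print(scale_decision(0.9))   # heavily loaded VM -> 2.0 (scale up)
print(scale_decision(0.1))   # lightly loaded VM -> 0.5 (scale down)
```

In the overlap region (utilisation between 0.4 and 0.6) both rules fire with partial degrees, so the output blends smoothly between scaling down and scaling up rather than jumping at a hard threshold.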

22. A Lightweight Secure Data Sharing Scheme for Mobile Cloud Computing
With the popularity of cloud computing, mobile devices can store and retrieve personal data from anywhere at any time. Consequently, the data security problem in the mobile cloud is becoming more and more severe and prevents further development of the mobile cloud. Substantial studies have been conducted to improve cloud security, but most of them are not applicable to the mobile cloud, since mobile devices have only limited computing resources and power. Solutions with low computational overhead are in great need for mobile cloud applications. In this paper, we propose a lightweight data sharing scheme (LDSS) for mobile cloud computing. It adopts CP-ABE, an access control technology used in normal cloud environments, but changes the structure of the access control tree to make it suitable for mobile cloud environments. LDSS moves a large portion of the computationally intensive access control tree transformation in CP-ABE from mobile devices to external proxy servers. Furthermore, to reduce the user revocation cost, it introduces attribute description fields to implement lazy revocation, which is a thorny issue in program-based CP-ABE systems. The experimental results show that LDSS can effectively reduce the overhead on the mobile device side when users share data in mobile cloud environments.

23. Fair Resource Allocation for Data-Intensive Computing in the Cloud
To address the computing challenge of 'big data', a number of data-intensive computing frameworks (e.g., MapReduce, Dryad, Storm and Spark) have emerged and become popular. YARN is a de facto resource management platform that enables these frameworks to run together in a shared system. However, we observe that, in a cloud computing environment, the fair resource allocation policy implemented in YARN is not suitable, because its memoryless resource allocation leads to violations of a number of desirable properties in shared computing systems. This paper attempts to address these problems for YARN. Both single-level and hierarchical resource allocations are considered. For single-level resource allocation, we propose a novel fair resource allocation mechanism called Long-Term Resource Fairness (LTRF). For hierarchical resource allocation, we propose Hierarchical Long-Term Resource Fairness (H-LTRF) by extending LTRF. We show that both LTRF and H-LTRF address the fairness problems of the current resource allocation policy and are thus suitable for cloud computing. Finally, we have developed LTYARN by implementing LTRF and H-LTRF in YARN, and our experiments show that it achieves better resource fairness than the existing fair schedulers of YARN.

24. Towards Privacy-Preserving Content-Based Image Retrieval in Cloud Computing
Content-based image retrieval (CBIR) applications have developed rapidly along with the increase in the quantity, availability and importance of images in our daily life. However, the wide deployment of CBIR schemes has been limited by their severe computation and storage requirements. In this paper, we propose a privacy-preserving content-based image retrieval scheme, which allows the data owner to outsource the image database and CBIR service to the cloud without revealing the actual content of the database to the cloud server. Local features are utilized to represent the images, and earth mover's distance (EMD) is employed to evaluate the similarity of images. The EMD computation is essentially a linear programming (LP) problem. The proposed scheme transforms the EMD problem in such a way that the cloud server can solve it without learning sensitive information. In addition, locality-sensitive hashing (LSH) is utilized to improve search efficiency. The security analysis and experiments show the security and efficiency of the proposed scheme.
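
The LSH step can be sketched with sign random projections: vectors separated by a small angle tend to land on the same side of random hyperplanes, and hence in the same bucket, so only that bucket's candidates need the expensive EMD comparison. This is a generic, assumed LSH construction, not necessarily the paper's exact one:

```python
import random

# Sign-random-projection LSH: a feature vector's signature is the pattern of
# sides it falls on relative to random hyperplanes. Nearby vectors collide
# into the same bucket with high probability, shrinking the candidate set
# that needs the expensive EMD comparison.

def make_planes(dim, n_bits, seed=42):
    rng = random.Random(seed)       # fixed seed: reproducible hash family
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def signature(vec, planes):
    return tuple(int(sum(v * w for v, w in zip(vec, p)) >= 0)
                 for p in planes)

planes = make_planes(dim=4, n_bits=8)
query = [0.9, 0.1, 0.2, 0.7]
scaled = [2.0 * v for v in query]   # same direction -> identical signature
print(signature(query, planes) == signature(scaled, planes))   # -> True
```

The signature depends only on the vector's direction, so scaling an image's feature vector never changes its bucket, and small rotations flip only a few bits.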

25MapReduce Scheduling for Deadline-Constrained Jobs in Heterogeneous Cloud Computing Systems
MapReduce is a software framework for processing data-intensive applications in a parallel manner in cloud computing systems. Some MapReduce jobs have deadline requirements for their execution. Existing deadline-constrained MapReduce scheduling schemes do not consider two problems: varying node performance and dynamic task execution times. In this paper, we use bipartite graph modelling to propose a new MapReduce scheduler called BGMRS. BGMRS can obtain the optimal solution of the deadline-constrained scheduling problem by transforming it into a well-known graph problem: minimum weighted bipartite matching. BGMRS has the following features. It considers the heterogeneous cloud computing environment, in which the computing resources of some nodes may not meet the deadlines of some jobs. In addition to meeting deadline requirements, BGMRS also takes data locality into account in computing resource allocation to shorten the data access time of a job. Furthermore, if the total available computing resources of the system cannot satisfy the deadline requirements of all jobs, BGMRS can minimize the number of jobs with deadline violations. Finally, both simulation and testbed experiments demonstrate the effectiveness of BGMRS in deadline-constrained scheduling.
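Minimum-weight bipartite matching, the graph problem BGMRS reduces its scheduling problem to, can be illustrated by brute force on a tiny instance. The cost values are hypothetical (e.g. expected completion times, with a large value encoding a deadline violation); a real scheduler would use a polynomial-time algorithm such as the Hungarian method rather than this exponential search:

```python
from itertools import permutations

def min_weight_matching(cost):
    """Exhaustively find the task->node assignment with minimum total cost.
    cost[i][j] = weight of scheduling task i on node j."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm
```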

26A framework for efficient and secured mobility of IoT devices in mobile edge computing
Mobile Edge Computing (MEC) provides an efficient solution for IoT as it brings cloud services close to the IoT device. This works well for IoT devices with limited mobility, but IoT devices that are mobile by nature introduce a set of challenges to the MEC model, including security and efficiency aspects. Achieving mutual authentication between an IoT device and the cloud edge provider is essential to protect against many security threats. Moreover, efficient data transmission when connecting to a new cloud edge provider requires efficient data mobility among MEC providers or MEC centers. This paper proposes a new framework that offers secure and efficient MEC for IoT applications with mobile devices.
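A challenge-response handshake over a pre-shared key is one simple way to realize mutual authentication between a device and an edge provider. This sketch uses HMAC-SHA256 and a single shared secret, which simplifies away the paper's actual key management; the key and nonce handling are illustrative assumptions:

```python
import hashlib
import hmac
import os

def respond(key, challenge):
    # Prove knowledge of the pre-shared key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(device_key, edge_key):
    """Toy mutual authentication: each side answers the other's random
    nonce challenge; authentication succeeds only if both sides hold
    the same pre-shared secret."""
    c_dev, c_edge = os.urandom(16), os.urandom(16)
    # Device verifies the edge's response, then the edge verifies the device's.
    edge_ok = hmac.compare_digest(respond(edge_key, c_dev),
                                  respond(device_key, c_dev))
    dev_ok = hmac.compare_digest(respond(device_key, c_edge),
                                 respond(edge_key, c_edge))
    return edge_ok and dev_ok
```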

27Distributed Resource Allocation for Data Center Networks: A Hierarchical Game Approach
The increasing demand for data computing and storage for cloud-based services motivates the development and deployment of large-scale data centers. This paper studies the resource allocation problem in data center networking systems where multiple data center operators (DCOs) simultaneously serve multiple service subscribers (SSs). We formulate a hierarchical game to analyze this system, in which the DCOs and the SSs are regarded as leaders and followers, respectively. In the proposed game, each SS selects the serving DCO with its preferred price and purchases the optimal amount of resources for the SS's computing requirements. Based on the responses of the SSs and the other DCOs, each DCO decides its resource prices so as to receive the highest profit. When coordination among DCOs is weak, we consider all DCOs to be noncooperative with each other and propose a sub-gradient algorithm for the DCOs to approach a sub-optimal solution of the game. When all DCOs are sufficiently coordinated, we formulate a coalition game among all DCOs and apply Kalai-Smorodinsky bargaining as a resource division approach to achieve high utilities. Both solutions constitute a Stackelberg Equilibrium. The simulation results verify the performance improvement provided by our proposed approaches.

28virtFlow: Guest Independent Execution Flow Analysis Across Virtualized Environments
An agent-less technique to understand virtual machine (VM) behavior and its changes during the VM life-cycle is essential for many performance analysis and debugging tasks in the cloud environment. For reasons of privacy, security, ease of deployment and execution overhead, such a method should preferably limit its data collection to the physical host level, without internal access to the VMs. We propose a host-based, precise method to recover the execution flow of virtualized environments, regardless of the level of virtualization. Given a VM, the Any-Level VM Detection Algorithm (ADA) and the Nested VM State Detection (NSD) Algorithm compute its execution path along with the state of its virtual CPUs (vCPUs) from the host kernel trace. The state of vCPUs is displayed in an interactive trace viewer (Trace Compass) for further inspection. Then, a new approach for profiling threads and processes inside the VMs is proposed. Our VM trace analysis algorithms have been open-sourced for further enhancement and for the benefit of other developers. Our techniques are evaluated with workloads generated by different benchmarking tools. These approaches are based on host hypervisor tracing, which brings a lower overhead (around 1%) compared to other approaches.

29VMGuard: A VMI-based Security Architecture for Intrusion Detection in Cloud Environment
In this paper, we propose a Virtual Machine Introspection-based security architecture design for fine-grained monitoring of the Tenant Virtual Machines (TVMs) in the cloud. We have developed techniques for monitoring the TVMs at the process level and system call level to detect known and zero-day attacks, such as those based on malicious hidden processes, attacks that disable security tools in the TVMs, and those that alter the behaviour of legitimate applications. Our architecture, VMGuard, utilizes the introspection feature at the VMM layer to analyze system call traces of programs running in the monitored TVM. VMGuard applies a software breakpoint injection technique, which is OS-agnostic, to trap the execution of programs running in a TVM. VMGuard uses a 'bag of n-grams' approach integrated with the Term Frequency-Inverse Document Frequency (TF-IDF) method to extract and select features of normal and attack traces. It then applies the Random Forest statistical learning technique to produce a generic behavior model for different categories of intrusions of the monitored TVM. We have implemented a prototype, and the results obtained are very promising and demonstrate the applicability of VMGuard. We compare VMGuard with existing techniques and discuss its advantages.
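The 'bag of n-grams' feature extraction over system-call traces can be sketched as follows; the call names and window size are illustrative, and a VMGuard-style pipeline would feed these counts into TF-IDF weighting and a Random Forest classifier:

```python
from collections import Counter

def ngram_bag(trace, n=3):
    """Turn a system-call trace (a sequence of call names) into a bag
    of overlapping n-grams, a common feature representation for
    syscall-based intrusion detection."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
```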

30A Key-Policy Attribute-Based Temporary Keyword Search scheme for Secure Cloud Storage
Temporary keyword search on confidential data in a cloud environment is the main focus of this research. Cloud providers are not fully trusted, so it is necessary to outsource data in encrypted form. In attribute-based keyword search (ABKS) schemes, authorized users can generate search tokens and send them to the cloud to run the search operation. These search tokens can be used to extract all ciphertexts that were produced at any time and contain the corresponding keyword. Since this may lead to information leakage, it is more secure to design a scheme in which the search tokens can only extract ciphertexts generated in a specified time interval. To this end, in this paper, we introduce a new cryptographic primitive called key-policy attribute-based temporary keyword search (KP-ABTKS) which provides this property. To evaluate the security of our scheme, we formally prove that it achieves the keyword secrecy property and is secure against the selectively chosen keyword attack (SCKA) in the random oracle model, under the hardness of the Decisional Bilinear Diffie-Hellman (DBDH) assumption. Furthermore, we show that the complexity of the encryption algorithm is linear in the number of involved attributes. Performance evaluation shows our scheme's practicality.

31Towards Security-based Formation of Cloud Federations: A Game Theoretical Approach
Cloud federations allow Cloud Service Providers (CSPs) to deliver more efficient service performance by interconnecting their Cloud environments and sharing their resources. However, the security of a federated service could be compromised if resources are shared with relatively insecure CSPs, and violations of the Security Service Level Agreement (Security-SLA) might occur. In this paper, we propose a Cloud federation formation model that considers the security levels of CSPs. We start by applying the Goal-Question-Metric (GQM) method to develop a set of parameters that quantitatively describes the Security-SLA in the Cloud, and use it to evaluate the security levels of the CSPs and the formed federations with respect to a defined Security-SLA baseline, while taking into account the security satisfaction of CSPs' customers. Then, we model the Cloud federation formation process as a hedonic coalitional game with a preference relation based on the security levels and reputations of CSPs. We propose a federation formation algorithm that enables CSPs to join a federation while minimizing their loss in security, and to refrain from forming relatively insecure federations. Experimental results show that our model helps maintain higher levels of security in the formed federations and reduces the rate and severity of Security-SLA violations.

32Efficient Regular Language Search for Secure Cloud Storage
Cloud computing provides flexible data management and ubiquitous data access. However, the storage service provided by the cloud server is not fully trusted by customers. Searchable encryption can simultaneously provide confidentiality protection and privacy-preserving data retrieval, making it a vital tool for secure storage. In this paper, we propose an efficient large-universe regular language searchable encryption scheme for the cloud, which is privacy-preserving and secure against the off-line keyword guessing attack (KGA). A notable highlight of the proposal over existing schemes is that it supports regular language encryption and deterministic finite automata (DFA) based data retrieval. The large-universe construction ensures the extendability of the system, in which the symbol set does not need to be predefined. Multiple users are supported, and a user can generate a DFA token using his own private key without interacting with the key generation centre. Furthermore, the concrete scheme is efficient and formally proved secure in the standard model. Extensive comparison and simulation show that the scheme is superior in functionality and performance to other schemes.
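DFA-based data retrieval rests on running a deterministic finite automaton over a symbol sequence. A minimal matcher (over plaintext, without the scheme's encryption layer) looks like this; the example automaton below, accepting strings over {a, b} that end in 'b', is purely illustrative:

```python
def dfa_accepts(transitions, start, accepting, word):
    """Run a deterministic finite automaton over `word`.
    transitions: dict (state, symbol) -> next state."""
    state = start
    for sym in word:
        if (state, sym) not in transitions:
            return False  # no transition: reject
        state = transitions[(state, sym)]
    return state in accepting
```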

33A Distributed Auction-based Framework for Scalable IaaS Provisioning in Geo-Data Centres
This paper proposes a Cloud Infrastructure-as-a-Service (IaaS) framework that allows customers to have their high performance computing applications hosted efficiently and Cloud Service Providers (CSPs) to use their resources profitably. The solution introduces a distributed architecture that manages geographically distributed Data Centres (Geo-Data Centres) logically grouped into regions. This framework overcomes the challenges of traditional centralized provisioning approaches: (a) efficiently provisioning IaaS demand, (b) scaling with the growing number of IaaS requests, (c) guaranteeing the stringent Quality of Service requirements of IaaS requests, and (d) efficiently using Cloud Geo-Data Centre computing resources. Our architecture incorporates two decentralized approaches, hierarchical and distributed, that use auctions instead of a pay-as-you-go pricing scheme. The two approaches use a large-scale optimization technique for the allocation of Geo-Data Centre computing resources. Simulation results demonstrate efficient use of computing resources and a significant reduction in computation time, ensuring adequate scalability to meet exponential growth in IaaS demand. The auction-based approaches are also shown to provide monetary benefits to the participants.

34Privacy Aware Data Deduplication for Side Channel in Cloud Storage
Cloud storage services enable individuals and organizations to outsource data storage to remote servers. Cloud storage providers generally adopt data deduplication, a technique for eliminating redundant data by keeping only a single copy of a file, thus saving a considerable amount of storage and bandwidth. However, an attacker can abuse deduplication protocols to steal information. For example, an attacker can perform a duplicate check to verify whether a file (e.g., a pay slip with a specific name and salary amount) is already stored by someone else, thereby breaching user privacy. In this paper, we propose the ZEUS (zero-knowledge deduplication response) framework. We develop ZEUS and ZEUS+, two privacy-aware deduplication protocols: ZEUS provides weaker privacy guarantees but is more efficient in communication cost, while ZEUS+ guarantees stronger privacy properties at an increased communication cost. To our knowledge, ZEUS is the first solution that addresses two-side privacy without using extra hardware or depending on the heuristically chosen parameters used by existing solutions, thus reducing both the cost and complexity of cloud storage. We show the efficiency of the proposed framework by evaluating it on a real dataset and comparing the communication costs of the proposed solutions, and we prove their privacy.
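The side channel the paper defends against comes from naive deduplication, where the duplicate check itself reveals whether a file already exists. This toy store makes the leak explicit; it is the vulnerable baseline an attacker can probe, not ZEUS itself:

```python
import hashlib

class DedupStore:
    """Naive deduplicating store: uploading reports whether the file was
    already present. That boolean is exactly the side channel an attacker
    can use to test if someone else has stored a guessed file."""
    def __init__(self):
        self.blobs = {}

    def upload(self, data):
        digest = hashlib.sha256(data).hexdigest()
        existed = digest in self.blobs
        self.blobs[digest] = data  # only one physical copy per digest
        return existed
```

An attacker who uploads a guessed pay slip and observes `True` learns that the victim already stored it; privacy-aware protocols such as ZEUS are designed to hide this signal.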

35Energy-Efficient Decision Making for Mobile Cloud Offloading
Mobile cloud offloading migrates heavy computation from mobile devices to remote cloud resources or nearby cloudlets. It is a promising method to alleviate the tension between resource-constrained mobile devices and resource-hungry mobile applications. Because their location changes frequently, mobile users often experience dynamically changing network conditions, which have a great impact on perceived application performance. Therefore, making high-quality offloading decisions at run time is difficult in mobile environments. To balance the energy-delay trade-off under different offloading-decision criteria (e.g., minimum response time or energy consumption), we propose an energy-efficient offloading-decision algorithm based on Lyapunov optimization. The algorithm determines when to run the application locally, when to forward it directly to a cloud infrastructure for remote execution, and when to delegate it to the cloud via a nearby cloudlet. It minimizes the average energy consumption on the mobile device while ensuring that the average response time satisfies a given time constraint. Moreover, compared to local and remote execution, the Lyapunov-based algorithm significantly reduces energy consumption while sacrificing only a small portion of response time. Furthermore, it optimizes energy better and has lower computational complexity than the Lagrange Relaxation based Aggregated Cost (LARAC) algorithm.
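A drift-plus-penalty decision rule in the spirit of Lyapunov optimization can be sketched as follows: at each step, pick the execution mode minimizing a weighted sum of energy (the penalty) and delay scaled by the current queue backlog (the drift term). The option names, costs, and the trade-off parameter V are hypothetical, and this is a simplified caricature of the paper's algorithm:

```python
def choose_offload(options, queue_backlog, V):
    """Pick an execution mode by the drift-plus-penalty rule:
    minimize V * energy + queue_backlog * delay.

    options: dict mode -> (energy, delay); larger V favors energy savings,
    a large backlog favors low delay to keep the queue stable."""
    return min(options,
               key=lambda m: V * options[m][0] + queue_backlog * options[m][1])
```

When the backlog is small, the cheap-but-slow cloud path wins; as the backlog grows, the fast local path is chosen to keep response times bounded.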

36CypherDB: A Novel Architecture for Outsourcing Secure Database Processing
CypherDB addresses the problem of protecting the confidentiality of a database stored externally in a cloud while enabling efficient computation over it, thwarting any honest-but-curious cloud computing service provider. It works by encrypting the entire outsourced database and executing queries over the encrypted data using our novel CypherDB secure processor architecture. To optimize computational efficiency, the proposed processor architecture provides tightly-coupled data paths that avoid information leakage during database access and query execution. Our simulation using the well-known database benchmark TPC-H over a commercial-grade Database Management System (SQLite) demonstrates that the proposed architecture incurs an average overhead of about 10 percent compared with the same set of operations without secure database processing.

37Towards Efficient Resource Allocation for Heterogeneous Workloads in IaaS Clouds
Infrastructure-as-a-service (IaaS) cloud technology has attracted much attention from users who demand large amounts of computing resources. Current IaaS clouds provision resources in terms of virtual machines (VMs) with homogeneous resource configurations, where different types of resources in VMs have a similar share of the capacity of a physical machine (PM). However, most user jobs demand different amounts of different resources. For instance, high-performance-computing jobs require more CPU cores while big data processing applications require more memory. Existing homogeneous resource allocation mechanisms cause resource starvation: dominant resources are starved while non-dominant resources are wasted. To overcome this issue, we propose a heterogeneous resource allocation approach, called skewness-avoidance multi-resource allocation (SAMR), that allocates resources according to diversified requirements on different types of resources. Our solution includes a VM allocation algorithm that ensures heterogeneous workloads are allocated appropriately to avoid skewed resource utilization in PMs, and a model-based approach to estimate the appropriate number of active PMs for operating SAMR. We show that our model-based approach has relatively low complexity for practical operation and estimates accurately. Extensive simulation results show the effectiveness of SAMR and its performance advantages over its counterparts.
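One common way to quantify skewed resource utilization on a physical machine is the dispersion of its per-resource utilizations (CPU, memory, ...) around their mean; a placement that keeps this low avoids starving one resource while wasting another. This metric is an illustrative stand-in, not necessarily SAMR's internal measure:

```python
def skewness(utilizations):
    """Resource skewness of one physical machine: root of the summed
    squared deviations of each resource's utilization ratio from the
    mean utilization. 0.0 means perfectly balanced use of all resources."""
    mean = sum(utilizations) / len(utilizations)
    if mean == 0:
        return 0.0
    return sum((u / mean - 1) ** 2 for u in utilizations) ** 0.5
```

A skewness-avoiding allocator would prefer the VM placement that yields the lowest value of this metric across PMs.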

38CSR: Classified Source Routing in Distributed Networks
In recent years, cloud computing has provided a new way to address the constraints of limited energy, capabilities, and resources. Distributed hash table (DHT) based distributed networks have become increasingly important for efficient communication in large-scale cloud systems. Previous studies mainly focus on improving performance metrics such as latency, scalability and robustness, but seldom consider security demands on the routing paths, for example, bypassing untrusted intermediate nodes. Inspired by Internet source routing, in which source nodes specify the routing paths taken by their packets, this paper presents CSR, a tag-based Classified Source Routing scheme for distributed networks that satisfies security demands on the routing paths. Unlike Internet source routing, which requires some map of the overall network, CSR operates in a distributed manner: nodes with a certain security level are tagged with a label, and routing messages requiring that level of security are forwarded only to qualified next-hops. We show how this can be achieved efficiently, by simple extensions of traditional routing structures, and safely, so that the routing is uniformly convergent. The effectiveness of our proposals is demonstrated through theoretical analysis and extensive simulations.

39Efficient Skew Handling for Outer Joins in a Cloud Computing Environment
Outer joins are ubiquitous in many workloads and Big Data systems. The question of how best to execute outer joins in large parallel systems is particularly challenging, as real-world datasets are characterized by data skew, leading to performance issues. Although skew handling techniques have been extensively studied for inner joins, there is little published work solving the corresponding problem for parallel outer joins, especially in the extremely popular Cloud computing environment. Conventional approaches to the problem, such as those based on hash redistribution, often lead to load balancing problems, while duplication-based approaches incur significant overhead in terms of network communication. In this paper, we propose a new approach for efficient skew handling in outer joins over a Cloud computing environment. We present an efficient implementation of our approach over the Spark framework. We evaluate the performance of our approach on a 192-core system with large test datasets in excess of 100 GB and with varying skew. Experimental results show that our approach is scalable and, at least in cases of high skew, significantly faster than the state-of-the-art.

40A Workflow Management System for Scalable Data Mining on Clouds
The extraction of useful information from data is often a complex process that can be conveniently modeled as a data analysis workflow. When very large data sets must be analyzed and/or complex data mining algorithms must be executed, data analysis workflows may take very long times to complete. Therefore, efficient systems are required for the scalable execution of data analysis workflows, exploiting the computing services of the Cloud platforms where data is increasingly being stored. The objective of this paper is to demonstrate how Cloud software technologies can be integrated to implement an effective environment for designing and executing scalable data analysis workflows. We describe the design and implementation of the Data Mining Cloud Framework (DMCF), a data analysis system that integrates a visual workflow language and a parallel runtime with the Software-as-a-Service (SaaS) model. DMCF was designed with the needs of real data mining applications in mind, with the goal of simplifying the development of data mining applications compared to generic workflow management systems that are not specifically designed for this domain. The result is a high-level environment that, through an integrated visual workflow language, minimizes programming effort, making it easier for domain experts to use common patterns designed for the development and parallel execution of data mining applications. DMCF's visual workflow language, system architecture and runtime mechanisms are presented. We also discuss several data mining workflows developed with DMCF and the scalability obtained by executing such workflows on a public Cloud.

41An Efficient and Secured Framework for Mobile Cloud Computing
Smartphone devices are widely used in our daily lives. However, these devices exhibit limitations, such as short battery lifetime, limited computation power, small memory size and unpredictable network connectivity. Therefore, numerous solutions have been proposed to mitigate these limitations and extend battery lifetime through offloading techniques. In this paper, a novel framework is proposed to offload intensive computation tasks from the mobile device to the cloud. The framework uses an optimization model to determine the offloading decision dynamically based on four main parameters, namely energy consumption, CPU utilization, execution time, and memory usage. In addition, a new security layer is provided to protect the data transferred to the cloud from any attack. Experimental results show that the framework can select a suitable offloading decision for different types of mobile application tasks while achieving significant performance improvement. Moreover, unlike previous techniques, the framework can protect application data from threats.

42Karma: Cost-effective Geo-replicated Cloud Storage with Dynamic Enforcement of Causal Consistency
Causal consistency has emerged as an attractive middle ground for architecting cloud storage systems, as it allows for high availability and low latency while supporting stronger-than-eventual-consistency semantics. However, causally-consistent cloud storage systems have seen limited deployment in practice. A key factor is that these systems employ full replication of all the data in all the data centres (DCs), incurring high cost. A simple extension of current causal systems to support partial replication by clustering DCs into rings incurs availability and latency problems. We propose Karma, the first system to enable causal consistency for partitioned data stores while achieving the cost advantages of partial replication without the availability and latency problems of the simple extension. Our evaluation with 64 servers emulating 8 geo-distributed DCs shows that Karma (i) incurs much lower cost than a fully-replicated causal store (due to the lower replication factor); and (ii) offers higher availability and better performance than the above partial-replication extension at similar costs.

43Trigger-based Incremental Data Processing with Unified Sync and Async Model
In recent years, more and more cloud applications need to process large-scale on-line datasets that evolve over time as new entries are added and existing entries are modified. Several programming frameworks, such as Percolator and Oolong, have been proposed for such incremental data processing and can achieve efficient processing with an event-driven abstraction. However, these frameworks are inherently asynchronous, leaving the heavy burden of managing synchronization to application developers, which significantly restricts their usability. In this study, we propose Domino, a trigger-based incremental computing framework for big data applications in the cloud, with both synchronous and asynchronous mechanisms to coordinate parallel triggers. With this new framework, both synchronous and asynchronous applications can be seamlessly developed. Use cases and extensive evaluation results confirm that it delivers sufficient performance and is easy to use for incremental applications in large-scale distributed computing.

44Aggregation-Based Colocation Datacentre Energy Management in Wholesale Markets
In this paper, we study how colocation datacentre energy cost can be effectively reduced in the wholesale electricity market via cooperative power procurement. Intuitively, by aggregating workloads and renewables across a group of tenants in a colocation datacentre, the overall power demand uncertainty of the colocation datacentre can be reduced, resulting in less chance of being penalized when participating in the wholesale electricity market. We use cooperative game theory to model the cooperative electricity procurement process of tenants as a cooperative game, and show the cost-saving benefits of aggregation. Then, a cost allocation scheme based on the marginal contribution of each tenant to the total expected cost is proposed to distribute the aggregation benefits among the participating tenants. In addition, we propose a proportional cost allocation scheme to distribute the aggregation benefits among the participating tenants after realizations of power demand and market prices. Finally, numerical experiments based on real-world traces are conducted to illustrate the benefits of aggregation compared to noncooperative power procurement.
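Marginal-contribution cost allocation can be sketched as follows: each tenant is charged the increase in the coalition's expected cost caused by its joining, in a fixed order. The cost values are hypothetical; averaging marginal contributions over all join orders would give the Shapley value, a fairer but more expensive variant:

```python
def marginal_allocation(tenants, cost):
    """Allocate the grand coalition's expected cost by each tenant's
    marginal contribution in the given join order.

    cost: dict frozenset-of-tenants -> expected procurement cost.
    Charges tenant t the amount cost(S + t) - cost(S)."""
    shares, coalition = {}, frozenset()
    for t in tenants:
        bigger = coalition | {t}
        shares[t] = cost[bigger] - cost[coalition]
        coalition = bigger
    return shares
```

The shares always sum to the grand coalition's cost (the allocation is budget-balanced by construction).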

45Energy Efficient Scheduling of Servers with Multi-Sleep Modes for Cloud Data Centre
In a cloud data centre, servers are always over-provisioned in the active state to meet the peak demand of requests, wasting a large amount of energy as a result. One option to reduce the power consumption of data centres is to reduce the number of idle servers, or to switch idle servers into low-power sleep states. However, servers cannot process requests immediately when transiting to the active state; there are delays and extra power consumption during the transition. In this paper, we consider state-of-the-art servers with multiple sleep modes. Sleep modes with smaller transition delays usually consume more power when sleeping. Given the arrival of incoming requests, our goal is to minimize the energy consumption of the cloud data centre by scheduling servers with multiple sleep modes. We formulate this as an integer linear programming (ILP) problem over the whole period of time, with millions of decision variables. To solve it, we divide the problem into sub-problems over smaller periods while ensuring feasibility and transition continuity for each sub-problem through a Backtrack-and-Update technique. Experiments show that our method significantly reduces the power consumption of a cloud data centre.

46A Dynamic and Failure-aware Task Scheduling Framework for Hadoop
Hadoop has become a popular framework for processing data-intensive applications in cloud environments. A core constituent of Hadoop is the scheduler, which is responsible for scheduling and monitoring jobs and tasks, and rescheduling them in case of failures. Although fault-tolerance mechanisms have been proposed for Hadoop, the performance of Hadoop can be significantly impacted by unforeseen events in the cloud environment. In this paper, we introduce a dynamic and failure-aware framework that can be integrated within the Hadoop scheduler and adjusts scheduling decisions based on collected information about the cloud environment. Our framework relies on predictions made by machine learning algorithms and scheduling policies generated by a Markov Decision Process (MDP) to adjust its scheduling decisions on the fly. Instead of the fixed heartbeat-based failure detection commonly used in Hadoop to track active TaskTrackers (i.e., nodes that process the scheduled tasks), our proposed framework implements an adaptive algorithm that dynamically detects TaskTracker failures. To deploy our proposed framework, we have built ATLAS+, an AdapTive failure-Aware Scheduler for Hadoop. To assess the performance of ATLAS+, we conduct a large empirical study on a 100-node Hadoop cluster deployed on Amazon Elastic MapReduce (EMR), comparing the performance of ATLAS+ with those of three Hadoop schedulers (FIFO, Fair, and Capacity).
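An adaptive failure-detection timeout in the spirit of the framework can be sketched as the mean plus k standard deviations of a node's recent heartbeat intervals, so noisy-but-alive nodes get longer grace periods than steady ones. The k parameter is a hypothetical sensitivity knob, not a value from the paper:

```python
def adaptive_timeout(intervals, k=3.0):
    """Compute a per-node failure-detection timeout from recent
    heartbeat inter-arrival times: mean + k * stddev. A heartbeat
    gap exceeding this value would mark the TaskTracker as failed,
    replacing a fixed cluster-wide expiry."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / n
    return mean + k * var ** 0.5
```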

47Cost-Efficient Tasks and Data Co-Scheduling with AffordHadoop
With today's massive jobs spanning thousands of tasks each, cost-optimality has become more important than ever. Modern distributed data processing paradigms can be significantly more sensitive to cost than makespan, especially for long jobs deployed in commercial clouds. This paper posits that minimal dollar costs cannot be achieved unless data and tasks are scheduled simultaneously. We introduce the problem of cost-efficient co-scheduling for highly data-intensive jobs in the cloud, such as MapReduce. We show that while the problem is polynomial in some cases, the general problem is NP-hard. We propose to tackle the problem using integer programming techniques coupled with heuristic reduction and optimization to enable a near-real-time solution. AffordHadoop, a pluggable co-scheduler for Hadoop, is implemented as an example of such a co-scheduler. AffordHadoop saves up to 48% of overall dollar costs compared to existing schedulers and provides significant flexibility in fine-tuning the cost-performance trade-off.

48Hadoop MapReduce for Mobile Clouds
The new generations of mobile devices have high processing power and storage, but they lag behind in terms of software systems for big data storage and processing. Hadoop is a scalable platform that provides distributed storage and computational capabilities on clusters of commodity hardware. Building Hadoop on a mobile network enables devices to run data-intensive computing applications without direct knowledge of the underlying distributed system's complexities. However, these applications have severe energy and reliability constraints (e.g., caused by unexpected device failures or topology changes in a dynamic network). As mobile devices are more susceptible to unauthorized access than traditional servers, security is also a concern for sensitive data. Hence, it is paramount to consider reliability, energy efficiency and security for such applications. The MDFS (Mobile Distributed File System) [1] addresses these issues for big data processing in mobile clouds. We have developed the Hadoop MapReduce framework over MDFS and have studied its performance by varying input workloads in a real heterogeneous mobile cluster. Our evaluation shows that the implementation addresses all constraints in processing large amounts of data in mobile clouds.

49Phase–Reconfigurable Shuffle Optimization for Hadoop MapReduce
Hadoop MapReduce is a leading open-source framework that supports the realization of the Big Data revolution and serves as a pioneering platform for storing and processing ultra-large amounts of information. However, tuning a MapReduce system has become difficult because a large number of parameters restrict its performance, many of which relate to shuffle, a complicated phase between the map and reduce functions that includes sorting, grouping, and HTTP transfer. During the shuffle phase, a large amount of time is consumed on disk I/O with low data throughput. In this paper, we build a mathematical model to judge the computing complexities of different operating orders within map-side shuffle, so that faster execution can be achieved by reconfiguring the order of sorting and grouping. Furthermore, a 3-dimensional performance exploration space is expanded, with which sampled features of the shuffle stage, such as the key number, the spilling file number, and the variances of intermediate results, are collected to support the evaluation of the computing complexity of each operating order. Thus, an optimized reconfiguration of the map-side shuffle architecture can be achieved within Hadoop without inducing extra disk I/O. Compared with the original Hadoop implementation, the results show that our reconfigurable architecture gains up to a 2.37X speedup in finishing map-side shuffle work.
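Reordering map-side shuffle operations can be illustrated by contrasting sort-then-group with direct hash grouping, which produces the same groups without the O(n log n) sort; this is an illustration of the general idea, not Hadoop's actual shuffle code:

```python
from collections import defaultdict

def hash_group(pairs):
    """Group map outputs by key without sorting first: O(n) hashing
    instead of an O(n log n) sort followed by a scan."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return dict(groups)

def sort_group(pairs):
    """Hadoop's default order: sort by key, then emit runs of equal keys."""
    out, cur_key, cur = {}, object(), None
    for k, v in sorted(pairs, key=lambda kv: kv[0]):
        if k != cur_key:
            cur_key, cur = k, []
            out[k] = cur
        cur.append(v)
    return out
```

Whether the cheaper order actually wins depends on workload features such as the key count and spill sizes, which is what the paper's cost model evaluates.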

50Heterogeneous Job Allocation Scheduler for Hadoop MapReduce Using Dynamic Grouping Integrated Neighboring Search
MapReduce is a crucial framework in cloud computing architecture, implemented by Apache Hadoop and other cloud computing platforms. The resources required for executing jobs in a large data centre vary according to job type. In general, there are two types of jobs, CPU-bound and I/O-bound, which require different resources but run simultaneously in the same cluster. The default job scheduling policy of Hadoop is first-come-first-served and may therefore cause unbalanced resource utilization. Considering various job workloads, numerous job allocation schedulers have been proposed in the literature. However, those schedulers either suffer from poor data locality or deliver unsatisfactory job execution performance. This paper proposes a job scheduler based on a dynamic grouping integrated neighboring search strategy, which can balance resource utilization and improve performance and data locality in heterogeneous computing environments.
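A greatly simplified sketch of this kind of scheduler: classify each job by its dominant resource demand (a stand-in for the paper's dynamic grouping), then place it on the node with the most spare capacity of that resource, preferring nodes that already hold the job's input split for data locality. All names and data structures here are hypothetical illustrations, not the paper's implementation:

```python
def classify(job):
    """Group a job as CPU- or I/O-bound by its dominant demand
    (a simplification of the paper's dynamic grouping)."""
    return "cpu" if job["cpu"] >= job["io"] else "io"

def schedule(jobs, nodes):
    """Greedy allocation: for each job (largest total demand first),
    prefer nodes holding the job's input split (data locality), then
    pick the candidate with the most free capacity of the job's
    dominant resource -- a rough stand-in for the paper's
    dynamic-grouping + neighboring-search strategy."""
    placement = {}
    for job in sorted(jobs, key=lambda j: -(j["cpu"] + j["io"])):
        kind = classify(job)
        local = [n for n in nodes if job["split"] in n["splits"]]
        candidates = local or nodes          # fall back if no local node
        best = max(candidates, key=lambda n: n["free"][kind])
        best["free"][kind] -= job[kind]      # reserve the capacity
        placement[job["name"]] = best["name"]
    return placement

nodes = [
    {"name": "n1", "splits": {"s1"}, "free": {"cpu": 8, "io": 4}},
    {"name": "n2", "splits": {"s2"}, "free": {"cpu": 4, "io": 8}},
]
jobs = [
    {"name": "j1", "cpu": 6, "io": 1, "split": "s1"},  # CPU-bound
    {"name": "j2", "cpu": 1, "io": 6, "split": "s2"},  # I/O-bound
]
result = schedule(jobs, nodes)
print(result)  # {'j1': 'n1', 'j2': 'n2'}
```

Each job lands on the node that both holds its data and has headroom in its dominant resource, which is the balance between locality and utilization the abstract describes.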



Topic Highlights




Cloud computing is the delivery of computing services: servers, storage, databases, networking, software, analytics and more.

Cloud computing projects:

Providers typically charge for cloud computing services based on usage, much as you are billed for water or electricity at home.

Cloud computing is useful for organizations of every kind, from startups and multinationals to government agencies and non-profits, all of which are embracing the technology for a variety of reasons.

Cloud projects have enabled a great shift for entrepreneurs, who can now manage their data automatically. ElysiumPro Cloud Projects helps you understand the real importance of cloud computing in today's society.

We also offer placement services that give students a clear path forward and enhance their knowledge.

