Cloud Computing Service

Migrating to the Cloud

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Cloud Computing Service Models

Cloud computing service models indicate the type of service that is being offered (i.e., hardware/software infrastructure; or an application development, testing, and deployment platform; or enterprise software ready for use by subscription). The three service models that are essential components of cloud computing standards are:

Software as a Service (SaaS)  Applications delivered as a service to end users over the Internet. This model is the earliest model of cloud computing, in which software companies started to sell their solutions to businesses based on the number of users with a given set of service-level requirements. The major players in this field are Oracle (with its CRM on Demand solution), Salesforce.com, and Google (with its Google Apps).

Platform as a Service (PaaS)  Application development and deployment platform (comprising application servers, databases, etc.) delivered as a service. Amazon Elastic Compute Cloud (EC2) and Savvis are the prominent providers of this model of cloud service.

Infrastructure as a Service (IaaS)  Server, storage, and network hardware and associated software delivered as a service. Amazon EC2 is the prominent provider of this model of cloud service.

Key technologies that have enabled cloud computing in general are virtualization and clustering.

Virtualization

Virtualization allows users to overcome the restrictions associated with sharing physical computing resources such as servers, storage, and networks. For example, virtualization of servers allows users to run multiple operating system images on a single server. Virtualization of network infrastructure allows users to share network bandwidth by creating virtual local area networks (VLANs). Virtualization in cloud computing typically involves deploying many operating system images (virtual machines or VMs) on a single server sharing the available CPU resources and memory. Being able to deploy many VMs also allows users to pay only for the resources they use instead of paying for all the installed capacity on the servers. This also facilitates monitoring of resource consumption by individual VMs for use by charge-back systems later.
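The pay-for-what-you-use and charge-back idea described above can be sketched in a few lines of Python. This is purely illustrative: the VM names and per-unit rates are invented, not drawn from any real billing system.

```python
# Hypothetical charge-back sketch: bill each VM only for the resources
# it actually consumed, rather than for the host's installed capacity.

RATE_PER_CPU_HOUR = 0.05   # assumed price per vCPU-hour
RATE_PER_GB_HOUR = 0.01    # assumed price per GB-hour of memory

def charge_back(vm_usage):
    """vm_usage: {vm_name: (cpu_hours, mem_gb_hours)} -> {vm_name: cost}"""
    return {
        vm: round(cpu * RATE_PER_CPU_HOUR + mem * RATE_PER_GB_HOUR, 2)
        for vm, (cpu, mem) in vm_usage.items()
    }

bills = charge_back({"vm-a": (100, 400), "vm-b": (10, 16)})
print(bills)  # {'vm-a': 9.0, 'vm-b': 0.66}
```

A real charge-back system would pull these usage figures from the hypervisor's per-VM monitoring counters rather than take them as literals.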

Most of the virtualization software that is available today is based on hypervisor technology. IBM introduced hypervisor technology in the 1960s with a mainframe-based product called CP/CMS, which evolved into a product known as z/VM. Using hypervisor technology for server virtualization is more popular than using hardware partitioning or OS partitioning (host OS based), for the following reasons:

Ease of deployment  VMs can be quickly deployed or undeployed with the click of a button.

Isolation  Since each VM provides a complete image of the operating environment, including the choice of operating system and applications, it provides excellent isolation capabilities for users sharing the same servers.

Multiplatform support  Hypervisor-based virtualization technologies support a wide range of platforms, making them very popular. They can also support different operating systems, unlike traditional server partitioning methods such as hardware partitioning. This is a key requirement for cloud providers because they need to support the most popular operating systems, databases, middleware, and applications.

Two types of hypervisors are available today. The first, which is the most widely used and is known as a "bare-metal" or "native" hypervisor, is directly deployed on a server. Therefore, it interacts directly with the hardware to provide virtualization support. Many software vendors, including Oracle, leverage a bare-metal hypervisor for performance reasons. The most popular virtualization software in this category is Oracle VM, VMware vSphere, Microsoft Hyper-V, and the IBM pSeries PR/SM. The second type of hypervisor, known as a "hosted" hypervisor, is deployed on top of an operating system (hosted) on a server. Products such as Oracle VirtualBox (formerly Sun), VMware Server/Client, and Microsoft Virtual PC fall into this category.

Bare-metal hypervisor-based virtualization offerings primarily differ from one another in terms of the following factors:

Hypervisor used  Some vendors, such as VMware, developed their own hypervisors (VMware's is called ESX), whereas others, such as Oracle with its Oracle VM, leverage the open source Xen hypervisor as the foundation for their solution.

Full virtualization  Products such as those from VMware support full virtualization; that is, they run a binary image of the OS and emulate real I/O device drivers. Other offerings that support full virtualization are KVM and Xen-HVM.

Paravirtualization  Products that support paravirtualization, such as Oracle VM, run OSes that are ported to a specific hardware architecture and are hypervisor-aware. As such, they use generic device drivers to perform regular system functions.

Virtualization is essential for cloud providers to be able to provide end users with computing resources at a lower cost. Virtualization also helps cloud providers maximize utilization of resources and reduce capital expenditures by avoiding implementation of siloed IT infrastructure.

Clustering

Clustering allows multiple systems to function as one big system by using software and networking technologies such as shared file systems, high-speed interconnects (for servers), and similar technologies. This, in turn, helps users to scale out their applications easily by adding more systems to an existing cluster to overcome the limits of physical resources (CPU and memory) in a single server. It also makes the applications highly available by allowing them to run on multiple servers in an active-active fashion to avoid single points of failure. Grid computing, as discussed earlier, was all about clustering commercially available, off-the-shelf hardware technologies to create powerful, scalable computing infrastructure. In some ways, grid computing was an early form of cloud computing in that it ensured availability of applications and masked the servers which executed specific requests in the grid.
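The scale-out and availability behavior described above can be illustrated with a toy round-robin dispatcher. This is a minimal sketch, not real cluster software; the node names and the simulated failure are hypothetical.

```python
# Toy cluster sketch: requests are spread round-robin over the active
# nodes, and a failed node is simply removed from the rotation, so the
# remaining nodes keep serving (no single point of failure).
import itertools

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def remove(self, node):
        # Simulate a node failure: drop it from the active set.
        self.nodes.remove(node)

    def dispatch(self, requests):
        # Assign each request to the next active node in rotation.
        rr = itertools.cycle(self.nodes)
        return [(req, next(rr)) for req in requests]

cluster = Cluster(["node1", "node2", "node3"])
cluster.remove("node2")  # node2 fails; service continues
print(cluster.dispatch(["r1", "r2", "r3"]))
# [('r1', 'node1'), ('r2', 'node3'), ('r3', 'node1')]
```

Adding capacity is the reverse operation: appending a node to the list immediately puts it into the rotation, which is the essence of scaling out.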

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597496476000016

Open Source Cloud Storage Forensics

Darren Quick, ... Kim-Kwang Raymond Choo, in Cloud Storage Forensics, 2014

The Storage as a Service (StaaS) cloud computing architecture is showing significant growth as users adopt the capability to store data in the cloud environment across a range of devices. Cloud (storage) forensics has recently emerged as a salient area of research. Using a widely used open source cloud StaaS application—ownCloud—as a case study, we document a series of digital forensic experiments with the aim of providing forensic researchers and practitioners with an in-depth understanding of the artifacts required to undertake cloud storage forensics. Our experiments focus upon client and server artifacts, which are categories of potential evidential data specified before commencement of the experiments. A number of digital forensic artifacts are found as part of these experiments and are used to support the choice of artifact categories and provide a technical summary to practitioners of artifact types. Finally we provide some general guidelines for future forensic analysis on open source StaaS products.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124199705000065

Energy Efficiency in Data Centers and Clouds

Thant Zin Oo, ... Gang Quan, in Advances in Computers, 2016

Abstract

Due to demand for ubiquitous cloud computing services, the number and scale of data centers has been increasing exponentially, leading to huge consumption of electricity and water. Moreover, data centers in demand response programs can make the power grid more stable and sustainable. We study power management in data centers from the perspectives of economics, sustainability, and efficiency. From the economic perspective, we focus on cost minimization or budgeting of data centers. From the sustainability point of view, we look at water and carbon footprints in addition to energy consumption. Finally, we study demand response between data centers and utilities to manage the power grid efficiently.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0065245815000601

Privacy Management and Protection of Personal Information

Maryline Laurent, Claire Levallois-Barth, in Digital Identity Management, 2015

4.3.1.1.4 Notion of data controllers

The data controller is, "unless expressly designated by legislative or regulatory provisions relating to this processing, a person, public authority, department or any other organization who determines the purposes and means of the data processing" (see Art. 3-I [CNI 78]). This entity is therefore the person with the ability to define or control the content of a process, with a concrete influence on the process. In practice, this person is responsible for the respect of data protection rules. This person acts as an intermediary when, for example, data subjects exercise their rights.

When a social networking service offers online communications platforms allowing users to publish and exchange information, three distinct categories may be said to constitute data controllers.

Social network providers are data controllers, as they define the purposes and the ways of processing users' personal data, along with the basic services related to user management (e.g. account registration and deletion) (see [WP 09, p. 5]). They also determine the way in which user information may be used for advertising or marketing purposes, including advertising provided by third parties.

In addition to the basic service, third-party designers may offer additional applications, such as games or a service allowing users to send virtual birthday presents. In this case, the application providers decide the purposes and the way in which personal data are used by the application. They retrieve data via the "programming interface", using logins and passwords supplied by the user. In this way, the application provider constitutes a controller of personal data.

Finally, the users of social networking sites may, under certain circumstances, be considered to be data controllers.

If a member processes the personal information of their "friends" in the context of "exclusively private activities" (see Art. 2 [CNI 78]), the French Data Protection Act is not applicable. This is the case for personal correspondence, or when access to a user's data (profile data, messages, newsfeed, etc.) is limited to their chosen contacts.

Nonetheless, users' activities can go beyond the exclusively private sphere. For example, a member may use a social network on behalf of a company for the purposes of collaboration and upload the personal data of other users, or for the purposes of a political or social association. In these cases, the WP29 considers that "a high number of contacts could be an indication that the household exception does not apply and therefore that the user would be considered a data controller" (see [WP 09, p. 6]).

The data controller should be distinguished from a subcontractor, defined as "any person who processes personal data on behalf of the data controller". In practice, a subcontractor is an external service provider connected with the controller by a contract. The subcontractor acts in accordance with instructions issued by the controller.

When a social network provider uses cloud computing services, the former is generally considered to be the controller, and the latter a subcontractor [CNI 12a]. However, in the case of certain standardized Cloud offers, the client company may not really provide instructions, and will not be in a position to monitor the effectiveness of the security guarantees offered by the Cloud provider. In these cases, the CNIL considers that the two entities may a priori be considered to be jointly responsible. The service provider and the client must share responsibilities and define which entity is responsible for each obligation defined by the French Data Protection Act.9

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781785480041500043

Virtualization

Dijiang Huang, Huijun Wu, in Mobile Cloud Computing, 2018

2.1.2 Abstraction vs. Virtualization

Virtualization is an important technique for establishing modern cloud computing services. However, it is easy to confuse with another overly used concept – abstraction. In short, abstraction is about hiding details, and involves constructing interfaces to simplify the use of the underlying resource (e.g., by removing details of the resource's structure). For example, a file on a hard disk drive is mapped to a collection of sectors and tracks on the disk. We usually do not directly address disk layout when accessing the file. Concrete is the opposite of abstract. For example, software development goes from physical, e.g., the actual binary instructions, to abstract, e.g., assembly to C to Java to a framework like Apache Groovy [54] to a customizable Groovy add-on. For computer virtualization solutions, hardware no longer exists for the OS. At some level it does, on the host system, but the host system creates a virtualization layer that maps hardware functions, which allows an OS to run on the software rather than the hardware.
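The file example above can be shown directly in code: the OS file API is an abstraction, so we read bytes at a logical offset within the file without ever naming a sector or track. The file path here is a temporary one created for the demonstration.

```python
# The file API hides disk layout: we address bytes by logical offset,
# never by sector or track.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "wb") as f:
    f.write(b"hello, disk layout hidden")

with open(path, "rb") as f:
    f.seek(7)          # a logical offset within the file,
    data = f.read(4)   # not a physical sector address
print(data)  # b'disk'
```

The filesystem, the block layer, and the disk firmware each add a further mapping beneath this interface, which is exactly the layering the paragraph describes.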

Besides abstraction, two additional concepts are also quite frequently used with the concept of virtualization: replication is to create multiple instances of the resource (e.g., to simplify management or allocation); and isolation is to separate the uses which clients make of the underlying resources (e.g., to improve security).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012809641300003X

Secure migration to the cloud—In and out

Thomas Kemmerich, ... Carsten Momsen, in The Cloud Security Ecosystem, 2015

7.2 Security

Security is a major concern in the adoption of cloud computing services. Most customers are unwilling to place their sensitive corporate data in the cloud, as the CSPs, or other competitive customers of the same CSP, could compromise the data. Customers experience a lack of transparency, as they are not aware of how, why, and where their sensitive data is processed. A customer can access cloud computing services anywhere with Internet access. On the connection between the CSP and the cloud customer, the data may be compromised if it is not secured. A survey conducted on 800 IT-Professionals across four countries by Intel in 2012 states that 57% of the respondents accepted that they are unwilling to move their workload and data into the cloud due to security and compliance issues (Intel, 2012).

Organizations often do not have control or knowledge of the location of the data storage in the cloud. Cloud customers have very limited control over the security policies enforced by the CSPs on their behalf. In traditional IT-Environments, the IT-Infrastructure is present behind the firewall of an organization. Virtualized and nonvirtualized servers serve only a fixed line of business. IT-Professionals can select advanced security tools that give them a high degree of control over the security and compliance issues related to the system. In the case of cloud infrastructure, servers are virtualized and shared across several lines of business or even across multiple organizations. It is important to connect multiple cloud data centers to gain efficiencies. For instance, IT-Professionals may want to link a public cloud data center based in the UK with their private cloud-based one in Germany. If there is not an advanced tool available to secure the connection among the distant infrastructure, local IT-Staff loses a degree of control and visibility into workload and data.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128015957000100

Domain 6

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (2nd Edition), 2012

Cloud computing

Public cloud computing outsources IT infrastructure, storage, or applications to a third-party provider. A cloud also implies geographic diversity of computer resources. The goal of cloud computing is to allow large providers to leverage their economies of scale to provide computing resources to other companies, which typically pay for these services based on their usage.

Three commonly available levels of service provided by cloud providers are infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). IaaS provides an entire virtualized operating system, which the customer configures from the OS on up. PaaS provides a preconfigured operating system, and the customer configures the applications. Finally, SaaS is completely configured, from the operating system to applications, and the customer simply uses the application. In all three cases, the cloud provider manages hardware, virtualization software, network, backups, etc. See Table 7.1 for typical examples of each.

Table 7.1. Example Cloud Service Levels

Type Example
Infrastructure as a service (IaaS) Linux server hosting
Platform as a service (PaaS) Web service hosting
Software as a service (SaaS) Web mail
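The division of responsibility across the three service levels can be expressed as a small lookup table in code. The layer names below are a simplification chosen for illustration, not a definitive taxonomy.

```python
# Sketch of the IaaS/PaaS/SaaS responsibility split: whatever the
# provider does not manage, the customer must configure.
PROVIDER_MANAGES = {
    "IaaS": {"hardware", "virtualization", "network"},
    "PaaS": {"hardware", "virtualization", "network", "os"},
    "SaaS": {"hardware", "virtualization", "network", "os", "application"},
}
ALL_LAYERS = {"hardware", "virtualization", "network", "os", "application"}

def customer_manages(model):
    """Return the layers left to the customer for a given service model."""
    return ALL_LAYERS - PROVIDER_MANAGES[model]

print(customer_manages("IaaS"))  # the customer configures the OS and apps
print(customer_manages("SaaS"))  # set() -- the customer simply uses the app
```

The empty set for SaaS captures the statement above that SaaS is "completely configured" by the provider.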

Private clouds house data for a single organization and may be operated by a third party or by the organization itself. Government clouds are designed to keep data and resources geographically contained within the borders of one country. Benefits of cloud computing include reduced upfront capital expenditure, reduced maintenance costs, robust levels of service, and overall operational cost savings.

From a security perspective, taking advantage of public cloud computing services requires strict service-level agreements and an understanding of new sources of risk. One concern is multiple organizations' guests running on the same host. The compromise of one cloud customer could lead to the compromise of other customers.

Learn by Example: Pre-Owned Images

In April 2011, Amazon sent email to some Elastic Compute Cloud (EC2) customers, warning them that, "It has recently come to our attention that a public AMI in the US-East region was distributed with an included SSH public key that will allow the publisher to log in as root." [2]

AMI stands for Amazon Machine Image, a preconfigured virtual guest. DVLabs' TippingPoint described what happened: "The infected image is comprised of Ubuntu 10.4 server, running Apache and MySQL along with PHP…the image appears to have been published…six months ago and we are only hearing about this problem now. So what exactly happened here? An EC2 user that goes by the name of guru created this image, with the software stack he uses most often, and then published it to the Amazon AMI community. This would all be fine and dandy if it wasn't for one simple fact. The image was published with his SSH key still on it. This means that the image publisher, in this case guru, could log into any server instance running his image as the root user. The keys were left in /root/.ssh/authorized_keys and /home/ubuntu/.ssh/authorized_keys. We refer to the resulting image as 'certified pre-owned.' The publisher claims this was purely an accident, a mere result of his inexperience. While this may or may not be true, this incident exposes a major security hole within the EC2 community." [3]

Organizations must analyze the risk associated with preconfigured cloud-based systems and consider the option of configuring the system from the "ground up," starting with the base operating system.

Also, many cloud providers offer preconfigured system images, which may introduce risks via insecure configuration. For example, imagine a blog service image, with the operating system, Web service, and blogging software all preconfigured. Any vulnerability associated with the preconfigured image can introduce risk to every system that uses the image.

Organizations should also negotiate specific rights before signing a contract with a cloud computing provider. These rights include the right to inspect, the right to conduct a vulnerability assessment, and the right to conduct a penetration test (both electronic and physical) of data and systems placed in the cloud.

Finally, do you know where your data is? Public clouds may potentially move data to any country, potentially beyond the jurisdiction of the organization's home country. For example, U.S.-based laws such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm–Leach–Bliley Act (GLBA) have no effect outside of the United States. Private or government clouds should be considered in these cases.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597499613000078

Securing Cloud Computing Systems

Cem Gurkok, in Computer and Information Security Handbook (3rd Edition), 2017

Abstract

Cloud computing is a method of delivering computing resources. Cloud computing services, ranging from data storage and processing to software such as customer relationship management systems, are now available instantly and on demand. In times of financial and economic hardship, this new low-cost ownership model for computing has received lots of attention and is seeing increasing global investment. Generally speaking, cloud computing provides implementation agility, lower capital expenditure, location independence, resource pooling, broad network access, reliability, scalability, elasticity, and ease of maintenance. While in most cases cloud computing can improve security due to ease of management, the lack of knowledge and experience of the provider can jeopardize customer environments. This chapter discusses various cloud computing environments and methods to make them more secure for hosting companies and their customers.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128038437000636

CATRA

Nina Viktoria Juliadotter, Kim-Kwang Raymond Choo, in The Cloud Security Ecosystem, 2015

6 Conclusion and future work

In this chapter, we presented our taxonomy of attacks on cloud computing services that is comprehensive yet extensible, useful, and purposeful. The taxonomy classifies cloud attacks based on source, vectors, vulnerabilities, targets, impacts, and defenses and can easily be utilized for risk assessment by providers and consumers of cloud computing services. The classification scheme draws from an array of existing taxonomies of attacks on computers and networks as well as the unique security challenges present in the cloud.

As cloud services increase in popularity, so do organizations' dependence on them and the likelihood of attack. Future work includes validating and refining CATRA using real case studies.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128015957000033

Cloud, SDN, and NFV

Walter Goralski, in The Illustrated Network (Second Edition), 2017

Cloud Computing Models

In addition to the service models shown in Figure 29.1, cloud computing services are deployed in one of three different models, as shown in Figure 29.2.

Figure 29.2. Private, public, and hybrid clouds.

Cloud computing can be supported in a private cloud, a public cloud, or a hybrid cloud. There are also variations known as community cloud and distributed cloud that are not shown in the figure.

In a private cloud scenario, the entire cloud infrastructure is owned and operated for or by a single organization. The cloud is considered private if this single-use condition holds, even if the cloud is managed by a third party or hosted on premises or remotely. Creating a private cloud requires a great deal of expertise and expense to virtualize the whole business environment and properly capture the details of its resource use. Self-run data centers can be a huge expense. They also require large facilities, environmental controls, and frequent hardware and software upgrades. Private clouds tend not to take advantage of the main attraction of clouds in the first place: you only have to pay for what you use, because during lulls in current demand, those resources can be used by others (who pay for them). This drawback can be mitigated somewhat by a kind of chargeback system among departments or divisions or business units of a large organization, but it is often more attractive to deploy cloud computing in some form of public cloud.

In a public cloud, the services are offered over a public network to anyone who can pay the price. The fees are often per-use or on a sliding subscription rate, but the basic architecture of virtualized data centers and network connections among them is still the same. However, public clouds have much more intense security requirements, from ensured data separation to unauthorized access and beyond. The ease of public network access only intensifies these concerns. The largest public cloud service providers such as Amazon, Microsoft, and Google offer access over the global public Internet.

A community cloud is a cloud that shares its infrastructure among many organizations, but only organizations from a specific community of interest, such as hospitals. These communities have common concerns such as patient tracking or drug purchasing that are not necessarily shared with other types of organization. Again, the cloud can be managed locally or by a third party, and hosted on-site or somewhere central to the group. But usually the costs are spread among fewer users than a public cloud, though more than a private cloud (of course).

A hybrid cloud is a composite of two or more clouds—usually public and private, but also including community clouds in some cases—that remain distinct but can be treated in most applications as one. You can also connect a traditional, nonvirtualized service with a public cloud and call the result a hybrid. Hybrids have a way of minimizing ongoing costs while still allowing scaling for peak business periods.

Hybrid clouds are very popular. A customer can choose to store sensitive customer information on its private cloud and connect to a marketing application on a public cloud when needed. IT organizations can temporarily expand their in-house capacity by using a public cloud when necessary. Some have called this practice "cloud bursting." Cloud bursting allows a relatively compact IT shop to support more users than it looks like it should be able to, based on pure physical resources.
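Cloud bursting as described here can be sketched as a simple overflow policy: serve demand from private capacity first and send only the remainder to the public cloud. The capacity figure and demand values below are invented for illustration.

```python
# Illustrative cloud-bursting policy: in-house capacity absorbs normal
# load; anything above it "bursts" to a public cloud.
PRIVATE_CAPACITY = 100  # assumed units of in-house capacity

def place_load(demand):
    """Return (private_units, public_units) for a given demand."""
    private = min(demand, PRIVATE_CAPACITY)
    public = max(0, demand - PRIVATE_CAPACITY)
    return private, public

print(place_load(80))   # (80, 0)   -- normal load stays in-house
print(place_load(140))  # (100, 40) -- peak overflow bursts to public cloud
```

This is why the compact IT shop appears larger than its physical resources: the private side is sized for typical demand, and peaks are rented rather than owned.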

Finally, a deployment model called distributed cloud gathers resources at different locations, connected by a common network, and dedicates some of their capabilities to a common purpose. Here is a case where the distributed computing aspects of the cloud come to the forefront and outweigh the massive amounts of data that are stashed away in enormous data centers.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128110270000291