The answer will depend on the target market for the cloud service and how that market is reflected in the applications users will run. Servers provide four things: compute power from microprocessor chips, memory for application execution, I/O access for information storage and retrieval, and network access for connecting to other resources. Any given application will likely consume each of these resources to varying degrees, meaning applications can be classified by their resource needs. That classification can be combined with cloud business plans to yield a model for an optimum cloud computing server architecture.
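To make that classification concrete, here is a minimal sketch of one way to model it in Python. The four resource axes come from the paragraph above; the profile scores, threshold and class names are illustrative assumptions, not a standard taxonomy.

```python
from dataclasses import dataclass

@dataclass
class ResourceProfile:
    """Relative demand an application places on each server resource,
    scored 0.0 (negligible) to 1.0 (dominant). Scores are illustrative."""
    cpu: float         # compute power from the microprocessor chips
    memory: float      # memory for application execution
    storage_io: float  # I/O access for information storage and retrieval
    network: float     # network access for connecting to other resources

def classify(profile: ResourceProfile, threshold: float = 0.6) -> list[str]:
    """Return the resource classes this application falls into.
    An application can be bound by more than one resource."""
    bounds = {
        "CPU-bound": profile.cpu,
        "memory-bound": profile.memory,
        "storage-I/O-bound": profile.storage_io,
        "network-bound": profile.network,
    }
    return [name for name, score in bounds.items() if score >= threshold] or ["balanced"]

# A simple Web front end: light on storage, heavier on CPU and network.
web_app = ResourceProfile(cpu=0.7, memory=0.5, storage_io=0.1, network=0.8)
print(classify(web_app))  # ['CPU-bound', 'network-bound']
```

Combining profiles like these with a provider's business plan, as described above, is what points to an optimum server architecture.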
For a starting point in cloud computing server architectures, it's useful to consider the Facebook Open Compute project's framework. Facebook's social networking service is a fairly typical large-scale Web/cloud application, so the project's specifications are a useful guide for similar applications. We'll also discuss how these requirements would change for other cloud applications.
Cloud computing server needs may not align with Facebook Open Compute
The Open Compute baseline is a two-socket design that allows up to 12 cores per socket in the Version 2.x designs. Memory capacity depends on the dual inline memory modules (DIMMs) used, but up to 256 GB is practical. The design uses a taller tower for blades to allow for better cooling with large, lower-powered fans. Standard serial advanced technology attachment (SATA) interfaces are provided for storage, and Gigabit Ethernet is used for the network interface. Facebook and the Open Compute project claim a 24% cost of ownership advantage over traditional blade servers. Backup power is provided by 48-volt battery systems, familiar to those who have built to the telco Network Equipment Building System (NEBS) standard.
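As a quick back-of-the-envelope check on that baseline, a fully populated board works out as follows. The per-core memory figure is simple arithmetic from the numbers cited above, not an Open Compute specification.

```python
# Back-of-the-envelope sizing for a fully populated Open Compute 2.x board,
# using the figures cited above (2 sockets x 12 cores, 256 GB practical max).
sockets = 2
cores_per_socket = 12
memory_gb = 256

total_cores = sockets * cores_per_socket      # 24 cores
memory_per_core_gb = memory_gb / total_cores  # ~10.7 GB per core

print(f"{total_cores} cores, {memory_per_core_gb:.1f} GB of memory per core")
```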
The Open Compute reference has a high CPU density, which is why the taller tower and good fans are important. However, many cloud applications will not benefit from such a high CPU density, for several reasons:
- Some cloud providers may not want to concentrate too many users, applications or virtual machines onto a single cloud computing server for reliability reasons.
- The applications running on a cloud computing server may be constrained by the available memory or by disk access, and the full potential of the CPUs might not be realized.
- The applications might be constrained by network performance and similarly be unable to fully utilize the CPUs/cores that could be installed.
How storage I/O affects cloud computing server needs
The next consideration for cloud computing server architecture is storage. Web applications typically don't require much storage, nor do they typically make large numbers of storage I/O accesses per second. That matters because an application that is waiting on storage I/O holds its memory capacity while it waits.
Consider using larger memory configurations for cloud applications that are more likely to use storage I/O frequently to avoid having to page the application in and out of memory. Also, it may be difficult to justify the maximum number of CPUs/cores for applications that do frequent storage I/O, as CPU usage is normally minimal when an application is waiting for I/O to complete.
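To see why maximum core counts are hard to justify for I/O-heavy applications, consider a rough utilization model: while a request waits on synchronous storage I/O, the CPU it would otherwise use sits idle, but the application's memory stays occupied. The timings below are illustrative assumptions, not measurements.

```python
# Rough model: effective CPU utilization of a worker that alternates between
# computing on a request and blocking on a synchronous storage I/O.
# All times are illustrative assumptions.
compute_ms = 2.0   # CPU time spent per request
io_wait_ms = 8.0   # time blocked waiting on storage per request

utilization = compute_ms / (compute_ms + io_wait_ms)
print(f"Effective CPU utilization: {utilization:.0%}")  # 20%

# At 20% utilization per worker, adding cores buys little; meanwhile the
# application holds its memory for the full 10 ms per request, which is why
# larger memory configurations pay off before extra cores do.
```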
A specific storage issue cloud operators may have with the Open Compute design is its storage interface. Web applications are not heavy users of disk I/O, and SATA is best suited to dedicated local server access rather than to shared storage pool access.
Additionally, it is likely that a Fibre Channel interface would be preferable to SATA for applications that demand more data storage than typical Web servers, including many of the Platform as a Service (PaaS) offerings that will be tightly coupled with enterprise IT in hybrid clouds. Software as a Service (SaaS) providers must examine the storage usage of their applications to determine whether more sophisticated storage interfaces are justified.
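One way to frame that examination is as a simple decision rule. The inputs and outputs below are illustrative assumptions; a real evaluation would also weigh cost, existing SAN investment and operational skills.

```python
def suggest_storage_interface(shared_pool: bool, heavy_io: bool) -> str:
    """Illustrative heuristic for the SATA vs. Fibre Channel question above.
    shared_pool: does the application draw on a shared storage pool?
    heavy_io:    does it demand notably more storage I/O than a typical Web server?
    """
    if shared_pool or heavy_io:
        return "Fibre Channel (or another SAN-class interface)"
    return "SATA (dedicated local disk)"

# A PaaS offering tightly coupled with enterprise IT in a hybrid cloud:
print(suggest_storage_interface(shared_pool=True, heavy_io=True))
```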
Cloud computing server guidelines to consider
Here are some summary observations for cloud providers looking for quick guidance on cloud computing server architecture; a short sketch after the list pulls them together in code:
- You will need more sophisticated storage interfaces and more installed memory, but likely fewer CPUs/cores, for applications that do considerable storage I/O. This means that business intelligence (BI), report generation and other applications that routinely examine many data records based on a single user request will deviate from the Open Compute model, and may need still more memory to limit application paging overhead.
- Cloud providers will need more CPUs/cores and memory for applications that use little storage, particularly simple Web applications, because only memory and CPU cores will limit the number of users these applications can serve.
- Pricing models that prevail for Infrastructure as a Service (IaaS) offerings tend to discourage applications with high levels of storage, so most IaaS services can likely be hosted on Open Compute model servers with high efficiency.
- PaaS services are the most difficult to map to optimum server configurations, due to potentially significant variations in how the hosted applications will use memory, CPU and especially storage resources.
- For SaaS clouds, the specific nature of the application will determine which server resources are most used and which can be constrained without affecting performance.
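As a consolidated illustration of these guidelines, the sketch below maps a service model and an application's storage intensity to a rough server emphasis. The categories and outputs are illustrative assumptions drawn from the list above, not a sizing tool.

```python
def server_emphasis(service_model: str, storage_intensity: str) -> str:
    """Illustrative mapping from the guidelines above.
    service_model:     'IaaS', 'PaaS' or 'SaaS'
    storage_intensity: 'light' or 'heavy'
    """
    if service_model == "IaaS":
        # IaaS pricing discourages storage-heavy applications, so the Open
        # Compute baseline (many cores, SATA, Gigabit Ethernet) fits well.
        return "Open Compute baseline"
    if storage_intensity == "heavy":
        # BI, report generation and similar workloads: better storage
        # interfaces and more memory, fewer CPUs/cores.
        return "SAN-class storage + extra memory, fewer cores"
    if storage_intensity == "light":
        # Simple Web applications: memory and cores limit users served.
        return "extra CPUs/cores + memory"
    # SaaS with an unknown profile: measure before choosing hardware.
    return "profile the application first"

print(server_emphasis("PaaS", "heavy"))
# -> SAN-class storage + extra memory, fewer cores
```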
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues.