Server Choices – Why Chips, Form Factors, Architectures And Workloads Matter

Server Choice Highlights Q210

  • Chip makers increased the number of transistors and the complexity of their designs once they hit heat limitations at around 3GHz
  • There is no need to worry about RISC v CISC or 32- v 64-bit issues today
  • ‘Sockets’ as opposed to ‘Processors’ are now being used to describe the number of physical chips in a machine
  • The number of in-silicon processors – known as ‘Cores’ – has increased even in x86 chips
  • There are essentially 4 client device and 4 server form factors in the market today
  • Server form factors relate increasingly to Scale Up, Scale Out and Cluster architectures
  • Ultimately servers are being bought to run specific workloads and are beginning to be designed that way

In recent announcements from AMD, Intel and the server vendors there are references to cores, threads, processors, workloads, form factors and architectures. I’ve been studying servers since 1983 and still get muddled on these terms, so I thought it might be useful to explain them – not least so I can have a handy look-up to stop being confused. I’ve developed some charts that I hope will prove useful if you too get muddled with these technical hardware terms. Understanding the differences in technology is essential for market research and forecasting, but more importantly for successful server deployment and usage.

Chips, Sockets, Processor Count And Cores

I’ve shown in Figure 1 the relationship between the chip, the motherboard and the associated terms which help classify the various offerings from microprocessor vendors such as Intel (x86 and Itanium), AMD (x86), IBM (System z and Power), Oracle (Sparc) and the like. Some of the relationships between the terms are as follows:

  • Chips are continuously being miniaturised, which is a question of designing the circuits and etching them onto silicon wafers; each generation of the ‘manufacturing process’ can be described in terms of its feature size, which has just reached 32nm (nanometres) in the most recent chips. A chip fabrication plant is typically designed to run only one process size, and building a new one is an investment of several billion dollars.
  • Multiple processors are now included on a single chip; these are referred to as ‘cores’; for instance Intel’s Nehalem EX has up to 8 and AMD’s Magny-Cours up to 12.
  • Each chip runs programmes in one or more ‘threads’, which are independent of each other; the number of threads that can run simultaneously tends to be high in RISC and mainframe processors.
  • The chip fits into a socket on the motherboard; although it is theoretically possible to stack processors and fit more than one into a socket, in practice all of today’s computers with 2 physical processors have 2 sockets – the terms are synonymous.
  • The use of the term ‘processor’ to describe the number of physical chips in a computer is declining in favour of ‘socket’, because each core could itself be defined as a processor (the sketch after this list shows how sockets, cores and threads can be counted in practice).
  • Typically different chips are needed for single-, dual- and quad-socket designs – not so AMD’s Magny-Cours, which will fit both dual- and quad-socket motherboards.
  • Each chip will fit only the motherboards designed for it.
  • The type of memory (DDR2, DDR3, etc.) is also defined by the specific chip, as is the amount that can be addressed; Intel’s Nehalem EX is said to have double the addressable memory of the original Nehalem chip.
  • Of course there are differences in the instruction sets embedded in the chip – Intel’s and AMD’s are x86, while Power and Sparc are different forms of Reduced Instruction Set Computing (RISC), for instance; there is no longer a significant debate about RISC v CISC chips.
  • They also traditionally differ in the width of memory that can be addressed, described in terms of ‘bit-edness’; today most processors are 64-bit, as opposed to 32-, 16- or 8-bit – a 32-bit processor can directly address at most 2^32 bytes (4GB) of memory, a ceiling that 64-bit addressing removes for all practical purposes.
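
To make these distinctions concrete, here is a minimal sketch in Python – assuming a Linux machine, where the /proc/cpuinfo file is available – that counts sockets, cores and hardware threads by walking that file. It is illustrative rather than definitive, since operating systems report processor topology in different ways:

    from collections import defaultdict

    def count_cpu_topology(path="/proc/cpuinfo"):
        """Count sockets, cores and hardware threads on a Linux system."""
        sockets = set()              # distinct 'physical id' values = sockets (chips)
        cores = defaultdict(set)     # 'core id' values seen within each socket
        threads = 0                  # each 'processor' entry is one hardware thread
        physical_id = None
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key, value = key.strip(), value.strip()
                if key == "processor":
                    threads += 1
                elif key == "physical id":
                    physical_id = value
                    sockets.add(value)
                elif key == "core id" and physical_id is not None:
                    cores[physical_id].add(value)
        return len(sockets), sum(len(c) for c in cores.values()), threads

    if __name__ == "__main__":
        n_sockets, n_cores, n_threads = count_cpu_topology()
        print(f"sockets={n_sockets} cores={n_cores} threads={n_threads}")

On a dual-socket Nehalem EX machine, for instance, this should report 2 sockets, 16 cores and 32 hardware threads.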

[Figure 1: chips, sockets, processors and cores]

Chip manufacturers reached an impasse at the beginning of the new century in terms of the speed of their most powerful chips. The heat generated made it impossible to drive them at frequencies much above 3GHz. Hence the move to multi-core Symmetric Multi-Processor (SMP) designs. Newer designs also address electricity usage, switching off unused cores for instance. It is not necessarily true that the SMP designs of new chips always benefit users, because operating systems and applications need to be written to take advantage of them; the majority of commercial applications today are ‘single threaded’ and built for single processors. The move towards virtual servers allows many of these to be run on a single physical server, but there are many more interesting challenges for software developers in creating future programmes.
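
To illustrate the point, here is a minimal Python sketch – a toy example, not a benchmark – in which the same CPU-bound work is run serially, as a single-threaded application would run it, and then spread across the cores with a process pool (processes rather than threads, because CPython’s interpreter lock stops threads executing bytecode in parallel):

    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n):
        """A deliberately CPU-bound task: the sum of squares up to n."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        # One chunk of work per core reported by the operating system.
        chunks = [2_000_000] * (os.cpu_count() or 4)

        start = time.perf_counter()
        serial = [busy_work(n) for n in chunks]     # one core, one chunk at a time
        t_serial = time.perf_counter() - start

        start = time.perf_counter()
        with ProcessPoolExecutor() as pool:         # one worker process per core
            parallel = list(pool.map(busy_work, chunks))
        t_parallel = time.perf_counter() - start

        assert serial == parallel
        print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")

On a quad-core machine the parallel version should finish in roughly a quarter of the serial time; a genuinely single-threaded application sees no benefit at all from the extra cores.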

Client Device And Server Form Factors

There are a number of different form factors for computers. I have tried to capture them all in Figure 2 for both Client Devices and Servers. There have been some interesting developments. In particular:

  • Desktop and Laptop (or Notebook) personal computers are the client devices most widely used in corporate computing; Tablet PCs can be categorised as versions of Laptops without a conventional keyboard; these devices have the ability to run full applications on their own and are sometimes referred to as ‘Fat Clients’.
  • Thin Clients (machines with restricted processing capabilities and a requirement to be attached to servers via the network) have found a niche in supporting task-oriented workers in Call Centre, Helpdesk and similar applications.
  • Increasingly Smart Phones (Apple’s iPhone, RIM’s Blackberry to give two examples) are being used to access corporate applications; these devices tend to use ARM RISC chips.

[Figure 2: client device and server form factors]

There are now essentially four server form factors, which I’ve tried to capture in Figure 2. These are:

  • Tower – deskside machines used in small office or departmental computing; also sometimes deployed in great numbers throughout branches; towers are the most numerous and least expensive servers.
  • Rack Mounted – servers designed to fit horizontally into a standard 19” rack; typically deployed in data centres, the standardised width and fitting of these machines allows users to handle multiple makes, models and types of equipment in single racks if necessary.
  • Blade – vendors such as IBM, HP, Oracle and Cisco have designed heavily integrated systems for blade servers; these miniaturised computers fit (either vertically or horizontally) into proprietary chassis and are typically used for high-density computing; they are almost always used in data centres.
  • Cabinet – mainframes (such as IBM’s System z) and minicomputers (IBM’s Power Systems, HP’s NonStop and Integrity servers, and Oracle’s Sparc servers for instance) typically come packaged in proprietary system skins; almost without exception cabinet form factors are found in data centres.

How Server Form Factors Fit With Scale Up, Scale Out And Cluster Architectures


Returning to the subject of the difficulty of programming SMP systems, I think it’s a good idea to think about server architectures. In his excellent book ‘In Search of Clusters’ (Prentice Hall, ISBN 0-13-899709-8), IBM’s Gregory Pfister put his great experience to work on the benefits and challenges of various computer architectures. Although I don’t feel competent to explain his ideas thoroughly, he argued that SMP architectures are ‘easy to programme, but hard to scale’, while loosely coupled systems are ‘easy to scale, hard to programme’. I can’t help thinking that the massive success of the Internet has done little more than multiply the number of simple applications. However the move to ‘Web 2.0’, Cloud Computing and High Performance Computing brings with it the opportunity to develop some more interesting applications. The sketch below contrasts the two programming styles.
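
Here is a minimal Python sketch of Pfister’s trade-off, with summing a list of numbers standing in for a real workload. In the SMP style the workers simply update a shared total; in the loosely coupled style each worker owns its slice of the data and the results have to be exchanged explicitly as messages:

    import threading
    import multiprocessing as mp

    NUMBERS = list(range(1000))

    # SMP style: easy to programme (shared memory), hard to scale.
    def smp_sum():
        total = [0]
        lock = threading.Lock()

        def worker(chunk):
            s = sum(chunk)
            with lock:            # contention on shared state is what limits scaling
                total[0] += s

        threads = [threading.Thread(target=worker, args=(NUMBERS[i::4],))
                   for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return total[0]

    # Loosely coupled style: easy to scale, hard to programme.
    def cluster_worker(chunk, queue):
        queue.put(sum(chunk))     # no shared memory – send a message instead

    def cluster_sum():
        queue = mp.Queue()
        procs = [mp.Process(target=cluster_worker, args=(NUMBERS[i::4], queue))
                 for i in range(4)]
        for p in procs:
            p.start()
        total = sum(queue.get() for _ in procs)
        for p in procs:
            p.join()
        return total

    if __name__ == "__main__":
        assert smp_sum() == cluster_sum() == sum(NUMBERS)
        print("both styles agree:", sum(NUMBERS))

The shared-memory version is shorter and more natural to write, but every worker contends for the same state; the message-passing version could in principle scale across separate machines, at the cost of explicit communication plumbing – Pfister’s trade-off in miniature.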

I’ve tried to capture how the different server form factors fit against a ‘Scale Up/Scale Out’ mapping in Figure 3. I’ve kept things relatively simple, but it is important to note that the form factor you select relates to the type of application and workload you intend to run.

[Figure 3: server form factors mapped against Scale Up, Scale Out and Cluster architectures]

How Server Workloads Fit With Scale Up, Scale Out And Cluster Architectures

The workloads you run on your servers will also determine the architecture, form factor and size of the servers you use, because the applications associated with them have been developed on either SMP or loosely coupled systems (see Figure 4).

[Figure 4: server workloads mapped against Scale Up, Scale Out and Cluster architectures]

As time goes on I expect suppliers to develop single-appliance servers that run specific workloads of a pre-determined scale most efficiently, breaking with the tradition of maintaining servers as general-purpose machines. In addition I expect the most significant programming challenges to be in the area of clusters – the middle segment between Scale Up and Scale Out.

Some Conclusions – Why Server Chips, Form Factors, Architectures And Workloads Matter

These are quite complex ideas and I’ve enjoyed spending a few hours trying to make some observations which I know will be helpful as ITCandor continues to build out its server research. More importantly, these distinctions matter for CIOs who have to pick different servers and suppliers for different applications. A good understanding of the differences should help you decide when to make replacement decisions, why multiple cores won’t necessarily improve your processing and why IBM mainframes remain the servers of choice for OLTP in banks. I also hope that software development will improve to address the multi-thread-edness and multiple cores the chip makers had to put into their products when the chips got too hot.

Does this help in understanding the differences between server types? Please let me know by commenting below.
