Lower Hardware Infrastructure Costs: Scale Up, Not Out

Lower infrastructure costs in the datacenter by scaling up, not out: this is the prevailing wisdom in modern IT, but what exactly does it mean? A scale-up system, also called an in-memory system, stores information in server RAM instead of on hard drives as a scale-out distributed system would.

Storing a database in memory works brilliantly for CPU-intensive scientific, modeling, and design applications. It eliminates the complexity involved with storing a database across several drives or servers: a single system is easier to deploy and maintain, there are fewer bottlenecks, and database access is far less problematic.
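
To make the distinction concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a production database engine; the table and values are invented for illustration. The same schema can live on disk or entirely in RAM, and the in-memory copy vanishes when the connection closes.

```python
import sqlite3

# On-disk database: pages live on the drive and get pulled into a cache as needed.
disk_db = sqlite3.connect("inventory.db")

# In-memory database: the entire dataset lives in RAM for the life of the
# connection, so reads and writes never wait on the storage subsystem.
mem_db = sqlite3.connect(":memory:")

for conn in (disk_db, mem_db):
    conn.execute("CREATE TABLE IF NOT EXISTS parts (sku TEXT PRIMARY KEY, qty INTEGER)")
    conn.execute("INSERT OR REPLACE INTO parts VALUES ('SKU-12345', 4)")
    conn.commit()

disk_db.close()
mem_db.close()  # the in-memory data is gone once the connection closes
```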

Datacenter cost reduction is a concern for companies that analyze large sets of data as part of their operation. In-memory computing models deliver it; according to IDC, a scale-up system carries 43 percent lower labor costs for datacenter management than a scale-out system.

The main culprits for scale-out computing overhead include managing multiple OS instances across a mixed bag of node types, the kind of scenario that requires hands-on administration to navigate. Problems arise if, for example, a scale-out database network uses the InfiniBand communications standard in place of Ethernet for high-throughput performance. In scale-out systems, InfiniBand requires manual Layer 3 network configuration, partitioning, and workflow scheduling to achieve the necessary quality of service.

In contrast, in-memory systems offer hardware uniformity across nodes, which allows more efficient sharing of processing resources at the hardware level, along with greater flexibility and automation in systems management. The result is a faster, more cost-efficient path to data visualization.

Scale-out deployment pitfalls

The traditional multi-node server structure usually has two or more servers plus a shared storage array; together these make up the cluster.

Within the cluster, a shared disk acts as the quorum resource: the physical disk that provides the benchmark for data consistency among all users and file copies. In the event of a disk failure, power outage, or other anomaly that affects data, the quorum resource sets the ‘truth’ for the entire system.

As any system administrator is aware, any time the network expands (you add applications, servers, storage, and endpoints) it adds a wrinkle to managing the flow of data and CPU resources. The further you get from the quorum resource, the more computing power and bandwidth are needed to maintain integrity.

Expansion also strains the IT team, who are faced with manually configuring the network to achieve the necessary stability, security, and quality of service. Altering the resource dynamic link library (DLL), configuring failover management, routing traffic to the right node, applying system updates, monitoring the logs: if something goes wrong, there is a long list to troubleshoot.

Scale-out systems get complex in a hurry, and the more complex they get, the more inefficient they become at sharing resources: copying large files and datasets from node to node taxes both the CPU and network bandwidth.

Streamline with scale-up hardware

Powerful servers called HPCs (high-performance computers) act as several nodes would in a cluster, and eliminate the complexity of nodes communicating over an Ethernet or copper network. At the hardware level, this happens by virtue of the processor cores sharing memory; a scale-out system shares memory too, but it requires a slew of tools to manage all the different memory domains.

An in-memory system will always be more efficient at distributing computing resources and easier to manage. The tradeoff is that it takes a fair amount of computing power to pull off.

For entry-level business applications, look for a server with an Intel Xeon E5 CPU for the needed level of performance. To scale up, you add cores instead of nodes; a server motherboard with two or four CPU sockets facilitates this. One or two CPUs per node is the sweet spot for most applications and data sets.
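
As a rough illustration of the add-cores-not-nodes idea, the sketch below spreads a computation across every core in a single box using Python's standard library. The workload and chunk sizes are arbitrary placeholders, not a benchmark of any particular server; the point is that the work stays inside one machine instead of being copied between nodes.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Each worker sums its own slice of the range; nothing crosses a network link."""
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    cores = os.cpu_count() or 1  # scale up by using every core in the box
    n = 10_000_000
    step = n // cores
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step) for i in range(cores)]

    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(f"{cores} cores computed total = {total}")
```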

Look for server hardware with enough RAM capacity for the application database. Remember to consider the rate at which the database grows. For CAD and scientific modeling applications, typically 128 GB or more is needed for the database.
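
A back-of-the-envelope sizing model helps here. The sketch below is only a rough estimate built on assumed numbers: it grows the current database forward a few years, then pads the result for indexes, the OS, and application working memory; swap in your own figures.

```python
def required_ram_gb(db_size_gb, annual_growth_pct, planning_years, overhead_factor=1.25):
    """Project the database forward, then add working-space overhead.

    The 25 percent overhead factor is a placeholder assumption, not a rule.
    """
    projected = db_size_gb * (1 + annual_growth_pct / 100) ** planning_years
    return projected * overhead_factor

# Example: a 60 GB modeling database growing 20 percent a year, sized for 4 years.
print(f"{required_ram_gb(60, 20, 4):.0f} GB of RAM recommended")  # roughly 156 GB
```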

You will find dozens of options for pre-built HPCs capable of scale-up in-memory database computing on NeweggBusiness.


Make sure to note how much server memory comes installed for each SKU. You may find that certain SKUs offer a better bargain if the server memory is purchased separately. Be aware that you must install the highest-capacity server memory in each of the DIMM slots to reach the maximum RAM capacity listed in the product specifications. The RAM modules should be of uniform capacity for optimal performance.

For example, if you have 8 DIMM slots in the motherboard and want 128 GB of RAM, purchase eight 16 GB modules of server memory.
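
The arithmetic is simple enough to script. This sketch assumes uniform modules in every slot, per the guidance above; the slot count and RAM target are the example figures from this article.

```python
def module_size_gb(target_ram_gb, dimm_slots):
    """Divide the RAM target evenly across every DIMM slot (uniform modules only)."""
    size, remainder = divmod(target_ram_gb, dimm_slots)
    if remainder:
        raise ValueError("Target does not divide evenly across the slots; adjust it.")
    return size

# The example above: 128 GB spread across 8 DIMM slots.
slots = 8
print(f"Buy {slots} x {module_size_gb(128, slots)} GB modules")  # Buy 8 x 16 GB modules
```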

Your server or motherboard handbook, or the NeweggBusiness product page, will specify the correct number of pins and RAM type: newer server systems take 288-pin DDR4 RAM, though a few still run 240-pin DDR3.


In-memory computing might not fit rapid growth

If your database is growing at a rate of 25 to 50 percent or more year over year, in-memory computing might not be a great fit. There is a harder limit on the RAM capacity of server hardware than there is with storing data on spinning disks or flash arrays.

Forecasting data growth should happen before any hardware purchase, and servers are no exception.
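
One way to sanity-check that forecast is to project the database forward year by year against the RAM ceiling of the server under consideration. The growth rate and the 512 GB ceiling in this sketch are illustrative assumptions, not figures for any specific SKU.

```python
def years_until_ram_ceiling(db_size_gb, annual_growth_pct, ram_ceiling_gb):
    """Count full years before the projected database outgrows the server's RAM."""
    growth = 1 + annual_growth_pct / 100
    years = 0
    size = db_size_gb
    while size * growth <= ram_ceiling_gb:
        size *= growth
        years += 1
    return years

# A 100 GB database growing 50 percent a year against a hypothetical 512 GB ceiling:
print(years_until_ram_ceiling(100, 50, 512), "years of headroom")  # 4 years
```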

Unless you’re managing a huge database that is exploding in growth, say because you’re running the next hip social media platform, consider a scale-up model for lower hardware infrastructure costs in the coming years.


Author: Adam Lovinus

A tech writer and Raspberry Pi enthusiast from Orange County, California.

