Every so often we write about current trends or advanced technologies in our sector. These may not be directly related to your needs, but they are nevertheless interesting topics.
In this article, we will be talking about ‘Hyperscale Computing’.
Put simply, Hyperscale Computing is an architecture that allows a system to scale up or down to meet changes in demand, easing the transition when the number of servers needs to increase.
This architecture usually combines cloud-based service providers (such as AWS and Azure) with physical servers, and is primarily used in cloud computing or by companies that process a lot of “Big Data”.
How does it work?
The servers are connected horizontally and linked to a “Load Balancer”, which assesses the current requests in the system and diverts each one to the appropriate server to maximise performance.
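To make the idea concrete, here is a minimal sketch of one common load-balancing strategy, “least connections”, where each request is diverted to the server currently handling the fewest requests. The `Server` and `LoadBalancer` classes and the server names are illustrative assumptions, not part of any real provider’s API:

```python
# A minimal least-connections load balancer sketch.
# Server names and classes here are hypothetical, for illustration only.

class Server:
    def __init__(self, name):
        self.name = name
        self.active_requests = 0  # how many requests this server is handling

    def handle(self, request):
        self.active_requests += 1
        # ... process the request, decrementing the counter on completion ...

class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers

    def route(self, request):
        # Divert the request to the least-loaded server in the pool.
        target = min(self.servers, key=lambda s: s.active_requests)
        target.handle(request)
        return target.name

pool = [Server("web-1"), Server("web-2"), Server("web-3")]
lb = LoadBalancer(pool)
print(lb.route("GET /"))  # web-1
print(lb.route("GET /"))  # web-2 (web-1 is now busier)
```

Real load balancers use more sophisticated strategies (weighted round-robin, health checks, session affinity), but the core idea of diverting traffic based on current load is the same.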
Due to the ‘stripped back’ approach of hyperscale computing, all of the hardware is linked through one point of contact, so with the combination of this and the supporting systems, the company’s capacity for the number of potential servers is massively increased.
This flexibility allows the system to bring servers online or take them offline in response to current demand, which reduces the amount of admin work needed and streamlines the whole process.
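The scaling decision itself can be sketched as a simple calculation: estimate how many servers the current demand requires and clamp that figure between a minimum and maximum. The thresholds and capacity numbers below are illustrative assumptions, not values from any real cloud provider:

```python
import math

# A hedged sketch of auto-scaling logic: capacity_per_server and the
# min/max bounds are made-up figures, chosen only for illustration.

def desired_server_count(requests_per_second,
                         capacity_per_server=100,
                         minimum=1, maximum=50):
    """Return how many servers should be online for the current load."""
    needed = math.ceil(requests_per_second / capacity_per_server)
    return max(minimum, min(needed, maximum))

print(desired_server_count(250))  # 3 servers for 250 requests/second
print(desired_server_count(20))   # scales down to the minimum of 1
```

In practice, providers add safeguards such as cooldown periods between scaling actions, so the fleet does not thrash up and down with every brief spike in traffic.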
But what are the pros and cons?
Pros:

- The cost of implementation is relatively low and predictable.
- Due to the increased flexibility, the amount to which a company can potentially scale is practically unlimited.

Cons:

- Changes made to server memory have a higher chance of introducing errors.
- There is less control over the company’s data.
This is only a very basic overview of the hyperscale computing concept and the technology is changing all the time.