Why we attend Computex Taipei and how it benefits you!
Each year we’re lucky enough to attend Computex and Supermicro’s annual partner training. Through seminars and conversations with technologists from around the world, we carefully select insights that address common business objectives and deepen understanding of technology-driven innovation and growth.
In this event summary we dive into Supermicro’s latest AI/ML platform based on NVIDIA HGX-2, Supermicro’s 2018 technology roadmap, NVMe solutions, Intel Optane SSDs and the AMD solutions announced at the event.
A MUST READ for data scientists, HPC divisions and all AI enthusiasts.
At Computex, Supermicro announced it was among the first to adopt the NVIDIA® HGX-2 Cloud server platform, one of the most powerful systems in the world for Artificial Intelligence and High-Performance Computing.
“To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance,” said Charles Liang, president and CEO of Supermicro.
From natural speech by computers to autonomous vehicles, rapid progress in AI has transformed entire industries. To enable these capabilities, AI models are exploding in size. HPC applications are similarly growing in complexity as they unlock new scientific insights.
With fine-tuned optimisations, the HGX-2 server will deliver the highest compute performance and memory for rapid model training, while saving significant cost, space and energy in the data centre.
High-density NVMe platforms are STORMING the data centre
NVMe accelerates high-performance applications in the data centre. We won’t bore you with tech specs (you don’t need another product presentation to put you to sleep, you just need the hard facts). Below are some key applications which can benefit:
- High data throughput ingest – read the whitepaper
- HPC / Data analytics acceleration
- Media / video streaming
- Big data Top of Rack storage
- NVMe-oF capable of supporting 12 host nodes
- Hyper-converged / scale-out architectures (e.g. VMware vSAN), achieving 50% greater performance and a 7x CAPEX improvement – vSAN whitepaper
The main point to make here is that you can now take advantage of this hardware to accelerate application results and rapidly deliver the business objectives and mission-critical applications that drive revenue for your business. Read more: whitepaper on NVMe cost and performance per IOPS
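If you’d like to sanity-check NVMe performance claims on your own hardware before committing to a design, a short job file for the open-source fio benchmark is a reasonable starting point. This is a minimal sketch: the device path `/dev/nvme0n1` and all tuning values are assumptions to adapt to your environment, and the job is read-only so it won’t destroy data on the drive.

```ini
; nvme-randread.fio — minimal 4 KiB random-read IOPS test
; ASSUMPTION: /dev/nvme0n1 is your NVMe device; adjust before running
[global]
ioengine=libaio
; bypass the page cache so we measure the drive, not RAM
direct=1
; random reads only, so the test is non-destructive
rw=randread
bs=4k
iodepth=32
numjobs=4
runtime=60
time_based=1
group_reporting=1

[nvme-randread]
filename=/dev/nvme0n1
```

Run it with `fio nvme-randread.fio` and compare the reported IOPS and bandwidth against the vendor datasheet for your drive.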
Cost-effective clouds – AMD A+ new-generation solutions
Let’s start with: why AMD EPYC?
Today’s applications are more critical than ever, leading businesses to demand higher levels of reliability, availability and serviceability (RAS) to protect not only their data but also their bottom line. Applications no longer run in a single domain, instead favouring interconnected environments and complex multi-tier applications that share data from multiple sources. This makes higher RAS an integral component of any platform: interruptions can impact multiple applications as failure domains grow with application complexity, and a single server failure can have multiple downstream impacts in an interconnected environment.
AMD has engineered EPYC to increase reliability, availability and serviceability through an exhaustive set of new features and functions. These build on the RAS capabilities of previous generations, which have proven themselves in some of the largest server environments: massive cloud deployments, split-second real-time financial applications and critical government research clusters.
Here’s a quick snapshot of the top points discussed at Computex across multiple server vendor stands and with other Supermicro partners from around the world.
- AMD accelerates memory-intensive applications performance
- You can match core count to your application needs without compromising processor features, ensuring your software licensing costs don’t blow out.
- Increase the capacity of cloud computing environments and virtual desktop infrastructure deployments.
- Big data analytics and in-memory databases are accelerated, with the capacity to keep large datasets in memory for analysis.
- EPYC offers parallelism that speeds up high-performance computing environments.
- Machine learning & predictive analysis benefit from the ability to process more data in memory and to accelerate computation with more direct GPU connectivity than any other processor.
- Virtualised and cloud computing environments benefit from high security with minimal overhead.
In conversations with Supermicro partners from around the world at both events, we were consistently told about the large uptake of hyper-converged appliances.
Why? Well I’m glad you asked.
1. Maximising compute in a 2U system enables data centre footprints to be reduced by 50% compared to standard 2U servers of equivalent performance (less rack space required).
2. High density with up to 4 hot-swappable nodes in a 2U form factor, reducing TCO by up to 50% compared to standard 2U systems.
3. Delivery of better performance and responsiveness for database and cloud infrastructure workloads (maximum hardware flexibility for your applications workload).
Ultimately, if you’re looking to enhance the performance of your memory-intensive applications then look no further. If you need your machine learning and analytics to perform more efficiently, take a serious look at AMD. As a business you will also appreciate the highly cost-effective solutions on offer, which reduce your costs and increase effectiveness within your organisation.
ELIMINATE data centre storage bottlenecks, accelerate your applications with Intel® Optane™ Solid State Drives (SSDs)
Intel® Optane™ Solid State Drives (SSDs) help eliminate data centre storage bottlenecks, accelerate applications with fast caching and storage (hot storage), reduce transaction costs for latency-sensitive workloads and increase scale per server. This technology allows data centres to deploy bigger, more affordable datasets to gain new insights from large memory pools and improve overall data centre TCO.
Health sciences – The Broad Institute (a leader in human genomics) significantly decreased its time to results, leading directly to a deeper understanding of human diseases.
MRI research – The Università di Pisa needed large volumes of memory for its brain-disease MRI research application, but its HPC infrastructure lacked the required RAM. Using Intel Optane drives, it decreased the application’s runtime from 40 minutes to 2 minutes, positively affecting the lives of patients with brain disease. (Watch the webinar for more.)
Ecommerce – Increased hot-data performance maintained a high level of customer experience during high spend periods (Christmas, one-day sales events, end of financial year sales).
SaaS applications – With the combined use of Xeon Scalable processors and Optane drives, application performance doubles versus previous-generation hardware. This allows SaaS customers to run more complex analyses on larger datasets, driving consistent revenue across their platforms.
Key points to note after speaking with Intel advisors at Computex:
1. Extreme IOPS (storage intensive applications)
2. Quality of service that outperforms other drives on the market (reliable service-level agreements with your customers)
3. Highly responsive under load, with no customer-experience degradation
4. High endurance for intensive workloads (HPC)
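The QoS and responsiveness points above are measurable rather than just marketing: fio can report latency percentiles, which are the numbers behind the service-level agreements mentioned in point 2. A hedged sketch follows; the device path `/dev/nvme1n1` and queue depth are assumptions to adapt, and `lat_percentiles` asks fio to report total-latency percentiles (p99, p99.9 and so on) rather than only averages.

```ini
; optane-latency.fio — tail-latency check under sustained random-read load
; ASSUMPTION: /dev/nvme1n1 is the Optane SSD; adjust before running
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=16
runtime=60
time_based=1
; report latency percentiles (p99 etc.), the figures behind QoS/SLA claims
lat_percentiles=1

[optane-latency]
filename=/dev/nvme1n1
```

Average latency alone hides tail behaviour; the 99th and 99.9th percentiles are what your customers actually feel during peak load.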
With all of the above in mind, what questions do you have for us?
Not what you're looking for? Check out our archives for more content