AI stands before the HPC industry as a beacon of great expectations, yet market research repeatedly shows that AI adoption remains stuck in the talking phase, on the near side of a difficult chasm. In response, many vendors have integrated their server and storage platforms with AI hardware, usually Nvidia GPUs, aiming to leapfrog the initial steps of an AI implementation.
Penguin Computing this week announced its own market strategy for helping organizations jumpstart their AI journeys, launching a new AI practice that the company says will operate as a full-service consultancy, providing guidance on system design, building custom technology solutions, and delivering professional services and support.
In announcing the practice, a senior strategist at Penguin Computing also looked ahead to a future in which GPUs may be augmented with special-purpose inference processors as the chips of choice for AI workloads. He also discussed, in light of the widely shared observation that “AI is the new HPC workload,” the significantly different demands AI and traditional HPC workloads place on the processor.
About its new AI consultancy, Penguin Computing CTO Philip Pokorny said, “We help customers build networks, rack layouts, and assist with figuring out the best deployment strategy, so they can focus on doing AI and not have to worry about the details.”
Since Penguin Computing was acquired by SMART Global Holdings in June, the newly formed subsidiary has been on a mission to double down on AI initiatives, said Pokorny, who leads the new AI consultancy. He added that the acquisition by SMART provides financial resources to help grow the Penguin Computing On-Demand service, a bare metal HPC cloud that is planned to be part of the menu of offerings provided by the AI practice.
“It’s that customizability of solutions and access to a broad base of technologies that we think will make our AI practice really valuable to customers who probably don’t have a lot of experience – or if they do have the experience, decide their time is better spent focusing on AI,” said Pokorny.
Pokorny said Penguin Computing’s more popular hardware configurations for AI workloads include 4-GPU and 8-GPU servers, and the company is a certified reseller of the Nvidia DGX-1 system. He noted that Penguin Computing has deployed a large number of DGX-1s and has seen its Nvidia-related revenue grow significantly over the past few years.