ZeroStack delivers AI-as-a-Service

ZeroStack has announced that administrators of its Self-Driving Cloud platform can now offer single-click deployment of GPU resources and deep learning frameworks such as TensorFlow, PyTorch, and MXNet, with the platform handling the OS and CUDA library dependencies so users can focus on AI development.

Users can also enable GPU acceleration with dedicated access to multiple GPUs for lower inference latency and better responsiveness. GPUs within a host can be shared across users in a multi-tenant manner.
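ZeroStack has not published how its platform partitions GPUs among tenants, but a common mechanism for giving each user's workload dedicated access to a subset of a host's GPUs is per-process device masking. A minimal sketch in Python, assuming an NVIDIA host with PyTorch installed (the device indices here are illustrative):

    import os

    # Restrict this tenant's process to GPUs 0 and 1; other tenants'
    # processes on the same host can be pointed at the remaining devices.
    # This must be set before the framework initializes CUDA.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

    import torch

    # The framework now sees only the masked-in devices, renumbered from 0.
    print(torch.cuda.device_count())      # 2
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla V100-PCIE-16GB"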

Artificial intelligence and machine learning products are becoming commonplace and are shaping our computing experiences like never before. AI applications are now more viable than ever, thanks to machine learning and deep learning frameworks such as TensorFlow and Caffe, together with GPUs built to perform parallel operations on large amounts of data. One challenge remains, however: deploying, configuring, and running these tools while managing their interdependencies, versioning, and compatibility with servers and GPUs.
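The compatibility problem is concrete: a framework build is compiled against specific CUDA and cuDNN versions, and a mismatch with the host's driver or libraries typically surfaces only at runtime. A minimal sanity check in Python, assuming a recent TensorFlow 2.x release (where tf.sysconfig.get_build_info() is available):

    import tensorflow as tf

    # Versions the installed TensorFlow wheel was compiled against.
    build = tf.sysconfig.get_build_info()
    print("CUDA:", build.get("cuda_version"))
    print("cuDNN:", build.get("cudnn_version"))

    # Whether the host actually exposes a usable GPU; an empty list
    # usually indicates a driver/CUDA mismatch or missing libraries.
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

Automating this kind of alignment between OS, driver, CUDA libraries, and framework builds is what ZeroStack's single-click deployment aims to take off users' hands.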

ZeroStack’s AI-as-a-service capability detects GPUs and makes them available for users to run their AI applications. To maximize utilization of this resource, cloud administrators can configure and scale GPU resources and control end users’ access to them.

“ZeroStack is offering the next level of cloud by delivering a collection of point-and-click service templates,” said Michael Lin, director of product management at ZeroStack. “Our new AI-as-a-service template automates provisioning of key AI tool sets and GPU resources for DevOps organizations.”
