
AI Toolkit for IBM Z and LinuxONE
Deploy AI with speed and confidence



WatsonWorks Products
IBM Storage Software - Artificial Intelligence
AI Toolkit for IBM Z and LinuxONE is a family of popular open-source AI frameworks, backed by IBM Elite Support and adapted for IBM Z and IBM LinuxONE hardware.
#AI-Toolkit-IBM-Z-LinuxONE
Our Price: Request a Quote


Please Note: All Prices are Inclusive of GST

Overview:

Accelerate open source AI on IBM Z and LinuxONE with optimized performance and trusted support

AI Toolkit for IBM Z® and LinuxONE is a family of supported open source AI frameworks optimized for the Telum processor. Adopt AI with certified containers, integrated accelerators and expert support. These frameworks use on-chip AI acceleration in z16®, LinuxONE 4, z17® and LinuxONE 5.

Secure, compliant containers by IBM

AI Toolkit

The AI Toolkit pairs IBM Elite Support (within IBM Selected Support) with IBM Secure Engineering. Through these programs, the open source AI serving frameworks and IBM-certified containers are vetted and scanned for security vulnerabilities and validated for compliance with industry regulations.

Features:

Seamlessly develop and deploy machine learning (ML) models with optimized TensorFlow and PyTorch frameworks tailored for IBM Z. Use integrated acceleration for improved neural network inference performance.


PyTorch compatible

Integrate PyTorch seamlessly with IBM Z Accelerated for PyTorch to develop and deploy neural network ML models.
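Because IBM Z Accelerated for PyTorch ships the standard PyTorch API, ordinary inference code carries over unchanged. A minimal sketch (the model architecture and shapes below are illustrative, not taken from IBM's documentation):

```python
import torch

# A small illustrative model; inside the accelerated container the same
# standard PyTorch code runs, with eligible operations dispatched to the
# Telum on-chip AI accelerator.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)
model.eval()

with torch.no_grad():              # inference only, no gradient tracking
    batch = torch.randn(4, 8)      # batch of 4 feature vectors
    scores = model(batch)

print(tuple(scores.shape))  # (4, 2)
```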


TensorFlow compatible

Integrate TensorFlow seamlessly with IBM Z Accelerated for TensorFlow to develop and deploy neural network ML models.
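As with PyTorch, the accelerated TensorFlow container exposes the standard TensorFlow API, so existing Keras code runs as-is. A minimal sketch (layer sizes are illustrative, not from IBM's documentation):

```python
import tensorflow as tf

# A toy Keras model; in the IBM Z Accelerated for TensorFlow container,
# the same code runs with supported ops offloaded to the Telum accelerator.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])

batch = tf.random.normal((4, 8))   # batch of 4 feature vectors
scores = model(batch)
print(tuple(scores.shape))  # (4, 2)
```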


ML models with TensorFlow Serving

Harness the benefits of TensorFlow Serving, a flexible, high-performance serving system, with IBM Z Accelerated for TensorFlow Serving to simplify the deployment of ML models in production.
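TensorFlow Serving exposes its predict endpoint over REST, taking a JSON body of the form `{"instances": [...]}`. A minimal client-side sketch using only the Python standard library; the host, port, model name and feature values are assumptions for illustration:

```python
import json
import urllib.request

# TensorFlow Serving's REST predict API:
#   POST http://<host>:8501/v1/models/<model_name>:predict
# The model name "fraud_model" and the feature vector are hypothetical.
url = "http://localhost:8501/v1/models/fraud_model:predict"
payload = {"instances": [[0.1, 0.2, 0.3, 0.4]]}  # one feature vector

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Sending requires a running TensorFlow Serving instance:
# response = urllib.request.urlopen(request)
# predictions = json.loads(response.read())["predictions"]
print(json.dumps(payload))
```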


NVIDIA Triton Inference Server

Optimized for IBM Telum processors and Linux on Z, IBM Z Accelerated for NVIDIA Triton Inference Server enables high-performance AI inference. The tool offers support for dynamic batching, multiple frameworks and custom backends across CPUs and GPUs.
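Triton Inference Server implements the KServe v2 inference protocol over HTTP, so a request body can be built with the standard library alone. The model name, tensor name and values here are illustrative:

```python
import json

# KServe v2 inference request, POSTed to:
#   http://<server>:8000/v2/models/<model_name>/infer
# Tensor name "input__0" and the data values are hypothetical.
infer_request = {
    "inputs": [
        {
            "name": "input__0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}
body = json.dumps(infer_request)
print(body)
```

The response mirrors the same structure, with an `"outputs"` list carrying each output tensor's name, shape, datatype and data.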


Run Snap ML

Use IBM Z Accelerated for Snap ML to build and deploy ML models with Snap ML, an IBM nonwarranted program that optimizes the training and scoring of popular ML models.
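Snap ML mirrors the scikit-learn estimator API (`fit`/`predict`), so a sketch with scikit-learn shows the pattern; on IBM Z you would swap the import for the `snapml` package (an assumption about your environment), keeping the rest of the code unchanged:

```python
# On IBM Z with Snap ML installed, the assumed equivalent import would be:
#   from snapml import DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy training data: learn the value of the first feature.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

pred = clf.predict([[1, 1]])
print(pred)  # [1]
```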


Compile ONNX ML models with IBM zDLC

Use the Telum and Telum II on-chip accelerated inference capabilities with ONNX models compiled by the IBM Z® Deep Learning Compiler (IBM zDLC) on IBM z/OS®, zCX and LinuxONE. IBM zDLC, an AI model compiler, provides capabilities such as auto-quantization for ML models, reducing latency and energy consumption.

Use Cases


Real-time natural language processing

Use on-chip AI inferencing to analyze large volumes of unstructured data on IBM Z and LinuxONE. Deliver faster, more accurate predictions for chatbots, content classification and language understanding.


Credit card fraud detection in milliseconds

With up to 450 billion inferences per day and 99.9th-percentile response times under 1 ms, detect and act on fraudulent activity instantly by using composite AI models and Telum acceleration.


Anti-money laundering at scale

Identify suspicious patterns in financial transactions by using Snap ML and scikit-learn. With data compression, encryption and on-platform AI, improve AML response without sacrificing performance or security.

Benefits:

Confident AI deployment at scale

Deploy open source AI with IBM Elite Support and IBM-vetted containers for compliance, security and confidence in nonwarranted open source software.

Accelerated real-time AI

IBM z17’s Telum II on-chip AI accelerator delivers inference performance comparable to a 13-core x86 server within the same system managing online transaction processing (OLTP) workloads.

Inferencing at scale

IBM z17 and LinuxONE 5 enable INT8-optimized AI, powering predictive scoring across multiple models while delivering up to 450 billion daily inferences with less than 1 ms response time.

Support for multiple AI models

Deploy ML, DL and large language models (LLMs) with up to 3.5x faster inference for predictions. Seamlessly integrate with PyTorch, TensorFlow, Snap ML, Open Neural Network Exchange (ONNX) and more.


Documentation:

Download the AI Toolkit for IBM Z and LinuxONE (.PDF)

