Inference Engine

Inference Engine is a neural network inference library. It lets you import trained neural network models, connect the network inputs and outputs to your game code, and run the models locally in your end-user app. Use cases include natural language processing, object recognition, automated game opponents, sensor data classification, and many more. Inference Engine automatically optimizes your network for real-time use to speed up inference, and it lets you tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching. Visit https://unity.com/ai for more resources.
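As a rough illustration of the import-connect-run workflow described above, here is a minimal C# sketch. It assumes the Sentis-style API surface (a `ModelAsset` reference, `ModelLoader.Load`, a `Worker` created with a `BackendType`, `Schedule`, and `PeekOutput`); exact namespaces and signatures may differ in your installed version, so treat this as a sketch rather than copy-paste code.

```csharp
using UnityEngine;
using Unity.InferenceEngine; // assumed namespace; older releases shipped as Unity.Sentis

public class ModelRunner : MonoBehaviour
{
    // Assign a trained model (e.g. an imported ONNX file) in the Inspector.
    public ModelAsset modelAsset;

    Worker worker;

    void Start()
    {
        // Import the trained model and create a worker on a chosen backend
        // (the "compute type" the description mentions: GPU compute, CPU, etc.).
        var model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUCompute);
    }

    void Update()
    {
        // Connect game data to the network input: here a hypothetical
        // 1x4 float tensor standing in for sensor readings.
        using var input = new Tensor<float>(
            new TensorShape(1, 4), new float[] { 0.1f, 0.5f, 0.2f, 0.9f });

        // Run inference locally and read back the network output.
        worker.Schedule(input);
        var output = worker.PeekOutput() as Tensor<float>;
        // ... use output values to drive game logic ...
    }

    void OnDestroy()
    {
        // Workers hold native resources and must be disposed explicitly.
        worker?.Dispose();
    }
}
```

For per-frame workloads, the frame-slicing tools mentioned above let you spread a single inference across multiple frames instead of calling it fully inside `Update`.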

Basic Information
Latest Version: 2.6.1 (30 Jul 2025)
Price: Free