Machine learning

How do I get started with machine learning (ML) on Axis devices?

It is possible to work with machine learning (ML) on various Axis devices. For an example of how to implement machine learning on Axis devices, see the computer vision documentation.

The recommended model architecture section gives guidance on which machine learning models to use for different SoCs such as ARTPEC-7, ARTPEC-8 and CV25. Additionally, the Axis Model Zoo repository contains a collection of models compatible with Axis cameras, together with performance measures (accuracy and speed).

What exact layers are supported?

For recommendations about model layers, go to general suggestions, specifically the paragraph use simple layers. Tips for choosing layers that work for a specific accelerator can be found in the optimization tips section.

To verify that the layers do not yield any errors, follow the instructions in test your model.
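One quick local check, before testing on the device, is to load the converted model in the TensorFlow Lite interpreter and run a single inference. The sketch below is a minimal, hedged example; the file name converted_model.tflite is a placeholder for your own model, and a successful run only shows that the graph contains valid TensorFlow Lite operations, not that every layer is accelerated on the device.

```python
# Minimal sketch: load a converted model in the TensorFlow Lite interpreter and
# run one inference on dummy data. "converted_model.tflite" is a placeholder.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()  # fails here if the graph contains invalid ops

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# A zero-filled tensor with the expected shape and dtype is enough for a smoke test.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

print("Output shape:", interpreter.get_tensor(output_details["index"]).shape)
```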

Is PyTorch or other formats supported?

No, only TensorFlow Lite is supported. See supported frameworks for more information and guides on how to convert other formats to TensorFlow Lite.
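As a hedged illustration of one common conversion path, the sketch below exports a PyTorch model to ONNX, converts the ONNX file to a TensorFlow SavedModel, and finally converts that to TensorFlow Lite. The tiny model, the input shape, the file names and the use of the onnx-tf package are all assumptions made for this example; other ONNX-to-TensorFlow tools can replace that step.

```python
# One possible PyTorch -> ONNX -> TensorFlow -> TensorFlow Lite path.
# The model, shapes and file names are placeholders; adapt them to your network.
import torch
import onnx
from onnx_tf.backend import prepare  # assumes the onnx-tf package is installed
import tensorflow as tf

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()

# 1. Export the PyTorch model to ONNX.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")

# 2. Convert ONNX to a TensorFlow SavedModel.
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("saved_model")

# 3. Convert the SavedModel to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```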

What is the training process for the respective accelerators?

In the ACAP Native SDK examples repository, see the examples named tensorflow-to-larod.
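As a rough, hedged outline of the training stage such an example covers, the sketch below trains a small Keras classifier on placeholder data and exports it as a SavedModel, which is the starting point for the quantization and conversion steps described further down. It is not the actual example code; see the repository for the full pipeline.

```python
# Hedged sketch of the training stage: a tiny Keras classifier trained on
# random placeholder data and exported as a SavedModel for later conversion.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Replace the random data with your real training set.
x = np.random.rand(64, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=(64,))
model.fit(x, y, epochs=1)

# Export as a SavedModel directory, the input format used by the TFLite converter.
tf.saved_model.save(model, "saved_model")
```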

What is a Deep Learning Processing Unit (DLPU) and how does it relate to the Central Processing Unit (CPU)?

The DLPU is designed to accelerate the execution of a model, making inference significantly faster compared to the CPU. See Axis Deep Learning Processing Unit (DLPU) for more information.

Can a Deep Learning Processing Unit (DLPU) run multiple models?

Yes, Axis devices are capable of running multiple models concurrently but not in parallel. With multiple models the DLPU has to switch between inferences, and the load will not be divided equally if the models aren't similar in size and complexity. As a guideline, only run models that benefit the intended use case, e.g. stop AXIS Object Analytics if it does not contribute to it.

My model is developed for ARTPEC-7 (TPU). How do I run it on ARTPEC-8?

In general, a model that works on ARTPEC-7 can also work on ARTPEC-8. The difference lies not in the model's architecture but in an optimization stage called quantization. See quantization for each DLPU and Deep Learning Processing Unit (DLPU) model conversion for more information.

Relevant information on model conversion, model quantization and image formats can be found in the ACAP Native SDK examples repository, specifically for ARTPEC-7, ARTPEC-8 and CV25.
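As a hedged sketch of what post-training quantization involves, the example below converts a SavedModel to a fully integer-quantized TensorFlow Lite model using a representative dataset. The saved_model path and the random calibration data are placeholders, and the exact settings (per-tensor int8, uint8 input/output) depend on the target SoC, so follow the linked examples for the authoritative configuration.

```python
# Hedged sketch: full-integer post-training quantization of a SavedModel.
# "saved_model" and the calibration data are placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # In practice, yield a few hundred real, preprocessed images;
    # random data is used here only to keep the sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # device pipelines often feed uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```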

How do I test and debug my model on my device?

Go to test your model for a detailed guide on how to test and debug a model on a device.

Why do I lose accuracy after quantization?

That is unfortunately normal and difficult to avoid. See the quantization section for more information.
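To quantify the drop, one hedged approach is to run the float and the quantized model on the same labelled validation data and compare accuracy. The sketch below assumes two .tflite files and a small validation set exist; the file names and the random data are placeholders.

```python
# Hedged sketch: measure the accuracy drop by running a float and a quantized
# TensorFlow Lite model on the same labelled data. File names and the random
# validation data are placeholders for your own files and dataset.
import numpy as np
import tensorflow as tf

def accuracy(model_path, images, labels):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    correct = 0
    for image, label in zip(images, labels):
        if inp["dtype"] in (np.uint8, np.int8):
            # Quantized input: map float pixels to the integer range.
            scale, zero_point = inp["quantization"]
            x = (image / scale + zero_point).astype(inp["dtype"])
        else:
            x = image.astype(np.float32)
        interpreter.set_tensor(inp["index"], x[np.newaxis])
        interpreter.invoke()
        prediction = np.argmax(interpreter.get_tensor(out["index"]))
        correct += int(prediction == label)
    return correct / len(labels)

# Placeholder validation set; use real, preprocessed images and labels.
images = np.random.rand(10, 224, 224, 3).astype(np.float32)
labels = np.random.randint(0, 2, size=10)

print("float model accuracy:", accuracy("model_float.tflite", images, labels))
print("int8 model accuracy:", accuracy("model_int8.tflite", images, labels))
```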


