Embedded

Embedded deployment allows users to integrate trained AI models into embedded systems. Because embedded deployments are complex, this feature requires an upgrade to the Embedded R&D Tier subscription. Successful deployment often requires collaboration with the Reality AI team, so contact Reality AI for assistance.

Recommendations for Embedded Deployment

Before proceeding with embedded deployment, consider the following best practices to optimize performance and ensure compatibility:

| Recommendation | Explanation |
| --- | --- |
| Reduce sample rate and window size | Lowering these values minimizes memory and processing requirements while maintaining accuracy (see the sketch after this table). |
| Choose results with lower complexity | In AI Explore, prioritize models with low complexity. If the complexity icon is red, the model may require PC or server-grade hardware. |
| Use training data from the same hardware | Ensure the data used for training matches the hardware and software used in deployment. If necessary, recollect data and retrain the model. |
| Test the classifier before deployment | Use Test > Try new data and the cloud API to validate the classifier in a controlled environment before deploying it to an embedded system. This helps isolate deployment-related issues from model-related issues. |
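
To make the first recommendation concrete, the raw input buffer for one decision window scales with sample rate, window length, channel count, and sample size. The sketch below works through one illustrative case; all of the numbers in it are assumptions, not values from this page.

```c
#include <stdio.h>

/* Illustrative values only -- substitute the sample rate, window length,
 * channel count, and sample size from your own configuration.           */
#define SAMPLE_RATE_HZ   100   /* samples per second            */
#define WINDOW_SECONDS   2     /* length of one decision window */
#define NUM_CHANNELS     1     /* e.g., one sensor channel      */
#define BYTES_PER_SAMPLE 4     /* float32                       */

int main(void)
{
    /* Raw buffer needed to hold one window of input data. */
    unsigned long bytes = (unsigned long)SAMPLE_RATE_HZ * WINDOW_SECONDS
                        * NUM_CHANNELS * BYTES_PER_SAMPLE;

    printf("Input buffer per window: %lu bytes\n", bytes); /* 800 bytes here */
    return 0;
}
```

Halving either the sample rate or the window length halves this buffer, which is why reducing both is the first lever to pull on memory-constrained targets.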

Embedded Section

The Embedded section consists of two tabs:

  1. Create Package – Used to generate a deployable package from a trained model.
  2. Combine Packages – Allows multiple packages to be merged.

Creating a Package

In the Create Package tab, the Trained Tool section displays the available trained models along with their details:

(Screenshot: Embedded page)

| Field | Description |
| --- | --- |
| Trained Tool Description | Name of the trained model. |
| Version | Version number of the model. |
| Created | Date and time the model was created. |
| Sample Rate | The frequency at which data is sampled (e.g., 100 Hz). |
| Target Range | The number of classification categories. |
| Status | Indicates the current status of the model. |

Deploying a New Package

Click New Package to open the Deploy New Package page, where you can configure deployment settings.

(Screenshot: Deploy New Package page)

Deployment Configuration

Deployed Name

Assign a custom name for the new package.

Input Configuration

| Field | Description |
| --- | --- |
| Array Name | Name of the input array. |
| Data Type | Select the data type for input processing. Available options: uint8 (unsigned char), int8 (signed char), uint16 (unsigned short), int16 (short), uint32 (unsigned integer), int32 (integer), float32 (float), float64 (double). |
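
As a minimal sketch of what the Array Name and Data Type settings correspond to on the device side, the code below stages one window of samples into a float32 buffer. The array name (inputData), window length, and fill logic are assumptions for illustration; the actual declaration comes from the headers in the generated package.

```c
#include <stddef.h>

#define WINDOW_SAMPLES 200                /* assumed window length          */

static float inputData[WINDOW_SAMPLES];   /* assumed float32 input array    */

/* Copy one window of freshly acquired sensor samples into the input
 * array before invoking the deployed classifier.                      */
void stage_window(const float *samples, size_t count)
{
    for (size_t i = 0; i < count && i < WINDOW_SAMPLES; ++i) {
        inputData[i] = samples[i];
    }
}
```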

Output Configuration

| Output Type | Possible Results |
| --- | --- |
| Signed Char | -1: _error, 0: _no_result, 1: _fan_balance, 2: _fan_blocked, 3: _fan_on, 4: _fan_trans, 5: _idle |
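
As a sketch, the result codes above can be captured in a C enum so application code can switch on the classifier's return value. Only the numeric values and label names come from the table; the enum and identifier spellings are illustrative.

```c
/* Classifier result codes from the table above; identifier names are
 * illustrative, only the numeric values are defined by the package.   */
typedef enum {
    RESULT_ERROR       = -1,  /* _error       */
    RESULT_NO_RESULT   =  0,  /* _no_result   */
    RESULT_FAN_BALANCE =  1,  /* _fan_balance */
    RESULT_FAN_BLOCKED =  2,  /* _fan_blocked */
    RESULT_FAN_ON      =  3,  /* _fan_on      */
    RESULT_FAN_TRANS   =  4,  /* _fan_trans   */
    RESULT_IDLE        =  5   /* _idle        */
} fan_result_t;
```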

Build Options

| Option | Description |
| --- | --- |
| Target Device | Select the hardware platform for deployment. |
| Toolchain | Choose the compiler toolchain (e.g., GNU GCC 10.3.1). |
| Use CMSIS-NN | Enable or disable CMSIS-NN for neural network acceleration. |
| Math Type | Choose between Fixed Point and Floating Point arithmetic. |
| Optimization Method | Select optimization for Speed (performance) or Size (memory efficiency). |
| Data Range Scaling | Uses 32-bit float representation by default. |

C Function Prototype

The system provides a C function prototype for integrating the deployed model into external applications.

Example:

signed char SampleFandata (float32 *inputData);
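
A minimal calling sketch is shown below, assuming the prototype above and a float32 typedef; in practice the type definitions, buffer length, and data acquisition come from the headers in the downloaded package and your own application.

```c
#include <stdio.h>

typedef float float32;           /* assumption: the package headers define the actual type */

#define WINDOW_SAMPLES 200       /* assumed window length */

/* Prototype generated for the deployed package (shown above). */
signed char SampleFandata(float32 *inputData);

int main(void)
{
    float32 window[WINDOW_SAMPLES] = {0};  /* fill with real sensor samples in practice */

    signed char result = SampleFandata(window);
    if (result < 0) {
        printf("classification error (%d)\n", result);  /* -1: _error */
    } else {
        printf("class index: %d\n", result);            /* 0..5, see the table above */
    }
    return 0;
}
```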

Viewing and Downloading Packages

Once a package is created, it will appear in the Packages section with the following details:

| Field | Description |
| --- | --- |
| Deployed Name | Name of the deployed package. |
| Package Date | Date and time the package was created. |
| Input Data | Format and structure of the input data. |
| Parameters | Specific settings used in the package. |
| Target | The selected target hardware. |
| Math Type | Fixed Point or Floating Point. |
| Toolchain | The compiler toolchain used. |
| Download | Option to download the package as a ZIP file. |

Viewing Hardware Resource Usage

Click the microcontroller icon next to the target to view:

| Metric | Description |
| --- | --- |
| RAM Usage | Pre-allocated memory and stack usage. |
| Storage (FLASH/ROM) | Breakdown of parameter and code sizes (in bytes). |
| Inference Output Validation | Displays classification accuracy as a percentage. |

By following these steps, you can successfully configure and deploy AI models into embedded environments, ensuring optimal performance and accuracy. For additional assistance, contact Reality AI Support.