The second memory interface is optional, and allows for a connection to higher-bandwidth memory that may be dedicated to NVDLA or to a computer vision subsystem in general.
This option for a heterogeneous memory interface enables additional flexibility for scaling between different types of host systems.
These operations share a few characteristics that make them particularly well suited for special-purpose hardware implementation: their memory access patterns are extremely predictable, and they are readily parallelized.
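To make the two properties concrete, here is a minimal, purely illustrative sketch (not NVDLA code) of a 1-D convolution: every memory access is statically determined by the loop indices, and every output element is independent of the others, so they can all be computed in parallel.

```python
# Hypothetical sketch: a 1-D convolution whose memory accesses are fully
# predictable (input address = output index + tap index, known at compile
# time) and whose output elements are mutually independent.

def conv1d(inputs, weights):
    taps = len(weights)
    # Each output reads a fixed, statically known window of the input --
    # there is no data-dependent addressing anywhere in the loop nest.
    return [
        sum(inputs[i + k] * weights[k] for k in range(taps))
        for i in range(len(inputs) - taps + 1)  # each i is independent
    ]

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # [-2, -2, -2]
```

Because no output depends on another, a hardware implementation can unroll the outer loop across as many multiply-accumulate units as the design budget allows.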
The Small model represents an NVDLA implementation for a more cost-sensitive, purpose-built device.
The Large System model is characterized by the addition of a dedicated control coprocessor and high-bandwidth SRAM to support the NVDLA subsystem.
A system that has no need for pooling, for instance, can remove the planar averaging engine entirely; or, a system that needs additional convolutional performance can scale up the performance of the convolution unit without modifying other units in the accelerator.
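This per-unit scalability can be pictured as a build-time configuration. The sketch below is hypothetical (the parameter names are illustrative, not actual NVDLA hardware spec-file symbols); it shows how one unit can be resized or removed without touching the others.

```python
# Hypothetical build-time configuration sketch. Each functional unit is
# sized or omitted independently; parameter names are illustrative only.

SMALL_CONFIG = {
    "conv_mac_units": 64,       # modest convolution array
    "include_pooling": False,   # planar averaging engine removed entirely
    "secondary_mem_if": False,  # single system memory interface
}

LARGE_CONFIG = {
    "conv_mac_units": 2048,     # convolution scaled up; other units unchanged
    "include_pooling": True,
    "secondary_mem_if": True,   # dedicated high-bandwidth SRAM port
}

def validate(cfg):
    # An omitted unit simply costs no area; there is no cross-unit coupling
    # that forces other units to change when one is resized or dropped.
    assert cfg["conv_mac_units"] > 0
    return cfg

validate(SMALL_CONFIG)
validate(LARGE_CONFIG)
```

The design choice this illustrates is that each unit's interface stays fixed while its internal capacity varies, so configurations differ only in these knobs.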
Scheduling operations for each unit are delegated to a co-processor or CPU; scheduling occurs on extremely fine-grained boundaries, with each unit operating independently.
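The scheduling model above can be sketched as a set of per-unit work queues serviced independently. This is a simplified, hypothetical host-side model (the unit names and functions are illustrative, not a real driver API): the host submits one operation at a time per unit, and each unit's completion only triggers the next operation for that same unit.

```python
# Hypothetical scheduling sketch: a host CPU or co-processor keeps an
# independent queue per functional unit and issues work one operation
# at a time (fine-grained boundaries).
from collections import deque

queues = {"conv": deque(), "pool": deque()}

def submit(unit, op):
    queues[unit].append(op)  # one fine-grained operation per submission

def on_unit_idle(unit):
    # Completion handler: issue the next op for this unit only; the
    # other units' queues are untouched, so units run independently.
    return queues[unit].popleft() if queues[unit] else None

submit("conv", "layer0-conv")
submit("pool", "layer0-pool")
submit("conv", "layer1-conv")
print(on_unit_idle("conv"))  # layer0-conv
print(on_unit_idle("pool"))  # layer0-pool
```

Note that a convolution and a pooling operation for different layers can be in flight at the same time, since neither queue gates the other.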