NVDLA

http://dbpedia.org/resource/NVDLA

NVIDIA Deep Learning Accelerator, or NVDLA, is an open standard (under the "NVIDIA Open NVDLA License") for a neural-network acceleration chip aimed at deep learning, created by NVIDIA and released publicly in October 2017. The processor is written in Verilog.
The NVIDIA Deep Learning Accelerator (NVDLA) is an open-source hardware neural-network AI accelerator created by Nvidia. The accelerator is written in Verilog and is configurable and scalable to meet many different architectural needs. NVDLA is only an accelerator: any workload must be scheduled and arbitrated by an outside entity such as a CPU. NVDLA is available for product development as part of Nvidia's Jetson Xavier NX, a small circuit board about the size of a credit card that includes a 6-core ARMv8.2 64-bit CPU, an integrated 384-core Volta GPU with 48 Tensor Cores, and dual NVDLA "engines", as described in Nvidia's press release. Nvidia claims the product delivers 14 TOPS (tera operations per second) of compute under 10 W, though most of this likely comes from the GPU cores. Applications broadly include edge-computing inference engines, including object recognition for autonomous driving. Nvidia's involvement with open hardware also includes the use of RISC-V processors in its GPU product line-up.
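The division of labor described above — a host CPU scheduling and arbitrating work for an accelerator that only executes what it is given — can be sketched as a toy model. All class and method names below are illustrative assumptions, not the real NVDLA software stack, which is driven through a kernel-mode driver and memory-mapped registers:

```python
# Toy model of the host-CPU / accelerator split described above.
# Everything here is illustrative; it is not the NVDLA driver API.
from collections import deque

class ToyAccelerator:
    """Executes queued layer jobs in order; does no scheduling of its own."""
    def __init__(self):
        self.queue = deque()
        self.completed = []

    def submit(self, job):
        self.queue.append(job)

    def run(self):
        while self.queue:
            self.completed.append(self.queue.popleft())

class HostCPU:
    """The 'outside entity' that schedules work onto the accelerator."""
    def __init__(self, accel):
        self.accel = accel

    def schedule_network(self, layers):
        for layer in layers:
            self.accel.submit(layer)
        self.accel.run()

accel = ToyAccelerator()
cpu = HostCPU(accel)
cpu.schedule_network(["conv1", "relu1", "pool1", "fc1"])
print(accel.completed)  # → ['conv1', 'relu1', 'pool1', 'fc1']

# Rough efficiency implied by the figures quoted above:
# 14 TOPS within a 10 W envelope ≈ 1.4 TOPS per watt.
print(14 / 10)  # → 1.4
```

The point of the sketch is the asymmetry: the accelerator object never decides what runs next; ordering and arbitration live entirely in the host.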