http://rdf.ncbi.nlm.nih.gov/pubchem/patent/US-2022075597-A1

Outgoing Links

Predicate / Object

assignee
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentassignee/MD5_cce1bf18a448daccdd9f4edb9dcb3fb2

classificationCPCAdditional
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-08
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F2207-4824

classificationCPCInventive
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N5-04
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F7-5443
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F9-522
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-045
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F40-20
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-063
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F9-3867
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N20-20
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F15-163

classificationIPCInventive
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06F9-52
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06N3-063
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06F7-544
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06F40-20
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06F9-38

filingDate
    2020-09-10-04:00^^<http://www.w3.org/2001/XMLSchema#date>

inventor
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_674e2ca8246c8f4c37913775d005ae03
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_e983c8180af0b90845108aaa1f233042
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_69ced53b63a062b62f400537ebf2240f
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_9b4c08b2c51dc428a4af3b9ef72334f5

publicationDate
    2022-03-10-04:00^^<http://www.w3.org/2001/XMLSchema#date>

publicationNumber
    US-2022075597-A1

titleOfInvention
    Multi-die dot-product engine to provision large scale machine learning inference applications

abstract
    Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for performing deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations are spanned across the plurality of DPE chips. Due to the multi-die architecture, multiple silicon devices are allowed to be used for inference, thereby enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system performing specific applications, such as object recognition, with high accuracy.

isCitedBy
    http://rdf.ncbi.nlm.nih.gov/pubchem/patent/US-11756140-B2

priorityDate
    2020-09-10-04:00^^<http://www.w3.org/2001/XMLSchema#date>

type
    http://data.epo.org/linked-data/def/patent/Publication
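The abstract describes deep learning operations "spanned across the plurality of DPE chips", with a host-side interface tying the chips together. A minimal sketch of that idea, in plain Python with illustrative names (`DPEChip`, `span_dot_product`) that are not taken from the patent: one large dot product is partitioned into slices, each simulated chip computes a partial sum, and the host reduces the partials.

```python
# Illustrative sketch only: partition a dot product across simulated "chips".
# Names (DPEChip, span_dot_product) are hypothetical, not from the patent.

class DPEChip:
    """Simulated dot-product engine chip: computes one partial dot product."""

    def __init__(self, chip_id):
        self.chip_id = chip_id

    def dot(self, weights, activations):
        # Each chip only sees the operand slice assigned to it.
        return sum(w * a for w, a in zip(weights, activations))


def span_dot_product(weights, activations, chips):
    """Split one dot product across several chips, then reduce host-side."""
    n = len(chips)
    chunk = (len(weights) + n - 1) // n  # ceiling division
    partials = []
    for i, chip in enumerate(chips):
        lo, hi = i * chunk, (i + 1) * chunk
        partials.append(chip.dot(weights[lo:hi], activations[lo:hi]))
    return sum(partials)  # host reduction of per-chip partial sums


chips = [DPEChip(i) for i in range(4)]
w = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
x = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
print(span_dot_product(w, x, chips))  # → 120.0, same as a single-chip dot product
```

The reduction step stands in for the hardware interface to host memory; the patent's actual multi-chip interconnect and on-die compute are, of course, far more involved.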

Incoming Links

Predicate / Subject

isDiscussedBy
    http://rdf.ncbi.nlm.nih.gov/pubchem/substance/SID453034310
    http://rdf.ncbi.nlm.nih.gov/pubchem/substance/SID419559541
    http://rdf.ncbi.nlm.nih.gov/pubchem/compound/CID516892
    http://rdf.ncbi.nlm.nih.gov/pubchem/compound/CID5461123

Total number of triples: 33.
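The stated total can be checked against the listing itself: 29 outgoing triples plus 4 incoming triples. A small sketch, using only the per-predicate object counts transcribed from this page:

```python
# Per-predicate object counts transcribed from the listing above.
outgoing = {
    "assignee": 1,
    "classificationCPCAdditional": 2,
    "classificationCPCInventive": 9,
    "classificationIPCInventive": 5,
    "filingDate": 1,
    "inventor": 4,
    "publicationDate": 1,
    "publicationNumber": 1,
    "titleOfInvention": 1,
    "abstract": 1,
    "isCitedBy": 1,
    "priorityDate": 1,
    "type": 1,
}
incoming = {"isDiscussedBy": 4}

total = sum(outgoing.values()) + sum(incoming.values())
print(total)  # → 33, matching the total reported on this page
```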