http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-114462590-A

Outgoing Links

Predicate Object
assignee http://rdf.ncbi.nlm.nih.gov/pubchem/patentassignee/MD5_a5f468ab791daeaa95143f47382c4f9b
assignee http://rdf.ncbi.nlm.nih.gov/pubchem/patentassignee/MD5_d6a6f422b091ba12ea61d4adbf1b0e8e
classificationCPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06F16-24552
classificationCPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-08
classificationIPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06N3-08
classificationIPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06F16-2455
filingDate 2021-12-13-04:00^^<http://www.w3.org/2001/XMLSchema#date>
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_1a94d006062f7c5a6f45e37d5355bf48
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_a35d45f1e17590c5b8bd23dff1b8882d
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_4c5886059a8f06ebdf755e62a4867d94
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_05986683e7c228a820c59991937696d4
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_98abdef1ace7a1b1f82a97326180afcf
publicationDate 2022-05-10-04:00^^<http://www.w3.org/2001/XMLSchema#date>
publicationNumber CN-114462590-A
titleOfInvention An importance-aware deep learning data cache management method and system
abstract The invention provides an importance-aware deep learning data cache management method and system. The cache is divided into two regions that store important and unimportant samples separately; an importance-aware cache management module and a dynamic packaging module respond to data requests and manage the two regions of the cache module. The importance-aware cache management module caches the more important samples in memory first, and when the cache is full the less important samples are evicted first, which improves the cache hit rate. Unimportant samples are packed and cached by asynchronous threads, and when an unimportant sample is missing from the cache it is replaced with another unimportant sample, which preserves the diversity of training samples without introducing additional overhead. Compared with the prior art, the invention has a negligible effect on model training accuracy and makes DNN training three times as fast. (See the sketch following this table.)
priorityDate 2021-12-13-04:00^^<http://www.w3.org/2001/XMLSchema#date>
type http://data.epo.org/linked-data/def/patent/Publication
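
The abstract above describes the core policy: two cache regions, preferential eviction of unimportant samples, and substitution on unimportant-sample misses. The Python sketch below illustrates that policy under stated assumptions; the class and method names (ImportanceAwareCache, put, get) are hypothetical, and the patent's dynamic packaging module (asynchronous packing threads) is not modeled.

    import random
    from collections import OrderedDict

    class ImportanceAwareCache:
        # Sketch of the two-region cache described in the abstract.
        # Hypothetical interface; the patent's actual modules are not public here.

        def __init__(self, capacity):
            self.capacity = capacity
            self.important = OrderedDict()    # region 1: important samples
            self.unimportant = OrderedDict()  # region 2: unimportant samples

        def _evict(self):
            # Less important samples leave the cache first; fall back to the
            # important region only when the unimportant region is empty.
            region = self.unimportant if self.unimportant else self.important
            region.popitem(last=False)  # drop the least recently used entry

        def put(self, key, sample, important):
            region = self.important if important else self.unimportant
            region[key] = sample
            region.move_to_end(key)
            while len(self.important) + len(self.unimportant) > self.capacity:
                self._evict()

        def get(self, key, important):
            region = self.important if important else self.unimportant
            if key in region:
                region.move_to_end(key)
                return region[key]
            if not important and self.unimportant:
                # On an unimportant-sample miss, substitute another cached
                # unimportant sample, preserving diversity with no extra I/O.
                return random.choice(list(self.unimportant.values()))
            return None  # important-sample miss: caller loads from storage

For example, a data loader could call cache.get(idx, important=False) during training and fall back to disk only when None is returned: important samples always go to storage on a true miss, while unimportant misses are served by substitution, matching the diversity-without-overhead claim in the abstract.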

Incoming Links

Predicate Subject
isDiscussedBy http://rdf.ncbi.nlm.nih.gov/pubchem/substance/SID453034310
isDiscussedBy http://rdf.ncbi.nlm.nih.gov/pubchem/compound/CID516892
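
Incoming links are statements made on other resources (here a substance and a compound), so dereferencing the patent URI alone will typically not return them; they have to be queried. A sketch, assuming the predicate is CiTO's isDiscussedBy (as the label suggests) and using a placeholder endpoint URL, since no SPARQL service is named on this page:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint: substitute a SPARQL service that hosts PubChemRDF.
    sparql = SPARQLWrapper("https://example.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX cito: <http://purl.org/spar/cito/>
        SELECT ?subject WHERE {
          ?subject cito:isDiscussedBy
            <http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-114462590-A> .
        }
    """)

    # Expected bindings: the SID and CID listed above.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["subject"]["value"])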

Total number of triples: 20.
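
As a closing illustration, the outgoing triples above could be fetched programmatically. The sketch below assumes the PubChem RDF service answers content negotiation for Turtle at the patent URI; if it does not, the same data is available in the PubChemRDF bulk downloads.

    from rdflib import Graph

    PATENT = "http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-114462590-A"

    g = Graph()
    # Assumes the server returns Turtle for this URI via content negotiation.
    g.parse(PATENT, format="turtle")

    for s, p, o in g:
        print(p.n3(g.namespace_manager), o.n3(g.namespace_manager))

    # Outgoing triples only; the incoming isDiscussedBy links are asserted
    # on the substance and compound resources and live in other graphs.
    print("Total outgoing triples:", len(g))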