http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-111461063-A

Outgoing Links

Predicate Object
assignee http://rdf.ncbi.nlm.nih.gov/pubchem/patentassignee/MD5_217f2be73fd6be0be36dd55803d3bec5
classificationCPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-08
http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06N3-045
http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G06V40-23
classificationIPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06N3-04
http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06N3-08
http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06K9-00
filingDate 2020-04-24-04:00^^<http://www.w3.org/2001/XMLSchema#date>
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_782a06af14e5751e7d300ffc9dc5f934
http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_e4086a3716fdc4067f13d41bbac9dcfb
http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_314bcec9d15b4fa656aa7c31e849e6f1
publicationDate 2020-07-28-04:00^^<http://www.w3.org/2001/XMLSchema#date>
publicationNumber CN-111461063-A
titleOfInvention A Behavior Recognition Method Based on Graph Convolution and Capsule Neural Network
abstract The invention proposes a behavior recognition method based on graph convolution and a capsule neural network. The spatial coordinates of human body joint points in each frame of a continuous human action sequence are obtained by manual annotation, and spatial coordinate vectors of the joint points are constructed from them. A multi-layer perceptron maps each spatial coordinate vector to a high-dimensional feature vector, and a joint-point adjacency matrix is constructed according to the principle of action correlation. Velocity space vectors of the joint points are computed from the spatial coordinates, and acceleration space vectors are further derived from them. The convolutional neural network is used to extract features, and the capsule neural network is used for action classification; the capsule convolutional neural network is formed by connecting the convolutional neural network and the capsule neural network in series. The trained capsule convolutional neural network is obtained by training on the training set for multiple epochs. The invention conforms to the characteristics of actual motion, the propagation of features on the graph better matches the actual situation, and the features retained for classification effectively improve the recognition ability of the model. (A schematic code sketch of this pipeline appears after the outgoing links listed here.)
isCitedBy http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-113486917-A
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-113486917-B
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-113313831-A
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-114444187-A
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-113255514-A
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-114444187-B
priorityDate 2020-04-24-04:00^^<http://www.w3.org/2001/XMLSchema#date>
type http://data.epo.org/linked-data/def/patent/Publication
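The abstract above describes a multi-step pipeline (joint coordinates, MLP feature mapping, adjacency matrix, velocity/acceleration vectors, graph-based feature propagation, capsule classification). The following is a minimal, hypothetical sketch of that pipeline only; all class names, layer sizes, and tensor shapes are illustrative assumptions not taken from the patent, and the capsule classification head is replaced by a plain linear readout for brevity, whereas the patent uses a capsule neural network for that step.

import torch
import torch.nn as nn


class JointMLP(nn.Module):
    """Maps per-joint spatial coordinates to a high-dimensional feature vector."""

    def __init__(self, in_dim: int = 3, hidden: int = 64, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (batch, frames, joints, 3) -> (batch, frames, joints, out_dim)
        return self.net(coords)


def motion_vectors(coords: torch.Tensor):
    """Velocity (first temporal difference) and acceleration (second difference)."""
    vel = coords[:, 1:] - coords[:, :-1]
    acc = vel[:, 1:] - vel[:, :-1]
    return vel, acc


class GraphConvLayer(nn.Module):
    """Propagates joint features over an adjacency matrix built from action correlation."""

    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        self.register_buffer("A", adjacency)          # (joints, joints)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, in_dim); aggregate neighbours, then transform.
        x = torch.einsum("ij,bfjd->bfid", self.A, x)
        return torch.relu(self.lin(x))


if __name__ == "__main__":
    batch, frames, joints = 2, 30, 18                # illustrative sizes
    coords = torch.randn(batch, frames, joints, 3)   # stand-in for annotated joint coordinates
    adjacency = torch.eye(joints)                    # placeholder adjacency matrix

    feats = JointMLP()(coords)                       # high-dimensional joint features
    vel, acc = motion_vectors(coords)                # velocity / acceleration vectors
    gcn_out = GraphConvLayer(128, 64, adjacency)(feats)

    # Simplified readout standing in for the capsule classifier of the patent.
    logits = nn.Linear(64, 10)(gcn_out.mean(dim=(1, 2)))
    print(logits.shape)                              # torch.Size([2, 10])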

Incoming Links

Predicate Subject
isCitedBy http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-109362066-A
http://rdf.ncbi.nlm.nih.gov/pubchem/patent/WO-2019040196-A1
isDiscussedBy http://rdf.ncbi.nlm.nih.gov/pubchem/compound/CID31307
http://rdf.ncbi.nlm.nih.gov/pubchem/substance/SID419505112

Total number of triples: 27.
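
The outgoing and incoming links above are RDF triples centered on the patent URI. Below is a minimal rdflib sketch for retrieving them programmatically; it assumes the resource URI dereferences to an RDF serialization (or that an equivalent Turtle/RDF-XML file has been downloaded locally — the file name in the comment is hypothetical).

from rdflib import Graph, URIRef

PATENT = URIRef("http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CN-111461063-A")

g = Graph()
g.parse(str(PATENT))   # or: g.parse("CN-111461063-A.ttl", format="turtle")

# Outgoing links: predicate/object pairs with the patent as subject.
for pred, obj in g.predicate_objects(subject=PATENT):
    print("outgoing", pred, obj)

# Incoming links (e.g. isCitedBy, isDiscussedBy) are only visible if the loaded
# data also contains triples whose object is the patent URI.
for subj, pred in g.subject_predicates(object=PATENT):
    print("incoming", pred, subj)

print("total triples loaded:", len(g))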