http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CA-2143483-A1

Outgoing Links

Predicate Object
assignee http://rdf.ncbi.nlm.nih.gov/pubchem/patentassignee/MD5_7ea4dbc26717a286f058859ebbbcaaf1
classificationCPCAdditional http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G10L2021-105
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N19-20
classificationCPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G10L25-57
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G11B20-10
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/G10L21-0356
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N19-132
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N21-4341
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N21-44008
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N21-440281
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N19-587
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentcpc/H04N21-4394
classificationIPCInventive http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G10L21-10
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G10L13-04
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/H04N21-434
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/H04N7-14
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G06T9-00
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/H04N21-2368
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/H04N19-00
    http://rdf.ncbi.nlm.nih.gov/pubchem/patentipc/G10L13-00
filingDate 1995-02-27-04:00^^<http://www.w3.org/2001/XMLSchema#date>
inventor http://rdf.ncbi.nlm.nih.gov/pubchem/patentinventor/MD5_982ddb266c5fa7ab27c73d4e48a63bfe
publicationDate 1995-09-19-04:00^^<http://www.w3.org/2001/XMLSchema#date>
publicationNumber CA-2143483-A1
titleOfInvention Video signal processing systems and methods utilizing automated speech analysis
abstract A method of increasing the frame rate of an image of a speaking person comprises monitoring an audio signal indicative of utterances by the speaking person and the associated video signal. The audio signal corresponds to one or more fields or frames to be reconstructed, and individual portions of the audio signal are associated with facial feature information. The facial feature information includes mouth formation and position information derived from phonemes or other speech-based criteria from which the position of a speaker's mouth may be reliably predicted. A field or frame of the image is reconstructed using image features extracted from the existing frame and by utilizing the facial feature information associated with a detected phoneme.
priorityDate 1994-03-18-04:00^^<http://www.w3.org/2001/XMLSchema#date>
type http://data.epo.org/linked-data/def/patent/Publication
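
These outgoing triples can be retrieved programmatically by dereferencing the patent URI. The Python sketch below uses rdflib and assumes the PubChemRDF server returns an RDF serialization (e.g. Turtle) through content negotiation; it is an illustrative sketch, not an official PubChemRDF client.

    from rdflib import Graph, URIRef

    PATENT = URIRef("http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CA-2143483-A1")

    g = Graph()
    # Assumes the URI dereferences to an RDF document via content negotiation.
    g.parse(PATENT)

    # Print every outgoing predicate/object pair for the patent resource.
    for pred, obj in g.predicate_objects(subject=PATENT):
        print(pred.n3(), obj.n3())

Predicates printed this way appear as full URIs, whereas the table above shows their shortened local names.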

Incoming Links

Predicate Subject
isDiscussedBy http://rdf.ncbi.nlm.nih.gov/pubchem/taxonomy/TAXID163112
    http://rdf.ncbi.nlm.nih.gov/pubchem/compound/CID4496
    http://rdf.ncbi.nlm.nih.gov/pubchem/substance/SID408976986
    http://rdf.ncbi.nlm.nih.gov/pubchem/anatomy/ANATOMYID163112

Total number of triples: 32.
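
The incoming links (triples in which this publication appears as the object, such as the isDiscussedBy statements above) are generally not returned by dereferencing the patent URI alone; finding them requires access to the referring records. As a minimal sketch, assuming the relevant PubChemRDF subsets have already been downloaded into a local file and loaded into an rdflib graph (the file name below is illustrative only), the referring subjects could be enumerated like this:

    from rdflib import Graph, URIRef

    PATENT = URIRef("http://rdf.ncbi.nlm.nih.gov/pubchem/patent/CA-2143483-A1")

    g = Graph()
    # Hypothetical local dump; PubChemRDF data are distributed as bulk files
    # that would need to be downloaded separately.
    g.parse("pubchem_subset.ttl", format="turtle")

    # Subjects that reference this publication, e.g. via isDiscussedBy.
    for subj, pred in g.subject_predicates(object=PATENT):
        print(subj.n3(), pred.n3())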