US Pat. No. 10,116,990

INFORMATION DISTRIBUTION SYSTEM AND METHOD FOR DISTRIBUTING CONTENT INFORMATION

SONY CORPORATION, Tokyo ...

1. An information distribution method for a server apparatus to distribute image data and selected content information appended thereto, comprising the steps of:receiving, by receiver circuitry at the server from a first information processing terminal of a user, image data input by the user, text data input by the user appended to the image data and selectively edited text data comprised of said text data selectively edited by the user's first information processing terminal prior to being received by the server;
storing at the server the received image data at an addressable storage location;
storing in a user information database user information representing predetermined characteristics and activities of the user;
selecting, at the server apparatus, content information to be appended to said image data based on (a) the received text data, (b) the received edited text data and (c) the user information;
appending the selected content information to at least the image data;
providing a web page address of a web page related to and containing more information than the selected content information; and
transmitting, by transmitting circuitry, said image data with (a) the storage location address of said image data appended thereto, (b) the selected content information appended thereto and (c) the web page address appended thereto, to a second information processing terminal of a recipient;
wherein said image data, the edited text data and the content information are displayed on said second information processing terminal based on the storage location address of said image data; and
wherein specific information is shared by the user of said first information processing terminal with the recipient at said second information processing terminal via said server.

US Pat. No. 10,116,989

BUFFER REDUCTION USING FRAME DROPPING

Twitch Interactive, Inc.,...

1. A computing system for reducing a quantity of video frames stored in a video frame buffer during transmission of a video stream comprising:one or more processors; and
one or more memories having stored therein instructions that, upon execution by the one or more processors, cause the computing system to perform operations comprising:
selecting, from among a plurality of video frames from the video frame buffer, at least a first video frame to drop from a video output for display, wherein the first video frame is selected for dropping based, at least in part, on at least one of a quantity of dropped video frames, a frequency of dropped video frames, or whether the first video frame is a reference frame;
dropping at least the first video frame from the video output;
determining not to drop at least a second video frame from the video output, the second video frame having a first video timestamp associated with the video frame buffer;
determining a second video timestamp for the second video frame associated with the video output;
determining a video timestamp difference between the first video timestamp and the second video timestamp;
determining an audio timestamp difference for a first audio frame from an audio frame buffer that is a difference between a first audio timestamp for the first audio frame associated with the audio frame buffer and a second audio timestamp for the first audio frame associated with an audio output;
selecting, from among a plurality of audio frames from the audio frame buffer, a second audio frame to drop from the audio output, wherein the second audio frame is selected for dropping based, at least in part, on a relationship between the audio timestamp difference and the video timestamp difference; and
dropping the second audio frame from the audio output.
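
As an illustration of the timestamp comparison recited above, a minimal Python sketch follows. The whole-frame catch-up rule and the frame-duration parameter are assumptions for illustration, not claim language.

    # Illustrative sketch, assuming drift is corrected in whole audio frames.
    def audio_frames_to_drop(video_ts_diff, audio_ts_diff, audio_frame_duration):
        """Return how many buffered audio frames to drop so that the audio
        timestamp difference (buffer vs. output) catches up with the video
        timestamp difference introduced by dropped video frames."""
        gap = video_ts_diff - audio_ts_diff
        if gap <= 0:
            return 0                       # audio already in step with video
        return int(gap // audio_frame_duration)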

US Pat. No. 10,116,988

SIGNAL TRANSMISSION DEVICE AND METHOD FOR CONTROLLING SAME

LG ELECTRONICS INC., Seo...

1. A signal transmission device for transmitting/receiving data to/from a companion device, the signal transmission device comprising:a controller to set state variables using an application of the signal transmission device,
wherein the controller sets a Uniform Resource Identifier (URI) state variable including URI information of the application, and a name state variable including a name of the application; and
a communicator to transmit values of the URI state variable and the name state variable to a companion application of the companion device,
wherein the controller sets a ready state variable indicating whether the application is prepared to engage in a communication and an access point state variable indicating a Transport Service Access Point (TSAP) at which the application is accepting a connection,
wherein the communicator transmits values of the ready state variable and the access point state variable to the companion application, and
wherein the communicator receives a connection request for the connection from the companion application based on values of the state variables.

US Pat. No. 10,116,986

DIGITAL VIDEO RECORDER STATE CACHE

TIME WARNER CABLE ENTERPR...

1. A method comprising the steps of:maintaining a bookmark service database in communication with a plurality of set-top boxes, said bookmark service database comprising a plurality of bookmarks for a plurality of programs, each of said bookmarks in turn comprising a program identifier and a play position;
obtaining, at said bookmark service database, an indication that a given one of said set-top boxes has updated its cache with a bookmark action;
updating said bookmark service database to reflect said bookmark action; and
pushing, from said bookmark service database to said plurality of set-top boxes other than said given one of said set-top boxes, at least a portion of said bookmark service database for local caching on said plurality of set-top boxes,
wherein said pushing is carried out in response to said obtaining of said indication and wherein said portion includes an update to reflect said bookmark action.
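
A minimal sketch of the push step, assuming a simple in-memory database and a hypothetical push_cache_update method on each set-top box object (neither is specified by the claim):

    # Hedged sketch: db maps program identifiers to play positions (bookmarks).
    def on_bookmark_action(db, boxes, reporting_box_id, program_id, play_position):
        db[program_id] = play_position                 # update the service database
        update = {program_id: play_position}           # portion to be cached locally
        for box in boxes:
            if box.id != reporting_box_id:             # skip the reporting box
                box.push_cache_update(update)          # hypothetical transport call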

US Pat. No. 10,116,984

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

MAXELL, LTD., Kyoto (JP)...

1. A content receiving apparatus, the content receiving apparatus comprising:a digital television broadcast receiver;
a signal separator which de-multiplexes video data and audio data from a signal;
a processor which executes video processing on video data;
a network communication module which connects to the internet; and
a controller configured to:
control a first video content to be received via a digital television broadcast;
control the first video content received via a digital television broadcast to be outputted to a display;
control an identifier for identifying a second video content to be received from an external mobile terminal;
control display state information to be received from the external mobile terminal;
control the second video content to be acquired using the identifier; and
control the second video content to be outputted to the display,
wherein the controller controls output of the first video content on the display to be terminated before output of the second video content on the display is started.

US Pat. No. 10,116,982

SYSTEMS AND METHODS FOR AUTOMATED EXTRACTION OF CLOSED CAPTIONS IN REAL TIME OR NEAR REAL-TIME AND TAGGING OF STREAMING DATA FOR ADVERTISEMENTS

CROSSBAR MEDIA GROUP, INC...

1. A method for finding target content, comprising:providing at least one device and a cloud-based platform, wherein the cloud-based platform comprises at least one server and at least one database; wherein the at least one device communicates with the cloud-based platform over the Internet;
the at least one device receiving a live broadcast and/or streaming audio and video content;
the at least one device extracting captions of the live broadcast and/or the audio and video content in real time;
the cloud-based platform receiving extracted captions from the at least one device and storing the extracted captions in the at least one database;
the cloud-based platform searching the extracted captions for at least one keyword relating to the target content, thereby creating search result data;
the cloud-based platform harvesting social media data relevant to the target content in a predetermined period of time from the Internet; and
the cloud-based platform determining an impact of the target content by correlating the search result data with the social media data.
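
One simple way to realise the "determining an impact ... by correlating" step is a Pearson correlation between keyword hits in the extracted captions and social-media mention counts over matching time buckets; the bucketing and the choice of Pearson are assumptions for illustration.

    # Illustrative correlation of caption keyword hits with social-media volume.
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    def impact_score(caption_hits_per_hour, social_mentions_per_hour):
        """Higher values suggest caption appearances and social chatter move together."""
        return pearson(caption_hits_per_hour, social_mentions_per_hour)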

US Pat. No. 10,116,981

VIDEO MANAGEMENT SYSTEM FOR GENERATING VIDEO SEGMENT PLAYLIST USING ENHANCED SEGMENTED VIDEOS

MICROSOFT TECHNOLOGY LICE...

8. A computer-implemented method for video management, the method comprising:receiving a search query for video content;
identifying a plurality of relevant enhanced segmented videos, wherein a relevant enhanced segmented video is an enhanced segmented video in a cognitive index that satisfies the search query based at least in part on the corresponding plurality of segmentation dimensions, wherein the enhanced segmented video is generated based on segmentation rules and segment reconstruction rules;
receiving a selection of at least a subset of the plurality of relevant enhanced segmented videos to generate a video segment playlist;
generating the video segment playlist wherein the video segment playlist comprises references to the subset of the plurality of relevant enhanced segmented videos; and
causing playback of the subset of the plurality of relevant enhanced segmented videos based on the references in the video segment playlist.

US Pat. No. 10,116,980

ZERO SIGN-ON AUTHENTICATION

Cable Television Laborato...

1. A method of providing zero sign-on (ZSO) authentication comprising:determining a media access request from a first device requesting access to a media service associated with a service provider, the media access request being transmitted using signaling through a gateway;
determining a location for the gateway as a function of information included within the media access request;
determining a level of trust for a second device determined to be at the location; and
enabling the first device or operating system ZSO authentication sufficient to access the media service if the level of trust is sufficient and denying the first device or operating system ZSO authentication if the level of trust is insufficient.

US Pat. No. 10,116,979

METHOD AND SYSTEM FOR THE DELIVERY AND STORAGE OF HIGH DEFINITION AUDIO-VISUAL CONTENT

1. A system for permitting temporary access for delivery of content owned by a subscriber to a location selected by the subscriber, the system comprising:a central facility, wherein the central facility stores a list of all content owned by the subscriber;
a plurality of local networks operatively connected to the central facility for receipt of the content;
a temporary access application residing on a device controlled by the subscriber;
a set-top box (STB) at the selected location, wherein the STB has a unique identification (ID); and
a temporary access protocol residing on the central facility, wherein the temporary access protocol performs the following functions:
receiving a message from the temporary access application, wherein the message comprises the name of content selected by the subscriber from the list of all content owned by the subscriber, the unique ID of the STB, the location of the device, and a unique ID of the subscriber;
comparing the content selected by the subscriber with the unique ID of the subscriber, wherein if the content is determined to be owned by the subscriber, the central facility queries one of the plurality of local networks that has the STB identified by the unique ID of the STB at the selected location, so the local network supplies the location of the STB to the central facility;
comparing the location of the STB to the location of the device and determining whether the location of the STB and the location of the device are within a set distance to each other to determine that there is no fraudulent activity; and
informing the local network, if there is no fraudulent activity, to play the selected content on the STB of the selected location.
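
The "within a set distance" check can be sketched with a great-circle distance; the haversine formula and the 0.5 km threshold below are illustrative choices, not values from the patent.

    from math import radians, sin, cos, asin, sqrt

    # Hedged sketch of the location comparison used as the fraud check.
    def within_set_distance(stb_latlon, device_latlon, max_km=0.5):
        lat1, lon1 = map(radians, stb_latlon)
        lat2, lon2 = map(radians, device_latlon)
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        distance_km = 2 * 6371.0 * asin(sqrt(a))       # great-circle distance
        return distance_km <= max_km                   # True: no fraud inferred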

US Pat. No. 10,116,977

DYNAMIC DELAY EQUALIZATION FOR MEDIA TRANSPORT

BIAMP SYSTEMS CORPORATION...

1. A method, comprising:receiving a plurality of related media signals along different media paths by a media transport system;
determining an uncorrected propagation delay for each media path and determining a longest propagation delay that is a maximum of the uncorrected propagation delays;
delaying each of the related media signals by an amount related to a difference between the longest propagation delay and the uncorrected propagation delay of that related media signal;
in response to a change to a propagation delay of at least one of the related media signals while transporting the related media signals, determining a revised uncorrected propagation delay for each media path and determining a revised longest propagation delay that is a maximum of the revised uncorrected propagation delays; and
delaying each of the related media signals by an amount related to a difference between the revised longest propagation delay and the revised uncorrected propagation delay of that related media signal.
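
The equalization rule reduces to padding every path up to the longest measured delay; a short Python sketch, re-run whenever a propagation delay changes:

    # Each path is delayed by (longest uncorrected delay - its own uncorrected delay).
    def equalization_delays(uncorrected_delays_ms):
        longest = max(uncorrected_delays_ms)
        return [longest - d for d in uncorrected_delays_ms]

    # Example: paths measured at 12, 30 and 21 ms are padded by 18, 0 and 9 ms.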

US Pat. No. 10,116,976

SYSTEM AND METHOD FOR DISTRIBUTING MEDIA CONTENT ASSOCIATED WITH AN EVENT

1. A device comprising:a processor; and
a memory accessible to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform operations comprising:
receiving, via a transmission, a first video stream of a plurality of video streams associated with an event;
during display of the first video stream at a display device coupled to the processor, sending a replay request for the event, the replay request associated with a particular video stream of the plurality of video streams, wherein the replay request includes an offset time and a replay duration, and wherein a communication session with a media server is established via a unicast transmission based on the replay request;
receiving, via the communication session, a set of video segments associated with the particular video stream responsive to the replay request; and
displaying the set of video segments at the display device, wherein a display of the set of video segments includes information overlaying a portion of the video segments, the information including a first indicator of a channel associated with the particular video stream, a second indicator that the set of video segments is a replay, and a numeric indicator of time remaining in the replay.

US Pat. No. 10,116,975

CONTROLLER, CONTROL METHOD, COMPUTER PROGRAM, AND VIDEO TRANSMISSION SYSTEM

SONY CORPORATION, Tokyo ...

1. A controller comprising:circuitry configured to:
acquire information on a status of a network topology and a transmitted video stream when an instruction is acquired that directs a receiving device, which receives a first video stream transmitted via IP multicast, to receive a second video stream after receiving the first video stream;
calculate a network bandwidth based on the acquired information on the status of the network topology and the transmitted video stream for transmitting the second video stream in addition to the first video stream; and
send out an instruction to the receiving device for changing from the first video stream to the second video stream by using the network bandwidth calculated.

US Pat. No. 10,116,974

CONTENT SELECTION BASED ON DISPERSION CALCULATIONS

Comcast Cable Communicati...

1. A method comprising:receiving, by a computing device, viewership information associated with a first content item and viewership information associated with a second content item;
determining, based on the viewership information associated with the first content item and the viewership information associated with the second content item, a value indicative of a likely viewership, during a broadcast of the first content item, of the second content item; and
based on a determination that the value does not satisfy a threshold, causing transmission of the second content item via a narrowcast server during the broadcast of the first content item.

US Pat. No. 10,116,973

ADVANCED DATA CELL RESOURCE MAPPING

One Media, LLC, Hunt Val...

1. A method of mapping a plurality of modulation symbols of a plurality of physical layer pipes to be transmitted in a frame to a sequentially indexed array of data cells, the method comprising:determining, for the frame, whether each physical layer pipe of the plurality of physical layer pipes is dispersed or non-dispersed;
responsive to determining that a first physical layer pipe is non-dispersed, populating a next available position of the sequentially indexed array with a first modulation symbol value of the first physical layer pipe; and
responsive to determining that a second physical layer pipe is dispersed:
calculating a sub-slice size for the second physical layer pipe by dividing a size of the second physical layer pipe with a number of sub-slices of the second physical layer pipe; and
populating a next available position of the sequentially indexed array with a first modulation symbol value of a sub-slice of the second physical layer pipe.
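
A simplified mapping loop for the claim follows; the PLP data structure is hypothetical, and the full interleaving of later sub-slices with other pipes is omitted for brevity.

    # Illustrative sketch: non-dispersed PLPs write symbols into the next free
    # cells; dispersed PLPs are cut into sub-slices of size (PLP size / number
    # of sub-slices), and only the first sub-slice is placed here.
    def map_plps_to_cells(plps):
        cells = []                                        # sequentially indexed array
        for plp in plps:                                  # dicts with 'symbols',
            if not plp["dispersed"]:                      # 'dispersed', 'num_subslices'
                cells.extend(plp["symbols"])
            else:
                subslice_size = len(plp["symbols"]) // plp["num_subslices"]
                cells.extend(plp["symbols"][:subslice_size])
        return cells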

US Pat. No. 10,116,972

METHODS FOR IDENTIFYING VIDEO SEGMENTS AND DISPLAYING OPTION TO VIEW FROM AN ALTERNATIVE SOURCE AND/OR ON AN ALTERNATIVE DEVICE

INSCAPE DATA, INC., Irvi...

9. A computer-implemented method, comprising:transmitting one or more software applications configured for execution by a media system;
receiving, by a computing device, pixel data associated with a video frame of a video segment being displayed by the media system, wherein the video segment includes at least a portion of a version of a video program;
identifying the video segment being displayed by the media system, wherein identifying the video segment includes comparing the pixel data with stored pixel data to find a closest match;
determining contextual content, wherein the contextual content is contextually related to the identified video segment, wherein the contextual content includes an option to switch to an alternative version of the video program, wherein the version and the alternative version are of a same video program, and wherein the alternative version is from a video server; and
transmitting one or more software instructions, wherein the one or more software instructions, when received by the media system, cause a software application associated with the contextual content to execute on the media system, wherein the software application provides the contextual content to the media system, wherein selection of the option causes the media system to switch from the version of the video program to the alternative version of the same video program.

US Pat. No. 10,116,971

METHOD AND SYSTEM FOR FETCHING A PORTION OF A LIVE MEDIA STREAM BEFORE A FIRST AD FINISHES PLAYING TO DETECT THE SUBSEQUENT AD INDICATOR FOR LIVE CONSECUTIVE AD REPLACEMENT

MobiTV, Inc., Emeryville...

1. A method comprising:receiving and playing a live media stream at a client device;
playing a first ad during a first ad period in the live media stream;
before the first ad finishes playing during the first ad period, fetching only a portion of the live media stream in order to detect the presence of a subsequent ad indicator, the subsequent ad indicator indicating a start point for a second ad period in the live media stream for playing an original second ad, the second ad period being consecutive to the first ad period and set to start immediately following the first ad period;
detecting the subsequent ad indicator;
fetching a replacement ad to play in place of the original second ad; and
playing the replacement ad at the start point for the second ad period, instead of playing the original second ad in the live media stream, during the second ad period.

US Pat. No. 10,116,970

VIDEO DISTRIBUTION, STORAGE, AND STREAMING OVER TIME-VARYING CHANNELS

Empire Technology Develop...

1. A method to provide video distribution, storage, and streaming over time-varying channels, the method comprising:grouping a stream of video frames to form one or more groups-of-pictures (GOPs), wherein each GOP of the one or more GOPs includes a plurality of sub-groups-of-pictures (sub-GOPs);
encoding the plurality of sub-GOPs of video frames into a plurality of blocks, wherein each block of the plurality of blocks includes at least a portion of an individual encoded video frame;
determining two or more priority levels to be assigned to the plurality of blocks;
assigning a priority level to each block, in the plurality of blocks, based on one or more of an importance and a context of each block within an associated sub-GOP;
one or more of storing and distributing the plurality of blocks based on the priority level assigned to each block;
selecting one or more of a storage type and a content distribution network path such that an aggregate loss probability is less for blocks with a higher priority level compared to blocks with a lower priority level; and
transmitting, to a requesting client device, blocks with the higher priority level over a higher quality delivery channel, compared to blocks with the lower priority level which are transmitted over a lower quality delivery channel,
wherein the transmission of the blocks with the higher priority level over the higher quality delivery channel, compared to the blocks with the lower priority level which are transmitted over the lower quality delivery channel, facilitates a particular quality of service (QoS) being provided for the requesting client device.
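
The routing of blocks to delivery channels by priority can be sketched as below; reducing "importance and context" to a caller-supplied numeric score is an assumption for illustration.

    # Blocks at or above the threshold go to the higher-quality channel.
    def route_blocks(blocks, priority_of, threshold):
        high_channel, low_channel = [], []
        for block in blocks:
            target = high_channel if priority_of(block) >= threshold else low_channel
            target.append(block)
        return high_channel, low_channel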

US Pat. No. 10,116,969

METHODS, SYSTEMS AND MEDIA FOR SELECTIVELY PRESENTING BROADCAST CONTENT BASED ON USER INTERESTS

Google LLC, Mountain Vie...

1. A method for selectively presenting media content, the method comprising:associating, using a hardware processor, a plurality of entities of interest with a user account of a user device based on user information associated with the user account;
identifying, using the hardware processor, a plurality of live media content items that are available for the user device to present;
determining, using the hardware processor, that each of one or more live media content items of the plurality of live media content items that are available for presentation by the user device is relevant to at least one of the plurality of entities of interest based on content metadata corresponding to each of the plurality of live media content items;
determining, using the hardware processor, during a broadcast of a first live media content item and a second live media content item of the plurality of live media content items, that an upcoming portion of the first live media content item and an upcoming portion of the second live media content item are relevant to a first entity and a second entity of the plurality of entities of interest based on the content metadata;
placing, using the hardware processor, the upcoming portion of the first live media content item and the upcoming portion of the second live media content item in a queue, wherein the first live media content item and the second live media content item are ordered in the queue based on relevance to the plurality of entities; and
transmitting, using the hardware processor, instructions to the user device to store the upcoming portion of the first live media content item and the upcoming portion of the second live media content item in an order based on the ordered queue without intervention from a user of the user device.

US Pat. No. 10,116,968

ARITHMETIC ENCODING-DECODING METHOD AND CODEC FOR COMPRESSION OF VIDEO IMAGE BLOCK

PEKING UNIVERSITY SHENZHE...

1. An arithmetic encoding-decoding method for compression of a video image block, the method comprising an encoding process and a decoding process,the encoding process comprising:
1) inputting an image block to be encoded into an encoder, wherein the encoder comprises a first module for acquiring information of an image block, a second module for extracting an encoding mode, a third module for acquiring an index of a reference frame, a fourth module for acquiring a probability model for encoding, and a fifth module for arithmetic encoding;
2) activating the first module to acquire information of the image block to be encoded and then to transmit the information of the image block to be encoded to the third module;
3) activating the second module to extract an encoding command of a weighted skip model and then to transmit the encoding command of the weighted skip model to the third module;
4) activating the third module to acquire an index of a reference frame according to the information of the image block to be encoded and the encoding command of the weighted skip model, the reference frame comprising a prediction block for reconstructing the image block to be encoded, and then to transmit the index of the reference frame to the fifth module;
5) activating the fourth module to acquire a context-based adaptive probability model for encoding and then to transmit the context-based adaptive probability model for encoding to the fifth module; and
6) activating the fifth module to perform arithmetic encoding of the index of the reference frame and writing arithmetic codes into an arithmetically encoded bitstream according to the context-based adaptive probability model for encoding;
the decoding process comprising:
7) transmitting the arithmetically encoded bitstream to a decoder, wherein the decoder comprises a sixth module for acquiring the arithmetically encoded bitstream, a seventh module for acquiring a probability model for decoding, an eighth module for arithmetic decoding, a ninth module for producing the index of the reference frame, a tenth module for acquiring the prediction block, and an eleventh module for calculating a reconstruction block;
8) activating the sixth module to receive the arithmetically encoded bitstream and then to transmit the arithmetically encoded bitstream to the eighth module;
9) activating the seventh module to acquire a context-based adaptive probability model for decoding and then to transmit the context-based adaptive probability model for decoding to the eighth module, wherein the context-based adaptive probability model for decoding is a statistical result of the greater probability event on each bit of the index of the reference frame;
10) activating the eighth module to perform arithmetic decoding of the arithmetically encoded bitstream according to the context-based adaptive probability model for decoding corresponding to each bit of the index of the reference frame, thereby acquiring a binary value of each bit of the index of the reference frame, and then to transmit the binary value of each bit of the index of the reference frame to the ninth module;
11) activating the ninth module to acquire the index of the reference frame according to the binary value of each bit of the index of the reference frame and then to transmit the index of the reference frame to the tenth module;
12) activating the tenth module to acquire the prediction block of the reference frame according to the index of the reference frame and then to transmit the prediction block of the reference frame to the eleventh module; and
13) activating the eleventh module to average pixel values of the prediction blocks to obtain a pixel value of a reconstruction block, wherein the reconstruction block is the image block to be encoded;
wherein:
in the encoder, the first module and the second module are physically connected to the third module; the third module is physically connected to the fourth module; and the fourth module is physically connected to the fifth module; and
in the decoder, the sixth module and the seventh module are physically connected to the eighth module; the eighth module is physically connected to the ninth module; the ninth module is physically connected to the tenth module; and the tenth module is physically connected to the eleventh module.

US Pat. No. 10,116,967

METHOD AND APPARATUS FOR CODING OF SAMPLE ADAPTIVE OFFSET INFORMATION

HFI INNOVATION INC., Zhu...

1. A method for Sample Adaptive Offset (SAO) processing of video data in a video decoder, the method comprising:receiving a block of processed-reconstructed pixels associated with a picture from a media or a processor, wherein the block of processed-reconstructed pixels are decoded from a video bitstream;
determining a SAO type index from the video bitstream, wherein the SAO type index is decoded according to truncated unary binarization, the SAO type index is decoded using CABAC (context-based adaptive binary arithmetic coding) with one context, or the SAO type index is decoded by CABAC using a context mode for a first bin associated with the SAO type index and using a bypass mode for any remaining bin associated with the SAO type index; and
applying SAO processing to the block of processed-reconstructed pixels based on SAO information including the SAO type index.
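
Truncated unary binarization, one of the decoding options recited for the SAO type index, can be sketched as follows; read_bin stands in for a CABAC bin decoder, and per the claim the first bin may use a context model while any remaining bins use bypass mode.

    # Count leading 1-bins, stopping at a 0-bin or at the maximum value c_max.
    def decode_truncated_unary(read_bin, c_max):
        value = 0
        while value < c_max and read_bin(use_context=(value == 0)) == 1:
            value += 1
        return value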

US Pat. No. 10,116,966

IMAGE DECODING DEVICE

Sharp Kabushiki Kaisha, ...

1. An image decoding device which decodes coded data obtained by coding image information, comprising:profile information decoding means for decoding profile information;
level information decoding means for decoding level information;
sublayer profile present flag decoding means for decoding a sublayer profile present flag (sub_layer_profile_present_flag) indicating the presence or absence of sublayer profile information regarding respective sublayers;
sublayer level present flag decoding means for decoding a sublayer level present flag (sub_layer_level_present_flag) indicating the presence or absence of sublayer level information regarding the respective sublayers; and
byte-aligned data decoding means for decoding byte-aligned data that is determined based on the number of sublayers and is inserted after the sublayer profile present flag and the sublayer level present flag, and before the sublayer profile information;
wherein the profile information decoding means refers to decoded sublayer profile present flags of the number of sublayers minus 1 and, in a case where the sublayer profile information is present, decodes the sublayer profile information,
wherein a first bit of the sublayer profile information is byte-aligned by the byte-aligned data, and
wherein the level information decoding means refers to decoded sublayer level present flags of the number of sublayers minus 1 and, in a case where the sublayer level information is present, decodes the sublayer level information.

US Pat. No. 10,116,965

THREE-DIMENSIONAL VIDEO ENCODING METHOD, THREE-DIMENSIONAL VIDEO DECODING METHOD, AND RELATED APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A three-dimensional video decoding method, comprising:decoding a video bitstream to obtain a single sample flag bit corresponding to a current image block in a current depth map;
performing detection on a first adjacent prediction sampling point and a second adjacent prediction sampling point of the current image block in the current depth map when the single sample flag bit obtained by decoding indicates that a decoding mode corresponding to the current image block is a single depth intra-frame mode;
constructing a sample candidate set according to results of the detection on the first adjacent prediction sampling point and the second adjacent prediction sampling point, wherein available adjacent prediction sampling points of the current image block comprise only the first adjacent prediction sampling point and the second adjacent prediction sampling point;
decoding the video bitstream to obtain a single sample index flag bit corresponding to the current image block;
obtaining, according to an index location indicated by the single sample index flag bit obtained by decoding, a candidate depth value located in the index location that is indicated by the single sample index flag bit and that is in the sample candidate set;
using the candidate depth value as a prediction sample value of some or all of pixels of the current image block; and
reconstructing the current image block using the prediction sample value of some or all of the pixels of the current image block.

US Pat. No. 10,116,963

VECTOR-BASED ENCODING TECHNIQUE FOR LOW-BANDWIDTH DELIVERY OR STREAMING OF VECTORIZABLE VIDEOS

DOT LEARN INC., Edison, ...

1. A method of encoding a media file, the method comprising:receiving a video stream depicting a drawing including at least one object being drawn on a drawing surface;
detecting, in the video stream, at least one path included in the drawing and representing the at least one object;
storing a plurality of coordinate sets representing the at least one path;
identifying a first coordinate set of the plurality of coordinate sets;
executing an interpolation function to determine an interpolated path represented by a subset of the plurality of coordinate sets, the subset of the plurality of coordinate sets not including the first coordinate set;
determining a path length of the interpolated path;
determining that the interpolated path represents the at least one path to a degree of accuracy exceeding a defined threshold;
and
storing the subset of the plurality of coordinate sets in a text file format.

US Pat. No. 10,116,962

IMAGE CODING DEVICE, IMAGE DECODING DEVICE, IMAGE CODING METHOD AND IMAGE DECODING METHOD

FUJITSU LIMITED, Kawasak...

1. An image coding device comprising:a generation circuit configured to generate first palette information from a local decoded image of a first block used for predicting second palette information of a second block in a coding target image when the first block has not received palette coding, wherein a coded image of the first block is generated by coding the first block by using a coding method other than the palette coding and the local decoded image of the first block is generated by locally decoding the coded image of the first block in the image coding device;
a storage circuit configured to store the first palette information; and
a coding circuit configured to perform prediction coding on the second palette information by using the first palette information so as to generate coded palette information.

US Pat. No. 10,116,959

SPATIOTEMPORAL PREDICTION FOR BIDIRECTIONALLY PREDICTIVE (B) PICTURES AND MOTION VECTOR PREDICTION FOR MULTI-PICTURE REFERENCE MOTION COMPENSATION

Microsoft Technology Lice...

1. A computing device comprising one or more processing units, volatile memory, non-volatile memory, and storage, the non-volatile memory and/or storage having stored therein computer-executable instructions for causing the computing device, when programmed thereby, to perform operations comprising:encoding a current frame in a sequence of video frames, including, for a given block of the current frame:
determining a predicted motion vector (“MV”) of a given MV for the given block using multiple MV predictors from the current frame, the multiple MV predictors including:
a first MV predictor from a first surrounding block of the current frame, the first surrounding block being above and to the left of the given block;
a second MV predictor from a second surrounding block of the current frame, the second surrounding block being above the given block and separated from the first surrounding block;
a third MV predictor from a third surrounding block of the current frame, the third surrounding block being above the given block and adjacent the second surrounding block;
a fourth MV predictor from a fourth surrounding block of the current frame, the fourth surrounding block being left of the given block and separated from the first surrounding block; and
a fifth MV predictor from a fifth surrounding block of the current frame, the fifth surrounding block being left of the given block and adjacent the fourth surrounding block; and
performing motion compensation for the given block, using the given MV for the given block, relative to a reference frame to determine a motion-compensated prediction for the given block; and
outputting encoded data for the current frame.
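
The claim gathers motion vector predictors from five surrounding blocks but, in this excerpt, does not state how they are combined; a component-wise median is one common choice and is sketched here purely for illustration.

    from statistics import median

    # Combine candidate MVs from the surrounding blocks into one predicted MV.
    def predict_mv(predictors):
        """predictors: list of (mvx, mvy) tuples from the surrounding blocks."""
        xs = [mv[0] for mv in predictors]
        ys = [mv[1] for mv in predictors]
        return (median(xs), median(ys))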

US Pat. No. 10,116,958

APPARATUS FOR ENCODING AN IMAGE

1. An apparatus for encoding an image, comprising:a picture divider configured to determine a size and a prediction mode of each coding block;
an intra predictor configured to determine an intra prediction mode of a current block and generate a prediction block corresponding to the current block according to the intra prediction mode;
a transformer configured to transform a residual block obtained by calculating difference between the current block and the prediction block to generate a transform block;
a quantizer configured to determine a quantization step size of a current coding block and quantize the transform block using the quantization step size to generate a quantized transform block;
a scanner configured to scan quantized transform coefficients of the quantized transform block to generate one-dimensional (1D) quantized transform coefficients; and
an entropy coder configured to entropy-code the 1D quantized transform coefficients,
wherein the quantization step size is encoded using a quantization step size predictor and the quantization step size predictor is generated using a quantization step size of a left coding block of the current coding block and a quantization step size of an above coding block of the current coding block,
wherein, when the quantization step size of the left coding block of the current coding block and the quantization step size of the above coding block of the current coding block are unavailable, a quantization step size of a previous coding block in scan order is selected as the quantization step size predictor of the current coding block, and
wherein, when a size of the transform block is larger than 4×4, the scanner divides the quantized transform coefficients of the quantized transform block into a plurality of sub-blocks and scans the plurality of sub-blocks according to a scan pattern determined by the intra prediction mode of the current block.
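
The quantization step size predictor rule can be sketched as below; averaging the two neighbours when both are available is an assumption, since the claim only states that both are used when available and that the previous block in scan order is used when neither is.

    # None marks an unavailable neighbour.
    def qstep_predictor(left_qs, above_qs, prev_in_scan_qs):
        available = [q for q in (left_qs, above_qs) if q is not None]
        if not available:
            return prev_in_scan_qs          # fall back to scan-order neighbour
        return sum(available) // len(available)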

US Pat. No. 10,116,957

DUAL FILTER TYPE FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

GOOGLE INC., Mountain Vi...

1. An apparatus for encoding or decoding a video frame, comprising:a processor configured to execute instructions stored in a non-transitory storage medium to:
determine whether a first component of a motion vector represents sub-pixel motion;
determine whether a second component of the motion vector represents sub-pixel motion;
responsive to a determination that the first component of the motion vector represents sub-pixel motion and a determination that the second component of the motion vector represents sub-pixel motion:
determine a first interpolation filter for motion prediction using the motion vector along a first axis;
determine a second interpolation filter for motion prediction using the motion vector along a second axis different from the first axis, the second interpolation filter being different from the first interpolation filter;
apply the first interpolation filter to pixels of a reference frame identified using the motion vector to generate a temporal pixel block; and
apply the second interpolation filter to the temporal pixel block to generate a first prediction block for a first block of the video frame; and
at least one of:
encode the first block of the video frame using the first prediction block by producing a residual block as a difference between the first block and the first prediction block, and encoding the residual block into an encoded bitstream for decoding by a decoder; or
decode the first block of the video frame using the first prediction block by decoding an encoded residual block from the encoded bitstream to generate a residual block, and reconstructing the first block for display by combining the residual block with the first prediction block.
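
The two-pass, per-axis filtering recited above amounts to a separable interpolation with a different filter on each axis; the sketch below assumes normalized filter taps and simple edge replication, neither of which is taken from the patent.

    # Apply one 1-D filter along rows, then a different 1-D filter along columns.
    def apply_1d_filter(rows, taps):
        half = len(taps) // 2
        out = []
        for row in rows:
            padded = [row[0]] * half + list(row) + [row[-1]] * half   # replicate edges
            out.append([sum(t * padded[i + k] for k, t in enumerate(taps))
                        for i in range(len(row))])
        return out

    def interpolate_block(ref_block, taps_axis1, taps_axis2):
        temp = apply_1d_filter(ref_block, taps_axis1)        # first axis -> temporal block
        transposed = [list(col) for col in zip(*temp)]
        filtered = apply_1d_filter(transposed, taps_axis2)   # second, different filter
        return [list(col) for col in zip(*filtered)]         # first prediction block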

US Pat. No. 10,116,956

VIDEO PREDICTIVE ENCODING DEVICE, VIDEO PREDICTIVE ENCODING METHOD, VIDEO PREDICTIVE ENCODING PROGRAM, VIDEO PREDICTIVE DECODING DEVICE, VIDEO PREDICTIVE DECODING METHOD, AND VIDEO PREDICTIVE DECODING PROGRAM

NTT DOCOMO, INC., Tokyo ...

1. A video predictive decoding method executed by a video predictive decoding device, comprising:receiving encoded data comprising compressed picture data representative of a compressed form of a plurality of pictures forming a video sequence, wherein the plurality of pictures are encoded by either intra-frame prediction or inter-frame prediction;
decoding the compressed form of the plurality of pictures to reconstruct the plurality of pictures;
storing in a decoded picture buffer (DPB) one or more of the reconstructed pictures as reference pictures to be used for decoding a subsequent picture;
retrieving (i) a target picture frame size indicative of a frame size of a target picture, (ii) a maximum frame size (MaxLumaFS) defined in advance by level information and indicative of a maximum size of a reconstructed picture storable in the DPB, and (iii) a size of the frame memory (MFSBuffer) expressed by a preset maximum number of reconstructed pictures storable in the DPB; and
setting an adaptive maximum number (maxDPBsize) of the reconstructed pictures storable in the DPB equal to the MFSBuffer, a doubled MFSBuffer or a quadrupled MFSBuffer based on a relationship between the MaxLumaFS and the target picture frame size, and
wherein a number of the reconstructed pictures stored in the DPB is no more than a number set in the maxDPBsize.
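
The adaptive DPB sizing can be sketched as a simple comparison against MaxLumaFS; the specific thresholds (one quarter and one half of MaxLumaFS) are assumptions for illustration, since the claim only states that the result is MFSBuffer, doubled, or quadrupled.

    # maxDPBsize as a function of the target picture frame size.
    def adaptive_max_dpb_size(target_frame_size, max_luma_fs, mfs_buffer):
        if target_frame_size <= max_luma_fs // 4:
            return 4 * mfs_buffer
        if target_frame_size <= max_luma_fs // 2:
            return 2 * mfs_buffer
        return mfs_buffer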

US Pat. No. 10,116,955

FILTERING MODE FOR INTRA PREDICTION INFERRED FROM STATISTICS OF SURROUNDING BLOCKS

SUN PATENT TRUST, New Yo...

1. A system, comprising:a first apparatus for encoding image data on a block-by-block basis; and
a second apparatus for decoding encoded image data on a block-by-block basis,
wherein the first apparatus includes:
a first processor; and
a first non-transitory memory having stored thereon executable instructions, which when executed, cause the first processor to perform:
deriving first characteristics between image data of each of a plurality of previously processed blocks spatially adjacent to a current block, the previously processed blocks being used for intra prediction of the current block;
deciding on the basis of the first characteristics whether filtering is to be applied or not to the previously processed blocks; and
predicting a corresponding prediction block to the current block from the image data of the previously processed blocks,
wherein in said predicting of the corresponding prediction block to the current block, intra prediction is performed using filtered image data of the previously processed blocks to which the filtering is applied, when said deciding decides that the filtering is to be applied to the previously processed blocks,
wherein the second apparatus includes:
a second processor; and
a second non-transitory memory having stored thereon executable instructions, which when executed, cause the second processor to perform:
deriving second characteristics between image data of each of a plurality of previously decoded blocks spatially adjacent to a current block, the previously decoded blocks being used for intra prediction of the current block by the second processor;
deciding on the basis of the second characteristics whether the filtering is to be applied or not to the previously decoded blocks; and
predicting the current block from the image data of the previously decoded blocks by intra prediction by the second processor, and
wherein in the predicting of the current block by the second processor, intra prediction is performed using filtered image data of the previously decoded blocks to which the filtering is applied, when said deciding by the second processor decides that the filtering is to be applied to the previously decoded blocks.

US Pat. No. 10,116,954

HIGH DYNAMIC RANGE ADAPTATION OPERATIONS AT A VIDEO DECODER

ARRIS Enterprises LLC, S...

1. A method of decoding a bitstream, comprising:receiving a bitstream at a decoder;
decoding said bitstream with said decoder into color values and metadata items indicating information about adaptive post-processing operations to be performed by said decoder;
performing, with said decoder, one or more high dynamic range (HDR) adaptation operations on said color values based on said metadata items; and
performing, with said decoder, one or more fixed post-processing operations to reconstruct an HDR video from said color values,
wherein said one or more HDR adaptation operations convert said color values into a format expected by said one or more fixed post-processing operations.

US Pat. No. 10,116,953

DECODING A PICTURE BASED ON A REFERENCE PICTURE SET ON AN ELECTRONIC DEVICE

Huawei Technologies Co., ...

1. A method for decoding a video bitstream, the method comprising:identifying a reference picture set from the video bitstream, wherein the video bitstream includes at least one flag associated with one of long and short term reference pictures of the reference picture set, wherein a current picture associated with the reference picture set is encoded in the video bitstream; and
decoding the current picture from the video bitstream using inter prediction with the reference picture set; and
parsing the at least one flag, wherein the at least one flag indicates keeping the at least one reference picture for decoding subsequent pictures.

US Pat. No. 10,116,952

BITSTREAM DECODING METHOD AND BITSTREAM DECODING CIRCUIT

MSTAR SEMICONDUCTOR, INC....

1. A method for decoding a bitstream, the bitstream comprising a plurality of frames, the method comprising:receiving the bitstream at a decoding circuit, which performs steps of:
obtaining a display order of a current frame in the bitstream by parsing a header of the current frame, the current frame belonging to a group; and
determining whether to decode the current frame or to drop the current frame instead of decoding it, according to the display order of the current frame,
wherein the step of determining whether to decode the current frame or to drop the current frame instead of decoding it according to the display order of the current frame comprises:
determining whether the display order of the current frame is later than display orders of previous frames in the group according to the display order of the current frame, and
decoding the current frame according to a determination result indicating that the display order of the current frame is later than the display orders of all the previous frames in the group thereby ensuring that at least all P-frames of the group are decoded and not dropped.
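
The display-order test reduces to a comparison against all previously seen frames of the group; a one-line sketch:

    # Decode only if the frame's display order is later than every previous
    # frame in its group; otherwise the frame is a candidate for dropping.
    def should_decode(display_order, seen_orders_in_group):
        return all(display_order > prev for prev in seen_orders_in_group)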

US Pat. No. 10,116,951

VIDEO DECODER WITH CONSTRAINED DYNAMIC RANGE

Sharp Laboratories of Ame...

1. A decoder that decodes video comprising:(a) said decoder receives a bitstream containing quantized coefficient level values representative of a block of video representative of a plurality of pixels and a quantization parameter related to said block of video;
(b) a de-quantizer of said decoder de-quantizing said quantized coefficient level values based upon said quantized coefficient level values, a transform block size, and said quantization parameter when said quantized coefficient level values, said transform block size, and said quantization parameter are jointly within a predefined range of acceptable values to limit a dynamic range of said dequantizing;
(c) said decoder inverse transforming said dequantized coefficients to determine a decoded residue;
(d) where the bitstream provided to said de-quantizer includes a restriction on the largest permitted level value allowed in the bitstream and the restriction is selected from among a plurality of different restrictions available to the decoder.

US Pat. No. 10,116,949

METHOD AND APPARATUS FOR ENCODING VIDEO SIGNAL AND METHOD AND APPARATUS FOR DECODING VIDEO SIGNAL

SAMSUNG ELECTRONICS CO., ...

1. A method for decoding a video, the method comprising:obtaining quantized transform coefficients of a transform block from a bitstream;
generating scaled transform coefficients of the transform block by performing inverse-quantization and scaling on the quantized transform coefficients;
generating intermediate sample values by performing vertical inverse-transformation on the scaled transform coefficients;
generating residual values of the transform block by performing horizontal inverse-transformation on the intermediate sample values;
generating a prediction block by performing intra prediction or inter prediction; and,
restoring sample values using the residual values of the transform block and the prediction block,
wherein the scaling on the quantized transform coefficients comprises:
determining an offset value based on a scaling variable, adding the offset value to the inverse-quantized transform coefficients, and bit-shifting the resulting transform coefficients by the scaling variable,
wherein the scaling variable is generated by using a size of the transform block and bit depth of samples, and
wherein the vertical inverse-transformation is performed by multiplying a transform matrix by the scaled transform coefficients.
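
The add-offset-then-shift scaling step is a standard fixed-point rounding pattern; the particular offset of half the shift range below is an assumption for illustration.

    # Scale inverse-quantized coefficients: add a rounding offset, then shift.
    def scale_coefficients(inverse_quantized, shift):
        offset = (1 << (shift - 1)) if shift > 0 else 0
        return [(c + offset) >> shift for c in inverse_quantized]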

US Pat. No. 10,116,948

SYSTEM FOR TEMPORAL IDENTIFIER HANDLING FOR HYBRID SCALABILITY

SHARP KABUSHIKI KAISHA, ...

1. A method for decoding a video bitstream comprising the steps of: (a) receiving a base bitstream representative of a coded video sequence; (b) receiving at least one enhancement bitstream representative of said coded video sequence; (c) receiving a video parameter set containing syntax elements that apply to said base bitstream and said at least one enhancement bitstream, where said syntax elements selectively signal bitrate and picture rate information for said base bitstream based upon whether said base bitstream is externally decoded or internally specified.

US Pat. No. 10,116,947

METHOD AND APPARATUS FOR CODING MULTILAYER VIDEO TO INCLUDE SCALABLE EXTENSION TYPE INFORMATION IN A NETWORK ABSTRACTION LAYER UNIT, AND METHOD AND APPARATUS FOR DECODING MULTILAYER VIDEO

SAMSUNG ELECTRONICS CO., ...

1. A multilayer video encoding method comprising:encoding a multilayer video;
generating network abstraction layer (NAL) units for data units included in the encoded multilayer video;
adding scalable extension type information, for a scalable extension of the multilayer video, to a video parameter set (VPS) NAL unit among the NAL units, the VPS NAL unit comprising VPS information that is information applied to the multilayer video; and
outputting an encoded scalable video bitstream including the encoded multilayer video and the generated NAL units with added scalable extension type information;
wherein the adding of the scalable extension type information comprises adding, to a header of the VPS NAL unit: 1) a scalable extension type table index indicating a scalable extension type table among scalable extension type tables including combinations of scalable extension types that are applicable to the multilayer video; and 2) a plurality of sub-layer indexes indicating specific scalable extension types included in a combination among the combinations of the scalable extension types included in the scalable extension type table indicated by the scalable extension type table index.

US Pat. No. 10,116,946

IMAGE ENCODING/DECODING METHOD AND DEVICE

ELECTRONICS AND TELECOMMU...

1. A method for picture decoding supporting layers, the method comprising:receiving a bitstream comprising the layers;
acquiring information on a maximum number of sub-layers for each of the layers by decoding the bitstream; and
acquiring a residual block of a current block by decoding the bitstream,
wherein the information on the maximum number of sub-layers is included in video parameter set extension information and signaled, and
wherein a video parameter set comprises information on a maximum number of sub-layers, and
in response to the video parameter set extension information not comprising the information on a maximum number of sub-layers for a layer among the layers, the maximum number of sub-layers for the layer is derived based on the information included in the video parameter set.

US Pat. No. 10,116,944

VIDEO ENCODING DEVICE, VIDEO ENCODING METHOD, AND PROGRAM

NEC CORPORATION, Tokyo (...

1. A video encoding device comprising:first video encoding section, implemented by hardware including at least one processor, which encodes an input image to generate first coded data;
coded data transcoding section, implemented by the at least one processor, which transcodes the first coded data generated by the first video encoding section, to generate second coded data; and
second video encoding section, implemented by the at least one processor, which generates a prediction signal with regard to the input image based on the second coded data supplied from the coded data transcoding section,
wherein the first video encoding section comprises:
dividing section which divides the input image into a plurality of image areas; and
one or more encoding sections which perform encoding in units of blocks, each encoding section corresponding to an image area that includes a plurality of blocks, and
wherein the first video encoding section also encodes blocks that are included in image areas adjacent to the image area for which the encoding section performs encoding, together with the blocks in the image area for which the encoding section performs encoding.

US Pat. No. 10,116,943

ADAPTIVE VIDEO COMPRESSION FOR LATENCY CONTROL

NVIDIA CORPORATION, Sant...

1. A computer-implemented method for adaptively compressing video frames, the method comprising:encoding a first plurality of video frames based on a first video compression algorithm to generate first encoded video frames;
transmitting the first encoded video frames to a client device;
receiving a user input event;
switching from the first video compression algorithm to a second video compression algorithm in response to the user input event;
encoding a second plurality of video frames based on the second video compression algorithm to generate second encoded video frames;
transmitting the second encoded video frames to the client device;
determining that a threshold period of time has elapsed since receiving the user input event; and
in response to determining that the threshold period of time has elapsed, switching from the second video compression algorithm to the first video compression algorithm, wherein the first video compression algorithm requires less network bandwidth for transmitting data than the second video compression algorithm, and the first video compression algorithm results in greater latency when encoding data than the second video compression algorithm.
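
The switching policy can be sketched as a small state machine keyed to the time since the last user input; the two-second threshold and the mode names are placeholders, not values from the patent.

    import time

    class AdaptiveEncoderPolicy:
        """Pick the compression mode based on time since the last user input."""
        def __init__(self, threshold_s=2.0):
            self.threshold_s = threshold_s
            self.last_input_time = None

        def on_user_input(self):
            self.last_input_time = time.monotonic()

        def current_mode(self):
            recent = (self.last_input_time is not None and
                      time.monotonic() - self.last_input_time < self.threshold_s)
            # Second algorithm: lower latency, more bandwidth, used right after input.
            # First algorithm: higher latency, less bandwidth, used otherwise.
            return "low_latency" if recent else "low_bandwidth"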

US Pat. No. 10,116,942

METHOD AND APPARATUS FOR DECODING A VIDEO USING AN INTRA PREDICTION

SK TELECOM CO., LTD., Se...

1. A method of decoding a video using an intra-prediction at a video decoding apparatus, comprising:reconstructing information on an intra-prediction mode of a current block to be decoded from a bitstream, wherein the current block is a square block;
reconstructing transform block information from the bitstream and reconstructing transform coefficients corresponding to each of one or more transform blocks divided in a quad tree structure from the current block; and
reconstructing the current block based on the information on the intra-prediction mode and the reconstructed transform coefficients of the transform blocks,
wherein the reconstructing of the transform block information comprises:
reconstructing a split transform flag indicating whether a block corresponding thereto is divided into four equal-sized square blocks of a low layer in the quad tree structure;
reconstructing a first chroma coded block flag and a second chroma coded block flag of the corresponding block, irrespective of whether or not the split transform flag indicates that the corresponding block is divided, wherein:
the first chroma coded block flag indicates whether a first chrominance component of the corresponding block has at least one non-zero transform coefficient, and
the second chroma coded block flag indicates whether a second chrominance component of the corresponding block has at least one non-zero transform coefficient; and
when the split transform flag indicates that the corresponding block is not further divided, reconstructing a luma coded block flag indicating whether a luminance component of the corresponding block has at least one non-zero transform coefficient and identifying the corresponding block which is not further divided as one of the transform blocks, wherein, when the split transform flag indicates that the corresponding block is further divided, the luma coded block flag of the corresponding block is not reconstructed from the bitstream.
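
The parsing order recited above can be sketched as a recursive walk of the transform quad tree; read_flag stands in for the bitstream reader, and the minimum block size and exact CABAC details are outside this excerpt.

    # Chroma coded-block flags are read at every node, the luma flag only at
    # leaves; split nodes recurse into four equal-sized square sub-blocks.
    def parse_transform_tree(read_flag, x, y, size, min_size, blocks):
        split = size > min_size and read_flag("split_transform_flag")
        cbf_cb = read_flag("cbf_cb")          # first chroma coded block flag
        cbf_cr = read_flag("cbf_cr")          # second chroma coded block flag
        if not split:
            cbf_luma = read_flag("cbf_luma")  # only for undivided (leaf) blocks
            blocks.append((x, y, size, cbf_luma, cbf_cb, cbf_cr))
            return
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_transform_tree(read_flag, x + dx, y + dy, half, min_size, blocks)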

US Pat. No. 10,116,941

INTER PREDICTION METHOD AND APPARATUS THEREFOR

LG Electronics Inc., Seo...

1. A video decoding apparatus, comprising:an entropy-decoder configured to receive information on a parallel merge level which indicates a size of a parallel merging unit region;
a predictor configured to generate a merging candidate list for a current block when a merge mode is applied to the current block, to derive motion information of the current block based on one of a plurality of merging candidates constituting the merging candidate list, to derive prediction samples of the current block based on the derived motion information; and
an adder configured to generate a reconstructed picture based on the prediction samples,
wherein the current block corresponds to a prediction unit (PU) belonging to the parallel merging unit region,
wherein the PU is partitioned from a coding unit (CU),
wherein for the PU, in the CU and the parallel merging unit region, spatial merge candidates which are identical to spatial merge candidates of a 2N×2N PU which has a same size as the parallel merging unit region are used for the merging candidate list,
wherein the spatial merge candidates for the PU are derived from a lower left corner neighboring block, a left neighboring block, an upper right corner neighboring block, an upper neighboring block, and an upper left corner neighboring block of the parallel merging unit region, and
wherein the parallel merging unit region is determined based on the parallel merge level, and the information on the parallel merge level is received through a picture parameter set.
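
Under the parallel merge level recited above, every PU inside the parallel merging unit region draws its spatial merge candidates from the neighbours of the region itself, as a 2N×2N PU of the region's size would. The short Python sketch below lists those candidate positions; the labels and the exact sample offsets follow the common lower-left/left/upper-right/upper/upper-left convention and are an assumption here.

def spatial_merge_candidate_positions(region_x, region_y, region_size):
    """Return the luma-sample positions of the spatial merge candidates
    derived from the corners and sides of the parallel merging unit
    region (not of the individual PU)."""
    x, y, s = region_x, region_y, region_size
    return {
        "lower_left":  (x - 1, y + s),      # lower left corner neighbouring block
        "left":        (x - 1, y + s - 1),  # left neighbouring block
        "upper_right": (x + s, y - 1),      # upper right corner neighbouring block
        "upper":       (x + s - 1, y - 1),  # upper neighbouring block
        "upper_left":  (x - 1, y - 1),      # upper left corner neighbouring block
    }

# Example: a 32x32 parallel merging unit region at (64, 64).
print(spatial_merge_candidate_positions(64, 64, 32))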

US Pat. No. 10,116,940

METHOD FOR ENCODING VIDEO, METHOD FOR DECODING VIDEO, AND APPARATUS USING SAME

LG ELECTRONICS INC., Seo...

1. A video decoding method by a video decoder, comprising:receiving first flag information indicating whether a picture in a reference layer is not needed for an inter-layer prediction and second flag information indicating whether the reference layer is directly referred by a current layer;
decoding and storing pictures in the reference layer;
deriving an inter-layer reference picture for a current block from at least one of the decoded pictures in the reference layer based on the first flag information and the second flag information;
constructing a reference picture list comprising the inter-layer reference picture in the reference layer and a reference picture in the current layer;
deriving a predicted sample of the current block in the current layer based on the inter-layer reference picture comprised in the reference picture list; and
deriving a reconstructed sample of the current block based on the predicted sample and a residual sample of the current block,
wherein when the first flag information indicates that a specific picture in the reference layer is not needed for the inter-layer prediction, the specific picture is not comprised in the inter-layer reference picture set,
wherein when the second flag information indicates that the reference layer is not directly referred by the current layer, the reference layer is not used for the inter-layer prediction of the current layer, and
wherein the first flag information is received through a slice segment header, and in all slice segment headers of the picture in the reference layer, the value of the first flag information is set to the same value.

US Pat. No. 10,116,939

METHOD OF DERIVING MOTION INFORMATION

INFOBRIDGE PTE. LTD., Si...

1. A method of encoding video data in a merge mode, the method comprising:determining motion information of a current block;
generating a prediction block of the current block using the motion information;
generating a residual block using the current block and the prediction block;
transforming the residual block to generate a transformed block;
quantizing the transformed block using a quantization parameter and a quantization matrix to generate a quantized block;
scanning coefficient components of the quantized block using a diagonal scan;
entropy-coding the scanned coefficient components of the quantized block; and
encoding the motion information,
wherein encoding the motion information comprises the sub-steps of:
constructing a merge list using available spatial and temporal merge candidates;
selecting a merge predictor among merge candidates of the merge list; and
encoding a merge index specifying the merge predictor,
wherein when the current block is a second prediction unit partitioned by asymmetric partitioning, the spatial merge candidate corresponding to a first prediction unit partitioned by the asymmetric partitioning is not to be listed on the merge list, and
wherein the quantization parameter is determined per quantization unit and is encoded using a quantization parameter predictor,
a minimum size of the quantization unit is adjusted by a picture parameter set,
if two or more quantization parameters are available among a left quantization parameter, an above quantization parameter and a previous quantization parameter of a current coding unit, the quantization parameter predictor is generated using first two available quantization parameters among the left quantization parameter, the above quantization parameter and the previous quantization parameter, and
if only one is available among the left quantization parameter, the above quantization parameter and the previous quantization parameter, the available quantization parameter is set as the quantization parameter predictor.
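
The quantization-parameter predictor rule at the end of the claim lends itself to a direct sketch. In the Python function below, unavailable QPs are passed as None, and the combination of the first two available QPs is assumed to be a rounded average, which the claim itself does not prescribe.

def qp_predictor(left_qp, above_qp, prev_qp):
    # Unavailable quantization parameters are passed as None.
    available = [qp for qp in (left_qp, above_qp, prev_qp) if qp is not None]
    if len(available) >= 2:
        # Use the first two available QPs; a rounded average is assumed here.
        return (available[0] + available[1] + 1) >> 1
    if len(available) == 1:
        return available[0]
    return None  # the claim does not recite the all-unavailable case

# Example: only the previous quantization parameter is available.
print(qp_predictor(None, None, 28))   # -> 28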

US Pat. No. 10,116,938

SYSTEM FOR CODING HIGH DYNAMIC RANGE AND WIDE COLOR GAMUT SEQUENCES

ARRIS Enterprises LLC, S...

1. A method of encoding a digital video, comprising:receiving a digital video data set including at least one of high dynamic range (HDR) and wide color gamut (WCG) video data;
converting a portion of the digital video data set from an input color space to an intermediate color space to generate intermediate color converted video data and generating metadata identifying the input color space, the intermediate color space and the portion of the digital video data set;
applying a compression transfer function to the intermediate color converted video data to generate compressed video data and generating metadata characterizing the compression transfer function and identifying the portion of the digital video data set;
converting the compressed video data from the intermediate color space to a final color space to generate final color converted video data and generating metadata identifying the intermediate color space, the final color space and the portion of the digital video data set;
identifying a characteristic of the portion of the digital video data set;
modifying a perceptual transfer function according to the identified characteristic;
applying the modified perceptual transfer function to the portion of the digital video data set to generate a perceptually modified portion of the digital video data set;
applying a perceptual normalization including at least one of a gain factor or an offset to the perceptually modified digital video data set to generate a perceptually normalized portion of the digital video data set;
encoding the perceptually normalized portion of the video data set to generate a bit stream;
combining the metadata identifying the input color space and the intermediate color space, the metadata characterizing the compression transfer function and the metadata identifying the final color space with the metadata that indicates the modification of the perceptual transfer function to generate combined metadata;
wherein the portion of the digital video data to which the perceptual transfer function is applied includes the final color converted video data; and
transmitting, to a decoder, the bit stream and metadata that indicates the modification of the perceptual transfer function, that identifies the perceptual normalization, and that identifies the portion of the video data set; wherein the transmitting transmits the bit stream and the combined metadata to the decoder.
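
The claim describes a chain of colour-space conversions and transfer functions, each stage contributing metadata that is combined and transmitted alongside the bit stream. The Python sketch below mirrors that chain at a high level; every conversion and transfer function is supplied by the caller as a placeholder, and the trivial stand-ins in the example call are assumptions, not the patented transforms.

import numpy as np

def encode_hdr_wcg_portion(pixels, to_intermediate, compression_tf,
                           to_final, perceptual_tf, characteristic_fn,
                           gain=1.0, offset=0.0):
    """Apply the per-portion processing chain and collect per-stage metadata."""
    metadata = []

    data = to_intermediate(pixels)
    metadata.append({"stage": "intermediate colour space conversion"})

    data = compression_tf(data)
    metadata.append({"stage": "compression transfer function"})

    data = to_final(data)
    metadata.append({"stage": "final colour space conversion"})

    characteristic = characteristic_fn(data)       # e.g. peak luminance of the portion
    data = perceptual_tf(data, characteristic)     # perceptual TF modified by the characteristic
    metadata.append({"stage": "perceptual transfer function",
                     "characteristic": float(characteristic)})

    data = data * gain + offset                    # perceptual normalization
    metadata.append({"stage": "perceptual normalization", "gain": gain, "offset": offset})

    return data, metadata                          # samples to encode, combined metadata

# Illustrative call with trivial stand-in transforms.
pixels = np.random.rand(4, 4, 3)
out, meta = encode_hdr_wcg_portion(
    pixels,
    to_intermediate=lambda p: p,                    # identity stand-in
    compression_tf=lambda p: np.power(p, 1 / 2.4),  # gamma-like stand-in
    to_final=lambda p: p,                           # identity stand-in
    perceptual_tf=lambda p, c: p / max(c, 1e-6),    # normalize by the characteristic
    characteristic_fn=np.max,
)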

US Pat. No. 10,116,937

ADJUSTING QUANTIZATION/SCALING AND INVERSE QUANTIZATION/SCALING WHEN SWITCHING COLOR SPACES

Microsoft Technology Lice...

1. A computing device comprising:one or more buffers configured to store an image or video; and
an image encoder or video encoder configured to perform operations comprising:
encoding units of the image or video to produce encoded data, including, when switching from a first color space to a second color space between two of the units, adjusting final quantization parameter (“QP”) values or intermediate QP values for color components of the second color space according to per component color space adjustment factors, wherein the first color space is RGB and the second color space is YCoCg, and wherein the per component color space adjustment factors adjust the final QP values or intermediate QP values for the color components of the second color space by offsets of −5, −3 and −5 for Y, Co and Cg components, respectively; and
outputting the encoded data as part of a bitstream.
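
The fixed per-component offsets recited above (−5 for Y, −3 for Co, −5 for Cg) can be applied as a small table lookup when a unit switches from RGB to YCoCg coding. The Python sketch below assumes a 0..51 QP range for clipping, which is an implementation choice rather than part of the claim.

# Per-component QP adjustment when switching from RGB to YCoCg.
YCOCG_QP_OFFSETS = {"Y": -5, "Co": -3, "Cg": -5}

def adjust_qp_for_ycocg(qp_rgb):
    """Return per-component QP values for YCoCg-coded units, given the QP
    used for the preceding RGB-coded units (clipped to an assumed 0..51 range)."""
    return {comp: max(0, min(51, qp_rgb + off))
            for comp, off in YCOCG_QP_OFFSETS.items()}

# Example: a unit coded at QP 30 in RGB maps to Y=25, Co=27, Cg=25 in YCoCg.
print(adjust_qp_for_ycocg(30))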

US Pat. No. 10,116,936

MOVING IMAGE CODING DEVICE, MOVING IMAGE DECODING DEVICE, MOVING IMAGE CODING METHOD, AND MOVING IMAGE DECODING METHOD

1. A moving image coding device that divides an image into MBs and codes the MBs, the moving image coding device having a memory and a processor, the processor comprising:a coarse search unit that calculates a moving amount and a moving direction of each of the MBs;
an MB parallel processing unit that performs preprocessing to code the image with respect to each of the MBs that are contained in an MB line constituting the image and for which the moving amount and the moving direction are calculated, and writes the resulting MB information in a storage unit in the processing order of the MBs;
a coding unit that reads out the MB information stored in the storage unit in a raster order and codes the MBs; and
an MB line parallel processing unit that configures the MBs arranged in a horizontal direction as an MB line, performs the preprocessing with respect to each MB line, and includes a plurality of the MB parallel processing units;
wherein the moving image coding device is operable in two modes, switchable between a mode performed by one or more of the MB parallel processing units included in the MB line parallel processing unit and a mode performed by one or more of the MB parallel processing units included in a plurality of MB line parallel processing units.

US Pat. No. 10,116,935

IMAGE ENCODING METHOD, IMAGE DECODING METHOD, IMAGE ENCODING DEVICE, IMAGE DECODING DEVICE, AND IMAGE ENCODING/DECODING DEVICE

SUN PATENT TRUST, New Yo...

1. An image decoding method for decoding an image from a bitstream on a per block basis, the image decoding method comprising:predicting a current block in the image using a reference block different from the current block, to generate a prediction block; and
generating a reconstructed block using the prediction block,
wherein the generating includes:
first filtering for filtering a boundary between the reconstructed block and a decoded neighboring block neighboring the current block, using a first filter strength which is set using first prediction information for the prediction of the current block and second prediction information for prediction of the decoded neighboring block;
second filtering for filtering the boundary using a second filter strength; and
determining whether or not the boundary is a first boundary, the first boundary being at least one of a tile boundary and a slice boundary,
wherein the first filtering is in-loop filtering in a loop in which a filtered reconstructed block is used as a reference block for another block,
wherein the second filtering is post filtering outside the loop,
wherein the second filtering is performed without performing the first filtering when the determining determines that the boundary is the first boundary,
wherein the second filter strength is set based on supplemental information included in a header of the bitstream,
wherein the determining determines whether or not the boundary is the first boundary based on the supplemental information, and
wherein the supplemental information indicates (i) whether or not filtering the tile boundary is enabled and (ii) whether or not filtering the slice boundary is enabled.

US Pat. No. 10,116,934

IMAGE PROCESSING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. An image processing method implemented by an encoder, the method comprising:acquiring N pieces of motion information from N adjacent image blocks adjacent to a current image block, wherein the N adjacent image blocks correspond to the N pieces of motion information, wherein the N pieces of motion information indicate N reference image blocks in a reference image of the current image block, wherein the N pieces of motion information correspond to the N reference image blocks, and wherein N is a positive integer;
determining candidate motion information from the N pieces of motion information according to a preset rule, wherein the candidate motion information comprises two or more pieces of information of the N pieces of motion information;
determining, in the reference image, a location range of a to-be-stored pixel according to the candidate motion information;
storing all pixels in the location range, wherein the location range covers two or more candidate reference image blocks, wherein the candidate reference image blocks comprise two or more image blocks of the N reference image blocks, and wherein the candidate reference image block is an image block corresponding to the candidate motion information;
reading the pixels in the location range; and
performing encoding processing on the current image block according to the pixels in the location range, to generate a target data stream.

US Pat. No. 10,116,933

METHOD OF LOSSLESS MODE SIGNALING FOR VIDEO SYSTEM WITH LOSSLESS AND LOSSY CODING

MEDIATEK INC., Hsin-Chu ...

1. A method of lossless mode signaling for a coding system supporting both lossy coding and lossless coding, wherein a picture is divided into multiple slices and each slice is divided into multiple coding units, the method comprising:receiving input data associated with a current picture;
if the lossless coding is allowed for the current picture, incorporating or parsing a first syntax element in a picture level to indicate whether a second syntax element is present in each slice for selecting lossy coding or lossless coding;
if the first syntax element indicates that the second syntax element is present, incorporating or parsing the second syntax element in each slice of the current picture to indicate whether a forced lossless coding mode is selected,
if the second syntax element indicates that the forced lossless coding mode is selected, encoding or decoding all coding units in the slice using lossless coding; and
if the second syntax element indicates that the forced lossless coding mode is not selected, encoding or decoding each coding unit in the slice according to a third syntax element indicating whether each coding unit is coded using lossless coding or not.
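
The three-level signalling recited above (a picture-level presence flag, a per-slice forced-lossless flag, and per-CU flags) parses as a simple nested loop. The Python sketch below assumes a hypothetical one-bit reader.read_flag() and assumes plain lossy coding when lossless coding is not allowed for the picture, a case the claim does not spell out.

def parse_lossless_signalling(reader, lossless_allowed, num_slices, cus_per_slice):
    """Return a per-slice list of per-CU coding modes ("lossless" or "lossy")."""
    if not lossless_allowed:
        return [["lossy"] * cus_per_slice for _ in range(num_slices)]

    second_flag_present = reader.read_flag()         # first syntax element (picture level)
    slices = []
    for _ in range(num_slices):
        if second_flag_present and reader.read_flag():    # second syntax element (slice level)
            slices.append(["lossless"] * cus_per_slice)   # forced lossless slice
        else:
            # Third syntax element, one per coding unit.
            slices.append(["lossless" if reader.read_flag() else "lossy"
                           for _ in range(cus_per_slice)])
    return slices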

US Pat. No. 10,116,932

IMAGE FILTER DEVICE, DECODING DEVICE, ENCODING DEVICE, AND DATA STRUCTURE

Sharp Kabushiki Kaisha, ...

1. An image filter device comprising:a deblocking filter that performs deblocking on first target pixels in side boundaries of a unit region and generates a deblocked image including the unit region;
filter circuitry that performs adaptive filtering on second target pixels in the unit region of the deblocked image, wherein the second target pixels are all pixels included in the unit region; and
reference region setting circuitry that sets a reference region to be referenced by the filter circuitry to calculate a pixel value of one of the second target pixels according to a position of the one of the second target pixels in the unit region;
wherein the reference region setting circuitry includes:
first setting circuitry that sets a position of an upstream edge of the reference region lower than or equal to a virtual boundary line, which separates the unit region into an upstream side and a downstream side, when the one of the second target pixels is on the downstream side, and
second setting circuitry that sets a position of a downstream edge of the reference region higher than the virtual boundary line, which separates the unit region into the upstream side and the downstream side, when the one of the second target pixels is on the upstream side; wherein
the deblocking filter, the filter circuitry, and the reference region setting circuitry are implemented by one or more processors.

US Pat. No. 10,116,931

HIERARCHICAL INTER-LAYER PREDICTION IN MULTI-LOOP SCALABLE VIDEO CODING

TEXAS INSTRUMENTS INCORPO...

1. A method for encoding a video sequence in a scalable video encoder to generate a scalable bitstream, the method comprising:encoding the video sequence in a first layer encoder of the scalable video encoder to generate a first layer sub-bitstream that includes a first temporal hierarchy of two or more temporal levels;
encoding the video sequence in a second layer encoder of the scalable video encoder to generate a second layer sub-bitstream that includes a second temporal hierarchy of two or more temporal levels, wherein portions of the video sequence being encoded in the second layer encoder are predicted using reference portions of the video sequence encoded in the first layer encoder;
combining the first layer sub-bitstream and the second layer sub-bitstream in the scalable bitstream; and
signaling an indication of a highest temporal level in the first temporal hierarchy of the first layer sub-bitstream that is used for inter-layer prediction in the scalable bitstream.

US Pat. No. 10,116,930

ICON-BASED HOME CERTIFICATION, IN-HOME LEAKAGE TESTING, AND ANTENNA MATCHING PAD

Viavi Solutions, Inc., S...

1. A system for determining the magnitude of leakage in a subscriber's premises installation for a cable network that is configured to provide a signal level in a range of −5 dBmV to 0 dBmV, the system comprising:a signal generator configured to be secured to a suitable network port at a subscriber's premises to wiredly connect the signal generator to cable wiring in the subscriber's premises, the signal generator including a frequency source operable to generate an output signal in a range of 40 dB to 70 dB above the signal level provided by the cable network to supply the output signal to the cable wiring in the subscriber premises, the frequency source being shielded to prevent transmission of radiated frequency source oscillations, and
a signal level meter operable to be transported around the subscriber's premises and measure signal levels radiating from the subscriber's premises, the signal level meter including an output device configured to output the signal levels measured by the signal level meter,
wherein the signal generator is configured to supply the output signal through the suitable network port to the cable wiring at the subscriber's premises so that a high power offset is maintained when the signal generator is secured to the suitable network port and wiredly connected to the cable wiring, and
wherein the signal level meter is configured to measure the signal levels including a first signal level corresponding to the output signal of the signal generator.

US Pat. No. 10,116,929

MULTIMEDIA QUALITY MONITORING METHOD, AND DEVICE

Huawei Technologies Co., ...

1. A multimedia quality monitoring method, comprising:determining, by a multimedia quality monitoring apparatus, multimedia quality of multimedia due to compression of the multimedia according to video quality due to compression of video data of the multimedia and audio quality due to compression of audio data of the multimedia;
acquiring, by the multimedia quality monitoring apparatus, multimedia distortion quality corresponding to video distortion and/or audio distortion of the multimedia, wherein the multimedia distortion quality comprises multimedia distortion quality due to packet loss and/or multimedia distortion quality due to rebuffering;
and
determining, by the multimedia quality monitoring apparatus, quality of the multimedia according to the multimedia quality of the multimedia due to compression of the multimedia and the multimedia distortion quality;
wherein acquiring the multimedia distortion quality further comprises:
(a) acquiring the multimedia distortion quality due to packet loss, wherein acquiring the multimedia distortion quality due to packet loss comprises:
(1) determining video quality due to packet loss according to the video quality due to compression of the video data of the multimedia and a video packet loss rate and/or determining audio quality due to packet loss according to the audio quality due to compression of audio data of the multimedia and an audio packet loss rate;
(2) determining video distortion quality due to packet loss according to the video quality due to packet loss and the video quality due to compression of the video data of the multimedia and/or determining audio distortion quality due to packet loss according to the audio quality due to packet loss and the audio quality due to compression of audio data of the multimedia;
(3) determining a video packet loss distortion factor according to the video distortion quality due to packet loss and the video quality due to compression of the video data of the multimedia and/or determining an audio packet loss distortion factor according to the audio distortion quality due to packet loss and the audio quality due to compression of audio data of the multimedia;
(4) determining a multimedia packet loss distortion factor according to the video packet loss distortion factor and/or the audio packet loss distortion factor; and
(5) determining the multimedia distortion quality due to packet loss according to the multimedia packet loss distortion factor and the multimedia quality of the multimedia due to compression of the multimedia; and/or
(b) acquiring the multimedia distortion quality due to rebuffering according to a rebuffering parameter of the multimedia corresponding to a transmission process.

US Pat. No. 10,116,928

THREE-DIMENSIONAL (3D) DISPLAY SCREEN AND 3D DISPLAY DEVICE

Shanghai Tianma Micro-ele...

1. A three-dimensional (3D) display screen, comprising:a pixel array comprising m laterally displaced groups,
wherein:
a laterally displaced group in the m laterally displaced groups includes n rows of sub-pixel units arranged in an array and sequentially numbered as a 1st sub-pixel unit row to a nth sub-pixel unit row, the sub-pixel units in a same sub-pixel unit row are arranged in a first lateral direction, m is a positive integer larger than or equal to 1, and n is a positive integer larger than or equal to 2;
a sub-pixel unit in the n rows of sub-pixel units includes a plurality of light-shielding stripes arranged in parallel and has a length of L in the first lateral direction, the plurality of light-shielding stripes are disposed inside the sub-pixel unit, two adjacent light-shielding stripes have a gap of P in the first lateral direction, and P < L; in the laterally displaced group, along the first lateral direction, the nth sub-pixel unit row has a lateral displacement of P with respect to the 1st sub-pixel unit row, an ith sub-pixel unit row in the n rows of sub-pixel units has a lateral displacement of P/n with respect to an (i−1)th sub-pixel unit row in the n rows of sub-pixel units, where i is a positive integer and 1 < i ≤ n; and along the first lateral direction, the lateral displacement between any two sub-pixel unit rows in the pixel array is less than or equal to P.

US Pat. No. 10,116,927

METHOD FOR REPRODUCING IMAGE INFORMATION AND AUTOSTEREOSCOPIC SCREEN

Fraunhofer-Gesellschaft z...

1. A method for reproducing image information on an autostereoscopic screen, which has a pixel matrix with a plurality of pixels and also an optical grid arranged in front of the pixel matrix, wherein the plurality of pixels of the pixel matrix are arranged such that they form a plurality of columns arranged equidistantly side by side with a column direction that is vertical or inclined relative to a vertical direction, and wherein the optical grid has a group of strip-shaped structures oriented parallel to the plurality of columns and arranged equidistantly side by side and gives light originating from the plurality of pixels at least one defined propagation plane, which is spanned from a defined horizontal propagation direction and the column direction, wherein a period length (D) of the optical grid, the period length being defined by a lateral offset of adjacent strip-shaped structures, is greater by a factor n×Ln/(Ln+a) than a lateral offset (d) of directly adjacent columns, wherein “a” denotes an effective distance between the pixel matrix and the optical grid, Ln denotes a nominal viewing distance of the autostereoscopic screen, and n denotes an integer greater than two, wherein the method comprises:assigning an angle value and a location coordinate value to each column of the plurality of columns, wherein the angle value is defined as a measure for an angle between a horizontal reference direction and the defined horizontal propagation direction which is given to the light originating from the plurality of pixels of a respective column by the optical grid, and wherein the location coordinate value specifies a position, in a lateral direction, of the respective column;
for each column of the plurality of columns, calculating an extract of an image by image synthesis, wherein the image is a parallel projection of a three dimensional (3D) scene to be reproduced having a projection direction that is defined by the angle corresponding to the angle value assigned to the respective column, and wherein the extract is defined by a strip of the image that has a lateral position in the image corresponding to the location coordinate value assigned to the respective column; and
controlling the plurality of pixels of the pixel matrix in such a way that each column of the plurality of columns has written into it the extract calculated for the respective column.

US Pat. No. 10,116,926

3D SCANNING CONTROL APPARATUS BASED ON FPGA AND CONTROL METHOD AND SYSTEM THEREOF

SHENZHEN ESUN DISPLAY CO....

1. A 3D scanning control apparatus based on FPGA (Field Programmable Gate Array), for controlling a 3D scanner to scan, wherein the apparatus comprises:a first projection control module configured for controlling at least one structured light generation unit to project to an object;
a first image acquisition control module configured for controlling at least one shooting unit to capture at least one projection image of the object when the first projection control module is projecting;
a second projection control module configured for controlling at least another one structured light generation unit to project to the object for one more time;
a second image acquisition control module configured for controlling at least one corresponding shooting unit to capture the projection images of the object for one more time when the second projection control module is projecting; and
a data processing module configured for processing the captured projection images with at least one of the Bayer color rendition, color space conversion and phase unwrapping, by using an algorithm in the FPGA;
a driver module coupled to the structured light generation units and the shooting units via corresponding third interfaces; and
an optimization module coupled to the shooting units via other corresponding third interfaces;
wherein, the driver module is configured for driving the structured light generation unit and the shooting unit to rotate by an angle with the object as the axis and a second cycle as the time interval, until a full circle is rotated; the optimization module is configured for providing a soft light environment when the shooting unit is capturing the projection image.

US Pat. No. 10,116,925

TIME-RESOLVING SENSOR USING SHARED PPD + SPAD PIXEL AND SPATIAL-TEMPORAL CORRELATION FOR RANGE MEASUREMENT

SAMSUNG ELECTRONICS CO., ...

15. An imaging unit comprising:a light source operative to project a laser pulse onto a three-dimensional (3D) object; and
an image sensor unit that includes:
a plurality of pixels arranged in a two-dimensional (2D) pixel array, wherein each pixel in at least one row of pixels in the 2D pixel array includes:
a pixel-specific plurality of Single Photon Avalanche Diodes (SPADs), wherein each SPAD is operable to convert luminance received in a returned pulse into a corresponding electrical signal, wherein the returned pulse results from reflection of the projected pulse by the 3D object,
a pixel-specific first control circuit coupled to the pixel-specific plurality of SPADs, wherein, for each SPAD receiving luminance in the returned pulse, the pixel-specific first control circuit is operable to process the corresponding electrical signal from the SPAD and generate a SPAD-specific output therefrom,
a pixel-specific device operable to store an analog charge, and
a pixel-specific second control circuit coupled to the pixel-specific first control circuit and the pixel-specific device, wherein the pixel-specific second control circuit is operable to initiate transfer of a pixel-specific first portion of the analog charge from the pixel-specific device, and terminate the transfer upon receipt of at least two SPAD-specific outputs from the pixel-specific first control circuit within a pre-defined time interval, and
a processing unit coupled to the 2D pixel array and operative to:
provide an analog modulating signal to the pixel-specific second control circuit in each pixel in the row of pixels to control the transfer of the pixel-specific first portion of the analog charge, and
determine a pixel-specific Time of Flight (TOF) value of the returned pulse based on the transfer of the pixel-specific first portion of the analog charge within the pre-defined time interval.

US Pat. No. 10,116,924

COLOR ANALYSIS AND CONTROL USING AN ELECTRONIC MOBILE DEVICE TRANSPARENT DISPLAY SCREEN

1. A method for comparing the image data of a predetermined object using an electronic mobile device, comprising the steps of:using at least one transparent display screen associated with the electronic mobile device for generating optical processing data;
storing said optical processing data, and optical processing instructions, and computer processor algorithms in a memory associated with the electronic mobile device;
operating a computer processor associated with the electronic mobile device and said memory for executing said optical processing instructions and computer processor algorithms in response to said optical processing data;
directing an optical lens of the electronic mobile device to capture an object image of an object for display on said at least one transparent display screen;
collecting a first set of optical processing data using data deriving from the capture of the object image, said first set of optical processing data comprising a first set of color image data;
displaying the object through a transparent portion of said at least one transparent display screen and generating therefrom a second set of optical processing data comprising a second set of image data;
receiving and storing in said memory said second set of image data associated with perceiving the object through the at least one transparent display screen;
executing instructions on the computer processor for determining image difference values between said first set of image data and said second set of image data; and
displaying on the at least one transparent display screen image difference values from the group consisting of color differences, texture differences, transparency differences, lighting differences, motion differences, focus differences and the like.

US Pat. No. 10,116,923

IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR IMPROVING QUALITY OF IMAGE

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus comprising:a generator configured to generate difference information relating to a difference in a luminance value between a plurality of parallax images;
a gain distribution determiner configured to determine a gain distribution depending on a reduction rate distribution determined based on the plurality of parallax images and the difference information generated by the generator;
an intensity determiner configured to determine an intensity of an unnecessary component based on a product of the gain distribution and the difference information, the unnecessary component corresponding to a ghost or a flare; and
a reducer configured to generate an output image by reducing, using the intensity of the unnecessary component, the unnecessary component from a synthesized image obtained by synthesizing the plurality of parallax images.

US Pat. No. 10,116,922

METHOD AND SYSTEM FOR AUTOMATIC 3-D IMAGE CREATION

Google LLC, Mountain Vie...

1. A method for image creation, the method comprising:receiving, by a system comprising a hardware processor, a first two-dimensional image;
comparing, by the system, the first two-dimensional image to a second two-dimensional image to determine whether the first two-dimensional image has a given similarity to the second two-dimensional image, wherein the given similarity is determined by combining at least the first two-dimensional image and the second two-dimensional image and determining whether a desired stereoscopic effect resulting from the combined first two-dimensional image and second two-dimensional image falls within a range of the desired stereoscopic effect and whether an image composition of the combined first-two dimensional image and the second two-dimensional image has changed; and
in response to the comparison determining that the first two-dimensional image has at least the given similarity to the second two-dimensional image based on the desired stereoscopic effect falling within the range of the desired stereoscopic effect and based on the image composition, generating, by the system, a three-dimensional image that combines at least the first two-dimensional image and the second two-dimensional image.

US Pat. No. 10,116,921

METHOD FOR PROVIDING A MULTIMEDIA MESSAGE SERVICE

Huawei Technologies Co., ...

1. A method, implemented by a server or a relay, for providing a multimedia message service to a user agent in a network, the method comprising:determining a plurality of content options for a video content, wherein the video content comprises an MMS three dimension (3D) video content according to a Third Generation Partnership Project (3GPP) Multimedia Message Service (MMS) specification;
signaling the content options for the video content to the user agent, wherein the signaled content options comprise at least one of the following: two dimensions (2D) or three dimensions (3D);
receiving, from the user agent, a selection of one of the content options;
adapting the 3D video content to the user agent based on display and/or decoding capabilities of the user agent and the selected content option, wherein adapting the 3D video content comprises encoding the 3D video content as a 2D video content; and
providing the adapted video content to the user agent.

US Pat. No. 10,116,920

BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE

FARO TECHNOLOGIES, INC., ...

1. A method of optically scanning and measuring a scene, the method comprising:providing a first scanner, the scanner including a first light emitter for emitting a first light onto the scene, a first light receiver for receiving a first portion of the first light from the scene, and a first processor, the first scanner having a first angle measuring device, a second angle measuring device and a distance meter;
providing a second scanner, the second scanner including a second light emitter for emitting a second light onto the scene, a second light receiver for receiving a portion of the second light from the scene, and a second processor;
measuring with a first scanner in a first scanner location three-dimensional (3D) coordinates and a color for each of a plurality of first object points in the scene based at least in part on the emitting of the first light, an angle measured by the first angle measuring device, an angle measured by the second angle measuring device and a receiving of the first portion with the distance meter;
measuring with the second scanner in a second scanner location 3D coordinates and a color for each of a plurality of second object points in the scene based at least in part on the emitting of the second light and the receiving of the second portion;
selecting a plurality of areas within the scene, each area being defined by a plurality of cells and including at least one first object point from the first plurality of object points and further including at least one second object point from the second plurality of object points;
determining an adapted second color for each second object point, wherein in each of the plurality of areas the adapted second color is based at least in part on a statistical distribution of the colors of the at least one first object point in the area;
storing the 3D coordinates and the color for each first object point; and
storing the 3D coordinates and the adapted second color for each second object point.

US Pat. No. 10,116,919

METHOD AND ARRANGEMENT FOR ESTIMATING AT LEAST ONE CROSS-CHANNEL COLOUR MAPPING MODEL FROM AN SET OF TUPLES OF CORRESPONDING COLOURS RELATIVE TO AT LEAST TWO IMAGES

THOMSON LICENSING, Issy-...

1. A method for compensation of colour differences between at least two images imaging a same scene, the colours of which are represented according to m colour channels, comprising:extracting from said images a set of tuples of corresponding colours;
estimating from said set of tuples of corresponding colours a channel-wise colour mapping model for each of said m colour channels;
selecting within said set of tuples of corresponding colours at least one intermediate tuple having colours with a difference to said estimated channel-wise colour mapping model that are smaller than a determined threshold;
estimating from said at least one selected intermediate tuple of corresponding colours at least one cross-channel colour mapping model for at least one of said m colour channels;
generating a final set of final tuples of corresponding colours from said at least one selected intermediate tuple of corresponding colours such that said final tuples have colors with a difference to said estimated cross-channel colour mapping model that are smaller than a determined threshold; and
compensating colour difference between said images based on said final set of final tuples of corresponding colours.
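
The two-stage estimation recited above (a channel-wise fit, selection of intermediate tuples against a threshold, then a cross-channel fit on the surviving tuples) can be sketched with ordinary least squares. The model forms below (a per-channel affine fit followed by a full affine matrix) and the fitting method are assumptions; the claim does not prescribe them.

import numpy as np

def estimate_colour_mapping(tuples_src, tuples_dst, threshold=10.0):
    """Return the channel-wise model, the cross-channel model and the
    inlier mask for a set of corresponding colour tuples."""
    src = np.asarray(tuples_src, dtype=float)   # N x m source colours
    dst = np.asarray(tuples_dst, dtype=float)   # N x m corresponding colours
    m = src.shape[1]

    # Channel-wise model: dst_c ~ a_c * src_c + b_c for each colour channel c.
    channelwise = np.empty((m, 2))
    for c in range(m):
        channelwise[c] = np.polyfit(src[:, c], dst[:, c], 1)
    pred = np.stack([np.polyval(channelwise[c], src[:, c]) for c in range(m)], axis=1)

    # Intermediate tuples: colours whose difference to the channel-wise
    # prediction is smaller than the threshold in every channel.
    keep = np.all(np.abs(dst - pred) < threshold, axis=1)
    src_i, dst_i = src[keep], dst[keep]

    # Cross-channel model: dst ~ [src, 1] @ M, fitted on the intermediate tuples.
    design = np.hstack([src_i, np.ones((len(src_i), 1))])
    cross_channel, *_ = np.linalg.lstsq(design, dst_i, rcond=None)
    return channelwise, cross_channel, keep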

US Pat. No. 10,116,918

DISPARITY IMAGE GENERATING DEVICE, DISPARITY IMAGE GENERATING METHOD, AND IMAGE

TOYOTA JIDOSHA KABUSHIKI ...

1. A disparity image generating device comprising:a disparity image acquiring unit configured to acquire chronologically consecutive first and second disparity images based on an imaging result of an environment around a vehicle, the first disparity image being a disparity image acquired by the disparity image acquiring unit at a first time, the second disparity image being a disparity image acquired by the disparity image acquiring unit at a second time which is a time after the first time;
a first correcting unit configured to optimize a disparity value of a first target pixel from among pixels configuring the first disparity image using semi-global matching, based on a disparity value of a pixel configuring at least a part of a first pixel route which is in a first pixel region configured with a plurality of pixels around the first target pixel, the first pixel route being a pixel route in at least one direction from the first target pixel toward the first pixel region;
a second correcting unit configured to optimize a disparity value of a second target pixel from among pixels configuring the second disparity image using the semi-global matching, based on a disparity value of a pixel configuring at least a part of a second pixel route which is in a second pixel region configured with a plurality of pixels around the second target pixel, the second pixel route being a pixel route in at least one direction from the second target pixel toward the second pixel region, the second pixel route being a pixel route in a direction approximately opposite to a direction of the first pixel route, the second target pixel being positioned at a position corresponding to the first target pixel; and
a disparity image generating unit configured to calculate a desired disparity image, based on a comparison between the first disparity image optimized by the first correcting unit and the second disparity image optimized by the second correcting unit.

US Pat. No. 10,116,917

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus that corrects a depth image representing information about a depth to a subject in a scene, which is the same scene of a plurality of images obtained by photographing the same subject from different viewpoints, the image processing apparatus comprising:a unit configured to determine a pixel of interest in a first image that is taken to be a reference among the plurality of images and peripheral pixels existing within a predetermined region according to the pixel of interest, the peripheral pixels being pixels for which a weight is to be derived;
an evaluation value derivation unit configured to derive a matching evaluation value between the plurality of images for the respective peripheral pixels;
a weight determination unit configured to determine a weight for the respective peripheral pixels in the correction based on the evaluation value; and
a correction unit configured to correct a pixel value of the pixel of interest in the depth image by using the weight and a pixel value of the peripheral pixels,
wherein the evaluation value derivation unit specifies, for each peripheral pixel, a pixel corresponding to the pixel of interest in a second image among the plurality of images, the second image being different from the first image, by using a depth of the respective peripheral pixels in the depth image and derives the matching evaluation value of each peripheral pixel based on the pixel value of the pixel of interest and the pixel value of the specified corresponding pixel.

US Pat. No. 10,116,916

METHOD FOR DATA REUSE AND APPLICATIONS TO SPATIO-TEMPORAL SUPERSAMPLING AND DE-NOISING

NVIDIA CORPORATION, Sant...

1. A method, comprising:generating a current frame of image data in a memory; and
for each pixel in the current frame of image data:
sampling a resolved pixel color for a corresponding pixel in a previous frame of image data stored in the memory;
adjusting the resolved pixel color based on a statistical distribution of color values for a plurality of samples in the neighborhood of the pixel in the current frame of image data to generate an adjusted pixel color, comprising:
calculating a mean color value based on the color values for a plurality of samples in the neighborhood of the pixel;
calculating a variance for each color component based on the color values for the plurality of samples in the neighborhood of the pixel; and
generating an axis-aligned bounding box (AABB) based on the mean color value and a standard deviation from the mean color value, wherein the standard deviation from the mean color value is calculated based on the variance, for each color component; and
blending a color value for the pixel in the current frame of image data with the adjusted pixel color to generate a resolved pixel color for the pixel in the current frame of image data.
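
The per-pixel steps recited above amount to a colour-clamped temporal blend. The Python sketch below builds the axis-aligned bounding box from the neighbourhood mean plus or minus one standard deviation and uses a fixed blend weight; both choices, and the use of the same pixel position as the corresponding pixel in the previous frame, are assumptions.

import numpy as np

def temporal_resolve(current, history, kernel=1, alpha=0.1):
    """Resolve the current frame against the previously resolved history frame."""
    h, w, _ = current.shape
    resolved = np.empty_like(current)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - kernel), min(h, y + kernel + 1)
            x0, x1 = max(0, x - kernel), min(w, x + kernel + 1)
            nbhd = current[y0:y1, x0:x1].reshape(-1, 3)
            mean = nbhd.mean(axis=0)
            std = np.sqrt(nbhd.var(axis=0))        # per-component standard deviation
            lo, hi = mean - std, mean + std        # axis-aligned bounding box (AABB)
            prev = np.clip(history[y, x], lo, hi)  # adjust the sampled history colour
            resolved[y, x] = alpha * current[y, x] + (1 - alpha) * prev
    return resolved

# Example usage on random 8x8 RGB frames.
cur = np.random.rand(8, 8, 3).astype(np.float32)
hist = np.random.rand(8, 8, 3).astype(np.float32)
out = temporal_resolve(cur, hist)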

US Pat. No. 10,116,915

CLEANING OF DEPTH DATA BY ELIMINATION OF ARTIFACTS CAUSED BY SHADOWS AND PARALLAX

SEIKO EPSON CORPORATION, ...

1. An environment sensing apparatus, comprising:a depth image capture device including:
an illumination source, the depth image capture device using reflected illumination from the illumination source to determine a depth map of a scene, the depth map being comprised of lines of depth data points; and
an intensity image capture device that captures an intensity image of the scene, the intensity image being comprised of lines of intensity data points; and
a data processing unit implementing the following steps:
on a line-by-line basis, identifying a first edge of an object in a current line, of intensity data points, of the intensity image, and identifying a corresponding second edge of the same object in a corresponding current line, of depth data points, of the depth map;
defining as an observed shadow, depth data points in the current line of the depth map that lie between a first depth data point, whose line position corresponds to a position of the first edge in the corresponding current line of the intensity image, and a second depth data point that corresponds to a position of the second edge in the same current line of the depth map; and
selectively removing depth data points within the defined observed shadow from the depth map.
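
The cleaning step recited above works line by line: find the object edge in the intensity line, find the corresponding edge in the depth line, and discard the depth points between the two positions as an observed shadow. The Python sketch below uses a simple first-difference threshold as the edge detector, which is an assumption; the patent does not tie the method to any particular edge detector.

def remove_shadow_artifacts(depth_line, intensity_line, threshold=10.0):
    """Return a copy of the depth line with observed-shadow points set to None."""
    def first_edge(values):
        # Position of the first sample-to-sample jump exceeding the threshold.
        for i in range(1, len(values)):
            if abs(values[i] - values[i - 1]) > threshold:
                return i
        return None

    cleaned = list(depth_line)
    i_edge = first_edge(intensity_line)   # first edge of the object (intensity line)
    d_edge = first_edge(depth_line)       # corresponding second edge (depth line)
    if i_edge is not None and d_edge is not None and i_edge != d_edge:
        lo, hi = sorted((i_edge, d_edge))
        for i in range(lo, hi):
            cleaned[i] = None             # depth points within the observed shadow
    return cleaned

# Example: the depth edge lags the intensity edge by two samples.
intensity = [50, 50, 50, 200, 200, 200, 200, 200]
depth     = [1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
print(remove_shadow_artifacts(depth, intensity, threshold=0.5))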

US Pat. No. 10,116,913

3D VIRTUAL REALITY MODEL SHARING AND MONETIZATION ELECTRONIC PLATFORM SYSTEM

DOUBLEME, INC, San Jose,...

1. A three-dimensional body double-generating, social sharing, and monetization electronic system comprising:a HoloPortal electronic system that incorporates a dedicated physical studio space including a center stage, a plurality of stationary cameras surrounding the center stage, and a 3D reconstruction electronic system, which is configured to capture, calculate, reconstruct, and generate graphical transformation of a target object to create a 3D body double model from pre-calibrated image sources from the plurality of stationary cameras;
a HoloCloud electronic system comprising uncalibrated portable video recording devices positioned at multiple-angle views around the target object to generate uncalibrated raw multiple-angle video data streams, a cloud computing resource containing a scalable number of graphics processing units (GPU's) that receive the uncalibrated raw multiple-angle video data streams from the uncalibrated portable video recording devices, a pre-processing module in the cloud computing resource that calibrates temporal, spatial, and photometrical variables deduced from the uncalibrated raw multiple angle video data streams as a post-capture process, which in turn generates background 3D geometry and 360-degree virtual reality videos, and a 3D reconstruction module in the cloud computing resource for providing depth map computations, voxel grid reconstructions, and deformed mesh generations for creation of another 3D body double model that resembles the target object;
a 3D model and content database configured to store the 3D body double model created from the HoloPortal electronic system or the HoloCloud electronic system;
an electronic 3D content sharing software executed on a computer server connected to the 3D model and content database, wherein the electronic 3D content sharing software configures the computer server to upload, list, transmit, and share 3D model animations and 3D contents that are created from the HoloPortal electronic system and the HoloCloud electronic system; and
a client-side 3D content viewer and management user interface executed on a notebook computer, a desktop computer, a mobile communication device, or a web server, wherein the client-side 3D content viewer and management user interface is configured to purchase, sell, transmit, receive, or playback a 3D content incorporating the 3D body double model via the electronic 3D content sharing software and the 3D model and content database.

US Pat. No. 10,116,912

METHOD OF DISPLAYING AN IMAGE AND DISPLAY DEVICE FOR PERFORMING THE SAME

Samsung Display Co., Ltd....

1. A method of displaying an image, comprising:receiving image data for a content image;
determining a modulation region and a peripheral region in the content image based on at least one of a first position derived from a mouse device and a second position derived from an eye detecting device;
generating a left-eye content image and a right-eye content image based on the image data for the content image such that the modulation region has a three-dimensional depth;
displaying the left-eye content image and the right-eye content image; and
periodically changing the three-dimensional depth of the modulation region by changing a modulation distance between the modulation region in the left-eye content image and the modulation region in the right-eye content image based at least in part on a periodic modulation reference timing.

US Pat. No. 10,116,911

REALISTIC POINT OF VIEW VIDEO METHOD AND APPARATUS

QUALCOMM Incorporated, S...

1. A method of providing video corresponding to a dynamic and arbitrary viewing position, the method comprising:receiving, at a video server from at least one camera, image data representing multiple views of a scene, each view having a capture position identifying a capture angle and a capture distance of a camera capturing image data for the view;
receiving, at the video server from a viewing device for presenting the video, a server capability request, wherein the server capability request is received before providing image data of the scene to the viewing device;
transmitting, from the video server to the viewing device for presenting the video, in response to the server capability request, server capability information indicating that the video server can generate a video data stream corresponding to a requested viewing position;
receiving, at the video server, a request for the scene from the viewing device for presenting the video, the request including a viewing position relative to the viewing device of a viewer within a viewing area of the viewing device detected by a position detector coupled with the viewing device;
determining, by the video server, that the multiple views do not include a view associated with a capture position aligned with the viewing position;
identifying, by the video server, a first view of the multiple views of the scene and a second view of the multiple views of the scene, said identifying based on a comparison of the viewing position and the capture position of each view included in the multiple views, and wherein the first view is captured from a first capture position and the second view is captured from a second capture position, and wherein the viewing position is between the first capture position and the second capture position; and
generating, by the video server, an output stream including first image data for the first view and second image data for the second view for transmission to the viewing device, wherein a three-dimensional image of the scene is formed from a combination of the first image data with the second image data.

US Pat. No. 10,116,910

IMAGING APPARATUS AND METHOD OF PROVIDING IMAGING INFORMATION

Hanwha Techwin Co., Ltd.,...

1. An imaging apparatus comprising:a video reproducer configured to reproduce a video and a heatmap of the video on a display;
a sub-heatmap area setter configured to set a plurality of sub-heatmap areas on the heatmap; and
a video summarizer configured to provide at least one video summary of the video to at least one summary area selected from among the plurality of sub-heatmap areas, respectively,
wherein the video reproducer is configured to provide a three-dimensional (3D) area on the plurality of sub-heatmap areas in which a portion of the video and heatmap data of the portion of the video are displayed on different facets of the 3D area.

US Pat. No. 10,116,909

DETECTING A VERTICAL CUT IN A VIDEO SIGNAL FOR THE PURPOSE OF TIME ALTERATION

PRIME IMAGE DELAWARE, INC...

1. A method, comprising:receiving, in real-time, a video program segment having a sequence of digital video images, each digital video image having a plurality of multi-bit pixels;
generating, for each multi-bit pixel, a single-bit indicator that is set when the pixel is active and cleared when the pixel is not active;
counting the single-bit indicators that are set to represent active pixels in each one of adjacent frames of the sequence of digital video images, wherein a vertical cut is not detected when the count between adjacent frames is approximately the same;
calculating a percentage of change value between adjacent frames when the count between adjacent frames is not approximately the same;
comparing the percentage of change value to a positive threshold value and a negative threshold value, wherein a positive change bit is set when the percentage of change value exceeds the positive threshold value, a negative change bit is set when the percentage of change value exceeds the negative threshold value, and a no change bit is set when the percentage of change value does not exceed the positive threshold value or the negative threshold value;
analyzing a pattern of the positive change bits, the negative change bits, and the no change bits over a plurality of sequential digital video images;
determining that a vertical cut has occurred in the sequence of digital video images when the pattern of the positive change bits, negative change bits, and no change bits matches a pre-defined pattern; and
adding or removing individual frames in real-time at the location of the vertical cut to alter a duration of the video program segment.
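
The detection chain recited above reduces to counting active pixels, converting the frame-to-frame change into positive/negative/no-change bits, and matching the resulting bit pattern. The Python sketch below assumes an activity threshold of 128, change thresholds of ±10%, and a "-+" example pattern; all three are illustrative values, not elements of the claim.

def detect_vertical_cut(frames, active_threshold=128,
                        pos_threshold=10.0, neg_threshold=-10.0,
                        pattern="-+"):
    """Return (cut_detected, change_bits) for a sequence of pixel-value frames."""
    # Single-bit indicator per pixel: set when the pixel is active.
    counts = [sum(1 for p in frame if p >= active_threshold) for frame in frames]

    bits = []
    for prev, curr in zip(counts, counts[1:]):
        if prev == curr:
            bits.append("0")                            # counts approximately the same
            continue
        change = 100.0 * (curr - prev) / max(prev, 1)   # percentage of change value
        if change > pos_threshold:
            bits.append("+")                            # positive change bit
        elif change < neg_threshold:
            bits.append("-")                            # negative change bit
        else:
            bits.append("0")                            # no change bit
    return pattern in "".join(bits), bits

# Example: a bright scene cut to a dark frame and back.
frames = [[200] * 100, [200] * 100, [30] * 100, [200] * 100]
print(detect_vertical_cut(frames))   # -> (True, ['0', '-', '+'])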

US Pat. No. 10,116,908

PLAYBACK METHOD, PLAYBACK DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM

PANASONIC INTELLECTUAL PR...

1. A playback method of a playback device that plays video streams recorded in a recording medium, in which are recorded
one or more video streams including at least one of an HDR stream of which a dynamic range of luminance of video corresponds to a high dynamic range (HDR), and an SDR stream of which a dynamic range of luminance of video corresponds to a standard dynamic range (SDR) that is narrower than HDR, and
identification information indicating whether or not the HDR stream is included in the one or more video streams,
the playback method comprising:
reading the identification information from the recording medium;
acquiring
playback capability information indicating whether or not the playback device can play the HDR stream, and
display capability information indicating whether or not a display device connected to the playback device can display the HDR video;
deciding a video stream for playback out of the one or more video streams, in accordance with the identification information that has been read out, the acquired playback capability information and the acquired display capability information;
playing the decided video stream; and
outputting playback signals obtained by the playing on the display device.
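
The stream-selection step of the claim is a three-way conjunction: the medium must contain an HDR stream (identification information), the playback device must be able to play it, and the connected display must be able to show it. The Python sketch below encodes that decision; the SDR fallback when any condition fails follows the claim, while the behaviour when no playable stream exists is an assumption.

def decide_stream(hdr_stream_present, player_supports_hdr, display_supports_hdr,
                  sdr_stream_present=True):
    """Decide the video stream for playback from the identification and
    capability information."""
    if hdr_stream_present and player_supports_hdr and display_supports_hdr:
        return "HDR"
    if sdr_stream_present:
        return "SDR"
    return None  # no playable stream on the medium (case not recited in the claim)

# Example: the medium carries HDR but the connected display is SDR-only.
print(decide_stream(hdr_stream_present=True,
                    player_supports_hdr=True,
                    display_supports_hdr=False))   # -> "SDR"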

US Pat. No. 10,116,907

METHODS, SYSTEMS AND APPARATUSES FOR OPTICALLY ADDRESSED IMAGING SYSTEM

THE BOEING COMPANY, Chic...

1. A method of addressing a projection system comprising the steps of:positioning a plasma-containing projection device at a predetermined location;
positioning an electro-optical device at a predetermined location relative to the plasma-containing projection device, the electro-optical device operative to generate a write beam;
activating the projection device by applying a voltage across the plasma-containing device to generate plasma in the plasma-containing device;
generating the write beam;
directing the write beam to the plasma-containing projection device; and
exclusively optically addressing information to the plasma-containing projection device via the write beam;
wherein the write beam is operative to cause a shift in the value of an index of refraction of a material in the plasma-containing projection device to thereby generate an image projected by the plasma-containing projection device.

US Pat. No. 10,116,906

ELEVATED MARINE CAMERA

1. A camera system comprising:a. a nautical vessel comprising:
i. a deck,
ii. a canopy and
iii. a canopy structural support;
b. a linear actuator;
c. a camera;
d. a display and
e. a set of camera controls;
f. wherein the linear actuator fully supports the weight of the camera;
g. wherein the linear actuator is configured to extend above the canopy;
h. wherein the camera is configured to be in communication with the display;
i. wherein the set of camera controls is located below the canopy;
j. wherein the camera is located above the canopy;
k. wherein the linear actuator is concealed within the canopy structural support;
l. wherein the linear actuator comprises a threaded tubular shaft;
m. wherein the threaded tubular shaft has a range of motion;
n. wherein the threaded tubular shaft has a length between 67 and 108 inches;
o. wherein the threaded tubular shaft is internally threaded along an entirety of the length;
p. wherein the range of motion crosses into a space below the deck;
q. wherein the range of motion crosses into a space above the canopy;
r. wherein the linear actuator comprises a motor;
s. wherein the motor is located in the space below the deck and
t. wherein the nautical vessel has a nautical vessel length of 45 feet or less.

US Pat. No. 10,116,905

SYSTEM AND METHOD OF VIRTUAL ZONE BASED CAMERA PARAMETER UPDATES IN VIDEO SURVEILLANCE SYSTEMS

HONEYWELL INTERNATIONAL I...

1. A method comprising:a processor of a surveillance system recording first video with a first level of a picture quality for a first camera;
the processor recording second video with the first level of the picture quality for a second camera;
the processor detecting a selection of a first portion of a secured area, wherein the selection of the first portion of the secured area is received via an operator drawing a shape on a diagram of the secured area displayed on a user interface, and wherein the processor detects a second portion of the secured area outside of the first portion of the secured area as an unselected zone;
the processor identifying the first camera within the first portion of the secured area;
the processor recording the first video with a second level of the picture quality for the first camera for a predetermined time period, wherein the second level of the picture quality includes increased video quality relative to the first level of the picture quality by increasing image resolution, increasing frames per second, decreasing a group of pictures (GOP) value, decreasing a compression ratio, or decreasing a bit rate;
the processor identifying the second camera within the unselected zone;
the processor recording the second video with a third level of the picture quality for the second camera for the predetermined time period, wherein the third level of the picture quality includes decreased video quality relative to the first level of the picture quality by decreasing the image resolution, decreasing the frames per second, increasing the GOP value, increasing the compression ratio, or increasing the bit rate; and
the processor recording the first video with the second level of the picture quality for the first camera for the predetermined time period concurrently with recording the second video with the third level of the picture quality for the second camera for the predetermined time period,
wherein the first video with the first level of the picture quality and the second video with the first video quality combined does not exceed a predetermined imposed bandwidth constraint, and
wherein the first video with the second level of the picture quality and the second video with the third video quality combined does not exceed the predetermined imposed bandwidth constraint.
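
As a hedged sketch of the zone-based recording levels described in the claim above: the concrete profile values and the fallback rule are assumptions; the claim only requires higher quality in the selected zone and lower quality in the unselected zone within an imposed bandwidth constraint.

from dataclasses import dataclass

@dataclass
class Profile:
    resolution: tuple   # (width, height) in pixels
    fps: int
    bitrate_kbps: int

FIRST_LEVEL  = Profile((1280, 720), 15, 2000)   # baseline quality
SECOND_LEVEL = Profile((1920, 1080), 30, 4000)  # increased quality, selected zone
THIRD_LEVEL  = Profile((640, 360), 10, 1000)    # decreased quality, unselected zone

def assign_profiles(cameras_in_selected_zone, all_cameras, bandwidth_cap_kbps):
    # Cameras inside the drawn shape record at the second level, the rest at the
    # third level; if the combined bitrate would exceed the cap, keep the baseline.
    plan = {c: (SECOND_LEVEL if c in cameras_in_selected_zone else THIRD_LEVEL)
            for c in all_cameras}
    if sum(p.bitrate_kbps for p in plan.values()) > bandwidth_cap_kbps:
        plan = {c: FIRST_LEVEL for c in all_cameras}
    return plan

print(assign_profiles({"cam1"}, ["cam1", "cam2"], 6000))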

US Pat. No. 10,116,904

FEATURES IN VIDEO ANALYTICS

HONEYWELL INTERNATIONAL I...

1. A video analytics function for streaming video from a video source arranged to monitor a field of view (FOV) that modifies a compression level of an object of interest (“object”) within the FOV, the video analytics function embodied as a set of instructions on a non-transitory computer readable medium, the video analytics function executable by a computer and implementing the following steps:reconstructing the FOV comprising the streaming video for viewing at an end-user interface;
receiving end-user commands at the end-user interface to define an object field encompassing the object within the FOV based on a monitoring priority for the object;
defining the compression level for the object including partial compression or full compression that fully masks the object;
compressing the streaming video within the object field according to the compression level;
monitoring the FOV of the streaming video;
analyzing first data associated with the FOV of the streaming video for a detectable event including movement and a direction of a person in the FOV; and
automatically decreasing the compression level of the object field in response to a detected event, wherein the detected event includes the movement of the person within the object field.
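
A small Python sketch of the event-driven compression control recited above; the level names and the single-step decrease are assumptions made for illustration:

LEVELS = ["none", "partial", "full_mask"]   # increasing compression of the object field

def on_detected_event(current_level, movement_in_object_field):
    # Automatically decrease the compression of the object field by one step when
    # movement of a person is detected inside it; otherwise keep the current level.
    if not movement_in_object_field:
        return current_level
    return LEVELS[max(LEVELS.index(current_level) - 1, 0)]

print(on_detected_event("full_mask", True))   # -> 'partial'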

US Pat. No. 10,116,903

APPARATUS AND METHOD FOR RECOGNITION OF SUSPICIOUS ACTIVITIES

AIC Innovations Group, In...

1. A medication administration confirmation apparatus, comprising:a display for displaying a first set of one or more instructions to a user encouraging proper performance of one or more steps of a medication administration sequence;
a video capture device operable to capture one or more video sequences of a user administering medication in response to the displayed one or more instructions;
an audio capture device operable to capture one or more audio sequences of the user administering medication;
a memory operable to store the captured one or more video sequences and the captured one or more audio sequences; and
a processor operable to analyze at least one of the stored video sequences to identify one or more predetermined indications of suspicious activity on behalf of the user, operable to mark the at least one of the stored video sequences as including suspicious activity, operable to analyze at least one of the stored audio sequences to determine one or more additional indications of suspicious activity on behalf of the user, and to cause the display to display one or more further instructions to the user encouraging proper performance of the one or more steps of the medication administration sequence in response to the identification of one or more predetermined indications of suspicious activity.

US Pat. No. 10,116,902

PROGRAM SEGMENTATION OF LINEAR TRANSMISSION

Comcast Cable Communicati...

1. A method comprising:determining, by a computing device and based on content scheduling information associated with a media stream:
content from the media stream, wherein the content comprises non-commercial content and commercial content; and
a content type associated with the non-commercial content;
determining, based on the content type, one or more expected visual elements corresponding to the content type;
determining, based on a comparison between the one or more expected visual elements and the content from the media stream, a non-commercial portion of the content from the media stream;
determining that a quantity of repeating elements in a second portion of the content from the media stream satisfies a threshold, wherein the second portion is different from the non-commercial portion; and
storing, after determining that the quantity satisfies the threshold, an updated version of the content from the media stream, wherein the updated version omits one or more of the repeating elements.
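
The repeating-element test in the claim above can be illustrated in a few lines of Python; the element labels and the threshold value are placeholders rather than the claimed implementation:

from collections import Counter

def omit_repeating_elements(elements, threshold=3):
    # Drop any element that repeats at least 'threshold' times within the portion,
    # mirroring the step of storing an updated version that omits repeating elements.
    counts = Counter(elements)
    repeating = {e for e, n in counts.items() if n >= threshold}
    return [e for e in elements if e not in repeating]

portion = ["scene_a", "bumper", "scene_b", "bumper", "scene_c", "bumper"]
print(omit_repeating_elements(portion))   # -> ['scene_a', 'scene_b', 'scene_c']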

US Pat. No. 10,116,900

METHOD AND APPARATUS FOR INITIATING AND MANAGING CHAT SESSIONS

APPLE INC., Cupertino, C...

1. A machine-implemented method performed by at least one machine for initiating a video chat session, the method comprising:in response to a request for starting a single group video chat among a plurality of members, determining whether all members have a chat service account with the same chat service provider;
initiating multiple group video chats among the members in response to determining that not all of the plurality of members have a chat service account with the same chat service provider, wherein each member has at least one chat service account to participate in at least one of the multiple group video chats; and
after the multiple group video chats have started, merging the multiple group video chats into the single group video chat using communication among the members of the multiple group video chats, without involving at least one chat server associated with the chat service provider of at least one of the plurality of members.

US Pat. No. 10,116,899

METHOD AND APPARATUS FOR FACILITATING SETUP, DISCOVERY OF CAPABILITIES AND INTERACTION OF ELECTRONIC DEVICES

LOGITECH EUROPE, S.A., L...

1. A system for configuring and/or controlling one or more electronic devices, comprising:a beacon generation system that comprises:
a first processor;
a wireless transceiver that is configured to transmit a beacon signal that comprises beacon information; and
non-volatile memory having the beacon information stored therein, and also a number of instructions which, when executed by the first processor, causes the beacon generation system to perform operations comprising:
receive an input from a first electronic device or a user;
wirelessly transmit the beacon information to a first electronic device after receiving the input from the first electronic device or the user,
wherein the beacon information includes information that is used by a software application running on the first electronic device to:
select a second electronic device out of a plurality of external electronic devices; and
initiate communication with the second electronic device.

US Pat. No. 10,116,898

INTERFACE FOR A VIDEO CALL

FACEBOOK, INC., Menlo Pa...

1. A method, comprising:displaying a full-sized interface for a video call on a display associated with a participant in the video call, wherein the display is a touch interface;
displaying a reduced-size interface for the video call on a portion of the display associated with the first participant in the video call, the portion being smaller than an entirety of the display, the interface comprising a main window displaying a current relevant video communication in the video call and a roster of additional participants in the video call;
registering a haptic contact initiation signal at a first location on the display in the portion of the display comprising the interface;
registering a haptic contact release signal at a second location on the display; and
moving the interface for the video call based on a difference between the first location and the second location;
receiving an instruction to display a second video communication associated with a second participant that is identified as a previous relevant video communication in the video call while a first video communication associated with the first participant is flagged as the current relevant video communication; and
displaying the second video communication in the main window of the interface.
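
A minimal Python sketch of the drag-to-move behaviour recited in the claim above; clamping the window to the screen bounds is an added assumption:

def move_call_window(window_xy, press_xy, release_xy, screen_wh, window_wh):
    # Translate the reduced-size interface by the vector between the haptic
    # contact initiation location and the haptic contact release location.
    dx, dy = release_xy[0] - press_xy[0], release_xy[1] - press_xy[1]
    x = min(max(window_xy[0] + dx, 0), screen_wh[0] - window_wh[0])
    y = min(max(window_xy[1] + dy, 0), screen_wh[1] - window_wh[1])
    return (x, y)

print(move_call_window((10, 10), (50, 50), (200, 120), (800, 600), (160, 90)))   # -> (160, 80)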

US Pat. No. 10,116,895

SIGNAL DISPLAY OUTPUT METHOD, APPARATUS, AND SYSTEM

Huawei Technologies Co., ...

1. A signal display output method, comprising:receiving, by a TV box expansion device, a radio television signal by using a radio frequency port, wherein the radio television signal comprises a first television signal and a second television signal;
performing, by the TV box expansion device, demodulation processing on the radio television signal to obtain a to-be-decoded digital signal, comprising:
performing, by the TV box expansion device, demodulation processing on the first television signal to obtain a to-be-decoded first digital signal, and performing demodulation processing on the second television signal to obtain a to-be-decoded second digital signal;
sending, by the TV box expansion device, the to-be-decoded digital signal to an Internet Protocol (IP) TV box for decoding processing on the to-be-decoded digital signal to obtain a decoded digital signal for display output, comprising:
sending, by the TV box expansion device, the to-be-decoded first digital signal and the to-be-decoded second digital signal to the Internet Protocol (IP) TV box; and
receiving and storing, by the TV box expansion device, a decoded second digital signal sent by the IP TV box.

US Pat. No. 10,116,894

RETAIL OUTLET TV FEATURE DISPLAY SYSTEM

Sony Corporation, Tokyo ...

1. A system, comprising:at least one computer memory with instructions executable by a processor; and
a processor configured for communicating with a display device and to execute the instructions which when executed by the processor configure the processor to:
send feature presentation images to the display device in a loop in a first sequence of feature presentation images at least while the display device is energized in a retail outlet, and
disable feature presentation in response to a user entering a setup mode of the display device, the setup mode being accessed from an initial menu and entry of the setup mode from the initial menu being used as a signal to disable the feature presentation, a menu entry being provided in the setup mode to reenable the feature presentation.

US Pat. No. 10,116,892

BITLINE BOOST FOR FAST SETTLING WITH CURRENT SOURCE OF ADJUSTABLE BIAS

OmniVision Technologies, ...

1. A fast settling output line circuit, comprising: a photodiode (PD) (202) adapted to accumulate image charges in response to incident light; at least one transfer (TX) transistor (204) coupled between the PD (202) and a floating diffusion (FD) (208) to transfer the image charges from the PD (202) to the floating diffusion (FD) (208), wherein a transfer (TX) gate voltage (206) controls transmission of the image charges from a TX receiving terminal (207) of the TX transistor to the FD (208); a reset (RST) transistor (210) coupled to supply a reset FD voltage (VRFD) to the FD (208), wherein a reset (RST) gate voltage (212) controls the RST transistor; a source follower (SF) transistor (216) coupled to receive a voltage of the FD (208) from a SF gate terminal and provide an amplified signal to a SF source terminal (218); a bitline enable transistor (226) coupled to link between a bitline (224) and a bitline source node (BLSN) (230), wherein a bitline enable voltage (228) controls the bitline enable transistor (226); a current source generator (231) coupled to connect between the BLSN (230) and a ground (AGND), wherein the current source generator (231) sinks adjustable current from the BLSN (230) to the AGND through a cascode transistor (232) and a bias transistor (242) controlled by a cascode control voltage (234) and a bias control voltage (244); a cascode hold capacitor (250) coupled between the cascode control voltage (234) and the AGND; a bias hold capacitor (252) coupled between the bias control voltage (244) and the AGND; and a bias boost driver (255) coupled to control the cascode control voltage (234) and the bias control voltage (244).

US Pat. No. 10,116,891

IMAGE SENSOR HAVING STACKED IMAGING AND DIGITAL WAFERS WHERE DIGITAL WAFER HAS STACKED CAPACITORS AND LOGIC CIRCUITRY

1. An electronic device, comprising:a first integrated circuit die having formed therein at least one photodiode, read circuitry for the at least one photodiode, and readout circuitry for the first integrated circuit die, wherein the read circuitry has an input coupled to the at least one photodiode and an output, wherein the readout circuitry has an input coupled to the output of the read circuitry and an output;
a second integrated circuit die in a stacked arrangement with the first integrated circuit die and having formed therein at least one storage capacitor associated with the at least one photodiode; and
an interconnect between the first and second integrated circuit dies for coupling the output of the read circuitry to the at least one storage capacitor;
wherein the output of the readout circuitry provides for readout of data stored in the at least one storage capacitor.

US Pat. No. 10,116,890

IMAGING APPARATUS AND IMAGING METHOD USING DIFFERENCE BETWEEN RESET SIGNAL AND PIXEL SIGNAL STORED TO TWO CAPACITORS

SmartSens Technology (US)...

1. An imaging apparatus, comprising:a pixel array, comprising a plurality of pixels arranged in rows and columns and a column output line, wherein at least one of the pixels comprises: an output transistor,
a first capacitor configured to store a reset signal, and a second capacitor configured to store a pixel signal; a plurality of column circuits, wherein at least one of the column circuits reads the reset signal from the first capacitor, reads the pixel signal from the second capacitor, and generates difference between the reset signal and the pixel signal, wherein the pixel is configured to store the pixel signal to the second capacitor after the reset signal is stored to the first capacitor,
wherein the output transistor is connected between a source follower transistor and the ground, and controlled by a bias control signal; and when ‘reset’ and ‘signal’ signals are read out and stored to the first and second capacitors, the output transistor is in a turned-off state to ensure the storage of the signals, and after that, the output transistor grounds an output of the source follower transistor to reduce the interference when the ‘reset’ and ‘signal’ signals are read out, and to ensure auto-zeroing;
wherein a charge stored in the first capacitor is obtained through the column output line, and a charge stored in the second capacitor is obtained through the column output line, and
wherein the column output line is grounded through a transistor 1002 controlled by a signal Vrbit; when the column output line transfers a charge to an input capacitor, or a charge in an input capacitor is redistributed, the signal Vrbit controls the transistor 1002 to be in the off state, so as to facilitate conversion of the charge and reduction of the noise; and wherein after a read out of the pixel signal is completed, the signal Vrbit controls the transistor 1002 to be in the off state, and the column output line is reset.

US Pat. No. 10,116,889

IMAGE SENSOR WITH TWO-DIMENSIONAL SPLIT DUAL PHOTODIODE PAIRS

OmniVision Technologies, ...

1. An image sensor, comprising:an array of split dual photodiode (DPD) pairs arranged into a plurality of first groupings and a plurality of second groupings, wherein each first grouping of the array of split DPD pairs consists entirely of either first-dimension split DPD pairs or entirely of second-dimension split DPD pairs, wherein each first grouping of the array of split DPD pairs consisting of the first-dimension split DPD pairs is adjacent to an other first grouping of the array of split DPD pairs consisting of the second-dimension split DPD pairs, wherein the first-dimension is orthogonal to the second-dimension, wherein each one of the split DPD pairs is coupled to sense both phase information and image information from incident light;
a plurality of floating diffusion (FD) regions arranged in each first grouping of the split DPD pairs; and
a plurality of transfer transistors, wherein each one of the plurality of transfer transistors is coupled to a respective photodiode of a respective split DPD pair, and is coupled between the respective photodiode and a respective one of the plurality of FD regions.

US Pat. No. 10,116,886

DEVICE AND METHOD FOR DIRECT OPTICAL IMAGE CAPTURE OF DOCUMENTS AND/OR LIVE SKIN AREAS WITHOUT OPTICAL IMAGING ELEMENTS

JENETRIC GmbH, Jena (DE)...

1. A device for direct optical recording of a security-related object without optically imaging elements, the device comprising:a placement surface for depositing the object, and a sensor layer disposed under the object on a substrate layer transparent at least in a visible wavelength range;
the sensor layer having light-sensitive elements in a two-dimensional pixel grid and being disposed in a layer body with a circuitry based on thin film transistor (TFT) electronics;
a light source being a primary light-emitting layer for illuminating the object with at least light portions of the primary light-emitting layer from a direction of the sensor layer through the placement surface, wherein all layers of the layer body disposed between the primary light-emitting layer and the placement surface transmit at least portions of light in the visible wavelength range;
the light-sensitive elements of the sensor layer being disposed at a distance of less than a mean pixel spacing from the object on the placement surface, the mean pixel spacing being defined by the two dimensional pixel grid;
the light sensitive elements each having a control unit disposed within the sensor layer for controlling an exposure time to obtain an image captured with a predefined exposure time;a shutter for changing the exposure time by changing a shutter setting of the light sensitive elements in the sensor layer if an overexposure or underexposure has been determined;a storage for storing the image and for storing a resulting image when no further change of the exposure time is needed; and
an internal computing device for analyzing the image at least for overexposure or underexposure, for determining whether a further iteration is needed to change the exposure time, and for further evaluating illumination intensity and adapting the illumination intensity of the primary light-emitting layer below the placement surface if an underexposure or overexposure of the object is determined;
wherein the security-related object is selected from the groups consisting of personal identification documents, passports or driver's licenses and single-fingerprints, multiple finger prints and handprints.

US Pat. No. 10,116,885

SYSTEMS AND APPROACHES FOR REPEATED THERMAL IMAGING DETERMINATIONS

HEMA IMAGING LLC, Eden P...

1. A thermal imaging system, the system comprising:a thermal imaging sensor configured to capture a plurality of thermal images containing thermal data of a plurality of assets in an environment;
a non-thermal imaging sensor coupled to the thermal sensor, the non-thermal imaging sensor configured to capture a plurality of non-thermal images of the plurality of assets;
an image alignment system;
an asset identification system configured to identify a particular asset from the plurality of assets, the particular asset being previously identified by being input into the asset identification system, the asset identification system identifying the particular asset by instructing a user to capture a unique identifier image associated with the particular asset and comparing the capture to a plurality of baseline unique identifier images;
a controller configured to control operation of the thermal imaging sensor, the non-thermal imaging sensor, and the image alignment system by:
(i) presenting at least one of a previously-captured baseline image or a template image of the particular asset to the user via a display such that the user may approximate an orientation of a present thermal view of the particular asset and a present non-thermal view of the particular asset to the at least one of the previously-captured baseline image or template image,
(ii) adjusting the orientation of a present thermal view of the particular asset and the present non-thermal view of the particular asset to match an orientation of at least one of the previously-captured baseline image or template image of the particular asset, and
(iii) causing the thermal imaging sensor to capture a thermal image of the present thermal view of the particular asset and the non-thermal imaging sensor to capture a non-thermal image of the present non-thermal view of the particular asset; and
a remote computing device configured to receive the captured thermal and non-thermal images of the particular asset, the remote computing device having a change detection system configured to detect a change in at least one characteristic of the particular asset using the thermal image captured by the thermal imaging sensor.

US Pat. No. 10,116,884

SYSTEMS AND APPROACHES FOR THERMAL IMAGING

HEMA IMAGING LLC, Eden P...

1. A thermal imaging system, the system comprising:a thermal imaging sensor configured to capture a plurality of thermal images of a plurality of assets in an environment;
a non-thermal imaging sensor coupled to the thermal sensor, the non-thermal imaging sensor configured to capture a plurality of non-thermal images of the plurality of assets;
an asset identification system configured to identify a particular asset from the plurality of assets, the particular asset being previously identified by being input into the asset identification system, the asset identification system identifying the particular asset by instructing a user to capture a unique identifier image associated with the particular asset and comparing the capture to a plurality of baseline unique identifier images;
an image alignment system; and
a controller configured to control operation of the thermal imaging sensor, the non-thermal imaging sensor, the asset identification system, and the image alignment system;
wherein when the asset identification system identifies the particular asset, the controller is configured to (i) present at least one of a previously-captured baseline image or a template image of the particular asset to the user via a display such that the user may approximate an orientation of a present thermal view of the particular asset and a present non-thermal view of the particular asset to the at least one of the previously-captured baseline image or template image, (ii) control the image alignment system to adjust the orientation of the present thermal view of the particular asset and the present non-thermal view of the particular asset to match the at least one of the previously-captured baseline image or template image, and (iii) cause the thermal imaging sensor to capture a thermal image of the present thermal view of the particular asset and the non-thermal imaging sensor to capture a non-thermal image of the present non-thermal view of the particular asset.

US Pat. No. 10,116,882

DISPLAY APPARATUS FOR SUPERIMPOSING AND DISPLAYING IMAGES

CASIO COMPUTER CO., LTD.,...

1. A display apparatus comprising:a display unit; and
a processor that is configured to:
perform control for superimposing and displaying a plurality of images in the display unit such that at least one of the plurality of images can be observed through one or more other images distinguishably;
designate one or more of the plurality of images; and
detect a user manipulation performed for the plurality of images,
wherein the processor performs control for changing the designated one or more images spatially or temporally according to the detected user manipulation while keeping the plurality of images superimposed and displayed.

US Pat. No. 10,116,881

IMAGE APPARATUS AND METHOD FOR RECEIVING VIDEO SIGNAL IN MULTIPLE VIDEO MODES

SAMSUNG ELECTRONICS CO., ...

1. A video signal processing apparatus comprising:a video signal input unit including a plurality of video input terminals that includes a first video input terminal for receiving a plurality of types of video signals and a second video input terminal for receiving one type of video signals; and
a signal processing unit configured to:
determine whether a first video signal is received via the first video input terminal,
determine whether a second video signal is received via the second video input terminal,
in response to the second video signal being received via the second video input terminal while the first video signal is being received via the first video input terminal, process the first and second video signals received via the first video input terminal and the second video input terminal based on an automatically determined first video mode corresponding to a first type of the plurality of types of video signals, and
in response to the second video signal not being received via the second video input terminal while the first video signal is being received via the first video input terminal, process the first video signal received via the first video input terminal based on an automatically determined second video mode corresponding to a second type of the plurality of types of video signals.

US Pat. No. 10,116,880

IMAGE STITCHING METHOD AND IMAGE PROCESSING APPARATUS

SINTAI OPTICAL (SHENZHEN)...

1. An image processing device, comprising:a first lens;
a second lens, wherein the first lens and the second lens respectively capture at least one first image and at least one second image, and locations of the first lens and the second lens slightly differ;
a first microphone;
a second microphone, wherein the first microphone and the second microphone are respectively attached on the first lens and the second lens for capturing a first audio track and a second audio track;
a memory unit, for storing an image stitching program; and
a processor, for executing the image stitching program to perform the steps of:
utilizing the image processing device to receive a first video file and a second video file, wherein the first video file comprises the at least one first image and the first audio track, and the second video file comprises the at least one second image and the second audio track;
calculating delay time between a first acoustic feature of the first audio track and a second acoustic feature of the second audio track and synchronizing the first image and the second image according to the delay time;
converting the synchronized first image and the synchronized second image into a first adjusted image and a second adjusted image, respectively; and
performing an image stitching process on the first adjusted image and the second adjusted image to generate a stitched image.
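
The delay-and-synchronize step of the claim above can be sketched with a cross-correlation of the two audio tracks; numpy is assumed to be available, and the correlation-based estimator stands in for whatever acoustic features the patented method actually uses:

import numpy as np

def estimate_lag_seconds(track_a, track_b, sample_rate):
    # Positive result: features in track_a appear later than in track_b,
    # i.e. recording A started earlier.
    corr = np.correlate(track_a, track_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(track_b) - 1)
    return lag_samples / sample_rate

def synchronize_frames(frames_a, frames_b, lag_seconds, fps):
    # Drop leading frames from whichever recording started earlier.
    offset = int(round(abs(lag_seconds) * fps))
    if lag_seconds > 0:
        return frames_a[offset:], frames_b
    return frames_a, frames_b[offset:]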

US Pat. No. 10,116,879

METHOD AND APPARATUS FOR OBTAINING AN IMAGE WITH MOTION BLUR

Alcatel Lucent, Boulogne...

1. Method for obtaining an image containing a portion with motion blur, comprising:controlling at least one camera to take a first, second and third picture in a determined order of an object and a background, such that said first picture is taken with a first exposure time, said second picture with a second exposure time, and said third picture with a third exposure time, said second exposure time being longer than said first and said third exposure time, such that said second picture contains a blurred image of the background and/or the object if said object and/or said background is moving with respect to said at least one camera;
generating a final image containing at least a portion of said blurred image of the second picture as well as a portion derived from said first and/or third picture using said first, second and third picture,
wherein generating of the final image comprises:
using the first and the third picture to determine a shape and a position of the object in said first and said third picture;
isolating the at least a portion of the blurred image from the second picture, using the position and shape of the object in the first and third picture; and
combining the isolated at least a portion of the blurred image with a portion derived from the first and/or third picture to obtain the final image.
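
A simplified numpy sketch of the compositing idea in the claim above; the object mask is reduced here to simple differencing of the first and third (short-exposure) pictures, which is only a stand-in for the claimed shape and position determination:

import numpy as np

def motion_blur_composite(img1, img2_long, img3, diff_threshold=25):
    # img1, img3: short exposures; img2_long: long (blurred) exposure.
    # All are same-shape grayscale arrays.
    moving_mask = np.abs(img1.astype(int) - img3.astype(int)) > diff_threshold
    final = img1.copy()
    final[moving_mask] = img2_long[moving_mask]   # keep the blur where the object moved
    return final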

US Pat. No. 10,116,878

METHOD FOR PRODUCING MEDIA FILE AND ELECTRONIC DEVICE THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A method for producing a media file in an electronic device, the method comprises:detecting an event during recording of media frames;
determining at least one effect to be applied on the media frames;
applying the determined effect on at least one of at least one first media frame from a first set of the media frames and at least one second media frame from a second set of the media frames; and
generating a media file comprising the first and second sets of the media frames.

US Pat. No. 10,116,877

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:one or more processors; and
a memory storing instructions which, when the instructions are executed by the one or more processors, cause the image processing apparatus to function as:
an obtaining unit configured to obtain a first image and a second image;
a determination unit configured to determine a partial area of the first image as a composite area to be combined with the second image; and
a combining unit configured to combine the second image with the composite area,
wherein the determination unit determines the composite area based on distance information with regard to a plurality of partial areas of the first image, and
wherein the determination unit further sets a prohibited area in the first image, and does not set, among the plurality of partial areas, a partial area that overlaps the prohibited area as the composite area.

US Pat. No. 10,116,876

IMAGE CAPTURING METHOD AND APPARATUS, WITH REDUCED SATURATION AND IMPROVED SIGNAL-TO-NOISE RATIO BASED ON A DYNAMIC RANGE

CANON KABUSHIKI KAISHA, ...

1. An image capturing apparatus comprising:an image capturing unit including an image sensor that captures an object image and output image data; and
at least one processor that performs the operations of the following units by executing instructions stored in at least one memory:
an adjustment unit configured to adjust an exposure of the image capturing unit;
a detection unit configured to, in a case where the image data which is based on a signal outputted from the image sensor is saturated, detect a signal level corresponding to a highest level of incident light outputted from the image sensor after the exposure is reduced by the adjustment unit so as to reduce the saturation; and
a control unit configured to, in a case where the exposure is reduced, preferentially select a method having a faster response speed with respect to an exposure change instruction among a plurality of methods for changing the exposure,
wherein the control unit adjusts an exposure so as to reduce the saturation based on a dynamic range corresponding to before reducing the exposure and a maximum dynamic range that can be set in the image capturing apparatus.

US Pat. No. 10,116,875

IMAGE PICKUP APPARATUS AND METHOD FOR CONTROLLING THE SAME TO PREVENT DISPLAY OF A THROUGH IMAGE FROM BEING STOPPED WHEN A SHUTTER UNIT IS NOT COMPLETELY OPENED

Olympus Corporation, Tok...

1. An image pickup apparatus comprising:an image pickup device including an imaging plane on which imaging pixels are arranged;
a shutter unit which adjusts an amount of light incident upon the imaging plane;
an image pickup control unit which drives the shutter unit and picks up a still image by the image pickup device, captures a first through image by the image pickup device when the shutter unit is opened, and picks up a second through image including a light-shielded area by the image pickup device, the light-shielded area being formed by shielding part of light incident upon the imaging plane by the shutter unit when the shutter unit is partly light-shielded; and
a display control unit which causes a display device to display a through image using at least the first through image and the second through image,
wherein the display control unit superimposes a superimposing image on the light-shielded area of the second through image to cause the display device to display a through image based on the second through image on which the superimposing image is superimposed,
wherein the display control unit includes an advice display unit which superimposes an advice display on the second through image as the superimposing image during a period from when the still image is completely picked up until at least the shutter unit is opened.

US Pat. No. 10,116,874

ADAPTIVE CAMERA FIELD-OF-VIEW

MICROSOFT TECHNOLOGY LICE...

1. A display device, comprising:a display;
a movable mount;
a camera having an optical field-of-view;
an orientation sensor; and
a controller configured to receive image output from the camera, select, based on the image output, a first clipped field-of-view of the camera to thereby capture a target within the first clipped field-of-view, and in response to a change in an orientation of the camera identified by output from the orientation sensor, select, based on the image output and the output from the orientation sensor, a second clipped field-of-view to thereby capture the target within the second clipped field-of-view, the first and second clipped field-of-views being subsets of the optical field-of-view and being angularly offset from each other.
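
A geometric sketch of the clipped field-of-view selection in the claim above; the pixels-per-degree calibration and the clamping of the window to the sensor are assumptions introduced for illustration:

def clipped_fov(target_xy, sensor_wh, clip_wh, yaw_change_deg=0.0, px_per_deg=20.0):
    # Re-centre a fixed-size crop window on the target and shift it against the
    # camera rotation reported by the orientation sensor, so the target stays framed.
    cx = target_xy[0] - yaw_change_deg * px_per_deg
    cy = target_xy[1]
    x0 = min(max(cx - clip_wh[0] / 2, 0), sensor_wh[0] - clip_wh[0])
    y0 = min(max(cy - clip_wh[1] / 2, 0), sensor_wh[1] - clip_wh[1])
    return (int(x0), int(y0), int(x0 + clip_wh[0]), int(y0 + clip_wh[1]))

print(clipped_fov((960, 540), (1920, 1080), (640, 360), yaw_change_deg=5.0))   # -> (540, 360, 1180, 720)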

US Pat. No. 10,116,873

SYSTEM AND METHOD TO ADJUST THE FIELD OF VIEW DISPLAYED ON AN ELECTRONIC MIRROR USING REAL-TIME, PHYSICAL CUES FROM THE DRIVER IN A VEHICLE

Ambarella, Inc., Santa C...

1. An apparatus comprising:a first sensor configured to generate a first video signal based on a targeted view from a vehicle;
a second sensor configured to generate a second video signal based on a targeted view of a driver; and
a processor configured to (A) receive said first video signal, (B) receive said second video signal, (C) determine a field of view to present to said driver, (D) generate a third video signal and (E) present said third video signal to an electronic mirror configured to show said field of view, wherein (a) said field of view is determined based on (i) a body position of said driver extracted from said second video signal by determining a distance from said second sensor and (ii) said first video signal, (b) said distance from said second sensor is based on a comparison of a number of pixels of a known object in a first video frame showing an interior of said vehicle without said driver and a second video frame of said interior of said vehicle with said driver, (c) said field of view displayed on said electronic mirror is configured to emulate a view from a reflective mirror as seen from a point of view of said driver and (d) said electronic mirror implements at least one of a rear view mirror and a side view mirror for said vehicle.

US Pat. No. 10,116,872

IMAGE CAPTURING APPARATUS, METHOD, AND PROGRAM WITH OPERATION STATE DETERMINATION BASED UPON ANGULAR VELOCITY DETECTION

Sony Corporation, Tokyo ...

1. An image capturing apparatus comprising:an angular velocity detection unit configured to respectively detect angular velocities of movement of the image capturing apparatus at a plurality of times;
an operation determination unit configured to determine a panning operation state of the image capturing apparatus based on the detected angular velocities at the plurality of times, the determined panning operation state being one of a plurality of predetermined classifications of panning operation states; and
a zoom control unit configured to perform zoom control based on the determined panning operation state.

US Pat. No. 10,116,871

TUNNEL LINING SURFACE INSPECTION SYSTEM AND VEHICLE USED FOR TUNNEL LINING SURFACE INSPECTION SYSTEM

WEST NIPPON EXPRESSWAY EN...

1. A tunnel lining surface inspection system wherein, while a vehicle is travelling in a tunnel, a tunnel lining surface image is photographed and is processed into an image used for inspecting the tunnel lining surface, the system comprising:a plurality of line sensors mounted in the vehicle, having a photography range of one side face in both side faces of the tunnel lining surface, which photograph images of each area along a circumferential direction of the tunnel lining surface,
a fixing member mounted in a lodging space of the vehicle, on which the plurality of line sensors are arranged along the circumferential direction of the tunnel lining surface and fixed so that the one side face in the both side faces of the tunnel lining surface can be photographed,
a drive axis mounted in the fixing member for fixing the plurality of line sensors to a first photography position where one side face in the both side faces of the tunnel lining surface can be photographed and for fixing the plurality of line sensors to a second photography position where the other side face in the both side faces of the tunnel lining surface can be photographed, which rotates the fixing member in the circumferential direction of the tunnel lining surface,
a first image processing unit capturing imaging data having been photographed by the plurality of line sensors, and
a second image processing unit processing the imaging data having been captured in the first image processing unit, wherein
the first image processing unit, while the plurality of line sensors being fixed in the first photography position after the drive axis being driven to the left and the fixing member being rotated to the left side in the circumferential direction of the tunnel lining surface, performs processing of capturing a first imaging data having been photographed by the plurality of line sensors, showing one side face in the both side faces of the tunnel lining surface, and, while the plurality of line sensors being fixed in the second photography position after the drive axis being driven to the right and the fixing member being rotated to the right side in the circumferential direction of the tunnel lining surface, performs processing of capturing a second imaging data having been photographed by the plurality of line sensors, showing the other side face in the both side faces of the tunnel lining surface, and
the second image processing unit performs processing of selecting the imaging data forming the identical span of the tunnel lining surface in the first imaging data and the second imaging data according to each span of the tunnel lining surface, and performs image synthesis processing to obtain the images showing both side faces of the tunnel lining surface according to each span of the tunnel lining surface.

US Pat. No. 10,116,870

SINGLE CAMERA VISION SYSTEM FOR LOGISTICS APPLICATIONS

Cognex Corporation, Nati...

1. A vision system for acquiring images of features of objects of varying height passing under a camera field of view in a transport direction comprising:a camera with an image sensor defining a height:width aspect ratio of at least 1:4;
a lens assembly comprising a front lens group and a rear lens group, the front lens group including a front convex lens and a rear composite lens, the rear lens group comprising a variable lens element, the lens assembly being in optical communication with the image sensor and having an adjustable viewing angle at constant magnification within a predetermined range of working distances;
a distance sensor that measures a distance between the camera and at least a portion of the object; and
an adjustment module that adjusts the viewing angle based upon the distance.

US Pat. No. 10,116,869

IMAGE PICKUP APPARATUS AND DISPLAY CONTROL METHOD

Sony Corporation, (JP)

1. An image processing apparatus comprising:circuitry configured to:
detect an edge of an input image; and
control display of an output image based on the input image and a highlight signal, in which the highlight signal is generated based on the detected edge of the input image and the highlight signal is displayed in a color set for a predetermined range of a level of an edge that a detection level of the detected edge is within.

US Pat. No. 10,116,868

DISPLAY-INTEGRATED USER-CLASSIFICATION, SECURITY AND FINGERPRINT SYSTEM

QUALCOMM Incorporated, S...

1. An apparatus comprising:an electronic display, having a display cover glass with a front surface that includes a viewing area, and a fingerprint reading area within the viewing area;
a first planar light guide; and
at least one photosensing element configured to:
detect received scattered light, the received scattered light resulting from interaction of light with an object in at least partial optical contact with the front surface within the fingerprint reading area;
register, within a field of view of the photosensing element, multiple images of the object, each of the multiple images corresponding to light that is scattered at a respective angle from the object and that undergoes a respective number of internal reflections within the first planar light guide before being detected by the photosensing element; and
output, to a processor, image data of the multiple images; wherein
the respective angle and the respective number of internal reflections is different for each of the multiple images; and
the processor is configured to recognize, from the image data, a fingerprint of a user of the electronic display.

US Pat. No. 10,116,867

METHOD AND APPARATUS FOR DISPLAYING A LIGHT FIELD BASED IMAGE ON A USER'S DEVICE, AND CORRESPONDING COMPUTER PROGRAM PRODUCT

Thomson Licensing, Issy-...

1. A method for displaying at least one light field based image on a user's device, wherein the method comprisesdisplaying said image focused according to at least one focusing parameter determined as a function of a movement of said device by a user,
wherein said displayed image is a slice image determined as an intersection of a focal stack with an intersection plane depending on at least one movement parameter of said device
wherein said focal stack comprises a set of focused images of a scene, where two consecutive focused images are spaced a sampling interval apart from each other in the focal stack, and wherein the method also comprises:
adjusting said sampling interval as a function of a layout of said scene;
determining the focal stack as the set of consecutive focused images spaced an adjusted sampling interval apart from each other.

US Pat. No. 10,116,866

STABILIZATION OF LOW-LIGHT VIDEO

Facebook, Inc., Menlo Pa...

1. A method comprising:by a computing device, determining a first maximum exposure time for capturing one or more image frames of a video clip, wherein the first maximum exposure time is based on a first amount of motion of the computing device and a first light level;
by the computing device, initiating capture of the image frames, wherein each of the captured image frames has an exposure time that is less than or equal to the first maximum exposure time;
by the computing device, while the capture of the image frames is in progress, determining a second amount of motion of the computing device and a second light level; and
by the computing device, determining whether to adjust the first maximum exposure time to a second maximum exposure time based on the second amount of motion and the second light level.
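
A minimal sketch of the exposure-cap logic described in the claim above; all constants are illustrative assumptions (more device motion shortens the maximum exposure to limit blur, while lower light allows a longer one, bounded by hardware limits):

def max_exposure_ms(motion_deg_per_s, light_lux, hard_min_ms=2.0, hard_max_ms=33.0):
    motion_cap = 250.0 / max(motion_deg_per_s, 1.0)   # shakier device -> shorter cap
    light_cap = 33.0 if light_lux < 50 else 16.0      # darker scene tolerates a longer cap
    return min(max(min(motion_cap, light_cap), hard_min_ms), hard_max_ms)

# Re-evaluated mid-capture with the second amount of motion and the second light level.
print(max_exposure_ms(motion_deg_per_s=50.0, light_lux=10.0))   # -> 5.0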

US Pat. No. 10,116,865

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD FOR CALCULATING MOTION VECTOR BETWEEN IMAGES WITH DIFFERENT IN-FOCUS POSITIONS

Canon Kabushiki Kaisha, ...

1. An electronic device having a function based on a motion vector, comprising:one or more processors;
a memory that stores a program, which is executable by the one or more processors and causes, when executed by the one or more processors, the one or more processors to function as:
a detection unit configured to detect a plurality of motion vectors between a first image and a second image based on a correlation between the first image and the second image;
a determination unit configured to determine a degree of reliability for each of the plurality of motion vectors based on a corresponding evaluation value regarding the correlation; and
a control unit configured to control the function based on a motion vector the degree of reliability of which is evaluated as being high, from among the plurality of motion vectors,
wherein the determination unit determines the degree of reliability for each of the plurality of motion vectors further based on a corresponding difference in amount of bokeh between the first image and the second image, and
wherein the function includes at least one of an image stabilization function and a subject tracking function.
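
A short Python sketch of the reliability weighting recited in the claim above; the linear penalty on the bokeh difference and the cut-off value are assumptions:

def reliability(correlation_score, bokeh_difference, bokeh_penalty=0.5):
    # Higher correlation raises reliability; a larger difference in amount of
    # bokeh between the two differently focused images lowers it.
    return correlation_score - bokeh_penalty * bokeh_difference

def select_reliable_vectors(vectors, min_reliability=0.6):
    # vectors: iterable of (vx, vy, correlation_score, bokeh_difference).
    return [(vx, vy) for vx, vy, corr, bd in vectors
            if reliability(corr, bd) >= min_reliability]

print(select_reliable_vectors([(1, 0, 0.9, 0.2), (3, 2, 0.7, 0.5)]))   # -> [(1, 0)]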

US Pat. No. 10,116,864

IMAGING APPARATUS, IMAGING DISPLAY CONTROL METHOD, AND PROGRAM

Sony Corporation, Tokyo ...

1. An image processing apparatus for controlling an image capturing apparatus, the image processing apparatus comprising:a memory; and
a processor configured to
control, during a capturing operation of images by the image capturing apparatus, display of an area indication indicating a range of an area for moving the image capturing apparatus, at least part of the images captured within the range being used for generating a synthetic image having a field of view wider than that of the images,
control, during the capturing operation, display of a reference position indication indicating a position within the range of an identified subject identified by user operation with the area indication associated with the synthetic image, and
display an instruction indicating a direction that the image capturing apparatus should be moved based on the position of the subject.

US Pat. No. 10,116,863

SCANNING WITH FRAME AVERAGING

Goodrich Corporation, Ch...

1. A method of obtaining image data comprising:scanning an imaging area with an imaging device while obtaining multiple overlapping images of the imaging area; and
transforming the overlapping images by performing frame averaging on the overlapping images to produce at least one enhanced image of the imaging area, wherein transforming the overlapping images by performing frame averaging is performed automatically at a coarse level to produce the at least one enhanced image, and further comprising:
transforming the overlapping images by performing super resolution frame averaging on at least one portion of the overlapping images to produce at least one super resolution image of the imaging area, wherein the at least one super resolution image has a finer sampling than the at least one enhanced image.
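
A compact numpy sketch of the coarse frame averaging and the finer-sampled averaging described above; registration of the overlapping images is assumed to have been done already, and the nearest-neighbour upsampling is only a stand-in for a true super-resolution resampling:

import numpy as np

def frame_average(frames):
    # frames: list of same-shape 2-D arrays already registered to one another.
    return np.mean(np.stack(frames, axis=0), axis=0)

def finer_sampled_average(frames, factor=2):
    # Average the frames on a grid 'factor' times finer than the coarse result.
    upsampled = [np.kron(f, np.ones((factor, factor))) for f in frames]
    return frame_average(upsampled)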

US Pat. No. 10,116,862

IMAGING APPARATUS

OLYMPUS CORPORATION, Tok...

1. An image generation apparatus comprising:a first imaging circuit that acquires first image data;
a second imaging circuit that acquires second image data;
a control circuit that searches a region corresponding to the first image data from the second image data;
a designating circuit that limits a region in the second image data corresponding to the first image data by a touch operation designating a limited region in the second image data corresponding to the first image data; and
a communication circuit that is provided in the second imaging circuit, transmits, upon receipt of an information acquiring operation, information obtained by analyzing the limited region or the corresponding region in the second image data to a server, and receives information relating to the first image data from the server.

US Pat. No. 10,116,861

GUIDED IMAGE CAPTURE USER INTERFACE

Ricoh Company, Ltd., Tok...

1. A computer-implemented method comprising:generating a first user interface configured to receive and present product information for an item including dimensions of the item;
receiving a first image;
generating a second user interface to present a template, the template including a bounding box sized to match the dimensions of the item, the second user interface configured to present the bounding box overlaid over a second image;
receiving input to capture a portion of the second image within the bounding box;
responsive to the input to capture the portion of the second image, generating a third user interface to present the first image and the captured portion of the second image as variants of a face of the item; and
storing the captured portion of the second image as a variant of the face of the item and the information of the item in a database.

US Pat. No. 10,116,860

IMAGING OPERATION GUIDANCE DEVICE AND IMAGING OPERATION GUIDANCE METHOD

OLYMPUS CORPORATION, Tok...

1. An imaging operation guidance device, comprising:an image sensor that obtains a current image;
an attitude sensor that measures motion of the image sensor;
a memory that stores at least one previous image and an operation history for the image sensor; and
a controller that is communicatively coupled to the image sensor, the attitude sensor and the memory, wherein the controller:
stores measurements from the attitude sensor in the memory,
identifies an object of interest that is located in the at least one previous image that is missing from the current image, and
determines guidance instructions for obtaining a future image based on the operation history and the measurements from the attitude sensor, wherein the guidance instructions are determined to restore the object of interest to the future image.

US Pat. No. 10,116,859

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THAT PRESENT ASSIST INFORMATION TO ASSIST PHOTOGRAPHING

OLYMPUS CORPORATION, Tok...

1. An image processing apparatus comprising:a display;
a memory; and
a hardware processor which, under control of a program stored in the memory, controls execution of processes comprising:
an image acquisition process which acquires image data;
a photographic information acquisition process which acquires photographic information concerning the image data;
a scene/subject discrimination process which discriminates a photographic scene or a subject in the image data based on the photographic information;
an assist information retrieval process which retrieves assist information concerning a lens in accordance with a discrimination result of the photographic scene or the subject;
a lens information acquisition process which acquires lens information which is information indicating a relationship between a corresponding lens and a user;
an assist information priority setting process which sets a priority of pieces of assist information to be displayed on the display in accordance with the acquired lens information; and
a display process which displays the retrieved assist information on the display,
wherein the assist information comprises lens-related assist information which includes at least one of a sample image showing an example corresponding to the lens, a type of the lens used to acquire the sample image, a specification of the lens used to acquire the sample image, and a setting of the lens used to acquire the sample image,
wherein the lens information includes at least one of information indicating whether the corresponding lens is mounted in an imaging apparatus which acquires the image data, information indicating that the corresponding lens has been mounted in the imaging apparatus which acquires the image data, and information indicating whether the user possesses the corresponding lens, and
wherein the lens information further includes information indicative of a time of purchasing the corresponding lens, information indicative of a time of mounting the corresponding lens in the imaging apparatus for a first time, and information indicative of a number of pieces of image data acquired by using the corresponding lens.

US Pat. No. 10,116,858

IMAGING APPARATUS, CONTROL METHOD THEREOF, PROGRAM, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. An imaging apparatus comprising:an optical system that includes a focus adjustment lens that operates to move forward and backward in an optical axis direction in a predetermined movable area;
an imaging element that has an imaging plane capable of being curved and that captures an image of a subject formed via the optical system;
an evaluation unit that determines an evaluation value indicating a degree of in-focus of an image signal output from the imaging element based on the image signal;
an adjustment unit that adjusts, based on the evaluation value, a position of the focus adjustment lens to, among positions in the predetermined movable area, a position with the highest evaluation value; and
a control unit that performs control of the curvature of the imaging plane for correcting an image plane curve in the optical system and that performs control of the curvature of the imaging plane for bringing the image signal into focus, wherein
in a case where the adjustment unit adjusts the position of the focus adjustment lens to an end portion of the predetermined movable area, the control unit performs the control of the curvature of the imaging plane for bringing the image signal into focus on a priority basis.

US Pat. No. 10,116,857

FOCUS ADJUSTMENT APPARATUS, CONTROL METHOD OF FOCUS ADJUSTMENT APPARATUS, AND IMAGING APPARATUS

Canon Kabushiki Kaisha, ...

1. A focus adjustment apparatus comprising:an imaging unit configured to convert light from an optical system to an electric signal by photoelectric conversion and output an image signal for imaging and a pair of parallax image signals in a focus detection area;
a focus detection unit configured to detect a defocus amount using the pair of parallax image signals;
a control unit configured to control adjustment of a focus position of the optical system based on the defocus amount;
a first determination unit configured to determine whether the imaging unit is imaging a subject with a repetitive pattern in the focus detection area; and
a second determination unit configured to determine whether a degree of image blurring is equal to or more than a predetermined degree of blurring using at least one of the image signal for imaging and the pair of parallax image signals, wherein
when the first determination unit determines that the imaging unit is imaging a subject with a repetitive pattern in the focus detection area and the second determination unit determines that the degree of image blurring is equal to or more than the predetermined degree of blurring, the control unit moves a focus lens in the optical system to acquire a new defocus amount.

US Pat. No. 10,116,856

IMAGING APPARATUS AND IMAGING METHOD FOR CONTROLLING A DISPLAY WHILE CONTINUOUSLY ADJUSTING FOCUS OF A FOCUS LENS

Olympus Corporation, Tok...

1. An imaging apparatus that carries out a focus adjustment operation by moving a focus lens based on an image signal of an image sensor for forming a subject image, comprising:a focus controller that generates an evaluation value by extracting given signal components from the image signal, and carries out focus adjustment by calculating position of the focus lens where the evaluation value becomes a peak;
a display that displays an image based on image data generated from the image signal of the image sensor; and
a controller that executes to display an image using the display by generating image data based on an image signal that has been acquired during a focus adjustment operation where continuous focus adjustment is executed by the focus controller, wherein
the controller, as initial image display after commencement of the continuous focus adjustment operation, executes display using the display based on image data acquired when a movement position of the focus lens is within a predetermined vicinity of a predicted in-focus position that is based on a history of at least one past in-focus position of the focus lens when an in-focus position was reached in the past, from among image data that has been acquired during the focus adjustment operation, and
from commencement of the continuous focus adjustment operation until the movement position of the focus lens is within the predetermined vicinity of the predicted in-focus position, an image based on image data generated from the image signal is not displayed on the display section.
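
Illustrative sketch (Python, not from the patent): the display-gating idea in the claim above, suppressing live-view updates after continuous AF starts until the focus lens is within a set vicinity of an in-focus position predicted from past in-focus positions. The prediction (a simple mean of recent history) and all names are assumptions.

```python
from collections import deque
from typing import Deque, Optional


class AfDisplayGate:
    """Decide whether a live-view frame may be displayed during continuous AF.

    Illustrative only: the prediction here is just the mean of the most
    recent in-focus lens positions; the patent does not specify the model.
    """

    def __init__(self, vicinity: float, history_size: int = 5):
        self.vicinity = vicinity
        self.history: Deque[float] = deque(maxlen=history_size)
        self.display_enabled = False   # blanked until the lens nears focus

    def record_in_focus_position(self, lens_position: float) -> None:
        """Store a lens position at which focus was actually achieved."""
        self.history.append(lens_position)

    def predicted_in_focus_position(self) -> Optional[float]:
        if not self.history:
            return None
        return sum(self.history) / len(self.history)

    def start_continuous_af(self) -> None:
        self.display_enabled = False

    def should_display(self, current_lens_position: float) -> bool:
        """True once the lens enters the predicted vicinity; stays True."""
        if self.display_enabled:
            return True
        predicted = self.predicted_in_focus_position()
        if (predicted is not None
                and abs(current_lens_position - predicted) <= self.vicinity):
            self.display_enabled = True
        return self.display_enabled
```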

US Pat. No. 10,116,855

AUTOFOCUS METHOD FOR MICROSCOPE AND MICROSCOPE COMPRISING AUTOFOCUS DEVICE

CARL ZEISS MICROSCOPY GMB...

1. A microscope for imaging a sample, the microscope comprising:an image detector,
an objective, which has a focal plane lying in a sample space and images the sample space onto the image detector, wherein the position of the focal plane in the sample space is adjustable, and
an autofocus device having:
a light modulator which is adapted to generate a luminous modulation object that is intensity-modulated periodically along one direction and to additionally generate a luminous comparison object which extends along the direction of the modulation object,
an autofocus illumination optical unit which projects the modulation object and the comparison object to the sample space such that a projection of the modulation object and a projection of the comparison object are formed in the sample space,
a separate autofocus camera,
an autofocus imaging optical unit which images the projection of the modulation object and the projection of the comparison object onto the separate autofocus camera,
a control device which receives signals of the separate autofocus camera and is adapted:
to determine from the signals of the autofocus camera an intensity distribution which the projection of the image of the modulation object has along the direction, and an intensity distribution, which the projection of the image of the comparison object has along the direction, and
to evaluate the intensity distribution of the image of the projection of the comparison object, and to generate a corrected intensity distribution of the image of the projection of the modulation object based on the evaluated intensity distribution, in which corrected intensity distribution effects of reflectivity variations in the sample space are reduced or eliminated,
wherein the control device is further adapted to generate a focus control signal based on the corrected intensity distribution, which focus control signal defines the adjustment of the location of the focal plane when imaging the sample to the image detector.
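
Illustrative sketch (Python, not from the patent): one plausible reading of the correction step above is to normalize the modulation-object profile by the comparison-object profile, since reflectivity variations affect both in the same way. The claim does not fix the exact operation, so treat this strictly as an assumption.

```python
import numpy as np


def corrected_modulation_profile(modulation: np.ndarray,
                                 comparison: np.ndarray,
                                 eps: float = 1e-6) -> np.ndarray:
    """Divide the modulation profile by the (unmodulated) comparison profile.

    Reflectivity variations multiply both profiles similarly, so the ratio
    suppresses them while keeping the periodic, focus-dependent term.
    """
    return modulation / np.maximum(comparison, eps)


def focus_signal(corrected: np.ndarray) -> float:
    """A simple focus measure: contrast of the corrected periodic profile."""
    return float(corrected.max() - corrected.min())
```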

US Pat. No. 10,116,854

PHOTOELECTRIC CONVERSION APPARATUS, SWITCHING AN ELECTRIC PATH BETWEEN A CONDUCTIVE STATE AND A NON-CONDUCTIVE STATE

1. A photoelectric conversion apparatus, comprising:a sensor cell unit comprising a photoelectric conversion unit, an amplification unit, a select switch, and a reset switch, the amplification unit comprising an input node and an output node;
an output line;
a signal processing unit; and
a control unit,
wherein the output node is electrically connected to the signal processing unit via the select switch and via the output line in this order,
wherein an electrical path between the output node and the output line is switched between a conductive state and a non-conductive state by the select switch,
wherein the input node is electrically connected to the photoelectric conversion unit, and is electrically connected to the signal processing unit via the reset switch and via the output line in this order,
wherein an electric path between the input node and the output line is switched between a conductive state and a non-conductive state by the reset switch,
wherein the control unit is configured to control the select switch to be in a conductive state in a period in which the reset switch is in a conductive state, and
wherein the sensor cell unit further comprises a switch, and a capacitance element electrically connected to the input node via the switch.

US Pat. No. 10,116,852

CONTROL DEVICE, CONTROL SYSTEM, CONTROL METHOD AND PROGRAM

Sony Corporation, Tokyo ...

1. A remote camera control device comprising:a communication circuit configured to transmit an operation request to an external camera device, and to selectively transmit a sensor information to the external camera device; and
a control circuit configured to access a product information of the external camera device, and in a case that the external device does not include a local sensor, cause the communication circuit to transmit the sensor information to the external camera device.

US Pat. No. 10,116,851

OPTIMIZED VIDEO DENOISING FOR HETEROGENEOUS MULTISENSOR SYSTEM

SAGEM DEFENSE SECURITE, ...

1. A method for temporal denoising of a sequence of images, said method comprising:/a/ capturing, by a first sensor, a sequence of first images corresponding to a given scene, each first image being divided into elements each associated with a corresponding area of said first image,
/b/ capturing, by a second sensor of a type different from the type of the first sensor, a sequence of second images corresponding to said given scene, each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding area of said second image, each pair of element and associated area of the second image corresponding to a pair of element and associated area of the corresponding first image, and
/c/ obtaining, by calculation circuitry, a first sequence of images derived from the sequence of first images and a second sequence of images derived from the sequence of second images,
/d/ obtaining, by the calculation circuitry, for each area of each of the images of the first and second sequences of images, an associated weight,
/e/ obtaining, by the calculation circuitry, a first weighted sequence of images, in which each element of each image is equal to the corresponding element of the first sequence of images weighted by the weight associated with the area associated with said corresponding element, and a second weighted sequence of images, in which each element of each image is equal to the corresponding element of the second sequence of images weighted by the weight associated with the area associated with said corresponding element,
/f/ obtaining, by the calculation circuitry, a sequence of enhanced images resulting from combining sequences of images comprising the first weighted sequence of images and the second weighted sequence of images,
/g/ obtaining, by the calculation circuitry, a motion estimation based on the obtained sequence of enhanced images,
/h/ obtaining, by the calculation circuitry, based on the calculated motion estimation, a spatial alignment of the images of a sequence of images to be displayed derived from sequences of images corresponding to the given scene and comprising the sequence of first images and the sequence of second images,
/i/ a temporal denoising, by the calculation circuitry, based on the determined spatial alignment of the sequence of images to be displayed.
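
Illustrative sketch (Python, not from the patent): steps /d/ through /f/ of the claim above amount to a per-area weighted combination of the two derived sequences. The sketch assumes per-pixel weight maps and a normalized weighted sum; the weighting rule itself is not specified by the claim.

```python
import numpy as np


def combine_weighted_sequences(seq1: np.ndarray, w1: np.ndarray,
                               seq2: np.ndarray, w2: np.ndarray,
                               eps: float = 1e-6) -> np.ndarray:
    """Combine two image sequences element-wise using per-area weights.

    seq1, seq2: arrays of shape (num_frames, height, width)
    w1, w2:     weight maps broadcastable to the same shape
    Returns the enhanced sequence used for motion estimation (step /g/).
    The normalized weighted sum is an illustrative choice of combination.
    """
    weighted1 = seq1 * w1          # step /e/: first weighted sequence
    weighted2 = seq2 * w2          # step /e/: second weighted sequence
    return (weighted1 + weighted2) / np.maximum(w1 + w2, eps)   # step /f/
```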

US Pat. No. 10,116,850

METHOD AND AN ELECTRONIC DEVICE FOR AUTOMATICALLY CHANGING SHAPE BASED ON AN EVENT

Samsung Electronics Co., ...

1. A method for automatically changing a shape of a flexible electronic device, the method comprising:identifying, by the flexible electronic device, at least one event triggered in the flexible electronic device; and
changing, by the flexible electronic device, the shape of a surface of the flexible electronic device, according to the at least one identified event,
wherein the changing of the shape of the surface of the flexible electronic device comprises changing, if the at least one event is associated with at least one camera, the shape of the flexible electronic device such that the at least one camera is positioned to at least one side of the flexible electronic device, according to the at least one event associated with the at least one camera.

US Pat. No. 10,116,849

LENS DRIVING ACTUATOR

LG INNOTEK CO., LTD., Se...

1. A lens driving actuator comprising:a housing;
a bobbin disposed inside the housing and comprising first to third lateral surfaces, first corner surface disposed between the first lateral surface and the second lateral surface and second corner surface disposed between the second lateral surface and the third lateral surface;
a first magnet disposed on the first lateral surface of the bobbin;
a second magnet disposed on the first corner surface of the bobbin;
a third magnet disposed on the second corner surface of the bobbin;
a first coil disposed on the housing and facing the first magnet;
a second coil disposed on the housing and facing the second magnet; and
a third coil disposed on the housing and facing the third magnet,
wherein an outer surface of the second magnet facing the second coil and an outer surface of the third magnet facing the third coil are formed as a plane surface, and
wherein an imaginary plane surface extended from the outer surface of the second magnet is perpendicular to an imaginary plane surface extended from the outer surface of the third magnet.

US Pat. No. 10,116,848

ILLUMINATION AND IMAGING SYSTEM FOR IMAGING RAW SAMPLES WITH LIQUID IN A SAMPLE CONTAINER

Screen Holdings Co., Ltd....

1. An imaging apparatus that images a raw sample as an imaging object carried together with liquid in a sample container, the apparatus comprising:a holder that holds the sample container;
an imaging optical system, arranged to face the sample container held by the holder, that has an object-side hypercentric property;
an imaging element that images an image of the imaging object focused by the imaging optical system; and
an illuminator that illuminates the imaging object from a side opposite to the imaging optical system across the sample container held by the holder, wherein:
the illuminator includes a light source and an illumination optical system that causes light emitted from the light source to be incident on a sample surface where the imaging object is present;
the illumination optical system has an optical axis coaxial with that of the imaging optical system and an exit pupil position located between the illumination optical system and the imaging optical system;
the holder arranges the sample surface between the exit pupil position and the imaging optical system;
the sample container contains a well with a bottom surface having optical transparency;
the well carries the raw sample as the imaging object together with the liquid;
a size of an imaging field of view of the imaging apparatus is smaller than a size of the bottom surface of the well; and
the imaging field of view covers only a central area of the well, the central area being distant from a peripheral edge of the well.

US Pat. No. 10,116,845

IMAGING DEVICE

Ricoh Company, Ltd., Tok...

1. An imaging device comprising:an imaging unit having an imager configured to image a subject, and a holder configured to hold the imager at one end thereof;
a housing including a recess formed in a first surface thereof, and configured to house the imaging unit, the housing being a housing for a video conferencing device; and
a hinge having a hinge member housed in the recess pivotally coupled to the housing around an axle extending approximately in parallel with the first surface inside the recess of the housing, wherein
the imaging unit pivots around the axle via the hinge between a housing position at which the imaging unit is housed inside the recess of the housing and a projecting position at which the imaging unit is projected from the recess of the housing,
wherein:
the imager includes an imaging element having a rectangular shape with a 16:9 aspect ratio, a lens configured to introduce external light into the imaging element, and a lens hood mounted at an outer periphery of the lens, the imaging element disposed inside of the housing,
the lens hood projects from a surface of the lens by a distance to allow the imager to introduce light for imaging a subject from the lens into the imaging element and to block unnecessary light introduced from the lens into the imaging element,
a shape of the lens hood is substantially rectangular, and has an aspect ratio substantially the same as the aspect ratio of the imaging element, and
the substantially rectangular shape of the lens hood has substantially the same shape as the rectangular shape of the imaging element in both a horizontal and a vertical dimension.

US Pat. No. 10,116,844

CAMERA MODULE HAVING BASE WITH METAL SUBSTRATE, CONDUCTIVE LAYERS AND INSULATION LAYERS

TDK TAIWAN CORP., Yangme...

1. A camera module, comprising:a lens driving mechanism;
a lens unit, disposed on the lens driving mechanism;
a circuit board, comprising:
a metal member;
a metal wire;
an insulation layer, disposed between the metal member and the metal wire; and
an image sensor, disposed on the circuit board and electrically connected to the metal wire, wherein the lens driving mechanism can drive the lens unit to move relative to the image sensor, and the image sensor can catch the light through the lens unit; and
a base, disposed between the image sensor and the lens unit, comprising:
a metal substrate;
a first conductive layer, electrically connected to the lens driving mechanism; and
a first insulation layer, disposed between the metal substrate and the first conductive layer.

US Pat. No. 10,116,842

GATHERING RANGE AND DIMENSIONAL INFORMATION FOR UNDERWATER SURVEYS

CATHX RESEARCH LTD., Cou...

1. An underwater survey system for gathering range and 3D dimensional information of subsea objects, the system comprising:a camera configured to capture images of a subsea scene; and
one or more reference projection light sources configured to project one or more structured light beams;
the camera configured to capture a sequence of images of each of a plurality of fields of view within the scene, where each of the plurality of fields of view of the scene is illuminated by one or more of the light sources, and wherein the camera and light sources are synchronized so that each time an image is acquired, a specific configuration of light source parameters and camera parameters is used;
the one or more reference projection light sources having a fixed distance from the camera and a fixed orientation in relation to the camera.

US Pat. No. 10,116,841

RELATION TO UNDERWATER IMAGING FOR UNDERWATER SURVEYS

CATHX RESEARCH LTD., Cou...

1. A method of capturing sequential underwater image data of a scene for use in an underwater survey, the method operating in an underwater imaging system comprising a light module, an image processing module, a camera module, and a sequential imaging module, the light module comprising a plurality of light classes each light class having one or more light sources, wherein the steps of the method comprise:the camera module capturing a sequence of images of each of a plurality of overlapping fields of view within the scene, where the scene is illuminated by one or more of the light classes of the light module;
the sequential imaging module controlling the operational parameters of the light module and camera module to adjust lighting and imaging parameters between individual image captures such that a time relationship of the lighting and imaging parameters to individual image captures in the sequence of images in each of the plurality of overlapping fields of view is predetermined, and a time relationship of the lighting and imaging parameters to image captures with different light classes in each of the plurality of overlapping fields of view is predetermined; and
the image processing module concatenating the individual images to form a set of images that are related to each other.

US Pat. No. 10,116,840

ARRAY CAMERA, ELECTRICAL DEVICE, AND METHOD FOR OPERATING THE SAME

LG ELECTRONICS INC., Seo...

1. A method for operating an array camera comprising a plurality of camera modules, the method comprising:acquiring images through the camera modules;
when a size of a first object present in the acquired images is equal to or greater than a predetermined size, extracting a first image acquired by a first camera module and a second image acquired by a second camera module, the first camera module and the second camera module being two adjacent camera modules selected from among the plurality of camera modules;
calculating first distance information regarding the first object based on the first image and the second image; and
when a size of a second object present in the acquired images is less than the predetermined size, extracting a third image acquired by a third camera module and a fourth image acquired by a fourth camera module, the third camera module and the fourth camera module being two spaced apart camera modules selected from among the plurality of camera modules;
calculating second distance information regarding the second object based on the third image and the fourth image.
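
Illustrative sketch (Python, not from the patent): the pair-selection logic of the claim above, with large objects handled by an adjacent pair (short baseline) and small objects by a spaced-apart pair (long baseline), followed by the usual stereo relation distance = focal length x baseline / disparity. The module indices, threshold, and disparity input are assumptions.

```python
from typing import Tuple


def select_camera_pair(object_size_px: int,
                       size_threshold_px: int,
                       num_modules: int) -> Tuple[int, int]:
    """Pick two camera modules of a linear array based on object size.

    Large object  -> adjacent modules (short baseline).
    Small object  -> outermost modules (long baseline, better depth
    resolution for distant or small objects). Indices are illustrative.
    """
    if object_size_px >= size_threshold_px:
        return (0, 1)                      # two adjacent modules
    return (0, num_modules - 1)            # two spaced-apart modules


def stereo_distance(focal_length_px: float,
                    baseline_m: float,
                    disparity_px: float) -> float:
    """Standard pinhole stereo distance estimate."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```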

US Pat. No. 10,116,839

METHODS FOR CAMERA MOVEMENT COMPENSATION FOR GESTURE DETECTION AND OBJECT RECOGNITION

Atheer Labs, Inc., Mount...

1. A method, comprising:receiving a video stream comprised of a sequential series of frames from a camera, wherein the video stream is captured at a frame rate;
receiving motion data from a motion sensor that is physically associated with the camera to detect motion of the camera, wherein the motion data is captured at a sampling rate;
associating a first frame of the sequential series of frames with a portion of the motion data that is captured approximately contemporaneously with the first frame, the portion of the motion data indicative of an amount of movement of the camera when the camera captured the first frame;
when the sampling rate is greater than the frame rate, aggregating a first frame sample of the motion data and a second sample of the motion data captured between the first frame of the sequential series of frames and a second frame of the sequential series of frames to obtain an aggregated movement value representative of the motion of the camera when the camera captured the first frame;
comparing the aggregated movement value with a first threshold for the amount of movement of the camera;
when the aggregated movement value does not exceed the first threshold, accepting the first frame from the video stream; and
when the aggregated movement value exceeds the first threshold, rejecting the first frame from the video stream.
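
Illustrative sketch (Python, not from the patent): the accept/reject decision in the claim above. When the motion sensor runs faster than the camera, the motion samples that fall between two consecutive frames are aggregated and compared against a threshold; summing magnitudes is an illustrative aggregation, which the claim does not specify.

```python
from typing import List, Tuple


def filter_frames(frames: List[object],
                  frame_timestamps: List[float],
                  motion_samples: List[Tuple[float, float]],
                  movement_threshold: float) -> List[object]:
    """Return only the frames captured while camera motion was small.

    frames:            decoded video frames (any object)
    frame_timestamps:  capture time of each frame, in seconds
    motion_samples:    (timestamp, movement_magnitude) pairs from the sensor
    The aggregation below sums the magnitudes of all samples between one
    frame and the next, assuming the sensor rate exceeds the frame rate.
    """
    accepted = []
    for i, frame in enumerate(frames):
        start = frame_timestamps[i]
        end = frame_timestamps[i + 1] if i + 1 < len(frames) else float("inf")
        aggregated = sum(mag for ts, mag in motion_samples
                         if start <= ts < end)
        if aggregated <= movement_threshold:
            accepted.append(frame)       # camera was steady: keep the frame
        # else: reject the frame (too much movement)
    return accepted
```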

US Pat. No. 10,116,838

METHOD AND APPARATUS FOR PROVIDING SIGNATURES OF AUDIO/VIDEO SIGNALS AND FOR MAKING USE THEREOF

GRASS VALLEY CANADA, Tor...

1. A method for setting a signal delay based on generated video signatures representative of a content of a video signal, the method comprising:for each of a first video signal and second video signal comprising the first signal after at least one transmission operation:
selecting, by a signature extraction unit, a first subset of pixels of a first image of the video signal and a corresponding second subset of pixels of a second image of the video signal, each of the first subset and second subset excluding one or more pixels of the corresponding image,
incrementing, by a comparator of the signature extraction unit for each pixel of the first subset of pixels, a counter value responsive to a difference between pixel data of a pixel of the first subset of pixels and pixel data of a corresponding pixel of the second subset of pixels exceeding a threshold,
dividing, by the signature extraction unit, the counter value by a value proportional to the number of the plurality of pixels, and
generating, by the signature extraction unit, a video signature comprising the divided counter value;
identifying a delay between the first video signal and second video signal based on a comparison of the video signature of the first video signal and the video signature of the second video signal; and
automatically setting a signal delay based on the identified delay.
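
Illustrative sketch (Python, not from the patent): the signature construction and delay identification in the claim above. The sketch assumes grayscale frames, uses every pixel as the subset, and finds the delay by best alignment of the two signature sequences; these specifics are illustrative choices.

```python
import numpy as np


def frame_signature(prev_frame: np.ndarray,
                    next_frame: np.ndarray,
                    pixel_threshold: float) -> float:
    """Fraction of pixels whose value changed by more than the threshold.

    Mirrors the claim: count exceedances, then divide by a value
    proportional to the number of pixels considered.
    """
    diff = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    changed = diff > pixel_threshold
    return changed.sum() / changed.size


def signature_sequence(frames: np.ndarray, pixel_threshold: float) -> np.ndarray:
    """Signatures for consecutive frame pairs of a clip (length n-1)."""
    return np.array([frame_signature(frames[i], frames[i + 1], pixel_threshold)
                     for i in range(len(frames) - 1)])


def identify_delay(sig_a: np.ndarray, sig_b: np.ndarray, max_delay: int) -> int:
    """Delay (in frames) of stream B relative to stream A.

    Chooses the shift that minimizes the mean absolute signature difference;
    this matching criterion is an assumption, not taken from the patent.
    """
    best_delay, best_cost = 0, float("inf")
    for d in range(max_delay + 1):
        n = min(len(sig_a), len(sig_b) - d)
        if n <= 0:
            break
        cost = np.mean(np.abs(sig_a[:n] - sig_b[d:d + n]))
        if cost < best_cost:
            best_delay, best_cost = d, cost
    return best_delay
```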

US Pat. No. 10,116,837

SYNCHRONIZED LOOK-UP TABLE LOADING

Hewlett-Packard Developme...

1. A printing device comprising:a processor to process a print job that is received from a computing device;
processor memory operatively connected to the processor and comprising multiple buffers, each buffer to store a look-up table;
additional memory configured to store a plurality of look-up tables for processing the print job; and
a memory controller operatively connected to the additional memory, the memory controller to:
in response to processing of the print job reaching a buffer trigger row of the print job, use look-up metadata stored in the additional memory to identify a next look-up table from among the plurality of look-up tables, wherein the processing of the print job is performed using an initial look-up table of the plurality of look-up tables;
dynamically load the next look-up table into a next buffer of the processor memory while processor continues to process the print job using the initial look-up table in a current buffer of the processor memory; and
dynamically load the next look-up table into a next buffer of the processor memory while the processor continues to process the print job using the initial look-up table in a current buffer of the processor memory; and
continue processing the print job using the next look-up table after a target row of the print job is reached.
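
Illustrative sketch (Python, not from the patent): a small simulation of the double-buffered look-up-table scheme in the claim above. While rows are processed with the current table, the next table is loaded into the other buffer at a trigger row and switched in at a target row. Row numbers, table contents, and the loader callable are hypothetical.

```python
from typing import Callable, Dict, List


def process_print_job(num_rows: int,
                      triggers: Dict[int, str],     # trigger row -> next LUT id
                      switch_rows: Dict[int, str],  # target row  -> LUT id in use
                      load_lut: Callable[[str], List[int]],
                      initial_lut: List[int]) -> List[str]:
    """Simulate row-by-row processing with two LUT buffers.

    `load_lut` stands in for the memory controller fetching a table from
    the additional memory; here it is simply supplied by the caller.
    """
    buffers = {"current": initial_lut, "next": None}
    log: List[str] = []
    for row in range(num_rows):
        if row in triggers:
            # Preload the next table without interrupting processing.
            buffers["next"] = load_lut(triggers[row])
            log.append(f"row {row}: preloaded LUT '{triggers[row]}'")
        if row in switch_rows and buffers["next"] is not None:
            buffers["current"], buffers["next"] = buffers["next"], None
            log.append(f"row {row}: switched to LUT '{switch_rows[row]}'")
        # ... process `row` using buffers["current"] ...
    return log
```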

US Pat. No. 10,116,836

IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, LENS APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus comprising:an acquirer configured to acquire information related to a lateral chromatic aberration; and
a corrector configured to correct an image to reduce the lateral chromatic aberration based on the information related to the lateral chromatic aberration,
wherein the information related to the lateral chromatic aberration includes a first component related to a design value, a second component related to a manufacturing error,
wherein each of the first component and the second component is a rotationally symmetric component.
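
Illustrative sketch (Python, not from the patent): because both components in the claim above are rotationally symmetric, a correction can be expressed as a radial rescaling whose magnitude is the sum of a design term and a manufacturing-error term. The polynomial form, nearest-neighbour resampling, and per-channel application below are assumptions.

```python
import numpy as np


def correct_lateral_chromatic_aberration(channel: np.ndarray,
                                         design_coeffs,
                                         error_coeffs) -> np.ndarray:
    """Resample one colour channel to undo a rotationally symmetric shift.

    The radial magnification error is modelled as polyval(design, r) +
    polyval(error, r), with r the normalized distance from the image centre.
    Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.indices((h, w), dtype=float)
    dy, dx = y - cy, x - cx
    r = np.sqrt(dy ** 2 + dx ** 2) / np.hypot(cy, cx)   # normalized radius

    scale = 1.0 + np.polyval(design_coeffs, r) + np.polyval(error_coeffs, r)
    src_y = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
    return channel[src_y, src_x]
```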

US Pat. No. 10,116,835

INFORMATION PROCESSING APPARATUS AND METHOD THAT MANAGE LOG INFORMATION

Ricoh Company, Ltd., Tok...

1. An information processing apparatus, comprising:a first memory; and
a processor coupled to the first memory, and configured to
obtain log information related to a job having been executed in response to an instruction by a user, the log information including a management code selected by the user and user identification information of the user;
modify the obtained log information by modifying the user identification information included in the obtained log information such that the user is not specified by the log information; and
store the modified log information in a second memory,
wherein the modified user identification information is included in the modified log information stored in the second memory, and
wherein the user identification information that is not modified is not included in the modified log information stored in the second memory.
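
Illustrative sketch (Python, not from the patent): one way the modification step in the claim above might look is to replace the user identification in each log record with an irreversible value (here a salted hash), so the stored log can no longer identify the user. The record layout and the use of hashing are assumptions.

```python
import hashlib
from typing import Dict


def anonymize_log_entry(log_entry: Dict[str, str], salt: str) -> Dict[str, str]:
    """Return a copy of the log entry with the user ID replaced.

    The management code selected by the user is kept; only the user
    identification information is modified so the user cannot be specified
    from the stored log.
    """
    modified = dict(log_entry)
    original_id = modified["user_id"]
    digest = hashlib.sha256((salt + original_id).encode("utf-8")).hexdigest()
    modified["user_id"] = digest[:16]     # modified identification info
    return modified


# Usage sketch (hypothetical record):
# anonymize_log_entry(
#     {"job": "print", "management_code": "DEPT-42", "user_id": "alice"},
#     salt="per-device-secret")
```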

US Pat. No. 10,116,834

IMAGE READING APPARATUS AND METHOD, WITH A MOVABLE LIGHT REFLECTING SURFACE

PFU LIMITED, Kahoku-Shi,...

1. An image reading apparatus comprising:an image capturing device for capturing an image of a document;
a light source;
a movable light reflecting surface having a white color;
a driving device for moving the light reflecting surface between a first position at which the light reflecting surface reflects light from the light source and a second position at which the light reflecting surface does not reflect light from the light source; and
a control circuit for controlling the driving device to move the light reflecting surface,
wherein the control circuit determines whether an image obtained by the image capturing device is a white reference image and determines, as the first position, a position where the white reference image is detected when the control circuit moves the light reflecting surface from the second position to the first position; and determines, as the second position, a position where the light reflecting surface is moved by a predetermined distance from the first position without using an image obtained by the image capturing device, when the control circuit moves the light reflecting surface from the first position to the second position.

US Pat. No. 10,116,833

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM FOR ANIMATION DISPLAY

Sony Semiconductor Soluti...

1. An image processing device comprising:a memory unit storing image data;
a reduction scaler unit configured to reduce image data of an input image or maintain a current size of the image data, and store the image data into the memory unit; and
an enlargement scaler unit configured to enlarge the image data stored in the memory unit or maintain a current size of the image data, and output the image data as image data of an output image,
wherein
the reduction scaler unit converts a resolution of the input image to an intermediate resolution in accordance with first parameters related to an image to be supplied from the enlargement scaler unit, the intermediate resolution being a resolution for performing writing on the memory unit, and
the enlargement scaler unit converts the intermediate resolution of a memory-held image read from the memory unit to a resolution of the output image, in accordance with second parameters related to an image to be supplied from the reduction scaler unit.

US Pat. No. 10,116,832

INFORMATION PROCESSING DEVICE, CONTROL METHOD, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. A control method of an information processing device that communicates with a communication device and includes at least one processor configured to execute the control method, the method comprising:accepting a predetermined operation by a user;
not executing control to execute newly transmission processing for transmitting wirelessly, to the communication device by a first communication standard, information about an external device outside the communication device and outside the information processing device, and communicating with the communication device via the external device in a case where the predetermined operation is accepted, in a state that the external device is connected to the information processing device by a second communication standard different from the first communication standard and the communication device is connected to the external device by the second communication standard, and
communicating with the communication device via the external device after the control to execute newly the transmission processing is executed based on the predetermined operation in a case where the predetermined operation is accepted, in a state that the communication device is not connected to the external device by the second communication standard, the external device being connected to the information processing device by the second communication standard
wherein after the control to execute newly the transmission processing is executed, the communication device connects to the external device by the second communication standard based on the information about the external device, the information being transmitted to the communication device as a result of the transmission processing being executed newly.

US Pat. No. 10,116,831

MANAGEMENT SERVER CONFIGURED TO EXTRACT INFORMATION INDICATING AN AVAILABILITY OF AN IDENTIFIED IMAGE FORMING APPARATUS, INFORMATION PROCESSING METHOD, SYSTEM AND RECORDING MEDIUM

Ricoh Company, Ltd., Tok...

1. A management server comprising:a memory and a processor, the memory containing computer readable code that, when executed by the processor, configures the processor to,
authenticate a user of at least one image forming apparatus based on information on the user from an information processing apparatus,
accumulate print data from the information processing apparatus,
acquire availability information and history information from the at least one image forming apparatus, the availability information indicating whether the at least one image forming apparatus is online and idle, and the history information indicating a tally of past usage of the at least one image forming apparatus by the user,
generate a preferred list of preferred image forming apparatuses from among a plurality of image forming apparatuses connected to the management server based on the availability information and the history information,
acquire device information from the preferred image forming apparatuses,
transmit the device information to the information processing apparatus prior to receiving a printing request to print the accumulated print data such that the user is provided with the device information of the preferred image forming apparatuses prior to executing location-free (LF) printing from a user interface of one of the plurality of image forming apparatuses, and
perform the location-free (LF) printing by transmitting the accumulated print data to the one of the plurality of image forming apparatuses in response to receipt of the printing request from the one of the plurality of image forming apparatuses.
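
Illustrative sketch (Python, not from the patent): how the preferred list in the claim above could be generated, keeping only devices reported as online and idle and ranking them by how often the user has used them. Record fields and the ranking rule are illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DeviceStatus:
    device_id: str
    online: bool
    idle: bool


def preferred_devices(statuses: List[DeviceStatus],
                      usage_tally: Dict[str, int],
                      limit: int = 5) -> List[str]:
    """Return device ids ordered by the user's past usage, available first.

    `usage_tally` maps device id to how many times this user has used it
    (the claim's history information); availability filters the candidates.
    """
    available = [s for s in statuses if s.online and s.idle]
    ranked = sorted(available,
                    key=lambda s: usage_tally.get(s.device_id, 0),
                    reverse=True)
    return [s.device_id for s in ranked[:limit]]
```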

US Pat. No. 10,116,829

INFORMATION PROVIDING SYSTEM BY DATA RELAYING APPLICATION

STAR MICRONICS CO., LTD.,...

1. An information providing system using a data relaying application comprising a printing application which receives a first data generated by another application executed by a mobile, converts the first data into a second data for printing, and outputs the second data to a printer, the information providing system comprising:an application activating unit which issues an application binding command to activate the printing application in response to a print instruction given by a user of the mobile, the application binding command designating the printing application and including a predetermined information acquiring command designated according to an information acquisition parameter set by the user of the mobile;
a printing execution controlling unit of the printing application which controls execution of printing by the printer;
a print result information acquiring unit of the printing application which acquires a print result information from the printer, the print result information representing success or failure of the execution of printing;
an additional information acquiring unit of the printing application which acquires an additional information on at least one of the printer and the printing application according to the predetermined information acquiring command included in the application binding command; and
an information providing unit of the printing application which provides the mobile with the print result information acquired by the print result information acquiring unit and the additional information acquired by the additional information acquiring unit by displaying the print result information and the additional information together on a screen of the mobile.

US Pat. No. 10,116,828

IMAGE COMMUNICATION APPARATUS, CONTROL METHOD THEREOF, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image communication apparatus capable of image communication to an external line and an extension line, the apparatus comprising:a memory device that stores a set of instructions; and
at least one processor that executes the instructions to:
designate a transmission destination of image data,
append an external line number if the designated transmission destination is the external line,
transmit the image data in accordance with one of the designated transmission destination and the transmission destination to which the external line number is appended,
when a transmission of the image data is performed in accordance with the transmission destination to which the external line number is appended, individually record, as history information for the transmission, the designated transmission destination and the external line number,
set a number as the external line number;
register the transmission destination included in the history information to an address book if the external line number included in the history information and the set number match, and
display that the external line number is changed if the external line number included in the history information and the set number do not match when the history information is selected for the address book.

US Pat. No. 10,116,827

IMAGE FORMING APPARATUS

KYOCERA Document Solution...

1. An image forming apparatus comprising:a reading section configured to read a plurality of images from a document; and
an image forming section configured to form the plurality of read images on a sheet, wherein
the plurality of images include a first image having a first color and a second image having a second color that is different from the first color,
the image forming section forms the first image on a first main side of the sheet and the second image on a second main side of the sheet, the first main side being one of two opposite sides of the sheet, the second main side being the other of the two opposite sides of the sheet,
the first image shows a question,
the second image shows an answer to the question,
when the sheet is folded such that a part of the sheet covers the second image, the second image is visible at the first main side through the part of the sheet,
the image forming section forms the first image on the first main side and the second image on the second main side in such a manner that the first image and the second image visible at the first main side through the part of the sheet when the sheet is folded such that the part of the sheet covers the second image do not overlap each other and form the same content as the plurality of images, and
the image forming section forms a third image in a region of the first main side to prevent a mirror image of the second image from being visible at the first main side through the sheet when the sheet is not folded, the region of the first main side overlapping a region where the second image is formed, the third image covering and hiding the entirety of the mirror image of the second image.

US Pat. No. 10,116,826

METHOD AND APPARATUS FOR AUTOMATICALLY RESUMING A PRINT JOB FROM PORTABLE MEMORY DEVICE

Xerox Corporation, Norwa...

1. A method for automatically printing a document in a document printing system, comprising:detecting, by a processing device of a print device, a trigger event by determining that a portable memory device has become communicatively connected to a port of the print device;
upon detecting the trigger event, by the processing device:
accessing a document file stored in the portable memory device, wherein the document file comprises a digital representation of a document to be printed,
detecting whether a configuration file associated with the document file is stored in the portable memory device,
if the configuration file exists in the portable memory device, automatically printing the document file by:
determining that the configuration file contains information about an interrupted print job of the document file,
extracting, from the configuration file, at least a page number of the document at which an interruption of the interrupted print job occurred, and
causing a print engine of the print device to automatically resume the interrupted print job from the page number of the document at which the interruption occurred.
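
Illustrative sketch (Python, not from the patent): the resume logic in the claim above is essentially: on detecting the memory device, look for a configuration file next to the document, read the page at which the interruption occurred, and restart printing from there. The JSON file name, its keys, and the print callable are hypothetical.

```python
import json
from pathlib import Path
from typing import Callable, Optional


def resume_interrupted_job(mount_point: Path,
                           print_from_page: Callable[[Path, int], None],
                           config_name: str = "printjob.json") -> Optional[int]:
    """Resume an interrupted print job found on a just-connected memory device.

    `print_from_page(document, page)` stands in for the print engine call.
    Returns the page the job was resumed from, or None if nothing to resume.
    """
    config_path = mount_point / config_name          # hypothetical file name
    if not config_path.exists():
        return None

    config = json.loads(config_path.read_text())
    if not config.get("interrupted", False):
        return None

    document = mount_point / config["document"]      # e.g. "report.pdf"
    resume_page = int(config["interrupted_at_page"])
    print_from_page(document, resume_page)           # resume from that page
    return resume_page
```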

US Pat. No. 10,116,825

CONTROL SERVER

Brother Kogyo Kabushiki K...

1. A control server configured to control a multifunction peripheral capable of executing scan and print, the control server comprising:a processor; and
a first memory storing computer-readable instructions therein, the computer-readable instructions, when executed by the processor, causing the control server to:
receive, from the multifunction peripheral, instruction information indicating that a first instruction has been accepted at the multifunction peripheral, the instruction information including information for executing both processes of a scan saving process for saving scan data representing a document image in a destination apparatus and a copying process for copying the document image;
in a case where the instruction information is received,
send a scan instruction to the multifunction peripheral, the scan instruction being for causing the multifunction peripheral to execute a generating process included in the both processes, the generating process being for generating first scan data representing the document image;
send a sending instruction to the multifunction peripheral, the sending instruction being for causing the multifunction peripheral to execute a sending process included in the copying process, the sending process being for sending the first scan data to a predetermined server configured separately from the multifunction peripheral;
send a print instruction to the multifunction peripheral, the print instruction being for causing the multifunction peripheral to execute a receiving process and a printing process that are included in the copying process, the receiving process being for receiving, from the predetermined server, second scan data obtained using the first scan data, and the printing process being for printing the document image represented by the second scan data; and
execute a predetermined process included in the scan saving process, the predetermined process being for saving, in the destination apparatus, third scan data obtained using the first scan data.

US Pat. No. 10,116,824

METHOD AND IMAGE FORMING APPARATUS FOR GENERATING WORKFLOW OF IMAGE FORMING JOB

S-Printing Solution Co., ...

1. A method of generating a workflow of an image forming job, the method comprising:providing a first list of selectable first functions;
receiving a user input for selecting a first function from the first list;
running an application for executing the selected first function to provide a user interface (UI) for receiving setting values for the selected first function;
storing the received setting values for the selected first function;
determining output data of the selected first function;
determining, based on the output data of the selected first function, a second list of selectable second functions that are continuously executable to the selected first function;
providing the second list;
receiving a user input for selecting a second function from the second list; and
generating a workflow to sequentially execute the selected first function based on the received setting values for the selected first function and the selected second function,
wherein the second list of selectable second functions is determined based on whether input data of the second functions corresponds to the output data of the selected first function.
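
Illustrative sketch (Python, not from the patent): the key determination in the claim above, deciding which second functions are continuously executable after the first, reduces to matching each candidate's input data type against the first function's output data type. The function registry and type names below are assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class JobFunction:
    name: str
    input_type: str      # e.g. "none", "image", "pdf"
    output_type: str     # e.g. "image", "pdf", "none"


def second_function_candidates(first: JobFunction,
                               registry: List[JobFunction]) -> List[JobFunction]:
    """Functions whose input data corresponds to the first function's output."""
    return [f for f in registry if f.input_type == first.output_type]


# Usage sketch (hypothetical functions):
# scan = JobFunction("scan", "none", "image")
# registry = [JobFunction("email", "image", "none"),
#             JobFunction("print", "pdf", "none"),
#             JobFunction("to_pdf", "image", "pdf")]
# second_function_candidates(scan, registry)   # -> email, to_pdf
```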

US Pat. No. 10,116,823

CLEANING DEVICE THAT REMOVES TONER AND PAPER POWDER, AND IMAGE FORMING APPARATUS

KYOCERA Document Solution...

1. A cleaning device comprising:a removal roller rotating around a first rotary shaft extending widthwise of an image carrier while making contact with the image carrier to remove a toner and a paper powder remaining on the image carrier;
a collecting roller making contact with the removal roller while rotating around a second rotary shaft parallel to an axial direction of the first rotary shaft to collect the toner and the paper powder on the removal roller;
a blade extending in parallel to an axial direction of the second rotary shaft, the blade making contact with the collecting roller to scrape off the toner and the paper powder on the collecting roller; and
a toner storage section being partitioned from the removal roller and the collecting roller by a seal extending in parallel to the first rotary shaft and the second rotary shaft, the toner storage section storing the toner and the paper powder collected by the collecting roller and scraped off by the blade, wherein
the removal roller and the collecting roller have no relationship such that a rotation speed or a diameter of one of the removal roller and the collecting roller is an integral multiple of a rotation speed or a diameter of another one of the removal roller and the collecting roller,
provided on an outer circumferential surface of the collecting roller in a circumferential direction of the collecting roller are: a first outer circumferential region having a predefined first surface roughness and extending in the axial direction of the second rotary shaft; and a second outer circumferential region having a greater predefined second surface roughness than the first surface roughness and extending in the axial direction of the second rotary shaft, and
a width of the second outer circumferential region in the circumferential direction is smaller than a width of the first outer circumferential region in the circumferential direction.

US Pat. No. 10,116,822

OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS INCLUDING THE SAME

KYOCERA DOCUMENT SOLUTION...

1. An optical scanning device comprising:a housing having light emitting ports extending in a predetermined direction;
a transparent cover that closes the light emitting ports;
a cleaning member that slidably contacts with a surface of the transparent cover to clean the surface;
a holding member that holds the cleaning member; and
a movement mechanism that allows the holding member to reciprocally move along the transparent cover in the predetermined direction,
wherein the holding member has an inside/outside double structure including an inner boss member that receives power from the movement mechanism and an outer boss member that internally receives the inner boss member and is longer than the inner boss member, and
the outer boss member reaches a moving end and stops earlier than the inner boss member, and subsequently the inner boss member moves in the outer boss member, reaches the moving end and stops.

US Pat. No. 10,116,821

IMAGE FORMING APPARATUS WHICH CAN REDUCE POWER CONSUMPTION

Konica Minolta, Inc., Ch...

1. An image forming apparatus comprising:a hardware circuit for image forming, which includes an image forming unit to form images and an image forming control unit to control the image forming unit, and
a hardware circuit for communication, which includes a communication unit to perform communication with external devices and a communication control unit to control the communication unit, wherein
both the circuit for image forming and the circuit for communication have a common IP (Internet Protocol) address as an IP address published to users of the image forming apparatus, and
the circuit for communication further includes an electric power control unit to control electric power supply to the circuit for image forming and electric power supply to the circuit for communication, being independent of each other.

US Pat. No. 10,116,820

IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING SAME, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image forming apparatus comprising:a processor; and
a memory storing instructions, when executed by the processor, causing the image forming apparatus to function as:
an input unit configured to input image data;
a printing unit configured to print an image based on the image data input by the input unit;
a control unit configured to determine if the image forming apparatus is operating in a first mode or a second mode,
wherein if the control unit determines the image forming apparatus is operating in a first mode then perform control to print by the printing unit an image generated from the image data input by the input unit, and
wherein if the control unit determines the image forming apparatus is operating in a second mode then print by the printing unit an image obtained by adding a predetermined pattern image to the image generated from the image data input by the input unit; and
an operation unit including a display and accepting unit,
wherein if the image forming apparatus operates in the second mode, display a confirmation screen to a user prior to printing to accept selection regarding whether to perform printing in the second mode in response to operation performed to start printing the image.

US Pat. No. 10,116,819

DOCUMENT CONVEYING APPARATUS

PFU LIMITED, Kahoku-Shi,...

1. A document conveying apparatus comprising:a document tray;
a driving module for generating a first driving force;
a first conveying roller for conveying a document stacked at a lowermost position, which is one of a plurality of documents stacked on the document tray;
a second conveying roller, provided at a downstream side with respect to the first conveying roller in a document conveying direction for conveying said document stacked at the lowermost position;
a separation roller provided at the downstream side with respect to the second conveying roller in the document conveying direction for separating the document from the plurality of stacked documents;
a driving force transmission mechanism for transmitting the first driving force to a driving shaft of the first conveying roller, a driving shaft of the second conveying roller, and a driving shaft of the separation roller;
a first blocking mechanism provided between the first conveying roller and the driving shaft of the first conveying roller for blocking a second driving force transmitted to the first conveying roller by the driving shaft of the first conveying roller so that the second driving force is not transmitted to the first conveying roller, after a rear edge of the document conveyed by the first conveying roller passes the first conveying roller and a next document to be subsequently conveyed comes into contact with the first conveying roller; and
a third conveying roller provided at the downstream side with respect to the separation roller in the document conveying direction, wherein
a period of time for blocking the second driving force is set to be equal to or less than a period of time for conveying the document for a distance between the separation roller and the third conveying roller.

US Pat. No. 10,116,818

INFORMATION PROCESSING APPARATUS WITH OPERATION UNIT, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR

CANON KABUSHIKI KAISHA, T...

1. An image processing apparatus comprising:a scanner scanning a document and generating image data;
a display displaying a first display area for selecting an image processing function to be executed for the image data generated by the scanner;
a memory storing instructions; and
at least one processor that executes the instructions causing the image processing apparatus to:
display a plurality of standard icons corresponding to a plurality of image processing functions in the first display area;
in a case where a plurality of extension applications are installed and a total number of the plurality of standard icons and a plurality of additional icons corresponding to the plurality of extension applications does not exceed a display upper limit of the first display area, display the plurality of standard icons and the plurality of the additional icons in the first display area; and
in a case where the plurality of extension applications are installed and the total number of the plurality of standard icons and a plurality of additional icons corresponding to the plurality of extension applications exceeds the display upper limit of the first display area:
display the plurality of standard icons and a predetermined icon in the first display area; and
display a second display area on the display, in which the plurality of additional icons corresponding to the plurality of extension applications are arranged, when the predetermined icon is selected from among the icons in the first display area.
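
Illustrative sketch (Python, not from the patent): the icon-layout rule in the claim above. If the standard icons plus the extension icons fit within the first display area's limit, show them all there; otherwise show the standard icons plus one predetermined icon that opens a second area holding the extension icons. Names are illustrative.

```python
from typing import List, Tuple

MORE_ICON = "more"   # stands in for the claim's "predetermined icon"


def layout_icons(standard: List[str],
                 additional: List[str],
                 display_limit: int) -> Tuple[List[str], List[str]]:
    """Return (first display area icons, second display area icons)."""
    if len(standard) + len(additional) <= display_limit:
        return standard + additional, []
    return standard + [MORE_ICON], additional
```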

US Pat. No. 10,116,817

IMAGE FORMING APPARATUS AND IMAGE FORMING SYSTEM INCORPORATING SAME

RICOH COMPANY, LTD., Tok...

1. An image forming apparatus comprising:a display including a touch panel display screen to display a preview image before an image is formed on a recording medium;
an operation position detector to detect a series of operation positions on the touch panel display screen displaying the preview image, the detected series of operation positions forming a handwritten additional image;
a display controller to display on the display screen, a composite image including both the preview image and the handwritten additional image superimposed on the preview image; and
an image forming unit to form, on the recording medium, a post-addition image corresponding to the composite image, including both the preview image and the handwritten additional image, displayed on the display screen,
wherein each of a vertical length and a horizontal length of the display screen is equal to or greater than a length of a long side of a maximum size recording medium on which an image is to be formed by the image forming unit.

US Pat. No. 10,116,816

INFORMATION PROCESSING APPARATUS THAT PERFORMS TWO SEPARATE AND DIFFERENT SEARCH OPERATIONS FOR A DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus connected to an external access point, the information processing apparatus comprising:one or more processors operating to:
cause a first search to be performed so that a first device that is not in a state of being connected to the external access point and that has a function of an access point is searched for;
cause a second search to be performed so that a second device that is already in a state of being connected to the external access point is able to be searched for, wherein the second search is a different search operation than the first search; and
cause a display unit to display first information regarding the first device found by the first search and second information regarding the second device found by the second search,
wherein, in a case where the first information displayed on the display unit is designated, processing for connecting the first device to the external access point is performed based on the designation of the first information,
wherein, in a case where the second information displayed on the display unit is designated, processing for connecting the second device to the external access point is not performed based on the designation of the second information, and
wherein the external access point is provided outside of the information processing apparatus, the first device, and the second device.

US Pat. No. 10,116,815

DISPLAY APPARATUS THAT ENSURES REDUCED OPERATION LOAD OF USER, AND IMAGE FORMING APPARATUS

KYOCERA Document Solution...

1. A display apparatus comprising:a display that has a touch panel function;
a first display controller that performs a control such that a first button image and a first character image are displayed on the display, the first button image having a first region enclosed by a first line, the first button image detecting a touch within the first region to be transferred to a setting input screen for receiving a predetermined setting, the first character image being arranged in the first region, the first character image indicating a content of the first button image;
a detection unit that detects a request for an enlargement of a display of the first button image by an operation on the display;
a determining unit that determines whether an enlargement ratio is equal to or more than a predetermined value when the request for the enlargement of the display of the first button image is detected by the detection unit; and
a second display controller that performs a control such that a second button image and a second character image are displayed on the display when the determining unit determines that the enlargement ratio is equal to or more than a predetermined value, the second button image being located on an inner side of the first region enlarged with the enlargement ratio, the second button image having a second region enclosed by a second line, the second button image detecting a touch within the second region to accept the input of the predetermined setting on the setting input screen, the second character image indicating a content of the predetermined setting in the first region.

US Pat. No. 10,116,814

ELECTRONIC APPARATUS AND DISPLAY CONTROL METHOD FOR ELECTRONIC APPARATUS

Seiko Epson Corporation, ...

11. A display control method for an electronic apparatus which is able to communicate with an external server via a network,wherein the electronic apparatus includes a communication interface which receives external server screen information from the external server, a printing section which executes printing on a medium, an operable mechanism which accommodates the medium prior to printing and includes a medium accommodating section which is attachable and detachable with respect to a main body of the electronic apparatus, a display which displays a screen, a control device which controls the display state of the display, and a memory which stores internal screen information in advance, the external server screen information being screen information of an operation screen for controlling the electronic apparatus by a user operation, and
the control device is provided with an internal screen information output circuit that outputs the internal screen information which is screen information that relates to the screen which is displayed on the display, the internal screen information being received from an internal server and not being received from the external server,
the display control method comprising:
causing the control device to execute
a first display step of displaying a first screen that is based on the external server screen information that is received from the external server on the display,
a second display step of displaying a second screen that is based on the internal screen information which is received from the internal server on the display,
a state change sensing step of sensing, by a state change sensing circuit, a change in state of the operable mechanism that is caused by an operation of the operable mechanism,
a switching step of switching, by a display control circuit, from the first display step to the second display step such that the display control circuit ends the first display step to end displaying of the first screen and starts the second display step to display the second screen upon the change in state of the operable mechanism being sensed in the state change sensing step while the first screen is displayed on the display,
a step of acquiring, by the display control circuit, the internal screen information which relates to a screen according to the aspect of the change in state of the operable mechanism from the internal screen information output circuit, and displaying the second screen that is based on the internal screen information on the display when the second display step is executed according to the execution of the switching step, the internal screen information being screen information of a setting screen for setting an attribute of the medium that is set in the medium accommodating section whose change in state is sensed while executing the first display process.

US Pat. No. 10,116,813

COMPOSITE APPARATUS

Konica Minolta, Inc., To...

1. A composite apparatus comprising:a first apparatus and a second apparatus that operate independently of each other, the first apparatus and the second apparatus each comprising:
a display memory that stores display data; and
a drawing processor;
a single console display that is shared by the first apparatus and the second apparatus and displays the display data upon an instruction by the drawing processor of the first or second apparatus;
a selector that selectively connects the drawing processor of the first or second apparatus to the single console display; and
a switch processor that receives a connection request from the first or second apparatus, wherein the connection request includes a request to connect the drawing processor of the first or second apparatus to the single console display and to instruct the selector to connect the drawing processor of either the first or second apparatus to the single console display,
wherein, while connected to the single console display, the drawing processor of either the first or second apparatus that issued the connection request instructs the single console display to display the display data, and
wherein the first apparatus and second apparatus operate independently of each other while sharing the single console display to display the display data.
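
The shared-console arrangement in the claim above can be pictured with a short Python sketch: two independent drawing processors, a switch processor that handles connection requests, and a single display that obeys only the currently connected processor. The class names and screen contents are assumptions made for illustration.

    class SharedConsoleDisplay:
        """Single console display shared by two independently operating apparatuses."""

        def __init__(self):
            self.connected_processor = None   # set via the selector
            self.shown_data = None

        def show(self, display_data):
            self.shown_data = display_data

    class SwitchProcessor:
        """Receives connection requests and drives the selector."""

        def __init__(self, display):
            self.display = display

        def request_connection(self, drawing_processor):
            # Connect the requesting apparatus's drawing processor to the
            # single console display (the selector's role in the claim).
            self.display.connected_processor = drawing_processor

    class DrawingProcessor:
        def __init__(self, display_memory):
            self.display_memory = display_memory  # this apparatus's display data

        def draw(self, display):
            # Only the currently connected drawing processor may instruct the display.
            if display.connected_processor is self:
                display.show(self.display_memory)

    display = SharedConsoleDisplay()
    switch = SwitchProcessor(display)
    copier, printer = DrawingProcessor("copy job screen"), DrawingProcessor("print queue screen")
    switch.request_connection(printer)
    printer.draw(display)
    copier.draw(display)          # ignored: the copier is not connected
    print(display.shown_data)     # -> print queue screen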

US Pat. No. 10,116,812

IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE DATA RECORDING MEDIUM HAVING CONTROL PROGRAM STORED THEREON

KONICA MINOLTA, INC., Ch...

1. An image forming apparatus, comprising:a display; and
a hardware processor configured to:
accept an operation indicating that display of an image currently displayed on said display is unnecessary, wherein said operation includes at least an input to close said currently displayed image;
generate, based on said operation, a menu showing image candidates to which transition from said currently-displayed image can be made;
display said generated menu on said currently-displayed image;
accept an operation for selecting a particular image from said candidates shown in said generated menu displayed on said currently-displayed image; and
display said selected particular image on said display based on said operation for selecting.
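
A small Python sketch of the claimed flow above, assuming hypothetical screen names: closing the current image generates a menu of transition candidates, and the user's selection becomes the newly displayed image.

    def build_transition_menu(current_image, candidates):
        """Generate a menu of images the display could transition to once the
        user indicates the current image is no longer needed (names illustrative)."""
        return [c for c in candidates if c != current_image]

    def close_and_select(current_image, candidates, choice_index):
        # The close operation triggers menu generation; the selected candidate
        # becomes the newly displayed image.
        menu = build_transition_menu(current_image, candidates)
        return menu[choice_index]

    print(close_and_select("copy settings", ["home", "scan", "fax", "copy settings"], 0))
    # -> home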

US Pat. No. 10,116,811

IMAGE FORMING SYSTEM, IMAGE FORMING APPARATUS, AND IMAGE FORMATION CONTROL PROGRAM

KONICA MINOLTA, INC., To...

1. An image forming system, comprising:a sheet feeding apparatus which feeds a long sheet with a sheet length in a conveying direction longer than a sheet of a fixed sheet size;
an image forming apparatus which performs image formation based on image data while conveying the long sheet being fed from the sheet feeding apparatus;
a sheet delivering apparatus which winds up the long sheet subjected to the image formation; and
a control apparatus which controls the sheet feeding apparatus, the image forming apparatus, and the sheet delivering apparatus;
wherein the control apparatus controls the sheet feeding apparatus, the image forming apparatus, and the sheet delivering apparatus so as to perform image formation based on the image data while conveying the long sheet at a constant speed,
wherein at a time of executing image formation of another image based on another image data other than the image data in the middle of the image formation based on the image data, the control apparatus controls to stop conveying of the long sheet temporarily, and controls to cut out a region where the another image is formed thereby separating the long sheet, in the state of stopping conveying of the long sheet temporarily, and
wherein in a case in which the image formation of the another image has been executed by using the another image data, the control apparatus controls to stop winding up of the long sheet in the sheet delivering apparatus before the region where the another image is formed on the long sheet is wound up into the sheet delivering apparatus, and in a state in which the winding up of the long sheet is stopped, the control apparatus controls to continue the sheet feeding and conveying of the long sheet such that the region where the another image is formed on the long sheet goes out from the image forming apparatus, wherein the control apparatus continues the sheet feeding and conveying of the long sheet until a trailing end of the region where the another image is formed goes out from the image forming apparatus.

US Pat. No. 10,116,810

IMAGE-OUTPUTTING APPARATUS FOR OUTPUTTING NOTIFICATION IMAGE SENT FROM SERVER

Brother Kogyo Kabushiki K...

1. A server comprising:a network interface configured to communicate with an image-outputting apparatus, the image-outputting apparatus being configured to receive user operations including a login operation, the login operation being performed by a user to which an account is assigned, the image-outputting apparatus being configured to determine whether or not the login operation is accepted, the image-outputting apparatus being configured to determine what type of machine operation is permitted for the logged-in user on the basis of the account used for the accepted login operation, the image-outputting apparatus being configured to transmit query information and device information to the server, the query information being for querying the server whether notification data to be transmitted to the image-outputting apparatus exists, the device information including login information indicating the account used by the user logged in to the image-outputting apparatus;
a storage capable of storing the notification data for transmission to the image-outputting apparatus; and
a controller configured to:
receive the query information via the network interface;
in response to receiving the query information, (a) determine whether or not the notification data is stored in the storage;
in response to determining in (a) that the notification data is stored in the storage, receive the device information via the network interface;
(b) determine whether or not the login information included in the device information indicates a specific account, the specific account being assigned to an authorized user; and
in response to determining in (b) that the login information included in the device information indicates the specific account, (c) transmit an output instruction to the image-outputting apparatus via the network interface, the output instruction being for controlling the image-outputting apparatus to output a notification image represented by the notification data.
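
The gated notification flow in steps (a) through (c) above can be sketched in a few lines of Python. The account names and dictionary keys are assumptions used only to make the example runnable.

    class NotificationServer:
        """Sketch of the query flow: notification data is pushed only when the
        querying device reports a login under the authorized account."""

        def __init__(self, authorized_account):
            self.authorized_account = authorized_account
            self.notification_data = None   # storage for pending notification data

        def store_notification(self, data):
            self.notification_data = data

        def handle_query(self, device_info):
            # (a) Is any notification data stored?
            if self.notification_data is None:
                return None
            # (b) Does the login information indicate the specific account?
            if device_info.get("login_account") != self.authorized_account:
                return None
            # (c) Transmit an output instruction carrying the notification data.
            return {"instruction": "output", "data": self.notification_data}

    server = NotificationServer(authorized_account="admin")
    server.store_notification("toner recall notice")
    print(server.handle_query({"login_account": "admin"}))
    print(server.handle_query({"login_account": "guest"}))   # -> None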

US Pat. No. 10,116,809

IMAGE PROCESSING APPARATUS, CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM, WHICH OBTAINS CALIBRATION IMAGE INFORMATION WITH WHICH TO CORRECT IMAGE DATA

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:an image capturing unit configured to capture an image of a document placed on a document board;
a processor; and
a memory storing instructions that, when executed by the processor, cause the apparatus to function as:
a determination unit configured to determine a correction parameter for correcting a first image of the document placed on the document board, the first image being captured by the image capturing unit, using a value corresponding to each pixel in a second image that is captured by imaging the document board by the image capturing unit; and
a correction unit configured to correct the first image of the document placed on the document board, the first image being captured by the image capturing unit, using the correction parameter determined by the determination unit,
wherein the determination unit modifies the parameter by modifying a value corresponding to each pixel in a first region containing an edge portion extracted based on an edge extraction filter from the second image of the document board using a value corresponding to each pixel surrounding the first region,
wherein in a case where the first region surrounding the edge portion is larger than a predetermined size, the determination unit is configured to change a coefficient of the edge extraction filter, extract the edge portion from the second image of the document board, and determine, as a second region, a region surrounding an edge portion extracted using the changed coefficient.
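
A heavily simplified, one-dimensional Python sketch of the correction-parameter idea above: pixels in a region around an extracted edge are replaced with values from the pixels surrounding that region, and when the region exceeds a size limit the edge-extraction "coefficient" (here just a stricter threshold) is changed and extraction repeated. This is an assumption-laden illustration, not the patented algorithm.

    def extract_edge_region(row, threshold):
        # Mark pixel indices whose gradient magnitude exceeds the threshold.
        return {i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold}

    def build_correction_row(row, threshold, max_region_size=3):
        edges = extract_edge_region(row, threshold)
        if len(edges) > max_region_size:
            # Region larger than the size limit: change the coefficient and
            # extract the edge portion again.
            edges = extract_edge_region(row, threshold * 2)
        corrected = list(row)
        for i in sorted(edges):
            # Replace each edge pixel with the mean of the nearest pixels that
            # lie outside the edge region.
            left = i - 1
            while left in edges and left > 0:
                left -= 1
            right = min(i + 1, len(row) - 1)
            while right in edges and right < len(row) - 1:
                right += 1
            corrected[i] = (row[left] + row[right]) / 2
        return corrected

    shading_row = [100, 101, 60, 102, 103, 104]   # dark streak left by a document edge
    print(build_correction_row(shading_row, threshold=20))
    # -> [100, 101, 102.0, 102.0, 103, 104]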

US Pat. No. 10,116,808

MOVING AMOUNT DETECTOR AND IMAGE FORMING APPARATUS INCLUDING THE SAME

KONICA MINOLTA, INC., Ch...

1. A moving amount detector that sets a movable member included in a device or an object conveyed by the device as a detection target and detects a moving amount of the detection target, the moving amount detector comprising:an imaging unit that repeatedly captures a series of images of the detection target at a constant sampling period while the detection target moves; and
a hardware processor configured to function as a moving amount calculating unit that selects every Nth image of the series of images and compares each pair of adjacent selected images with each other from among the series of images of the detection target captured by the imaging unit;
wherein N is determined based on an intended moving speed of the detection target; and
the moving amount calculating unit calculates a moving amount of the detection target based on a movement of the detection target during a time period between when the two compared images were taken.
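
A compact Python sketch of the every-Nth-image comparison above, using 1-D intensity profiles and a simple shift search as a stand-in for whatever image comparison the detector performs; the profiles, N, and the shift metric are all illustrative assumptions.

    def estimate_shift(profile_a, profile_b, max_shift=3):
        """Return the shift (in pixels) that best aligns two 1-D intensity
        profiles, standing in for comparing two captured images of the target."""
        best_shift, best_err = 0, float("inf")
        for s in range(-max_shift, max_shift + 1):
            pairs = [(profile_a[i], profile_b[i + s])
                     for i in range(len(profile_a)) if 0 <= i + s < len(profile_b)]
            if len(pairs) <= len(profile_a) // 2:
                continue   # ignore shifts with too little overlap
            err = sum(abs(a - b) for a, b in pairs) / len(pairs)
            if err < best_err:
                best_shift, best_err = s, err
        return best_shift

    def moving_amount(frames, n):
        """Select every Nth captured frame (N chosen from the intended moving
        speed) and accumulate the movement between adjacent selected frames."""
        selected = frames[::n]
        return sum(estimate_shift(a, b) for a, b in zip(selected, selected[1:]))

    # A bright feature drifting right by one pixel per sampling period.
    frames = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]
    print(moving_amount(frames, n=2))   # -> 4 (two comparisons, shift of 2 each)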

US Pat. No. 10,116,807

METHOD AND APPARATUS FOR MANAGING SUBSCRIPTION TO POLICY COUNTERS

Telefonaktiebolaget LM Er...

1. A method, performed in a Policy and Charging Rules Function (PCRF), for managing subscription to policy counters maintained at an Online Charging System (OCS), wherein the PCRF is operable to communicate with the OCS over an Sy reference point, the method comprising:receiving a Multiple Users subscription trigger from a network operator, the Multiple Users subscription trigger identifying a reference network policy and a subject network policy; and
sending a Spending Limit Request (SLR) command to the OCS, the SLR command specifying an identifier of a subject policy counter for the subject network policy and specifying application of the SLR command with respect to the subject policy counter to all ongoing Sy sessions between the PCRF and the OCS which already include a subscription to a policy counter for the reference network policy.

US Pat. No. 10,116,806

BANDWIDTH AWARE NETWORK STATISTICS COLLECTION

QUALCOMM Innovation Cente...

1. A method of controlling data usage statistics in a computing device, comprising:suppressing, via a minimum window component of the computing device, triggering data usage stats collection during a minimum window;
performing, via a network status component of the computing device, at least one instance of data usage stats collection after termination of the minimum window;
incrementally decreasing, via a minimum window adjustment function of the minimum window component, the minimum window as data usage approaches a warning limit;
wherein the minimum window is a function of (1) a communications channel link speed, and (2) a proximity of data usage to the warning limit;
wherein the triggering is caused by either expiration of a timer or data usage that meets a buffer threshold, and wherein a length of the timer and a size of the buffer threshold are based on the communication channel link speed; and
wherein the buffer threshold is a function of the proximity of the data usage to the warning limit.
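
Since the claim above defines the minimum window and the triggering conditions only functionally, here is a toy Python sketch under stated assumptions: the window shrinks with link speed and with proximity to the warning limit, and collection after the window is triggered by a timer expiry or by the buffer threshold. The specific formula and constants are invented for illustration.

    def minimum_window_seconds(link_speed_mbps, usage_bytes, warning_limit_bytes,
                               base_window=60.0):
        """Toy stand-in for the claimed minimum window: a function of the link
        speed and of how close data usage is to the warning limit, shrinking as
        usage approaches the limit."""
        proximity = max(0.0, 1.0 - usage_bytes / warning_limit_bytes)  # 1 = far, 0 = at limit
        speed_factor = 1.0 / max(link_speed_mbps, 1.0)                 # faster link -> smaller window
        return base_window * proximity * (0.5 + 0.5 * speed_factor)

    def should_collect(elapsed_since_last, buffered_bytes, window,
                       timer_length, buffer_threshold):
        # Collection is suppressed inside the minimum window; afterwards it is
        # triggered by timer expiry or by data usage meeting the buffer threshold.
        if elapsed_since_last < window:
            return False
        return elapsed_since_last >= timer_length or buffered_bytes >= buffer_threshold

    window = minimum_window_seconds(link_speed_mbps=100,
                                    usage_bytes=900_000_000,
                                    warning_limit_bytes=1_000_000_000)
    print(round(window, 2),
          should_collect(elapsed_since_last=5.0, buffered_bytes=64_000,
                         window=window, timer_length=30.0, buffer_threshold=32_000))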

US Pat. No. 10,116,805

APPARATUSES AND METHODS FOR DETERMINING USAGE OF A WIRELESS COMMUNICATION SERVICE

10. A method comprising:receiving user input at a user interface displayed by a wireless device, the wireless device configured to access a communication service, wherein the user input designates a user profile; and
after receiving the user input, receiving a selection at the wireless device to initiate a session of the communication service, and responsive to the selection:
generating, at the wireless device, a message associated with the session of the communication service, wherein the message includes a particular identifier of the user profile, wherein the particular identifier indicates that the session is to be billed to a first billing account of a plurality of billing accounts associated with the wireless device, each billing account of the plurality of billing accounts associated with a respective identifier; and
transmitting the message from the wireless device via a wireless network to a network element, wherein the message is configured to instruct the network element to initiate the session and to cause the session to be billed to the first billing account based on the particular identifier in the message.

US Pat. No. 10,116,804

SYSTEMS AND METHODS FOR POSITIONING A USER OF A HANDS-FREE INTERCOMMUNICATION

Elwha LLC, Bellevue, WA ...

1. A hands-free intercommunication system for automatically connecting a user to an entity of interest, the system comprising:a user-tracking sensor that determines a location of the user;
a directional microphone that measures vocal emissions by the user, wherein the measured vocal emissions include identifying the entity of interest with which the user would like to communicate;
a communication interface that communicatively couples the directional microphone and a directional sound emitter to a communication device of the entity of interest, wherein the communication interface determines whether to couple the communication device of the entity of interest to the user based on the location of the user; and
a directional sound emitter that delivers audio received at the communication device of the entity of interest to the user, wherein the directional sound emitter emits audio received from the entity of interest using a plurality of inaudible ultrasonic sound waves that frequency convert to produce audible audio corresponding to the received audio from the entity of interest for the user at the location of the user.

US Pat. No. 10,116,803

SYSTEM AND METHOD OF REROUTING A TOLL FREE TELEPHONY CALL IN THE EVENT OF THE FAILURE TO CONNECT TO THE TARGET TELEPHONY STATION

1. A method of re-routing a toll free telephony call by a telephony service provider computing machine (Provider Machine) comprising:populating a first routing database with a plurality of target station identifiers, each associated with a target telephony station, and at least one carrier identification code (CIC) routing code associated with each said target station identifier;
populating an auxiliary routing database with one or a plurality of said target station identifiers that match at least one or more said target station identifiers in said first routing database and further populating said auxiliary routing database with one or a plurality of alternative routing codes per said target station identifier, where at least one of said alternative routing codes is an alternative routing code different from said CIC routing code in said first routing database;
receiving at said Provider Machine said toll free telephony call containing a called target station identifier;
having said Provider Machine automatically access from said first routing database one or a plurality of said CIC routing codes associated with said called target station identifier;
having said Provider Machine automatically access from said auxiliary routing database one or a plurality of said routing codes associated with said called target station identifier;
using one said CIC routing code accessed from said first routing database as a primary routing code to automatically route said toll free telephony call from said Provider Machine to an electronic routing machine associated with said primary routing code;
routing said toll free telephony call from said electronic routing machine to said target telephony station associated with said called target station identifier;
receiving a call completion status indicator at said Provider Machine from said electronic routing machine; and
using as an alternate routing code at least one said routing code differing from said primary routing code to automatically re-route said toll free telephony call from said Provider Machine to an alternate electronic routing machine associated with said alternate routing code in the event said call completion status indicator reveals to the Provider Machine said toll free telephony call did not successfully connect to said target telephony station.
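
The primary/alternate failover recited in the claim above maps naturally onto a short Python sketch. The database layout, routing-code strings, and the attempt_connection callable are hypothetical stand-ins for the provider machine's carrier signalling.

    def route_toll_free_call(called_number, first_routing_db, auxiliary_routing_db,
                             attempt_connection):
        """Try the primary CIC routing code first; on a failed completion status,
        re-route over the alternative codes from the auxiliary database.
        attempt_connection(routing_code, number) returns True when the call
        reached the target telephony station."""
        primary_codes = first_routing_db.get(called_number, [])
        alternate_codes = [c for c in auxiliary_routing_db.get(called_number, [])
                           if c not in primary_codes]
        for code in primary_codes[:1] + alternate_codes:
            if attempt_connection(code, called_number):
                return code   # the routing code that completed the call
        return None

    first_db = {"8005551234": ["CIC-0288"]}
    aux_db = {"8005551234": ["CIC-0333", "SIP-TRUNK-7"]}
    # Simulate the primary carrier failing and the first alternate succeeding.
    print(route_toll_free_call("8005551234", first_db, aux_db,
                               lambda code, number: code == "CIC-0333"))
    # -> CIC-0333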

US Pat. No. 10,116,802

IP CARRIER PEERING

1. A system to interconnect carrier communication systems, the system comprises:a communication client, the communication client configured to:
receive a request, including an e.164 number, to connect an IP (Internet protocol) call from equipment of a first carrier to equipment of a second carrier;
modify a query to a private ENUM (tElephone NUmber Mapping) to include an intercarrier ENUM apex-based domain with an associated DNS (domain name server) forwarding zone, wherein the associated DNS forwarding zone includes a primary internet address of a tier 2 ENUM of the second carrier;
automatically forward the modified query to the equipment of the second carrier to retrieve a routing record from the second carrier; and
route the IP call to the equipment of the second carrier using the routing record.
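
A brief Python sketch of how a query might be modified for a private inter-carrier ENUM apex domain, as described above: the E.164 number is turned into an ENUM domain name by RFC 6116-style digit reversal and the apex domain is appended so the DNS forwarding zone can carry it to the other carrier's tier 2 ENUM. The apex domain "ic-enum.example" and the dictionary form of the query are assumptions for illustration.

    def e164_to_enum_name(e164_number, apex_domain):
        """Build the ENUM domain name for an E.164 number under a private
        inter-carrier apex domain (RFC 6116-style digit reversal)."""
        digits = [d for d in e164_number if d.isdigit()]
        return ".".join(reversed(digits)) + "." + apex_domain

    def modify_query(e164_number, apex_domain="ic-enum.example"):
        # The modified NAPTR query is forwarded through the DNS forwarding zone
        # configured for the apex domain toward the second carrier's tier 2 ENUM.
        return {"qname": e164_to_enum_name(e164_number, apex_domain), "qtype": "NAPTR"}

    print(modify_query("+1-202-555-0100"))
    # -> {'qname': '0.0.1.0.5.5.5.2.0.2.1.ic-enum.example', 'qtype': 'NAPTR'}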

US Pat. No. 10,116,801

CONFERENCE CALL PLATFORM CAPABLE OF GENERATING ENGAGEMENT SCORES

Shoutpoint, Inc., Newpor...

1. A conference call management system, comprising:a call processing system comprising one or more computing devices, said call processing system comprising telecommunication hardware configured to initiate and process telephonic calls, including conference calls, and comprising a processor and a memory, said call processing system programmed with at least:
a conference call management module that provides functionality for initiating a conference call and for enabling conference call participants to interactively participate on the conference call, said conference call management module configured to monitor, and maintain participant-specific records of, the interactive participation by the participants;
a scoring module configured to use at least the participant-specific records of interactive participation to generate participant-specific engagement scores reflective of levels of engagement of the participants on the conference call; and
a ranking module configured to rank participant-submitted requests for consideration based on the participant specific engagement scores.
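
One way to picture the scoring and ranking modules above is the following Python sketch; the activity names, weights, and request format are invented for the example and are not part of the claim.

    def engagement_scores(participation_records, weights=None):
        """Turn monitored per-participant activity counts into engagement scores
        (the activity names and weights are illustrative)."""
        weights = weights or {"talk_seconds": 0.01, "questions": 2.0, "polls_answered": 1.0}
        return {pid: sum(weights.get(k, 0.0) * v for k, v in record.items())
                for pid, record in participation_records.items()}

    def rank_requests(requests, scores):
        # Participant-submitted requests are ranked by the submitter's score.
        return sorted(requests, key=lambda r: scores.get(r["participant"], 0.0), reverse=True)

    records = {"alice": {"talk_seconds": 300, "questions": 2},
               "bob": {"talk_seconds": 60, "polls_answered": 3}}
    scores = engagement_scores(records)
    print(rank_requests([{"participant": "bob", "topic": "budget"},
                         {"participant": "alice", "topic": "turnout"}], scores))
    # -> alice's request first (score 7.0 vs 3.6)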

US Pat. No. 10,116,800

TECHNIQUES FOR BEHAVIORAL PAIRING IN A CONTACT CENTER SYSTEM

Afiniti Europe Technologi...

1. A method for behavioral pairing in a contact center system comprising:determining, by at least one computer processor communicatively coupled to and configured to perform behavioral pairing operations in the contact center system, a plurality of agents available for connection to a contact;
determining, by the at least one computer processor, a plurality of preferred contact-agent pairings among possible pairings between the contact and the plurality of agents;
selecting, by the at least one computer processor, one of the plurality of preferred contact-agent pairings according to a probabilistic network flow model that is constrained by agent skills and contact skill needs, wherein the probabilistic network flow model is adjusted to minimize agent utilization imbalance according to the constraints of the agent skills and the contact skill needs and to optimize performance of the contact center system, wherein the optimized performance of the contact center system is attributable to the probabilistic network flow model; and
outputting, by the at least one computer processor, the selected one of the plurality of preferred contact-agent pairings for connection in the contact center system.

US Pat. No. 10,116,799

ENHANCING WORK FORCE MANAGEMENT WITH SPEECH ANALYTICS

1. A method for generating an agent work schedule, the method comprising:performing, by a speech or text analytics module hosted on a processor, analytics on a plurality of recorded interactions with a plurality of contact center agents;
detecting, based on the analytics, specific utterances in the recorded interactions;
classifying, on the processor, the recorded interactions into a first plurality of interaction reasons and a first plurality of interaction resolution statuses, wherein the classifying is based on the detected specific utterances;
computing, on the processor, based on the classifying of the recorded interactions, a first agent effectiveness of a first agent and a second agent effectiveness of a second agent of the plurality of agents, wherein the first agent effectiveness and the second agent effectiveness correspond to an interaction reason of the first interaction reasons, the first agent effectiveness being higher than the second agent effectiveness;
forecasting, on the processor, a demand of the contact center agents for a first time period for handling interactions classified with the interaction reason;
generating, on the processor, the agent work schedule for the first time period based on the forecasted demand and the first agent effectiveness and the second agent effectiveness, wherein the agent work schedule includes a first number of agents scheduled to work during the first time period that is larger than a second number of agents scheduled to work during the first time period, the first number of agents including the first agent with the first agent effectiveness, and the second number of agents including the second agent with the second agent effectiveness;
detecting an interaction having the interaction reason during the first time period;
routing, by an electronic switch, the detected interaction to a particular agent selected from the first and second number of agents;
analyzing, on the processor, a second plurality of recorded interactions, the analyzing including classifying the second plurality of recorded interactions into a second plurality of interaction reasons and a second plurality of interaction resolution statuses; and
forecasting, on the processor, a demand of the contact center agents for a second time period for handling the second interaction reasons without forecasting a demand for handling an obsolete interaction reason included in the first plurality of interaction reasons, the second time period being different from the first time period.

US Pat. No. 10,116,797

TECHNIQUES FOR BENCHMARKING PAIRING STRATEGIES IN A CONTACT CENTER SYSTEM

Afiniti Europe Technologi...

1. A method for benchmarking pairing strategies in a contact center system comprising:cycling, by at least one computer processor communicatively coupled to and configured to operate in the contact center system, among at least two pairing strategies, wherein the cycling comprises establishing, by a routing engine of the contact center system, a connection between communication equipment of a contact and communication equipment of an agent based upon at least one pairing strategy of the at least two pairing strategies;
determining, by the at least one computer processor, a differential value attributable to the at least one pairing strategy of the at least two pairing strategies;
determining, by the at least one computer processor, a difference in performance between the at least two pairing strategies, wherein the difference in performance provides an indication that pairing contacts and agents using a first pairing strategy of the at least two pairing strategies results in a performance gain for the contact center system attributable to the first pairing strategy, wherein the difference in performance also provides an indication that optimizing performance of the contact center system is realized using the first pairing strategy instead of another of the at least two pairing strategies; and
outputting, by the at least one computer processor, the difference in performance between the at least two pairing strategies for benchmarking the at least two pairing strategies.
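
A rough Python sketch of the cycling-and-comparison idea in the claim above: strategies are alternated in fixed-size epochs and each strategy's average outcome is measured so the difference in performance can be attributed to one of them. The epoch cycling scheme, the outcome metric, and the strategy names are illustrative assumptions only.

    import random

    def benchmark(contacts, strategies, epoch_size=10):
        """Cycle among pairing strategies in fixed-size epochs and measure each
        strategy's average outcome."""
        outcomes = {name: [] for name in strategies}
        names = list(strategies)
        for i, contact in enumerate(contacts):
            name = names[(i // epoch_size) % len(names)]
            outcomes[name].append(strategies[name](contact))
        return {name: sum(vals) / len(vals) for name, vals in outcomes.items() if vals}

    random.seed(0)
    strategies = {"behavioral": lambda c: random.gauss(0.55, 0.10),
                  "fifo": lambda c: random.gauss(0.50, 0.10)}
    averages = benchmark(range(100), strategies)
    # The difference in performance attributable to the behavioral pairing strategy.
    print(averages, round(averages["behavioral"] - averages["fifo"], 3))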

US Pat. No. 10,116,796

REAL-TIME COMMUNICATIONS-BASED INTERNET ADVERTISING

Ooma, Inc., Sunnyvale, C...

1. A computer-implemented method for Internet advertising comprising:providing an advertisement of a plurality of advertisements including a first identifier to a website of a plurality of websites using at least one of a template and an application programming interface (API) associated with the website, the advertisement to be displayed on the website;
receiving a communications session initiated by an end customer using the first identifier, the communications session including a second identifier associated with the end customer;
accepting the communications session when the second identifier is not included in a black list, the black list being produced using at least reported spam;
retrieving a record associated with the end customer using the second identifier;
determining to provide a customized message to the end customer using the record;
selecting a promotional offer using the record;
providing the customized message to the end customer using the communications session, the customized message including the promotional offer and a request for an indication of interest in the promotional offer;
directing the communications session to a live agent of a plurality of live agents in response to receiving the indication of interest from the end customer;
storing data associated with the communications session;
removing the advertisement from the website; and
re-assigning the first identifier to another advertisement of the plurality of advertisements when a number of calls received at the first identifier since the removing of the advertisement is below a predetermined threshold.

US Pat. No. 10,116,795

TECHNIQUES FOR ESTIMATING EXPECTED PERFORMANCE IN A TASK ASSIGNMENT SYSTEM

Afiniti Europe Technologi...

1. A method comprising:receiving, by at least one computer processor communicatively coupled to and configured to perform task assignment operations in a task assignment system, a first plurality of historical agent-task assignments;
determining, by the at least one computer processor, a closeness of fit for each of the first plurality of historical agent-task assignments to a preferred task assignment strategy for validating the preferred task assignment strategy;
determining, by the at least one computer processor, a threshold closeness of fit for each of the first plurality of historical agent-task assignments to the preferred task assignment strategy;
determining, by the at least one computer processor, an expected performance of the task assignment system using the preferred task assignment strategy based on a subset of the first plurality of historical agent-task assignments that are within the threshold closeness of fit;
outputting, by the at least one computer processor, the expected performance for use in pairing agents with tasks in the task assignment system based upon the preferred task assignment strategy; and
establishing, by the at least one computer processor, in a switch of the task assignment system, a connection between an agent and a task based upon the expected performance to realize a first amount of performance gain for the task assignment system attributable to the preferred task assignment strategy, wherein actual performance of the task assignment system is optimized by using the validated preferred task assignment strategy based on the expected performance.
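
The closeness-of-fit estimate above can be illustrated with a short Python sketch under stated assumptions: each historical record carries the pairing actually made, the agents that were available, and an observed outcome, and "closeness of fit" is approximated by how highly the preferred strategy would have ranked the agent that was actually chosen. None of these field names or the metric come from the patent.

    def expected_performance(history, preferred_ranking, threshold):
        """Estimate expected performance from the subset of historical agent-task
        assignments whose closeness of fit to the preferred strategy meets the
        threshold."""
        def closeness(record):
            # 1.0 when the historical pairing is exactly what the preferred
            # strategy would have chosen, decaying with rank distance.
            ranked = preferred_ranking(record["task"], record["available_agents"])
            return 1.0 / (1 + ranked.index(record["agent"]))

        close_enough = [r for r in history if closeness(r) >= threshold]
        if not close_enough:
            return None
        return sum(r["outcome"] for r in close_enough) / len(close_enough)

    # Hypothetical preferred strategy: rank available agents alphabetically.
    prefer = lambda task, agents: sorted(agents)
    history = [
        {"task": "t1", "agent": "A", "available_agents": ["A", "B"], "outcome": 1.0},
        {"task": "t2", "agent": "B", "available_agents": ["A", "B"], "outcome": 0.2},
    ]
    print(expected_performance(history, prefer, threshold=0.75))
    # -> 1.0 (only the first assignment is close enough to the preferred strategy)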

US Pat. No. 10,116,794

DETERMINING AN ACTIVE STATION BASED ON MOVEMENT DATA

1. A method for determining an active contact center station for an agent in a contact center system, wherein the contact center system comprises a plurality of contact center stations, based on sensor data, the method comprising the steps of:receiving, by a processor of the contact center system, movement data from a mobile device associated with the agent;
matching, by the processor of the contact center system, the movement data from the mobile device associated with the agent with a previously stored pattern of movement associated with one of the plurality of contact center stations associated with the agent; and
automatically updating, by the processor of the contact center system, one of the plurality of contact center stations to active, wherein the update is based on the movement data and matched pattern of movement, and wherein the agent is not logged into the contact center system.
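
A small Python sketch of the matching step above, assuming movement data and stored patterns are simple numeric vectors and that Euclidean distance stands in for whatever matcher the contact center system actually applies; station and agent identifiers are hypothetical.

    def match_station(movement, stored_patterns, max_distance=2.0):
        """Return the station whose stored movement pattern is closest to the
        incoming movement data from the agent's mobile device."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        best_station, best_d = None, float("inf")
        for station, pattern in stored_patterns.items():
            d = distance(movement, pattern)
            if d < best_d:
                best_station, best_d = station, d
        return best_station if best_d <= max_distance else None

    def update_active_station(agent, movement, stored_patterns, station_states):
        station = match_station(movement, stored_patterns)
        if station is not None:
            # The matched station is marked active even though the agent has not
            # logged in to the contact center system.
            station_states[station] = ("active", agent)
        return station_states

    patterns = {"station-12": [3.0, 4.0, 0.5], "station-40": [9.0, 1.0, 0.2]}
    print(update_active_station("agent-7", [3.2, 3.9, 0.6], patterns, {}))
    # -> {'station-12': ('active', 'agent-7')}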

US Pat. No. 10,116,793

METHOD AND SYSTEM FOR LEARNING CALL ANALYSIS

1. A method for communication learning in a telecommunication system, wherein the telecommunication system comprises at least an automated dialer, a telephony service module, a database, and a media server operatively coupled over a network for exchange of data there between, the method comprising the steps of:a. selecting, by the automated dialer, a contact from the database, the contact being associated with a telephone number and one or more acoustic fingerprints;
b. retrieving, by the telephony service module, from the database, the one or more acoustic fingerprints and the telephone number associated with the contact;
c. initiating, by the automated dialer, a communication with the contact based on the telephone number, the communication generating audio;
d. analyzing, by the media server, the audio for matches to any of the one or more acoustic fingerprints, wherein matches are not identified;
e. routing, via an electronic routing device by the telephony service module, the communication to an agent device associated with an agent for determining whether or not the communication comprises a speech recording;
f. receiving, from the agent device, a signal indicating the communication comprises a speech recording;
g. requesting, by the automated dialer, new acoustic fingerprints from the media server for the speech recording and associating the new acoustic fingerprints with the contact in the database; and
h. disconnecting the communication with the contact after receiving the signal indicating the communication comprises the speech recording.
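
The learning loop in steps (d) through (h) above can be summarized with a short Python sketch. The callables match_audio, ask_agent, and make_fingerprint are hypothetical stand-ins for the media server and agent device interfaces, and the contact record layout is invented for the example.

    def handle_outbound_call(contact, fingerprints, match_audio, ask_agent,
                             make_fingerprint):
        """Learning loop sketch: if the call audio matches none of the contact's
        stored acoustic fingerprints, route it to an agent; when the agent flags
        a speech recording, fingerprint the audio, store it for the contact, and
        disconnect."""
        audio = contact["audio"]                         # audio generated by the call
        if any(match_audio(audio, fp) for fp in fingerprints):
            return fingerprints, "matched an existing recording"
        if ask_agent(audio):                             # agent confirms a recording
            return fingerprints + [make_fingerprint(audio)], \
                   "new fingerprint stored; call disconnected"
        return fingerprints, "live party; call stays with the agent"

    fingerprints, result = handle_outbound_call(
        {"number": "5551000", "audio": "voicemail greeting"},
        fingerprints=[],
        match_audio=lambda audio, fp: audio == fp,
        ask_agent=lambda audio: "voicemail" in audio,
        make_fingerprint=lambda audio: audio)
    print(result, fingerprints)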