US Pat. No. 10,250,909

DEVICE AND METHOD FOR IMPROVING VIDEO CONFERENCE QUALITY

ATI Technologies ULC, Ma...

1. A processing device for use with a video conferencing network, the processing device comprising:
memory configured to store data; and
a processor configured to:
determine a first sub-sampling phase for each of a plurality of portions of first video data;
chrominance sub-sample each of the portions of first video data using the first sub-sampling phase;
vary, over time, a phase relationship of the chrominance sub-samples and the portions of first video data;
encode each of the portions of first video data;
decode sub-sampled, encoded portions of second video data;
determine a second sub-sampling phase at which each of the portions of second video data is chrominance sub-sampled; and
chrominance up-sample each of the portions of second video data using the second sub-sampling phase.
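A minimal sketch of the phase-varying chrominance sub-sampling the claim describes, assuming a 4:4:4-to-4:2:2 reduction along one row and an alternating per-frame phase schedule (both are illustrative choices, not taken from the patent):

```python
def subsample_chroma(chroma_row, phase):
    # 4:4:4 -> 4:2:2 along one row: keep every second chroma sample,
    # starting at the given phase offset (0 or 1).
    return chroma_row[phase::2]

def subsample_frames(frames, phase_for):
    # Vary the phase relationship over time: frame t uses phase_for(t),
    # so successive frames retain different chroma sample positions.
    return [subsample_chroma(row, phase_for(t)) for t, row in enumerate(frames)]

row = [10, 20, 30, 40]
out = subsample_frames([row, row], lambda t: t % 2)
# frame 0 keeps samples at even positions, frame 1 at odd positions
```

A decoder that knows the phase schedule can combine samples from neighbouring frames when up-sampling, which is what makes varying the phase over time useful.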

US Pat. No. 10,250,908

ADAPTIVE TRANSFORM SIZE SELECTION FOR GEOMETRIC MOTION PARTITIONING

QUALCOMM Incorporated, S...

1. A method of decoding video data, comprising:
receiving a block of video data partitioned by a geometric motion partition line into a first partition and a second partition;
determining which pixels in the block of video data are part of a transition region around the geometric motion partition line, wherein one of the pixels in either the first partition or the second partition is determined to be part of the transition region in response to a determination that two or more pixels in a pixel window centered on the one pixel are in the other partition, the two or more pixels being neighboring pixels to the one pixel;
generating a first prediction value for each of the pixels in the transition region based on a first motion vector of the first partition, and a second prediction value for each of the pixels in the transition region based on a second motion vector of the second partition;
calculating a prediction value for each of the pixels in the transition region based on a weighted sum of the respective first prediction value and the second prediction value; and
determining a decoded block of video data from the block of video data based at least in part on the prediction value of each of the pixels in the transition region.
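The transition-region test and the weighted blend can be sketched as follows; the 3×3 pixel window, the threshold of two pixels, and the equal blend weights are illustrative choices, not values from the claim:

```python
def in_transition(partition, x, y, threshold=2):
    # A pixel belongs to the transition region if at least `threshold`
    # neighbouring pixels in the window centred on it fall in the other
    # partition (partition[y][x] holds the partition index of each pixel).
    own = partition[y][x]
    h, w = len(partition), len(partition[0])
    others = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                if partition[ny][nx] != own:
                    others += 1
    return others >= threshold

def blend(pred1, pred2, w1=0.5):
    # Weighted sum of the predictions obtained with the two partitions'
    # motion vectors.
    return w1 * pred1 + (1.0 - w1) * pred2
```

Pixels away from the partition line keep their single-partition prediction; only transition-region pixels pay for the second motion compensation and the blend.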

US Pat. No. 10,250,907

INTRA-FRAME PIXEL PREDICTION METHOD, ENCODING METHOD AND DECODING METHOD, AND DEVICE THEREOF

Tsinghua University, Bei...

1. An intra-frame pixel prediction method for predicting a pixel in an image frame, comprising:
a step of defining a target template, defining a target template of a pixel to be predicted currently;
a step of determining a matching template, comparing the target template with candidate templates in a search region of the frame, and determining, from the candidate templates, at least one matching template matching the target template; and
a step of determining a prediction value, determining a prediction value of the pixel to be predicted currently based on the at least one matching template;
the intra-frame pixel prediction method further comprising:
rearranging pixels and dividing the pixels into blocks, obtaining respective blocks as encoding targets so that a plurality of pixels in a divided block after the rearranging and dividing of the pixels into blocks do not appear in an original block in the frame before the rearranging and dividing of the pixels into blocks, and so that when a pixel in a divided block is being predicted, a plurality of pixels in its target template in the frame before the rearranging have been reconstructed.
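The template-matching core of the claim can be sketched in one dimension; the template length, the SAD matching cost, and the single-match prediction are illustrative simplifications, and the rearrangement step is omitted:

```python
def predict_by_template(recon, pos, tlen=2):
    # Target template: the tlen reconstructed samples just before `pos`.
    target = recon[pos - tlen:pos]
    best_pred, best_cost = None, float('inf')
    # Search region: every earlier candidate template in the reconstruction.
    for s in range(tlen, pos):
        cand = recon[s - tlen:s]
        cost = sum(abs(a - b) for a, b in zip(cand, target))  # SAD match
        if cost < best_cost:
            # Predict with the sample that followed the matching template.
            best_cost, best_pred = cost, recon[s]
    return best_pred
```

The claim's rearrangement exists precisely so that, at prediction time, the samples forming the target template have already been reconstructed, which is the precondition this sketch takes for granted.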

US Pat. No. 10,250,906

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:
a supply unit configured to supply image data one piece of pixel data at a time; and
an encoding unit configured to encode the image data supplied by the supply unit, the encoding unit selecting, as prediction pixel data, locally decoded data of one pixel out of n pixels (n is an integer greater than or equal to 2) that were encoded immediately before a pixel to be encoded and have been locally decoded, and encoding a difference between the prediction pixel data and data of the pixel to be encoded,
wherein the supply unit supplies pixel data to the encoding unit in a predetermined order according to which all of the n pixels are pixels that are adjacent to the pixel to be encoded, and
wherein the encoding unit includes:
a storage unit configured to store the n pixels of locally decoded data, wherein each of the n pixels is adjacent to the pixel to be encoded in different directions, and
a determination unit configured to select the locally decoded data of one of the n pixels as the prediction pixel data based on the locally decoded data of n pixels stored in the storage unit,
wherein the determination unit
detects a direction in which a correlation is higher out of directions from the pixel to be encoded toward the n pixels, and determines locally decoded data corresponding to the detected direction as the prediction pixel data,
determines locally decoded data of, out of the n pixels, one pixel out of two pixels immediately before the pixel to be encoded in the predetermined order as the prediction pixel data, and
determines the direction in which the correlation is higher based on a correlation of locally decoded data between a pixel immediately before the two pixels in the predetermined order and each of the two pixels.

US Pat. No. 10,250,905

CONVERSION OPERATIONS IN SCALABLE VIDEO ENCODING AND DECODING

Microsoft Technology Lice...

1. One or more computer storage media having stored thereon computer software instructions for causing a processing unit, when programmed thereby, to perform operations for scalable video processing, the one or more computer storage media being selected from the group consisting of non-volatile memory, magnetic disk, CD ROM, and DVD, the operations comprising:
selecting a type of chroma upsampling, wherein the type of chroma upsampling is an interpolation filter selected from among multiple interpolation filters, each of the multiple interpolation filters having different coefficients for cubic interpolation;
receiving base layer video after reconstruction of the base layer video, the reconstructed base layer video having a luma channel and plural chroma channels with a first chroma sampling rate; and
scaling each of the plural chroma channels of the reconstructed base layer video to a second chroma sampling rate different than the first chroma sampling rate using the selected type of chroma upsampling, the selected type of chroma upsampling being indicated by one or more chroma scaling parameters that are signaled as side information.
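A sketch of selecting among cubic interpolation filters with different coefficients and doubling a chroma channel's sampling rate; the Catmull-Rom taps are one standard cubic kernel, the 'soft' taps are a hypothetical alternative, and the edge clamping and rounding are illustrative:

```python
# Two 4-tap cubic kernels with different coefficients (both divided by 16).
FILTERS = {
    'catmull_rom': (-1, 9, 9, -1),  # a standard cubic choice
    'soft':        (1, 7, 7, 1),    # hypothetical softer alternative
}

def upsample_2x(chroma, kind):
    # Double the chroma sampling rate: copy each input sample, then
    # interpolate a new sample halfway between neighbours with the
    # selected kernel (samples beyond the edges are clamped).
    taps = FILTERS[kind]
    n, out = len(chroma), []
    for i in range(n):
        out.append(chroma[i])
        window = [chroma[max(0, min(n - 1, i + k))] for k in (-1, 0, 1, 2)]
        out.append((sum(t * v for t, v in zip(taps, window)) + 8) >> 4)
    return out
```

The kernel name stands in for the claim's chroma scaling parameters signalled as side information: the decoder reads the parameters and picks the matching filter.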

US Pat. No. 10,250,903

MOTION VECTOR DERIVATION METHOD, MOVING PICTURE CODING METHOD AND MOVING PICTURE DECODING METHOD

PANASONIC INTELLECTUAL PR...

1. An image coding and decoding system which includes an image coding apparatus and an image decoding apparatus,
wherein said image coding apparatus comprises:
a unit operable to obtain a reference motion vector of a reference block, the reference motion vector being used for deriving a motion vector of a current block to be coded;
a unit operable to calculate a first parameter corresponding to a difference between a display order of a picture including the reference block and a display order of a reference picture of the reference block, wherein said reference block is motion-compensated using the reference motion vector, and said reference picture is referred to by the reference motion vector;
a unit operable to calculate a second parameter corresponding to a difference between a display order of a current picture and a display order of the reference picture, wherein said current picture is a picture including the current block to be coded;
a first judging unit operable to judge whether or not the first parameter is within a range having a predetermined maximum value;
a unit operable to generate a multiplier parameter corresponding to the first parameter, the multiplier parameter being used for changing a division operation by the first parameter into a multiplication operation by the multiplier parameter;
a first motion vector derivation unit operable to derive the motion vector of the current block to be coded by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the predetermined maximum value of the range and the second parameter, when it is judged by said first judging unit that the first parameter is not within the range having the predetermined maximum value, and by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the first parameter and the second parameter, when it is judged by said first judging unit that the first parameter is within the range having the predetermined maximum value;
a unit operable to generate a motion-compensated image of the current block to be coded using the motion vector derived in said first motion vector derivation unit; and
a unit operable to code a difference image between the current block to be coded and the motion-compensated image of the current block to be coded, and
wherein said image decoding apparatus comprises:
a unit operable to obtain a reference motion vector of a reference block, the reference motion vector being used for deriving a motion vector of a current block to be decoded;
a unit operable to calculate a first parameter corresponding to a difference between a display order of a picture including the reference block and a display order of a reference picture of the reference block, wherein said reference block is motion-compensated using the reference motion vector, and said reference picture is referred to by the reference motion vector;
a unit operable to calculate a second parameter corresponding to a difference between a display order of a current picture and a display order of the reference picture, wherein said current picture is a picture including the current block to be decoded;
a second judging unit operable to judge whether or not the first parameter is within a range having a predetermined maximum value;
a unit operable to generate a multiplier parameter corresponding to the first parameter, the multiplier parameter being used for changing a division operation by the first parameter into a multiplication operation by the multiplier parameter;
a second motion vector derivation unit operable to derive the motion vector of the current block to be decoded by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the predetermined maximum value of the range and the second parameter, when it is judged by said second judging unit that the first parameter is not within the range having the predetermined maximum value, and by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the first parameter and the second parameter, when it is judged by said second judging unit that the first parameter is within the range having the predetermined maximum value;
a unit operable to decode a coded data stream to obtain a decoded difference image of the current block to be decoded;
a unit operable to generate a motion compensated image of the current block to be decoded using the motion vector derived in said second motion vector derivation unit; and
a unit operable to reconstruct the current block to be decoded by adding the motion compensated image of the current block to be decoded and the decoded difference image of the current block to be decoded.
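The division-to-multiplication trick with a capped first parameter can be sketched as below; the 2^14 fixed-point scheme mirrors the one used for temporal motion vector scaling in H.264, and the cap value is an illustrative stand-in for the claim's predetermined maximum:

```python
MAX_TD = 16  # illustrative cap (the claim's "predetermined maximum value")

def multiplier_for(td):
    # Precompute round(2^14 / td) so that dividing by td becomes a
    # multiplication followed by a shift.
    return (16384 + abs(td) // 2) // td

def scale_mv(mv_ref, td, tb):
    # td: first parameter (distance to the reference block's reference
    # picture), tb: second parameter (distance for the current picture).
    # When td exceeds the range, the multiplier for the maximum is used.
    tx = multiplier_for(min(td, MAX_TD))
    return (mv_ref * tx * tb + 8192) >> 14
```

Replacing the division by a table-free multiplication keeps the derivation identical in encoder and decoder while avoiding a hardware divider, which is why the claim recites the same unit on both sides.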

US Pat. No. 10,250,902

METHOD FOR INDUCING MOTION INFORMATION IN MULTILAYER STRUCTURE AND APPARATUS USING SAME

KT CORPORATION, Gyeonggi...

1. A video decoding method supporting a multi-layer structure, the video decoding method comprising:
specifying a current layer criterion location that specifies a current block in a current layer;
specifying a reference layer criterion location, corresponding to the current layer criterion location, in a reference layer;
deriving a motion information storage location in the reference layer by adding the reference layer criterion location and an offset, based on a size of the motion information storage unit;
deriving motion information corresponding to the derived motion information storage location; and
scaling the derived motion information and deriving the scaled motion information as a motion vector to be used for reconstructing a picture in the current layer,
wherein the current layer criterion location is a location of a lower-right sample of a center of the current block, and
wherein a location of a top-left sample of the current block is (xP, yP), the current block is a block having a size of 16×16, and the current layer criterion location is (xP + 8, yP + 8).
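The criterion-location arithmetic in the final clause is simply the lower-right sample of the block centre; a one-line sketch:

```python
def criterion_location(xP, yP, size=16):
    # Lower-right sample of the centre of a size x size block whose
    # top-left sample sits at (xP, yP); for a 16x16 block this is
    # (xP + 8, yP + 8), as the claim recites.
    return (xP + size // 2, yP + size // 2)
```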

US Pat. No. 10,250,901

TRANSMITTING/RECEIVING DEVICE, METHOD, AND CODING/DECODING DEVICE

Saturn Licensing LLC, Ne...

1. A transmitting device comprising:
circuitry configured to:
classify image data of pictures constituting moving image data into a plurality of layers arranged in a hierarchical order, each picture belonging to any particular layer that is not a lowest layer of the plurality of layers having a temporal position in a temporal center between two temporally adjacent pictures belonging to a combination of any of the plurality of layers that are lower than the particular layer;
code the classified image data of each layer, including coding a picture of a layer using a referenced picture that belongs to the same layer or a lower layer;
generate a video stream holding the coded image data of each layer;
generate a container layer descriptor that includes
frame frequency information of pictures in the lowest layer,
layer number information indicating
a number of layers in the plurality of layers, and
for each layer higher than the lowest layer, a respective multiplication factor, a frame frequency corresponding to the layer being a frame frequency of the lowest layer multiplied by the respective multiplication factor;
generate a container in a prescribed format that includes the generated video stream and the generated container layer descriptor; and
transmit the generated container.
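The container layer descriptor's contents reduce to a base frequency plus per-layer multiplication factors; a sketch of building the descriptor and the frequencies a receiver would recover (field names are illustrative):

```python
def layer_descriptor(base_hz, multipliers):
    # Lowest layer: base_hz frames per second. Each higher layer's frame
    # frequency is base_hz times that layer's multiplication factor.
    return {
        'num_layers': 1 + len(multipliers),
        'base_frequency_hz': base_hz,
        'layer_frequencies_hz': [base_hz] + [base_hz * m for m in multipliers],
    }
```

For example, a 30 Hz lowest layer with factors 2 and 4 signals a 3-layer hierarchy whose layers play at 30, 60, and 120 Hz.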

US Pat. No. 10,250,900

SYSTEMS AND METHODS FOR EMBEDDING METADATA INTO VIDEO CONTENTS

Disney Enterprises, Inc.,...

1. A system for embedding data in a video content, the system comprising:
an encoder;
a memory storing the encoder and encoder settings; and
a hardware processor configured to execute the encoder according to the encoder settings to:
obtain the data;
determine a plurality of pixels of an image in at least one frame of the video content;
embed the data into the image in the at least one frame of the video content by inverting the plurality of pixels according to the data; and
provide, to a decoder, a location of the inverted pixels of the image in the at least one frame of the video content, so as to enable the decoder to obtain the embedded data from the video content.
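Embedding by pixel inversion can be sketched as below. The extractor here compares against a reference image purely to illustrate the inversion; a real decoder, given only the locations as in the claim, would detect inversion from local image context instead:

```python
def embed_bits(pixels, locations, bits, max_val=255):
    # Embed one data bit per chosen pixel: a 1-bit inverts the pixel value.
    out = list(pixels)
    for loc, bit in zip(locations, bits):
        if bit:
            out[loc] = max_val - out[loc]
    return out

def extract_bits(reference, embedded, locations):
    # Toy extractor (assumes access to the un-embedded pixels, which the
    # claim does not require): a differing pixel at a signalled location
    # means a 1-bit was embedded there.
    return [1 if embedded[l] != reference[l] else 0 for l in locations]
```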

US Pat. No. 10,250,899

STORING AND RETRIEVING HIGH BIT DEPTH IMAGE DATA

QUALCOMM Incorporated, S...

1. A method of accessing media data, the method comprising:
coding, by one or more processors of a device, the one or more processors implemented in circuitry, a plurality of bit length values for a plurality of block fixed length code length (bflc_len) values for a plurality of blocks of a tile or sub-tile of an image, the bit length values representing numbers of bits used to code the bflc_len values, and the bflc_len values representing numbers of bits used to code codewords representing residual values for pixels of the tile or sub-tile corresponding to the respective blocks;
coding, by the one or more processors, the bflc_len values for each of the plurality of blocks such that the bflc_len values have the numbers of bits indicated by the respective bit length values, wherein at least two of the bflc_len values have different numbers of bits;
coding, by the one or more processors, the codewords for each of the plurality of blocks such that the codewords have the numbers of bits indicated by the bflc_len values for corresponding blocks of the plurality of blocks; and
accessing, by the one or more processors, the bit length values, the bflc_len values, and the codewords in a memory of the device.
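A sketch of the two-level fixed-length scheme for one block of non-negative residuals (sign handling and the cross-block coding of bit length values are omitted for brevity):

```python
def encode_block(residuals):
    # bflc_len for the block: the bit width of its widest residual;
    # every codeword in the block is then written with exactly that
    # many bits, so no per-codeword length needs to be stored.
    bflc_len = max(max(r.bit_length(), 1) for r in residuals)
    codewords = [format(r, '0{}b'.format(bflc_len)) for r in residuals]
    # The block's bit length value records how many bits are needed to
    # code bflc_len itself; different blocks may use different widths.
    bit_length_value = max(bflc_len.bit_length(), 1)
    return bit_length_value, bflc_len, codewords
```

Blocks with small residuals thus get short bflc_len values and short codewords, which is where the compression comes from.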

US Pat. No. 10,250,898

DECODING DEVICE AND ENCODING DEVICE

Kabushiki Kaisha Toshiba,...

2. A decoding device comprising:
a receiver to acquire transmission information including encoded data, a first color-difference format, a second color-difference format, and filter information, the first color-difference format indicating a resolution of a color-difference component of the encoded data that is generated according to predetermined syntax, the second color-difference format indicating a resolution of a color-difference component used when reproducing a decoded image obtained by decoding the encoded data, the filter information being for identifying a filter that is used in converting a color-difference format of the decoded image;
a decoder to decode the encoded data included in the transmission information to obtain the decoded image; and
a converter to convert a color-difference format of the decoded image using the filter identified with the filter information, wherein
the first color-difference format is indicative of a resolution of a color-difference component of a 4:2:2 format or a 4:4:4 format, and the second color-difference format is indicative of a resolution of a color-difference component of a 4:2:0 format; or the first color-difference format is indicative of the resolution of the color-difference component of the 4:4:4 format, and the second color-difference format is indicative of the resolution of the color-difference component of the 4:2:0 format or the 4:2:2 format;
the filter information includes identification information of the filter; and
the converter converts the color-difference format of the decoded image by using at least one of a horizontal down-sampling filter and a vertical down-sampling filter in accordance with the identification information of the filter.

US Pat. No. 10,250,897

TILE ALIGNMENT SIGNALING AND CONFORMANCE CONSTRAINTS

SHARP KABUSHIKI KAISHA, ...

1. A method for decoding video comprising:
(a) receiving a video usability information syntax including a tile boundaries aligned flag for a direct reference layer of a current layer, the tile boundaries aligned flag not being received for a layer that is not a direct reference layer of the current layer; and
(b) decoding a picture of said current layer by using a decoded picture of said direct reference layer of said current layer, wherein
in a case that said tile boundaries aligned flag is equal to 1, when any two samples of said picture of said current layer belong to one tile, collocated samples of said decoded picture of said direct reference layer belong to one tile; and
in said case that said tile boundaries aligned flag is equal to 1, when any two samples of said picture of said current layer belong to different tiles, said collocated samples of said decoded picture of said direct reference layer belong to different tiles.
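The conformance constraint the flag asserts is an if-and-only-if on tile membership; a sketch of checking it, with tile maps given as sample-to-tile-index lookups (a representation chosen here for illustration):

```python
def tiles_aligned(cur_tile_of, ref_tile_of, sample_pairs):
    # tile_boundaries_aligned_flag == 1 promises: two samples of the
    # current-layer picture share a tile if and only if their collocated
    # samples in the direct reference layer share a tile.
    for a, b in sample_pairs:
        same_cur = cur_tile_of[a] == cur_tile_of[b]
        same_ref = ref_tile_of[a] == ref_tile_of[b]
        if same_cur != same_ref:
            return False
    return True
```

When the flag holds, a decoder can process a current-layer tile using only the collocated reference-layer tile, which enables tile-parallel inter-layer decoding.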

US Pat. No. 10,250,896

IMAGE COMPRESSING METHOD BASED ON JPEG-LS

VIA ALLIANCE SEMICONDUCTO...

1. An image compression method based on JPEG-LS performed by computer hardware, comprising:
dividing M×N pixels in a source image into k groups, wherein M, N, and k are integers larger than one, and each of the groups corresponds to a plurality of pixels among the M×N pixels;
performing a decorrelation procedure and a context modeling procedure for each of the pixels in ith group of the k groups;
not refreshing a compensation look-up table corresponding to the context modeling procedure before the decorrelation procedure and the context modeling procedure for the plurality of pixels in the ith group are accomplished; and
refreshing the compensation look-up table after the decorrelation procedure and the context modeling procedure for the plurality of pixels in the ith group are accomplished.
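The deferred-refresh control flow can be sketched with callbacks standing in for the per-pixel decorrelation/context-modelling work and the look-up table refresh:

```python
def compress(groups, process_pixel, refresh_lut):
    # Deferred refresh: the compensation look-up table is left untouched
    # while a group's pixels run decorrelation and context modelling, and
    # is refreshed only once the whole group is done. Removing the
    # per-pixel update dependency lets the pixels inside a group be
    # processed in parallel.
    for group in groups:
        for px in group:
            process_pixel(px)
        refresh_lut()

calls = []
compress([[1, 2], [3]],
         lambda px: calls.append(('pixel', px)),
         lambda: calls.append(('refresh',)))
# the refresh happens once per group, after all of its pixels
```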

US Pat. No. 10,250,895

DPB CAPACITY LIMITS

SHARP KABUSHIKI KAISHA, ...

1. A method for decoding a video bitstream comprising:
(a) receiving a first bitstream layer representative of a coded video sequence;
(b) receiving a second bitstream layer representative of said coded video sequence;
(c) receiving a first dimension syntax element associated with said first bitstream layer included within a first set of syntax elements;
(d) receiving a second dimension syntax element associated with said first bitstream layer included within said first set of syntax elements;
(e) receiving a third dimension syntax element associated with said second bitstream layer included within a second set of syntax elements;
(f) receiving a fourth dimension syntax element associated with said second bitstream layer included within said second set of syntax elements;
(g) determining a first size constraint on a first decoded picture buffer for said first bitstream layer using said first dimension syntax element associated with said first bitstream layer included within said first set of syntax elements;
(h) determining a second size constraint on a second decoded picture buffer for said second bitstream layer using said fourth dimension syntax element associated with said second bitstream layer included within said second set of syntax elements, and
(i) decoding at least one frame of said video bitstream using said first size constraint for said first bitstream layer and said second size constraint for said second bitstream layer.

US Pat. No. 10,250,894

SYSTEMS AND METHODS FOR PROVIDING TRANSCODED PORTIONS OF A VIDEO

GoPro, Inc., San Mateo, ...

1. A system that transcodes videos, the system comprising:
one or more physical computer processors configured by machine-readable instructions to:
obtain a video, the video having a time duration, the video including visual information defined by one or more electronic media files, the video stored within the one or more electronic media files in a first video format;
determine an initial portion of the time duration where the one or more electronic media files defining the visual information of the video are to be transcoded, such determination including:
determining whether the time duration is greater than a predefined threshold; and
in response to the time duration being greater than the predefined threshold, determining the initial portion to be an initial time duration that is less than the time duration;
generate one or more transcoded media files defining the visual information in the video during the initial portion, the initial portion of the video stored within the one or more transcoded media files in a second video format different from the first video format;
receive a request for the video from a client computing platform, the request including specification regarding compatibility of video format for playback associated with the client computing platform;
in response to receipt of the request, select the one or more transcoded media files for transmission to the client computing platform; and
effectuate transmission of the one or more transcoded media files to the client computing platform for display of the visual information defined by the one or more transcoded media files via the client computing platform.
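The threshold logic for choosing how much to transcode up front reduces to a few lines; the 60-second threshold and the choice of the threshold itself as the initial slice are illustrative (the claim only requires the initial portion to be shorter than the full duration):

```python
def initial_portion_s(duration_s, threshold_s=60):
    # Long videos: transcode only an initial slice up front, so playback
    # can start before the whole file is converted.
    if duration_s > threshold_s:
        return threshold_s
    # Short videos: transcode the whole duration.
    return duration_s
```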

US Pat. No. 10,250,893

METHOD AND DEVICE FOR ENCODING BOTH A HIGH-DYNAMIC RANGE FRAME AND AN IMPOSED LOW-DYNAMIC RANGE FRAME

INTERDIGITAL VC HOLDINGS,...

15. A device for encoding a frame whose pixel values belong to a high-dynamic range and an imposed frame whose pixel values belong to a lower-dynamic range, wherein the device comprises a processor configured to:
determine a backlight frame from the luminance component of the frame;
calculate a residual frame responsive to the frame and the backlight frame; and
predictive-encode the residual frame using a predictor of the residual frame calculated from an imposed frame, said imposed frame being a low-dynamic version of the frame to be encoded.
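A toy sketch of the pipeline on per-pixel luma values, assuming the residual is the HDR frame divided by the backlight frame and the imposed LDR frame is used directly as the predictor (both are illustrative simplifications of what the patent actually specifies):

```python
def encode_hdr(frame_luma, backlight, imposed_ldr):
    # Residual: the HDR frame expressed relative to its backlight frame.
    residual = [f / b for f, b in zip(frame_luma, backlight)]
    # Predictive encoding: only the error between the residual and the
    # imposed low-dynamic-range predictor needs to be coded.
    prediction_error = [r - p for r, p in zip(residual, imposed_ldr)]
    return residual, prediction_error
```

The closer the imposed LDR frame tracks the residual, the smaller the prediction error, so one coded stream serves both HDR reconstruction and direct LDR display.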

US Pat. No. 10,250,892

TECHNIQUES FOR NONLINEAR CHROMINANCE UPSAMPLING

NVIDIA CORPORATION, Sant...

1. A computer-implemented method for upsampling compressed video, the method comprising:
receiving an encoded video data stream;
identifying a first pixel within the encoded video data stream;
determining a first chrominance value associated with the first pixel;
computing a probability that a difference between a second chrominance value associated with a second pixel within a decoded video data stream and the first chrominance value is below a first threshold;
computing the second chrominance value based on the probability and the first chrominance value, wherein the second chrominance value represents a decoded chrominance value of the first pixel; and
generating at least the second pixel in the decoded video stream that includes the second chrominance value.

US Pat. No. 10,250,891

VIDEO CODING DEVICE, VIDEO CODING METHOD, VIDEO DECODING DEVICE AND VIDEO DECODING METHOD

FUJITSU LIMITED, Kawasak...

1. A video coding device comprising:
a mode determination circuit configured to calculate a first difference between a coding target image included in a video and a previous image that is one image previous to the coding target image, and to compare the first difference and a threshold;
a frame buffer configured to store a first decoded image of a coded image that is coded before the coding target image;
a video coding circuit configured to code the coding target image by inter prediction coding that uses the first decoded image as a reference image when the first difference is greater than the threshold;
a difference calculation circuit configured to calculate a second difference between the coding target image and a second decoded image of a coded image for the previous image when the previous image has been coded by video coding;
a still-image coding circuit configured to code the second difference or the coding target image by still-image coding when the first difference is smaller than the threshold, wherein the still-image coding circuit codes the second difference when the previous image has been coded by the video coding and the coding target image when the previous image has been coded by the still-image coding; and
an adder circuit configured to generate an addition result by adding a decoded difference of a coded second difference to the second decoded image and to output the addition result as a third decoded image to the frame buffer when the previous image has been coded by the video coding, and to output a decoded image of a coded image for the coding target image as the third decoded image to the frame buffer when the previous image has been coded by the still-image coding.

US Pat. No. 10,250,890

METHOD, APPARATUS AND SYSTEM FOR ENCODING AND DECODING THE SIGNIFICANCE MAP FOR RESIDUAL COEFFICIENTS OF A TRANSFORM UNIT

Canon Kabushiki Kaisha, ...

1. A method of decoding a bitstream, the method comprising:
receiving, from the bitstream, encoded data of residual coefficients for a transform unit represented as an 8×8 block, wherein the received encoded data includes significant coefficient group flags for indicating whether at least one of residual coefficients within a corresponding sub-block of the 8×8 block is significant;
determining a scan order of the significant coefficient group flags for the transform unit from a plurality of scan orders of the significant coefficient group flags according to an intra-prediction mode corresponding to the transform unit from a plurality of intra-prediction modes, wherein a first scan order of the significant coefficient group flags is determined if the intra-prediction mode corresponding to the transform unit is a first intra-prediction mode and a second scan order of the significant coefficient group flags is determined if the intra-prediction mode corresponding to the transform unit is a second intra-prediction mode different from the first intra-prediction mode, and the second scan order of the significant coefficient group flags is different from the first scan order of the significant coefficient group flags;
determining a scan order of the residual coefficients for the transform unit from a plurality of scan orders of the residual coefficients according to the intra-prediction mode corresponding to the transform unit from the plurality of intra-prediction modes, wherein a first scan order of the residual coefficients is determined if the intra-prediction mode corresponding to the transform unit is the first intra-prediction mode and a second scan order of the residual coefficients is determined if the intra-prediction mode corresponding to the transform unit is the second intra-prediction mode different from the first intra-prediction mode, and the second scan order of the residual coefficients is different from the first scan order of the residual coefficients;
determining the significant coefficient group flags corresponding to respective sub-blocks of the 8×8 block according to the determined scan order of the significant coefficient group flags; and
decoding residual coefficients of the transform unit according to the determined significant coefficient group flags and the determined scan order of the residual coefficients to generate at least one frame.

US Pat. No. 10,250,889

SYSTEM, METHOD AND COMPUTER READABLE MEDIUM FOR PROVIDING VISUAL CONTENT TO A USER

ARRIS Enterprises LLC, S...

1. A method for providing visual content to a user, the method comprising:
receiving, at a location remote from a receiving display device or decoder, information representative of an image, wherein said image is divisible into one or more portions each having a size smaller than the image;
processing the image in order to generate a plurality of macro-blocks;
compressing individually each of said macro-blocks that correspond to the image based on a plurality of resolutions, and storing each of the plurality of individually compressed macro-blocks that represent the entire image at each of the plurality of resolutions;
receiving a request for a first portion of the image, the request including coordinates of said first portion;
responsive to receipt of the request for the first portion of the image, said first portion defined by coordinates that are within coordinates of said image such that the first portion is geometrically smaller than said image and excludes at least a second portion of the image, processing only the compressed macro-blocks from storage that are within said coordinates defining the first portion of the image to generate an encoded MPEG compliant video stream including an I-picture that represents only said first portion and dependently decodable pictures having motion vectors equal to 0 indicating that a content of the I-picture is to be duplicated; and
transmitting to the user over a cable network the encoded MPEG compliant video stream representing only the first portion of the image for display.

US Pat. No. 10,250,888

ELECTRONIC DEVICE CONFIGURED TO NON-UNIFORMLY ENCODE/DECODE IMAGE DATA ACCORDING TO DISPLAY SHAPE

Samsung Electronics Co., ...

1. An electronic device comprising:
an operation processor configured to encode image data, the image data generated by capturing an object, the operation processor further configured to,
divide an image represented by the image data into a first region and a second region based on an information signal received from an external device, the information signal including information associated with a shape of a display region of the external device such that an outline of the first region varies depending on the shape of the display region,
encode first image data corresponding to the first region by a first encoding manner, and
encode second image data corresponding to the second region by a second encoding manner, the second encoding manner being different from the first encoding manner; and
a communication circuit configured to receive the information signal from the external device and provide the information signal to the operation processor,
wherein the first region includes a third region and a fourth region, the third region being entirely inside an outline of a display region that corresponds to an image to be displayed on the display region of the external device, the fourth region overlapping the outline of the display region, and
the operation processor is further configured to encode the first image data by encoding third image data corresponding to the third region and fourth image data corresponding to at least a portion of the fourth region by the first encoding manner, the third region encoded by a unit of processing block, and the fourth region encoded by a unit of sub block, the processing block divided into a plurality of sub blocks.

US Pat. No. 10,250,887

IMAGE DECODING METHOD, IMAGE CODING METHOD, IMAGE DECODING APPARATUS, IMAGE CODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS

SUN PATENT TRUST, New Yo...

1. An image decoding apparatus of decoding, on a block-by-block basis, image data included in a coded stream, the image decoding apparatus comprising:a processor; and
a non-transitory computer-readable medium storing thereon a computer program, which when executed by the processor, causes the processor to perform operations including:
selecting candidates for an intra prediction mode to be used for intra prediction for a decoding target block from a plurality of predetermined intra prediction modes, each of the plurality of predetermined intra prediction modes having a mode number;
deriving the selected candidates, the number of the candidates constantly being a predetermined fixed plural number, the predetermined fixed plural number being at least 2 and less than a number of the plurality of predetermined intra prediction modes;
making a candidates list which includes indices and the derived candidates, the derived candidates corresponding on a one-to-one basis with the indices, and a number of the indices being equal to the predetermined fixed plural number;
obtaining a coded flag indicating whether the intra prediction mode is inferred from a neighboring block or not;
decoding the coded flag to obtain a decoded flag;
when the decoded flag indicates that the intra prediction mode is inferred from the neighboring block,
(i) obtaining, from the coded stream, a coded specified index which specifies an index of one of the derived candidates as the intra prediction mode to be used for intra prediction for the decoding target block,
(ii) decoding the coded specified index to obtain a decoded specified index,
(iii) determining the one of the derived candidates using the decoded specified index, the index of the one of the derived candidates being specified by the decoded specified index in the candidates list, and
(iv) decoding the image data using the determined one of the derived candidates; and
when the decoded flag indicates that the intra prediction mode is not inferred from the neighboring block,
(i) obtaining a coded specified mode number from the coded stream,
(ii) decoding the coded specified mode number to obtain a decoded specified mode number, and
(iii) comparing the mode number of one of the derived candidates with the decoded specified mode number,
when the mode number of the one of the derived candidates is larger than the decoded specified mode number, determining the one of the derived candidates as the intra prediction mode to be used for intra prediction for the decoding target block, and decoding the image data using the determined one of the derived candidates, and
when the mode number of the one of the derived candidates is smaller than or equal to the decoded specified mode number, adding one to the decoded specified mode number, and decoding the image data using one of the plurality of predetermined intra prediction modes which is specified by the number obtained by adding one to the decoded specified mode number,
wherein the deriving includes:
deriving a first candidate for the intra prediction mode to be used for intra prediction for the decoding target block from an intra prediction mode used for intra prediction for each of adjacent blocks that are adjacent to the decoding target block; and
in a case that the number of the derived first candidates is smaller than the predetermined fixed plural number, further deriving a second candidate as a DC prediction mode to be used for intra prediction and a third candidate as a vertical (angular) prediction mode to be used for intra prediction.
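The mode-number remapping in the "not inferred" branch above follows the same pattern as HEVC's rem_intra_luma_pred_mode decoding. The sketch below generalizes it to a sorted candidate list; the function name and the loop form are illustrative assumptions, not the claim's exact wording.

```python
def remap_intra_mode(decoded_mode_number, candidate_modes):
    """For every candidate whose mode number is <= the decoded number,
    add one, so the decoded number skips over the candidate modes."""
    mode = decoded_mode_number
    for cand in sorted(candidate_modes):
        if cand <= mode:
            mode += 1
    return mode
```

Each candidate at or below the running value bumps it by one, so the final value indexes one of the non-candidate modes.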

US Pat. No. 10,250,886

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD AND MOVING PICTURE CODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture coding device adapted to code moving pictures in units of blocks obtained by partitioning each picture, comprising:a spatial merge candidate generation unit configured to derive spatial merge candidates from a first predefined number of blocks neighboring a prediction block subject to coding;
a temporal merge candidate generation unit configured to derive a temporal merge candidate from a block that exists at the same position as or near a prediction block subject to coding in a coded picture that is different from the prediction block subject to coding;
a merge candidate addition unit configured to add the spatial merge candidates and the temporal merge candidates to a merge candidate list;
a merge candidate supplying unit configured to add one or more merge candidates to the merge candidate list up to the predefined number of merge candidates as an upper limit when the number of merge candidates included in the merge candidate list is smaller than the predefined number of merge candidates;
a coding information selection unit configured to select a merge candidate from the merge candidates added to the merge candidate list;
a motion compensation prediction unit configured to perform inter prediction of the prediction block subject to coding by the merge candidate thus selected;
a first bitstream coding unit configured to set the number of merge candidates; and
a second bitstream coding unit configured to code information indicating indices of the merge candidates,
wherein the second bitstream coding unit derives the indices of the merge candidates based on the number of the merge candidates; and
the spatial merge candidate generation unit stops deriving the spatial merge candidates when the number of the derived spatial merge candidates reaches a second predefined number smaller than the first predefined number.
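The list assembly described above can be sketched as below. The limits (4 spatial candidates out of 5 neighbouring blocks, 5 total merge candidates) mirror common HEVC values but are assumptions here, and candidates are plain strings for clarity.

```python
MAX_SPATIAL = 4    # second predefined number (stop deriving here)
MAX_MERGE = 5      # predefined number of merge candidates (upper limit)

def build_merge_list(spatial_neighbours, temporal_candidate):
    merge_list = []
    for cand in spatial_neighbours:          # first predefined number of blocks
        if len(merge_list) == MAX_SPATIAL:   # stop early, per the claim
            break
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
    if temporal_candidate is not None:
        merge_list.append(temporal_candidate)
    while len(merge_list) < MAX_MERGE:       # supply e.g. zero-motion candidates
        merge_list.append("zero_mv")
    return merge_list[:MAX_MERGE]
```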

US Pat. No. 10,250,885

SYSTEM AND METHOD FOR INTRACODING VIDEO DATA

Intel Corporation, Santa...

1. A method of decoding video data in a computing system, the method comprising:decoding a present subblock of a video frame according to an intra prediction mode that specifies a predicted value of an upper right-hand corner pixel of the present subblock based at least in part on a weighted average of a plurality of pixel values comprising a value of a lower right-hand corner pixel of a vertically adjacent subblock, a value of a lower left-hand corner pixel of an upper-right diagonally adjacent subblock, a value of a pixel immediately horizontally-right adjacent to the lower left-hand corner pixel of the upper-right diagonally adjacent subblock, a value of a lower right-hand corner pixel of a horizontally adjacent subblock, a value of an upper right-hand corner pixel of a lower-left diagonally adjacent subblock, and a value of a pixel immediately vertically-down adjacent to the upper right-hand corner pixel of the lower-left diagonally adjacent subblock.
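The claim above specifies which six neighbouring pixel values enter the weighted average but not the weights themselves, so the sketch below assumes uniform weights purely for illustration; the parameter names are likewise assumptions.

```python
def predict_upper_right_corner(above_lr, above_right_ll, above_right_ll_right,
                               left_lr, below_left_ur, below_left_ur_down,
                               weights=(1, 1, 1, 1, 1, 1)):
    """Weighted average of the six neighbouring pixel values recited in the
    claim, predicting the upper right-hand corner pixel of the subblock."""
    vals = (above_lr, above_right_ll, above_right_ll_right,
            left_lr, below_left_ur, below_left_ur_down)
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)
```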

US Pat. No. 10,250,884

SYSTEMS AND METHODS FOR SIGNALING INFORMATION FOR LAYER SETS IN A PARAMETER SET

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising:obtaining an encoded video bitstream comprising one or more layer sets and one or more additional layer sets, wherein a layer set from the one or more layer sets includes one or more layers comprising a base layer and wherein an additional layer set from the one or more additional layer sets includes one or more layers not comprising a base layer, the encoded video bitstream including a video parameter set defining parameters of the encoded video bitstream, wherein the one or more layer sets are defined in a base part of the video parameter set, and wherein the one or more additional layer sets are defined in an extension part of the video parameter set; and
decoding one or more syntax elements from the video parameter set, the one or more syntax elements including rate information for the one or more layer sets defined in the base part of the video parameter set and for the one or more additional layer sets defined in the extension part of the video parameter set, wherein the rate information for the one or more layer sets and for the one or more additional layer sets is decoded based on a variable in the video parameter set, the variable indicating a total number of layer sets signaled in the base part of the video parameter set and the extension part of the video parameter set.

US Pat. No. 10,250,883

CHROMA QUANTIZATION IN VIDEO CODING

APPLE INC., Cupertino, C...

1. A non-transitory computer readable medium storing a program that is executable by a processing unit, the program comprising sets of instructions for:identifying two initial chroma quantization parameter (QP) offset values for two levels of a video coding hierarchy, each initial chroma QP offset value for specifying chroma QPs of video units encompassed by one level of the video coding hierarchy;
for each of a plurality of quantization groups, identifying an additional chroma QP offset value, each quantization group encompassing a plurality of video units, wherein different additional chroma QP offset values are identified for at least two quantization groups; and
computing, for each video unit in the plurality of quantization groups, a chroma QP value by adding (i) the initial chroma QP offset values that were identified for the levels of the video coding hierarchy encompassing the video unit and (ii) the additional chroma QP offset value that was identified for the quantization group encompassing the video unit.
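The per-unit computation above reduces to a sum of offsets. The base luma QP term in this sketch is an assumption for illustration; the claim itself only recites adding the hierarchy-level offsets and the per-quantization-group offset.

```python
def chroma_qp(base_qp, level_offsets, group_offset):
    """level_offsets: the two initial chroma QP offsets for the video coding
    hierarchy levels encompassing the unit; group_offset: the additional
    offset identified for the unit's quantization group."""
    return base_qp + sum(level_offsets) + group_offset
```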

US Pat. No. 10,250,882

CONTROL AND USE OF CHROMA QUANTIZATION PARAMETER VALUES

Microsoft Technology Lice...

1. A computing device that implements an image or video encoder adapted to perform operations comprising:encoding image or video content for which values of quantization parameter (QP) vary according to a relationship between a luma component and one or more chroma components, wherein the encoding includes:
determining a QP index from a luma component QP and a chroma component QP offset, wherein the chroma component QP offset incorporates a picture-level chroma QP offset and a slice-level chroma QP offset, and wherein the QP index is a variable qPI determined according to:
qPI=Clip3(a, b, QPY+qp_offset+slice_qp_delta), where QPY represents the luma component QP, qp_offset represents the picture-level chroma QP offset, slice_qp_delta represents the slice-level chroma QP offset, and Clip3(a, b, c) represents a function that clips the value of c to the range of a to b; and
mapping the QP index to a chroma component QP; and
outputting at least part of a bitstream including the encoded content, wherein the bitstream includes a flag in a picture parameter set that indicates presence of slice-level chroma QP offsets in slice headers.
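The qPI formula recited above, written out. Clip3 follows the definition given in the claim; the range bounds a and b and the default values here are parameters, not values fixed by the claim.

```python
def clip3(a, b, c):
    """Clip c to the inclusive range [a, b]."""
    return max(a, min(b, c))

def qp_index(qp_y, qp_offset, slice_qp_delta, a=0, b=57):
    # qPI = Clip3(a, b, QPY + qp_offset + slice_qp_delta)
    return clip3(a, b, qp_y + qp_offset + slice_qp_delta)
```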

US Pat. No. 10,250,881

VIDEO ENCODING/DECODING APPARATUS AND METHOD USING BATCH MODE TOGETHER WITH SKIP MODE

SK TELECOM CO., LTD., Se...

1. A video decoding method performed by a video decoding apparatus, the method comprising:reconstructing a syntax corresponding to at least one of a first block, a second block and a third block using a context-adaptive binary arithmetic coding (CABAC), wherein
the first block is generated by dividing a video frame to be decoded, and the syntax for the first block includes at least one of a SKIP flag indicating whether the first block is decoded in a SKIP mode, and a flag indicating whether an inter prediction mode is used or an intra prediction mode is used, wherein a differential motion vector is not encoded in the SKIP mode,
the second block corresponds to a unit of a prediction and is generated by dividing the first block, and the syntax for the second block includes a batch mode flag indicating whether the second block is decoded in a batch mode and a motion information index for selecting a motion information of the second block, wherein, in the batch mode, a motion information selected from adjacent blocks and a collocated block is used as the motion information of the second block, when the inter prediction mode is used, and
the third block corresponds to a unit of transformation and is generated by dividing the first block; and
reconstructing a prediction block corresponding to at least one of the first block, the second block and the third block,
wherein, when the inter prediction mode is used, the reconstructing the syntax includes:
deciding the first block according to a partition type information which indicates how the first block is derived by dividing a block, having a preset size, in a tree structure,
reconstructing the SKIP flag for the first block using the CABAC wherein a context number (ctxIdx) for the SKIP flag is selected by varying a ctx_inc value for the SKIP flag according to conditions of an upper side block and a left side block of the first block,
reconstructing the batch mode flag for the second block when the reconstructed SKIP flag for the first block does not correspond to the SKIP mode, and
reconstructing the motion information index for the second block and not reconstructing the differential motion vector, either when the SKIP flag of the first block corresponds to the SKIP mode or when the reconstructed batch mode flag corresponds to the batch mode, and
wherein when both of the upper side block and the left side block exist, the reconstructing the syntax sets:
the ctx_inc value as 2, when both of the SKIP flag of the upper side block and the SKIP flag of the left side block are 1;
the ctx_inc value as 1, when one of the SKIP flag of the upper side block and the SKIP flag of the left side block is 1; and
the ctx_inc value as 0, when both of the SKIP flag of the upper side block and the SKIP flag of the left side block are 0,
wherein, when the intra prediction mode is used, a prediction pixel in the prediction block is generated by averaging a horizontal prediction value and a vertical prediction value, and
wherein the horizontal prediction value is calculated by using a pixel adjacent to the left side of the prediction block and an upper-right pixel outside of the prediction block, and the vertical prediction value is calculated by using a pixel adjacent to the upper side of the prediction block and a lower-left pixel outside of the prediction block.
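The three-way ctx_inc rule for the SKIP flag spelled out above (both neighbours 1 gives 2, exactly one gives 1, both 0 gives 0) collapses to a sum when the flags are 0/1 values, for the case where both neighbouring blocks exist:

```python
def skip_flag_ctx_inc(upper_skip_flag, left_skip_flag):
    # both 1 -> 2, exactly one 1 -> 1, both 0 -> 0
    return upper_skip_flag + left_skip_flag
```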

US Pat. No. 10,250,880

VIDEO ENCODING AND DECODING METHODS AND DEVICE USING SAME

Electronics and Telecommu...

1. A method for decoding a video signal with a decoding apparatus, the method comprising:obtaining, by the decoding apparatus, the video signal relating to motion information;
determining, by the decoding apparatus, based on a merge flag obtained from the video signal, whether a merge mode is applied to a first prediction block included in a coding block, the merge flag indicating whether the merge mode is applied to the first prediction block, the coding block being partitioned to comprise a plurality of prediction blocks, the first prediction block being one of the prediction blocks;
when the merge flag indicates that the merge mode is applied to the first prediction block, determining a plurality of merge candidates of the first prediction block,
wherein when the coding block has a pre-defined size, the merge candidates are derived based on a position of the coding block, and
wherein when the coding block does not have the pre-defined size, the merge candidates are derived based on a position of the first prediction block;
deriving, by the decoding apparatus, the motion information of the first prediction block using the merge candidates;
obtaining, by the decoding apparatus, prediction samples of the first prediction block using the motion information; and
obtaining, by the decoding apparatus, reconstruction samples by using the prediction samples.

US Pat. No. 10,250,879

VIDEO ENCODING METHOD USING IN-LOOP FILTER PARAMETER PREDICTION AND APPARATUS THEREFOR, AND VIDEO DECODING METHOD AND APPARATUS THEREFOR

SAMSUNG ELECTRONICS CO., ...

1. A video encoding apparatus comprising:an encoder configured to generate encoded video data by encoding an input video;
a decoder configured to decode video data to which a filter to compensate for a pixel value is to be applied, wherein the video data is from the encoded video data;
a deblocking filter unit configured to apply, to the decoded video data, a deblocking filter to remove a block effect;
an adaptive loop filter (ALF) parameter predictor configured to generate an ALF filter parameter by using information of the decoded video data, wherein the ALF filter parameter is configured to be applied to an ALF filter to compensate for a value of a current pixel by using a value of a neighboring pixel adjacent to the current pixel and a filter coefficient with respect to the neighboring pixel;
a sample adaptive offset (SAO) filter unit configured to apply a SAO filter to the decoded video data, wherein the SAO filter compensates for a value of a current pixel by using at least one of an edge offset and a band offset;
an ALF filter unit configured to apply, by using the ALF filter parameter, the ALF filter to video data to which the SAO filter has been applied; and
an entropy encoder configured to perform entropy encoding on the ALF filter parameter,
wherein the ALF parameter predictor comprises a first ALF parameter predictor and a second ALF parameter predictor,
wherein the first ALF parameter predictor is configured to generate the ALF filter parameter by using the information of the decoded video data to which the deblocking filter has not been applied,
wherein the second ALF parameter predictor is configured to generate the ALF filter parameter by using the information of the decoded video data to which the deblocking filter has been applied and the SAO filter has not been applied, and
wherein the video encoding apparatus selects one of the first ALF parameter predictor and the second ALF parameter predictor to generate the ALF filter parameter according to a preset method.

US Pat. No. 10,250,878

METHOD FOR DETERMINING PREDICTOR BLOCKS FOR A SPATIALLY SCALABLE VIDEO CODEC

Huawei Technologies Co., ...

1. A method for determining predictor blocks of a first resolution layer image from blocks of a second resolution layer image in video encoding, wherein the method is performed by a spatially scalable video encoder, wherein a resolution of the first resolution layer image is higher than a resolution of the second resolution layer image, the method comprising:up-scaling each block of the second resolution layer image to a corresponding block of an up-scaled second resolution layer image by using a scale factor different from a ratio of the resolutions of the first resolution layer image and the second resolution layer image;
selecting a block of the up-scaled second resolution layer image among blocks of the up-scaled second resolution layer image surrounding the corresponding block of the up-scaled second resolution layer image as a predictor block of the first resolution layer image and
transmitting a bitstream, wherein the scale factor and vector information for indicating the selected block are carried in the bitstream.

US Pat. No. 10,250,877

METHOD AND DEVICE FOR CODING AN IMAGE BLOCK, CORRESPONDING DECODING METHOD AND DECODING DEVICE

INTERDIGITAL MADISON PATE...

7. A decoding device comprising at least one circuit configured to:access, from a decoded picture buffer, a reference image reconstructed at a size different from the size of a current image;
motion compensate a reference block of said reconstructed reference image by applying a single horizontal filter GFH and a single vertical filter GFv successively on the lines and on the columns of pixels of said reference block,
decode, for a current block of said current image, a residue block from a binary stream, and
reconstruct the current block from said residue block and from said motion compensated reference block,
wherein said single vertical filter GFv applied on a pixel s is such that GFv(s)=MCIFv(SCFv(s)), where MCIFv is a vertical motion compensation interpolation filter and SCFv is a vertical resampling filter, MCIFv and SCFv being applied jointly and wherein said single horizontal filter GFH applied on a pixel u is such that GFH(u)=MCIFH(SCFH(u)), where MCIFH is a horizontal motion compensation interpolation filter and SCFH is a horizontal resampling filter, MCIFH and SCFH being applied jointly and wherein no resampled version of said reconstructed reference image is stored in the decoded picture buffer.
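One way to realize the jointly applied filters above is to merge the resampling filter SCF and the motion-compensation interpolation filter MCIF into a single FIR kernel, since composing two linear 1-D filters is equivalent to convolving their kernels. This is a sketch of that standard construction, not the patent's specific filters; the kernel values in the test are illustrative.

```python
def compose_kernels(scf, mcif):
    """Convolve two 1-D FIR kernels into the single filter GF = MCIF(SCF(.))."""
    out = [0.0] * (len(scf) + len(mcif) - 1)
    for i, a in enumerate(scf):
        for j, b in enumerate(mcif):
            out[i + j] += a * b
    return out
```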

US Pat. No. 10,250,876

CODEWORD ASSIGNMENT FOR INTRA CHROMA MODE SIGNALLING FOR HEVC

SONY CORPORATION, Tokyo ...

1. An encoding device, comprising:circuitry configured to:
execute a binarization process on a same as luma intra prediction mode for a chroma component based on a codeword assignment,
wherein the same as luma intra prediction mode for the chroma component is same as an intra prediction mode for a luma component corresponding to the chroma component; and
assign the same as luma intra prediction mode for the chroma component to a codeword that is the shortest codeword among a plurality of codewords associated with a plurality of intra prediction modes for the chroma component.

US Pat. No. 10,250,875

DEVICE FOR DECODING A VIDEO BITSTREAM

Velos Media, LLC, Plano,...

1. An electronic device for receiving an encoded video bitstream, comprising:a decoder adapted to decode a video bitstream using a reference picture set by:
decoding first information in the video bitstream defining least significant bits (LSB) of a picture order count (POC) of a reference picture;
decoding second information in the video bitstream, wherein the second information is used to determine a value of most significant bits (MSB) of the POC of the reference picture;
determining third information which is used to determine whether the second information is included in the video bitstream;
checking if the second information is included in the video bitstream by using the third information;
decoding the reference picture set;
decoding the current picture to generate a current decoded picture by using inter prediction based on the reference picture set; and
storing the current decoded picture to be referred for future inter prediction, wherein:
the reference picture set is decoded by using at least the first information of the reference picture and the third information;
the first information of the reference picture is included in a slice header of the video bitstream;
the third information is included in the slice header at least after the first information of the reference picture; and
if the checking indicates that the second information is included in the video bitstream, then the second information is included in the slice header at least after the third information.
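The first information (POC LSB) and second information (POC MSB) above combine into a full picture order count along the lines of the usual HEVC-style split, sketched below; the LSB bit width is an assumed parameter, not a value fixed by the claim.

```python
def full_poc(poc_msb, poc_lsb, lsb_bits=8):
    """Combine the MSB cycle count and the LSB field into a full POC."""
    max_lsb = 1 << lsb_bits           # MaxPicOrderCntLsb
    assert 0 <= poc_lsb < max_lsb
    return poc_msb * max_lsb + poc_lsb
```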

US Pat. No. 10,250,874

METHOD FOR CODING SEQUENCE OF DIGITAL IMAGES

1. A method for encoding a sequence of digital images, the method comprising:predicting values of pixels in the sequence of digital images based on reconstructed values of pixels in image areas processed previously, the predicting comprising using a number of prediction modes; and
generating the encoded sequence of digital images, the generating comprising processing a prediction error, the prediction error being a difference between predicted values and original values of pixels,
wherein a preset prediction mode is an intra-prediction mode based on pixels of a single image and is performed pixel-wise, the preset prediction mode comprising:
comparing, for a region of pixels with reconstructed values in the single image and for a template of an image area, a first patch of pixels in the region that surround a single first pixel to be predicted based on the template with a plurality of second patches, each second patch of the plurality of second patches being assigned to a second pixel from a plurality of second pixels in the region and including pixels in the region that surround the second pixel based on the template, thereby determining a similarity measure for each second pixel describing the similarity between reconstructed values of the pixels of the second patch assigned to the respective second pixel and the reconstructed values of the pixels of the first patch; and
determining a predicted value of the single first pixel based on the values of one or more second pixels that have a highest similarity described by the similarity measure among all second pixels of the plurality of second pixels in the region.
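The pixel-wise template-matching prediction above can be sketched compactly: compare the first patch (around the pixel to predict) against patches around candidate second pixels, and predict from the best match. Summed squared differences as the (inverse) similarity measure and the flat-list patch representation are illustrative assumptions; the claim does not fix the measure.

```python
def predict_pixel(first_patch, candidates):
    """candidates: list of (second_patch, second_pixel_value) pairs, where
    each patch lists the reconstructed values selected by the template."""
    def dissimilarity(patch):
        return sum((a - b) ** 2 for a, b in zip(first_patch, patch))
    best_patch, best_value = min(candidates, key=lambda c: dissimilarity(c[0]))
    return best_value
```

A production codec would average the values of the several most similar second pixels rather than take only the single best, as the claim's "one or more second pixels" allows.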

US Pat. No. 10,250,873

AUTOMATIC DEVICE TESTING

Verizon Patent and Licens...

1. A network device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive an indication to perform a test of a capability of the network device to determine whether the network device can process data during times of a threshold demand for network resources by devices on a network;
perform, based on receiving the indication, an action to cause the network device to receive the data from a source of the data at a threshold data rate associated with the threshold demand for the network resources;
monitor, based on causing the network device to receive the data from the source of the data at the threshold data rate, a set of metrics related to the data or a performance of the network device during the times of the threshold demand for the network resources,
the set of metrics including at least one of:
a forward error correction (FEC)-related metric,
a pragmatic general multicast (PGM)-related metric, or
a channel occupancy-related metric;
perform an analysis of the set of metrics after monitoring the set of metrics;
identify a source of an error based on a result of performing the analysis of the set of metrics,
the source of the error being related to the data or the performance of the network device; and
perform another action related to fixing the source of the error.

US Pat. No. 10,250,872

METHOD, DEVICE, AND SYSTEM FOR TESTING VIDEO QUALITY

1. A communication apparatus comprising:a first communication terminal having a processor communicatively connected to non-transitory memory;
the first communication terminal configured to superimpose at least one code within at least one frame of a video to be sent to a second communication terminal and subsequently send the video with the at least one frame having the at least one superimposed code to the second communication terminal;
wherein the video with the at least one frame having the at least one superimposed code comprises at least one of:
a first frame comprising a mono-chromatic background with a superimposed static code,
a second frame comprising a multi-colored background with a superimposed static code,
a third frame comprising a moving multi-colored background with a superimposed code, and
a fourth frame comprising a mono-chromatic static background with a moving superimposed code.

US Pat. No. 10,250,871

SYSTEMS AND METHODS FOR DYNAMIC CALIBRATION OF ARRAY CAMERAS

FotoNation Limited, (IE)...

1. A method of dynamically generating geometric calibration data for an array of cameras, comprising:acquiring a set of images of a scene using a plurality of cameras, where the set of images comprises a reference image and an alternate view image;
detecting features in the set of images using a processor directed by an image processing application;
identifying within the alternate view image features corresponding to features detected within the reference image using a processor directed by an image processing application;
rectifying the set of images based upon a set of geometric calibration data using a processor directed by an image processing application;
determining residual vectors for geometric calibration data at locations where features are observed within the alternate view image based upon observed shifts in locations of features identified as corresponding in the reference image and the alternate view image using a processor directed by an image processing application;
determining updated geometric calibration data for a camera that captured the alternate view image based upon the residual vectors using a processor directed by an image processing application, wherein determining updated geometric calibration data comprises:
using at least an interpolation process to generate a residual vector calibration field from the residual vectors;
mapping the residual vector calibration field to a set of basis vectors; and
generating a denoised residual vector calibration field using a linear combination of less than the complete set of basis vectors; and
rectifying an image captured by the camera that captured the alternate view image based upon the updated geometric calibration data using a processor directed by an image processing application.
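The denoising step above (mapping the residual vector calibration field to basis vectors and reconstructing from fewer than the complete set) can be sketched as a truncated orthonormal projection. The orthonormal basis is an assumption; a real implementation might use a DCT or PCA basis over the 2-D field.

```python
def denoise(field, basis, keep):
    """field: flat list of residual components; basis: orthonormal vectors;
    keep: how many leading basis vectors to retain (keep < len(basis))."""
    coeffs = [sum(f * b for f, b in zip(field, vec)) for vec in basis]
    out = [0.0] * len(field)
    for c, vec in zip(coeffs[:keep], basis[:keep]):
        for i, b in enumerate(vec):
            out[i] += c * b     # reconstruct from the retained components only
    return out
```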

US Pat. No. 10,250,870

ADJUSTABLE VIRTUAL REALITY DEVICE CAPABLE OF ADJUSTING DISPLAY MODULES

Wistron Corporation, New...

1. An adjustable virtual reality device capable of adjusting display modules, the adjustable virtual reality device comprising:a housing;
a base disposed inside the housing;
a first display module movably disposed on the base to display a first image;
a first detecting assembly comprising a first image capturing unit to detect a first position of a first pupil of a first eyeball;
a second display module movably disposed on the base to display a second image;
a second detecting assembly comprising a second image capturing unit to detect a second position of a second pupil of a second eyeball;
at least one transverse driving module to drive the first display module and the second display module to move individually or synchronously along a transverse direction, the at least one transverse driving module comprising:
a transverse screw rod disposed on the base and arranged in the transverse direction;
a transverse guiding rod disposed on the base and parallel to the transverse screw rod, the first display module and the second display module being disposed on the transverse screw rod and the transverse guiding rod in a movable manner along the transverse direction; and
a transverse motor assembly coupled to the transverse screw rod; and
a control unit electrically connected to the transverse motor assembly of the at least one transverse driving module, the first image capturing unit, and the second image capturing unit, the control unit controlling the transverse motor assembly of the at least one transverse driving module to drive the transverse screw rod to drive the first display module and the second display module to move individually or synchronously along the transverse direction according to a relation between the first pupil and the second pupil based on the first position of the first pupil detected by the first image capturing unit and the second position of the second pupil detected by the second image capturing unit.

US Pat. No. 10,250,869

METHOD OF DRIVING A DISPLAY DEVICE OPERABLY SWITCHABLE BETWEEN 2D AND 3D DISPLAY MODES

AU OPTRONICS CORPORATION,...

1. A method of driving a display device operably switchable between a two-dimensional (2D) display mode and a three-dimensional (3D) display mode, the display device comprising:
a display panel; and
a liquid crystal lens disposed on the display panel, comprising:
a first substrate and a second substrate spaced apart from each other;
a liquid crystal layer disposed between the first substrate and the second substrate;
a plurality of first electrode structures, each first electrode structure comprising a plurality of first electrodes, a plurality of second electrodes, a plurality of third electrodes and a plurality of fourth electrodes, wherein the plurality of first electrodes and the plurality of second electrodes are disposed between the first substrate and the liquid crystal layer and spaced-apart and alternately arranged along a first transversal direction, and wherein the plurality of third electrodes and the plurality of fourth electrodes are disposed between the second substrate and the liquid crystal layer and spaced-apart and alternately arranged along the first transversal direction such that each of the first electrodes and the second electrodes and a corresponding one of the third electrodes and the fourth electrodes of each first electrode structure are aligned at a left tilted angle, and wherein each of the first electrodes and the second electrodes has a central portion and two side portions, and each of two adjacent electrodes of the third electrodes and the fourth electrodes define a space therebetween, such that for each of the first electrodes and the second electrodes, the central portion is overlapped with the space formed between two corresponding adjacent electrodes of the third electrodes and the fourth electrodes in a vertical projection direction, and the two side portions are respectively correspondingly overlapped with the two corresponding adjacent electrodes of the third electrodes and the fourth electrodes in the vertical projection direction; and
a plurality of second electrode structures, each second electrode structure comprising a plurality of first electrodes, a plurality of second electrodes, a plurality of third electrodes and a plurality of fourth electrodes, wherein the plurality of first electrodes and the plurality of second electrodes are disposed between the first substrate and the liquid crystal layer and spaced-apart and alternately arranged along the first transversal direction, and wherein the plurality of third electrodes and the plurality of fourth electrodes are disposed between the second substrate and the liquid crystal layer and spaced-apart and alternately arranged along the first transversal direction such that each of the first electrodes and the second electrodes and a corresponding one of the third electrodes and the fourth electrodes of each second electrode structure are aligned at a right tilted angle, and wherein each of the first electrodes and the second electrodes has a central portion and two side portions, and each of two adjacent electrodes of the third electrodes and the fourth electrodes define a space therebetween, such that for each of the first electrodes and the second electrodes, the central portion is overlapped with the space formed between two corresponding adjacent electrodes of the third electrodes and the fourth electrodes in the vertical projection direction, and the two side portions are respectively correspondingly overlapped with the two corresponding adjacent electrodes of the third electrodes and the fourth electrodes in the vertical projection direction,
wherein the plurality of first electrode structures and the plurality of second electrode structures are alternately arranged along a second transversal direction that is different from the first transversal direction, wherein each of the first and second transversal directions is parallel to the first and second substrates; and
the method comprising:
applying a first voltage to the first electrodes of the first electrode structures and the second electrode structures, a second voltage to the second electrodes of the first electrode structures and the second electrode structures, a third voltage to the third electrodes of the first electrode structures and the second electrode structures, and a fourth voltage to the fourth electrodes of the first electrode structures and the second electrode structures, respectively.

US Pat. No. 10,250,868

SYNCHRONIZING DATA STREAMS

Amazon Technologies, Inc....

1. A system comprising:
a pulse-width-modulation (PWM) unit configured to generate pulses;
a camera, coupled to the PWM unit and configured to acquire respective images of an environment in response to detecting respective pulses generated by the PWM unit;
one or more processors; and
one or more computer-readable media storing computer-executable instructions that, when executed on the one or more processors, cause the one or more processors to perform acts comprising:
determining a nominal frequency at which to operate the PWM unit;
calculating a value of a first expected timestamp based at least in part on the nominal frequency and a current time;
calculating a value of a second expected timestamp based at least in part on the nominal frequency and at least one of the first expected timestamp or the current time;
configuring the PWM unit to generate pulses at the nominal frequency;
receiving a value of a first recorded timestamp, the value of the first recorded timestamp corresponding to a first time at which a first pulse was generated by the PWM unit;
comparing the value of the first expected timestamp to the value of the first recorded timestamp to generate a first error;
calculating a first amount to adjust the nominal frequency based at least in part on the first error;
configuring the PWM unit to generate pulses at a first adjusted frequency, the first adjusted frequency comprising the nominal frequency adjusted by the first amount;
receiving a value of a second recorded timestamp, the value of the second recorded timestamp corresponding to a second time at which a second pulse was generated by the PWM unit;
comparing the value of the second expected timestamp to the value of the second recorded timestamp to generate a second error;
calculating a second amount to adjust the nominal frequency based at least in part on the second error; and
configuring the PWM unit to generate pulses at a second adjusted frequency, the second adjusted frequency comprising the nominal frequency adjusted by the second amount.
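The claim describes a feedback loop: compare an expected pulse timestamp with the recorded one, derive an error, and adjust the PWM frequency by an amount based on that error. A minimal sketch of one step of such a loop, where the proportional gain is an illustrative choice rather than anything recited in the claim:

```python
def adjusted_frequency(nominal_hz, expected_ts, recorded_ts, gain=0.5):
    """One step of the claimed loop: compare an expected pulse timestamp
    (seconds) with the recorded one and nudge the PWM frequency to
    compensate. Proportional control is an assumption of this sketch."""
    error = recorded_ts - expected_ts      # seconds; positive = pulse arrived late
    period = 1.0 / nominal_hz
    # A late pulse means the unit is effectively running slow, so raise
    # the frequency in proportion to the error measured in periods.
    correction = gain * error / period
    return nominal_hz * (1.0 + correction)
```

Repeating this for the second (and subsequent) timestamps yields the second adjusted frequency of the claim.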

US Pat. No. 10,250,867

MOBILE TERMINAL AND METHOD FOR CONTROLLING THE SAME

LG ELECTRONICS INC., Seo...

1. An electronic device comprising:
a light emitting unit comprising a plurality of light emitting elements, wherein the plurality of light emitting elements are grouped into a plurality of groups;
a camera configured to process an image of an object;
a controller operably connected to the light emitting unit and the camera and configured to control the light emitting unit to emit light to the object,
wherein the controller is configured to:
control the plurality of groups to emit light in a predetermined order while receiving an image of an object from the camera, and
determine depth information of the object using the image received via the camera based on the light emitted from the plurality of groups to the object in the predetermined order,
wherein the plurality of light emitting elements included in each of the plurality of groups are arranged to form different patterns such that all of the plurality of groups emit different light patterns.

US Pat. No. 10,250,866

SYSTEMS AND METHODS FOR CAPTURING LIGHT FIELD OF OBJECTS

GoPro, Inc., San Mateo, ...

1. A system for capturing light field of objects, the system comprising:
a set of light field sensors configured to generate light field output signals conveying light field information within fields of view of the set of light field sensors, the fields of view of the set of light field sensors inwardly directed such that the fields of view overlap over an overlap volume, the set of light field sensors surrounding the overlap volume, the generation of the light field output signals by the set of light field sensors characterized by a subpixel accuracy, the subpixel accuracy enabled by a physical link between the set of light field sensors, wherein the set of light field sensors includes:
a first light field sensor configured to generate first light field output signals conveying first light field information within a first field of view; and
a second light field sensor configured to generate second light field output signals conveying second light field information within a second field of view; and
one or more physical processors configured by machine-readable instructions to:
obtain first object light field information based on the first light field information, the first object light field information characterizing first light field emanating from an object entirely located within the overlap volume, wherein the first object light field information is determined based on distinguishing the first light field emanating from the object from other light field emanating from one or more other objects;
obtain second object light field information based on the second light field information, the second object light field information characterizing second light field emanating from the object entirely located within the overlap volume;
generate a combined object light field information based on the first object light field information and the second object light field information; and
present a view of the object on a display based on the combined object light field information, wherein presenting the view of the object on the display includes:
determining a location of view for the object; and
determining the view of the object based on the location of view for the object and the combined object light field information.

US Pat. No. 10,250,865

APPARATUS AND METHOD FOR DUAL IMAGE ACQUISITION

VISIONY CORPORATION, Dov...

1. A dual image capture assembly, comprising:
(a) a dual eyepiece adaptor apparatus including a first eyepiece adaptor and a second eyepiece adaptor, each of the first and second eyepiece adaptors having a lumen with an axial axis, a front end and a back end, and including a plurality of gripping elements movable toward and away from the axial axis to engage and disengage an outer wall of an eyepiece such that each of the first and second eyepiece adaptors can engage and disengage eyepieces of varying diameters, and
(b) a dual-camera apparatus, comprising:
(b1) a first camera component coupled with the back end of the first eyepiece adaptor, the first camera component comprising a first image capture device having an optical axis aligned with the lumen axis of the first eyepiece adaptor,
(b2) a second camera component coupled with the back end of the second eyepiece adaptor, the second camera component comprising a second image capture device having an optical axis aligned with the lumen axis of the second eyepiece adaptor,
(b3) a linker connecting the first and second camera components.

US Pat. No. 10,250,864

METHOD AND APPARATUS FOR GENERATING ENHANCED 3D-EFFECTS FOR REAL-TIME AND OFFLINE APPLICATIONS

VEFXi Corporation, North...

1. A method for adjusting and generating enhanced 3D-effects for real time and offline 2D to 3D image and video conversion applications consisting of:
(a) selectively controlling a depth location of a zero parallax plane within a depth field of an image scene to adjust parallax of objects in the image scene;
(b) selectively controlling a depth volume of objects in the image scene to one of either exaggerate or reduce 3D-effect of the image scene;
(c) selectively controlling a depth location of a segmentation plane within the depth field of the image scene, wherein said depth location is a non-zero depth location, dividing the objects in the image scene into a foreground group and a background group based on a location of the objects relative to the segmentation plane wherein an object of said foreground group is in said background group when said depth location of said segmentation plane is moved forward and wherein an object of said background group is in said foreground group when said depth location of said segmentation plane is moved backward, wherein said segmentation plane is moved from a zero location to a different location where as a result of moving said segmentation plane to said different location at least one of,
(i) objects that were in said foreground group when said segmentation plane was at said zero location are moved to said background group when said segmentation plane is moved to said different location, and;
(ii) objects that were in said background group when said segmentation plane was at said zero location are moved to said foreground group when said segmentation plane is moved to said different location;
(d) selectively increasing or decreasing depth volume of objects in the foreground group independently of selectively increasing or decreasing depth volume of objects in the background group, wherein said depth volume of objects in said foreground group is modified to change available foreground volume in which objects to be rendered are mapped, wherein said depth volume of objects in said background group is modified to change available background volume in which objects to be rendered are mapped,
(i) wherein objects that were in said foreground group when said segmentation plane was at said zero location that are moved to said background group when said segmentation plane is moved to said different location are said selectively increased or decreased in said depth volume as objects in said background group, and
(ii) wherein objects that were in said background group when said segmentation plane was at said zero location that are moved to said foreground group when said segmentation plane is moved to said different location are said selectively increased or decreased in said depth volume as objects in said foreground group;
(e) selectively increasing or decreasing depth separation of objects in the foreground group relative to the objects in the background group, where said separation includes both a step offset and a slope scaling, wherein said step offset and said slope scaling is relative to said available foreground volume being fixed, wherein said step offset and said slope scaling is relative to said available background volume being fixed, wherein objects in said foreground group and said background group include a continuous range of available depths prior to said selectively increasing or decreasing said depth separation of objects in said foreground group relative to said objects in said background group and wherein objects in said foreground group and said background group include a discontinuous range of available depths after said selectively increasing or decreasing said depth separation of objects in said foreground group relative to said objects in said background group, wherein said discontinuous range includes a prohibited range of depths within said continuous range of available depths being said step offset;
(f) generating an updated depth map file for a 2D-image based upon the controlling the depth location, the controlling the depth volume, the increasing and decreasing depth volume, and the increasing and decreasing depth separation;
(g) rendering an enhanced 3D-image using the updated depth map.
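Steps (c) through (e) above split the depth field at a segmentation plane, scale the foreground and background depth volumes independently, and then separate the groups by a step offset, leaving a prohibited band of depths between them. A minimal sketch of that per-sample transform, under the assumption (not stated in the claim) that larger depth values are nearer the viewer:

```python
def adjust_depth(depth_map, seg_plane, fg_scale, bg_scale, step_offset):
    """Sketch of steps (c)-(e): split depth samples into foreground and
    background at the segmentation plane, scale each group's depth
    volume independently (step (d)), then push the foreground away from
    the background by a step offset (step (e)), creating a prohibited
    range of depths. All parameter names are illustrative."""
    out = []
    for d in depth_map:
        if d >= seg_plane:   # foreground group
            out.append(seg_plane + (d - seg_plane) * fg_scale + step_offset)
        else:                # background group
            out.append(seg_plane - (seg_plane - d) * bg_scale)
    return out
```

Applying this to a 2D image's depth map produces the updated depth map of step (f), from which the enhanced 3D image is rendered.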

US Pat. No. 10,250,863

ELECTRONIC FLASH, ELECTRONIC CAMERA AND LIGHT EMITTING HEAD

FUJIFILM Corporation, To...

1. An electronic camera comprising:
an imager configured to receive ambient light around a subject to be imaged and output color image signals of the received ambient light around the subject;
a flash light source configured to emit a light, the flash light source adjusting a color temperature of the light to be emitted by a pulse width modulation control;
an input device configured to choose at least an automatic mode;
a processor configured to control the electronic camera, the processor performing a white balance control of the color image signals according to a color temperature of the ambient light determined according to the color image signals thereof, adjusting the color temperature of the light emitted from the flash light source to a color temperature that is closest to the color temperature of the ambient light, and controlling a light emitting time of the flash light source, in the automatic mode; and
a monitor configured to display an image according to the image signals subjected to the white balance control.
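In the automatic mode the processor picks, from the color temperatures the flash can produce, the one closest to the measured ambient color temperature. A minimal sketch of that selection, assuming a discrete set of achievable temperatures (the claim itself adjusts the flash continuously via pulse width modulation):

```python
def pick_flash_temperature(ambient_k, achievable_k):
    """Choose the achievable flash color temperature (in kelvin) closest
    to the measured ambient color temperature. The discrete candidate
    list is an assumption of this sketch."""
    return min(achievable_k, key=lambda k: abs(k - ambient_k))
```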

US Pat. No. 10,250,862

VIDEO PROCESSING DEVICE

ONKYO CORPORATION, Osaka...

1. A video processing device comprising:
a controller that obtains dynamic range of a display device;
an obtaining section that obtains content information; and
a video processor that converts a video signal based on the dynamic range of the display device that is obtained by the controller and the content information that is obtained by the obtaining section;
wherein the video processor converts the video signal so that minimum luminance of content that is included in the content information becomes minimum luminance of the dynamic range of the display device and maximum luminance of the content that is included in the content information becomes maximum luminance of the dynamic range of the display device; and
wherein the controller:
makes the video processor generate multiple ramp images or gray scale images that are different from each other in maximum value and minimum value,
sends the multiple ramp images or gray scale images to the display device in turn,
receives selection of any one of the multiple ramp images or gray scale images, and
obtains the maximum value and the minimum value of the ramp image or gray scale image of which selection is received as the dynamic range of the display device.
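The conversion recited above maps the content's minimum luminance onto the display's minimum and the content's maximum onto the display's maximum. A minimal sketch of that mapping as a linear remap; the linearity and the units are assumptions of this sketch, not requirements of the claim:

```python
def remap_luminance(value, content_min, content_max, display_min, display_max):
    """Linearly remap a luminance value so that content_min lands on
    display_min and content_max lands on display_max, as the claimed
    video processor does for the content's dynamic range."""
    t = (value - content_min) / (content_max - content_min)
    return display_min + t * (display_max - display_min)
```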

US Pat. No. 10,250,861

METHODS AND DEVICES FOR DETECTING OPEN AND/OR SHORTS CIRCUITS IN MEMS MICRO-MIRROR DEVICES

North Inc., Kitchener, O...

1. An apparatus comprising:
a microelectromechanical system (MEMS) mirror comprising:
a mirror; and
a conduction coil to conduct a current and apply a force to the mirror to oscillate the mirror about at least one axis;
a power supply circuit electrically coupled to the MEMS mirror, the power supply circuit to selectively apply voltage to a first side of the conduction coil or a second side of the conduction coil;
a current source electrically coupled to the power supply circuit; and
a comparator to compare a voltage at the first and second side of the conduction coil to a voltage across the current source to detect an open circuit in the MEMS mirror.

US Pat. No. 10,250,860

ELECTRONIC APPARATUS AND IMAGE DISPLAY METHOD

Alpine Electronics, Inc.,...

1. An electronic apparatus comprising:
display means configured to display a first image;
projecting means capable of displaying a virtual image of a second image at least in a first position near the display means or in a second position distant therefrom on a windshield of a vehicle by adjusting a focal length of the virtual image;
determining means configured to determine whether the virtual image of the second image is to be displayed in the first position or in the second position;
adjusting means configured to adjust the focal length of the virtual image in the projecting means on the basis of the determination by the determining means; and
cooperation determining means configured to determine whether the projecting means is to cooperate with the display means on the basis of correlation between the first image and the second image, wherein the first image and the second image have correlation with each other when at least part of the first image is common to the second image,
wherein the adjusting means adjusts the focal length of the virtual image when it is determined that the projecting means is to cooperate.

US Pat. No. 10,250,859

PROJECTOR

SEIKO EPSON CORPORATION, ...

1. A projector that projects an image on a projection surface, the projector comprising:
a projection lens;
a first imaging section that captures an image of the projection surface for long-distance imaging;
a second imaging section that captures an image of the projection surface for short-distance imaging;
a control section that causes
the first imaging section or the second imaging section to capture an image of the projection surface; and
an adjustment section that adjusts a projection image projected on the projection surface based on a captured image captured by the first imaging section or the second imaging section,
wherein the adjustment section adjusts the projection image projected on the projection surface based on the captured image captured by the first imaging section in accordance with distance information relating to a distance from the projector to the projection surface in a case where the distance information is greater than or equal to a predetermined threshold, and
the adjustment section adjusts the projection image projected on the projection surface based on the captured image captured by the second imaging section in a case where the distance information is smaller than the predetermined threshold.
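The selection rule above is a simple threshold on the projector-to-surface distance. A minimal sketch, where the threshold value and return labels are illustrative only:

```python
def select_imaging_section(distance_mm, threshold_mm=1500.0):
    """Pick which imaging section's captured image drives the adjustment:
    the first (long-distance) section at or above the threshold, the
    second (short-distance) section below it. The 1500 mm threshold is
    a placeholder, not a value from the claim."""
    if distance_mm >= threshold_mm:
        return "first (long-distance)"
    return "second (short-distance)"
```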

US Pat. No. 10,250,858

STAGE LAMP BASED ON TCP/IP PROTOCOL AND CONTROL SYSTEM THEREOF

1. A stage lamp based on the TCP/IP protocol, comprising a data parsing module, a drive module and a light emitting module;
the data parsing module receives a control instruction transmitted based on the TCP/IP protocol, and parses the control instruction to obtain a corresponding control signal; and
the drive module and the light emitting module respectively enable the stage lamp to perform a corresponding mechanical action and emit light of corresponding color and brightness;
wherein the data parsing module receives image data of a video image or a hand-drawn graphic or picture from a user, parses the image data, and then redisplays the image data on a wall, a floor or other specific place in a real-time and synchronous manner through outputting optical projection.

US Pat. No. 10,250,857

ELECTRONIC DEVICE AND METHOD

SAMSUNG ELECTRONICS CO., ...

17. A method of operating an electronic device having a projector to project content onto a projection surface with which the electronic device is in contact, the method comprising:
determining whether the electronic device is in contact with the projection surface on which projected content is displayed by determining a distance between the electronic device and the projection surface,
in response to the determination that the distance between the electronic device and the projection surface is greater than zero such that the electronic device is not in contact with the projection surface, controlling the projector not to project content, and
in response to the determination that the distance between the electronic device and the projection surface is zero such that the electronic device is in contact with the projection surface, controlling the projector to project an indicator to inform a user of a position of a projection image to be output on the projection surface.

US Pat. No. 10,250,856

SYSTEMS, DEVICES, AND METHODS FOR LASER PROJECTORS

North Inc., Kitchener (C...

1. A method of operating a laser projector, the method comprising:
providing power to at least one laser diode by a power source electrically coupled to the at least one laser diode;
generating a laser light by the at least one laser diode;
splitting the laser light into a first portion and a second portion by a beam splitter;
directing the first portion of the laser light from the at least one laser diode along a first optical path towards a photodetector by the beam splitter;
directing the second portion of the laser light from the at least one laser diode along a second optical path towards an output of the laser projector by the beam splitter;
detecting the first portion of the laser light by the photodetector;
outputting a signal by the photodetector in response to detecting the first portion of the laser light by the photodetector, the signal indicative of a power of the laser light generated by the at least one laser diode;
receiving the signal from the photodetector by a laser safety circuit communicatively coupled to the photodetector, wherein the laser safety circuit includes a switch that mediates the electrical coupling between the power source and the at least one laser diode; and
in response to the signal from the photodetector indicating that the power of the laser light generated by the at least one laser diode exceeds a threshold, interrupting, by the switch, a supply of power to the at least one laser diode from the power source.
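The safety behavior claimed above is an interlock: the switch that mediates the electrical coupling opens as soon as the photodetector signal indicates the laser power exceeds the threshold. A minimal sketch of that decision, with units and names being illustrative assumptions:

```python
def safety_switch_closed(sensed_power_mw, threshold_mw):
    """Claimed interlock behavior: return True while power may flow to
    the laser diode, False once the sensed power exceeds the safety
    threshold (at which point the switch interrupts the supply).
    The milliwatt units are an assumption of this sketch."""
    return sensed_power_mw <= threshold_mw
```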

US Pat. No. 10,250,855

VIDEO DISPLAY APPARATUS, VIDEO DISPLAY SYSTEM, AND LUMINANCE ADJUSTING METHOD OF VIDEO DISPLAY APPARATUS

PANASONIC INTELLECTUAL PR...

1. A display apparatus functioning as a master display apparatus connected to first and second slave display apparatuses functioning as slave display apparatuses in order to display on one screen one image including a first image from the master display apparatus, a second image from the first slave display apparatus, and a third image from the second slave display apparatus,
the master display apparatus being connected to upstream of the first slave display apparatus, and being connected to downstream of the second slave display apparatus to form a ring single continuous pathway for data communication through the master display apparatus, the first slave display apparatus, and the second slave display apparatus,
the master display apparatus comprising:
a light source;
a video signal receiver that receives a first video signal;
an image generator that modulates light from the light source based on the first video signal to generate image light which forms the first image to be projected on the screen;
a controller that obtains, based on the first video signal, first video characteristic data indicating a characteristic of the first video signal;
a hardware data outputting unit that outputs the first video characteristic data to the first slave display apparatus;
a hardware data receiving unit that receives a cumulative value from the second slave display apparatus, the cumulative value being obtained by, through the ring single continuous pathway,
(1) in the first slave display apparatus, adding second video characteristic data indicating a characteristic of a second video signal received by the first slave display apparatus to the first video characteristic data to obtain third video characteristic data to be sent to the second slave display apparatus, and
(2) in the second slave display apparatus, adding fourth video characteristic data indicating a characteristic of a third video signal received by the second slave display apparatus to the third video characteristic data to obtain the cumulative value; and
a controller that calculates a common video setting value from the cumulative value for commonly controlling the light source of the master display apparatus to generate the first image from the first video signal, a light source of the first slave display apparatus to generate the second image from the second video signal, and a light source of the second slave display apparatus to generate the third image from the third video signal, wherein:
the first, second, and fourth video characteristic data are average luminance information respectively obtained from the first, second, and third video signals,
the common video setting value is common average luminance information obtained by dividing the cumulative value of the average luminance information of the master, first slave and second slave apparatuses by a cumulative value of effective pixel numbers of the image generator of the master display apparatus, an image generator of the first slave display apparatus, and an image generator of the second slave display apparatus, and
the controller of the master display apparatus controls the light source of the master display apparatus, a controller of the first slave display apparatus controls a light source of the first slave display apparatus, and a controller of the second slave display apparatus controls a light source of the second slave display apparatus, based on the common average luminance information.
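The common setting value defined above is the cumulative average-luminance data of the three apparatuses divided by their cumulative effective pixel counts. A minimal sketch of that calculation (names are illustrative):

```python
def common_average_luminance(luminance_sums, effective_pixel_counts):
    """Compute the common average luminance information as claimed: the
    cumulative luminance data of the master, first slave, and second
    slave divided by the cumulative effective pixel count of their
    image generators. Each light source is then controlled from this
    one shared value."""
    return sum(luminance_sums) / sum(effective_pixel_counts)
```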

US Pat. No. 10,250,854

IMAGE PROJECTION APPARATUS, AND METHOD OF CONTROLLING IMAGE PROCESSING

Ricoh Company, Ltd., Tok...

1. An image projection apparatus comprising:
an image generation element implemented by circuitry and configured to generate an image using light emitted from a light source;
a shift unit implemented by the circuitry and configured to shift a position of the image generation element with a given cycle;
a projection control unit implemented by the circuitry and configured to control projection of the image under a plurality of modes, the modes including a first mode in which the image is projected without shifting the position of the image generation element by the shift unit, and a second mode in which the image is projected while shifting the position of the image generation element by the shift unit; and
a detector implemented by the circuitry and configured to detect whether the image generation element operates normally by detecting a position of the shift unit,
wherein when the detector detects that the image generation element does not operate normally under the second mode, the projection control unit stops image processing being performed, and switches the second mode to the first mode.

US Pat. No. 10,250,853

WIRELESS ENDOSCOPE SYSTEM, ENDOSCOPE, DISPLAY DEVICE, IMAGE TRANSMISSION METHOD, IMAGE DISPLAY METHOD, AND PROGRAM

OLYMPUS CORPORATION, Tok...

1. A wireless endoscope system, comprising:
an imaging unit that captures an image and generates imaging data;
a freeze-instructing unit that generates a still image display signal relevant to an execution of a still image display;
a data storage unit that stores the imaging data as storage data at a time at which the still image display signal is switched from a first state indicating an execution of a moving image display to a second state indicating the execution of the still image display;
a moving image-compressing unit that performs a moving image-compressing process on the imaging data and generates moving image data;
a still image-compressing unit that performs a still image-compressing process on the storage data and generates a plurality of pieces of still image data;
an image quality control unit that controls image quality of the plurality of pieces of still image data such that image quality of a piece of still image data generated later among the plurality of pieces of still image data becomes higher;
a data-selecting unit that selects the moving image data when the still image display signal is in the first state, and sequentially selects the plurality of pieces of still image data in an order in which the plurality of pieces of still image data are generated when the still image display signal is in the second state;
a transmission unit that transmits the moving image data or the plurality of pieces of still image data selected by the data-selecting unit in a wireless manner;
a reception unit that receives the moving image data or the plurality of pieces of still image data transmitted from the transmission unit in a wireless manner;
an image-decompressing unit that performs a decompressing process on the moving image data or the plurality of pieces of still image data received by the reception unit and generates display data, the image-decompressing unit generating a plurality of pieces of the display data in an order in which the plurality of pieces of still image data are generated in the decompressing process on the plurality of pieces of still image data;
an image display unit that performs a display process based on the display data;
a target image-designating unit that designates the display data as target image data when the image quality of the piece of still image data is equal to or higher than a predetermined image quality;
an image storage unit that stores the display data designated as the target image data; and
a high image quality requesting unit that generates a high definition request signal relevant to a request for a piece of still image data with a predetermined highest image quality,
wherein the image quality control unit sets the image quality of the piece of still image data to the highest image quality when the still image display signal is in the second state and the high definition request signal is in a third state indicating a request for still image data with the highest image quality before a final piece of still image data among the plurality of pieces of still image data is generated, and
the image quality control unit gradually changes the image quality of the plurality of pieces of still image data such that the image quality of the piece of still image data generated later becomes higher and image quality of a piece of still image data generated finally becomes the highest image quality when the still image display signal is in the second state and the high definition request signal is in a fourth state other than the third state before the final piece of still image data among the plurality of pieces of still image data is generated.
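The quality-ramp behavior claimed above can be sketched as follows. This is an illustrative sketch, not the patented implementation: a controller that emits a compression-quality level for each successive still image, jumping straight to the highest quality when a high-definition request is pending and otherwise ramping up so the final image reaches the highest quality. The names `HIGHEST_QUALITY` and the starting value of 40 are assumptions for this example.

```python
HIGHEST_QUALITY = 100  # assumed "predetermined highest image quality"

def still_image_qualities(num_images, high_definition_requested):
    """Return a quality level for each of `num_images` still images.

    With a pending high-definition request (the claim's "third state"),
    every image is produced at the highest quality; otherwise quality
    rises gradually so that later images are better and the final image
    reaches the highest quality (the "fourth state" behavior).
    """
    if high_definition_requested:
        return [HIGHEST_QUALITY] * num_images
    start = 40  # assumed initial quality for the first (fastest) image
    step = (HIGHEST_QUALITY - start) / (num_images - 1) if num_images > 1 else 0
    return [round(start + step * i) for i in range(num_images)]

print(still_image_qualities(4, False))  # → [40, 60, 80, 100]
print(still_image_qualities(4, True))   # → [100, 100, 100, 100]
```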

US Pat. No. 10,250,852

AUTOMATIC SENSING RF COMBINER

Steiner Enterprises, Laf...

1. A combiner for receiving signals from multiple sources, including at least an antenna for receiving one or more over-the-air signals, a cable television signal source (CATV source) and at least two satellite antennae, the combiner comprising:
multiple inputs for connecting to each of the antenna, CATV source and at least two satellite antennae;
one or more outputs for coupling to a corresponding receiver;
a printed circuit board including:
a conflict module connected to the inputs for the at least two satellite antennae, the conflict module operable to detect the connection of a satellite receiver to an active one of the inputs and to disable the other inputs for the at least two satellite antennae in response thereto;
a first signal combiner connected to the input for the over-the-air antenna and the CATV source and providing an output signal;
a second signal combiner receiving the output signal from the first signal combiner and connected to the at least one satellite antenna to receive the satellite signal, and providing an output signal to the one or more outputs;
a signal processor connected between the input for the over-the-air antenna and the first signal combiner and configured to operate on the antenna signal received at the input before the antenna signals reach the first signal combiner; and
a logic controller connected between the input for the CATV source and the signal processor, the logic controller configured and operable to control the signal processor to operate on the antenna signal when a signal is detected at the input for the CATV source, and operable to control the signal processor to allow the antenna signal to pass to the first signal combiner when no signal is detected at the input for the CATV source, thereby controlling whether the over-the-air signal passes to the first signal combiner; and
a power input for connecting the printed circuit board to a source of electrical power.

US Pat. No. 10,250,851

VIDEO FEEDS IN COLLABORATION ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method comprising:
obtaining respective video feeds of respective participants of a group of video conference participants, wherein the group of video conference participants includes one or more presenter participant and one or more viewer participant;
examining data of the respective video feeds to determine a current group aggregate sentiment output for a video conference, wherein the examining includes subjecting data of feeds of the respective video feeds to sentiment analysis that includes processing of facial feature representing video data of the feeds; and
presenting a video conference view to one or more participant of the group of video conference participants based on the group aggregate sentiment output, wherein the presenting a video conference view to one or more participant of the group of video conference participants includes presenting a first video conference view to a presenter of the video conference and presenting a second video conference view to a viewer of the video conference, wherein the first video conference view presents N video feeds representing certain one or more viewers of the video conference, the N video feeds being selected on the basis of the certain one or more viewers currently exhibiting sentiments representative of the current group aggregate sentiment output for the video conference, wherein the second video conference view presents M video feeds representing certain two or more viewers of the video conference, the M video feeds being selected on the basis of the two or more viewers currently exhibiting sentiments representative of the current group aggregate sentiment output for the video conference, wherein N is less than M and wherein the certain one or more viewers and the certain two or more viewers have zero to N viewers in common.
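The feed-selection step above can be sketched in a few lines. This is an illustrative sketch, not IBM's implementation: the N presenter-view feeds and M viewer-view feeds are chosen by how closely each viewer's current sentiment matches the group aggregate. Sentiment scores are assumed to be floats in [-1, 1] produced upstream by facial-feature analysis.

```python
def aggregate_sentiment(scores):
    """Current group aggregate sentiment: mean of per-viewer scores."""
    return sum(scores.values()) / len(scores)

def select_feeds(scores, count):
    """Return `count` participant ids whose sentiment is most
    representative of (closest to) the group aggregate sentiment."""
    target = aggregate_sentiment(scores)
    return sorted(scores, key=lambda pid: abs(scores[pid] - target))[:count]

viewers = {"alice": 0.8, "bob": 0.1, "carol": 0.3, "dan": -0.4}
presenter_view = select_feeds(viewers, 1)  # N = 1 feed for the presenter
viewer_view = select_feeds(viewers, 3)     # M = 3 feeds, so N < M
print(presenter_view, viewer_view)
```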

US Pat. No. 10,250,850

COMMUNICATION CONTROL METHOD, COMMUNICATION CONTROL APPARATUS, TELEPRESENCE ROBOT, AND RECORDING MEDIUM STORING A PROGRAM

PANASONIC INTELLECTUAL PR...

1. A communication control method of a communication control apparatus that controls a first communication apparatus used by a first user inside a first space, and one or a plurality of self-propelled second communication apparatus placed inside the first space,
the communication control apparatus being provided with a position management database in which position information indicating a current position of each of the one or plurality of second communication apparatus is registered,
the method comprising:
receiving a private conversation request that requests a private conversation from the first communication apparatus;
acquiring a position of the first user when the private conversation request is input into the first communication apparatus by the first user;
setting a region of fixed range based on the position of the first user as a first conversation-listening area;
detecting, based on the position management database, a second communication apparatus positioned inside the first conversation-listening area from among the one or plurality of second communication apparatus as a first evacuation communication apparatus; and
transmitting, to the first evacuation communication apparatus, an evacuation order causing the first evacuation communication apparatus to evacuate to an outside of the first conversation-listening area.
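The detection step above can be sketched as a distance query over the position management database. This is an illustrative sketch, not Panasonic's implementation: positions are assumed 2-D coordinates and the conversation-listening area is assumed to be a circle of fixed radius around the requesting user.

```python
import math

def find_evacuation_targets(user_pos, robot_positions, listening_radius):
    """Return ids of second communication apparatus inside the
    conversation-listening area centered on the user's position."""
    ux, uy = user_pos
    return [rid for rid, (rx, ry) in robot_positions.items()
            if math.hypot(rx - ux, ry - uy) <= listening_radius]

# Assumed position management database: robot id -> current (x, y)
position_db = {"robot-1": (1.0, 1.0), "robot-2": (8.0, 0.0)}
targets = find_evacuation_targets((0.0, 0.0), position_db, 3.0)
print(targets)  # → ['robot-1']; an evacuation order would be sent to it
```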

US Pat. No. 10,250,849

DYNAMIC SPEAKER SELECTION AND LIVE STREAM DELIVERY FOR MULTI-PARTY CONFERENCING

Akamai Technologies, Inc....

1. Apparatus for multi-party videoconferencing, comprising:
one or more hardware processors; and
computer memory holding computer program instructions configured to be executed by the processors to perform a set of operations, comprising:
for each of a set of participants in a multi-party videoconference, obtaining information about a relevance of a participant's stream;
based on the information, determining which participant stream has a given relevance relative to one or more other participant streams;
modifying a presentation characteristic of one or more of the participant streams based on the determination of which participant stream has a given relevance relative to one or more other participant streams; and
delivering the participant streams via an overlay network, wherein a participant stream that is determined to be more relevant than another participant stream is afforded increased bandwidth and resource allocation by the overlay network during delivery.
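The relevance-weighted delivery step can be sketched as a bandwidth split. This is an illustrative sketch, not Akamai's implementation: relevance scores (e.g. recent speaking activity) and the 2× boost for the most relevant stream are assumptions for this example.

```python
def allocate_bandwidth(relevance, total_kbps, boost=2.0):
    """Give the most relevant stream `boost` times the weight of the
    others when dividing the overlay network's delivery bandwidth."""
    top = max(relevance, key=relevance.get)
    weights = {pid: (boost if pid == top else 1.0) for pid in relevance}
    total_weight = sum(weights.values())
    return {pid: round(total_kbps * w / total_weight)
            for pid, w in weights.items()}

scores = {"ann": 0.9, "ben": 0.2, "cam": 0.1}  # ann is currently speaking
print(allocate_bandwidth(scores, 4000))  # → {'ann': 2000, 'ben': 1000, 'cam': 1000}
```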

US Pat. No. 10,250,848

POSITIONAL CONTROLLED MUTING

Avaya Inc., Santa Clara,...

1. A method comprising:
establishing a collaboration interaction among a plurality of communication devices, each of the plurality of communication devices corresponding to one of a plurality of participants in the collaboration interaction, the collaboration interaction comprising at least one media channel via which media signals received from any one of the plurality of communication devices can be sent to the remaining ones of the plurality of communication devices;
sending a first instruction to a first communication device of the plurality of communication devices that causes the first communication device to display, via a graphical user interface of the first communication device, an interaction workspace for the collaboration interaction comprising a plurality of icons distributed between a first area for the collaboration interaction having a first muting behavior and a second area for the collaboration interaction having a second muting behavior, each of the plurality of icons corresponding to one of the plurality of participants;
applying the first muting behavior for the collaboration interaction when each one of the plurality of communication devices corresponding to the ones of the plurality of icons is displayed in the first area for the collaboration interaction and the second muting behavior when each one of the plurality of communication devices corresponding to the ones of the plurality of icons is displayed in the second area for the collaboration interaction, and
receiving a second instruction from the first communication device that indicates that an individual participant of the plurality of participants has dragged a texting tool onto an icon representing the individual participant to send a text message to every other participant in the collaboration interaction.

US Pat. No. 10,250,847

VIDEO ENDPOINTS AND RELATED METHODS FOR TRANSMITTING STORED TEXT TO OTHER VIDEO ENDPOINTS

SORENSON IP HOLDINGS LLC,...

1. A video endpoint, comprising:
video equipment configured to enable a user to engage in video communication sessions with at least another video endpoint; and
a control circuit operably coupled with the video equipment, the control circuit configured to:
manage a database of text strings that are stored in the database prior to the video communication sessions and accessible to the user during a first video communication session;
display a list of the text strings to the user during the first video communication session;
select a first text string from among the text strings stored in the database responsive to an input received from the user during the first video communication session without requiring the user to input individual characters of the selected text string during the first video communication session;
transmit the selected first text string to the another video endpoint during the first video communication session;
communicate with a remote server including a data storage device comprising the database for the video endpoint to remotely access the text strings; and
access the database from the remote server as a shared database such that changes made to the text strings stored in the database are available to other video communication devices that are operated by the same user.

US Pat. No. 10,250,846

SYSTEMS AND METHODS FOR IMPROVED VIDEO CALL HANDLING

T-Mobile USA, Inc., Bell...

1. A user equipment (UE) comprising:
a display to display a graphical user interface (GUI) comprising at least a video window, a subtitle window, and a call log for displaying text that previously appeared in the subtitle window and for displaying textual communications between users;
one or more input devices to receive inputs from a user;
one or more transceivers to send and receive one or more wireless transmissions;
one or more processors in communication with at least the display, the one or more transceivers, and the one or more input devices; and
memory storing computer-executable instructions that, when executed, cause the one or more processors to:
receive, at the one or more transceivers, a video call from a caller's UE;
send, with the one or more transceivers, a first audio file to a voice recognition server (VRS), the first audio file containing data related to a first part of an audio portion of the video call;
receive, with the one or more transceivers, a first text file from the VRS, the first text file comprising text data related to the first part of the audio portion of the video call;
display, on the display, the text related to the first part of the audio portion of the video call in the subtitle window of the GUI;
receive, from the one or more input devices, a plurality of alphanumeric characters, symbols, or both from the user; the plurality of alphanumeric characters, symbols, or both constituting a textual message for communication between users; and
display, in the call log of the GUI of the display, a type identifier indicating a type of textual communication.

US Pat. No. 10,250,845

REMOTE COLLABORATION SYSTEM WITH PROJECTOR-CAMERA BASED ROBOT DEVICE AND HEAD MOUNTED DISPLAY, AND REMOTE INTERACTION METHOD USING THE SAME

Korea Institute of Scienc...

1. A remote interaction method for providing a remote collaboration system, the system comprising a robot device and a head mounted display, the robot device comprising a projector, a panoramic camera and a high resolution camera, and the robot device and the head mounted display being located spaced apart from each other, the method comprising:
in response to receipt of a communication request, establishing a communication between the robot device and the head mounted display;
observing, by the head mounted display, a remote space based on first image information and second image information, the first image information, corresponding to panoramic image of the remote space, being collected by the panoramic camera, the second image information, corresponding to a high resolution image or a magnified image of a region in the remote space, being collected by the high resolution camera, and the first and second image information being transmitted from the robot device, wherein
observing the remote space is performed by following steps:
displaying, by the head mounted display, a panoramic image of the remote space based on the first image information,
in response to detection of a screen magnifying interaction while the head mounted display is displaying the panoramic image, displaying, by the head mounted display, the panoramic image and a magnified image of a first region in the remote space together based on the first image information and the second image information,
in response to detection of the screen magnifying interaction while the head mounted display is displaying the panoramic image and the magnified image together, displaying, by the head mounted display, the magnified image based on the second image information,
in response to detection of a screen demagnifying interaction, while the head mounted display is displaying the panoramic image and the magnified image together, displaying, by the head mounted display, the panoramic image based on the first image information, and
in response to detection of the screen demagnifying interaction, while the head mounted display is displaying the magnified image, displaying, by the head mounted display, the panoramic image and the magnified image together based on the first image information and the second image information;
searching, by the head mounted display, necessary information to be provided to the remote space based on a result of observing the remote space; and
providing, by the robot device, the necessary information received from the head mounted display to the remote space using the projector.

US Pat. No. 10,250,844

SIGNAL PROCESSING DEVICE AND METHOD

SONY SEMICONDUCTOR SOLUTI...

1. A signal processing device, comprising:
circuitry configured to:
provide a control signal to control an imaging device connected with the signal processing device via a cable,
wherein the control signal is provided at a frequency different from a reference frequency of a video signal; and
transmit the control signal to the imaging device via the cable in a time when a signal in an active pixel region is not transmitted,
wherein the control signal is adjusted on at least one of a transmission frequency or a signal level of the control signal, and
wherein the control signal is provided at a frequency different from a reference frequency of a color difference signal of the video signal.

US Pat. No. 10,250,843

IMAGING APPARATUS AND INFORMATION PROCESSING APPARATUS

SONY CORPORATION, Tokyo ...

1. An imaging apparatus, comprising:
a memory configured to store instructions; and
at least one processor configured to execute the instructions to:
capture a moving image comprising a plurality of images in a time-series order;
acquire geographical position information of the imaging apparatus at which the plurality of images are captured,
wherein the geographical position information is acquired for each image of the plurality of images;
calculate an offset of the geographical position information for each image of the plurality of images based on a difference between a reference value and the corresponding geographical position information of each image of the plurality of images,
wherein the reference value is the geographical position information of one image of the plurality of images in the moving image;
generate moving image metadata that accompanies the moving image,
wherein the moving image metadata comprises the reference value;
generate image metadata that accompanies each image of the plurality of images,
wherein the image metadata comprises the corresponding offset of the geographical position information for each image of the plurality of images; and
invalidate absolute geographical position information of each image of the plurality of images in the moving image based on invalidation of the reference value corresponding to the moving image.
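The reference-plus-offset scheme above can be sketched directly. This is an illustrative sketch, not Sony's implementation: one absolute reference position lives in the moving-image metadata and each frame carries only its offset, so invalidating the single reference value invalidates the absolute position of every frame at once. The dict-based metadata layout is an assumption for this example.

```python
def build_metadata(frame_positions):
    """Moving-image metadata holds the reference (first frame's position);
    per-frame metadata holds only each frame's offset from the reference."""
    reference = frame_positions[0]
    movie_meta = {"reference": reference}
    frame_meta = [{"offset": (lat - reference[0], lon - reference[1])}
                  for lat, lon in frame_positions]
    return movie_meta, frame_meta

def absolute_position(movie_meta, frame):
    ref = movie_meta["reference"]
    if ref is None:            # reference invalidated ->
        return None            # every frame loses its absolute position
    dlat, dlon = frame["offset"]
    return (ref[0] + dlat, ref[1] + dlon)

movie, frames = build_metadata([(35.0, 139.0), (36.0, 140.5)])
print(absolute_position(movie, frames[1]))  # → (36.0, 140.5)
movie["reference"] = None                   # invalidate the reference value
print(absolute_position(movie, frames[1]))  # → None
```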

US Pat. No. 10,250,842

ELECTRONIC DEVICE AND METHOD OF CONTROLLING THE SAME

Samsung Electronics Co., ...

1. An electronic device comprising:
a plurality of imaging sensors comprising a first imaging sensor and a second imaging sensor; and
a processor electrically connected to the plurality of imaging sensors and configured to output a read control signal and a synchronization signal to the plurality of imaging sensors,
wherein the processor is further configured to:
output a first read control signal to the first imaging sensor and store first data read from the first imaging sensor in a temporary memory,
output a second read control signal to the second imaging sensor at a point of time later than the first read control signal and additionally store second data read from the second imaging sensor in the temporary memory,
control a speed of outputting the first and second data stored in the temporary memory to the processor based on an output control signal, the speed of said outputting the first and second data from the temporary memory to the processor being faster than a speed of reading the first and second data from the first and second imaging sensors to the temporary memory based on the first and second read control signals, and
generate merged data by merging the first data and the second data output from the temporary memory.

US Pat. No. 10,250,841

SYSTEM AND METHOD FOR MODIFYING MEDIA STREAMS USING METADATA

MobiTV, Inc., Emeryville...

1. A method comprising:
selecting, via a controller, a content server among a plurality of content servers, the content server being geographically close to a mobile device, wherein the controller is configured to perform session management using RTSP (Real Time Streaming Protocol) protocol such that streaming functions, performed by the content server, and session management functions, performed by the controller, are separated, wherein the controller establishes the sessions directly with mobile devices, instead of the content server connecting directly with mobile devices, such that more mobile devices can operate simultaneously while being directly connected with the controller than if the mobile devices were directly connected to the content servers;
establishing, via the controller, a media streaming session between the content server and the mobile device;
obtaining device information from the mobile device by the content server;
streaming a media stream from the content server to the mobile device using RTP (Real Time Transport Protocol) protocol, the media stream including a video track, an audio track, and a metadata track;
analyzing metadata content of the media stream;
indexing metadata content in the metadata track to allow for later retrieval of the metadata content and corresponding video content in the video track, wherein the indexing is performed via entering the metadata content into a search engine;
selecting targeted advertising for the media stream using closed captioning contents in the metadata track, wherein the advertising is maintained in a database;
modifying, by the content server, the media stream using the device information to include selected advertising, wherein modifying includes:
maintaining a current sequence number for the media stream,
removing RTP packets with sequence numbers subsequent to the current sequence number from the media stream,
inserting new RTP packets with modified sequence numbers subsequent to the current sequence number, the new RTP packets corresponding to the selected advertising being inserted into the media stream;
updating sequence numbers of original RTP packets of the media stream that are transmitted to the device after transmitting the inserted new RTP packets, the modified sequence numbers of the original RTP packets being subsequent to the updated sequence numbers of the new RTP packets; and
transmitting the modified media stream to the mobile device.
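The ad-splicing and renumbering steps above can be sketched with packets modeled as `(sequence_number, payload)` tuples. This is an illustrative sketch, not MobiTV's implementation: new packets for the selected advertising are inserted after the current sequence number, and the original packets that follow are renumbered so the sequence stays contiguous.

```python
def splice_ads(stream, current_seq, ad_payloads):
    """Insert ad packets after `current_seq` and renumber later packets."""
    kept = [(s, p) for s, p in stream if s <= current_seq]
    tail = [p for s, p in stream if s > current_seq]
    out = list(kept)
    seq = current_seq
    for payload in ad_payloads:   # new RTP packets carrying the advertising
        seq += 1
        out.append((seq, payload))
    for payload in tail:          # original RTP packets, sequence updated
        seq += 1
        out.append((seq, payload))
    return out

stream = [(1, "v1"), (2, "v2"), (3, "v3"), (4, "v4")]
print(splice_ads(stream, 2, ["ad1", "ad2"]))
# → [(1, 'v1'), (2, 'v2'), (3, 'ad1'), (4, 'ad2'), (5, 'v3'), (6, 'v4')]
```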

US Pat. No. 10,250,840

PROJECTION APPARATUS AND CONTROL METHOD THEREFOR

Canon Kabushiki Kaisha, ...

1. A projection apparatus comprising:
a projection unit configured to project an image;
an operation receiving unit configured to receive an operation; and
a control unit configured to control the projection unit,
wherein the control unit controls the projection unit to project a first adjustment image for indicating a plurality of adjustment positions in an image to be projected by the projection unit and selecting an adjustment target position from the plurality of adjustment positions,
wherein, when the operation receiving unit receives an operation for determining the adjustment target position while the projection unit is projecting the first adjustment image, the control unit controls the projection unit to project a second adjustment image for adjusting a position at which an image corresponding to the adjustment target position is projected in an area corresponding to the adjustment target position selected by the determination operation received by the operation receiving unit, and
wherein, in the second adjustment image, an image indicating an adjustment position different from the adjustment target position is different from an image indicating the plurality of adjustment positions in the first adjustment image so that the selected adjustment target position is visually recognized.

US Pat. No. 10,250,839

MOBILE TERMINAL AND OPERATING METHOD FOR PROVIDING A SCREEN OF AN OPTIMIZED VIEWING ANGLE

LG Electronics Inc., Seo...

1. A mobile terminal comprising:
a display unit;
a sensing unit configured to detect an input signal; and
a control unit configured to:
detect, through the sensing unit, a first input signal for playing a 360-degree video at a first playing angle;
based on detecting the first input signal for playing the 360-degree video at the first playing angle, control the display unit to display a first image of the 360-degree video at the first playing angle;
detect, through the sensing unit, a second input signal for changing a playing angle of the 360-degree video to a second playing angle different from the first playing angle;
based on detecting the second input signal for changing the playing angle of the 360-degree video to the second playing angle different from the first playing angle:
display, as a main display area on the display unit, a second image of the 360-degree video at the second playing angle, and
display, as a portion of the main display area on the display unit, the first image of the 360-degree video played at the first playing angle;
determine that the second playing angle is outside of a predetermined angular range from the first playing angle; and
based on a determination that the second playing angle is outside of the predetermined angular range from the first playing angle, display the first image of the 360-degree video played at the first playing angle as a picture-in-picture (PIP) display within the main display area.
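The angular-range test that triggers the picture-in-picture display can be sketched as follows. This is an illustrative sketch, not LG's implementation: the 90-degree threshold and the circular (wrap-around) difference handling are assumptions for this example.

```python
def angular_difference(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def should_show_pip(first_angle, second_angle, max_range=90):
    """Show the first playing angle as a PIP only when the new playing
    angle falls outside the predetermined angular range."""
    return angular_difference(first_angle, second_angle) > max_range

print(should_show_pip(10, 60))   # → False (within range, no PIP)
print(should_show_pip(10, 200))  # → True  (outside range, show PIP)
```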

US Pat. No. 10,250,838

SYSTEM AND METHOD FOR CONVERTING LIVE ACTION ALPHA-NUMERIC TEXT TO RE-RENDERED AND EMBEDDED PIXEL INFORMATION FOR VIDEO OVERLAY

1. A computer-implemented method, comprising:
automatically detecting, at an imaging sensor on a network located in an environment of an entertainment venue, a set of alphanumeric characters on a display device;
capturing, using the imaging sensor, an image of a first set of alphanumeric characters;
converting the image of the first set of alphanumeric characters into an ASCII code associated with the first set of alphanumeric characters, and transmitting the ASCII code associated with the first set of alphanumeric characters to a hub on the network;
capturing, using a video capture device located in the environment, a captured video stream, wherein the captured video stream includes video data associated with the entertainment venue;
generating, using the hub, a first output video stream, wherein the first output video stream includes the captured video stream and an overlay including the ASCII code associated with the first set of alphanumeric characters;
transmitting, using the hub, the first output video stream, wherein after the first output video stream is transmitted, the first output video stream is displayed on a computing device;
capturing, using the imaging sensor, an image of a second set of alphanumeric characters;
converting the image of the second set of alphanumeric characters into an ASCII code associated with the second set of alphanumeric characters, and transmitting the ASCII code associated with the second set of alphanumeric characters to the hub;
generating, using the hub, an updated output video stream, wherein the updated output video stream includes the captured video stream and an updated overlay including the ASCII code associated with the second set of alphanumeric characters; and
transmitting, using the hub, the updated output video stream, wherein after the updated output video stream is transmitted, the updated output video stream is displayed on a computing device instead of the first output video stream.
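The hub-side overlay update can be sketched as follows. This is an illustrative sketch, not the patented system: the OCR conversion to ASCII is assumed to happen at the imaging sensor, and the hub simply keeps the most recently received text and stamps it onto each output frame. The `OverlayHub` class and dict-based frame model are assumptions for this example.

```python
class OverlayHub:
    """Combines the captured video stream with the latest ASCII overlay."""

    def __init__(self):
        self.current_text = ""

    def receive_ascii(self, text):
        """Called when the sensor transmits a newly recognized string."""
        self.current_text = text

    def render_frame(self, frame):
        """Output frame: captured video plus the current text overlay."""
        return {"video": frame, "overlay": self.current_text}

hub = OverlayHub()
hub.receive_ascii("HOME 21 - GUEST 17")  # first scoreboard reading
print(hub.render_frame("frame-001"))
hub.receive_ascii("HOME 24 - GUEST 17")  # updated reading replaces overlay
print(hub.render_frame("frame-002"))
```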

US Pat. No. 10,250,837

USER PROXIMITY RECOGNITION AND HANDS-FREE CONTROL

DISH Technologies L.L.C.,...

1. A set-top box comprising:
a first transmitter configured to output an interrogation signal when the set-top box is in a low power mode;
a first receiver configured to receive an identification signal transmitted from a mobile electronic device in response to the interrogation signal, the identification signal including an identification code;
a memory configured to store a plurality of user profiles, each user profile including a respective registered identification code and a respective command; and
control circuitry configured to compare the identification code to the registered identification codes stored in the memory, to exit the low power mode in response to receiving the identification signal, and to execute the command corresponding to a registered identification code that matches the identification code of the identification signal if fewer than a predetermined number of commands have been previously executed in a predetermined timeframe.
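The rate-limited command execution in the control circuitry can be sketched as follows. This is an illustrative sketch, not DISH's implementation: the command limit, window length, and profile layout (a code-to-command mapping) are assumptions for this example.

```python
import time

class SetTopBox:
    """Matches a received identification code against registered profiles
    and executes the profile's command, subject to a rate limit."""

    def __init__(self, profiles, max_commands=3, window_s=60.0):
        self.profiles = profiles       # registered code -> command name
        self.max_commands = max_commands
        self.window_s = window_s
        self.executed = []             # timestamps of executed commands

    def on_identification(self, code, now=None):
        now = time.monotonic() if now is None else now
        # keep only executions inside the predetermined timeframe
        self.executed = [t for t in self.executed if now - t < self.window_s]
        command = self.profiles.get(code)
        if command is None or len(self.executed) >= self.max_commands:
            return None                # unknown code, or rate limit reached
        self.executed.append(now)
        return command

stb = SetTopBox({"phone-42": "tune_favorite_channel"})
print(stb.on_identification("phone-42", now=0.0))  # → tune_favorite_channel
print(stb.on_identification("unknown", now=1.0))   # → None
```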

US Pat. No. 10,250,836

SOLID-STATE IMAGE SENSING APPARATUS, CONTROL METHOD, AND ELECTRONIC DEVICE

Sony Corporation, Tokyo ...

1. A solid-state image sensing apparatus, comprising:
an A/D converter configured to convert pixel signals of a plurality of pixels to obtain digital data;
a vertical signal line configured to feed the pixel signals from the plurality of pixels to the A/D converter; and
a circuit configured to increase a potential of the vertical signal line at a time of start of a read operation of the plurality of pixels, based on the potential of the vertical signal line that is less than a determined potential of a gate signal input to the circuit.

US Pat. No. 10,250,835

IMAGING DEVICE INCLUDING PIXEL REGION HAVING ISOLATED REGION AND SHADED REGION AND IMAGING SYSTEM INCLUDING IMAGING DEVICE

RICOH COMPANY, LTD., Tok...

1. An imaging device comprising:
a pixel region in which a plurality of pixels and a plurality of charge-to-voltage conversion circuits are arranged in matrix, the pixels including photoelectric conversion elements that output charges in accordance with intensity of received light, the charge-to-voltage conversion circuits converting the charges output from the pixels into voltage signals, wherein
the pixel region includes an isolated region including
isolated shaded pixels covered with a first shading metal of the same layer as a layer of wiring metals of the charge-to-voltage conversion circuits; and
an isolated pixel that is not covered with the metal,
all the pixels surrounding the isolated pixel in the isolated region are the isolated shaded pixels, and
the pixel region includes a shaded region in which the pixels and the charge-to-voltage conversion circuits are entirely covered with a second shading metal of an upper layer than the layer of the wiring metals.

US Pat. No. 10,250,834

METHODS AND APPARATUS FOR A VOLTAGE-SHIFTING READOUT CIRCUIT

SEMICONDUCTOR COMPONENTS ...

1. A readout circuit, comprising:
a storage device selectively coupled to an input signal and configured to sample the input signal, wherein the input signal has a first voltage value within a first voltage range;
a comparator coupled to the input signal, wherein the comparator compares the first voltage value of the input signal to a predetermined threshold voltage; and
a level-shifting circuit coupled to the storage device and responsive to the comparator, wherein the level-shifting circuit shifts the first voltage value of the input signal to a second voltage value within a second voltage range if the first voltage value of the input signal is greater than the predetermined threshold voltage.

US Pat. No. 10,250,833

TIMESTAMP CALIBRATION OF THE 3D CAMERA WITH EPIPOLAR LINE LASER POINT SCANNING

SAMSUNG ELECTRONICS CO., ...

1. An imaging unit comprising:
a light source operative to perform a one-dimensional (1D) point scan of a three-dimensional (3D) object along a scanning line, wherein the point scan projects a sequence of light spots on a surface of the 3D object; and
an image sensor unit that includes:
a plurality of pixels arranged in a two-dimensional (2D) pixel array forming an image plane, wherein a row of pixels in the 2D pixel array forms at least a portion of an epipolar line of the scanning line, wherein each pixel in the row of pixels is associated with a respective column in the 2D pixel array, and wherein each pixel in the row of pixels is operative to detect a corresponding light spot in the sequence of light spots,
a plurality of Analog-to-Digital Converter (ADC) units, wherein each ADC unit is associated with a respective pixel in the row of pixels and is operative to generate a pixel-specific timestamp value for the respective pixel in response to a pixel-specific detection of the corresponding light spot by the respective pixel, and
a processing unit coupled to the plurality of ADC units, wherein the processing unit is operative to perform the following:
for a column in the 2D pixel array associated with the respective pixel in the row of pixels, apply a column-specific correction value to the pixel-specific timestamp value to obtain a corrected timestamp value, wherein the column-specific correction value represents a column-specific propagation delay between the pixel-specific detection and when a pixel-specific output of the respective pixel reaches a pre-defined threshold, and
determine a distance to the corresponding light spot on the surface of the 3D object based at least on the corrected timestamp value and on a scan angle used by the light source for projecting the corresponding light spot.
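The two processing steps in the claim (column-specific timestamp correction, then distance from the scan angle) can be sketched as follows. This is an illustrative Python model under stated assumptions: the per-column delay table, the triangulation geometry, and all names are hypothetical, and the patent's actual distance formula may differ.

```python
import math

def corrected_timestamp(raw_ts, col, col_delay):
    """Remove the column-specific propagation delay from the
    pixel-specific timestamp (both expressed in the same time unit)."""
    return raw_ts - col_delay[col]

def spot_depth(pixel_x, scan_angle, baseline, focal_length):
    """Illustrative epipolar triangulation of the light-spot depth from
    the scan angle and the pixel's horizontal offset on the sensor.
    This is the textbook form for a laser/camera pair separated by
    `baseline`; the claimed geometry is not spelled out in the claim."""
    return (baseline * focal_length) / (pixel_x + focal_length * math.tan(scan_angle))
```

The corrected timestamp, rather than the raw one, feeds the depth computation, which is the point of the per-column calibration.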

US Pat. No. 10,250,832

STACKED ROLLING SHUTTER AND GLOBAL SHUTTER IMAGE SENSOR WITH KNEE SELF POINT CALIBRATION

SMARTSENS TECHNOLOGY (CAY...

5. A method for reducing image sensor pixel array fixed pattern noise by calibrating a knee point voltage level of a two segment piecewise linear response curve representing the pixel image signal output versus increasing incident image light, wherein the pixels include a reset transistor with a bimodal selectable potential supply voltage and selectable rolling and global shutter readout circuits, the method comprising the steps of:
providing a plurality of stacked imaging pixel cells comprising a first portion sensing and a rolling shutter readout circuit comprising a photodiode, a transfer transistor, a reset transistor, an amplifier transistor and a rolling shutter select transistor;
providing a second portion global shutter readout circuit stacked on the first portion;
providing a row decoder circuit which repeatedly provides frame by frame a sequence of readout control signals in turn to each row of the pixel cells wherein the sequence comprises four time intervals wherein the intervals comprise the sequence:
first, a photodiode reset and expose interval;
second, a photodiode image signal readout interval;
third, a knee point reset and recovery interval; and
fourth, a knee point readout interval for each row;
providing a programmable function logic circuit to provide to the row decoder circuit timing sequences and configurations;
providing from the row decoder a knee point signal level to the gate electrode of the transfer transistor first during the photodiode reset and expose interval and next during the knee point reset and recovery interval wherein the signal level determines a knee point of a two segment piecewise linear response curve representing the pixel image signal output versus increasing incident image light;
reading out the image signal during the photodiode image signal readout interval through the rolling shutter readout circuit;
providing a mode select switch within the row decoder circuit configured to supply the selectable reset power supply Vrab with a bimodal selection of either a high voltage level Vhi or an adjustable low voltage level Vlo;
engaging the mode select switch to change the reset power supply voltage Vrab to the adjustable low voltage Vlo during the knee point reset and recovery interval of each row while maintaining Vrab at the high level Vhi during the remaining three intervals;
reading out a knee point self calibration signal through the rolling shutter readout circuit during the knee point readout interval; and
determining and applying an amended knee point voltage level to the gate electrode of the transfer transistor during the subsequent photodiode reset and expose interval.

US Pat. No. 10,250,831

METHOD AND APPARATUS FOR ON-CHIP PER-PIXEL PSEUDO-RANDOM TIME CODED EXPOSURE

MASSACHUSETTS INSTITUTE O...

1. A focal plane imaging apparatus comprising:
a photodetector to convert a portion of light that is scattered and/or reflected from a scene into an analog signal;
an analog-to-digital converter (ADC), electrically coupled to the photodetector, to convert the analog signal into a digital signal;
control logic to provide a plurality of modulation signals;
a plurality of digital registers, electrically coupled to the ADC and the control logic, to store compressive data produced by modulating the digital signal with the plurality of modulation signals; and
a processor, operably coupled to the plurality of digital registers, to generate a detection map based on the compressive data stored by the plurality of digital registers,
wherein the detection map indicates a correlation of a portion of the scene to a predetermined signature, and
wherein the predetermined signature represents a control pattern generated by a distributed control pattern generator configured to generate and distribute pseudorandom modulation to pixels of the focal plane imaging apparatus.
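The compressive capture and detection-map correlation described above reduce to inner products. The sketch below is a minimal Python illustration, not the on-chip implementation: it assumes binary modulation codes and represents each register's contents as a plain integer.

```python
def compressive_measurements(samples, codes):
    """Each digital register accumulates the ADC output gated by one
    pseudorandom modulation signal (binary codes assumed here)."""
    return [sum(s * c for s, c in zip(samples, code)) for code in codes]

def detection_score(measurements, signature_measurements):
    """Correlation of the stored compressive data with the compressive
    projection of a predetermined signature; a large score marks a
    likely match in the detection map."""
    return sum(m * s for m, s in zip(measurements, signature_measurements))
```

With samples `[1, 2, 3]` and codes `[[1, 0, 1], [0, 1, 1]]`, the registers hold `[4, 5]`, and correlating against an identical signature yields 41.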

US Pat. No. 10,250,830

SOLID-STATE IMAGING DEVICE, IMAGE READING DEVICE, AND IMAGE FORMING APPARATUS

Ricoh Company, Ltd., Tok...

1. A system including a solid-state image sensor in which a plurality of pixels are arranged along at least one direction, the pixels being configured to convert incident light to electric charge whose amount is in accordance with the amount of the incident light and to accumulate the electric charge, the system comprising:
a valid area including pixels that are not shielded from light;
a first light-blocked area including pixels that are shielded from light and that are arranged at one of two end portions of the valid area;
a second light-blocked area including pixels that are shielded from light and that are arranged at the other end portion of the valid area;
a photoelectric transducer including
an analog-to-digital converting unit configured to convert the electric charge accumulated respectively by the pixels belonging to the first light-blocked area, the valid area, and the second light-blocked area, to first bit size image data at least at a time for an individual line of pixels,
a signal reading unit configured to read, among items of the image data, light-blocked data obtained from the first light-blocked area and the second light-blocked area, and valid data obtained from the valid area, in units of pixels,
a reference black level estimating unit configured to estimate a reference black level of the light-blocked data for the individual line of pixels whenever the light-blocked data is obtained,
a level correction unit configured to correct, based on the estimated reference black level, a size of the valid data obtained simultaneously with the light-blocked data used in estimating the reference black level; and
circuitry configured to convert the first bit size image data, after the correction to the size of the valid data has been performed, to second bit size image data that is smaller than the first bit size image data, and output the second bit size image data to an image processor,
wherein the reference black level estimating unit is configured to use an average of the light-blocked data as the reference black level used by the level correction unit to correct the valid data, and
wherein the reference black level estimating unit is configured to use an average of the light-blocked data obtained from pixels belonging to an inner area not adjacent to end portions in the one direction, among the pixels belonging to the first light-blocked area and the second light-blocked area, as the reference black level used by the level correction unit to correct the valid data.
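The black-level estimation in the last limitation is an average over an inner subset of the light-blocked pixels, applied line by line. A minimal Python sketch, with the list-based pixel representation and the `margin` parameter being assumptions for illustration:

```python
def reference_black_level(first_blocked, second_blocked, margin):
    """Average of light-blocked pixel values, excluding `margin` pixels
    at each end portion of both blocked areas (the claimed inner area
    not adjacent to the end portions)."""
    inner = (first_blocked[margin:len(first_blocked) - margin]
             + second_blocked[margin:len(second_blocked) - margin])
    return sum(inner) / len(inner)

def correct_line(valid_data, black_level):
    """Subtract the estimated reference black level from the valid data
    of the same line."""
    return [v - black_level for v in valid_data]
```

Excluding the end pixels avoids contamination from light leakage at the boundary between the blocked and valid areas, which is the plausible motivation for the "inner area" limitation.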

US Pat. No. 10,250,829

IMAGE PROCESSING APPARATUS THAT USES PLURALITY OF IMAGE PROCESSING CIRCUITS

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:
an imaging unit comprising an image sensor including a vertical light-shielded region and an effective region that is not light-shielded;
a first image processing circuit connected to the imaging unit; and
a second image processing circuit connected to the first image processing circuit,
wherein the first image processing circuit:
receives image data from the imaging unit, and
applies, to image data of a first region which is a portion of the effective region of a frame of the image data received from the imaging unit, predetermined image processing by using image data of the vertical light-shielded region of the frame,
the first image processing circuit:
sets as a second region, out of the effective region of the frame, a portion that is at least partially different to the first region and that is contiguous with the vertical light-shielded region, and
consecutively transmits, to the second image processing circuit, (i) the image data of the vertical light-shielded region and (ii) unprocessed image data of the second region to which the predetermined image processing has not been applied, and
the second image processing circuit:
receives the image data of the vertical light-shielded region of the frame and the unprocessed image data of the second region transmitted from the first image processing circuit, and
applies the predetermined image processing to the unprocessed image data of the second region received from the first image processing circuit, by using the image data of the vertical light-shielded region received from the first image processing circuit.

US Pat. No. 10,250,828

GLOBAL SHUTTER IMAGE SENSOR WITH ANTI-BLOOMING PIXEL AND KNEE POINT SELF-CALIBRATION

SMARTSENS TECHNOLOGY (U.S...

1. An image sensor comprising:
a plurality of global shutter imaging pixel cells, each including an anti-blooming transistor gate adjacent on one side to a photodiode and adjacent on another side to an anti-blooming transistor drain for modifying electric charge within the photodiode and for setting the photodiode to a selected potential;
a row decoder circuit providing readout signals to each of a plurality of rows of the imaging pixel cells during both a readout interval and a calibration interval for each row and providing to the anti-blooming transistor drain a selectable potential supply voltage, the selectable potential supply voltage being either a standard drain supply voltage or an adjustable low voltage;
a mode select switch within the row decoder circuit being operable to apply the standard drain supply voltage to the drain of the anti-blooming transistor or to apply the adjustable low voltage to the anti-blooming transistor drain; and
a programmable function logic circuit operable to engage the mode select switch to change the potential applied to the drain of the anti-blooming transistor from the standard supply voltage to the adjustable low voltage during an interval following the readout interval and before the calibration interval of each row.

US Pat. No. 10,250,827

IMAGING APPARATUS AND CONTROL METHOD THEREOF

CANON KABUSHIKI KAISHA, ...

1. An imaging apparatus comprising:
an imaging element including a plurality of micro-lenses and a plurality of photoelectric conversion units corresponding to the plurality of micro-lenses;
one or more processors; and
a memory storing instructions which, when executed by the one or more processors, cause the imaging apparatus to:
generate a first image signal based on a signal from a first photoelectric conversion unit among the plurality of photoelectric conversion units and a second image signal based on signals from the first photoelectric conversion unit and a second photoelectric conversion unit different from the first photoelectric conversion unit among the plurality of photoelectric conversion units; and
record a first correction parameter specific to the imaging element for correcting the first image signal, and a second correction parameter specific to the imaging element for correcting a third image signal based on the signal from the second photoelectric conversion unit in association with the first and second image signals in the memory,
wherein one file including the first and second image signals and the first and second correction parameters is generated and the generated file is recorded in the memory.

US Pat. No. 10,250,826

IMAGE SENSORS HAVING EXTENDED DYNAMIC RANGE

InVisage Technologies, In...

1. An electronic device, comprising:
a pixel circuit, comprising an overflow gate;
an automatic circuit configured to reset a pixel a plurality of times only during a time period when a signal detected at the pixel is below a predetermined threshold for the pixel circuit with the overflow gate; and
a column readout circuit, the column readout circuit including a correlated double-sampling (CDS) capacitor, one or more CDS clamp switches, a single slope analog-to-digital converter (ADC) circuit, and a column memory.

US Pat. No. 10,250,825

NIGHT VISION DEVICE WITH PERIODIC INFRARED ILLUMINATION AND GLOBAL SHUTTER CMOS SENSOR

Ambarella, Inc., Santa C...

1. An apparatus comprising:
a complementary metal oxide semiconductor (CMOS) image sensor comprising a plurality of picture elements, wherein light integration begins on all of said picture elements simultaneously in response to a first control signal;
an infrared light generating circuit, wherein a duration, an intensity, or both a duration and an intensity of infrared illumination generated by said infrared light generating circuit is controlled by a second control signal; and
a processor circuit enabled to generate said first control signal and said second control signal, wherein a period of said infrared illumination is shorter than an integration period of said CMOS image sensor.

US Pat. No. 10,250,824

CAMERA SENSOR WITH EVENT TOKEN BASED IMAGE CAPTURE AND RECONSTRUCTION

The University of North C...

1. A camera sensor with event token image capture and reconstruction comprising:
a photodetector for detecting light from a portion of a scene and producing a signal indicative of the light;
an integrator for accumulating charge resulting from the signal output by the photodetector;
an in-pixel processor for, in response to each accumulation by the integrator of a predetermined level of charge, resetting the integrator, wherein the in-pixel processor generates event tokens such that each event token coincides in time with an integrator reset event;
a communication pipeline for communicating the event tokens from the in-pixel processor for downstream processing; and
a post processor for receiving the event tokens and determining an output intensity based on a number of reset events and a time between at least two of the event tokens, wherein determining the output intensity includes dividing a value of a reset event counter by a difference in arrival times of two of the event tokens at the post processor and wherein the communication pipeline has a substantially constant propagation delay for communicating the event tokens from the in-pixel processor to the post processor.
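The intensity reconstruction named in the last element is a single division: the reset-event count over the token arrival-time difference. A minimal Python sketch of that computation (the function and argument names are illustrative):

```python
def output_intensity(reset_count, t_first, t_last):
    """Output intensity per the claim: the value of the reset-event
    counter divided by the difference in arrival times of two event
    tokens at the post processor. Valid because the communication
    pipeline has a substantially constant propagation delay, so
    arrival-time differences track generation-time differences."""
    return reset_count / (t_last - t_first)
```

For example, 8 reset events spanning tokens that arrive 2.0 time units apart yield an intensity of 4.0 resets per unit time.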

US Pat. No. 10,250,823

LIQUID CRYSTAL FOURIER TRANSFORM IMAGING SPECTROMETER

Palo Alto Research Center...

1. A method of operating a hyperspectral imaging device, comprising:
receiving a light beam at a liquid crystal retarding device;
pre-computing a voltage waveform based on a dynamic model of liquid crystal material in the liquid crystal variable retarding device that takes into account liquid crystal dynamics of the liquid crystal retarding device to change the optical retardance of the liquid crystal retarding device as a prescribed function of time, the pre-computed voltage waveform producing a controlled retardance versus time trajectory, wherein transitions between at least some retardance states of the liquid crystal variable retarding device along the trajectory occur faster than a response time of the liquid crystal variable retarding device; and
driving the liquid crystal retarding device with the pre-computed voltage waveform.

US Pat. No. 10,250,822

WEARABLE APPARATUS WITH INTEGRATED INFRARED IMAGING MODULE

FLIR Systems, Inc., Wils...

1. A method of presenting a user-viewable image on a wearable apparatus having a shield, the method comprising:
capturing, using a focal plane array (FPA) of an infrared imaging device of the wearable apparatus, a thermal image of an external environment;
converting the thermal image into a user-viewable image of the external environment;
presenting the user-viewable image using a display for viewing by a user while wearing the wearable apparatus;
passing, by the shield, at least some visible light from the external environment to the user for viewing the external environment through the shield;
passing, by a window provided in the shield, infrared radiation from the external environment to the infrared imaging device, wherein the infrared imaging device is positioned behind the window of the shield to receive the infrared radiation passed through the window; and
protecting the infrared imaging device, the display, and at least a portion of the user's face from the external environment by the shield while the wearable apparatus is worn by the user,
wherein the infrared imaging device and the display are positioned interior to and behind the shield to be protected from the external environment.

US Pat. No. 10,250,821

GENERATING A THREE-DIMENSIONAL MODEL OF AN INDUSTRIAL PLANT USING AN UNMANNED AERIAL VEHICLE

Honeywell International I...

1. A method of generating a three-dimensional model of a site, the method comprising:
capturing, using a single unmanned aerial vehicle, a number of visual images of the site;
capturing, using the single unmanned aerial vehicle, a number of infrared images of the site,
wherein the number of visual images and the number of infrared images captured using the single unmanned aerial vehicle include a number of lateral images of all objects of the site captured by the single unmanned aerial vehicle while travelling along a route between a plurality of the objects of the site, including between tanks of the site, at different heights of the site;
determining a relative position of the single unmanned aerial vehicle while the single unmanned aerial vehicle is travelling along the route between the plurality of objects of the site, wherein the relative position of the single unmanned aerial vehicle is determined using the captured visual and infrared images and a known initial position of the single unmanned aerial vehicle at a beginning of the route stored in the single unmanned aerial vehicle;
determining dimensions, including heights, of the plurality of the objects of the site based on the number of lateral images and based on data from a pressure sensor or an ultrasound sensor; and
forming a three-dimensional model of the site based on the dimensions of the plurality of the objects of the site and based on the relative position of the single unmanned aerial vehicle while the single unmanned aerial vehicle is travelling along the pre-programmed route by combining the number of visual images and the number of infrared images.

US Pat. No. 10,250,820

ELECTRONIC DEVICE AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE

Samsung Electronics Co., ...

1. An electronic device comprising:
a light emitter;
an image sensor including a plurality of first pixels controlled based on a first exposure time, and a plurality of second pixels controlled based on a second exposure time;
at least one sensor; and
a processor,
wherein the processor is configured to:
identify a first object and a second object among a plurality of objects in an image area;
in response to a light outputted from the light emitter being projected on the first object and the second object, identify light intensity reflected from the first object and the second object using the image sensor or the at least one sensor;
identify the first exposure time based on a first property of the first object and the second exposure time based on a second property of the second object, wherein the identification of the first exposure time and the second exposure time is based at least in part on the light intensity, and a first value corresponding to the first property of the first object is different from a second value corresponding to the second property of the second object;
acquire a first image of the first object according to the identified first exposure time using the plurality of first pixels and a second image of the second object according to the identified second exposure time using the plurality of second pixels; and
synthesize a first area corresponding to the first object in the first image with a second area corresponding to the second object in the second image to generate an output image.

US Pat. No. 10,250,819

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Olympus Corporation, Tok...

1. An image processing apparatus comprising:
a memory that stores first image data, and
a processor that includes an image associated information processing section, wherein
the image associated information processing section, for the image data of a single frame that has been taken at a plurality of shooting conditions, within the first image data that has been stored in the memory, acquires image region information, relating to image regions in which shooting is carried out at different shooting conditions, and image associated information of the image regions, associates the image region information and the image associated information and subjects the first image data to image processing, and generates second image data.

US Pat. No. 10,250,818

ELECTRONIC DEVICE INCLUDING A PLURALITY OF CAMERAS AND OPERATING METHOD THEREOF

Samsung Electronics Co., ...

1. A method for operating an electronic device, the method comprising:
activating a first camera and a second camera;
receiving a first image that is output from the activated first camera and a second image that is output from the activated second camera;
displaying the first image on a first part of an image output area of the electronic device;
sensing a camera switch request;
in response to the camera switch request, displaying a dummy image that includes a portion of the first image and a portion of the second image; and
displaying the second image on a second part of the image output area after displaying the dummy image.

US Pat. No. 10,250,817

SHADING OBJECT, INTELLIGENT UMBRELLA AND INTELLIGENT SHADING CHARGING SYSTEM INTEGRATED CAMERA AND METHOD OF OPERATION

1. An intelligent shading umbrella, comprising:
a base assembly;
a stem assembly coupled to a base assembly;
a center support assembly coupled to a stem assembly, the center support assembly comprising one or more arm support assemblies;
one or more blades, coupled to the arm support assemblies;
a shading fabric, coupled to at least the one or more blades; and
a camera, the camera to capture images of an area in proximity to the intelligent shading umbrella, wherein the camera comprises multiple image resolution settings and the camera receives instructions identifying which of the image resolution settings at which the images are to be captured, and
wherein the center support assembly includes a hollow tubular section and the camera is located in the hollow tubular section of the center support assembly and wherein the camera captures the images through a hole in the center support assembly.

US Pat. No. 10,250,815

COMPONENT MOUNTER INCLUDING A NOZZLE, CAMERA, AND A TRANSFER DEVICE

FUJI CORPORATION, Chiryu...

1. A component mounter comprising:
a conveyor configured to convey a circuit board;
a suction nozzle configured to hold a component;
a camera configured to image a bump on a bottom surface of the component held by the suction nozzle;
a transfer device configured to hold fluid that is one of solder, flux, conductive paste, and adhesive; and
a control device configured to image the bump onto which the fluid is not transferred at multiple shutter speeds using the camera to acquire multiple pre-transfer images captured at the multiple shutter speeds, to transfer the fluid held by the transfer device onto the bump, and to image the bump onto which the fluid is transferred at the multiple shutter speeds to acquire multiple post-transfer images captured at the multiple shutter speeds, the control device including an image processing unit, wherein
while the suction nozzle maintains suction on the component, the image processing unit is configured to determine pixel values of each of the pre-transfer images by performing gray processing on each of the pre-transfer images, and to determine pixel values of each of the post-transfer images by performing gray processing on each of the post-transfer images, the image processing unit including a transfer inspection data creation device configured to create transfer inspection data and a transfer inspection device configured to inspect a transfer state of the fluid transferred onto a bump using the transfer inspection data created by the transfer inspection data creation device,
the transfer inspection data creation device includes a shutter speed determining device configured to determine a transfer state shutter speed as a minimum shutter speed from the multiple shutter speeds for which a minimum value for the pixel values of each of the post-transfer images is equal to or larger than a predetermined value,
the transfer inspection data creation device is configured to create the transfer inspection data based on the pixel values of a pre-transfer image and a post-transfer image captured with the transfer state shutter speed determined by the shutter speed determining device, and
the control device is configured to mount the component used for the creation of the transfer inspection data on the circuit board conveyed by the conveyor.
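The shutter-speed determining step reduces to a filtered minimum over the candidate speeds. The sketch below is an illustrative Python model, assuming each post-transfer image is represented as a flat list of gray pixel values keyed by shutter speed; the mapping structure and names are not from the patent.

```python
def transfer_state_shutter_speed(post_images_by_speed, threshold):
    """Determine the transfer state shutter speed: the minimum shutter
    speed among those whose post-transfer image has a minimum gray
    pixel value equal to or larger than the predetermined value.
    Returns None if no speed qualifies (behavior assumed)."""
    qualifying = [speed for speed, pixels in post_images_by_speed.items()
                  if min(pixels) >= threshold]
    return min(qualifying) if qualifying else None
```

With post-transfer minimum gray values of 5, 12, and 15 at speeds 1, 2, and 4 and a threshold of 10, the determined speed is 2: the slowest exposure that still keeps every pixel above the floor.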

US Pat. No. 10,250,814

IMAGE SIGNAL PROCESSOR APPARATUS AND IMAGE SIGNAL PROCESSING METHOD

Kabushiki Kaisha Toshiba,...

1. An image signal processor apparatus comprising:
a first image signal processor apparatus configured to receive first image data as input; and
a second image signal processor apparatus configured to receive second image data as input,
wherein the second image signal processor apparatus selects, as a signal to adjust pixel values of the second image data, one of a first signal that the first image signal processor apparatus uses for first image processing to adjust pixel values of the first image data and a second signal generated based on the second image data, and when selecting the first signal, uses the first signal to perform the first image processing of adjusting pixel values of the second image data.

US Pat. No. 10,250,813

METHODS AND SYSTEMS FOR SHARING VIEWS

FUJI XEROX CO., LTD., To...

1. A method, comprising:
at a first device with a display:
receiving an image from a camera device remote from the first device, the image corresponding to a first field of view of the camera device;
receiving sensory information from a second device remote from the first device, the sensory information corresponding to a second field of view of the second device;
displaying on the display:
the image; and
a visualization, distinct from the image, of an angle orientation of the first field of view and an angle orientation of the second field of view, wherein the visualization includes a first bounded area representing the angle orientation of the first field of view and a second bounded area representing the angle orientation of the second field of view;
sending a command to orient the angle orientation of the first field of view of the camera device with the angle orientation of the second field of view corresponding to the sensory information; and
upon sending the command, updating the visualization in accordance with the orienting of the angle orientation, to show the first field of view of the camera device overlapping with the angle orientation of the second field of view corresponding to the sensory information.

US Pat. No. 10,250,812

DISPLAY SYSTEM FOR MACHINE

Caterpillar Inc., Deerfi...

1. A display system for displaying image data of an environment of a machine, the display system comprising:
a display screen; and
a plurality of imaging devices comprising:
a first imaging device; and
a second imaging device,
the second imaging device being communicably coupled to the first imaging device and the display screen, and
the second imaging device being configured to:
generate the image data;
receive, from the first imaging device, information identifying a first operating parameter associated with the first imaging device; and
selectively transmit, based on the first operating parameter and a second operating parameter associated with the second imaging device, signals indicative of the image data to the display screen.

US Pat. No. 10,250,811

METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR CAPTURING IMAGES

Nokia Technologies Oy, E...

1. A method comprising:
generating by a first camera of an apparatus, in a preview mode of the apparatus, a first plurality of image frames at a first frame rate to provide a preview of a scene at a first resolution;
generating by at least one second camera of the apparatus, in the preview mode of the apparatus during a time of generating the first plurality of image frames, a second plurality of image frames of the scene at a second frame rate at a second resolution,
wherein the second resolution is a higher resolution than the first resolution, and
wherein the at least one second camera is different from the first camera of the apparatus, and wherein a field of view of the first camera and a field of view of the at least one second camera are aligned;
determining whether a capture mode of the apparatus is initiated within a threshold time; and
in response to determining that the capture mode is initiated within the threshold time, selecting in the capture mode of the apparatus at least one image frame from the second plurality of image frames of the scene generated by the at least one second camera as at least one capture image of the scene with zero shutter lag,
where the first frame rate of the first camera is x frames per second, the second frame rate of the at least one second camera is y frames per second, where x is greater than y and where y is greater than one.

US Pat. No. 10,250,810

SINGLE PIECE OPTICAL IMAGE STABILIZATION ACTUATOR COIL ASSEMBLY

Apple Inc., Cupertino, C...

1. A coil assembly, comprising:
a base;
one or more tabs that extend from the base;
one or more coils located in the one or more tabs;
one or more terminals in the base for connections to the one or more coils; and
one or more leads routed through the base and the one or more tabs between respective ones of the terminals and respective ones of the one or more coils;
wherein the one or more tabs are bent such that the one or more coils are oriented at an angle to the base.

US Pat. No. 10,250,809

VIDEO STABILIZATION SYSTEM AND METHOD

SYNAPTIVE MEDICAL (BARBAD...

1. A method of stabilizing digital video received from a video source as a sequence of digital frames, each digital frame comprising an array of image pixels, each image pixel having a value, the method comprising the steps of:
providing a census kernel, the census kernel being an array of points including a center point with value 0, each point having a binary value, the array having 2N+1 rows and 2N+1 columns, N being an integer greater than 1, wherein a total of T test points in the census kernel have value 1 and the other points have value 0, T being an integer greater than or equal to 8, and wherein a subset of the test points in the census kernel form a connected circle centered on the center point;
for each candidate image pixel in each digital frame, a candidate image pixel being any image pixel in the digital frame spaced apart by at least N pixels from an edge of the digital frame:
computing and storing a digital signature of the candidate image pixel having 2T bits consisting of T brighter bits and T darker bits by aligning the center point of the census kernel with the candidate image pixel, selecting the image pixels corresponding to the test points in the census kernel, and for each selected image pixel (a) including a brighter bit of value 1 in the signature if the sum of the value of the selected image pixel and a threshold value Tb is less than the value of the candidate image pixel, or otherwise including a brighter bit of value 0 in the signature, and (b) including a darker bit of value 1 in the signature if the value of the selected image pixel is greater than the sum of the value of the candidate image pixel and a second threshold value, or otherwise including a darker bit of value 0 in the signature;
summing all the darker bits in the digital signature corresponding to test points on the connected circle, and if the sum is greater than a threshold Tdnc, designating the candidate image pixel to be a corner pixel, or counting a maximum number of darker bits having value 1 in the digital signature corresponding to contiguous test points on the connected circle, and if the number is greater than a threshold Tdc, designating the candidate image pixel to be a corner pixel;
summing all the brighter bits in the digital signature corresponding to test points on the connected circle, and if the sum is greater than a threshold Tbnc, designating the candidate image pixel to be a corner pixel, or counting a maximum number of brighter bits having value 1 in the digital signature corresponding to contiguous test points on the connected circle, and if the number is greater than a threshold Tbc, designating the candidate image pixel to be a corner pixel;
for each digital frame following a previous digital frame in the sequence of digital frames:
for each corner pixel in the digital frame, attempting to identify an image pixel in the previous digital frame corresponding to the corner pixel based on the Hamming distances between the digital signature of the corner pixel and the stored digital signatures of the image pixels in the previous digital frame, and, if a corresponding image pixel is identified, calculating a motion vector for the corner pixel based on difference in locations of the corner pixel and the identified corresponding image pixel;
calculating a motion model based on the motion vectors; and
applying the motion model to the digital frame to produce a stabilized frame.
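The census-signature corner test recited in this claim can be illustrated with a short sketch. This is not the patented implementation; the function names, the radius-2 circle offsets, and the toy thresholds are all illustrative assumptions. It shows the two signature bit types (brighter/darker) and the two alternative corner criteria (bit sum on the circle versus longest contiguous run of bits on the circle).

```python
# Hedged sketch of the census-signature corner test described in the claim above.
# Names (census_signature, is_corner) and the circle offsets are assumptions.

def census_signature(img, y, x, offsets, tb, td):
    """Return (brighter_bits, darker_bits) for the candidate pixel at (y, x).

    offsets: list of (dy, dx) test-point positions relative to the center point.
    tb, td:  the brighter/darker threshold values from the claim.
    """
    c = img[y][x]
    # Brighter bit = 1 when the selected pixel plus threshold is still darker
    # than the candidate; darker bit = 1 when it exceeds candidate plus threshold.
    brighter = [1 if img[y + dy][x + dx] + tb < c else 0 for dy, dx in offsets]
    darker = [1 if img[y + dy][x + dx] > c + td else 0 for dy, dx in offsets]
    return brighter, darker

def max_contiguous(bits):
    """Longest run of 1s, treating the circle of test points as circular."""
    if all(bits):
        return len(bits)
    doubled, best, run = bits + bits, 0, 0
    for b in doubled:
        run = run + 1 if b else 0
        best = max(best, run)
    return min(best, len(bits))

def is_corner(circle_bits, t_sum, t_contig):
    """Corner if either the bit sum or the longest contiguous run exceeds
    its threshold, mirroring the two alternatives in the claim."""
    return sum(circle_bits) > t_sum or max_contiguous(circle_bits) > t_contig
```

A bright point on a dark background sets every brighter bit on the circle, so either criterion designates it a corner.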

US Pat. No. 10,250,808

IMAGING APPARATUS AND CONTROL METHOD THEREFOR

Canon Kabushiki Kaisha, ...

1. An imaging apparatus comprising:a camera body on which a lens device is detachably mounted;
a shake detector configured to be inside the camera body; and
a processor, wherein the processor functions according to a program stored in a memory as a control unit configured to determine an exposure time at a time of panning,
wherein the control unit determines the exposure time at the time of panning using a shake detection signal output by the shake detector irrespective of whether the lens device mounted on the camera body corresponds to the panning.

US Pat. No. 10,250,807

IMAGING DEVICE, IMAGING METHOD, AND RECORDING MEDIUM

Olympus Corporation, Tok...

1. An imaging device, comprising:an imaging unit which captures a subject and acquires a captured image;
an area setting circuit which sets a first area and a second area other than the first area on the captured image acquired by the imaging unit;
a main subject setting circuit which sets a main subject based on the captured image acquired by the imaging unit or another image;
a subject tracking circuit which periodically detects a position of the main subject on the captured image acquired by the imaging unit;
a variable power circuit which zooms on a tele-angle side or a wide angle side; and
a control circuit which directs the variable power circuit to zoom on a wide angle side when a zoom state of the variable power circuit is on a tele-angle side and the main subject exists in the second area, wherein
the variable power circuit zooms according to the direction of the control circuit;
depending on a detection result of the subject tracking circuit, the control circuit directs the subject tracking circuit to detect the position of the main subject in a first period when the main subject does not exist in the second area, and to detect the position of the main subject in a second period shorter than the first period when the main subject exists in the second area; and
the subject tracking circuit detects the position of the main subject in a period according to the direction of the control circuit.
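The control logic of this claim (a shorter tracking period and a wide-angle zoom direction once the main subject reaches the second area) reduces to a small decision sketch. The period values, area test, and function names here are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the adaptive tracking-period and zoom-direction logic
# in the claim above; constants and names are assumptions.

FIRST_PERIOD_MS = 200   # slower tracking while the subject is outside the second area
SECOND_PERIOD_MS = 50   # faster tracking once the subject enters the second area

def tracking_period_ms(subject_in_second_area):
    """The second period is shorter than the first, per the claim."""
    return SECOND_PERIOD_MS if subject_in_second_area else FIRST_PERIOD_MS

def zoom_direction(zoom_on_tele_side, subject_in_second_area):
    """Direct a wide-angle zoom only when the zoom state is on the tele-angle
    side and the subject exists in the second area; otherwise hold."""
    if zoom_on_tele_side and subject_in_second_area:
        return "wide"
    return "hold"
```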

US Pat. No. 10,250,806

ELECTRONIC DEVICE AND METHOD FOR CONTROLLING IMAGE SHOOTING AND IMAGE OUTPUTTING

Samsung Electronics Co., ...

1. A method for controlling image shooting and image outputting in an electronic device, the method comprising:determining whether to shoot an image or output the image;
in response to determining to shoot the image, controlling a variable reflecting plate in a first direction while the electronic device is shooting the image through a single lens contained in the electronic device; and
in response to determining to output the image, controlling the variable reflecting plate in a second direction different from the first direction while the electronic device is outputting the image through a single lens contained in the electronic device.

US Pat. No. 10,250,805

IMAGING DEVICE FOR PERFORMING DFD PROCESSING AT APPROPRIATE TIMING

PANASONIC INTELLECTUAL PR...

1. An imaging device comprising:an optical system including a focus lens;
an imaging sensor configured to generate image data based on an image obtained by the optical system;
a memory;
a processor; and
a non-transitory computer-readable recording medium having stored thereon executable instructions, which when executed, cause the processor to function as:
a Depth from Defocus (DFD) processing unit configured to calculate DFD distance information regarding a subject distance based on a plurality of blur signals and a plurality of pieces of image data having different focusing positions, the image data being obtained from the imaging sensor following drive of the focus lens;
a measuring unit configured to measure a variation of a subject image of the image data; and
a control unit configured to control the DFD processing unit to calculate the DFD distance information, wherein
the memory stores the DFD distance information, and
(A) when the measured variation is determined, by the control unit, to be equal to or more than a predetermined value, the control unit:
causes the DFD distance information to be cleared from the memory; and
determines whether or not the variation is stable, wherein
(B) when the control unit determines that the variation is stable, the control unit controls the DFD processing unit to calculate the DFD distance information, and the control unit causes the memory to store the calculated DFD distance information so as to update the previous DFD distance information, and
(C) when the control unit determines that the variation is not stable, the control unit does not control the DFD processing unit to calculate the DFD distance information, wherein
when a release button is half-depressed, the control unit discontinues processing (A), (B) and (C), and
(i) when the DFD distance information is present in the memory, the control unit causes the focus lens to move to a position which corresponds to the DFD distance information in the memory, and
(ii) when the DFD distance information is not present in the memory, the control unit causes the focus lens not to move.
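The (A)/(B)/(C) branches and the half-press behavior above form a small state machine, sketched below. The class, its method names, and the "stable" predicate being supplied by the caller are assumptions for illustration only.

```python
# Hedged sketch of the (A)/(B)/(C) DFD control flow in the claim above.

class DfdController:
    def __init__(self, threshold):
        self.threshold = threshold
        self.stored_distance = None  # the memory slot for DFD distance information

    def on_variation(self, variation, stable, compute_dfd):
        """(A) clear memory on a large variation; then (B) recompute and store
        when the variation is stable, or (C) skip the DFD calculation."""
        if variation >= self.threshold:          # (A)
            self.stored_distance = None
            if stable:                           # (B)
                self.stored_distance = compute_dfd()
            # (C): not stable -> memory stays cleared, no DFD calculation

    def on_half_press(self):
        """Half-press: move the focus lens to the stored distance if present
        (i), otherwise do not move the lens (ii)."""
        if self.stored_distance is not None:
            return ("move_to", self.stored_distance)
        return ("hold", None)
```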

US Pat. No. 10,250,804

INDUCTION-POWERED CAMERA

Vivint, Inc., Provo, UT ...

1. An apparatus for security and/or automation systems, comprising:a first half of the apparatus, the first half of the apparatus comprising an internal half of the apparatus configured to be located inside a building;
an electricity transmission unit positioned within the first half;
a first light source included in the first half of the apparatus, wherein the first light source illuminates when the electricity transmission unit is activated;
a first rechargeable battery included in the first half of the apparatus, wherein the first rechargeable battery charges using a continuous power supply located inside the building;
a second half of the apparatus positioned on a divider configured to separate the inside of the building from the outside of the building, the second half of the apparatus comprising an external half of the apparatus configured to be located outside the building;
an electricity receiving unit included in the external half of the apparatus and positioned a predetermined distance away from the electricity transmission unit;
a second light source included in the second half of the apparatus, wherein the second light source illuminates when the first half of the apparatus is in communication with the second half of the apparatus;
a camera included in the external half of the apparatus, the camera powered by a portion of power received by the electricity receiving unit; and
a second rechargeable battery included in the second half of the apparatus, wherein the second rechargeable battery stores a remaining portion of the power received by the electricity receiving unit.

US Pat. No. 10,250,803

VIDEO GENERATING SYSTEM AND METHOD THEREOF

HTC Corporation, Taoyuan...

1. A video generating system, comprising:a processor operatively configured to:
determine a plurality of target angles of a 360-degree panoramic video based on at least one object, at least one face, at least one speaking person, or at least one sound direction being detected from a part of at least one frame of the 360-degree panoramic video;
separate the 360-degree panoramic video into a plurality of durations according to the target angles, wherein each duration corresponds to one of the target angles;
generate a normal video according to the durations associated with the target angles;
zoom in the part of the at least one frame in response to a resolution of the at least one frame being higher than a resolution threshold and the durations corresponding to the same target angles are longer than a time threshold; and
zoom out the part of the at least one frame in response to the resolution of the at least one frame being lower than a resolution threshold.

US Pat. No. 10,250,802

APPARATUS AND METHOD FOR PROCESSING WIDE VIEWING ANGLE IMAGE

FXGear Inc., Seoul (KR)

1. An apparatus for processing a wide viewing angle image and providing the wide viewing angle image to a display device, comprising:a correction parameter generating processor configured to analyze an image input from a camera and a planar reference image to generate a correction parameter based on the analysis of the image input and the planar reference image;
a wide viewing angle image packaging processor configured to encode the input image and the correction parameter to generate a wide viewing angle image package; and
a first communication processor configured to provide the wide viewing angle image package to a wide viewing angle image display device;
wherein the planar reference image is transformed into a curved projection geometry by the wide viewing angle image display device,
wherein the wide viewing angle image display device is configured to project the curved projection geometry to a virtual space, to dispose a virtual camera, to texture the input image to the curved projection geometry to compose a scene, to display the scene on the curved projection geometry,
wherein the wide viewing angle image display device corrects the image that is already displayed on the curved projection geometry based on the correction parameter,
wherein the correction parameter includes a camera intrinsic parameter or a stabilizing parameter,
wherein the wide viewing angle image display device is configured to dynamically correct the scene by dynamically adjusting a location of the virtual camera by using the stabilizing parameter.

US Pat. No. 10,250,801

CAMERA SYSTEM AND IMAGE-PROVIDING METHOD

Institute For Information...

1. A computer device for calculating an estimated camera location and an estimated camera pose, comprising:a storage, being configured to store a plurality of feature models based on a plurality of reference images, wherein each of the feature models comprises a plurality of reference feature points; and
a processor, being electrically connected to the storage and configured to:
calculate a plurality of initial feature points of an initial image and a plurality of respective three-dimensional coordinates thereof, wherein the initial image corresponds to an initial camera location and an initial camera pose;
compare a plurality of feature differences, each of which is between the initial feature points of the initial image and the plurality of reference feature points of each of the plurality of feature models, to decide a feature model candidate from the plurality of feature models, wherein the feature model candidate corresponds to a minimum of the feature differences; and
calculate the estimated camera location and the estimated camera pose based on a projection of the three-dimensional coordinates of the initial feature points of the initial image in a case where a plurality of feature points of an estimated image approach the reference feature points of the feature model candidate, wherein the estimated image is produced based on the estimated camera location and the estimated camera pose.
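The candidate-selection step above (pick the feature model with the minimum feature difference) can be sketched briefly. The distance metric shown, a sum of absolute differences over paired feature values, and the dictionary layout are assumptions; the patent does not specify them.

```python
# Illustrative sketch of choosing the feature-model candidate with the
# minimum feature difference, per the claim above.

def feature_difference(initial_points, reference_points):
    """Aggregate difference between two equally sized feature-point lists
    (sum of absolute differences; an assumed metric)."""
    return sum(abs(a - b) for a, b in zip(initial_points, reference_points))

def choose_candidate(initial_points, feature_models):
    """Return the model whose reference feature points minimize the difference."""
    return min(
        feature_models,
        key=lambda m: feature_difference(initial_points, m["reference_points"]),
    )
```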

US Pat. No. 10,250,800

COMPUTING DEVICE HAVING AN INTERACTIVE METHOD FOR SHARING EVENTS

A9.COM, INC., Palo Alto,...

1. A computing device, comprising:in a portable handheld configuration,
a first camera pointed outwardly from a front face of the computing device;
a second camera pointed outwardly from a rear face of the computing device;
at least one orientation component configured to detect device orientation data for the computing device;
at least one processor; and
a memory device including instructions that, upon being executed by the at least one processor, cause the computing device to:
capture first image data by the first camera and second image data by the second camera of an object;
detect first orientation data associated with the first image data and second orientation data associated with the second image data, the first orientation data and the second orientation data determined using the at least one orientation component;
identify historic image data previously captured by the first camera or the second camera, the historic image data associated with historic orientation data previously determined using the at least one orientation component, and at least some of the historic image data representing a view of an area different from views represented in the first image data and the second image data;
stitch together the first image data, the second image data, and the historic image data using the first orientation data, the second orientation data, and the historic orientation data to generate composite image data that includes a representation of a three-hundred and sixty degree view of an area; and
provide at least a portion of composite image data to a remote computing device.

US Pat. No. 10,250,799

ENHANCED IMAGE CAPTURE

Google Technology Holding...

1. A method on an image-capture device, the method comprising:capturing, by the image-capture device, a plurality of still images;
receiving, by the image-capture device, a capture command;
selecting, by the image-capture device, a still image from the plurality of captured images, wherein the selecting a still image is based, at least in part, on a temporal proximity of an image-capture time of a captured image to a time of the capture command; and
capturing, by the image-capture device, video;
analyzing, by the image-capture device, the captured video to determine an interest score for a segment of video based, at least in part, on an analysis of the plurality of still images, the segment of video being a sequence of video frames; and
in response to determining that the interest score for the segment of the captured video is above a threshold, outputting a notification that the captured video is available.
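Two steps of this claim, temporal-proximity still selection and the interest-score notification, are simple enough to sketch. The function names, the timestamp representation (seconds), and the notification string are assumptions.

```python
# Sketch of the temporal-proximity selection and notification steps
# in the claim above; names and representations are assumptions.

def select_still(captures, command_time):
    """Pick the captured image whose image-capture time is nearest in time
    to the capture command.

    captures: list of (timestamp, image) pairs.
    """
    return min(captures, key=lambda c: abs(c[0] - command_time))[1]

def notify_if_interesting(interest_score, threshold):
    """Output a notification only when the segment's interest score is
    above the threshold; otherwise stay silent."""
    return "video available" if interest_score > threshold else None
```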

US Pat. No. 10,250,798

ENABLEMENT AND DISABLEMENT OF CAMERAS

Hewlett-Packard Developme...

1. An apparatus, comprising:a camera to record an image, wherein the camera is enabled and disabled, independent of a basic input and output system (BIOS) engine available to enable the camera and disable the camera, by:
a button actuatable between an on condition and an off condition; and
an input device when the button is actuated to the on condition.

US Pat. No. 10,250,797

THIN MULTI-APERTURE IMAGING SYSTEM WITH AUTO-FOCUS AND METHODS FOR USING SAME

Corephotonics Ltd., Tel ...

1. A dual-aperture digital camera, comprising:a) a first sub-camera that includes a first optics bloc with a respective first optical axis and a first, color image sensor covered with a color filter array (CFA) with a first number of pixels and a first pixel size, the first camera configured to output a first image;
b) a second sub-camera that includes a second optics bloc with a respective second optical axis and a second, clear image sensor with a second number of pixels and a second pixel size, the second camera configured to output a second image, wherein the first and second optics blocs are mounted on a single lens holder;
c) a single autofocus (AF) mechanism coupled mechanically to the single lens holder and operative to move the single lens holder and the first and second optics blocs mounted thereon together for AF in a direction common to the respective first and second optical axes;
d) an optical image stabilization (OIS) mechanism, coupled mechanically to the first and second optics blocs and operative to move the first and second optics blocs together for OIS in a direction perpendicular to the first and second optical axes; and
e) an image fusion algorithm that combines the first image and the second image into a combined color image.

US Pat. No. 10,250,796

FOCUS ADJUSTMENT APPARATUS AND FOCUS ADJUSTMENT METHOD

Canon Kabushiki Kaisha, ...

1. A focus adjustment apparatus comprising:a signal generating unit configured to generate signals of two images respectively corresponding to a pair of light fluxes that pass through different pupil regions of a focusing lens;
a two-image displacement amount calculating unit configured to calculate a displacement amount of the two images with respect to each other based on a phase difference of the signals of the two images;
a steepness calculating unit configured to calculate a steepness of change in a correlation change amount of the two images;
a steepness normalizing unit configured to normalize the steepness; and
an evaluating unit configured to evaluate reliability of the displacement amount of the two images based on the normalized steepness.

US Pat. No. 10,250,795

IDENTIFYING A FOCUS POINT IN A SCENE UTILIZING A PLURALITY OF CAMERAS

Motorola Mobility LLC, C...

1. A method comprising:performing, via a main lens of a main camera, a first focus procedure that scans a current scene by focusing the main lens in a first direction; and
concurrently performing, via at least one auxiliary lens of at least one auxiliary camera, a second focus procedure that scans the current scene by focusing the at least one auxiliary lens in a second direction that is opposite the first direction;
identifying a focus value peak within autofocus data collected during the first and second focus procedures;
establishing the focus value peak as a focus point;
determining whether the focus point was identified by the at least one auxiliary camera; and
in response to determining the focus point was identified by the at least one auxiliary camera:
terminating the first focus procedure;
performing, via the at least one auxiliary lens, a fine scanning focus procedure to confirm the focus value peak as a greatest focus value peak for the current scene; and
automatically synchronizing a position of the main lens to that of the at least one auxiliary lens responsive to successful completion of the fine scanning focus procedure.
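The opposed-direction scan above can be approximated with a toy sketch: each camera scans its own sequence of lens positions, and if the auxiliary camera holds the greater focus-value peak, the main scan is abandoned and the main lens is synchronized to the auxiliary position. The names, scan data layout, and the simple peak comparison (in place of the claim's fine scanning confirmation) are assumptions.

```python
# Hedged sketch of the opposed-direction autofocus scan in the claim above.

def find_peak(scan_positions, focus_values):
    """Return (position, focus value) of the highest focus value in one scan."""
    best = max(range(len(focus_values)), key=lambda i: focus_values[i])
    return scan_positions[best], focus_values[best]

def sync_main_lens(main_scan, aux_scan):
    """If the auxiliary scan holds the greater peak, terminate the main
    procedure and synchronize the main lens to the auxiliary position."""
    main_pos, main_peak = find_peak(*main_scan)
    aux_pos, aux_peak = find_peak(*aux_scan)
    if aux_peak > main_peak:
        return ("sync_to_aux", aux_pos)
    return ("keep_main", main_pos)
```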

US Pat. No. 10,250,794

CAPTURING AN IMAGE USING MULTI-CAMERA AUTOMATIC FOCUS

Motorola Mobility LLC, C...

1. A method comprising:concurrently performing a main focus procedure and an auxiliary focus procedure for a current scene via a main camera and at least one auxiliary camera, respectively, of an image capturing device;
determining whether a focus point has been found in the current scene; and
in response to determining the focus point has not been found:
dynamically adjusting a number of focus scan steps of the auxiliary focus procedure utilized by only the at least one auxiliary camera;
performing, via an auxiliary lens of the at least one auxiliary camera, the auxiliary focus procedure using the adjusted number of focus scan steps;
determining whether the focus point has been found; and
in response to determining the focus point has been found, automatically synchronizing a position of a main lens of the main camera to that of the auxiliary lens.

US Pat. No. 10,250,793

FOCUS ADJUSTMENT DEVICE HAVING A CONTROL UNIT THAT DRIVES A FOCUS ADJUSTMENT OPTICAL SYSTEM TO A FOCUSED POSITION ACQUIRED FIRST BY EITHER A CONTRAST DETECTION SYSTEM OR A PHASE DIFFERENCE DETECTION SYSTEM

NIKON CORPORATION, Tokyo...

1. An imaging apparatus comprising:an imaging sensor which includes imaging pixels and focus detection pixels, captures an image by an optical system, and outputs a signal, the optical system having a focus adjustment optical system; and
a camera control unit which calculates a focus evaluation value and an amount of defocus;
wherein the camera control unit:
repeats calculation of the focus evaluation value based on the signal output from the imaging pixels and calculation of the amount of defocus based on the signal output from the focus detection pixels for detecting a position of the focus adjustment optical system at the time a focused position of the optical system is focused on the imaging sensor; and
stops the calculation of the focus evaluation value and drives the focus adjustment optical system according to the amount of defocus when the amount of defocus can be calculated.

US Pat. No. 10,250,792

UNMANNED AERIAL VEHICLES, VIDEOGRAPHY, AND CONTROL METHODS

Platypus IP PLLC, Salt L...

1. A method performed by an unmanned aerial vehicle (UAV) for capturing imagery of a climber as the climber navigates a climbing surface, the method comprising:receiving a signal from one or more distance sensors of the UAV describing the climbing surface;
determining a plane describing the climbing surface;
receiving a signal describing a position of the climber on the climbing surface;
maintaining a position of the UAV at a distance normal to the plane describing the climbing surface and relative to the position of the climber; and
capturing the imagery of the climber.

US Pat. No. 10,250,791

MOBILE-BASED PERSONAL CONTENT PROTECTION APPARATUS AND METHOD THEREOF

ELECTRONICS AND TELECOMMU...

1. A mobile-based personal content protection apparatus comprising:a camera view estimator configured to, by using a sensor equipped in a mobile terminal of a user, calculate a position of the mobile terminal and a position of an area, which is being photographed by the camera included in the mobile terminal, to estimate a camera view;
a peripheral information receiver configured to search for another mobile terminal located in an area within a predetermined range, based on the position of the another mobile terminal and the camera view and receive peripheral position information and information about allowance of photographing from the another mobile terminal;
a personal content detector configured to detect a personal content area based on an image captured by a camera included in the another mobile terminal where the photographing is not allowed, based on the peripheral position information; and
a personal content masking unit configured to mask the detected personal content area.

US Pat. No. 10,250,790

ELECTRIC APPARATUS, IMAGE PROJECTION APPARATUS AND IMAGE CAPTURING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An image projection apparatus comprising:a first optical modulator configured to operate in response to receipt of a first clock signal whose polarity reverses periodically so as to modulate a first color light introduced thereto;
a second optical modulator configured to operate in response to receipt of a second clock signal whose polarity reverses periodically at a same period as that of the first clock signal so as to modulate a second color light introduced thereto;
an optical element through which exit lights from the first and second optical modulators pass;
a projection optical system configured to project an image light including the exit lights modulated by the first and second optical modulators onto a projection surface; and
a signal outputter configured to output the first and second clock signals respectively to the first and second optical modulators,
wherein:
the first and second optical modulators are arranged on mutually opposite sides across the optical element,
light exit surfaces of the first and second optical modulators respectively face first and second light entrance surfaces provided in the optical element at mutually opposite sides thereof, and
the signal outputter is configured to output the first and second clock signals such that, in each period of the first and second clock signals, a time period in which their polarities are mutually reversed is longer than a time period in which their polarities are mutually identical.

US Pat. No. 10,250,789

ELECTRONIC DEVICE WITH MODULATED LIGHT FLASH OPERATION FOR ROLLING SHUTTER IMAGE SENSOR

GOOGLE LLC, Mountain Vie...

1. A user-held electronic device comprising:a modulated light projector;
an electronic rolling shutter (ERS) imaging camera having a sensor array with pixel rows;
a controller to control the modulated light projector to project a modulated light flash into an environment of the user-held electronic device during capture of a first image frame by the sensor array and to refrain from projecting any modulated light flash during capture of a second image frame adjacent to the capture of the first image frame, and wherein the controller is to initiate the modulated light flash while each pixel row of the sensor array is exposed for gathering light for the first image frame and to terminate the modulated light flash before any pixel row of the sensor array ceases to be exposed for gathering light for the first image frame; and
a processor to determine modulated light image data based on the first image frame, to determine one or more two-dimensional (2D) spatial features of the environment based on visible light imagery and match the 2D spatial features with a corresponding depth reading based on a 2D feature analysis of the modulated light image data, to identify one or more three-dimensional (3D) spatial features from the 2D spatial features, and to determine a pose of the user-held electronic device relative to the environment based on the one or more 3D spatial features; and
wherein the controller is to control the ERS imaging camera to selectively:
initiate exposure of each of the pixel rows of the sensor array substantially simultaneously for capture of the first image frame and to initiate the modulated light flash in response to the initiation of the exposure of the pixel rows substantially simultaneously; or
capture the first and second image frames at a frame rate greater than a nominal frame rate for capture of visible light imagery by the user-held electronic device.

US Pat. No. 10,250,788

CAMERA MODULE WITH HEAT DISSIPATION ARRANGEMENT AND MANUFACTURING METHOD THEREOF

Ningbo Sunny Opotech Co.,...

1. A camera module, comprising:an optical lens unit, and
a light sensing unit provided along a light outgoing path of said optical lens unit so that said light sensing unit is able to sense light emitted from said optical lens unit,
wherein said light sensing unit comprises:
a photoelectric converting element, and
a conducting unit connected to said photoelectric converting element, wherein said conducting unit transfers electrical signals converted and generated during an operation of said photoelectric converting element, and conducts heat generated during the operation of said photoelectric converting element to the surroundings, wherein said conducting unit comprises a substrate and a circuit board overlapped and contacted with said substrate at a position that said circuit board is positioned between said substrate and said photoelectric converting element, wherein said substrate is contacted with said photoelectric converting element for dissipating the heat from said photoelectric converting element during operation thereof, wherein said substrate has a raised portion and said circuit board has a through hole that said raised portion of said substrate is engaged and contacted with said through hole of said circuit board to increase a heat dissipating area therebetween and to contact with said photoelectric converting element, wherein said substrate and said photoelectric converting element are closely attached to conduct the heat generated during operation of said photoelectric converting element, and said circuit board is electrically coupled with said photoelectric converting element to transfer electrical signals converted and generated during the operation of said photoelectric converting element.

US Pat. No. 10,250,785

ELECTRONIC APPARATUS CAPABLE OF EFFICIENT AND UNIFORM HEAT DISSIPATION

Canon Kabushiki Kaisha, ...

1. An electronic apparatus provided with a battery accommodation section that can accommodate a battery, comprising:a substrate that has an electric component, which forms a heat source, mounted thereon;
a heat dissipating member that is formed in the battery accommodation section so as to receive and hold the battery when the battery is accommodated in the accommodation section;
a first heat conductive member that has one end thermally connected to the electric component and another end thermally connected to said heat dissipating member; and
an insertion port member that forms part of the battery accommodation section and is connected to said heat dissipating member at a location closer to a battery insertion port through which the battery is inserted than a remaining portion of said heat dissipating member,
wherein said insertion port member is formed of a material lower in thermal conductivity than said heat dissipating member.

US Pat. No. 10,250,784

CAMERA ASSEMBLY WITH SHIELDED IMAGER CIRCUIT

GENTEX CORPORATION, Zeel...

1. An imager assembly for a vehicle comprising:an optic lens comprising a proximal end portion, a distal end portion, and a longitudinal axis extending there-between;
an imager circuit comprising at least one circuit comprising an imager; and
a lens holder formed of a conductive material, wherein the lens holder forms:
a shielded cavity, wherein the shielded cavity is configured to receive the imager circuit; and
a mounting surface configured to receive a mating surface of the imager circuit, wherein the mounting surface is configured to align the longitudinal axis of the optic lens with the imager, and wherein the lens holder is configured to shield the imager circuit from electromagnetic interference and prevent emissions from radiating out of the imager circuit from the lens holder; and
a housing configured to receive the lens holder, wherein a portion of the lens is positioned between the lens holder and the housing.

US Pat. No. 10,250,783

MAGNETIC MOUNT ASSEMBLY OF A CAMERA

GOOGLE LLC, Mountain Vie...

1. A physical assembly, comprising:a magnet mount for physically receiving a physical module, the physical module including a housing having a rear surface of a first shape, the magnet mount including:
a first surface configured to attach to a mounting surface directly or indirectly;
a second surface opposing the first surface, the second surface having a second shape that is substantially complementary to the first shape of the rear surface of the housing of the physical module, the second surface being configured to engage the rear surface of the housing of the physical module; and
a magnetic material disposed between the first and second surfaces and configured to magnetically couple to a magnetic material of the physical module such that when the physical module is magnetically coupled to the magnet mount an adjustable union between the magnet mount and the physical module is formed permitting adjustment of an angle of orientation of the physical module with respect to the magnet mount, the angle of orientation being limited by a stopping structure of the physical module, wherein the magnetic material of the physical module has an area that is substantially greater than that of a cross section of the magnetic material included in the magnet mount.

US Pat. No. 10,250,782

CAMERA MODULE, ELECTRONIC DEVICE, AND METHOD OF OPERATING THE SAME USING PRE-ESTIMATED LENS-CUSTOMIZED POINT SPREAD FUNCTION (PSF)

Samsung Electro-Mechanics...

1. A camera module comprising:
a lens module comprising lenses; and
a sensor module comprising an image sensor configured to sense an image input through the lens module, and a logic unit configured to process the image sensed by the image sensor,
wherein the logic unit stores a lens-customized point spread function (PSF) pre-estimated to correct blur characteristics of the lenses within the lens module, and
during a production process of the camera module, the lens-customized PSF is estimated using a sample image that is obtained by photographing an image chart through the lens module and a reference image, without blur, that is obtained independent of the lens module and that corresponds to the image chart.
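The pre-estimation the claim describes can be illustrated with a short sketch: assuming a linear blur model in which the sample image equals the reference image convolved with the lens-customized PSF, the PSF can be recovered by regularized (Wiener-style) spectral division. The algorithm, regularizer, and function names below are illustrative assumptions; the claim does not specify how the PSF is estimated.

```python
import numpy as np

def estimate_psf(sample, reference, eps=1e-3):
    """Estimate a point spread function (PSF) from a blurred sample image
    and a sharp reference image of the same chart.

    Model: sample = reference (*) psf, so in the frequency domain the PSF
    spectrum is S / R. A small regularizer eps keeps the division stable
    where the reference spectrum is weak (Wiener-style division)."""
    S = np.fft.fft2(sample)
    R = np.fft.fft2(reference)
    H = S * np.conj(R) / (np.abs(R) ** 2 + eps)   # regularized division
    psf = np.real(np.fft.ifft2(H))
    psf = np.fft.fftshift(psf)                    # center the kernel
    psf = np.clip(psf, 0.0, None)                 # a PSF is non-negative
    return psf / psf.sum()                        # normalize to unit energy
```

At production time the logic unit would store the resulting kernel; at run time its inverse (e.g., Wiener deconvolution) would be applied to sensed images.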

US Pat. No. 10,250,781

INSTRUMENT FOR LOCATING A NOISE SOURCE IN A CATV SYSTEM AND METHOD OF USING SAME

VIAVI SOLUTIONS, INC., ...

1. An instrument for locating noise in a CATV system, the instrument comprising:
an outer body including a first connector configured to be coupled to a port of a CATV tap and a second connector configured to be coupled to a signal level meter,
an electrical circuit assembly positioned in the outer body, the electrical circuit assembly including a filter configured to suppress noise frequencies in the CATV system and a shaft extending outwardly from the first connector to a shaft tip, and
a mechanical biasing element coupled to the electrical circuit assembly opposite the shaft,
wherein the electrical circuit assembly is moveable within the outer body along a first axis between (i) a first position at which a first distance is defined along the first axis between the shaft tip and an end of the first connector and (ii) a second position at which a second distance less than the first distance is defined along the first axis between the shaft tip and the end of the first connector, and
wherein the mechanical biasing element is operable to bias the electrical circuit assembly in the first position in the outer body.

US Pat. No. 10,250,780

VIDEO PROCESSING METHOD, VIDEO PROCESSING CIRCUIT, LIQUID CRYSTAL DISPLAY, AND ELECTRONIC APPARATUS

SEIKO EPSON CORPORATION, ...

1. A video processing method which corrects a video signal specifying a voltage to be applied to a liquid crystal element for each pixel and defines the voltage to be applied to the liquid crystal element on the basis of the corrected video signal, the method comprising:
a detection step of detecting a first pixel whose applied voltage specified by the video signal falls below a first voltage and a second pixel whose applied voltage exceeds a second voltage higher than the first voltage, the second pixel being adjacent to the first pixel, wherein in the detecting step, a risk boundary which is a portion of a boundary between the first pixel and the second pixel is detected, and a time period in which the risk boundary is present at a same position is shorter than a time period of one frame;
a correction step of correcting the video signal, which specifies a voltage to be applied to a liquid crystal element corresponding to at least one of the first and second pixels such that the voltage to be applied to the liquid crystal element in at least one field of a plurality of fields constituting one frame of time is different from the voltage to be applied to the liquid crystal element in other field of the plurality of fields constituting one frame.

US Pat. No. 10,250,779

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

FUJIFILM Corporation, Mi...

1. An image processing apparatus comprising:
an image reading unit that reads a target printed matter to acquire first read image data indicating a read image of the target printed matter;
an image matching unit that performs a process of matching a positional relationship between read image data which is any one of the first read image data and second read image data obtained by color conversion of the first read image data and original document image data of the target printed matter;
a statistical processing unit that generates statistical information that reflects a distribution of read image signal values of the read image data in each image region of the read image data corresponding to an image region having the same original document image signal values in the original document image data; and
a mismatching detection unit that detects color mismatching between the original document image data and the target printed matter on the basis of the statistical information.
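The statistical processing and mismatch detection steps can be sketched minimally, assuming the images are already position-matched, pixel values are grayscale, and mismatch is flagged by a per-value mean deviation test. The statistic, tolerance, and function name are assumptions; the claim only requires statistics reflecting the distribution of read values for each original document value.

```python
import numpy as np

def detect_color_mismatch(original, scanned, tol=10.0):
    """For each distinct signal value in the original document image,
    gather the scanned values at the same positions (images assumed
    pre-aligned) and flag the value as mismatched when the mean scanned
    value deviates from it by more than `tol`.

    Returns a dict: original value -> (mean scanned value, mismatched?)."""
    report = {}
    for v in np.unique(original):
        group = scanned[original == v]          # distribution of read values
        mean = float(group.mean())
        report[int(v)] = (mean, bool(abs(mean - v) > tol))
    return report
```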

US Pat. No. 10,250,778

DISTRIBUTED SMART CARD READER FOR MULTIFUNCTION PRINTER

Xerox Corporation, Norwa...

13. A method for distributed smart card authentication comprising:
selecting, by an administrator, at least one device that can be authenticated by said smart card reader;
setting, by an administrator, a secure password for each of said at least one selected devices that can be authenticated by said smart card reader;
reading user authentication information on a smart card with a smart card reader;
sending said user authentication information from said smart card reader to a server;
storing said user authentication information on said server wherein said server is accessible by each of said selected devices that can be authenticated by said smart card reader;
providing said stored user authentication information to each of said selected devices where authentication is desired upon request; and
authenticating a user of said at least one selected device where authentication is desired according to said stored user authentication information, and user authentication information provided at said selected device by said user.

US Pat. No. 10,250,777

IMAGE PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR IMAGE DIFFERENCE CONFIRMATION

FUJI XEROX CO., LTD., To...

1. An image processing apparatus comprising:
a display that displays a first image and a second image simultaneously, the first image and the second image being distinct images derived from distinct image data; and
a processor programmed to:
specify a first area which is at least a portion of the first image,
display a second area on the second image, the second area corresponding to the first area,
specify and display a first reference in the first area on the first image, the first reference being different from the first image,
display a second reference in the second area on the second image, the second reference being different from the second image,
wherein
a relative position of the first reference with respect to the first area is automatically moved to match a relative position of the second reference with respect to the second area each time the second reference is moved, and the relative position of the second reference with respect to the second area is automatically moved to match the relative position of the first reference with respect to the first area each time the first reference is moved, thereby facilitating confirmation of the relative positions, and
display a third image using third image data generated based on a difference detected between a first area image of the first area of the first image and a second area image of the second area of the second image, the third image showing at least one of a portion of the first image which is not present in the second image and a portion of the second image which is not present in the first image, thereby facilitating confirmation of the difference between the first area image of the first area and the second area image of the second area.

US Pat. No. 10,250,776

ENHANCING GRAPH VISUALIZATION WITH SUPPLEMENTAL DATA

International Business Ma...

1. A method for communicating supplemental data for nodes of a graph representing a system, said method comprising:
dividing, by one or more processors of a computer system, a visual representation of the graph into a plurality of portions, each portion of the visual representation of the graph being a rectangle corresponding to a respective node of the graph, said visual representation being an area of a computer screen displaying the graph, wherein said dividing the visual representation of the graph comprises determining a number of pixels in a plurality of rectangles displayed on the computer screen along with the graph, based on a number of pixels of the graph and a number of nodes of the graph;
encoding steganographically, by the one or more processors, supplemental data for each respective node, together with an offset distance between a reference point of each respective node and a reference point of the supplemental data for each respective node, into the portion of the graphical representation corresponding to each respective node;
prioritizing, by the one or more processors, fixed length attributes related to the respective node, and
encoding steganographically, by the one or more processors, the attributes into the rectangle corresponding to the respective node, said encoding starting with a highest priority attribute of the fixed length attributes and continuing sequentially according to a priority of the remaining attributes of the fixed length attributes.
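Steganographic encoding into a node's rectangle can be illustrated with a plain least-significant-bit scheme. The LSB approach, the one-bit-per-pixel layout, and the function names are assumptions for illustration; the claim does not fix a particular embedding, and the prioritized attribute ordering and offset would be serialized into the payload.

```python
import numpy as np

def embed_lsb(rect, payload):
    """Embed `payload` bytes into the least-significant bits of a
    rectangle of 8-bit pixels, one bit per pixel, as a minimal stand-in
    for the steganographic encoding the claim describes."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = rect.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("rectangle too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(rect.shape)

def extract_lsb(rect, nbytes):
    """Recover `nbytes` previously embedded with embed_lsb."""
    bits = rect.reshape(-1)[: nbytes * 8] & 1
    return np.packbits(bits).tobytes()
```

Because only the lowest bit of each pixel changes, the rendered graph remains visually unchanged while a reader that knows the rectangle boundaries can recover the per-node data.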

US Pat. No. 10,250,775

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND REMOTE-SCAN IMAGE PROCESSING SYSTEM USING THE SAME

RICOH COMPANY, LTD., Tok...

1. An image processing device, comprising:
a scanner to scan a document;
a network interface;
a user interface including a display;
a memory; and
a processor configured to conduct an image data storing process and an image data sending process;
wherein in conducting the image data storing process, the processor is configured to:
display one or more first display screens to receive first inputs by a user, the first inputs including at least one or more scan condition settings and a document name, and the one or more first display screens including a scan setting screen to receive the at least one or more scan condition settings and a document name input screen to receive the document name;
control the scanner to scan the document in accordance with the one or more scan condition settings that have been received via the one or more first display screens;
generate image data based on the scanned document; and
store the generated image data in the memory in association with the document name which has been received via the one or more first display screens; and
wherein in conducting the image data sending process, the processor is configured to:
display one or more second display screens to receive second inputs by the user, the second inputs including at least a selection of at least one image data to be sent among a plurality of image data stored in the memory and a selection of a destination to which the at least one image data is to be sent, the plurality of image data including at least the generated image data; and
control the network interface to send the at least one image data to the destination.

US Pat. No. 10,250,774

CONTENT TRANSMITTING METHOD AND APPARATUS THEREFOR

HP Printing Korea Co., Lt...

1. An electronic apparatus comprising:
a transceiver;
a display;
a memory; and
at least one processor to:
receive a user input for selecting a user interface (UI) element displayed on a screen of the display,
generate a code, comprising identification information of the electronic apparatus, based on the user input of selecting the UI element,
receive receiver information transmitted from an external device that received the code,
control the display to display the receiver information on a first area of the screen of the display, and
control the transceiver to transmit content to an external source based on the receiver information.

US Pat. No. 10,250,773

IMAGE READING DEVICE, METHOD OF ADJUSTING DISTANCE BETWEEN DOCUMENT AND LIGHT RECEIVING SENSOR IN THE DEVICE, AND DISTANCE ADJUSTMENT PROGRAM

KONICA MINOLTA, INC., Ch...

1. An image reading device comprising:
a plurality of light receiving sensors, which use an optical shrink system, configured to read an image of a document conveyed in a sub-scanning direction and be arranged with an interval in a main scanning direction, the plurality of light receiving sensors arranged so that border parts of reading areas read by adjacent two of the light receiving sensors in the main scanning direction correspond to each other on or under a reference conveyance plane of the document;
at least one pair of document conveying rollers configured to be provided in front and back of the plurality of light receiving sensors in the sub-scanning direction;
a detection unit configured to detect an image overlap amount in an image overlap area which occurs when a part of the document in the main scanning direction is redundantly read by the adjacent two light receiving sensors in a case where the document is conveyed at a position apart from the reference conveyance plane as being away from the light receiving sensors and/or an image missing amount in an image missing area which occurs when a part of the document in the main scanning direction is not read by any of the adjacent two light receiving sensors in a case where the document is conveyed at a position apart from the reference conveyance plane as being closer to the light receiving sensor, from a plurality of parts in the main scanning direction of the image data of the document read by the light receiving sensors;
an adjustment amount determination unit configured to determine an adjustment amount for adjusting a distance between the document and the light receiving sensors in at least one part of the document in the main scanning direction based on the image overlap amount and/or image missing amount detected by the detection unit; and
a drive unit configured to adjust the distance between the document and the light receiving sensors by displacing at least one of the document conveying rollers and the light receiving sensors according to the adjustment amount determined by the adjustment amount determination unit.

US Pat. No. 10,250,772

DEVICE MANAGEMENT APPARATUS, DEVICE MANAGEMENT SYSTEM, AND DEVICE MANAGEMENT METHOD

RICOH COMPANY, LTD., Tok...

1. A device management apparatus configured to manage one or more managed devices, each device configured to connect to the device management apparatus via a network and configured to include one or more associated functions provided therein, each function executed according to one or more corresponding security setting items which are required to be set to respective security setting values for the device to operate the function, the device management apparatus comprising circuitry configured to perform a method comprising:
(a) registering device management data in a storage device, the device management data including, for each associated function amongst the one or more functions provided in a managed device, use setting information indicating whether the associated function provided in the managed device is currently available for the managed device to operate;
(b) presenting a selection screen to display the use setting information for the functions provided in the managed device and to receive a use change instruction of a specified function provided in the managed device, the use change instruction permitting the use setting information corresponding to the specified function to be changed, in a case that, prior to the use change instruction having been received, the specified function was unavailable to the managed device to operate;
(c) determining, in response to receiving the use change instruction in (b), one or more security setting items of the managed device required to be set to respective security setting values, for the specified function provided by the managed device to be executed; and
(d) changing, upon determining in (c) that the one or more security setting items of the managed device are required to be set to respective security setting values to execute the specified function, a first security setting item of the managed device, amongst said one or more security setting items, from a first setting value to a second setting value different from the first setting value of the first security setting item of the managed device, said first security setting item of the managed device being required to be set to said second setting value to enable the managed device to execute the specified function which was unavailable for the managed device to operate prior to the use change instruction being received in (b).
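Steps (c) and (d) reduce to a lookup-and-update over security setting items: when a function is switched on, find the security items it requires and change any whose current value differs. The data shapes, keys, and function name below are assumptions for illustration.

```python
def enable_function(device, function, required_settings):
    """Enable `function` on `device`, first forcing every security
    setting item the function requires to its required value.

    device: {"security": {item: value}, "functions": {name: enabled}}
    required_settings: {function name: {item: required value}}"""
    for item, needed_value in required_settings.get(function, {}).items():
        if device["security"].get(item) != needed_value:
            device["security"][item] = needed_value   # step (d): change value
    device["functions"][function] = True              # make function available
    return device
```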

US Pat. No. 10,250,771

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD

Ricoh Company, Ltd., Tok...

1. An information processing system including at least one information processing apparatus, the system comprising:
a memory to store,
for each of a plurality of flows, each flow containing a series of processes executable to electronic data, flow information that defines program identification information identifying one or more programs executing the flow of series of processes and an execution order of the one or more programs, in association with flow identification information for identifying the flow information, and
for each of a plurality of events, event identification information for identifying the event that triggers a start of execution of the flow of the series of processes, in association with the flow identification information and, in association with said each of the plurality of events, condition information indicating a condition for starting the execution of the series of processes associated with said each of the plurality of events; and
circuitry configured to,
based on an occurrence of an event, read the flow identification information associated with the event identification information identifying the occurred event,
based on the occurrence of the event identified by the event identification information, determine whether a condition indicated by the condition information stored in the memory in association with the event identification information is satisfied, and
based on a determination that the condition is satisfied, control the at least one information processing apparatus to execute each of the one or more programs in the execution order, as defined in the flow information identified by the read flow identification information, to perform the flow of series of processes executable on the electronic data.
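The stored flow information, event table, and condition check can be sketched with plain dictionaries; all names, data shapes, and the example programs below are illustrative assumptions rather than anything the claim specifies.

```python
# Flow information: program IDs in their defined execution order.
flows = {
    "flow-1": ["ocr", "translate", "store"],
}

# Event identification -> (flow identification, start condition).
event_table = {
    "file-uploaded": ("flow-1", lambda data: data.endswith(".pdf")),
}

# Program registry (stand-ins for real processing programs).
programs = {
    "ocr": lambda d: d + "|ocr",
    "translate": lambda d: d + "|translate",
    "store": lambda d: d + "|store",
}

def on_event(event_id, data):
    """On an event occurrence: read the flow ID and condition registered
    for the event; if the condition is satisfied, execute the flow's
    programs in their defined order over the electronic data."""
    flow_id, condition = event_table[event_id]
    if not condition(data):
        return None                      # condition not satisfied: no flow
    result = data
    for program_id in flows[flow_id]:    # execution order from flow info
        result = programs[program_id](result)
    return result
```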

US Pat. No. 10,250,770

IMAGE READING DEVICE, IMAGE FORMING APPARATUS, AND METHOD FOR CONTROLLING IMAGE READING DEVICE

KYOCERA Document Solution...

1. An image reading device comprising:
a contact glass having one side on which a document is fed;
a first lamp disposed on the other side of the contact glass so as to emit light to the one side of the contact glass;
an image sensor disposed on the other side of the contact glass so as to receive light emitted from the first lamp and reflected by the document;
a light absorbing member disposed on the one side of the contact glass at a position irradiated with light emitted from the first lamp and transmitted by the contact glass;
a second lamp disposed on the one side of the contact glass at a position farther from the contact glass than the document feeding path so as to emit light to the image sensor;
a control unit comprising a circuit board including a scanner CPU and a scanner memory, the control unit being configured to control to read with the reflected light by turning on the first lamp while turning off the second lamp in a first period of a reading period of one line when reading a document, and to control to read with the transmitted light by turning on the second lamp while turning off the first lamp in a second period; and
a determining unit comprising a determination circuit, the determining unit being configured to determine whether each pixel in the first image data is a pixel with document data that is a pixel having read document data or a pixel without document data that is a pixel having no read document data, on the basis of first image data obtained by reading in the first period and second image data obtained by reading in the second period, to determine a pixel in the first image data having a density value brighter than a predetermined first threshold value to be the pixel with document data, to check a dark pixel in the second image data having a density value darker than a predetermined second threshold value, and to determine a pixel in the first image data at a position corresponding to the dark pixel to be the pixel with document data.
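The determining unit's two-part rule can be sketched as a threshold test over the two captures: a pixel carries document data if it reads brighter than the first threshold under reflected light, or if the transmitted-light capture is dark at the same position (the document blocking the second lamp). The convention that higher values mean brighter, and the threshold values, are assumptions.

```python
import numpy as np

def classify_document_pixels(first, second, t1=128, t2=64):
    """Return a boolean mask of pixels with document data.

    first:  reflected-light capture (first lamp on), higher = brighter
    second: transmitted-light capture (second lamp on)
    A pixel is document data if it is brighter than t1 in `first`,
    OR darker than t2 in `second` at the corresponding position."""
    bright_in_first = first > t1
    dark_in_second = second < t2
    return bright_in_first | dark_in_second
```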

US Pat. No. 10,250,769

IMAGE FORMING APPARATUS USING DECOLORABLE AND NON-DECOLORABLE TONER FOR PRINTING ACQUIRED IMAGES

Kabushiki Kaisha Toshiba,...

1. An image processing apparatus comprising:
a reading unit configured to acquire image data by reading an image on a first sheet;
an image forming unit configured to form an image from the image data on a second sheet using a first color material and a second color material;
an input unit configured to receive a selection of one of a first copying process, a second copying process, and a third copying process; and
a controller configured to:
when the first copying process is selected, cause the image forming unit to print the image data on the second sheet using the first color material;
when the second copying process is selected, perform text recognition on the image data, and cause the image forming unit to print the image data on the second sheet using the first color material and the second color material based on a result of the text recognition; and
when the third copying process is selected, cause the image forming unit to print the image data on the second sheet using the second color material.

US Pat. No. 10,250,768

PRINT MEDIA SIZE/COLOR DETECTION USING SCANNER

Xerox Corporation, Norwa...

1. A printer comprising:
a print media storage device;
a drawer sensor positioned to detect opening of said print media storage device;
a processor electrically connected to said drawer sensor;
a document scanner electrically connected to said processor;
tray sensors electrically connected to said processor; and
a user interface electrically connected to said processor,
said user interface displays instructions to use said document scanner for scanning a sheet of print media of a stack of print media being added to said print media storage device, in response to said drawer sensor detecting said opening of said print media storage device,
said processor determines whether said tray sensors detect a media type of said stack of print media,
said instructions are displayed on said user interface only if said tray sensors cannot detect said media type of said stack of print media,
said document scanner generates an electronic image of said sheet of print media from said scanning of said sheet of print media,
said processor evaluates said electronic image of said sheet of print media to identify said media type and a color of said stack of print media being added to said print media storage device, and
said processor records that said print media storage device contains said stack of print media having said media type and said color.

US Pat. No. 10,250,767

IMAGE FORMING APPARATUS CALCULATING MOVEMENT OF RECORDING MEDIUM

Konica Minolta, Inc., Ch...

1. An image forming apparatus comprising:
a conveying path on which a recording medium is conveyed;
an image capturer that includes a light source and radiates light from the light source to capture the recording medium being conveyed at different timings to generate at least two images including a first image and a second image; and
a movement amount calculator that calculates a movement amount of the recording medium between capturing timings of the first and second images, wherein
the movement amount calculator:
calculates an index distance which is a distance in a conveying direction of a pattern formed by reflected light from the recording medium based on at least one image out of the first and second images;
calculates a ratio between the index distance and a reference distance used for comparison with the index distance; and
calculates a movement amount of the recording medium between the capturing timings of the first and second images based on the ratio, the first image, and the second image.
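One reading of the calculator: estimate the pixel shift between the two captures, then scale it by the ratio of the reference distance to the measured index distance to compensate for the medium's height above the sensor. The cross-correlation shift estimate and the particular scaling rule are assumptions for illustration; the claim only states that the movement amount is based on the ratio and the two images.

```python
import numpy as np

def movement_amount(first, second, reference_distance, index_distance):
    """Movement of the medium between the two capture timings.

    Collapses each capture to a 1-D profile along the conveying axis,
    finds the shift of `second` relative to `first` by cross-correlation,
    and scales it by reference_distance / index_distance."""
    a = first.mean(axis=1) - first.mean()     # zero-mean row profiles
    b = second.mean(axis=1) - second.mean()
    corr = np.correlate(b, a, mode="full")
    shift_px = np.argmax(corr) - (len(a) - 1)  # peak offset = pixel shift
    ratio = reference_distance / index_distance
    return shift_px * ratio
```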

US Pat. No. 10,250,766

METHOD AND DEVICE FOR CHANGING DISPLAY LANGUAGE OF APPLICATION

Ricoh Company, Ltd., Tok...

1. An imaging device including a main unit implementing an imaging function, and an information processing terminal configured to act as an operation unit, the information processing terminal comprising a hardware processor and a hardware memory storing a program that causes the hardware processor to execute a process including:
obtaining a predetermined selectable language of an operating system and a language of a character string added to an application program by a terminology module;
displaying a list of selectable language candidates on a language setting screen to enable a user to select a display language of the operating system, the selectable language candidates including the predetermined selectable language of the operating system and the language of the character string added to the application program by the terminology module; and
changing the display language of the operating system into a language selected from the list of the selectable language candidates by the user.

US Pat. No. 10,250,765

IMAGE FORMING APPARATUS, IMAGE FORMING SYSTEM, AND IMAGE FORMING METHOD

RICOH COMPANY, LTD., Tok...

1. An image forming apparatus, comprising:
a display; and
circuitry to:
control the display to display, in a predetermined display format, multiple display components corresponding to multiple setting items, the multiple setting items relating to one of a plurality of functions implemented in the image forming apparatus, the predetermined display format causing the display not to display at least a first group of the multiple display components when the first group of the multiple display components is configured as non-displayed;
modify setting values for respective ones of the multiple setting items based on setting values for a preregistered combination of setting items in response to selecting the preregistered combination of setting items by a user, the preregistered combination of setting items corresponding to a second group of the multiple display components to be configured en bloc; and
output a notification to a user indicating that a setting value for a non-displayed setting item has been modified when one of the second group of display components corresponding to the setting values modified in response to selecting the preregistered combination of setting items by a user is also in the first group of the multiple display components that is configured as non-displayed.

US Pat. No. 10,250,764

DISPLAY SYSTEM, CONTROL DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

FUJI XEROX CO., LTD., To...

1. A display system comprising:
a head-mounted augmented reality (AR) display apparatus including a display that displays a virtual image, the head-mounted AR display apparatus enabling a user to see an object by using the head-mounted AR display apparatus, the object being actually located on a line extending from a line of sight of the user;
at least one recording medium container that accommodates a recording medium on which an image is to be formed; and
a display controller of the head-mounted AR display apparatus that displays, on the display, the virtual image of the image to be recorded on the recording medium, the virtual image being superimposed on the recording medium to be placed in the recording medium container, the display controller displaying the virtual image when the recording medium container is aligned with the line of sight of the user.

US Pat. No. 10,250,763

INPUT DEVICE, IMAGE FORMING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

FUJI XEROX CO., LTD., To...

1. An input device comprising:
a display having a first outer edge and a second outer edge;
a protrusion located along the first outer edge;
an operation detector configured to detect an operation input within a detection region, the detection region corresponding to a display region of the display, the detection region including:
a first function enabling area located along the first outer edge; and
a second function enabling area located along the second outer edge, a width of the second function enabling area in a direction perpendicular to the second outer edge being less than a width of the first function enabling area in a direction perpendicular to the first outer edge; and
a processor operatively connected to the display and operation detector, the processor programmed to:
display an operation screen on the display region of the display;
determine whether the detected operation input is a continuous movement across the detection region;
in response to a determination that the detected operation input is a continuous movement across the detection region, determine a direction of the continuous movement;
in response to a determination that the direction of the continuous movement is oriented towards the protrusion, enable a function corresponding to the operation input once the continuous movement enters the first function enabling area; and
in response to a determination that the direction of the continuous movement is oriented away from the protrusion, enable the function corresponding to the operation input once the continuous movement enters the second function enabling area.
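The direction-dependent enabling rule can be sketched in one dimension, with the protrusion along the x = 0 edge: a swipe toward the protrusion fires when it enters the wider first area, a swipe away from it when it enters the narrower second area. The widths and coordinate layout are assumptions.

```python
def swipe_enables(path_x, width, first_w=40, second_w=12):
    """Decide whether a continuous swipe enables its function.

    path_x:  x coordinates of the detected swipe, in order
    width:   extent of the detection region along x
    The first enabling area is [0, first_w] (protrusion edge); the
    second, narrower area is [width - second_w, width]."""
    toward_protrusion = path_x[-1] < path_x[0]   # x decreasing = toward x=0
    if toward_protrusion:
        return any(x <= first_w for x in path_x)
    return any(x >= width - second_w for x in path_x)
```

The asymmetric widths reflect the claim's rationale: the protrusion gives tactile confirmation on one edge, so the opposite edge needs a narrower (harder to hit accidentally) enabling area.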

US Pat. No. 10,250,762

MOBILE CLOUD-BASED REGISTRATION SYSTEM

Electronic Exposition Inf...

1. A mobile registration platform configured to allow event attendees to print customized event badges, the mobile registration platform comprising:
a transceiver configured to transmit and receive registration data, the registration data comprising at least a verification associated with a specified attendee being registered for an event, and an indication that a badge is to be printed at a specified printer according to a badge design that is customized to the attendee and the event;
a printer controller configured to receive the indication from the transceiver and generate printer commands readable by a printer to print the badge according to the badge design; and
a plurality of printers, wherein the specified printer is configured to print the badge according to the badge design,
wherein the transceiver is further configured to request a geolocation signal from the attendee to verify that the attendee is actually at the event before printing the badge.

US Pat. No. 10,250,761

IMAGE HANDLING APPARATUS, IMAGE PROCESSING SYSTEM, IMAGE PROCESS CONTROLLING METHOD, AND IMAGE PROCESS CONTROLLING PROGRAM PRODUCT

Ricoh Company, Ltd., Tok...

1. An image handling apparatus, comprising:
an operation panel displaying one or more input screens, including a web browser, allowing input of image handling information including one or more scan settings for handling an image, the one or more input screens being displayed based on HTML data received from a web application of an external web server capable of connecting through a network, and receiving a user instruction of an image generation from the web browser displaying at least one of the input screens;
a hardware resource including a scanner for generating an image; and
processing circuitry configured to
transmit a first message including a scan setting from among the one or more scan settings, in response to the input at the web browser displaying at least one of the input screens, to the web application,
transmit a second message to the web application in response to receiving the user instruction of an image generation,
receive image generation information for the scanner to execute an image generation including the one or more scan settings, from the web application, in response to the transmitted second message, transmitted to the web application from the web browser included in the image handling apparatus, and
control the scanner to execute the image generation, wherein
the first message including the scan setting is transmitted to the web application a plurality of times, in response to each input of the scan setting on at least one of the input screens, and
the image generation information including the plurality of scan settings, is transmitted from the web application to control the scanner, in response to the user instruction of the image generation on the image handling apparatus.
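
The claimed exchange can be pictured as a small session object: one "first message" per scan-setting input, then a "second message" whose reply carries all accumulated settings. A minimal Python sketch; the class, method, and field names are invented for illustration and are not from the patent:

```python
class WebScanSession:
    """Illustrative sketch of the browser/web-application message flow
    described in the claim: each setting input is sent on its own, and
    the generation request returns the accumulated settings."""

    def __init__(self):
        self.settings = {}  # settings accumulated on the web-application side

    def send_setting(self, key, value):
        # One "first message" per scan-setting input on an input screen.
        self.settings[key] = value
        return {"type": "setting", "name": key, "value": value}

    def request_generation(self):
        # "Second message": the web application replies with image
        # generation information containing every setting sent so far.
        return {"type": "generate", "settings": dict(self.settings)}
```

Under this sketch, settings entered one at a time all appear in the final generation information, mirroring the "plurality of times" language in the claim.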

US Pat. No. 10,250,760

IMAGING DEVICE, IMAGING SYSTEM, AND IMAGING METHOD

Olympus Corporation, Tok...

1. An imaging device comprising:
an imaging sensor;
a processor including a first image processing section which generates a non-magnified image using image processing that does not perform magnification processing from image data that was acquired by the imaging sensor, at the time of movie shooting or at the time of live view display, and a magnification processing section which magnifies a part of the non-magnified image to generate a magnified image;
a display device to display an image;
a transmission circuit to transmit an image to an external device that has a screen that is larger than the display device, and that displays an image in high definition;
a transmission switching section; and
a controller,
wherein the display device includes a display switching section to control which one of the magnified image and the non-magnified image is displayed,
wherein the transmission switching section is configured to control which one of the magnified image and the non-magnified image is transmitted and the controller has a photographing mode setting section which sets one of a still image photographing mode and a moving image photographing mode, and
wherein the transmission switching section, when a moving image photographing mode is set and a setting is made to display the magnified image on the display device, controls the transmission circuit to transmit the non-magnified image data, and when a still image photographing mode is set and the magnified image is displayed on the display device, controls the transmission circuit to transmit the magnified image.
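
The transmission-switching rule in the last clause is a simple two-input decision: photographing mode plus local display state select which image goes to the external device. A hedged sketch, with the string labels ("movie", "still") chosen only for illustration:

```python
def select_transmission(mode: str, display_magnified: bool) -> str:
    """Pick which image stream to transmit to the external display,
    following the mode/display combinations named in the claim."""
    if mode == "movie" and display_magnified:
        # Moving-image mode with a magnified local display:
        # transmit the full, non-magnified frame.
        return "non-magnified"
    if mode == "still" and display_magnified:
        # Still mode while the magnified image is shown locally:
        # transmit the magnified image.
        return "magnified"
    # Combinations the claim leaves open: transmit what is displayed.
    return "magnified" if display_magnified else "non-magnified"
```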

US Pat. No. 10,250,759

COMMUNICATION TERMINAL DEVICE, OUTGOING CALL CONTROL METHOD, AND PROGRAM

NTT DOCOMO, INC., Tokyo ...

1. A mobile communication terminal device comprising:
a specifying unit that specifies a voice call partner;
an outgoing call control unit that calls the specified voice call partner;
a measurement unit that measures an orientation or a displacement of the communication terminal device;
a proximity detection unit that detects when a head area of a user is near the communication terminal device; and
a memory that stores a plurality of patterns of change over time of the orientation of the communication terminal device and associates each pattern of change over time with a respective voice call partner, wherein each pattern of change over time defines at least the orientation of the communication terminal device when the communication terminal device is near the head area of the user,
wherein in the case where the head area is detected as being near, the specifying unit specifies the voice call partner on the basis of the orientation or displacement measured before the head area was detected as being near, or the specifying unit specifies the voice call partner on the basis of one of the plurality of patterns of change over time while the head area was detected as being near; and
wherein the specifying unit specifies the voice call partner by an operation of at least one tap on the rear surface side of the communication terminal device.
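
The memory-lookup step amounts to matching an observed orientation-over-time sequence against stored patterns, each mapped to a call partner. A minimal sketch under the simplifying assumption that orientations are discrete labels (all names illustrative):

```python
def specify_partner(observed, stored_patterns):
    """Match an observed orientation sequence against stored patterns
    of change over time; each pattern is associated with a voice call
    partner, as in the claimed memory."""
    for pattern, partner in stored_patterns.items():
        if tuple(observed) == pattern:
            return partner
    return None  # no pattern matched: no partner specified
```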

US Pat. No. 10,250,758

APPARATUS AND METHODS FOR AUDIO AND/OR VIDEO COMMUNICATION

1. A method of operating a first telecommunication apparatus for audio and/or video communication, comprising:
receiving input for calling with said first telecommunication apparatus for audio and/or video communication with a second telecommunication apparatus for audio and/or video communication, the first telecommunication apparatus comprising a non-transitory computer readable medium connected to a processor;
the first telecommunication apparatus initiating a call to said second telecommunication apparatus in response to the received input to establish a communication session;
in response to a first input for releasing the call to terminate the communication session that is received within a first pre-selected time period after the call is connected to establish the communication session, the first telecommunication apparatus muting the call,
the first telecommunication apparatus maintaining the muting of the call until:
(a) a second pre-selected time period passes after the first input is received without a second input being received that confirms the releasing of the call, the first telecommunication apparatus terminating the call when the second pre-selected time period passes after the first input is received without the second input being received,
(b) a third input is received within the second pre-selected time period, the first telecommunication apparatus unmuting the call and maintaining the communication session in response to receiving the third input within the second pre-selected time period, or
(c) the second input is received within the second pre-selected time period, the first telecommunication apparatus responding to the second input by releasing the call for termination of the communication session;
wherein the first input is providable via actuation of a first pre-defined area on a display of the first telecommunication apparatus and the second input is providable via a second pre-defined area on the display that is in a different location on the display than the first pre-defined area;
the first telecommunication apparatus activating the second pre-defined area for the second input after the call is connected within the first pre-selected time period;
the first telecommunication apparatus responding to receipt of the second input within the first pre-selected time period by releasing the call for termination of the communication session.
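
Branches (a)-(c) describe a small state machine: a first release input mutes the call, and the outcome depends on what arrives within the second time window. A sketch where the timer is modeled with explicit `tick()` calls rather than real clocks; all names and the tick granularity are assumptions:

```python
class CallReleaseGuard:
    """Illustrative mute-then-confirm release flow from the claim."""

    def __init__(self, confirm_window: int):
        self.confirm_window = confirm_window  # "second pre-selected time period"
        self.state = "connected"
        self.ticks = 0

    def first_release_input(self):
        # First release input: mute the call instead of hanging up.
        if self.state == "connected":
            self.state = "muted"
            self.ticks = 0

    def tick(self):
        if self.state == "muted":
            self.ticks += 1
            if self.ticks >= self.confirm_window:
                # (a) window expired with no confirmation: terminate.
                self.state = "terminated"

    def confirm_release(self):
        # (c) second input confirms the release: terminate.
        if self.state == "muted":
            self.state = "terminated"

    def cancel_release(self):
        # (b) third input: unmute and keep the session alive.
        if self.state == "muted":
            self.state = "connected"
```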

US Pat. No. 10,250,757

METHOD, COMPUTER PROGRAM, AND ALGORITHM FOR COMPUTING NETWORK SERVICE VALUE PRICING BASED ON COMMUNICATION SERVICE EXPERIENCES DELIVERED TO CONSUMERS AND MERCHANTS OVER A SMART MULTI-SERVICES (SMS) COMMUNICATION NETWORK

INCNETWORKS, INC., Somer...

1. A system for providing a secure communication session between a plurality of devices over a Smart Multi-Services (SMS) communication network, the system comprising:
a first processor of a first device having a first biometric input interface for obtaining information representing biometric data of a first end user, wherein the first processor is configured to execute instructions to:
collect from the first biometric input interface the biometric input data of the first end user;
apply the biometric input data of the first end user to a first Secure Channel Biometric Transaction (SCBT) algorithm operating on the first device to create a first Global Service Identity (GSI) Security Token of the first device; and
cause the first device to send a request to initiate a secure communication session through a biometric secure communication channel based on the Global Service Identity (GSI) Security Token of the first device with a second device over a SMS communication network;
a second processor of the second device having a second biometric input interface for obtaining information representing biometric data of a second end user, wherein the second processor is configured to execute instructions to:
collect from the second biometric input interface the biometric input data of the second end user;
apply the biometric input data of the second end user to a second Secure Channel Biometric Transaction (SCBT) algorithm operating on the second device to create a second Global Service Identity (GSI) Security Token of the second device; and
cause the second device to respond to the request to initiate the secure communication session with the first device through the biometric secure communication channel based on the Global Service Identity (GSI) Security Token of the second device over the SMS communication network; and
respective memories of the first and second devices coupled to the first and second processors, respectively, the respective memories for storing data, respectively, for providing the secure communication session between a plurality of devices connected to the SMS communication network, and the respective memories storing and issuing executable instructions to the SMS communication network to cause at least one SMS communication network processor to:
receive, at the SMS communication network, registration information transmitted from the first and second devices connected to the SMS communication network, wherein the registration information includes a first biometric data associated with the first and second end users to initiate a registration process;
receive, at the SMS communication network, from the first and second devices, the request to conduct the secure communication session over the SMS communication network;
obtain, at the SMS communication network, a second biometric data associated with the first and second end users and transmitted from any of the plurality of the devices during the request to conduct the secure communication session to determine authenticity of the first and second end users;
verify, at the SMS communication network, authenticity of the first and second end users to permit access to the communication network to conduct the requested secure communication session via the SMS communication network processor by comparing the first biometric data of the first and second end users obtained during the registration with the second biometric data of the first and second end users obtained during the request to conduct the secure communication session;
upon verifying the authenticity of the first and second end users, establish by the SMS communication network a secure encoded communication channel based on the Global Service Identity (GSI) Security Token of the first and second devices to conduct the secure communication session;
sample, at the SMS communication network, at a sampling frequency rate to obtain a third biometric data of the first and second end users transmitted from the first and second devices during the secure communication session;
reaffirm, at the SMS communication network, the authenticity of the first and second end users during the communication session based on the sampling frequency rate by comparing the first biometric data of the first and second end users obtained during the registration process with the third biometric data of the first and second end users obtained during the secure communication session, wherein the sampling frequency rate is performed based on at least one of a (i) predetermined location of the first end user, the second end user or both, (ii) a predetermined time, and (iii) repeatedly at a predetermined time interval selected by at least one of the first and second end users and the SMS communication network; and
after the completion of the secure communication session, the SMS communication network processor executes and issues instructions to delete and erase all data and information associated with the secure communication session conducted over the secured encoded communication channel from the SMS communication network, from the first and second devices, and from all end points of the first and second end users.

US Pat. No. 10,250,756

SYSTEMS AND METHODS FOR PROVIDING RENEWABLE WIRELINE AND WIRELESS SERVICES AND GOODS

1. A method comprising:
receiving, by a system comprising a processor, from a mobile communication device associated with a subscription plan for a first service associated with a first time period, a user request for a second service associated with a second time period, wherein the second service is different from the first service;
provisioning, by the system, in response to receiving the user request, the second service for the mobile communication device to grant the mobile communication device access to the second service for the second time period; and
establishing, in a database, an entry associated with an account corresponding to a user of the mobile communication device, the entry comprising at least one of an expiration quantity of the second service or an expiration date for the second service based on the second time period during which the second service is available.

US Pat. No. 10,250,755

SYSTEM AND METHOD FOR REAL-TIME ANALYSIS OF NETWORK TRAFFIC

NETWORK KINETIX, LLC, Au...

1. A set of one or more tangible, non-transitory, machine-readable media storing instructions that when executed by one or more processors effectuate operations to monitor network traffic, the operations comprising:
obtaining, with one or more processors, a mirrored data flow of network traffic routed through a network element of a network, wherein:
the network traffic is transmitted as packets, via the network element, between respective endpoints in communication with the network;
respective portions of the packets are encoded according to a plurality of different respective protocols;
the network traffic includes packets having instructions by which network events are effectuated; and
the network events include network session events;
before a first network session event among the network session events completes, determining, with one or more processors, based on at least part of the mirrored data flow, that the first network session event is actionable, wherein determining that the first network session event is actionable comprises:
filtering the packets in the mirrored data flow to identify a subset of the packets pertaining to a type of network session events based on the subset of the packets being encoded in one or more protocols that are a specified subset of protocols among the plurality of protocols;
writing the subset of the packets from the mirrored data flow to a buffer;
decoding at least some of the subset of the packets to obtain decoded information by which the first network session event is requested to be effectuated;
comparing the decoded information of the first network session event to a plurality of conditions specified by a plurality of rules; and
based on at least part of the comparison, determining that the first network session event is actionable;
in response to the determining that the first network session event is actionable, with one or more processors, causing an intervention in the first network session before the first network session completes; and
determining, with one or more processors, that a second network session event among the network session events is not actionable.
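
The filter-buffer-decode-compare pipeline in the claim can be sketched in a few lines. Here packets are plain dicts and rules are predicates over the decoded event; those shapes are illustrative assumptions, not the patent's data model:

```python
def actionable_events(mirrored_flow, wanted_protocols, rules):
    """Filter a mirrored packet flow to the protocols of interest,
    buffer the subset, and flag a session event as actionable when
    any rule's condition is met on the decoded information."""
    # Filter: keep only packets encoded in the specified protocols.
    buffered = [p for p in mirrored_flow if p["protocol"] in wanted_protocols]
    actionable = []
    for pkt in buffered:
        event = pkt["event"]  # stands in for the decoded information
        # Compare against the conditions specified by the rules.
        if any(rule(event) for rule in rules):
            actionable.append(event)
    return actionable
```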

US Pat. No. 10,250,754

NETWORK RESOURCES BROKERING SYSTEM AND ENFORCEMENT FUNCTION NETWORK ENTITY

DEUTSCHE TELEKOM AG, Bon...

1. An enforcement function network entity comprising an enforcement function for providing preferential treatment and charging in a communication network, the enforcement function network entity comprising:
an input configured to receive a stream of input data from a plurality of bidder function network entities, the stream of input data comprising for each bidder function network entity a bid offer comprising an electronic bid value offered by the respective bidder function network entity; and
a processor configured to process an enforcement function for ranking the bid offers of the plurality of bidder function network entities according to a priority ranking, and determining for each bidder function network entity an amount of electronic bid values consumed during a charging period for satisfying the respective bid offer,
wherein the processor is configured to create a stream of output data comprising for each bidder function network entity the amount of electronic bid values consumed during the charging period.
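
The enforcement function's two jobs, ranking bid offers and tallying consumed bid value per charging period, can be sketched as below. The units-versus-capacity model is an assumption added to make the example concrete; the claim itself does not define it:

```python
def rank_and_charge(bid_offers, capacity):
    """Rank bid offers by electronic bid value and compute, per bidder,
    the bid value consumed during the charging period.

    bid_offers: {bidder: (bid_value, requested_units)} (assumed shape).
    capacity: units available in the charging period (assumed).
    """
    consumed = {}
    # Priority ranking: highest bid value served first.
    ranking = sorted(bid_offers.items(), key=lambda kv: kv[1][0], reverse=True)
    for bidder, (bid_value, requested_units) in ranking:
        granted = min(requested_units, capacity)
        capacity -= granted
        consumed[bidder] = bid_value * granted  # bid value consumed
    return [bidder for bidder, _ in ranking], consumed
```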

US Pat. No. 10,250,753

CUSTOMER-CENTRIC NETWORK-BASED CONFERENCING

1. A communication system comprising:
a processor; and
a memory coupled to the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to:
receive a command provided by an initiator to set up a conference;
identify a list of potential conference participants other than the initiator and an agent of an enterprise hosting the communication system;
identify projected presence information for each of the potential conference participants, the projected presence information being an estimate of presence at a future point in time;
determine an optimum time to schedule the conference based on the projected presence for each of the potential conference participants at the optimum time, the optimum time being a time later than a current time, wherein the determined optimum time is for increasing a likelihood of acceptance of an invitation to join the conference;
determine whether a minimum number of the potential conference participants are projected to be available at the determined optimum time;
in response to determining, based on the projected presence, that the minimum number of potential conference participants are projected to be available at the determined optimum time:
send the invitation to a first one of the potential conference participants to join the conference scheduled for the optimum time;
refrain from sending an invitation to a second one of the potential conference participants whose projected presence information indicates unavailability of the second one of the potential conference participants at the optimum time;
receive a first message from the first one of the potential conference participants accepting the invitation to join the conference;
receive a second message from the first one of the potential conference participants to join the conference at the scheduled optimum time; and
transmit a command to connect the first one of the potential conference participants to the conference in response to receipt of the second message.
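
The optimum-time selection reduces to: among future slots, keep those where at least the minimum number of participants are projected present, then prefer the slot with the highest projected attendance. A minimal sketch; the data shapes are illustrative:

```python
def pick_optimum_time(projected_presence, minimum):
    """Choose the slot that maximizes projected attendance, subject to
    the claimed minimum-participants check.

    projected_presence: {slot: set of participants projected available}.
    Returns None when no slot clears the minimum.
    """
    viable = {slot: who for slot, who in projected_presence.items()
              if len(who) >= minimum}
    if not viable:
        return None
    # Optimum: the slot with the most projected-present participants,
    # increasing the likelihood that invitations are accepted.
    return max(viable, key=lambda slot: len(viable[slot]))
```

Invitations would then go only to participants in the winning slot's set, matching the claim's "refrain from sending" step for those projected unavailable.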

US Pat. No. 10,250,752

TELECOMMUNICATION NETWORK

STARLEAF LTD, London (GB...

1. A telecommunication network for telecommunications between telecommunication end point devices, the telecommunication network comprising:
a plurality of PBXs implemented by a computer system comprising a single hardware platform, each PBX being shared by telecommunication end point devices allocated to it, and each PBX being configured to control communication connections involving telecommunications originating from telecommunication end point devices allocated to it and other telecommunication end point devices,
wherein at least some functionality of the plurality of PBXs shared between the plurality of PBXs is configured to be stored in a memory of the computer system,
wherein the computer system is configured to:
receive a request for a telecommunication event;
determine telecommunication resource requirements for the telecommunication event;
allocate, based on the telecommunication resource requirements, resources of a first PBX of the plurality of PBXs to implement the telecommunication event by allocating resources of the first PBX that are on a public internet side of a firewall; and
allocate, based on the telecommunication resource requirements, resources of a second PBX of the plurality of PBXs to implement the telecommunication event by allocating resources of the second PBX that are on a private side of the firewall.

US Pat. No. 10,250,751

MOBILE CONFERENCE SYSTEM

1. A mobile conference system comprising:
a master mobile hands-free unit; and
a first slave mobile hands-free unit;
wherein each of the mobile hands-free units comprises:
at least one microphone configured to record audio signals;
a loudspeaker configured to play back audio signals to be played back;
a battery unit configured to supply energy;
an operating unit;
a muting operating element;
a control interface configured to control wireless bidirectional transmission of control commands;
at least one lighting unit; and
a first transmitting/receiving unit configured to bidirectionally communicate wirelessly with another of the mobile hands-free units;
wherein the master mobile hands-free unit further comprises:
a second transmitting/receiving unit configured to bidirectionally communicate wirelessly with an external unit;
wherein the master mobile hands-free unit is configured to:
receive first audio signals from the external unit by the second transmitting/receiving unit; and
wirelessly transmit the received first audio signals to the second mobile hands-free unit via the first transmitting/receiving unit;
wherein the master mobile hands-free unit is further configured to:
receive second audio signals from the second mobile hands-free unit by the first transmitting/receiving unit; and
mix the received second audio signals with recorded audio signals recorded by the at least one microphone of the master mobile hands-free unit; and
wirelessly transmit the mixed audio signals to the external unit via the second transmitting/receiving unit;
wherein the at least one second hands-free unit further comprises:
a muting operating element;
wherein the at least one second hands-free unit is configured to transmit, in response to an actuation of the muting operating element, a muting control command via the control interface of the at least one second hands-free unit to the control interface of the master mobile hands-free unit; and
wherein the master mobile hands-free unit is configured to interrupt, in response to receiving the muting control command, output of the audio signals recorded by its at least one microphone to the external unit via the second transmitting/receiving unit.

US Pat. No. 10,250,750

METHOD AND SYSTEM FOR INTEGRATING AN INTERACTION MANAGEMENT SYSTEM WITH A BUSINESS RULES MANAGEMENT SYSTEM

1. A system for processing communication events for a contact center, the system comprising:
a processor;
a memory, wherein the memory has stored therein instructions that, when executed by the processor, cause the processor to:
receive, during execution of a routing strategy for routing a communication event, a request for rule execution;
identify, in response to the request, a set of facts associated with the communication event and with contact center state;
identify a rule based on the set of facts;
execute the identified rule based on the set of facts, the identified rule including a condition for updating the contact center state;
determine, in response to executing the identified rule, that the condition is satisfied; and
transmit, in response to the condition being satisfied, a signal for updating the contact center state; and
an electronic switch coupled to the processor, the electronic switch configured to distribute the communication event according to the updated contact center state.

US Pat. No. 10,250,749

AUTOMATED TELEPHONE HOST SYSTEM INTERACTION

REPNOW INC., San Diego, ...

1. A system comprising:
one or more client applications executable by respective communication devices, each communication device comprising one or more processors configured with processor-executable instructions included in the client application to perform operations comprising:
receiving, from a user of the communication device, a natural language input;
determining, using natural language processing, that the natural language input comprises a request associated with a service provider of a plurality of known user service providers; and
transmitting a representation of the natural language input via a communication network; and
a server comprising one or more processors configured with processor-executable instructions to perform operations comprising:
receiving, via the communication network, the representation of the natural language input from the communication device;
determining, using natural language processing, metadata associated with a call center corresponding to the request;
causing, based at least in part on the metadata, initiation of a call center communication session from a telephony service to the call center;
receiving at least one interactive voice response (IVR) prompt transmitted by the call center in the call center communication session;
determining, based on natural language processing of the representation of the natural language input, a response to the IVR prompt; and
causing the response to be transmitted from the telephony service to the call center in response to the IVR prompt.

US Pat. No. 10,250,748

SYSTEM AND METHOD FOR UNIFIED CALLING

JPMorgan Chase Bank, N.A....

1. A method for unified calling, comprising:
a computer application executed by a computer processor at an agent terminal receiving an indication that an agent is ready to manually dial a telephone number using an agent telephone;
the computer application communicating, to a telephony server as a signal that emulates an analog connection between the agent telephone and the telephony server, an agent status signal indicating that the agent is ready to manually make a telephone call using the agent telephone; and
the agent terminal receiving an indication of a telephone number to call;
wherein the agent telephone and the telephony server are electrically isolated from each other.

US Pat. No. 10,250,747

SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SERVICE CALL IDENTIFICATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for service call identification, comprising:
receiving a request for registration from a service provider,
upon the service receiver having authorized the request for registration, registering, for a specific user, characteristic information of the service call in a user device of a service receiver; and
upon a lapse of time, deregistering the characteristic information from the user device.

US Pat. No. 10,250,746

METHOD AND SYSTEM FOR GROUP COMMUNICATION ACROSS ELECTRONIC MAIL USERS AND FEATURE PHONE USERS

OATH INC., New York, NY ...

1. A method comprising:
receiving, by a processor, a first electronic mail from an electronic mail user;
converting, by the processor, the first electronic mail to an audio clip in response to the receipt of the first electronic mail;
generating, by the processor, a text message for communication to a user of a feature phone, said text message comprising embedded information referencing the audio clip, and further comprising a name of the electronic mail user, and an email identifier of the electronic mail user;
authenticating, by the processor, the user of the feature phone by requesting a username and password of the feature phone user;
comparing, by the processor, a number of characters in the first electronic mail to a predefined number;
when the number of characters in the first electronic mail is below the predefined number, including, by the processor, the first electronic mail in the text message;
when the number of characters in the first electronic mail is above the predefined number or when the first electronic mail has been included in the text message, transmitting, by the processor, the text message to the feature phone;
receiving, by the processor, a voice input provided by the feature phone user as a response to the first electronic mail;
receiving, by the processor, the email identifier of the electronic mail user from the feature phone;
creating and storing, by the processor, a shortcut key, said shortcut key comprising the email identifier of the electronic mail user and a phone number of the electronic mail user;
embedding, by the processor, the voice input as an audio file in a second electronic mail; and
transmitting, by the processor, the second electronic mail to the electronic mail user via the shortcut key.
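
The character-count branch is simple to picture: the text message always carries the sender's name, e-mail identifier, and a reference to the audio clip, and inlines the e-mail body only when it is short enough. A sketch where the 160-character default is an assumption, not a figure from the patent:

```python
def build_text_message(email_body, sender_name, sender_email,
                       audio_clip_ref, limit=160):
    """Assemble the text message sent to the feature phone, inlining
    the e-mail text only when below the predefined number of
    characters (here assumed to be 160)."""
    message = {"from": sender_name,
               "email_id": sender_email,
               "audio_ref": audio_clip_ref}
    if len(email_body) < limit:
        message["body"] = email_body  # short e-mail: include inline
    return message
```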

US Pat. No. 10,250,745

IDENTIFYING THE CELLULAR NUMBER ON A CELLULAR DEVICE CAPABLE OF SUPPORTING MULTIPLE CELLULAR NUMBERS

T-Mobile USA, Inc., Bell...

1. A method comprising:
on a cellular device capable of using a first cellular number and a second cellular number, receiving an input to open a contacts application to generate a contact, the contact including a name field indicating a name of the contact and a phone number field indicating a phone number of the contact;
displaying a customizable field in the contact, the customizable field associating a call received from the contact with the first cellular number or with the second cellular number;
receiving an input to modify the customizable field, the input identifying the first cellular number or the second cellular number;
storing the input as part of the contact;
receiving an incoming call from the contact; and
displaying on the cellular device in response to receiving the incoming call from the contact, the name field, phone number field, and the customizable field of the contact.

US Pat. No. 10,250,744

CALL CENTER SYSTEM AND VOICE RECOGNITION CONTROL METHOD OF THE SAME

1. A call center system that records calls from customers and stores the calls as data, the call center system comprising:
an exchange for performing an exchange of a call from a public telephone network that arrives at an extension;
a communication terminal provided for each operator to make calls;
an operator information processing terminal that each operator operates;
a recorder for recording call data transmitted from the exchange and storing it as recorded data;
a call and recorded information management server that receives call information relating to the call from the exchange, associates the call information with the information relating to the recorded data from the recorder, and stores the call information in a database; and
a voice recognition server that performs voice recognition on the recorded data and outputs text data,
wherein the exchange, the communication terminal, the operator information processing terminal, the recorder, the call and recorded information management server, and the voice recognition server are connected by a network,
wherein the call and recorded information management server includes a call and recorded information management table in which call identification information, business identification information relating to a call, and management information of recorded data are associated with each other,
wherein the voice recognition server includes a business information-recognition engine correspondence table in which the business identification information relating to the call and a voice recognition engine used for voice recognition are associated with each other,
wherein the operator information processing terminal transmits the call identification information to the call and recorded information management server in order to request a voice recognition process,
wherein the call and recorded information management server searches the call and recorded information management table for the business identification information relating to the call according to the received call identification information, and transmits the call identification information as well as the business identification information relating to the call corresponding to the call identification information, to the voice recognition server,
wherein the voice recognition server searches for the corresponding voice recognition engine according to the received business identification information relating to the call, and adds the received identification information to a recognition queue to be processed by the corresponding voice recognition engine,
wherein the voice recognition server requests the call and recorded information management server to obtain the recorded data according to the received call identification information,
wherein the call and recorded information management server searches the call and recorded information management table according to the received call identification information, and transfers the recorded data corresponding to the call identification information to the voice recognition server, and
wherein the voice recognition server performs voice recognition on the recorded data corresponding to the call identification information stored in the recognition queue by using the corresponding voice recognition engine, and stores the recognition result as text data.
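The routing flow in the claim above can be sketched as two table lookups feeding per-engine queues. This is only an illustrative model of the claimed data flow; the table contents, engine names, and function names are all hypothetical, not the patented implementation.

```python
from collections import defaultdict, deque

# Call/recorded-information management table (hypothetical contents):
# call identification -> (business identification, recorded data)
management_table = {
    "call-001": ("billing", b"\x00\x01"),
    "call-002": ("support", b"\x02\x03"),
}

# Business information / recognition-engine correspondence table
engine_table = {"billing": "engine-A", "support": "engine-B"}

# One recognition queue per voice recognition engine
recognition_queues = defaultdict(deque)

def request_recognition(call_id):
    """Operator terminal -> management server -> recognition server flow."""
    business_id, _ = management_table[call_id]      # management server lookup
    engine = engine_table[business_id]              # engine correspondence lookup
    recognition_queues[engine].append(call_id)      # enqueue for that engine
    return engine

def process_queue(engine, recognize):
    """Drain one engine's queue, fetching recorded data from the management table."""
    results = {}
    while recognition_queues[engine]:
        call_id = recognition_queues[engine].popleft()
        _, recorded = management_table[call_id]     # transfer of recorded data
        results[call_id] = recognize(recorded)      # store recognition result as text
    return results
```

A caller would first enqueue by call ID, then let the matching engine drain its queue and persist the text results.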

US Pat. No. 10,250,743

SENDER IDENTIFICATION SYSTEM AND METHOD

MOBILE MESSENGER GLOBAL, ...

1. A method for messaging, including:sending to an identity database a query based upon data included in a received message, the received message originating from a sender at a first device in a first protocol and directed to a recipient at a second device;
receiving from the identity database information pertaining to the identity of the sender of the message, including identity information that is not a part of the message;
locating a rule set by the recipient of the message, the rule being associated with the identity of the sender of the message;
handling the message according to the rule; and
wherein the handling is performed without receiving additional input from the recipient and includes at least one of providing the message to the recipient in a second protocol and not providing the message to the recipient,
wherein the second protocol is different than the first protocol and comprises at least one of a Short Message Service (“SMS”) or Multimedia Messaging Service (“MMS”);
when the message is provided to the second device according to the rule, rendering the message to the recipient and providing an indication to the sender that the message was provided to the recipient, and
when the message is not provided to the second device according to the rule, identifying that the message is subject to deletion and providing an indication to the sender that the message was not provided to the recipient.
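The rule-based handling in the claim above can be modeled as an identity lookup followed by a recipient-set rule dispatch. A minimal sketch, assuming an in-memory identity database and rule table; every name here is invented for illustration.

```python
# Identity database: sender address -> identity (hypothetical data)
identity_db = {"+15550001": "alice"}

# Rules set by the recipient, keyed by sender identity
rules = {"alice": {"action": "deliver", "protocol": "SMS"}}

def handle_message(message):
    """Deliver in a second protocol (SMS/MMS) or mark for deletion,
    returning an indication for the sender either way."""
    identity = identity_db.get(message["from"])       # identity query
    rule = rules.get(identity, {"action": "drop"})    # recipient's rule
    if rule["action"] == "deliver":
        delivery = {"protocol": rule["protocol"], "body": message["body"]}
        return delivery, "delivered"                  # indication to sender
    return None, "not delivered"                      # message subject to deletion
```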

US Pat. No. 10,250,742

SYSTEMS AND METHODS FOR GENERATING APPLICATION DATA FROM CALL DATA

RINGCENTRAL, INC., Bosto...

1. A computer-implemented method for generating application data from call data, the method comprising:acquiring, with one or more call-data aggregators, call data from at least one call-data source;
modifying at least a portion of the call data with a call-data modifier;
generating application data from the portion of the call data, wherein the application data is configured for diagram generation; and
generating a diagram from the application data, wherein the diagram graphically indicates volumes of calls in branches of an interactive voice response (IVR) system map.
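The aggregation step behind the claimed diagram can be sketched as counting traversals of each branch of the IVR map, so a renderer can scale edge widths by volume. The record format and function name are assumptions for illustration.

```python
from collections import Counter

def branch_volumes(call_records):
    """Count call volumes per IVR branch.

    Each record is the ordered list of IVR node names a call visited;
    each consecutive pair is one traversal of a branch in the IVR map.
    """
    volumes = Counter()
    for path in call_records:
        for a, b in zip(path, path[1:]):
            volumes[(a, b)] += 1        # one more call through branch a -> b
    return volumes
```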

US Pat. No. 10,250,741

SYSTEMS AND METHODS FOR MANAGING, ANALYZING AND PROVIDING VISUALIZATIONS OF MULTI-PARTY DIALOGS

Cogito Corporation, Bost...

1. A method for managing multi-party dialogs between communication devices, the method comprising:receiving, by a processor of a computing device, digital connection requests from a plurality of communication devices, the communication devices being connected over a plain old telephone service (POTS) connection;
establishing, by the processor, a digital connection with each of the plurality of communication devices;
performing, by the processor, one or more tests on the plurality of communication devices;
switching, by the processor, the connection between the communication devices from the POTS connection to the digital connections, enabling the communication devices to communicate with each other via the computing device (server) over the digital connections;
receiving, by the processor, audio signals from at least a portion of the plurality of communication devices, the audio signals being part of a multi-party dialog between users of the plurality of communication devices;
splitting, by the processor, the received audio signals into corresponding first signals and second signals;
transmitting, by the processor, the first signals to the plurality of communication devices of the digital connections in the form of audio to be output by each of the communication devices; and
transmitting, by the processor, to at least one of the plurality of communication devices, feedback data comprising at least a portion of measurements of features of the second signals, the feedback data contextualizing the participation of each of the users of the plurality of communication devices during the multi-party dialog.
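The split-and-measure step of the claim above can be sketched as duplicating each received signal into a pass-through copy and an analysis copy, with a simple feature measurement (mean energy here, chosen only as a stand-in) feeding the feedback data.

```python
def split_and_measure(samples):
    """Split received audio into a first signal (forwarded as audio) and a
    second signal (analyzed), returning the copy plus feature measurements."""
    first = list(samples)       # first signal: transmitted back as audio
    second = list(samples)      # second signal: measured for feedback data
    energy = sum(s * s for s in second) / max(len(second), 1)
    return first, {"energy": energy}
```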

US Pat. No. 10,250,740

ECHO PATH CHANGE DETECTOR

Imagination Technologies ...

1. An echo path monitoring system for controlling an adaptive filter configured to estimate an echo of a far-end signal comprised in a microphone signal, the system comprising:a comparison generator configured to compare the microphone signal with the estimated echo to obtain a first comparison result and compare an error signal, which represents a difference between the microphone signal and the estimated echo, with the estimated echo to obtain a second comparison result; and
a controller configured to:
combine the first and second comparison results to form a parameter indicative of a state of the microphone signal;
control an operating mode of the adaptive filter in dependence on said parameter and on whether the adaptive filter is in a transient state or a steady state, wherein, when the adaptive filter is in the steady state, the controller is configured to assess whether the microphone signal incorporates echo path change.
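The two comparisons and their combination can be sketched numerically: compare mean powers of the microphone signal and the error signal against the estimated echo, multiply the results into one state parameter, and only assess echo path change in steady state. The power ratios and threshold are invented for illustration, not the patented detector.

```python
def echo_state_parameter(mic, echo_est):
    """Combine the two claimed comparisons into a single state parameter."""
    err = [m - e for m, e in zip(mic, echo_est)]     # error signal
    power = lambda x: sum(v * v for v in x) / len(x)
    c1 = power(mic) / max(power(echo_est), 1e-12)    # mic vs estimated echo
    c2 = power(err) / max(power(echo_est), 1e-12)    # error vs estimated echo
    return c1 * c2                                   # combined parameter

def echo_path_changed(param, steady_state, threshold=10.0):
    # Echo path change is only assessed once the adaptive filter is steady
    return steady_state and param > threshold
```

When the filter tracks the echo well, the error power, and hence the parameter, stays near zero; a large parameter in steady state flags a likely echo path change.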

US Pat. No. 10,250,739

TERMINAL EQUIPMENT CONTROL METHOD, TERMINAL EQUIPMENT AND SYSTEM, COMPUTER STORAGE MEDIA

ZTE Corporation, Shenzhe...

1. A terminal device control method, applied to a terminal device to be controlled, comprising:acquiring control strategies for the terminal device, and acquiring information on a current study scenario where the terminal device is located, wherein the current study scenario is calculated based on position information and course table information describing class conditions;
acquiring control information for the terminal device, according to the control strategies for the terminal device and the information on the current study scenario;
controlling the terminal device according to the control information;
sending a request for regulating the control strategies;
acquiring control strategy regulation information;
acquiring new control information used in controlling the terminal device, according to the control strategy regulation information and the information on the current study scenario; and
controlling the terminal device according to the new control information.

US Pat. No. 10,250,738

SCHEDULE MANAGEMENT DEVICE AND METHOD THEREFOR

Samsung Electronics Co., ...

1. A method of managing a schedule in a mobile device, the method comprising:receiving information of an event and a first user input of location information related to the event;
storing the received information of the event and the location information related to the event;
searching, upon receiving a search criteria as a second user input of only location information via an input interface, for location information matching the search criteria from the stored location information related to the event; and
if the location information matching the search criteria is found, outputting the information of the event related to the location information found,
wherein the information of the event comprises a location where a reminder is to be provided, and
wherein when the received search criteria does not have a constant shape, the searching for the matched location information is performed by replacing the received search criteria with a minimum closed polygon comprising the received search criteria, and searching for location information matching the minimum closed polygon from the stored location information.
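The polygon-replacement fallback in the last clause can be sketched with an axis-aligned bounding box standing in for the minimum closed polygon (the claim does not fix the polygon construction; the box is the simplest closed polygon containing the criteria and is used here only for illustration).

```python
def bounding_polygon(points):
    """Minimal closed polygon containing the criteria, here a bounding box."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def matching_locations(criteria_points, stored_locations):
    """Replace shapeless criteria with the polygon, then match stored
    event locations that fall inside it."""
    x0, y0, x1, y1 = bounding_polygon(criteria_points)
    return [loc for loc in stored_locations
            if x0 <= loc[0] <= x1 and y0 <= loc[1] <= y1]
```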

US Pat. No. 10,250,737

TERMINAL FUNCTION SETTING METHOD AND DEVICE FOR VEHICLE UNLOCKING, AND MOBILE TERMINAL

BEIJING MOBIKE TECHNOLOGY...

1. A terminal function setting method for vehicle unlocking, said method comprising the steps of:receiving a current request for vehicle unlocking sent by a mobile terminal;
determining to perform a scene mode of the current request;
acquiring a terminal function setting manner under the scene mode; and
performing function setting of the mobile terminal according to the terminal function setting manner,
wherein:
the scene mode comprises a night mode, and a terminal function setting manner under the night mode includes activating a flashlight function of the mobile terminal; and
the terminal function setting manner under the night mode further comprises a setting manner of camera parameters.

US Pat. No. 10,250,736

TERMINAL CONTROLLING DEVICE AND CONTROLLING METHOD USING SAME

MODA-INNOCHIPS CO., LTD.,...

1. A terminal control device having a mode control module for switching an execution mode of a terminal, the terminal control device comprising:a main control module configured to control driving of the terminal;
a receiver sensor installed in the terminal and capable of inter-conversion between an electric signal and an external physical force; and
a signal processing unit configured to analyze an electric signal generated in the receiver sensor, process the electric signal into a mode switching signal for switching a mode of the terminal, and transfer the mode switching signal to the main control module,
wherein the receiver sensor acts as a speaker for transferring a call connection sound and a voice of a user's other party to the user in response to an input electric signal.

US Pat. No. 10,250,735

DISPLAYING RELEVANT USER INTERFACE OBJECTS

Apple Inc., Cupertino, C...

1. A non-transitory computer readable storage medium having computer-executable instructions which, when executed by one or more computer processors, causes the one or more computer processors to display a user interface, the computer-executable instructions comprising instructions for:receiving input from a movement sensor based on a movement of an electronic device; and
displaying a first plurality of user interface objects on a touch-sensitive display,
wherein the display is in response to the received movement sensor input,
wherein the first plurality of user interface objects is a subset of a larger plurality of user interface objects available for display, and
wherein the first plurality of user interface objects was selected from the larger plurality of user interface objects using a relevance algorithm that uses as input user health information represented by input received from a biometric sensor and at least one of:
a location of the electronic device;
a location of an external device;
a current time;
an upcoming calendar event; or
map information.
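The relevance algorithm of the claim above can be sketched as a weighted score over health information plus at least one contextual signal, selecting a subset of the larger set of user interface objects. The weights, signal names, and scoring form are all assumptions made for illustration.

```python
def select_relevant(objects, health, context, limit=2):
    """Rank UI objects by biometric-derived health info plus context signals
    and return the top-scoring subset for display."""
    def score(obj):
        s = obj.get("health_weight", 0) * health.get("heart_rate_elevated", 0)
        s += obj.get("location_weight", 0) * context.get("at_gym", 0)
        s += obj.get("time_weight", 0) * context.get("morning", 0)
        return s
    return sorted(objects, key=score, reverse=True)[:limit]
```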

US Pat. No. 10,250,734

SCREEN INTERFACE FOR A MOBILE DEVICE APPARATUS

Majen Tech, LLC, Longvie...

1. A mobile phone apparatus, comprising:a touchscreen;
a near field communication interface;
at least one non-transitory memory storing instructions; and
one or more processors in communication with the touchscreen, the near field communication interface, and the at least one non-transitory memory, wherein the one or more processors execute the instructions to:
receive, from a terminal utilizing the near field communication interface of the mobile phone apparatus, a near field communication signal;
determine whether the mobile phone apparatus is operating in a lock screen mode based on a receipt of at least one of a biometric input or an identifier input;
in response to the receipt of the near field communication signal from the terminal utilizing the near field communication interface of the mobile phone apparatus when the mobile phone apparatus is determined to not be operating in the lock screen mode, and without necessitating any user input into the mobile phone apparatus after the receipt of the near field communication signal from the terminal utilizing the near field communication interface of the mobile phone apparatus when the mobile phone apparatus is determined to not be operating in the lock screen mode:
simultaneously display, via the touchscreen of the mobile phone apparatus, at least one card and at least a portion of a card number of the at least one card;
send, to the terminal utilizing the near field communication interface of the mobile phone apparatus, an authorization signal; and
after sending, to the terminal utilizing the near field communication interface of the mobile phone apparatus, the authorization signal:
receive a response signal, and
in response to the receipt of the response signal:
display, via the touchscreen of the mobile phone apparatus, a notification indicating whether use of the at least one card was successful.
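The unlocked-path behavior of the claim above can be reduced to a small state check: on an NFC signal, when the device is not in lock screen mode, show the card together with a partial card number and send the authorization without further user input. A toy sketch with invented names and masking format.

```python
def on_nfc_signal(lock_screen_mode, card):
    """Handle an NFC signal from a terminal on an unlocked device."""
    if lock_screen_mode:
        return None                                      # this flow not triggered
    masked = "**** **** **** " + card["number"][-4:]     # partial card number
    display = {"card": card["name"], "number": masked}   # simultaneous display
    return {"display": display, "authorization_sent": True}
```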

US Pat. No. 10,250,733

LOCK SCREEN INTERFACE FOR A MOBILE DEVICE APPARATUS

Majen Tech, LLC, Longvie...

1. A mobile phone, comprising:a touchscreen including a backlight;
a near field communication interface;
a mechanical button;
at least one non-transitory memory storing instructions; and
one or more processors in communication with the touchscreen, the near field communication interface, the mechanical button, and the at least one non-transitory memory, wherein the one or more processors execute the instructions to:
receive, from a terminal utilizing the near field communication interface of the mobile phone, a near field communication signal;
determine whether the mobile phone is operating in a lock screen mode;
determine whether the backlight of the touchscreen of the mobile phone is powered off;
after the receipt of the near field communication signal from the terminal utilizing the near field communication interface of the mobile phone when the backlight of the touchscreen of the mobile phone is determined to be powered off:
power on the backlight of the touchscreen of the mobile phone;
after the receipt of the near field communication signal from the terminal utilizing the near field communication interface of the mobile phone when the mobile phone is determined to be operating in the lock screen mode, and without necessitating any user input into the mobile phone after the receipt of the near field communication signal from the terminal utilizing the near field communication interface of the mobile phone:
simultaneously display, via the touchscreen of the mobile phone, at least one card, at least a portion of a card number of the at least one card, and an indication to enter a touch user input via the mechanical button of the mobile phone;
after the display, via the touchscreen of the mobile phone, of the at least one card, the at least portion of the card number of the at least one card, and the indication to enter the touch user input via the mechanical button of the mobile phone:
receive an indication of the touch input via the mechanical button of the mobile phone;
without necessitating any user input into the mobile phone other than the touch user input via the mechanical button of the mobile phone and a presentation of a face of a user after the display, via the touchscreen of the mobile phone, of the at least one card, the at least portion of the card number of the at least one card, and the indication to enter the touch user input via the mechanical button of the mobile phone:
capture the face, utilizing the mobile phone,
after the capture of the face utilizing the mobile phone, perform an analysis based on the capture of the face, and
based on the analysis, send, to the terminal utilizing the near field communication interface of the mobile phone, a first signal; and
after sending, to the terminal utilizing the near field communication interface of the mobile phone, the first signal:
receive a second signal, and
after the receipt of the second signal:
display, via the touchscreen of the mobile phone, a notification.

US Pat. No. 10,250,732

MESSAGE PROCESSING METHOD AND SYSTEM, AND RELATED DEVICE

Huawei Technologies Co., ...

1. A message processing method, comprising:receiving, by a user equipment, a new message;
determining, by the user equipment, a distance between the user equipment and a wearable device, wherein the user equipment and the wearable device are in a connected state; and
in response to determining that the distance between the user equipment and the wearable device is greater than a preconfigured distance,
recording, by the user equipment, information about the new message, instead of providing, by the user equipment, an alert for the new message; and
in response to determining that the wearable device is in a wireless communication range of the user equipment, providing, by the user equipment, an alert for the new message.
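The alert policy above boils down to a distance comparison against a preconfigured threshold: record silently when the wearable is out of reach, alert when it is nearby. The threshold value and function name are illustrative only.

```python
def handle_new_message(distance_m, preconfigured_m=5.0):
    """Decide between recording message info and alerting, based on the
    measured distance between the user equipment and the wearable device."""
    if distance_m > preconfigured_m:
        return "record"     # record information about the message, no alert
    return "alert"          # wearable within range: provide the alert
```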

US Pat. No. 10,250,731

CONTROL OF USER EQUIPMENT FUNCTIONALITY

Alcatel Lucent, Boulogne...

1. A method comprising:receiving, at a network control node, an incoming call signal comprising an automatic answer and remote loudspeaker activation request indicator in one or more header fields and a predetermined dialed number;
identifying, at the network control node, at least one user equipment mapped to said predetermined dialed number as indicated by configuration information associated with each at least one user equipment;
generating a call signal to the at least one user equipment, said call signal comprising a combined automatic answer and loudspeaker activation indication; and
transmitting said call signal with said combined automatic answer and loudspeaker activation indication towards said at least one user equipment.
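The fan-out in the claim above can be sketched as mapping the predetermined dialed number to its configured user equipment and generating a call signal per device carrying the combined auto-answer and loudspeaker indication. The configuration format and signal fields are hypothetical.

```python
# Configuration information: predetermined dialed number -> mapped user equipment
config = {"*88": ["ue-1", "ue-2"]}

def fan_out(incoming):
    """Generate one outgoing call signal per mapped user equipment when the
    incoming call carries the auto-answer/loudspeaker request indicator."""
    if not incoming.get("auto_answer_loudspeaker"):
        return []
    targets = config.get(incoming["dialed"], [])
    return [{"to": ue, "auto_answer_loudspeaker": True} for ue in targets]
```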