US Pat. No. 11,115,699

SYSTEM AND METHOD FOR DELIVERING MEDIA BASED ON VIEWER BEHAVIOR

TIME WARNER CABLE ENTERPR...


1. A method of communicating multimedia content over a service provider network, the method comprising:receiving, by a network server processor, an audio-video stream from a content provider server;
relaying, by the network server processor, the audio-video stream to a receiver device via the service provider network;
receiving, by the network server processor, a first operating mode notification from the receiver device, the first operating mode notification indicating that a user of the receiver device is no longer in close proximity to the receiver device;
determining, by the network server processor, whether the service provider network is congested;
stopping, by the network server processor, the relaying of the audio-video stream to the receiver device in response to the network server processor receiving the first operating mode notification indicating that the user of the receiver device is no longer in close proximity to the receiver device and determining that the service provider network is congested;
receiving, by the network server processor, a second operating mode notification from the receiver device, the second operating mode notification indicating that the user of the receiver device is now in close proximity to the receiver device; and
sending the audio-video stream to the receiver device via the service provider network in response to the network server processor receiving the second operating mode notification indicating that the user of the receiver device is now in close proximity to the receiver device.
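The claimed stop/resume decision can be sketched as a small predicate (function and flag names are illustrative, not from the patent; per the claim, relaying stops only when the away-notification AND congestion both hold):

```python
def relay_decision(user_in_proximity: bool, network_congested: bool) -> bool:
    """Return True if the server should keep relaying the stream.

    Relaying stops only when BOTH conditions hold: the user is no
    longer in close proximity AND the network is congested.
    """
    return not (not user_in_proximity and network_congested)

# User away but network uncongested: keep relaying.
assert relay_decision(user_in_proximity=False, network_congested=False)
# User away and network congested: stop relaying.
assert not relay_decision(user_in_proximity=False, network_congested=True)
# Second operating mode notification (user returns): resume sending.
assert relay_decision(user_in_proximity=True, network_congested=True)
```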

US Pat. No. 11,115,698

SYSTEMS AND METHODS FOR PROVIDING RECOMMENDATIONS BASED ON A LEVEL OF LIGHT

OrCam Technologies Ltd., ...


1. A wearable apparatus, the wearable apparatus comprising:a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus; and
at least one processing device programmed to:analyze the plurality of images to identify in one or more of the plurality of images a window of a room in the environment of the user and a position of a chair in the room;
determine, based on the plurality of images, a brightness level of light from the window of the room;
select, from a database of recommendations, a recommendation to adjust or modify the position of the chair in the room to capitalize on or avoid the brightness level of light; and
cause the recommendation to be output to the user.


US Pat. No. 11,115,697

RESOLUTION-BASED MANIFEST GENERATOR FOR ADAPTIVE BITRATE VIDEO STREAMING

Amazon Technologies, Inc....


1. A computer-implemented method comprising:performing a first, convex hull optimization on a video file for a first device resolution;
performing a second, convex hull optimization on the video file for a second, lower device resolution;
generating a first video representation for a point on both the first, convex hull optimization and the second, convex hull optimization for the first device resolution and the second, lower device resolution;
generating a second video representation for a point only on the second, convex hull optimization for the second, lower device resolution;
receiving a request for a manifest for the video file from a client device at the second, lower device resolution;
generating the manifest for the client device that identifies the first video representation and the second video representation; and
sending the manifest to the client device.
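The manifest-generation step can be illustrated with a toy sketch (representation ids, resolution labels, and the hull bookkeeping are hypothetical; the convex hull optimization itself is not reproduced here):

```python
# Hypothetical representation records: each notes the device resolutions
# whose convex hull optimization it lies on.
representations = [
    {"id": "rep_a", "on_hull_for": {"1080p", "720p"}},  # on both hulls
    {"id": "rep_b", "on_hull_for": {"720p"}},           # only on the lower hull
    {"id": "rep_c", "on_hull_for": {"1080p"}},          # only on the higher hull
]

def generate_manifest(device_resolution: str) -> list:
    """List the representation ids whose convex-hull point applies to
    the requesting client's device resolution."""
    return [r["id"] for r in representations
            if device_resolution in r["on_hull_for"]]

# A client at the lower resolution receives the shared representation
# plus the lower-resolution-only one, matching the claim.
assert generate_manifest("720p") == ["rep_a", "rep_b"]
```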

US Pat. No. 11,115,696

PROGRAMMATIC INGESTION AND STB DELIVERY IN AD AUCTION ENVIRONMENTS

Beachfront Media LLC, Or...


1. A tangible non-transitory computer readable storage media loaded with program instructions that, when executed on one or more processors, are configurable as a campaign management delegate system that carries out a method, including:receiving an ad management service (abbreviated AMS) placement request from a broadcast cable provider for ad insertion into a program;
conducting real time bidding during the program for ad insertion;
accepting new content provided by a successful bidder, after receiving the real time bidding, automatically formatting the new content, and uploading the new content to the broadcast cable provider in time for playback with entertainment content that prompted the placement request; and
responding to the placement request with a placement response that includes reference to the new content.

US Pat. No. 11,115,695

USING MACHINE LEARNING AND OTHER MODELS TO DETERMINE A USER PREFERENCE TO CANCEL A STREAM OR DOWNLOAD

Google LLC, Mountain Vie...


1. A method for training a machine learning model using information pertaining to transmissions of one or more media items to a plurality of user devices associated with a user account, the method comprising:generating training data for the machine learning model, wherein generating the training data comprises:generating first training input, the first training input comprising first contextual information associated with a first user device of the plurality of user devices;
generating second training input, the second training input comprising second contextual information associated with a second user device of the plurality of user devices, wherein a number of the transmissions to the plurality of user devices for the user account exceeds a threshold number of transmissions that is allowed for the user account; and
generating a first target output for the first training input and the second training input, wherein the first target output identifies an indication of a preference of a user associated with the user account to cancel a first transmission to the first user device responsive to the number of the transmissions to the plurality of user devices exceeding the threshold number, and
an indication of a preference of the user to keep a second transmission to the second user device responsive to the number of the transmissions exceeding the threshold number; and

providing the training data to train the machine learning model on (i) a set of training inputs comprising the first training input and the second training input, and (ii) a set of target outputs comprising the first target output.
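A sketch of how a training example in the claimed form might be assembled (all keys, feature names, and values are hypothetical illustrations, not from the patent):

```python
def build_training_example(ctx_device_1: dict, ctx_device_2: dict,
                           n_transmissions: int, threshold: int):
    """Pair two devices' contextual information (training inputs) with a
    target marking which transmission to cancel and which to keep once
    the account's transmission count exceeds its allowed threshold."""
    if n_transmissions <= threshold:
        return None  # the claim concerns the over-threshold case
    inputs = (ctx_device_1, ctx_device_2)
    # Hypothetical target: cancel on device 1, keep on device 2.
    target = {"cancel": "device_1", "keep": "device_2"}
    return inputs, target

example = build_training_example({"screen": "tv", "active": False},
                                 {"screen": "phone", "active": True},
                                 n_transmissions=4, threshold=3)
assert example is not None and example[1]["keep"] == "device_2"
```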


US Pat. No. 11,115,694

INFORMATION PROCESSING APPARATUS, METHOD, AND PROGRAM

Sony Corporation, Tokyo ...


1. A signal processing apparatus, comprising:a frequency band dividing unit configured to divide a sound signal included in a first content into a plurality of frequency bands;
a periodicity detection unit configured to detect periodicity information of each frequency band supplied from the frequency band dividing unit;
a periodicity information merging unit configured to merge the periodicity information detected by the periodicity detection unit;
a peak detection unit configured to generate peak information by detecting a peak position of the merged periodicity information;
a downsampling unit configured to downsample the peak information for a plurality of time sections to a downsampled peak information for a time section; and
an output unit configured to output the downsampled peak information as a synchronization feature amount for synchronizing a second content with the first content.
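The band-splitting, periodicity-merging, peak-detection, and downsampling chain can be sketched with a toy pipeline (band splitting via FFT masking, autocorrelation as the periodicity information, and per-section argmax as the downsampled feature are all illustrative stand-ins for the claimed units):

```python
import numpy as np

def sync_feature(signal: np.ndarray, n_bands: int = 4,
                 hop: int = 256) -> np.ndarray:
    """Toy version of the claimed chain: divide into frequency bands,
    autocorrelate each band (periodicity information), merge across
    bands, then keep one peak position per hop-sized time section."""
    spectrum = np.fft.rfft(signal)
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    merged = np.zeros(len(signal))
    for i in range(n_bands):
        masked = np.zeros_like(spectrum)
        masked[edges[i]:edges[i + 1]] = spectrum[edges[i]:edges[i + 1]]
        band = np.fft.irfft(masked, n=len(signal))
        # periodicity information of this band (non-negative lags)
        corr = np.correlate(band, band, mode="full")[len(signal) - 1:]
        merged += corr  # merge periodicity information across bands
    # downsample: one peak position per time section
    n_sections = len(merged) // hop
    return np.array([np.argmax(merged[k * hop:(k + 1) * hop])
                     for k in range(n_sections)])

sig = np.sin(2 * np.pi * 220 * np.arange(2048) / 8000)
feat = sync_feature(sig)
assert feat.shape == (8,)   # 2048 samples / 256-sample sections
assert feat[0] == 0         # lag-0 autocorrelation dominates section 0
```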

US Pat. No. 11,115,693

SOURCE CLOCK RECOVERY IN WIRELESS VIDEO SYSTEMS

Advanced Micro Devices, I...


1. A receiver comprising:a transceiver configured to receive encoded pixels of frames; and
logic configured to:detect a start of frame and a corresponding transmitter counter value in wirelessly received data, wherein the transmitter counter value corresponds to a time at which an encoder of the transmitter detected the start of frame;
capture a receiver counter value responsive to a decoder of the receiver detecting the start of frame;
compare the transmitter counter value to the receiver counter value; and
adjust a pixel decoding rate of a decoder in the receiver based on a difference between the transmitter counter value and the receiver counter value.
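The counter-comparison and rate adjustment can be sketched as a proportional correction (the gain constant and nominal rate are illustrative assumptions; the patent only requires adjusting based on the counter difference):

```python
def adjust_decode_rate(nominal_rate_hz: float,
                       tx_counter: int,
                       rx_counter: int,
                       gain: float = 1e-6) -> float:
    """Nudge the receiver's pixel decoding rate toward the transmitter's
    source clock, proportionally to the counter difference captured at
    the start of frame (gain value is illustrative)."""
    error = tx_counter - rx_counter
    return nominal_rate_hz * (1.0 + gain * error)

# Receiver counter lags the transmitter's: decode slightly faster.
assert adjust_decode_rate(148_500_000, 1_000_500, 1_000_000) > 148_500_000
# Counters agree: rate unchanged.
assert adjust_decode_rate(148_500_000, 1_000_000, 1_000_000) == 148_500_000
```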


US Pat. No. 11,115,692

DELIVERY OF THIRD PARTY CONTENT ON A FIRST PARTY PORTAL

SONY INTERACTIVE ENTERTAI...


1. A method for dynamic assignment of streams, the method comprising:storing a content preference associated with a user, the content preference stored in memory of a platform server;
querying a plurality of third-party servers for metadata regarding a plurality of different third-party content streams hosted by one or more of the third-party servers;
receiving a user request regarding the third-party content streams, the user request received from a user device associated with the user;
dynamically assigning each of the third-party content streams into one of a plurality of different channels based on descriptive information indicated by the respective metadata and the content preference associated with the user;
generating a display of the dynamically assigned channels, wherein a selected one of the channels has been assigned third-party content streams from different third-party servers; and
streaming the assigned third-party content streams of the selected channel to the user device, wherein the platform server retrieves the assigned third-party content streams from the different third-party servers in accordance with the dynamic assignment.
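The dynamic channel assignment can be illustrated with a minimal rule (stream ids, genre metadata, and the preference-first matching rule are all hypothetical; the patent does not fix the assignment heuristic):

```python
def assign_channels(streams: list, preference: set) -> dict:
    """Assign each third-party stream to a channel named by whichever of
    its metadata genres matches the user's content preference first,
    falling back to its primary genre (illustrative rule)."""
    channels = {}
    for stream in streams:
        genres = stream["metadata"]["genres"]
        channel = next((g for g in genres if g in preference), genres[0])
        channels.setdefault(channel, []).append(stream["id"])
    return channels

streams = [
    {"id": "s1", "metadata": {"genres": ["news", "politics"]}},
    {"id": "s2", "metadata": {"genres": ["sports"]}},
    {"id": "s3", "metadata": {"genres": ["drama", "news"]}},
]
channels = assign_channels(streams, preference={"news"})
# Streams from different sources land in the same preference-matched channel.
assert channels["news"] == ["s1", "s3"]
```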

US Pat. No. 11,115,691

CUSTOM DATA INDICATING NOMINAL RANGE OF SAMPLES OF MEDIA CONTENT

Microsoft Technology Lice...


1. In a computer system that implements a video processing tool, a method comprising:receiving, as part of an elementary video bitstream of encoded video content, a sequence header for a video sequence, the sequence header including range data that indicates nominal range of samples of the encoded video content, the samples of the encoded video content having a sample depth that indicates an available range of values of the samples of the encoded video content, wherein the nominal range is a range of values within the available range for the sample depth of the samples of the encoded video content, and wherein the range data indicates one of multiple possible options for the nominal range, the multiple possible options for the nominal range including:full range characterized by values from 0 . . . 2^n−1 for samples of bit depth n; and
a limited range characterized by values in less than the full range;

parsing the sequence header, including parsing the range data; and
for each of multiple frames in a group of frames of the video sequence:receiving, as part of the elementary video bitstream of encoded video content, encoded data for the frame; and
decoding the encoded data for the frame to reconstruct the frame, thereby producing samples of reconstructed video output for the frame.
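The two signaled options can be made concrete (full range 0 . . . 2^n−1 is stated in the claim; the limited-range bounds shown are the conventional studio-swing luma levels, given here as an illustration rather than the patent's definition):

```python
def nominal_range(bit_depth: int, full_range: bool) -> tuple:
    """Value range indicated by range data: full range is 0 .. 2^n - 1
    for bit depth n; the limited range here uses the conventional
    16..235 luma bounds scaled with bit depth (illustrative)."""
    if full_range:
        return 0, (1 << bit_depth) - 1
    scale = 1 << (bit_depth - 8)
    return 16 * scale, 235 * scale

assert nominal_range(8, full_range=True) == (0, 255)
assert nominal_range(8, full_range=False) == (16, 235)
assert nominal_range(10, full_range=True) == (0, 1023)
assert nominal_range(10, full_range=False) == (64, 940)
```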


US Pat. No. 11,115,690

SYSTEMS AND METHODS FOR VIDEO DELIVERY BASED UPON SACCADIC EYE MOTION

Cable Television Laborato...


1. A method for displaying an immersive video content according to eye movement of a viewer, comprising:detecting, using an eye tracking device and a processor, saccadic movement of at least one eye of the viewer viewing a display device;
predicting, using the processor, a future field of view of the viewer based at least in part on the saccadic movement;
obtaining, from a video storage device via a communication network outside of a local network which is between the video storage device and the display device, first immersive video content corresponding to the future field of view;
displaying the first immersive video content in a first zone of the display device, the first zone corresponding to the future field of view; and
determining a size of the first zone at least partially based on a latency of the communication network outside of the local network which is between the video storage device and the display device such that the first immersive video content is displayed in the first zone before the at least one eye of the viewer arrives at the first zone.
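A plausible latency-based zone-sizing rule can be sketched as follows (the saccade speed and safety margin are assumed values, not from the patent; the claim only requires that the zone be sized so content arrives before the eye does):

```python
def zone_size_degrees(network_latency_s: float,
                      saccade_speed_deg_per_s: float = 500.0,
                      margin: float = 1.2) -> float:
    """Size the prefetch zone to cover how far the eye can travel during
    the network round trip, plus a safety margin, so the first immersive
    video content is displayed before the eye arrives."""
    return network_latency_s * saccade_speed_deg_per_s * margin

# 50 ms of latency at ~500 deg/s of saccadic motion -> ~30-degree zone.
assert abs(zone_size_degrees(0.05) - 30.0) < 1e-6
# Higher latency requires a larger zone.
assert zone_size_degrees(0.1) > zone_size_degrees(0.05)
```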

US Pat. No. 11,115,689

TRANSMISSION APPARATUS, TRANSMISSION METHOD, RECEPTION APPARATUS, AND RECEPTION METHOD

SONY CORPORATION, Tokyo ...


1. A transmission apparatus, comprising:processing circuitry configured to:obtain image data, having a basic format, from which an image having a high definition at a basic frame rate is to be obtained, the image data having the basic format obtained by the processing circuitry being configured to:(i) execute mixing processing at a first ratio in units of two temporally consecutive pictures in image data having an ultra-high definition at a high frame rate to obtain first image data as image data at the basic frame rate, and
(ii) execute down-scale processing for the first image data to obtain the image data having the basic format,

obtain image data, having a first enhancement format, from which an image having the high definition at the high frame rate is to be obtained, the image data having the first enhancement format obtained by the processing circuitry being configured to:(i) execute mixing processing at a second ratio in units of two temporally consecutive pictures to obtain second image data as image data having an enhancement frame at the high frame rate, and
(ii) execute down-scale processing for the second image data to obtain the image data having the first enhancement format,

obtain, based on at least the image data having the basic format, image data, having a second enhancement format, from which an image having the ultra-high definition at the basic frame rate is to be obtained, and
obtain, based on at least the image data having the first enhancement format, image data, having a third enhancement format, from which an image having the ultra-high definition at the high frame rate is to be obtained,
producing a basic video stream containing encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing encoded image data of the image data having the first to third enhancement formats; and

transmission circuitry configured to transmit a container having a predetermined format containing the basic video stream and the predetermined number of enhancement video streams.

US Pat. No. 11,115,688

GLOBAL APPROACH TO BUFFERING MEDIA CONTENT

NETFLIX, INC., Los Gatos...


1. A computer-implemented method for streaming media content to a client device, the method comprising:computing a first distance along a first potential playback path between a first playback position and a first media content block;
computing a first score for the first media content block based on the first distance and a first probability that the first potential playback path is to be followed during a playback session associated with the client device;
computing a second distance along a second potential playback path between a second playback position and a second media content block;
computing a second score for the second media content block based on the second distance and a second probability that the second potential playback path is to be followed during the playback session;
comparing the first score to the second score to determine that the first media content block should be buffered by the client device; and
causing the first media content block to be stored in a playback buffer for subsequent playback on the client device.
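The distance-and-probability scoring can be sketched with one plausible formula (the patent does not fix the exact scoring function; this rule simply favors nearby blocks on likely playback paths):

```python
def block_score(distance: float, path_probability: float) -> float:
    """One plausible scoring rule: closer blocks on likelier playback
    paths score higher (formula is illustrative)."""
    return path_probability / (1.0 + distance)

def choose_block_to_buffer(candidates: list) -> str:
    """Pick the candidate (name, distance, probability) with the best
    score, i.e., the block the client should buffer first."""
    return max(candidates, key=lambda c: block_score(c[1], c[2]))[0]

# A nearby block on a 60%-likely path beats a distant block on a 90% path.
assert choose_block_to_buffer([("block_1", 5.0, 0.6),
                               ("block_2", 50.0, 0.9)]) == "block_1"
```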

US Pat. No. 11,115,687

MULTI-USER INTELLIGENT CONTENT CACHE FOR BANDWIDTH OPTIMIZATION


1. An end-user device, comprising:a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:downloading media content from a server to the end-user device;
storing the media content;
presenting the media content at a display device of the end-user device to allow a user of the end-user device to view the media content;
communicating with the server via a first type of communication path to obtain permission to provide the media content, that is stored locally at the end-user device, to a second end-user device, wherein the second end-user device communicates with the server via the first type of communication path to obtain additional permission to receive the media content that is stored locally at the end-user device; and
responsive to the end-user device receiving the permission from the server and in association with the second end-user device receiving the additional permission from the server, providing the media content to the second end-user device, the media content being provided from the end-user device to the second end-user device via a second type of communication path that is different from the first type of communication path.


US Pat. No. 11,115,686

METHOD OF RECORDING, IN A MASS MEMORY OF AN ELECTRONIC DEVICE, AT LEAST ONE MULTIMEDIA CONTENT

Sagemcom Broadband SAS, ...


1. A recording method for recording, in a mass memory of electronic equipment, at least one multimedia content streamed by at least one stream, the multimedia content being stored temporarily in a buffer memory of the electronic equipment prior to being recorded in the mass memory, the recording method comprising the steps of:initializing a current recording rate for the stream in the mass memory, the current recording rate being a current rate of access to the mass memory;
acquiring a current portion of the multimedia content at the current recording rate, and storing it temporarily in the buffer memory;
evaluating an occupancy fraction for the buffer memory, the occupancy fraction being a fraction of the buffer memory being occupied;
if the occupancy fraction of the buffer memory is greater than a predetermined high occupancy threshold, decreasing the current recording rate;
if the occupancy fraction of the buffer memory is less than or equal to a predetermined low occupancy threshold, increasing the current recording rate.
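The claimed occupancy-driven control loop can be sketched directly (the threshold values and step size are illustrative; the direction of each adjustment follows the claim):

```python
def update_recording_rate(rate: float,
                          occupancy: float,
                          high_threshold: float = 0.8,
                          low_threshold: float = 0.2,
                          step: float = 0.1) -> float:
    """Adjust the current recording rate from the buffer occupancy
    fraction: decrease above the high threshold, increase at or below
    the low threshold, otherwise leave unchanged."""
    if occupancy > high_threshold:
        return rate * (1.0 - step)
    if occupancy <= low_threshold:
        return rate * (1.0 + step)
    return rate

assert update_recording_rate(100.0, 0.9) < 100.0   # above high threshold
assert update_recording_rate(100.0, 0.1) > 100.0   # at/below low threshold
assert update_recording_rate(100.0, 0.5) == 100.0  # within the band
```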

US Pat. No. 11,115,685

SYSTEMS AND METHODS FOR SHARING VIDEO DATA VIA SOCIAL MEDIA

Bruce Melanson, Valrico,...


1. A system for capturing and sharing video images via social media, the system comprising:(a) a processor in a first electronic device that identifies a video image from a second electronic device being displayed on the second electronic device at an event, wherein the first electronic device includes a computing device and a transmitting device linked to a second electronic device and configured to communicate with a third electronic device;
(b) the first electronic device captures the video image displayed on the second electronic device to form a captured video image while the second electronic device provides the captured video image with a unique identifier;
(c) the third electronic device communicates with the first electronic device by downloading and installing an application, wherein the third electronic device comprises a peripheral device;
(d) creating, with the third electronic device, a user account via the application by providing information comprising a name, email address, password, date of birth, address, nationality or any combination thereof and at least one of favorite sports, favorite sports teams, favorite entertainment events other than sports, desired venues, merchandise types typically purchased, whether the user ever uses coupons and what kind if answered yes, whether the system can forward coupons to the user, whether the system can forward targeted emails to the user, whether the system can forward advertisements to the user, and/or user billing information;
(e) the first electronic device receives and communicates the captured video image with the unique identifier to the third electronic device; and
(f) the third electronic device is configured to share the captured video image of step (e) via social media as desired.

US Pat. No. 11,115,684

SYSTEM, METHOD, AND PROGRAM FOR DISTRIBUTING LIVE VIDEO

DeNA CO., LTD., Tokyo (J...


1. A live video distribution system comprising one or more computer processors, wherein, in response to execution of a readable command, the one or more computer processors execute:processing to distribute, to each of a plurality of viewers, live video provided by each of a plurality of distributors;
processing to determine a ranking of each of the plurality of distributors from among a plurality of rankings based on at least the live video distribution performance of each of the plurality of distributors, wherein the plurality of rankings is arranged into a plurality of ranking groups, and wherein the processing to determine the ranking comprises determining the ranking of each of the plurality of distributors, based on at least a value of a specific parameter for each of the plurality of distributors, by one or both of, when a next highest ranking above a current ranking of the distributor is in a same ranking group as a current ranking group, increasing the ranking to the next highest ranking if the value of the specific parameter is above a first rank-increase threshold, and when the next highest ranking is in a different ranking group from that of the current ranking group, increasing the ranking to the next highest ranking if the value of the specific parameter is above a second rank-increase threshold that is greater than the first rank-increase threshold, or,
when a next lowest ranking below the current ranking of the distributor is in the same ranking group as the current ranking group, decreasing the ranking to the next lowest ranking if the value of the specific parameter is below a first rank-decrease threshold, and when the next lowest ranking is in a different ranking group from that of the current ranking group, decreasing the ranking to the next lowest ranking if the value of the specific parameter is below a second rank-decrease threshold that is less than the first rank-decrease threshold;

processing to set a base reward quantity for each of the plurality of distributors based on at least the ranking of each of the plurality of distributors; and
processing to give each of the plurality of distributors rewards in a quantity based on at least a distribution duration of the live video and the base reward quantity.
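The promotion half of the two-threshold rule can be sketched as follows (group boundaries, threshold values, and parameter values are illustrative; the demotion half is symmetric with thresholds where the cross-group threshold is the smaller one):

```python
def next_ranking(current_rank: int,
                 param_value: float,
                 group_of,              # callable: rank -> ranking group id
                 t_inc: float,
                 t_inc_cross: float) -> int:
    """Promote a distributor one ranking: within the same ranking group
    the first rank-increase threshold applies; crossing into a different
    group requires the stricter second threshold (t_inc_cross > t_inc)."""
    candidate = current_rank + 1
    same_group = group_of(candidate) == group_of(current_rank)
    threshold = t_inc if same_group else t_inc_cross
    return candidate if param_value > threshold else current_rank

# Illustrative grouping: ranks 1-3 in group 0, ranks 4-6 in group 1.
group = lambda rank: (rank - 1) // 3
assert next_ranking(1, 60, group, t_inc=50, t_inc_cross=100) == 2   # within group
assert next_ranking(3, 60, group, t_inc=50, t_inc_cross=100) == 3   # crossing blocked
assert next_ranking(3, 120, group, t_inc=50, t_inc_cross=100) == 4  # crossing allowed
```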

US Pat. No. 11,115,683

HIGH DEFINITION VP8 DECODER

Texas Instruments Incorpo...


1. A video decoding system comprising:one or more processors; and
memory coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the video decoding system to:
perform an entropy decoding operation in a frame level pipelined manner on a bitstream representative of encoded video data to produce first output data;
perform at least one of an inverse quantization operation or an inverse frequency transform operation in a row level pipelined manner on the first output data to produce second output data;
perform a deblocking filtering operation on first input data that is based on the second output data in a row level pipelined manner to produce third output data, wherein the deblocking filtering operation is performed using a non-exact approximate loop filter;

perform a feedback operation on the third output data to produce fourth output data, wherein the feedback operation includes performing a fading compensation in a row level pipelined manner followed by performing a motion compensation operation in a row level pipelined manner on the third output data to produce the fourth output data;
perform a summing operation on the second output data and the fourth output data to produce the first input data; and
output a decoded video output based on the third output data.


US Pat. No. 11,115,682

BLOCK-BASED PREDICTIVE CODING AND DECODING OF A PICTURE

Fraunhofer-Gesellschaft z...


1. An apparatus for block-based predictive decoding of a picture, comprising:a prediction provider configured to predict a predetermined block of the picture to acquire a first version of a predicted filling of the predetermined block;
a spectral decomposer configured to spectrally decompose a region composed of the first version of the predicted filling of the predetermined block and a reconstructed version of a neighborhood of the predetermined block so as to acquire a first spectrum of the region;
a noise reducer configured to perform noise reduction on the first spectrum to acquire a second spectrum;
a spectral composer configured to subject the second spectrum to spectral composition so as to acquire a modified version of the region including a second version of the predicted filling of the predetermined block;
a reconstructor configured to decode a reconstructed version of the predetermined block from a data stream on the basis of the second version of the predicted filling,
wherein the reconstructor is configured to decode a first residual signal from the data stream and correct a prediction signal for the neighborhood using the first residual signal to obtain the reconstructed version of the neighborhood of the predetermined block, and decode a second residual signal from the data stream and correct the second version of the predicted filling of the predetermined block using the second residual signal to obtain the reconstructed version of the predetermined block.

US Pat. No. 11,115,681

HYBRID VIDEO CODING SUPPORTING INTERMEDIATE VIEW SYNTHESIS

GE Video Compression, LLC...


1. A decoder for decoding encoded information representing a multi-view video, comprising:an extractor configured to obtain, from the encoded information including data related to a first-view video and a second-view video of the multi-view video, a disparity vector associated with a sub-region of a frame of the second-view video, wherein the disparity vector indicates a displacement of the sub-region of the frame of the second-view video with respect to a corresponding frame of the first-view video;
a predictive reconstructor configured to reconstruct the sub-region of the frame of the second-view video based on a reconstructed portion of the corresponding frame of the first-view video and the disparity vector in accordance with a prediction mode; and
a view synthesizer configured to:
determine a scaled disparity vector based on the disparity vector and a scaling factor, and
synthesize a portion of a frame of a synthesized third-view video using the reconstructed portion of the frame of the first-view video and the scaled disparity vector.

US Pat. No. 11,115,680

APPARATUSES AND METHODS FOR ENCODING AND DECODING A PANORAMIC VIDEO SIGNAL

Huawei Technologies Co., ...


1. An encoding apparatus for encoding a video signal, the video signal being a two-dimensional (2D) projection of a panoramic video signal and comprising a plurality of successive frames including a reference frame and a current frame, each frame of the plurality of successive frames comprising a plurality of video coding blocks, each video coding block comprising a plurality of pixels, the encoding apparatus comprising computer hardware implementing a plurality of units, including:an inter prediction unit for generating a predicted video coding block for a current video coding block of the current frame based on a corresponding video coding block of the reference frame, wherein the inter prediction unit is configured to:determine a three-dimensional (3D) reference motion vector based on a first 3D position defined by a projection of a first pixel of the current video coding block of the current frame onto a viewing sphere and based on a second 3D position defined by a projection of a second pixel of the corresponding video coding block of the reference frame onto the viewing sphere, and
predict at least one pixel of the current video coding block based on the 3D reference motion vector; and

an encoding unit configured to encode the current video coding block of the current frame based on the predicted video coding block,
wherein the inter prediction unit is configured to predict the at least one pixel of the current video coding block based on the 3D reference motion vector by:
selecting a further pixel from the corresponding video coding block of the reference frame;
generating a modified 3D reference motion vector based on the 3D reference motion vector, wherein the modified 3D reference motion vector and the 3D reference motion vector have a same length, but different orientations;
adding the modified 3D reference motion vector to a third 3D position defined by a projection of the further pixel from the corresponding video coding block of the reference frame onto the viewing sphere to generate a fourth 3D position; and
determining a 2D position of the at least one pixel of the current video coding block by de-projecting the fourth 3D position onto the current frame.

US Pat. No. 11,115,679

METHOD AND SYSTEM FOR ENCODING AND TRANSMITTING HIGH DEFINITION 3-D MULTIMEDIA CONTENT

DISNEY ENTERPRISES, INC.,...


1. A method for decoding encoded content including a plurality of encoded frames, comprising:receiving by an image processing device the encoded frames, wherein the encoded frames comprise a first type frame, a second type frame, and at least one null frame;
detecting by the image processing device a synchronization signal corresponding to the encoded frames; and
compiling by the image processing device the encoded frames into a frame-sequential stream of frames, according to the synchronization signal, wherein when the frame-sequential stream of frames is displayed on a display device, the at least one null frame is not displayed, the first type frame utilizes a full resolution of the display device, and the second type frame utilizes the full resolution of the display device to display a full resolution of the decoded content.

US Pat. No. 11,115,678

DIVERSIFIED MOTION USING MULTIPLE GLOBAL MOTION MODELS

GOOGLE LLC, Mountain Vie...


1. An apparatus for encoding a current frame of a video, comprising:a memory; and
a processor, the processor configured to execute instructions stored in the memory to:generate, for each reference frame of a subset of available reference frames, at least two respective candidate global motion models (GMMs), wherein to generate the at least two respective candidate GMMs comprises to:partition the current frame into at least two segments;
generate, for the each reference frame, at least two respective GMMs corresponding to the at least two segments; and
use the at least two respective GMMs corresponding to the at least two segments as the at least two respective GMMs for the each reference frame;

partition the current frame into blocks;
generate an aggregated residual frame for the current frame, wherein to generate the aggregated residual frame comprises to:select, for predicting each block, a respective selected GMM, wherein the respective selected GMM corresponds to the one of the at least two respective candidate GMMs that minimizes a total error associated with the aggregated residual frame; and
obtain respective residual blocks for the block; and

encode the respective residual blocks in a compressed bitstream.


US Pat. No. 11,115,677

IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS

SUN PATENT TRUST, New Yo...


1. An image decoding method of decoding each block among blocks of pictures, the image decoding method comprising:selecting a collocated picture from a plurality of referable reference pictures, the collocated picture being different from a current picture that includes a current block to be decoded;
deriving a candidate for motion vectors of the current block, from a first motion vector of a first block included in the collocated picture;
adding the derived candidate to a list of motion vector candidates;
selecting, based on an index, one candidate that includes a second motion vector from the list of motion vector candidates; and
decoding the current block using (i) the second motion vector included in the selected one candidate and (ii) a reference picture of the current block,
wherein the deriving includes:determining whether a collocated reference picture of the first block is a long-term reference picture or a short-term reference picture, the collocated reference picture being a reference picture referred to by the first block included in the collocated picture;
determining whether the reference picture of the current block is a long-term reference picture or a short-term reference picture;
deriving the second motion vector so as to be the same as the first motion vector without scaling based on a temporal distance in the case of determining that (i) the collocated reference picture of the first block is a long-term reference picture and (ii) the reference picture of the current block is a long-term reference picture; and
deriving the second motion vector from the first motion vector by scaling based on a temporal distance in the case of determining that (i) the collocated reference picture of the first block is a short-term reference picture and (ii) the reference picture of the current block is a short-term reference picture,

wherein the scaling is based on a ratio of temporal distance calculated by using (i) a difference between a Picture Order Count (POC) assigned to the current picture and the POC assigned to the reference picture and (ii) a difference between the POC assigned to the collocated picture and the POC assigned to the collocated reference picture, and
wherein the POC is a sequential number assigned for each of the current picture, the reference picture, the collocated picture, and the collocated reference picture based on display order.
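A minimal sketch of the derivation rule in the claim above, assuming invented names: when both reference pictures are long-term, the first motion vector is reused without scaling; when both are short-term, it is scaled by the ratio of POC (display-order) distances. Real codecs use clipped fixed-point arithmetic; plain integer arithmetic is used here for illustration.

```python
def derive_second_mv(first_mv, poc_cur, poc_ref, poc_col, poc_col_ref,
                     ref_long_term, col_ref_long_term):
    # Long-term / long-term: reuse the first motion vector unscaled.
    if ref_long_term and col_ref_long_term:
        return first_mv
    # Short-term / short-term: scale by the ratio of POC distances.
    tb = poc_cur - poc_ref        # current picture -> its reference
    td = poc_col - poc_col_ref    # collocated picture -> its reference
    return tuple(c * tb // td for c in first_mv)
```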

US Pat. No. 11,115,676

INTERACTION BETWEEN INTRA BLOCK COPY MODE AND INTER PREDICTION TOOLS

BEIJING BYTEDANCE NETWORK...


1. A method for coding video data, comprising:determining that an intra block copy (IBC) mode is applied to a current video block of a video, wherein in the IBC mode, reference samples from a video region including the current video block are used;
making a decision regarding a disabling of a combined inter-intra prediction mode for the current video block, wherein in the combined inter-intra prediction mode, a final prediction is generated at least based on a weighted sum of an intermediate intra prediction signal and an intermediate inter prediction signal; and
performing, based on the decision, a conversion between the current video block and a bitstream of the video,
wherein a combined inter-intra prediction flag for the current video block is not included in the bitstream in response to the IBC mode being used in the current video block.

US Pat. No. 11,115,675

CONDITIONS FOR UPDATING LUTS

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:maintaining one or multiple tables, wherein each table includes one or more motion candidates derived from one or more video blocks that have been coded, and arrangement of the motion candidates in the each table is based on a sequence of addition of the motion candidates into the table;
constructing a motion candidate list for a current video block;
determining motion information of the current video block using the motion candidate list; and
coding the current video block based on the determined motion information;
wherein whether the determined motion information is used to update a table of the one or multiple tables is based on coded information of the current video block, wherein the table, after updating, is used to construct a motion candidate list of a subsequent video block, and
wherein using the table to construct the motion candidate list comprises checking at least one motion candidate of the table to determine whether to add the checked motion candidate from the table to the motion candidate list of the subsequent video block, and the coded information comprises a coding mode of the current video block.

US Pat. No. 11,115,674

METHOD AND DEVICE FOR INDUCING MOTION INFORMATION BETWEEN TEMPORAL POINTS OF SUB PREDICTION UNIT

University-Industry Coope...


1. A method of decoding an image, the method comprising:deriving motion information of a current block; and
deriving a prediction sample for the current block based on the motion information of the current block,
wherein the deriving motion information of the current block comprises:determining whether a center sub block within a reference block has motion information;

when the center sub block within the reference block has motion information, deriving motion information of sub blocks within the current block from sub blocks within the reference block; and
when the center sub block within the reference block does not have motion information, terminating the deriving motion information of sub blocks within the current block from sub blocks within the reference block,
wherein the center sub block corresponds to a center position of the current block, and
wherein the motion information includes at least a motion vector.

US Pat. No. 11,115,673

IMAGE ENCODER USING MACHINE LEARNING AND DATA PROCESSING METHOD OF THE IMAGE ENCODER

Samsung Electronics Co., ...


1. An image encoder for outputting a bitstream by encoding an input image, the image encoder comprising:a predictive block configured to generate a prediction block using data of a previous input block;
a machine learning based prediction enhancement (MLBE) block configured to transform the prediction block into an enhanced prediction block by applying a machine learning technique to the prediction block, wherein the MLBE block:selects one of the prediction block and the enhanced prediction block as a selected block in response to a rate-distortion optimization (RDO) value, wherein the enhanced prediction block is selected as the selected block in response to a rate-distortion optimization (RDO) value corresponding to the enhanced prediction block indicating a performance gain with respect to an RDO value corresponding to the prediction block, and the prediction block is selected as the selected block in response to the RDO value corresponding to the enhanced prediction block indicating no performance gain with respect to the RDO value corresponding to the prediction block; and

a subtractor configured to generate a residual block of residual data by subtracting pixel data of the selected block from pixel data of a current input block.
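The claimed RDO-based switch can be sketched as below; `rdo_cost` is an assumed cost callback and the names are illustrative, not from the patent. The enhanced block is used only when its rate-distortion cost shows a gain over the plain prediction block.

```python
def select_prediction(pred_block, enhanced_block, rdo_cost):
    # Choose the enhanced block only if its RDO cost indicates a
    # performance gain; otherwise keep the plain prediction block.
    if rdo_cost(enhanced_block) < rdo_cost(pred_block):
        return enhanced_block
    return pred_block
```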

US Pat. No. 11,115,672

VIDEO ENCODING METHOD AND APPARATUS, VIDEO DECODING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...


1. A video encoding method, applied to a computer device having a processor and memory storing a plurality of computer programs to be executed by the processor, the method comprising:obtaining a current encoding block to be encoded in a current video frame, the current encoding block having a width and a height that is different from the width;
determining, within the current video frame, target reference pixels corresponding to the current encoding block, a target quantity corresponding to the target reference pixels being the e-th power of one of the width and the height of the encoding block under a target numeral system, e being a positive integer, the target numeral system being a numeral system used for calculating a predicted value of the current encoding block;
obtaining a predicted value corresponding to the current encoding block according to the target reference pixels; and
performing video encoding on the current encoding block according to the predicted value, to obtain encoded data.

US Pat. No. 11,115,671

SYSTEM AND METHOD FOR VIDEO CODING

Panasonic Intellectual Pr...


1. A decoder, comprising:circuitry;
memory coupled to the circuitry;
wherein the circuitry, in operation:parses a first flag indicating whether a CCALF (cross component adaptive loop filtering) process is enabled for a first block, the first block located adjacent to a left side of a current block;
parses a second flag indicating whether the CCALF process is enabled for a second block, the second block located adjacent to an upper side of the current block;
determines a first index associated with a color component of the current block;
derives a second index indicating a context model, using an equation that is a function of the first flag, the second flag, and the first index, the equation being used to derive a third index indicating another context model for an ALF (adaptive loop filtering) control flag;
performs entropy decoding of a third flag indicating whether the CCALF process is enabled for the current block, using the context model indicated by the second index, and
performs the CCALF process on the current block in response to the third flag indicating the CCALF process is enabled for the current block.
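The claim states only that the context index is derived by an equation in the left flag, the above flag, and a colour-component index; it does not give the equation. The sketch below assumes a simple additive form purely for illustration.

```python
def ccalf_context_index(left_enabled, above_enabled, color_idx):
    # Assumed additive form: neighbour flags contribute 0 or 1 each,
    # offset by a per-colour-component base. Not the patent's equation.
    return int(left_enabled) + int(above_enabled) + 3 * color_idx
```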


US Pat. No. 11,115,670

IMAGE ENCODING APPARATUS, IMAGE ENCODING METHOD, RECORDING MEDIUM AND PROGRAM, IMAGE DECODING APPARATUS, IMAGE DECODING METHOD, AND RECORDING MEDIUM AND PROGRAM

Canon Kabushiki Kaisha, ...


1. An image encoding apparatus that performs hierarchical coding of images composing a moving image with a plurality of layers, the image encoding apparatus comprising:a generating unit that generates a second image from a first image, wherein the second image corresponds to a second layer which is different from a first layer to which the first image corresponds;
an encoding unit that encodes a first tile set composed of one or more tiles in the first image and a second tile set composed of one or more tiles in the second image; and
an information encoding unit,
wherein the second tile set is arranged, in the second image, at a position corresponding to the first tile set,
wherein the encoding unit encodes the first tile set without reference to any tile set in the first image other than the first tile set, and encodes the second tile set without reference to any tile set in the second image other than the second tile set,
wherein, in a case where the encoding unit encodes the first tile set with reference to at least a part of an area of the second image, the encoding unit encodes the first tile set without reference to any tile set in the second image other than the second tile set,
wherein the information encoding unit encodes a supplemental enhancement information (SEI) message indicating constraint related to a decoding process of the first tile set and the second tile set, and
wherein the SEI message includes a top left tile index indicating a position of a top left tile included in a tile set.

US Pat. No. 11,115,669

END OF SEQUENCE AND END OF BITSTREAM NAL UNITS IN SEPARATE FILE TRACKS

QUALCOMM Incorporated, S...


1. A system for processing a file for storage of data, the system comprising:a memory configured to store the file for storage of data; and
a processor configured to:receive a file comprising a first track and a second track, the first track including a first set of access units of a coded video sequence (CVS) of a bitstream, the second track including a second set of access units of the same CVS, the first set of access units including a first access unit that includes a first end of sequence (EOS) network abstraction layer (NAL) unit, the second set of access units including a second access unit that includes a second EOS NAL unit, the second EOS NAL unit is different from the first EOS NAL unit, the first access unit belonging to a first temporal sub-layer, and the second access unit belonging to a second temporal sub-layer different from the first temporal sub-layer; and
output, based on a comparison of a time associated with the first access unit and a time associated with the second access unit, the first EOS NAL unit and discard the second EOS NAL unit.


US Pat. No. 11,115,668

SUPPLEMENTAL ENHANCEMENT INFORMATION INCLUDING CONFIDENCE LEVEL AND MIXED CONTENT INFORMATION

Microsoft Technology Lice...


1. A method performed by a decoder device, comprising:receiving encoded data for a sequence of pictures in a bitstream or bitstream portion, the encoded data in the bitstream or bitstream portion including a first flag and a second flag for identifying source scan type for the sequence of pictures, the first flag and the second flag collectively and exclusively indicating one of the following unique states for the sequence of pictures: a state indicating that the source scan type of the pictures in the sequence is interlaced, a state indicating that the source scan type of the pictures in the sequence is progressive, a state indicating that the source scan type of the pictures in the sequence is unknown, and a state indicating that the source scan type is independently indicated for each picture of the pictures of the sequence by a value of a picture-level syntax element that is to be signaled as part of an SEI message or to be inferred, wherein the first flag indicates whether the source scan type of the pictures is interlaced and the second flag is different from the first flag, is a separate syntax element from the first flag, and indicates whether the source scan type of the pictures is progressive; and
determining, for a given picture of the pictures of the sequence, the value of the picture-level syntax element that indicates the source scan type of the given picture, the picture-level syntax element indicating one of the following states: a state indicating that the source scan type of the given picture is interlaced, a state indicating that the source scan type of the given picture is progressive, and a state indicating that the source scan type of the given picture is unknown, wherein the determining the value of the picture-level syntax element includes:determining that the picture-level syntax element is not present in the encoded data; and
inferring the value of the picture-level syntax element, including:if the first flag and the second flag indicate that the source scan type of the pictures in the sequence is progressive, inferring the value of the picture-level syntax element to indicate that the source scan type of the given picture is progressive;
if the first flag and the second flag indicate that the source scan type of the pictures in the sequence is interlaced, inferring the value of the picture-level syntax element to indicate that the source scan type of the given picture is interlaced; and
otherwise, inferring the value of the picture-level syntax element to indicate that the source scan type of the given picture is unknown;


decoding the pictures; and
processing the decoded pictures in accordance with the source scan type.
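The inference rule in the claim above, when the picture-level syntax element is absent, reduces to a small decision on the two sequence-level flags. A minimal sketch with illustrative names:

```python
def infer_picture_scan_type(seq_interlaced_flag, seq_progressive_flag):
    # Infer the per-picture source scan type from the two
    # sequence-level flags when the picture-level element is absent.
    if seq_progressive_flag and not seq_interlaced_flag:
        return "progressive"
    if seq_interlaced_flag and not seq_progressive_flag:
        return "interlaced"
    return "unknown"
```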

US Pat. No. 11,115,667

MOBILE DEVICE IMAGE COMPRESSION

AMERICAN EXPRESS TRAVEL R...


1. A method comprising:identifying, by a mobile device, a targeted upload time for the mobile device based on a screen size of the mobile device;
determining, by the mobile device, a file size for a transmission of an image based at least in part on a connection latency between the mobile device and a computing device and based at least in part on the targeted upload time for completing the transmission of the image, wherein the file size is determined prior to the transmission of the image;
compressing, by the mobile device, the image into a compressed image until a compressed file size of the compressed image meets the file size; and
transmitting, by the mobile device, the compressed image to the computing device for storage.
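A hypothetical sketch of the size-budget computation the claim describes: the upload-time budget remaining after the connection latency, multiplied by throughput, bounds the file size to compress the image down to. The throughput model, the halving loop standing in for iterative re-compression, and all names are assumptions, not from the patent.

```python
def target_file_size(throughput_bytes_per_s, latency_s, target_upload_s):
    # Budget = throughput * (target upload time minus latency).
    usable_s = max(0.0, target_upload_s - latency_s)
    return int(throughput_bytes_per_s * usable_s)

def compress_to_budget(image_bytes, max_size):
    # Toy stand-in for iterative re-compression: shrink until the
    # compressed size meets the budget.
    data = image_bytes
    while len(data) > max_size:
        data = data[: len(data) // 2]
    return data
```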

US Pat. No. 11,115,666

SEMANTIC VIDEO ENCODING


1. A method comprising:receiving, by a processing system, a video program;
identifying, by the processing system, boundaries of a scene in the video program;
identifying, by the processing system, a theme in the scene, wherein the theme comprises a concept selected from a lexical database, wherein the lexical database defines themes comprising: different types of objects, different sites, different types of people, different events, and different activities;
selecting, by the processing system, an encoding strategy for the scene based upon the theme and a target bitrate, wherein the encoding strategy is associated with the theme, wherein for each of a plurality of themes, the processing system maintains a plurality of encoding strategies associated with different target bitrates, wherein the plurality of themes includes the theme;
encoding, by the processing system, the scene using the encoding strategy that is selected;
receiving, by the processing system, a video quality indicator of the scene that is encoded; and
adjusting, by the processing system based upon the video quality indicator, the encoding strategy that is associated with the theme as maintained by the processing system, wherein the encoding strategy is adjusted when the video quality indicator falls below a video quality indicator threshold, wherein for each of the plurality of themes, the processing system maintains a plurality of video quality indicator thresholds, wherein each of the plurality of video quality indicator thresholds is for adjusting a different one of the plurality of encoding strategies.

US Pat. No. 11,115,664

IMAGE CODING METHOD, IMAGE CODING APPARATUS, IMAGE DECODING METHOD, IMAGE DECODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS

SUN PATENT TRUST, New Yo...


1. A moving picture coding apparatus for coding a current block, the moving picture coding apparatus comprising:a processor; and
a non-transitory memory,
wherein the processor performs, using the non-transitory memory, processes including:deriving a first candidate from a first motion vector that has been used to code a first block, the first block being adjacent to the current block;
coding a first index identifying a reference picture selected for coding the current block;
deriving a second candidate having a second motion vector that includes a non-zero value, the non-zero value assigned to the reference picture;
selecting a selected candidate to be used for coding the current block from a plurality of candidates, the plurality of candidates including the first candidate and the second candidate;
coding a second index identifying the selected candidate; and
coding the current block using the selected candidate, and

the second candidate includes the non-zero value of the reference picture, the reference picture being selected from a plurality of referable reference pictures based on the first index.

US Pat. No. 11,115,663

CODEBOOK GENERATION FOR CLOUD-BASED VIDEO APPLICATIONS

Adobe Inc., San Jose, CA...


1. A method for generating a codebook for vector quantization of digital video content, the method comprising:performing entropy decoding on a pre-compressed training video stream to provide a decoded video stream that comprises a sequence of decoded video stream components;
partitioning the sequence of decoded video stream components into a first subset of image data segments, a second subset of motion data segments, and a third subset of control data segments;
grouping the first subset of image data segments to form an image data vector that is longer than each of the image data segments in the first subset;
generating a codebook entry that is at least partially based on the image data vector; and
associating an index value with the generated codebook entry, wherein the index value is shorter than, and provides a compressed representation of, the generated codebook entry.

US Pat. No. 11,115,662

QUANTIZATION MATRIX DESIGN FOR HEVC STANDARD

SONY GROUP CORPORATION, ...


1. A method, comprising:determining intra quantization matrices (QM_hvs_intra) of square-shaped blocks; and
converting the intra quantization matrices of the square-shaped blocks into correspondingly-sized inter square-shaped quantization matrices (QM_hvs_inter) according to the following conversion:
(i) for 1st row/column:
QM_hvs_inter[0][0] = QM_hvs_intra[0][0];
for (n=1; n<BLK_X; n++)
QM_hvs_inter[0][n] = QM_hvs_inter[0][n-1] + (unsigned char) (slope_1st*(float) (QM_hvs_intra[0][n]-QM_hvs_intra[0][n-1]) + 0.5);
(ii) for last row/column:
QM_hvs_inter[0][BLK_Y-1] = QM_hvs_intra[0][BLK_Y-1];
for (m=1; m<BLK_X; m++)
QM_hvs_inter[m][BLK_Y-1] = QM_hvs_inter[m-1][BLK_Y-1] + (unsigned char) (slope_last*(float) (QM_hvs_intra[m][BLK_Y-1]-QM_hvs_intra[m-1][BLK_Y-1]) + 0.5); and
(iii) remaining data due to symmetry:
for (m=1; m

Where:
slope_1st = 0.714285714;
slope_last = 0.733333333;
BLK_X is a matrix size in x direction;
BLK_Y is a matrix size in y direction; and
n, m are counter variables.
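A runnable Python rendering of conversion (i) above: each inter entry extends the previous inter entry by a slope-scaled, rounded intra difference. The last-row/column case (ii) is analogous with slope_last = 0.733333333. The `int(... + 0.5)` rounding mirrors the claim's `(unsigned char)(... + 0.5)` cast for non-negative differences.

```python
SLOPE_1ST = 0.714285714  # slope for the first row/column, per the claim

def convert_first_row(qm_intra_row):
    # Convert one intra quantization-matrix row into its inter
    # counterpart by accumulating slope-scaled, rounded differences.
    qm_inter = [0] * len(qm_intra_row)
    qm_inter[0] = qm_intra_row[0]
    for n in range(1, len(qm_intra_row)):
        step = int(SLOPE_1ST * (qm_intra_row[n] - qm_intra_row[n - 1]) + 0.5)
        qm_inter[n] = qm_inter[n - 1] + step
    return qm_inter
```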


US Pat. No. 11,115,660

METHOD AND APPARATUS OF SYNTAX INTERLEAVING FOR SEPARATE CODING TREE IN VIDEO CODING

MEDIATEK INC., Hsinchu (...


1. A method of video coding used by a video coding system, the method comprising:receiving input data associated with a current data unit in a current picture, wherein the input data associated with the current data unit comprise a luma component and a chroma component;
partitioning the current data unit into multiple initial blocks using inferred splitting without split-syntax signalling when the current data unit is larger than M×N for the luma component, wherein each of the multiple initial blocks comprises an initial luma block and an initial chroma block, and wherein M and N are positive integers;
determining a partition structure for partitioning the initial luma block and the initial chroma block of each initial block into one or more luma CUs (coding units) and one or more chroma CUs respectively; and
signalling or parsing one or more luma syntaxes and one or more chroma syntaxes associated with one initial block in the current data unit, and then signalling or parsing one or more luma syntaxes and one or more chroma syntaxes associated with one next initial block in the current data unit.

US Pat. No. 11,115,659

VIDEO SIGNAL ENCODING/DECODING METHOD AND APPARATUS

Industry Academy Cooperat...


1. A method of decoding an image signal, comprising:decoding a coefficient of a current block from a bitstream based on a result of comparing a size of the current block and a pre-determined threshold size; and
reconstructing the current block using the decoded coefficient,
wherein when the size of the current block is greater than or equal to the pre-determined threshold size, decoding the coefficient of the current block is performed only on a partial region in the current block, and is not performed on a remaining region except for the partial region in the current block,
wherein when the size of the current block is less than the pre-determined threshold size, decoding the coefficient of the current block is performed on an entire region of the current block,
wherein the partial region is divided into a plurality of partial blocks, and
wherein the coefficient of the partial region is decoded in units of the partial blocks.

US Pat. No. 11,115,658

MATRIX INTRA PREDICTION AND CROSS-COMPONENT LINEAR MODEL PREDICTION HARMONIZATION FOR VIDEO CODING

Qualcomm Incorporated, S...


1. A method of decoding video data, the method comprising:predicting, by one or more processors implemented in circuitry, luma samples for a block of the video data using matrix intra prediction (MIP), wherein using MIP comprises down-sampling a set of luma neighboring samples to generate down-sampled luma neighboring samples;
predicting, by the one or more processors, chroma samples for the block using cross-component linear model (CCLM) prediction, wherein using CCLM prediction comprises predicting the chroma samples for the block based on the down-sampled luma neighboring samples generated from the MIP;
generating, by the one or more processors, a prediction block for the block based on the luma samples and the chroma samples;
decoding, by the one or more processors, a residual block for the block; and
combining, by the one or more processors, the prediction block and the residual block to decode the block.

US Pat. No. 11,115,657

CHECKING ORDER OF MOTION CANDIDATES IN LUT

BEIJING BYTEDANCE NETWORK...


1. A video processing method, comprising:maintaining one or more tables, wherein each table includes one or more candidates derived from one or more video blocks that have been coded;
constructing a candidate list for a current video block of a video, wherein at least one candidate in the table is checked in an order of at least one index of the at least one candidate;
determining motion information of the current video block using the candidate list; and
coding the current video block based on the determined motion information, and
wherein K candidates from the table which have indices equal to a0, a0-T0, a0-T0-T1-T2, . . . a0-T0-T1-T2- . . . -TK-1 are checked in an order of K indices wherein a0 and Ti are integer values, i being 0 . . . K-1.
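The claimed checking order can be sketched as below: candidates at indices a0, a0-T0, a0-T0-T1, and so on are visited in sequence. The out-of-range filtering and the names are illustrative assumptions, not from the patent.

```python
def checking_order(a0, deltas, table_size):
    # Visit indices a0, a0-T0, a0-T0-T1, ... in order; drop any index
    # that falls outside the table.
    order, idx = [a0], a0
    for t in deltas:
        idx -= t
        order.append(idx)
    return [i for i in order if 0 <= i < table_size]
```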

US Pat. No. 11,115,656

SELECTION OF CODED MOTION INFORMATION FOR LUT UPDATING

BEIJING BYTEDANCE NETWORK...


1. A method for processing video data, comprising:maintaining one or multiple tables, wherein each table includes one or more motion candidates derived from one or more video blocks that have been coded, and arrangement of the motion candidates in the each table is based on a sequence of addition of the motion candidates derived from the one or more video blocks into the each table;
constructing a motion candidate list for a current video block; wherein whether to use a table of the one or multiple tables to construct the motion candidate list is based on coded information of the current video block, wherein the coded information comprises a coding mode of the current video block, and the using the table to construct the motion candidate list comprises checking at least one motion candidate of the table to determine whether to add the checked motion candidate from the table to the candidate list;
determining motion information of the current video block using the motion candidate list; and
coding the current video block based on the determined motion information.

US Pat. No. 11,115,655

NEIGHBORING SAMPLE SELECTION FOR INTRA PREDICTION

BEIJING BYTEDANCE NETWORK...


1. A method for processing video data, comprising:determining, for a conversion between a current video block of a video that is a chroma block and a bitstream of the video, parameters of cross-component linear model (CCLM) at least based on selected chroma samples that are selected from neighboring chroma samples of the current video block, wherein positions of the selected chroma samples are derived from a first position offset value (F) and a step value (S), and wherein the F and S are derived at least based on availabilities of the neighboring chroma samples of the current video block and a dimension of the current video block;
applying the CCLM on luma samples located in a luma block corresponding to the current video block to derive prediction values of the current video block; and
performing the conversion based on the prediction values,
wherein the selected chroma samples are a subset of the neighboring chroma samples based on the dimension of the current video block, and the CCLM is an intra prediction mode,
wherein the neighboring chroma samples include left neighboring chroma samples, above neighboring chroma samples, above-right neighboring chroma samples, or below-left neighboring chroma samples relative to the current video block,
wherein in response to the neighboring chroma samples of the current video block being unavailable, the prediction values of the current video block are set to a default value, and
wherein the default value is equal to 1<<(BitDepth-1), wherein BitDepth represents the bit-depth of the chroma samples,
wherein F=Floor (M/2^i), wherein M is a number of the neighboring chroma samples used to derive the selected chroma samples in horizontal direction, or F=Floor (N/2^i), wherein N is a number of the neighboring chroma samples used to derive the selected chroma samples in vertical direction, i is equal to 2 or 3, and the Floor operation is used to obtain an integer part of a number,
wherein S=Max(1, Floor (M/2^j)), or S=Max(1, Floor (N/2^j)), j is equal to 1 or 2, and the Max operation is used to obtain a maximum of multiple numbers,
wherein the CCLM mode of the current video block is one of a first CCLM mode that derives the parameters of CCLM based on the left neighboring chroma samples and the above neighboring chroma samples, a second CCLM mode that derives the parameters of CCLM based on the left neighboring chroma samples and the below-left neighboring samples, and a third CCLM mode that derives the parameters of CCLM based on the above neighboring chroma samples and the above-right neighboring chroma sample,
wherein in response to the CCLM mode being the first CCLM mode and the above neighboring chroma samples being available, M is equal to W.
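The F and S derivation in the claim above reduces to two small formulas; a minimal sketch, assuming illustrative names (M may be replaced by N for the vertical direction):

```python
import math

def cclm_offset_and_step(num_samples, i=2, j=1):
    # F = Floor(M / 2^i): offset of the first selected neighbouring
    # chroma sample. S = Max(1, Floor(M / 2^j)): step between samples.
    F = math.floor(num_samples / 2 ** i)
    S = max(1, math.floor(num_samples / 2 ** j))
    return F, S
```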

US Pat. No. 11,115,654

ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD

Panasonic Intellectual Pr...


1. A decoder comprising:circuitry; and
memory,
wherein, using the memory, the circuitry:in a first operating mode,derives first motion vectors for a first block obtained by splitting a picture, and
generates a prediction image corresponding to the first block, with a bi-directional optical flow flag set to true, by referring to spatial gradients of luminance generated based on the first motion vectors; and

in a second operating mode,derives second motion vectors for a sub-block obtained by splitting a second block, the second block being obtained by splitting the picture, and
generates a prediction image corresponding to the sub-block, with the bi-directional optical flow flag set to false.



US Pat. No. 11,115,653

INTRA BLOCK COPY MERGE LIST SIMPLIFICATION

MediaTek Inc., Hsinchu (...


1. A video decoding method comprising:receiving data from a bitstream for a block of pixels to be decoded as a current block of a current picture of a video, wherein a plurality of spatially adjacent neighboring blocks of the current block are coded before the current block;
generating a list of merge candidates including intra picture candidates that are associated with motion information referencing pixels in the current picture, the intra picture candidates comprising spatial merge candidates that are associated with the plurality of spatially adjacent neighboring blocks of the current block, the list of merge candidates comprising at most two spatial merge candidates,
wherein whether a history-based motion vector prediction (HMVP) candidate is included in the list of merge candidates or not is dependent on whether at least one HMVP candidate is available, and whether a default merge candidate is included in the list of merge candidates or not is dependent on a number of the spatial merge candidates and the HMVP candidate in the list of merge candidates;
selecting a merge candidate from the generated list of merge candidates; and
decoding the current block by using the motion information of the selected merge candidate,
wherein generating the list of merge candidates comprises determining which merge candidate to include in the list based on a size of the current block.

US Pat. No. 11,115,652

METHOD AND APPARATUS FOR FURTHER IMPROVED CONTEXT DESIGN FOR PREDICTION MODE AND CODED BLOCK FLAG (CBF)

TENCENT AMERICA LLC, Pal...


1. A method for video decoding or encoding, the method comprising:determining whether at least one of a plurality of neighboring blocks in a video sequence is coded by an intra prediction mode which is different than an intra-inter prediction mode;
entropy coding a prediction mode flag of a current block by a first context in response to determining that any of the plurality of neighboring blocks is coded by the intra prediction mode; and
entropy coding the prediction mode flag of the current block by a second context in response to determining that none of the plurality of neighboring blocks are coded by at least the intra prediction mode.

US Pat. No. 11,115,651

QUALITY SCALABLE CODING WITH MAPPING DIFFERENT RANGES OF BIT DEPTHS

GE Video Compression, LLC...


1. A decoder for decoding a scalable data stream into which a picture is encoded, the scalable data stream comprising base layer data representing the picture with a first sample bit depth and enhancement layer data representing a prediction residual associated with a second sample bit depth that is higher than the first sample bit depth, the decoder comprising:a first sub-decoder for decoding the base layer data into a lower bit-depth reconstructed picture;
a second sub-decoder for decoding the enhancement layer data into the prediction residual;
a prediction module, which comprises hardware or a program executed using a computer, for providing a prediction of the picture based on the lower bit-depth reconstructed picture, the prediction associated with the second sample bit depth, wherein the prediction module is adapted to perform, on the lower bit-depth reconstructed picture, a mapping of samples of the picture based on the first sample bit depth, the second sample bit depth, a bit-shift operation, and an offset value that depends at least on the first sample bit depth to obtain the prediction of the picture; and
a reconstructor for reconstructing the picture with the second sample bit depth based on the prediction and the prediction residual.
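The prediction module's sample mapping (bit-shift plus an offset depending at least on the first sample bit depth) can be illustrated per sample. The particular offset chosen here, half of one low-bit-depth step, is an assumption; the claim only requires that some such offset exist.

```python
def predict_high_bit_depth(low_sample, low_depth, high_depth):
    """Map a base-layer sample to the enhancement-layer bit depth via a
    bit shift and an offset, as a sketch of the claimed prediction.
    The rounding offset below is assumed for illustration."""
    shift = high_depth - low_depth          # e.g. 8-bit -> 10-bit gives 2
    offset = (1 << (shift - 1)) if shift > 0 else 0
    return (low_sample << shift) + offset
```

The reconstructor would then add the decoded enhancement-layer residual to this prediction to recover the higher-bit-depth picture.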

US Pat. No. 11,115,650

SYSTEM AND METHOD FOR MONITORING VIDEO COMMUNICATION DEVICE

Acer Incorporated, New T...


1. A system for monitoring a video communication device, the system comprising:a video capturing module;
a detection circuit configured to detect an operation current input to the video capturing module; and
a control circuit electrically connected to the detection circuit and determining whether the video capturing module is activated according to a magnitude of the operation current,
wherein the video capturing module comprises:
the video communication device; and
a transmission interface configured to output video communication data generated by the video communication device,
wherein the control circuit periodically sends a request command to the transmission interface and determines a data traffic of the transmission interface according to a reply command transmitted back by the transmission interface,
wherein when the data traffic of the transmission interface increases, the control circuit determines that the video capturing module is activated.
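The monitor combines two signals: the magnitude of the operating current drawn by the video capturing module, and whether data traffic on the transmission interface increases between polls. A heuristic sketch, where the current threshold and the OR-combination of the two signals are assumptions for illustration:

```python
def camera_active(current_ma, traffic_prev, traffic_now,
                  current_threshold_ma=50.0):
    """Judge the video capturing module active when its operating
    current exceeds a (hypothetical) threshold, or when the
    transmission interface's data traffic increased since the last
    periodic request/reply exchange."""
    return current_ma > current_threshold_ma or traffic_now > traffic_prev
```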

US Pat. No. 11,115,649

SYSTEMS AND METHODS FOR SYNCHRONIZING EYEWEAR DEVICES WITH DISPLAYS

Facebook Technologies, LL...


1. A system comprising:a communication device configured to receive wireless synchronization information for display content;
a first lens through which a first user can view the display content;
a second lens through which a second user can view the display content;
an optical device; and
a controller configured to:determine, based on an output of the optical device, that the display content is within a field of view of the first lens and the second lens;
in response to determining that the display content is within the field of view of the first lens and the second lens, cause the first lens and second lens to selectively allow the display content to pass through the first lens and second lens based on the wireless synchronization information; and
in response to determining that the display content is within the field of view of the first lens and the second lens, increase a frame rate at which the display content is generated to maintain image quality for both the first user and the second user.



US Pat. No. 11,115,648

DISPLAY DEVICE, AND METHOD AND APPARATUS FOR ADJUSTING IMAGE PRESENCE ON DISPLAY DEVICE

HUAWEI TECHNOLOGIES CO., ...


1. A display device, comprising:a display device body, comprising:a first lens barrel, comprising:a first end;
a first lens coupled to the first end; and
a first edge region;

a second lens barrel, comprising:a second end;
a second lens coupled to the second end; and
a second edge region;

a longitudinal central axis comprising:a first side; and
a second side;

a first distance sensor symmetrically disposed on the first side and configured to measure a first distance between a left eyeball and the first lens, wherein the first distance sensor is disposed in the first edge region;
a second distance sensor symmetrically disposed on the second side and configured to measure a second distance between a right eyeball and the second lens, wherein the second distance sensor is disposed in the second edge region; and
a traverse central axis, wherein the first edge region comprises a first intersecting point along the traverse central axis that is close to the longitudinal central axis and a second intersecting point along the traverse central axis, wherein the second edge region comprises a third intersecting point along the traverse central axis that is close to the longitudinal axis and a fourth intersecting point along the traverse central axis, wherein the first distance sensor is disposed at the first intersecting point, and wherein the second distance sensor is disposed at the third intersecting point.


US Pat. No. 11,115,647

PRIVACY DISPLAY APPARATUS

RealD Spark, LLC, Beverl...


1. A privacy display device comprising:a directional backlight arranged to output light, wherein the directional backlight is arranged to provide switching between at least two different angular luminance profiles; and
a transmissive spatial light modulator arranged to receive output light from the backlight,wherein the spatial light modulator is arranged to modulate the output light from the backlight to provide an image that may be switched between at least two different angular contrast profiles,
wherein the spatial light modulator comprises a pixelated liquid crystal display comprising a liquid crystal pixel layer and pixel addressing electrodes arranged to provide in-plane electric fields to pixels of the pixelated liquid crystal display, and
wherein the spatial light modulator further comprises a liquid crystal bias layer arranged between the input polariser and output polariser of the pixelated liquid crystal display; and bias layer electrodes arranged to provide out-of-plane bias electric fields to the liquid crystal bias layer.


US Pat. No. 11,115,646

EXPOSURE COORDINATION FOR MULTIPLE CAMERAS

Woven Planet North Americ...


1. A method comprising, by a computing system:detecting a plurality of objects captured within an overlapping region between a first field of view associated with a first camera and a second field of view associated with a second camera;
determining a respective priority ranking for each object of the plurality of objects;
selecting an object from the plurality of objects based on the respective priority ranking for the object;
determining, for the first camera, a first lighting condition associated with the first field of view;
determining, for the second camera, a second lighting condition associated with the second field of view;
determining a shared exposure time for the selected object based on the first lighting condition and the second lighting condition; and
causing at least one image of the selected object to be captured using the shared exposure time.
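The steps above can be sketched as one planning function: rank the objects in the overlap region, pick the highest-priority one, and derive a single exposure time from the two lighting conditions. The 1/illuminance exposure model and the geometric-mean combination are assumptions for illustration, not recited in the claim.

```python
def plan_shared_exposure(objects, lux_cam1, lux_cam2):
    """Sketch of the claimed coordination: select the highest-priority
    object in the overlapping field of view, then compute one exposure
    time both cameras can share.
    objects: list of (name, priority) tuples; higher priority wins.
    lux_cam1/lux_cam2: illuminance measured for each camera's view."""
    target = max(objects, key=lambda o: o[1])
    # Assumed model: exposure time inversely proportional to illuminance.
    t1, t2 = 1.0 / lux_cam1, 1.0 / lux_cam2
    shared = (t1 * t2) ** 0.5   # geometric mean balances both cameras
    return target[0], shared
```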

US Pat. No. 11,115,645

GENERATING NOVEL VIEWS OF A THREE-DIMENSIONAL OBJECT BASED ON A SINGLE TWO-DIMENSIONAL IMAGE

ADOBE INC., San Jose, CA...


1. A computer-readable storage medium having instructions stored thereon, which, when executed by a processor of a computing device cause the computing device to perform actions comprising:receiving a source image that includes a first view of an object from a first viewpoint and includes a first portion of the object;
hallucinating a second portion of the object occluded in the first view of the object by generating values for masked pixels in an intermediate image of the object from a second viewpoint, the masked pixels corresponding to the second portion of the object occluded in the first view of the object;
generating a target image that includes a second view of the object from the second viewpoint, wherein the second view includes at least a sub-portion of the first portion of the object and the second portion of the object occluded in the first view, the second portion being predicted based on the generated values for the masked pixels; and
providing the target image.

US Pat. No. 11,115,644

VIDEO GENERATION METHOD AND APPARATUS USING MESH AND TEXTURE DATA

Sony Interactive Entertai...


1. A content generation apparatus comprising:a data generation unit operable to generate mesh and texture data for a virtual scene; and
a frame encoding unit operable to encode one or more frames comprising at least a portion of the generated mesh and texture data,
wherein the mesh and texture data that is encoded contains information that may be used to describe any of a plurality of different viewpoints within the same virtual scene, and
wherein the frame encoding unit is operable to encode, in one or more frames, information indicating a gearing ratio between a request for a change of viewpoint and a change in viewpoint to be displayed.

US Pat. No. 11,115,643

ALIGNMENT SYSTEM

XION GMBH


1. A method for adjusting a medical stereoscopic optical system comprising:arranging a predetermined reference object having known visible reference object features in a field of view captured by the optical system;
capturing a first and a second stereoscopic half-image of the predetermined reference object by means of a sensor system;
determining position information of at least two visible reference object features of the predetermined reference object that are imaged in the first and the second stereoscopic half-images;
ascertaining an optical distortion of the first and the second stereoscopic half-images from the position information of the at least two visible reference object features;
adjusting an image processor, which is at least indirectly connected to the sensor system, by setting a geometric rectification in a manner corresponding to the optical distortion ascertained;
wherein, after said adjusting the image processor, the method then comprises repeatedly carrying out the following steps during operation of the optical system for capturing an object to be examined:

arranging the object to be examined in the field of view captured by the optical system;
capturing position data containing a position deviation determined by means of at least one reference point of the first and the second stereoscopic half-images provided by the optical system of the object to be examined relative to each other, wherein said position deviation is a deviation of an image position of said object to be examined with respect to an ideal position of said object in a completely centered optical system; and
adjusting the image processor in a manner corresponding to the captured position deviation between the first and second stereoscopic half-images by setting a corresponding displacement of the first and the second stereoscopic half-images by means of the image processor to perform a correction between the mutual position of the first and second stereoscopic half-images among one another.

US Pat. No. 11,115,642

INTEGRATED VISION MODULE COMPONENT AND UNMANNED AERIAL VEHICLE

SZ DJI TECHNOLOGY CO., LT...


1. An integrated vision module component, comprising:a bracket;
a first vision module;
a second vision module;
wherein:
the bracket includes a body and a lead-out portion, the lead-out portion being connected to an upper portion of the body;
the first vision module and the second vision module are mounted on the bracket; the first vision module includes:
a first lens and a second lens mounted on the bracket;
a first flexible circuit board including a first segment, a second segment substantially perpendicular to the first segment, and a third segment, the second segment connecting the first segment and the third segment;
a first image sensor mounted on the first segment and disposed on an image side of the first lens; and
a second image sensor mounted on the second segment and disposed on an image side of the second lens;
the second vision module includes a second flexible circuit board, and a third image sensor and a fourth image sensor mounted on the second flexible circuit board;
the first image sensor and the third image sensor form a first binocular vision sensor;
the second image sensor and the fourth image sensor form a second binocular vision sensor;
the first image sensor, the second image sensor, the third image sensor, and the fourth image sensor are mounted on the body;
a mounting groove is disposed on the lead-out portion, and the third segment of the first flexible circuit board and the second flexible circuit board are at least partially located in the mounting groove; and
the part of the third segment of the first flexible circuit board located in the mounting groove and the part of the second flexible circuit board located in the mounting groove are stacked.

US Pat. No. 11,115,641

METHOD OF TRANSMITTING OMNIDIRECTIONAL VIDEO, METHOD OF RECEIVING OMNIDIRECTIONAL VIDEO, DEVICE FOR TRANSMITTING OMNIDIRECTIONAL VIDEO, AND DEVICE FOR RECEIVING OMNIDIRECTIONAL VIDEO

LG ELECTRONICS INC., Seo...


16. A method for processing 360 degree video in a digital receiver, comprising:receiving at least one segment including one or more media tracks related to the 360 degree video and audio data, and timed metadata track for specifying a sphere region,
wherein the timed metadata track includes first information for specifying a first rotation value related to a center of the sphere region based on a yaw axis, second information for specifying a second rotation value related to the center of the sphere region based on a pitch axis, third information for specifying a first range related to the sphere region based on the yaw axis, and fourth information for specifying a second range related to the sphere region based on the pitch axis,
wherein the timed metadata track is linked to the one or more media tracks;
decoding the 360 degree video;
rendering the 360 degree video; and
displaying the 360 degree video.
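The four metadata fields above specify a sphere region by centre rotations (yaw, pitch) and angular ranges about each axis. A small sketch of how a receiver might turn them into region bounds, assuming the ranges are symmetric about the centre (an OMAF-style convention, not stated in the claim):

```python
def sphere_region_bounds(centre_yaw, centre_pitch, yaw_range, pitch_range):
    """Derive min/max yaw and pitch of the signalled sphere region from
    the timed-metadata fields: centre rotations about the yaw and pitch
    axes plus a range along each axis. Symmetry about the centre is an
    assumed interpretation for illustration. Angles in degrees."""
    return ((centre_yaw - yaw_range / 2, centre_yaw + yaw_range / 2),
            (centre_pitch - pitch_range / 2, centre_pitch + pitch_range / 2))
```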

US Pat. No. 11,115,640

MULTI-SENSOR VIDEO FRAME SYNCHRONIZATION APPARATUS AND METHODS

Texas Instruments Incorpo...


1. A video controller, comprising:a processor; and
a non-transitory computer readable storage medium storing a program for execution by the processor, the program including instructions to:receive a first start-of-frame indication that has a first time of receipt and is associated with a first image sensor;
receive a second start-of-frame indication that has a second time of receipt and is associated with a second image sensor;
calculate a time difference between the first time of receipt associated with the first image sensor and the second time of receipt associated with the second image sensor; and
adjust a first frame period determining parameter associated with the first image sensor to:decrease the time difference between the first time of receipt associated with the first image sensor and the second time of receipt associated with the second image sensor, in response to determining that the time difference between the first time of receipt associated with the first image sensor and the second time of receipt associated with the second image sensor is greater than or equal to a frame synchronization hysteresis threshold value; and
be equal to a second frame period determining parameter of the second image sensor, in response to determining that the time difference between the first time of receipt associated with the first image sensor and the second time of receipt associated with the second image sensor is less than the frame synchronization hysteresis threshold value.
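The controller logic above is a hysteresis loop: while the start-of-frame time difference is at or above the threshold, nudge the first sensor's frame period to shrink it; once below the threshold, lock the first sensor's period to the second's. A sketch, where the per-iteration correction step is an assumption:

```python
def adjust_frame_period(period1, period2, t1, t2, hysteresis):
    """One iteration of the claimed synchronization loop.
    period1/period2: frame periods of sensors 1 and 2 (seconds).
    t1/t2: times of receipt of the start-of-frame indications.
    hysteresis: frame synchronization hysteresis threshold (seconds).
    Returns the adjusted frame period for sensor 1."""
    diff = abs(t1 - t2)
    if diff >= hysteresis:
        # Shorten or lengthen sensor 1's period to close the gap;
        # the 1 ms cap on the correction step is assumed.
        step = min(diff, 0.001)
        return period1 - step if t1 > t2 else period1 + step
    return period2  # converged: match sensor 2's frame period exactly
```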



US Pat. No. 11,115,639

SYSTEM AND METHOD OF GENERATING PANORAMIC VIRTUAL REALITY IMAGE USING A PLURALITY OF PHOTOGRAPH IMAGES

Korea Electronics Technol...


1. A system for generating a 360-degree panoramic virtual reality (VR) image using a plurality of photograph images, comprising:a plurality of image capturing devices configured to capture and transmit the plurality of photograph images, respectively; and
an image conversion device in communication with the plurality of image capturing devices,
wherein the plurality of image capturing devices comprise a first camera and a second camera, wherein the first camera is configured to take a first image using first image characteristic information, wherein the second camera is configured to take a second image using second image characteristic information that is different from the first image characteristic information such that the first and second images have different image attributes, wherein each of the first image characteristic information and the second image characteristic information comprises a viewing angle, a zooming ratio, and a resolution, and wherein the image conversion device is configured to:receive the first image from the first camera and receive the second image from the second camera;
prior to receiving the first image and the second image, store the first image characteristic information and the second image characteristic information in a local memory of the image conversion device;
retrieve the first image characteristic information and the second image characteristic information from the local memory of the image conversion device;
calibrate the first image using both the retrieved first image characteristic information and the retrieved second image characteristic information to obtain a first calibrated image without communicating with the first camera;
calibrate the second image using both the retrieved first image characteristic information and the retrieved second image characteristic information to obtain a second calibrated image without communicating with the second camera; and
stitch the first calibrated image and the second calibrated image to generate a single 360-degree panoramic VR image configured to be viewed from all directions.


US Pat. No. 11,115,638

STEREOSCOPIC (3D) PANORAMA CREATION ON HANDHELD DEVICE

FotoNation Limited, Galw...


1. A method for generating a stereoscopic panorama image, the method comprising:acquiring multiple at least partially overlapping image frames of a scene from an optic and imaging sensor of a portable imaging device panned across the scene;
extracting a first set of image pieces from a first subset of the multiple at least partially overlapping image frames, each image piece of the first set of image pieces being from a distinct frame of the first subset of the multiple at least partially overlapping image frames than each other image piece of the first set of image pieces;
constructing a first panoramic image from the first set of image pieces;
extracting a second set of image pieces from a second subset of the multiple at least partially overlapping image frames, each image piece of the second set of image pieces being from a distinct frame of the second subset of the multiple at least partially overlapping image frames than each other image piece of the second set of image pieces, the first subset of the multiple at least partially overlapping image frames being distinct from the second subset of the multiple at least partially overlapping image frames;
constructing a second panoramic image from the second set of image pieces;
determining displacements between an image piece of the first set of image pieces and a corresponding image piece of the second set of image pieces;
forming, based at least in part on the displacements, a stereoscopic panorama image using the first panoramic image and the second panoramic image; and
storing, transmitting or displaying the stereoscopic panorama image, or combinations thereof.
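The two panoramas form a stereo pair because each is assembled from strips taken at horizontally displaced positions within the panned frames. The strip geometry can be sketched as below; the centring convention and pixel baseline are assumptions for illustration.

```python
def split_stereo_strips(frame_width, strip_width, baseline_px):
    """For one frame of a panned sweep, return the (x_start, x_end)
    column ranges of a 'left-eye' strip and a 'right-eye' strip,
    displaced symmetrically about the frame centre by baseline_px.
    Collecting each strip across frames yields the two panoramas
    whose displacements give the stereoscopic effect."""
    centre = frame_width // 2
    left_x = centre - baseline_px // 2 - strip_width // 2
    right_x = centre + baseline_px // 2 - strip_width // 2
    return (left_x, left_x + strip_width), (right_x, right_x + strip_width)
```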

US Pat. No. 11,115,637

IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...


1. An image processing apparatus comprising at least one processor and/or circuit configured to function as following units:an identification unit configured to identify a target color that a pixel of a shot image corresponding to an object having a specific color is to have as a result of white balance processing performed on the shot image;
a determination unit configured to determine a target region of a virtual light source in the shot image;
an image processing unit configured to perform image processing for adding an effect of virtual light irradiation by the virtual light source to the target region; and
a control unit configured to control a color of the virtual light source such that a color close to the target color approaches the target color in the target region.

US Pat. No. 11,115,636

IMAGE PROCESSING APPARATUS FOR AROUND VIEW MONITORING

LG INNOTEK CO., LTD., Se...


1. An image processing apparatus, comprising:a deserializer configured to receive respective Bayer image information pieces acquired from a plurality of cameras; and
an image processor configured to:process Bayer data processed and output by the deserializer to produce one stitched image from a plurality of Bayer images acquired from the plurality of cameras,
after producing the one stitched image, perform a demosaicing operation on the one stitched image to generate a stitched demosaiced image,
perform an image signal processing (ISP) operation on the stitched demosaiced image to generate an output stitched image that is demosaiced and corrected, and
adjust an auto exposure and an auto white balance of one or more of the plurality of cameras based on the output stitched image,

wherein the output stitched image is output, and
wherein the ISP operation includes edge enhancement and at least one of gamma correction, color correction, auto exposure correction or auto white balance adjustment.

US Pat. No. 11,115,635

COMPUTER IMPLEMENTED COLOR SPACE CODING THAT DEFINES COLOR SPACE WITH FRACTAL GEOMETRY


1. A computer implemented color space encoding system comprising:a data repository encoding data defining a fractal shape and a color space represented by the fractal shape whereby colors in the color space correspond to positions on the fractal shape, the fractal shape having a fractal coordinate system with fractal coordinates for specifying the positions on the fractal shape and mapping the colors in the color space to the positions on the fractal shape;
an input interface configured to receive a color measurement;
a processor configured to execute computer program code for executing a fractal coordinate encoder, including:
computer program code configured to obtain the color measurement from the input interface;
computer program code configured to map the obtained color measurement to a color in the color space represented by the fractal shape by determining a position on the fractal shape corresponding to the obtained color measurement using the fractal coordinates; and,
computer program code configured to output, via an output interface, the fractal coordinates specifying the positions on the fractal shape corresponding to the obtained color measurement.

US Pat. No. 11,115,634

PROJECTION IMAGE ADJUSTMENT SYSTEM AND PROJECTION IMAGE ADJUSTMENT METHOD

PANASONIC INTELLECTUAL PR...


1. A projection image adjustment system comprising:a pattern generator configured to decompose a projection test pattern into a plurality of binary test patterns individually corresponding to a plurality of first attributes to be expressed in a binary manner;
a projection display apparatus configured to individually project a projection image and the plurality of binary test patterns;
a pattern combining unit configured to generate a plurality of binary captured images by individually binarizing a plurality of captured images generated by individually capturing the plurality of projected binary test patterns and combine the plurality of binary captured images into a combined captured image; and
a calculator configured to transform the projection image by using a relationship between positions of a plurality of feature points in the combined captured image and positions of the plurality of feature points in the projection test pattern,
wherein the projection test pattern includes a plurality of divided regions,
wherein each of the plurality of regions includes a second attribute to be expressed by a combination of the plurality of first attributes, and
wherein each of the plurality of feature points is a point shared by three or more adjacent regions among the plurality of regions.

US Pat. No. 11,115,633

METHOD AND SYSTEM FOR PROJECTOR CALIBRATION

DISNEY ENTERPRISES, INC.,...


1. A method of calibrating a projector comprising:receiving by a processing element a first calibration image of a first calibration pattern projected onto a projection surface;
generating a perspective projection matrix by the processing element by modeling the projector as a pinhole device with a pinhole approximation to provide a first calibration approximation based on the first calibration image and the first calibration pattern; and
determining by the processing element a two-dimensional non-linear mapping function to correct inaccuracies in the first calibration approximation resulting from the pinhole approximation by analyzing the first calibration image to compensate for distortion characteristics inherent to the projector.

US Pat. No. 11,115,632

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, PROGRAM, AND PROJECTION SYSTEM

SONY CORPORATION, Tokyo ...


1. An image processing device comprising:a generator configured to generate a second equidistant cylindrical image whose center pixel is equal to a center pixel of each of projection regions of plural projectors in a first equidistant cylindrical image and that includes pixels within an angle range including the each of the projection regions, as an image used to generate each of projection images that are projected from the plural projectors;
a resolution converter configured to generate a low-resolution equidistant cylindrical image by lowering resolution of the first equidistant cylindrical image; and
a first projection frontal position adjustment section configured to adjust a projection frontal position using a display of the low-resolution equidistant cylindrical image,
wherein the generator is further configured to generate the second equidistant cylindrical image, based on the each of the projection regions in the first equidistant cylindrical image after adjustment of the projection frontal position, and
wherein the generator, the resolution converter, and the first projection frontal position adjustment section are each implemented via at least one processor.

US Pat. No. 11,115,630

CUSTOM AND AUTOMATED AUDIO PROMPTS FOR DEVICES

Amazon Technologies, Inc....


1. A method comprising:receiving audio prompt data from a user device;
receiving, from the user device, a request to associate the audio prompt data with an object;
storing first identifier data associated with the audio prompt data;
storing second identifier data associated with the object;
receiving first image data generated by an electronic device;
determining that the first image data represents the object;
based at least in part on the determining that the first image data represents the object, selecting the audio prompt data; and
sending the audio prompt data to the electronic device.
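The claimed flow pairs stored identifiers (prompt and object) so that recognizing the object in image data selects the prompt. A minimal sketch; the dictionary-backed store and the identifier strings are hypothetical:

```python
class PromptStore:
    """Hypothetical store following the claimed flow: associate audio
    prompt data with an object, then select that prompt when the object
    is determined to be represented in received image data."""
    def __init__(self):
        self._by_object = {}

    def associate(self, object_id, prompt_id):
        # Store the identifier pair (second identifier -> first identifier).
        self._by_object[object_id] = prompt_id

    def prompt_for(self, recognised_object_id):
        # Select the prompt for a recognised object; None when no
        # association was stored.
        return self._by_object.get(recognised_object_id)
```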

US Pat. No. 11,115,629

CONFIRMING PACKAGE DELIVERY USING AUDIO/VIDEO RECORDING AND COMMUNICATION DEVICES

Amazon Technologies, Inc....


1. A non-transitory computer readable medium having instructions stored thereon that, upon execution by a client device, cause the client device to perform operations comprising:receiving, at the client device, a signal indicative of a request to begin a two-way audio communication between the client device and a network-connectable audio/video recording and communication device (“A/V device”);
receiving, via a user interface of the client device, a first input indicating an acceptance of the request to begin the two-way audio communication;
receiving, via the user interface during the two-way audio communication, a second input indicating that a package is being delivered to a building associated with the A/V device;
based on the receiving of the second input indicating that the package is being delivered, initiating a delivery confirmation process at the client device, wherein the delivery confirmation process comprises:receiving, at the client device, information about the package comprising at least one of a description of contents of the package, an identity of a sender of the package, a tracking number associated with the package, or a delivery address of the package;
displaying, on a display of the client device, the information about the package;
receiving, via the user interface, an electronic signature indicating acceptance of delivery of the package; and
transmitting, by the client device, data representative of the electronic signature to a delivery service server.


US Pat. No. 11,115,628

MOBILE SURVEILLANCE UNIT

SitePro, Inc., Lubbock, ...


1. A surveillance system to monitor deployments of fluid-handling devices, comprising:a trailer having wheels, and a mast;
a camera coupled to the mast;
a wireless modem communicatively coupled to the camera and operative to transmit video captured by the camera;
one or more remote computers operative to remotely receive and store data describing operation of a fluid-handling device collocated with the trailer and to send instructions by which a user computing device of the user obtains the video and an indication of the data describing operation of the fluid-handling device to a user in response to a request from the computing device of the user,
wherein:sending instructions by which a user computing device of the user obtains the video comprises sending instructions to the computing device of the user to establish a peer-to-peer connection with the wireless modem by which an instance of the video is transmitted from the camera to the computing device of the user without at least part of the instance of video sent to the computing device of the user passing through the one or more remote computers;

a wireless access point coupled to the wireless modem and operative to provide a local area wireless network via the wireless modem; and
one or more processors and memory coupled to the wireless access point, wherein the memory stores instructions operative to perform operations comprising:receiving a request to access the local area network from a mobile device collocated with the fluid-handling device;
authenticating the collocated mobile device by determining that the collocated mobile device is associated with a data access account; and
granting access to the local area network in response to authenticating the collocated mobile device,

wherein:the one or more remote computers comprises memory storing instructions that when executed cause the one or more remote computers to perform operations comprising:receiving a request from a plurality of different user devices for video from the camera;
receiving a single copy of the video from the camera via the wireless modem and the Internet;
creating multiple copies of the video; and
sending respective instances of the multiple copies of the video to each of the different user devices; and

the one or more remote computers are operative to send one or more web interfaces to the computing device of the user by which the user controls the camera and the fluid-handling device.


US Pat. No. 11,115,627

REAL TIME CAMERA MAP FOR EMERGENCY VIDEO STREAM REQUISITION SERVICE

EAGLE EYE NETWORKS, INC.,...


1. A method of operation to provide at least one transformed video stream from at least one of a publicly owned and a privately owned camera bearing on an in-scope location vicinity to at least one responding emergency service agency (ESA), the method comprising:at a responding ESA,determining an identifier for a time and location vicinity for an emergency incident,
assigning a display terminal to at least one responding agent,
transmitting a requisition for transformed video streams in-scope to said location vicinity to a surveillance security server (video server);
upon receipt of a link to a time-bound application programming interface instantiated in a server,
relaying among the video server and at least one assigned display terminal;

at said video server,maintaining a geo-location macro map of cameras bearing on at least one of location vicinity;

upon request from an authenticated ESA display terminal,transmitting a zoomable and selectable in-scope map of cameras bearing on said location vicinity;

upon request from an authenticated ESA display terminal,activating a time-bound application programming interface (API) wherein activating a time-bound application programming interface comprises storing a virtual machine image of said API into a processor core at a first time and purging said virtual machine image at a second time;
transmitting a link for operation of said API to said display terminal;
fulfilling said API operations by optionally transforming and transmitting video streams bearing on said location vicinity;
maintaining camera owner/operator access permissions, contact information, and default transformation settings;
notifying each owner/operator of cameras in-scope of a request or demand for transformed video streams from certain cameras; and,
requesting additional access to cameras when an emergency is coming toward the owner/operator's facility or when a suspect may be hiding on their property;
wherein transformation settings include optional synthetic depth of focus/depth of field to enable selective blurring outside of the public thoroughfare and public lands, blurring within the curtilage of private properties, and hiding background and other intransigent detail; and
wherein, access permissions include display terminals associated with neighbors, security service providers, owner's relations, ESA, location indicia ranges, time of day, day of week, time-bounds, classes of ESA, named services, and whether video streams from cameras bearing solely on interior locations are transmitted.
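The time-bound API lifecycle in the claim (instantiated at a first time, purged at a second time) can be sketched as a validity window keyed by the link. The dict entry stands in for the stored virtual machine image; names and the lifetime model are assumptions for illustration.

```python
# Illustrative sketch of a "time-bound API": an activation record is stored at
# a first time and purged at a second time; fulfillment requests outside the
# window fail. The VM-image/processor-core details are simplified away.

class TimeBoundAPI:
    def __init__(self):
        self._instances = {}

    def activate(self, link, now, lifetime):
        # "storing a virtual machine image of said API ... at a first time"
        self._instances[link] = (now, now + lifetime)
        return link

    def purge_expired(self, now):
        # "purging said virtual machine image at a second time"
        self._instances = {l: w for l, w in self._instances.items() if w[1] > now}

    def fulfill(self, link, now):
        window = self._instances.get(link)
        return window is not None and window[0] <= now < window[1]

api = TimeBoundAPI()
link = api.activate("link-1", now=0, lifetime=10)
ok_inside = api.fulfill(link, now=5)
api.purge_expired(now=11)
ok_after = api.fulfill(link, now=12)
```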


US Pat. No. 11,115,626

APPARATUS FOR VIDEO COMMUNICATION

Huddle Room Technology S....


1. A local device for video communication comprising:processing circuitry including a processor and a storage device containing instructions that, when executed by the processor, implement:
a receiver that receives a gesture, a framing command and video signals from a plurality of local mobile devices associated with a plurality of participants in a video conference,
a generator, operatively connected to said receiver, that
changes a mode of generation of an output video communication stream dependent on a recognition of a speaker, among the plurality of participants, based on the framing command and the gesture,
combines the video signals received from the plurality of local mobile devices into a single video communication stream representing a patchwork of moving images captured by the plurality of local mobile devices, and
generates the output video communication stream based on the video signals received from a video camera of the local device and a plurality of mobile device video cameras associated with the plurality of local mobile devices to create the single video communication stream,
a transmitter that wirelessly transmits the output video communication stream to a local processing device executing video conferencing software,
wherein the apparatus is configured to receive from the local processing device an output signal representing video images displayable by a display of the local processing device when executing the video conferencing software, and
a sharing device that shares with a screen the output signal representing the video images displayable by the display of the local processing device.
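The "patchwork of moving images" combination can be sketched as tiling equally sized frames from the mobile-device cameras into one output frame. Frames are modeled here as lists of pixel rows; a real generator would composite decoded video frames.

```python
# Hedged sketch: combine per-device frames into a single patchwork frame by
# tiling them into a grid with a fixed number of columns. All tiles are assumed
# equal in size, which the patent does not require.

def patchwork(frames, cols):
    """Tile equally sized frames (lists of pixel rows) into a grid."""
    tile_h = len(frames[0])
    out = []
    for r in range(0, len(frames), cols):
        band = frames[r:r + cols]
        for y in range(tile_h):
            out.append("".join(tile[y] for tile in band))
    return out

a = ["AA", "AA"]; b = ["BB", "BB"]; c = ["CC", "CC"]; d = ["DD", "DD"]
grid = patchwork([a, b, c, d], cols=2)
```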

US Pat. No. 11,115,625

POSITIONAL AUDIO METADATA GENERATION

CISCO TECHNOLOGY, INC., ...


8. An apparatus comprising:a camera;
a microphone array; and
a processor coupled to the microphone array and the camera, and configured to:divide a video output of the camera into one or more tracking sectors;
detect a head position for each participant of one or more participants in the video output of the camera;
determine, for each detected head position, whether the detected head position is located within a tracking sector of the one or more tracking sectors of the video output of the camera;
determine one or more active sound source positions of one or more actively speaking participants of the one or more participants based on sound from the one or more actively speaking participants being detected by the microphone array;
determine whether any of the one or more active sound source positions are located in the tracking sector of the one or more tracking sectors; and
if any of the one or more active sound source positions are located in the tracking sector of the one or more tracking sectors, update positional audio metadata for the tracking sector based on the one or more active sound source positions located in the tracking sector and the detected head positions located within the tracking sector.
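The sector logic of claim 8 can be sketched by bucketing head positions and active sound-source positions into sectors and updating a sector's metadata only when an active source lands in a sector that contains heads. Positions are simplified to horizontal angles in degrees, and the averaging update is an assumption, not Cisco's formula.

```python
# Hedged sketch: update positional audio metadata per tracking sector from the
# active sound sources and the detected head positions within that sector.

def sector_of(position, sector_width):
    return int(position // sector_width)

def update_metadata(head_positions, source_positions, sector_width, metadata):
    heads_by_sector = {}
    for p in head_positions:
        heads_by_sector.setdefault(sector_of(p, sector_width), []).append(p)
    for src in source_positions:
        s = sector_of(src, sector_width)
        if s in heads_by_sector:  # active source located in a tracking sector
            heads = heads_by_sector[s]
            # illustrative update: blend source position with head positions
            metadata[s] = sum(heads + [src]) / (len(heads) + 1)
    return metadata

metadata = update_metadata([10, 40], [12], sector_width=30, metadata={})
```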


US Pat. No. 11,115,624

METHODS AND SYSTEMS FOR JOINING A CONFERENCE

Salesloft, Inc., Atlanta...


1. A method for enabling a recording bot server to join a video conference, comprising:finding a first function, wherein the first function interacts with a user interface element used by the recording bot server to join a video conference hosted by a human conference host computer for at least two human conference participants;
replacing the first function with a second function; wherein the second function is a frozen code version used by the recording bot server to join the video conference such that the recording bot server does not interact with the user interface element, wherein the frozen code version comprises a modified participant code version or a code version that exposes underlying functions of a participant code version; and
using the second function for the recording bot server to join the video conference without interacting with a document object model (DOM).
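The find-and-replace of the join function reads like classic monkey-patching: the first function (which drives a UI element) is swapped for a frozen second function that joins directly. The sketch below uses a stand-in client class, not any real conferencing vendor's API.

```python
# Illustrative monkey-patch in the spirit of the claim: replace the function
# that interacts with a UI element with a "frozen" function that exposes the
# underlying join, so the bot never touches the DOM.

class ConferenceClient:
    def join(self):
        # first function: would normally click a DOM "Join" button
        return "joined via UI element"

def frozen_join(self):
    # second function: underlying join without any DOM interaction
    return "joined without DOM"

original_join = ConferenceClient.join     # find the first function
ConferenceClient.join = frozen_join       # replace it with the second
result = ConferenceClient().join()
```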

US Pat. No. 11,115,623

SYSTEMS AND METHODS FOR ASYMMETRIC IMAGE SPLITTER WITH LINE MARK MEMORY

Maxim Integrated Products...


1. A method comprising:receiving a multi-streaming video comprising super-frame video images, wherein each super-frame video image includes a first video image and a second video image, and wherein a height of the first video image is greater than a height of the second video image;
adjusting vertical asymmetry of the second video image to the same height as the first video image by adding padding to the second video image;
utilizing an asymmetric image splitter configured to split the super-frame video images into two separate video images; and
marking each line of the second video image to determine which lines are padded and discarded, and which lines are data to be displayed using a line mark memory,
wherein a memory requirement for the asymmetric image splitter is represented by a difference between an input time and an output time of every individual pixel of the received multi-streaming video, wherein the memory requirement is reduced when the received multi-streaming video is generated via spreading the data by skipping lines in time.
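The padding and line-mark bookkeeping can be sketched directly: the shorter second image is padded to the first image's height, and a line-mark memory records which lines are real data versus padding to discard after splitting. The list-of-strings frame model is an assumption for illustration.

```python
# Hedged sketch of the line-mark memory: pad the second image to the height of
# the first, mark each line (True = display data, False = padding), then use
# the marks to discard padded lines when splitting the super-frame.

def pad_and_mark(first, second):
    pad_lines = len(first) - len(second)
    padded = second + [None] * pad_lines                # vertical padding added
    line_marks = [line is not None for line in padded]  # line mark memory
    return padded, line_marks

def split_second(padded, line_marks):
    # keep only lines marked as data; padded lines are discarded
    return [line for line, mark in zip(padded, line_marks) if mark]

first = ["f0", "f1", "f2", "f3"]
second = ["s0", "s1"]
padded, marks = pad_and_mark(first, second)
recovered = split_second(padded, marks)
```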

US Pat. No. 11,115,622

APPARATUS AND METHOD FOR TRANSCEIVING BROADCAST SIGNAL

LG ELECTRONICS INC., Seo...


1. A method for transmitting a broadcast signal including a broadcast service, the method comprising:generating Diff information based on a template and a signaling instance,
wherein the template is pre-shared information for the signaling instance between a transmission system and a reception system,
the Diff information representing a difference between the template and the signaling instance, the Diff information including an element or attribute value for updating the pre-shared information,
one or more components are transmitted by using a ROUTE (Real-Time Object Delivery over Unidirectional Transport) protocol; and
transmitting the broadcast signal including the one or more components, service list information and the Diff information,
wherein the service list information includes:
first URL (Uniform Resource Locator) signaling information indicating a URL for acquiring a first ESG (Electronic Service Guide) or a first service layer signaling file for all broadcast services when the first ESG or the first service layer signaling file is available via a broadband,
capability information indicating capability required for processing of a specific broadcast service,
broadcast signaling location information including address information required for acquiring service layer signaling information for the specific broadcast service, and
second URL signaling information indicating a URL for acquiring a second ESG or a second service layer signaling file for the specific broadcast service when the second ESG or the second service layer signaling file is available via the broadband.
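The Diff mechanism (pre-shared template plus transmitted differences) can be sketched with flat dictionaries standing in for the actual signaling tables: only values that differ from the template travel over the air, and the receiver reconstructs the full signaling instance.

```python
# Hedged sketch of Diff information: the sender derives the difference between
# the pre-shared template and the signaling instance; the receiver applies the
# element/attribute values in the Diff to update its pre-shared information.

def make_diff(template, instance):
    return {k: v for k, v in instance.items() if template.get(k) != v}

def apply_diff(template, diff):
    instance = dict(template)
    instance.update(diff)   # update pre-shared information with Diff values
    return instance

template = {"serviceId": 1, "url": "http://a.example", "capability": "HEVC"}
instance = {"serviceId": 1, "url": "http://b.example", "capability": "HEVC"}
diff = make_diff(template, instance)
rebuilt = apply_diff(template, diff)
```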

US Pat. No. 11,115,621

EMBEDDING VIDEO CONTENT IN PORTABLE DOCUMENT FORMAT FILES

Nautica Consulting Servic...


1. A data processing system for producing video-embedded PDF files from an input video file having a file size and a playback length, the system comprising:one or more processors;
a memory including one or more digital storage devices;
a plurality of instructions stored in the memory and executable by the one or more processors to:reduce the file size of the input video file while maintaining the playback length of the input video file;
convert the input video file to a standard video format;

wherein converting the input video file includes:using a first video converter tool to convert the input video file to a selected video format and storing a resulting video file as a first temporary video file in the memory; and

wherein reducing a file size of the input video file includes:comparing a file size of the first temporary video file to a preselected threshold, and, in response to the file size being greater than the preselected threshold, reducing a video bit rate of the first temporary video file using a second video converter tool, and
storing a resulting file as a second temporary video file in the memory; and
producing an output PDF file that is readable by a PDF reader program, wherein the output PDF file includes contents of the input PDF file, and the second temporary video file is embedded in the output PDF file and is playable within the PDF reader program and satisfies a maximum file size limit.
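The size-reduction control flow (convert, compare against a threshold, reduce bit rate if too large) can be sketched with the converter tools modeled as simple functions; a real system would invoke external transcoders, and the 0.5 reduction ratio is an assumption for illustration.

```python
# Hedged sketch of the claimed pipeline: a first converter tool produces a
# first temporary file; if it exceeds the preselected threshold, a second
# converter tool reduces the video bit rate to produce the embeddable file.
# File sizes are plain integers to keep the control flow visible.

def first_converter(size):
    return size                     # format conversion: size roughly preserved

def second_converter(size, ratio):
    return int(size * ratio)        # bit-rate reduction shrinks the file

def prepare_video(input_size, threshold, ratio=0.5):
    temp1 = first_converter(input_size)          # first temporary video file
    if temp1 > threshold:                        # compare to threshold
        temp2 = second_converter(temp1, ratio)   # reduce video bit rate
    else:
        temp2 = temp1                            # already under the limit
    return temp2                                 # second temporary video file

embedded_size = prepare_video(input_size=100, threshold=60)
```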


US Pat. No. 11,115,620

SYSTEM FOR FACILITATING INTERACTIONS BETWEEN CONSUMERS AND INDIVIDUALS HAVING MARKETABLE PUBLIC RECOGNITION

TRAINA INTERACTIVE CORP.,...


1. A system, comprising:a processor programmed to:
receive, from an application of the system executing at a third-party social media site, a request to access an application canvas page associated with a first user that offers for sale a plurality of items, the application canvas page being integrated into the third-party social media site;
determine that a second user that made the request through the third-party social media site has not provided permission to access a social media account of the second user;
transmit, to the application, an indication of the plurality of items for sale and an indication that access to additional information regarding at least a first item, from among the plurality of items, is conditioned upon permission to access the social media account of the second user;
determine that the permission has been granted by the second user; and
responsive to the determination that the permission has been granted by the second user, transmit the additional information.

US Pat. No. 11,115,619

ADAPTIVE STORAGE BETWEEN MULTIPLE CAMERAS IN A VIDEO RECORDING SYSTEM

Axis AB, Lund (SE)


1. A method for storing video data in a surveillance system including three or more cameras and a storage service comprising a storage space, the method comprising:partitioning the storage space into an initial set of allotted portions for storing video data captured over a time period from each of the three or more video cameras, wherein each allotted portion in the initial set of allotted portions is configured to store video data from one camera of the three or more cameras;
for each camera, setting a video quality value for encoding video captured by the camera;
determining, for each camera and at discrete time intervals, whether maintaining the video quality value would cause the camera to output an expected amount of video data that would exceed the allotted portion of storage space for the camera over the time period;
in response to determining for at least one camera that the expected amount of video data would exceed the allotted portion of storage space, adjusting the video quality value to a video quality value representing a reduced video quality relative to the video quality represented by the current video quality value; and
in response to determining that the reduced video quality value from a camera represents a video quality that falls below a video quality represented by a video quality threshold value, re-partitioning the storage space into a new set of allotted portions, wherein increased storage space is allotted to the camera having a video quality that falls below the video quality represented by a video quality threshold value and decreased storage space is allotted to at least one other camera.
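The adaptive loop above (lower a camera's quality when its expected output would overflow its allotment; re-partition when quality would fall below the threshold) can be sketched as follows. The output model, step sizes, and donor choice are illustrative assumptions; the patent prescribes none of these formulas.

```python
# Hedged sketch of adaptive storage between cameras: reduce video quality when
# a camera would exceed its allotted portion, and re-partition storage (taking
# space from the largest allotment) when quality falls below the floor.

def expected_output(quality):
    return quality * 10            # stand-in model: output grows with quality

def adapt(allotments, qualities, quality_floor):
    for cam in qualities:
        if expected_output(qualities[cam]) > allotments[cam]:
            qualities[cam] -= 1    # reduced video quality
    low = [c for c, q in qualities.items() if q < quality_floor]
    for cam in low:                # re-partition into a new set of portions
        donor = max(allotments, key=allotments.get)
        if donor != cam:
            allotments[donor] -= 10
            allotments[cam] += 10
    return allotments, qualities

allotments = {"c1": 30, "c2": 100, "c3": 100}
qualities = {"c1": 5, "c2": 5, "c3": 5}
allotments, qualities = adapt(allotments, qualities, quality_floor=5)
```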

US Pat. No. 11,115,618

TELEVISION POWER SUPPLY DRIVING DEVICE AND TELEVISION

SHENZHEN SKYWORTH-RGB ELE...


1. A television power supply driving device, comprising a power board connected to a mainboard and a backlight module, the power board having a PFC circuit, wherein the power board further has an auxiliary power supply for supplying power to the mainboard, and a main power supply for supplying the backlight module; the auxiliary power supply comprises a first LLC resonance module, a resonance control module, a standby control module, a power supply module, and a rectifying filter module; the main power supply comprises a second LLC resonance module, a secondary LLC control module, and a synchronous rectifier module;
after being powered on, when receiving a power-on signal, the standby control module controls the power supply module to turn on the resonance control module and the PFC circuit successively; the PFC circuit outputs a PFC voltage to the first LLC resonance module and the second LLC resonance module; the first LLC resonance module converts the PFC voltage into a first voltage and a second voltage according to a first control signal output from the resonance control module; after being rectified and filtered by the rectifying filter module, the first voltage and the second voltage are output to the mainboard and the secondary LLC control module for power, respectively; the second LLC resonance module converts the PFC voltage into a third voltage according to a second control signal output by the secondary LLC control module, and the third voltage is output to the backlight module for power after being synchronously rectified by the synchronous rectifier module; and
when receiving a standby signal, the standby control module controls the power supply module to shut down the PFC circuit to stop outputting the PFC voltage, making the resonance control module enter a standby state.

US Pat. No. 11,115,617

AUDIO DEVICE FOR HDMI

Dolby Laboratories Licens...


1. An apparatus including a loopback device for connecting High-Definition Multimedia Interface (HDMI) devices, the loopback device comprising:a first HDMI interface that is configured to connect to a first HDMI source device;
a second HDMI interface that is configured to connect to an HDMI sink device; and
a processor,
wherein the processor is configured to control the loopback device to pass a first HDMI signal through from the first HDMI source device to the HDMI sink device via a first HDMI connection,
wherein the HDMI sink device is configured to select a selected HDMI signal, wherein the selected HDMI signal is one of the first HDMI signal from the first HDMI source device via the first HDMI connection, and a second HDMI signal from a second HDMI source device via a second HDMI connection, wherein the second HDMI connection does not connect to the loopback device,
wherein the processor is configured to control the loopback device to receive, via the second HDMI interface from the HDMI sink device, a selected audio signal associated with the selected HDMI signal, and
wherein the processor is configured to control the loopback device to output the selected audio signal to a speaker.

US Pat. No. 11,115,616

DISPLAY SYSTEM, DISPLAY METHOD, AND DISPLAY APPARATUS

Panasonic Intellectual Pr...


1. A display apparatus comprising:an electro-optical transfer function (EOTF) converter that receives a video signal having a first luminance range and performs EOTF conversion on the video signal to obtain a first luminance value;

a luminance converter, which is provided for converting the video signal into a pseudo High Dynamic Range signal having a second luminance range, a maximum value of the second luminance range being smaller than a maximum value of the first luminance range, the luminance converter that converts the first luminance value into a second luminance value within the second luminance range, the maximum value of the second luminance range being larger than a maximum value of a luminance of a Standard Dynamic Range (SDR) signal, and the maximum luminance value of the SDR signal being 100 nit; anda display that displays the pseudo High Dynamic Range signal having the second luminance range based on the second luminance value,
wherein the maximum value of the second luminance range is a maximum possible display peak luminance (DPL) that the display is capable of displaying, and is greater than the maximum luminance value of the SDR signal,
wherein the luminance converter performs conversion of the first luminance value into the second luminance value, based on content luminance information of the video signal, and
wherein the content luminance information includes an average value of luminance values of the video signal (CAL: Content Average Luminance) and a maximum value among luminance values of the video signal (CPL: Content Peak Luminance).
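The luminance conversion can be sketched as mapping a first luminance value from the HDR range into a second range capped at the display peak luminance (DPL), guided by the content's average (CAL) and peak (CPL) luminance. The piecewise-linear curve below is an illustrative assumption, not Panasonic's actual transfer function.

```python
# Hedged sketch: preserve luminance up to CAL, then compress the range between
# CAL and CPL so that CPL maps exactly to the display peak luminance (DPL).
# DPL exceeds the SDR maximum of 100 nit, giving a "pseudo HDR" second range.

def convert_luminance(first, dpl, cal, cpl):
    if first <= cal:
        return first                 # darker content passes through unchanged
    scale = (dpl - cal) / (cpl - cal)
    return cal + (min(first, cpl) - cal) * scale

# HDR source peaks at 4000 nit; display peak (DPL) is 500 nit, above SDR's 100
second = convert_luminance(first=4000, dpl=500, cal=200, cpl=4000)
```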

US Pat. No. 11,115,615

AUGMENTED REALITY DISPLAY OF LOCAL INFORMATION

Amazon Technologies, Inc....


1. A method for displaying crime location information for a street in a neighborhood, the method comprising:receiving, at a mobile computing device and from a user of the mobile computing device, an input instructing display of street crime information of a neighborhood;
controlling a camera of the mobile computing device to capture a live view of the neighborhood including a first street and a second street;
displaying the live view on a display of the mobile computing device;
determining a current location and orientation of the mobile computing device;
determining a first name of the first street and a second name of the second street from a street sign detected in the live view;
sending a message, including the current location, the first street name and the second street name, to at least one server to request the crime location information;
receiving the crime location information corresponding to the current location and the first street and the second street from the at least one server;
generating a street crime summary indicating which of the first and second streets is safer based on a safety score calculated from the crime location information for the first street and the second street and a safety threshold defining a safe street; and
augmenting the live view of the neighborhood with the street crime summary.
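The summary step (compare per-street safety scores against a threshold defining a safe street, and report which street is safer) can be sketched as below. The scoring formula and threshold value are assumptions for illustration; the claim does not specify them.

```python
# Hedged sketch: compute a safety score per street from crime-location counts,
# pick the safer of the two streets, and label it against the safety threshold.

def safety_score(crime_count):
    return max(0, 100 - 10 * crime_count)   # fewer crimes -> higher score

def street_crime_summary(first, second, crimes, safety_threshold=70):
    scores = {s: safety_score(crimes.get(s, 0)) for s in (first, second)}
    safer = max(scores, key=scores.get)
    label = "safe" if scores[safer] >= safety_threshold else "use caution"
    return f"{safer} is safer ({label}, score {scores[safer]})"

summary = street_crime_summary("Elm St", "Oak St", {"Elm St": 5, "Oak St": 1})
```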

US Pat. No. 11,115,614

IMAGE SENSOR WITH A/D CONVERSION CIRCUIT HAVING REDUCED DNL DETERIORATION

RENESAS ELECTRONICS CORPO...


1. A semiconductor device comprising:a lower counter circuit configured to output a lower bit counter signal;
an input signal determination circuit configured to compare a reference voltage with an input signal and output a lower bit latch signal;
a lower bit latch circuit configured to receive the lower bit counter signal and the lower bit latch signal to output a lower bit latch result signal;
a lower bit determination circuit configured to receive the lower bit counter signal and the lower bit latch signal to output an upper bit latch signal;
an upper counter circuit configured to output an upper bit counter signal; and
an upper bit latch circuit configured to receive the upper bit counter signal and the upper bit latch signal to output an upper bit latch result signal;
wherein each of the lower bit counter signal and the upper bit counter signal includes a plurality of bit counter signals,
wherein the lower bit latch circuit latches the lower bit counter signal in response to the lower bit latch signal, and
wherein the lower bit determination circuit outputs the upper bit latch signal in a period excluding an indefinite period in which the upper bit counter signal becomes indefinite, based on the lower bit latch signal and a predetermined bit counter signal in the lower bit counter signal.

US Pat. No. 11,115,613

READOUT ARRANGEMENT FOR AN IMAGE SENSOR, IMAGE SENSOR SYSTEM AND METHOD FOR READING OUT AN IMAGE SENSOR

Fraunhofer-Gesellschaft z...


1. A readout arrangement for an image sensor,wherein the readout arrangement is configured to receive from a plurality of column leads of the image sensor in parallel a plurality of image sensor analog signals describing in an analog manner brightness values detected by the image sensor, and
wherein the readout arrangement is configured to select which subset of a plurality of analog values represented by the image sensor analog signals or based on the image sensor analog signals is to be stored in an analog memory for further processing, and to cause storage of the selected analog values in the analog memory, or to store the selected analog values in the analog memory,
wherein the readout arrangement is configured to, based on an evaluation of the image sensor analog signals, decide which subset of a plurality of analog values represented by the image sensor analog signals or based on the image sensor analog signals is to be stored in the analog memory for further processing.

US Pat. No. 11,115,612

SOLID-STATE IMAGE SENSOR AND IMAGE CAPTURE APPARATUS

CANON KABUSHIKI KAISHA, ...


1. A solid-state image sensor comprising a plurality of pixels, wherein each of the plurality of pixels includes:a sensor unit that generates a pulse signal upon reception of a photon;
a low pass filter to which the pulse signal generated by the sensor unit is input;
a comparator that compares output of the low pass filter with a threshold; and
a counter configured to selectively switch between operating in a first mode of counting the number of pulses of the pulse signal generated by the sensor unit when a light emitting unit does not emit light, and operating in a second mode of counting the number of pulses of a predetermined signal, the number depending on an elapsed time from a timing at which light is emitted from the light emitting unit,
wherein when operating in the second mode, the counter counts the number of pulses of the predetermined signal from a timing at which light is emitted from the light emitting unit until a timing at which output of the comparator is inverted.

US Pat. No. 11,115,611

SOLID-STATE IMAGING DEVICE AND IMAGING SYSTEM

NUVOTON TECHNOLOGY CORPOR...


1. A solid-state imaging device, comprising:a first converter which converts an analog signal representing a pixel value to an upper bit of a digital signal including the upper bit and a lower bit; and
a second converter which converts the analog signal to the lower bit of the digital signal,
wherein the second converter includes:a first latch circuit which latches, as phase information, a plurality of clock signals upon conversion to the upper bit in the first converter, the plurality of clock signals having different phases;
a conversion circuit which generates the lower bit of the digital signal by converting the phase information to a binary value;
an adder; and
a second latch circuit which latches an addition result of the adder, and

the adder adds the binary value converted by the conversion circuit and a value latched by the second latch circuit.

US Pat. No. 11,115,610

NOISE AWARE EDGE ENHANCEMENT

DePuy Synthes Products, I...


1. A digital imaging method for use with an endoscope in ambient light deficient environments comprising:illuminating an environment using a source of electromagnetic radiation;
sensing reflected electromagnetic radiation with a pixel array of a sensor, wherein said pixel array generates image data;
creating an image frame from said image data;
detecting image textures and edges within the image frame;
enhancing textures and edges within the image frame;
retrieving from memory properties pertaining to a pixel technology comprising a known-conversion gain for an individual pixel or a group of pixels and an applied sensor gain of the sensor to:determine a noise variance expectation for noise variance within the image frame created by said sensor;
use said noise variance expectation to control the enhancing of the textures and edges within the image frame; and

creating a stream of images by sequentially combining a plurality of image frames.
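The noise-aware control of the enhancement step can be sketched with a shot-noise model: the noise variance expectation is derived from the conversion gain and applied sensor gain, and the edge-enhancement gain is attenuated where expected noise is high. The model and gain values here are illustrative assumptions.

```python
# Hedged sketch: derive a noise variance expectation from pixel-technology
# properties (conversion gain, applied sensor gain) and use it to scale back
# edge enhancement where noise would otherwise be amplified.

def noise_variance_expectation(signal, conversion_gain, sensor_gain):
    # photon shot noise: variance proportional to signal, scaled by the gains
    return signal * conversion_gain * sensor_gain

def enhance(pixel, edge_strength, conversion_gain=0.5, sensor_gain=2.0):
    variance = noise_variance_expectation(pixel, conversion_gain, sensor_gain)
    gain = edge_strength / (1.0 + variance / 100.0)  # attenuate with noise
    return pixel + gain

bright = enhance(pixel=200, edge_strength=10)   # noisier: less enhancement
dark = enhance(pixel=10, edge_strength=10)      # cleaner: more enhancement
```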

US Pat. No. 11,115,609

SOLID-STATE IMAGING APPARATUS AND DRIVING METHOD THEREOF

Sony Semiconductor Soluti...


1. A light detecting device, comprising:a plurality of avalanche photodiodes arranged in a two-dimensional array, including a first avalanche photodiode and a second avalanche photodiode;
a first transistor coupled to an anode or a cathode of the first avalanche photodiode; and
a second transistor coupled to an anode or a cathode of the second avalanche photodiode,
wherein, in a first mode, the first transistor is in an ON state and the second transistor is in an ON state,
wherein, in a second mode, the first transistor is in the ON state and the second transistor is in an OFF state,
wherein the plurality of avalanche photodiodes are disposed at a side of a first surface of a semiconductor substrate that is a light incident surface, and the anode or the cathode of the first avalanche photodiode and the anode or the cathode of the second avalanche photodiode are disposed at a side of a second surface opposite to the first surface, and
wherein the light detecting device is configured to switch between the first mode and the second mode according to an amount of incident light.

US Pat. No. 11,115,608

IMAGING DEVICE, IMAGING SYSTEM, AND DRIVE METHOD OF IMAGING DEVICE

CANON KABUSHIKI KAISHA, ...


1. An imaging device comprising:a pixel unit including a plurality of pixels that are arranged in a matrix and each output a pixel signal in accordance with a received light amount;
a reference signal circuit that outputs a first reference signal and a second reference signal, wherein voltages of the first and second reference signals respectively change in dependence on time and a voltage change per unit time of the first reference signal is different from a voltage change per unit time of the second reference signal; and
a plurality of column circuits each of which is provided on a corresponding column signal line of the pixel unit and includes a selector circuit that selects either the first reference signal or the second reference signal and a comparator that outputs a comparison signal indicating a result of comparison between the pixel signal and a reference signal selected by the selector circuit,
wherein the column circuits operate selectively in a first drive mode to output the comparison signal or a second drive mode to acquire a correction value of a ratio between the voltage change per unit time of the first reference signal and the voltage change per unit time of the second reference signal,
wherein the plurality of column circuits include a first column circuit that operates in the first drive mode and a second column circuit that is driven by a smaller current than a current of the first column circuit, and
wherein the selector circuit of the second column circuit selects the same reference signal out of the first reference signal and the second reference signal in the first drive mode and the second drive mode.

US Pat. No. 11,115,607

CONTROL SYSTEM FOR STRUCTURED LIGHT PROJECTOR AND ELECTRONIC DEVICE

GUANGDONG OPPO MOBILE TEL...


1. A control system for a structured light projector, comprising:a first driving circuit, connected with the structured light projector and configured to drive the structured light projector to project laser;
a microprocessor, connected with the first driving circuit and configured to provide a driving signal for the first driving circuit; and
an application processor, connected with the microprocessor and the first driving circuit, and configured to:receive a control instruction of turning on or turning off the structured light projector;
provide an enabling signal for the first driving circuit;
transmit an instruction of controlling the microprocessor to provide the driving signal for the first driving circuit in response to that the control instruction of turning on the structured light projector is received, or transmit an instruction of controlling the microprocessor to stop providing the driving signal for the first driving circuit in response to that the control instruction of turning off the structured light projector is received;
receive a detection signal, wherein the detection signal is an output current of the first driving circuit or is an image sent by the microprocessor;
restart the microprocessor or not based on the detection signal; and
control the structured light projector to be turned on or turned off.


US Pat. No. 11,115,606

MOTHERBOARD AND OPERATING SYSTEM CAPABLE OF OUTPUTTING IMAGE DATA

GIGA-BYTE TECHNOLOGY CO.,...


1. A motherboard capable of outputting image data and disposed in a case, comprising:a basic input/output system (BIOS);
an image transmission port configured to receive an external image signal;
an on-board video graphics array (VGA) card configured to provide an internal image signal;
a switching circuit selectively using the external image signal or the internal image signal as the image data according to a selection signal;
a control circuit selectively using the image data or Ethernet network data as output data; and
a first network connection port configured to transmit the output data via a network cable,
wherein:
the network cable is directly connected between the first network connection port and an electronic device,
in response to the control circuit using the image data as the output data, the electronic device displays an image according to the output data,
the selection signal is provided by a switch or the BIOS, and the switch is disposed outside of the case.

US Pat. No. 11,115,605

ELECTRONIC DEVICE FOR SELECTIVELY COMPRESSING IMAGE DATA ACCORDING TO READ OUT SPEED OF IMAGE SENSOR, AND METHOD FOR OPERATING SAME

Samsung Electronics Co., ...


1. An electronic device comprising:an image sensor;
an image processor; and
a control circuit electrically connected to the image sensor through a first predetermined interface, and to the image processor through a second predetermined interface,
wherein the control circuit is configured to:in response to setting of a read-out speed of the image sensor to a first predetermined speed, receive first image data through the first predetermined interface, the first image data being obtained through the image sensor and being not compressed by the image sensor,
transfer the first image data to the image processor through the second predetermined interface,
in response to setting of the read-out speed of the image sensor to a second predetermined speed, receive second image data through the first predetermined interface, the second image data being obtained through the image sensor and being compressed by the image sensor,
decompress the compressed second image data, and
transfer the decompressed second image data to the image processor through the second predetermined interface.
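The control-circuit behavior claimed above can be sketched as follows; the speed labels and `zlib` codec are illustrative assumptions, not the patent's actual interface or compression scheme:

```python
# Hypothetical sketch of the claimed control circuit: at the first
# read-out speed the sensor sends uncompressed frames, which pass
# through untouched; at the second speed frames arrive compressed
# and must be decompressed before transfer to the image processor.
import zlib

FIRST_SPEED = "normal"    # raw frames, no compression by the sensor
SECOND_SPEED = "high"     # frames compressed by the sensor

def transfer_to_processor(frame: bytes, readout_speed: str) -> bytes:
    """Return the bytes handed to the image processor."""
    if readout_speed == SECOND_SPEED:
        # Second mode: sensor compressed the data, so decompress first.
        return zlib.decompress(frame)
    # First mode: data was never compressed; forward as-is.
    return frame
```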


US Pat. No. 11,115,604

CAMERA APPARATUS FOR GENERATING MACHINE VISION DATA AND RELATED METHODS

INSITU, INC., Bingen, WA...


1. An apparatus comprising:a first camera coupled to a movable turret and a second camera coupled to the movable turret, the first camera to generate first image data of an object of interest in an environment and the second camera to generate second image data of the object of interest in the environment, at least a portion of the environment in the first image data not in the second image data and at least a portion of the environment in the second image data not in the first image data, the first camera to maintain the object of interest in a field of view of the first camera during movement of the first camera about the movable turret to generate the first image data and the second camera to maintain the object of interest in a field of view of the second camera during movement of the second camera about the movable turret to generate the second image data; and
a processor in communication with at least one of the first camera or the second camera, the processor to:alternatingly sample the first image data at a first sampling rate during first sampling intervals to generate a first image data feed and the second image data at a second sampling rate during second sampling intervals to generate a second image data feed, the first image data feed including a first image data feature and the second image data feed including a second image data feature, the second image data feature different than the first image data feature; and
transmit the second image data feed for analysis by a machine vision analyzer.


US Pat. No. 11,115,603

IMAGE PICKUP APPARATUS THAT REDUCES TIME REQUIRED FOR LIGHT CONTROL PROCESSING AND METHOD OF CONTROLLING SAME

CANON KABUSHIKI KAISHA, ...


1. An image pickup apparatus that photographs an object by causing a light emission device to emit light toward the object, the image pickup apparatus comprising:at least one memory configured to store instructions; and
at least one processor configured to execute the instructions stored in the at least one memory to thereby implement:
a photometry unit configured to selectively obtain a first image having a first resolution and a second image having a second resolution lower than the first resolution, as a photometric image, and perform photometry using the obtained photometric image to thereby obtain a result of photometry;
a determination unit configured to determine whether or not a predetermined condition is satisfied; and
a control unit configured to control, when performing light control processing for determining a main light emission amount for causing the light emission device to perform main light emission for photographing the object, the photometry unit to switch between first control in which the light control processing is performed based on the first image and second control in which the light control processing is performed based on the second image, according to a result of determination performed by the determination unit.

US Pat. No. 11,115,602

EMULATING LIGHT SENSITIVITY OF A TARGET

WATCHGUARD VIDEO, INC., ...


1. A method of emulating light sensitivity of a human, the method comprising, for each of at least some frames of a video recording:receiving an image;
accessing image metadata associated with the image, wherein the image metadata comprises at least one encoding of different combinations of gain values and exposure settings obtained from test image processing in a controlled lighting environment;
discovering, from the image metadata, information related to an exposure setting and a gain value used by a camera in capture of the image;
deriving, from stored camera metadata for the camera, information related to a luminous flux associated with the exposure setting and the gain value used by the camera in the capture of the image;
determining an adjusted gain value corresponding to a human light sensitivity using the derived information related to the luminous flux; and
generating an adjusted image using the adjusted gain value.

US Pat. No. 11,115,601

IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD

Canon Kabushiki Kaisha, ...


1. An image processing apparatus comprising a processor configured to operate as:a first acquisition unit configured to acquire (a) a visible light image signal representing a visible-ray image which comes into focus at a first distance and (b) an infrared image signal representing (b1) an infrared-ray image which comes into focus at a second distance that is shorter than the first distance and (b2) an infrared-ray image which comes into focus at a third distance that is longer than the first distance;
a second acquisition unit configured to acquire first brightness information, hue information, and saturation information from the visible light image signal and at least second brightness information from the infrared image signal;
a third acquisition unit configured to acquire third brightness information based on edge information obtained from the first brightness information and edge information obtained from the second brightness information; and
a generation unit configured to generate a second visible light image using the third brightness information, the hue information, and the saturation information.

US Pat. No. 11,115,600

DYNAMIC FIELD OF VIEW COMPENSATION FOR AUTOFOCUS

QUALCOMM Incorporated, S...


1. A method of image processing, the method comprising:receiving image data corresponding to an image frame captured by an image sensor of an image capture device;
moving a lens of the image capture device while still receiving the image data corresponding to the image frame;
determining a plurality of positions of the lens corresponding to movement of the lens; and
performing field of view compensation on the image data from the image frame by cropping one or more rows of the image data based on the plurality of positions of the lens.
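The row-cropping step of the method above can be sketched as below; the tolerance rule and function names are assumptions for illustration, since the claim does not state how the lens positions map to cropped rows:

```python
# Hypothetical sketch: with a rolling-shutter sensor, different rows of
# the frame are read out while the lens sits at different positions.
# Crop the rows captured while the lens was farther than a tolerance
# from its final position (an assumed rule, not the patent's exact one).
def crop_for_fov(rows, lens_positions, final_pos, tol):
    """Keep only rows whose lens position was within tol of final_pos."""
    return [row for row, pos in zip(rows, lens_positions)
            if abs(pos - final_pos) <= tol]
```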

US Pat. No. 11,115,599

AUTOMATIC CORRECTION OF CAMERA VIEWS

International Business Ma...


1. A computer-implemented method for automatically correcting a view of an image capture device, the computer-implemented method comprising:adjusting a view of an image capture device by applying one or more modifications selected from a group of: a zoom modification, a pan modification, and a tilt modification, wherein the view is displayed at a client device during a viewing session and the one or more modifications are indicated by the client device;
applying one or more corrective operations to the adjusted view based on user input, wherein the one or more corrective operations include a rotational operation to correct the adjusted view; and
applying subsequent operations automatically based on the one or more corrective operations during a subsequent viewing session with corresponding view settings, wherein the subsequent operations repeat at least a portion of the one or more corrective operations.

US Pat. No. 11,115,598

ELECTRONIC DEVICE AND METHOD FOR CHANGING MAGNIFICATION OF IMAGE USING MULTIPLE CAMERAS

Samsung Electronics Co., ...


1. An electronic device, comprising:a display;
a plurality of cameras;
a memory storing instructions; and
at least one processor operably coupled with the display, the plurality of cameras, and the memory,
wherein, the instructions are executable by the at least one processor to cause the electronic device to:
display, on the display, a preview acquired using a first camera from among the plurality of cameras, the preview including a sequence of images representing frames captured by the first camera, and a first image;
in response to receiving a first input while the preview acquired by the first camera is displayed, activate a second camera from among the plurality of cameras, wherein the first input is received and the second camera is activated before reception of any input adjusting a magnification level of the preview;
receive a second input for adjusting the magnification level of the preview when a second image, distinct from the first image, is acquired using the activated second camera; and
display the preview, based on at least a part of the second image and at least partially based on receiving the second input.

US Pat. No. 11,115,597

MOBILE TERMINAL HAVING FIRST AND SECOND AI AGENTS INTERWORKING WITH A SPECIFIC APPLICATION ON THE MOBILE TERMINAL TO RETURN SEARCH RESULTS

LG ELECTRONICS INC., Seo...


1. A mobile terminal comprising:a camera;
a touch screen configured to display screen information;
a first artificial intelligence (AI) agent that extracts a search keyword from a search command for searching image information;
a second AI agent that retrieves at least one piece of image information corresponding to an input search keyword from pre-stored image information; and
a controller configured to:
activate the camera in response to a request, and store an image captured through the camera,
in response to the search command, control the first AI agent to convert the search command into converted text and extract the search keyword from the converted text,
identify whether the first AI agent is interconnected with the second AI agent and determine a transmission path of the search keyword,
transmit the search keyword to the second AI agent according to the determined transmission path,
create category information defining a type of the captured image via the second AI agent,
match the captured image with the category information and store the matched information,
retrieve the at least one piece of image information corresponding to the search keyword from the pre-stored image information using the category information,
transmit the image information search result to the first AI agent according to the determined transmission path, and
control the first AI agent to display the image information search result on an execution screen of the first AI agent.

US Pat. No. 11,115,596

FULL-SCREEN DISPLAY WITH SUB-DISPLAY CAMERA

Google LLC, Mountain Vie...


1. An apparatus, comprising:a camera;
a primary display panel comprising a first pixel array and an aperture adjacent the first pixel array;
an auxiliary display comprising a second pixel array; and
an optical assembly comprising a reflecting optical element and an actuator coupled to the reflecting optical element, the actuator being configured to switch the reflecting optical element between a first arrangement and a second arrangement,
wherein the first arrangement defines an optical path from the aperture to the camera and the second arrangement defines an optical path from the aperture to the auxiliary display.

US Pat. No. 11,115,595

PRIVACY-AWARE IMAGE ENCRYPTION

MEDIATEK INC., Hsinchu (...


1. An electronic device, comprising:a controller, configured to receive an input image signal captured by a camera device and perform a codec process on the input image signal to generate a processed file,
wherein the controller is further configured to perform privacy detection on the input image signal or the processed file, wherein the controller performs the privacy detection using a classifier network that includes a plurality of classifiers,
wherein in response to the input image signal or the processed file being detected to include privacy information, the controller is further configured to encrypt the processed file to generate an encrypted file.
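The encrypt-on-detection flow of this claim can be sketched as follows; the classifier callables and the toy XOR cipher are assumptions for illustration (a real device would use a proper cipher such as AES):

```python
# Hypothetical sketch: run a set of classifiers over the processed file;
# if any one reports privacy content, encrypt the file before output.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only (XOR is its own inverse).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def maybe_encrypt(processed: bytes, classifiers, key: bytes) -> bytes:
    """Encrypt the processed file only when privacy content is detected."""
    has_privacy = any(clf(processed) for clf in classifiers)
    return xor_encrypt(processed, key) if has_privacy else processed
```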

US Pat. No. 11,115,594

SHUTTER SPEED ADJUSTING METHOD AND APPARATUS, AND ROBOT USING THE SAME

UBTECH ROBOTICS CORP, Sh...


1. A computer-implemented shutter speed adjusting method for a robot with a photographing device, comprising executing on a processor the steps of:obtaining a motion speed of the robot;
obtaining an included angle between a motion direction of the robot and a shooting direction of the photographing device;
obtaining a distance between the robot and a photographed object; and
adjusting a shutter speed of the photographing device based on the motion speed, the included angle, and the distance;
wherein the step of obtaining the motion speed of the robot comprises:
calculating the motion speed of the robot based on a rotational speed of a servo of the robot through the following formula: v = R × ω,

where v indicates the motion speed of the robot, ω indicates the angular speed of the servo of the robot, and R indicates the radius of a wheel of the robot.
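The speed formula v = R × ω is given in the claim; the shutter adjustment below is an illustrative assumption (the patent does not disclose the exact adjustment rule), shortening exposure as apparent motion across the frame grows:

```python
import math

def motion_speed(wheel_radius_m: float, servo_angular_speed_rad_s: float) -> float:
    """v = R * omega, as stated in the claim."""
    return wheel_radius_m * servo_angular_speed_rad_s

def adjust_shutter(v: float, included_angle_rad: float, distance_m: float,
                   base_shutter_s: float = 1 / 60) -> float:
    """Hypothetical rule: the subject's apparent angular speed is roughly
    v * sin(angle) / distance; cap the exposure so motion blur stays under
    an assumed 0.01 rad budget."""
    apparent = v * math.sin(included_angle_rad) / max(distance_m, 1e-6)
    if apparent <= 0:
        return base_shutter_s
    return min(base_shutter_s, 0.01 / apparent)
```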

US Pat. No. 11,115,593

LENS INTERCHANGEABLE DIGITAL CAMERA, AND OPERATION METHOD AND OPERATION PROGRAM THEREOF

FUJIFILM Corporation, To...


1. A lens interchangeable digital camera comprising:an image sensor that outputs image data of a subject;
a sensor movement type shake correction mechanism that performs a sensor movement operation of moving the image sensor in a direction to cancel a shake;
a body mount in which a plurality of types of lens units with built-in imaging optical systems for forming a subject image on an imaging surface of the image sensor are interchangeably mounted; and
a processor configured to:acquire optical characteristic data corresponding to optical characteristics of the imaging optical system of the lens unit mounted in the body mount;
perform image correction on the image data based on the optical characteristic data;
determine whether or not adaptive optical characteristic data that is the optical characteristic data can be acquired to be subjected to the image correction; and
decide an operation of the shake correction mechanism according to a determination result and restrict at least a part of the sensor movement operation allowed in a correctable state where the image correction based on the adaptive optical characteristic data is possible in an uncorrectable state where the image correction based on the adaptive optical characteristic data is not possible since the adaptive optical characteristic data cannot be acquired.


US Pat. No. 11,115,592

METHOD AND DEVICE FOR STABILIZING PHOTOGRAPHIC EQUIPMENT ON MOBILE DEVICE

BOE TECHNOLOGY GROUP CO.,...


1. A method for stabilizing photographic equipment on a mobile device, comprising:acquiring attitude information of the mobile device;
determining an expected pitch angle of the photographic equipment according to the attitude information of the mobile device;
determining a driving force according to the expected pitch angle; and
adjusting a pitch angle of the photographic equipment by adoption of the driving force,
wherein determining the driving force according to the expected pitch angle includes:reading an initial pitch angle of the photographic equipment;
calculating a number of adjustments according to the expected pitch angle and the initial pitch angle; and
acquiring a sub-driving force of each adjustment according to the number of adjustments and the expected pitch angle,

acquiring the sub-driving force of each adjustment according to the number of adjustments and the expected pitch angle includes:determining a sub-expected pitch angle of each adjustment according to a serial number of each adjustment;
determining a current pitch angle of the photographic equipment; and
calculating the sub-driving force of each adjustment according to the sub-expected pitch angle and the current pitch angle,

wherein calculating the sub-driving force of each adjustment according to the sub-expected pitch angle and the current pitch angle includes:calculating the sub-driving force according to

F = Kp(Xd − X) − bẊ,

in which X refers to the current pitch angle of the photographic equipment; Xd refers to the sub-expected pitch angle of the photographic equipment; b refers to the Coulomb friction coefficient; Ẋ refers to a current pitch angular velocity of the photographic equipment; Kp refers to the virtual stiffness factor; and F refers to the sub-driving force.
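The per-adjustment control step can be sketched as below, assuming a stiffness-plus-friction form F = Kp·(Xd − X) − b·Ẋ consistent with the symbols defined in the claim (the exact expression in the patent may differ):

```python
# Assumed impedance-style control law: a virtual spring Kp pulls the
# pitch angle toward its sub-expected value, while a friction term b
# opposes the current pitch angular velocity.
def sub_driving_force(x: float, x_desired: float, x_dot: float,
                      kp: float, b: float) -> float:
    """F = Kp*(Xd - X) - b*Xdot (assumed form)."""
    return kp * (x_desired - x) - b * x_dot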

US Pat. No. 11,115,591

PHOTOGRAPHING METHOD AND MOBILE TERMINAL


1. A photographing method, wherein the photographing method is applied to a mobile terminal that comprises a front-facing camera and a rear-facing camera, and the photographing method comprises:receiving a first input by a user when a current screen displays a photographing preview screen;
in response to the first input, updating the photographing preview screen and displaying it as a first sub-preview-screen and a second sub-preview-screen;
receiving a second input by the user;
in response to the second input, controlling a first photographing identifier displayed on the first sub-preview-screen and a second photographing identifier displayed on the second sub-preview-screen to move; and
when the first photographing identifier and the second photographing identifier have an overlapping region with a preset area, controlling the front-facing camera and the rear-facing camera to capture a first image and a second image respectively, and displaying a composite image of the first image and the second image, wherein
the first sub-preview-screen displays a preview image captured by the front-facing camera, and the second sub-preview-screen displays a preview image captured by the rear-facing camera.

US Pat. No. 11,115,590

INTELLIGENT SENSOR SWITCH DURING RECORDING

GoPro, Inc., San Mateo, ...


1. An image capture device comprising:a first image sensor configured to obtain first frames at a first frame rate;
a second image sensor configured to obtain second frames at a second frame rate;
a memory; and
an image signal processor configured to:process the first frames from the first image sensor and store the processed first frames in the memory;
partially process the second frames from the second image sensor to obtain first camera control statistics;
on a condition that an indication to switch from the first image sensor to the second image sensor is received:the first image sensor is further configured to obtain third frames at the second frame rate;
the second image sensor is further configured to obtain fourth frames at the first frame rate; and
the image signal processor is further configured to process the fourth frames from the second image sensor and store the processed fourth frames in the memory and partially process the third frames from the first image sensor to obtain second camera control statistics.
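The switch logic of this claim can be sketched as follows; the class and method names are assumptions for illustration:

```python
# Hypothetical sketch of the claimed switch: the active sensor's frames
# are fully processed and stored, the standby sensor's frames are only
# partially processed for control statistics; on a switch the two
# sensors swap both their roles and their frame rates.
class DualSensorPipeline:
    def __init__(self, first_rate: int, second_rate: int):
        self.active, self.standby = "first", "second"
        self.rates = {"first": first_rate, "second": second_rate}

    def switch(self) -> None:
        # Swap roles and frame rates between the two sensors.
        self.active, self.standby = self.standby, self.active
        self.rates["first"], self.rates["second"] = (
            self.rates["second"], self.rates["first"])

    def handle(self, sensor: str, frame: bytes) -> str:
        """Fully process frames from the active sensor; only gather
        control statistics from the standby sensor."""
        return "stored" if sensor == self.active else "stats_only"
```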



US Pat. No. 11,115,589

IMAGING CONTROL APPARATUS AND METHOD FOR CONTROLLING IMAGING CONTROL APPARATUS

CANON KABUSHIKI KAISHA, ...


1. An apparatus comprising:at least one processor; and
at least one memory coupled to the at least one processor, the memory having instructions that, when executed by the processor, perform operations as:
a generation unit configured to generate a combined image obtained by combining an image captured in a case where an image capturing direction is moved in a first direction from a position of an image capturing start, and an image captured in a case where the image capturing direction is moved in a second direction opposite to the first direction with respect to the position of the image capturing start after the image is captured in the case where the image capturing direction is moved in the first direction; and
a control unit configured to perform control to, according to an amount of movement in the second direction from the position of the image capturing start having reached a second amount of movement that is an end condition for ending image capturing in the second direction, record the combined image obtained by the generation unit combining the image captured in the case where the image capturing direction is moved in the first direction and the image captured in the case where the image capturing direction is moved in the second direction in a recording medium,
wherein the second amount of movement that is the end condition for ending image capturing in the second direction is smaller as the amount of movement in the first direction is larger.

US Pat. No. 11,115,588

CONTROL CIRCUIT AND CAMERA IRIS DRIVING DEVICE

Samsung Electro-Mechanics...


1. A controller of a camera iris, the controller comprising:a signal processing circuit, configured to generate a detected position value corresponding to a current position of the iris, based on a first sensing signal and a second sensing signal respectively received from a first position sensor and a second position sensor that are configured to detect an aperture position of the iris;
a control circuit, configured to set a section between a first aperture position and a second aperture position to a plurality of control sections, and upon receiving a command to change the aperture position of the iris, performing a corresponding control operation for each of the plurality of control sections to control a movement operation of the iris, based on the detected position value; and
a driving circuit, configured to generate a driving current, and provide the generated driving current to a coil that is configured to drive the iris under control of the controller.

US Pat. No. 11,115,587

RECORDING REPRODUCTION APPARATUS, RECORDING REPRODUCTION METHOD, AND PROGRAM

JVCKENWOOD CORPORATION, ...


1. A recording reproduction apparatus comprising:a captured data acquisition unit configured to acquire first captured data captured by a first camera that captures an image of an outside of a moving body and second captured data captured by a second camera that captures an image in a direction that is opposed to a display surface of a display unit;
an event detection unit configured to detect an event on the moving body;
a recording controller configured to store, when the event detection unit has detected an event, the first captured data including at least data at a timing when the event has occurred as event recording data;
a face detection unit configured to detect a human face from the second captured data;
a reproduction controller configured to reproduce the event recording data when the face detection unit has detected a human face within a predetermined period after the storage of the event recording data; and
a display controller configured to cause the display unit to display the event recording data reproduced by the reproduction controller.

US Pat. No. 11,115,586

SYSTEMS AND METHODS FOR COVERTLY MONITORING AN ENVIRONMENT


1. A system for capturing images, comprising:a device, comprising:a housing having a camouflage appearance, a top, a bottom, a front, a rear, and two sides, the housing having a greater length from the top to the bottom than from the front to the rear or between the two sides;
a motion detector disposed within the housing;
a camera disposed within the housing;
a circuit board disposed within the housing;
a memory disposed within the housing storing instructions and image recognition software; and
a processor disposed within the housing configured to execute the instructions and image recognition software;

a first cable connected to the circuit board and extending through the top of the housing;
a second cable connected to the circuit board and extending through the bottom of the housing; and
at least one of the first cable or the second cable mimicking a vine in color and appearance.

US Pat. No. 11,115,584

VIEWING DEVICE

Optelec Holding B.V., Ba...


1. An apparatus to capture and display an image of an object on a surface, said apparatus comprising:a frame;
a body;
at least one optical sensor for capturing the image of the object, wherein the at least one optical sensor is fixed to the body of the apparatus;
a screen for displaying the image of the object;
a pair of handles arranged on the body and provided with controls configured to control the screen, movement of the apparatus and/or the optical sensor;
a frame plate arranged behind the screen;
means for moving the optical sensor and the screen along a transverse axis of the apparatus and relative to the frame plate in a translation direction such that both the optical sensor and the screen are moveable in the same translation direction;
at least two skids spaced apart from one another on the transverse axis at opposing ends of the frame and rotatably connected to the frame for supporting the apparatus such that the at least one optical sensor captures the object on the surface between the skids and the frame can move with respect to the surface.

US Pat. No. 11,115,583

APPARATUS AND METHOD FOR DETERMINATION OF A FOCUS DETECTION TARGET

CANON KABUSHIKI KAISHA, ...


1. An apparatus comprising:at least one processor; and
a memory coupled to the at least one processor, the memory having instructions that, when executed by the processor, perform operations as:
a focus detection unit configured to execute focus detection based on a signal output from an image sensor;
a control unit configured to control driving of a focus lens;
a determination unit configured to determine a main object to be tracked by the focus lens after the control unit controls driving of the focus lens to adjust a focus on a first object detected based on a result of the focus detection,
wherein the determination unit determines a second object as the main object based on conspicuousness in a case where a focus detection result corresponding to the second object is smaller than a first threshold value, and determines the main object regardless of the conspicuousness in a case where the difference between the focus detection result and the position corresponding to the second object is greater than or equal to the first threshold value.

US Pat. No. 11,115,582

IMAGING CONTROL APPARATUS, IMAGING APPARATUS, AND RECORDING MEDIUM

JVCKENWOOD CORPORATION, ...


1. An imaging control apparatus comprising:a conversion unit configured to convert a first image signal acquired at a first depth, a second image signal acquired at a second depth that is deeper than the first depth, and a third image signal acquired at a third depth that is shallower than the first depth into a first luminance signal, a second luminance signal, and a third luminance signal, respectively;
a signal calculation unit configured to calculate a first signal, a second signal and a third signal about an amount of blur or high frequency signal, which are related to an edge, based on the first luminance signal, the second luminance signal, and the third luminance signal, respectively;
a data expansion unit configured to expand the calculated first signal, the calculated second signal, and the calculated third signal respectively to first expanded signal data, second expanded signal data, and third expanded signal data, which are related to a peripheral edge pixel;
a comparison unit configured to compare the first expanded signal data, the second expanded signal data, and the third expanded signal data with each other; and
a control method determination unit configured to set a back focus area based on a magnitude relation between the first expanded signal data and the second expanded signal data and set a front focus area based on magnitude relation between the first expanded signal data and the third expanded signal data to thereby control focus.

US Pat. No. 11,115,581

FOCUS ADJUSTMENT METHOD AND DEVICE USING A DETERMINED RELIABILITY OF A DETECTED PHASE DIFFERENCE

Olympus Corporation, Tok...


1. An imaging device, comprising:an image sensor having imaging pixels that receive light of a subject image through a photographing lens and perform photoelectric conversion, and paired phase difference pixels that respectively receive light flux corresponding to pupil regions that are paired with the photographing lens and perform photoelectric conversion on the light flux that has been received; and
a processor having a phase difference detection section, pixel data calculation section, degree of coincidence calculation section, reliability determination section, and focus adjustment section, wherein
the phase difference detection section detects a phase difference based on pixel data of the paired phase difference pixels;
the pixel data calculation section calculates pixel data of virtual image pixels at positions of the phase difference pixels, or selects pixel data of image pixels around positions of the phase difference pixels;
the degree of coincidence calculation section calculates degree of coincidence between each pixel data of the virtual image pixels that have been calculated, or calculates degree of coincidence of each pixel data of image pixels that have been selected for positions of paired phase difference pixels;
the reliability determination section determines reliability of the phase difference detection result in accordance with the degree of coincidence; and
the focus adjustment section performs focus adjustment based on the phase difference detection result and the reliability,
wherein the pixel data calculation section includesa gain setting section that sets gain for pixel data of the phase difference pixels, and
an interpolation section that interpolates pixel data of virtual imaging pixels corresponding to position of the phase difference pixels, based on pixel data of imaging pixels positioned around the phase difference pixels, and

wherein, the pixel data calculation section calculates pixel data of virtual imaging pixels corresponding to positions of the phase difference pixels based on a value resulting from having applied the gain to pixel data of the phase difference pixels, and the pixel data that has been interpolated.

US Pat. No. 11,115,580

COMMUNICATION SYSTEM, COMMUNICATION APPARATUS, IMAGE CAPTURE APPARATUS, CONTROL METHOD FOR COMMUNICATION APPARATUS, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...


1. A communication system comprising a first communication apparatus and a second communication apparatus,the first communication apparatus comprising:
at least one memory storing instructions; and
at least one processor configured to, upon executing the stored instructions, function as
a unit configured to transmit a first notification for connection processing in short-range wireless communication during an operation in a first power mode, and
a unit configured to transmit a second notification, different from the first notification, for the connection processing in the short-range wireless communication during an operation in a second power mode in which less power is used than in the first power mode, and
the second communication apparatus comprising:
at least one memory storing instructions; and
at least one processor configured to, upon executing the stored instructions, function as
a receiving unit configured to receive one of the first notification and the second notification transmitted by the first communication apparatus, and
a unit configured to start, if the first notification is received, the connection processing at an arbitrary timing, and start, if the second notification is received, the connection processing at a timing restricted as compared to the case in which the first notification is received,
wherein in a state where a connection has been established by the connection processing and the first communication apparatus is in the first power mode, the connection is disconnected when a transition to the second power mode is performed.

US Pat. No. 11,115,579

METHOD FOR SWITCHING OPTICAL FIELDS OF VIEW


1. A digital camera comprising:a. a dual field of view optical system comprising a first lens arrangement forming an image of a first field of view focused at a first focal plane and a second lens arrangement forming an image of a second field of view focused at a second focal plane axially spaced from said first focal plane, said second field of view being wider than said first field of view, said first lens arrangement and said second lens arrangement being combined on a common optical axis,
b. a single imaging sensor deployed to receive images simultaneously from said first lens arrangement and said second lens arrangement,
c. an actuator deployed to move said optical system along the optical axis between a first position in which an image from said first lens arrangement is focused on said imaging sensor and an image from said second lens arrangement is defocused and a second position in which an image from said second lens arrangement is focused on said imaging sensor and an image from said first lens arrangement is defocused, and
d. a digital processing element associated with said imaging sensor and configured to receive images sampled when said optical system is in said first position and said second position, and to co-process said sampled images to derive at least one output image selected from the group consisting of: a first output image with said first field of view and a second output image with said second field of view.

US Pat. No. 11,115,578

ELECTRONIC APPARATUS, CONTROL DEVICE, AND CONTROL METHOD

SHARP KABUSHIKI KAISHA, ...


1. An electronic apparatus comprising:a first camera and a second camera, having a same photographing direction from the electronic apparatus;
a display device that displays an image captured by the first camera;
a touch panel; and
a control device,
wherein the control device performs an imaging area display operation,
to cause the display device to display an imaging area of the second camera in overlay on the image captured by the first camera,
further to cause the display device to display, a zoom operation bar capable of changing a zoom magnification of the second camera by inputting to the touch panel, the zoom operation bar overlaid on the image captured by the first camera, and
to cause a size of the imaging area of the second camera displayed on the display device to change according to a change of the zoom magnification of the second camera.

US Pat. No. 11,115,577

DRIVER MONITORING DEVICE MOUNTING STRUCTURE

TOYOTA JIDOSHA KABUSHIKI ...


1. A driver monitoring device mounting structure, the structure comprising:a driver monitoring device main body that is attached to an upper portion of a steering column that supports a steering wheel, the driver monitoring device main body being disposed on the upper portion of the steering column and in front of the steering wheel as viewed in a front-rear direction of a vehicle;
a supporting component that has an anchoring portion and a fastening portion, and that supports a connector that is attached to a lead wire that is drawn out from the driver monitoring device main body; and
a column cover that has a temporary fixing portion to which the anchoring portion is temporarily fixed, and a fixing portion to which the fastening portion is fixed,
wherein the column cover covers the anchoring portion, the fastening portion, the temporary fixing portion and the steering column,
wherein the anchoring portion is temporarily fixed to the temporary fixing portion by means of a weight of the supporting component.

US Pat. No. 11,115,576

SENSOR MODULE WITH A COLLAR CONFIGURED TO BE ATTACHED TO A CAMERA MODULE FOR A USER DEVICE

QUALCOMM Incorporated, S...


1. A sensor module comprising:a collar configured to be attached to a camera module for a user device,wherein the collar includes:a first opening that is configured to align with an aperture of a camera of the camera module, and
a second opening;


a sensor attachment module in the collar;
a sensor within the sensor attachment module,wherein the sensor is in an offset position relative to an axis associated with the sensor module; and

a reflective device that redirects light towards the sensor,wherein the light enters the second opening in a direction that is parallel relative to the axis associated with the sensor module and is redirected in a direction that is perpendicular to the axis associated with the sensor module.


US Pat. No. 11,115,575

CAMERA WITH 2-COMPONENT ELEMENT

Aptiv Technologies Limite...


1. A device for acquiring images inside a vehicle, said device comprising:an illuminator configured to illuminate a field of view;
an image sensor configured to acquire images from the field of view; and
a cover between the illuminator and the field of view;
an optical element that abuts the cover on a side opposite the field of view, the optical element further comprising a light-transparent portion, a light-blocking portion, and a printed circuit board (PCB),
wherein the light-transparent portion is configured so that:light of the illuminator emitted to the field of view passes through the light-transparent portion;
the light-transparent portion forms a cavity with an opening and defines a space that is partially occupied by a light source;
a side of the light-transparent portion partially forms the cavity and faces the light source to form a lens; and
the light-transparent portion comprises a silicon rubber and further forms a sealed surface that abuts the PCB to dampen vibrations of the PCB, and

wherein the light-blocking portion is configured to block the light of the illuminator emitted in a direction towards the image sensor.

US Pat. No. 11,115,574

CONTROL CIRCUIT OF LIQUID LENS, CAMERA MODULE AND METHOD OF CONTROLLING LIQUID LENS

LG INNOTEK CO., LTD., Se...


16. A camera module, comprising:a holder comprising a first side wall, the first side wall having a first hole formed therethrough;
a focus adjustable lens disposed in the holder and comprising a common electrode and a plurality of individual electrodes;
a voltage control circuit configured to supply a driving voltage to the focus adjustable lens in order to control an interface in the focus adjustable lens, the voltage control circuit configured to supply the driving voltage based on a capacitance between the common electrode and at least one individual electrode of the plurality of individual electrodes in the focus adjustable lens;
a capacitance measuring circuit configured to measure the capacitance between the common electrode and the at least one individual electrode of the plurality of individual electrodes in the focus adjustable lens;
a first switch located between the capacitance measuring circuit and the focus adjustable lens; and
a second switch located between the voltage control circuit and the first switch and between the voltage control circuit and the focus adjustable lens,
wherein the capacitance measuring circuit is configured to measure the capacitance between the common electrode and each individual electrode of the plurality of individual electrodes,
wherein one side of the first switch is electrically connected to the focus adjustable lens and the voltage control circuit,
wherein the second switch is turned on when a common electrode voltage is supplied to the common electrode, and turned off when the capacitance measuring circuit measures the capacitance,
wherein, when the common electrode is floating after a ground voltage is supplied to the common electrode after a shape of the interface is changed by the driving voltage, the capacitance is measured during a period when the first switch is turned on and a voltage supplied to the at least one individual electrode is changed from a first voltage to a second voltage having a lower level than the first voltage,
wherein the first switch is turned off when the voltage control circuit supplies the driving voltage to the focus adjustable lens for operating the focus adjustable lens,
wherein the voltage control circuit comprises:
a first voltage control circuit configured to supply a common electrode voltage to the common electrode; and
a second voltage control circuit configured to supply individual electrode voltages to the plurality of individual electrodes, respectively, the driving voltage being created via interaction between the common electrode voltage and at least one individual electrode voltage of the individual electrode voltages,
wherein the common electrode voltage and the at least one individual electrode voltage are applied to the common electrode and the individual electrode at different times, respectively,
wherein the voltage control circuit is configured to generate the driving voltage used for operating the focus adjustable lens as well as to accumulate electric charge used for measuring the capacitance between the common electrode and the at least one individual electrode,
wherein the capacitance measuring circuit includes a reference capacitor,
wherein the capacitance between the common electrode and the at least one individual electrode is measured by using the reference capacitor,
wherein the holder comprises:
a second side wall opposite the first side wall in a first direction perpendicular to an optical axis of the camera module; and
a first plate having a cavity in which the focus adjustable lens is disposed, and wherein the second side wall has a second hole formed therethrough.

US Pat. No. 11,115,573

HYPERSPECTRAL PLENOPTIC CAMERA

UNITED STATES OF AMERICA ...


1. A plenoptic camera, comprising:a filter located between a first lens and a second lens in an aperture plane of the plenoptic camera, wherein the filter is configured to selectively permit transmission of wavelengths;
a micro lens array, wherein each micro lens of the micro lens array is configured to, based on a location of the wavelengths, focus the wavelengths from the filter; wherein a distance between the micro lens array and the second lens is based on a diameter of an aperture of the plenoptic camera; and
an image sensor, configured to receive the focused wavelengths.

US Pat. No. 11,115,572

AUTOMATIC FOCUSING SYSTEM, METHOD, AND VEHICULAR CAMERA DEVICE THEREFOR

TRIPLE WIN TECHNOLOGY (SH...


1. An automatic focusing system comprising:a control device; and
a camera device, comprising:a lens module configured to be assembled inside a vehicle;
a driving structure, comprising at least one shape memory alloy wire (SMA wire); and
a converting module configured to connect to a control device and to receive a control signal from the control device, wherein the control signal controls the SMA wire to become energized or de-energized thereby changing the focus of the lens module;

wherein the control device drives the lens module to focus according to the formula AF=(a*(X^b))+c, wherein AF represents a focusing distance of the lens module, X represents a moving distance of the lens module, X^b represents the bth power of X, and a, b, and c are parameters of the lens module; and wherein, during focusing, the moving distance X is preset, the parameters a, b, and c are constants, the focusing distance AF is calculated according to the formula AF=(a*(X^b))+c, and the lens module is driven to focus at the focusing distance AF.
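The claimed focusing formula is plain arithmetic, so it can be sketched directly. The parameter values below are hypothetical illustrations; the claim only states that a, b, and c are lens-specific constants and X is the preset moving distance.

```python
# Sketch of the claimed formula AF = a * (X ^ b) + c, where X is the preset
# moving distance of the lens module and a, b, c are lens-specific constants.
# The constants used here are made up for illustration.
def focusing_distance(x: float, a: float, b: float, c: float) -> float:
    """Return the focusing distance AF for a lens displacement x."""
    return a * (x ** b) + c

# Example with hypothetical lens parameters:
af = focusing_distance(x=2.0, a=1.5, b=2.0, c=0.25)
print(af)  # 1.5 * 2.0**2.0 + 0.25 = 6.25
```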

US Pat. No. 11,115,571

VEHICLE IMAGING UNIT

HONDA MOTOR CO., LTD., T...


1. A vehicle imaging unit comprising:a rearward imaging apparatus that is configured to capture an image of a side rear direction of a vehicle; and
a housing that houses the rearward imaging apparatus and that is attached to a side part of a vehicle body,
wherein
a first imaging lens of the rearward imaging apparatus is arranged on a rear end part of the housing,
a protrusion part that extends substantially in a vehicle front-to-rear direction is provided on a lower surface of the housing,
the housing further houses a downward imaging apparatus that is configured to capture an image of a side lower direction of the vehicle,
a second imaging lens of the downward imaging apparatus penetrates through a lower surface in a middle region in the vehicle front-to-rear direction of the housing and is arranged on the protrusion part,
the protrusion part has a region that extends in a vehicle rearward direction further than the second imaging lens of the downward imaging apparatus,
a leading edge of the protrusion part is distant from a leading edge of the housing by a same amount as a trailing edge of the protrusion part is distant from a trailing edge of the housing,
the protrusion part includes a narrowing region of which a separation width with respect to a side surface of the vehicle body becomes narrower in the vehicle rearward direction such that, at a time of the vehicle traveling frontward, a flow rate of a first travel wind passing between the side surface of the vehicle body and the protrusion part becomes faster than a flow rate of a second travel wind passing at an outer side in a vehicle width direction of the protrusion part and the second travel wind is attracted by the first travel wind, and
the first imaging lens of the rearward imaging apparatus is exposed toward the rearward direction of the vehicle at the outer side in the vehicle width direction of the protrusion part and at a rear side of the vehicle from the narrowing region.

US Pat. No. 11,115,570

IMAGING APPARATUS

Canon Kabushiki Kaisha, ...


1. An imaging apparatus comprising:an electronic viewfinder unit configured to be movable between a retracted state where the electronic viewfinder unit is retracted in a main body and a protruded state where the electronic viewfinder unit is protruded from the main body,
wherein the electronic viewfinder unit includes a rotation unit, where the rotation unit includes an electronic display unit and an exterior cover to cover the rotation unit,
wherein, in the protruded state, the rotation unit is rotatable about a rotation shaft held by the exterior cover, where the exterior cover is integrally formed of attachment surfaces to which the rotation shaft of the rotation unit is attached and a linking surface that links the attachment surfaces,
wherein one end of a flexible board connected to the electronic display unit is bent by the rotation shaft, and another end of the flexible board is connected to a finder board that does not rotate together with the rotation unit, where the flexible board includes a flexible portion with an amount of flexure variable by the rotation of the rotation unit, and where a protective cover for the flexible board is disposed between the electronic display unit and the linking surface when viewed from a direction where the rotation shaft extends, and
wherein the flexible board is disposed in a gap between the linking surface of the exterior cover and the protective cover when viewed from the direction where the rotation shaft extends.

US Pat. No. 11,115,569

RUGGEDIZED CAMERA SYSTEM FOR AEROSPACE ENVIRONMENTS

United States of America ...


1. A ruggedized camera system for aerospace environments, comprising:a housing comprising an electronics housing section and a lens housing section attached to the electronics housing section, the electronics housing section having a rear side, a forward side and an interior region for the placement of electronic components, wherein the forward side has an open region adjacent to the lens housing section;
a cover member attached to the electronics housing section so as to cover the interior region;
a camera electronics holder positioned within the interior region of the electronic housing section and attached to the housing, wherein the camera electronics holder comprises a rear portion, a pair of opposing side portions that are contiguous with the rear portion and an open front, wherein the camera electronics holder is positioned within the interior region so that the open front of the camera electronics holder is situated within the open region of the forward side of the electronics housing section, the rear portion and side portions defining an inner space that is configured to receive camera electronics and prevent movement of the camera electronics when the camera system is subjected to vibrations or physical shock, the rear portion having a plurality of openings therein that are in communication with the inner space of the camera electronics holder;
camera electronics positioned within the inner space of the camera electronics holder, wherein the camera electronics are secured to the camera electronics holder and extend through the open front of the camera electronics holder, the camera electronics including a plurality of signal connectors, wherein each signal connector is aligned with a corresponding opening in the rear portion of the camera electronics holder;
a lens assembly positioned within the lens housing section and comprising a lens interface rotatably attached to the lens housing section such that the lens interface is rotatable in a clockwise direction and in a counter-clockwise direction, a lens holder attached to the lens interface and an optical lens secured within the lens holder and in optical communication with the camera electronics;
a light-ring circuit attached to the lens housing section and extending about the optical lens; and
a diffuser attached to the lens housing section and configured to cover the light-ring circuit.

US Pat. No. 11,115,568

FIN SHAPED UNDERWATER CAMERA HOUSING AND SYSTEM INCORPORATING SAME


1. An apparatus for connecting to the underside of a watersports board and suitable for holding and enclosing a camera, the apparatus comprising:a fin having a proximal end, a distal end, two sides and a void internal to said fin;
a housing system for holding a camera and for connecting with the fin, comprising:a housing having an outer surface, an opening positioned in the outer surface and a void internal to the outer surface, said void accessible through the opening and said void configured to hold and enclose at least one camera, and a closure removably connected with the opening, wherein the closure in a first position seals the opening to make the void and housing watertight and the closure in a second open position provides access to the void enabling a camera system to be removed or inserted;
a housing assembly for holding the housing, said housing assembly having at least one adjustment, said adjustment positioned outside of the housing outer surface, but extending into the void so as to allow the user to move and set the camera within the void to a plurality of pitch and angles; and
a housing connection means connected with the housing assembly, and removably connectable with the fin; and

a fin connection means for removably connecting the housing system to the underside of a watersports board;
wherein said fin is connected with the underside of the watersports board, the housing system is positioned below the waterline of the watersports board to position a camera in the housing system below the waterline so as to film below the waterline.

US Pat. No. 11,115,567

IMAGE CAPTURE ASSEMBLY AND AERIAL PHOTOGRAPHING AERIAL VEHICLE

SZ DJI TECHNOLOGY CO., LT...


1. An image capture assembly comprising:a heat-conducting housing;
a camera component in the heat-conducting housing and including a plurality of side walls; and
a heat-conducting sheet attached to the camera component and arranged between the camera component and the heat-conducting housing to dissipate heat generated by the camera component to the heat-conducting housing, the heat-conducting sheet extending to more than one of the plurality of side walls of the camera component from a bottom side of the camera component, a height of the heat-conducting sheet on each of the more than one of the plurality of side walls of the camera component being greater than a half of a height of each of the more than one of the plurality of side walls of the camera component.

US Pat. No. 11,115,566

CAMERA SYSTEM WITH EXCHANGEABLE ILLUMINATION ASSEMBLY

Cognex Corporation, Nati...


1. A vision system, comprising:a main body section including an image sensor and processor circuitry, the image sensor being configured to acquire at least one image frame as an array of individual image pixels;
one or more external connectors located at a rear side of the main body section, the one or more external connectors operatively connected to the processor circuitry;
a front plate section, joined to the main body section, having an exterior face and defining an aperture in a central region of the front plate section;
a first connection socket disposed on the exterior face of the front plate section and interconnected to the processor circuitry;
a second connection socket disposed on the exterior face of the front plate section and interconnected to the processor circuitry;
a mounting assembly disposed along a perimeter of the aperture;
a liquid lens assembly having a liquid lens assembly connector connected to the first connection socket, the liquid lens assembly retained relative to the front plate section by the mounting assembly such that the liquid lens assembly connector is aligned with the first connection socket; and
an accessory component having an accessory component connector connected to the second connection socket.

US Pat. No. 11,115,565

USER FEEDBACK FOR REAL-TIME CHECKING AND IMPROVING QUALITY OF SCANNED IMAGE

ML Netherlands C.V., Ams...


1. A method of forming a composite image of a scene using a portable electronic device, the method comprising:capturing a stream of image frames of a scene with the portable electronic device;
extracting one or more image features from image frames of the stream of image frames;
associating sets of points with the one or more image features;
determining correspondences between sets of points associated with the one or more image features from multiple image frames of the stream of image frames;
sequentially incorporating image frames of the stream of image frames into a three dimensional point cloud based on the determined correspondences, wherein the incorporated image frames are incorporated into initial positions in the three dimensional point cloud; and
adjusting the points in the three dimensional point cloud based on a bundle adjustment for a plurality of the sets of points.

US Pat. No. 11,115,564

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...


1. An image processing apparatus comprising:a generation unit configured to generate print data to be used in printing a metallic image using a metallic ink containing silver particles in an inkjet printing apparatus capable of ejecting the metallic ink and at least one type of chromatic color ink; and
an obtaining unit configured to obtain metallic image data corresponding to a predetermined region on a print medium and indicating a tone in the metallic image,
wherein the generation unit generates print data of the metallic ink to be printed in the predetermined region based on the metallic image data obtained by the obtaining unit and generates print data of the chromatic color ink to be printed in the predetermined region based on the metallic image data obtained by the obtaining unit and pre-stored data indicating correspondences between tone values of the metallic image data and amounts of the chromatic color ink.

US Pat. No. 11,115,563

METHOD AND APPARATUS FOR NONLINEAR INTERPOLATION COLOR CONVERSION USING LOOK UP TABLES

ATI Technologies ULC, Ma...


1. A method of mapping between color gamuts comprising:obtaining, by an apparatus, a source image having a plurality of source color gamut pixels in a source color gamut;
converting, by the apparatus, the plurality of source color gamut pixels to a plurality of corresponding narrower target color gamut pixels using non-linear interpolation of a plurality of output pixel values from a reduced 3 dimensional (3-D) look-up table (LUT) corresponding to a target color gamut, wherein the reduced 3-D LUT comprises a plurality of displayable target color values representing vertices of a plurality of sub-cubes that represent less than all entries within the target color gamut; and
providing, for display, by the apparatus, the plurality of corresponding target color gamut pixels on a target color gamut display.
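The claim's non-linear interpolation over sub-cube vertices of a reduced 3-D LUT can be illustrated concretely. Trilinear blending of the eight sub-cube corners is one common choice, used here as an assumption; the 2x2x2 identity table is a toy stand-in for a real gamut-mapping LUT.

```python
# Hedged sketch: interpolate an output color from a sparse 3-D LUT whose
# lattice vertices define sub-cubes, blending the 8 corners of the containing
# sub-cube. The LUT maps lattice indices (ri, gi, bi) to output triples.
def lut_lookup(lut, size, r, g, b):
    """Trilinearly interpolate (r, g, b) in [0, 1]^3 against a size^3 LUT."""
    def locate(v):
        x = v * (size - 1)
        i = min(int(x), size - 2)   # index of the lower lattice plane
        return i, x - i             # lattice index and fractional offset
    (ri, rf), (gi, gf), (bi, bf) = locate(r), locate(g), locate(b)
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                v = lut[(ri + dr, gi + dg, bi + db)]
                for k in range(3):
                    out[k] += w * v[k]
    return out

# Toy identity LUT on a 2x2x2 lattice: output color equals input color.
identity = {(i, j, k): (float(i), float(j), float(k))
            for i in (0, 1) for j in (0, 1) for k in (0, 1)}
print(lut_lookup(identity, 2, 0.25, 0.5, 0.75))  # [0.25, 0.5, 0.75]
```

A real gamut-mapping table would store displayable target-gamut colors at the vertices instead of the identity, which is how the reduced LUT represents less than all entries of the target gamut.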

US Pat. No. 11,115,562

CONTEXT-AWARE IMAGE PROCESSING

SYNAPTICS INCORPORATED, ...


1. A method of image processing, comprising:receiving image data for a plurality of pixels corresponding to a first image, the image data including color information and a transparency value for each of the plurality of pixels;
determining a boundary of the first image based on the transparency values, the boundary delineating a first region of the first image from a second region of the first image;
updating the image data by selectively changing the color information for one or more of the pixels within the first region based at least in part on the color information for one or more of the pixels within the second region; and
interpolating pixels within the second region based on the updated image data.
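The boundary-from-transparency step in the claim can be sketched simply: the per-pixel alpha value separates a first region (opaque content) from a second region (transparent surround), and the boundary is the set of opaque pixels with a transparent 4-neighbour. The 0.5 alpha cutoff and the 4-neighbour rule are assumptions for illustration.

```python
# Hedged sketch: delineate the first region from the second using transparency
# values, reporting the opaque pixels that sit on the boundary.
def boundary_pixels(alpha, cutoff=0.5):
    h, w = len(alpha), len(alpha[0])
    opaque = [[a > cutoff for a in row] for row in alpha]
    edges = set()
    for y in range(h):
        for x in range(w):
            if not opaque[y][x]:
                continue
            # An opaque pixel touching any transparent 4-neighbour is boundary.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not opaque[ny][nx]:
                    edges.add((y, x))
    return edges

alpha = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(sorted(boundary_pixels(alpha)))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```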

US Pat. No. 11,115,561

INCLINATION DETECTING DEVICE, READING DEVICE, IMAGE PROCESSING APPARATUS, AND METHOD OF DETECTING INCLINATION

RICOH COMPANY, LTD., Tok...


1. An inclination detecting device comprising:processing circuitry configured to:detect, in image information that is an image of an object imaged by an imaging device at an imaging position at a background, a first boundary between the background and a shadow of the object;
detect a second boundary between the object and the shadow of the object in the image information; and
detect an inclination of the object in the image information from the first boundary and the second boundary detected by the processing circuitry,

wherein the processing circuitry detects the first boundary from a part or all of an outline of the object in a main-scanning direction and detects an inclination amount of the object from a detection result of the first boundary,
wherein the processing circuitry detects the second boundary from a part or all of the outline of the object in the main-scanning direction and detects an inclination amount of the object from a detection result of the second boundary,
wherein the processing circuitry determines processing to be performed on the image information from the inclination amounts detected from the first boundary and the second boundary,
wherein the processing circuitry includes:
a plurality of first boundary detecting circuitries; and
a plurality of second boundary detecting circuitries,wherein the processing circuitry determines the processing to be performed on the image information from the inclination amounts of the object detected from the first boundary and the second boundary, respectively, detected by the plurality of first boundary detecting circuitries and the plurality of second boundary detecting circuitries.
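One way to turn a detected boundary into an inclination amount, as the claim's circuitry does, is to fit a slope to the boundary positions along the main-scanning direction. The least-squares fit below is an assumed method, not the claimed circuitry itself.

```python
# Hedged sketch: estimate an inclination amount from boundary points (x, y),
# where x runs along the main-scanning direction. A least-squares slope is one
# simple stand-in for the claimed inclination detection.
def inclination(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den   # slope of the boundary: pixels of skew per scan pixel

# A boundary rising 1 pixel for every 2 pixels of scan has slope 0.5.
print(inclination([(0, 0), (2, 1), (4, 2), (6, 3)]))  # 0.5
```

Comparing the slopes fitted to the first boundary (background/shadow) and the second boundary (object/shadow) then gives the two inclination amounts the processing circuitry weighs.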


US Pat. No. 11,115,560

IMAGE PROCESSING DEVICE, CONTROL METHOD AND CONTROL PROGRAM FOR MORE ACCURATELY REMOVING A BACKGROUND PATTERN FROM AN IMAGE

PFU LIMITED, Kahoku (JP)...


1. An image processing apparatus, comprising:a processor to acquire an input image,
generate a multi-valued image from the input image,
generate a binary image acquired by binarizing the input image,
detect a thin line having a width equal to or less than a predetermined number of pixels from the multi-valued image, and
generate a background pattern removal image in which the thin line is removed from the binary image, based on the thin line detected from the multi-valued image; and

an output device to output the background pattern removal image or information generated using the background pattern removal image.
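The claimed pipeline (binarize, detect thin lines in the multi-valued image, clear them from the binary image) can be sketched with a toy detector. Row-wise dark-run detection stands in for whatever thin-line detector the apparatus actually uses, and the threshold and width limit are assumed values.

```python
# Hedged sketch of the claim: binarize the input, find thin dark runs (width
# <= max_width) in the multi-valued (grayscale) image, and remove those pixels
# from the binary image so the background pattern drops out.
def binarize(gray, threshold=128):
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def remove_thin_lines(gray, binary, max_width=1, threshold=128):
    out = [row[:] for row in binary]
    for y, row in enumerate(gray):
        x = 0
        while x < len(row):
            if row[x] < threshold:                 # start of a dark run
                run = x
                while x < len(row) and row[x] < threshold:
                    x += 1
                if x - run <= max_width:           # thin run: background pattern
                    for i in range(run, x):
                        out[y][i] = 0
            else:
                x += 1
    return out

gray = [[255, 40, 255, 10, 10, 10]]   # one thin stroke, one thick stroke
binary = binarize(gray)
print(remove_thin_lines(gray, binary))  # [[0, 0, 0, 1, 1, 1]]
```

Detecting the thin lines in the multi-valued image rather than the binary image is the point of the claim: faint pattern strokes that binarization would merge into content remain separable in grayscale.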

US Pat. No. 11,115,559

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

FUJIFILM Business Innovat...


1. A mobile communication terminal comprising:a plurality of communication modules configured to conduct a wireless communication with an information processing apparatus;
a processor programmed to function as:a selection unit to select a communication module from among the plurality of communication modules in accordance with whether the mobile communication terminal is moving away from the information processing apparatus or not; and

a controller configured to perform a control so as to start the communication using the selected communication module.

US Pat. No. 11,115,558

SYSTEMS AND METHODS FOR MAINTAINING CHAIN OF CUSTODY FOR ASSETS OFFLOADED FROM A PORTABLE ELECTRONIC DEVICE

MOTOROLA SOLUTIONS, INC.,...


1. A system for maintaining chain of custody for assets offloaded from a portable electronic device, the system comprising:an asset management controller external to the portable electronic device, the asset management controller including
a network interface; and
an electronic processor coupled to the network interface and configured to receive, from the portable electronic device via the network interface, an asset manifest including at least one asset identifier, at least one fixed-length unique identifier associated with the at least one asset identifier, and a manifest digital signature;
transmit, to the portable electronic device via the network interface, a storage message based on the asset manifest, the storage message identifying a data warehouse;
receive, from the portable electronic device via the network interface, an upload completion message;
retrieve, from the data warehouse via the network interface, at least one asset file;
determine, for the at least one asset file, at least one asset file fixed-length unique identifier;
determine whether the at least one fixed-length unique identifier matches the at least one asset file fixed-length unique identifier, wherein the at least one fixed-length unique identifier is a hash of the at least one asset file;
determine whether the manifest digital signature is valid; and
transmit, via the network interface, an asset deletion permission message when the at least one fixed-length unique identifier matches the at least one asset file fixed-length unique identifier, and the manifest digital signature is valid.
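The hash check at the heart of the claim is straightforward to sketch: deletion is permitted only when every manifest identifier matches a fresh hash of the retrieved file and the manifest signature is valid. SHA-256 is an assumed choice here; the claim requires only a fixed-length unique identifier that is a hash of the asset file, and the signature check is reduced to a boolean.

```python
# Hedged sketch of the claimed chain-of-custody check.
import hashlib

def asset_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_delete(manifest: dict, retrieved: dict, signature_valid: bool) -> bool:
    """manifest maps asset id -> expected hash; retrieved maps id -> file bytes."""
    hashes_match = all(
        asset_hash(retrieved[asset_id]) == expected
        for asset_id, expected in manifest.items()
    )
    # Deletion permission requires both the hash match and a valid signature.
    return hashes_match and signature_valid

video = b"body-cam footage"
manifest = {"asset-1": asset_hash(video)}
print(may_delete(manifest, {"asset-1": video}, signature_valid=True))        # True
print(may_delete(manifest, {"asset-1": b"tampered"}, signature_valid=True))  # False
```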


US Pat. No. 11,115,557

READING DEVICE

PANASONIC INTELLECTUAL PR...


1. A reading device comprising:a housing having a housing cover that is not transparent, the housing cover including (i) a first opening covered with an opening cover that is transparent and (ii) a second opening provided on a same plane as the first opening at a position separated from the first opening;
a guiding lighting unit that emits light to an outside of the housing to guide a user to appropriately place an object on the opening cover, the guiding lighting unit being located inside the second opening of the housing cover;
a reading lighting unit that illuminates the opening cover from inside the housing and that illuminates the object placed on the opening cover by the user, the reading lighting unit being located on an inner sidewall of the housing; and
an imaging unit disposed inside the housing that images the object placed by the user on the opening cover,
wherein the reading lighting unit is disposed closer to the opening cover than the imaging unit, and the reading lighting unit is configured to prevent light illuminated from the reading lighting unit from reaching the imaging unit.

US Pat. No. 11,115,556

WORK FORM SHARING

Hewlett-Packard Developme...


1. An image forming apparatus comprising:a user interface device;
a communication interface;
a processor; and
a memory storing instructions executable by the processor, wherein, when executed by the processor, the instructions cause the processor to, when an additional application that is not a default application installed in the image forming apparatus is comprised in a workform to be shared with an external apparatus, the workform being selected via the user interface device, back up installation file information of the additional application based on a package name of the additional application, and transmit, to the external apparatus via the communication interface, the backed up installation file information of the additional application with the workform to be shared.

US Pat. No. 11,115,555

METHOD OF PRODUCING MACHINE LEARNING MODEL, AND COPYING APPARATUS

Seiko Epson Corporation, ...


1. A method of producing a machine learning model, the method comprising:collecting, as training data, data including an image of a document and a size of the document; and
producing a model that indicates a correspondence between an image and a size of an output sheet required for printing the image such that a mark included in the image is identifiable by a user or a relationship between an input that includes the image and a predetermined size and an output that includes a value indicating that the mark included in the image is identifiable or not by the user on an output sheet of the predetermined size through machine learning based on the training data.

US Pat. No. 11,115,554

POWER SUPPLY CONTROL DEVICE AND IMAGE FORMING APPARATUS CAPABLE OF DETECTING A FAILURE OF A MAIN POWER SWITCH

CANON KABUSHIKI KAISHA, ...


1. A power supply control device comprising:a switch configured to output a switch depression signal, which is in a first logic level during a period in which the switch is not depressed, and is in a second logic level during a period in which the switch is depressed;
a first power supply configured to generate a first DC voltage based on AC power, which is supplied from an outside, and to apply the first DC voltage to a predetermined load;
a second power supply configured to generate a second DC voltage based on the AC power irrespective of a state of the switch; and
a controller configured to operate on the second DC voltage, to switch an operation state of the first power supply in a case where the switch depression signal has switched from the first logic level to the second logic level and thereafter the switch depression signal remains at the second logic level until a first predetermined time period has elapsed, and to output a signal on a failure of the switch in a case where the switch depression signal has switched from the first logic level to the second logic level and thereafter the switch depression signal remains at the second logic level until a second predetermined time period has elapsed, the second predetermined time period being longer than the first predetermined time period.
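The claimed two-threshold timing logic can be sketched as a small classifier: a press that outlasts a first interval toggles the power supply, and one that outlasts a second, longer interval is reported as a stuck switch. The function name and the millisecond thresholds below are illustrative assumptions, not values from the patent.

```python
def evaluate_switch(press_duration_ms, t_switch_ms=100, t_failure_ms=5000):
    """Classify a switch-depression signal by how long it stays at the
    'depressed' logic level (thresholds are illustrative placeholders)."""
    if press_duration_ms >= t_failure_ms:
        # Held far longer than any deliberate press: report a failed switch.
        return "failure_signal"
    if press_duration_ms >= t_switch_ms:
        # Held past the first threshold: switch the first power supply's state.
        return "switch_power_state"
    # Shorter than the debounce threshold: treat as noise and ignore.
    return "ignore"
```

A real controller would run this continuously against the live signal; the sketch only shows the two-threshold decision.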

US Pat. No. 11,115,553

INFORMATION PROCESSING SYSTEM FOR DETECTING IMAGE INFORMATION AND NON-TRANSITORY COMPUTER READABLE MEDIUM

FUJIFILM Business Innovat...


1. An information processing system comprising:a processor, configured to:
register prohibited information associated with at least one of pieces of attribute information to be assigned to image information on a per piece-of-attribute information basis;
refer to the pieces of attribute information and the prohibited information; and
perform error handling if received image information includes the prohibited information associated with a piece of attribute information assigned to the received image information,
wherein if the prohibited information associated with the piece of attribute information is included in at least one page of a plurality of pages of the received image information,
the processor transmits the received image information outward in the error handling, excluding information representing the page including the prohibited information from the received image information.
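The error handling in this claim (forward the image, but drop any page carrying prohibited information registered for the image's attribute) can be sketched as a filter. Treating pages as strings and prohibited information as substrings is a simplifying assumption for illustration.

```python
def filter_pages(pages, attribute, prohibited_by_attribute):
    """Return the pages to transmit, excluding any page that contains a
    term registered as prohibited for the given attribute (sketch only;
    pages-as-strings is an assumption, not the patent's data model)."""
    prohibited = prohibited_by_attribute.get(attribute, [])
    return [page for page in pages
            if not any(term in page for term in prohibited)]
```

If no prohibited information is registered for the attribute, every page passes through unchanged.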

US Pat. No. 11,115,552

MULTIFUNCTION PERIPHERAL CAPABLE OF EXECUTING DOUBLE-SIDED READING PROCESS, METHOD OF CONTROLLING SAME, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...


1. An image processing apparatus including a scanner controller and a controller, the image processing apparatus comprising:a scanner configured to scan a first side and a second side of a document sheet and generate image data of the document sheet,
wherein the scanner controller includes a memory that stores the generated image data and the scanner controller is configured to transfer the image data to the controller;
wherein the controller includes an image processor configured to perform a predetermined image process on the image data transferred from the scanner controller,
wherein the image processing apparatus is configured to determine whether another predetermined image process that differs from the predetermined image process is being performed,
wherein, for transferring image data of the first side of the document sheet, the image processing apparatus is configured to set, in a case where it is determined that another predetermined image process is not being performed, a frequency of an image transfer clock to a first frequency, and set, in a case where it is determined by the image processing apparatus that another predetermined image process is being performed, the frequency of the image transfer clock to a second frequency lower than the first frequency, and
wherein, for transferring image data of the second side of the document sheet, the image processing apparatus is configured to set, in a case where it is determined by the image processing apparatus that another predetermined image process is not being performed, the frequency of the image transfer clock to the first frequency, and set, in a case where it is determined by the image processing apparatus that another predetermined image process is being performed, the frequency of the image transfer clock to the second frequency.
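The frequency selection the claim applies to both sides of the sheet reduces to one conditional: use the lower transfer-clock frequency whenever another image process is running. The concrete frequencies below are made-up placeholders.

```python
def select_transfer_clock(other_process_running,
                          f_high_hz=200_000_000, f_low_hz=100_000_000):
    """Pick the image-transfer clock frequency: drop to the lower rate while
    another image process is in progress (frequencies are assumptions)."""
    return f_low_hz if other_process_running else f_high_hz
```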

US Pat. No. 11,115,551

DEVICE FOR PERFORMING SECURE AND AUTOMATIC FLIPPING AND SCANNING OF DOCUMENTS

Tata Consultancy Services...


1. A device for automatic flipping and scanning of a plurality of answer sheets, the device comprising:a scanning stand configured to hold the plurality of answer sheets;
a page flipping arm connected to the scanning stand, wherein the page flipping arm is configured to flip one or more pages of the answer sheets;
one or more suction cups attached on the page flipping arm, the one or more suction cups being configured to hold a page for flipping;
a roller attached on the scanning stand configured to prevent flipping of more than one page at a time;
a controller to control the operation of the scanning stand, the page flipping arm and the roller;
a mobile device configured to be attached on a holder present in the scanning stand, wherein the mobile device comprises a mobile application in communication with the controller, the mobile application being configured to:command the controller to initiate the flipping of a particular page from amongst the one or more pages;
command, via the controller, the suction cup to hold the particular page and move the particular page up for flipping;
command, via the controller, the roller to avoid flipping of more than one page at a time;
command, via the controller, a camera present in the mobile device to scan the particular page;
send the scanned page to a cloud server; and
repetitively perform the steps from the “commanding the controller” step to the “sending of the scanned page” step, until a predefined number of pages of the plurality of answer sheets have been scanned.
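The claimed flip-and-scan cycle is a simple control loop: lift a page, engage the roller, scan with the mobile camera, upload, and repeat for a predefined page count. The sketch below assumes a hypothetical `controller` object with one method per command; none of these method names come from the patent.

```python
def scan_answer_sheets(controller, num_pages):
    """Illustrative control loop for the claimed flip-and-scan cycle.
    `controller` is a hypothetical object exposing one method per command."""
    for page in range(num_pages):
        controller.lift_page(page)         # suction cup holds and raises the page
        controller.engage_roller()         # roller prevents multi-page flips
        scan = controller.scan_page(page)  # mobile camera scans the page
        controller.upload(scan)            # send the scanned page to the cloud server
```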


US Pat. No. 11,115,550

MULTIFUNCTION PERIPHERAL

Brother Kogyo Kabushiki K...


1. A multifunction peripheral comprising:a conveyor configured to reel out a recording medium from a roll medium on which the recording medium is wound, and to convey the recording medium in a conveying direction;
a cutter configured to cut the recording medium to form a rear end of the recording medium in the conveying direction; and
a controller configured to control the conveyor, the recording of an image on the recording medium reeled out and conveyed by the conveyor, the cutter, and the reading of data of the image recorded on the recording medium, the controller being configured to perform:controlling recording a test image on the recording medium and controlling the cutter to cut the recording medium to form the rear end of the recording medium;
after the recording of the test image, controlling generating read data of the test image;
after the generation of the read data, generating correction data relating to a cutting position of the recording medium by the cutter based on a difference between a length obtained from the generated read data and an ideal value of the length, the length being from a predetermined part of the test image to the rear end of the recording medium in the conveying direction; and
after the generation of the correction data, main recording comprising controlling recording an image on the recording medium based on a recording command, and controlling the cutter to cut the recording medium in a cutting position based on image data included in the recording command and the correction data, to form the rear end of the recording medium.
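The correction-data step is arithmetic: the correction is the difference between the ideal test-image-to-rear-end length and the measured one, and later cut positions are offset by it. A minimal sketch, with millimetre units as an assumption:

```python
def cut_correction(measured_len_mm, ideal_len_mm):
    """Correction derived from the difference between the measured
    test-image-to-rear-end length and its ideal value (sketch)."""
    return ideal_len_mm - measured_len_mm

def corrected_cut_position(nominal_mm, correction_mm):
    """Cutting position for main recording, offset by the correction."""
    return nominal_mm + correction_mm
```

For example, if the read data shows the length came out 1.5 mm long, the correction shifts every subsequent cut 1.5 mm earlier.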


US Pat. No. 11,115,549

IMAGE SENSOR MOUNTING BRACKET AND IMAGE SENSOR DEVICE USING SAME

Mitsubishi Electric Corpo...


1. An image sensor mounting bracket for mounting an image sensor to an attachment target object, the image sensor mounting bracket comprising:a first fastening element to be fastened to a lateral surface of the image sensor, the lateral surface extending in a main scanning direction; and
a second fastening element to be fastened to the attachment target object, the second fastening element intersecting with the first fastening element, and extending in a sub scanning direction,
wherein
the first fastening element includes a first fastening surface that abuts the image sensor and has (i) a plurality of positioning pins to determine a position of mounting of the image sensor mounting bracket to the image sensor, the positioning pins being arranged in the main scanning direction on a straight line parallel to the main scanning direction, and (ii) a plurality of image sensor fastening through holes penetrating to a fastening member of the image sensor that fastens the image sensor mounting bracket,
the image sensor fastening through holes extend perpendicularly to the first fastening surface and are arranged in the main scanning direction on the straight line parallel to the main scanning direction in which the plurality of positioning pins are disposed,
the image sensor fastening through holes are arranged outside a portion of the first fastening surface between two positioning pins among the plurality of positioning pins in the main scanning direction, and
the second fastening element includes a second fastening surface that abuts the attachment target object and has a plurality of elongated through holes that are arranged in the main scanning direction and in the sub-scanning direction and each elongate in the sub-scanning direction.

US Pat. No. 11,115,548

IMAGE PROCESSING APPARATUS THAT EXECUTES JOB, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...


1. An image processing apparatus operable with a user interface including a display, the image processing apparatus displaying on the display a history button for calling a job setting used by a function, the image processing apparatus comprising:a display control unit configured to display information indicating at least part of a setting of a job executed using one function in a display area constituting the history button; and
an obtaining unit configured to obtain a display size of the display by communicating with the display included with the user interface connected to the image processing apparatus,
wherein the display control unit determines the number of setting items to be displayed in the display area as the information, based on the obtained display size.
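Determining the number of setting items from the obtained display size can be sketched as a capacity calculation. The per-item width and the cap are invented placeholders; the patent only claims that the count depends on the display size.

```python
def items_for_history_button(display_width_px, item_width_px=120, max_items=4):
    """How many job-setting items fit in the history button's display area
    (widths and cap are assumptions, not values from the patent)."""
    return min(max_items, display_width_px // item_width_px)
```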

US Pat. No. 11,115,547

IMAGE FORMING APPARATUS THAT NOTIFIES A SPECIFIC CONTROL UNIT OF OPERATION INFORMATION DEPENDING ON AN OPERATION SCREEN BEING DISPLAYED, AND CONTROL METHOD THEREFOR

Canon Kabushiki Kaisha, ...


1. An image forming apparatus, comprising:a display;
a first control unit configured to control a native program of the image forming apparatus;
a second control unit configured to control an extension application that is different from the native program;
a first memory area in which an operation screen of the native program controlled by the first control unit is rendered;
a second memory area in which an operation screen of the extension application controlled by the second control unit is rendered; and
an output processing unit configured to output at least a part of the operation screen of the native program to the display when an event related to outputting of the operation screen of the native program has occurred; and
a notification unit configured to, upon accepting a user operation via an operation screen, notify the first control unit or the second control unit of operation information based on the operation screen being output,
wherein upon accepting the user operation, the notification unit notifies the first control unit of the operation information when the operation screen of the native program is being output,
notifies the first control unit of the operation information when the operation screen of the extension application is being output and the user operation has been performed on a part of the operation screen of the native program that has been set in, and
notifies the second control unit of the operation information when the operation screen of the extension application is being output and the user operation has been performed on a part other than the part of the operation screen of the native program that has been set in, and
wherein the first control unit, the second control unit, the output processing unit, and the notification unit are implemented using one or more processors.
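The notification unit's dispatch rule can be sketched as a three-way decision: on the native screen, events go to the first control unit; on the extension screen, they go to the first unit only when the touch lands in the embedded native-program region, and otherwise to the second unit. The string labels below are illustrative.

```python
def route_operation(active_screen, touch_in_native_region):
    """Pick the control unit to notify of a touch event (sketch of the
    claimed rule; screen/unit names are illustrative labels)."""
    if active_screen == "native":
        return "first_control_unit"
    if touch_in_native_region:
        # Extension screen, but the touch hit the embedded native part.
        return "first_control_unit"
    return "second_control_unit"
```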

US Pat. No. 11,115,546

IMAGE PROCESSING APPARATUS, CONTROL METHOD TO EXECUTE PLURALITY OF FUNCTIONS FOR AN IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...


1. An image processing apparatus having a plurality of functions including a first function, the first function being a function of printing at the image processing apparatus a plurality of files selected from among files stored in an external server or the image processing apparatus, or transmitting the selected plurality of files to a transmission destination, the image processing apparatus comprising:at least one memory storing instructions; and
at least one processor that, when executing the instructions, causes the image processing apparatus to:
display, on a screen on which a first software key for activating functions of the image processing apparatus is displayed, a second software key that is to be displayed based on execution of processing of the first function and is to be used for instructing that the processing of the first function be executed again for the plurality of files in accordance with a setting content of the processing executed; and
refer to a plurality of files associated with the second software key,
wherein, even in a case where a file that is not able to be referred to is included in the selected plurality of files when the second software key is operated, the processing of the first function is executed for referred-to files among the selected plurality of files when the operation is performed.

US Pat. No. 11,115,545

IMAGE FORMING APPARATUS AND METHOD OF INFORMATION DISPLAY

SHARP KABUSHIKI KAISHA, ...


1. An image forming apparatus comprising:a display; and
a processor; wherein
the processor controls the display to display page images generated from input image data and a preview image of an image to be formed on a recording medium;
the processor receives an editing instruction given on said preview image; and
said editing instruction includes a first type of editing instruction which is not reflected on said page images, and a second type of editing instruction which is reflected on said page images.

US Pat. No. 11,115,544

INFORMATION PROCESSING APPARATUS, CHARACTER RECOGNITION METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM ENCODED WITH CHARACTER RECOGNITION INSTRUCTIONS

Konica Minolta, Inc., To...


1. An information processing apparatus comprising:a hardware processor that:accepts an image input from outside as an input image;
recognizes a plurality of characters in the input image and produces character information that is constituted by the characters and that includes a plurality of character strings;
detects, from the character information, first link information representing a network address of data;
corrects one or more of the character strings other than a character string that constitutes the first link information among the character strings, and generates correction information;
verifies whether the data specified by the first link information is accessible; and
confirms that access is permitted based on a whitelist that defines a network address to which access is permitted.
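The final confirmation step (access permitted only for network addresses on a whitelist) can be sketched with a host lookup. Matching on the URL's hostname is a simplifying assumption; the claim speaks only of a whitelist defining permitted network addresses.

```python
from urllib.parse import urlparse

def is_access_permitted(link, whitelist):
    """Check a detected link's host against a whitelist of permitted
    hosts (a minimal sketch of the claimed confirmation step)."""
    host = urlparse(link).hostname
    return host in whitelist
```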


US Pat. No. 11,115,543

IMAGE PROCESSING APPARATUS, AND CONTROL METHOD AND STORAGE MEDIUM THEREOF

Canon Kabushiki Kaisha, ...


1. An image processing apparatus comprising:an operation unit including an operation system configured to receive a touch operation performed by a user, and a display system configured to display an image;
a detection unit configured to detect a contact of an information storage medium;
an identification unit configured to identify a location in the image processing apparatus at which an abnormality has occurred; and
a controller configured to perform control;
wherein, in a case where the location at which the abnormality has occurred is identified to be only the operation system and the detection unit detects the contact of the information storage medium, the controller performs control to continue processing on the image processing apparatus on the display system, and
wherein, in a case where at least the identification unit identifies that the abnormality has occurred in the display system, the controller performs control at least to print a document indicating a method for continuing the processing on the image processing apparatus on the display system.

US Pat. No. 11,115,542

PAIR-THE-PLAN SYSTEM FOR DEVICES AND METHOD OF USE

Aeris Communications, Inc...


1. A computer-implemented method comprising:enrolling a second device enabled for connectivity to cellular or other wireless service in a cellular subscription and associated billing plan associated with a first device owned by a second user, wherein the enrollment includes providing an identifier for the second device to a cellular service provider associated with the first device by the second user, effectively adding the second device to the cellular subscription and associated billing plan associated with the first device; and

allowing the second user to use capabilities of the second device as governed by the cellular subscription and an associated billing plan associated with the first device; while the second device is also configured to allow a first user to use capabilities of the second device as governed by the cellular subscription and an associated billing plan of the first user's choice, wherein the second device is a multipurpose device shared simultaneously by the first user and the second user, and
wherein the first user is an enterprise user and the capabilities of the second device of interest to the first user comprise machine to machine (M2M) traffic, and
wherein the second user is a consumer user and the capabilities of the second device of interest to the second user comprise consumer traffic.


US Pat. No. 11,115,541

POST-TELECONFERENCE PLAYBACK USING NON-DESTRUCTIVE AUDIO TRANSPORT

Dolby Laboratories Licens...


1. A method for processing audio data, the method comprising:receiving, by an analysis engine, audio data corresponding to a teleconference recording involving a plurality of conference participants, the audio data comprising an individual uplink data packet stream for each of the plurality of conference participants, each of the individual uplink data packet streams including gain coefficient data and at least one of: (a) conference participant speech data from multiple endpoints, recorded separately or (b) conference participant speech data from a single endpoint corresponding to multiple conference participants and including information for identifying conference participant speech for each conference participant of the multiple conference participants, the gain coefficient data corresponding to suppressive gain coefficients applied during the teleconference;
analyzing, by the analysis engine, the audio data;
determining, by the analysis engine, proposed modifications to at least some of the gain coefficient data, the proposed modifications to be applied when the teleconference recording is played back; and
outputting indications of the proposed modifications, the indications of the proposed modifications corresponding to proposed selective changes to the attenuation of conference participant nuisance audio for playback, as compared to the attenuation of conference participant nuisance audio during the teleconference according to the suppressive gain coefficients, the conference participant nuisance audio corresponding to apparent non-voice activity.

US Pat. No. 11,115,540

COMPLEX COMPUTING NETWORK FOR PROVIDING AUDIO CONVERSATIONS AND ASSOCIATED VISUAL REPRESENTATIONS ON A MOBILE APPLICATION

Stereo App Limited, Ashf...


1. A method for selecting and initiating streaming of audio conversations, the method comprising:determining, using one or more computing device processors, a first user accesses a mobile application on a first mobile device associated with the first user;
selecting, using the one or more computing device processors, an audio conversation for the first user, wherein the audio conversation involves at least a second user who accesses the mobile application on a second mobile device associated with the second user, and a third user located remotely from the second user who accesses the mobile application on a third mobile device associated with the third user, wherein the audio conversation is selected for the first user based on at least one of first user information associated with the first user, second user information associated with the second user, or conversation information associated with the audio conversation;
streaming, using the one or more computing device processors, the audio conversation to the mobile application on the first mobile device;
recording, using the one or more computing device processors, the audio conversation;
transmitting, using the one or more computing device processors, to the first mobile device for visual display, during the streaming of the audio conversation, on a first user interface of the mobile application on the first mobile device, a selectable first visual representation of the second user not comprising a first video of the second user and, simultaneously with the visual display of the selectable first visual representation on the first user interface of the mobile application on the first mobile device, a selectable second visual representation of the third user not comprising a second video of the third user;
adding, using the one or more computing device processors, the conversation information associated with the audio conversation to user profile information associated with the second user;
receiving, using the one or more computing device processors, a selection of the selectable first visual representation of the second user not comprising the first video of the second user; and
in response to receiving the selection of the selectable first visual representation of the second user not comprising the first video of the second user, transmitting, using the one or more computing device processors, to the first mobile device for visual display, during the streaming of the audio conversation, on a second user interface, different from the first user interface, of the mobile application on the first mobile device, the user profile information associated with the second user;
wherein the conversation information, comprised in the user profile information, is displayed on the second user interface or a third user interface, different from the first user interface, of the mobile application on the first mobile device,
wherein the user profile information associated with the second user is editable by the second user during the audio conversation involving the at least the second user and the third user,
wherein second user profile information associated with the first user is editable by the first user during the audio conversation involving the at least the second user and the third user being streamed to the mobile application on the first mobile device, and
wherein the audio conversation involving the at least the second user and the third user continues to stream when the second user accesses, during the audio conversation, a second mobile application on the second mobile device of the second user.

US Pat. No. 11,115,539

SMART VOICE SYSTEM, METHOD OF ADJUSTING OUTPUT VOICE AND COMPUTER READABLE MEMORY MEDIUM

PIXART IMAGING INC., Hsi...


1. A smart voice system, comprising:a data receiving module, used for receiving a user's hearing evaluation data, and acquiring a user's hearing parameter according to the hearing evaluation data, wherein the user's hearing parameter comprises, for each of a plurality of different frequencies of sound, minimum volume data that the user can hear at that frequency;
a voice message receiving module, which is used for receiving a voice message issued by the user;
a voice response module comprising:
an issuing unit, which is used for sending the voice message to a voice server to generate an original response voice message after the voice server analyzes the voice message;
a response receiving unit, which is used for receiving the original response voice message from the voice server;
a frequency adjustment unit, which is used for adjusting a frequency of the original response voice message according to the user's hearing parameter to generate a response voice message based on the user's hearing parameter; and
a voice message output module, which is used for outputting the response voice message.
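One plausible reading of the frequency adjustment unit is a per-band gain derived from the user's minimum audible volume at each frequency: bands the user hears poorly are boosted until they clear the threshold. The dB figures and the flat playback-level assumption below are illustrative, not from the patent.

```python
def band_gains(hearing_threshold_db, playback_db=40.0):
    """Per-frequency gain (dB) so each band reaches the user's audible
    level, assuming a flat nominal playback level (an assumption made
    for illustration only)."""
    return {freq: max(0.0, threshold - playback_db)
            for freq, threshold in hearing_threshold_db.items()}
```

A band already audible at the nominal level gets no boost; a 4 kHz band the user only hears at 55 dB would be boosted by 15 dB.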

US Pat. No. 11,115,538

AGENT EFFICIENCY BASED ON REAL-TIME DESKTOP ANALYTICS

Avaya Inc., Santa Clara,...


1. A computing system comprising:one or more processors; and
a non-transitory computer-readable medium storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to:receive state information from an agent computing system via a network connection;
generate a contact assignment by assigning a contact to the agent computing system, wherein the contact assignment is generated based at least in part on the state information received from the agent computing system; and
in response to receiving the state information, transmit the contact assignment to a contact center computing system.


US Pat. No. 11,115,537

TEMPLATE-BASED MANAGEMENT OF TELECOMMUNICATIONS SERVICES

8x8, Inc., Campbell, CA ...


1. A data communications server comprising:one or more computer processor circuits coupled to memory circuits and configured to interface with remotely-situated client entities using a first programming language that is to use a message exchange protocol between the server and data sources, the server configured as a private branch exchange (PBX) and including a call control engine, including circuitry, that is configured to:identify, in response to receipt of a call associated with one of the client entities over the message exchange protocol, at least one call control template written in a second programming language that is different from the first programming language, that provides client-specific commands associated with said one of the client entities for directing call processing functions carried out on behalf of said one of the client entities; and
control call routing by the PBX and for the call by:executing the call control template written in the second programming language to identify at least one data source that corresponds to a call property for the call;
retrieving data from the data source; and
implementing one or more call processing functions specified by the call control template as being conditional upon the retrieved data and based on client-specific data retrieved from a configuration database.



US Pat. No. 11,115,536

DYNAMIC PRECISION QUEUE ROUTING

United Services Automobil...


1. A method of dynamically modifying routing steps in a precision queuing call routing system, the method comprising:defining a precision queue, wherein the precision queue is associated with two or more routing steps, wherein each routing step comprises:
an expression, of one or more terms, that evaluates to a true or false based on features of an agent, and
a wait time criteria specifying an amount of time that a call can wait for the expression of that step to evaluate to true before proceeding to a next step; and

wherein at least one of the two or more routing steps includes a consider-if condition, wherein the consider-if condition specifies a formula which, when the formula evaluates to a false result, causes call routing to proceed to the next step without waiting for the amount of time specified by the wait time criteria;

in response to receiving a call from a caller, determining attributes of the agents needed to address the call based at least in part on a time of day, a time zone in which the call is received, and the caller's previous interactions with a call center;
selecting the precision queue, from a plurality of precision queues, based on the attributes needed to address the call;
detecting a modification event that modifies a particular routing step, of the two or more routing steps, in the selected precision queue, wherein the modification event causes a change to the wait time criteria defined for the particular routing step; and
routing the call based on the two or more routing steps, including the modified routing step by:evaluating the formula of the consider-if condition, of the at least one of the two or more routing steps, to have a false result and, in response, proceeding from the at least one of the two or more routing steps to a second of the two or more routing steps without waiting for the amount of time specified by the wait time criteria of the at least one of the two or more routing steps.
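The routing behavior with a consider-if condition can be sketched as a walk over the steps: a step whose consider-if formula is false is skipped immediately, with no wait; otherwise the step's expression is evaluated against the agents. The dict-based step representation is an assumption for illustration, and the real system's waiting behavior is only noted in a comment.

```python
def route_call(steps, agent_matches):
    """Walk the precision-queue routing steps: skip a step at once when its
    consider-if formula evaluates false; otherwise try its expression
    (simplified sketch; waiting is elided)."""
    for step in steps:
        if step.get("consider_if") is not None and not step["consider_if"]():
            continue  # false consider-if: proceed without waiting
        if agent_matches(step["expression"]):
            return step["name"]
        # In a real system the call would wait up to the step's wait time
        # here for the expression to become true before moving on.
    return None
```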


US Pat. No. 11,115,535

TECHNIQUES FOR SHARING CONTROL OF ASSIGNING TASKS BETWEEN AN EXTERNAL PAIRING SYSTEM AND A TASK ASSIGNMENT SYSTEM WITH AN INTERNAL PAIRING SYSTEM

Afiniti, Ltd., Hamilton ...


1. A method comprising:determining, by at least one computer processor communicatively coupled to and configured to operate in a contact center system, a first contact waiting in the contact center system;
determining, by the at least one computer processor, a second contact waiting in the contact center system;
pairing, by the at least one computer processor, the second contact based on information about the first contact; and
after pairing the second contact, pairing, by the at least one computer processor, the first contact based on information about the second contact,
wherein the information about the second contact comprises information other than the pairing of the second contact.

US Pat. No. 11,115,534

TECHNIQUES FOR BEHAVIORAL PAIRING IN A CONTACT CENTER SYSTEM

Afiniti, Ltd., Hamilton ...


1. A method comprising:determining, by at least one computer processor communicatively coupled to and configured to operate in a contact center system, a set of agents available to be connected to a contact, the set of agents comprising a first agent, a second agent, and a third agent; and
selecting, by the at least one computer processor, according to a pairing strategy, the first agent for pairing to the contact to optimize a performance metric;
wherein the first agent has a worst expected performance among the set of agents for the contact according to the performance metric;
wherein the first agent has been lagging the least in a fairness metric, wherein the fairness metric is based on a measurement of utilization of an agent over time;
wherein the third agent has been lagging the most in the fairness metric;
wherein unavailability of the first agent would cause the pairing strategy to select the second agent instead of the third agent; and
wherein the first agent has not previously interacted with the contact.
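The fairness-driven selection in this claim, where the agent lagging least in utilization is chosen even if its expected performance for the contact is worst, can be sketched as below. The lag definition (fair share minus measured utilization) and all names are illustrative assumptions:

```python
def select_agent(agents, fair_share):
    """Among available agents, select the one lagging the least in the
    fairness metric, where lag is how far an agent's measured
    utilization over time trails its fair share of work."""
    return min(agents, key=lambda a: fair_share - a["utilization"])

agents = [
    {"id": "A1", "utilization": 0.90},  # lagging least in fairness
    {"id": "A2", "utilization": 0.55},
    {"id": "A3", "utilization": 0.20},  # lagging most in fairness
]
```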

US Pat. No. 11,115,533

UNIFIED CROSS CHANNEL COMMUNICATIONS

VIRTUAL HOLD TECHNOLOGY S...


1. A method for unified cross-channel communication, comprising the steps of:
connecting a virtual communication interceptor to a plurality of communication channels between user devices and call center agents;
sending, from the plurality of communication channels, a first communication of a first type using a first communication channel to a first computing device, wherein the first computing device is adapted to receive communications of the first type and receives the first communication from a first user device;
providing interaction information pertaining to the first communication to the virtual communication interceptor, wherein the interaction information includes at least one identifier associated with a user of the first user device, and wherein the virtual communication interceptor stores at least some of the received interaction information;
transmitting at least some of the stored interaction information to a second computing device, wherein the second computing device is adapted to process a second communication of a second type in a second communication channel; wherein the second computing device is associated with a call center agent;
monitoring the plurality of communication channels for changes in the interaction information associated to the user of the first user device;
storing any changes in the interaction information associated with the user of the first user device; and
transmitting any changes in the interaction information associated with the user of the first user device to the second computing device.
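The interceptor's store-and-propagate behavior in this claim can be sketched as follows. The class, its method names, and the inbox-list stand-in for a second computing device are illustrative assumptions, not Virtual Hold Technology's implementation:

```python
class CommunicationInterceptor:
    """Keeps interaction information keyed by user identifier and
    forwards every change to devices on other channels that are
    handling the same user."""

    def __init__(self):
        self.store = {}      # user id -> latest interaction info
        self.listeners = {}  # user id -> device inboxes to notify

    def attach(self, user_id, device_inbox):
        self.listeners.setdefault(user_id, []).append(device_inbox)

    def record(self, user_id, info):
        # Store only actual changes, then transmit them onward.
        if self.store.get(user_id) != info:
            self.store[user_id] = dict(info)
            for inbox in self.listeners.get(user_id, []):
                inbox.append(dict(info))

interceptor = CommunicationInterceptor()
agent_desktop = []  # stands in for the second computing device
interceptor.attach("user-1", agent_desktop)
interceptor.record("user-1", {"channel": "voice", "intent": "billing"})
interceptor.record("user-1", {"channel": "chat", "intent": "billing"})
```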

US Pat. No. 11,115,532

VISUAL ENGAGEMENT USING AUTOMATICALLY DYNAMICALLY SELECTED VISUALIZATION MEDIUMS

Glance Networks, Inc., W...


1. A method of implementing a visualization session, comprising:
receiving, by an agent, a contact event from a customer using an application client;
instructing the customer, by the agent, to start a visualization session at a visualization system;
starting the visualization session at the visualization system by the customer; and
authenticating by the agent to the visualization system to join the visualization session;
wherein the customer is not known at the visualization system prior to implementing the step of starting the visualization session at the visualization system;
determining that the application client in use by the customer is a hybrid application client, in which a first portion of a user interface of the application client is natively drawn using system calls to an operating system and a second portion of the user interface is output via a browser;
using screen share technology, by the customer, to capture and transmit the first portion of the user interface on a visualization session; and
using co-browse technology, by the customer, to capture and transmit the second portion of the user interface on the visualization session;
wherein the agent does not know the type of visualization technology to be used with the customer prior to instructing the customer to start the visualization session.
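The hybrid-client split in this claim, where natively drawn regions go over screen share and browser-rendered regions over co-browse, can be sketched as a capture plan. The function and region dictionaries are illustrative assumptions:

```python
def plan_capture(ui_regions):
    """Route each region of a hybrid client's interface to the capture
    technology that can see it: natively drawn regions over screen
    share, browser-rendered regions over co-browse."""
    plan = {"screen_share": [], "co_browse": []}
    for region in ui_regions:
        channel = "screen_share" if region["native"] else "co_browse"
        plan[channel].append(region["name"])
    return plan

regions = [
    {"name": "toolbar", "native": True},        # drawn via OS calls
    {"name": "checkout_page", "native": False}, # output via browser
]
```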

US Pat. No. 11,115,530

INTEGRATION OF HUMAN AGENT AND AUTOMATED TOOLS FOR INTERACTIVE VOICE RESPONSE (IVR) SYSTEMS

Bank of America Corporati...


14. A method for leveraging artificial intelligence to integrate human and machine responses within an interactive voice response (“IVR”) system, the method comprising:
receiving an initiation of a conversation with a human caller and an artificial intelligence (“AI”) engine, the conversation comprising a request from the human caller;
providing, to the AI engine, voice inputs generated by the human caller;
tracking:
sequences of utterances extracted from the voice inputs; and
responses provided by the AI engine;

automatically transferring the human caller and the voice inputs to a human agent when:
a sequence of utterances from the sequences of utterances is determined to be repeated more than once; and
a satisfactory response to the request has not yet been provided by the AI engine, the satisfactory response comprising a response that triggers the human caller's delay of a second pre-determined amount of time;

continuing the conversation with the human caller and the human agent;
based off of the continuing conversation and the transferred voice inputs, determining, by the human agent, a type of request;
receiving a selected response provided by the human agent based on the determination of the type of request;
transferring back the human caller to the AI engine to provide the selected response to the human caller;
following the transferring back of the human caller to the AI engine to provide the selected response, continuously providing a voice-to-text rendition of the continuing conversation on a display screen associated with the human agent; and
wherein, when the selected response provided by the AI engine to the human caller is not the satisfactory response, changing a background-color on the display screen to alert the human agent.
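The transfer condition in this claim can be sketched as below. The function name is an assumption, and "repeated more than once" is read here as an utterance sequence appearing more than twice; the claim does not pin that reading down:

```python
from collections import Counter

def should_transfer(utterance_sequences, ai_response_satisfactory):
    """Transfer the caller to a human agent when any tracked utterance
    sequence has been repeated more than once (i.e. seen more than
    twice) and the AI has not yet given a satisfactory response."""
    counts = Counter(tuple(seq) for seq in utterance_sequences)
    looping = any(n > 2 for n in counts.values())
    return looping and not ai_response_satisfactory

log = [("check", "balance"), ("check", "balance"), ("check", "balance")]
```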

US Pat. No. 11,115,529

SYSTEM AND METHOD FOR PROVIDING AND MANAGING THIRD PARTY CONTENT WITH CALL FUNCTIONALITY

Google LLC, Mountain Vie...


1. A computer-implemented method of extracting contact information from a resource and associating the contact information with a content item, comprising:
receiving, by a data processing system, a content item and uniform resource locator (URL) from a content provider computing device, the URL identifying a resource;
loading, by the data processing system, the resource identified by the received URL;
detecting, by the data processing system, a plurality of contact information from the loaded resource, the plurality of contact information comprising a first contact information and a second contact information;
determining, by the data processing system prior to associating contact information with the content item, a prominence score for each contact information of the plurality of contact information by analyzing an object tree of the resource or analyzing a result of optical character recognition of the resource;
selecting, by the data processing system, the first contact information and the second contact information of the plurality of contact information based on the calculated prominence scores;
associating, by the data processing system, the selected first contact information and the second contact information with the content item;
receiving a request from a computing device, the request including location information for the computing device;
selecting, by a data processing system, one of the first contact information or the second contact information responsive to the location information;
modifying, responsive to the request, the content item by embedding the one of the first contact information or the second contact information selected based on the location information provided with the request and the calculated prominence scores with a selectable button for the content item; and
serving the content item with the selectable button to the computing device responsive to the request, the computing device contacting the content provider using the selected one of the first contact information or the second contact information responsive to a selection of the selectable button.
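The two-stage selection in this claim (keep the most prominent contact entries, then pick the one matching the requester's location) can be sketched as below. The prominence values, region codes, and fallback rule are illustrative assumptions:

```python
def serve_contact(contact_infos, requester_region):
    """Keep the two most prominent contact entries detected on the
    page, then serve the one matching the requester's location."""
    top_two = sorted(contact_infos, key=lambda c: c["prominence"],
                     reverse=True)[:2]
    for info in top_two:
        if info["region"] == requester_region:
            return info["value"]
    return top_two[0]["value"]  # fall back to the most prominent entry

page_contacts = [
    {"value": "+1-555-0100", "region": "US", "prominence": 0.9},
    {"value": "+44-20-5550", "region": "UK", "prominence": 0.7},
    {"value": "+1-555-0199", "region": "US", "prominence": 0.2},
]
```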

US Pat. No. 11,115,528

CALL CONTROL SERVICE

Amazon Technologies, Inc....


1. A system comprising:
at least one processor; and
a memory device including instructions that, when executed by the at least one processor, cause the system to:
receive, via a content designer user interface (UI) hosted in a service provider environment and from a device associated with a call recipient:
a first priority designation of a first type for a first caller and a second priority designation of a second type, different than the first type, for a second caller;
a first interactive agent assigned to the first priority designation and a second interactive agent assigned to the second priority designation; and
a subject matter associated with calls;

receive a call at a call control service hosted in the service provider environment, wherein the call control service is managed by a computing service provider;
identify the first caller and the call recipient using addressing information for the call, wherein the call recipient is registered with the call control service;
determine that the call is associated with the subject matter and that the call recipient is unavailable to receive the call;
obtain, from an agent linking profile that includes priority designations assigned to callers by the call recipient, the first priority designation assigned to the first caller;
identify the interactive agent linked to the subject matter and the first priority designation assigned to the first caller, wherein the interactive agent is to provide a virtual call assistant personalized to the subject matter and the first priority designation assigned to the first caller; and
invoke the interactive agent linked to the subject matter and the first priority designation.
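The two-key lookup this claim performs, from the caller's assigned priority designation and the call's subject matter to a specific interactive agent, can be sketched with plain dictionaries. The agent names and profile contents are illustrative assumptions:

```python
def invoke_agent(agent_links, linking_profile, caller, subject):
    """Resolve which interactive agent to invoke for an unavailable
    recipient: look up the priority designation the recipient assigned
    to this caller, then the agent registered for that priority and
    subject matter."""
    priority = linking_profile[caller]        # agent linking profile
    return agent_links[(subject, priority)]

agent_links = {
    ("support", "high"): "concierge_bot",
    ("support", "normal"): "triage_bot",
}
linking_profile = {"alice": "high", "bob": "normal"}
```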


US Pat. No. 11,115,527

SYSTEM AND METHOD FOR IDENTIFYING UNWANTED COMMUNICATIONS USING COMMUNICATION FINGERPRINTING

YouMail, Inc., Irvine, C...


1. A method, in a communication environment including a data processing system comprising a processor and a memory, for identifying communicators as wanted or unwanted based on messages from such communicators, the method comprising:
receiving, by the data processing system, an inbound communication from a communicator;
comparing, by the data processing system, the inbound communication to fingerprints stored in a database accessible to the data processing system, the fingerprints including data representative of content features from communications associated with unwanted communicators;
determining, by the data processing system, at least one match of the inbound communication to the fingerprints;
associating, by the data processing system, a communicator identifier with the communicator based on the at least one match to the fingerprints, the communicator identifier including a descriptive name associated with the fingerprint; and
providing, by the data processing system, the communicator identifier to recipients of future communications from the communicator.
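The fingerprint comparison in this claim can be sketched as a feature-overlap match. The set-based features, the overlap threshold, and the sample fingerprint are illustrative assumptions, not YouMail's fingerprinting method:

```python
def identify_communicator(message_features, fingerprints, min_overlap=2):
    """Compare content features of an inbound communication against
    stored fingerprints of unwanted communicators; return the best
    matching fingerprint's descriptive name, or None."""
    best_name, best_overlap = None, 0
    for fp in fingerprints:
        overlap = len(message_features & fp["features"])
        if overlap > best_overlap:
            best_name, best_overlap = fp["name"], overlap
    return best_name if best_overlap >= min_overlap else None

fingerprints = [
    {"name": "Warranty Robocaller",
     "features": {"car warranty", "final notice", "press 1"}},
]
```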

US Pat. No. 11,115,526

REAL TIME SIGN LANGUAGE CONVERSION FOR COMMUNICATION IN A CONTACT CENTER

Avaya Inc., Santa Clara,...


1. A system for real-time sign language translation, comprising:
a communication interface configured to receive a video image of a customer utilizing a customer communication device engaged in an interaction via a network with a human agent utilizing an agent communication device; and
a processor having an accessible memory; and
wherein the processor is configured to:
receive a video image of the customer;
determine a first sign language gesture from the received video image of the customer, the determining comprising prioritizing processing to detect words previously determined to be more likely to be encountered;
translate the determined first sign language gesture into speech; and
present the agent communication device with the interaction, wherein the interaction further comprises the speech.


US Pat. No. 11,115,525

VENUE OWNER-CONTROLLABLE PER-VENUE SERVICE CONFIGURATION

NOKIA SOLUTIONS AND NETWO...


1. A method, comprising:
configuring a per-venue service model of services of a network operator for terminal devices of subscribers of the network operator on the basis of an input by a venue owner of a venue, the per-venue service model including a configurable parameter for defining a venue-based service for terminal devices of subscribers residing in a specified venue area of the venue, the configurable parameter comprising a time period in which the per-venue service model is applicable, a content, a source for service provision in accordance with the per-venue service model, and a control scheme for service provision in accordance with the per-venue service model; and
causing implementation of the configured per-venue service model for service provision for at least one of a network device of the network operator and a terminal device of a subscriber of the network operator after the per-venue service model is verified to be applicable to a received service for the terminal device.

US Pat. No. 11,115,524

COMMUNICATION DEVICE


1. A communication device, which is a handheld device operable to implement wireless communication, comprising:
an input device;
a display;
an antenna;
a wireless communication implementer, wherein wireless communication is implemented via said antenna;
an audiovisual playback implementer, wherein the playback process of an audiovisual data is initiated in response to a first user input, and the playback process of said audiovisual data is stopped in response to a second user input;
a 1st communication device wireless updating data implementer, wherein a 1st communication device wireless updating data is received via said antenna which updates a communication device battery controller which controls the communication device battery included in said communication device;
a 2nd communication device wireless updating data implementer, wherein a 2nd communication device wireless updating data is received via said antenna which updates a communication device input device controller which controls said input device of said communication device;
a 3rd communication device wireless updating data implementer, wherein a 3rd communication device wireless updating data is received via said antenna which updates a communication device display controller which controls said display of said communication device; and
a 4th communication device wireless updating data implementer, wherein a 4th communication device wireless updating data is received via said antenna which updates a shortcut icon software program which is the software program operable to be executed by said communication device, wherein said software program is indicated on said display by a specific shortcut icon which is operable to be selected by the user;
wherein said communication device battery controller, which controls the communication device battery included in said communication device, is updated by said 1st communication device wireless updating data;
wherein said communication device input device controller, which controls said input device of said communication device, is updated by said 2nd communication device wireless updating data;
wherein said communication device display controller, which controls said display of said communication device, is updated by said 3rd communication device wireless updating data; and
wherein said shortcut icon software program, which is the software program operable to be executed by said communication device, wherein said software program is indicated on said display by a specific shortcut icon which is operable to be selected by the user, is updated by said 4th communication device wireless updating data.

US Pat. No. 11,115,523

METHOD FOR SELECTIVELY ACCEPTING PHONE CALLS AND TEXT MESSAGES


1. A method comprising:
(a) receiving, by a first call handling device, via a service provider, a call including an identification code associated with a second call handling device initiating the call, wherein each call handling device includes one or more processors and memory;
(b) determining, by the first call handling device, if there is a match between the identification code included with the call and a previously recorded identification code available to the first call handling device;
(c) in response to determining no match in step (b), causing the first call handling device to not activate a user notification means of the first call handling device and causing the second call handling device to receive an audio prompt requesting entry of a passcode into the second call handling device;
(d) receiving, by the first call handling device, the passcode entered into the second call handling device in response to step (c);
(e) determining, by the first call handling device, if there is a match between the passcode received in step (d) and a previously recorded passcode available to the first call handling device; and
(f) in response to determining a match in step (e), the first call handling device activating the user notification means of the first call handling device,
wherein all or a portion of the identification code included with the call is compared for a match in step (b) with previously recorded identification codes included in a plurality of cascading lists, one list at time, wherein the plurality of cascading lists includes a contact list including identification codes and at least one of the following lists: a wildcard list including identification codes, a previously dialed list including identification codes, and an entered password list including identification codes, wherein:
each identification code included in the contact list is one of a Caller ID or a phone number;
each identification code included in the wildcard list is a portion of a phone number;
each identification code included in the previously dialed list is a phone number that was previously dialed at the first call handling device; and
each identification code included in the entered password list is a Caller ID or a phone number of a second call handling device in which was previously entered a passcode that matched a previously recorded passcode available to the first call handling device.
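The cascading-list comparison in this claim, where the identification code is checked against one list at a time and wildcard entries match only a portion of the number, can be sketched as below. The list contents and the substring rule for wildcards are illustrative assumptions:

```python
def match_identification_code(code, cascading_lists):
    """Check the caller's identification code against each list in
    turn. Wildcard entries match on a portion of the number; every
    other list requires an exact match. Returns the first list hit."""
    for list_name, entries in cascading_lists:
        for entry in entries:
            matched = (entry in code) if list_name == "wildcard" else (entry == code)
            if matched:
                return list_name
    return None

lists = [
    ("contacts", {"5551234567"}),
    ("wildcard", {"555999"}),           # area/exchange fragment
    ("previously_dialed", {"5550000000"}),
]
```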

US Pat. No. 11,115,522

CUSTOMIZATION OF CNAM INFORMATION FOR CALLS PLACED TO MOBILE DEVICES

FIRST ORION CORP., Littl...


1. A method comprising:
identifying, via a content delivery device, a call from a calling device destined for a called device;
identifying, via the content delivery device, a calling device number assigned to the calling device;
receiving, via the content delivery device, a first caller identification name (CNAM) or a second CNAM from a call content application programming interface (API), where one of the first CNAM or the second CNAM is to be assigned to the calling device number and provided to a called device;
determining whether to assign the first CNAM or the second CNAM to the calling device number based on contextual information associated with a previously logged transaction identifying transaction information linked to the called device number and the calling device number;
assigning the first CNAM or the second CNAM to a push API based on the identified contextual information;
receiving, via the content delivery device, the assignment of one of the first CNAM or the second CNAM; and
communicating, via the content delivery device, the assigned CNAM to a carrier network for communication to the called device.
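The CNAM choice in this claim, driven by whether a previously logged transaction links the calling and called numbers, can be sketched as below. The transaction-log shape and the rule that a linked history selects the first CNAM are illustrative assumptions:

```python
def assign_cnam(first_cnam, second_cnam, transactions, caller, callee):
    """Pick which CNAM to attach to the calling number: the first CNAM
    when a previously logged transaction links the two numbers,
    otherwise the second."""
    linked = any(t["caller"] == caller and t["callee"] == callee
                 for t in transactions)
    return first_cnam if linked else second_cnam

log = [{"caller": "5550100", "callee": "5550200"}]
```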

US Pat. No. 11,115,521

SYSTEMS AND METHODS FOR AUTHENTICATION AND FRAUD DETECTION

VERINT AMERICAS INC., Al...


1. A system for authenticating calls and for preventing fraud comprising:
one or more processors;
a memory communicably coupled to the one or more processors and storing:
an analysis module including instructions that when executed by the one or more processors cause the one or more processors to:
receive a call through a first channel, wherein the call is associated with a customer and a speaker;
based on one or more characteristics of the received call, the customer, or the channel, assign a score to the call;
determine if the score satisfies a threshold; and
if the score does not satisfy the threshold, flag the call as a fraudulent call;

a biometrics module including instructions that when executed by the one or more processors cause the one or more processors to:
analyze voice data associated with the call to determine whether the speaker is a fraudulent speaker; and
if the speaker is a fraudulent speaker, flag the call as a fraudulent call; and

an authentication module including instructions that when executed by the one or more processors cause the one or more processors to:
determine that no voiceprints are associated with the customer; and
in response to the determination that no voiceprints are associated with the customer:
generate a first code;
retrieve a profile associated with the customer;
send the first code to the customer through a second channel indicated by the profile associated with the customer;
receive a second code through the first channel;
determine if the first code matches the second code; and
if it is determined that the first code matches the second code, flag the call as an authenticated call.
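The authentication module's fallback for customers with no voiceprint, a one-time code sent over a second channel and read back on the first, can be sketched as below. The function names, the six-digit code format, and the callback interfaces are illustrative assumptions:

```python
import secrets

def verify_without_voiceprint(profile, deliver, collect):
    """When no voiceprint exists for the customer: generate a one-time
    code, send it over the second channel named in the customer's
    profile, and compare it with the code received back on the call."""
    code = f"{secrets.randbelow(10**6):06d}"
    deliver(profile["second_channel"], code)  # e.g. SMS to phone on file
    return "authenticated" if collect() == code else "unverified"

sent = {}
deliver = lambda channel, code: sent.update({channel: code})
```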
US Pat. No. 11,115,520

SIGNAL DISCOVERY USING ARTIFICIAL INTELLIGENCE MODELS

Invoca, Inc., Santa Barb...


1. A method comprising:
receiving call transcript data comprising an electronic digital representation of a verbal transcription of a current call between a first person of a first person type and a second person of a second person type;
splitting the call transcript data into first person type data comprising words spoken by the first person in the current call and second person type data comprising words spoken by the second person type in the current call;
storing a topic model, the topic model simultaneously modeling the first person type data as a function of a first probability distribution of words used by the first person type, over a plurality of calls, for one or more topics discussed in the current call and the second person type data as a function of a second probability distribution of words used by the second person type over the plurality of calls, for the one or more topics discussed in the current call, both the first probability distribution of words used by the first person type for the one or more topics and the second probability distribution of words used by the second person type for the one or more topics being modeled as a function of a third probability distribution of words for the one or more topics, the third probability distribution of words representing an overall probability distribution of words for each topic of the one or more topics discussed in the current call;
using the topic model, determining a topic of the call;
storing the call transcript data with additional data indicating the topic of the call.
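The speaker split and topic determination in this claim can be sketched as below. The per-role split and a summed log-probability score over fixed topic word distributions are a simplified stand-in for the claim's jointly modeled per-role distributions; all names and the sample distributions are illustrative assumptions:

```python
import math

def split_by_role(turns):
    """Split call transcript data into the two per-role word streams."""
    first, second = [], []
    for role, text in turns:
        (first if role == "agent" else second).extend(text.lower().split())
    return first, second

def likely_topic(words, topic_word_probs):
    """Score each candidate topic by the summed log-probability of the
    observed words under that topic's word distribution; return the
    highest-scoring topic."""
    def score(topic):
        dist = topic_word_probs[topic]
        return sum(math.log(dist.get(w, 1e-6)) for w in words)
    return max(topic_word_probs, key=score)

topics = {
    "billing": {"invoice": 0.5, "refund": 0.4},
    "support": {"reset": 0.5, "password": 0.4},
}
turns = [("agent", "how can I help"),
         ("caller", "I need a refund on my invoice")]
```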