US Pat. No. 11,032,626

METHOD FOR PROVIDING ADDITIONAL INFORMATION ASSOCIATED WITH AN OBJECT VISUALLY PRESENT IN MEDIA CONTENT

1. A method for creating interactive media content having additional information associated with an object visually present in media content, said method comprising the steps of:
receiving media content in an authoring tool that is capable of receiving input from an author to create interactivity for objects visually present in the media content when the media content is played;
defining a default resolution to import the media content into the authoring tool;
scaling the media content within the authoring tool to display at the default resolution so that the authoring tool and the media content have a 1:1 correspondence for a coordinate grid;
establishing an interactive element corresponding to the object visually present in the media content, the interactive element being defined by element parameters, the element parameters comprising a plurality of (X, Y) coordinates that define a shape and an object time corresponding to a duration that the shape is present at the coordinates;
establishing object metadata for the object and associating the object metadata with the interactive element;
creating a portable package for distribution that includes the default resolution, the element parameters and the object metadata such that when the portable package is accessed through a customer viewer, the object metadata will retrieve the additional information about the object.
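As a concrete illustration of the packaging step in this claim, a minimal sketch in Python (all names here, such as `build_package` and `scale_factor`, are hypothetical and not from the patent):

```python
# Hypothetical sketch of the claimed packaging step; the 1:1 coordinate
# correspondence comes from scaling native coordinates to the default grid.

def scale_factor(native_res, default_res):
    """Factor mapping native pixel coordinates onto the default 1:1 grid."""
    return default_res[0] / native_res[0]

def build_package(default_res, elements, metadata):
    """Bundle what a customer viewer needs to resolve interactive hotspots."""
    return {
        "default_resolution": default_res,
        "elements": elements,          # (X, Y) shapes plus object times
        "object_metadata": metadata,   # keys used to fetch additional info
    }

element = {
    "shape": [(100, 100), (200, 100), (200, 180), (100, 180)],  # polygon
    "object_time": (12.0, 18.5),  # seconds during which the shape is present
}
pkg = build_package((1280, 720), [element],
                    {"obj-1": {"info_url": "https://example.com/obj-1"}})
```

A viewer that receives `pkg` can rescale incoming click coordinates by the same factor and look up `object_metadata` for the hit shape.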

US Pat. No. 11,032,625

METHOD AND APPARATUS FOR FEEDBACK-BASED PIRACY DETECTION

IRDETO B.V., Hoofddorp (...

1. A method for detecting piracy of multiple content streams respectively associated with receivers, the method comprising:
detecting potential unauthorized use of a content stream having a first predetermined watermark pattern defined by a sequence of watermarks applied to successive elements of the content stream by a watermarking system, wherein the predetermined watermark pattern has been transmitted to multiple nodes in a first group;
dividing the first group into at least two subgroups of nodes each subgroup being smaller than the first group;
transmitting information regarding the at least two subgroups to the watermarking system to thereby allow the watermarking system to watermark further elements of the content stream with a second predetermined watermark pattern sent to a first subgroup of nodes of the at least two subgroups and a third predetermined watermark pattern sent to a second subgroup of nodes of the at least two subgroups, wherein the second predetermined watermark pattern is different from the third predetermined watermark pattern; and
detecting unauthorized use of a content stream having the second predetermined pattern to thereby determine that a node in the first subgroup is the source of the unauthorized use.
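The subgroup-splitting loop in this claim amounts to a binary search over suspect receivers. A toy simulation, with a `leaks` callback standing in for detecting a subgroup's watermark pattern in the pirated feed (all names hypothetical):

```python
def locate_pirate(nodes, leaks):
    """Repeatedly halve the suspect group. `leaks(subgroup)` reports whether
    the watermark pattern assigned to that subgroup shows up in the pirate
    feed; each round corresponds to assigning fresh patterns per subgroup."""
    suspects = list(nodes)
    while len(suspects) > 1:
        mid = len(suspects) // 2
        first, second = suspects[:mid], suspects[mid:]
        suspects = first if leaks(first) else second
    return suspects[0]

# Simulated pirate: node-5 is the source of the unauthorized stream.
found = locate_pirate([f"node-{i}" for i in range(8)],
                      lambda subgroup: "node-5" in subgroup)
```

Because the group is halved each round, isolating one node among n receivers takes on the order of log2(n) rounds of re-watermarking.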

US Pat. No. 11,032,624

SYSTEM AND METHOD FOR PROVIDING AN ALERT AND AD WHILE DELIVERING DIGITAL CONTENT

1. A method for providing an alert on delivering a digital content, comprising:
providing the alert signal to a viewer device for directing the focus of the viewer to the imminent occurrence of a feature of interest in the digital content being delivered, and
presenting a commercial message to the viewer at least partially during the time the focus of the viewer has been directed to the feature of interest in the digital content, wherein the feature of interest comprises the resumption of play during a sporting event following a brief stoppage that was not caused by a change in the score.

US Pat. No. 11,032,623

SUBTITLED IMAGE GENERATION APPARATUS AND METHOD

REALTEK SEMICONDUCTOR COR...

1. A subtitled image generation apparatus, comprising:
a subtitle generation circuit configured to receive audio data to generate a subtitle image pattern according to the audio data;
an image delay circuit comprising:
a first delay path having a delay buffer circuit;
a second delay path having a data amount decreasing circuit, the delay buffer circuit and a data amount restoring circuit; and
a control circuit configured to control the first delay path to store and delay image data when a data amount of the image data matches a direct-writing condition, and control the second delay path to decrease the data amount of the image data, to store and delay the image data and to restore the data amount of the image data when the data amount fails to match the direct-writing condition; and
an overlaying circuit configured to overlay the subtitle image pattern on the image data having a corresponding timing to generate an output subtitled image.

US Pat. No. 11,032,622

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An information processing apparatus, comprising:
processing circuitry configured to:
receive instrument information from instruments that are communicatively connected to the information processing apparatus via a network, the instrument information specifying a plurality of instances of inter-instrument communication among the instruments, and identifying pieces of transmission data and pieces of reception data for the instances of inter-instrument communication;
arrange, according to the instrument information, the pieces of transmission data into transmission data groups and the pieces of reception data into reception data groups, transmission processing within a respective transmission data group being collectively controlled, and reception processing within a respective reception data group being collectively controlled; and
generate a user interface for display on a screen according to the instrument information, the user interface including:
source group icons each representing a respective one of the transmission data groups; and
reception group icons each representing a respective one of the reception data groups.

US Pat. No. 11,032,621

CONTENT DISTRIBUTION SERVER, TERMINAL DEVICE, CONTENT DISTRIBUTION SYSTEM, CONTENT DISTRIBUTION METHOD, CONTENT PLAY METHOD, CONTENT DISTRIBUTION PROGRAM, AND CONTENT PLAYER PROGRAM

DWANGO, Co., Ltd., Tokyo...

1. A content distribution server, comprising:
a storage that stores live content received;
a controller that controls distributing the live content stored in the storage as chase content to be played in a delayed manner from the live content received; and
a communicator that distributes the live content and the chase content to a viewer terminal,
wherein the controller converts a frequency of sound of the live content to a higher or a lower frequency than an original frequency, while a frequency of sound of the chase content is equal to the original frequency of the sound of the live content, and the controller distributes the sound having been changed to the viewer terminal through the communicator.

US Pat. No. 11,032,620

METHODS, SYSTEMS, AND APPARATUSES TO RESPOND TO VOICE REQUESTS TO PLAY DESIRED VIDEO CLIPS IN STREAMED MEDIA BASED ON MATCHED CLOSE CAPTION AND SUB-TITLE TEXT

SLING MEDIA PVT LTD, Ban...

1. A method for implementing voice search in media content, the method comprising:
requesting, at a client device by a voice request, media content comprising at least a video clip of a scene contained in the media content wherein the media content is streamed to the client device;
capturing, at the client device the voice request for the media content of the video clip to display at the client device wherein the streamed media content is a selected video streamed from a video source;
applying a natural language processing solution for matching the voice request to a set of one or more words contained in at least closed caption text of the selected video;
associating matched words to closed caption text with a start index and an end index of the video clip contained in the selected video; and
streaming the video clip to the client device in accordance with the start index and the end index associated with matched closed caption text.
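A minimal sketch of the caption-matching step, assuming captions arrive as timed cues (the cue format and `find_clip` are illustrative, not from the patent):

```python
def find_clip(query, cues):
    """cues: list of (start, end, text) closed-caption entries.
    Return (start_index, end_index) spanning the cues that contain any
    query word, or None when nothing matches."""
    words = query.lower().split()
    hits = [cue for cue in cues if any(w in cue[2].lower() for w in words)]
    if not hits:
        return None
    return hits[0][0], hits[-1][1]

cues = [
    (0.0, 2.0, "welcome back everyone"),
    (2.0, 5.0, "the dragon attacks the castle"),
    (5.0, 8.0, "and the knights flee"),
]
clip = find_clip("dragon attacks", cues)  # time span of the matching scene
```

The returned pair plays the role of the start and end indices used to stream just the requested clip; a real system would put a natural language model in front of the keyword match.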

US Pat. No. 11,032,619

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. An electronic apparatus comprising:
a receiver configured to receive a broadcast signal;
a communicator configured to communicate through a network;
an input interface configured to receive a user input; and
a controller configured to:
perform a channel scan for the broadcast signal,
generate a channel list comprising a first channel of the broadcast signal based on the channel scan,
obtain information regarding a second channel from the broadcast signal, the second channel being available through the network,
generate a potential channel list comprising the second channel based on the obtained information,
provide guide information to prompt a user to establish a connection to the network based on a network connection status of the communicator indicating that the electronic apparatus is not connected to the network, and
receive and provide content corresponding to one of the first channel and the second channel selected from the channel list and the potential channel list,
wherein the content corresponding to the second channel is received through the communicator, which is different from the receiver through which the information regarding the second channel is obtained from the broadcast signal,
wherein when the network connection status of the communicator indicates that the electronic apparatus is not connected to the network, the controller is further configured to prevent a channel switching to the second channel in the potential channel list in response to a first user input of channel up or down and to allow the channel switching to the second channel in the potential channel list in response to a second user input of inputting a channel number of the second channel, and
wherein when the network connection status of the communicator indicates that the electronic apparatus is connected to the network, the controller is further configured to add the second channel in the channel list and allow the channel switching to the second channel in the channel list in response to a third user input of channel up or down and to allow the channel switching to the second channel in the channel list in response to a fourth user input of inputting the channel number of the second channel.
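The offline behaviour in the last two wherein clauses reduces to a small gating rule: without a network connection, channel up/down skips the network channel while direct number entry still reaches it. A sketch (the function name is hypothetical):

```python
def can_switch_to_network_channel(connected, input_kind):
    """input_kind: 'updown' for channel up/down, 'number' for direct entry
    of the channel number. Mirrors the claimed gating of network channels."""
    if connected:
        return True                # listed alongside broadcast channels
    return input_kind == "number"  # offline: only direct number entry
```

When the switch is blocked, the apparatus instead shows guide information prompting the user to connect to the network.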

US Pat. No. 11,032,618

METHOD AND APPARATUS FOR PROCESSING CONTENT FROM PLURALITY OF EXTERNAL CONTENT SOURCES

Samsung Electronics Co., ...

1. A content processing device comprising:
a processor configured to:
map contents of a plurality of external content sources to a content listing in a broadcasting mode of the content processing device, based on at least one content parameter comprising a frequency of viewing channels, wherein the broadcasting mode enables the content processing device to broadcast at least one TV channel;
present an icon array of a modified content listing in the identified source broadcasting mode, the icon array of the modified content listing comprising a plurality of content icons, each icon representing one of the contents of the plurality of external content sources according to the broadcasting mode of the content processing device; and
reproduce the one of the contents of the plurality of external content sources in response to a selection of one of the icon array of the modified content listing in the broadcasting mode and without explicitly switching to an external content source mode,
wherein the mapping of the contents of the plurality of external content sources to the content listing comprises mapping, based on the frequency of viewing channels, the contents of the plurality of external content sources to empty or least viewed channels in the content listing in the broadcasting mode of the content processing device.
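The mapping rule in the final clause can be sketched by sorting channel slots by viewing frequency and filling the least-viewed (or empty) ones first (names hypothetical):

```python
def map_external_sources(channel_views, sources):
    """channel_views: {channel_number: view_count}; empty channels count 0.
    Assign each external source to the least-viewed remaining slot."""
    slots = sorted(channel_views, key=channel_views.get)  # emptiest first
    return dict(zip(sources, slots))

mapping = map_external_sources({1: 50, 2: 0, 3: 5}, ["HDMI1", "Cast"])
```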

US Pat. No. 11,032,617

MULTIPLE HOUSEHOLD MANAGEMENT

Sonos, Inc., Santa Barba...

1. A method to be performed by a computing system, the method comprising:
causing, via a network interface, display of a household management interface for multiple media playback systems that comprise one or more respective playback devices, wherein a first control interface of the household management interface comprises (i) respective representations of the multiple media playback systems and (ii) first status information comprising indications of one or more respective first statuses corresponding to each media playback system of the multiple media playback systems;
receiving, via a households control of the household management interface, input data representing a selection of a particular media playback system from among the multiple media playback systems, wherein the household management interface is switchable among a plurality of second control interfaces corresponding to each media playback system of the multiple media playback systems via the households control;
based on receiving the input data representing the selection of the particular media playback system, causing, via the network interface, display of a particular second control interface of the household management interface, the second control interface comprising (i) playback controls to control playback on playback devices of the particular media playback system and (ii) second status information comprising indications of two or more second statuses corresponding to respective parameters of the particular media playback system;
receiving, via the network interface, data representing updated status information for at least one media playback system of the multiple media playback systems; and
based on receiving the data representing updated status information for at least one media playback system, updating the household management interface to display the updated status information for at least one media playback system of the multiple media playback systems.

US Pat. No. 11,032,616

SELECTIVELY INCORPORATING FEEDBACK FROM A REMOTE AUDIENCE

BLIZZARD ENTERTAINMENT, I...

1. A method comprising:
concurrently generating a set of live audience feeds, wherein each live audience feed in the set of live audience feeds includes media content involving a set of participant users;
transmitting the set of live audience feeds to a plurality of client devices associated with a plurality of audience members other than the set of participant users;
while transmitting the set of live audience feeds:
receiving, from the plurality of the audience members, feedback data that represents a plurality of items of feedback on one or more already-transmitted portions of the media content;
classifying each item of feedback, of the plurality of items of feedback, with a particular feedback type of a plurality of feedback types;
wherein each feedback type, of the plurality of feedback types, is associated with one or more changes that affect contents of one or more live audience feeds in the set of live audience feeds;
based on the plurality of items of feedback being classified with the particular feedback type, determining that the plurality of items of feedback satisfies stored triggering criteria for triggering an audience feed change;
responsive to determining that the plurality of items of feedback satisfies the stored triggering criteria, causing at least one change, of the one or more changes that correspond to the particular feedback type, to be made to one or more live audience feeds in the set of live audience feeds;
wherein the method is performed by one or more computing devices.
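The classify-count-trigger pipeline of this claim can be sketched with a counter per feedback type and stored per-type thresholds (all names hypothetical):

```python
from collections import Counter

def triggered_types(items, classify, thresholds):
    """Classify each feedback item, count per type, and return the types
    whose counts satisfy their stored triggering criteria (modeled here as
    a simple per-type count threshold)."""
    counts = Counter(classify(item) for item in items)
    return {t for t, n in counts.items() if n >= thresholds.get(t, float("inf"))}

fired = triggered_types(["boo", "boo", "cheer", "boo"],
                        classify=lambda item: item,
                        thresholds={"boo": 3, "cheer": 2})
```

Each type in `fired` would then map to its associated change to the live audience feeds.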

US Pat. No. 11,032,615

DETERMINING AN END SCREEN TIME FOR DISPLAYING AN END SCREEN USER INTERFACE

Verizon Patent and Licens...

1. A method, comprising:
receiving, by a content platform, information identifying a plurality of exit times, in connection with an on-demand content element, associated with respective user-initiated exit events;
generating, by the content platform, an exit time distribution based on the plurality of exit times;
determining, by the content platform and using a machine-learning regression model, an end screen time in connection with the on-demand content element based on the exit time distribution; and
transmitting, by the content platform, an instruction to display an end screen user interface at the end screen time during playback of the on-demand content element.
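The patent claims a machine-learning regression model over the exit-time distribution; as a toy stand-in only, one could take a high percentile of that distribution, on the assumption that exits cluster once the credits begin:

```python
def end_screen_time(exit_times, fraction=0.9):
    """Toy stand-in for the claimed regression model: return the exit time
    at the given fraction of the sorted distribution."""
    ordered = sorted(exit_times)
    index = min(len(ordered) - 1, int(fraction * len(ordered)))
    return ordered[index]

t = end_screen_time([50, 99, 100, 100, 101, 102], fraction=0.5)
```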

US Pat. No. 11,032,614

COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR DETERMINING ATTENTIVENESS OF USER

1. A computer-implemented method of determining user attentiveness during media content consumption, the method comprising:
obtaining, at a collection server, response data from a client device, wherein the response data is collected for a user consuming media content on the client device, and wherein the response data comprises a data stream representative of variation over time of the user's behaviour whilst consuming the media content;
associating, at the collection server, the data stream with the media content;
displaying, at an annotation device, a dynamic representation of the response data concurrently with the media content to which it is associated;
receiving, at the annotation device, attentiveness data from an annotator, wherein the attentiveness data is an input score indicative of user attentiveness based on the dynamic representation;
associating, at the annotation device, the attentiveness data with events in the data stream or media content to generate attentiveness-labelled response data;
storing, in a data repository, attentiveness-labelled response data from multiple users;
extracting, from the data repository by an analysis server, an attentiveness-labelled response data training set;
establishing, at the analysis server, an objective for a machine learning algorithm; and
generating, using the machine learning algorithm, an attentiveness model from the attentiveness-labelled response data training set.

US Pat. No. 11,032,613

DYNAMIC SLATES FOR LIVE STREAMING BLACKOUTS

FOX BROADCASTING COMPANY,...

1. A method of managing streaming of a regionally premier media program to a device disposed at a location, comprising:
(a) receiving a request to stream the media program to the device, the request comprising:
an identifier of the media program;
(b) in response to the received request, determining a current transmission state of the media program to the device at a location according to the identifier of the media program and the location of the device, the current transmission state comprising:
a first current transmission state wherein streaming the media program to the device is not precluded;
a second current transmission state wherein streaming of the media program to the device is precluded; and
(c) in response to a determination that the current transmission state is the second current transmission state:
terminating any streaming of the media program to the device and transmitting second information in lieu of the streaming of the media program, the second information initiating presentation of alternative content by the device.
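The two transmission states and the slate fallback form a small decision table; a sketch, with a region-keyed blackout map as an assumed data model (not from the patent):

```python
PERMITTED, PRECLUDED = 1, 2  # the first and second current transmission states

def transmission_state(media_id, location, blackouts):
    """blackouts: {media_id: set of locations where streaming is precluded}."""
    return PRECLUDED if location in blackouts.get(media_id, set()) else PERMITTED

def handle_request(media_id, location, blackouts):
    """Stream normally, or terminate and send alternative content (a slate)."""
    if transmission_state(media_id, location, blackouts) == PRECLUDED:
        return "slate"
    return "stream"

blackouts = {"game-7": {"ATL"}}
```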

US Pat. No. 11,032,612

DYNAMIC VERIFICATION OF PLAYBACK OF MEDIA ASSETS AT CLIENT DEVICE

Turner Broadcasting Syste...

1. A system, comprising:
a verification server for dynamic verification of playback of one or more media assets; and
a client device for playback of the one or more media assets on a display view of the client device,
wherein the client device comprises a first circuitry that is configured to:
receive an asset stream of the one or more media assets that comprises one or more tags embedded in the one or more media assets, via a communication network;
detect an asset identifier associated with each media asset of the one or more media assets during playback of each media asset on the client device, based on identification of a tag of the one or more tags; and
generate support information for each media asset, in response to the detection of the asset identifier,
wherein the support information is generated for verification of the playback of each media asset at the client device, and
generate a verification message for each media asset of the one or more media assets based on the detection of the asset identifier for each media asset,
wherein the verification message for each media asset indicates that each media asset is presented on the display view of the client device;
encrypt the generated verification message for each media asset, based on a client private key associated with the client device and an asset public key for each media asset; and
wherein the verification server comprises a second circuitry that is configured to:
verify the playback of the one or more media assets on the client device based on the verification message,
wherein the playback of the one or more media assets is verified to satisfy defined asset delivery criteria and to identify at least one deviation or at least one error with the playback of the one or more media assets.

US Pat. No. 11,032,611

METHOD FOR ENHANCING A USER VIEWING EXPERIENCE WHEN CONSUMING A SEQUENCE OF MEDIA

Rovi Guides, Inc., San J...

1. A method for providing edited content, the method comprising:
detecting that a first content item is about to be consumed;
identifying an overlapping content portion in the first content item and a second content item; and
in response to the identifying the overlapping content portion:
generating a prompt for a user to select one of a first playback rate or a second playback rate, wherein the first playback rate is faster than the normal playback rate and wherein the second playback rate is faster than the first playback rate;
receiving a user selection of one of the first playback rate or the second playback rate; and
providing for playing the overlapping content portion of the first content item at the selected playback rate.

US Pat. No. 11,032,610

METHODS AND APPARATUS TO DETERMINE ENGAGEMENT LEVELS OF AUDIENCE MEMBERS

The Nielsen Company (US),...

7. An apparatus comprising:
memory including machine readable instructions; and
circuitry to execute the instructions to:
analyze image data from a sensor to determine whether an environment in which media is to be presented by a first device includes a second device with an illuminated display;
determine a type of the second device based on the image data; and
determine, when the environment includes the second device with the illuminated display, whether the illuminated display of the second device is associated with a media presentation by the second device.

US Pat. No. 11,032,609

ANALYSIS OF TELEVISION VIEWERSHIP DATA FOR CREATING ELECTRONIC CONTENT SCHEDULES

AMC Network Entertainment...

1. A method implemented on a processor to generate an electronic content schedule, the method comprising:
receiving one or more data files comprising television (TV) viewing data and descriptive data for a first plurality of individuals, the descriptive data comprising demographic and behavioral data for each individual;
receiving, from a user, target audience criteria, target TV content, and criteria for key performance indicators (KPIs);
tracking KPIs for a target segment including a second plurality of individuals selected from the first plurality of individuals based on matching the target audience criteria to the descriptive data;
calculating spot watching probabilities for each individual in the target segment;
generating a plurality of spot packages based on the target TV content;
analyzing the plurality of spot packages to select a spot package that provides the highest incremental value to the electronic content schedule, wherein analyzing each spot package comprises:
selecting a probabilistic segment from the target segment for the spot package based on the spot watching probabilities, and
calculating a score representing an incremental value of the spot package based on applying the KPI criteria to a plurality of KPIs calculated for the probabilistic segment; and
adding the selected spot package to the electronic content schedule.
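The selection step reduces to scoring each candidate package and keeping the maximum; a sketch with a score table standing in for the KPI calculation (names hypothetical):

```python
def select_spot_package(packages, score):
    """Return the package with the highest incremental value and its score.
    `score` stands in for applying the KPI criteria to the KPIs calculated
    for each package's probabilistic segment."""
    best = max(packages, key=score)
    return best, score(best)

scores = {"pkg-a": 1.2, "pkg-b": 3.4, "pkg-c": 2.0}
chosen, value = select_spot_package(list(scores), scores.get)
```

The chosen package would then be appended to the electronic content schedule, and the process repeated for remaining inventory.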

US Pat. No. 11,032,608

MOBILE TERMINAL AND CONTROL METHOD THEREFOR

LG ELECTRONICS INC., Seo...

1. A mobile terminal, comprising:
a display unit;
a memory;
a sensing unit configured to sense a fingerprint input;
a first camera; and
a controller configured to:
sense a first input signal in a state that a first video content captured through the first camera is output on the display unit,
extract a first fingerprint information from the first input signal, and
set up a security section of the first video content captured through the first camera in response to the first fingerprint information corresponding to a registered fingerprint information,
wherein the security section comprises a section in which at least one portion of the first video content captured through the first camera is encrypted based on a time during which the sensing of the first input signal is maintained, and
wherein the controller is further configured to:
extract a second fingerprint information in response to a second input signal being sensed before or while the first video content in which the security section is set up is played,
in response to the second fingerprint information corresponding to the registered fingerprint information, play the first video content including the security section, and
in response to the second fingerprint information not corresponding to the registered fingerprint information, play the first video content except the security section.

US Pat. No. 11,032,607

METHODS, DEVICES, AND SYSTEMS FOR EMBEDDING VISUAL ADVERTISEMENTS IN VIDEO CONTENT

1. A device, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
obtaining video content, wherein the video content comprises a plurality of frames;
monitoring, by an image sensor, a facial feature of a user to determine a visual focus of the user in relation to the video content;
identifying a first group of frames of the plurality of frames and identifying a second group of frames of the plurality of frames, wherein the first group of frames includes a first group of moving objects, wherein the second group of frames includes a second group of moving objects;
determining that the first group of moving objects in the first group of frames is fewer than the second group of moving objects in the second group of frames resulting in an object determination;
determining the first group of frames includes different media content than the second group of frames resulting in a scene determination;
determining, according to the monitoring and the object determination, a measure of attention of the user within a region of the first group of frames;
determining that the measure of attention of the user within the region of the first group of frames satisfies a threshold;
detecting a change in viewpoint by the user;
determining a viewpoint trajectory from the change in viewpoint;
identifying a portion of subsequent frames of the plurality of frames to insert a visual advertisement according to the scene determination, object determination, the measure of attention of the user, and the viewpoint trajectory;
identifying a current time period to embed the visual advertisement, wherein the portion of subsequent frames are within the current time period;
identifying a previous time period for embedding a previous visual advertisement in a previous group of frames;
determining a time period between the previous time period and the current time period;
determining that the time period exceeds a time threshold; and
in response to the time period exceeding the time threshold, embedding in at least the portion of subsequent frames of the plurality of frames the visual advertisement in the region for presentation to the user via a communication device receiving the at least the portion of subsequent frames of the plurality of frames.

US Pat. No. 11,032,606

SYSTEM, METHOD, AND RECORDING MEDIUM FOR PROVIDING NOTIFICATIONS IN VIDEO STREAMS TO CONTROL VIDEO PLAYBACK

INTERNATIONAL BUSINESS MA...

1. A video stream control system, the system comprising:
a video stream analyzing circuit configured to identify, a priori, a section of a video stream where an adverse condition will occur by determining, a priori, that a likelihood that the section of the video will have a negative effect is greater than a first predetermined threshold;
a notification delivering circuit configured to modify the video stream and to deliver a notification that the adverse condition will occur in the section of the video stream before the section of the video stream is played; and
a selecting circuit configured to select a type of delivery of the section of the video stream that includes a modified version of the section omitting the adverse condition based on a set of rules factoring the adverse condition and user data at a time of the notification,
wherein the selecting circuit, after the notification is delivered, queries for authorization to watch the unmodified version of the section of the video stream,
wherein a timing of the notification varies based on a user input, and
wherein a sensitivity of the video stream analyzing circuit identifying the adverse condition varies based on a user constraint to a predetermined threshold of biometric data required to trigger the adverse condition,
further comprising a learning circuit configured to learn a plurality of user reactions to the video stream to optimize an identification of the adverse condition by the video stream analyzing circuit.

US Pat. No. 11,032,605

SYSTEMS AND METHODS FOR REDUCING DIGITAL VIDEO LATENCY

Xandr Inc., New York, NY...

1. A method comprising:
determining, by a processing system including a processor, that a client device of a user is displaying a web page, wherein the processing system is part of the client device;
determining, by the processing system, a likelihood that the user will input a selection of content on the web page, the content being associated with a digital video;
determining, by the processing system, whether the likelihood exceeds a threshold;
responsive to the likelihood exceeding the threshold and prior to the user selecting the content:
identifying, by the processing system, the digital video associated with the content;
obtaining, by the processing system, a copy of a video file comprising the digital video and an initial portion that precedes the digital video;
initiating, by the processing system, a video player of the client device, wherein the initial portion comprises at least one of executable code or instructions for the video player, wherein executing the at least one of executable code or instructions for the video player results in the initiation of an unwrapping process and results in the video player generating a started signal indicating when the initial portion has been processed by the video player;
determining, by the processing system, a start location in the video file where the initial portion ends and the digital video begins, wherein the determining the start location comprises processing, by the processing system, the video file in a play direction until the digital video is reached; and
configuring, by the processing system, the video player to begin playing the video file from the start location when the user selects the content.

US Pat. No. 11,032,604

MANAGEMENT OF DEVICES IN AD HOC RENDERING NETWORKS

Apple Inc., Cupertino, C...

1. A control method for a network of member player devices, comprising:
receiving, at a first device in the network, a data record representing state of the network, the data record comprising: data identifying the devices that are members of the network, grouping(s) of the devices defined for the network, and a desired play state for each of the devices;
storing the data record at the first device;
determining, by the first device, whether a play state at the first device is different than a desired play state for the first device as indicated in the data record; and
if so, altering the play state at the first device to match the desired play state as indicated in the data record.
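The reconciliation step in this claim reduces to a small compare-and-set. A minimal sketch, assuming a data-record shape of `{"desired": {device_id: state}}` (the claim does not specify the record's layout):

```python
# Minimal sketch of the claimed state reconciliation; the dictionary
# layout of the data record is an assumption for illustration.

def reconcile(device_id: str, local_state: str, data_record: dict) -> str:
    """Alter the local play state only when it differs from the
    desired play state recorded for this device."""
    desired = data_record["desired"][device_id]
    return desired if local_state != desired else local_state

# Example data record received by the first device:
record = {"desired": {"dev1": "playing", "dev2": "paused"}}
```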

US Pat. No. 11,032,603

RECORDING REMOTE EXPERT SESSIONS

RPX Corporation, San Fra...


1. A server comprising:one or more hardware processors comprising a recording application, the recording application being configured to perform operations comprising:
receiving first content data from a first device, the first content data having been generated by the first device and being associated with performance of a first task in relation to a first physical device;
providing the first content data to a second device;
receiving second content data from the second device, the second content data including modifications to the first content data, the modifications including an annotation describing performance of the first task in relation to the first physical device;
forming playback parameters based on a context of the first device, the context of the first device being based on an orientation of the first device in relation to the first physical device and the first task;
receiving third content data from a third device, the third content data describing a context of the third device, the context of the third device indicating an orientation of the third device in relation to the first physical device and identifying the first task;
determining, based on the context of the third device, that the third device meets the playback parameters;
in response to determining that the third device meets the playback parameters, generating an enhanced playback session based on the first content data and second content data, the enhanced playback session including the first content data generated by the first device modified based on the modifications included in the second content data; and
communicating the enhanced playback session to the third device.

US Pat. No. 11,032,602

AUDIOVISUAL COLLABORATION METHOD WITH LATENCY MANAGEMENT FOR WIDE-AREA BROADCAST

Smule, Inc., San Francis...

1. An audio collaboration method for broadcast of a joint performance of geographically distributed first and second performers with non-negligible peer-to-peer communications latency between host and guest devices, the method comprising:receiving at the host device, operating as a local peer, a media encoding of a mixed audio performance (i) including vocal audio captured at the guest device, communicatively coupled as a remote peer, from a first one of the performers and (ii) mixed with a backing audio track;
at the host device, audibly rendering the received mixed audio performance and capturing thereagainst vocal audio from a second one of the performers;
mixing, at the host device, the captured second performer vocal audio with the received mixed audio performance to provide a broadcast mix for transmission to an audience as the broadcast, wherein the broadcast mix includes vocal audio of the first and second performers and the backing audio track with negligible temporal lag therebetween; and
buffering the broadcast mix at a content server separate from the host device and transmitting the buffered broadcast mix from the content server to the audience.

US Pat. No. 11,032,601

ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF

Samsung Electronics Co., ...

1. An electronic apparatus comprising:a memory configured to store information related to an identification name corresponding to a content providing source;
a communicator comprising communication circuitry; and
a processor coupled to the memory and the communicator, the processor configured to control the electronic apparatus to:
obtain information related to the identification name of the content providing source based on at least one of: identification information of the content providing source received from the content providing source through the communicator, an image received from the content providing source through the communicator and a search result regarding the identification name received through the communicator,
map the obtained information onto a plurality of different identification names corresponding to the content providing source,
store mapping information in the memory,
assign a priority order to the plurality of different identification names based on a frequency of use of the plurality of different identification names,
receive a user command from an input device,
determine whether the user command includes one of the different identification names of the plurality of different identification names by comparing the user command with the plurality of different identification names based on the priority order of the plurality of different identification names,
recognize the user command as a selection command of the content providing source based on the different identification name being included in the user command, and
control the electronic apparatus to perform an operation corresponding to the user command.
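The priority ordering and command matching described in this claim can be sketched with standard-library tools. The usage-log representation and function names below are assumptions for illustration; the claim leaves the data structures unspecified.

```python
from collections import Counter

# Illustrative sketch: frequency-based priority ordering of
# identification names, then in-order matching against a user command.
# The usage-log list and function names are assumptions.

def prioritize(usage_log):
    """Assign a priority order based on frequency of use."""
    return [name for name, _ in Counter(usage_log).most_common()]

def match_command(command: str, names_by_priority):
    """Compare the user command with the identification names in
    priority order; return the first name contained in the command."""
    for name in names_by_priority:
        if name in command:
            return name
    return None
```

`Counter.most_common()` returns names sorted by descending count, which directly yields the claim's frequency-of-use priority order.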

US Pat. No. 11,032,600

SYSTEM AND METHOD FOR INTERACTING WITH A PROGRAM GUIDE DISPLAYED ON A PORTABLE ELECTRONIC DEVICE

Universal Electronics Inc...

1. A method for controlling the operation of a one of a plurality of consumer electronic devices, comprising:displaying a plurality of broadcast channel identifiers each corresponding to a broadcast channel in a display of a controlling device, wherein a designation of a one of the broadcast channel identifiers of the displayed plurality of broadcast channel identifiers is capable of causing the controlling device to transmit a command to control at least channel tuning operations of each of the plurality of consumer electronic devices;
accepting an input into the controlling device that functions to designate the one of the plurality of broadcast channel identifiers; and
using the designation of the one of the plurality of broadcast channel identifiers to cause a transmission of a wireless signal from the controlling device to the one of the plurality of consumer electronic devices to cause the one of the plurality of consumer electronic devices to tune to the broadcast channel corresponding to the designated one of the plurality of broadcast channel identifiers wherein a condition associated with the controlling device at a time when the input is accepted into the controlling device determines the one of the plurality of consumer electronic devices from amongst each of the plurality of consumer electronic devices.

US Pat. No. 11,032,599

SYSTEMS, METHODS AND APPARATUS FOR INTERACTING WITH A SECURITY SYSTEM USING A TELEVISION REMOTE CONTROL

ECOLINK INTELLIGENT TECHN...

1. A method performed by an auxiliary interface device for interacting with a security system controller, a television and a television remote control, comprising:providing, by a processor of the auxiliary interface device, a security dashboard to the television for display by the television;
receiving, by the processor via a receiver coupled to the processor, an electronic signal from the television remote control, the electronic signal transmitted in accordance with a first communication protocol that cannot be directly received by the security system controller, the electronic signal comprising a command to cause the security system controller to perform an action;
accessing, by the processor via a network interface coupled to the processor, a server associated with the security system controller via a wide-area network, the server for providing an interface to the security system controller; and
providing, by the processor via the network interface, the electronic signal to the server via the wide-area network;
wherein the server forwards the electronic signal to the security system controller to perform the action associated with the command.

US Pat. No. 11,032,598

SYSTEM AND METHOD FOR RETRIEVING INFORMATION WHILE COMMANDING OPERATION OF AN APPLIANCE

UNIVERSAL ELECTRONICS INC...

1. A method for using a wireless interface device interfaced to an appliance to facilitate play of a media stream, comprising:receiving at a portable electronic device the media stream;
causing the portable electronic device to route the received media stream for a playing of the received media stream by the portable electronic device;
detecting by the portable electronic device that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance; and
in response to the portable electronic device detecting that the portable electronic device has been placed into wireless communication with the wireless interface device interfaced to the appliance, causing the portable electronic device to automatically reroute the received media stream to the wireless interface device interfaced to the appliance for a playing of the received media stream by the appliance instead of routing the received media stream for the playing of the received media stream by the portable electronic device.
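The detection-driven rerouting in this claim is a simple switch. A minimal sketch, where the string route labels and the `route_stream` name are assumptions made for the example:

```python
# Hedged sketch of the claimed automatic rerouting; the route labels
# and function name are illustrative assumptions.

def route_stream(connected_to_interface: bool) -> str:
    """Route the received media stream to the portable device's own
    player until the wireless interface device is detected, then
    automatically reroute to the appliance."""
    return "appliance" if connected_to_interface else "portable_device"
```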

US Pat. No. 11,032,597

SYSTEMS AND METHODS OF DYNAMIC OPTIMIZATION OF DATA ELEMENT UTILIZATION ACCORDING TO OBJECTIVES

ADAP.TV, INC., Dulles, V...

1. A computer-implemented method for optimizing graphical data element usage according to a plurality of objectives, comprising:receiving, at a server, a plurality of objectives associated with one or more graphical data elements via a user interface;
receiving, at the server, one or more fiscal constraints associated with the graphical data elements via the user interface;
apportioning, by the server, at least a portion of fiscal distribution resources to each graphical data element of the graphical data elements within the one or more fiscal constraints;
receiving, at the server, one or more electronic distribution metrics associated with the performance of the graphical data elements in meeting the plurality of objectives, wherein the one or more electronic distribution metrics are associated with distribution across an electronic network;
automatically revising, at the server, the at least a portion of fiscal distribution resources associated with each graphical data element of the graphical data elements in a manner optimized to meet the plurality of objectives within the one or more fiscal constraints by determining: (i) at least one feasibility region based on the one or more fiscal constraints and (ii) an optimum distribution of the at least a portion of fiscal distribution resources by evaluating one or more candidate optimization points based on the at least one feasibility region; and
automatically allocating, by the server, the revised portion of fiscal distribution resources associated with each graphical element.

US Pat. No. 11,032,596

SYSTEMS AND METHODS FOR GENERATING CONTENT STREAMS

Facebook, Inc., Menlo Pa...

1. A computer-implemented method comprising:determining, by a computing system, first bandwidth capabilities of a first viewing audience for a first content producer on a system, wherein the first viewing audience comprises users of the system that are following the first content producer through the system;
constructing, by the computing system, a first bandwidth distribution that plots the first bandwidth capabilities of the first viewing audience for the first content producer;
determining, by the computing system, one or more first quality levels for encoding streams of content items created by the first content producer based at least in part on the first bandwidth distribution, wherein the one or more first quality levels correspond to one or more peaks in the first bandwidth distribution, wherein the one or more peaks include at least a global maximum and one or more local maxima, and wherein a number of streams to be encoded corresponds with a number of peaks in the first bandwidth distribution;
updating, by the computing system, the first bandwidth distribution for the first viewing audience for the first content producer to accommodate changes to the first viewing audience over time;
encoding, by the computing system, at least one stream of at least one content item created by the first content producer based at least in part on the one or more determined first quality levels;
determining, by the computing system, second bandwidth capabilities of a second viewing audience for a second content producer on the system, wherein the second bandwidth capabilities of the second viewing audience are different from the first bandwidth capabilities of the first viewing audience;
constructing, by the computing system, a second bandwidth distribution that plots the second bandwidth capabilities of the second viewing audience for the second content producer;
determining, by the computing system, one or more second quality levels for encoding streams of content items created by the second content producer; and
encoding, by the computing system, at least one stream of at least one content item created by the second content producer based on the one or more determined second quality levels.
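The peak-driven quality selection in this claim (one encoded stream per peak of the audience bandwidth distribution) can be sketched with a simple histogram and local-maximum scan. The bin size and the choice of the bin's bandwidth as the "quality level" are assumptions for illustration:

```python
# Illustrative sketch: one quality level per local maximum of the
# audience bandwidth histogram. Bin size and the mapping of a peak to
# a quality level are assumptions; the claim leaves both unspecified.

def find_peaks(hist):
    """Indices of strict local maxima (the global maximum included)."""
    return [i for i in range(len(hist))
            if (i == 0 or hist[i] > hist[i - 1])
            and (i == len(hist) - 1 or hist[i] > hist[i + 1])]

def quality_levels(bandwidth_samples, bin_size=1000):
    """Build the bandwidth distribution, then emit one quality level
    per peak, so the number of streams matches the number of peaks."""
    top = max(bandwidth_samples)
    hist = [0] * (top // bin_size + 1)
    for b in bandwidth_samples:
        hist[b // bin_size] += 1
    return [i * bin_size for i in find_peaks(hist)]
```

Updating the distribution over time, as the claim requires, would simply mean recomputing the histogram as audience membership changes.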

US Pat. No. 11,032,595

SYSTEMS AND METHODS FOR DELIVERY OF CONTENT VIA MULTICAST AND UNICAST

Rovi Guides, Inc., San J...

1. A method for delivering content via a combination of a multicast source and a unicast stream, the method comprising:receiving, from user equipment, a request for the content;
identifying a plurality of multicast sources, wherein each stream in the plurality of multicast sources provides access to the content;
determining a recent multicast source from the plurality of multicast sources, wherein the recent multicast source most recently began delivering the content relative to other multicast sources of the plurality of multicast sources;
transmitting, to the user equipment, an identity of the recent multicast source;
retrieving data indicating a position in the content of the recent multicast source corresponding with when the user equipment began buffering the recent multicast source;
calculating a projected unicast stream length based on a difference in a start time in the content of the unicast stream and the position in the content when the user equipment began buffering the recent multicast source;
detecting an advertisement period in the unicast stream;
reducing the advertisement period, wherein the advertisement period in the unicast stream is reduced based on the projected unicast stream length; and
providing a beginning portion of the content to the user equipment via the unicast stream.
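The catch-up arithmetic in this claim (projected unicast length, then an ad period reduced to fit) can be sketched as below. The seconds-based units, the cap parameter, and the specific reduction policy are assumptions; the claim only says the ad period is reduced "based on" the projected length.

```python
# Minimal arithmetic sketch of the claimed unicast catch-up; units,
# names, and the cap-based reduction policy are assumptions.

def projected_unicast_length(unicast_start_s: int, buffer_position_s: int) -> int:
    """Difference between the unicast stream's start position and the
    position at which the equipment began buffering the multicast."""
    return buffer_position_s - unicast_start_s

def reduce_ad_period(ad_period_s: int, projected_len_s: int, max_total_s: int = 600) -> int:
    """Shrink the advertisement period so the catch-up content plus
    ads fits within an assumed total budget."""
    return max(0, min(ad_period_s, max_total_s - projected_len_s))
```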

US Pat. No. 11,032,594

SYSTEMS AND METHODS FOR DETERMINING WHETHER TO UPDATE EMBEDDED ADVERTISEMENTS IN DOWNLOADED CONTENT USING ADVERTISEMENT UPDATE CRITERIA

Rovi Guides, Inc., San J...

1. A method, comprising:receiving a request from a user to download a media asset at a first future time, wherein the media asset includes a plurality of embedded advertisements and the media asset is downloaded at the first future time based on receiving the request;
based on downloading the media asset at the first future time, determining, at a second future time and based on user-specific update criteria, whether an embedded advertisement in the plurality of embedded advertisements needs to be updated by:
retrieving, from a user profile, a threshold period of time;
retrieving, from metadata for the embedded advertisement, an age of the embedded advertisement;
comparing the age of the embedded advertisement with the threshold period of time; and
in response to determining that the age of the embedded advertisement exceeds the threshold period of time, determining that the embedded advertisement in the media asset needs to be updated;
based on determining the embedded advertisement needs to be updated, replacing the embedded advertisement with an updated embedded advertisement in the media asset, wherein the updated embedded advertisement is retrieved from an online database using the user-specific update criteria.
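The user-specific update check in this claim is a straightforward age-versus-threshold comparison. A minimal sketch, assuming a profile dictionary layout that the claim does not specify:

```python
# Sketch of the claimed user-specific update criteria; the profile
# key name and day-based units are assumptions.

def needs_update(ad_age_days: int, user_profile: dict) -> bool:
    """An embedded advertisement needs updating when its age (from the
    ad's metadata) exceeds the threshold in the user profile."""
    return ad_age_days > user_profile["ad_age_threshold_days"]
```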

US Pat. No. 11,032,593

ACCOUNT LOGIN METHOD AND SYSTEM, VIDEO TERMINAL, MOBILE TERMINAL, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

12. A video terminal, comprising:a memory storing computer-readable instructions; and
a processor coupled to the memory to execute the computer-readable instructions and configured to perform:
obtaining a first matching code from a service server based on a video terminal identifier, and outputting the first matching code to enable a mobile terminal to obtain an inputted second matching code from a login page and enable the mobile terminal to obtain the video terminal identifier from the service server based on the second matching code, there being a correspondence between the first matching code and the second matching code; and
obtaining an inputted direction control instruction, recognizing a second video login key indicated by the direction control instruction, and performing login on the video terminal according to the second video login key, the direction control instruction being an instruction inputted according to a first video login key, and there being a correspondence between the first video login key and the second video login key,
the first video login key being an obtained login key that is returned by the service server and corresponds to the video terminal identifier after the mobile terminal transmits a login key request carrying the video terminal identifier to the service server.

US Pat. No. 11,032,592

SYSTEMS AND METHODS FOR SECURELY STREAMING MEDIA CONTENT

SLING MEDIA L.L.C., Fost...

1. An automated process performed by a player device to securely establish a media streaming session with a server device via a communications network, the automated process comprising:transmitting, by the player device, a request for a connection to the server device via the communications network;
receiving, in response to the request for the connection, an authorization credential from a separately located central server via the communications network to authorize the media streaming session, wherein the authorization credential is generated and provided by the central server to both the player device and to the server device via the communications network; and
establishing the media streaming session between the player device and the server device over the communications network in response to receipt of the authorization credential received from the central server to thereby securely receive a media stream from the server device by the player device, wherein at least a portion of the media stream is encrypted based upon the authorization credential.

US Pat. No. 11,032,591

TIME DIVISION MULTIPLEXING METHOD FOR DECODING HARDWARE

AMLOGIC (SHANGHAI) CO., L...

1. A time division multiplexing method for decoding hardware, comprising:Step S1, providing a single decoding hardware;
Step S2, instantiating the decoding hardware into a first decoder and a second decoder; and
Step S3, decoding a first data stream through the first decoder, and decoding a second data stream through the second decoder, wherein a process for decoding the first data stream comprises:
Step S30, loading a decoding firmware corresponding to the format of the first data stream, to decode the first data stream;
Step S300, the first decoder loads the decoding firmware corresponding to the format of the first data stream;
Step S31, determining whether the first data stream is successfully decoded;
if yes, decoding header information of the first data stream, then decoding the first data stream into a decoded video frame and saving contextual information corresponding to the first data stream, wherein the saved contextual information comprises configuration information of output buffering from the decoding operation, variables of the decoding environment, and register values;
if no, then returning to Step S300.
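The context save/restore that lets two logical decoders share one hardware decoder can be sketched as below. This is an illustrative sketch only: the class and field names are assumptions, and the real contextual information (buffer configuration, environment variables, register values) lives in hardware rather than a Python dictionary.

```python
# Hedged sketch of the claimed time-division multiplexing: one
# hardware decoder instantiated as two logical decoders, each saving
# and restoring its own contextual information. Names are assumptions.

class LogicalDecoder:
    def __init__(self, name: str):
        self.name = name
        self.context = {}  # buffer config, decoding env, registers

    def save_context(self, ctx: dict) -> None:
        """Snapshot this stream's contextual information before the
        hardware is handed to the other logical decoder."""
        self.context = dict(ctx)

    def restore_context(self) -> dict:
        """Reload the saved context when this decoder regains the
        hardware."""
        return dict(self.context)
```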

US Pat. No. 11,032,590

METHODS, DEVICES, AND SYSTEMS FOR PROVIDING PANORAMIC VIDEO CONTENT TO A MOBILE DEVICE FROM AN EDGE SERVER

1. A device, comprising:a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
determining a first motion-to-update latency of a mobile device in relation to receiving a video content update provided by the device;
responsive to determining that the first motion-to-update latency exceeds a threshold:
determining a motion-to-update latency of the mobile device in relation to receiving video content updates from a plurality of edge servers, wherein the determining results in a plurality of motion-to-update latencies;
identifying a second motion-to-update latency from the plurality of motion-to-update latencies that is below the threshold;
identifying an edge server associated with the second motion-to-update latency; and
transmitting video content to the edge server to mitigate the first motion-to-update latency of the device, wherein the edge server provides a portion of the video content at different time intervals determined according to the second motion-to-update latency to the mobile device in response to receiving an indication from the mobile device of user head movement in viewing the video content resulting in a plurality of portions of video content, wherein the video content comprises panoramic video content, wherein a first portion of the plurality of portions of video content comprises a margin area surrounding a viewport area, wherein the margin area and the viewport area cover a plurality of head movements over a time period, where the margin area is determined by a coverage rate for the first portion of the video content, a cumulative probability distributed function of the coverage rate and the second motion-to-update latency, wherein the coverage rate is based on covering the plurality of head movements over the time period.

US Pat. No. 11,032,589

METHODS, SYSTEMS, AND MEDIA FOR ENSURING CONSUMPTION OF PORTIONS OF MEDIA CONTENT

Google LLC, Mountain Vie...

1. A method for ensuring media consumption, the method comprising:receiving, using a hardware processor, an encrypted media content stream from a media content source that includes first media content corresponding to at least a portion of a media content item;
determining that the media content stream received from the media content source is encrypted;
in response to determining that the media content stream received from the media content source is encrypted, requesting a media content license corresponding to the media content item, wherein the media content license includes a key;
requesting a second media content stream that includes second media content having a playback position adjacent to the first media content, wherein the second media content stream includes encrypted content key information inserted at a first time position that corresponds with an end portion of the second media content stream for decrypting the encrypted media content stream and key policy information that indicates a manner in which the encrypted content key information is to be used inserted at a second time position that also corresponds with the end portion of the second media content stream, and wherein the first time position and the second time position are different time positions within the second media content stream;
causing the second media content stream to be presented;
extracting the encrypted content key information during presentation of the second media content stream upon reaching the first time position within the second media content stream and the key policy information during presentation of the second media content stream upon reaching the second time position within the second media content stream;
decrypting the encrypted content key information included in the second media content stream using the key from the requested media license to obtain decrypted content key information, wherein the decrypted content key information includes at least one content key for decrypting the encrypted media content stream;
decrypting the encrypted media content stream using the at least one content key extracted from the decrypted content key information included in the second media content stream and based on the key policy information extracted from the second media content stream; and
causing the decrypted media content stream to be presented.

US Pat. No. 11,032,588

METHOD AND APPARATUS FOR SPATIAL ENHANCED ADAPTIVE BITRATE LIVE STREAMING FOR 360 DEGREE VIDEO PLAYBACK

Google LLC, Mountain Vie...

1. A method for providing spatial adaptive enhanced video streams for playback of a 360 degree video signal, comprising:generating at least two streaming video signals corresponding to the 360 degree video signal, wherein:
a first streaming video signal has a first resolution,
a second streaming video signal has a second resolution,
the second resolution is lower than the first resolution,
each of the first and second streaming video signals includes a plurality of frames, and
each frame spans a 360 degree viewing angle;
dividing each frame of the first and second streaming video signals into a plurality of segments, wherein each of the plurality of segments spans a portion of the 360 degree viewing angle;
generating a plurality of enhanced video streams for a 360 degree video player, wherein:
each of the plurality of enhanced video streams includes a plurality of frames, and
each frame in one of the enhanced video streams includes at least one segment from one of the plurality of frames in the first streaming video signal and at least one segment from one of the plurality of frames in the second streaming video signal; and
generating a manifest file for the 360 degree video signal to be requested by the 360 degree video player from a manifest server, wherein the manifest file comprises, for each of the plurality of enhanced video streams corresponding to the 360 degree video signal:
a first identifier that defines an address at which a respective enhanced video stream is stored, and
a second identifier that defines a direction corresponding to the portion of the 360 degree viewing angle spanned by the segment from the first streaming video signal,
the manifest file and the plurality of enhanced video streams to be distributed, for storage, to a content delivery network comprising a plurality of edge servers.

US Pat. No. 11,032,587

SYSTEMS AND METHODS OF VIDEO FORWARDING WITH ADAPTIVE VIDEO TRANSCODING CAPABILITIES

Dialogic Corporation, Mo...

1. A selective forwarding unit (SFU), comprising: a real-time adaptive video transcoder;
a memory storing one or more sets of instructions; and
at least one processor configured to execute the one or more sets of instructions:
to receive at least a first large video stream and a second large video stream from a first user device and a second user device, respectively; to identify the first user device as being associated with a low available bandwidth;
to cause the real-time adaptive video transcoder to perform, in real-time, adaptive video transcoding on the second large video stream received from the second user device to produce a small video stream that conforms to the low available bandwidth associated with the first user device; and
to selectively forward the small video stream to the first user device associated with the low available bandwidth;
wherein the at least one processor is further configured to execute the one or more sets of instructions to determine at least a first available bandwidth from the SFU to the first user device and a second available bandwidth from the SFU to the second user device; and
wherein the at least one processor is further configured to execute the one or more sets of instructions to determine a heterogeneity value associated with at least the first available bandwidth and the second available bandwidth associated with the first user device and the second user device, respectively.
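The claim does not define how the heterogeneity value is computed, so the sketch below assumes a simple max/min ratio over the measured per-device bandwidths; any dispersion metric (e.g. variance) would fit the claim equally well.

```python
# Sketch of a heterogeneity value over the determined available
# bandwidths; the max/min ratio is an assumption, as the claim leaves
# the metric unspecified.

def heterogeneity(bandwidths_bps) -> float:
    """Higher value = more heterogeneous download capacities among the
    user devices, which motivates transcoding a small stream."""
    return max(bandwidths_bps) / min(bandwidths_bps)
```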

US Pat. No. 11,032,586

TECHNIQUES FOR DYNAMIC DIGITAL ADVERTISING

WP COMPANY LLC, Washingt...

1. A dynamic advertising method comprising:receiving, by a computer processor associated with a dynamic advertising device, from a client device and via a network, first advertising content information including timepoint data for a playback event associated with first advertising content of a first advertising slot on a webpage;
configuring, by the computer processor, a recall request for the first advertising content, the recall request including identification data associated with the first advertising content and specifying a timepoint obtained from the timepoint data;
receiving, by the computer processor, from the client device and via the network, a first advertising content resume request; and
requesting, by the computer processor, for the client device, using the recall request, timepoint-configured first advertising content having a playback start time equal to the timepoint.

US Pat. No. 11,032,585

REAL-TIME SYNTHETICALLY GENERATED VIDEO FROM STILL FRAMES

Capital One Services, LLC...

1. A system for generating synthetic video, the system comprising:one or more memory units for storing instructions; and
one or more processors configured to execute the instructions to perform operations comprising:
training, using a sequence of difference images, an image sequence generator model to generate synthetic difference images;
obtaining a seed difference image by one of:
receiving the seed difference image; or
performing at least one of forward-step or backward-step encoding using an encoder model;
generating a sequence of synthetic difference images based on the seed difference image, wherein the sequence of synthetic difference images is generated by iteratively using the image sequence generator model to accept a previous synthetic difference image as an input and return a subsequent synthetic difference image as an output, starting from the seed difference image; and
generating a sequence of synthetic images based on the sequence of synthetic difference images, the generating comprising implementing a decoder model on the sequence of synthetic difference images to perform at least one of forward-step decoding or backward-step decoding.
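The iterative generation and forward-step decoding in this claim can be sketched with plain lists standing in for images. The generator here is a stand-in callable for the trained image sequence generator model, which the claim does not specify; frames are flat lists and decoding is element-wise addition, both assumptions made for the example.

```python
# Illustrative sketch of the claimed pipeline: generate difference
# images iteratively from a seed, then decode frames by accumulating
# them. The model stand-in, flat-list "images", and additive decoding
# are assumptions.

def generate_diffs(seed_diff, steps, model):
    """Iteratively feed each synthetic difference image back into the
    generator model to obtain the next one, starting from the seed."""
    diffs, current = [], seed_diff
    for _ in range(steps):
        current = model(current)
        diffs.append(current)
    return diffs

def decode_images(first_frame, diffs):
    """Forward-step decoding: each synthetic image is the previous
    image plus the next difference image (element-wise)."""
    frames, frame = [list(first_frame)], list(first_frame)
    for d in diffs:
        frame = [p + q for p, q in zip(frame, d)]
        frames.append(frame)
    return frames
```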

US Pat. No. 11,032,584

PICTURE STORAGE METHOD, APPARATUS AND VIDEO MONITORING SYSTEM

HANGZHOU HIKVISION DIGITA...

10. A video monitoring system, comprising: a managing server, a storage server, and a picture acquisition device, wherein,the picture acquisition device is configured for sending, to the managing server, a resource allocation request for a to-be-stored picture;
the managing server is configured for receiving the resource allocation request, and determining a first storage server according to the resource allocation request, and sending an identifier of the first storage server to the picture acquisition device;
the picture acquisition device is configured for receiving the identifier of the first storage server, and sending a picture writing request for the to-be-stored picture according to the received identifier;
the first storage server is configured for receiving the picture writing request, determining a first storage block for storing the to-be-stored picture according to the picture writing request, and sending, to the picture acquisition device, a first identifier of the first storage block;
the picture acquisition device is configured for receiving the first identifier, generating target picture data according to the to-be-stored picture and a first capture time of the to-be-stored picture, and sending the target picture data to the first storage server according to the first identifier;
the first storage server is configured for receiving the target picture data, storing the to-be-stored picture to the first storage block, determining a first storage time at which the to-be-stored picture is stored in the first storage block, and storing the first storage time and the first capture time into a first information sub-block of the first storage block, wherein an information sub-block of each storage block of the storage server records a corresponding relationship among pictures stored in that storage block, storage time of the pictures and capture time of the pictures.

US Pat. No. 11,032,583

METHOD AND SYSTEM FOR IMPROVING HIGH AVAILABILITY FOR LIVE CONTENT

QWLT, Inc., Redwood City...

1. A method for acquiring live content for a content delivery network (CDN), comprising:
intercepting a content manifest based on a content session initiated by a first user node and a broadcast server, wherein the content manifest includes at least one content identifier (ID) and a corresponding content chunk;
fetching the corresponding content chunk to store in a memory of the CDN;
receiving a request from a second user node for content of the content session;
continuously determining a current leader user node between at least the first user node and the second user node; and
fetching at least a content chunk based on a content manifest of the current leader user node.

US Pat. No. 11,032,582

OVER-THE-TOP MULTICAST SERVICES

Verizon Patent and Licens...

1. A method comprising:
generating, by a network device in a network, timeslot information pertaining to a network resource of the network during which the network resource is available for use by subscribers of an over-the-top (OTT) multicast service, wherein the subscribers are program providers;
generating, by the network device, cost information based on policies and the timeslot information;
publishing, by the network device, the timeslot information and the cost information to the subscribers;
determining, by the network device, that one of the subscribers has secured the network resource;
in response to determining that the one of the subscribers has secured the network resource, obtaining a uniform resource identifier of a program that is to be delivered using the network resource; and
provisioning, by the network device in response to the obtaining, the network resource to deliver the program to one or more end users.

US Pat. No. 11,032,581

PROTOCOL AND ARCHITECTURE FOR THE DECENTRALIZATION OF CONTENT DELIVERY

Charter Communications Op...

1. A method for content delivery from a content delivery network (CDN), comprising:
sending, from a processor of a computing device, a discovery message to an Internet Service Provider (ISP) network;
receiving, at the processor of the computing device, a capability response from a local cache server in response to sending the discovery message, wherein the capability response indicates topology data for the local cache server and the topology data is signed by a key of the CDN; and
sending, from the processor of the computing device, a request for content to the CDN including the topology data for the local cache server.

US Pat. No. 11,032,580

SYSTEMS AND METHODS FOR FACILITATING A PERSONALIZED VIEWING EXPERIENCE

DISH Network L.L.C., Eng...

1. A method for facilitating a personalized viewing experience in connection with an object of interest included in a source video content comprising:
a first phase for identifying features specific to the object of interest using training video content including:
receiving the training video content including the object of interest, wherein the training video content is distinct from the source video content;
processing the training video content to identify one or more audio features and one or more video features, wherein the one or more audio features includes a change in a pitch or a frequency and the one or more video features includes a geometric attribute, a color, or an indication of a change in form;
extracting, from the one or more audio features and the one or more video features based upon processing the training video content, at least one audio feature or at least one video feature specific to the object of interest;
generating a list of objects including the object of interest in the source video content;
receiving a user selection of the object of interest from the list of objects of interest;
a second phase for identifying the object of interest using the source video content including:
receiving the source video content including multiple objects distributed in a plurality of frames, the multiple objects including the object of interest;
identifying, in the plurality of frames, the object of interest in response to detecting the at least one audio feature or the at least one video feature specific to the object of interest extracted during the first phase;
segmenting the source video content into multiple chunks, wherein each chunk includes at least one frame having the object of interest; and
generating a target video content by combining the multiple chunks, the target video content including sequentially-arranged frames having the object of interest.

US Pat. No. 11,032,579

METHOD AND A DEVICE FOR ENCODING A HIGH DYNAMIC RANGE PICTURE, CORRESPONDING DECODING METHOD AND DECODING DEVICE

InterDigital VC Holdings,...

1. A method for decoding a stream coding a standard dynamic range picture obtained from a high dynamic range picture comprising:
obtaining a decoded standard dynamic range picture and color metadata associated in the stream with the coded standard dynamic range picture, wherein the color metadata comprises at least a maximum display luminance of a mastering display used in mastering said high dynamic range picture;
inverse mapping a luma signal of the decoded standard dynamic range picture responsive to said maximum display luminance of said mastering display in order to obtain a high dynamic range luminance signal;
performing color correction of a chroma signal of said decoded standard dynamic range picture responsive at least to said maximum display luminance of said mastering display to obtain a corrected chroma signal; and
reconstructing said high dynamic range picture from said obtained high dynamic range luminance signal and said corrected chroma signal.

US Pat. No. 11,032,578

RESIDUAL ENTROPY COMPRESSION FOR CLOUD-BASED VIDEO APPLICATIONS

Adobe Inc., San Jose, CA...

1. A system for encoding digital video content, the system comprising:
a storage facility; and
one or more processors configured to:
vectorize a compressed digital video stream to provide a plurality of vectors;
generate, by vector quantization using a codebook vector from a codebook, a residual vector for a vector included in the plurality of vectors;
remove zeros from the residual vector to create an optimized residual vector;
entropy code the optimized residual vector to produce a coded optimized residual vector;
store, in the storage facility, metadata associated with the coded optimized residual vector, the metadata including at least one of a length of the coded optimized residual vector and a length of each dimension of the coded optimized residual vector; and
store, in the storage facility, media data associated with the coded optimized residual vector, the media data including an index corresponding to the codebook vector used to generate the residual vector.
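The vector-quantization and zero-removal steps above can be sketched in pure Python; the function names and the metadata layout are assumptions (the claim leaves them open), and the entropy-coding stage itself is omitted:

```python
import math

def nearest_codebook_index(vec, codebook):
    # Vector quantization: pick the codebook vector at minimum distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))

def encode_vector(vec, codebook):
    idx = nearest_codebook_index(vec, codebook)
    residual = [x - y for x, y in zip(vec, codebook[idx])]
    # Remove zeros to form the optimized residual vector, keeping positions
    # (and the length, per the claimed metadata) so it can be re-expanded.
    positions = [i for i, r in enumerate(residual) if r != 0]
    optimized = [residual[i] for i in positions]
    metadata = {"length": len(optimized), "dim": len(vec),
                "positions": positions}
    media = {"index": idx, "residual": optimized}  # index into the codebook
    return metadata, media

def decode_vector(metadata, media, codebook):
    residual = [0.0] * metadata["dim"]
    for pos, val in zip(metadata["positions"], media["residual"]):
        residual[pos] = val
    return [c + r for c, r in zip(codebook[media["index"]], residual)]

codebook = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
meta, media = encode_vector([1.0, 0.5, 2.0], codebook)
```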

US Pat. No. 11,032,577

ELECTRONIC DEVICE FOR COMPRESSING IMAGE BASED ON COMPRESSION LOSS DATA RELATED TO COMPRESSION OF MULTIPLE BLOCKS, WHICH ARE SEGMENTS OF THE IMAGE, AND METHOD FOR OPERATING THE SAME

Samsung Electronics Co., ...

1. An electronic device, comprising:
a processor;
an image sensor; and
a control circuit operatively connected with the image sensor and connected with the processor via a designated interface, the control circuit configured to:
obtain a first image and a second image subsequent to the first image using the image sensor,
segment the first image into a plurality of blocks including a first block and a second block and compress the first image, wherein in compressing the first image, the control circuit is further configured to generate first compression loss data corresponding to the first block and second compression loss data corresponding to the second block,
provide compressed data of the first image corresponding to the first block and the second block to the processor via the designated interface,
identify a first compression property based on the first compression loss data and the second compression loss data,
segment the second image into a plurality of blocks including a third block and a fourth block that respectively correspond to the first block and the second block and compress the second image, wherein in compressing the second image, the control circuit is further configured to:
compress the third block according to the first compression property,
generate third compression loss data corresponding to the third block,
when a difference between the first compression loss data and the third compression loss data meets a first predetermined condition, compress the fourth block according to the first compression property, and
when the difference between the first compression loss data and the third compression loss data meets a second predetermined condition, compress the fourth block according to a second compression property different from the first compression property, and
provide compressed data of the second image corresponding to the third block and the fourth block to the processor via the designated interface.

US Pat. No. 11,032,576

SELECTIVELY ENHANCING COMPRESSED DIGITAL CONTENT

MICROSOFT TECHNOLOGY LICE...

1. A method, comprising:
decompressing compressed digital video content to generate decompressed digital video content including a plurality of decoded video frames;
identifying a decoded video frame from the plurality of decoded video frames;
receiving a segmentation mask for the decoded video frame, the segmentation mask including an identified area of interest for the decoded video frame, the segmentation mask being received from a server device in conjunction with receiving the compressed digital video content from the server device, the area of interest comprising a portion of the decoded video frame indicated by the segmentation mask; and
selectively applying a denoising model to the portion of the decoded video frame based on the segmentation mask to generate a denoised video frame in which one or more compression artifacts from the area of interest for the decoded video frame have been removed, wherein the denoising model comprises a machine learning model trained to receive an input image including at least one compression artifact and generate an output image in which the at least one compression artifact has been removed.

US Pat. No. 11,032,575

RANDOM ACCESS IN A VIDEO BITSTREAM

Telefonaktiebolaget LM Er...

1. A method for performing a random access operation in a video bitstream, comprising:
obtaining a first dependent random access point, DRAP, picture wherein said first DRAP picture is encoded as a trailing picture referencing only an intra random access point, IRAP, picture, preceding said first DRAP picture in said video bitstream, as an associated IRAP picture of said first DRAP picture such that said first DRAP picture may be used as a reference picture and constitutes a random access point in said video bitstream;
obtaining a second DRAP picture encoded as a trailing picture referencing only the IRAP picture preceding said first DRAP picture as an associated reference picture of said second DRAP picture such that said second DRAP picture may be used as a reference picture and constitutes a random access point in said video bitstream, said first DRAP picture preceding said second DRAP picture in said video bitstream;
obtaining said preceding IRAP picture of said video bitstream;
decoding said preceding IRAP picture;
decoding at least one of said first DRAP picture and said second DRAP picture, using said preceding IRAP picture associated with each of said first and second DRAP pictures as sole reference picture for the decoding of at least one of said first and second DRAP picture; and
performing a random access operation into said video bitstream at one of said decoded first DRAP picture or said decoded second DRAP picture.

US Pat. No. 11,032,574

METHOD AND APPARATUS FOR VIDEO CODING

TENCENT AMERICA LLC, Pal...

1. A method of video decoding in a decoder, comprising:
receiving a first syntax element in a bitstream indicating a difference between a first maximum allowed number of triangular prediction mode (TPM) candidates of a TPM applied to a first set of coding blocks and a second maximum allowed number of merge candidates in a merge mode applicable to a current coding block;
deriving the first maximum allowed number of TPM candidates based on the received first syntax element; and
constructing a TPM candidate list of the current coding block processed with the TPM according to the derived first maximum allowed number of TPM candidates.
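The derivation step above can be sketched in one line of Python; the subtraction direction is an assumption consistent with the claim's "difference" signaling:

```python
def derive_max_tpm_candidates(max_merge_candidates, signaled_difference):
    # The bitstream carries only the difference; the decoder recovers the
    # maximum allowed number of TPM candidates from the merge-list maximum.
    return max_merge_candidates - signaled_difference
```

For example, with a merge-list maximum of 6 and a signaled difference of 1, the TPM candidate list is built with at most 5 candidates.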

US Pat. No. 11,032,573

METHOD, APPARATUS AND MEDIUM FOR DECODING OR ENCODING

TENCENT AMERICA LLC, Pal...

1. A method of decoding or encoding, the method comprising:
receiving information regarding a target data block for encoding or decoding, the target data block for encoding or decoding being one of: a compressed video or image data block or an uncompressed video or image data block;
determining whether to use a recursive transform or a non-recursive transform for the encoding or decoding of the target data block;
when a result of the determination is to use the recursive transform: determining a first portion of the recursive transform based on a compound orthonormal transform (COT), deriving a second portion of the recursive transform due to the symmetry/anti-symmetry properties of the core of the recursive transform in view of the first portion of the COT, and causing or transmitting information that causes the target data block to be encoded or decoded using the first portion and the second portion of the recursive transform as the recursive transform.

US Pat. No. 11,032,572

LOW-FREQUENCY NON-SEPARABLE TRANSFORM SIGNALING BASED ON ZERO-OUT PATTERNS FOR VIDEO CODING

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising:
determining a position of a last significant coefficient in a transform block of video data;
determining a value of a low-frequency non-separable transform (LFNST) index for the transform block based on the position of the last significant coefficient relative to a zero-out region of the transform block, wherein the zero-out region of the transform block includes both a first region within an LFNST region of the transform block and a second region of the transform block outside the LFNST region, wherein determining the value of the LFNST index includes inferring the value of the LFNST index to be zero in the case that the position of the last significant coefficient in the transform block is in the zero-out region of the transform block, wherein the value of the LFNST index of zero indicates that the LFNST is not applied to the transform block; and
inverse transforming the transform block in accordance with the value of the LFNST index.
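The inference rule for the LFNST index can be sketched as follows; modeling the zero-out region as a set of (x, y) coordinate pairs is an illustrative assumption:

```python
def infer_lfnst_index(last_sig_pos, zero_out_region):
    # If the last significant coefficient falls inside the zero-out region,
    # the LFNST index is inferred to be 0 (LFNST not applied) rather than
    # being parsed from the bitstream.
    if last_sig_pos in zero_out_region:
        return 0
    return None  # not inferable: the index must be read from the bitstream
```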

US Pat. No. 11,032,571

IMAGE PROCESSING METHOD, TERMINAL, AND SERVER

HUAWEI TECHNOLOGIES CO., ...

1. An image processing method implemented by a server, the image processing method comprising:
determining a division location of vertical division using a latitude;
performing horizontal division and the vertical division on a longitude-latitude map or a sphere map of a to-be-processed image to obtain sub-areas of the longitude-latitude map or the sphere map, wherein a division location of the horizontal division is a preset latitude, wherein at least two types of vertical division intervals in an area are formed by adjacent division locations of the horizontal division, and wherein a vertical division interval is a distance between adjacent division locations of the vertical division;
sampling an image of a sub-area in a horizontal direction at a first sampling interval, wherein a higher latitude corresponding to the sub-area indicates a larger first sampling interval;
encoding sampled images of sampled sub-areas; and
encoding images of the sub-areas after sampling the image of the sub-area in the horizontal direction at the first sampling interval and after encoding the sampled images of the sampled sub-areas.
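The latitude-dependent horizontal sampling can be sketched with a hypothetical interval formula; the 1/cos(latitude) choice is an assumption (it reflects how a longitude-latitude map oversamples near the poles), since the claim only requires larger intervals at higher latitudes:

```python
import math

def first_sampling_interval(latitude_deg, base_interval=1):
    # A longitude-latitude map oversamples horizontally by roughly
    # 1/cos(latitude), so scaling the interval that way satisfies
    # "higher latitude -> larger first sampling interval".
    return max(base_interval,
               round(base_interval / math.cos(math.radians(latitude_deg))))
```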

US Pat. No. 11,032,570

MEDIA DATA PROCESSING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A method for presenting media data, comprising:
receiving, by a media processing device, a plurality of media data tracks, wherein each media data track comprises media data recorded at a viewpoint, and viewpoint identification information of the viewpoint;
obtaining, by the media processing device, viewpoint position information of viewpoints associated with the media data tracks; and
displaying, by the media processing device, media data of a first viewpoint based on the viewpoint identification information and viewpoint position information of the first viewpoint;
wherein obtaining viewpoint position information of a viewpoint comprises:
obtaining a plurality of samples in a timed metadata track associated with the viewpoint, on condition that the position of the viewpoint is dynamic, wherein each sample in the timed metadata track comprises a set of viewpoint position information, and each set of viewpoint position information indicates a position of the viewpoint; or
obtaining only one set of viewpoint position information from a media data track associated with the viewpoint, on condition that the position of the viewpoint is static, wherein the set of viewpoint position information indicates a position of the viewpoint.

US Pat. No. 11,032,569

NEURAL NETWORK POWERED CODEC

SONY INTERACTIVE ENTERTAI...

1. A method for training a video encoder/decoder system, the method comprising:
a) generating at least two sets of video encoding parameters wherein the at least two sets of video encoding parameters are valid;
b) masking a set of the at least two sets of encoding parameters with invalid values to generate an invalid set of video encoding parameters;
c) providing the at least two sets of video encoding parameters to one or more neural networks;
d) training the one or more neural networks to predict one or more valid values corresponding to values of the invalid set using an iterative training algorithm;
e) determining a prediction error of the predicted one or more valid values from results of the training of the one or more neural networks and the at least two sets of video encoding parameters;
f) inserting the prediction error into encoding data and dropping encoding parameters from the encoding data that are determined to be accurately predicted by the one or more neural networks with the addition of the prediction error; and
g) encoding a new video stream with the prediction error and without the dropped encoding parameters.

US Pat. No. 11,032,568

METHOD OF VIDEO CODING USING PREDICTION BASED ON INTRA PICTURE BLOCK COPY

HFI INNOVATION INC., Zhu...

1. A method of signaling of a coding mode selected from a prediction mode group for a picture, the prediction mode group including an Intra-block copy (IntraBC) mode, wherein the picture is divided into multiple coding units, the method comprising:
receiving input data associated with a current coding unit in the picture, wherein the current coding unit is coded using a prediction mode selected from the prediction mode group; and
in response to the IntraBC mode being selected for the current coding unit,
determining a binary string for a partition mode associated with the IntraBC mode, wherein the binary string for the partition mode associated with the IntraBC mode is the same as a corresponding binary string for a corresponding partition mode associated with an Inter mode when the Inter mode is selected; and
encoding or decoding the binary string for the partition mode associated with the IntraBC mode using context-adaptive binary arithmetic coding (CABAC) with one or more context models, wherein the one or more context models for encoding or decoding the binary string for the partition mode associated with the IntraBC are the same as corresponding one or more context models for encoding or decoding the corresponding binary string for the corresponding partition mode associated with the Inter mode when the Inter mode is selected,
wherein
the determining the binary string for the partition mode is performed based on a bin string assignment scheme, and
the bin string assignment scheme includes, in a case that an N×N mode is included in the bin string assignment scheme, assigning to the N×N mode a first bin string that has a greatest number of bits among all bin strings assigned to all partitioning modes included in the bin string assignment scheme.

US Pat. No. 11,032,567

AUTOMATIC ADAPTIVE LONG TERM REFERENCE FRAME SELECTION FOR VIDEO PROCESS AND VIDEO CODING

Intel Corporation, Santa...

1. A system to apply an adaptive Long Term Reference to a video sequence, comprising:
one or more substrates and logic coupled to the one or more substrates, wherein the logic is to:
receive content analysis of stability of the video sequence;
receive coding condition of the video sequence;
automatically toggle Long Term Reference operations between an on setting mode and an off setting mode based at least in part on the received content analysis and coding condition information, wherein no frames of the video sequence are assigned as Long Term Reference frames and any previously assigned Long Term Reference frames are unmarked when in the off setting mode, and wherein a Long Term Reference assign action is signaled that unmarks a previous Long Term Reference frame and marks a current frame as a current Long Term Reference frame in response to detection of a scene transition when in the on setting mode; and
a power supply to provide power to the logic.

US Pat. No. 11,032,566

METHOD AND APPARATUS FOR ENCODING AND DECODING AN IMAGE WITH INTER LAYER MOTION INFORMATION PREDICTION ACCORDING TO MOTION INFORMATION COMPRESSION SCHEME

Canon Kabushiki Kaisha, ...

1. A method of encoding an image according to a scalable format, said format using a reference layer picture and a resampled picture, an image area of the image being predictively encoded based on motion information, said motion information being itself predictively encoded based on a motion information predictor from a set of motion information predictor candidates, wherein the method comprises:
determining a position in the reference layer picture using a position in an image area of the resampled picture; and
determining a set of motion information predictor candidates including a motion information predictor candidate based on motion information associated with an image area belonging to the reference layer picture,
wherein the determining the position comprises:
deriving, in the reference layer picture, a corresponding position of said position in the image area of the resampled picture using a scaling factor; and
deriving a value X′ from at least one coordinate X of the corresponding position using X′=((X+4)>>4)<<4,
and the determining the set of motion information predictor candidates comprises obtaining the motion information predictor candidate to be included in said set of motion information predictor candidates using, if available, motion information associated with the determined position in the reference layer picture, the determined position indicated by the value X′.
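The claimed derivation rounds a coordinate (with a +4 bias) to the 16-sample granularity at which the reference layer stores compressed motion information; a direct transcription:

```python
def align_position(x):
    # X' = ((X + 4) >> 4) << 4: round the coordinate, with a +4 bias,
    # to the 16-sample grid of the compressed motion field.
    return ((x + 4) >> 4) << 4
```

Coordinates up to 11 map to 0, coordinates 12 through 27 map to 16, and so on.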

US Pat. No. 11,032,565

SYSTEM FOR NESTED ENTROPY ENCODING

Dolby International AB, ...

1. An apparatus for decoding a motion vector of a current block in a sequence of pictures encoded in a bitstream of coded pictures, the apparatus comprising processing circuitry to perform operations comprising:
obtaining the bitstream of coded pictures;
obtaining, from the bitstream, a flag that indicates whether candidate motion vector trimming is to be applied when decoding the bitstream of coded pictures;
determining that the flag indicates that candidate motion vector trimming is to be applied when decoding the bitstream of coded pictures and, in response:
identifying a first adjacent block and a second adjacent block which are adjacent to the current block in a current picture;
constructing a set of candidate motion vector predictors comprising motion vectors of the first and second adjacent blocks;
determining whether the motion vectors of the first and second adjacent blocks are identical;
when the motion vectors of the first and second adjacent blocks are identical:
removing the motion vector of the second adjacent block from the set of candidate motion vector predictors, and
adding a motion vector of a block in a previously decoded picture to the set of candidate motion vector predictors, wherein the block in the previously decoded picture that is added to the set of candidate motion vector predictors for the current block is larger than the current block;
selecting one of the motion vectors in the set of the candidate motion vector predictors as the motion vector predictor of the current block; and
deriving the motion vector of the current block based on the selected motion vector predictor.
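The trimming behaviour in the claim, duplicate removal followed by insertion of a temporal candidate, can be sketched as follows; representing motion vectors as (x, y) tuples is an assumption:

```python
def build_candidate_list(mv_first, mv_second, temporal_mv):
    # Start from the two adjacent blocks' motion vectors; if they are
    # identical, drop the duplicate and add a candidate taken from a
    # (larger) block in a previously decoded picture.
    if mv_first == mv_second:
        return [mv_first, temporal_mv]
    return [mv_first, mv_second]
```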

US Pat. No. 11,032,564

METHOD FOR ENCODING AND DECODING IMAGE AND DEVICE USING SAME

LG ELECTRONICS INC., Seo...

1. A method of video decoding by a decoding apparatus, the method comprising:
deriving a prediction mode for a current block as an inter prediction mode based on received information;
deriving a temporal motion vector predictor candidate for the current block;
deriving a motion vector of the current block based on the temporal motion vector predictor candidate;
deriving a predicted pixel of the current block based on the motion vector of the current block; and
generating a reconstructed picture based on the predicted pixel,
wherein deriving the temporal motion vector predictor candidate comprises:
determining a reference prediction unit (colPu) which is a prediction block encompassing a modified location (x, y); and
deriving the temporal motion vector predictor candidate based on motion information of the colPu in a reference picture,
wherein the modified location is specified by a position ((xPCtr>>4)<<4, (yPCtr>>4)<<4), wherein a position (xPCtr, yPCtr) is a position of a pixel located at the bottom right side among four central pixels of a co-located block, and wherein the co-located block is a block with a same position and a same size as the current block and is located in the reference picture.
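The modified location, the bottom-right pixel of the four central pixels of the co-located block rounded to the 16×16 motion-storage grid, can be transcribed directly (block coordinates and sizes in luma samples; the helper name is an assumption):

```python
def col_pu_position(x_block, y_block, width, height):
    # (xPCtr, yPCtr): bottom-right of the four central pixels of the
    # co-located block, then ((xPCtr >> 4) << 4, (yPCtr >> 4) << 4).
    x_ctr = x_block + width // 2
    y_ctr = y_block + height // 2
    return ((x_ctr >> 4) << 4, (y_ctr >> 4) << 4)
```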

US Pat. No. 11,032,563

METHOD AND APPARATUS FOR AFFINE MODEL PREDICTION

Tencent America LLC, Pal...

1. A method for video decoding in a decoder, comprising:
decoding prediction information of a current block in a current picture from a coded video bitstream, the prediction information being indicative of an affine based motion vector prediction;
identifying consecutive minimum compensation blocks, from minimum compensation blocks adjacent to the current block, that are affine coded based on affine flags that are stored with motion information of the consecutive minimum compensation blocks;
determining, based on the motion information of at least two minimum compensation blocks in the identified consecutive minimum compensation blocks, parameters of an affine model; and
reconstructing at least a sample of the current block based on the affine model that is used to transform between the current block and a reference block in a reference picture that has been reconstructed.

US Pat. No. 11,032,562

EFFECTIVE WEDGELET PARTITION CODING USING SPATIAL PREDICTION

GE Video Compression, LLC...

1. A decoder for reconstructing a sample array from a data stream, the decoder comprising a processor configured for:
predicting a first block of the sample array using intra prediction based on an intra-prediction direction related to the first block;
deriving a position of a wedgelet separation line within a second block of the sample array neighboring the first block based on an extension direction of the wedgelet separation line within the second block, wherein the extension direction is based on the intra-prediction direction used to predict the first block, and the wedgelet separation line divides the second block into first and second wedgelet portions; and
decoding the second block based on a first value related to samples within the first wedgelet portion and a second value related to samples within the second wedgelet portion.

US Pat. No. 11,032,561

FLEXIBLE BAND OFFSET MODE IN SAMPLE ADAPTIVE OFFSET IN HEVC

SONY CORPORATION, Tokyo ...

1. A decoding device, comprising:
circuitry configured to:
decode a bit stream;
generate a decoded image based on the decode of the bit stream;
generate, for the decoded image in a band offset mode in which an offset is applied to each band that indicates a range to which pixel values belong, a modulo remainder for a number of first band of consecutive bands based on a total number of bands;
set the first band and bands other than the first band included in the consecutive bands based on the modulo remainder, wherein
the first band is at a beginning of the consecutive bands, and
the consecutive bands are a plurality of divided bands of the total number of bands; and
apply the offset to pixels that belong to the consecutive bands, wherein the offset is set for each of the consecutive bands that includes the first band and the bands other than the first band.
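The wrap-around of the consecutive bands implied by the modulo remainder can be sketched as follows; the figure of 32 equal bands follows HEVC's SAO band offset and should be treated as an assumption here:

```python
def consecutive_bands(first_band, num_consecutive, total_bands=32):
    # A run of consecutive bands that starts near the top of the range
    # wraps around via the modulo remainder on the total number of bands.
    return [(first_band + i) % total_bands for i in range(num_consecutive)]
```

For example, a run of four bands starting at band 30 covers bands 30, 31, 0, and 1; an offset can then be applied to pixels whose values fall in any of those bands.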

US Pat. No. 11,032,560

METHOD AND APPARATUS FOR VIDEO CODING WITHOUT UPDATING THE HMVP TABLE

Tencent America LLC, Pal...

1. A method of video decoding at a video decoder, comprising:
receiving a merge sharing region including a plurality of coding blocks;
constructing a shared merge candidate list for the merge sharing region; and
decoding the merge sharing region based on the shared merge candidate list, wherein
at least one inter coded coding block within the merge sharing region is processed without updating a history-based motion vector prediction (HMVP) table with motion information of the at least one inter coded coding block, and
motion information of a last inter coded coding block within the merge sharing region according to a decoding order is used to update the HMVP table, and other inter coded coding block(s) within the merge sharing region is processed without updating the HMVP table with motion information of the other inter coded coding block(s).
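The selective HMVP update in the claim, where only the last inter-coded block of the region feeds the table, can be sketched as follows; the block representation and field names are assumptions:

```python
def decode_merge_sharing_region(blocks, hmvp_table):
    # Blocks in the merge sharing region are decoded without touching the
    # HMVP table; only the last inter-coded block in decoding order
    # contributes its motion information, in a single update at the end.
    last_inter_motion = None
    for block in blocks:
        if block["inter_coded"]:
            last_inter_motion = block["motion"]  # decoded; table untouched
    if last_inter_motion is not None:
        hmvp_table.append(last_inter_motion)
    return hmvp_table

region = [
    {"inter_coded": True, "motion": (1, 0)},
    {"inter_coded": False, "motion": None},
    {"inter_coded": True, "motion": (2, 3)},
]
hmvp = decode_merge_sharing_region(region, [])
```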

US Pat. No. 11,032,559

VIDEO ENCODING AND DECODING METHOD AND APPARATUS USING THE SAME

Electronics and Telecommu...

1. A video decoding method supporting layers, the method comprising:
constructing a reference layer list comprising one or more reference layers;
constructing, based on the constructed reference layer list, a reference picture list comprising a decoded picture of the one or more reference layers; and
predicting and decoding the picture of a target layer by referring to the reference picture list,
wherein the decoded picture of the one or more reference layers included in the reference picture list is treated as a long-term reference picture and
wherein
the constructing of the reference picture list comprises:
configuring a first set comprising the decoded picture of the reference layer;
configuring a second set comprising pictures on a same layer as the picture of the target layer; and
combining the first set and the second set.

US Pat. No. 11,032,558

METHOD AND APPARATUS FOR PERFORMING SUPERPOSITION CODED MODULATION SCHEME IN A BROADCASTING OR COMMUNICATION SYSTEM

Samsung Electronics Co., ...

1. A method for receiving a signal, the method comprising:
receiving a superposition-coded modulation (SCM) signal with a noise signal;
de-mapping the SCM signal with the noise signal to generate first values corresponding to a first layer signal;
decoding and mapping the first values based on a low density parity check (LDPC) code to determine constellation points corresponding to the first layer signal;
determining a second layer signal of the SCM signal based on the constellation points and the SCM signal with the noise signal;
de-mapping the second layer signal to generate second values corresponding to the second layer signal; and
decoding the second values based on the LDPC code to determine information bits corresponding to the second layer signal,
wherein the SCM signal is determined based on a SCM coefficient, the first layer signal and the second layer signal at a transmitter,
wherein the SCM coefficient is used to control a transmission power ratio between a total power and a power corresponding to the first layer, or between the total power and a power corresponding to the second layer, and
wherein a quality of services (QoS) level of the first layer is different from a QoS level of the second layer.
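
A common way to realize the SCM coefficient described above is a power-split superposition of the two layer signals; the sketch below assumes unit-power inputs and a single scalar coefficient `alpha`, which are illustrative choices, not the patent's parameterization:

```python
import math

def superpose(first_layer, second_layer, alpha):
    # alpha controls the transmission power ratio: the first layer
    # gets alpha of the total power, the second layer 1 - alpha
    # (assuming unit-power input symbols).
    return [math.sqrt(alpha) * a + math.sqrt(1.0 - alpha) * b
            for a, b in zip(first_layer, second_layer)]
```

At the receiver, the first layer is decoded and re-mapped first, then subtracted to expose the second layer, matching the claim's decode-then-determine ordering.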

US Pat. No. 11,032,557

DECODING DEVICE, CODING DEVICE, AND METHOD

SHARP KABUSHIKI KAISHA, ...

1. A decoding device for decoding coded data, the decoding device comprising:a prediction mode deriving circuitry that derives a prediction mode of a target block using a prediction mode index, wherein the prediction mode indicates one of directional prediction modes; and
a predicted image generating circuitry that generates a predicted image using the prediction mode,
wherein:
the prediction mode is set equal to one of the directional prediction modes included in one of at least three prediction sets, wherein each of the three prediction sets is comprised of a same number of directional prediction modes,
a first set of the three prediction sets has more directional prediction modes used for predicting in a horizontal direction than a second set of the three prediction sets,
a third set of the three prediction sets has more directional prediction modes used for predicting in a vertical direction than the second set,
in a case that a value of a flag specifying whether a prediction mode is predicted from a neighboring block is equal to one, a value of the prediction mode is set equal to a first estimated prediction mode, and
otherwise and in a case that a prediction mode which is set by using remainder information is greater than a second estimated prediction mode, a value of the prediction mode is incremented by one.

US Pat. No. 11,032,556

CODING OF SIGNIFICANCE MAPS AND TRANSFORM COEFFICIENT BLOCKS

GE Video Compression, LLC...

1. An apparatus for decoding a transform coefficient block encoded in a data stream, comprising a processor, which when executes computer-readable code, is configured to:extract, from the data stream, syntax elements via context-based entropy decoding, wherein each of the syntax elements indicates whether a significant transform coefficient is present at a corresponding position within the transform coefficient block;
associate each of the syntax elements with the corresponding position within the transform coefficient block in a first scan order, wherein the first scan order is selected from a plurality of scan orders; and
use, for context-based entropy decoding of at least one syntax element of the syntax elements, a context which is selected for the at least one syntax element based on a size of the transform coefficient block and a position of the at least one syntax element within the transform coefficient block, wherein contexts for different syntax elements are selected based on different combinations of the size of the transform coefficient block and the position of the respective syntax element.

US Pat. No. 11,032,555

EFFECTIVE PREDICTION USING PARTITION CODING

GE Video Compression, LLC...

1. A decoder for reconstructing a depth map of a video signal using encoded information from a data stream, the decoder comprising a processor configured for:deriving a bipartition of a block of the depth map into first and second portions;
associating each of neighboring samples of the depth map with a respective one of the first and second portions, the neighboring samples adjoining the block of the depth map;
predicting the block of the depth map by determining a first predicted value for the first portion based on values of a first set of the neighboring samples, or determining a second predicted value for the second portion based on values of a second set of the neighboring samples;
determining, from the data stream, one or more refinement values for the first or second predicted value, wherein the one or more refinement values include a first or second refinement value; and
refining the prediction of the block by applying the first refinement value to the first predicted value for the first portion or applying the second refinement value to the second predicted value for the second portion.
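
The predict-then-refine flow of the claim can be sketched as below; the integer mean and the fallback value of 128 for an empty partition are assumptions, not the patent's method:

```python
def predict_bipartition(neighbor_samples, labels, refinements):
    # neighbor_samples: values of samples adjoining the block
    # labels: 0/1 partition label associating each neighbor sample
    #         with the first or second portion
    # refinements: (first, second) refinement values from the stream
    means = []
    for part in (0, 1):
        vals = [s for s, l in zip(neighbor_samples, labels) if l == part]
        # Predicted value per portion: mean of its neighbor samples
        # (128 is an assumed mid-range fallback when no sample maps).
        means.append(sum(vals) // len(vals) if vals else 128)
    # Refine each portion's prediction with its refinement value.
    return means[0] + refinements[0], means[1] + refinements[1]
```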

US Pat. No. 11,032,554

VIDEO ENCODING/DECODING METHOD AND DEVICE FOR CONTROLLING REFERENCE IMAGE DATA ACCORDING TO REFERENCE FREQUENCY

SAMSUNG ELECTRONICS CO., ...

1. A video decoding method comprising:obtaining reference image data from a bitstream;
determining a reference region that is a rectangular area split from a reference picture, the reference region comprising a plurality of reference image data and each of the plurality of reference image data being data of a rectangular area;
determining a frequency of references to the reference region referred by image data to be decoded, wherein the frequency of references to the reference region is determined as a sum of frequencies of the references to each of the plurality of the reference image data of the reference region referred by the image data to be decoded;
identifying all of the plurality of the reference image data of the reference region as a first reference image data or a second reference image data, by comparing the frequency of the references to the reference region and a reference value, wherein the first reference image data has a higher frequency of reference by the image data to be decoded than the second reference image data;
storing the identified plurality of reference image data in a memory; and
decoding the image data by using the identified reference image data stored in the memory,
wherein when a reference image data is comprised in the reference region and another reference region, a frequency of references to the reference image data is added to the frequency of references to the reference region and a frequency of references to the another reference region.
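
The region-frequency accounting above, including the rule that data shared by two regions counts toward both, can be sketched like this (the dict-of-sets layout is an illustrative assumption):

```python
def region_reference_frequencies(regions, data_ref_counts):
    # regions: region id -> set of reference-image-data ids it contains
    # data_ref_counts: data id -> number of references by the image
    #                  data being decoded
    # A data item shared by two regions contributes its count to the
    # frequency of each region, per the claim's final clause.
    return {rid: sum(data_ref_counts.get(d, 0) for d in members)
            for rid, members in regions.items()}
```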

US Pat. No. 11,032,553

LUMINANCE BASED CODING TOOLS FOR VIDEO COMPRESSION

Dolby Laboratories Licens...

1. A computer-implemented method for processing spatial regions of a video image represented in a video signal, the method comprising:dividing an available luminance range as supported by the video signal into two or more partitions that correspond to two or more different specific brightness ranges of luminance levels for distinguishing relatively brighter spatial regions of the video image from relatively darker spatial regions of the video image;
determining a luminance indicator of a specific spatial region in the video image, wherein the luminance indicator is encoded as a code word in the video signal, wherein the luminance indicator indicates that a specific partition corresponding to a specific brightness range of luminance levels, among the two or more different brightness ranges of luminance levels, is utilized in the specific spatial region;
determining, based at least in part on the specific brightness range of luminance levels, thresholds and values of operational parameters used in one or more signal processing operations, internal precisions of one or more of the thresholds and the values of operational parameters depending on the specific brightness range of luminance levels;
wherein the one or more signal processing operations includes applying an image processing filter of a same type to the video image with two or more different sets of filter parameters respectively for the two or more different specific brightness ranges of luminance levels;
wherein the two or more different sets of filter parameters are represented with two or more different precisions respectively;
selecting, from the thresholds and values of operational parameters determined based at least in part on the specific brightness range of luminance levels, a specific set of thresholds and values of operational parameters for applying to the specific spatial region;
wherein the specific set of thresholds and values of operational parameters for applying to the specific spatial region includes a specific set of filter parameters selected, based at least in part on the specific brightness range of luminance levels, from the two or more different sets of filter parameters for the image processing filter of the same type.

US Pat. No. 11,032,552

VIDEO ENCODING METHOD, VIDEO ENCODING APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A video encoding method for an electronic device, comprising:receiving a video image frame, and acquiring at least one coding tree unit (CTU) of the video image frame;
dividing the CTU according to different video prediction unit division rules, to acquire video prediction units of different sizes of the CTU;
performing initial selection on intra-predicted brightness prediction directions of the video prediction units of different sizes according to a first rate-distortion-evaluation-function, to obtain a preset number of intra-predicted initially-selected brightness-directions of the video prediction units of different sizes;
performing fine selection on intra-predicted optimal brightness-directions of a related video prediction unit and intra-predicted initially-selected brightness-directions of a video prediction unit of a current size according to a second rate-distortion-evaluation-function, to obtain intra-predicted optimal brightness-directions of the video prediction unit of the current size;
performing fine selection on intra-predicted chroma prediction directions of the video prediction units of different sizes according to the second rate-distortion-evaluation-function, to obtain intra-predicted optimal chroma-directions of the video prediction units of different sizes; and
performing intra-prediction encoding on a current video encoding unit according to intra-predicted optimal brightness-directions of the video prediction units of different sizes and the intra-predicted optimal chroma-directions of the video prediction units of different sizes.

US Pat. No. 11,032,551

SIMPLIFIED MOST PROBABLE MODE LIST GENERATION SCHEME

TENCENT AMERICA LLC, Pal...

1. A method of signaling an intra prediction mode used to encode a current block in an encoded video bitstream using at least one processor, the method comprising:generating a first most probable mode (MPM) list corresponding to a zero reference line of the current block, wherein the first MPM list comprises a plurality of angular intra prediction modes;
generating a second MPM list corresponding to one or more non-zero reference lines of the current block, wherein the second MPM list comprises the plurality of angular intra prediction modes;
signaling a reference line index indicating a reference line used to encode the current block from among the zero reference line and the one or more non-zero reference lines; and
signaling an intra mode index indicating the intra prediction mode within the first MPM list or the second MPM list,
wherein based on the reference line index indicating the reference line is the zero reference line, based on a respective intra prediction mode of a first neighboring block of the current block being a non-angular mode, and based on a respective intra prediction mode of a second neighboring block of the current block being an angular mode, a first intra prediction mode of the first MPM list is the non-angular mode, and a second intra prediction mode of the first MPM list is the angular mode.
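
The zero-reference-line ordering condition in the final clause can be sketched as follows; the mode numbering (non-angular below 2) mirrors common codec convention but is an assumption here, and the symmetric case is added by analogy rather than taken from the claim:

```python
def zero_line_mpm_head(mode_a, mode_b):
    # Hypothetical numbering: modes >= 2 are angular.
    is_angular = lambda m: m >= 2
    # Claim's case: neighbor A non-angular, neighbor B angular ->
    # the first MPM entry is the non-angular mode, the second the
    # angular mode.
    if not is_angular(mode_a) and is_angular(mode_b):
        return [mode_a, mode_b]
    # Symmetric case (assumed, not stated in this claim excerpt).
    if is_angular(mode_a) and not is_angular(mode_b):
        return [mode_b, mode_a]
    return [mode_a, mode_b]
```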

US Pat. No. 11,032,550

METHOD AND APPARATUS OF VIDEO CODING

MEDIATEK INC., Hsinchu (...

1. A method of video decoding, comprising:receiving a bitstream that includes coded data to be decoded as a current block in an image frame;
obtaining an enable flag that is included in a Sequence Parameter Set (SPS) or a Picture Parameter Set (PPS) of the bitstream, the enable flag indicating whether intra-inter prediction functionality is enabled for a corresponding sequence or a corresponding picture that includes the current block;
when the enable flag indicates that the intra-inter prediction functionality is enabled for the corresponding sequence or the corresponding picture that includes the current block, determining, as a special case of an inter prediction, whether the current block is coded according to an intra-inter prediction; and
when the current block is determined to be coded according to the intra-inter prediction:
generating an inter predictor of the current block;
generating an intra predictor of the current block based on samples of neighboring pixels and an intra prediction mode for the current block that locates the samples of neighboring pixels, the intra prediction mode for the current block being a Planar mode or a DC mode;
determining an intra weight coefficient according to intra information of a previously coded block regardless of the intra prediction mode for the current block, the intra weight coefficient being the same for all samples of the current block;
generating a final predictor of the current block by combining the inter predictor and the intra predictor according to the intra weight coefficient, wherein the intra weight coefficient indicates a weight of the intra predictor in the final predictor; and
reconstructing the current block for output based on the final predictor.
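
The weighted intra/inter combination with a single block-wide weight can be sketched as below; the fixed-point shift of 3 and the rounding offset are illustrative assumptions, not values from the claim:

```python
def blend_intra_inter(intra_pred, inter_pred, w_intra, shift=3):
    # One intra weight for all samples of the block, per the claim.
    # Weights are out of (1 << shift); rounding offset added before
    # the right shift (fixed-point convention assumed here).
    total = 1 << shift
    return [(w_intra * i + (total - w_intra) * p + (total >> 1)) >> shift
            for i, p in zip(intra_pred, inter_pred)]
```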

US Pat. No. 11,032,549

SPATIAL LAYER RATE ALLOCATION

Google LLC, Mountain Vie...

1. A method comprising:receiving, at data processing hardware, transform coefficients corresponding to a scaled video input signal, the scaled video input signal comprising a plurality of spatial layers, the plurality of spatial layers comprising a base layer;
determining, by the data processing hardware, a spatial rate factor based on a sample of frames from the scaled video input signal, the spatial rate factor defining a factor for bit rate allocation at each spatial layer of an encoded bit stream formed from the scaled video input signal, the spatial rate factor represented by a difference between a rate of bits per transform coefficient of the base layer and an average rate of bits per transform coefficient for the plurality of spatial layers; and
reducing, by the data processing hardware, a distortion for the plurality of spatial layers of the encoded bit stream by allocating a bit rate to each spatial layer based on the spatial rate factor and the sample of frames.
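
The spatial rate factor defined above is a difference of per-coefficient rates; a direct transcription (illustrative only) is:

```python
def spatial_rate_factor(bits_per_coeff_base, bits_per_coeff_layers):
    # Difference between the base layer's rate of bits per transform
    # coefficient and the average rate across all spatial layers.
    avg = sum(bits_per_coeff_layers) / len(bits_per_coeff_layers)
    return bits_per_coeff_base - avg
```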

US Pat. No. 11,032,548

SIGNALING FOR REFERENCE PICTURE RESAMPLING

TENCENT AMERICA LLC, Pal...

1. A method of decoding an encoded video bitstream using at least one processor, the method comprising:obtaining a coded picture from the encoded video bitstream;
decoding the coded picture to generate a decoded picture;
obtaining from the encoded video bitstream a first flag indicating whether reference picture resampling is enabled;
based on the first flag indicating that the reference picture resampling is enabled, obtaining from the encoded video bitstream a second flag indicating whether reference pictures have a constant reference picture size indicated in the encoded video bitstream;
based on the first flag indicating that the reference picture resampling is enabled, obtaining from the encoded video bitstream a third flag indicating whether output pictures have a constant output picture size indicated in the encoded video bitstream;
based on the second flag indicating that the reference pictures have the constant reference picture size, generating a reference picture by resampling the decoded picture to have the constant reference picture size, and storing the reference picture in a decoded picture buffer; and
based on the third flag indicating that the output pictures have the constant output picture size, generating an output picture by resampling the decoded picture to have the constant output picture size, and outputting the output picture.

US Pat. No. 11,032,547

IMAGE ENCODING APPARATUS, CONTROL METHOD THEREOF, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image encoding apparatus that encodes a frame obtained by an image capturing unit as RAW image data, the apparatus comprising:a memory storing a program; and
one or more processors which execute the program, wherein the one or more processors function as:
a calculating unit that finds a color temperature of the RAW image data of a target frame which is to be encoded, and calculates a gain value for each of color components in order to apply white balancing to the RAW image data;
a generating unit that generates a plurality of planes, each plane constituted by data of a single color component, from the RAW image data in the target frame;
an adjusting unit that adjusts the white balance of each plane obtained by the generating unit on the basis of the gain calculated by the calculating unit;
an encoding unit that encodes each plane by frequency-transforming, quantizing, and encoding each plane; and
a control unit that, when each plane of the target frame is encoded by the encoding unit, encodes the plurality of planes resulting from the adjustment by the adjusting unit, on the basis of quantization parameters for each plane of the RAW image data one frame before the target frame, wherein the calculating unit calculates relative gain values for R and B components when the gain of a G component is a reference value of 1, and
wherein the control unit:
calculates R and B thresholds on the basis of a pre-set threshold for the G component and the R and B component gain values calculated by the calculating unit;
compares sub-band transform coefficients at a predetermined decomposition level for each component, used in the quantization when the encoding unit encodes the RAW image data from the previous frame, with the thresholds for each component; and
determines, on the basis of the result of the comparison, whether to use the plurality of planes generated by the generating unit, or the plurality of planes resulting from the adjustment by the adjusting unit, as the target of the encoding by the encoding unit for the target frame.

US Pat. No. 11,032,546

QUANTIZER FOR LOSSLESS AND NEAR-LOSSLESS COMPRESSION

TENCENT AMERICA LLC, Pal...

1. A system comprising:at least one memory configured to store computer program code; and
at least one processor configured to access the computer program code and operate as instructed by the computer program code, the computer program code comprising:
first obtaining code configured to cause the at least one processor to obtain a first syntax element that indicates a first quantization index value for an AC coefficient of a coded image;
second obtaining code configured to cause the at least one processor to obtain at least one second syntax element that indicates an offset value;
third obtaining code configured to cause the at least one processor to obtain a second quantization index value for another coefficient of the coded image by combining the first quantization index value of the first syntax element and the offset value of the at least one second syntax element to obtain a combined value, and modifying, in a case where the combined value is less than a predetermined minimum value, the combined value to be the predetermined minimum value as the second quantization index value;
fourth obtaining code configured to cause the at least one processor to obtain a quantization step size that corresponds to the second quantization index value that is obtained;
determining code configured to cause the at least one processor to determine whether a mode in which the coded image is to be decoded is a lossy mode or a lossless mode based on determining whether the first quantization index value is equal to a quantization index value associated with lossless coding, and based on determining whether the offset value is less than or equal to the quantization index value associated with the lossless coding;
first setting code configured to cause the at least one processor to set the predetermined minimum value to a value, that is compared to the combined value, based on the determining of the determining code; and
decoding code configured to cause the at least one processor to decode the coded image in the lossy mode or the lossless mode based on the determining of the determining code, and by using the quantization step size that is obtained.
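
The index combination, clamping, and lossless-mode determination in this claim can be sketched as follows; the lossless index value of 0 is an assumption for illustration, not taken from the patent:

```python
LOSSLESS_INDEX = 0  # hypothetical index associated with lossless coding

def derive_second_index_and_mode(first_index, offset, minimum=0):
    # Combine the first quantization index with the signaled offset,
    # then clamp to the predetermined minimum, per the claim.
    combined = first_index + offset
    second_index = max(combined, minimum)
    # Lossless mode: first index equals the lossless index and the
    # offset is less than or equal to that index.
    lossless = (first_index == LOSSLESS_INDEX
                and offset <= LOSSLESS_INDEX)
    return second_index, lossless
```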

US Pat. No. 11,032,545

REDUCING SEAM ARTIFACTS IN 360-DEGREE VIDEO

QUALCOMM Incorporated, S...

1. A method of processing video data at a video encoder, comprising:obtaining at least one 360-degree rectangular formatted projected picture;
detecting a projection boundary in the at least one 360-degree rectangular formatted projected picture;
disabling an in-loop filtering based on detecting the at least one 360-degree rectangular formatted projected picture comprises the projection boundary, wherein in-loop filtering is disabled for the entire at least one 360-degree rectangular formatted projected picture by disabling in-loop filtering in a parameter set; and
generating an encoded video bitstream.

US Pat. No. 11,032,544

TRANSFORM AND QUANTIZATION ARCHITECTURE FOR VIDEO CODING AND DECODING

TEXAS INSTRUMENTS INCORPO...

1. A method for transformation in a transform component of a digital system, the method comprising:decomposing an input vector using even-odd decomposition to generate a first vector consisting of even elements of the input vector and a second vector consisting of odd elements of the input vector;
comparing the first vector with a first threshold size;
comparing the second vector with a second threshold size;
in response to the second vector being greater than the second threshold size, using butterfly decomposition to apply an odd part of a transform to the second vector wherein the transform is one selected from a group consisting of a discrete cosine transform (DCT) and an inverse discrete cosine transform (IDCT); and
in response to the first vector being less than or equal to the first threshold size, using matrix multiplication to apply an even part of the transform to the first vector.
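
The even-odd decomposition and the size-threshold dispatch above can be sketched as follows (the `"other"` fallback label and the comparison of vector lengths against the thresholds are simplifying assumptions):

```python
def even_odd_split(vec):
    # Even-indexed and odd-indexed elements of the input vector.
    return vec[0::2], vec[1::2]

def transform_strategy(vec, even_threshold, odd_threshold):
    even, odd = even_odd_split(vec)
    # Per the claim: a large odd part goes through a butterfly
    # decomposition of the odd part of the (I)DCT; a small even part
    # is handled by matrix multiplication with the even part.
    odd_path = "butterfly" if len(odd) > odd_threshold else "other"
    even_path = "matrix" if len(even) <= even_threshold else "other"
    return even_path, odd_path
```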

US Pat. No. 11,032,543

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method of video decoding at a video decoder, comprising:receiving a coding block having a width of W pixels and a height of H pixels;
partitioning the coding block into sub processing units (SPUs) each having a width of a lesser of W or K pixels and a height of a lesser of H or K pixels, where K is a dimension of a virtual pipeline data unit (VPDU) having an area of K×K pixels; and
partitioning each SPU into transform units, with each transform unit having a maximum allowable transform unit size of M pixels.
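
The SPU sizing rule reduces to a pair of `min` operations; in the sketch below, K = 64 is an assumed VPDU dimension, not a value from the claim:

```python
def spu_size(w, h, k=64):
    # Each SPU is min(W, K) x min(H, K) pixels, where K is the
    # dimension of the K x K virtual pipeline data unit (VPDU).
    return min(w, k), min(h, k)
```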

US Pat. No. 11,032,542

METHOD AND A DEVICE FOR IMAGE ENCODING AND DECODING

INTERDIGITAL VC HOLDINGS,...

1. A decoding method for decoding an image, the decoding method comprising, for at least one slice of the image:decoding the slice from a bitstream;
determining information representative of a size of a region of the decoded slice, the size being different from a size of a basic coding block used for decoding the slice, wherein the size of the region includes at least one of a region width and a region height, and wherein the size of the basic coding block includes at least one of a basic coding block height and a basic coding block width; and
filtering the decoded slice by applying a filter on the region identified by the determined information;
wherein determining the information comprises decoding a single index from a header of the slice, wherein the single index indicates at least one of a horizontal ratio and a vertical ratio with respect to the size of the basic coding block, wherein the vertical ratio represents a ratio of the region height and the basic coding block height and the horizontal ratio represents a ratio of the region width and the basic coding block width, and wherein the single index is utilized to obtain the size of the region.
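
The single-index-to-region-size mapping can be sketched as below; the ratio lookup table is a hypothetical stand-in for whatever mapping the bitstream defines:

```python
def region_size(index, block_w, block_h, ratios):
    # ratios: assumed table mapping the single decoded index to a
    # (horizontal_ratio, vertical_ratio) pair relative to the basic
    # coding block size.
    rh, rv = ratios[index]
    # Region width/height are the basic block dimensions scaled by
    # the horizontal and vertical ratios.
    return block_w * rh, block_h * rv
```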

US Pat. No. 11,032,541

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method for video decoding in a decoder, comprising:determining first and second reference motion vectors for decoding a current block based on coding information of at least one previously decoded block,
the first reference motion vector including a first component on a first coordinate axis, and
the second reference motion vector including a second component on the first coordinate axis;
generating a first sum by adding the first component and the second component;
setting a right-shift parameter as 1;
setting a left-shift parameter as 0;
executing a rounding subroutine or operating a rounding circuit to calculate a third component of the first averaged reference motion vector on the first coordinate axis according to:
offset=1<<(rightShift-1), and
y1=(x1>=0?(x1+offset)>>rightShift:-((-x1+offset)>>rightShift))<<leftShift, wherein rightShift corresponds to the right-shift parameter, leftShift corresponds to the left-shift parameter, x1 corresponds to the first sum, and y1 corresponds to the third component;
constructing a list of reference motion vectors for decoding the current block, the list of reference motion vectors incorporating the first and second reference motion vectors and the first averaged reference motion vector;
determining a motion vector predictor using the list of reference motion vectors; and
decoding the current block for output based on the determined motion vector predictor.
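
The rounding subroutine in this claim transcribes directly into code; the sketch below follows the claim's formula with its default rightShift of 1 and leftShift of 0:

```python
def round_average_component(x1, right_shift=1, left_shift=0):
    # offset = 1 << (rightShift - 1)
    offset = 1 << (right_shift - 1)
    # y1 = (x1 >= 0 ? (x1 + offset) >> rightShift
    #               : -((-x1 + offset) >> rightShift)) << leftShift
    if x1 >= 0:
        y1 = (x1 + offset) >> right_shift
    else:
        y1 = -((-x1 + offset) >> right_shift)
    return y1 << left_shift
```

With the claim's parameters this halves the sum of the two motion-vector components with round-half-away-from-zero behavior, e.g. 3 -> 2 and -3 -> -2.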

US Pat. No. 11,032,540

METHOD FOR ENCODING/DECODING AN INTRA-PICTURE PREDICTION MODE USING TWO INTRA-PREDICTION MODE CANDIDATE, AND APPARATUS USING SUCH A METHOD

Intellectual Discovery Co...

1. A method of decoding a video signal, comprising:deriving a plurality of candidate intra-prediction modes including a first candidate intra-prediction mode and a second candidate intra-prediction mode of a current block;
determining whether an intra-prediction mode of the current block is identical to one of the plurality of candidate intra-prediction modes based on n-bit information;
when the intra-prediction mode of the current block is identical to one of the plurality of candidate intra-prediction modes, determining which one of the plurality of candidate intra-prediction modes is identical to the intra-prediction mode of the current block, based on additional m-bit information;
obtaining the intra-prediction mode of the current block;
obtaining prediction sample of the current block based on the intra-prediction mode of the current block; and
obtaining a reconstructed sample of the current block based on the prediction sample,
wherein deriving the plurality of candidate intra-prediction modes comprises:
deriving a first neighbor prediction mode based on a first neighbor block adjacent to the current block,
wherein when the first neighbor block is available, the first neighbor prediction mode is set to an intra-prediction mode of the first neighbor block, and
wherein when the first neighbor block is unavailable, the first neighbor prediction mode is set to a DC mode;
deriving a second neighbor prediction mode based on a second neighbor block adjacent to the current block,
wherein when the second neighbor block is available, the second neighbor prediction mode is set to an intra-prediction mode of the second neighbor block, and
wherein when the second neighbor block is unavailable, the second neighbor prediction mode is set to a DC mode; and
deriving the first candidate intra-prediction mode and the second candidate intra-prediction mode from the first neighbor prediction mode and the second neighbor prediction mode,
wherein when the first neighbor prediction mode and the second neighbor prediction mode are different, the first candidate intra-prediction mode and the second candidate intra-prediction mode are determined without comparing values of the first neighbor prediction mode and the second neighbor prediction mode.
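
The candidate derivation with DC fallback for unavailable neighbors can be sketched as follows; the DC mode number and the equal-mode fallback are assumptions, since this claim excerpt only specifies the case where the two neighbor modes differ:

```python
DC_MODE = 1  # hypothetical mode number for DC

def neighbor_mode(mode, available):
    # Unavailable neighbor blocks default to DC, per the claim.
    return mode if available else DC_MODE

def derive_candidates(mode_a, avail_a, mode_b, avail_b):
    a = neighbor_mode(mode_a, avail_a)
    b = neighbor_mode(mode_b, avail_b)
    if a != b:
        # Different neighbor modes become the two candidates
        # directly, without comparing their values.
        return a, b
    # Equal-mode handling is not specified in this excerpt; a
    # placeholder second candidate is chosen here for illustration.
    return a, DC_MODE if a != DC_MODE else a + 1
```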

US Pat. No. 11,032,539

VIDEO CODING METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

10. A computer device, comprising memory and one or more processors, the memory storing computer readable instructions that, when executed by the one or more processors, enable the one or more processors to perform the following operations:obtaining, by the computer device, a current video frame to be coded, and obtaining a predicted residual of a reference video frame of the current video frame by determining the reference frame through an optimization process of comparing the current video frame with the reference video frame in a case that the current video frame is an inter prediction frame;
determining, by the computer device, a quantization parameter threshold corresponding to the current video frame according to the predicted residual of the reference video frame, further including:
obtaining, by the computer device, a quantity of pixels comprised in the reference video frame;
calculating, by the computer device, an average predicted residual corresponding to the reference video frame according to the quantity of pixels and the predicted residual; and
determining, by the computer device, the quantization parameter threshold corresponding to the current video frame according to the average predicted residual, wherein the average predicted residual is negatively correlated with the quantization parameter threshold;
obtaining, by the computer device, a quantization parameter estimated value corresponding to the current video frame;
selecting, by the computer device, a target coding mode from candidate coding modes according to the quantization parameter estimated value and the quantization parameter threshold, the candidate coding modes comprising a down-sampling mode and a full resolution mode, further including:
using, by the computer device, the down-sampling mode as the target coding mode in a case that a difference between the quantization parameter estimated value and the quantization parameter threshold is greater than a preset threshold; otherwise, using the full resolution mode as the target coding mode; and
coding, by the computer device, the current video frame according to the target coding mode.
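
The mode-selection rule at the end of the claim is a threshold comparison; a minimal sketch (string labels are illustrative):

```python
def select_coding_mode(qp_estimate, qp_threshold, preset=0):
    # Down-sample when the QP estimate exceeds the QP threshold by
    # more than the preset margin; otherwise code at full resolution.
    if qp_estimate - qp_threshold > preset:
        return "down_sampling"
    return "full_resolution"
```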

US Pat. No. 11,032,538

METHOD FOR CONTROLLING AND INSPECTING DEVICE OR DEVICES GOVERNING A LIGHT SOURCE FOR TESTING CAMERA, AND COMPUTING DEVICE USING METHOD

TRIPLE WIN TECHNOLOGY (SH...

1. A method for inspecting and controlling a device and operating in a computer device, comprising:obtaining a first device identifier of a plurality of light sources;
obtaining a second device identifier of a plurality of control devices; wherein each of the control devices respectively controls turning on a corresponding one of the light sources; wherein the control device corresponds to a camera device;
associating the first device identifier with a corresponding one of the second device identifier, and generating a binding list;
obtaining the second device identifier of the control device corresponding to the camera device;
obtaining light source information of the second device identifier corresponding to the first device identifier; and determining whether the light source information meets a requirement of the camera device; wherein the light source information comprises color temperature in the light source;
obtaining the color temperature of the light source corresponding to each control device, and determining whether color temperature of the light source meets a requirement of the camera device;
comparing the second device identifier of the control device corresponding to the camera device with the binding list, and recording the first device identifiers corresponding to the second device identifiers; and controlling the camera device to sequentially capture a picture under the light source corresponding to the first device identifiers; and
analyzing whether functions of the camera device are normal according to the picture.

US Pat. No. 11,032,537

MOVABLE DISPLAY FOR VIEWING AND INTERACTING WITH COMPUTER GENERATED ENVIRONMENTS

Athanos, Inc., Pomona, C...

1. A system for viewing three-dimensional virtual reality content comprising:
a movable display for rendering the three-dimensional virtual reality content for display;
at least one tracker for tracking the movable display;
a computing device configured to generate the three-dimensional virtual reality content;
wherein the movable display is configured such that movement of the movable display relative to a viewer's head, detected using the at least one tracker, is translated by the computing device into alteration of a viewing area of the three-dimensional virtual reality content corresponding to the movement of the movable display relative to the viewer's head, by:
generating a matrix representative of a position and orientation of the movable display in physical space;
generating a second matrix representative of a second position and a second orientation of the viewer's head;
merging the matrix and the second matrix into a final matrix; and
rendering the three-dimensional virtual reality content on the moveable display based upon the final matrix.
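The matrix steps in this claim can be sketched with 4x4 homogeneous transforms. For brevity the sketch uses pure translations, and interpreting "merging" as multiplying the inverse head pose by the display pose is an assumption; the claim does not define the operation:

```python
def translation_matrix(x, y, z):
    """4x4 homogeneous translation (rotation omitted for brevity)."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def mat_mul(a, b):
    """Plain 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_translation(m):
    """Inverse of a pure translation: negate the offset column."""
    inv = [row[:] for row in m]
    for i in range(3):
        inv[i][3] = -m[i][3]
    return inv

def merge_poses(display_pose, head_pose):
    """Final matrix: the display pose expressed relative to the head,
    one plausible reading of the claim's 'merging' step."""
    return mat_mul(invert_translation(head_pose), display_pose)
```

The final matrix would then drive the render of the viewing area.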

US Pat. No. 11,032,536

GENERATING A THREE-DIMENSIONAL PREVIEW FROM A TWO-DIMENSIONAL SELECTABLE ICON OF A THREE-DIMENSIONAL REALITY VIDEO

Verizon Patent and Licens...

1. A computer-implemented method comprising:
displaying a virtual reality menu user interface that includes two-dimensional (2D) selectable icons representing three-dimensional (3D) virtual reality videos;
receiving data describing a user selection of a 2D selectable icon from the 2D selectable icons in the virtual reality menu user interface;
displaying, in response to the user selection of the 2D selectable icon, a first preview of a 3D virtual reality video inside a 3D object in the virtual reality menu user interface, the first preview of the 3D virtual reality video corresponding to a first location within the 3D object;
receiving data describing a rotation of the 3D object within the virtual reality menu user interface; and
displaying, based on the rotation of the 3D object, a second preview of the 3D virtual reality video inside the 3D object in the virtual reality menu user interface, the second preview of the 3D virtual reality video corresponding to a second location within the 3D object.

US Pat. No. 11,032,535

GENERATING A THREE-DIMENSIONAL PREVIEW OF A THREE-DIMENSIONAL VIDEO

Verizon Patent and Licens...

1. A computer-implemented method comprising:
displaying a virtual reality user interface that includes
(1) a set of selectable icons that represents a set of three-dimensional (3D) virtual reality videos and
(2) an object;
receiving user input to move the object to be positioned in front of a first selectable icon included in the set of selectable icons, the first selectable icon corresponding to a first 3D virtual reality video from the set of 3D virtual reality videos;
moving, in response to the user input, the object to be positioned in front of the first selectable icon in the virtual reality user interface;
providing, in response to the object being positioned in front of the first selectable icon in the virtual reality user interface, a first 3D preview of the first 3D virtual reality video within the object;
receiving a user selection of a location within the object for entering the object;
receiving a user selection of the object while the first 3D preview is provided within the object;
determining a viewing direction based on the selected location within the object; and
displaying the first 3D virtual reality video in a 360-degree environment as viewed from the viewing direction.

US Pat. No. 11,032,534

PLANAR DEVIATION BASED IMAGE REPROJECTION

Microsoft Technology Lice...

1. An image reprojection method comprising:
dividing a depth buffer of depth values corresponding to an image produced based on a predicted pose into a plurality of tiles;
for each tile of the plurality of tiles, calculating a planar deviation error value that estimates a geometric complexity of the tile and penalizes the tile in proportion to an extent to which a geometry of the tile deviates from a plane;
producing a tessellated mesh of the plurality of tiles based on the planar deviation error values calculated for the plurality of tiles;
receiving an updated pose; and
rendering the tessellated mesh based on the updated pose to produce a reprojected image.
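The per-tile planar-deviation error in this claim can be approximated cheaply. The sketch below measures how far a square depth tile deviates from the plane through three of its corners and scales tessellation density accordingly; the corner-plane fit and the scaling constants are assumptions, not taken from the patent:

```python
def planar_deviation(tile):
    """Max deviation of an n-by-n depth tile (n >= 2) from the plane
    through three of its corners, a cheap proxy for geometric
    complexity."""
    n = len(tile)
    z00, z10, z01 = tile[0][0], tile[0][n - 1], tile[n - 1][0]
    worst = 0.0
    for y in range(n):
        for x in range(n):
            planar = z00 + (x / (n - 1)) * (z10 - z00) \
                         + (y / (n - 1)) * (z01 - z00)
            worst = max(worst, abs(tile[y][x] - planar))
    return worst

def tessellation_level(tile, base=1, penalty_scale=4):
    """Penalize non-planar tiles: more mesh detail where the geometry
    deviates further from a plane (constants are illustrative)."""
    return base + int(planar_deviation(tile) * penalty_scale)
```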

US Pat. No. 11,032,533

IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus that processes a first image and a second image so as to detect a corresponding pixel in the second image which corresponds to a target pixel in the first image, the first image and the second image being obtained by image capturing, the first image having a first parameter value, the second image having a second parameter value different from the first parameter value, and the first parameter value and the second parameter value being values of optical parameters of one or more image capturing systems used to capture the first image and the second image, the image processing apparatus comprising:
one or more processors connected to a memory, the one or more processors being configured to:
set a two-dimensional search area as a partial area in which the corresponding pixel is to be searched in the second image, based on a predetermined range, the predetermined range determined to include an entire range in which each of the first and second parameter values can change due to tolerances of the one or more image capturing systems; and
detect the corresponding pixel by searching the two-dimensional search area,
wherein the two-dimensional search area is set by determining which of a plurality of epipolar lines change in the second image as the first parameter value and the second parameter value change in the predetermined range, and setting the two-dimensional search area to include all epipolar lines that change in the second image as the first parameter value and the second parameter value change in the predetermined range, and
wherein the optical parameter includes a position of an entrance pupil in an optical system in the one or more image capturing systems.

US Pat. No. 11,032,532

ELECTRONIC DEVICE AND METHOD FOR PROVIDING VIRTUAL DEVICE VIA AT LEAST PORTION OF CONTENT

SAMSUNG ELECTRONICS CO., ...

1. A method comprising:
displaying, by an electronic device, virtual reality content on a display of the electronic device;
generating, by the electronic device, a virtual device for controlling at least one function of the electronic device, wherein the virtual device is rendered in virtual reality in a form corresponding to the electronic device;
displaying, by the electronic device, a graphical user interface (GUI) corresponding to a GUI of the electronic device, on a screen of the virtual device;
displaying, by the electronic device, the virtual device on the display of the electronic device in such a way that the virtual device is displayed on at least a portion of the virtual reality content; and
performing the at least one function of the electronic device, based on a user input with respect to the virtual device,
wherein the performing of the at least one function of the electronic device comprises:
obtaining a request for voice call from an external electronic device,
performing a call function with the external electronic device based on the request for the voice call,
obtaining a user input for sharing call content of the voice call,
based on the user input for sharing the call content, identifying another virtual device that is located within a range of a certain distance from the virtual device, wherein the other virtual device is displayed on the virtual reality content, and
transmitting information about the call content to the identified other virtual device.

US Pat. No. 11,032,531

MOBILE DEVICE HAVING A 3D DISPLAY WITH SELECTABLE MAGNIFICATION

3D MEDIA LTD., Chai Wan ...

1. A 3D imaging system for use on a mobile device, comprising:
a display area configured to display a 3D image, the display area comprising a display panel and a parallax sheet disposed over the display panel, the parallax sheet comprising a plurality of parallax separating units, each parallax separating unit has a unit width; and
a processor configured to compose a composite image from a plurality of images, said plurality of images comprise a first image and a second image and to convey signals indicative of the composite image to the display panel for displaying a displayed image indicative of the composite image, the displayed image comprising a plurality of first image strips and second image strips alternately arranged, the first image strips indicative of the first image and the second image strips indicative of the second image, each of the first and second image strips has a strip width approximately equal to one half of the unit width, wherein the parallax sheet is arranged such that each parallax separating unit approximately covers one of the first image strips and one of the second image strips, wherein the 3D imaging system is operable at least in a first display mode and in a second display mode, such that when the 3D imaging system is operated in the first display mode, the composite image displayed on the display panel is indicative of a full image of the first and the second images, and when the 3D imaging system is operated in the second display mode, the composite image displayed on the display panel is indicative of the full image modified by a magnification factor different from 1, and wherein when the 3D imaging system is operated in the first display mode or in the second display mode, the strip width is the same and the unit width is also unchanged so that the strip width is approximately equal to one half of the unit width when the 3D imaging system is operated in the first display mode or in the second display mode.

US Pat. No. 11,032,530

GRADUAL FALLBACK FROM FULL PARALLAX CORRECTION TO PLANAR REPROJECTION

MICROSOFT TECHNOLOGY LICE...

1. A computer system configured to facilitate improved depth map generation by computing a smoothness penalty to be imposed against a smoothness term of a cost function, which also includes a data term and which is used when a stereo depth matching algorithm generates a depth map, said computing being based on a detected signal to noise ratio (SNR) of texture images used by the stereo depth matching algorithm to generate the depth map, the computer system comprising:
one or more processors; and
one or more computer-readable physical hardware storage devices that store instructions that are executable by the one or more processors to cause the computer system to at least:
access a stereo pair of images of an environment, the stereo pair of images comprising a first texture image and a second texture image;
identify a SNR within one or both of the first texture image and the second texture image;
based on the identified SNR, selectively compute a smoothness penalty to be imposed against a smoothness term of a cost function that is used by a stereo depth matching algorithm; and
generate a depth map by using the stereo depth matching algorithm to perform stereo depth matching on the stereo pair of images, wherein the stereo depth matching algorithm performs the stereo depth matching via use of the selectively computed smoothness penalty.
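The SNR-dependent penalty in this claim could be realized as a simple interpolation feeding the matcher's cost function. The mapping direction (noisier textures get a larger smoothness weight, so the matcher leans less on the unreliable data term) and all constants below are assumptions for illustration:

```python
def smoothness_penalty(snr, low_snr=5.0, high_snr=20.0,
                       max_penalty=10.0, min_penalty=1.0):
    """Interpolate the smoothness weight from the texture SNR.

    Assumption (not from the claim): low SNR -> large penalty,
    high SNR -> small penalty, linear in between.
    """
    if snr <= low_snr:
        return max_penalty
    if snr >= high_snr:
        return min_penalty
    t = (snr - low_snr) / (high_snr - low_snr)
    return max_penalty + t * (min_penalty - max_penalty)

def matching_cost(data_term, smoothness_term, snr):
    """Cost with the selectively computed smoothness penalty applied."""
    return data_term + smoothness_penalty(snr) * smoothness_term
```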

US Pat. No. 11,032,529

SELECTIVELY APPLYING COLOR TO AN IMAGE

Motorola Mobility LLC, C...

1. A method comprising:
capturing, by a first camera of an image capturing device, a color image data within a current scene;
receiving, via an input device, a selection of at least one location within the color image data;
generating a color mask based on the at least one selected location;
identifying, within a depth map of the current scene, at least one unmasked area that is not included in the color mask and which has a depth on the depth map of at least one portion of the color image data;
adding the at least one unmasked area to the color mask;
applying the color mask to the color image data to generate a color masked image data that includes the at least one portion of the color image data and omits a remaining portion of the color image data;
combining the color masked image data with monochromatic image data of the current scene to create a selective color image that includes the color image data in the at least one portion, while a remainder of the selective color image includes monochromatic portions of monochromatic image data; and
providing the selective color image to at least one output device.
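The depth-guided mask growth in this claim can be simplified to a per-pixel depth test. In this sketch every pixel whose depth is close to the seed pixel's depth keeps its color and the rest falls back to the monochrome image; the tolerance test stands in for the claim's depth-map-based area growth, and all names are hypothetical:

```python
def selective_color(color, mono, depth, seed, depth_tol=0.5):
    """Keep color where depth matches the seed pixel; mono elsewhere.

    color/mono/depth are row-major 2D grids of equal shape; seed is a
    (row, col) selection from the input device.
    """
    sy, sx = seed
    target = depth[sy][sx]
    out = []
    for y, row in enumerate(color):
        out_row = []
        for x, px in enumerate(row):
            if abs(depth[y][x] - target) <= depth_tol:
                out_row.append(px)           # color-masked area
            else:
                out_row.append(mono[y][x])   # monochromatic remainder
        out.append(out_row)
    return out
```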

US Pat. No. 11,032,528

GAMUT MAPPING ARCHITECTURE AND PROCESSING FOR COLOR REPRODUCTION IN IMAGES IN DIGITAL CAMERA ENVIRONMENTS

INTEL CORPORATION, Santa...

1. An apparatus comprising:
one or more processors to:
compute one or more of gamut boundary descriptors and mapping configurations associated with a colored image captured using one or more cameras; and
perform gamut mapping of color representation of the image based on the one or more gamut boundary descriptors and mapping configurations, wherein the gamut mapping to facilitate color reproduction of the image, wherein the gamut mapping is customized based on selection of the one or more gamut boundary descriptors and mapping configurations, and application of the one or more gamut boundary descriptors and mapping configurations to calculations based on color corrections and color enhancements corresponding to a mapping type, wherein the one or more gamut boundary descriptors and mapping configurations are selected and applied based on their hue dependency as revealed through calculated hue.

US Pat. No. 11,032,527

UNMANNED AERIAL VEHICLE SURFACE PROJECTION

Intel Corporation, Santa...

1. An unmanned aerial vehicle comprising:
a memory, configured to store a projection image;
a position sensor, configured to receive position information corresponding to a position of the unmanned aerial vehicle;
an image sensor, configured to detect image data of a projection surface from a predetermined position according to the position sensor;
one or more processors, configured to:
determine a depth information for a plurality of points in the detected image data;
generate a transformed projection image from a projection image by modifying the projection image to compensate for unevennesses in the projection surface according to the determined depth information;
send the transformed projection image to an image projector; and
an image projector, configured to project the transformed projection image onto the projection surface; wherein the one or more processors are configured to determine the depth information by executing one or more stereo matching algorithms and/or one or more stereo disparity algorithms.
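The depth-based compensation step could, in its simplest 1D form, pre-shift pixels in proportion to how far the surface deviates from its mean depth. This is purely illustrative; a real system would reproject with the full projector/camera geometry, and every name and constant here is an assumption:

```python
def compensate_unevenness(image_row, depths, scale=1.0):
    """Pre-warp one image row: shift each pixel by an amount
    proportional to the surface's deviation from its mean depth.
    Out-of-range targets clamp to the row edges; vacated positions
    stay 0 in this simplified sketch.
    """
    mean_depth = sum(depths) / len(depths)
    n = len(image_row)
    out = [0] * n
    for x, px in enumerate(image_row):
        shift = round(scale * (depths[x] - mean_depth))
        tx = min(max(x + shift, 0), n - 1)
        out[tx] = px
    return out
```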

US Pat. No. 11,032,526

PROJECTION DEVICE, PROJECTION METHOD AND PROJECTION SYSTEM

Coretronic Corporation, ...

1. A projection device, comprising a projection apparatus, a video and audio processing circuit and a power control circuit, wherein
the video and audio processing circuit is coupled to the projection apparatus; and
the power control circuit is coupled to the projection apparatus and the video and audio processing circuit, and is used for receiving a power signal correspondingly provided by an external power supply unit when a switchable mechanism is in an open state,
wherein the power control circuit enables the projection apparatus and the video and audio processing circuit according to the power signal, such that the video and audio processing circuit outputs a dynamic image data to the projection apparatus and controls the projection apparatus to project a dynamic projection image according to the dynamic image data,
wherein the video and audio processing circuit comprises a first control circuit and a second control circuit, wherein
the first control circuit is used for storing a set data; and
the second control circuit is coupled to the first control circuit and the projection apparatus, and is used for controlling the projection apparatus,
wherein the first control circuit and the second control circuit are respectively configured that the first control circuit is used for providing the set data to the second control circuit, and the second control circuit sets the projection apparatus according to the set data and outputs the dynamic image data to the projection apparatus.

US Pat. No. 11,032,525

PROJECTION SYSTEMS AND METHODS

MTT Innovation Incorporat...

1. A method for projecting a color image, the method comprising:
illuminating an imaging element with incident light of a color; and
operating the imaging element to spatially modulate the incident light;
wherein illuminating the imaging element comprises selectively, based on a brightness or power level for the color in the color image:
I) operating in a first mode wherein light of the color is directed onto a phase modulator that is controlled to provide a phase pattern operative to steer the light of the color to desired locations on the imaging element; and
II) operating in a second mode wherein either:
a. light of the color is directed onto the imaging element from a light source without interacting with the phase modulator; or
b. light of the color is directed onto the phase modulator that is controlled to provide a phase pattern operative to steer the light of the color to desired locations on the imaging element and combined with additional light of the color and the combined light is directed onto the imaging element.

US Pat. No. 11,032,524

CAMERA CONTROL AND IMAGE STREAMING

1. An image capturing apparatus, comprising:
a camera carried by an unmanned flying object,
a memory comprising computer program code, and
at least one processor configured to, with the computer program code, cause the apparatus to:
associate a unique identifier of the first device with the camera;
establish a first communication with the first device based on the unique identifier of the first device, where the first communication allows the first device to control the camera, wherein controlling the camera comprises one or more of: capturing an image using the camera, capturing a video using the camera, zooming the camera, panning the camera, and tilting the camera;
associate a unique identifier of the second device with the camera; and
establish a second communication with the second device based on the unique identifier of the second device, where the second communication allows the second device to control the camera, wherein controlling the camera comprises one or more of: capturing an image using the camera, capturing a video using the camera, zooming the camera, panning the camera, and tilting the camera.

US Pat. No. 11,032,523

AUTOMATED IMAGE METADATA PROCESSING

NCR Corporation, Atlanta...

1. A method, comprising:
obtaining a current physical location of a mobile device;
acquiring metadata for the mobile device, wherein acquiring further includes obtaining with the metadata, mobile device settings and camera settings for the mobile device and a camera that captures an image of an object, wherein acquiring further includes discovering the camera settings in real time as the mobile device is operated by processing an Application Programming Interface (API) associated with a camera driver on the mobile device and obtaining through the API the camera settings comprising a zoom level for current zooming, a panning level for current panning, and a pixel density of the image;
recording the current physical location as part of the metadata;
transmitting the image with the metadata to a network service for processing, wherein transmitting further includes transacting with the network service in a transaction and purchasing the object identified in the image based on: determining the actual physical location of an object associated with the image by calculating a distance offset from the current physical location of the mobile device when the image was taken to the object represented in the image using: the metadata, cartographic map data, public network data sources available for the current physical location, private network data sources, and verifying information that is unique to a user who operates the mobile device in order to verify the user by using the information as additional information provided by the user that is in addition to a user identifier and password combination provided by the user during a login to the network service, wherein purchasing further includes purchasing the object that is located at the actual physical location; and
receiving a decision for the transaction from the network service.

US Pat. No. 11,032,522

PATIENT VIDEO MONITORING SYSTEMS AND METHODS HAVING DETECTION ALGORITHM RECOVERY FROM CHANGES IN ILLUMINATION

CareView Communications, ...

1. A system for monitoring a patient in a patient area having one or more detection zones, the system comprising:
a computing system that receives a chronological series of frames from a camera and performs the following steps based on the reception of each frame of the chronological series:
calculate a current background luminance of a current frame;
calculate an aggregate background luminance based on a respective background luminance for each of a plurality of previous frames of the chronological series;
for each of one or more zones, calculate a zone luminance based at least in part on each luminance value of each pixel within a plurality of pixels of the zone;
for each of one or more zones, detect patient motion based on a change between the zone luminance and a previous zone luminance of a previous frame;
compare the current background luminance of the current frame to an aggregate background luminance;
in response to the current background luminance changing relative to the aggregate background luminance by more than a predetermined amount, disregard detected patient motion based on the current background luminance changing relative to the aggregate background luminance by more than the predetermined amount, and set the previous zone luminance to the zone luminance in subsequent detection for patient motion; and
in response to patient motion being detected, generate an alert with a user interface in response to the current background luminance changing relative to the aggregate background luminance by less than the predetermined amount.
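The per-frame logic of this claim (zone-luminance motion detection with recovery from illumination changes) can be sketched as a small stateful monitor. The thresholds and the unweighted running average are assumptions; the claim specifies only the comparisons and the disregard/re-baseline behavior:

```python
class ZoneMonitor:
    """Per-frame luminance motion check with illumination recovery.

    Motion is a change in zone luminance; a large jump in the frame's
    background luminance relative to the aggregate (e.g. lights toggled)
    causes that frame's detection to be disregarded while the zone
    baseline is reset for subsequent frames.
    """

    def __init__(self, bg_jump_threshold=30.0, motion_threshold=5.0):
        self.bg_jump = bg_jump_threshold
        self.motion = motion_threshold
        self.bg_history = []     # background luminance of prior frames
        self.prev_zone = None    # zone luminance of the previous frame

    def process(self, bg_luminance, zone_luminance):
        """Return True to alert on patient motion for this frame."""
        aggregate = (sum(self.bg_history) / len(self.bg_history)
                     if self.bg_history else bg_luminance)
        self.bg_history.append(bg_luminance)
        moved = (self.prev_zone is not None and
                 abs(zone_luminance - self.prev_zone) > self.motion)
        illumination_change = abs(bg_luminance - aggregate) > self.bg_jump
        self.prev_zone = zone_luminance   # re-baseline for next frame
        if illumination_change:
            return False                  # disregard motion this frame
        return moved                      # alert only in stable lighting
```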

US Pat. No. 11,032,521

SYSTEM AND METHOD FOR MONITORING A FALL STATE OF A PATIENT WHILE MINIMIZING FALSE ALARMS

CareView Communications, ...

1. A system for monitoring a patient on a patient support surface, the system comprising:
a camera configured to output a plurality of video frames of a patient room; and
a computing system configured to:
receive the plurality of video frames from the camera;
define a plurality of zones of a patient area based on the plurality of video frames of the patient room, the plurality of zones defined on a plane that is aligned with a supporting surface on which the patient lies and comprising a supporting surface zone, first zones that extend outside left and right boundaries of the supporting surface zone, second zones that extend outside the first zones, the second zones including a zone that is located below the supporting surface zone and extends laterally beyond the first zones;
monitor the plurality of video frames for motion within each of the plurality of zones;
in response to a first motion being detected in the first zones without the first motion being detected first in the second zones, issue an alert with a user interface; and
in response to a second motion being detected in the second zones before the second motion is detected in the first zones, disarm issuance of the alert.
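The arm/disarm decision in this claim depends only on which zone ring registers motion first. A minimal sketch over an ordered event stream (zone labels and the early-return structure are illustrative):

```python
def fall_alert(motion_events):
    """Decide whether to alert from an ordered sequence of zone hits.

    'first'  = zones just outside the support-surface boundaries;
    'second' = zones beyond the first ones.
    Motion reaching the first zones from the bed side alerts; motion
    entering the second zones first (e.g. a visitor walking in)
    disarms issuance of the alert.
    """
    for zone in motion_events:
        if zone == "second":
            return False   # approached from outside: disarm
        if zone == "first":
            return True    # patient leaving the support surface: alert
    return False
```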

US Pat. No. 11,032,520

SELF-HEALING VIDEO SURVEILLANCE SYSTEM

1. A method for configuring a computing device in a network of at least one remote device, comprising:
storing, in a remote device, a configuration data archive relating to an existing computing device, wherein the remote device is at least one of a traffic camera or an aerial drone camera;
determining, by a computing device to be configured, whether the remote device has stored therein a configuration data archive, the computing device to be configured being distinct from the existing computing device; and
transferring data from the configuration data archive to the computing device to be configured in response to a determination that the remote device has stored therein a configuration data archive.

US Pat. No. 11,032,519

SERVER AND PROGRAM

DWANGO Co., Ltd., Tokyo ...

1. A server comprising:
a storage unit storing first content identification information for identifying a first content and first interrupted spot information indicating a viewing interrupted spot of the first content in association with first user identification information for identifying a first user in a case where viewing of the first content during live distribution is interrupted in a first terminal logged in a content sharing service by using the first user identification information;
a determination unit determining whether or not at least one of the content identification information and the interrupted spot information that are associated with the first user identification information is stored in the storage unit in a case where the server is accessed from a second terminal logged in the content sharing service by using the first user identification information after the viewing of the first content is interrupted in the first terminal; and
a notification unit notifying the second terminal that the viewing of the first content is capable of being restarted from the viewing interrupted spot by time shift reproduction in a case where it is determined that at least one of the first content identification information and the first interrupted spot information is stored in the storage unit by being associated with the first user identification information,
wherein in a case where it is determined that at least one of the first content identification information and the first interrupted spot information is stored in the storage unit by being associated with the first user identification information, the determination unit further determines whether or not the live distribution of the first content is ended, and
wherein, in a case where it is determined that the live distribution of the first content is not ended, the notification unit further notifies the second terminal that the first content is capable of being viewed in the live distribution,
wherein, in a case where the viewing of the first content during the live distribution is interrupted in the first terminal, first terminal identification information for identifying the first terminal is stored in the storage unit by being associated with the first user identification information, in addition to at least one of the first content identification information and the first interrupted spot information,
wherein, in a case where it is determined that at least one of the first content identification information and the first interrupted spot information is stored in the storage unit by being associated with the first user identification information, the determination unit further determines whether or not the first terminal identification information is coincident with second terminal identification information for identifying the second terminal,
wherein, in a case where it is determined that an elapsed time between interruption of the live distribution on the first terminal and access of the server via the second terminal is less than a threshold value, the notification unit omits a notification with respect to the second terminal,
wherein the threshold value is dependent upon a comparison between the first terminal identification information and the second terminal identification information.

US Pat. No. 11,032,518

METHOD AND APPARATUS FOR BOUNDARY-BASED NETWORK OPERATION

Time Warner Cable Enterpr...

1. A computerized method of operating a content delivery network having a switched broadcast architecture, the content delivery network comprising at least one switching node and a plurality of computerized client devices, the computerized method comprising:
providing a first identifying value associated with a computerized client device;
converting the first identifying value to a second identifying value, the second identifying value preventing determination of the first identifying value therefrom;
accessing a correlation between the second identifying value and at least one data element, the at least one data element being selected from at least one of: (i) demographic data; (ii) geographic data; or (iii) psychographic data; and
based at least in part on the accessing, switching at least one broadcast content option to the computerized client device using the at least one switching node;
wherein the switching of the at least one broadcast content option comprises:
identifying at least one of a plurality of digitally rendered content being then-currently broadcast in a digital content stream to at least the computerized client device as a candidate for removal from the digital content stream, the identifying based at least in part on a prescribed threshold related to viewing activity associated with the at least one of the plurality of digitally rendered content;
based at least on a network resource availability associated with the computerized client device, determining that the removal is appropriate; and
based at least on the identifying and the determining, replacing the at least one of the plurality of digitally rendered content identified for the removal with the at least one broadcast content option for delivery in the digital content stream broadcast to at least the computerized client device.

US Pat. No. 11,032,517

INTERACTIVE VIDEOCONFERENCE APPARATUS

1. A method for operating an electrical device via a videoconference system, comprising:
positioning a plurality of light sensors in front of a first electronic display at a first location, such that each light sensor of the plurality of light sensors receives light from a localized area of the first electronic display;
detecting a change in received light level below a predetermined threshold for one or more light sensors of the plurality of light sensors based on a received video feed from a second location;
asserting a signal for each of the one or more light sensors that detected the change in received light level;
causing a visual change in an electric device located at the first location;
sending images of the electric device to a second electronic display at the second location; wherein the second location is different from the first location.

US Pat. No. 11,032,516

SYSTEMS AND METHODS FOR REDUCING VIDEO CONFERENCE BANDWIDTH NEEDS

CAPITAL ONE SERVICES, LLC...

1. A system comprising:
one or more processors; and
a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors, are configured to cause the system to:
receive a video comprising a first face of a first person;
generate, using a variational auto-encoder, a first 3D representation of the first face by analyzing the video;
store the first 3D representation of the first face in a database;
receive a request to set up a video conference between the first person and a second person;
retrieve the first 3D representation of the first face and a second 3D representation of a second face of the second person based on the request to conference;
initiate the video conference between a first user device associated with the first person and a second user device associated with the second person;
automatically identify the first user device or the second user device as a speaker device associated with a speaker, wherein the speaker is the first person or the second person;
automatically identify the first user device or the second user device as a listening device, associated with a listener, when not identified as the speaker device, wherein the listener is the first person or the second person;
receive a first bandwidth indication from the first user device and a second bandwidth indication from the second user device;
determine that the first bandwidth indication or the second bandwidth indication is below a predetermined threshold;
responsive to determining that the first bandwidth indication or the second bandwidth indication is below the predetermined threshold, transmit a facial expression request from the speaker device;
receive encoded facial expressions of the speaker from the speaker device in response to the facial expression request; and
transmit the encoded facial expressions of the speaker to the listening device to be decoded and combined with the first 3D representation of the first face or the second 3D representation of the second face associated with the speaker.
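The bandwidth-triggered fallback in this claim can be sketched as below. This is a minimal illustration, not Capital One's implementation; the threshold value, function names, and payload shapes are all assumptions.

```python
# Sketch of the claimed fallback: when either reported bandwidth drops
# below a threshold, the speaker sends compact encoded facial expressions
# that the listener combines with a stored 3D face representation.
# The threshold and all names here are illustrative assumptions.

BANDWIDTH_THRESHOLD_KBPS = 500  # hypothetical threshold

def choose_transport(speaker_bw_kbps, listener_bw_kbps):
    """Return 'video' when both links are healthy, otherwise fall back
    to sending encoded facial expressions for the speaker."""
    if (speaker_bw_kbps < BANDWIDTH_THRESHOLD_KBPS
            or listener_bw_kbps < BANDWIDTH_THRESHOLD_KBPS):
        return "encoded_expressions"
    return "video"

def render_on_listener(transport, payload, face_3d):
    """Listener side: show the video frame directly, or combine decoded
    expression codes with the stored 3D face representation."""
    if transport == "video":
        return payload
    # Placeholder for driving the 3D face model with expression codes.
    return {"model": face_3d, "expressions": payload}
```

The point of the design is that expression codes are far smaller than video frames, so the conference degrades gracefully on a constrained link.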

US Pat. No. 11,032,515

BIOSENSOR-TRIGGERED COLLABORATION

Mutualink, Inc., Walling...

1. A system, comprising:a transceiver; and
one or more processors coupled to the transceiver, wherein the one or more processors are configured to:
transmit, via the transceiver, an activation message to a camera device worn by a user to begin recording and transmitting video data;
transmit, via the transceiver, an event alert to an interoperability workstation (IWS), wherein the IWS establishes a biosensor-triggered multimedia collaboration session including a voice communication device of the user and the camera device;
receive, via the transceiver, a first biometric signal from a biosensor worn by the user; determine, using the first biometric signal, that an event has ceased; and transmit, via the transceiver, a deactivation message to the camera device.

US Pat. No. 11,032,514

METHOD AND APPARATUS FOR PROVIDING IMAGE SERVICE

Samsung Electronics Co., ...

1. An electronic device comprising:a camera;
a display;
at least one sensor;
a communication unit configured to establish wireless communication with another electronic device using at least one protocol; and
a processor configured to be functionally connected to the camera, the display, the at least one sensor, and the communication unit,
wherein the processor is configured to:
perform a call with the other electronic device,
detect a state change of the electronic device based on sensing information sensed by the at least one sensor while the call is maintained,
determine whether the state change of the electronic device corresponds to a user gesture for switching a call mode, and
in response to determining that the state change of the electronic device corresponds to the user gesture for switching the call mode, switch the call mode,
wherein switching the call mode comprises switching the call mode from a voice call mode to a video call mode, and from a video call mode to a voice call mode, and
in response to detecting switching of the call mode from the voice call mode to the video call mode, display information about a call mode of the other electronic device through a user interface displayed on the display.

US Pat. No. 11,032,513

OPTIMIZING VIDEO CONFERENCING USING CONTEXTUAL INFORMATION

FACEBOOK, INC., Menlo Pa...

1. A method comprising:receiving a video conference data stream depicting a plurality of video conference participants and a user of a client computing device;
optimizing a display of the video conference data stream for the user by:
accessing networking system information corresponding to the user and each participant of the plurality of video conference participants,
determining affinity coefficients representing relationship strengths between the user and each participant based on the networking system information,
determining an importance of each participant based on both the networking system information and the determined affinity coefficients,
optimizing the display of the video conference data stream based on the importance of each participant; and
presenting the optimized display by way of the client computing device.
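The affinity-and-importance ordering in this claim can be sketched as follows. The coefficient formula and field names are toy assumptions for illustration, not Facebook's actual scoring.

```python
# Illustrative sketch of affinity-based participant ranking; the
# coefficient formula and the input fields are made-up assumptions.

def affinity_coefficient(interactions, mutual_friends):
    """Toy relationship-strength score from networking-system signals."""
    return interactions + 0.5 * mutual_friends

def rank_participants(user_data):
    """Order participants by importance, highest affinity first."""
    scored = [
        (name, affinity_coefficient(d["interactions"], d["mutual_friends"]))
        for name, d in user_data.items()
    ]
    return [name for name, _ in sorted(scored, key=lambda t: -t[1])]

participants = {
    "alice": {"interactions": 10, "mutual_friends": 4},   # score 12.0
    "bob": {"interactions": 2, "mutual_friends": 30},     # score 17.0
    "carol": {"interactions": 30, "mutual_friends": 0},   # score 30.0
}
```

The resulting order would then drive the display optimization, e.g. giving higher-importance participants larger tiles.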

US Pat. No. 11,032,512

SERVER AND OPERATING METHOD THEREOF

HYPERCONNECT INC., Seoul...

1. An operating method of a server, the method comprising:receiving a request for a video call connection from a first electronic apparatus of a first user;
establishing a video call session between the first electronic apparatus and a second electronic apparatus of a second user;
obtaining a match satisfaction of the first user for the second user based at least in part on a response of the first user to the second user during the video call session;
obtaining first face information of the first user including first feature point distribution information and obtaining second face information of the second user including second feature point distribution information;
training a machine learning model by using the first face information, the second face information, and the match satisfaction to provide a trained machine learning model predictive of match satisfaction of the first user for another user based on face information of the another user;
estimating match satisfaction of the first user for each of standby users by using the first face information, face information of each of the standby users, and the trained machine learning model, when the video call connection between the first electronic apparatus and the second electronic apparatus is terminated; and
selecting a third user, who becomes a next video call counterpart of the first user, from among the standby users by using the estimated match satisfactions of the standby users, including the third user, predicted by the machine learning model trained in accordance with the second face information of the second user.
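The train-then-rank loop of this claim can be sketched with a deliberately simple stand-in model. A 1-nearest-neighbour lookup replaces the claimed machine-learning model here, purely as an assumption for illustration; the real system would train on feature-point distributions.

```python
# Toy sketch of the match-satisfaction loop: learn from
# (face_features, satisfaction) pairs, then score standby users.
# A 1-nearest-neighbour stand-in replaces the claimed ML model.

def train(history):
    """history: list of (feature_vector, satisfaction) observations.
    The 'model' here is simply the stored observations."""
    return list(history)

def predict(model, features):
    """Satisfaction of the closest previously seen face."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda obs: dist(obs[0], features))[1]

def pick_next_counterpart(model, standby):
    """standby: dict of user -> feature vector; return best-scored user."""
    return max(standby, key=lambda u: predict(model, standby[u]))
```

Usage: after a call with a user whose features were `(1.0, 1.0)` and satisfaction 0.9, standby users with similar features score highly and are selected as the next counterpart.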

US Pat. No. 11,032,511

FRAME INTERPOLATION METHOD AND RELATED VIDEO PROCESSOR

NOVATEK Microelectronics ...

1. A frame interpolation method for a video processor, comprising:receiving a series of input frames;
generating a frame difference sequence according to the series of input frames;
determining a long-term cadence according to the frame difference sequence;
determining whether a first input frame is received following the long-term cadence;
generating interpolated frames by applying phase coefficients in a regular phase table corresponding to the long-term cadence when the first input frame is received following the long-term cadence; and
generating interpolated frames by applying phase coefficients in a bad edit phase table corresponding to a bad edit cadence when the first input frame is received without following the long-term cadence.
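The cadence-detection step of this claim can be sketched as below. A telecined source repeats frames, so the frame-difference sequence shows a periodic pattern of near-zero entries; the period of that pattern is taken as the long-term cadence. This covers only the detection step, not the phase tables, and the zero threshold is an assumption.

```python
# Hedged sketch of long-term cadence detection from a frame-difference
# sequence. Repeated frames yield (near-)zero differences; the smallest
# period under which the zero/non-zero pattern repeats is the cadence.

def cadence_from_differences(diffs, zero_thresh=1e-3, max_period=10):
    """Return the smallest period under which the zero/non-zero pattern
    of the difference sequence repeats, or None if none is found."""
    pattern = [d <= zero_thresh for d in diffs]
    for period in range(2, max_period + 1):
        if all(pattern[i] == pattern[i % period] for i in range(len(pattern))):
            return period
    return None
```

For a 3:2 pulldown source (frames A A B B B C C D D D ...), the consecutive differences follow a zero/non-zero pattern of length 5, so the detected cadence period is 5; an input frame that breaks this pattern would fall into the bad-edit path of the claim.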

US Pat. No. 11,032,510

VIDEO PROCESSING METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A video processing method, comprising:determining, by circuitry of a terminal, a second video frame portion associated with a second time point that is after a first time point associated with a first video frame portion of a video;
generating a transitional video frame based on differences between first color values of pixels in the first video frame portion and second color values of pixels in the second video frame portion, a transitional color value of a pixel at a target pixel location in the transitional video frame being within a target color interval, and the target color interval being determined according to a first color value of the first color values that corresponds to the pixel at the target pixel location in the first video frame portion and a second color value of the second color values that corresponds to the pixel at the target pixel location in the second video frame portion; and
performing, by the circuitry, display control of the video according to the transitional video frame,
wherein the first color value that corresponds to the pixel at the target pixel location in the first video frame portion is one of black and white, and the second color value that corresponds to the pixel at the target pixel location in the second video frame portion is the other one of black and white.
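The per-pixel constraint in this claim can be sketched as a blend in which each output value stays inside the interval defined by the two source pixels. Grayscale values in 0..255 and the midpoint blend are assumptions for illustration.

```python
# Minimal sketch of the transitional-frame idea: each output pixel lies
# in the color interval between the first frame's value and the second
# frame's value at the same location (here, a simple linear blend).

def transitional_frame(first, second, alpha=0.5):
    """Blend two same-sized frames; each output pixel stays within the
    target color interval defined by the corresponding input pixels."""
    return [
        [int(a + alpha * (b - a)) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(first, second)
    ]

black = [[0, 0], [0, 0]]          # first frame portion (black)
white = [[255, 255], [255, 255]]  # second frame portion (white)
```

In the black-to-white case named by the claim, every transitional value falls strictly between 0 and 255, softening the cut between the two time points.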

US Pat. No. 11,032,509

DISPLAY APPARATUS WITH A DISPLAY AREA AND A NON-DISPLAY AREA AND INCLUDING A SOUND GENERATOR

LG Display Co., Ltd., Se...

1. A display apparatus, comprising:a display panel comprising:
a display area configured to display an image; and
a non-display area outside of the display area;
at least one middle-low-pitched sound band generator in the display area; and
at least one high-pitched sound band generator in the non-display area,
wherein each of the at least one middle-low-pitched sound band generator and the at least one high-pitched sound band generator is configured to vibrate the display panel to generate sound.

US Pat. No. 11,032,508

DISPLAY APPARATUS AND METHOD FOR CONTROLLING AUDIO AND VISUAL REPRODUCTION BASED ON USER'S POSITION

Samsung Electronics Co., ...

1. A display apparatus comprising:a display;
a plurality of speakers;
a sensor;
a communicator comprising communication circuitry; and
a processor configured to control the display apparatus to:
control the display to display an image,
generate a sound corresponding to the image and output the sound through the plurality of speakers,
determine a location of a sensed user based on the user being sensed by the sensor,
receive weather information through the communicator, and
change a first weight of a high band sound and a second weight of a low band sound of the sound, and output the changed sounds through the plurality of speakers based on the determined location of the user and the received weather information.
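The weight adjustment in this claim can be sketched as below. The specific rules (boosting the high band with listener distance, boosting the low band in rain) are illustrative assumptions, not Samsung's disclosed mapping.

```python
# Illustrative sketch of weighting high- and low-band sound from the
# sensed user location and received weather; the rules are assumptions.

def band_weights(user_distance_m, weather):
    """Return (high_band_weight, low_band_weight) for the speakers."""
    high, low = 1.0, 1.0
    # Distant listeners lose high frequencies first: boost the high band.
    high += 0.1 * user_distance_m
    # Rain raises ambient broadband noise: boost the low band slightly.
    if weather == "rain":
        low += 0.2
    return high, low
```

The processor would apply these weights before outputting the changed sounds through the plurality of speakers.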

US Pat. No. 11,032,507

FRAME RATE AND ASSOCIATED DEVICE MANUFACTURING TECHNIQUES FOR IMAGING SYSTEMS AND METHODS

FLIR Commercial Systems, ...

1. An imaging device, comprising:a detector array configured to detect electromagnetic radiation associated with a scene and provide image data frames according to a first frame rate;
a readout circuit configured to provide the image data frames according to a frame rate for the readout circuit;
a fuse configured to set the frame rate for the readout circuit; and
a fuse state circuit configured to determine a state of the fuse, wherein the frame rate for the readout circuit is based at least on the state of the fuse.

US Pat. No. 11,032,506

IMAGE SENSOR AND IMAGE-CAPTURING DEVICE

NIKON CORPORATION, Tokyo...

1. An image sensor comprising:a first layer that includes:
a first photoelectric converter and a second photoelectric converter converting light and generating charge;
a first signal line outputting a signal based on the charge generated by the first photoelectric converter; and
a second signal line outputting a signal based on the charge generated by the second photoelectric converter; and
a second layer that includes:
an A/D converter converting an analog signal based on the charge generated by the first photoelectric converter to a first digital signal and converting an analog signal based on the charge generated by the second photoelectric converter to a second digital signal; and
an adder adding together the first digital signal and the second digital signal, wherein
the second layer is laminated on the first layer.

US Pat. No. 11,032,505

RAMP SIGNAL GENERATOR AND CMOS IMAGE SENSOR USING THE SAME

SK hynix Inc., Icheon-si...

1. A device including a ramp signal generator which comprises:a slope control circuit configured to generate a controllable analog reference voltage according to a digital setting code value to control a slope of a ramp signal; and
at least one unit current cell configured to adjust the slope of the ramp signal by adjusting a current flowing through the at least one unit current cell according to the controllable analog reference voltage generated by the slope control circuit,
wherein the slope control circuit comprises:
a code providing circuit configured to provide the digital setting code value,
wherein the code providing circuit comprises:
a memory configured to receive the digital setting code value from an external image signal processor (ISP), store the digital setting code value, and provide the digital setting code value to the controllable reference voltage generation circuit.

US Pat. No. 11,032,504

SOLID-STATE IMAGING DEVICE, METHOD OF MANUFACTURING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS

SONY SEMICONDUCTOR SOLUTI...

1. A solid-state imaging device, comprising:a photoelectric conversion device that comprises:
an exposed-type photoelectric conversion device in a surface layer of a semiconductor layer, and
an embedded-type photoelectric conversion device in the semiconductor layer, wherein
the exposed-type photoelectric conversion device is on the embedded-type photoelectric conversion device, and
the embedded-type photoelectric conversion device faces a bottom face layer of a concave portion in the semiconductor layer;
a plurality of floating diffusions, wherein
a first floating diffusion of the plurality of floating diffusions is in a surface layer of the semiconductor layer,
the first floating diffusion is located closer to the exposed-type photoelectric conversion device than the embedded-type photoelectric conversion device,
a second floating diffusion of the plurality of floating diffusions is in the bottom face layer of the concave portion, and
the second floating diffusion is located closer to the embedded-type photoelectric conversion device than the exposed-type photoelectric conversion device; and
an amplifying transistor of a fully-depleted type connected to the plurality of floating diffusions.

US Pat. No. 11,032,503

SOLID-STATE IMAGING DEVICE AND INFORMATION PROCESSING DEVICE

SONY SEMICONDUCTOR SOLUTI...

1. A solid-state imaging device, comprising:a plurality of substrates that includes a first substrate and a second substrate, wherein the second substrate is below the first substrate;
a pixel array unit on the first substrate, wherein the pixel array unit includes a plurality of pixels; and
a processing unit on the second substrate, wherein the processing unit is configured to:
input, as input data, a pixel value of at least one target pixel of the plurality of pixels to a register, wherein
the register has a bit length of a first bit number, and
the first bit number is larger than a second bit number that corresponds to a bit depth of the pixel value of the at least one target pixel;
calculate a register value for the at least one target pixel based on the pixel value inputted to the register; and
generate a seed value of a random number based on the calculated register value.

US Pat. No. 11,032,502

SOLID-STATE IMAGING DEVICE AND DRIVING METHOD THEREOF, AND ELECTRONIC APPARATUS

SONY CORPORATION, Tokyo ...

1. An imaging device, comprising:a semiconductor substrate having a first side and a second side, wherein the first side is opposite to the second side;
a first trench starting and extending from the first side of the semiconductor substrate towards the second side of the semiconductor substrate, unobstructed by any other trench, in a cross-sectional view;
a second trench starting and extending from the second side of the semiconductor substrate towards the first side of the semiconductor substrate, unobstructed by any other trench, in the cross-sectional view,
wherein each of the first trench and the second trench has a depth less than a thickness of the semiconductor substrate in the cross-sectional view.

US Pat. No. 11,032,501

LOW NOISE IMAGE SENSOR SYSTEM WITH REDUCED FIXED PATTERN NOISE

Apple Inc., Cupertino, C...

1. A system comprising:a control circuit;
a plurality of pixel circuits;
a multiplexer configured to provide analog pixel data from a selected one of the plurality of pixel circuits;
a successive approximation register (SAR) analog-to-digital converter (ADC), wherein the SAR ADC includes:
a SAR configured to store a digital code corresponding to a most recent conversion performed by the SAR ADC;
a capacitive digital-to-analog converter (CDAC) configured to convert a digital value stored in the SAR into a corresponding analog signal, wherein the CDAC includes a two-dimensional (2D) array of circuit elements and further includes a first plurality of multiplexers coupled to provide a first subset of most significant bits (MSBs) from the SAR into rows of the 2D array and a second plurality of multiplexers coupled to provide a second subset of MSBs into columns of the 2D array and a demultiplexer coupled to provide a set of least significant bits (LSBs) from the SAR into columns of the 2D array; and
a comparator having a first input coupled to receive the corresponding analog signal from the SAR and a second input coupled to receive the analog pixel data from the multiplexer; and
a random number generator configured to generate, for a given frame, selection signals based on a random number such that a set of random ones of the circuit elements of the CDAC are selected for generation of the corresponding analog signal, wherein the selection signals are provided to the demultiplexer and ones of the first and second pluralities of multiplexers.

US Pat. No. 11,032,500

DARK NOISE COMPENSATION IN A RADIATION DETECTOR

SHENZHEN XPECTVISION TECH...

1. A radiation detector, comprising:pixels arranged in an array, the pixels comprising peripheral pixels at a periphery of the array and interior pixels at an interior of the array, each of the pixels configured to generate an electrical signal on an electrode thereof, upon exposure to a radiation;
an electronic system comprising a controller and a current source;
wherein the controller is configured to cause the current source to provide first compensation to the peripheral pixels for a dark noise of the peripheral pixels and configured to cause the current source to provide second compensation to the interior pixels for a dark noise of the interior pixels, the first compensation and the second compensation being different;
wherein the current source is configured to provide the first compensation by providing a first electric current to the peripheral pixels and to provide the second compensation by providing a second electric current to the interior pixels, the first electric current and the second electric current being different;
wherein the first electric current and the second electric current are different in waveforms thereof, or in frequencies thereof.
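The peripheral/interior split that drives the two compensation levels can be sketched as follows. The current values are made-up placeholders; the claim only requires that the two compensations differ.

```python
# Sketch of classifying pixels as peripheral (outer ring of the array)
# or interior, so different dark-noise compensation currents can be
# applied. The current magnitudes are illustrative assumptions.

def is_peripheral(row, col, n_rows, n_cols):
    """A pixel is peripheral if it lies on the outer ring of the array."""
    return row in (0, n_rows - 1) or col in (0, n_cols - 1)

def compensation_current_na(row, col, n_rows, n_cols,
                            peripheral_na=5.0, interior_na=3.0):
    """Pick the (hypothetical) compensation current for one pixel."""
    return peripheral_na if is_peripheral(row, col, n_rows, n_cols) else interior_na
```

Peripheral pixels typically see different dark-current behaviour than interior ones, which is why the claim compensates the two groups differently.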

US Pat. No. 11,032,499

SOLID-STATE IMAGE SENSOR AND IMAGING APPARATUS

SONY CORPORATION, Tokyo ...

1. A solid-state image sensor, comprising:a photoelectric converter configured to generate a charge corresponding to an exposure amount during an exposure period;
a generated-charge retention portion in a first semiconductor region, wherein the generated-charge retention portion is configured to retain the charge;
a generated-charge transfer portion configured to transfer the charge from the photoelectric converter to the generated-charge retention portion to perform a generated-charge transfer after an elapse of the exposure period;
an output charge retention portion configured to retain the charge;
a retained-charge transfer portion configured to transfer the charge retained in the generated-charge retention portion to the output charge retention portion to perform a retained-charge transfer;
a signal generation portion configured to generate a signal corresponding to the charge retained in the output charge retention portion as an image signal after the retained-charge transfer; and
a generated-charge retention gate portion that includes a first generated-charge retention gate and a second generated-charge retention gate, wherein the generated-charge retention gate portion is configured to:
apply a control voltage to the generated-charge retention portion to control a potential of the generated-charge retention portion;
sequentially change a voltage applied to the first generated-charge retention gate from the control voltage to an intermediate voltage and a bias voltage to generate a potential difference in the generated-charge retention portion; and
sequentially change a voltage applied to the second generated-charge retention gate from the control voltage to the intermediate voltage and the bias voltage to perform the retained-charge transfer.

US Pat. No. 11,032,498

HIGH SPEED TWO-DIMENSIONAL IMAGING WITH AN ANALOG INTERFACE

The Regents of the Univer...

1. A method of forming a quantitative two-dimensional image based upon incident events representing individual incident particles, in real time, comprising the steps of:(a) detecting incident events;
(b) amplifying detected events with an analog amplifier;
(c) converting the detected amplified events to light with a light generating element having a decay time;
(d) capturing image frames of the light at a frame rate on the order of the light generating element decay time;
(e) processing each frame pixel by pixel with a massively parallel processor and identifying valid events in individual image frames;
(f) combining valid events to form the quantitative two-dimensional image.
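Steps (e) and (f) can be sketched as below. The sketch scans each frame serially rather than on a massively parallel processor, and the validity test is reduced to a simple threshold; both simplifications are assumptions.

```python
# Hedged sketch of per-frame event identification and accumulation:
# each frame is scanned pixel by pixel, over-threshold pixels are
# treated as valid events, and events from all frames are combined
# into one quantitative 2D count image.

def identify_events(frame, threshold):
    """Return (row, col) positions of valid events in one frame."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, v in enumerate(row)
        if v >= threshold
    ]

def accumulate_image(frames, shape, threshold):
    """Combine valid events from all frames into one 2D count image."""
    image = [[0] * shape[1] for _ in range(shape[0])]
    for frame in frames:
        for r, c in identify_events(frame, threshold):
            image[r][c] += 1
    return image
```

Because each count corresponds to one detected particle, the accumulated image is quantitative rather than merely an intensity map.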

US Pat. No. 11,032,497

SOLID STATE IMAGING DEVICE, METHOD OF MANUFACTURING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS

Sony Corporation, Tokyo ...

1. An imaging device comprising:a semiconductor substrate;
a first photoelectric conversion region disposed in the semiconductor substrate;
a second photoelectric conversion region disposed in the semiconductor substrate adjacent to the first photoelectric conversion region;
a first color filter disposed over the first photoelectric conversion region;
a second color filter disposed over the second photoelectric conversion region;
a groove portion disposed in the semiconductor substrate between the first photoelectric conversion region and the second photoelectric conversion region;
a first insulating film disposed in the groove portion; and
a first air gap disposed between the first color filter and the second color filter.

US Pat. No. 11,032,496

ENHANCED SHUTTER EFFICIENCY TIME-OF-FLIGHT PIXEL

OmniVision Technologies, ...

1. A time-of-flight (TOF) pixel array, comprising:a pixel cell, wherein the pixel cell comprises a first cell region, a second cell region, and a deep trench isolation (DTI) structure,
wherein the first cell region comprises
a photodiode disposed in a semiconductor material layer to accumulate image charges in response to light incident upon the photodiode,
a first photogate disposed proximate to a frontside of the semiconductor material layer and positioned above the photodiode to attract charges in the semiconductor material layer toward the frontside in response to a voltage applied to the first photogate, and
a first doped region disposed proximate to the frontside of the semiconductor material layer, wherein the first doped region is implanted partially underneath the first photogate to accumulate charges of the photodiode when the voltage is applied to the first photogate,
wherein the second cell region comprises
a second doped region disposed proximate to the frontside of the semiconductor material layer,
a floating diffusion (FD) disposed in the semiconductor material layer proximate to the frontside of the semiconductor material layer,
a shutter transistor disposed proximate to the frontside of the semiconductor material layer, wherein a source terminal of the shutter transistor is coupled to the second doped region, wherein a drain terminal of the shutter transistor is coupled to the FD, and wherein a gate terminal of the shutter transistor is configured to transfer the image charges in the second doped region to the FD in response to a shutter signal, and
wherein the DTI structure is disposed in the semiconductor material layer to laterally encircle the first cell region.

US Pat. No. 11,032,495

SHARED-COUNTER IMAGE SENSOR

Rambus Inc., San Jose, C...

1. A method of operation in an image sensor integrated within an integrated circuit die, the method comprising:iteratively sampling a first pixel during a first interval to obtain a first sequence of sample values, the first pixel being disposed in a first region of a sensor/counter array on the integrated circuit die together with a second pixel and a first counter, the sensor/counter array including, in addition to the first region, a plurality of other regions disposed in rows and columns within the sensor/counter array and each including a respective counter and plurality of pixels;
accumulating a first count value within the first counter according to the first sequence of sample values;
iteratively sampling the second pixel during a second interval, without sampling the first pixel, to obtain a second sequence of sample values; and
accumulating a second count value within the first counter according to the second sequence of sample values.
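The time-sharing of one counter by the pixels of a region can be simulated as below. The sampling model and threshold are assumptions; the point is that both intervals accumulate into the same counter.

```python
# Sketch of a shared-counter region: two pixels time-share one counter.
# The first pixel is sampled during the first interval, the second pixel
# during the second interval, and both accumulate into the same counter.

class SharedCounterRegion:
    def __init__(self):
        self.counter = 0  # the single counter shared by the region's pixels

    def integrate(self, samples, threshold=1):
        """Accumulate the count of over-threshold samples from one pixel."""
        self.counter += sum(1 for s in samples if s >= threshold)
        return self.counter

region = SharedCounterRegion()
first_count = region.integrate([1, 0, 1, 1])   # first pixel, first interval
second_count = region.integrate([0, 1, 0, 1])  # second pixel, second interval
```

Sharing one counter per region is a silicon-area trade-off: the array needs far fewer counters than pixels, at the cost of sampling the region's pixels in disjoint intervals.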

US Pat. No. 11,032,494

RECOVERY OF PIXEL RESOLUTION IN SCANNING IMAGING

Versitech Limited, Hong ...

1. A method of enhancing pixel resolution of high-speed laser scanning imaging by means of sub-pixel sampling, applicable to one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) imaging;wherein the high-speed laser scanning imaging includes on-the-fly line scan imaging of a specimen,
wherein on-the-fly line scan refers to the relative motion between the specimen and the laser line-scan beam,
wherein the on-the-fly line scan imaging of the specimen comprises applying 1D line-scanning to the specimen with unidirectional motion to obtain captured 1D line scans, and reconstructing a 2D image by digitally stacking the captured 1D line scans,
wherein a fast axis of the 2D image corresponds to a line-scan direction, and a slow axis corresponds to a specimen motion direction,
wherein the method further comprises harnessing a warping effect of the 2D image or a resultant 3D image to create a relative subpixel shift on the fast axis, the slow axis, and an axial axis so as to restore a high-resolution 2D or 3D image, and wherein the warping effect is caused by pixel drift between adjacent line-scans as a sampling rate f of a digitizer is unlocked from a laser pulse repetition rate F of the high-speed laser scanning imaging.
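The drift mechanism named at the end of the claim can be worked through numerically. With the digitizer rate f unlocked from the laser repetition rate F, each line scan spans f/F samples, and the fractional remainder accumulates as a subpixel shift between adjacent lines. The example values below are illustrative only.

```python
# Sketch of the subpixel drift exploited by the claim: the fractional
# part of (samples per line) = f / F accumulates line by line, giving
# the relative subpixel shift used to restore a higher-resolution image.

from math import floor

def subpixel_shift_per_line(f_hz, F_hz):
    """Fractional sample offset accumulated from one line to the next."""
    samples_per_line = f_hz / F_hz
    return samples_per_line - floor(samples_per_line)

def shift_after_n_lines(f_hz, F_hz, n):
    """Accumulated subpixel offset (mod 1) after n line scans."""
    return (n * subpixel_shift_per_line(f_hz, F_hz)) % 1.0
```

For example, at 100.25 samples per line the shift is 0.25 pixel per line, so four adjacent lines together sample the fast axis on a grid four times finer than any single line.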

US Pat. No. 11,032,493

FRAMING ENHANCED REALITY OVERLAYS USING INVISIBLE LIGHT EMITTERS

International Business Ma...

1. A method for generating an augmented reality environment, the method comprising:locating, by one or more processors, one or more light emitters proximate to a location of a physical display area;
transmitting, by the one or more processors, information of a target content to an augmented reality device using light generated by the one or more light emitters, wherein the information includes a light-based data stream of the target content and the target content is not pre-loaded with data and related information; and
responsive to a reception of the information of the target content by the augmented reality device, determining, by the one or more processors, a portion of the target content for displaying on the physical display area.

US Pat. No. 11,032,492

VISIBLE LIGHT AND IR COMBINED IMAGE CAMERA

Fluke Corporation, Evere...

1. A thermal imaging system comprising:a visible light (VL) module including a VL lens and a VL sensor array for detecting VL image data of a target scene, the VL image data comprising VL image intensity data;
an infrared (IR) module including an IR lens and an IR sensor array for detecting thermal data in the target scene;
one or more processors, the one or more processors configured to:
receive VL image data of the target scene from the VL module;
receive thermal data of the target scene from the IR module;
generate a blended image based on the received VL image data and the received thermal data, wherein generating the blended image comprises:
palettizing the thermal data in color, such that the color of the palettized thermal data portrays temperatures in the target scene; and
blending the VL image intensity data with the palettized thermal data so that the VL image data only adds intensity data to the blended image; and
a display adapted to display the blended image.

US Pat. No. 11,032,491

PRESERVING PRIVACY IN SURVEILLANCE

Alarm.com Incorporated, ...

1. A computer-implemented method comprising:obtaining an image of a scene captured by a camera;
identifying a person in the image through object recognition;
determining locations of feet of the person;
determining that neither of the locations of the feet of the person is within a room;
based on determining that neither of the locations of the feet is within the room, determining that the person is in front of the room but not inside the room; and
in response to determining that the person is in front of the room but not inside the room, obfuscating, in the image, the room and not the person.
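The feet-based decision rule of this claim can be sketched as below. The geometry (axis-aligned room box, foot points) is an assumption, and the behaviour when a foot is inside the room is outside the claimed case, so the sketch returns no target there.

```python
# Toy sketch of the privacy rule: if neither foot location falls inside
# the room's region, the person is treated as in front of the room and
# the room (not the person) is obfuscated. Geometry is an assumption.

def point_in_box(p, box):
    (x, y), (x0, y0, x1, y1) = p, box
    return x0 <= x <= x1 and y0 <= y <= y1

def obfuscation_target(left_foot, right_foot, room_box):
    """Return which region to obfuscate, or None outside the claimed case."""
    inside = point_in_box(left_foot, room_box) or point_in_box(right_foot, room_box)
    if not inside:
        return "room"  # claimed case: person in front of, not inside, the room
    return None  # a foot is inside the room; not covered by this claim
```

Using foot locations rather than the body's bounding box avoids misclassifying a person who merely leans or reaches into the camera's view of the room.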

US Pat. No. 11,032,490

CAMERA ARRAY INCLUDING CAMERA MODULES

Verizon Patent and Licens...

1. A system comprising:a camera array comprising a set of camera modules that are each substantially identical relative to one another and communicatively coupled to one another via a daisy chain; and
an aggregation system stored on a memory and executed by one or more processors, the aggregation system operable to:
receive video data describing image frames from the camera array captured by the set of camera modules;
stitch the image frames together based on a frame sync signal and a relative position of each camera module of the set of camera modules to generate three-dimensional video data;
determine that color deficiencies occurred in the stitched image frames based on at least some of the camera modules facing different directions;
determine corrected pixel values for original pixel values in the stitched image frames that include the color deficiencies;
replace the original pixel values with the corrected pixel values; and
generate three-dimensional content that includes the corrected pixel values in a set of pixel values.

US Pat. No. 11,032,489

CAMERA NORMALIZATION

CERNER INNOVATION, INC., ...

1. One or more non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed, perform a method of normalizing images, the method comprising:receiving an indication of a second image to be taken of a patient in a second environment, wherein the patient is already associated with a first image taken at a time prior to the second image;
identifying a first environmental property of the first image, wherein the first environmental property is associated with a first environment where the first image was taken;
identifying a second environmental property of the second environment based on a building management system that monitors the second environment and provides environmental property data, wherein the second environmental property is associated with the second environment where the second image is to be taken;
determining that the first environmental property of the first image does not match the second environmental property of the second environment where the second image is to be taken based on the environmental property data provided by the building management system that monitors the second environment;
transmitting to a remote server a request to adjust the second environmental property to match the first environmental property of the first image; and
responsive to detecting the adjustment of the second environmental property to match the first environmental property, communicating a signal to the image capture device that causes the image capture device to capture the second image.

US Pat. No. 11,032,488

CAMERA SYSTEM WITH LIGHT SHIELD USING OBJECT CONTRAST ADJUSTMENT

Aptiv Technologies Limite...

1. A system, comprising:a controller configured to:
receive image data from an imager of a camera installed on a vehicle;
track, based on the image data, a position of a bright spot detected by the imager in a field of view of the camera, the bright spot being a reflection of light from a roadway traveled by the vehicle;
determine, based on the image data, an object detected by the imager;
position an absorber in a line-of-sight of the camera between the bright spot and the imager; and
adjust a shape and an opacity of the absorber thereby reducing an intensity of the bright spot and increasing a contrast of the object image data via a post-processing normalization in response to receiving subsequent image data from the imager;
wherein the absorber is a liquid crystal filter configured to match a geometry of the bright spot, and the controller is further configured to adjust the shape of the absorber by dynamically adjusting the shape of the absorber; and
wherein the controller is further configured to adjust the opacity of the absorber by adjusting the opacity across a height and a width of the absorber; and
wherein the absorber is comprised of a plurality of concentric shapes, wherein each subsequent concentric shape toward an outer dimension of the absorber is less opaque than the previous concentric shape.
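The graded, concentric-ring opacity described in this claim can be illustrated numerically. This is only a sketch: the geometric falloff factor and function name are assumptions of this example, not values taken from the patent.

```python
def ring_opacities(n_rings, center_opacity=1.0, falloff=0.7):
    # Each concentric ring toward the outer edge of the absorber is
    # less opaque than the previous one; here the decrease is modeled
    # as a geometric falloff (an assumed profile, not the claimed one).
    return [center_opacity * falloff ** i for i in range(n_rings)]
```

Any strictly decreasing profile would satisfy the claim's "each subsequent concentric shape ... is less opaque" condition; the geometric series is just one convenient choice.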

US Pat. No. 11,032,487

INTERCHANGEABLE LENS AND METHOD FOR CONTROLLING THE SAME, SHOOTING APPARATUS, AND CAMERA SYSTEM

SONY CORPORATION, Tokyo ...

1. An interchangeable lens, comprising:a diaphragm; and
a central processing unit (CPU) configured to:
control a motor to drive the diaphragm;
determine a driving plan for the driving of the diaphragm based on a command to drive the diaphragm from a shooting apparatus, wherein the command corresponds to change of a current aperture value to a target aperture value;
determine stabilization time information of the diaphragm based on the driving plan; and
transmit diaphragm driving information to the shooting apparatus based on the driving of the diaphragm, wherein
the diaphragm driving information includes the stabilization time information of the diaphragm.

US Pat. No. 11,032,486

REDUCING A FLICKER EFFECT OF MULTIPLE LIGHT SOURCES IN AN IMAGE

Google LLC, Mountain Vie...

1. A method of reducing a flicker effect of a plurality of light sources in an image captured with an imaging device, the method comprising:detecting a lighting frequency associated with each of at least two of the plurality of light sources;
prioritizing the lighting frequency of each of the at least two of the plurality of light sources relative to the flicker effect upon the image, the prioritizing to identify at least a first-prioritized lighting frequency and a second-prioritized lighting frequency;
determining a first exposure-time factorization set for the first-prioritized lighting frequency and a second exposure-time factorization set for the second-prioritized lighting frequency, the determining of the first and second exposure-time factorization set for the first- and second-prioritized lighting frequency, respectively, comprising:
identifying a first exposure time effective to reduce the flicker effect of the first-prioritized lighting frequency in the image;
identifying a first set of exposure times that includes multiples of a function calculated relative to the first exposure time;
identifying a second exposure time effective to reduce the flicker effect of the second-prioritized lighting frequency in the image; and
identifying a second set of exposure times that includes multiples of a function calculated relative to the second exposure time; and
adjusting an exposure time of the imaging device to a second exposure time in the first exposure-time factorization set that aligns to at least one of a matching or near-to-matching exposure time in the second exposure-time factorization set.
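The exposure-time factorization in this claim can be approximated in a few lines. This is a minimal sketch under one common assumption (AC lighting flickers at twice the mains frequency, so flicker-free exposures are integer multiples of the half-period); the function names and tolerance are hypothetical, not from the patent.

```python
def exposure_set(light_hz, count=20):
    # AC lighting flickers at twice the source frequency, so exposure
    # times that are integer multiples of the half-period average the
    # flicker out. Returns the first `count` such multiples (seconds).
    base = 1.0 / (2.0 * light_hz)
    return [base * n for n in range(1, count + 1)]

def best_common_exposure(primary_hz, secondary_hz, tolerance=1e-4):
    # Pick the shortest exposure in the first-prioritized set that
    # aligns to a matching or near-matching value in the
    # second-prioritized set, as in the claim's final step.
    primary = exposure_set(primary_hz)
    secondary = exposure_set(secondary_hz)
    for t in primary:
        if any(abs(t - s) <= tolerance for s in secondary):
            return t
    return None
```

For 50 Hz and 60 Hz sources the sets are multiples of 10 ms and roughly 8.33 ms, which first align at 50 ms.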

US Pat. No. 11,032,485

OPTICAL ASSEMBLY FOR SUPERIMPOSING IMAGES FROM TWO OR MORE SOURCES

Maranon, Inc., Eagle Roc...

1. An optoelectronic device comprising:an eyepiece;
an image tube;
an optical assembly including:
a non-contact receiver for receiving a first image,
a display for displaying the first image received via the receiver, and
a brightness detector;
wherein the optical assembly is positioned between the eyepiece and the image tube,
wherein a second image is output from the image tube,
wherein the optical assembly is configured to adjust a brightness level of the display based on brightness of the second image,
wherein the optical assembly superimposes the first image with the second image, the first image being received via the non-contact receiver, the second image being generated by the optoelectronic device in which the optical assembly is located,
wherein the first image is generated by a second optoelectronic device that is separate from the optoelectronic device in which the optical assembly is located, such that the first image from the second optoelectronic device and the second image from the optoelectronic device are superimposed for viewing through the eyepiece.

US Pat. No. 11,032,484

IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, IMAGE PROCESSING METHOD, IMAGING METHOD, AND PROGRAM

FUJIFILM Corporation, To...

1. An image processing apparatus comprising:a processor configured to
acquire a captured image in which a subject is imaged,
divide the captured image into a plurality of regions based on brightness information of the captured image,
calculate first white balance related information for each of the divided plurality of regions,
acquire second white balance related information set by a user for the captured image, and
decide a priority region which is decided based on the first white balance related information and the second white balance related information and for which a condition of a dynamic range expansion process to be performed on the captured image is set based on brightness of the priority region.
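The brightness-based region split and per-region white balance estimate can be sketched as follows. The claim does not name a white balance method, so the gray-world estimator and all function names here are assumptions of this illustration.

```python
def brightness(pixel):
    # Rec. 601 luma approximation for an (R, G, B) pixel.
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def split_by_brightness(pixels, threshold):
    # Partition the image's pixels into a bright region and a dark
    # region, a two-region version of the claim's brightness split.
    bright = [p for p in pixels if brightness(p) >= threshold]
    dark = [p for p in pixels if brightness(p) < threshold]
    return bright, dark

def white_balance_gains(region):
    # Gray-world estimate (an assumed method): per-channel gains that
    # equalize the mean R, G, and B values of the region.
    n = len(region)
    means = [sum(p[c] for p in region) / n for c in range(3)]
    gray = sum(means) / 3
    return [gray / m if m else 1.0 for m in means]
```

The claimed method then compares these per-region gains against the user-set white balance to choose the priority region; that comparison is not sketched here.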

US Pat. No. 11,032,483

IMAGING APPARATUS, IMAGING METHOD, AND PROGRAM

FUJIFILM Corporation, To...

1. An imaging apparatus comprising:an imager that images a motion picture of a subject; and
a processor configured to:
perform a dynamic range expansion process by causing the imager to capture a plurality of captured images having different exposure conditions in correspondence with a frame rate of the motion picture and generating one composite image from the plurality of captured images,
perform a dynamic range expansion process by causing the imager to capture one captured image in correspondence with the frame rate of the motion picture and correcting an output value of a signal of the one captured image, and
execute the dynamic range expansion process by generating one composite image from the plurality of captured images or the dynamic range expansion process by correcting the output value of the signal of the one captured image, based on a time of one frame period of the frame rate and a total exposure time in a case of capturing the plurality of captured images in the dynamic range expansion process by generating one composite image from the plurality of captured images.

US Pat. No. 11,032,482

AUTOMATIC SCREEN BRIGHTNESS AND CAMERA EXPOSURE ADJUSTMENT FOR REMOTE MULTIMEDIA COLLABORATION SESSIONS

CISCO TECHNOLOGY, INC., ...

1. A method comprising:determining that a screen is within a field of view (FOV) of a camera;
determining a first exposure level associated with the screen and a second exposure level associated with a portion of a scene that is within the FOV of the camera, wherein the first exposure level and the second exposure level correspond to a current brightness of the screen and a current exposure of the camera; and
adjusting at least one of the current exposure of the camera or the current brightness of the screen according to one or more first target exposure levels associated with the screen and one or more second target exposure levels associated with the portion of the scene.

US Pat. No. 11,032,481

CAMERA SCOPE ELECTRONIC VARIABLE PRISM

Medos International Sarl

1. A system comprising:a scope including a prism located at a distal end of the scope;
a hand piece;
an imaging sensor, the imaging sensor including an array of pixels;
interface elements which, when actuated, cause an angle of view provided by the prism to be changed in an image readout frame; and
image acquisition and processing circuitry that receives an indication from one or more of the interface elements to change an angle of view provided by the prism;
wherein the image acquisition and processing circuitry identifies a sub-array of pixels in the array of pixels that corresponds to the indicated angle of view and receives imaging data from only the sub-array of pixels corresponding to the indicated angle of view and generates an image from the imaging data for display on a display device.
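The mapping from a requested angle of view to a pixel sub-array can be illustrated with simple index arithmetic. This sketch is hypothetical: the claim does not specify the mapping, and the linear horizontal pan used here is an assumption.

```python
def subarray_bounds(angle_frac, full_w, full_h, sub_w, sub_h):
    # angle_frac in [-1, 1]: -1 selects the leftmost view, +1 the
    # rightmost. Returns (x0, y0, x1, y1) bounds of the sub-array of
    # pixels to read out, so only that region is acquired.
    max_x = full_w - sub_w
    x0 = round((angle_frac + 1) / 2 * max_x)
    y0 = (full_h - sub_h) // 2
    return x0, y0, x0 + sub_w, y0 + sub_h
```

Reading out only the sub-array (rather than cropping a full frame) is what lets the claimed circuitry change the effective angle of view within a single image readout frame.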

US Pat. No. 11,032,480

VIDEO ZOOM CONTROLS BASED ON RECEIVED INFORMATION

Hewlett-Packard Developme...

1. A system comprising:a processor; and
a non-transitory storage medium storing instructions executable on the processor to:
receive information sensed by an optical sensor responsive to light from a plurality of markers positioned at different locations of a boundary of a physical user collaborative area to receive user-input marks at a first participant location during a video conference session, wherein the plurality of markers are distinct from the physical user collaborative area;
based on the received information indicating the different locations of the boundary and further based on analyzing a smoothness of the physical user collaborative area in an image of the physical user collaborative area captured by a camera, determine the boundary of the physical user collaborative area;
control, based on the determined boundary, a video zoom into the physical user collaborative area during the video conference session;
save information of the determined boundary of the physical user collaborative area into a profile; and
for a subsequent video conference session involving participants at participant locations including the first participant location, access the profile to determine the boundary of the physical user collaborative area at the first participant location.

US Pat. No. 11,032,479

BIRD'S-EYE VIEW VIDEO GENERATION DEVICE, BIRD'S-EYE VIEW VIDEO GENERATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM

JVC KENWOOD Corporation, ...

1. A bird's-eye view video generation device comprising:a memory device;
a controller that performs functions of multiple components including:
a video data acquisition unit configured to acquire video data from a plurality of cameras configured to capture videos of surroundings of a vehicle;
a bird's-eye view video generator configured to generate a first bird's-eye view video from a virtual viewpoint at a position above the vehicle by performing viewpoint conversion processing on the video data acquired by the video data acquisition unit to synthesize a viewpoint-converted video;
an obstacle information acquisition unit configured to acquire information from at least one detector configured to detect at least one obstacle around the vehicle and to specify a position of the detected obstacle on the first bird's-eye view video;
a display controller configured to display the first bird's-eye view video in a display; and
a vehicle information acquisition unit configured to acquire a travelling direction of the vehicle,
wherein, when the position of the detected obstacle that is specified by the obstacle information acquisition unit overlaps a synthesis boundary between videos in the first bird's-eye view video, the bird's-eye view video generator is further configured to generate a second bird's-eye view video obtained by changing the position of the first virtual viewpoint of the bird's-eye view video to a position from which the detected obstacle does not overlap the synthesis boundary in the first bird's-eye view video, and wherein, when the position of the obstacle that is specified by the obstacle information acquisition unit overlaps the synthesis boundary in the traveling direction of the vehicle, the bird's-eye view video generator is further configured to generate a third bird's-eye view video obtained by changing the position of the virtual viewpoint of the first bird's-eye view video to a position on a side of the traveling direction of the vehicle from which the detected obstacle does not overlap the synthesis boundary in the first bird's-eye view video.

US Pat. No. 11,032,478

SMART CAMERA USER INTERFACE

Google LLC, Mountain Vie...

1. A computer-implemented method executed using one or more processors, the method comprising:receiving, by the one or more processors, image data, the image data being provided from a camera of a user device and corresponding to a scene viewed by the camera, the image data depicting a plurality of entities;
determining, by the one or more processors, one or more annotations for each of the plurality of entities depicted by the image data, each of the one or more annotations describing an entity characteristic;
predicting, by the one or more processors based on the one or more annotations for each of the plurality of entities, an intended entity from the plurality of entities depicted by the image data, the intended entity comprising an entity that a user of the user device intended to capture in the image data; and
determining, by the one or more processors, one or more actions based on the one or more annotations for the intended entity.

US Pat. No. 11,032,477

MOTION STABILIZED IMAGE SENSOR, CAMERA MODULE AND APPARATUS COMPRISING SAME

SAMSUNG ELECTRONICS CO., ...

1. A mobile device comprising:a camera module; and
an application processor configured to receive image information, time information, and movement information from the camera module,
wherein the camera module comprises:
a movement sensor configured to detect movement of the camera module; and
an image sensor configured to obtain the image information, to receive the movement information from the movement sensor, to output the movement information to the application processor, and to synchronize the movement information with the image information using a global clock.

US Pat. No. 11,032,476

IMAGE SENSOR AND ELECTRONIC DEVICE COMPRISING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. An image sensor, comprising:a pixel array that includes a plurality of pixels;
a first interface directly connected to an external gyro sensor and that receives gyro data output by the gyro sensor in response to a motion; and
a control logic that generates image data by exposing the plurality of pixels for a predetermined exposure period, generates valid data that corresponds to the exposure period using the gyro data, and generates, based on the valid data, compensation information that represents a movement path of the motion,
wherein the gyro data contains a plurality of sampling data generated at a predetermined sampling rate, and the control logic compares timestamps that represent a start time point and an end time point of the exposure period, respectively, to timestamps of the plurality of sampling data, wherein the valid data is determined from the plurality of sampling data.
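The timestamp comparison that selects valid gyro samples, and the accumulation of a movement path from them, can be sketched in a few lines. This is an illustrative approximation, not the claimed control logic; the tuple layout and the simple rate integration are assumptions of this example.

```python
def valid_gyro_samples(samples, exposure_start, exposure_end):
    # samples: list of (timestamp, wx, wy, wz). Keep only samples whose
    # timestamps fall within the exposure window, mirroring the claim's
    # comparison of exposure-period timestamps to sampling timestamps.
    return [s for s in samples if exposure_start <= s[0] <= exposure_end]

def movement_path(samples, dt):
    # Integrate angular rates over the sampling interval dt to build a
    # cumulative rotation path (the compensation information).
    path, angle = [], (0.0, 0.0, 0.0)
    for _, wx, wy, wz in samples:
        angle = (angle[0] + wx * dt, angle[1] + wy * dt, angle[2] + wz * dt)
        path.append(angle)
    return path
```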

US Pat. No. 11,032,475

PASSIVE METHOD TO MEASURE STRENGTH OF TURBULENCE

Mission Support and Test ...

1. A method for determining turbulence information for an atmospheric distance between a remote scene and an image capture device comprising:capturing an image reflected from the remote scene with the image capture device, the image reflected from the remote scene does not include laser light or laser light reflected from the remote scene;
generating image data with the image capture device, the image data representing an image of the scene;
storing the image data in a memory;
performing spatial/temporal spectrum characterization processing on at least a portion of the image data to generate spatial/temporal spectrum characterization turbulence data;
performing high confidence block shift processing on at least a portion of the image data to generate high confidence block shift turbulence data; and
combining the spatial/temporal spectrum characterization turbulence data and the high confidence block shift turbulence data to calculate the turbulence information.

US Pat. No. 11,032,474

IMAGE-DISPLAYING DEVICE

Seiko Epson Corporation, ...

1. An image-displaying device comprising:an image capturing sensor that starts capturing an image in synchronization with a first vertical synchronization signal and outputs output data corresponding to the image in a first period;
an image processor that processes the output data and outputs image data corresponding to the output data in a second period from the input of the output data; and
a display that starts a display of a display image based on the image data in synchronization with a second vertical synchronization signal so that a total period of the first period and the second period is less than or equal to a second vertical synchronization period corresponding to the second vertical synchronization signal.

US Pat. No. 11,032,473

SURVEILLANCE AND MONITORING SYSTEM

Minuteman Security Techno...

1. A surveillance data acquisition system, comprising:a server configured to:
receive a selection of a geometric area defined on a geolocation map, a date selection, and a time selection;
identify one or more vehicle monitoring systems comprising a video capture device recorded as being located within the selected geometric area defined on a geolocation map during the selected date and the selected time, the one or more vehicle monitoring systems comprising the video capture device being dispatched in one or more vehicles;
crosscheck locations of the one or more vehicles with the selection of the geometric area and the date and time using status information of the one or more vehicles to determine which vehicles of the one or more vehicles were in the selection of the geometric area at the selected time;
automatically retrieve surveillance video data originating from the one or more vehicle monitoring systems identified as being located within the selected geometric area defined on a geolocation map during the selected date at the selected time, the surveillance video data having been recorded on the one or more vehicles within the selected geometric area defined on a geolocation map at the selected date and the selected time; and
output the retrieved surveillance video data recorded in the selected geometric area defined on the geolocation map at the selected date and selected time.

US Pat. No. 11,032,472

IMAGE-CAPTURING DEVICE AND IMAGE-CAPTURING METHOD

TDK Taiwan Corp.

1. A device for driving optical component, comprising:a fixed part;
a movable part, movable relative to the fixed part; said movable part having a top surface and a plurality of side surfaces; and
an electromagnetic driving module, furnished between the fixed part and the movable part for driving the movable part to move relative to the fixed part; said electromagnetic driving module comprising at least one magnet and at least one coil corresponding to said at least one magnet;
wherein said movable part is able to support an optical element; when said optical element is supported by said movable part, said optical element is located above the top surface of the movable part and movable together with the top surface of the movable part;
wherein, each said magnet has a first surface facing one of said side surfaces of said movable part; said first surface of said magnet is neither perpendicular nor parallel to the top surface of the movable part;
said fixed part includes an outer carrier structure;
said movable part includes an inner carrier structure;
said device for driving optical component further comprises a twin-axial rotating element connected between the outer carrier structure and the inner carrier structure; said optical element is disposed at said twin-axial rotating element, the twin-axial rotating element is able to undergo limited pivotal motions about a first axial direction and a second axial direction perpendicular to the first axial direction;
said electromagnetic driving module drives the twin-axial rotating element together with the optical element to undergo the limited pivotal motions about the first axial direction and the second axial direction;
each said magnet further has a second surface opposite to the first surface; said second surface of said magnet is a curved surface facing toward one corresponding said coil;
each said coil has a curved inner surface facing toward one corresponding said magnet;
a virtual center is defined on an inner frame portion; said curved surface of said second surface of each said magnet is a portion of a spherical surface which has a center located right at the virtual center of the inner frame portion; in addition, the curved inner surface of each said coil is a portion of another spherical surface which has a center also located right at the virtual center of the inner frame portion; and
the at least one magnet comprises at least two first arc magnets and at least two second arc magnets; in addition, the at least one coil comprises at least two first arc coils and at least two second arc coils; the first arc magnets are mounted at two opposing sides of the inner carrier structure and are close to second connection ribs, while the first arc coils are located at two opposing sides of the outer carrier structure and are close to the first arc magnets respectively; by energizing the first arc coils, a corresponding electromagnetic force can be produced to push the first arc magnets and an inner plate portion together with the inner carrier structure to undergo a pivotal motion about the first axial direction; the second arc magnets are mounted at other two opposing sides of the inner carrier structure and are close to first connection ribs, while the second arc coils are located at other two opposing sides of the outer carrier structure and are close to the second arc magnets respectively; by energizing the second arc coils, another corresponding electromagnetic force can be produced to push the second arc magnets and the inner plate portion together with the inner carrier structure to undergo another pivotal motion about the second axial direction.

US Pat. No. 11,032,471

METHOD AND APPARATUS FOR PROVIDING A VISUAL INDICATION OF A POINT OF INTEREST OUTSIDE OF A USER'S VIEW

NOKIA TECHNOLOGIES OY, E...

1. A method comprising:during display of an image in a first orientation, identifying a point of interest outside of a user's view of a portion of the image;
providing a visual indication to the user of the point of interest outside of the user's view by causing at least a portion of the image within the user's view to be repositioned so as to have an orientation, different than the first orientation, that provides the visual indication of the point of interest outside of the user's view; and
after repositioning at least the portion of the image, causing at least the portion of the image to return to the first orientation.

US Pat. No. 11,032,470

SENSORS ARRANGEMENT AND SHIFTING FOR MULTISENSORY SUPER-RESOLUTION CAMERAS IN IMAGING ENVIRONMENTS

INTEL CORPORATION, Santa...

1. An apparatus comprising:one or more processors coupled to memory, the one or more processors to:
arrange sensors of a camera such that pixel centers of pixels of an image are spread evenly across a pixel area having pixel planes corresponding to the sensors, wherein the image is captured by the camera;
re-arrange the sensors by dividing the sensors in pairs of sensors, wherein each pair of sensors corresponds to a pair of pixel planes, wherein the sensors are re-arranged such that the pixel centers are spread evenly across the pixel area while maintaining virtual pixels of the pixels equal to a portion in size of physical pixels of the pixels; and
shift the sensors diagonally such that the corresponding pixel planes are adjusted accordingly for improving quality of the image.

US Pat. No. 11,032,469

IMAGING CONTROL APPARATUS, RADIATION IMAGING SYSTEM, IMAGING CONTROL METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An imaging control apparatus, comprising:an irradiation field obtaining unit configured to obtain irradiation field information in radiation imaging by an irradiation field obtaining method; and
an area dose obtaining unit configured to obtain an area dose in the radiation imaging based on the irradiation field information, wherein
when the irradiation field information is not obtained by a first irradiation field obtaining method, the irradiation field obtaining unit obtains the irradiation field information based on a second irradiation field obtaining method different from the first irradiation field obtaining method,
the first irradiation field obtaining method being an irradiation field obtaining method based on at least one of a radiation image obtained by radiation imaging, imaging information associated with the radiation imaging, and preset irradiation field information concerning the radiation imaging, and
the second irradiation field obtaining method being based on at least one of the radiation image obtained by radiation imaging, imaging information associated with the radiation imaging, and preset irradiation field information concerning the radiation imaging other than said first irradiation field obtaining method.

US Pat. No. 11,032,468

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE CAPTURING APPARATUS, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing method comprising:performing processing of selecting a learning model from a plurality of learning models that have learned a reference used to record an image generated by an image sensor;
performing, using the selected learning model, determination processing of determining whether the image generated by the image sensor satisfies the reference; and
recording the image generated by the image sensor in a memory in a case in which it is determined in the determination processing that the image generated by the image sensor satisfies the reference,
wherein the processing of selecting the learning model is performed based on at least one of an image capturing instruction by a user, an evaluation result of the image by the user, an environment when the image is generated by the image sensor, and a score of each of the plurality of learning models for the image generated by the image sensor, and wherein the processing of selecting the learning model is performed by changing at least one of a number of nodes in an input layer, a number of nodes in an intermediate layer, a number of nodes in an output layer, a feature amount represented by a node, an activation function of a node, a weight coefficient of a bond that connects nodes, and a number of layers in the intermediate layer.

US Pat. No. 11,032,467

MOBILE TERMINAL AND CONTROL METHOD THEREOF FOR OBTAINING IMAGE IN RESPONSE TO THE SIGNAL

LG ELECTRONICS INC., Seo...

1. A mobile terminal, comprising:a memory;
a display;
a first camera configured to obtain a first image corresponding to a preview image currently displayed on the display;
a second camera configured to detect an object from the preview image obtained by the first camera and obtain a second image for the detected object;
a sensor configured to sense a housing orientation of the mobile terminal; and
a controller;
wherein the controller:
controls the second camera to obtain the second image together in response to a signal for the first camera to obtain the first image,
controls the obtained first image by the first camera to be displayed on the display in response to sensing a first direction in which the housing orientation of the mobile terminal corresponds to landscape,
controls the obtained second image by the second camera to be displayed on the display in response to sensing a second direction in which the housing orientation of the mobile terminal corresponds to portrait, wherein the first image includes the second image,
wherein if receiving a zoom-in signal for the preview image before receiving the signal for obtaining the first image, displays an indicator on a prescribed region of the zoomed-in preview image and controls the indicator to display a rate occupied by the object included in a preview image after a zoom-in in a region of an object detected before the zoom-in by the second camera, and
obtains a third image corresponding to the preview image after the zoom-in, a fourth image corresponding to the object included in the preview image after the zoom-in and a fifth image corresponding to the object detected by the second camera before the zoom-in, creates a single image group using the third to fifth images, and saves data included in the created image group to the memory.

US Pat. No. 11,032,466

APPARATUS FOR EDITING IMAGE USING DEPTH MAP AND METHOD THEREOF

Samsung Electronics Co., ...

1. An electronic device comprising:at least one image sensor;
a display;
a processor; and
a memory configured to store instructions,
wherein the instructions, when executed by the processor, cause the processor to:
obtain, via the at least one image sensor, a color image and a depth map corresponding to the color image,
generate an image by combining the color image with the depth map,
control the display to display the image,
while displaying the image, receive a first user input selecting an object to be added to the image, and
in response to receiving the first user input, control the display to display at least a part of the object in the image, based on a depth value of the object and depth information of a first region in which the object is located in the image, and
wherein the depth information of the image is determined based on the depth map before receiving the first user input.

US Pat. No. 11,032,465

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGING APPARATUS, AND RECORDING MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus comprising:one or more processors; and
a memory storing instructions which, when the instructions are executed by the one or more processors, cause the image processing apparatus to function as units comprising:
an acquisition unit configured to acquire a plurality of viewpoint images with different viewpoints;
a first generation unit configured to set a detection range of an image shift amount of the plurality of viewpoint images on the basis of a photographing condition and to generate distribution information based on the detection range using the plurality of viewpoint images; and
a second generation unit configured to perform image processing using the generated distribution information and to generate an output image,
wherein the first generation unit decreases the detection range as an image height increases,
wherein the second generation unit generates refocused images by generating viewpoint images which are corrected by performing sharpening or smoothing processing on the plurality of viewpoint images and performing shift synthesis thereon.

US Pat. No. 11,032,464

METHODS AND APPARATUS FOR ABSOLUTE AND RELATIVE DEPTH MEASUREMENTS USING CAMERA FOCUS DISTANCE

Applied Materials, Inc., ...

1. A depth measuring apparatus comprising:a camera assembly configured to capture a plurality of images of a target at a plurality of distances from the target;
a position sensor configured to capture, for each of the plurality of images, corresponding position data associated with a relative distance between the camera assembly and the target; and
a controller configured to, for each of a plurality of regions within the plurality of images:
determine corresponding gradient values within the plurality of images;
determine a corresponding maximum gradient value from the corresponding gradient values; and
determine a depth measurement for a region of the plurality of regions based on the corresponding maximum gradient value and the corresponding position data captured for an image from the plurality of images that includes the corresponding maximum gradient value.
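The per-region depth determination above is a depth-from-focus scheme: for each region, the image in the focal stack with the strongest gradient is taken as in-focus, and that image's recorded stage position becomes the region's depth. A hypothetical sketch follows; the tile size, the mean-absolute-gradient focus measure, and all names are assumptions, not from the patent.

```python
import numpy as np

def depth_from_focus(stack, positions, region_size):
    """For each square region, pick the stack index with the highest
    focus measure (mean absolute gradient) and report the stage
    position recorded for that image.
    `stack`: (N, H, W) image array; `positions`: length-N distances."""
    n, h, w = stack.shape
    depth = np.zeros((h // region_size, w // region_size))
    for i in range(h // region_size):
        for j in range(w // region_size):
            tile = stack[:, i * region_size:(i + 1) * region_size,
                            j * region_size:(j + 1) * region_size]
            gy, gx = np.gradient(tile.astype(float), axis=(1, 2))
            sharp = np.mean(np.abs(gy) + np.abs(gx), axis=(1, 2))
            depth[i, j] = positions[int(np.argmax(sharp))]
    return depth
```

Any monotone focus measure (variance of Laplacian, Tenengrad, etc.) could replace the gradient sum; the claim only requires a maximum gradient value per region.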

US Pat. No. 11,032,463

IMAGE CAPTURE APPARATUS AND CONTROL METHOD THEREOF

CANON KABUSHIKI KAISHA, ...

1. An image capturing apparatus, comprising:
a detector configured to detect a focus adjustment position in an image;
a processor configured to generate a composite image in which a guide indicating the detected position is superimposed on the image;
a display configured to display the composite image generated by the processor;
a controller configured to update, in accordance with information of the position detected by the detector, display information used for processing for generating the composite image; and
a memory configured to store guide images including a plurality of guides of a quantity equal to a number of positions that the detector can detect in a whole image,
wherein the display information includes information indicating a ratio of compositing the image with respective guides, and
the controller updates the information indicating the ratio such that among the plurality of guides a transparency of the guide corresponding to the detected position is lower than a transparency of the guide not corresponding to the detected position,
wherein the display information is stored as a lookup table that the controller updates, and
within a data capacity of the lookup table, a resolution of information indicating the ratio for compositing can be changed in accordance with the quantity of the plurality of guides.
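The claimed lookup table can be pictured as a small per-guide table of compositing ratios, updated so that the guide at the detected focus position is composited more opaquely (lower transparency) than the rest. The 8-bit alpha values and all names in this sketch are assumptions, not from the patent.

```python
def update_guide_lut(num_guides, detected_index,
                     active_alpha=230, inactive_alpha=64):
    """Return a compositing-ratio lookup table (one 8-bit alpha per
    guide). The detected guide gets a high alpha (low transparency);
    all other guides get a low alpha (high transparency)."""
    return [active_alpha if i == detected_index else inactive_alpha
            for i in range(num_guides)]
```

In the claim, the resolution of these ratio entries can be coarsened as the number of guides grows, so the whole table still fits in a fixed data capacity.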

US Pat. No. 11,032,462

METHOD FOR ADJUSTING FOCUS BASED ON SPREAD-LEVEL OF DISPLAY OBJECT AND ELECTRONIC DEVICE SUPPORTING THE SAME

Samsung Electronics Co., ...

1. An electronic device comprising:
a camera comprising:
a lens assembly including one or more lenses,
an actuator configured to move at least one lens of the lens assembly, and
an image sensor; and
a processor configured to:
obtain a first image of an external object using the image sensor,
generate a first color image, which corresponds to a first color, and a second color image, which corresponds to a second color, from the first image,
identify, for a display object contained in the first image, an amount of difference in a first spread level between the display object of the first color image and the display object of the second color image,
determine a first on-focus position of the at least one lens with respect to the external object based on at least the amount of difference in the first spread level,
move the at least one lens in a direction corresponding to the first on-focus position by using the actuator,
obtain, in a state where the at least one lens is at least partially moved in the direction corresponding to the first on-focus position, a second image of the external object by using the image sensor,
determine a second on-focus position of the at least one lens with respect to the external object based on at least an amount of difference in a second spread level between the external object contained in the first image and the external object contained in the second image, and
move the at least one lens to the second on-focus position by using the actuator.
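The first on-focus estimate above exploits longitudinal chromatic aberration: the first and second color channels focus at slightly different depths, so the difference in their blur (spread) levels indicates how far, and in which direction, the lens is from focus. The sketch below uses an inverse-gradient spread proxy and a linear calibration from spread difference to lens position; the metric, the calibration, and all names are hypothetical, not from the patent.

```python
import numpy as np

def spread_level(channel):
    """Proxy for blur spread: inverse of the mean absolute gradient
    (a sharper channel has a smaller spread). Illustrative metric."""
    gy, gx = np.gradient(channel.astype(float))
    return 1.0 / (np.mean(np.abs(gy) + np.abs(gx)) + 1e-9)

def first_on_focus_position(first_color, second_color,
                            calib_slope, calib_offset):
    """Map the spread-level difference between two color channels to
    a lens position through a hypothetical linear calibration."""
    diff = spread_level(first_color) - spread_level(second_color)
    return calib_slope * diff + calib_offset
```

The claim then refines this with a second estimate based on the spread change between images taken before and after the lens movement, a depth-from-defocus-style correction.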

US Pat. No. 11,032,461

DUAL CAMERA MODULE AND OPTICAL DEVICE

LG INNOTEK CO., LTD., Se...

1. A camera device comprising:
a first camera module; and
a second camera module comprising a first surface facing the first camera module,
wherein the first camera module comprises:
a cover member comprising an upper plate and a lateral plate extending from the upper plate;
a bobbin disposed in the cover member;
a first coil disposed on the bobbin; and
a magnet disposed between the coil and the lateral plate of the cover member,
wherein the lateral plate of the cover member comprises a first lateral plate facing the first surface of the second camera module, a second lateral plate opposite to the first lateral plate, a third lateral plate disposed between the first lateral plate and the second lateral plate, and a fourth lateral plate opposite to the third lateral plate,
wherein the magnet comprises a first magnet disposed between the coil and the third lateral plate of the cover member, and a second magnet disposed between the coil and the fourth lateral plate of the cover member, and
wherein no magnet is disposed between the coil and the first lateral plate of the cover member.

US Pat. No. 11,032,460

IMAGE SENSOR WITH IMAGE RECEIVER AND AUTOMATIC IMAGE COMBINING

Cista System Corp., San ...

1. An imaging system comprising:
an image sensor comprising
a first one of N image sensor arrays to generate first image data for a first image, wherein N is an integer greater than two,
N−1 receivers each to receive, into the image sensor, second image data for N−1 respective second images, wherein the first image and the N−1 second images are captured concurrently;
an image combination circuit coupled to the first one of the image sensor arrays and the N−1 receivers to receive the first image data and the second image data and combine the first image data and the second image data into combined image data for a single combined image, according to one or more image combination criteria, and at least one of the first image data and the second image data, and
a transmitter coupled to the image combination circuit to transmit the combined image data for the combined image from the image sensor; and
N−1 second ones of the N image sensor arrays, each coupled to a respective one of the N−1 receivers of the image sensor to generate the second image data for N−1 second images, wherein the N−1 second ones of the N image sensor arrays are external to the image sensor.
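The claim leaves the image combination criteria open. One of the simplest possibilities is side-by-side stitching of the sensor's own image with the N−1 received images, sketched below; the function name and the concatenation criterion are assumptions, not from the patent.

```python
import numpy as np

def combine_images(first, seconds, axis=1):
    """Combine the sensor's own image with N-1 received images into a
    single combined frame. Concatenation along one axis is just one
    possible combination criterion (illustrative choice)."""
    return np.concatenate([first] + list(seconds), axis=axis)
```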

US Pat. No. 11,032,459

CAMERA MODULE INCLUDING REINFORCEMENT MEMBERS FOR SUPPORTING PRINTED CIRCUIT BOARD ON WHICH PLURALITY OF IMAGE SENSORS ARE DISPOSED AND ELECTRONIC DEVICE INCLUDING THE SAME

Samsung Electronics Co., ...

1. A camera comprising:
a substrate;
a first image sensor;
a second image sensor;
a first reinforcement member disposed in an area around a first area over the first image sensor and a second area over the second image sensor so as to support at least a portion of the substrate;
a first housing disposed in an area including at least a portion of the first area so as to be stacked directly on one part of the first reinforcement member while accommodating a first lens part corresponding to the first image sensor; and
a second housing disposed in an area including at least a portion of the second area so as to be stacked directly on another part of the first reinforcement member while accommodating a second lens part corresponding to the second image sensor,
wherein a first hole is formed at a position corresponding to the first area of the substrate,
wherein a second hole is formed at a position corresponding to the second area of the substrate,
wherein the first image sensor is disposed, at a position corresponding to the first hole, directly on a second reinforcement member which is disposed on an opposite side of the substrate from the first reinforcement member,
wherein the second image sensor is disposed, at a position corresponding to the second hole, directly on the second reinforcement member,
wherein the first reinforcement member comprises a first window and a second window through which the first image sensor and the second image sensor are exposed,
wherein a third window is formed between the first window and the second window, and
wherein an electronic component, associated with performance of at least one of the first image sensor or the second image sensor, is mounted through the third window.

US Pat. No. 11,032,458

MULTI-LENS PROTECTION DEVICE OF MOBILE PHONE

1. A multi-lens protection device for a mobile phone, comprising a protector, a multi-lens positioning plate, a first adhesive layer, an opaque printing layer, and a second adhesive layer, wherein the protector is configured for corresponding in contour and in shape to an upper surface of a camera module convex body on the mobile phone, wherein the multi-lens positioning plate, which has a contour and shape corresponding to the protector, having at least two perforations positioned and shaped corresponding to camera lenses on the camera module convex body, wherein the first adhesive layer is shaped corresponding to the multi-lens positioning plate and attached to one side of the multi-lens positioning plate, while facing the protector, wherein the opaque printing layer is shaped corresponding to the first adhesive layer and attached to one side of the protector while facing the first adhesive layer, wherein the opaque printing layer is adhered to the first adhesive layer, wherein the second adhesive layer is shaped corresponding to the multi-lens positioning plate and attached to another side of the multi-lens positioning plate while facing away from the first adhesive layer, wherein a total thickness of the second adhesive layer, the multi-lens positioning plate, the first adhesive layer, and the opaque printing layer has a height at least equal to a height of the camera lenses protruding from the camera module convex body.

US Pat. No. 11,032,457

BIO-SENSING AND EYE-TRACKING SYSTEM

THE REGENTS OF THE UNIVER...

1. A bio-sensing system, comprising:
a frame structured to be placed on a user's face;
a first camera coupled to the frame and facing towards an eye of the user to capture a first set of images of the eye;
a second camera coupled to the frame and facing away from the user and configured to capture a second set of images of an environment from the user's perspective;
one or more sensors configured to measure biological functions of the user and to generate sensor data,
wherein the one or more sensors comprises a photoplethysmogram (PPG) sensor and an accelerometer,
wherein the PPG sensor and the accelerometer are structured to be attached to an earlobe of the user,
wherein the PPG sensor is configured to generate PPG sensor data by capturing infrared light reflected from the earlobe of the user, and
wherein the accelerometer is configured to generate accelerometer data by detecting movement of the user; and
a computer electrically coupled to the one or more sensors, the first camera and the second camera, wherein the computer includes at least one processor in communication with a memory operable to execute to cause the computer to perform a method comprising:
filtering the PPG sensor data;
filtering the accelerometer data;
converting the filtered PPG sensor data to digital PPG sensor data;
converting the filtered accelerometer data to digital accelerometer data;
estimating, based on the digital accelerometer data, noise contributed from user movement; and
removing from the digital PPG sensor data the noise contributed from user movement to obtain a noise filtered PPG sensor data.
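The final two steps, estimating motion noise from the accelerometer and removing it from the PPG data, are commonly realized with an adaptive filter that uses the accelerometer as a noise reference. The LMS algorithm below is one standard choice, but that specific algorithm, its tap count, and its step size are assumptions, not recited in the claim.

```python
import numpy as np

def lms_cancel(ppg, accel, taps=8, mu=0.01):
    """Adaptively estimate the motion-induced component of the PPG
    signal from the accelerometer reference (LMS filter) and subtract
    it, returning the noise-filtered PPG samples."""
    w = np.zeros(taps)
    out = np.zeros(len(ppg), dtype=float)
    for n in range(len(ppg)):
        # Most-recent-first window of reference samples, zero-padded
        # at the start of the signal.
        x = accel[max(0, n - taps + 1):n + 1][::-1]
        x = np.pad(x, (0, taps - len(x)))
        noise_est = w @ x           # estimated motion noise
        e = ppg[n] - noise_est      # cleaned sample = error signal
        w += mu * e * x             # LMS weight update
        out[n] = e
    return out
```

After the filter converges, the output retains the cardiac component (uncorrelated with the accelerometer) while the motion artifact is largely cancelled.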

US Pat. No. 11,032,456

ULTRAFAST IMAGING APPARATUS

DISCO CORPORATION, Tokyo...

1. An ultrafast imaging apparatus comprising:
a chuck table configured to support a workpiece thereon; and
an imaging unit configured to capture images of the workpiece supported on the chuck table, wherein
the imaging unit includes:
an objective lens opposing the workpiece supported on the chuck table,
a beam splitter disposed in a first optical path extending from the objective lens,
an image processing unit disposed in a second optical path extending from the beam splitter, and
an illumination unit disposed in a third optical path extending from the beam splitter,
the illumination unit including
a broadband pulsed light source, and
a spectrometer configured to divide a single pulse of light, which has been emitted from the broadband pulsed light source, into a plurality of wavelengths and to produce a time lag between each two adjacent ones of the plurality of wavelengths, and
the image processing unit including
a diffraction grating configured to divide and diffract return light, which has been reflected by the workpiece supported on the chuck table after application of illumination light onto the workpiece with the time lag from the illumination unit, at different angles according to the wavelengths, and
an image sensor configured to capture images, like a time-resolved photo, of the return light, which has been divided and diffracted by the diffraction grating, at areas for the respective angles corresponding to the wavelengths.

US Pat. No. 11,032,455

FLASH, FLASH ADJUSTMENT METHOD, OPTICAL SYSTEM, AND TERMINAL

HUAWEI TECHNOLOGIES CO., ...

1. A light filling method, wherein the light filling method is implemented in a terminal comprising a flash coupled to a camera and a display, and wherein the light filling method comprises:
obtaining, by the camera, a current environment;
displaying the current environment in the display;
displaying a focal length and a light-filling strength on a point;
displaying a matching degree between the focal length and the light-filling strength;
receiving, by the display, an instruction from a user through clicking to perform light filling on the point;
filling, by the flash, the light-filling strength; and
adjusting, by the camera, the focal length on the point.

US Pat. No. 11,032,454

CIRCUIT BOARD, MOLDED PHOTOSENSITIVE ASSEMBLY AND MANUFACTURING METHOD THEREFOR, PHOTOGRAPHING MODULE, AND ELECTRONIC DEVICE

NINGBO SUNNY OPOTECH CO.,...

1. A circuit board, at least one photosensitive element being conductively connected to the circuit board, the circuit board comprising:
a substrate, the substrate having an edge region and being provided with a set of connectors; and
at least one circuit section, the circuit section being formed on the substrate, the photosensitive element and the circuit section being conductively connected via the connectors, the circuit section forming a ring circuit in the edge region of the substrate, and the ring circuit surrounding the photosensitive element and being located outside of the connectors.

US Pat. No. 11,032,453

IMAGE CAPTURING APPARATUS AND CONTROL METHOD THEREFOR AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image capturing apparatus comprising:
a light receiving sensor including two-dimensionally arranged pixels and configured to photoelectrically convert a pair of object images having passed through a first pupil region and a second pupil region into which a pupil region of an image capturing lens is divided in a first direction and to output a first image signal and a second image signal, each of the first image signal and the second image signal corresponding to a respective object image of the pair of object images; and
at least one processor or circuit configured to function as:
a calculation unit configured to calculate a first phase difference between the first image signal and the second image signal in the first direction and a second phase difference between the first image signal and the second image signal in a second direction orthogonal to the first direction;
a focus detection unit configured to calculate a first focus detection result based on the first phase difference; and
a determination unit configured to determine presence or absence of heat haze based on the second phase difference.
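The heat-haze determination above rests on a simple observation: defocus shifts the two pupil-divided images only along the pupil-division direction, so any measured shift in the orthogonal direction must come from another cause, such as atmospheric turbulence. The sketch below estimates both shifts by circular cross-correlation of projected signal profiles; the projection, the correlation method, and the threshold are assumptions, not from the patent.

```python
import numpy as np

def shift_1d(a, b):
    """Estimate the integer shift between two 1-D signals as the
    argmax of their circular cross-correlation (via FFT)."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    k = int(np.argmax(corr))
    return k if k <= len(a) // 2 else k - len(a)

def detect_heat_haze(img_a, img_b, threshold=1):
    """Phase difference along the pupil-division (here: horizontal)
    axis reflects defocus; a shift along the orthogonal (vertical)
    axis is attributed to heat haze. Returns (haze?, horiz, vert)."""
    horiz = shift_1d(img_a.mean(axis=0), img_b.mean(axis=0))
    vert = shift_1d(img_a.mean(axis=1), img_b.mean(axis=1))
    return abs(vert) >= threshold, horiz, vert
```

In the claim, the first (division-direction) phase difference feeds the focus detection result, while the second (orthogonal) phase difference drives the haze determination.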

US Pat. No. 11,032,452

CAMERA MODULE FOR A MOTOR VEHICLE

VEONEER SWEDEN AB, Varga...

1. A camera module for a motor vehicle, comprising: a lens objective, a housing having a back plate and a lens holder, wherein the back plate carries an image sensor within the housing, wherein the lens holder is mounted to the back plate and holds the lens objective such that the image sensor is arranged in or close to an image plane of the lens objective, the back plate comprises at least one bi-material element holding the image sensor, the bi-material element formed of at least two layers of materials having different thermal properties and being connected together such that the bi-material element is adapted to bend with changing temperature, wherein the bending behavior of the bi-material element formed in the back plate changes the spacing between the image sensor and the lens objective, and wherein the lens holder is made of materials having thermal properties and an end of the lens holder is directly attached to the bi-material element such that a thermal displacement of the bi-material element is able to compensate a thermal expansion of the lens holder and a shift of the image plane caused by changes of and within the lens objective.