US Pat. No. 10,798,457

START-UP PERFORMANCE IMPROVEMENT FOR REMOTE VIDEO GAMING

Nvidia Corporation, Sant...

1. A gaming manager, comprising: a memory; and
a processor configured to
start a video game running remotely, wherein the video game includes a static video portion and a user interactive video portion, and
receive the static video portion for local display while the user interactive video portion is being initialized remotely for subsequent local game play, wherein the static video portion is provided from a different source than the user interactive video portion, and a start time of the static video portion is adjusted to accommodate a display time of the static video portion being different than a required initialization time of the user interactive video portion.
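The claimed start-time adjustment reduces to a simple scheduling rule. A minimal sketch, assuming a policy (the claim itself does not state one) of delaying a short static clip so it finishes just as remote initialization completes:

```python
def adjusted_start_time(init_time_s: float, display_time_s: float) -> float:
    """Seconds to wait before starting the static video portion.

    Hypothetical policy: if the clip is shorter than the expected remote
    initialization, delay its start so it ends when the game is ready;
    if it is longer, start it immediately (it is cut off at readiness).
    """
    return max(0.0, init_time_s - display_time_s)
```

For example, a 6-second intro clip against a 10-second initialization would start 4 seconds in.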

US Pat. No. 10,798,456

METHOD AND DEVICE FOR PRESENTING INFORMATION

Tencent Technology (Shenz...

1. A method for presenting information, applied to a computing device, comprising: receiving an information presentation request sent from a first terminal;
acquiring a first network address of the first terminal;
determining at least one account according to the first network address, a first period to which a current time moment belongs, and a prestored correspondence between a network address, a period and an account;
selecting presentation information matching with account information of the at least one account; and
sending the presentation information to the first terminal;
wherein the determining the at least one account according to the first network address, the first period to which the current time moment belongs, and the prestored correspondence between the network address, the period and the account comprises:
determining an account with a maximum number of service messages according to the first network address, the first period to which the current time moment belongs, and a prestored correspondence between the network address, the period, the account, and the number of sent service messages.
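The final "maximum number of service messages" step is a lookup-and-argmax over the prestored correspondence. A sketch, with the correspondence modeled (as an assumption, not from the claim) as a mapping from (address, period, account) to a sent-message count:

```python
def select_account(correspondence, address, period):
    """Pick the account that sent the most service messages from the
    given network address during the given period.

    `correspondence` is a hypothetical prestored table:
    (network_address, period, account) -> number of sent service messages.
    Returns None when no account matches the address and period.
    """
    counts = {acct: n for (addr, per, acct), n in correspondence.items()
              if addr == address and per == period}
    return max(counts, key=counts.get) if counts else None
```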

US Pat. No. 10,798,455

VIDEO DELIVERY

Comcast Cable Communicati...

1. A method comprising:receiving, by a computing device, data indicating a partitioned immersive video frame comprising a plurality of blocks;
receiving data indicating a first area of focus associated with a first user who viewed the frame;
receiving data indicating a second area of focus associated with a second user who viewed the frame;
determining a common area of focus that is common to the first area of focus and the second area of focus;
multicasting, to a first user device associated with the first user and to a second user device associated with the second user, a baseline layer comprising blocks disposed within or overlapping the common area of focus and blocks disposed outside of the common area of focus, wherein in the baseline layer, the blocks disposed within or overlapping the common area of focus have higher resolution than the blocks disposed outside of the common area of focus; and
unicasting, to the first user device, an enhancement layer corresponding to blocks that are disposed outside of the common area of focus and within, or overlapping, the first area of focus.
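The layer split in this claim is set arithmetic over block identifiers. An illustrative sketch, treating areas of focus as sets of block IDs (the claim works with geometric overlap; block-ID sets are a simplifying assumption):

```python
def plan_layers(all_blocks, focus_a, focus_b):
    """Split a partitioned frame into a multicast baseline layer and a
    per-user unicast enhancement layer.

    Baseline: every block, with blocks in the common area of focus
    flagged high-resolution and the rest low-resolution.
    Enhancement for user A: blocks inside A's focus but outside the
    common focus.
    """
    common = focus_a & focus_b
    baseline = {b: ('high' if b in common else 'low') for b in all_blocks}
    enhancement_a = (focus_a - common) & all_blocks
    return baseline, enhancement_a
```

The baseline is sent once to both devices; only the per-user difference is unicast.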

US Pat. No. 10,798,454

PROVIDING INTERACTIVE MULTIMEDIA SERVICES

International Business Ma...

1. A method comprising:configuring a computer system including at least one memory and at least one processor to perform steps of:
selecting a multimedia program for presentation, the multimedia program having a plurality of segments respectively classified according to associated content;
populating a data entry database via respective data entry templates retrieved for each of the plurality of segments;
generating, via the respective data entry templates, a data stream including identifying information for the content associated with the plurality of segments, wherein a timing of the data stream is configured to coincide with a timing of the plurality of segments upon presentation by a multimedia system;
generating respective query grammars for each of the plurality of segments based on the respective data entry templates, wherein the respective query grammars are configured to interpret commands to access content in the data stream;
simultaneously transmitting, to the multimedia system, the content associated with at least one of the plurality of segments, one or more of the respective query grammars, and the data stream; and
responsive to determining that a detected input query is valid according to a query grammar among the respective query grammars, retrieving content associated with the input query, including additional content from the data stream retrieved without interfering with the presentation of the multimedia program.
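The per-segment query grammars gate which spoken or typed commands are treated as valid. A toy sketch, modeling each grammar as a regular expression (the claim does not specify a grammar formalism; regexes are an illustrative stand-in):

```python
import re

def valid_query(query, grammars):
    """Check a detected input query against per-segment query grammars.

    `grammars` is a hypothetical mapping from segment name to a regex
    pattern. Returns the name of the first grammar the query satisfies,
    or None when the query is invalid under every grammar.
    """
    for name, pattern in grammars.items():
        if re.fullmatch(pattern, query):
            return name
    return None
```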

US Pat. No. 10,798,453

CONTENT SCHEDULING

The DIRECTV Group, Inc., ...

1. A method, comprising:generating, by a system comprising a processor, a schedule that comprises information relating to scheduling first media content for a first presentation at a first time and second media content for a second presentation at a second time, via a presentation device, based on schedule-related information received via an interface of the system, wherein the schedule-related information indicates that the first media content is to be presented at the first time and the second media content is to be presented at a third time that occurs after the first time and before the second time, and wherein the first media content comprises a presentation length that spans from the first time to the second time;
determining, by the system, that there is a conflict between scheduling of the first media content and the second media content, in response to determining that the third time occurs before the second time;
to facilitate the generating of the schedule, resolving, by the system, the conflict, based on scheduling the second media content to be presented at the second time;
generating, by the system, a message template for a text message, wherein the message template comprises respective information fields usable to insert respective items of the schedule-related information relating to the first media content, and wherein the respective information fields comprise an information field;
communicating, by the system, the message template to a device via a network device of a communication network, wherein, using the message template, a first item of the schedule-related information is inserted into the information field of the message template to facilitate scheduling of the first media content for the first presentation at the first time, by the device, via the text message comprising the message template;
in response to receiving the text message having the message template with the first item of the schedule-related information, determining, by the system, that a second item of the schedule-related information relating to the first media content is missing from the text message based on a parsing of the text message;
in response to determining that the second item of the schedule-related information is missing, determining, by the system, the second item of the schedule-related information relating to the first media content based on a first search of content-related data associated with a data source device, wherein the first search includes the first item of the schedule-related information in a search query associated with the first search;
with regard to a program that is presented on a reoccurring basis for which scheduling of the program is requested on the reoccurring basis, via a third search, searching, by the system, for a first presentation time of a third presentation of third media content of the program during a second time period that occurs after a first time period, wherein the first presentation time is not available via a second search for the first presentation time during the first time period and is available via the third search for the first presentation time during the second time period that occurs before the first presentation time, wherein the third search is scheduled to be performed at a time subsequent to the second search in response to the first presentation time not being available during the first time period, wherein the first media content is part of the program and is a different episode of the program than the third media content, wherein the first presentation time is determined to be at a fourth time, and wherein the schedule comprises additional information relating to scheduling the third media content for the third presentation at the fourth time;
in response to determining, based on a change in a programming time schedule, that a second presentation time of a fourth presentation of fourth media content that is a subsequent episode of the program is changed from a regular periodic basis associated with the first media content and the third media content to a different regular periodic basis associated with a fifth time, modifying, by the system, the schedule to schedule the fourth presentation of the fourth media content at the fifth time; and
executing, by the system, the schedule to facilitate the first presentation of the first media content at the first time, the second presentation of the second media content at the second time, the third presentation of the third media content at the fourth time, and the fourth presentation of the fourth media content at the fifth time via the presentation device.
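The conflict-resolution core of this claim (a second item requested while the first is still playing gets pushed to the first item's end) can be sketched briefly; times in minutes are an illustrative assumption:

```python
def resolve_conflict(first_start, first_length, requested_second_start):
    """Detect the claimed scheduling conflict and resolve it.

    A conflict exists when the second item is requested (third time)
    before the first item finishes; it is resolved by scheduling the
    second presentation at the first item's end (the second time).
    """
    first_end = first_start + first_length
    if requested_second_start < first_end:
        return first_end          # conflict: defer to the second time
    return requested_second_start  # no conflict: keep requested time
```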

US Pat. No. 10,798,452

RECOMMENDING MEDIA PROGRAMS BASED ON MEDIA PROGRAM POPULARITY

Google LLC, Mountain Vie...

2. A computer-implemented method, comprising: receiving, at a computer system, information expressing a user's interest in one or more media programs;
training a recommendation engine by using behavior information from a plurality of users;
deriving, based on user profile information, a similarity between the user and one or more of the plurality of users;
identifying, using the recommendation engine, one or more other media programs that other users who share the similarity with the user have frequently presented in search submissions; and
generating and providing, for presentation to the user, a recommendation of the identified one or more other media programs.
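The claim combines profile similarity with search frequency among similar users. A toy sketch, using Jaccard overlap of interest sets as the similarity measure and a fixed 0.5 cutoff (both are illustrative assumptions, not the patented recommendation engine):

```python
from collections import Counter

def recommend(user_profile, other_users, search_logs, top_n=2):
    """Recommend programs that similar users frequently searched for.

    `user_profile` and each value in `other_users` are sets of interest
    tags; `search_logs` maps a user to the program names they searched.
    """
    def sim(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    similar = [u for u, prof in other_users.items()
               if sim(user_profile, prof) >= 0.5]
    counts = Counter(p for u in similar for p in search_logs.get(u, []))
    return [p for p, _ in counts.most_common(top_n)]
```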

US Pat. No. 10,798,451

SYSTEMS AND METHODS FOR DETECTING A REACTION BY A USER TO A MEDIA ASSET TO WHICH THE USER PREVIOUSLY REACTED AT AN EARLIER TIME, AND RECOMMENDING A SECOND MEDIA ASSET TO THE USER CONSUMED DURING A RANGE OF TIMES ADJACENT TO THE EARLIER TIME

Rovi Guides, Inc., San J...

1. A method for detecting a reaction by a user to a media asset to which the user previously reacted at an earlier time, and recommending a second media asset to the user based on other media assets consumed by the user during a predetermined range of times before or after the earlier time, comprising: detecting that a user consumed a media asset at a first time;
detecting, using at least one biometric sensor, a first reaction by the user to the media asset while the media asset was consumed by the user at the first time;
in response to detecting the first reaction:
determining a type of the first reaction;
converting a magnitude of the first reaction to a converted magnitude on a scale that corresponds to the type;
determining whether the converted magnitude is greater than a threshold value on the scale that corresponds to the type;
in response to determining that the converted magnitude is greater than the threshold value on the scale that corresponds to the type, searching a database for an entry indicating a second reaction, by the user, to the media asset, at a second time that is in the past relative to the first time, wherein the second reaction was detected using the at least one biometric sensor;
in response to finding, based on the searching, the entry indicating the second reaction by the user to the media asset at the second time, retrieving, based on information of the entry, identities of each of a plurality of media assets consumed by the user during a predetermined range of time before or after the second time; and
generating for a display a recommendation for a second media asset, to the user, based on the identities of each of the plurality of media assets;
detecting that the user consumed the media asset at a third time that is later than the first time;
detecting a third reaction by the user to the media asset, while the media asset was consumed by the user at the third time;
in response to detecting the third reaction:
determining a second type of the third reaction;
converting a magnitude of the third reaction to a second converted magnitude on a second scale that corresponds to the second type;
determining whether the second converted magnitude is greater than a second threshold value on the second scale that corresponds to the second type; and
in response to determining that the second converted magnitude is greater than the second threshold value on the second scale that corresponds to the second type, recommending a third media asset, to the user, based on identities of a second plurality of media assets consumed by the user during the predetermined range of time before or after the second time.
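The convert-then-threshold step normalizes raw biometric readings onto a per-reaction-type scale before comparing. A sketch in which the scales, types, and thresholds are hypothetical examples:

```python
# Hypothetical per-reaction-type scales: (raw maximum, threshold on 0..1)
SCALES = {
    'smile': (100.0, 0.6),
    'heart_rate': (180.0, 0.8),
}

def exceeds_threshold(reaction_type, raw_magnitude):
    """Convert a raw biometric magnitude onto the 0..1 scale for its
    reaction type and test it against that type's threshold."""
    raw_max, threshold = SCALES[reaction_type]
    converted = raw_magnitude / raw_max
    return converted > threshold
```

Only reactions that clear their type-specific threshold trigger the database search and recommendation.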

US Pat. No. 10,798,450

DISPLAY APPARATUS AND SET-TOP BOX IDENTIFICATION METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A method, performed by a display apparatus, of identifying a set-top box connected to the display apparatus, the method comprising: establishing, by the display apparatus, a network connection between the display apparatus and a network;
obtaining, by the display apparatus, network connection information corresponding to the established network connection;
identifying, by the display apparatus, an internet service provider based on the obtained network connection information;
rearranging, by the display apparatus, a set-top box search list by changing an order of set-top boxes corresponding to the identified internet service provider to assign a highest search priority to a set-top box corresponding to the identified internet service provider;
transmitting, by the display apparatus, a control signal for controlling the set-top box of the highest search priority to the connected set-top box; and
based on identifying that the connected set-top box performs an operation based on the transmitted control signal, identifying, by the display apparatus, the connected set-top box as the set-top box of the highest search priority.
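The rearranging step is a stable partition of the search list. A sketch, assuming a hypothetical mapping from set-top-box model to the internet service provider that distributes it:

```python
def rearrange_search_list(search_list, stb_to_isp, identified_isp):
    """Give set-top boxes belonging to the identified ISP the highest
    search priority while preserving relative order within each group.
    """
    matches = [s for s in search_list if stb_to_isp.get(s) == identified_isp]
    others = [s for s in search_list if stb_to_isp.get(s) != identified_isp]
    return matches + others
```

The display apparatus then tries control signals in this reordered priority order.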

US Pat. No. 10,798,449

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD FOR VALIDATING AN APPLICATION

Sony Corporation, Tokyo ...

1. An information processing apparatus, comprising: a communication interface configured to communicate with a communication network via a first communication medium;
a broadcast receiver configured to receive a broadcast signal via a second communication medium different from the first communication medium; and
processing circuitry configured to:
store an application information table associated with an application, the application information table being previously obtained from the communication network via the first communication medium by the communication interface, the application information table including a permission bitmap and permission scope information, the permission bitmap being stored in a memory as a stored permission bitmap and specifying a scope of use-permission for the application, and the permission scope information specifying a scope of broadcast resources to which the permission bitmap is applicable;
obtain control information from the broadcast signal received via the second communication medium by the broadcast receiver, the control information including an updated permission bitmap;
update the stored permission bitmap stored in the memory according to the updated permission bitmap obtained from the control information;
determine whether the information processing apparatus is communicatively connected to the network or not communicatively connected to the network;
in a case that the information processing apparatus is determined to be communicatively connected to the network,
determine whether a latest application information table stored in a server is newer than the previously obtained application information table, and
obtain the latest application information table from the server to update the previously obtained application information table stored in the information processing apparatus, when the latest application information table stored in the server is determined to be newer than the previously obtained application information table; and
in response to a user selection to start the application,
determine if an application information table currently stored in the information processing apparatus and associated with the application is expired or not based on expiration date information included in the currently stored application information table,
execute the application according to the currently stored application information table and the stored permission bitmap when the currently stored application information table is determined to be not expired, and
invalidate start of the application when the currently stored application information table is determined to be expired.

US Pat. No. 10,798,448

PROVIDING RESTRICTED OVERLAY CONTENT TO AN AUTHORIZED CLIENT DEVICE

The Nielsen Company (US),...

1. A method comprising:receiving, by a computing system and from a client device displaying a content feed, a request for an overlay having a restriction, the restriction being that only designated devices are authorized to display the overlay, wherein the request for the overlay specifies a particular overlay format of a plurality of different overlay formats that specify a way in which overlay content is presented on the client device, wherein the client device is configured to receive an overlay associated with the request from the computing system and to superimpose the overlay on the content feed;
making a determination, by the computing system, of whether the client device is one of the designated devices authorized to display the overlay; and
if the determination is that the client device is not one of the designated devices, then, responsive to making the determination, causing, by the computing system, the client device to continue to display the content feed without the overlay; and
if the determination is that the client device is one of the designated devices then, responsive to making the determination, communicating, by the computing system, a URL associated with a highest CPM overlay to the client device, wherein the client device is configured to subsequently retrieve the highest CPM overlay from a server specified by the URL and to display the overlay in accordance with the request.
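The authorization branch can be sketched compactly; modeling overlays as (CPM, URL) pairs is an illustrative assumption:

```python
def handle_overlay_request(device_id, designated_devices, overlays):
    """Return the URL of the highest-CPM overlay when the device is one
    of the designated devices; otherwise return None, meaning the
    client keeps displaying the content feed without an overlay.

    `overlays` is a hypothetical list of (cpm, url) pairs.
    """
    if device_id not in designated_devices:
        return None
    _, url = max(overlays)   # tuple comparison: highest CPM wins
    return url
```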

US Pat. No. 10,798,447

USAGE RULES ENFORCEMENT

NAGRAVISION S.A., Chesea...

1. A content handling device comprising: a content source module for receiving content;
a content sink module for outputting the content;
a plurality of content transformation modules defining one or more paths from the content source module to the content sink module, wherein the content is associated with one or more usage rules requiring one or more transformations to be applied to the content and each content transformation module is configured to:
receive the content,
apply a transformation to the content in accordance with the usage rules, and
apply a tagging operation corresponding to the transformation to the content; and
a usage rule tag checking module configured to:
receive the content,
determine whether all tagging operations corresponding to transformations required by the one or more usage rules have been applied to the content,
responsive to the determination being positive, enable the content to be output by the content sink module, and
responsive to the determination being negative, degrade or block output of the content.
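The usage-rule tag check is a subset test: every transformation the rules require must have left its tag on the content. A minimal sketch with string tags as a stand-in for the claimed tagging operations:

```python
def check_usage_tags(required_transforms, applied_tags):
    """Usage-rule tag check at the sink: enable output only when every
    transformation required by the usage rules has tagged the content;
    otherwise degrade or block the output."""
    if set(required_transforms) <= set(applied_tags):
        return 'output'
    return 'degrade_or_block'
```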

US Pat. No. 10,798,446

CONTENT NARROWING OF A LIVE FEED BASED ON COGNITIVE PROFILING

International Business Ma...

1. A method for execution by one or more processing modules of a streaming video processing system, the method comprises: receiving streaming video content;
storing the streaming video content, including storing at least a first stored portion of the streaming video content that precedes a current portion of the received streaming video content;
receiving, from a client device, a request for the streaming video content for display on a display device associated with the client device;
determining a plurality of media content tags associated with the streaming video content, the plurality of media content tags including, at least, a first media content tag associated with the first stored portion of the streaming video content and a second media content tag associated with the current portion of the streaming video content;
retrieving, from a database, a cognitive profile associated with a user of the client device;
calculating, for the first stored portion of the streaming video content, a first interest correlation value based on the first media content tag and the cognitive profile;
calculating, for the current portion of the streaming video content, a second interest correlation value based on the second media content tag and the cognitive profile;
determining whether the first interest correlation value indicates a greater interest level than the second interest correlation value; and
in response to determining that the first interest correlation value indicates a greater interest level than the second interest correlation value, facilitating display of the first stored portion of the streaming video content on the display device prior to display of the current portion of the streaming video content.
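The two interest-correlation values only need to be comparable. A toy sketch scoring each portion by the fraction of its media content tags present in the cognitive profile (the scoring function is an illustrative assumption):

```python
def interest_score(content_tags, profile_interests):
    """Toy interest-correlation value: fraction of a portion's media
    content tags that appear in the viewer's cognitive profile."""
    tags = set(content_tags)
    if not tags:
        return 0.0
    return len(tags & set(profile_interests)) / len(tags)

def portion_to_show(stored_tags, current_tags, profile):
    """Show the stored portion first when it correlates better with
    the profile than the current (live) portion does."""
    if interest_score(stored_tags, profile) > interest_score(current_tags, profile):
        return 'stored'
    return 'current'
```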

US Pat. No. 10,798,445

LINEAR MEDIA SELECTION

Sky CP Limited, (GB)

10. An apparatus for selecting a media item for output in a slot of a broadcast program break, comprising: i) a receiver arranged to:
receive segment definitions corresponding to a plurality of substitutional media items associated with the slot, wherein each segment definition includes an associated set of receiver profile data values;
receive a segment definition corresponding to a linear media item broadcast in the slot; and
ii) a processor arranged to:
compare the profile data associated with the receiver to the segment definition corresponding to the linear media item broadcast in the slot; and
select the linear media item for output in the slot of the broadcast program break in preference to one of the substitutional media items if the result of comparing the profile data associated with the receiver to the segment definition corresponding to the linear media item broadcast in the slot is that the segment definition corresponding to the linear media item matches the profile data associated with the receiver.
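The selection logic prefers the broadcast (linear) item whenever its segment definition matches the receiver profile. A sketch, modeling a segment definition as a dict of required profile values and falling back to the first matching substitutional item (the fallback order is an assumption):

```python
def select_slot_item(receiver_profile, linear_segment, substitutional_items):
    """Pick what to output in the slot.

    The linear item wins when its segment definition matches the
    receiver's profile data; otherwise the first matching
    substitutional item is chosen; with no match at all, the broadcast
    item simply plays through.
    """
    def matches(segment):
        return all(receiver_profile.get(k) == v for k, v in segment.items())

    if matches(linear_segment):
        return 'linear'
    for name, segment in substitutional_items:
        if matches(segment):
            return name
    return 'linear'
```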

US Pat. No. 10,798,444

METHOD AND SYSTEM FOR CHANNEL NAVIGATION AND PREDICTIVE TUNING IN A CONTENT RECEIVER

ADVANCED DIGITAL BROADCAS...

1. A computer-implemented method for channel navigation and predictive tuning in a content receiver having more than one tuner, wherein the channels have assigned channel numbers and categories, the method comprising: providing a list of categories;
providing an ordered list of channels for each category;
receiving a channel zapping command, selecting a next channel or a previous channel from the list of channels for the current category;
receiving a category zapping command, selecting a last watched channel in a next category or a previous category from the list of categories and changing the current category to the next category or the previous category;
storing the channel number of the selected channel as the last watched channel on the current channel category; and
tuning the content receiver to the selected channel;
applying a default assignment of free tuners as per system preferences for the purpose of predictive channel tuning;
executing in a loop, channel prediction comprising the following steps:
awaiting a channel category change or channel change request;
storing a time of the request;
determining a direction of the request wherein said directions are selected from a group comprising: a next channel, a previous channel, a next category, and a previous category;
verifying that the stored time is within a threshold to the previous category or channel change and based on the verification:
reading current predictive channel tuning assignment and increasing the number of tuners in the determined direction while decreasing the number of tuners in the direction identified by the system preferences;
storing current predictive channel tuning assignment;
tuning the free tuners to channels according to modified tuners assignment;
returning to the beginning of said loop,
wherein the system preferences for the purpose of predictive channel tuning define: a preference of a channel change over a category change; a preference of a next channel over a previous channel; and a preference of a next category over a previous category.
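The tuner-rebalancing step moves one spare tuner toward the direction the viewer keeps zapping in, taking it from the direction the system preferences favor. A sketch with hypothetical direction names and counts:

```python
def rebalance_tuners(assignment, direction, default_direction):
    """Shift one free tuner toward the observed zapping direction.

    `assignment` maps a direction (e.g. 'next_channel') to the number
    of free tuners pre-tuned that way. A tuner is moved only when the
    default direction still has one to give and the viewer's direction
    differs from it. Returns a new dict; the input is not mutated.
    """
    if assignment.get(default_direction, 0) > 0 and direction != default_direction:
        assignment = dict(assignment)
        assignment[direction] = assignment.get(direction, 0) + 1
        assignment[default_direction] -= 1
    return assignment
```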

US Pat. No. 10,798,443

SET-TOP BOX WITH ENHANCED CONTENT AND SYSTEM AND METHOD FOR USE OF SAME

Enseo, Inc., Plano, TX (...

1. A set-top box comprising: a housing securing a television input, a television output, a processor, memory, and storage therein;
a busing architecture communicatively interconnecting the television input, the television output, the processor, the memory, and the storage;
a wireless transceiver associated with the housing and coupled to the busing architecture, the wireless transceiver operable to communicate with a non-remote control proximate wireless-enabled interactive programmable device;
the television input configured to receive a source signal from an external source;
the television output configured to forward a fully tuned signal to a television; and
the memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
establish a pairing between the non-remote control proximate wireless-enabled interactive programmable device and the set-top box,
receive and process at least partially processed trajectory data from the non-remote control proximate wireless-enabled interactive programmable device, the at least partially processed trajectory data being representations of motion of the non-remote control proximate wireless-enabled interactive programmable device,
evaluate the at least partially processed trajectory data to assign a meaning to the motion of the non-remote control proximate wireless-enabled interactive programmable device,
responsive to evaluating the at least partially processed trajectory data, generate a command signal to enable the non-remote control proximate wireless-enabled interactive programmable device to have virtual remote control functionality, and
send the command signal to the television;
a configuration profile associated with the memory and processor-executable instructions that enables the set-top box to control a plurality of proximate amenities in a multi-room environment, the plurality of proximate amenities including a particular amenity, the plurality of proximate amenities being associated with a user's stay in a lodging environment; and
wherein the memory includes processor-executable instructions that, when executed cause the processor to:
provide instructions for icons on a touch screen display associated with the non-remote control proximate wireless-enabled interactive handheld device, the icons being associated with the particular amenity,
receive and process the at least partially processed trajectory data from the non-remote control proximate wireless-enabled interactive handheld device, and
responsive to processing the at least partially processed trajectory data, send a command to the particular amenity.

US Pat. No. 10,798,442

COORDINATION OF CONNECTED HOME DEVICES TO PROVIDE IMMERSIVE ENTERTAINMENT EXPERIENCES

The DIRECTV Group, Inc., ...

5. A method, comprising: sending, by a customer device of a customer, a request for delivery of a media to the customer device, wherein the customer device comprises a mobile communication device, wherein the mobile communication device comprises a smartphone or a wearable computing device;
receiving, by the customer device, in response to the sending the request, a query to determine whether the customer wishes to receive a metadata file associated with the media, wherein the metadata file is generated using image recognition or character recognition applied to the media;
sending, by the customer device, a response indicating that the customer wishes to receive the metadata file;
receiving, by the customer device, the metadata file from a first signal;
extracting, by the customer device, the metadata file from the first signal, wherein the metadata file is associated with the media for presentation via an output device, wherein the customer device is synchronized with the output device;
receiving, by the customer device, the media from a second signal; and
sending, by the customer device, an instruction to a connected home device based on contents of the metadata file, wherein the instruction instructs the connected home device to adjust a setting to synchronize the connected home device with a portion of the media that is being presented via the output device.

US Pat. No. 10,798,441

INFORMATION PROCESSING METHOD, APPARATUS, AND DEVICE

TENCENT TECHNOLOGY (SHENZ...

1. An information processing method, applied in a second device, comprising: receiving, by the second device from a fourth device, a first read request containing video information of a video played at the fourth device, a frame number of a current frame, and position information of a user input operation comprising a user interaction with the video to receive information about a commodity in the video;
querying, by the second device, a two-dimensional list corresponding to the current frame for an identification of the commodity according to the position information of the user input operation, wherein the two-dimensional list for the current frame includes at least two different commodities at different locations within the frame;
acquiring, by the second device, identification information of the commodity in the video corresponding to the user interaction with the video based on the query of the two-dimensional list;
acquiring, by the second device, according to the identification information of the commodity, corresponding advertisement information comprising commodity information and attribute information, wherein the commodity information and the attribute information are used to display the commodity and provide purchase information to the user;
sending, by the second device, a first read response comprising the advertisement information to the fourth device, so that the fourth device can display the advertisement information on a display screen of the fourth device; and
receiving, by the second device, a second advertisement access request from a first device, the second advertisement access request containing the identification information of the commodity and the advertisement information of the commodity.
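The per-frame two-dimensional list query is a point-in-region lookup. A sketch, modeling the list (as an assumption) as a mapping from commodity ID to the bounding box it occupies in the frame:

```python
def commodity_at(two_d_list, x, y):
    """Look up which commodity occupies a tapped position in the
    current frame.

    `two_d_list` maps a commodity id to its bounding box
    (x0, y0, x1, y1) in frame coordinates; boxes are assumed not to
    overlap. Returns None when no commodity is at the position.
    """
    for commodity_id, (x0, y0, x1, y1) in two_d_list.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return commodity_id
    return None
```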

US Pat. No. 10,798,440

METHODS AND SYSTEMS FOR SYNCHRONIZING DATA STREAMS ACROSS MULTIPLE CLIENT DEVICES

FACEBOOK, INC., Menlo Pa...

1. A server system, comprising: one or more processors;
a clock; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs comprising instructions for:
receiving a program manifest for a video from a content delivery network, wherein the video includes a plurality of video segments;
parsing the program manifest to identify a timeline for the video;
determining a current playback position for the video and a corresponding initial timestamp according to the clock;
periodically updating the current playback position for the video according to the clock and the initial timestamp;
receiving, over a span of time, requests from a plurality of client devices to view the video; and
for each request from a respective client device, sending the current playback position to the respective client device.
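The claimed position update is a simple clock computation: the server pins an initial timestamp to its clock, then derives the current playback position from elapsed clock time. A minimal sketch of that arithmetic, with all parameter names hypothetical (not from the patent):

```python
def playback_position(initial_timestamp, start_clock, now, duration):
    # The server records `initial_timestamp` against its clock reading
    # `start_clock`; the current position is that timestamp plus elapsed
    # clock time, wrapped at the video's duration for a looping stream.
    return (initial_timestamp + (now - start_clock)) % duration
```

Each client request is then answered with the same server-computed position, which is what keeps the plurality of clients synchronized.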

US Pat. No. 10,798,439

METHOD AND APPARATUS FOR RECEIVING, SENDING AND DATA PROCESSING INFORMATION RELATED TO TIME SUCH AS LEAP SECOND AND DAYLIGHT SAVING TIME (DST)

Saturn Licensing LLC, Ne...

1. A receiving apparatus, comprising:
circuitry configured to
receive metadata including information for executing a process related with time information in accordance with a mode, the mode indicating one of a Coordinated Universal Time (UTC) and a non-UTC of a component, and one of the UTC, a Precision Time Protocol (PTP), and a local time of a transmission system including the receiving apparatus for executing the process; and
execute the process related with the time information based on the metadata, wherein
the metadata includes
information for correcting a discontinuous time to a continuous time in accordance with the mode; and
an offset value between a reference time and the discontinuous time, and
the circuitry is further configured to correct the discontinuous time to the continuous time based on the offset value and the information for correcting the discontinuous time to the continuous time in accordance with the mode.
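The core of the correction step is applying the signaled offset between the reference time and the discontinuous time. A minimal sketch under that reading, with all names hypothetical:

```python
def to_continuous(received_time, offset, discontinuity_start):
    # Per the claim's metadata, `offset` is the gap between the reference
    # time and the discontinuous time (e.g. across a leap second); times at
    # or after the discontinuity are shifted onto a continuous timeline.
    if received_time >= discontinuity_start:
        return received_time + offset
    return received_time
```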

US Pat. No. 10,798,438

DETERMINING AUDIENCE STATE OR INTEREST USING PASSIVE SENSOR DATA

Microsoft Technology Lice...

1. A computer-implemented method comprising:
causing a microphone to capture sensed audio originating from an audience during presentation of a media program to the audience;
accessing previously-recorded audio data for an individual member of the audience;
comparing an individual amplitude of the previously-recorded audio data for the individual member of the audience to other amplitudes of other users;
in an instance when the individual amplitude of the individual member of the audience has a lower magnitude than the other amplitudes of the other users, determining that the individual member of the audience is relatively less expressive or vocal than a typical user;
determining an adjusted interest level of the individual member of the audience based at least on the sensed audio, the adjusted interest level accounting for the individual member of the audience being relatively less expressive or vocal than a typical user; and
outputting a representation of the adjusted interest level of the individual member of the audience.
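One way to realize the claimed adjustment is to compare a member's recorded baseline amplitude against other users' baselines and scale the sensed audio up for a relatively quiet member, so low expressiveness is not read as low interest. A sketch under that assumption (the linear scaling and all names are illustrative, not the patented method):

```python
def adjusted_interest(sensed_amplitude, member_baseline, other_baselines):
    # Compare the member's previously-recorded amplitude to other users';
    # a member quieter than typical gets their sensed audio scaled up.
    typical = sum(other_baselines) / len(other_baselines)
    if member_baseline < typical:
        return sensed_amplitude * (typical / member_baseline)
    return sensed_amplitude
```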

US Pat. No. 10,798,437

SYSTEMS AND METHODS FOR PREDICTIVE MEDIA FILE TRANSFER TO USER-CARRIED STORAGE COMPONENTS

SLING MEDIA LLC, Foster ...

1. A method carried-out utilizing a media content server, a network, and a user-carried memory component carried by an end user and accessible to the media content server over the network, the method comprising:
determining when an offline viewing event is forecast to occur due to an anticipated desire on behalf of the end user to view media content in a future timeframe without reliance on a network connection;
when determining that an offline viewing event is forecast to occur, setting a tentative schedule for conducting a media file transfer process, the tentative schedule taking into account a cumulative amount of media content scheduled for transfer and an approximate duration required for the transfer process to complete, wherein the approximate duration is estimated based upon data transfer rates;
utilizing the approximate duration which has been estimated for the media file transfer process to establish media transfer start conditions for initiating transfer of selected media content to the user-carried memory component prior to the forecast offline viewing event;
adjusting on an iterative basis based upon variations in estimated data transfer rates and changes in a set of parameters associated with the forecast offline viewing event, the media transfer start conditions to initiate transfer for the selected media content; and
upon satisfaction of the media transfer start conditions, initiating transfer of the selected media content from the media content server, over the network, and to the user-carried memory component.
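The scheduling arithmetic the claim describes reduces to estimating the transfer duration from the cumulative content size and the observed data rate, then backing the start time off from the forecast offline viewing event; recomputing as rate estimates change gives the iterative adjustment. A sketch with illustrative names (the safety margin is an assumption, not a claim element):

```python
def transfer_start_time(event_time, total_bytes, bytes_per_sec, margin_sec=0.0):
    # Approximate duration = cumulative content size / estimated data rate;
    # start early enough that the transfer completes before the event.
    approx_duration = total_bytes / bytes_per_sec
    return event_time - approx_duration - margin_sec
```

Re-calling this function whenever `bytes_per_sec` or the event parameters change corresponds to the claim's iterative adjustment of the start conditions.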

US Pat. No. 10,798,436

SYSTEM AND METHOD TO DELIVER VIDEO CONTENT

1. A method, comprising:
identifying, by a processing system including a processor, video content scheduled to be broadcast during a first viewing period, wherein the first viewing period is during a peak viewing period;
determining a related multimedia receiver for the video content, wherein the determining the related multimedia receiver for the video content comprises accessing first historical data for the related multimedia receiver from a historical data server, accessing second historical data for a mobile device from the historical data server, and accessing third historical data for a related multimedia storage device, wherein the first historical data and the second historical data include data indicating multimedia receivers that received previous video content associated with the video content, and the third historical data include data indicating multimedia storage devices, including the related multimedia storage device, that received previous video content associated with the video content, and based on the first historical data, the second historical data, and the third historical data, determining that the related multimedia receiver and the mobile device received previous video content that is related to the video content, wherein the related multimedia receiver, the mobile device, or both, are associated with the related multimedia storage device;
determining, by the processing system, a target multimedia receiver that is predicted to request the video content according to the first historical data and the second historical data and according to the related multimedia receiver, the mobile device, and the related multimedia storage device having received the previous video content related to the video content;
transferring, by the processing system, the video content to a multimedia storage device communicatively coupled to the target multimedia receiver during a non-peak viewing period;
storing, by the processing system, data into the multimedia storage device, wherein the data prevents access to the video content from the multimedia storage device prior to the first viewing period;
receiving, by the processing system, a first request from the mobile device for the video content prior to the first viewing period;
responsive to the receiving the first request, providing, by the processing system, a denial of access indicator to the mobile device; and
providing, by the processing system, the video content from the multimedia storage device to the mobile device during the first viewing period in response to receiving a second request from the mobile device for the video content, wherein the mobile device presents the video content to a user.

US Pat. No. 10,798,435

DYNAMIC VISUAL EFFECT ENHANCING SYSTEM FOR DIGITAL CINEMA AND CONTROL METHOD THEREOF

GDC TECHNOLOGY (SHENZHEN)...

1. A dynamic visual effect enhancing system for a digital cinema to be presented in a movie theater, the system comprising:
a sound effect control unit configured to control a speaker for producing a sound effect according to the digital movie;
a visual effect light source control unit configured to control a light group for visual effects in a theater, said light source control unit including:
a lighting control for generating a visual effect lighting control signal that complements the content of the digital movie, and
a power source controller configured to receive the lighting control signal from the lighting control and to control the light group by generating power signals to cause the display of different visual effects;
a programming module for creating an executable program to provide sound effects and visual effects, wherein the executable program is created according to one of an image file and a sound file contained in the digital movie; and
a server configured to control the visual effect light source control unit and the sound effect control unit, wherein the visual effect light source control unit and the sound effect control unit are controlled in accordance with the executable program created by the programming module,
wherein the server includes a storage for storing the image file and the sound file of the digital movie, and
wherein the server is configured to control the light source control unit and the sound effect control unit via a communication network.

US Pat. No. 10,798,434

VIDEO ANALYTICS SYSTEM

Mux, Inc., San Francisco...

1. A method for monitoring playback of a video over a network comprising:
for each video player in a set of video players streaming the video during a first time period:
receiving a set of beacon data from a video player in the set of video players, the set of beacon data:
generated by a video interface module of the video player in the set of video players; and
comprising a set of tracked data metrics characterizing the video, an operating frequency of a computational device executing the video player in the set of video players, and an identifier of the computational device executing the video player in the set of video players;
classifying the set of beacon data as deriving from a user view based on the operating frequency of the computational device executing the video player in the set of video players, the identifier of the computational device executing the video player, and a crawler model;
in response to classifying the set of beacon data as deriving from the user view, aggregating the set of beacon data into a beacon stream;
generating a real-time event stream based on the set of tracked data metrics of each set of beacon data in the beacon stream;
calculating a summary metric based on the real-time event stream; and
in response to the summary metric exceeding a threshold, serving an alert at an interface module.
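The claimed pipeline filters out non-human traffic, aggregates the surviving beacons, and alerts on a threshold. A minimal sketch, where `is_crawler` stands in for the patent's crawler model and all other names are illustrative:

```python
def summarize_beacons(beacons, is_crawler, metric, threshold):
    # Keep only beacons classified as deriving from real user views,
    # aggregate them into a stream, compute a summary metric, and flag
    # an alert when the summary exceeds the threshold.
    stream = [b for b in beacons if not is_crawler(b)]
    summary = sum(metric(b) for b in stream) / max(len(stream), 1)
    return summary, summary > threshold
```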

US Pat. No. 10,798,433

DYNAMIC CONTENT DELIVERY ROUTING AND RELATED METHODS AND SYSTEMS

DISH Technologies L.L.C.,...

1. A remote storage digital video recorder (RS-DVR) system comprising:
a network interface to communicate via a plurality of different backbone provider networks;
a data storage to store per-subscriber rights media content; and
a processing module coupled to the network interface and the data storage to transmit one or more portions of the per-subscriber rights media content from the RS-DVR system to a client device using a first delivery route to the client device, the first delivery route comprising a first backbone provider network of the plurality of different backbone provider networks, and thereafter transmit a subsequent portion of the per-subscriber rights media content from the RS-DVR system to the client device using a different delivery route to the client device, the different delivery route comprising an alternative backbone provider network of the plurality of different backbone provider networks instead of the first backbone provider network, wherein the processing module marks the one or more portions and the subsequent portion as non-cacheable.

US Pat. No. 10,798,432

METHOD AND SYSTEM FOR PROCESSING HEVC CODED VIDEO IN BROADCAST AND STREAMING APPLICATIONS

Cisco Technology, Inc., ...

1. A method for processing a video stream comprising:
receiving the video stream at a video processing device, the video stream comprising video usability syntax elements, the video usability syntax elements comprising a first flag indicative of whether Hypothetical Reference Decoder (HRD) parameters are present for one of: fixed frame rate processing or low delay mode processing, the video usability syntax elements further comprising a second flag indicative of whether the video stream comprises field-coded video sequences, wherein a first value of the first flag indicates the fixed frame rate processing and a second value of the first flag indicates the low delay mode processing, wherein the low delay mode processing requires that all pictures in the video stream comprise a presentation time stamp (PTS) equal to or inferred as equal to each picture's respective decoding time stamp (DTS), and wherein each picture's respective DTS is not present in a packetized elementary stream (PES) packet header;
receiving personal video recording (PVR) assist information in the video stream, wherein the PVR assist information pertains to picture interdependencies with successive tier numbers to extract a self-decodable stream from the video stream, wherein the PVR assist information comprises a third flag indicating to block trick mode over a corresponding segment of the video stream in which the third flag is effective, and wherein the third flag is signaled at each random access point over a duration of the corresponding segment when the trick mode is blocked over the corresponding segment comprising successive random access pictures;
inferring a third value associated with a fourth flag indicative of a fixed picture rate based on the first value of the first flag; and
outputting the video stream at the video processing device based on the first flag, the second flag, the third flag, and the fourth flag.

US Pat. No. 10,798,431

METHOD AND APPARATUS FOR MANAGING COMMUNICATION SESSIONS

1. A device comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations comprising:
receiving, at an Interactive Television (ITV) application server (AS) via a network from a media processor connected to the network, a transfer request message for transferring a communication session, operating between a media server and the media processor, to operate between the media server and a second device connected to the network, wherein the transfer request message is transmitted by the media processor responsive to receiving a selection from a menu presented by the media processor;
determining, by the ITV AS, satisfaction of required rights to transfer the communication session to the second device;
transmitting, by the ITV AS, an accepting message to the media processor operably connected with the network responsive to the determining the satisfaction of required rights to transfer the communication session to the second device;
receiving, by the ITV AS, session state information as defined by session description protocol (SDP) negotiation and associated with characteristics of media content in the communication session operating between the media server and the media processor;
transmitting, by the ITV AS, a communication request message to the second device, wherein the communication request message includes the session state information, and wherein the transfer request message includes identification information associated with the second device;
transmitting, by the ITV AS and responsive to the determining the satisfaction of required rights to transfer the communication session to the second device, a command to the media server to generate adjusted media content according to operational capability of the second device;
evaluating, by the ITV AS, a processing delay to transfer the communication session; and
evaluating, by the ITV AS, a media content startup delay in the second device, wherein the media server transfers the communication session including the adjusted media content to the second device.

US Pat. No. 10,798,430

RECEPTION DEVICE, RECEPTION METHOD, TRANSMISSION DEVICE, AND TRANSMISSION METHOD

Saturn Licensing LLC, Ne...

1. A reception device comprising:
processing circuitry configured to:
generate first preference demographic and interest (PDI) information, the first PDI information including a first question identifier identifying a first question and a first answer set by a user to the first question;
store, in a memory, the first PDI information;
perform an initial scanning process, including storing tuning information provided by received low layer signaling (LLS) information;
perform tuning control for tuning digital broadcasting using an Internet Protocol (IP) transmission scheme to receive signaling information and retrieve media presentation description (MPD) information from the received signaling information according to the tuning information, the signaling information being transmitted in a layer of the IP transmission scheme higher than another layer of the IP transmission scheme in which the LLS information is transmitted;
parse the retrieved MPD information to obtain second PDI information associated with a media file, the second PDI information including a second question identifier identifying a second question and a second answer set by a provider of the media file to the second question;
determine whether the first PDI information stored in the memory matches the second PDI information obtained by parsing the retrieved MPD information; and
in response to the first PDI information being determined as matching the second PDI information,
identify a user service description (USD) according to the retrieved MPD information,
determine a distribution path as the digital broadcasting or communication via a network according to the USD,
determine address information of the media file, and
extract segment data of the media file from data packets transmitted over the determined distribution path and according to the determined address information.

US Pat. No. 10,798,429

SYSTEM FOR INSERTING SUPPLEMENTAL CONTENT WITHIN A MEDIA STREAM

Verizon Patent and Licens...

1. A device, comprising:
one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive, from a media client, a request for supplemental content for a media stream accessed by the media client,
where the supplemental content is to be inserted, by the media client, within the media stream, and
where the request includes information indicating a type of supplemental content insertion to be performed, to insert the supplemental content within the media stream;
determine, based on the information included in the request, the type of supplemental content insertion to be performed to insert the supplemental content within the media stream;
identify a break of the media stream based on a characteristic of the media stream and the type of supplemental content insertion;
determine location information identifying a location associated with the media client;
identify the supplemental content for the break based on the location associated with the media client,
where the supplemental content is to be accessed by the media client during the break based on the media client being at the location;
generate an instruction file associated with the type of supplemental content insertion, with the supplemental content, and with multiple supplemental content sources from which the media client may access the supplemental content,
where the instruction file is to include supplemental content information for inserting the supplemental content in the break of the media stream, and
where at least one of the multiple supplemental content sources is selected based at least in part on the at least one of the multiple supplemental content sources being located physically closer to the location than one or more other supplemental content sources of the multiple supplemental content sources; and
send the instruction file to the media client to enable the media client to access the supplemental content that is to be inserted within the media stream.
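The source-selection element reduces to choosing the supplemental content source physically closest to the client's location. A sketch of that selection, with locations collapsed to one-dimensional coordinates purely for illustration:

```python
def pick_source(sources, client_location):
    # Select the supplemental content source nearest the media client's
    # location; a real system would use geographic or network distance.
    return min(sources, key=lambda s: abs(s["location"] - client_location))
```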

US Pat. No. 10,798,428

METHOD AND SYSTEM FOR PROVIDING COUPON

SONY CORPORATION, Tokyo ...

1. A method, comprising:
in a system that comprises a terminal device and at least one server:
receiving, by the terminal device, content and a coupon code corresponding to the content;
capturing, by the terminal device, light corresponding to a specific region from the content based on a user input and utilization of a service, wherein
the service includes provision of a first image of the content,
the light includes a spatial part of display light of the first image, and
the specific region of the first image includes a subject;
transmitting, by the terminal device, key information to the at least one server, wherein
the key information includes a spatial modulation component of the display light of the first image and the coupon code, and
the spatial modulation component is associated with the spatial part;
receiving, by the at least one server, the key information from the terminal device;
determining, by the at least one server, existence of privilege information corresponding to the coupon code;
extracting, by the at least one server, the subject from the specific region of the first image based on an image recognition operation and a determination that the privilege information corresponding to the coupon code exists;
determining, by the at least one server, existence of privilege information corresponding to the extracted subject based on the extraction of the subject;
determining, by the at least one server, privilege content based on the privilege information corresponding to the coupon code and the privilege information corresponding to the extracted subject;
issuing, by the at least one server, coupon information based on the determined privilege content;
receiving, by the terminal device, the issued coupon information from the at least one server; and
displaying, by the terminal device, an image of the issued coupon information along with the captured light in real time.

US Pat. No. 10,798,427

SYSTEM AND METHOD FOR PROVIDING FAILOVER OF STORAGE FOR DIGITAL CINEMA BROADCASTING

GDC TECHNOLOGY (SHENZHEN)...

1. A system for providing a failover of storage for distribution of media content in cinema applications, said system comprising:
a scheduled playlist unit configured to acquire and store a plurality of scheduled playlists of media content for a plurality of theaters, wherein said plurality of playlists include identifications of a plurality of cinematic movies, wherein said scheduled playlist unit intelligently acquires said plurality of playlists through connecting to a scheduling system of each of said plurality of theaters;
a central storage configured to:
store a plurality of digital cinema packages, and
for at least one of said plurality of theaters, transmit a copy of the digital cinema packages according to the scheduled playlist for that theater, wherein said playlist
includes an identification of a plurality of cinematic movies;
a local storage configured to receive from the central storage the transmitted digital cinema packages and store the received copy of the digital cinema packages;
a server configured to receive data of the digital cinema packages directly from the central storage, and from the local storage module in the event reading from the central storage is interrupted;
a media playback unit configured to read the data received by the server and display the read data, said playback unit including a display device, wherein the media playback unit displays the read data only upon receiving a key delivery message;
a detection unit configured to detect whether a process of reading the central storage is interrupted; and
a breakpoint control unit configured to record a frame position of the digital cinema packages received directly from the central storage and identifying the corresponding frame position in the local storage for switching reading of media content by the server from the central storage to the local storage at the frame position, wherein the breakpoint control unit records a frame number of the digital cinema packages read from the central storage near the time of interruption and thereby determines a frame position of the interrupted broadcasting.

US Pat. No. 10,798,426

SYSTEMS AND METHODS FOR SHARING VIDEO DATA VIA SOCIAL MEDIA

Bruce Melanson, Valrico,...

1. A system including a plurality of electronic devices for capturing and sharing video images via social media, the system comprising:
(a) a processor in a first electronic device that identifies a video image from a second electronic device being displayed on the second electronic device at an event, wherein the first electronic device includes a computing device and a transmitting device linked to a second electronic device and configured to communicate with a third electronic device;
(b) the first electronic device captures the video image displayed on the second electronic device to form a captured video image while the second electronic device provides the captured video image with a unique identifier;
(c) the third electronic device communicates with the first electronic device by downloading and installing an application, wherein the third electronic device comprises a peripheral device;
(d) creating, with the third electronic device, a user account via the application by providing information comprising at least two of a name, email address, password, date of birth, address, nationality and at least one of favorite sports, favorite sports teams, favorite entertainment events other than sports, desired venues, merchandise types typically purchased, whether the user ever uses coupons and what kind if answered yes, whether the system can forward coupons to the user, whether the system can forward targeted emails to the user, whether the system can forward advertisements to the user, and/or user billing information;
(e) the first electronic device receives and communicates the captured video image with the unique identifier to the third electronic device; and
(f) the third electronic device is configured to share the captured video image of step (e) via social media in real time, at a later date, or a combination thereof.

US Pat. No. 10,798,425

PERSONALIZED KEY OBJECT IDENTIFICATION IN A LIVE VIDEO STREAM

International Business Ma...

1. A method for personalized key object detection in a live video stream, the method comprising:
streaming a live video stream in a window of a computing device;
during the streaming:
collecting biophysical data of an end user viewing the streaming, and
responding to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction;
processing each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, wherein the key words of the corresponding frame are determined by:
parsing the text of the feedback;
computing a frequency of presence of a set of words in the text;
selecting as key words only those in the set having a corresponding frequency that exceeds a threshold value; and
filtering those words in the most frequent set to only those words mapping to one or more profile values of the end user stored in a profile data store;
matching the identified key words to a tag of an object visually presented in the corresponding frame, wherein the tag includes corresponding meta-information describing the object; and
storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
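The key-word determination steps enumerated above (parse the feedback text, compute word frequencies, threshold, filter against the viewer's profile) can be sketched directly; whitespace tokenization and all names are simplifying assumptions:

```python
from collections import Counter

def key_words(feedback_text, profile_values, min_count):
    # Parse the feedback, count word frequencies, keep words whose
    # frequency exceeds the threshold, then filter to words mapping
    # to the end user's stored profile values.
    counts = Counter(feedback_text.lower().split())
    frequent = {w for w, c in counts.items() if c > min_count}
    return frequent & set(profile_values)
```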

US Pat. No. 10,798,424

INHERITANCE IN SAMPLE ARRAY MULTITREE SUBDIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing an array of information samples from a data stream, the array of information samples representing a spatially sampled information signal, the decoder comprising:
an extractor configured to extract, from the data stream, first subdivision information, second subdivision information, and a maximum size;
a divider configured to:
subdivide the array of information samples into prediction blocks associated with prediction coding according to the first subdivision information,
responsive to a determination that at least one of the prediction blocks is greater than the maximum size, subdivide the at least one of the prediction blocks into a root block of the maximum size, wherein the root block is associated with transform coding, subdivide the root block into the residual blocks associated with transform coding according to the second subdivision information,
wherein the prediction blocks correspond to a first hierarchy level of a sequence of hierarchy levels and the residual blocks correspond to a second hierarchy level of the sequence of hierarchy levels;
wherein the extractor is further configured to:
extract, for each of the prediction blocks that is greater than the maximum size, inheritance information from the data stream, the inheritance information indicating whether inheritance is used or not and,
if inheritance is indicated to be used, share a coding parameter among the residual blocks corresponding to the respective prediction block; and
a reconstructor configured to:
determine each of residual signals for each of the residual blocks based on the respective coding parameter in transform coding,
determine prediction signals for the prediction blocks using prediction coding, and
reconstruct the array of information samples based on a combination of the prediction signals and the residual signals.

US Pat. No. 10,798,423

CROSS-PLANE FILTERING FOR CHROMA SIGNAL ENHANCEMENT IN VIDEO CODING

InterDigital Madison Pate...

1. A method of encoding a video signal, the method comprising:
receiving the video signal comprising a first color component and a second color component;
encoding the first color component and the second color component and decoding the encoded first color component and the encoded second color component to generate a reconstructed first color component and a reconstructed second color component;
generating a cross-plane filter for modifying the reconstructed first color component based on the reconstructed second color component, the reconstructed first color component, and the first color component to minimize a distortion between the first color component and a modified reconstructed first color component;
quantizing a filter coefficient indication associated with the cross-plane filter; and
including the quantized filter coefficient indication in a bitstream that is representative of the video signal.

US Pat. No. 10,798,422

METHOD AND SYSTEM OF VIDEO CODING WITH POST-PROCESSING INDICATION

Intel Corporation, Santa...

1. A computer-implemented method of video coding with post-processing indication comprising:
obtaining image data of at least one image;
encoding the image data in conformance with a video coding profile having a profile_tier_level syntax comprising profile syntax elements indicating conformance parameters of a video coding standard and for decoded video;
using one of the profile syntax elements of the profile_tier_level syntax that is a general context interpretation code associated with alternative values and to indicate that the image is to be post-processed after decoding to modify the image data of the image to affect a display of the image, wherein a value of the general context interpretation code is one of three available alternatives of:
a first value to indicate decoding without required post-processing,
a second value to indicate post-processing is necessary after decoding, and
any other value to indicate decoding may be skipped for the image data, and
wherein the general context interpretation code is an existing profile syntax element originally established for reasons other than post-processing and used without modifying the video coding standard by adding codes for the post-processing; and
providing the image data and profile_tier_level syntax settings in a bitstream to be transmitted to a decoder to have the image data decoded, post-processed after decoding depending on a setting of the general context interpretation code, and available to display images.

US Pat. No. 10,798,421

METHOD FOR ENCODING AND DECODING IMAGE INFORMATION

LG Electronics Inc., Seo...

1. A video decoding apparatus, comprising:an entropy decoding module configured to obtain prediction information, residual information and offset information from a received bitstream;
a prediction module configured to generate a prediction sample based on the prediction information;
an inverse transform module configured to derive a residual sample, wherein the residual sample is derived based on the residual information;
an adder module configured to generate a reconstructed picture based on the prediction sample and the residual sample; and
a filter module configured to apply a deblocking filtering process to the reconstructed picture and to apply a sample adaptive offset (SAO) process to a sample in the reconstructed picture based on the offset information after completing the deblocking filtering process,
wherein the offset information includes flag information about whether the SAO process is enabled to the sample, and
wherein the offset information includes band information indicating one of 32 bands as a starting band of a band group consisting of consecutive n bands among the 32 bands, an offset for a band among the n bands is applied to the sample, wherein n is a positive integer.
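The band-offset mechanism in the claim above (32 equal bands, a signaled starting band, and offsets for a group of n consecutive bands) can be sketched as follows. This is an illustrative simplification, assuming 8-bit samples, wrap-around band indexing, and n = len(offsets); it is not the normative SAO process.

```python
def apply_sao_band_offset(sample, start_band, offsets, bit_depth=8):
    """Apply an SAO-style band offset to one reconstructed sample (sketch).

    The sample range is split into 32 equal bands; a group of consecutive
    bands starting at `start_band` carries one offset each.
    """
    band_width = (1 << bit_depth) // 32          # 8 for 8-bit video
    band = sample // band_width                  # band index 0..31
    group_pos = (band - start_band) % 32         # position within the band group
    if group_pos < len(offsets):                 # sample falls in the signaled group
        sample = sample + offsets[group_pos]
    return max(0, min((1 << bit_depth) - 1, sample))  # clip to the valid range
```

A sample in a band outside the signaled group passes through unchanged; only samples whose band falls inside the group receive an offset.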

US Pat. No. 10,798,420

LOSSLESS COMPRESSION TECHNIQUES FOR SINGLE-CHANNEL IMAGES

Apple Inc., Cupertino, C...

1. A method for performing lossless compression of single-channel images, the method comprising:receiving a single-channel image comprised of a collection of pixel intensity values;
for each pixel intensity value in the collection of pixel intensity values, performing the following steps:
calculating a predicted pixel intensity value for the pixel intensity value based on at least one neighboring pixel intensity value,
calculating a prediction error for the predicted pixel intensity value by determining a difference between the predicted pixel intensity value and the pixel intensity value, wherein the prediction error includes a sign bit and magnitude bits,
separating, into a first byte stream, (1) a first group of least significant bits of the magnitude bits, and (2) the sign bit,
separating, into a second byte stream, a second group of most significant bits of the magnitude bits, and
compressing the first and second byte stream using a first and a second compressor, respectively, to produce respective outputs, wherein the first and second compressors separately and concurrently perform a same compression technique; and
combining the outputs to produce a compressed single-channel image.
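The split-stream scheme in the claim above can be sketched end to end. Assumptions here: 8-bit pixels, a left-neighbor predictor, the four least-significant magnitude bits packed with the sign bit into the first stream, the remaining magnitude bits into the second, and zlib standing in for the (unspecified) compressor applied to both streams.

```python
import zlib

def compress_single_channel(pixels, width):
    """Split prediction errors into a (4 LSBs + sign) stream and an MSB
    stream, then compress each with the same technique. Illustrative sketch."""
    low_stream, high_stream = bytearray(), bytearray()
    for i, p in enumerate(pixels):
        pred = pixels[i - 1] if i % width else 0   # left neighbor (0 at row start)
        err = p - pred
        sign = 1 if err < 0 else 0
        mag = abs(err)
        low_stream.append(((mag & 0x0F) << 1) | sign)  # 4 LSBs plus sign bit
        high_stream.append(mag >> 4)                   # remaining magnitude bits
    return zlib.compress(bytes(low_stream)), zlib.compress(bytes(high_stream))

def decompress_single_channel(low_c, high_c, width):
    """Invert the split to recover the pixels exactly (lossless round trip)."""
    low, high = zlib.decompress(low_c), zlib.decompress(high_c)
    pixels = []
    for i, (lo, hi) in enumerate(zip(low, high)):
        sign = lo & 1
        mag = (hi << 4) | (lo >> 1)
        err = -mag if sign else mag
        pred = pixels[i - 1] if i % width else 0
        pixels.append(pred + err)
    return pixels
```

The point of the split is that the low bits of small prediction errors are noisy while the high bits are mostly zero, so the two streams compress better separately than interleaved.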

US Pat. No. 10,798,419

EMBEDDED CODEC CIRCUITRY FOR SUB-BLOCK BASED ENCODING OF QUANTIZED PREDICTION RESIDUAL LEVELS

SONY CORPORATION, Tokyo ...

1. An embedded codec (EEC) circuitry, comprising: encoder circuitry configured to:determine, for a first coding scheme and a second coding scheme, a count of bits required to encode a plurality of quantized prediction residual levels in each sub-block of a plurality of sub-blocks of an image block;
determine a maximum number of quantized prediction residual levels from a total number of the plurality of quantized prediction residual levels in each sub-block of the plurality of sub-blocks of the image block within a threshold range, wherein the threshold range indicates a range of values of the plurality of quantized prediction residual levels; and
allocate one of a first bit value or a second bit value to a first signaling bit for each sub-block of the plurality of sub-blocks based on one of the determined count of bits for the first coding scheme and the second coding scheme, or the determined maximum number of quantized prediction residual levels in each sub-block of the plurality of sub-blocks within the threshold range, wherein
the first bit value indicates the first coding scheme,
the first bit value is allocated for a first set of sub-blocks from the plurality of sub-blocks based on a first determination that the count of bits for the first coding scheme is less than the count of bits for the second coding scheme,
the second bit value indicates the second coding scheme, and
the second bit value is allocated for a second set of sub-blocks of the plurality of sub-blocks based on a second determination that the count of bits for the second coding scheme is less than the count of bits for the first coding scheme; and
generate a bit-stream of the image block by selective application of one of the first coding scheme or the second coding scheme on each sub-block of the plurality of sub-blocks, based on one of the first bit value or the second bit value allocated to the first signaling bit for each sub-block of the plurality of sub-blocks.

US Pat. No. 10,798,418

METHOD AND ENCODER FOR ENCODING A VIDEO STREAM IN A VIDEO CODING FORMAT SUPPORTING AUXILIARY FRAMES

AXIS AB, Lund (SE)

1. A method for encoding a video stream in a video coding format, including an auxiliary frame in the encoded video stream, the auxiliary frame being referenced by another frame in the encoded video stream and comprising image data complementing said another frame, the method comprising the steps of:receiving first image data, the first image data being raw image data as captured by a video capturing device;
using the first image data as image data of a first auxiliary frame, wherein the first auxiliary frame is tagged as a non-display frame that is not displayed to a user;
encoding the first auxiliary frame as an intra frame; and
encoding a first frame as an inter frame referencing the first auxiliary frame, including:
determining a first image transformation to be applied to the first image data; and
calculating motion vectors representing the first image transformation by sampling the first image transformation according to a determined macroblock size for the motion vectors of the first frame, wherein data of the first frame include only the motion vectors and do not include residual values, wherein the motion vectors of the first frame are set to represent a first image transformation to be applied to the first image data.

US Pat. No. 10,798,417

DEBLOCK FILTERING FOR 360-DEGREE VIDEO CODING

QUALCOMM Incorporated, S...

1. A method of decoding 360-degree video, the method comprising:receiving video data indicative of a 360-degree video picture projected onto a 2D picture, wherein the 360-degree video picture includes a plurality of faces, and wherein at least some borders of adjacent faces of the 360-degree video picture projected onto the 2D picture are not adjacent to one another in the 2D picture;
performing a prediction process on blocks of the 2D picture; and
deblock filtering pixels along a border of the 2D picture based on relative positions of the adjacent faces in the 360-degree video picture, wherein deblock filtering pixels along the border of the 2D picture uses one or both of a stronger filter or a wider filter relative to deblock filtering pixels not along the border of the 2D picture.

US Pat. No. 10,798,416

APPARATUS AND METHOD FOR MOTION ESTIMATION OF THREE DIMENSION VIDEO

Samsung Electronics Co., ...

1. An apparatus of estimating a motion of a three dimensional (3D) video, comprising:a receiver configured to receive a color image macroblock and a depth image macroblock; and
a determiner configured to determine a motion vector of the color image macroblock and to determine a motion of the depth image macroblock based on the determined motion vector of the color image macroblock,
wherein the determined motion vector of the color image macroblock is used as a motion vector for the depth image macroblock.

US Pat. No. 10,798,415

INTRA PREDICTION METHOD OF CHROMINANCE BLOCK USING LUMINANCE SAMPLE, AND APPARATUS USING SAME

LG Electronics Inc., Seo...

1. A picture decoding method performed by a decoding apparatus, the method comprising:receiving a bitstream including index information on an intra chroma prediction mode for a current chroma block;
determining an intra luma prediction mode for a current luma block which is related to the current chroma block;
deriving the intra chroma prediction mode based on the intra luma prediction mode and the index information;
deriving a prediction sample for the current chroma block based on the derived intra chroma prediction mode; and
generating a reconstructed picture for the current chroma block based on the prediction sample,
wherein the index information is obtained based on a binary codeword for the index information,
wherein when a LM mode is applied to the current chroma block, a predicted sample among the predicted samples of the current chroma block is derived based on at least one of linear model parameters and a linearly interpolated reconstructed sample of the current luma block, and
wherein the LM mode is an intra prediction method of deriving the predicted samples of the current chroma block based on linearly interpolated reconstructed samples of the current luma block comprising the linearly interpolated reconstructed sample of the current luma block which is derived by linearly interpolating reconstructed samples of the current luma block.
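The LM mode described above predicts each chroma sample from the co-located (linearly interpolated) luma reconstruction via linear model parameters. A minimal fixed-point sketch, where `alpha`, `beta`, and the shift amount are illustrative stand-ins for the derived model parameters:

```python
def lm_chroma_prediction(luma_rec, alpha, beta, shift=6):
    """LM-mode chroma prediction sketch: pred_c = (alpha * rec_l >> shift) + beta,
    applied per sample over the interpolated luma reconstruction."""
    return [((alpha * l) >> shift) + beta for l in luma_rec]
```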

US Pat. No. 10,798,414

METHOD AND DEVICE FOR SELECTIVE MULTI-SAMPLE INTRA-PREDICTION

ELECTRONICS AND TELECOMMU...

1. An image decoding method performed by an image decoding apparatus, comprising:determining whether to apply multi-sample prediction to a current block based on an intra prediction mode and a size of the current block, the current block being a prediction unit to which the intra prediction mode is applied;
determining, in case the multi-sample prediction is applied, a representative prediction value for a sample group, the sample group being a region inside the current block, being smaller than the current block, and being predicted as the same representative prediction value; and
predicting a plurality of samples included in the sample group using the determined representative prediction value,
wherein the representative prediction value is determined as an average value of a plurality of reference samples, and
wherein other samples inside the current block which are not included in the sample group are predicted differently from the plurality of samples included in the sample group.
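The representative prediction value in the claim above is simply the average of a set of reference samples, shared by every sample in the group. A sketch, assuming round-half-up integer averaging:

```python
def representative_prediction(reference_samples):
    """Representative prediction value for a sample group: the integer
    average of the reference samples (rounding is an assumption here)."""
    n = len(reference_samples)
    return (sum(reference_samples) + n // 2) // n
```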

US Pat. No. 10,798,413

METHOD, APPARATUS, AND SYSTEM FOR ENCODING AND DECODING IMAGE

University-Industry Found...

1. An image decoding apparatus, comprising:a decoder configured to decode information specifying a Linear Model (LM) chroma prediction mode of a current block; and
an intra predictor configured to:
determine the LM chroma prediction mode among a plurality of LM chroma prediction modes specified by the decoded information,
wherein:
the plurality of LM chroma prediction modes comprises a first LM chroma prediction mode and a second LM chroma prediction mode, and
both the first LM chroma prediction mode and the second LM chroma prediction mode represent that a chroma prediction pixel in a chroma block of the current block is derived by a luma value;
obtain the luma value using a luma pixel corresponding to the chroma prediction pixel, the luma pixel being in a luma block of the current block; and
generate the chroma prediction pixel using the luma value multiplied by a first weight,
wherein the decoded information is signaled at a block level,
wherein, under the first LM chroma prediction mode, the first weight is obtained by using a filtered luma pixel, the filtered luma pixel being obtained by luma samples located to a left of the luma block,
wherein, under the second LM chroma prediction mode, the first weight is obtained without using the filtered luma pixel,
wherein the luma value is obtained by further using four neighboring luma pixels adjacent to the luma pixel corresponding to the chroma prediction pixel,
wherein the luma value is obtained by a weighted sum of the luma pixel and four neighboring luma pixels, and
wherein a ratio between a first coefficient multiplied with the luma pixel and a second coefficient multiplied with the four neighboring luma pixels is 4:1.
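The 4:1 coefficient ratio in the claim above corresponds to a 5-tap cross filter with weights 4, 1, 1, 1, 1 over a sum of 8. A sketch of that weighted sum (border handling omitted; the rounding offset is an assumption):

```python
def filtered_luma(luma, x, y):
    """Weighted sum of a luma pixel and its four adjacent neighbors with a
    4:1 center-to-neighbor coefficient ratio. `luma` is a 2-D list."""
    c = luma[y][x]
    n = luma[y - 1][x] + luma[y + 1][x] + luma[y][x - 1] + luma[y][x + 1]
    return (4 * c + n + 4) >> 3   # weights sum to 8; +4 rounds to nearest
```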

US Pat. No. 10,798,412

ENCODING AND DECODING ARCHITECTURES FOR FORMAT COMPATIBLE 3D VIDEO DELIVERY

Dolby Laboratories Licens...

1. In a decoder, a method for decoding a frame compatible three-dimensional (3D) video signal, the method comprising:receiving a coded bitstream comprising encoded images or pictures, each image comprising two views, wherein the two views are interleaved and each view is subsampled according to a quincunx sampling format;
using a first decoding function to generate a base layer of reconstructed images based on the coded bitstream;
storing in a base layer decoding reference buffer the base layer reconstructed images;
pre-processing using a first pre-processing function the base layer reconstructed images to generate first pre-processed images;
storing in a first enhancement layer decoding reference buffer the first pre-processed images for a first enhancement layer video decoding; and
using a second decoding function to generate a first enhancement layer of reconstructed images based on the coded bitstream and the first pre-processed images,
wherein the two views are decoded and processed in each of the base layer and the first enhancement layer.
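The quincunx interleaving referenced in the claim places the two subsampled views on complementary checkerboard positions of one frame-compatible image. An illustrative sketch on 2-D lists (the parity convention is an assumption):

```python
def quincunx_interleave(view0, view1):
    """Interleave two equally sized views in a quincunx (checkerboard)
    pattern: view0 on even-parity positions, view1 on odd-parity ones."""
    h, w = len(view0), len(view0[0])
    return [[view0[y][x] if (x + y) % 2 == 0 else view1[y][x]
             for x in range(w)] for y in range(h)]
```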

US Pat. No. 10,798,411

MOVING PICTURE CODING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING APPARATUS

TAGIVAN II LLC, Chevy Ch...

1. A decoding method for decoding an image from a bitstream, the decoding method comprising:deriving one or more candidates for a motion vector predictor used to decode a motion vector of a current block in the image;
selecting the motion vector predictor out of the one or more candidates,
wherein the deriving includes:
generating a first candidate for the motion vector predictor from a first motion vector of each of one or more first adjacent blocks that are adjacent to the current block in a first direction; and
generating a second candidate for the motion vector predictor from a second motion vector of each of one or more second adjacent blocks that are adjacent to the current block in a second direction, the second direction being different from the first direction, and
wherein the generating of the second candidate includes:
determining for each of the one or more first adjacent blocks whether or not the first adjacent block is predicted by a first prediction mode; and
scaling a scalable second motion vector to generate the second candidate, the scalable second motion vector belonging to one of the one or more second adjacent blocks, when any of the one or more first adjacent blocks are not predicted by the first prediction mode.

US Pat. No. 10,798,410

METHOD, SYSTEM AND APPARATUS FOR INTRA-REFRESH IN VIDEO SIGNAL PROCESSING

TEXAS INSTRUMENTS INCORPO...

1. A method comprising:dividing a video frame area into a plurality of row segments with each row segment including an equal number of rows of macro-blocks (MBs);
selecting a set of row segments, wherein the number of row segments in the set of row segments is selected based on a search range height;
limiting a maximum value of a global motion vector to a zero value for a refresh duration;
setting a slice boundary equal to a boundary of the row segment;
disabling a loop filter configured to filter pixels around the slice boundary during the refresh duration;
setting an ROI priority level to the selected row segments;
disabling quantisation modulation during the encoding of the selected set of row segments; and
encoding the selected set of row segments by intra-prediction.

US Pat. No. 10,798,409

IMAGE ENCODING/DECODING APPARATUS AND METHOD

Electronics and Telecommu...

1. An image encoding method comprising:predicting a motion vector of a current block within a current picture using any one or any combination of motion vector information of an adjacent block of the current block, motion vector information of a reference block corresponding to the current block within a reference picture, and motion vector information of an adjacent block of the reference block corresponding to the current block within the reference picture;
scaling the motion vector information of the adjacent block of the current block using a distance from a reference picture referred to by the adjacent block of the current block to the current picture and a distance from a reference picture of the current block to the current picture in response to the reference picture referred to by the adjacent block of the current block being different from the reference picture of the current block; and
determining differential motion vector information of the current block by subtracting the predicted motion vector of the current block from a motion vector of the current block,
wherein a location of the reference block within the reference picture corresponds to a location of the current block within the current picture.
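The scaling step in the claim above stretches a neighbor's motion vector by the ratio of the two picture distances it names. A simplified integer sketch (real codecs use clipped fixed-point arithmetic rather than plain division):

```python
def scale_motion_vector(mv, dist_neighbor, dist_current):
    """Scale a neighboring block's MV by the ratio of the current block's
    reference-picture distance to the neighbor's reference-picture distance."""
    mvx, mvy = mv
    return (mvx * dist_current // dist_neighbor,
            mvy * dist_current // dist_neighbor)
```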

US Pat. No. 10,798,408

LAST FRAME MOTION VECTOR PARTITIONING

GOOGLE LLC, Mountain Vie...

1. A method for encoding or decoding a video signal using a computing device, the video signal including frames defining a video sequence, the method comprising:determining, for a frame before a current frame in the video sequence, a first partitioning for partitioning a first region of the frame, the first partitioning defining at least two prediction sub-regions of the first region, the at least two prediction sub-regions of the first region comprising contiguous, non-overlapping groups of pixels;
determining a motion vector that predicts a prediction sub-region of the first region;
modifying the first partitioning to a second partitioning for partitioning a current region of the current frame by moving at least one border of the first partitioning that is between adjacent prediction sub-regions of the first region by the motion vector such that the first partitioning of the first region is different from the second partitioning of the current region, wherein the current region is collocated with the first region; and
encoding or decoding the current region of the current frame as partitioned by the second partitioning.

US Pat. No. 10,798,407

METHODS AND APPARATUS FOR INTER PREDICTION WITH A REDUCED ABOVE LINE BUFFER IN VIDEO CODING

TENCENT AMERICA LLC, Pal...

1. A method for controlling a motion vector buffer for encoding or decoding of a video sequence, the method comprising:identifying a set of motion vectors associated with a set of blocks of a set of rows of an above coding tree unit (CTU), wherein each motion vector of each block is associated with a P×Q grid, and the set of motion vectors is associated with an N×M grid;
determining a single motion vector, based on the set of motion vectors, to denote motion vector information for each block in each row of the set of rows; and
accessing, in the motion vector buffer, the motion vector based on a candidate block including a position associated with the N×M grid,
wherein each block in each row of the set of rows includes the same single motion vector.

US Pat. No. 10,798,406

SYSTEMS AND METHODS FOR CONTROLLING VIDEO DISPLAY

Hangzhou Hikvision Digita...

1. A method for controlling video display, comprising:determining, by one or more processors, whether a coding frame extracted from a video coding stream is an I Frame;
if the coding frame extracted from the video coding stream is an I Frame, acquiring, by the one or more processors, a timestamp of the I Frame and a timestamp of an adjacent coding frame after the I Frame; and
controlling, by the one or more processors, display of the I Frame based on the timestamp of the I Frame and the timestamp of the adjacent coding frame after the I Frame;
wherein controlling the display of the I Frame based on the timestamp of the I Frame and the timestamp of the adjacent coding frame after the I Frame comprises:
determining whether the timestamp of the I Frame is later than the timestamp of the adjacent coding frame after the I Frame;
if the timestamp of the I Frame is later than the timestamp of the adjacent coding frame after the I Frame, not displaying the I Frame; and
if the timestamp of the I Frame is earlier than the timestamp of the adjacent coding frame after the I Frame, displaying the I Frame.
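The display-control logic above reduces to a per-I-frame timestamp comparison against the adjacent coding frame that follows it. A sketch over a list of `(frame_type, timestamp)` pairs (the stream representation is an assumption):

```python
def frames_to_display(stream):
    """Drop any I frame whose timestamp is later than that of the coding
    frame immediately after it; all other frames are displayed."""
    out = []
    for i, (ftype, ts) in enumerate(stream):
        if ftype == "I" and i + 1 < len(stream):
            ts_next = stream[i + 1][1]      # adjacent coding frame after the I frame
            if ts > ts_next:                # out-of-order I frame: skip display
                continue
        out.append((ftype, ts))
    return out
```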

US Pat. No. 10,798,405

IMAGE ENCODING METHOD AND APPARATUS, AND IMAGE DECODING METHOD AND APPARATUS

Huawei Technologies Co., ...

1. An image decoding method, comprising:determining N decoded units from adjacent decoded units of a to-be-decoded unit according to a first preset rule, wherein a motion prediction mode of the N decoded units is the same as that of the to-be-decoded unit, and N is a positive integer;
generating an nth motion vector group by using a first preset algorithm and based on a motion vector of an nth decoded unit, wherein n comprises any positive integer not greater than N;
decoding a bitstream corresponding to the to-be-decoded unit, to obtain a prediction residual of each sample unit and an index identifier of an optimal motion vector group;
determining the optimal motion vector group in N motion vector groups based on the index identifier of the optimal motion vector group;
determining a prediction sample value of each sample unit in the to-be-decoded unit by using a second preset algorithm and based on the optimal motion vector group; and
determining a reconstruction sample value of each sample unit based on a sum of the prediction sample value of each sample unit and the prediction residual of each sample unit.

US Pat. No. 10,798,404

SYSTEMS AND METHODS OF PERFORMING IMPROVED LOCAL ILLUMINATION COMPENSATION

Qualcomm Incorporated, S...

1. A method of processing video data, the method comprising:obtaining the video data from an encoded bitstream; and
performing bi-predictive motion compensation for a current block of a picture of the video data, wherein performing the bi-predictive motion compensation includes deriving one or more local illumination compensation parameters for the current block using a template of the current block, a first template of a first reference picture, and a second template of a second reference picture, the one or more local illumination compensation parameters including an offset, a first weight corresponding to the first reference picture, and a second weight corresponding to the second reference picture, wherein deriving the one or more illumination compensation parameters for the current block is based on a difference between the first weight and a first default weight and a difference between the second weight and a second default weight, the first default weight and the second default weight being obtained from the encoded bitstream.

US Pat. No. 10,798,403

PREDICTION IMAGE GENERATION DEVICE, VIDEO DECODING DEVICE, AND VIDEO CODING DEVICE

SHARP KABUSHIKI KAISHA, ...

1. A prediction image generation device for generating a prediction image used to code or decode a video, the prediction image generation device comprising:a prediction vector computing circuit; and
a prediction image generation circuit, wherein
the prediction vector computing circuit derives at least one control point motion vector by using a neighboring location and a motion vector in a neighboring block adjacent to a target block,
the prediction vector computing circuit derives a subblock motion vector by using a sum of a difference motion vector and a control point motion vector predictor, and
the prediction image generation circuit generates the prediction image by referring to the subblock motion vector,
wherein the neighboring location is derived by using one of (i) a first neighboring location (xPb−1, yPb−1), (ii) a second neighboring location (xPb+W−1, yPb−1), (iii) a third neighboring location (xPb+W, yPb−1), (iv) a fourth neighboring location (xPb−1, yPb+H−1) and (v) a fifth neighboring location (xPb−1, yPb+H), wherein xPb is an X coordinate of the target block, yPb is a Y coordinate of the target block, H is a height of the target block and W is a width of the target block.

US Pat. No. 10,798,402

SAME FRAME MOTION ESTIMATION AND COMPENSATION

GOOGLE LLC, Mountain Vie...

1. A method for encoding a current block of a current video frame of a video sequence, the method comprising:identifying a first set of motion vector candidates by performing motion estimation based on first data stored in a local static memory of a hardware encoder used for encoding the current video frame, the first data associated with one or more encoded blocks preceding the current block within the current video frame, the first data stored in the memory subsequent to an encoding of the one or more encoded blocks, wherein at least one motion vector candidate of the first set of motion vector candidates is an irregular motion vector indicative of a motion warping associated with the one or more encoded blocks;
identifying a second set of motion vector candidates by performing inter-prediction against at least one encoded block of at least one previously encoded video frame of the video sequence;
selecting at least one motion vector from at least one of the first set of motion vector candidates or the second set of motion vector candidates, the at least one motion vector including the irregular motion vector;
determining a prediction residual block for the current block using a prediction block generated based on the selected at least one motion vector;
transforming the prediction residual block to produce transform coefficients;
quantizing the transform coefficients to produce quantization coefficients;
reconstructing the quantization coefficients to produce a reconstructed current block of the current video frame; and
storing second data in the local static memory for use in encoding a video block following the current block within the current video frame, the second data corresponding to a visual aspect of the video sequence represented by pixel values of the reconstructed current block of the current video frame.

US Pat. No. 10,798,401

METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE

IBEX PT HOLDINGS CO., LTD...

1. A device encoding an image in merge mode, the device comprising:an inter-predictor configured to determine a motion vector and a reference picture index of a current prediction unit;
a generator configured to generate a prediction block, and a residual block using an original block and the prediction block;
a transformer configured to transform the residual block to generate a transformed block;
a quantizer configured to quantize the transformed block using a quantization parameter to generate a quantized block;
an entropy-coder configured to scan the quantized block to entropy-code the quantized block; and
an encoder configured to encode the motion vector and the reference picture index,
wherein the motion vector is encoded using effective spatial and temporal merge candidates,
wherein the quantization parameter is encoded using an average of two effective quantization parameters among a left quantization parameter, an upper quantization parameter and a previous quantization parameter of the current coding unit, and
wherein the spatial merge candidates are a left block, an upper block, a top-right block, a bottom-left block and a top-left block, and the top-left block is available if one or more of the left block, the upper block, the top-right block and the bottom-left block are not effective.

US Pat. No. 10,798,400

INTER PREDICTION MODE-BASED IMAGE PROCESSING METHOD AND APPARATUS THEREFOR

LG ELECTRONICS, INC., Se...

1. A method of processing an image based on inter-prediction by an apparatus, comprising steps of:obtaining a motion vector difference (MVD) for a current block;
obtaining a flag indicating a resolution of the MVD;
rounding, based on the flag, a motion vector predictor (MVP) obtained from a neighbor block of the current block;
up-scaling the MVD based on the flag;
deriving a motion vector (MV) for the current block based on the up-scaled MVD and the rounded MVP; and
generating a prediction block of the current block based on the derived MV.
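The derivation steps above (round the MVP to the signaled MVD resolution, up-scale the MVD, then add) can be sketched in a few lines. Assumptions: values in quarter-pel units, the flag selecting quarter-pel (0) or integer-pel (1) resolution, and truncation as the rounding rule.

```python
def derive_mv_amvr(mvd, mvp, resolution_flag):
    """MV derivation under adaptive MVD resolution: round the predictor to
    the signaled resolution, up-scale the MVD, and sum. One component only."""
    shift = 2 if resolution_flag else 0            # integer-pel = 4x quarter-pel
    mvp_rounded = (mvp >> shift) << shift          # round MVP to the resolution
    mvd_scaled = mvd << shift                      # up-scale the MVD
    return mvd_scaled + mvp_rounded
```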

US Pat. No. 10,798,399

ADAPTIVE VIDEO COMPRESSION

AMAZON TECHNOLOGIES, INC....

1. A computer-implemented method, comprising:determining one or more feature values of video data;
analyzing the one or more feature values using a model trained to classify the video data into one of a plurality of classes associated with a plurality of respective encoding schemes, wherein the plurality of classes correspond to a plurality of clusters of sample video data formed during unsupervised training of the model, the encoding scheme of a class determined based at least in part on feature values of sample video data in the cluster that corresponds to the class;
classifying the video data into a class selected from the plurality of classes using the model, the class associated with an encoding scheme; and
encoding the video data using the encoding scheme.
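The classify-then-encode flow above can be sketched with cluster centroids standing in for the trained model. The centroid list, scheme names, and Euclidean distance here are illustrative assumptions, not the patented model:

```python
def classify_and_encode(features, centroids, schemes):
    """Pick the nearest cluster centroid (a stand-in for the trained
    classifier) and return that cluster's associated encoding scheme."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cls = min(range(len(centroids)), key=lambda i: dist2(features, centroids[i]))
    return cls, schemes[cls]
```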

US Pat. No. 10,798,398

DECODING DEVICE AND DECODING METHOD, ENCODING DEVICE, AND ENCODING METHOD

SONY CORPORATION, Tokyo ...

1. A decoding device, comprising:circuitry configured to
receive encoded data of an image, first color information indicating an index identifying a color display control information, and second color information indicating a supplemental enhancement information,
extract the first color information and the second color information from the encoded data;
decode the encoded data to generate the image, the first color information and the second color information;
determine, from the extracted first color information and second color information, whether the second color information has been extracted from the encoded information and provided;
choose either the first color information or the second color information in response to a determination of whether the second color information has been extracted from the encoded information and provided; and
adjust color display of the generated image based on the chosen color information.

US Pat. No. 10,798,397

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method of video decoding performed in a video decoder, the method comprising:receiving a bit stream including syntax elements, the syntax elements corresponding to residues of a transform block in a coded picture;
determining a first maximum number of context-coded bins of the syntax elements of a first coefficient sub-block in the transform block based on a frequency position of the first coefficient sub-block in the transform block;
coding each of a number of bins of the syntax elements of the first coefficient sub-block according to a context model and the determined first maximum number of the context-coded bins of the syntax elements of the first coefficient sub-block; and
decoding coded bits of the number of the bins of the syntax elements of the first coefficient sub-block based on the context models, wherein
the first maximum number of context-coded bins of the syntax elements of the first coefficient sub-block is different from a second maximum number of context-coded bins of the syntax elements of a second coefficient sub-block in the transform block.

US Pat. No. 10,798,396

SYSTEM AND METHOD FOR TEMPORAL DIFFERENCING WITH VARIABLE COMPLEXITY

Samsung Display Co., Ltd....

1. A video receiver, comprising:a reference frame decoder operating at a first compression ratio and providing first uncompressed reference frame data;
a main decoder operating at a second compression ratio, wherein the main decoder is configured to:
receive compressed video data and the first uncompressed reference frame data; and
generate first uncompressed display data according to the second compression ratio using the compressed video data and the first uncompressed reference frame data;
a reference frame data buffer, configured to store a compressed reference frame data; and
a reference frame encoder operating at the first compression ratio, wherein the reference frame encoder is configured to:
receive the first uncompressed display data from the main decoder;
compress the first uncompressed display data according to the first compression ratio to form the compressed reference frame data; and
write the compressed reference frame data to the reference frame data buffer,
wherein the reference frame decoder is configured to:
retrieve the compressed reference frame data from the reference frame data buffer;
decode the compressed reference frame data according to the first compression ratio to form a second uncompressed reference frame data; and
feed the second uncompressed reference frame data to the main decoder for generating a second uncompressed display data according to the first compression ratio,
wherein the second compression ratio is a lower compression ratio than the first compression ratio.

US Pat. No. 10,798,395

VIDEO ENCODING WITH CONTENT ADAPTIVE RESOURCE ALLOCATION

Oath Inc., New York, NY ...

1. A method for video encoding, comprising:
receiving a video file;
segmenting the video file into at least a first portion and a second portion;
in parallel:
analyzing the first portion to determine a first value associated with the first portion; and
analyzing the second portion to determine a second value associated with the second portion;
in parallel:
determining a first bitrate associated with the first portion; and
determining a second bitrate associated with the second portion;
at least one of:
determining whether the first value associated with the first portion is within a threshold of the second value associated with the second portion; or
determining whether the first bitrate associated with the first portion is within a second threshold of the second bitrate associated with the second portion;
in response to at least one of determining that the first value associated with the first portion is within the threshold of the second value associated with the second portion or determining that the first bitrate associated with the first portion is within the second threshold of the second bitrate associated with the second portion, in parallel:
encoding the first portion at the first bitrate to generate a first encoded portion; and
encoding the second portion at the second bitrate to generate a second encoded portion; and
assembling the first encoded portion and the second encoded portion to generate a second video file.
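
The gating logic of this claim can be sketched compactly: analyze the portions in parallel, derive per-portion bitrates, and encode in parallel only when the values or the bitrates are within their thresholds. The analysis metric, bitrate rule, and encoder below are hypothetical stand-ins, not the patented implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(portion):
    # Stand-in content metric: mean absolute sample value.
    return sum(abs(s) for s in portion) / len(portion)

def pick_bitrate(value):
    # Hypothetical content-adaptive rule: busier content gets more bits.
    return 2_000_000 if value > 50 else 1_000_000

def encode(portion, bitrate):
    # Stand-in encoder: tags the portion with its assigned bitrate.
    return (bitrate, list(portion))

def encode_video(portions, value_threshold=25, bitrate_threshold=500_000):
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(analyze, portions))       # analyze in parallel
        bitrates = list(pool.map(pick_bitrate, values))  # per-portion bitrates
        v_close = abs(values[0] - values[1]) <= value_threshold
        b_close = abs(bitrates[0] - bitrates[1]) <= bitrate_threshold
        if v_close or b_close:                           # claimed gating condition
            # Encode both portions in parallel, then assemble in order.
            return list(pool.map(encode, portions, bitrates))
    return None
```

The sketch returns the assembled list of encoded portions when the gate passes, mirroring the claim's "in response to at least one of" condition.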

US Pat. No. 10,798,394

LOW COMPLEXITY AFFINE MERGE MODE FOR VERSATILE VIDEO CODING

Avago Technologies Intern...

1. A method for reduced memory utilization for motion data derivation in encoded video, comprising:
deriving, by a video decoder of a device from an input video bitstream, one or more control point motion vectors of a first prediction unit of a first coding tree unit proportional to an offset between a sample position of the first prediction unit and a sample position of a second one or more prediction units from a neighboring second coding tree unit located at a top boundary of the first coding tree unit stored in a motion data line buffer of the device during decoding of the first coding tree unit; and
decoding, by the video decoder, one or more sub-blocks of the first prediction unit based on the determined one or more control point motion vectors.

US Pat. No. 10,798,393

TWO PASS CHUNK PARALLEL TRANSCODING PROCESS

HULU, LLC, Santa Monica,...

1. A method comprising:
receiving, by a computing device, a first chunk in a plurality of chunks from a video, the plurality of chunks sent to a plurality of transcoding units for transcoding in parallel;
transcoding, by the computing device, the first chunk at a first transcoding unit using a first transcoding process to generate a first transcoded sub-bitstream;
receiving, by the computing device, first statistical information about a video content characteristic from one or more second chunks being transcoded by other transcoding units;
comparing, by the computing device, the first statistical information from the one or more second chunks to second statistical information about the video content characteristic from the first chunk;
determining, by the computing device, whether to perform a second transcoding process based on a result of the comparing;
when the second transcoding process is to be performed, performing, by the computing device, the second transcoding process to re-transcode the first chunk to generate a second transcoded bitstream and outputting the second transcoded bitstream for inclusion into an assembled bitstream without the first transcoded sub-bitstream; and
when the second transcoding process is not to be performed, outputting, by the computing device, the first transcoded sub-bitstream for inclusion into the assembled bitstream.
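
The decision step of this claim (compare a chunk's statistic to that of its neighbors and re-transcode on divergence) can be sketched as follows; the scalar statistic and relative tolerance are illustrative assumptions, not HULU's actual criteria:

```python
def needs_second_pass(own_stat, neighbor_stats, tolerance=0.2):
    """Decide whether to re-transcode this chunk (the claimed comparison).

    own_stat and neighbor_stats are a hypothetical scalar content metric,
    e.g. average motion complexity reported by other transcoding units.
    """
    if not neighbor_stats:
        return False
    avg = sum(neighbor_stats) / len(neighbor_stats)
    # Re-transcode when this chunk's statistic deviates from its
    # neighbors' average by more than the relative tolerance.
    return abs(own_stat - avg) > tolerance * avg

def transcode_chunk(chunk, neighbor_stats):
    first = ("pass1", chunk)            # stand-in first transcoding process
    own_stat = sum(chunk) / len(chunk)  # stand-in content statistic
    if needs_second_pass(own_stat, neighbor_stats):
        return ("pass2", chunk)         # second process replaces the first
    return first
```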

US Pat. No. 10,798,392

METHODS OF PALETTE BASED PREDICTION FOR NON-444 COLOR FORMAT IN VIDEO AND IMAGE CODING

HFI INNOVATION INC., Zhu...

1. A method of video decoding using a palette coding mode for color video in a non-444 color format, wherein the color video comprises one first-color component and one or more second-color components, and the method comprises:
receiving a video bitstream comprising color video in the non-444 color format, the video bitstream including coded data associated with a current block in a current picture, wherein the current block comprises one first-color block corresponding to said one first-color component and one or more second-color blocks corresponding to said one or more second-color components;
deriving a color palette from the video bitstream, wherein each entry of the color palette comprises one first-color palette sample of the first-color component and one second-color palette sample of each second-color component; and
recovering the current block by converting an index map of the current block into first-color samples of the first-color block and second-color samples of said one or more second-color blocks according to the color palette, wherein the second-color samples of each second-color block in the recovered current block have reduced spatial resolution compared to the first-color samples of the first-color block in the recovered current block.

US Pat. No. 10,798,391

FILTERING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING APPARATUS, AND MOVING PICTURE CODING AND DECODING APPARATUS

TAGIVAN II LLC, Chevy Ch...

1. An encoding method for encoding an image partitioned into a plurality of blocks including a first block and a second block, the encoding method comprising:
determining (i) whether both of the first block and the second block are non-IPCM blocks; (ii) whether both of the first block and the second block are IPCM blocks; or (iii) whether one of the first block and the second block is an IPCM block and the other of the first block and the second block is a non-IPCM block;
when it is determined in said determining that the first block is a non-IPCM block and the second block is an IPCM block, generating a reconstructed image,
wherein the generating comprises:
performing prediction on the first block to generate a prediction block;
transforming and quantizing a residual image block that represents a difference between the first block and the prediction block to generate a residual block;
inverse transforming and inverse quantizing the residual block to generate a reconstructed residual image block;
adding the reconstructed residual image block with the prediction block to generate a reconstructed first block, wherein prediction and transformation are performed on the first block, and prediction and transformation are not performed on the second block; and
filtering the reconstructed first block and the second block included in the image to generate filtered data, using values of pixels respectively included in the reconstructed first block and the second block,
wherein the reconstructed image includes (i) a part of the filtered data generated in the filtering as the pixels in the reconstructed first block and (ii) unfiltered pixels in the second block, instead of a part of the filtered data generated in the filtering, as the pixels in the second block,
wherein in the filtering, a fixed value is not used as a quantization parameter of the second block, and
wherein the filtering is performed using the quantization parameter of the second block.

US Pat. No. 10,798,390

METHOD AND APPARATUS FOR SCAN ORDER SELECTION

Huawei Technologies Co., ...

11. An encoder for encoding coefficients of blocks, the encoder comprising:
a non-transitory computer-readable medium carrying a computer program comprising a program code; and
one or more processors configured to execute the program code to:
select, for each block, a scan order for the block from a set of scan orders, wherein each block corresponds to a respective coefficient matrix,
scan, for each block, the coefficient matrix corresponding to the block according to the scan order selected for the block to obtain a coefficient vector corresponding to the block,
hide, for at least one block, scan order information identifying the scan order selected for the at least one block at least partially in the coefficient vector corresponding to the at least one block,
code, for each block, the coefficient vector corresponding to the block into a bitstream representing a sequence of video frames, wherein each video frame is associated with one or more blocks, and wherein each coefficient vector of the one or more coefficient vectors corresponds to a block associated with a video frame from the sequence of video frames.

US Pat. No. 10,798,389

METHOD AND APPARATUS FOR CONTENT-AWARE POINT CLOUD COMPRESSION USING HEVC TILES

Tencent America LLC, Pal...

1. A method performed by a video encoder comprising:
receiving a data cloud including a plurality of data points representing a three-dimensional (3D) space;
identifying each data point including a region-of-interest (ROI) associated with the data cloud;
dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI;
performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and
performing a patch packing process on the ROI cloud, the patch packing process including:
(i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map,
(ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and
(iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

US Pat. No. 10,798,388

VIDEO CODING DEVICE, METHOD, AND APPARATUS AND INTER-FRAME MODE SELECTION METHOD AND APPARATUS THEREFOR

TENCENT TECHNOLOGY (SHENZ...

1. An inter-frame mode selection method for video coding, the inter-frame mode selection method comprising:
determining an optimum coding mode and a coding overhead of a coding unit having a depth;
dividing the coding unit into coding subunits having the depth that is incremented by 1;
determining a sum of coding overheads of the coding subunits into which the coding unit is divided;
based on the determined coding overhead of the coding unit being greater than the determined sum of the coding overheads of the multiple coding subunits, determining that the optimum coding mode of the coding unit is optimum after the coding unit is divided into the coding subunits;
determining whether the determining of the optimum coding mode and the coding overhead of the coding unit is to be skipped;
based on the determining of the optimum coding mode and the coding overhead of the coding unit being determined to be skipped:
skipping the determining of the optimum coding mode and the coding overhead of the coding unit; and
selecting a smallest coding overhead from first coding overheads of the coding unit in previous coding modes before a current coding mode;
determining whether target parameters when the coding unit is coded using one of the previous coding modes corresponding to the selected smallest coding overhead satisfy a preset condition representing that it is predetermined that a coding mode of the coding unit is a skip mode; and
based on the target parameters being determined to satisfy the preset condition, determining that the optimum coding mode of the coding unit is the one of the previous coding modes corresponding to the selected smallest coding overhead.

US Pat. No. 10,798,387

SOURCE-CONSISTENT TECHNIQUES FOR PREDICTING ABSOLUTE PERCEPTUAL VIDEO QUALITY

NETFLIX, INC., Los Gatos...

1. A computer-implemented method, comprising:
selecting a first model based on a first spatial resolution of first video content, wherein the first model is included in a plurality of models and associates a set of objective values for a set of objective quality metrics with an absolute quality score, and each model included in the plurality of models is trained using source video content having a different spatial resolution;
computing a first set of values for the set of objective quality metrics based on first encoded video content derived from the first video content;
computing a first absolute quality score for the first encoded video content based on the first model and the first set of values; and
comparing the first encoded video content to another encoded video content based on the first absolute quality score or transmitting at least a portion of the first encoded video content based on the first absolute quality score.
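
The model-selection and scoring steps of this claim can be illustrated with a toy per-resolution model table; the metric names, weights, and linear scoring form below are assumptions for illustration only:

```python
# Hypothetical per-resolution quality models: each maps objective metric
# values to an absolute quality score via illustrative linear weights.
MODELS = {
    (1920, 1080): {"vif": 60.0, "motion": -5.0, "bias": 20.0},
    (1280, 720):  {"vif": 55.0, "motion": -4.0, "bias": 25.0},
}

def select_model(width, height):
    # Claimed step: pick the model trained at the source's spatial resolution.
    return MODELS[(width, height)]

def absolute_quality(model, metrics):
    # Claimed step: map the set of objective metric values to one score.
    return model["bias"] + sum(model[k] * v for k, v in metrics.items())
```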

US Pat. No. 10,798,386

VIDEO COMPRESSION WITH GENERATIVE MODELS

12. A method comprising:
receiving, by a processor, at least a first portion of a first encoding block associated with a sequence of frames of a video, wherein each frame of the sequence of frames is associated with a feature space, the at least the first portion of the first encoding block including:
a first frame of the sequence of frames or a latent space representation of the first frame; and
a first difference vector comprising a difference between a latent space representation of a second frame of the sequence of frames and the latent space representation of the first frame, wherein the second frame comprises a next frame following the first frame in the sequence of frames, wherein the first frame and the second frame are correlated as the first encoding block when an error measure between the first frame and the second frame is below a threshold, wherein the latent space representation of the first frame and the latent space representation of second frame are associated with a latent space comprising a number of dimensions of the latent space that is lower than the feature space;
determining, by the processor, the latent space representation of the second frame from the first difference vector and the latent space representation of the first frame;
decoding, by the processor, the latent space representation of the second frame into a decoded version of the second frame;
applying, by the processor, the decoded version of the second frame and one of the first frame or a reconstructed version of the first frame to a recurrent neural network to generate a reconstructed version of the second frame; and
storing, by the processor, the reconstructed version of the second frame.
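
Two steps of this claim reduce to simple vector arithmetic: grouping frames into an encoding block when an error measure is below a threshold, and recovering the second frame's latent from the difference vector. A minimal sketch, using mean squared error as a stand-in error measure (the claim does not specify one):

```python
def same_encoding_block(frame_a, frame_b, threshold):
    # Claimed grouping rule: correlate two frames into one encoding block
    # when an error measure between them is below a threshold; mean
    # squared error is an illustrative stand-in measure here.
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    return mse < threshold

def latent_of_second_frame(z_first, difference_vector):
    # Claimed decoding step: the second frame's latent representation is
    # the first frame's latent plus the transmitted difference vector.
    return [z + d for z, d in zip(z_first, difference_vector)]
```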

US Pat. No. 10,798,385

INTER-PREDICTION METHOD AND APPARATUS IN IMAGE CODING SYSTEM

LG ELECTRONICS INC., Seo...

1. A video decoding method performed by a decoding device, the method comprising:
deriving a reference picture list 0 (L0) and a reference picture list 1 (L1);
deriving two motion vectors (MVs) for a current block, wherein the two MVs include an MVL0 for the L0 and an MVL1 for the L1;
obtaining a bi-prediction optical-flow (BIO) usage flag from a picture parameter set (PPS) or a sequence parameter set (SPS) signaled from a bitstream;
determining whether to apply BIO prediction that derives a sample-unit MV in the current block based on the MVL0 and the MVL1;
deriving the sample-unit MV of the current block based on the MVL0 and the MVL1 based on a determination to apply the BIO prediction; and
deriving a prediction sample based on the sample-unit MV,
wherein the determining of whether to apply the BIO prediction comprises:
deriving a first time distance between the current picture and a first reference picture associated with the MVL0 among the reference pictures included in the L0;
deriving a second time distance between the current picture and a second reference picture associated with the MVL1 among the reference pictures included in the L1; and
determining, based on the BIO usage flag, whether to apply the BIO prediction based on the first time distance and the second time distance,
wherein the first time distance is a difference between a picture order count (POC) value of the current picture and a POC value of the first reference picture, and wherein the second time distance is a difference between a POC value of the second reference picture and the POC value of the current picture.
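
The POC-based distance computation spelled out in the final wherein clauses is directly mechanical; the gating rule in `apply_bio` below (require both references on opposite sides of the current picture) is an illustrative assumption, since the claim only says the decision is "based on" the two distances:

```python
def bio_time_distances(poc_current, poc_ref0, poc_ref1):
    """POC-based temporal distances used in the claimed BIO decision.

    Per the claim, the first distance is the current picture's POC minus
    the L0 reference's POC, and the second is the L1 reference's POC
    minus the current picture's POC.
    """
    first = poc_current - poc_ref0
    second = poc_ref1 - poc_current
    return first, second

def apply_bio(bio_usage_flag, poc_current, poc_ref0, poc_ref1):
    # Illustrative gating rule (an assumption): enable BIO only when the
    # signaled flag is set and the two references lie on opposite sides
    # of the current picture, i.e. both distances are positive.
    first, second = bio_time_distances(poc_current, poc_ref0, poc_ref1)
    return bool(bio_usage_flag) and first > 0 and second > 0
```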

US Pat. No. 10,798,384

REDUCING CONTEXT CODED AND BYPASS CODED BINS TO IMPROVE CONTEXT ADAPTIVE BINARY ARITHMETIC CODING (CABAC) THROUGHPUT

Texas Instruments Incorpo...

1. A method for encoding a video sequence in a video encoder to generate a compressed video bit stream, the method comprising:
determining a value of a syntax element for a remaining actual value of a transform coefficient;
binarizing the value using a variable length code (VLC) to generate a sequence of bins, wherein a maximum codeword length of the VLC is 32 bits or less; and
bypass coding the sequence of bins into the compressed video bit stream.
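
An order-0 Exp-Golomb code is one concrete example of a variable length code with a bounded codeword length whose bins can be bypass coded (the claim does not name a specific VLC, so this is illustrative only):

```python
def exp_golomb_bins(value):
    """Order-0 Exp-Golomb binarization: an example variable length code.

    A prefix of N zeros followed by a 1 signals that N suffix bits
    follow; values up to 2**15 - 1 produce codewords of at most 31 bits,
    within the 32-bit bound mentioned in the claim.
    """
    code = value + 1
    n = code.bit_length() - 1                       # number of suffix bits
    prefix = [0] * n + [1]
    suffix = [(code >> i) & 1 for i in range(n - 1, -1, -1)]
    return prefix + suffix

def bypass_encode(bins):
    # Bypass coding uses no context model; in this sketch it degenerates
    # to emitting the bins directly into the bit stream.
    return "".join(str(b) for b in bins)
```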

US Pat. No. 10,798,383

METHOD FOR DECODING A DIGITAL IMAGE, CODING METHOD, DEVICES, TERMINAL AND ASSOCIATED COMPUTER PROGRAMS

13. A device comprising:
a processor;
a non-transitory computer-readable medium comprising instructions stored thereon, which when executed by the processor configure the device to perform acts comprising:
coding a coded data stream representative of at least one image, said image being divided into blocks, wherein coding comprises:
the following acts implemented for a current block:
encoding the current block into a plurality of encoded data;
extracting the plurality of encoded data from a first sequence comprising a predetermined non-zero number (N) of bits;
inserting, into a first stream portion, bits of coded data not belonging to the extracted sequence;
performing a sequence test including at least 2N iterations of the following acts:
obtaining a sequence of N bits, which is distinct from any sequences already tested;
decoding and reconstructing a version of the current block from the bits of the sequence obtained and from coded data read in the first stream portion;
evaluating, from the version of the reconstructed block, a likelihood measurement associated with the sequence obtained;
determining a piece of information characteristic of the first sequence depending on likelihood measurements associated with the sequences tested;
coding said information;
inserting the coded information into a second stream portion of the encoded data stream; and
performing at least one of transmitting the encoded data stream over a communication network or storing the encoded data stream on a non-transitory computer-readable medium.

US Pat. No. 10,798,382

SUB-BLOCK TRANSFORM

TENCENT AMERICA LLC, Pal...

1. A method of controlling intra and/or inter prediction for decoding of a video sequence, the method being performed by at least one processor, and the method comprising:
determining whether a width or a height of a coding unit is a power of two;
based on the width or the height of the coding unit being determined to not be a power of two:
splitting the coding unit into sub-blocks, each of the sub-blocks having a width or a height that is a power of two and maximized, so that a number of the sub-blocks is minimized;
applying a transform on one or more of the sub-blocks into which the coding unit is split;
generating reconstructed samples of a remaining one of the sub-blocks into which the coding unit is split, without applying the transform to the remaining one of the sub-blocks into which the coding unit is split, by interpolating reconstructed samples of the one or more sub-blocks to which the transform is applied;
applying the intra and/or inter prediction on the sub-blocks into which the coding unit is split;
based on each of the width and the height of the coding unit being determined to be a power of two:
splitting the coding unit into partitions;
determining whether a width or a height of any one of the partitions into which the coding unit is split, is a power of two;
based on the width or the height of the any one of the partitions into which the coding unit is split, being determined to not be a power of two, applying a single transform to a whole of the coding unit without the coding unit being split into the partitions, so that the single transform covers the partitions and is used in a power of two basis; and
based on the width or the height of each of the partitions into which the coding unit is split, being determined to be a power of two, applying a plurality of transforms respectively to the partitions into which the coding unit is split.
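
Along one dimension, the claimed split into maximized power-of-two sub-blocks (so that their number is minimized) is the greedy binary decomposition of the length; this sketch handles a single dimension, whereas the claim splits 2D coding units:

```python
def split_non_power_of_two(length):
    """Split a non-power-of-two dimension into maximal power-of-two parts.

    Greedily taking the largest power of two that still fits yields the
    binary decomposition of the length, which uses the fewest possible
    power-of-two parts, matching the claimed splitting rule.
    """
    parts = []
    remaining = length
    while remaining > 0:
        p = 1 << (remaining.bit_length() - 1)  # largest power of two <= remaining
        parts.append(p)
        remaining -= p
    return parts
```

For example, a width of 12 splits into sub-blocks of width 8 and 4, never 4 + 4 + 4, since the latter uses more sub-blocks.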

US Pat. No. 10,798,381

IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS

SUN PATENT TRUST, New Yo...

1. An apparatus for decoding a coded signal generated by prediction coding of pictures, the apparatus comprising:
one or more processors; and
a storage coupled to the one or more processors;
wherein the one or more processors are configured to:
decode first information from the coded signal, the first information indicating whether or not a motion vector predictor can be selected from among one or more motion vector predictor candidates;
decode second information when the first information indicates that a motion vector predictor can be selected, the second information indicating whether or not a motion vector predictor is selected, in skip mode, from among the one or more motion vector predictor candidates for decoding a current block to be decoded;
decode, from the coded signal, index information indicating a motion vector predictor to be selected from among the one or more motion vector predictor candidates, when both (a) the first information indicates that a motion vector predictor can be selected and (b) the second information indicates that a motion vector predictor is selected, in the skip mode, for the decoding of the current block; and
decode the current block using the motion vector predictor selected based on the index information,
wherein the coded signal does not include the index information when one of (a) the first information does not indicate that the motion vector predictor can be selected and (b) the second information does not indicate that the motion vector predictor is selected.

US Pat. No. 10,798,380

ADAPTIVE USE OF SEARCH MODES BASED ON NEIGHBORING BLOCKS

Amazon Technologies, Inc....

1. A computing device, comprising:
an integrated circuit device operable to perform video coding on video data;
a memory operable to store encoded frames of the video data; and
a bus coupling the integrated circuit device to the memory, the bus having a bandwidth for transferring data from the memory to the integrated circuit device;
wherein the integrated circuit device comprises:
a local memory operable to store reference data;
a bandwidth determination circuit operable to monitor transactions sent to the memory, wherein the bandwidth determination circuit uses information associated with the transactions to determine an amount of the bandwidth that is available at a point in time;
an inter-prediction mode selection circuit to select an inter-prediction mode from a plurality of inter-prediction modes that use different reference window sizes, the plurality of inter-prediction modes including a first inter-prediction mode having a higher assigned priority than a second inter-prediction mode, wherein the inter-prediction mode is selected based on a size based on the reference window size associated with the inter-prediction mode, the size being within an amount of data that can be read from the memory using the amount of the bandwidth available at the point in time, and wherein the inter-prediction mode selection circuit is operable to:
determine, at a particular point in time, a current amount of the bandwidth that is available on the bus, wherein the inter-prediction mode selection circuit obtains the current amount of the bandwidth from the bandwidth determination circuit;
determine a particular amount of the bandwidth allocated for coding the video data;
determine that the current amount of the bandwidth is insufficient for obtaining reference data for the first inter-prediction mode;
determine, for a block of video data, encoding parameters of a neighboring block, the encoding parameters including a prediction mode used for predicting the neighboring block, a location in a reference frame of a reference window used to predict the neighboring block, and a size of the reference window;
determine to use the second inter-prediction mode of the plurality of inter-prediction modes based on the prediction mode used for predicting the neighboring block; and
obtain a particular reference window for the second inter-prediction mode based on the reference frame used to predict the neighboring block; and
a motion estimation circuit operable to perform prediction for the block of video data using the second inter-prediction mode and the particular reference window.

US Pat. No. 10,798,379

INTRA/INTER MODE DECISION FOR PREDICTIVE FRAME ENCODING

TEXAS INSTRUMENTS INCORPO...

1. A method for encoding video data having a plurality of frames, the method comprising:
for each frame of the plurality of frames:
dividing the respective frame into a plurality of coding units (CU) and, for each CU of the plurality of CUs:
determining an inter mode similarity measurement for the CU;
comparing the inter mode similarity measurement for the CU to a threshold determined based on another inter mode similarity measurement corresponding to inter mode predictive encoding of a first frame that precedes the respective frame using a second frame that precedes the first frame;
if the inter mode similarity measurement for the CU is less than the threshold, applying inter mode predictive coding to encode the CU; and
if the inter mode similarity measurement for the CU is not less than the threshold, then:
calculating an intra mode similarity measurement for the CU;
comparing the inter mode similarity measurement for the CU to the intra mode similarity measurement for the CU;
if the inter mode similarity measurement for the CU is less than the intra mode similarity measurement for the CU, applying inter mode predictive coding to encode the CU; and
if the inter mode similarity measurement for the CU is not less than the intra mode similarity measurement for the CU, applying intra mode predictive coding to encode the CU.
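
The per-CU decision cascade in this claim reduces to two comparisons; the sketch assumes (as the claim's structure suggests) that a lower similarity measurement means a better match, such as a SAD-like cost:

```python
def choose_mode(inter_sim, intra_sim, threshold):
    """Per-CU mode decision following the claimed comparison order.

    threshold is derived from inter mode encoding of a preceding frame;
    intra_sim would only be computed when the first test fails.
    """
    if inter_sim < threshold:
        return "inter"      # early out: inter already good enough
    if inter_sim < intra_sim:
        return "inter"      # inter still beats intra
    return "intra"
```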

US Pat. No. 10,798,378

WEIGHTED ANGULAR PREDICTION FOR INTRA CODING

ARRIS Enterprises LLC, S...

1. A method for coding video data using one or more processors, the method comprising:
receiving a bitstream indicating how a coding tree unit was partitioned into coding units according to a partitioning structure that allows nodes to be split according to a partitioning technique;
selecting an intra direction mode for a coding unit;
selecting one or more of the plurality of reference lines to generate at least one predictor for the intra direction mode;
generating the at least one predictor from one or more of a plurality of reference samples within each selected reference line by combining predicted pixel values based on a projected position on a main reference line in combination with predicted pixel values based on a projected position on a side reference line,
wherein the predicted pixel values are weighted according to a weight parameter, wherein the weight parameter is determined based on a shift conversion factor, wherein the weight parameter is determined from a weighting table S[n], where

US Pat. No. 10,798,376

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method for video decoding in a decoder, comprising:
decoding prediction information of a current block in a current coding tree unit (CTU) within a picture from a coded video bitstream, the prediction information being indicative of an intra block copy mode;
determining a reference area in the picture for the current block based on signals in the coded video bitstream that indicate constraints of the reference area, the constraints specifying one of (i) addresses of CTUs in a reference picture memory that are excluded from the reference area, and (ii) addresses of CTUs in the reference picture memory that are included in the reference area; and
reconstructing at least a sample of the current block based on reference samples of the reference area in the picture.

US Pat. No. 10,798,375

ENCODING METHOD AND DEVICE THEREFOR, AND DECODING METHOD AND DEVICE THEREFOR

SAMSUNG ELECTRONICS CO., ...

1. A video decoding device comprising:
a block splitter configured to split a current block into at least two lower blocks when split information indicating whether the current block is to be split indicates that the current block is to be split into the at least two lower blocks;
an encoding order determiner configured to obtain, when the current block is split into the at least two lower blocks according to the split information, encoding order information indicating a decoding order of the at least two lower blocks, including horizontally neighboring lower blocks, of the current block, from a bitstream, and determine, whether the decoding order of the at least two lower blocks is a first order from a left lower block to a right lower block, among the at least two lower blocks, or a second order from the right lower block to the left lower block, according to the encoding order information;
a prediction method determiner configured to determine a prediction method for the left lower block among the at least two lower blocks; and
a decoder configured to:
when an affine merge mode is applied to the left lower block as the prediction method and the decoding order is the second order from the right lower block to the left lower block, determine an affine merge candidate list to include a neighboring block including a right sample of the left lower block and a neighboring block including a lower right sample of the left lower block,
determine a motion vector of the left lower block by using a motion vector of a block from the affine merge candidate list, and
determine prediction samples of the left lower block by using the motion vector of the left lower block.

US Pat. No. 10,798,374

SET-TOP BOX WITH SELF-MONITORING AND SYSTEM AND METHOD FOR USE OF SAME

Enseo, Inc., Plano, TX (...

1. A set-top box comprising:
a housing securing a television input, a television output, a processor, memory, and storage therein;
a busing architecture communicatively interconnecting the television input, the television output, the processor, the memory, and the storage;
the television input configured to receive a source signal from a remote external source over a network, the source signal including a plurality of channels;
the television output configured to forward a fully tuned signal to a television; and
the memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
detect when the television is not being utilized,
responsive to the television not being utilized, automatically scan the plurality of channels as received from the remote external source over the network at the set-top box by the set-top box without user input when the television is not being utilized,
responsive to the television not being utilized, generate at the set-top box TV screen image data for each of the plurality of channels as received from the remote external source over the network at the set-top box without user input when the television is not being utilized, the TV screen image data corresponding to a TV screen image capture, and
responsive to the television not being utilized, generate composite TV screen image data without user input when the television is not being utilized, the composite TV screen image data being resolved to a single TV screen image made from an assemblage of the TV screen image captures corresponding to the plurality of channels.
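The claimed "assemblage" of per-channel screen captures into a single composite image can be sketched as a simple grid tiling. A toy stand-in using nested lists as grayscale images; the grid layout and equal tile sizes are assumptions.

```python
def composite_screen(captures, cols):
    """Tile per-channel screen captures into one composite image.

    `captures` is a list of equally sized 2-D grayscale images (lists
    of pixel rows); the result is a single image with the captures
    laid out row-major in a grid of `cols` columns.
    """
    h = len(captures[0])  # rows per capture
    rows_of_tiles = [captures[i:i + cols] for i in range(0, len(captures), cols)]
    composite = []
    for tile_row in rows_of_tiles:
        for y in range(h):
            # concatenate row y of every tile in this grid row
            composite.append([px for tile in tile_row for px in tile[y]])
    return composite
```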

US Pat. No. 10,798,373

DISPLAY CORRECTION APPARATUS, PROGRAM, AND DISPLAY CORRECTION SYSTEM

SHARP KABUSHIKI KAISHA, ...

1. A display correction apparatus for performing display correction of a display device having a display screen on which an image is displayable, comprising:ambient light characteristic calculation means for calculating ambient light characteristics from a first captured image of the display screen obtained when the display device is in an ambient light measurement mode;
under-ambient-light display characteristic calculation means for calculating display characteristics of the display device under ambient light from a second captured image of the display screen obtained when the display device is in a normal mode;
display characteristic calculation means for calculating display characteristics of the display device by removing the ambient light characteristics from the display characteristics of the display device under ambient light; and
display correction means for performing display correction based on the display characteristics of the display device, wherein
the ambient light measurement mode includes at least one of turning the display device off, turning a backlight of the display device off, or presenting a first display image, and
the first display image is a black image.
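The claimed subtraction of ambient-light characteristics from the under-ambient-light display characteristics can be sketched per pixel. Linear luminance values and clamping at zero are assumptions of this illustration.

```python
def display_characteristics(under_ambient, ambient):
    """Remove the per-pixel ambient-light contribution (measured in the
    ambient light measurement mode, e.g. against a black image) from the
    display response measured in the normal mode."""
    return [max(u - a, 0.0) for u, a in zip(under_ambient, ambient)]
```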

US Pat. No. 10,798,372

GATE LINE SCANNING METHODOLOGY FOR THREE-DIMENSIONAL DISPLAY DRIVING METHOD AND DEVICE, AND DISPLAY DEVICE

BOE TECHNOLOGY GROUP CO.,...

1. A three-dimensional (3D) display driving method for driving a display panel to display an image in a 3D manner, wherein the display panel comprises a data line and at least four gate lines, the at least four gate lines comprise a first gate line, a second gate line, a third gate line and a fourth gate line, and the first gate line, the second gate line, the third gate line and the fourth gate line are arranged in rows in that order and adjacent to each other, wherein:each display period comprises a left-eye image frame and a right-eye image frame; and
each left-eye image frame comprises at least four display durations for a left-eye image, the at least four display durations for the left-eye image comprise a first duration, a third duration, a fifth duration, and a seventh duration, each right-eye image frame comprises at least four display durations for a right-eye image, and the at least four display durations for the right-eye image comprise a second duration, a fourth duration, a sixth duration, and an eighth duration; and
wherein the 3D display driving method comprises, during each display period:
within the first duration, scanning the first gate line, and within a part of the first duration, scanning the second gate line;
within the fourth duration, scanning the second gate line;
within the fifth duration, scanning the third gate line, and within a part of the fifth duration, scanning the fourth gate line; and
within the eighth duration, scanning the fourth gate line,
wherein after each display period, a data voltage applied to the first gate line is A, a data voltage applied to the second gate line is equal to a result of a formula avg(A×F+B), a data voltage applied to the third gate line is C, and a data voltage applied to the fourth gate line is equal to a result of a formula avg(C×F+D), wherein A represents a data voltage applied to the data line within the first duration and the second duration, B represents a data voltage applied to the data line within the third duration and the fourth duration, C represents a data voltage applied to the data line within the fifth duration and the sixth duration, D represents a data voltage applied to the data line within the seventh duration and the eighth duration, and F represents a brightness factor that is greater than or equal to 0 and smaller than or equal to 1.
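The voltage formulas can be sketched directly. The claim's avg(A×F+B) is read here as the arithmetic mean of A×F and B; that reading is one plausible interpretation of the notation, not something the claim spells out.

```python
def gate_voltages(A, B, C, D, F):
    """Data voltages on the four gate lines after one display period,
    reading avg(A*F + B) as the mean of A*F and B (an assumption).
    F is the brightness factor, constrained to [0, 1] by the claim."""
    assert 0.0 <= F <= 1.0
    return (A, (A * F + B) / 2.0, C, (C * F + D) / 2.0)
```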

US Pat. No. 10,798,371

MULTIVIEW DISPLAY WITH HEAD TRACKING

LEIA INC., Menlo Park, C...

1. A head-tracking multiview display comprising:a multibeam backlight comprising a plurality of multibeam elements spaced apart from one another across the multibeam backlight, each multibeam element of the multibeam element plurality being configured to separately provide a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image; and
a light valve array configured to modulate light beams of the light beam plurality to provide a plurality of views of a scene in the different view directions as the multiview image, the view plurality comprising a set of primary views and a secondary view representing a perspective view of the scene that is angularly adjacent to the primary view set,
wherein the head-tracking multiview display is configured to selectively provide either the primary view set or an augmented set of views comprising the secondary view and a subset of the views of the primary view set according to a tracked position of a user, and wherein a size of each multibeam element is between fifty percent and two hundred percent of a size of a light valve of the light valve array.

US Pat. No. 10,798,370

APPARATUS, SYSTEM, AND METHOD FOR INTERPUPILLARY-DISTANCE-ADJUSTABLE HEAD-MOUNTED DISPLAYS

Facebook Technologies, LL...

1. An apparatus comprising:a flexible planar support frame;
a left eye cup coupled to a left side of the flexible planar support frame;
a right eye cup coupled to a right side of the flexible planar support frame;
a left-eye display screen area mounted to the flexible planar support frame such that the left-eye display screen area projects toward a left eye aperture defined by the left eye cup; and
a right-eye display screen area mounted to the flexible planar support frame such that the right-eye display screen area projects toward a right eye aperture defined by the right eye cup; and
wherein the flexible planar support frame is bendable along a vertical axis between the left-eye display screen area and the right-eye display screen area such that the left-eye display screen area and the right-eye display screen area deflect as the flexible planar support frame bends, altering an angle between a direction of a projection of the left-eye display screen area and a projection of the right-eye display screen area to cause a distance between the projection of the left-eye display screen area and the projection of the right-eye display screen area onto a viewing plane to vary as the flexible planar support frame bends, thereby adjusting for varying interpupillary distances.

US Pat. No. 10,798,369

THREE-DIMENSIONAL DISPLAY DEVICE

BOE TECHNOLOGY GROUP CO.,...

1. A three-dimensional (3D) display device, comprising:a pixel array, comprising a plurality of pixel columns, wherein each of the plurality of pixel columns is inclined by an angle with respect to a direction in which an edge of the pixel array extends, and inclined angles of the plurality of pixel columns are substantially the same; each of the pixel columns is formed by arranging sub-pixels of three colors repeatedly in a fixed order, wherein first dot pitches between any two adjacent sub-pixels in a same pixel column are substantially the same, and the sub-pixels in one of any two adjacent pixel columns in the plurality of pixel columns are staggered in turn with respect to respective sub-pixels in another pixel column of the two adjacent pixel columns such that second dot pitches between any sub-pixel of one color in one pixel column of the two adjacent pixel columns and two adjacent sub-pixels of two colors different than the one color in another pixel column of the two adjacent pixel columns are substantially the same, and a color of each of the sub-pixels is different from colors of all sub-pixels adjacent thereto; and
a grating, comprising transmission regions and non-transmission regions which are arranged alternately and are disposed in accordance with the inclined angles; wherein the transmission regions correspond to respective spacing regions between any two adjacent pixel columns and a center line of each transmission region is overlapped with a center line of a spacing region corresponding to the transmission region; the non-transmission regions correspond to the plurality of pixel columns,
wherein the first dot pitch and the second dot pitch are substantially the same.

US Pat. No. 10,798,368

EXPOSURE COORDINATION FOR MULTIPLE CAMERAS

Lyft, Inc., San Francisc...

1. A method comprising, by a computing system:determining a first target region within a first field of view of a first camera and a second target region within a second field of view of a second camera, wherein the first target region and the second target region are within an overlapping area of the first field of view of the first camera and the second field of view of the second camera;
determining lighting conditions of the first target region and the second target region within the overlapping area;
determining an exposure time for at least the first camera and the second camera based at least in part on the determined lighting conditions; and
instructing the first camera and the second camera to take pictures using the determined exposure time.
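The shared-exposure step can be sketched as follows. The averaging of the two regions' luminances and the target/base-exposure scaling rule are illustrative assumptions; the claim only recites determining one exposure time from the measured lighting conditions.

```python
def shared_exposure(lum_region1, lum_region2, target=0.5, base_exposure_ms=10.0):
    """Pick one exposure time for both cameras from the lighting of the
    two target regions inside the overlapping field of view: average the
    measured luminances, then scale a base exposure toward a target level."""
    measured = (lum_region1 + lum_region2) / 2.0
    return base_exposure_ms * target / measured
```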

US Pat. No. 10,798,367

IMAGING DEVICE, IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An imaging device, comprising:a first imaging unit configured to generate a first polarized image including a first plurality of pixels, each pixel of the first plurality of pixels having a polarization characteristic for one of a plurality of polarization directions;
a second imaging unit configured to generate a second polarized image including a second plurality of pixels, each pixel of the second plurality of pixels having a polarization characteristic for one of the plurality of polarization directions; and
a third imaging unit configured to generate a non-polarized image including a third plurality of pixels, each pixel of the third plurality of pixels having no polarization characteristic,
wherein the third imaging unit is interposed between the first imaging unit and the second imaging unit,
wherein an integrated depth map is generated using the first polarized image and the second polarized image,
wherein a number of the third plurality of pixels is larger than a number of the first plurality of pixels and a number of the second plurality of pixels,
wherein the integrated depth map is converted to generate a depth value for each pixel of the third plurality of pixels, and
wherein the first imaging unit, the second imaging unit, and the third imaging unit are each implemented via at least one processor.
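The claimed conversion of the integrated depth map to one depth value per pixel of the larger non-polarized image can be sketched as a resize. Nearest-neighbour sampling is an assumption; the patent does not specify the interpolation.

```python
def upsample_depth(depth, out_h, out_w):
    """Nearest-neighbour resize of a coarse depth map (from the two
    polarized images) to the resolution of the third, non-polarized
    sensor, yielding a depth value for each of its pixels."""
    in_h, in_w = len(depth), len(depth[0])
    return [[depth[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]
```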

US Pat. No. 10,798,366

MOTION DETECTION DEVICE AND MOTION DETECTION METHOD

SERCOMM CORPORATION, Tai...

1. A motion detection device, comprising:a first image recording unit disposed in a camera, configured to record a first video;
a first storage unit, configured to store the first video;
a motion detection unit, configured to detect a moving object in the first video;
a depth calculation unit, configured to calculate a depth of the moving object;
a determination unit, configured to determine the moving object as a concerned event and send an alert to caution that the concerned event has occurred when the depth of the moving object, which is a distance between the moving object and the camera, is smaller than a predetermined threshold distance;
a second image recording unit disposed in the camera, configured to record a second video at the same time as the first image recording unit records the first video, the first video and the second video being captured from different angles, a second image resolution of the second video being lower than a first image resolution of the first video; and
a third image recording unit disposed in the camera, configured to record a third video at the same time as the second image recording unit records the second video, the second video and the third video being captured from different angles;
wherein the depth calculation unit calculates the depth of the moving object according to the second video and the third video.
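The depth-and-threshold logic can be sketched with standard stereo triangulation over the second and third videos. The triangulation model (depth = focal length × baseline / disparity) and all parameter names are assumptions of this illustration, not recited in the claim.

```python
def stereo_depth_alert(disparity_px, focal_px, baseline_m, threshold_m):
    """Depth of a moving object from the two lower-resolution stereo
    videos, then the claimed comparison against the predetermined
    threshold distance. Returns (depth_in_meters, alert_flag)."""
    depth_m = focal_px * baseline_m / disparity_px
    return depth_m, depth_m < threshold_m
```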

US Pat. No. 10,798,365

AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. An image capture device comprising:a first image sensor having a first field of view, the first image sensor configured to capture a first image;
a second image sensor having a second field of view, wherein a portion of the first field of view and a portion of the second field of view comprise an overlapping field of view, the second image sensor configured to capture a second image; and
a processor configured to:
correlate data of the first image and the second image representative of the overlapping field of view by shifting at least one image by a first number of pixels along a rolling shutter direction such that a difference measure between pixels of the data of the first image and the second image representative of the overlapping field of view is substantially minimized;
identify a pixel shift between the first image and the second image based on the first number of pixels; and
calibrate the first image sensor or the second image sensor iteratively based on the identified pixel shift until a next identified pixel shift is below a predefined pixel shift threshold.
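The correlation step — shifting one image along the rolling-shutter direction until the difference measure is minimized — can be sketched in one dimension. The 1-D column-profile simplification and the mean-absolute-difference measure are assumptions; the claim speaks of whole images.

```python
def best_row_shift(prof_a, prof_b, max_shift):
    """Find the row shift (along the rolling-shutter direction) that
    minimizes the mean absolute difference between two overlapping
    intensity profiles; the winning shift is the identified pixel shift."""
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(prof_a[i], prof_b[i + s])
                 for i in range(len(prof_a)) if 0 <= i + s < len(prof_b)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```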

US Pat. No. 10,798,364

3D IMAGE RECONSTRUCTION BASED ON LENSLESS COMPRESSIVE IMAGE ACQUISITION

Nokia of America Corporat...

1. An apparatus, comprising:a processor and a memory communicatively connected to the processor, the processor configured to:
receive camera geometry information associated with a lensless compressive camera comprising a programmable aperture and a sensor plane including a pair of sensors, wherein the camera geometry information comprises an inter-sensor distance between the sensors and a distance between the programmable aperture and the sensor plane;
obtain reconstructed image data indicative of a pair of reconstructed images reconstructed based on respective sets of compressive measurements associated with the respective pair of sensors;
determine, based on the reconstructed image data, disparity information associated with a common image portion that is common to the pair of reconstructed images; and
determine, based on the camera geometry information and the disparity information associated with the common image portion, depth information associated with the common image portion.

US Pat. No. 10,798,363

VIDEO FILE PROCESSING METHOD AND APPARATUS

Tencent Technology (Shenz...

1. A method for video file processing, comprising:detecting, by processing circuitry of an apparatus, a device type of a first virtual reality (VR) display device, the first VR display device being configured to display a target video to be recorded;
detecting, by the processing circuitry, an application configured to display the target video on the first VR display device based on the device type of the first VR display device;
obtaining, by the processing circuitry, left-eye and right-eye video images of the target video from the detected application on the first VR display device;
generating, by the processing circuitry, a recorded video file of the target video based on the left-eye and right-eye video images obtained from the detected application;
identifying, by the processing circuitry, a video playback terminal device compatible with the first VR display device; and
playing the recorded video file of the target video through the video playback terminal device.

US Pat. No. 10,798,362

PARALLAX CORRECTION DEVICE AND METHOD IN BLENDED OPTICAL SYSTEM FOR USE OVER A RANGE OF TEMPERATURES

Qioptiq Limited, St. Asa...

1. A blended optical device, comprising:a first objective, comprising a first axis and a first image position adjustment means configured to adjust the position of a first image collected by the first objective;
an electronic control circuitry configured to control the first adjustment means to adjust a position of the first image;
a second objective, comprising a second axis and a variable focus mechanism;
a thermal sensor in communication with the electronic control circuitry; and
a blender configured to form a blended image from the first image and a second image collected by the second objective,
wherein the first objective and/or the second objective are not passively athermalised, and the electronic control circuitry is configured to receive thermal data and range data from the second objective regarding a range to a target of the second objective and adjust the position of the first image so that the blended image is corrected for thermal errors and parallax errors.

US Pat. No. 10,798,360

INFORMATION PROCESSING SYSTEM, METHOD FOR CONTROLLING SAME, AND PROGRAM

Sony Interactive Entertai...

4. A method for controlling an information processing system obtaining and processing three-dimensional information regarding an object provided in a real space,the information processing system including first and second detection devices provided in mutually different places in the real space and each configured to image the real space and obtain the three-dimensional information including data of a distance to a predetermined point on the object provided in the real space and direction information as to a direction of a predetermined part of the object, the method comprising:
causing the information processing system to generate information related to relative positions and installation directions of the first and second detection devices on a basis of: the data of the distance to the predetermined point on the object and the direction information regarding the object, the data of the distance and the direction information being obtained by the first detection device; and the data of the distance to the predetermined point on the object and the direction information regarding the object, the data of the distance and the direction information being obtained by the second detection device; and
causing the information processing system to obtain the three-dimensional information regarding the object detected by each of the first and second detection devices whose information related to the relative positions and the installation directions has been generated and provide the three-dimensional information regarding the object for a predetermined process.

US Pat. No. 10,798,359

GENERATING HI-RES DEWARPED BOOK IMAGES

FUJI XEROX CO., LTD., Mi...

1. A mobile device, comprising:a camera configured to capture a sequence of image frames of at least one document page; and
a processor configured to compute a disparity map using the captured sequence of image frames; compute a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map; and render a dewarped image from the computed model,
wherein the disparity map comprises features within the at least one document page.

US Pat. No. 10,798,358

IMAGE PROCESSING METHOD AND DEVICE FOR ACCOMPLISHING WHITE BALANCE REGULATION, COMPUTER-READABLE STORAGE MEDIUM AND COMPUTER DEVICE

GUANGDONG OPPO MOBILE TEL...

7. The image processing device of claim 6, wherein the second processing module comprises a plurality of program units, the plurality of program units comprising:a third judgment unit, configured to determine whether the number of the light sources of the kth frame of image is more than or equal to 1;
a first processing unit, configured to, responsive to determining that the number of the light sources of the kth frame of image is less than 1, adopt a gray world method to perform white balance processing on the kth frame of image and the (k+1)th frame of image;
a second processing unit, configured to, responsive to determining that the number of the light sources of the kth frame of image is equal to 1, determine the color temperature and number of the light sources of the (k+1)th frame of image according to the color temperature and number of the light sources of the kth frame of image, and perform white balance processing on the (k+1)th frame of image according to the color temperature of the (k+1)th frame of image; and
a third processing unit, configured to, responsive to determining that the number of the light sources of the kth frame of image is more than 1, determine a primary light source according to at least one of scenario parameters, areas or luminance parameters of the light sources of the kth frame of image, determine the color temperature of the (k+1)th frame of image according to a color temperature of the primary light source, perform white balance processing on the (k+1)th frame of image according to the color temperature of the (k+1)th frame of image, and determine the number of the light sources of the (k+1)th frame of image to be the number of the light sources of the kth frame of image, wherein the scenario parameters comprise image shooting time and signal strength of a Global Positioning System (GPS), and the luminance parameters comprise luminance of multiple light sources.

US Pat. No. 10,798,357

METHOD AND DEVICE FOR CORRECTING COLOR TEMPERATURE OF FLASH LAMP

Beijing Xiaomi Mobile Sof...

1. A method for correcting a color temperature of a flash lamp, comprising:obtaining a first color temperature value of a gray point region in a first image, wherein the first image is obtained by capturing a scene when the flash lamp is turned off, and a difference between values of two color channels of a gray point in the gray point region is less than a preset value;
searching for an image region corresponding to the gray point region in a second image, wherein the second image is obtained by capturing the scene when the flash lamp is turned on, and pixel points in the image region exist and the pixel points correspond to gray points in the gray point region;
obtaining a second color temperature value of the image region; and
correcting the color temperature for light compensation of the flash lamp according to a difference obtained by subtracting the first color temperature value from the second color temperature value.
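The gray-point test and the correction difference can be sketched directly. The channel pairing in the gray-point test and the preset value are assumptions; the subtraction order follows the claim.

```python
def is_gray_point(r, g, b, preset=8):
    """Gray-point test from the claim: the difference between values of
    two color channels stays below a preset value (which channel pairs
    are compared is an assumption of this sketch)."""
    return abs(r - g) < preset and abs(g - b) < preset and abs(r - b) < preset

def flash_ct_correction(ct_no_flash_k, ct_with_flash_k):
    """Correction term = second (flash-on) color temperature minus first
    (flash-off) color temperature, per the final claim step. How this
    difference drives the flash compensation is device-specific."""
    return ct_with_flash_k - ct_no_flash_k
```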

US Pat. No. 10,798,356

WHITE BALANCE PROCESSING METHOD, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM

GUANGDONG OPPO MOBILE TEL...

1. A white balance processing method, used to perform a white balance process on a plurality of successive frame images each subjected to a color temperature variation under a scenario with multiple light sources, comprising:processing each frame image of the plurality of successive frame images to determine color temperature corresponding to a main light source, comprising:
dividing the frame image into a plurality of regions; according to a histogram of each region, determining whether the region is a target region having the light source; when the region is the target region having the light source, determining whether at least two target regions adjacent to each other exist; when the at least two target regions adjacent to each other exist, stitching at least two light sources included in the at least two target regions into one of the plurality of light sources; and when the at least two target regions adjacent to each other do not exist, determining the light source included in the target region as one of the plurality of light sources;
comparing predetermined parameters of the plurality of light sources with each other to determine the main light source; and
determining a color of the main light source to determine the color temperature of the main light source;
determining whether a variation of primary color temperature is greater than or equal to a predetermined threshold, the variation of primary color temperature referring to a difference between the color temperature of the main light source in a second frame image and the color temperature of the main light source in a first frame image, wherein the first frame image and the second frame image are two frame images adjacent to each other from the plurality of successive frame images; and
performing the white balance process on the second frame image according to the color temperature of the main light source in the first frame image when the variation of primary color temperature is less than the predetermined threshold.
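The threshold decision in the final two steps can be sketched as follows. The below-threshold branch follows the claim; the over-threshold fallback to the current frame's temperature is an assumption, since the claim does not recite that branch.

```python
def wb_color_temp(prev_ct, curr_ct, threshold):
    """Choose the color temperature used to white-balance the second
    frame: reuse the first frame's main-light-source temperature when
    the variation is below the predetermined threshold (the claimed
    behaviour), otherwise fall back to the current value (assumed)."""
    if abs(curr_ct - prev_ct) < threshold:
        return prev_ct
    return curr_ct
```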

US Pat. No. 10,798,355

COLOR NIGHT VISION CAMERAS, SYSTEMS, AND METHODS THEREOF

APPLIED MINDS, LLC, Burb...

1. An imaging device, comprising:a filter element, wherein the filter element receives and filters an incoming image signal;
one or more image intensifiers for receiving the filtered incoming image signal, wherein each of the one or more image intensifiers has an analog output associated therewith;
one or more accelerometers, wherein the one or more accelerometers track motion of the imaging device;
a mechanism for converting the analog outputs of the one or more image intensifiers into corresponding digital image signals for processing; and
an image processor programmed to utilize accelerometer output to shift the digital image signals and provide the shifted digital image signals for viewing or further processing.

US Pat. No. 10,798,354

PROJECTION DISPLAY APPARATUS AND METHOD FOR CONTROLLING THE SAME

DELTA ELECTRONICS, INC., ...

1. A projection display apparatus, comprising:a control circuit configured to control the projection display apparatus to operate in a first operating mode or a second operating mode according to an input video feature of an input video signal or an operating condition;
a processing circuit configured to receive the input video signal, wherein in the first operating mode, the processing circuit converts the input video signal into a first output video signal, and wherein in the second operating mode, the processing circuit converts the input video signal into a second output video signal;
an imaging device configured to transform the first output video signal into a plurality of output video sub-images each having a first display resolution, wherein the output video sub-images respectively correspond to a plurality of sets of pixel regions that are alternatingly arranged in the first output video signal;
a shifting device cooperating with the imaging device to project the output video sub-images onto a screen at respective times in the first operating mode or to directly project a second output video image corresponding to the second output video signal onto the screen in the second operating mode, wherein the output video sub-images projected onto the screen are misaligned with respect to each other to form a first output video image with an output resolution greater than the first display resolution; and
a communication device configured to couple to stereoscopic glasses, wherein the operating condition comprises the communication device transmitting a stereoscopic left/right eye synchronization signal to the stereoscopic glasses, wherein if the stereoscopic left/right eye synchronization signal is active, the control circuit switches the projection display apparatus to the second operating mode.

US Pat. No. 10,798,353

CALIBRATION APPARATUS, CALIBRATION METHOD, OPTICAL APPARATUS, IMAGE CAPTURING APPARATUS, AND PROJECTION APPARATUS

OLYMPUS CORPORATION, Tok...

1. A calibration apparatus for an optical apparatus provided with a two-dimensional image-conversion device having a plurality of pixels and an optical system that forms an imaging relationship between the image-conversion device and a three-dimensional world coordinate space, wherein the calibration apparatus is configured to:acquire calibration data indicating a correspondence between two-dimensional pixel coordinates of the image-conversion device and three-dimensional world coordinates of the world coordinate space; and
calculate parameters of a camera model in which two coordinate values of the two-dimensional pixel coordinates are expressed as a function of three coordinate values of the three-dimensional world coordinates by fitting the camera model to the acquired calibration data,
wherein, when a projection relationship between an angle of view θ and an image height y of the optical system is roughly expressed by a projection expression y=fP(θ) by using a projection focal distance f,
the calibration apparatus is configured to
convert three-dimensional world coordinates (x, y, z) of the acquired calibration data to two-dimensional coordinates (P(θ)cos φ, P(θ)sin φ) by using three-dimensional spherical coordinates (r, θ, φ) that are equal to the world coordinates, and,
subsequently, by fitting a camera model in which the two coordinate values of the two-dimensional pixel coordinates are expressed as a function of two coordinate values of the two-dimensional coordinates (P(θ)cos φ, P(θ)sin φ), calculate the parameters of the camera model.
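The world-to-intermediate-coordinate conversion can be sketched as follows, taking θ as the angle from the optical (z) axis and φ as the azimuth. Defaulting P to tan (perspective projection) is purely illustrative; the patent leaves P as the lens's projection function.

```python
import math

def to_model_plane(x, y, z, P=math.tan):
    """Convert world coordinates (x, y, z) to the intermediate 2-D
    coordinates (P(theta)cos(phi), P(theta)sin(phi)) used before
    fitting the camera model, via spherical angles of the point."""
    r_xy = math.hypot(x, y)          # distance from the optical axis
    theta = math.atan2(r_xy, z)      # angle of view from the z axis
    phi = math.atan2(y, x)           # azimuth around the axis
    return P(theta) * math.cos(phi), P(theta) * math.sin(phi)
```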

US Pat. No. 10,798,352

RENDERING WIDE COLOR GAMUT TWO-DIMENSIONAL (2D) IMAGES ON THREE-DIMENSIONAL (3D) CAPABLE DISPLAYS

Dolby Laboratories Licens...

1. A method for displaying image data, said method comprising:identifying an established color gamut defined by a predefined number of primary colors;
identifying a number of primary display colors associated with a light source, said number of primary display colors associated with said light source exceeding said number of primary colors defining said established color gamut;
defining a first virtual color gamut based on a combination of said primary display colors associated with said light source to approximate said established color gamut;
receiving video data including intensity values corresponding to a number of colors fewer than said number of primary display colors associated with said light source;
generating intensity values associated with said first virtual color gamut based on said video data;
defining a second virtual color gamut based on residual power of said light source considering said first virtual color gamut;
generating intensity values associated with said second virtual color gamut based on said video data;
generating intensity values associated with said primary display colors of said light source based on said intensity values associated with said first virtual color gamut and said intensity values associated with said second virtual color gamut; and
providing said intensity values associated with said primary display colors to a spatial light modulator.

US Pat. No. 10,798,351

APPARATUS, METHOD AND SYSTEM FOR LOCATION BASED TOUCH

BOE TECHNOLOGY GROUP CO.,...

1. A computing device comprising:a receiving circuitry configured to receive first coordinate information and regional information from a projection display apparatus, wherein a projector outputs a second light on a certain area of a projection screen of the projection display apparatus when an image displayed by the computing device is projected to the projection display apparatus, wherein the regional information indicates coordinate information of a projection image on the projection screen of the projection display apparatus and the first coordinate information indicates a location of a first optical sensor on the projection screen, and the first optical sensor is one or more of a plurality of optical sensors;
a building circuitry configured to build a coordinate mapping relationship between the projection image and a screen of the computing device based on the regional information;
a determining circuitry configured to determine second coordinate information corresponding to the first coordinate information on the screen of the computing device according to the first coordinate information and the coordinate mapping relationship between a projection image and the screen of the computing device; and
a performing circuitry configured to perform a touch operation corresponding to the second coordinate information within the screen of the computing device according to the second coordinate information.
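Assuming both the projected image and the device screen are axis-aligned rectangles, the claimed coordinate mapping reduces to a per-axis linear transform. The function names and the simple scale-and-offset model below are illustrative assumptions, not the patent's mechanism:

```python
# Minimal sketch: map a sensed coordinate on the projection screen to the
# corresponding coordinate on the computing device's own screen.

def build_mapping(projection_rect, screen_size):
    """Return a function mapping projection-screen coords to device coords.

    projection_rect: (x, y, width, height) of the projected image on the
                     projection screen, taken from the regional information.
    screen_size:     (width, height) of the computing device's screen.
    """
    px, py, pw, ph = projection_rect
    sw, sh = screen_size

    def to_screen(first_coord):
        fx, fy = first_coord  # location of the optical sensor that saw light
        return ((fx - px) * sw / pw, (fy - py) * sh / ph)

    return to_screen

to_screen = build_mapping((100, 50, 800, 600), (1920, 1080))
second = to_screen((500, 350))  # sensor at the projection image's centre
# → (960.0, 540.0): the centre of the device screen, where the touch fires
```

The returned second coordinate is what the performing circuitry would feed into the touch operation.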

US Pat. No. 10,798,350

SPLIT APERTURE PROJECTOR/CAMERA

TEXAS INSTRUMENTS INCORPO...

1. An optical apparatus comprising: a first lens;
a first optical element having a first aperture;
a second lens;
a second optical element having a second aperture; and
a third lens having first and second portions equidistant from and on opposite sides of a central axis of the third lens, the first portion configured to receive projected light from the first lens through the first aperture and to project the projected light onto a target, and the second portion configured to receive reflected light reflected from the target and to provide the reflected light to the second lens through the second aperture.

US Pat. No. 10,798,349

PROJECTING APPARATUS

Coretronic Corporation, ...

1. A projecting apparatus, comprising: an illuminating system, a first sensing module, a light valve, a controller and a projection lens; wherein the illuminating system comprises a light source module and a filter element;
the light source module is used to emit a light beam; and
the filter element comprises a first filter region and a second filter region, and the first filter region and the second filter region are sequentially cut into a transmission path of the light beam;
the first sensing module is disposed beside the filter element, and the first sensing module comprises a first light emitter and a first light sensor;
the first light emitter is used to emit a first sensing light, wherein outside the transmission path of the light beam, the first filter region and the second filter region are sequentially cut into a transmission path of the first sensing light; and
the first light sensor is used to detect the first sensing light, wherein when the first filter region is cut into the transmission path of the first sensing light, the first light sensor detects the first sensing light and generates a first sensing signal, and when the second filter region is cut into the transmission path of the first sensing light, the first light sensor detects the first sensing light and generates a second sensing signal, and the first sensing signal is different from the second sensing signal;
the light valve is disposed on the transmission path of the light beam from the filter element to modulate the light beam into an image beam;
the controller is respectively electrically connected to the first sensing module, the filter element and the light valve, and the controller is used to synchronize the filter element with the light valve by using the first sensing signal and the second sensing signal; and
the projection lens is disposed on a transmission path of the image beam.

US Pat. No. 10,798,348

LIGHT SOURCE APPARATUS, PROJECTION TYPE DISPLAY DEVICE AND LIGHT SOURCE CONTROL METHOD

SONY CORPORATION, Tokyo ...

19. A method of a controller for controlling a solid-state light source that is configured to output light, the method comprising: controlling, by circuitry of the controller, a power source to supply current to the solid-state light source;
determining, by the circuitry, based on a total amount of elapsed time for which the solid-state light source has ever been used for lighting during the entire lifetime of the solid-state light source, whether to change the supply of the current to the solid-state light source to compensate for a lowered photo transformation efficiency, for maintaining a brightness level of the light output by the solid-state light source at a brightness level or within a brightness level range; and
controlling, by the circuitry, the power source to change the supply of the current to the solid-state light source to output the light at the brightness level or within the brightness level range when the supply of the current is determined to be changed.
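The lifetime-based compensation described in the claim can be sketched as a step table keyed on cumulative lighting hours. The thresholds, multipliers, and current cap below are invented placeholders for a real characterised efficiency curve:

```python
# Hedged sketch: raise drive current as cumulative lighting hours grow, to hold
# the output brightness within its target range, clamped to a safety limit.

# (minimum cumulative hours, current multiplier) — illustrative values only
COMPENSATION_STEPS = [(0, 1.00), (5_000, 1.05), (10_000, 1.12), (20_000, 1.20)]
MAX_CURRENT_MA = 1_500

def drive_current(base_ma, total_hours):
    """Pick the drive current for the given lifetime usage, clamped to a cap."""
    factor = 1.0
    for threshold, mult in COMPENSATION_STEPS:
        if total_hours >= threshold:
            factor = mult  # steps are sorted, so the last match wins
    return min(base_ma * factor, MAX_CURRENT_MA)
```

The clamp reflects the claim's implicit constraint that the changed supply must still be one the power source can deliver.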

US Pat. No. 10,798,347

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An information processing device, comprising: processing circuitry configured to:
receive input data of a projection target image;
set a superimposition region in the projection target image;
perform a base representation process according to a first portion of the input data to calculate first display luminance values in a region of the projection target image other than the superimposition region;
perform a highlighting process according to a second portion of the input data to calculate second display luminance values in the superimposition region;
generate, according to the first display luminance values and the second display luminance values, a base image that contains a region image corresponding to the set superimposition region in the projection target image; and
generate, according to the second display luminance values, a superimposition image, the superimposition image and the region image, when projected, being superimposed one over the other.

US Pat. No. 10,798,346

METHODS AND APPARATUS FOR SHALLOW GRADIENT ARTIFACT REDUCTION IN IMAGE DISPLAY SYSTEMS

TEXAS INSTRUMENTS INCORPO...

1. A method, comprising: with a controller circuit:
receiving first and second blocks of pixel data;
computing first mean values of a first color component in the pixel data;
computing second mean values of a second color component in the pixel data;
computing first variances of the first color component in the pixel data;
computing second variances of the second color component in the pixel data; and
using the first and second mean values and the first and second variances as address values to retrieve indicators of a likelihood of a visible artifact, and adjusting a time between successive loadings of the first and second blocks into memory cells of a spatial light modulator, responsive to the retrieved indicators, in which the likelihood of the visible artifact is reduced by reducing the time between the successive loadings, and the likelihood of a visible artifact is increased by increasing the time between the successive loadings.
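A minimal sketch of the claimed lookup, assuming 8-bit statistics, a 2-bit quantisation per statistic, and invented table contents and timings; the faster-load-on-high-likelihood rule is the part the claim actually specifies:

```python
# Illustrative sketch: quantise two means and two variances of a pixel block
# into one address, fetch an artifact-likelihood indicator, and shorten the
# interval between successive block loads when the likelihood is high.

from statistics import mean, pvariance

def block_stats(values):
    """Mean and population variance of one colour component of a block."""
    return mean(values), pvariance(values)

def lut_address(m1, v1, m2, v2, bits=2):
    """Pack four quantised 8-bit statistics into a single table address."""
    q = lambda x: min((1 << bits) - 1, int(x) >> (8 - bits))
    return (q(m1) << 3 * bits) | (q(v1) << 2 * bits) | (q(m2) << bits) | q(v2)

def load_interval_us(indicator_table, m1, v1, m2, v2, base_us=200):
    # A high indicator means a visible artifact is likely, so load faster.
    indicator = indicator_table.get(lut_address(m1, v1, m2, v2), 0)
    return base_us // (1 + indicator)
```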

US Pat. No. 10,798,345

IMAGING DEVICE, CONTROL METHOD OF IMAGING DEVICE, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An imaging device comprising: a plurality of first imaging units;
a combining processing unit that combines images acquired by the plurality of first imaging units to generate a wide angle image;
a second imaging unit that captures a part of a region of the wide angle image, wherein a frequency of acquiring an image captured by the second imaging unit is higher than a frequency of acquiring the wide angle image; and
a control unit that (1) determines a probability of interruption of a tracking operation of a subject by the second imaging unit based on at least one of (a) a state of the second imaging unit and (b) information included in an image captured by the second imaging unit, and (2) controls a frequency of acquiring the wide angle image to be higher when the probability of interruption of a tracking operation is greater than or equal to a predetermined threshold than when the probability of interruption of a tracking operation is less than the predetermined threshold.
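The claimed control rule is a simple threshold comparison; a sketch, with a toy interruption-probability heuristic standing in for whatever state and image cues a real device would use:

```python
# Sketch of the control unit's rule: boost the wide-angle capture frequency
# whenever the tracking camera is likely to lose its subject. The probability
# model and both frequencies are placeholder assumptions.

INTERRUPTION_THRESHOLD = 0.5

def wide_angle_fps(p_interruption, normal_fps=1.0, boosted_fps=5.0):
    """Return the wide-angle acquisition rate for a given interruption risk."""
    return boosted_fps if p_interruption >= INTERRUPTION_THRESHOLD else normal_fps

def interruption_probability(zoom_ratio, subject_speed_px):
    # Toy heuristic: tighter zoom and faster subjects make losing track likelier.
    return min(1.0, 0.1 * zoom_ratio + 0.01 * subject_speed_px)
```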

US Pat. No. 10,798,344

ASSET MANAGEMENT MONITORING

Alarm.com Incorporated, ...

1. A method comprising: determining, by an autonomous device, that a present status of an asset within a property that is being monitored by the autonomous device does not correspond to an expected status of the asset;
based on determining that the present status does not correspond to the expected status, determining, by the autonomous device, a previous status of the asset that identifies a location of the property where the asset was previously determined to be located;
navigating, by the autonomous device, to the location of the property; and
collecting, by the autonomous device, one or more images of the location of the property.

US Pat. No. 10,798,343

AUGMENTED VIDEO SYSTEM PROVIDING ENHANCED SITUATIONAL AWARENESS

Insitu, Inc., Bingen, WA...

1. A method for providing situational awareness to image data captured by an image capturing device carried by an unmanned aircraft, the method comprising: determining, by executing instructions with at least one processor, a transformation matrix based on metadata associated with at least one of a plurality of frames of the captured image data;
for at least one overlay, applying, by executing instructions with the at least one processor, the transformation matrix to the at least one overlay to transform the at least one overlay; and
providing, by executing instructions with the at least one processor, the at least one transformed overlay for display; and
in response to receiving a request to change the orientation of the image capturing device,
determining, by executing instructions with the at least one processor, a horizontal field of view for the image capturing device,
determining, by executing instructions with the at least one processor, a vertical field of view for the image capturing device,
determining, by executing instructions with the at least one processor, a horizontal slew rate for changing the orientation of the image capturing device,
determining, by executing instructions with the at least one processor, a vertical slew rate for changing the orientation of the image capturing device,
determining, by executing instructions with the at least one processor, a width of a displayed image,
determining, by executing instructions with the at least one processor, a height of the displayed image,
determining, by executing instructions with the at least one processor, an advance period,
calculating, by executing instructions with the at least one processor, a coordinate based on the horizontal field of view, the vertical field of view, the horizontal slew rate, the vertical slew rate, the width, the height, and the advance period, and
providing, by executing instructions with the at least one processor, an indication of a digital lead indicator at the calculated coordinate for display.
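The claim lists the inputs to the coordinate calculation but not the formula; one plausible (assumed) reading is that the view centre moves by slew rate × advance period degrees, which the per-axis field of view converts to pixels:

```python
# Hedged sketch of the digital lead indicator arithmetic: predict where the
# current aim point will appear after the slew, offset from screen centre.

def lead_indicator(hfov_deg, vfov_deg, h_slew_dps, v_slew_dps,
                   width_px, height_px, advance_s):
    """Screen coordinate of the lead indicator for a commanded slew."""
    # Degrees the view will traverse during the advance period, as pixels.
    dx = h_slew_dps * advance_s / hfov_deg * width_px
    dy = v_slew_dps * advance_s / vfov_deg * height_px
    return (width_px / 2 + dx, height_px / 2 + dy)

# A 10°/s rightward slew previewed 1 s ahead in a 20°-wide, 640-px view puts
# the indicator half a screen right of centre.
```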

US Pat. No. 10,798,342

HIGH DEFINITION SURVEILLANCE IMAGE STORAGE OPTIMIZATION AND RETENTION TRIGGERING APPARATUS

EAGLE EYE NETWORKS, INC.,...

1. An apparatus for image storage optimization and retention comprising: a purging module communicatively coupled to
a network interface to extrinsic sensors, communicatively coupled to
an extrinsic retention meta-data store, coupled to the purging module and further coupled to
a camera network interface to a plurality of high definition video cameras; said modules, interfaces and store being interconnected by communication circuits and switches;
a circuit to receive H.264 streams from high definition video cameras;
a circuit to segment H.264 streams into files and embed retention meta-data in each header of a file;
a processor core configured to determine that an extrinsic sensor measurement or event is of substantial significance to a surveillance system; and
a circuit to set a retention flag in an extrinsic retention meta-data store, wherein said purging module comprises: a retention policy store;
a statutory and judicial requirements store;
a metric threshold store;
a camera and location ownership store;
an interface to disk file management; and
a circuit to store retention metadata; and
a calendar module comprising a calendar processor configured to match video file segment timestamps with date-time of purging directives and, when true, trigger purging of segments unprotected by retention meta-data.
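The calendar module's matching step can be sketched as a filter over segment meta-data; the dict layout and field names are assumptions:

```python
# Sketch of a purge pass: delete only segments whose timestamp window precedes
# the purge directive AND that carry no retention flag in their embedded
# retention meta-data.

from datetime import datetime

def purge_pass(segments, purge_before):
    """Return the segments that survive a purge directive.

    segments: list of dicts with 'end' (datetime) and 'retained' (bool),
              read from each file's embedded retention meta-data header.
    """
    kept = []
    for seg in segments:
        expired = seg["end"] < purge_before
        if expired and not seg["retained"]:
            continue  # unprotected and past the directive: purge it
        kept.append(seg)
    return kept
```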

US Pat. No. 10,798,341

SYSTEMS AND METHODS FOR COMPILING AND PRESENTING HIGHLIGHTS OF A VIDEO CONFERENCE

Blue Jeans Network, Inc.,...

9. A method for executing a video conference session, said method comprising: displaying, on a first screen of an endpoint of a video conference system, an interface that comprises a first area for presentation of video streams of participants in the video conference session and a second area for presentation of highlight selections by the participants in the video conference session, the first and second areas each including timeline representations of the video conference, the timelines of the first and second areas being presented along axes orthogonal to one another and each including indications of the highlight selections by the participants; and
responsive to a participant's selection of a user interface element, creating an indicator of a highlight in each of the timelines in the first and second areas;
wherein the interface further includes a selection option for toggling the second area between presentation of highlight selections by the participants together with associated comments by the participants and presentation of highlight selections by the participants in context with a transcript of the video conference, and, responsive to user selection of the selection option, toggling, in the second area, between the presentation of highlight selections by the participants together with associated comments by the participants and the presentation of highlight selections by the participants in context with a transcript of the video conference.

US Pat. No. 10,798,340

AUXILIARY STREAM TRANSMISSION METHOD BASED ON VIDEO CONFERENCE SYSTEM

YEALINK (XIAMEN) NETWORK ...

1. An auxiliary stream transmission method based on a video conference system, wherein: the method integrates an auxiliary stream function into an auxiliary stream peripheral, a computer is connected to a master server of the video conference system through the auxiliary stream peripheral, and the master server and the auxiliary stream peripheral are connected through a network cable; when the auxiliary stream peripheral is connected to the master server of the video conference system through a network cable and the computer is connected to the auxiliary stream peripheral, the auxiliary stream peripheral reports auxiliary stream information in the computer to an encoding/decoding unit of the auxiliary stream peripheral, and the encoding/decoding unit decodes, encodes, and compresses video and audio signals according to the auxiliary stream information, and sends the video and audio signals to the master server through Ethernet; and the master server receives the auxiliary stream information and code streams of the auxiliary stream peripheral, and performs decompression, decoding, and encoding, and then an auxiliary stream is displayed on the master server,
wherein an HDMI interface and a DP interface are disposed on the auxiliary stream peripheral, and the auxiliary stream peripheral is connected to the computer through an HDMI cable or a DP cable.

US Pat. No. 10,798,339

TELEPRESENCE MANAGEMENT

RoboRep Inc., Aurora, On...

1. A telepresence apparatus, comprising: a first computer assembly configured to be network connectable with a second computer assembly via a communication network; and
the first computer assembly also configured to interface with a first memory assembly configured to tangibly store programmed coded instructions, in which the programmed coded instructions are configured to urge the first computer assembly to:
receive a user gesture signal from a gesture-sensing device, in which the gesture-sensing device is configured to be connectable to the first computer assembly, and in which the gesture-sensing device is also configured to detect a user gesture to be provided by a user positioned proximate to the first computer assembly, and in which the gesture-sensing device is also configured to generate the user gesture signal associated with the user gesture that was detected; and
compute whether the user gesture signal, which was received by the first computer assembly, matches a predetermined user gesture stored in the first memory assembly of the first computer assembly; and
compute whether to suspend transmission of an aspect of a telepresence data unit from the first computer assembly to the second computer assembly via the communication network depending on a match made between the user gesture signal and the predetermined user gesture; and
wherein:
an image of a medical instrument is configured to be displayed on a user interface of the second computer assembly; and
a remote controllable laser pointer device is configured to point at the medical instrument located proximate to the first computer assembly; and
a pointer indicator positioned on the user interface of the second computer assembly, in use, is updated to indicate a location that corresponds to the location of the medical instrument that is positioned proximate to the first computer assembly.

US Pat. No. 10,798,338

SINGLE POINT DEVICES THAT CONNECT TO A DISPLAY DEVICE

American Well Corporation...

1. A single point device, comprising: a port for establishing a connection to a port on a display device and for switching the display device from a first mode to a second mode, wherein the display device is configured to render a program in the first mode and is further configured to stream data for a communication session received from a data processing system in the second mode, wherein the data processing system is configured to receive requests from medical service providers to initiate live time communication sessions with a user of the display device;
an audio video device configured to capture image data representing one or more images of physical entities in a field of view of the audio video device and further configured to capture acoustic data;
one or more voice interaction facilities configured to detect one or more utterances of a keyword and, in response, activating the single point device;
a receiver device configured:
for dedicated communication with the data processing system; and
to receive (a) audio signals, during a live time communication session, from a client device of an identified medical service provider through the data processing system, and (b) a user interface signal from the data processing system and carrying user interface data that is renderable on the display device that, when rendered by the display device, causes the display device to interrupt a program displaying on the display device; and
a transmitter device configured:
for dedicated communication with the data processing system;
to transmit, in response to activating the single point device and to the data processing system, data indicating that the user wishes to establish a communication session with a medical service provider;
to transmit, through the data processing system and to the client device, image data captured by the audio video device and audio signals received from the audio video device; and
to transmit the user interface signal received from the receiver device to the display device in the second mode to cause the display device to interrupt the program to initiate the communication session, as requested by the medical service provider.

US Pat. No. 10,798,337

COMMUNICATION DEVICE, COMMUNICATION SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

FUJI XEROX CO., LTD., To...

1. A communication device comprising: a communication interface;
a processor, configured to:
reproduce a voice and/or a video received from a device of an utterer through the communication interface;
acquire feature information indicating a psychological state of an audience who is listening to the voice and/or watching the video of the utterer, wherein the feature information further comprises a change of a value of biometric information per unit time in addition to a value of the biometric information at a time point detected by a biometric sensor;
estimate the psychological state of the audience based on the feature information;
extract a factor changing the psychological state from contents of the voice or the video which are being reproduced at a time point at which the psychological state of the audience changes, wherein the factor changing the psychological state is a keyword indicating the contents of the voice of the utterer or a keyword indicating a behavior of the utterer from the contents of the video of the utterer, wherein when the factor is the keyword indicating the voice of the utterer, the keyword is extracted from texts converted from the voice of the utterer; and
transmit the psychological state of the audience and the factor in association with each other, to the device of the utterer through the communication interface, wherein the psychological state of the audience and the keyword are displayed on the device of the utterer.

US Pat. No. 10,798,336

CABLE CONNECTION ELEMENT FOR REDUCING SIGNAL TRANSMISSION LOSS

SIGNAL CABLE SYSTEM CO., ...

1. A cable connection element for reducing signal transmission loss, comprising: a front clad having:
a receiving chamber formed through a rear end of the front clad; and
a first through hole formed through a front end of the front clad to communicate with the receiving chamber;
a base having:
a front portion coupled to the rear end of the front clad; a rear portion; and
a second through hole formed through the front portion and the rear portion; and
a connection terminal having:
a first core-holding portion formed on a front end of the connection terminal and having a first core-receiving hole;
a second core-holding portion formed on a rear end of the connection terminal and having a second core-receiving hole;
a core-clamping portion connected between the first core-holding portion and the second core-holding portion; and
a pin formed on and protruding rearwards from a rear end of the second core-holding portion;
wherein the pin of the connection terminal penetrates through the front portion and the rear portion of the base with the second core-holding portion fixed inside the base, the front end of the connection terminal mounted inside the receiving chamber of the front clad, and the first core-receiving hole corresponding to the first through hole;
wherein the core-clamping portion has:
a first clamping piece being plate-like and resilient and having:
a first middle section horizontally aligned with the front clad and the base;
a first front section obliquely connected between a front end of the first middle section and a top rear edge of the first core-holding portion; and
a first rear section obliquely connected between a rear end of the first middle section and a top front edge of the second core-holding portion;
wherein a top surface of the first core-holding portion having the top rear edge is parallel to a top surface of the second core-holding portion having the top front edge; and
a second clamping piece being plate-like and resilient and having:
a second middle section horizontally aligned with the front clad and the base and formed on a bottom of the first middle section;
a second front section obliquely connected between a front end of the second middle section and a bottom rear edge of the first core-holding portion; and
a second rear section obliquely connected between a rear end of the second middle section and a bottom front edge of the second core-holding portion;
wherein a bottom surface of the first core-holding portion having the bottom rear edge is parallel to a bottom surface of the second core-holding portion having the bottom front edge.

US Pat. No. 10,798,335

CONVERTING VARIABLE FRAME RATE VIDEO TO FIXED FRAME RATE VIDEO

Adobe Inc., San Jose, CA...

1. In a digital medium environment, a method to convert a variable frame rate (VFR) video into a fixed frame rate video implemented by a computing device, the method comprising: obtaining, by the computing device, the VFR video comprising source frames with varying frame durations;
determining, by the computing device, a fixed frame rate for the VFR video;
generating, by the computing device, an initial mapping that maps the source frames of the VFR video to a sequence of result frames of the fixed frame rate;
generating, by the computing device, an adjusted mapping to improve smoothness of motion of the VFR video by:
scanning the mapping to locate at least one skipped source frame in the mapping;
scanning the mapping in a first direction from a skip location of the at least one skipped source frame to locate at least one repeated source frame;
computing a timing error which would occur if the mapping is adjusted by adding the skipped source frame at the skip location, removing the repeated source frame, and shifting sequential result frames between the skip location of the skipped source frame in the mapping and a repeat location of the repeated source frame in the mapping in the first direction; and
if the timing error is below a threshold, adjusting the mapping by adding the at least one skipped source frame at the skip location, removing the at least one repeated source frame, and shifting sequential result frames between the skip location of the skipped source frame in the mapping and the repeat location of the repeated source frame in the mapping in the first direction; and
converting, by the computing device, the VFR video into the fixed frame rate video having the determined fixed frame rate based on the adjusted mapping.
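The initial-mapping step of this claim can be sketched under the assumption that each result frame should show whichever source frame is on screen at its timestamp; skips and repeats in this mapping are exactly what the subsequent adjustment pass trades against timing error. The data layout is an assumption:

```python
# Sketch of the initial mapping from VFR source frames to fixed-rate result
# frames: for each result timestamp, pick the source frame active at that time.

def initial_mapping(durations, fixed_fps, n_result_frames):
    """Map result-frame indices to source-frame indices.

    durations: per-source-frame display durations in seconds (VFR input).
    """
    # Start time of each source frame.
    starts, t = [], 0.0
    for d in durations:
        starts.append(t)
        t += d
    mapping, src = [], 0
    for i in range(n_result_frames):
        rt = i / fixed_fps
        # Advance to the last source frame that starts at or before rt.
        while src + 1 < len(starts) and starts[src + 1] <= rt:
            src += 1
        mapping.append(src)
    return mapping
```

With durations `[0.05, 0.05, 0.1]` at 10 fps, the mapping jumps from source frame 0 to 2: source frame 1 is skipped, the situation the claim's scan-and-adjust pass then examines.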

US Pat. No. 10,798,334

IMAGE PROCESSING SYSTEM, IMAGE DISPLAY METHOD, DISPLAY DEVICE AND STORAGE MEDIUM

BEIJING BOE DISPLAY TECHN...

1. An image processing system, comprising at least two data processing modules, wherein each of the data processing modules is used to convert image data input to the data processing module into image data having a resolution supported by the data processing module, the resolutions supported by the at least two data processing modules are different, and the image processing system further comprises: a controller, used for:
acquiring a first resolution supported by a display panel currently installed in a display device;
identifying a second resolution of pending image data input by a video source to the image processing system;
determining, in the at least two data processing modules, a target data processing module according to the first resolution and the second resolution, wherein the target data processing module supports the first resolution, and the target data processing module is used to convert image data having the second resolution input to the target data processing module into image data having the first resolution;
retrieving the target data processing module to process the pending image data to obtain target image data; and
outputting the target image data to the display panel.
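The controller's module selection reduces to a lookup keyed on the panel's resolution; a sketch in which modules are plain callables and every name is illustrative:

```python
# Sketch of the claimed selection step: pick the data processing module whose
# output resolution matches the installed panel (the first resolution), then
# hand it the pending image data (at the video source's second resolution).

def pick_module(modules, panel_resolution):
    """modules: dict mapping supported output resolution -> converter callable."""
    try:
        return modules[panel_resolution]
    except KeyError:
        raise ValueError(f"no data processing module outputs {panel_resolution}")

modules = {
    (1920, 1080): lambda frame: {"pixels": frame, "res": (1920, 1080)},
    (3840, 2160): lambda frame: {"pixels": frame, "res": (3840, 2160)},
}
convert = pick_module(modules, (1920, 1080))
target = convert("pending image data")  # target image data at panel resolution
```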

US Pat. No. 10,798,333

CELL OBSERVATION SYSTEM

Shimadzu Corporation, Ky...

1. A cell observation system comprising a server capable of creating an image relating to an observation target on the basis of data obtained by a microscopic observation unit with respect to the observation target and a browsing terminal for browsing the image created by the server on a screen of a display unit, the server and the browsing terminal being connected via a communication network, wherein the server includes:
a1) an image creation unit that creates images having different resolutions at a plurality of levels with respect to an entire observation target area to be observed by the microscopic observation unit on the basis of the data obtained by the microscopic observation unit, and stores the images in an image storage unit; and
a2) an image transmission processing unit that extracts image information of all or a part of the images having a predetermined resolution stored in the image storage unit and transmits the extracted image information to the browsing terminal, in response to an image transmission request from the browsing terminal, and
wherein the browsing terminal includes:
b1) an operation unit that allows a user to move a display range of the observation target displayed on the screen of the display unit;
b2) an image transmission request unit that transmits the image transmission request to the server so as to transmit an image, formed by the extracted image information, according to a designated resolution and a designated display range after the movement by an operation using the operation unit;
b3) a storage unit that stores a low-resolution image having a relatively low resolution and corresponding to the entire observation target area or a range wider than the display range of the observation target displayed on the screen of the display unit at least at that point in the observation target area; and
b4) a display image formation unit that forms a display image on the basis of at least the low-resolution image until the extracted image information arrives from the server in response to the image transmission request after the movement by the operation using the operation unit, and forms another display image by overlaying on the low-resolution image a new high-resolution image having a relatively high resolution and corresponding to the display range of the observation target displayed on the screen of the display unit at that time based on the extracted image information when the extracted image information is obtained from the server in response to the image transmission request.

US Pat. No. 10,798,332

DUAL PASS-THROUGH IMAGING SYSTEM AND METHOD

Varjo Technologies Oy, H...

1. An imaging system for producing images for a display apparatus, the imaging system comprising: an infrared light source that, in operation, emits infrared light;
at least one imaging unit comprising:
a first image-sensor chip having a first field of view;
a second image-sensor chip having a second field of view, the second field of view being wider than the first field of view, wherein the second field of view comprises an overlapping field of view that overlaps with the first field of view and a remaining non-overlapping field of view;
a semi-transparent reflective element arranged to reflect a portion of light received from a real-world environment towards the first image-sensor chip, whilst transmitting another portion of the light towards the second image-sensor chip;
a first infrared filter arranged, on an optical path between the semi-transparent reflective element and the first image-sensor chip, to block transmission of infrared light towards the first image-sensor chip; and
means for transmitting infrared light received from the overlapping field of view towards the second image-sensor chip, whilst blocking transmission of infrared light received from the non-overlapping field of view towards the second image-sensor chip, said means being arranged on an optical path between the semi-transparent reflective element and the second image-sensor chip; and
at least one processor, communicably coupled to the infrared light source and the at least one imaging unit, configured to:
control the first image-sensor chip and the second image-sensor chip to capture a first image and a second image of the real-world environment, respectively, wherein a portion of the second image-sensor chip that receives the infrared light from the overlapping field of view, in operation, captures depth information pertaining to the overlapping field of view, and wherein a resolution of the first image is higher than a resolution of the second image; and
generate from the first image and the second image at least one extended-reality image to be presented via the display apparatus, based on said depth information.

US Pat. No. 10,798,331

MULTICHROMIC REFLECTIVE LAYER TO ENHANCE SCREEN GAIN

Sony Corporation, Tokyo ...

1. An assembly comprising:at least one substrate against which color video can be projected by at least one projector, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and
at least one multichromic reflective coating disposed on the substrate, the multichromic reflective coating reflecting only wavelengths of light produced by the projector.

US Pat. No. 10,798,330

IMAGING DEVICE AND IMAGING APPARATUS

NIKON CORPORATION, Tokyo...

1. An imaging device, comprising:a plurality of pixels;
a first signal line that is connected to a first pixel among the plurality of pixels;
a second signal line that is connected to a second pixel, different from the first pixel, among the plurality of pixels;
a first signal processing unit that includes a first input pin and a first operational amplifier performing signal processing on a signal input to the first input pin;
a second signal processing unit that includes a second input pin and a second operational amplifier performing signal processing on a signal input to the second input pin; and
a controlling unit that switches between (i) a first mode, which connects the first signal line and the second signal line to the first input pin, and (ii) a second mode, which connects the first signal line to the first input pin and connects the second signal line to the second input pin, wherein
the controlling unit controls a power consumption of the second operational amplifier in the first mode to be lower than a power consumption of the second operational amplifier in the second mode.

US Pat. No. 10,798,329

IMAGE PICKUP APPARATUS, IMAGE PICKUP SYSTEM, AND METHOD OF DRIVING IMAGE PICKUP APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An image pickup apparatus comprising:a pixel array comprising a plurality of unit cells arranged in rows and columns;
a scan circuit configured to scan the pixel array on a row by row basis; and
output lines each arranged corresponding to one of the columns and configured to output, by scanning of the scan circuit, a digital signal from unit cells in a corresponding one of the columns; each unit cell comprising:
a plurality of photoelectric conversion units configured to output an analog signal based on electromagnetic waves incident on respective ones of the plurality of photoelectric conversion units;
a plurality of first circuits serving as a signal holding unit configured to sample and hold the analog signal in a sampling/holding period; and
at least one second circuit serving as an AD conversion unit configured to convert the analog signal held by the plurality of first circuits into the digital signal in an AD conversion period,
wherein the AD conversion period is shorter than the sampling/holding period,
wherein a number of the plurality of first circuits arranged in each unit cell is greater than a number of the at least one second circuit arranged in each unit cell.

US Pat. No. 10,798,328

IMAGE SENSOR INCLUDING PIXEL CIRCUITS

TAIWAN SEMICONDUCTOR MANU...

1. A circuit, comprising:a sensing unit configured to receive light and generate a sensing voltage at a sensing node in response to the light;
a first group of switching units coupled to the sensing node, wherein the first group of switching units comprises:
a first switching unit including a first terminal coupled to the sensing node and a second terminal coupled to a first node;
a second switching unit including a first terminal coupled to the sensing node and a second terminal coupled to a second node; and
a third switching unit including a first terminal coupled to the first node and a second terminal coupled to the second node;
wherein the first group of switching units is configured to generate a first transfer voltage to the first node and generate a first auxiliary voltage to the second node;
a second group of switching units coupled to the sensing node, and configured to generate a second transfer voltage to a third node and generate a second auxiliary voltage to a fourth node;
a first capacitive unit coupled to the first group of switching units, and configured to store charges generated from the sensing unit; and
a readout circuit configured to read at least one of the first transfer voltage and the first auxiliary voltage, and read at least one of the second transfer voltage and the second auxiliary voltage.

US Pat. No. 10,798,327

PHOTOELECTRIC CONVERSION APPARATUS AND IMAGING SYSTEM

CANON KABUSHIKI KAISHA, ...

1. A photoelectric conversion apparatus comprising:a diode of avalanche amplification type;
a pulse shaping circuit configured to shape an output of the diode into a pulse signal;
a pulse conversion circuit configured to convert the pulse signal having a first amplitude into a pulse signal having a second amplitude that is smaller than the first amplitude; and
a signal processing circuit configured to process the pulse signal having the second amplitude output from the pulse conversion circuit,
wherein the diode is supplied with a first power supply voltage and a second power supply voltage,
wherein the signal processing circuit is supplied with a third power supply voltage and a fourth power supply voltage, and
wherein a difference between the first power supply voltage and the second power supply voltage is greater than a difference between the third power supply voltage and the fourth power supply voltage.

US Pat. No. 10,798,326

IMAGING APPARATUS, SIGNAL PROCESSING APPARATUS, AND MOVING BODY

CANON KABUSHIKI KAISHA, ...

1. A photoelectric conversion apparatus including a plurality of pixels arranged on a substrate in a matrix,wherein at least one of the pixels included in a first pixel group includes:
a first electrode;
a second electrode facing the first electrode;
a third electrode disposed between the first and second electrodes;
a photoelectric conversion layer disposed on the first, second, and third electrodes; and
a micro-lens corresponding to the first, second, and third electrodes;
wherein at least one of the pixels included in a second pixel group includes:
a fourth electrode;
a fifth electrode facing the fourth electrode;
the photoelectric conversion layer disposed on the fourth and fifth electrodes; and
a micro-lens corresponding to the fourth and fifth electrodes;
wherein the first, second, and third electrodes are electrically connected to nodes that are different from one another, respectively, and
wherein the fourth and fifth electrodes are electrically connected to a same node.

US Pat. No. 10,798,325

ELECTRONIC DEVICE WITH IMAGE SENSOR THAT INCLUDES PHOTOELECTRIC CONVERTING SECTIONS THAT START TO STORE ELECTRICAL CHARGE AT DIFFERENT TIMINGS

NIKON CORPORATION, Tokyo...

1. An electronic device comprising:an image sensor that includes
a first photoelectric converting section that converts light into electrical charge,
a second photoelectric converting section that converts light into electrical charge,
a first signal line to which a first signal based on electrical charge of the first photoelectric converting section is output, and
a second signal line to which a second signal based on electrical charge of the second photoelectric converting section is output; and
a control section that performs control in a manner such that, after starting to store electrical charge converted by the first photoelectric converting section, storing electrical charge in the second photoelectric converting section is started, and that, before terminating output of the first signal to the first signal line, output of the second signal to the second signal line is terminated, wherein:
the image sensor further includes
a first transfer gate that transfers electrical charge of the first photoelectric converting section,
a second transfer gate that transfers electrical charge of the second photoelectric converting section,
a first transfer control line, to which a first transfer control signal for controlling the first transfer gate is output, that is connected to the first transfer gate, and
a second transfer control line, to which a second transfer control signal for controlling the second transfer gate is output, that is connected to the second transfer gate; and
the control section further performs control in a manner such that, before outputting the first transfer control signal to the first transfer control line, the second transfer control signal is output to the second transfer control line.

US Pat. No. 10,798,324

SOLID-STATE IMAGE CAPTURE ELEMENT, DRIVING METHOD, AND ELECTRONIC DEVICE

Sony Corporation, Tokyo ...

1. A solid-state image capture element, comprising:a plurality of pixels, wherein each pixel of the plurality of pixels comprises:
a photoelectric conversion unit configured to:
convert incident light into a charge by photoelectric conversion; and
store the charge;
a transfer drive element configured to transfer a charge generated in the photoelectric conversion unit to a memory unit;
a discharge unit configured to discharge a charge remaining in the photoelectric conversion unit; and
a discharge driving unit configured to be driven based on the discharge of the charge from the photoelectric conversion unit to the discharge unit,
wherein a plurality of discharge driving units are arranged in series between the photoelectric conversion unit and the discharge unit.

US Pat. No. 10,798,323

CONTROL METHOD FOR AN ACTIVE PIXEL IMAGE SENSOR

1. An image capture method in an active pixel image sensor, applying an integration period common to all of the pixels before a phase of reading out the pixels, row by row, each pixel comprising a memory node between a photosensitive element and a readout node of the pixel, with a first charge transfer transistor placed between the photosensitive element and the memory node and a second charge transfer transistor placed between the memory node and the readout node, wherein in each new integration period, the method comprises the following control steps, across all of the pixels simultaneously:applying, to the gate of all of the first transfer transistors:
at least one first voltage pulse at an intermediate voltage between the start and the end of the integration period, controlling a transfer of charge from the photodiode to the memory node; and
a final voltage pulse at the end of the integration period, controlling a final transfer of charge from the photodiode to the memory node, the end of the pulse marking the end of the current integration period;
applying, to the gate of all of the second transfer transistors, a second voltage pulse after each first voltage pulse and before the final voltage pulse, the second voltage pulse setting a potential barrier height under the gate of said second transistors in relation to the potential of the memory node, allowing the charge in said memory node beyond a maximum amount of charge that can be held in said memory node to be clipped.

US Pat. No. 10,798,322

HIGH DYNAMIC-RANGE IMAGE SENSOR

Rambus Inc., San Jose, C...

1. A method of operation within an integrated-circuit image sensor having a pixel array, the method comprising:exposing the pixel array to light representative of a scene during first and second frame intervals of equal duration;
oversampling the pixel array a first number of times within the first frame interval to generate a corresponding first number of frames of image data from which a first output image may be constructed;
evaluating one or more of the first number of frames of image data to determine whether a range of luminances in the scene warrants adjustment of an oversampling factor from the first number to a different second number; and
oversampling the pixel array the second number of times within the second frame interval to generate a corresponding second number of frames of image data from which a second output image may be constructed if the range of luminances in the scene is determined to warrant adjustment of the oversampling factor.
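The adjustment step in this claim — raising or lowering the oversampling factor when the scene's luminance range warrants it — can be illustrated with a minimal sketch. The function names, thresholds, and the doubling/halving policy are assumptions for illustration, not taken from the patent:

```python
def luminance_range(frames):
    """Span between the darkest and brightest samples across the frames
    generated within one frame interval."""
    lo = min(min(f) for f in frames)
    hi = max(max(f) for f in frames)
    return hi - lo

def choose_oversampling_factor(frames, current_n,
                               wide_threshold=200, narrow_threshold=50):
    """Evaluate one or more frames and decide whether the scene's
    luminance range warrants a different oversampling factor.
    Thresholds and the x2 / /2 policy are illustrative."""
    span = luminance_range(frames)
    if span > wide_threshold:        # high-dynamic-range scene: oversample more
        return current_n * 2
    if span < narrow_threshold:      # flat scene: fewer subframes suffice
        return max(1, current_n // 2)
    return current_n                 # range does not warrant adjustment

# A flat scene lowers the factor; a contrasty one raises it.
flat = [[100, 110], [105, 108]]
contrasty = [[5, 250], [10, 240]]
```

The second frame interval would then be oversampled `choose_oversampling_factor(...)` times to build the second output image.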

US Pat. No. 10,798,321

BIT-DEPTH EFFICIENT IMAGE PROCESSING

Dolby Laboratories Licens...

1. A computer-implemented method for image processing by a device comprising an image sensor for capturing an image having a first bit depth, a capture mode controller configured to control the image sensor for image capture under a plurality of capture modes, an image signal processor configured to produce from an image captured by the image sensor, by applying a non-linear transformation and re-encoding, a non-linearized image, and an inverter configured to produce from an image produced by the image signal processor, by applying an inverse transformation and re-encoding, a re-linearized image, the method comprising:providing a plurality of non-linear transformations each associated with a different capture mode of the plurality of capture modes,
controlling, by the capture mode controller, of the image sensor to capture an image under one of the capture modes, the image having sensor signals encoded at the first bit depth,
communicating, by the capture mode controller, at least the one non-linear transformation associated with the capture mode under which the image was captured by the image sensor to the image signal processor,
applying the one non-linear transformation to the captured image and re-encoding, by the image signal processor, to produce from the captured image a non-linearized image that has code values characterizing the sensor signals of the captured image non-linearly distributed across a second bit depth less than the first bit depth,
providing a plurality of inverse transformations each inverse to a different non-linear transformation of the plurality of non-linear transformations,
receiving from the image signal processor, by the inverter, the non-linearized image,
receiving, from the capture mode controller, a capture-mode specification indicating the capture mode under which the captured image was captured by the image sensor,
selecting, based upon the capture-mode specification, from the plurality of inverse transformations the one inverse transformation inverse to the one non-linear transformation used to produce the non-linearized image; and
applying the one inverse transformation to the non-linearized image and re-encoding, by the inverter, to produce from the non-linearized image a re-linearized image that has code values characterizing the sensor signals of the captured image linearly distributed across a third bit depth greater than the second bit depth.
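The forward/inverse pairing in this claim — a non-linear transformation that re-encodes first-bit-depth sensor signals at a smaller second bit depth, and an inverse that re-linearizes them at a larger third bit depth — can be sketched as follows. The square-root curve stands in for a mode-specific transformation, and the 12/8/10 bit depths are assumptions for illustration:

```python
BD_SENSOR, BD_CODED, BD_LINEAR = 12, 8, 10  # first, second, third bit depths (illustrative)

def nonlinearize(x, gamma=0.5):
    """ISP side: map a linear 12-bit sensor code to an 8-bit code whose
    values are non-linearly distributed across the smaller bit depth."""
    peak_in, peak_out = (1 << BD_SENSOR) - 1, (1 << BD_CODED) - 1
    return round(peak_out * (x / peak_in) ** gamma)

def relinearize(c, gamma=0.5):
    """Inverter side: apply the matching inverse transformation and
    re-encode at a bit depth greater than the coded one (here 10-bit)."""
    peak_in, peak_out = (1 << BD_CODED) - 1, (1 << BD_LINEAR) - 1
    return round(peak_out * (c / peak_in) ** (1 / gamma))

code = nonlinearize(2048)       # mid-level sensor signal, 8-bit coded
restored = relinearize(code)    # back to a linear 10-bit value
```

The round trip is lossy (8 bits cannot carry 12), but the non-linear spacing concentrates that loss where the curve allots fewer codes.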

US Pat. No. 10,798,320

IMAGE SENSOR, COMPARATOR CIRCUIT AND METHOD THEREOF

Taiwan Semiconductor Manu...

1. A comparator circuit of an image sensor, configured to compare a pixel signal and a ramp signal, comprising:a first comparator circuit, comprising differential inputs that receive the pixel signal and the ramp signal, respectively; and
a second comparator circuit, comprising differential inputs that receive the pixel signal and the ramp signal, respectively,
wherein the second comparator circuit is activated when a level of the pixel signal is smaller than a first threshold value, the first comparator circuit is activated when the level of the pixel signal is greater than a second threshold value, and the first threshold value is greater than the second threshold value.

US Pat. No. 10,798,319

CAMERA DEVICE AND METHOD FOR CAPTURING A SURROUNDING REGION OF A VEHICLE IN A SITUATION-ADAPTED MANNER

Conti Temic microelectron...

1. A camera device for capturing a surrounding region of a vehicle, the camera device comprising:an optronics system and an image capture control unit, which are configured to acquire a sequence of images of the surrounding region,
wherein:
the optronics system comprises a wide-angle optical system and a high-resolution image acquisition sensor, and
the optronics system and the image capture control unit are configured:
to produce, respectively for each individual image of the sequence of images, either a reduced-resolution image as a binned image, which has a reduced resolution that is reduced by pixel binning, of an entire capture region of the optronics system, or a high-resolution image as an unbinned image with a maximum resolution of a subregion of the capture region of the optronics system, depending on a current traffic and/or surrounding situation, wherein an image height and an image width of the subregion in the high-resolution image are set depending on the current traffic and/or surrounding situation, and wherein a size of the subregion is set such that a pixel count of the high-resolution image of the subregion is no greater than a pixel count of the reduced-resolution image of the entire capture region of the optronics system; and
to control the image height and/or the image width of the subregion in a current image of the sequence of images in response to a recognized content that is recognized in at least one previous image of the sequence of images.
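The either/or trade-off in this claim — a binned image of the whole field or an unbinned crop of a subregion, at equal pixel budget — can be made concrete with a small sketch. Pure-Python 2x2 binning on nested lists; the function names are illustrative:

```python
def bin2x2(image):
    """2x2 pixel binning: average each 2x2 block into one pixel,
    quartering the pixel count of the full capture region."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def crop(image, x0, y0, width, height):
    """Maximum-resolution (unbinned) subregion of the capture region."""
    return [row[x0:x0 + width] for row in image[y0:y0 + height]]

# A 4x4 sensor yields either a 2x2 binned frame of the entire field
# or a 2x2 full-resolution crop: the same pixel count either way,
# which is the size constraint the claim places on the subregion.
sensor = [[0, 2, 4, 6],
          [2, 0, 6, 4],
          [8, 8, 1, 1],
          [8, 8, 1, 1]]
```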

US Pat. No. 10,798,318

SOLID-STATE IMAGING ELEMENT, IMAGING DEVICE, AND ELECTRONIC DEVICE

Sony Corporation, Tokyo ...

1. A solid-state imaging element, comprising:a first substrate including a pixel circuit having a pixel array unit; and
a second substrate including:
signal processing circuits to process signals from the pixel array unit; and
a wiring layer with wiring regions electrically connected to respective ones of the signal processing circuits,
wherein each signal processing circuit has a same circuit pattern,
wherein the second substrate and the first substrate are stacked, and
wherein a wiring pattern of each wiring region is different, wherein a function of each signal processing circuit is changeable based on the wiring pattern of each wiring region connected to each signal processing circuit.

US Pat. No. 10,798,317

MULTISPECTRAL COLOR IMAGING DEVICE BASED ON INTEGRATING SPHERE LIGHTING AND CALIBRATION METHODS THEREOF

The Hong Kong Research In...

1. A multispectral color imaging device, comprising:a light house module, said light house module comprising a light source and a light intensity collection device surrounding the light source, wherein said light intensity collection device collects light radiated from the light source via reflection and emits the collected light through an opening of the light intensity collection device;
an integrating sphere module, said integrating sphere module comprising an integrating sphere, a light inlet at a first side of the integrating sphere, a sample holder gateway at a second side of the integrating sphere, a light outlet at the top of the integrating sphere, and a sample holder having access to an internal space of the integrating sphere, wherein said first and second sides are opposite to each other, said light inlet couples to the opening of the light intensity collection device, said sample holder accesses the internal space of the integrating sphere through said sample holder gateway and is positioned in alignment with the light outlet during imaging, and said sample holder is slidable into the integrating sphere via a slideway element; and
a filter wheel module, said filter wheel module comprising a camera, a filter wheel installed below the camera, and a lens installed below the filter wheel, said lens coupling to the light outlet of the integrating sphere, said filter wheel having a plurality of channels and a plurality of filters located therein, wherein one of the filters is configurable to position between the camera and the lens for filtering.

US Pat. No. 10,798,316

MULTI-SPECTRAL IMAGING USING LONGITUDINAL CHROMATIC ABERRATIONS

HAND HELD PRODUCTS, INC.,...

1. An imager, comprising:an objective lens configured to disperse light reflected from a target object with longitudinal chromatic aberrations along an optical axis of the objective lens;
a sensor configured to:
capture first data for a first captured image of the target object at a first wavelength; and
capture second data for a second captured image of the target object at a second wavelength; and
a processor configured to:
extract a sharp region of the first captured image to obtain a first image, wherein the sharp region includes a certain level of in-focus detail;
calculate a first ratio (K1) of a pixel intensity of a first color to a pixel intensity of a second color and calculate a second ratio (K2) of the pixel intensity of the first color to a pixel intensity of a third color of a remaining region of the first captured image;
obtain a second image based on extraction of pixels having predefined values of the first ratio (K1) and the second ratio (K2);
generate, based on a combination of the first image and the second image, a first combined image associated with the first wavelength;
generate a second combined image associated with the second wavelength; and
generate a multi-dimensional image that includes the first combined image associated with the first wavelength and the second combined image associated with the second wavelength,
wherein at least one of the objective lens and the sensor is configured to move along the optical axis to enable the sensor to capture the first captured image associated with the first wavelength and the second captured image associated with the second wavelength.

US Pat. No. 10,798,315

REMOVAL OF INTERFERENCE OF ABSORBERS FROM INTENSITY DATA

Owens-Brockway Glass Cont...

1. A method for generating a thermal image of a glass gob, the method comprising:extracting pixel intensity data from a plurality of images corresponding to electromagnetic radiation emitted from one or more glass gobs;
generating an array of intensity data for each representative pixel in the plurality of images, wherein each array represents a distribution of intensity data from corresponding pixels in each of the images;
filtering the intensity data in each array to exclude an amount of intensity data such that a remaining amount of intensity data represents a distribution of intensity data uncontaminated by interference; and
generating a thermal image of a glass gob based on the remaining amount of intensity data in each array.
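The filtering step of this method — building a per-pixel distribution across the image stack and excluding the contaminated portion — can be sketched as below. Treating the low tail as the absorber-contaminated part, and the 60% keep fraction, are assumptions for illustration:

```python
def filter_interference(values, keep_fraction=0.6):
    """Exclude an amount of intensity data from one pixel's array so the
    remainder represents the uncontaminated distribution. Here the low
    readings (pulled down by absorbers in the optical path) are dropped;
    the fraction kept is illustrative."""
    ranked = sorted(values, reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:keep]

def thermal_pixel(values):
    """One pixel of the thermal image: mean of the remaining,
    uncontaminated intensity data."""
    kept = filter_interference(values)
    return sum(kept) / len(kept)

# Five frames of one pixel: two low, contaminated readings are excluded.
stack = [200, 198, 60, 202, 55]
```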

US Pat. No. 10,798,314

IMAGING APPARATUS AND DISPLAY METHOD

OLYMPUS CORPORATION, Tok...

1. An imaging apparatus comprising:an exposure time setting circuit configured to set an exposure time for shooting;
an imaging sensor configured to image an image of a subject by repeating exposure and output of an image signal at a specified time interval;
a microcomputer that includes a unit configured to perform processes as follows:
a synthesis process setting unit configured to set a synthesis processing method in which outputs of image signals corresponding to respective pixels are cumulatively processed for a plurality of image signals continuously output from the imaging sensor;
an image processing circuit that includes an image synthesis circuit configured to generate a synthesized image according to a synthesis processing method set by the synthesis process setting unit, for the plurality of image signals continuously output from the imaging sensor, and based on image signal outputs corresponding respectively to pixels of identical addresses; and
a display configured to display an image, wherein
the synthesis process setting unit sets, in the image synthesis circuit, a synthesis processing method for making brightness of a synthesized image generated by the image synthesis circuit to be specified brightness;
the image synthesis circuit generates a synthesized image to be an image equivalent to an exposure time set by the exposure time setting circuit, by synthesizing a plurality of images based on image signals repeatedly output by the image sensor according to a synthesis processing method set by the synthesis process setting unit; and
the display displays an image based on a synthesized image generated by the image synthesis circuit.

US Pat. No. 10,798,313

PRESERVING PRIVACY IN SURVEILLANCE

Alarm.com Incorporated, ...

1. A computer-implemented method comprising:obtaining images of a scene captured by a camera;
identifying an object in the images through object recognition;
determining that (i) the object that is identified in the images is of a particular type that has a privacy restriction and (ii) a person is interacting with the object; and
in response to determining that (i) the object in the images is of the particular type that has the privacy restriction and (ii) the person is interacting with the object, obfuscating an appearance of the object in the images and not obfuscating other portions of the images.
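The gating logic of this method — obfuscate only when the object type is privacy-restricted and a person is interacting with it, and only within the object's region — can be sketched as follows. The restricted-type set, the detection-triple format, and block-average pixelation as the obfuscation are assumptions for illustration:

```python
PRIVACY_RESTRICTED = {"document", "screen", "keypad"}  # illustrative types

def pixelate(image, box, block=2):
    """Obfuscate the appearance inside the bounding box by replacing each
    block with its average, leaving other portions of the image intact."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in image]
    for y in range(y0, y1, block):
        for x in range(x0, x1, block):
            cells = [(yy, xx)
                     for yy in range(y, min(y + block, y1))
                     for xx in range(x, min(x + block, x1))]
            avg = sum(out[yy][xx] for yy, xx in cells) // len(cells)
            for yy, xx in cells:
                out[yy][xx] = avg
    return out

def apply_privacy(image, detections):
    """detections: (object_type, person_interacting, box) triples from an
    assumed upstream object recognizer."""
    for obj_type, interacting, box in detections:
        if obj_type in PRIVACY_RESTRICTED and interacting:
            image = pixelate(image, box)
    return image
```

If either condition fails (wrong type, or no one interacting), the image passes through unchanged.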

US Pat. No. 10,798,312

CELLULAR PHONE INCLUDING APPLICATION PROCESSOR THAT GENERATES IMAGE OUTPUT SIGNALS BASED ON MULTIPLE IMAGE SIGNALS FROM CAMERA MODULES AND THAT PERFORMS RECTIFICATION TO CORRECT DISTORTION IN THE IMAGE OUTPUT SIGNALS

Samsung Electronics Co., ...

1. A cellular phone comprising:a first camera module having a first lens, a first sensor, and a first storage device storing first calibration data for the first camera module, the first camera module configured to generate a first image signal;
a second camera module having a second lens, a second sensor, and a second storage device storing second calibration data for the second camera module, the second camera module configured to generate a second image signal;
a third camera module having a third lens, a third sensor, and a third storage device storing third calibration data for the third camera module, the third camera module configured to generate a third image signal; and
an application processor physically separated from the first, second and third camera modules, the application processor configured to receive the first image signal from the first camera module via a first camera serial interface, to receive the second image signal from the second camera module via a second camera serial interface, and to receive the third image signal from the third camera module via a third camera serial interface,
wherein the application processor includes a first image generator configured to process the first image signal using the first calibration data in the first storage device and to process one of the second image signal and the third image signal using one of the second calibration data in the second storage device and the third calibration data in the third storage device,
wherein the application processor includes a first signal processor, and a second signal processor different from the first signal processor,
the first signal processor receives the first image signal and performs first image signal processing to generate a first image output signal,
the second signal processor receives one of the second image signal and the third image signal and performs second image signal processing to generate a second image output signal, and
wherein the first image generator is configured to output a single image by combining the first and second image output signals.

US Pat. No. 10,798,311

METHODS AND APPARATUS FOR REDUCING SPATIAL FLICKER ARTIFACTS

SEMICONDUCTOR COMPONENTS ...

1. An imaging device comprising:an array of pixels arranged in rows and columns, wherein the array of pixels generates:
first image pixel data corresponding to pixels having a first exposure time, and
second image pixel data corresponding to pixels having a second exposure time,
wherein the first exposure time is greater than the second exposure time; and
wherein the pixels used to generate the first and second image pixel data have the same spatial locations within the array;
a color filter overlaying the array of pixels and comprising a plurality of color filters; and
an image signal processor coupled to the array of pixels, wherein the image signal processor:
receives the first image pixel data and second image pixel data;
identifies valid and invalid image pixel data based on pixel data values;
generates a ratio utilizing the valid image pixel data;
wherein:
the ratio comprises a first individual pixel value divided by a second individual pixel value;
the first individual pixel value corresponds to a first color from the plurality of color filters;
the second individual pixel value corresponds to a second color from the plurality of color filters that is different from the first color; and
the first and second individual pixel values have a same exposure time;
normalizes the computed ratio using an ideal ratio, wherein the ideal ratio is the first exposure time divided by the second exposure time; and
modifies the second image pixel data using the normalized computed ratio.
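The ratio pipeline in this claim — a same-exposure color ratio, normalized by the ideal exposure ratio, used to modify the short-exposure data — can be sketched as below. The function names, the example numbers, and applying the correction as a simple scale factor are assumptions for illustration:

```python
def compute_ratio(pixel_a, pixel_b):
    """Ratio of two valid, same-exposure pixel values of different colors."""
    return pixel_a / pixel_b

def normalize_ratio(ratio, t_long, t_short):
    """Normalize the computed ratio by the ideal ratio, defined in the
    claim as the first (long) exposure time over the second (short) one."""
    return ratio / (t_long / t_short)

def correct_short_exposure(short_pixels, norm_ratio):
    """Modify the second (short-exposure) image pixel data using the
    normalized ratio; a plain per-pixel scale is illustrative."""
    return [p * norm_ratio for p in short_pixels]
```

With a flicker-free scene the computed ratio matches the ideal one, the normalized ratio is 1.0, and the short-exposure data passes through unchanged; spatial flicker shows up as a deviation from 1.0 that the last step removes.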

US Pat. No. 10,798,310

HYPERSPECTRAL IMAGER COUPLED WITH INDICATOR MOLECULE TRACKING

Hypermed Imaging, Inc., ...

1. An imaging device, comprising:A) a housing having an exterior and an interior;
B) at least one objective lens attached to or within the housing,
C) a first plurality of lights disposed on the exterior of the housing about the at least one objective lens;
D) a plurality of pixel array photo-sensors within the housing;
E) an optical assembly within the interior of the housing, the optical assembly in optical communication with the at least one objective lens, the optical assembly characterized by further directing light received by at least one objective lens from the tissue of a subject to at least one pixel array photo-sensor in the plurality of pixel array photo-sensors;
F) a plurality of bandpass filters within the housing, wherein
each respective bandpass filter in the plurality of bandpass filters covers a corresponding pixel array photo-sensor in the plurality of pixel array photo-sensors thereby selectively allowing a different corresponding spectral band of light, from the light received by the at least one objective lens and redirected by the optical assembly, to pass through to the corresponding pixel array photo-sensor; and
G) a controller within the housing wherein at least one program is non-transiently stored in the controller and executable by the controller, the at least one program causing the controller to perform a method of:
i) firing a first subset of lights in the plurality of lights for a first period of time while not firing a second subset of lights or a third subset of lights in the plurality of lights, wherein the first subset of lights emits light that is substantially limited to a first spectral range,
ii) collecting a first set of images during the first period of time using at least a first subset of the plurality of pixel array photo-sensors,
iii) firing the second subset of lights in the plurality of lights for a second period of time while not firing the first subset of lights or the third subset of light sources, wherein the second subset of lights emits light that is substantially limited to a second spectral range,
iv) collecting a second set of images during the second period of time using at least a second subset of the plurality of pixel array photo-sensors,
v) firing the third subset of lights in the plurality of lights for a third period of time while not firing the first subset of lights or the second subset of lights, and
vi) collecting a third set of images during the third period of time using a first pixel array photo-sensor in the first or second subset of the plurality of pixel array photo-sensors, wherein the third subset of lights emits light that is substantially limited to a third spectral range, the third spectral range is in the near infrared and not in the visible wavelengths and wherein the bandpass filter covering the first pixel array photo-sensor includes a bandpass for a subset of the near infrared wavelengths.

US Pat. No. 10,798,309

METHOD AND APPARATUS FOR NONUNIFORMITY CORRECTION OF IR FOCAL PLANES

BAE Systems Information a...

1. An infrared (IR) sensor comprising:a front window operable to allow environmental IR light to pass therethrough;
at least one optical lens defining an optical path behind the front window;
an IR light emitting diode (LED) set apart from the optical path and behind the front window, wherein the IR LED is operable to project LED IR light into the optical path;
a focal plane array (FPA) operable to detect environmental IR light and LED IR light; and
a processor operationally connected to the FPA, wherein the processor is configured to calculate an offset and account for the offset for at least one pixel in a plurality of pixels of the FPA.
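A common way to realize the claimed per-pixel offset calculation is to expose the FPA to the (nominally uniform) LED illumination, take each pixel's deviation from the mean response as its offset, and subtract that offset from scene frames. The sketch below assumes exactly this flat-field scheme; the function names and the use of a simple mean are illustrative assumptions, not taken from the patent text.

```python
# Per-pixel offset (nonuniformity) correction from a uniform LED frame.

def compute_offsets(led_frame):
    """Offset of each pixel relative to the mean response under LED light."""
    total = sum(sum(row) for row in led_frame)
    mean = total / (len(led_frame) * len(led_frame[0]))
    return [[px - mean for px in row] for row in led_frame]

def correct(frame, offsets):
    """Subtract the stored offsets from a scene frame, pixel by pixel."""
    return [[px - off for px, off in zip(frow, orow)]
            for frow, orow in zip(frame, offsets)]
```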

US Pat. No. 10,798,308

IMAGING CONTROL DEVICE, IMAGING APPARATUS, IMAGING CONTROL METHOD, AND IMAGING CONTROL PROGRAM

FUJIFILM Corporation, To...

1. An imaging control device comprising:a processor configured to perform consecutive imaging control for consecutively imaging a subject a plurality of times by an imaging sensor and emitting auxiliary light from a light emission device in an imaging period in which each of the plurality of times of the imaging is performed,
wherein the imaging periods of the plurality of times do not overlap,
an end timing of a light emission period of the auxiliary light started in the imaging period is later than an end timing of the imaging period,
of the two imaging periods adjacent in time series, the preceding imaging period is set as a first imaging period and the succeeding imaging period is set as a second imaging period, and
the processor further controls a light emission intensity of the auxiliary light in the light emission period started in the second imaging period or imaging sensitivity in the second imaging period based on the light emission intensity of the auxiliary light emitted in an overlapping period overlapping with the second imaging period in the light emission period started in the first imaging period.

US Pat. No. 10,798,307

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Sony Semiconductor Soluti...

1. An information processing device comprising:a display unit configured to display a captured image;
a plurality of light emitters, the light emitters including a white light emitter that emits white light and an infrared light emitter that emits infrared light;
an imaging sensor that captures an infrared light image produced using the infrared light, and that captures a visible light image produced using the white light; and
a controller that controls operation of the white light emitter and the infrared light emitter, wherein:
when an ambient illuminance is determined to be dark, and without an indication that a shutter activation has been inputted for a recording operation, the controller controls the light emitters such that the imaging sensor captures the infrared light image, and controls the display unit to display the captured image as a live view of the infrared light image,
when the ambient illuminance is determined to be dark, and upon an indication that the shutter activation has been inputted, the controller controls the white light emitter to emit white light and controls the infrared light emitter to stop emitting infrared light, in order for the imaging sensor to capture the visible light image for the recording operation, and
when capturing of the visible light image for the recording operation has finished, the controller controls the light emitters such that the imaging sensor captures the infrared light image, and controls the display unit to resume displaying the live view of the infrared light image.

US Pat. No. 10,798,306

INTERPOLATION BASED CAMERA MOTION FOR TRANSITIONING BETWEEN BEST OVERVIEW FRAMES IN LIVE VIDEO

HUDDLY AS, Oslo (NO)

1. A method of transitioning from a first scene having a first set of video settings, to a second scene in a primary video stream, where the primary video stream represents a sub-video image framing detected people within an overview video image represented by an overview video stream captured by a high-resolution video camera comprising a wide angle image sensor and a video processing device, the method comprising:detecting people and positions of people within the overview video image by means of machine learning supported by a convolutional neural network,
when detecting a change of people or positions of people within the overview video stream requiring a different framing according to the detected change, then
calculating a second set of settings for the second scene adjusting the framing according to the change of people or positions of people within the overview video stream;
selecting, based on the first set of settings and the second set of settings, one of a set of predefined transition schemes to use in the transitioning; and
interpolating video frames with intermediate video settings in the primary video stream from the first scene to the second scene according to the selected transition scheme,
where the set of predefined transition schemes are different parametric equations controlling the intermediate video settings as a function of time.
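The "predefined transition schemes as parametric equations of time" can be illustrated with a couple of standard easing curves. This is only a sketch: the scheme names, the settings dictionary, and the choice of linear and cosine ease-in-out curves are assumptions for illustration, not the schemes defined in the patent.

```python
import math

# Two illustrative parametric transition schemes over normalized time t in [0, 1].
SCHEMES = {
    "linear": lambda t: t,
    "ease_in_out": lambda t: 0.5 - 0.5 * math.cos(math.pi * t),
}

def interpolate_settings(first, second, t, scheme="ease_in_out"):
    """Blend two sets of framing settings (e.g. pan/tilt/zoom) at time t
    using the selected transition scheme."""
    w = SCHEMES[scheme](t)
    return {k: first[k] + w * (second[k] - first[k]) for k in first}
```

Selecting between schemes based on the first and second settings (as the claim does) then amounts to picking a key into a table like `SCHEMES`.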

US Pat. No. 10,798,305

CONTROL APPARATUS, IMAGING SYSTEM, CONTROL METHOD, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. A control apparatus comprising:a display control unit configured to control a display unit to
display a display screen having a first region and a second region which is part of the first region,
display, in the first region, at least a first partial image of a predetermined image, the predetermined image indicating a region where an external imaging apparatus can capture by changing an imaging region which is captured by the external imaging apparatus, and the predetermined image being based on an image captured by the external imaging apparatus, and
display, in the second region, a second partial image of the predetermined image, the second partial image being a part of the first partial image;
a change control unit configured to change a position of the first partial image in the predetermined image to be displayed in the first region of the display screen by the display control unit, in accordance with a predetermined operation; and
an output unit configured to output an instruction to cause the external imaging apparatus to capture a region corresponding to the second partial image displayed in the second region after the change control unit has changed the position of the first partial image in the predetermined image to be displayed in the first region,
wherein a relative position of the first region is fixed with respect to the display screen and does not move while the change control unit is changing the position of the first partial image in the predetermined image to be displayed in the first region, and
wherein a relative position of the second region is fixed with respect to the display screen and does not move while the change control unit is changing the position of the first partial image in the predetermined image to be displayed in the first region.

US Pat. No. 10,798,304

OPTICAL UNIT WITH SHAKE CORRECTION FUNCTION INCLUDING CENTER-OF-GRAVITY POSITION ADJUSTING MEMBER AND METHOD FOR FIXING CENTER-OF-GRAVITY POSITION ADJUSTING MEMBER

NIDEC SANKYO CORPORATION,...

1. An optical unit with a shake correction function comprising a center-of-gravity position adjusting member, the optical unit with the shake correction function comprising:a movable member comprising a lens barrel member configured to hold an optical element, an imaging element configured to receive a subject light to be imaged by the optical element, and a holder made of a resin material to which either one of a coil and magnet comprised in a swing-driving mechanism is attached;
a fixed member to which another one of the coil and magnet comprised in the swing-driving mechanism is attached, the fixed member being configured to support the movable member via a swing-supporting mechanism in such a manner that the movable member is able to swing; and
the center-of-gravity position adjusting member attached to a tubular part of the holder that surrounds an outer circumference of the lens barrel member, the center-of-gravity position adjusting member being configured to adjust a position of a center of gravity of the movable member, wherein
the tubular part comprises a subject-side end part of a predetermined length in an axial direction and a stepped part, the subject-side end part being formed to have a small outside diameter, the stepped part being formed at a position corresponding to the predetermined length away from a subject-side end surface,
the center-of-gravity position adjusting member is in an annular shape having an inner circumferential surface to be engaged with an outer circumference of a small-diameter part, which is the small outside diameter of the tubular part,
the inner circumferential surface of the center-of-gravity position adjusting member is formed to surround the small-diameter part in a range shorter than the predetermined length along the axial direction when a bottom end part of the inner circumferential surface of the center-of-gravity position adjusting member is in contact with the stepped part, and
a part of the small-diameter part that is not surrounded by the inner circumferential surface of the center-of-gravity position adjusting member is bent from an axial center side to an outer circumference side of the tubular part, so that the center-of-gravity position adjusting member is fixed to the tubular part.

US Pat. No. 10,798,303

TECHNIQUES TO COMPENSATE FOR MOVEMENT OF SENSORS IN A VEHICLE

TUSIMPLE, INC., San Dieg...

1. A method, comprising:receiving, from a first set of sensors, a first set of sensor data of an area towards which a semi-trailer truck is being driven, wherein the first set of sensors are located on a roof of a cab of the semi-trailer truck;
receiving, from a second set of sensors, a second set of sensor data of the area, wherein the second set of sensors are located on a hood of the semi-trailer truck;
receiving, from a height sensor, a measured value indicative of a height of a rear portion of the cab of the semi-trailer truck relative to a chassis of the semi-trailer truck, wherein the height sensor is located at the rear portion of the cab;
determining, based on the measured value, a first correction value for the first set of sensor data and a second correction value for the second set of sensor data; and
compensating for movements of the first set of sensors and the second set of sensors by generating a first set of compensated sensor data and a second set of compensated sensor data,
wherein the first set of compensated sensor data is generated by adjusting the first set of sensor data based on the first correction value and
wherein the second set of compensated sensor data is generated by adjusting the second set of sensor data based on the second correction value.
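One plausible geometry behind the claim is that the rear-cab height reading implies a pitch of the cab, and each sensor point is rotated back by that pitch. The sketch below assumes exactly this small model; the lever-arm geometry, axis convention, and function names are illustrative assumptions, not details from the patent.

```python
import math

def pitch_from_height(height_delta, cab_length):
    """Pitch angle (radians) implied by a height change at the rear of the
    cab, treating cab_length as the lever arm."""
    return math.atan2(height_delta, cab_length)

def compensate_point(x, z, pitch):
    """Rotate a sensor point (x forward, z up) by -pitch about the lateral
    axis to undo the cab's tilt."""
    c, s = math.cos(-pitch), math.sin(-pitch)
    return (c * x + s * z, -s * x + c * z)
```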

US Pat. No. 10,798,302

METHOD OF CAPTURING BASED ON USAGE STATUS OF ELECTRONIC DEVICE AND RELATED PRODUCTS

GUANGDONG OPPO MOBILE TEL...

1. A method of capturing based on a usage status of an electronic device, applicable to the electronic device comprising a gravity sensor, a display screen, a control circuit, and a camera, the display screen having a display region and a non-display region, the camera being located in the non-display region, the method comprising:determining the usage status of the electronic device, the usage status being configured to indicate a gesture of the electronic device when the electronic device is held, the usage status comprising a portrait-mode-upright usage status and a portrait-mode-inverted usage status, the non-display region where the camera is located being located at the top side of the display region when the electronic device is in the portrait-mode-upright usage status, and the non-display region where the camera is located being located at the bottom side of the display region when the electronic device is in the portrait-mode-inverted usage status;
determining a target zoom ratio of the camera when the electronic device is in the portrait-mode-inverted usage status; and
capturing an image based on the target zoom ratio when the electronic device is in the portrait-mode-inverted usage status.

US Pat. No. 10,798,301

PANORAMIC IMAGE MAPPING METHOD

PEKING UNIVERSITY SHENZHE...

1. A panoramic image mapping method to map a spherical surface corresponding to panoramic image or video A into a two-dimensional plane image or video B, mapping formats of the panoramic image or video A including multi-channel camera acquired panoramic image or video, so as to ameliorate the oversampling of the panoramic image or video in high-latitude areas and reduce the bit rate required for coding the panoramic image and video; comprising: first, dividing the spherical surface into three areas based on the latitudes of the spherical surface, denoted as Area I, Area II, and Area III, respectively; mapping the three areas to a square plane I′, a rectangular plane II′, and a square plane III′, respectively; then, splicing the planes I′, II′ and III′ into a plane, wherein the resulting plane is the two-dimensional image or video B;wherein Area I corresponds to the area with a latitude of −90°~Z1 on the spherical surface, Area II corresponds to the area with a latitude of Z1~Z2 on the spherical surface, and Area III corresponds to the area with a latitude of Z2~90° on the spherical surface; the values of the parameters Z1, Z2 are autonomously set and satisfy the condition: −90°≤Z1≤Z2≤90°;
the square plane I′ has a resolution of WI×WI, the rectangular plane II′ has a resolution of WII×HII, and the square plane III′ has a resolution of WIII×WIII; the values of the parameters WI, WII, HII, WIII are autonomously set;
the mapping method comprises steps of:
1) for each pixel point in the square plane I′, computing its corresponding spherical coordinates (longitudes and latitudes) based on its coordinates (X, Y) in the plane I′; then, taking the pixel value at a corresponding position on the spherical surface (or obtaining a corresponding pixel value of a surrounding pixel by interpolation) as the pixel value of the pixel point (X, Y) in the plane I′, wherein the computing the corresponding spherical coordinates Coordinate based on the coordinates (X, Y) of the pixel point in the plane I′ comprises steps of:
1.1) computing the vertical distance and the horizontal distance from the point to the center of the square plane I′, and taking the larger one, denoted as m;
1.2) the square plane I′ may comprise a plurality of concentric squares; in this step, computing the distance from the point to the 0th point on the concentric square where the point is located, denoted as n, wherein the position of the 0th point may be randomly selected, and the distance from the pixel point to the 0th point on the concentric square where the pixel point is located is computed in the clockwise or counterclockwise manner;
1.3) computing the latitude and the longitude corresponding to the point with coordinates (X, Y) in the square plane I′ based on n and m, obtaining the corresponding coordinates Coordinate on the spherical surface;
2) for each pixel point in the rectangular plane II′, computing its corresponding spherical coordinates (including longitudes and latitudes) based on its coordinates (X, Y) in the plane II′; then, taking the pixel value at a corresponding position on the spherical surface, or obtaining a corresponding pixel value of a surrounding pixel by interpolation, as the pixel value of the pixel point (X, Y) in the plane II′;
3) for each pixel point in the square plane III′, computing its corresponding spherical coordinates (including longitudes and latitudes) based on its coordinates (X, Y) in the plane III′; then, taking the pixel value at a corresponding position on the spherical surface (or obtaining a corresponding pixel value of a surrounding pixel by interpolation) as the pixel value of the pixel point (X, Y) in the plane III′, wherein the computing the corresponding spherical coordinates Coordinate based on the coordinates (X, Y) of the pixel point in the plane III′ comprises steps of:
3.1) computing the vertical distance and the horizontal distance from the point to the center of the square plane III′, and taking the larger one, denoted as m;
3.2) the square plane III′ may comprise a plurality of concentric squares; computing the distance from the point to the 0th point on the concentric square where the point is located, denoted as n, wherein the position of the 0th point may be randomly selected;
3.3) computing the latitude and the longitude corresponding to the point with coordinates (X, Y) in the square plane III′ based on n and m;
4) splicing the planes I′, II′, and III′ into a plane based on the values of WI, WII, HII, and WIII.
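The (m, n) construction in steps 1.1)–1.2) can be sketched concretely: m is the Chebyshev distance from the pixel to the center of the square plane (selecting which concentric square the pixel lies on), and n is the position along that square measured from an arbitrary 0th point. In the sketch below the 0th point is fixed at the ring's top-left corner and the walk is clockwise; both choices, and the function name, are illustrative assumptions (the claim allows the 0th point and direction to be chosen freely).

```python
def square_ring_coords(x, y, w):
    """For pixel (x, y) in a w x w square plane, return (m, n): m identifies
    the concentric square (max of horizontal/vertical distance to center),
    n is the clockwise distance along that square from its top-left corner."""
    c = (w - 1) / 2.0
    dx, dy = x - c, y - c
    m = max(abs(dx), abs(dy))
    left = top = c - m                    # top-left corner of this ring
    side = 2 * m
    if y == top and side > 0:             # top edge, walking right
        n = x - left
    elif x == left + side:                # right edge, walking down
        n = side + (y - top)
    elif y == top + side:                 # bottom edge, walking left
        n = 2 * side + (left + side - x)
    else:                                 # left edge, walking up
        n = 3 * side + (top + side - y)
    return m, n
```

Step 1.3) would then map (m, n) to a latitude (from m) and a longitude (from n's fraction of the ring's perimeter).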

US Pat. No. 10,798,300

METHOD AND DEVICE FOR UNFOLDING LENS IMAGE INTO PANORAMIC IMAGE

SHENZHEN PISOFTWARE TECHN...

1. A method for expanding a lens image into a panoramic image, comprising the following steps of: step A,
step A1, preparing an original mesh model;
step A2, preparing a lens image expanded into panorama for reference;
step A3, assigning the lens image expanded into panorama for reference in the step A2 to the original mesh model in the step A1;
step A4, calculating, according to an orientation parameter, an angle of view, a distortion parameter, an image eccentricity parameter and a displacement parameter of the lens when the lens image expanded into panorama for reference in step A3 is taken, new mapping coordinate values of each corner of the original mesh model, transforming the original mapping coordinate values of each corner of the original mesh model into corresponding new mapping coordinate values, and generating the new mesh model; and
step A5, taking out the lens image expanded into panorama for reference in the step A3, and storing the new mesh model in the step A4; and
step B, preparing another lens image to be expanded, assigning the another lens image to be expanded to the new mesh model in step A, and rendering the new mesh model by a GPU to obtain and store a panoramic image.

US Pat. No. 10,798,299

DIGITAL PHOTOGRAPHING APPARATUS, METHODS OF CONTROLLING THE SAME, AND COMPUTER-READABLE STORAGE MEDIUM TO INCREASE SUCCESS RATES IN PANORAMIC PHOTOGRAPHY

Samsung Electronics Co., ...

1. A method of controlling a digital photographing apparatus, the method comprising:displaying a live-view image;
initiating panoramic photography;
capturing a successive series of images during the panoramic photography while the digital photographing apparatus is moving in a direction on an axis; and
providing position information of the digital photographing apparatus with respect to the axis while the digital photographing apparatus is moving,
wherein the position information of the digital photographing apparatus is displayed and overlaid on the live-view image, and
wherein the capturing of the successive series of images during the panoramic photography is performed using an electronic shutter.

US Pat. No. 10,798,298

FLICKER DETECTOR

Facebook Technologies, LL...

1. A method comprising, by a computing system:receiving one or more signals indicative of light intensities captured by one or more cameras, wherein the one or more signals are captured in a plurality of frames at a first frame rate;
calculating light intensity metrics for each frame of the plurality of frames based on the one or more signals captured in the respective frames;
detecting one or more peaks based on the light intensity metrics associated with one or more frames of the plurality of frames, wherein the one or more frames were captured in a predetermined time period;
determining a likelihood of perceptible flicker based on the detected one or more peaks; and
causing the one or more cameras to capture frames at a second frame rate in response to a determination that the likelihood of perceptible flicker exceeds a predetermined threshold.
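The claimed peak-based flicker decision can be sketched as counting local maxima in the per-frame light-intensity metrics over a recent window and comparing the resulting peak density against a threshold. Everything below is an assumption for illustration: the simple three-point peak test and the density-as-likelihood heuristic are one plausible reading, not the patented algorithm.

```python
def detect_flicker(metrics, window, peak_threshold, likelihood_threshold):
    """Return (flicker_likely, peak_indices) from per-frame intensity metrics.

    A frame is a peak if its metric exceeds both neighbors and peak_threshold;
    the likelihood is the fraction of recent frames that are peaks."""
    recent = metrics[-window:]
    peaks = [i for i in range(1, len(recent) - 1)
             if recent[i] > recent[i - 1]
             and recent[i] > recent[i + 1]
             and recent[i] >= peak_threshold]
    likelihood = len(peaks) / max(len(recent), 1)
    return likelihood > likelihood_threshold, peaks
```

When the first element of the result is true, the system would switch the cameras to the second frame rate.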

US Pat. No. 10,798,297

SYSTEMS, METHODS, AND DEVICES FOR UNMANNED VEHICLE DETECTION

DIGITAL GLOBAL SYSTEMS, I...

1. An apparatus for detecting unmanned aerial vehicles in a radio frequency (RF) environment, comprising:at least one RF receiver, an RF analytics module, a direction finding (DF) module, and a video analytics module;
wherein the apparatus is in network communication with at least one video sensor;
wherein the at least one video sensor is configured to capture images of the RF environment and stream video data to the apparatus;
wherein the at least one RF receiver is configured to receive RF data and generate fast Fourier transform (FFT) data based on the RF data;
wherein the RF analytics module is configured to identify at least one signal based on the FFT data;
wherein the DF module is configured to measure a direction from which the at least one signal is transmitted;
wherein the video analytics module is configured to analyze the video data, thereby creating analyzed video data;
wherein the video analytics module is configured to identify at least one unmanned aerial vehicle to which the at least one signal is related based on the analyzed video data, the RF data, and the direction from which the at least one signal is transmitted; and
wherein the apparatus is configured to control the at least one video sensor based on the direction from which the at least one signal is transmitted.

US Pat. No. 10,798,296

CAMERA SYSTEM FOR DETECTING CODINGS

IFM ELECTRONIC GMBH, Tet...

1. A method for operating a stationary camera system for detecting codings applied to subjects, the method comprising:moving the subjects past the camera system;
determining positions of contour points of the codings in relation to edges of a search zone of the camera system; and
determining a reading reliability of the search zone from a distance of the contour points from the edges of the search zone by detecting the codings of several of the subjects brought into the search zone.

US Pat. No. 10,798,295

ELECTRONIC DEVICE, IMAGE CAPTURE METHOD, AND CONTROL DEVICE

SHARP KABUSHIKI KAISHA, ...

1. An electronic device comprising:at least one image capture device; and
at least one control device configured to carry out the following:
detection of one or more objects in an image obtained from the at least one image capture device;
image capture operation determination processing in which (i) a first subject is determined from among the one or more objects, without a user designating an object to be the first subject, and (ii) timing of image capture is determined in accordance with a state of the first subject; and
autofocus processing in which (i) a second subject is determined from among the one or more objects, without the user designating an object to be the second subject, and (ii) the at least one image capture device is controlled to focus on the second subject,
the autofocus processing being commenced before processing to determine the first subject in the image capture operation determination processing finishes,
in a case where the first subject is determined before the second subject is determined, the at least one control device determining the first subject to also be the second subject.

US Pat. No. 10,798,294

REMOTELY CONTROLLING ACCESS TO A DIGITAL CAMERA

NORTONLIFELOCK INC., Tem...

1. A computer-implemented method for remotely controlling access to a digital camera, at least a portion of the method being performed by a monitoring computer device comprising one or more processors, the method comprising:defining, by the monitoring computer device, at least one parameter associated with a capture of media by the digital camera coupled to a monitored computer device;
receiving, at the monitoring computer device, permission from the monitored computer device to control access to the digital camera;
determining, by the monitoring computer device, that the digital camera is capturing media;
determining, by the monitoring computer device, that the at least one parameter is present in the captured media;
determining, by the monitoring computer device, that the presence of the at least one parameter is not approved; and
sending, from the monitoring computer device, an instruction to the monitored computer device to disable use of the digital camera coupled to the monitored computer device for a pre-determined period of time based on determining that the presence of the at least one parameter is not approved.

US Pat. No. 10,798,293

IMAGE PICKUP APPARATUS AND IMAGE PICKUP METHOD

OLYMPUS CORPORATION, Tok...

1. An image pickup apparatus comprising:an image pickup unit that picks up an image of a specific range and acquires an image signal;
a storage unit that stores a plurality of target image dictionaries respectively corresponding to a plurality of types of targets;
an inference processor that determines a type of a specific target on the basis of the image signal acquired by the image pickup unit and the plurality of target image dictionaries stored in the storage unit, and selects one of the target image dictionaries corresponding to the determined type of the specific target from the plurality of target image dictionaries; and
an image pickup controller that performs image pickup control on the basis of the image signal acquired by the image pickup unit and the target image dictionary selected by the inference processor,
wherein the inference processor performs matching processing between a picture signal indicating the specific target included in the image signal acquired by the image pickup unit and the selected target image dictionary, and sets the image pickup target if the specific target matches, and does not switch the target image dictionary to another target image dictionary for a predetermined time period if the specific target does not match, and
wherein, after the predetermined time period has elapsed, if the specific target does not match, the inference processor switches the target image dictionary to another target image dictionary.

US Pat. No. 10,798,292

TECHNIQUES TO SET FOCUS IN CAMERA IN A MIXED-REALITY ENVIRONMENT WITH HAND GESTURE INTERACTION

Microsoft Technology Lice...

1. A method performed by a head-mounted display (HMD) device to optimize an auto-focus implementation, comprising:enabling auto-focus operations for a camera, in the HMD device, that is configured to capture a scene in a local physical environment surrounding the HMD device over a field of view (FOV), wherein the camera is a member of a set of one or more sensors that are operatively coupled to the HMD device;
collecting data on a user's hands using the set of sensors; and
suppressing the auto-focus operations for the camera based on the collected data on the user's hands failing to satisfy one or more criteria of an auto-focus subsystem.

US Pat. No. 10,798,291

TASK-ASSISTANCE CONTROL APPARATUS AND TASK IMAGE CONTROL APPARATUS

OLYMPUS CORPORATION, Tok...

1. A task-image control apparatus that assists an operator performing a task while viewing a captured task image, the task-image control apparatus comprising:a task determination circuit that determines task details on the basis of the task image; and
a display-range setting circuit that sets, in accordance with the determined task details, a task-image range to be displayed, wherein
the task determination circuit detects a type of a tool used for the task from the task image and determines task details in accordance with the type of the tool; and
the task determination circuit compares the type of the tool detected from the task image with types of tools scheduled to be used in various steps of the task so as to determine a task step that corresponds to the task image.

US Pat. No. 10,798,290

IMAGE CAPTURING APPARATUS FOR CONTROLLING WHETHER TO EXECUTE FUNCTION ASSIGNED TO OPERATION MEMBER, AND CONTROL METHOD THEREOF

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus having a viewfinder, the apparatus comprising:a plurality of operating members to which functions are allowed to be assigned;
a detecting unit configured to detect when an eye of a user is close to the viewfinder; and
a control unit configured to, in a case where (a) an operating member operated while the detecting unit detects that the eye is close is an operating member which is disposed in a prescribed location and (b) a first function is assigned to the operating member, not to execute the first function,
wherein the first function is (a) to change a state to a lock state in which parameters set as shooting conditions cannot be changed, or (b) to switch a display state between displaying and hiding a menu on a display unit disposed outside of the viewfinder,
wherein the operating member is a member different from the display unit and is a member disposed at a position where a finger on a hand of the user that holds a grip provided in the apparatus can reach, and
wherein, in a case where (a) an operating member operated while the detecting unit detects that the eye is close is an operating member which is disposed in the prescribed location and (b) a function assigned to the operating member is a second function different from the first function, the control unit executes the second function.

US Pat. No. 10,798,289

DEVICE AND METHOD FOR PHOTO AND VIDEO CAPTURE

Snap, Inc., Santa Monica...

1. An image capture device comprising:an image sensor capable of being configured to capture an image or video;
a touchscreen display configured to preview image signals from the image sensor;
a single user input element comprising a touch-sensitive area of the touchscreen display and capable of detecting user activity at the single user input element;
a processor coupled with the image sensor and configured to perform operations comprising:
displaying, on the touch screen display, a preview of image signals from the image sensor;
detecting a state change of the single user input element from a first state to a second state based on a detected user activity, the detected user activity comprising a screen touch on the touch-sensitive area of the touchscreen display during the display of the preview of image signals from the image sensor;
determining a duration of the second state;
providing a first distinct audio signal indicating that an image capture is triggered, based on the duration of the second state being less than a time interval; and
providing a second distinct audio signal indicating that a video recording is triggered, based on the duration of the second state being greater than the time interval, wherein the video recording is captured until a state change of the single user input element from the second state to the first state is detected.

US Pat. No. 10,798,288

MULTI-CAMERA ELECTRONIC DEVICE AND CONTROL METHOD THEREOF

Altek Semiconductor Corp....

1. A method of controlling an electronic device having a plurality of cameras, wherein the method comprises the following steps:detecting a scene by using at least one of the cameras to generate photographing analysis information, wherein the photographing analysis information is associated with auto-exposure, wherein the cameras comprise a first camera and at least one second camera, wherein the first camera detects the scene to generate first photographing analysis information, wherein the at least one second camera detects the scene to generate second photographing analysis information, and wherein the first photographing analysis information comprises a first exposure time and a first luminance gain value;
collecting all the photographing analysis information and generating joint photographing information comprising a joint target through a communication process among all the cameras;
generating an individual photographing parameter of each of the cameras according to the joint photographing information comprising:
setting a second exposure time of the second photographing analysis information according to the first exposure time and a calibration parameter pre-stored in the second camera;
setting a second luminance gain value of the second photographing analysis information according to the first luminance gain value and a luminance gain ratio between the second camera and the first camera, wherein the luminance gain ratio is pre-stored in the second camera; and
setting the second exposure time and the second luminance gain value as the individual photographing parameter of the corresponding second camera; and
controlling each of the cameras to capture an image of the scene by using the respective individual photographing parameter so as to generate a corresponding output image.
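The exposure-matching step of this claim can be sketched as below. The multiplicative combination of the calibration parameter and the luminance gain ratio is an assumption; the claim only says the second camera's values are set "according to" these quantities.

```python
def second_camera_parameters(first_exposure_time: float,
                             first_luminance_gain: float,
                             calibration_parameter: float,
                             luminance_gain_ratio: float) -> tuple:
    """Derive the second camera's individual photographing parameter from
    the first camera's photographing analysis information."""
    second_exposure_time = first_exposure_time * calibration_parameter
    second_luminance_gain = first_luminance_gain * luminance_gain_ratio
    return second_exposure_time, second_luminance_gain
```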

US Pat. No. 10,798,287

ANALYSIS APPARATUS AND FOCUSING METHOD

ARKRAY, Inc., Kyoto (JP)...

1. An analysis apparatus comprising:a flow cell which includes a flow passage for a liquid containing a tangible component;
a camera configured to pick up images of the liquid flowing through the flow passage;
an adjuster configured to adjust a relative position of the flow cell with respect to the camera in relation to an optical axis direction of the camera and/or a direction in which the liquid flows through the flow passage; and
a controller configured to judge focusing states of pieces of the tangible component in a cutout image obtained by cutting out an image including the tangible component from a plurality of images of the liquid picked up by the camera at a plurality of positions at which the relative position differs, such that the controller is configured to determine an image pickup position of the flow cell in at least one of the optical axis direction and the direction in which the liquid flows, on the basis of a number of the pieces of the tangible component judged to be in the focusing states,
wherein the controller is further configured to generate the cutout image which is an image including one piece of the tangible component and a background existing therearound included in the image picked up by the camera, a blur image which is obtained by applying a blur process to the cutout image, a mask image which is obtained by applying a mask process to the background included in the cutout image, and a differential image which is based on differences between pixel values of the cutout image and pixel values of the blur image, and the controller is further configured to judge that the tangible component included in the cutout image is in the focusing state if a number of pixels each having a pixel value of not less than a threshold value existing in a range in which masking is not caused when the differential image is masked with the mask image is not less than a predetermined number.
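The focusing-state judgment at the end of this claim can be sketched as follows: count the pixels of the differential image (cutout minus blur) that reach the threshold and are not masked out, and judge the component in focus if the count reaches a predetermined number. Representing images as flat pixel lists is an assumption for brevity.

```python
def is_in_focus(cutout, blur, mask, threshold, min_count):
    """Judge the focusing state of one tangible component. `mask` entries
    are True where the background is masked (excluded from counting)."""
    count = 0
    for c, b, masked in zip(cutout, blur, mask):
        diff = abs(c - b)                 # differential image pixel value
        if not masked and diff >= threshold:
            count += 1
    return count >= min_count
```

A sharp component differs strongly from its blurred version, so its differential pixels exceed the threshold; a defocused one does not.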

US Pat. No. 10,798,286

CONTRAST DETECTION AUTOFOCUS USING ADAPTIVE STEP

SZ DJI TECHNOLOGY CO., LT...

1. A method for moving an optical device relative to an image sensor to focus an image, comprising:a. receiving a current image acquired by the image sensor when the optical device is at a current position relative to the image sensor;
b. obtaining a current contrast value of the current image and a current contrast variation rate at the current position;
c. determining whether the current contrast variation rate meets a first threshold criterion;
d. moving the optical device from the current position to a next position in accordance with the determination result according to a step size; and
e. repeating the steps a to d until a next contrast variation rate for a next image meets a second threshold criterion, the next image being acquired when the optical device is at the next position,
wherein the step size is determined by:
generating a curve based on a plurality of contrast values including the current contrast value;
determining a position of the optical device relative to the image sensor at which the curve has a peak value; and
determining the step size using the position of the optical device relative to the image sensor at which the curve has a peak value.
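The step-size determination in this claim can be sketched as below. A parabola through the three most recent (position, contrast) samples is an assumed curve model; the claim only requires a curve generated from a plurality of contrast values.

```python
def adaptive_step(positions, contrasts, current_position):
    """Fit a parabola through the last three (position, contrast) samples
    and return the signed step from the current position to its vertex."""
    (x1, x2, x3), (y1, y2, y3) = positions[-3:], contrasts[-3:]
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    peak_position = -b / (2 * a)          # vertex of the fitted parabola
    return peak_position - current_position
```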

US Pat. No. 10,798,285

IMAGE PROCESSING APPARATUS, METHOD, AND MEDIUM TO PERFORM ALIGNMENT OF A PLURALITY OF IMAGES BASED ON CONVERSION COEFFICIENT WHILE FINELY MOVING AN IN-FOCUS POSITION

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:at least one memory configured to store instructions; and
at least one processor in communication with the at least one memory and configured to execute the instructions to:
detect feature points from at least a part of a plurality of images having different in-focus positions;
calculate a conversion coefficient of a second image in the plurality of images with respect to a first image in the plurality of images based on the feature points detected from the first image and the second image;
calculate the conversion coefficient of the second image with respect to the first image based on the feature points detected from the first image, the second image, and a third image in the plurality of images in a case where the conversion coefficient of the second image with respect to the first image, which is calculated using the feature points detected from the first image and the second image, does not satisfy a predetermined condition; and
perform combining processing on the at least a part of the plurality of images based on the conversion coefficient of the second image with respect to the first image.
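The fallback control flow of this claim can be sketched as follows. `estimate` and `satisfies_condition` are assumed callables standing in for the unspecified feature-point estimation and the predetermined condition.

```python
def conversion_coefficient(first_img, second_img, third_img,
                           estimate, satisfies_condition):
    """Estimate the second image's conversion coefficient from feature
    points of the first and second images; re-estimate using the third
    image as well when the first estimate fails the condition."""
    coeff = estimate([first_img, second_img])
    if not satisfies_condition(coeff):
        coeff = estimate([first_img, second_img, third_img])
    return coeff
```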

US Pat. No. 10,798,284

IMAGE PROCESSING APPARATUS AND METHOD

Sony Corporation, Tokyo ...

1. An image processing apparatus comprising:an image processing block that is electrically connectable to an imaging device in a manner that permits the image processing block to electronically receive, from the imaging device, an input image,
wherein the image processing block is configured to:
apply, when the image processing block converts the input image into an intermediately processed image, object area information that identifies areas of the input image where objects are situated in the input image,
apply, when the image processing block converts the intermediately processed image into an object frame image, designated focus position information that identifies an initial point in the input image, and
position, in a main object when the image processing block determines that the initial point is in a background area, the initial point,
wherein:
the background area is where a background is situated in the input image,
one of the areas is the background area, and
one of the objects is the background.
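The repositioning rule in this claim can be sketched as below. The `area_of` callable and the choice of the main object's center as the new point are illustrative assumptions.

```python
def resolve_focus_point(initial_point, area_of, main_object_center):
    """If the designated initial point falls in the background area,
    reposition it into the main object; otherwise keep it."""
    if area_of(initial_point) == "background":
        return main_object_center
    return initial_point
```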

US Pat. No. 10,798,283

INFORMATION APPARATUS, CONTROL METHOD, AND COMPUTER READABLE RECORDING MEDIUM DETERMINING STATE INFORMATION OF THE INFORMATION APPARATUS ACROSS A POWER SWITCH AND TRANSMITTING SUCH STATE INFORMATION TO AN EXTERNAL, PORTABLE INFORMATION TERMINAL

Olympus Corporation, Tok...

1. An information apparatus for communicating with a portable information terminal, the information apparatus comprising:a power switch configured to switch a power state of the information apparatus from an on state to an off state;
a memory configured to store use history information recording a use history of a manually set parameter in the information apparatus; and
a processor comprising hardware, the processor being configured to:
determine state information regarding a state of the information apparatus when the power state is switched, by the power switch, from the on state to the off state;
determine whether the manually set parameter has been changed before and after the power state is switched from the on state to the off state, based on the use history information of a manually set parameter in the information apparatus; and
transmit, to the portable information terminal, the state information indicating that the set parameter in the information apparatus has been changed when the processor determines that the set parameter has been changed while keeping the processor energized after the power state is switched from the on state to the off state.
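The change-detection step of this claim can be sketched as follows. Representing the use history as name-to-value dictionaries and the message as a dictionary are assumptions; the claim leaves both formats open.

```python
def state_message(history_before, history_after):
    """Compare the use history recorded before and after the power
    switch-off and report whether a manually set parameter changed."""
    changed = {name for name in history_before
               if history_after.get(name) != history_before[name]}
    changed |= set(history_after) - set(history_before)   # newly added parameters
    if changed:
        return {"state": "parameters_changed", "parameters": sorted(changed)}
    return {"state": "unchanged"}
```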

US Pat. No. 10,798,282

MINING DETECTION SYSTEM AND METHOD

GE GLOBAL SOURCING LLC, ...

1. A detection system comprising:a housing configured to be worn by a user or carried on a mobile device;
at least one sensing circuit selectively coupled to the housing that is configured to sense or detect one or more of electromagnetic radiation, ionizing radiation, a determined liquid analyte, a determined gaseous analyte, a determined powdered analyte concentration, a level of oxygen below a determined threshold value, a data signal strength, a determined level of magnetic flux, a temperature, or a pressure;
at least one communication device that is configured to one or more of generate or communicate a signal in response to the sensing or detecting by the at least one sensing circuit; and
an interference circuit configured to reduce or eliminate interference by the sensing circuit, wherein the interference circuit is an interference reduction device embedded in the housing and communicatively coupled to a microprocessor, the interference reduction device configured to discharge an accumulated charge on the detection system to reduce electromagnetic interference from sources other than a proximate active voltage source and the microprocessor is further configured to cause the interference reduction device to discharge the accumulated charge after a determined period.

US Pat. No. 10,798,281

APPARATUS AND METHOD FOR DISABLING A DRIVER FACING CAMERA IN A DRIVER MONITORING SYSTEM

Bendix Commercial Vehicle...

1. A driver monitoring controller for a vehicle comprising:a camera communications port for receiving video images of a driver and transmitting control messages to an associated driver facing camera;
a control input for receiving a signal from a manually operable switch indicating that the driver desires to disable only the associated driver facing camera;
a vehicle communications port for receiving and transmitting messages; and
control logic in communication with the camera communications port, the control input and the vehicle communications port, wherein the control logic enables the associated driver facing camera upon power up of the vehicle and subsequently disables only the driver facing camera by transmitting a control message to disable the driver facing camera at the camera communications port in response to receiving the signal at the control input indicating the driver desires to disable only the driver facing camera and receiving a vehicle state message at the vehicle communications port indicating a vehicle state does not meet a predetermined condition, wherein the predetermined condition is at least one of a vehicle speed exceeding a minimum speed, a detection of a forward object, and the vehicle being outside a lane of travel; and wherein the control logic reenables the associated driver facing camera in response to receiving a vehicle state message indicating the vehicle state meets the predetermined condition.
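The enable/disable logic of this claim can be sketched as below: the camera is disabled only when the driver requests it and none of the predetermined vehicle-state conditions hold, and is re-enabled when any condition is met. The minimum-speed value is an assumed placeholder.

```python
def camera_enabled(driver_requested_disable: bool,
                   vehicle_speed: float,
                   forward_object_detected: bool,
                   outside_lane: bool,
                   min_speed: float = 10.0) -> bool:
    """Decide whether the driver facing camera stays enabled."""
    condition_met = (vehicle_speed > min_speed
                     or forward_object_detected
                     or outside_lane)
    if driver_requested_disable and not condition_met:
        return False   # disable only the driver facing camera
    return True        # enabled at power-up, or re-enabled by vehicle state
```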