US Pat. No. 10,972,813

ELECTRONIC DEVICES, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR DETECTING A TAG HAVING A SENSOR ASSOCIATED THEREWITH AND RECEIVING SENSOR INFORMATION THEREFROM

SONY NETWORK COMMUNICATIO...

1. A method of operating an electronic device, comprising:
detecting, using a tag reader circuit, a tag having a sensor associated with the tag, the tag being configured to communicate with the sensor to receive sensor information from the sensor and being further configured to transmit information over a defined distance using a short range wireless protocol via a communication link;
receiving, via the tag reader circuit, the sensor information transmitted by the tag over the communication link; and
sending the sensor information to an application server;
sending a message to the tag to change operational behavior of the tag responsive to the tag being placed in a bi-directional communication mode;
wherein the sensor information comprises authentication information that identifies a person; and
wherein the operational behavior comprises operation of the sensor associated with the tag and/or transmission behavior of the tag.
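The claimed flow above can be sketched in a few lines: a reader detects a sensor-bearing tag, forwards the sensor information, and, once the tag is in a bi-directional communication mode, sends it a message that changes its operational behavior. This is a minimal illustration only; the mode names, message fields, and `Tag` class are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the claimed tag behavior. Mode names ("broadcast",
# "bidirectional") and the report-interval field are illustrative assumptions.
class Tag:
    def __init__(self, sensor_info):
        self.sensor_info = sensor_info    # e.g. authentication info identifying a person
        self.mode = "broadcast"           # one-way until switched to bi-directional
        self.report_interval_s = 60       # example of changeable transmission behavior

    def receive(self, message):
        """Apply a behavior-change message; ignored unless bi-directional."""
        if self.mode != "bidirectional":
            return False                  # tag ignores messages in broadcast mode
        self.report_interval_s = message["report_interval_s"]
        return True

tag = Tag({"person_id": "badge-0042", "temp_c": 21.5})
ignored = tag.receive({"report_interval_s": 5})   # False: still broadcast-only
tag.mode = "bidirectional"
accepted = tag.receive({"report_interval_s": 5})  # True: behavior changed
```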

US Pat. No. 10,972,812

AUTOMATICALLY AND PROGRAMMATICALLY GENERATING CROWDSOURCED TRAILERS

ROKU, INC., Los Gatos, C...

1. A method comprising:
receiving interactions with streaming content performed by a plurality of users who consumed the content, wherein the interactions are associated with a landing frame of the content;
assigning a point value to each of the interactions, wherein the assigned point value corresponds to how long after a particular interaction the content is played prior to a subsequent interaction, wherein a longer post-interaction play time corresponds to a higher assigned point value;
identifying a plurality of windows of content within the streaming content, wherein each window comprises a plurality of frames;
accumulating the point values of the interactions for each of the landing frames within each of the plurality of windows;
selecting a particular one of the plurality of windows with a highest accumulated point value;
generating a trailer for the content based on the selected particular window; and
providing the content and the trailer.
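The scoring-and-selection steps claimed above can be sketched directly: each interaction earns points proportional to post-interaction play time, points accumulate per landing frame, and the fixed-size window with the highest total is chosen. A minimal sketch, assuming frame-indexed windows and a linear scoring rule, neither of which the claim specifies:

```python
# Hypothetical sketch of the claimed window scoring. The linear score and
# fixed window size are illustrative assumptions.
def score_interaction(post_play_seconds: float) -> float:
    """Longer play after an interaction -> higher point value (simplest monotone rule)."""
    return post_play_seconds

def select_trailer_window(interactions, window_size: int, num_frames: int):
    """interactions: list of (landing_frame, post_play_seconds)."""
    # Accumulate point values per landing frame.
    frame_points = {}
    for frame, post_play in interactions:
        frame_points[frame] = frame_points.get(frame, 0.0) + score_interaction(post_play)

    # Slide fixed-size windows over the content and total the contained points.
    best_start, best_total = 0, float("-inf")
    for start in range(0, num_frames - window_size + 1):
        total = sum(frame_points.get(f, 0.0) for f in range(start, start + window_size))
        if total > best_total:
            best_start, best_total = start, total
    return best_start, best_total

start, total = select_trailer_window(
    [(10, 120.0), (12, 30.0), (95, 5.0)], window_size=24, num_frames=120)
```

Here the window starting at frame 0 captures the heavily replayed frames 10 and 12 and wins with 150 points.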

US Pat. No. 10,972,811

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An image processing device, comprising:
a highlight extracted video creation unit configured to:
extract a highlight video from each video file of a plurality of video files based on position information associated with the plurality of video files, wherein the position information corresponds to a location of recordation of the plurality of video files; and
create a highlight extracted video based on the extraction of the highlight video from each video file of the plurality of video files;
a highlight extracted video storage unit configured to store the created highlight extracted video as one data file;
a highlight extraction setting unit configured to set a ratio of a volume of music in a background of the stored highlight extracted video and a volume of an actual sound associated with the stored highlight extracted video;
a highlight extracted video reproduction unit configured to reproduce the stored highlight extracted video on a display screen, wherein the reproduction is based on the set ratio of the volume of the music in the background of the highlight extracted video and the volume of the actual sound associated with the highlight extracted video; and
a highlight extracted video file output unit configured to output the reproduced highlight extracted video to a device connectable to a network, wherein the output of the reproduced highlight extracted video is based on a user input.
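The ratio-setting element above amounts to a mix between background music and the clips' actual sound. A minimal per-sample sketch, where the ratio value and sample representation are illustrative assumptions:

```python
# Hypothetical sketch of the claimed volume-ratio mixing. The per-sample
# linear blend is an illustrative assumption.
def mix_samples(music, actual, music_ratio: float):
    """Blend per-sample: music_ratio of music, the remainder of actual sound."""
    return [music_ratio * m + (1.0 - music_ratio) * a
            for m, a in zip(music, actual)]

mixed = mix_samples([1.0, 0.0], [0.0, 1.0], music_ratio=0.25)  # -> [0.25, 0.75]
```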

US Pat. No. 10,972,810

METHOD FOR GENERATING A COMPOSITION OF AUDIBLE AND VISUAL MEDIA

Lomotif Private Limited, ...

1. A method for distributing a composition of audio and visual media comprising:
receiving a selection of:
an audio track; and
a set of videos comprising a private video stored locally on a first computing device;
for each video in the set of videos, defining a video mask comprising a start point in the video and a duration;
generating a meta file comprising a pointer to the audio track, a pointer to each video in the set of videos, a set of video masks corresponding to the set of videos, and an order of the set of videos;
uploading a portion of the private video from the first computing device to a remote database, the portion of the private video:
characterized by a stored duration less than a private duration of the private video;
comprising a first sequence of frames of the private video defined by a video mask corresponding to the private video;
comprising a second sequence of frames of the private video immediately preceding the first sequence of frames and spanning a first buffer duration; and
comprising a third sequence of frames of the private video immediately succeeding the first sequence of frames and spanning a second buffer duration; and
loading videos in the set of videos and the audio track from the remote database onto a second computing device for replay at the second computing device according to the meta file.
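The upload step claimed above keeps only the masked span of the private video plus leading and trailing buffer spans. A minimal frame-index sketch, with illustrative durations (the claim does not fix units or buffer lengths):

```python
# Hypothetical sketch of the claimed partial upload: masked span plus
# buffers, clamped to the video. Frame indexing is an illustrative assumption.
def portion_to_upload(private_duration: int, mask_start: int, mask_length: int,
                      buffer_before: int, buffer_after: int):
    """Return (first_frame, last_frame_exclusive) of the portion to upload."""
    first = max(0, mask_start - buffer_before)                # leading buffer sequence
    last = min(private_duration,
               mask_start + mask_length + buffer_after)       # trailing buffer sequence
    return first, last

span = portion_to_upload(1000, 200, 100, 30, 30)  # -> (170, 330)
```

The stored span (160 frames here) stays well under the private video's full duration, matching the claim's "stored duration less than a private duration" condition.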

US Pat. No. 10,972,809

VIDEO TRANSFORMATION SERVICE

Amazon Technologies, Inc....

1. A computer-implemented method for transforming media content, comprising:
receiving, from a content provider, a source package associated with a content item and including a plurality of source files, each source file having a type of a plurality of predefined types, the plurality of types including: (1) a video type, (2) an audio track type, (3) a timed text type, (4) an interstitial video type, and (5) an image type;
determining a workflow for generating a delivery package based at least in part on the source package, the workflow defining parameters for an execution of one or more transformation modules of a set of transformation modules of a media transformation service (MTS);
generating, by the media transformation service, the delivery package based at least in part on executing the workflow and using the source package as input, the delivery package including a plurality of delivery files, the media transformation service executing the one or more transformation modules of the set of transformation modules to generate the plurality of delivery files based at least in part on transforming respective contents of one or more source files of the source package, the set of transformation modules including:
(1) an overlay module configured at least to overlay a timed text content of the timed text type or an image content of the image type on a video content of the video type,
(2) a stitching module configured at least to insert an interstitial video content of the interstitial video type at a location within the video content or remove another interstitial video content from the video content,
(3) a color space transformation module configured at least to perform at least one of transforming the video content from a first color space to a second color space or transforming the video content from a first display resolution to a second display resolution, and
(4) an audio muxing module configured at least to map an audio track content of the audio track type to the video content; and
providing the delivery package to a distribution service for distribution and presentation of the content item in a plurality of target markets.
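The workflow execution above is a pipeline: the workflow names transformation modules and their parameters, and the service threads the source package through them. A toy sketch where the module names, parameters, and string-tagging "transformations" are all illustrative assumptions, not the MTS's actual modules:

```python
# Hypothetical sketch of the claimed workflow execution. Modules here just
# tag a string so the pipeline order is visible; real modules transform media.
MODULES = {
    "overlay":     lambda pkg, p: {**pkg, "video": pkg["video"] + "+overlay"},
    "stitching":   lambda pkg, p: {**pkg, "video": pkg["video"] + "+interstitial"},
    "color_space": lambda pkg, p: {**pkg, "video": pkg["video"] + "->" + p["target"]},
    "audio_mux":   lambda pkg, p: {**pkg, "video": pkg["video"] + "+audio"},
}

def run_workflow(source_package: dict, workflow):
    """workflow: ordered list of (module_name, params) pairs."""
    package = dict(source_package)
    for name, params in workflow:
        package = MODULES[name](package, params)
    return package

delivery = run_workflow(
    {"video": "feature.mxf"},
    [("overlay", {}), ("color_space", {"target": "rec709"}), ("audio_mux", {})])
```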

US Pat. No. 10,972,808

EXTENSIBLE WATERMARK ASSOCIATED INFORMATION RETRIEVAL

SHARP KABUSHIKI KAISHA, ...

1. A method of receiving a content, the method comprising:
receiving the content containing one or more payloads in audio watermarks; and
decoding recovery data corresponding to one of the one or more payloads, wherein the recovery data is represented by a JavaScript Object Notation (JSON) schema, wherein
in a case that a content identifier is included in the recovery data, the JSON schema for a recovery file format includes the content identifier including one of properties (A) as a string, which includes (i) a type indicating entertainment identifier registry information and (ii) a content identification having a minimum length being set to 34 and a maximum length being set to 34, and (B) as an another string, which includes (i) a type indicating advertising identifier information and (ii) a content identification having a maximum length being set to 12.
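The length constraints in the claimed JSON schema (exactly 34 characters for an entertainment-identifier-registry string, at most 12 for an advertising-identifier string) can be checked in a few lines. The field names and type tags below are illustrative assumptions; only the length bounds come from the claim:

```python
# Hypothetical validator for the claimed content-identifier constraints.
# "contentId", "type", and "cid" are assumed field names, not the schema's.
def content_id_valid(recovery: dict) -> bool:
    cid = recovery.get("contentId")
    if cid is None:
        return True  # the claim only constrains the case where an identifier is included
    id_type, value = cid.get("type"), cid.get("cid", "")
    if id_type == "EIDR":    # entertainment identifier registry: min and max length 34
        return len(value) == 34
    if id_type == "Ad-ID":   # advertising identifier: max length 12
        return len(value) <= 12
    return False

ok = content_id_valid({"contentId": {"type": "EIDR", "cid": "0" * 34}})  # -> True
```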

US Pat. No. 10,972,807

DYNAMIC WATERMARKING OF DIGITAL MEDIA CONTENT AT POINT OF TRANSMISSION

DELUXE ONE LLC, Burbank,...

1. A computer-implemented dynamic watermarking system comprising:
a storage device configured to ingest and store media content thereon; and
one or more processors configured with instructions to
generate a first media file copy of the media content;
partition the first media file copy into a first plurality of sequential segments each having respective segment lengths defined by respective numbers of groups of pictures;
encode distinct forensic watermarks into each group of pictures in at least two segments of the first plurality of sequential segments of the first media file copy;
generate a second media file copy of the media content;
partition the second media file copy into a second plurality of sequential segments corresponding in number to the first plurality of sequential segments and each having respective segment lengths defined by respective numbers of groups of pictures, wherein the segment lengths of each of the second plurality of sequential segments are identical segment lengths to the segment lengths of corresponding segments of the first plurality of sequential segments in sequence;
encode distinct forensic watermarks into each group of pictures in at least two segments of the second plurality of sequential segments of the second media file copy;
store the first and second media file copies on the storage device;
receive a user request for transmission of the media file;
determine an identification of the user;
map a unique permutation of the media content by selecting a first subset of the first plurality of sequential segments with at least one watermark and a second subset of the second plurality of sequential segments with at least one different watermark to be combined in sequence; and
transmit the unique permutation of the media content to the user.
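The mapping step above is the classic A/B forensic-watermark construction: two differently watermarked copies are split into aligned segments, and a per-user choice of copy for each segment yields a unique sequence. A minimal sketch where bits of the user identifier drive the choice; the bit-driven selection rule is an illustrative assumption, as the claim only requires mapping a unique permutation:

```python
# Hypothetical sketch of the claimed permutation mapping: bit i of the user
# identifier selects segment i from copy A (0) or copy B (1).
def map_permutation(copy_a_segments, copy_b_segments, user_id: int):
    """Pick each segment from copy A or B according to the user id's bits."""
    assert len(copy_a_segments) == len(copy_b_segments)
    sequence = []
    for i in range(len(copy_a_segments)):
        bit = (user_id >> i) & 1
        sequence.append(copy_b_segments[i] if bit else copy_a_segments[i])
    return sequence

perm = map_permutation(["A0", "A1", "A2", "A3"], ["B0", "B1", "B2", "B3"],
                       user_id=0b0101)
# user 0b0101 -> bits 1,0,1,0 for segments 0..3 -> B0, A1, B2, A3
```

Because the two copies' corresponding segments have identical lengths (as the claim requires), any interleaving of them plays back seamlessly while still encoding the recipient's identity.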

US Pat. No. 10,972,806

GENERATING CUSTOMIZED GRAPHICS BASED ON LOCATION INFORMATION

Snap Inc., Santa Monica,...

1. A system comprising:
a messaging server system including:
an application server to:
receive from a first client device a location information including a location of the first client device, wherein a first user is associated with the first client device;
cause a status interface to be displayed on the first client device, wherein the status interface includes a plurality of locations that are within a predetermined distance from the location of the first client device, wherein the plurality of locations includes a first location;
receive a selection from the first client device of the first location via the status interface;
store the first location in a location database associated with the first user, wherein the location database includes:
locations previously selected by the first client device via the status interface, locations associated with media content items received from the first client device, or locations associated with the location information received from the first client device,
generate a country selectable item that includes a number of countries included in the location database associated with the first user, a city selectable item that includes the number of cities included in the location database associated with the first user, and a plurality of timeline selectable items that are organized in chronological order of locations in the location database associated with the first user; and
cause a passport interface to be displayed on the first client device, wherein the passport interface includes a plurality of selectable items including the country selectable item, the city selectable item, and the timeline selectable items.
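The selectable-item generation above reduces the location database to distinct country and city counts and a chronological timeline. A minimal sketch; the record fields are illustrative assumptions:

```python
# Hypothetical sketch of the claimed passport-item generation. The
# record shape ({"country", "city", "timestamp"}) is an assumption.
def passport_items(location_db):
    """Reduce a user's location database to passport-interface items."""
    countries = {rec["country"] for rec in location_db}
    cities = {(rec["country"], rec["city"]) for rec in location_db}  # city per country
    timeline = sorted(location_db, key=lambda rec: rec["timestamp"]) # chronological order
    return {"countries": len(countries), "cities": len(cities),
            "timeline": [rec["city"] for rec in timeline]}

items = passport_items([
    {"country": "US", "city": "LA", "timestamp": 2},
    {"country": "US", "city": "NYC", "timestamp": 1},
    {"country": "FR", "city": "Paris", "timestamp": 3},
])
```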

US Pat. No. 10,972,805

TARGETING TELEVISION ADVERTISEMENTS BASED ON AUTOMATIC OPTIMIZATION OF DEMOGRAPHIC INFORMATION

VISIBLE WORLD, LLC, Phil...

1. A method comprising:
receiving first information comprising demographic data associated with a plurality of zones;
determining, based on the demographic data, a plurality of demographic vectors associated with a plurality of population segments within the plurality of zones;
determining, based on the plurality of demographic vectors and based on a targeted reach for first media content, a first rotation of the first media content within the plurality of zones;
causing, via a user interface, display of second information associated with placement of the first media content during the first rotation, wherein the user interface comprises at least a national targeted cost-per-mille (CPM) field and an optimized CPM field, and wherein the second information indicates a yield percentage increase or decrease for the national targeted CPM field and the optimized CPM field; and
determining, based on the second information, a second rotation of second media content.

US Pat. No. 10,972,804

NETWORK-BASED CONTROL OF A MEDIA DEVICE

Caavo Inc, Milpitas, CA ...

1. A method implemented by a network-based device, comprising:
receiving, from a proxy device, a message identifying a first operating protocol that is utilized by a media device that is remotely located from the network-based device, the first operating protocol being determined based on communications between the proxy device and the media device, the proxy device being remotely located from the network-based device;
receiving a first command that comprises a first identifier that identifies an item of media content to be played back via the media device, the first command being in accordance with a second operating protocol that is incompatible with the media device;
accessing a data structure that maps the first operating protocol to the media device based on the message to determine that the media device utilizes the first operating protocol;
translating the first command into a second command, the second command being in accordance with the first operating protocol and being configured to cause the media device to play back the item of media content via a display device; and
transmitting the second command to the proxy device that is communicatively coupled between the network-based device and the media device for transmission to the media device.
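The accessing-and-translating steps above boil down to two lookups: device to protocol, then (source protocol, target protocol) to a command translator. A minimal sketch; the protocol names and command shapes are illustrative assumptions:

```python
# Hypothetical sketch of the claimed translation. Protocol names
# ("generic-cast", "dial") and command fields are assumptions.
DEVICE_PROTOCOLS = {}  # media_device_id -> protocol, learned from proxy messages

def register_device(device_id: str, protocol: str):
    """Record the protocol the proxy reported for a media device."""
    DEVICE_PROTOCOLS[device_id] = protocol

TRANSLATORS = {
    # (from_protocol, to_protocol) -> command translator
    ("generic-cast", "dial"): lambda cmd: {"op": "launch", "contentId": cmd["media_id"]},
}

def translate(device_id: str, command: dict, from_protocol: str) -> dict:
    """Rewrite an incompatible command into the device's own protocol."""
    to_protocol = DEVICE_PROTOCOLS[device_id]
    return TRANSLATORS[(from_protocol, to_protocol)](command)

register_device("living-room-tv", "dial")
out = translate("living-room-tv", {"media_id": "abc123"}, "generic-cast")
```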

US Pat. No. 10,972,803

APPARATUS, SYSTEMS AND METHODS FOR VIDEO OUTPUT BRIGHTNESS ADJUSTMENT

DISH Technologies L.L.C.,...

1. A method, comprising:
communicating at least part of a media content event from a media device to a component of a media presentation system that is communicatively coupled to the media device,
wherein the component of the media presentation system comprises a display,
wherein the communicated media content event is streamed out from the media device to the media presentation system as a stream of digital data that include a value that controls brightness of presented images, and
wherein the media content event is presented on the display to a user; and
after at least a portion of the media content event has been presented on the display for a first time period, causing a series of two or more display adjustments during a second time period to adjust direct lighting directed from the display toward the user by adjusting image characteristics affecting how subsequent portions of the media content event are presented on the display at least in part by:
detecting, during presentation of the media content event to the user, a predefined user action of a remote control that controls operation of at least one of the media device, the display, and/or another component of the media presentation system;
detecting, in response to detecting the user action of the remote control, a light level using a light detector, the media device comprising the light detector, and the light level in a vicinity of the media device and the display;
comparing the detected light level in front of the display with a predefined threshold lighting level to determine a difference between the detected light level in front of the display and the predefined threshold lighting level;
causing a first display adjustment to affect presentation of a first subsequent portion of the media content event to facilitate a first lighting level of direct lighting from the display toward the user in front of the display, at least partially by increasing, at the media device and in response to detecting the predefined user action of the remote control, the value of the digital data to increase brightness of a plurality of subsequent images presented on the display, based at least in part on one or more characteristics of a previous image and on the determined difference between the detected light level in front of the display and the predefined threshold lighting level;
continuing to communicate the media content event from the presentation device interface of the media device to the display of the component of the media presentation system using the increased value of the digital data; and
causing a second display adjustment to affect presentation of a second subsequent portion of the media content event to facilitate a second lighting level of direct lighting from the display toward the user in front of the display, where the second lighting level is different from the first lighting level.
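The compare-and-adjust step above can be sketched as a threshold comparison driving a brightness increase. The gain, clamp, and direction of adjustment below are illustrative assumptions; the claim only requires the adjustment to depend on the difference from the threshold:

```python
# Hypothetical sketch of the claimed brightness adjustment. Gain 0.5 and the
# 0-255 brightness range are illustrative assumptions.
def adjusted_brightness(current_value: int, detected_light: float,
                        threshold: float, gain: float = 0.5) -> int:
    """Raise the stream's brightness value when the room is darker than the threshold."""
    difference = threshold - detected_light
    if difference <= 0:
        return current_value  # room is bright enough; leave the value unchanged
    return min(255, current_value + int(difference * gain))

val = adjusted_brightness(128, detected_light=40.0, threshold=100.0)  # -> 158
```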

US Pat. No. 10,972,802

METHODS AND SYSTEMS FOR IMPLEMENTING AN ELASTIC CLOUD BASED VOICE SEARCH USING A THIRD-PARTY SEARCH PROVIDER

DISH Network L.L.C., Eng...

1. A method for implementing voice search in an elastic cloud environment communicating with a set-top box (STB), the method comprising:
receiving by a voice cloud search server, at least one set of a plurality of pulse-code modulation (PCM) audio packets transmitted from the STB;
sending the PCM audio packets, by the voice cloud search server, to a natural language processing (NLP) service for converting at least one set of PCM audio packets to text;
returning by the NLP service to the voice cloud search server, one or more text sets which have been converted from each set of PCM audio packets processed by the NLP service wherein the conversion of each set is performed in continuous real-time by the NLP service;
in response to a return of the text sets, sending the one or more text sets, by the voice cloud search server, to an elastic voice cloud search server for querying an electronic program guide (EPG) service for channel and program data associated with the text sets, wherein the EPG service returns identified channel and program data;
in response to an identified return of channel and television program data, sending one or more requests by the elastic voice cloud search server to a third-party search service to independently perform search applications to determine a set of relevant search results that comprise at least HTML pages of one or more hypertext links pointing to image and video content for serving up at the STB;
extracting, by the elastic voice cloud search server, the image and video content by navigating the one or more hypertext links in the HTML pages to identify at least relevant image and video content;
stripping, by the elastic voice cloud search server, the relevant image and video data by removing associated dynamic scripts in the HTML page; and
returning, by the elastic voice cloud server, only the image and video with one or more error codes if necessary, to serve up in a graphic user interface associated with the STB to a requester.
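The extracting-and-stripping steps above can be sketched with simple HTML handling: keep only links that point at image or video content, and remove dynamic script blocks before serving results to the STB. The regex-based approach and file-extension filter are illustrative assumptions:

```python
# Hypothetical sketch of the claimed result cleanup. Regex parsing and the
# extension whitelist are illustrative assumptions, not the service's method.
import re

def extract_media_links(html: str):
    """Keep only hrefs that point at image or video content."""
    links = re.findall(r'href="([^"]+)"', html)
    return [u for u in links if u.endswith((".jpg", ".png", ".mp4"))]

def strip_scripts(html: str) -> str:
    """Remove dynamic <script> blocks before serving to the STB."""
    return re.sub(r"<script.*?</script>", "", html, flags=re.S)

page = '<a href="clip.mp4">v</a><a href="page.html">p</a><script>x()</script>'
media = extract_media_links(page)  # -> ["clip.mp4"]
clean = strip_scripts(page)
```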

US Pat. No. 10,972,801

ELECTRONIC APPARATUS, METHOD AND PROGRAM FOR SELECTING CONTENT BASED ON TIME OF DAY

ARRIS ENTERPRISES LLC, S...

1. An electronic apparatus for selecting content, and for use with a display device to be connected to the electronic apparatus, the electronic apparatus comprising:
a control circuit that controls access to content provided by the electronic apparatus, determines an operation mode of the electronic apparatus, and determines a power-on state of the display device;
a monitoring circuit that:
monitors the access to the content;
acquires content access information associated with (i) the content and (ii) the access to the content; and
generates a table, the table including content access parameters of the content access information, the content access parameters including a current time, and a current day of the week associated with content that is viewed; and
a non-transitory computer-readable recording medium, wherein
the control circuit causes each of the content access information acquired by the monitoring circuit and the table generated by the monitoring circuit to be stored in the non-transitory computer-readable recording medium,
the monitoring circuit updates the content access parameters in the table as additional content access information is acquired by the monitoring circuit, and
when the control circuit determines that a current operation mode of the electronic apparatus is a current active mode by determining that both the electronic apparatus and the display device are powered on, the control circuit: acquires current content access parameters of current content associated with the current active mode of the electronic apparatus, the current content having the current time, and the current day of the week, refers to the table and selects, when the current time, and the current day of the week of the current access parameters for the current content associated with the current active mode of the electronic apparatus are determined by the control circuit to match an entry of content in the table having the updated content access parameters that includes the current time, and the current day of the week, the content associated with the entry in the table having the updated content access parameters that includes the current time and the current day of the week to be output from the electronic apparatus, wherein the content associated with the entry in the table having the updated content access parameters is associated with the current time, and the current day of the week.
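The table-matching logic above reduces to logging viewing events under (time, day) parameters and, when the apparatus is active, selecting the content whose table entry matches the current time and day. A minimal sketch; bucketing times to the hour is an illustrative assumption:

```python
# Hypothetical sketch of the claimed access table. Hour-granularity keys
# and last-writer-wins updates are illustrative assumptions.
access_table = {}  # (hour, weekday) -> content id most recently viewed then

def log_access(content_id: str, hour: int, weekday: str):
    """Monitoring circuit: update the table as content access is acquired."""
    access_table[(hour, weekday)] = content_id

def select_content(current_hour: int, current_weekday: str):
    """Control circuit: return content matching the current time/day, or None."""
    return access_table.get((current_hour, current_weekday))

log_access("news-channel", 8, "Mon")
log_access("cartoons", 8, "Sat")
choice = select_content(8, "Sat")  # -> "cartoons"
```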

US Pat. No. 10,972,800

APPARATUS AND ASSOCIATED METHODS

NOKIA TECHNOLOGIES OY, E...

1. An apparatus comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
based on a current view of video imagery of a scene provided for display to a user, and at least one comment of the one or more commenting users, the at least one comment having location information associated therewith, the location information indicative of one or more of (i) the location of the commenting user relative to the scene, at the time of making the comment, or (ii) a location relative to the scene specified by the commenting user who submitted the comment,
provide for display of the comment overlaid over the current view of the video imagery, the comment displayed in the current view of the video imagery at a position that corresponds to the location information indicative of the at least one of the location of the commenting user relative to the scene at the time of making the comment, or the location relative to the scene specified by the commenting user.
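The placement step above maps a comment's scene-relative location to a position in the current view. A minimal sketch for a flat 360-degree view, where the comment's location is a bearing in degrees; the geometry is an illustrative assumption, as the claim does not fix a projection:

```python
# Hypothetical sketch of the claimed comment placement in a 360-degree view.
# The bearing representation and linear projection are assumptions.
def comment_x(comment_bearing_deg: float, view_start_deg: float,
              view_width_deg: float, screen_width_px: int):
    """Return the x pixel for the comment, or None if it is outside the view."""
    offset = (comment_bearing_deg - view_start_deg) % 360.0
    if offset >= view_width_deg:
        return None  # the commented location is not in the current view
    return int(offset / view_width_deg * screen_width_px)

x = comment_x(30.0, view_start_deg=0.0, view_width_deg=90.0,
              screen_width_px=1800)  # -> 600
```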

US Pat. No. 10,972,799

MEDIA PRESENTATION DEVICE WITH VOICE COMMAND FEATURE

The Nielsen Company (US),...

1. A method comprising:
presenting, by a media presentation device, media content;
determining, by the media presentation device, a voice command associated with the media content, wherein determining the voice command associated with the media content comprises (i) extracting a watermark from the media content, (ii) transmitting the extracted watermark to a server, and (iii) responsive to transmitting the extracted watermark to the server, receiving from the server, information relating to the media content, wherein the information comprises the voice command associated with the media content;
making a determination, by the media presentation device, that, during the presenting of the media content, a user of the media presentation device uttered the determined voice command; and
responsive to making the determination, performing, by the media presentation device, an action to facilitate the user purchasing of a good or service associated with the media content.
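The determining step above is a watermark-keyed lookup: the extracted watermark is sent to a server, which returns the voice command associated with the media. A minimal sketch where the server round trip is stood in by a dictionary; the watermark values and matching rule are illustrative assumptions:

```python
# Hypothetical sketch of the claimed watermark -> voice-command lookup.
# The database contents and exact-match rule are assumptions.
WATERMARK_DB = {"wm-4812": {"title": "Ad for widgets", "voice_command": "buy it"}}

def command_for_watermark(watermark: str):
    """Stand-in for the server round trip: watermark -> associated voice command."""
    info = WATERMARK_DB.get(watermark)
    return info["voice_command"] if info else None

def utterance_matches(utterance: str, command: str) -> bool:
    """Did the user utter the determined voice command during presentation?"""
    return utterance.strip().lower() == command

cmd = command_for_watermark("wm-4812")  # -> "buy it"
```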

US Pat. No. 10,972,798

DISPLAY METHOD AND DEVICE FOR ATTACHED MEDIA INFORMATION

TENCENT TECHNOLOGY (SHENZ...

1. A media information display method performed at a computing device having a touchscreen display, one or more processors and memory storing a plurality of programs to be executed by the one or more processors, the method comprising:
while rendering main media information on the touchscreen display:
detecting a first user input operation on the main media information;
in response to the first user input operation:
determining a first position of a progress bar of the main media information on the touchscreen display according to a first location of the first user input operation on the touchscreen display and rendering additional media information at the first position of the progress bar on the touchscreen display, the additional media information corresponding to an advertisement, and the main media information comprises non-advertisement content;
detecting a second user input operation on the touchscreen display;
in response to the second user input operation:
determining a second position of the progress bar of the main media information on the display according to a second location of the second user input operation on the touchscreen display;
moving the progress bar from the first position to the second position of the progress bar on the touchscreen display;
continuously rendering the additional media information when the progress bar moves from the first position of the progress bar to the second position of the progress bar on the touchscreen display while the rendition of the main media information is interrupted by a period corresponding to a difference between the second position and the first position; and
removing the additional media information from the touchscreen display after a preset time period, wherein both the main media information and the additional media information are videos.
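The position-determination steps above map a touch's horizontal coordinate onto the progress bar; the offset between two such positions gives the period by which the main video's rendition is shifted. A minimal sketch with illustrative screen geometry:

```python
# Hypothetical sketch of the claimed touch-to-position mapping. The bar
# geometry and clamping are illustrative assumptions.
def bar_position(touch_x: float, bar_left: float, bar_width: float,
                 duration_s: float) -> float:
    """Map a touch x-coordinate to a playback position in seconds."""
    fraction = (touch_x - bar_left) / bar_width
    return max(0.0, min(1.0, fraction)) * duration_s

first = bar_position(200.0, bar_left=100.0, bar_width=800.0, duration_s=600.0)
second = bar_position(600.0, bar_left=100.0, bar_width=800.0, duration_s=600.0)
interruption = second - first  # period corresponding to the position difference
```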

US Pat. No. 10,972,797

SYSTEM, DEVICE AND METHOD FOR TRANSMITTING AND PLAYING INTERACTIVE VIDEOS

1. An interactive video transmitting device, comprising:
a processor;
a storing unit electrically connected with the processor, storing a program code and a plurality of interactive videos and interactive menu information, each of the interactive videos comprising a plurality of interactive segments respectively stored in the storing unit independently, a frame of each of the interactive segments respectively corresponding to a plurality of interactive options, a plurality of video elements and a plurality of display weightings, said interactive options illustrating colors corresponding to a plurality of color function keys on a remote controller, the interactive menu information comprising a plurality of commands respectively corresponding to the interactive options of the interactive segments at each time point in a timeline of the interactive videos; and
a communicating unit electrically connected with the processor, transmitting the interactive videos and the interactive menu information to a viewing end device playing the interactive videos, the viewing end device receiving a control signal corresponding to one of the color function keys on the remote controller, and according to an in-play interactive segment upon receiving the control signal, executing one of the commands corresponding to the corresponding one of the interactive options illustrated on the in-play interactive segment with the corresponding color as the one of the color function keys on the remote controller;
wherein the processor executes the program code to receive a control signal from the viewing end device, the control signal indicating a first interactive video of the interactive videos is selected, then the processor retrieving, and transmitting to the viewing end device through the communicating unit, the interactive segments of the first interactive video from the storing unit, as well as the corresponding interactive options, the corresponding video elements and the corresponding display weightings;
wherein when the viewing end device plays the interactive segment of the first interactive video, the interactive options corresponding to the interactive segment are displayed on the interactive segment, and the video elements corresponding to the interactive segment are respectively displayed on corresponding interactive options for portions of a time of a duration of the interactive segment, said display weightings respectively corresponding to the video elements, the portions of the time determined by said display weightings;
wherein a quantity of the interactive options is four, and
wherein the video elements corresponding to the interactive segment are respectively displayed on the corresponding interactive options for the portions of the time of the duration of the interactive segment when a quantity of the video elements corresponding to the interactive segment is greater than four.
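The display-weighting element above amounts to splitting a segment's duration among video elements in proportion to their weightings. A minimal sketch; the weighting values are illustrative assumptions:

```python
# Hypothetical sketch of the claimed display weighting: each element's share
# of the segment duration is its weighting over the total.
def display_times(duration: float, weightings):
    """Split a segment's duration among video elements by display weighting."""
    total = sum(weightings)
    return [duration * w / total for w in weightings]

times = display_times(20.0, [1, 1, 2])  # -> [5.0, 5.0, 10.0]
```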

US Pat. No. 10,972,796

DIGITAL DEVICE AND METHOD FOR CONTROLLING THE SAME

LG ELECTRONICS INC., Seo...

1. A digital television comprising:
a memory storing one or more applications;
an interface to receive control signals from a remote controller;
a screen; and
a controller to:
execute an application related to a first icon within a first menu when the controller receives a signal selecting the first icon within the first menu from the remote controller,
display a content of the executed application on the screen,
enter into an edit mode of a second icon within the first menu in response to selecting at least one vertical direction key on the remote controller, and
move the second icon from the first menu to a second menu which is grouped differently from the first menu while displaying the content of the executed application on the screen,
wherein the first menu includes a list of applications previously executed in the digital television and the second menu includes a favorite application list,
wherein the second menu is displayed on the screen together with the first menu,
wherein the first menu is arranged in a single row and the second menu is arranged in a single row, and
wherein the first menu is spaced along a vertical direction from the second menu.

US Pat. No. 10,972,795

DYNAMIC OBJECT UPDATE SUBSCRIPTIONS BASED ON USER INTERACTIONS WITH AN INTERFACE

Slack Technologies, Inc.,...

1. A client device for dynamically maintaining object updates stored on the client device, the client device comprising:
one or more memory storage areas for maintaining a local data store of stored object data for a plurality of objects;
one or more processors collectively configured to:
monitor user interaction with the plurality of objects;
detect one or more trigger events indicating a change in user interaction with one or more particular objects of the plurality of objects;
generate a subscription modification request for the one or more particular objects based at least in part on the one or more trigger events, wherein the subscription modification request comprises either a subscribe request to initiate a subscription relating to the one or more particular objects or an unsubscribe request to terminate a subscription relating to the one or more particular objects, and wherein a determination of whether the subscription modification request comprises the subscribe request or the unsubscribe request is based at least in part on whether the one or more particular objects are within a visible portion of a graphical user interface; and
transmit the subscription modification request to a remote computing platform to request a modification of object data transmitted to the client device relating to the one or more particular objects.
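The subscribe/unsubscribe decision in this claim turns on whether an object is within the visible portion of the interface. A minimal sketch of that decision follows; the function name, data shapes, and use of integer scroll positions are illustrative assumptions, not from the patent:

```python
def make_subscription_requests(objects, visible_positions):
    """Build one subscription modification request per monitored object.

    Objects whose position falls inside the visible portion of the UI
    get a subscribe request; objects scrolled out of view get an
    unsubscribe request. (Names and structure are illustrative.)
    """
    requests = []
    for obj_id, position in objects.items():
        action = "subscribe" if position in visible_positions else "unsubscribe"
        requests.append({"object_id": obj_id, "action": action})
    return requests
```

The resulting list would then be transmitted to the remote platform to modify which object updates the server pushes to the client.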

US Pat. No. 10,972,794

CONTENT-MODIFICATION SYSTEM WITH TRANSMISSION DELAY-BASED FEATURE

The Nielsen Company (US),...

1. A method comprising:
determining a content-transmission delay between a content-distribution system and a content-presentation device;
using at least the determined content-transmission delay as a basis to select, from among a plurality of reference fingerprint data sets, a reference fingerprint data set that corresponds with the determined content-transmission delay; and
transmitting to the content-presentation device, the selected reference fingerprint data set that corresponds with the determined content-transmission delay to facilitate the content-presentation device detecting a match between query fingerprint data representing content received by the content-presentation device and at least a portion of reference fingerprint data in the transmitted reference fingerprint data set, wherein the at least a portion of reference fingerprint data in the selected reference fingerprint data set corresponds with a channel, and wherein the content-presentation device detecting the match causes the content-presentation device to identify the channel as being the one on which the content-presentation device is receiving content.
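The selection step above can be sketched as a lookup keyed on delay. The claim only requires choosing a set that "corresponds with" the determined delay; nearest-nominal-delay matching below is an assumption, as are the names:

```python
def select_reference_set(delay_ms, reference_sets):
    """Pick the reference fingerprint data set whose nominal delay is
    closest to the measured content-transmission delay.

    reference_sets: dict mapping a nominal delay in ms to a data-set id.
    (Closest-match is one plausible reading of "corresponds with".)
    """
    nominal = min(reference_sets, key=lambda d: abs(d - delay_ms))
    return reference_sets[nominal]
```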

US Pat. No. 10,972,793

SYSTEMS AND METHODS FOR SCENE CHANGE EVALUATION

Rovi Guides, Inc., San J...

11. A system for evaluating whether to execute a scene change request, the system comprising:
control circuitry configured to:
receive, during an output of a first media asset, a scene change request prior to a first scene of the first media asset;
in response to receiving the scene change request, determine a first sequence of scenes preceding the first scene;
identify a second media asset in a viewing history;
identify, in the second media asset, a second scene at which a request for a scene change was received;
determine a second sequence of scenes preceding the second scene;
in response to determining that the second sequence of scenes corresponds to the first sequence of scenes, compare the first scene with the second scene;
determine, based on the comparing, that the first scene does not correspond to the second scene; and
in response to determining that the first scene does not correspond to the second scene, generate for display a warning that the scene change at the second scene is not recommended.
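The comparison logic of this claim — match the preceding scene sequences, then warn when the scene that followed in the earlier asset differs — can be sketched as below; the history structure and return value are illustrative assumptions:

```python
def evaluate_scene_change(first_seq, first_scene, history):
    """Return a warning string when the viewing history contains a
    matching preceding sequence whose following scene differs from
    the current one; otherwise None.

    history: list of (preceding_sequence, scene_at_request) pairs
    drawn from earlier media assets. (Illustrative structure.)
    """
    for second_seq, second_scene in history:
        if second_seq == first_seq and second_scene != first_scene:
            return "scene change not recommended"
    return None
```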

US Pat. No. 10,972,792

SYSTEMS AND METHODS FOR SCENE CHANGE RECOMMENDATIONS

Rovi Guides, Inc., San J...

1. A method for indicating whether a scene in a media asset corresponds to a scene for which a scene change was previously requested, the method comprising:
identifying, in a data structure of a user profile, a user scene change command entry associated with a first media asset in a viewing history associated with the user profile;
retrieving, from memory, the first media asset;
identifying, based on the user scene change command entry, a first scene at which the request for a scene change was received;
determining a first sequence of scenes preceding the first scene;
during output of a second media asset, scanning the second media asset for a second sequence of scenes that corresponds to the first sequence of scenes, wherein the second sequence of scenes precedes a second scene;
in response to detecting the second sequence of scenes that corresponds to the first sequence of scenes, comparing the first scene with the second scene;
determining, based on the comparing, that the first scene does not correspond to the second scene; and
generating for display a warning that the scene change at the second scene is not recommended.

US Pat. No. 10,972,791

DIGITAL TELEVISION, ELECTRONIC DEVICE AND CONTROL METHODS THEREOF

K-TRONICS (SUZHOU) TECHNO...

1. A digital television comprising an expansion interface, a controller and a voltage converter, wherein
the controller is connected to the expansion interface, and is connected to an external electronic device through the expansion interface,
the expansion interface is connected to the voltage converter,
the controller is configured to control the voltage converter to supply power to the external electronic device through the expansion interface, and
the controller is further configured to control the external electronic device to be turned on or turned off and communicate with the external electronic device through the expansion interface,
wherein,
the expansion interface comprises a positive power supply pin and a negative power supply pin, wherein the positive power supply pin is connected to the voltage converter through a switching transistor, and the negative power supply pin is grounded, and
the controller is connected to the switching transistor, and is configured to control the switching transistor to be turned on or turned off, so that the voltage converter is connected to or disconnected from the expansion interface.

US Pat. No. 10,972,790

HDMI HARDWARE ISOLATION

ARRIS Enterprises LLC, S...

1. A customer premise equipment (CPE) device comprising:
a high-definition multimedia interface (HDMI) connector;
a system-on-chip (SoC);
an isolation block, wherein the isolation block includes:
an HDMI redriver, wherein the HDMI redriver converts AC-coupled TMDS to HDMI physical layer output, the HDMI physical layer output being a DC-coupled HDMI signal, wherein the AC-coupled TMDS is received by the HDMI redriver from the SoC and the HDMI physical layer output is output to the HDMI connector;
wherein each respective one signal path of one or more signal paths between the SoC and the HDMI redriver includes a capacitor that filters out a DC signal component of an AC-coupled TMDS, and wherein the isolation block blocks AC current.

US Pat. No. 10,972,789

METHODS, SYSTEMS, AND DEVICES FOR PROVIDING SERVICE DIFFERENTIATION FOR DIFFERENT TYPES OF FRAMES FOR VIDEO CONTENT

1. A device, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
receiving, over a communication network, a plurality of requests for frames of video content to provide to a mobile device;
determining a first portion of the plurality of requests are for pre-fetch frames of the video content;
providing, over the communication network, the pre-fetch frames to the mobile device over a default bearer path;
determining a second portion of the plurality of requests are for emergent frames of the video content; and
providing, over the communication network, the emergent frames to the mobile device over a dedicated bearer path, wherein a subscription profile repository adds a subscription profile for the video content, wherein the subscription profile includes a first packet flow specification for a first packet flow carrying the pre-fetch frames and a second packet flow specification for a second packet flow carrying the emergent frames.

US Pat. No. 10,972,788

DISTORTION-BASED VIDEO RE-ENCODING

Amazon Technologies, Inc....

1. A computing system for distortion-based video processing in live video streams comprising:
one or more processors; and
one or more memories having stored therein instructions that, upon execution by the one or more processors, cause the computing system to perform operations comprising:
decoding a first portion of input video content included in an input live video stream, wherein one or more edits are applied to the first portion of input video content, wherein a first portion of output video content includes the first portion of input video content with the one or more edits applied thereto;
determining a first amount of distortion associated with the one or more edits to the first portion of input video content, wherein the first amount of distortion is determined based at least in part on one or more differences between the first portion of input video content and the first portion of output video content;
comparing the first amount of distortion to a threshold amount of distortion;
based at least in part on the comparing, determining whether or not to use one or more first motion vectors from the input live video stream to encode the first portion of output video content in an output live video stream; and
encoding the first portion of output video content in the output live video stream.
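The distortion test in this claim reduces to a threshold comparison that gates motion-vector reuse. The sketch below uses mean absolute pixel difference as a stand-in metric, since the claim does not fix a distortion measure; names and the flat-frame representation are illustrative:

```python
def choose_motion_vectors(input_frames, output_frames, threshold):
    """Decide whether motion vectors from the input live stream can be
    reused when encoding the edited output portion.

    Distortion here is mean absolute difference between corresponding
    samples of the input and edited output -- one plausible metric.
    Returns True when reuse is acceptable (distortion within threshold).
    """
    diffs = [abs(a - b) for a, b in zip(input_frames, output_frames)]
    distortion = sum(diffs) / len(diffs)
    return distortion <= threshold
```

Reusing the input stream's motion vectors when edits cause little distortion avoids a full motion search during re-encoding, which matters for live latency.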

US Pat. No. 10,972,787

TRANSMISSION METHOD, RECEPTION METHOD, TRANSMITTING DEVICE, AND RECEIVING DEVICE

PANASONIC INTELLECTUAL PR...

1. A reception method, performed by a receiving device, the reception method comprising:
receiving a first content and characteristics information of the first content, the first content conforming to High Dynamic Range (HDR), the characteristics information indicating one of an opto-electrical transfer function (OETF) and an electro-optical transfer function (EOTF);
receiving a second content to be displayed with the first content, the second content conforming to Standard Dynamic Range (SDR), the second content including at least one of a caption or an image;
determining whether a scale factor is received, the scale factor being described in Timed Text Markup Language (TTML);
converting the second content to a first converted content with the scale factor such that the first converted content conforms to HDR if the scale factor is determined to be received; and
converting the second content to a second converted content with a default value such that the second converted content conforms to HDR if the scale factor is not determined to be received, the default value being stored in the receiving device.
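The received-or-default branching of this claim can be sketched as follows. The linear luminance scaling, the 10-bit clip, and the default value are illustrative assumptions; real converters apply transfer-function-aware mapping:

```python
DEFAULT_SCALE = 2.0  # device-stored default (hypothetical value)

def convert_caption_to_hdr(sdr_luma, scale_factor=None):
    """Scale SDR caption/image luminance toward the HDR range.

    Uses the TTML-carried scale factor when one was received,
    otherwise the receiver's stored default, per claim 1.
    (Linear scaling and the 10-bit clip are assumptions.)
    """
    factor = scale_factor if scale_factor is not None else DEFAULT_SCALE
    return [min(v * factor, 1023) for v in sdr_luma]
```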

US Pat. No. 10,972,786

MEDIA CHANNEL IDENTIFICATION AND ACTION WITH MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON MATCHING WITH DIFFERENTIAL REFERENCE-FINGERPRINT FEATURE

Gracenote, Inc., Emeryvi...

1. A media presentation device comprising:
a media input interface through which to receive video content to be presented by the media presentation device, wherein the video content includes video frames having video frame regions, the video frame regions comprising a center, an edge, and a corner;
a media presentation interface for presenting the received video content; and
a network communication interface,
wherein the media presentation device is configured to generate first query fingerprint data representing the video content based on analysis of the video content, and to output the generated first query fingerprint data for transmission through the network communication interface to a server,
wherein the media presentation device is configured to receive from the server, after outputting the first query fingerprint data, a request for second query fingerprint data specifically focused on an identified video frame region of the video frame regions of the video content, wherein the identified video frame region defines a difference between multiple channels that each have reference fingerprint data matching the first query fingerprint data,
wherein the media presentation device is configured to output, for transmission through the network communication interface to the server, the requested second query fingerprint data specifically focused on the identified video frame region of the video content, and
wherein the media presentation device is configured to present, in conjunction with the video content that the media presentation device is presenting, supplemental channel-specific content associated with one of the multiple channels, the one channel being identified from among the multiple channels based on a determination that the second query fingerprint data matches a reference fingerprint of just the one channel of the multiple channels.

US Pat. No. 10,972,785

SYSTEM AND METHOD FOR DYNAMIC PLAYBACK SWITCHING OF LIVE AND PREVIOUSLY RECORDED AUDIO CONTENT

Entercom Communications C...

1. A system for distributing live and on-demand media to a user, comprising:
a media content server configured to:
receive a pre-modified live digital audio stream including a plurality of alternative content (AC) start tags and AC stop tags,
modify the pre-modified live digital audio stream by inserting at least a first alternative content into the pre-modified live digital audio stream between a respective AC start/stop tag to produce a live digital audio stream and stream the live digital audio stream for playback on a client device;
a time-shifted media server being configured to store the pre-modified live digital audio stream for transmission of the pre-modified live digital audio stream as a time-shifted digital audio stream upon request from the client device;
an alternative content server configured to:
receive the pre-modified live digital audio stream including the plurality of AC start/stop tags,
as the pre-modified live digital audio stream reaches a time that corresponds to a respective AC start/stop tag, identify and store the respective AC start/stop tag for later transmission to the client device, and
the client device including a processor and a speaker, the client device being configured to:
receive the live digital audio stream from the media content server and output the live digital audio stream via the speaker of the client device, and
in response to a playback command by the user at the client device, the client device being configured to transmit a time-shifted digital audio stream request to the time-shifted media server;
in response to a time-shifted digital audio stream request, the time-shifted media server being configured to stream the time-shifted digital audio stream to the client device;
while outputting the time-shifted digital audio stream via the speaker, the client device being configured to:
receive a respective AC start/stop tag of the plurality of AC start/stop tags of the pre-modified live digital audio stream from the alternative content server as the alternative content server identifies the respective AC start/stop tags from the pre-modified live digital audio stream,
identify an upcoming alternative content period when the first alternative content was inserted in the live digital audio stream based on a respective AC start tag and a respective AC stop tag, and,
in response to determining that the upcoming alternative content period is within a pre-determined timing threshold, transmit an alternative content request to the alternative content server;
the alternative content server further configured to:
in response to receiving the alternative content request, transmit a second alternative content to the client device; and
the client device configured to:
receive the second alternative content, and
when an upcoming alternative content period matches a current period of the time-shifted digital audio stream:
cease outputting, at the speaker, the time-shifted digital audio stream, and
output the second alternative content for the time-shifted digital audio stream.

US Pat. No. 10,972,784

ZONE GROUP CONTROL

Sonos, Inc., Santa Barba...

15. A tangible, non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors of a playback device, cause the playback device to perform functions comprising:
receiving, via a network interface from a native controller executing on a first mobile device, an instruction to add one or more audio tracks to a queue, wherein, in a first mode associated with the native controller, the playback device is configured to play back from the queue;
while the one or more audio tracks are in the queue, receiving, via the network interface from a media player application associated with a particular wireless protocol, an instruction to play back a particular media item, wherein the media player application is executing on (a) the first mobile device or (b) a second mobile device;
in response to the instruction to play back the particular media item, (i) configuring the playback device to play back in a second mode associated with the particular wireless protocol, wherein, in the second mode, the queue is not in use, and (ii) playing back the particular media item via the particular wireless protocol via one or more speakers;
while playing back the particular media item via the one or more speakers, detecting a loss of control of the playback device by the media player application; and
in response to detecting the loss of control of the playback device by the media player application, configuring the playback device to play back in the first mode.

US Pat. No. 10,972,783

DIGITAL CONTENTS RECEIVER, DIGITAL CONTENTS RECEIVING METHOD AND DIGITAL CONTENTS TRANSMITTING AND RECEIVING METHOD

Maxell, Ltd., Kyoto (JP)...

1. A display apparatus comprising:
a network interface circuitry configured to receive video content and content information including information which identifies whether video content to be transmitted via network for a viewer's viewing includes 3D video content or not;
a video processor configured to conduct video processing of video content;
an operation instruction receiver configured to receive an operation instruction by the viewer;
a display configured to display video content in 3D view or in 2D view; and
a processor,
wherein in a case where the content information received via the network interface circuitry indicates that the video content to be transmitted includes 3D video content,
the processor is configured to:
cause the display to display an indication indicating the video content to be transmitted includes 3D video content;
request viewer-input by the operation instruction receiver indicating that wearing of glasses for viewing the video content to be transmitted in 3D view is completed; and
control the video processor to conduct 3D video processing of the video content or 2D video processing of the video content in accordance with the operation instruction input by the operation instruction receiver.

US Pat. No. 10,972,782

COMMUNICATION APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. A communication apparatus comprising:
one or more processors; and
one or more memories including instructions that, when executed by the one or more processors, cause the communication apparatus to:
receive, from a first other communication apparatus, an identifier for identifying content held by an external apparatus;
acquire the content from the external apparatus based on the received identifier;
play back the acquired content;
receive playback control information for controlling the playback of the acquired content from the first other communication apparatus via a communication path between the communication apparatus and the first other communication apparatus in a case where the acquired content is being played back;
detect a disconnection of the communication path in a case where the acquired content is being played back; and
perform control in such a manner that a notification based on the disconnection of the communication path is issued to a user based on the detection of the disconnection of the communication path,
wherein the acquired content is continued to be played back while the notification based on the disconnection of the communication path is being issued to a user.

US Pat. No. 10,972,781

WIDEBAND TUNER ARCHITECTURE

MaxLinear, Inc., Carlsba...

1. A system comprising:
a first analog-to-digital converter (ADC) configured to digitize a first frequency band comprising a first plurality of channels;
a second ADC configured to digitize a second frequency band comprising a second plurality of channels; and
a digital frontend (DFE) circuit operatively coupled to the first ADC and the second ADC, the DFE being configured to:
frequency shift one or more channels from the first plurality of channels; and
frequency shift one or more channels from the second plurality of channels, wherein the frequency-shifted one or more channels from the first plurality of channels and the frequency-shifted one or more channels from the second plurality of channels are combined into an intermediate frequency (IF) signal.

US Pat. No. 10,972,780

CAMERA CLOUD RECORDING

Comcast Cable Communicati...

1. A method comprising:
receiving, by a device associated with a premises, configuration information comprising a time parameter indicating a time duration associated with segments of video and a transmission frequency parameter associated with a frequency of transmission of video segments;
receiving control information associated with activation of the device, wherein the control information comprises an event, and wherein the device is configured to activate based at least on the event;
causing capture of video of at least a portion of the premises based at least on the control information, wherein the captured video comprises a plurality of frames;
sending, to a network device at a first time based on the transmission frequency parameter, a first segment of the captured video comprising a first portion of the plurality of frames, wherein the first segment has a time duration based at least on the time parameter; and
sending, to the network device at a second time based on the transmission frequency parameter, a second segment of the captured video comprising a second portion of the plurality of frames, wherein the second segment has a time duration based at least on the time parameter, wherein the second time is different from the first time.
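The time-parameter-driven segmentation in this claim can be sketched as chunking captured frames into fixed-duration segments; the function name and frame representation are illustrative assumptions:

```python
def segment_frames(frames, fps, segment_seconds):
    """Split captured frames into segments whose duration follows the
    configured time parameter. Each segment would then be sent to the
    network device at times governed by the transmission frequency
    parameter. The final segment may be shorter than the rest.
    """
    per_segment = int(fps * segment_seconds)
    return [frames[i:i + per_segment]
            for i in range(0, len(frames), per_segment)]
```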

US Pat. No. 10,972,779

USER DEFINED CONTENT SUMMARY CHANNEL

1. A method, comprising:
initiating, by a processing system comprising a processor, a creation of a personalized channel responsive to equipment of a user requesting to create the personalized channel, wherein the initiating the creation of the personalized channel comprises receiving, by the processing system, a completed template from the equipment of the user for the personalized channel;
providing, by the processing system, a search request to equipment of a plurality of content providers for content for the personalized channel according to the request, wherein the plurality of content providers includes a satellite service provider, an interactive television network provider, an Internet based content provider, an over-the-top content provider, and a radio channel;
retrieving, by the processing system, the content from the equipment of the plurality of content providers according to the search request as retrieved content;
classifying, by the processing system, the content for the personalized channel according to the content of the retrieved content to generate classified content simultaneously according to a content type and according to a content source of the retrieved content, wherein the content source includes the plurality of content providers and third party content providers;
determining a priority for the classified content based on the content type;
based on the priority for the classified content, sequencing, by the processing system, the classified content to generate sequenced content, wherein the sequencing is arranged according to the content type and the content source based on the completed template to generate a schedule of the personalized channel;
assigning, by the processing system, the sequenced content to a time slot in the schedule of the personalized channel, resulting in assigned content;
delivering, by the processing system, the assigned content to the personalized channel according to the time slot in the schedule;
receiving, by the processing system, a follower list for the personalized channel based on the completed template, wherein the follower list comprises a list of one or more subscribers to be invited to access the personalized channel;
sending, by the processing system, an invite to each subscriber of the one or more subscribers;
adding, by the processing system, the personalized channel to a channel line-up for a subscriber of the one or more subscribers responsive to receiving an acceptance message from the subscriber of the one or more subscribers;
identifying, by the processing system, a first device of a plurality of devices associated with the subscriber as being a determined target device to reach the subscriber at a given point in time; and
delivering, by the processing system, the assigned content via the personalized channel to the first device in accordance with the identifying of the first device.

US Pat. No. 10,972,778

STREAM CONTROL SYSTEM FOR USE IN A NETWORK

KONINKLIJKE KPN N.V., Ro...

1. Stream control system for use in a network, the network comprising
network resources including nodes and links connecting the nodes, and
at least one network controller having a network controller interface for exchanging network control data, the network controller being arranged to control one or more network resources;
the network being arranged for transferring at least one video stream from a video server to a video client via a distribution chain of network resources,
the distribution chain comprising a server node coupled to the video server and a client node coupled to the video client;
wherein the stream control system comprises a bridge unit, a bridge controller and a streaming controller arranged to control streaming settings at the client node;
the bridge unit being coupled to the bridge controller and being arranged to exchange messages with the network controller and the streaming controller by
communicating with the network controller interface, and
communicating with the streaming controller;
the bridge controller being arranged to control the video stream by
obtaining, from the streaming controller, at least one streaming-control request, the request including a bandwidth requirement of the video stream;
obtaining, via the network controller interface, network resource data including bandwidths available on network resources,
determining, for the request, a resource allocation including an allocated bandwidth based on the network resource data and the streaming-control request, the allocated bandwidth being equal to, or lower than, the bandwidth requirement so that the video stream complies with the network resource data;
transferring, to the streaming controller, the allocated bandwidth so as to enable the streaming controller to control, in accordance with the allocated bandwidth, the streaming settings for the client; and
transferring, to the network controller, network control data to control, in accordance with the allocated bandwidth, the respective distribution chain associated to the respective video stream;
wherein the streaming controller is arranged
to exchange streaming control data with the bridge controller, the streaming control data including the streaming-control request and the allocated bandwidth; and
to control, in accordance with the allocated bandwidth, the streaming settings for the client.
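The bridge controller's allocation step — grant at most the requested bandwidth, capped by what the network resource data says is available — can be sketched as follows; the request shape and field names are illustrative assumptions:

```python
def allocate(request, available_bps):
    """Determine a resource allocation for one streaming-control request.

    request: {"stream": id, "bandwidth": required bps} (illustrative).
    available_bps: bottleneck bandwidth from the network resource data.
    The allocated bandwidth is equal to, or lower than, the requirement,
    so the video stream complies with the network resource data. The
    result would be sent both to the streaming controller (to adapt the
    client's streaming settings) and to the network controller (to
    program the distribution chain).
    """
    allocated = min(request["bandwidth"], available_bps)
    return {"stream": request["stream"], "allocated_bps": allocated}
```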

US Pat. No. 10,972,777

METHOD AND APPARATUS FOR AUTHENTICATING MEDIA BASED ON TOKENS

10. A device, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
obtaining a content item;
receiving a first token that comprises an identification of a date and a time when a first portion of the content item is obtained and a location where the first portion of the content item is obtained,
wherein the receiving of the first token comprises receiving the first token as an encrypted token from a database;
decrypting the encrypted token in accordance with a key to generate a decrypted first token; and
transmitting the content item and the first token to the database in a message, wherein the transmitting of the first token to the database comprises transmitting the decrypted first token.

US Pat. No. 10,972,776

SYNCHRONIZING AND DYNAMIC CHAINING OF A TRANSPORT LAYER NETWORK SERVICE FOR LIVE CONTENT BROADCASTING

1. A communication node, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising:
intercepting, by the processing system, a first streaming session between a content streaming server and a first viewer node of a plurality of viewer nodes, the first streaming session including content data streamed from the content streaming server to the first viewer node;
detecting, by the processing system, during streaming of content to the first viewer node via the first streaming session, an instruction to stream the content from the processing system to a second viewer node of the plurality of viewer nodes;
intercepting, by the processing system, a second streaming session between the content streaming server and the second viewer node;
obtaining, via a third streaming session between the communication node and the content streaming server, content data associated with the content, the obtaining the content data comprising receiving, from the content streaming server, control data to mark a first content item needed by the second viewer node for reliable transport of content data to the second viewer node;
replicating, by the processing system, the content data, resulting in replicated content data; and
injecting, by the processing system, the replicated content data into the first streaming session and the second streaming session to synchronously provide the replicated content data to both the first viewer node and the second viewer node of the plurality of viewer nodes, wherein the injecting the replicated content data comprises beginning injection of the replicated content data into the second streaming session according to the control data.

US Pat. No. 10,972,775

TIME SYNCHRONIZATION OF CLIENT DEVICES USING TWO-WAY TIME TRANSFER IN A LIVE CONTENT DISTRIBUTION SYSTEM

NET INSIGHT AB, Solna (S...

1. A method in a network system capable of node-to-node time-transfer for synchronizing a respective local clock of an originating server and at least one client device or other server by means of two-way time transfer, wherein two-way time transfer comprises providing a local clock signal of the originating server, and
for each client device or other server of said at least one client device or other server which is connected to the originating server:
receiving, in the originating server, a local clock signal of the client device or other server;
estimating a time difference at the originating server based on the local clock signal of the originating server and the received local clock signal of the client device or other server;
transferring said local clock signal of the originating server and the estimated time difference at the originating server to the client device or other server;
estimating a time difference at the client device or other server based on the local clock signal of the client device or other server and the local clock signal of the originating server; and
adjusting the local clock signal of the client device or other server based on the estimated time difference at the originating server, the estimated time difference at the client device or other server, and the local clock signal of the client device or other server,
said method comprising a first mode in which two-way time transfer is employed, and further comprising a second mode in which only said local clock signal of the originating server and said local clock signal of the client device or other server are exchanged between said originating server and said client device or other server.
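The adjustment step combines the two one-way difference estimates so that a symmetric path delay cancels, as in classic two-way time transfer: with true offset o (client ahead of server) and one-way delay d, the server observes o - d and the client observes -o - d, so half their difference recovers o. A minimal sketch (names are illustrative):

```python
def clock_offset(diff_at_server, diff_at_client):
    """Combine the difference estimated at the originating server with
    the difference estimated at the client into a clock offset.

    Assumes a symmetric path delay d: the server sees o - d, the client
    sees -o - d, so (server - client) / 2 recovers the offset o, which
    the client then applies to adjust its local clock signal.
    """
    return (diff_at_server - diff_at_client) / 2.0
```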

US Pat. No. 10,972,774

BROADCAST SYNCHRONIZATION

HeartMedia Management Ser...

1. A system comprising:
a master media server and a dual-mode media server coupled via a communications network;
the master media server coupled via the communications network to an on-air broadcast chain, the master media server configured to:
deliver first media content for broadcast via the on-air broadcast chain in accordance with constraints established by a clock shell;
exert control over content and timing of media and spots served by the dual-mode media server during periods of time the dual-mode media server is operating in a synchronized mode, but not exert control over the content and timing of media and spots served by the dual-mode media server during periods of time the dual-mode media server is operating in an independent mode; and
the dual-mode media server coupled to the master media server, coupled to a streaming broadcast chain via at least one communications network and configured to deliver second media content for broadcast via the streaming broadcast chain, the dual-mode media server configured to operate in a synchronized mode during a first portion of time and in an independent mode during a second portion of time, wherein:
operating in the synchronized mode includes delivering the first media content via the streaming broadcast chain in substantial synchronization with delivery of the first media content via the on-air broadcast chain;
operating in the independent mode includes delivering at least some different media content via the streaming broadcast chain, but still within the constraints of the clock shell;
the dual-mode media server further configured to:
obtain a copy of the clock shell used by the master media server; and during operation in the independent mode, periodically synchronize the copy of the clock shell with the clock shell.

US Pat. No. 10,972,773

COORDINATING VIDEO DELIVERY WITH RADIO FREQUENCY CONDITIONS

Cisco Technology, Inc., ...

1. A computer-implemented method of video optimization, the computer-implemented method comprising:receiving streaming video data from a video server, wherein the streaming video data is destined for a mobile node;
receiving radio frequency (RF) information from a baseband unit of a transceiver, wherein the baseband unit receives the RF information from a remote radio head of the transceiver, and wherein the RF information indicates a wireless power level of the mobile node and describes a radio link between the mobile node and the remote radio head;
upon detecting, based on the RF information, a change in the wireless power level of the mobile node, modifying video compression of the streaming video data by modifying a video codec rate to match an effective channel data rate of the radio link;
compressing, at a video optimization server, the streaming video data based on the modified video codec rate and policy information regarding the mobile node; and
transmitting the compressed streaming video data to the mobile node through the baseband unit.
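A minimal sketch of the rate-matching step, assuming any change in the reported power level triggers the codec update (names and units are illustrative, not from the claim):

```python
def updated_codec_rate(codec_rate_bps, prev_power_dbm, power_dbm,
                       effective_channel_rate_bps):
    """On a change in the mobile node's wireless power level, re-match the
    video codec rate to the effective data rate of the radio link; otherwise
    keep the current rate."""
    if power_dbm != prev_power_dbm:
        return effective_channel_rate_bps
    return codec_rate_bps
```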

US Pat. No. 10,972,772

VARIABLE BIT VIDEO STREAMS FOR ADAPTIVE STREAMING

NETFLIX, INC., Los Gatos...

1. A method, comprising:computing a first estimated bandwidth available for downloading digital content from one or more content servers during a first time window and a second estimated available bandwidth available for downloading the digital content from the one or more content servers during a second time window based on at least one actual bandwidth that was available for downloading the digital content from the one or more content servers during one or more previous time windows;
computing a bandwidth variability based on at least one estimated bandwidth computed for the one or more previous time windows;
determining from a scene complexity map a first complexity level for the digital content within the first time window and a second complexity level for the digital content within the second time window; and
selecting a first encoded portion of the digital content to download for playback during the first time window from a first content stream included in a plurality of encoded content streams based on the bandwidth variability, at least one of the first complexity level and the second complexity level, and at least one of the first estimated bandwidth available during the first time window and the second estimated bandwidth available during the second time window.
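The selection logic can be illustrated with a toy Python sketch. The variability discount and the 0.8 low-complexity factor are illustrative assumptions, not values from the claim:

```python
def estimate_bandwidth(observed_bps):
    """Estimate the next window's bandwidth from actual bandwidth observed
    in previous windows (simple mean)."""
    return sum(observed_bps) / len(observed_bps)

def bandwidth_variability(estimates_bps):
    """Population standard deviation of per-window bandwidth estimates."""
    mean = sum(estimates_bps) / len(estimates_bps)
    return (sum((e - mean) ** 2 for e in estimates_bps) / len(estimates_bps)) ** 0.5

def select_stream(bitrates_bps, est_bw_bps, variability_bps, complexity):
    """Discount the budget by the variability; let a low-complexity window
    settle for a lower rung, then pick the highest bitrate that fits."""
    budget = est_bw_bps - variability_bps
    if complexity == "low":
        budget *= 0.8
    candidates = [b for b in sorted(bitrates_bps) if b <= budget]
    return candidates[-1] if candidates else min(bitrates_bps)
```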

US Pat. No. 10,972,771

BROADCASTING SIGNAL TRANSMISSION DEVICE, BROADCASTING SIGNAL RECEPTION DEVICE, BROADCASTING SIGNAL TRANSMISSION METHOD, AND BROADCASTING SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...

1. A method of processing a broadcast signal in a transmitting system, the method comprising:encoding broadcast data for broadcast services;
encoding first signaling information providing information for discovery and acquisition of a broadcast service;
encoding second signaling information including service entries for the broadcast services,
wherein one of the service entries includes a first service identifier for identifying the broadcast service and bootstrap information for discovering a transport session carrying the first signaling information, and
wherein the bootstrap information includes destination Internet Protocol (IP) address information of the transport session carrying the first signaling information and destination port number information of the transport session carrying the first signaling information;
encoding physical layer signaling information including first physical layer signaling information having a fixed size and second physical layer signaling information having a variable size,
wherein the first physical layer signaling information includes information for indicating a size of the second physical layer signaling information, and
wherein the second physical layer signaling information includes information required to decode the broadcast data and information related to the second signaling information; and
transmitting the broadcast signal including the broadcast data, the first signaling information, the second signaling information, and the physical layer signaling information,
wherein the first signaling information includes User Service Description (USD) information having a second service identifier for identifying the broadcast service and name information of the broadcast service, and
wherein a value of the second service identifier is identical to that of the first service identifier.

US Pat. No. 10,972,770

METHOD FOR ENCRYPTING DATA STREAMS WITH NEGOTIABLE AND ADAPTABLE ENCRYPTION LEVELS

Citrix Systems, Inc., Fo...

1. A method of encryption, the method comprising:determining, by a server according to resources available to the server, a first set of one or more levels of data encryption that the server is capable of handling on data to be communicated with a client for a first time instance, wherein a level of data encryption comprises a type of encryption and a strength of the type of data encryption;
receiving, by the server from the client, a second set of one or more levels of data encryption that the client is capable of handling on the data according to resources available to the client for the first time instance;
selecting, by the server, a first level of data encryption for the first time instance that is in both the determined first set of one or more levels of data encryption and the received second set of one or more levels of data encryption, with which the server and the client agree to proceed; and
identifying, by the server in communication with the client following an interval, an updated level of data encryption for a second time instance, that is in both another first set of one or more levels of data encryption for the second time instance and another second set of one or more levels of data encryption for the second time instance, with which the server and the client agree to proceed.
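Both negotiation rounds reduce to intersecting the parties' capability sets. A minimal sketch, assuming each level is a (type, strength) pair and that the strongest common level is preferred (the claim does not fix a preference policy):

```python
def negotiate_level(server_levels, client_levels):
    """Pick a level of data encryption present in both capability sets;
    preferring the greatest strength is an illustrative policy."""
    common = server_levels & client_levels
    if not common:
        raise ValueError("no common encryption level")
    return max(common, key=lambda level: level[1])
```

Re-running the same function after the interval with the refreshed capability sets yields the updated level for the second time instance.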

US Pat. No. 10,972,769

SYSTEMS AND METHODS FOR DIFFERENTIAL MEDIA DISTRIBUTION

Disney Enterprises, Inc.,...

1. A system for receiving media content, the system comprising:a communications network for distributing the media content;
a media repository for storing the media content; and
a receiver communicatively coupled to a differential versioning decoder, the differential versioning decoder comprising a processor and a non-transitory computer-readable medium with computer readable instructions embedded thereon, wherein the computer readable instructions, when executed, cause the processor to:
receive a first version of a media file comprising a first set of data having a first set of attributes, the first set of attributes selected from a group of attribute classes;
receive a first differential data file comprising a set of differential data, the set of differential data comprising differences between the first version of the media file and a second version of the media file having a second set of attributes selected from the group, the second set of attributes differing from the first set of attributes in terms of attributes of a first attribute class, wherein the first differential data file is generated by a differential versioning server based on a first difference map, wherein the first difference map is generated based on the differences between the first version of the media file and the second version of the media file;
regenerate the second version of the media file by applying the first differential data file to the first version of the media file; and
regenerate a third version of the media file by applying a second differential data file to the second version of the media file, the third version of the media file having a third set of attributes selected from the group, the third set of attributes differing from the second set of attributes in terms of attributes of a second attribute class.
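Regenerating versions by chained application of differential files can be sketched with attribute maps standing in for media data (a toy model of the difference map, not the patent's format):

```python
def apply_differential(version, diff):
    """version: attribute -> data; diff: attribute -> new data (None removes
    the attribute). Applying a diff regenerates the next version."""
    out = dict(version)
    for attr, data in diff.items():
        if data is None:
            out.pop(attr, None)
        else:
            out[attr] = data
    return out
```

Chaining two applications mirrors the claim: the second version is regenerated from the first, then the third from the second.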

US Pat. No. 10,972,768

DYNAMIC REBALANCING OF EDGE RESOURCES FOR MULTI-CAMERA VIDEO STREAMING

Intel Corporation, Santa...

1. An edge compute node, comprising:a network interface to communicate over a network;
a memory; and
processing circuitry to:
receive, via the network interface, an incoming video stream captured by a camera, wherein the incoming video stream comprises a plurality of video segments;
store the plurality of video segments in a receive buffer in the memory;
perform a visual computing task on a first video segment in the receive buffer;
detect a resource overload on the edge compute node, wherein the resource overload causes the edge compute node not to perform the visual computing task on a second video segment in the receive buffer;
receive, via the network interface, load information corresponding to a plurality of peer compute nodes;
select a peer compute node to perform the visual computing task on the second video segment, wherein the peer compute node is selected from the plurality of peer compute nodes based on the load information;
replicate, via the network interface, the second video segment from the edge compute node to the peer compute node; and
receive, via the network interface, a compute result from the peer compute node, wherein the compute result is based on the peer compute node performing the visual computing task on the second video segment.
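A minimal sketch of the offload decision, assuming load is reported as a fraction per peer and the least-loaded peer is selected (the claim leaves the selection policy open):

```python
def pick_peer(load_info):
    """load_info: peer id -> load fraction; choose the least-loaded peer."""
    return min(load_info, key=load_info.get)

def rebalance(segments, local_capacity, load_info):
    """Keep the segments that fit locally; pair each overflow segment with a
    peer chosen from the reported load information (replication is elided)."""
    local = segments[:local_capacity]
    offloaded = [(seg, pick_peer(load_info)) for seg in segments[local_capacity:]]
    return local, offloaded
```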

US Pat. No. 10,972,767

DEVICE AND METHOD OF HANDLING MULTIPLE FORMATS OF A VIDEO SEQUENCE

Realtek Semiconductor Cor...

1. A transmitter for handling multiple formats of a video sequence, comprising:a preprocessing module, for receiving a first format of a video sequence, to generate metadata of a second format of the video sequence according to the first format of the video sequence and the second format of the video sequence, wherein the metadata provides additional information by describing a relation between the first format of the video sequence and the second format of the video sequence, and the first format of the video sequence is standard dynamic range (SDR), and the second format of the video sequence is high dynamic range (HDR); and
an encoder, couple to the preprocessing module, for transmitting the first format of the video sequence and the metadata in a bit stream to a receiver.

US Pat. No. 10,972,766

METHOD AND SYSTEM FOR REMOTELY CONTROLLING CONSUMER ELECTRONIC DEVICE

Gracenote, Inc., Emeryvi...

1. A method for use in connection with a client device that is configured to present media content sequences, the method comprising:receiving, by the client device, one or more reference fingerprints, wherein the one or more reference fingerprints is associated with channel data for a particular channel;
receiving, by the client device, one or more media content sequences;
generating, by the client device, one or more fingerprints of the received one or more media content sequences;
detecting a match between (i) the received one or more reference fingerprints and (ii) the generated one or more fingerprints; and
responsive to detecting the match, sending, by the client device, to one or more server devices, a message that comprises the channel data.
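The fingerprint-and-match flow can be sketched with CRC32 as a stand-in fingerprint; production systems use robust perceptual hashes, and the match threshold here is an illustrative assumption:

```python
import zlib

def fingerprint(frames):
    """Toy per-frame fingerprint over frame data (strings here)."""
    return [zlib.crc32(frame.encode()) for frame in frames]

def matches(reference_fps, generated_fps, min_hits=3):
    """Declare a match when enough generated fingerprints appear in the
    reference fingerprint set."""
    reference = set(reference_fps)
    return sum(fp in reference for fp in generated_fps) >= min_hits
```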

US Pat. No. 10,972,765

CONDITIONING SEGMENTED CONTENT

Comcast Cable Communicati...

1. A method comprising:receiving information associated with an advertising opportunity, wherein the information indicates a location in a content asset to insert the advertising opportunity, wherein the content asset comprises a plurality of content fragments;
determining, based on the received information, a first modified content fragment having a first playback duration and a second modified content fragment having a second playback duration that is different than the first playback duration, wherein one of the first modified content fragment and the second modified content fragment comprises at least a portion of a first fragment of the plurality of content fragments and at least a portion of a second fragment of the plurality of content fragments; and
causing sending of the first modified content fragment and the second modified content fragment.

US Pat. No. 10,972,764

METHOD AND SYSTEM FOR REMOTELY CONTROLLING CONSUMER ELECTRONIC DEVICES

Gracenote, Inc., Emeryvi...

1. A method for use in connection with a client device that is configured to present media content sequences, the method comprising:receiving, by the client device, one or more reference fingerprints;
receiving, by the client device, one or more media content sequences;
generating, by the client device, one or more fingerprints of the received one or more media content sequences;
detecting that no match exists between (i) the received one or more reference fingerprints and (ii) the generated one or more fingerprints; and
responsive to the detecting, sending, by the client device, to one or more server devices, a request comprising the generated one or more fingerprints for comparison with one or more reference fingerprints stored in association with the one or more server devices.

US Pat. No. 10,972,763

METHOD AND SYSTEM FOR REMOTELY CONTROLLING CONSUMER ELECTRONIC DEVICE

Gracenote, Inc., Emeryvi...

1. A method for use in connection with a client device that is configured to present media content sequences, the method comprising:sending, by the client device, usage data for the client device;
responsive to sending the usage data, receiving, by the client device, a subset of reference fingerprints selected from among a plurality of reference fingerprints and selected based on the usage data;
receiving, by the client device, one or more media content sequences;
generating, by the client device, one or more fingerprints of the received one or more media content sequences; and
detecting, by the client device, a match between (i) one or more reference fingerprints in the received subset of reference fingerprints and (ii) the generated one or more fingerprints.

US Pat. No. 10,972,762

SYSTEMS AND METHODS FOR MODIFYING DATE-RELATED REFERENCES OF A MEDIA ASSET TO REFLECT ABSOLUTE DATES

Rovi Guides, Inc., San J...

1. A method for presenting absolute dates for media assets based on date-related references in the media assets, the method comprising:retrieving a media asset;
causing display of the media asset;
parsing the media asset to identify a date-related reference presented in a portion of the media asset;
determining a context of the portion of the media asset based on information associated with the media asset;
determining an absolute date of the date-related reference based on the context of the portion of the media asset;
determining a position of the date-related reference in the portion of the media asset; and
causing display of the absolute date at the determined position of the portion of the media asset, wherein causing the display of the absolute date replaces the display of the date-related reference.
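Resolving a date-related reference against a context date can be sketched for a small, hypothetical vocabulary (the claim does not enumerate reference types):

```python
import datetime as dt

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve_date_reference(reference, context_date):
    """Map a date-related reference to an absolute date using the context
    date determined from information associated with the media asset."""
    ref = reference.lower()
    if ref == "today":
        return context_date
    if ref == "tomorrow":
        return context_date + dt.timedelta(days=1)
    if ref == "yesterday":
        return context_date - dt.timedelta(days=1)
    if ref.startswith("next "):
        target = WEEKDAYS.index(ref[len("next "):])
        days_ahead = (target - context_date.weekday()) % 7 or 7
        return context_date + dt.timedelta(days=days_ahead)
    raise ValueError(f"unrecognized date reference: {reference}")
```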

US Pat. No. 10,972,761

MINIMIZING STALL DURATION TAIL PROBABILITY IN OVER-THE-TOP STREAMING SYSTEMS

PURDUE RESEARCH FOUNDATIO...

1. A method comprising:receiving, by a processing system of an edge router deployed in a content distribution network comprising the edge router, a plurality of cache servers, a data center, a first plurality of streams connecting the data center to the plurality of cache servers, and a second plurality of streams connecting the plurality of cache servers to the edge router, a request from a first user endpoint device for a first multimedia chunk file of a plurality of multimedia chunk files collectively making up an item of multimedia content;
determining, by the processing system, that a portion of the first multimedia chunk file is not stored in a cache of the edge router;
determining, by the processing system, that the cache is at a capacity threshold;
selecting, by the processing system, a second multimedia chunk file to evict from the cache, wherein the second multimedia chunk file is one of a plurality of multimedia chunk files stored in the cache;
evicting, by the processing system, the second multimedia chunk file from the cache;
calculating, by the processing system for each combination of a plurality of combinations of one stream of the first plurality of streams, one cache server of the plurality of cache servers, and one stream of the second plurality of streams, an estimated stall duration of a playback of the first multimedia chunk file on the first user endpoint device, wherein the estimated stall duration is calculated as T_(i,j,ν_j,β_j)^(L_i) − d_s − (L_i − 1)τ, wherein i is the first multimedia chunk file, L_i is the portion of the first multimedia chunk file, T_(i,j,ν_j,β_j)^(L_i) is a time at which the portion of the first multimedia chunk file begins to play at the first user endpoint device given that the portion of the first multimedia chunk file is downloaded from each combination, d_s is a startup delay of a playback of the portion of the first multimedia chunk file on the first user endpoint device, and τ is an amount of time to play all portions of the first multimedia chunk file prior to the portion of the first multimedia chunk file plus the time to play the portion of the first multimedia chunk file; and
downloading, by the processing system, the portion of the first multimedia chunk file from a first combination of the plurality of combinations of one stream of the first plurality of streams, one cache server of the plurality of cache servers, and one stream of the second plurality of streams, wherein the first combination has a lowest estimated stall duration of the plurality of combinations.
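The exhaustive search over (first-tier stream, cache server, second-tier stream) combinations reduces to a minimum over a Cartesian product; here `stall_fn` stands in for the claim's estimated stall duration:

```python
from itertools import product

def min_stall_combination(stall_fn, first_streams, cache_servers, second_streams):
    """Evaluate the estimated stall duration for every (first-tier stream,
    cache server, second-tier stream) combination and return the cheapest."""
    return min(product(first_streams, cache_servers, second_streams),
               key=lambda combo: stall_fn(*combo))
```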

US Pat. No. 10,972,760

SECURE TESTING OF VEHICLE ENTERTAINMENT SYSTEMS FOR COMMERCIAL PASSENGER VEHICLES

PANASONIC AVIONICS CORPOR...

1. A method of performing secure testing of devices in a commercial passenger vehicle, the method comprising:
receiving a first set of username and password associated with a person;
performing a first determination that the first set of username and password associated with the person is authorized to access a test software;
receiving a second set of username and password to access the test software;
performing a second determination that the second set of username and password enables access to the test software;
receiving a third set of username and password to access a server located in the commercial passenger vehicle;
sending the third set of username and password to the server;
receiving, from the server, an indication that the third set of username and password enables secure access to the server, wherein the secure access to the server enables access to media playback devices associated with a vehicle entertainment system in the commercial passenger vehicle, and wherein the media playback devices are located on or in a plurality of seats in the commercial passenger vehicle; and
displaying a test-related graphical user interface (GUI) that facilitates testing of one or more groups of media playback devices in the commercial passenger vehicle, wherein the test-related GUI is displayed in response to the performing the first determination, the performing the second determination, and the receiving the indication.

US Pat. No. 10,972,759

COLOR APPEARANCE PRESERVATION IN VIDEO CODECS

Dolby Laboratories Licens...

1. A method comprising:receiving a standard dynamic range (SDR) image and a reference backward reshaping mapping used to generate a reconstructed high dynamic range (HDR) image from the SDR image, the reference backward reshaping mapping comprising a reference luma backward reshaping mapping from SDR codewords into HDR codewords in the reconstructed HDR image;
using a color preservation mapping function with a set of color preservation mapping inputs generated from the SDR image and the reference backward reshaping mapping to determine a plurality of luminance increases for a plurality of SDR luma histogram bins of an SDR luma histogram, the SDR luma histogram being generated based on luma codewords in the SDR image;
generating a modified backward reshaping mapping used to generate a color-preserved reconstructed HDR image from the SDR image, the modified backward reshaping mapping comprising a modified luma backward reshaping mapping generated from the reference backward reshaping function based on the plurality of luminance increases for the plurality of SDR luma histogram bins of the SDR luma histogram;
encoding and transmitting the SDR image and the modified backward reshaping mapping into an SDR video signal; and
causing a display HDR image for rendering on a display device to be generated by a recipient device of the SDR video signal based at least in part on the SDR image and the modified backward reshaping mapping in the SDR video signal.

US Pat. No. 10,972,758

MULTI-TYPE-TREE FRAMEWORK FOR TRANSFORM IN VIDEO CODING

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising:receiving an encoded video bitstream that forms a representation of a coded picture of the video data;
determining a partitioning of the coded picture of the video data into a plurality of coded units, the partitioning being according to a first tree structure and the plurality of coded units including a leaf node of the first tree structure;
determining that a residual block of the leaf node is recursively split into a plurality of transform units according to a second tree structure comprising at least one of a binary tree, an asymmetric binary tree, or a triple tree;
decoding, from the encoded video bitstream, a first syntax element for the leaf node indicating that only a single transform unit partition structure is allowed in the second tree structure for the residual block of the leaf node, the single transform structure being one of the binary tree, the asymmetric binary tree, or the triple tree;
decoding, from the encoded video bitstream, a second syntax element for a subset of the plurality of transform units indicative of the structure into which the corresponding group of the transform unit is split;
determining the second tree structure for the residual block based on the first and second syntax elements; and
reconstructing the coded picture based on the determined first and second tree structures.

US Pat. No. 10,972,757

METHOD AND APPARATUS FOR ENCODING/DECODING IMAGES

LG ELECTRONICS INC., Seo...

1. A video decoding method performed by a decoding apparatus, the method comprising:receiving a bitstream including network abstraction layer (NAL) unit type information and temporal identifier (ID) information;
determining a NAL unit type of a leading picture as one of NAL unit types, based on the NAL unit type information, wherein the leading picture precedes an associated random access point picture in output order;
deriving a temporal ID of the leading picture based on the temporal ID information;
configuring a reference picture set, including RefPicSetStCurrBefore, RefPicSetStCurrAfter and RefPicSetLtCurr, for inter prediction with regard to a picture which follows the leading picture in decoding order, based on the NAL unit type and the temporal ID of the leading picture;
performing the inter prediction on a current block in the picture based on the reference picture set; and
reconstructing the picture based on prediction samples of the current block derived by a result of the inter prediction,
wherein the NAL unit types include a first NAL unit type representing a referenced decodable leading picture and a second NAL unit type representing a non-referenced decodable leading picture, and
wherein the leading picture with the second NAL unit type is not included in any of the RefPicSetStCurrBefore, the RefPicSetStCurrAfter and the RefPicSetLtCurr of the picture with a same value of the temporal ID.

US Pat. No. 10,972,756

SIGNAL RESHAPING AND CODING FOR HDR AND WIDE COLOR GAMUT SIGNALS

Dolby Laboratories Licens...

1. A method to generate video data in a decoder, the method comprising:receiving an input bitstream comprising a sequence parameter set (SPS) data, wherein the SPS data comprises information indicating whether adaptive reshaping is enabled or not in the input bitstream;
parsing the SPS data; and upon detecting that adaptive reshaping is enabled in the input bitstream:
extracting adaptive reshaping metadata from the input bitstream, wherein the adaptive reshaping metadata comprise at least parameters related to a piece-wise polynomial representation of a reshaping function;
generating the reshaping function based on the adaptive reshaping metadata; and
decoding the input bitstream to generate an output decoded signal based on the reshaping function.

US Pat. No. 10,972,755

METHOD AND SYSTEM OF NAL UNIT HEADER STRUCTURE FOR SIGNALING NEW ELEMENTS

MediaTek Singapore Pte. L...

1. A method of video encoding, the method comprising:receiving video data at an encoding device, wherein a GDR (Gradual Decoding Refresh) picture type is supported by the encoding device;
generating a syntax structure including
a first syntax in an NAL (Network Abstraction Layer) unit header of an NAL unit, the first syntax indicating an NAL unit type selected from a group of types including the GDR picture type,
a second syntax in the NAL unit header, the second syntax indicating whether a set of NAL unit header field data is included in the NAL unit header, and
in a case that the first syntax indicates the GDR picture type and the second syntax indicates that the set of NAL unit header field data is included in the NAL unit header, a refreshed region flag included in the set of NAL unit header field data, the refreshed region flag indicating whether a corresponding coded image area contained in the NAL unit belongs to a refreshed region in a current picture; and
generating encoded video data arranged in the NAL unit according to the syntax structure and the video data.

US Pat. No. 10,972,754

ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD OF GENERATING POWER OF 2 TRANSFORM BLOCK SIZES

PANASONIC INTELLECTUAL PR...

1. An encoder, comprising:circuitry; and
memory, wherein
using the memory, the circuitry:
performs first determination processing of determining whether a size of a current block to be subjected to transform processing is a power of 2;
performs the transform processing on the current block when the size is a power of 2; and
uses, for the current block, a transform skip mode for skipping the transform processing when the size is not a power of 2.
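The power-of-2 test and the resulting dispatch between the two branches of the claim can be sketched with the standard single-set-bit trick:

```python
def is_power_of_two(n):
    """A positive integer is a power of two iff it has exactly one set bit."""
    return n > 0 and (n & (n - 1)) == 0

def transform_mode(block_size):
    """Perform the transform when the block size is a power of 2; otherwise
    fall back to the transform skip mode, mirroring the claim's two branches."""
    return "transform" if is_power_of_two(block_size) else "transform_skip"
```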

US Pat. No. 10,972,753

VERSATILE TILE CODING FOR MULTI-VIEW VIDEO STREAMING

Apple Inc., Cupertino, C...

1. A video source device, comprising:storage for coded video representing multi-view video, the coded video including a manifest file identifying a plurality of segments of the multi-view video available for download and network locations from which the segments may be downloaded, wherein
the multi-view video is partitioned spatially into a plurality of tiles having sizes that are determined based on saliency of the content within their respective regions, and
each of the segments contains coded video representing content contained within a respective tile of the plurality of tiles.

US Pat. No. 10,972,752

STEREOSCOPIC INTERLEAVED COMPRESSION

Advanced Micro Devices, I...

1. An apparatus comprising:a transmitter configured to convey data; and
an encoder configured to use an interleaved transmission scheme in which the encoder is configured to:
receive a plurality of frames of a video stream, wherein each frame of the plurality of frames of the video stream includes a stereoscopic image;
generate an encoded first frame of a pair of frames by encoding a left-half of a first frame of the pair of frames with a lower resolution than a right-half of the first frame;
generate an encoded second frame of the pair of frames by encoding a right-half of the second frame with a lower resolution than a left-half of the second frame; and
transmit the encoded first frame and the encoded second frame.

US Pat. No. 10,972,751

VIDEO ENCODING APPARATUS AND METHOD, AND VIDEO DECODING APPARATUS AND METHOD

NIPPON TELEGRAPH AND TELE...

1. A video encoding apparatus that predictive-encodes an encoding target image included in an encoding target video, the apparatus comprising:a prediction device that predicts the encoding target image with reference to a previously-encoded picture as a reference picture and determines first reference information which indicates a first reference region as a reference destination;
a second reference information determination device that determines, from reference information used when the first reference region was predictive-encoded, second reference information which indicates a second reference region as another reference destination for the encoding target image;
a prediction method determination device that determines, based on a difference between the first reference region and a third reference region referred to when the first reference region was predictive-encoded, whether the first reference information is used or the first reference information and the second reference information are used to generate a predicted image; and
a predicted image generation device that generates the predicted image using the first reference information or using the first and second reference information in accordance with a determination result by the prediction method determination device.

US Pat. No. 10,972,750

INTRA PREDICTION METHOD OF CHROMINANCE BLOCK USING LUMINANCE SAMPLE, AND APPARATUS USING SAME

LG Electronics Inc., Seo...

1. An image decoding method using an intra prediction, performed by a decoding apparatus, the method comprising:obtaining image information including intra luma prediction mode information and intra chroma prediction mode information from a bitstream;
deriving an intra prediction mode of a luma block based on the intra luma prediction mode information;
deriving an intra prediction mode of a chroma block related to the luma block based on the intra chroma prediction mode information; and
generating a predicted block of the chroma block based on the intra prediction mode of the chroma block,
wherein the intra chroma prediction mode information represents one of intra chroma prediction mode indices, and the intra prediction mode of the chroma block is determined based on the represented one of the intra chroma prediction mode indices and the intra prediction mode of the luma block,
wherein the intra chroma prediction mode information represents a bin string related to the one of the intra chroma prediction mode indices including a first index to a fifth index,
wherein the first index represents an upper-right diagonal mode for the intra prediction mode of the chroma block based on the intra prediction mode of the luma block which is a planar mode, and the first index represents the planar mode for the intra prediction mode of the chroma block based on the intra prediction mode of the luma block which is not the planar mode, and
wherein the bin string is one of bin strings of the first index to the fifth index, bin strings of the first index to the fourth index are represented by ‘100’, ‘101’, ‘110’ and ‘111’,
respectively, and a bin string of the fifth index is represented by ‘0’.
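As an illustrative sketch of the mode derivation this claim describes: a bin string selects one of five indices, the fifth index inherits the luma mode, and the first index flips between planar and the upper-right diagonal mode depending on the luma mode. The numeric mode values (0 = planar, 34 = upper-right diagonal, 26 = vertical, 10 = horizontal, 1 = DC) follow HEVC conventions and are assumptions, not taken from the claim text.

```python
# Bin strings from the claim: first..fourth indices = '100','101','110','111',
# fifth index = '0'. Maps each bin string to an index 0..4.
BIN_STRINGS = {"100": 0, "101": 1, "110": 2, "111": 3, "0": 4}

PLANAR = 0
UPPER_RIGHT_DIAGONAL = 34          # assumed HEVC mode numbering
CANDIDATES = [PLANAR, 26, 10, 1]   # planar, vertical, horizontal, DC (assumed)

def derive_chroma_mode(bin_string, luma_mode):
    idx = BIN_STRINGS[bin_string]
    if idx == 4:
        # Fifth index: derive the chroma mode directly from the luma mode.
        return luma_mode
    if idx == 0:
        # First index per the claim: upper-right diagonal when the luma mode
        # is planar, planar otherwise.
        return UPPER_RIGHT_DIAGONAL if luma_mode == PLANAR else PLANAR
    return CANDIDATES[idx]
```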

US Pat. No. 10,972,749

SYSTEMS AND METHODS FOR RECONSTRUCTING FRAMES

Disney Enterprises, Inc.,...

1. A computer-implemented method comprising:generating, using an optical flow model, one or more displacement maps based on one or more reference frames and a target frame;
generating one or more warped frames based on the one or more reference frames and the one or more displacement maps;
generating a conditioned reconstruction model by training an initial reconstruction model using training content and one or more reconstruction parameters, wherein the training content comprises a training target frame and one or more training reference frames, and wherein the conditioned reconstruction model optimizes for the one or more reconstruction parameters; and
generating, using the conditioned reconstruction model, one or more blending coefficients and one or more reconstructed displacement maps based on the one or more displacement maps, the one or more warped frames, and the target frame.

US Pat. No. 10,972,748

METHOD AND APPARATUS FOR ENCODING MOTION INFORMATION AND METHOD AND APPARATUS FOR DECODING SAME

SAMSUNG ELECTRONICS CO., ...

1. A method of decoding a motion vector, the method comprising:obtaining a flag indicating whether a prediction mode of a current prediction unit is a merge mode which uses a motion vector included in merge motion vector candidates;
when the flag indicates that the prediction mode of the current prediction unit is the merge mode, obtaining the merge motion vector candidates by using a motion vector of a temporally neighboring prediction unit that is temporally related to the current prediction unit and motion vectors of spatially neighboring prediction units that are spatially related to the current prediction unit;
when a number of motion vectors included in the obtained merge motion vector candidates is smaller than n−1, wherein n is a predetermined integer number, adding a plurality of zero vectors to the obtained merge motion vector candidates so that the number of motion vectors included in the merge motion vector candidates reaches n;
obtaining an index indicating a motion vector from among the n motion vectors included in the merge motion vector candidates from a bitstream; and
obtaining a motion vector of the current prediction unit by using the motion vector indicated by the obtained index,
wherein the predetermined integer number n is determined based on information regarding the predetermined integer number n, the information being included in at least one of a sequence parameter set (SPS), a picture parameter set (PPS), and a slice header,
wherein the motion vectors of the spatially neighboring prediction units are scanned according to predetermined order, and a motion vector of a spatially neighboring prediction unit different from the motion vectors included in the merge motion vector candidates is added to the merge motion vector candidates.
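A minimal sketch of the candidate-list construction the claim describes: spatial motion vectors are scanned in a predetermined order with duplicates skipped, a temporal candidate is appended, and zero vectors pad the list up to n, so a decoded index always resolves. The scan order and the default n = 5 are illustrative assumptions.

```python
def build_merge_candidates(spatial_mvs, temporal_mv, n=5):
    """Build an n-entry merge motion vector candidate list (sketch)."""
    candidates = []
    for mv in spatial_mvs:                  # predetermined scan order
        if mv is not None and mv not in candidates:
            candidates.append(mv)           # only add distinct vectors
    if temporal_mv is not None and temporal_mv not in candidates:
        candidates.append(temporal_mv)
    while len(candidates) < n:              # pad with zero vectors up to n
        candidates.append((0, 0))
    return candidates[:n]

def decode_merge_mv(candidates, index):
    # The bitstream index directly selects one of the n candidates.
    return candidates[index]
```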

US Pat. No. 10,972,747

IMAGE OR VIDEO ENCODING AND DECODING

British Broadcasting Corp...

1. A method of encoding image or video content within a set of fixed or adaptable targets, which may include a target time to encode, a target complexity, a target output quality, or a target output bitrate, using an encoder having a plurality of coding configurations and utilising a plurality of coding tools each having a set of selectable options, the method comprising the steps of:selecting an initial coding configuration;
encoding a first part of the content using the encoder in the initial configuration;
determining content based usage measures for the initial configuration;
deriving from those measures predictions of the time difference between the time taken to encode content using the initial coding configuration and the time taken to encode content using at least some of the other coding configurations;
determining from the predictions of the time difference and the given targets a second coding configuration meeting given targets; and
encoding a second or subsequent part of the content using the second or subsequent coding configuration;
wherein options are selected dynamically for each tool during encoding by testing of options, wherein said coding configurations differ one from the other in the number of options tested for each of one or more tools; and
wherein the step of determining content based usage measures for the initial configuration comprises measuring the number of times a tool is used and measuring a representative time taken to test an option for a tool.

US Pat. No. 10,972,746

METHOD OF COMBINING IMAGE FILES AND OTHER FILES

Shuttersong Incorporated,...

1. A method of presenting a combined image and non-image data file useful in reliable data transmission across a computer network comprising the steps of:receiving a combined image and non-image data file by at least one computer, the combined image and non-image data file having a quantity of image data, and a quantity of non-image data, the non-image data comprising an encrypted data file container containing at least a non-image file, the data file container being the non-image data combined with the image data, and being written to the combined image and non-image file immediately after an end-of-file marker of the quantity of image data thereby reducing a likelihood of data loss and corruption, and increasing a reliability of a successful data transmission of the combined image and non-image file across the computer network, the image file being unchanged from an original state;
the at least one computer capable of:
presenting only the image file using a display of the at least one computer without separating the image file from the combined image and non-image data file;
reading the combined image and non-image data file by the at least one computer to identify the image file portion, the reading being performed starting at a beginning of the combined image and non-image file and ending at the end-of-file marker of the quantity of image data; the quantity read from beginning to the end-of-file marker being the image file;
reading the combined image and non-image data file by the at least one computer to identify the non-image file portion, the reading being performed starting immediately after the end-of-file marker of the image file, and ending at an end of the combined image and non-image data file;
decrypting the encrypted data file container;
extracting the non-image file from the data file container based on information contained in the data file container;
presenting the image file using the display of the at least one computer; and
presenting the non-image file using the at least one computer at the same time as the presenting of the image file.
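The file layout in this claim, with the container written immediately after the image's end-of-file marker, can be sketched as follows. The sketch assumes a JPEG image (end-of-file marker = the EOI sequence 0xFFD9), and the XOR "cipher" is a purely illustrative stand-in for the claimed encrypted container, not the patented scheme.

```python
JPEG_EOI = b"\xff\xd9"  # assumed end-of-file marker (JPEG EOI)

def split_combined_file(data):
    """Split a combined file into (image bytes, container bytes)."""
    end = data.find(JPEG_EOI)
    if end < 0:
        raise ValueError("no end-of-file marker found")
    end += len(JPEG_EOI)
    # Everything up to and including the marker is the unchanged image;
    # everything after it is the appended data file container.
    return data[:end], data[end:]

def decrypt_container(container, key=0x5A):
    # Placeholder symmetric transform; the claim only requires that the
    # appended container be encrypted, not any particular cipher.
    return bytes(b ^ key for b in container)
```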

US Pat. No. 10,972,745

DECODING DEVICE AND ENCODING DEVICE

Kabushiki Kaisha Toshiba,...

1. An encoding device comprising:an encoder to generate encoded data;
a generator to generate first color-difference format information, second color- difference format information, and filter information, the first color-difference format information indicating a resolution of a color-difference component of the encoded data, the second color-difference format information indicating a resolution of a color-difference component used when reproducing a decoded image obtained by decoding the encoded data, the filter information indicating a configuration of a filter, the resolution of the color- difference component of each of the first color-difference format information and the second color-difference format information indicating a color-difference format of a 4:4:4 format, a 4:2:2 format, or a 4:2:0 format; and
an outputter to output transmission information including the encoded data, the first color-difference format information, the second color-difference format information, and the filter information.

US Pat. No. 10,972,744

IMAGE SCALING

ANALOG DEVICES INTERNATIO...

1. A video processor, comprising:an input buffer to receive an input image;
a slicer circuit to divide the input image into a plurality of N vertical slices;
N parallel image scalers, wherein each scaler is hardware configured to linewise scale one of the N vertical slices according to an image scaling algorithm; and
an output multiplexer to combine the scaled vertical slices into a combined scaled output image.
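The slicer / parallel-scaler / output-multiplexer pipeline in this claim can be sketched in software as below. Nearest-neighbour line-wise scaling and the slice count are illustrative assumptions; the claim leaves the scaling algorithm and N open, and in the apparatus each scaler is a hardware unit running in parallel rather than a loop.

```python
def slice_image(image, n):
    """Divide an image (list of pixel rows) into n vertical slices."""
    width = len(image[0])
    bounds = [round(i * width / n) for i in range(n + 1)]
    return [[row[bounds[i]:bounds[i + 1]] for row in image] for i in range(n)]

def scale_slice(sl, factor):
    # Line-wise (horizontal) nearest-neighbour scaling of one vertical slice.
    return [[row[int(x / factor)] for x in range(int(len(row) * factor))]
            for row in sl]

def combine_slices(slices):
    # Output multiplexer: stitch the scaled slices back together row by row.
    return [sum((sl[r] for sl in slices), []) for r in range(len(slices[0]))]
```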

US Pat. No. 10,972,743

METHOD FOR DECODING IMAGE AND APPARATUS USING SAME

LG ELECTRONICS INC., Seo...

1. A picture decoding method, by a decoding apparatus, comprising:receiving a picture parameter set (PPS) comprising a num_extra_slice_header_bits syntax element specifying a number of extra bits for a slice segment header;
receiving the slice segment header, wherein the slice segment header comprises zero or more reserved flags when a current slice segment is not a dependent slice segment, wherein a number of the reserved flags in the slice segment header is the same as the number of extra bits which is determined based on the num_extra_slice_header_bits syntax element; and
decoding a picture based on the slice segment header,
wherein the picture parameter set further comprises a first picture parameter set identifier syntax element specifying an identifier (ID) of the picture parameter set,
wherein the slice segment header further comprises a second picture parameter set identifier syntax element indicating the identifier (ID) of the picture parameter set, and
wherein the slice segment header comprises a dependent slice segment flag representing whether the current slice segment is the dependent slice segment.

US Pat. No. 10,972,742

ENCODING PROCESS USING A PALETTE MODE

Canon Kabushiki Kaisha, ...

1. A method for processing a current block of pixels of an image using a current palette of a palette coding mode, the current palette comprising a set of entries associating respective entry indexes with corresponding pixel values, the method comprising the steps of:generating an input palette from the pixels of the current block;
modifying the input palette to output the current palette;
wherein the modifying step includes substituting an entry of the input palette with an entry of a palette predictor if a predetermined criterion on the entry is met.
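A hedged sketch of the modifying step this claim recites: each input-palette entry is substituted by a palette-predictor entry when a criterion on the entry is met. The specific criterion used here (the closest predictor entry lies within a small distance, so the entry can be coded by reference instead of explicitly) is an illustrative assumption; the claim leaves the criterion open.

```python
def modify_palette(input_palette, predictor, threshold=2):
    """Output the current palette from an input palette and a predictor."""
    current = []
    for entry in input_palette:
        # Find the predictor entry closest to this input entry.
        best = min(predictor, key=lambda p: abs(p - entry))
        # Substitute when the (assumed) closeness criterion is met.
        current.append(best if abs(best - entry) <= threshold else entry)
    return current
```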

US Pat. No. 10,972,741

IMAGE ENCRYPTION THROUGH DYNAMIC COMPRESSION CODE WORDS

United States Postal Serv...

1. A computer-implemented method comprising:identifying, by at least one of an optical scanning device, a control server, an image processing server, or a computing device configured to transmit or receive image data, a detail type for an image file, wherein the detail type comprises at least one of an image property or a compression detail;
obtaining, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, a count of records corresponding to the detail type;
determining, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, that the count of records exceeds a threshold;
receiving or retrieving, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, a set of raw image files corresponding to the detail type;
identifying, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, current compression and encryption information corresponding to the detail type;
generating, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, updated compression and encryption information based at least in part on the current compression and encryption information;
compressing, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, the image file based on the current compression information so as to generate a first compressed image file;
compressing, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, the image file based on the updated compression information so as to generate a second compressed image file;
comparing, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, compression details of the first compressed image file and the second compressed image file; and
in response to determining that the second compressed image file has a processing metric exceeding that of the first compressed image file, replacing, by the at least one of the optical scanning device, the control server, the image processing server, or the computing device, the current compression and encryption information with the updated compression and encryption information.

US Pat. No. 10,972,740

METHOD FOR BANDWIDTH REDUCTION WHEN STREAMING LARGE FORMAT MULTI-FRAME IMAGE DATA

Forcepoint, LLC, Austin,...

1. A computer-implementable method for performing a bandwidth reduction operation, comprising:receiving a plurality of streams of high-density image frames from a respective plurality of monitored devices, each of the plurality of monitored devices comprising a protected endpoint, each of the plurality of streams of high-density image frames representing a full resolution screen capture of a display of a monitored device from the respective plurality of monitored devices, the protected endpoint comprising an endpoint agent executing on an endpoint device, the endpoint agent being implemented to provide a common infrastructure for a pluggable feature pack, the pluggable feature pack providing a security management function, the pluggable feature pack comprising a frame capture pack, the frame capture pack providing the security management function of capturing high-density image frames in response to an occurrence of a particular user behavior;
storing the plurality of streams of high-density image frames within a monitored content repository;
identifying a subset of the plurality of streams of high-density image frames for increased scrutiny, the subset of the plurality of streams of high-density image frames being identified for increased scrutiny in response to a notification of suspicious user behavior, the suspicious user behavior including a user interaction with certain high-density content; and,
presenting a portion of the subset of the plurality of streams of high-density image frames within a scalable viewport for investigation by a security analyst, the portion of the subset of the plurality of streams of high-density image frames comprising the certain high-density content captured by the protected endpoint in response to the occurrence of the particular user behavior, the scalable viewport comprising a viewport implemented to scale a viewable area and resolution of a particular viewport, the scalable viewport being scaled to present the certain high-density content larger or smaller.

US Pat. No. 10,972,739

LOW-COMPLEXITY TWO-DIMENSIONAL (2D) SEPARABLE TRANSFORM DESIGN WITH TRANSPOSE BUFFER MANAGEMENT

TEXAS INSTRUMENTS INCORPO...

1. A method comprising:entropy decoding a block of transform coefficients from an encoded video bit stream;
applying a first one-dimensional (1D) inverse transform of a two-dimensional (2D) separable inverse transform to the block of transform coefficients;
reducing a bit width of each intermediate result of applying the first 1D inverse transform, wherein the reduced bit width of a first intermediate result and the reduced bit width of a second intermediate result are different, wherein reducing the bit width comprises scaling the bit width of each intermediate result based on a predetermined shift amount; and clipping the scaled bit width of each intermediate result to a predetermined bit width to attain a final bit width, wherein the final bit width for a first scaled intermediate result is different from the final bit width for a second scaled intermediate result;
storing the reduced bit width intermediate results in a transpose buffer; and
applying a second 1D inverse transform of the 2D separable inverse transform to the reduced bit width intermediate results to recover a block of residual values.
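The bit-width reduction between the two 1D inverse transforms can be sketched as a scale-then-clip step, as the claim describes: each intermediate result is shifted by a predetermined amount and clipped to a predetermined bit width before being stored in the transpose buffer. The shift of 7 and the 16-bit default below are illustrative assumptions.

```python
def reduce_bit_width(value, shift=7, bits=16):
    """Scale an intermediate result by `shift` and clip it to `bits` bits."""
    scaled = (value + (1 << (shift - 1))) >> shift   # scale with rounding
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, scaled))                  # clip to the final width
```

Per the claim, different intermediate results may be reduced to different final widths, which here just means calling the function with different `bits` values for different positions in the transpose buffer.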

US Pat. No. 10,972,738

VIDEO ENCODING APPARATUS HAVING RECONSTRUCTION BUFFER WITH FIXED SIZE AND/OR BANDWIDTH LIMITATION AND ASSOCIATED VIDEO ENCODING METHOD

MEDIATEK INC., Hsin-Chu ...

1. A video encoding apparatus comprising:a data buffer; and
a video encoding circuit, arranged to encode a plurality of frames into a bitstream, wherein each frame comprises a plurality of coding units, each coding unit comprises a plurality of pixels, the frames comprise a first frame and a second frame, encoding of the first frame comprises:
deriving a plurality of reference pixels of a reference frame from a plurality of reconstructed pixels of the first frame, respectively; and
storing reference pixel data into the data buffer for inter prediction, wherein the reference pixel data comprise information of pixel values of the reference pixels; and
encoding of the second frame comprises:
performing prediction upon a coding unit in the second frame to determine a target predictor for the coding unit, comprising:
generating a checking result by checking if a search range on the reference frame for finding a predictor of the coding unit under an inter prediction mode includes at least one reference pixel of the reference frame that is not accessible to the video encoding circuit; and
determining the target predictor for the coding unit according to the checking result;
wherein the checking result indicates that one part of reference pixels within the search range are accessible to the video encoding circuit and another part of reference pixels within the search range are not accessible to the video encoding circuit, and the video encoding circuit finds the predictor of the coding unit under the inter prediction mode by restricting a motion vector search to said one part of reference pixels within the search range only.

US Pat. No. 10,972,737

LAYERED SCENE DECOMPOSITION CODEC SYSTEM AND METHODS

1. A scene decomposition method comprising:receiving light field data from a data source, the light field data comprising inner frustum volume data and outer frustum volume data separated by a display surface;
partitioning the inner frustum volume data and the outer frustum volume data into a plurality of scene decomposition layers comprising a plurality of inner frustum volume layers and a plurality of outer frustum volume layers; and
decoding and merging the plurality of inner frustum volume layers and the plurality of outer frustum volume layers into a single reconstructed set of light field data to reconstruct a display light field.

US Pat. No. 10,972,736

METHOD FOR SIGNALING IMAGE INFORMATION, AND METHOD FOR DECODING IMAGE INFORMATION USING SAME

LG Electronics Inc., Seo...

1. A picture decoding method, by a decoding apparatus, the method comprising:receiving picture information, wherein the picture information includes a skip flag, a merge flag, prediction mode information and partition mode information;
decoding the skip flag indicating whether a skip mode is applied to a current block;
decoding the prediction mode information and the partition mode information for the current block, based on a value of the skip flag being 0;
determining a prediction mode and a partition type of the current block, based on the prediction mode information and the partition mode information, wherein the prediction mode information indicates an inter prediction mode or an intra prediction mode, and the partition mode information indicates one of partition types including 2N×2N partition type, 2N×N partition type, and N×2N partition type;
decoding the merge flag indicating whether a merge mode is applied to a partition of the current block which is partitioned based on the partition type, based on the skip flag and the prediction mode, wherein the value of the skip flag is 0 and the prediction mode is the inter prediction mode; and
performing inter prediction on the partition of the current block, based on the merge flag,
wherein decoding of the partition mode information is performed between decoding of the skip flag and decoding of the merge flag,
wherein a binary code for the 2N×2N partition type is “1”, a binary code for the 2N×N partition type is “01”, and a binary code for the N×2N partition type is “001”.
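The partition-mode binarization the claim specifies ("1" = 2N×2N, "01" = 2N×N, "001" = N×2N) is a unary-style prefix code, and decoding it can be sketched as below. The simple character-iterator bit reader is an illustrative stand-in for a real bitstream parser.

```python
# Partition types in the order their codes terminate: "1", "01", "001".
PARTITIONS = ["2Nx2N", "2NxN", "Nx2N"]

def decode_partition_mode(bits):
    """bits: iterator over '0'/'1' chars. Returns (mode, bits consumed)."""
    consumed = 0
    for mode in PARTITIONS:
        consumed += 1
        if next(bits) == "1":   # a '1' bit terminates the code word
            return mode, consumed
    raise ValueError("invalid partition-mode code")
```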

US Pat. No. 10,972,735

USE OF CHROMA QUANTIZATION PARAMETER OFFSETS IN DEBLOCKING

Microsoft Technology Lice...

1. In a computing system, a method comprising:encoding a picture, thereby producing encoded data, wherein the encoded data includes one or more syntax elements that indicate a picture-level chroma quantization parameter (QP) offset for the picture and further includes one or more syntax elements that indicate a slice-level chroma QP offset for a slice of the picture, and wherein the encoding includes:
quantizing transform coefficients for one or more portions of the slice;
reconstructing at least part of the slice, including inverse quantizing the transform coefficients for the one or more portions of the slice; and
performing deblock filtering on the at least part of the slice, including deriving a control parameter for the deblock filtering using the picture-level chroma QP offset but not the slice-level chroma QP offset, wherein the deriving the control parameter includes (a) setting a first variable by adding the picture-level chroma QP offset and an average of luma QP values for blocks on either side of an edge in the at least part of the slice, and (b) using the first variable to determine a second variable for the deriving the control parameter; and
outputting the encoded data as part of a bitstream.
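The control-parameter derivation in this claim can be sketched as follows: the picture-level chroma QP offset (but not the slice-level one) is added to the average luma QP of the blocks on either side of the edge to form the first variable, and a second variable is derived from it. The clipping range [0, 51] standing in for the second variable's derivation is a simplifying assumption, not the standard's mapping tables.

```python
def derive_control_parameter(qp_p, qp_q, pic_chroma_qp_offset):
    """Derive a deblocking control parameter for a chroma edge (sketch)."""
    # First variable: average luma QP of the two sides plus the
    # picture-level chroma QP offset only.
    qpi = ((qp_p + qp_q + 1) >> 1) + pic_chroma_qp_offset
    # Second variable, derived from the first (assumed simple clip here).
    return max(0, min(51, qpi))
```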

US Pat. No. 10,972,734

CODING UNIT QUANTIZATION PARAMETERS IN VIDEO CODING

TEXAS INSTRUMENTS INCORPO...

7. A method of video processing, comprising:obtaining a picture using a digital camera on a mobile cellular telephone;
processing the picture on the mobile cellular telephone using the steps of:
dividing the picture into a plurality of non-over-lapping blocks using a recursive quad-tree structure;
determining a minimum coding unit size for the plurality of non-over-lapping blocks of a first size for which a first quantization parameter will be determined wherein the minimum coding unit size is less than the first size;
transforming the plurality of non-over-lapping blocks into a plurality of transformed coefficients in a frequency domain using a transform function;
quantizing the plurality of transformed coefficients using a plurality of quantization parameters at least one of which is the first quantization parameter; and
encoding the plurality of quantized transformed coefficients into a compressed bit stream and signaling at a picture level in the compressed bit stream the minimum coding unit size for which the first quantization parameter is determined for the first non-over-lapping block.

US Pat. No. 10,972,733

LOOK-UP TABLE FOR ENHANCED MULTIPLE TRANSFORM

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising: storing a plurality of look-up tables, wherein each look-up table of the plurality of look-up tables includes a set of horizontal and vertical transform pair combinations wherein each set of horizontal and vertical transform pair combinations for the plurality of look-up tables includes a plurality but fewer than all possible sets of horizontal and vertical transform pair combinations, wherein each horizontal and vertical transform pair combination of each set of horizontal and vertical transform pair combinations includes both a horizontal transform and a vertical transform and is associated with a respective transform pair set index;for a current coefficient block of a video block encoded according to one of a plurality of prediction modes, selecting a look-up table from the plurality of look-up tables based on one or both of a height of the current coefficient block or a width of the current coefficient block;
determining a transform pair set index;
selecting a horizontal and vertical transform pair combination associated with the determined transform pair set index from a set of horizontal and vertical transform pair combinations for the selected look-up table;
applying an inverse transform using the horizontal transform of the horizontal and vertical transform pair combination and the vertical transform of the horizontal and vertical transform pair combination to the current coefficient block to determine a current transform block; and
reconstructing the video block based on the current transform block and a predictive block.
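A sketch of the look-up-table scheme this claim describes: a table is selected from the block's width and/or height, and the decoded transform-pair-set index selects one (horizontal, vertical) transform pair from that table. The table contents ("DST7", "DCT8", ...) and the size threshold are illustrative assumptions; the claim only requires that each table hold several, but fewer than all possible, pair combinations.

```python
# Assumed tables: each entry is a (horizontal, vertical) transform pair.
LUT_SMALL = [("DST7", "DST7"), ("DCT8", "DST7"), ("DST7", "DCT8")]
LUT_LARGE = [("DCT2", "DCT2"), ("DCT8", "DCT8")]

def select_transform_pair(width, height, pair_set_index):
    """Pick the (horizontal, vertical) inverse-transform pair for a block."""
    # Table selection from the block dimensions (threshold is assumed).
    table = LUT_SMALL if max(width, height) <= 8 else LUT_LARGE
    return table[pair_set_index]
```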

US Pat. No. 10,972,732

IMAGE DECODING APPARATUS, IMAGE CODING APPARATUS, IMAGE DECODING METHOD, AND IMAGE CODING METHOD

SHARP KABUSHIKI KAISHA, ...

1. An image decoding apparatus configured to use a binary tree division in addition to a quad tree division when dividing a picture and decode the picture, the image decoding apparatus comprising:an intra prediction parameter decoding control unit configured to change a first association between a plurality of prediction directions and a plurality of mode indices in an intra prediction mode into a second association based on a shape of a block; and
a prediction image generation unit configured to generate a prediction image by referring to the plurality of mode indices and the second association,
wherein the intra prediction parameter decoding control unit changes a number of first portions of the plurality of prediction directions based on the shape of the block to generate a plurality of changed prediction directions for determining the second association based on the plurality of changed prediction directions, wherein the plurality of prediction directions directed toward a first direction region are the first portions of the plurality of prediction directions.

US Pat. No. 10,972,731

SYSTEMS AND METHODS FOR CODING IN SUPER-BLOCK BASED VIDEO CODING FRAMEWORK

InterDigital Madison Pate...

1. A decoding device comprising:a processor configured to:
receive a partition mode indicator indicating at least one of a first partition mode or a second partition mode for decoding a coding unit (CU), wherein the first partition mode indicates that the CU is to be N×N partitioned into equally sized sub-blocks, wherein N is greater than 2, and the second partition mode indicates that the CU is to be partitioned into variably sized sub-blocks;
obtain a partition mode for decoding the CU based on the partition mode indicator;
determine whether at least one split coding unit flag associated with the CU based on the partition mode is to be decoded, wherein on a condition that the partition mode is the first partition mode, determining that the at least one split coding unit flag associated with the CU is bypassed; and
decode the CU based on the partition mode.

US Pat. No. 10,972,730

METHOD AND APPARATUS FOR SELECTIVE FILTERING OF CUBIC-FACE FRAMES

MEDIATEK INC., Hsin-Chu ...

1. A method of processing video bitstream for 360-degree panoramic video sequence, the method comprising:receiving the video bitstream comprising sets of six cubic faces corresponding to a 360-degree panoramic video sequence;
determining one or more discontinuous boundaries within each cubic frame corresponding to each set of six cubic faces; and
processing the cubic frames according to information related to said one or more discontinuous boundaries, wherein said processing the cubic frames comprises:
skipping filtering process at said one or more discontinuous boundaries within each cubic frame, wherein whether the filtering process is applied to one or more discontinuous cubic face boundaries in each cubic frame is determined by parsing the syntax of on/off control in the video bitstream.

US Pat. No. 10,972,729

DEBLOCKING FILTER SELECTION AND APPLICATION IN VIDEO CODING

QUALCOMM Incorporated, S...

1. A method of using a deblocking filter on video data, the method comprising:obtaining a reconstructed video block comprising pixel samples of a picture arranged in rows and columns, the reconstructed video block having a height that is the number of the rows of the reconstructed video block and a width that is the number of the columns of the reconstructed video block, the first reconstructed video block having a boundary with a neighboring video block, the neighboring video block comprising pixel samples of the picture arranged in rows and columns, the neighboring video block having a height that is the number of the rows of the neighboring video block and a width that is the number of the columns of the neighboring video block;
determining deblocking filter parameters for a boundary of the reconstructed video block with the neighboring video block in the picture based on a first dimension of the reconstructed video block and a second dimension of the neighboring video block, the first dimension being one of the width or the height of the first reconstructed video block and the second dimension being one of the width or the height of the neighboring video block,
the first dimension being the width of the reconstructed video block when the boundary is vertical or the height of the reconstructed video block when the boundary is horizontal, and
the second dimension being the width of the neighboring video block when the boundary is vertical or the height of the neighboring video block when the boundary is horizontal,
the filter parameters comprising a filter to be applied or a number of pixels along the boundary with the neighboring video block to which the filter is to be applied,
wherein determining the deblocking filter parameters for the boundary of the reconstructed video block comprises determining whether the first dimension is less than or equal to a first threshold and determining whether the second dimension is less than or equal to the first threshold; and
applying the deblocking filter to the pixel samples of the reconstructed video block based on the determined filter parameters.

US Pat. No. 10,972,728

CHROMA ENHANCEMENT FILTERING FOR HIGH DYNAMIC RANGE VIDEO CODING

InterDigital Madison Pate...

1. A method comprising:identifying a characteristic within a picture of a video signal, the characteristic comprising a color component;
determining a sample set that comprises samples in the picture that are associated with the characteristic, the sample set comprising a first sample in a first spatial region and a second sample in a second spatial region of the picture, wherein the first sample and the second sample within the picture have corresponding color component values that are within a predetermined range;
applying a cross-plane filter to a first luma plane component of the first sample in the first spatial region and a second luma plane component of the second sample in the second spatial region to determine a first offset associated with the first sample and a second offset associated with the second sample, the cross-plane filter comprising a high pass filter;
adding the first offset to a first reconstructed chroma plane component of the first sample in the first spatial region that corresponds to the first luma plane component; and
adding the second offset to a second reconstructed chroma plane component of the second sample in the second spatial region that corresponds to the second luma plane component.
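The cross-plane idea in this claim can be sketched under simplifying assumptions: a small high-pass filter runs over the luma plane and its response is added as an offset to the co-located reconstructed chroma sample. The 3x3 Laplacian kernel and the gain factor are illustrative choices, not taken from the patent:

```python
import numpy as np

# Illustrative high-pass kernel (Laplacian); the patent does not specify it.
HIGH_PASS = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)

def cross_plane_offset(luma: np.ndarray, y: int, x: int, gain: float = 0.25) -> float:
    """High-pass response of the luma plane at (y, x), used as a chroma offset."""
    patch = luma[y - 1:y + 2, x - 1:x + 2]
    return gain * float(np.sum(patch * HIGH_PASS))

def enhance_chroma(chroma: np.ndarray, luma: np.ndarray, samples) -> np.ndarray:
    """Add a luma-derived offset to each chroma sample in the selected set."""
    out = chroma.astype(float).copy()
    for (y, x) in samples:           # only samples in the identified sample set
        out[y, x] += cross_plane_offset(luma, y, x)
    return out
```

Note how a flat luma region yields a zero offset, so the chroma is only adjusted where the luma plane carries high-frequency detail.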

US Pat. No. 10,972,727

METHOD AND AN APPARATUS FOR PROCESSING A VIDEO SIGNAL BASED ON INTER-COMPONENT REFERENCE

KWANGWOON UNIVERSITY INDU...

1. A method of decoding a video signal, with a decoding apparatus, comprising:deriving, with the decoding apparatus, a first prediction value of a chrominance block using a sample of a luminance block;
calculating, with the decoding apparatus, a compensation parameter based on a pre-determined reference region;
deriving, with the decoding apparatus, a second prediction value of the chrominance block by applying the compensation parameter to the first prediction value; and
reconstructing, with the decoding apparatus, the chrominance block based on the second prediction value of the chrominance block,
wherein calculating the compensation parameter comprises determining, with the decoding apparatus, the reference region referred to in calculating the compensation parameter,
wherein the reference region includes a luminance reference region adjacent to the luminance block and a chrominance reference region adjacent to the chrominance block,
wherein the luminance reference region includes a plurality of sample lines, and
wherein a number of the sample lines belonging to the luminance reference region is variably determined based on an availability of the luminance reference region.

US Pat. No. 10,972,726

TECHNIQUES TO DYNAMICALLY SELECT A VIDEO ENCODER FOR STREAMING VIDEO ENCODING

WHATSAPP INC., Menlo Par...

1. At least one non-transitory computer-readable storage medium comprising instructions that, when executed, cause a system to:generate a video stream at a sending device at a first video bitrate with a first video encoding codec;
send the video stream from the sending device to a receiving device;
receive network performance information comprising one or more of a quality of the video stream, an available network bandwidth, or a network bitrate at a point in time for the video stream;
receive a video codec selection policy from a messaging server device, the video codec selection policy defining video codec selection thresholds and corresponding video encoding codecs to be used at the respective video codec selection thresholds;
determine a target video bitrate or target video bitrate range at which the video stream is to be encoded, the target video bitrate or target video bitrate range representing a maximum limit for the encoding of media content and being computed based on the network performance information for the video stream;
compare the target video bitrate or target video bitrate range to the video codec selection thresholds of the video codec selection policy to select a second video encoding codec configured to encode the video stream at a rate approaching the target video bitrate, the second video encoding codec being a different codec from the first video encoding codec in the codec selection policy;
generate the video stream at the sending device with the second video encoding codec; and
send the video stream from the sending device to the receiving device using the second video encoding codec.
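The codec selection policy this claim describes, thresholds mapped to codecs, can be sketched as a lookup against a computed target bitrate. The threshold values and codec names below are invented for illustration:

```python
# Minimal sketch of a video codec selection policy: a target bitrate is
# compared against ordered thresholds to pick an encoder. Values are made up.

def select_codec(target_bitrate: int, policy=None) -> str:
    """Pick the codec whose threshold band contains the target bitrate.

    `policy` is a list of (upper_threshold_bps, codec) pairs sorted by
    threshold; the last entry acts as a catch-all.
    """
    if policy is None:
        policy = [(250_000, "h264-baseline"),   # low-bandwidth fallback
                  (1_000_000, "h264-main"),
                  (float("inf"), "vp8")]
    for upper, codec in policy:
        if target_bitrate <= upper:
            return codec
    return policy[-1][1]
```

As in the claim, the policy itself would be received from a server, so the sender can be retargeted without a client update.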

US Pat. No. 10,972,725

METHOD AND APPARATUS FOR INTRA PREDICTION IN VIDEO CODING

HUAWEI TECHNOLOGIES CO., ...

1. A decoder, comprising:one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing programming for execution by the processors, wherein the programming, when executed by the processors, configures the decoder to:
determine a set of Most Probable Modes (MPMs) for a current block of a video encoded in a video bitstream, wherein when at least one condition is satisfied, the set of MPMs comprises: a Planar mode, a DC mode, a Vertical mode, a Horizontal mode, an intra prediction mode corresponding to a value of the Vertical mode with a first offset, and an intra prediction mode corresponding to the value of the Vertical mode with a second offset;
obtain an MPM flag for the current block from the video bitstream, the MPM flag indicating whether an intra prediction mode for the current block is in the set of MPMs for the current block;
obtain an MPM index for the current block from the video bitstream, when the MPM flag indicates that the intra prediction mode for the current block is in the set of MPMs for the current block; determine the intra prediction mode for the current block based on the MPM index and the set of MPMs for the current block; and
reconstruct the current block using reference samples determined based on the intra prediction mode for the current block.
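The six-entry MPM set enumerated above, and the flag-then-index decoding path, can be sketched as follows. Mode numbers follow the common VVC-style convention (0 = Planar, 1 = DC, 18 = Horizontal, 50 = Vertical); the offset values are illustrative assumptions:

```python
# Sketch of the claimed MPM set and index-based mode lookup.
PLANAR, DC, HOR, VER = 0, 1, 18, 50

def mpm_set(first_offset: int = 4, second_offset: int = -4):
    """Planar, DC, Vertical, Horizontal, and Vertical +/- an offset."""
    return [PLANAR, DC, VER, HOR, VER + first_offset, VER + second_offset]

def decode_intra_mode(mpm_flag: int, mpm_index: int, mpms):
    """If the MPM flag is set, the mode is read out of the MPM set by index."""
    if mpm_flag:
        return mpms[mpm_index]
    raise NotImplementedError("non-MPM path not sketched here")
```

The MPM index is only parsed from the bitstream when the flag says the mode is in the set, which is exactly the ordering the claim imposes.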

US Pat. No. 10,972,724

METHOD, CONTROLLER, AND SYSTEM FOR ENCODING A SEQUENCE OF VIDEO FRAMES

Axis AB, Lund (SE)

1. A method of encoding a sequence of video frames captured at a fixed temporal frame rate by a camera mounted to a moving object comprising:receiving input indicating an amount of movement of the camera;
receiving input regarding a predetermined spatial distance; and
selecting between intra-coding and inter-coding of the video frames of the sequence based on the amount of movement of the camera and the predetermined spatial distance, such that the camera moves at most the predetermined spatial distance between capturing video frames which are intra-coded,
wherein the selecting between intra-coding and inter-coding of the video frames of the sequence includes:
calculating, based on the input indicating an amount of movement of the camera, a distance that the camera has moved since it last captured a video frame which was intra-coded; and
selecting between intra-coding and inter-coding of a current or a previous video frame based on a comparison between the calculated distance and the predetermined spatial distance,
wherein the current video frame is selected to be intra-coded if the calculated distance is closer than a threshold value to the predetermined spatial distance, and
wherein the current video frame is selected to be inter-coded if the calculated distance is further than the threshold value from the predetermined spatial distance.
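The distance-driven intra/inter decision above can be sketched in a few lines: accumulate camera movement since the last intra frame and intra-code once the accumulated distance comes within a threshold of the permitted spatial distance. Units and the threshold value are illustrative:

```python
# Sketch of selecting intra ('I') vs inter ('P') coding from camera movement.

def choose_codings(movement_per_frame, max_distance: float, threshold: float):
    """Return a list of 'I'/'P' decisions for a sequence of frames."""
    decisions = []
    travelled = 0.0                                  # distance since last intra frame
    for step in movement_per_frame:
        travelled += step
        if travelled >= max_distance - threshold:    # close to the limit: intra-code
            decisions.append("I")
            travelled = 0.0                          # distance counter restarts
        else:
            decisions.append("P")
    return decisions
```

This guarantees the camera never moves more than the predetermined spatial distance between intra-coded frames, regardless of the fixed temporal frame rate.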

US Pat. No. 10,972,723

METHOD AND APPARATUS FOR PALETTE TABLE PREDICTION

HFI INNOVATION INC., Zhu...

1. A method of coding a piece of video, comprising:determining whether a current block in the piece of video is coded according to either a palette coding mode or a non-palette coding mode;
in response to the current block being determined to be coded according to the palette coding mode,
obtaining a reference palette of the piece of video stored in a buffer, the reference palette being last-used for coding a previously coded block in the piece of video,
generating a current palette according to the obtained reference palette of the piece of video,
encoding or decoding the current block using the current palette according to the palette coding mode, and
updating the buffer such that the reference palette of the piece of video stored in the buffer for coding a subsequent block in the piece of video is formed according to the current palette; and
in response to the current block being determined to be coded according to the non-palette coding mode,
encoding or decoding the current block according to the non-palette coding mode, and
the reference palette of the piece of video stored in the buffer remaining unchanged for coding the subsequent block in the piece of video.
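The buffer behaviour the claim spells out, consult and update on palette-coded blocks, leave untouched on non-palette blocks, can be sketched as below. Palettes are plain colour lists here, and the merge rule is a simplification of real entry-by-entry predictors:

```python
# Sketch of a last-used reference palette buffer for palette-mode coding.

class PaletteBuffer:
    def __init__(self):
        self.reference = []                 # reference palette: last-used palette

    def code_block(self, palette_mode: bool, new_colors=()):
        """Code one block; returns the current palette, or None for non-palette mode."""
        if not palette_mode:
            return None                     # buffer stays unchanged for next block
        # Current palette = reused reference entries plus newly signalled
        # colours (an illustrative simplification of real palette predictors).
        current = list(self.reference) + [c for c in new_colors
                                          if c not in self.reference]
        self.reference = current            # buffer updated for the subsequent block
        return current
```

The non-palette branch returning early without touching `self.reference` mirrors the final limitation of the claim.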

US Pat. No. 10,972,722

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An image processing apparatus comprising:a prediction section configured to generate a predicted value of a color difference component of a pixel within a current block by using a function of a value of a corresponding luminance component, generate a predicted block corresponding to the current block, and generate a coefficient of the function used by the prediction section;
a controller configured to control a ratio of a number of reference pixels in relation to a block size of the current block, by adjusting the ratio based on the block size of the current block; and
a decoding section configured to decode the current block using the predicted block,
wherein the controller is configured to control the ratio by changing the number of reference pixels thinned out upon calculating the coefficient, and
wherein the prediction section, the controller, and the decoding section are each implemented via at least one processor.

US Pat. No. 10,972,721

APPARATUS AND METHOD FOR MULTI CONFIGURATION NEAR EYE DISPLAY PERFORMANCE CHARACTERIZATION

GAMMA SCIENTIFIC INC., S...

1. A method for performance characterization of multi configuration near eye displays as a device under test (DUT) by an optical apparatus, the optical apparatus including a viewfinder digital camera, the method comprising:illuminating a light by a lamp;
forming a reference image of a field-of-view (FOV) measurement aperture illuminated by the lamp, wherein a first portion of the light illuminated is reflected from a beamsplitter, and the first portion of the light is captured as the reference image by the viewfinder digital camera;
forming an actual image of the FOV measurement aperture by projecting a second portion of the light from the beamsplitter onto the DUT and reflecting the second portion of the light back onto said beamsplitter;
superimposing the reference image of the FOV measurement aperture and the actual image of the FOV measurement aperture to align an optical axis of the optical apparatus with an optical axis of the DUT to establish an optical measurement axis;
transposing the captured image of a virtual image of the DUT and a complete field of view of the viewfinder digital camera to establish alignment of the FOV measurement aperture to another area within a DUT scene field of view;
turning off the lamp to allow only the virtual image of the DUT to be seen by the viewfinder digital camera;
capturing an image of the virtual image of the DUT and the complete field of view of the viewfinder digital camera, wherein the captured image of the virtual image of the DUT and the captured image of the complete field of view of the viewfinder digital camera are both reflected from said beamsplitter;
projecting the virtual image of the DUT onto the FOV measurement aperture; and
performing spectroradiometric measurements on the virtual image of the DUT by a spectroradiometer.

US Pat. No. 10,972,720

VIDEO LOAD BALANCING AND ERROR DETECTION BASED ON MEASURED CHANNEL BANDWIDTH

Raytheon Company, Waltha...

1. A method for interconnecting video signal pathways, the method comprising:receiving at an upstream processing module of an interconnect apparatus a feedback signal from a downstream processing module of the interconnect apparatus via a feedback and control path between the upstream processing module of the interconnect apparatus and the downstream processing module of the interconnect apparatus, the feedback signal including channel integrity information for each of a plurality of video transport channels extending from the upstream processing module of the interconnect apparatus to the downstream processing module of the interconnect apparatus;
receiving at the upstream processing module of the interconnect apparatus a video data input signal from one or more cameras coupled to the upstream processing module of the interconnect apparatus;
for each of the plurality of video transport channels, distributing a portion of the video data input signal to the video transport channel, wherein the amount of video data in the portion and the data rate of the portion is based on the channel integrity information associated with the channel; and
for each of the plurality of video transport channels, sending a random or pseudorandom signal along with cyclical redundancy check information corresponding to the random or pseudorandom signal on the channel from the upstream processing module of the interconnect apparatus to the downstream processing module of the interconnect apparatus.
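The two per-channel behaviours in this claim can be sketched together: apportioning video data across channels in proportion to each channel's integrity score, and generating a pseudorandom probe with its CRC so the downstream module can verify the channel. `zlib.crc32` stands in for whatever CRC the actual link uses, and the proportional split is an assumed allocation rule:

```python
import random
import zlib

def distribute(total_bytes: int, integrity: list[float]) -> list[int]:
    """Split total_bytes across channels proportionally to integrity scores."""
    weight = sum(integrity)
    shares = [int(total_bytes * s / weight) for s in integrity]
    shares[0] += total_bytes - sum(shares)    # give rounding remainder to channel 0
    return shares

def probe(seed: int, length: int = 16) -> tuple[bytes, int]:
    """Pseudorandom test pattern plus its CRC for one channel."""
    rng = random.Random(seed)
    data = bytes(rng.randrange(256) for _ in range(length))
    return data, zlib.crc32(data)
```

Downstream, recomputing the CRC over the received probe and comparing it to the transmitted CRC yields the channel integrity information fed back upstream.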

US Pat. No. 10,972,719

HEAD-MOUNTED DISPLAY HAVING AN IMAGE SENSOR ARRAY

NVIDIA CORPORATION, Sant...

1. A head-mounted display (HMD), comprising:an image sensor array, including:
a left portion of the image sensor array comprised of a plurality of left image sensors configured to capture image data to form live video from a perspective of a left eye of a user,
a right portion of the image sensor array comprised of a plurality of right image sensors configured to capture image data to form live video from a perspective of a right eye of the user,
wherein a principal axis of each left image sensor in the left portion of the image sensor array is positioned to intersect a middle of a lens of the left eye of the user, and
wherein a principal axis of each right image sensor in the right portion of the image sensor array is positioned to intersect a middle of a lens of the right eye of the user; and
a display for displaying the live video formed from the perspective of the left eye of the user and the live video formed from the perspective of the right eye of the user.

US Pat. No. 10,972,718

IMAGE GENERATION APPARATUS, IMAGE GENERATION METHOD, DATA STRUCTURE, AND PROGRAM

NIPPON TELEGRAPH AND TELE...

1. An image generation apparatus that generates images based on an original image, whereinthe images based on the original image are an image A and an image B,
the image A and the image B are for
one who sees the image A with one eye and sees the image B with another eye to perceive a stereo image, and
one who sees the image A and the image B with same eye(s) to perceive only the original image, and
the image generation apparatus includes processing circuitry configured to implement:
a first manipulator that obtains the image A, which is an image generated by superposing the original image and phase-modulated components a which are generated by shifting phases of spatial frequency components of the original image by a first phase being 0.5π, and
a second manipulator that obtains the image B, which is an image generated by superposing the original image and phase-modulated components b which are generated by shifting the phases of spatial frequency components of the original image by a second phase being an opposite phase of the first phase, wherein
an average amplitude of components obtained by superposing the phase-modulated components a and the phase-modulated components b is smaller than an average amplitude of the phase-modulated components a and smaller than an average amplitude of the phase-modulated components b.
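The construction above can be sketched with a Fourier-domain phase shift: shifting every spatial-frequency component by +0.5π for image A and by the opposite phase for image B makes the two modulation components cancel when the images are superposed, which is why a viewer fusing both images with the same eye perceives only the original. A 1-D signal is used for brevity, and the sign-of-frequency factor (an implementation choice, not from the patent) keeps the output real-valued:

```python
import numpy as np

def phase_modulated(signal: np.ndarray, phase: float) -> np.ndarray:
    """Shift the phase of every frequency component of `signal` by `phase` radians."""
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal))
    shifted = spectrum * np.exp(1j * phase * np.sign(freqs))
    return np.real(np.fft.ifft(shifted))

original = np.sin(np.linspace(0, 4 * np.pi, 64, endpoint=False))
a = phase_modulated(original, 0.5 * np.pi)    # modulation components of image A
b = phase_modulated(original, -0.5 * np.pi)   # opposite phase, for image B
image_a, image_b = original + a, original + b
```

Because `a + b` is (numerically) zero, the average amplitude of the superposed modulation components is smaller than that of either component alone, exactly the final limitation of the claim.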

US Pat. No. 10,972,717

AUTOMATED FEATURE ANALYSIS OF A STRUCTURE

Solaroid Corporation, Ch...

1. An automated structural feature analysis system, comprising:a Three-Dimensional (3D) device configured to emit a volume scanning 3D beam that scans a structure to generate 3D data that is associated with a distance between the 3D device and each end point of the 3D beam positioned on the structure;
an imaging device configured to capture an image of the structure to generate image data associated with the structure as depicted by the image of the structure; and
a controller configured to: fuse the 3D data of the structure generated by the 3D device with the image data of the structure generated by the imaging device to determine the distance between the 3D device and each end point of the 3D beam positioned on the structure and to determine a distance between each point on the image, and generate a sketch image of the structure that is displayed to the user that depicts the structure based on the distance between the 3D device and each point of the 3D beam positioned on the structure and the distance between each point on the image;
wherein the controller is further configured to: emit a beam of light onto the structure to measure the distance between each element that formulates each plane included in the structure and build a point cloud that depicts each element that formulates each plane included in the structure as a distance from the controller;
the point cloud of distance measurements of each element of each plane is combined with the photo-metric image of such plane in the visible light spectrum to compose the data that contains the photometric characteristics of each point on the plane and its distance from the controller.

US Pat. No. 10,972,716

CALIBRATION METHOD AND MEASUREMENT TOOL

RICOH COMPANY, LIMITED, ...

1. A calibration method for calibrating a stereo camera, the calibration method comprising:capturing, using the stereo camera, an image including an object placed so as to fall within an image capturing area of the stereo camera;
determining a second distance from the object to an intermediate measurement point that is not part of the object and is located between the object and the stereo camera;
determining a third distance from the intermediate measurement point to the stereo camera;
calculating a first distance from the object to the stereo camera using the determined second distance and the determined third distance;
measuring, a deviation of a direction of the object from a facing position of the stereo camera; and
determining a calibration parameter for calibrating the stereo camera based on the calculated first distance, the measured deviation, and the captured image.

US Pat. No. 10,972,715

SELECTIVE PROCESSING OR READOUT OF DATA FROM ONE OR MORE IMAGING SENSORS INCLUDED IN A DEPTH CAMERA ASSEMBLY

Facebook Technologies, LL...

1. A depth camera assembly comprising:a plurality of imaging sensors, each sensor configured to capture images of a local area and each imaging sensor comprising an array of pixels; and
a controller coupled to each of the plurality of imaging sensors, the controller configured to:
select a region of interest in an image captured by the plurality of imaging sensors based on prior information describing the local area,
identify a subset of pixels of an imaging sensor corresponding to the selected region of interest in the image, and
generate depth information for the local area by differently processing values obtained from the subset of pixels than from values obtained from other pixels of the imaging sensor that are not in the subset of pixels by:
retrieving data from each pixel of the sensor; and
applying a stereo imaging process to data retrieved from pixels in the subset of pixels and differently applying the stereo imaging process to data retrieved from pixels not in the subset of pixels.

US Pat. No. 10,972,714

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM FOR STORING PROGRAM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:at least one processor, which executes instructions stored in at least one memory, being configured to:
(1) input image data for outputting a photographic image by an output apparatus; and
(2) execute processing that controls a sharpness of an image in relation to data of each pixel of the image data based on (a) distance information related to a distance from a focal plane corresponding to the image data and (b) luminance information of each pixel of the image data,
wherein in the executing of the processing, the at least one processor sets a sharpness control amount corresponding to an in-focus region that is determined to be in-focus in the image, such that, in a case where an average luminance of a peripheral region that neighbors the in-focus region is higher than an average luminance of the in-focus region, the sharpness control amount corresponding to the in-focus region becomes larger than in a case where the average luminance of the peripheral region is smaller than the average luminance of the in-focus region.

US Pat. No. 10,972,713

3DTV AT HOME: EULERIAN-LAGRANGIAN STEREO-TO-MULTI-VIEW CONVERSION

Massachusetts Institute o...

1. A system for converting stereo video content to multi-view video content, comprising:a decomposition processor to decompose a stereoscopic input using a set of basis functions to produce a set of decomposed signals of one or more frequencies;
a disparity processor to estimate disparity information for each of the decomposed signals, the disparity processor
generates a disparity map for each of the left and right views of a received stereoscopic frame,
for each corresponding pair of left and right scanlines of the received stereoscopic frame, decomposes the left and right scanlines into a left wavelet and a right wavelet, each of the wavelets being a sum of basis functions,
establishes an initial disparity correspondence between the left wavelet and the right wavelet based on the generated disparity maps,
refines the initial disparity between the left wavelet and the right wavelet using a phase difference between the corresponding wavelets, and
reconstructs at least one novel view based on the left and right wavelets; and
a re-projection processor to synthesize one or more novel views by re-projecting the decomposed signals, the re-projecting comprising moving the decomposed signals according to the disparity information.

US Pat. No. 10,972,712

IMAGE MERGING METHOD USING VIEWPOINT TRANSFORMATION AND SYSTEM THEREFOR

CENTER FOR INTEGRATED SMA...

1. An image merging method using viewpoint transformation executed by a computer included in a camera system including a plurality of cameras, the method comprising:performing, by the camera system, viewpoint transformation for each image obtained by the plurality of cameras, using a depth map; and
merging images, the viewpoint transformation of which is performed;
wherein said performing the viewpoint transformation for each of the images comprises:
setting a viewpoint transformation relationship including a backward warping relationship;
determining a movement parameter in the viewpoint transformation relationship, the movement parameter being a value associated with a distance to move when a location at which each of the images is captured moves to a virtual specific location; and
transforming viewpoints of the images to be matched using the viewpoint transformation relationship;
wherein the backward warping is performed in accordance with the following relationship:
in which I(x, y) denotes the images captured by the plurality of cameras included in the camera system, x, y denote pixel locations in each of the images captured by the plurality of cameras included in the camera system, I′(x′, y′) denotes transformed images resulting from the viewpoint transformation which is performed, x′, y′ denote pixel locations in each of the transformed images resulting from the viewpoint transformation which is performed, f denotes a focal length of each of the plurality of cameras, B denotes a length of a baseline of the camera system, D denotes a parallax between the plurality of cameras, mx, my, m denote the movement parameter, and cx, cy denote coordinates of an image center passing through an optical axis.

US Pat. No. 10,972,711

METHOD OF DETERMINING THE BOUNDARY OF A DRIVEABLE SPACE

TRW Limited, Solihull (G...

1. A method of determining the characteristics of a scene around a vehicle comprising:capturing from a stereo camera a stereo pair of images of the scene,
processing the images to produce a depth map of the scene in which each pixel in the depth map is assigned a value that corresponds to a range of a corresponding region in the scene, the pixels arranged in a grid of rows and columns with each column of pixels in the grid corresponding to a vertically oriented set of regions in the scene and each row a horizontally oriented set of regions in the scene,
binning the values for one or more columns of pixels in the depth map to form a corresponding histogram for each column, wherein the columns provide a 2D histogram image, each bin in each histogram having a count value that corresponds to the number of pixels in the column that have a depth within the range assigned to the bin,
scanning the count values in the one or more range bin histograms from an end representing a lowest range to determine for each histogram a bin having a lowest range that is indicative that an object that represents a non-drivable region is present at a depth that lies in the range of depths assigned to the bin, and thereby identify the location of one or more boundary points in a set of boundary points that lie on a boundary of a drivable space in the scene, wherein the scanning of each respective histogram is stopped once the boundary point is detected and the scanning proceeds to a next histogram, and
determining from a set of boundary points a complete boundary that extends across all columns in the 2D histogram image between boundary points, whereby a boundary line represents an edge of a safe drivable space in the scene.
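The per-column histogram scan described in this claim can be sketched as follows: each depth-map column is binned by range, each histogram is scanned from the nearest bin outward, and scanning stops at the first bin whose count suggests an obstacle, giving that column's boundary point. The count threshold and bin layout are illustrative assumptions:

```python
import numpy as np

def boundary_points(depth_map: np.ndarray, num_bins: int, max_range: float,
                    min_count: int):
    """Return one boundary range per column (or max_range if none is found)."""
    edges = np.linspace(0.0, max_range, num_bins + 1)
    points = []
    for col in depth_map.T:                       # one range histogram per column
        counts, _ = np.histogram(col, bins=edges)
        for b, count in enumerate(counts):        # scan from the lowest-range bin
            if count >= min_count:                # obstacle evidence: stop this scan
                points.append(edges[b])
                break
        else:
            points.append(max_range)              # column drivable out to max range
    return points
```

Stopping each column's scan at the first qualifying bin, then moving on to the next histogram, is the early-termination behaviour the claim recites; joining the resulting points across all columns yields the drivable-space boundary.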

US Pat. No. 10,972,710

CONTROL APPARATUS AND IMAGING APPARATUS

SONY CORPORATION, Tokyo ...

1. A control apparatus, comprising:processing circuitry configured to set a set value of lowpass characteristics of a lowpass filter on a basis of first evaluation data corresponding to a change in resolution and second evaluation data corresponding to a change in false color between a plurality of pieces of image data taken in accordance with changing of the lowpass characteristics of the lowpass filter.

US Pat. No. 10,972,709

IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER STORAGE MEDIUM

SHENZHEN SENSETIME TECHNO...

1. An image processing method, performed by an electronic device comprising a processor, comprising:obtaining a facial skin tone area in an original image;
filtering the original image to obtain a smooth image;
obtaining a high-frequency image based on the smooth image and the original image;
obtaining a facial skin tone high-frequency image based on the high-frequency image and a facial skin tone mask, wherein the facial skin tone mask is a mask for the facial skin tone area and indicates the facial skin tone area; and
superimposing a luma channel signal of the facial skin tone high-frequency image onto a luma channel signal of the original image to obtain a first image;
wherein the obtaining a facial skin tone area in an original image comprises:
obtaining the facial skin tone area in the original image based on a YCrCb optimized color video signal space, wherein the YCrCb optimized color video signal space comprises a luma channel, a chroma channel, and a saturation channel; and
the filtering the original image comprises: filtering the original image in the luma channel, wherein the method further comprises:
converting the first image to an RGB space;
adjusting color components in the first image based on the RGB space;
converting the first image, in which the color components have been adjusted, to a Hue, Saturation, and Lightness (HSL) space, and
maintaining a luma value of the first image in which the color components have been adjusted unchanged based on the HSL space, to obtain a second image.
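The detail-recovery core of this claim, smooth the image, take the high-frequency residual, and superimpose it back only on skin pixels, can be sketched on the luma channel. The box filter below stands in for whatever smoothing filter the implementation actually uses:

```python
import numpy as np

def box_smooth(luma: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k box filter with edge padding (illustrative smoothing stand-in)."""
    pad = k // 2
    padded = np.pad(luma, pad, mode="edge")
    out = np.zeros_like(luma, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    return out / (k * k)

def sharpen_skin(luma: np.ndarray, skin_mask: np.ndarray) -> np.ndarray:
    """Superimpose the masked high-frequency layer back onto the original luma."""
    smooth = box_smooth(luma)
    high_freq = luma - smooth                 # high-frequency (detail) image
    return luma + high_freq * skin_mask       # applied only where the mask is set
```

A uniform region has no high-frequency content and passes through unchanged, so only detailed skin areas are actually modified.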

US Pat. No. 10,972,708

METHOD AND APPARATUS FOR COMPENSATING FOR COLOR SEPARATION OF IMAGE IN A LASER PROJECTOR-BASED HOLOGRAPHIC HEAD-UP 3D DISPLAY

Hyundai Mobis Co., Ltd., ...

1. An apparatus for compensating for color separation of an image in a head-up display (HUD) to process image information to be output through the HUD using a laser diode, the apparatus comprising:a memory configured to store a table of correction values for an amount of movement of an image with relation to a change of characteristics of the laser diode;
a sensor configured to monitor the change of the characteristics of the laser diode;
a unit configured to determine a correction value for an amount of movement of an image with relation to the change of the characteristics of the laser diode, which is monitored by the sensor, on the basis of the table; and
a unit configured to divide image information to be output to the HUD into a red (R) image, a green (G) image and a blue (B) image, change positions of the R, G and B images according to the determined correction value, and combine the resultant R, G, and B images.

US Pat. No. 10,972,707

ENDOSCOPE AND METHOD OF MANUFACTURING ENDOSCOPE

OLYMPUS CORPORATION, Tok...

1. An endoscope comprising an image pickup device configured to shoot an object and output an image pickup signal and an optical module configured to convert the image pickup signal into an optical signal and transmit the optical signal using an optical fiber in a distal end section in an insertion section, whereinthe optical module comprises:
a light emitting element which includes a light emitting surface for outputting the optical signal and a rear surface, an external electrode being disposed, out of a first region and a second region obtained by dividing the light emitting surface substantially in half, only in the first region;
a wiring board which includes a first main surface and a second main surface, the light emitting element and a bonding electrode being disposed on the first main surface;
a bonding wire which connects the external electrode and the bonding electrode to each other;
a ferrule which includes an insertion hole, the optical fiber being inserted into the insertion hole;
a frame which includes an upper plate and a side plate, the ferrule being disposed on the upper plate and the side plate being fixed to the first main surface in the wiring board, and which includes an inner section housing the light emitting element and a side surface including an opening; and
a transparent resin disposed in the inner section in the frame,
wherein the upper plate is inclined at a predetermined inclination angle to the first main surface, and a first distance from the first main surface to the first region is longer than a second distance from the first main surface to the second region; and
the inclination angle is not less than 35 degrees and not more than 55 degrees.

US Pat. No. 10,972,706

SURVEILLANCE SYSTEM, SURVEILLANCE METHOD, AND PROGRAM

NEC CORPORATION, Tokyo (...

1. A surveillance system comprising:a memory storing one or more instructions; and
at least one processor configured to execute the one or more instructions to:
acquire information of a surveillance-desired area;
acquire position information of a plurality of portable terminals, each terminal performing surveillance using an image capturing device;
determine a candidate portable terminal to be moved to the surveillance-desired area from among the plurality of portable terminals based on the acquired position information of the plurality of portable terminals; and
output a notification to the candidate portable terminal requesting to move to the surveillance-desired area,
wherein the at least one processor is further configured to execute the one or more instructions to:
acquire index information as an indicator of a degree of necessity for the surveillance determined by a state of surveillance targets, and
determine the candidate portable terminal by:
identifying a dense area in which the number of the portable terminals or a density of the portable terminals is equal to or greater than a first threshold based on the position information of the portable terminals and the acquired index information, and
determining a candidate portable terminal to be moved to the surveillance-desired area from among the portable terminals present in the identified dense area,
wherein the first threshold is changed based on at least one of a plurality of factors including characteristics of places to be surveilled, weather, temperature, humidity, and movement of surveilled area.

US Pat. No. 10,972,705

MEDICAL DISPLAY APPARATUS, ENDOSCOPIC SURGERY SYSTEM, AND METHOD OF DISPLAYING MEDICAL IMAGE

OLYMPUS CORPORATION, Tok...

1. A medical display apparatus comprising:
a display configured to display an endoscope image, the endoscope image including a mask pattern;
at least one video signal input circuit wherein a plurality of video signals related to the endoscope image are configured to be input to the at least one video signal input circuit from an external medical instrument;
a video processing circuit configured to generate a video reflecting a display setting specified for each of the video signals;
a first processor; and
a second processor, wherein:
the first processor is configured to determine an endoscope type based on the mask pattern of the endoscope image;
the second processor designates a display set value for each video image for the video processing circuit based on the determination of the endoscope type, each video image being related to a different endoscope image; and
the mask pattern being determined based on pattern matching by edge extraction of one of the video signals.

US Pat. No. 10,972,704

VIDEO IDENTIFICATION AND ANALYTICAL RECOGNITION SYSTEM

1. An analytical recognition system, comprising:
a video camera configured to capture video data, wherein the video camera is at least one of a traffic camera or an aerial drone camera;
an antenna configured to capture mobile communication device data;
one or more processors; and
one or more memories storing instructions that, when executed by the one or more processors, cause the one or more processors to function as a data analytics module configured to correlate the video data and the mobile communication device data to generate a profile of a person associated with the video data and the mobile communication device data, the profile including profile data including any one or a combination of the captured video data and the captured mobile communication device data,
wherein the profile includes any one or a combination of the captured video data, the captured mobile communication device data, temporal data associated with the captured video data or the captured mobile communication device data, and location data associated with the captured video data or the captured mobile communication device data;
the captured video data includes any one or a combination of a captured still image and video footage;
the mobile communication device data includes any one or a combination of a WiFi identifier, a media access control (MAC) identifier, a Bluetooth identifier, a cellular identifier, a near field communication identifier, and a radio frequency identifier associated with a mobile communication device in communication with the antenna;
the temporal data includes any one or a combination of a time the video data is captured and a time the mobile communication device data is captured; and
the location data includes any one or a combination of a location at which the video data is captured and a location at which the mobile communication device data is captured.

US Pat. No. 10,972,703

METHOD, DEVICE, AND STORAGE MEDIUM FOR PROCESSING WEBCAM DATA

SHANGHAI XIAOYI TECHNOLOG...

1. A webcam data processing method performed by a processor, the method comprising:
determining, at a webcam, data to be processed;
sending, at the webcam, a control request to a server to request a task initiation permission for a webcam cluster;
in response to an authorization instruction of the server, acquiring, at the webcam, property information of each webcam in the webcam cluster from the server;
segmenting, at the webcam, the data to be processed on the basis of the property information of each webcam, to generate a plurality of data segments;
sending, at the webcam, each of the plurality of data segments to a corresponding webcam in the webcam cluster for processing;
receiving, at the webcam, intermediate results generated by the webcam cluster on the basis of the plurality of data segments; and
combining, at the webcam, the intermediate results into a final result.
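The segment-process-combine flow of this claim resembles a small map-reduce across the webcam cluster. The sketch below is a hypothetical illustration (the capacity model, function names, and the word-count stand-in for per-webcam processing are not from the patent):

```python
# Hypothetical map-reduce-style sketch of the claimed flow (names not from the patent).

def segment_data(data, cluster_capacity):
    """Split `data` into segments proportional to each webcam's capacity,
    standing in for the claimed 'property information of each webcam'."""
    total = sum(cluster_capacity.values())
    segments, start = {}, 0
    for cam, cap in cluster_capacity.items():
        size = round(len(data) * cap / total)
        segments[cam] = data[start:start + size]
        start += size
    # Any rounding remainder goes to the last webcam.
    if start < len(data):
        last = list(cluster_capacity)[-1]
        segments[last] += data[start:]
    return segments

def process_segment(segment):
    """Stand-in for per-webcam processing: here, just an item count."""
    return len(segment)

# The initiating webcam segments the data, farms segments out, and
# combines the intermediate results into a final result.
capacity = {"cam1": 1, "cam2": 1, "cam3": 2}
segments = segment_data(list(range(100)), capacity)
intermediate = {cam: process_segment(seg) for cam, seg in segments.items()}
final = sum(intermediate.values())
print(final)  # → 100
```

The authorization handshake with the server recited in the claim is omitted; only the data-partitioning arithmetic is shown.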

US Pat. No. 10,972,702

INTELLIGENT ADAPTIVE AND CORRECTIVE LAYOUT COMPOSITION

Pexip AS, Oslo (NO)

1. A method for creating a composed picture layout based on a first set of pictures available in a Multipoint Control Node, MCN, and one or more ruleset(s), comprising the steps of:
performing a Pan Zoom Tilt, PZT, process on each of the first set pictures according to a corresponding output of a face detection process in view of a corrective ruleset from the one or more ruleset(s) resulting in a second set of pictures;
counting the respective number of detected faces from the face detection process for each of the pictures in the second set of pictures;
creating the composed picture layout by arranging the second set of pictures according to the respective number of detected faces in view of a weighted presence ruleset from at least one of the group consisting of the one or more ruleset(s), a composition plane defining an overall pattern of the composed picture layout, and a context.
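The arrangement step, ordering pictures by detected face count under a weighted presence ruleset, can be sketched as follows. The weighting scheme and all names are hypothetical illustrations, not the patent's ruleset:

```python
# Hypothetical sketch of weighted-presence ordering (names not from the patent).

def compose_layout(pictures, weights):
    """pictures: list of (picture_id, face_count) from the face detection pass.
    weights: presence weights from the ruleset, keyed by picture_id.
    Returns picture ids in layout order, most-weighted presence first."""
    return [
        pid for pid, faces in sorted(
            pictures,
            key=lambda p: p[1] * weights.get(p[0], 1.0),
            reverse=True,  # highest weighted presence leads the layout
        )
    ]

pics = [("room_a", 1), ("room_b", 3), ("podium", 1)]
print(compose_layout(pics, weights={"podium": 5.0}))
# → ['podium', 'room_b', 'room_a']
```

A ruleset weight can thus promote a single-face picture (the podium) over a picture with more faces, which is the kind of override a "weighted presence ruleset" would provide.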

US Pat. No. 10,972,701

ONE-WAY VIDEO CONFERENCING

Securus Technologies, LLC...

10. A communications system in a controlled-environment facility, comprising:
a communications management system configured to establish a video conferencing session between a first device and a second device in which the communications management system receives only audio information from the first device and receives both audio and video information from the second device; and
a video processing module configured to add video content selected based upon a user profile, not captured from the first device during the video conferencing session, to the video conferencing session data from the first device before sending the video conferencing session data to the second device.

US Pat. No. 10,972,700

VIDEO CALL METHOD AND VIDEO CALL MEDIATING APPARATUS

HYPERCONNECT, INC., Seou...

1. A video call method between a plurality of terminals, the method comprising:
establishing, by a first terminal, a first video call session with a second terminal, the first terminal having a plurality of display areas comprising first, second and third display areas;
establishing, by the first terminal, a second video call session with a third terminal;
displaying, by the first terminal, a first image received from the second terminal through the first video call session and a second image received from the third terminal through the second video call session on a first display area and on a second display area, respectively;
detecting, by the first terminal, a predetermined event while a plurality of video call sessions including the first and the second video call sessions are being maintained;
terminating, by the first terminal, one video call session among the video call sessions in response to the detecting the predetermined event;
establishing, by the first terminal, a third video call session with a fourth terminal;
displaying, by the first terminal, a third image received from the fourth terminal through the third video call session on a third display area,
wherein the detecting the predetermined event and the terminating one video call session among the video call sessions comprises:
detecting eye direction of a user of the first terminal, by using a camera included in the first terminal;
determining whether the detected eye direction is pointing at one of the display areas for longer than or equal to a reference time period; and
maintaining a video call session corresponding to the pointed display area and terminating the remaining video call sessions, based on the determination.
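The gaze-dwell decision in this claim can be sketched as a small state machine over timestamped eye-direction samples. All names and the sampling model are hypothetical illustrations, not from the patent:

```python
# Hypothetical sketch of gaze-dwell session selection (names not from the patent).

def sessions_to_terminate(gaze_samples, reference_time):
    """gaze_samples: list of (timestamp, display_area) from the camera's
    eye-direction detection, in time order.
    Returns (kept_area, terminated_areas) once a dwell on one display area
    meets the reference time, else (None, []) if no dwell qualifies."""
    areas = {a for _, a in gaze_samples}
    current, dwell_start = None, None
    for ts, area in gaze_samples:
        if area != current:
            current, dwell_start = area, ts  # gaze moved: restart the dwell timer
        if ts - dwell_start >= reference_time:
            # Keep the session behind the watched area; terminate the rest.
            return current, sorted(areas - {current})
    return None, []

samples = [(0.0, "first"), (0.5, "second"), (1.0, "second"), (2.6, "second")]
print(sessions_to_terminate(samples, reference_time=2.0))
# → ('second', ['first'])
```

A dwell of 2.1 s on the second display area meets the 2.0 s reference time, so that session is kept and the others are returned for termination.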

US Pat. No. 10,972,699

VIDEO COMMUNICATION DEVICE AND METHOD FOR VIDEO COMMUNICATION

Beijing FUNATE Innovation...

1. A local video communication device comprising:
a local translucent display device configured to display remote video information received from a remote video communication device; and
a local camera array configured to capture local video information of a plurality of local users;
wherein the local translucent display device includes a naked-eye three-dimensional display, the local camera array comprises a plurality of local cameras arranged in a two-dimensional array, the local camera array is placed on a back of the local translucent display device, and
the local translucent display device further comprises a micro processing unit, wherein the micro processing unit comprises:
a video capture and processing module configured to select some of the plurality of local cameras corresponding to the image positions of a plurality of remote users as selected local cameras from the local camera array, and make the selected local cameras simultaneously capture and process the local video information of the plurality of local users, wherein the selected local cameras and the plurality of remote users are in one-to-one correspondence, the images of the plurality of remote users are in the same remote video information, the image position of one remote user corresponds to one local camera in the selected local cameras, the selected local cameras capture local video information to accommodate eye positions and directions of the plurality of remote users;
a location acquisition module configured to obtain a face position of each of the plurality of local users;
a communication module configured to communicate with the remote video communication device; and
a display module configured to display the remote video information.

US Pat. No. 10,972,698

METHOD AND DEVICE FOR SELECTING A PROCESS TO BE APPLIED ON VIDEO DATA FROM A SET OF CANDIDATE PROCESSES DRIVEN BY A COMMON SET OF INFORMATION DATA

INTERDIGITAL VC HOLDINGS,...

1. A method comprising:
selecting a process to be applied on video data from a set of candidate processes parametrized by a common set of information data comprising at least a first information data relative to a color encoding space, and a second information data relative to a light domain in which a transform is intended to be applied, the set of candidate processes comprising at least a first candidate process comprising a pre-tone-mapping, a color remapping and a post-tone-mapping, and a second candidate process comprising color volume converting, a pre-tone-mapping, a color remapping and a post-tone-mapping,
wherein said selecting comprises:
obtaining an input value for at least one information data of said common set of information data; and
selecting one of the first or second candidate processes based on a combination of the obtained input values for the color encoding space and the light domain.

US Pat. No. 10,972,697

PROJECTION SYSTEM

Lenovo (Singapore) Pte. L...

1. A system comprising:
a plank that comprises a front side and a back side that define a depth, a top side and a bottom side that define parallel planes and a thickness, and an adjustable direction video projector, wherein the depth exceeds the thickness, and wherein the adjustable direction video projector is disposed at the front side and at least in part between the parallel planes; and
circuitry operatively coupled to the adjustable direction video projector that selects one of a plurality of operational modes of the adjustable direction video projector and that adjusts a projection direction of the adjustable direction video projector responsive to selection of the one of the plurality of operational modes, wherein the plurality of operational modes comprise a back wall projection mode, wherein, in the back wall projection mode, the projection direction is directed at least in part between the parallel planes toward the back side of the plank for projection to a back wall surface.

US Pat. No. 10,972,696

UNIVERSAL MIRROR TV AND FLAT PANEL DISPLAY COVER

ELECTRIC MIRROR, LLC, Ev...

1. A reflective cover for a flat panel display, comprising:
a surface, the surface is both reflective and transmissive;
a perimeter frame, the perimeter frame is configured to couple to the surface around a perimeter of the surface, the perimeter frame further comprising:
an engagement device, the engagement device is attached to the perimeter frame, the engagement device is configured to engage with a mounting bracket; and
a blackout shroud, the blackout shroud is coupled to the perimeter frame to form an opening, the opening is sized so that during installation, the flat panel display is inserted into the opening, such that after installation, the blackout shroud contacts a back side of the flat panel display thereby substantially blocking out ambient light from entering between the blackout shroud and the flat panel display, and the engagement device engages with the mounting bracket, in operation when the flat panel display is in an on state, images displayed thereon are visible through the surface and when the flat panel display is in an off state, a mirror like reflection is provided from the surface.

US Pat. No. 10,972,695

IMAGE SENSORS WITH REDUCED SIGNAL SAMPLING KICKBACK

SEMICONDUCTOR COMPONENTS ...

1. An image sensor comprising:
an image pixel; and
readout circuitry coupled to the image pixel and configured to sample a signal from the image pixel, the readout circuitry comprising:
amplifier circuitry;
a first source follower stage;
a second source follower stage;
a charge storage structure configured to store a voltage associated with the sampled signal, wherein the first and second source follower stages are coupled between the amplifier circuitry and the charge storage structure; and
a switch that couples the first source follower stage to the charge storage structure.

US Pat. No. 10,972,694

IMAGING APPARATUS, IMAGING SYSTEM, AND CONTROL METHOD OF IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An imaging apparatus comprising:
a pixel unit having a plurality of pixels;
a signal processing unit that generates image data by performing signal processing on a pixel signal read from the pixel unit and outputs the image data on a frame basis;
an information generation unit that generates time information on a frame basis; and
an output unit that outputs the time information associated with one frame before output of the image data of the one frame from the signal processing unit is started and starts output of the image data of the one frame after the output of the time information ends.

US Pat. No. 10,972,693

IMAGE SENSOR FOR IMPROVING LINEARITY OF ANALOG-TO-DIGITAL CONVERTER AND IMAGE PROCESSING SYSTEM INCLUDING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. An image sensor comprising:
a plurality of pixels;
a ramp generator configured to generate a ramp signal;
a plurality of analog-to-digital converters (ADCs) including a first ADC, a second ADC, a third ADC and a fourth ADC;
a first buffer including an input node configured to receive the ramp signal and a first output node connected to the first ADC and the second ADC;
a second buffer including the input node configured to receive the ramp signal and a second output node connected to the third ADC and the fourth ADC;
a first capacitor connected to the first ADC and the first buffer, a second capacitor connected to the second ADC and the first buffer, a third capacitor connected to the third ADC and the second buffer, and a fourth capacitor connected to the fourth ADC and the second buffer; and
a first counter connected to the first ADC, a second counter connected to the second ADC, a third counter connected to the third ADC, a fourth counter connected to the fourth ADC,
wherein the plurality of pixels have at least ten million pixels.

US Pat. No. 10,972,692

IMAGE SENSOR INCLUDING DIGITAL PIXEL

Samsung Electronics Co., ...

1. An image sensor, comprising:
a plurality of pixels, each of the plurality of pixels including:
a photodetector including a photoelectric conversion element that outputs a detection signal in response to light incident thereon;
a comparator that compares the detection signal of the photodetector with a ramp signal and outputs a comparison signal in response thereto;
a plurality of first memory cells that store a first counting value corresponding to a first voltage level of the detection signal using the comparison signal of the comparator and output the first counting value through a plurality of transmission lines; and
a plurality of second memory cells that store a second counting value corresponding to a second voltage level of the detection signal using the comparison signal of the comparator and output the second counting value through the plurality of transmission lines,
wherein a number of the plurality of first memory cells is the same as a number of the plurality of second memory cells, and the number of the plurality of first memory cells is the same as a number of the plurality of transmission lines.

US Pat. No. 10,972,691

DYNAMIC VISION SENSOR, ELECTRONIC DEVICE AND DATA TRANSFER METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A dynamic vision sensor, comprising:
a pixel unit comprising a plurality of pixels, each pixel among the plurality of pixels configured to output an activation signal indicating occurrence of an event, in response to receiving dynamic input;
a first reading unit configured to output a first signal for processing an event of a pixel of the dynamic vision sensor in which the event occurs, based on the activation signal;
a second reading unit configured to output a second signal for processing events of all of the plurality of pixels in which the event occurs, based on the activation signal;
an event counter configured to count a number of events occurring in the pixel unit, based on the activation signal, and to output a selection signal, based on the number of events; and
a selecting unit configured to select an output signal from among one of the first signal to process the event of the pixel and the second signal to process the event of all of the plurality of pixels, based on the selection signal, and to output the output signal.

US Pat. No. 10,972,690

COMPREHENSIVE FIXED PATTERN NOISE CANCELLATION

DePuy Synthes Products, I...

1. A digital imaging method for use with an endoscope in ambient light deficient environments comprising:
actuating an emitter to emit a plurality of pulses of electromagnetic radiation to cause illumination within the light deficient environment;
pulsing the emitter at a predetermined interval corresponding to a sensing interval of a pixel array; and
sensing reflected electromagnetic radiation from a pulse with the pixel array to create an image frame in a plurality of cycles, the cycles including an integration time of the pixel array that is controlled using an electronic shutter;
stopping the emitter from pulsing for one or more iterations;
creating a dark frame by sensing the pixel array while the emitter is not pulsing a pulse;
creating a reference frame using said dark frame for use in removing fixed pattern noise;
enhancing precision of the reference frame with continued sampling of one or more subsequent dark frames, wherein pixel data for the reference frame is stored in a buffer for each of the pixels of the reference frame and is incrementally adjusted each time a subsequent dark frame is sampled by updating currently-stored pixel data stored in the buffer by modifying the currently-stored pixel data with a first correction factor, modifying the newly-sampled pixel data with a second correction factor, and then combining the modified currently-stored pixel data with the modified newly-sampled pixel data.
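The incremental reference-frame refinement recited here, scaling the stored pixel data by a first correction factor, the newly sampled data by a second, and combining, has the form of a per-pixel exponential moving average. The sketch below is a hypothetical illustration; the specific factors (1 − alpha) and alpha, and all names, are assumptions rather than values from the patent:

```python
# Hypothetical per-pixel reference-frame update (factors and names not from the patent).

def update_reference(reference, dark_frame, alpha=0.125):
    """Blend a newly sampled dark frame into the stored reference frame.

    reference / dark_frame: lists of per-pixel values (the claim's buffer).
    The first correction factor (1 - alpha) scales the currently-stored data;
    the second (alpha) scales the newly-sampled data; the results are combined.
    """
    return [
        (1.0 - alpha) * stored + alpha * new
        for stored, new in zip(reference, dark_frame)
    ]

# Two dark-frame samples refine a two-pixel reference frame.
ref = [100.0, 100.0]
for dark in ([104.0, 96.0], [104.0, 96.0]):
    ref = update_reference(ref, dark)
print(ref)  # → [100.9375, 99.0625]
```

Each update nudges the stored reference toward the newly sampled dark frame, so continued sampling averages out temporal noise while tracking the fixed pattern.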

US Pat. No. 10,972,689

SOLID-STATE IMAGE SENSOR, ELECTRONIC APPARATUS, AND CONTROL METHOD OF SOLID-STATE IMAGE SENSOR

Sony Corporation, Tokyo ...

1. A light detecting device comprising:
a pixel array including:
a first pixel including a first photoelectric conversion region, a first transfer transistor, a first reset transistor and a first amplifier transistor, and
a second pixel including a second photoelectric conversion region, a second transfer transistor, a second reset transistor and a second amplifier transistor;
power supply circuitry including a current mirror circuit and a current source;
column processing circuitry including a comparator; and
a vertical signal line coupled to the second pixel and the comparator,
wherein the first amplifier transistor is coupled to the current source and a first transistor of the current mirror circuit, and
wherein the second amplifier transistor is coupled to the current source and a second transistor of the current mirror circuit.

US Pat. No. 10,972,688

PIXEL ARCHITECTURE AND AN IMAGE SENSOR

IMEC vzw, Leuven (BE)

1. A pixel architecture for detection of incident light, the pixel architecture comprising:
an absorption layer configured to extend in a first plane, facilitate back-side illumination, generate charges in response to incident light on an interface of the absorption layer; and to transport charges in a direction perpendicular to the first plane;
a semiconductor charge-transport layer that extends in a second plane parallel to the first plane and that is configured to receive generated charges from the absorption layer and to transport the generated charges through the charge-transport layer, wherein the semiconductor charge-transport layer comprises:
a bias region;
a charge-dispatch region associated with the bias region that forms a dedicated region in a lateral direction parallel to the second plane of the charge-transport layer;
a charge node; and
one or more doped regions, wherein the one or more doped regions and the bias region have a different bias and are biased differently from a bulk substrate of the semiconductor charge-transport layer, wherein the one or more doped regions include a plurality of discrete implant regions at different depths within the semiconductor charge-transport layer that have different lateral lengths;
an electric connection connecting to the bias region for providing a selectable bias voltage to the bias region; and
at least one transfer gate associated with an area adjoining to the charge-dispatch region in the lateral direction,
wherein the different bias between the doped regions and the bias region facilitates transport of the generated charges towards the charge-dispatch region, and together with the at least one transfer gate facilitates control of a transfer of charges from the charge-dispatch region in the lateral direction to the charge node.

US Pat. No. 10,972,687

IMAGE SENSOR WITH BOOSTED PHOTODIODES FOR TIME OF FLIGHT MEASUREMENTS

OmniVision Technologies, ...

1. A method of operation for an image sensor, the method comprising:
selectively applying a bias to a first doped region of a first junction capacitor, a second doped region of a second junction capacitor, a third doped region of a third junction capacitor, and a fourth doped region of a fourth junction capacitor to couple a junction capacitance of each of the first junction capacitor, the second junction capacitor, the third junction capacitor, and the fourth junction capacitor to a photodiode; and
selectively enabling a first vertical transfer gate, a second vertical transfer gate, a third vertical transfer gate, and a fourth vertical transfer gate to transfer an electric signal generated by the photodiode in response to incident image light to a respective one of a first storage node, a second storage node, a third storage node, and a fourth storage node, wherein the junction capacitance provides an electric field to drive the electric signal from the photodiode to the respective one of the first storage node, the second storage node, the third storage node, and the fourth storage node.

US Pat. No. 10,972,686

METHOD FOR RECOGNIZING OBJECT BY USING CAMERA, AND ELECTRONIC DEVICE SUPPORTING SAME

Samsung Electronics Co., ...

1. An electronic device, comprising:
a housing including a first surface;
a display exposed through a first portion of the first surface;
a first light emitting source exposed through a second portion of the first surface;
an imaging sensor circuit that is exposed through a third portion of the first surface and is electrically connected with the first light emitting source; and
a processor that is disposed in the housing and is electrically connected with the imaging sensor circuit,
wherein the imaging sensor circuit is configured to:
receive an enable signal from the processor;
perform readout from a first time t1 to a second time t2 depending on the reception of the enable signal; and
provide a first synchronization signal to the first light emitting source from a third time t3 to a fourth time t4 and from a fifth time t5 to a sixth time t6, and
wherein the first to sixth times t1 to t6 have a relationship of the third time t3

US Pat. No. 10,972,685

VIDEO CAMERA ASSEMBLY HAVING AN IR REFLECTOR

Google LLC, Mountain Vie...

1. A video camera assembly, comprising:
one or more processors configured to operate the video camera assembly in a day mode and in a night mode;
an image sensor having a field of view of a scene and configured to capture video of a first portion of the scene while in the day mode of operation and in the night mode of operation, the first portion corresponding to the field of view of the image sensor;
one or more infrared (IR) illuminators configured to provide illumination during the night mode of operation while the image sensor captures video; and
an IR reflector component configured to: (i) substantially restrict the illumination onto the first portion of the scene, and (ii) illuminate the first portion in a substantially uniform manner across the field of view of the image sensor.

US Pat. No. 10,972,684

SPARSE LOCK-IN PIXELS FOR HIGH AMBIENT CONTROLLER TRACKING

FACEBOOK TECHNOLOGIES, LL...

1. An image sensor comprising:
a two-dimensional array of active pixels, each active pixel in the two-dimensional array of active pixels including a respective photodiode; and
a plurality of lock-in pixels dispersed at two or more regions of the two-dimensional array, each of the plurality of lock-in pixels formed by two adjacent active pixels of the two-dimensional array of active pixels, wherein the photodiodes of the two active pixels are connected to form a common photodiode, and wherein each active pixel of the two adjacent active pixels of the lock-in pixel includes:
a respective charge storage node; and
a respective switch configured to receive a respective control signal to selectively connect the charge storage node to the common photodiode.

US Pat. No. 10,972,683

CAPTIONING COMMUNICATION SYSTEMS

Sorenson IP Holdings, LLC...

1. A method to transcribe videos, the method comprising:
obtaining, at a first communication device, a video that includes video data and audio data, the video originating at a second communication device and provided to the first communication device as part of a communication session between the first communication device and the second communication device;
separating, by the first communication device, the audio data from the video;
sending, to a remote system from the first communication device, the audio data from the video without sending the video data to the remote system, wherein the video is obtained at the first communication device through a point-to-point connection between the first communication device and the second communication device without the video passing through the remote system;
in response to sending the audio data, obtaining, at the first communication device, text data originating from the remote system, the text data including a transcription of at least a portion of the audio data; and
presenting, by the first communication device, the video, including the video data and the audio data, concurrently with the text data in real-time during the communication session.

US Pat. No. 10,972,682

SYSTEM AND METHOD FOR ADDING VIRTUAL AUDIO STICKERS TO VIDEOS

Facebook, Inc., Menlo Pa...

1. A system, comprising:
a hardware processor; and
a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processor to perform a method comprising:
editing a video to create an edited video, the editing comprising:
playing a video in a video panel of a display screen of an electronic device,
while the video is playing in the video panel, receiving user input at a particular time in the video, wherein the user input includes only a single gesture ending at a particular location in the video panel, and
responsive to the user input, selecting a virtual audio sticker and adding the virtual audio sticker to the video at the particular location in the video panel, wherein the virtual audio sticker comprises:
an image, and
an audio clip.

US Pat. No. 10,972,681

IMAGE ENCODING METHOD AND SYSTEM

SZ DJI TECHNOLOGY CO., LT...

1. An image encoding method comprising:
preprocessing source images according to preset preprocessing requirements to generate preprocessed images;
merging the preprocessed images according to preset merging requirements to generate a target image, including:
storing the preprocessed images sequentially in a target cache; and
merging the preprocessed images to form the target image in the target cache according to merging orders and merging coordinates; and
encoding the target image.
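The merge step, placing preprocessed images into a target cache at preset merging coordinates in a preset order, can be sketched with a simple 2-D canvas. All names and the pixel model are hypothetical illustrations, not from the patent:

```python
# Hypothetical sketch of coordinate-based image merging (names not from the patent).

def merge_images(segments, canvas_w, canvas_h):
    """segments: list of (image, (x, y)) in merge order, where each image is
    a 2-D list of pixel values and (x, y) is its merging coordinate.
    Returns the merged target image built up in the 'target cache' canvas."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for image, (x, y) in segments:  # later entries in merge order overwrite earlier
        for r, row in enumerate(image):
            for c, px in enumerate(row):
                canvas[y + r][x + c] = px
    return canvas

left = [[1, 1], [1, 1]]
right = [[2, 2], [2, 2]]
target = merge_images([(left, (0, 0)), (right, (2, 0))], canvas_w=4, canvas_h=2)
print(target)  # → [[1, 1, 2, 2], [1, 1, 2, 2]]
```

The merged target then goes to the encoder as a single image, which is the point of the claim: one encode pass over the composite instead of one per source.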

US Pat. No. 10,972,680

THEME-BASED AUGMENTATION OF PHOTOREPRESENTATIVE VIEW

Microsoft Technology Lice...

1. On a see-through display configured to provide a photorepresentative view from a user's vantage point of a physical environment via one or more sufficiently transparent portions of the see-through display through which the physical environment is viewable, a method of providing theme-based augmenting of the photorepresentative view, the method comprising:
receiving, from the user, an input selecting an augmentation theme for use in augmenting the photorepresentative view, the augmentation theme comprising a plurality of possible augmentations and selected from among at least two augmentation themes available for selection;
obtaining, optically and in real time, environment information of the physical environment;
generating in real time a three-dimensional spatial model of the physical environment including representations of objects present in the physical environment based on the environment information;
identifying, via analysis of the three-dimensional spatial model, one or more features within the three-dimensional spatial model that each corresponds to one or more physical features in the physical environment;
based on such analysis, displaying, on the see-through display, an augmentation of a feature of the one or more features identified via analysis of the three-dimensional spatial model, the augmentation being associated with the augmentation theme and being visible while portions of the physical environment remain viewable through the see-through display, the augmentation selected from the plurality of possible augmentations based on one or more of a size and a shape of the feature identified;
as the user moves about the physical environment, updating the three-dimensional spatial model in real time based on the environment information; and
as a result of the updating of the three-dimensional model, displaying, on the see-through display, an augmentation change.

US Pat. No. 10,972,679

IMAGE PROCESSING APPARATUS AND METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS

Canon Kabushiki Kaisha, ...

1. An imaging apparatus comprising:
an image sensor configured to output image data;
at least one processor configured to function as:
an acquisition unit configured to sequentially acquire pieces of starry-sky image data;
a generation unit configured to perform lighten composite processing on a plurality of pieces of starry-sky image data sequentially acquired by the acquisition unit and generate a star-trail moving image with composite image data obtained as a result of the lighten composite processing; and
a setting unit configured to make a setting for controlling whether to fade out a part of a star trail included in the star-trail moving image generated by the generation unit,
wherein fading out the part of the star trail is processing of gradually darkening the part of the star trail along with the progress of the star-trail moving image, and
wherein the generation unit generates the star-trail moving image based on the setting made by the setting unit.
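Lighten compositing, as used in this claim, keeps the per-pixel maximum across sequentially acquired frames, so star trails accumulate as bright streaks. A small sketch on 1-D grayscale pixel lists (an assumption for brevity; not the patented implementation), with a hypothetical decay weight standing in for the claimed fade-out setting:

```python
def lighten_composite(frames):
    """Per-pixel maximum across frames (the 'lighten' blend)."""
    out = list(frames[0])
    for f in frames[1:]:
        out = [max(a, b) for a, b in zip(out, f)]
    return out

def composite_with_fade(frames, decay=0.8):
    """Lighten composite in which older frames are attenuated, so the oldest
    part of the star trail gradually darkens as the movie progresses."""
    out = [0] * len(frames[0])
    for age, f in enumerate(reversed(frames)):
        w = decay ** age          # newest frame has weight 1, older frames less
        out = [max(o, int(p * w)) for o, p in zip(out, f)]
    return out
```

Generating the star-trail moving image would then mean emitting one composite per newly acquired frame, with or without the fade depending on the setting.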

US Pat. No. 10,972,678

IMAGE ACQUISITION DEVICE, DRIVER ASSISTANCE SYSTEM, AND METHOD FOR OPERATING AN IMAGE ACQUISITION DEVICE

Motherson Innovations Com...

1. An image acquisition device for a driver assistance system of a vehicle, comprising:at least one light entry element for generating image information;
at least one image acquisition sensor with at least one darkening element having a variable translucency in at least a first light frequency range; and
at least one control device, wherein the control device is in operative communication with at least one of the image acquisition sensor, the light entry element, the darkening element, a display device for displaying images acquired using the image acquisition device, and a sensor element,
wherein image information including light rays from the light entry element impinge on the image acquisition sensor along a ray path,
wherein a plurality of darkening elements is provided, and the variable translucency is changeable at least in areas for different first light frequency ranges using different darkening elements,
wherein at least one of the darkening element and the control device are or is set up to change the variable translucency of the darkening element and of at least a first subarea, depending on at least one light parameter, and at least the first subarea is definable using the control device, using static geometric data of the vehicle or of the image acquisition device including at least one of a yaw angle and a pitch angle,
wherein the variable translucency of the darkening element and the first subarea is changeable so that an amount of light impinging on the image acquisition sensor and on a second subarea of the image acquisition sensor remains below a first threshold value,
wherein the first threshold value is dependent on at least one overload limit of the image acquisition sensor in the second subarea on at least one dazzle value of the image acquisition device or partial light incidences on the image acquisition sensor, and
wherein the darkening element includes a matrix with columns and rows, the area to be darkened on the darkening element is calculated by the control device based on a position of the vehicle, a light acquisition direction, and a yaw and pitch angle of the image acquisition device.

US Pat. No. 10,972,677

IMAGING CONTROL APPARATUS AND IMAGING CONTROL METHOD

SONY CORPORATION, Tokyo ...

1. An imaging control apparatus, comprising:circuitry configured to:
detect a flicker component of a light source, wherein the flicker component is in an image;
control timing of a capture of the image based on timing of a peak or a bottom of the flicker component and based on an exposure time of a sensor during the capture of the image;
control the capture of the image at the bottom of the flicker component based on the exposure time of the sensor not being an integer multiple of a cycle of the flicker component and the exposure time of the sensor including an odd number of the flicker component corresponding to the cycle; and
control the capture of the image at the peak of the flicker component based on the exposure time of the sensor not being the integer multiple of the cycle of the flicker component and the exposure time of the sensor including an even number of the flicker component corresponding to the cycle.
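The timing control this claim describes amounts to scheduling the exposure window so that it is centered on a chosen phase of the flicker waveform. A hedged sketch under simplifying assumptions (the flicker is periodic with a known period, the bottom sits at half-period phase, and all units are arbitrary time units; none of this is taken from the patent):

```python
def capture_start_time(now, period, exposure, align="bottom"):
    """Return the next exposure start time such that the exposure window is
    centered on a flicker bottom (phase period/2) or peak (phase 0)."""
    target_phase = 0.5 * period if align == "bottom" else 0.0
    # next moment at or after `now` with the target phase
    center = now + (target_phase - now % period) % period
    # step forward whole periods until the exposure window starts in the future
    while center - exposure / 2 < now:
        center += period
    return center - exposure / 2
```

For example, with a 10 ms flicker period and a 4 ms exposure starting the search at t=0, the bottom-aligned exposure would start at t=3 ms (centered on the bottom at t=5 ms).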

US Pat. No. 10,972,676

IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE CAPABLE OF OPTIMIZING HDR IMAGE BY USING DEPTH INFORMATION

ALTEK CORPORATION, Hsinc...

1. An image processing method for an electronic device, the method comprising:obtaining a plurality of first images;
obtaining first depth information;
generating a second image according to the plurality of first images;
identifying a subject and a background of the second image according to the first depth information;
determining whether the subject of the second image needs to be optimized;
when the subject of the second image needs to be optimized, optimizing the subject of the second image, generating an output image according to the background and the optimized subject, and outputting the output image; and
when the subject of the second image does not need to be optimized, generating the output image according to the background and the subject, and outputting the output image.
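The depth-based subject/background split and the conditional optimization in this claim can be sketched on flat pixel lists (an illustrative simplification, not the patented method; the brightness-based "needs optimization" test and the gain value are hypothetical stand-ins):

```python
def split_by_depth(pixels, depths, threshold):
    """Classify each pixel as subject (nearer than threshold) or background."""
    subject = [p if d < threshold else None for p, d in zip(pixels, depths)]
    background = [p if d >= threshold else None for p, d in zip(pixels, depths)]
    return subject, background

def needs_optimization(subject, low=30, high=220):
    """Flag a subject whose mean brightness falls outside a usable range."""
    vals = [p for p in subject if p is not None]
    mean = sum(vals) / len(vals)
    return mean < low or mean > high

def optimize(subject, gain=1.3):
    """Brighten the subject pixels, clamping to 8-bit range."""
    return [min(255, int(p * gain)) if p is not None else None for p in subject]
```

The output image would then be composed from the (possibly optimized) subject pixels and the untouched background pixels.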

US Pat. No. 10,972,675

ENDOSCOPE SYSTEM

OLYMPUS CORPORATION, Tok...

1. An endoscope system comprising:an illumination unit configured to radiate illumination light onto a subject, the illumination light having a spatially non-uniform intensity distribution including a light section and a dark section in a beam cross section orthogonal to an optical axis;
an imaging unit configured to image at least two illumination images of the subject irradiated with the illumination light;
a processor configured to generate two separate images from the at least two illumination images imaged by the imaging unit; and
an intensity-distribution changing unit configured to temporally change the intensity distribution of the illumination light such that the light section and the dark section are positionally interchanged,
wherein the at least two illumination images are imaged by the imaging unit by irradiating the subject with the illumination light in which the intensity distribution is different between the beams, and intensity values of corresponding pixels within respective illumination images of the at least two illumination images are mutually different, and
wherein one of the two separate images is a deep-layer image including a larger amount of information about a deep-layer region of the subject than the other one of the two separate images, and the other one of the two separate images is a surface-layer image including a larger amount of information about a surface and a surface-layer region of the subject than the deep-layer image, and
wherein the processor generates the two separate images based on the intensity values of each pixel of the at least two illumination images.
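With the light and dark sections interchanged between the two illumination images, each pixel is sampled once brightly lit and once dimly lit. One plausible per-pixel separation, in the spirit of direct/global light decomposition (an assumption on my part; the patent does not specify these exact formulas), treats the brighter sample as surface plus deep light and the darker sample as mostly deep light:

```python
def separate_layers(img_a, img_b):
    """Split two complementary illumination images into surface-layer and
    deep-layer dominated components, per pixel."""
    surface, deep = [], []
    for a, b in zip(img_a, img_b):
        hi, lo = max(a, b), min(a, b)
        surface.append(hi - lo)   # surface-layer dominated component
        deep.append(2 * lo)       # deep-layer (scattered) dominated component
    return surface, deep
```

The factor of 2 on the dark sample compensates for the dark section receiving roughly half the illumination duty; real systems would calibrate this.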

US Pat. No. 10,972,674

ELECTRONIC APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An electronic apparatus, comprising at least one processor and/or at least one circuit to perform the operations of the following units:a converting unit configured to convert a first type of image having a first gradation resolution into a converted image having a second gradation resolution which is higher than the first gradation resolution;
a connecting unit configured to connect with an external apparatus;
a setting unit configured to set a connection mode with the external apparatus to any of a plurality of connection modes, including a first connection mode in which an image having a gradation resolution which is not higher than the first gradation resolution is outputted, and a second connection mode in which an image having a gradation resolution which is higher than the first gradation resolution is outputted; and
a control unit configured to control so that
in a case where the connection is in the first connection mode, the first type of image is outputted from the connecting unit without converting the first type of image by the converting unit, and
in a case where the connection is in the second connection mode, the first type of image is converted by the converting unit, and the converted image is outputted from the connecting unit,
wherein a source of the image to be outputted from the connecting unit can be switched from a second type of image having a gradation resolution which is higher than the first gradation resolution to the first type of image, while maintaining the connection in the second connection mode, and
the converting unit converts the first type of image having a first gradation characteristic into the converted image having a second gradation characteristic different from the first gradation characteristic.

US Pat. No. 10,972,673

TARGET TRACKING METHOD AND DEVICE, MOVABLE PLATFORM, AND STORAGE MEDIUM

SZ DJI TECHNOLOGY CO., LT...

1. A target tracking method applicable to a shooting device including a first shooting assembly and a second shooting assembly, comprising:calling the first shooting assembly to shoot an environment to obtain a first image;
calling the second shooting assembly to shoot the environment to obtain a second image;
sending the first image to a control terminal of a movable platform carrying the shooting device to cause the control terminal to display the first image;
obtaining first area indication information sent by the control terminal, the first area indication information being determined by the control terminal by detecting a selection operation of a target object to be tracked performed by a user on the first image displayed by the control terminal;
determining second area indication information of the second image according to the first area indication information and a relative positional relationship between the first shooting assembly and the second shooting assembly;
performing target object recognition on an area indicated by the second area indication information in the second image to determine the target object in the second image to obtain a tracking position area of the target object to be tracked in the second image; and
adjusting a shooting attitude of the shooting device according to the tracking position area of the target object in the second image to adjust a location of the target object in a shooting frame of the first shooting assembly.

US Pat. No. 10,972,672

DEVICE HAVING CAMERAS WITH DIFFERENT FOCAL LENGTHS AND A METHOD OF IMPLEMENTING CAMERAS WITH DIFFERENT FOCAL LENGTHS

Samsung Electronics Co., ...

1. A method of generating an image from multiple cameras having different focal lengths, the method comprising:receiving a wide image and a tele image;
aligning the wide image and the tele image to overlap a common field of view;
establishing a stitching boundary associated with an overlapping region of the wide image and the tele image;
correcting, after establishing the stitching boundary, for photometric differences between the wide image and the tele image, wherein correcting for photometric differences comprises performing global luminance tone correction and local luminance tone correction in the stitching boundary;
selecting, after aligning the wide image and the tele image and correcting for photometric differences between the wide image and the tele image, a stitching seam for the wide image and the tele image; and
joining the wide image and the tele image to generate a composite image, wherein a first portion of the composite image on one side of the stitching seam is from the wide image and a second portion of the composite image on the other side of the stitching seam is from the tele image.
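The global luminance tone correction step in this claim can be illustrated by matching mean luminance across the overlapping region before seam selection. A minimal sketch on flat pixel lists (an illustrative simplification with a single multiplicative gain; the patent's actual correction is not specified here):

```python
def global_tone_gain(wide_overlap, tele_overlap):
    """Gain that matches the tele image's mean luminance to the wide image's
    mean luminance within the overlapping region."""
    mean_wide = sum(wide_overlap) / len(wide_overlap)
    mean_tele = sum(tele_overlap) / len(tele_overlap)
    return mean_wide / mean_tele

def correct(pixels, gain):
    """Apply the gain, clamping to 8-bit range."""
    return [min(255, int(round(p * gain))) for p in pixels]
```

Local tone correction would then refine residual differences near the stitching boundary, e.g. with a spatially varying gain.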

US Pat. No. 10,972,671

IMAGE PROCESSING APPARATUS CONFIGURED TO GENERATE AUXILIARY IMAGE SHOWING LUMINANCE VALUE DISTRIBUTION, METHOD FOR CONTROLLING THE IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus comprising:at least one processor or circuit configured to execute a plurality of tasks, including:
an acquiring task that acquires a luminance component of an input image;
a converting task that converts the luminance component acquired by the acquiring task into a predetermined luminance value for each pixel;
a generating task that generates a graph waveform image that shows a graph showing a relationship between a distribution of the luminance values obtained through the conversion by the converting task and pixel locations in the input image; and
a display controlling task that displays the graph waveform image generated by the generating task on a display apparatus, wherein, in a case where the input image is recorded in a first mode, the generated graph waveform image is a logarithmic graph waveform image showing the distribution of the luminance values along one axis, among a vertical axis and a horizontal axis, which is a logarithmic axis representing the luminance values in the graph, and the other axis, among the vertical axis and the horizontal axis, representing the pixel locations in the input image.
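A waveform image of this kind is essentially a per-column histogram of luminance, with the luminance axis log-scaled. A hedged sketch (not the patented implementation; bin count and the log1p spacing are my own choices) that builds such a histogram from a row-major list-of-lists luminance image:

```python
import math

def log_waveform(image, n_bins=8, max_val=255):
    """For each pixel column, count luminance samples into bins that are
    uniformly spaced in log(1 + luminance), mimicking a logarithmic
    vertical axis on the waveform display."""
    top = math.log1p(max_val)
    cols = len(image[0])
    hist = [[0] * n_bins for _ in range(cols)]
    for row in image:
        for c, y in enumerate(row):
            b = min(n_bins - 1, int(math.log1p(y) / top * n_bins))
            hist[c][b] += 1
    return hist
```

Rendering would map each column's counts to pixel intensities, so bright bands in the waveform show where luminance values concentrate.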

US Pat. No. 10,972,670

CONTENT DISPLAY METHOD AND ELECTRONIC DEVICE FOR IMPLEMENTING SAME

Samsung Electronics Co., ...

1. An electronic device comprising:a display;
a memory; and
at least one processor operatively connected with the display and the memory,
wherein the at least one processor is configured to:
control the display to display an original image,
detect a roll creation event,
identify objects from the original image in accordance with the roll creation event,
convert the original image into a roll image by rolling the original image into a 3 dimensional (3D) spherical shape or a circular shape, using the identified objects, and
store the roll image in the memory,
wherein the objects include at least one of a subject, a person, a thing, a background, or a natural environment, and
wherein the roll image is created such that the identified objects in the original image are displayed protruding from the edge of the 3D spherical shape or the circular shape.

US Pat. No. 10,972,669

OPERATION APPARATUS, OPTICAL APPARATUS, AND IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An operation apparatus for operating an optical element movable for changing an optical characteristic of an optical apparatus, the apparatus comprising:an operation knob;
a detector configured to detect an operation amount of the operation knob;
a controller configured to generate an operation command for the optical element based on the operation amount; and
a memory configured to store an operation target for the optical element,
wherein the controller is configured to cause a display to
display the operation command and the operation target in a first region of the display corresponding to a range which the operation command can take; and
display the operation command and the operation target in a second region corresponding to a partial region in the first region in a case where the operation command and the operation target fall within a range corresponding to the partial region, the partial region being smaller than the first region, the second region being obtained by enlarging the partial region.

US Pat. No. 10,972,668

DISPLAY DEVICE AND CONTROL METHOD FOR DISPLAY DEVICE

SEIKO EPSON CORPORATION, ...

1. A display device comprising:an image display configured to display an image and transmit an outside scene; and
a processor or a circuit configured to:
acquire a position information of a mobile body,
detect a movement of the image display,
execute a visibility control processing based on the position information of the mobile body and the movement of the image display, and
cause the image display to display the image based on the visibility control processing that is executed.

US Pat. No. 10,972,667

OPTICAL IMAGE STABILIZATION APPARATUS AND OPTICAL APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An optical image stabilization apparatus comprising:an image stabilization element;
a movable member configured to hold the image stabilization element;
a support member configured to movably support the movable member;
a coil held by one of the movable member and the support member; and
a magnet held by the other of the movable member and the support member and opposite to the coil,
wherein the optical image stabilization apparatus drives the movable member by electrifying the coil to correct an image blur,
wherein the optical image stabilization apparatus further comprises a nonmagnetic conductor configured to cover a surface different from a magnet opposing surface of the coil that faces the magnet, and
wherein the nonmagnetic conductor is disposed between the coil and the one of the movable member and the support member.

US Pat. No. 10,972,666

LENS MOVING UNIT COMPRISING A SENSING MAGNET AND A CORRECTION MAGNET

LG INNOTEK CO., LTD., Se...

1. A lens moving unit comprising:a base;
a cover member coupled to the base comprising an upper plate and a plurality of lateral walls extending from the upper plate;
a bobbin disposed in the cover member and configured to move in an optical axis direction;
a pair of driving magnets disposed on the cover member;
a coil disposed on the bobbin and facing the pair of driving magnets;
a sensing magnet disposed on the bobbin;
a correction magnet disposed on the bobbin; and
a sensor configured to detect the sensing magnet,
wherein the plurality of lateral walls of the cover member comprises first and second lateral walls opposite to each other, and third and fourth lateral walls disposed between the first and second lateral walls and opposite to each other,
wherein the pair of driving magnets comprises a first magnet disposed on the first lateral wall of the cover member and a second magnet disposed on the second lateral wall of the cover member,
wherein no driving magnet is disposed on the third lateral wall and the fourth lateral wall of the cover member,
wherein the sensing magnet is disposed on a first surface of the bobbin corresponding to the third lateral wall of the cover member, and
wherein the correction magnet is disposed on a second surface of the bobbin corresponding to the fourth lateral wall of the cover member.

US Pat. No. 10,972,665

IMAGING APPARATUS AND IMAGE BLURRING AMOUNT CALCULATION METHOD THEREFOR

OLYMPUS CORPORATION, Tok...

1. An imaging apparatus that includes an image shooting optical system for forming an image of a subject, the imaging apparatus comprising:a first acceleration sensor that detects accelerations for first and second directions;
a second acceleration sensor that detects accelerations for the first and second directions, the first and second acceleration sensors being located at different positions on a first plane orthogonal to an optical axis of the image shooting optical system; and
a first microprocessor that includes the following sections for performing arithmetic processing:
a first acceleration estimation section that calculates a first-direction acceleration estimated value for a first position on the optical axis on the basis of a distance in the second direction between the optical axis and the first acceleration sensor, a distance in the second direction between the optical axis and the second acceleration sensor, a first-direction acceleration detected value provided by the first acceleration sensor, and a first-direction acceleration detected value provided by the second acceleration sensor,
a second acceleration estimation section that calculates a second-direction acceleration estimated value for the first position on the basis of a distance in the first direction between the optical axis and the first acceleration sensor, a distance in the first direction between the optical axis and the second acceleration sensor, a second-direction acceleration detected value provided by the first acceleration sensor, and a second-direction acceleration detected value provided by the second acceleration sensor, and
a blurring amount calculation section that calculates a first-direction image blurring amount and a second-direction image blurring amount for the imaging apparatus by using the first-direction acceleration estimated value and the second-direction acceleration estimated value, wherein
the first and second acceleration sensors are positioned in a manner such that the optical axis is positioned at a midpoint between the first and second acceleration sensors,
the first acceleration estimation section defines, as a first-direction acceleration estimated value, an average of the first-direction acceleration detected value provided by the first acceleration sensor and the first-direction acceleration detected value provided by the second acceleration sensor, and
the second acceleration estimation section defines, as a second-direction acceleration estimated value, an average of the second-direction acceleration detected value provided by the first acceleration sensor and the second-direction acceleration detected value provided by the second acceleration sensor.
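Because the claim places the optical axis at the midpoint between the two sensors, the rotational contributions cancel and the on-axis estimate reduces to a plain average of the two detected values, as the last two clauses state. A one-function sketch (sensor readings as hypothetical (x, y) tuples):

```python
def axis_acceleration(sensor1, sensor2):
    """Estimate first- and second-direction acceleration at the point where
    the optical axis crosses the sensor plane: with the axis at the midpoint
    between the sensors, it is the average of the two detected values."""
    ax = (sensor1[0] + sensor2[0]) / 2
    ay = (sensor1[1] + sensor2[1]) / 2
    return ax, ay
```

The blurring amount calculation would then double-integrate these estimates over the exposure time, which this sketch does not attempt.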

US Pat. No. 10,972,664

IMAGE BLURRING CORRECTION APPARATUS, IMAGING APPARATUS, AND IMAGE BLURRING CORRECTION METHOD THAT CORRECTS IMAGE BLURRING BASED ON PANNING DETECTION AND ANGULAR VELOCITY

OLYMPUS CORPORATION, Tok...

1. An image blurring correction apparatus comprising:an angular-velocity sensor that detects an angular velocity of the image blurring correction apparatus; and
a processor comprising:
a first-panning detection section that detects first panning on the basis of the angular velocity,
a low-pass-filter (LPF) processing section that performs LPF processing on the angular velocity,
a second-panning detection section that detects second panning that has a panning velocity that is lower than that of the first panning on the basis of a processing result of the LPF processing section,
a high-pass-filter (HPF) processing section that performs HPF processing on the angular velocity,
a calculation section that calculates an image-blurring-correction amount on the basis of a detection result of the first-panning detection section or a detection result of the second-panning detection section, and a processing result of the HPF processing section, and
a drive circuit that drives an image-blurring-correction mechanism on the basis of the image-blurring-correction amount,
wherein, when the first-panning detection section has detected the first panning, or when the second-panning detection section has detected the second panning, the processor changes one of, or both, a characteristic of the HPF processing section and a characteristic of the calculation section in such a manner as to decrease the image-blurring-correction amount.
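The two-stage panning detection in this claim can be sketched with a one-pole low-pass filter: fast panning is visible in the raw angular velocity, while slow panning emerges only after low-pass smoothing. All thresholds and the filter coefficient below are hypothetical choices, not values from the patent:

```python
def lpf(samples, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def detect_panning(angular_velocity, threshold):
    """First (fast) panning: raw angular velocity exceeds the threshold.
    Second (slow) panning: low-passed velocity exceeds a lower threshold."""
    fast = any(abs(v) > threshold for v in angular_velocity)
    slow = any(abs(v) > threshold / 2 for v in lpf(angular_velocity))
    return fast, slow
```

On detection, a real correction loop would raise the HPF cutoff or scale down the correction amount so the stabilizer does not fight the intentional camera motion.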

US Pat. No. 10,972,663

METHODS FOR AUTOMATICALLY SWITCHING VIDEO CAPTURING AND PLAYING BACK FRAME RATE

GUANGDONG OPPO MOBILE TEL...

1. A method, comprising:receiving an indication that a video mode is invoked on a touch screen unit, wherein the receiving the indication causes the following operations to be automatically performed:
obtaining a first frame and a second frame at a first frame rate;
defining a first region of interest (ROI) in the first frame based on a first selection of a user for the first frame;
defining a second ROI in the second frame based on the first ROI;
determining a first camera motion flow between a first region comprising a portion of the first frame complementary to a region co-located with the second ROI and a corresponding portion of the second frame;
determining a first ROI motion flow between the first ROI and a corresponding portion of the second ROI;
determining a second frame rate based on a first comparative value determined using the first ROI motion flow and the first camera motion flow; and
capturing a third frame at the second frame rate by an image sensor unit, wherein the second frame rate is higher than the first frame rate, or playing back a fourth frame at the second frame rate on the touch screen unit, wherein the second frame rate is lower than the first frame rate.
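The comparative value this claim derives from the ROI motion flow and the camera motion flow can be illustrated as a simple ratio test: a subject moving much faster than the background warrants a higher capture rate, and one moving much slower warrants a lower playback rate. The rates and the ratio threshold below are hypothetical:

```python
def choose_frame_rate(roi_flow, camera_flow, base_rate=30,
                      high_rate=120, low_rate=15, ratio_threshold=2.0):
    """Compare ROI motion to camera (background) motion and pick a rate."""
    ratio = roi_flow / max(camera_flow, 1e-6)  # avoid division by zero
    if ratio > ratio_threshold:
        return high_rate   # fast-moving subject: capture more frames
    if ratio < 1.0 / ratio_threshold:
        return low_rate    # slow-moving subject: play back fewer frames
    return base_rate
```

In practice the motion flows would come from optical-flow estimates between the complementary regions and the ROIs of consecutive frames, as the claim's earlier steps describe.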

US Pat. No. 10,972,662

METHOD FOR PROVIDING DIFFERENT INDICATOR FOR IMAGE BASED ON SHOOTING MODE AND ELECTRONIC DEVICE THEREOF

Samsung Electronics Co., ...

1. An electronic device comprising:a camera;
a display;
a memory; and
at least one processor,
wherein the at least one processor is configured to:
execute an application using the camera in a first mode;
control the display to display a first preview image corresponding to the first mode;
obtain a first user input changing the first mode to a second mode;
in response to obtaining the first user input:
capture the first preview image at a point in time at which the first user input is received;
control the display to display the captured first preview image and an indicator corresponding to the second mode on the captured first preview image during a resetting of the camera to the second mode;
identify whether the resetting of the camera to the second mode is completed; and
maintain displaying the captured first preview image and the indicator corresponding to the second mode on the captured first preview image on the display, until the at least one processor identifies that the resetting of the camera to the second mode is completed;
after the resetting of the camera to the second mode, control the display to display a second preview image corresponding to the second mode, wherein a size of the second preview image is equal to a size of the indicator; and
store at least a part of the second preview image in the memory based on a second user input.

US Pat. No. 10,972,661

APPARATUS AND METHODS FOR IMAGE ALIGNMENT

GoPro, Inc., San Mateo, ...

1. A system that obtains a composite image, the system comprising:one or more physical processors configured by machine-readable instructions to:
obtain source images;
identify overlapping portions of the source images;
obtain a disparity measure based on an evaluation of pixels within the overlapping portions of the source images;
apply a transformation operation to one or more pixels within the overlapping portions of the source images to generate transformed source images, wherein the transformation operation is determined based on a discrete refinement and displaces the one or more pixels to reduce the disparity measure; and
obtain the composite image based on the transformed source images.

US Pat. No. 10,972,660

IMAGING DEVICE AND IMAGING METHOD

Olympus Corporation, Tok...

1. An imaging device that is capable of having a fisheye lens, for changing between circular fisheye and full-frame fisheye depending on focal length, attached to a main body, and that is capable of shooting digital images, comprising:an image sensor on which photometric domains and/or AF regions are arranged;
a lens communication circuit that performs communication with a lens that has been attached and acquires lens information including focal length information; and
a processor that detects whether or not a lens that has been attached is a circular fisheye lens for changing between circular fisheye and full-frame fisheye depending on focal length, based on the lens information, and further, responsive to a determination, based on the focal length, that the lens is a circular fisheye lens, restricts the photometric domains and/or AF regions based on an image circle of the circular fisheye lens.

US Pat. No. 10,972,659

IMAGE CAPTURING DEVICE, A METHOD AND A COMPUTER PROGRAM PRODUCT FOR FORMING AN ENCODED IMAGE

Axis AB, Lund (SE)

1. An image capturing device comprisinga first and a second image sensor;
a first and a second encoder;
an image data combining unit, wherein the image data combining unit is implemented in the first encoder;
wherein the first image sensor is configured to capture a first image frame,
wherein the second image sensor is configured to capture a second image frame;
wherein the first encoder is configured to encode image data of the first image frame into first encoded data, and the second encoder is configured to encode image data of the second image frame into second encoded data;
wherein the image data combining unit is configured to receive the first encoded data from the first encoder and the second encoded data encoded from the second encoder, and to form an encoded image, the encoded image comprising the first encoded data as a first tile or a first slice and the second encoded data as a second tile or a second slice.

US Pat. No. 10,972,658

IMAGE CAPTURE EYEWEAR WITH AUTO-SEND

Snap Inc., Santa Monica,...

1. A system comprising:image capture eyewear, including:
a support structure;
a display system connected to the support structure, the display system configured to present a plurality of assignable recipient markers, wherein a first of the plurality of assignable recipient markers is a first light emitting diode (LED) positioned on the support structure and a second of the plurality of assignable recipient markers is a second LED positioned on the support structure;
a selector connected to the support structure, the selector configured to select one or more of the plurality of assignable recipient markers; and
a camera connected to the support structure to capture an image of a scene;
a processor coupled to the image capture eyewear;
a memory accessible to the processor; and
programming in the memory, wherein execution of the programming by the processor configures the system to perform functions, including functions to:
assign one or more recipients to the first LED and one or more other recipients to the second LED;
receive the image of the scene captured by the camera;
receive an indicator associated with one of the first or second LEDs when the image of the scene was captured; and
transmit the captured image to the one or more of the recipients assigned to the one of the first or second LEDs based on the received indicator.

US Pat. No. 10,972,657

LENS COVER DETECTION DURING SPHERICAL VISUAL CONTENT CAPTURE

GoPro, Inc., San Mateo, ...

1. An image capture device that captures spherical visual content, the image capture device comprising:a housing;
a first image sensor carried by the housing and configured to generate a first visual output signal conveying first visual information based on light that becomes incident thereon, the first visual information defining first visual content;
a second image sensor carried by the housing and configured to generate a second visual output signal conveying second visual information based on light that becomes incident thereon, the second visual information defining second visual content;
a first optical element carried by the housing and configured to guide light within a first field of view to the first image sensor, the first field of view being greater than 180 degrees;
a second optical element carried by the housing and configured to guide light within a second field of view to the second image sensor, the second field of view being greater than 180 degrees, the first optical element and the second optical element carried by the housing such that a peripheral portion of the first field of view and a peripheral portion of the second field of view overlap, the overlap of the peripheral portion of the first field of view and the peripheral portion of the second field of view enabling spherical capture of visual content based on the first visual content and the second visual content; and
one or more physical processors carried by the housing, the one or more physical processors configured by machine-readable instructions to:
obtain lens cover usage information, the lens cover usage information characterizing usage of a first lens cover with respect to the first optical element and/or usage of a second lens cover with respect to the second optical element during capture of the spherical visual content;
determine whether the first lens cover is covering the first optical element and/or whether the second lens cover is covering the second optical element during the capture of the spherical visual content based on the lens cover usage information; and
generate an alarm based on the first lens cover covering the first optical element and/or the second lens cover covering the second optical element, the alarm indicating the first lens cover covering the first optical element and/or the second lens cover covering the second optical element during the capture of the spherical visual content.

US Pat. No. 10,972,656

COGNITIVELY COACHING A SUBJECT OF A PHOTOGRAPH

INTERNATIONAL BUSINESS MA...

1. A processor-implemented method for cognitively coaching a user to take favorable photographs, the method comprising:determining characteristics of favorable photographs from a favorable photo database using image analysis techniques;
identifying subjects in a current camera frame;
identifying characteristics of a photograph from a current camera frame;
determining similarities or differences between the favorable photographs and the current camera frame;
determining whether an error between one of the favorable photographs and the current camera frame is within a preconfigured acceptable threshold; and
generating directions that map a current state of similar characteristics to the favorable photographs.
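The comparison-and-threshold loop in the claim above can be sketched as follows. This is an illustrative sketch only, not IBM's implementation: the feature names, the scalar characteristic encoding, and the `coaching_directions` helper are all assumptions made for the example.

```python
# Hypothetical sketch: characteristics of the current camera frame are compared
# against those learned from a favorable photo database, and coaching
# directions are generated only when the error exceeds a preconfigured
# acceptable threshold.

def coaching_directions(favorable, current, threshold=0.1):
    """Return per-characteristic adjustment hints, or [] if within threshold."""
    directions = []
    for name, target in favorable.items():
        error = abs(current.get(name, 0.0) - target)
        if error > threshold:  # outside the preconfigured acceptable threshold
            hint = "increase" if current.get(name, 0.0) < target else "decrease"
            directions.append(f"{hint} {name}")
    return directions

# invented example characteristics, normalized to [0, 1]
favorable = {"brightness": 0.6, "subject_centering": 0.9}
current = {"brightness": 0.3, "subject_centering": 0.92}
directions = coaching_directions(favorable, current)
```

Here only `brightness` falls outside the threshold, so a single coaching direction would be produced.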

US Pat. No. 10,972,655

ADVANCED VIDEO CONFERENCING SYSTEMS AND METHODS

LOGITECH EUROPE S.A., La...

1. A video conferencing method, comprising:
defining, by use of a camera device, a first actual field-of-view, wherein the first actual field-of-view optically frames a first portion of a video conferencing environment;
digitally framing, by use of the camera device, a second portion of the video conferencing environment to provide a first apparent field-of-view, wherein the first apparent field-of-view comprises a portion of the first actual field-of-view;
generating, by use of the camera device, a video stream comprising a plurality of sequentially generated frames of the first apparent field-of-view;
generating, by use of the camera device, a plurality of survey frames that each comprise at least a portion of the first actual field-of-view that is different from the first apparent field-of-view, wherein each individual survey frame of the plurality of survey frames is generated between sequentially generated frames of the first apparent field-of-view;
extracting each of the plurality of survey frames from the video stream before the video stream is transmitted to a user device;
analyzing the plurality of survey frames to generate survey data; and
detecting changes in the survey data over time.
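The interleave-then-extract structure of the claim above can be sketched as a stream split. This is an illustrative sketch, not Logitech's implementation; the fixed cadence of one survey frame per four stream frames is an invented assumption.

```python
# Hypothetical sketch: survey frames are generated between sequential frames
# of the apparent field-of-view and must be extracted from the video stream
# before it is transmitted to the user device. Here every Nth frame position
# is treated as a survey frame.

SURVEY_INTERVAL = 4  # assumed cadence: 1 survey frame per 4 stream frames

def split_stream(frames, interval=SURVEY_INTERVAL):
    """Separate interleaved survey frames from the frames sent to the user device."""
    survey, outgoing = [], []
    for i, frame in enumerate(frames):
        if i % interval == interval - 1:  # assumed positions of survey frames
            survey.append(frame)          # kept locally for survey analysis
        else:
            outgoing.append(frame)        # transmitted to the user device
    return outgoing, survey

outgoing, survey = split_stream(list(range(8)))
```

With eight numbered frames, positions 3 and 7 would be extracted as survey frames and the rest forwarded.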

US Pat. No. 10,972,654

CONTROLLING IMAGE CAPTURING SETTING OF CAMERA BASED ON DIRECTION OBJECT IS DRAGGED ALONG TOUCH SCREEN

Telefonaktiebolaget LM Er...

1. A mobile terminal comprising:
a camera;
a touch screen; and
a controller configured to:
control the touch screen to display preview images received via the camera;
control the touch screen to display an auto focus guide at a specific point of the touch screen in response to a first touch input to the specific point;
focus the preview images based on a location of the auto focus guide;
determine an image capturing setting of the camera responsive to an object dragged along the touch screen, wherein a value of the image capturing setting is increased responsive to the object being dragged in a first direction and is decreased responsive to the object being dragged in a second direction;
control the touch screen to display an indicator for indicating the current image capturing setting while maintaining the displayed auto focus guide at the specific point; and
capture an image according to the current image capturing setting.
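The drag-direction behavior claimed above maps readily to a small update rule. This is a minimal sketch under stated assumptions, not Ericsson's implementation: the step size, the clamping range, and the use of exposure-compensation-like values are all invented for illustration.

```python
# Hypothetical sketch: dragging an object in a first direction increases the
# image capturing setting, and dragging in a second (opposite) direction
# decreases it, while the auto focus guide stays put.

def adjust_setting(value, drag_dx, step=0.1, lo=-2.0, hi=2.0):
    """Increase the setting for a rightward drag, decrease for a leftward drag."""
    if drag_dx > 0:        # first direction: increase the setting value
        value += step
    elif drag_dx < 0:      # second direction: decrease the setting value
        value -= step
    return max(lo, min(hi, value))  # keep the setting inside its assumed range

v = adjust_setting(0.0, drag_dx=30)   # drag right -> value goes up one step
v = adjust_setting(v, drag_dx=-15)    # drag left  -> value goes back down
```

The indicator in the claim would simply display the current `value` after each drag event.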

US Pat. No. 10,972,653

MOBILE TERMINAL AND METHOD OF CONTROLLING AUTO FOCUSING OF CAMERA ON OBJECT IN PREVIEW IMAGE AT USER SELECTED POSITION ON TOUCH SCREEN

TELEFONAKTIEBOLAGET LM ER...

1. A mobile terminal comprising:
a camera configured to capture images,
a touch screen configured to receive touch inputs, and
a processor configured to:
control the touch screen to display a preview image of an image to be captured by the camera,
control the touch screen to display an auto focus guide at a selected position, the auto focus guide displayed based on a first touch input received by the touch screen, the first touch input corresponding to a contact touch by a user selecting a specific position of the preview image to be focused,
control the camera to auto focus on an object present at the selected position of the preview image while the first touch input is maintained, and
control the camera to capture the image based on a release of the first touch input after the camera has been controlled to auto focus on the object present at the selected position of the preview image.

US Pat. No. 10,972,652

IMAGING APPARATUS AND IMAGING METHOD

SONY CORPORATION, Tokyo ...

1. An imaging apparatus, comprising:
an imager configured to acquire an image; and
a central processing unit (CPU) configured to:
receive a user operation for selection of a first object detection mode of the imaging apparatus, wherein
in the selected first object detection mode no face frame is superimposed on the acquired image before receipt of a first user instruction to focus, and
the face frame represents a face area detected in the acquired image;
select, based on the receipt of the first user instruction to focus, a first type of a subject as a focusing target in each imaging of the acquired image;
detect an area of the subject of the first type from the acquired image based on the selected subject of the first type;
set the detected area of the subject of the first type as an in-focus area of the acquired image; and
superimpose a frame on the set in-focus area of the acquired image.

US Pat. No. 10,972,651

METHOD AND SYSTEM FOR IRIS RECOGNITION

ZKTECO USA LLC, Fairfiel...

1. An iris identification system comprising:
a camera module;
a distance detection apparatus connected to the camera module; and
a processing chip connected to the camera module and the distance detection apparatus respectively, wherein
the camera module comprises at least two cameras having different depths of field and being configured to capture iris images;
the distance detection apparatus is configured to detect a distance between a user and the camera module and send the distance to the processing chip; and
the processing chip is configured to determine, according to the detected distance, a depth of field corresponding to the detected distance and control a camera, from the at least two cameras, having the determined depth of field to be turned on to capture an image of an iris of the user.
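The distance-to-camera selection in the claim above amounts to a range lookup. The sketch below is illustrative only, not ZKTeco's implementation; the two-camera setup, the camera names, and the depth-of-field ranges in centimeters are invented assumptions.

```python
# Hypothetical sketch: each camera covers a different depth of field, and the
# processing chip turns on the camera whose range contains the detected
# distance between the user and the camera module.

CAMERA_DOF_RANGES = {
    "near_camera": (20, 40),   # assumed near depth-of-field range, cm
    "far_camera": (40, 80),    # assumed far depth-of-field range, cm
}

def select_camera(distance_cm):
    """Return the camera whose depth of field covers the detected distance."""
    for camera, (near, far) in CAMERA_DOF_RANGES.items():
        if near <= distance_cm < far:
            return camera
    return None  # user stands outside every camera's depth of field
```

A user detected at 25 cm would trigger the near camera; one at 60 cm, the far camera.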

US Pat. No. 10,972,650

COMMUNICATION APPARATUS AND CONTROL METHOD THEREOF

CANON KABUSHIKI KAISHA, ...

1. A communication apparatus comprising:
a communication unit configured to communicate with an external apparatus;
a first reading unit configured to read predetermined information for specifying a type of the external apparatus from a captured image of the external apparatus;
a presenting unit configured to present an operation procedure for setting connection information, according to the type of the external apparatus specified using the predetermined information;
a second reading unit configured to read connection information from a captured image of the external apparatus, based on a display appearance of connection information according to the external apparatus; and
a control unit configured to connect with the external apparatus via the communication unit by using the connection information acquired by the second reading unit.

US Pat. No. 10,972,649

INFRARED AND VISIBLE IMAGING SYSTEM FOR DEVICE IDENTIFICATION AND TRACKING

X Development LLC, Mount...

1. A system comprising:
a visible-light camera having a first field of view;
an infrared camera having a second field of view, wherein the second field of view is narrower than the first field of view;
one or more sensors to determine a location and pose of the infrared camera;
a positioning system configured to adjust a position of the infrared camera, wherein the positioning system is configured to adjust the position of the visible-light camera, and wherein the positioning system is configured to adjust the position of the infrared camera separately from the position of the visible-light camera; and
a processing system configured to:
identify a position of a device based at least in part on image data generated by the visible-light camera that includes a representation of the device;
cause the positioning system to position the infrared camera so that the device is in the second field of view; and
record (i) infrared image data from the infrared camera that includes a representation of the device and (ii) position data, based on output of the one or more sensors, that indicates a location and pose of the infrared camera when the infrared image data is acquired.

US Pat. No. 10,972,648

PULSE DETECTION AND SYNCHRONIZED PULSE IMAGING SYSTEMS AND METHODS

FLIR Surveillance, Inc., ...

1. A system comprising:
a light pulse detection device configured to:
detect a first light pulse;
determine that the first light pulse is associated with a first pulse sequence;
determine first timing information associated with a second light pulse of the first pulse sequence; and
generate first data associated with the first timing information; and
an imaging device configured to:
determine a first integration period based on the first data; and
capture, using the first integration period, a first image that includes the second light pulse.
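The timing relationship in the claim above can be sketched as a prediction of the second pulse's arrival. This is a schematic sketch, not FLIR's implementation: the millisecond units, the known fixed pulse interval, and the guard margin are all assumptions made for the example.

```python
# Hypothetical sketch: from the first detected pulse's timestamp and the
# sequence's pulse interval, predict when the second pulse will arrive, and
# open the imaging device's integration period around that prediction so the
# captured image includes the second light pulse.

def integration_window(first_pulse_t, pulse_interval, pulse_width, margin=0.5):
    """Return (start, end) of an integration period bracketing the next pulse (ms)."""
    expected = first_pulse_t + pulse_interval  # predicted second-pulse arrival
    start = expected - margin                  # open integration slightly early
    end = expected + pulse_width + margin      # close after the pulse ends
    return start, end

start, end = integration_window(first_pulse_t=10.0, pulse_interval=5.0,
                                pulse_width=0.2)
```

The "first data" of the claim would carry whatever the imaging device needs to compute such a window.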

US Pat. No. 10,972,647

SYSTEM TO CONTROL CAMERA FUNCTION REMOTELY

CAMERA CONTROL AT A DISTA...

1. A system for remote control of a device, comprising:
a remote camera, wherein the remote camera has a mass and a center of mass;
a shadow device, wherein the shadow device is a mock camera, and wherein the shadow device copies the mass and the center of mass of the remote camera;
a controller, wherein the controller is in communication with the remote camera and the shadow device,
wherein the remote camera is at a remote distance from the shadow device and the controller, and wherein the controller and the shadow device are located in the vicinity of one another, and
wherein, when a user adjusts the shadow device, the controller sends a signal to the remote camera to adjust the remote camera accordingly.

US Pat. No. 10,972,646

CAMERA DEVICE AND MOBILE TERMINAL

TRIPLE WIN TECHNOLOGY(SHE...

1. A camera device, comprising:
a first camera assembly;
a second camera assembly;
a photosensitive assembly;
a first circuit board;
wherein the first camera assembly and the second camera assembly are symmetrically positioned at both sides of the photosensitive assembly;
the photosensitive assembly comprises a driver and a photosensitive chip connected to the driver;
the driver is configured to drive the photosensitive chip to rotate, a photosensitive surface of the photosensitive chip is rotated between the first camera assembly and the second camera assembly;
a receiving groove is defined on the first circuit board to accommodate the photosensitive assembly, the photosensitive chip is electrically connected to the first circuit board, and the first camera assembly and the second camera assembly are respectively positioned at sides of the first circuit board.

US Pat. No. 10,972,645

DUAL CAMERA MODULE AND APPARATUS FOR MANUFACTURING THE SAME

Furonteer Inc, Seongnam-...

1. A dual camera module comprises:
a first camera module having a first optical axis and including a first extension part extending in a direction crossing the first optical axis, wherein the first extension part has a first hole;
a second camera module having a second optical axis aligned with respect to the first optical axis and including a second extension part extending in a direction crossing the second optical axis, wherein the second extension part has a second hole, and wherein a diameter of the second hole is greater than a diameter of the first hole;
a fixing member having a first accommodation part accommodating the first camera module, a second accommodation part accommodating the second camera module, a first fixing part having a protrusion shape protruding to be inserted into the first hole, and a second fixing part having a protrusion shape to be inserted into the second hole, wherein the second fixing part has a diameter smaller than the diameter of the second hole so that a posture of the second camera module is corrected while the second fixing part is inserted into the second hole;
a first adhesive part connecting the first extension part with the first fixing part to fix the first camera module; and
a second adhesive part connecting the second extension part with the second fixing part to fix the second camera module whose posture is corrected to the fixing member.

US Pat. No. 10,972,644

WEARABLE RING FOR HOLDING CAMERA

1. A wearable device for a camera, said wearable device is used to be worn on at least one part of a human body, said wearable device comprising:
a body having a bottom surface and a top surface, said body presenting an opening defined between said top surface and said bottom surface, said opening is used to receive therethrough and hold the at least one part of the human body;
a pocket defined in said body between said opening and said top surface for holding the camera presenting an activation button to turn on and turn off the camera, said pocket presenting a circular configuration; and
said body presenting a slot defined in said top surface of said body and extending to said pocket to receive the activation button extending through said slot as the camera is disposed in said pocket to turn off or turn on the camera as said body is worn on the at least one part of the human body, said body including a front surface and a rear surface with said front surface being concave and said rear surface being flat.

US Pat. No. 10,972,643

CAMERA COMPRISING AN INFRARED ILLUMINATOR AND A LIQUID CRYSTAL OPTICAL FILTER SWITCHABLE BETWEEN A REFLECTION STATE AND A TRANSMISSION STATE FOR INFRARED IMAGING AND SPECTRAL IMAGING, AND METHOD THEREOF

Microsoft Technology Lice...

1. A camera comprising:
an infrared (IR) illuminator configured to emit active IR light in an IR light sub-band;
a sensor array including a plurality of sensors;
an optical filter for the sensor array switchable between a reflection state and a transmission state, the optical filter including:
a first plurality of liquid crystals configured to
dynamically form cholesteric phase structures that in the reflection state block right-handed circularly polarized light in a spectral light sub-band and transmit light outside of the spectral light sub-band, and
dynamically form a nematic phase arrangement in the transmission state that transmits light in the spectral light sub-band, and
a second plurality of liquid crystals configured to
dynamically form cholesteric phase structures in the reflection state that block left-handed circularly polarized light in the spectral light sub-band and transmit light outside of the spectral light sub-band, and
dynamically form a nematic phase arrangement in the transmission state that transmits light in the spectral light sub-band, wherein the optical filter is configured to transmit IR light in an IR light sub-band in both the transmission state and the reflection state; and
a controller machine configured to:
switch the optical filter to the reflection state to block spectral light in the spectral light sub-band,
activate the IR illuminator to illuminate a subject with the active IR light while the optical filter is in the reflection state,
address the sensors of the sensor array while the optical filter is in the reflection state,
determine, for each of the plurality of sensors of the sensor array, a depth value indicative of a depth of the subject based on a measured aspect of the active IR light emitted from the IR illuminator and reflected from the subject back to each of the plurality of sensors,
switch the optical filter to the transmission state to allow transmission of spectral light in the spectral light sub-band,
deactivate the IR illuminator such that the IR illuminator does not emit active IR light while the optical filter is in the transmission state, and
address the sensors of the sensor array while the optical filter is in the transmission state.

US Pat. No. 10,972,642

IMAGER AND IMAGING DEVICE

FUJIFILM Corporation, To...

1. An imager comprising:
a fixing member that has a concave portion in which an imaging sensor chip is fixed to a bottom surface of the concave portion, a wall portion surrounding the concave portion, and a plurality of first terminals electrically connected to the imaging sensor chip;
a sealing member that seals the imaging sensor chip by closing the concave portion in a state of overlapping the wall portion of the fixing member;
a circuit board that is disposed facing a surface, opposite to the bottom surface, of the fixing member and has a larger linear expansion coefficient than the fixing member; and
a conductive member that is provided between the first terminals exposed from the surface of the fixing member and a second terminal formed on the circuit board to fix the fixing member and the circuit board and electrically connect the first terminal and the second terminal to each other,
wherein, in a state of being viewed from a direction perpendicular to a light receiving surface of the imaging sensor chip, an outer edge of a region where the conductive member is disposed overlaps the wall portion of the fixing member, and
a distance between a position overlapping the outer edge in the wall portion and an end of the wall portion on a side of the concave portion is 20% or more of a width of the wall portion in a direction parallel to the light receiving surface.

US Pat. No. 10,972,641

OPTICS, DEVICE, AND SYSTEM FOR ASSAYING

Essenlix Corporation, Mo...

1. An optical adapter for imaging a sample using a hand-held imaging device that has a light source, a single camera, and a computer processor, comprising:
an enclosure;
a cavity within the enclosure; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be moveable between a first position and a second position, wherein (i) in the first position, said imaging device is capable of imaging the sample in a bright field mode, and (ii) in the second position, said imaging device is capable of imaging the sample in a fluorescence excitation mode.

US Pat. No. 10,972,640

ADAPTER FOR INTEGRATING A PROMPT BOX, A CAMERA AND A TRIPOD

1. An adaptor for mechanically coupling a prompting box and a video camera to a tripod, comprising:
a single piece bracket having a first horizontal portion with a first fastening round hole located at said first horizontal portion's center, a vertical portion with its lower end connecting to said first horizontal portion, and a second horizontal portion connecting to said vertical portion's upper end, wherein said second horizontal portion has at least two second fastening round holes evenly spaced along said second horizontal portion's longitudinal direction, wherein said prompting box is fastened to said tripod through said first fastening round hole, and wherein said video camera is fastened through one of said second fastening round holes and a screw bolt matching said video camera's tripod mount.

US Pat. No. 10,972,639

IN-VEHICLE DEVICE

CLARION CO., LTD., Saita...

1. An in-vehicle device comprising:
an image acquisition unit that acquires an image photographed by a camera including a lens;
a storage unit;
a dirt detection unit that detects dirt on the lens on the basis of the image acquired by the image acquisition unit and that stores dirt region information indicating a region where the dirt exists in the image into the storage unit;
a dirt removal information generation unit that generates dirt removal information indicating a region where the dirt on the lens is removed; and
a rewriting unit that rewrites the dirt region information on the basis of the dirt removal information,
wherein the dirt detection unit includes a first dirt detection unit and a second dirt detection unit that detect different kinds of dirt,
the first dirt detection unit stores the detected dirt as first dirt region information into the storage unit, and
the second dirt detection unit stores the detected dirt as second dirt region information into the storage unit,
further comprising a first monitoring unit that generates first dirt removal information on the basis of the first dirt region information,
a second monitoring unit that generates second dirt removal information on the basis of the second dirt region information, and
a table storage unit that stores an elimination relation table in which a condition for rewriting the first dirt region information and a condition for rewriting the second dirt region information are stored,
wherein the rewriting unit rewrites the second dirt region information on the basis of the condition described in the elimination relation table and the first dirt removal information, and rewrites the first dirt region information on the basis of the condition described in the elimination relation table and the second dirt removal information.
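The cross-rewriting bookkeeping in the claim above can be sketched with a small table-driven update. This is a schematic sketch, not Clarion's implementation: the storage keys, the region encoding as sets, and the `ELIMINATION_TABLE` contents are all invented for illustration.

```python
# Hypothetical sketch: two detectors store dirt region info of different
# kinds, and an elimination relation table states which removal information
# may rewrite (clear regions from) which stored dirt region information.

storage = {"first_dirt": {1, 2, 3}, "second_dirt": {2, 4}}

# assumed elimination relation table: removal info -> dirt info it rewrites
ELIMINATION_TABLE = {
    "first_removal": "second_dirt",   # first removal info rewrites second dirt info
    "second_removal": "first_dirt",   # second removal info rewrites first dirt info
}

def rewrite(removal_kind, removed_regions):
    """Erase removed regions from the dirt info named by the elimination table."""
    target = ELIMINATION_TABLE[removal_kind]
    storage[target] -= removed_regions

rewrite("first_removal", {2, 4})  # first removal info clears second dirt regions
```

After the call, the second detector's stored regions are cleared while the first detector's regions are untouched, matching the claimed cross-rewriting condition.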