US Pat. No. 10,659,861

COMPOSITE EARCUSHION

BOSE CORPORATION, Framin...

1. An earphone cushion comprising:
a body formed of a partially reticulated polymeric foam and including an upper surface configured to engage or surround an ear of a user, side surfaces, and a lower surface;
a snap ring at least partially embedded in the foam body proximate the lower surface and including a periphery configured to engage one or more retention elements of an earcup of a headphone; and
a non-porous film disposed on the upper and side surfaces of the body.

US Pat. No. 10,659,860

HEADPHONES AND HEADPHONE SYSTEM

Semiconductor Energy Labo...

1. Headphones comprising:
a housing;
a sound output unit;
a processing unit;
a memory unit;
a lighting unit; and
a detection unit comprising an attitude detection unit, the attitude detection unit comprising a camera module,
wherein the sound output unit is configured to output sound,
wherein the memory unit is configured to store a program,
wherein the attitude detection unit is configured to detect change in attitude from a difference between images taken by the camera module,
wherein the attitude detection unit is configured to supply a first detection signal corresponding to the change in attitude to the processing unit,
wherein the processing unit is configured to read out the program, carry out an operation using the first detection signal and the program, and supply a signal corresponding to an operation result to the lighting unit, and
wherein the lighting unit is configured to emit light in response to the signal supplied from the processing unit.
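Read as an algorithm, the claimed signal path (camera frames → attitude change → processed signal → lighting unit) might be sketched as follows. This is an illustrative Python sketch only; the frame representation, threshold, and all function names are invented, not taken from the patent.

```python
def detect_attitude_change(frame_a, frame_b):
    """Attitude detection unit: derive a first detection signal from the
    difference between two images taken by the camera module (here, the
    mean absolute pixel difference)."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def process(detection_signal, threshold=10):
    """Processing unit: run an operation on the detection signal and
    supply a corresponding signal to the lighting unit."""
    return "light_on" if detection_signal > threshold else "light_off"

still = [100] * 16              # two identical frames -> no change in attitude
moved = [100] * 8 + [160] * 8   # half the pixels changed -> attitude changed

print(process(detect_attitude_change(still, moved)))  # lighting unit emits light
```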

US Pat. No. 10,659,859

PORTABLE CASE FOR MODULAR HEARING ASSISTANCE DEVICES

Starkey Laboratories, Inc...

1. A portable case for storing and charging hearing assistance devices, the portable case comprising:
two or more retention structures, each configured to retain at least part of a respective, single hearing assistance device;
an energy storage device;
charging circuitry electrically coupled to the energy storage device;
a housing;
a cover configured to reveal an opening of a single retention structure of the two or more retention structures after mechanical manipulation of the cover; and
at least one processor configured to:
detect when a first retention structure of the two or more retention structures retains the at least part of the hearing assistance device; and
responsive to detecting the first retention structure retaining the at least part of the hearing assistance device, cause the charging circuitry to charge, via an electrical connection shared by the at least part of the hearing assistance device and the charging circuitry, a power source of the hearing assistance device.
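The claimed control flow (detect retention, then enable charging over the shared connection) can be sketched as below. The class and method names are purely illustrative assumptions for the sake of a runnable example.

```python
class ChargingCase:
    """Sketch of the claimed case: slots model retention structures; the
    processor reacts to a detected insertion by enabling the charger."""

    def __init__(self, num_slots=2):
        self.slots = [None] * num_slots      # retention structures
        self.charging = [False] * num_slots  # charging circuitry state per slot

    def insert(self, slot, device):
        self.slots[slot] = device
        self._on_retention_detected(slot)

    def _on_retention_detected(self, slot):
        # responsive to detecting retention, charge the device's power source
        # via the electrical connection shared with the charging circuitry
        if self.slots[slot] is not None:
            self.charging[slot] = True

case = ChargingCase()
case.insert(0, "left hearing aid")
print(case.charging)  # charging enabled only for the occupied slot
```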

US Pat. No. 10,659,858

DISPLAY APPARATUS

LG Display Co., Ltd., Se...

1. A display apparatus, comprising:
a display panel configured to display an image by emitting light;
a supporting member disposed on a rear surface of the display panel;
a sound generating device disposed between the supporting member and the display panel; and
a plurality of partitions at a predetermined interval from the sound generating device, wherein the partitions have different compression force deflections,
wherein the compression force deflection of the partitions is less at a central area of the display panel than at left and right ends of the display panel.

US Pat. No. 10,659,857

RAPIDLY MOUNTABLE CEILING LOUDSPEAKER DEVICE

Huizhou Chuangxiang Audio...

1. A rapidly mountable ceiling loudspeaker device, comprising:
a loudspeaker housing comprising a basket, a speaker inner frame and a surface frame ring body, wherein the basket is connected with the speaker inner frame, the surface frame ring body is arranged between the basket and the speaker inner frame, the speaker inner frame can be rotated in the surface frame ring body around the surface frame ring body as a rotation axis, and the surface frame ring body is provided with a via hole and provided with a guide piece at an edge of the via hole;
a ceiling locking assembly comprising a locking sleeve body, a ceiling locking piece and a locking member, wherein the locking sleeve body is arranged on the basket and provided with a locking hole, the ceiling locking piece is provided with a through hole, an end of the locking member is passed through the through hole and screwed into the locking hole, the ceiling locking piece is provided with a ceiling locking portion, and the ceiling locking portion can move in a direction towards or away from the via hole upon the rotation of the speaker inner frame;
a woofer module arranged in the basket; and
a dustproof assembly arranged on the speaker inner frame.

US Pat. No. 10,659,856

AUDITORY LOW FREQUENCY SOUND REPRODUCTION AND VIBRATION GENERATING SPEAKER ENCLOSURE PLATFORM SYSTEM

8. An auditory low frequency sound reproduction and vibration generating speaker enclosure platform system for use to support a seating apparatus thereon, the system configured to deliver sound pressure waves and tremulous movement directly to the seating apparatus to create an enhanced auditory and vibratory experience, the speaker enclosure platform system comprising:
a main housing comprising a top wall, a bottom wall opposite the top wall, a front wall connecting the top and bottom walls together, a rear wall connecting the top and bottom walls together, and a pair of side walls connecting the top and bottom walls together and the front and rear walls together, and a plurality of dividing walls coupled to the front, rear, top and bottom walls to form a plurality of internal chambers within the main housing;
a plurality of subwoofers coupled to the main housing, each subwoofer in the plurality of subwoofers extending within one of the plurality of internal chambers in the main housing and comprising a speaker oriented generally upright that partially extends across the top wall of the main housing;
at least one amplifier electrically coupled to the plurality of subwoofers;
a plurality of reflex ports coupled to the front wall of the main housing, each pair of reflex ports in the plurality of reflex ports comprising a pair of openings connecting one of the plurality of internal chambers in the main housing to an area surrounding the system to vent sound pressure waves generated by the subwoofer corresponding to the one of the plurality of internal chambers captured within the one of the plurality of internal chambers; and
a plurality of binding post cups coupled to the rear wall of the main housing with each binding post cup in the plurality of binding post cups coupled to one of the plurality of internal chambers in the main housing, each binding post cup in the plurality of binding post cups being electrically coupled to both one of the plurality of subwoofers and the at least one amplifier;
wherein the top wall of the main housing is configured to directly contact and support the seating apparatus thereon, wherein each subwoofer in the plurality of subwoofers is positioned off-center in one of the plurality of internal chambers of the main housing, a first separation distance away from the binding post cup that is less than a second separation distance away from the pair of reflex ports in the internal chamber, wherein the at least one amplifier is configured to activate one of the plurality of subwoofers to generate the sound pressure waves that project from the speaker of the one of the plurality of subwoofers outward directly to the seating apparatus and throughout the main housing, wherein a portion of the generated sound pressure waves transferred along the top wall of the main housing delivers the tremulous movement to the seating apparatus situated thereon.

US Pat. No. 10,659,855

VOICE ACTIVATED DEVICE WITH INTEGRATED HEATSINK AND SPEAKER

Amazon Technologies, Inc....

1. A voice activated device comprising:
a cylindrical housing;
a first light emitting diode (LED) configured to emit light, the first LED positioned within the cylindrical housing;
a circular light ring disposed about an upper portion of the cylindrical housing, wherein the light is visible through the circular light ring;
an integrated speaker and heatsink assembly comprising: an aluminum heatsink; and a speaker subassembly coupled to a first side of the aluminum heatsink, the speaker subassembly comprising a speaker plate coupled to an outward-facing speaker, wherein the aluminum heatsink and the speaker subassembly together enclose a sealed cavity; and
a circular light reflector disposed adjacent to a second side of the aluminum heatsink, wherein the circular light reflector comprises: a plurality of substantially linear members extending radially inwards relative to a circular perimeter of the circular light reflector; and a first light diffuser element comprising a triangular member oriented radially inwards relative to the circular perimeter and aligned with the first LED.

US Pat. No. 10,659,854

PLUGGABLE AGGREGATION MODULE FOR AN OPTICAL NETWORK

ADVA OPTICAL NETWORKING S...

1. A pluggable aggregation module adapted to be plugged into a network device of an optical network, said pluggable aggregation module comprising
optical frontends configured to connect said pluggable aggregation module with a corresponding number of modules to exchange optical signals via optical fibres in legacy signal formats; and
an electrical conversion circuit configured to convert the legacy signal formats to an internal signal format used by said network device,
wherein the electrical conversion circuit comprises a Codec chip having serializer/de-serializer circuits connectable to a host board of said network device and serializer/de-serializer circuits connected to the optical frontends integrated in the pluggable aggregation module.

US Pat. No. 10,659,853

FIXED WIRELESS POINT-TO-POINT MESH ENGINEERED NETWORK DEPLOYMENT FRAMEWORK

CenturyLink Intellectual ...

1. A system comprising:
an aggregation node comprising a host wireless transceiver, wherein the aggregation node is coupled to a service provider network;
a first node associated with a first customer premises, the first node comprising:
a remote wireless transceiver in communication with the host wireless transceiver;
a first mesh network node transceiver configured to communicate with other mesh network node transceivers;
a processor; and
non-transitory computer readable media comprising instructions executable by the processor to:
establish, via the remote wireless transceiver, a point-to-point wireless connection to the host wireless transceiver of the aggregation node;
provision, via the point-to-point wireless connection, access to the service provider network to the first customer premises;
establish, via the first mesh network node transceiver, a mesh connection to a secondary mesh network node associated with a second customer premises;
provision, via the mesh connection, access to the service provider network to the second customer premises;
receive, via the mesh connection, a first data from the second customer premises; and
transmit, via the point-to-point wireless connection, the first data from the second customer premises to the service provider network.
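The relay behaviour the first node's instructions describe (receive over the mesh connection, forward over the point-to-point link) can be sketched as follows. All class and attribute names here are invented for illustration.

```python
class FirstNode:
    """Sketch of the claimed first node: mesh traffic from a second customer
    premises is forwarded toward the service provider network."""

    def __init__(self):
        self.uplink_log = []  # traffic sent over the point-to-point link

    def receive_from_mesh(self, source_premises, data):
        # first data received via the mesh connection
        self.transmit_uplink(source_premises, data)

    def transmit_uplink(self, source_premises, data):
        # transmit via the point-to-point wireless connection to the
        # aggregation node's host wireless transceiver
        self.uplink_log.append((source_premises, data))

node = FirstNode()
node.receive_from_mesh("premises-2", b"first data")
print(node.uplink_log)
```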

US Pat. No. 10,659,852

CONNECTOR ELEMENT INFORMATION DETECTIONS

Hewlett-Packard Developme...

1. A first device, comprising:
a connector element to receive a physical electrical connection from a respective connector element of a second device if the second device is connected to the first device;
an analog-to-digital converter to convert an analog voltage signal received on the connector element to a digital value; and
logic resources to process the digital value to determine (i) a connection state, from a plurality of connection states for the connector element and (ii) preselected information of the second device, the preselected information including a revision identifier of the second device and a category of the second device.
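One way the logic resources might map the ADC's digital value onto a connection state plus the preselected information is sketched below. The bit layout (low nibble = revision, next nibble = category) and the zero-means-disconnected convention are assumptions, not details from the patent.

```python
def decode(digital_value):
    """Sketch of the claimed processing: derive a connection state and the
    preselected information (revision identifier, category) from the digital
    value produced by the analog-to-digital converter."""
    if digital_value == 0:
        return {"state": "disconnected"}
    # hypothetical packing: low 4 bits = revision, next 4 bits = category
    return {
        "state": "connected",
        "revision": digital_value & 0x0F,
        "category": (digital_value >> 4) & 0x0F,
    }

print(decode(0))     # no voltage on the connector element
print(decode(0x23))  # connected device, category 2, revision 3
```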

US Pat. No. 10,659,851

REAL-TIME DIGITAL ASSISTANT KNOWLEDGE UPDATES

Apple Inc., Cupertino, C...

1. A method for integrating information into digital assistant knowledge, the method comprising:
at an electronic device:
receiving a data feed, wherein the data feed comprises data relating to an event associated with a time in a media stream;
receiving a user request based on speech input from a user, wherein the user request is associated with the event;
determining, based on the user request, a user intent;
generating a response to the user request based on the data relating to the event, the user intent, a current playback position of currently playing media, and a predefined user preference, wherein the predefined user preference enables a determination of whether to include, within the generated response, up-to-date information regardless of the current playback position; and
causing the response to be delivered.

US Pat. No. 10,659,850

DISPLAYING INFORMATION RELATED TO CONTENT PLAYING ON A DEVICE

GOOGLE LLC, Mountain Vie...

12. An electronic device associated with a user, the electronic device comprising:
a display;
memory;
one or more processors; and
one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs including instructions for:
determining whether media content is playing at a second electronic device in proximity to the electronic device;
in accordance with a determination that media content is playing at the second electronic device, displaying on the display a plurality of first affordances, each of the plurality of first affordances providing the user with a first user-selectable election, wherein each of the first affordances comprises a user interface card that invites the user to receive information on an entity relevant to the media content playing at the second electronic device;
receiving a first user selection of the first user-selectable election by the user;
in response to receiving the first user selection of the first user-selectable election, sampling, by the electronic device, program information from the media content item playing at the second electronic device;
sending the program information to a server;
receiving from the server an identification of the media content item and one or more second user-selectable elections for the identified media content item;
displaying on the display one or more second affordances providing the second user-selectable elections, each one of the second user-selectable elections corresponding to the entity relevant to the identified media content item;
receiving a second user selection of a first one of the second user-selectable elections by the user; and
displaying on the display information regarding the respective entity relevant to the identified media content item.

US Pat. No. 10,659,849

SYSTEMS AND METHODS FOR SIGNALING OF EMERGENCY ALERT MESSAGES

SHARP KABUSHIKI KAISHA, ...

1. A primary device for transferring an emergency alert message, the primary device comprising:
a processor; and
a memory associated with the processor, wherein
the processor is configured to receive the emergency alert message and to transfer the emergency alert message to a companion device on a local area network,
the emergency alert message includes (i) an event description element representing a description of an emergency event and (ii) a live media element representing an identification of an Audio/Visual service,
the event description element includes a first language field identifying a language of the event description element of the emergency alert message,
the live media element includes (i) a bsid field representing an identifier of a broadcast stream which contains an emergency-related live Audio/Visual service, (ii) a service id field indicating a 16-bit integer that uniquely identifies the emergency-related live Audio/Visual service, (iii) a service name element representing a user-friendly name for a service where a live media is available that a receiver can present to a viewer when presenting an option to tune to the live media element, and (iv) a second language field identifying a language of the service name element of a live media stream, and
the first language field and the second language field are represented by formal natural language identifiers and do not exceed 35 characters.
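The claimed message layout can be rendered as a small data-structure sketch with the constraints the claim recites (16-bit service id, language identifiers of at most 35 characters). Representing the message as a Python dict, and the field names below, are illustrative choices only.

```python
def make_alert(description, lang, bsid, service_id, service_name, service_lang):
    """Build a sketch of the claimed emergency alert message: an event
    description element plus a live media element."""
    for field in (lang, service_lang):
        assert len(field) <= 35, "language identifiers must not exceed 35 characters"
    assert 0 <= service_id < 2**16, "service id is a 16-bit integer"
    return {
        "event_description": {"text": description, "lang": lang},
        "live_media": {
            "bsid": bsid,                 # identifier of the broadcast stream
            "service_id": service_id,     # uniquely identifies the live service
            "service_name": service_name, # user-friendly name shown to the viewer
            "lang": service_lang,         # language of the service name element
        },
    }

alert = make_alert("Flood warning", "en-US", bsid=17, service_id=1001,
                   service_name="Local News 5", service_lang="en-US")
print(alert["live_media"]["service_name"])
```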

US Pat. No. 10,659,848

DISPLAY OVERLAYS FOR PRIORITIZATION OF VIDEO SUBJECTS

International Business Ma...

20. A computer-implemented method (CIM) comprising:
receiving, from a first camera device, a narrow frame video data set including data indicative of a first view of a real world live event;
receiving, from a second camera device, a wide frame video data set including data indicative of a second view of a real world live event, with the second view including: (i) substantially all of what is visible in the first view, and (ii) additional viewing area that is not visible in the first view;
identifying, by machine logic, a set of out-of-narrow-frame object(s) of potential interest that are visible in the additional viewing area but not visible in the first view so that each out-of-narrow-frame object has human understandable identification text respectively assigned to it;
identifying, by machine logic, a set of in-narrow-frame object(s) of potential interest that are visible in the first view so that each in-narrow-frame object has human understandable identification text respectively assigned to it;
for each in-narrow-frame object of the set of in-narrow-frame object(s), determining, by machine logic, a respective priority value;
generating an overlay data set including information indicative of: (i) the identification text for each in-narrow-frame object of the set of in-narrow-frame object(s), (ii) the priority value for each in-narrow-frame object of the set of in-narrow-frame object(s), and (iii) the identification text for each out-of-narrow-frame object of the set of out-of-narrow-frame object(s); and
sending the overlay data set to the first camera device for display on a display of the first camera device as an overlay overlaid on the first view.
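Assembling the claimed overlay data set (identification text and priority for in-frame objects, identification text only for out-of-frame objects) could look like the sketch below. The priority heuristic used here (rank by sorted label) is invented solely to make the example runnable; the claim does not specify how priority values are determined.

```python
def build_overlay(in_frame, out_of_frame):
    """Generate an overlay data set from identified objects of potential
    interest: labels + priorities for in-narrow-frame objects, labels only
    for out-of-narrow-frame objects."""
    # hypothetical machine-logic priority: alphabetical rank of the label
    priorities = {name: rank for rank, name in enumerate(sorted(in_frame), start=1)}
    return {
        "in_frame": [{"label": n, "priority": priorities[n]} for n in in_frame],
        "out_of_frame": [{"label": n} for n in out_of_frame],
    }

overlay = build_overlay(in_frame=["goalkeeper", "referee"],
                        out_of_frame=["coach"])
print(overlay["out_of_frame"])  # labels only, no priority values
```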

US Pat. No. 10,659,847

FRAME DROPPING METHOD FOR VIDEO FRAME AND VIDEO SENDING APPARATUS

Huawei Technologies Co., ...

8. A video sending apparatus, comprising:
a memory comprising an application program code; and
a processor coupled to the memory, wherein the application program code causes the processor to be configured to:
obtain a video frame sequence of a to-be-sent video;
establish, according to a preset criterion, a reference relationship between video frames in the video frame sequence, wherein the reference relationship comprises that in the video frame sequence, an mth frame references an (m−h)th frame, and an nth frame is referenced by at least two video frames of video frames after the nth frame, wherein m, h, and n are all natural numbers, wherein m is greater than 1 and h, and wherein a quantity of the video frames in the video frame sequence is not less than n+2;
detect a data occupation length of buffered video frames in a video sending buffer during a process of sending the video frame sequence;
responsive to detecting that the data occupation length is greater than a first preset threshold, drop a current to-be-buffered video frame and all video frames in the video frame sequence that reference the current to-be-buffered video frame according to the reference relationship;
responsive to detecting that the data occupation length is less than the first preset threshold, store the current to-be-buffered video frame in the video sending buffer if the data occupation length is less than a second preset threshold, wherein the second preset threshold is less than the first preset threshold;
responsive to detecting that the data occupation length is less than the first preset threshold and greater than the second preset threshold:
storing the current to-be-buffered video frame if the current to-be-buffered video frame is an instantaneous decoding refresh (IDR) frame; and
dropping the current to-be-buffered video frame if the current to-be-buffered video frame is a non-IDR frame.
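The claimed three-band buffering policy is readily expressed as a decision function: above the first threshold, drop (along with dependent frames); below the second, store; in between, store only IDR frames. The sketch below uses invented names and arbitrary thresholds, and omits the dependent-frame cleanup for brevity.

```python
def handle_frame(buffer_len, frame_is_idr, first_threshold, second_threshold):
    """Return 'drop' or 'store' for the current to-be-buffered video frame,
    given the data occupation length of the video sending buffer.
    Assumes second_threshold < first_threshold, as the claim requires."""
    if buffer_len > first_threshold:
        # the apparatus would also drop all frames that reference this one
        return "drop"
    if buffer_len < second_threshold:
        return "store"
    # between the two thresholds: keep only instantaneous decoding refresh frames
    return "store" if frame_is_idr else "drop"

print(handle_frame(90, False, first_threshold=80, second_threshold=40))  # drop
print(handle_frame(20, False, first_threshold=80, second_threshold=40))  # store
print(handle_frame(60, True,  first_threshold=80, second_threshold=40))  # store
print(handle_frame(60, False, first_threshold=80, second_threshold=40))  # drop
```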

US Pat. No. 10,659,846

SYSTEM AND METHOD FOR TRANSMITTING DATA IN A SATELLITE SYSTEM

THALES, Courbevoie (FR)

1. A method for transmitting data in a satellite system comprising at least one satellite having demodulating/decoding capabilities and a gateway, said at least one satellite being adapted to process and to store user data in an on-board cache as directed at least by the gateway, the satellite system comprising at least one user satellite terminal, the method comprising at least the following steps:
transmitting a data content request Rq sent by one or more end users to the gateway by way of the user satellite terminal and of the at least one satellite such that the data content request Rq transits the at least one satellite without being demodulated/decoded,
upon receipt of the data content request Rq, after opening of a communication session, receiving the data content request Rq by the gateway,
verifying by the gateway that a data content requested by a user is available in an on-board cache of the at least one satellite,
sending from the gateway an order to the at least one satellite, said order comprising all or some of information contained in the data content request Rq, said order to the at least one satellite indicating a data content or packets that are to be delivered from the on-board cache of the at least one satellite to requesting user terminal(s), and
transmitting from the at least one satellite the data content to the user from the on-board cache in response to said order and the user terminal(s) acknowledging receipt of at least a portion or of all of this content to the gateway.

US Pat. No. 10,659,845

METHODS, SYSTEMS, AND MEDIA FOR PROVIDING VIDEO CONTENT SUITABLE FOR AUDIO-ONLY PLAYBACK

Google LLC, Mountain Vie...

1. A method for selecting content to be presented, the method comprising:
receiving, using a hardware processor, a request for a first video content item from a user device;
determining, using the hardware processor, that the user device is currently in a background playback mode in which audio data of video content items is played back and in which video data of the video content items is inhibited from being played back;
determining, using the hardware processor, whether the first video content item is suitable for presentation in the background playback mode based on one or more properties of audio data of the first video content item;
in response to determining that the first video content item is not suitable for presentation in the background playback mode based on the one or more properties of the audio data of the first video content item, setting, using the hardware processor, a first background playback mode indicator associated with the first video content item to indicate that the first video content item is not suitable for presentation in the background playback mode;
in response to determining that the first video content item is not suitable for presentation in the background playback mode, selecting, using the hardware processor, a second video content item that is associated with a second background playback mode indicator that indicates the second video content item is suitable for presentation in the background playback mode, wherein the second background playback mode indicator associated with the second video content item was set as being suitable for presentation in the background playback mode based on one or more properties of audio data of the second video content item; and
in response to selecting the second video content item, automatically causing, using the hardware processor, the audio data of the second video content item to be presented by the user device while inhibiting video data of the second video content item from being presented by the user device.
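The selection logic the claim walks through (flag an unsuitable item, then substitute one already flagged as suitable) can be sketched as follows. The suitability heuristic here, a non-silent audio track, is an assumption; the claim only says suitability is based on properties of the audio data.

```python
def suitable_for_background(item):
    # hypothetical property check: an essentially silent audio track makes
    # the item unsuitable for audio-only (background) playback
    return item["mean_audio_level"] > 0.05

def select_for_background(requested, catalogue):
    """Return the item whose audio should be played in background mode."""
    if suitable_for_background(requested):
        return requested
    requested["background_ok"] = False  # set the first indicator
    for candidate in catalogue:
        # pick an item whose indicator already marks it suitable; its audio
        # is presented while its video is inhibited
        if candidate.get("background_ok"):
            return candidate
    return None

talk = {"title": "interview", "mean_audio_level": 0.4, "background_ok": True}
slides = {"title": "silent slideshow", "mean_audio_level": 0.0}
print(select_for_background(slides, [talk])["title"])  # substitutes the talk
```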

US Pat. No. 10,659,844

INTERACTION METHOD AND SYSTEM BASED ON RECOMMENDED CONTENT

TENCENT TECHNOLOGY (SHENZ...

7. An interaction method based on recommended content, comprising:
receiving, by a server comprising at least a processor, a recommended-content obtaining request from a terminal configured to play a multimedia file, the recommended-content obtaining request being used for requesting for a recommended video corresponding to the multimedia file;
sending, by the server, the recommended video corresponding to the multimedia file and a first quantity of times to the terminal, the terminal being configured to: stream the recommended video on a playback interface at a designated playback time point of the multimedia file, display, after the recommended video has been streamed for a duration, at least one interaction option of the recommended video on the playback interface, detect a selection operation on one of the at least one interaction option on the playback interface, adjust a quantity of times of evaluation of the recommended video from a first quantity to a second quantity in response to detecting the selection operation on an evaluation option for a first time, and restore the quantity of times of evaluation of the recommended video to the first quantity of times of evaluation in response to detecting the selection operation on the evaluation option for a second time;
receiving, by the server, an interaction request associated with the recommended video sent by the terminal, the interaction request being triggered by the selection operation on the interaction option and carrying a content identifier of the recommended video; and
processing, by the server, the request associated with the recommended video based on the selected interaction option,
wherein the method further comprises:
obtaining a current login user identifier of the terminal in response to the interaction request associated with the recommended video being a sharing request, wherein the sharing request is generated by the terminal in response to detecting the selection operation on the sharing option, including: obtaining the content identifier of the recommended video and the user identifier; displaying a sharing interface of the recommended video, wherein the sharing interface is used by a user to confirm sharing of the recommended video; displaying icons of multiple information presentation applications; and determining, from the icons of the multiple information presentation applications, a designated information presentation application corresponding to a selected icon; and
sharing the recommended video to a dynamic-information presentation page of the user identifier.

US Pat. No. 10,659,843

CONTENT RIGHTS MANAGEMENT FOR MOBILE DEVICES

T-MOBILE USA, INC., Bell...

1. An electronic device for providing geolocation independent content rights management, comprising:
a non-transitory storage medium; and
a processing unit that executes instructions stored in the non-transitory storage medium to:
receive a request for content from a content access device;
determine that the content access device is registered to an account associated with an account location;
determine a first set of access rights that is associated with the account location;
determine a second set of access rights using an actual location of the content access device that is other than the account location, the second set of access rights:
being a subset of rights that would be granted based on location if the actual location were the account location; and
including at least one right in addition to the first set of access rights;
provide access to the content when the first set of access rights or the second set of access rights enables access to a single instance of the content and the processing unit determines that a copy of the content is not stored on a second content access device registered to the account; and
before providing access to the content, lock the copy on the second content access device without removing the copy when:
1) the first set of access rights or the second set of access rights enables the access to the single instance of the content; and
2) the processing unit determines that the copy of the content is stored on the second content access device.
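The single-instance rule at the end of the claim (lock, do not remove, any copy on a second registered device before granting access) can be sketched minimally. The names and the simplified rights check below are invented; the claim's two location-dependent rights sets are collapsed into one flag for brevity.

```python
def grant_access(rights, other_device_has_copy, locks):
    """Sketch: grant access to a single instance of the content, locking
    (without removing) any copy on the second registered device first."""
    if "single_instance" not in rights:  # neither rights set enables access
        return False
    if other_device_has_copy:
        # lock the copy before providing access; the copy itself is kept
        locks.append("second device copy locked")
    return True

locks = []
print(grant_access({"single_instance"}, other_device_has_copy=True, locks=locks))
print(locks)  # the second device's copy was locked, not removed
```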

US Pat. No. 10,659,842

INTEGRAL PROGRAM CONTENT DISTRIBUTION

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:
receiving, at a cloud-based digital video recorder (DVR) system, a request to record program content excerpted from a continuous media stream;
ascertaining, by the cloud-based DVR system, a scheduled broadcast time for the program content from one or more program-scheduling sources;
identifying, by the cloud-based DVR system, a plurality of indications from a plurality of sources that indicate an apparent broadcast time for the requested program content;
applying, by the cloud-based DVR system, different weights to different subsets of the plurality of indications including first and second subsets, at least one indication in the first subset conflicting with at least one additional indication in the second subset;
determining, by the cloud-based DVR system, the apparent broadcast time for the program content by using the applied different weights of the different subsets of the indications; and
recording, by the cloud-based DVR system, the program content at the determined apparent broadcast time.
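Resolving conflicting indications by source weighting, as the claim describes, amounts to a weighted vote. The sketch below uses invented source names and weights; the claim does not specify how the weights are chosen.

```python
from collections import defaultdict

def apparent_broadcast_time(indications, weights):
    """indications: list of (source, time) pairs from a plurality of sources.
    Apply per-source weights and return the highest-weighted time."""
    score = defaultdict(float)
    for source, time in indications:
        score[time] += weights.get(source, 1.0)
    return max(score, key=score.get)

# two program guides disagree with the broadcaster's own schedule feed
indications = [("guide_a", "20:00"), ("guide_b", "20:30"), ("broadcaster", "20:30")]
weights = {"broadcaster": 2.0, "guide_a": 1.0, "guide_b": 0.5}
print(apparent_broadcast_time(indications, weights))  # the weighted winner
```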

US Pat. No. 10,659,841

METHODS AND APPARATUS TO MEASURE EXPOSURE TO STREAMING MEDIA

The Nielsen Company (US),...

1. An apparatus to measure exposure to streaming media, the apparatus comprising:
a video retriever to retrieve an image displayed by a media device presenting the streaming media, the media device separate from the video retriever;
a metadata extractor to extract a video watermark from a chroma key of the retrieved image;
a metadata converter to, in response to the extraction of the video watermark, convert the video watermark into text formatted metadata; and
a transmitter to transmit the text formatted metadata to a central facility.

US Pat. No. 10,659,840

VIDEO COMPOSITION BY DYNAMIC LINKING

INTERNATIONAL BUSINESS MA...

1. A method comprising:
determining, by one or more computer processors, two or more media content items associated with one or more media content source locations;
determining, by one or more computer processors, a digital characteristic of a first media content item of the two or more media content items does not match a digital characteristic of a second media content item of the two or more media content items;
publishing, by the one or more computer processors, a composition of the two or more media content items to appear as one linked asset with uniform digital characteristics; and
in response to the determining the digital characteristic of the first media content item of the two or more media content items does not match the digital characteristic of the second media content item of the two or more media content items, resampling, by the one or more computer processors, the first media content item of the two or more media content items to a lowest common denominator of the digital characteristic of the first media content.
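The resampling step, which brings mismatched items down to the lowest common value of the digital characteristic so the composition appears uniform, can be sketched as below. Using sample rate as the characteristic, and all names, are illustrative assumptions.

```python
def unify(items, characteristic="sample_rate"):
    """Resample items whose digital characteristic does not match, down to
    the lowest common value among the items."""
    lowest = min(item[characteristic] for item in items)
    for item in items:
        if item[characteristic] != lowest:
            item[characteristic] = lowest  # resample to the common value
            item["resampled"] = True
    return items

clips = [{"id": "a", "sample_rate": 48000}, {"id": "b", "sample_rate": 44100}]
print(unify(clips))  # clip "a" is brought down to 44100
```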

US Pat. No. 10,659,839

ENCODING DEVICE AND METHOD, DECODING DEVICE AND METHOD, EDITING DEVICE AND METHOD, RECORDING MEDIUM, AND PROGRAM

SONY CORPORATION, Tokyo ...

2. An encoding device comprising:
circuitry configured to
set identification information identifying whether pictures in a randomly accessible predetermined section of a bitstream do not refer to pictures included in another predetermined section and buffer characteristics information, which is contained within the bitstream and includes at least a first combination of a first buffer size parameter and a first bit rate parameter for a first randomly accessible predetermined section of the bitstream and a second combination of a second buffer size parameter and a second bit rate parameter for a second randomly accessible predetermined section of the bitstream;
encode an image signal to generate the bitstream including the identification information and the buffer characteristics information as bitstream syntax; and
transmit the bitstream to a decoder that determines whether the bitstream is decodable based on the buffer characteristics information by checking bitstream and decoder conformance using a decoder buffer size and a decoding bit rate in bitstream conformance points corresponding to the buffer characteristics information including the at least the first combination of the first buffer size parameter and the first bit rate parameter and the second combination of the second buffer size parameter and the second bit rate parameter, the first and second buffer size parameters indicating a required buffer size of a buffer that stores the bitstream during decoding of the bitstream and the first and second bit rate parameters indicating an input bit rate of the buffer.

US Pat. No. 10,659,838

DEVICE AND METHOD FOR SHARING DOWNLINK DEMODULATION REFERENCE SIGNALS

HTC Corporation, Taoyuan...

1. A base station (BS) for sharing downlink (DL) demodulation reference signals (DMRS) between data and control signals, comprising:a storage device, for storing instructions of:
allocating a first time-frequency resource for transmitting a DL control signal to a communication device;
allocating a second time-frequency resource for transmitting a DL data to the communication device, wherein the first time-frequency resource and the second time-frequency resource are adjacent;
transmitting the DL control signal in the first time-frequency resource to the communication device;
transmitting the DL data in the second time-frequency resource via a plurality of layers of single-user (SU) Multi-input Multi-output (MIMO) (SU-MIMO) spatial multiplexing (SM), to the communication device;
rate matching around the first time-frequency resource occupied by the DL control signal, when transmitting the DL data signal via the plurality of layers of the SU-MIMO SM; and
transmitting a set of DMRSs for the DL control signal and the DL data to the communication device; and
a processing circuit, coupled to the storage device, configured to execute the instructions stored in the storage device.

US Pat. No. 10,659,837

STORING MULTIPLE INSTANCES OF CONTENT

DISH Technologies L.L.C.,...

1. A method for recording content, comprising:receiving, by a content receiver, an instruction to initiate recording;
in response to the instruction, setting a tuner of one or more tuners to a carrier frequency;
receiving, via the tuner of the content receiver, a first set of content and a second set of content, wherein:
the first set of content and the second set of content are received as part of a single transponder stream by the tuner of the one or more tuners;
the first set of content and the second set of content are scrambled using a key; and
the second set of content is further scrambled using a sub-key;
decrypting, by the content receiver, a message to obtain the key;
descrambling, using a descrambler of the content receiver and the decrypted key, the first set of content and the second set of content; and
recording the descrambled first set of content and the scrambled second set of content on a storage medium, wherein:
when recorded, the scrambled second set of content has been descrambled using the key, but remains scrambled using the sub-key; and
a presence of the scrambled second set of content on the storage medium is made visible in an interface to only authorized users.
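The layered scrambling in this claim can be illustrated with a toy XOR cipher (a real receiver would use a conditional-access cipher, not XOR; the key values below are arbitrary). Because XOR layers commute, removing the key layer from the doubly scrambled second set leaves it protected by the sub-key, exactly as recorded in the claim:

```python
def xor_scramble(data: bytes, key: int) -> bytes:
    """Toy single-byte XOR 'scrambling'; illustrative only."""
    return bytes(b ^ key for b in data)

clear = b"frame"
key, sub_key = 0x5A, 0x3C

first = xor_scramble(clear, key)                           # key only
second = xor_scramble(xor_scramble(clear, sub_key), key)   # key, then sub-key

# The receiver removes the key layer from both sets before recording.
rec_first = xor_scramble(first, key)    # fully in the clear
rec_second = xor_scramble(second, key)  # still scrambled with the sub-key
```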

US Pat. No. 10,659,836

SYSTEMS AND METHODS FOR PROVIDING A DELETION NOTIFICATION

ROVI GUIDES, INC., San J...

1. A method for indicating an amount of storage space available, comprising: in response to determining an amount of storage space available in a media storage device is less than a set storage threshold, beginning, by control circuitry, a monitoring of a user's interactions with a user device;
dynamically comparing, by the control circuitry, the user's interactions with the user device against a plurality of user interaction templates stored for respective types of user interactions, to generate a plurality of respective user interaction scores for the types of user interactions;
dynamically comparing, by the control circuitry, each of the plurality of respective user interaction scores to a respective threshold;
in response to determining that at least one of the plurality of respective user interaction scores corresponds to a respective threshold, determining, by the control circuitry, a time to notify the user of the amount of storage space available in the media storage device; and
generating for display, by the control circuitry, at the time, a notification that indicates to the user the amount of storage space available in the media storage device.

US Pat. No. 10,659,835

STREAMING MEDIA PRESENTATION SYSTEM

FACEBOOK, INC., Menlo Pa...

1. A computer-implemented method comprising:receiving a plurality of media streams from a plurality of client devices;
analyzing the plurality of media streams to detect an influencer within content of a first media stream of the plurality of media streams;
selecting, for a viewing user, the first media stream from the plurality of media streams by determining a correspondence between an attribute of the viewing user and the influencer within the content of the first media stream; and
providing, to a client device associated with the viewing user, a media presentation comprising the first media stream.

US Pat. No. 10,659,834

ONLINE BACKUP AND RESTORATION OF TELEVISION RECEIVER STORAGE AND CONFIGURATION DATA

DISH Technologies L.L.C.,...

1. A method, comprising:communicating a request for backup data by a first television receiver device to a remote server over a communications network in association with executing a restoration of the first television receiver device associated with a requesting customer, the remote server having, stored thereon, one or more lists of identifiers, each list of identifiers previously received by the remote server from a respective television receiver associated with a respective customer in association with executing a respective backup of the respective television receiver, each list of identifiers indicating a respective plurality of television programs stored at the respective television receiver at the time of executing the respective backup;
receiving the backup data by the first television receiver device from the remote server responsive to the request, the backup data comprising a selected list of identifiers of the one or more lists of identifiers selected by the remote server in accordance with the request; and
for each television program of the respective plurality of television programs indicated by the selected list of identifiers:
determining, by the first television receiver device, whether the television program is scheduled to be aired in an upcoming television broadcast receivable by a tuner of the first television receiver device during a broadcast time;
in accordance with determining that the television program is scheduled to be aired, automatically configuring the first television receiver device to tune to the broadcast and locally record the television program during the broadcast time; and
in accordance with determining that the television program is not scheduled to be aired, indicating to the requesting customer whether the television program is available from an alternative television program source.
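The per-program decision in the restoration loop above can be sketched as follows. This is a minimal sketch under stated assumptions: `upcoming_broadcasts` maps program identifiers to broadcast times, and `vod_catalog` stands in for an alternative program source; both names are hypothetical:

```python
def plan_restoration(program_ids, upcoming_broadcasts, vod_catalog):
    """For each backed-up program: schedule a re-recording if it airs
    again; otherwise note whether an alternative source carries it."""
    to_record, alternatives = [], {}
    for pid in program_ids:
        if pid in upcoming_broadcasts:
            # Configure the receiver to tune and record at broadcast time.
            to_record.append((pid, upcoming_broadcasts[pid]))
        else:
            # Indicate availability from an alternative source instead.
            alternatives[pid] = pid in vod_catalog
    return to_record, alternatives
```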

US Pat. No. 10,659,833

BROADCAST RECEIVER AND BROADCAST RECEIVING SYSTEM

MAXELL, LTD., Kyoto (JP)...

1. A broadcast receiving apparatus comprising:a broadcast receiver configured to receive a broadcast wave of a digital broadcast service capable of executing an application in cooperation with a broadcast program; and
an application controller configured to refer to application-related information that is information relating to the application and configured to control operations including an activation of the application that is in cooperation with the broadcast program based on the application-related information, wherein
the application controller is configured to receive a selection of whether or not the activation of the application is permitted, based on a user's operation,
the application-related information includes an application control code for controlling an operation of the application from a broadcast station,
in a case where the application control code indicates an automatic activation of the application, the application controller controls the activation of the application in accordance with the selection of whether or not the activation of the application is permitted by the user, and
in a case where the application control code indicates a forcible activation of the application, the application controller activates the application even when the activation of the application is not permitted by the user.

US Pat. No. 10,659,832

DYNAMIC BITRATE SELECTION FOR STREAMING MEDIA

GOOGLE LLC, Mountain Vie...

1. A method comprising:receiving, by a processing device, one or more chunks from a first media stream of a plurality of bitrate media streams of a media file at a streaming buffer of the processing device, the first media stream having a first bitrate, the plurality of bitrate media streams comprising a first subset of bitrate media streams including the first media stream and a second subset of bitrate media streams having a bitrate higher than the first bitrate;
monitoring, by the processing device, a status of the streaming buffer by determining a buffer duration of the one or more chunks from the first media stream being buffered at the streaming buffer, wherein the determining of the buffer duration comprises determining a playing time of the one or more chunks from the first media stream being buffered at the streaming buffer;
in response to determining that the buffer duration of the one or more chunks at the streaming buffer satisfies a threshold condition:
for each bitrate media stream of the second subset of bitrate media streams, calculating, by the processing device, an expected download time for a subsequent chunk of the media file to be received at the streaming buffer in the each bitrate media stream;
selecting, by the processing device, a bitrate media stream of the second subset of bitrate media streams based on expected download times calculated for the second subset of bitrate media streams and the playing time of the one or more chunks from the first media stream being buffered at the streaming buffer; and
downloading, by the processing device, the subsequent chunk from the selected bitrate media stream.
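A hedged sketch of the selection logic in this claim: the buffer threshold, the bandwidth estimate, and the approximation of "expected download time" as chunk bits divided by measured bandwidth are all assumptions, not details from the patent:

```python
def select_higher_bitrate(buffer_seconds, chunk_seconds,
                          higher_bitrates_bps, bandwidth_bps,
                          min_buffer_seconds=10.0):
    """Pick the highest bitrate from the second subset whose next chunk
    is expected to download before the buffered playing time drains."""
    if buffer_seconds < min_buffer_seconds:
        return None  # threshold condition not met; stay on current stream
    best = None
    for bitrate in sorted(higher_bitrates_bps):
        # Expected download time for one chunk at this bitrate.
        expected_download = (bitrate * chunk_seconds) / bandwidth_bps
        if expected_download < buffer_seconds:
            best = bitrate  # safe to upgrade this far
    return best
```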

US Pat. No. 10,659,831

APPARATUS AND METHOD FOR PRESENTATION OF HOLOGRAPHIC CONTENT

1. A method, comprising:sending, to a service provider network, by a processing system including a processor, a selection associated with a listing of available holographic content;
receiving, by the processing system, holographic video data from the service provider network according to the selection;
obtaining, by the processing system, a pixel density associated with a first portion of a display device;
decoding, by the processing system, the holographic video data according to the pixel density associated with the first portion of a display of the display device to generate holographic video content;
transmitting, by the processing system, the holographic video content to the display device;
decoding, by the processing system, non-holographic media content data to generate decoded video content; and
transmitting, by the processing system, the decoded video content to the display device, wherein the display device simultaneously presents the holographic video content at the first portion of the display and presents the decoded video content at a second portion of the display of the display device, and wherein the first portion and the second portion of the display are part of a single display panel.

US Pat. No. 10,659,830

METHOD, SYSTEM, MOBILE DEVICE, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR VALIDATING RIGHTS OBJECTS

Conversant Wireless Licen...

1. An apparatus for determining whether one or more rights objects associated with the apparatus are valid, the apparatus comprising:a communication interface for transmitting and receiving data;
at least one memory device for storing instructions;
a processor communicatively coupled to the communication interface and the at least one memory device, the processor configured to, when executing the instructions, perform the following:
retrieve a multimedia network time via the communication interface from a network transmitting multimedia;
update a secure clock based at least in part on the multimedia network time and a user-set clock, wherein the secure clock is not user-changeable, updating the secure clock comprising:
determining a difference between the multimedia network time and the user-set clock,
storing the difference,
calculating the secure clock based at least on the stored difference,
re-retrieving the multimedia network time via the communication interface at every predetermined interval of time,
at each interval, re-calculating a new difference between the multimedia network time and the user-set clock,
replacing the stored difference with the new difference when the new difference differs from the stored difference, and
re-calculating the secure clock based on the new stored difference; and
use the secure clock to determine whether one or more rights objects associated with the apparatus are valid.
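The stored-difference scheme in this claim can be sketched as below. `get_network_time` and `get_user_clock` are hypothetical callables returning epoch seconds; the real apparatus retrieves the time from a multimedia network via its communication interface:

```python
class SecureClock:
    """Sketch of a secure clock derived from a network time source."""
    def __init__(self, get_network_time, get_user_clock):
        self._network = get_network_time
        self._user = get_user_clock
        # Store the difference between network time and the user-set clock.
        self._diff = self._network() - self._user()

    def resync(self):
        # Re-retrieve the network time at each interval and replace the
        # stored difference only when it has changed.
        new_diff = self._network() - self._user()
        if new_diff != self._diff:
            self._diff = new_diff

    def now(self):
        # Secure time = user-set clock + stored difference; tampering with
        # the user clock is corrected at the next resync.
        return self._user() + self._diff
```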

US Pat. No. 10,659,829

SYNCHRONIZING THE STORING OF STREAMING VIDEO

AXIS AB, Lund (SE)

1. A method for synchronizing video in a wireless hub device, the method comprising:receiving timestamped first video data from a wearable camera via a first wireless connection, wherein the first video data is organized using a hash table;
storing the first video data in a storage;
live streaming the first video data to a remote client via a second wireless connection;
detecting an interruption in the first wireless connection;
generating an indication of video data not received from the wearable camera as a result of the interruption;
after a resumption in the first wireless connection, receiving timestamped second video data corresponding to the video data not received from the wearable camera as a result of the interruption, wherein the second video data is organized using the hash table;
storing the second video data in the storage according to gap synchronization based on one or more timestamps of the first video data and one or more timestamps of the second video data; and
live streaming the second video data to the remote client via the second wireless connection after the resumption in the first wireless connection.
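The gap detection and timestamp-based merge in this claim can be sketched in a few lines; segment tuples and the fixed expected interval are illustrative assumptions:

```python
def find_gaps(timestamps, expected_interval):
    """Return (start, end) timestamp pairs where video was not received,
    i.e. consecutive timestamps further apart than expected."""
    return [(a, b) for a, b in zip(timestamps, timestamps[1:])
            if b - a > expected_interval]

def gap_synchronize(first_segments, second_segments):
    """Merge retransmitted segments into the stored stream by timestamp;
    segments are (timestamp, payload) tuples."""
    return sorted(first_segments + second_segments, key=lambda s: s[0])
```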

US Pat. No. 10,659,828

ELASTIC SWITCHED DIGITAL VIDEO (SDV) TRAFFIC CONTROL WITH ADAPTIVE BIT RATE STREAMING

ARRIS Enterprises LLC, S...

1. A method for transmitting media content over an access network, comprising:(i) receiving a session setup request over an access network that employs a specified modulation technique from an on-demand manager for receiving on-demand media content at a specified bit rate;
(ii) selecting a profile from an adaptive bit rate (ABR) main manifest for ABR media content corresponding to the on-demand content;
(iii) requesting and receiving an ABR profile manifest from an ABR system for the selected profile of the ABR media content;
(iv) requesting the ABR media content in the profile manifest from the ABR system;
(v) receiving an ABR stream for the ABR media content;
(vi) transforming the ABR stream into a prescribed transport stream;
(vii) causing the prescribed transport stream to be modulated in accordance with the specified modulation technique; and
(viii) causing the modulated prescribed transport stream to be transmitted to a client terminal over the access network.

US Pat. No. 10,659,827

TRANSMISSION OF APPLICATIONS WITH CONTENT

Comcast Cable Communicati...

1. A method comprising:receiving, by one or more computing devices and from a user device, a request for first content;
generating, in response to the request for the first content, a first transport stream comprising the first content and application data associated with a first application;
sending at least a portion of the first transport stream to the user device;
detecting an interruption in the sending of the first transport stream;
determining that only a first portion of the application data has been sent to the user device;
generating a second transport stream comprising second content and a second portion of the application data, wherein the generating the second transport stream is based at least on a usage pattern of applications; and
sending, based on a request for the second content, at least a portion of the second transport stream to the user device.

US Pat. No. 10,659,826

CLOUD STREAMING SERVICE SYSTEM, IMAGE CLOUD STREAMING SERVICE METHOD USING APPLICATION CODE, AND DEVICE THEREFOR

SK PLANET CO., LTD., Seo...

1. A cloud streaming server comprising:a processor;
a memory storing instructions thereon, the instructions when executed by the processor cause the processor to:
receive a first code corresponding to an application result screen from a web application server;
capture an image by using image region attribute information included in the first code;
perform still image encoding of the captured image by using a still image compression technique to generate a still-image-encoded capture image;
convert the first code to a second code including animation information created using an animation code in the first code, wherein the animation information includes type of animation to be applied to the still-image-encoded capture image, a duration during which the animation is applied, an animation repetition count, a start coordinate, an end coordinate, a start size, and an end size; and
perform a cloud streaming service based on a still image by transmitting, to a user terminal, the still-image-encoded capture image and the second code such that the user terminal creates the application result screen.

US Pat. No. 10,659,825

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR PROVIDING A DESCRIPTION OF A PROGRAM TO A USER EQUIPMENT

1. A method for communicating with a user equipment, comprising:storing a profile of a user and/or the user equipment, including a maximum number of characters of at least one display field in the user equipment, in at least one storage device of at least one server;
storing, in the at least one storage device, a set of data fields associated with a program, wherein the data fields define a first description of the program;
monitoring, using the server, the user equipment via a wired and/or wireless communication link;
updating, using the server, the profile of the user and/or the user equipment based on monitoring the user equipment;
constructing, using the server, a second description of the same program based on the updated profile of the user and/or the user equipment and based on not exceeding the maximum number of characters, the second description comprising a subset of data fields from the set of data fields, the subset of data fields placed in an order that is based on an order specified by the updated profile of the user and/or the user equipment; and
transmitting over the wired and/or wireless communication link, using the server, the second description of the program to be displayed on the user equipment.
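The construction of the second description, an ordered subset of fields fitted under the display's character limit, can be sketched as follows. Field names and the separator are hypothetical:

```python
def build_description(fields, order, max_chars, sep=" "):
    """Assemble a description from `fields` (name -> text) in the
    profile-specified `order`, never exceeding `max_chars`."""
    parts = []
    length = 0
    for name in order:
        text = fields.get(name)
        if text is None:
            continue
        # Account for the separator added between fields.
        extra = len(text) + (len(sep) if parts else 0)
        if length + extra > max_chars:
            break
        parts.append(text)
        length += extra
    return sep.join(parts)
```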

US Pat. No. 10,659,824

VIDEO PLAYBACK METHOD AND APPARATUS

Hangzhou Hikvision Digita...

1. A video playback method, comprising:receiving a data obtaining request for a video to be played back sent by a client, wherein the data obtaining request comprises a multiplied speed for video playback;
estimating a current data transmission speed according to historical data transmission speeds, wherein the historical data transmission speeds are obtained according to a preset statistical rule;
selecting a target video frame discarding scheme from preset video frame discarding schemes, according to the current data transmission speed and a theoretical data transmission speed corresponding to a preset video frame discarding scheme, wherein the theoretical data transmission speed is determined according to the multiplied speed for video playback; and
performing discard processing on video data of the video to be played back according to the target video frame discarding scheme, and sending to the client the video data that has been subjected to the discard processing, so that the client plays back the video to be played back;
wherein the step of selecting a target video frame discarding scheme from preset video frame discarding schemes, according to the current data transmission speed and a theoretical data transmission speed corresponding to a preset video frame discarding scheme, comprises:
determining a video frame discarding scheme with the highest priority in the preset video frame discarding schemes as a video frame discarding scheme to be selected;
calculating a target theoretical data transmission speed corresponding to the video frame discarding scheme to be selected, based on a bandwidth compression ratio, a rate for the video playback, and a bit rate for the video to be played back under the video frame discarding scheme to be selected;
determining whether the target theoretical data transmission speed is greater than or equal to the product of the current data transmission speed and a first preset coefficient;
if so, determining the video frame discarding scheme to be selected as the target video frame discarding scheme; and
if not, updating the video frame discarding scheme to be selected to a video frame discarding scheme with the next lower priority according to a descending order by priority, and returning to the step of calculating a target theoretical data transmission speed corresponding to the video frame discarding scheme to be selected, based on a bandwidth compression ratio, a rate for the video playback, and a bit rate for the video to be played back under the video frame discarding scheme to be selected, until the video frame discarding scheme to be selected is a video frame discarding scheme with the lowest priority, and then determining the video frame discarding scheme to be selected as the target video frame discarding scheme.
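The priority walk in this claim can be sketched as below. Each scheme's theoretical data transmission speed is taken as precomputed (the claim derives it from a bandwidth compression ratio, playback rate, and bit rate), and the comparison direction follows the claim's wording verbatim:

```python
def select_discard_scheme(schemes, current_speed, coeff):
    """Walk `schemes`, a list of (name, theoretical_speed) pairs ordered
    from highest to lowest priority. Per the claim, a scheme is chosen
    when its theoretical speed is at least current_speed * coeff;
    otherwise fall through to the lowest-priority scheme."""
    for name, theoretical_speed in schemes[:-1]:
        if theoretical_speed >= current_speed * coeff:
            return name
    return schemes[-1][0]
```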

US Pat. No. 10,659,822

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An information processing apparatus comprising:a specifying unit configured to specify at which times material data for generating a virtual-viewpoint content is stored in a storage unit that stores a plurality of material data for generating a virtual-viewpoint content; and
an output unit configured to output, based on a specification result from the specifying unit, information for displaying an image distinguishably indicating a time range in which a virtual-viewpoint content can be generated and a time range in which a virtual-viewpoint content cannot be generated.

US Pat. No. 10,659,821

SET-TOP BOX, SYSTEM AND METHOD FOR PROVIDING AWARENESS IN A HOSPITALITY ENVIRONMENT

Enseo, Inc., Richardson,...

1. A system for providing awareness in an environment, the system comprising:a horizontal array of set-top boxes, each set-top box being associated with a room in the environment, each set-top box having an identification including a room identifier;
each set-top box of the horizontal array including a wireless transceiver;
a wireless-enabled interactive device including:
a housing;
an emergency button;
a processor coupled to a wireless transceiver;
a memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
receive set-top box identification beacon signals;
measure the strength of the identification beacon signals;
responsive to the activation of the emergency button, immediately transmit a broadcast signal including a data packet having at least one set-top box identification, a wireless-enabled interactive device identification, and an emergency alert;
an array of wireless routers disposed in the environment, each wireless router configured to receive the data packet from the proximate wireless-enabled interactive device and forward the data packet; and
a server located within the environment and in communication with the array of wireless routers, the server responsive to the data packet including the emergency alert, activating an emergency alert notification.

US Pat. No. 10,659,820

IMAGE CODING METHOD AND IMAGE CODING DEVICE FOR PARTITIONING AN IMAGE INTO PROCESSING UNITS AND CODING THE PARTITIONED IMAGE TO GENERATE A CODE SEQUENCE

SUN PATENT TRUST, New Yo...

1. An image coding and decoding device for coding an image to generate a code sequence and decoding the code sequence generated by an image coding method, wherein the image coding method is performed using processing circuitry, for partitioning an image into processing units, and coding the partitioned image to generate the code sequence, the image coding method comprising: determining a partitioning pattern for hierarchically partitioning the image in order starting from a largest processing unit which is 64 pixels by 64 pixels, the largest processing unit corresponding to a hierarchy depth value of 0;
defining the partitioning pattern only by (i) a maximum hierarchy depth value of N indicating a deepest processing unit which the largest processing unit is partitioned down to, (ii) a minimum hierarchy depth value of K indicating a shallowest processing unit which the largest processing unit is partitioned down to, and (iii) one bit indicating whether or not to partition each of the partitioning units corresponding to the minimum hierarchy depth value of K, when it is determined that the minimum hierarchy depth value of K is greater than 0 and the maximum hierarchy depth value of N is (K+1); and
coding the defined partitioning pattern, the image coding and decoding device comprising an image coding device and an image decoding device,
wherein the image coding device comprises:
first processing circuitry; and
a first non-transitory memory storing thereon first executable instructions, which when executed, cause the first processing circuitry to perform:
determining the partitioning pattern according to the maximum hierarchy depth value N and the minimum hierarchy depth value K; and
coding the partitioning pattern to generate the code sequence, and
wherein the image decoding device comprises:
second processing circuitry; and
a second non-transitory memory storing thereon second executable instructions, which when executed, cause the second processing circuitry to perform:
decoding the defined partitioning pattern included in the code sequence; and
determining the partitioning pattern from the decoded defined partitioning pattern.

US Pat. No. 10,659,819

CODING OF A SPATIAL SAMPLING OF A TWO-DIMENSIONAL INFORMATION SIGNAL USING SUB-DIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder comprising:an extractor configured to extract, from a data stream representing video information, information related to a maximum hierarchy level, information related to a first maximum region size associated with prediction coding and a second maximum region size associated with transform coding;
a divider configured to:
divide an array of information samples representing a spatially sampled portion of the video information into a first set of root regions based on the first maximum region size,
sub-divide at least some of the first set of root regions into a first set of sub-regions using recursive multi-tree partitioning,
determine whether a size of at least one of the first set of sub-regions exceeds the second maximum region size;
responsive to a determination that the size of at least one of the first set of sub-regions does exceed the second maximum region size, divide at least one of the first set of sub-regions into a second set of root regions of the second maximum region size,
determine, for each of the second set of root regions of the second maximum region size, whether the respective root region of the second set of root regions is to be sub-divided, and
responsive to a determination that the respective root region of the second set of root regions is to be sub-divided, sub-divide the respective root region of the second set of root regions into a second set of sub-regions using recursive multi-tree partitioning based on the maximum hierarchy level; and
a reconstructor configured to reconstruct the array of information samples using prediction coding in accordance with the first set of sub-regions and transform coding in accordance with the second set of sub-regions.
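The two-stage division in this claim, tiling into maximum-size root regions and then recursive multi-tree partitioning, can be sketched with a quadtree. The `split_decider` callback and fixed `max_level` are illustrative stand-ins for the signalled subdivision information:

```python
def root_regions(width, height, max_size):
    """Tile the sample array into root regions of at most max_size."""
    return [(x, y, min(max_size, width - x), min(max_size, height - y))
            for y in range(0, height, max_size)
            for x in range(0, width, max_size)]

def quadtree_split(region, split_decider, level=0, max_level=3):
    """Recursively split an (x, y, w, h) region while split_decider
    approves and the maximum hierarchy level is not reached."""
    x, y, w, h = region
    if level >= max_level or not split_decider(region, level):
        return [region]
    hw, hh = w // 2, h // 2
    leaves = []
    for sub in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        leaves.extend(quadtree_split(sub, split_decider, level + 1, max_level))
    return leaves
```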

US Pat. No. 10,659,818

INHERITANCE IN SAMPLE ARRAY MULTITREE SUBDIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing an array of information samples encoded in a data stream and representing video information, the decoder comprising:an extractor configured for:
extracting, from the data stream, multi-tree subdivision information and inheritance information associated with an inheritance coding block of the array of information samples, the inheritance information indicating as to whether inheritance is used, wherein the inheritance coding block corresponds to a first hierarchy level of a sequence of hierarchy levels in accordance with the multi-tree subdivision information and is composed of a set of coding sub-blocks, each of which corresponds to a second hierarchy level of the sequence of hierarchy levels in accordance with the multi-tree subdivision information,
extracting, from the data stream if the inheritance is used with respect to the inheritance coding block, an inheritance subset associated with the inheritance coding block, the inheritance subset including at least one syntax element of a predetermined syntax element type, and
extracting, from the data stream, respective residual information associated with each of the set of coding sub-blocks; and
a predictor configured for:
copying the at least one syntax element in the inheritance subset into a set of syntax elements representing coding parameters used in an inter coding process corresponding to each of the set of coding sub-blocks,
determining, for each of the set of coding sub-blocks, a coding parameter used in the inter coding process associated with the corresponding coding sub-block based on the at least one syntax element,
predicting a respective prediction signal for each of the set of coding sub-blocks based on information associated with a previously reconstructed coding sub-block in accordance with the inter coding process, and
reconstructing each of the set of coding sub-blocks based on the respective coding parameter, the respective prediction signal, and the respective residual information.

US Pat. No. 10,659,817

METHOD OF SAMPLE ADAPTIVE OFFSET PROCESSING FOR VIDEO CODING

HFI Innovation Inc., Zhu...

1. A method of an edge offset (EO) sample-adaptive offset (SAO) processing for reducing an image noise in a video coding system, the method comprising:receiving an input data associated with a reconstructed picture;
determining a first difference between a current reconstructed pixel and a first neighboring reconstructed pixel and a second difference between the current reconstructed pixel and a second neighboring reconstructed pixel;
determining a first SAO sign of the first difference based on the first difference and a SAO-sign threshold that is greater than zero according to:
first SAO sign = 1, if x ≥ saoSignThre; first SAO sign = −1, if x ≤ −saoSignThre; first SAO sign = 0, if |x| < saoSignThre,
wherein x represents the first difference, and saoSignThre represents the SAO-sign threshold; the first SAO sign is equal to 1 if the first difference is greater than or equal to the SAO-sign threshold; the first SAO sign is equal to −1 if the first difference is smaller than or equal to a negative SAO-sign threshold; and the first SAO sign is equal to 0 if an absolute value of the first difference is smaller than the SAO-sign threshold;
determining a second SAO sign of the second difference based on the second difference and the SAO sign threshold according to:

second SAO sign = { 1, if y ≥ saoSignThre; −1, if y ≤ −saoSignThre; 0, if |y| < saoSignThre },

wherein y represents the second difference; the second SAO sign is equal to 1 if the second difference is greater than or equal to the SAO-sign threshold; the second SAO sign is equal to −1 if the second difference is smaller than or equal to the negative SAO-sign threshold; and the second SAO sign is equal to 0 if an absolute value of the second difference is smaller than the SAO-sign threshold;
determining an EO classification index for the current reconstructed pixel based on the first SAO sign of the first difference and the second SAO sign of the second difference according to:
the EO classification index=the first SAO sign+the second SAO sign+2; and
compensating the current reconstructed pixel by adding a SAO-offset value associated with the EO classification index to the current reconstructed pixel, wherein the SAO-offset value associated with the EO classification index for the current reconstructed pixel is determined according to a SAO-offset sign, an absolute SAO-offset value and a SAO-bit-shift value, and the SAO-offset value is derived by multiplying the SAO-offset sign with a result from applying a left shift by the SAO-bit-shift value to the absolute SAO-offset value.
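The claimed EO classification and offset derivation map directly to a few lines of code. The sketch below follows the claim's formulas; the per-class offsets table layout is an assumption for illustration:

```python
def sao_sign(diff, thr):
    """Thresholded sign per the claim: +1 if diff >= thr,
    -1 if diff <= -thr, else 0 (|diff| < thr)."""
    if diff >= thr:
        return 1
    if diff <= -thr:
        return -1
    return 0

def eo_class_index(cur, n1, n2, thr):
    """EO classification index = sign(cur - n1) + sign(cur - n2) + 2."""
    return sao_sign(cur - n1, thr) + sao_sign(cur - n2, thr) + 2

def sao_offset_value(offset_sign, abs_offset, bit_shift):
    """Offset = sign * (abs_offset left-shifted by bit_shift), as claimed."""
    return offset_sign * (abs_offset << bit_shift)

def compensate(cur, n1, n2, thr, offsets):
    """offsets: map from EO class index to (sign, abs, shift) triples
    (an assumed layout for the signaled offset parameters)."""
    sign, absv, shift = offsets[eo_class_index(cur, n1, n2, thr)]
    return cur + sao_offset_value(sign, absv, shift)
```

For example, with both differences at or above the threshold the index is 4 (a local peak), and the pixel is pulled down or up by the signaled class-4 offset.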

US Pat. No. 10,659,816

POINT CLOUD GEOMETRY COMPRESSION

Apple Inc., Cupertino, C...

1. A non-transitory computer-readable medium storing program instructions that, when executed by one or more processors, cause the one or more processors to:sub-sample a point cloud, wherein the sub-sampled point cloud comprises fewer points than an original version of the point cloud;
for respective ones of the points of the sub-sampled point cloud:
predict a predicted point location between the respective point of the sub-sampled point cloud and a neighboring point in the sub-sampled point cloud;
determine, based on comparing the predicted point location to the original version of the point cloud, update information for the predicted point location; and
encode data for a compressed version of the point cloud, the data comprising the determined update information for the predicted point locations.
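A hypothetical sketch of the claimed scheme: sub-sample the cloud, predict a point midway between neighboring retained points, and encode a quantized correction toward the closest point of the original cloud. All names, the midpoint predictor, and the quantization step are illustrative assumptions:

```python
import math

def compress_sketch(points, keep_every=2, step=0.5):
    """Sub-sample, predict midpoints between consecutive retained points,
    and store quantized update vectors toward the original cloud."""
    sub = points[::keep_every]                    # sub-sampled cloud (fewer points)
    updates = []
    for a, b in zip(sub, sub[1:]):
        pred = tuple((pa + pb) / 2 for pa, pb in zip(a, b))   # predicted location
        nearest = min(points, key=lambda p: math.dist(p, pred))
        updates.append(tuple(round((n - p) / step)            # update information
                             for n, p in zip(nearest, pred)))
    return sub, updates

def decompress_sketch(sub, updates, step=0.5):
    """Replay the predictions and apply the decoded updates."""
    pts = list(sub)
    for (a, b), u in zip(zip(sub, sub[1:]), updates):
        pred = tuple((pa + pb) / 2 for pa, pb in zip(a, b))
        pts.append(tuple(p + ui * step for p, ui in zip(pred, u)))
    return pts
```

The design point the claim captures is that only the sub-sampled points plus small correction terms need to be coded, not every original point.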

US Pat. No. 10,659,815

METHOD OF DYNAMIC ADAPTIVE STREAMING FOR 360-DEGREE VIDEOS

Indiana University Resear...

1. A device, comprising:a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
determining an estimated available bandwidth for downloading panoramic video media data to a panoramic video viewer;
retrieving, from the memory, a plurality of video tiles defining portions of a panoramic video frame, the plurality of video tiles defining subareas of pre-segmented temporal chunks of the panoramic video frame, the plurality of video tiles having a same duration as the temporal chunks;
predicting a future orientation of a second display region of the panoramic video frame to be presented at a display of the panoramic video viewer at a second time based on an orientation of a first display region of the panoramic video frame at a first time, wherein the first display region corresponds to a first plurality of tiles of the plurality of video tiles;
identifying based on the future orientation of the second display region, a predicted list of tiles of the plurality of video tiles for rendering the second display region at the second time, wherein the second display region corresponds to a second plurality of tiles of the plurality of video tiles, wherein the predicted list of tiles includes the second plurality of tiles and zero or more tiles of the plurality of video tiles in a margin area outside of the second display region;
calculating a quality of experience from a plurality of first encoding bitrates for the second plurality of tiles and a plurality of second encoding bitrates for tiles in the margin area, resulting in a selected first encoding bitrate and a selected second encoding bitrate, wherein downloading the second plurality of tiles at the selected first encoding bitrate and the tiles in the margin area at the selected second encoding bitrate is within the estimated available bandwidth; and
facilitating download of the second plurality of tiles at the selected first encoding bitrate and the tiles in the margin area at the selected second encoding bitrate to the panoramic video viewer.
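The bitrate selection the claim describes is a small constrained optimization: pick one rate for viewport tiles and one for margin tiles so that quality is maximized within the estimated bandwidth. A toy sketch with an assumed QoE weighting (weights and the constraint that margin quality not exceed viewport quality are illustrative):

```python
def select_bitrates(viewport_tiles, margin_tiles, rates, bandwidth):
    """Exhaustively score feasible (viewport_rate, margin_rate) pairs;
    the 1.0/0.25 QoE weights are an assumption, not the patent's metric."""
    best, best_score = None, float("-inf")
    for rv in rates:
        for rm in rates:
            if rm > rv:                               # margin never sharper than viewport
                continue
            cost = viewport_tiles * rv + margin_tiles * rm
            if cost > bandwidth:                      # must fit estimated bandwidth
                continue
            score = 1.0 * rv + 0.25 * rm
            if score > best_score:
                best, best_score = (rv, rm), score
    return best
```

With a few tiles and a handful of encoding ladders the full search is cheap; real systems would also fold in the head-orientation prediction confidence.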

US Pat. No. 10,659,814

DEPTH PICTURE CODING METHOD AND DEVICE IN VIDEO CODING

LG ELECTRONICS INC., Seo...

1. A method for decoding a 3D video by a video decoding apparatus, the method comprising:obtaining, by the video decoding apparatus, a disparity value based on an index representing a reference view and a predetermined value;
deriving, by the video decoding apparatus, motion information of a block in an inter-view reference picture in the reference view based on the disparity value, wherein the block in the inter-view reference picture in the reference view is a block related with a current block in a depth picture in a current view;
deriving, by the video decoding apparatus, motion information of the current block in the depth picture in the current view based on the motion information of the block related with the current block; and
generating, by the video decoding apparatus, a prediction sample of the current block based on the motion information of the current block,
wherein the index representing the reference view is adaptively set equal to a view index of the inter-view reference picture in a reference picture list,
wherein the reference view is the same as a view to which the inter-view reference picture comprising the block related with the current block belongs, and
wherein the predetermined value is “1<<(bit depth−1)”.

US Pat. No. 10,659,813

METHOD, SYSTEM AND DEVICE FOR CODING AND DECODING DEPTH INFORMATION

ZTE CORPORATION, Shenzhe...

1. A method for coding depth information, comprising:acquiring a value of a number of elements included in a first Depth Look-up Table (DLT), coding the value of the number of the elements included in the first DLT and coding values of the elements included in the first DLT, and writing coding bits of the first DLT into a bitstream of a parameter set;
coding flag information corresponding to a specified view, and indicating by the flag information whether the first DLT is used as a prediction reference to code a second DLT;
when it is determined according to the flag information that the first DLT is used as the prediction reference to code the second DLT, coding values of elements included in the second DLT using the first DLT as the prediction reference; and
when it is determined according to the flag information that the first DLT is not used as the prediction reference to code the second DLT, coding a value of a number of the elements included in the second DLT, coding directly the values of the elements included in the second DLT without referencing other DLT(s) including the first DLT, and writing coding bits of the second DLT into the bitstream of the parameter set;
wherein coding the values of the elements included in the second DLT using the first DLT as the prediction reference comprises:
constructing an additional DLT using the first DLT and the second DLT;
coding a value of a number of elements included in the additional DLT;
coding values of the elements included in the additional DLT; and
writing coding bits of the additional DLT into the bitstream of the parameter set.
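One plausible realization of the "additional DLT" constructed from the first and second tables is their symmetric difference, which lets a decoder recover the second DLT given the first; this particular construction is an assumption for illustration, not necessarily the patent's:

```python
def code_dlt_with_reference(first_dlt, second_dlt):
    """Additional DLT = elements present in exactly one of the two tables
    (symmetric difference); its length and values would be coded."""
    additional = sorted(set(first_dlt) ^ set(second_dlt))
    return len(additional), additional

def decode_dlt_with_reference(first_dlt, additional):
    """Symmetric difference is self-inverse: applying it to the first DLT
    and the additional DLT yields the second DLT."""
    return sorted(set(first_dlt) ^ set(additional))
```

The prediction pays off when the two views' depth look-up tables overlap heavily, so the additional table is short.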

US Pat. No. 10,659,812

METHOD AND DEVICE FOR VIDEO DECODING AND METHOD AND DEVICE FOR VIDEO ENCODING

SAMSUNG ELECTRONICS CO., ...

1. A video decoding method comprising:obtaining, from a bitstream, information about a prediction mode of a current block;
when the prediction mode of the current block is an intra mode, obtaining information about an intra prediction mode of the current block; and
obtaining a predicted sample value of the current block by using at least two samples in at least two lines in a plurality of lines that are from among samples comprised in an adjacent reference region of the plurality of lines and is located, from the current block, in an intra prediction direction indicated by an intra prediction mode,
wherein the adjacent reference region of the plurality of lines comprises a plurality of lines that are in a vertical direction and located at a left outside of the current block, or a plurality of lines that are in a horizontal direction and located at an upper outside of the current block, and
wherein the at least two samples include at least one sample in each of the at least two lines.

US Pat. No. 10,659,811

INTER PREDICTION METHOD AND APPARATUS FOR SAME

Electronics and Telecommu...

1. A method for decoding a video signal with an image decoding apparatus, comprising:obtaining a spatial merge candidate for a current block from a neighboring block adjacent to the current block;
determining a collocated block comprised in a collocated picture,
wherein the collocated picture is a reference picture having the collocated block among reference pictures included in a reference picture list, and
wherein the collocated block is determined to be either a first block adjacent to a right-bottom position of the current block or a second block comprising a center position of the current block;
obtaining a temporal merge candidate of the current block based on the collocated block,
wherein motion information of the temporal merge candidate comprises a motion vector and a reference picture index, and
wherein the motion vector of the temporal merge candidate is derived from a motion vector of the collocated block within the collocated picture, and
wherein the reference picture index of the temporal merge candidate is set to a predetermined fixed value regardless of a reference picture index of the neighboring block;
generating a merge candidate list comprising the spatial merge candidate and the temporal merge candidate;
determining a merge candidate for the current block based on the merge candidate list and a merge candidate index, the merge candidate index specifying one of the merge candidates in the merge candidate list;
deriving motion information of the current block from motion information of the determined merge candidate; and
generating a prediction block corresponding to the current block based on the derived motion information.
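The merge-list construction in this claim can be sketched compactly; the dict layout and the candidate cap are illustrative assumptions:

```python
def build_merge_list(spatial_cands, col_mv, fixed_ref_idx=0, max_cands=5):
    """Spatial candidates first, then a temporal candidate whose motion
    vector comes from the collocated block and whose reference index is a
    predetermined fixed value, independent of the neighboring blocks."""
    merge_list = list(spatial_cands)[:max_cands - 1]
    if col_mv is not None:
        merge_list.append({"mv": col_mv, "ref_idx": fixed_ref_idx})
    return merge_list[:max_cands]

def derive_motion(merge_list, merge_idx):
    """Select the candidate named by the signaled merge index."""
    cand = merge_list[merge_idx]
    return cand["mv"], cand["ref_idx"]
```

The notable claimed detail is the last line of the temporal candidate: its reference index is fixed rather than copied from a spatial neighbor.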

US Pat. No. 10,659,810

INTER PREDICTION METHOD AND APPARATUS FOR SAME

Electronics and Telecommu...

1. A method for decoding a video signal with an image decoding apparatus, comprising:obtaining a spatial merge candidate for a current block from a neighboring block adjacent to the current block;
determining a collocated block comprised in a collocated picture,
wherein the collocated picture is a reference picture having the collocated block among reference pictures included in a reference picture list, and
wherein the collocated block is determined to be either a first block adjacent to a right-bottom position of the current block or a second block comprising a center position of the current block;
obtaining a temporal merge candidate of the current block based on the collocated block,
wherein motion information of the temporal merge candidate comprises a motion vector and a reference picture index, and
wherein the motion vector of the temporal merge candidate is derived from a motion vector of the collocated block within the collocated picture, and
wherein the reference picture index of the temporal merge candidate is set equal to 0 regardless of motion information of the neighboring block;
generating a merge candidate list comprising the spatial merge candidate and the temporal merge candidate;
determining a merge candidate for the current block based on the merge candidate list and a merge candidate index, the merge candidate index specifying one of the merge candidates in the merge candidate list;
deriving motion information of the current block from motion information of the determined merge candidate; and
generating a prediction block corresponding to the current block based on the derived motion information.

US Pat. No. 10,659,809

EFFICIENT ROUNDING FOR DEBLOCKING

SUN PATENT TRUST, New Yo...

1. An image decoding method comprising:determining, on a per line basis, (i) whether or not a filter is to be applied to a boundary between a block and a neighboring block adjacent to the block, the block and the neighboring block being included in a reconstructed image corresponding to an encoded image, the neighboring block having a prediction signal different from a prediction signal of the block and a transform coefficient different from a transform coefficient of the block and (ii) a width of the filter when the filter is determined to be applied; and
filtering, on a per line basis, the boundary when the filter is determined to be applied, wherein
in the filtering, when the width of the filter is a first width, sample pixels in a second width are used, the first width being narrower than the second width,
in the filtering, when the width of the filter is a third width, the third width being narrower than the first width, sample pixels in a fourth width are used, the third width being narrower than the fourth width,
in the determining, when determining whether or not the filter is to be applied to a current line, only results of comparing between a value calculated using pixel values and a threshold are used, the pixel values being in the current line and included in the block or the neighboring block, and
in calculating the value using the pixel values, a first difference value between p0 and q0 and a second difference value between p1 and q1 are used, p0 and p1 being pixel values of a first pixel and a second pixel respectively in the current line included in the block, and q0 and q1 being pixel values of a third pixel and a fourth pixel respectively in the current line included in the neighboring block, the first pixel being closest to the boundary, the second pixel being next to the first pixel, the third pixel being closest to the boundary, and the fourth pixel being next to the third pixel.
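The per-line on/off decision the claim describes compares a value built only from pixels in the current line against a threshold. A sketch, where the particular weighted combination of |p0 − q0| and |p1 − q1| is an assumption:

```python
def filter_decision(p, q, threshold):
    """p = [p0, p1] in the block, q = [q0, q1] in the neighboring block,
    p0/q0 closest to the boundary. The 2:1 weighting of the two
    difference values is illustrative, not the patent's exact formula."""
    value = 2 * abs(p[0] - q[0]) + abs(p[1] - q[1])
    return value < threshold          # filter this line only below threshold
```

Keeping the decision per-line and threshold-based (rather than depending on other lines) is what makes the rounding/decision logic cheap.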

US Pat. No. 10,659,808

MOTION VECTOR DERIVATION METHOD, MOVING PICTURE CODING METHOD AND MOVING PICTURE DECODING METHOD

PANASONIC INTELLECTUAL PR...

1. An image decoding method for decoding a block in a picture, the method comprising:obtaining a reference motion vector of a reference block, the reference motion vector being used for deriving a motion vector of a current block to be decoded;
calculating a first parameter corresponding to a difference between a display order of a picture including a reference block and a display order of a reference picture of the reference block, wherein said reference block is motion-compensated using the reference motion vector, and said reference picture is referred to by the reference motion vector;
calculating a second parameter corresponding to a difference between a display order of a current picture and a display order of a reference picture of the current picture, wherein said current picture is a picture including the current block;
judging whether or not the first parameter is within a range having a predetermined maximum value;
generating a multiplier parameter corresponding to the first parameter, the multiplier parameter being used for changing a division operation by the first parameter into a multiplication operation by the multiplier parameter;
deriving the motion vector of the current block by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the predetermined maximum value of the range and the second parameter, when the first parameter is not within the range having the predetermined maximum value as a result of said judging, and by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the first parameter and the second parameter, when the first parameter is within the range having the predetermined maximum value as a result of said judging;
decoding a coded data stream to obtain a decoded difference image of the current block;
generating a motion compensated image of the current block using the derived motion vector and a reference picture corresponding to the derived motion vector; and
reconstructing the current block by adding the motion compensated image of the current block and the decoded difference image of the current block.
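The core trick in this claim, replacing a division by the display-order difference with a multiplication by a precomputed multiplier (with the difference clipped to a bounded range), can be sketched as follows. The constants follow the well-known H.264-style tx = (16384 + |td|/2) / td derivation and are treated here as assumptions:

```python
def mv_scale(ref_mv, td, tb, td_max=128):
    """Division-free motion-vector scaling sketch.
    td = picture-order distance for the reference block (first parameter),
    tb = distance for the current block (second parameter)."""
    td_c = max(-td_max, min(td_max - 1, td))     # clip to the predetermined range
    if td_c == 0:
        return ref_mv                            # degenerate distance: no scaling
    tx = (16384 + abs(td_c) // 2) // abs(td_c)   # multiplier parameter for 1/|td|
    if td_c < 0:
        tx = -tx
    dist = max(-1024, min(1023, (tb * tx + 32) >> 6))
    return (ref_mv * dist + 128) >> 8            # scale by multiply + shift only
```

When td exceeds the range, the clip means the multiplier for the maximum value is used, exactly the fallback path the claim distinguishes.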

US Pat. No. 10,659,807

MOTION VECTOR DERIVATION METHOD, MOVING PICTURE CODING METHOD AND MOVING PICTURE DECODING METHOD

PANASONIC INTELLECTUAL PR...

1. An image coding method for coding a current block in a current picture, the method comprising:obtaining a reference motion vector of a reference block, the reference motion vector being used for deriving a motion vector of the current block to be coded;
calculating a first parameter corresponding to a difference between a display order of a picture including a reference block and a display order of a reference picture of the reference block, wherein said reference block is motion-compensated using the reference motion vector, and said reference picture is referred to by the reference motion vector;
calculating a second parameter corresponding to a difference between a display order of a current picture and a display order of a reference picture of the current picture, wherein said current picture is a picture including the current block;
judging if (i) the reference motion vector of the reference block refers to a picture having a display order located after a display order of a picture including the reference block and (ii) the first parameter is a negative value within a predetermined range;
generating a multiplier parameter corresponding to the first parameter, the multiplier parameter being used for changing a division operation by the first parameter into a multiplication operation by the multiplier parameter;
deriving the motion vector of the current block by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to a predetermined negative value and the second parameter, when the first parameter is a negative value out of the predetermined range as a result of said judging, and by scaling the reference motion vector based on a multiplication of a multiplier parameter corresponding to the first parameter and the second parameter, when the first parameter is a negative value within the predetermined range as a result of said judging;
generating a motion compensated image of the current block using the derived motion vector and a reference picture corresponding to the derived motion vector; and
coding a difference image between the current block and the motion compensated image of the current block.

US Pat. No. 10,659,806

VIDEO ENCODING METHOD AND APPARATUS, AND VIDEO DECODING METHOD AND APPARATUS USING INTERPOLATION FILTER ON WHICH IMAGE CHARACTERISTIC IS REFLECTED

SAMSUNG ELECTRONICS CO., ...

12. A video decoding apparatus comprising at least one processor configured to:determine, based on a ratio of frequency components, a degree of change between neighboring samples of at least one integer pixel unit adjacent to a reference sample of an integer pixel unit of a current sample and the reference sample, and determine an interpolation filter among interpolation filters for producing reference samples of a sub-pixel unit to predict the current sample, based on the degree of change; and
determine a predicted sample value of the current sample by using a reference sample of a sub-pixel unit produced by applying the determined interpolation filter to the reference sample and the neighboring samples, and produce a reconstructed sample value of the current sample by using a residual value between the predicted sample value and a sample value of the current sample,
wherein the interpolation filters have different frequency passbands,
wherein the ratio of frequency components includes a ratio between low-frequency components and high-frequency components, and
wherein the determining of the degree of change comprises determining the degree of change based on the ratio between low-frequency components and high-frequency components among alternating-current (AC) components between the reference sample and neighboring samples of the integer pixel unit.

US Pat. No. 10,659,805

METHOD AND APPARATUS FOR VIDEO INTERMODAL TRANSCODING

ECOLE DE TECHNOLOGIE SUPE...

1. A method for transcoding a succession of type-1 images compressed according to a first compression mode to type-2 images compressed according to a second compression mode, comprising:employing a hardware processor for executing software instructions for:
extracting descriptors of a current type-1 image of said succession of type-1 images;
generating a reconstructed bitstream of said current type-1 image;
for each predefined image region:
extracting motion-vector candidates at fractional-pel precision for a current type-2 image from:
said current type-1 image;
a reference type-2 image corresponding to a prior type 1 image; and
a synthesized part of said current type-2 image;
creating a cache of prediction errors at said fractional-pel precision with reference to said reconstructed bitstream for each said motion-vector candidate for each predefined cell of said each predefined image region;
for each predefined segment of said each predefined image region and for each said motion-vector candidate:
determining a respective prediction error as a function of prediction errors of constituent cells determined from said cache; and
selecting a preferred motion-vector candidate of least prediction error; and
formulating a compressed type-2 image using said preferred motion-vector candidate of said each predefined segment.

US Pat. No. 10,659,804

METHOD AND APPARATUS FOR ENCODING/DECODING IMAGES USING ADAPTIVE MOTION VECTOR RESOLUTION

SK TELECOM CO., LTD., Se...

1. A video decoding apparatus for decoding a motion vector by adaptively determining a motion vector resolution in the unit of blocks which are split from a picture to be decoded, the apparatus comprising:a decoder configured to decode, from a bitstream, a differential motion vector, and motion vector resolution identification information dedicated to a current block to be decoded among the blocks split from the picture, wherein the differential motion vector is a difference between a current motion vector of the current block and a predicted motion vector for the current motion vector, and the motion vector resolution identification information is used for determining a motion vector resolution of the differential motion vector among a plurality of motion vector resolutions; and
an inter predictor configured to
derive one or more predicted motion vector candidates from motion vectors of one or more neighboring blocks of the current block,
convert the predicted motion vector candidates such that the converted predicted motion vector candidates have the same motion vector resolution as the motion vector resolution determined by the motion vector resolution identification information,
derive the predicted motion vector for the current motion vector from the converted predicted motion vector candidates, and
reconstruct the current motion vector of the current block, by adding the differential motion vector to the predicted motion vector.
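The decoder-side flow of this claim, converting predictor candidates to the signaled resolution before adding the differential, can be sketched as below. Storing neighbor MVs at quarter-pel precision and using a right shift with rounding for the conversion are assumptions:

```python
def reconstruct_mv(mvd, neighbor_mvs, resolution_shift, pred_index=0):
    """Convert each neighbor MV to the resolution signaled for the current
    block's differential MV (coarser by `resolution_shift` bits, with
    round-to-nearest), pick a predictor, and add the decoded differential."""
    converted = [
        ((mv + (1 << (resolution_shift - 1))) >> resolution_shift)
        if resolution_shift > 0 else mv
        for mv in neighbor_mvs
    ]
    pred = converted[pred_index]      # predicted motion vector
    return pred + mvd                 # current MV at the signaled resolution
```

Matching the predictor's resolution to the differential's is what lets a single adder reconstruct the current motion vector regardless of which resolution was signaled.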

US Pat. No. 10,659,803

PICTURE PREDICTION METHOD AND RELATED APPARATUS

Huawei Technologies Co., ...

1. A picture processing method, comprising:obtaining, by a picture processing apparatus, motion vectors of two pixel samples in a video frame to which a current picture block belongs; and
obtaining, by the picture processing apparatus, a motion vector of a pixel sample in the current picture block by using an affine motion model and the obtained motion vectors;
wherein the affine motion model comprises:
wherein (x, y) are coordinates of the pixel sample, vx is a horizontal component of the motion vector of the pixel sample, vy is a vertical component of the motion vector of the pixel sample, and a, b, c and d are coefficients of the affine motion model.
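The claim's affine-model equation is reproduced as an image in the original and is not recoverable from this text. As a loudly hypothetical stand-in, a common four-coefficient (zoom-plus-rotation) model consistent with coefficients a, b, c, d and two pixel samples is sketched below; it is an assumption, not the patent's exact formula:

```python
def affine_mv(x, y, a, b, c, d):
    """Hypothetical 4-parameter affine model:
    vx = a*x - b*y + c,  vy = b*x + a*y + d."""
    return a * x - b * y + c, b * x + a * y + d

def derive_coefficients(p0, p1, mv0, mv1):
    """Recover a, b, c, d from the motion vectors of two pixel samples,
    assumed here to lie on the same row at (x0, y0) and (x1, y0)."""
    (x0, y0), (x1, _) = p0, p1
    w = x1 - x0
    a = (mv1[0] - mv0[0]) / w
    b = (mv1[1] - mv0[1]) / w
    c, d = mv0[0], mv0[1]
    return a, b, c, d
```

Under this form, two control-point motion vectors fully determine the per-pixel motion field, which is why the claim only needs motion vectors of two pixel samples.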

US Pat. No. 10,659,802

VIDEO ENCODING AND DECODING

Nokia Technologies Oy, E...

1. A method comprising:obtaining a block of samples for motion prediction;
calculating at least one translational base motion vector component for the block of samples by defining a prediction for said translational base motion vector component;
defining a first differential motion vector component using a first precision;
adding said first differential motion vector component to said prediction for said translational base motion vector component; and
calculating at least one higher order motion vector component by:
defining a prediction for said higher order motion vector component;
defining a second differential motion vector component using a second precision different from the first precision;
adding said second differential motion vector component to said prediction for said higher order motion vector component; and
performing a motion compensation operation for said block of samples using the at least one translational base motion vector component and the at least one differential motion vector component.

US Pat. No. 10,659,801

METHOD AND APPARATUS FOR INTER PREDICTION IN VIDEO CODING SYSTEM

LG Electronics Inc., Seo...

1. A video decoding method performed by a decoding apparatus, the method comprising:deriving control points (CPs) for a current block;
acquiring motion vectors for the CPs;
deriving motion vectors of sub-blocks in the current block based on the acquired motion vectors;
deriving a prediction sample for the current block based on the derived motion vectors; and
generating a reconstructed sample based on the prediction sample,
wherein, based on a number of CPs being 2, coordinates of a top-left sample position of the current block being (0, 0), and a height and a width of the current block being S, coordinates of CP0 among the CPs are (0, 0) and coordinates of CP1 are (S, 0).

US Pat. No. 10,659,800

INTER PREDICTION METHOD AND DEVICE

Hangzhou Hikvision Digita...

1. An inter prediction method for encoding an image frame x, comprising:when performing the inter prediction for encoding the image frame x, determining a reference image frame of the image frame x, wherein the image frame x is a P-frame image or a B-frame image;
for each reference image frame, respectively performing the following processes with the existing sample of the reference image frame:
determining whether a resolution of a reference image frame is the same as a resolution of the image frame x;
when determining the resolution of the reference image frame is less than the resolution of the image frame x, and the resolution of the reference image frame is less due to image cropping, adjusting the resolution of the reference image frame to be the same as the resolution of the image frame x, by image filling;
when determining the resolution of the reference image frame is greater than the resolution of the image frame x, and the resolution of the reference image frame is greater due to image filling, adjusting the resolution of the reference image frame to be the same as the resolution of the image frame x, by image cropping;
when determining that the resolution of the reference image frame is the same as the resolution of the image frame x, performing the inter prediction for encoding the image frame x, based on each reference image frame.
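The crop/fill adjustment the claim walks through can be sketched on a 2-D sample array; the fill value is an illustrative assumption:

```python
def adjust_reference(ref, target_w, target_h, pad_value=128):
    """Make a reference frame match the target resolution:
    crop rows/columns when it is larger, fill (pad) when it is smaller.
    `ref` is a list of rows of samples."""
    out = [row[:target_w] for row in ref[:target_h]]   # crop if larger
    for row in out:                                     # fill columns if smaller
        row.extend([pad_value] * (target_w - len(row)))
    while len(out) < target_h:                          # fill rows if smaller
        out.append([pad_value] * target_w)
    return out
```

After this step every reference frame has the resolution of image frame x, so standard inter prediction can proceed unchanged.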

US Pat. No. 10,659,799

VIDEO CODING SYSTEM WITH TEMPORAL LAYERS AND METHOD OF OPERATION THEREOF

SONY CORPORATION, Tokyo ...

1. A decoding method for operation in a video coding system, the decoding method comprising:extracting, from an encoding bitstream, a video usability information (VUI) parameter including field_seq_flag, wherein
the field_seq_flag indicates whether a video stream includes a video representing field,
the VUI parameter is common to a plurality of occurrences of a temporal layer, and
the VUI parameter is constant for the plurality of occurrences of the temporal layer;
extracting the temporal layer from the encoding bitstream based on the extracted VUI parameter; and
decoding the encoding bitstream based on the extracted temporal layer to generate the video stream.

US Pat. No. 10,659,798

SAMPLE ARRAY CODING FOR LOW-DELAY

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing a sample array of a video from an entropy-encoded data stream, the decoder comprising:an entropy decoder configured to entropy decode a plurality of entropy slices in the data stream to reconstruct different portions of the sample array of the video, each of the plurality of entropy slices comprising entropy-encoded data for a corresponding row of the sample array,
wherein, for entropy decoding a current entropy slice of the plurality of entropy slices, the entropy decoder is configured to:
decode a position difference value of the current entropy slice using Exponential-Golomb coding,
wherein the position difference value is extracted from the data stream and indicates a difference between a starting position of a preceding entropy slice within the data stream and a starting position of the current entropy slice,
wherein the current entropy slice corresponds to a current row of the sample array and the preceding entropy slice corresponds to a previous row of the sample array, the current and previous rows being consecutive rows of the sample array and the previous row being spatially above the current row, and
derive the starting position within the data stream of the current entropy slice based on a sum of the starting position of the preceding entropy slice and the position difference value, wherein entropy decoding of the preceding entropy slice precedes entropy decoding of the current entropy slice.
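The start-position derivation in this claim, decode an Exp-Golomb-coded difference and add it to the previous slice's start, can be sketched directly. The bit-string representation of the stream is an illustrative simplification:

```python
def exp_golomb_decode(bits, pos=0):
    """Decode one unsigned Exponential-Golomb codeword from a '0'/'1'
    string; returns (value, next bit position)."""
    zeros = 0
    while bits[pos + zeros] == "0":        # count leading zeros
        zeros += 1
    # codeword value = (binary of the next zeros+1 bits) - 1
    value = int(bits[pos + zeros: pos + 2 * zeros + 1], 2) - 1
    return value, pos + 2 * zeros + 1

def slice_start_positions(first_start, bits, n_slices):
    """Each entropy slice starts at the preceding slice's start plus the
    decoded position difference value."""
    starts, pos, cur = [first_start], 0, first_start
    for _ in range(n_slices - 1):
        diff, pos = exp_golomb_decode(bits, pos)
        cur += diff
        starts.append(cur)
    return starts
```

Coding differences instead of absolute positions keeps the codewords short, and the cumulative sum recovers every slice's entry point before its row is decoded.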

US Pat. No. 10,659,797

VIDEO FRAME CODEC ARCHITECTURES

Google LLC, Mountain Vie...

1. An electronic device comprising:a frame decompressor configured to individually decompress at a frame-level multiple compressed frames to produce multiple decompressed frames, the multiple compressed frames respectively derived from multiple decoded frames;
a frame decompressor controller coupled to the frame decompressor and configured to arbitrate access to the frame decompressor for multiple cores;
a first core of the multiple cores coupled to the frame decompressor controller, the first core configured to obtain via the frame decompressor controller a decompressed frame of the multiple decompressed frames produced at the frame-level by the frame decompressor; and
a second core of the multiple cores coupled to the frame decompressor controller, the second core configured to obtain via the frame decompressor controller another decompressed frame of the multiple decompressed frames produced at the frame-level by the frame decompressor.

US Pat. No. 10,659,796

BANDWIDTH SAVING ARCHITECTURE FOR SCALABLE VIDEO CODING SPATIAL MODE

Advanced Micro Devices, I...

1. A system configured to perform scalable video encoding comprising:a memory; and
a processing unit, wherein the processing unit is configured to:
receive video data, wherein the video data includes one or more frames;
encode at least one frame to a base layer;
generate inter-layer data based on the at least one frame, wherein the inter-layer data includes any one or a combination of residual data, reconstruction data, and motion data, wherein the inter-layer data includes a bit indicating whether the inter-layer data includes residual data;
upsample the inter-layer data; and
encode the at least one frame to an enhanced layer using the upsampled inter-layer data based on a block type of the base layer, wherein the block type of the base layer indicates whether the inter-layer data includes either residual data or reconstruction data, wherein the resolution of the enhanced layer is greater than the resolution of the base layer.

US Pat. No. 10,659,795

TRANSMISSION DEVICE, TRANSMISSION METHOD, RECEPTION DEVICE, AND RECEPTION METHOD

SONY CORPORATION, Tokyo ...

1. A transmission device comprising:circuitry configured to
encode each picture constituting moving image data of a first frame rate at a second frame rate larger than the first frame rate to obtain a video stream of the second frame rate;
insert identification information into encoded image data of the each picture constituting the video stream of the second frame rate, the identification information indicating whether display start timing of the respective picture is delayed by one frame of the second frame rate or advanced by one frame of the second frame rate with respect to timing of the first frame rate; and
transmit the video stream of the second frame rate in which the identification information is inserted.

US Pat. No. 10,659,794

APPARATUS AND METHOD FOR PALETTE DECODING

MEDIATEK INC., Hsin-Chu ...

1. A video decoder comprising:a palette decoding apparatus, comprising:
a palette color storage device, arranged to store palette colors decoded from a bitstream;
a color index storage device, arranged to store color indices of pixels, wherein the color indices are decoded from the bitstream; and
a palette value processing circuit, arranged to generate a palette value for each pixel of the pixels by:
reading a color index of said each pixel from the color index storage device;
searching for a palette color stored in the palette color storage device that is indexed by the color index of said each pixel; and
setting the palette value of said each pixel by the palette color;
wherein a frame is divided into a plurality of first coding units, each of the first coding units is sub-divided into one or more second coding units, and before a palette value of a last pixel in a first coding unit is generated by the palette value processing circuit, a palette value of a non-last pixel in the first coding unit is generated by the palette value processing circuit and used by a reconstruction circuit of the video decoder;
wherein the palette decoding apparatus generates a palette value for each pixel in one second coding unit, and generates a palette value for each pixel in another second coding unit and the video decoder further comprises:
a residue decoding circuit, arranged to decode an entropy decoding result of the bitstream to generate a residue data for said each pixel in said another second coding unit and generate a residue data for each pixel in yet another second coding unit;
an adder circuit, arranged to add the palette value of said each pixel in said another second coding unit to the residue data of said each pixel in said another second coding unit to generate an adjusted residue data of said each pixel in said another second coding unit;
a storage device; and
a multiplexer circuit, having a first input port, a second input port, a third input port, and an output port, wherein the first input port is arranged to receive an output of the palette decoding apparatus, the second input port is arranged to receive an output of the adder circuit, the third input port is arranged to receive an output of the residue decoding circuit, and the output port is arranged to output the output of the palette decoding apparatus, the output of the adder circuit, and the output of the residue decoding circuit to the same storage device at different timings, respectively;
wherein the reconstruction circuit reads stored data from the storage device for reconstruction.
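The core palette lookup recited in this claim (color index → stored palette color) can be illustrated with a minimal, hypothetical sketch; the function and variable names below are illustrative, not from the patent.

```python
def palette_decode(color_indices, palette):
    """Map each pixel's decoded color index to its palette color value,
    as in the claimed palette value processing circuit: read the index,
    look up the indexed palette color, set the pixel's palette value."""
    return [palette[idx] for idx in color_indices]

# Example: a 2x2 coding unit with a 3-entry palette.
palette = [0x000000, 0x808080, 0xFFFFFF]
indices = [0, 1, 1, 2]
print(palette_decode(indices, palette))
# [0, 8421504, 8421504, 16777215]
```

The claimed hardware additionally interleaves residue decoding and multiplexing; this sketch covers only the index-to-color mapping step.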

US Pat. No. 10,659,793

DC COEFFICIENT SIGNALING AT SMALL QUANTIZATION STEP SIZES

Microsoft Technology Lice...

1. A method comprising:receiving encoded data in a bitstream; and
decoding the encoded data to reconstruct a picture, including, for a block of the picture:
determining a quantization step size that applies for the block based on one or more syntax elements in the bitstream;
decoding a variable length code (VLC) for a DC differential for a DC coefficient of the block, the DC differential representing a difference between the DC coefficient and a DC predictor for the DC coefficient of the block, wherein the decoding the VLC uses a VLC table that includes an escape code as well as different VLC values that are associated with different values for DC differentials, and wherein the VLC is not the escape code, such that the decoding the VLC yields a value for the DC differential for the DC coefficient of the block;
checking whether the quantization step size that applies for the block is one of a set of small quantization step sizes; and
reconstructing the DC differential, including using the value for the DC differential for the DC coefficient of the block as a base absolute value or as a final absolute value depending on whether the quantization step size that applies for the block is one of the set of small quantization step sizes.

US Pat. No. 10,659,792

IMAGE PROCESSING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...

1. A decoding method for decoding a bit stream, comprising:performing arithmetic decoding of block lines of a first slice of the bit stream, and performing arithmetic decoding of block lines of a second slice of the bit stream; and
performing arithmetic decoding, by circuitry of an image processing device, on a top block of a current block line in the second slice, by using a context updated in case of decoding a previous block of a previous block line, the previous block line being located above adjacent to the current block line, the previous block being different from a top block of the previous block line which is located above adjacent to the top block of the current block line, and the previous block being other than a last block in decoding order in the previous block line in the first slice.

US Pat. No. 10,659,791

HIERARCHY OF MOTION PREDICTION VIDEO BLOCKS

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising:obtaining an index value for a current video block;
decoding, from a bitstream, one or more syntax elements that indicate a number of motion vector prediction candidates for the current video block and that a subset of motion vector prediction candidates was used to encode the current video block;
generating a set of candidate predictive blocks based on spatial and temporal neighbors to the current video block, wherein the number of candidate predictive blocks in the set of candidate predictive blocks is equal to the number of motion vector prediction candidates for the current video block as indicated by the one or more syntax elements;
selecting a predictive video block from the set of generated candidate predictive blocks based on the index value; and
generating motion information for the current video block based on motion information of the predictive video block.

US Pat. No. 10,659,790

CODING OF HDR VIDEO SIGNALS IN THE ICTCP COLOR FORMAT

Dolby Laboratories Licens...

1. A method to maintain constant luminance or iso-luminance during chroma processing of a video sequence, the method comprising:receiving input pixel values in a first color format;
converting the input pixel values from the first color format to first pixel values in a second color format, wherein in the second color format a pixel value comprises a luma value, a first chroma value, and a second chroma value;
determining whether first chroma values or second chroma values of the first pixel values in the second color format are within a specified color gamut, and for one or more chroma pixel values of the first pixel values in the second color format that are not within the specified color gamut:
generating adjusted chroma pixel values within the specified color gamut in the second color format, wherein generating the adjusted chroma pixel values comprises:
for a chroma pixel value outside of the specified color gamut:
computing a saturation value and a hue angle value based on the first chroma value and the second chroma value of the chroma pixel;
generating a scaled saturation value based on the saturation value, wherein the scaled saturation value is within the specified color gamut; and
adjusting the first chroma value and the second chroma value based on the scaled saturation value and the hue angle value to generate an adjusted first chroma value and an adjusted second chroma value, wherein generating the adjusted first and second chroma pixel values (Ct1, Cp1) comprises computing
Ct1=sats*cos(θ), and
Cp1=sats*sin(θ),
where θ denotes the hue angle value for the chroma pixel and sats denotes the scaled saturation value of the chroma pixel; and
generating output pixel values in the second color format comprising the luma value of the first pixel values and the adjusted chroma pixel values.
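The saturation/hue computation recited in this claim can be sketched as follows. The scaling rule itself is not fixed by the claim (only that the result lies in the gamut), so `scale_saturation` is a caller-supplied, hypothetical parameter here.

```python
import math

def adjust_chroma(ct, cp, scale_saturation):
    """Clamp an out-of-gamut chroma pair by scaling its saturation while
    preserving its hue angle, per the claimed computation
    Ct1 = sats*cos(theta), Cp1 = sats*sin(theta).

    scale_saturation: hypothetical function mapping a saturation value
    to one inside the target gamut (the claim does not fix this rule).
    """
    sat = math.hypot(ct, cp)        # saturation: radial distance in (Ct, Cp)
    theta = math.atan2(cp, ct)      # hue angle, preserved by the adjustment
    sats = scale_saturation(sat)    # scaled saturation inside the gamut
    return sats * math.cos(theta), sats * math.sin(theta)

# Example: clamp saturation to at most 0.5; the hue angle is unchanged.
ct1, cp1 = adjust_chroma(0.6, 0.8, lambda s: min(s, 0.5))
print(round(math.hypot(ct1, cp1), 6))  # 0.5
```

Because only the radial component is rescaled, luminance-carrying luma values pass through untouched, which is how the method maintains iso-luminance.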

US Pat. No. 10,659,789

ENCODING COST AWARE EDGE SELECTION FOR IMPROVED PROGRESSIVE MESH COMPRESSION

GOOGLE LLC, Mountain Vie...

1. A computer-implemented method of progressive mesh compression, comprising:determining, by an encoder, a priority value for each edge of a plurality of edges, the priority value of an edge of the plurality of edges determined based on an error metric value and an estimated encoding cost based on a residual value associated with the edge;
determining, by the encoder, a set of edges for collapse, the set of edges determined from the plurality of edges based on the priority values; and
collapsing, by the encoder, the set of edges and generating vertex split information.

US Pat. No. 10,659,788

BLOCK-BASED OPTICAL FLOW ESTIMATION FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

GOOGLE LLC, Mountain Vie...

1. A method, comprising:determining a first frame portion of a first frame to be predicted, the first frame being in a video sequence;
determining a first reference frame from the video sequence for forward inter prediction of the first frame;
determining a second reference frame from the video sequence for backward inter prediction of the first frame, wherein the second reference frame is different from the first reference frame;
generating an optical flow reference frame portion for inter prediction of the first frame portion by performing an optical flow estimation using the first reference frame and the second reference frame, wherein the optical flow estimation produces a respective motion field for pixels of the first frame portion, and wherein generating the optical flow reference frame portion comprises:
performing the optical flow estimation by minimizing a Lagrangian function, J, for respective pixels of the first frame portion, wherein the Lagrangian function, J, for a current pixel is
J=Jdata+λJspatial,
wherein Jdata is a data penalty based on a brightness constancy assumption, Jspatial is a spatial penalty based on a smoothness of the motion field, and λ is a Lagrangian parameter,
wherein Jdata=(Exu+Eyv+Et)², u is a horizontal component of a motion field for the current pixel, v is a vertical component of the motion field for the current pixel, Ex, Ey, and Et are derivatives of pixel values of reference frame portions with respect to a horizontal axis x, a vertical axis y, and time t, wherein time is represented by frame indexes, and
wherein Jspatial=(Δu)²+(Δv)², Δu is a Laplacian of the horizontal component of the motion field, and Δv is a Laplacian of the vertical component of the motion field; and
performing a prediction process for the first frame portion using the optical flow reference frame.
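The per-pixel cost recited in this claim, J = Jdata + λ·Jspatial, evaluates directly once the derivatives and Laplacians are available. A minimal sketch (function and argument names are illustrative):

```python
def lagrangian(ex, ey, et, u, v, lap_u, lap_v, lam):
    """Per-pixel Lagrangian J = Jdata + lambda * Jspatial from the claim.

    ex, ey, et: derivatives of pixel values w.r.t. x, y, and time t.
    u, v: horizontal/vertical motion-field components at the pixel.
    lap_u, lap_v: Laplacians of the motion-field components.
    lam: the Lagrangian parameter (lambda).
    """
    j_data = (ex * u + ey * v + et) ** 2   # brightness-constancy penalty
    j_spatial = lap_u ** 2 + lap_v ** 2    # motion-field smoothness penalty
    return j_data + lam * j_spatial

# A motion vector satisfying brightness constancy zeroes the data term,
# leaving only the weighted smoothness penalty.
print(lagrangian(1.0, 2.0, -3.0, 1.0, 1.0, 0.5, -0.5, 2.0))  # 1.0
```

Minimizing this functional over all pixels yields the motion field used to synthesize the optical flow reference frame portion.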

US Pat. No. 10,659,787

ENHANCED COMPRESSION OF VIDEO DATA

AMAZON TECHNOLOGIES, INC....

1. A method of compressing video data, the method comprising:receiving, by a video encoder device, video data of a first frame, the video data of the first frame depicting a physical environment at a first time;
generating, from the video data of the first frame by the video encoder device, a reflectance map representing a reflectance of at least one surface of the physical environment;
generating, from the video data of the first frame by the video encoder device, an illumination map representing shading of the at least one surface of the physical environment;
sending, by the video encoder device to a video decoder device, the reflectance map and the illumination map;
receiving, by the video encoder device, video data of a second frame, the video data of the second frame depicting the physical environment at a second time;
determining, by the video encoder device, a spectral power distribution curve of the video data of the second frame, wherein the spectral power distribution curve represents a difference in illumination of the physical environment between the video data of the first frame and the video data of the second frame;
generating, by the video encoder device, a third frame comprising a representation of the difference in illumination; and
sending, by the video encoder device to the video decoder device, the third frame, wherein the video decoder device is effective to reproduce the video data of the second frame using the third frame, the illumination map, and the reflectance map.

US Pat. No. 10,659,786

METHODS AND SYSTEMS FOR DECODING A VIDEO BITSTREAM

Velos Media, LLC, Dallas...

1. A method for decoding a video bitstream comprising:decoding, in a slice header associated with a picture, a first syntax element with an integer value indicating a number of a plurality of entropy slices defining a first slice, wherein each of the entropy slices contains a plurality of largest coding units (LCUs);
decoding a second syntax element in the slice header indicating an offset with an index i, wherein the index i has a range from 0 to the integer value of the first syntax element minus 1 and the offset indicates, in a unit of bytes, a distance between (i) one of the plurality of the entropy slices in the first slice in the video bitstream and (ii) an entropy slice preceding the one of the plurality of the entropy slices in the video bitstream;
decoding a third syntax element in the slice header indicating a slice type of the first slice;
in circumstances where the third syntax element indicates the slice type of the first slice is a B slice, decoding a flag in the slice header indicating an initialization method of a Context-Adaptive Binary Arithmetic Coding (CABAC) context, wherein
in circumstances where the decoded flag indicates a first value, initializing the CABAC context using a first initialization method at the first LCU of each of the plurality of entropy slices in the B slice, and
in circumstances where the decoded flag indicates a second value, initializing the CABAC context using a second initialization method at the first LCU of each of the plurality of entropy slices in the B slice;
in circumstances where the third syntax element indicates the slice type of the first slice is a P slice, initializing the CABAC context using one of the first initialization method and the second initialization method at the first LCU of each of the plurality of entropy slices in the P slice; and
in circumstances where the third syntax element indicates the slice type of the first slice is an I slice, initializing the CABAC context using a third initialization method at the first LCU of each of the plurality of entropy slices in the I slice, wherein the third initialization method is different from the first initialization method and the second initialization method.
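The second syntax element in this claim gives each entropy slice's byte distance from the preceding one. Locating the slices in the bitstream is then a running sum; a minimal sketch under that reading (names hypothetical):

```python
def entry_points(relative_offsets):
    """Convert per-entropy-slice byte offsets, each measured from the
    preceding entropy slice as in the claim, into absolute byte
    positions within the bitstream."""
    positions, pos = [], 0
    for off in relative_offsets:
        pos += off            # accumulate the distance to this slice
        positions.append(pos)
    return positions

# Three entropy slices 100, 40, and 60 bytes apart.
print(entry_points([100, 40, 60]))  # [100, 140, 200]
```

Absolute entry points are what allow the CABAC engine to be (re)initialized at the first LCU of each entropy slice for parallel decoding.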

US Pat. No. 10,659,785

APPARATUS, SYSTEM AND METHOD OF VIDEO COMPRESSION USING SMART CODING TREE UNIT SCANNING AND CORRESPONDING COMPUTER PROGRAM AND MEDIUM

INTERDIGITAL VC HOLDINGS,...

1. An apparatus for encoding an image frame, wherein the image frame is partitioned into non-overlapping units, the partitioned units being included in a first region, a second region and a third region of the image frame, each of the first region, the second region and the third region being associated with an indicator indicating a raster scanning order;
comprising:
at least one processor configured for
encoding the third region of the image frame, based on the indicator of the third region, starting at a first unit from the left and the top in the third region using a raster scan technique;
encoding the second region of the image frame, based on the indicator of the second region, starting at a first unit from the right in the second region; and
encoding the first region, based on the indicator of the first region, starting at a first unit from the top or the bottom in the first region.

US Pat. No. 10,659,784

REGION-BASED IMAGE COMPRESSION

Advanced Micro Devices, I...

9. An apparatus for compressing an image, comprising:an encoder configured to:
decompose the image into one or more regions;
select a selected region from the one or more regions to evaluate;
determine whether the selected region does not meet a predetermined compression acceptability criteria; and
in response to the selected region not meeting the predetermined compression acceptability criteria:
split the selected region into subregions;
form transformed and quantized subregions by transforming and quantizing the subregions;
determine whether or not the selected region meets the predetermined compression acceptability criteria based on the transformed and quantized subregions;
in response to the selected region meeting the predetermined compression acceptability criteria based on a combination of the split, transform, and quantization, encode the transformed and quantized subregions to form an encoded region; and
in response to the selected region not meeting the predetermined compression acceptability criteria:
determine adjusted split, transformation, and quantization settings that satisfy the predetermined compression acceptability criteria when applied to the subregions;
form adjusted split, transformed, and quantized subregions by transforming and quantizing the subregions using the adjusted split, transformation, and quantization settings; and
encode the subregions to form the encoded region based on the adjusted split, transformed, and quantized subregions.

US Pat. No. 10,659,783

ROBUST ENCODING/DECODING OF ESCAPE-CODED PIXELS IN PALETTE MODE

Microsoft Technology Lice...

1. A computer system comprising a processor and memory that implement a media encoder system, the media encoder system comprising:a buffer configured to store a picture; and
a media encoder configured to perform operations comprising receiving and encoding the picture, wherein the encoding the picture includes encoding a unit of the picture in a palette mode, the unit of the picture being encoded in a lossy manner, and wherein the encoding the unit of the picture in the palette mode includes, for an escape mode of the palette mode:
quantizing a sample value for a color component of the unit of the picture; and
encoding the quantized sample value for the color component of the unit of the picture using a kth-order Exponential-Golomb binarization of a syntax element that represents the quantized sample value for the color component of the unit of the picture, wherein the kth-order Exponential-Golomb binarization is independent of any unit-level quantization parameter (“QP”) for the unit of the picture, and wherein the encoding the quantized sample values includes:
mapping the quantized sample value for the color component of the unit of the picture to a string of one or more binary values, wherein the string of one or more binary values is a part of the kth-order Exponential-Golomb binarization; and
entropy coding the string of one or more binary values.
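The kth-order Exponential-Golomb binarization recited in this claim can be sketched in its common unary-prefix form (a run of 1s, a 0 terminator, then a growing suffix), the variant used for HEVC-style escape values. This is an illustrative assumption; the claim itself does not spell out the bit convention.

```python
def exp_golomb_k(value, k):
    """kth-order Exp-Golomb binarization, unary-prefix form:
    emit a 1 and grow the order while the value exceeds the current
    suffix range, then a 0 terminator and k suffix bits."""
    bits = []
    while value >= (1 << k):
        bits.append("1")
        value -= 1 << k
        k += 1
    bits.append("0")
    if k > 0:
        bits.append(format(value, f"0{k}b"))  # fixed-length suffix
    return "".join(bits)

# Order-3 examples; note the order is fixed, independent of any
# unit-level QP, as the claim requires.
print(exp_golomb_k(0, 3))  # 0000
print(exp_golomb_k(8, 3))  # 100000
```

Fixing k independently of the QP is what makes the binarization robust: the decoder can parse the string without first resolving the unit-level quantization parameter.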

US Pat. No. 10,659,782

MULTI-LEVEL SIGNIFICANCE MAPS FOR ENCODING AND DECODING

Velos Media, LLC, Dallas...

1. A method of encoding significant-coefficient flags for a transform unit, the method comprising:encoding significant-coefficient-group flags, wherein the transform unit is partitioned into non-overlapping blocks, each block containing a respective group of significant-coefficient flags, and wherein each significant-coefficient-group flag corresponds to a respective block and its respective group of significant-coefficient flags; and
encoding each significant-coefficient flag by
if the significant-coefficient flag is at position (0,0) in its group, a corresponding significant-coefficient-group flag is non-zero, the group is not the DC block, and all the previous significant-coefficient flags in the group are zero, then setting the significant-coefficient flag at position (0,0) in the group to be 1 without encoding the significant-coefficient flag at position (0,0) into a bitstream, and
otherwise
encoding the significant-coefficient flag into the bitstream if the significant-coefficient flag is in a group that has a corresponding significant-coefficient-group flag that is non-zero, and
if the significant-coefficient flag is in a group that has a corresponding significant-coefficient-group flag that is zero, setting the significant-coefficient flag to zero without encoding the significant-coefficient flag into the bitstream.
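The two-level signalling in this claim (group flags plus per-coefficient flags, with one inferred flag per qualifying group) can be sketched as follows. The sketch assumes each group lists its flags in coding order with the (0,0) flag last, a reverse-scan convention the claim itself does not fix; all names are illustrative.

```python
def encode_sig_flags(groups, group_flags, dc_index):
    """Return only the significant-coefficient flags that must be coded.

    groups: per-group lists of flags in coding order, (0,0) flag last.
    group_flags: the significant-coefficient-group flags.
    dc_index: which group is the DC block (its (0,0) flag is never inferred).
    """
    bitstream = []
    for g, (flags, gflag) in enumerate(zip(groups, group_flags)):
        if gflag == 0:
            continue  # every flag in this group is inferred to be zero
        for pos, flag in enumerate(flags):
            is_pos00 = pos == len(flags) - 1
            all_prev_zero = all(f == 0 for f in flags[:pos])
            if is_pos00 and g != dc_index and all_prev_zero:
                continue  # group flag is non-zero, so this flag must be 1
            bitstream.append(flag)
    return bitstream

# Group 1 (non-DC) has only its (0,0) flag set: that flag is inferred,
# not coded, saving one bit.
print(encode_sig_flags([[1, 0], [0, 0, 0, 1]], [1, 1], dc_index=0))
# [1, 0, 0, 0, 0]
```

The inference is sound because a non-zero group flag guarantees at least one non-zero coefficient; if every earlier flag was zero, the last one must be 1.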

US Pat. No. 10,659,781

CONCATENATED CODING UNITS IN FLEXIBLE TREE STRUCTURE

TENCENT AMERICA LLC, Pal...

1. A method of partitioning a parent coding unit (CU) in a tree structure for encoding a video sequence, the method comprising:splitting the parent CU into more than two CUs including a first CU and a second CU; and
generating a concatenated CU by concatenating the second CU to the first CU,
wherein the generating of the concatenated CU comprises creating the concatenated CU, wherein the concatenated CU includes only the second CU and the first CU.

US Pat. No. 10,659,780

DE-BLOCKING METHOD FOR RECONSTRUCTED PROJECTION-BASED FRAME THAT EMPLOYS PROJECTION LAYOUT OF 360-DEGREE VIRTUAL REALITY PROJECTION

MEDIATEK INC., Hsin-Chu ...

1. A de-blocking method for a reconstructed projection-based frame that comprises a plurality of projection faces packed in a projection layout of a 360-degree Virtual Reality (360 VR) projection from which a 360-degree image content of a sphere is mapped onto the projection faces, comprising:obtaining, by a de-blocking filter, a first spherical neighboring block for a first block with a block edge to be de-blocking filtered; and
selectively applying de-blocking to the block edge of the first block for at least updating a portion of pixels of the first block;
wherein the projection faces comprise a first projection face and a second projection face, one face boundary of the first projection face connects with one face boundary of the second projection face, there is image content discontinuity between said one face boundary of the first projection face and said one face boundary of the second projection face, the first block is a part of the first projection face, the block edge of the first block is a part of said one face boundary of the first projection face, and a region on the sphere to which the first spherical neighboring block corresponds is adjacent to a region on the sphere from which the first projection face is obtained.

US Pat. No. 10,659,779

LAYERED DEBLOCKING FILTERING IN VIDEO PROCESSING SYSTEMS AND METHODS

REALNETWORKS, INC., Seat...

1. A computer-implemented method for processing digital video data comprising:identifying a current block and a neighboring block, the current block and the neighboring block having (a) a same, particular size and (b) a linear boundary therebetween, the current block and the neighboring block both being adjacent the linear boundary;
determining that the particular size is smaller than a standard block size; and
applying at least to the current block a deblocking filter with a filtering block size set to the standard block size.

US Pat. No. 10,659,778

METHOD FOR ENABLING RANDOM ACCESS AND PLAYBACK OF VIDEO BITSTREAM IN MEDIA TRANSMISSION SYSTEM

Samsung Electronics Co., ...

1. A method of playing back a bitstream delivering a group of pictures (GOP), the method comprising:upon occurrence of random access to a first picture, determining a network abstraction layer (NAL) unit type of at least one second picture following the first picture by parsing a NAL unit header of the at least one second picture;
removing the at least one second picture from the bitstream based on the determined NAL unit type; and
decoding the bitstream from which the at least one second picture has been removed, and displaying the decoded bitstream,
wherein, during displaying the decoded bitstream, the first picture is additionally displayed repeatedly for a reconstruction time corresponding to a number of the at least one second picture and a time required to reconstruct the at least one second picture.

US Pat. No. 10,659,777

CROSS-CHANNEL RESIDUAL PREDICTION

Intel Corporation, Santa...

1. A system comprising:a memory; and
a processor coupled to the memory, the processor to:
determine a reconstructed residual value for a first channel of video data at a particular pixel position within a picture;
determine, using the reconstructed residual value, a predicted residual value at the particular pixel position for a second channel of the video data, wherein the processor to determine the predicted residual value comprises the processor to:
determine the reconstructed residual value is within a particular subset of a plurality of subsets into which a codec dependent range of possible reconstructed residual values for the first channel has been divided, wherein each of the subsets is assigned a first and a second cross channel residual prediction model parameter; and
predict the predicted residual value using particular first and second cross channel residual prediction model parameters corresponding to the particular subset by multiplying the reconstructed residual value by the particular first cross channel residual prediction model parameter and adding the particular second cross channel residual prediction model parameter.
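The per-subset linear model in this claim (multiply by the first parameter, add the second) reduces to a range lookup plus an affine map. A minimal sketch, with an illustrative two-subset partition of a hypothetical residual range:

```python
def predict_residual(recon_residual, subsets):
    """Predict the second channel's residual from the first channel's
    reconstructed residual: find the subset containing the value, then
    apply pred = a * r + b with that subset's model parameters.

    subsets: list of ((lo, hi), (a, b)) entries partitioning the
    codec-dependent range of possible reconstructed residuals.
    """
    for (lo, hi), (a, b) in subsets:
        if lo <= recon_residual < hi:
            return a * recon_residual + b
    raise ValueError("residual outside the codec-dependent range")

# Hypothetical split of the range [-256, 256) into two subsets,
# each with its own (a, b) cross-channel model parameters.
subsets = [((-256, 0), (0.5, -1.0)), ((0, 256), (0.25, 2.0))]
print(predict_residual(8, subsets))   # 4.0
print(predict_residual(-4, subsets))  # -3.0
```

Splitting the range lets the codec use different correlation models for, e.g., small and large residual magnitudes instead of one global linear fit.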

US Pat. No. 10,659,776

QUALITY SCALABLE CODING WITH MAPPING DIFFERENT RANGES OF BIT DEPTHS

GE VIDEO COMPRESSION, LLC...

1. A decoder for decoding information from a datastream to reconstruct a picture, which is partitioned into slices, the decoder comprising:an entropy decoder configured to entropy decode, using a processor, the slices based on wavefront parallel processing (WPP) in which a current slice of the slices is entropy decoded according to one of at least two modes, wherein the entropy decoder is configured to:
in accordance with a first mode of the at least two modes, decode data related to the current slice using context adaptive entropy decoding to obtain a residual signal, wherein the context adaptive entropy decoding includes deriving contexts across slice boundaries and initializing a symbol probability associated with the current slice depending on a saved state of the symbol probability of a previously decoded slice, wherein, in initializing the symbol probability associated with the current slice in accordance with the first mode, the entropy decoder is configured to:
responsive to a determination that a first coding block associated with the current slice is the first coding block from the left end of a first row of the picture in accordance with a raster scan order, initialize the symbol probability associated with the current slice based on the saved symbol probability as acquired in context adaptive entropy decoding the previously decoded slice up to the second coding block from the left end of a second row of the picture, wherein the second row is associated with the previously decoded slice, and
otherwise, initialize the symbol probability associated with the current slice based on a symbol probability as acquired at the end of context adaptive entropy decoding the previously decoded slice, and
in accordance with a second mode of the at least two modes, decode data related to the current slice using context adaptive entropy decoding to obtain a residual signal, wherein the context adaptive entropy decoding includes restricting the derivation of the contexts so as to not cross the slice boundaries and an initialization of the symbol probabilities independent of a previously decoded slice, and save the symbol probability as acquired in context adaptive entropy decoding the previously decoded slice up to the second coding block in the second row associated with the previously decoded slice in accordance with the raster scan order;
a predictor configured to generate, using the processor, a prediction signal based on prediction parameters related to the current slice from the datastream; and
a reconstruction module configured to reconstruct, using the processor, a portion of the picture related to the current slice based on the residual signal and the prediction signal.

US Pat. No. 10,659,775

VIDEO DECODING DEVICE AND VIDEO DECODING METHOD

PANASONIC INTELLECTUAL PR...

1. A decoding method for decoding an encoded bit stream which is generated by encoding video in units of pictures based on a predetermined encoding standard, the decoding method comprising:acquiring video format information indicating whether the video is encoded with a video format among an interlace format of the predetermined encoding standard or a progressive format of the predetermined encoding standard, from a header of a sequence which is a unit of the video,
(i) setting each of all frames or all fields which are included in the video, as a picture included in the video, regardless of whether the video format is the interlace format of the predetermined encoding standard or the progressive format of the predetermined encoding standard, and (ii) setting a Picture Order Count (POC) indicating display order to each of the pictures included in the video one by one, the POCs being different from each other,
decoding a picture to be decoded, from among the pictures included in the video, with reference to a picture previously decoded before decoding the picture to be decoded, the picture to be decoded being a field or a frame, and
outputting the picture to be decoded based on the video format, after the decoding the picture to be decoded, wherein
in the decoding, picture data of the picture to be decoded is always decoded with a syntax structure, which is a syntax structure for decoding the progressive format of the predetermined encoding standard regardless of whether the video format is the interlace format of the predetermined encoding standard or the progressive format of the predetermined encoding standard.

US Pat. No. 10,659,774

METHOD AND SYSTEM FOR REMOTE DIAGNOSTICS

InterDigital CE Patent Ho...

1. A method for remote diagnostics in a device connected to head-end unit over a network, the method comprising:receiving a data request from the head-end unit to the device in response to an alert message transmitted from the device to the head-end unit;
determining if the received data request requires the device to execute an internal diagnostics program;
transmitting a first response from the device to the head-end unit in response to the reception of the data request if the data request does not require the device to execute the internal diagnostics program;
placing the device in a service mode in response to the reception of the data request if the data request requires the device to execute the internal diagnostics program, the service mode used for executing the internal diagnostics program; and
transmitting a second response containing the requested data acquired from execution of the internal diagnostics program in the service mode from the device to the head-end unit;
passing video and/or audio data to a television during execution of the internal diagnostics program; and
exiting the service mode in the device after transmitting the second response, the exiting the service mode including ending the internal diagnostics program.

US Pat. No. 10,659,773

PANORAMIC CAMERA SYSTEMS

Facebook, Inc., Menlo Pa...

1. A method comprising:receiving a set of captured images, each captured image associated with a depth map and a capture viewpoint, the set of captured images including a first captured image associated with a first depth map and first capture viewpoint having position, orientation, and angle of view information and a second captured image associated with a second depth map and second capture viewpoint having position, orientation, and angle of view information, the second capture viewpoint different from the first capture viewpoint;
selecting a render viewpoint comprising position, orientation, and angle of view information, the render viewpoint different from the first capture viewpoint and the second capture viewpoint;
rendering a first depth surface based on the first captured image, the first depth map, and the first capture viewpoint, where the first depth surface is a 3D surface rendered from the render viewpoint;
rendering a second depth surface based on the second captured image, the second depth map, and the second capture viewpoint where the second depth surface is a 3D surface rendered from the render viewpoint; and
generating a rendered image for the render viewpoint by:
assigning an alpha value to each pixel of the first rendered depth surface;
determining, for each pixel of the rendered image, a weighted combination of the first and second rendered depth surfaces based on the alpha value associated with the corresponding pixels of the first rendered depth surface.
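The per-pixel weighted combination in the final step reduces to an alpha blend of the two rendered depth surfaces. A minimal sketch, assuming flat pixel lists and a simple linear blend (both assumptions; the patent does not fix the blend formula beyond the alpha weighting):

```python
def blend_pixels(first_surface, second_surface, alphas):
    """Per-pixel weighted combination of two rendered depth surfaces
    seen from the same render viewpoint. alphas holds the weight
    assigned to each pixel of the first surface, in [0.0, 1.0]."""
    return [a * p1 + (1.0 - a) * p2
            for p1, p2, a in zip(first_surface, second_surface, alphas)]
```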

US Pat. No. 10,659,772

AUGMENTED REALITY SYSTEM FOR LAYERING DEPTH ON HEAD-MOUNTED DISPLAYS USING EXTERNAL STEREO SCREENS

Disney Enterprises, Inc.,...

1. An augmented reality (AR) system for providing augmenting images at additional depth layers to AR participants, comprising:an AR space;
an AR head mounted display (HMD) wearable by an AR participant in the AR space; and
in the AR space, a display screen displaying stereo content using a stereo display technology,
wherein an optical system in the AR HMD displays to the AR participant wearing the AR HMD objects at an HMD focal distance via transparent displays, and
wherein the AR HMD includes an external screen viewing assembly adapted based on the stereo display technology to receive light associated with the displayed stereo content on the display screen and provide left eye content to a left eye of the AR participant wearing the AR HMD and right eye content to a right eye of the AR participant wearing the AR HMD.

US Pat. No. 10,659,771

NON-PLANAR COMPUTATIONAL DISPLAYS

Google LLC, Mountain Vie...

1. In a near-eye display system, a method comprising:receiving display geometry data for one or more non-planar display panels of the near-eye display system;
rendering, based on a stereoscopic focus volume associated with the display geometry data of the one or more non-planar display panels, an array of elemental images at a position within a near-eye lightfield frame, wherein the stereoscopic focus volume is based on an overlap between a plurality of focus volumes presented by the one or more non-planar display panels, wherein the one or more non-planar display panels presents objects within respective focus volumes of the plurality of focus volumes to be in focus, and wherein the one or more non-planar display panels presents objects within the stereoscopic focus volume to be in focus; and
communicating the near-eye lightfield frame for display at the one or more non-planar display panels of the near-eye display system.

US Pat. No. 10,659,770

STEREO IMAGE DISPLAY APPARATUS

CHERAY CO. LTD., Hsinchu...

1. A stereo image display apparatus, comprising:a display device including a display surface and an image algorithm unit;
a lens array layer disposed adjacent to the display surface of the display device, and including a plurality of lenses; and
a directional structure disposed between the display device and the lens array layer or disposed on the lens array layer,
wherein the directional structure enables light generated by the display device to emit directionally, and the lens array layer is configured to reconstruct an un-reconstructed image displayed by the display surface as an integral image to produce a stereo image,
wherein the directional structure is a baffle layer, the baffle layer is plate-shaped, the baffle layer has a plurality of light transmitting portions, the light transmitting portions correspond to the lenses, respectively, each of a plurality of shielding walls is disposed around a peripheral part of each of the light transmitting portions, respectively, and each of the shielding walls is vertical or oblique to the display surface of the display device, and
wherein each of the light transmitting portions is a solid structure by filling a light-transmitting material;
wherein the directional structure has a first side face facing the lens array layer and a second side face that is opposite to the first side face, and the first side face and the second side face are flat, respectively;
wherein two ends of each of the shielding walls respectively extend to two side edges facing the display device and the lens array layer, so that the light generated by the display device is capable of emitting directionally to the solid structure.

US Pat. No. 10,659,769

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:an obtaining unit configured to obtain an image generated based on a set virtual viewpoint and images captured by a plurality of image capturing apparatuses from different directions;
a receiving unit configured to receive an instruction for switching a virtual viewpoint, the instruction being an instruction not specified by a user in terms of a position and a direction of a virtual viewpoint; and
a display control unit configured to cause
a display unit to display, based on the instruction received by the receiving unit while a first image corresponding to a first virtual viewpoint is displayed on the display unit, a second image corresponding to a second virtual viewpoint a position of which is separated from a position of the first virtual viewpoint by a predetermined distance in a direction opposite to a direction of the first virtual viewpoint, a range displayed in the second image corresponding to the second virtual viewpoint including a range displayed in the first image corresponding to the first virtual viewpoint and being larger than the range displayed in the first image corresponding to the first virtual viewpoint.

US Pat. No. 10,659,768

SYSTEM AND METHOD FOR VIRTUALLY-AUGMENTED VISUAL SIMULTANEOUS LOCALIZATION AND MAPPING

Mitsubishi Electric Resea...

1. A method for reconstructing a three-dimensional (3D) model of a scene from a set of acquired images of the scene acquired by one or multiple sensors in different poses defining viewpoints of the acquired images, the set of acquired images includes a first acquired image acquired from a first viewpoint and a second acquired image acquired from a second viewpoint, wherein the 3D model includes a point cloud having points identified by 3D coordinates, wherein steps of the method are performed by a processor connected to a memory storing the set of images and coupled with stored instructions implementing the method, wherein the instructions, when executed by the processor carry out at least some steps of the method comprising:transforming at least some acquired images from the set of acquired images to produce a set of virtual images of the scene viewed from virtual viewpoints different from the viewpoints of the acquired images, wherein the transforming transforms the first acquired image into a first virtual image corresponding to a first virtual viewpoint different from the viewpoint of each of the acquired images, and wherein the transforming preserves correspondence between at least a subset of pixels of the first acquired image and a subset of pixels of the first virtual image that represent the same points of the scene;
comparing at least some features from the acquired images and the virtual images to determine the viewpoint of at least one image in the set of images, wherein the comparing includes comparing features of pixels of the second acquired image with features of the subset of pixels of the first virtual image to determine a correspondence among a subset of pixels of the second acquired image and the subset of pixels of the first virtual image;
registering the second viewpoint of the second acquired image with respect to the first viewpoint of the first acquired image by tracking the correspondence of the subset of pixels of the second acquired image to the subset of pixels of the first acquired image through common correspondence to the subset of pixels of the first virtual image; and
updating 3D coordinates of at least one point in the model of the scene to match coordinates of intersections of ray back-projections from pixels of at least two images corresponding to the point according to the viewpoints of the two images.

US Pat. No. 10,659,767

METHOD FOR PERFORMING OUT-FOCUS USING DEPTH INFORMATION AND CAMERA USING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. A method for performing an out-of-focus effect by a camera having a first lens and a second lens, the method comprising:photographing a first image with the first lens and photographing a second image of a same scene with the second lens, a first image resolution of the photographed first image being higher than a second image resolution of the photographed second image;
extracting a depth map from the first image and the second image using a stereo matching method;
up-sampling the extracted depth map to the first image resolution of the first image; and
performing an out-of-focus effect comprising blurring pixel regions of the first image having depths in the depth map that satisfy a predetermined condition using the up-sampled depth map.
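The selective-blur step of this claim can be illustrated in one dimension: pixels whose up-sampled depth satisfies the predetermined condition are blurred, the rest are kept sharp. The 3-tap box blur and the distance-from-focus condition are stand-ins chosen for the sketch, not the patent's actual kernel or condition.

```python
def out_of_focus(row, depths, focus_depth, tolerance=1.0):
    """1-D sketch of the out-of-focus effect: blur pixels whose depth
    differs from the focus depth by more than a tolerance."""
    def box3(i):
        # 3-tap box blur, clamped at the row boundaries.
        lo, hi = max(i - 1, 0), min(i + 1, len(row) - 1)
        window = row[lo:hi + 1]
        return sum(window) / len(window)
    return [box3(i) if abs(d - focus_depth) > tolerance else p
            for i, (p, d) in enumerate(zip(row, depths))]
```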

US Pat. No. 10,659,766

CONFIDENCE GENERATION APPARATUS, CONFIDENCE GENERATION METHOD, AND IMAGING APPARATUS

Canon Kabushiki Kaisha, ...

1. A confidence generation apparatus for generating confidence in a depth distribution, the confidence generation apparatus comprising:a memory that stores a program; and
a processor that executes the program to operate as units comprising:
(1) an acquisition unit configured to acquire a depth distribution which includes depth information representing a depth to an object in each of a plurality of pixels; and
(2) a generation unit configured to generate a global confidence which represents confidence in a global region of the depth distribution,
wherein the generation unit includes:
(a) a first generation processing unit configured to generate local confidences, each of which represents the confidence of the depth information in a respective one of the plurality of pixels based on statistics of a plurality of pixels;
(b) a region division processing unit configured to divide the depth distribution into a plurality of regions based on the depth information;
(c) a region validity generation processing unit configured to generate, for each region divided by the region division processing unit, a region validity indicating an efficacy of the region based on an area of the region; and
(d) a second generation processing unit configured to generate the global confidence in each of the plurality of regions based on (i) the local confidences and (ii) the region validity.

US Pat. No. 10,659,765

THREE-DIMENSIONAL (3D) IMAGE SYSTEM AND ELECTRONIC DEVICE

Shenzhen Goodix Technolog...

1. A three-dimensional (3D) image system, characterized by, comprising:a structural light module, configured to emit a structural light, wherein the structural light module comprises a first light-emitting unit, the first light-emitting unit receives a first pulse signal and emits a first light according to the first pulse signal, a duty cycle of the first pulse signal is less than a specific value, an emission power of the first light-emitting unit is greater than a specific power, and the first light has a first wavelength; and
a light-sensing pixel array, configured to receive a reflected light corresponding to the structural light;
wherein the light-sensing pixel array comprises a plurality of light-sensing pixel circuits, and a light-sensing pixel circuit of the plurality of light-sensing pixel circuits comprises:
a light-sensing component;
a first photoelectric readout circuit, coupled to the light-sensing component, configured to output a first output signal; and
a second photoelectric readout circuit, coupled to the light-sensing component, configured to output a second output signal;
wherein a pixel value corresponding to the light-sensing pixel circuit is a subtraction result of the first output signal and the second output signal.

US Pat. No. 10,659,764

DEPTH IMAGE PROVISION APPARATUS AND METHOD

Intel Corporation, Santa...

1. An apparatus, comprising:a projector to project a light pattern on an object, and to move the projected light pattern over the object, so as to swipe the object with the light pattern, wherein the light pattern comprises a substantially one-dimensional shape, wherein to swipe the object with the light pattern includes to provide a single swipe of the object, which includes to move the one-dimensional shape in one direction, in a continuous manner, over the object;
a camera coupled with the projector, wherein the camera includes a dynamic vision sensor (DVS) device, to capture changes in at least some image elements that correspond to an image of the object, during the swipe of the object with the light pattern; and
a processor coupled with the projector and the camera, to generate a depth image of the object, based on the single swipe of the object, and further based at least in part on the changes in the at least some image elements, wherein the depth image of the object comprises a plurality of depth images associated with corresponding image elements, wherein a depth image of an image element is proportionate to a focal length of the camera multiplied by a distance between the camera and the projector, and inversely proportionate to a distance between the object and a camera principal point in an image plane, and an angle between a direction of a projected line and a z-direction, at a time of identification of the image element.
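The proportionality recited in the processor limitation is the familiar projector-camera triangulation relation. The exact formula below is an assumption made for illustration (one consistent way to combine the focal length, baseline, image-plane offset, and line angle); the claim only states the proportionalities.

```python
import math

def depth_from_triangulation(focal_len, baseline, x_offset, angle):
    """Illustrative triangulation in the spirit of the claim: depth is
    proportional to focal length times the camera-projector baseline,
    and inversely related to the image-plane offset from the principal
    point plus a term for the projected line's angle to the z-axis.
    This exact formula is an assumption, not the patent's."""
    return (focal_len * baseline) / (x_offset + focal_len * math.tan(angle))
```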

US Pat. No. 10,659,763

STEREO CAMERA SYSTEM WITH WIDE AND NARROW INTEROCULAR DISTANCE CAMERAS

CAMERON PACE GROUP LLC, ...

1. A stereographic camera system, comprising:a camera platform comprising:
a first stereographic camera head including a first left camera and a first right camera separated by a first interocular distance settable over a first range, the first stereographic camera head providing a first left video stream and a first right video stream; and
a second stereographic camera head aligned with the first stereographic camera, the second stereographic camera including a second left camera and a second right camera separated by a second interocular distance settable over a second range, at least a portion of the second range smaller than the first range, the second stereographic camera head providing a second left video stream and a second right video stream; and
an output selector to automatically select either the first left and right video streams or the second left and right video streams to output as a 3D video output based on one or more of
a focus distance of lenses in one or both of the first camera head and the second camera head,
a convergence distance of one or both of the first camera head and the second camera head,
a measured distance from the camera platform to a scene object,
a convergence angle,
a disparity,
a tilt angle, and
a desired interocular distance.

US Pat. No. 10,659,762

STEREO CAMERA

HITACHI AUTOMOTIVE SYSTEM...

1. A stereo camera, comprising:a first image-capture unit which captures a first image;
a second image-capture unit which captures a second image;
a geometry correction unit which generates a plurality of third images having a different moving amount in a vertical direction from the second image;
a parallax calculation unit which generates a plurality of parallax images from combinations of the first image and respective third images;
a parallax image evaluation unit which calculates a reliable degree of each parallax image; and
a vertical offset correction unit which calculates a maximum reliable degree and the moving amount corresponding to the maximum reliable degree on the basis of a correspondence relationship between the moving amount and the reliable degree, and sets the moving amount corresponding to the maximum reliable degree as a correction amount of a vertical offset.
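The vertical offset correction unit reduces to an argmax over candidate vertical shifts: the shift whose parallax image scores the highest reliable degree becomes the correction amount. A minimal sketch, assuming the reliability per shift has already been computed by the evaluation unit (the dict-based interface is invented for the example):

```python
def vertical_offset_correction(reliability_by_shift):
    """Pick the vertical moving amount whose parallax image achieved
    the maximum reliable degree; that shift is the correction amount.
    reliability_by_shift: dict mapping candidate shift -> reliability."""
    best_shift = max(reliability_by_shift, key=reliability_by_shift.get)
    return best_shift, reliability_by_shift[best_shift]
```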

US Pat. No. 10,659,761

METHOD FOR TRANSMITTING 360 VIDEO, METHOD FOR RECEIVING 360 VIDEO, APPARATUS FOR TRANSMITTING 360 VIDEO, AND APPARATUS FOR RECEIVING 360 VIDEO

LG ELECTRONICS INC., Seo...

1. A method for processing at least one circular image captured by at least one fisheye lens in a transmitter, the method comprising:mapping the at least one circular image onto a picture in a fisheye video format;
encoding the picture as a coded video bitstream;
generating fisheye-specific metadata which assists in rendering the picture,
wherein the fisheye-specific metadata includes rectangular region top/left information, rectangular region width information, rectangular region height information, circular image centre information and circular image radius information,
wherein the rectangular region top/left information is used to represent a coordinate of a top-left corner of a rectangular region that contains the at least one circular image,
wherein the rectangular region width information is used to represent a coordinate of a width of the rectangular region that contains the at least one circular image,
wherein the rectangular region height information is used to represent a coordinate of a height of the rectangular region that contains the at least one circular image,
wherein the circular image centre information is used to represent a horizontal coordinate value and a vertical coordinate value of a centre of the at least one circular image,
wherein the circular image radius information is used to represent a radius of the at least one circular image that is defined as a length from the centre of the at least one circular image specified by the circular image centre information to the outermost boundary of the at least one circular image; and
transmitting the coded video bitstream and the generated fisheye-specific metadata.
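The fisheye-specific metadata fields enumerated in this claim map naturally onto a small record type. The field names below are illustrative, not the specification's actual syntax-element names, and the consistency check is an assumption added for the sketch.

```python
from dataclasses import dataclass

@dataclass
class FisheyeRegionMetadata:
    """Hypothetical container for the claim's fisheye-specific metadata:
    rectangular region top/left, width, height, circular image centre,
    and circular image radius."""
    rect_top: int
    rect_left: int
    rect_width: int
    rect_height: int
    centre_x: float
    centre_y: float
    radius: float

    def contains_circle(self) -> bool:
        # Sanity check: the circular image should fit inside the
        # rectangular region that is declared to contain it.
        return (self.rect_left <= self.centre_x - self.radius and
                self.centre_x + self.radius <= self.rect_left + self.rect_width and
                self.rect_top <= self.centre_y - self.radius and
                self.centre_y + self.radius <= self.rect_top + self.rect_height)
```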

US Pat. No. 10,659,760

ENHANCED HIGH-LEVEL SIGNALING FOR FISHEYE VIRTUAL REALITY VIDEO

QUALCOMM Incorporated, S...

1. A method of processing a file including fisheye video data, the method comprising:processing a file including fisheye video data, the file including a syntax structure including a plurality of syntax elements that specify attributes of the fisheye video data, wherein the plurality of syntax elements include one or more bits that indicate fisheye video type information, and wherein the fisheye video type information includes an indication that the fisheye video data is monoscopic fisheye video data or an indication that the fisheye video data is stereoscopic fisheye video data, an indication of whether images of the stereoscopic fisheye video data are in a left view or a right view, an indication of a number of circular images of the fisheye video data, an indication of a horizontal coordinate of a center of a particular circular image of the fisheye video data, and an indication of a vertical coordinate of a center of the particular circular image of the fisheye video data;
determining, based on the one or more bits of the syntax structure, the fisheye video type information for the fisheye video data; and
outputting, based on the determination, the fisheye video data for rendering.

US Pat. No. 10,659,759

SELECTIVE CULLING OF MULTI-DIMENSIONAL DATA SETS

STRATUS SYSTEMS, INC., S...

1. A computer-implemented method for selectively culling data sets at a data source for a viewer of a display system, comprising:(a) determining a control command based at least on a plurality of requested data sets, field of interest of the viewer, and available bandwidth, wherein the field of interest of the viewer is based on a foveal area of the viewer with respect to the display system;
(b) receiving, at a server, the control command, wherein the control command comprises encoding of instructions on which one or more subsets among the plurality of requested data sets are to be culled to be within constraints of the available bandwidth;
(c) selectively culling, at the server, the one or more subsets among the plurality of requested data sets according to the transmitted control command;
(d) presenting at least partially within the foveal area of the viewer, on the display system, the selectively culled one or more subsets rather than the plurality of requested data sets, for selective viewing by the viewer; and
(e) updating the control command based on user instructions received from the viewer, wherein the user instructions (i) are based on the selectively culled one or more subsets presented on the display system in (d) and (ii) comprise instructions for updating the field of interest of the viewer in the control command.
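Steps (a) through (c) amount to a greedy selection: keep requested data sets that fall in the viewer's field of interest, up to the available bandwidth. The data model below (region tags and sizes) is invented for illustration; the claim does not specify how data sets are described.

```python
def cull_data_sets(requested, field_of_interest, bandwidth):
    """Sketch of the claimed culling: keep requested data sets whose
    region lies in the viewer's field of interest, stopping once the
    selection would exceed the available bandwidth.
    requested: dict mapping data-set id -> (region, size)."""
    kept, used = [], 0
    for name, (region, size) in requested.items():
        if region in field_of_interest and used + size <= bandwidth:
            kept.append(name)
            used += size
    return kept
```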

US Pat. No. 10,659,758

IMAGE ENCODING METHOD AND IMAGE DECODING METHOD

LG Electronics Inc., Seo...

1. A video decoding method by a decoding apparatus, the method comprising:receiving information for deriving motion information of a current depth block of a current depth picture in a current view;
determining, in a reference texture picture, a reference texture block corresponding to the current depth block based on a specific disparity vector;
deriving a median value of motion vectors of the reference texture block and neighboring blocks of the determined reference texture block based on a size of the current depth block being equal to or larger than a size of the reference texture block;
deriving motion information candidates of the current depth block based on the derived median value and motion information of neighboring blocks of the current depth block;
deriving the motion information of the current depth block based on the received information for deriving the motion information and the motion information candidates; and
performing prediction to generate predicted pixels of the current depth block based on the motion information of the current depth block,
wherein a merge mode is applied to the current depth block,
wherein the neighboring blocks of the reference texture block include a left neighboring block of the reference texture block and a top neighboring block of the reference texture block,
wherein the motion information candidates include motion information comprising the median value as a motion vector,
wherein the motion information of the current depth block is determined based on the motion information comprising the median value as the motion vector,
wherein the specific disparity vector is determined for an area in the current depth picture,
wherein the area for which the specific disparity vector is determined is split based on a quad tree structure, and
wherein the reference texture block is a block of the reference texture picture in a reference view.
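The median value derived from the reference texture block and its left and top neighbours can be sketched as a component-wise median of three motion vectors. Taking the median per component is an assumption made for the example; the claim does not spell out how the median of vectors is formed.

```python
def median_motion_vector(mv_ref, mv_left, mv_top):
    """Component-wise median of the reference texture block's motion
    vector and those of its left and top neighbouring blocks.
    Vectors are (x, y) tuples."""
    def med3(a, b, c):
        # Middle element of three scalars.
        return sorted((a, b, c))[1]
    return (med3(mv_ref[0], mv_left[0], mv_top[0]),
            med3(mv_ref[1], mv_left[1], mv_top[1]))
```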

US Pat. No. 10,659,757

LIGHT FIELD CAPTURE

Apple Inc., Cupertino, C...

1. A non-transitory computer readable medium comprising computer readable code executable by one or more processors to:obtain image data for a scene from a first camera and a second camera, wherein the image data comprises a plurality of pixels;
determine a point of view of a party viewing the scene via a display device from one or more third cameras;
select a subset of the plurality of pixels based on the point of view;
map the subset of the plurality of pixels from a three-dimensional (3D) space associated with the first and second cameras to a two-dimensional (2D) space; and
generate a frame based on the mapped subset of the plurality of pixels.

US Pat. No. 10,659,756

IMAGE PROCESSING APPARATUS, CAMERA APPARATUS, AND IMAGE PROCESSING METHOD

PANASONIC I-PRO SENSING S...

1. An image processing apparatus which is connected to a camera head, the camera head being capable of imaging a left eye image and a right eye image, the left eye image and the right eye image having parallax on one screen based on light at a target site incident on an optical instrument, the image processing apparatus comprising:a processor that performs the signal processing of the left eye image and the right eye image which are imaged by the camera head; and
an output that outputs the left eye image and the right eye image on which the signal processing is performed to a monitor, the left eye image and the right eye image being misaligned when projecting a three-dimensional image on the monitor,
wherein the processor adjusts an extraction position of at least one of the left eye image and the right eye image in accordance with a user operation based on the left eye image and the right eye image which are displayed on the monitor, to correct a projection of the three-dimensional image on the monitor,
the processor saves an adjustment result of the extraction position of the at least one of the left eye image and the right eye image in a memory, and
the processor is configured to adjust an extraction position of both of the left eye image and the right eye image which are imaged by the camera head in at least one of a horizontal direction or a vertical direction in accordance with the user operation.

US Pat. No. 10,659,755

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An information processing device comprising:a display control unit configured to perform movement control of one or more stereoscopic vision objects perceived by a user from a start depth that is different from a target depth to the target depth on a basis of mode specifying information that specifies a mode of the movement control that supports stereoscopic vision by the user,
wherein the mode specifying information includes information regarding a number of the one or more stereoscopic objects,
wherein the mode of the movement control comprises a movement speed of the one or more stereoscopic objects from the start depth to the target depth,
wherein the display control unit determines the movement speed of the one or more stereoscopic objects based on the number of the one or more stereoscopic objects,
wherein, when the number of the one or more stereoscopic objects is greater than a threshold number, the movement speed is faster than when the number of the one or more stereoscopic objects is equal to or less than the threshold number, and
wherein the display control unit is implemented via at least one processor.
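The speed-selection rule in this claim is a simple threshold test. The concrete slow/fast values below are placeholders for illustration; the claim only requires that more objects than the threshold move faster.

```python
def movement_speed(num_objects, threshold, slow=1.0, fast=2.0):
    """Speed selection as claimed: when the number of stereoscopic
    objects exceeds the threshold, use the faster movement speed;
    otherwise use the slower one. slow/fast values are illustrative."""
    return fast if num_objects > threshold else slow
```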

US Pat. No. 10,659,754

MULTI-VIEW CODING WITH EFFICIENT RESIDUAL HANDLING

GE VIDEO COMPRESSION, LLC...

1. A method for reconstructing a multi-view signal coded in a data stream, comprising:obtaining a disparity vector with respect to a dependent coding block in a picture of a dependent view of a multi-view signal, the disparity vector representing a disparity between the dependent coding block and a reference-view coding block in a picture of a reference view of the multi-view signal;
identifying the reference-view block in the picture of the reference view based on the disparity vector;
obtaining, from the data stream, a reference-view residual signal associated with the reference-view block, wherein the reference-view residual signal represents a difference between the reference-view coding block and a prediction of the reference-view coding block;
estimating a dependent-view residual signal for the dependent coding block in the picture of the dependent view based on the reference-view residual signal using the disparity vector, wherein the dependent-view residual signal represents a difference between the dependent coding block and a prediction of the dependent coding block;
adding a remaining signal corresponding to the dependent-view residual signal to the estimated dependent-view residual signal to obtain a reconstructed dependent-view residual signal; and
reconstructing the dependent coding block based on the reconstructed dependent-view residual signal and a prediction of the dependent coding block.
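The last two steps of this claim, per sample, are two additions: the decoded remaining signal is added to the disparity-derived residual estimate, and the result is added to the block prediction. A minimal sketch over flat sample lists (the list interface is an assumption for the example):

```python
def reconstruct_dependent_block(estimated_residual, remaining_signal, prediction):
    """Sketch of the claimed residual handling: remaining signal plus
    estimated dependent-view residual gives the reconstructed residual,
    which is added to the dependent block's prediction."""
    reconstructed_residual = [e + r for e, r in
                              zip(estimated_residual, remaining_signal)]
    return [p + res for p, res in zip(prediction, reconstructed_residual)]
```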

US Pat. No. 10,659,753

PHOTOGRAMMETRY SYSTEM AND METHOD OF OPERATION

FARO TECHNOLOGIES, INC., ...

1. A photogrammetry system comprising:a two-dimensional (2D) camera operable to acquire a 2D image at a first resolution and a second resolution, the first resolution being higher than the second resolution, the 2D camera further being operable to acquire a 2D video image at the second resolution;
a user interface having a display; and
a controller having a processor that is responsive to nontransitory executable computer instructions to perform a method comprising:
acquiring a first 2D image of an object with the 2D camera at the first resolution;
detecting at least one feature on the object in the first 2D image;
determining an image sequence, the image sequence including a second position;
acquiring a plurality of second 2D images with the 2D camera at the second resolution;
tracking, as the 2D camera is moved, the position and pose of the 2D camera based at least in part on the plurality of second 2D images;
indicating on the display a direction of movement from a first position to a second position;
acquiring a third 2D image of the object when the 2D camera reaches the second position; and
determining three-dimensional (3D) coordinates of one or more points on the object based at least in part on the first 2D image and the third 2D image.

US Pat. No. 10,659,752

SYSTEM FOR THE STEREOSCOPIC VISUALIZATION OF AN OBJECT REGION

CARL ZEISS MEDITEC AG, J...

10. A system for visualizing an object region, comprising:an electronic image capturing device having:
a first sensor area or a plurality of first sensor areas, and
a second sensor area or a plurality of second sensor areas,
an optical assembly having
a first optical channel configured for a first imaging beam path imaging the object region on the first sensor area or the plurality of first sensor areas of the image capturing device, and
a second optical channel configured for a second imaging beam path imaging the object region on the second sensor area or the plurality of second sensor areas of the image capturing device, a spectral transmission of the second optical channel differing from a spectral transmission of the first optical channel and
a microscope main objective system through which the first imaging beam path and the second imaging beam path pass,
a first image producing device configured to provide a first image of the object region, captured on the first sensor area or the plurality of first sensor areas, and a second image of the object region, captured on the second sensor area or the plurality of second sensor areas, to a first observer for visualization of the object region,
a second image producing device configured to visualize the object region for a second observer,
a computer unit with an image superposition stage configured to superpose the first image and the second image to form a monoscopic superposition image of the object region,
an orientation determining apparatus configured to determine an orientation of a perpendicular projection of a connecting line of eyes of a second observer into a plane perpendicular to an optical axis of the microscope main objective system, and
an image rotating stage configured to digitally rotate and window the monoscopic superposition image, and to provide an image which has been digitally rotated and windowed in relation to the superposition image to the second image producing device as an image of the object region for a monoscopic visualization of the object region, said digitally rotated and windowed image having an image edge parallel to the perpendicular projection of the connecting line of the eyes of the second observer into the object region.

US Pat. No. 10,659,751

MULTICHANNEL, MULTI-POLARIZATION IMAGING FOR IMPROVED PERCEPTION

Lyft Inc., San Francisco...

16. A computing system comprising:one or more processors; and
one or more computer-readable non-transitory storage media coupled to one or more of the processors, the one or more computer-readable non-transitory storage media comprising instructions operable when executed by one or more of the processors to cause the computing system to perform operations comprising:
accessing first image data generated by an image sensor having a first filter array that has a first filter pattern, wherein the first filter pattern comprises a plurality of first filter types;
accessing one or more second image data generated by one or more additional image sensors each comprising a second filter array that has a second filter pattern different from the first filter pattern, wherein the second filter pattern comprises a plurality of second filter types, wherein the plurality of second filter types and the plurality of first filter types have at least one filter type in common;
determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the one or more second image data based on a portion of the first image data associated with the filter type in common and a portion of the one or more second image data associated with the filter type in common; and
generating, based on the correspondence, composite data using the first image data and the one or more second image data, wherein the composite data comprises one or more third pixels that each comprise first data associated with one or more of the plurality of first filter types and second data associated with one or more of the plurality of second filter types.
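As a loose sketch of what claim 16 describes (not Lyft's implementation), the correspondence and composite steps can be modeled with toy per-pixel dictionaries, matching pixels between the two sensors by their shared filter channel. The function name and the dict representation are hypothetical:

```python
def match_and_fuse(img_a, img_b, common_key):
    """Match pixels between two toy images by their shared filter
    channel, then fuse per-filter data into composite pixels.

    img_a, img_b: lists of dicts, one per pixel, mapping a filter
    type (e.g. "R", "G", "P0") to a measured value.
    """
    fused = []
    for pa in img_a:
        # Correspondence: pick the pixel in img_b whose common-channel
        # value is closest (a stand-in for real disparity matching).
        pb = min(img_b, key=lambda p: abs(p[common_key] - pa[common_key]))
        merged = dict(pb)
        merged.update(pa)  # composite pixel carries both filter sets
        fused.append(merged)
    return fused
```

Real systems would match on spatial neighborhoods of the common channel rather than single values, but the claim's shape (common channel drives correspondence, composite pixels carry both filter sets) is the same.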

US Pat. No. 10,659,750

METHOD AND SYSTEM FOR PRESENTING AT LEAST PART OF AN IMAGE OF A REAL OBJECT IN A VIEW OF A REAL ENVIRONMENT, AND METHOD AND SYSTEM FOR SELECTING A SUBSET OF A PLURALITY OF IMAGES

Apple Inc., Cupertino, C...

1. A method of presenting at least part of an image of a real object in a view of a real environment, comprising:
obtaining a first image of at least part of a real object captured by a camera from a first camera pose, wherein a portion of the first image comprising the real object comprises an object image area;
determining a first 3D plane on which the real object is placed;
determining at least one first ray passing through an optical center of the camera at the first camera pose and the object image area;
determining a first spatial relationship comprising at least one first angle between the first 3D plane and the at least one first ray, and a depth of the first 3D plane;
obtaining a second image of a real environment captured from a second camera pose;
determining a target space in a view of the real environment captured from the second camera pose, wherein the target space comprises a second 3D plane;
determining a second spatial relationship comprising at least one second angle between the second 3D plane and at least one second ray passing from the second camera pose to the target space, and a depth of the second 3D plane;
applying one or more warping functions to the object image area based on the first spatial relationship and the second spatial relationship to obtain a warped object image area; and
blending in the warped object image area into the second image, wherein the at least part of the real object appears from the perspective of the second camera pose.
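The first and second spatial relationships in the claim each combine a ray-to-plane angle with a plane depth. A minimal geometric sketch (hypothetical helper names; the claim does not specify the warping functions themselves):

```python
import math

def ray_plane_angle(ray, plane_normal):
    """Angle (radians) between a viewing ray and a 3D plane.
    The angle to the plane is 90 degrees minus the angle to its normal."""
    dot = sum(r * n for r, n in zip(ray, plane_normal))
    norm_r = math.sqrt(sum(r * r for r in ray))
    norm_n = math.sqrt(sum(n * n for n in plane_normal))
    angle_to_normal = math.acos(max(-1.0, min(1.0, dot / (norm_r * norm_n))))
    return abs(math.pi / 2 - angle_to_normal)

def foreshortening_scale(angle_src, depth_src, angle_dst, depth_dst):
    """Toy warp factor relating the two spatial relationships:
    apparent size grows with sin(angle) and shrinks with depth."""
    return (math.sin(angle_dst) / depth_dst) / (math.sin(angle_src) / depth_src)
```

A full implementation would derive a homography from these quantities; the scale factor above only illustrates how angle and depth jointly drive the warp.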

US Pat. No. 10,659,749

EFFICIENT HISTOGRAM-BASED LUMA LOOK MATCHING

Dolby Laboratories Licens...

1. A method for forward reshaping luma components of an input image of a first dynamic range, comprising:
providing a reference tone-mapped image over a second dynamic range that corresponds to the input image of the first dynamic range, wherein the first dynamic range is greater than the second dynamic range;
providing a luma forward reshaping function to forward reshape luma components of the input image of the first dynamic range into luma components of a corresponding forward reshaped image over the second dynamic range based on one or more luma mapping parameters;
providing a plurality of combinations of candidate parameter values for the one or more luma mapping parameters;
generating a luma histogram of the input image over the first dynamic range;
generating a luma histogram of the reference tone-mapped image over the second dynamic range;
generating a plurality of mapped luma histograms over the second dynamic range by converting, for each of the provided candidate parameter values for the one or more luma mapping parameters, the luma histogram of the input image over the first dynamic range via the luma forward reshaping function with a respective combination of the provided candidate parameter values to a corresponding mapped luma histogram over the second dynamic range;
comparing the plurality of mapped luma histograms over the second dynamic range with the luma histogram of the reference tone-mapped image over the second dynamic range;
selecting, based on results of comparing the plurality of luma histograms with the luma histogram of the reference tone-mapped image, a specific combination of candidate parameter values from among the plurality of combinations of candidate parameter values;
using the luma forward reshaping function with the one or more luma parameter values that are set to the specific combination of candidate parameter values to forward reshape luma components of the input image into the luma components of the corresponding forward reshaped image over the second dynamic range; and
transmitting the corresponding forward reshaped image to one or more recipient devices.
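Claim 1's search can be illustrated with a one-parameter reshaping family. The sketch below assumes a power-law reshaping function and an L1 histogram distance, neither of which the claim mandates; it only mirrors the generate-map-compare-select loop:

```python
def map_histogram(hist_in, n_out, reshape):
    """Push an input luma histogram through a reshaping function.
    Bin centers are normalized luma values in [0, 1]."""
    hist_out = [0] * n_out
    n_in = len(hist_in)
    for i, count in enumerate(hist_in):
        v = (i + 0.5) / n_in                      # bin-center luma
        j = min(int(reshape(v) * n_out), n_out - 1)
        hist_out[j] += count
    return hist_out

def best_gamma(hist_in, hist_ref, candidates):
    """Select the candidate gamma whose mapped histogram is closest
    (L1 distance) to the reference tone-mapped histogram."""
    def cost(g):
        mapped = map_histogram(hist_in, len(hist_ref), lambda v: v ** g)
        return sum(abs(m - r) for m, r in zip(mapped, hist_ref))
    return min(candidates, key=cost)
```

The key efficiency point of the claim is visible here: the candidate search operates on histograms, never on the full-resolution image, so evaluating many parameter combinations stays cheap.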

US Pat. No. 10,659,748

SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER

Visionary VR, Inc., Los ...

1. A system for presenting content to a user, the system comprising:
one or more sensors that generate output signals conveying information related to a view direction of the user, the view direction of the user corresponding to a physical direction toward which the gaze of the user is directed;
a display that presents the content to the user, wherein presentation of the content via the display visually simulates virtual objects superimposed within a real world view of the physical space determined by the view direction of the user via the display, wherein the view direction toward which the gaze of the user is directed corresponds to an orientation of the display, and wherein the content includes multiple fields that are viewable and fixed spatially with respect to the physical space and the positions of the fields in the virtual space are independent of the view direction of the user, the multiple fields including a first viewable field and a second viewable field; and
one or more physical computer processors configured by computer readable instructions to:
determine the view direction of the user based on the output signals;
identify a change in the view direction of the user from the first viewable field to the second viewable field based on the output signals;
cause a change in one or more of the rhythm and/or pace of the content via the display responsive to identifying the change in the view direction of the user; and
cause the display to provide a sensory cue to the user responsive to identifying the change in the view direction of the user.

US Pat. No. 10,659,747

VIDEO DISPLAY DEVICE

MAXELL, LTD., Kyoto (JP)...

1. A video display device comprising:
a video input unit;
a video correcting unit that performs video correction on a video input by the video input unit; and
a video display unit that displays the video corrected by the video correcting unit, wherein the video correcting unit performs local luminance correction on the video input by the video input unit, acquires a correction intensity for each part of the local luminance correction, and performs local saturation correction based on the correction intensity;
wherein the local luminance correction in the video correcting unit is correction using Retinex correction, gain of the local luminance correction is acquired as the correction intensity by comparing videos before and after the local luminance correction, and an intensity of the local saturation correction is changed according to the gain of the local luminance correction.
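A toy reading of that dependency, in which the per-pixel luminance-correction gain (the after/before ratio) drives the saturation-correction intensity. The `strength` parameter is hypothetical; the claim does not fix the mapping from gain to saturation intensity:

```python
def saturation_gain(before_luma, after_luma, strength=0.5):
    """Per-pixel saturation-correction intensity derived from the
    local luminance-correction gain (after/before comparison)."""
    lum_gain = after_luma / max(before_luma, 1e-6)
    # Scale saturation in the same direction the luminance moved,
    # attenuated by a tuning strength.
    return 1.0 + strength * (lum_gain - 1.0)
```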

US Pat. No. 10,659,746

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Kabushiki Kaisha Toshiba,...

8. An image processing method comprising:
inputting inputted pixels forming an inputted image in a raster scan order;
converting, through first pixel position calculation, a position of each of the inputted pixels to a first pixel position in an outputted image;
performing adjustment such that a second pixel position calculation for converting a position which is an integer of an outputted pixel near the first pixel position which is a non-integer in the outputted image to a second pixel position in the inputted image is an inverse function of the first pixel position calculation;
converting the position which is the integer of the outputted pixel near the first pixel position to the second pixel position through the second pixel position calculation;
calculating a pixel value of the second pixel position through interpolation with a surrounding pixel in the inputted pixel; and
outputting the pixel value of the second pixel position as a pixel value of the outputted pixel in an order in which the pixel value is calculated through the interpolation.
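In one dimension, the inverse-mapping-plus-interpolation idea reduces to the familiar resampling loop below. This is a sketch assuming a simple uniform scale for the forward and inverse pixel position calculations, which the claim does not require:

```python
def resample_1d(pixels, scale):
    """1-D inverse-mapping resample: for each integer output position,
    map back into the input (the 'second pixel position') and linearly
    interpolate between the surrounding input pixels."""
    n_out = int(len(pixels) * scale)
    out = []
    for x_out in range(n_out):
        x_in = x_out / scale                  # inverse of the forward map
        i = int(x_in)
        frac = x_in - i
        right = pixels[min(i + 1, len(pixels) - 1)]
        out.append(pixels[i] * (1 - frac) + right * frac)
    return out
```

The method's point is ordering: because inputs arrive in raster order, output values are emitted as soon as their interpolation neighborhood is available, rather than after a full-frame inverse pass.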

US Pat. No. 10,659,745

VIDEO DISPLAY DEVICE AND VIDEO DISPLAY METHOD

PANASONIC INTELLECTUAL PR...

1. A video display device, comprising:
an obtainer that obtains video data including a video and dynamic luminance characteristics indicating a time-dependent change in luminance characteristics of the video;
a tone mapping processor that, in the case where a luminance region having a luminance less than or equal to a first luminance is defined as a low luminance region, and a luminance region having a luminance exceeding the first luminance is defined as a high luminance region, (i) performs first tone mapping using first conversion characteristics when first luminance characteristics exceed a predetermined threshold value, and (ii) performs second tone mapping using second conversion characteristics when the first luminance characteristics are less than or equal to the predetermined threshold value, the first luminance characteristics being included in the dynamic luminance characteristics and indicating the number of pixels having luminances less than or equal to a second luminance among pixels included in the low luminance region in one frame of the video, the first tone mapping maintaining the luminances less than or equal to the second luminance, the second tone mapping decreasing the luminances less than or equal to the second luminance; and
a display that displays a video obtained as a result of the first tone mapping or the second tone mapping.
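The threshold-switched tone mapping can be caricatured as follows. Halving the dark luminances for the second mapping is a hypothetical choice; the claim only requires that luminances at or below the second luminance be decreased when the dark-pixel count is at or below the threshold:

```python
def tone_map_frame(frame, second_lum, threshold):
    """Count pixels at or below second_lum; if there are many, preserve
    them (first tone mapping), otherwise darken them (second mapping).
    Brighter pixels pass through unchanged either way in this sketch."""
    dark_count = sum(1 for p in frame if p <= second_lum)
    if dark_count > threshold:
        return list(frame)                                  # first tone mapping
    return [p * 0.5 if p <= second_lum else p for p in frame]  # second
```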

US Pat. No. 10,659,744

DISTANCE INFORMATION GENERATING APPARATUS, IMAGING APPARATUS, AND DISTANCE INFORMATION GENERATING METHOD

CANON KABUSHIKI KAISHA, ...

1. A distance information generating apparatus for generating distance information corresponding to a distance to an object, based on a first image signal and a second image signal, which have a parallax corresponding to the distance to the object, and a third image signal which includes a plurality of pieces of color information, the distance information generating apparatus comprising:
at least one processor which executes instructions stored in a memory, the at least one processor being configured to function as:
a generating unit configured to generate the distance information, based on the parallax between the first image signal and the second image signal;
an acquiring unit configured to acquire chromatic aberration information indicative of chromatic aberration of an image-forming optical system used in photographing of the first image signal and the second image signal; and
a correction unit configured to correct the distance information generated in the generating unit based on the chromatic aberration information and weighting coefficients determined based on a contrast evaluation value calculated for each of the plurality of pieces of color information of the third image signal.
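The parallax-to-distance step underlying the generating unit is the classic stereo relation Z = f·B/d. A minimal sketch (hypothetical function name and unit choices; the claim's correction stage based on chromatic aberration and contrast weights is not modeled here):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo depth: Z = f * B / d, with focal length in
    pixels, baseline in meters, and disparity in pixels."""
    return focal_px * baseline_m / disparity_px
```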

US Pat. No. 10,659,743

IMAGE PROJECTION DEVICE

QD LASER, INC., Kawasaki...

1. An image projection device comprising:
a first mirror that oscillates to scan an image light beam forming an image projected onto a retina of a user;
a light source that emits the image light beam to the first mirror and emits a detection light beam to the first mirror at a timing different from a timing when the image light beam is emitted;
a second mirror that has a first region and a second region, and scans neither the image light beam nor the detection light beam reflected by the first mirror, the first region reflecting the image light beam reflected by the first mirror to the retina of the user, the second region reflecting the detection light beam reflected by the first mirror in a direction different from a direction in which the image light beam is reflected;
a light detector that detects the detection light beam reflected by the second region of the second mirror; and
a controller that adjusts oscillation of the first mirror and an emission timing of the image light beam from the light source based on a detection result by the light detector.

US Pat. No. 10,659,742

IMAGE GENERATING APPARATUS AND IMAGE DISPLAY CONTROL APPARATUS

Sony Interactive Entertai...

1. An image generating apparatus comprising:
a panoramic image generating unit configured to generate a panoramic image by transforming each of eight divided areas obtained by dividing a surface of a sphere with three planes that pass through a center of the sphere and are orthogonal to each other, to a transformed area shaped as a rectangular equilateral triangle, which is a triangle having one right angle and two sides of equal length, said surface of the sphere having at least a partial area onto which a scene viewed from an observation point is projected, such that a number of pixels belonging to a same-latitude pixel group made up of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes and a plurality of same-latitude pixel groups correspond to mutually equal latitude ranges, and placing the transformed area on a panoramic plane; and
an image output unit configured to output the generated panoramic image, wherein:
said three planes are a horizontal plane, a vertical plane along a predetermined frontal direction, which is a direction to be presented to a user as viewed from the observation point, and a vertical plane along a lateral direction which intersects the predetermined frontal direction, and
said panoramic image generating unit generates the panoramic image by placing the eight transformed areas, each being shaped as the rectangular equilateral triangle, in a square shape as a whole on the panoramic plane, such that a point on the sphere in the predetermined frontal direction is placed at the center of the square shape, a hemisphere structured by the four divided areas around the point in the predetermined frontal direction is transformed to an inscribed square formed by connecting midpoints of the four sides of the square shape.

US Pat. No. 10,659,741

PROJECTING OBSTRUCTED CONTENT OVER TOUCH SCREEN OBSTRUCTIONS

INTERNATIONAL BUSINESS MA...

1. A wearable projector device comprising:
a processor, a computer readable memory and a computer readable storage medium associated with a computing device;
program instructions to receive display configuration information from a remote computing device, wherein the display configuration information defines a manner in which a portion of content on a touchscreen, obstructed from a user's view by an obstructing object, is to be projected by the wearable projector device; and
program instructions to project obstructed content over the obstructing object based on the portion of content on the touchscreen obstructed from the user's view, wherein the obstructed content changes dynamically based on movement of the obstructing object in real time,
wherein the program instructions are stored on the computer readable storage medium for execution by the processor via the computer readable memory.

US Pat. No. 10,659,740

IMAGE RENDERING APPARATUS, HEAD UP DISPLAY, AND IMAGE LUMINANCE ADJUSTING METHOD

JVCKENWOOD Corporation, ...

1. An image rendering apparatus comprising:
a light source unit configured to emit a laser beam;
a detecting unit configured to detect an intensity of the laser beam;
an optical scanner configured to scan the laser beam emitted from the light source unit in a scan area, the scan area having a rendering area and a blanking area;
a light source driving unit configured to control the light source unit in such a way (i) that a rendered image based on input image data is generated by the scan of the optical scanner inside the rendering area of the scan area scanned by the optical scanner and not in the blanking area, and (ii) that a characteristics detecting laser beam for detecting the intensity of the laser beam is emitted to at least an adjusting area at a position in the rendering area and in a pattern corresponding to a rendered content of the rendered image; and
an adjusting unit configured to adjust an output of the laser beam based on a detected value of the characteristics detecting laser beam emitted in at least the adjusting area detected by the detecting unit;
wherein the rendered image includes an indicator related to a speed of a vehicle to which the image rendering apparatus is applied;
wherein the light source driving unit is configured to control the light source unit to emit the characteristics detecting laser beam in such a way that a scale for the indicator related to the speed of the vehicle is rendered within the adjusting area;
wherein the light source unit includes a plurality of laser light sources configured to emit laser light of a plurality of colors; and
wherein the scale comprises a plurality of sectioned areas rendered by the characteristics detecting laser beam in the adjusting area within the rendering area, a color of each respective sectioned area corresponding to a color of the laser light source.

US Pat. No. 10,659,739

CONTROL OF LIGHT SPREADING WITH BLURRING ELEMENT IN PROJECTOR SYSTEMS

Dolby Laboratories Licens...

1. A display, comprising:
a light source;
a first modulator configured to receive light from the light source and modulate the light into a first light;
a blurring element configured to diffuse the first light into a second light;
a motor mechanically coupled to the blurring element and configured to move the blurring element during operation of the display, wherein the motor is configured to rotate or oscillate the blurring element during the operation of the display so that regular patterns for features of the blurring element do not show static structures on PSF shapes of images being projected; and
a second modulator configured to receive and modulate the second light.

US Pat. No. 10,659,738

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM PRODUCT

OLYMPUS CORPORATION, Tok...

1. An image processing apparatus comprising a processor comprising hardware, the processor being configured to implement:
a total pixel value calculating process that sums pixel values of a plurality of pixels arranged in one line including a pixel of interest of an input image, such that the pixel values of pixels are summed by each color for pixels arranged in one direction and other direction on opposite sides of the pixel of interest;
an occurrence start point detecting process that determines whether or not the pixel of interest is an axial chromatic aberration occurrence start point based on at least one of a result of the calculation performed by the total pixel value calculating process and the pixel value of the pixel of interest and detects the occurrence start point;
an area determining process that determines a predetermined surrounding area around the pixel of interest detected as the occurrence start point to be an axial chromatic aberration area;
a color space information calculating process that calculates color space information on the pixel of interest detected as the occurrence start point in a specific color space for the one direction and the other direction based on the result of the calculation performed by the total pixel value calculating process;
a color space difference calculating process that calculates a difference between the color space information for the one direction and the color space information for the other direction calculated by the color space information calculating process;
a correction amount calculating process that calculates an amount of correction of axial chromatic aberration in accordance with the difference calculated by the color space difference calculating process; and
a correcting process that corrects the axial chromatic aberration area by using the correction amount.

US Pat. No. 10,659,737

METHOD FOR MONITORING OCCUPANCY IN A WORK AREA

VergeSense, Inc., Mounta...

1. A method for monitoring human occupancy in a work area comprising:
during a first scan cycle, recording a first image at a sensor block at a first time, the sensor block comprising an optical sensor defining a field of view intersecting the work area;
extracting a set of features from the first image;
detecting, based on the set of features, a set of objects in the first image, the set of objects including a desk;
matching a location of the desk in the first image to a known location of a first desk according to a baseline asset map to identify the desk in the first image as the first desk;
in response to the set of objects comprising a human located within a threshold distance of the first desk:
associating the human with the first desk; and
updating an occupancy status of the first desk to indicate the first desk as occupied;
in response to the set of objects comprising a human effect located within the threshold distance of the first desk and excluding a human located within the threshold distance of the first desk, the human effect selected from a group consisting of an item of clothing, a bag, a computing device, a notebook, and a beverage container:
associating the human effect with the first desk; and
updating the occupancy status of the first desk to indicate the first desk as occupied with human absent; and
in response to the set of objects excluding a human located within the threshold distance of the first desk and excluding the human effect located within the threshold distance of the first desk, updating the occupancy status of the first desk to indicate the first desk as vacant.
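The claim's three-way decision maps directly onto a small classifier. The label strings and object names below are hypothetical stand-ins for the detector's output:

```python
# Human effects enumerated in the claim (names abbreviated here).
HUMAN_EFFECTS = {"clothing", "bag", "computing device", "notebook",
                 "beverage container"}

def desk_status(objects_near_desk):
    """Occupancy decision from the claim: a human nearby means
    occupied; only a human effect nearby means occupied with the
    human absent; neither means vacant."""
    labels = set(objects_near_desk)
    if "human" in labels:
        return "occupied"
    if labels & HUMAN_EFFECTS:
        return "occupied, human absent"
    return "vacant"
```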

US Pat. No. 10,659,736

ALIGNMENT APPARATUS, LITHOGRAPHY APPARATUS, AND ARTICLE MANUFACTURING METHOD

CANON KABUSHIKI KAISHA, ...

1. An alignment apparatus that performs alignment of a substrate by detecting a mark provided on the substrate, the apparatus comprising:
a stage configured to move while holding the substrate;
an imaging device configured to capture an image of the mark on the substrate held by the stage; and
a processor configured to obtain a position of the mark based on the image of the mark captured by the imaging device,
wherein the imaging device includes:
an image sensor configured to accumulate charge by photoelectric conversion;
a frame memory configured to temporarily store at least one frame of image data obtained based on the charge accumulated by the image sensor;
a first transfer unit configured to transfer one frame of image data obtained by the image sensor to the frame memory; and
a second transfer unit configured to transfer one frame of image data read from the frame memory to the processor,
wherein a data transfer rate of the first transfer unit is higher than a data transfer rate of the second transfer unit, and the first transfer unit and the second transfer unit operate asynchronously,
wherein the imaging device is configured to start next image capturing as long as the accumulation of the charge is performed by the image sensor and transfer of one frame of image data obtained by the accumulation to the frame memory by the first transfer unit is completed even if transfer of the one frame of image data to the processor by the second transfer unit is not completed, and
wherein the alignment apparatus moves the stage for next image capturing concurrently with transfer of one frame of image data to the frame memory by the first transfer unit when capturing an image of the mark using the imaging device at a plurality of positions while moving the stage.

US Pat. No. 10,659,735

SURVEILLANCE SYSTEM AND METHOD OF OPERATING SURVEILLANCE SYSTEM

HANWHA TECHWIN CO., LTD.,...

1. A surveillance system comprising:
a first frequency transceiver configured to receive a first surveillance image from a first camera, and receive a second surveillance image from a second camera;
a second frequency transceiver configured to receive second frequency transceiver notice information from the second camera; and
a processor configured to
set a reception signal gain value of the first frequency transceiver to a first gain value to receive the first surveillance image from the first camera,
change the reception signal gain value of the first frequency transceiver from the first gain value to a second gain value to receive the second surveillance image from the second camera, in response to receiving the second frequency transceiver notice information from the second camera, and
after and while a communication channel is established from the second camera to the first frequency transceiver, change the reception signal gain value of the first frequency transceiver from the second gain value to the first gain value and receive the second surveillance image from the second camera while the reception signal gain of the first frequency transceiver is set to the first gain value.

US Pat. No. 10,659,734

HANDHELD COMMUNICATIONS DEVICES AND COMMUNICATIONS METHODS

IDEAL Industries, Inc., ...

1. A communications method comprising:
using a single cable to directly couple a first communications port of a handheld communications device to a second communications port of a network node;
receiving by the handheld communications device from the network node via the single cable a packet;
extracting by the handheld communications device from the received packet a network address of the network node;
automatically determining by the handheld communications device if a current network address of the handheld communications device and the network address of the network node extracted from the received packet both belong to a same IP subnet;
when it is determined that a current network address of the handheld communications device and the network address of the network node extracted from the received packet do not both belong to the same IP subnet, causing the handheld communications device to use the network address of the network node extracted from the received packet to automatically modify the current network address of the handheld communications device whereby the modified network address of the handheld communications device and the network address of the network node extracted from the received packet belong to the same IP subnet such that the network node and the handheld device are capable of directly exchanging information via the single cable; and
after the handheld communications device and the network node are determined to be capable of directly exchanging information via the single cable, causing the handheld communications device to send information to the network node and retrieve information from the network node via the single cable;
wherein the network node comprises an IP camera or an IP telephone.
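The subnet test and address modification can be sketched with Python's standard `ipaddress` module. The /24 prefix is an assumption; the claim does not specify how the subnet mask is chosen:

```python
import ipaddress

def ensure_same_subnet(device_ip, node_ip, prefix=24):
    """If the handheld's address is not in the node's subnet, derive a
    new address inside it, keeping the device's host bits when possible."""
    net = ipaddress.ip_network(f"{node_ip}/{prefix}", strict=False)
    dev = ipaddress.ip_address(device_ip)
    if dev in net:
        return str(dev)                      # already on the same IP subnet
    host = int(dev) & int(net.hostmask)      # reuse the device's host part
    host = host or 2                         # avoid the network address
    candidate = ipaddress.ip_address(int(net.network_address) | host)
    if candidate == ipaddress.ip_address(node_ip):
        candidate = ipaddress.ip_address(int(candidate) + 1)  # avoid collision
    return str(candidate)
```

A production tool would also probe for address conflicts before committing the change; this sketch only shows the same-subnet determination and modification steps.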

US Pat. No. 10,659,733

SYSTEMS AND METHODS FOR IMPLEMENTING AUGMENTED REALITY AND/OR VIRTUAL REALITY

STEELCASE INC., Grand Ra...

1. An enhanced reality workstation for use by an electronic conference attendee, the workstation comprising:
an emissive surface assembly that includes at least first and second emissive surface portions that are arranged about an attendee location to be occupied by an attendee using the workstation;
at least a first camera for obtaining images of a first attendee at the workstation;
at least a first sensor for sensing the first attendee's sight trajectory at the workstation;
a processor programmed to perform the steps of:
(i) identifying a target of interest (TOI) presented at some location on the emissive surface assembly subtended by the attendee's sight trajectory;
(ii) using the images from the at least a first camera to generate at least first and second angled video representations of the first attendee to be presented at other workstations.

US Pat. No. 10,659,732

APPARATUS FOR PROVIDING MULTI-PARTY CONFERENCE AND METHOD FOR ASSIGNING ENCODER THEREOF

SAMSUNG SDS CO., LTD., S...

1. An apparatus for providing multi-party conference comprising:
an image quality determination module configured to determine an image providing quality for a terminal connected to a multi-party conference created in the apparatus; and
an encoder assignment module configured to assign an encoder to the terminal based on the image providing quality,
wherein when an assignable unused encoder does not exist at a time of assigning the encoder, the encoder assignment module retrieves one encoder among encoders previously assigned to one or more multi-party conferences created in the apparatus and re-assigns the retrieved encoder to the terminal.
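A minimal model of the retrieve-and-re-assign policy. The pool/dict bookkeeping is hypothetical, and the claim does not say which previously assigned encoder is retrieved; this sketch takes the oldest assignment:

```python
def assign_encoder(pool, in_use, terminal):
    """Assign an unused encoder if one exists; otherwise retrieve an
    encoder already assigned to another conference and re-assign it.

    pool: list of encoder ids; in_use: dict encoder id -> terminal."""
    free = [e for e in pool if e not in in_use]
    if free:
        encoder = free[0]
    else:
        encoder = next(iter(in_use))   # retrieve a previously assigned one
        del in_use[encoder]
    in_use[encoder] = terminal
    return encoder
```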

US Pat. No. 10,659,731

AUTOMATED CINEMATIC DECISIONS BASED ON DESCRIPTIVE MODELS

Facebook, Inc., Menlo Pa...

1. A method comprising:
accessing input data from one or more different input sources, the input sources comprising: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system;
based on the input data, generating a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session;
generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and
sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.

US Pat. No. 10,659,730

SYSTEMS AND METHODS FOR IMPROVED VIDEO CALL HANDLING

T-Mobile USA, Inc., Bell...

1. A user equipment (UE) comprising:
one or more processors;
a display; and
memory storing computer-executable instructions that, when executed by the one or more processors, cause the UE to:
participate in a video call having a video component and an audio component with at least one other UE, the UE and the at least one other UE together being associated with a plurality of call participants;
display, on the display, a graphical user interface (GUI) comprising at least a video window, a subtitle window, and a public text interface;
display, in the video window of the GUI, the video component of the video call;
display, in the subtitle window of the GUI, a subtitle transcribed from the audio component by a voice recognition system;
display, in the public text interface of the GUI, a set of subtitles previously displayed in the subtitle window and a set of other textual messages sent by individual call participants to all other call participants of the plurality of call participants; and
display, in the public text interface of the GUI, one or more type identifiers that identify communication types associated with at least one of the set of subtitles and the set of other textual messages.

US Pat. No. 10,659,729

OPTIMIZING VIDEO CONFERENCING USING CONTEXTUAL INFORMATION

FACEBOOK, INC., Menlo Pa...

1. A method comprising:
receiving, by a client computing device associated with a user, a video conference data stream depicting a plurality of video conference participants;
accessing social networking system information corresponding to each participant of the plurality of video conference participants;
calculating, for each participant of the plurality of video conference participants, an importance score based on:
the social networking system information corresponding to the participant and social networking system information corresponding to the user of the client computing device, and
a social proximity, determined based on the social networking system information corresponding to the participant, between the participant and the user of the client computing device; and
customizing a display of the video conference data stream for the user based on the importance score calculated for each participant of the plurality of video conference participants.
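For orientation, a minimal sketch of scoring logic of the kind the claim describes. The `friends` and `interactions` fields and the weighting are hypothetical stand-ins for "social networking system information"; the claim names the inputs, not a formula:

```python
def importance_score(participant, user):
    """Hypothetical score combining social proximity with the
    participant's interaction history with the user."""
    # Social proximity: here, a stand-in based on mutual connections.
    proximity = len(set(participant["friends"]) & set(user["friends"]))
    # Direct interaction history between the two accounts.
    interactions = participant["interactions"].get(user["id"], 0)
    return proximity + 0.5 * interactions

def customize_display(participants, user):
    """Order participant tiles by descending importance score."""
    return sorted(participants,
                  key=lambda p: importance_score(p, user),
                  reverse=True)

user = {"id": "u0", "friends": ["a", "b", "c"]}
alice = {"id": "u1", "friends": ["a", "b"], "interactions": {"u0": 4}}
bob = {"id": "u2", "friends": ["z"], "interactions": {}}
ordered = customize_display([bob, alice], user)  # alice ranks first
```

The customization step could equally resize or crop tiles rather than reorder them; the claim covers any display adaptation driven by the per-participant score.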

US Pat. No. 10,659,728

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An information processing apparatus, comprising:
a receiving section configured to:
receive a captured image of a video communication partner from a communication destination apparatus; and
receive first information indicating a gaze of the video communication partner is at a user of the information processing apparatus, wherein the user is displayed on the communication destination apparatus; and
a control section configured to:
control a display screen to display the received captured image;
determine a line of sight of the video communication partner in the displayed captured image is misaligned with a line of sight of the user; and
execute an image process on the displayed captured image such that the line of sight of the video communication partner in the displayed captured image points towards the user, wherein
a gaze of the user is at the display screen, and
the execution of the image process is based on the first information and the determination that the line of sight of the video communication partner in the captured image is misaligned with the line of sight of the user.

US Pat. No. 10,659,727

DEVICE AND METHOD FOR TRANSMITTING VIDEO SIGNALS, AND SYSTEM FOR PLAYING VIDEO SIGNALS

BOE TECHNOLOGY GROUP CO.,...

1. A device for transmitting video signals, comprising:
two or more signal channels respectively connected to input signal sources, wherein the signal channel comprises two or more of a DisplayPort channel, a high definition multimedia channel, a digital video channel, and a video graphics array channel;
two or more trigger switch units respectively disposed corresponding to the signal channels, wherein when the signal channel is the DisplayPort channel, the high definition multimedia channel, or the digital video channel, the trigger switch unit comprises a hot plug electronic switch, and when the signal channel is the video graphics array channel, the trigger switch unit comprises a plurality of sub-switches corresponding to the respective color channels;
a switch control unit configured to determine a target signal channel according to a channel selection signal, and to turn on the trigger switch unit corresponding to the target signal channel and to turn off the other trigger switch units, wherein, when the switch control unit turns on the trigger switch unit corresponding to the target signal channel and turns off the other trigger switch units corresponding to the other signal channels, each of the signal channels directly determines whether the signal channel needs to be put into operation or standby through a hot plug detection of the hot plug electronic switch or through states of the sub-switches; and
a signal source control unit disposed at each of the input signal sources, and configured to control the operation and standby of the input signal source according to the turning on and turning off of the trigger switch unit of the signal channel connected to the input signal source, wherein the signal source control unit is configured to control the input signal source to be in operation when the input signal source receives a signal that the trigger switch unit is turned on, and to control the input signal source to be in standby when the input signal source receives a signal that the trigger switch unit is turned off, so as to reduce power consumption.
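The selection behaviour the claim describes can be illustrated with a small sketch (channel names are hypothetical; the real device does this with hot-plug detection hardware and per-source control units, not software):

```python
def select_channel(channels, target):
    """Turn on only the target channel's trigger switch; each
    input source then sees its own switch state and enters
    operation or standby accordingly, saving power."""
    states = {}
    for name in channels:
        on = (name == target)
        states[name] = "operation" if on else "standby"
    return states

states = select_channel(["DP", "HDMI", "DVI", "VGA"], "HDMI")
```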

US Pat. No. 10,659,726

SYSTEM FOR INSPECTING PIPELINES UTILIZING A WIRELESS DEVICE

1. A system for inspecting pipelines with a push cable inspection system utilizing a user supplied wireless device, the push cable inspection system includes a push cable, a video camera and a distance encoder, consisting of:
a data display unit receives a video signal from the video camera and a distance signal from the distance encoder, said data display unit overlays said distance signal on to said video signal to create a numeric video feed, said data display unit does not include a display screen for user interaction or control;
a WiFi transmitter receives said numeric video feed from said data display unit, said WiFi transmitter transmits said numeric video feed to the user supplied wireless device; and
an interface software program is installed on the user supplied wireless device, the user supplied wireless device does not provide control of said data display unit, said interface software program enables the numeric video feed to be stored and accessed for transferring, said numeric video feed goes directly to the user supplied wireless device without going through a control device.

US Pat. No. 10,659,725

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

REALTEK SEMICONDUCTOR COR...

1. An image processing device comprising:
a decoder configured to decode a video signal to generate a plurality of frames and frame order information indicating the order of the frames and to process the frames to generate auxiliary data associated with the frames, wherein an amount of data of the auxiliary data is smaller than an amount of data of the frames;
a film mode detection circuit coupled to the decoder and configured to determine whether the frames contain substantially the same frames by calculating frame similarities using the auxiliary data;
a frame sequence control circuit coupled to the film mode detection circuit and configured to select the frames according to whether the frames contain substantially the same frames to generate a plurality of selected frames; and
a video enhancement circuit coupled to the frame sequence control circuit and configured to perform video enhancement processing on the selected frames;
wherein when the frames do not contain substantially the same frames, the selected frames are the same as the frames, and when the frames contain substantially the same frames, the selected frames are part of the frames.
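A sketch of the selection step, assuming the auxiliary data is a compact per-frame signature (a thumbnail or histogram); the similarity measure and threshold are hypothetical, not Realtek's:

```python
def frame_similarity(aux_a, aux_b):
    """Similarity from compact auxiliary data; 1.0 means identical."""
    diff = sum(abs(a - b) for a, b in zip(aux_a, aux_b))
    return 1.0 / (1.0 + diff)

def select_frames(frames, aux, threshold=0.99):
    """Keep a frame only if its auxiliary data differs from the last
    kept frame's, dropping the repeats of a film cadence before the
    (expensive) video enhancement stage runs."""
    selected = [frames[0]]
    last_aux = aux[0]
    for frame, a in zip(frames[1:], aux[1:]):
        if frame_similarity(last_aux, a) < threshold:
            selected.append(frame)
            last_aux = a
    return selected

# 3:2-pulldown-like input: the A frame repeats twice, B three times.
frames = ["A1", "A2", "B1", "B2", "B3"]
aux = [[10], [10], [50], [50], [50]]
selected = select_frames(frames, aux)  # only the distinct frames survive
```

Comparing signatures instead of full frames is the point of the claim's "amount of data of the auxiliary data is smaller" limitation: the film-mode detector never touches full-resolution pixels.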

US Pat. No. 10,659,724

METHOD AND APPARATUS FOR PROVIDING DROPPED PICTURE IMAGE PROCESSING

ATI Technologies ULC, Ma...

1. An apparatus comprising:
a frame rate converter operative to generate corrupted picture indication information indicating a corrupted or repeated source frame, and to adaptively create a plurality of frame rate converted frames by using a plurality of alternate source frames output from a decoder instead of using a replacement source frame, the plurality of alternate source frames used to create a frame rate converted frame in the plurality of frame rate converted frames affected by the corrupted or repeated source frame including at least a neighboring previous source frame and a non-neighboring future source frame with respect to the frame rate converted frame, as well as a previous source frame prior to the neighboring previous source frame and a future source frame subsequent to the non-neighboring future source frame, wherein the neighboring previous source frame and the non-neighboring future source frame with respect to the frame rate converted frame are non-sequential, the neighboring previous source frame and the previous source frame prior to the neighboring previous source frame are sequential, and the non-neighboring future source frame and the future source frame subsequent to the non-neighboring future source frame are sequential; and
wherein the frame rate converter is operative to create the plurality of frame rate converted frames by generating motion vector information using the plurality of alternate source frames, and wherein the frame rate converter is operative to output the plurality of frame rate converted frames to a display.

US Pat. No. 10,659,723

DE-INTERLACING DATA ARRAYS IN DATA PROCESSING SYSTEMS

Arm Limited, Cambridge (...

1. A method of operating a data processing system that includes a scaler operable to scale a received input data array to provide a scaled output version of the input data array, the method comprising:
when it is desired to produce a de-interlaced and scaled output version of an input data array:
configuring the scaler with one or more scaling parameters;
providing the input data array to the scaler; and
the scaler scaling the input data array using the one or more scaling parameters so as to simultaneously de-interlace and scale the input data array and to thereby produce a de-interlaced and scaled output version of the input data array;
wherein one or more of the one or more scaling parameters are determined based on the ratio of the size of the de-interlaced and scaled output data array to the size of a de-interlaced version of the input data array; and
wherein the input data array comprises an even or an odd frame to be de-interlaced for display;
wherein one or more of the one or more scaling parameters are determined based on whether the input data array comprises an even frame or an odd frame;
wherein the one or more scaling parameters include an initial offset;
wherein the initial offset is determined based on whether the input data array comprises an even frame or an odd frame; and
wherein the scaler is operable to use a negative value of the initial offset when the input data array comprises an even frame.
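The parameter derivation the claim describes can be sketched as follows; the offset magnitude of 0.5 is an assumption (a half-line shift between fields is the usual geometry), not a value taken from the patent:

```python
def scaling_parameters(field_height, out_height, is_even_field):
    """Vertical scale step from the ratio of the output size to the
    de-interlaced input size; the initial offset's sign flips with
    field parity so even and odd fields land on the same grid."""
    deinterlaced_height = field_height * 2   # full-frame height
    step = deinterlaced_height / out_height  # vertical scale step
    offset = 0.5                             # assumed half-line magnitude
    if is_even_field:
        offset = -offset                     # negative for even fields
    return step, offset

# A 240-line field de-interlaced and scaled to a 480-line output.
step, offset = scaling_parameters(240, 480, is_even_field=True)
```

Because the scaler interpolates between lines anyway, folding the field offset into its initial phase de-interlaces and scales in a single pass, which is the efficiency point of the claim.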

US Pat. No. 10,659,722

VIDEO SIGNAL RECEIVING APPARATUS AND VIDEO SIGNAL RECEIVING METHOD

RENESAS ELECTRONICS CORPO...

1. A video signal receiving apparatus comprising:
a reception processing unit which receives a first video signal for transmitting a same video content with a first aspect ratio and a second video signal for transmitting the same video content with a second aspect ratio different from the first aspect ratio respectively;
a first video adjustment unit which adjusts a size of a first video included in the first video signal;
a second video adjustment unit which adjusts a size of a second video included in the second video signal; and
an arithmetic processing unit which performs image size determination processing for determining a size adjustment amount of the second video adjusted by the second video adjustment unit,
wherein the second video adjustment unit performs scaling processing for reducing or enlarging a second image included in the second video signal to generate a scaling image, and performs shift processing for shifting the second image in a horizontal or vertical direction to generate a shift image,
wherein the arithmetic processing unit:
calculates a similarity degree between a first image included in the first video signal and the scaling image, and calculates a similarity degree between the first image and the shift image; and
uses the scaling image or the shift image having the higher calculated similarity degree as an image to be subjected to a next scaling processing and a next shift processing in the second video adjusting unit,
wherein the arithmetic processing unit:
sets the higher of the similarity degree between the first image and the scaling image and the similarity degree between the first image and the shift image to a maximum value of the similarity degree; and
if the higher of a similarity degree between a scaling image obtained by performing the next scaling processing and the first image and a similarity degree between a shift image obtained by performing the next shift processing and the first image is lower than the set maximum value of the similarity degree, determines a scaling amount and a shift amount of the scaling image or the shift image having the higher similarity degree as the size adjustment amount of the second video, and
wherein the second video adjustment unit adjusts the size of the second video using the determined size adjustment amount of the second video.
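The iterate-until-similarity-drops procedure amounts to a greedy hill climb over scaling and shift amounts. A sketch under that reading (the step sizes and the toy similarity function are illustrative, not Renesas's):

```python
def match_aspect(first, second, similarity, scale_step=0.1, shift_step=1):
    """Greedy search: each round, try one more scaling step and one
    more shift step against the reference image, keep whichever is
    more similar, and stop when similarity stops improving. Returns
    the accumulated size adjustment amount (scale, shift)."""
    scale, shift = 1.0, 0
    best = similarity(first, second, scale, shift)
    while True:
        s_scale = similarity(first, second, scale + scale_step, shift)
        s_shift = similarity(first, second, scale, shift + shift_step)
        if max(s_scale, s_shift) < best:   # past the maximum: stop
            return scale, shift
        if s_scale >= s_shift:
            scale += scale_step
            best = s_scale
        else:
            shift += shift_step
            best = s_shift

def sim(first, second, scale, shift):
    # Toy similarity with a single peak at scale 1.2, shift 2
    # (stands in for a real image comparison).
    return -abs(scale - 1.2) - abs(shift - 2)

scale, shift = match_aspect("img1", "img2", sim)
```

This matches the claim's structure: the better of the scaled and shifted candidates seeds the next round, and the adjustment amounts are frozen as soon as a round fails to beat the recorded maximum similarity.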

US Pat. No. 10,659,721

METHOD OF PROCESSING A SEQUENCE OF CODED VIDEO FRAMES

ARRIS Enterprises LLC, S...

1. A method of processing a sequence of coded video frames conveyed by a digital data stream, where each frame represents an image, comprising:
a. receiving the sequence of coded video frames at a recording device, and recording the sequence of coded video frames,
b. determining a frame interval between presentation of an ith coded frame of the sequence and an (i+1)th coded frame of the sequence,
c. at the recording device, calculating a stream time stamp for the ith coded frame, wherein the calculated stream time stamp increases monotonically throughout the sequence of coded video frames as it is recorded, and
d. at the recording device, calculating a stream time stamp for the (i+1)th coded video frame based on the stream time stamp for the ith coded video frame and the frame interval determined in step b;
the method further comprising:
e. initializing a frame interval variable (Int_fr) with a frame interval value based on a nominal frame rate;
f. receiving a first frame and reading the presentation time stamp value of the frame, assigning the presentation time stamp value of the first frame to a presentation time stamp variable (PTS), assigning a value of PTS to a clock start variable (Clock_start) for representing the first frame's clock time based on presentation time stamp, assigning a value of PTS to a clock presentation time stamp variable (Clock_pts) for representing a frame's clock time based on presentation time stamp, and assigning a value of zero to a stream time stamp variable (ST*) for representing a stream time stamp of interest;
g. receiving a next frame and reading the presentation time stamp value of said next frame, assigning the presentation time stamp value of said next frame to PTS, assigning the updated value of PTS to the variable Clock_pts, and assigning a value (ST*+Int_fr) to the variable ST*; and
h. testing whether there is a discontinuity in presentation time stamp value between said next frame and the previous frame and, if so, assigning the updated value of ST* to a second stream time stamp variable (ST_dis*) for representing the value of the stream time stamp of interest at a point of discontinuity in presentation time stamp and assigning the value of PTS to the variable Clock_start and storing said next frame in a database using the value of ST* as an index.
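Steps e through h map almost directly onto code. A sketch, with an assumed nominal interval of 3000 clock ticks and an assumed discontinuity threshold (the claim specifies neither value):

```python
def stream_time_stamps(pts_list, nominal_interval=3000,
                       discontinuity_limit=10000):
    """Derive a monotonically increasing stream time stamp (ST*) for
    each frame, even when the incoming presentation time stamps (PTS)
    jump at a splice point. Frames are indexed by ST*."""
    int_fr = nominal_interval       # step e: frame interval variable
    pts = pts_list[0]               # step f: first frame's PTS
    clock_start = pts               # clock time of the first frame
    st = 0                          # ST*: stream time stamp of interest
    index = {st: pts}
    st_dis = None                   # ST_dis*: ST* at a discontinuity
    for next_pts in pts_list[1:]:   # step g: each subsequent frame
        prev_pts = pts
        pts = next_pts
        st += int_fr                # ST* advances by the frame interval
        if abs(pts - prev_pts) > discontinuity_limit:   # step h
            st_dis = st             # remember ST* at the discontinuity
            clock_start = pts       # restart the PTS-based clock
        index[st] = pts             # store the frame keyed by ST*
    return index, clock_start, st_dis

# PTS jumps from 6000 to 500000 mid-stream; ST* keeps counting evenly.
index, clock_start, st_dis = stream_time_stamps(
    [0, 3000, 6000, 500000, 503000])
```

The monotonic ST* is what makes trick-play and seeking on the recording work: playback position is computed from ST*, not from the spliced, discontinuous PTS values.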

US Pat. No. 10,659,720

IMAGE SENSING SYSTEM THAT REDUCES POWER CONSUMPTION AND AREA AND OPERATING METHOD OF THE SAME

Samsung Electronics Co., ...

1. An image sensing system comprising:
a pixel array comprising a first pixel, a second pixel, and a third pixel interposed between the first pixel and the second pixel;
an analog-to-digital converter circuit configured to convert a first image signal received from the first pixel to first image data during a first sensing time, to convert a second image signal received from the second pixel to second image data during the first sensing time, and to convert a third image signal received from the third pixel to third image data during a second sensing time following the first sensing time; and
a memory in which the first image data and the second image data are written during a first write time and in which the third image data are written during a second write time following the first write time,
wherein the first to third pixels are arranged in a row direction,
wherein the memory comprises a first line storage area in which the first to third image data are stored, and
wherein the first line storage area comprises:
an odd-numbered pixel storage area in which the first image data and the second image data are written during the first write time; and
an even-numbered pixel storage area in which the third image data are written during the second write time.

US Pat. No. 10,659,719

MOUNTING STRUCTURE

HONGFUJIN PRECISION ELECT...

1. A mounting structure for removably mounting a cover body to an external component of the cover body, the mounting structure comprising:
a limiting member coupled to the cover body;
a latching member, one end of the latching member coupled to the limiting member, and another end of the latching member configured to latch with the external component; and
a positioning member defining a positioning slot and a releasing slot, the positioning slot continuous with the releasing slot; wherein:
the latching member is selectively received in either the positioning slot or the releasing slot;
the latching member is movably coupled to the positioning member;
when the latching member is moved to be received within the positioning slot, the latching member is locked by the positioning member and latched with the external component; and
when the latching member is moved to be received within the releasing slot, the latching member is released by the positioning member and released from latching with the external component.

US Pat. No. 10,659,718

IMAGE DISPLAY APPARATUS

LG ELECTRONICS INC., Seo...

1. An image display apparatus comprising:
a display including a first electrode and a second electrode, for wireless power reception;
a signal processor disposed apart from the display, and including a third electrode and a fourth electrode, for wireless power transmission; and
a first bridge electrode and a second bridge electrode, having one ends spaced apart from and facing the first electrode and the second electrode, and other ends spaced apart from and facing the third electrode and the fourth electrode,
wherein wireless power is transferred from the signal processor to the display by the first and second bridge electrodes, based on capacitance between the first and second bridge electrodes and the first and second electrodes, and capacitance between the first and second bridge electrodes and the third and fourth electrodes, respectively.

US Pat. No. 10,659,717

AIRBORNE OPTOELECTRONIC EQUIPMENT FOR IMAGING, MONITORING AND/OR DESIGNATING TARGETS

THALES, Courbevoie (FR)

1. An airborne optronic equipment item comprising:
at least one image sensor suitable for acquiring a plurality of images of a region flown over by a carrier of said equipment item; and
a data processor configured or programmed to:
receive at least one of said acquired images and transmit said at least one acquired image to a display device;
access a database of images of said region flown over from a source other than the at least one image sensor, said database comprising a numerical model of terrain of said region, and a plurality of ortho-rectified air or satellite images or SAR of said region, said images being geolocated;
extract from said database information making it possible to synthesize a virtual image of said region which would be seen by an observer situated at a predefined observation point and looking, with a predefined field of view, along a predefined line of sight;
synthesize said virtual image by projection of one or more of said ortho-rectified air or satellite images onto said numerical model of the terrain; and
transmit said virtual image to said or to another display device.

US Pat. No. 10,659,716

IMAGING ELEMENT, IMAGING METHOD AND ELECTRONIC APPARATUS

Sony Corporation, Tokyo ...

1. A light detecting device comprising:
a first pixel configured to output a first signal;
a second pixel configured to output a second signal, the second pixel disposed adjacent to the first pixel in a first direction;
a first signal line coupled to the first pixel and configured to convey the first signal;
a second signal line coupled to the second pixel and configured to convey the second signal;
a shield line disposed between the first signal line and the second signal line, wherein the first signal line, the second signal line and the shield line extend in the first direction; and
a potential line coupled to the shield line, the potential line extending in a second direction which is perpendicular to the first direction,
wherein a potential of the shield line and the potential line is a predetermined voltage.

US Pat. No. 10,659,715

FRACTIONAL-READOUT OVERSAMPLED IMAGE SENSOR

Rambus Inc., Sunnyvale, ...

1. A method of operation within an integrated-circuit image sensor having a pixel array, the method comprising:
reading the pixel array after each of a plurality of sub-frame intervals to generate a corresponding plurality of sub-frame readouts, the sub-frame intervals transpiring sequentially within a frame interval;
selectively combining the sub-frame readouts to generate a digital image corresponding to the frame interval, including excluding from the digital image (i) over-threshold pixel values within a first one of the sub-frame readouts and (ii) under-threshold pixel values within a second one of the sub-frame readouts.
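A sketch of the combining step on a single pixel row; the summation, the threshold values, and the "first readout saturates, second readout is noisy" interpretation are assumptions for illustration:

```python
def combine_subframes(subframes, hi_threshold, lo_threshold):
    """Sum per-pixel values across sub-frame readouts, excluding
    over-threshold pixels in the first readout (likely saturated)
    and under-threshold pixels in the second (likely noise)."""
    width = len(subframes[0])
    out = [0] * width
    for i in range(width):
        for k, readout in enumerate(subframes):
            v = readout[i]
            if k == 0 and v > hi_threshold:
                continue            # drop saturated first-readout value
            if k == 1 and v < lo_threshold:
                continue            # drop noisy second-readout value
            out[i] += v
    return out

# Three sub-frame readouts of a two-pixel row.
subframes = [[10, 200], [3, 50], [12, 40]]
image = combine_subframes(subframes, hi_threshold=100, lo_threshold=5)
```

Excluding outlier sub-frame samples rather than clipping the final sum is what lets oversampled sensors extend dynamic range without extra readout precision.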

US Pat. No. 10,659,714

IMAGE SENSOR AND ELECTRONIC DEVICE WITH ACTIVE RESET CIRCUIT, AND METHOD OF OPERATING THE SAME

Sony Corporation, Tokyo ...

1. An image sensor, comprising:
pixel circuitry including:
a light sensing element;
a charge storage node coupled to the light sensing element;
an output transistor configured to output a signal that is based on a potential of the charge storage node to an output line;
a selection transistor; and
a reset transistor coupled to the charge storage node and a power supply line, and first circuitry including:
a switching transistor that couples a power supply node to the power supply line;
a first transistor coupled to a power source and the selection transistor;
a second transistor coupled to the power source, a gate of the second transistor coupled to a gate of the first transistor;
a third transistor coupled to the second transistor; and
a fourth transistor coupled to the third transistor and the output line.

US Pat. No. 10,659,713

IMAGE SENSING DEVICE

Canon Kabushiki Kaisha, ...

1. An image sensing device comprising:
a pixel array in which a plurality of pixels are arranged in a matrix, and each pixel includes first and second photoelectric converters that share a microlens;
a plurality of column signal lines each connected to pixels arranged in a corresponding column out of the plurality of pixels; and
a readout circuit including a plurality of column circuits each configured to receive a signal of a corresponding column signal line out of the plurality of column signal lines,
wherein each of the plurality of pixels includes first and second driving elements respectively configured to output signals corresponding to charges respectively generated by the first and second photoelectric converters to a corresponding column signal line out of the plurality of column signal lines, and
in a state in which signals are output from both the first and second driving elements included in a selected pixel to a column signal line corresponding to each of the plurality of column circuits, the readout circuit reads out a signal of the column signal line corresponding to the column circuit.

US Pat. No. 10,659,712

SIGNAL TRANSFER CIRCUIT AND IMAGE SENSOR INCLUDING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. A signal transfer circuit comprising:
a transmission circuit configured to output a driving signal to a signal line;
a conversion circuit configured to receive an input signal that is a single-ended signal transferred through the signal line, and configured to convert the input signal to a differential signal including a first output amplified signal and a second output amplified signal, the first output amplified signal swinging downwardly from a first output DC level, the second output amplified signal swinging upwardly from a second output DC level that is lower than the first output DC level; and
a sensing output circuit configured to generate an output signal based on the differential signal,
wherein the conversion circuit includes:
a first amplifier configured to amplify the input signal to generate a first intermediate amplified signal that swings downwardly from a first intermediate DC level;
a second amplifier configured to amplify the first intermediate amplified signal to generate a second intermediate amplified signal that swings upwardly from a second intermediate DC level; and
a level adjustment circuit configured to adjust the first intermediate DC level of the first intermediate amplified signal and the second intermediate DC level of the second intermediate amplified signal to generate the first output amplified signal and the second output amplified signal.

US Pat. No. 10,659,711

SOLID-STATE IMAGING APPARATUS

SHARP KABUSHIKI KAISHA, ...

1. A solid-state imaging apparatus comprising:
a driving device that reads image data from an imaging unit, wherein
the driving device,
when reading the image data from the imaging unit, renders the image data composed of image data having V rows and H columns and blanking data having V′ rows and H′ columns, and
renders an amount of the image data changeable for at least every one frame so as to change the V′ rows and the H′ columns of the blanking data, wherein
the V′ rows and H′ columns of the blanking data satisfy inequalities (1) and (2):

where Fxftrg is a frequency of an XFTRG signal that is input from outside the solid-state imaging apparatus, Fclk is a frequency of a CLK signal that is generated inside the solid-state imaging apparatus to synchronize with the XFTRG signal and Z is an integer.

US Pat. No. 10,659,710

A/D CONVERSION DEVICE, GRAY CODE GENERATION DEVICE, SIGNAL PROCESSING DEVICE, IMAGING ELEMENT, AND ELECTRONIC DEVICE

SONY CORPORATION, Tokyo ...

1. A gray code generation device, comprising:
first circuitry configured to generate a delayed input clock signal based on a first delay in an input clock signal;
a first plurality of phase interpolators configured to generate a plurality of multi-phase clock signals with a second delay, based on the input clock signal and the delayed input clock signal, wherein
each of the generated plurality of multi-phase clock signals has a phase shift with respect to the input clock signal, and
the second delay is based on a phase difference between the input clock signal and the delayed input clock signal; and
second circuitry that includes a second plurality of phase interpolators and logic gates, wherein the second circuitry is configured to generate gray codes from the plurality of multi-phase clock signals.
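The claim generates the codes with phase interpolators and logic gates in hardware; the property being produced is the standard binary-reflected Gray code, sketched in software here for reference (this is the code mapping, not the patented circuit):

```python
def to_gray(n):
    """Binary-to-Gray mapping: adjacent codes differ in exactly one
    bit, which is why Gray-coded counters avoid multi-bit glitches
    when latched asynchronously in an ADC."""
    return n ^ (n >> 1)

codes = [to_gray(n) for n in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]
```

The single-bit-change property is what the multi-phase clock scheme exploits: each Gray bit toggles at a different clock phase, so no two bits ever race.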

US Pat. No. 10,659,708

SOLID-STATE IMAGING DEVICE, METHOD OF DRIVING THE SAME, AND ELECTRONIC APPARATUS

Sony Corporation, Tokyo ...

1. An imaging device, comprising:
a pixel array unit including first, second, third, and fourth pixels;
wherein the first, second, and third pixels are coupled to first, second, and third transfer transistors respectively,
wherein the fourth pixel is coupled to a fourth and a fifth transfer transistor,
wherein each of the first, second, third and fourth transfer transistors is coupled to a first floating diffusion;
wherein the fifth transfer transistor is coupled to a reading transistor,
wherein the reading transistor is coupled to a second floating diffusion,
wherein turning on the reading transistor couples the fifth transfer transistor to the second floating diffusion,
wherein the first floating diffusion is coupled to a source of a first reset transistor and a gate of a first amplification transistor via a first wiring,
wherein the first amplification transistor and the first reset transistor are arranged along a first line,
wherein the second floating diffusion is coupled to a source of a second reset transistor and a gate of a second amplification transistor via a second wiring,
wherein the fourth pixel is a phase difference detection pixel,
wherein the second amplification transistor and the second reset transistor are arranged along a second line, and
wherein the first line is parallel to the second line.

US Pat. No. 10,659,707

IMAGING APPARATUS AND IMAGING METHOD, CAMERA MODULE, AND ELECTRONIC APPARATUS CAPABLE OF DETECTING A FAILURE IN A STRUCTURE IN WHICH SUBSTRATES ARE STACKED

Sony Semiconductor Soluti...

1. An imaging apparatus comprising:
a first substrate including a pixel and a pixel control line; and
a second substrate, the first substrate and the second substrate being stacked on each other, wherein
the second substrate includes a row drive unit and a failure detector,
one end of the pixel control line is connected to the row drive unit via a first connection electrode,
the other end of the pixel control line is connected to the failure detector via a second connection electrode,
the row drive unit supplies a control signal for controlling operation of the pixel to the pixel control line via the first connection electrode, and
the failure detector detects a failure in accordance with the control signal supplied via the first connection electrode, the pixel control line, and the second connection electrode.

US Pat. No. 10,659,706

PHOTOELECTRIC CONVERSION DEVICE AND IMAGING SYSTEM

CANON KABUSHIKI KAISHA, ...

1. A photoelectric conversion device comprising:
a plurality of pixels each of which includes a photoelectric converter that generates charges by photoelectric conversion, a first transfer unit that is connected to the photoelectric converter and transfers charges in the photoelectric converter to a first holding portion, a second transfer unit that is connected to the first holding portion and transfers charges in the first holding portion to a second holding portion, an amplifier unit that is connected to the second holding portion and outputs a signal based on charges held in the second holding portion, and a third transfer unit that is connected to the photoelectric converter and transfers charges in the photoelectric converter to a node as an overflow drain to which a power source voltage is supplied; and
a control unit that, in an exposure period in which signal charges are accumulated in the photoelectric converter, changes a potential barrier formed by the third transfer unit with respect to the signal charges accumulated in the photoelectric converter from a first level to a second level that is higher for the signal charges than the first level.

US Pat. No. 10,659,705

SOLID-STATE IMAGING DEVICE AND ELECTRONIC DEVICE WITH TRANSISTOR GROUPS

Sony Corporation, Tokyo ...

1. An imaging device comprising:
a first layer comprising:
a first transistor group;
a second transistor group including a reset transistor and a switch transistor; and
a photoelectric conversion section disposed between the first transistor group and the second transistor group;
a second layer comprising a floating diffusion; and
a third layer comprising an additional capacitance,
wherein the switch transistor is configured to selectively enable and disable the additional capacitance to change a conversion efficiency of the floating diffusion.

US Pat. No. 10,659,704

IMAGING DEVICE

PANASONIC INTELLECTUAL PR...

1. An imaging device comprising:
a pixel that outputs a pixel signal corresponding to an amount of incident light;
an output signal line that is connected to the pixel to allow the pixel signal from the pixel to be output to the output signal line;
a first transistor that has a first gate, a first source, and a first drain, one of the first source and the first drain being connected to the output signal line to allow the pixel signal to be output from the output signal line; and
a first circuit that is connected to the first gate, the first circuit being configured to generate a third voltage that is a voltage between a first voltage and a second voltage, the first voltage being a voltage for turning on the first transistor, the second voltage being a voltage for turning off the first transistor.

US Pat. No. 10,659,703

IMAGING DEVICE AND IMAGING METHOD FOR CAPTURING A VISIBLE IMAGE AND A NEAR-INFRARED IMAGE

JVC KENWOOD CORPORATION, ...

1. An imaging device comprising:an optical system that images a light from a subject;
an illumination device that illuminates the subject with visible light or near-infrared light as excitation light;
a beam splitter that disperses the infrared light that is emitted from the subject illuminated by the excitation light, into a first light in a first wavelength range, and a second light in a second wavelength range whose wavelength is longer than the first wavelength range;
a color imaging device that has a red filter capable of imaging the near-infrared light and images the first light in the first wavelength range;
a band-stop filter that is provided in front of the color imaging device and cuts a wavelength of the excitation light; and
a near-infrared light imaging device that images the second light in the second wavelength range, wherein:
a pixel pitch of the near-infrared light imaging device is larger than a pixel pitch of the color imaging device, and
a sampling position of the near-infrared light imaging device is displaced in a pixel arrangement horizontally or vertically with respect to a sampling position for red of the color imaging device.

US Pat. No. 10,659,702

IMAGE CAPTURING APPARATUS THAT MATCHES AN IMAGING RANGE WITH AN IRRADIATION RANGE

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus comprising:a first imaging portion;
a second imaging portion;
a first illumination portion;
a second illumination portion;
a first holding portion configured to hold the first imaging portion;
a second holding portion configured to hold the second imaging portion;
a third holding portion configured to hold the first illumination portion;
a fourth holding portion configured to hold the second illumination portion;
an imaging portion guide, being circular, configured to guide the first holding portion and the second holding portion in a circumferential direction; and
an illumination portion guide, being circular, configured to guide the third holding portion and the fourth holding portion in the circumferential direction.

US Pat. No. 10,659,701

METHOD AND SYSTEM FOR MULTIPLE F-NUMBER LENS

Magic Leap, Inc., Planta...

1. An imaging system comprising:a near infrared (NIR) light source configured to emit a plurality of NIR light pulses toward one or more first objects, wherein a portion of each of the plurality of NIR light pulses is reflected off of the one or more first objects;
one or more lens elements configured to receive and focus the portion of each of the plurality of NIR light pulses reflected off of the one or more first objects onto an image plane, and to receive and focus visible light reflected off of one or more second objects onto the image plane;
an aperture stop;
a filter positioned at the aperture stop, the filter including:
a central region with a first linear dimension, the central region being characterized by higher transmittance values in one or more wavelength ranges than in other wavelength ranges, wherein the one or more wavelength ranges include an NIR wavelength range and a visible wavelength range; and
an outer region surrounding the central region with a second linear dimension greater than the first linear dimension, the outer region being characterized by higher transmittance values in the NIR wavelength range than in the visible wavelength range; and
an image sensor positioned at the image plane, the image sensor including a two-dimensional array of pixels, wherein the image sensor is configured to:
detect a two-dimensional intensity image of the one or more second objects in an unbinned pixel mode, wherein the two-dimensional intensity image is formed by light in the visible wavelength range transmitted through only the central region of the filter; and
detect a time-of-flight three-dimensional image of the one or more first objects in a binned pixel mode in which each respective group of two or more adjacent pixels are binned as a binned pixel, wherein the time-of-flight three-dimensional image is formed by light in the NIR wavelength range transmitted through both the central region and the outer region of the filter.
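As a toy illustration of the binned pixel mode in this claim (a sketch, not the patented read-out; a 2x2 group size is assumed), adjacent pixels can be summed into one binned pixel:

```python
# Toy sketch: sum each 2x2 group of adjacent pixels into one binned pixel,
# as used here for the NIR time-of-flight read-out; the visible-light image
# would instead be read in the unbinned pixel mode.
def bin_2x2(image):
    """Bin a 2D list of pixel values by summing 2x2 blocks."""
    h, w = len(image), len(image[0])
    return [[image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
print(bin_2x2(img))  # [[14, 22]]
```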

US Pat. No. 10,659,700

MOBILE TERMINAL AND METHOD FOR FILLING LIGHT FOR SAME

HISENSE MOBILE COMMUNICAT...

1. A method for filling light for a mobile terminal which comprises a main body, a first screen located on a front surface of the main body and a second screen located on a back surface of the main body, the method comprising:upon receiving an activating instruction for a target camera, activating the target camera;
displaying an image acquired by the target camera on a display screen which is located on a surface different from a surface where the target camera is located;
detecting brightness of present ambient light, and
in response to that the brightness of the present ambient light is less than a preset brightness threshold, illuminating a display screen on the same surface where the target camera is located;
wherein before the illuminating the display screen on the same surface where the target camera is located, the method further comprises:
determining that a distance between the target camera and a subject is smaller than a preset distance threshold,
wherein a zoom motor is provided in the main body;
wherein the determining that the distance between the target camera and the subject is smaller than the preset distance threshold comprises:
detecting a present current in the zoom motor;
determining, according to a pre-stored correspondence between a current value and a lens pushing distance, a moving distance corresponding to the present current for the zoom motor to push the target camera;
determining, according to a pre-stored correspondence between a lens pushing distance and a shooting distance, that the moving distance for the zoom motor to push the target camera is less than the preset distance threshold, wherein the shooting distance is a distance between the target camera and the subject.
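The two-table determination recited above can be sketched as follows (a hedged illustration only: the table contents, current values, and threshold are invented examples, not from the patent):

```python
# Hypothetical pre-stored correspondences:
CURRENT_TO_PUSH_MM = {10: 0.1, 20: 0.2, 30: 0.3}    # motor current (mA) -> lens push (mm)
PUSH_MM_TO_SHOOT_CM = {0.1: 100, 0.2: 50, 0.3: 20}  # lens push (mm) -> shooting distance (cm)
PRESET_DISTANCE_CM = 60                              # hypothetical distance threshold

def subject_is_near(present_current_ma: int) -> bool:
    """Map the detected zoom-motor current to a shooting distance via the
    two tables, then compare with the preset distance threshold."""
    push_mm = CURRENT_TO_PUSH_MM[present_current_ma]
    shoot_cm = PUSH_MM_TO_SHOOT_CM[push_mm]
    return shoot_cm < PRESET_DISTANCE_CM

print(subject_is_near(30))  # True: macro-range current implies a near subject
print(subject_is_near(10))  # False: far-range current
```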

US Pat. No. 10,659,699

APPARATUS AND METHOD FOR RECONSTRUCTING A THREE-DIMENSIONAL PROFILE OF A TARGET SURFACE

ASM TECHNOLOGY SINGAPORE ...

1. An apparatus for reconstructing a three-dimensional profile of a target surface of an object, the apparatus comprising:a lighting apparatus having at least a first mode of illumination to illuminate the target surface and at least a second mode of illumination to illuminate the target surface independent of the first mode of illumination, the lighting apparatus having a first lighting device to produce the first mode of illumination, which is a pattern onto the target surface and a second lighting device to produce the second mode of illumination, which illuminates every part of the target surface without a corresponding pattern, wherein light from the first and second modes of illumination are arranged to illuminate the target surface along a same angle of incidence and a ratio (R) of a difference of intensity between the first and second modes of illumination (I1-I2) against an intensity of the second mode of illumination (I2), such that R=(I1-I2)/I2, is constant across the profile of the target surface;
a lighting control apparatus operative to sequentially activate only one of the at least two modes of illumination at a time;
an imaging device operative to sequentially capture a first image of the target surface when the target surface is illuminated by the first mode of illumination and a second image of the target surface when the target surface is illuminated by the second mode of illumination respectively; and
a processor for reconstructing the three-dimensional profile of the target surface based on a combination of image characteristics obtained from only the first and second images of the target surface as captured by the imaging device.
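The intensity relationship in this claim reduces to a simple per-pixel computation; the sketch below (with made-up intensity values) shows the constant-ratio condition R = (I1 - I2)/I2:

```python
def ratio_map(i1_pixels, i2_pixels):
    """Per-pixel ratio R = (I1 - I2) / I2 between the patterned-mode
    image I1 and the uniform-mode image I2."""
    return [(i1 - i2) / i2 for i1, i2 in zip(i1_pixels, i2_pixels)]

# Hypothetical pixels where I1 = 1.5 * I2 everywhere, so R = 0.5 at every
# point, i.e. constant across the target surface as the claim requires.
i2 = [40.0, 80.0, 120.0]
i1 = [60.0, 120.0, 180.0]
print(ratio_map(i1, i2))  # [0.5, 0.5, 0.5]
```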

US Pat. No. 10,659,698

METHOD TO CONFIGURE A VIRTUAL CAMERA PATH

Canon Kabushiki Kaisha, ...

1. A method for an apparatus to configure a path of a virtual camera, the method comprising:receiving user steering information of a user to control the path of the virtual camera in a scene;
determining a primary target based upon a field of view of the virtual camera;
estimating, based on the received steering information, a future path and a corresponding future field of view of the virtual camera;
determining a secondary target of the scene proximate to the estimated future path of the virtual camera based on a preferred perspective of the secondary target; and
configuring the path to capture the secondary target from the preferred perspective and the primary target in a resultant field of view of the virtual camera.

US Pat. No. 10,659,697

PORTABLE ELECTRONIC DEVICE WITH RETRACTABLE ANTENNA ROD FOR A CAMERA

1. A portable electronic device comprising:a housing defining an interior space and a longitudinal axis, the housing comprising:
a surface,
an opening through the surface, the opening separate from the interior space;
a retractable antenna rod having a first end and a second end that is opposite the first end, wherein the retractable antenna rod is coupled at the first end, and wherein the retractable antenna rod is configured to extend from the interior space in a direction parallel to the longitudinal axis of the housing and into an extended position;
a camera coupled to the second end of the retractable antenna rod, wherein the camera is configured to pass through the surface of the housing, via the opening, and into the interior space when the retractable antenna rod retracts into a retracted position within the interior space of the housing;
one or more electrical actuators operably coupled to the retractable antenna rod within the interior space;
a touch-screen display affixed to the housing; and
an electronic processor located in the interior space, the electronic processor electrically coupled to the touch-screen display and to the one or more electrical actuators, the electronic processor configured to:
receive a signal from the touch-screen display,
determine a movement direction of the retractable antenna rod based on the signal, and
move the retractable antenna rod in the movement direction via the one or more electrical actuators.

US Pat. No. 10,659,696

ELECTRONIC DEVICE, CONTROL METHOD OF ELECTRONIC DEVICE, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A display control device comprising:at least one memory and at least one processor which function as a display control unit that performs control to switch an image to be displayed from a plurality of images, on a display unit, in a predetermined order, in accordance with a predetermined operation for switching an image to be displayed to an image apart by a predetermined number of images from a current image in the predetermined order,
wherein, when an image apart by the predetermined number of images from the current image in the predetermined order belongs to a first group which is the same group as the current image, in accordance with the predetermined operation, the display control unit performs control not to switch an image to be displayed from the current image to the image apart by the predetermined number of images but instead to switch an image to be displayed from the current image to an image apart by a number of images which is greater than a number of a plurality of images included in the first group in the predetermined order.
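A minimal sketch of the claimed jump behavior (group labels and the exact extended jump length are assumptions for illustration): when the jump target is still in the current image's group, the display jumps farther than the whole group instead.

```python
def next_index(groups, current, jump):
    """Index of the next image to display: jump `jump` images ahead, but if
    that target shares the current image's group, switch instead to an image
    apart by more images than the group contains."""
    target = current + jump
    if groups[target] == groups[current]:
        group_size = groups.count(groups[current])
        target = current + group_size + 1  # strictly more than the group size
    return target

groups = ["A"] * 6 + ["B"] * 3  # six images in group A, then three in group B
print(next_index(groups, 0, 3))  # 7: target 3 is still in A, so skip past A
print(next_index(groups, 0, 8))  # 8: target already outside A, jump as-is
```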

US Pat. No. 10,659,695

GIMBAL CONTROL METHOD, GIMBAL CONTROL SYSTEM AND GIMBAL DEVICE

Haoxiang Electric Energy ...

1. A gimbal control system, comprising:a processor, for obtaining simulation position information, measurement position information and simulation angular velocity information of a Pitch axis of a gimbal in real-time;
a first comparator, for calculating a first position error between the simulation position information of the Pitch axis and the measurement position information of the Pitch axis;
a first proportional-integral-derivative (PID) controller, for processing the first position error with proportional-derivative calculation, wherein: the first position error is compensated with the simulation angular velocity information of the Pitch axis during the proportional-derivative calculation; the first PID controller is further for generating a first torque control instruction for controlling a torque of a Pitch axis motor according to a result of the proportional-derivative calculation after compensating, so as to enable the Pitch axis to reach a position corresponding to the simulation position information of the Pitch axis;
wherein:
the processor is further for obtaining simulation position information, measurement position information and simulation angular velocity information of a Yaw axis of the gimbal in real-time; and
the gimbal control system further comprises:
a second comparator, for calculating a second position error between the simulation position information of the Yaw axis and the measurement position information of the Yaw axis; and
a second PID controller, for processing the second position error with the proportional-derivative calculation, wherein: the second position error is compensated with the simulation angular velocity information of the Yaw axis during the proportional-derivative calculation; the second PID controller is further for generating a second torque control instruction for controlling a torque of a Yaw axis motor according to a result of the proportional-derivative calculation after compensating, so as to enable the Yaw axis to reach a position corresponding to the simulation position information of the Yaw axis.
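The per-axis control step can be sketched as a proportional-derivative update with angular-velocity compensation (an illustration under assumed gains, not the patented tuning); the same function serves the Pitch and Yaw axes with their respective inputs:

```python
KP, KD = 2.0, 0.5  # hypothetical proportional / derivative gains

def torque_command(sim_pos, meas_pos, sim_ang_vel):
    """One PD step for a single axis: the position error from the comparator,
    with the derivative term compensated by the simulated angular velocity,
    becomes a torque control instruction for the axis motor."""
    position_error = sim_pos - meas_pos
    return KP * position_error + KD * sim_ang_vel

print(torque_command(10.0, 8.0, 1.0))  # 2.0*2.0 + 0.5*1.0 = 4.5
```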

US Pat. No. 10,659,694

IMAGING DEVICE, IMAGING METHOD AND IMAGING DEVICE CONTROL PROGRAM

FUJIFILM Corporation, Mi...

1. An imaging device comprising:a stage on which a vessel having an observation target received therein is installed;
an imaging optical system that forms an image of the observation target within the vessel;
an actuator that moves at least one of the stage or the imaging optical system in a main scanning direction and a sub-scanning direction orthogonal to the main scanning direction, and moves the at least one of the stage or the imaging optical system forward and backward in the main scanning direction;
an imaging element that receives the image formed by the imaging optical system, and outputs an image signal of the observation target; and
a central processing unit that performs shake correction for correcting a shake caused by movement of at least one of the stage or the imaging optical system on the image signal which is output from the imaging element,
wherein the central processing unit switches a correction filter used in the shake correction in accordance with a movement direction of at least one of the stage or the imaging optical system in the main scanning direction,
wherein the correction filter is asymmetric about a pixel position of a correction target in a direction corresponding to the main scanning direction.

US Pat. No. 10,659,693

IMAGE PROCESSING DEVICE AND METHOD OF CORRECTING IMAGES

HANWHA TECHWIN CO., LTD.,...

1. An image processing device to correct for wobble of an image, the image processing device comprising:an input interface to communicate with an image sensor; and
a motion vector detector to process first and second images captured by the image sensor using a single gain value,
wherein the motion vector detector is configured to:
receive the first and second images from the image sensor through the input interface;
obtain the single gain value of the image sensor;
detect, in the first image, feature points having feature values higher than a threshold value; and
compare the first image to the second image by using at least parts of the feature points to determine a global motion vector of the first image, the threshold value being adjustable depending on the single gain value of the image sensor.
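The gain-dependent threshold can be sketched as below (the scaling rule and all values are assumptions; the claim only requires that the threshold be adjustable depending on the sensor's single gain value):

```python
BASE_THRESHOLD = 10.0  # hypothetical base feature threshold

def threshold_for_gain(gain):
    """Assumed rule: scale the threshold with sensor gain, since higher
    gain means noisier images and demands stronger features."""
    return BASE_THRESHOLD * gain

def detect_features(feature_values, gain):
    """Indices of feature points whose value exceeds the gain-adjusted
    threshold (these would seed the global motion vector search)."""
    t = threshold_for_gain(gain)
    return [i for i, v in enumerate(feature_values) if v > t]

values = [5.0, 12.0, 25.0, 40.0]
print(detect_features(values, 1.0))  # [1, 2, 3]
print(detect_features(values, 2.0))  # [2, 3] -- higher gain, fewer features
```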

US Pat. No. 10,659,692

IMAGE BLUR CORRECTION DEVICE, IMAGING APPARATUS, CONTROL METHOD OF IMAGING APPARATUS AND NON-TRANSITORY STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image blur correction device that corrects blur of an object image by using an image blur correction unit, the object image being imaged by an imaging unit through an imaging optical system, the image blur correction device comprising:a memory; and
at least one processor operating in accordance with a program stored in the memory,
wherein the at least one processor comprises:
a detection unit configured to detect motion amounts of images on the basis of imaged images;
a calculation unit configured to calculate a motion amount of an object on the basis of the motion amounts of the images detected by the detection unit and a detection signal of blur detected by a blur detection sensor; and
a control unit configured to control the image blur correction unit on the basis of the motion amount of the object calculated by the calculation unit, and
wherein the detection unit detects the motion amounts of the images on the basis of multiple settings with different resolutions to detect the motion amounts, determines a correlation between the motion amounts of the images detected on the basis of the settings, and determines a setting to be used to detect the motion amounts of the images on the basis of the correlation.

US Pat. No. 10,659,691

CONTROL DEVICE AND IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. A control device comprising at least one processor or one circuit which functions as:a correction control unit configured to acquire a blur detection signal detected by a blur detection unit to obtain a correction amount of an image blur and control an image blur correction unit configured to correct the image blur;
a subject detection unit configured to detect a position of a subject in a photographed image and acquire position information of the subject in the photographed image; and
a setting unit configured to set a subject selectable mode in which a user is able to select a desired subject,
wherein an image blur correction effect in a second state in which the subject selectable mode is set and the desired subject is selected is higher than an image blur correction effect in a first state in which the subject selectable mode is set and the desired subject is not selected.

US Pat. No. 10,659,690

SYSTEMS AND METHODS FOR MOBILE PLATFORM IMAGING

SZ DJI TECHNOLOGY CO., LT...

1. A method of image stabilization comprising:obtaining movement data for an imaging device mounted to a mobile platform;
adjusting an input image acquired by the imaging device according to the movement data to obtain a stabilized image, including:
applying a projective transform image stabilization to the input image based on the movement data, including:
selecting a mesh of input points on the input image; and
determining a plurality of output points each corresponding to one input point of the mesh of input points by applying a projective transform to the one input point based on a deviation of an Euler angle of the one input point from an objective Euler angle; and
displaying the stabilized image according to a selected viewport.
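Reduced to one dimension (yaw only; the focal length and deviation values are invented), the projective step above maps a mesh point through the deviation of its Euler angle from the objective angle:

```python
import math

F_PX = 1000.0  # hypothetical focal length in pixels

def stabilize_point(x_px, yaw_dev_rad):
    """Output position of one mesh point: remove the deviation of its yaw
    angle from the objective yaw (pinhole projection, yaw-only sketch)."""
    angle = math.atan2(x_px, F_PX) - yaw_dev_rad
    return F_PX * math.tan(angle)

print(round(stabilize_point(100.0, 0.0), 3))  # 100.0: no deviation, no shift
print(stabilize_point(100.0, 0.01) < 100.0)   # True: positive deviation shifts left
```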

US Pat. No. 10,659,689

IMAGE CAPTURE APPARATUS AND CONTROL METHOD

CANON KABUSHIKI KAISHA, ...

1. An image capture apparatus, comprising:a power receiving circuitry that receives power from a power supply device;
a charging control circuitry that charges a battery by using power received from the power supply device;
a power supply control circuitry that supplies power to components of the image capture apparatus by using power received from the power supply device;
a determining circuitry that determines a power supply capability of the power supply device; and
a control circuitry that controls the charging control circuitry and the power supply control circuitry,
wherein, in a case where the power supply capability of the power supply device determined by the determining circuitry is greater than or equal to a first power when an operating mode of the image capture apparatus is a moving image shooting mode, the control circuitry controls the charging control circuitry such that charging of the battery is performed with a second power obtained from power received from the power supply device, and controls the power supply control circuitry such that power supply to the components of the image capture apparatus is performed with a remaining power of power received from the power supply device, and
in a case where the power supply capability of the power supply device determined by the determining circuitry is greater than or equal to the second power and lower than the first power when the operating mode of the image capture apparatus is the moving image shooting mode, the control circuitry controls the charging control circuitry such that charging of the battery is performed with the second power obtained from power received from the power supply device, and controls the power supply control circuitry such that power supply to the components of the image capture apparatus is not performed with a remaining power of power received from the power supply device.
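The two claimed cases reduce to a small decision rule; the sketch below uses hypothetical wattages for the first and second powers:

```python
FIRST_POWER = 15.0   # hypothetical watts
SECOND_POWER = 7.5   # hypothetical watts

def power_policy(capability):
    """(charging power, whether components are supplied from the remainder)
    while the operating mode is moving-image shooting."""
    if capability >= FIRST_POWER:
        return SECOND_POWER, True    # charge and power the camera
    if capability >= SECOND_POWER:
        return SECOND_POWER, False   # charge only; no remainder to components
    return 0.0, False                # below both thresholds (outside the two claimed cases)

print(power_policy(18.0))  # (7.5, True)
print(power_policy(9.0))   # (7.5, False)
```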

US Pat. No. 10,659,688

IMAGING SYSTEM, METHOD, AND APPLICATIONS

1. A multicamera panoramic imaging system, comprising:a plurality of discrete imaging systems configured in a side-by-side array that forms a three-dimensional geometric shape having a center;
each of the discrete imaging systems having a first lens element with a plurality of edges each configured in the side-by-side array to abut an adjacent edge of the first lens element in an adjacent one of the discrete imaging systems to form a plurality of common edges between the first lens elements in the discrete imaging systems, each of the edges defining a plurality of edge surface angles at points along the edges with respect to the center of the three-dimensional geometric shape; and
each of the discrete imaging systems configured to constrain a plurality of chief rays that are incident along the edges of the first lens element to have an angle of incidence equal to the edge surface angles defined by the plurality of edges, such that the chief rays incident the common edges between adjacent ones of the discrete imaging systems are substantially parallel and provide combined fields of view along the common edges to form images with minimal or no parallax.

US Pat. No. 10,659,687

IMAGING APPARATUS, IMAGING DISPLAY CONTROL METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An imaging apparatus comprising:a shutter button configured to output a signal;
an image sensor;
a display; and
circuitry configured to
start a predetermined processing procedure in response to a first operation of the shutter button at an initial position of the imaging apparatus, the predetermined processing procedure including displaying an instruction to move the imaging apparatus along a first direction,
after the first operation of the shutter button, start a panoramic image imaging operation in response to a second operation of the shutter button at a record start position of the panoramic image imaging operation,
automatically stop the panoramic image imaging operation in response to an arrival of the imaging apparatus at a record stop position of the panoramic image imaging operation, the record stop position being determined according to the record start position and being located in a second direction from the record start position, the second direction being opposite to the first direction, and
generate a panoramic image using images captured by the image sensor during the panoramic image imaging operation.

US Pat. No. 10,659,686

CONVERSION OF AN INTERACTIVE MULTI-VIEW IMAGE DATA SET INTO A VIDEO

Fyusion, Inc., San Franc...

1. A method comprising:selecting a sequence of images from among a plurality of live images captured by a camera on a mobile device as the mobile device moves along a path, wherein an orientation of the camera varies along the path such that an object in the live images is captured from a plurality of camera views, wherein the sequence of images is selected based upon sensor data from an inertial measurement unit in the mobile device and upon image data such that one of the live images is selected for each of a plurality of angles along the path;
generating from the sequence of images a multi-view interactive digital media representation, wherein each of the sequence of images includes the object from a different camera view such that when the plurality of images is output to the touchscreen display the object appears to undergo a 3-D rotation through an angular view amount, wherein the 3-D rotation of the object is generated without a 3-D polygon model of the object;
determining a speed function that maps time in the video to the plurality of angles along the path;
selecting a designated plurality of the sequence of images based on the speed function;
encoding the designated plurality of the sequence of images as a video via a designated encoding format, wherein when presenting the video on a display screen the 3-D rotation of the object appears to move at the speed determined by the speed function; and
storing the video on a storage medium.
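The speed-function step can be sketched as follows (the quadratic mapping and the angle list are invented; the claim only requires a function from video time to capture angle along the path, with frames selected accordingly):

```python
def speed_function(t):
    """Hypothetical mapping from video time (seconds) to capture angle
    (degrees): the sweep accelerates, reaching 90 degrees at t = 3 s."""
    return 10.0 * t * t

def select_frames(capture_angles, timestamps):
    """For each output timestamp, pick the index of the captured image
    whose path angle is nearest the angle the speed function demands."""
    out = []
    for t in timestamps:
        target = speed_function(t)
        nearest = min(range(len(capture_angles)),
                      key=lambda i: abs(capture_angles[i] - target))
        out.append(nearest)
    return out

angles = [0, 15, 30, 45, 60, 75, 90]  # one captured image per path angle
print(select_frames(angles, [0.0, 1.0, 2.0, 3.0]))  # [0, 1, 3, 6]
```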

US Pat. No. 10,659,685

CONTROL OF VIEWING ANGLES FOR 360-DEGREE VIDEO PLAYBACK

Visual Supply Company, O...

1. A method comprising:presenting, on a display of a computing device, a graphical user interface (GUI) for viewing a projection on the display of a 360-degree video captured by a 360-degree camera, the 360-degree video comprising a plurality of video frames, the projection comprising a view as simulated by a virtual camera within the 360-degree video, the GUI including an option to set an orientation of the virtual camera within the 360-degree video as the 360-degree video is presented and an option to identify one or more key frames in the plurality of video frames, each key frame having a key-frame orientation of the virtual camera within the 360-degree video;
receiving, via the GUI, a selection of a first key frame from the plurality of video frames;
identifying a first orientation of the first key frame within the 360-degree video when the first key frame is selected;
receiving, via the GUI, a selection of a second key frame from the plurality of video frames;
identifying a second orientation of the second key frame within the 360-degree video when the second key frame is selected;
determining, by the one or more processors, intermediate orientations of the virtual camera for frames between the first key frame and the second key frame, the intermediate orientations of the virtual camera being determined to provide a continuous transition of the intermediate orientations between the first orientation and the second orientation, wherein determining the intermediate orientations comprises gradually changing the orientation of the virtual camera to reach the second orientation at the second key frame by interpolating the intermediate orientations between the first orientation and the second orientation; and
playing, by the one or more processors, the projection of the 360-degree video on the display, the projection including the first orientation, the intermediate orientations, and the second orientation.
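Reduced to a single yaw angle per key frame (the claim interpolates full virtual-camera orientations), the intermediate-orientation step is plain interpolation between the two key-frame orientations:

```python
def intermediate_orientations(first_frame, first_yaw, second_frame, second_yaw):
    """Yaw angle for every frame strictly between the two key frames,
    linearly interpolated so the virtual camera reaches the second
    orientation exactly at the second key frame."""
    span = second_frame - first_frame
    return [first_yaw + (second_yaw - first_yaw) * (f - first_frame) / span
            for f in range(first_frame + 1, second_frame)]

# Key frame 10 at 0 degrees, key frame 14 at 40 degrees:
print(intermediate_orientations(10, 0.0, 14, 40.0))  # [10.0, 20.0, 30.0]
```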