US Pat. No. 10,893,326

METHODS AND APPARATUS FOR AUDIENCE MEASUREMENT AND DETERMINING INFRARED SIGNAL BASED ON SIGNATURE OF SYMBOL ALPHABET

THE NIELSEN COMPANY (US),...

1. A media monitoring meter system comprising: a symbol alphabet meter to generate a query symbol alphabet based on captured data of an optical pulse stream, the optical pulse stream detected from a media remote control; a symbol alphabet matcher to:
apply a probability function to the query symbol alphabet and a reference symbol alphabet to determine a symbol alphabet matching probability; and determine a symbol alphabet match based on the symbol alphabet matching probability; and a key code detector to determine a key code based on the symbol alphabet match.
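The claim does not disclose a concrete probability function, so the sketch below only illustrates one plausible shape of the matching step: symbol alphabets are modeled as lists of IR pulse durations, scored with a Gaussian similarity, and the best-scoring reference is accepted if it clears a threshold. All names, parameters, and the choice of function are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of symbol-alphabet matching; the Gaussian
# similarity and all parameters are illustrative assumptions only.
import math

def match_probability(query_alphabet, reference_alphabet, sigma=50.0):
    """Return a matching probability in [0, 1] between two symbol
    alphabets, each given as a list of pulse durations (microseconds)."""
    if len(query_alphabet) != len(reference_alphabet):
        return 0.0
    probs = []
    for q, r in zip(sorted(query_alphabet), sorted(reference_alphabet)):
        probs.append(math.exp(-((q - r) ** 2) / (2 * sigma ** 2)))
    # Geometric mean keeps one badly mismatched symbol from being
    # averaged away by the well-matched ones.
    return math.prod(probs) ** (1.0 / len(probs))

def best_match(query_alphabet, references, threshold=0.5):
    """Pick the reference alphabet with the highest matching
    probability, or None if no reference clears the threshold."""
    scored = [(match_probability(query_alphabet, ref), name)
              for name, ref in references.items()]
    prob, name = max(scored)
    return name if prob >= threshold else None
```

A key-code detector would then look up the key code table associated with the matched reference alphabet.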

US Pat. No. 10,893,325

DISPLAY DEVICE AND CONTROL METHOD THEREFOR

SAMSUNG ELECTRONICS CO., ...

1. A display device comprising: a display,
a storage configured to store a content corresponding to a user,
a sub-processor configured to receive, from a sensor, a first detection signal indicative of the user in a first area among a plurality of areas divided according to a distance from the display device; and
a processor configured to, based on the sub-processor receiving, from the sensor, the first detection signal indicative of the user in the first area during a time period that the display is in an inactivated state, automatically perform the following operations:
become activated by the sub-processor,
upon becoming activated, obtain the content corresponding to the user from the storage,
request a server to transmit information on the obtained content that is generated based on a time point at which the processor became activated,
receive the information from the server, and
update the obtained content based on the received information,
wherein the processor is further configured to, based on receiving, from the sensor, a second detection signal indicative of the user in a second area closer to the display device than the first area, among the plurality of areas, activate the display from the inactivated state and control to display the updated content through the activated display.

US Pat. No. 10,893,324

VIEWING DATA TRANSFER USING BARCODE OR QR CODE AND SIGNAL CAPTURE DEVICE

SmarDTV SA, Cheseaux-sur...

1. A television broadcast receiver comprising: a memory for storing channel view data encoding channel view events and a processor configured to:
store channel view data in the memory;
generate a signal encoding the channel view data, wherein the signal is an audio and/or video signal; and
cause the signal to be presented to the viewer for capture by the viewer with a signal capture device,
wherein the channel view data is used by the processor to encode channel view events such that smaller symbols are used for more frequently viewed channels.
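"Smaller symbols for more frequently viewed channels" is the defining property of a variable-length prefix code; the claim names no specific code, so the sketch below assumes classic Huffman coding purely for illustration. All names and the sample view counts are hypothetical.

```python
# Hypothetical sketch: assign shorter codewords to more frequently
# viewed channels via Huffman coding (an assumption; the patent does
# not name the code used).
import heapq

def build_channel_codes(view_counts):
    """view_counts: dict channel -> view frequency. Returns a prefix
    code mapping channel -> bit string, shorter for frequent channels."""
    heap = [[count, i, {ch: ""}]
            for i, (ch, count) in enumerate(sorted(view_counts.items()))]
    heapq.heapify(heap)
    seq = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least-viewed subtrees
        hi = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in lo[2].items()}
        merged.update({ch: "1" + code for ch, code in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], seq, merged])
        seq += 1
    return heap[0][2]
```

The encoded channel view events could then be serialized into a barcode or QR code payload for capture by the viewer's device.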

US Pat. No. 10,893,323

METHOD AND APPARATUS OF MANAGING VISUAL CONTENT

Grass Valley Limited, Ne...

1. A system for managing audio visual content, the system comprising: a window selector configured to set at least one time window for a received audio visual stream having a plurality of content fingerprints derived by a fingerprint generator using an irreversible data reduction process from respective temporal regions within the audio visual stream;
a separator configured to separate out at least one audio or video component from the content fingerprints in the at least one time window, with the at least one audio or video component corresponding to a content characteristic of the audio visual stream;
a test signal detector configured to generate at least one fingerprint value, respectively, of the separated at least one audio or video component by executing a statistical analysis operation that comprises at least one of comparing the at least one audio or video component to a predetermined threshold and comparing the at least one audio or video component to a range of known values;
a histogram generator configured to generate a histogram based on a frequency of occurrences of the generated at least one fingerprint value occurring in the at least one time window of the audio visual stream; and
an audio visual data classifier configured to generate statistical metadata by comparing the frequency of occurrences set forth in the generated histogram to at least one known type of audio visual content,
wherein the generated statistical metadata configures an automatic control system for distributing the audio visual stream.
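The histogram-and-classify pipeline in the claim can be illustrated with a toy version: derive a fingerprint value per component by thresholding, histogram the values over the window, and pick the known content type whose expected histogram is closest. The threshold test, the L1 distance, and the profiles are illustrative assumptions, not disclosed details.

```python
# Hypothetical sketch of the histogram-based classification step;
# fingerprint values, distance metric, and profiles are assumptions.
from collections import Counter

def fingerprint_value(component, threshold=0.5):
    """1 if the component exceeds the predetermined threshold, else 0
    (the claim's threshold-comparison case)."""
    return 1 if component > threshold else 0

def classify(components, profiles, threshold=0.5):
    """Histogram fingerprint values over the window and return the
    known content type with the nearest expected histogram (L1)."""
    hist = Counter(fingerprint_value(c, threshold) for c in components)
    def dist(profile):
        keys = set(hist) | set(profile)
        return sum(abs(hist.get(k, 0) - profile.get(k, 0)) for k in keys)
    return min(profiles, key=lambda name: dist(profiles[name]))
```

The resulting label would form the statistical metadata that configures the downstream distribution control system.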

US Pat. No. 10,893,322

METHOD OF DISPLAYING MULTIPLE CONTENT STREAMS ON A USER DEVICE

MIMIK TECHNOLOGY, INC., ...

1. A method of displaying multimedia content, the multimedia content including at least one video stream from a video source, on a display of a user device, said user device registered to a serving node, the method comprising: determining, by said serving node, a type of said user device and display capabilities of said user device;
determining, by said serving node, a network connection from said serving node to said user device;
streaming, via said serving node, said at least one video stream from said video source to the display of said user device, wherein said at least one video stream is transcoded and transrated based on the display capabilities and the network connection;
inspecting, by said serving node, said at least one video stream to determine an identity of a user associated with said at least one video stream, a context associated with said at least one video stream, and a time said at least one video stream was displayed on the display of said user device;
comparing, by said serving node, said at least one video stream to a history, associated with the user, stored at said serving node to determine whether said at least one video stream is typical or atypical in relation to the user;
comparing, by said serving node, the context of said at least one video stream with a history of other users to determine whether a social variable is associated with said at least one video stream; and
updating, by said serving node, the history associated with the user and user characterization associated with the user based on the context, the time, said type of the user device, whether said at least one video stream was typical, and whether the social variable was present.

US Pat. No. 10,893,321

SYSTEM AND METHOD FOR DETECTING AND CLASSIFYING DIRECT RESPONSE ADVERTISEMENTS USING FINGERPRINTS

ENSWERS CO., LTD., Seoul...

1. A method comprising: receiving one or more broadcast streams comprising a first advertisement segment and a second advertisement segment;
obtaining a first fingerprint corresponding to the first advertisement segment, a second fingerprint corresponding to the second advertisement segment, and a third fingerprint corresponding to a third advertisement segment that is a master segment;
based on comparing the first fingerprint to the third fingerprint, making a first determination that the first advertisement segment non-identically matches the third advertisement segment within a threshold extent of similarity;
storing data indicating that the first advertisement segment is a first variation segment of the third advertisement segment;
based on comparing the second fingerprint to the third fingerprint and to the first fingerprint, making a second determination that the second advertisement segment non-identically matches the third advertisement segment within the threshold extent of similarity and making a third determination that the second advertisement segment does not identically match the first advertisement segment; and
storing data indicating that the second advertisement segment is a second variation segment of the third advertisement segment.
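The three determinations in the claim reduce to comparing fingerprint similarities against 1.0 (identical) and a threshold (variation). The sketch below models fingerprints as bit vectors with Hamming similarity; the real fingerprint format and similarity measure are not disclosed, so every detail here is an assumption.

```python
# Hypothetical sketch of master/variation segment classification;
# bit-vector fingerprints and Hamming similarity are assumptions.
def similarity(fp_a, fp_b):
    """Fraction of positions at which two equal-length fingerprints agree."""
    return sum(a == b for a, b in zip(fp_a, fp_b)) / len(fp_a)

def classify_segment(fp, master_fp, known_variants, threshold=0.9):
    """Classify a segment fingerprint against the master segment:
    skip exact duplicates of known variations, then test for an
    identical or non-identical-but-similar match to the master."""
    if any(similarity(fp, v) == 1.0 for v in known_variants):
        return "duplicate_variation"
    sim = similarity(fp, master_fp)
    if sim == 1.0:
        return "identical"
    if sim >= threshold:
        return "variation"
    return "unrelated"
```

A new "variation" result would trigger storing the segment as a further variation of the master, as in the claim's final step.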

US Pat. No. 10,893,320

DYNAMIC VIDEO OVERLAYS

Gracenote, Inc., Emeryvi...

1. A system, comprising: a display;
memory that stores instructions and a plurality of templates; and
one or more processors of a client device configured by the instructions to perform operations comprising:
accessing a video input stream that includes first video content from a content provider and second video content generated by a set-top box device upstream of the one or more processors, wherein, in a frame of the video input stream, the first video content corresponds to a first screen portion of the display and the second video content corresponds to a second screen portion of the display;
generating, from the first video content, a query fingerprint by a query fingerprint generator of the client device according to an image of the video input stream;
accessing, based on a comparison of the query fingerprint and a reference fingerprint generated by a reference fingerprint generator, replacement video content provided by a replacement content source, wherein the replacement video content differs from the first video content, wherein the replacement video content is accessed separately from the video input stream;
identifying, based on the set-top box device, a template of the plurality of templates, wherein the identified template indicates that the second screen portion comprises an overlay for displaying the second video content generated by the set-top box device;
comparing the video input stream to the identified template to determine that the frame of the video input stream corresponds to the identified template;
responsive to a determination that the frame of the video input stream corresponds to the identified template, generating a video output stream comprising the second video content for the second screen portion and the replacement video content for the first screen portion to mimic a presentation of the first video content and the second video content in the video input stream; and
causing the video output stream to be presented on the display, wherein comparing the video input stream to the identified template comprises:
downsampling the frame of the video input stream to form a downsampled frame,
determining a cross-correlation between the downsampled frame and the identified template,
performing a comparison of the cross-correlation to a threshold, and
determining, based on the comparison, that the frame of the video input stream corresponds to the identified template.
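The template check spelled out in the last four steps (downsample, cross-correlate, threshold) can be sketched directly. The sketch assumes grayscale frames as nested lists, average-pooling for the downsample, and normalized cross-correlation; none of those specifics come from the patent.

```python
# Hypothetical sketch of the downsample/cross-correlation template
# test; pooling method and normalization are assumptions.
import math

def downsample(frame, factor):
    """Average-pool a grayscale frame by an integer factor."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def cross_correlation(a, b):
    """Normalized cross-correlation of two equal-size images, in [-1, 1]."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    mean_a = sum(flat_a) / len(flat_a)
    mean_b = sum(flat_b) / len(flat_b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(flat_a, flat_b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in flat_a)
                    * sum((y - mean_b) ** 2 for y in flat_b))
    return num / den if den else 0.0

def matches_template(frame, template, factor=2, threshold=0.8):
    """The claimed comparison: downsampled frame vs. template."""
    return cross_correlation(downsample(frame, factor), template) >= threshold
```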

US Pat. No. 10,893,319

SYSTEMS AND METHODS FOR RESUMING A MEDIA ASSET

ROVI GUIDES, INC., San J...

1. A method for resuming a media asset, comprising: extracting metadata associated with a media asset, the metadata comprising a plurality of positions in the media asset;
receiving a first input from a user to pause the media asset at a first position of the plurality of positions;
based on receiving the first input from the user:
storing the first position in a bookmark for the media asset;
determining, based on the metadata and natural language processing rules, that the first position in the media asset corresponds to a middle of a sentence;
determining, based on the metadata and the natural language processing rules, a second position of the plurality of positions in the media asset corresponding to a start of the sentence;
updating the bookmark to include the second position;
receiving a second input from the user to resume the media asset; and
based on receiving the second input from the user, generating the media asset for display from the bookmark.
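Once the NLP step has produced sentence-boundary timestamps, the bookmark update is a search for the last sentence start at or before the pause position. The sketch below assumes the boundaries arrive as a sorted list of timestamps; the representation is hypothetical.

```python
# Hypothetical sketch of the bookmark adjustment; assumes sentence
# starts are available as sorted timestamps (seconds).
import bisect

def resume_position(sentence_starts, pause_position):
    """Return the start of the sentence containing pause_position,
    so playback resumes at the beginning of the interrupted sentence."""
    i = bisect.bisect_right(sentence_starts, pause_position) - 1
    return sentence_starts[max(i, 0)]
```

The bookmark would then store this value as its second position, and playback on resume begins there rather than mid-sentence.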

US Pat. No. 10,893,318

AIRCRAFT ENTERTAINMENT SYSTEMS WITH CHATROOM SERVER

Thales Avionics, Inc., I...

1. A vehicle chatroom server comprising: at least one network interface configured to communicate with passenger terminals;
at least one processor connected to communicate through the at least one network interface; and
at least one memory storing code that is executed by the at least one processor to perform operations to:
obtain passenger information;
characterize potential passenger discussion interests based on the passenger information;
identify a grouping of passengers who satisfy a common interest rule based on the potential passenger discussion interests; and
communicate with passengers in the grouping through a computerized chatbot module providing natural-language text and/or computer synthesized speech that is provided to the passengers in the grouping to invite to a discussion-focused chatroom hosted by the chatroom server.
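The claim leaves the "common interest rule" open; for illustration, the sketch below takes it to mean "at least a minimum number of shared interests" and groups passengers pairwise. The rule, data shapes, and names are all assumptions.

```python
# Hypothetical sketch of interest-based grouping; the common-interest
# rule ("min_shared overlapping interests") is an assumption.
from itertools import combinations

def interest_groups(passengers, min_shared=2):
    """passengers: dict name -> set of interests. Return the pairs
    that satisfy the common-interest rule, with their shared interests."""
    groups = []
    for a, b in combinations(sorted(passengers), 2):
        shared = passengers[a] & passengers[b]
        if len(shared) >= min_shared:
            groups.append((a, b, shared))
    return groups
```

Each qualifying group would then be invited by the chatbot module to a chatroom themed on the shared interests.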

US Pat. No. 10,893,317

COMMUNICATION EXCHANGE SYSTEM FOR REMOTELY COMMUNICATING INSTRUCTIONS

1. A system for remotely communicating instructions, the system comprising: a server communicatively coupled to a first user device and an instructor device, the server, the first user device, and the instructor device individually comprising a memory and a processor, the server configured to:
receive location information from the first user device, the location information characterizing a location of the first user device, wherein the location information defines visual content captured by the first user device;
transmit at least a portion of the location information to the instructor device, the instructor device configured to present the visual content within an instructor interface based on the received location information and receive input from an instructor through the instructor interface, the input defining an instruction associated with the visual content;
receive instruction information defining the instruction associated with the visual content from the instructor device; and
transmit at least a portion of the instruction information to the first user device, the first user device configured to present the instruction overlaid on top of the visual content within a first learner interface based on the received instruction information;
wherein the instruction information comprises a direction, the direction defining how to use an item to perform the instruction, and wherein one or more of a size, a shape, and a color associated with the direction indicates one or more of a pressure to apply, a speed with which to move the item, a depth with which the item is to be pushed on or into a surface of the target, a time to contact the item to the surface of the target, and a period of time to use the item.

US Pat. No. 10,893,316

IMAGE IDENTIFICATION BASED INTERACTIVE CONTROL SYSTEM AND METHOD FOR SMART TELEVISION

Shenzhen Prtek Co. Ltd., ...

1. An interactive control system for smart TV based on image recognition, comprising: a camera;
processing circuitry and a memory circuitry operatively connected to the camera, said memory circuitry containing instructions executable by said processing circuitry whereby said interactive control system is operative to:
control the camera to capture a first digital image of a card being presented to the interactive control system by a user;
automatically adjust a focus to acquire a content of the card;
customize a gesture template as a preset gesture template;
recognize a gesture of the user holding the card in the first digital image and output a corresponding gesture recognition result by comparing the gesture of the user in the first digital image to the preset gesture template, wherein the corresponding gesture recognition result is a channel switching, a program selecting or a content searching, and wherein when the user holds the card in a first manner, the corresponding gesture recognition result is the channel switching, and wherein the gesture is defined by a manner in which the user's hand is holding the card, and wherein when the user holds the card in the first manner, a portion of the user's index finger contacts a first planar surface of the card, and the user's thumb contacts a second planar surface of the card opposite the first planar surface of the card, and wherein the user's thumb in contact with the second planar surface of the card is within a visible range of the camera;
recognize the content of the card in the first digital image and output a card recognition result;
perform a related interactive operation responsive to the corresponding gesture recognition result and the card recognition result.

US Pat. No. 10,893,315

CONTENT PRESENTATION SYSTEM AND CONTENT PRESENTATION METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. A content presentation system, comprising: a server that constitutes a content delivery network; and
a client apparatus that is provided with content delivered via the content delivery network,
wherein the server or the client apparatus pre-fetches, with reference to metadata in which priorities provided from a plurality of provision sources are described to change with time, a series of segments constituting the content in a descending order of the priorities,
wherein each of the priorities is provided by a content provider that provides the content, and described by a value according to an intention of a creator that creates the content,
wherein the metadata describes the priorities associated with elements designating the series of segments using a number identifying a first segment of the series of the segments to which the priorities are applied and a segment number of the series of the segments, and
wherein the metadata referred to when the segments are pre-fetched is defined as a child element to an adaptation set described in an MPD of MPEG DASH.

US Pat. No. 10,893,314

METHOD AND APPARATUS FOR MANAGING PROVISION OF MEDIA PROGRAMS DIRECTLY FROM CONTENT PROVIDERS

FOX MEDIA LLC, Los Angel...

1. A method of providing a media program of a plurality of media programs to a user device directly from a selected content provider of a plurality of content providers via a content provider playback application executed by the user device, the content provider playback application provided and maintained by the content provider of the media program, comprising: transmitting a registration request having registration information from a user device in a subscription management service, the registration request to subscribe a user of the device to the selected content provider providing the media program;
receiving program guide information of the plurality of media programs including the media program from the subscription management service for presentation by the user device, the program guide information received after brokering, by the subscription management service, between the user device and the selected content provider to subscribe the user to the selected content provider providing the media program, the brokering including:
determining if the user is subscribed to the selected content provider;
if the user is not subscribed to the selected content provider, subscribing the user to the selected content provider;
transmitting payment for the subscription of the user from the subscription management service to the selected content provider; and
billing the user for the subscription to the selected content provider;
transmitting a playback request from a program guide presented by the user device using the program guide information, the playback request comprising a user selection of the media program from the selected content provider for presentation exclusively by the content provider playback application executed by the user device; and
presenting the media program from the selected content provider by the content provider playback application executed by the user device.

US Pat. No. 10,893,313

SECURE BRIDGING OF THIRD-PARTY DIGITAL RIGHTS MANAGEMENT TO LOCAL SECURITY

Active Video Networks, In...

1. A method, comprising: at a headend of a multichannel video programming distributor (MVPD), the headend including a virtual set-top application, an MVPD network distinct from the virtual set-top application, a first router communicatively coupled to the virtual set-top application, a conditional-access encoder, and a second router coupled to the conditional-access encoder:
at the virtual set-top application:
receiving encrypted content from a content provider, wherein the content provider is distinct from the MVPD;
decrypting the encrypted content in accordance with a first Digital Rights Management (DRM) protocol used by the content provider;
processing the decrypted content from the content provider;
encrypting data corresponding to the processed decrypted content from the content provider;
transmitting the encrypted data over a secure data link from the first router to the second router, wherein the secure data link comprises a virtual private network (VPN) implemented across an Ethernet link; and
at the second router, decrypting the encrypted data and sending the decrypted data from the second router to the conditional-access encoder;
determining whether a set-top of a customer supports user-interface overlay rendering; and
upon determining that the set-top of the customer does not support user-interface overlay rendering:
encrypting, via the conditional-access encoder, the decrypted data and at least one user-interface overlay using a second DRM protocol compatible with the set-top of the customer, wherein the second DRM protocol is distinct from the first DRM protocol; and
transmitting the data and the at least one user-interface overlay, as encrypted by the conditional-access encoder, to the set-top of the customer; and
upon determining that the set-top of the customer does support user-interface overlay rendering:
encrypting, via the conditional-access encoder, the decrypted data using the second DRM protocol compatible with the set-top of the customer; and
transmitting, to the set-top of the customer (i) the data as encrypted by the conditional-access encoder, and (ii) the at least one user-interface overlay that has not been encrypted using the second DRM protocol.

US Pat. No. 10,893,312

DIGITAL CONTENT PROVISION

BRITISH TELECOMMUNICATION...

1. A method of operating a processor-based system to generate recommendations of digital content items for a target user of a digital content item provision system, said method comprising operating said processor-based system to: generate, for each of a plurality of users of the digital content item provision system, a user profile from a concept lattice representing digital content items selected by the user, the concept lattice comprising one or more formal concepts, each formal concept comprising a data structure including one or more digital content item identifiers and one or more attributes shared by the identified digital content items, the concept lattice being based on a Formal Concept Analysis;
wherein user profile generation comprises:
a) calculating a relevancy measure for each of the one or more formal concepts in said concept lattice representing digital content items selected by the user; and
b) selecting relevant formal concepts on the basis of said relevancy measures to use in generating the user profile;
compare said user profiles to identify similar users; and
transmit to said target user, recommendations of one or more digital content items based on digital content items selected by similar users to said target user;
wherein the relevancy measure depends upon the number of digital content items found in the formal concept but not in the largest of the one or more formal subconcepts of the formal concept.
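The final wherein clause pins down the relevancy measure: items in a formal concept that its largest subconcept lacks, i.e. the concept's "own" contribution. The sketch below models concepts as item sets and omits the lattice machinery; the threshold and data shapes are assumptions.

```python
# Hypothetical sketch of the claimed relevancy measure over formal
# concepts; lattice construction itself is omitted.
def relevancy(concept_items, subconcept_item_sets):
    """Count the items in the concept that are not in its largest
    formal subconcept (the concept's own items)."""
    if not subconcept_item_sets:
        return len(concept_items)
    largest = max(subconcept_item_sets, key=len)
    return len(concept_items - largest)

def relevant_concepts(lattice, min_relevancy=1):
    """lattice: list of (items, subconcept_item_sets) pairs. Keep the
    concepts whose relevancy meets the threshold, for profile building."""
    return [items for items, subs in lattice
            if relevancy(items, subs) >= min_relevancy]
```

The selected concepts would form the user profile that is later compared across users to find similar users and drive recommendations.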

US Pat. No. 10,893,311

METHOD OF CONTROLLING A SYNCHRONIZATION SERVER, AND EQUIPMENT FOR PERFORMING THE METHOD

ORANGE, Paris (FR)

1. A synchronization control method performed by a synchronization gateway, the synchronization gateway forming part of a system including a control room subsystem operated for one or more television channels, a television (TV) head end subsystem, and the synchronization gateway, wherein the TV head end subsystem is configured to receive first audiovisual content and metadata from the control room subsystem and to convey the first audiovisual content in an audiovisual stream to a terminal subsystem, and wherein the method comprises: receiving first metadata from the control room subsystem, the first metadata relating to the first audiovisual content and comprising an audiovisual content identifier, a first TV broadcast channel identifier, and time-and-date information of the first audiovisual content;
obtaining an identifier of a user interactivity element from the audiovisual content identifier by searching a memory that associates the first audiovisual content with the identifier of the user interactivity element;
determining time-and-date information for presentation of the user interactivity element on a screen of a playback device of the terminal subsystem based on the received time-and-date information of the first audiovisual content and a duration of the first audiovisual content and/or of the user interactivity element such that the user interactivity element is presented synchronously with the first audiovisual content on the screen of the playback device; and
transmitting to the TV head end subsystem a request for presentation of the user interactivity element, the request including the identifier of the user interactivity element, said transmitting comprising triggering transmission of the request for presentation at a time instant configured such that said user interactivity element is received by the terminal subsystem in the audiovisual stream from the TV head end subsystem and is displayed on the screen of the playback device of said terminal subsystem in the form of an interactivity module at the time and date indicated in said time-and-date information for presentation of the user interactivity element,
wherein said received time-and-date information includes a datestamp for broadcasting the first audiovisual content, or a datestamp requested or desired for presentation of the user interactivity element, and wherein said time-and-date information for presentation of the user interactivity element corresponds to a time-shifted time and date for starting broadcasting of the first audiovisual content, wherein a time shift of said time-shifted time and date information accounts for a propagation delay of said user interactivity element up to the display of the interactivity element on the screen.

US Pat. No. 10,893,310

MANAGING PLAYBACK BITRATE SELECTION

Amazon Technologies, Inc....

1. A computer-implemented method, including: receiving, from a first viewer device, a first identifier of first media content;
receiving, from a second viewer device, a second identifier of second media content;
accessing, based on the first identifier, first master manifest data for the first media content, the first master manifest data including a first plurality of representations, each of the first plurality of representations having bitrate information associated therewith, each of the associated bitrate information being a different value, each of the first plurality of representations further having playback mode information associated therewith, the first plurality of representations including:
a first subset of representations for which the associated playback mode information indicates support for only a download playback mode;
a second subset of representations for which the associated playback mode information indicates support for only a streaming playback mode; and
a third subset of representations for which the associated playback mode information indicates support for both the download playback mode and the streaming playback mode;
accessing, based on the second identifier, second master manifest data for the second media content, the first media content and the second media content being different, the second master manifest data including a second plurality of representations, each of the second plurality of representations having bitrate information associated therewith, each of the associated bitrate information being a different value, each of the second plurality of representations further having playback mode information associated therewith, the second plurality of representations including:
a fourth subset of representations for which the associated playback mode information indicates support for only the streaming playback mode; and
a fifth subset of representations for which the associated playback mode information indicates support for both the download playback mode and the streaming playback mode;
wherein the second master manifest data does not include any representations for which only the download playback mode is indicated as being supported;
configuring the first master manifest data to generate first filtered manifest data compatible with a version of viewer device-side representation selection logic by:
excluding the second subset of representations;
including the first subset of representations; and
including the third subset of representations;
wherein the first filtered manifest data does not include any of the playback mode information for the first plurality of representations and includes the bitrate information for the first plurality of representations;
transmitting the first filtered manifest data to the first viewer device;
configuring the second master manifest data to generate second filtered manifest data compatible with the version of viewer device-side representation selection logic by:
excluding the fourth subset of representations; and
including the fifth subset of representations; and
wherein the second filtered manifest data does not include any of the playback mode information for the second plurality of representations and includes the bitrate information for the second plurality of representations; and
transmitting the second filtered manifest data to the second viewer device;
receiving, from the first viewer device, a first selection of a representation from the first filtered manifest data; and
receiving, from the second viewer device, a second selection of a representation from the second filtered manifest data.
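The filtering steps amount to: keep only the representations whose playback-mode support includes the requested mode, then strip the mode field so that device-side selection logic sees only bitrates. The field names and data shapes in this sketch are assumptions, not the claimed manifest format.

```python
# Hypothetical sketch of master-manifest filtering by playback mode;
# the 'bitrate'/'modes' field names are illustrative assumptions.
def filter_manifest(representations, mode):
    """representations: list of dicts with 'bitrate' and 'modes' (the
    set of supported playback modes). Return filtered entries with the
    playback-mode information removed, keeping only the bitrates."""
    return [{"bitrate": rep["bitrate"]}
            for rep in representations if mode in rep["modes"]]
```

A download request against the first manifest would thus retain the download-only and dual-mode subsets while excluding the streaming-only subset, matching the claimed behavior.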

US Pat. No. 10,893,309

METHOD AND APPARATUS FOR AUTOMATIC HLS BITRATE ADAPTATION

ARRIS Enterprises LLC, S...

1. A method of receiving and processing a media program, the media program comprising a plurality of media program versions, each of the plurality of media program versions generated for a different presentation throughput than the other of the plurality of media program versions, each of the plurality of media program versions comprising a plurality of media program version segments, the method comprising: transmitting a request for the media program;
receiving a master playlist for the requested media program, the master playlist comprising an index to a plurality of media playlists, each media playlist having an address to each of a plurality of media program segments of a related variant of the media program suitable for a first presentation throughput, the first presentation throughput comprising a first communication throughput and a first processing throughput;
transmitting a request for a media program segment of the plurality of media program segments of a first variant of the media program;
receiving the requested media program segment;
processing the received media program segment;
determining a presentation throughput of the received media program segment, comprising:
determining a decoding performance of the received media program segment;
determining a rendering performance of the received media program segment; and
determining the presentation throughput at least in part from the determined decoding performance and the determined rendering performance;
determining if the presentation throughput of the received media program segment differs from a desired presentation throughput by more than a tolerance amount;
if the determined presentation throughput of the received media program segment differs from the desired presentation throughput by more than a tolerance amount, transmitting a request for a temporally following media program segment of another variant of the media program suitable for the determined presentation throughput; and
if the determined presentation throughput of the received media program segment differs from the desired presentation throughput by more than a tolerance amount, transmitting a request for a temporally following media program segment of the variant of the media program suitable for the determined presentation throughput; wherein
the step of determining the decoding performance of the received media program segment comprises determining a first time interval tD required to decode ND frames of the media program segment by: (i) determining a first time when the decoding of the ND frames is initiated according to a local clock; (ii) determining a second time when the decoding of the ND frames is completed according to the local clock; and (iii) determining the first time interval tD according to a difference between the second time and the first time; and
the step of determining if the presentation throughput of the received media program segment differs from the desired presentation throughput by more than a tolerance amount comprises comparing at least the determined first time interval tD with a desired playback frame interval of the media program.
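The timing steps in the last two clauses can be sketched in Python. The `decode_frame` callback, the tolerance value, and the helper names are illustrative assumptions rather than elements of the claim, and `time.monotonic()` stands in for the recited local clock.

```python
import time

def measure_decode_interval(decode_frame, frames):
    """Steps (i)-(iii): time the decoding of N_D frames on a local clock."""
    t_first = time.monotonic()        # (i) first time, decoding initiated
    for frame in frames:
        decode_frame(frame)
    t_second = time.monotonic()       # (ii) second time, decoding completed
    return t_second - t_first         # (iii) t_D as the difference

def needs_variant_switch(t_d, n_d, desired_frame_interval, tolerance):
    """Compare the measured per-frame decode interval with the desired
    playback frame interval; switch variants only outside the tolerance."""
    return abs(t_d / n_d - desired_frame_interval) > tolerance
```

A stream decoding at half real-time speed (t_d = 2.0 s for 30 frames of a 30 fps program) exceeds any small tolerance and would trigger a request for a different variant.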

US Pat. No. 10,893,308

HYBRID STATISTICAL MULTIPLEXER

Harmonic, Inc., San Jose...

1. A non-transitory computer-readable storage medium that stores one or more sequences of instructions for delivering a transport stream, which when executed by one or more processors, cause:
over a plurality of multiplexing cycles, multiplexing a plurality of single program transport streams (SPTSs) onto a multiple program transport stream (MPTS), wherein multiplexing packets in said plurality of SPTSs onto said MPTS comprises, in each multiplexing cycle, determining whether any portion of the packets carried by said plurality of SPTSs may be delayed such that said portion is multiplexed onto said MPTS in a future multiplexing cycle;
determining, for each multiplexing cycle in said plurality of multiplexing cycles, delay information that identifies how many packets were delayed;
adjusting, based on the delay information for a prior multiplexing cycle in said plurality of multiplexing cycles, a size of said bit rate pool for a subsequent allocation cycle in a plurality of allocation cycles; and
delivering said MPTS to one or more recipients.
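A toy model of the recited delay-and-adjust loop, assuming a fixed per-cycle packet capacity and a simple proportional pool-adjustment policy (both assumptions; the claim leaves the policy open):

```python
def multiplex_cycle(sptss, capacity):
    """Multiplex SPTS packets onto the MPTS for one cycle; any packets over
    the cycle's capacity are delayed to a future multiplexing cycle."""
    pending = [pkt for spts in sptss for pkt in spts]
    mpts, delayed = pending[:capacity], pending[capacity:]
    return mpts, delayed

def adjust_pool(pool_size, delayed_count, step=1):
    """Adjust the bit rate pool for the next allocation cycle based on the
    delay information: grow when packets were delayed, else shrink gently."""
    if delayed_count:
        return pool_size + step * delayed_count
    return max(pool_size - step, 0)
```

The delay count recorded per cycle is exactly the "delay information" the claim feeds into the next allocation cycle.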

US Pat. No. 10,893,307

VIDEO SUBTITLE DISPLAY METHOD AND APPARATUS

Alibaba Group Holding Lim...

1. A method comprising:
receiving a play request for a target video sent by a terminal;
determining a display mode of a subtitle of the target video set according to a content type corresponding to the subtitle of the target video;
determining key information from an initial subtitle of the target video;
determining an expanded subtitle of the key information;
determining the subtitle of the target video according to the initial subtitle and the expanded subtitle; and
controlling the terminal to display the subtitle on a video picture of the target video according to the display mode when the target video is played.

US Pat. No. 10,893,306

DIGITAL ENCRYPTION OF TOKENS WITHIN VIDEOS

PAYPAL, INC., San Jose, ...

1. A method, comprising:
receiving, at a computer system, a user request from a first user of a first computer system to generate a video with a digital token embedded in the video;
transmitting, by the computer system, a token request to a remote system, wherein the token request includes an indication of whether the digital token will be embedded in metadata or audiovisual data;
receiving, from the remote system responsive to the token request, the digital token encrypted with a first encryption key, wherein the encrypted digital token includes information identifying an amount of digital currency;
determining, by the computer system, an alternative digital currency available for redeeming the amount of the digital currency;
determining a bonus available for redeeming the amount of the alternative digital currency;
altering, by the computer system, data in the video to include the encrypted digital token to create an altered video including the encrypted digital token;
transmitting, by the computer system, the altered video including the encrypted digital token; and
providing, by the computer system, an option to redeem one of the amount of the digital currency or the amount and the bonus of the alternative digital currency.

US Pat. No. 10,893,305

SYSTEMS AND METHODS FOR ENCODING AND PLAYING BACK VIDEO AT DIFFERENT FRAME RATES USING ENHANCEMENT LAYERS

DIVX, LLC, San Diego, CA...

1. A non-transitory machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process comprising:
receive video having a plurality of access units;
encode the video into a set of layers comprising a base layer and at least one enhancement layer, wherein the encoding comprises: (i) using a temporal identifier of each access unit to determine the particular layer associated with the access unit and (ii) retaining an order of the temporal identifiers of the plurality of access units of the video; and
store the set of layers into separate container files;
wherein:
the base layer comprises a sequence of frames encoded at a first frame rate such that frames in the base layer only depend upon other frames in the base layer;
the enhancement layer comprises additional frames that, when merged with the sequence of frames in the base layer, form a sequence of frames encoded at a second frame rate;
frames in the enhancement layer are encoded such that the frames in the enhancement layer only depend upon frames in the base layer; and
the enhancement layer comprises metadata that specifies a sequential order for selecting access units from the base layer and the enhancement layer to combine the plurality of access units into a single video stream.
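How the recited metadata might drive the merge can be illustrated as follows; the `order` list of (layer, index) pairs is a hypothetical encoding of the sequential-order metadata, not a structure the claim prescribes.

```python
def merge_layers(base_frames, enh_frames, order):
    """Combine base- and enhancement-layer access units into a single video
    stream according to the sequential order specified by the metadata."""
    layers = {"base": base_frames, "enh": enh_frames}
    return [layers[name][idx] for name, idx in order]
```

For a 30 fps base layer and an enhancement layer doubling it to 60 fps, the metadata would interleave the two streams frame by frame.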

US Pat. No. 10,893,304

APPARATUS FOR RECEIVING BROADCAST SIGNAL AND METHOD FOR RECEIVING BROADCAST SIGNAL

LG ELECTRONICS INC., Seo...

1. An apparatus for receiving a broadcast signal, the apparatus comprising:
a tuner configured to receive a broadcast signal,
wherein the broadcast signal includes service components of a broadcast service, service signaling information for the broadcast service and a service list table for providing bootstrap information,
further the bootstrap information is different based on a type of delivery protocol of the service signaling information included in the service list table, and the type of delivery protocol corresponds to either a Real-time Object delivery over Unidirectional Transport (ROUTE) protocol or a Moving Picture Experts Group (MPEG) Media Transport Protocol (MMTP);
a processor configured to process the service list table, the service signaling information and the service components of the broadcast service,
wherein the processor is further configured to:
determine the type of delivery protocol by reading an @slsProtocol element included in the service list table,
in response to determining the type of delivery protocol is the ROUTE protocol, read at least one source Internet Protocol (IP) address, at least one destination IP address and at least one destination port of at least one LCT (Layered Coding Transport) channel from the bootstrap information, and
in response to determining the type of delivery protocol is the MMTP, read at least one destination IP address and at least one destination port of at least one MMTP session from the bootstrap information,
wherein the service signaling information includes user service description data, first session description data and second session description data,
wherein the user service description data provides entry point information,
wherein the user service description data includes service identification information for the broadcast service, name information containing a name of the broadcast service and language information for indicating a language for the name of the broadcast service,
wherein the first session description data provides information on at least one ROUTE session,
wherein the first session description data includes source IP address information and destination IP address information identifying the at least one ROUTE session in the broadcast signal,
wherein the second session description data provides information on the at least one LCT channel, and
wherein the second session description data includes a Transport Session Identifier (TSI) for identifying the at least one LCT channel that carries the service components delivered through the identified ROUTE session.

US Pat. No. 10,893,303

STREAMING CHUNKED MEDIA SEGMENTS

Amazon Technologies, Inc....

1. A system to manage streaming media content comprising:
a computing device with a processor and memory configured to execute a content delivery management component of a content delivery service, wherein the content delivery management component is configured to:
receive, from an encoding component, encoded content chunk data responsive to a content request for encoded video content, wherein the encoding component is configured to encode video content according to an encoding profile comprising bitrates and video encoding formats to generate the encoded video content, wherein the encoded video content is associated with two or more video segments, wherein each individual video segment of the two or more video segments is organized into a plurality of fragments that sum up to the individual video segment, wherein each individual fragment of the plurality of fragments is organized into a plurality of chunks that sum up to the individual fragment; wherein one or more chunks of the plurality of chunks comprise the encoded content chunk data;
buffer the received encoded content chunk data;
process the buffered encoded content chunk data to determine a specified encoded content chunk size;
determine whether the buffered encoded chunk data corresponds to the specified encoded content chunk size;
responsive to a determination that the buffered encoded chunk data does not correspond to the specified encoded content chunk size, continue to buffer received encoded content chunk data; and
responsive to a determination that the buffered encoded chunk data does correspond to the specified encoded content chunk size, cause a transmission of the received encoded content chunk data to a set of distribution endpoints of the content delivery service.
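The buffer-until-chunk-size logic of the last three clauses can be sketched as a small class; the fixed `chunk_size` and the `transmit` callback toward distribution endpoints are illustrative stand-ins.

```python
class ChunkBuffer:
    """Accumulate encoded content chunk data and flush only when the buffered
    bytes reach the specified encoded content chunk size (simplified model)."""

    def __init__(self, chunk_size, transmit):
        self.chunk_size = chunk_size
        self.transmit = transmit       # delivery toward distribution endpoints
        self.buf = bytearray()

    def receive(self, data):
        self.buf.extend(data)
        # keep buffering until a full chunk is available, then transmit it
        while len(self.buf) >= self.chunk_size:
            self.transmit(bytes(self.buf[:self.chunk_size]))
            del self.buf[:self.chunk_size]
```

Data arriving short of the chunk size simply stays buffered, matching the "continue to buffer" branch of the claim.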

US Pat. No. 10,893,302

ADAPTIVE LIVESTREAM MODIFICATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
receiving, by a computer device, original livestream video content from a livestreaming application on a first mobile device;
receiving, by the computer device, a modification command from a second device located in a proximity of the first mobile device;
modifying, by the computer device, the original livestream video content based on the modification command to create a modified livestream video content, the modified livestream video content being different from the original livestream video content;
transmitting, by the computer device, the modified livestream video content; and
sending, by the computer device, a communication to the second device indicating that the original livestream video content has been modified,
wherein the modification command comprises a warning signal, the warning signal indicating that the first mobile device has entered a warning zone around the second device,
the warning signal warns the first mobile device that the modifying will take place as a result of the first mobile device getting closer to the second device,
the modification command comprises a modify-livestream signal, the modify-livestream signal instructing the computer device to perform the modifying,
the modify-livestream signal is triggered by the proximity being within a first predefined distance,
the warning signal is triggered by the proximity being within a second predefined distance, and
the first predefined distance is less than the second predefined distance.
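The two-threshold geometry, with the modification distance strictly inside the warning distance, reduces to a simple comparison (distance units and action names below are illustrative):

```python
def livestream_action(distance, modify_dist, warn_dist):
    """Map the first device's distance to an action; modify_dist < warn_dist
    per the claim, so the warning fires first as the device approaches."""
    if distance <= modify_dist:
        return "modify-livestream"
    if distance <= warn_dist:
        return "warning"
    return "none"
```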

US Pat. No. 10,893,301

CODING OF A SPATIAL SAMPLING OF A TWO-DIMENSIONAL INFORMATION SIGNAL USING SUB-DIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder comprising:
an extractor configured to:
extract, from a data stream representing a video, first subdivision flags and second subdivision flags, wherein the first subdivision flags are used in prediction coding of a first set of regions obtained by multi-tree subdivision of an array of information samples representing a spatially sampled portion of the video, each of the first subdivision flags being associated with one of the first set of regions, and the second subdivision flags are used in transform coding of a second set of regions obtained by the multi-tree subdivision of at least one of the first set of regions, each of the second subdivision flags being associated with one of the second set of regions,
entropy decode each of the first subdivision flags using a first probability estimation context, which is determined based on a hierarchy level of a first region in the first set of regions associated with the respective first subdivision flag, and
entropy decode each of the second subdivision flags using a second probability estimation context, which is determined based on a size of a region in the second set of regions associated with the respective second subdivision flag; and
a reconstructor configured to reconstruct the array of information samples using prediction coding for the first set of regions and transform coding for the second set of regions.

US Pat. No. 10,893,300

SYSTEM AND METHOD FOR VIDEO PROCESSING

SZ DJI TECHNOLOGY CO., LT...

1. A method for processing a video, comprising:
receiving and storing a reference frame of the video in a memory;
obtaining a prediction table for the reference frame of the video based on a pixel value of each reference pixel within the reference frame, including obtaining a prediction Huffman table by:
for each of reference pixels within the reference frame, determining a prediction value based on respective pixel values of one or more adjacent pixels; and
generating the prediction Huffman table of difference values of the reference pixels, the difference value of one of the reference pixels being based at least on the prediction value of the one of the reference pixels;
storing the prediction table in the memory and releasing the reference frame from the memory;
receiving a target frame arranged after the reference frame in a frame sequence of the video; and
coding, based on the stored prediction table of the reference frame, the target frame of the video.
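A minimal sketch of the prediction-table construction, assuming the simplest case of a one-dimensional row with left-neighbor prediction (the claim allows any adjacent-pixel predictor) and reporting only Huffman code lengths rather than full codewords:

```python
import heapq
from collections import Counter

def prediction_differences(row):
    """Difference values: each pixel minus its prediction from the adjacent
    (left) pixel; the first pixel is predicted as 0 in this sketch."""
    return [p - (row[i - 1] if i else 0) for i, p in enumerate(row)]

def huffman_code_lengths(values):
    """Build a Huffman table over the difference values (code lengths only)."""
    freq = Counter(values)
    if len(freq) == 1:                       # degenerate single-symbol stream
        return {next(iter(freq)): 1}
    heap = [(n, uid, {sym: 0}) for uid, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)      # merge the two rarest subtrees,
        n2, _, t2 = heapq.heappop(heap)      # deepening every symbol in them
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}
        heapq.heappush(heap, (n1 + n2, uid, merged))
        uid += 1
    return heap[0][2]
```

Once the table is stored, the reference frame itself can be released from memory, which is the storage saving the claim is after.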

US Pat. No. 10,893,299

SURFACE NORMAL VECTOR PROCESSING MECHANISM

Intel Corporation, Santa...

1. A method to facilitate processing video bit stream data, comprising estimating relative location and intensity of light sources in a 360° six degrees of freedom (6 DoF) scene, including:
generating point clouds from the 6 DoF scene;
computing surface normals;
determining whether light intensity peaks for objects in the 6 DoF scene are higher than a predetermined threshold difference; and
performing light estimations for each object upon a determination that the light intensity peaks are higher than a predetermined threshold difference.

US Pat. No. 10,893,298

METHOD AND APPARATUS FOR VIDEO CODING

TENCENT AMERICA LLC, Pal...

1. A method for video decoding in a decoder, comprising:
receiving a split direction syntax element, a first index syntax element, and a second index syntax element that are associated with a coding block of a picture, the coding block coded with a triangular prediction mode and partitioned into a first triangular prediction unit and a second triangular prediction unit according to a split direction indicated by the split direction syntax element, the first and second index syntax elements indicating a first merge index and a second merge index to a merge candidate list constructed for the first and second triangular prediction units;
determining one of the first merge index and the second merge index to have a first value of the first index syntax element;
determining the other one of the first merge index and the second merge index to have one of the following values:
(i) a second value of the second index syntax element when the second value is smaller than the first value, and
(ii) a third value that is the second value plus 1 when the second value is greater than or equal to the first value;
determining the split direction according to the split direction syntax element;
identifying the first triangular prediction unit and the second triangular prediction unit according to the determined split direction; and
reconstructing the coding block according to the first triangular prediction unit, the second triangular prediction unit, the determined first merge index for the first triangular prediction unit, and the determined second merge index for the second triangular prediction unit,
wherein the method further comprises determining a triangular prediction index indicating a combination of the split direction syntax element, the first merge index, and the second merge index according to,
triangular prediction index = a * (the first value of the first index syntax element) + b * (the second value of the second index syntax element) + c * (a fourth value indicated by the split direction syntax element),
where a, b, and c are integers.
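The index-mapping clauses and the combined-index formula translate directly into code; the weights a, b, and c below are example integers, since the claim only requires them to be integers.

```python
def decode_merge_indices(first_val, second_val):
    """Recover the two merge indices: the second index is incremented by one
    when it is >= the first, since equal indices are never signaled."""
    second = second_val if second_val < first_val else second_val + 1
    return first_val, second

def triangular_prediction_index(first_val, second_val, split_val, a=1, b=2, c=8):
    """Combined triangular prediction index in the claim's linear form."""
    return a * first_val + b * second_val + c * split_val
```

The increment-by-one trick saves one codeword per pair: the signaled second value never needs to equal the first, so the alphabet for the second index is one symbol smaller.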

US Pat. No. 10,893,297

PROCESSING IMAGE DATA USING TIERED BIT-LAYERS

Apical Ltd., Cambridge (...

1. A method for processing image data, the method comprising:
receiving data elements defining a portion of a line of pixels of an image, the image comprising one or more lines of pixels definable by one or more respective sets of data elements;
performing a transform operation on the received data elements to obtain a plurality of binary transform coefficients, wherein the transform operation is performed independently of data elements defining any other line of pixels;
encoding the plurality of transform coefficients as a sequence of tiered bit-layers, each bit-layer in the sequence of bit-layers comprising a set of bits corresponding to a given bit position in each of the plurality of transform coefficients;
outputting the encoded plurality of transform coefficients;
receiving further data elements defining a further portion of said line of pixels of the image;
performing a transform operation on the received further data elements to obtain a further plurality of binary transform coefficients;
comparing the plurality of transform coefficients with the further plurality of transform coefficients; and
encoding one or both of the plurality of transform coefficients and the further plurality of transform coefficients based on the comparing.
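The tiered bit-layer encoding can be sketched for non-negative integer coefficients (an assumption; real transform coefficients need a sign convention), with each layer holding one bit position of every coefficient, most significant first:

```python
def to_bit_layers(coeffs, num_bits):
    """Encode transform coefficients as tiered bit-layers: each layer holds
    the bit at one position across all coefficients, MSB layer first."""
    return [[(c >> bit) & 1 for c in coeffs]
            for bit in range(num_bits - 1, -1, -1)]

def from_bit_layers(layers):
    """Reassemble coefficients from the sequence of bit-layers."""
    coeffs = [0] * len(layers[0])
    for layer in layers:
        coeffs = [(c << 1) | b for c, b in zip(coeffs, layer)]
    return coeffs
```

Because the most significant layers come first, a receiver can truncate the sequence early and still reconstruct a coarse approximation of every coefficient.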

US Pat. No. 10,893,296

SENSOR DATA COMPRESSION IN A MULTI-SENSOR INTERNET OF THINGS ENVIRONMENT

EMC IP Holding Company LL...

1. A method, comprising:
obtaining non-image sensor data from a plurality of sensors;
applying, by at least one edge-based processing device, an image-based compression technique to the non-image sensor data to generate compressed sensor data responsive to the plurality of sensors satisfying one or more predefined sensor proximity criteria based on a distance between the plurality of sensors; and
providing, by the at least one edge-based processing device, the compressed sensor data to a data center.

US Pat. No. 10,893,295

MULTI-VIEW CODING AND DECODING

ORANGE, Paris (FR)

1. A decoding method comprising the following acts performed by a decoding device:
decoding at least one current view belonging to a multi-view image which has been previously coded, said at least one current view representing a given perspective of a scene, wherein the decoding comprises:
determining, in a data signal or in another data signal representative of another multi-view image, coding data of at least one view which is necessary for the decoding of said at least one current view and which constitutes a view situated on at least one pathway of views necessary for the decoding of said at least one current view, said at least one view necessary for the decoding of said at least one current view being not yet decoded and not available at the time of said decoding,
decoding said at least one view necessary for the decoding of said at least one current view, independently or else with respect to at least one other view already decoded or not, by using said coding data,
determining in said data signal:
first coded data representative of a difference between said at least one current view and a first view of said multi-view image or of another multi-view image,
at least second coded data representative of a difference between said at least one current view and a second view of said multi-view image or of another multi-view image,
selecting either said first coded data, or said at least second coded data,
decoding the first or the at least second coded data selected, and
decoding said at least one current view on the basis of said at least one decoded view necessary for the decoding of said at least one current view and on the basis of the first or of the at least second coded data decoded.

US Pat. No. 10,893,294

METHOD AND APPARATUS FOR LOW-COMPLEXITY BI-DIRECTIONAL INTRA PREDICTION IN VIDEO ENCODING AND DECODING

InterDigital VC Holdings,...

1. A method for video decoding, comprising:
decoding a directional intra prediction mode for a block of a picture in a video, said directional intra prediction mode having a direction;
accessing, based on said directional intra prediction mode, a first predictor for a sample, the sample being within said block;
accessing, based on said directional intra prediction mode, a second predictor for said sample, said first and second predictors being on a line at least approximating said direction;
predicting a sample value of said sample, by interpolation using said first and second predictors, wherein said interpolation is responsive to a difference between said second predictor and said first predictor values, wherein said difference is scaled by at least a ratio, and wherein a denominator of said ratio is based at least on W+H, where W is a width of said block and H is a height of said block; and
decoding said sample of said block based on said predicted sample value.
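The interpolation clause can be sketched as follows; the distance `d1` of the sample from the first predictor along the prediction line is an assumed parameter, and integer division stands in for whatever rounding the codec actually uses.

```python
def predict_sample(p1, p2, d1, w, h):
    """Bi-directional intra interpolation: the difference (p2 - p1) is scaled
    by the ratio d1 / (w + h), so the denominator is based on W + H."""
    return p1 + ((p2 - p1) * d1) // (w + h)
```

Using W + H in the denominator avoids a per-sample division by the exact geometric distance, which is the low-complexity aspect the title refers to.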

US Pat. No. 10,893,293

IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS

SUN PATENT TRUST, New Yo...

1. A coding method for coding each block among blocks of pictures, the coding method comprising:
selecting a codec standard from a plurality of codec standards;
selecting one of a first coding process and a second coding process to be performed based on the selected codec standard, the second coding process being different from the first coding process; and
coding the pictures by performing the selected one of the first coding process and the second coding process,
wherein the first coding process includes:
deriving a candidate for a motion vector predictor to be used in coding of a motion vector for a current block to be coded, from a first motion vector of a first block included in a first picture, the first picture being different from a picture that includes the current block;
adding the derived candidate to a list of candidates;
selecting one motion vector predictor from the list of candidates; and
coding the motion vector of the current block using the selected motion vector predictor, and coding the current block using the motion vector and a reference picture of the current block,
wherein the reference picture of the current block is different from the first picture, and
wherein the deriving includes:
determining whether the reference picture of the current block is a long-term reference picture or a short-term reference picture, and whether a first reference picture of the first block is a long-term reference picture or a short-term reference picture;
deriving the candidate from the first motion vector without scaling based on a temporal distance in the case of determining that each of the reference picture of the current block and the first reference picture of the first block is a long-term reference picture; and
deriving the candidate from the first motion vector by scaling based on a temporal distance in the case of determining that each of the reference picture of the current block and the first reference picture of the first block is a short-term reference picture, and
wherein the coding is compliant with a first standard that is selected.
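The deriving step's long-term/short-term distinction can be sketched as below; the temporal distances `td_current` and `td_first` and the handling of the mixed case are illustrative, since the claim only recites the two matching cases.

```python
def derive_candidate(mv, td_current, td_first, current_lt, first_lt):
    """Derive the motion vector predictor candidate: no temporal scaling when
    both reference pictures are long-term; scale by the ratio of temporal
    distances when both are short-term."""
    if current_lt and first_lt:
        return mv                               # long-term: use as-is
    if not current_lt and not first_lt:
        return tuple(c * td_current // td_first for c in mv)
    return None                                 # mixed case: candidate unusable
```

Skipping the scaling for long-term references avoids dividing by temporal distances that no longer reflect actual motion.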

US Pat. No. 10,893,292

ELECTRONIC CIRCUIT AND ELECTRONIC DEVICE PERFORMING MOTION ESTIMATION THROUGH HIERARCHICAL SEARCH

Samsung Electronics Co., ...

1. An electronic circuit configured to perform motion estimation between images, the electronic circuit comprising:
processing circuitry configured to,
determine a current block and candidate blocks with regard to each of decimated images, the decimated images being generated from an original image such that the decimated images have resolutions that are different, the decimated images including a first decimated image having a first resolution which is a lowest resolution among the resolutions and a second decimated image having a second resolution which is a highest resolution amongst the resolutions,
determine a first number of first candidate blocks based on a location of the current block without a full search for all pixels with regard to the first decimated image of a first resolution which is a lowest resolution among the resolutions,
select some of the candidate blocks for each of the decimated images by referring to the decimated images in order from the first decimated image to the second decimated image such that (i) the first candidate blocks selected with regard to the first decimated image are used to select a second number of second candidate blocks with regard to a decimated image immediately thereafter, and (ii) the candidate blocks which are not selected with regard to the first decimated image are not used to select the candidate blocks with regard to the decimated image immediately thereafter such that the second number of the second candidate blocks is less than the first number of the first candidate blocks, and
generate a motion vector for the current block based on one reference patch, the one reference patch being determined from reference patches which are indicated by candidate motion vectors of the second candidate blocks.

US Pat. No. 10,893,291

ULTIMATE MOTION VECTOR EXPRESSION WITH ADAPTIVE DIRECTIONAL INFORMATION SET

Qualcomm Incorporated, S...

1. A method of decoding video data, the method comprising:
determining a list of candidates for a current block of the video data from one or more spatial neighboring blocks in a set of spatial neighboring blocks that spatially neighbor the current block of video data;
determining a distance resolution and a direction resolution for a motion vector of at least one of the candidates;
determining, based on data obtained from a bitstream that comprises an encoded representation of the video data, a base candidate index, a direction index and a distance index;
determining a base candidate based on the base candidate index;
determining a direction based on the direction index, the direction index pointing to the direction in a direction table;
determining a distance based on the distance index, the distance index pointing to the distance in a distance table;
adapting one or more of the direction table or the distance table based on the distance resolution or the direction resolution;
determining a motion vector difference (MVD) based on the direction and the distance;
determining a prediction block using the MVD and a motion vector of the base candidate; and
decoding the current block based on the prediction block.
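The table-lookup steps can be sketched with fixed example tables; in the claim the tables are adapted from the determined distance and direction resolutions, so the constants below are assumptions.

```python
# Illustrative direction/distance tables; the actual tables are adapted
# based on the signaled resolutions per the claim.
DIRECTION_TABLE = [(1, 0), (-1, 0), (0, 1), (0, -1)]
DISTANCE_TABLE = [1, 2, 4, 8, 16]

def derive_mvd(direction_idx, distance_idx):
    """MVD = signaled direction scaled by the signaled distance."""
    dx, dy = DIRECTION_TABLE[direction_idx]
    dist = DISTANCE_TABLE[distance_idx]
    return dx * dist, dy * dist

def final_motion_vector(base_mv, mvd):
    """Add the MVD to the base candidate's motion vector."""
    return base_mv[0] + mvd[0], base_mv[1] + mvd[1]
```

Signaling only three small indices (base candidate, direction, distance) instead of a full vector difference is the compression win of this expression.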

US Pat. No. 10,893,290

APPARATUS FOR MOVING IMAGE CODING, APPARATUS FOR MOVING IMAGE DECODING, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

FUJITSU LIMITED, Kawasak...

1. An apparatus for moving image coding, the apparatus comprising:
a memory configured to store a reference panoramic image used for coding a coding-target panoramic image obtained by transforming a panoramic image captured by an imaging device; and
a processor coupled to the memory, the processor being configured to
decide a vector that represents an amount of shift of the coding-target panoramic image relative to the reference panoramic image,
generate a corrected coding-target panoramic image by correcting a position of each of a plurality of coding-target regions in the coding-target panoramic image in accordance with the amount of shift indicated by the vector,
obtain a first difference based on first motion estimation with use of the reference panoramic image and the coding-target panoramic image before correction,
obtain a second difference based on second motion estimation with use of the reference panoramic image and the corrected coding-target panoramic image,
in response to the obtaining of the second difference smaller than the first difference, perform coding processing on an image of each of the plurality of coding-target regions in the corrected coding-target panoramic image by using the reference panoramic image,
in response to the obtaining of the second difference equal to or greater than the first difference, perform the coding processing on an image of each of the plurality of coding-target regions in the coding-target panoramic image before the correcting by using the reference panoramic image.

US Pat. No. 10,893,289

AFFINE MOTION PREDICTION-BASED IMAGE DECODING METHOD AND DEVICE USING AFFINE MERGE CANDIDATE LIST IN IMAGE CODING SYSTEM

LG ELECTRONICS INC., Seo...

1. An image decoding method, by a decoding apparatus, comprising:
constructing a merge candidate list for deriving motion information of subblock units of a current block, wherein the merge candidate list includes inherited affine candidates and constructed affine candidates;
deriving control point motion vectors (CPMVs) for control points (CPs) of the current block based on the merge candidate list;
deriving prediction samples for the current block based on the CPMVs; and
generating a reconstructed picture for the current block based on the derived prediction samples,
wherein a maximum number of the inherited affine candidates is 2,
wherein a first inherited affine candidate is derived from a left block group including a bottom-left corner neighboring block and a left neighboring block of the current block,
wherein a second inherited affine candidate is derived from a top block group including a top-right corner neighboring block, a top neighboring block and a top-left corner neighboring block of the current block,
wherein the CPs include a CP0, a CP1, and a CP2, and
wherein the CP0 is a point at a top left position of the current block, the CP1 is a point at a top right position of the current block, and the CP2 is a point at a bottom left position of the current block.

US Pat. No. 10,893,288

DECODERS AND METHODS THEREOF FOR MANAGING PICTURES IN VIDEO DECODING PROCESS

TELEFONAKTIEBOLAGET LM ER...

1. A method of decoding, in a decoder, a representation of a current picture of a video stream of multiple pictures using reference pictures, wherein each picture belongs to a layer identified by a layer identity, the method comprising:
receiving least significant bits of a Picture Order Count (POC) value (pic_order_cnt_lsb) of the current picture from a bitstream; and
determining the POC value of the current picture, to be used by the decoder, as a sum of the pic_order_cnt_lsb and most significant bits of the POC (PicOrderCntMsb) of the current picture;
wherein the PicOrderCntMsb of the current picture is derived using at least a prevPicOrderCntMsb and a prevPicOrderCntLsb, where:
prevPicOrderCntMsb is set equal to PicOrderCntMsb of a previous reference picture in decoding order that has a layer identity equal to zero; and
prevPicOrderCntLsb is set equal to the value of pic_order_cnt_lsb of the previous reference picture.
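The recited derivation follows the familiar HEVC-style POC reconstruction: pick the MSB so the full POC lands closest to the previous reference picture's POC, then add the signaled LSB. The `max_lsb` value below is an example; in practice it is a power of two configured by the bitstream.

```python
def derive_poc(lsb, prev_msb, prev_lsb, max_lsb=256):
    """Derive the full POC from pic_order_cnt_lsb and the MSB/LSB state of
    the previous layer-zero reference picture in decoding order."""
    if lsb < prev_lsb and prev_lsb - lsb >= max_lsb // 2:
        msb = prev_msb + max_lsb        # LSB wrapped around upward
    elif lsb > prev_lsb and lsb - prev_lsb > max_lsb // 2:
        msb = prev_msb - max_lsb        # LSB wrapped around downward
    else:
        msb = prev_msb                  # no wrap: keep the previous MSB
    return msb + lsb
```

Anchoring the state to a layer-zero reference picture, as the claim requires, keeps all layers' POC values consistent even when higher layers are dropped.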

US Pat. No. 10,893,287

GEOSPATIAL MEDIA RECORDING SYSTEM

Remote Geosystems, Inc., ...

1. A computer implemented system, comprising:
a processor in communication with a non-transitory computer readable medium containing a video editor executable to:
edit video stream data to associate global positioning data at intervals in said video stream data,
depict video stream data as a video in a first display area on a display surface of a computing device;
depict a geospatial representation surrounding said global positioning data associated with said video stream data in a second display area on said display surface of said computing device;
depict said global positioning data associated at said intervals in said video stream data as a plurality of location coordinate indicators in said geospatial representation depicted in said second display area on said display surface of said computing device;
define a video segment in said video, said video segment retaining association with said global positioning data;
delete said coordinate location indicators corresponding to said video segment from said geospatial representation depicted on said display surface of said computer; and
remove an area in said geospatial representation corresponding to said coordinate location indicators deleted from said geospatial representation depicted in said second display area on said display surface of said computer.

US Pat. No. 10,893,286

METHODS AND APPARATUS FOR LOW-COMPLEXITY MTS

TENCENT AMERICA LLC, Pal...

1. A method for video decoding, the method comprising: determining whether at least one parameter of a block is less than or equal to a threshold;
signaling, in response to determining the at least one parameter of the block is less than or equal to the threshold, one of a horizontal transform and a vertical transform;
splitting, in response to determining that the at least one parameter of the block is greater than the threshold, the block into sub-blocks;
applying a first signaling scheme on a luma component and a second signaling scheme on a chroma component;
performing ones of transforms on the sub-blocks; and
decoding a video stream by using the sub-blocks upon which the ones of the transforms are performed,
wherein a maximum block size of the first signaling scheme is different than a maximum block size of the second signaling scheme.
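The threshold test and recursive split recited above can be sketched as below. The choice of the larger block dimension as the compared parameter, and halving oversized blocks along their longer side, are assumptions the claim does not fix; split_for_mts is an illustrative name:

```python
def split_for_mts(w, h, threshold):
    """Return leaf blocks: a block whose dimensions are within the threshold
    gets a signaled transform; larger blocks are split and recursed."""
    if max(w, h) <= threshold:
        return [(w, h)]               # signal horizontal/vertical transform here
    if w > threshold:
        halves = [(w // 2, h), (w // 2, h)]   # halve along the wide side
    else:
        halves = [(w, h // 2), (w, h // 2)]   # halve along the tall side
    blocks = []
    for bw, bh in halves:
        blocks.extend(split_for_mts(bw, bh, threshold))
    return blocks
```

A 64x64 block with a threshold of 32 ends up as four 32x32 sub-blocks, each of which then receives its own transform signaling.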

US Pat. No. 10,893,285

DEVICE AND METHOD FOR CODING VIDEO DATA BASED ON ONE OR MORE REFERENCE LINES

FG Innovation Company Lim...

1. A method of decoding a bitstream by an electronic device, the method comprising: determining a block size of a block unit from an image frame according to the bitstream;
determining a number of a plurality of reference lines based on the block size;
determining a line index of the block unit in the bitstream;
comparing the line index with a first predefined value to determine whether a mode flag is included in the bitstream;
determining a mode index in the bitstream for directly selecting a prediction mode of the block unit from a most probable mode (MPM) list of the block unit when the mode flag is not included in the bitstream;
determining whether the mode flag is equal to a second predefined value for determining whether the prediction mode is selected from the MPM list based on the mode index when the mode flag is included in the bitstream;
selecting one of the plurality of reference lines based on the line index; and
reconstructing the block unit based on the selected one of the plurality of reference lines and the prediction mode.

US Pat. No. 10,893,284

SUB-PICTURES FOR PIXEL RATE BALANCING ON MULTI-CORE PLATFORMS

TEXAS INSTRUMENTS INCORPO...

1. A video decoder comprising: a first decoder processing core;
a second decoder processing core;
a decoder controller coupled to the first decoder processing core and the second decoder processing core, wherein the decoder controller is operable to:
receive an input compressed bit stream that includes a picture;
determine that a portion of the picture is encoded in a first and second encoded sub-pictures, where the first encoded sub-picture is associated with a first identifier and includes an integer number of first coding units, and where the second encoded sub-picture is associated with a second identifier and includes an integer number of second coding units;
determine which of the first and second encoded sub-pictures is identified by decoding a first syntax element of a first slice header of a first slice of one of the first or second encoded sub-pictures;
determine which of the first and second encoded sub-pictures is identified by decoding a second syntax element of a second slice header of a second slice of one of the first or second encoded sub-pictures, the first and second syntax elements identifying at least one of the first or second encoded sub-pictures, the second syntax element of the second slice header different than the first syntax element of the first slice header;
dispatch the first slice to the first decoder processing core in response to a determination the first encoded sub-picture is identified by the first slice; and
dispatch the second slice to the second decoder processing core in response to a determination the second encoded sub-picture is identified by the second slice;
the first decoder processing core operable to decode the first encoded sub-picture; and
the second decoder processing core operable to decode the second encoded sub-picture, wherein the second decoder processing core decodes the second slice in parallel with the first decoder processing core decoding the first slice.

US Pat. No. 10,893,283

REAL-TIME ADAPTIVE VIDEO DENOISER WITH MOVING OBJECT DETECTION

Google LLC, Mountain Vie...

1. A method for adaptive noise filtering of a source video, the method comprising: receiving a source frame of the source video from a video capturing device;
removing noise from the source frame by:
dividing the source frame into source blocks;
performing moving object detection to identify first source blocks that are moving blocks;
performing a noise estimation for the source frame;
identifying one or more second source blocks that are static blocks;
calculating a noise value for the source frame based on a sum of weighted variances for a number of consecutive static blocks;
calculating an average noise value over a fixed running window; and
responsive to the average noise value satisfying a noise value threshold, performing at least one of lowering a variance threshold for the moving object detection or lowering a difference threshold for noise filtering;
generating an output frame based on the moving object detection and the noise estimation; and
encoding the output frame.
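The noise-estimation and threshold-adaptation steps can be sketched as below. The weighted-variance form of the per-frame noise value, the interpretation of "satisfying a noise value threshold" as falling below it, and all names and default values are assumptions for illustration:

```python
from collections import deque

def frame_noise(static_variances, weights):
    """Per-frame noise value: weighted sum of the variances of
    consecutive static blocks (weighting scheme assumed)."""
    return sum(w * v for w, v in zip(weights, static_variances))

def adapt_thresholds(window, noise, noise_threshold, var_thresh, diff_thresh):
    """Push this frame's noise value into a fixed running window, average it,
    and, when the average satisfies the noise threshold (assumed to mean:
    falls below it), lower the moving-object variance threshold and the
    noise-filtering difference threshold."""
    window.append(noise)                    # deque(maxlen=...) drops the oldest
    avg = sum(window) / len(window)
    if avg < noise_threshold:
        var_thresh *= 0.5
        diff_thresh *= 0.5
    return avg, var_thresh, diff_thresh
```

A `deque` with a fixed `maxlen` gives the "fixed running window" for free: appending beyond capacity evicts the oldest frame's noise value.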

US Pat. No. 10,893,282

IMAGE CODING METHOD, IMAGE CODING APPARATUS, IMAGE DECODING METHOD, IMAGE DECODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS

Velos Media, LLC, Dallas...

1. A method comprising: determining a sample adaptive offset (SAO) type parameter for a SAO value, wherein the SAO type parameter indicates whether the SAO value is an edge offset value or a band offset value, and the SAO value is to be added to a pixel value of a reconstructed image;
encoding the SAO type parameter into a bitstream;
determining an integer indicating a magnitude of the SAO value; and
encoding the determined integer using variable length coding into a plurality of bins,
wherein each bin of the plurality of bins is encoded into the bitstream using bypass arithmetic coding with a fixed probability.
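One plausible reading of the variable-length, bypass-coded binarization is truncated unary, as used for HEVC's sao_offset_abs (an assumption; the claim names neither the binarization nor a cutoff value c_max):

```python
def truncated_unary(v, c_max):
    """Truncated-unary binarization: v ones, then a terminating zero
    unless v has reached the maximum value c_max."""
    bins = "1" * v
    if v < c_max:
        bins += "0"
    return bins

def bypass_bits(bins):
    """Bypass arithmetic coding uses a fixed, equiprobable bin probability,
    so each bin costs exactly one bit in the bitstream."""
    return len(bins)
```

Because bypass bins skip context modeling entirely, the coded length equals the bin-string length, which is what makes this path cheap for the decoder.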

US Pat. No. 10,893,281

COMPRESSION OF A VIDEO STREAM HAVING FRAMES WITH RELATIVELY HEIGHTENED QUALITY PARAMETERS ON BLOCKS ON AN IDENTIFIED POINT OF INTEREST (POI)

International Business Ma...

1. A computer-implemented method, comprising: receiving specification of a type of point of interest in video data;
analyzing frames in the video data to identify a point of interest in the frames of the specified type,
wherein the frames in the video data are analyzed by an object detector;
adjusting quality parameters on blocks on the identified point of interest to improve a quality thereof,
wherein a first of the quality parameters adjusted on blocks on the identified point of interest includes motion estimation,
wherein adjusting the motion estimation includes increasing an amount of motion estimation bits on blocks on the identified point of interest;
adjusting quality parameters on blocks not on the identified point of interest to reduce a quality thereof; and
outputting a compressed video stream having the adjusted quality parameters.

US Pat. No. 10,893,280

REFINED ENTROPY CODING FOR LEVEL MAPS

GOOGLE LLC, Mountain Vie...

1. An apparatus for decoding a current block, comprising: a processor configured to: obtain a transform type for decoding a transform block for the current block;
select, based on the transform type, a template for selecting context for coding a value of a non-zero map, wherein:
the non-zero map indicates which coefficients of the transform block have non-zero values; and
the template identifies relative neighboring locations of the value that are used for determining the context;
select, based on the template, the context for entropy decoding the value of the non-zero map; and
decode, from a compressed bitstream, the value of the non-zero map based on the context.

US Pat. No. 10,893,279

DECODING APPARATUS AND DECODING METHOD, AND CODING APPARATUS AND CODING METHOD

SONY CORPORATION, Tokyo ...

1. A decoding apparatus comprising: a decoding unit configured to decode a bit stream coded according to a coding standard having a profile in which a lowest compression rate is set for each tier of a plurality of tiers including a main tier and a high tier,
wherein the lowest compression rate of the high tier of each respective level that is a predetermined level or higher and the lowest compression rate of the main tier of the respective level are different from each other, and
wherein the decoding unit is implemented via at least one processor.

US Pat. No. 10,893,278

VIDEO BITSTREAM GENERATION METHOD AND DEVICE FOR HIGH-RESOLUTION VIDEO STREAMING

SK TELECOM CO., LTD., Se...

1. A method of generating a video stream of an input video, the method comprising: receiving a plurality of encoded bitstreams encoded at different bit rates for an input video, respectively, wherein each of the plurality of encoded bitstreams includes encoded data of one or more tiles for a whole region of each frame of the input video and is encoded without allowing reference between adjacent tiles during an inter prediction;
encoding position information indicating where a display area is to be displayed;
extracting encoded data of one or more tiles corresponding to the display area selected from one of the plurality of encoded bitstreams, wherein the extracted encoded data is encoded at a first bitrate among the different bit rates; and
generating a bitstream including at least the encoded data of the display area and the position information.

US Pat. No. 10,893,277

METHOD AND DEVICE FOR INTRA PREDICTION VIDEO

SAMSUNG ELECTRONICS CO., ...

1. A method for decoding a video, the method comprising: determining availability of pixel values on a predetermined number of adjacent pixel positions used for intra prediction of a current block from among blocks obtained by splitting a picture forming the video according to a hierarchical structure;
when a pixel value on a first adjacent pixel position among the predetermined number of adjacent pixel positions is unavailable, searching for an available pixel value on a second adjacent pixel position by searching pixel values on the predetermined number of adjacent pixel positions in a predetermined direction based on the first adjacent pixel position;
replacing an unavailable pixel value on the first adjacent pixel position with an available pixel value on the second adjacent pixel position, to generate a new pixel value on the first adjacent pixel position; and
performing intra prediction on the current block by using pixel values on the predetermined number of adjacent pixel positions comprising the new pixel value on the first adjacent pixel position,
wherein the first adjacent pixel position is a lowermost left adjacent pixel position from among left and lower left adjacent pixel positions of the current block,
wherein the searching for the second adjacent pixel values further comprises searching for the second adjacent pixel position by searching left and lower left adjacent pixel positions of the current block from bottom to top based on the first adjacent pixel position, and when the available pixel value on the second adjacent pixel position is not found among the left and lower left adjacent pixel positions of the current block, searching top and upper right adjacent pixel positions of the current block from left to right,
wherein, when all the predetermined number of adjacent pixels are unavailable, all the predetermined number of adjacent pixels are replaced with a predetermined value determined based on a bit-depth.
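The substitution rule closely resembles HEVC's reference sample substitution; below is a sketch under that assumption, with refs ordered from the lowermost-left position upward and then left to right across the top, and None marking an unavailable sample. The function name is illustrative:

```python
def substitute_refs(refs, bit_depth):
    """Replace unavailable neighboring samples for intra prediction.
    If every sample is unavailable, fill with the mid-level value
    derived from the bit depth (1 << (bit_depth - 1))."""
    if all(r is None for r in refs):
        return [1 << (bit_depth - 1)] * len(refs)
    out = list(refs)
    # the first position borrows the first available sample found by
    # scanning in the predetermined direction (bottom-to-top, then left-to-right)
    if out[0] is None:
        out[0] = next(v for v in out if v is not None)
    # every later unavailable sample copies its already-resolved predecessor
    for i in range(1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out
```

For 8-bit video the all-unavailable fallback is 128, i.e. the mid-gray value, which matches the "predetermined value determined based on a bit-depth" in the final clause.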

US Pat. No. 10,893,276

IMAGE ENCODING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...

1. An image encoding device, comprising: a setting unit configured to:
set a prediction mode to be used for an image that is encoded in a state in which types of the prediction mode which are a selection target are limited, wherein
the prediction mode is set based on a picture depth indicative of a reference relationship between a plurality of pictures of the image,
a first picture of the plurality of pictures and a second picture of the plurality of pictures have a first picture depth,
the second picture refers to the first picture in the first picture depth, and
the second picture is referred to by a third picture of the plurality of pictures in a second picture depth;
change a correspondence relationship between the picture depth and prediction modes as the selection target based on a combination of an application of the image and a picture type, wherein the application of the image is associated with a picture quality of the image and a processing speed for the application; and
set the prediction mode based on: the correspondence relationship, and the picture depth equal to or smaller than two, wherein
a prediction mode of a first block size is set as the selection target and a prediction mode of a second block size is limited, and
the first block size is smaller than the second block size; and
an encoding unit configured to encode the image for each of recursively divided encoded blocks of the image based on the prediction mode set by the setting unit.

US Pat. No. 10,893,275

VIDEO CODING METHOD, DEVICE, DEVICE AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A method, performed by at least one processor of a video encoding device, the method comprising: identifying, by the at least one processor, N image frames in a sliding window, from among an image frame sequence, the N image frames in the sliding window comprising N-1 encoded image frames and a to-be-encoded image frame located at an end of the sliding window;
obtaining, by the at least one processor, a motion amplitude difference of each of the N image frames in the sliding window, the motion amplitude difference indicating a difference between a motion amplitude of a corresponding image frame and a previous motion amplitude of a previous image frame, and the motion amplitude of the corresponding image frame indicating a ratio between an inter-frame prediction cost and an intra-frame prediction cost of the corresponding image frame;
updating, by the at least one processor, a static variable based on the motion amplitude difference of each of the N image frames in the sliding window to obtain an updated static variable, the static variable indicating a number of consecutive static image frames; and
encoding, by the at least one processor, the to-be-encoded image frame as an I frame based on the updated static variable not being less than a first preset threshold,
wherein the updating the static variable based on the motion amplitude difference of each of the N image frames in the sliding window comprises:
based on the to-be-encoded image frame being identified as a static image frame, increasing the static variable by 1, to obtain the updated static variable; and
based on the to-be-encoded image frame being identified as a non-static image frame, setting the updated static variable to 0.
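The update and decision rules spelled out in the claim are simple enough to state directly; the function names are illustrative:

```python
def motion_amplitude(inter_cost, intra_cost):
    """Motion amplitude of a frame: ratio between its inter-frame
    prediction cost and its intra-frame prediction cost."""
    return inter_cost / intra_cost

def update_static_variable(static_var, is_static):
    """Count consecutive static frames; a non-static frame resets the count."""
    return static_var + 1 if is_static else 0

def should_encode_as_i_frame(static_var, threshold):
    """Encode the frame as an I frame once the run of static frames
    reaches the first preset threshold."""
    return static_var >= threshold
```

The reset-to-zero branch is what makes the static variable a run length: only an unbroken sequence of static frames can trigger the I-frame insertion.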

US Pat. No. 10,893,274

METHOD FOR PROCESSING VIDEO SIGNAL ON BASIS OF ARBITRARY PARTITION TRANSFORM

LG Electronics Inc., Seo...

1. A method of processing a video signal based on an arbitrary partition transform, comprising: determining a partition set for a two-dimensional data block, wherein the partition set indicates a set of one-dimensional partitions split from the two-dimensional data block based on a scanning pattern;
determining a transform matrix set corresponding to the partition set, wherein a two-dimensional arbitrary partition transform of the transform matrix set is defined as successive execution of two one-dimensional arbitrary partition transforms;
obtaining an arbitrary partition transform vector for each one-dimensional partition by successively applying the two one-dimensional arbitrary partition transforms to the one-dimensional partition;
obtaining arbitrary partition transform coefficients for the two-dimensional data block by inverse-mapping arbitrary partition transform vectors for respective one-dimensional partitions;
quantizing the arbitrary partition transform coefficients; and
entropy-encoding the quantized arbitrary partition transform coefficients.

US Pat. No. 10,893,273

DATA ENCODING AND DECODING

SONY CORPORATION, Tokyo ...

1. A method of encoding video data comprising:encoding a video coding parameter representing a value of a syntax element to encode by an exponential Golomb encoding scheme selected from amongst a set of encoding schemes; and
limiting a length in bits to which the value of the syntax element representing the video coding parameter is encoded by the exponential Golomb encoding scheme to 32 bits by
calculating, via circuitry, a maximum length of a prefix portion based on a bit depth of data values to be encoded plus six, and
allocating, via the circuitry, bits to the prefix portion and to a suffix portion, a length in bits of the suffix portion being dependent on a value encoded by the prefix portion.
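A sketch of a length-limited order-0 Exp-Golomb encoder follows. The prefix cap of bit_depth + 6 comes from the claim; the escape mechanism (a maximal all-zero prefix followed by a fixed-length suffix filling out the 32 bits) is a hypothetical construction for illustration, not necessarily the patented scheme:

```python
def exp_golomb0(v):
    """Classic order-0 Exp-Golomb: m leading zeros followed by the
    (m + 1)-bit binary representation of v + 1."""
    code_num = v + 1
    m = code_num.bit_length() - 1
    return "0" * m + format(code_num, "b")

def limited_eg_encode(v, bit_depth):
    """Limit the codeword to 32 bits: the prefix carries at most
    bit_depth + 6 zeros; larger values use a hypothetical escape
    (max_prefix + 1 zeros, then a fixed-length suffix)."""
    max_prefix = bit_depth + 6
    code = exp_golomb0(v)
    if (len(code) - 1) // 2 <= max_prefix:    # prefix fits: normal codeword
        return code
    suffix_len = 32 - (max_prefix + 1)
    base = (1 << (max_prefix + 1)) - 1        # smallest value needing the escape
    # note: v - base must fit in suffix_len bits for this sketch to round-trip
    return "0" * (max_prefix + 1) + format(v - base, "0{}b".format(suffix_len))
```

A decoder can distinguish the two forms by counting leading zeros: up to max_prefix zeros followed by a 1 is a normal codeword, while max_prefix + 1 zeros announces the escape and its fixed-length suffix.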

US Pat. No. 10,893,272

IMAGE BLOCK CODING BASED ON PIXEL-DOMAIN PRE-PROCESSING OPERATIONS ON IMAGE BLOCK

SONY CORPORATION, Tokyo ...

1. An embedded codec (EBC) circuitry, comprising: a memory to store a first image block of a plurality of image blocks of an input image frame; and
encoder circuitry, wherein the encoder circuitry is configured to:
compute a first sum of absolute differences (SAD) from a first prediction block of row-wise residual values and a second SAD from a second prediction block of column-wise residual values in pixel-domain, wherein the first prediction block and the second prediction block correspond to the first image block;
select a residual prediction type from a set of residual prediction types as an optimal residual prediction type for each of a first encoding mode and a second encoding mode, based on the computed first SAD and the computed second SAD;
select a set of quantization parameters as optimal quantization parameters for each of the first encoding mode and the second encoding mode, based on the computed first SAD and the computed second SAD; and
generate a set of bit-streams of encoded first image block in the first encoding mode and the second encoding mode, based on the selected residual prediction type and the selected set of quantization parameters for the first encoding mode and the second encoding mode.

US Pat. No. 10,893,271

METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL

LG ELECTRONICS, INC., Se...

1. A method for decoding an image signal, comprising: determining an input length and an output length of a non-separable transform based on a height and a width of a current block;
determining a non-separable transform matrix related to the input length and the output length of a non-separable transform; and
applying the non-separable transform matrix to coefficients by a number of the input length in the current block,
wherein the input length of the non-separable transform is determined as 8, and the output length of the non-separable transform is determined as 16, based on that each of the height and the width of a current block is equal to 4.
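The shape constraint in the final clause (input length 8, output length 16 when the block is 4x4) can be illustrated with a plain matrix-vector product; the identity-padded matrix below is a placeholder to show the shapes, not a real transform kernel:

```python
def apply_nsst(matrix, coeffs):
    """Apply an out_len x in_len non-separable transform matrix (a list of
    rows) to the first in_len coefficients of the current block."""
    return [sum(m * c for m, c in zip(row, coeffs)) for row in matrix]

in_len, out_len = 8, 16   # fixed by the claim when height = width = 4
# placeholder matrix: identity on the first 8 rows, zero rows below
T = [[1.0 if r == c else 0.0 for c in range(in_len)] for r in range(out_len)]
out = apply_nsst(T, list(range(1, in_len + 1)))
```

The point of the fixed 8-to-16 mapping is that only 8 coefficients are read from the 4x4 block, yet the matrix expands them to all 16 samples of the block.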

US Pat. No. 10,893,270

IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, AND IMAGE ENCODING PROGRAM, AND IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM

JVC KENWOOD CORPORATION, ...

1. An image encoding device adapted to segment an image into blocks and encode the image in units of blocks resulting from segmenting the image, comprising: a block segmentation unit that recursively segments the image into rectangles of a predetermined size to generate a target block subject to encoding; and
an encoding unit that encodes block segmentation information of the target block, wherein the block segmentation unit includes:
a quartering unit that quarters the target block in recursive segmentation in a horizontal direction and a vertical direction to generate four blocks; and
a halving unit that halves the target block in recursive segmentation in the horizontal direction or the vertical direction to generate two blocks,
wherein when a previous recursive segmentation is halving and the target block is of a predetermined size, the halving unit prohibits the target block subject to current recursive segmentation from being segmented in the same direction as the direction in which the target block was segmented in the previous recursive segmentation and thereby prevents:
block segmentation that results in a rectangle further elongated in the same direction and
an increase in the memory bandwidth necessary for at least one of intra prediction or inter prediction, and
wherein the memory bandwidth necessary for the intra prediction is prevented from increasing when the target block is halved inside the target block, by restricting the target block subject to current recursive segmentation from being segmented in the same direction as the direction in which the target block was segmented in the previous recursive segmentation.

US Pat. No. 10,893,269

IMAGE PROCESSING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...

1. An image processing device comprising: a block setting unit configured to set a size of a current block for an intra prediction process according to a size of a peripheral block situated on a periphery of the current block, the prediction process generating a predicted image of an image to be encoded;
an intra prediction unit configured to perform the intra prediction process on the current block set by the block setting unit and generate the predicted image; and
an encoding unit configured to encode the image to be encoded using the predicted image generated by the intra prediction unit,
wherein the block setting unit sets the size of the current block for the prediction process according to a first block division mode set when the image to be encoded is an artificial image generated artificially and according to a second block division mode set when the image to be encoded is a normal image,
wherein the first block division mode and the second block division mode are different,
wherein, in the first block division mode, the block setting unit sets the size of the current block according to the size of the peripheral block, and
wherein the block setting unit, the intra prediction unit, and the encoding unit are each implemented via at least one processor.

US Pat. No. 10,893,268

METHOD OF VIDEO CODING BY PREDICTING THE PARTITIONING OF A CURRENT BLOCK, A DECODING METHOD, AND CORRESPONDING CODING AND DECODING DEVICES AND COMPUTER PROGRAMS

ORANGE, Paris (FR)

1. A method of coding a current image that has previously been partitioned into blocks, the method performing the following acts by a coder device for a current block that is to be coded: partitioning the current block at least once into a plurality of subblocks; and
representing the partitioning of the current block in the form of a first digital information sequence;
comparing the first digital information sequence representative of said partitioning of the current block to a second digital information sequence representative of a partitioning of a block that has already been coded and then decoded, a value of a first bit of said first digital information sequence being compared to a value of a first bit of said second digital information sequence, and so on until a value of a last bit of each of said first and second digital information sequences, and then during said comparison,
for two bits having the same respective position in said both first and second digital information sequences,
if said two bits do not have the same value, a bit is generated to indicate that a subblock resulting from the partitioning of the already coded and then decoded block and corresponding to a subblock resulting from the partitioning of the current block, has been partitioned again,
if said two bits have the same value, deducing that neither a subblock of the current block, nor a corresponding subblock of the already coded and decoded block has been partitioned again, such that no bit is generated to indicate said deduction; and in which:
for said bit generated,
encoding said bit as generated; and
transmitting to a decoder a data signal including said bit as generated.

US Pat. No. 10,893,267

METHOD FOR PROCESSING IMAGE ON BASIS OF INTRA-PREDICTION MODE AND APPARATUS THEREFOR

LG ELECTRONICS INC., Seo...

1. A method for processing an image based on an intra-prediction mode by an apparatus, comprising: deriving the intra prediction mode applied to an intra prediction of a current block;
determining whether unequal weighted prediction generating a prediction sample by applying different weights to a plurality of reference samples based on the intra-prediction mode is applied to the current block; and
generating a prediction sample by weighting two or more reference samples of reference samples neighboring the current block based on the intra-prediction mode,
wherein the step of determining checks whether a prediction direction of the intra-prediction mode applied to the current block corresponds to one of predetermined angles at which the unequal weighted prediction is applied, and
wherein whether to apply filtering on the prediction sample of the current block is determined based on whether to apply the unequal weighted prediction to the current block such that the filtering is applied when the unequal weighted prediction is not applied and the filtering is not applied when the unequal weighted prediction is applied.

US Pat. No. 10,893,266

METHOD AND SYSTEM FOR OPTIMIZING BITRATE SELECTION

Disney Enterprises, Inc.,...

1. A method, comprising: encoding a video program into a first plurality of video streams, each of the first plurality of video streams being encoded at a corresponding one of a first plurality of bitrates;
generating a first alternate video stream at a first alternate bitrate that is not one of the first plurality of bitrates at which the first plurality of video streams are encoded, wherein the first alternate video stream at the first alternate bitrate is generated based on one of the first plurality of video streams padded with additional data after the one of the first plurality of video streams has been encoded;
providing, to a plurality of viewing clients, an option to select one of the first plurality of video streams or the first alternate video stream at the first alternate bitrate;
receiving, by an optimization logic, a streaming capacity of each of the plurality of viewing clients, wherein the streaming capacity of each of the plurality of viewing clients is based on a selection of one of the first plurality of video streams or the first alternate video stream at the first alternate bitrate; and
determining, by the optimization logic, a second plurality of bitrates based on at least one selection of the first alternate stream at the first alternate bitrate by the plurality of viewing clients.

US Pat. No. 10,893,264

TRAFFIC LIGHT-TYPE SIGNAL STRENGTH METER/INDICATOR LINKED TO AN ANTENNA AGC CIRCUIT

VOXX International Corpor...

1. A signal strength meter/indicator for use with an over-the-air broadcast television signal receiving antenna and which measures and provides an indication of the relative signal strength of a broadcast television signal received by the receiving antenna, the receiving antenna having an output and providing an RF (radio frequency) output signal thereon, the output signal corresponding to the over-the-air broadcast signal received by the signal receiving antenna, the signal strength meter/indicator comprising: an input bandpass filter circuit, the input bandpass filter circuit having an output, the input bandpass filter circuit being responsive to the output signal of the signal receiving antenna and generating a filtered output signal on the output of the input bandpass filter circuit in response to the output signal of the signal receiving antenna;
a first preamplifier circuit, the first preamplifier circuit having an output, the first preamplifier circuit being responsive to the filtered output signal of the input bandpass filter circuit and generating an amplified, filtered output signal on the output of the first preamplifier circuit in response to the filtered output signal of the input bandpass filter circuit;
a controllable variable attenuator circuit, the controllable variable attenuator circuit having an output and at least one control signal input on which is provided at least one control signal, the controllable variable attenuator circuit being responsive to the amplified, filtered output signal of the first preamplifier circuit and the at least one control signal provided on the at least one control signal input and generating an adjustably attenuated output signal on the output of the controllable variable attenuator circuit in response to the amplified, filtered output signal of the first preamplifier circuit and the at least one control signal;
a second preamplifier circuit, the second preamplifier circuit having an output, the second preamplifier circuit being responsive to the adjustably attenuated output signal of the controllable variable attenuator circuit and generating an amplified output signal on the output of the second preamplifier circuit in response to the adjustably attenuated output signal of the controllable variable attenuator circuit;
a splitter, the splitter having at least a first output and a second output, the splitter being responsive to the amplified output signal of the second preamplifier circuit and generating a first output signal on the first output of the splitter and a second output signal on the second output of the splitter in response to the amplified output signal of the second preamplifier circuit;
an output bandpass filter circuit, the output bandpass filter circuit having an output, the output bandpass filter circuit being responsive to the first output signal of the splitter and generating a filtered output signal on the output of the output bandpass filter circuit in response to the first output signal of the splitter, the filtered output signal of the output bandpass filter circuit being providable to a television signal viewing device;
a VHF (very high frequency)/UHF (ultra high frequency) filter circuit, the VHF/UHF filter circuit having an output, the VHF/UHF filter circuit being responsive to the second output signal of the splitter and generating a filtered output signal on the output of the VHF/UHF filter circuit in response to the second output signal of the splitter;
a power detector circuit, the power detector circuit having an output, the power detector circuit being responsive to the filtered output signal of the VHF/UHF filter circuit and generating a digitized output signal on the output of the power detector circuit in response to the filtered output signal of the VHF/UHF filter circuit, the digitized output signal corresponding to the filtered output signal of the VHF/UHF filter circuit;
a microcontroller, the microcontroller having at least one control signal output and at least one indicator signal output, the microcontroller being responsive to the digitized output signal of the power detector circuit and generating in response to the digitized output signal of the power detector circuit the at least one control signal on the at least one control signal output of the microcontroller which is provided to the controllable variable attenuator circuit on the at least one control signal input of the controllable variable attenuator circuit and further generating in response to the digitized output signal of the power detector circuit at least one indicator signal on the at least one indicator signal output; and
a display, the display being responsive to the at least one indicator signal of the microcontroller and displaying an indication of the relative signal strength of the broadcast television signal received by the signal receiving antenna in response to the at least one indicator signal of the microcontroller.
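The feedback loop recited in this claim (power detector → microcontroller → controllable variable attenuator) can be sketched as one control iteration. This is an illustrative sketch only: the target level, step size, and attenuation limits below are assumed tuning values, not taken from the claim.

```python
def agc_step(detected_power_dbm, target_dbm, attenuation_db,
             step_db=0.5, att_min=0.0, att_max=30.0):
    """One iteration of the microcontroller's loop: read the digitized
    power-detector output and nudge the variable attenuator toward a
    target output level. Returns the new attenuation setting (dB)."""
    if detected_power_dbm > target_dbm:
        attenuation_db += step_db   # signal too strong: attenuate more
    elif detected_power_dbm < target_dbm:
        attenuation_db -= step_db   # signal too weak: attenuate less
    # clamp to the attenuator's physical range
    return min(att_max, max(att_min, attenuation_db))
```

The indicator signal driving the display could then be derived from the settled attenuation value, since higher required attenuation implies a stronger received broadcast signal.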

US Pat. No. 10,893,263

METHOD AND SYSTEM FOR PAIRING SHUTTER GLASSES WITH A REMOTE CONTROL UNIT

ADVANCED DIGITAL BROADCAS...

1. A non-transitory computer readable storage medium comprising instructions that cause a processor to pair shutter glasses with a remote control unit (RCU) for use with a display device configured to display a plurality of video streams by time interleaving the frames of the video streams and sending a synchronization signal to the shutter glasses, by performing the following steps when such instructions are executed by the processor, the steps comprising:generating at least one synchronization signal for the shutter glasses;
transmitting, by the RCU, an infrared (IR) signal for receipt by the display device, such that the IR signal passes through at least one lens of the shutter glasses prior to receipt of the IR signal by the display device, wherein the shutter glasses are separate from the RCU and operate, by alternating the at least one lens of the shutter glasses between a transparent state and an opaque state, according to the generated at least one synchronization signal;
receiving, by the display device, the IR signal from the RCU after the IR signal passes through the at least one lens of the shutter glasses;
comparing the temporal correspondence of the received IR signal with the generated at least one synchronization signal; and
in case a temporal correspondence is found, pairing the RCU with the shutter glasses for which the correspondent synchronization signal was found, wherein the shutter glasses for which the correspondent synchronization signal was found is a first shutter glasses of a plurality of shutter glasses, and wherein the RCU that is paired with the first shutter glasses is a first RCU of a plurality of RCUs, and wherein the correspondent synchronization signal is a first synchronization signal of the generated at least one synchronization signal, and wherein the steps further comprise:
transmitting, by a second RCU of the plurality of RCUs, a second IR signal for receipt by the display device, such that the IR signal passes through at least one lens of a second shutter glasses of the plurality of shutter glasses prior to receipt of the second IR signal by the display device, wherein the second shutter glasses operate, by alternating the at least one lens of the second shutter glasses between a transparent state and an opaque state, according to a second synchronization signal of the generated at least one synchronization signal;
receiving, by the display device, the second IR signal from the second RCU after the second IR signal has passed through the at least one lens of the second shutter glasses;
comparing the temporal correspondence of the received second IR signal with the generated at least one synchronization signal; and
pairing the second RCU with the second shutter glasses in response to temporal correspondence between the received second IR signal and the second synchronization signal.
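The temporal-correspondence test in this claim can be sketched as follows: the display records the times at which IR pulses arrive, and pairs the RCU with the glasses whose transparent-state windows (derived from that pair's synchronization signal) contain those pulses. Function and parameter names are hypothetical.

```python
def pair_rcu(ir_times, sync_windows_by_glasses):
    """Pair an RCU with the shutter glasses whose transparent-state
    windows best match the arrival times of the RCU's IR bursts.

    ir_times: timestamps at which the display detected IR pulses.
    sync_windows_by_glasses: dict mapping a glasses id to a list of
        (open_start, open_end) intervals during which that pair's
        lens is transparent.
    Returns the id of the best-matching glasses, or None.
    """
    best_id, best_hits = None, 0
    for gid, windows in sync_windows_by_glasses.items():
        # count pulses that fall inside this pair's open windows
        hits = sum(
            1 for t in ir_times
            if any(start <= t <= end for start, end in windows)
        )
        if hits > best_hits:
            best_id, best_hits = gid, hits
    return best_id if best_hits else None
```

Because an opaque lens blocks the IR burst, only pulses sent while the interposed glasses are transparent reach the display, which is what makes this counting test discriminate between the plurality of glasses.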

US Pat. No. 10,893,262

LIGHTFIELD RENDERING BASED ON DEPTHS FROM PHYSICALLY-BASED VOLUME RENDERING

Siemens Healthcare GmbH, ...

1. A method for lightfield volume rendering, the method comprising:rendering, by a graphics processing unit or processor as a physically-based renderer and from a medical dataset representing a three-dimensional region of a patient, the medical dataset being from a patient scan by a magnetic resonance or computed tomography scanner, to create a lightfield representing the three-dimensional region of the patient in a plurality of two-dimensional views, the rendering stochastically generating a plurality of depths of scattering for each pixel of each of the two-dimensional views;
assigning depths to locations for the pixels of the two-dimensional views in the lightfield, the depth for each pixel of each of the two-dimensional views of the lightfield being determined as a depth combination calculated from the plurality of depths at which stochastically determined scattering used in the rendering by the physically-based renderer for that pixel occurs;
filtering the two-dimensional views as a function of the depths assigned to the pixels of the lightfield;
rendering, by a lightfield renderer performing the filtering, another image representing the three-dimensional region of the patient from the lightfield and depths assigned to the pixels of the lightfield; and
transmitting the other image.
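The per-pixel depth combination in this claim collapses the stochastic scattering depths from the Monte Carlo path tracer into a single assigned depth. A minimal sketch, assuming a mean or nearest-scatter combination (the claim leaves the combination function open):

```python
import statistics

def combine_scatter_depths(scatter_depths, mode="mean"):
    """Collapse the stochastic per-pixel scattering depths from a
    physically-based renderer into one depth value for the lightfield.
    mode: "mean" averages all scatter events; "first" keeps the
    nearest scatter event.
    """
    if not scatter_depths:
        return float("inf")  # no scattering occurred: background pixel
    if mode == "first":
        return min(scatter_depths)
    return statistics.fmean(scatter_depths)
```

The assigned depths then drive the depth-aware filtering of the two-dimensional views in the subsequent lightfield rendering step.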

US Pat. No. 10,893,261

POSITIONAL ZERO LATENCY

Dolby Laboratories Licens...

1. A method for streaming video data, comprising:based on viewing tracking data received from a streaming client device after a first time point, determining a target view direction of a viewer in relation to a three-dimensional (3D) scene depicted by a first video image (a) that has been streamed in a video stream to the streaming client device before the first time point and (b) that has been rendered with the streaming client device to the viewer at the first time point;
identifying, based on the target view direction of the viewer, a target view portion in a second video image to be streamed in the video stream from a video streaming server to the streaming client device before a second time point subsequent to the first time point and to be rendered at the second time point;
wherein a spatial extent of a surround region covered by the target view portion in the second video image is determined based at least in part on a transmission latency time between the video streaming server and the streaming client device;
encoding the target view portion in the second video image into the video stream with a target spatiotemporal resolution higher than a non-target spatiotemporal resolution used to encode remaining non-target view portions in the second video image that are outside the target view portion;
transmitting, to the streaming client device, the second video image comprising the target view portion encoded with the target spatiotemporal resolution and the remaining non-target view portions encoded with the non-target spatiotemporal resolution via the video stream;
wherein the method is performed by one or more computing devices.
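The latency-dependent sizing of the surround region can be sketched as an angular margin: the high-resolution target view portion must extend far enough beyond the viewport that the viewer's head cannot rotate out of it before the next image arrives. The maximum head speed and safety margin below are assumed illustration constants, not from the claim.

```python
def surround_extent_deg(latency_ms, max_head_speed_deg_per_s=120.0,
                        margin=1.2):
    """Angular extent (degrees) the high-resolution target view
    portion should extend beyond the current viewport so that head
    motion during the server-client transmission latency stays
    inside it."""
    return max_head_speed_deg_per_s * (latency_ms / 1000.0) * margin
```

With the assumed constants, a 100 ms transmission latency yields roughly a 14-degree surround margin; longer latency requires a proportionally larger high-resolution region.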

US Pat. No. 10,893,260

DEPTH MAPPING WITH A HEAD MOUNTED DISPLAY USING STEREO CAMERAS AND STRUCTURED LIGHT

Facebook Technologies, LL...

1. A depth camera assembly comprising:a structured light illuminator configured to emit a structured light pattern onto a local area;
an image capture device configured to capture a set of images of the local area;
a mode select module coupled to the image capture device and to the structured light illuminator, the mode select module configured to:
determine a mode of the image capture device and the structured light illuminator based in part on a signal to noise ratio (SNR) of the captured set of images, and
communicate instructions for configuring the determined mode to at least one or more of the image capture device and the structured light illuminator,
wherein the mode is such that, responsive to a first value of the SNR, the structured light illuminator is deactivated and the image capture device is activated, and responsive to a second value of the SNR that is lower than the first value, the mode is such that the structured light illuminator is activated and the image capture device is activated.
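The SNR-driven mode selection in this claim can be sketched as a simple threshold rule; the threshold value is an assumed tuning constant, not from the claim.

```python
def select_mode(snr, threshold=20.0):
    """Choose the depth camera assembly's operating mode from the SNR
    of the captured images. High SNR: ambient texture suffices, so
    the structured light illuminator stays off; lower SNR: the
    illuminator is activated to add texture.
    Returns (illuminator_on, camera_on)."""
    if snr >= threshold:
        return (False, True)   # first (higher) SNR value: passive stereo
    return (True, True)        # second (lower) SNR value: structured light
```

In practice such a rule would be hysteretic to avoid mode flapping near the threshold, but the claim only requires the two SNR-dependent states shown.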

US Pat. No. 10,893,259

APPARATUS AND METHOD FOR GENERATING A TILED THREE-DIMENSIONAL IMAGE REPRESENTATION OF A SCENE

Koninklijke Philips N.V.,...

1. An apparatus for generating a tiled three-dimensional image representation of a scene, the apparatus comprising:a receiver,
wherein the receiver is arranged to receive a tiled three-dimensional image representation of a scene from a first viewpoint,
wherein the tiled three-dimensional image representation comprises a plurality of interconnected tiles,
wherein each tile comprises a depth map and a texture map,
wherein the texture map represents a viewport of the scene from the first viewpoint,
wherein the tiles form a tiling pattern;
a first processor circuit,
wherein the first processor circuit is arranged to determine neighboring border regions in at least a first tile and in a second tile in response to the tiling pattern,
wherein the first tile and the second tile are neighboring tiles;
a second processor circuit,
wherein the second processor circuit is arranged to modify at least a first depth value of a first border region of the first tile in response to at least a second depth value in a second border region of the second tile such that a difference between the first depth value and the second depth value is reduced for at least some values of the first depth value and the second depth value,
wherein the first border region and the second border region are neighboring border regions.

US Pat. No. 10,893,258

DISPLACEMENT-ORIENTED VIEW SYNTHESIS SYSTEM AND METHOD

National Taiwan Universit...

1. A displacement-oriented view synthesis system, comprising:a plurality of three-dimensional (3D) warping devices each coupled to receive at least one input image captured from a corresponding reference view, and each performing 3D warping on the input image to generate at least one corresponding warped image in a target view;
a view blending device coupled to receive the warped images, and performing view blending on the warped images to generate at least one blended image in the target view; and
an inpainting device coupled to receive the blended image, and performing inpainting on the blended image to generate a synthesized image in the target view;
wherein the inpainting is performed according to a difference displacement between frames of different views;
wherein the difference displacement is obtained by the following steps:
determining a foreground displacement associated with a foreground object, the foreground displacement being constructed by connecting a foreground-associated point in the target view with a corresponding point in the reference view;
determining a background displacement associated with a background object, the background displacement being constructed by connecting a background-associated point in the target view with the corresponding point in the reference view; and
obtaining the difference displacement by subtracting the background displacement from the foreground displacement.
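The three steps above can be sketched directly: each displacement connects a point in the target view with its corresponding point in the reference view, and the difference displacement is their vector difference. Names are hypothetical; 2-D pixel coordinates are assumed.

```python
def difference_displacement(fg_target, fg_ref, bg_target, bg_ref):
    """Difference displacement used to guide inpainting.

    fg_target/fg_ref: a foreground-associated point in the target
        view and its corresponding point in the reference view.
    bg_target/bg_ref: likewise for a background-associated point.
    Returns foreground displacement minus background displacement.
    """
    fg = (fg_ref[0] - fg_target[0], fg_ref[1] - fg_target[1])
    bg = (bg_ref[0] - bg_target[0], bg_ref[1] - bg_target[1])
    return (fg[0] - bg[0], fg[1] - bg[1])
```

The resulting vector captures the parallax between foreground and background across views, which indicates how far disoccluded holes should borrow background content during inpainting.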

US Pat. No. 10,893,257

MULTI-DIMENSIONAL DATA CAPTURE OF AN ENVIRONMENT USING PLURAL DEVICES

AEMASS, INC., San Franci...

1. A method of delivering three-dimensional (3D) data to one or more users, the method comprising:producing, by a computer processing system, a series of time-indexed 3D frames to generate a scene in motion over a first period of time, wherein the time-indexed 3D frames are produced by:
receiving data, from each data capture device of one or more data capture devices, each data capture device including a depth sensor, the data from each of the one or more data capture devices comprising image data of the scene in motion and time-indexed depth data indicating a set of distance measurements to points on surfaces of the scene in motion sampled over the first period of time from one or more points of view,
receiving location data for the first period of time, the location data including data for a location of each of the one or more data capture devices,
determining 3D coordinates in a common coordinate system for each of the points of the depth data for each of the one or more data capture devices at each sampled time of the first period of time,
performing real-time data discrimination by the computer processing system, one or more of the data capture devices, or a combination thereof to differentiate at least a first portion of the received data from a second portion of the received data, and
generating, in a memory coupled with the computer processing system, the series of time-indexed 3D frames of the scene in motion based at least in part on the differentiation of the first portion of the received data and the second portion of the received data, wherein each of the time-indexed 3D frames includes a 3D point cloud with the 3D coordinates of the points of the depth data for each of the one or more data capture devices in the common coordinate system at a given sampled time of the first period of time; and
transmitting, by the computer processing system, the series of time-indexed 3D frames to one or more users for viewing, wherein a first point of view of a first user for the series of time-indexed 3D frames may be selected from any of a plurality of different points of view within the 3D point cloud of each of the 3D frames.

US Pat. No. 10,893,256

APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR OMNIDIRECTIONAL VIDEO

Nokia Technologies Oy, E...

1. A method comprising:obtaining a first coded tile or sub-picture track and a second coded tile or sub-picture track, the first and second coded tile or sub-picture tracks representing different spatial parts of an input video sequence, and the first and second coded tile or sub-picture tracks comprising same width and height in pixels;
providing an indication of a first group of tile or sub-picture tracks that are alternatives for extraction, the first group of tile or sub-picture tracks comprising the first and second coded tile or sub-picture tracks; and
creating an extractor track comprising a sample corresponding to a coded picture, the sample comprising an extractor, the extractor comprising a sample constructor comprising a reference to an identifier of the first group of tile or sub-picture tracks, the reference intended to be resolved by selecting one of the tile or sub-picture tracks in the first group to be a source of extraction, and the sample constructor intended to be resolved by copying data by reference from the source of extraction.

US Pat. No. 10,893,255

CAMERA SYSTEM FOR INCREASING BASELINE

Center For Integrated Sma...

1. A camera system comprising:a single lens; and
an image sensor including at least one pixel array, the at least one pixel array including a plurality of pixels in a two-dimensional arrangement and a single microlens disposed on the plurality of pixels to be shared,
wherein light shielding layers formed with Offset Pixel Apertures (OPAs) are disposed on at least two pixels of the plurality of pixels, respectively,
wherein the OPAs are formed on the light shielding layers to maximize a spaced distance between the OPAs; and
wherein an offset f-number associated with the spaced distance between the OPAs and each height of the at least two pixels is larger than an f-number of the single lens.

US Pat. No. 10,893,254

METHOD FOR TRANSMITTING 360-DEGREE VIDEO, METHOD FOR RECEIVING 360-DEGREE VIDEO, APPARATUS FOR TRANSMITTING 360-DEGREE VIDEO, AND APPARATUS FOR RECEIVING 360-DEGREE VIDEO

LG ELECTRONICS INC., Seo...

16. A 360-degree video transmission apparatus, the apparatus comprising:a projection processor configured to generate pictures of 360-degree video;
an encoder configured to encode the pictures;
a metadata processor configured to generate metadata; and
a transmission processor configured to perform processing for storage or transmission of the encoded pictures and the metadata,
wherein:
the metadata includes viewing space information, and
the viewing space information includes information indicating a shape type of the viewing space, when the shape type of the viewing space is an ellipsoid, the viewing space information includes information indicating a length of an x axis of the viewing space, information indicating a length of a y axis of the viewing space, and information indicating a length of a z axis of the viewing space, and the length of the x axis is in the range of 1 to 65 536 * 2¹⁶ − 1, the length of the y axis is in the range of 1 to 65 536 * 2¹⁶ − 1, and the length of the z axis is in the range of 1 to 65 536 * 2¹⁶ − 1.

US Pat. No. 10,893,253

METHOD AND APPARATUS FOR DISTRIBUTION OF 3D TELEVISION PROGRAM MATERIALS

Google Technology Holding...

1. A method for converting video content, the method comprising:receiving a 3D video stream and metadata associated with the 3D video stream, wherein the metadata includes a type of 3D to 2D conversion applicable to the 3D video stream, and wherein the type of 3D to 2D conversion is a value from a plurality of values that at least indicates an output resolution for a 2D video stream and a manner in which a left 3D view and a right 3D view of the 3D video stream are included within the 3D video stream;
determining that a 3D to 2D conversion is to be performed;
in response to determining that the 3D to 2D conversion is to be performed, identifying the output resolution for the 2D video stream indicated by the type of 3D to 2D conversion; and
converting the 3D video stream to the 2D video stream with the output resolution indicated by the type of 3D to 2D conversion using either the left 3D view or the right 3D view.

US Pat. No. 10,893,252

IMAGE PROCESSING APPARATUS AND 2D IMAGE GENERATION PROGRAM

EMBODYME, INC., Tokyo (J...

1. An image processing apparatus comprising:a three-dimensional (3D) data acquisition unit that acquires 3D data of a 3D model in which a captured image is mapped on a surface;
a correct image acquisition unit that acquires captured image data used as a correct image;
a two-dimensional (2D) image generation unit that generates 2D image data from the 3D data acquired by the 3D data acquisition unit according to a predetermined 2D conversion algorithm; and
an evaluation value calculation unit that calculates an evaluation value representing similarity between the 2D image data generated by the 2D image generation unit and the captured image data acquired by the correct image acquisition unit,
wherein
the 2D image generation unit modifies the 2D conversion algorithm by learning to optimize the evaluation value calculated each time for 2D image data generated when the 3D data is input to the 2D image generation unit and processing is repeatedly performed,
the evaluation value calculation unit includes a first evaluation value calculation unit that uses the 3D data acquired by the 3D data acquisition unit, and the captured image data acquired by the correct image acquisition unit or the 2D image data generated by the 2D image generation unit as an input, identifies whether an input image is a correct image or the 2D image data generated by the 2D image generation unit according to a predetermined identification algorithm, and calculates a probability that the input image is identified as the correct image as a first evaluation value, and
the 2D conversion algorithm of the 2D image generation unit is modified to maximize the first evaluation value calculated by the first evaluation value calculation unit using the 2D image data as an input, and the identification algorithm of the first evaluation value calculation unit is modified to minimize the first evaluation value calculated by the first evaluation value calculation unit using the 2D image data as an input and maximize the first evaluation value calculated by the first evaluation value calculation unit using the captured image data as an input.
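The opposed objectives in this claim are the familiar adversarial pattern: the 2D image generation unit (generator) is trained to maximize the probability that its output is identified as the correct image, while the first evaluation value calculation unit (discriminator) is trained to maximize that probability on captured data and minimize it on generated data. A minimal sketch of the two losses, assuming a binary cross-entropy formulation (the claim does not fix the loss form):

```python
import math

def discriminator_loss(p_real, p_fake):
    """Identification-unit objective: p_real is the first evaluation
    value on captured image data, p_fake the first evaluation value
    on generated 2D image data. Lower loss means the unit better
    accepts real data and rejects generated data."""
    return -(math.log(p_real) + math.log(1.0 - p_fake))

def generator_loss(p_fake):
    """Generation-unit objective: maximize the probability that its
    output is identified as the correct image."""
    return -math.log(p_fake)
```

Alternating gradient steps on these two losses implement the "modified to maximize/minimize" language of the claim.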

US Pat. No. 10,893,251

THREE-DIMENSIONAL MODEL GENERATING DEVICE AND THREE-DIMENSIONAL MODEL GENERATING METHOD

PANASONIC INTELLECTUAL PR...

1. A three-dimensional model generating device, comprising:a memory configured to store a program; and
a processor configured to execute the program and control the three-dimensional model generating device to:
for each of input images included in one or more items of video data and having mutually different viewpoints, generate a converted image from the input image, by extracting pixels at predetermined pixel locations in the input image;
detect features in the converted images and estimate, for each of the input images, a camera parameter at a capture time of the input image, based on a pair of similar features between two of the converted images;
generate a three-dimensional model using the input images and the camera parameters; and
prioritize each of the predetermined pixel locations so that pixel locations at four corners of each of the input images are set at an increased priority level as compared to priority levels of remaining pixel locations of the predetermined pixel locations, the pixel locations at the four corners being more significantly affected by lens distortion as compared to the remaining pixel locations, wherein
the camera parameters are estimated using, preferentially, pixels at the pixel locations at the four corners.

US Pat. No. 10,893,250

FREE-VIEWPOINT PHOTOREALISTIC VIEW SYNTHESIS FROM CASUALLY CAPTURED VIDEO

Fyusion, Inc., San Franc...

1. A method comprising:rendering a plurality of target viewpoint images based on a plurality of multiplane images of a three-dimensional scene, each of the plurality of multiplane images corresponding with a respective one of the plurality of target viewpoint images, each of the plurality of multiplane images associated with a respective one of a plurality of single plane images of the three-dimensional scene, each of the plurality of single plane images being captured from a respective viewpoint, each of the plurality of multiplane images including a respective plurality of depth planes, each of the respective plurality of depth planes including a respective plurality of pixels from the respective single plane image, each of the respective plurality of pixels in the respective plurality of depth planes being positioned at approximately a same distance from the respective viewpoint;
determining a weighted combination of the respective target viewpoint image for each of the plurality of multiplane images via a processor at a computing device, wherein a sampling density of the plurality of single plane images is sufficiently high that the weighted combination satisfies an inequality wherein a maximum pixel disparity of any scene point between adjacent ones of the plurality of target viewpoint images is less than or equal to a minimum of: (a) a number of depth layers associated with the plurality of multiplane images and (b) half of a target rendering resolution for a novel viewpoint image; and
transmitting the weighted combination of the respective target viewpoint image for each of the plurality of multiplane images as the novel viewpoint image.
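The sampling-density condition in this claim reduces to a single inequality check: the maximum pixel disparity of any scene point between adjacent target viewpoint images must not exceed the smaller of the depth-layer count and half the target rendering resolution. A direct sketch:

```python
def sampling_density_ok(max_disparity_px, num_depth_layers,
                        target_resolution_px):
    """True if capture viewpoints are dense enough for artifact-free
    blending: max disparity between adjacent target viewpoint images
    must be <= min(depth layers, half the rendering resolution)."""
    return max_disparity_px <= min(num_depth_layers,
                                   target_resolution_px / 2)
```

When the check fails, either more depth layers per multiplane image or a denser set of captured viewpoints is needed before the weighted combination is valid.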

US Pat. No. 10,893,249

IMAGING DEVICE

NIKON CORPORATION, Tokyo...

1. An electronic device comprising:an operating unit that receives an imaging instruction;
an imaging unit that images a subject and outputs an imaging signal;
a display unit that displays a through image based on the imaging signal; and
a controller that generates, based on the imaging signal, moving image data composed of a plurality of frames over a predetermined imaging time period that includes a time point when the imaging instruction is received.

US Pat. No. 10,893,248

IMAGING SENSOR AND IMAGING DEVICE

MAXELL, LTD., Kyoto (JP)...

1. An imaging sensor comprising:an imaging sensor main body in which a light-receiving element is arranged in each pixel; and
a filter including:
a color filter in which a plurality of kinds of filter parts are arranged in a predetermined array in a manner corresponding to an array of pixels of the imaging sensor main body, the plurality of kinds of filter parts having different spectral transmission characteristics in response to wavelengths in a visible-light band; and
an optical filter which has:
i) a transmission characteristic in the visible-light band; and
ii) a light blocking characteristic in a first wavelength band adjacent to a longer wavelength side of the visible-light band, except for a second wavelength band for light transmission, the second wavelength band being a predetermined range having a lower limit and an upper limit within the first wavelength band; wherein:
the color filter has a third wavelength band, in which transmittances of the plurality of kinds of filter parts are approximate to one another, on a longer wavelength side than the visible-light band; and
the spectral transmission characteristics of the optical filter and the spectral transmission characteristics of the plurality of kinds of filter parts in the color filter are set in a manner such that the second wavelength band is included in the third wavelength band.

US Pat. No. 10,893,247

MEDICAL SIGNAL PROCESSING DEVICE AND MEDICAL OBSERVATION SYSTEM

SONY OLYMPUS MEDICAL SOLU...

1. A medical signal processing device, comprising:circuitry configured to
process an image obtained by an imaging sensor including a plurality of pixels,
generate a video signal for display,
set an operation mode of a Y gamma-correction to any of a first, second, or third operation mode on the basis of brightness of the image obtained by the imaging sensor, wherein a Y gamma-curve in the Y gamma-correction is different in each of the operation modes,
calculate a histogram of the luminance signal for the image obtained by the imaging sensor, wherein luminance values in the histogram are grouped into a dark area comprising luminance values below a first threshold, an intermediate area comprising luminance values between the first threshold and a second threshold, and a bright area comprising luminance values above the second threshold,
set the operation mode of the Y gamma-correction to the first operation mode in response to a determination that the calculated histogram is a histogram of a first pattern having peaks in both the dark area and the bright area,
set the operation mode of the Y gamma-correction to the second operation mode in response to a determination that the calculated histogram is a histogram of a second pattern having a peak only in the dark area,
set the operation mode of the Y gamma-correction to the third operation mode in response to a determination that the calculated histogram is a histogram of a third pattern having a peak only in the bright area, and
perform the Y gamma-correction on a luminance signal for each pixel in the image obtained by the imaging sensor, wherein
the imaging sensor images a subject image taken in by an endoscope inserted into a subject,
the image obtained by the imaging sensor includes the subject image and a mask area other than the subject image,
the circuitry is further configured to detect border points between the subject image and the mask area on the basis of the luminance signal for each pixel in the image obtained by the imaging sensor, and
the Y gamma-correction is performed only on an area surrounded by the detected border points in the entire image obtained by the imaging sensor.
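The histogram-pattern classification driving the mode selection can be sketched as follows. The claim does not define what counts as a "peak", so the fractional-mass heuristic and its threshold below are assumptions.

```python
def gamma_mode(hist, t1, t2, peak_frac=0.2):
    """Pick the Y gamma-correction operation mode from a luminance
    histogram. hist: pixel counts indexed by luminance value;
    luminances below t1 form the dark area, above t2 the bright area.
    A region 'has a peak' here if it holds more than peak_frac of
    all pixels (assumed heuristic). Returns 1, 2, or 3, or None if
    no listed pattern matches."""
    total = sum(hist) or 1
    dark = sum(hist[:t1]) / total
    bright = sum(hist[t2 + 1:]) / total
    dark_peak, bright_peak = dark > peak_frac, bright > peak_frac
    if dark_peak and bright_peak:
        return 1   # first pattern: peaks in both dark and bright areas
    if dark_peak:
        return 2   # second pattern: peak only in the dark area
    if bright_peak:
        return 3   # third pattern: peak only in the bright area
    return None
```

In the endoscopic setting of the claim, the histogram would be computed only over the area inside the detected border points, so the dark mask region does not bias the mode choice.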

US Pat. No. 10,893,246

PROJECTION SYSTEM AND AUTOMATIC SETTING METHOD THEREOF

Coretronic Corporation, ...

1. A setting method, adapted to a projection system, wherein the projection system comprises a processing device and at least one projector, the projector comprises a projection unit and an image capturing unit, and the setting method comprises:firstly, projecting an image frame onto a projection plane by the projection unit, wherein the image frame comprises a grid point array;
secondly, capturing the image frame on the projection plane by the image capturing unit according to different values of at least one setting parameter of the image capturing unit, so as to obtain a plurality of captured images corresponding to the different values of the at least one setting parameter; and
then, analyzing the captured images to determine whether the captured images meet a preset image condition,
when one of the captured images meets the preset image condition, selecting the one of the captured images,
setting the image capturing unit with a value of the at least one setting parameter corresponding to the selected captured image automatically, or
when all of the captured images fail to meet the preset image condition, projecting a prompt image onto the projection plane by the projection unit for reminding a user to manually adjust values of the at least one setting parameter of the image capturing unit of the projector.
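The capture-evaluate-select loop of this setting method can be sketched generically; `capture` and `meets_condition` stand in for the image capturing unit and the preset image condition, and are caller-supplied assumptions.

```python
def auto_set(capture, meets_condition, candidate_values):
    """Sweep candidate values of a capture setting parameter
    (e.g. exposure), capture the projected grid-point frame at each
    value, and return the first value whose captured image meets the
    preset condition. Returns None when no value qualifies, signaling
    that a prompt image should be projected for manual adjustment."""
    for value in candidate_values:
        image = capture(value)
        if meets_condition(image):
            return value
    return None
```

A real implementation would likely score all captures and keep the best rather than the first acceptable one, but the early-exit form matches the claim's "when one of the captured images meets the preset image condition" wording.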

US Pat. No. 10,893,245

METHOD FOR PROJECTING IMAGE AND ROBOT IMPLEMENTING THE SAME

LG ELECTRONICS INC., Seo...

1. A robot, comprising:a motor assembly configured to move the robot in a space;
a projector configured to project an image; and
a controller configured to:
select a projection area in the space based on the following:
first information including type information of the image to be projected, and
second information related to a user to view the image,
control the motor assembly, and
control the projector to project the image to the projection area.

US Pat. No. 10,893,244

COMPENSATING FOR VIGNETTING

Lumileds LLC, San Jose, ...

1. A system comprising:a camera configured to capture images of a scene that extends over a field of view of the camera;
a lighting system configured to generate illumination to illuminate the field of view of the camera, the illumination having a distribution shaped to have an intensity at a center of the field of view that is lower than an intensity of the illumination at an edge of the field of view; and
a processor configured to manipulate first image data corresponding to the scene illuminated by the lighting system using second image data corresponding to the scene with ambient lighting.

US Pat. No. 10,893,243

LAWN VIOLATION DETECTION

Alarm.com Incorporated, ...

1. A monitoring system that is configured to monitor a property, the monitoring system comprising:a camera that is configured to capture image data; and
a monitor control unit that is configured to:
receive, from the camera, image data;
based on the image data, determine that an animal is located at the property, and that a human is within a threshold distance of the animal;
based on determining that the animal is located at the property and that a human is within the threshold distance of the animal and based on the image data, determine that the animal discharged feces on the property and that the human did not remove the feces from the property by:
determining that the animal lowered a base of a tail of the animal relative to a nose of the animal;
determining that the human did not bend over after the animal lowered the base of the tail of the animal relative to the nose of the animal; and
based on determining that the human did not bend over after the animal lowered the base of the tail of the animal relative to the nose of the animal, determining that the human did not remove the feces from the property, and
based on determining that the animal discharged feces on the property and that the human did not remove the feces from the property, perform a monitoring system action.
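The posture-sequence rule in this claim (tail base lowered, then no human bend-over) is an ordered-event check. A minimal sketch, assuming the image analytics emit a time-ordered list of event labels (the label names are hypothetical):

```python
def feces_not_removed(events):
    """events: time-ordered event labels from the camera analytics,
    e.g. 'tail_lowered', 'human_bent_over'. Returns True when the
    animal lowered the base of its tail and no human bend-over
    event follows, i.e. the claim's trigger for a monitoring
    system action."""
    if "tail_lowered" not in events:
        return False
    after = events[events.index("tail_lowered") + 1:]
    return "human_bent_over" not in after
```

A deployed system would bound the check with a timeout window after the tail-lowering event; the sketch omits timing for brevity.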

US Pat. No. 10,893,242

MOBILE COMMUNICATION PLATFORM

Jenesia1 Inc., Powellsvi...

1. A mobile communication system comprising:a vehicle having an interior and an exterior, the vehicle including a wheel and suspension system for travel over roads at speeds of up to 100 MPH, the vehicle having a front cabin and an enclosed rear cabin separated from the front cabin by a structural divider, the vehicle being a pickup truck with a truck bed, the rear cabin including the bed of the pickup truck;
a central server within the enclosed rear cabin of the vehicle and coupled to a secured wireless network;
a telecommunication sub-system including:
a plurality of IP-based devices coupled to the central server providing for surveillance, recording, and broadcast of video, audio, voice and data, the plurality of devices including at least one camera mounted to the exterior of the vehicle to provide for a 360 degree view around the vehicle for surveillance of a site around the vehicle; and
a plurality of operating stations within the vehicle coupled to the central server for monitoring, operation and control of the devices, the plurality of operating stations including at least one operating station in the front cabin for use by an operator in the front cabin and at least one operating station in the rear cabin for use by an operator in the rear cabin, each of the plurality of operating stations including audio/voice telecommunication equipment networked with one another over the central server to provide for communication between the plurality of operating stations and conversations between the operator in the front cabin and the operator in the rear cabin over the secured wireless network.

US Pat. No. 10,893,241

SYSTEM AND COMPUTER PROGRAM PRODUCT FOR MONITORING, CONTROLLING AND SURVEILLING PORTABLE LABORATORY REACTOR

Xerox Corporation, Norwa...

1. An automated wireless laboratory system comprising:a reactor having multiple ports;
a drive motor for agitating contents within the reactor;
a speed sensor for measuring revolutions per minute of the drive motor;
a temperature sensor for measuring a temperature within the reactor;
a pressure sensor for measuring a pressure within the reactor;
a pH sensor for measuring a pH of contents within the reactor;
a conductivity sensor for measuring a conductivity of the contents within the reactor;
a bath surrounding the reactor for at least one of heating or cooling the reactor;
a bath temperature sensor for measuring a temperature within the bath; and
at least one computing device configured to:
monitor and wirelessly transmit data about the temperature within the reactor, the pressure within the reactor, the pH, the conductivity, the revolutions per minute, and the temperature within the bath;
receive wireless instructions from a remote user that specify the speed of the drive motor, the temperature within the reactor, the bath temperature, the pH, and the pressure; and
set the speed of the drive motor, set the temperature within the reactor, set the pH, and set the pressure based upon wireless instructions from the remote user.

US Pat. No. 10,893,240

CAMERA LISTING BASED ON COMPARISON OF IMAGING RANGE COVERAGE INFORMATION TO EVENT-RELATED DATA GENERATED BASED ON CAPTURED IMAGE

NEC CORPORATION, Tokyo (...

1. A surveillance control system controlling a plurality of cameras comprising:at least one memory storing instructions;
at least one processor connected to the memory that, based on the instructions, performs operations comprising:
generating event-related data based on an image that has been captured, the event-related data being at least a size or a movement of an event;
executing coverage analysis for the plurality of cameras by comparing imaging range to the event-related data; and
listing cameras, out of the plurality of cameras, which can capture an image of the event, based on a result of the coverage analysis, so that an operator can select one of the listed cameras.
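The claimed coverage analysis reduces to comparing each camera's imaging range against event-related data such as the event location. A minimal sketch, where the circular-range model and dictionary fields are illustrative assumptions:

```python
def list_covering_cameras(cameras, event_pos):
    """Compare each camera's imaging range to the event location and
    return identifiers of cameras that can capture the event, for an
    operator to choose from."""
    covering = []
    for cam in cameras:
        (cx, cy), radius = cam["center"], cam["range"]
        distance = ((event_pos[0] - cx) ** 2 + (event_pos[1] - cy) ** 2) ** 0.5
        if distance <= radius:
            covering.append(cam["id"])
    return covering
```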

US Pat. No. 10,893,239

SURVEILLANCE SYSTEM WITH FIXED CAMERA AND TEMPORARY CAMERAS

NEC CORPORATION, Tokyo (...

1. A surveillance system comprising:a fixed camera;
a temporary camera configured to be adjusted based on sensor data obtained from a sensor of the temporary camera and a fixed camera sensor co-located with the fixed camera, the temporary camera being coupled to the fixed camera; and
a controller, coupled to the fixed camera and the temporary camera, configured to extend coverage of the fixed camera,
wherein the controller includes a field of view estimator coupled to the sensor of the temporary camera and the fixed camera sensor, the field of view estimator estimating a field of view of the fixed camera and the temporary camera,
wherein the sensor of the temporary camera includes a GPS receiver, and the fixed camera sensor includes a GPS receiver,
wherein the sensor of the temporary camera and the fixed camera sensor further include any one of a gyroscopic sensor and an acceleration sensor, the gyroscopic sensor and the acceleration sensor generating horizontal directional data and vertical directional data of the fixed camera and the temporary camera,
wherein the sensor of the temporary camera and the fixed camera sensor further include an altitude sensor, the altitude sensor obtaining height information of the fixed camera and the temporary camera, and
wherein the field of view estimator estimates the field of view of the fixed camera and the temporary camera based on coordination data, the horizontal directional data, the vertical directional data and the height information.

US Pat. No. 10,893,238

VIDEO SHARING SYSTEM FOR ROAD USERS

1. A traffic surveillance and guidance system comprising:a traffic server configured to provide acquisition of traffic information from a plurality of information providers; and
a plurality of registered road users of the traffic server, the traffic server being configured to receive, update, and record data of geographical positions transmitted from mobile terminals associated with registered road users of the traffic server,
wherein a user profile of each registered road user includes a user-defined geometrically shaped model and size of a geographical area within which the user is willing to share video streams of traffic conditions and incidents, the modelled and sized area surrounding the position at which the respective road user is located at any time, and
video cameras and corresponding video displays are carried by the road users, or the video cameras and video displays are part of vehicles used by the road users when traveling,
wherein the traffic server is configured to follow movements of road users based on the received and recorded geographical positions, and detect when relative movements of road users provide a situation with partial overlap of the respective modelled and sized geographical areas around at least a first road user and at least a second road user, and when a partial overlap is detected a video union is established between the at least two road users, wherein
the video union includes at least a video communication channel or video distribution process between the road users constituting the video union, and
the video communication channel or video distribution process is established via Internet streaming of video between Internet addresses of respective video cameras and video displays recorded in the respective user profiles of road users in the video union,
wherein when the traffic server is receiving a request from the at least first road user, having a registered video display, of video streams from a specific video camera registered with the at least second road user in the video union, the traffic server initiates Internet streaming of video via the Internet addresses of the specific video camera of the at least second road user to the Internet address of the video display of the at least first road user, and
the traffic server is further configured to monitor video unions that are constituted between road users, and whenever relative movements of a specific road user in a video union provide a situation in which the specific road user no longer has a partly overlapping modelled and sized geographical area with at least one of the other road users in the video union, the traffic server is configured to disconnect the specific road user from the existing video communication channel or video distribution process in the video union such that the specific road user is no longer part of the video union.
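The union-management logic of this claim, forming and dissolving video unions as sharing areas start or stop overlapping, can be sketched with circular areas standing in for the arbitrary user-defined shapes. All names below are invented for illustration:

```python
import math

def areas_overlap(pos_a, radius_a, pos_b, radius_b):
    """Partial-overlap test for two circular sharing areas (the patent
    allows arbitrary user-defined shapes; circles keep the sketch short)."""
    return math.dist(pos_a, pos_b) < radius_a + radius_b

def update_unions(users):
    """Return the set of user-id pairs whose sharing areas currently
    overlap, i.e. the video unions the server should keep established.
    Users whose pair drops out of this set get disconnected."""
    unions = set()
    ids = sorted(users)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if areas_overlap(users[a]["pos"], users[a]["radius"],
                             users[b]["pos"], users[b]["radius"]):
                unions.add((a, b))
    return unions
```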

US Pat. No. 10,893,237

METHOD AND SYSTEM FOR MULTI-GROUP AUDIO-VIDEO INTERACTION

1. A method for multi-group audio-video interaction, comprising:receiving, by an audio-video interaction system, a virtual group creation request initiated by a first user, creating a first virtual group, and determining a first session group to which the first user belongs;
associating, by the audio-video interaction system, the first session group with the first virtual group, and adding all session users in the first session group to the first virtual group;
determining, by the audio-video interaction system, a second session group to which a second user belongs, when receiving a virtual group join request for the first virtual group initiated by the second user;
associating, by the audio-video interaction system, the second session group with the first virtual group, and adding all session users in the second session group to the first virtual group; and
sending, by the audio-video interaction system, a user list of the second session group to other session users that are in the first virtual group and not in the second session group, to enable the other session users not in the second session group to incrementally pull and play an audio-video stream according to the user list.
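The group-merge behavior of the claim, where each joining user brings their whole session group and only pre-existing members outside that group receive the new user list, can be sketched minimally; the class and method names are invented for illustration:

```python
class VirtualGroup:
    """Minimal sketch of the claimed grouping logic: each join request
    absorbs the joining user's entire session group, and the method
    returns the existing members who should receive the new user list
    so they can incrementally pull the new audio-video streams."""
    def __init__(self):
        self.members = set()

    def add_session_group(self, session_users):
        notify = self.members - set(session_users)  # members not in the joining group
        self.members |= set(session_users)
        return notify
```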

US Pat. No. 10,893,236

SYSTEM AND METHOD FOR PROVIDING VIRTUAL INTERPERSONAL COMMUNICATION

Honda Motor Co., Ltd., T...

1. A computer-implemented method for providing virtual interpersonal communication, comprising:receiving data associated with one-to-one interpersonal communication between a user and a target individual;
determining at least one contextual data point associated with statements spoken by the user and the target individual during the one-to-one interpersonal communication and at least one associated behavioral attribute of the user and the target individual based on emotional characteristics and physiological characteristics that are simultaneously exhibited and respectively associated with the user and the target individual during the one-to-one interpersonal communication;
analyzing at least one statement spoken by the user to a virtual representation of the target individual, wherein the virtual representation of the target individual is presented as a virtual avatar of the target individual; and
presenting the virtual avatar of the target individual in a manner that replicates a personality of the target individual and communicates with the user based on the at least one contextual data point and the at least one associated behavioral attribute.

US Pat. No. 10,893,235

CONFERENCING APPARATUS AND METHOD FOR SWITCHING ACCESS TERMINAL THEREOF

SAMSUNG SDS CO., LTD., S...

1. A conference server comprising:a conference information management module configured to generate mapping information for terminal identification information of a first terminal and access information of a conference participant who is accessing a conference through the first terminal; and
an access switch module configured to provide a token corresponding to the mapping information to the first terminal according to an access terminal switch request from the first terminal and, when the token is received from a second terminal, switch a terminal of the conference participant from the first terminal to the second terminal according to validity of the received token,
wherein the token includes identification information of an access switch target terminal received from the first terminal,
wherein the access switch module determines that the received token is valid when the identification information of the access switch target terminal included in the received token is identical to identification information of the second terminal, and
wherein the access switch module includes the token in an access link to the conference server and provides the access link in which the token is included to the first terminal.
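The validity test in this claim hinges on one comparison: the target-terminal identification embedded in the token must match the terminal that presents it. A minimal sketch, with invented field names and a random nonce standing in for the opaque token body:

```python
import secrets

def issue_token(mapping, target_terminal_id):
    """Server side: mint a token bound to the switch-target terminal.
    The nonce and dict layout are illustrative assumptions."""
    return {"nonce": secrets.token_hex(8),
            "target": target_terminal_id,
            "mapping": mapping}

def token_is_valid(token, presenting_terminal_id):
    """Valid only when the target terminal identification in the token
    is identical to the identification of the presenting terminal."""
    return token["target"] == presenting_terminal_id
```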

US Pat. No. 10,893,234

SYSTEM AND METHOD OF DYNAMIC PLAYBACK VARIATION FOR MULTIMEDIA COMMUNICATION

David Clark Company Incor...

1. A system for digital data transfer comprising:a source entity configured to send a plurality of data packets at a plurality of send times separated by transmission time intervals, the data packets related to at least one type of media content; and
a destination entity configured to:
receive the data packets at a plurality of receipt times separated by receipt time intervals;
determine whether at least one receipt time interval between data packets is less than the transmission time interval of corresponding data packets; and
playback media content from the data packets at a dynamically varying playback speed dependent on the receipt time intervals,
wherein, when the receipt time interval between the receipt times of two data packets is less than the transmission time interval between the send times of said data packets, the dynamically varying playback speed for said data packets is dynamically adjusted to be faster than a nominal playback speed.
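The claimed speed rule, playing back faster than nominal when packets arrive closer together than they were sent, can be sketched as below. The proportional speed-up factor is an illustrative choice; the claim only requires "faster than a nominal playback speed":

```python
def playback_speed(send_interval, receipt_interval, nominal=1.0):
    """When the receipt interval between two packets is shorter than the
    transmission interval between their send times, speed up playback to
    drain the backlog; otherwise stay at the nominal speed."""
    if receipt_interval < send_interval:
        return nominal * (send_interval / receipt_interval)
    return nominal
```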

US Pat. No. 10,893,233

INTERACTION METHOD, INTERACTION DEVICE AND INTERACTION SYSTEM FOR REMOTE CONSULTATION

BOE TECHNOLOGY GROUP CO.,...

1. An interaction method for remote consultation, comprising:acquiring a remote consultation request;
creating a consultation group based on a real-time audio and video cloud service interface, and recording the remote consultation request in a database;
generating a consultation token based on patient information in the remote consultation request in the database and attribute information of the consultation group, and recording the consultation token in the database; and
sending the consultation token in the database to a first terminal and a second terminal, and performing an information interaction by the first terminal and the second terminal based on a user datagram protocol in response to the first terminal and the second terminal entering the consultation group according to the consultation token.

US Pat. No. 10,893,232

CONTROLLED-ENVIRONMENT FACILITY VIDEO COMMUNICATIONS MONITORING SYSTEM

Securus Technologies, LLC...

1. A system for monitoring video communications provided to a resident of a controlled-environment facility, the system comprising:a communication system configured to:
host a live video visitation between the resident and one or more non-residents, wherein the video visitation comprises a plurality of live video feeds, and wherein the resident is associated with a privilege classification by the controlled-environment facility; and
a video feed recording system configured to:
generate a plurality of video recordings, each video recording corresponding to a video feed of the plurality of live video feeds; and
a video processing system configured to:
sample a plurality of frames of a first video feed of the plurality of video feeds to detect indications of a non-verbal communication displayed in the first video feed, wherein the frames are sampled at a frequency determined based on the privilege classification of the resident; and
annotate a first video recording corresponding to the first video feed, wherein the annotations specify the locations of the detected indications of the non-verbal communication displayed in the first video recording.
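The privilege-dependent sampling in this claim can be sketched as a lookup from classification to analysis rate; the tier names and rates below are invented for illustration, not from the patent:

```python
def sampling_frequency(privilege_classification):
    """Frames analyzed per second, chosen from the resident's privilege
    classification: lower privilege -> closer scrutiny (illustrative tiers)."""
    rates = {"high": 0.5, "medium": 2.0, "low": 5.0}
    return rates.get(privilege_classification, rates["medium"])

def sample_frames(frames, fps, privilege_classification):
    """Pick every Nth frame of a feed captured at `fps` frames/second."""
    step = max(1, int(fps / sampling_frequency(privilege_classification)))
    return frames[::step]
```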

US Pat. No. 10,893,231

EYE CONTACT ACROSS DIGITAL MEDIUMS

International Business Ma...

1. A method implemented by an information handling system that includes a memory and a processor, the method comprising:initiating a video conference between a first user utilizing a first device and a second user utilizing a second device, wherein the first device uses a camera positioned at a first set of coordinates to capture a first live video feed of the first user from a first alignment perspective, and wherein the first device displays a second live video feed of the second user on a screen of the first device;
identifying a set of eyes of the second user in the second live video feed based on performing facial recognition on the second live video feed;
computing a pupillary distance between the identified set of eyes;
determining, based on the pupillary distance, a second set of coordinates between the identified set of eyes of the second user;
manipulating the first live video feed such that the manipulated first live video feed captures the first user from a second alignment perspective corresponding to the second set of coordinates; and
transmitting the manipulated first live video feed to the second device.
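The geometric core of the claim, computing the pupillary distance and the coordinate point between the identified eyes, can be sketched as follows; the pixel-coordinate tuple representation is an assumption:

```python
def target_coordinates(left_pupil, right_pupil):
    """Return the pupillary distance between the identified eyes and the
    midpoint coordinates between them, i.e. the second set of coordinates
    toward which the first feed's alignment perspective is manipulated."""
    pd = ((right_pupil[0] - left_pupil[0]) ** 2 +
          (right_pupil[1] - left_pupil[1]) ** 2) ** 0.5
    mid = ((left_pupil[0] + right_pupil[0]) / 2,
           (left_pupil[1] + right_pupil[1]) / 2)
    return pd, mid
```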

US Pat. No. 10,893,230

DYNAMICALLY SWITCHING CAMERAS IN WEB CONFERENCE

International Business Ma...

1. A method for dynamically switching between at least two cameras in communication with a device operable by a user, said method comprising:analyzing a first video input received from a first camera of the at least two cameras to identify first directional parameters of a face of the user and a second video input received from a second camera of the at least two cameras to identify second directional parameters of the face of the user;
executing a first algorithm on the first directional parameters to identify a first percentage associated with facial coverage of the user and on the second directional parameters to identify a second percentage associated with facial coverage of the user;
determining that the first percentage is greater than the second percentage and in response, utilizing the first video input to capture the face of the user and disabling the second video input,
wherein the first camera and the second camera are attached to the device, the first camera is attached to the device and the second camera is attached to a monitor associated with the device, or the first camera is attached to the monitor associated with the device and the second camera is attached to the device.
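The switching rule of the claim, keeping the feed with the greater facial-coverage percentage and disabling the other, can be sketched as a one-line comparison over per-camera percentages (the dictionary format is an illustrative assumption):

```python
def select_camera(coverage_by_camera):
    """Given facial-coverage percentages per camera, return the camera
    whose video input should be used and the cameras to disable."""
    best = max(coverage_by_camera, key=coverage_by_camera.get)
    disabled = [cam for cam in coverage_by_camera if cam != best]
    return best, disabled
```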

US Pat. No. 10,893,229

DYNAMIC PIXEL RATE-BASED VIDEO

Amazon Technologies, Inc....

1. A computer-implemented method comprising:receiving a first video frame from a video source, the first video frame having a first pixel value at a first frame location of a plurality of frame locations, wherein the first frame location includes a horizontal pixel location and a vertical pixel location;
receiving a second video frame from the video source, the second video frame having a second pixel value at the first frame location;
identifying the first frame location based at least in part on determining a first difference between the first pixel value and the second pixel value, wherein the first frame location is identified based on an accumulated difference being above a threshold, the accumulated difference being a sum of the first difference between the first pixel value and the second pixel value and a second difference between the first pixel value and a third pixel value, a third video frame having the third pixel value at the first frame location; and
sending a package including a pixel update value and an indication of the first frame location to a video destination, wherein the pixel update value is based at least in part on the second pixel value.
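The accumulated-difference test of the claim (differences of later frames against the first frame's pixel value, summed and compared to a threshold, with the latest value sent as the update) can be sketched over small 2D frames; the list-of-lists frame format is an illustrative assumption:

```python
def changed_locations(frames, threshold):
    """Flag frame locations whose per-pixel differences relative to the
    first frame, accumulated across later frames, exceed `threshold`.
    Returns ((row, col), pixel_update_value) pairs, where the update
    value is taken from the most recent frame."""
    rows, cols = len(frames[0]), len(frames[0][0])
    updates = []
    for r in range(rows):
        for c in range(cols):
            acc = sum(abs(frames[i][r][c] - frames[0][r][c])
                      for i in range(1, len(frames)))
            if acc > threshold:
                updates.append(((r, c), frames[-1][r][c]))
    return updates
```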

US Pat. No. 10,893,228

SYSTEM AND METHOD FOR DISPLAYING INFORMATION IN A VEHICLE

GM GLOBAL TECHNOLOGY OPER...

1. An automotive vehicle comprising:a sensor configured to detect features external to the vehicle;
a human-machine interface (“HMI”) configured to signal an alert to an operator of the vehicle, wherein the HMI comprises an augmented reality interface; and
a controller in communication with the sensor and the HMI, the controller being in communication with a non-transient computer-readable storage medium provided with a road furniture database, the road furniture database comprising a plurality of items of road furniture having associated road furniture geolocations and road furniture classifications, the controller being configured to, in response to the vehicle being proximate a respective road furniture geolocation for a respective item of road furniture and the sensor not detecting the respective item of road furniture, control the HMI to signal an alert, wherein the controller is configured to control the augmented reality interface to signal the alert by displaying an augmented reality overlay including indicia associated with the respective item of road furniture, the indicia comprising an image associated with a respective road furniture classification of the respective item of road furniture, wherein the image is scaled based on a distance between a current vehicle location and the respective geolocation of the respective item of road furniture, wherein the controller is further configured to, in response to the vehicle being proximate the respective road furniture geolocation for the respective item of road furniture and the sensor detecting the respective item of road furniture, control the HMI to not signal the alert.

US Pat. No. 10,893,227

TIMESTAMP CALIBRATION OF THE 3D CAMERA WITH EPIPOLAR LINE LASER POINT SCANNING

SAMSUNG ELECTRONICS CO., ...

1. An image sensor unit, comprising:a pixel array forming an image plane, the pixel array including at least one row of pixels that forms at least a portion of an epipolar line of a scanning line of a sequence of light spots reflected from a surface of an object, each pixel in the at least one row of pixels being associated with a corresponding column in the pixel array, and each pixel in the at least one row of pixels generating a pixel-specific output in response to detecting a light spot in the sequence of light spots that has been reflected from the surface of the object; and
a timestamp generator associated with a corresponding pixel in the at least one row of pixels, the timestamp generator generating a pixel-specific timestamp value for the pixel-specific output of the corresponding pixel.

US Pat. No. 10,893,226

FOCAL PLANE ARRAY PROCESSING METHOD AND APPARATUS

Massachusetts Institute o...

1. A method comprising:generating a photocurrent with a photodetector in a two-dimensional array of photodetectors, the photocurrent representing radiation incident on the two-dimensional array of photodetectors from a scene;
converting the photocurrent into a digital signal with an analog-to-digital converter (ADC) in a two-dimensional array of ADCs;
storing a count value representing the digital signal in a modulo M counter operably coupled to the ADC; and
generating a representation of the scene having a dynamic range based on a product of M and a least significant bit of the ADC.
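The claimed relationship, a dynamic range given by the product of M and the ADC's least significant bit, together with the modulo-M counter update, can be sketched as:

```python
def dynamic_range(m, adc_lsb):
    """Dynamic range of the reconstructed scene: the modulo-M counter
    extends the representable signal to M times the ADC's least
    significant bit."""
    return m * adc_lsb

def count_sample(counter, increment, m):
    """Modulo-M counter update for one ADC conversion of the photocurrent."""
    return (counter + increment) % m
```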

US Pat. No. 10,893,225

ELECTRONIC DEVICE HAVING LARGE DYNAMIC RANGE FOR IMAGE SENSING

InnoLux Corporation, Mia...

1. An electronic device comprising:a first sensing module comprising a first sensing transistor having a first gate, a second gate and a semiconductor layer, the semiconductor layer of the first sensing transistor being disposed between the first gate and the second gate of the first sensing transistor, and the first gate of the first sensing transistor being coupled to a top gate line;
a first bottom gate line carrying a first bottom gate signal configured to read light information captured by the first sensing module; and
a first threshold voltage generation module comprising:
a node coupled to the second gate of the first sensing transistor, and configured to provide a first threshold voltage in a dark state to the node of the first threshold voltage generation module;
a first reference transistor having a first gate, a second gate, a semiconductor layer disposed between the first gate and the second gate of the first reference transistor, a source and a drain, shielded by a light-blocking material, and configured to generate the first threshold voltage in the dark state, wherein the drain of the first reference transistor is coupled to the second gate of the first reference transistor; and
a first clamping circuit coupled to the first sensing transistor, the first reference transistor and the first bottom gate line.

US Pat. No. 10,893,224

IMAGING ELEMENT AND ELECTRONIC DEVICE

SONY CORPORATION, Tokyo ...

1. An imaging element, comprising:a pixel array unit that comprises:
a plurality of pixels;
a plurality of charge voltage converting units; and
a plurality of switches, wherein
a pixel of the plurality of pixels comprises:
a charge voltage converting unit of the plurality of charge voltage converting units;
a plurality of pixel transistors that includes an amplification transistor, a selection transistor, a reset transistor, and a coupling transistor;
a plurality of transfer gate units;
a plurality of photoelectric conversion elements, wherein
the plurality of transfer gate units and the plurality of photoelectric conversion elements are in a substantially symmetrical arrangement, around the charge voltage converting unit, in a vertical direction of alignment of the plurality of pixels in the pixel array unit and a horizontal direction of the alignment of the plurality of pixels in the pixel array unit; and
a switch of the plurality of switches, wherein
the reset transistor and the coupling transistor are on a first side of the charge voltage converting unit,
the selection transistor and the amplification transistor are on a second side of the charge voltage converting unit,
the first side is different from the second side,
each of the reset transistor, the coupling transistor, the selection transistor, and the amplification transistor is connected to the charge voltage converting unit, and
the plurality of charge voltage converting units is connected to a signal line in parallel via respective switches of the plurality of switches.

US Pat. No. 10,893,223

SYSTEMS AND METHODS FOR ROLLING SHUTTER COMPENSATION USING ITERATIVE PROCESS

GoPro, Inc., San Mateo, ...

1. A system for correcting digital image deformities, the system comprising:one or more physical processors configured by machine readable instructions to:
obtain an input image defined by an input pixel array, the input pixel array captured by an imaging sensor, the input pixel array including input pixels characterized by input pixel positions within the input pixel array and input pixel values;
obtain acquisition times specifying time of capture of sets of input pixels within the input pixel array;
obtain orientation information specifying imaging sensor orientations at the acquisition times of the sets of input pixels within the input pixel array; and
determine an output pixel array based on the input image, the output pixel array including output pixels that have been transformed to account for changes in the imaging sensor orientation at different ones of the acquisition times, wherein correspondence between one or more of the output pixels of the output pixel array to one or more of the input pixels of the input pixel array is determined based on one or more fixed point iterations, and output pixel value of the one or more output pixels of the output pixel array is determined based on input pixel value of the one or more corresponding input pixels of the input pixel array.

US Pat. No. 10,893,222

IMAGING DEVICE AND CAMERA SYSTEM, AND DRIVING METHOD OF IMAGING DEVICE

PANASONIC INTELLECTUAL PR...

1. An imaging device comprising:a photoelectric converter that includes a first electrode, a second electrode, and a photoelectric conversion layer located between the first electrode and the second electrode;
a voltage supply circuit;
an output circuit that is coupled to the second electrode, the output circuit being configured to output a signal that corresponds to a potential of the second electrode; and
a detection circuit that is configured to detect a level of the signal from the output circuit, wherein
the photoelectric converter has photoelectric conversion characteristics in which a first rate of change is greater than a second rate of change, the first rate of change being a rate of change of a photoelectric conversion efficiency of the photoelectric converter with respect to a bias voltage applied between the first electrode and the second electrode when the bias voltage is in a first voltage range, the second rate being a rate of change of the photoelectric conversion efficiency of the photoelectric converter with respect to the bias voltage when the bias voltage is in a second voltage range that is greater than the first voltage range, and
the voltage supply circuit:
supplies a voltage to one of the first electrode and the second electrode to cause a potential difference between the first electrode and the second electrode to be a first potential difference, in a case where the level detected by the detection circuit is less than a first threshold value; and
supplies a voltage to the one of the first electrode and the second electrode to cause the potential difference between the first electrode and the second electrode to be a second potential difference that is greater than the first potential difference, in a case where the level detected by the detection circuit is greater than or equal to a second threshold value that is greater than or equal to the first threshold value.

US Pat. No. 10,893,221

IMAGING SENSOR WITH WAVELENGTH DETECTION REGIONS AND PRE-DETERMINED POLARIZATION DIRECTIONS

Sony Corporation, Tokyo ...

1. An imaging sensor comprising:a plurality of wavelength detection regions, the plurality of wavelength detection regions including a first wavelength detection region, a second wavelength detection region that is directly adjacent to the first wavelength detection region, a third wavelength detection region that is directly adjacent to the second wavelength detection region and is not directly adjacent to the first wavelength detection region, and a fourth wavelength detection region that is directly adjacent to the third wavelength detection region and is not directly adjacent to the first wavelength detection region and the second wavelength detection region,
wherein the first wavelength detection region comprises a plurality of pixels configured to
detect light within a first pre-determined wavelength range, and
detect the light at different pre-determined polarization directions, and
wherein the first pre-determined wavelength range is a near-infrared wavelength range,
wherein the second wavelength detection region comprises a second plurality of pixels configured to
detect the light within a second pre-determined wavelength range, and
detect the light within the second pre-determined wavelength range at the different pre-determined polarization directions,
the second pre-determined wavelength range is different than the first pre-determined wavelength range,
wherein the third wavelength detection region comprises a third plurality of pixels configured to
detect the light within a third pre-determined wavelength range, and
detect the light within the third pre-determined wavelength range at the different pre-determined polarization directions,
the third pre-determined wavelength range is different than the first pre-determined wavelength range and the second pre-determined wavelength range, and
wherein the fourth wavelength detection region comprises a fourth plurality of pixels configured to
detect the light within a fourth pre-determined wavelength range, and
detect the light within the fourth pre-determined wavelength range at the different pre-determined polarization directions,
the fourth pre-determined wavelength range is different than the first pre-determined wavelength range, the second pre-determined wavelength range, and the third pre-determined wavelength range.

US Pat. No. 10,893,220

DUAL-BAND DIVIDED-APERTURE INFRA-RED SPECTRAL IMAGING SYSTEM

REBELLION PHOTONICS, INC....

1. An infrared (IR) imaging system for imaging a scene, the imaging system comprising:a housing comprising an optical system comprising:
an optical focal plane array (FPA) unit;
a plurality of spatially and spectrally different optical channels to transfer IR radiation from the scene towards the optical FPA unit, each optical channel positioned to transfer a portion of the IR radiation incident on the optical system from the scene towards the optical FPA unit; and
at least one temperature calibration element configured to be imaged by the optical FPA unit along with an image of the scene,
wherein at least one of the plurality of optical channels is in the mid-wavelength infrared spectral range and at least another one of the plurality of optical channels is in the long-wavelength infrared spectral range,
wherein the imaging system is configured to acquire a first video image of the scene in the mid-wavelength infrared spectral range and a second video image of the scene in the long-wavelength infrared spectral range; and
wherein the image of the at least one temperature calibration element is used to calibrate the optical FPA unit, said first and second video images including said image of said at least one temperature calibration element to provide dynamic calibration.

US Pat. No. 10,893,219

SYSTEM AND METHOD FOR ACQUIRING VIRTUAL AND AUGMENTED REALITY SCENES BY A USER

DROPBOX, INC., San Franc...

1. A non-transitory computer readable medium storing instructions thereon that, when executed by at least one processor, cause a computing device to:present a map of a geographic area within a user interface of the computing device;
generate a location indicator within the map of the geographic area indicating a designated location for a virtual or augmented reality (VAR) scene;
determine that a location of the computing device corresponds to the designated location for the VAR scene; and
based on determining the location of the computing device corresponds to the designated location, present a view of the VAR scene within the user interface.

US Pat. No. 10,893,218

SYSTEMS AND METHODS FOR GENERATING PANORAMIC VISUAL CONTENT

GoPro, Inc., San Mateo, ...

1. A system that generates panoramic visual content, the system comprising:one or more physical processors configured by machine-readable instructions to:
obtain visual information defining visual content captured by an image capture device, the visual content having a spherical field of view, wherein the visual content having the spherical field of view is generated based on stitching of first visual content and second visual content, the first visual content captured using a first optical element of the image capture device and the second visual content captured using a second optical element of the image capture device, the first optical element having a first field of view and the second optical element having a second field of view, the first optical element and the second optical element carried by the image capture device such that a peripheral portion of the first field of view and a peripheral portion of the second field of view overlap, the overlap of the peripheral portion of the first field of view and the peripheral portion of the second field of view enabling spherical capture of the visual content, a stitch line for stitching of the first visual content and the second visual content positioned within the overlap, further wherein the stitch line is positioned within lateral portions of the visual content mapped onto a two-dimensional plane using an equirectangular projection;
determine placement of a viewing window within the spherical field of view of the visual content, the viewing window defining an extent of the visual content to be included within a punchout of the visual content, the viewing window having a panoramic field of view, wherein the placement of the viewing window is determined to be located along a horizontal center of the visual content mapped onto the two-dimensional plane using the equirectangular projection such that the stitch line is positioned closer to a lateral edge of the viewing window than to a center of the viewing window; and
generate the panoramic visual content based on the viewing window, the panoramic visual content including the punchout of the extent of the visual content within the viewing window.
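The placement constraint in the claim above can be illustrated with a small geometric check. This is a sketch under assumed coordinates (equirectangular image width in pixels, viewing window centered horizontally), not GoPro's implementation; the function name and parameters are hypothetical.

```python
# Check the claimed placement: the viewing window is centered horizontally on
# the equirectangular image, and the stitch line must lie closer to a lateral
# edge of the window than to the window's center.
def stitch_near_edge(image_width, window_width, stitch_x):
    center = image_width / 2.0           # window centered horizontally
    left = center - window_width / 2.0   # lateral edges of the viewing window
    right = center + window_width / 2.0
    if not (left <= stitch_x <= right):
        return True  # stitch line falls entirely outside the punchout
    d_center = abs(stitch_x - center)
    d_edge = min(abs(stitch_x - left), abs(stitch_x - right))
    return d_edge < d_center
```

For a 360-pixel-wide projection with a 240-pixel window, a stitch line at x = 80 sits 20 pixels from the left edge but 100 from the center, so the constraint holds.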

US Pat. No. 10,893,217

ELECTRONIC APPARATUS AND METHOD FOR CLIPPING A RANGE OUT OF A WIDE FIELD VIEW IMAGE

Canon Kabushiki Kaisha, ...

1. An electronic apparatus comprising a memory and at least one processor and/or at least one circuit to perform the operations of the following units:an image acquisition unit configured to obtain a wide field view image having a wide field of view angle, captured by one or more image sensors; and
a control unit configured to,
perform control, in the case that the wide field view image is obtained using a first operation mode, to clip a first range narrower than an entire range of the wide field view image about a vertical axis to a position of the electronic device at the time of capturing, out of the wide field view image, and
perform control, in the case that the wide field view image is captured using a second operation mode:
to clip a second range narrower than the entire range of the wide field view image about a horizontal axis to the position of the electronic device at the time of capturing and different from the first range, out of the wide field view image,
wherein the first operation mode and the second operation mode are different imaging modes from each other.

US Pat. No. 10,893,216

ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING SAME

Canon Kabushiki Kaisha, ...

1. An electronic apparatus comprising:one or more processors; and
at least one memory storing executable instructions, which when executed by the one or more processors, cause the electronic apparatus to perform the operations of the following units comprising:
obtaining a wide range image captured by one or more imaging units;
obtaining attribute information attached to the wide range image, the attribute information including orientation information indicating an orientation of an imaging apparatus during imaging; and
selecting one of a plurality of processes to display the wide range image, the plurality of processes including a first process providing a display in which a zenith and a yaw angle is corrected based on the orientation information, a second process providing a display in which the zenith is corrected based on the orientation information without correcting the yaw angle, a third process providing a display in which the yaw angle is corrected based on the orientation information without correcting the zenith, and a fourth process providing a display in which neither the zenith nor the yaw angle of the wide range image is corrected, wherein selecting one of the plurality of processes to display the wide range image comprises:
determining whether an installation position of the imaging apparatus during capturing of the wide range image is specified;
if the installation position of the imaging apparatus during capturing of the wide range image is specified, selecting one of the first process, the second process, the third process, and the fourth process based on a type of the specified installation position; and
if the installation position of the imaging apparatus during capturing of the wide range image is not specified, selecting one of the first process and the second process in which at least the zenith is corrected based on the orientation information.

US Pat. No. 10,893,215

CASCADED STANDARDIZED HOT-PLUGGABLE TRANSCEIVING UNITS PROVIDING A MULTIVIEWER FUNCTIONALITY

Riedel Communications Can...

1. A system comprising:at least one standardized hot-pluggable transceiving unit, each respectively comprising:
a housing having specific standardized dimensions and adapted to being inserted into a chassis of a hosting unit;
a plurality of connectors, each connector receiving at least one distinct source video stream;
at least one processing unit in the housing for:
scaling the source video streams received by the plurality of connectors into a corresponding plurality of scaled video streams;
mosaicing the plurality of scaled video streams into a mosaiced video stream; and
outputting the mosaiced video stream via one of the plurality of connectors.

US Pat. No. 10,893,214

CAMERA MONITORING SYSTEM, IMAGE PROCESSING DEVICE, VEHICLE AND IMAGE PROCESSING METHOD

KYOCERA Corporation, Kyo...

1. A camera monitoring system, comprising:a first camera configured to capture an image of an area including a predetermined area on the left side of a vehicle;
a second camera configured to capture an image of an area including a predetermined area on the right side of the vehicle;
a third camera configured to capture an image of an area including a predetermined area behind the vehicle;
a monitor configured to display a composite image including a first captured image captured by the first camera, a second captured image captured by the second camera and a third captured image captured by the third camera; and
an image processor configured to
control a layout of the composite image, and
change the layout according to at least environmental information about an external environment of the vehicle.

US Pat. No. 10,893,213

VEHICLE UNDERCARRIAGE IMAGING SYSTEM

ACV Auctions Inc., Buffa...

1. A vehicle undercarriage imaging system comprising:a mirror assembly including a base and a mirror surface, the mirror surface having a width and the mirror assembly having a height;
a camera positionable at a location that is central in a width direction along the mirror surface and spaced apart from the mirror surface in a depth direction that is normal to the width direction, the camera having a field of view oriented toward the mirror surface;
wherein the mirror surface is angled in the depth direction such that a reflected field of view that is viewable in the mirror surface at the camera is above the mirror assembly in the height direction, and
wherein the camera is configured to capture sequential images of at least a portion of a vehicle undercarriage passing above the mirror assembly within the reflected field of view, and
wherein the mirror assembly includes an extension arm movably attached to the base at a first end of the extension arm, the extension arm including a camera mount at a second end of the extension arm.

US Pat. No. 10,893,212

SPHERICALLY-ARRANGED IMAGING ARRAY HAVING PAIRS OF CAMERAS WITH OVERLAPPING FIELDS OF VIEW

SONY CORPORATION, Tokyo ...

1. An all-celestial imaging apparatus comprising:a plurality of imaging parts, each imaging part of the plurality of imaging parts being arranged in a direction different from that of other imaging parts of the plurality of imaging parts, wherein
the plurality of imaging parts are arranged such that all imaging ranges on at least one circumference of imaging ranges by the plurality of imaging parts are each overlapped by angles of view of two or more pairs of the plurality of imaging parts,
the plurality of imaging parts are arranged being stacked on each other in at least two or more tiers, and
in a case a vertical angle of view is set to be θ for angles of view of two imaging parts of the plurality of imaging parts located adjacent to each other in an up-and-down direction in the at least two or more tiers to overlap each other by 50% or more at a distance equal to or larger than a shortest distance t enabling calculation of depth information that indicates a distance to an object to be imaged, a distance d between the two imaging parts is set based on the vertical angle of view and the shortest distance t.

US Pat. No. 10,893,211

METHODS AND SYSTEMS OF LIMITING EXPOSURE TO INFRARED LIGHT

SEMICONDUCTOR COMPONENTS ...

1. A method of operating an imaging system with an infrared illumination, comprising:repeatedly illuminating a field of view of an image sensor with infrared light from a light emitting diode (LED), each illumination defining an exposure time, and time between contiguous illuminations defining a frame period; and
forcing the frame period to be greater than a frame period threshold by a first resistor coupled to a first terminal of a LED driver circuit.

US Pat. No. 10,893,210

IMAGING APPARATUS CAPABLE OF MAINTAINING IMAGE CAPTURING AT A SUITABLE EXPOSURE AND CONTROL METHOD OF IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An imaging apparatus comprising:a plurality of pixels; and
at least one processor or circuit programmed to function as:
a scan unit configured to perform a first scan that outputs, from a first pixel of the plurality of pixels, a first signal based on a light flux that has passed through a first region of an exit pupil of imaging optics and a second scan that outputs, from a second pixel of the plurality of pixels, a second signal based on a light flux that has passed through a second region that is a part of the first region;
a prediction unit configured to predict a value of the second signal;
a determination unit configured to determine whether or not to set a second exposure time of the second pixel to be shorter than a first exposure time of the first pixel based on the value of the second signal predicted by the prediction unit; and
an exposure time setting unit configured to set the first exposure time and the second exposure time based on a determination result of the determination unit,
wherein, when the value predicted by the prediction unit exceeds a predetermined threshold, the determination unit determines to set the second exposure time to be shorter than the first exposure time.
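The determination rule in the claim above reduces to a simple threshold test. The sketch below is an illustration under assumptions (the halving factor and all names are hypothetical; the claim only requires that the second exposure be shorter when the prediction exceeds the threshold), not Canon's implementation.

```python
# Claimed decision rule: if the predicted second-pixel signal exceeds a
# threshold, set the second exposure time shorter than the first.
def set_exposure_times(predicted_signal, first_exposure, threshold, factor=0.5):
    if predicted_signal > threshold:
        # Shorten the second exposure (assumed factor) to avoid saturation.
        second_exposure = first_exposure * factor
    else:
        second_exposure = first_exposure
    return first_exposure, second_exposure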

US Pat. No. 10,893,209

PHOTOGRAPHING DIRECTION DEVIATION DETECTION METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A photographing direction deviation detection method, comprising:obtaining, by a deviation detection device, a horizontal acceleration of an image capturing device at each of a plurality of time points in a pre-determined time period;
calculating, by the deviation detection device, a horizontal moving speed of the image capturing device at each of the plurality of time points according to the horizontal acceleration;
setting, by the deviation detection device, a weight for the horizontal moving speed at each of the plurality of time points;
obtaining, by the deviation detection device, an aggregated horizontal moving speed of the pre-determined time period according to the horizontal moving speed and the weight for the horizontal moving speed at each of the plurality of time points;
determining, by the deviation detection device, a target photographing direction of the image capturing device according to the aggregated horizontal moving speed;
calculating a direction angle between the target photographing direction and an actual photographing direction of the image capturing device; and
when the direction angle satisfies a preset condition, determining, by the deviation detection device, that the actual photographing direction of the image capturing device deviates.
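The steps of the method above can be sketched numerically: integrate horizontal accelerations into per-sample speeds, aggregate them with weights, derive a target direction, and compare it against the actual photographing direction. This is an illustrative sketch, not Tencent's implementation; the linear weighting, 2-D motion model, and 30-degree threshold are all assumptions.

```python
import math

def detect_deviation(accels_xy, dt, actual_dir_deg, max_angle_deg=30.0):
    """accels_xy: (ax, ay) horizontal acceleration at each time point."""
    vx = vy = 0.0
    speeds = []  # horizontal moving speed at each time point
    for ax, ay in accels_xy:
        vx += ax * dt
        vy += ay * dt
        speeds.append((vx, vy))
    # Weight later time points more heavily (an assumed weighting scheme).
    weights = [i + 1 for i in range(len(speeds))]
    wsum = sum(weights)
    agg_vx = sum(w * v[0] for w, v in zip(weights, speeds)) / wsum
    agg_vy = sum(w * v[1] for w, v in zip(weights, speeds)) / wsum
    # Target photographing direction: direction of the aggregated motion.
    target_deg = math.degrees(math.atan2(agg_vy, agg_vx))
    # Direction angle between target and actual photographing directions.
    angle = abs((target_deg - actual_dir_deg + 180.0) % 360.0 - 180.0)
    return angle > max_angle_deg, angle
```

With the device accelerating steadily along +x while the camera points 90 degrees away, the direction angle is 90 degrees and a deviation is flagged.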

US Pat. No. 10,893,208

CAMERA MODULE, SELECTOR, CONTROLLER, CAMERA MONITORING SYSTEM, AND MOVEABLE BODY

KYOCERA Corporation, Kyo...

1. A camera module comprising:a first camera, the first camera configured to capture images in a first imaging range;
a second camera capable of switching between a first imaging direction and a second imaging direction, the second camera configured to
capture images in a second imaging range when facing the first imaging direction, and
capture images in a third imaging range when facing the second imaging direction; and
wherein an overlapping range between the first imaging range and the third imaging range is greater than an overlapping range between the first imaging range and the second imaging range; and
a controller, the controller configured to:
acquire a first captured image captured by the first camera;
execute a first process on the acquired first captured image to generate a first processed image;
acquire a second captured image captured by the second camera;
generate a second processed image yielded by execution of a second process on the acquired second captured image; and
generate a third processed image yielded by execution of a third process on the acquired second captured image, the third process being different from the second process,
wherein an overlapping range between the first processed image and the third processed image is greater than an overlapping range between the first processed image and the second processed image.

US Pat. No. 10,893,207

OBJECT TRACKING APPARATUS, OBJECT TRACKING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM

FUJITSU LIMITED, Kawasak...

1. An object tracking apparatus comprising:a memory; and
a processor coupled to the memory and configured to
execute a tracking process that includes
detecting one or more objects to be tracked from frames included in a set of frames of a same time point in pieces of video captured by respective cameras, and
calculating, for each object to be tracked, a three-dimensional position of the object in real space, based on positions of the object in the frames from which the object is detected,
execute, for each object to be tracked, a prediction process that includes predicting positions of the object in next frames that are included in the pieces of video and from which the object is to be detected next, based on the three-dimensional position of the object,
execute an influence-degree obtaining process that includes comparing, for each object to be tracked, a feature of the object to be tracked in each of the next frames in the pieces of video with a feature of another object that overlaps the object to be tracked at a back side of each predicted position of the object to be tracked, based on the predicted positions of the object in next frames, to calculate a backside influence degree that the other object has on detection of the object to be tracked, and
execute a difficulty-degree obtaining process that includes calculating, for each object to be tracked, a detection difficulty degree for detecting the object from each of the next frames captured by the respective cameras, based on the backside influence degree,
wherein the tracking process is configured to
select, the next frames within a predetermined number in an order of a small detection difficulty from among the next frames that are included in the set of next frames in the pieces of video and from which the object is to be detected, and
detect the object from the selected next frames.

US Pat. No. 10,893,206

USER EXPERIENCE WITH DIGITAL ZOOM IN VIDEO FROM A CAMERA

Alarm.com Incorporated, ...

1. A computer-implemented method comprising:receiving a first video stream from a camera, wherein the first video stream corresponds to an entire field of view of the camera that is at a resolution less than a maximum resolution that the camera can sense for the entire field of view;
providing the first video stream from the camera;
providing a digitally zoomed portion of the first video stream from the camera;
requesting a second video stream from the camera corresponding to the digitally zoomed portion of the first video stream, wherein the second video stream corresponds to the digitally zoomed portion of the first video stream and is at least at a maximum resolution that the camera can sense for the digitally zoomed portion;
receiving the second video stream from the camera;
replacing the digitally zoomed portion of the first video stream with the second video stream from the camera while continuing to receive the first video stream from the camera;
receiving user input that indicates to zoom out; and
in response to receiving the user input that indicates to zoom out, requesting the camera stop providing the second video stream from the camera.
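The stream-switching flow claimed above can be sketched as a small session object: the full-field-of-view stream keeps running while a higher-resolution stream for the zoomed region is requested, substituted in, and torn down on zoom-out. All class, method, and API names here are hypothetical, assumed for illustration only.

```python
# Sketch of the claimed flow (assumed camera API): the second stream covers
# only the digitally zoomed region, at the camera's full sensor resolution
# for that region, and is stopped when the user zooms back out.
class ZoomSession:
    def __init__(self, camera):
        self.camera = camera
        self.second_stream = None  # active only while zoomed in

    def zoom_in(self, region):
        # Request a region-of-interest stream; it replaces the digitally
        # zoomed portion of the first stream in the display.
        self.second_stream = self.camera.request_stream(region)
        return self.second_stream

    def zoom_out(self):
        # The first stream was never interrupted, so only the second
        # stream needs to be torn down.
        if self.second_stream is not None:
            self.camera.stop_stream(self.second_stream)
            self.second_stream = None
```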

US Pat. No. 10,893,205

IMAGE DISPLAY METHOD AND IMAGE DISPLAY DEVICE USING THE SAME

Qisda Corporation, Taoyu...

1. An image display method, comprising:obtaining a picture;
capturing a first object image corresponding to a first characteristic parameter from the picture when the content of the picture matches the first characteristic parameter;
setting the first object image as a first click image;
superimposing the first click image on a first display portion of the picture;
displaying the first display portion of the picture and the superimposed first click image; and
displaying a second display portion of the picture in response to operation of selecting the superimposed first click image, wherein the second display portion contains the first object image and a background image of the first object image thereof.

US Pat. No. 10,893,204

PHOTOGRAPHY COMPOSITION GUIDING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A photographing preview composition guiding method, comprising:obtaining a first image in a photographing preview mode of a photographing device;
determining a target photographing object in the first image;
obtaining a second image, wherein the second image comprises the target photographing object, wherein the second image and the first image correspond to different angles of view, and wherein an angle of view indicates a photographing range of the photographing device;
constructing a third image based on the first image and the second image, wherein the third image comprises the target photographing object, wherein an angle of view corresponding to the third image is the same as a smaller angle of view in the angles of view corresponding to the first image and the second image, and wherein constructing the third image based on the first image and the second image comprises determining a shape and a size of the third image based on a large angle of view image and a small angle of view image; and
displaying a composition prompt of the third image in a photographing preview interface.

US Pat. No. 10,893,203

PHOTOGRAPHING METHOD AND APPARATUS, AND TERMINAL DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A photographing method, wherein the photographing method is applied to a terminal device comprising a first screen and a second screen, and wherein the photographing method comprises:obtaining a first instruction, wherein the first instruction triggers a photographing function of the terminal device;
presenting a viewfinder window on the first screen in response to the first instruction;
obtaining a feature parameter of a target image, wherein the target image is an image presented in the viewfinder window;
presenting preset playing content on the second screen in response to the feature parameter of the target image matching a standard feature parameter; and
performing a photographing operation.

US Pat. No. 10,893,202

STORING METADATA RELATED TO CAPTURED IMAGES

GOOGLE LLC, Mountain Vie...

1. A method implemented using one or more processors, comprising:streaming data captured by one or more cameras to a camera application active on a first client device of one or more client devices operated by a user;
invoking an automated assistant at least partially based on the camera application being active on the first client device;
performing image recognition analysis on the data captured by one or more of the cameras to detect a vehicle;
in response to detection of the vehicle, providing to the user, as output from the automated assistant, a suggested task request to remember a parking location associated with the depicted vehicle;
receiving, at the first client device while the data captured by the one or more cameras is streamed to the camera application, confirmation from the user to perform the suggested task request; and
storing metadata indicative of the parking location in one or more computer-readable mediums, wherein the one or more computer-readable mediums are searchable by the automated assistant using the metadata.

US Pat. No. 10,893,201

VIDEO STABILIZATION METHOD WITH NON-LINEAR FRAME MOTION CORRECTION IN THREE AXES

PELCO, INC., Fresno, CA ...

1. A method of performing electronic image stabilization of images captured by an image sensor on a camera device, comprising: measuring non-linear motion of a camera device with a motion sensor during an exposure time for each line of a frame captured by the image sensor of the camera device, wherein a first line of the frame has a first exposure time, and subsequent lines of the frame have an exposure time that is later than a previous line of the frame, each line of the frame having an associated position in the frame; and adjusting the position of each line of the frame based, at least in part, on the measured non-linear motion to create a modified frame that corrects for non-linear motion that occurred for each line of the frame; the measuring non-linear motion further comprising: generating motion data indicative of the measured non-linear motion for each line read from the image sensor for the frame; and storing the motion data in association with line identification information indicative of a line number or order in the frame for each line of the frame.
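The per-line correction claimed above can be sketched in miniature: each sensor line has its own exposure time (rolling shutter), so each line is shifted by the motion measured during its exposure. This is an illustrative sketch under strong assumptions (1-D horizontal motion only, one motion sample per line, wraparound shift); function names are hypothetical.

```python
def _roll(row, k):
    # Circularly shift a row's pixels; positive k moves content right.
    k %= len(row)
    return row[-k:] + row[:-k]

def stabilize_frame(lines, per_line_motion):
    """lines: list of pixel rows; per_line_motion: measured horizontal
    displacement (pixels) during each line's exposure, stored per line
    number as in the claim. Each line is shifted opposite to its motion."""
    return [_roll(row, -int(round(m)))
            for row, m in zip(lines, per_line_motion)]
```

A camera that drifted one pixel to the right during a line's exposure has that line shifted one pixel left in the modified frame, straightening the skew that non-linear motion introduces across lines.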

US Pat. No. 10,893,200

AUTOFOCUS AND OPTICAL IMAGE STABILIZER SYSTEM

1. An autofocus (AF) and optical image stabilization (OIS) system, comprising:a) a first actuator to hold and move a lens barrel;
b) a second actuator to move an image sensor, wherein the second actuator is not rigidly connected to the first actuator, and wherein the first and the second actuators are independently controlled,
c) the first and the second actuators are MEMS piston-tube actuators, wherein each of said actuators provides a 3 degrees-of-freedom motion comprising a translation along a z-axis being perpendicular to a plane of each actuator, and a bi-axial tilt about an x-axis and a y-axis being along the plane of each actuator, and wherein both of said actuators tilt about the x- and the y-axes to achieve OIS and translate along the z-axis to achieve AF, and wherein AF is achieved by moving one or both of the image sensor and the lens barrel, and the OIS is achieved by tilting the lens barrel using the lens barrel actuator around the x- and y-axes and the image sensor actuator is tilted independently around the x- and y-axes to ensure a sensor plane is rotated to coincide with an image plane.

US Pat. No. 10,893,199

IMAGING APPARATUS AND CONTROL METHOD THEREOF

Canon Kabushiki Kaisha, ...

1. An imaging apparatus having (a) an imaging unit that outputs data of a plurality of pixel portions by a first stream and (b) a memory unit having a buffer function that stores image signals of a plurality of frames based on the data, the imaging apparatus comprising:a controller having a processor which executes instructions stored in a memory or having circuitry, the controller being configured to function as:
(1) an acquisition unit configured to acquire still image data that corresponds to a timing at which an instruction to capture a still image has been provided, by a second stream that is different from the first stream in the imaging unit;
(2) a determination unit configured to determine a frame to be used for a moving image that corresponds to the timing from the frames stored in the memory unit; and
(3) a control unit configured to perform control in which an image signal of the frame to be used for the moving image determined by the determination unit is output to a storage destination,
wherein a resolution of the data output by the first stream is lower than a resolution of the still image data output by the second stream, and wherein the still image data is output by the second stream while the data is output by the first stream.

US Pat. No. 10,893,198

WEARABLE SATELLITE RECEIVER WITH REDUCED POWER CONSUMPTION

Snap Inc., Santa Monica,...

1. A wearable electronic device, comprising:an accelerometer;
a position acquisition device;
a camera; and
first hardware processing circuitry configured to perform operations comprising:
configuring the position acquisition device into a low power state such that the position acquisition device is unable to obtain a location of the wearable electronic device;
receiving, while the position acquisition device is in the low power state, a command to capture an image with the camera;
capturing an image with the camera in response to the command;
determining that a configurable setting indicates that a first location is to be captured and stored in association with the captured image;
configuring, in response to the configurable setting, the position acquisition device into an operative state to determine a second location of the wearable electronic device;
activating an accelerometer in response to determining that the configurable setting indicates that the first location is to be captured and stored in association with the captured image;
in response to the command to capture the image and in response to activating the accelerometer, storing acceleration measurements from the accelerometer until at least the position acquisition device obtains the second location of the wearable electronic device;
determining the first location based on the accelerometer measurements and the second location; and
writing the first location and the image captured by the camera to a stable storage device.
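The dead-reckoning step claimed above (determining the first location from the accelerometer measurements and the later position fix) can be sketched by integrating the recorded accelerations into a displacement and subtracting it from the fix. This is a sketch under assumptions (2-D positions, constant sample interval, zero velocity at the moment of capture); it is not Snap's implementation.

```python
def first_location(second_location, accel_samples, dt):
    """Back out where the device was when the image was captured, from the
    position fix obtained later (second_location) and the accelerations
    recorded between capture and fix."""
    vx = vy = 0.0
    dx = dy = 0.0
    for ax, ay in accel_samples:  # double integration: accel -> velocity -> displacement
        vx += ax * dt
        vy += ay * dt
        dx += vx * dt
        dy += vy * dt
    x2, y2 = second_location
    # Capture location = later fix minus the displacement accrued since capture.
    return (x2 - dx, y2 - dy)
```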

US Pat. No. 10,893,197

PASSIVE AND ACTIVE STEREO VISION 3D SENSORS WITH VARIABLE FOCAL LENGTH LENSES

Microsoft Technology Lice...

19. A method of three-dimensional imaging, the method including:emitting an output light in a first wavelength range with an illuminator;
receiving a trigger command;
in response to the trigger command, changing a field of illumination of the illuminator, wherein changing the field of illumination of the illuminator includes moving one or more movable illuminator lenses positioned proximate an output of the illuminator and movable relative to the output of the illuminator, and the one or more movable illuminator lenses are configured to vary the field of illumination such that the field of illumination and the first field of view of the first imaging sensor and the second field of view of the second imaging sensor are fixed relative to one another;
in response to the trigger command, changing a field of view of a first imaging sensor and a second imaging sensor by moving one or more movable imaging lenses positioned proximate an input of the imaging sensor and movable relative to a photoreceptor of the imaging sensor, wherein the one or more movable imaging lenses are configured to vary a field of view of each imaging sensor such that a first field of view of the first imaging sensor and a second field of view of the second imaging sensor are fixed relative to one another;
attenuating a portion of a reflected light with a bandpass filter outside of the first wavelength range to create a filtered light;
detecting the filtered light with the first imaging sensor and the second imaging sensor; and
measuring a depth value by comparing information from the first imaging sensor and the second imaging sensor to calculate a stereoscopic depth measurement from a disparity of the first image from the first imaging sensor and the second image from the second imaging sensor.

US Pat. No. 10,893,196

PANORAMIC CAMERA

IMMERVISION, INC., Montr...

1. A space orientation-based panoramic camera capturing a panoramic image, the captured panoramic image having perspective and/or geometrical distortion, the camera comprising:(a) a spatial orientation device for obtaining both (i) an indication of an orientation of an optical axis of a lens of the panoramic camera relative to a horizon line when capturing the panoramic image and (ii) an indication of a relative position of the panoramic camera about the optical axis when capturing the panoramic image; and
(b) a processing unit automatically generating, based on the indications obtained by the spatial orientation device, an interpretation of the user's intent at the moment of shooting of an image or at the moment of displaying an image;the indications of the orientation of the optical axis of the lens of the panoramic camera relative to the horizon line, of the relative position of the panoramic camera about the optical axis, and the interpretation of the user's intent at the time of capturing the panoramic image being used to generate information in order to produce a processed image to be displayed or printed, wherein the processed image to be displayed or printed has the perspective and/or geometrical distortion at least partially corrected and corresponds to a strip of the panoramic image shot by the camera.

US Pat. No. 10,893,195

SYSTEMS AND METHODS OF MULTI-SENSOR CAMERAS AND REMOTE IMAGERS

Sensormatic Electronics, ...

1. A camera system, comprising:a base including a processing circuit and a communications circuit, the processing circuit positioned in the base;
a plurality of imagers coupled to the processing circuit via the communications circuit, each imager including a lens and a sensor module that receives light via the lens and outputs a plurality of images based on the received light; and
a plurality of connections coupling each imager to the processing circuit via the communications circuit, each imager mounted to an outside surface of the base or spaced from the base, each connection including at least one of a wired connection and a wireless connection, the processing circuit receives the plurality of images from each imager via each respective connection and generates a combined image using the plurality of images, the base has a surface-to-volume ratio greater than a threshold value at which an operating temperature of the processing circuit is less than 60 degrees Celsius while the processing circuit is processing the plurality of images to generate the combined image.

US Pat. No. 10,893,194

DISPLAY APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A display apparatus comprising:a receiver;
a display; and
a processor configured to:
control the receiver to receive a plurality of image segments of a content;
generate a content image mapped to a three dimensional (3D) object based on the plurality of image segments received via the receiver;
control the display to display a first area of the content image corresponding to a first viewpoint;
control the display to display, while the first area corresponding to the first viewpoint is displayed, information about a display quality of a second area of the content image corresponding to a second viewpoint adjacent to the first viewpoint, based on reception states of the plurality of image segments; and
in response to the second viewpoint being selected via a user command moving a viewpoint, control the display to display the second area corresponding to the second viewpoint.

US Pat. No. 10,893,193

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus, comprising:a first acquisition unit configured to acquire first data based on a first image that includes a high-luminance region and a second image that includes an overlapping region with the first image and that does not include the high-luminance region, the first data representing flare distribution in the overlapping region in the first image;
a second acquisition unit configured to acquire second data representing a position of the high-luminance region in the first image;
a generation unit configured to generate third data representing flare distribution in the first image based on the first data and the second data; and
a correction unit configured to correct the first image based on the third data.

US Pat. No. 10,893,192

MOVING IMAGE PICKUP APPARATUS, MOVING IMAGE PICKUP METHOD, AND MOVING IMAGE PICKUP PROGRAM USING A THREE-DIMENSIONAL LOOKUP TABLE

JVCKENWOOD Corporation, ...

1. A moving image pickup apparatus, comprising:a common processing unit comprising:
a fixed level signal generating unit configured to generate a fixed level signal that has discrete values in a color space and an address in association with each other; and
an image processing unit configured to perform image processing on the fixed level signal to generate grid data under a condition set by a user or a condition automatically set by the image pickup apparatus; and
an individual processing unit comprising:
a three-dimensional lookup table storage unit configured to store a three-dimensional lookup table in which the address and the grid data are associated with each other and output the grid data corresponding to an address of an image pickup signal; and
an interpolation processing unit configured to perform interpolation processing using the grid data and the image pickup signal,
wherein
when the three-dimensional lookup table of N×N×N grids is stored in the individual processing unit, the fixed level signal generating unit generates the fixed level signal in which the respective intervals between each of the values R, G, and B of the fixed level signal become a value obtained by dividing the maximum value of the values R, G, and B by N−1,
the fixed level signal generating unit performs the following processing of:
generating, first, the fixed level signal of values (B, G, R)=(0, 0, 0) and an address of (B, G, R)=(0, 0, 0) in association with each other,
then generating the fixed level signal in which each of the values B, G, and R is increased by the intervals and the address in which the corresponding values of B, G, and R are increased by one in association with each other, and
repeating the generation of the fixed level signal and address, thereby generating the fixed level signal and the address of all combinations of values R, G, and B.
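The generation loop described in the claim enumerates every (B, G, R) grid point of the N×N×N lookup table, pairing each address with a value stepped by max/(N−1). A minimal sketch under those assumptions (function and parameter names are illustrative, and 255 is an assumed maximum value):

```python
def generate_fixed_level_signals(n, max_value=255):
    """Enumerate (address, value) pairs for an N x N x N LUT grid.

    Starting from address (0, 0, 0) with value (0, 0, 0), each of
    B, G, R steps by max_value / (n - 1) while the corresponding
    address component is increased by one, covering all combinations.
    """
    step = max_value / (n - 1)
    signals = []
    for bi in range(n):
        for gi in range(n):
            for ri in range(n):
                address = (bi, gi, ri)
                value = (bi * step, gi * step, ri * step)
                signals.append((address, value))
    return signals
```

For n = 3 this yields 27 grid points, from ((0, 0, 0), (0, 0, 0)) up to ((2, 2, 2), (255, 255, 255)).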

US Pat. No. 10,893,191

IMAGE CAPTURING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image capturing apparatus comprising:an image capturing device configured to capture an image;
a control circuit configured to decide whether or not to perform an automatic image capturing by the image capturing device, using parameters obtained in a learning operation in which a learning circuit learns conditions of images that a user likes based on supervised data;
wherein the control circuit sets a score used by the learning circuit in the learning operation, to a first image captured by the image capturing device based on an instruction of a user and a plurality of second images continuously captured by the image capturing device before and/or after the image capturing of the first image, and registers the first and second images and the scores as the supervised data, and
wherein the scores set to the plurality of second images are lower than the score set to the first image.
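The supervised-data registration above can be sketched as follows; the concrete score values are illustrative assumptions, since the claim only requires that the continuously captured frames score lower than the user-instructed frame:

```python
def build_supervised_data(first_image, second_images,
                          first_score=1.0, second_score=0.5):
    """Pair the user-triggered frame with a higher score than the
    frames captured continuously before/after it (values are
    hypothetical; the claim fixes only their ordering)."""
    data = [(first_image, first_score)]
    data += [(img, second_score) for img in second_images]
    return data
```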

US Pat. No. 10,893,190

TRACKING IMAGE COLLECTION FOR DIGITAL CAPTURE OF ENVIRONMENTS, AND ASSOCIATED SYSTEMS AND METHODS

PreNav, Inc., Redwood Ci...

1. A method for digitally capturing a real-world environment, the method comprising:receiving a map of the environment;
identifying multiple scan locations within the environment;
scanning the environment to obtain scan data, from individual scan locations;
comparing the scan data to the map of the environment;
based at least in part on the scan data, creating a view capture route corresponding to a path in the environment;
receiving optical data from an optical sensor carried by a human operator, as the human operator moves bodily along the view capture route;
receiving, from a scanner system at a fixed position within the environment, position and orientation data corresponding to the optical sensor, at multiple points along the view capture route while the human operator moves bodily along at least a portion of the view capture route;
tracking the human operator's progress along the view capture route based on the position and orientation data;
based at least in part on tracking the human operator's progress along the view capture route, providing guidance cues configured to direct bodily movement of the human operator along the view capture route; and
based at least in part on the optical data, generating a 3-D, viewable, virtual representation of the environment.

US Pat. No. 10,893,189

PHOTOGRAPHY ACCESSORY FOR A PORTABLE ELECTRONIC DEVICE

The Runningman (UK) Limit...

1. A photography housing for receiving a mobile phone, the housing comprising:first and second housing components defining at least a partial enclosure for receiving a mobile phone, the first and second housing components being movably connected to one another, such that an internal dimension of the at least a partial enclosure is alterable to releasably fit a dimension of the mobile phone inserted therein,
wherein the photography housing comprises a user input device for controlling the operation of an image sensor of a mobile phone received within the at least a partial enclosure during use of the photography housing,
the first and second housing components being biased towards each other so as to resiliently hold the mobile phone there-between in use, and
the at least a partial enclosure having an open side such that at least a portion of a mobile phone may protrude beyond the at least a partial enclosure in use through said open side.

US Pat. No. 10,893,188

IMAGING APPARATUS HAVING AUTOFOCUS FUNCTION, AND FOCUS ADJUSTMENT METHOD THEREFOR

CANON KABUSHIKI KAISHA, ...

1. An imaging apparatus comprising:an imaging device that photoelectrically converts an optical image obtained through a focus lens and outputs image signals;
a memory storing instructions; and
a processor that implements the instructions to execute a plurality of tasks, including:
a focus detecting task that performs focus detection based on the image signals output from the imaging device, in a state where an aperture value is set to a first aperture value;
a driving task that drives the focus lens to a focus position set according to the focus detection performed by the focus detecting task;
a first acquiring task that acquires a first correction amount of the set focus position according to a difference between the first aperture value and a second aperture value to be used at a time of photographing;
a position determining task that determines a focus position at the time of photographing; and
a controlling task that controls photographing using the determined focus position at the time of photographing, when a user's operation is received,
wherein the position determining task determines:
in a case where the first correction amount is larger than a predetermined amount, a focus position corrected according to the first correction amount as the focus position at the time of photographing; and
in a case where the first correction amount is equal to or smaller than the predetermined amount, the set focus position as the focus position at the time of photographing.
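The position-determining logic in the two cases above reduces to a threshold test on the correction amount. A minimal sketch (names are illustrative, not from the claim):

```python
def final_focus_position(set_position, correction, predetermined_amount):
    """Apply the aperture-difference correction only when it exceeds
    the predetermined amount; otherwise keep the set focus position,
    mirroring the claim's two cases."""
    if abs(correction) > predetermined_amount:
        return set_position + correction
    return set_position
```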

US Pat. No. 10,893,187

DUAL-CORE FOCUSING IMAGE SENSOR, CONTROL-FOCUSING METHOD THEREFOR, AND MOBILE TERMINAL

GUANGDONG OPPO MOBILE TEL...

1. A method of control-focusing for a dual-core focusing image sensor, the dual-core focusing image sensor comprising:a photosensitive cell array comprising:
a plurality of focus photosensitive units, each of which comprising a first half and a second half; and
a plurality of dual-core focusing photosensitive pixels;
a filter cell array disposed above the photosensitive cell array and comprising a plurality of white filter cells; and
a micro-lens array disposed above the filter cell array and comprising:
a plurality of first micro-lenses, each of which having an elliptical shape; and
a plurality of second micro-lenses;
wherein the first half of each of the plurality of focus photosensitive units is covered by one of the white filter cells and the second half of each of the plurality of focus photosensitive units is covered by the plurality of the second micro-lenses, each of the white filter cells is covered by one of the first micro-lenses, and each of the dual-core focusing photosensitive pixels is covered by one of the second micro-lenses; and
the method comprising:
controlling the photosensitive cell array to enter a focusing mode;
reading first phase difference information of each of the focus photosensitive units and second phase difference information of each of the dual-core focusing photosensitive pixels; and
performing a focusing processing according to the first phase difference information and the second phase difference information.

US Pat. No. 10,893,186

MEDICAL IMAGING APPARATUS AND MEDICAL OBSERVATION SYSTEM

SONY OLYMPUS MEDICAL SOLU...

1. A medical imaging apparatus to which a fiberscope including a plurality of optical fibers is connectable, the medical imaging apparatus comprising:an image sensor to capture a subject image of a subject gathered by the fiberscope and emitted from proximal ends of the plurality of optical fibers;
a focus lens to form the subject image emitted from the proximal ends of the plurality of optical fibers on the image sensor;
circuitry configured to
move the focus lens along the optical axis;
perform, when the fiberscope is connected to the medical imaging apparatus, a first auto-focus control to move the focus lens in a first movement range towards a distal end, evaluate a focusing state of the subject image, and position the focus lens at a second lens position deviated from a first lens position at which the proximal ends of the optical fibers are focused, the first movement range including a third lens position at a local minimum focus evaluating value, wherein the second lens position is at a local maximum focus evaluating value after the third lens position towards the distal end, wherein the first lens position has a higher focusing evaluation value than the second lens position; and
position the focus lens at the second lens position during image capture.

US Pat. No. 10,893,185

INTERMEDIATE UNIT AND CAMERA SYSTEM

Sony Corporation, Tokyo ...

12. A camera system comprising:a first cable;
a second cable;
a first camera configured to output a first video signal having a first resolution in one of a spatial direction and a temporal direction;
a camera control unit configured to control a second camera, the second camera having a second resolution that is lower than the first resolution in one of the spatial direction and the temporal direction; and
an intermediate device including
a first connect circuitry coupled to the first camera with the first cable;
a second connect circuitry coupled to the camera control unit with the second cable; and
an information bridge circuitry connected to the first connect circuitry and the second connect circuitry, the information bridge circuitry configured to mediate a communication related to camera control between the first camera and the camera control unit via the first cable, the first connect circuitry, the second connect circuitry, and the second cable.

US Pat. No. 10,893,184

ELECTRONIC DEVICE AND METHOD FOR PROCESSING IMAGE

Samsung Electronics Co., ...

1. A method comprising:obtaining a plurality of images comprising an external object through an image sensor;
providing preview images based on the plurality of images;
identifying image information including first image information of a reference image from among the preview images and second image information corresponding to images except the reference image from among the preview images, at least based on the preview images; and
generating an image by performing image fusion of an image set including at least one image among the plurality of images based on a difference between the first image information and the second image information, in response to an input related to photographing,
wherein the first image information and the second image information comprise at least one of locations of the external object within the preview images, a reference axis, edge information, the amount of exposure, brightness, or blur information, for a respective image.

US Pat. No. 10,893,183

ON-VEHICLE IMAGING SYSTEM

GM Global Technology Oper...

1. An image processing device, comprising:an image sensor, an internal lens, a steerable mirror, an external lens, a lidar device, and a controller;
wherein the lidar device includes a laser transmitter and a receiver;
wherein the laser transmitter is arranged to project a laser beam onto the steerable mirror;
wherein the controller is operatively connected to the image sensor, the lidar device, and the steerable mirror;
wherein the external lens is disposed to monitor a viewable region;
wherein the steerable mirror is interposed between the internal lens and the external lens;
wherein the steerable mirror is arranged to project the viewable region from the external lens onto the image sensor via the internal lens;
wherein the controller is arranged to control the steerable mirror to a first setting to project the viewable region onto the image sensor, and control the image sensor to capture an image file of a field of view (FOV) that is associated with the viewable region that is projected onto the image sensor via the internal lens; and
wherein the controller is arranged to control the steerable mirror to a second setting, activate the laser transmitter to project the laser beam into the viewable region via the steerable mirror, and capture via the receiver a reflected image of the laser beam associated with the viewable region via the steerable mirror.

US Pat. No. 10,893,182

SYSTEMS AND METHODS FOR SPECTRAL IMAGING WITH COMPENSATION FUNCTIONS

GALILEO GROUP, INC., Mel...

1. A method comprising:at an imaging device comprising one or more light source sets, a detector and a controller, wherein at least one program is non-transiently stored in the controller and executable by the controller, the at least one program causing the controller to perform the method of:
(A) acquiring a reference image of a region of interest (ROI) by using the detector to collect light over a reference time period while the ROI is not exposed to any light emitted from the one or more light source sets, wherein the reference image comprises an array of pixels each corresponding to a sub-region in an array of sub-regions of the ROI;
(B) firing a first light source set while not firing any other light source set in the one or more light source sets, wherein the first light source set in the one or more light source sets emits light that is substantially limited to a first spectral range;
(C) acquiring a first target image of the ROI by using the detector to collect light over a first time period while the ROI is exposed to the light emitted from the first light source set, wherein the first target image comprises an array of pixels each corresponding to a sub-region in the array of sub-regions of the ROI; and
(D) compensating the first target image of the ROI using the reference image of the ROI, thereby generating a first compensated image of the ROI, wherein each respective pixel in the array of pixels of the first target image is compensated using the corresponding pixel in the array of pixels of the reference image.
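Step (D) is a per-pixel compensation of the target image by the unilluminated reference image. A minimal plain-list sketch, assuming subtraction clamped at zero as the compensation function (the claim requires only that each target pixel be compensated using the corresponding reference pixel):

```python
def compensate(target, reference):
    """Subtract the reference (ambient-only) image from the target
    image pixel-by-pixel, clamping negative results to zero.

    target and reference are equal-sized 2-D lists of pixel values;
    the subtraction is one plausible compensation, not the only one.
    """
    return [[max(t - r, 0) for t, r in zip(trow, rrow)]
            for trow, rrow in zip(target, reference)]
```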

US Pat. No. 10,893,181

SYSTEM AND APPARATUS FOR COLOR IMAGING DEVICE

Chromatra, LLC, Beverly,...

1. An integrated color imaging system, comprising:a housing having an objective lens configured to receive an image of incident electromagnetic radiation from a scene and focus the image on a photocathode of an image intensifier located inside the housing, the image intensifier having a plurality of radiation sensitive sensors and a phosphor screen;
a first filter wheel located inside the housing between the objective lens and the photocathode, the first filter wheel is configured to be rotatable relative to the image intensifier, the first filter wheel comprises a plurality of first channels selectively positionable in an optical path of the image, and at least one of the first channels is clear and unfiltered;
a second filter wheel located inside the housing between the phosphor screen and an eyepiece lens of the housing, the second filter wheel is configured to be rotatable relative to the image intensifier, the second filter wheel comprises a plurality of second channels selectively positionable in the optical path between the phosphor screen and the eyepiece lens, and at least one of the second channels is clear and unfiltered; and
the clear and unfiltered channels of the first and second filter wheels are configured to be selectively aligned in the optical path and retained in stationary positions.

US Pat. No. 10,893,180

IMAGING DEVICE

Nidec Copal Corporation, ...

1. An imaging device, comprising:a substrate mounting an imaging portion;
a lens barrel holding a lens;
a holder, holding the lens barrel, connected to the substrate;
a first elastic body biasing the substrate in a first direction that is perpendicular to an optical axis; and
a plate that is secured to the holder,
wherein the holder comprises a hook portion that protrudes to an outside;
wherein the plate has a hole portion into which the hook portion is inserted; and
wherein the holder and the plate are fitted together through the hook portion and the hole portion.

US Pat. No. 10,893,179

COMBINED CAMERA

HANGZHOU HIKVISION DIGITA...

1. A combined camera, comprising a first camera housing (10) inside which a first camera (11) is provided, wherein one side of the first camera housing (10) is provided with a rotating disc (30) that is rotatable relative to the first camera housing (10); a rotating bracket is fixed on the rotating disc (30), a second camera housing (20) that is rotatable relative to the rotating bracket is provided on the rotating bracket, and a second camera (21) is provided within the second camera housing (20),wherein the combined camera further comprises a first motor (12), and the first motor (12) is fixed within the first camera housing (10) and is configured to drive the rotating disc (30) to rotate relative to the first camera housing (10), and
wherein the first camera (11), the rotating disc (30) and the first motor (12) are sequentially arranged along a length direction of the first camera housing (10).

US Pat. No. 10,893,178

CAMERA MODULE

Samsung Electro-Mechanics...

1. A camera module comprising:a lens module comprising at least two lenses stacked in an optical axis direction and disposed in a housing; and
a stop module,
wherein the stop module comprises:
a first plate and a second plate stacked in the optical axis direction, wherein the first and second plates are configured to form apertures;
a magnet portion configured to drive the first plate and the second plate by movement of the magnet portion; and
a coil fixedly disposed on the housing facing the magnet portion in a direction intersected with the optical axis direction, and configured to move the magnet portion linearly,
wherein the stop module except for the coil is configured to move in the optical axis direction with the lens module.

US Pat. No. 10,893,177

ELECTRONIC DEVICE WITH RETRACTABLE CAMERA MODULE

Chiun Mai Communication S...

1. An electronic device, comprising:a body; and
a camera module comprising:
a drive assembly comprising a driver and a first connecting block connected to the driver, the driver being configured to drive the first connecting block to linearly move toward or away from the driver;
a camera assembly comprising a camera bracket and a second connecting block connected to the camera bracket; and
a first connecting rod, wherein one end of the first connecting rod connects to the first connecting block, and the other end of the first connecting rod connects to the second connecting block; the first connecting block is configured to drive the first connecting rod to move in a circular motion during a linear movement of the first connecting block, thereby causing the camera bracket to be received in the body.

US Pat. No. 10,893,176

IMAGING DEVICE, ELECTRONIC TERMINAL, AND IMAGING SYSTEM

Panasonic Intellectual Pr...

1. An imaging device capable of connecting to an electronic terminal, the imaging device comprising:an imaging unit configured to capture an image of a subject and generate a live-view image;
a first transmission unit configured to transmit the live-view image to the electronic terminal;
a first reception unit configured to receive, from the electronic terminal, an imaging request for the imaging device; and
a first controller configured to control the imaging unit, the first transmission unit, and the first reception unit,
the first controller configured to acquire a first image frame, which is a captured image after executing imaging using the imaging unit, in response to the imaging request,
the first transmission unit configured to transmit the live-view image to the electronic terminal after the imaging request without any interruption caused by transmission of the first image frame in response to the imaging request,
the electronic terminal configured to extract, after the imaging request, a prescribed live-view image from the live-view image received from the first transmission unit, the electronic terminal configured to cause a display unit of the electronic terminal to display the extracted live-view image as the captured image, the electronic terminal configured to cause a memory to store the extracted live-view image as the captured image.

US Pat. No. 10,893,175

SHADOWLESS CAMERA HOUSING

Bendix Commercial Vehicle...

1. A camera assembly comprising:a circuit board on which an imager is mounted;
a lens assembly mounted over the imager to the circuit board;
a first infrared light source mounted on the circuit board at a first location that corresponds to a driver area of a passenger compartment;
a second infrared light source mounted on the circuit board at a second location that corresponds to a passenger area of the passenger compartment;
a first light pipe mounted to the circuit board at the first location;
a second light pipe mounted to the circuit board at the second location; and
a housing comprising a front side and a rear side, wherein the circuit board, lens assembly, first and second infrared light sources, and first and second light pipes are disposed in the housing,
wherein the first and second light pipes extend from the circuit board to the front side of the housing such that light is conveyed from the first and second infrared light sources, respectively, to an illumination exit plane of the camera assembly at the front side of the housing, which housing thereby does not obstruct respective illumination cones produced at the illumination exit plane,
wherein an end of the first and second light pipes, respectively, each have a textured exit geometry providing a diffused light cone, and
wherein a diameter of the end of the first and second light pipes, respectively, is between 5 and 6 mm.

US Pat. No. 10,893,174

CAMERA, REMOTE VIDEO SPEECH SYSTEM AND APPLICATIONS THEREOF

GUANGZHOU CHANGEN ELECTRO...

1. A camera, comprising: a camera shielding case, a holder, and a cloud deck shielding case, wherein the holder is a hollow structure, and a cavity of the camera shielding case being communicated with a cavity of the cloud deck shielding case through the holder with the hollow structure; a lens, a camera mainboard, an optical fiber transceiver, and a first filter being arranged in the camera shielding case; the camera shielding case being further provided with an optical fiber interface; the lens being connected with the camera mainboard, the camera mainboard being connected with the optical fiber transceiver, and the optical fiber transceiver being connected with the optical fiber interface through an optical fiber; output ends of the first filter being connected with the camera mainboard and the optical fiber transceiver respectively to supply power to the camera mainboard and the optical fiber transceiver, wherein a second filter and a bus controlling module are arranged in the cloud deck shielding case; the camera mainboard being connected with an input end of the second filter through a power line and a control wire extending through the cavity of the hollow holder, and an output end of the second filter being connected with the bus controlling module.

US Pat. No. 10,893,173

COLOR SPACE VALUES CORRESPONDING TO CLASSIFICATION IDENTIFIERS

Hewlett-Packard Developme...

7. An image processing system comprising:a memory storing instructions; and
a processor, by executing the instructions stored in the memory, to:
identify a color mapping resource in response to a classification identifier corresponding to image data;
map the image data with the color mapping resource having color space values corresponding to the classification identifier; and
generate control data for operating a print apparatus to print based on the color mapping resource by combining a first set of halftone image data processed using a first look-up table (LUT) with image data corresponding to a first coverage attribute threshold and a second set of halftone image data processed using a second LUT corresponding to a second coverage attribute threshold that is different from the first coverage attribute threshold.

US Pat. No. 10,893,172

COLOR CALIBRATION

Hewlett-Packard Developme...

16. A printing system comprising:a printing device to print a plurality of base color inks, the printing device comprising a deposit mechanism to deposit drops of base color ink having a defined drop weight onto a print substrate;
a print controller to receive print job data defined within a first color space; and
a memory comprising:
a color-mapping look-up table to map input color data for the print job data to output color data defined within a Neugebauer primary area coverage space, said output color data being used to generate print instructions for the printing device,
color reference data; and
a color calibrator to:
obtain visual property measurements for a plurality of Neugebauer primary test areas printed by the printing device with an area coverage corresponding to a respective Neugebauer primary area coverage vector;
compute a Neugebauer primary area coverage mapping by comparing a difference between an area coverage value derived from the visual property measurements and a reference area coverage value from the color reference data for each Neugebauer primary test area; and
apply the Neugebauer primary area coverage mapping to the color-mapping look-up table to generate a calibrated color-mapping look-up table, wherein the print controller is configured to use the calibrated color-mapping look-up table to print the print job data using the printing device.

US Pat. No. 10,893,171

CORRECTIVE DATA FOR A RECONSTRUCTED TABLE

Hewlett-Packard Developme...

1. A computing device, comprising:a processing resource; and
a memory resource storing machine readable instructions to cause the processing resource to:
generate a key from a reconstructed table;
determine corrective data for the reconstructed table based on the generated key; and
implement the corrective data to the reconstructed table to generate an updated table.

US Pat. No. 10,893,170

PROFILE ADJUSTMENT METHOD, AND PROFILE ADJUSTMENT DEVICE

Seiko Epson Corporation, ...

1. A profile adjustment method of causing a computer to adjust a profile to be used to convert first coordinate values of a first color space into second coordinate values of a second color space, the profile adjustment method comprising:displaying a screen including at least
an input profile selection field through which an input profile defining a correspondent relation between the first coordinate values and third coordinate values of a profile connection space is configured to be selected,
an output profile selection field through which an output profile defining a correspondent relation between the third coordinate values and the second coordinate values is configured to be selected,
a link profile selection field through which a link profile defining a correspondent relation between the first coordinate values and the second coordinate values is configured to be selected, and
an adjustment target profile designation field through which an adjustment target profile is configured to be selected;
accepting a combination of profiles to be used to convert the first coordinate values into the second coordinate values, or accepting one profile to be used to convert the first coordinate values into the second coordinate values, through at least one or more of the input profile selection field, the output profile selection field, or the link profile selection field;
selectively accepting, as the adjustment target profile, one from the input profile, the output profile, and the link profile, or one from two profiles among the input profile, the output profile, and the link profile, through the adjustment target profile designation field;
accepting an adjustment target at coordinates at which an adjustment target color is expressed; and
adjusting the adjustment target profile based on the accepted adjustment target,
upon accepting, as the combination of the profiles, the input profile and the output profile in the input profile selection field and the output profile selection field, respectively, the selectively accepting of the one as the adjustment target profile being performed by simultaneously displaying a designation item of the input profile, a designation item of the output profile, and a designation item of the link profile in a field adjacent to the adjustment target profile designation field in response to selection of a list drop-down arrow adjacent to the adjustment target profile designation field, and accepting selection of one designation item from the designation items of the input profile, the output profile, and the link profile that are being displayed simultaneously.

US Pat. No. 10,893,169

RELAY APPARATUS, CONTROL METHOD, AND INFORMATION PROCESSING SYSTEM

Canon Kabushiki Kaisha, ...

1. A relay apparatus that relays communication between a server which provides a service and a terminal which receives the service, the apparatus comprising:at least one processor;
wherein the at least one processor transmits to the terminal a first request to set, in the terminal, a piece of transmission target information for each of a plurality of services; and
the at least one processor notifies the terminal of a predetermined service of a plurality of services if a second request corresponding to the predetermined service of the plurality of services is received from the server after the at least one processor transfers the first request,
wherein in a case where the predetermined service is notified by the second request to the terminal in which the piece of transmission target information has been set, the terminal transfers the piece of transmission target information set for the notified predetermined service, among a plurality of pieces of information that can be provided by the terminal, and
the at least one processor receives the piece of transmission target information transferred from the terminal, and transfers the piece of transmission target information to the server.

US Pat. No. 10,893,168

MULTI-FUNCTION APPARATUS AND METHOD FOR AUTHENTICATING RECEIVED FACSIMILE DATA AND OUTPUTTING THE RECEIVED FACSIMILE

SHARP KABUSHIKI KAISHA, ...

1. A multi-function apparatus comprising:a receiver that receives facsimile data from originating numbers;
a memory that stores the facsimile data and originating numbers received by the receiver such that the facsimile data are associated with the respective originating numbers thereof;
a controller that, when a user logs into the multi-function apparatus, compares each of the originating numbers stored in the memory with each of destination numbers in a communication destination list that corresponds to the user from among a plurality of communication destination lists that respectively correspond to a plurality of users;
a display that displays, exclusively to the user, a reception list of only the facsimile data associated with the originating numbers that correspond to the destination numbers in the communication destination list of the user; and
a printer that prints the facsimile data included in the reception list displayed on the display; wherein
when an administrator logs into the multi-function apparatus, the display displays the facsimile data received from the originating numbers that do not correspond to any of the plurality of communication destination lists respectively corresponding to the plurality of users.
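The filtering logic this claim recites — per-user reception lists matched against each user's contact list, with the administrator seeing only unmatched faxes — can be sketched in a few lines. Data shapes (tuples, dicts, the `"admin"` login) are assumptions for illustration.

```python
# Hedged sketch of the per-user facsimile reception-list filtering.

def reception_list(faxes, destination_lists, login):
    """faxes: list of (originating_number, data) pairs.
    destination_lists: {user: set of destination numbers}.
    login: a user name, or "admin" for the administrator view."""
    if login == "admin":
        # Administrator sees faxes matching no user's destination list.
        known = set().union(*destination_lists.values()) if destination_lists else set()
        return [f for f in faxes if f[0] not in known]
    allowed = destination_lists.get(login, set())
    return [f for f in faxes if f[0] in allowed]

faxes = [("0312345678", "invoice"), ("0999999999", "spam")]
lists = {"alice": {"0312345678"}}
```

With these inputs, `alice` sees only the invoice fax, while the administrator view surfaces the fax whose originating number matches no user's list.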

US Pat. No. 10,893,167

EXTRACTING A DOCUMENT PAGE IMAGE FROM AN ELECTRONICALLY SCANNED IMAGE HAVING A NON-UNIFORM BACKGROUND CONTENT

Hewlett-Packard Developme...

1. A method comprising:acquiring data representing a first image produced by electronically scanning a page against a background, wherein the first image contains a non-uniform background content due at least in part to a variation introduced by the background being non-uniform; and
extracting an image of the page from the first image, the extracting comprising:
processing the data representing the first image to generate second data representing a second image of the page and the background, the second image having a lower resolution than a resolution of the first image;
characterizing the background content of the first image using the second image;
identifying candidate pixels associated with the page based at least in part on the characterized background content; and
based at least in part on the identified candidate pixels and a model for a boundary of the page, determining the boundary of the page.
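One simple way to realize the pipeline in this claim — downscale, characterize the non-uniform background, mark candidate page pixels, fit a boundary model — is sketched below. The grayscale list-of-rows image format, the border-sampling background estimate, and the bounding-rectangle boundary model are all assumptions for illustration.

```python
# Illustrative sketch of the extraction flow described in the claim.

def downscale(img, factor=2):
    """Average factor x factor blocks to get a lower-resolution image."""
    h, w = len(img), len(img[0])
    return [[sum(img[y*factor + dy][x*factor + dx]
                 for dy in range(factor) for dx in range(factor)) // (factor**2)
             for x in range(w // factor)]
            for y in range(h // factor)]

def page_boundary(img, factor=2, margin=40):
    small = downscale(img, factor)
    # Characterize the background from the border pixels of the small image.
    border = small[0] + small[-1] + [r[0] for r in small] + [r[-1] for r in small]
    bg = sum(border) / len(border)
    # Candidate page pixels differ markedly from the estimated background.
    ys = [y for y, row in enumerate(small) for v in row if abs(v - bg) > margin]
    xs = [x for row in small for x, v in enumerate(row) if abs(v - bg) > margin]
    if not xs:
        return None
    # Model the boundary as the bounding rectangle, mapped back to full res.
    return (min(xs)*factor, min(ys)*factor,
            (max(xs)+1)*factor, (max(ys)+1)*factor)

# Toy scan: dark background with a bright page region.
img = [[30]*8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        img[y][x] = 200
box = page_boundary(img)
```

Working on the lower-resolution copy keeps the background characterization cheap, which mirrors the claim's use of a second, lower-resolution image.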

US Pat. No. 10,893,166

MANAGEMENT SYSTEM, METHOD, AND PROGRAM STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A management system comprising:a client configured to communicate with a plurality of network devices; and
a manager configured to manage the client,
wherein the manager includes
a first memory storing first instructions, and
a first processor executing the first instructions to cause the manager to:
manage a destination list set including a plurality of destination lists, each of which includes one or more destinations and
generate a task in which the destination list set and a network device to be distributed are set,
wherein the client includes
a second memory storing second instructions, and
a second processor executing the second instructions to cause the client to:
execute a first determination to determine whether the network device to be distributed is able to manage the plurality of destination lists included in the destination list set, based on the task generated by the manager,
execute a second determination to determine whether the number of destinations included in the plurality of destination lists included in the destination list set is within the number of destinations manageable by the network device to be distributed,
distribute distribution data including a plurality of destination lists of the destination list set to the network device to be distributed when it is determined in the first determination that the network device to be distributed is able to manage the plurality of destination lists and it is determined in the second determination that the number of destinations included in the plurality of destination lists of the destination list set is within the number of manageable destinations, and
generate one destination list manageable by the network device to be distributed using the destinations included in the plurality of destination lists of the destination list set when it is determined in the first determination that the network device to be distributed is not able to manage the plurality of destination lists and it is determined in the second determination that the number of destinations included in the plurality of destination lists of the destination list set is within the number of manageable destinations, and
wherein the distribution data including the generated destination lists is distributed to the network device to be distributed.
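The two determinations and the merge fallback in this claim reduce to a small decision function. The capability flags and data shapes below are assumptions; the claim does not prescribe a concrete encoding.

```python
# Sketch of the client's two determinations before distributing
# destination lists to a network device.

def build_distribution(dest_list_set, supports_multiple_lists, max_destinations):
    """dest_list_set: list of destination lists (each a list of destinations).
    Returns the data to distribute, or None if the device cannot hold it."""
    total = sum(len(lst) for lst in dest_list_set)
    if total > max_destinations:        # second determination fails
        return None
    if supports_multiple_lists:         # first determination passes
        return dest_list_set
    # Device manages only one list: merge every destination into one list.
    merged = [d for lst in dest_list_set for d in lst]
    return [merged]

lists = [["a@x", "b@x"], ["c@x"]]
```

A device that can hold multiple lists receives the set unchanged; a single-list device receives one merged list, provided the total destination count fits.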

US Pat. No. 10,893,165

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus, comprising:a scanner that scans an image of a document and obtains image data;
a memory that stores a set of instructions; and
at least one processor that executes the instructions stored in the memory to:
register a name of an object;
cause a display to display the object;
receive an operation on the object by a user;
generate a file based on the image data obtained by the scanner;
set a file name of the file; and
perform, based on the operation on the object, a transmitting process for transmitting the file which has the set file name,
wherein the file name is able to be set by a first name setting method for setting the file name based on the name of the object,
wherein the file name is able to be set by a second name setting method for setting the file name based on a name received from the user separately from the name of the object,
wherein a plurality of objects including a first object and a second object are able to be registered,
wherein any one of at least the first name setting method and the second name setting method is able to be selected for the first object before a first operation on the first object is performed,
wherein any one of at least the first name setting method and the second name setting method is able to be selected for the second object before a second operation on the second object is performed, and
wherein different name setting methods from each other are able to be selected for the first object and the second object.

US Pat. No. 10,893,164

IMAGE SENSOR UNIT AND IMAGE READING DEVICE

Nippon Sheet Glass Compan...

1. An image sensor unit comprising:a linear light source that illuminates a document with a light;
a first erecting equal-magnification lens array and a second erecting equal-magnification lens array arranged in the stated order away from the document so as to receive a light reflected from the document and form an erecting equal-magnification image;
a visual field restriction device provided on an intermediate imaging plane between the first erecting equal-magnification lens array and the second erecting equal-magnification lens array;
a spectral device that disperses a light output from the second erecting equal-magnification lens array; and
a linear image sensor that receives a light dispersed by the spectral device.

US Pat. No. 10,893,163

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INSTRUCTIONS CAUSING IMAGE OUTPUTTING DEVICE TO EXECUTE OUTPUTTING OPERATION

Brother Kogyo Kabushiki K...

20. An image outputting system, comprising:a computer including a processor, a memory, a user interface and a communication interface; and
an image outputting device configured to execute an outputting operation in accordance with image data, the image outputting device being connected with the computer,
the processor of the computer is configured to perform:
receiving an output instruction instructing output of the image data;
obtaining, from an external program stored in the memory, transfer requirement information requesting to transfer recommendable condition information, the recommendable condition information including multiple parameters respectively corresponding to multiple items constituting an execution condition of the outputting operation;
transmitting parameter request information to a server through the communication interface, the parameter request information including key information indicating a current status of the computer, the parameter request information being information requesting the server to transmit recommendable parameters corresponding to particular items which are parts of the multiple items, the recommendable parameters being parameters relating to the key information;
receiving parameter instruction information including the recommendable parameters associated with the key information from the server through the communication interface as a response to the parameter request information;
transferring the recommendable condition information including the recommendable parameters included in the parameter instruction information to the external program in response to the transfer requirement information so that the external program can identify the recommendable parameters in accordance with the input through the user interface;
obtaining designated condition information, from the external program, the designated condition information being information representing a condition identified by the external program in accordance with the input through the user interface; and
outputting output instruction information to cause the image outputting device to execute the outputting operation in accordance with the execution condition represented by the designated condition information, and
the image outputting device performing the outputting operation in response to receipt of the output instruction information output by the computer.

US Pat. No. 10,893,162

SYSTEM, METHOD OF DETECTING ALTERNATION OF PRINTED MATTER, AND STORAGE MEDIUM

Ricoh Company, Ltd., Tok...

1. A system comprising:circuitry configured to
embed digital watermark data in an original image, wherein the original image is divided into a plurality of blocks, each block having been embedded with a pattern corresponding to each value of the digital watermark data;
store, in a memory, the original image, the digital watermark data embedded in the original image, and an embedding position of the pattern of the digital watermark data in association with each other;
detect a pattern in a scanned image of a printed matter;
decode the detected pattern to acquire digital watermark data included in the scanned image;
align each block between the original image and the scanned image, based on the embedding position of the pattern associated with the original image and a detection position of the pattern detected from the scanned image; and
obtain a difference between the original image and the scanned image aligned with each other to detect an alteration of the printed matter.
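The alignment-then-difference step at the core of this claim can be sketched as follows. Representing blocks by position-keyed hashes and estimating a single translational shift are simplifying assumptions; a real system would handle rotation, scale, and pixel-level comparison.

```python
# Sketch of block-wise alignment between original and scanned images,
# followed by a difference pass to flag altered blocks.

def align_blocks(original_positions, detected_positions):
    """Estimate a single (dx, dy) shift from matching pattern positions."""
    (ox, oy), (sx, sy) = original_positions[0], detected_positions[0]
    return sx - ox, sy - oy

def detect_alteration(original_blocks, scanned_blocks, shift):
    """Report block positions whose content differs after alignment."""
    dx, dy = shift
    altered = []
    for (x, y), block_hash in original_blocks.items():
        if scanned_blocks.get((x + dx, y + dy)) != block_hash:
            altered.append((x, y))
    return altered

orig = {(0, 0): "h1", (1, 0): "h2"}
scan = {(2, 1): "h1", (3, 1): "hX"}    # shifted by (2, 1); second block altered
shift = align_blocks([(0, 0)], [(2, 1)])
altered = detect_alteration(orig, scan, shift)
```

Storing the embedding positions alongside the original image, as the claim recites, is what makes this shift estimate possible without re-detecting the watermark in the original.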

US Pat. No. 10,893,161

PRINTING SYSTEM OPERABLE FROM PLURALITY OF APPLICATIONS, INFORMATION PROCESSING APPARATUS, AND METHOD AND PROGRAM FOR CONTROLLING INFORMATION PROCESSING APPARATUS

Canon Kabushiki Kaisha, ...

1. A printing system comprising:a plurality of external apparatuses, including a first external apparatus having a display configured to display information and including a second external apparatus, configured to execute a plurality of sheet management applications; and
a printing apparatus provided with a plurality of sheet containers and configured to (i) register sheet information in correspondence with the plurality of sheet containers and (ii) to update print adjustment information linked with the registered sheet information in accordance with a request from one of the plurality of sheet management applications,
wherein the printing apparatus includes a printing apparatus controller having a processor and a memory configured to perform operations including:
retaining information corresponding to the first external apparatus executing a first sheet management application, and
providing a first notification to the first external apparatus in a case where a second notification for starting a second sheet management application is acquired from the second external apparatus in a state where the information corresponding to the first external apparatus is retained,
wherein the first external apparatus includes a first external apparatus controller having a processor and a memory configured to perform operations including:
causing the display to display a first screen of the first sheet management application, wherein the first screen is capable of listing the registered sheet information, and
causing the display to display a second screen of the first sheet management application upon acquisition of the first notification from the printing apparatus,
wherein the first screen includes a first message indicating that two or more sheet management applications should not be executed simultaneously, and
wherein the second screen includes a second message related to a process of prohibiting the plurality of external apparatuses from simultaneously executing the two or more sheet management applications.

US Pat. No. 10,893,160

MULTI-FEED DETECTION APPARATUS FOR CHANGING A THRESHOLD VALUE FOR DETECTING MULTI-FEED OR STOPPING DETECTION OF MULTI-FEED BASED ON A SHAPE OF A MEDIUM

PFU LIMITED, Ishikawa (J...

1. A multi-feed detection apparatus comprising:a conveyance roller to convey a medium;
an ultrasonic sensor including an ultrasonic transmitter for transmitting an ultrasonic wave;
an ultrasonic receiver facing the ultrasonic transmitter for receiving the ultrasonic wave through the medium and generating an ultrasonic signal corresponding to the received ultrasonic wave;
an imaging device to image the medium being conveyed by the conveyance roller and sequentially generate a line image; and
a processor to:
detect a width of the medium in each line image,
detect a vertex count of the medium included in the sequentially generated line image based on the width of the medium in the each line image,
detect a length of the medium included in the sequentially generated line image, in a direction perpendicular to a width of the medium,
estimate a shape of the medium based on the vertex count and the length of the medium,
detect a media multi-feed by comparing the ultrasonic signal with a predetermined threshold value, and
change the predetermined threshold value or stop detection of the media multi-feed, based on the estimated shape of the medium.
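The processor behavior this claim recites — estimate the medium's shape from vertex count and length, then either adjust the ultrasonic threshold or stop multi-feed detection — can be sketched as a small decision function. The shape categories, size cutoffs, and threshold scaling below are invented for illustration.

```python
# Sketch: shape-dependent multi-feed detection decision.

def classify_shape(vertex_count, length_mm):
    if vertex_count > 4:
        return "irregular"      # e.g. a folded or non-rectangular medium
    if length_mm > 420:
        return "long"
    return "rectangular"

def multifeed_decision(ultrasonic_level, vertex_count, length_mm,
                       base_threshold=0.5):
    shape = classify_shape(vertex_count, length_mm)
    if shape == "irregular":
        return ("skip", None)   # stop multi-feed detection entirely
    # Long media attenuate the ultrasonic signal more, so lower the threshold.
    threshold = base_threshold * (0.8 if shape == "long" else 1.0)
    return ("multifeed" if ultrasonic_level < threshold else "single",
            threshold)
```

The point of the shape gate is to avoid false multi-feed alarms: an irregular or unusually long medium legitimately weakens the received ultrasonic signal.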

US Pat. No. 10,893,159

DOCUMENT FEEDER AND IMAGE FORMING APPARATUS

SHARP KABUSHIKI KAISHA, ...

1. A document feeder comprising:a document placing portion on which a document is placed;
a transporter which transports the document placed on the document placing portion so as to output the document from a document output port in an output direction;
a document discharge portion on which the document output from the document output port is stacked; and
a substrate including a light source that emits light with which the document discharge portion is irradiated,
the document discharge portion being disposed below the document placing portion,
the substrate being fixed to the document placing portion so as to incline relative to a horizontal reference of the document feeder, and inclining a surface of the substrate on which the light source is mounted such that an optical axis of the light source extends toward the document output port.

US Pat. No. 10,893,158

DISPLAY DEVICE, PROGRAM, AND DISPLAY METHOD OF DISPLAY DEVICE

SHARP KABUSHIKI KAISHA, ...

1. A display device, comprising:a display that displays an input screen having a plurality of areas arranged with an input element;
determination circuitry that determines a cause of a disabled state if the input element is in the disabled state; and
display controlling circuitry that performs control to identifiably display the plurality of areas arranged with the input element in the disabled state, in a color corresponding to the cause, wherein
the display device further comprises a storage that stores colors corresponding to the plurality of areas, and
if a cause of disabling an input element included in one area is based on a content input in an input element included in a different area, the display controlling circuitry performs control to display a whole of the one area in a color corresponding to the different area.

US Pat. No. 10,893,157

INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING APPARATUS

Ricoh Company, Ltd., Tok...

1. An information processing system, comprising:a server including first circuitry and a first memory; and
an information processing apparatus including second circuitry and being connectable to the server via a communication network, wherein
the second circuitry of the information processing apparatus is configured to
acquire, as logged-in user information, information input by a user when the user logs in to the information processing apparatus, and
transmit the logged-in user information to the server, and
the first circuitry of the server is configured to
store the transmitted logged-in user information in the first memory,
acquire a user request to the information processing apparatus based on audio information of voice input via a terminal, the user request including a type of job and user identifying information,
replace the user identifying information included in the acquired user request with the acquired logged-in user information stored in the first memory to generate a modified user request, the user identifying information specifying address information of the user, and
transmit the modified user request including the acquired logged-in user information to the information processing apparatus to instruct the information processing apparatus to execute the user request.
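The substitution step in this claim — replacing the loosely identifying name parsed from voice audio with the verified logged-in user information held by the server — is easy to sketch. The dict shapes and key names are illustrative assumptions.

```python
# Sketch of the server-side identity substitution for a voice-initiated job.

def modify_request(user_request, logged_in_users, device_id):
    """user_request: {'job': ..., 'user': spoken name} parsed from audio.
    logged_in_users: {device_id: verified login info} held by the server."""
    login = logged_in_users.get(device_id)
    if login is None:
        raise LookupError("no user is logged in at this device")
    modified = dict(user_request)
    modified["user"] = login        # replace spoken identity with login info
    return modified

logged_in = {"mfp-7": {"id": "alice", "email": "alice@example.com"}}
request = {"job": "scan-to-me", "user": "Alice"}
modified = modify_request(request, logged_in, "mfp-7")
```

Using the login record rather than the spoken name means address information (here, the email) comes from an authenticated source, which is the practical motivation for the replacement.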

US Pat. No. 10,893,156

SCANNING AUTHORIZATION

KYOCERA Document Solution...

1. A method, comprising:storing in a memory at least one keyword in a word bank, at least one permission in at least one profile, and at least one action to be performed responsive to a scanning operation of a document including the at least one keyword;
obtaining a scan file and a user identification associated therewith of the scanning operation;
performing a recognition operation on an image in the scan file;
checking for the at least one keyword in a recognized content obtained from the image;
responsive to recognizing the at least one keyword in the recognized content, checking for the at least one permission having the user identification to determine whether a user associated with the user identification is authorized to perform the scanning operation; and
performing the at least one action for the scanning operation responsive to an outcome of the checking for the at least one permission;
wherein the at least one action comprises:
allowing a completion of the scanning operation responsive to the outcome indicating the scanning operation is authorized by having the user identification including the at least one permission; and
prohibiting a completion of the scanning operation responsive to the outcome indicating the scanning operation is unauthorized by the user identification not being included in the at least one permission.
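The allow/prohibit flow of this claim can be condensed into one function: recognize keywords in the scanned content, and gate completion on the user's permissions. The word bank, profile shapes, and all-keywords-permitted rule are assumptions for illustration.

```python
# Hedged sketch of keyword-gated scan authorization.

def authorize_scan(recognized_text, word_bank, profiles, user_id):
    """Return 'allow' or 'prohibit' for the scanning operation."""
    hits = [w for w in word_bank if w.lower() in recognized_text.lower()]
    if not hits:
        return "allow"          # no restricted keyword was recognized
    permissions = profiles.get(user_id, set())
    if all(w in permissions for w in hits):
        return "allow"          # user holds permissions for every keyword hit
    return "prohibit"

bank = {"confidential", "salary"}
profiles = {"u1": {"confidential", "salary"}, "u2": set()}
```

Documents without restricted keywords pass through untouched; only recognized hits trigger the permission check, matching the claim's "responsive to recognizing" language.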

US Pat. No. 10,893,155

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM CONTAINING INSTRUCTIONS CAUSING DEVICE TO DOWNLOAD DATA FROM SERVER

Brother Kogyo Kabushiki K...

1. A non-transitory computer-readable recording medium for an information processing device provided with a first communication interface, a second communication interface and a controller, the recording medium containing instructions realizing an application program, the instructions causing, when executed, the controller to perform:a first communication process of downloading content data from a storage server through the first communication interface;
a second communication process of transmitting the content data downloaded in the first communication process to a target device capable of outputting the content data through the second communication interface;
a first determining process of determining whether a startup option including address information is delivered when the application program is started; and
in response to determining, in the first determining process, that the startup option including the address information is delivered, a second determining process of determining whether the address information included in the startup option is address information indicating the storage server,
the instructions further causing, when executed, the controller to:
in response to determining, in the second determining process, that the address information included in the startup option is the address information indicating the storage server:
download, in the first communication process, the content data from the storage server indicated by the address information included in the startup option; and
transmit, in the second communication process, the content data downloaded in the first communication process, to the target device.
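The two determining processes above — is a startup option with an address present, and does that address indicate the storage server — followed by download and transmit, can be sketched as follows. The host set, URL parsing, and callback stubs are illustrative; `storage.example.com` is a hypothetical address.

```python
# Sketch of the startup-option routing described in the claim.

STORAGE_SERVER_HOSTS = {"storage.example.com"}   # assumed known storage hosts

def handle_startup(startup_option, download, transmit):
    """startup_option: dict possibly containing an 'address' key.
    download(address) -> content bytes; transmit(content) sends to device."""
    if not startup_option or "address" not in startup_option:
        return "no-option"                       # first determination fails
    address = startup_option["address"]
    host = address.split("//", 1)[-1].split("/", 1)[0]
    if host not in STORAGE_SERVER_HOSTS:
        return "not-storage-server"              # second determination fails
    transmit(download(address))                  # first then second process
    return "sent"

sent = []
result = handle_startup(
    {"address": "https://storage.example.com/doc/1"},
    download=lambda addr: b"content:" + addr.encode(),
    transmit=sent.append,
)
```

Passing `download` and `transmit` as callables keeps the two communication interfaces of the claim separable from the decision logic.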

US Pat. No. 10,893,154

PERSONALIZED SOUVENIR PRODUCING INTERACTIVE KIOSK SYSTEM

1. A personalized souvenir producing interactive kiosk system, the personalized souvenir producing interactive kiosk system comprising:a kiosk having:
a user interface providing a user-interactive display;
a payment receiving and processing system for accepting and processing payment from a user;
a powering source for powering said kiosk;
a digital input receiver for receiving a digital-image from an electronic device, said kiosk being in communication with said electronic device;
a housing storing:
a supply of magnetic sheeting;
a printer configured to print a user-selected said digital image on a section of said supply of magnetic sheeting stored in said kiosk in response to receiving said digital-image and said payment from said user;
and
a central processing unit controlling functions of said printer;
and
a dispenser tray for dispensing and delivering a personalized souvenir to said user;wherein said personalized souvenir is a refrigerator magnet;wherein said payment receiving and processing system is accessible from an exterior of said kiosk and comprises a paper currency acceptor for receiving and processing paper money payments and paper currency storage bin for storage thereof;wherein said payment receiving and processing system further comprises a mag-stripe reader for reading and processing payment from a payment-card;wherein said powering source comprises a power cord;wherein said user interface comprises a touchscreen digital display;wherein said kiosk further comprises at least one port configured receive a transfer cable and connect with said electronic device for receiving said user-selected said digital-image from said electronic device;wherein said at least one port is a Universal Serial Bus port configured to transfer said digital-image from said electronic device to said central processing unit;wherein said kiosk further comprises wireless communication capability for communicating with a remote said electronic device;wherein said further comprises a cutting press for separating said supply of magnetic sheeting into individual said refrigerator magnet size sections for printing said user-selected digital image thereon;wherein said kiosk further comprises a database configured to store preprogrammed-templates, preprogrammed-images, and preprogrammed-backgrounds for further customization of said personalized souvenir;wherein said digital image is selected from a group consisting of a photograph, a graphic image, and said preprogrammed-image;wherein an orientation of at least one of said digital images is able to be manipulated and customized by said user using said user interface;wherein said digital image is printed on a top-surface of said refrigerator magnet opposing a magnetic base;wherein said top-surface comprises a vinyl substrate;wherein 
internal components of said kiosk are coated with a non-stick material for aiding in effective dispensing of said personalized souvenir.

US Pat. No. 10,893,153

INFORMATION PROCESSING APPARATUS THAT PERFORMS ALBUM CREATION PROCESS AND METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS THAT PERFORMS ALBUM CREATION PROCESS

KYOCERA Document Solution...

1. An information processing apparatus, comprising:a storage medium that stores a plurality of pieces of photograph data and a plurality of comments;
an input device that accepts a selection of, from among the plurality of pieces of photograph data stored by the storage medium, a set of pieces of photograph data to be included in album page data used for printing of an album; and
a processor that places the set of pieces of photograph data selected at the input device in the album page data, adds an enclosure image to each of the set of pieces of photograph data in the album page data, selects, from among the plurality of comments stored by the storage medium, a comment to be added to the each of the set of pieces of photograph data, and performs control so that the comment thus selected is included in the enclosure image, thus adding the comment to the each of the set of pieces of photograph data,
wherein
the input device accepts a total number of pieces of the photograph data to be included in the album page data,
in a case where the number of the set of pieces of photograph data selected at the input device is less than the total number, the processor automatically selects a piece of the photograph data,
when automatically selecting the piece of the photograph data, the processor selects, from among unselected pieces of the photograph data, a piece of photograph data showing a person whose face faces a face of a person shown in any one of the set of pieces of photograph data selected via the input device,
the storage medium stores reply comments that are replies to the plurality of comments, respectively,
the processor places, in the album page data, two pieces of the photograph data side by side so that the faces of the persons shown respectively in the two pieces of the photograph data face each other,
the processor adds the comment to one of the two pieces of the photograph data respectively showing the faces facing each other, and
the processor adds one of the reply comments corresponding to the comment added to the one of the two pieces of the photograph data respectively showing the faces facing each other to the other of the two pieces of the photograph data.

US Pat. No. 10,893,152

METHOD, COMPUTER PROGRAM, AND ALGORITHM FOR COMPUTING NETWORK SERVICE VALUE PRICING BASED ON COMMUNICATION SERVICE EXPERIENCES DELIVERED TO CONSUMERS AND MERCHANTS OVER A SMART MULTI-SERVICES (SMS) COMMUNICATION NETWORK

INCNETWORKS, INC., Somer...

1. A mobile cloud computing (MCC) network system for transferring data during a handover of a user equipment connected to an MCC communication network, the MCC network system comprising:at least one processor; and
a memory coupled to the at least one processor, the memory for storing data to facilitate transfer of the data to the user equipment connected to the MCC communication network, and the memory storing instructions to cause the at least one processor to:
receive, by an MCC access controller connected to the MCC communication network and configured to enable the transfer and execution of at least one mobile cloud application for the user equipment as the user equipment moves to a current visiting MCC network, an access request submitted by the user equipment to move to the current visiting MCC network;
in response to the access request from the user equipment to the current visiting MCC network, perform, by the MCC access controller, a connection handover for the user equipment to the current visiting MCC network;
in response to the access request from the user equipment to the current visiting MCC network, add, by the MCC access controller, each previously visiting MCC network as an ad hoc addition of a server node on a local, regional, national, or global level within the MCC communication network and identify the user equipment as a visitor within the current visiting MCC network and each previously visiting MCC network;
during the connection handover, execute simultaneously, by the MCC access controller, a Quality of Experience (QoE) data handover to deliver user equipment specific QoE data, which includes, user equipment specific applications, resources, and services presently not available on the current visiting MCC network, from a home MCC network of the user equipment to the user equipment via the current visiting MCC network so that the user equipment can maintain at the current visiting MCC network the same user equipment specific QoE service levels as defined on the home MCC network;
automatically locate, by the MCC access controller, the user equipment specific QoE data on one or more home servers on the home MCC network of the user equipment; and
automatically transfer and store a copy of the user equipment specific QoE data from at least one home server on the home MCC network to at least one visiting server on the current visiting MCC network; and
automatically maintain, synchronize, and age, by the MCC access controller, the user equipment specific QoE data of the user equipment on the current visiting MCC network,
wherein the MCC access controller receives, from the user equipment, registration information to identify the user equipment on the current visiting MCC network,
wherein the MCC access controller assigns in at least one database, based on the registration information of the user equipment, a home location network associated with the user equipment, a service identity to identify the user equipment to permit access to the current visiting MCC network for requesting and receiving services, and at least one end user permanent device registered and authorized by the user equipment to transmit requests and to receive services on behalf of the user equipment, and
wherein the MCC access controller is configured such that the user equipment is decoupled from the permanent device to permit use of a temporary device to gain access to the user equipment specific QoE data at the current visiting MCC network, and such that a home data access request is transmitted to the home MCC network to gain access to the user equipment specific QoE data, when the user equipment uses the temporary device to connect to the current visiting MCC network and after the user equipment transmits to the MCC access controller a security feature comprising the service identity and successfully completes a user equipment authentication and verification process on the current visiting MCC network.
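The claimed handover flow (connection handover, marking the user equipment as a visitor on each previously visited network, and the simultaneous QoE data copy from a home server to a visiting server) can be sketched as follows. This is a minimal illustrative sketch only: the class names, the `handle_access_request` method, and the `"home-server"`/`"visiting-server"` keys are hypothetical and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    """An MCC network holding servers (with per-UE QoE data) and visitor IDs."""
    name: str
    servers: dict = field(default_factory=dict)   # server name -> {ue_id: QoE data}
    visitors: set = field(default_factory=set)

@dataclass
class UserEquipment:
    ue_id: str
    home: Network
    visited: list = field(default_factory=list)   # previously visiting networks

class MCCAccessController:
    def handle_access_request(self, ue: UserEquipment, visiting: Network):
        # Connection handover: attach the UE to the current visiting network.
        visiting.visitors.add(ue.ue_id)
        # Treat each previously visiting network as an ad hoc server node and
        # keep the UE identified there as a visitor.
        for net in ue.visited:
            net.visitors.add(ue.ue_id)
        # Simultaneous QoE data handover: locate the UE-specific QoE data on a
        # home server and store a copy on a server of the visiting network.
        qoe = ue.home.servers.get("home-server", {}).get(ue.ue_id)
        if qoe is not None:
            visiting.servers.setdefault("visiting-server", {})[ue.ue_id] = dict(qoe)
        ue.visited.append(visiting)
        return qoe
```

The copy (`dict(qoe)`) rather than a shared reference reflects the claim's separate maintain/synchronize/age step on the visiting network.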

US Pat. No. 10,893,151

DATA GAP BRIDGING METHODS AND SYSTEMS

Hewlett Packard Enterpris...

1. A non-transitory computer readable medium comprising computer executable instructions stored thereon that, when executed by a processor in a source system, cause the processor to:
receive, at a downlink monitor, a sent data volume from an external computing device of a vendor by sending a secure counter check to the external computing device;
determine, by an uplink monitor, an operator received data volume from the vendor based on a charging function;
receive a first charging data report (CDR) including the sent data volume from the vendor;
compare the first CDR to the operator received data volume and, when the first CDR is different from the operator received data volume, reject the first CDR;
provide a second CDR having the operator received data volume to the vendor;
receive, from the vendor, a charging data acceptance including a negotiated data volume derived from calculating a constraint-bound charge range between the sent data volume and the operator received data volume, based on a representation of the sent data volume relative to the received data volume that allows reconciliation of the difference therebetween;
construct a publicly verifiable proof of charging based on the charging data acceptance; and
send the publicly verifiable proof of charging to the vendor.
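The CDR reconciliation steps above (reject a mismatched vendor CDR, counter with the operator's figure, negotiate a volume inside the constraint-bound range, and commit to a publicly verifiable proof) can be sketched as below. Everything here is an assumption for illustration: the midpoint policy, the `CDR` dataclass, the function names, and the SHA-256 hash commitment standing in for the patent's unspecified proof construction.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class CDR:
    """Charging data report carrying a data volume (e.g., in bytes)."""
    volume: int

def reconcile(sent: int, received: int) -> int:
    # Constraint-bound charge range: the negotiated volume must lie between
    # the vendor-sent and operator-received figures; the midpoint is used
    # here purely as an illustrative policy.
    low, high = sorted((sent, received))
    return (low + high) // 2

def handle_vendor_cdr(first_cdr: CDR, operator_received: int):
    if first_cdr.volume != operator_received:
        # Reject the vendor's CDR and counter with the operator's volume.
        second_cdr = CDR(volume=operator_received)
        negotiated = reconcile(first_cdr.volume, operator_received)
    else:
        second_cdr, negotiated = first_cdr, first_cdr.volume
    # Toy "publicly verifiable proof": a hash commitment over the acceptance
    # record that any third party holding the record can recompute.
    acceptance = {"sent": first_cdr.volume,
                  "received": operator_received,
                  "negotiated": negotiated}
    proof = hashlib.sha256(
        json.dumps(acceptance, sort_keys=True).encode()).hexdigest()
    return second_cdr, negotiated, proof
```

A verifier given the acceptance record recomputes the same hash to check the proof; a real system would use a signed or zero-knowledge construction instead.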