US Pat. No. 10,063,987

METHOD, APPARATUS, AND COMPUTER-READABLE MEDIA FOR FOCUSSING SOUND SIGNALS IN A SHARED 3D SPACE

Nureva, Inc., Calgary (CA)

1. A method of focusing combined sound signals from a plurality of physical microphones in order to determine a processing gain for each of a plurality of virtual microphone locations in a shared 3D space, comprising:
defining, by at least one processor, a plurality of virtual microphone bubbles in the shared 3D space, each bubble having location coordinates in the shared 3D space, each bubble corresponding to a virtual microphone;
receiving, by the at least one processor, sound signals from the plurality of physical microphones in the shared 3D space;
determining, by the at least one processor, a processing gain at each of the plurality of virtual microphone bubble locations, based on a received combination of sound signals sourced from each virtual microphone bubble location in the shared 3D space;
identifying, by the at least one processor, a sound source in the shared 3D space, based on the determined processing gains, the sound source having coordinates in the shared 3D space;
focusing, by the at least one processor, combined signals from the plurality of physical microphones to the sound source coordinates by adjusting a weight and a delay for signals received from each of the plurality of physical microphones; and
outputting, by the at least one processor, a plurality of streamed signals comprising (i) real-time location coordinates, in the shared 3D space, of the sound source, and (ii) sound source processing gain values associated with each virtual microphone bubble in the shared 3D space.
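
Claim 1 describes, in effect, a delay-and-sum scan over a grid of virtual-microphone "bubbles": each bubble's coordinates fix a delay and a weight per physical microphone, the aligned signals are summed, and the bubble whose summed signal carries the most energy marks the sound source. The Python sketch below illustrates one way such a processing-gain scan could work; the sample rate, the 1/distance weighting, and the coarse bubble grid are illustrative assumptions, not details specified by the claim.

    import numpy as np

    C = 343.0    # speed of sound (m/s); assumed constant
    FS = 16000   # sample rate (Hz); an illustrative choice

    def bubble_gain(mic_pos, mic_sig, bubble):
        """Delay-and-sum the microphone signals toward one virtual-microphone
        bubble and return the energy of the focused sum (a processing-gain
        proxy). mic_pos: (M, 3) metres; mic_sig: (M, N) samples; bubble: (3,)."""
        dists = np.linalg.norm(mic_pos - bubble, axis=1)
        # Steering delays relative to the nearest microphone (non-negative).
        lags = np.round((dists - dists.min()) / C * FS).astype(int)
        # 1/distance amplitude weights, one plausible reading of the claim's
        # "a weight and a delay" per physical microphone.
        w = 1.0 / np.maximum(dists, 1e-3)
        w /= w.sum()
        n = mic_sig.shape[1] - lags.max()
        focused = np.zeros(n)
        for s, lag, wi in zip(mic_sig, lags, w):
            focused += wi * s[lag:lag + n]  # advance late channels into line
        return float(np.mean(focused ** 2))

    # Synthetic check: four mics on a panel, one broadband source at (1, 1, 1).
    rng = np.random.default_rng(0)
    mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0.5, 0.5, 0]], float)
    src = np.array([1.0, 1.0, 1.0])
    wave = rng.standard_normal(FS)  # broadband source avoids tone ambiguity
    sig = np.empty((4, FS))
    for i, m in enumerate(mics):
        d = int(round(np.linalg.norm(m - src) / C * FS))  # true arrival lag
        sig[i] = np.r_[np.zeros(d), wave[:FS - d]] + 0.01 * rng.standard_normal(FS)

    bubbles = [np.array([x, y, z]) for x in (0.5, 1.0, 1.5)
               for y in (0.5, 1.0, 1.5) for z in (0.5, 1.0, 1.5)]
    gains = [bubble_gain(mics, sig, b) for b in bubbles]
    print("loudest bubble:", bubbles[int(np.argmax(gains))])  # ~ [1. 1. 1.]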

US Pat. No. 10,394,358

METHOD, APPARATUS AND COMPUTER-READABLE MEDIA FOR TOUCH AND SPEECH INTERFACE

Nureva, Inc., (CA)

1. Touch and speech input apparatus configured for a user to provide input to (i) a touch sensitive input device and (ii) a speech input device, wherein multiple touch instances create an overlap in touch object time windows enabling more than one touch speech command to be executed from a common speech event, comprising:
at least one memory storing a plurality of words in a global dictionary; and
at least one processor configured to:
receive a first input from the touch sensitive input device with respect to a first touch object;
establish a first touch object time window with respect to the first touch object, wherein the first touch object time window comprises a first pre-touch window and a first post-touch window;
receive a second input from the touch sensitive input device that is within the first touch object time window, with respect to a second touch object;
establish a second touch object time window with respect to the second touch object, wherein the second touch object time window comprises a second pre-touch window and a second post-touch window;
receive an input from the speech input device;
determine whether the received input from the speech input device is present in the global dictionary;
if the received input from the speech input device is present in the global dictionary, determine whether the received input from the speech input device has been received within the established first touch object time window;
if the received input from the speech input device has been received within the established first touch object time window, activate an action corresponding to both (i) the received first input from the touch sensitive input device and (ii) the received input from the speech input device, and make unavailable from a list of recognized words from the global dictionary a command corresponding to the first touch input; and
determine whether the received input from the speech input device has been received within the established second touch object time window, and, if so, activate an action corresponding to both (i) the received second input from the touch sensitive input device and (ii) the received input from the speech input device, and make unavailable from the list of recognized words from the global dictionary a command corresponding to the second touch input.
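
The mechanism at the heart of this claim is a set of possibly overlapping per-touch time windows, each spanning a pre-touch and a post-touch interval, against which a single recognized speech event is matched: every open window the utterance falls inside fires its own action, and the consumed command is then made unavailable from the recognized-word list. A minimal Python sketch of that bookkeeping follows, assuming illustrative window durations and a toy three-word global dictionary.

    import time
    from dataclasses import dataclass

    PRE_TOUCH_S = 2.0   # pre-/post-touch durations are illustrative; the
    POST_TOUCH_S = 5.0  # claim leaves them to the implementation

    GLOBAL_DICTIONARY = {"delete", "duplicate", "group"}  # toy dictionary

    @dataclass
    class Touch:
        obj: str
        at: float
        def window(self):
            # The touch object time window = pre-touch + post-touch span.
            return (self.at - PRE_TOUCH_S, self.at + POST_TOUCH_S)

    class TouchSpeech:
        def __init__(self):
            self.touches = []         # touches with live windows
            self.unavailable = set()  # commands consumed by earlier matches

        def on_touch(self, obj, at=None):
            self.touches.append(Touch(obj, time.time() if at is None else at))

        def on_speech(self, word, at=None):
            at = time.time() if at is None else at
            if word not in GLOBAL_DICTIONARY or word in self.unavailable:
                return []
            # One speech event can sit inside several overlapping touch
            # object time windows; each match activates its own action.
            fired = [(t.obj, word) for t in self.touches
                     if t.window()[0] <= at <= t.window()[1]]
            if fired:
                # Make the command unavailable from the recognized-word list
                # so the same utterance is not re-executed later.
                self.unavailable.add(word)
            return fired

    ui = TouchSpeech()
    ui.on_touch("note-A", at=10.0)
    ui.on_touch("note-B", at=11.0)            # overlaps note-A's window
    print(ui.on_speech("delete", at=12.0))
    # -> [('note-A', 'delete'), ('note-B', 'delete')]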

US Pat. No. 10,338,713

METHOD, APPARATUS AND COMPUTER-READABLE MEDIA FOR TOUCH AND SPEECH INTERFACE WITH AUDIO LOCATION

Nureva, Inc., (CA)

1. Touch and speech input with audio location apparatus configured for one or more users to provide input to (i) a touch sensitive input device and (ii) a speech input device in a shared physical space, comprising:
at least one memory storing a plurality of words in a global dictionary; and
at least one processor configured to:
receive an input from the touch sensitive input device in the shared physical space;
establish a touch time window with respect to the received input from the touch sensitive input device;
receive an input from the speech input device in the shared physical space;
determine whether the received input from the speech input device is present in the global dictionary;
determine a position location of a sound source in the shared physical space from the received input from the speech input device;
determine whether the received input from the touch sensitive input device and the position location of the received input from the speech input device are both within a same region of the touch sensitive input device in the shared physical space;
if the received input from the speech input device is present in the global dictionary, determine whether the received input from the speech input device has been received within the established touch time window; and
if the received input from the speech input device has been received within the established touch time window, and the received input from the touch sensitive input device and the received input from the speech input device are both within a same region of the touch sensitive input device in the shared physical space, activate an action corresponding to both (i) the received input from the touch sensitive input device and (ii) the received input from the speech input device.
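
Compared with US 10,394,358, this claim adds a spatial gate: the speech event must both land inside the touch time window and originate from the same region of the touch surface as the touch itself, with the talker's position supplied by a sound-source localizer. The sketch below shows the gating logic under stated assumptions; the three-region split of a 4 m display, the window durations, and the single-axis positions are all illustrative.

    GLOBAL_DICTIONARY = {"delete", "move", "color"}   # toy dictionary
    PRE_TOUCH_S, POST_TOUCH_S = 2.0, 5.0              # illustrative durations

    def region_of(x_m):
        """Map an x-coordinate (metres) on an assumed 4 m wide wall display
        to one of three user regions."""
        if x_m < 4.0 / 3:
            return "left"
        if x_m < 8.0 / 3:
            return "center"
        return "right"

    def maybe_activate(touch, speech):
        """touch  = {'at': s, 'x': m, 'obj': str}
        speech = {'at': s, 'word': str, 'src_x': m}; 'src_x' is the talker
        position a microphone-array localizer would report."""
        if speech["word"] not in GLOBAL_DICTIONARY:
            return None            # not a recognized command
        in_window = (touch["at"] - PRE_TOUCH_S
                     <= speech["at"]
                     <= touch["at"] + POST_TOUCH_S)
        same_region = region_of(touch["x"]) == region_of(speech["src_x"])
        # Both gates must pass: inside the time window AND same region.
        if in_window and same_region:
            return (touch["obj"], speech["word"])
        return None

    # A talker at x=0.8 m speaks just after touching a left-region object:
    print(maybe_activate({"at": 10.0, "x": 0.5, "obj": "note-A"},
                         {"at": 11.0, "word": "delete", "src_x": 0.8}))
    # -> ('note-A', 'delete'); a right-side talker would return None.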

US Pat. No. 10,397,726

METHOD, APPARATUS, AND COMPUTER-READABLE MEDIA FOR FOCUSING SOUND SIGNALS IN A SHARED 3D SPACE

Nureva, Inc., (CA)

1. A method of real-time, low-latency sound source location targeting in the presence of reverb and ambient noise signals in a shared three-dimensional space, comprising:
predefining, in the shared three-dimensional space, a three-dimensional coordinate grid of a plurality of virtual-microphone locations, each of which is related to a plurality of physical microphones in the shared three-dimensional space, so as to define, for each virtual-microphone location, delay and weight factors with respect to each related physical microphone in the shared three-dimensional space;
providing at least one processor core for each physical microphone, each core calculating in parallel, for its corresponding physical microphone with respect to each virtual microphone location, a sound source location by:
fetching from memory the delay factor for each virtual microphone location with respect to the corresponding physical microphone;
fetching from memory the weight factors for each virtual microphone location with respect to the corresponding physical microphone;
fetching from memory at least one sound source signal from the corresponding physical microphone in the shared three-dimensional space;
using at least one delay line to process the fetched at least one sound source signal from the corresponding physical microphone using the fetched delay factor to produce a delayed sound source signal for each virtual microphone location; and
multiplying the delayed sound source signal by the fetched weight factor for each virtual microphone to produce a delayed and weighted sound source signal for each virtual microphone for the corresponding physical microphone;
summing the delayed and weighted sound source signals from all of the processor cores to provide a summed total signal corresponding to each virtual microphone location;
measuring the energy of the summed total signal for each virtual microphone location;
determining, from the measured energy of each summed signal, a three-dimensional grid coordinate location for each sound source with respect to each virtual microphone location in the shared three-dimensional space; and
outputting, in real-time, the determined three-dimensional grid location coordinates and signal strengths of all of the sound sources in the shared three-dimensional space.
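
This claim casts the bubble scan of US 10,063,987 as a fixed memory-driven pipeline: delay and weight factors are precomputed per (virtual-microphone location, physical microphone) pair and fetched at run time, one processor core serves each physical microphone, and the per-bubble energy of the summed signals yields the source coordinates. The sketch below mirrors that structure, with the outer per-microphone loop standing in for the claim's parallel cores; the sample rate, grid, and 1/distance weights are again illustrative choices.

    import numpy as np

    C, FS = 343.0, 16000   # speed of sound (m/s) and an assumed sample rate

    def precompute_tables(mics, grid):
        """Predefine, per (virtual-mic location, physical mic) pair, the
        delay and weight factors that are fetched from memory at run time."""
        d = np.linalg.norm(grid[:, None, :] - mics[None, :, :], axis=2)  # (V, M)
        delays = np.round((d - d.min(axis=1, keepdims=True)) / C * FS).astype(int)
        weights = 1.0 / np.maximum(d, 1e-3)      # one plausible weighting
        weights /= weights.sum(axis=1, keepdims=True)
        return delays, weights

    def locate(mics, grid, sig):
        delays, weights = precompute_tables(mics, grid)
        V, M = delays.shape
        n = sig.shape[1] - delays.max()
        summed = np.zeros((V, n))
        for m in range(M):           # each m maps to one processor core in
            for v in range(V):       # the claim's per-microphone layout
                lag = delays[v, m]                     # fetched delay factor
                summed[v] += weights[v, m] * sig[m, lag:lag + n]  # delay line
        energy = np.mean(summed ** 2, axis=1)          # per-bubble energy
        return grid[int(np.argmax(energy))], energy

    rng = np.random.default_rng(1)
    mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 2]], float)
    grid = np.array([[x, y, 1.0] for x in np.linspace(0, 2, 5)
                     for y in np.linspace(0, 2, 5)])
    src, wave = np.array([1.0, 0.5, 1.0]), rng.standard_normal(FS)
    sig = np.zeros((4, FS))
    for i, m in enumerate(mics):
        lag = int(round(np.linalg.norm(m - src) / C * FS))
        sig[i, lag:] = wave[:FS - lag]
    print("strongest grid location:", locate(mics, grid, sig)[0])  # ~ [1. 0.5 1.]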

US Pat. No. 10,387,108

METHOD, APPARATUS AND COMPUTER-READABLE MEDIA UTILIZING POSITIONAL INFORMATION TO DERIVE AGC OUTPUT PARAMETERS

Nureva, Inc., (CA)

1. A method of automatic gain control utilizing sound source position information in a shared space having a plurality of microphones and a plurality of sound sources, comprising:
receiving combined sound signals from the plurality of microphones;
locating, using one or more processors, position information corresponding to each of the plurality of sound sources in the shared space;
determining, using the one or more processors, the distance to each of the plurality of sound sources from each of the plurality of microphones in the shared space, based on the position information;
defining, using the one or more processors, a predetermined gain weight adjustment for each of the plurality of microphones, based on the distance information;
applying the defined plurality of gain weight adjustments to the plurality of microphones in order to control the gain of a desired plurality of sound sources in the shared space;
maintaining, using the one or more processors, a substantially constant ambient sound level regardless of the position of the plurality of sound sources and the applied gain weight adjustments to the plurality of microphones, based on the received signals from the plurality of microphones; and
outputting, using the one or more processors, a summed signal of the plurality of sound sources at a derived gain with a constant ambient sound level across the plurality of sound source positions in the shared space.
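
The claim ties per-microphone gain weights to each located source's distance while requiring the ambient floor to stay constant as those gains move. The sketch below is one plausible reading, not the patented algorithm: a 1/r-compensating gain capped at a maximum, inverse-distance mixing weights, and an ambient estimate taken from a low percentile of short-term frame RMS that is renormalized to a target level. All constants and the ambient estimator are assumptions.

    import numpy as np

    REF_DIST_M = 1.0     # distance at which gain is unity (assumed)
    MAX_GAIN_DB = 12.0   # cap so far talkers don't pull up noise (assumed)
    FRAME = 160          # 10 ms frames at 16 kHz for the ambient estimate

    def distance_gain(dist_m):
        """Free-field level drops ~6 dB per doubling of distance, so a
        1/r-compensating gain holds a talker's output level roughly flat."""
        gain_db = 20.0 * np.log10(max(dist_m, 0.1) / REF_DIST_M)
        return 10.0 ** (min(gain_db, MAX_GAIN_DB) / 20.0)

    def agc_mix(mic_sig, mic_pos, source_positions, ambient_rms_target):
        """mic_sig: (M, N) samples; mic_pos: (M, 3) metres; source_positions:
        list of located (3,) source coordinates."""
        out = np.zeros(mic_sig.shape[1])
        for src in source_positions:
            dists = np.linalg.norm(mic_pos - src, axis=1)
            # Inverse-distance weights favor the nearest microphones, scaled
            # by the distance-derived gain for the closest one.
            w = dists.min() / dists
            w *= distance_gain(float(dists.min())) / w.sum()
            out += w @ mic_sig
        # Hold the ambient floor constant: estimate it from a low percentile
        # of short-term RMS, then renormalize to the target level.
        frames = out[: len(out) // FRAME * FRAME].reshape(-1, FRAME)
        rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
        ambient = max(float(np.percentile(rms, 10)), 1e-9)
        return out * (ambient_rms_target / ambient)

    rng = np.random.default_rng(2)
    mics = np.array([[0.0, 0, 0], [2.0, 0, 0], [4.0, 0, 0]])
    sig = 0.01 * rng.standard_normal((3, 16000))      # ambient noise only
    mixed = agc_mix(sig, mics, [np.array([1.5, 1.0, 0.0])], ambient_rms_target=0.01)
    print("output ambient RMS ~", round(float(np.sqrt(np.mean(mixed ** 2))), 4))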