US Pat. No. 9,129,381

MODIFICATION OF POST-VIEWING PARAMETERS FOR DIGITAL IMAGES USING IMAGE REGION OR FEATURE INFORMATION

FotoNation Limited, Ball...

1. A method of automatically generating one or more new images using an original image, comprising:
acquiring one or more reference images that are temporally or spatially related to the original image;
determining sums of high-order DCT coefficients of one or more regions of the one or more reference images;
determining sums of high-order DCT coefficients of one or more regions in the original image;
performing comparisons of the sums of high-order DCT coefficients of one or more regions in the original image with the sums
of high-order DCT coefficients of one or more regions of the one or more reference images;

based, at least in part, on the performed comparisons, automatically detecting and selecting, within a digital image acquisition
device, one or more groups of pixels that correspond to a background region or a foreground region, or both within the original
image; and

automatically generating values of pixels of one or more new images based on the selected one or more groups of pixels from
the original image to cause the background region or foreground region, or both to be present in the one or more new images.
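A minimal sketch of the comparison at the heart of this claim, assuming 8×8 DCT blocks and treating everything outside the low-frequency corner as "high-order"; the block size, cutoff, and decision ratio are illustrative assumptions, not taken from the patent:

```python
# Sketch: classify a region as foreground/background by comparing its
# high-order DCT energy in the original image against the same region in a
# reference image (e.g. a preview frame focused elsewhere). Block size,
# cutoff and ratio are illustrative assumptions.
import numpy as np
from scipy.fft import dctn

def high_order_dct_sum(region, block=8, cutoff=4):
    """Sum of |high-order| DCT coefficients over all full 8x8 blocks."""
    h, w = region.shape
    total = 0.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(region[y:y + block, x:x + block], norm='ortho')
            mask = np.ones((block, block), dtype=bool)
            mask[:cutoff, :cutoff] = False          # drop low-order corner
            total += np.abs(coeffs[mask]).sum()
    return total

def is_foreground(orig_region, ref_region, ratio=1.5):
    # A region markedly sharper in the original than in the reference is
    # taken as foreground; otherwise background.
    return high_order_dct_sum(orig_region) > ratio * high_order_dct_sum(ref_region)
```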

US Pat. No. 9,242,602

REARVIEW IMAGING SYSTEMS FOR VEHICLE

Fotonation Limited, Ball...

1. A rearview imaging system for a vehicle, comprising:
at least one video camera mounted on the vehicle for continuously providing a video signal of images captured with a wide
angle horizontal field of view (FoV), and pointing in a rearward direction of the vehicle,

a display device configured to generate one or more real-time displays, viewable by a driver, of the images captured by the
at least one video camera, and

a video processor configured to process the video signal, in response to detecting that the vehicle is reversing and turning
in a turn direction, by performing:

automatically digitally adjusting the video signal of images that is captured with the FoV, to cause images encoded in the
video signal, which are displayed on at least one of the one or more real-time displays, to rotate horizontally in the turn
direction.

US Pat. No. 9,456,128

PORTRAIT IMAGE SYNTHESIS FROM MULTIPLE IMAGES CAPTURED ON A HANDHELD DEVICE

Fotonation Limited, Ball...

1. A hand-held digital image capture device, comprising:
a lens and an image sensor for capturing digital images;
a display unit configured to display the digital images on a display screen;
a digital processor operatively coupled to the image sensor and the display unit, and configured to:
upon detecting a particular object in a field of view in front of a user of the device:
upon receiving a user input, automatically capture a plurality of digital images of the particular object only upon detecting
that the hand-held image capture device is repositioned along a concave path with respect to an optical axis of the hand-held
digital image capture device pointing toward the particular object;

wherein the concave path has an approximately constant radius centered at the particular object; and
store the plurality of digital images in a storage device.

US Pat. No. 9,462,180

DETECTING FACIAL EXPRESSIONS IN DIGITAL IMAGES

FotoNation Limited, Galw...

1. A method of in-camera processing to create a photographic record of a captured image of a group of faces by increasing
smiling faces in the image, the method comprising:
acquiring the captured image with a camera having an image sensor for acquiring digital images;
providing one or more digitally-acquired first images of the group of faces in addition to the captured image;
from among the group of faces, identifying a number of non-smiling faces in one of the images;
based, at least in part, on the number of non-smiling faces, initiating a plurality of operations including, when a predetermined
number of non-smiling faces are present in said one of the images:

(i) delaying acquisition and recording of the captured image to allow time for reducing non-smiling faces in the captured
image relative to said number of non-smiling faces in said one of the images; and

(ii) replacing a face region in the captured image by combining a face region in at least one of the one or more digitally-acquired
images into the captured image; and

storing the captured image with the replaced face region as digital content in a mass storage device.

US Pat. No. 9,280,810

METHOD AND SYSTEM FOR CORRECTING A DISTORTED INPUT IMAGE

FotoNation Limited, Ball...

1. An image acquisition system comprising:
a first memory for storing at least a portion of a distorted input image acquired from an image sensor and a lens system;
a second memory for storing a corrected output image; and
an interpolator module connected to said first memory for reading distorted input image information and to said second memory
for writing corrected output image information, said interpolator comprising: a bi-cubic interpolator; and a pair of bi-linear
interpolators and being switchable between a first high quality mode and a second high speed mode where,

in said first high quality mode, for each pixel for said output image, said interpolator is arranged to read a 4×4 pixel window
from said first memory and with said bi-cubic interpolator to interpolate said 4×4 pixel window to provide said output pixel
value, and

in said second high speed mode, for each pair of adjacent output pixels for said corrected output image, said interpolator is arranged to read a 4×4 pixel window from said first memory, said 4×4 pixel window bounding a pair of 2×2 pixel windows, each of which is interpolated in parallel by said pair of bi-linear interpolators to provide said output pixel values.
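A Python sketch of the high-speed mode only (the hardware runs the two bi-linear units in parallel): one 4×4 read bounds two 2×2 windows, each interpolated to yield one of a pair of adjacent output pixels. The grayscale numpy input, the shared fractional offset, and the placement of the two windows inside the 4×4 read are illustrative assumptions:

```python
import numpy as np

def bilinear_2x2(win2, fx, fy):
    """Bilinear interpolation inside a 2x2 window at fraction (fx, fy)."""
    top = win2[0, 0] * (1 - fx) + win2[0, 1] * fx
    bot = win2[1, 0] * (1 - fx) + win2[1, 1] * fx
    return top * (1 - fy) + bot * fy

def high_speed_pair(img, x0, y0, fx, fy):
    win4 = img[y0:y0 + 4, x0:x0 + 4].astype(float)   # single 4x4 fetch
    left = bilinear_2x2(win4[1:3, 0:2], fx, fy)      # first 2x2 window
    right = bilinear_2x2(win4[1:3, 2:4], fx, fy)     # second 2x2 window
    return left, right
```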

US Pat. No. 9,094,648

TONE MAPPING FOR LOW-LIGHT VIDEO FRAME ENHANCEMENT

FotoNation Limited, Ball...

1. An image acquisition and processing device, comprising:
an image acquisition unit configured to:
acquire a plurality of sharp images of a scene; and
acquire a blurred image nominally of the same scene;
an image combining unit configured to:
generate a first combined image based, at least in part, on a first sharp image, from the plurality of sharp images, and color
information of the blurred image; and

generate a second combined image based, at least in part, on a second sharp image, from the plurality of sharp images, and
the first combined image;

wherein the first sharp image is different than the second sharp image; and
an image display unit configured to:
display the second combined image.
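One plausible reading of combining a sharp frame with the color information of a blurred (longer-exposed, better-saturated) frame is a luma/chroma swap; the sketch below assumes OpenCV, equal-sized BGR inputs, and YCrCb as the working space, none of which the claim specifies:

```python
import cv2

def combine_sharp_with_blurred_color(sharp_bgr, blurred_bgr):
    sharp_ycc = cv2.cvtColor(sharp_bgr, cv2.COLOR_BGR2YCrCb)
    blurred_ycc = cv2.cvtColor(blurred_bgr, cv2.COLOR_BGR2YCrCb)
    combined = blurred_ycc.copy()
    combined[..., 0] = sharp_ycc[..., 0]       # luma from the sharp frame
    return cv2.cvtColor(combined, cv2.COLOR_YCrCb2BGR)
```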

US Pat. No. 9,053,524

EYE BEAUTIFICATION UNDER INACCURATE LOCALIZATION

FOTONATION LIMITED, Ball...

1. A method of enhancing an appearance of a face within a digital image, comprising using a processor in:
acquiring the digital image including the face, wherein the step of acquiring comprises:
capturing the digital image using a lens and an image sensor of a processing device that includes the processor, or receiving
said digital image following capture by another device that includes a certain lens and a certain image sensor, or a combination
thereof;

identifying one or two groups of pixels that each include a pupil region of an eye region within the face in the digital image;
identifying a border between said pupil region and an iris of said eye region using luminance information;
adjusting the digital image by adding one or more glint pixels at a pupil side of the border between said iris and said pupil
to generate an enhanced image; and

displaying, transmitting, communicating or digitally storing or otherwise outputting the enhanced image or a further processed
version, or combinations thereof.

US Pat. No. 9,398,209

FACE TRACKING FOR CONTROLLING IMAGING PARAMETERS

Fotonation Limited, Ball...

1. A face detection and recognition method, comprising:
receiving a first preview image of a scene and a second preview image of nominally the same scene;
determining whether the first preview image comprises one or more face regions;
in response to determining that the first preview image comprises one or more face regions:
determining a first location of a first face region in the first preview image;
based at least in part on the first location of the first face region in the first preview image, predicting a second location
in the second preview image of a second face region corresponding to the first face region in the first preview image;

based at least in part on pixel information of the second face region in the second preview image, determining one or more
characteristics of the second face region in the second preview image;

based at least in part on the one or more characteristics of the second face region in the second preview image, determining
one or more acquisition parameters for acquiring a main image of nominally the same scene; and

acquiring the main image of nominally the same scene using the one or more acquisition parameters;
wherein the method is performed using one or more computing devices.

US Pat. No. 9,307,212

TONE MAPPING FOR LOW-LIGHT VIDEO FRAME ENHANCEMENT

Fotonation Limited, Ball...

1. An image processing method, comprising:
acquiring a first well-exposed image frame of a scene;
acquiring additional image frames of the same scene;
wherein the additional image frames include a second image frame and a third image frame;
generating one or more combined images based on the well-exposed image frame of the scene and at least one of the additional
image frames;

determining a frame-to-frame motion between the second image frame and the third image frame; and
responsive to said frame-to-frame motion exceeding a threshold, acquiring a second well-exposed image frame of the scene.

US Pat. No. 9,053,681

REAL-TIME VIDEO FRAME PRE-PROCESSING HARDWARE

FotoNation Limited, Ball...

16. A method, comprising:
processing, in a dynamically reconfigurable heterogeneous systolic array, a first image frame to generate a plurality of image
processing primitives from the first image frame;

wherein the plurality of image processing primitives comprises at least one image map;
storing the plurality of image processing primitives and the first image frame in a memory store;
based, at least in part, on the plurality of image processing primitives, determining one or more characteristics of the first
image frame;

based on the one or more characteristics determined for the first image frame, reconfiguring the dynamically reconfigurable
heterogeneous systolic array; and

processing, using the reconfigured dynamically reconfigurable heterogeneous systolic array, a second image frame based on
the one or more characteristics;

wherein the method is performed using one or more computing devices.

US Pat. No. 10,148,943

IMAGE ACQUISITION DEVICE AND METHOD BASED ON A SHARPNESS MEASURE AND AN IMAGE ACQUISITION PARAMETER

FotoNation Limited, Galw...

1. A method for acquiring an image in a digital image acquisition device or camera module of the type which includes processing circuitry which automatically provides digital image corrections, comprising:
acquiring a first image frame including a region containing a subject at a first focus position;
determining a first sharpness of said subject within said first image frame;
identifying an imaged subject size within the first image frame;
determining a second focus position based on said imaged subject size;
acquiring a second image frame at said second focus position;
determining a second sharpness of said subject within said second image frame;
determining a sharpness threshold as a function of image acquisition parameters for the first and/or second image frame; and
responsive to a condition in which said second sharpness does not exceed said first sharpness and does not exceed said sharpness threshold: determining values of camera motion parameters and/or subject motion parameters for said second image frame before, if at all, performing a focus sweep to determine an optimal focus position for said subject.
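The claim leaves the sharpness measure open; variance of the Laplacian is a common stand-in, sketched here together with the claimed gating condition. OpenCV and the choice of metric are illustrative assumptions:

```python
import cv2

def sharpness(gray_region):
    """Higher values indicate a sharper (better-focused) subject region."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def defer_focus_sweep(s1, s2, threshold):
    # Mirrors the claimed condition: only when the second sharpness exceeds
    # neither the first sharpness nor the threshold are motion parameters
    # evaluated before any focus sweep is performed.
    return s2 <= s1 and s2 <= threshold
```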

US Pat. No. 9,736,370

DIGITAL IMAGE CAPTURE DEVICE HAVING A PANORAMA MODE

FotoNation Limited, Galw...

1. A hand-held digital image capture device comprising:
a touch-sensitive display screen (“touch screen”) for an image preview;
one or more user controls for controlling the device, and for setting a user-selectable panorama mode on the device,
a processor which executes one or more code instructions to cause:
upon setting the panorama mode on the device, displaying, on the touch screen, a bar having a width and/or a height,
wherein the width and/or the height are user-adjustable by interaction with the touch screen to select a desired horizontal
sweep angle,

selecting a sweep angle by adjusting the width and/or the height of the bar according to a user interacting with the touch
screen, and

after selecting the sweep angle:
automatically capturing a plurality of successive overlapping images during a sweep of the device along the selected sweep
angle, and

generating a panoramic image from the plurality of successive overlapping images by synthesizing the plurality of successive
overlapping images, the panoramic image having a panorama width corresponding to the selected sweep angle.

US Pat. No. 9,239,957

IMAGE PROCESSING METHOD AND APPARATUS

FotoNation Limited, Ball...

1. An image processing device comprising:
an image sensor for acquiring an image depicting one or more face regions; and
a processing module configured to:
select, from said one or more face regions, a particular face region that includes one or more eye-iris regions;
analyze the one or more eye-iris regions to select a particular eye-iris region that comprises a particular eye-iris pattern,
which can be used to biometrically identify a person depicted in said image;

submit one or more eye-iris patterns, from the one or more eye-iris regions, to a biometric authentication unit (BAU);
for the particular eye-iris pattern, from the one or more eye-iris patterns:
receive, from the BAU, a BAU generated eye-iris code confirming that the particular eye-iris pattern is of a quality insufficient
to biometrically identify an individual;

adjust a contrast of the particular eye-iris pattern; and
in response to adjusting the contrast of the particular eye-iris pattern, re-submit the adjusted particular eye-iris pattern
to the BAU for confirming that the adjusted particular eye-iris pattern is of a quality sufficient to biometrically identify
the individual;

generate a substitute eye-iris region comprising a substitute eye-iris pattern that is distinct from the particular eye-iris
pattern included in the particular eye-iris region;

wherein the substitute eye-iris pattern cannot be used to biometrically identify said person;
replace, in the particular face region, the particular eye-iris region with the substitute eye-iris region; and
store, in a storage device, said image, which includes the substitute eye-iris region in the particular face region.

US Pat. No. 9,940,519

IMAGE PROCESSING METHOD AND SYSTEM FOR IRIS RECOGNITION

FotoNation Limited, Galw...

1. An image processing method for iris recognition of a predetermined subject, comprising:
a) acquiring through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein said probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated;
b) identifying one or more iris regions within said one or more eye regions of said probe image; and
c) analyzing the one or more identified iris regions to detect whether they belong to the predetermined subject.

US Pat. No. 9,137,425

METHODS AND APPARATUSES FOR USING IMAGE ACQUISITION DATA TO DETECT AND CORRECT IMAGE DEFECTS

FotoNation Limited, Ball...

1. One or more non-transitory processor-readable media having embedded code therein for programming a processor to perform
a method of detecting a potential defect in an image, the method comprising:
storing a first value indicative of a position of a source of light relative to a lens;
wherein the first value indicates whether the source of light is:
to the right of the lens,
to the left of the lens,
to the bottom of the lens,
to the top of the lens,
to the top right of the lens,
to the top left of the lens,
to the bottom left of the lens, or
to the bottom right of the lens;
determining a second value indicative of a distance between an image capture apparatus and a subject;
determining a third value indicative of a lens focal length;
determining an in-image pupil size based, at least in part, on the first value, the second value, and the third value;
using the first value to identify an expected orientation for a half-red eye defect; and
identifying defect candidates in the image at least in part based on the expected orientation and the in-image pupil size.

US Pat. No. 9,412,007

PARTIAL FACE DETECTOR RED-EYE FILTER METHOD AND APPARATUS

FOTONATION LIMITED, Galw...

1. A device, comprising:
an integral flash for providing illumination during image acquisition;
a digital image capturing apparatus for capturing a digital image, wherein
the device is configured to:
use a red-eye filter configured to determine a grouping of image pixels, in the digital image, containing image pixels having
a color indicative of a red-eye phenomenon and a shape indicative of the red-eye phenomenon;

determine whether other image pixels, of the digital image, that are located around the grouping of image pixels have the
color indicative of the red-eye phenomenon;

determine whether a substantially white area of pixels is found in vicinity of the grouping of image pixels; and
modify the color of the image pixels in the grouping in response to a set of conditions being satisfied, wherein the set of
conditions includes both:

that the other image pixels do not have the color indicative of the red-eye phenomenon; and
that a substantially white area of pixels is found in the vicinity of the grouping of image pixels.
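A sketch of the two claimed gating conditions for a candidate grouping, assuming an RGB numpy image plus boolean masks for the grouping's surround; the color tests and thresholds are illustrative assumptions, not the patent's filter:

```python
import numpy as np

def is_reddish(img):
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    return (r > 80) & (r > 1.5 * g) & (r > 1.5 * b)

def confirm_red_eye(img, surround_mask, white_thresh=200, red_frac=0.1):
    """True only if the surround is NOT red and a near-white area exists."""
    surround_is_red = is_reddish(img)[surround_mask].mean() >= red_frac
    near_white = img.min(axis=-1) > white_thresh        # sclera candidates
    white_nearby = near_white[surround_mask].any()
    return (not surround_is_red) and white_nearby
```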

US Pat. No. 9,224,034

FACE SEARCHING AND DETECTION IN A DIGITAL IMAGE ACQUISITION DEVICE

FotoNation Limited, Ball...

1. A device, comprising:
an image acquiring unit communicatively coupled to a memory unit, and configured to acquire data of a digital image that depicts
one or more objects; and

a face detection unit configured to:
perform face detection within a first relatively large window of said image at a first location;
obtain from said face detection a confidence level that is below a threshold indicating a probability that no face is present
at or in a vicinity of said first location;

perform second face detection within a second relatively large window of said image at a second location displaced from the
first location by a first x-amount in a first direction and/or by a first y-amount in an orthogonal second direction;

obtain from said second face detection a confidence level that is at or above said threshold indicating a probability that
the image at least may include a face in the vicinity of the second location;

apply a sequence of windows, including relatively smaller windows than said second relatively large window, at further locations
in the vicinity of said second location, including a third location that is displaced from the second location by a second
x-amount that is smaller than the first x-amount in the first direction and/or by a second y-amount that is smaller than the
first y-amount in the orthogonal second direction wherein a magnitude of displacement by respective values of x-amount in
a first direction and/or by respective values of y-amount in an orthogonal second direction is variable between successive
windows of said sequence of windows within said image and wherein said x-amount and/or said y-amount decreases between successive
windows when a confidence level indicating that the image at least may include a face increases across one or more further
thresholds; and

repeat the obtaining of confidence levels until face detection for multiple predetermined sizes of windows has been performed
over an entire region of interest of said image.
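A sketch of the variable-stride scan the claim describes: the displacement between successive windows shrinks as the detector's confidence crosses further thresholds, turning a sparse probe into a dense local search near likely faces. The stub detector, strides, and thresholds are illustrative assumptions:

```python
def scan(width, height, win, confidence, thresholds=(0.3, 0.6),
         strides=(32, 16, 4)):
    hits = []
    y = 0
    while y + win <= height:
        x = 0
        while x + win <= width:
            c = confidence(x, y, win)
            level = sum(c >= t for t in thresholds)  # thresholds crossed
            if level == len(thresholds):
                hits.append((x, y, win, c))
            x += strides[level]                      # smaller step when likely
        y += strides[0]  # a full detector would adapt the vertical step too
    return hits

# Usage with a stand-in confidence function peaking near (200, 120):
stub = lambda x, y, w: max(0.0, 1.0 - (abs(x - 200) + abs(y - 120)) / 300)
candidates = scan(640, 480, win=64, confidence=stub)
```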

US Pat. No. 10,032,068

METHOD OF MAKING A DIGITAL CAMERA IMAGE OF A FIRST SCENE WITH A SUPERIMPOSED SECOND SCENE

FotoNation Limited, Galw...

1. A digital image acquisition device including an optical system having first and second lens systems for acquiring digital images, and one or more processor-readable media having embodied therein processor-readable code for programming one or more processors to perform the method comprising the following, not necessarily in the order stated:superimposing on a representation of a first scene a subject locator symbol, the representation of the first scene acquired through the first lens system of the digital image acquisition device;
after superimposing the subject locator symbol, acquiring a representation of a second scene through a second lens system of the digital image acquisition device;
scaling at least part of the representation of the second scene to substantially the same size as the subject locator symbol; and
inserting the scaled part of the representation of the second scene into the representation of the first scene at the position of the subject locator symbol to form a composite image,
wherein the subject locator symbol is directly scalable and positionable relative to the representation of the first scene in response to user input provided to the subject locator symbol, prior to acquiring the representation of the second scene.

US Pat. No. 9,767,539

IMAGE CAPTURE DEVICE WITH CONTEMPORANEOUS IMAGE CORRECTION MECHANISM

FotoNation Limited, Galw...

1. A computer-implemented method comprising:
receiving a plurality of images of a same scene captured by a camera;
wherein the plurality of images comprises a particular image of the same scene, a preview image of the same scene and a postview
image of the same scene;

wherein the preview image, from the plurality of images, is an image acquired by the camera just shortly before the particular
image was acquired by the camera;

wherein the postview image, from the plurality of images, is an image acquired by the camera just after the particular image
was acquired by the camera;

wherein each of the preview image and the postview image has a resolution different than a resolution of the particular image;
wherein each of the preview image and the postview image depicts the same objects and subjects as those depicted in the particular
image of the plurality of images;

determining whether the particular image, of the plurality of images, comprises a particular region that includes a defective
depiction of one or more eyes;

in response to determining that the particular image comprises the particular region that includes the defective depiction
of the one or more eyes:

selecting either the preview image or the postview image that comprises a certain region;
wherein:
the certain region includes a non-defective depiction of the one or more eyes; and
the certain region corresponds to the particular region in the particular image; and
generating a combination image based, at least in part, on the particular image and the certain region by replacing the particular
region in the particular image with the certain region;

wherein the method is performed by one or more processors of a computing device configured as an optical system.

US Pat. No. 9,160,897

FAST MOTION ESTIMATION METHOD

FotoNation Limited, Ball...

1. A digital image stabilization method, comprising:
acquiring a sequence of temporally proximate image frames;
computing an estimated total camera motion between the image frames;
determining a desired component of the estimated total camera motion including distinguishing an undesired component of the
estimated total camera motion, and including characterizing vector values of motion between the image frames, including:

determining at least first and second thresholds adapted to said image frames to generate a sign value range {−1, 0, +1},
including calculating said thresholds as percentages of a dynamic luminance range of the sequence;

incrementing (+1) a counter for each pixel group having a summed luminance that is greater than a first threshold;
decrementing (−1) said counter for each pixel group having a summed luminance that is less than a second threshold, wherein
the counter is unchanged (0), neither incremented nor decremented, for each pixel group having a summed luminance between
the first and second thresholds;

determining final values of counts for the image frames by summing along rows and columns; and
representing the vector values of the desired camera motion based on said final values of counts for the image frames;
(d) generating a corrected image sequence including the desired component of the estimated total camera motion, and excluding
the undesired component; and

(e) rendering, storing, displaying, transmitting or transferring the corrected image sequence or a further processed version,
or combinations thereof.
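A sketch of the sign-profile step: summed luminance per pixel group is ternarized against two thresholds taken as percentages of the groups' dynamic range, then the signs are summed along rows and columns; shifting one frame's profile against the next frame's estimates a motion component. The group size, percentages, and search range are illustrative assumptions:

```python
import numpy as np

def sign_profiles(frame, lo_pct=0.4, hi_pct=0.6, group=4):
    h, w = frame.shape
    g = frame[:h - h % group, :w - w % group].astype(float).reshape(
        h // group, group, w // group, group).sum(axis=(1, 3))
    lo = g.min() + lo_pct * (g.max() - g.min())    # second threshold
    hi = g.min() + hi_pct * (g.max() - g.min())    # first threshold
    signs = np.where(g > hi, 1, np.where(g < lo, -1, 0))
    return signs.sum(axis=1), signs.sum(axis=0)    # row / column counts

def shift_between(p1, p2, max_shift=8):
    """Offset that best aligns two 1-D count profiles (one motion axis)."""
    core = slice(max_shift, -max_shift)
    errs = [np.abs(np.roll(p2, s)[core] - p1[core]).sum()
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift
```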

US Pat. No. 9,607,585

REAL-TIME VIDEO FRAME PRE-PROCESSING HARDWARE

FotoNation Limited, Ball...

8. An image processing method comprising:
receiving frame data of a first image frame;
generating, based on the frame data of the first image frame, an integral image primitive;
wherein the integral image primitive is generated by identifying a sub-region of the first image frame and generating the
integral image primitive from data of the sub-region by computing for a pixel of the integral image primitive a sum of luminance
values of all pixels located above and to the left of the pixel in the sub-region of the first image frame;

storing the integral image primitive and the first image frame in a memory store;
based, at least in part, on the integral image primitive, determining at least one characteristic;
based, at least in part, on the at least one characteristic, reconfiguring a dynamically reconfigurable heterogeneous systolic
array to process, based at least in part on the integral image primitive, a second image frame;

wherein the method is performed using one or more computing devices.
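A sketch of the integral-image primitive as the claim defines it (each pixel holds the sum of luminance values above and to the left of it, inclusive), plus the constant-time box sum that makes such a primitive useful to downstream detectors; numpy and a grayscale sub-region are assumed:

```python
import numpy as np

def integral_image(sub_region):
    return sub_region.astype(np.uint64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, x0, y0, x1, y1):
    """Sum of any axis-aligned box in O(1) via four integral-image taps."""
    total = ii[y1, x1]
    if x0 > 0: total -= ii[y1, x0 - 1]
    if y0 > 0: total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0: total += ii[y0 - 1, x0 - 1]
    return total
```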

US Pat. No. 9,998,675

REARVIEW IMAGING SYSTEM FOR VEHICLE

FotoNation Limited, Galw...

1. A rearview imaging system for a vehicle, comprising:
at least one video camera mounted on the vehicle for providing a wide angle horizontal field of view of a scene rearward of the vehicle,
a display device in the vehicle comprising a screen for viewing portions of the wide angle field of view by a driver of the vehicle, and
a video processor for subdividing the camera field of view into multiple horizontally disposed subsidiary fields of view and displaying said subsidiary fields of view on visually separated side-by-side regions of the display device screen, wherein:
during operation the system varies the extent of at least one subsidiary field of view as a function of vehicle motion so that, during system operation, displayed portions of the camera wide angle field of view are divided into at least two subsidiary fields of view separated by a boundary and, while the vehicle is reversing and turning in a particular direction, when an image straddles the boundary between two subsidiary fields of view, the field of view angle of one or each of the subsidiary fields of view is adjusted to shift the angle of the scene subtended so that the image appears entirely within one of the subsidiary fields of view.

US Pat. No. 9,977,985

METHOD FOR PRODUCING A HISTOGRAM OF ORIENTED GRADIENTS

FotoNation Limited, Galw...

34. A method for producing a histogram of oriented gradients (HOG) for at least a portion of an image comprising:
dividing said image portion into cells, each cell comprising a plurality of image pixels;
associating at least one sector with a bin, each sector corresponding to a respective angle within a full circle,
for each image pixel of a cell,
obtaining a horizontal gradient component, g_x, and a vertical gradient component, g_y, based on differences in pixel values along at least a row of said image and a column of said image respectively including said pixel; and
allocating a gradient weight to a sector of a pair of sectors bounding said pixel gradient according to a relative angle of said pixel gradient to each of the pair of sectors bounding said pixel gradient; and
allocating the gradient weight for a sector to an associated bin according to an angular relationship between said sector and said associated bin; and
accumulating a weighted pixel gradient for each pixel of a cell associated with a bin to provide a HOG for said cell.
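A sketch of one cell's histogram, assuming unsigned gradients, nine bins spanning 180°, a one-to-one identification of sectors with bins, and a linear split of each gradient magnitude between the two sectors bounding its angle; the bin count and angular range are illustrative assumptions:

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    gx = np.gradient(cell.astype(float), axis=1)   # horizontal component
    gy = np.gradient(cell.astype(float), axis=0)   # vertical component
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bin_width = 180.0 / n_bins
    pos = ang / bin_width                          # fractional bin position
    lo = np.floor(pos).astype(int) % n_bins        # lower bounding sector
    hi = (lo + 1) % n_bins                         # upper bounding sector
    frac = pos - np.floor(pos)
    hog = np.zeros(n_bins)
    # gradient weight split between the two bounding sectors by angle
    np.add.at(hog, lo.ravel(), (mag * (1 - frac)).ravel())
    np.add.at(hog, hi.ravel(), (mag * frac).ravel())
    return hog
```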

US Pat. No. 9,262,807

METHOD AND SYSTEM FOR CORRECTING A DISTORTED INPUT IMAGE

Fotonation Limited, Ball...

1. A method for correcting distorted images, the method comprising:
dividing an input image that contains distortions into a plurality of input tiles, each input tile having a plurality of input
tile coordinates;

for each input tile, of the plurality of input tiles, generating a corresponding output tile by:
determining a mapping between the plurality of input tile coordinates and a plurality of output tile coordinates;
based, at least in part, on the mapping, determining corrected tile information for the input tile;
based, at least in part, on the corrected tile information, determining a minimum memory address location and a maximum memory
address location of a memory space in which pixel information for the input tile is stored;

based on the minimum memory address location and the maximum memory address location, retrieving the pixel information for
the input tile from the memory space;

based, at least in part, on the mapping and the pixel information of the input tile, generating corrected pixel information
which does not contain one or more of the distortions; and

storing the corrected pixel information in the corresponding output tile.
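A sketch of the addressing step: project an output tile's coordinates through an assumed inverse distortion mapping, then take the min/max of the resulting source coordinates as the bounds of the memory span that must be fetched for that tile. The toy barrel model and its constants are illustrative assumptions:

```python
import numpy as np

def tile_source_window(tile_x, tile_y, tile_size, inverse_map):
    ys, xs = np.mgrid[tile_y:tile_y + tile_size, tile_x:tile_x + tile_size]
    src_x, src_y = inverse_map(xs.ravel(), ys.ravel())
    # min/max source addresses bound the pixel information for this tile
    return (int(np.floor(src_x.min())), int(np.floor(src_y.min())),
            int(np.ceil(src_x.max())), int(np.ceil(src_y.max())))

def barrel_inverse(x, y, k=-2e-7, cx=640.0, cy=360.0):
    """Toy mapping from corrected to distorted coordinates."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    s = 1.0 + k * r2
    return cx + (x - cx) * s, cy + (y - cy) * s

# Usage: window = tile_source_window(0, 0, 64, barrel_inverse)
```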

US Pat. No. 10,051,208

OPTICAL SYSTEM FOR ACQUISITION OF IMAGES WITH EITHER OR BOTH VISIBLE OR NEAR-INFRARED SPECTRA

FotoNation Limited, Galw...

1. An optical system for an image acquisition device comprising:
a filter comprising a central aperture arranged to transmit both visible and selected near infra-red (NIR) wavelengths and a peripheral aperture arranged to block the visible wavelengths and to transmit said NIR wavelengths, wherein a diameter ratio between said peripheral aperture and said central aperture permits images for which spatial resolution of NIR pixels is comparable to or higher than corresponding spatial resolution of visible light sensing pixels simultaneously acquired by said image sensor;
an image sensor comprising an array of pixels including pixels sensitive to the visible wavelengths and corresponding pixels sensitive to said NIR wavelengths; and
a lens assembly axially located between said filter and said image sensor and comprising a plurality of lens elements, said plurality of lens elements being arranged to simultaneously focus NIR light, received from a given object distance through the central aperture of said filter and the peripheral aperture of said filter, and visible light, received from said given object distance through only said central aperture of said filter, on a sensor surface.

US Pat. No. 9,846,739

FAST DATABASE MATCHING

FotoNation Limited, Galw...

1. A method of identifying a possible match between a sample data record and any in a plurality of enrolled data records in
a data base, each enrolled data record comprising a first plurality of data positions, the method comprising:
a) prior to initiating a process for matching a sample data record separate from the data base with one of the enrolled data
records in the data base, without first associating the sample data record with one of the enrolled data records, designating
a second plurality of reference positions among the first plurality of data positions in a first enrolled data record, some
of which reference positions in said first enrolled data record are separated by other data positions in said first enrolled
data record, each reference position corresponding to a location in said first enrolled data record at which a key value,
useful as a characteristic feature for identifying said first enrolled data record, is positioned, there being a first key
value at a first enrolled data record reference position and a second key value at a second enrolled data record reference
position, the totality of said key values providing an identification for distinguishing said first enrolled data record from
others in the plurality of enrolled data records;

b) providing, for at least said first enrolled data record, an enrollment mask comprising a series of enrollment mask data
positions, each corresponding to one in the first plurality of data positions in said first enrolled data record, the enrollment
mask including at least first and second enrollment mask reference positions corresponding to first and second enrolled data
record reference positions, wherein the first key value is associated with said first enrollment mask reference position and
the second key value is associated with said second enrollment mask reference position to match a sample record with said
first enrolled data record;

c) for the sample data record, defining a sample mask comprising sample mask data positions, each corresponding to a data
position in said first enrolled data record, including first and second sample mask reference positions corresponding to said
first and second enrollment mask reference positions and corresponding to the first and second reference positions in the
enrolled data record reference positions;

d) associating said first key value with said first sample mask reference position and associating said second key value with
said second mask reference position to identify in the sample record presence of at least said first and second key values
at positions corresponding to reference positions in said first enrolled data record that are associated with said first and
second key values; and

e) applying the sample mask to the sample data record to determine whether the first and second key values are at positions
in the sample data record corresponding to the first and second sample mask reference positions to identify a possible match
between the sample data record and said first enrolled data record.
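A compact sketch of steps a) through e) under the simplifying assumptions that records are byte strings and that a mask reduces to (position, key value) pairs; the patent's actual record and mask encodings are not reproduced here:

```python
def build_mask(enrolled, reference_positions):
    """Designate reference positions and record the key values found there."""
    return [(p, enrolled[p]) for p in reference_positions]

def possible_match(sample, mask):
    """Apply the mask to a sample record: a possible match is declared when
    every key value appears at the corresponding sample position."""
    return all(p < len(sample) and sample[p] == key for p, key in mask)

# Usage: a cheap pre-filter run before any full record comparison.
enrolled = b"IRISCODE-EXAMPLE-RECORD"
mask = build_mask(enrolled, reference_positions=[0, 5, 9, 17])
assert possible_match(enrolled, mask)
assert not possible_match(b"OTHERCODE-SAMPLE-RECORD", mask)
```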

US Pat. No. 9,934,559

METHOD FOR CORRECTING AN ACQUIRED IMAGE

FotoNation Limited, Galw...

1. A method of correcting an image obtained by an image acquisition device including:
obtaining successive measurements, G_n, of device movement during exposure of each row of an image;
selecting an integration range, idx, in proportion to an exposure time, t_e, for each row of the image,
averaging accumulated measurements, C_n, of device movement for each row of an image across said integration range to provide successive filtered measurements, G, of device movement during exposure of each row of an image, and
correcting said image for device movement using said filtered measurements G.
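A sketch of the filtering step, assuming gyro samples G_n arrive at a fixed rate: the accumulated measurements C_n are averaged over a centred window whose half-width idx is proportional to the row exposure time t_e. The fixed sample rate and the centred window are illustrative assumptions:

```python
import numpy as np

def filter_measurements(G, t_e, sample_rate):
    idx = max(1, int(round(t_e * sample_rate)))    # integration range
    C = np.cumsum(np.asarray(G, dtype=float))      # accumulated C_n
    kernel = np.ones(2 * idx + 1) / (2 * idx + 1)
    return np.convolve(C, kernel, mode='same')     # filtered per-row motion
```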

US Pat. No. 9,692,964

MODIFICATION OF POST-VIEWING PARAMETERS FOR DIGITAL IMAGES USING IMAGE REGION OR FEATURE INFORMATION

FotoNation Limited, Galw...

1. A method for re-composing an image, the method comprising:
continuously acquiring a plurality of images with selectively adjusted exposures by:
as a camera acquires the plurality of images:
calculating an overall exposure for at least one image of the plurality of images;
determining whether a first image, of the at least one image, depicts a face;
in response to determining that the first image, of the at least one image, depicts the face:
identifying, in the first image, of the at least one image, a first region in which the face is depicted;
calculating a face exposure for the first region in the first image;
based, at least in part, on the overall exposure and the face exposure, determining a first adjusted exposure for at least
one region of a second image to be acquired;

acquiring a second image, of the plurality of images, using the first adjusted exposure;
identifying, in the acquired second image, a second region which corresponds to the first region depicting the face;
based, at least in part, on the first image and the second image, generating a re-composed image by replacing at least one
group of pixels in the first region in the first image with adjusted exposure pixels of the second image; and

displaying the re-composed image on a device display.

US Pat. No. 9,652,834

METHOD AND APPARATUS FOR VIEWING IMAGES

FotoNation Limited, Galw...

6. A computer-implemented method for viewing images on an interactive computing device, the method comprising:
receiving a plurality of images of nominally the same scene wherein the plurality of images comprises a main image having
a main focus measure, and one or more sub-images having sub-image focus measures different from the main focus measure;

displaying the main image on a display device;
responsive to a user selecting, touching or pointing to a particular point of the main image:
determining, in the main image, a refocus region comprising a grouping of pixels that includes the particular point;
determining a size and a location of the refocus region;
based, at least in part, on the size and the location of the refocus region of the particular point within the main image,
determining a magnification factor;

based, at least in part, on the magnification factor and the location of the refocus region, determining a modified refocus
region within the main image;

determining a first sub-image, from the one or more sub-images, that depicts at least a part of the refocus region and has
a first focus measure different than the main focus measure;

modifying the main image by modifying the modified refocus region of the main image using contents of the first sub-image;
and

displaying the modified main image on the display device wherein the method is performed using one or more computing devices.

US Pat. No. 9,319,499

WEARABLE ULTRA-THIN MINIATURIZED MOBILE COMMUNICATIONS

FotoNation Limited, Ball...

1. A wireless device comprising:
a wireless communications circuit mounted in a wireless headset and adapted to provide control data used to wirelessly control
a cellular telephone device separate from the wireless headset;

wherein the wireless headset is adapted to engage a portion of a head of a user;
an audio output component communicatively coupled to the wireless communications circuit to receive signals from the wireless
communications circuit and deliver audio output to the user representing the received signals;

an audio input component, mounted in the wireless headset, communicatively coupled to the wireless communications circuit
to supply audio signals, representing the user's voice, to the wireless communications circuit;

wherein at least one of the audio input component mounted in the wireless headset or the wireless communications circuit mounted
in the wireless headset is configured to interpret voice commands received from the user without other input from the user,
and, based on the interpreted voice commands create the control data which is used to wirelessly control the cellular telephone
device; and

a camera component, including an imaging chip adapted to sense light impinging on the imaging chip, mounted in the wireless
headset and communicatively coupled to the wireless communications circuit for supplying image signals representing images
captured by the camera component to the wireless communications circuit, the camera component configured to face in the same
direction as the user when the wireless headset is engaged with the portion of the head of the user.

US Pat. No. 9,118,833

PORTRAIT IMAGE SYNTHESIS FROM MULTIPLE IMAGES CAPTURED ON A HANDHELD DEVICE

FotoNation Limited, Ball...

1. A hand-held digital image capture device, comprising:
a lens and an image sensor for capturing digital images;
a display unit configured to display the digital images on a display screen;
a digital processor, operatively coupled to the image sensor and the display unit, the digital processor being configured to:

detect a particular object in a field of view in front of a user of the image capture device;
capture a first image comprising a representation of the particular object;
detect whether the image capture device is repositioned along a predetermined concave path;
wherein the predetermined concave path is with respect to an optical axis of the image capture device pointing toward the
particular object;

wherein the predetermined concave path is around the particular object;
wherein the predetermined concave path has an approximately constant radius centered at the particular object;
automatically capture a second image of the particular object only upon detecting that the image capture device is repositioned
along the predetermined concave path with respect to the optical axis of the image capture device pointing toward the particular
object; and

store the first image and the second image in a storage device.

US Pat. No. 9,053,545

MODIFICATION OF VIEWING PARAMETERS FOR DIGITAL IMAGES USING FACE DETECTION INFORMATION

Fotonation Limited, Ball...

1. A method of generating one or more new digital images using an original digitally-acquired image including a face, comprising:
using a processor in
acquiring with an image sensor an original image including the face;
identifying one or more groups of pixels that correspond to two or more faces within the original image, including comparing
one or more detected luminances and at least one facial orientation within a face identified within a first group of pixels
with one or more expected luminances and at least one expected facial orientation of digital facial images, respectively,
and identifying a second group of pixels that correspond to another face within the original image;

matching one or both of the first and second group of pixels with stored data to uniquely identify at least one of the two
or more faces as a specific person or persons, including applying face recognition to the face or faces by comparing the face
or faces with stored face image data identified with said specific person or persons;

generating a luminance map of the original image including the one or more groups of pixels;
selecting a portion of the original image to include the first and second groups of pixels based on the luminance map or on
the comparing or both; and

automatically generating values of pixels of one or more new images based on the selected portion in a manner which includes
at least one of the two or more faces within the one or more new images;

wherein the automatically generating values of pixels of one or more new images comprises applying an automatic fill-flash
to, or otherwise adjusting luminance of, one or more of the identified faces based on analysis of the luminance map or on
the comparing or both, including brightening one or more of the identified faces relative to background pixels of the one
or more new images; and

wherein the automatically generating values of pixels of one or more new images further comprises adjusting a relative rotational
orientation of one or more faces of the two or more faces within the original image, wherein the adjusting the relative rotational
orientation of the one or more faces comprises rotating the one or more faces relative to other portions of the image.

US Pat. No. 10,057,261

METHOD FOR CONFIGURING ACCESS FOR A LIMITED USER INTERFACE (UI) DEVICE

FotoNation Limited, Galw...

1. A method, operable with a computing device in a local area network comprising a wireless LAN, to configure access for a limited user interface (UI) device to a network service across a secure link in the local area network via a local network access point in the local area network, the method comprising:
employing a User Communication Device as a gateway between the limited UI device and a remote server providing the remote service, including the steps of the User Communication Device:
obtaining from the limited UI device a device identifier via an out-of-band channel;
providing said device identifier to said network service via a secure network link;
receiving from the network service a zero knowledge proof (ZKP) challenge;
providing configuration information to said limited UI device also via an out-of-band channel, said configuration information including information sufficient to enable said limited UI device to connect to said local network access point;
providing the ZKP challenge to said limited UI device also via an out-of-band channel;
receiving a secure channel key from said network service indicating a successful response from said limited UI device to said ZKP challenge; and
providing said secure channel key to said limited UI device enabling said limited UI device to access said network service.
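The claim leaves the proof system unspecified; the sketch below uses a simplified, textbook Schnorr proof of knowledge as a stand-in for the ZKP challenge/response, with toy parameters that are emphatically not secure:

```python
# The device proves knowledge of the secret behind its enrolled identity
# without revealing it; on verification the service releases the secure
# channel key. P and G are tiny demo values (illustrative assumptions).
import secrets

P = 2**127 - 1        # toy prime modulus — NOT cryptographically safe
G = 3                 # toy generator

def enroll(x):
    """Public value registered with the network service."""
    return pow(G, x, P)

def prove(x, challenge):
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)                       # commitment
    s = (r + challenge * x) % (P - 1)      # response
    return t, s

def verify(y, challenge, t, s):
    return pow(G, s, P) == (t * pow(y, challenge, P)) % P

x = secrets.randbelow(P - 1)               # limited-UI device's secret
y = enroll(x)                              # bound to its device identifier
c = secrets.randbelow(2**64)               # the service's ZKP challenge
t, s = prove(x, c)                         # relayed via the gateway device
assert verify(y, c, t, s)                  # success -> secure channel key
```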

US Pat. No. 9,691,136

EYE BEAUTIFICATION UNDER INACCURATE LOCALIZATION

FOTONATION LIMITED, Galw...

1. A device configured to enhance an appearance of a face within a digital image, the device comprising:
a lens;
an image sensor;
a processor; and
a processor-readable medium having code embedded therein configured to program the processor to:
identify a face region within a digital image of a scene including a depiction of a face, wherein the face region is a region
of the digital image that includes the depiction of the face, wherein the face region includes less than all of said digital
image, the digital image captured using the lens and an image sensor or received following capture by another device that
includes a lens and an image sensor;

identify within the face region one or more sub-regions to be enhanced with localized luminance smoothing, wherein each of
the one or more sub-regions includes less than all of said face region;

apply one or more localized luminance smoothing kernels to luminance data of the one or more sub-regions identified within
the face region to produce one or more enhanced sub-regions within the face region, wherein the one or more localized smoothing
kernels are configured to modify luminance data within each particular sub-region of the one or more sub-regions by, at least
in part, replacing the luminance values of each respective pixel of a plurality of pixels in that sub-region with an average
computed from the luminance values of pixels surrounding that respective sub-region and not including luminance values of
the pixels within that respective sub-region;

generate an enhanced image including an enhanced version of the face region comprising pixels corresponding to the one or
more enhanced sub-regions within the face region; and

provide the enhanced image or a further processed version of the enhanced image for display, transmission, communication or
digital storage or other type of output from the device.

US Pat. No. 9,117,282

FOREGROUND / BACKGROUND SEPARATION IN DIGITAL IMAGES

FotoNation Limited, Ball...

1. A method comprising:
using a processor-based digital image acquisition device that includes a processor programmed by processor-readable code embedded
that is stored within one or more digital storage media to perform the steps of:

providing a first map comprising one or more regions that are within a main digital image of a scene, wherein each of the
one or more regions has one or more pixels with a common characteristic;

providing an object profile corresponding to a region of interest of said main digital image, wherein the region of interest
includes an object candidate;

wherein said object profile comprises an object portion and at least a second portion that is adjacent to said object portion
and has a known association and orientation relative to the object portion;

determining that the object candidate within the region of interest matches said object portion of the object profile;
determining an orientation of the region of interest within said main digital image;
comparing said region of interest with said object profile to determine whether said region of interest matches said object
profile; and

designating said region of interest as a foreground or background region based on said comparing.

US Pat. No. 9,674,427

IMAGE PROCESSING METHOD AND APPARATUS

FotoNation Limited, Galw...

1. An image processing method operable in a hand held image acquisition device with a user-facing camera comprising:
a) obtaining an image with the user-facing camera;
b) identifying at least one face region within the image;
c) determining a distribution of intensity values for pixels of at least one identified face region; and
d) responsive to less than a threshold number of pixels for an identified face region having intensity values at a lowest
intensity value, signalling to a user that a lens for the user-facing camera needs to be cleaned.
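A sketch of steps c) and d), assuming 8-bit luminance values: haze from a smudged lens scatters light and lifts the black level, so a face region with too few near-black pixels triggers the cleaning prompt. The bin width and pixel-count threshold are illustrative assumptions:

```python
import numpy as np

def lens_needs_cleaning(face_luma, low_bin=16, min_dark_pixels=50):
    hist, _ = np.histogram(face_luma, bins=256, range=(0, 256))
    dark = hist[:low_bin].sum()       # pixels at the lowest intensity values
    return dark < min_dark_pixels     # True -> signal "clean the lens"
```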

US Pat. No. 9,516,217

DIGITAL IMAGE PROCESSING USING FACE DETECTION AND SKIN TONE INFORMATION

FotoNation Limited, Galw...

1. A method of processing a digital image using face detection, comprising:
identifying a first group of pixels that correspond to a first face image within the digital image and a second group of pixels
that correspond to a second face image within the digital image;

based on the first group of pixels, detecting a first skin tone of the first face image;
based on the second group of pixels, detecting a second skin tone of the second face image;
detecting a first attribute value of pixels that correspond to a background for the first face image;
detecting a second attribute value of pixels that correspond to a background for the second face image;
wherein the first attribute value is different from the second attribute value;
based on the first skin tone, making a first adjustment to fill-flash of the first face image;
based on the first attribute value, making a second adjustment to illumination of the first face image;
based on the second skin tone, making a third adjustment to fill-flash of the second face image;
based on the second attribute value, making a fourth adjustment to illumination of the second face image; and
wherein the second adjustment is different than the fourth adjustment.

US Pat. No. 10,119,808

SYSTEMS AND METHODS FOR ESTIMATING DEPTH FROM PROJECTED TEXTURE USING CAMERA ARRAYS

FotoNation Limited, (IE)...

1. A camera array, comprising:
at least one two-dimensional array of cameras comprising a plurality of cameras;
an illumination system configured to illuminate a scene with a projected texture;
a processor;
memory containing an image processing pipeline application and an illumination system controller application;
wherein the at least one two-dimensional array of cameras comprises at least two two-dimensional arrays of cameras located in complementary occlusion zones on opposite sides of the illumination system;
wherein the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture;
wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, wherein a spatial pattern period of the projected texture is different along different epipolar lines;
capture a set of images of the scene illuminated with the projected texture;
determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images that includes at least one image captured by a camera in each of the two-dimensional arrays of cameras, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises:
identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles with respect to each other, wherein each epipolar line in the plurality of epipolar lines is between a camera located at the reference viewpoint and an alternative view camera from a plurality of alternative view cameras in the two-dimensional array of cameras and each epipolar line is used to determine the direction of anticipated shifts of corresponding pixels with depth in alternative view images captured by the plurality of alternative view cameras, wherein disparity along a first epipolar line is greater than disparity along a second epipolar line and the projected pattern incorporates a smaller spatial pattern period in a direction corresponding to the second epipolar line;
comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and
selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.

US Pat. No. 10,122,919

DIGITAL IMAGE CAPTURE DEVICE HAVING A PANORAMA MODE

FotoNation Limited, Galw...

1. In a hand-held digital image capture device of the type having a touch screen allowing a user to interact with and control camera functions by touching and manipulating icons, symbols, geometric figures or other items displayed on the touch screen by camera code instructions executed on a processor of the device, which instructions enable control of the camera functions, a method of acquiring a panoramic image of a selectable image sweep angle suitable for use when a user holds and sweeps the device through a field of view to create the panoramic image in accord with a selectable sweep angle, the method comprising:
setting the device in a panoramic mode;
displaying an item on the touch screen having a touch responsive feature by which the user is able to adjust a setting to select the sweep angle; and
after the sweep angle is selected, in response to user input, commencing a digital photographic process based on execution of the camera code instructions, causing the device to automatically capture a plurality of successive overlapping images during a sweep of the device along the selected sweep angle.

US Pat. No. 10,089,740

SYSTEM AND METHODS FOR DEPTH REGULARIZATION AND SEMIAUTOMATIC INTERACTIVE MATTING USING RGB-D IMAGES

FotoNation Limited, (IE)...

1. An array camera, comprising:
a plurality of cameras that capture images of a scene from different viewpoints;
memory containing an image processing pipeline application;
wherein the image processing pipeline application directs the processor to:
capture a set of images using a group of cameras from the plurality of cameras;
receive (i) an image comprising a plurality of pixel color values for pixels in the image and (ii) an initial depth map corresponding to the depths of the pixels within the image, wherein the initial depth map is generated using the set of images; and
regularize the initial depth map into a dense depth map using pixels for which depth is known to estimate depths of pixels for which depth is unknown by using affine combinations of the depths of nearby known pixels to compute depths for the unknown pixel depths, wherein regularizing the initial depth map into the dense depth map further comprises performing Laplacian matting to compute a Laplacian L, wherein the Laplacian matting is optimized by solving a reduced linear system for depth values only in unknown regions;
wherein regularizing the initial depth map into the dense depth map further comprises:
finding an approximate dense depth map using a diffusion process where L_D is a diffusion Laplacian constructed such that each pixel is connected to a plurality of its surrounding neighbors using spatial proximity;
pruning the Laplacian L based upon the approximate dense depth map; and
detecting and correcting depth bleeding across edges by computing a Laplacian residual R based upon the pruned Laplacian and removing incorrect depth values based on the Laplacian residual R; and
using the dense depth map to perform image-based rendering.

US Pat. No. 9,779,287

CLASSIFICATION AND ORGANIZATION OF CONSUMER DIGITAL IMAGES USING WORKFLOW, AND FACE DETECTION AND RECOGNITION

FotoNation Limited, Galw...

1. A method comprising:
displaying, within a graphical user interface, an image that depicts one or more faces;
detecting one or more faces within the image;
receiving user input that selects a face of the one or more faces to be a currently-selected face;
selecting a set of images, from a collection of images, that closely match the currently-selected face;
wherein an image in the set of images comprises a normalized face region that is normalized by:
based on, at least in part, one or more distances between eyes, a nose, and a mouth depicted in a face region of the image,
determining a profile type of a face depicted in the face region;

based on, at least in part, the profile type, determining a transformation, and applying the transformation to the face region
to generate the normalized face region;

concurrently with displaying the currently-selected face, displaying each image in the set of images;
providing, within the graphical user interface, a control that enables a user to select a target image from the set of images;
and

in response to detecting that the user has selected the target image using said control, associating the currently-selected
face with a person to whom the target image corresponds; wherein the method is performed by one or more computing devices.

US Pat. No. 9,743,001

METHOD OF STABILIZING A SEQUENCE OF IMAGES

FotoNation Limited, Galw...

1. A method operable within an image capture device for stabilizing a sequence of images captured by the image capture device,
comprising:
using lens based sensors indicating image capture device movement during image acquisition, performing optical image stabilization
(OIS) during acquisition of each image of said sequence of images to provide a sequence of OIS corrected images;

using inertial measurement sensors, determining frame-to-frame movement of the device for each frame during which each OIS
corrected image is captured;

obtaining at least an estimate of OIS controlled lens movement responsive to device movement during image acquisition with
a lens system including an OIS controller providing, to a central processor of said image capture device, a record of OIS
controlled lens movement applied during capture of each OIS corrected image;

removing said estimate from the frame-to-frame movement determined for the frame during which said OIS corrected image was
captured to provide a residual measurement of movement for said frame wherein said removing comprises transforming said frame-to-frame
movement according to a camera intrinsic matrix and multiplying an inverse of said record of OIS control with transformed
frame-to-frame movement to provide said residual measurement of movement; and

performing electronic image stabilization (EIS) of each OIS corrected image based on said residual measurement to provide
a stabilized sequence of images.
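
A hedged sketch of the removal step, under stated assumptions: if the IMU-derived frame-to-frame rotation R is lifted into pixel coordinates through the intrinsic matrix K, and the recorded OIS correction is modelled as a plain 2D pixel translation, the residual motion handed to EIS is the gyro motion with the OIS part divided out. The translation model and all names are assumptions, not the patent's algebra.

import numpy as np

def residual_motion(R_frame, K, ois_shift):
    """R_frame: 3x3 frame-to-frame rotation from the inertial sensors.
    K: 3x3 camera intrinsic matrix.
    ois_shift: (dx, dy) lens shift recorded by the OIS controller, in pixels."""
    H_gyro = K @ R_frame @ np.linalg.inv(K)   # device motion in image coordinates
    T_ois = np.array([[1.0, 0.0, ois_shift[0]],
                      [0.0, 1.0, ois_shift[1]],
                      [0.0, 0.0, 1.0]])
    # Remove what OIS already corrected; EIS stabilizes the remainder.
    return np.linalg.inv(T_ois) @ H_gyro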

US Pat. No. 10,123,003

METHOD FOR CALIBRATING AN IMAGE CAPTURE DEVICE

FotoNation Limited, Galw...

1. A method for calibrating an image capture device comprising:
a) mounting at least one sample device from a batch for movement through a plurality of orientations relative to a horizontal plane,
b) for a given orientation, focusing said sample device at a sequence of positions, each position being at a respective focus distance from said device;
c) recording a lens actuator setting for said sample device at each position;
d) repeating steps b) and c) at a plurality of distinct orientations of said sample device;
e) repeating steps a) to d) for a number of sample devices from the batch;
f) determining respective relationships between lens actuator settings at any given position for distinct orientations from said plurality of distinct orientations and actuator settings at a selected orientation of said plurality of distinct orientations, and statistically analyzing the recorded lens actuator settings for said sample devices to determine said relationships;
g) recording lens actuator settings for said image capture device to be calibrated at least at two points of interest (POI), each a specified focus distance from said device with said image capture device positioned at said selected orientation; and
h) calibrating said image capture device for said plurality of distinct orientations based on said determined relationships and said recorded lens actuator settings.

US Pat. No. 9,818,024

IDENTIFYING FACIAL EXPRESSIONS IN ACQUIRED DIGITAL IMAGES

FotoNation Limited, Galw...

1. A method for mirroring a user facial expression in a three-dimensional (3D) in-game avatar in real-time, the method comprising:
generating a 3D in-game avatar and displaying the 3D in-game avatar within a game display of a game by performing:
capturing an image of a user as the user participates in the game;
based on contents of the image, determining a plurality of user facial features in the image depicting a user facial expression
of the user, wherein the plurality of user facial features comprises a left eye region, a right eye region, and a lips region;

applying an active appearance face model, which includes a plurality of facial feature parameters, to the left eye region,
the right eye region and the lips region to determine similarities between the left eye region and a reference left eye feature
defined by one or more facial feature parameters of the plurality of facial feature parameters, similarities between the right
eye region and a reference right eye feature defined by one or more facial feature parameters of the plurality of facial feature
parameters, and similarities between the lips region and a reference lips feature defined by one or more facial feature parameters
of the plurality of facial feature parameters;

generating a left eye sub-model based on the similarities between the left eye region and the reference left eye feature;
generating a right eye sub-model based on the similarities between the right eye region and the reference right eye feature;
generating a lips sub-model based on the similarities between the lips region and the reference lips feature;
generating a global model by projecting and fitting the left eye sub-model onto the left eye region, projecting and fitting
the right eye sub-model onto the right eye region, and projecting and fitting the lips sub-model onto the lips region; and

based on the global model, generating a 3D face model of the 3D in-game avatar; and
causing the 3D in-game avatar to mimic the user facial expression in the game display in real-time.

US Pat. No. 10,110,804

PORTRAIT IMAGE SYNTHESIS FROM MULTIPLE IMAGES CAPTURED ON A HANDHELD DEVICE

FotoNation Limited, Galw...

1. A hand-held digital image capture device, comprising:
a lens and image sensor for capturing digital images;
a display screen;
a processor; and
a user-selectable mode switch configured to trigger the device to detect a face in the field of view of the device and to generate a face delimiter on the display screen surrounding an initial position of an image of the face on the screen, the device thereafter indicating to the user if the device departs from movement along a predetermined concave path with the optical axis of the device pointing towards the face, such indication being made by movement of the image of the face relative to the delimiter,
wherein the device is further configured to capture and store a plurality of images at successive positions along the concave path.

US Pat. No. 9,639,775

FACE OR OTHER OBJECT DETECTION INCLUDING TEMPLATE MATCHING

FotoNation Limited, Ball...

1. A digital image processing device, comprising:
an optoelectronic system;
a memory;
two or more image processing units;
a plurality of object detection templates, wherein each object detection template of the plurality of object detection templates
is tuned for high detection, low rejection ratios for detecting faces;

a plurality of high-quality object detection templates tuned for low detection, high rejection ratios for detecting faces;
wherein the high-quality object detection templates are different from the object detection templates;
wherein the optoelectronic system acquires a plurality of digital images;
wherein a first data processing unit (“DPU”) that lacks a program counter, of the two or more image processing units:
determines, for a first digital image of the plurality of digital images, a location and a boundary of one or more spatial
regions where one can expect to detect a face by applying in parallel, to the first digital image, two or more object detection
templates, of the plurality of object detection templates;

stores in the memory the first digital image and information about the location and the boundary of one or more spatial regions;
processes a second digital image of the plurality of digital images;
wherein, as the first DPU processes the second digital image, by applying in parallel, to the second digital image, the two
or more object detection templates of the plurality of object detection templates for detecting faces, a second DPU of the
two or more image processing units:

based on the location and the boundary, retrieves the one or more spatial regions of the first digital image and performs
an additional processing on the one or more spatial regions of the first digital image by:

determining whether the face is depicted in the one or more spatial regions as facing a camera at a particular angle by applying,
to the one or more spatial regions of the first digital image, one or more high-quality object detection templates of the
plurality of high-quality object detection templates tuned for low detection, high rejection ratios for detecting faces;

in response to determining that the face is depicted in the one or more spatial regions as facing the camera at the particular
angle, sending a confirmation message that the face facing the camera at the particular angle was detected; and

wherein the additional processing on the one or more spatial regions of the first digital image by the second DPU is performed
in parallel with the processing of the second digital image by the first DPU.

US Pat. No. 9,665,799

CONVOLUTIONAL NEURAL NETWORK

FOTONATION LIMITED, Galw...

21. An image processing system comprising:
an image cache comprising an input port and an output port, said image cache being responsive to a request to read a block
of N×M pixels extending from a specified location within an input map to provide said block of N×M pixels at said output port;

an image processor being arranged to read at least one block of N×M pixels from said image cache output port, and to process
said at least one block of N×M pixels to provide at least one output pixel value;

said image cache being configured to write output pixel values to a specified write address via said image cache input port;
said image cache comprising a plurality of interleaved memories, each memory storing a block of pixel values at a given memory
address, the image cache being arranged to determine for a block of N×M pixels to be read from said image cache: a respective
one address within each of said interleaved memories in which said pixels of said block of N×M pixels are stored; a respective
memory of said plurality of interleaved memories within which each pixel of said block of N×M pixels is stored; and a respective
offset for each pixel of said block of N×M pixels within each memory address, so that said image cache can simultaneously
provide said N×M pixels at said output port in a single clock cycle; and

a controller arranged to cause said image processor to read from a specified location of said at least one input map a configurable
block of N×M pixels and to write output pixel values to specified locations within said image cache.
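
By way of illustration only, the bank/address/offset decomposition the claim requires might look like the arithmetic below. The specific numbers (4 banks interleaved on x, 4 pixels packed per memory word on y) and the stride parameter are assumptions; the patent's actual layout, which guarantees single-cycle access to any N×M window, is more elaborate.

NUM_BANKS = 4
PIXELS_PER_WORD = 4

def locate(x, y, stride_words):
    """Map pixel (x, y) to (bank, word address, offset within the word)."""
    bank = x % NUM_BANKS                      # which interleaved memory
    address = (x // NUM_BANKS) + (y // PIXELS_PER_WORD) * stride_words
    offset = y % PIXELS_PER_WORD              # pixel position inside the word
    return bank, address, offset

# every pixel of a 5x5 block whose top-left corner is (3, 6):
block = [locate(3 + i, 6 + j, stride_words=64) for j in range(5) for i in range(5)]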

US Pat. No. 10,115,003

IMAGE PROCESSING APPARATUS

FotoNation Limited, Galw...

1. An image processing apparatus comprising a normalisation module operatively connected across a bus to a memory storing an image in which a region of interest (ROI) has been identified within the image, the ROI being bound by a rectangle having a non-orthogonal orientation within the image, the normalisation module being arranged to:
divide said ROI into one or more slices, each slice comprising a plurality of adjacent rectangular tiles;
for each slice, successively read ROI information for each tile from said memory including: reading a portion of said image extending across at least a width of said slice line-by-line along an extent of a slice;
for each tile, downsample said ROI information to a buffer to within a scale SD<2 of a required scale for a normalised version of said ROI; and fractionally downsample and rotate downsampled information for a tile within said buffer to produce a respective normalised portion of the ROI at the required scale for the normalised ROI; and
accumulate downsampled and rotated information for each tile within a normalised ROI buffer for subsequent processing by said image processing apparatus.

US Pat. No. 9,942,470

DETECTING FACIAL EXPRESSIONS IN DIGITAL IMAGES

FotoNation Limited, Galw...

1. A method of in-camera processing to provide an increased number of smiling faces in a recorded digital image of a group of faces, comprising:
acquiring one or more digital images; and
providing in-camera processing which automatically:
identifies one or more faces in a first one of the images;
from among the one or more faces, identifies a particular number of non-smiling faces; and
based, at least in part, on the particular number, initiates one or more operations when a predetermined minimum number of non-smiling faces are in the first image, said one or more operations including:
delaying recording of an image until the number of non-smiling faces in a subsequently acquired image is less than the predetermined number and, when the number of non-smiling faces in a subsequently acquired image is less than the predetermined number,
forming a composite image, based on the subsequently acquired image and one or more of the other images, to further reduce the number of non-smiling faces by replacing a face region in the subsequently acquired image with a face region in one of the one or more of the other images; and
recording the composite image as digital content in a storage medium.

US Pat. No. 9,684,966

FOREGROUND / BACKGROUND SEPARATION IN DIGITAL IMAGES

FotoNation Limited, Galw...

1. An apparatus for providing improved separation between foreground and background regions in a digital image of a scene,
comprising a processor and one or more processor-readable media for programming the processor to control the apparatus to
perform a method comprising: acquiring a main digital image; providing a first map comprising one or more regions within a
main digital image, each having one or more pixels with a common characteristic; providing an object profile corresponding
to a region of interest of said main digital image including an object candidate; determining that the object candidate within
the region of interest matches an object portion of the object profile; determining an orientation of the region of interest
within said main digital image; defining said object profile as comprising said object portion and at least a second portion
adjacent said object portion having a known association and orientation relative to the object portion; comparing said region
of interest with said object profile to determine whether said region of interest matches said object profile; and designating
said region of interest as a foreground or background region based on said comparison.

US Pat. No. 9,712,743

DIGITAL IMAGE PROCESSING USING FACE DETECTION AND SKIN TONE INFORMATION

FotoNation Limited, Galw...

1. A method of processing a digital image using face detection, the method comprising:
identifying a first group of pixels that correspond to a first face region within a digital image;
determining one or more first initial color values for pixels in the first group of pixels;
based on the one or more first initial color values for the pixels in the first group of pixels, determining a first skin
tone for the first face region;

identifying a second group of pixels that correspond to a second face region within the digital image;
determining one or more second initial color values for pixels in the second group of pixels;
based on the one or more second initial color values for the pixels in the second group of pixels, determining a second skin
tone for the second face region;

determining whether the first skin tone is lighter than the second skin tone; and
in response to determining that the first skin tone is lighter than the second skin tone:
based on, at least in part, the first skin tone and the second skin tone, determining an initial fill-flash and applying the
initial fill-flash to the first face region;

based on, at least in part, the first skin tone and the second skin tone, increasing the fill-flash and applying the increased
fill-flash to the second face region.

US Pat. No. 10,097,757

METHOD FOR DETERMINING BIAS IN AN INERTIAL MEASUREMENT UNIT OF AN IMAGE ACQUISITION DEVICE

FotoNation Limited, Galw...

1. A method for determining bias in an inertial measurement unit of an image acquisition device comprising:
mapping at least one reference point within an image frame into a 3D spherical space based on a lens projection model for the image acquisition device to provide a respective anchor point in 3D space for each reference point;
for at least one of said at least one reference points within a given image frame:
obtaining an estimate of frame-to-frame motion at said reference point between said given frame and a previously acquired frame;
obtaining from said inertial measurement unit a measure of device orientation for an acquisition time of said reference point in said given frame and said previously acquired frame, said measure including a bias component;
projecting a corresponding anchor point in 3D space according to a difference in said measure of device orientation in said given frame and said previously acquired frame to provide a 3D vector Vm;
projecting a result of said estimated frame-to-frame motion for said point from said given frame into said previously acquired frame into 3D space to provide a 3D vector Ve;
providing a cross product Vc of said 3D vectors Vm and Ve; and
obtaining an updated bias component value as a function of said cross product Vc for a plurality of reference points in one or more image frames.
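
A hedged sketch of the final two steps: for small errors, the misalignment between the IMU-projected vector Vm and the motion-projected vector Ve is captured by their cross product, so the bias estimate can be nudged along the accumulated Vc. The gain and the averaging below are assumptions, not values from the patent.

import numpy as np

def update_bias(bias, anchor_pairs, lr=0.01):
    """bias: current 3-vector bias estimate.
    anchor_pairs: list of (Vm, Ve) unit 3-vectors for the reference points."""
    correction = np.zeros(3)
    for vm, ve in anchor_pairs:
        correction += np.cross(vm, ve)        # Vc, per the claim
    # Small step along the mean cross product; 'a function of Vc' per the claim.
    return bias + lr * correction / max(len(anchor_pairs), 1)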

US Pat. No. 10,134,117

METHOD AND APPARATUS FOR VIEWING IMAGES

FotoNation Limited, Galw...

1. A computer-implemented method for processing multiple images captured at different focal lengths to create a composite image for viewing on a touch display device associated with an interactive computing device, the method comprising:
creating a plurality of images of nominally the same scene, including a main image comprising a main image region having a main focus measure, and one or more sub-images each comprising a sub-image region corresponding to the main image region but having a sub-image focus measure different from the main focus measure, where the focus measure is calculated for multiple ones of the sub-regions in the sub-images which correspond to the main image region and the computing device selects a change in focus based on the user designating replacement of the main image region with either a sub-image region having a higher or a lower focus measure relative to the focus measure of the main image region;
while the main image is presented on the touch display device for viewing by the user, automatically detecting, with the interactive computing device, the user selecting the main image region for replacement with a sub-image region having a higher or a lower focus measure than that of the main image region by application of a circular-like motion about the main image region, where:
(i) application of the circular-like motion in a first direction causes the interactive computing device to select a sub-image region having a higher focus measure than that of the main image region and replace the main image region with the selected sub-image region having the higher focus measure to create a modified main image, and application of the circular-like motion in a second direction opposite the first direction causes the interactive computing device to select a sub-image region having a lower focus measure than that of the main image region and replace the main image region with the selected sub-image region having the lower focus measure to create a modified main image, and
(ii) either (a) the first direction is a clockwise direction and the second direction is a counter-clockwise direction, or (b) the second direction is a clockwise direction and the first direction is a counter-clockwise direction, and
displaying the modified main image on the display device.

US Pat. No. 10,152,631

OPTICAL SYSTEM FOR AN IMAGE ACQUISITION DEVICE

FotoNation Limited, Galw...

1. A smartphone for performing biometric recognition of a subject, the smartphone including an optical system comprising:
a filter comprising a central aperture of a first diameter arranged to transmit both visible wavelengths and selected near infra-red (NIR) wavelengths and a peripheral aperture of a second given diameter arranged to block visible wavelengths and to transmit said NIR wavelengths, wherein a diameter ratio between the second given diameter of said peripheral aperture and the first diameter of said central aperture is between about 1.5:1 and about 1.6:1;
an image sensor comprising an array of pixels including pixels sensitive to visible wavelengths and corresponding pixels sensitive to said NIR wavelengths; and
a lens assembly axially located between said filter and said image sensor and comprising a plurality of lens elements with a focal length shallow enough that the optical system can fit within the smartphone, said plurality of lens elements being arranged to simultaneously focus NIR light, received from a given object distance through the central aperture of said filter and the peripheral aperture of said filter, and visible light, received from said given object distance through only said central aperture of said filter, on said image sensor with sufficient spatial resolution such that NIR features including a subject's iris pattern are registered with corresponding visible facial features of the subject.

US Pat. No. 10,127,682

SYSTEM AND METHODS FOR CALIBRATION OF AN ARRAY CAMERA

FotoNation Limited, (IE)...

1. A method of manufacturing an array camera device, the method comprising:
assembling an array of cameras comprising a plurality of imaging components that capture images of a scene from different viewpoints, where the components include:
a set of one or more reference imaging components, each having a reference viewpoint, and
a set of one or more associate imaging components;
configuring the array of cameras to communicate with at least one processor;
configuring the processor to communicate with at least one display;
configuring the processor to communicate with at least one type of memory; and
performing a calibration process for the array of cameras, where the calibration process comprises:
performing a defect detection process that includes:
identifying pixel defects in each of the plurality of imaging components, and determining whether pixel defects in a particular imaging component may be corrected in image processing using image data from at least one other one of the plurality of imaging components by:
selecting a defect in a first one of the plurality of imaging components in the array camera and determining a parallax translation path of the defect over a range of supported distances,
determining a parallax translation path over the range of supported distances from defects in other ones of the plurality of imaging components of the array camera,
determining whether a predetermined number of parallax translation paths of defects in other ones of the plurality of imaging components intersect the parallax translation path of the selected defect, and
generating a disposition output for the first one of the imaging components having the selected defect if a predetermined number of parallax translation paths for defects from other imaging components intersect the parallax translation path of the selected defect; and
performing a geometric calibration process that includes determining translation information that relates image data from particular pixels in a reference imaging component to image data from corresponding pixels in each associate imaging component associated with the reference imaging component.

US Pat. No. 10,051,197

IMAGE ACQUISITION METHOD AND APPARATUS

FotoNation Limited, Galw...

1. An image acquisition method operable in a hand held image acquisition device with a camera comprising:
a) obtaining a first image of a scene with the camera at a nominal exposure level;
b) determining a number of relatively bright pixels within said first image;
c) determining a number of relatively dark pixels within said first image;
d) determining based on said number of relatively bright pixels a negative exposure adjustment;
e) determining based on said number of relatively dark pixels a positive exposure adjustment; and
f) acquiring respective images at said nominal exposure level, with said negative exposure adjustment and with said positive exposure adjustment as component images for a high dynamic range (HDR) image of the scene, wherein:
said determining said negative exposure adjustment further comprises: acquiring a second image of said scene with a negative exposure adjustment based on the number of relatively bright pixels within said first image; determining a second number of relatively bright pixels within said second image; and further adjusting said negative exposure adjustment as a function of the second number of relatively bright pixels; or
said determining said positive exposure adjustment further comprises: acquiring a third image of said scene with a positive exposure adjustment based on the number of relatively dark pixels within said first image; determining a third number of relatively dark pixels within said third image; and further adjusting said positive exposure adjustment as a function of the third number of relatively dark pixels.
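
A toy rendering of the bracketing logic (the thresholds and step sizes are illustrative, not values from the patent; `capture` is a hypothetical callback):

import numpy as np

BRIGHT, DARK = 240, 16          # 8-bit clipping thresholds (illustrative)

def ev_adjustments(img):
    """From the nominal-exposure frame, derive the two bracket adjustments."""
    n = img.size
    n_bright = np.count_nonzero(img >= BRIGHT)
    n_dark = np.count_nonzero(img <= DARK)
    neg_ev = -min(3.0, 8.0 * n_bright / n)   # more clipped highlights -> darker frame
    pos_ev = +min(3.0, 8.0 * n_dark / n)     # more crushed shadows    -> brighter frame
    return neg_ev, pos_ev

def refine_negative(capture, neg_ev):
    """Optional second pass from the claim: re-count bright pixels in the
    under-exposed frame and push the adjustment further if needed."""
    second = capture(neg_ev)                  # user-supplied capture callback
    n_bright = np.count_nonzero(second >= BRIGHT)
    return neg_ev - 0.5 if n_bright > 0.01 * second.size else neg_ev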

US Pat. No. 10,560,690

METHOD FOR DYNAMICALLY CALIBRATING AN IMAGE CAPTURE DEVICE

FotoNation Limited, Galw...

1. A method for calibrating an image capture device comprising:
a) determining a first lens actuator setting for a first determined distance to an object in a scene to be imaged by an image capture device;
b) determining a second lens actuator setting providing a greater sharpness than the first lens actuator setting;
repeating a) and b) at a second determined distance different than the first determined distance;
determining a calibration correction to obtain a determined calibration correction based at least in part on:
a difference between the first lens actuator setting and the second lens actuator setting for the first determined distance; and
a difference between the first lens actuator setting and the second lens actuator setting for the second determined distance; and
adjusting a stored calibrated lens actuator setting based at least on the determined calibration correction.

US Pat. No. 10,148,945

METHOD FOR DYNAMICALLY CALIBRATING AN IMAGE CAPTURE DEVICE

FotoNation Limited, Galw...

1. A method for dynamically calibrating an image capture device comprising:
a) determining a distance (D_CRT, D_EST) to an object within a scene being imaged by said image capture device;
b) determining a first lens actuator setting (DAC_INIT) as a function of: a stored calibrated lens actuator setting (DAC_NEARPLP) for a pre-determined near focus distance (D_NEAR), a stored calibrated lens actuator setting (DAC_FARPLP) for a pre-determined far focus distance (D_FAR), a focal length (f) of the image capture device and said determined distance (D_CRT, D_EST);
c) determining a second lens actuator setting (DAC_FOCUS) providing maximum sharpness for said object in a captured image of said scene; and
d) storing said determined distance (D_CRT, D_EST), first lens actuator setting (DAC_INIT) and second lens actuator setting (DAC_FOCUS);
e) subsequently repeating steps a) to c) at a second determined distance separated from said first determined distance;
f) determining a calibration correction (ERR_NEARPLP, ERR_FARPLP) for each of said calibrated lens actuator settings (DAC_NEARPLP, DAC_FARPLP) as a function of at least: respective differences between said second lens actuator setting (DAC_FOCUS) and said first lens actuator setting (DAC_INIT) for each of said first and second determined distances; and
g) adjusting the stored calibrated lens actuator settings according to said determined calibration corrections.
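
The claim leaves the interpolation in step b) abstract; a common model, assumed here, makes the actuator code linear in 1/distance between the calibrated near and far points, with each observed focus error then attributed to the nearer calibration point. A minimal sketch, all names illustrative:

def dac_init(dist, d_near, d_far, dac_near, dac_far):
    """Interpolate the initial actuator code linearly in 1/distance (an assumption)."""
    t = (1.0 / dist - 1.0 / d_far) / (1.0 / d_near - 1.0 / d_far)
    return dac_far + t * (dac_near - dac_far)

def corrections(samples, d_near, d_far, dac_near, dac_far):
    """samples: list of (distance, dac_focus) pairs from steps a)-c).
    Returns (err_near, err_far) by attributing each focus error to the nearer
    calibration point; the patent only requires 'a function of' the differences."""
    err_near = err_far = 0.0
    for dist, dac_focus in samples:
        err = dac_focus - dac_init(dist, d_near, d_far, dac_near, dac_far)
        if abs(1.0 / dist - 1.0 / d_near) < abs(1.0 / dist - 1.0 / d_far):
            err_near += err
        else:
            err_far += err
    return err_near, err_far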

US Pat. No. 10,146,924

SYSTEMS AND METHODS FOR AUTHENTICATING A BIOMETRIC DEVICE USING A TRUSTED COORDINATING SMART DEVICE

FotoNation Limited, (IE)...

1. A process for enrolling a configurable biometric device with a network service comprising:
obtaining a device identifier (ID) of the configurable biometric device using a coordinating smart device;
communicating the device ID of the configurable biometric device from the coordinating smart device to the network service using a secure communications link;
communicating a first challenge from the network service to the coordinating smart device, where the first challenge is generated based on a challenge-response authentication protocol;
communicating the first challenge and a response uniform resource locator (URL) from the coordinating smart device to the configurable biometric device;
generating a first response to the first challenge and communicating the first response to the network service utilizing the response URL;
receiving a secure channel key by the coordinating smart device from the network service;
communicating the secure channel key from the coordinating smart device to the configurable biometric device;
performing a biometric enrollment process using the configurable biometric device including capturing biometric information from a user; and
creating a secure communication link between the configurable biometric device and the network service using the secure channel key when the first response satisfies the challenge-response authentication protocol.

US Pat. No. 10,657,351

IMAGE PROCESSING APPARATUS

FotoNation Limited, Galw...

1. An image processing apparatus comprising:
an image capture sensor;
a plurality of infra-red (IR) sources; and
a processor operatively coupled to the plurality of IR sources and the image capture sensor and configured to:
acquire multiple images from the sensor, each image illuminated with one or more of the IR sources such that each image is captured with illumination from a different direction;
combine component images corresponding to the multiple images into a final combined image by:
determining a median value for corresponding pixel locations of subsets of the component images;
setting the determined median value as a pixel value for a respective intermediate combined image; and
combining the intermediate combined images into the final combined image by selecting a median value for corresponding pixel locations of the intermediate combined images as a pixel value for corresponding pixel locations of the final combined image.
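
A minimal sketch of the two-stage median combination, assuming aligned H×W component images and caller-chosen subsets:

import numpy as np

def combine(images, subsets):
    """images: list of HxW arrays, one per IR illumination direction.
    subsets: list of index tuples, e.g. [(0, 1, 2), (1, 2, 3)]."""
    intermediates = [np.median(np.stack([images[i] for i in s]), axis=0)
                     for s in subsets]                 # per-subset pixelwise median
    return np.median(np.stack(intermediates), axis=0)  # median of the medians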

US Pat. No. 10,547,772

SYSTEMS AND METHODS FOR REDUCING MOTION BLUR IN IMAGES OR VIDEO IN ULTRA LOW LIGHT WITH ARRAY CAMERAS

FotoNation Limited, Galw...

1. A method for synthesizing an image from multiple images captured from different viewpoints using an array camera, the method comprising:
capturing image data using a plurality of active cameras within an array camera, where the plurality of active cameras are configured to capture image data comprising pixel brightness values that form a reference image and a plurality of alternate view images captured from different viewpoints;
applying geometric corrections to find correspondences between pixels in the plurality of alternate view images and pixels in the reference image using a processor configured by software using depth information;
summing the pixel brightness values for pixels in the reference image with pixel brightness values for corresponding pixels in the alternate view images to create pixel brightness sums for the pixel locations in the reference image using the processor configured by software; and
synthesizing an output image from the viewpoint of the reference image using image data comprising the pixel brightness sums for the pixel locations in the reference image using the processor configured by software.

US Pat. No. 10,334,152

IMAGE ACQUISITION DEVICE AND METHOD FOR DETERMINING A FOCUS POSITION BASED ON SHARPNESS

FotoNation Limited, Galw...

1. A method for acquiring an image comprising:
acquiring a sequence of image frames of a given scene;
detecting within each image frame any region containing a subject with a reference subject size;
recording an identifier of any region containing such a subject on which auto-focus might be based;
determining a number of variations of subject identifiers for a given number of frames preceding a given frame in said sequence;
responsive to said number of variations exceeding a threshold, disabling subject based auto-focus for subsequent frames until at least a condition for a given scene is met;
responsive to said number of variations not exceeding a threshold in a given frame containing at least one subject, utilising subject based auto-focus for acquisition of at least a subsequent frame in said sequence; and
otherwise using non-subject based auto-focus for at least a subsequent frame in said sequence.

US Pat. No. 10,176,377

IRIS LIVENESS DETECTION FOR MOBILE DEVICES

FotoNation Limited, Galw...

1. A method comprising:
acquiring a plurality of image pairs using one or more image sensors of a mobile device;
selecting a particular image pair, from the plurality of image pairs, that depicts an eye-iris region in-focus;
generating a hyperspectral image by fusing images included in the particular image pair based on, at least in part, the particular image pair;
generating a particular feature vector for the eye-iris region depicted in the particular image pair based on, at least in part, the hyperspectral image;
retrieving one or more trained model feature vectors generated based on depictions of one or more facial features of a particular user of the mobile device;
determining a distance metric based on, at least in part, the particular feature vector and the one or more trained model feature vectors;
determining whether the distance metric exceeds a first threshold; and
generating a first message indicating that the plurality of image pairs fails to depict the particular user of the mobile device in response to determining that the distance metric exceeds the first threshold.

US Pat. No. 10,142,560

CAPTURING AND PROCESSING OF IMAGES INCLUDING OCCLUSIONS FOCUSED ON AN IMAGE SENSOR BY A LENS STACK ARRAY

FotoNation Limited, (IE)...

1. A camera array, comprising:
a plurality of cameras configured to capture images of a scene, where each camera comprises:
optics comprising at least one lens element and at least one aperture; and
a sensor comprising a two-dimensional array of pixels and control circuitry for controlling imaging parameters;
a processor configured by software to:
capture a plurality of images from different viewpoints using the plurality of cameras, where each image captured by the plurality of cameras includes pixels that are occluded in at least one other image captured by the plurality of cameras; and
normalize the plurality of images based upon calibration data to enable scan-line based parallax searches;
measure parallax between the normalized images by adaptively comparing the similarity of neighborhoods of pixels for different parallax-induced shifts along scan-lines;
identify occluded pixels based upon the measured parallax information;
generate a depth map using the measured parallax information;
select at least one distance as an “in best focus” distance; and
blur an image produced by the camera array based upon the “in best focus” distance and distance information from the depth map.

US Pat. No. 10,122,993

AUTOFOCUS SYSTEM FOR A CONVENTIONAL CAMERA THAT USES DEPTH INFORMATION FROM AN ARRAY CAMERA

FotoNation Limited, (IE)...

1. An array camera system, comprising:
an array camera comprising a plurality of cameras and a separate camera, where:
the plurality of cameras capture images of a scene from different viewpoints;
the separate camera has a fixed geometric relationship with each of the plurality of cameras in the array camera; and
the separate camera captures an image of the scene from a different viewpoint to the viewpoints of the other cameras in the array camera;
a processor; and
memory in communication with the processor storing software;
wherein the software directs the processor to:
obtain a focus window based upon image data from the separate camera, where:
at least one object is detected within an image captured by the separate camera; and
the focus window is a defined region having a set of pixels, where the defined region is within an image captured by the separate camera based upon the location of the at least one object;
determine a focus window for the array camera based upon the focus window from the separate camera, where the focus window for the array camera comprises a corresponding set of pixels that corresponds to the set of pixels within the focus window from the separate camera;
obtain image data pertaining to the focus window of the array camera from at least two cameras in the plurality of cameras,
determine depth information for the focus window of the array camera from the image data from the at least two cameras,
colocate the depth information from the focus window of the array camera to the focus window of the separate camera to account for parallax between the separate camera and the plurality of cameras to generate depth information for the focus window of the separate camera,
determine a focus depth for the separate camera based upon the depth information for the focus window of the separate camera, and
adjust the focus of the separate camera to a desired focus based upon the determined focus depth for the separate camera.

US Pat. No. 10,558,430

NEURAL NETWORK ENGINE

FotoNation Limited, Galw...

1. A neural network engine configured to receive at least one M×N window of floating point number values corresponding to a pixel of an input map and a corresponding set of M×N floating point number kernel values for a neural network layer of a neural network, the neural network engine comprising:
a plurality of M×N floating point multipliers, each floating point multiplier having a first operand input configured to be connected to an input map value and a second operand input configured to be connected to a corresponding kernel value;
pairs of multipliers within said M×N floating point multipliers providing respective floating point number outputs to respective input nodes of a tree of nodes, each node of said tree being configured to provide a floating point number output corresponding to either: a larger of inputs of said node; or a sum of said inputs, one output node of said tree providing a first input of an output logic, and an output of one of said M×N floating point multipliers providing a second input of said output logic;
wherein when said engine is configured to process a convolution layer of the neural network, each of said kernel values comprises a trained value for said layer, said nodes of the tree are configured to sum their inputs and said output logic is configured to sum its first and second inputs, to apply an activation function to said sum and to provide an output of said activation function as an output of said output logic;
wherein when said engine is configured to process an average pooling layer of the neural network, each of said kernel values comprises a value corresponding to 1/(M×N), said nodes of the tree are configured to sum their inputs and said output logic is configured to sum its first and second inputs and to provide said sum as said output of said output logic; and
wherein when said engine is configured to process a max pooling layer of the neural network, each of said kernel values comprises a value equal to 1, said nodes of the tree are configured to output a larger of their inputs and said output logic is configured to output a larger of its first and second inputs as said output of said output logic.
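
A behavioural software model (not RTL) makes the three modes easy to compare: the same multiplier array and node tree yield convolution, average pooling or max pooling purely by swapping the kernel values and the node operation, exactly as the three wherein-clauses describe.

import numpy as np

def engine(window, kernel, mode, activation=lambda x: max(x, 0.0)):
    """window, kernel: flattened M*N float arrays; mode: 'conv' | 'avg' | 'max'."""
    products = window * kernel                # the M×N multipliers
    if mode == 'max':                         # nodes output the larger input
        return products.max()
    total = products.sum()                    # nodes sum their inputs
    return activation(total) if mode == 'conv' else total

win = np.random.rand(9)                       # a 3x3 window, flattened
print(engine(win, kernel=np.random.rand(9), mode='conv'))  # trained weights
print(engine(win, kernel=np.full(9, 1 / 9), mode='avg'))   # 1/(M*N) kernel
print(engine(win, kernel=np.ones(9), mode='max'))          # all-ones kernel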

US Pat. No. 10,182,216

EXTENDED COLOR PROCESSING ON PELICAN ARRAY CAMERAS

FotoNation Limited, (IE)...

1. A method of generating an image of a scene using a camera array including at least one camera that captures an RGB image of a scene and at least one camera that captures a black and white (B/W) image of the scene, the method comprising:
obtaining input images captured by a plurality of cameras that includes a camera that captures an RGB image and a camera that captures a B/W image, where the input images include a first RGB input image that includes image information captured in at least three channels (RGB) of information and a second B/W input image that includes image information captured in a single black and white (B/W) channel of information;
generating a fused image using a processor configured by software to:
measure parallax using the input images captured by the plurality of cameras to produce a depth map;
normalize the second B/W input image in the photometric reference space of the first RGB input image;
cross-channel normalize the first RGB input image with respect to the B/W input image by applying gains and offsets to pixels of the first RGB input image; and
perform cross-channel fusion using the first RGB input image and the second B/W input image to produce an image.
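
A hedged sketch of the gain/offset step: one simple choice (an assumption; the patent does not state which statistics are used) is to match each RGB channel's mean and standard deviation to the B/W image before fusion.

import numpy as np

def cross_channel_normalize(rgb, bw):
    """rgb: HxWx3 float array; bw: HxW float array in the same photometric
    reference space. Returns the RGB image with per-channel gains and offsets
    applied so its statistics match the B/W channel."""
    out = np.empty_like(rgb)
    for c in range(3):
        gain = bw.std() / (rgb[..., c].std() + 1e-9)
        offset = bw.mean() - gain * rgb[..., c].mean()
        out[..., c] = gain * rgb[..., c] + offset
    return out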

US Pat. No. 10,657,628

METHOD OF PROVIDING A SHARPNESS MEASURE FOR AN IMAGE

FotoNation Limited, Galw...

1. A method of providing a sharpness measure for an image comprising:
detecting an object region within an image;
obtaining meta-data for the image;
scaling the object region;
calculating a gradient map for the scaled object region;
comparing the gradient map to a threshold determined for the image;
obtaining, based at least in part on the comparing, a filtered gradient map of values exceeding the threshold;
determining a sharpness measure for the object region as a function of the filtered gradient map of values, the sharpness measure being proportional to at least one of the filtered gradient map values; and
tracking the object region over multiple image frames based at least in part on the sharpness measure,
wherein the threshold for the image is a function of at least: a contrast level for the detected object region, a distance to a subject, and an ISO value used for image acquisition.
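
A toy version of the measure (the threshold's dependence on contrast, subject distance and ISO is left abstract here, as in the claim):

import numpy as np

def sharpness(region, threshold):
    """region: 2D array for the scaled object region; threshold: per-image value."""
    gy, gx = np.gradient(region.astype(float))
    grad = np.hypot(gx, gy)                   # gradient map
    filtered = grad[grad > threshold]         # filtered gradient map
    # Proportional to the filtered gradient values, per the claim.
    return filtered.mean() if filtered.size else 0.0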

US Pat. No. 10,540,586

METHOD AND SYSTEM FOR TRACKING AN OBJECT

FotoNation Limited, Galw...

1. A method of tracking an object across a stream of images comprising:
a) passing an initial frame of said stream of images through a first neural network comprising a plurality of convolutional layers, each convolutional layer producing at least one feature map;
b) providing a feature map output from said first neural network to a second multi-classifier neural network comprising at least one fully-connected layer to determine a region of interest (ROI) bounding an object of a given class in said initial frame;
c) storing a feature map output from a first convolutional layer of said first neural network corresponding to said ROI as weights for at least a first layer of neurons of a third multi-layer neural network;
d) acquiring a subsequent frame from said stream of images;
e) scanning at least a portion of said frame ROI by ROI using said first neural network to produce respective feature maps from said first convolutional layer for each ROI;
f) providing said feature maps to said third multi-layer neural network to provide an output proportional to the level of match between the feature map values used for the weights of the first layer of neurons and the feature map values provided for a candidate ROI to identify a candidate ROI having a feature map best matching the stored feature map;
g) responsive to said match meeting a threshold, updating the stored feature map indicative of the features of said object according to the feature map for the best matching candidate ROI including updating the weights for said neurons according to the feature map for the best matching candidate ROI from said subsequent frame; and
h) repeating steps d) to g) until said match fails to meet said threshold.

US Pat. No. 10,511,786

IMAGE ACQUISITION METHOD AND APPARATUS

FotoNation Limited, Galw...

1. A system comprising:
one or more processors;
an image sensor;
memory comprising computer executable instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
obtaining a first image via the image sensor based at least in part on a first exposure level;
identifying a first set of pixels of the first image based at least in part on a light thresholding curve and first intensities associated with the first set of pixels;
identifying a second set of pixels of the first image based at least in part on a dark threshold curve and second intensities associated with the second set of pixels;
determining an underexposure adjustment level based at least in part on the first set of pixels;
determining an overexposure adjustment level based at least in part on the second set of pixels;
obtaining, via the image sensor, a second image based at least in part on the underexposure adjustment level; and
obtaining, via the image sensor, a third image based at least in part on the overexposure adjustment level.

US Pat. No. 10,482,618

SYSTEMS AND METHODS FOR HYBRID DEPTH REGULARIZATION

FotoNation Limited, Galw...

11. A depth sensing method, comprising:
obtaining image data for a plurality of images from multiple viewpoints using a plurality of cameras, wherein the image data for the plurality of images comprises a reference image and at least one alternate view image;
generating a raw depth map containing depth estimates for pixels within the reference image using the image data for the reference image and the image data for the at least one alternate view image using a first depth estimation process, and a confidence map describing reliability of depth estimates contained within the raw depth map; and
generating a regularized depth map by:
computing a secondary depth map containing depth estimates for pixels within the reference image using a second depth estimation process, wherein the second depth estimation process calculates depth using the image data for the reference image and the image data for the at least one alternate view image, by using a different estimation technique than that of the first depth estimation process; and
computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map, where a depth estimate for a pixel in the reference image is selected from the raw depth map when the depth estimate is indicated as being reliable by the confidence map.
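
The composite step reduces to a masked selection; a minimal sketch, with the confidence test collapsed to a boolean mask:

import numpy as np

def composite_depth(raw_depth, secondary_depth, confident):
    """All arguments are HxW arrays; `confident` is True where the confidence
    map marks the raw (first-process) estimate as reliable."""
    return np.where(confident, raw_depth, secondary_depth)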

US Pat. No. 10,467,469

OPTICAL SYSTEM FOR AN IMAGE ACQUISITION DEVICE

FotoNation Limited, Galw...

1. An optical system for an image acquisition device comprising:
an image sensor comprising an array of pixels including pixels sensitive to IR wavelengths for acquiring an image; and
multiple optical components formed in a single piece of plastic material, comprising a prism shape having first and second opposing surfaces and third and fourth opposing surfaces, formed along the prism shape to provide a lens assembly having a predetermined focal length, the lens assembly including:
a collecting lens having a convex surface positioned along the first surface to initially transmit light received from a scene along a first optical axis and then along a second axis transverse to the first optical axis for focusing on the image sensor, the lens assembly arranged to focus IR light received from a given object distance on a surface of the image sensor, the lens assembly further including:
a first reflective coating formed along the third surface for reflecting collected light along the second axis transverse to said first optical axis;
a second reflective coating formed along the fourth surface for further reflecting the collected light along a third axis parallel to the first optical axis; and
a second lens positioned along the second surface to focus the collected light on the image sensor so that said optical system has a length, as measured along the first optical axis and the third axis, less than the predetermined focal length of the lens assembly.

US Pat. No. 10,356,346

METHOD FOR COMPENSATING FOR OFF-AXIS TILTING OF A LENS

FotoNation Limited, Galw...

1. An image acquisition system comprising a lens assembled to project an image onto an image sensor, where the lens is tilted relative to the image sensor, the image acquisition system including a calibration module configured to:
acquire a set of calibrated parameters [matrix not reproduced] corresponding to said tilting of said lens, where z is the optical axis relative to which said lens is tilted;
acquire an image from said image sensor through said lens, where Px′ and Py′ indicate a coordinate of a pixel in said acquired image;
map image information from said acquired image to a lens tilt compensated image according to the formulae [formulae not reproduced], where s comprises a scale factor given by [formula not reproduced] and where ux and uy indicate the location of a pixel in said lens tilt compensated image; and
store said lens tilt compensated image in a memory.

US Pat. No. 10,346,677

CLASSIFICATION AND ORGANIZATION OF CONSUMER DIGITAL IMAGES USING WORKFLOW, AND FACE DETECTION AND RECOGNITION

FotoNation Limited, Galw...

1. A processor-based image acquisition and processing system, comprising:
an image acquisition component including a lens and image sensor for acquiring a digital image, and
a processing component including a processor and a memory having processor-readable code embedded therein for programming the processor to perform a method of classifying and archiving images including face regions that are acquired with the image acquisition component, the method comprising:
displaying, within a graphical user interface, an image that depicts one or more faces; detecting one or more faces within the image;
receiving user input that selects a face of the one or more faces to be a currently-selected face;
selecting a set of images, from a collection of images, that include face regions which closely match the currently-selected face, by first normalizing the face regions in the set of images and then extracting face classifier parameters from the face regions in the set of images;
concurrently with display of the currently-selected face, displaying each image in the set of images;
selecting a target image from the set of images by first determining a subset of N most probable matches, based on whether a majority of members of the set have like identity, and then selecting the target image based on whether the geometric distance between the faceprint of the currently-selected face and the faceprint of the target image lies within a certain geometric distance in facespace; and
associating the currently-selected face with a person to which the target image corresponds, wherein the method is performed by one or more computing devices.

US Pat. No. 10,304,166

EYE BEAUTIFICATION UNDER INACCURATE LOCALIZATION

FotoNation Limited, Galw...

1. A method of enhancing an appearance of a face within a digital image, comprising using a processor in:
acquiring a digital image of a scene including a face, including capturing the image using a lens and an image sensor of a processing device that includes the processor, or receiving the acquired image following capture of the image by another device that includes a lens and an image sensor, or a combination thereof;
identifying a group of pixels that includes a pupil region of an eye region within the face in the digital image;
identifying a border between said pupil region and an iris region or a sclera region within the eye region using luminance information;
adjusting and enhancing the acquired digital image, including adding one or more glint pixels at a pupil side of the border between said iris and said pupil to generate an enhanced image; and
displaying, transmitting, communicating or digitally storing or otherwise outputting the enhanced image or a further processed version of the enhanced image.

US Pat. No. 10,229,504

METHOD AND APPARATUS FOR MOTION ESTIMATION

FotoNation Limited, Galw...

1. A method of estimating motion between a pair of image frames of a given scene comprising the steps of:
a) calculating respective integral images for each of said image frames;
b) selecting at least one corresponding region of interest within each frame; and
c) for each region of interest:
i. calculating an integral image profile from each integral image, each profile comprising an array of elements, each element comprising a sum of pixel intensities from successive swaths of said region of interest for said frame;
ii. correlating said integral image profiles to determine a relative displacement of said region of interest between said pair of frames; and
iii. dividing each region of interest into a plurality of further regions of interest; and
d) repeating step c) until a required hierarchy of estimated motion for successively divided regions of interest is provided, wherein said calculating an integral image profile comprises sub-sampling said integral image at a first sub-sampling interval at a first selected level of said required hierarchy and for each repetition of step c), sub-sampling said integral image at successively smaller sub-sampling intervals.
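
An illustrative sketch of one level of the hierarchy: column-sum profiles are read in O(width) operations from the integral image, and the relative displacement is the shift that maximises their normalised correlation. Names and the search range are assumptions.

import numpy as np

def column_profile(integral, x0, y0, x1, y1):
    """Per-column sums over the ROI [y0:y1, x0:x1], from an integral image
    with integral[y, x] = sum of img[:y, :x] (shape (H+1, W+1))."""
    return (integral[y1, x0 + 1:x1 + 1] - integral[y1, x0:x1]
            - integral[y0, x0 + 1:x1 + 1] + integral[y0, x0:x1])

def displacement(profile_a, profile_b, max_shift=8):
    """Best relative shift of b against a by normalised correlation."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = profile_a[max(0, s):len(profile_a) + min(0, s)]
        b = profile_b[max(0, -s):len(profile_b) + min(0, -s)]
        score = np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * len(a) + 1e-9)
        if score > best_score:
            best, best_score = s, score
    return best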

US Pat. No. 10,656,391

LENS SYSTEM FOR A CAMERA MODULE

FotoNation Limited, Galw...

1. A camera module for a shallow form factor device, the camera module comprising an optical system comprising:
seven even aspheric singlet lens elements formed of a molded plastics material, a flat image sensor, and
a central aperture stop with at least 3 lens elements disposed in front of said aperture stop and at least 3 rear lens elements disposed between said aperture stop and said image sensor, said aperture stop having a pupil wide enough to provide an aperture having an f-number of at least f/1.0,
wherein said optical system has a total track length (TTL) less than about 8.2 mm and a minimum spacing of about 0.6 mm between a surface of the rearmost lens element and an imaging plane of said image sensor, where sag z at height h for a surface of each lens element is described by the formula:
z = f(h^2, R, k) + A1*h^4 + A2*h^6 + A3*h^8 + A4*h^10 + A5*h^12 + A6*h^14,
where R is the radius of curvature, k is the conic constant, and A1 to A6 are aspheric coefficients, and where the magnitude of the aspheric coefficient A1 of at least one surface of each of said rear lens elements is greater than 5×10^-2.
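
A numerical check of the sag formula, assuming the standard even-asphere conic base term for f(h^2, R, k) (the claim does not spell f out) and sample coefficients chosen only to satisfy |A1| > 5×10^-2:

import math

def sag(h, R, k, A):
    """A: list of six aspheric coefficients A1..A6."""
    h2 = h * h
    # Standard conic base term (an assumption; the claim leaves f abstract).
    base = h2 / (R * (1 + math.sqrt(1 - (1 + k) * h2 / (R * R))))
    # A1*h^4 + A2*h^6 + ... + A6*h^14, i.e. powers (h^2)^2 .. (h^2)^7.
    return base + sum(a * h2 ** (i + 2) for i, a in enumerate(A))

# e.g. a rear-element surface with |A1| > 5e-2, per the claim's constraint:
print(sag(0.5, R=3.2, k=-1.0, A=[6e-2, -1e-3, 2e-4, 0.0, 0.0, 0.0]))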

US Pat. No. 10,643,383

SYSTEMS AND METHODS FOR 3D FACIAL MODELING

FotoNation Limited, Galw...

1. A 3D facial modeling system comprising:
a plurality of cameras configured to capture images from different viewpoints;
a processor; and
a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to:
obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras;
locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images;
select at least one pair of key feature points, comprising a labeled key feature point in a first image of the plurality of images, and the correspondingly labeled key feature point in a second image in the plurality of images;
determine a depth value for each selected key feature point in the first image by calculating the disparity between the at least one pair of key feature points; and
generate a 3D model of the face by providing a convolutional neural network with the determined depth values and the first image.
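
The depth value in the disparity step follows from the usual rectified-pair relation Z = f·B/d, with focal length f in pixels and baseline B between the paired cameras, both assumed known from calibration:

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation Z = f*B/d for a rectified camera pair."""
    return focal_px * baseline_m / disparity_px

# e.g. a key feature point with 12 px disparity, f = 1400 px, B = 10 mm:
z = depth_from_disparity(12.0, 1400.0, 0.010)   # ~1.17 m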

US Pat. No. 10,638,099

EXTENDED COLOR PROCESSING ON PELICAN ARRAY CAMERAS

FotoNation Limited, Galw...

1. A method of generating an image of a scene using a camera array including at least one camera that captures an RGB image of a scene and at least one camera that captures near-infrared (IR) spectral wavelengths of the scene, the method comprising:obtaining input images captured by a plurality of cameras that includes a camera that captures an RGB image and a camera that captures near-IR wavelengths, where the input images includes a first input image that includes image information captured in at least three channels (RGB) of information and a second input image that includes image information captured in at least a near-IR channel of information;
generating a fused image using a processor configured by software to:
measure parallax using the input images captured by the plurality of cameras to produce a depth map;
normalize the second input image in the photometric reference space of the first input image;
cross-channel normalize the first input image with respect to the second input image by applying gains and offsets to pixels of the first input image; and
perform cross-channel fusion using the first input image and the second input image to produce an image.

US Pat. No. 10,628,568

BIOMETRIC RECOGNITION SYSTEM

FotoNation Limited, Galw...

1. A biometric recognition system for a handheld computing device incorporating an inertial measurement unit (IMU) comprising a plurality of accelerometers and at least one gyroscope, said system comprising a tremor analysis component arranged to:
obtain from said IMU accelerometer signals indicating device translational acceleration along each of X, Y and Z axes as well as a gyroscope signal indicating rotational velocity about the Y axis during a measurement window;
filter each of said IMU signals to provide filtered frequency components for said signals during said measurement window;
provide a spectral density estimation for each of said filtered accelerometer magnitude signal and said filtered gyroscope signal;
determine an irregularity for each spectral density estimation, each spectral density estimation comprising a periodogram, wherein said irregularity comprises:
an irregularity (IJP) for said accelerometer magnitude and said gyroscope periodograms calculated as follows:
[formula not reproduced in this listing] and
an irregularity (IKP) for said accelerometer magnitude and said gyroscope periodograms calculated as follows:
[formula not reproduced in this listing]
where pm holds the magnitude coefficient of each periodogram, and N is the number of elements in pm; and
based on said determined irregularities, authenticate a user of said handheld computing device, said system further comprising a first authentication component requiring a higher level of authentication than provided by said tremor analysis component, said system being responsive to said first authentication component authenticating a given user to place said handheld computing device in a first authentication state and said tremor analysis component being responsive to said handheld computing device entering said first authentication state to periodically verify said user to maintain said handheld computing device in a second authentication state, said second authentication state providing greater access to said handheld computing device than said first authentication state.
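
The IJP and IKP formulas are not reproduced in this listing. Purely as an illustrative stand-in (not the claimed formulas), a periodogram and a generic irregularity measure over its magnitude coefficients pm might look like:

    import numpy as np

    def periodogram(x):
        # One-sided magnitude periodogram of a windowed IMU signal
        X = np.fft.rfft(x - np.mean(x))
        return np.abs(X) ** 2 / len(x)

    def irregularity(pm):
        # Hypothetical jaggedness measure: mean squared difference of
        # neighbouring magnitude coefficients, normalised by total power.
        # The claim's actual IJP/IKP definitions are not reproduced above.
        pm = np.asarray(pm, dtype=float)
        N = len(pm)
        return np.sum(np.diff(pm) ** 2) / (N * np.sum(pm ** 2))

    accel_mag = np.random.randn(512)  # accelerometer magnitude, one window
    print(irregularity(periodogram(accel_mag)))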

US Pat. No. 10,574,905

SYSTEM AND METHODS FOR DEPTH REGULARIZATION AND SEMIAUTOMATIC INTERACTIVE MATTING USING RGB-D IMAGES

FotoNation Limited, Galw...

1. An array camera, comprising:
a plurality of cameras that capture images of a scene from different viewpoints;
a processor; and
memory containing an image processing pipeline application;
wherein the image processing pipeline application directs the processor to:
capture a set of images using a group of cameras from the plurality of cameras;
receive (i) an image comprising a plurality of pixel color values for pixels in the image and (ii) an initial depth map corresponding to the depths of the pixels within the image, wherein the initial depth map is generated using the set of images; and
regularize the initial depth map into a dense depth map using pixels for which depth is known to estimate depths of pixels for which depth is unknown, wherein regularizing the initial depth map into the dense depth map further comprises:
performing Laplacian matting to compute a Laplacian L;
obtaining a binary confidence map C that indicates whether a depth at a given pixel is confident, where the confidence map C is obtained by thresholding the gradient of the image;
wherein the Laplacian matting is optimized by solving a reduced linear system for depth values of pixels that are marked as non-confident based on the confidence map C; and
using the dense depth map to perform image-based rendering.
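
A compact sketch of the regularization step: solve (L + λ·diag(C))·d = λ·C·d0 so that confident pixels anchor the solution and the Laplacian propagates depth elsewhere. A 4-neighbour grid Laplacian stands in for the image-dependent matting Laplacian here, and λ is an invented weight:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def regularize_depth(d0, C, lam=100.0):
        # Solve (L + lam*diag(C)) d = lam*C*d0: pixels with C=1 are anchored
        # to their sparse depths d0; L spreads depth into unknown pixels.
        h, w = d0.shape
        n = h * w
        idx = np.arange(n).reshape(h, w)
        rows, cols = [], []
        for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            rows.extend([a.ravel(), b.ravel()])
            cols.extend([b.ravel(), a.ravel()])
        rows, cols = np.concatenate(rows), np.concatenate(cols)
        W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W.tocsr()
        c = C.ravel().astype(float)
        d = spla.spsolve((L + lam * sp.diags(c)).tocsr(), lam * c * d0.ravel())
        return d.reshape(h, w)

    d0 = np.zeros((32, 32)); d0[8, 8], d0[24, 24] = 1.0, 3.0
    C = (d0 > 0).astype(float)   # confident only where depth is known
    print(regularize_depth(d0, C).round(2)[::8, ::8])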

US Pat. No. 10,560,684

SYSTEM AND METHODS FOR CALIBRATION OF AN ARRAY CAMERA

FotoNation Limited, Galw...

1. A method for manufacturing an array camera device, the method comprising:
assembling an array of cameras comprising a plurality of imaging components that capture images of a scene from different viewpoints;
configuring the array of cameras to communicate with at least one processor;
configuring the processor to communicate with at least one type of memory; and
performing a calibration process for the array of cameras, where the calibration process comprises:
capturing images of a test pattern using the array of cameras, where each of the plurality of imaging components captures an image from a particular viewpoint;
generating a first set of scene independent geometric corrections for image data captured by a first imaging component using a first set of test pattern image data captured by the first imaging component and data describing the test pattern using the processor;
generating a corrected test pattern image based on the first set of scene independent geometric corrections and the first set of test pattern image data captured by the first imaging component using the processor;
generating a second set of scene independent geometric corrections for image data captured by a second imaging component of the plurality of imaging components using a second set of test pattern image data captured by the second imaging component and data for the corrected test pattern image using the processor; and
loading calibration information into the memory, wherein the calibration information is based on the first and second sets of generated scene independent geometric corrections.

US Pat. No. 10,430,682

SYSTEMS AND METHODS FOR DECODING IMAGE FILES CONTAINING DEPTH MAPS STORED AS METADATA

FotoNation Limited, Galw...

1. A system for rendering an image using an image container file, the system comprising:
a processor; and
memory containing a rendering application and an image container file, wherein the image container file comprises:
an encoded image synthesized from a plurality of images captured by a plurality of heterogeneous cameras, wherein each camera of the plurality of heterogeneous cameras captures the scene from a different viewpoint;
a depth map that specifies depths from a reference viewpoint for pixels in the encoded image based on disparity between pixels of the plurality of images;
an auxiliary map that provides information corresponding to pixel locations within a synthesized image; and
metadata describing the image container file, wherein the metadata comprises offset information to locate the encoded image, the depth map, and the auxiliary map;
wherein the rendering application configures the processor to:
locate the encoded image within the image container file;
decode the encoded image;
locate the depth map and the auxiliary map within the image container file; and
post process the decoded image to apply a depth based effect to the pixels of the decoded image based on the depth map, the auxiliary map, and the metadata to create a rendered image.
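
A sketch of offset-based location of the container's sections. The header layout (a magic tag plus three 32-bit offsets) is invented for illustration; the claim only requires that the metadata carry offsets to the encoded image, depth map and auxiliary map:

    import struct

    # Hypothetical header layout, for illustration only
    HEADER = struct.Struct("<4sIII")  # magic, image_off, depth_off, aux_off

    def locate_sections(blob):
        # Use the metadata offsets to locate each section of the container
        magic, image_off, depth_off, aux_off = HEADER.unpack_from(blob, 0)
        assert magic == b"LFIM", "unexpected container magic"
        return {"image": image_off, "depth": depth_off, "aux": aux_off}

    blob = HEADER.pack(b"LFIM", 16, 4096, 8192) + b"\x00" * 8192
    print(locate_sections(blob))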

US Pat. No. 10,306,120

CAPTURING AND PROCESSING OF IMAGES CAPTURED BY CAMERA ARRAYS INCORPORATING CAMERAS WITH TELEPHOTO AND CONVENTIONAL LENSES TO GENERATE DEPTH MAPS

FotoNation Limited, (IE)...

1. A camera array, comprising:
a plurality of cameras including at least one camera comprising a telephoto lens and at least one camera comprising a wider field of view lens, where the plurality of cameras are configured to capture a plurality of images comprising different images representing different viewpoints of a same scene, wherein the camera array comprises a one-dimensional array of cameras;
control circuitry for independently controlling imaging parameters and operation of each camera in the plurality of cameras, wherein at least two cameras in the plurality of cameras have similar integration time settings;
a parallax confirmation and measurement module for detecting and metering parallax using pixel correlation across cameras with similar integration time conditions; and
an image processing pipeline module configured to generate a depth map using the parallax information.

US Pat. No. 10,250,871

SYSTEMS AND METHODS FOR DYNAMIC CALIBRATION OF ARRAY CAMERAS

FotoNation Limited, (IE)...

1. A method of dynamically generating geometric calibration data for an array of cameras, comprising:
acquiring a set of images of a scene using a plurality of cameras, where the set of images comprises a reference image and an alternate view image;
detecting features in the set of images using a processor directed by an image processing application;
identifying within the alternate view image features corresponding to features detected within the reference image using a processor directed by an image processing application;
rectifying the set of images based upon a set of geometric calibration data using a processor directed by an image processing application;
determining residual vectors for geometric calibration data at locations where features are observed within the alternate view image based upon observed shifts in locations of features identified as corresponding in the reference image and the alternate view image using a processor directed by an image processing application;
determining updated geometric calibration data for a camera that captured the alternate view image based upon the residual vectors using a processor directed by an image processing application, wherein determining updated geometric calibration data comprises:
using at least an interpolation process to generate a residual vector calibration field from the residual vectors;
mapping the residual vector calibration field to a set of basis vectors; and
generating a denoised residual vector calibration field using a linear combination of less than the complete set of basis vectors; and
rectifying an image captured by the camera that captured the alternate view image based upon the updated geometric calibration data using a processor directed by an image processing application.
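
The denoising step admits a compact sketch: map the residual vector calibration field onto a basis and rebuild it from a linear combination of fewer than all basis vectors. The SVD basis and grid size are illustrative choices:

    import numpy as np

    def denoise_residual_field(field, k=4):
        # Project the residual field onto its SVD basis and reconstruct
        # from the top-k components only (fewer than the complete set)
        U, s, Vt = np.linalg.svd(field, full_matrices=False)
        s[k:] = 0.0
        return (U * s) @ Vt

    # Smooth calibration drift plus feature-measurement noise, 20x20 grid
    yy, xx = np.mgrid[0:20, 0:20]
    field = 0.05 * xx + 0.02 * yy + 0.3 * np.random.randn(20, 20)
    print(np.abs(denoise_residual_field(field) - (0.05 * xx + 0.02 * yy)).mean())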

US Pat. No. 10,235,590

SYSTEMS AND METHODS FOR ENCODING IMAGE FILES CONTAINING DEPTH MAPS STORED AS METADATA

FotoNation Limited, Galw...

1. A system for generating modifiable blur effects, comprising:
a processor;
an array of cameras comprising a plurality of cameras;
a display; and
a memory containing machine readable instructions;
wherein the machine readable instructions direct the processor to:
obtain image data using the array of cameras, where the image data comprises a plurality of images of a scene captured from different viewpoints that includes a reference image captured from a reference viewpoint;
create a depth map that specifies depths for pixels in the reference image using at least a portion of the image data;
apply a depth based blur effect during synthesis of an image using the reference image and the depth map;
encode the synthesized image and the reference image;
write the encoded images and the depth map to an image file;
store the image file in the memory;
retrieve the image file from the memory;
decode the synthesized image;
display the synthesized image using the display;
create a modified synthesized image by applying a second depth based blur effect using the depth map;
display the modified synthesized image using the display;
encode the modified synthesized image; and
store the modified synthesized image in the image file.
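
The depth based blur effect can be sketched as a layered variable blur whose radius grows with distance from a chosen focus depth; applying it a second time with different parameters mirrors the claim's second blur effect. Layer count and strength are invented parameters:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def depth_blur(image, depth, focus_depth, strength=2.0, n_layers=8):
        # Blur each depth layer in proportion to its distance from focus
        out = np.zeros_like(image, dtype=float)
        edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (depth >= lo) & (depth < hi)
            sigma = strength * abs(0.5 * (lo + hi) - focus_depth)
            out[mask] = (gaussian_filter(image, sigma) if sigma > 0 else image)[mask]
        return out

    img = np.random.rand(64, 64)
    depth = np.tile(np.linspace(1.0, 5.0, 64), (64, 1))  # depth ramp
    first = depth_blur(img, depth, focus_depth=1.0)      # initial effect
    second = depth_blur(img, depth, focus_depth=5.0)     # modified effect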

US Pat. No. 10,713,186

PERIPHERAL PROCESSING DEVICE

FotoNation Limited, Galw...

1. A peripheral processing device comprising:
a physical interface for connecting said processing device to a host computing device through a communications protocol;
a local controller connected to local memory across an internal bus and being arranged to provide input/output access to data stored on said peripheral processing device to said host computing device through a file system application programming interface, API;
a neural processor comprising at least one network processing engine for processing a layer of a neural network according to a network configuration;
a memory for at least temporarily storing network configuration information for said at least one network processing engine, input image information for processing by one of said at least one network processing engine, intermediate image information produced by said at least one network processing engine and output information produced by said at least one network processing engine,
said local controller being arranged to receive said network configuration information for each network processing engine through a file system API write command;
said local controller being arranged to receive said input image information for processing by said neural processor through a file system API write command; and
said local controller being arranged to write said output information to said local memory for retrieval by said host computing device through a file system API read command.

US Pat. No. 10,701,277

AUTOMATIC EXPOSURE MODULE FOR AN IMAGE ACQUISITION SYSTEM

FotoNation Limited, Galw...

1. A method for automatically determining exposure settings for an image acquisition system, the image acquisition system comprising an infra-red (IR) illumination source and an image sensor responsive to selected IR wavelengths, the method comprising:
maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system, wherein the image exposure settings comprise an exposure time value and a gain value for the image sensor, and wherein in the look-up tables a product of the exposure time and gain values increases with corresponding distance values;
acquiring an image of a subject from a camera module while the subject is illuminated by the IR illumination source;
determining a light condition occurring during the acquisition based on the acquired image of the subject, including detecting at least one feature of the subject in the acquired image and determining the light condition based on a region of the acquired image corresponding to the detected at least one feature;
calculating a distance between the subject and the camera module during the acquisition based on the acquired image of the subject;
determining whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition;
responsive to correction being required:
selecting image exposure settings corresponding to the calculated distance from a look-up table corresponding to the determined light condition; and
acquiring a new image of the subject using the selected image exposure settings.
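
A minimal sketch of the look-up step; the light-condition buckets, distance break points and settings values are all invented, but the product of exposure time and gain grows with distance as the claim requires:

    # light condition -> [(max distance mm, (exposure ms, sensor gain))]
    EXPOSURE_LUTS = {
        "dark":   [(300, (8.0, 4.0)), (500, (12.0, 6.0)), (800, (16.0, 8.0))],
        "bright": [(300, (2.0, 1.0)), (500, (4.0, 1.5)), (800, (6.0, 2.0))],
    }

    def select_settings(light_condition, distance_mm):
        # Pick (exposure time, gain) for the measured subject distance from
        # the LUT of the detected light condition
        for max_dist, settings in EXPOSURE_LUTS[light_condition]:
            if distance_mm <= max_dist:
                return settings
        return EXPOSURE_LUTS[light_condition][-1][1]  # clamp to farthest bucket

    print(select_settings("dark", 420))  # -> (12.0, 6.0)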

US Pat. No. 10,542,208

SYSTEMS AND METHODS FOR SYNTHESIZING HIGH RESOLUTION IMAGES USING IMAGE DECONVOLUTION BASED ON MOTION AND DEPTH INFORMATION

FotoNation Limited, Galw...

1. An array camera, comprising:
a processor; and
a memory connected to the processor and configured to store an image deconvolution application;
wherein the image deconvolution application configures the processor to:
obtain light field image data, where the light field image data comprises an image having a plurality of pixels and a depth map;
determine motion data by applying ego-motion techniques to estimate motion data by:
detecting features in a first image and matching those features to a second image; and
generating an optical flow describing the estimate of the motion of the array camera;
generate a depth-dependent point spread function based on the image, the depth map, and the motion data;
measure the quality of the image based on the generated depth-dependent point spread function;
when the measured quality of the image is within a quality threshold, incorporate the image into the light field image data.

US Pat. No. 10,491,819

PORTABLE SYSTEM PROVIDING AUGMENTED VISION OF SURROUNDINGS

FotoNation Limited, Galw...

1. A portable system providing augmented peripheral vision for awareness of surroundings, comprising:
a helmet permitting a user wearing the helmet to receive a first field of view in the surroundings based on optical information received with the user's natural vision directly from the surroundings without digital video processing;
a plurality of camera units, mounted about the helmet to generate multiple channels of video data, each camera channel capturing a different field of view of a scene in a region surrounding the helmet; and
processing circuitry coupled to generate a composite field of view from some of the channels of video data, where said processing circuitry comprises a first processing unit coupled to receive the video data, and automatically (i) provide images of the scene for presentation on display units based on programmably adjustable fields of view, (ii) detect an object of potential interest based on predefined criteria, and then (iii) change the camera field of view to provide an enlarged image of the object,
where the first field of view subtends an angle directly in front of the user's head and the cameras are positioned along a surface of the helmet to provide a peripheral view of the surroundings to the left of the first field of view and to provide a peripheral view of the surroundings to the right of the first field of view.

US Pat. No. 10,474,894

IMAGE PROCESSING METHOD AND SYSTEM FOR IRIS RECOGNITION

FotoNation Limited, Galw...

1. An image processing method for iris recognition of a predetermined subject, comprising:
a) identifying one or more iris regions within one or more eye regions of a probe image, the one or more eye regions illuminated by an infra-red (IR) illumination source, the probe image being overexposed until skin portions of the probe image are saturated; and
b) analyzing the one or more identified iris regions to detect whether they belong to the predetermined subject.

US Pat. No. 10,418,001

REAL-TIME VIDEO FRAME PRE-PROCESSING HARDWARE

FotoNation Limited, Galw...

1. A system for processing a sequence of frames of video data based on data acquired for a plurality of pixels with an imaging sensor, comprising:
image processing circuitry providing one or more functions taken from the group consisting of adjusting exposure or white balance, demosaicing, color alteration, gamma correction, converting between color-encoded signals, and downsampling;
advanced hardware circuitry providing additional image processing functions, configured to operate in conjunction with said image processing circuitry, comprising a plurality of image processing modules, each including a processor, with multiple ones of the image processing modules arranged into processing chains according to a systolic architecture, where the image processing modules are interconnected and configured to perform a sequence of operations on data that flows between them, the image processing modules including one or more data processing units taken from the group consisting of a plurality of pixel processing modules, one or more frame processing modules, one or more region processing modules and one or more kernel processing modules;
a CPU connected to receive and process RGB data; and
data storage including an image and data cache and a long-term data store, the image and data cache configured to receive raw data from the sensor and RGB data from said image processing circuitry, and the long-term data store configured to store processed RGB data received from the CPU and image data, based on the processed RGB data, in an MPEG format or in a JPEG format, the additional image processing functions of the advanced hardware circuitry providing in the data storage one or more scene processing primitives where the advanced hardware circuitry provides a primitive including a direct pixel to map-pixel mapping which includes a color thresholding, based on multiple thresholds, with data values of one or more of the thresholds used for determining how close a particular pixel value is to a predetermined color space value indicative of how close an image pixel is to skin color.
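
The skin-color primitive reduces to a per-pixel distance test against a reference color with multiple thresholds. A sketch in YCbCr, with the reference value and thresholds invented for illustration:

    import numpy as np

    SKIN_CBCR = np.array([120.0, 150.0])   # illustrative skin-tone reference
    THRESHOLDS = (10.0, 20.0, 30.0)        # near / close / far from skin color

    def skin_map(cbcr_image):
        # Direct pixel-to-map-pixel mapping: grade how close each pixel is
        # to the reference skin color, via multiple thresholds
        dist = np.linalg.norm(cbcr_image - SKIN_CBCR, axis=-1)
        out = np.zeros(dist.shape, dtype=np.uint8)
        for level, t in enumerate(reversed(THRESHOLDS), start=1):
            out[dist <= t] = level  # 3 = closest to skin color, 0 = not skin
        return out

    pixels = np.array([[[121.0, 151.0], [90.0, 180.0]]])
    print(skin_map(pixels))  # [[3 0]]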

US Pat. No. 10,402,846

ANONYMIZING FACIAL EXPRESSION DATA WITH A SMART-CAM

FotoNation Limited, Galw...

1. A method of responding to a criterion-based request for information collected from one or more users who each meet the criterion while also complying with a user-requested privacy requirement, the method comprising:
using one or more computing devices, including a processor, performing each in a series of steps, the series comprising:
receiving, from a content provider, a request for information collected from users who meet one or more criteria;
based on the one or more criteria, collecting and retrieving timestream data for each among multiple ones of the users who meet the one or more criteria, which data comprises facial or audio expressions of each user, wherein the timestream data is associated with one or more user sessions for each among the multiple ones of the users who meet the one or more criteria;
performing types of detection or monitoring of one or more activities indicative of user attention or user reaction for each of the multiple users by running one or more software application programs comprising computer code on the processor, the software application programs including code operating to provide face tracking, face detection, face feature detection, eye gaze determination, eye tracking, audio expression determination, or determination of an emotional state;
in response to one among the multiple users selecting a high privacy level among multiple levels of privacy, applying one or more of the software application programs to automatically aggregate the timestream data collected for the one user which meets said one or more criteria with timestream data collected for one or more others of the multiple users, which meets said one or more criteria, into a statistical dataset by processing the timestreams to ensure the high level of privacy in the statistical dataset, including providing the statistical data set to the content provider without providing to the content provider individual timestream data collected for the one user who has requested the high level of privacy.

US Pat. No. 10,375,302

SYSTEMS AND METHODS FOR CONTROLLING ALIASING IN IMAGES CAPTURED BY AN ARRAY CAMERA FOR USE IN SUPER RESOLUTION PROCESSING USING PIXEL APERTURES

FotoNation Limited, Galw...

1. An array camera, comprising:
a plurality of cameras;
a processor configured to receive digital pixel data from the plurality of cameras via interface circuitry; and
memory containing an image processing pipeline application;
wherein the image processing pipeline application directs the processor to:
obtain a set of images of a scene that include aliasing from the plurality of cameras, wherein the aliasing is different in each image;
fuse portions of the set of images to form a fused image at each of a plurality of hypothesized disparities;
compare the portion of the fused image obtained at each hypothesized disparity to the scene captured in the set of images;
select a hypothesized disparity at which the portion of the fused image is most similar to the scene captured in the set of images accounting for aliasing as a disparity estimate; and
synthesize an image of the scene from a reference viewpoint using the set of images and disparity information including the disparity estimate.
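
A 1D sketch of the hypothesize-and-compare disparity search: shift the alternate view by each candidate disparity and keep the hypothesis that best matches the reference. (The claim fuses images before comparison; comparing the shifted view directly is a simplification here.)

    import numpy as np

    def estimate_disparity(ref, alt, max_disp=16):
        # For each hypothesized disparity d, measure how well the shifted
        # alternate view matches the reference over the valid overlap
        errors = []
        for d in range(max_disp + 1):
            shifted = np.roll(alt, d, axis=1)
            errors.append(((ref[:, d:] - shifted[:, d:]) ** 2).mean())
        return int(np.argmin(errors))  # best-matching hypothesis

    ref = np.random.rand(32, 64)
    alt = np.roll(ref, -5, axis=1)   # alternate view shifted by 5 pixels
    print(estimate_disparity(ref, alt))  # -> 5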

US Pat. No. 10,366,472

SYSTEMS AND METHODS FOR SYNTHESIZING HIGH RESOLUTION IMAGES USING IMAGES CAPTURED BY AN ARRAY OF INDEPENDENTLY CONTROLLABLE IMAGERS

FotoNation Limited, (IE)...

1. A method for generating an image of a scene using an imager array including a plurality of imagers that each capture an image of the scene, photometric calibration data for the imager array, geometric calibration data for the imager array, the method comprising:
obtaining input images captured by the plurality of imagers using a processor configured by image processing pipeline software, where the input images capture a scene in which depths of points in the imaged scene vary and each of the input images differs from the other input images due to scene dependent geometric displacements due to parallax experienced by each of the plurality of imagers based upon the different depths of the points in the imaged scene;
applying scene independent geometric corrections to the plurality of images using the geometric calibration data to obtain a plurality of geometrically registered images using the processor configured by image processing pipeline software;
determining scene dependent parallax information with respect to the input images based upon disparity relative to a reference point of view resulting from the different depths of points in the imaged scene using the processor configured by the image processing pipeline software, where the scene dependent parallax information comprises scene dependent geometric transformations;
determining an initial estimate of at least a portion of an image from a plurality of pixels from the plurality of input images based upon a total shift for each of the plurality of pixels relative to a reference view, where the total shift of a given pixel location is the combination of the scene independent geometric correction determined for the given pixel using the geometric calibration data and the scene dependent geometric transformation determined for the given pixel location; and
synthesizing an image using the initial estimate of the portion of the image.

US Pat. No. 10,311,554

METHOD OF PROVIDING A SHARPNESS MEASURE FOR AN IMAGE

FotoNation Limited, Galw...

1. An image processing device arranged to, and comprising the steps of:
detect an object region within an acquired image;
obtain meta-data for the acquired image;
scale the chosen object region to a fixed size;
calculate a gradient map for the scaled object region;
compare the gradient map against a threshold, wherein the threshold for the acquired image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO value used for image acquisition;
determine a sharpness measure for the object region as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values; and
responsive to the sharpness measure indicating the object region is not sufficiently focused, change focusing for acquiring a subsequent image.
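
A sketch of the measure: gradient magnitudes of the scaled object region are compared against a threshold derived from contrast, subject distance and ISO, and the surviving values form the sharpness measure. The threshold model (a simple product) is invented for illustration:

    import numpy as np

    def sharpness_measure(region, contrast, distance_m, iso):
        # Mean of gradient magnitudes that survive the acquisition-dependent
        # threshold; the measure is proportional to the filtered values
        gy, gx = np.gradient(region.astype(float))
        grad = np.hypot(gx, gy)                         # gradient map
        threshold = 0.02 * contrast * distance_m * (iso / 100.0)
        filtered = grad[grad > threshold]
        return filtered.mean() if filtered.size else 0.0

    region = np.random.rand(64, 64)                     # scaled to fixed size
    if sharpness_measure(region, contrast=1.0, distance_m=2.0, iso=400) < 0.5:
        pass  # not sufficiently focused: adjust focus, acquire a new frame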

US Pat. No. 10,311,649

SYSTEMS AND METHOD FOR PERFORMING DEPTH BASED IMAGE EDITING

FotoNation Limited, (IE)...

1. An array camera system for capturing and manipulating captured image data, comprising:
an array camera comprising a plurality of cameras, where each camera includes separate optics, and a plurality of sensor elements, and each camera is configured to independently capture an image of a scene;
a processor;
a display connected to the processor and configured to display images;
a memory connected to the processor;
software connected to the processor that directs the processor to:
capture images;
store the captured images in the memory; and
generate a depth map having depth information associated with the captured images of the scene; and
an image manipulation application within the memory that directs the processor to:
select a collection of pixels within at least one captured image based upon depth information, wherein the collection of pixels is selected by identifying a boundary of the collection of pixels based upon color and intensity values, and the associated depth information, where the boundary is determined by separately clustering each given pixel based upon depth, color, and intensity;
modify the pixels of the selected collection of pixels of the at least one captured image;
copy the modified pixels of the selected collection of pixels; and
paste the modified pixels of the selected collection of pixels into another image.

US Pat. No. 10,310,145

IMAGE ACQUISITION SYSTEM

FotoNation Limited, Galw...

1. An image acquisition system for acquiring iris images for use in biometric recognition of a subject, the system including an optical system comprising:
at least first and second lens systems each arranged in front of a common image sensor with each lens system including an optical axis in parallel spaced apart relationship to the optical axis of the other lens system, each lens system having a fixed focus, and a different aperture than the other lens system to provide a different angular field of view, each of the first and second lens systems comprising multiple optical elements, the first lens system having a closer focus and a smaller aperture than the second lens system, such that the image acquisition system can simultaneously acquire iris images across a focal range of at least from 200 mm to 300 mm.

US Pat. No. 10,275,709

METHOD AND SYSTEM FOR TRACKING AN OBJECT

FotoNation Limited, Galw...

1. A method of tracking an object across a stream of images comprising:
a) determining a region of interest (ROI) bounding said object in an initial frame of said stream of images;
b) providing a histogram of gradients (HOG) map for said ROI by:
i) dividing said ROI into an array of M×N cells, each cell comprising a plurality of image pixels;
ii) determining a HOG for each of said cells, each HOG comprising a plurality of q bins, each corresponding to a range of orientation angles, each bin having a value being a function of a number of instances of a pixel gradient in a cell corresponding to the bin;
d) storing said HOG map as indicative of the features of said object, including providing a multi-layer neural network, at least a first layer of said network comprising neurons, each initially weighted according to the HOG map for said ROI from said initial frame;
e) acquiring a subsequent frame from said stream of images;
f) scanning at least a portion of said frame ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features;
g) responsive to said match meeting a threshold, updating the stored HOG map indicative of the features of said object according to the HOG map for the best matching candidate ROI, said updating including updating the weights for said neurons according to the HOG map for the best matching candidate ROI from said subsequent frame; and
h) repeating steps e) to g) until said match fails to meet said threshold.
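
The HOG map of steps b)(i) and b)(ii) can be sketched directly: split the ROI into an M×N grid of cells and accumulate gradient orientations into q bins per cell. Block normalization, which a production HOG would add, is omitted:

    import numpy as np

    def hog_map(roi, m=4, n=4, q=8):
        # Histogram of unsigned gradient orientations, q bins per cell
        gy, gx = np.gradient(roi.astype(float))
        mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
        bins = np.minimum((ang / np.pi * q).astype(int), q - 1)
        ch, cw = roi.shape[0] // m, roi.shape[1] // n
        hog = np.zeros((m, n, q))
        for i in range(m):
            for j in range(n):
                b = bins[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
                w = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
                hog[i, j] = np.bincount(b, weights=w, minlength=q)
        return hog

    roi = np.random.rand(32, 32)
    print(hog_map(roi).shape)  # (4, 4, 8): an M x N grid of q-bin HOGs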

US Pat. No. 10,261,219

SYSTEMS AND METHODS FOR MANUFACTURING CAMERA MODULES USING ACTIVE ALIGNMENT OF LENS STACK ARRAYS AND SENSORS

FotoNation Limited, (IE)...

1. A method comprising:
providing a lens stack array having a plurality of active focal planes in conjunction with a sensor, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each focal plane is contained within a region of the lens stack array that does not contain pixels from another focal plane;
capturing images of a known target using the plurality of active focal planes at different spatial relationships between the lens stack array and the sensor, where the lens stack array comprises a plurality of lens stacks, and where each lens stack corresponds to a focal plane of the plurality of focal planes;
scoring the images captured by the plurality of active focal planes, where the resulting scores provide a direct comparison of the extent to which a central and a peripheral region of interest are each focused in each of the images at each of the spatial relationships, and where the resulting scores are indicative of each active lens stack's ability to focus;
comparing the resulting scores of captured images, wherein the comparison of scores further comprises computing:
a first best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active lens stack's ability to focus on the central region of interest according to a first predetermined criterion;
a second best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active lens stack's ability to focus on the peripheral region of interest according to a second predetermined criterion; and
at least a third intervening plane between the first and second best-fit planes; and
determining a final spatial relationship arrangement between the lens stack array and the sensor utilizing the comparison of the scores;
wherein the images are scored such that a score is provided for each region of interest visible in each image, the score being indicative of the extent to which the respective region of interest is focused in the image; and
the comparison of scores comprises determining mathematical relationships for each of a plurality of active focal planes that characterize the relationships between:
the scores of the extent to which the central region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor; and
the scores of the extent to which the at least one peripheral region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor.

US Pat. No. 10,218,889

SYSTEMS AND METHODS FOR TRANSMITTING AND RECEIVING ARRAY CAMERA IMAGE DATA

FotoNation Limited, (IE)...

1. A method of transmitting image data, comprising:
capturing image data using a first set of active cameras in an array of cameras;
generating a first line of image data by multiplexing at least a portion of the image data captured by the first set of active cameras using a predetermined process, wherein the predetermined process is selected from a plurality of predetermined processes for multiplexing captured image data;
generating a first set of additional data containing information identifying the cameras in the array of cameras that form the first set of active cameras and information indicating the predetermined process used to multiplex at least the portion of the image data;
transmitting the first set of additional data and the first line of image data;
capturing image data using a second set of active cameras in the array of cameras, wherein the second set of active cameras is different from the first set of active cameras;
generating a second line of image data by multiplexing at least a portion of the image data captured by the second set of active cameras;
generating a second set of additional data containing information identifying the cameras in the array of cameras that form the second set of active cameras; and
transmitting the second set of additional data and the second line of image data.

US Pat. No. 10,212,366

IRIS IMAGE ACQUISITION SYSTEM

FotoNation Limited, Galw...

1. An iris image acquisition system comprising:
an image sensor comprising an array of pixels including pixels sensitive to near infra-red (NIR) wavelengths;
at least one NIR light source capable of selectively emitting light with different discrete NIR wavelengths;
a processor, operably connected to said image sensor and said at least one NIR light source, to acquire image information from said image sensor under illumination at one of said different discrete NIR wavelengths;
a lens assembly comprising a plurality of lens elements with a total track length no more than 4.7 mm, each lens element comprising a material with a refractive index inversely proportional to wavelength, said different discrete NIR wavelengths being matched with the refractive index of the material for the lens elements to balance an axial image shift induced by a change in object distance with the axial image shift due to change in illumination wavelength, and to focus NIR light at a first NIR wavelength reflected from an iris imaged at a first object distance between about 200 mm and 270 mm on said image sensor and to focus NIR light at a second NIR wavelength, longer than said first NIR wavelength, reflected from the iris imaged at a second object distance, between about 270 mm to 350 mm, on said image sensor.

US Pat. No. 10,740,627

MULTI-CAMERA VISION SYSTEM AND METHOD OF MONITORING

FotoNation Limited, Galw...

1. A multi-camera vision processing system which provides multiple fields of view of a scene exterior to a structure, comprising:
a plurality of imaging systems, each including at least one camera positionable about a peripheral surface of the structure, the imaging systems each configured to provide object classifications over a range of camera focus distances extending away from the peripheral surface of the structure, with each camera configured or positioned about the peripheral surface to receive image data from a field of view characterized by a field of view angle, each camera system including a processor, memory and a non-transitory computer readable medium containing program instructions representing software executable on the processor, which instructions, when executed by the processor, cause the camera system to perform a sequence of steps which classify an object among multiple classifications based on an image of the object present within a camera FOV, each object classification represented by an icon; and
a central control unit comprising a programmable processor, memory and a non-transitory computer readable medium, the central control unit coupled (i) to receive classification or position information of objects from the imaging systems and (ii) to present, on a visual display, object detection information for an image corresponding to a classified object in the scene, relative to the position of the structure, with detection information corresponding to the object overlaid on a map with an icon representative of the classified object, without requiring high speed transfer of video data, corresponding to the classified object, between an image acquisition component of an imaging system and a visual display.

US Pat. No. 10,735,635

CAPTURING AND PROCESSING OF IMAGES CAPTURED BY CAMERA ARRAYS INCORPORATING CAMERAS WITH TELEPHOTO AND CONVENTIONAL LENSES TO GENERATE DEPTH MAPS

FotoNation Limited, Galw...

1. A camera array, comprising:
a plurality of cameras including at least three cameras, wherein a first camera is equipped with a lens at a first zoom level providing a widest-angle view, a second camera is equipped with a lens at a second zoom level, and a third camera is equipped with a telephoto lens at a third zoom level providing a greatest magnification view to provide at least three distinct zoom magnifications, where the plurality of cameras are configured to capture a plurality of images comprising different images representing different viewpoints of a same scene;
control circuitry for independently controlling imaging parameters and operation of each camera in the plurality of cameras;
a parallax confirmation and measurement module for detecting and metering parallax using pixel correlation across cameras; and
an image processing pipeline module configured to generate a depth map using the parallax information.

US Pat. No. 10,674,138

AUTOFOCUS SYSTEM FOR A CONVENTIONAL CAMERA THAT USES DEPTH INFORMATION FROM AN ARRAY CAMERA

FotoNation Limited, Galw...

1. An array camera system, comprising:
an array camera comprising a plurality of cameras that capture images of a scene from different viewpoints;
a separate camera having a fixed geometric relationship with each of the plurality of cameras in the array camera, where the separate camera captures an image of the scene from a different viewpoint to the viewpoints of the other cameras in the array camera;
a processor; and
memory in communication with the processor storing software;
wherein the software directs the processor to:
obtain a focus window of the separate camera which includes a partial selection of image data from the separate camera,
determine a focus window of the array camera which includes a partial selection of image data from the array camera based upon the focus window of the separate camera, using measurements of the fixed geometric relationship between the separate camera and each of the plurality of cameras in the array camera, wherein a baseline distance between the separate camera and each of the plurality of cameras in the array camera is larger than a baseline distance between cameras within the plurality of cameras of the array camera, such that disparities between objects in images captured by the separate camera and each of the plurality of cameras in the array camera are greater than disparities between objects in images captured by cameras within the plurality of cameras of the array camera,
obtain image data pertaining to the focus window of the array camera from at least two cameras in the plurality of cameras,
determine depth information for the focus window of the array camera from the image data from the at least two cameras,
colocate the depth information from the focus window of the array camera to the focus window of the separate camera to generate depth information for the focus window of the separate camera, and
determine a focus depth for the separate camera based upon the depth information for the focus window of the separate camera.

US Pat. No. 10,606,050

PORTRAIT LENS SYSTEM SUITABLE FOR USE IN A MOBILE CAMERA

FotoNation Limited, Galw...

1. A lens system comprising:
a first lens, a second lens, and a third lens positioned along an optical axis to transmit light received from an object to a focal plane, the first lens including a concave surface and a convex surface, the convex surface of the first lens including a first radius of curvature,
the second lens including a concave surface that includes a second radius of curvature that is complementary to the first radius of curvature of the convex surface of the first lens, the second lens including a convex surface, the second lens further including a bore positioned along the optical axis, wherein, when positioned between the object and the focal plane, to provide an image of the object along the focal plane:
(i) the concave surface of the first lens faces the object to receive light traveling from the object for entry into the lens system,
(ii) the concave surface of the second lens is positioned to face the convex surface of the first lens and is spaced apart from the convex surface of the first lens at a separation distance,
(iii) light from the object which is transmitted through the concave surface of the second lens is internally reflected along the convex surface of the second lens;
(iv) the light internally reflected along the convex surface of the second lens is internally reflected along the concave surface of the first lens; and
(v) the light internally reflected from the concave surface of the first lens enters the bore and is transmitted through the third lens to the focal plane.

US Pat. No. 10,586,032

SYSTEMS AND METHODS FOR AUTHENTICATING A BIOMETRIC DEVICE USING A TRUSTED COORDINATING SMART DEVICE

FotoNation Limited, Galw...

1. A process for enrolling a configurable biometric device with a network service using a coordinating smart device comprising:
obtaining a device identifier (ID) of the configurable biometric device using the coordinating smart device;
communicating the device ID of the configurable biometric device from the coordinating smart device to the network service using a secure communications link;
communicating at least one challenge from the network service to the coordinating smart device, where the at least one challenge is generated based on a challenge-response authentication protocol;
communicating the at least one challenge from the coordinating smart device to the configurable biometric device;
obtaining a response network address by the configurable biometric device from the coordinating smart device;
generating a response to each challenge of the at least one challenge and communicating each response to the network service by the configurable biometric device utilizing the response network address;
receiving a secure channel key by the coordinating smart device from the network service when a predetermined number of the at least one challenge is satisfied by the respective response from the configurable biometric device according to the challenge-response authentication protocol;
communicating the secure channel key from the coordinating smart device to the configurable biometric device;
performing a biometric enrollment process using the configurable biometric device including capturing biometric information from a user; and
creating a secure communication link between the configurable biometric device and the network service using the secure channel key.
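
A minimal challenge-response sketch. The claim does not mandate a primitive; HMAC-SHA256 over a provisioned device secret is one common instantiation, used here for illustration only:

    import hmac, hashlib, os

    def make_challenge():
        return os.urandom(32)              # service-generated nonce

    def respond(secret_key, challenge):
        # Biometric device's response to one challenge
        return hmac.new(secret_key, challenge, hashlib.sha256).digest()

    def verify(secret_key, challenge, response):
        # Service-side check; release the secure channel key on success
        return hmac.compare_digest(respond(secret_key, challenge), response)

    key = os.urandom(32)                   # hypothetical provisioned secret
    ch = make_challenge()
    assert verify(key, ch, respond(key, ch))  # challenge satisfied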

US Pat. No. 10,546,231

METHOD FOR SYNTHESIZING A NEURAL NETWORK

FotoNation Limited, Galw...

1. A method for synthesizing a neural network from a plurality of component neural networks, each component neural network comprising a combination of convolutional and/or fully-connected layers between at least one common input and at least one common output, the method comprising the steps of:
mapping each component neural network to a respective component neural network graph with each node of each component neural network graph corresponding to a layer of said component neural network and with each node connected to each other node in accordance with connections of the layers in said component neural network;
providing a first structural label for each node in accordance with the structure of the corresponding layer of the component neural network and a distance of the node from one of a given input or output;
merging the graphs for each component neural network into a single merged graph by merging nodes from component neural network graphs having the same first structural label;
providing a second structural label for each node of the single merged graph in accordance with the structure of the corresponding layer of the or each component neural network and a distance of the node from the other of a given input or output;
providing a contracted-merged graph by merging nodes of the merged graph having the same second structural label; and
mapping the contracted-merged graph to a synthesized neural network with each node of the contracted-merged graph corresponding to a layer of said synthesized network and with each layer of the synthesized network connected to each other layer in accordance with connections of the nodes in the contracted-merged graph.

US Pat. No. 10,497,089

CONVOLUTIONAL NEURAL NETWORK

FotoNation Limited, Galw...

1. A convolutional neural network (CNN) for an image processing system comprising:
an image cache comprising an input port and an output port, said image cache being responsive to a request to read a block of N×M pixels extending from a specified location within an input map to provide said block of N×M pixels at said output port;
a convolution engine being arranged to read at least one block of N×M pixels from said image cache output port, to combine said at least one block of N×M pixels with a corresponding set of weights to provide a product, and to subject said product to an activation function to provide an output pixel value;
said image cache being configured to write output pixel values to a specified write address via said image cache input port;
said image cache comprising a plurality of interleaved memories, each memory storing a block of pixel values at a given memory address, the image cache being arranged to determine for a block of N×M pixels to be read from said image cache: a respective one address within each of said interleaved memories in which said pixels of said block of N×M pixels are stored; a respective memory of said plurality of interleaved memories within which each pixel of said block of N×M pixels is stored; and a respective offset for each pixel of said block of N×M pixels within each memory address, so that said image cache can simultaneously provide said N×M pixels at said output port in a single clock cycle; and
a controller arranged to provide a set of weights to said convolution engine before processing at least one input map, to cause said convolution engine to process said at least one input map by specifying locations for successive blocks of N×M pixels and to generate an output map within said image cache by writing said output pixel values to successive locations within said image cache, wherein said weights are stored as 8-bit floating point numbers with an exponent bias greater than 7.
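
The weight format admits a small worked example. Assuming a 1-4-3 sign/exponent/mantissa split (the split itself is not stated in the claim), raising the exponent bias above the conventional 7 shifts the representable range toward the small magnitudes typical of trained network weights:

    def decode_fp8(byte, bias=8):
        # Decode a 1-4-3 8-bit float with a configurable exponent bias;
        # the 1-4-3 layout is an assumption for illustration
        sign = -1.0 if byte & 0x80 else 1.0
        exp = (byte >> 3) & 0x0F
        man = byte & 0x07
        if exp == 0:                                   # subnormal numbers
            return sign * (man / 8.0) * 2.0 ** (1 - bias)
        return sign * (1.0 + man / 8.0) * 2.0 ** (exp - bias)

    # Same bit pattern decodes to 0.5 at bias 8, versus 1.0 at bias 7
    print(decode_fp8(0b0_0111_000, bias=8))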

US Pat. No. 10,412,314

SYSTEMS AND METHODS FOR PHOTOMETRIC NORMALIZATION IN ARRAY CAMERAS

FotoNation Limited, Galw...

1. A method of manufacturing an array camera device, the method comprising:
assembling an array of cameras comprising a plurality of imaging components that capture images of a scene from different viewpoints, where the components include:
a set of one or more reference imaging components, each having a reference viewpoint, and
a set of one or more associate imaging components;
configuring the array of cameras to communicate with at least one processor;
configuring the processor to communicate with at least one display;
configuring the processor to communicate with at least one type of memory; and
performing a photometric normalization process for the array of cameras, where the photometric normalization process comprises:
receiving image data for a scene captured by a first one of the plurality of imaging components;
receiving image data for a scene captured by a second one of the plurality of imaging components;
determining a nominal parallax for image data of the second one of the plurality of imaging components that translates information for a particular pixel in the image data of the second one of the plurality of imaging components to a corresponding pixel in the first one of the plurality of imaging components;
applying the nominal parallax of the second one of the plurality of imaging components to the image data of the second one of the plurality of imaging components;
applying a low pass filter to the image data from the first one of the plurality of imaging components and the shifted image data of the second one of the plurality of imaging components; and
computing gain and offset parameters for the second one of the plurality of imaging components from the low pass filtered shifted image data of the second one of the plurality of imaging components and the low pass filtered image data of the first one of the plurality of imaging components.
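
The gain/offset computation reduces to a local least-squares fit between the two low-pass filtered images; a sketch with an invented window size:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def gain_offset(ref_img, alt_img, size=9):
        # Per-pixel gain/offset mapping the parallax-shifted associate
        # image onto the reference, from low-pass filtered statistics
        lp_ref = uniform_filter(ref_img, size)
        lp_alt = uniform_filter(alt_img, size)
        cov = uniform_filter(ref_img * alt_img, size) - lp_ref * lp_alt
        var = uniform_filter(alt_img * alt_img, size) - lp_alt ** 2
        gain = cov / np.maximum(var, 1e-8)
        offset = lp_ref - gain * lp_alt
        return gain, offset

    alt = np.random.rand(64, 64)
    ref = 1.5 * alt + 0.1            # simulated photometric mismatch
    g, o = gain_offset(ref, alt)
    print(round(float(g.mean()), 2), round(float(o.mean()), 2))  # ~1.5 ~0.1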

US Pat. No. 10,390,005

GENERATING IMAGES FROM LIGHT FIELDS UTILIZING VIRTUAL VIEWPOINTS

FotoNation Limited, Galw...

1. A system configured to synthesize images using a plurality of images captured from multiple viewpoints, comprising:
a processor; and
a memory connected to the processor and configured to store the plurality of images captured from the multiple viewpoints and an image manipulation application;
wherein the plurality of images comprises:
image data and pixel position data, the plurality of images being captured using an array camera comprising a plurality of cameras that capture the plurality of images from viewpoints of the plurality of cameras, wherein multiple cameras in the array camera simultaneously capture the plurality of images, the viewpoints of the plurality of cameras including a viewpoint of a first camera within the array camera and additional viewpoints, the additional viewpoints including a viewpoint of a second camera within the array camera, and wherein the plurality of images includes occluded pixel information captured from at least one of the additional viewpoints describing pixels not visible from the viewpoint of the first camera; and
wherein the image manipulation application configures the processor to:
obtain the plurality of images;
generate a depth map from the viewpoint of the first camera for the plurality of images using the plurality of images, where the depth map comprises depth information for one or more pixels in the image data, wherein generating the depth map comprises:
(1) for each of a plurality of depth levels, shifting the plurality of images into a stack of images for a particular depth level and computing a variance for a particular pixel in the image stack; and
(2) determining a depth level for the particular pixel by minimizing the variance for the particular pixel across the image stack;
determine a virtual viewpoint for the plurality of images based on the pixel position data and the depth map for the plurality of images, where the virtual viewpoint comprises a virtual location and virtual depth information, wherein the virtual viewpoint is a separate viewpoint from the viewpoint of the first camera, and is a viewpoint from an interpolated position between the viewpoint of the first camera and the viewpoint of the second camera;
compute a virtual depth map based on the plurality of images by projecting pixel depth information from the depth map from the viewpoint of the first camera to the virtual viewpoint; and
generate an image from the perspective of the virtual viewpoint based on the plurality of images and the virtual depth map by projecting pixels from the plurality of images based on the pixel position data and the depth map, where:
the generated image comprises a plurality of pixels selected from the image data based on the pixel position data and the virtual depth map, and
the pixels that are projected include at least one occluded pixel from the occluded pixel information that is visible from the perspective of the virtual viewpoint.
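
Steps (1) and (2) of the depth map generation admit a compact sketch: shift all views into a stack at each hypothesized depth level and keep, per pixel, the level that minimizes variance across the stack. Horizontal-only parallax and the baseline values are simplifying assumptions:

    import numpy as np

    def depth_by_variance(images, baselines_px, n_levels=32):
        # (1) shift views into a stack per depth level and compute variance;
        # (2) keep, per pixel, the level that minimizes that variance
        h, w = images[0].shape
        best_var = np.full((h, w), np.inf)
        best_level = np.zeros((h, w), dtype=int)
        for level in range(n_levels):
            stack = [np.roll(img, -int(round(level * b)), axis=1)
                     for img, b in zip(images, baselines_px)]
            var = np.var(np.stack(stack), axis=0)
            better = var < best_var
            best_var[better], best_level[better] = var[better], level
        return best_level

    ref = np.random.rand(32, 64)
    views = [ref, np.roll(ref, 3, axis=1), np.roll(ref, 6, axis=1)]
    print(np.bincount(depth_by_variance(views, [0.0, 1.0, 2.0]).ravel()).argmax())  # 3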

US Pat. No. 10,380,752

SYSTEMS AND METHODS FOR ESTIMATING DEPTH AND VISIBILITY FROM A REFERENCE VIEWPOINT FOR PIXELS IN A SET OF IMAGES CAPTURED FROM DIFFERENT VIEWPOINTS

FotoNation Limited, (IE)...

1. An array camera, comprising:
a plurality of cameras;
a processor;
memory containing an image processing application and calibration data;
wherein the processor is configured by the image processing application to produce an image by:
directing the plurality of cameras to capture a plurality of images including a reference image and at least one alternate view image;
retrieving the calibration data from memory;
rectifying the plurality of captured images using the calibration data;
performing a calibration process;
determining a range of disparities to search based upon characteristics of at least one of the plurality of images;
measuring similarity of corresponding pixels in the plurality of images at a plurality of disparities;
generating an initial depth map; and
refocusing the reference image using the initial depth map;
wherein the processor is configured by the image processing application to measure similarity of corresponding pixels in the plurality of images at the plurality of disparities using a cost function.

US Pat. No. 10,348,979

BLURRING A DIGITAL IMAGE

FotoNation Limited, Galw...

1. A method of processing at least a portion of an input digital image comprising rows of pixels extending in two mutually perpendicular directions over a 2D field, each pixel having a pixel value, the method comprising:
defining a kernel for processing an image, the kernel comprising multiple rows parallel to one another, each row comprising a plurality of contiguous elements with each element in the same row having a like non-zero value;
selecting at least two parallel rows of pixels within said image portion;
for each pixel in a first of the selected rows of pixels within said image portion, and for each pixel in a second of the selected rows of pixels within said image portion:
calculating the cumulative sum of each pixel value in the first row by adding the value of said pixel to only the sum of all preceding pixel values in the same row of said image portion; and
calculating the cumulative sum of each pixel value in the second row by adding the value of said pixel to only the sum of all preceding pixel values in the same row of said image portion; and
then, after calculating the cumulative sums, convolving said kernel with the image portion at successive kernel positions along each selected row wherein, with said convolving performed pixel by pixel along each selected row, for each target pixel: calculating the target pixel value, based only on those portions of the selected rows aligned with corresponding rows of kernel elements for convolving about said target pixel by: first, for each said aligned portion of each selected row, calculating the difference between (i) the cumulative sum of the pixel corresponding to the last element in said aligned portion of the selected row and (ii) the cumulative sum of the pixel corresponding to the element immediately preceding the first element in said aligned portion of the selected row, and then summing each said calculated difference to provide a sum over each said aligned portion for each target position; and
scaling each sum over each said aligned portion for each target position to provide a processed target pixel value for the convolving step.
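
For illustration only, here is a minimal Python sketch of the cumulative-sum technique the claim describes, reduced to a single kernel row: each window sum is obtained as the difference of two cumulative sums rather than by summing the window's pixels directly. The function name and the one-row scope are assumptions for the sketch, not part of the claim.

```python
import numpy as np

def row_box_blur(image, k):
    """Blur each row with a k-wide box kernel using cumulative sums.

    Each output value is the sum over a k-element aligned window,
    obtained as the difference of two cumulative sums, then scaled.
    """
    rows, cols = image.shape
    # Prepend a zero column so csum[:, j + k] - csum[:, j] sums exactly k pixels.
    csum = np.zeros((rows, cols + 1))
    csum[:, 1:] = np.cumsum(image, axis=1)
    out = np.empty((rows, cols - k + 1))
    for j in range(cols - k + 1):
        # Difference of cumulative sums = sum over the aligned portion.
        out[:, j] = csum[:, j + k] - csum[:, j]
    return out / k  # scaling step: window sum -> processed pixel value

# Example: a 3-wide box blur of a 5x5 ramp image.
# print(row_box_blur(np.arange(25.0).reshape(5, 5), 3))
```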

US Pat. No. 10,334,241

SYSTEMS AND METHODS FOR DETECTING DEFECTIVE CAMERA ARRAYS AND OPTIC ARRAYS

FotoNation Limited, (IE)...

12. A non-transitory computer-readable medium including instructions that, when executed by a processing unit, evaluate a camera's suitability as a reference camera to be used in screening the camera array having a plurality of cameras for defectiveness, the instructions comprising:
retrieving captured image data of a known target that was captured using a plurality of cameras, where the known target image data forms a plurality of known target images;
identifying localized defects in each of the plurality of known target images;
identifying corresponding regions between target images captured by different cameras of the plurality of cameras, wherein the corresponding image regions between the plurality of known target images are determined by searching for correspondence along an epipolar line up to a predetermined maximum parallax shift distance, where the epipolar line is defined parallel to the relative locations of the center of a first camera and the center of a second camera;
identifying for at least one region of a target image captured by the first camera, localized defects in corresponding regions of target images captured by a set of at least one other camera of the plurality of cameras that correspond to the at least one region of the target image captured by the first camera; and
evaluating the corresponding image regions in accordance with a set of one or more localized defect criteria to determine whether the first camera is suitable as a reference camera.

US Pat. No. 10,331,960

METHODS FOR DETECTING, IDENTIFYING AND DISPLAYING OBJECT INFORMATION WITH A MULTI-CAMERA VISION SYSTEM

FotoNation Limited, Galw...

1. A method of identifying an object of interest in a zone of a region about a moving vehicle based on sizes of object images, comprising:
acquiring a scene image in a different field of view for each zone in a series of at least two zones of the region about the structure, where a first zone is relatively close to the structure and a second zone, separate and distinct from the first zone, extends farther away from the structure than the first zone, wherein:
a first of the scene images is acquired with an image acquisition depth of field setting that sufficiently resolves an image of a first object positioned in the first zone for identification as a first object type with template matching criteria, while an image of a second object of the first object type positioned in the second zone cannot be sufficiently resolved in the first of the scene images for identification as a first object type with the template matching criteria; and
a second of the scene images is acquired with an image acquisition depth of field setting that sufficiently resolves an image of a second object positioned in the second zone for identification as a first object type with the template matching criteria, while an image of the first object of the first object type positioned in the first zone cannot be sufficiently resolved in the second of the scene images for identification as a first object type with the template matching criteria.

US Pat. No. 10,303,916

IMAGE PROCESSING APPARATUS

FotoNation Limited, Galw...

1. An image processing apparatus comprising:
an image capture sensor;
a set of infra-red (IR) sources surrounding the image capture sensor; and
a processor operatively coupled to said IR sources and said image capture sensor and configured to acquire from the sensor a succession of images, each image illuminated with a different combination of the IR sources such that each in a plurality of the images is captured with illumination from a different direction, the processor being further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of said component images as a pixel value for the combined image, where the processor is arranged to perform 3D temporal median filtering in a cascaded fashion to obtain g sets of cascaded images which are passed through a median filter with g intermediate fused images generated and combined into said combined image.

US Pat. No. 10,275,676

SYSTEMS AND METHODS FOR ENCODING IMAGE FILES CONTAINING DEPTH MAPS STORED AS METADATA

FotoNation Limited, Galw...

1. A system for generating modifiable blur effects, comprising:
a processor;
an array of cameras comprising a plurality of cameras;
a display; and
a memory containing machine readable instructions;
wherein the machine readable instructions direct the processor to:
obtain image data using the array of cameras, where the image data comprises a plurality of images of a scene captured from different viewpoints that includes a reference image captured from a reference viewpoint;
create a depth map that specifies depths for pixels in the reference image using at least a portion of the image data;
apply a depth based blur effect during synthesis of an image using the reference image and the depth map;
encode the synthesized image and the reference image;
write the encoded images and the depth map to an image file;
store the image file in the memory;
retrieve the image from the memory;
decode the synthesized image;
display the synthesized image using the display;
create a modified synthesized image by applying a second depth based blur effect using the depth map;
display the modified synthesized image using the display;
encode the modified synthesized image; and
store the modified synthesized image in the image file.

US Pat. No. 10,275,648

IMAGE PROCESSING METHOD AND SYSTEM FOR IRIS RECOGNITION

FotoNation Limited, Galw...

1. A method of iris recognition, comprising:
a) acquiring an image;
b) detecting a body region larger than and comprising at least one iris in said image;
c) responsive to detecting said body region, performing a first eye modelling on the detected body region;
d) performing a first iris segmentation on the result of said first eye modelling;
e) responsive to the result of the first iris segmentation satisfying one or more first criteria, selecting the result of the first iris segmentation;
otherwise:
f) performing a first iris identification on the detected body region of said image;
g) performing a second eye modelling on the result of said first iris identification;
h) performing a second iris segmentation on the result of said second eye modelling;
i) responsive to the result of the second iris segmentation satisfying one or more second criteria, selecting the result of the second iris segmentation;
otherwise:
j) performing a second iris identification on said image, the second iris identification using a detection scale larger than the detection scale used by the first iris identification;
k) performing a third eye modelling on the result of said second iris identification;
l) performing a third iris segmentation on the result of said third eye modelling;
m) responsive to the result of the third iris segmentation satisfying one or more third criteria, selecting the result of the third iris segmentation; and
n) extracting an iris code from any selected iris segment of said image.

US Pat. No. 10,740,633

HUMAN MONITORING SYSTEM INCORPORATING CALIBRATION METHODOLOGY

FotoNation Limited, Galw...

1. A method for monitoring eyelid opening values by acquiring video image data with a camera, which data are representative of a person engaged in an activity, where the activity may be driving a vehicle, operating industrial equipment, or performing a monitoring or control function, comprising:
detecting that the person's head undergoes a change in yaw angle, such that eyelids of both eyes of the person are captured with the camera, but one eye is closer to the camera than the other eye, and
applying a weighting factor which varies as a function of the yaw angle such that a value representative of eyelid opening data based on both eyes is calculated as: w(LEOD)+(1−w)(REOD), where:
LEOD is the measured left eyelid opening distance and REOD is the measured right eyelid opening distance; and
where the weight, w, varies from zero to one and changes proportionally to the change in head yaw angle.
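
A minimal sketch of the claimed weighting, assuming a simple proportional mapping from yaw angle to the weight w; the normalising constant max_yaw_deg and the clamping are illustrative assumptions, since the claim only requires that w run from zero to one in proportion to the yaw change.

```python
def eyelid_opening(leod, reod, yaw_deg, max_yaw_deg=60.0):
    """Combined eyelid opening value w*LEOD + (1 - w)*REOD.

    leod/reod: measured left/right eyelid opening distances.
    yaw_deg: head yaw angle; max_yaw_deg is an assumed normaliser.
    """
    w = min(max(yaw_deg / max_yaw_deg, 0.0), 1.0)  # w in [0, 1], proportional to yaw
    return w * leod + (1.0 - w) * reod

# e.g. eyelid_opening(8.0, 6.0, 30.0) -> 0.5*8 + 0.5*6 = 7.0
```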

US Pat. No. 10,742,861

SYSTEMS AND METHODS FOR TRANSMITTING AND RECEIVING ARRAY CAMERA IMAGE DATA

FotoNation Limited, Galw...

1. A method of transmitting image data, comprising:
receiving image data captured using a first set of active cameras in an array of cameras;
generating a first line of image data by multiplexing at least a portion of the image data captured by the first set of active cameras;
generating a first set of additional dynamic data describing operational parameters of the first set of active cameras;
transmitting the first set of additional dynamic data and the first line of image data;
receiving image data captured using a second set of active cameras in the array of cameras, wherein the second set of active cameras has different imaging characteristics from the first set of active cameras;
generating a second line of image data by multiplexing at least a portion of the image data captured by the second set of active cameras;
generating a second set of additional dynamic data describing the operational parameters of the second set of active cameras;
transmitting the second set of additional dynamic data and the second line of image data, wherein the first and second sets of additional dynamic data vary based on the different imaging characteristics of the first and second sets of active cameras; and
packetizing the first set of additional dynamic data and the first line of image data to produce at least one packet of data; and wherein transmitting the first set of additional dynamic data and the first line of image data further comprises transmitting the at least one packet of data.

US Pat. No. 10,742,904

MULTISPECTRAL IMAGE PROCESSING SYSTEM FOR FACE DETECTION

FotoNation Limited, Galw...

1. An image processing system comprising:
at least one image sensor comprising a plurality of pixels, each pixel comprising a plurality of sub-pixels, and configured to provide, during an image acquisition period, a first image plane from a group of first sub-pixels selectively sensitive to a first NIR light band and a second image plane from a group of second sub-pixels selectively sensitive to a second NIR light band, wherein the sensitivity of the first sub-pixels to the first NIR light band is greater than the sensitivity of the second sub-pixels to the second NIR light band, and the number of first sub-pixels is greater than the number of second sub-pixels;
at least one NIR light source capable of separately emitting first NIR light corresponding to the first NIR light band and second NIR light corresponding to the second NIR light band, the first NIR light having a higher intensity than the second NIR light; and
a face detector configured to detect at least a first face from the first image plane and a second face from the second image plane, respectively, wherein the first face is a face of a first person and the second face is a face of a second person, different from the first person.

US Pat. No. 10,706,577

FACIAL FEATURES TRACKER WITH ADVANCED TRAINING FOR NATURAL RENDERING OF HUMAN FACES IN REAL-TIME

FotoNation Limited, Galw...

1. A method, comprising:
receiving an image of a face from a frame of a video stream;
based on the image, selecting a head orientation class from a comprehensive set of head orientation classes, each head orientation class including a respective 3D model;
determining modifications to the 3D model to describe the face in the image;
projecting a 2D model of tracking points of facial features in an image plane based on the 3D model;
wherein the comprehensive set of head orientation classes comprises 35 different 3D head orientation models representing 35 different head orientations with respect to a camera view of the video stream, wherein each 3D head orientation model has averaged facial features for the respective head orientation of the face with respect to the camera view;
differentiating the 35 different 3D head orientation models of the comprehensive set of head orientation classes by respective yaw angles and pitch angles of each head orientation of each head orientation class; and
controlling, actuating, or animating a piece of hardware based on the tracking points of the facial features.

US Pat. No. 10,701,293

METHOD FOR COMPENSATING FOR OFF-AXIS TILTING OF A LENS

FotoNation Limited, Galw...

1. A method comprising:
acquiring a set of calibrated parameters [equation not reproduced] corresponding to tilting of a lens relative to an optical axis normal to an image sensor;
acquiring an image from the image sensor through the lens, where Px′ and Py′ indicate a coordinate of a pixel in the acquired image;
mapping image information from the acquired image to a lens tilt compensated image according to the formulae [equations not reproduced],
wherein s comprises a scale factor given by [equation not reproduced]; and
wherein ux and uy indicate a location of a pixel in the lens tilt compensated image; and
storing the lens tilt compensated image in a memory.

US Pat. No. 10,694,114

CAPTURING AND PROCESSING OF IMAGES INCLUDING OCCLUSIONS FOCUSED ON AN IMAGE SENSOR BY A LENS STACK ARRAY

FotoNation Limited, Galw...

1. A camera array, comprising:
a plurality of cameras configured to capture a plurality of images of a scene;
control circuitry configured to control the plurality of cameras;
an image processor configured to process at least a subset of images captured by the plurality of cameras;
wherein the plurality of cameras comprises at least three cameras, wherein a first camera is equipped with a lens at a first zoom level providing a widest-angle view, a second camera is equipped with a lens at a second zoom level, and a third camera is equipped with a telephoto lens at a third zoom level providing a greatest magnification view to provide at least three distinct zoom magnifications;
wherein cameras in the plurality of cameras are configured to operate with at least one difference in operating parameters;
wherein the image processor is configured to generate at least one higher resolution super-resolved image using a plurality of the captured images;
wherein the image processor is configured to: measure parallax within the processed images by detecting parallax-induced changes taking into account the position of the cameras that captured the images;
generate a depth map using the measured parallax;
synthesize images having different levels of zoom; and
synthesize an image at a zoom level between the zoom level of the first camera and the zoom level of the third camera using the images captured by the plurality of cameras and the depth map; and
wherein the second camera provides a zoom level between the widest-angle view and the greatest magnification view.

US Pat. No. 10,663,751

GIMBAL ADJUSTMENT SYSTEM

FotoNation Limited, Galw...

1. A gimbaled adjustment system comprising:
a base;
a plate including a lower portion connected to the base;
a shaft, attached to the base, the shaft extending away from the base and along a central axis, with the shaft including a pivot attached to the plate, the pivot and a portion of the plate lower surface forming a joint which permits the plate to rotate about the pivot;
multiple magnetic elements including a first series of magnetic elements and a second series of magnetic elements, with magnetic elements in the first series positioned on the base to provide a first plurality of magnetic field patterns each extending toward the plate and with magnetic elements in the second series attached for movement with the plate to provide a second plurality of magnetic field patterns extending toward the base, where (i) magnetic field patterns associated with multiple ones of the magnetic elements cause the plate to be rotated about the joint or stabilized about the joint; and (ii) field patterns associated with magnetic elements in the first series are controllable to generate forces to rotate the plate about the pivot with multiple degrees of freedom, including rotation of the plate along a plane and around the central axis.

US Pat. No. 10,540,806

SYSTEMS AND METHODS FOR DEPTH-ASSISTED PERSPECTIVE DISTORTION CORRECTION

FotoNation Limited, Galw...

1. A camera system, comprising:
a plurality of cameras configured to capture image data from multiple viewpoints, wherein cameras in the plurality of cameras are situated in various positions corresponding to the multiple viewpoints;
a processor;
a memory containing an image processing application; and
a display;
wherein the image processing application stored in the memory directs the processor to:
obtain image data captured by the plurality of cameras from multiple viewpoints including an initial viewpoint;
generate depth map data indicating distances to faces within a scene from the initial viewpoint using information based on differences among the multiple viewpoints of the image data;
detect a face within the image data and a distance from the initial viewpoint to the face from the depth map data;
segment face image data from background image data using the depth map data;
rerender the face from a synthetic viewpoint by warping the segmented face image data based upon the depth map data to generate warped face image data, where the synthetic viewpoint is a greater distance from the face along an optical axis relative to the distance from the initial viewpoint to the face, and the warping corrects perspective distortion in the segmented face image data resulting from camera optics by:
selecting a desired viewpoint distance that specifies a distance from the synthetic viewpoint to the face;
projecting the segmented face image data to 3D locations based upon distances to pixels within the segmented face image data contained within the depth map data;
re-projecting the 3D locations to new 2D pixel locations based upon the desired viewpoint distance to create warped face image data; and
filling holes in the warped face image data;
combine the warped face image data with the background image data to create perspective distortion corrected image data; and
output the perspective distortion corrected image data to the display.

US Pat. No. 10,462,445

SYSTEMS AND METHODS FOR ESTIMATING AND REFINING DEPTH MAPS

FotoNation Limited, Galw...

1. A method for improving accuracy of depth map information derived from image data descriptive of a scene where pixels of such image data, acquired with one or more image acquisition devices, each have an assigned intensity value, the method comprising:
performing a matching cost optimization by iteratively refining disparities between corresponding pixels in the image data and using optimization results to create a sequence of first disparity values for an initial disparity map for the scene based in part on a superpixel-wise cost function;
performing a guided filter operation on the first disparity values by applying other image data containing structural details that can be transferred to the first disparity values to restore degraded features or replace some of the first disparity values with values more representative of structural features present in the image data descriptive of the scene,
the guided filtering operation performed by applying a series of weighted median filter operations to pixel intensity values in the sequence of first disparity values so that each median filter operation replaces a member in the sequence of first disparity values with a median intensity value, where each median intensity value is based on intensity values in a group of pixels within a window of pixels positioned about said member in the sequence,
each window being of a variable size to include a variable number of pixels positioned about said member in the sequence, where selections of multiple ones of the window sizes are based on a measure of similarity between the first disparity values and said other image data, and wherein the series of weighted median filter operations provides a new sequence of disparity values for a refined disparity map or from which a depth map of improved accuracy can be created.
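
As a rough illustration of the variable-window idea, the toy Python below chooses a small or large window per pixel from a crude similarity test between the disparity map and a guide image, then applies a plain (unweighted) median; the window sizes, the threshold tau, and the similarity measure are all assumptions, and the patent's weighted median is deliberately simplified here.

```python
import numpy as np

def guided_median_refine(disp, guide, small=3, large=7, tau=10.0):
    """Replace each disparity with the median over a locally sized window."""
    h, w = disp.shape
    out = disp.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            # Window size chosen from similarity between disparity and guide.
            k = small if abs(float(disp[y, x]) - float(guide[y, x])) < tau else large
            r = k // 2
            win = disp[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = np.median(win)  # median replaces the sequence member
    return out
```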

US Pat. No. 10,462,362

FEATURE BASED HIGH RESOLUTION MOTION ESTIMATION FROM LOW RESOLUTION IMAGES CAPTURED USING AN ARRAY SOURCE

FotoNation Limited, Galw...

1. A method for performing feature based high resolution motion estimation of a camera comprising a plurality of imagers in an imager array from a plurality of low resolution images captured by imagers in the imager array, comprising:
performing feature detection using a processor configured by software to identify an initial location for at least one visual feature in a first plurality of low resolution images, where the first plurality of low resolution images includes one image captured by each of a plurality of imagers in an imager array from different perspectives at a first point in time;
performing feature detection using the processor configured by software to identify an initial location for the at least one visual feature in a second plurality of low resolution images, where the second plurality of low resolution images includes one image captured by each of the plurality of imagers in the imager array from different perspectives at a second point in time;
synthesizing a first set of high resolution image portions from the first plurality of low resolution images using the processor configured by software to perform a super-resolution process using parallax information, where the synthesized high resolution image portions contain the identified at least one visual feature;
synthesizing a second set of high resolution image portions from the second plurality of low resolution images using the processor configured by software to perform a super-resolution process using parallax information, where the synthesized high resolution image portions contain the identified at least one visual feature;
performing feature detection within the first and second sets of high resolution image portions to identify locations for the at least one visual feature at a higher resolution than the initial locations identified in the low resolution images using the processor configured by software; and
estimating camera motion using the identified locations for the at least one visual feature in the first and second sets of high resolution image portions using the processor configured by software.

US Pat. No. 10,455,168

IMAGER ARRAY INTERFACES

FotoNation Limited, Galw...

1. A method of obtaining image data, comprising:
separately capturing image data using an imager array comprising a plurality of focal planes functioning as a plurality of cameras in a single semiconductor substrate, where each camera comprises a two dimensional arrangement of pixels having at least two pixels in each dimension and each camera is contained within a region of the imager array that does not contain pixels from another camera, and wherein separately capturing the image data is performed by separately controlling at least two of the cameras having different imaging characteristics;
selecting a set of active cameras from said plurality of cameras, the active cameras having sensors which are active and that are providing the image data;
multiplexing the image data into at least one line of image data from the selected set of active cameras;
packetizing, by a master control logic, the at least one line of image data with additional data into a packet of data, wherein the additional data comprises at least codes inserted between pixel data for identifying a focal plane, integration time of each focal plane and information for demultiplexing the image data; and
transmitting the packet of data via output ports of an output interface in the imager array.

US Pat. No. 10,452,910

METHOD OF AVOIDING BIOMETRICALLY IDENTIFYING A SUBJECT WITHIN AN IMAGE

FotoNation Limited, Galw...

1. A method of image processing within an image acquisition device, the method comprising:
storing one or more standard iris region images in storage;
acquiring an original image of one or more subjects, the one or more subjects including one or more image face regions, the one or more image face regions comprising one or more image iris regions;
analyzing the one or more image iris regions to identify an input iris region image, the input iris region image comprising an input iris pattern, the input iris pattern capable of being image processed and biometrically identifying one of the one or more subjects within said original image;
responsive to identifying the input iris region image, determining, from among the one or more stored standard iris region images, a substitute iris region image, the determined substitute iris region image comprising a substitute iris pattern distinguishable from the input iris pattern, the substitute iris pattern inhibiting identification of said one of the one or more subjects within said original image, wherein said determining the substitute iris region image comprises:
retrieving, from among the one or more standard iris region images stored in the storage, a first standard iris region image;
performing a blurring operation on the first standard iris region image to generate a blurred first standard iris region image;
performing first subsequent blurring operations on the blurred first standard iris region image to generate a plurality of blurred standard iris region images, each subsequent blurring operation of the first subsequent blurring operations increasing a blur of the first standard iris region image;
subtracting each blurred standard iris region image of the plurality of blurred standard iris region images from another blurred standard iris region image of the plurality of blurred standard iris region images to generate a plurality of detail standard iris region images;
performing a blurring operation on the identified input iris region image to generate a blurred identified input iris region image;
performing second subsequent blurring operations on the blurred identified input iris region image to generate a plurality of blurred input iris region images, each subsequent blurring operation of the second subsequent blurring operations increasing a blur of the identified input iris region image;
subtracting each blurred input iris region image of the plurality of blurred input iris region images from another blurred input iris region image of the plurality of blurred input iris region images to generate a plurality of detail input iris region images;
subtracting each detail input iris region image of the plurality of detail input iris region images from the identified input iris region image to generate a difference input iris region image; and
adding each detail standard iris region image of the plurality of detail standard iris region images to the difference input iris region image to generate the substitute iris region image.
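
The blur-and-subtract sequence resembles building difference-of-blur detail layers and swapping them between the input iris and a stored standard iris. A compact sketch under that reading follows; the Gaussian sigmas, the layer count, and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_layers(region, sigmas=(1.0, 2.0, 4.0)):
    """Detail layers: differences between successively blurred copies."""
    blurred = [gaussian_filter(region, s) for s in sigmas]
    return [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]

def substitute_iris(input_iris, standard_iris):
    """Strip the input's detail layers, then add the standard's detail layers."""
    difference = input_iris - sum(detail_layers(input_iris))
    return difference + sum(detail_layers(standard_iris))
```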

US Pat. No. 10,339,626

METHOD FOR PRODUCING FRAMING INFORMATION FOR A SET OF IMAGES

FotoNation Limited, Galw...

1. A method for producing framing information for a set of source images, each comprising an object region, comprising the steps of:
a) one or more of: scaling, translating and rotating images of said set of N source images so that said object region is aligned within said set of source images;
b) for a given image of said set of object aligned source images, at a given frame size, a given frame angle for a frame relative to said set of object aligned images and at a first candidate boundary position for said frame, determining if there is at least one position for a second boundary of said frame orthogonal to said first boundary where said frame lies within said image and said frame encloses said object region;
c) responsive to said determining, incrementing counters associated with said first candidate boundary position for each position for said second boundary where said frame lies within said image and said frame encloses said object region;
d) responsive to any counter meeting a threshold value, K≤N, for said set of source images, indicating that framing is possible at said given frame size, said frame angle, said first candidate boundary position and any position for said second boundary associated with said threshold meeting counter; and
e) responsive to no counter meeting said threshold value, K, repeating steps b) to e) for another image of said set of source images.

US Pat. No. 10,298,949

METHOD AND APPARATUS FOR PRODUCING A VIDEO STREAM

FotoNation Limited, Galw...

1. A method of processing a stream of images comprising:
a) obtaining an image frame of a scene with a relatively short exposure time (SET), the SET image frame comprising a first image;
b) obtaining an image frame of the same scene with a relatively longer exposure time (LET);
c) determining motion blur characteristics for the SET image frame corresponding to motion within the LET image frame, said determining motion blur characteristics comprising determining a motion vector indicating motion between said SET image and a temporally adjacent SET image;
d) applying the motion blur characteristics to blur the first image of the SET image frame in a pre-determined number of directions to provide multiple blurred versions of the first image, wherein said applying the motion blur characteristics to the first image comprises, for each pixel:
determining, according to the motion vector for the pixel, a blending component corresponding to each blurred version of said first image; and
combining corresponding pixel values of said SET image frame and each of said blurred versions of said first image according to said blending components to provide a blurred SET image frame;
e) blending the blurred SET image frame and the LET image frame to provide a High Dynamic Range image frame; and
f) repeating steps a) to e) for successive pairs of images in said stream.
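
A loose sketch of steps d) and e): per-pixel blending of the sharp SET frame with its directionally blurred versions, followed by an SET/LET blend. The weight maps and the final blend factor alpha are assumptions; in the claim the blending components are derived from each pixel's motion vector.

```python
def blur_and_blend(set_frame, blurred_versions, blend_weights, let_frame, alpha=0.5):
    """Combine the SET frame with its blurred versions, then blend with LET.

    blurred_versions: blurred copies of the SET frame, one per direction.
    blend_weights: per-pixel weight maps (same shape as the frame), one per
    blurred version, assumed to sum to at most 1 at every pixel.
    """
    residual = 1.0 - sum(blend_weights)       # weight left for the sharp SET pixel
    blurred_set = residual * set_frame
    for w, b in zip(blend_weights, blurred_versions):
        blurred_set += w * b                  # motion-matched blur contribution
    return alpha * blurred_set + (1.0 - alpha) * let_frame  # step e) HDR blend
```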

US Pat. No. 10,225,543

SYSTEM AND METHODS FOR CALIBRATION OF AN ARRAY CAMERA

FotoNation Limited, Galw...

1. A method for manufacturing an array camera device, the method comprising:
assembling an array of cameras comprising a plurality of imaging components that capture images of a scene from different viewpoints, where the plurality of imaging components comprises:
a set of one or more reference imaging components, each having a reference viewpoint; and
a set of one or more associate imaging components;
configuring the array of cameras to communicate with at least one processor;
configuring the processor to communicate with at least one display;
configuring the processor to communicate with at least one type of memory; and
performing a calibration process for the array of cameras, where the calibration process comprises:
capturing images of a test pattern using the array of cameras, where each of the plurality of imaging components captures an image from a particular viewpoint;
generating scene independent geometric corrections for reference image data captured by a reference imaging component using test pattern image data captured by the reference imaging component and data describing the test pattern using the processor;
generating a corrected test pattern image for the reference imaging component based on the scene independent geometric corrections for the reference image data and the image of the test pattern captured by the reference imaging component using the processor; and
generating scene independent geometric corrections for associate image data captured by an associate imaging component using test pattern image data captured by the associate imaging component and data for the corrected test pattern image using the processor; and
loading calibration information into the memory.

US Pat. No. 10,776,076

NEURAL NETWORK ENGINE

FotoNation Limited, Galw...

1. A neural network engine configured to receive at least one set of values corresponding to a pixel of an input map and a corresponding set of kernel values for a neural network layer of a neural network, the neural network engine comprising:
a plurality of multipliers, each multiplier having a first operand input configured to be connected to an input map value and a second operand input configured to be connected to a corresponding kernel value; and
pairs of multipliers of the plurality of multipliers providing respective outputs to respective input nodes of a tree of nodes, each node of the tree being configured to provide an output corresponding to either:
a larger of inputs of the node; or
a sum of the inputs of the node;
wherein an output of the tree provides a first input of an output logic, and an output of one of the plurality of multipliers provides a second input of the output logic;
wherein, based at least in part on the configuration of the nodes of the tree, the output logic is configurable as a convolution layer of the neural network, an average pooling layer of the neural network, and a max pooling layer of the neural network.
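
A toy software model of the engine: multiplier outputs feed a binary tree whose nodes either sum their inputs (yielding a convolution sum, or an average after scaling) or pass the larger input (yielding max pooling). A power-of-two window is assumed so the tree pairs up evenly, and the average-pooling division is a simplification of the hardware's scaling.

```python
def tree_reduce(values, op):
    """Pairwise reduction through a binary tree of nodes applying op.

    Assumes len(values) is a power of two, matching paired tree inputs.
    """
    while len(values) > 1:
        values = [op(values[i], values[i + 1]) for i in range(0, len(values), 2)]
    return values[0]

def engine_output(window, kernel, mode):
    """Model of the output logic for conv / avg-pool / max-pool modes."""
    if mode == "max_pool":
        return tree_reduce(list(window), max)        # nodes pass the larger input
    products = [x * k for x, k in zip(window, kernel)]
    total = tree_reduce(products, lambda a, b: a + b)  # nodes sum their inputs
    return total / len(window) if mode == "avg_pool" else total

# engine_output([1, 2, 3, 4], [1, 0, 1, 0], "conv") -> 4
```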

US Pat. No. 10,769,435

METHOD OF AVOIDING BIOMETRICALLY IDENTIFYING A SUBJECT WITHIN AN IMAGE

FotoNation Limited, Galw...

1. A method of image processing comprising:
with an imaging device, acquiring an image of a subject, the image including a face region of the subject;
with an image processing device:
identifying an iris region within the face region;
analyzing the iris region to identify a first iris region;
determining that the first iris region is capable of providing a first iris pattern that enables the subject to be biometrically identified by the first iris pattern;
determining a second iris region for the first iris region, the second iris region including a substitute iris pattern distinct from the first iris pattern and preventing the subject from being biometrically identified,
wherein determining the second iris region includes:
retrieving a standard iris region from a data storage device, the standard iris region including a characteristic included within the first iris region;
subtracting a first blurred image of the first iris region from the first iris region to produce a detail first iris region;
subtracting a second blurred image of the standard iris region from the standard iris region to produce a detail standard iris region;
adding the detail standard iris region to the detail first iris region to create the second iris region for the first iris region;
extracting a first iris code from the substitute iris pattern;
comparing the first iris code with a second iris code from the first iris pattern; and
reconstructing the first iris region within the image based on the second iris region.

US Pat. No. 10,767,981

SYSTEMS AND METHODS FOR ESTIMATING DEPTH FROM PROJECTED TEXTURE USING CAMERA ARRAYS

FotoNation Limited, Galw...

1. A camera array, comprising:
at least two two-dimensional arrays of cameras comprising a first two-dimensional array of cameras and a second two-dimensional array of cameras, each comprising a plurality of cameras, wherein a horizontal baseline between the at least two two-dimensional arrays of cameras is larger than a vertical baseline between cameras within each array of cameras;
an illumination system configured to illuminate a scene with a projected texture;
a processor;
memory containing an image processing pipeline application and an illumination system controller application;
wherein the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture;
wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture;
capture a set of images of the scene illuminated with the projected texture;
determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises:
identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles, wherein disparity observed along a first epipolar line corresponding to a horizontal baseline between a camera in the first two-dimensional array of cameras and a camera in the second two-dimensional array of cameras is greater than disparity observed along a second perpendicular epipolar line between at least two cameras within at least one of the two-dimensional arrays of cameras;
comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and
selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.
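
The search amounts to a plane sweep: each disparity hypothesis predicts where the given pixel should appear in every other view, and the hypothesis whose corresponding pixels agree best wins. Below is a toy dense version using purely horizontal shifts and sum-of-absolute-differences as the similarity measure, both simplifications of the claim's multi-angle epipolar search.

```python
import numpy as np

def depth_estimates(ref, alternates, baselines, disparities):
    """Pick, per pixel, the disparity with the lowest matching cost.

    alternates: alternate-view images; baselines: per-view scale factors
    converting a reference disparity into that view's pixel shift.
    """
    best_cost = np.full(ref.shape, np.inf)
    best_disp = np.zeros(ref.shape)
    for d in disparities:
        cost = np.zeros(ref.shape)
        for img, b in zip(alternates, baselines):
            shift = int(round(d * b))        # expected disparity for this baseline
            cost += np.abs(ref - np.roll(img, shift, axis=1))  # SAD similarity
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
```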

US Pat. No. 10,757,333

METHOD FOR DETERMINING BIAS IN AN INERTIAL MEASUREMENT UNIT OF AN IMAGE ACQUISITION DEVICE

FotoNation Limited, Galw...

1. A method comprising:
determining a motion between a first point depicted in a first image frame and a second point depicted in a second image frame;
determining, using an inertial measurement unit, a first orientation of a device at a first time, the first time corresponding to the first image frame;
determining, using the inertial measurement unit, a second orientation of the device at a second time, the second time corresponding to the second image frame;
mapping a first vector in a three-dimensional space using at least the motion;
mapping a second vector in the three-dimensional space using at least the first orientation and the second orientation; and
determining, based at least in part on the first vector and the second vector, a value indicating a bias of an output of the inertial measurement unit.
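
A minimal sketch of the final step, under one plausible reading: once image motion and the IMU orientation change are both expressed as 3-D vectors, their residual serves as the bias indicator. The difference-per-unit-time form below is an assumption; the claim only says the value is based on the two vectors.

```python
import numpy as np

def bias_value(image_motion_vec, imu_delta_vec, dt):
    """Bias indicator: rotation the IMU reports that the image does not."""
    v_img = np.asarray(image_motion_vec, dtype=float)
    v_imu = np.asarray(imu_delta_vec, dtype=float)
    return (v_imu - v_img) / dt  # residual rate, per axis
```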

US Pat. No. 10,733,472

IMAGE CAPTURE DEVICE WITH CONTEMPORANEOUS IMAGE CORRECTION MECHANISM

FotoNation Limited, Galw...

1. An electronic device comprising:
one or more processors; and
one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving first image data representing a first image;
receiving second image data representing a second image;
determining a first region of the first image;
determining a second region of the second image;
determining a difference between a Discrete Cosine Transform (DCT) coefficient of the first region and a DCT coefficient of the second region;
based at least in part on the difference, assigning the first region of the first image as either a background or a foreground and assigning the second region of the second image as the other of the background or the foreground; and
correcting a defect in the first image or the second image based at least in part on whether the first image is assigned as the background or the foreground.
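
High-order DCT coefficients carry a region's high spatial frequencies, so an in-focus (typically foreground) region scores higher than a defocused one. A sketch of that comparison follows, assuming SciPy's 2-D DCT and an illustrative low/high cutoff; the exact coefficient split is not specified by the claim.

```python
import numpy as np
from scipy.fftpack import dct

def high_order_dct_sum(region, cutoff=2):
    """Sum of high-order 2-D DCT coefficient magnitudes for a region."""
    c = dct(dct(region, axis=0, norm="ortho"), axis=1, norm="ortho")
    c[:cutoff, :cutoff] = 0.0                  # discard low-order coefficients
    return float(np.abs(c).sum())

def assign(region_a, region_b):
    """Sharper region (more high-frequency energy) -> foreground; other -> background."""
    if high_order_dct_sum(region_a) > high_order_dct_sum(region_b):
        return "foreground", "background"
    return "background", "foreground"
```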

US Pat. No. 10,708,492

ARRAY CAMERA CONFIGURATIONS INCORPORATING CONSTITUENT ARRAY CAMERAS AND CONSTITUENT CAMERAS

FotoNation Limited, Galw...

1. An array camera configuration, comprising:
a camera array comprising a plurality of cameras each having a field of view, where each camera comprises optics that form an image on a focal plane defined by an array of pixels that capture image data, wherein a first subset of the plurality of cameras forms a first constituent array camera, the cameras in the first subset have a first combined field of view, and an object of interest is visible within the first combined field of view;
a processor; and
a memory containing an image processing application, wherein the image processing application directs the processor to:
obtain a first set of image data from the first constituent array camera;
identify the object of interest within the first set of image data;
select a second subset of the plurality of cameras to form a second constituent array camera, by maximizing the baseline between at least one camera in the first constituent array camera that captures at least a portion of the first set of image data and at least one camera in the second constituent array camera that captures at least a portion of a second set of image data, wherein the cameras in the second subset have a second combined field of view and the object of interest is visible within the second combined field of view;
obtain the second set of image data from the second constituent array camera;
identify the object of interest within the second set of image data; and
calculate depth information for the object of interest using the at least a portion of the first set of image data and the at least a portion of the second set of image data.

US Pat. No. 10,684,681

NEURAL NETWORK IMAGE PROCESSING APPARATUS

FotoNation Limited, Galw...

1. An apparatus comprising:
one or more processors configured to:
acquire an image from an image sensor;
identify a region of interest containing a face region in the image;
determine a plurality of facial landmarks in the face region within the region of interest;
use said plurality of said facial landmarks to transform said face region into a transformed face region having a given pose;
use transformed landmarks within said transformed face region to identify a pair of eye regions within said transformed face region;
feed each identified eye region of said pair of eye regions to a respective first and second convolutional neural network, each network configured to produce a respective feature vector comprising a plurality of numerical values;
feed each feature vector to respective eyelid opening level neural networks to obtain a respective eyelid opening value for each eye region;
combine the feature vectors into a combined feature vector; and
feed the combined feature vector to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values, wherein said eyelid opening level neural networks and said gaze angle neural network are jointly trained based on a common training set.

US Pat. No. 10,615,973

SYSTEMS AND METHODS FOR DETECTING DATA INSERTIONS IN BIOMETRIC AUTHENTICATION SYSTEMS USING ENCRYPTION

FotoNation Limited, Galw...

1. A method of detecting an unauthorized data insertion between a first electronic module and a second electronic module in a biometric authentication system transmitting a stream of data segments for one or more subjects, the method comprising:
acquiring or generating multiple data segments in the first electronic module for authentication of at least one of the one or more subjects;
transmitting the data segments from the first electronic module to the second electronic module in accord with the following steps:
(i) prior to transmitting the stream of data segments from the first electronic module to the second electronic module, encrypting at least one of the data segments;
(ii) after transmitting said at least one of the data segments from the first electronic module to the second electronic module, attempting to decrypt said at least one of the data segments; and
(iii) determining whether an unauthorized data insertion has occurred between the first electronic module and the second electronic module based on whether decryption of said at least one of the data segments is unsuccessful;
(iv) halting processing of said at least one of the data segments if decryption of said at least one of the data segments is unsuccessful.

US Pat. No. 10,587,806

METHOD AND APPARATUS FOR MOTION ESTIMATION

FotoNation Limited, Galw...

1. A method of estimating motion between a pair of image frames of a given scene comprising:
a) calculating respective integral images for each of said image frames;
b) selecting at least one corresponding region of interest, within each frame;
c) for each region of interest:
i. calculating an integral image profile from each integral image, each profile comprising an array of elements, each element comprising a sum of pixel intensities from successive swaths of said region of interest for said frame;
ii. correlating said integral image profiles to determine a relative displacement of said region of interest between said pair of frames; and
iii. dividing each region of interest into a plurality of further regions of interest; and repeating step c) until a required hierarchy of estimated motion for successively divided regions of interest is provided,
wherein said calculating an integral image profile comprises sub-sampling said integral image at a first sub-sampling interval at a first selected level of said required hierarchy.
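
A stripped-down sketch of steps c)(i) and c)(ii): build a one-dimensional profile by summing pixel intensities over column swaths of the ROI, then slide the two frames' profiles against each other to find the displacement. Sub-sampling and the hierarchy are omitted, and sum-of-absolute-differences stands in for the correlation, all as simplifying assumptions.

```python
import numpy as np

def profile(roi):
    """Sum of pixel intensities over successive column swaths of the ROI."""
    return roi.sum(axis=0).astype(np.float64)

def displacement(roi_a, roi_b, max_shift=16):
    """Shift minimising the mean absolute profile mismatch."""
    pa, pb = profile(roi_a), profile(roi_b)
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        n = len(pa) - abs(s)                 # overlapping profile length
        a = pa[max(0, s):max(0, s) + n]
        b = pb[max(0, -s):max(0, -s) + n]
        cost = float(np.abs(a - b).mean())
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```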

US Pat. No. 10,587,812

METHOD OF GENERATING A DIGITAL VIDEO IMAGE USING A WIDE-ANGLE FIELD OF VIEW LENS

FotoNation Limited, Galw...

1. A method of generating a digital video image using a lens positioned in front of an image sensor array, including providing a lens having a sufficiently wide field of view (WFOV), and positioned sufficiently near to the sensor array, that the image field of the lens is so curved at the sensor array that different regions of the image field are substantially in focus on the sensor array for different positions of the lens relative to the sensor array, the method further comprising:
(a) selecting a desired region of interest in the image field of the lens,
(b) adjusting the position of the lens relative to the sensor array so that the selected region of interest is brought substantially into focus on the sensor array,
(c) capturing and storing the image on the sensor array of the substantially in-focus selected region of interest,
(d) at least partially correcting the stored substantially in-focus image for field-of-view distortion due to said WFOV lens,
(e) displaying the corrected image, and
(f) cyclically repeating steps (a) to (e).

US Pat. No. 10,515,439

METHOD FOR CORRECTING AN ACQUIRED IMAGE

FotoNation Limited, Galw...

1. A method of correcting an image obtained by an image acquisition device comprising:
obtaining a first measurement of device movement during exposure of a first row of an image and a second measurement of device movement during exposure of a second row of the image, Gn;
selecting an integration range, idx, based, at least in part, on an exposure time, te, for the first row of the image and the second row of the image,
averaging accumulated measurements, Cn, of device movement for the first row of the image and the second row of the image to provide a filtered measurement, G, of device movement during exposure of the first row of the image and the second row of the image, and
correcting the image for device movement using the filtered measurement G.
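
One plausible reading of the averaging step, sketched below: the accumulated gyro measurements around the two rows' samples are averaged over the integration range idx to produce the filtered measurement G. The symmetric window and plain mean are assumptions; the claim does not fix the exact averaging.

```python
import numpy as np

def filtered_measurement(C, row1_idx, row2_idx, idx):
    """Average accumulated measurements C around the two rows' samples."""
    lo = max(0, min(row1_idx, row2_idx) - idx)
    hi = min(len(C), max(row1_idx, row2_idx) + idx + 1)
    return float(np.mean(C[lo:hi]))  # filtered measurement G
```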

US Pat. No. 10,514,529

PORTRAIT LENS SYSTEM FORMED WITH AN ADJUSTABLE MENISCUS LENS

FotoNation Limited, Galw...

1. A lens system comprising:
first and second lenses aligned along an optical axis for transmitting light from an object along a path toward a focal plane,
the first lens including an anterior first surface through which received light can enter into the lens system and a posterior convex parabolic second surface having a first radius of curvature,
the second lens including an anterior concave parabolic first surface characterized by a second radius of curvature, complementary to the first radius of curvature of the second surface of the first lens, where the concave parabolic first surface of the second lens is positioned to face the convex parabolic second surface of the first lens in spaced apart relation from the convex parabolic second surface of the first lens.

US Pat. No. 10,460,198

IMAGE PROCESSING SYSTEM

FotoNation Limited, Galw...

1. An image processing system comprising a template matching engine (TME) operatively connected to a memory storing image information, the TME being configured to:
read at least a portion of an image from said memory using a raster scan; and
as each pixel of said image portion is being read, calculate a respective feature value of a plurality of feature maps as a function of said pixel value;
the TME further comprising:
a pre-filter responsive to a current pixel location corresponding to a node within a first limited detector cascade to be applied to a window within said portion of an image to:
compare a feature value from a selected one of said plurality of feature maps corresponding to said pixel location to a threshold value; and
responsive to pixels for all nodes within said first limited detector cascade to be applied to said window having been read, determine a score for said window based on the comparisons of said feature values and said threshold values for said nodes; and
a classifier, responsive to said pre-filter indicating that a score for a window is below a window threshold, not applying a second detector cascade longer than said first limited detector cascade to said window before indicating that said window does not comprise an object to be detected.

US Pat. No. 10,455,147

IMAGE PROCESSING METHOD

FotoNation Limited, Galw...

1. A method of processing an image with a lens assembly having a given focal length for acquiring images, comprising:
a) acquiring an image of a scene including a detected imaged object having a recognizable feature and determining whether the detected imaged object is a false face or other false object comprising the recognizable feature, where a false object is one which does not conform to normal anthropometric rules;
b) if the imaged object is determined to be a false face or other false object, determining, based on variation in focus position, a lens actuator setting providing a maximum sharpness for said false object;
c) determining a lens displacement corresponding to said lens actuator setting for said false object;
d) calculating a distance to said false object based on said lens displacement;
e) determining a dimension of said recognizable feature of said false object as a function of said distance to said false object, size of said imaged false object and the focal length of the lens assembly with which said image of said false object comprising the recognizable feature was acquired; and
f) performing autofocusing based on said determined dimension of said recognizable feature of said imaged false object instead of an assumed dimension of said feature for subsequent processing of images of said scene including said false object.

US Pat. No. 10,455,218

SYSTEMS AND METHODS FOR ESTIMATING DEPTH USING STEREO ARRAY CAMERAS

FotoNation Limited, Galw...

1. An array camera, comprising:
a first array camera comprising a plurality of cameras that capture images of a scene from different viewpoints;
a second array camera comprising a plurality of cameras that capture images of a scene from different viewpoints, where the second array camera captures at least one image of a scene from a different viewpoint to the viewpoints of the cameras in the first array camera;
a processor; and
memory in communication with the processor;
wherein software directs the processor to:
obtain a first set of image data from the first array camera comprising image data of a scene captured from a first set of different viewpoints;
obtain a second set of image data from the second array camera comprising image data of the same scene from a viewpoint different from the first set of viewpoints;
identify an object of interest in the first and second sets of image data;
determine a depth measurement for the object of interest using at least a portion of the first set of image data;
determine whether the depth measurement for the object of interest corresponds to an observed disparity below a threshold; and
when the depth measurement corresponds to an observed disparity below the threshold, refine the depth measurement using at least a portion of the second set of image data.

US Pat. No. 10,373,052

METHOD AND SYSTEM FOR TRACKING AN OBJECT

FotoNation Limited, Galw...

1. An image processing system arranged to track an object across a stream of images, the system comprising a processor arranged to:
a) determine a region of interest (ROI) bounding said object in an initial acquired frame of said stream of images;
b) provide a histogram of gradients (HOG) map for said ROI by:
i) dividing said ROI into an array of M×N cells, each cell comprising a plurality of image pixels;
ii) determining a HOG for each of said cells, each HOG comprising a plurality of q bins, each bin corresponding to a range of orientation angles, and each bin having a value being a function of a number of instances of a pixel gradient in a cell corresponding to the bin;
c) store said HOG map as indicative of the features of said object by initially weighting each neuron of a first layer of neurons of a multi-layer neural network according to the HOG map for said ROI from said initial acquired frame;
d) acquire a subsequent frame from said stream of images;
e) scan at least a portion of said subsequent acquired frame ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features, including the processor executing said neural network on each candidate ROI to provide an output proportional to the level of match between the HOG map values used for the weights of the first layer of neurons and the HOG map values provided for a candidate ROI;
f) responsive to said match meeting a threshold, update the stored HOG map indicative of the features of said object according to the HOG map for the best matching candidate ROI including updating the weights for said neurons according to the HOG map for the best matching candidate ROI from said subsequent acquired frame; and
g) repeat steps d) to f) until said match fails to meet said threshold.
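
A compact sketch of step b), the M x N x q HOG map: the ROI is divided into cells and each cell's gradient orientations are histogrammed into q angle bins. The gradient operator and bin edges are assumptions; m, n, and q mirror the claim's parameters.

```python
import numpy as np

def hog_map(roi, m=4, n=4, q=8):
    """Histogram-of-gradients map: one q-bin orientation histogram per cell."""
    gy, gx = np.gradient(roi.astype(np.float64))
    ang = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi)  # orientation scaled to [0, 1]
    bins = np.minimum((ang * q).astype(int), q - 1)   # q orientation-angle bins
    ch, cw = roi.shape[0] // m, roi.shape[1] // n
    hog = np.zeros((m, n, q))
    for i in range(m):
        for j in range(n):
            cell = bins[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            # Bin value = number of instances of each gradient orientation.
            hog[i, j] = np.bincount(cell.ravel(), minlength=q)[:q]
    return hog
```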

US Pat. No. 10,375,319

CAPTURING AND PROCESSING OF IMAGES INCLUDING OCCLUSIONS FOCUSED ON AN IMAGE SENSOR BY A LENS STACK ARRAY

FotoNation Limited, (IE)...

1. A camera array, comprising:
a plurality of cameras configured to capture images of a scene;
an image processor configured to process at least a subset of images captured by the plurality of cameras;
wherein the plurality of cameras comprises at least three cameras, wherein a first camera is equipped with a lens at a first zoom level providing a widest-angle view, a second camera is equipped with a lens at a second zoom level, and a third camera is equipped with a lens at a third zoom level providing a greatest magnification view to provide at least three distinct zoom magnifications;
wherein cameras in the plurality of cameras are configured to operate with at least one difference in operating parameters;
wherein the image processor is configured to:
measure parallax within the processed images by detecting parallax-induced changes taking into account the position of the cameras that captured the images;
generate a depth map using the measured parallax;
synthesize images having different levels of zoom; and
synthesize an image at a zoom level between the zoom level of the first camera and the zoom level of the third camera using the images captured by the plurality of cameras and the depth map.