US Pat. No. 9,470,906

VIRTUAL OR AUGMENTED REALITY HEADSETS HAVING ADJUSTABLE INTERPUPILLARY DISTANCE

Magic Leap, Inc., Dania ...

1. A virtual or augmented reality headset, comprising:
a frame including opposing arm members, a bridge positioned intermediate the opposing arm members, and a plurality of linear
rails, at least one linear rail provided at each of opposing sides of the frame defined by a central reference plane;

a pair of virtual or augmented reality eyepieces each having an optical center, the pair of virtual or augmented reality eyepieces
movably coupled to the plurality of linear rails to enable adjustment of an interpupillary distance between the optical centers;

a pair of pins which slideably couple respective ones of the virtual or augmented reality eyepieces to respective ones of
the opposing arm members between a temple region and an ear region of the respective one of the opposing arm members such
that each of the virtual or augmented reality eyepieces is slidably supported by a respective one of the linear rails and
a respective one of the pins; and

an adjustment mechanism coupled to both of the pair of virtual or augmented reality eyepieces and operable to simultaneously
move the pair of virtual or augmented reality eyepieces in adjustment directions aligned with the plurality of linear rails
to adjust the interpupillary distance.

US Pat. No. 9,417,452

DISPLAY SYSTEM AND METHOD

MAGIC LEAP, INC., Dania ...

1. A method of operation in a virtual image system or an augmented reality system, the method comprising:
for each of at least some of a plurality of frames being presented to an end user, determining a location of appearance of
a virtual object in a field of view of the end user relative to an end user frame of reference;

adjusting a presentation of at least one subsequent frame based at least in part on the determined location of appearance
of the virtual object in the field of view of the end user;

predicting an occurrence of a head movement of the end user based at least in part on the determined location of appearance
of the virtual object in the field of view of the end user;

estimating at least one value indicative of an estimated speed of the predicted head movement of the end user;
determining at least one first value that at least partially compensates for the estimated speed of the predicted head movement
of the end user;

rendering the at least one subsequent frame based at least in part on the determined first value; and
estimating at least one change in a speed in the predicted head movement of the end user, wherein the at least one change
in the speed occurs between a start of the predicted head movement and an end of the predicted head movement, and wherein
estimating the at least one value indicative of the estimated speed of the predicted head movement includes estimating the
at least one value indicative of the estimated speed that at least partially accommodates for the at least one estimated change
in the speed in the predicted head movement of the end user.
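The claim above recites the prediction and compensation steps abstractly. A minimal, hypothetical sketch of the three estimated quantities — predicted speed, a speed that changes between the start and end of the movement, and a first value compensating for it — might look like the following (all function names, the gain, and the raised-cosine speed profile are illustrative assumptions, not from the patent):

```python
import math

def predict_head_speed(object_offset_deg, gain=2.0):
    """Crude model: predicted peak head speed (deg/s) grows with how far
    the virtual object appears from the current gaze direction."""
    return gain * abs(object_offset_deg)

def speed_profile(peak_speed, t, duration):
    """The claimed change in speed between start and end of the movement:
    zero at both endpoints, peaking mid-way (raised-cosine profile)."""
    if not 0.0 <= t <= duration:
        return 0.0
    return peak_speed * 0.5 * (1.0 - math.cos(2.0 * math.pi * t / duration))

def compensation_shift(peak_speed, frame_interval):
    """A 'first value' that partially compensates for the estimated speed:
    shift the subsequent frame against the movement by speed * frame time."""
    return -peak_speed * frame_interval
```

The subsequent frame would then be rendered with its contents displaced by `compensation_shift(...)` degrees so the virtual object stays registered during the predicted movement.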

US Pat. No. 9,488,474

OPTICAL SYSTEM HAVING A RETURN PLANAR WAVEGUIDE

MAGIC LEAP, INC., Dania ...

1. A waveguide apparatus, comprising:
a first planar waveguide having a first end, a second end, a first face, and a second face; and
a diffractive optical element operatively coupled to the first planar waveguide; and
a second planar waveguide having a first end, a second end, a first face, and a second face;
wherein the respective first ends of the first and second planar waveguides are adjacent each other,
wherein the respective second ends of the first and second planar waveguides are adjacent each other,
wherein the respective first and second ends of the first and second planar waveguides are opposed to each other along respective
lengths of the first and second planar waveguides,

wherein the respective first and the second faces of the first and second planar waveguides form respective first and second
at least partially internally reflective optical paths along respective portions of the lengths of the first and second planar
waveguides,

wherein the diffractive optical element is configured to interrupt the first at least partially internally reflective optical
paths to provide a plurality of optical paths between an exterior and an interior of the first planar waveguide via the first
face thereof at respective positions along a portion of the length of the first planar waveguide, and

wherein the first and second planar at least partially internally reflective optical paths are oriented in opposite directions.

US Pat. No. 9,389,424

METHODS AND SYSTEMS FOR IMPLEMENTING A HIGH RESOLUTION COLOR MICRO-DISPLAY

Magic Leap, Inc., Dania ...

1. A tiled array of fiber scanned displays, comprising:
a number of fiber scanners that are affixed in the tiled array in a polygonal pattern, wherein
each of the fiber scanners produces a component image;
a fiber scanner in the number of fiber scanners comprises:
projector optics disposed within a housing tube; and
a scan fiber disposed within a piezoelectric actuator tube, which is coupled with the projector optics, according to analysis
results and one or more compensators; and

a number of component images are tiled together in a corresponding polygonal pattern derived from the polygonal pattern to
produce an overall image that is visually seamless;

the overall image has a resolution of 5.24 mega-pixels with a pixel pitch of 3.66 µm and an aspect ratio of 5:4; and
the tiled array has a dynamic range of 12-bit and produces the overall image at a refresh rate of 72 Hz.
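The resolution figures in this claim are internally consistent. One pixel grid matching them is 2560 × 2048 (the grid itself is an assumption — the claim states only megapixels, pitch, and aspect ratio):

```python
w, h = 2560, 2048        # assumed 5:4 grid consistent with ~5.24 MP (not stated in the claim)
pitch_mm = 3.66e-3       # claimed 3.66 µm pixel pitch, in millimetres

assert w / h == 5 / 4    # aspect ratio checks out exactly
megapixels = w * h / 1e6 # 5.24288, i.e. the claimed 5.24 MP
width_mm = w * pitch_mm  # implied active width of the tiled image, ~9.37 mm
height_mm = h * pitch_mm # implied active height, ~7.50 mm
```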

US Pat. No. 9,846,306

USING A PLURALITY OF OPTICAL FIBERS FOR AUGMENTED OR VIRTUAL REALITY DISPLAY

MAGIC LEAP, INC., Planta...

1. A system for displaying virtual content to a user, comprising:
a plurality of optical fibers to project light associated with one or more frames of image data to be presented to the user,
a lens coupled to an exit end of an optical fiber of the plurality of optical fibers;
an actuator coupled to the lens to scan the lens;
a first reflector to reflect transmitted light associated with a first frame of the one or more frames of image data at a
first angle to an eye of the user; and

a second reflector to reflect transmitted light associated with a second frame of the one or more frames of image data at
a second angle to the eye of the user, where

the first reflector and the second reflector are configured such that a respective refractive index is varied in an analog
manner based in part or in whole upon a total number of sub-frames in the first frame or the second frame as well as a refresh
rate at which the one or more frames are presented to the eye of the user,

wherein the lens is configured to alter a diameter of a light beam projected by at least one of the plurality of optical fibers,
wherein the lens comprises a gradient refractive index, and
wherein the actuator is configured to cause scanning of the lens separately from causing scanning of the at least one of the
plurality of optical fibers.

US Pat. No. 9,429,752

USING HISTORICAL ATTRIBUTES OF A USER FOR VIRTUAL OR AUGMENTED REALITY RENDERING

MAGIC LEAP, INC., Dania ...

1. A method of operation in an augmented reality system, the method comprising:
receiving information indicative of an identity of an end user;
retrieving at least one user specific historical attribute for the end user based at least in part on the received information
indicative of the identity of the end user;

displaying frames to the end user based at least in part on the retrieved at least one user specific historical attribute
for the end user;

predicting an end point of a head movement of the end user based on a location of a virtual object in a field of view of the
end user;

displaying frames to the end user based at least in part on the retrieved at least one user specific historical attribute
for the end user includes rendering at least one subsequent frame to at least one image buffer, the at least one subsequent
frame shifted toward the predicted end point of the head movement;

rendering a plurality of subsequent frames that shift toward the predicted end point of the head movement in at least partial
accommodation of at least one head movement attribute for the end user, the at least one head movement attribute indicative
of at least one previous head movement of the end user; and

predicting an occurrence of a head movement of the end user based at least in part on a location of appearance of the virtual
object in the field of view of the end user.

US Pat. No. 9,665,173

TRIANGULATION OF POINTS USING KNOWN POINTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of displaying augmented reality, comprising:
capturing a set of 2D points of a real-world environment at one or more image capturing devices of one or more augmented reality
systems;

determining respective positions of the one or more image capturing devices for the set of 2D points;
determining a three-dimensional (3D) position of a 2D point of the set of 2D points based at least in part on the set of 2D
points and the respective positions; and

generating, at an augmented reality system, a virtual content for the real-world environment by using at least the three-dimensional
position that is shared via a computer network.
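The triangulation step — determining a 3D position from a set of 2D observations and the respective capture positions — can be pictured with the standard two-view midpoint construction (a generic computer-vision technique; the function name and the ray-based formulation are illustrative assumptions, not code from the patent):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Recover a 3D point from two observed rays (each a camera centre plus a
    direction toward the 2D image point): find the point midway between the
    rays at their closest approach."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for scalars t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)            # 3x2 system
    (t1, t2), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

With the point shared via a network (as the claim recites), a second augmented reality system could place virtual content at the returned coordinates without re-deriving them.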

US Pat. No. 9,348,143

ERGONOMIC HEAD MOUNTED DISPLAY DEVICE AND OPTICAL SYSTEM

Magic Leap, Inc., Dania ...

1. A freeform waveguide comprising at least three physical surfaces, at least one of which contains a plurality of reflective
and refractive freeform optical surfaces disposed thereon, where an interior space defined by the physical surfaces is filled
by a refractive medium having an index (n) greater than 1, where the plurality of reflective and refractive surfaces folds
and extends an optical path length so that the waveguide can be fit to an eyeglass shape, which enables an image display unit
to be placed at a side of a head, and which enables a wide see-through field of view of up to 90° relative to a straight ahead
view in temple directions, and up to 60° in a nasal direction, and up to 60° above and below relative to a straight ahead
view, where inner and outer surfaces thereof are designed, within a constraint of fitting an eyeglass form factor and a maximum
thickness, so that the plurality of freeform reflective and refractive optical surfaces guide light towards a pupil of a user
without distorting the image, the physical and optical surfaces comprising:
(a) a physical inner surface 115, disposed towards the pupil of the user, where the physical inner surface is constrained to approximate a pre-designated
curved surface for an eyeglass form factor, where the inner surface is configured to reflect an image to an eyeball of the
user with a minimum amount of distortion;

(b) a physical outer surface 125, disposed towards an external scene, where the physical outer surface is configured to reflect an image to the pupil of the
user with a minimum amount of distortion, where the physical outer surface is within a maximum distance of the inner surface
at all points, where the physical outer surface contains at least one refractive surface to allow light from the external
scene to pass through the waveguide and reach the eyeball of the user;

(c) a physical edge surface 120, which optionally contains a refractive surface for light from an image display unit to enter the waveguide;

(d) a refractive input surface 130, disposed on one of the physical surfaces, that allows light from an image display unit to enter the waveguide;

(e) a refractive output surface 135 that allows light to exit the waveguide, disposed upon the physical inner surface, near the pupil of the user; and

(f) a plurality of three (3) or more freeform reflective surfaces, disposed upon the physical inner and outer surfaces, where
each reflection is produced by either satisfying a Total Internal Reflection criterion, or by application of a semi-transparent,
partially reflective coating to the surface of the waveguide; where these reflections are optimized to guide the light along
the interior of the waveguide with a minimum of distortion, where a plurality of reflections extends the optical path length
such that the waveguide enables a wide see-through field of view, and a size suitable to fitting to a human head;
whereupon light 140 from an image display unit 105 enters the waveguide through the refractive input surface 130;
whereupon the light 140 follows a path 145 along the waveguide that comprises the plurality of reflections upon the plurality of reflective and refractive surfaces, from the refractive input surface 130 to the refractive output surface 135, where each reflection is produced either by satisfying conditions of Total Internal Reflection or by a semi-transparent coating applied to the surface;
whereupon light 140 passes through the refractive output surface 135, beyond which the user places the pupil 150 to view the image; and
whereupon light 198 from the external scene is refracted through the physical outer surface 125 of the waveguide 100 and the physical inner surface 115 of the waveguide before reaching the pupil 150, where the see-through field of view through the waveguide is up to 90° in the temple directions, up to 60° in the nasal direction, and up to 60° above and below a straight ahead view.
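The claim keys each internal reflection to a Total Internal Reflection criterion in a medium of index n > 1. As a quick worked check (generic Snell's-law arithmetic, not taken from the patent), TIR at a boundary with air occurs beyond the critical angle arcsin(1/n):

```python
import math

def critical_angle_deg(n_core, n_outside=1.0):
    """Total Internal Reflection criterion: light inside a medium of index
    n_core is totally reflected at a boundary with index n_outside when its
    angle of incidence exceeds arcsin(n_outside / n_core)."""
    return math.degrees(math.asin(n_outside / n_core))
```

For a typical optical polymer with n ≈ 1.5, rays striking the inner or outer surface at more than about 41.8° from the normal are trapped, which is what lets the freeform surfaces fold the path inside an eyeglass form factor without a mirror coating at every bounce.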

US Pat. No. 9,215,293

SYSTEM AND METHOD FOR AUGMENTED AND VIRTUAL REALITY

Magic Leap, Inc., Fr. La...

1. A system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising:
a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing
circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least
a portion of the virtual world data; and

a user device having a wearable user display component, wherein the user device is operatively coupled to the computer network,
wherein at least a first portion of the virtual world data comprises a virtual object rendered from and representing a physical
object local to a first user,

the computer network is operable to transmit the first portion of the virtual world data to the user device associated with
a second user, and

the wearable user display component visually displays the virtual object to the second user, such that a virtual representation
of the physical object local to the first user is visually presented to the second user at the second user's location.

US Pat. No. 9,612,403

PLANAR WAVEGUIDE APPARATUS WITH DIFFRACTION ELEMENT(S) AND SYSTEM EMPLOYING SAME

Magic Leap, Inc., Planta...

1. A waveguide array apparatus, comprising:
a plurality of planar waveguides, including
a first planar waveguide of the plurality having a first end, a second end, a first face, and a second face, the first end
of the first planar waveguide opposed to the second end of the first planar waveguide along a length of the first planar waveguide,
at least the first and the second faces of the first planar waveguide forming a first at least partially internally reflective
optical path along at least a portion of the length of the first planar waveguide, and

a second planar waveguide of the plurality having a first end, a second end, a first face, and a second face, the first end
of the second planar waveguide opposed to the second end of the second planar waveguide along a length of the second planar
waveguide, at least the first and the second faces of the second planar waveguide forming a second at least partially internally
reflective optical path along at least a portion of the length of the second planar waveguide,

wherein the second face of the first planar waveguide is disposed adjacent to the first face of the second planar waveguide;
a first reflector disposed adjacent respective first ends of the first and second planar waveguides; and
a second reflector disposed adjacent respective second ends of the first and second planar waveguides,
wherein each of the first and second planar waveguides has a respective diffractive optical element(s) disposed between the
respective first and second ends at respective positions along at least a portion of the length of the respective first and
second planar waveguides to reflect a respective portion of a respective spherical wave front outwardly from the first face
of the respective first and second planar waveguides.

US Pat. No. 9,852,548

SYSTEMS AND METHODS FOR GENERATING SOUND WAVEFRONTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of displaying augmented reality, comprising:
determining a head pose of a user of a head-mounted virtual or augmented reality display system;
determining a perceived location, in a passable real world model, of a virtual object to be associated with an audio object
in relation to the head pose of the user, wherein the audio object corresponds to predetermined sound data, and the passable
real world model shares at least an interaction of the user with one or more other users;

reducing computational resource utilization at least by identifying a plurality of object recognizers in a ring topology;
dynamically altering one or more parameters of the predetermined sound data based at least in part on the perceived location
of the virtual object to be associated with the audio object in relation to the head pose of the user; and

generating a visual trigger for signaling a head movement to the user toward the virtual object and presenting the visual
trigger within a field of view of the user with the virtual or augmented reality display system that transmits light signals
of the virtual object associated with the audio object to at least one eye of the user.
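The "dynamically altering one or more parameters of the predetermined sound data" step can be pictured with two generic spatial-audio updates driven by the head pose and the virtual object's perceived location (both functions and both models — inverse-distance gain and bearing-based pan — are illustrative assumptions, not from the patent):

```python
import math

def altered_gain(head_pos, object_pos, reference_gain=1.0):
    """One plausible parameter change: attenuate the predetermined sound by
    inverse distance between the user's head and the virtual object."""
    dist = math.dist(head_pos, object_pos)
    return reference_gain / max(dist, 1.0)   # clamp to avoid blow-up near the head

def pan(head_pos, head_yaw_rad, object_pos):
    """Another: left/right pan from the object's bearing relative to the
    head pose (-1 = full left, +1 = full right), in an x-right/z-forward frame."""
    dx = object_pos[0] - head_pos[0]
    dz = object_pos[2] - head_pos[2]
    bearing = math.atan2(dx, dz) - head_yaw_rad
    return math.sin(bearing)
```

Re-evaluating these on every head-pose update is what makes the audio object appear anchored to the virtual object's perceived location in the passable world model.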

US Pat. No. 9,671,566

PLANAR WAVEGUIDE APPARATUS WITH DIFFRACTION ELEMENT(S) AND SYSTEM EMPLOYING SAME

MAGIC LEAP, INC., Planta...

1. A waveguide array apparatus, comprising:
a plurality of planar waveguides including
a first planar waveguide of the plurality having a first end, a second end, a first face, and a second face, the first end
of the first planar waveguide opposed to the second end of the first planar waveguide along a length of the first planar waveguide,
at least the first and the second faces of the first planar waveguide forming a first at least partially internally reflective
optical path along at least a portion of the length of the first planar waveguide, and

a second planar waveguide of the plurality having a first end, a second end, a first face, and a second face, the first end
of the second planar waveguide opposed to the second end of the second planar waveguide along a length of the second planar
waveguide, at least the first and the second faces of the second planar waveguide forming a second at least partially internally
reflective optical path along at least a portion of the length of the second planar waveguide,

wherein the second face of the first planar waveguide is disposed adjacent to the first face of the second planar waveguide;
a first reflector disposed adjacent respective first ends of the first and second planar waveguides; and
a second reflector disposed adjacent respective second ends of the first and second planar waveguides,
wherein the first planar waveguide has a first diffractive optical element disposed between the respective first and second
ends along at least a portion of the length of the first planar waveguide to reflect a portion of a spherical wave front outwardly
from the first face of the first planar waveguide, and

wherein the second planar waveguide has a second diffractive optical element disposed between the respective first and second
ends along at least a portion of the length of the second planar waveguide to reflect a portion of a flat wave front outwardly
from the first face of the second planar waveguide.

US Pat. No. 9,767,616

RECOGNIZING OBJECTS IN A PASSABLE WORLD MODEL IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. A method for recognizing objects with a virtual or augmented reality display system, comprising:
identifying map points in a world model that is projected to at least one eye of a user by a projector in a virtual or augmented
reality display system, wherein the map points correspond to one or more real objects and are used to determine a geometric
structure of a location of the user;

recognizing, using at least one microprocessor, a real object of the one or more real objects in the world model presented
by the virtual or augmented reality display system to the user at least by deriving a geometric structure of the real object
using an image segmentation technique based in part or in whole upon the map points; and

displaying one or more virtual objects to the user relative to the real object, based at least in part on a location of the
user.

US Pat. No. 9,541,383

OPTICAL SYSTEM HAVING A RETURN PLANAR WAVEGUIDE

MAGIC LEAP, INC., Dania ...

1. A waveguide apparatus, comprising:
a first planar waveguide having a first end, a second end, a first face, and a second face; and
a diffractive optical element operatively coupled to the first planar waveguide; and
a second planar waveguide having a first end, a second end, a first face, and a second face;
wherein the respective first ends of the first and second planar waveguides are adjacent each other,
wherein the respective second ends of the first and second planar waveguides are adjacent each other,
wherein the respective first and second ends of the first and second planar waveguides are opposed to each other along respective
lengths of the first and second planar waveguides,

wherein the respective first and the second faces of the first and second planar waveguides form respective first and second
at least partially internally reflective optical paths along respective portions of the lengths of the first and second planar
waveguides,

wherein the diffractive optical element is configured to interrupt the first at least partially internally reflective optical
paths to provide a plurality of optical paths between an exterior and an interior of the first planar waveguide via the first
face thereof at respective positions along a portion of the length of the first planar waveguide, and

wherein the first and second planar at least partially internally reflective optical paths are oriented in opposite directions.

US Pat. No. 9,651,368

PLANAR WAVEGUIDE APPARATUS CONFIGURED TO RETURN LIGHT THERETHROUGH

MAGIC LEAP, INC., Planta...

1. A waveguide apparatus, comprising:
a planar waveguide having a first end, a second end, a first face, and a second face;
a diffractive optical element operatively coupled to the planar waveguide; and
first and second reflectors disposed adjacent the respective first and second ends of the planar waveguide,
wherein the first and second ends of the planar waveguide are opposed to each other along a length of the planar waveguide,
wherein the first and the second faces of the planar waveguide form an at least partially internally reflective optical path
along a portion of the length of the planar waveguide, and

wherein the diffractive optical element is configured to interrupt the at least partially internally reflective optical paths
to provide a plurality of optical paths between an exterior and an interior of the planar waveguide via the first face thereof
at respective positions along a portion of the length of the planar waveguide.

US Pat. No. 9,310,559

MULTIPLE DEPTH PLANE THREE-DIMENSIONAL DISPLAY USING A WAVE GUIDE REFLECTOR ARRAY PROJECTOR

Magic Leap, Inc., Dania ...

1. A wave guide reflector array projector apparatus, comprising:
a first planar set of a plurality of rectangular wave guides, each of the rectangular wave guides in the first planar set
having at least a first side, a second side, a first face, and a second face, the second side opposed to the first side along
a length of the rectangular wave guide, at least the first and the second sides forming an at least partially internally reflective
optical path along at least a portion of the length of the rectangular wave guide, and each of the rectangular wave guides
in the first planar set including a respective plurality of curved micro-reflectors disposed between the first and the second
sides at respective positions along at least a portion of the length of the respective rectangular wave guide to partially
reflect a respective portion of a spherical wave front outwardly from the first face of the respective rectangular wave guide;
and

at least a second planar set of a plurality of rectangular wave guides, each of the rectangular wave guides in the second
planar set having at least a first side, a second side, a first face, and a second face, the second side opposed to the first
side along a length of the rectangular wave guide, at least the first and the second sides forming an at least partially internally
reflective optical path along at least a portion of the length of the rectangular wave guide, and each of the rectangular
wave guides in the second planar set including a respective plurality of curved micro-reflectors disposed between the first
and the second sides at respective positions along at least a portion of the length of the respective rectangular wave guide
to partially reflect a respective portion outwardly from the first face of the respective rectangular wave guide,

the second planar set of rectangular wave guides arranged laterally from the first planar set of rectangular wave guides along
a first lateral (Z) axis, the first lateral axis perpendicular to a longitudinal axis (X), the longitudinal (X) axis parallel
to the lengths of the rectangular wave guides of at least the first and the second planar sets.

US Pat. No. 9,804,397

USING A FREEDOM REFLECTIVE AND LENS OPTICAL COMPONENT FOR AUGMENTED OR VIRTUAL REALITY DISPLAY

MAGIC LEAP, INC., Planta...

1. A system, comprising:
a freeform reflective and lens optical component to increase a size of a field-of-view for a defined set of optical parameters,
the freeform reflective and lens optical component comprising a first curved surface, a second curved surface, a third curved
surface, and a fourth surface,

wherein the fourth surface at least partially transmits light received by the freeform reflective and lens optical component
from an input device via the fourth surface,

wherein the first curved surface at least partially reflects and imparts a focal change to light received by the first curved
surface from the fourth surface,

wherein the second curved surface at least partially reflects light received by the second curved surface from the first curved
surface toward the third curved surface and passes light received by the second curved surface from the third curved surface,

wherein the third curved surface at least partially reflects light out of the freeform reflective and lens optical component
via the second curved surface,

wherein the fourth surface is faced opposite the first curved surface,
wherein the fourth surface is a flat surface, and
wherein the fourth surface is smaller than each of the first curved surface, the second curved surface, and the third curved
surface.

US Pat. No. 9,726,893

APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL CAPABILITY

Magic Leap, Inc., Planta...

1. A compact optical see-through head-mounted display, capable of combining a see-through path with a virtual view path such
that the opaqueness of the see-through path can be modulated and the virtual view occludes parts of the see-through view and
vice versa, the display comprising:
a. a microdisplay for generating an image to be viewed by a user, the microdisplay having a virtual view path associated therewith;
b. a transmission-type spatial light modulator (640) for modifying the light from an external scene to block portions of the see-through view that are to be occluded, the spatial light modulator having a see-through path (607) associated therewith;

c. an objective optics, facing an external scene, configured to receive the incoming light from the external scene and to focus the light upon the spatial light modulator, where the objective optics is a three-reflection freeform prism comprising five optical freeform surfaces: refractive surface S4, reflective surfaces S5, S4′ and S6, and refractive surface S7, where the objective optics is configured to form an intermediate image inside the objective optics;

d. a beamsplitter configured to merge a digitally generated virtual image from a microdisplay and a modulated see-through
image of an external scene passing from a spatial light modulator, producing a combined image;

e. an eyepiece configured to magnify the combined image, where the eyepiece is a two-reflection freeform prism comprising four optical freeform surfaces: refractive surface S1, reflective surface S2, reflective surface S1′, and refractive surface S3;

f. an exit pupil configured to face the eyepiece, the exit pupil whereupon the user observes the combined view of the virtual
and see-through views in which the virtual view occludes portions of the see-through view;

wherein the objective optics is disposed upon a front layer of the display; where the spatial light modulator is disposed on the back layer of the display at or near an intermediate image plane of the see-through path, facing a side of the beam splitter; where the microdisplay is disposed on the back layer of the display, facing a different side of the beam splitter; where the beam splitter is disposed such that the see-through path is merged with the virtual view path and the light from the merged path is directed to the eyepiece; and wherein the eyepiece is disposed upon the back layer of the display,

whereupon the incoming light from the external scene enters the objective optics through the refractive surface S4, is consecutively reflected by the reflective surfaces S5, S4′ and S6, and exits the objective optics through the refractive surface S7;
whereupon the incoming light forms an intermediate image at its focal plane on the spatial light modulator;
whereupon the spatial light modulator modulates the light in the see-through path to occlude portions of the see-through view;
whereupon the spatial light modulator transmits the modulated light into the beam splitter;
whereupon the light from the microdisplay enters the beam splitter;
whereupon the beamsplitter merges the modulated light in the see-through path with the light in the virtual view path and folds the merged light toward the eyepiece for viewing; and
whereupon the light from the beam splitter enters the eyepiece through the refractive surface S3, is consecutively reflected by the reflective surfaces S1′ and S2, exits the eyepiece through the refractive surface S1, and reaches the exit pupil, where the viewer's eye is aligned to see a combined view of a virtual view and a modulated see-through view.

US Pat. No. 9,851,563

WIDE-FIELD OF VIEW (FOV) IMAGING DEVICES WITH ACTIVE FOVEATION CAPABILITY

Magic Leap, Inc., Planta...

1. A foveated imaging system, capable of capturing a wide field of view image and a foveated image, where the foveated image
is a controllable region of interest of the wide field of view image, the system comprising:
a. an objective lens, facing an external scene, configured to receive incoming light from an external scene and to focus the
light upon a beamsplitter;

b. a beamsplitter, configured to split the incoming light from the external scene into a wide field of view imaging path and
a foveated imaging path;

c. the wide field of view imaging path comprising:
i. a first stop, separate from the objective lens, which limits the amount of light received in the wide field of view path
from the beamsplitter;

ii. a wide field of view imaging lens, different from the objective lens and configured to receive light from the first stop
and form a wide field view image on a wide field of view imaging sensor;

iii. a wide field of view imaging sensor, configured to receive light from the wide field of view imaging lens;
d. the foveated imaging path consisting of:
i. a second stop, which limits the amount of light received in the foveated imaging path from the beamsplitter;
ii. a scanning mirror, the scanning mirror being a multi-axis movable mirror configured to reflect the light from the beamsplitter;
iii. a reflective surface, disposed upon the beamsplitter, configured to direct light reflected by the scanning mirror;
iv. a foveated imaging lens, configured to receive a portion of the light, associated with a region of interest of the external
scene, from the scanning mirror, and form a foveated image on a foveated imaging sensor; and

v. a foveated imaging sensor, configured to receive light from the foveated imaging lens,
wherein the incoming light from the external scene passes through the objective lens to the beamsplitter,
wherein the beamsplitter divides the light into the two optical paths, the wide field of view imaging path and the foveated
imaging path,

wherein the light passes through the first stop to the wide field of view imaging lens along the wide field of view imaging
path,

wherein the wide field of view imaging lens focuses the wide field of view image upon the wide field of view imaging sensor,
wherein the light passes through the second stop to the scanning mirror along the foveated imaging path,
wherein the scanning mirror reflects a region of interest toward the foveated imaging lens through the reflective surface
on the beam splitter,

wherein the foveated imaging lens focuses the foveated image upon the foveated imaging sensor, and
wherein the two images are recorded by the wide field of view and foveated sensors: a wide field of view image and a higher
resolution image of the region of interest within it.
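The claim's foveated path steers a controllable region of interest of the scene onto a separate, higher-resolution sensor. The optical selection performed by the scanning mirror can be illustrated in software as a simple crop of the wide field-of-view frame; the function and data shapes below are hypothetical, not from the patent.

```python
def foveate(wide_image, roi_center, roi_size):
    """Crop a region of interest from a wide field-of-view image.

    A software stand-in for the scanning mirror, which directs a
    controllable region of the external scene onto the foveated sensor.
    """
    r, c = roi_center          # row/column of the ROI centre
    h, w = roi_size            # ROI height and width
    rows = wide_image[r - h // 2 : r + h // 2]
    return [row[c - w // 2 : c + w // 2] for row in rows]
```

In the actual system both images are captured simultaneously from the same incoming light, so the "crop" costs no wide-field resolution.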

US Pat. No. 9,740,006

ERGONOMIC HEAD MOUNTED DISPLAY DEVICE AND OPTICAL SYSTEM

MAGIC LEAP, INC., Planta...

1. An image display system, comprising:
a freeform optical waveguide prism having a first major surface and a second major surface, the first major surface of the
optical waveguide prism which in use is positioned to at least one of receive actively projected images into the optical waveguide
prism from an active image source or emit the actively projected images out of the optical waveguide prism and the second
major surface of the optical waveguide prism which in use is positioned to receive images of a real-world ambient environment
into the optical waveguide prism, which real-world ambient environment is external to the image display system, at least some
portions of the first and the second major surfaces of the optical waveguide prism being refractive surfaces that internally
propagate light entering the optical waveguide prism along at least a portion of a length of the optical waveguide prism;

a freeform compensation lens having a first major surface and a second major surface, the first major surface of the compensation
lens having a shape that at least approximately matches a shape of the second major surface of the optical waveguide prism,
the freeform compensation lens positioned relatively outwardly of the second major surface of the optical waveguide toward
the real-world ambient environment to form a gap between the first major surface of the compensation lens and the second major
surface of the optical waveguide prism; and

at least one coupling lens located between an image display unit and the freeform optical waveguide prism, the at least one
coupling lens to guide light from the image display unit into the optical waveguide prism and correct for optical aberrations,
the at least one coupling lens comprising a liquid lens which is selectively adjustable to adjust a focal plane of the image
display system.

US Pat. No. 9,761,055

USING OBJECT RECOGNIZERS IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. An augmented reality system, comprising:
a passable world model comprising a set of map points corresponding to one or more objects of the real world;
a processor to communicate with one or more individual augmented reality display systems to pass a piece of the passable world
model to the one or more individual augmented reality display systems, wherein the piece of the passable world model is passed
based at least in part on respective locations corresponding to the one or more individual augmented reality display systems
and wherein communication between the processor and the one or more individual augmented reality display systems includes
asynchronous communication; and

a set of object recognizers to recognize a set of objects of the passable world model, the set of object recognizers arranged
as a ring of object recognizers that run on the passable world model data, wherein the ring of object recognizers comprises
at least two object recognizers, and wherein a first object recognizer of the at least two object recognizers recognizes a
first object, and a second object recognizer of the at least two object recognizers recognizes a subset of the first object.
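The "ring of object recognizers" can be pictured as a sequence of independent recognizer functions, each run over the passable-world data, where a later recognizer may target a subset of an earlier one's object (a doorknob within a door, say). The sketch below is a minimal illustration under that reading; the recognizer callables and return shapes are assumptions.

```python
def run_recognizer_ring(map_points, recognizers):
    """Run a ring of object recognizers over passable-world map points.

    Each recognizer is a callable returning the objects it finds in the
    shared map data; results from all recognizers are collected together.
    """
    recognized = []
    for recognize in recognizers:
        recognized.extend(recognize(map_points))
    return recognized
```

A second recognizer that only fires on a region the first recognizer identified would implement the claim's "subset of the first object" relationship.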

US Pat. No. 9,832,437

COLOR SEQUENTIAL DISPLAY

MAGIC LEAP, INC., Planta...

1. A system, comprising:
a light source having a red light source, a green light source, and a blue light source;
a display having at least one pixel corresponding to image data formed from light generated from a red channel for the red
light source, a green channel for the green light source, and a blue channel for the blue light source; and

a controller that controls operation of the light source and the display to display the at least one pixel, the controller
modifying the red channel to create a reduced red channel, modifying the green channel to create a reduced green channel,
and modifying the blue channel to create a reduced blue channel, wherein the controller forms a white channel from portions
removed from the red channel, the green channel, and the blue channel, such that the at least one pixel is formed by sequential
presentation in any order of the white channel, the reduced red channel, the reduced green channel, and the reduced blue channel.
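The channel decomposition in this claim resembles a standard RGB-to-RGBW conversion: a white channel is formed from the portion that can be removed from all three colour channels, and the reduced channels carry the remainder. A minimal sketch, assuming the common min() rule for the removable portion (the claim itself does not fix the rule):

```python
def rgbw_channels(red, green, blue):
    """Split one RGB pixel into reduced R/G/B channels plus a white channel.

    The white channel holds the portion removed from each colour channel;
    sequential presentation of all four channels reproduces the pixel.
    """
    white = min(red, green, blue)
    return red - white, green - white, blue - white, white
```

Because each reduced channel plus the white channel sums back to the source channel, presenting the four fields in any order yields the original pixel intensity, as the claim requires.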

US Pat. No. 9,791,700

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS

MAGIC LEAP, INC., Planta...

1. A system for displaying virtual content, the system comprising:
an image-generating source device that oscillates or vibrates to transmit one or more frames of image data in a time-sequential
manner;

a light modulator configured to transmit light associated with the one or more frames of the image data;
a substrate comprising a plurality of substrate subsections to direct the light that comprises curved wavefronts from the
light modulator to a user's eye by using at least a plurality of reflectors housed in the substrate to render the image information
in focus in the user's eye;

a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of the image data
at a first angle to the user's eye; and

a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the
user's eye, wherein

each reflector of the plurality of reflectors has a selectively variable angle of reflection, and
each reflector is configured such that a refractive index of the each reflector is varied in an analog manner based in part
or in whole upon a total number of sub-frames in the first frame or the second frame as well as a refresh rate at which the
one or more frames are presented to the user's eye.

US Pat. No. 9,857,591

METHODS AND SYSTEM FOR CREATING FOCAL PLANES IN VIRTUAL AND AUGMENTED REALITY

Magic Leap, Inc., Planta...

1. An augmented reality display system, comprising:
first and second spatial light modulators operatively coupled to an image source for projecting light associated with one
or more frames of image data, wherein the first spatial light modulator comprises a Digital Light Processing system that provides
grayscale images and the second spatial light modulator comprises a Liquid Crystal Display that provides a color map, the
combination of the Digital Light Processing system and the Liquid Crystal Display work in conjunction to create multiple depth
planes; and

a variable focus element (VFE) for varying a focus of the projected light such that a first frame of image data is focused
at a first depth plane, and a second frame of image data is focused at a second depth plane, and wherein a distance between
the first depth plane and the second depth plane is fixed,

wherein the first and second spatial light modulators are disposed along a same optical path.

US Pat. No. 9,766,703

TRIANGULATION OF POINTS USING KNOWN POINTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of displaying augmented reality, comprising:
capturing a set of 2D points of a real-world environment at one or more image capturing devices of one or more augmented reality
systems;

determining respective positions of the one or more image capturing devices for the set of 2D points;
determining a three-dimensional (3D) position of a 2D point of the set of 2D points based at least in part on the set of 2D
points and the respective positions; and

generating, at an augmented reality system, a virtual content for the real-world environment by using at least the three-dimensional
position that is shared via a computer network.
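Determining a 3D position from a set of 2D points with known capture positions is classical triangulation: each 2D observation back-projects to a ray from its camera, and the 3D point is where the rays (nearly) meet. The sketch below uses the textbook least-squares midpoint method as an illustration; it is not asserted to be the patented procedure.

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares 3D position from several viewing rays.

    Each observation contributes a ray (camera position plus direction);
    the returned point minimises the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

With two or more rays that are not parallel, A is invertible and the solution is unique; noisy observations simply pull the point toward the best compromise.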

US Pat. No. 10,115,232

USING A MAP OF THE WORLD FOR AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. An augmented reality system, comprising:
a first individual augmented reality display system corresponding to a first location, wherein the first individual augmented reality display system is configured for capturing a first set of geometric map points pertaining to the first location;
a second individual augmented reality display system corresponding to a second location, wherein the second individual augmented reality display system is configured for capturing a second set of geometric map points pertaining to the second location;
a third individual augmented reality display system configured for capturing data pertaining to a particular location of a user; and
a server comprising a processor configured for receiving the first set of geometric map points from the first individual augmented reality display system and the second set of geometric map points from the second individual augmented reality display system, and constructing at least a portion of a passable geometric map of the real world comprising the first and second locations using a topological graph to spatially stitch the first and second sets of geometric map points together into a single larger coherent passable geometric map, wherein the processor is further configured for storing the single larger coherent passable geometric map in a database, for localizing the user within the single larger coherent map of the real world by comparing the captured data to the topological map, and retrieving a third set of the geometric points from the database pertaining to a plurality of real objects at the particular location of the user;
wherein the third individual augmented reality display system is configured for displaying virtual content to the user in relation to the plurality of real objects.
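Spatially stitching per-location map points into "a single larger coherent passable geometric map" via a topological graph can be sketched as a graph traversal that accumulates relative offsets between locations and then translates each location's points into a common frame. Everything below (data shapes, pure translations instead of full transforms) is an illustrative simplification.

```python
from collections import deque

def stitch_maps(points_by_location, edges):
    """Stitch per-location map points into one coherent map.

    `edges` maps an (a, b) pair to the translation of location b's frame
    within location a's frame, a toy stand-in for the relative transforms
    a topological graph would supply.
    """
    start = next(iter(points_by_location))
    offsets = {start: (0.0, 0.0, 0.0)}
    queue = deque([start])
    while queue:                          # BFS over the topological graph
        a = queue.popleft()
        for (u, v), off in edges.items():
            for src, dst, sign in ((u, v, 1), (v, u, -1)):
                if src == a and dst not in offsets:
                    offsets[dst] = tuple(
                        o + sign * d for o, d in zip(offsets[a], off))
                    queue.append(dst)
    merged = []
    for loc, pts in points_by_location.items():
        ox, oy, oz = offsets[loc]
        merged.extend((x + ox, y + oy, z + oz) for x, y, z in pts)
    return merged
```

Localizing a user then amounts to matching the user's captured data against this merged map, as the claim's server does with the topological map.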

US Pat. No. 10,115,233

METHODS AND SYSTEMS FOR MAPPING VIRTUAL OBJECTS IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM

Magic Leap, Inc., Planta...

1. A method of displaying virtual or augmented reality, comprising:
capturing a first set of data at one or more first sensors in a first virtual or augmented reality display system corresponding to a first location;
capturing a second set of data at one or more second sensors in a second virtual or augmented reality display system corresponding to a second location;
receiving the first set of data and the second set of data from the first and second virtual or augmented reality display systems via one or more computer networks;
providing, by at least one processor, one or more timing or quality targets to at least a mapping module for allocating computational resources;
constructing or updating, by the at least one processor of a computing system, a map of a real world at least by stitching, into the map of the real world, a smaller world model including a first node representing the first location and a second node representing the second location in the map with an edge that is emphasized with a first emphasis and is associated with a connectivity strength that represents an extent of sharing between the first node and the second node;
determining, at the computing system, the first virtual or augmented reality system, or the second virtual or augmented reality system, one or more map points for the smaller world model at least by viewing one or more existing features with one or more new, virtual keyframes that are positioned in relation to the first location and the second location;
identifying a map point from the map, a maximum residual stress value for the map, and a bundle adjust process;
determining whether a stress value associated with the map point exceeds the maximum residual stress value;
adjusting the stress value associated with the map point by applying the bundle adjust process to the map point; and
identifying a topological map that corresponds to the map.

US Pat. No. 10,015,477

LIGHT PROJECTOR USING AN ACOUSTO-OPTICAL CONTROL DEVICE

Magic Leap, Inc., Planta...

1. A system for projecting light, the system comprising:
a light generator that generates image light that corresponds to a series of images;
a display device to receive the image light generated by the light generator; and
an acousto-optical scanner, wherein the display device deflects the image light onto the acousto-optical scanner and the acousto-optical scanner deflects portions of the image light onto a plurality of diffractive optical elements, wherein each of the plurality of diffractive optical elements corresponds to a different depth plane of the series of images,
wherein the acousto-optical scanner is configured to receive the image light from the display device at a first resolution, and to output the image light at a second resolution greater than the first resolution.

US Pat. No. 9,846,967

VARYING A FOCUS THROUGH A VARIABLE FOCUS ELEMENT BASED ON USER ACCOMMODATION

MAGIC LEAP, INC., Planta...

1. A system for displaying virtual content to a user, comprising:
an accommodation tracking module to determine an accommodation of the user's eye;
an image-generating source to provide one or more frames of image data in a time-sequential manner;
a light generator to project light associated with the one or more frames of image data;
a plurality of waveguides to receive light rays associated with image data and to transmit the light rays toward the user's
eye;

a first variable focus element (VFE) adjacent one side of the plurality of waveguides to vary a focus of the transmitted light
based at least in part on the determined accommodation of the user's eye;

a second VFE adjacent an opposite side of the plurality of waveguides to compensate to cancel out optical effects of the first
VFE for light rays traveling from the real world through the at least one waveguide towards the user's eye;

at least one lens respectively disposed between at least one respective pair of the plurality of waveguides,
wherein each of the at least one lens has a static focal distance, and
wherein the at least one lens causes the plurality of waveguides to simultaneously create a plurality of different focal planes
of wavefronts.

US Pat. No. 9,874,749

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS

MAGIC LEAP, INC., Planta...

1. A wearable augmented reality display system, comprising:
a vision system having one or more optical elements to project one or more images at respective one or more depth planes to
a user;

a processor communicatively coupled to a mapping database for retrieving a map data corresponding to one or more real objects
of the world, the mapping database receiving inputs from at least one component of one or more wearable augmented reality
display systems;

a sensor to acquire sensor data; and
a pose module to utilize the retrieved map data and the sensor data to determine a head pose of the user,
wherein the processor processes the retrieved map data and the head pose of the user to determine one or more output parameters,
and wherein the processor instructs the vision system to project the one or more images at the one or more depth planes to
the user based at least in part on the determined output parameters.

US Pat. No. 9,753,286

ERGONOMIC HEAD MOUNTED DISPLAY DEVICE AND OPTICAL SYSTEM

Magic Leap, Inc., Planta...

1. An image display system which projects displayed virtual image into a pupil of a user through a waveguide prism, allowing
the user to see displayed content overlaid upon a real world scene, where the system has a wide see-through field of view,
of up to 90° in the temple direction, up to 60° in the nasal direction, and up to 60° above and below a straight-ahead view,
and where the system fits into the shape of an eyeglass form factor, the system comprising:
a. an image display unit 105, disposed towards the temple side of a user's head, which projects light into a waveguide, where the image display unit is
constrained to be outside of a reference curved surface defined by the shape of an average human head;

b. an optional coupling lens group 110, disposed between the image display unit and a waveguide, composed of one or more lenses, which guide light from the image
display unit 105 into the waveguide 100 and corrects for optical aberration;

c. a transparent optical waveguide prism 100, which accepts the light from the image display unit and propagates the light until the image is projected into the field
of view of the user; where the waveguide has a physical inner surface 115, physical edge surface 120 and physical outer surface 125, a first refractive surface 130, and a second refractive surface 135, and a plurality of reflective surfaces, where the waveguide has a shape that fits into an eyeglass form factor and has a
wide see-through field of view of up to 90° in the temple direction, up to 60° in the nasal direction, and up to 60° above
and below a straight-ahead view;

d. a compensation lens 160, secured to the physical outer surface 125 of the waveguide 100, which corrects for optical distortion caused by viewing the world through the waveguide prism; where the inner surface of
the compensation lens 165 approximates the shape of the outer surface 125 of the waveguide; where a small air gap 195 is maintained between the compensation lens and the waveguide on surfaces where the total internal reflection criterion is
satisfied for the outer surface 125 of the waveguide;

whereupon the image display unit 105 transmits light 140 into the optional coupling lens 110 followed by the waveguide 100, or into the waveguide directly, through a first refractive surface 130;

whereupon the light 140 follows a path 145 along the waveguide that comprises a plurality of reflections from the first refractive surface 130 to the second refractive surface 135;

whereupon light 140 passes through the second refractive surface 135, beyond which the user places his or her pupil 150 to view the image;

whereupon light 198 from the real-world scene passes through the compensation lens 160 and the waveguide 100 before reaching the pupil 150, where the see-through field of view of the real-world scene is up to 90° in the temple direction, up to 60° in the nasal
direction, and up to 60° above and below a straight-ahead view.

US Pat. No. 9,881,420

INFERENTIAL AVATAR RENDERING TECHNIQUES IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method for displaying augmented reality, comprising:
displaying a virtual object to a user of an augmented reality display system;
associating a navigation object to the virtual object, wherein the navigation object is configured to be responsive to one
or more predetermined conditions; and

modifying at least one parameter of the navigation object by setting a level of sensitivity of the navigation object in response
to the one or more predetermined conditions.

US Pat. No. 10,043,312

RENDERING TECHNIQUES TO FIND NEW MAP POINTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. An augmented reality system, comprising:
one or more sensors to capture a set of map points pertaining to the real world, wherein the set of map points are captured through a plurality of augmented reality systems; and
a processor to determine a position of a plurality of keyframes that captured the set of map points, render lines from the determined positions of the plurality of keyframes to respective map points captured from the plurality of keyframes, identify points of intersection between the rendered lines, and to determine a set of new map points based at least in part on the identified points of intersection.

US Pat. No. 9,911,234

USER INTERFACE RENDERING IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

2. A method of displaying augmented reality, comprising:
projecting light associated with a virtual object to a user's eyes, wherein the virtual object comprises a virtual user interface;
determining a user input from the user based on an interaction of the user with at least one component of the virtual user
interface;

determining an action to be performed based on the received user input; and
performing the action in response to the received user input, the action comprising projecting light associated with one or
more additional virtual objects to the user's eyes,

wherein the projection of the virtual user interface is based on Wi-Fi signal strength data or cellphone differential signal
strength and the virtual user interface is a floating virtual interface rendered using body-centered rendering.

US Pat. No. 9,857,170

PLANAR WAVEGUIDE APPARATUS HAVING A PLURALITY OF DIFFRACTIVE OPTICAL ELEMENTS

MAGIC LEAP, INC., Planta...

1. A waveguide apparatus, comprising:
a planar waveguide having an at least partially transparent optical element comprising a first end, a second end, a first
face, and a second face, wherein the planar waveguide is configured to permit light beams emitted by or reflected from physical
objects that are located on a far side of the planar waveguide relative to a viewer to reach at least one eye of the viewer;
and

a plurality of diffractive optical elements comprising two dynamic diffractive optical elements and operatively coupled to
the planar waveguide, wherein

the two dynamic diffractive optical elements have different diffractive lens aspects such that when each of the two dynamic
diffractive elements is alternatively switched on, the each of the two dynamic diffractive elements focuses light to a different
focus distance,

the first and second ends of the planar waveguide are opposed to each other along a length of the planar waveguide,
the first and the second faces of the planar waveguide form an at least partially internally reflective optical path along
a portion of the length of the planar waveguide, and

the plurality of diffractive optical elements are configured to interrupt the at least partially internally reflective optical
path to provide a respective plurality of optical paths between an exterior and an interior of the planar waveguide via the
first face thereof at respective positions along the portion of the length of the planar waveguide.

US Pat. No. 10,013,806

AMBIENT LIGHT COMPENSATION FOR AUGMENTED OR VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. A method of displaying augmented reality, comprising:
detecting at least one property pertaining to an ambient light;
modifying, based at least in part on the at least one detected property, a location of a virtual image; and
projecting light associated with the virtual image at the modified location to a user of an augmented reality system.
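The three steps of this claim (detect an ambient-light property, modify the virtual image's location, project at the modified location) can be sketched as a small relocation rule; the luminance map, threshold, and relocation strategy below are all assumptions for illustration.

```python
def compensate_for_glare(virtual_pos, luminance_map, threshold=0.8):
    """Relocate a virtual image away from an over-bright part of the scene.

    `luminance_map` gives a normalised brightness per candidate position;
    the image is moved to the dimmest candidate when its current spot is
    brighter than `threshold`, else left where it is.
    """
    if luminance_map[virtual_pos] <= threshold:
        return virtual_pos            # current location is acceptable
    return min(luminance_map, key=luminance_map.get)
```

Projection then proceeds at whichever position the rule returns, keeping the virtual content legible against the ambient scene.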

US Pat. No. 9,922,462

INTERACTING WITH TOTEMS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. An augmented reality display system, comprising:
one or more sensors configured to identify an inanimate physical object or a virtual object as a totem and to capture data
pertaining to an interaction of the user of the augmented reality display system with the totem;

a processor configured to determine a user input based at least in part on the data pertaining to the interaction of the user
with the totem; and

in response to the user input, an optical sub-system configured to emit light to at least one eye of the user to render a
virtual content perceived by the user in relation to the totem that is also perceived by the user through the augmented reality
display system that is wearable by the user, wherein

the virtual content includes one or more virtual input structures that are operatively coupled to the processor and appear
to emanate from the totem to assimilate one or more physical input structures of one or more physical input devices that change
their appearances as perceived by the user in response to interactions by the user;

the optical sub-system is further configured to present one or more first virtual input elements of the virtual content to
the user in response to a first user input that manipulates the inanimate physical object or the virtual object into a first
position or orientation with respect to a reference frame; and

the optical sub-system is further configured to present one or more second virtual input elements of the virtual content to
the user in response to a second user input that manipulates the inanimate physical object or the virtual object into a second
position or orientation with respect to the reference frame.

US Pat. No. 9,911,233

SYSTEMS AND METHODS FOR USING IMAGE BASED LIGHT SOLUTIONS FOR AUGMENTED OR VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. A method for displaying virtual or augmented reality, comprising:
capturing at least one parameter associated with ambient light in an environment;
identifying a first light map corresponding to one or more first light map parameters and a second light map corresponding
to one or more second light map parameters based at least in part on at least one parameter, wherein a light map includes
a geometric space and models light information of virtual content and the environment in which a user is located;

identifying one or more errors in one or more light map parameters of the one or more first light map parameters or the one
or more second light map parameters based in part or in whole upon the at least one parameter associated with ambient light;
and

constructing and storing a new light map in a library comprising the first light map and the second light map at least by
combining the one or more first light map parameters respectively with the one or more second light map parameters into one
or more corresponding combined parameters and by modifying at least one image generation characteristic of a virtual or augmented
reality system to reduce the one or more errors in the one or more light map parameters;

modifying the virtual content to be presented to a user based at least in part on the new light map; and
projecting light associated with the virtual content that has been modified.

US Pat. No. 9,874,752

APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL CAPABILITY

Magic Leap, Inc., Planta...

1. A compact optical see-through head-mounted display (300), capable of combining a see-through path (307) with a virtual view path (305) such that the opaqueness of the see-through path can be modulated and the virtual view occludes parts of the see-through
view and vice versa, the display comprising:
a. a microdisplay (350) for generating an image to be viewed by a user, the microdisplay having a virtual view path (305) associated therewith;

b. a reflection-type spatial light modulator (340) for modifying the light from an external scene in the real world to block portions of the see-through view that are to be
occluded, the spatial light modulator having a see-through path (307) associated therewith;

c. an objective optics (320), facing an external scene, configured to receive the incoming light from the external scene and to focus the light upon
the spatial light modulator (340) where the objective optics is a one-reflection freeform prism comprising three optical freeform surfaces: refractive surface
S4, reflective surface S5 and refractive surface S6;

d. a beamsplitter (330) configured to merge a digitally generated virtual image from a microdisplay (350) and a modulated see-through image of an external scene passing from a spatial light modulator, producing a combined image;

e. an eyepiece (310) configured to magnify the combined image, where the eyepiece is a one-reflection freeform prism comprising three optical
freeform surfaces: refractive surface S1, reflective surface S2 and refractive surface S3;

f. an exit pupil (302) configured to face the eyepiece, the exit pupil whereupon the user observes the combined view of the virtual and see-through
views in which the virtual view occludes portions of the see-through view;

g. a roof mirror (325) configured to reflect light from the external scene into the objective optics, where the roof mirror adds an additional
reflection to revert the see-through view so as to maintain the parity between the external scene and the see-through view
presented to the viewer;
wherein the mirror (325) is disposed upon the front layer (315) of the display, wherein the objective optics (320) is disposed upon the front layer (315) of the display, where the spatial light modulator (340) is disposed on the back layer (317) of the display, at or near an intermediate image plane of the see-through path, facing a side of the beam splitter (330), where the microdisplay (350) is disposed on the back layer (317) of the display, facing a different side of the beam splitter (330), where the beam splitter (330) is disposed such that the see-through path (307) is merged with the virtual view path (305) and the light from the merged path is directed to the eyepiece (310), wherein the eyepiece (310) is disposed upon the back layer (317) of the display, whereupon the incoming light from an external scene reflected by the mirror (325), enters the objective optics (320) through the refractive surface S4, then is reflected by the reflective surface S5 and exits the objective prism (320) through the refractive surface S6 and forms an intermediate image at its focal plane on the spatial light modulator (340), whereupon the spatial light modulator (340) modulates the light in the see-through path to occlude portions of the see-through view, whereupon the spatial light modulator
reflects the modulated light into the beam splitter (330), whereupon the light from the microdisplay (350) enters the beam splitter (330), whereupon the beamsplitter (330) merges the modulated light in the see-through path (307) with the light in the virtual view path (305) and folds toward the eyepiece (310) for viewing, whereupon the light from the beam splitter enters the eyepiece (310) through the refractive surface S3, then is reflected by the reflective surface S2 and exits the eyepiece (310) through the refractive surface S1 and reaches the exit pupil (302), where the viewer's eye is aligned to see a combined view of a virtual view and a modulated see-through view.

US Pat. No. 9,841,601

DELIVERING VIEWING ZONES ASSOCIATED WITH PORTIONS OF AN IMAGE FOR AUGMENTED OR VIRTUAL REALITY

MAGIC LEAP, INC., Planta...

1. A system for displaying virtual content to a user, comprising:
an image-generating source to provide one or more frames of image data in a time-sequential manner;
a light source to project light associated with the one or more frames of image data;
a waveguide assembly to receive the light and deliver the light towards at least one eye of a user with multiple wavefront
curvatures for a focus level corresponding to a focal distance, wherein

the waveguide assembly comprises at least a first waveguide component operatively coupled to a first weak lens to modify first
light associated with a first frame of the image data such that the first light is perceived as coming from a first focal plane,
and a second waveguide component operatively coupled to a second weak lens to modify second light associated with a second
frame of the image data such that the second light is perceived as coming from a second focal plane, and

the first waveguide component and the second waveguide component sandwich the first weak lens or the second weak lens and
are stacked along a thickness direction in which respective thicknesses of the first and second waveguide components are defined;
and

one or more variable focusing elements optically coupled to the waveguide assembly.
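The depth-plane idea behind the weak lenses can be sketched numerically. This is an illustrative calculation, not from the patent; the function name and the power values used below are assumptions:

```python
# A small sketch of the depth-plane idea (values are illustrative): a weak
# lens adds a small negative optical power so collimated light from its
# waveguide appears to diverge from a finite focal plane at distance 1/|P|.

def perceived_distance_m(lens_power_diopters):
    """Distance (meters) of the apparent focal plane for a weak lens of the
    given power; collimated input (0 D) maps to optical infinity."""
    if lens_power_diopters == 0:
        return float("inf")
    return 1.0 / abs(lens_power_diopters)
```

For example, a -0.5 D weak lens places its frame's content at an apparent 2 m focal plane, while an uncompensated waveguide presents content at infinity.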

US Pat. No. 10,042,097

METHODS AND SYSTEM FOR CREATING FOCAL PLANES USING AN ALVAREZ LENS

MAGIC LEAP, INC., Planta...

1. An augmented reality (AR) display system for delivering augmented reality content to a user, comprising:
an image-generating source to provide one or more frames of image data;
a light modulator to transmit light associated with the one or more frames of image data;
a lens assembly comprising first and second transmissive plates, the first and second transmissive plates each having a first side and a second side that is opposite to the first side, the first side being a plano side, and the second side being a shaped side, the second side of the first transmissive plate comprising a first surface sag based at least in part on a cubic function, and the second side of the second transmissive plate comprising a second surface sag based at least in part on an inverse of the cubic function; and
a diffractive optical element (DOE) to receive the light associated with the one or more frames of image data and direct the light to the user's eyes, the DOE being disposed between and adjacent to the first side of the first transmissive plate and the first side of the second transmissive plate, and wherein the DOE is encoded with refractive lens information corresponding to the inverse of the cubic function such that when the DOE is aligned so that the refractive lens information of the DOE cancels out the cubic function of the first transmissive plate, a wavefront of the light created by the DOE is compensated by the wavefront created by the first transmissive plate, thereby generating collimated light rays associated with virtual content delivered to the DOE.
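The claim's cubic surface sag and its inverse are the classic Alvarez-lens pair: two complementary cubic plates, translated laterally against each other, combine into a purely quadratic (lens-like) thickness profile. A minimal sketch, assuming the standard cubic form f(x, y) = A(x³/3 + xy²) (the constant A and the shift d are illustrative, not from the patent):

```python
# Illustrative sketch of the Alvarez-lens identity behind the claim: one
# plate carries the cubic sag f(x, y) = A*(x**3/3 + x*y**2), the other its
# inverse -f. A lateral shift of +/-d leaves a quadratic (lens-like)
# combined thickness plus a constant, i.e. a variable-power focal plane.

def cubic_sag(x, y, A=1.0):
    """Surface sag of one Alvarez plate (the claim's cubic function)."""
    return A * (x**3 / 3.0 + x * y**2)

def combined_sag(x, y, d, A=1.0):
    """Total sag of the plate pair after a lateral shift of +/-d along x."""
    return cubic_sag(x + d, y, A) - cubic_sag(x - d, y, A)

def quadratic_prediction(x, y, d, A=1.0):
    """Closed form: 2*A*d*(x**2 + y**2) + 2*A*d**3/3 -- a paraboloid."""
    return 2.0 * A * d * (x**2 + y**2) + 2.0 * A * d**3 / 3.0
```

The algebra checks out term by term: (x+d)³ − (x−d)³ = 6x²d + 2d³, so the cubic terms collapse into the quadratic profile that `quadratic_prediction` evaluates directly.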

US Pat. No. 10,089,453

BLUE LIGHT ADJUSTMENT FOR BIOMETRIC IDENTIFICATION

Magic Leap, Inc., Planta...

1. A head mounted display system configured to provide variable levels of blue light to an eye of a user, the display system comprising:
a frame configured to be wearable on the head of the user;
a display configured to project at least blue light into the eye of the user and to modify an intensity of the blue light relative to an intensity of non-blue light;
a camera configured to capture a first image of the eye while the display projects light at a first ratio of intensity of blue light to non-blue light into the eye and configured to capture a second image of the eye while the display projects a second ratio of intensity of blue light to non-blue light different from the first ratio into the eye; and
a hardware processor programmed to:
analyze the images from the camera to determine whether a change in a pupil parameter between the second image and the first image passes a biometric application threshold;
based at least in part on the determined change, instruct the display to modify a ratio of the intensity of blue light to non-blue light and instruct the camera to capture a third image of the eye;
determine that a change in the pupil parameter between the third image and the first image or the second image passes a biometric application threshold; and
perform a biometric application in response to the determination that the change in the pupil parameter between the third image and the first image or the second image passes a biometric application threshold.
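The processor logic of this claim can be sketched as a simple gate. This is a hedged illustration, not the patented implementation: the pupil parameter (here, pupil radius in pixels) and the threshold value are assumptions for the example.

```python
# Hedged sketch of the claimed processor logic (names and thresholds are
# illustrative): compare a pupil parameter across images captured at
# different blue/non-blue intensity ratios, and run the biometric
# application only when the change passes the application threshold.

def pupil_change_passes(radius_a, radius_b, threshold=2.0):
    """True when the pupil-parameter change between two images passes the
    biometric application threshold (absolute radius change, in pixels)."""
    return abs(radius_b - radius_a) >= threshold

def biometric_gate(radius_first, radius_second, radius_third, threshold=2.0):
    """Mirror the claim: the third image must differ from the first OR the
    second image by at least the threshold before the application runs."""
    return (pupil_change_passes(radius_first, radius_third, threshold)
            or pupil_change_passes(radius_second, radius_third, threshold))
```

The design point the claim relies on is physiological: a live pupil constricts under increased blue light, so a third image that fails the gate suggests the "eye" is not responding as living tissue would.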

US Pat. No. 10,042,166

BEAM ANGLE SENSOR IN VIRTUAL/AUGMENTED REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. A display subsystem for a virtual image generation system used by an end user, comprising:
a waveguide apparatus;
an imaging element configured for emitting light;
a collimation element configured for collimating the light from the imaging element into a light beam;
an in-coupling element (ICE) configured for directing the collimated light beam from the collimation element down the waveguide apparatus, such that a plurality of light rays exit the waveguide apparatus to display a pixel of an image frame to the end user, the pixel having a location encoded with angles of the plurality of exiting light rays; and
a sensing assembly configured for sensing at least one parameter indicative of the exiting light ray angles, wherein the at least one sensed parameter comprises an intensity of at least one light ray representative of the plurality of exiting light rays.

US Pat. No. 9,952,042

METHOD AND SYSTEM FOR IDENTIFYING A USER LOCATION

Magic Leap, Inc., Dania ...

1. A method of identifying a user location, comprising:
capturing, using a camera of an augmented reality system, an image of a field of view of a user;
reducing a set of data associated with the image to create a fingerprint by the augmented reality system, wherein the fingerprint comprises a color histogram of the image indicating a location of the user, global positioning system data, and Wi-Fi data;
transmitting the fingerprint to a database system having a database of stored fingerprints;
comparing, at the database system, at least the color histogram of the fingerprint against the database of stored fingerprints, wherein the database of stored fingerprints is a topological map having nodes and lines respectively representing real world physical spaces and relationships between the real world physical spaces, each node of the topological map comprising a respective stored fingerprint including a color histogram, global positioning system data, and Wi-Fi data, the topological map not being a geometric map created from extracted points and tagged images, the topological map being a simplified representation of physical spaces in a real world, the topological map comprising fingerprints of spaces and relationships between fingerprints corresponding to various spaces, wherein the relationships between the fingerprints corresponding to the various spaces are not geographical relationships;
identifying a localization area of the user based at least in part on a match between the fingerprint and a stored fingerprint of the database by the database system; and
transmitting passable world data corresponding to the localization area from the database system to the augmented reality system without transmitting passable world data not relevant to the localization area, the passable world data comprising at least keyframes for use by the augmented reality system to determine pose, wherein the localization area corresponds to a volume and the pose corresponds to a point within the volume.
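The histogram-matching step of this claim can be sketched in a few lines. This is an illustrative example, not the patented implementation: the node names, the bin count, and histogram intersection as the similarity metric are all assumptions.

```python
# Illustrative sketch of the fingerprint-matching step: reduce an image to a
# coarse color histogram and match it against stored node fingerprints of a
# topological map by histogram intersection (metric chosen for the example).

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: each 0-255 channel quantized into `bins` buckets."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        for ch, v in enumerate((r, g, b)):
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1
    return hist

def intersection(h1, h2):
    """Histogram intersection similarity (higher is more alike)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_node(query_hist, node_fingerprints):
    """Return the topological-map node whose stored histogram best matches."""
    return max(node_fingerprints,
               key=lambda n: intersection(query_hist, node_fingerprints[n]))
```

In the claimed system the winning node identifies only a localization *area* (a volume); the transmitted keyframes then refine pose to a point within that volume.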

US Pat. No. 9,928,654

UTILIZING PSEUDO-RANDOM PATTERNS FOR EYE TRACKING IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. An eye tracking device to be used in a head-worn virtual or augmented reality device, comprising:
a virtual content generation subsystem configured to render virtual contents to a user;
a plurality of light sources comprising multiple groups of light sources configured to illuminate one eye of the user at least with a pseudorandom pattern, wherein
the pseudorandom pattern comprises a patterned code generated at least by pseudo-randomly varying at least one parameter of one or more groups of the multiple groups of light sources while leaving the at least one parameter of one or more remaining groups of the multiple groups unchanged in a plurality of time slices;
an eye tracking device positioned in relation to the one eye of the user and configured to detect one or more characteristics pertaining to an interaction between the light from the one or more groups of the multiple groups of the plurality of light sources and the one eye based in part or in whole upon the pseudorandom pattern; and
a processor operatively coupled to the eye tracking device to:
determine an eye model representing the one eye of the user with two circular geometric shapes;
determine a plurality of eye pointing vectors with the eye model based at least in part upon the one or more characteristics pertaining to the interaction; and
determine a movement or a pose of the one eye based at least in part on the plurality of eye pointing vectors and apply the movement or the pose of the one eye as a calculated movement or calculated pose to both eyes.
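The patterned-code generation described in the claim can be sketched directly. This is a hedged illustration: the group layout, the seed, and intensity as the varied parameter are assumptions for the example.

```python
# Illustrative sketch of the claimed illumination scheme: across time
# slices, pseudo-randomly vary one parameter (here, intensity) of some LED
# groups while leaving the remaining groups unchanged, yielding a patterned
# code the eye tracker can correlate against detected reflections.

import random

def patterned_code(num_groups, num_slices, varied_groups, seed=7):
    """Return per-slice intensity lists; only `varied_groups` change."""
    rng = random.Random(seed)          # deterministic pseudo-random source
    baseline = [0.5] * num_groups      # unchanged groups keep this value
    slices = []
    for _ in range(num_slices):
        frame = baseline[:]
        for g in varied_groups:
            frame[g] = rng.choice([0.2, 0.8])   # pseudo-random variation
        slices.append(frame)
    return slices
```

Because the code is known to the processor, reflections that follow the varied groups' time signature can be attributed to specific sources, which is what makes the glint geometry (and hence the two-circle eye model) recoverable.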

US Pat. No. 9,915,826

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS HAVING IMPROVED DIFFRACTIVE GRATING STRUCTURES

Magic Leap, Inc., Planta...

1. An augmented reality (AR) display system for delivering augmented reality content to a user, comprising:
an image-generating source to provide one or more frames of image data;
a light modulator to transmit light associated with the one or more frames of image data; and
a diffractive optical element (DOE) to receive the light associated with the one or more frames of image data and direct the
light to the user's eyes, the DOE comprising a diffraction structure having a waveguide substrate, a surface grating, and
an underlayer disposed between the waveguide substrate and the surface grating, wherein the waveguide substrate has a waveguide
refractive index, and the underlayer has an underlayer refractive index that is different from the waveguide refractive index.

US Pat. No. 9,904,058

DISTRIBUTED LIGHT MANIPULATION OVER IMAGING WAVEGUIDE

Magic Leap, Inc., Planta...

1. A stacked waveguide assembly comprising:
a first waveguide comprising:
a first layer of an incoupling optical element configured to couple light at a first wavelength into a first layer of a light
distributing element, the light distributing element comprising a wavelength selective region;

a first layer of the wavelength selective region configured to receive incoupled light from the first layer of the incoupling
optical element and to attenuate the incoupled light not at the first wavelength relative to incoupled light at the first
wavelength,

wherein the first layer of the light distributing element is configured to couple the incoupled light at the first wavelength
out of the first layer of the wavelength selective region; and

a first layer of an outcoupling optical element configured to receive the incoupled light at the first wavelength from the
first layer of the light distributing element and to couple the incoupled light out of the first waveguide; and

a second waveguide comprising:
a second layer of the incoupling optical element configured to couple light at a second wavelength into a second layer of
the light distributing element, the second wavelength different from the first wavelength;

a second layer of the wavelength selective region configured to receive incoupled light from the second layer of the incoupling
optical element and to attenuate the incoupled light not at the second wavelength relative to incoupled light at the second
wavelength,

wherein the second layer of the light distributing element is configured to couple the incoupled light at the second wavelength
out of the second layer of the wavelength selective region; and

a second layer of the outcoupling optical element configured to receive the incoupled light at the second wavelength from
the second layer of the light distributing element and to couple the incoupled light out of the second waveguide.

US Pat. No. 10,089,526

EYELID SHAPE ESTIMATION

Magic Leap, Inc., Planta...

1. A method for eye tracking, comprising:
under control of a hardware processor:
generating an eye-box over an eye in an eye image for determining an eye pose of the eye in the eye image, wherein the eye pose comprises a direction toward which the eye is looking,
wherein the eye-box has an upper edge and a lower edge,
wherein the upper edge overlays a portion of an upper eyelid of the eye, and
wherein the lower edge overlays a portion of a lower eyelid of the eye;
generating a plurality of radial lines extending from approximately the center of the eye-box to the upper edge or the lower edge;
determining candidate points on the plurality of radial lines using an edge detector;
determining parameters of a parabolic fit curve from a subset of the candidate points, wherein the parabolic fit curve is an estimation of an eyelid shape of the eye in the eye image;
determining a score of the parabolic fit curve is above a threshold score using the parameters of the parabolic fit curve;
determining that the parabolic fit curve is a preferred estimation of the eyelid shape of the eye in the eye image; and
determining the eye pose of the eye in the eye image using parameters of the preferred estimation of the eyelid shape; and
tracking the eye in the eye image using the eye pose of the eye in the eye image.
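The parabolic-fit step of this claim is straightforward to sketch. The example below is illustrative, not the patented implementation: it fits the curve exactly through a minimal subset of three candidate points and scores it by how many remaining candidates lie near the curve (the tolerance is an assumption).

```python
# Minimal sketch of the eyelid-fit step: candidate points found by an edge
# detector along radial lines are reduced to a parabola y = a*x^2 + b*x + c;
# three candidates determine the curve exactly, and a score counts how many
# remaining candidates lie near it.

def parabola_through(p1, p2, p3):
    """Solve for (a, b, c) of y = a*x^2 + b*x + c through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def fit_score(coeffs, points, tol=0.5):
    """Fraction of candidate points within `tol` of the fitted parabola."""
    a, b, c = coeffs
    near = sum(1 for x, y in points if abs(a * x**2 + b * x + c - y) <= tol)
    return near / len(points)
```

A fit whose score passes the threshold is kept as the preferred eyelid-shape estimate; in the claim that shape then feeds the eye-pose determination.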

US Pat. No. 9,996,977

COMPENSATING FOR AMBIENT LIGHT IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. An augmented reality device, comprising:
one or more sensors to detect at least one property pertaining to an ambient light;
a processor communicatively coupled to the one or more sensors to modify a location of a virtual image to be projected to the user of a head-mounted augmented reality system based at least in part on the at least one detected property; and
an optical sub-system to project light associated with the virtual image at the modified location to the user of the augmented reality device.

US Pat. No. 9,948,874

SEMI-GLOBAL SHUTTER IMAGER

MAGIC LEAP, INC., Planta...

1. An image sensor comprising:
a two-dimensional pixel array divided into a plurality of blocks, each of the plurality of blocks comprising pixels arranged in at least two different rows and two different columns, and
a shutter that exposes the plurality of blocks, with all pixels in each block being exposed synchronously;
readout circuitry comprising a plurality of readout circuits, each readout circuit corresponding to one of the plurality of blocks, each of the readout circuits capable of receiving and processing signals from the pixels in each corresponding block; and
wherein the two-dimensional pixel array includes M rows of pixels, M no less than 100, and wherein a height of each block is at least one twentieth of a combined height of M rows but no more than one fifth of the combined height of M rows;
wherein the two-dimensional pixel array includes N columns, N no less than 100, and a width of each block is at least one twentieth of a combined width of N columns of pixels but no more than one fifth of the combined width of N columns.
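The numeric limits in this claim reduce to a simple geometric check. A minimal sketch (the function name is ours; the limits are taken directly from the claim):

```python
# Sketch of the claimed block-geometry constraints: an M x N pixel array,
# M and N each at least 100, tiled into blocks whose height is between
# 1/20 and 1/5 of the array height and whose width is between 1/20 and
# 1/5 of the array width.

def block_geometry_ok(m_rows, n_cols, block_h, block_w):
    """True when the block dimensions satisfy the claim's numeric limits."""
    if m_rows < 100 or n_cols < 100:
        return False
    height_ok = m_rows / 20.0 <= block_h <= m_rows / 5.0
    width_ok = n_cols / 20.0 <= block_w <= n_cols / 5.0
    return height_ok and width_ok
```

For a 1000 x 1200 array, valid blocks are between 50 and 200 rows tall and between 60 and 240 columns wide; each such block is exposed synchronously, which is what makes the shutter "semi-global".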

US Pat. No. 9,891,077

DUAL COMPOSITE LIGHT FIELD DEVICE

MAGIC LEAP, INC., Planta...

1. An apparatus, comprising:
a first waveguide having opposed planar input and output faces;
a diffractive optical element (DOE) formed across the first waveguide, said DOE for coupling light into the waveguide and
wherein the light coupled into the waveguide is directed via total internal reflection to an exit location on the waveguide;
and

a light sensor having an input positioned adjacent to the exit location of the first waveguide to capture light exiting therefrom
and generating output signals corresponding thereto, wherein the angle and position of the light sensor with respect to the
exit location of the first waveguide are movable and under the control of a processor.

US Pat. No. 9,939,643

MODIFYING A FOCUS OF VIRTUAL IMAGES THROUGH A VARIABLE FOCUS ELEMENT

Magic Leap, Inc., Dania ...

1. A system for displaying virtual content to a user, comprising:
a light source to multiplex one or more light patterns associated with one or more frames of image data in a time-sequential manner;
a waveguide to receive the one or more light patterns at a first focus level;
a variable focus element (VFE) coupled to and located on a first side external to the waveguide and defining a first plurality of focus levels to place at least some of the light patterns at a second focus level of the plurality of focus levels; and
a second VFE that is located on a second side external to the waveguide and defines a second plurality of focus levels to adjust a focus of light from an outside world such that the user's view of the outside world is substantially undistorted as the second VFE varies the focus of the light from the outside world.

US Pat. No. 9,978,182

TECHNIQUE FOR MORE EFFICIENTLY DISPLAYING TEXT IN VIRTUAL IMAGE GENERATION SYSTEM

MAGIC LEAP, INC., Planta...

1. A method of operating a virtual image generation system, the method comprising:
allowing an end user to visualize an object of interest in a three-dimensional scene;
spatially associating a text region within a field of view of the user, wherein the text region is spatially associated with the object of interest;
generating a gesture reference associated with the object of interest;
generating a textual message that identifies at least one characteristic of the object of interest;
streaming the textual message within the text region;
sensing gestural commands from the end user by detecting an angular position of an anatomical part of the end user relative to a plurality of different regions of the gesture reference; and
controlling the streaming of the textual message in response to the sensed gestural commands, wherein the gesture reference is an annular ring surrounding the object of interest, and
wherein a first side of the annular ring forms one of the different regions, and
a second side of the annular ring opposite of the first side of the annular ring forms another one of the different regions.
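The gesture test this claim describes (opposite sides of an annular ring forming distinct command regions) can be sketched with basic trigonometry. The geometry below is illustrative; the region names and the half-plane split are assumptions for the example.

```python
# Hedged sketch of the gesture-region test: the gesture reference is an
# annular ring around the object of interest; the angular position of a
# tracked anatomical part relative to the ring's center decides whether it
# falls in the "first" side region or the opposite "second" side region.

import math

def gesture_region(center, point):
    """Map a 2-D hand position to 'first' (right half of the ring) or
    'second' (left half) by its angle about the ring's center."""
    angle = math.atan2(point[1] - center[1], point[0] - center[0])
    return "first" if -math.pi / 2 <= angle <= math.pi / 2 else "second"
```

In the claimed method, one region would, for instance, advance the streamed textual message while the opposite region reverses or pauses it.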

US Pat. No. 9,915,824

COMBINING AT LEAST ONE VARIABLE FOCUS ELEMENT WITH A PLURALITY OF STACKED WAVEGUIDES FOR AUGMENTED OR VIRTUAL REALITY DISPLAY

Magic Leap, Inc., Planta...

1. A system for displaying virtual content to a user, comprising:
a plurality of waveguides to receive light rays associated with image data and to transmit the light rays toward at least
one eye of a user, wherein the plurality of waveguides are stacked about a direction pointing to the at least one eye of the
user;

a first lens coupled to a first waveguide of the plurality of waveguides to modify light rays transmitted from the first waveguide,
thereby delivering first light rays having a first wavefront curvature and a first bit depth to the at least one eye of the
user;

a second lens coupled to a second waveguide of the plurality of waveguides to modify light rays transmitted from the second
waveguide, thereby delivering second light rays having a second wavefront curvature and a second bit depth to the at least
one eye of the user, wherein

the first bit depth is lower than the second bit depth; and
a combined volumetric display subsystem including the first lens, the first waveguide, the second lens, and the second waveguide
and configured to project, to the at least one eye of the user, light rays that have a variable wavefront curvature and a
higher bit depth than the first bit depth and a higher frame rate than a lower frame rate at which the first light rays or
the second light rays are presented.

US Pat. No. 10,048,501

APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL CAPABILITY

Magic Leap, Inc., Planta...

1. A compact optical see-through head-mounted display, capable of combining a see-through path with a virtual view path such that the opaqueness of the see-through path can be modulated and the virtual view occludes parts of the see-through view and vice versa, the display comprising:
a. a microdisplay for generating an image to be viewed by a user, the microdisplay having a virtual view path associated therewith;
b. a transmission-type spatial light modulator for modifying the light from an external scene to block portions of the see-through view that are to be occluded, the spatial light modulator having a see-through path associated therewith;
c. an objective optics, facing an external scene, configured to receive the incoming light from the external scene and to focus the light upon the spatial light modulator, where the objective optics is a one-reflection freeform prism comprising three optical freeform surfaces: refractive surface S4, reflective surface S5 and refractive surface S6;
d. a beamsplitter configured to merge a digitally generated virtual image from a microdisplay and a modulated see-through image of an external scene passing from a spatial light modulator, producing a combined image;
e. an eyepiece configured to magnify the combined image, where the eyepiece is a two-reflection freeform prism comprising four optical freeform surfaces: refractive surface S1, reflective surface S2, reflective surface S1′ and refractive surface S3;
f. an exit pupil configured to face the eyepiece, the exit pupil whereupon the user observes the combined view of the virtual and see-through views in which the virtual view occludes portions of the see-through view;
g. a roof mirror configured to reflect light from the objective optics into the spatial light modulator, where the roof mirror adds an additional reflection to the see-through path to revert the see-through view so as to maintain parity between the external scene and the see-through view presented to the viewer;
wherein the objective optics is disposed upon a front layer of the display, wherein the mirror is disposed upon a front layer of the display, where the spatial light modulator is disposed on the back layer of the display, at or near an intermediate image plane of the see-through path, between the mirror and the beam splitter, where the microdisplay is disposed on the back layer of the display, facing a different side of the beam splitter, where the beam splitter is disposed such that the see-through path is merged with the virtual view path and the light from the merged path is directed to the eyepiece, wherein the eyepiece is disposed upon the back layer of the display,
whereupon incoming light from the external scene enters the objective optics through the refractive surface S4, then is reflected by the reflective surface S5 and exits the objective optics through the refractive surface S6 and is folded by the mirror toward the back layer and forms an intermediate image at its focal plane on the spatial light modulator, whereupon the spatial light modulator modulates the light in the see-through path to occlude portions of the see-through view, whereupon the spatial light modulator transmits the modulated light into the beam splitter, whereupon the light from the microdisplay enters the beam splitter, whereupon the beamsplitter merges the modulated light in the see-through path with the light in the virtual view path and folds toward the eyepiece for viewing, whereupon the light from the beam splitter enters the eyepiece through the refractive surface S3, then is consecutively reflected by the reflective surfaces S1′ and S2, and exits the eyepiece through the refractive surface S1 and reaches the exit pupil, where the viewer's eye is aligned to see a combined view of a virtual view and a modulated see-through view.

US Pat. No. 9,972,132

UTILIZING IMAGE BASED LIGHT SOLUTIONS FOR AUGMENTED OR VIRTUAL REALITY

Magic Leap, Inc., Dania ...

1. An augmented reality device, comprising:
an optical apparatus, the optical apparatus projects light associated with one or more virtual objects;
a light probe, the light probe captures at least one light parameter associated with ambient light; and
a processor, the processor selects a light map from a library of preexisting light maps comprising a selected light map, the preexisting light maps having respective pluralities of light parameters, and the selected light map is selected from the library of preexisting light maps based at least in part on the at least one light parameter captured by the light probe,
wherein the processor generates a transformed light map from the selected light map and an object light map,
the selected light map comprising a user-centric light map modeled as a user-centric sphere, the user-centric sphere being centered on a user,
the object light map comprising an object-centric light map modeled as an object-centric sphere, the object-centric sphere surrounding at least one of the one or more virtual objects,
the transformed light map being generated by projecting data from the user-centric sphere onto the object-centric sphere from a point of view of the at least one of the one or more virtual objects to create a new light map, and
the transformed light map being used by the augmented reality device to modify at least one of the one or more virtual objects.

US Pat. No. 10,073,272

VIRTUAL OR AUGMENTED REALITY HEADSETS HAVING ADJUSTABLE INTERPUPILLARY DISTANCE

Magic Leap, Inc., Dania ...

1. A virtual or augmented reality headset, comprising:
a frame including opposing arm members and a bridge portion positioned intermediate the opposing arm members;
a pair of virtual or augmented reality eyepieces each having an optical center, the pair of virtual or augmented reality eyepieces movably coupled to the frame to enable adjustment of an interpupillary distance between the optical centers; and
an adjustment mechanism coupled to both of the pair of virtual or augmented reality eyepieces and operable to simultaneously move the pair of virtual or augmented reality eyepieces to adjust the interpupillary distance, the adjustment mechanism including a linear actuator physically coupled to the pair of virtual or augmented reality eyepieces by a plurality of links which are arranged such that movement of the linear actuator in a first direction causes the plurality of links to increase the interpupillary distance between the optical centers of the pair of virtual or augmented reality eyepieces and movement of the linear actuator in a second direction opposite the first direction causes the plurality of links to decrease the interpupillary distance between the optical centers of the pair of virtual or augmented reality eyepieces;
wherein the frame includes at least two linear rails on each of the opposing sides of the frame vertically offset from each other to guide a respective one of the virtual or augmented reality eyepieces, and
wherein, for each of the opposing sides of the frame, the at least two linear rails and the arm member form a fork structure.

US Pat. No. 10,100,154

HIGH REFRACTIVE INDEX MATERIALS

Magic Leap, Inc., Planta...

1. A method of fabricating a high refractive index polymer composite, the method comprising:
combining a thiol monomer and an ene monomer to yield a composite mixture;
heating the composite mixture to yield a homogeneous composite mixture; and
curing the homogeneous composite mixture to yield a polymer composite,
wherein the ene monomer comprises zirconium oxo (meth)acrylate clusters, and the zirconium oxo (meth)acrylate clusters provide 50 wt % to 100 wt % of the ene monomer in the composite mixture.

US Pat. No. 10,101,802

MASSIVE SIMULTANEOUS REMOTE DIGITAL PRESENCE WORLD

Magic Leap, Inc., Planta...

1. A system configured to allow one or more users to interact with a virtual world comprised of virtual world data, the system comprising:
a computer network comprising one or more computer servers, the one or more computer servers comprising memory, processing circuitry, and software stored in the memory and executable by the processing circuitry to process at least a portion of the virtual world data;
the computer network operable to transmit the virtual world data to a wearable user device for presentation to a first user;
a gateway operatively coupled to, and distinct from, the wearable user device and the computer network and configured to monitor and regulate an exchange of virtual world data between the wearable user device and the computer network to allow an optimum data processing of the wearable user device; and
a user sensing system connected to the wearable user device comprising a camera configured to detect an angular measurement of a user's pupil,
wherein the system is configured such that at least a portion of the virtual world changes in response to a change in the virtual world data,
wherein, in conjunction with the virtual world changes in response to a change in the virtual world data, at least a portion of the virtual world data is changed in response to a static physical object external to the user and sensed by the wearable user device,
wherein the static physical object external to the user comprises a mapped object in a physical environment in vicinity of the first user,
wherein the optimum data processing comprises prioritizing a plurality of renderings, such that processing a rendering of a dynamic virtual object is prioritized over a rendering of a static virtual object and processing a rendering of data in a field of view having less than sixty degrees forward of the angular measurement of the user's pupil is prioritized over a rendering of data outside the field of view, where the plurality of renderings are performed by the gateway device and transmitted to the wearable user device, and
wherein the change in virtual world data represents rendering at least one of the dynamic virtual object and the static virtual object with the static physical object external to the user according to a predetermined relationship.
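The two-level prioritization in this claim sorts cleanly into a tuple key. A hedged sketch (the dictionary field names are assumptions, not from the patent; the 60-degree figure is from the claim):

```python
# Sketch of the claimed rendering prioritization: dynamic virtual objects
# outrank static ones, and objects within 60 degrees forward of the measured
# pupil direction outrank those outside that field of view. Lower tuples
# sort first, so (dynamic, in-FOV) objects render before everything else.

def render_priority(obj):
    """Sort key: (0 if dynamic else 1, 0 if inside the 60-degree FOV else 1)."""
    in_fov = abs(obj["angle_from_gaze_deg"]) < 60.0
    return (0 if obj["dynamic"] else 1, 0 if in_fov else 1)

def prioritized(objects):
    """Order renderings the way the gateway would process them."""
    return sorted(objects, key=render_priority)
```

In the claimed system this ordering runs on the gateway, so the wearable device receives the most perceptually important renderings first.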

US Pat. No. 10,302,957

POLARIZING BEAM SPLITTER WITH LOW LIGHT LEAKAGE

Magic Leap, Inc., Planta...

1. A polarizing beam splitter comprising:
an optically transmissive spacer having first and second opposing faces, wherein the optically transmissive spacer comprises a first region along the first face and a second region along the second face;
a reflection-preventing polarizer between the first and second regions;
a first polarizer on the first opposing face; and
a second polarizer on the second opposing face,
wherein the first polarizer is a different type of polarizer from the second polarizer.

US Pat. No. 10,021,149

SYSTEM AND METHOD FOR AUGMENTED AND VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. A system for enabling at least one user to interact within a virtual world comprising virtual world data, comprising:
a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; and
a first user device, configured to be operated by a first user, comprising an environment-sensing system and a user-sensing system,
wherein the first user device is operatively coupled to the computer network,
wherein the environment-sensing system is configured to capture a local environment audio input,
wherein the user-sensing system is configured to capture a user audio input from the first user,
wherein at least a first portion of the virtual world data is the local environment audio input,
wherein at least a second portion of the virtual world data is the user audio input from the first user;
wherein the virtual world further comprises a visual rendering by at least one of the computer servers or the one or more computing devices; and
wherein the visual rendering of the virtual world is presented in a three-dimensional format.

US Pat. No. 10,008,038

UTILIZING TOTEMS FOR AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. A method of displaying augmented reality, comprising:
identifying a physical object as a totem having at least one visual user input element on a physical surface of the totem;
capturing at least one image of a physical interaction of at least one finger of a user of an augmented reality system with the physical surface of the totem;
mapping locations of the physical interaction of the at least one finger of the user with the physical surface of the totem to a position of each of the at least one input element, thereby determining an interaction of the at least one finger of the user with the at least one visual user input element of the totem;
detecting at least one characteristic pertaining to the interaction of the at least one finger of the user with the at least one user input element of the totem based on the at least one captured image; and
determining a user input based at least in part on the at least one detected characteristic.
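The claim's mapping step (finger contact on the totem surface matched to a visual input element) can be sketched as a nearest-element lookup. Everything below is illustrative: the function names, the coordinate convention, and the distance threshold are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claimed mapping step: fingertip contact
# coordinates on the totem surface are matched to the nearest visual
# input element within a tolerance, yielding a user input. All names
# and thresholds here are illustrative.

def map_touch_to_element(touch_xy, elements, max_dist=0.02):
    """Return the id of the input element closest to the touch point,
    or None if no element lies within max_dist (in surface units)."""
    best_id, best_d2 = None, max_dist ** 2
    for elem_id, (ex, ey) in elements.items():
        d2 = (touch_xy[0] - ex) ** 2 + (touch_xy[1] - ey) ** 2
        if d2 <= best_d2:
            best_id, best_d2 = elem_id, d2
    return best_id

# Example: a totem with two virtual "buttons" on its surface.
elements = {"ok": (0.10, 0.10), "cancel": (0.30, 0.10)}
print(map_touch_to_element((0.105, 0.095), elements))  # -> "ok"
print(map_touch_to_element((0.50, 0.50), elements))    # -> None
```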

US Pat. No. 10,060,766

DUAL COMPOSITE LIGHT FIELD DEVICE

MAGIC LEAP, INC., Planta...

1. An apparatus comprising:
a processor;
a first waveguide having opposed planar input and output faces;
a first diffractive optical element (DOE) formed across the first waveguide, the first DOE configured to couple light into the first waveguide, wherein the light coupled into the first waveguide is directed via total internal reflection to an exit location of the first waveguide; and
a light sensor having an input positioned adjacent to the exit location of the first waveguide to capture light exiting therefrom and generate output signals corresponding thereto,
wherein:
the angle and position of the light sensor with respect to the exit location of the first waveguide are movable and are controllable by the processor, and
the output signals comprise a polar coordinate pixel corresponding to the captured light.

US Pat. No. 9,990,777

PRIVACY-SENSITIVE CONSUMER CAMERAS COUPLED TO AUGMENTED REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. An augmented reality display system, comprising:
a housing for one or more components for the augmented reality display system, wherein the one or more components comprises at least a spatial light modulator to project light associated with one or more virtual images to a user and a plurality of sensors to capture information pertaining to the user's surroundings, and wherein at least one sensor of the plurality of sensors is an image-based sensor;
a processing module communicatively coupled to the housing to process a set of data retrieved from the plurality of sensors, wherein the processing module comprises a gating mechanism that selectively allows the set of data retrieved from the plurality of sensors to be transmitted to a mobile platform; and
a detachable camera removably attached to the housing of the augmented reality display system, wherein the gating mechanism selectively allows the set of data retrieved from the plurality of sensors to pass through to the mobile platform based at least in part on whether the detachable camera is detected to be attached to the housing of the augmented reality display system.

US Pat. No. 9,946,071

MODIFYING LIGHT OF A MULTICORE ASSEMBLY TO PRODUCE A PLURALITY OF VIEWING ZONES

Magic Leap, Inc., Dania ...

1. A system for displaying virtual content to a user, comprising:
a plurality of light guiding structures to propagate coherent light associated with one or more frames of image data and configured to produce an aggregate wavefront;
a phase modulator coupled to one or more light guiding structures of the plurality of light guiding structures and configured to induce one or more phase delays in projected light that is projected by the one or more light guiding structures;
a processor configured to control the phase modulator in a manner such that the aggregate wavefront produced by the plurality of light guiding structures is varied between a planar wavefront and a plurality of curved wavefronts with respective curvatures that correspond to a plurality of focal distances for the virtual content; and
a waveguide assembly that comprises at least one weak lens and is configured to present the image data at the plurality of focal distances to at least one eye of the user based at least in part upon the aggregate wavefront and the one or more phase delays.

US Pat. No. 9,984,506

STRESS REDUCTION IN GEOMETRIC MAPS OF PASSABLE WORLD MODEL IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Dania ...

1. An augmented or virtual reality system, comprising:
a set of map points captured in a plurality of existing keyframes from a real world by a set of augmented or virtual reality display systems;
a data structure to receive or store the set of map points captured from the real world;
a processor communicatively coupled to the data structure to construct or update a geometric map of the real world based at least in part on the set of map points, wherein
a node in the geometric map comprises a keyframe that captures at least a subset of map points of the set of map points, and
a strength of a connection between two nodes in the geometric map corresponds to a number of map points shared between the two nodes;
the processor further configured to generate or update a simplified map corresponding to the geometric map at least by representing the real world with a plurality of point nodes corresponding to the plurality of existing keyframes and connecting the plurality of point nodes with a set of edges;
the processor further configured to determine and position a virtual keyframe in relation to a normal direction of at least one existing keyframe of the plurality of existing keyframes, and
a head worn or mounted display coupled to the processor, comprising a waveguide, and configured to project light beams to at least one eye of a user based at least in part upon the geometric map, the simplified map, and the virtual keyframe that is determined and positioned in relation to a normal direction of at least one existing keyframe of the plurality of existing keyframes.
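The claim's graph structure (keyframe nodes, edge strength equal to the number of shared map points) can be sketched directly. This is an illustrative reconstruction, not the patent's implementation; the data layout and function names are assumptions.

```python
# Illustrative sketch: each keyframe node holds the set of map-point
# ids it observes; the strength of a connection between two nodes is
# the count of map points the two keyframes share, as the claim recites.

def edge_strength(keyframe_a, keyframe_b):
    """Number of map points observed by both keyframes."""
    return len(set(keyframe_a) & set(keyframe_b))

def build_graph(keyframes):
    """Build weighted edges between every pair of keyframes that
    share at least one map point. keyframes: dict id -> set of points."""
    edges = {}
    ids = sorted(keyframes)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            w = edge_strength(keyframes[a], keyframes[b])
            if w > 0:
                edges[(a, b)] = w
    return edges

keyframes = {"kf0": {1, 2, 3, 4}, "kf1": {3, 4, 5}, "kf2": {9}}
print(build_graph(keyframes))  # {('kf0', 'kf1'): 2}
```

The simplified map of the claim would then keep one point node per keyframe and only these weighted edges.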

US Pat. No. 10,254,536

COLLIMATING FIBER SCANNER DESIGN WITH INWARD POINTING ANGLES IN VIRTUAL/AUGMENTED REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. A display subsystem for a virtual image generation system for use by an end user, comprising:
a planar waveguide apparatus;
an optical fiber having a distal tip affixed relative to the planar waveguide apparatus, and an aperture proximal to the distal tip;
at least one light source coupled to the optical fiber and configured for emitting light from the aperture of the optical fiber;
a mechanical drive assembly to which the optical fiber is mounted, the mechanical drive assembly configured for displacing the aperture of the optical fiber in accordance with a scan pattern;
an optical waveguide input apparatus configured for directing the light from the aperture of the optical fiber down the planar waveguide apparatus, such that the planar waveguide apparatus displays one or more image frames to the end user.

US Pat. No. 10,275,902

DEVICES, METHODS AND SYSTEMS FOR BIOMETRIC USER RECOGNITION UTILIZING NEURAL NETWORKS

MAGIC LEAP, INC., Planta...

1. A method of identifying a user of a system, comprising:
analyzing image data;
generating shape data based on the image data;
analyzing the shape data;
generating general category data based on the shape data;
generating narrow category data by comparing the general category data with a characteristic; and
generating a classification decision based on the narrow category data,
wherein the characteristic is from a known potentially confusing mismatched individual.

US Pat. No. 10,068,374

SYSTEMS AND METHODS FOR A PLURALITY OF USERS TO INTERACT WITH AN AUGMENTED OR VIRTUAL REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. A method for a first user located in a first real geographic location and a second user located in a second real geographic location different from the first real geographic location to interact within a virtual world comprising virtual world data, the method comprising:
sensing a plurality of inanimate real world objects at the first real geographic location of a first user device, the first user device comprising at least a first head mounted display device having an environment-sensing system, the plurality of inanimate real world objects being sensed in one or more fields of view of the first user;
creating an avatar for a first user accessing a virtual world through the first user device at the first real geographic location, wherein the virtual world includes a virtual representation of the plurality of inanimate real world objects as the plurality of inanimate real world objects appear in the first real geographical location;
placing the avatar in the virtual world;
displaying the avatar in an augmented reality view of the virtual world to a second user accessing the virtual world through a second user device from a second real geographical location, the second user device comprising at least a second head mounted display device, wherein the avatar is animated based on at least input from the first head mounted display device;
identifying one or more inanimate real world objects of the plurality of real world objects for display by at least receiving one or more inputs from the first user to select one or more inanimate real world objects of the plurality of inanimate real world objects for display or receiving one or more inputs from the first user to remove one or more inanimate real world objects of the plurality of inanimate real world objects from display;
displaying one or more virtual representations of the one or more inanimate real world objects as the one or more inanimate real world objects appear in the first real geographical location to the second user through the second user device, wherein the one or more inanimate real world objects sensed at the first real geographic location by the first user device appear to be physically present in the augmented reality view of the virtual world displayed to the second user using the second user device, thereby allowing the second user to experience the first real geographical location through the second user device from the second real geographical location; and
facilitating interaction between the first user and second user in the virtual world through the first user device and the second user device.

US Pat. No. 10,146,997

EYELID SHAPE ESTIMATION USING EYE POSE MEASUREMENT

Magic Leap, Inc., Planta...

1. A method for eyelid shape estimation, comprising:
under control of a hardware processor:
detecting a pupillary boundary of an eye using an edge detector;
determining an eye pose of the eye using the pupillary boundary,
wherein an eye pose coordinate system of the eye pose comprises an azimuthal angle and a zenithal angle of the eye relative to a resting orientation of the eye,
wherein a functional relationship between the eye pose coordinate system and an eyelid shape coordinate system comprises a mapping matrix, and
wherein the eyelid shape coordinate system comprises a horizontal shift, a vertical shift, and a curvature of the eyelid;
estimating an eyelid shape of the eye based at least in part on the eye pose and the functional relationship, wherein the functional relationship relates the eye pose to the eyelid shape; and
fitting a parabolic curve of an eyelid shape of the eye based on the eyelid shape.
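The claimed functional relationship (a mapping matrix taking the eye-pose angles to the eyelid shape's horizontal shift, vertical shift, and curvature, followed by a parabolic fit) can be sketched as below. The matrix values are made up for illustration; a real system would learn them from labeled eye images, and the parabola parameterization is an assumption.

```python
import numpy as np

# Hedged sketch of the claimed relationship: a 3x3 mapping matrix M
# takes the eye-pose vector (azimuth, zenith, 1) to the eyelid shape
# parameters (horizontal shift u, vertical shift v, curvature k).
# The entries of M below are purely illustrative.
M = np.array([[0.1, 0.0,  0.0],
              [0.0, 0.2,  0.1],
              [0.0, 0.05, 0.3]])

def eyelid_shape(azimuth, zenith):
    """Map an eye pose (radians) to eyelid shape parameters (u, v, k)."""
    u, v, k = M @ np.array([azimuth, zenith, 1.0])
    return u, v, k

def eyelid_parabola(x, azimuth, zenith):
    """Eyelid height at horizontal position x for the given eye pose:
    the parabola y = k*(x - u)**2 + v from the estimated shape."""
    u, v, k = eyelid_shape(azimuth, zenith)
    return k * (x - u) ** 2 + v
```

For the resting pose (azimuth = zenith = 0) this yields the third column of M as the baseline eyelid shape.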

US Pat. No. 10,134,186

PREDICTING HEAD MOVEMENT FOR RENDERING VIRTUAL OBJECTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A user display device, comprising:
a housing frame mountable on a head of a user;
a lens mountable on the housing frame;
a projection subsystem coupled to the housing frame to determine a location of appearance of a display object in a field of view of the user based at least in part on a prediction of a predicted head movement of the user, and to project the display object to the user based on the determined location of appearance of the display object, wherein the location of appearance of the display object is moved in response to the predicted head movement exceeding a nominal head movement value and the projection subsystem partially compensating for a perceived pixel spreading in response to the predicted head movement by decreasing both a size and an intensity of a first set of pixels in an image frame associated with the display object based at least on respective size and intensity of another set of pixels in the image, wherein the nominal head movement value corresponds to a nominal speed or a nominal acceleration; and
a first pair of cameras mountable on the housing frame to capture a field of view image as seen by eyes of the user, wherein the field of view image contains at least one physical object, wherein the captured field of view image is used to gather information regarding movements of the head of the user, wherein the information regarding movements of the head of the user only consists of a center of attention of the user, an orientation of the head of the user, a direction of the head of the user, a speed of movement of the head of the user, an acceleration of the head of the user and a distance of the head of the user in relation to a local environment of the user.

US Pat. No. 10,186,082

ROBUST MERGE OF 3D TEXTURED MESHES

MAGIC LEAP, INC., Planta...

1. A method of merging three-dimensional (3D) meshes, the method comprising:
receiving a first mesh and a second mesh;
performing spatial alignment to register the first mesh with respect to the second mesh by:
identifying an overlapping region where the first mesh and the second mesh overlap;
identifying a bounding box of the overlapping region that contains the overlapping region;
for each respective vertex of the first mesh within the bounding box, searching for a corresponding closest vertex of the second mesh, thereby establishing a plurality of matching pairs, each matching pair comprising the respective vertex of the first mesh and the corresponding closest vertex of the second mesh;
removing one or more false matching pairs by, for each matching pair of the plurality of matching pairs:
estimating a first normal consistent connected group (NCNG) of the respective vertex of the first mesh and a second NCNG of the corresponding closest vertex of the second mesh;
upon determining that a ratio between an area of the first NCNG and an area of the second NCNG is greater than a first predetermined threshold, classifying the respective vertex of the first mesh and the corresponding closest vertex of the second mesh as a false matching pair; and
removing the false matching pair from the plurality of matching pairs;
determining a rigid transformation to be applied to the first mesh so as to minimize a distance between a respective vertex of the first mesh and a corresponding closest vertex of the second mesh in each matching pair of the plurality of matching pairs; and
applying the rigid transformation to the first mesh to obtain a transformed first mesh;
performing mesh clipping along a first clipping seam on the transformed first mesh and along a second clipping seam on the second mesh to remove redundant mesh vertices in the overlapping region; and
performing geometry refinement around the first clipping seam and the second clipping seam to close up mesh concatenation holes created by mesh clipping.
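The claim's false-match filter rejects a matching pair when the areas of the normal-consistent connected groups (NCNGs) around its two vertices differ by more than a threshold ratio, on the reasoning that matched vertices should lie on similarly sized surface patches. A minimal sketch, assuming precomputed NCNG areas and an illustrative threshold:

```python
# Sketch of the claimed false-match removal (illustrative only): keep a
# matching pair only if the ratio between its two NCNG areas is within
# a threshold; a large mismatch marks the pair as a false match.

def filter_matches(pairs, ncng_area_first, ncng_area_second, ratio_threshold=4.0):
    """pairs: list of (vertex_id_in_mesh1, vertex_id_in_mesh2).
    ncng_area_*: dict vertex_id -> NCNG area for the respective mesh.
    Returns the pairs whose larger/smaller area ratio is within the
    threshold."""
    kept = []
    for v1, v2 in pairs:
        a1, a2 = ncng_area_first[v1], ncng_area_second[v2]
        ratio = max(a1, a2) / min(a1, a2)
        if ratio <= ratio_threshold:
            kept.append((v1, v2))
    return kept

pairs = [(0, 10), (1, 11)]
areas1 = {0: 1.0, 1: 1.0}
areas2 = {10: 1.2, 11: 9.0}   # pair (1, 11) has a 9:1 area mismatch
print(filter_matches(pairs, areas1, areas2))  # [(0, 10)]
```

The surviving pairs would then feed the rigid-transformation estimate of the next claim step (an ICP-style alignment).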

US Pat. No. 10,186,085

GENERATING A SOUND WAVEFRONT IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. An augmented reality display system, comprising:
an optical apparatus to project light associated with one or more virtual objects to a user of a head-mounted augmented reality display system, wherein a perceived location of the one or more virtual objects is known, and wherein the one or more virtual objects is associated with a predetermined sound data;
a processor having at least a sound module to dynamically alter one or more parameters of the predetermined sound data based at least in part on the perceived location and movement of the one or more virtual objects in relation to the user, thereby producing a sound wavefront; and
an equalization module that equalizes sound of the sound wavefront,
wherein the predetermined sound data is associated with an audio object associated with, but located a distance from, the one or more virtual objects.

US Pat. No. 10,254,483

SHAPED FIBER ELEMENTS FOR SCANNING FIBER DISPLAYS

MAGIC LEAP, INC., Planta...

1. A fiber optic element of a fiber scanning system, the fiber optic element comprising:
a motion actuator having:
longitudinal side members;
an internal orifice disposed between the longitudinal side members;
a first support region disposed at one end of the motion actuator;
a central region; and
a second support region disposed at an opposing end of the motion actuator; and
a first fiber optic cable passing through the internal orifice, the first fiber optic cable having a first coupling region disposed between the longitudinal side members and a first fiber joint disposed in the central region;
a second fiber optic cable passing through the internal orifice, the second fiber optic cable having:
a second fiber joint disposed in the central region and spliced to the first fiber joint;
a second coupling region disposed between the longitudinal side members and in mechanical contact with the second support region;
a light delivery region extending away from the second support region of the motion actuator; and
a light emission tip, wherein the light delivery region is characterized by a first diameter and the light emission tip is characterized by a second diameter less than the first diameter.

US Pat. No. 10,234,687

METHODS AND SYSTEM FOR CREATING FOCAL PLANES IN VIRTUAL AND AUGMENTED REALITY

Magic Leap, Inc., Planta...

1. A method of displaying augmented reality, comprising:
projecting light associated with a first frame of image data;
receiving, at a first waveguide of a stack of waveguides, the projected light associated with the first frame of image data, the first waveguide comprising a first diffractive optical element;
modifying the projected light associated with the first frame of image data;
delivering the modified light to a user's eye, wherein the modified light associated with the first frame of image data is perceived at a first depth plane;
projecting light associated with a second frame of image data;
receiving, at a second waveguide of the stack of waveguides, the projected light associated with the second frame of image data, the second waveguide comprising a second diffractive optical element;
modifying the projected light associated with the second frame of image data; and
delivering the modified light to the user's eye, wherein the modified light associated with the second frame of image data is perceived at a second depth plane, wherein the first depth plane is different from the second depth plane, and wherein the first depth plane and the second depth plane are perceived simultaneously.

US Pat. No. 10,191,294

THREE DIMENSIONAL VIRTUAL AND AUGMENTED REALITY DISPLAY SYSTEM

Magic Leap, Inc., Planta...

1. A three-dimensional image visualization system, comprising:
an integrated module comprising:
a selectively transparent projection device configured to receive input image light and project an image associated with the input image light toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; and
a diffraction element configured to divide the input image light into a plurality of modes, each of the plurality of modes directed at a different angle,
wherein the selectively transparent projection device is configured to allow a first mode of the plurality of modes to exit the selectively transparent projection device toward the eye, the first mode having a simulated focal distance based in part on a selectable geometry of the diffraction element, and
wherein the selectively transparent projection device is configured to trap at least a second mode of the plurality of modes within the selectively transparent projection device; and
an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the occlusion mask from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device.

US Pat. No. 10,126,812

INTERACTING WITH A NETWORK TO TRANSMIT VIRTUAL IMAGE DATA IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method, comprising:
sensing a physical object at a first location using at least one or more outward facing cameras of a head-mounted user display device and recognizing a type of the physical object sensed at the first location by the head-mounted user display device by:
capturing one or more field-of-view images,
extracting one or more sets of points from the one or more field-of-view images,
extracting one or more fiducials for at least one physical object in the one or more field-of-view images based on at least some of the one or more sets of points,
processing at least some of the one or more fiducials for the at least one physical object to identify the type of the physical object sensed at the first location, wherein processing at least some of the one or more fiducials comprises comparing the one or more fiducials to sets of previously stored fiducials;
associating a virtual object with the sensed physical object based on the type of the physical object as a result of both recognizing the type of the sensed physical object at the first location by the head-mounted user display device using the at least one or more outward facing cameras and identifying a predetermined relationship between the virtual object and the type of the sensed physical object recognized at the first location by the head-mounted user display device;
receiving virtual world data representing a virtual world, the virtual world data including at least data corresponding to manipulation of the virtual object in the virtual world by a first user at the first location;
transmitting at least the virtual world data corresponding to manipulation of the virtual object by the first user at the first location to a head-mounted user display device, wherein the head-mounted user display device renders a display image associated with at least a portion of the virtual world data including at least the virtual object to the first user based on at least an estimated depth of focus of a first user's eyes;
creating additional virtual world data originating from the manipulation of the virtual object by the first user at the first location; and
transmitting the additional virtual world data to a second user at a second location different from the first location for presentation to the second user, such that the second user experiences the additional virtual world data from the second location.

US Pat. No. 10,288,419

METHOD AND SYSTEM FOR GENERATING A VIRTUAL USER INTERFACE RELATED TO A TOTEM

MAGIC LEAP, INC., Planta...

1. A method for generating a virtual user interface, comprising:
detecting a manipulation of a totem, said manipulation being an action by a user;
recognizing, based on the detected manipulation, a first active command by the user to create a first virtual user interface;
determining, from a virtual world model obtained through a cloud network, a set of map points associated with a position of the totem, wherein the set of map points correspond to a finger of the user;
rendering, in real-time, the first virtual user interface at the determined map points associated with the position of the totem such that the first virtual user interface, when viewed by the user, appears to be stationary at the position of the totem and includes a plurality of first level menu items; and
detecting a further manipulation of the totem, the further manipulation triggering an expansion of the plurality of first level menu items by spreading apart fingers of a hand, wherein the expansion of the plurality of first level menu items renders a plurality of second level menu items that appears on the finger of the user;
recognizing, based on the detected further manipulation, a second active command to create a second virtual user interface comprising the plurality of second level menu items such that the second virtual user interface, when viewed by the user, appears to be stationary at the position of the totem; and
rendering, in real-time and based on the detected further manipulation, the second virtual user interface at the determined map points associated with the position of the totem such that the plurality of first level menu items of the virtual user interface and the plurality of second level menu items of the second virtual user interface are simultaneously viewable on the finger of the user, the second level menu items being derived from the first level menu items as lower level menu items.

US Pat. No. 10,176,639

VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION

Magic Leap, Inc., Planta...

1. A method of operating a virtual image generation system, the method comprising:
estimating a focal point of an eye within a field of view of an end user;
rendering a plurality of synthetic image frames of a three-dimensional scene;
sequentially displaying the plurality of image frames to the end user;
generating a non-uniform resolution distribution for each of the displayed image frames in response to the estimated focal point, the non-uniform resolution distribution having a region of highest resolution and a region of lower resolution, wherein the region of highest resolution is coincident with the estimated focal point, and wherein the estimated focal point of the end user has an error margin to provide a focal range within the field of the view of the end user, and the region of highest resolution intersects the focal range;
blurring the displayed image frames in the region of lower resolution; and
dynamically modifying the error margin based on an assumed eye angular velocity profile.
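The steps above (a highest-resolution region around the estimated focal point, widened by an error margin that depends on an assumed eye angular velocity) can be sketched as follows. The linear falloff law, the latency value, and the function names are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of the claimed non-uniform resolution
# distribution: full resolution inside the focal range (estimated
# focal point plus/minus an error margin), falling off outside it.
# The margin grows with an assumed eye angular velocity.

def error_margin(base_margin_deg, eye_velocity_deg_s, latency_s=0.02):
    """Widen the margin by the angle the eye may sweep during the
    system's latency (hypothetical 20 ms here)."""
    return base_margin_deg + eye_velocity_deg_s * latency_s

def resolution_at(angle_deg, focal_deg, margin_deg, falloff=0.05):
    """Relative resolution (1.0 = highest) at an angular offset from
    the estimated focal direction, with a linear falloff and a floor."""
    offset = abs(angle_deg - focal_deg)
    if offset <= margin_deg:
        return 1.0
    return max(0.1, 1.0 - falloff * (offset - margin_deg))

m = error_margin(2.0, 100.0)       # 2 deg + 100 deg/s * 20 ms = 4 deg
print(resolution_at(3.0, 0.0, m))  # inside the focal range -> 1.0
print(resolution_at(14.0, 0.0, m)) # 10 deg past the margin -> 0.5
```

The claim's blurring step would then be applied wherever this distribution falls below 1.0.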

US Pat. No. 10,175,478

METHODS AND SYSTEMS FOR GENERATING VIRTUAL CONTENT DISPLAY WITH A VIRTUAL OR AUGMENTED REALITY APPARATUS

Magic Leap, Inc., Planta...

1. A method for generating stereoscopic images for virtual reality or augmented reality, comprising:
transmitting input light beams having an incident direction and carrying image information of at least one stereoscopic image into a substrate of an eyepiece by using an in-coupling optic element;
refracting, at the in-coupling optic element, the input light beams toward a first diffractive element;
diffracting, with at least the first diffractive element, a first portion of the input light beams incident on a first portion of the first diffractive element to propagate in a diffracted direction that points to a portion of a second diffractive element on the eyepiece while allowing a remaining portion of the input light beams to continue to propagate in the incident direction within the substrate of the eyepiece and to interact with a different portion of the second diffractive element, wherein the first diffractive element and the second diffractive element are disposed on two opposing sides of the substrate; and
projecting exiting light beams with an output light beam density for the at least one stereoscopic image to at least one eye of a viewer with the second diffractive element to diffract some of the first portion of the input light beams that is diffracted by the first diffractive element to the second diffractive element as the exiting light beams and to direct a remaining portion of the first portion incident on the second diffractive element in a direction to continue to propagate within the substrate, wherein
the output light beam density is configured based at least in part upon degrees of spatial overlapping between the first and second diffractive elements, or the output light beam intensity is increased by embedding a beam-splitting surface in the substrate or by being sandwiched between the substrate and another substrate to split at least a part of the input light beams into a plurality of portions comprising a transmitted portion and a reflected portion,
the first diffractive element and the second diffractive element are configured to comprise diffractive structures of both a volumetric type and a surface relief type, rather than the volumetric type of diffractive structures alone or the surface-relief type of diffractive structures alone, and the first and second diffractive elements are disposed on or in one or more transparent or translucent optical components.

US Pat. No. 10,151,875

ULTRA-HIGH RESOLUTION SCANNING FIBER DISPLAY

Magic Leap, Inc., Planta...

1. A system for scanning electromagnetic imaging radiation, comprising:
a drive electronics system configured to generate at least one pixel modulation signal;
at least one electromagnetic radiation source configured to modulate light from the at least one electromagnetic radiation source based on the at least one pixel modulation signal;
a first waveguide configured to follow a first scan pattern and produce a first projected field area;
a second waveguide configured to follow a second scan pattern and produce a second projected field area; and
a first scanning actuator operatively coupled to and configured to physically displace the first waveguide along with at least one other intercoupled waveguide, and a second scanning actuator operatively coupled to and configured to physically displace the second waveguide along with at least one other intercoupled waveguide,
wherein each of the first waveguide and the second waveguide is operatively coupled to the at least one electromagnetic radiation source, and
wherein the drive electronics system is configured to luminance modulate at least one of the first waveguide or second waveguide concurrent with the first projected field area overlapping with the second projected field area.

US Pat. No. 10,338,391

VIRTUAL/AUGMENTED REALITY SYSTEM HAVING REVERSE ANGLE DIFFRACTION GRATING

Magic Leap, Inc., Planta...

1. A display subsystem for a virtual image generation system for use by an end user, comprising:
a planar waveguide apparatus;
an optical fiber;
at least one light source configured for emitting light from a distal end of the optical fiber;
a mechanical drive assembly to which the optical fiber is mounted as a fixed-free flexible cantilever, the drive assembly configured for displacing a distal end of the optical fiber about a fulcrum in accordance with a scan pattern, such that the emitted light initially diverges from a longitudinal axis coincident with the fulcrum from each of a plurality of off-axis scanning positions of the optical fiber;
an optical modulation apparatus configured for converging the emitted light from the optical fiber at each of the off-axis scanning positions towards the longitudinal axis; and
an optical waveguide input apparatus configured for directing the light from the optical modulation apparatus down the planar waveguide apparatus, such that the planar waveguide apparatus displays one or more image frames to the end user.

US Pat. No. 10,254,454

DISPLAY SYSTEM WITH OPTICAL ELEMENTS FOR IN-COUPLING MULTIPLEXED LIGHT STREAMS

Magic Leap, Inc., Planta...

1. A display system comprising:
a waveguide; and
an image injection device configured to direct a multiplexed light stream into the waveguide, the multiplexed light stream comprising a plurality of light streams having different light properties, the light properties comprising wavelength and polarization,
wherein the waveguide comprises liquid crystal in-coupling optical elements configured to selectively in-couple a first of the streams of light while being transmissive to one or more other streams of light, wherein the liquid crystal in-coupling optical elements are configured to selectively in-couple the first stream of light based upon a wavelength and a polarization of light forming the first stream of light, and
wherein the in-coupling optical elements comprise at least one of meta-surfaces, metamaterials, or PBPE structures.

US Pat. No. 10,282,611

EYELID SHAPE ESTIMATION

Magic Leap, Inc., Planta...

1. A method for eye tracking, comprising:
under control of a hardware processor:
generating a plurality of lines extending from a lower edge of a bounding shape, over an eye in an eye image, to an upper edge of the bounding shape;
determining candidate points on the plurality of lines;
determining parameters of a fit curve from a subset of the candidate points,
wherein the fit curve is an estimation of an eyelid shape of the eye in the eye image,
wherein a score of the fit curve is above a threshold score, and
wherein the fit curve is a preferred estimation of the eyelid shape of the eye in the eye image;
determining an eye pose of the eye in the eye image using parameters of the eyelid shape; and
tracking the eye in the eye image using the eye pose of the eye in the eye image.
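As an illustrative sketch only (not the patented implementation), the claimed fit of an eyelid curve to candidate points can be approximated by a least-squares parabola whose score is the fraction of candidates it explains; the function name, the choice of a degree-2 polynomial, and the scoring rule are all assumptions.

```python
import numpy as np

def fit_eyelid_curve(candidate_points, threshold_score=0.5, inlier_tol=2.0):
    """Fit a parabola y = a*x^2 + b*x + c to candidate eyelid points.

    candidate_points: (N, 2) pairs of (x, y) pixel coordinates sampled
    along the lines crossing the eye region (hypothetical input layout).
    Returns (params, score), or None if the score fails the threshold,
    mirroring the claim's requirement that the fit score exceed a
    threshold score before the curve is used.
    """
    pts = np.asarray(candidate_points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Least-squares fit of a degree-2 polynomial (the "fit curve").
    params = np.polyfit(x, y, 2)
    # Score: fraction of candidate points within inlier_tol of the curve.
    residuals = np.abs(np.polyval(params, x) - y)
    score = float(np.mean(residuals < inlier_tol))
    return (params, score) if score > threshold_score else None
```

A real system would refine this with robust fitting (e.g. RANSAC over the candidate subset), which the claim's "subset of the candidate points" language suggests.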

US Pat. No. 10,249,087

ORTHOGONAL-PROJECTION-BASED TEXTURE ATLAS PACKING OF THREE-DIMENSIONAL MESHES

MAGIC LEAP, INC., Planta...

1. A method of atlas packing for computer graphics performed by one or more computer processors, the method comprising:
receiving a three-dimensional (3D) mesh including a plurality of triangles representing surfaces of one or more objects, each triangle having a respective texture;
for each respective triangle of the plurality of triangles:
determining a normal of the respective triangle; and
categorizing the respective triangle into one of six directions along the positive and negative x-, y-, and z-directions according to a predominant component of the normal;
categorizing triangles in each respective direction of the six directions into one or more layers orthogonal to the respective direction;
for each respective layer in a respective direction:
identifying one or more connected components, each connected component comprising a plurality of connected triangles;
projecting each respective connected component onto a plane orthogonal to the respective direction to obtain a corresponding projected two-dimensional (2D) connected component; and
cutting the projected 2D connected component into one or more sub-components, each sub-component contained within a respective rectangular bounding box;
packing the bounding boxes of all sub-components of all projected 2D connected components in all layers in all directions into one or more atlases;
for each respective triangle of each sub-component in each atlas, copying a texture of a corresponding triangle of the 3D mesh to the respective triangle; and
storing the one or more atlases in a computer memory.
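As an illustrative sketch (not the patented method), the categorization step above amounts to computing each triangle's face normal and binning it by the normal's dominant axis component; the data layout and function name here are assumptions.

```python
import numpy as np

def categorize_by_normal(triangles):
    """Assign each mesh triangle to one of six axis directions
    (+x, -x, +y, -y, +z, -z) by the predominant component of its normal.

    triangles: (N, 3, 3) array of vertex positions, three vertices per
    triangle (hypothetical layout). Returns one direction label per
    triangle; projecting a triangle along its label's axis then gives
    the 2D footprint used for atlas packing.
    """
    tris = np.asarray(triangles, dtype=float)
    # Face normal from the cross product of two edge vectors.
    normals = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    labels = []
    for n in normals:
        axis = int(np.argmax(np.abs(n)))  # dominant component: 0=x, 1=y, 2=z
        sign = '+' if n[axis] >= 0 else '-'
        labels.append(sign + 'xyz'[axis])
    return labels
```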

US Pat. No. 10,345,590

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DETERMINING OPTICAL PRESCRIPTIONS

Magic Leap, Inc., Planta...

1. An augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system, the augmented reality display platform comprising optics configured to output light to form an image in the eye,
wherein the augmented reality head-mounted ophthalmic system is configured to conduct an eye exam by automatically varying a focus of the image to provide incremental changes in optical prescription,
wherein the optics comprise a waveguide stack comprising a plurality of waveguides, and
wherein different waveguides of the plurality of waveguides output the light of the optics with different amounts of wavefront divergence corresponding to different depth planes.

US Pat. No. 10,332,315

AUGMENTED REALITY DISPLAY SYSTEM FOR EVALUATION AND MODIFICATION OF NEUROLOGICAL CONDITIONS, INCLUDING VISUAL PROCESSING AND PERCEPTION CONDITIONS

Magic Leap, Inc., Planta...

1. A display system comprising:
a head-mountable, augmented reality display configured to output light with variable wavefront divergence to display virtual content;
one or more inwardly-directed sensors;
one or more outwardly-directed sensors;
one or more processors; and
one or more computer storage media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
performing a neurological analysis by:
determining a reaction to a stimulus by receiving data from the one or more inwardly-directed sensors; and
identifying a neurological condition associated with the reaction;
determining environmental triggers associated with the neurological condition;
monitoring an ambient environment with the one or more outwardly-directed sensors;
detecting a presence of an environmental trigger in the ambient environment; and
providing a perception aid based on the detected presence of the environmental trigger.

US Pat. No. 10,313,639

METHODS AND SYSTEMS FOR LARGE-SCALE DETERMINATION OF RGBD CAMERA POSES

Magic Leap, Inc., Planta...

1. A method of determining camera poses for a plurality of image frames, the method comprising:
capturing the plurality of image frames using a camera;
computing relative poses between each set of image frame pairs to provide a relative pose set, wherein computing the relative poses comprises performing:
a first process for a first subset of the image frame pairs having a temporal separation between image frames of the image frame pairs less than a threshold; and
a second process for a second subset of the image frame pairs having a temporal separation between image frames of the image frame pairs greater than the threshold;
detecting and removing miscategorized relative poses from the relative pose set to provide a remaining relative pose set;
determining global poses for the plurality of image frames using the remaining relative pose set;
computing extended relative poses for spatially close image frame pairs to provide an extended relative pose set;
detecting and removing extended miscategorized relative poses from the extended relative pose set to provide a remaining extended relative pose set; and
determining updated global poses for the plurality of image frames using the remaining relative pose set and the remaining extended relative pose set.
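Purely as an illustrative sketch, the first step of the method above partitions frame pairs by the temporal separation of their captures, so that temporally close pairs and temporally distant pairs can be handled by different relative-pose processes. The function name and the characterization of the two processes are assumptions; the claim only distinguishes them by the threshold.

```python
def split_pairs_by_temporal_separation(frame_times, pairs, threshold):
    """Split image-frame pairs into the two subsets the method processes
    differently: pairs whose capture times differ by less than the
    threshold (e.g. consecutive frames, amenable to dense tracking) and
    pairs separated by more (typically needing feature-based matching).

    frame_times: capture timestamp per frame index; pairs: (i, j) tuples.
    """
    first_subset, second_subset = [], []
    for i, j in pairs:
        if abs(frame_times[j] - frame_times[i]) < threshold:
            first_subset.append((i, j))
        else:
            second_subset.append((i, j))
    return first_subset, second_subset
```

The later steps (pruning miscategorized relative poses, then solving for globally consistent poses) correspond to outlier rejection followed by a global pose-graph optimization over the surviving relative-pose constraints.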

US Pat. No. 10,203,762

METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY

MAGIC LEAP, INC., Planta...

1. An augmented reality display system, comprising:
an image capturing device of the augmented reality display system to capture at least one image, wherein
the image capturing device comprises one or more image capturing sensors,
at least a portion of the at least one image is perceived within a field of view of a user, and
the at least one image captures at least one gesture that is created by the user and interacts with virtual content projected by the augmented reality display system to the user; and
a processor coupled directly with no intervening elements or indirectly with one or more intervening elements to the image capturing device to recognize the at least one gesture as at least one recognized gesture, the processor configured to recognize the at least one gesture as the at least one recognized gesture is further configured to:
identify a plurality of candidate gestures and a plurality of computational utilization or expense requirements for gesture recognition of the at least one gesture;
determine whether the at least one image includes one or more identifiable depth points at least by performing a line search for the at least one image with one or more lines or line segments;
determine an order of processing in which a plurality of analysis nodes is executed to perform respective gesture identification processes on the at least one gesture with respect to the plurality of candidate gestures based at least in part upon the plurality of computational resource utilization or expense requirements;
during one or more earlier stages in the order of processing, generate one or more reduced sets of candidate gestures from the plurality of candidate gestures at least by executing one or more first gesture identification processes of the respective gesture identification processes that analyze the at least one image with a first analysis node to remove one or more candidate gestures from the plurality of candidate gestures based at least in part upon a first computational resource utilization or expense requirement of the plurality of computational resource utilization or expense requirements; and
during one or more later stages in the order of processing, determine and recognize the at least one gesture from the one or more reduced sets of candidate gestures as the at least one recognized gesture at least by analyzing the at least one image based at least in part on the one or more reduced sets of candidate gestures with a second analysis node corresponding to a second computational expense criterion and at least by executing one or more second gesture identification processes on the at least one gesture and the one or more reduced sets of candidate gestures, wherein
the one or more second gesture identification processes consume a larger amount of processing power than the one or more first gesture identification processes; and
the processor further configured to determine a user input based at least in part on the at least one recognized gesture.
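As a hedged sketch of the ordering logic described above (not the patented implementation): analysis nodes are ordered by computational expense, cheap early stages prune the candidate-gesture set, and a more expensive later stage resolves the survivors. Stage structure and names are assumptions.

```python
def cascade_recognize(image, candidates, stages):
    """Coarse-to-fine gesture recognition over an ordered cascade.

    stages: list of (cost, filter_fn) pairs; each filter_fn(image,
    candidates) returns the surviving candidate gestures. Stages run
    cheapest first, so early nodes cut the candidate set before the
    expensive final analysis, matching the claimed processing order.
    Returns the single recognized gesture, or None if ambiguous.
    """
    candidates = list(candidates)
    for cost, filter_fn in sorted(stages, key=lambda s: s[0]):
        candidates = filter_fn(image, candidates)
        if len(candidates) == 1:  # recognized early: skip costlier stages
            break
    return candidates[0] if len(candidates) == 1 else None
```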

US Pat. No. 10,127,369

BLUE LIGHT ADJUSTMENT FOR BIOMETRIC SECURITY

Magic Leap, Inc., Planta...

1. A head mounted display system configured to project variable levels of blue light to an eye of a user, the display system comprising:
a frame configured to be wearable on the head of the user;
a display configured to project at least blue light into the eye of the user and to modify an intensity of the blue light relative to an intensity of non-blue light;
a camera configured to capture a first image of the eye while the display projects light at a first ratio of intensity of blue light to non-blue light into the eye and configured to capture a second image of the eye while the display projects a second ratio of intensity of blue light to non-blue light different from the first ratio into the eye; and
a hardware processor programmed to:
analyze the images from the camera to determine that a change in a pupil parameter between the second image and the first image passes a biometric application threshold;
based at least in part on the determined change, instruct the display to modify a ratio of the intensity of blue light to non-blue light and instruct the camera to capture a third image of the eye;
determine that a change in the pupil parameter between the third image and the first image or the second image matches a biometric characteristic of a human individual; and
based on the determination that a change in the pupil parameter between the third image and the first image or the second image matches a biometric characteristic of a human individual, determine an identity of the human individual.
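As an illustrative sketch only, the processor's flow above can be approximated as a three-capture pupillometry check: measure the pupil's response to two blue-light ratios, and if the response is usable, take a third capture and match the change against an enrolled per-individual response. All function names, ratios, and thresholds here are assumptions.

```python
def pupillary_blue_light_check(capture, pupil_radius, expected_response,
                               app_threshold, match_tol):
    """Sketch of the claimed blue-light biometric flow.

    capture(ratio) -> eye image at a given blue/non-blue intensity ratio;
    pupil_radius(image) -> pupil radius estimate (both hypothetical).
    Returns True/False for a biometric match, or None if the pupil
    response fails the biometric application threshold.
    """
    img1 = capture(0.2)   # first image: low blue-light ratio
    img2 = capture(0.8)   # second image: higher blue-light ratio
    change = pupil_radius(img1) - pupil_radius(img2)  # constriction
    if abs(change) <= app_threshold:  # response too weak to use
        return None
    img3 = capture(0.5)   # third image at a modified ratio
    change3 = pupil_radius(img1) - pupil_radius(img3)
    # Match the measured response against the enrolled characteristic.
    return abs(change3 - expected_response) < match_tol
```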

US Pat. No. 10,282,907

INTERACTING WITH A NETWORK TO TRANSMIT VIRTUAL IMAGE DATA IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC, Plantat...

10. An augmented reality display system, comprising:
a camera to capture an image of a real physical object in a field of view of a user, wherein a location of the real physical object in a real world is known;
a module for processing data, wherein the module is stored in a memory, the module rendering from the captured image of the real physical object a rendered physical object, wherein the rendered physical object appears substantially identical to the real physical object; and
a display for displaying to the user, in a blended reality mode selected by the user from a plurality of visualization modes, the rendered physical object at a display location in the display that corresponds to an area of the display through which the real physical object is viewable, wherein the blended reality mode comprises displaying virtual objects and the rendered physical object at a physical environment of the user, wherein the rendered physical object is displayed in place of the real physical object in the display.

US Pat. No. 10,261,318

ARCHITECTURES AND METHODS FOR OUTPUTTING DIFFERENT WAVELENGTH LIGHT OUT OF WAVEGUIDES

Magic Leap, Inc., Planta...

1. A display system comprising:
a display comprising a waveguide stack comprising a plurality of waveguides, each waveguide comprising a first major surface and a second major surface, wherein the waveguide stack comprises:
a first waveguide;
a first wavelength selective filter proximate to and on the first major surface of the first waveguide;
a first outcoupling optical element comprising diffractive elements proximate to and on the first wavelength selective filter;
a second waveguide over the first outcoupling optical element;
a second wavelength selective filter proximate to and on the first major surface of the second waveguide;
a second outcoupling optical element comprising diffractive elements proximate to and on the second wavelength selective filter of the second waveguide;
a first incoupling optical element configured to incouple incident light into the first waveguide; and
a second incoupling optical element configured to incouple incident light into the second waveguide;
wherein the first wavelength selective filter has a first rearward surface adjacent the first major surface and a first forward surface opposite the first rearward surface, the first wavelength selective filter configured to:
transmit incoupled light at a first plurality of wavelengths through the first rearward surface and reflect a portion of the transmitted light at the first plurality of wavelengths away from the first forward surface; and
wherein the first outcoupling optical element is configured to outcouple the incoupled light of the first plurality of wavelengths transmitted through the first wavelength selective filter.

US Pat. No. 10,241,263

ULTRA-HIGH RESOLUTION SCANNING FIBER DISPLAY

Magic Leap, Inc., Planta...

1. A system for scanning electromagnetic imaging radiation, comprising:
a drive electronics system configured to generate at least one pixel modulation signal;
at least one electromagnetic radiation source configured to modulate light from the at least one electromagnetic radiation source based on the at least one pixel modulation signal;
a first waveguide configured to follow a first scan pattern and produce a first projected field area;
a second waveguide configured to follow a second scan pattern and produce a second projected field area;
a scanning actuator operatively coupled to and configured to physically displace the first and second waveguide along with at least one other intercoupled waveguide,
wherein each of the first waveguide and the second waveguide is operatively coupled to the at least one electromagnetic radiation source, and
wherein the drive electronics system is configured to luminance modulate at least one of the first waveguide or second waveguide concurrent with the first projected field area overlapping with the second projected field area.

US Pat. No. 10,234,939

SYSTEMS AND METHODS FOR A PLURALITY OF USERS TO INTERACT WITH EACH OTHER IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A system for interaction between a first user and a second user within a virtual world comprising virtual world data, the system comprising:
a first head-mounted user display device associated with the first user at a first geographical location;
a second head-mounted user display device associated with the second user at a second geographical location different from the first geographical location;
a processor configured to sense an inanimate real world object at the first geographic location of the first head-mounted user display device, wherein the inanimate real world object at the first geographic location is selected for display by the first user; and
wherein the virtual world data includes at least a virtual representation of the inanimate real world object selected for display by the first user, wherein the inanimate real world object selected for display by the first user sensed at the first geographic location by the first head-mounted user display device appears to be physically present in an augmented reality view of the virtual world displayed to the second user using the second head-mounted user display device, the virtual representation of the inanimate real world object generated as a result of the first head-mounted user display device sensing the inanimate real world object selected for display by the first user to generate a virtual representation of the inanimate real world object as the inanimate real world object appears in the first geographic location,
wherein the first head-mounted user display device is operable to display a blended reality view of the virtual world to the first user at the first geographical location based on the virtual world data, and
wherein the second head-mounted user display device is operable to display a virtual reality view of the virtual world to the second user at the second geographical location based on the virtual world data, such that the second user can see the virtual representation of the inanimate real world object that is selected for display by the first user through the second head-mounted user display device at the second geographical location.

US Pat. No. 10,237,540

LIGHT PROJECTOR USING AN ACOUSTO-OPTICAL CONTROL DEVICE

MAGIC LEAP, INC., Planta...

1. A system for projecting light, the system comprising:
a light generator that generates image light that corresponds to a series of images;
a display device configured to receive the image light generated by the light generator and transmit display image light at a first resolution, the display image light comprising a first set of image points; and
an acousto-optical scanner optically coupled to the display device, wherein the display device directs the display image light onto the acousto-optical scanner and the acousto-optical scanner directs portions of the display image light onto a plurality of diffractive optical elements, wherein each of the plurality of diffractive optical elements corresponds to a different depth plane of the series of images;
wherein the acousto-optical scanner is configured to receive the display image light from the display device at the first resolution corresponding to the first set of image points, selectively generate a second set of image points, and output image light at a second resolution greater than the first resolution, the output image light comprising the first set of image points and the second set of image points.

US Pat. No. 10,198,864

RUNNING OBJECT RECOGNIZERS IN A PASSABLE WORLD MODEL FOR AUGMENTED OR VIRTUAL REALITY

MAGIC LEAP, INC., Planta...

1. An augmented reality system, comprising:
one or more databases storing passable world model data comprising sets of points respectively pertaining to real objects of the physical world;
a plurality of object recognizers configured for simultaneously running on the passable world model data independent of each other, each of the object recognizers being programmed to recognize an object of the real world based on a known geometry of the object corresponding to a set of points of the passable world model data; and
a head-worn augmented reality display system configured for displaying virtual content to a user based at least in part on the recognized object;
wherein each of the object recognizers is dedicated to recognize a predetermined object;
wherein each of the object recognizers is autonomic and autonomous; and
wherein the object recognizers comprise a basic object recognizer configured for running on the passable world model data to identify a generic object, and a detailed object recognizer configured for running on the passable world model data on the generic objects to identify a specific object.

US Pat. No. 10,180,734

SYSTEMS AND METHODS FOR AUGMENTED REALITY

MAGIC LEAP, INC., Planta...

1. An augmented reality (AR) display system, comprising:
a hand-held controller comprising:
an electromagnetic field emitter to emit a known magnetic field in a known coordinate system, and
an inertial measurement (“IMU”) component that assists in determining positioning and/or orientation of the electromagnetic field emitter relative to other components;
an electromagnetic sensor to measure a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field, wherein the electromagnetic field emitter and the electromagnetic sensor are mobile;
a depth sensor to measure a distance in the known coordinate system;
a controller to determine pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor; and
a display system to display virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.

US Pat. No. 10,175,491

APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL CAPABILITY

Magic Leap, Inc., Planta...

1. A compact optical see-through head-mounted display, capable of combining a see-through path with a virtual view path such that the opaqueness of the see-through path can be modulated and the virtual view occludes parts of the see-through view and vice versa, the display comprising:
a. a microdisplay for generating an image to be viewed by a user, the microdisplay having a virtual view path associated therewith;
b. a reflection-type spatial light modulator for modifying the light from an external scene to block portions of the see-through view that are to be occluded, the spatial light modulator having a see-through path associated therewith;
c. an objective optics, facing an external scene, configured to receive the incoming light from the external scene and to focus the light upon the spatial light modulator, where the objective optics is a four-reflection freeform prism, comprising six optical freeform surfaces: refractive surface S4, reflective surfaces S5, S4′, S5′, and S6, and refractive surface S7, where the objective optics is configured to form an intermediate image inside the objective optics;
d. a beamsplitter configured to merge a digitally generated virtual image from a microdisplay and a modulated see-through image of an external scene passing from a spatial light modulator, producing a combined image;
e. an eyepiece configured to magnify the combined image, where the eyepiece (410) is a two-reflection freeform prism, comprising four optical freeform surfaces: refractive surface S1, reflective surface S2, reflective surface S1′, and refractive surface S3;
f. an exit pupil configured to face the eyepiece, the exit pupil whereupon the user observes the combined view of the virtual and see-through views in which the virtual view occludes portions of the see-through view;
wherein the objective optics is disposed upon the front layer of the display, where the spatial light modulator is disposed on the back layer of the display at or near an intermediate image plane of the see-through path, facing a side of the beam splitter, where the microdisplay is disposed on the back layer of the display, facing a different side of the beam splitter, where the beam splitter is disposed such that the see-through path is merged with the virtual view path and the light from the merged path is directed to the eyepiece, wherein the eyepiece is disposed upon the back layer of the display,
whereupon the incoming light from the external scene enters the objective optics through the refractive surface S4, then is consecutively reflected by the reflective surfaces S5, S4′, S5′ and S6, and exits the objective prism through the refractive surface S7, whereupon the incoming light forms an intermediate image at its focal plane on the spatial light modulator, whereupon the spatial light modulator modulates the light in the see-through path to occlude portions of the see-through view, whereupon the spatial light modulator reflects the modulated light into the beam splitter, whereupon the light from the microdisplay enters the beam splitter, whereupon the beamsplitter merges the modulated light in the see-through path with the light in the virtual view path and folds toward the eyepiece for viewing, whereupon the light from the beam splitter enters the eyepiece through the refractive surface S3, then is consecutively reflected by the reflective surfaces S1′ and S2, and exits the eyepiece through the refractive surface S1 and reaches the exit pupil, where the viewer's eye is aligned to see a combined view of a virtual view and a modulated see-through view.

US Pat. No. 10,162,184

WIDE-FIELD OF VIEW (FOV) IMAGING DEVICES WITH ACTIVE FOVEATION CAPABILITY

Magic Leap, Inc., Planta...

1. A foveated imaging system, configured to capture a wide field of view image and a foveated image, where the foveated image is a controllable region of interest of the wide field of view image, the system comprising:
an objective lens, facing an external scene, configured to receive incoming light from an external scene and to focus the light upon a beamsplitter,
a beamsplitter, configured to split the incoming light from the external scene into a first portion on a wide field of view imaging path and second portion on a foveated imaging path;
the wide field of view imaging path comprising:
a first stop, separate from the objective lens, which limits the amount of the first portion received in the wide field of view path from the beamsplitter;
a wide field of view imaging lens, different from the objective lens and configured to receive the first portion from the first stop and focus a wide field of view image on a wide field of view imaging sensor, wherein the wide field of view imaging sensor is configured to record the first portion focused from the wide field of view imaging lens as a wide field of view image;
the foveated imaging path consisting of:
a second stop, which limits the second portion received in the foveated imaging path from the beamsplitter;
a scanning mirror, the scanning mirror being a dual-axis movable mirror configured to tilt about an X and Y axis and reflect a selected region of interest within the second portion from the beamsplitter back towards the beamsplitter;
a reflective surface, disposed upon the beamsplitter, configured to direct the second portion reflected by the scanning mirror;
a foveated imaging lens, configured to receive the second portion, associated with a region of interest of the external scene, from the scanning mirror, and focus a foveated image on a foveated imaging sensor wherein the foveated imaging sensor is configured to record the region of interest from the foveated imaging lens as a higher resolution foveated image;
wherein the foveated imaging system is configured to display the higher resolution foveated image within the wide field of view image.

US Pat. No. 10,379,349

METHODS AND SYSTEMS FOR DETECTING HEALTH CONDITIONS BY IMAGING PORTIONS OF THE EYE, INCLUDING THE FUNDUS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the augmented reality head-mounted ophthalmic system, wherein the augmented reality head-mounted ophthalmic display platform comprises:
a waveguide stack comprising a plurality of waveguides,
wherein one or more waveguides of the plurality of waveguides are configured to output image light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein different amounts of wavefront divergence from the plurality of waveguides correspond to different depth planes;
a light source configured to illuminate a portion of the wearer's eye; and
an imaging system configured to capture an image of the illuminated portion of the wearer's eye,
wherein the augmented reality head-mounted ophthalmic system is configured to monitor health of the wearer's eye and detect abnormalities of the wearer's eye based on the image.

US Pat. No. 10,345,591

METHODS AND SYSTEMS FOR PERFORMING RETINOSCOPY

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system, the eye having a retina;
at least one light source configured to project light from the light source into the eye to form an image in the eye, the at least one light source configured to sweep the light from the light source across the retina of the eye to produce a reflex of the retina; and
a sensor configured to measure a response of the retina to the swept light,
wherein the head-mounted ophthalmic system is configured to perform retinoscopy to measure refractive error of the eye based upon the measured response of the retina to the swept light,
wherein the display platform comprises a waveguide stack comprising a plurality of waveguides,
wherein the plurality of waveguides are configured to output image light with different amounts of wavefront divergence corresponding to a plurality of depth planes, wherein one or more waveguides of the plurality of waveguides are configured to output the image light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides.

US Pat. No. 10,313,661

WIDE BASELINE STEREO FOR LOW-LATENCY RENDERING

Magic Leap, Inc., Planta...

1. A method of operating a virtual image generation system, the method comprising:
rendering a left synthetic image and a right synthetic image of a three-dimensional scene respectively from a first left focal center and a first right focal center relative to a first head pose of an end user, the first left and first right focal centers being spaced from each other a distance greater than an inter-ocular distance of the end user;
warping the left synthetic image and the right synthetic image respectively to a second left focal center and a second right focal center relative to a second head pose of the end user different from the first head pose of the end user, the second left and right focal centers spaced from each other a distance equal to the inter-ocular distance of the end user;
constructing a frame from the left and right warped synthetic images; and
displaying the frame to the end user.
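As an illustrative sketch of the warping step (not the patented implementation), moving a rendered pinhole view horizontally by dx shifts each pixel by f·dx/Z, so nearby geometry moves more than distant geometry. A minimal single-scanline forward warp under those pinhole assumptions, with no occlusion handling, could look like:

```python
def warp_scanline(image_row, depth_row, f_px, dx_m):
    """Warp one scanline rendered at a first focal center to a second
    focal center displaced horizontally by dx_m (pinhole model sketch).

    image_row / depth_row: per-pixel color value and depth in meters.
    Each pixel is forward-splatted by its disparity f * dx / Z; holes
    remain None, and later pixels simply overwrite (no z-test here).
    """
    out = [None] * len(image_row)
    for x, (color, z) in enumerate(zip(image_row, depth_row)):
        # Disparity: nearer pixels shift more under a camera translation.
        x_new = x + round(f_px * dx_m / z)
        if 0 <= x_new < len(out):
            out[x_new] = color
    return out
```

Rendering from a widened baseline and warping each eye's image inward to the true inter-ocular positions, as claimed, lets a new head pose be accommodated without re-rendering the full scene.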

US Pat. No. 10,260,864

DYNAMIC DISPLAY CALIBRATION BASED ON EYE-TRACKING

Magic Leap, Inc., Planta...

1. A display system comprising:
an eye-tracking camera;
a wearable display comprising a plurality of display color layers, each display color layer of the plurality of display color layers configured to output a different color;
non-transitory data storage configured to store a plurality of chromatic calibrations for the display, each chromatic calibration in the plurality of chromatic calibrations associated with a calibration position relative to the display, wherein each chromatic calibration in the plurality of chromatic calibrations corrects for a chromatic imperfection of the display comprising a chromatic mismatch between two or more display color layers of the plurality of display color layers; and
a hardware processor in communication with the eye-tracking camera, the display, and the non-transitory data storage, the hardware processor programmed to:
determine, based on information from the eye-tracking camera, an eye position, relative to the display, of the user of the display;
access, based at least partly on the determined eye position, one or more of the plurality of chromatic calibrations;
calculate, based at least in part on the one or more of the plurality of chromatic calibrations, a correction to apply to at least one of the two or more display color layers of the display to at least partially correct the chromatic imperfection of the display; and
apply the correction to the at least one of the two or more display color layers of the display.
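
One way to realize the claimed position-dependent correction is to blend the stored chromatic calibrations by proximity to the tracked eye position. A minimal sketch, assuming inverse-distance weighting over per-color-layer gains (the weighting scheme, positions, and gain values are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical stored calibrations: calibration position -> per-color-layer
# gains correcting chromatic mismatch between display color layers (R, G, B).
calibrations = {
    (0.0, 0.0): np.array([1.00, 0.98, 1.02]),
    (1.0, 0.0): np.array([1.01, 0.97, 1.03]),
    (0.0, 1.0): np.array([0.99, 0.99, 1.01]),
    (1.0, 1.0): np.array([1.02, 0.96, 1.04]),
}

def correction_for(eye_pos, calibrations, eps=1e-9):
    """Inverse-distance-weighted blend of the stored chromatic calibrations."""
    weights, gains = [], []
    for pos, gain in calibrations.items():
        d = np.linalg.norm(np.subtract(eye_pos, pos))
        weights.append(1.0 / (d + eps))
        gains.append(gain)
    w = np.array(weights) / np.sum(weights)
    return np.average(gains, axis=0, weights=w)

# An eye position equidistant from all calibration positions blends equally.
corr = correction_for((0.5, 0.5), calibrations)
assert corr.shape == (3,)
```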

US Pat. No. 10,175,564

PROJECTOR WITH SCANNING ARRAY LIGHT ENGINE

MAGIC LEAP, INC., Planta...

1. A projector assembly, comprising:
a light emitting diode (LED) array, wherein the LED array has an array axis, wherein the LED array includes a plurality of LEDs arranged along the array axis, and wherein the plurality of LEDs are individually addressable;
a rotatable actuator supporting the LED array, wherein the rotatable actuator has a rotation axis, and wherein the rotation axis and the array axis are parallel;
a collimator positioned in optical communication with the LED array for collimating light emitted from the plurality of LEDs; and
a set of imaging optics positioned in optical communication with the collimator for focusing collimated light and forming a first image of the LED array at a distance, wherein the first image includes a first axis corresponding to the array axis and a second axis orthogonal to the rotation axis.

US Pat. No. 10,163,011

ESTIMATING POSE IN 3D SPACE

Magic Leap, Inc., Planta...

1. An imaging system comprising:
an image capture device comprising a lens and an image sensor, the lens configured to direct light from an environment surrounding the image capture device to the image sensor, the image sensor configured to:
sequentially capture a first plurality of image segments of an image based on the light from the environment, the image representing a field of view (FOV) of the image capture device, the FOV comprising a portion of the environment and including a plurality of sparse points, and
sequentially capture a second plurality of image segments, the second plurality of image segments captured after the first plurality of image segments and forming at least another portion of the image;
non-transitory data storage configured to sequentially receive the first and second plurality of image segments from the image sensor and store instructions for estimating at least one of a position and orientation of the image capture device within the environment; and
at least one hardware processor operably coupled to the non-transitory data storage and configured by the instructions to:
identify a first group of sparse points based in part on a corresponding subset of the first plurality of image segments, the first group of sparse points identified as the first plurality of image segments are received at the non-transitory data storage,
determine at least one of the position and orientation of the imaging device within the environment based on the first group of sparse points,
identify a second group of sparse points based in part on a corresponding subset of the second plurality of image segments, the second group of sparse points identified as the second plurality of image segments are received at the non-transitory data storage, and
update the at least one of the position and orientation of the imaging device within the environment based on the first and second group of sparse points.
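
The incremental update in the claim, estimating pose from the first group of sparse points and then refining as the second group arrives, can be illustrated with a deliberately toy estimator (a centroid stand-in for a real pose solver; all values are hypothetical):

```python
import numpy as np

def centroid_pose(points):
    """Toy pose estimate: position taken as the centroid of the
    observed sparse points (a real system would solve PnP or similar)."""
    return np.mean(points, axis=0)

# First group of sparse points, identified as image segments arrive.
group1 = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
pose = centroid_pose(group1)

# Second group arrives later; update using all points seen so far.
group2 = np.array([[0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
pose = centroid_pose(np.vstack([group1, group2]))
assert np.allclose(pose, [0.5, 0.5, 2.0])
```

The point of the claim is the pipelining: the estimator runs on the first group before the full image has been read off the sensor, then updates rather than restarts when later segments arrive.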

US Pat. No. 10,061,130

WIDE-FIELD OF VIEW (FOV) IMAGING DEVICES WITH ACTIVE FOVEATION CAPABILITY

Magic Leap, Inc., Planta...

1. A clustered imaging system comprising:
at least two foveated imaging systems each configured to capture a wide field of view image composing an extended field of view of the clustered imaging system, wherein each wide field of view image has at least one intersection at a neighboring field of view image boundary between the at least two wide field of view images of the at least two foveated imaging systems,
wherein each of the at least two foveated imaging systems comprises:
an objective lens, facing an external scene, configured to receive incoming light from an external scene and to focus the incoming light upon a beamsplitter;
the beamsplitter, configured to split the incoming light from the external scene into a first portion on a wide field of view imaging path and a second portion on a foveated imaging path;
the wide field of view imaging path comprising:
a first stop, separate from the objective lens, which limits the amount of the first portion received in the wide field of view path from the beamsplitter;
a wide field of view imaging lens, different from the objective lens and configured to receive the first portion from the first stop and focus the first portion on a wide field of view imaging sensor;
the wide field of view imaging sensor, configured to record the first portion light as the wide field of view image;
the foveated imaging path consisting of:
a second stop, which limits the second portion received in the foveated imaging path from the beamsplitter;
a multi-axis scanning mirror configured to selectively reflect the second portion as an identified region of interest back towards the beamsplitter;
a reflective surface disposed upon the beamsplitter, configured to direct the identified region of interest reflected by the scanning mirror;
a foveated imaging lens, configured to form the identified region of interest directed by the reflective surface on a foveated imaging sensor; and
the foveated imaging sensor, configured to record the identified region of interest from the foveated imaging lens as a higher resolution foveated image,
wherein the clustered foveated image system is configured to display at least one higher resolution foveated image from the at least two foveated imaging systems within the extended field of view image.

US Pat. No. 10,345,592

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DIAGNOSING A USER USING ELECTRICAL POTENTIALS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted system;
a display attached to the frame, the display configured to pass light from the world into an eye of the wearer, wherein the display comprises at least one light source and a waveguide stack comprising a plurality of waveguides, wherein each waveguide of the plurality of waveguides comprises:
in-coupling diffractive optical elements configured to in-couple light from the light source; and
out-coupling diffractive optical elements configured to out-couple light to the eye of the wearer,
wherein one or more waveguides of the plurality of waveguides are configured to output light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein the different amounts of wavefront divergence correspond to different focal planes;
a plurality of electrodes attached to the frame and configured to be placed around the eye, wherein the electrodes are configured to measure electrical potentials of a retina of the eye.

US Pat. No. 10,295,338

METHOD AND SYSTEM FOR GENERATING MAP DATA FROM AN IMAGE

MAGIC LEAP, INC., Planta...

1. A method of generating map data, comprising:
capturing, by a virtual or augmented display system, an image of a field of view of a user;
determining a set of map points without comparing first features in the image and second features from existing images at least by:
determining and positioning a virtual keyframe based at least in part upon performing one or more analyses on the captured image and one or more additional keyframes;
projecting a plurality of geometric entities from the virtual keyframe to a plurality of features or points in the image and the one or more additional keyframes; and
determining the set of map points by using at least the plurality of geometric entities;
identifying one or more sets of sparse points and one or more sets of dense points based in part or in whole on the set of map points that has been determined;
generating sparse point descriptors and dense point descriptors respectively for the one or more sets of sparse points and the one or more sets of dense points after performing point normalization on the one or more sets of sparse points and the one or more sets of dense points;
storing the one or more sets of sparse points and the one or more sets of dense points, the sparse point descriptors, and the dense point descriptors into a single map;
executing one or more object recognizers on the single map, wherein the one or more object recognizers are configured to recognize respective predetermined objects of a real world based at least in part on the single map; and
re-inserting, by the one or more object recognizers, geometric information and parametric information about the respective predetermined objects into a passable world model.

US Pat. No. 10,267,970

THERMAL DISSIPATION FOR WEARABLE DEVICE

Magic Leap, Inc., Planta...

1. An optical device comprising:
a display assembly configured to display content to a user of the optical device;
a frame defining a pair of eye openings and being configured to position the display assembly in front of the eyes of a user of the optical device;
a temperature monitoring system configured to monitor a distribution of heat within the frame; and
a processor configured to receive temperature data from the temperature monitoring system and to adjust an output of the display assembly by spatially shifting a position of content displayed by the display assembly in accordance with changes in a distribution of heat within the frame.

US Pat. No. 10,261,162

ELECTROMAGNETIC TRACKING WITH AUGMENTED REALITY SYSTEMS

Magic Leap, Inc., Planta...

1. A beltpack having a control and quick release module which comprises:
a first outer housing component and a second outer housing component, wherein the first outer housing component and the second outer housing component are coupled together;
one or more buttons positioned on the first outer housing, wherein the one or more buttons are overlaid over a top printed circuit board;
a first end which is connected to the first outer housing;
a second end which is connected to the second housing;
a local processing and data module positioned in between the first outer housing and the second outer housing; and
electrical leads positioned in between the first outer housing and the second outer housing, wherein the electrical leads connect the first end, the local processing and data module, and the second end.

US Pat. No. 10,262,462

SYSTEMS AND METHODS FOR AUGMENTED AND VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. An augmented reality system, comprising:
at least one server comprising a processor and configured to:
receive first data corresponding to a first location and second data corresponding to a second location respectively from the first virtual or augmented reality display system and the second virtual or augmented reality display system through one or more networks communicably connecting a first and second virtual or augmented reality display systems to the at least one server, wherein
the first data corresponds to the first location at which the first virtual or augmented reality display system is located,
the second data corresponds to the second location at which the second virtual or augmented reality display system is located, and
the first virtual or augmented reality display system is different from the second virtual or augmented reality display system,
reduce the first location to a first node and the second location to a second node using at least the first data and the second data,
create a plurality of virtual keyframes based at least in part upon a plurality of existing keyframes, which have been captured by the first and second virtual or augmented reality display systems, at least by performing a plurality of analyses on corresponding normals of the plurality of existing keyframes, wherein a virtual keyframe is created to provide a new position to view a plurality of existing points or features;
determine a set of shared points from a first set of points and a second set of points that are respectively identified from the first data and the second data based at least in part upon performing one or more analyses using at least one or more corresponding virtual keyframes of the plurality of virtual keyframes,
construct at least a portion of a map of a real world at least by connecting the first node for the first location and the second node for the second location with an edge that is determined based in part or in whole upon a total number of shared points in the set of shared points;
identify, at the at least one server from a storage device, detailed environment data pertaining to the first location and virtual contents from the at least one server to the second virtual or augmented reality display system based at least in part upon an anticipated position of the second virtual or augmented reality display system.

US Pat. No. 10,228,242

METHOD AND SYSTEM FOR DETERMINING USER INPUT BASED ON GESTURE

Magic Leap, Inc., Planta...

1. A method for determining a user input, comprising:
capturing, at one or more image capturing sensors, an image of a field of view of a user, the image comprising a gesture created by the user;
determining a sequence for a plurality of gesture analysis processes based in part or in whole upon computational resource utilization of the plurality of gesture analysis processes;
analyzing, at least by a microprocessor, the image to determine a set of candidates and to identify a set of points associated with the gesture;
removing at least one candidate from gesture recognition with at least a first gesture analysis process of the plurality of gesture analysis processes to reduce the set of candidates to a remaining set of one or more remaining candidates while skipping one or more remaining gesture analysis processes of the plurality of gesture analysis processes for the at least one candidate;
generating respective scoring values for the one or more remaining candidates based in part or in whole on matching results of the one or more remaining candidates with predetermined gestures in a database; and
determining a user input based at least in part on a recognized gesture that is recognized by at least a second gesture analysis process.
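
The claimed sequencing, cheap analysis processes first with rejected candidates skipping the costlier ones, is the classic cascade pattern. A minimal sketch, with hypothetical costs and predicates:

```python
# Each analysis process: (cost, predicate that keeps or rejects a candidate).
processes = [
    (100, lambda c: c["score"] > 0.8),    # expensive template match
    (1, lambda c: c["points"] >= 5),      # cheap point-count check
    (10, lambda c: c["extent"] > 0.1),    # mid-cost bounding-box check
]

def recognize(candidates, processes):
    # Determine the sequence by computational resource utilization.
    ordered = sorted(processes, key=lambda p: p[0])
    remaining = list(candidates)
    for _, keep in ordered:
        # Candidates rejected here skip all later (costlier) processes.
        remaining = [c for c in remaining if keep(c)]
    return remaining

cands = [
    {"points": 8, "extent": 0.3, "score": 0.9},
    {"points": 3, "extent": 0.5, "score": 0.95},  # rejected by cheap check
]
assert len(recognize(cands, processes)) == 1
```

The second candidate is eliminated by the one-unit point-count check, so it never pays the hundred-unit template-match cost, which is the resource saving the claim describes.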

US Pat. No. 10,163,010

EYE POSE IDENTIFICATION USING EYE FEATURES

Magic Leap, Inc., Planta...

1. A head mounted display (HMD) system comprising:
an image capture device for tracking an eye pose of an eye of a wearer of the HMD system in an eye image of the eye of the wearer, wherein the eye pose comprises a direction toward which the eye is looking;
non-transitory memory configured to store the eye image;
a display for providing virtual image information to the wearer of the HMD system based on the eye pose of the wearer in the eye image; and
a hardware processor programmed to:
receive the eye image from the image capture device;
map a pupil in the eye image to an equivalent frontal view to provide a remapped eye image;
identify an eye feature based at least partly on the remapped eye image;
determine a pitch and a yaw of the eye based at least partly on the remapped eye image;
determine a roll of the eye based at least partly on the eye feature in the remapped eye image;
determine the eye pose of the eye based at least partly on the pitch, the yaw, and the roll;
determine the virtual image information to be provided to the wearer of the HMD using the eye pose of the eye; and
cause the display to provide the virtual image information to the wearer of the HMD system.
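
The final pose composition from pitch, yaw, and roll can be sketched with rotation matrices; the rotation order and forward axis here are assumptions, as the claim does not specify them:

```python
import numpy as np

def rot_x(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # yaw
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eye_pose(pitch, yaw, roll):
    """Orientation and gaze direction from pitch, yaw, and roll.
    (Yaw-pitch-roll composition order is an assumption.)"""
    R = rot_y(yaw) @ rot_x(pitch) @ rot_z(roll)
    gaze = R @ np.array([0.0, 0.0, 1.0])   # forward axis
    return R, gaze

R, gaze = eye_pose(0.0, 0.0, 0.3)
assert np.allclose(gaze, [0, 0, 1])  # roll alone leaves gaze unchanged
```

This also shows why the claim treats roll separately: pitch and yaw fully determine the gaze direction, while roll (torsion about the gaze axis) is only observable from an eye feature such as the iris pattern.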

US Pat. No. 10,127,723

ROOM BASED SENSORS IN AN AUGMENTED REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. A sensor system, comprising:
a plurality of sensors or transducers to respectively capture information pertaining to one or more images or keyframes of a space, wherein respective pose information of the plurality of sensors or transducers relative to the space is known and comprises an orientation of the plurality of sensors or transducers within the space;
at least one processor of a remote computing system configured to receive the information captured by the plurality of sensors or transducers and to determine whether the information is to be used in constructing or updating a map residing on the remote computing system and to select and discard at least a first part of the information that is determined not to be used to construct or update the map;
the at least one processor further configured to identify a plurality of layers of signature information pertaining to at least the space;
the at least one processor further configured to modify the map into an overlaid map at least by placing the plurality of layers of signature information in context with each other in relation to the map;
the at least one processor further configured to determine multiple pieces of location information from at least the plurality of layers of signature information that are aligned in the overlaid map, the multiple pieces of location information respectively and uniquely corresponding to a plurality of locations in the space;
the at least one processor further configured to determine multiple pieces of fingerprint data for the plurality of locations in the space using at least some of the multiple pieces of location information;
the at least one processor further configured to construct or update at least a portion of the map residing on the remote computing system and corresponding to the space at least by selecting and using the at least a second part of the information from the information captured by the plurality of sensors or transducers to represent the space as a node and by storing one or more pieces of fingerprint data for the node in a set of fingerprint data, rather than storing a keyframe or an image from which the fingerprint data is derived, based at least in part on the at least the second part of the information; and
a wireless network to transmit the overlaid map to a virtual or augmented reality display system that determines a real-world location in the space at least by using at least the set of fingerprint data derived at least from images or keyframes pertaining to a plurality of nodes in the overlaid map, and displays virtual content to the user of the virtual or augmented reality display system in relation to the real-world location.

US Pat. No. 10,378,930

DUAL COMPOSITE LIGHT FIELD DEVICE

MAGIC LEAP, INC., Planta...

1. An apparatus comprising:
a processor;
a first waveguide having opposed planar input and output faces;
a first diffractive optical element (DOE) formed across the first waveguide, the first DOE configured to couple first light into the first waveguide, wherein the first waveguide is configured to direct the first light via total internal reflection to an exit location of the first waveguide;
a second waveguide having opposed planar input and output faces, the second waveguide aligned with and parallel to the first waveguide;
a light sensor having an input positioned adjacent to the exit location of the first waveguide to capture the first light exiting therefrom and generate an output signal corresponding thereto, wherein the angle and position of the light sensor with respect to the exit location of the first waveguide are movable and are controllable by the processor;
a light generator configured to inject second light into the second waveguide, wherein the second waveguide is configured to direct the second light via total internal reflection to an output face of the second waveguide; and
a darkening layer disposed between the first waveguide and the second waveguide, the darkening layer configured to reduce a brightness of a third light originating from outside the apparatus.

US Pat. No. 10,371,876

REFLECTIVE DIFFRACTIVE GRATINGS WITH VARIABLE REFLECTIVITY AND DISPLAY SYSTEMS HAVING THE SAME

Magic Leap, Inc., Planta...

1. An optical device comprising:
an optical waveguide structure comprising:
an incoupling optical element configured to redirect incident light at angles such that the incident light propagates through the optical waveguide structure by total internal reflection, wherein the incoupling optical element comprises:
a diffractive optical grating structure comprising:
a plurality of protrusions; and
a reflective layer on surfaces of the protrusions, and
wherein at least one parameter of the reflective layer varies across an area occupied by the plurality of protrusions, and
wherein a reflectivity of the diffractive optical grating structure varies with the at least one parameter of the reflective layer across the area occupied by the plurality of protrusions.

US Pat. No. 10,371,896

COLOR SEPARATION IN PLANAR WAVEGUIDES USING DICHROIC FILTERS

MAGIC LEAP, INC., Planta...

1. An eyepiece for projecting an image to an eye of a viewer, the eyepiece comprising:
a first planar waveguide positioned in a first lateral plane, wherein the first waveguide comprises a first diffractive optical element (DOE) coupled thereto and disposed at a lateral position, the first DOE configured to diffract image light in a first wavelength range centered at a first wavelength;
a second planar waveguide positioned in a second lateral plane adjacent the first lateral plane, wherein the second waveguide comprises a second DOE coupled thereto and disposed at the lateral position, the second DOE configured to diffract image light in a second wavelength range centered at a second wavelength longer than the first wavelength;
a third planar waveguide positioned in a third lateral plane adjacent the second lateral plane, wherein the third waveguide comprises a third DOE coupled thereto and disposed at the lateral position, the third DOE configured to diffract image light in a third wavelength range centered at a third wavelength longer than the second wavelength;
a first optical filter disposed between the first waveguide and the second waveguide at the lateral position, wherein the first optical filter is configured to have:
a first transmittance value at the first wavelength range;
a second transmittance value at the second wavelength range and the third wavelength range that is greater than the first transmittance value; and
a first reflectance value at the first wavelength range that is greater than about 90%; and
a second optical filter positioned between the second waveguide and the third waveguide at the lateral position, wherein the second optical filter is configured to have:
a third transmittance value at the first wavelength range and the second wavelength range;
a fourth transmittance value at the third wavelength range that is greater than the third transmittance value; and
a second reflectance value at the second wavelength range that is greater than about 90%.

US Pat. No. 10,296,792

IRIS BOUNDARY ESTIMATION USING CORNEA CURVATURE

Magic Leap, Inc., Planta...

1. A wearable display system comprising:
a display;
an image capture device configured to capture an image of an eye of a user;
non-transitory memory configured to store executable instructions; and
a hardware processor in communication with the display, the image capture device, and the non-transitory memory, the hardware processor programmed by the executable instructions to:
obtain a camera calibration;
obtain physiological parameters of the eye in a three-dimensional coordinate frame of the eye, wherein the physiological parameters comprise:
a radius of a corneal sphere comprising a cornea of the eye,
a radius of an iris of the eye, and
a distance between a center of the corneal sphere and a center of a pupil of the eye;
receive the image of the eye, the image comprising at least a portion of the cornea of the eye and the iris of the eye;
transform the image of the eye into the coordinate frame of the eye via a perspective transformation to generate a transformed image;
determine, from the transformed image, an intersection between the corneal sphere and the eye;
convert, based at least in part on the camera calibration, the intersection from the coordinate frame of the eye to a coordinate frame of the image of the eye;
determine the limbic boundary based at least in part on the intersection; and
utilize the limbic boundary in a biometric application.
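
The stored physiological parameters fix where the limbic boundary sits on the corneal sphere: an iris of radius r intersects a sphere of radius R in a circle lying at depth sqrt(R² − r²) from the sphere's center. A minimal sketch with illustrative values (not taken from the patent):

```python
import math

def limbic_circle_depth(corneal_radius, iris_radius):
    """Depth along the optical axis, measured from the corneal-sphere
    center, of the circle where the iris meets the corneal sphere."""
    return math.sqrt(corneal_radius**2 - iris_radius**2)

# Typical-scale values, meters (illustrative assumptions):
R = 7.8e-3   # corneal sphere radius
r = 6.0e-3   # iris radius
z = limbic_circle_depth(R, r)
assert 0 < z < R
```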

US Pat. No. 10,255,529

STRUCTURE LEARNING IN CONVOLUTIONAL NEURAL NETWORKS

Magic Leap, Inc., Planta...

1. A method implemented with a processor, comprising:
creating a neural network;
generating output from the neural network;
identifying a low performing layer from the neural network, the low performing layer having a relatively lower performance than a performance of another layer in the neural network;
inserting a new specialist layer at the low performing layer; and
repeating the act of identifying and the act of inserting until a top of the neural network is reached.
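
The claimed loop, find a low-performing layer, insert a specialist layer after it, and repeat until the top of the network is reached, can be sketched with mock performance scores (the "low performing" criterion and the specialist's effect on the score are assumptions):

```python
def grow_network(layers, perf, specialist_factory):
    """Insert a specialist after each low-performing layer, bottom-up,
    until the top of the network is reached (mock performance scores)."""
    i = 0
    while i < len(layers):
        # "Low performing": worse than the mean of the other layers.
        others = [p for j, p in enumerate(perf) if j != i]
        if perf[i] < sum(others) / len(others):
            layers.insert(i + 1, specialist_factory(layers[i]))
            perf.insert(i + 1, perf[i] + 0.1)  # assume the specialist helps
            i += 2          # skip past the newly inserted specialist
        else:
            i += 1
    return layers

net = ["conv1", "conv2", "conv3"]
scores = [0.9, 0.5, 0.9]
grown = grow_network(net, scores, lambda l: f"specialist_after_{l}")
assert "specialist_after_conv2" in grown
```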

US Pat. No. 10,547,833

CAMERA CALIBRATION SYSTEM, TARGET, AND PROCESS

Magic Leap, Inc., Planta...

1. A system, comprising:
a movable platform having a mounting surface to hold a plurality of cameras;
a tessellated concave target having a plurality of planar target regions,
wherein a principal axis of the tessellated concave target is oriented towards the movable platform,
wherein the movable platform is spaced apart from the tessellated concave target; and
an input stack mechanism to load a camera onto the movable platform and an output stack mechanism to offload the camera off the movable platform.

US Pat. No. 10,386,639

METHODS AND SYSTEMS FOR DIAGNOSING EYE CONDITIONS SUCH AS RED REFLEX USING LIGHT REFLECTED FROM THE EYES

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform comprising a waveguide stack configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system, the eye comprising a retina and a cornea;
a light source configured to project light into the eye of the wearer, at least a portion of the light reflecting off at least a portion of the eye so as to produce a reflection; and
a camera configured to capture an image of the reflection, the device being configured to perform a diagnostic test of the wearer's eye to detect abnormalities of the eye,
wherein the waveguide stack comprises a plurality of waveguides configured to output light with wavefront divergence corresponding to different depth planes, wherein the waveguide stack is configured to provide a fixation target for the wearer at the different depth planes and to vary display of the fixation target between a plurality of the different depth planes.

US Pat. No. 10,390,165

MIXED REALITY SYSTEM WITH SPATIALIZED AUDIO

Magic Leap, Inc., Planta...

1. A spatialized audio system, comprising:
a frame to be worn on a head of a user;
a plurality of speakers attached to the frame such that, when the frame is worn by the user, each of the plurality of speakers are disposed at a respective non-zero distance from the user's head, such that each of the plurality of speakers does not contact any surface of the user's head, including the user's ears;
a head pose sensor to collect head pose data of the user;
a head pose processor to determine a head pose of the user from the head pose data; and
a spatialized audio processor to generate spatialized audio data based on the determined head pose of the user,
wherein the speakers generate first sounds corresponding to the generated spatialized audio data,
wherein the spatialized audio processor receives first timing information at a first time to synchronize the first sounds with second sounds at the first time,
wherein the first timing information comprises an optical cue in a video corresponding to the generated second sounds or an optical cue projected separately from a video corresponding to the generated second sounds.

US Pat. No. 10,378,882

LIGHT FIELD DISPLAY METROLOGY

Magic Leap, Inc., Planta...

1. An optical metrology system for measuring imperfections in a light field generated by a display, the optical metrology system comprising:
a display configured to project a target light field comprising a virtual object having an intended focus position;
a camera configured to obtain images of the target light field;
a hardware processor programmed with executable instructions to:
access one or more images corresponding to a portion of the light field;
apply a display-to-camera pixel mapping to transfer pixel values of the display to pixel values of the camera, wherein the display-to-camera pixel mapping comprises:
a first gamma correction that maps color levels of the display to a first intermediate color representation;
a pixel-dependent coupling function that maps the first intermediate color representation to a second intermediate color representation; and
a second gamma correction that maps the second intermediate color representation to color levels registered by the camera;
analyze the one or more images to identify a measured focus position corresponding to a position at which the virtual object is in focus; and
determine imperfections in the light field based at least in part on a comparison of the measured focus position and the intended focus position.
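
The claimed display-to-camera pixel mapping chains three stages: a first gamma correction, a pixel-dependent coupling function, and a second gamma correction. A minimal sketch, with illustrative gamma exponents and coupling values (the specific functions are assumptions, not from the patent):

```python
import numpy as np

def display_to_camera(display_levels, coupling, g1=2.2, g2=1 / 2.2):
    """Display-to-camera pixel mapping: first gamma correction,
    pixel-dependent coupling, second gamma correction."""
    x = np.clip(display_levels, 0.0, 1.0) ** g1   # display -> intermediate
    x = coupling * x                              # pixel-dependent coupling
    return np.clip(x, 0.0, 1.0) ** g2             # intermediate -> camera

levels = np.array([0.0, 0.5, 1.0])
coupling = np.array([1.0, 0.9, 1.0])   # hypothetical per-pixel coupling
cam = display_to_camera(levels, coupling)
assert cam[0] == 0.0 and np.isclose(cam[-1], 1.0)
```

Separating the coupling from the two gamma stages matters because the coupling varies per pixel (e.g. waveguide non-uniformity), while the gamma curves describe the display's and camera's global level responses.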

US Pat. No. 10,371,947

METHODS AND SYSTEMS FOR MODIFYING EYE CONVERGENCE FOR DIAGNOSING AND TREATING CONDITIONS INCLUDING STRABISMUS AND/OR AMBLYOPIA

Magic Leap, Inc., Planta...

1. A wearable augmented reality device configured to be used by a wearer having two eyes deficient in aligning at a convergence point, the device comprising:
an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform configured to pass light from the world into the eyes of the wearer wearing the head-mounted system;
at least one light source configured to output light for propagation into the eyes of the wearer to form an image;
a first waveguide stack for a first eye of the wearer and a second waveguide stack for a second eye of the wearer, wherein each waveguide stack comprises a plurality of waveguides, wherein different waveguides of the plurality of waveguides are configured to output light with wavefront divergence corresponding to different depth planes; and
an eye tracking system configured to determine a gaze of the eye,
wherein the device is configured to modify the image by adding a compensating prism correction configured to align both eyes to a single convergence point.

US Pat. No. 10,365,488

METHODS AND SYSTEMS FOR DIAGNOSING EYES USING ABERROMETER

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system configured to pass light from the world into eyes of a wearer wearing the head-mounted ophthalmic system, each eye having a cornea, lens, and retina, and the augmented reality head-mounted ophthalmic system comprising:
at least one light source and wearable optics configured to output light having a wavefront and to direct the wavefront into the eyes of the wearer so as to pass through the cornea and lens of each eye and be reflected back by the retina of the eye, wherein the wearable optics comprise
a first waveguide stack for a first eye of the wearer; and
a second waveguide stack for a second eye of the wearer,
wherein each waveguide stack comprises a plurality of waveguides, wherein each waveguide of the plurality of waveguides has an associated diffractive out-coupling optical element, wherein each waveguide is configured to output light with the associated diffractive out-coupling optical element, wherein different waveguides of the plurality of waveguides are configured to output light with wavefront divergence corresponding to different depth planes; and
an aberrometer configured to measure the wavefront that passes through at least one eye and is reflected to its aberrometer to determine abnormalities of the at least one eye.

US Pat. No. 10,337,691

INTEGRATING POINT SOURCE FOR TEXTURE PROJECTING BULB

Magic Leap, Inc., Planta...

1. A texture projecting light bulb comprising:
an incandescent filament configured to produce infrared light;
an integrating sphere enclosing the incandescent filament, wherein the integrating sphere comprises a diffusely reflective interior surface, a baffle disposed at least partially within the integrating sphere, and an aperture configured to allow light to pass out of the integrating sphere; and
a light bulb enclosure surrounding the integrating sphere, the enclosure comprising one or more regions transmissive to infrared light and one or more regions opaque to infrared light, wherein the one or more transmissive regions are configured to project a structured light pattern of infrared light detectable by a computer vision system.

US Pat. No. 10,317,690

MULTI-FOCAL DISPLAY SYSTEM AND METHOD

MAGIC LEAP, INC., Planta...

1. An augmented reality display system, comprising:
a scanning device for scanning one or more frames of image data, wherein the scanning device is communicatively coupled to an image source to receive the image data;
a plurality of switchable screens configured to provide a first aperture and comprising a first switchable screen in a diffusive state and a second switchable screen in a transparent state at a time point during operation;
a variable focus element (VFE) configured to provide a second aperture, and configured to reduce a field curvature produced by the scanning device, wherein the VFE is also configured for variably focusing the one or more frames of image data among the plurality of switchable screens that spreads light associated with the image data to corresponding viewing distances, wherein the plurality of switchable screens is configured to provide the first aperture that is larger than the second aperture of the variable focus element; and
viewing optics operatively coupled to the plurality of switchable screens to relay the one or more frames of image data.

US Pat. No. 10,304,246

BLANKING TECHNIQUES IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of presenting virtual contents with a virtual or augmented image presentation system, the method comprising:
displaying at least one virtual object to an end user at least by projecting light rays corresponding to the at least one virtual object to at least one eye of the end user with a projector subsystem comprising one or more projectors;
identifying a plurality of relations between eye movements and head movements in a plurality of pairs of respective directions, a relation in the plurality of relations indicating a respective extent of the eye movements along a respective pair of directions of the plurality of pairs of directions before initiation of the head movements in the respective direction;
determining a distance between a current focus of the end user and a determined location of the at least one virtual object, wherein the distance includes an angular distance between the current focus of the end user and the determined location of the at least one virtual object;
predicting a predicted head movement by the end user based in part or in whole upon the distance between the current focus of the end user and the determined location of the at least one virtual object and further upon the plurality of relations between the eye movements and the head movements in the plurality of pairs of directions;
temporarily blanking at least a portion of a presentation of the at least one virtual object for a refresh period on a display device based at least in part on the predicted head movement and a detected head movement captured by at least one transducer or sensor, wherein the refresh period represents a temporal period between a first end of a scan line or scan pattern and a first start of a temporally successive scan line or scan pattern or between a second end of a frame and a second start of a temporally successive frame;
pulsing or modulating one or more light sources producing one or more spots or dots of light at a first frame rate that is lower than a second frame rate of one or more image capturing devices of the virtual or augmented image presentation system;
capturing, with at least one sensor, the one or more spots or dots as one or more line traces, wherein a line trace comprises a line direction and a line length;
determining a direction of the detected head movement based at least in part upon the line direction; and
determining a velocity of the detected head movement based at least in part upon the line length.
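The last steps of this claim recover head-movement direction and velocity from the smear ("line trace") that a pulsed point of light leaves on a sensor during motion. A minimal sketch of that geometry, with illustrative function and parameter names (pixel-space endpoints and exposure time are assumptions, not terms from the patent):

```python
import math

def head_motion_from_trace(p_start, p_end, exposure_s):
    """Estimate head-movement direction and speed from one line trace.

    A point light source imaged during head motion smears into a line:
    the line direction indicates the movement direction (negated, since
    the imaged scene shifts opposite to the head), and the line length
    over the exposure time gives speed in pixels per second.
    """
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    length = math.hypot(dx, dy)                          # line length
    direction_deg = math.degrees(math.atan2(-dy, -dx))   # line direction, reversed
    speed = length / exposure_s
    return direction_deg, speed
```

For example, a 5-pixel trace captured over a 100 ms exposure yields a speed of 50 pixels per second.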

US Pat. No. 10,109,108

FINDING NEW POINTS BY RENDER RATHER THAN SEARCH IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of displaying augmented reality, comprising:
capturing a set of map points pertaining to the real world, wherein the set of map points are captured through a plurality of augmented reality systems;
determining positions of a plurality of keyframes that captured the set of map points;
rendering lines from the determined positions of the plurality of keyframes to respective map points captured from the plurality of keyframes;
identifying points of intersection between the rendered lines; and
determining a set of new map points based at least in part on the identified points of intersection.
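The geometric core of this claim — rendering lines from keyframe positions toward observed map points and identifying their intersections — is classical ray triangulation. A minimal sketch of one such intersection (illustrative only, not the patented implementation; since two rendered rays rarely meet exactly, the midpoint of their shortest connecting segment is used as the candidate new map point):

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Closest-approach 'intersection' of two rays: a candidate new map point.

    o1/o2 are keyframe (camera) positions; d1/d2 are ray directions toward
    an observed map point. Returns the midpoint of the shortest segment
    between the rays, or None if the rays are near-parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2                      # cosine of angle between the rays
    w = o1 - o2
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:            # near-parallel rays: no stable intersection
        return None
    # Parameters minimizing |(o1 + t1*d1) - (o2 + t2*d2)|
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

Two keyframes at the origin and at (1, 0, 0) both observing a point at (0.5, 0, 1) recover that point exactly.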

US Pat. No. 10,444,419

DITHERING METHODS AND APPARATUS FOR WEARABLE DISPLAY DEVICE

Magic Leap, Inc., Planta...

1. A device comprising:
an input coupling grating having a first grating structure characterized by a first set of grating parameters, wherein the input coupling grating is configured to receive light from a light source;
an expansion grating having a second grating structure characterized by a second set of grating parameters varying in at least two dimensions, wherein the second grating structure is configured to receive light from the input coupling grating, and wherein the second grating structure has a phase variation pattern that causes light beams to diffract with different phases at different portions along the second grating structure; and
an output coupling grating having a third grating structure characterized by a third set of grating parameters, wherein the output coupling grating is configured to receive light from the expansion grating and to output light to a viewer.

US Pat. No. 10,379,352

METHODS AND SYSTEMS FOR DIAGNOSING AND TREATING EYES USING LASER THERAPY

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted system; and
a display attached to the frame, the display comprising:
a waveguide stack configured to pass light from the world into an eye of the wearer; and
a light source configured to provide light for generating virtual content,
wherein the waveguide stack comprises a plurality of waveguides, each waveguide of the plurality of waveguides comprising:
in-coupling diffractive optical elements configured to in-couple light provided by the light source, and
out-coupling diffractive optical elements configured to out-couple in-coupled light to the eye of the wearer,
wherein one or more waveguides of the plurality of waveguides are configured to output in-coupled light to an eye of the wearer with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein different amounts of wavefront divergence are associated with different accommodation by the eye, wherein the outputted light with different amounts of wavefront divergence forms virtual objects at different perceived depths away from the wearer, and wherein each waveguide of the plurality of waveguides is configured to guide light therein by total internal reflection between opposing surfaces of the waveguide;
a laser configured to administer laser therapy to the eye of the wearer; and
an adaptable optics element configured to direct the light outputted from the waveguide stack to project an image to a particular portion of the wearer's eye.

US Pat. No. 10,371,945

METHODS AND SYSTEMS FOR DIAGNOSING AND TREATING HIGHER ORDER REFRACTIVE ABERRATIONS OF AN EYE

Magic Leap, Inc., Planta...

1. A wearable augmented reality device, the device comprising:
at least one light source;
and wearable optics configured to:
pass light from the world into an eye of a person wearing the head-mounted system; and
output light from the at least one light source for propagation into the eye of the person to form an image in the eye, the at least one light source and wearable optics configured to provide refractive correction for higher order refractive errors;
wherein the wearable optics comprise a waveguide stack comprising a plurality of waveguides, wherein the waveguide stack is configured to output the light from the at least one light source for propagation into the eye with wavefront divergence corresponding to different depth planes.

US Pat. No. 10,073,267

OUTCOUPLING GRATING FOR AUGMENTED REALITY SYSTEM

MAGIC LEAP, INC., Planta...

1. An eyepiece for projecting an image to an eye of a viewer, the eyepiece comprising:
a waveguide having a surface and configured to propagate light therein; and
a diffractive optical element optically coupled to the waveguide, the diffractive optical element including:
a plurality of first ridges protruding from the surface of the waveguide and arranged as a periodic array having a period, each of the plurality of first ridges having a first height in a direction perpendicular to the surface of the waveguide and a first width in a direction of the period; and
a plurality of second ridges, each of the plurality of second ridges protruding from a respective first ridge and having a second height greater than the first height and a second width less than the first width;
wherein the diffractive optical element is configured to diffract a first portion of the light propagating in the waveguide toward the eye as a first order reflection, and to diffract a second portion of the light propagating in the waveguide away from the eye as a first order transmission.

US Pat. No. 10,466,394

DIFFRACTION GRATINGS FORMED BY METASURFACES HAVING DIFFERENTLY ORIENTED NANOBEAMS

Magic Leap, Inc., Planta...

1. An optical system comprising:
a metasurface configured to diffract visible light having a wavelength, the metasurface comprising:
a plurality of repeating unit cells, each unit cell consisting of two to four sets of nanobeams, wherein:
a first set of nanobeams are formed by one or more first nanobeams; and
a second set of nanobeams are formed by a plurality of second nanobeams disposed adjacent to the one or more first nanobeams and separated from each other by a sub-wavelength spacing,
wherein the one or more first nanobeams and the plurality of second nanobeams are elongated in different orientation directions, and
wherein the unit cells repeat at a period less than or equal to about 10 nm to 1 μm.

US Pat. No. 10,386,641

METHODS AND SYSTEMS FOR PROVIDING AUGMENTED REALITY CONTENT FOR TREATMENT OF MACULAR DEGENERATION

Magic Leap, Inc., Planta...

1. A wearable augmented reality device configured to be used by a wearer, said device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform, said augmented reality head-mounted ophthalmic system configured to pass light from the world into an eye of a wearer wearing the head-mounted system; and
a light source configured to project light into the eye of the wearer to form an image in the eye,
wherein the augmented reality display platform comprises a waveguide stack comprising a plurality of waveguides, each waveguide of the plurality of waveguides comprising:
in-coupling diffractive optical elements configured to in-couple light provided by the light source; and
out-coupling diffractive optical elements configured to out-couple in-coupled light to the eye of the wearer,
wherein the one or more waveguides of the plurality of waveguides are configured to output the in-coupled light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein the different amounts of wavefront divergence correspond to different depth planes; and
wherein the wearable device is configured to selectively project pixels of an image to healthy cells of the eye of the wearer by out-coupling the in-coupled light from the waveguide stack to the healthy cells.

US Pat. No. 10,379,351

METHODS AND SYSTEMS FOR DIAGNOSING AND TREATING EYES USING LIGHT THERAPY

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted ophthalmic system;
one or more light sensors attached to the frame;
an augmented reality display attached to the frame and comprising a light source, the display configured to:
pass light from the world into an eye of the wearer; and
output light from the light source into the eye of the wearer to form an image in the eye,
wherein the wearable augmented reality device is configured to:
detect an exposure amount of one or more wavelengths of light propagating towards the eye; and
modify the exposure amount of the one or more wavelengths of light reaching the eye based on the detected exposure amount.

US Pat. No. 10,379,355

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DELIVERY OF MEDICATION TO EYES

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted system; and
a display attached to the frame, the display comprising:
a waveguide stack configured to pass light from the world into an eye of the wearer; and
a light source configured to provide light for generating virtual content,
wherein the waveguide stack comprises a plurality of waveguides, each waveguide of the plurality of waveguides comprising:
in-coupling diffractive optical elements configured to in-couple light provided by the light source, and
out-coupling diffractive optical elements configured to out-couple in-coupled light to the eye of the wearer,
wherein one or more waveguides of the plurality of waveguides are configured to output in-coupled light to an eye of the wearer with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein different amounts of wavefront divergence are associated with different accommodation by the eye, and wherein the outputted light with different amounts of wavefront divergence forms virtual objects at different perceived depths away from the wearer, wherein each waveguide of the plurality of waveguides is configured to guide light therein by total internal reflection between opposing surfaces of the waveguide; and
a medication delivery system configured to deliver medication to the eye of the wearer,
wherein the augmented reality head-mounted ophthalmic system is configured to:
display a visual cue on the display; and
provide an alert to the wearer to focus on the visual cue while the medication is delivered.

US Pat. No. 10,371,948

METHODS AND SYSTEMS FOR DIAGNOSING COLOR BLINDNESS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising:
a display configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system, the display comprising at least one light source and a waveguide stack comprising a plurality of waveguides, wherein each waveguide of the plurality of waveguides comprises:
in-coupling diffractive optical elements configured to in-couple light from the light source; and
out-coupling diffractive optical elements configured to out-couple light to the eye of the wearer,
wherein one or more waveguides of the plurality of waveguides are configured to output light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein the different amounts of wavefront divergence correspond to different depth planes,
wherein the wearable augmented reality device is configured to:
display a color test image using the display; and
determine a deficiency of the wearer in detecting specific colors based on the color test image.

US Pat. No. 10,306,213

LIGHT OUTPUT SYSTEM WITH REFLECTOR AND LENS FOR HIGHLY SPATIALLY UNIFORM LIGHT OUTPUT

Magic Leap, Inc., Planta...

1. An optical system comprising:
a reflector comprising:
a light input opening;
a light output opening; and
reflective interior sidewalls extending between the light input opening and the light output opening, and
a Fourier transform lens proximate the light output opening of the reflector, wherein the Fourier transform lens is configured to receive light from the reflector and to transform the light from the reflector into lens output light having increased spatial uniformity relative to the light received from the reflector.

US Pat. No. 10,444,527

THREE DIMENSIONAL VIRTUAL AND AUGMENTED REALITY DISPLAY SYSTEM

Magic Leap, Inc., Planta...

1. A three-dimensional image visualization system, comprising:
an integrated module comprising:
a selectively transparent projection device configured to receive input image light and configured to project an image associated with the input image light toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; and
a diffraction element coupled to the selectively transparent projection device and configured to divide the input image light into a plurality of modes, each of the plurality of modes directed at a different angle,
wherein the selectively transparent projection device is configured to allow at least a first mode of the plurality of modes to exit the selectively transparent projection device toward the eye, the first mode having a simulated focal distance based in part on a selectable geometry of the diffraction element, and
wherein the selectively transparent projection device is configured to trap at least a second mode of the plurality of modes within the selectively transparent projection device;
an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the occlusion mask from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device; and
a controller operatively coupled to the integrated module and the occlusion mask device, and configured to coordinate projection of the image and associated occluding pattern, as well as interposition of the diffraction pattern at the selectable geometry.

US Pat. No. 10,437,062

AUGMENTED AND VIRTUAL REALITY DISPLAY PLATFORMS AND METHODS FOR DELIVERING HEALTH TREATMENTS TO A USER

Magic Leap, Inc., Planta...

1. A wearable augmented reality device configured to be used by a wearer, the device comprising:
an augmented reality head-mounted ophthalmic system comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted ophthalmic system;
an augmented reality display attached to the frame and comprising:
a light source, and
a waveguide stack comprising a plurality of waveguides,
wherein each waveguide is configured to propagate light therein by total internal reflection between opposing surfaces of the waveguide and wherein each waveguide comprises out-coupling diffractive optical elements configured to out-couple light into an eye of the wearer with wavefront divergence corresponding to a depth plane,
wherein one or more waveguides of the plurality of waveguides are configured to output light to an eye of the wearer with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, and wherein different amounts of wavefront divergence provide different accommodation by the eye and are associated with different depth planes away from the wearer,
wherein the augmented reality display is configured to pass light from the world into an eye of the wearer and output, via the waveguide stack, light into the eye of the wearer to form an image in the eye,
wherein the augmented reality head-mounted ophthalmic system is configured to deliver a therapy to the wearer, wherein the therapy is selected from the group consisting of tactile therapy, sound therapy, and temperature therapy.

US Pat. No. 10,402,649

AUGMENTED REALITY DISPLAY DEVICE WITH DEEP LEARNING SENSORS

Magic Leap, Inc., Planta...

1. A head mounted display system comprising:
a plurality of sensors for capturing different types of sensor data, each of the plurality of sensors disposed on a frame of the head mounted display system, the frame configured to be worn on the head of a user and to position a display system in front of the eyes of the user, the plurality of sensors comprising an outward-facing camera configured to obtain face images;
non-transitory memory configured to store
executable instructions, and
a deep neural network for performing face recognition and lighting detection using the sensor data captured by the plurality of sensors,
wherein the deep neural network comprises an input layer for receiving input of the deep neural network, a plurality of lower layers, a plurality of middle layers, and a plurality of head components for outputting results of the deep neural network associated with the face recognition and the lighting detection,
wherein the input layer is connected to a first layer of the plurality of lower layers,
wherein a last layer of the plurality of lower layers is connected to a first layer of the middle layers,
wherein a head component of the plurality of head components comprises a head output node, and
wherein the head output node is connected to a last layer of the middle layers through a plurality of head component layers representing a unique pathway from the plurality of middle layers to the head component;
a display configured to display information related to the face recognition and the lighting detection; and
a hardware processor in communication with the plurality of sensors, the non-transitory memory, and the display, the hardware processor programmed by the executable instructions to:
receive the different types of sensor data from the plurality of sensors;
determine the results of the deep neural network using the different types of sensor data; and
cause display of the information related to the face recognition and the lighting detection.

US Pat. No. 10,386,636

MULTI-FOCAL DISPLAY SYSTEM AND METHOD

Magic Leap, Inc., Planta...

1. An augmented reality display system, comprising:
a light projection device operatively coupled to an image source for projecting a pair of consecutive image frames comprising first and second consecutive image frames for viewing by a user's eyes through a display; and
a composite variable focus element (VFE) system comprising: a first VFE and a second VFE arranged in series relative to the first VFE, wherein
the composite VFE is structured such that
a total optical power of the composite VFE is a combination of a first optical power of the first VFE and a second optical power of the second VFE,
the first VFE is configured to switch between two focus states to focus respective first and second consecutive image frames at respective first and second depth planes, the first VFE being configured to focus respective pairs of consecutive image frames to respective pairs of depth planes,
a first distance between the first depth plane and the second depth plane is based at least in part upon a constant optical power difference comprising a difference between the first optical power of the first VFE and the second optical power of the second VFE, and
the second VFE is configured to further focus the first and second consecutive image frames to third and fourth depth planes, wherein a distance between the third depth plane and the fourth depth plane is based at least in part upon the constant optical power difference.

US Pat. No. 10,386,640

METHODS AND SYSTEMS FOR DETERMINING INTRAOCULAR PRESSURE

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system;
a light source configured to project light into the eye of the wearer, wherein the projected light is configured to induce changes in accommodation or vergence of the eye; and
a light-monitoring device configured to measure light reflected from tissue of the eye,
wherein the augmented reality head-mounted ophthalmic system is configured to determine intraocular pressure from the measured reflected light, and wherein the measured reflected light encompasses glint characteristics associated with the induced changes in accommodation or vergence.

US Pat. No. 10,379,354

METHODS AND SYSTEMS FOR DIAGNOSING CONTRAST SENSITIVITY

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising:
an augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted system, wherein the display platform comprises:
a light source configured to output light for propagation into the eye of the wearer to form an image in the eye; and
a user interface configured to receive input from a user;
wherein the wearable augmented reality device is configured to:
project the image to the wearer;
detect a response regarding the image to determine a contrast sensitivity of the wearer; and
alert the wearer to hazardous conditions not visible to the wearer due to a deficiency in the contrast sensitivity of the wearer.

US Pat. No. 10,371,946

METHODS AND SYSTEMS FOR DIAGNOSING BINOCULAR VISION CONDITIONS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform comprising a waveguide stack configured to pass light from the world into left and right eyes of a wearer wearing the head-mounted system; and
first and second displays included in the augmented reality display platform for the left and right eyes respectively,
wherein the augmented reality head-mounted ophthalmic system is configured to identify a vision defect of the wearer by projecting, via the waveguide stack, light forming independent first and second images into the left and right eyes respectively,
wherein the waveguide stack comprises a plurality of waveguides configured to output light with different amounts of wavefront divergence corresponding to a plurality of depth planes, wherein one or more waveguides of the plurality of waveguides are configured to output light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides.

US Pat. No. 10,436,958

FABRICATING NON-UNIFORM DIFFRACTION GRATINGS

Magic Leap, Inc., Planta...

1. A method of fabricating non-uniform structures, the method comprising:
implanting different densities of ions into corresponding areas of a substrate;
patterning a resist layer on the substrate; and then
etching the substrate with the patterned resist layer, leaving the substrate with at least one non-uniform structure having non-uniform characteristics associated with the different densities of ions implanted in the areas.

US Pat. No. 10,417,831

INTERACTIONS WITH 3D VIRTUAL OBJECTS USING POSES AND MULTIPLE-DOF CONTROLLERS

Magic Leap, Inc., Planta...

1. A system comprising:
a display system of a wearable device configured to present a three-dimensional (3D) view to a user and permit a user interaction with objects in a field of regard (FOR) of a user, the FOR comprising a portion of the environment around the user that is capable of being perceived by the user via the display system;
a sensor configured to acquire data associated with a pose of the user;
a hardware processor in communication with the sensor and the display system, the hardware processor programmed to:
initiate a cone cast, wherein the cone cast comprises a cast of a virtual cone with a dynamically-adjustable aperture;
identify one or more contextual features in the environment;
dynamically resize the aperture of the virtual cone based at least in part on the one or more contextual features;
translate the virtual cone based on the data associated with the pose of the user;
scan for a collision between the virtual cone and one or more objects in the environment;
identify, in response to the collision, a collided object; and
perform a user interaction associated with the collided object.
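The cone cast in this claim selects objects falling inside a virtual cone whose aperture widens or narrows with context. A minimal sketch of the collision scan, with all names and the angular test being illustrative assumptions rather than the patented method (the claim's pose-driven translation corresponds to updating the apex and direction; the dynamically adjustable aperture corresponds to the half-angle):

```python
import math

def cone_cast(apex, direction, half_angle_deg, objects):
    """Return names of objects whose centers lie inside the virtual cone.

    `objects` maps a name to a 3D center point; an object collides when
    the angle between the cone axis and the apex-to-object vector is at
    most the cone's half-angle.
    """
    dn = math.sqrt(sum(c * c for c in direction))
    axis = [c / dn for c in direction]
    hits = []
    for name, p in objects.items():
        v = [p[i] - apex[i] for i in range(3)]
        vn = math.sqrt(sum(c * c for c in v))
        if vn == 0.0:
            continue  # object at the apex: skip
        cos_a = sum(v[i] * axis[i] for i in range(3)) / vn
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= half_angle_deg:
            hits.append(name)
    return hits
```

Widening the half-angle (e.g. near dense clusters of small objects) makes selection more forgiving without moving the user's pose.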

US Pat. No. 10,408,613

METHOD AND SYSTEM FOR RENDERING VIRTUAL CONTENT

Magic Leap, Inc., Planta...

1. A method of rendering virtual content, comprising:
rendering a virtual world for a user by using at least a virtual world model based at least in part upon a location of the user at least by performing a first process, the first process comprising:
determining the location of a user in the virtual world at least by matching a signature of the location with a predetermined signature of a first node in a topological map, without having to process geometries at the location, wherein nodes in the topological map correspond to respective distinct signatures and do not include keyframes of corresponding locations;
retrieving a first set of data associated with a first part of the virtual world by using the virtual world model, the first set of data corresponding to both the first node in a topological map and the location of the user, wherein the virtual world model does not render virtual contents that are displayed to the user; and
rendering, based at least in part or in whole on the first set of data, the first part of the virtual world to a user device of the user at least by generating the virtual contents for the first part;
retrieving a second part of the virtual world by using at least the virtual world model based at least in part upon an anticipated location from the location of the user at least by performing a second process, the second process comprising:
determining an anticipated location for the user at the location at least by:
identifying an area around the user;
communicating an intended direction of the user and one or more locations within the area to a server storing the virtual world model; and
determining, at the server, the anticipated location from the world model based at least in part upon the intended location of the user and the one or more locations within the area;
retrieving a second set of data associated with the second part of the virtual world that corresponds to a second node representing the anticipated location of the user in the topological map; and
sharing a space at the anticipated location with a plurality of users including the user at least by updating the virtual world for the user, wherein updating the virtual world comprises rendering, based on the location and the anticipated location of the user, the second part of the virtual world using at least the second set of data, the second part comprising a virtual object that was previously generated in response to an action of a different user in the second part of the virtual world.

US Pat. No. 10,379,353

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DIAGNOSING HEALTH CONDITIONS BASED ON VISUAL FIELDS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising:
an augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted system, wherein the display platform comprises:
a light source configured to project light into the eye of the wearer to form a moving image in the eye; and
a user interface configured to receive input from a user,
wherein the wearable augmented reality device is configured to project the image at a particular portion of a periphery of the wearer's visual field and to detect a response regarding the image to determine the health of that particular portion of the visual field,
wherein the device is configured to provide a visual, audio, or tactile alert based on detection of a hazardous condition in the wearer's surroundings, and
wherein the hazardous condition is invisible to the wearer due to a visual field deficiency.

US Pat. No. 10,163,265

SELECTIVE LIGHT TRANSMISSION FOR AUGMENTED OR VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. A method, comprising:
tracking a movement of a first user's eye using a first head mounted display device;
estimating a depth of focus of the first user's eye based on the tracked eye movement;
modifying a light beam associated with a display object based on the estimated depth of focus such that the display object appears in focus;
projecting the light beam toward a display lens of the first head mounted display device;
directing the light beam into the first user's eye using the display lens;
selectively allowing a transmission of light from a local environment of the first user based on at least a selection of an augmented reality mode of the first head mounted display device;
capturing a field-of-view image by the first head mounted display device at the local environment of the first user;
generating a rendered physical object using the field-of-view image, the rendered physical object corresponding to a physical object present in the local environment of the first user, and the rendered physical object representing the physical object as it appears in the local environment; and
transmitting at least a portion of virtual world data associated with the display object and the rendered physical object to a second head mounted display device associated with a second user at a second location, the second head mounted display device projects the display object and the rendered physical object at the second location based at least in part on the virtual world data, wherein the first user and the second user interface with a shared virtual reality, the first head mounted display device of the first user operates in an augmented reality mode and the second head mounted display device of a second user operates in a virtual reality mode, and the second head mounted display device displaying the rendered physical object as the physical object appears in the local environment of the first user.
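The claim's step of estimating a depth of focus from tracked eye movement is commonly done via binocular vergence; the patent does not spell out the method, so the geometry below is a generic sketch with illustrative IPD and angle values:

```python
import math

def depth_from_vergence(ipd_m, vergence_deg):
    """Estimate fixation depth from the binocular vergence angle.

    Assumes symmetric vergence toward a point on the midline:
    depth = (IPD / 2) / tan(vergence / 2).
    """
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# With a 64 mm IPD, a vergence angle of about 3.66 degrees
# corresponds to a fixation depth of roughly one metre.
d = depth_from_vergence(0.064, 3.66)
```

The estimated depth would then drive how the light beam is modified so the display object appears in focus at that distance.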

US Pat. No. 10,473,934

METHODS AND SYSTEMS FOR PERFORMING SLIT LAMP EXAMINATION

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform configured to pass light from the world into an eye of a wearer wearing the head-mounted ophthalmic system;
a light source configured to provide augmented reality content on a plurality of depth planes and to project an illumination beam of light into the eye of the wearer to illuminate an anterior or posterior portion of the eye, a cross-sectional beam shape of the illumination beam configured such that a dimension of the cross-sectional beam shape along a superior-inferior direction of the eye is greater than a dimension of the cross-sectional beam shape along a nasal-temporal direction of the eye; and
an imaging system configured to track a gaze of the wearer's eye and to capture an image of the illuminated portion of the wearer's eye to perform a slit lamp examination to determine health of the eye.

US Pat. No. 10,466,480

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS HAVING UNEQUAL NUMBERS OF COMPONENT COLOR IMAGES DISTRIBUTED ACROSS DEPTH PLANES

Magic Leap, Inc., Planta...

1. A display system comprising:
a display configured to display image content on a plurality of depth planes by outputting a plurality of component color images, the display comprising:
a waveguide stack comprising:
a first plurality of waveguides, each of the first plurality of waveguides comprising first outcoupling optical elements configured to selectively outcouple light of a first component color, wherein each of the first plurality of waveguides is configured to output light of the first component color with a different amount of wavefront divergence than other waveguides of the first plurality of waveguides;
a second plurality of waveguides, each of the second plurality of waveguides comprising second outcoupling optical elements configured to selectively outcouple light of a second component color, wherein each of the second plurality of waveguides is configured to output light of the second component color with a different amount of wavefront divergence than other waveguides of the second plurality of waveguides; and
a third plurality of waveguides, each of the third plurality of waveguides comprising third outcoupling optical elements configured to selectively outcouple light of a third component color, wherein each of the third plurality of waveguides is configured to output light of the third component color with a different amount of wavefront divergence than other waveguides of the third plurality of waveguides,
wherein the light of the first component color comprises light having a wavelength within a first wavelength range, wherein the light of the second component color comprises light having a wavelength within a second wavelength range, and wherein the light of the third component color comprises light having a wavelength within a third wavelength range,
wherein the first plurality of waveguides totals a greater number of waveguides than the second plurality of waveguides.

US Pat. No. 10,466,486

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS HAVING IMPROVED DIFFRACTIVE GRATING STRUCTURES

Magic Leap, Inc., Planta...

1. An augmented reality (AR) display system for delivering augmented reality content to a user, comprising:
an image-generating source to provide one or more frames of image data;
a light modulator to transmit light associated with the one or more frames of image data;
a diffractive optical element (DOE) to receive the light associated with the one or more frames of image data and direct the light to the user's eyes, the DOE comprising a diffraction structure having a waveguide substrate, a surface grating, and an underlayer disposed between the waveguide substrate and the surface grating; and
wherein the surface grating has a surface grating refractive index, the underlayer has an underlayer refractive index, and the surface grating refractive index is smaller than the underlayer refractive index.

US Pat. No. 10,371,949

METHODS AND SYSTEMS FOR PERFORMING CONFOCAL MICROSCOPY

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted display configured to pass light from the world into an eye of a wearer wearing the wearable augmented reality display, wherein the display comprises:
a waveguide stack comprising a plurality of waveguides, wherein each waveguide of the plurality of waveguides has an associated diffractive out-coupling optical element,
wherein the plurality of waveguides are configured to output, via the respective associated diffractive out-coupling optical elements, image light with different amounts of wavefront divergence corresponding to a plurality of depth planes, wherein one or more waveguides of the plurality of waveguides are configured to output the image light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides,
wherein the image light provides images to the eye of the wearer;
a confocal microscope, the confocal microscope comprising:
a light source comprising an aperture configured to form a point light source; and
an imaging system to image the eye,
wherein the light source is configured to project light beams to the eye, the light beams having temporally varying wavelengths.

US Pat. No. 10,510,188

OVER-RENDERING TECHNIQUES IN AUGMENTED OR VIRTUAL REALITY SYSTEMS

MAGIC LEAP, INC., Planta...

1. A method of operation in a virtual image presentation system, the method comprising:
over-rendering a frame of a sequence of frames for a field of view provided by the virtual image presentation system such that a set of total pixels included in the frame exceeds a first set of pixels included in a reduced frame that is to be presented in a maximum area of a display to a user when the display is configured to present one or more virtual images corresponding to the frame at a maximum image resolution;
determining a detected or predicted speed or a detected or predicted acceleration of head movement of the user;
predicting, with a predictive head tracking module including a processor and one or more transducers in the virtual image presentation system, at least one predicted head movement of the user based at least in part upon the detected or predicted speed or the detected or predicted acceleration of head movement of the user;
determining a portion of the frame to present to the user based on at least one of a detected head movement and the at least one predicted head movement, wherein
the portion is of a size that is smaller than an entire size of the frame, and
the portion of the frame to be presented to the user is based at least in part on determining a location of a virtual object having at least a defined minimum speed in the field of view of the user;
selectively reading out data for the portion of the frame from at least one frame buffer of the virtual image presentation system;
adjusting an actual or perceived pixel size of at least one pixel of a set of pixels in a first portion of at least one subsequent frame into an adjusted pixel size of the at least one pixel based in part or in whole upon a variation in pixel spacing values of adjacent pixels and the at least one predicted head movement, wherein a pixel spacing value indicates spacing between two adjacent pixels and is predicted to cause the variation in the first portion relative to a remaining portion of the at least one subsequent frame based at least in part upon the at least one predicted head movement; and
presenting the at least one subsequent frame after the frame to the user at least by using at least the adjusted pixel size for the first portion in the at least one subsequent frame and by using the actual or perceived pixel size for the remaining portion in the at least one subsequent frame.
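The over-rendering idea — drawing a frame larger than the viewport and then selectively reading out a portion shifted toward the predicted head movement — can be sketched as follows. The buffer sizes and clamping policy are illustrative, and this shows only the read-out step, not the claimed pixel-size adjustment:

```python
def crop_overrendered(frame, frame_w, view_w, view_h, head_dx, head_dy):
    """Read the viewport out of an over-rendered buffer, shifted toward the
    predicted head movement, without re-rendering.

    frame is a flat row-major pixel list whose width frame_w exceeds view_w.
    """
    frame_h = len(frame) // frame_w
    # Clamp the shifted origin so the viewport stays inside the big frame.
    x0 = max(0, min(frame_w - view_w, (frame_w - view_w) // 2 + head_dx))
    y0 = max(0, min(frame_h - view_h, (frame_h - view_h) // 2 + head_dy))
    return [frame[(y0 + y) * frame_w + (x0 + x)]
            for y in range(view_h) for x in range(view_w)]

# 4x4 over-rendered frame, 2x2 viewport, head predicted to move right/down.
big = list(range(16))
view = crop_overrendered(big, 4, 2, 2, 1, 1)
```

Because the extra pixels already exist in the frame buffer, a predicted head movement costs only a different read-out origin rather than a re-render.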

US Pat. No. 10,495,453

AUGMENTED REALITY SYSTEM TOTEMS AND METHODS OF USING SAME

MAGIC LEAP, INC., Planta...

1. An augmented reality system, comprising:
a spherical totem having a surface configured for a manipulation such that the manipulation is detectable as user input by an augmented reality system, the spherical totem having a spherical shape,
wherein the spherical totem comprises a soft material configured to provide a tactile perception to a user when the user interacts with the spherical totem via touch, the tactile perception being a perception of a deformation of the surface of the spherical totem upon user interaction, wherein the spherical totem does not have physical input structures to provide the user input to the augmented reality system,
wherein the augmented reality system is configured to display a three dimensional flower-petal shaped virtual user interface in response to the user interaction, such that the virtual user interface appears to emanate from the spherical totem.

US Pat. No. 10,466,477

METHODS AND SYSTEMS FOR PROVIDING WAVEFRONT CORRECTIONS FOR TREATING CONDITIONS INCLUDING MYOPIA, HYPEROPIA, AND/OR ASTIGMATISM

Magic Leap, Inc., Planta...

1. A wearable ophthalmic device comprising:
an augmented reality head-mounted display system configured to pass light from the world through the display and into an eye of a person wearing the head-mounted display system;
a light source configured to emit image light for propagation into the eye of the person to form an image in the eye; and
an adaptable optics element configured to apply a wavefront correction to the image light from the light source forming the image based on an optical prescription for the eye,
wherein the head-mounted display system comprises a waveguide stack comprising a plurality of waveguides configured to receive the image light from the light source and output the image light with wavefront divergence corresponding to different depth planes, and
wherein the waveguide stack is configured to provide image content at the different depth planes.

US Pat. No. 10,442,727

PATTERNING OF HIGH REFRACTIVE INDEX GLASSES BY PLASMA ETCHING

Magic Leap, Inc., Planta...

1. A method for forming one or more diffractive gratings in a waveguide, the method comprising:
providing a waveguide having a refractive index of greater than or equal to about 1.65, wherein more than 50 wt % of the waveguide is formed of one or more of B2O3, Al2O3, ZrO2, Li2O, Na2O, K2O, MgO, CaO, SrO, BaO, ZnO, La2O3, Nb2O5, TiO2, HfO, and Sb2O3;
providing a mask layer over the waveguide, the mask layer having a pattern corresponding to the one or more diffractive gratings, the pattern selectively exposing portions of the waveguide; and
anisotropically etching the exposed portions of the waveguide to define the one or more diffractive gratings in the waveguide.

US Pat. No. 10,445,881

NEURAL NETWORK FOR EYE IMAGE SEGMENTATION AND IMAGE QUALITY ESTIMATION

Magic Leap, Inc., Planta...

1. A system for eye image segmentation and image quality estimation, the system comprising:
an eye-imaging camera configured to obtain an eye image;
non-transitory memory configured to store the eye image;
a hardware processor in communication with the non-transitory memory, the hardware processor programmed to:
receive the eye image;
process the eye image using a convolution neural network to generate a segmentation of the eye image; and
process the eye image using the convolution neural network to generate a quality estimation of the eye image,
wherein the convolution neural network comprises a segmentation tower and a quality estimation tower,
wherein the segmentation tower comprises segmentation layers and shared layers,
wherein the quality estimation tower comprises quality estimation layers and the shared layers,
wherein a first output layer of the shared layers is connected to a first input layer of the segmentation tower and to a second input layer of the segmentation tower, at least one of the first input layer or the second input layer comprising a concatenation layer,
wherein the first output layer of the shared layers is connected to an input layer of the quality estimation layers, and
wherein the eye image is received by an input layer of the shared layers.
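The shared-layers/two-tower topology of the claim can be mocked up functionally. The toy "layers" below are stand-ins for real convolutional layers (the arithmetic is invented for illustration); the concatenation of shared features with a skip input mirrors the claimed concatenation layer, and both towers consume the same shared output:

```python
def shared_layers(x):
    """Shared feature extractor (stand-in for shared conv layers)."""
    return [v * 2.0 for v in x]

def segmentation_tower(features, skip):
    """Segmentation tower; its input concatenates the shared features with a
    skip connection, playing the role of the claimed concatenation layer."""
    concat = features + skip
    return [1.0 if v > 1.0 else 0.0 for v in concat]  # toy per-element mask

def quality_tower(features):
    """Quality-estimation tower: collapses shared features to one score."""
    return sum(features) / len(features)

eye_image = [0.2, 0.8, 0.5]                   # toy stand-in for pixel data
feats = shared_layers(eye_image)              # first output of shared layers
mask = segmentation_tower(feats, eye_image)   # tower with two input paths
quality = quality_tower(feats)                # same shared output reused
```

The point of the shared trunk is that one forward pass feeds both the segmentation and the quality estimate, rather than running two separate networks over the eye image.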

US Pat. No. 10,430,985

AUGMENTED REALITY SYSTEMS AND METHODS UTILIZING REFLECTIONS

Magic Leap, Inc., Planta...

1. A display system, comprising:
a wearable display device comprising:
a display area comprising light redirecting features configured to direct light to a user, wherein the display area is at least partially transparent and is configured to provide a view of an ambient environment through the display area;
one or more hardware processors; and
a non-transitory computer-readable storage medium including computer-executable instructions that, when executed by the one or more hardware processors, cause the one or more hardware processors to perform operations comprising:
determining that a specular reflection is within the user's field of view through the display area, wherein determining that the specular reflection is within the user's field of view comprises:
receiving an indication of diffusion of detected reflected electromagnetic radiation or acoustical waves; and
determining that a specular reflection is present if the indicated diffusion is below a threshold.
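The diffusion-threshold test of the final limitation might look like this, with diffusion estimated as the angular spread of detected returns; the threshold and angle values are invented for illustration:

```python
import statistics

def diffusion_of(return_angles_deg):
    """Estimate diffusion as the spread (stdev) of detected return angles."""
    return statistics.pstdev(return_angles_deg)

def is_specular(return_angles_deg, threshold_deg=5.0):
    """A tight angular return (diffusion below the threshold) indicates a
    specular reflection; a broad return indicates a diffuse surface."""
    return diffusion_of(return_angles_deg) < threshold_deg

mirror_like = [30.0, 30.4, 29.8, 30.1]   # narrow return lobe
matte_like = [10.0, 35.0, 60.0, 85.0]    # broadly scattered return
```

A mirror reflects the probing radiation into a narrow lobe, so its measured diffusion falls below the threshold, whereas a matte wall scatters it widely.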

US Pat. No. 10,422,991

BUCKLING MODE ACTUATION OF FIBER SCANNER TO INCREASE FIELD OF VIEW

MAGIC LEAP, INC., Planta...

1. A system comprising:
an optical fiber having a distal fiber end and a proximal fiber end;
a first electromechanical transducer mechanically coupled to the optical fiber between the distal fiber end and the proximal fiber end, wherein the first electromechanical transducer is configured to apply a buckling force to the optical fiber by reducing a length of the first electromechanical transducer between a distal buckling end of the first electromechanical transducer and a proximal buckling end of the first electromechanical transducer; and
a second electromechanical transducer mechanically coupled to the optical fiber between the distal fiber end and the proximal fiber end, wherein the second electromechanical transducer is configured to excite whirling of the distal fiber end.

US Pat. No. 10,448,189

VIRTUAL REALITY, AUGMENTED REALITY, AND MIXED REALITY SYSTEMS WITH SPATIALIZED AUDIO

MAGIC LEAP, INC., Planta...

1. A spatialized audio system, comprising:
a sensor to detect a most current head pose of a listener; and
a processor to render audio data in first and second stages,
the first stage comprising rendering first audio data corresponding to a first plurality of sources, each of the first plurality of sources having one of a first plurality of positions, to second audio data corresponding to a second plurality of sources, each of the second plurality of sources having one of a second plurality of positions, and
the second stage comprising rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources, each of the third plurality of sources having one of a third plurality of positions, based on the detected most current head pose of the listener,
wherein the second plurality of sources consists of fewer sources than the first plurality of sources,
wherein rendering the first audio data to the second audio data comprises warping the first audio data, and
wherein rendering the second audio data to the third audio data comprises warping the second audio data.
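A minimal sketch of the two-stage rendering — first warping many sources down to fewer intermediate sources, then rendering those against the most current head pose — assuming a toy azimuth-only model and simple stereo panning, none of which is specified by the claim:

```python
import math

def stage_one(sources, virtual_positions):
    """First stage: warp many sources into fewer intermediate sources by
    mixing each source into its nearest virtual position."""
    mixed = [0.0] * len(virtual_positions)
    for azimuth_deg, amplitude in sources:
        nearest = min(range(len(virtual_positions)),
                      key=lambda i: abs(virtual_positions[i] - azimuth_deg))
        mixed[nearest] += amplitude
    return list(zip(virtual_positions, mixed))

def stage_two(intermediate, head_yaw_deg):
    """Second stage: re-render the few intermediate sources against the
    most current head pose (simple head-relative stereo pan)."""
    left = right = 0.0
    for azimuth_deg, amplitude in intermediate:
        rel = math.radians(azimuth_deg - head_yaw_deg)
        right += amplitude * (1 + math.sin(rel)) / 2
        left += amplitude * (1 - math.sin(rel)) / 2
    return left, right

sources = [(-80, 1.0), (-70, 0.5), (75, 1.0), (85, 0.5)]  # four sources
intermediate = stage_one(sources, [-90, 90])              # warped to two
left, right = stage_two(intermediate, head_yaw_deg=0.0)
```

The split matters for latency: the expensive first stage can run early over many sources, while the cheap second stage runs last, just before playback, using the freshest head pose.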

US Pat. No. 10,429,649

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DIAGNOSING USING OCCLUDER

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform configured to pass light from the world into at least one eye of a person wearing the ophthalmic system;
a light source configured to output light for propagation into the eye of the person to form an image in the eye; and
a user interface configured to receive input from the person,
wherein the ophthalmic system is configured to occlude one or more selectable portions of a field of view of the person's eye and to receive input from the person regarding the wearer's vision through the user interface, wherein the wearable augmented reality device is configured to occlude a central region of the field of view and wherein the wearable augmented reality device is configured to determine whether occluding the central region improves the person's vision of the image, and conclude a presence of a visual defect in the eye of the person based on the determined improvement in the person's vision.

US Pat. No. 10,425,171

DETERMINING PROXIMITY OF TRANSMITTER ANTENNAS OF PORTABLE DEVICES TO A HUMAN BODY FOR LIMITING TRANSMITTER OUTPUT POWER TO MEET SPECIFIC ABSORPTION RATE (SAR) REQUIREMENTS

Magic Leap, Inc., Planta...

1. A method, comprising:
sensing, by a proximity sensor communicatively coupled to a transmitting device, whether an object is proximate to the transmitting device;
determining whether the transmitting device or a portion thereof is proximate to a portion of a human body at least by:
determining whether the transmitting device or a portion thereof is within a field of view of an image sensing system that is communicably coupled to the transmitting device; and
analyzing data captured by the image sensing system to determine whether the transmitting device or the portion thereof is proximate to the portion of the human body based in part or in whole upon results of determining whether the transmitting device or the portion thereof is within the field of view of the image sensing system; and
adjusting transmitting output power of an antenna operatively coupled to the transmitting device to be less than or equal to a threshold output power based in part or in whole upon results of determining whether the transmitting device or the portion thereof is proximate to the portion of the human body.
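The claimed decision logic — cap transmit power when either the proximity sensor or the camera-based analysis indicates the transmitter is near the body — reduces to a small predicate; the power levels here are placeholders, not SAR-compliant figures:

```python
def transmit_power_mw(proximity_detected, in_camera_fov, near_body_in_image,
                      normal_mw=100.0, sar_limited_mw=25.0):
    """Cap antenna output at a SAR-safe threshold when the proximity sensor
    fires, or when image analysis (only meaningful if the transmitter is
    inside the camera's field of view) places it near the body."""
    near_body = proximity_detected or (in_camera_fov and near_body_in_image)
    return sar_limited_mw if near_body else normal_mw

p_held = transmit_power_mw(True, False, False)   # sensor detects the body
p_free = transmit_power_mw(False, True, False)   # in view but away from body
```

Note how the image-analysis result is gated on the field-of-view check, matching the claim's "based in part or in whole upon results of determining whether the transmitting device ... is within the field of view".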

US Pat. No. 10,564,423

AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS AND METHODS FOR DELIVERY OF MEDICATION TO EYES

Magic Leap, Inc., Planta...

1. A wearable augmented reality device comprising:
an augmented reality head-mounted ophthalmic system comprising an augmented reality display platform comprising:
a frame configured to be worn on the head of a wearer wearing the head-mounted system; and
a display attached to the frame, the display comprising:
a waveguide stack configured to pass light from the world into an eye of the wearer; and
a light source configured to provide light for generating virtual content,
wherein the waveguide stack comprises a plurality of waveguides, each waveguide of the plurality of waveguides comprising:
in-coupling diffractive optical elements configured to in-couple light provided by the light source, and
out-coupling diffractive optical elements configured to out-couple in-coupled light to the eye of the wearer,
wherein one or more waveguides of the plurality of waveguides are configured to output in-coupled light to an eye of the wearer with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein different amounts of wavefront divergence are associated with different accommodation by the eye, and wherein the outputted light with different amounts of wavefront divergence forms virtual objects at different perceived depths away from the wearer, wherein each waveguide of the plurality of waveguides is configured to guide light therein by total internal reflection between opposing surfaces of the waveguide; and
a medication delivery system configured to deliver medication to the eye of the wearer,
wherein the augmented reality head-mounted ophthalmic system is configured to:
display a visual cue on the display; and
provide an alert to the wearer to focus on the visual cue while the medication is delivered.

US Pat. No. 10,466,561

DIFFRACTIVE DEVICES BASED ON CHOLESTERIC LIQUID CRYSTAL

Magic Leap, Inc., Planta...

1. A diffraction grating comprising:
a cholesteric liquid crystal (CLC) layer comprising a plurality of chiral structures, wherein each chiral structure comprises a plurality of liquid crystal molecules that extend in a layer depth direction by at least a helical pitch and are successively rotated in a first rotation direction,
wherein the helical pitch is a length in the layer depth direction corresponding to a net rotation angle of the liquid crystal molecules of the chiral structures by one full rotation in the first rotation direction, and
wherein arrangements of the liquid crystal molecules of the chiral structures vary periodically in a lateral direction perpendicular to the layer depth direction, and
wherein the chiral structures comprise a plurality of first liquid crystal molecules stacked over a plurality of second liquid crystal molecules, wherein the first liquid crystal molecules and the second liquid crystal molecules extend in the layer depth direction at different helical pitches.

US Pat. No. 10,469,546

SYSTEM AND METHOD FOR AUGMENTED AND VIRTUAL REALITY

Magic Leap, Inc., Planta...

1. A method, comprising:
selecting a visualization mode selected from a group consisting of an augmented reality mode, a virtual reality mode, and a blended reality mode, the visualization mode comprising the blended reality mode, the blended reality mode comprising rendering individual physical objects sensed using an environment sensing system as individual virtual physical objects;
sensing two or more individual physical objects using the environment sensing system;
projecting light toward a display lens of a head-mounted user display device, the light comprising a virtual object and renderings of the two or more individual physical objects;
reflecting the light using the display lens toward a user's eye;
selectively allowing transmission of light from an outside environment directly through the display lens in response to selecting the visualization mode using the display lens, wherein the head-mounted user display device is configured for displaying one of a virtual object, a physical object, a virtual physical object, or any combination of virtual objects, physical objects, and virtual physical objects; and
transmitting the virtual object and the renderings of the two or more individual physical objects to a second head-mounted user display device in a second environment, the second head-mounted user display device projecting second light toward a second display lens of the second head-mounted user display device and reflecting the second light using the second display lens toward a second user's eye, the second head-mounted user display device operating in the augmented reality mode or the virtual reality mode and displaying at least the virtual object and the renderings of the two or more physical objects to the second user.

US Pat. No. 10,437,048

METHODS AND SYSTEMS FOR MULTI-ELEMENT LINKAGE FOR FIBER SCANNING DISPLAY

Magic Leap, Inc., Planta...

1. A multi-element fiber scanner for scanning electromagnetic imaging radiation, the multi-element fiber scanner comprising:
a base having a base plane and a longitudinal axis orthogonal to the base plane;
a first fiber link passing through the base in a direction parallel to the longitudinal axis, wherein the first fiber link is operatively coupled to at least one electromagnetic radiation source;
a plurality of additional links, each spatially separated by an air gap, joined to the base, and extending from the base, wherein the first fiber link passes through the base independently from the plurality of additional links and the first fiber link and each of the plurality of additional links are spatially separated by air gaps; and
a retention collar disposed a predetermined distance along the longitudinal axis from the base, wherein the first fiber link and the plurality of additional links are separately joined to the retention collar.

US Pat. No. 10,345,593

METHODS AND SYSTEMS FOR PROVIDING AUGMENTED REALITY CONTENT FOR TREATING COLOR BLINDNESS

Magic Leap, Inc., Planta...

1. A wearable augmented reality device configured to be used by a wearer, said device comprising:
an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform, said augmented reality head-mounted ophthalmic system configured to pass light from the world into an eye of a wearer wearing the head-mounted system, said wearable augmented reality display platform comprising a display comprising at least one light source, wherein the wearable augmented reality display platform comprises a waveguide stack comprising a plurality of waveguides, each waveguide of the plurality of waveguides comprising:
in-coupling diffractive optical elements configured to in-couple light provided by the light source, and
out-coupling diffractive optical elements configured to out-couple in-coupled light to the eye of a wearer of the head-mounted augmented reality display,
wherein one or more waveguides of the plurality of waveguides are configured to output the in-coupled light with a different amount of wavefront divergence than one or more other waveguides of the plurality of waveguides, wherein the different amounts of wavefront divergence correspond to different depth planes; and
wherein said wearable augmented reality device is configured to selectively modify wearer perception of said light from the world by out-coupling in-coupled light from the waveguide stack to the eye of the wearer based on a color detection deficiency of the wearer.

US Pat. No. 10,558,047

AUGMENTED REALITY SPECTROSCOPY

Magic Leap, Inc., Planta...

1. A wearable spectroscopy system comprising:
a head-mounted display system removably coupleable to a user's head;
at least one eye tracking camera configured to detect a gaze of the user;
one or more light sources coupled to the head-mounted display system and configured to emit light with at least two different wavelengths in an irradiated field of view, wherein the spectroscopy system is configured to determine a gaze direction of the user and to direct light emission along substantially the same direction as the determined gaze direction;
one or more electromagnetic radiation detectors coupled to the head-mounted member and configured to receive reflected light from a target object within the irradiated field of view;
a controller operatively coupled to the one or more light sources and the one or more electromagnetic radiation detectors, the controller configured to cause the one or more light sources to emit pulses of light while also causing the one or more electromagnetic radiation detectors to detect levels of light absorption related to the emitted pulses of light and reflected light from the target object;
an absorption database of light absorption properties of at least one material; and
a graphics processor unit to display an output to the user,
wherein the head-mounted display system comprises a waveguide stack configured to output light with selectively variable levels of wavefront divergence.

US Pat. No. 10,536,783

TECHNIQUE FOR DIRECTING AUDIO IN AUGMENTED REALITY SYSTEM

Magic Leap, Inc., Planta...

1. A hearing aid for use by an end user, comprising:
at least one sensor configured for detecting a focus of the end user on a real sound source;
an adaptive microphone assembly configured for converting sounds into electrical signals;
at least one speaker configured for converting the electrical signals to sounds for perception by the end user; and
a control subsystem configured for modifying, based on
the detected focus of the end user,
a direction of a greatest sensitivity of the adaptive microphone assembly, and
a distance of the greatest sensitivity of the adaptive microphone assembly,
wherein the control subsystem is configured for processing the electrical signals to compare characteristics of a first sound originating from the real sound source to characteristics of a second sound originating from a different source, emphasizing sounds having the same type of characteristics as the characteristics of the first sound, and deemphasizing sounds having the same type of characteristics as the second sound.

US Pat. No. 10,481,399

SYSTEMS AND METHODS FOR OPTICAL SYSTEMS WITH EXIT PUPIL EXPANDER

Magic Leap, Inc., Planta...

1. An optical system comprising:
an image projection system, the image projection system configured to emit a coherent beam of light at a plurality of wavelengths in the visible spectral range;
a waveguide comprising a first edge, a second edge and a pair of reflective surfaces disposed between the first and the second edges, the pair of reflective surfaces separated by a gap having a gap height d, the waveguide comprising a material having a refractive index n, the pair of reflective surfaces having a reflectivity r, the beam emitted from the image projection system being coupled into the waveguide at an input angle θ; and
a control system configured to vary at least one parameter selected from the group consisting of: a wavelength from the plurality of wavelengths, the gap height d, the refractive index n and the reflectivity r, wherein the variation of the at least one parameter is correlated with variation in the input angle θ such that the equation 2nd sin θ=mλ is satisfied for values of the input angle θ, wherein m is an integer and λ is the wavelength of the beam.
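The claimed resonance condition 2nd sin θ = mλ can be solved directly for the input angle: θ = arcsin(mλ / 2nd), with no real solution when mλ exceeds 2nd. A minimal sketch (the function name and the numeric values are illustrative assumptions, not from the patent):

```python
import math

def input_angle(m, wavelength, n, d):
    """Solve 2*n*d*sin(theta) = m*wavelength for theta (radians).

    Returns None when no real solution exists (|sin(theta)| > 1).
    """
    s = m * wavelength / (2 * n * d)
    if abs(s) > 1:
        return None
    return math.asin(s)

# Illustrative values: 520 nm green light, n = 1.5, 1 micron gap, order m = 1.
theta = input_angle(m=1, wavelength=520e-9, n=1.5, d=1e-6)
print(math.degrees(theta))  # roughly 10 degrees
```

This is the relationship the claimed control system maintains: when any one of λ, d, or n is varied, θ must be varied in a correlated way so the equation stays satisfied for some integer order m.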

US Pat. No. 10,436,968

WAVEGUIDES HAVING REFLECTIVE LAYERS FORMED BY REFLECTIVE FLOWABLE MATERIALS

Magic Leap, Inc., Planta...

1. A method of making an optical waveguide structure, the method comprising:
forming a reflective optical element for a waveguide, wherein forming the reflective optical element comprises:
providing a pattern of protrusions on a first surface of the waveguide; and
depositing a reflective ink to form a reflective ink layer on surfaces of the protrusions, the reflective ink layer having at least one parameter that varies across an area occupied by the pattern of protrusions, and
wherein a reflectivity of the reflective optical element varies with the at least one parameter of the reflective ink across the area occupied by the pattern of protrusions.

US Pat. No. 10,551,533

METHODS AND SYSTEM FOR CREATING FOCAL PLANES USING AN ALVAREZ LENS

Magic Leap, Inc., Planta...

1. An augmented reality (AR) display system for delivering augmented reality content to a user, comprising:
an image-generating source to provide one or more frames of image data;
a light modulator to transmit light associated with the one or more frames of image data;
a lens assembly comprising first and second transmissive plates, the first and second transmissive plates each having a first side and a second side that is opposite to the first side, the first side being a plano side, and the second side being a shaped side, the second side of the first transmissive plate comprising a first surface sag based at least in part on a cubic function, and the second side of the second transmissive plate comprising a second surface sag based at least in part on an inverse of the cubic function; and
a diffractive optical element (DOE) to receive the light associated with the one or more frames of image data and direct the light to the user's eyes, the DOE being disposed between and adjacent to the first side of the first transmissive plate and the first side of the second transmissive plate, and wherein the DOE is encoded with refractive lens information corresponding to the inverse of the cubic function.
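The pairing of a cubic surface sag with its inverse is the classical Alvarez-lens principle: shearing the two plates laterally in opposite directions leaves a combined thickness profile that is quadratic in x² + y², i.e. a lens whose optical power varies with the shift. A minimal sketch under the common textbook sag z(x, y) = a(x³/3 + xy²) (the coefficient a, function names, and shift convention are assumptions; the patent only says the sags are based on a cubic function and its inverse):

```python
def cubic_sag(x, y, a=1.0):
    """Cubic Alvarez surface sag z(x, y) = a*(x**3/3 + x*y**2)."""
    return a * (x**3 / 3.0 + x * y**2)

def combined_thickness(x, y, delta, a=1.0):
    """Combined thickness of the cubic plate shifted by +delta in x and
    its inverse-sag counterpart shifted by -delta.

    Algebraically this equals a*(2*delta*(x**2 + y**2) + 2*delta**3/3):
    a paraboloid, i.e. a lens whose power scales with the shift delta.
    """
    return cubic_sag(x + delta, y, a) - cubic_sag(x - delta, y, a)

print(combined_thickness(0.1, -0.2, delta=0.05, a=0.7))
```

This is why moving the plates creates selectable focal planes: each lateral shift δ selects a different quadratic (lens) term, while the constant 2δ³/3 offset adds no optical power.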