US Pat. No. 9,710,887

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING CONTEXT DISPLAY AND PROJECTORS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context display for rendering a context image, wherein an angular width of a projection of the rendered context
image ranges from 40 degrees to 220 degrees, the at least one context display comprising at least one projection surface;
and

at least one focus image projector for rendering a focus image, wherein an angular width of a projection of the rendered focus
image ranges from 5 degrees to 60 degrees;
an image steering unit;
means for detecting a gaze direction; and
a processor coupled in communication with the image steering unit and the means for detecting the gaze direction, wherein
the processor is configured to:

(a) receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution
and the focus image having a second resolution, wherein:

a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
the focus image substantially corresponds to the region of visual accuracy of the input image, and
the second resolution is higher than the first resolution;
(c) render the context image at the at least one context display;
(d) render the focus image at the at least one projection surface via the at least one focus image projector; and
(e) control the image steering unit to adjust a location of the projection of the rendered focus image on the at least one
projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked
region of the rendered context image,

wherein the processor is configured to perform (c), (d) and (e) substantially simultaneously, and an arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image
to create a visual scene.
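
The processing in steps (a) and (b) amounts to a gaze-driven split of one input frame into a low-resolution context image with its gaze region masked out and a high-resolution focus crop. A minimal NumPy sketch of that split follows; the square region shape, the sizes and the gaze coordinates are illustrative assumptions, not claim terms, and the rendering and steering steps (c) to (e) are omitted.

```python
import numpy as np

def split_context_focus(input_image, gaze_xy, focus_size=256, context_scale=4):
    """Illustrative split of an input image into a masked, low-resolution
    context image and a high-resolution focus image (steps (a)-(b) above).
    The rectangular region shape and the sizes are assumptions."""
    h, w = input_image.shape[:2]
    gx, gy = gaze_xy  # gaze point in input-image pixel coordinates

    # Region of visual accuracy: a square window centred on the gaze point.
    half = focus_size // 2
    x0, x1 = max(0, gx - half), min(w, gx + half)
    y0, y1 = max(0, gy - half), min(h, gy + half)

    # Focus image: full-resolution crop of the region of visual accuracy.
    focus = input_image[y0:y1, x0:x1].copy()

    # Context image: downsampled copy (lower, first resolution) of the whole
    # input, with the region corresponding to the focus crop masked out.
    context = input_image[::context_scale, ::context_scale].copy()
    context[y0 // context_scale:y1 // context_scale,
            x0 // context_scale:x1 // context_scale] = 0
    return context, focus

# Example: 1080p frame, gaze near the centre.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
context_img, focus_img = split_context_focus(frame, (960, 540))
print(context_img.shape, focus_img.shape)  # (270, 480, 3) (256, 256, 3)
```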

US Pat. No. 9,939,898

DISPLAY APPARATUS AND METHOD USING PORTABLE ELECTRONIC DEVICE

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one focus display;
a processor coupled to the at least one focus display, wherein the processor is configured to render a focus image at the at least one focus display, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees,
wherein the display apparatus is arranged to be detachably attached to a portable electronic device and to be communicably coupled with the portable electronic device, and wherein a processor of the portable electronic device is configured to render a context image at a display of the portable electronic device, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees,
means for detecting a gaze direction, wherein the processor of the display apparatus is configured to communicate the detected gaze direction to the processor of the portable electronic device;
wherein the processor of the portable electronic device is configured to:
(a) obtain an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution, wherein:
(i) a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
(ii) the focus image substantially corresponds to the region of visual accuracy of the input image, and
(iii) the second resolution is higher than the first resolution; and
(c) communicate the focus image to the processor of the display apparatus; and
wherein the projection of the rendered context image is combined with the projection of the rendered focus image to create a visual scene.
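
The claim divides the work between two processors: the display apparatus detects the gaze and renders the focus image, while the portable electronic device's processor generates both images, renders the context image on its own display and sends the focus image back. The sketch below mimics that flow with two plain Python classes; the class and method names, the in-process calls standing in for the communicable coupling, and the image sizes are illustrative assumptions.

```python
import numpy as np

class PhoneProcessor:
    """Stand-in for the portable electronic device's processor (steps (a)-(c)):
    on receiving a gaze direction it obtains an input image, generates the
    context and focus images, renders the context image on the phone display
    and returns the focus image to the display apparatus."""
    def handle_gaze(self, gaze_xy, size=256, scale=4):
        frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # (a) obtain input image
        gx, gy = gaze_xy
        # (b) focus image: high-resolution crop of the region of visual accuracy.
        focus = frame[gy - size // 2:gy + size // 2,
                      gx - size // 2:gx + size // 2].copy()
        # (b)(i) context image: low-resolution copy with the gaze region masked.
        context = frame[::scale, ::scale].copy()
        context[(gy - size // 2) // scale:(gy + size // 2) // scale,
                (gx - size // 2) // scale:(gx + size // 2) // scale] = 0
        self.render_on_phone_display(context)               # context shown on phone
        return focus                                        # (c) sent back to the HMD

    def render_on_phone_display(self, context):
        pass  # placeholder for the phone's own display

class DisplayApparatus:
    """Stand-in for the HMD side: detects the gaze, forwards it to the phone,
    and renders the returned focus image on its focus display."""
    def __init__(self, phone):
        self.phone = phone

    def frame_step(self):
        gaze_xy = (960, 540)                       # from the means for detecting gaze
        focus = self.phone.handle_gaze(gaze_xy)    # communicate gaze, receive focus
        self.render_on_focus_display(focus)

    def render_on_focus_display(self, focus):
        pass  # placeholder for the at least one focus display

DisplayApparatus(PhoneProcessor()).frame_step()
```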

US Pat. No. 9,711,114

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING PROJECTORS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context image projector or at least one context display for rendering a context image, wherein an angular width
of a projection of the rendered context image ranges from 40 degrees to 220 degrees; and

at least one focus image projector for rendering a focus image, wherein an angular width of a projection of the rendered focus
image ranges from 5 degrees to 60 degrees,

at least one projection surface;
an image steering unit;
means for detecting a gaze direction; and
a processor coupled in communication with the image steering unit and the means for detecting the gaze direction, wherein
the processor is configured to:

(a) receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution
and the focus image having a second resolution, wherein:

a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
the focus image substantially corresponds to the region of visual accuracy of the input image, and
the second resolution is higher than the first resolution;
(c) render the context image at the at least one context display or at the at least one projection surface via the at least
one context image projector;

(d) render the focus image at the at least one projection surface via the at least one focus image projector; and
(e) control the image steering unit to adjust a location of the projection of the rendered focus image on the at least one
projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked
region of the rendered context image on the at least one projection surface,

wherein the processor is configured to perform (c), (d) and (e) substantially simultaneously, and an arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image
to create a visual scene.

US Pat. No. 9,989,774

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING OPTICAL COMBINERS AND CONTEXT AND FOCUS IMAGE RENDERERS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context image renderer for rendering a context image;
at least one focus image renderer for rendering a focus image;
an exit optical element; and
at least one optical combiner for optically combining a projection of the rendered context image with a projection of the rendered focus image to create a visual scene, an angular width of the projection of the rendered context image being greater than an angular width of the projection of the rendered focus image, the at least one optical combiner comprising:
a first semi-transparent reflective element having a first side and a second side, the first side obliquely facing the exit optical element, the second side facing the at least one context image renderer; and
a second semi-transparent reflective element obliquely facing the first side of the first semi-transparent reflective element,
wherein the at least one context image renderer is arranged in a manner that the projection of the rendered context image emanating therefrom passes through the first semi-transparent reflective element towards the exit optical element,
and wherein the at least one focus image renderer is arranged in a manner that the projection of the rendered focus image emanating therefrom passes through the first semi-transparent reflective element towards the second semi-transparent reflective element, from where the projection of the rendered focus image is reflected towards the first semi-transparent reflective element, and is then reflected from the first side of the first semi-transparent reflective element towards the exit optical element.

US Pat. No. 9,711,072

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING FOCUS AND CONTEXT DISPLAYS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context display for rendering a context image, wherein an angular width of a projection of the rendered context
image ranges from 40 degrees to 220 degrees;

at least one focus display for rendering a focus image, wherein an angular width of a projection of the rendered focus image
ranges from 5 degrees to 60 degrees;

at least one optical combiner for combining the projection of the rendered context image with the projection of the rendered
focus image to create a visual scene;

means for detecting a gaze direction; and
a processor coupled in communication with the at least one optical combiner and the means for detecting the gaze direction,
wherein the processor is configured to:

(a) receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution
and the focus image having a second resolution, wherein:

a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
the focus image substantially corresponds to the region of visual accuracy of the input image, and
the second resolution is higher than the first resolution; and
(c) render the context image at the at least one context display and the focus image at the at least one focus display substantially
simultaneously, whilst controlling the at least one optical combiner to combine the projection of the rendered context image
with the projection of the rendered focus image in a manner that the projection of the rendered focus image substantially
overlaps the projection of the masked region of the rendered context image.

US Pat. No. 9,990,037

DISPLAY APPARATUS AND METHOD USING PORTABLE ELECTRONIC DEVICE

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one focus display;
a processor coupled to the at least one focus display, wherein the processor is configured to render a focus image at the at least one focus display, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees,
wherein the display apparatus is arranged to be detachably attached to a portable electronic device and to be communicably coupled with the portable electronic device, and wherein a processor of the portable electronic device is configured to render a context image at a display of the portable electronic device, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees,
means for detecting a gaze direction, wherein the processor of the display apparatus is configured to communicate the detected gaze direction to the processor of the portable electronic device;
wherein the processor of the portable electronic device is configured to:
(a) obtain an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution, wherein:
(i) a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
(ii) the focus image substantially corresponds to the region of visual accuracy of the input image, and
(iii) the second resolution is higher than the first resolution; and
(c) communicate the focus image to the processor of the display apparatus; and
wherein the projection of the rendered context image is combined with the projection of the rendered focus image to create a visual scene.

US Pat. No. 9,983,413

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING CONTEXT AND FOCUS IMAGE RENDERERS AND OPTICAL COMBINERS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context image renderer for rendering a context image;
at least one focus image renderer for rendering a focus image;
an exit optical element; and
at least one optical combiner for optically combining a projection of the rendered context image with a projection of the rendered focus image to create a visual scene, an angular width of the projection of the rendered context image being greater than an angular width of the projection of the rendered focus image, the at least one optical combiner comprising:
a first semi-transparent reflective element having a first side and a second side, the first side facing the exit optical element; and
a second semi-transparent reflective element facing the second side of the first semi-transparent reflective element,
wherein the at least one context image renderer is arranged in a manner that the projection of the rendered context image emanating therefrom is incident upon the first side of the first semi-transparent reflective element and reflected towards the exit optical element therefrom, and
wherein the at least one focus image renderer is arranged in a manner that the projection of the rendered focus image emanating therefrom is incident upon the second side of the first semi-transparent reflective element and reflected towards the second semi-transparent reflective element therefrom, and is then reflected from the second semi-transparent reflective element towards the first semi-transparent reflective element, from where the projection of the rendered focus image is allowed to pass through towards the exit optical element.

US Pat. No. 9,905,143

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING IMAGE RENDERERS AND OPTICAL COMBINERS

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one context image renderer for rendering a context image, wherein an angular width of a projection of the rendered
context image ranges from 40 degrees to 220 degrees;

at least one focus image renderer for rendering a focus image, wherein an angular width of a projection of the rendered focus
image ranges from 5 degrees to 60 degrees; and

at least one optical combiner for combining the projection of the rendered context image with the projection of the rendered
focus image to create a visual scene,

means for detecting a gaze direction; and
a processor coupled in communication with the at least one optical combiner and the means for detecting the gaze direction,
wherein the processor is configured to:

(a) receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution
and the focus image having a second resolution, wherein:

(i) the focus image substantially corresponds to the region of visual accuracy of the input image, and
(ii) the second resolution is higher than the first resolution; and
(c) render the context image at the at least one context image renderer and the focus image at the at least one focus image
renderer substantially simultaneously, whilst controlling the at least one optical combiner to combine the projection of the
rendered context image with the projection of the rendered focus image in a manner that the projection of the rendered focus
image substantially overlaps a projection of a region of the rendered context image that substantially corresponds to the
region of visual accuracy of the input image,
wherein the visual scene is created in a manner that at least two different optical distances are provided therein.

US Pat. No. 10,082,672

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING ELECTROMECHANICAL FACEPLATE

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one focus display;
at least one electromechanical faceplate detachably attached to an outer surface of the display apparatus; and
a processor coupled to the at least one focus display and to the at least one electromechanical faceplate, wherein the processor is configured to render a focus image at the at least one focus display,
wherein the display apparatus is arranged to be detachably attached to a portable electronic device, the at least one electromechanical faceplate comprising a wireless communication interface, wherein the wireless communication interface is to be employed to communicably couple the portable electronic device and the display apparatus, and wherein a processor of the portable electronic device is configured to render a context image at a display of the portable electronic device, an angular width of a projection of the rendered context image being greater than an angular width of a projection of the rendered focus image,
further wherein the projection of the rendered context image is optically combined with the projection of the rendered focus image to create a visual scene.

US Pat. No. 10,371,998

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING POLARIZERS AND OPTICAL COMBINERS

Varjo Technologies Oy, H...

1. A display apparatus (100) comprising:
an image source (102);
a processor (104) coupled to the image source (102), wherein the processor (104) is configured to render an image at the image source (102);
at least one optical combiner (106) for combining a projection of the rendered image with a projection of a real world image, wherein the at least one optical combiner (106) comprises a reflective element for substantially reflecting the projection of the rendered image towards a direction in which the projection of the real world image is directed;
a first polarizing element (108) for polarizing the projection of the real world image at a first polarization orientation, wherein the first polarizing element (108) is positioned on a first side of the at least one optical combiner (106) upon which the projection of the real world image is incident, whilst the reflective element is positioned on a second side of the at least one optical combiner (106);
a second polarizing element (110) facing the second side of the at least one optical combiner (106), the second polarizing element (110) having polarization properties for reducing ambient light within the display apparatus, wherein the polarization properties of the second polarizing element (110) are to be adjusted with respect to the first polarization orientation of the first polarizing element (108), and wherein the second polarizing element is movable by an actuator to position the combined projection on eyes of a user; and
a third polarizing element (114) positioned in an optical path between the image source (102) and the at least one optical combiner (106), the third polarizing element (114) having polarization properties for suppressing artifacts of the image rendered at the source.

US Pat. No. 10,564,429

GAZE-TRACKING SYSTEM USING ILLUMINATORS EMITTING DIFFERENT WAVELENGTHS

VARJO TECHNOLOGIES OY, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
a plurality of illuminators for emitting light pulses to illuminate a user's eye when the head-mounted display apparatus is worn by the user, the plurality of illuminators comprising at least a first illuminator and a second illuminator, a first wavelength of first light pulses emitted by the first illuminator being longer than a second wavelength of second light pulses emitted by the second illuminator;
at least one lens positioned on an optical path of reflections of the first and second light pulses from the user's eye, wherein the at least one lens has no chromatic-aberration correction, and the at least one lens has a second focal plane for the second wavelength and a first focal plane for the first wavelength that is farther away from the at least one lens than the second focal plane;
at least one camera for capturing an image of the reflections of the first and second light pulses, wherein,
the image is representative of a position of the reflections on an image plane of the at least one camera; and
the image of the reflections of the first light pulses and the image of the reflections of the second light pulses are both in focus on the camera image plane due to the at least one lens having no chromatic-aberration correction; and
a processor coupled in communication with the plurality of illuminators and the at least one camera, wherein the processor is configured to control operations of the plurality of illuminators and the at least one camera, and to process the captured image to detect a gaze direction of the user.
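
The key point is that, because the lens is deliberately left without chromatic-aberration correction, its focal length differs between the two wavelengths, so two different object distances are simultaneously in focus on the one camera image plane. A short thin-lens calculation illustrating this is given below; the focal lengths and distances are illustrative assumptions, not values from the patent.

```python
# Thin-lens sketch: with a fixed sensor (image) distance d_i, a lens whose
# focal length depends on wavelength (no chromatic-aberration correction)
# brings objects at two different depths into focus simultaneously.
# The focal lengths and distances below are illustrative assumptions (millimetres).

def in_focus_object_distance(f_mm, d_i_mm):
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i  ->  d_o = 1 / (1/f - 1/d_i)
    return 1.0 / (1.0 / f_mm - 1.0 / d_i_mm)

d_i = 12.0        # fixed distance from the lens to the camera image plane
f_long = 10.0     # focal length at the first (longer) wavelength
f_short = 9.5     # focal length at the second (shorter) wavelength

print(in_focus_object_distance(f_long, d_i))   # ~60.0 mm in focus at wavelength 1
print(in_focus_object_distance(f_short, d_i))  # ~45.6 mm in focus at wavelength 2
```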

US Pat. No. 10,602,132

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING LIGHT SOURCE AND CONTROLLABLE SCANNING MIRROR

VARJO TECHNOLOGIES OY, H...

1. A display apparatus comprising:
at least one light source per eye, the at least one light source being operable to emit a substantially collimated and monochromatic light beam;
at least one controllable scanning mirror per eye, the at least one controllable scanning mirror being arranged to reflect the light beam towards a projection surface, wherein the at least one scanning mirror is to be controlled to change a direction in which the light beam is reflected;
means for detecting a gaze direction of a user, wherein the gaze direction is to be detected when the display apparatus in operation is worn by the user; and
a processor coupled in communication with the at least one light source, the at least one controllable scanning mirror and the means for detecting the gaze direction, wherein the processor is configured to:
(a) obtain an input image and determine, based upon the detected gaze direction of the user, a region of visual accuracy of the input image;
(b) generate pixel data corresponding to at least a first region and a second region of the input image, wherein the second region substantially corresponds to the region of visual accuracy of the input image or a part of the region of visual accuracy, while the first region substantially corresponds to a remaining region of the input image or a part of the remaining region, wherein the first region is to have a first resolution, while the second region is to have a second resolution, the second resolution being higher than the first resolution; and
(c) control the at least one light source and the at least one controllable scanning mirror to draw the first region and the second region of the input image over the projection surface,
wherein a first scanning pattern to be swept by the at least one controllable scanning mirror for drawing the first region is different from a second scanning pattern to be swept by the at least one controllable scanning mirror for drawing the second region, wherein the second scanning pattern is to have at least one additional ripple function in a direction that is substantially perpendicular to a current scanning direction.
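
The distinguishing step is (c) together with the final wherein clause: the coarse first region is drawn with a plain sweep, while the second (foveal) region is drawn with the same sweep plus a small ripple perpendicular to the current scanning direction, which packs more drawn lines into that region. The NumPy sketch below generates the two mirror trajectories; the line counts, ripple amplitude and ripple frequency are illustrative assumptions.

```python
import numpy as np

def raster_pattern(n_lines, samples_per_line):
    """First scanning pattern: plain horizontal sweeps at evenly spaced heights."""
    t = np.linspace(0.0, 1.0, samples_per_line)
    xs, ys = [], []
    for i in range(n_lines):
        x = t if i % 2 == 0 else t[::-1]      # boustrophedon sweep
        y = np.full_like(t, i / max(n_lines - 1, 1))
        xs.append(x); ys.append(y)
    return np.concatenate(xs), np.concatenate(ys)

def rippled_pattern(n_lines, samples_per_line, ripple_amp=0.01, ripple_freq=40):
    """Second scanning pattern: the same sweeps plus a small ripple perpendicular
    to the current (horizontal) scanning direction, i.e. added to y."""
    x, y = raster_pattern(n_lines, samples_per_line)
    ripple = ripple_amp * np.sin(2 * np.pi * ripple_freq * x)
    return x, y + ripple

x1, y1 = raster_pattern(n_lines=8, samples_per_line=200)    # coarse, first region
x2, y2 = rippled_pattern(n_lines=8, samples_per_line=200)   # denser coverage, second region
print(x1.shape, y2.min(), y2.max())
```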

US Pat. No. 10,592,739

GAZE-TRACKING SYSTEM AND METHOD OF TRACKING USER'S GAZE

VARJO TECHNOLOGIES OY, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
at least one illuminator for emitting light pulses;
at least one first optical element comprising a plurality of micro-to-nano-sized components, the plurality of micro-to-nano-sized components being shaped and arranged relative to each other in a manner that, when incident upon the plurality of micro-to-nano-sized components, a structure of the light pulses is modified to produce structured light, wherein the produced structured light is to be used to illuminate a user's eye when the head-mounted display apparatus is worn by the user;
at least one camera for capturing an image of reflections of the structured light from the user's eye, wherein the plurality of micro-to-nano-sized components cause the reflections of the structured light to appear as a plurality of glints, each of the plurality of glints being symmetrical about only one axis;
wherein the plurality of glints are represented in the captured image as positioned within a circular region of a surface of the user's eye, the plurality of glints being arranged into at least two columns, and wherein the plurality of glints are in a form of six characters that are arranged into two columns of three characters each, the characters being V-shaped;
wherein the image is representative of a form of the reflections and a position of the reflections on an image plane of the at least one camera; and
a processor coupled in communication with the at least one illuminator and the at least one camera, wherein the processor is configured to control the at least one illuminator and the at least one camera, and to process the captured image to detect a gaze direction of the user.

US Pat. No. 10,122,990

IMAGING SYSTEM AND METHOD OF PRODUCING CONTEXT AND FOCUS IMAGES

Varjo Technologies Oy, H...

1. An imaging system comprising:
at least one imaging sensor per eye of a user; and
a processor coupled to the at least one imaging sensor, the processor being configured to control the at least one imaging sensor to capture at least one image of a real world environment,
wherein the processor is arranged to be communicably coupled with a display apparatus, the display apparatus comprising:
an optical combiner;
means for tracking a gaze direction of the user,
at least one context image renderer for rendering a context image of the at least one image, and
at least one focus image renderer for rendering a focus image of the at least one image, separate from the context image,
further wherein the processor is configured to:
receive, from the display apparatus, information indicative of the gaze direction of the user;
determine a region of visual accuracy of the at least one image, based upon the gaze direction of the user;
process the at least one image to:
generate the context image at a first resolution by binning pixels of the at least one image, and
generate the focus image at a second resolution higher than the first resolution by demosaicing the at least one image,
wherein, when processing the at least one image, the processor is configured to crop the at least one image to generate the focus image in a manner that the focus image substantially corresponds to the region of visual accuracy of the at least one image; and
communicate the generated context image and the generated focus image to the display apparatus,
wherein the optical combiner of the display apparatus is operable to optically combine the generated context image and the generated focus image for display to the user.
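
The two processing paths named in the claim are pixel binning for the low-resolution context image and demosaicing plus cropping for the high-resolution focus image. The NumPy sketch below illustrates the binning and the gaze-centred crop; demosaicing of raw Bayer data is assumed to happen upstream and is omitted, and the binning factor, crop size and gaze point are illustrative assumptions.

```python
import numpy as np

def bin_pixels(image, factor=2):
    """Context path: average (bin) factor x factor pixel blocks, lowering resolution."""
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(image.dtype)

def crop_focus(image, gaze_xy, size=256):
    """Focus path: crop a full-resolution window around the region of visual accuracy.
    (In the claim this path also involves demosaicing raw sensor data, omitted here.)"""
    gx, gy = gaze_xy
    half = size // 2
    return image[max(0, gy - half):gy + half, max(0, gx - half):gx + half].copy()

captured = np.random.randint(0, 256, (1200, 1600, 3), dtype=np.uint8)
context = bin_pixels(captured, factor=4)      # first, lower resolution
focus = crop_focus(captured, (800, 600))      # second, higher resolution
print(context.shape, focus.shape)
```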

US Pat. No. 10,565,797

SYSTEM AND METHOD OF ENHANCING USER'S IMMERSION IN MIXED REALITY MODE OF DISPLAY APPARATUS

Varjo Technologies Oy, H...

1. A system for enhancing a user's immersion in a mixed reality mode of a head-mounted display apparatus, the system being at least communicably coupled to the head-mounted display apparatus, the system comprising:
at least one camera; and
a processor communicably coupled to the at least one camera, wherein the processor is configured to:
(i) control the at least one camera to capture a sequence of images of a given real-world environment;
(ii) analyze the sequence of images to identify a spatial geometry of real objects present in the given real-world environment;
(iii) analyze the sequence of images to identify material categories to which the real objects or their portions belong, wherein each real object or its portion belongs to its corresponding material category;
(iv) process the sequence of images to generate a sequence of mixed-reality images, based upon the spatial geometry and the material category of at least one real object from amongst the real objects, wherein the at least one real object is to be represented by at least one virtual object in the sequence of mixed-reality images, the sequence of mixed-reality images is to be generated in a manner that a visual behaviour of the at least one virtual object in the sequence of mixed-reality images emulates at least one material property associated with the material category of the at least one real object;
(v) render, at the head-mounted display apparatus, the sequence of mixed-reality images;
(vi) generate an audio signal that is representative of an acoustic behaviour of the at least one virtual object, based upon the spatial geometry and the material category of the at least one real object, wherein the acoustic behaviour of the at least one virtual object is to emulate at least one material property associated with the material category of the at least one real object; and
(vii) play the audio signal, at the head-mounted display apparatus, substantially simultaneously with the rendering of the sequence of mixed-reality images.

US Pat. No. 10,602,033

DISPLAY APPARATUS AND METHOD USING IMAGE RENDERERS AND OPTICAL COMBINERS

VARJO TECHNOLOGIES OY, H...

1. A display apparatus comprising:
at least one context image renderer for rendering a context image, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees;
at least one focus image renderer for rendering a focus image, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees;
at least one first optical combiner for combining the projection of the rendered context image with the projection of the rendered focus image to form a combined projection; and
at least one second optical combiner for combining the combined projection with a projection of a real world image, wherein the at least one second optical combiner is switchable to different levels of transparency.

US Pat. No. 10,382,699

IMAGING SYSTEM AND METHOD OF PRODUCING IMAGES FOR DISPLAY APPARATUS

Varjo Technologies Oy, H...

1. An imaging system comprising:
at least one focusable camera for capturing at least one image of a given real-world scene;
means for generating a depth map or a voxel map of the given real-world scene, the depth map or voxel map comprising a grid of pixels, each pixel having a monochromatic gray color indicating an optical depth of at least one object within the real world scene; and
a processor coupled to the at least one focusable camera and the means for generating the depth map or the voxel map, wherein the processor is arranged to be communicably coupled with a display apparatus, the display apparatus comprising means for tracking a gaze direction of a user,
wherein the processor is configured to:
receive, from the display apparatus, information indicative of the gaze direction of the user;
map the gaze direction of the user to the depth map or the voxel map to determine an optical depth of a region of interest in the given real-world scene; and
adjust a blur effect associated with at least one object that is visible in the at least one image and lies outside the region of interest in the given real-world scene, based upon an optical depth between the at least one object and the region of interest;
wherein the display apparatus further comprises at least one context image renderer for rendering a context image and at least one focus image renderer for rendering a focus image, and wherein the processor is further configured to:
process the at least one image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution, the second resolution being higher than the first resolution, wherein the focus image is to be generated from a region of the at least one image that substantially corresponds to the region of interest in the given real-world scene; and
communicate the generated context image and the generated focus image to the display apparatus for rendering thereat;
wherein to generate the focus image, the processor is further configured to crop the focus image to a predefined shape determined according to a shape of the region of interest in the real-world scene.
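
In effect, the gaze direction is looked up in the depth or voxel map to find the optical depth of the region of interest, and objects at other depths are blurred in proportion to their depth difference from it. The sketch below, which assumes OpenCV is available for Gaussian blurring, illustrates one way to do that; the blur scaling, clamp value and random test data are illustrative assumptions rather than the patented method.

```python
import numpy as np
import cv2  # assumed available for Gaussian blurring

def depth_aware_blur(image, depth_map, gaze_xy, sigma_per_metre=2.0, max_sigma=8.0):
    """Blur pixels in proportion to how far (in optical depth) they lie from
    the depth at the gaze point; the in-focus region of interest stays sharp."""
    gx, gy = gaze_xy
    roi_depth = float(depth_map[gy, gx])           # optical depth at the gaze point

    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=max_sigma)
    depth_diff = np.abs(depth_map - roi_depth)
    # Blend weight 0 (sharp) near the ROI depth, up to 1 (fully blurred) far from it.
    weight = np.clip(depth_diff * sigma_per_metre / max_sigma, 0.0, 1.0)[..., None]
    return (image * (1.0 - weight) + blurred * weight).astype(image.dtype)

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 5.0, (480, 640)).astype(np.float32)  # metres
out = depth_aware_blur(img, depth, gaze_xy=(320, 240))
print(out.shape)
```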

US Pat. No. 10,488,917

GAZE-TRACKING SYSTEM AND METHOD OF TRACKING USER'S GAZE USING REFLECTIVE ELEMENT

VARJO TECHNOLOGIES OY, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
at least one illuminator, the at least one illuminator being operable to emit light pulses, wherein the light pulses are to be used to illuminate a user's eye when the head-mounted display apparatus is worn by the user;
at least one camera, the at least one camera being operable to capture at least one image of reflections of the light pulses from the user's eye;
at least one reflective element, wherein the at least one reflective element is to be arranged on an optical path of the reflections of the light pulses, such that when incident upon the at least one reflective element, the reflections of the light pulses are reflected towards the at least one camera;
at least one actuator associated with the at least one reflective element, wherein the at least one actuator is to be employed to move the at least one reflective element; and
a processor coupled in communication with the at least one illuminator, the at least one camera and the at least one actuator, the processor being configured to:
(i) process the at least one image to detect a gaze direction of the user; and
(ii) control the at least one actuator to adjust, based upon the detected gaze direction of the user, a position of the at least one reflective element;
wherein the at least one camera is operable to capture at least two images of the reflections of the light pulses, and wherein the processor is configured to:
control the at least one actuator to move the at least one reflective element to a first position;
control the at least one camera to capture a first image of the reflections of the light pulses, when the at least one reflective element is arranged at the first position;
control the at least one actuator to move the at least one reflective element to a second position, wherein the second position is different from the first position; and
control the at least one camera to capture a second image of the reflections of the light pulses, when the at least one reflective element is arranged at the second position.

US Pat. No. 10,701,342

IMAGING SYSTEM AND METHOD FOR PRODUCING IMAGES USING CAMERAS AND PROCESSOR

Varjo Technologies Oy, H...

1. An imaging system for producing images to be displayed to a user via a head-mounted display apparatus in real or near real time, the head-mounted display apparatus comprising means for detecting a gaze direction of the user when the head-mounted display apparatus, in operation, is worn by the user, the imaging system comprising:
a first outer camera and a second outer camera, the first outer camera and the second outer camera being arranged in a manner that a distance between the first outer camera and the second outer camera is equal to or greater than a predefined distance;
at least one inner camera, the at least one inner camera being arranged at a substantially similar distance between the first outer camera and the second outer camera; and
a processor coupled to the first outer camera, the second outer camera and the at least one inner camera, the processor being communicably coupled to said means for detecting the gaze direction of the user, wherein the processor is configured to:
(i) obtain a given inter-pupillary distance of the user with respect to the user's gaze at infinity;
(ii) receive, from said means, information indicative of the detected gaze direction of the user;
(iii) control the first outer camera, the second outer camera and the at least one inner camera to capture a first outer image, a second outer image and at least one inner image of a given real-world scene, respectively, wherein the first outer image, the second outer image and the at least one inner image are to be captured substantially simultaneously; and
(iv) process the first outer image and the at least one inner image to generate a first view of the given real-world scene to be displayed to a left eye of the user, and process the second outer image and the at least one inner image to generate a second view of the given real-world scene to be displayed to a right eye of the user, based upon the given inter-pupillary distance and the detected gaze direction of the user.

US Pat. No. 10,698,482

GAZE TRACKING USING NON-CIRCULAR LIGHTS

Varjo Technologies Oy, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
a plurality of non-circular light sources that, in operation, emit light for illuminating a user's eye when the head-mounted display apparatus in operation is worn by the user;
at least one camera; and
a processor configured to:
control the at least one camera to capture an image of the user's eye and reflections of the plurality of non-circular light sources from the user's eye;
identify at least one of the plurality of non-circular light sources from where at least one of the reflections originated, based upon shapes, rotational orientations and relative positions of the reflections of the plurality of non-circular light sources, to differentiate said reflections from visual artifacts; and
detect a gaze direction of the user based upon a relative position of a pupil of the user's eye with respect to the reflections of the plurality of non-circular light sources;
wherein at least two of the plurality of non-circular light sources have a same non-circular shape and the at least two of the plurality of non-circular light sources have different rotational orientations.
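
Because the light sources share a non-circular shape but differ in rotational orientation, each reflection can be attributed to a specific source by comparing its detected shape and orientation against the known layout, and reflections that match nothing are treated as visual artifacts. The sketch below shows that matching step; the triangle shape, the source table and the angular tolerance are illustrative assumptions, and detection of each reflection's shape and orientation from the camera image is assumed to happen upstream.

```python
# Known layout of the non-circular light sources: same shape, different rotations
# (illustrative values, not the patented arrangement).
LIGHT_SOURCES = [
    {"id": 0, "shape": "triangle", "orientation_deg": 0.0,   "position": (-1.0,  1.0)},
    {"id": 1, "shape": "triangle", "orientation_deg": 90.0,  "position": ( 1.0,  1.0)},
    {"id": 2, "shape": "triangle", "orientation_deg": 180.0, "position": ( 1.0, -1.0)},
    {"id": 3, "shape": "triangle", "orientation_deg": 270.0, "position": (-1.0, -1.0)},
]

def identify_reflection(shape, orientation_deg, angle_tol_deg=20.0):
    """Return the light source whose shape and rotational orientation best match
    the detected reflection, or None if it looks like a visual artifact."""
    best, best_err = None, angle_tol_deg
    for src in LIGHT_SOURCES:
        if src["shape"] != shape:
            continue
        err = abs((orientation_deg - src["orientation_deg"] + 180.0) % 360.0 - 180.0)
        if err < best_err:
            best, best_err = src, err
    return best

print(identify_reflection("triangle", 87.0))   # matches source 1
print(identify_reflection("circle", 10.0))     # None -> treated as an artifact
```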

US Pat. No. 10,665,034

IMAGING SYSTEM, DISPLAY APPARATUS AND METHOD OF PRODUCING MIXED-REALITY IMAGES

Varjo Technologies Oy, H...

1. An imaging system for producing mixed-reality images for a display apparatus, the imaging system comprising at least one camera and a processor communicably coupled to the at least one camera, wherein the processor is configured to:
control the at least one camera to capture a given image of a real-world environment;
analyze the given image to identify at least one surface in the real-world environment that displays visual content;
compare the visual content displayed in the given image with a reference image of the visual content to determine a size, a position and an orientation of the at least one surface with respect to the at least one camera;
process the reference image of the visual content, based on the size, the position and the orientation of the at least one surface, to generate a processed image of the visual content; and
replace the visual content displayed in the given image with the processed image of the visual content to generate a given mixed-reality image, wherein a resolution of the processed image of the visual content is higher than a resolution of the visual content displayed in the given image.
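
Determining the surface's size, position and orientation and then re-rendering the reference content onto it is, in practice, a planar warp. The sketch below, assuming OpenCV is available, estimates a homography from the reference image to the detected surface corners, warps the high-resolution reference and composites it over the captured frame; the corner coordinates and image sizes are illustrative assumptions, and detection of the surface and its corners is assumed to happen upstream.

```python
import numpy as np
import cv2  # assumed available for homography estimation and warping

def replace_screen_content(captured, reference, surface_corners_xy):
    """Warp the high-resolution reference image onto the detected surface and
    composite it over the captured frame to form the mixed-reality image."""
    h_ref, w_ref = reference.shape[:2]
    ref_corners = np.float32([[0, 0], [w_ref, 0], [w_ref, h_ref], [0, h_ref]])
    dst_corners = np.float32(surface_corners_xy)      # detected in the captured image

    H, _ = cv2.findHomography(ref_corners, dst_corners)
    warped = cv2.warpPerspective(reference, H, (captured.shape[1], captured.shape[0]))

    mask = np.zeros(captured.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, dst_corners.astype(np.int32), 255)
    out = captured.copy()
    out[mask == 255] = warped[mask == 255]
    return out

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
reference = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
corners = [(400, 200), (880, 220), (860, 520), (420, 500)]   # illustrative detection
mixed = replace_screen_content(frame, reference, corners)
print(mixed.shape)
```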

US Pat. No. 10,495,895

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING POLARIZERS

VARJO TECHNOLOGIES OY, H...

1. A display apparatus comprising:
an image source (102) for rendering an image;
a projection screen (108) facing a direction that is at a predefined angle to a direction in which the rendered image is to be projected from the image source (102);
an exit optical element (110) facing the projection screen (108) through which a projection is directed;
a first polarizing element (104) facing the image source (102), the first polarizing element (104) being arranged to polarize a projection of the rendered image at a first polarization orientation;
a first optical element (106a) arranged to reflect the polarized projection of the rendered image towards the projection screen (108), wherein the first polarizing element is interposed between the image source and the first optical element, and wherein the projection screen (108) is arranged to unpolarize the polarized projection, whilst reflecting the unpolarized projection towards the exit optical element (110); and
a second polarizing element (106b) positioned in an optical path between the projection screen (108) and the exit optical element (110), the second polarizing element (106b) being arranged to polarize the unpolarized projection at a second polarization orientation, wherein the second polarization orientation is different from the first polarization orientation.

US Pat. No. 10,395,111

GAZE-TRACKING SYSTEM AND METHOD

VARJO TECHNOLOGIES OY, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
a first set of illuminators for emitting infrared light at a predefined angle to a view direction of a user's eye;
at least one first optical element for reflecting the infrared light emitted by the first set of illuminators towards the user's eye to illuminate the user's eye when the head-mounted display apparatus is worn by the user;
at least one photo sensor for sensing positions of reflections of the infrared light emitted by the first set of illuminators from the user's eye in relation to the at least one photo sensor;
at least one actuator for moving at least one of:
(i) the first set of illuminators for emitting infrared light,
(ii) the at least one photo sensor for sensing positions of reflections of the infrared light emitted by the first set of illuminators; and
a processor coupled in communication with the first set of illuminators, the at least one photo sensor and the at least one actuator, wherein the processor is configured to collect sensor data from the at least one photo sensor and process the sensor data to detect a change in the gaze direction of the user, and to control the at least one actuator to adjust, based upon the detected change in the gaze direction of the user, a position of the at least one of:
(i) the first set of illuminators to maintain the emitted infrared light at the predefined angle to the view direction of the user's eye,
(ii) the at least one photo sensor to maintain the relative positions of the reflections of the infrared light emitted by the first set of illuminators from the user's eye and the at least one photo sensor.

US Pat. No. 10,764,567

DISPLAY APPARATUS AND METHOD OF DISPLAYING

Varjo Technologies Oy, H...

1. A display apparatus comprising:
means for detecting a gaze direction of a user with respect to an image plane;
a processor coupled to said means, wherein the processor or an image source communicably coupled to the processor is configured to process an input image, based upon the detected gaze direction, to generate a first image and a second image;
a first image renderer per eye, the first image renderer being employed to render the first image;
a second image renderer per eye, the second image renderer being employed to render the second image;
an optical combiner;
a first array of micro-prisms arranged in a proximity of an image rendering surface of the second image renderer, wherein the micro-prisms of the first array split light emanating from pixels of the second image renderer into a plurality of directions to produce a plurality of projections of the second image, and wherein each micro-prism of the first array splits light emanating from at least one corresponding pixel into said plurality of directions;
an optical element arranged on an optical path between the first array of micro-prisms and the optical combiner, the optical element being employed to direct the plurality of projections of the second image towards the optical combiner; and
an optical shutter arranged on said optical path, between the optical element and the optical combiner, wherein the optical shutter selectively allows a given portion of the plurality of projections of the second image to pass through towards the optical combiner, whilst blocking a remaining portion of the plurality of projections of the second image,
wherein the optical combiner optically combines a projection of the first image with the given portion of the plurality of projections of the second image, to produce on the image plane an output image having a spatially-variable angular resolution,
further wherein the processor controls the optical shutter based upon the detected gaze direction, whilst the first image and the second image are being rendered;
wherein the processor is further configured to:
detect whether or not the gaze direction of the user corresponds exactly to a region of the image plane whereat any one of the plurality of projections of the second image would be incident;
render the second image via the second image renderer, when the gaze direction of the user corresponds exactly to said region; and
further process the second image prior to rendering, when the gaze direction of the user does not correspond exactly to said region;
wherein when further processing the second image, the processor is configured to:
divide the second image into two portions when the gaze direction corresponds to a region of the image plane whereat two of the plurality of projections of the second image would be incident, and swap positions of the two portions; or
divide the second image into four portions when the gaze direction of the user corresponds to a region of the image plane whereat four of the plurality of projections of the second image would be incident, and diagonally swap positions of the four portions.
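
The final wherein clause describes a simple geometric remapping of the second image before rendering: split it into two halves and swap them, or split it into four quadrants and swap them diagonally, depending on where the gaze lands relative to the plurality of projections. The NumPy sketch below implements those two swaps; splitting along the image midlines is an assumption for illustration.

```python
import numpy as np

def swap_two_portions(image, axis=1):
    """Divide the image into two halves along the given axis and swap them."""
    a, b = np.array_split(image, 2, axis=axis)
    return np.concatenate([b, a], axis=axis)

def swap_four_portions_diagonally(image):
    """Divide the image into four quadrants and swap them diagonally
    (top-left <-> bottom-right, top-right <-> bottom-left)."""
    top, bottom = np.array_split(image, 2, axis=0)
    tl, tr = np.array_split(top, 2, axis=1)
    bl, br = np.array_split(bottom, 2, axis=1)
    new_top = np.concatenate([br, bl], axis=1)
    new_bottom = np.concatenate([tr, tl], axis=1)
    return np.concatenate([new_top, new_bottom], axis=0)

second_image = np.arange(16).reshape(4, 4)
print(swap_two_portions(second_image))
print(swap_four_portions_diagonally(second_image))
```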

US Pat. No. 10,771,774

DISPLAY APPARATUS AND METHOD OF PRODUCING IMAGES HAVING SPATIALLY-VARIABLE ANGULAR RESOLUTIONS

Varjo Technologies Oy, H...

1. A display apparatus for producing an image having a spatially-variable angular resolution on an image plane, the display apparatus comprising:
an image renderer per eye;
at least one optical element arranged on an optical path between the image renderer and the image plane, the at least one optical element comprising at least a first optical portion and a second optical portion having different optical properties with respect to magnification; and
a processor coupled to the image renderer, wherein the processor or an image source communicably coupled to the processor is configured to generate a warped image based upon the optical properties of the first optical portion and the second optical portion,
wherein the processor is configured to render the warped image via the image renderer, wherein projections of a first portion and a second portion of the warped image are to be differently magnified by the first optical portion and the second optical portion of the at least one optical element, respectively, to produce the image on the image plane in a manner that the produced image appears de-warped to a user, and
wherein, when generating the warped image, the processor or the image source is configured to adjust an intensity of the first portion and the second portion of the warped image in a manner that, upon being differently magnified, the projections of the first portion and the second portion of the warped image produce the image on the image plane that appears to have a uniform brightness across the image.
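
The intensity adjustment follows from the fact that a portion magnified by a factor m spreads its light over roughly m squared times the area, so its projected brightness drops by that factor; pre-scaling each portion's intensities by the square of its relative magnification makes the final image appear uniformly bright. The NumPy sketch below applies that pre-compensation to a two-portion warped image; the left/right split and the magnification values are illustrative assumptions.

```python
import numpy as np

def precompensate_brightness(warped, split_col, mag_first, mag_second):
    """Scale the two portions of the warped image so that, after being
    magnified by mag_first and mag_second respectively, the produced image
    appears uniformly bright. Light magnified by m spreads over ~m^2 the area,
    so each portion is pre-scaled by (m / m_max)^2."""
    out = warped.astype(np.float32)
    m_max = max(mag_first, mag_second)
    out[:, :split_col] *= (mag_first / m_max) ** 2
    out[:, split_col:] *= (mag_second / m_max) ** 2
    return np.clip(out, 0, 255).astype(np.uint8)

warped_image = np.full((480, 640, 3), 200, dtype=np.uint8)
# Illustrative values: the first (left) portion is magnified 1x, the second 2x.
compensated = precompensate_brightness(warped_image, split_col=320,
                                        mag_first=1.0, mag_second=2.0)
print(compensated[0, 0, 0], compensated[0, 639, 0])   # 50 vs 200 before projection
```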

US Pat. No. 10,614,734

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING CONTROLLABLE SCANNING MIRROR

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one image renderer;
at least one light source per eye, the at least one light source being operable to emit a substantially collimated and monochromatic light beam;
at least one controllable scanning mirror per eye, the at least one controllable scanning mirror being arranged to reflect the light beam towards a projection surface, wherein the at least one scanning mirror is to be controlled to change a direction in which the light beam is reflected;
at least two actuators associated with the at least one controllable scanning mirror;
means for detecting a gaze direction of a user, wherein the gaze direction is to be detected when the display apparatus in operation is worn by the user; and
a processor coupled in communication with the at least one image renderer, the at least one light source, the at least one controllable scanning mirror, the at least two actuators and the means for detecting the gaze direction, wherein the processor is configured to:
(a) obtain an input image and determine, based upon the detected gaze direction of the user, a region of visual accuracy of the input image;
(b) process the input image to generate at least a context image and a focus image, the context image having a first resolution, the focus image having a second resolution, the second resolution being higher than the first resolution, wherein the focus image substantially corresponds to the region of visual accuracy of the input image;
(c) determine, based upon the detected gaze direction of the user, a focus area within the projection surface over which the focus image is to be drawn, the focus area being only a portion of an entire projection area of the projection surface;
(d) render the context image via the at least one image renderer;
(e) draw the focus image via the at least one light source and the at least one controllable scanning mirror; and
(f) control the at least two actuators to align the at least one controllable scanning mirror in a manner that the focus image is drawn over the determined focus area within the projection surface,
wherein the processor is configured to perform (d), (e) and (f) substantially simultaneously, and wherein a projection of the drawn focus image is optically combined with a projection of the rendered context image to create a visual scene;
wherein the processor is configured to repeat (a) to (f) for a sequence of input images, and wherein when repeating (a) to (f) the processor is further configured to:
(g) change a location of the focus area on the projection surface based upon a change in the gaze direction of the user;
(h) control the at least two actuators to re-align the at least one controllable scanning mirror with respect to the change in the location of the focus area on the projection surface;
(i) predict an extent to which the at least one controllable scanning mirror is unstable until re-aligned; and
(j) determine which pixel of the drawn focus image is to be drawn while the at least one controllable scanning mirror is unstable.

US Pat. No. 10,798,332

DUAL PASS-THROUGH IMAGING SYSTEM AND METHOD

Varjo Technologies Oy, H...

1. An imaging system for producing images for a display apparatus, the imaging system comprising:
an infrared light source that, in operation, emits infrared light;
at least one imaging unit comprising:
a first image-sensor chip having a first field of view;
a second image-sensor chip having a second field of view, the second field of view being wider than the first field of view, wherein the second field of view comprises an overlapping field of view that overlaps with the first field of view and a remaining non-overlapping field of view;
a semi-transparent reflective element arranged to reflect a portion of light received from a real-world environment towards the first image-sensor chip, whilst transmitting another portion of the light towards the second image-sensor chip;
a first infrared filter arranged, on an optical path between the semi-transparent reflective element and the first image-sensor chip, to block transmission of infrared light towards the first image-sensor chip; and
means for transmitting infrared light received from the overlapping field of view towards the second image-sensor chip, whilst blocking transmission of infrared light received from the non-overlapping field of view towards the second image-sensor chip, said means being arranged on an optical path between the semi-transparent reflective element and the second image-sensor chip; and
at least one processor, communicably coupled to the infrared light source and the at least one imaging unit, configured to:
control the first image-sensor chip and the second image-sensor chip to capture a first image and a second image of the real-world environment, respectively, wherein a portion of the second image-sensor chip that receives the infrared light from the overlapping field of view, in operation, captures depth information pertaining to the overlapping field of view, and wherein a resolution of the first image is higher than a resolution of the second image; and
generate from the first image and the second image at least one extended-reality image to be presented via the display apparatus, based on said depth information.

US Pat. No. 11,029,385

TRACKING SYSTEM FOR HEAD-MOUNTED DISPLAY APPARATUS AND METHOD OF TRACKING

Varjo Technologies Oy, H...

1. A tracking system for use in a head-mounted display apparatus, the tracking system comprising:
at least one emitter that, in operation, emits signals;
a first receiver and a second receiver that, in operation, sense the emitted signals and generate sensor data, the first receiver and the second receiver being arranged on a first portion and a second portion of the head-mounted display apparatus, respectively, wherein the first portion faces a user when the head-mounted display apparatus is worn by the user, and the second portion is a part of a user-interaction controller of the head-mounted display apparatus; and
a processor configured to process the generated sensor data to determine relative positions and orientations of the first receiver and the second receiver with respect to the at least one emitter, and to determine, based on the determined relative positions and orientations, a relative position and orientation of the second receiver with respect to the first receiver.
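
Given each receiver's position and orientation relative to the emitter, the relative pose of the controller-mounted second receiver with respect to the headset-mounted first receiver follows from composing the two transforms: T_rel = T_first^-1 * T_second. The NumPy sketch below shows that composition with 4x4 homogeneous transforms; the example rotations and translations are illustrative assumptions.

```python
import numpy as np

def pose(rotation_deg_z, translation_xyz):
    """Build a 4x4 homogeneous transform (rotation about z plus translation)."""
    c, s = np.cos(np.radians(rotation_deg_z)), np.sin(np.radians(rotation_deg_z))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation_xyz
    return T

# Poses of the two receivers in the emitter's coordinate frame (illustrative).
T_first = pose(10.0, (0.0, 1.6, 0.0))    # receiver on the headset
T_second = pose(35.0, (0.3, 1.2, 0.4))   # receiver on the user-interaction controller

# Position and orientation of the second receiver relative to the first receiver.
T_relative = np.linalg.inv(T_first) @ T_second
print(np.round(T_relative, 3))
```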

US Pat. No. 11,029,408

DISTANCE-IMAGING SYSTEM AND METHOD OF DISTANCE IMAGING

Varjo Technologies Oy, H...

1. A distance-imaging system comprising:
an illuminating unit that, in operation, projects a spatially non-uniform pattern of light spots onto objects present in a real-world environment, wherein a first portion of said pattern has a higher density of light spots than a second portion of said pattern;
at least one camera that, in operation, captures an image of reflections of the light spots from surfaces of the objects; and
at least one optical element associated with the at least one camera, wherein a first optical portion of the at least one optical element has a higher magnification factor than a second optical portion of the at least one optical element, wherein reflections of said first and second portions of the spatially non-uniform pattern of light spots are differently magnified and/or de-magnified in horizontal and vertical directions by said first and second optical portions of the at least one optical element, respectively.

US Pat. No. 11,030,719

IMAGING UNIT, DISPLAY APPARATUS AND METHOD OF DISPLAYING

Varjo Technologies Oy, H...

1. A display apparatus comprising:
an imaging unit comprising:
at least one camera, the at least one camera is to be used to capture an image of a given real-world scene; and
at least one optical element arranged on an optical path of a projection of the given real-world scene, wherein the at least one optical element comprises a first optical-element portion and a second optical-element portion having different optical properties with respect to magnification, wherein the projection of the given real-world scene is differently magnified by the first optical-element portion and the second optical-element portion in a manner that the image captured by the at least one camera has a variable angular resolution across a field of view of the at least one optical element, an angular resolution of a first portion of the captured image being greater than an angular resolution of a second portion of the captured image;
at least one image renderer; and
a processor coupled to the at least one camera and the at least one image renderer, wherein the processor is configured to:
process the captured image of the given real-world scene to generate an output image; and
render the output image via the at least one image renderer, wherein a shape of the first optical-element portion and a shape of the second optical-element portion are based on an aspect ratio of the output image.

US Pat. No. 11,030,720

DIRECT RETINAL PROJECTION APPARATUS AND METHOD

Varjo Technologies Oy, H...

1. A direct retinal projection apparatus comprising:
means for detecting a gaze direction of a user;
at least one projector;
at least one first optical element comprising at least a first optical portion and a second optical portion having different optical properties with respect to magnification, wherein the at least one first optical element comprises an optical axis and is asymmetrical with respect to the optical axis, and the second optical portion is substantially ellipsoidal in shape;
at least one first actuator associated with the at least one first optical element; and
a processor configured to render a warped image having a spatially-uniform angular resolution via the at least one projector, whilst adjusting an orientation of the at least one first optical element via the at least one first actuator, based on the detected gaze direction of the user, to direct a projection of the warped image from the at least one first optical element towards a retina of a user's eye, wherein the asymmetrical first optical element with the elliptical second optical portion differently magnifies projections of a first portion and a second portion of the warped image, to produce on the retina of the user's eye a de-warped image having different spatially-variable angular resolutions at least along orthogonal axes of the de-warped image.
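
To illustrate the pre-warping idea in this claim, the sketch below assumes a hypothetical radial magnification profile for the first optical element and builds a lookup that maps a desired retinal angle to the angle at which the warped image should render it; the profile and all numbers are assumptions, not the patented optics.

import numpy as np

def optic_magnification(rendered_angle_deg):
    """Assumed radial magnification profile of the first optical element:
    strongest on-axis, falling off towards the periphery."""
    return 2.0 - np.clip(rendered_angle_deg / 45.0, 0.0, 1.0)

def prewarp_lookup(max_angle_deg: float = 45.0, samples: int = 1024):
    """Return a function mapping a desired retinal angle to the angle at which the
    warped image should render it, assuming the optic maps a rendered angle r to
    optic_magnification(r) * r on the retina."""
    rendered = np.linspace(0.0, max_angle_deg, samples)
    retinal = optic_magnification(rendered) * rendered
    return lambda retinal_angle_deg: np.interp(retinal_angle_deg, retinal, rendered)

# e.g. prewarp_lookup()(30.0) gives the rendered angle that the optic maps to 30 degrees.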

US Pat. No. 11,030,817

DISPLAY SYSTEM AND METHOD OF USING ENVIRONMENT MAP TO GENERATE EXTENDED-REALITY IMAGES

Varjo Technologies Oy, H...

1. A display system comprising:
at least one display or projector;
at least one camera;
means for tracking a position and orientation of a user's head; and
at least one processor configured to:
control the at least one camera to capture a plurality of images of a real-world environment using a default exposure setting of the at least one camera, whilst processing head-tracking data obtained from said means to determine corresponding positions and orientations of the user's head with respect to which the plurality of images are captured;
process the plurality of images, based on the corresponding positions and orientations of the user's head, to create an environment map of the real-world environment;
generate at least one extended-reality image from at least one of the plurality of images using the environment map;
render, via the at least one display or projector, the at least one extended-reality image;
adjust an exposure of the at least one camera to capture at least one underexposed image of the real-world environment, whilst processing corresponding head-tracking data obtained from said means to determine a corresponding position and orientation of the user's head with respect to which the at least one underexposed image is captured;
process the at least one of the plurality of images, based on a translational and rotational difference between a position and orientation of the user's head with respect to which the at least one of the plurality of images is captured and the position and orientation with respect to which the at least one underexposed image is captured, to generate at least one derived image;
generate at least one next extended-reality image from the at least one derived image using the environment map;
render, via the at least one display or projector, the at least one next extended-reality image; and
identify oversaturated pixels in the environment map and modify intensities of the oversaturated pixels in the environment map, based on the at least one underexposed image and the position and orientation with respect to which the at least one underexposed image is captured,
wherein the at least one processor is configured to detect whether or not there are oversaturated pixels in any of the plurality of images, and wherein the at least one underexposed image is captured when it is detected that there are oversaturated pixels in the at least one of the plurality of images.
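
A simplified sketch of the oversaturation-correction step could look as follows, assuming intensities normalised to [0, 1] and an underexposed capture that has already been reprojected into the environment map's frame using the head-pose difference; the exposure ratio and threshold are assumed values.

import numpy as np

def fix_oversaturated(env_map: np.ndarray, underexposed: np.ndarray,
                      exposure_ratio: float = 8.0, threshold: float = 0.98) -> np.ndarray:
    """Replace oversaturated environment-map intensities with values recovered from the
    underexposed capture, assuming both are normalised to [0, 1] and already aligned."""
    env = env_map.astype(np.float32)
    oversaturated = env >= threshold
    recovered = underexposed.astype(np.float32) * exposure_ratio
    env[oversaturated] = recovered[oversaturated]
    return env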

US Pat. No. 11,023,041

SYSTEM AND METHOD FOR PRODUCING IMAGES BASED ON GAZE DIRECTION AND FIELD OF VIEW

Varjo Technologies Oy, H...

1. A system for producing images for a display apparatus, the system comprising at least one image source, and a processor communicably coupled to the at least one image source, wherein the processor is configured to:
obtain information indicative of an angular size of a field of view providable by at least one image renderer of the display apparatus;
obtain information indicative of a gaze direction of a user;
receive a sequence of images from the at least one image source; and
process the sequence of images to generate at least one sequence of processed images to be communicated to the display apparatus, wherein, when processing the sequence of images, the processor is configured to crop a given image, based on the gaze direction of the user and the angular size of the field of view providable by the at least one image renderer, to generate at least one processed image, an angular size of a field of view represented by the at least one processed image being larger than the angular size of the field of view providable by the at least one image renderer of the display apparatus,
wherein the at least one image source is implemented as at least one camera, and the processor is further configured to:
obtain information indicative of a head orientation of the user;
determine a change in the head orientation of the user relative to a previous head orientation of the user;
select adaptively a framerate to be employed for the at least one camera based on the change in the head orientation of the user; and
adjust a three-dimensional orientation of the at least one camera, based on the head orientation of the user, and employ the selected framerate to capture the sequence of images, the sequence of images being representative of a real-world environment.
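
The adaptive frame-rate selection and gaze-based cropping can be sketched roughly as below; the thresholds, frame rates, margin and pixel-density values are illustrative assumptions, and the helper names are hypothetical.

import numpy as np

def select_framerate(head_delta_deg: float, slow_fps: int = 60, fast_fps: int = 90,
                     threshold_deg: float = 5.0) -> int:
    """Use a higher camera frame rate when the head orientation is changing quickly."""
    return fast_fps if abs(head_delta_deg) > threshold_deg else slow_fps

def crop_around_gaze(image: np.ndarray, gaze_px, renderer_fov_deg: float,
                     pixels_per_degree: float, margin_deg: float = 5.0) -> np.ndarray:
    """Crop a square region, centred on the gaze point (row, col), whose angular size
    exceeds the field of view providable by the image renderer."""
    half = int(((renderer_fov_deg / 2.0) + margin_deg) * pixels_per_degree)
    row, col = int(gaze_px[0]), int(gaze_px[1])
    h, w = image.shape[:2]
    return image[max(row - half, 0):min(row + half, h),
                 max(col - half, 0):min(col + half, w)]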

US Pat. No. 11,017,562

IMAGING SYSTEM AND METHOD FOR PRODUCING IMAGES USING MEANS FOR ADJUSTING OPTICAL FOCUS

Varjo Technologies Oy, H...

9. A method for producing images for a display apparatus, the method being implemented via an imaging system comprising at least one imaging unit, a given imaging unit comprising a camera, an optical element that comprises at least a first optical portion and a second optical portion having different focal lengths, and means for adjusting an optical focus of the given imaging unit, the method comprising:
obtaining, from the display apparatus, information indicative of a gaze direction of a user;
generating a depth or voxel map of a given real-world scene;
determining, based on the gaze direction of the user and the depth or voxel map of the given real-world scene, an optical depth of at least one object present in a region of interest within the given real-world scene; and
adjusting an optical focus of the given imaging unit, based on the optical depth of the at least one object and the focal lengths of the first optical portion and the second optical portion, to capture at least one warped image of the given real-world scene, the at least one warped image having a spatially-uniform angular resolution,
wherein the at least one object comprises a first object and a second object, a first optical depth of the first object being different from a second optical depth of the second object, wherein the method further comprises:
selecting a given optical depth that lies between the first optical depth and the second optical depth; and
adjusting the optical focus of the given imaging unit, based on the given optical depth, to capture the at least one warped image of the given real-world scene.
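
One simple way to pick "a given optical depth that lies between the first optical depth and the second optical depth" is the harmonic mean of the two depths, which balances defocus between the near and far objects; the choice of mean here is an assumption for illustration, not taken from the method.

def focus_depth_between(depth_a_m: float, depth_b_m: float) -> float:
    """A focus depth lying between two object depths (harmonic mean, in metres)."""
    return 2.0 * depth_a_m * depth_b_m / (depth_a_m + depth_b_m)

# e.g. focus_depth_between(0.5, 2.0) == 0.8, which lies between the two object depths.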

US Pat. No. 10,990,167

DISPLAY APPARATUS AND METHOD USING PROJECTION MATRICES TO GENERATE IMAGE FRAMES

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one display or projector;
means for tracking a position and orientation of a user's head; and
a processor coupled to the at least one display or projector and said means, wherein the processor or at least one external processor communicably coupled to the processor is configured to:
obtain, from said means, head-tracking data indicative of the position and orientation of the user's head;
process the head-tracking data to determine a current position and orientation of the user's head and a velocity and/or acceleration with which the position and orientation of the user's head is changing;
predict, based on the current position and orientation of the user's head and the determined velocity and/or acceleration, at least a first position and orientation and a second position and orientation of the user's head at times t1 and t2, respectively, during a lifetime of a given frame being rendered;
determine, based on the predicted first position and orientation and the predicted second position and orientation of the user's head, at least one first projection matrix and at least one second projection matrix to be applied to three-dimensional image data pertaining to the given frame, respectively, wherein the first and second projection matrices are indicative of a geometrical relationship between three-dimensional points of an extended-reality visual scene and two-dimensional pixels of the at least one display or projector; and
apply the at least one first projection matrix and the at least one second projection matrix to said three-dimensional image data to generate at least one first image frame and at least one second image frame, respectively,
wherein the processor is configured to render, via the at least one display or projector, the at least one first image frame and the at least one second image frame at the times t1 and t2, respectively, during the lifetime of the given frame.
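
As a rough sketch of the prediction-and-projection pipeline: a constant-velocity pose prediction, a standard world-to-eye view matrix and an OpenGL-style perspective matrix are assumed below for illustration; the patent does not prescribe these particular forms, and all names are placeholders.

import numpy as np

def predict_position(p0: np.ndarray, velocity: np.ndarray, dt: float) -> np.ndarray:
    """Constant-velocity prediction of the head position dt seconds ahead."""
    return p0 + velocity * dt

def view_matrix(head_position: np.ndarray, head_rotation: np.ndarray) -> np.ndarray:
    """World-to-eye transform from a predicted head pose (3x3 rotation, 3-vector position)."""
    V = np.eye(4)
    V[:3, :3] = head_rotation.T
    V[:3, 3] = -head_rotation.T @ head_position
    return V

def perspective(fov_y_deg: float = 90.0, aspect: float = 1.0,
                near: float = 0.05, far: float = 100.0) -> np.ndarray:
    """OpenGL-style perspective projection relating 3D points to 2D pixels."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    P = np.zeros((4, 4))
    P[0, 0] = f / aspect
    P[1, 1] = f
    P[2, 2] = (far + near) / (near - far)
    P[2, 3] = 2.0 * far * near / (near - far)
    P[3, 2] = -1.0
    return P

# First and second projection matrices for the predicted poses at t1 and t2
# (p0, velocity, R_t1, R_t2, t0, t1, t2 stand for tracked or predicted values):
# M1 = perspective() @ view_matrix(predict_position(p0, velocity, t1 - t0), R_t1)
# M2 = perspective() @ view_matrix(predict_position(p0, velocity, t2 - t0), R_t2)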

US Pat. No. 10,979,681

DISPLAY APPARATUS AND METHOD OF DISPLAYING USING LIGHT SOURCE AND BEAM SCANNING ARRANGEMENT

Varjo Technologies Oy, H...

1. A display apparatus comprising:
at least one light source per eye, the at least one light source being operable to emit a light beam;
at least one beam scanning arrangement per eye, the at least one beam scanning arrangement being configured to direct the light beam towards a projection surface, and to sweep the light beam according to a scanning pattern, the scanning pattern being substantially in a form of a spiral of pixels; and
a processor configured to control the at least one light source and the at least one beam scanning arrangement to draw at least a first region of an input image over the projection surface,
wherein a resolution of the first region of the input image and a distance between centers of adjacent pixels along the spiral vary as a function of an angular distance from a center of the spiral,
and wherein a width of the light beam is modulated according to the variation in resolution.
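
The spiral scanning pattern can be sketched by stepping along an Archimedean-like spiral whose pixel spacing, and hence beam width, grows with angular distance from the centre; the spacing law and constants below are assumed for illustration only.

import numpy as np

def spiral_pixels(max_radius_deg: float = 30.0, base_spacing_deg: float = 0.2,
                  growth_per_deg: float = 0.1):
    """Return pixel centres (x, y, in degrees) along a spiral, plus a beam width per
    pixel; the spacing between pixels grows with angular distance from the centre."""
    xs, ys, widths = [], [], []
    r, phi = 0.0, 0.0
    while r < max_radius_deg:
        spacing = base_spacing_deg * (1.0 + growth_per_deg * r)  # coarser towards the periphery
        xs.append(r * np.cos(phi))
        ys.append(r * np.sin(phi))
        widths.append(spacing)                                   # beam width tracks local spacing
        dphi = spacing / max(r, spacing)                         # advance one pixel spacing along the arc
        phi += dphi
        r += spacing * dphi / (2.0 * np.pi)                      # keep adjacent turns one spacing apart
    return np.array(xs), np.array(ys), np.array(widths)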

US Pat. No. 10,777,016

SYSTEM AND METHOD OF ENHANCING USER'S IMMERSION IN MIXED REALITY MODE OF DISPLAY APPARATUS

Varjo Technologies Oy, H...

1. A system for enhancing a user's immersion in a mixed reality mode of a head-mounted display apparatus, comprising:
at least one camera; and
a processor communicably coupled to the at least one camera, wherein the processor is configured to:
control the at least one camera to capture a sequence of images of a given real-world environment;
analyze the sequence of images to identify a spatial geometry of real objects present in the given real-world environment;
generate material segmentation information from the sequence of images to identify material categories to which one or more portions of the real objects belong;
process the sequence of images to:
generate a sequence of mixed-reality images, based upon the spatial geometry of at least one of the real objects and the material categories of one or more portions of the at least one real object; and
represent, in the sequence of mixed-reality images, the at least one real object by at least one virtual object having a visual behaviour that emulates material properties associated with the material categories of the one or more portions of the at least one real object; and
render, at the head-mounted display apparatus, the sequence of mixed-reality images.
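
A minimal sketch of how material-segmentation labels could drive the visual behaviour of virtual stand-ins is a lookup from material category to rendering parameters; the label set and parameter values below are hypothetical assumptions.

MATERIAL_PROPERTIES = {
    "glass":  {"reflectivity": 0.9, "roughness": 0.05, "transparency": 0.8},
    "wood":   {"reflectivity": 0.2, "roughness": 0.7,  "transparency": 0.0},
    "metal":  {"reflectivity": 0.8, "roughness": 0.3,  "transparency": 0.0},
    "fabric": {"reflectivity": 0.1, "roughness": 0.9,  "transparency": 0.0},
}

def virtual_object_materials(segment_labels):
    """Return per-portion rendering parameters for the virtual object standing in
    for a real object, given the material category of each of its portions."""
    default = {"reflectivity": 0.3, "roughness": 0.5, "transparency": 0.0}
    return [MATERIAL_PROPERTIES.get(label, default) for label in segment_labels]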

US Pat. No. 10,452,911

GAZE-TRACKING SYSTEM USING CURVED PHOTO-SENSITIVE CHIP

VARJO TECHNOLOGIES OY, H...

1. A gaze-tracking system for use in a head-mounted display apparatus, the gaze-tracking system comprising:
a plurality of illuminators for emitting light pulses to illuminate a user's eye when the head-mounted display apparatus is worn by the user;
at least one camera for capturing an image of reflections of the light pulses from the user's eye, the image being representative of a position of the reflections on an image plane of the at least one camera, the at least one camera comprising a plurality of photo-sensitive elements arranged into at least one chip, wherein a first surface of the at least one chip bulges inwards in a substantially-curved shape, such that a focal plane of photo-sensitive elements positioned proximally to edges of the at least one chip is farther away than a focal plane of photo-sensitive elements positioned substantially at a center portion of the at least one chip, the first surface facing the user's eye when the head-mounted display apparatus is worn by the user; and
a processor coupled in communication with the plurality of illuminators and the at least one camera, wherein the processor is configured to control operations of the plurality of illuminators and the at least one camera, and to process the captured image to detect a gaze direction of the user.
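
A deliberately simplified sketch of the final processing step, estimating gaze from the pupil centre and the corneal glints detected in the captured image, is given below; real pipelines, including the patented one, involve considerably more (calibration, 3D eye models, the curved-chip geometry), and the gain constant is an arbitrary assumption.

import numpy as np

def estimate_gaze_direction(pupil_px, glint_px_list, gain_deg_per_px: float = 0.05):
    """Map the pupil-to-glint-centroid offset (in pixels) to a coarse yaw/pitch estimate."""
    glints = np.asarray(glint_px_list, dtype=float)
    offset = np.asarray(pupil_px, dtype=float) - glints.mean(axis=0)
    yaw_deg = gain_deg_per_px * offset[0]
    pitch_deg = gain_deg_per_px * offset[1]
    return yaw_deg, pitch_deg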