US Pat. No. 9,324,156

METHOD AND APPARATUS FOR SEARCHING IMAGES

ATHEER, INC., Mountain V...

1. A computer-implemented method comprising: receiving, at a processor, first and second images, said first and second images
sharing at least a portion of subject matter therein; defining a first search path for said first image; defining a first
transition for said first image; searching for said first transition along said first search path; upon detecting said first
transition, searching along a second search path, said second search path substantially following said first transition; defining
a third search path for said second image; defining a second transition for said second image; searching for said second transition
along said third search path, said second transition corresponding with said first transition; and upon detecting said second
transition, searching along a fourth search path, said fourth search path substantially following said second transition;
wherein in searching along said second search path, following said first transition in at least two directions; and in searching
along said fourth search path, following said second transition in at least two directions.
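
Read as an algorithm, the claim scans a first path for an edge-like transition, then traces that transition in at least two directions, and repeats the procedure with a corresponding transition in the second image. A minimal Python sketch under that reading follows; the brightness-gradient mask, threshold, and function names are illustrative assumptions rather than the patent's implementation.

    import numpy as np

    def edge_mask(image, threshold=40.0):
        # mark pixels whose local gradient is strong enough to count as a
        # "transition" (a brightness transition, for simplicity)
        gy, gx = np.gradient(image.astype(float))
        return np.hypot(gx, gy) > threshold

    def search_first_path(mask, row):
        # first search path: a straight row of pixels, scanned left to right
        hits = np.nonzero(mask[row])[0]
        return (int(hits[0]), row) if hits.size else None

    def follow_transition(mask, seed):
        # second search path: follow the transition in at least two
        # directions by flood-filling along the mask from the seed pixel
        h, w = mask.shape
        seen, stack = {seed}, [seed]
        while stack:
            x, y = stack.pop()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h and mask[ny, nx] and (nx, ny) not in seen:
                        seen.add((nx, ny))
                        stack.append((nx, ny))
        return seen

The third and fourth search paths of the claim repeat these two steps on the second image, with the transition defined so that the two detected transitions correspond.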

US Pat. No. 9,142,185

METHOD AND APPARATUS FOR SELECTIVELY PRESENTING CONTENT

ATHEER, INC., Mountain V...

1. A machine implemented method, comprising:
obtaining input data;
generating output data from said input data;
determining a status of at least a first contextual factor;
determining whether said status of said first contextual factor meets a first standard;
if said status of said first contextual factor meets said first standard, applying a first transformation to said output data,
said first transformation comprising definition of a first output region substantially corresponding to a first retinal region,
said first retinal region being less than an entirety of a visual field, said first transformation further comprising limiting
output of said output data to said first output region or excluding output of said output data therefrom;

outputting said output data in a form suitable for display to a viewer; and
updating said output data to maintain said substantial correspondence between said first output region and said first retinal
region subsequent to a change in at least one of a group consisting of a position of said first retinal region and an orientation
of said first retinal region, without perceptibly changing said output data responsive to said change in said at least one
of said group consisting of said position of said first retinal region and said orientation of said first retinal region.

US Pat. No. 9,589,000

METHOD AND APPARATUS FOR CONTENT ASSOCIATION AND HISTORY TRACKING IN VIRTUAL AND AUGMENTED REALITY

ATHEER, INC., Mountain V...

1. An apparatus, comprising:
a processor adapted to execute executable instructions;
a data entity establisher instantiated on said processor, said data entity establisher comprising executable instructions
adapted for establishing a plurality of distinct data entities, each said distinct data entity comprising at least one of
a group consisting of an augmented reality entity and a virtual reality entity;

a state establisher instantiated on said processor, said state establisher comprising executable instructions adapted for
establishing a plurality of distinct states for each said data entity, each said state comprising:

a state time;
a plurality of state properties at least substantially corresponding to properties of one said data entity substantially at
said state time, said plurality of state properties comprising a state spatial arrangement of said data entity;

a data store in communication with said processor;
a storer instantiated on said processor, said storer comprising executable instructions adapted for storing in said data store
said plurality of distinct data entities and said distinct states therefor, so as to enable output of said distinct data entities
to at least one of an augmented reality environment and a virtual reality environment with each said data entity exhibiting
at least a portion of one of said distinct states therefor;

a user output in communication with said processor;
wherein:
said data entity establisher and said state establisher are adapted to cooperate to establish:
a first data entity with a first state and a second state, a first state time of said first state being substantially different
from a second state time of said second state; and

said first data entity with said first state, and a second data entity with a third state, said first state time of said first
state being substantially different from a third state time of said third state;

said storer is adapted to store for substantially concurrent output at a same output time:
a first iteration of said first data entity exhibiting said first state and a second iteration of said first data entity exhibiting
said second state, said output time being substantially different from said first and second state times; and

said first iteration of said first data entity exhibiting said first state and a first iteration of said second data entity
exhibiting said third state, said output time being substantially different from said first and third state times;

said user output is adapted to output at said same output time so as to be sensed by a user:
said first iteration of said first data entity exhibiting said first state and said second iteration of said first data entity
exhibiting said second state; and

said first iteration of said first data entity exhibiting said first state and said first iteration of said second data entity
exhibiting said third state.
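
Stripped of claim language, the apparatus is a version-history store for AR/VR entities: each entity carries time-stamped states, and two iterations of one entity can be output concurrently, each exhibiting a different stored state. A hedged Python sketch of that data structure, with invented names and deliberately simplified properties:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class State:
        state_time: float
        spatial_arrangement: tuple      # e.g. an (x, y, z) position
        properties: dict = field(default_factory=dict)

    @dataclass
    class DataEntity:
        name: str
        states: list = field(default_factory=list)  # distinct states

    # a first data entity with two states at substantially different times
    chair = DataEntity("ar_chair")
    chair.states.append(State(10.0, (0.0, 0.0, 0.0)))
    chair.states.append(State(20.0, (1.0, 0.0, 0.0)))

    # storer/output: two iterations of the same entity, one per stored state,
    # emitted for the same output time (here 30.0, distinct from both)
    output_time = 30.0
    frame = [(chair.name, s) for s in chair.states]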

US Pat. No. 9,442,575

METHOD AND APPARATUS FOR APPLYING FREE SPACE INPUT FOR SURFACE CONSTRAINED CONTROL

ATHEER, INC., Mountain V...

1. A machine-implemented method, comprising:
establishing a surface constrained input standard in a processor;
establishing a surface constrained input response in said processor;
establishing a free space input standard in said processor;
sensing with a sensor a free space input;
communicating said free space input to said processor;
determining in said processor whether said free space input satisfies said free space input standard;
if said free space input satisfies said free space input standard:
generating in said processor a virtual surface constrained input satisfying said surface constrained input standard; and
executing said surface constrained input response in response to said virtual surface constrained input.
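
The method boils down to recognizing a gesture made in free space and synthesizing a "virtual" surface-constrained input (for example, a touch drag) that an existing surface-constrained handler consumes unchanged. An illustrative Python sketch, in which the gesture standard, projection, and handler are all assumptions:

    def satisfies_free_space_standard(path3d):
        # e.g. a mostly horizontal swipe of sufficient extent
        (x0, y0, _z0), (x1, y1, _z1) = path3d[0], path3d[-1]
        return abs(x1 - x0) > 0.2 and abs(y1 - y0) < 0.05

    def to_virtual_surface_input(path3d):
        # project the free-space hand path onto the surface's 2D frame
        return [(x, y) for x, y, _z in path3d]

    def surface_constrained_response(path2d):
        # pre-existing handler; it never learns the input was synthetic
        print("drag from", path2d[0], "to", path2d[-1])

    def process(path3d):
        if satisfies_free_space_standard(path3d):
            surface_constrained_response(to_virtual_surface_input(path3d))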

US Pat. No. 9,223,981

APPARATUS FOR PROCESSING WITH A SECURE SYSTEM MANAGER

ATHEER, INC., Mountain V...

1. A method, comprising:
detecting a communication among a plurality of secure data entities and at least one non-secure data entity, said plurality
of secure data entities comprising a communication monitor, said communication monitor being adapted to detect said communication;

prohibiting execution of a non-secure executable instruction from said at least one non-secure data entity on any of said
secure data entities unless said non-secure executable instruction is recorded in a permitted instruction record; and

prohibiting execution of said non-secure executable instruction if said non-secure executable instruction is recorded in a
prohibited instruction record, wherein said permitted instruction record and said prohibited instruction record are secure
data entities.
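
The claimed control is a default-deny allowlist combined with an explicit denylist, both held as secure data entities. A minimal sketch, assuming instructions are identified by opcode-like strings; the record contents are invented for illustration:

    PERMITTED = {"read_public", "render_overlay"}             # permitted instruction record
    PROHIBITED = {"read_secure_store", "write_secure_store"}  # prohibited instruction record

    def may_execute_on_secure_entity(instruction: str) -> bool:
        if instruction in PROHIBITED:    # explicit prohibition always wins
            return False
        return instruction in PERMITTED  # otherwise execute only if recorded as permitted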

US Pat. No. 9,947,240

METHOD AND APPARATUS FOR POSITION AND MOTION INSTRUCTION

Atheer, Inc., Mountain V...

1. An apparatus, comprising:
a processor;
a sensor in communication with said processor;
a visual display in communication with said processor;
wherein:
at least one of said sensor and said processor establishes world data, said world data comprising at least one of a group consisting of a world position of an entity and a world motion of said entity;
said at least one of said sensor and said processor dynamically updates said world data until a condition is satisfied;
said processor establishes target data, said target data comprising at least one of a group consisting of a target position of said entity and a target motion of said entity;
said processor establishes guide data, said guide data guiding said entity toward said at least one of said target position and said target motion;
said guide data comprising a visual representation of at least a portion of said entity;
said visual representation being at least substantially anthropomorphic;
said processor establishes evaluation data substantially representative of an evaluation of said world data against said target data;
said processor dynamically updates at least a portion of said target data responsive to said world data until said condition is satisfied;
said processor dynamically updates at least a portion of said guide data responsive to said target data until said condition is satisfied;
said processor dynamically updates at least a portion of said evaluation data until said condition is satisfied;
said display outputs said guide data; and
said display outputs at least a portion of said evaluation data.

US Pat. No. 9,842,122

METHOD AND APPARATUS FOR SEARCHING IMAGES

Atheer, Inc., Mountain V...

1. A machine-implemented method for searching images to detect and determine an outline of hands therein, comprising:
receiving a digital image comprising a plurality of pixels in a processor, said digital image exhibiting at least a portion
of a hand therein;

defining a first search path of pixels within said digital image, said first search path being substantially a straight line
of said pixels, substantially horizontal, and proximate a bottom edge of said digital image;

defining a transition between said hand in said digital image and a background of said digital image, comprising at least
one of a color transition, a brightness transition, an edge transition, a focus transition, and a depth transition of said
pixels within said digital image;

searching for said transition between said hand and said background among said pixels along said first search path;
upon detecting said transition between said hand and said background, searching among said pixels along a second search path,
said second search path substantially following said transition between said hand and said background;

in following said transition, defining an outline of said hand based on said transition;
determining a shape of said outline of said hand in said digital image;
identifying a posture of said hand from said outline of said hand in said digital image; and
in searching among said pixels along said second search path, if said transition forks, forking said second search path and
searching among said pixels along at least two forks thereof.
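
Relative to the transition-tracing sketch after the '156 claim above, the distinguishing limitation here is the fork rule: when the hand/background transition splits, the second search path splits too and every branch is searched. A stack-based trace handles that naturally; names and the mask construction are again illustrative.

    def trace_with_forks(mask, seed):
        seen, stack, outline = {seed}, [seed], []
        while stack:
            x, y = stack.pop()
            outline.append((x, y))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if (dx, dy) != (0, 0) and 0 <= ny < mask.shape[0] \
                            and 0 <= nx < mask.shape[1] and mask[ny, nx] \
                            and (nx, ny) not in seen:
                        # two or more continuations here constitute a fork;
                        # each is pushed, so every fork is searched
                        seen.add((nx, ny))
                        stack.append((nx, ny))
        return outline  # pixels of the hand outline (visit order, not contour order)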

US Pat. No. 9,576,188

METHOD AND APPARATUS FOR SUBJECT IDENTIFICATION

Atheer, Inc., Mountain V...

1. A method, comprising:
establishing at least one substantially three dimensional learning model of at least one learning subject;
establishing at least one substantially three dimensional gallery model for at least one gallery subject;
establishing at least one substantially three dimensional query model of a query subject;
determining a transform of at least one parent gallery model from among said at least one gallery model in combination with
at least one active learning model from among said at least one learning model so as to yield at least one transformed gallery
model, wherein said transformed gallery model approaches correspondence with at least one of said at least one query model
in at least one model property as compared with said parent gallery model;

applying said transform; and
comparing at least one substantially two dimensional transformed gallery image at least substantially corresponding with said
at least one transformed gallery model against at least one substantially two dimensional query image at least substantially
corresponding with said at least one query model, so as to determine whether said at least one query subject is said at least
one gallery subject.

US Pat. No. 9,852,652

METHOD AND APPARATUS FOR POSITION AND MOTION INSTRUCTION

Atheer, Inc., Mountain V...

1. A machine-implemented method, comprising:
sensing world data via a sensor of a motion instruction apparatus, said world data comprising a world position of a body part
of a user;

determining target data via a processor of the motion instruction apparatus, said target data comprising a target position
of the body part of the user;

producing visual guide data via the processor, said visual guide data being based on the target data and the world data;
displaying said visual guide data overlaid onto the world data to the user via a pass-through display of the motion instruction
apparatus;

wherein said visual guide data comprises at least one of a group consisting of visual virtual reality data and visual augmented
reality data; and

repeating, in response to a movement of the body part, the sensing, determining, producing and displaying steps until the
world position of the body part matches the target position of the body part to within a predetermined margin;

signaling, via the pass-through display, to the user that the world position matches the target position.
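
The repeat-until-margin limitation is a plain convergence loop. A minimal sketch, assuming 3D positions and a Euclidean margin test; the sensor, target, display, and signal calls are stand-ins for the apparatus recited:

    import math

    MARGIN = 0.03  # metres; illustrative predetermined margin

    def instruct(sense_world, determine_target, display_guide, signal_match):
        while True:
            world = sense_world()              # world position of the body part
            target = determine_target(world)   # target position of the body part
            display_guide(world, target)       # visual guide data, overlaid
            if math.dist(world, target) <= MARGIN:
                signal_match()                 # signaled via the pass-through display
                return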

US Pat. No. 9,747,306

METHOD AND APPARATUS FOR IDENTIFYING INPUT FEATURES FOR LATER RECOGNITION

ATHEER, INC., Mountain V...

1. A method, comprising:
defining an actor input for an actor;
defining a command associated with said actor input;
defining a region for said actor input;
detecting said actor input in said region;
executing said command in response to said actor input;
identifying at least one structural geometric salient feature of said actor from said actor input associated with said executed
command;

defining a structural geometric actor model of said actor from said at least one structural geometric salient feature;
retaining a data set comprising at least one of said at least one structural geometric salient feature or said structural
geometric actor model; and

using said data set to identify subsequent actor inputs.
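
One hedged reading: while the actor is busy issuing a known command in a known region, the system quietly measures structural geometry (say, fingertip spacing) and keeps it for later recognition. A Python sketch with an invented feature and matching rule:

    import math

    def salient_features(fingertips):
        # sorted pairwise fingertip distances as a crude structural signature
        return sorted(math.dist(a, b)
                      for i, a in enumerate(fingertips)
                      for b in fingertips[i + 1:])

    actor_model = None  # retained data set

    def on_command_executed(fingertips):
        global actor_model
        actor_model = salient_features(fingertips)

    def matches_actor(fingertips, tol=0.1):
        probe = salient_features(fingertips)
        return (actor_model is not None
                and len(probe) == len(actor_model)
                and all(abs(p - m) <= tol for p, m in zip(probe, actor_model)))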

US Pat. No. 9,710,067

METHOD AND APPARATUS FOR MANIPULATING CONTENT IN AN INTERFACE

ATHEER, INC., Mountain V...

1. A machine implemented method for use in at least one of virtual reality and augmented reality, the method comprising:
defining, in a processor of a hand-held device, a positional data space with respect to a display of the hand-held device,
wherein the positional data space comprises at least two dimensions and substantially coincides with at least a region of
real space visualized through the display, and wherein the display of the hand-held device is configured to be positioned
in front of a user's eyes;

establishing, in said processor, a field of view visible to the user via the display, wherein the field of view is in said
positional data space;

defining, via the processor, a first domain of said positional data space with respect to the display, wherein the first domain
includes said field of view of the user;

defining, via the processor, a second domain of said positional data space with respect to the display and said field of view,
said second domain being substantially distinct from said first domain and substantially excluding said field of view;

defining, via the processor, a bridge in said positional data space with respect to the display and said field of view, said
bridge comprising a spatial boundary between said first and second domains and contiguous with at least a portion of said
first domain and at least a portion of said second domain, wherein the bridge allows motion of an entity between said first
and second domains, and wherein the entity is at least one of a virtual reality entity and an augmented reality entity;

detecting, with a sensor of the hand-held device, whether a physical end-effector moves across the bridge while engaged with
the entity, wherein the physical end-effector is provided by the user, and wherein

when said entity is in said first domain, if the sensor detects the physical end-effector moving across said bridge while
engaged with said entity, moving, via the processor, said entity across said bridge into said second domain;

when said entity is in said first domain, if the sensor does not detect the physical end-effector engaged with the entity
and said entity moves across said bridge, moving said entity across said bridge but within said first domain;

when said entity is in said second domain, if the sensor detects the physical end-effector moving across said bridge while
engaged with said entity, moving, via the processor, said entity into said first domain;

when said entity is in said second domain, if the sensor does not detect the physical end-effector moving across the bridge
and the entity is not engaged with said physical end-effector, not moving, via the processor, said entity into said first
domain; and

based on data detected by the sensor, visibly outputting said entity to the display when said entity is in said first domain,
but not visibly outputting said entity to said display when said entity is in said second domain.
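
The four "wherein" clauses reduce to a small rule table: engaged motion across the bridge changes domain, unengaged motion does not, and only first-domain entities are drawn. A compact sketch under those assumptions:

    def update_domain(domain, crossed_bridge, engaged):
        """domain is 'first' or 'second'; returns the entity's new domain."""
        if crossed_bridge and engaged:
            # engaged crossing moves the entity over the bridge, either way
            return 'second' if domain == 'first' else 'first'
        # an unengaged entity may move, but never changes domain
        return domain

    def is_output(domain):
        # the entity is visibly output only while in the first domain
        return domain == 'first'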

US Pat. No. 9,606,359

METHOD AND APPARATUS FOR CONTROLLING FOCAL VERGENCE OF OPTICAL CONTENT

ATHEER, INC., Mountain V...

1. An apparatus, comprising:
a first optic comprising a first lens;
a see-through display; and
a second optic comprising a second lens;
wherein:
said first optic is adapted to receive optical environment content from an environment external to said apparatus and deliver
said optical environment content to said see-through display;

said see-through display is adapted to deliver optical output content to said second optic, and to receive said optical environment
content and deliver said optical environment content to said second optic;

said second optic is adapted to receive said optical output content and said optical environment content and deliver said
optical output content and said optical environment content to a viewing position;

said first optic is adapted to alter a focal vergence of said optical environment content; and
said second optic is adapted to alter said focal vergence of said optical environment content, and to alter a focal vergence
of said optical output content;

such that said focal vergence of said optical output content and said focal vergence of said optical environment content are
alterable substantially independently.

US Pat. No. 10,002,208

METHOD FOR INTERACTIVE CATALOG FOR 3D OBJECTS WITHIN THE 2D ENVIRONMENT

Atheer, Inc., Mountain V...

1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising:
receiving, with a processor, from a user, an import request to import the two-dimensional environment to be used as a background for the three-dimensional model;
importing, with the processor, based on the import request, the two-dimensional environment;
receiving, with the processor, from the user, a perspective request to provide a perspective to the two-dimensional environment, wherein the perspective request includes a selection of intersection points and lines connecting the intersection points in the two-dimensional environment that provides the perspective of the two-dimensional environment;
removing, based upon the perspective request, one or more objects within the two-dimensional environment and displaying the two-dimensional environment without the one or more removed objects;
receiving, with the processor, from the user, a scale request to provide a scale to the two-dimensional environment;
receiving, with the processor, from the user, an interactive catalog request to provide an interactive catalog;
providing, with the processor, the interactive catalog based upon the interactive catalog request;
wherein the interactive catalog includes one or more three-dimensional models of objects, each of the one or more three-dimensional models of objects superimposable upon the two-dimensional environment;
receiving, with the processor, from the user, a superimposing request to superimpose a three-dimensional model of an object from the interactive catalog onto the two-dimensional environment;
superimposing, with the processor, based on the superimposing request, the three-dimensional model of the object onto the two-dimensional environment with a scale and a perspective of the three-dimensional model of the object relative to the scale of the two-dimensional environment and the perspective of the two-dimensional environment; and
for each of the one or more removed objects, generating a replacement three-dimensional model of the removed object, and superimposing within the two-dimensional environment the replacement three-dimensional model of the removed object in a same position and perspective occupied by the removed object before removal from the two-dimensional environment.

US Pat. No. 9,924,091

APPARATUS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES

Atheer, Inc., Mountain V...

1. An apparatus, comprising:
a housing configured to be wearable by a user;
a first image sensor coupled to the housing, the first image sensor having a first focal length and a first field of view;
a second image sensor coupled to the housing, the second image sensor having a second focal length and a second field of view,
wherein:

the first field of view and the second field of view overlap in a region proximate the user;
the first field of view and the second field of view are substantially similar;
the second focal length is greater in length than the first focal length;
the first image sensor is configured to capture a first image; and
the second image sensor is configured to capture a second image; and
a processor operably coupled to the first image sensor and the second image sensor, wherein the processor is configured to:
receive the first image from the first image sensor;
receive the second image from the second image sensor;
define a set of elements comprising a first element of the first image and a second element of said second image, wherein
the first element substantially corresponds to the second element;

determine a first rate of change of an image property of the first element, wherein the first rate of change is indicative
of a first focus level of the first element;

determine a second rate of change of the image property of the second element, wherein the second rate of change is indicative of
a second focus level of the second element; and

assign the set of elements to at least one category in view of a relative difference between the first rate of change and
the second rate of change.
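
A hedged sketch of the categorization step, using variance of horizontal pixel differences as the "rate of change" focus proxy; this family of patents does not prescribe a particular measure, so the proxy, margin, and labels below are assumptions:

    import numpy as np

    def focus_measure(patch):
        # in-focus patches change faster pixel-to-pixel than blurred ones
        return float(np.var(np.diff(patch.astype(float), axis=1)))

    def categorize(element_short_focal, element_long_focal, margin=5.0):
        r1 = focus_measure(element_short_focal)  # first image element
        r2 = focus_measure(element_long_focal)   # second image element
        if r2 - r1 > margin:
            return "background"   # sharper at the longer focal length
        if r1 - r2 > margin:
            return "foreground"
        return "indeterminate"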

US Pat. No. 9,710,110

METHOD AND APPARATUS FOR APPLYING FREE SPACE INPUT FOR SURFACE CONSTRAINED CONTROL

ATHEER, INC., Mountain V...

1. A machine-implemented method, comprising:
establishing a data entity in a processor, said data entity being adapted to:
define a surface constrained input standard in said processor;
define a surface constrained input response in said processor;
accept a surface constrained input;
determine in said processor whether said surface constrained input satisfies said surface constrained input standard; and
execute said surface constrained input response if said surface constrained input satisfies said surface constrained input
standard;

establishing a free space input standard in said processor;
sensing with a sensor a free space input;
communicating said free space input to said processor;
determining in said processor whether said free space input satisfies said free space input standard;
if said free space input satisfies said free space input standard:
generating in said processor a virtual surface constrained input satisfying said surface constrained input standard; and
communicating said virtual surface constrained input to said data entity so as to invoke said data entity to execute said
surface constrained input response in response to said free space input.

US Pat. No. 9,557,822

METHOD AND APPARATUS FOR DISTINGUISHING FEATURES IN DATA

ATHEER, INC., Mountain V...

1. A machine implemented method for controlling a device through hand inputs, comprising:
establishing a depth image comprising a plurality of pixels in a processor in communication with said device;
defining said hand in said depth image with said processor, comprising:
establishing a depth value for said pixels of said depth image;
establishing a depth value standard distinguishing said hand in said depth image based on said depth value for said pixels;
establishing a plurality of test boundary pixels collectively comprising a boundary for said hand within said depth image,
and determining a next of said plurality of test boundary pixels at least partially from a current of said test boundary
pixels, and for each of said test boundary pixels:

establishing eight dominant directions;
establishing a property matrix comprising said depth value for a three by three configuration of pixels centered on and excluding
said test boundary pixel;

establishing a three by three dominant direction matrix for each of said dominant directions, each said dominant direction
matrix centered on and excluding said test boundary pixel and comprising weighting factors of:

8 in said dominant direction;
4s 45 degrees offset from said dominant direction;
2s 90 degrees offset from said dominant direction;
1s 135 degrees offset from said dominant direction;
0 180 degrees offset from said dominant direction;
for each dominant direction matrix, multiplying each value thereof with a corresponding depth value of said property matrix
and summing products thereof to yield a dominant direction value;

determining a test inward direction for said hand relative to said test boundary pixel by comparing said dominant direction
values;

determining a test hand pixel in said depth image displaced at least one pixel from said test boundary pixel in said test
inward direction;

comparing said depth value of said test hand pixel to said depth value standard;
if said depth value of said test hand pixel satisfies said depth value standard, identifying said test hand pixel as belonging
to said hand;

if said pixels identified as belonging to said hand comprise a substantially continuous trace disposed inward from said boundary,
identifying a portion of said depth image enclosed by said trace as belonging to said hand;

determining with said processor at least one of a configuration and a motion of said hand;
identifying with said processor a control command associated with said at least one of said configuration and said motion
of said hand; and

calling said control command with said processor, so as to control said device.
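
The dominant-direction machinery is concrete enough to code directly: for each of eight directions, a 3x3 weight matrix (8, 4, 2, 1, 0 by 45-degree offset, center excluded) is element-wise multiplied with the 3x3 depth neighborhood and summed, and the eight sums are compared to pick the inward direction. A sketch, assuming a numpy 3x3 depth array; whether the largest or smallest sum is "inward" depends on the depth convention, and the claim only says the values are compared:

    import numpy as np

    OFFSETS = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]        # eight dominant directions
    ANGLES = {o: np.degrees(np.arctan2(o[1], o[0])) % 360 for o in OFFSETS}
    WEIGHT = {0: 8, 45: 4, 90: 2, 135: 1, 180: 0}       # by offset from dominant

    def direction_value(depth3x3, dominant):
        total = 0.0
        for o in OFFSETS:                    # center pixel is excluded
            off = abs(ANGLES[o] - ANGLES[dominant]) % 360
            off = min(off, 360 - off)        # angular distance, 0..180 degrees
            total += WEIGHT[round(off)] * depth3x3[1 + o[1], 1 + o[0]]
        return total

    def inward_direction(depth3x3):
        # assuming the hand is nearer the sensor (smaller depth values),
        # the smallest weighted sum points into the hand
        return min(OFFSETS, key=lambda d: direction_value(depth3x3, d))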

US Pat. No. 9,964,764

VISUAL TRAINING DEVICES, SYSTEMS, AND METHODS

Atheer Labs, Inc., Mount...

1. A visual training aid, comprising:
an eyewear article including a display monitor that at least partially blocks a user's view, the eyewear article configured to be worn by a user moving along a competitive training course;
an image generator mounted to the eyewear article in a position to display an image on the display monitor that is viewable by the user;
a processor in data communication with the image generator;
a global positioning system in data communication with the processor;
a computer readable medium in data communication with the processor; and
a camera mounted to the eyewear article in a position to capture competitive training course image data,
wherein the global positioning system generates current position data of the eyewear article,
wherein the processor is programmed with instructions for selecting from the computer readable medium time-stamped past position data of the eyewear article on the competitive training course, for correlating the time-stamped past position data of the eyewear article with the current position data of the eyewear article, and for displaying on the display monitor a dynamically updated visual image of at least one person representing at least one virtual competitor based on the correlation between past position data and current position data while the eyewear article moves along the competitive training course, and
wherein the camera is configured to capture and display on the display monitor a portion of the competitive training course as competitive training course image data corresponding to where the user is looking.
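
The "virtual competitor" correlation is, at its simplest, interpolation of a past time-stamped GPS track at the current elapsed time. A sketch with an assumed track format:

    import bisect

    def ghost_position(past_track, elapsed):
        """past_track: time-sorted list of (t, lat, lon) from a prior run."""
        times = [p[0] for p in past_track]
        i = bisect.bisect_left(times, elapsed)
        if i == 0:
            return past_track[0][1:]
        if i == len(past_track):
            return past_track[-1][1:]
        (t0, la0, lo0), (t1, la1, lo1) = past_track[i - 1], past_track[i]
        f = (elapsed - t0) / (t1 - t0)
        return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))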

US Pat. No. 9,700,202

SYSTEM AND METHOD FOR IMPROVING THE PERIPHERAL VISION OF A SUBJECT

ATHEER, INC., Mountain V...

1. A method comprising:
providing visual central content for display, the central content including a central visually discernible characteristic;
providing visual peripheral content for display, the peripheral content including a peripheral visually discernible characteristic;
displaying the central content to a central vision of the subject; and
displaying the peripheral content to a peripheral vision of the subject;
detecting input from the subject regarding the central content and the peripheral content;
assessing whether the subject correctly identifies the central content and the peripheral content.

US Pat. No. 9,684,820

METHOD AND APPARATUS FOR SUBJECT IDENTIFICATION

ATHEER, INC., Mountain V...

1. A method, comprising:
image-capturing a plurality of substantially two-dimensional learning images of a learning face, each of said learning images
exhibiting a learning face aspect and a learning face illumination, said learning face aspects and said learning face illuminations
varying among said learning images;

communicating said learning images to a processor;
determining in said processor a substantially three-dimensional learning model of said learning face from said learning images,
said learning model exhibiting said learning face aspects and said learning face illuminations;

image-capturing a substantially two-dimensional gallery image of each of a plurality of gallery faces, each of said gallery
images exhibiting a gallery face aspect and a gallery face illumination, said gallery face aspects and said gallery face illuminations
being at least substantially consistent among said gallery images;

communicating said gallery images to said processor;
determining in said processor a substantially three-dimensional gallery model of each of said gallery faces from said gallery
images, said gallery models exhibiting respective gallery face aspects and gallery face illuminations;

image-capturing a substantially two-dimensional query image of a query face, said query image exhibiting a query face aspect
and a query face illumination, at least one of said query face aspect and said query face illumination being at least substantially
inconsistent with said substantially consistent gallery face aspects and gallery face illuminations among said gallery images;

communicating said query image to said processor;
determining in said processor a substantially three-dimensional query model of said query face from said query image, said
query model exhibiting said query face aspect and said query face illumination;

determining in said processor a pre-transform for said query model in combination with said learning model so as to yield
a transformed query model, wherein a transformed query face aspect of said transformed query model approaches correspondence
with said substantially consistent gallery face aspects and a transformed query face illumination of said transformed query
model approaches correspondence with said substantially consistent gallery face illuminations;

determining a transform in said processor as at least substantially an inverse of said pre-transform, such that a transformed
gallery face aspect of a transformed gallery model approaches correspondence with said query face aspect of said query model
and a transformed gallery face illumination approaches correspondence with said query face illumination;

applying said transform to at least one of said gallery models in said processor to yield at least one said transformed gallery
model;

for each of said at least one transformed gallery models, determining in said processor a substantially two-dimensional transformed
gallery image exhibiting said transformed gallery face aspect and said transformed gallery face illumination;

comparing said at least one transformed gallery image against said query image in said processor so as to determine whether
said query face is said gallery face in said at least one transformed gallery image.
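
The pre-transform/inverse structure can be miniaturized into linear algebra: fit a map taking query-condition landmarks toward gallery conditions, invert it, and apply the inverse to the gallery. The sketch below collapses aspect and illumination into a single affine map purely for brevity; a real system would model pose and lighting separately, so treat every name here as an assumption:

    import numpy as np

    def fit_pre_transform(query_pts, gallery_pts):
        # least-squares affine map: query conditions -> gallery conditions
        Q = np.hstack([np.asarray(query_pts, float),
                       np.ones((len(query_pts), 1))])
        M, *_ = np.linalg.lstsq(Q, np.asarray(gallery_pts, float), rcond=None)
        return M                                  # shape (d + 1, d)

    def invert_affine(M):
        # the claimed transform is substantially the inverse of the pre-transform
        A, t = M[:-1], M[-1]
        Ainv = np.linalg.inv(A)
        return Ainv, -t @ Ainv                    # gallery -> query conditions

    def apply_transform(points, Ainv, shift):
        return np.asarray(points, float) @ Ainv + shift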

US Pat. No. 10,013,578

APPARATUS FOR PROCESSING WITH A SECURE SYSTEM MANAGER

Atheer, Inc., Mountain V...

1. A head mounted display, comprising:
a body adapted to be worn on a user's head;
a processor adapted for executing executable instructions;
at least one input unit in communication with said processor;
at least one output unit in communication with said processor, said at least one output unit comprising at least one graphical display disposed proximate at least one of said user's eyes;
a data storage unit in communication with said processor;
a plurality of non-secure data entities, comprising:
a non-secure program instantiated on said processor, said non-secure program comprising non-secure executable instructions;
a non-secure data store disposed on said data storage unit, said non-secure data store comprising non-secure executable instructions stored therein;
a non-secure input, said non-secure input being adapted to input to said head mounted display via said at least one input unit;
a non-secure output, said non-secure output being adapted to output from said head mounted display via said at least one output unit;
a plurality of secure data entities, comprising:
a communication monitor instantiated on said processor, said communication monitor comprising secure executable instructions, said communication monitor being adapted to detect communication among any of at least one of said secure data entities and at least one of said non-secure data entities;
a permitted instruction record instantiated on said processor;
a first prohibitor instantiated on said processor, said first prohibitor comprising said secure executable instructions, said first prohibitor being adapted to prohibit execution of said non-secure executable instructions on said at least one secure data entity unless said non-secure executable instructions are recorded in said permitted instruction record;
a prohibited instruction record instantiated on said processor; and
a second prohibitor instantiated on said processor, said second prohibitor comprising said secure executable instructions, said second prohibitor being adapted to prohibit execution of said non-secure executable instructions on said at least one of said non-secure data entities if said non-secure executable instructions are recorded in said prohibited instruction record;
a secure data store disposed on said data storage unit, said secure data store comprising secure executable instructions stored therein;
a secure input, said secure input being adapted to input to said head mounted display via said at least one input unit;
a secure output, said secure output being adapted to output from said head mounted display via said at least one output unit;
a loader adapted to instantiate said communication monitor, said permitted instruction record, said first prohibitor, said prohibited instruction record, and said second prohibitor on said processor from said secure executable instructions stored in said secure data store, and to instantiate said non-secure program on said processor from said non-secure executable instructions in said non-secure data store;
wherein:
said permitted instruction record and said prohibited instruction record are secure data entities;
said permitted instruction record excludes all said non-secure executable instructions reading from any of said secure data entities; and
said permitted instruction record excludes all said non-secure executable instructions writing to any of said secure data entities.

US Pat. No. 9,967,459

METHODS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES

Atheer, Inc., Mountain V...

1. A method, comprising:
obtaining a first image, the first image having a first focal length and a first field of view;
obtaining a second image, the second image having a second focal length and a second field of view;
defining a set of elements comprising a first element of the first image and a second element of the second image, wherein the first element substantially corresponds to the second element;
determining a first rate of change of an image property of the first element;
determining a second rate of change of the image property of the second element; and
assigning the set of elements to at least one category in view of a relative difference between the first rate of change and the second rate of change.

US Pat. No. 9,703,383

METHOD AND APPARATUS FOR MANIPULATING CONTENT IN AN INTERFACE

ATHEER, INC., Mountain V...

1. A machine implemented method for use in at least one of virtual reality and augmented reality, the method comprising:
defining, in a processor of a head mounted display unit, a positional data space with respect to a display of the head mounted
display unit, wherein the positional data space comprises at least two dimensions and substantially coincides with at least
a region of real space visualized through the display, wherein the display of the head mounted display unit is configured
to be positioned in front of a user's eyes;

establishing, in said processor, a field of view visible to the user via the display, wherein the field of view is in said
positional data space;

defining, with the processor, a first domain of said positional data space with respect to the display, wherein the first
domain includes the field of view of the user;

defining, with the processor, a second domain of said positional data space relative to the display and said field of view,
said second domain being substantially distinct from said first domain and substantially excluding said field of view;

defining, via the processor, a bridge in said positional data space with respect to the display and said field of view, said
bridge comprising a spatial boundary between said first and second domains and contiguous with at least a portion of said
first domain and at least a portion of said second domain, wherein the bridge allows motion of an entity between said first
and second domains, and wherein the entity is at least one of a virtual reality entity and an augmented reality entity;

detecting, with a sensor of the head mounted display unit, whether a physical effector moves across the bridge while engaged
with the entity, wherein the physical effector is provided by the user, and wherein

when said entity is in said first domain, if the sensor detects the physical effector moving across said bridge while engaged
with said entity, moving, via the processor, said entity across said bridge into said second domain;

when said entity is in said first domain, if the sensor does not detect the physical effector and said entity moves across
said bridge, moving, via the processor, said entity across said bridge but within said first domain;

when said entity is in said second domain, if the sensor detects the physical effector moving across said bridge while
engaged with said entity, moving, via the processor, said entity into said first domain;

when said entity is in said second domain, if the sensor does not detect the physical effector moving across the bridge
and the entity is not engaged with said physical effector, not moving, via the processor, said entity into said first domain;
and

based on data detected by the sensor, visibly outputting said entity to the display when said entity is in said first domain,
but not visibly outputting said entity to said display when said entity is in said second domain.

US Pat. No. 10,048,760

METHOD AND APPARATUS FOR IMMERSIVE SYSTEM INTERFACING

Atheer, Inc., Mountain V...

1. A method comprising:
defining, by a processor of a head-mounted device, a first hand gesture by a user for a first augmented-reality user interface;
sensing, by a sensor of the head-mounted device, the first hand gesture in a three-dimensional environment;
in response to sensing the first hand gesture, generating, by the processor, the first augmented-reality user interface, wherein the first augmented-reality user interface includes:
a first manipulation site configured to receive a first input from a first digit of a hand of the user; and
a second manipulation site configured to receive a second input from a second digit of the hand of the user;
displaying, by a display of the head-mounted device, the first manipulation site at a first point proximate to and aligned with the first digit of the hand; and
displaying, by the display of the head-mounted device, the second manipulation site at a second point proximate to and aligned with the second digit of the hand.
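
The display step is simple to state in code: one manipulation site per tracked digit, offset slightly so the site sits proximate to and aligned with the fingertip. Coordinates and the offset are illustrative:

    SITE_OFFSET = 0.02  # metres in front of the fingertip, toward the display

    def layout_sites(fingertips):
        """fingertips: {'index': (x, y, z), 'thumb': (x, y, z), ...}"""
        return {digit: (x, y, z - SITE_OFFSET)
                for digit, (x, y, z) in fingertips.items()}

    sites = layout_sites({"index": (0.10, 0.05, 0.40),
                          "thumb": (0.06, 0.02, 0.42)})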

US Pat. No. 10,045,718

METHOD AND APPARATUS FOR USER-TRANSPARENT SYSTEM CONTROL USING BIO-INPUT

Atheer, Inc., Mountain V...

1. An apparatus, comprising:
a vehicle adapted to be worn by an individual without restricting activity of the individual;
a first sensor disposed on the vehicle, the first sensor to sense a cardiac wave of the individual with substantially no dedicated sensing action required from the individual and with sufficient sensitivity to identify the individual using a characteristic heart function of the body of the individual;
a second sensor disposed on the vehicle, the second sensor to measure a variable input with substantially no dedicated sensing action required from the individual;
a processor coupled to the first sensor, wherein the processor is to:
identify a standard cardiac wave associated with the individual;
receive the variable input from the second sensor;
adjust the standard cardiac wave associated with the individual in view of the variable input to obtain an adjusted standard cardiac wave;
compare data representative of the cardiac wave with data representative of the adjusted standard cardiac wave associated with the individual; and
in response to data representative of the cardiac wave matching the data representative of the adjusted standard cardiac wave, perform a first subject service associated with the cardiac wave and the individual without any direct interaction by the individual with the apparatus.
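
A hedged sketch of the adjustment step: the stored cardiac template is time-warped by a measured heart-rate variable before matching, so exertion does not defeat identification. The resampling and correlation threshold are simplifications, not the patent's signal pipeline:

    import numpy as np

    def adjust_template(template, rest_bpm, current_bpm):
        # resample the beat template to the currently measured beat period
        n = max(2, int(len(template) * rest_bpm / current_bpm))
        x_old = np.linspace(0.0, 1.0, len(template))
        return np.interp(np.linspace(0.0, 1.0, n), x_old, template)

    def matches(sensed_beat, template, rest_bpm, current_bpm, threshold=0.9):
        adjusted = adjust_template(template, rest_bpm, current_bpm)
        m = min(len(adjusted), len(sensed_beat))
        r = np.corrcoef(adjusted[:m], np.asarray(sensed_beat)[:m])[0, 1]
        return r >= threshold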

US Pat. No. 10,019,845

METHOD AND APPARATUS FOR CONTENT ASSOCIATION AND HISTORY TRACKING IN VIRTUAL AND AUGMENTED REALITY

Atheer, Inc., Mountain V...

2. A method comprising:
establishing a first data entity via a data entity establisher instantiated on a processor;
establishing a first state for said first data entity via a state establisher instantiated on said processor;
establishing a second state for said first data entity via said state establisher, said second state being distinct from said first state;
establishing a second data entity via said data entity establisher, said second data entity being distinct from said first data entity;
establishing a third state for said second data entity via said state establisher, said third state being distinct from said first and second states;
wherein each of said states comprises a state time and a plurality of state properties at least substantially corresponding to a respective of said data entities substantially at said state time and comprising a state spatial arrangement thereof;
wherein a first state time of said first state is substantially different from a second state time of said second state;
wherein a third state time of said third state is substantially different from said first state time of said first state;
storing said first and second data entities and said first, second, and third states in a data store in communication with said processor via a storer instantiated on said processor;
retrieving from said data store and outputting to an output in communication with said processor via an outputter instantiated on said processor at a same output time:
a first iteration of said first data entity exhibiting at least a portion of said first state and a second iteration of said first data entity exhibiting at least a portion of said second state, said output time being substantially different from said first and second state times; and
said first iteration of said first data entity exhibiting said at least said portion of said first state and a first iteration of said second data entity exhibiting at least a portion of said third state, said output time being substantially different from said first and third state times.

US Pat. No. 10,013,138

METHOD AND APPARATUS FOR SECURE DATA ENTRY USING A VIRTUAL INTERFACE

Atheer, Inc., Mountain V...

1. A method, comprising:
generating, by a processor, a virtual data entry interface for a head-mounted display;
generating, by the processor, a first input configuration of the virtual data entry interface for the head-mounted display to securely receive a first input from a viewer;
displaying, by the head-mounted display, the virtual data entry interface with the first input configuration at a defined focus distance and a defined location relative to an eye of the viewer to securely display the virtual data entry interface, wherein:
the defined focus distance is a distance such that the virtual data entry interface is in focus to the eye of the viewer and is not in focus to an eye of another individual;
the defined location is a location on the head-mounted display that is viewable to the eye of the viewer and is not viewable to the eye of the other individual;
receiving, by a sensor, the first input from the viewer, the first input corresponding to the virtual data entry interface with the first input configuration;
in response to receiving the first input, automatically generating, by the processor, a second input configuration for the virtual data entry interface for the head-mounted display to securely receive a subsequent second input from the viewer, wherein the first input configuration is different than the second input configuration such that the viewer performs a second action to input the second input that is different than a first action to input the first input;
displaying, by the head-mounted display, the virtual data entry interface with the second input configuration at the defined focus distance and the defined location relative to the viewer; and
receiving, by the sensor, the second input from the viewer, the second input corresponding to the virtual data entry interface with the second input configuration.
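
The security of the method rests on re-randomizing the input configuration after every accepted input, so an onlooker who reconstructs the viewer's motion still cannot recover the entry. A minimal sketch with a digit keypad; sensing and rendering are stand-ins:

    import random

    KEYS = list("0123456789")

    def new_configuration():
        layout = KEYS[:]
        random.shuffle(layout)  # second configuration differs from the first
        return layout

    def read_digit(layout, sensed_key_index):
        # the sensed position only means something under the current layout
        return layout[sensed_key_index]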

US Pat. No. 9,996,636

METHOD FOR FORMING WALLS TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Atheer, Inc., Mountain V...

1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising:
receiving, with a processor, from a user, an import request to import the two-dimensional environment to be used as a background for the three-dimensional model;
importing, with the processor, based on the import request, the two-dimensional environment;
receiving, with the processor via a user interface, from the user, a ground plane input comprising a plurality of ground plane points selected by the user based on a visual appearance of the two-dimensional environment to define a ground plane corresponding to a horizontal plane of the two-dimensional environment;
automatically generating, with the processor, and displaying, via a display unit, a scale and perspective overlay forming a three-dimensional environment for the two-dimensional environment based on the ground plane input;
receiving, with the processor via the user interface, from the user, input of two or more wall-floor intersection points selected by the user on the two-dimensional environment, wherein at least two of the two or more wall-floor intersection points are located at a wall-floor intersection of a same wall with the ground plane;
automatically generating, with the processor, and displaying, via the display unit, a wall plane, representing a vertical plane of the two-dimensional environment orthogonal to the horizontal plane, in the scale and perspective overlay positioned at the at least two wall-floor intersection points;
receiving, with the processor, from the user, a superimposing request to superimpose the three-dimensional model of the object onto the two-dimensional environment; and
superimposing, with the processor, and displaying, via the display unit, the three-dimensional model of the object on the scale and perspective overlay for the two-dimensional environment based on the ground plane input and the wall-floor intersection points.
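
A possible realization of the scale-and-perspective overlay is a homography fitted to the user's clicked ground-plane points; four point pairs suffice (OpenCV's cv2.getPerspectiveTransform computes the same map). A plain DLT sketch, with all names assumed:

    import numpy as np

    def ground_homography(image_pts, plan_pts):
        """Map four clicked ground-plane pixels to top-down plan coordinates."""
        A = []
        for (x, y), (u, v) in zip(image_pts, plan_pts):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, float))
        return Vt[-1].reshape(3, 3)  # image -> plan homography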

US Pat. No. 9,971,853

METHOD FOR REPLACING 3D OBJECTS IN 2D ENVIRONMENT

Atheer, Inc., Mountain V...

1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising:
receiving, with a processor, from a user, an import request to import the two-dimensional environment to be used as a background for the three-dimensional model;
importing, with the processor, based on the import request, the two-dimensional environment;
receiving, with the processor, from the user, a superimposing request to superimpose a three-dimensional model of a smart object onto the two-dimensional environment;
superimposing, with the processor, the three-dimensional model of the smart object onto the two-dimensional environment based on the superimposing request with a scale and a perspective based on the two-dimensional environment, where the smart object includes a playback animation;
saving, with the processor, a resulting image in a storage device communicatively coupled to the processor, the resulting image including the three-dimensional model of the smart object superimposed onto the two-dimensional environment; and
visualizing, via a video display unit communicatively coupled to the processor, the resulting image including the three-dimensional model of the smart object, wherein the playback animation of the smart object is displayed with a scale and a perspective based on the scale and the perspective of the three-dimensional model of the smart object and a position of the three-dimensional model of the smart object within the two-dimensional environment.

US Pat. No. 9,894,269

METHOD AND APPARATUS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES

Atheer, Inc., Mountain V...

1. A method, comprising steps of:
receiving a first image, the first image having a first focal length and a first field of view;
receiving a second image, the second image having a second focal length and a second field of view, wherein the second focal
length is longer than the first focal length, and wherein the second field of view is similar to the first field of view;

defining a set of elements comprising a first element in the first image and a second element in the second image, wherein
the first element substantially corresponds to the second element;

determining a value of the first focal length for the first element in the first image;
determining a value of the second focal length for the second element in the second image;
assigning the set of elements as background elements when the value of the second focal length is greater than or equal to
the value of the first focal length;

assigning the set of elements as foreground elements when the value of the second focal length is less than the value of the
first focal length; and

determining a relative distance between the first element and the second element.

US Pat. No. 9,881,026

METHOD AND APPARATUS FOR IDENTIFYING INPUT FEATURES FOR LATER RECOGNITION

Atheer, Inc., Mountain V...

1. A method, comprising:
defining an actor input for an actor;
defining a command associated with said actor input;
defining a region for said actor input;
detecting said actor input in said region;
executing said command in response to said actor input;
identifying at least one structural geometric salient feature of said actor from said actor input associated with said executed
command;

defining a structural geometric actor model of said actor from said at least one structural geometric salient feature;
retaining a data set comprising at least one of said at least one structural geometric salient feature or said structural
geometric actor model; and

using said data set to identify subsequent actor inputs without requiring further training process from said actor for identifying
said salient features of said actor.

US Pat. No. 9,665,987

METHOD AND APPARATUS FOR SELECTIVELY PRESENTING CONTENT

ATHEER, INC., Mountain V...

1. A machine implemented method, comprising:
obtaining input data;
generating output data from said input data;
determining a status of at least a first contextual factor;
determining whether said status of said first contextual factor meets a first standard;
if said status of said first contextual factor meets said first standard:
determining a position and an orientation of an eye of a viewer;
applying a first transformation to said output data, said first transformation comprising:
defining a first output region substantially corresponding with a first portion of a retina of said eye;
limiting output of said output data to said first output region, or excluding output of said output data from said first output
region;

outputting said output data in a form suitable for display to a viewer;
updating said determination of said position and said orientation of said eye subsequent to a change in at least one of said
position and said orientation of said eye;

maintaining said substantial correspondence between said first output region and said first portion of said retina without
perceptibly changing said output data responsive to said change in said at least one of said position and said orientation
of said eye.
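
Operationally, the claim pins the output region to a fixed retinal patch: whenever gaze moves, the region is re-anchored in display coordinates while the content itself stays untouched. A minimal sketch, with the gaze source and offset as assumptions:

    REGION_OFFSET = (8.0, 0.0)  # degrees from the fovea; illustrative

    def place_region(gaze_deg):
        gx, gy = gaze_deg
        return (gx + REGION_OFFSET[0], gy + REGION_OFFSET[1])

    def on_gaze_change(gaze_deg, draw_region_at):
        # same output data, new display position: the retinal region it
        # covers is maintained without perceptibly changing the content
        draw_region_at(place_region(gaze_deg))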

US Pat. No. 10,216,271

METHOD AND APPARATUS FOR INDEPENDENT CONTROL OF FOCAL VERGENCE AND EMPHASIS OF DISPLAYED AND TRANSMITTED OPTICAL CONTENT

Atheer, Inc., Santa Clar...

1. An apparatus, comprising:
a first optic comprising a plurality of first optic regions;
a see-through display comprising a plurality of display regions;
a second optic comprising a plurality of second optic regions;
an environment sensor adapted to sense a distance to an environment external to said apparatus along a target path;
wherein:
said first optic regions, said display regions, and said second optic regions correspond such that if said target path is oriented through a target display region of said display regions, said target path is also oriented through a corresponding target first optic region of said first optic regions and a corresponding target second optic region of said second optic regions;
said first optic is adapted to receive optical environment content from said environment in said first optic regions and deliver said optical environment content to said see-through display correspondingly in said display regions;
said see-through display is adapted to receive said optical environment content from said first optic in said display regions and deliver said optical environment content to said second optic correspondingly in said second optic regions, and to deliver optical display content in said display regions to said second optic correspondingly in said second optic regions;
said second optic is adapted to receive said optical environment content and said optical display content from said see-through display in said second optic regions and deliver said optical environment content and said optical display content to an optical content receiver;
said first optic is adapted to alter a focal vergence of said optical environment content in said first optic regions; and
said second optic is adapted to alter said focal vergence of said optical environment content and to alter a focal vergence of said optical display content in said second optic regions; and
such that said focal vergence of said optical display content as delivered to said optical content receiver by said second optic and said focal vergence of said optical environment content as delivered to said optical content receiver by said second optic are alterable substantially independently of one another.

US Pat. No. 9,990,043

GESTURE RECOGNITION SYSTEMS AND DEVICES FOR LOW AND NO LIGHT CONDITIONS

Atheer Labs, Inc., Mount...

1. A system, comprising:
a thermographic camera having a first field of view, the thermographic camera to capture an infrared image within the first field of view;
an optical camera having a second field of view, the optical camera to detect an optical image, wherein the first field of view and the second field of view are substantially the same; and
a processor coupled to the thermographic camera, the processor to:
receive the infrared image from the thermographic camera;
receive the optical image from the optical camera;
determine whether a light condition of the optical image is a normal light condition, a low light condition, or a no light condition;
in response to the low light condition or the no light condition:
detect infrared radiation in the infrared image;
define a first readable zone within the infrared image, wherein the first readable zone is a portion of the first field of view that includes an image of a body part of a user;
identify a first gesture made by the user based on the infrared radiation within the first readable zone to obtain first gesture information; and
execute a first gesture recognition command associated with the first gesture information; and
in response to the normal light condition:
define a second readable zone within the optical image, wherein the second readable zone is a portion of the second field of view that includes the image of the body part of the user;
identify a second gesture made by the user within the second readable zone using optical recognition to obtain second gesture information; and
execute a second gesture recognition command associated with the second gesture information.
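
A hedged sketch of the claimed branching follows: the optical frame's mean brightness classifies the light condition, and gesture recognition then runs on either the thermal or the optical stream within a readable zone. The brightness thresholds and the bounding-box zone finder are placeholder choices.

```python
import numpy as np

LOW, NO, NORMAL = "low", "no", "normal"

def classify_light(optical: np.ndarray,
                   no_light_max: float = 5.0,
                   low_light_max: float = 40.0) -> str:
    """Classify a grayscale frame (0..255) by mean brightness."""
    mean = float(optical.mean())
    if mean <= no_light_max:
        return NO
    if mean <= low_light_max:
        return LOW
    return NORMAL

def readable_zone(image: np.ndarray, threshold: float) -> tuple[slice, slice]:
    """Bounding box of pixels above threshold, e.g. warm skin in an IR frame."""
    ys, xs = np.nonzero(image > threshold)
    if len(ys) == 0:
        return slice(0, 0), slice(0, 0)
    return slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1)

def process_frame(ir: np.ndarray, optical: np.ndarray, recognize, execute):
    """recognize and execute are caller-supplied: a gesture classifier and a
    command dispatcher, respectively."""
    if classify_light(optical) in (LOW, NO):
        gesture = recognize(ir[readable_zone(ir, threshold=90)])
    else:
        gesture = recognize(optical[readable_zone(optical, threshold=30)])
    if gesture is not None:
        execute(gesture)
```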

US Pat. No. 9,977,844

METHOD FOR PROVIDING A PROJECTION TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Atheer, Inc., Mountain V...

1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising:
receiving, with a processor, from a user, an import request to import the two-dimensional environment to be used as a background for the three-dimensional model;
importing, with the processor, based on the import request, the two-dimensional environment;
receiving, with the processor, from the user, a superimposing request to superimpose the three-dimensional model of the object onto the two-dimensional environment;
superimposing, with the processor, the three-dimensional model of the object onto the two-dimensional environment based on the superimposing request;
superimposing, with the processor, a shadow of the three-dimensional model of the object onto one or more planes of the two-dimensional environment, the three-dimensional model positioned away from the one or more planes; and
displaying, with the processor, the three-dimensional model of the object on the two-dimensional environment and the shadow on the one or more planes of the two-dimensional environment for guiding a positioning of the three-dimensional model of the object within the two-dimensional environment.
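
For the shadow-guided positioning, below is a minimal sketch of one standard way to cast a drop shadow: each model vertex is projected along the light direction onto a plane. The directional light and plane parameters are illustrative assumptions.

```python
import numpy as np

def project_shadow(vertices: np.ndarray,
                   light_dir: np.ndarray,
                   plane_point: np.ndarray,
                   plane_normal: np.ndarray) -> np.ndarray:
    """vertices: (N, 3). Returns the (N, 3) shadow polygon on the plane."""
    d = light_dir / np.linalg.norm(light_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    # Ray p + t*d hits the plane when dot(p + t*d - plane_point, n) == 0.
    t = ((plane_point - vertices) @ n) / (d @ n)
    return vertices + t[:, None] * d

# A unit square floating 2 m above the floor, lit from overhead at a slant:
square = np.array([[0, 2, 0], [1, 2, 0], [1, 2, 1], [0, 2, 1]], float)
shadow = project_shadow(square,
                        light_dir=np.array([0.3, -1.0, 0.2]),
                        plane_point=np.zeros(3),
                        plane_normal=np.array([0.0, 1.0, 0.0]))
```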

US Pat. No. 10,070,054

METHODS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES

Atheer, Inc., Mountain V...

1. A method, comprising:
receiving a first image with a first focal length and a first field of view;
receiving a second image with a second focal length and a second field of view, wherein the second focal length is longer than the first focal length, and wherein the second field of view is similar to the first field of view;
defining a set of elements comprising a first element in the first image and a second element in the second image, wherein the first element substantially corresponds to the second element;
determining a first rate of change of an image property of the first element, the first rate of change being indicative of a first focus level of the first element;
determining a second rate of change of the image property of the second element, the second rate of change being indicative of a second focus level of the second element; and
assigning the set of elements as background elements when the second focus level is greater than or equal to the first focus level.
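
A sketch of the focus-difference test follows, using variance of the Laplacian as one common proxy for the claimed "rate of change of an image property"; the block size and the sharpness measure are editorial choices, not the patent's.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(block: np.ndarray) -> float:
    """Variance of the Laplacian: a common proxy for local focus level."""
    return float(laplace(block.astype(float)).var())

def background_mask(near_img: np.ndarray, far_img: np.ndarray,
                    block: int = 16) -> np.ndarray:
    """near_img: grayscale frame focused at the shorter focal length;
    far_img: same field of view, focused farther away. Returns one boolean
    per block: True where the element is assigned to the background."""
    h, w = near_img.shape
    mask = np.zeros((h // block, w // block), bool)
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            # Background: the element is no sharper when focus is near.
            mask[by, bx] = sharpness(far_img[sl]) >= sharpness(near_img[sl])
    return mask
```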

US Pat. No. 9,916,681

METHOD AND APPARATUS FOR SELECTIVELY INTEGRATING SENSORY CONTENT

Atheer, Inc., Mountain V...

1. A method, comprising:
capturing an image representative of a real-world environment;
defining, by a processor, a reference point in the image;
determining a first set of data representative of an object in the image at the reference point;
determining a second set of data representative of a surface of the object;
identifying a first subset of data in the second set of data representative of a potential shadow on the surface of the object,
wherein the potential shadow indicates that a shadow may be located on at least a portion of the surface of the object based on
a feature of the second set of data;

identifying a second subset of data in the second set of data representative of a light source in the image based on the second
set of data;

determining an anticipated shadow on the surface of the object from the light source, wherein the anticipated shadow is a
shadow that may be located on the surface of the object based on an illumination pattern of the light source;

generating notional sensory content to augment the image of the real-world environment at the reference point;
determining that the potential shadow matches the anticipated shadow; and
in response to the potential shadow matching the anticipated shadow, applying the potential shadow onto the notional sensory
content.
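
A minimal sketch of the matching-and-application step, assuming an intersection-over-union test stands in for whatever comparison the patent contemplates:

```python
import numpy as np

def masks_match(potential: np.ndarray, anticipated: np.ndarray,
                iou_threshold: float = 0.5) -> bool:
    """Compare the detected shadow mask with the shadow anticipated from the
    identified light source."""
    inter = np.logical_and(potential, anticipated).sum()
    union = np.logical_or(potential, anticipated).sum()
    return union > 0 and inter / union >= iou_threshold

def apply_shadow(content: np.ndarray, shadow_mask: np.ndarray,
                 darken: float = 0.6) -> np.ndarray:
    """Shade the notional content where the matched shadow falls."""
    out = content.astype(float)
    out[shadow_mask] *= darken
    return out.astype(content.dtype)
```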

US Pat. No. 10,216,355

METHOD FOR PROVIDING SCALE TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Atheer, Inc., Mountain V...

1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising:
receiving, from a user via a user interface of a user device, an import request to import the two-dimensional environment to be used as a background for the three-dimensional model;
importing, based on the import request, the two-dimensional environment;
receiving a height of the user device relative to a ground plane of the two-dimensional environment;
calculating a scale and a perspective for the three-dimensional model of the object in the two-dimensional environment based on the height of the user device and an angle formed between the ground plane and a light ray projected from the user device to the ground plane;
calculating a space geometry and scale of the two-dimensional environment based on the height and angle;
calculating a first position for the three-dimensional model of the object in the two-dimensional environment based on the height and the angle for correctly placing the three-dimensional model of the object in the two-dimensional environment with respect to the space geometry and scale of the two-dimensional environment;
receiving, from the user via the user interface of the user device, a superimposing request to superimpose the three-dimensional model of the object onto the two-dimensional environment;
superimposing the three-dimensional model of the object onto the two-dimensional environment at the first position with the scale and perspective based on the superimposing request; and
responsive to receiving an adjusted position of the user device, displaying the three-dimensional model of the object superimposed onto the two-dimensional environment with an updated scale and updated perspective relative to an updated space geometry and scale of the two-dimensional environment, wherein the updated scale and updated perspective of the three-dimensional model and the updated space geometry and scale of the two-dimensional environment are updated according to the adjusted position of the three-dimensional model of the object and the adjusted position of the user device.
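
The height-and-angle geometry reduces to elementary trigonometry, sketched below: a ray making angle theta with the ground plane from height h lands h / tan(theta) meters away along the floor, and a pinhole scale formula (an assumption here, not taken from the patent) converts that distance to on-screen size.

```python
import math

def ground_distance(height_m: float, angle_deg: float) -> float:
    """Horizontal distance to the ground point hit by a ray that makes
    angle_deg with the ground plane, cast from height_m above it."""
    return height_m / math.tan(math.radians(angle_deg))

def pixels_per_meter(focal_px: float, distance_m: float) -> float:
    """Pinhole model: apparent on-screen size of one meter at distance_m."""
    return focal_px / distance_m

h, theta = 1.5, 30.0                                  # device height, ray angle
d_ground = ground_distance(h, theta)                  # ~2.60 m along the floor
d_slant = h / math.sin(math.radians(theta))           # 3.00 m along the ray itself
print(round(d_ground, 2), round(pixels_per_meter(1000.0, d_slant), 1))
# 2.6 333.3 -> the model is placed ~2.6 m out and drawn at ~333 px per meter
```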

US Pat. No. 10,147,232

METHOD AND APPARATUS FOR SELECTIVELY PRESENTING CONTENT

Atheer, Inc., Mountain V...

1. An apparatus, comprising:
a first sensor to measure a first contextual factor of a movement of a viewer to obtain first contextual data;
a second sensor to measure a second contextual factor of an eye of the viewer to obtain second contextual data;
a processor coupled to the first sensor and the second sensor; and
a see-through display coupled to the processor, wherein the processor is to:
receive the first contextual data and the second contextual data;
determine whether the first contextual data meets a contextual data standard;
when the first contextual data does not meet the contextual data standard:
generate a first set of display data to display at the see-through display;
display the first set of display data at the see-through display;
when the first contextual data meets the contextual data standard:
generate a second set of display data to display at the see-through display, wherein the first set of display data is different than the second set of display data;
determine a disposition of the eye of the viewer relative to the see-through display using the second contextual data;
define a first display region of the see-through display that corresponds to a first portion of a field of view of the viewer;
define a first subset of the display data to display in the first display region;
define a second display region of the see-through display that corresponds to a second portion of the field of view of the viewer;
define a second subset of the display data to display in the second display region;
display the first subset of the display data at the first display region;
display the second subset of the display data at the second display region, such that the second portion of the field of view of the viewer is substantially unobstructed by the second subset of the display data;
identify a change in the first portion of the field of view corresponding to a change in the disposition of the eye;
in response to the change in the first portion of the field of view, update the first display region to obtain an updated first display region that corresponds to the change in the first portion of the field of view;
identify a change in the second portion of the field of view corresponding to the change in the disposition of the eye;
in response to the change in the second portion of the field of view, update the second display region to obtain an updated second display region that corresponds to the change in the second portion of the field of view;
when the first contextual data does not meet the contextual data standard, display the first set of display data at the see-through display; and
when the first contextual data meets the contextual data standard, display the first subset of the display data at the updated first display region and display the second subset of the display data at the updated second display region, such that the second portion of the field of view of the viewer is substantially unobstructed by the second subset of the display data.
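
A condensed sketch of the claim's control flow, with the sensors, the Rect regions, and the 100-unit movement standard as illustrative placeholders:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def generate_first_set() -> str:
    return "full-screen content"                     # stub content

def generate_second_set() -> tuple[str, str]:
    return "primary content", "peripheral content"   # stub content

def regions_for_eye(eye_offset: tuple[int, int]) -> tuple[Rect, Rect]:
    """Shift both display regions with the eye so each stays registered to
    its portion of the viewer's field of view."""
    dx, dy = eye_offset
    return (Rect(400 + dx, 300 + dy, 200, 150),      # first display region
            Rect(50 + dx, 50 + dy, 120, 80))         # second display region

def frame_plan(movement: float, eye_offset: tuple[int, int],
               movement_standard: float = 100.0):
    """Return (content, region) pairs to draw this frame."""
    if movement < movement_standard:                 # standard not met
        return [(generate_first_set(), Rect(0, 0, 800, 600))]
    first, second = regions_for_eye(eye_offset)      # re-derived on every change
    primary, secondary = generate_second_set()
    return [(primary, first), (secondary, second)]
```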

US Pat. No. 10,116,839

METHODS FOR CAMERA MOVEMENT COMPENSATION FOR GESTURE DETECTION AND OBJECT RECOGNITION

Atheer Labs, Inc., Mount...

1. A method, comprising:
receiving a video stream comprised of a sequential series of frames from a camera, wherein the video stream is captured at a frame rate;
receiving motion data from a motion sensor that is physically associated with the camera to detect motion of the camera, wherein the motion data is captured at a sampling rate;
associating a first frame of the sequential series of frames with a portion of the motion data that is captured approximately contemporaneously with the first frame, the portion of the motion data indicative of an amount of movement of the camera when the camera captured the first frame;
when the sampling rate is greater than the frame rate, aggregating a first sample of the motion data and a second sample of the motion data captured between the first frame of the sequential series of frames and a second frame of the sequential series of frames to obtain an aggregated movement value representative of the motion of the camera when the camera captured the first frame;
comparing the aggregated movement value with a first threshold for the amount of movement of the camera;
when the aggregated movement value does not exceed the first threshold, accepting the first frame from the video stream; and
when the aggregated movement value exceeds the first threshold, rejecting the first frame from the video stream.
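
A sketch of the gating logic, assuming the aggregate is a simple sum of sample magnitudes (the claim does not specify the aggregation): motion samples that land between consecutive frame timestamps are pooled, and the frame is kept only when the pooled movement stays under the threshold.

```python
import numpy as np

def gate_frames(frames: list, frame_times: np.ndarray,
                motion_times: np.ndarray, motion_mags: np.ndarray,
                threshold: float = 0.8) -> list:
    """frames and frame_times are index-aligned; motion_times/motion_mags are
    the higher-rate sensor samples and their movement magnitudes."""
    accepted = []
    for i, frame in enumerate(frames):
        t0 = frame_times[i]
        t1 = frame_times[i + 1] if i + 1 < len(frames) else np.inf
        # All motion samples captured contemporaneously with this frame.
        in_window = (motion_times >= t0) & (motion_times < t1)
        aggregated = motion_mags[in_window].sum()
        if aggregated <= threshold:        # steady enough: keep the frame
            accepted.append(frame)
    return accepted
```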

US Pat. No. 10,163,264

METHOD AND APPARATUS FOR MULTIPLE MODE INTERFACE

Atheer, Inc., Mountain V...

1. A method, comprising:
generating, by a processor, a world space, the world space being substantially bound by a portion of a physical world environment relative to a head mounted display, wherein:
translational movement of the world space substantially corresponds to translational movement by the head mounted display from a first point in space to a second point in space, and
rotational movement of the world space substantially corresponds to rotational movement by the head mounted display where the head mounted display remains at the first point in space and rotates about an axis;
generating, by the processor, a sphere space, the sphere space being a finite area substantially surrounding the head mounted display;
translational movement of the sphere space corresponds to the translational movement by the head mounted display from the first point in space to the second point in space;
rotational movement of the sphere space substantially corresponds to the rotational movement by the head mounted display where the head mounted display remains at the first point in space and rotates about the axis;
generating, by the processor, a display space, the display space being a finite plane disposed in front of the head mounted display;
translational movement of the display space does not correspond to the translational movement of the head mounted display from the first point in space to the second point in space;
rotational movement of the display space does not correspond to the rotational movement of the head mounted display where the head mounted display remains at the first point in space and rotates about the axis;
receiving, from a sensor, a first constructive translational movement input representative of a non-translational movement of a body of a user where the body of the user does not move from the first point in space to the second point in space and the non-translational movement mimics a movement of the body of the user translationally moving in the world space;
executing, by the processor, a first translational instruction associated with the first constructive translational movement input, wherein the first translational instruction comprises moving the body of the user a first distance that corresponds with an actual movement of the body of the user from the first point in space to the second point in space in the world space relative to the head mounted display;
sensing, by the sensor, a presence of a world space resizing stimulus, the world space resizing stimulus indicating an amount to reduce a size of the world space relative to the head mounted display;
executing, by the processor, a world space resizing instruction to reduce the size of the world space by the amount indicated by the world space resizing stimulus;
receiving, from the sensor, a second constructive translational movement input; and
executing, by the processor, a second translational instruction associated with the second constructive translational movement input, wherein the second translational instruction comprises moving the body of the user a second distance that corresponds with an actual movement of the body of the user from the first point in space to the second point in space in the reduced-size world space.
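
A minimal sketch of the three spaces and the resized constructive translation follows; the class names, the flag-based space model, and the linear scaling rule are editorial assumptions.

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class Space:
    tracks_translation: bool   # does the space move with head translation?
    tracks_rotation: bool      # does the space move with head rotation?
    origin: np.ndarray = field(default_factory=lambda: np.zeros(3))

def make_spaces() -> dict:
    return {
        "world":   Space(tracks_translation=True,  tracks_rotation=True),
        "sphere":  Space(tracks_translation=True,  tracks_rotation=True),
        "display": Space(tracks_translation=False, tracks_rotation=False),
    }

class MultiModeInterface:
    def __init__(self):
        self.spaces = make_spaces()
        self.world_scale = 1.0        # reduced by the resizing stimulus
        self.user_world_pos = np.zeros(3)

    def resize_world(self, amount: float):
        """The resizing stimulus indicates how much to shrink the world."""
        self.world_scale *= amount    # e.g. 0.1 shrinks the world tenfold

    def constructive_translation(self, gesture_distance: float,
                                 direction: np.ndarray):
        """A non-translational body movement that mimics walking: the user
        stays at the same physical point, but their world-space position
        advances; in a reduced-size world the same gesture covers more
        world distance."""
        self.user_world_pos += (gesture_distance / self.world_scale) * direction
```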

US Pat. No. 10,133,356

METHOD AND APPARATUS FOR CONTROLLING A SYSTEM VIA A SENSOR

Atheer, Inc., Mountain V...

1. A method, comprising:
establishing, by a processor, a first saturation profile, the first saturation profile defining a first intensity level of one or more colors for a first object within at least a portion of a field of view of a sensor;
establishing, by the processor, an executable command for the first saturation profile;
receiving, from the sensor, an input, wherein the input is an image or a video;
determining, by the processor, a second saturation profile for at least a portion of the input, the second saturation profile defining a second intensity level of one or more colors for a second object within the image or the video of the input;
comparing, by the processor, the first saturation profile to the second saturation profile; and
executing, by the processor, the executable command when the first saturation profile matches the second saturation profile.
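
One plausible reading of a saturation profile, sketched below, is mean HSV saturation per hue band over an object's pixels; the three hue bands and the match tolerance are assumptions for illustration.

```python
import colorsys

import numpy as np

def saturation_profile(rgb_region: np.ndarray, bands: int = 3) -> np.ndarray:
    """rgb_region: (N, 3) floats in 0..1. Returns mean saturation per hue band."""
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb_region])
    profile = np.zeros(bands)
    for b in range(bands):
        in_band = (hsv[:, 0] >= b / bands) & (hsv[:, 0] < (b + 1) / bands)
        profile[b] = hsv[in_band, 1].mean() if in_band.any() else 0.0
    return profile

def maybe_execute(stored: np.ndarray, observed: np.ndarray,
                  command, tolerance: float = 0.1):
    """Fire the established executable command when the profiles match."""
    if np.all(np.abs(stored - observed) <= tolerance):
        command()
```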

US Pat. No. 10,126,881

METHOD AND APPARATUS FOR APPLYING FREE SPACE INPUT FOR SURFACE CONSTRAINED CONTROL

Atheer, Inc., Mountain V...

1. A method, comprising:
determining, by a processor, a surface constrained input standard for a surface constrained input, wherein the surface constrained input is an input generated proximate to a physical surface;
determining a free space input boundary to receive a free space input, wherein the free space input boundary is a virtual boundary within an augmented reality construct that bounds a virtual space with a dimension substantially similar to the physical surface;
determining, by the processor, a free space input standard for a free space input, wherein the free space input standard comprises:
a first substandard with a first input parameter;
a second substandard with a second input parameter;
sensing, by a sensor, a free space input within the free space input boundary;
determining that a first portion of the free space input meets the first input parameter;
determining that a second portion of the free space input meets the second input parameter;
identifying the surface constrained input that is associated with the free space input;
generating a device input that corresponds to the surface constrained input; and
sending the device input to a device not configured to receive the free space input.
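
A hedged sketch of the translation step: fingertip positions inside the virtual boundary are flattened onto the boundary plane and re-emitted as 2D touch coordinates, with the two substandards modeled as simple depth-crossing and in-plane-travel checks (both placeholder parameters).

```python
import numpy as np

def to_touch_events(path: np.ndarray, plane_z: float = 0.0,
                    press_depth: float = 0.02, min_travel: float = 0.05):
    """path: (N, 3) fingertip positions in boundary coordinates (meters).
    Returns a list of (x, y) touch points for the surface-only device,
    or [] if the free space input never meets the first substandard."""
    crossed = path[:, 2] < plane_z + press_depth        # first substandard: press
    if not crossed.any():
        return []
    pressed = path[crossed]
    travel = np.linalg.norm(pressed[-1, :2] - pressed[0, :2])
    if travel < min_travel:                             # second substandard: swipe
        return [(float(pressed[0, 0]), float(pressed[0, 1]))]   # emit a tap
    return [(float(x), float(y)) for x, y in pressed[:, :2]]    # emit a drag
```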

US Pat. No. 10,248,284

METHOD AND APPARATUS FOR INTERFACE CONTROL WITH PROMPT AND FEEDBACK

Atheer, Inc., Santa Clar...

1. A method, comprising:
establishing, by a processor, user inputs that include a one-finger input, a two-finger input, a pinch-in input, and a pinch-out input, said user inputs comprising free space hand gesture inputs;
establishing, by the processor, a base form, a hover form, an engaged press form, and an engaged press-and-hold form for said one-finger input;
establishing, by the processor, a base form, an engaged press form, and an engaged swipe form for said two-finger input;
establishing, by the processor, a base form and an engaged form for said pinch-out input;
establishing, by the processor, a base form and an engaged form for said pinch-in input;
establishing, by the processor, a plurality of graphical cursors and associating each of said plurality of graphical cursors with at least one of said user inputs, comprising:
a base form one-finger input graphical cursor comprising a hollow circle with dashed crosshair marks disposed around a periphery thereof, associated with said base form of said one-finger input;
a hover form one-finger input graphical cursor comprising a hollow circle with contracted dashed crosshair marks disposed around a periphery thereof, associated with said hover form of said one-finger input;
an engaged form one-finger press input graphical cursor comprising a filled circle, associated with said engaged press form of said one-finger input;
an engaged form one-finger press-and-hold input graphical cursor comprising a filled circle with at least one concentric circle thereabout, associated with said engaged press-and-hold form of said one-finger input;
a base form two-finger input graphical cursor comprising a hollow circle with arrow marks disposed around a periphery thereof, associated with said base form of said two-finger input;
an engaged form two-finger press input graphical cursor comprising a filled circle with arrow marks disposed around a periphery thereof, associated with said engaged press form of said two-finger input;
an engaged form two-finger swipe input graphical cursor comprising a filled circle with arrow marks disposed around a periphery thereof with at least one of said arrow marks comprising at least two arrows, associated with said engaged swipe form of said two-finger input;
a base form pinch-out input graphical cursor comprising a hollow dashed circle with arrow marks disposed around a periphery thereof and pointing outward therefrom, associated with said base form of said pinch-out input;
an engaged form pinch-out input graphical cursor comprising a filled dashed circle with arrow marks disposed around a periphery thereof and pointing outward therefrom with each of said arrow marks comprising at least two arrows, associated with said engaged form of said pinch-out input;
a base form pinch-in input graphical cursor comprising a hollow dashed circle with arrow marks disposed within a periphery thereof and pointing inward therefrom, associated with said base form of said pinch-in input;
an engaged form pinch-in input graphical cursor comprising a filled dashed circle with arrow marks disposed within a periphery thereof and pointing inward therefrom with each of said arrow marks comprising at least two arrows, associated with said engaged form of said pinch-in input; and
wherein each of said plurality of graphical cursors is graphically distinctive from other cursors and is graphically indicative of said user inputs associated therewith, so as to identify said user inputs to a viewer of a display;
in said processor, anticipating a user input;
outputting to said display said base form of said cursor associated with said anticipated user input;
detecting, by a sensor, a user hover;
if said detected user hover corresponds with said anticipated user input associated with an outputted cursor and said outputted cursor comprises said hover form, outputting to said display said hover form of said cursor associated with said anticipated user input, so as to confirm to said viewer a match between said anticipated user input and said detected user hover;
detecting, by the sensor, the user input; and
if said detected user input corresponds with said anticipated user input associated with said outputted cursor and said outputted cursor comprises said engaged form, outputting to said display said engaged form of said cursor associated with said anticipated user input, so as to confirm to said viewer a match between said anticipated user input and said detected user input.
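
The cursor bookkeeping reads naturally as a lookup table keyed by input and form, sketched below; the glyph strings abbreviate the claim's cursor descriptions, and the table structure itself is an editorial assumption.

```python
# Cursor forms per input, abbreviated from the claim's drawings.
CURSORS = {
    ("one-finger", "base"):   "hollow circle, dashed crosshairs",
    ("one-finger", "hover"):  "hollow circle, contracted crosshairs",
    ("one-finger", "press"):  "filled circle",
    ("one-finger", "hold"):   "filled circle with concentric ring",
    ("two-finger", "base"):   "hollow circle, arrow marks",
    ("two-finger", "press"):  "filled circle, arrow marks",
    ("two-finger", "swipe"):  "filled circle, double-arrow marks",
    ("pinch-out", "base"):    "hollow dashed circle, outward arrows",
    ("pinch-out", "engaged"): "filled dashed circle, outward double arrows",
    ("pinch-in", "base"):     "hollow dashed circle, inward arrows",
    ("pinch-in", "engaged"):  "filled dashed circle, inward double arrows",
}

def cursor_for(anticipated_input: str, detected_form: str = "base") -> str:
    """Fall back to the base form until the sensor confirms a hover or an
    engaged gesture matching the anticipated input."""
    return CURSORS.get((anticipated_input, detected_form),
                       CURSORS[(anticipated_input, "base")])

# The interface anticipates a one-finger input, then a hover is detected:
print(cursor_for("one-finger"))           # base form cursor
print(cursor_for("one-finger", "hover"))  # hover form confirms the match
```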

US Pat. No. 10,241,638

METHOD AND APPARATUS FOR A THREE DIMENSIONAL INTERFACE

Atheer, Inc., Santa Clar...

1. A method, comprising:
generating, by a processor, a three dimensional interface;
generating a virtual object in the three dimensional interface;
generating an interaction zone individually associated with the virtual object, wherein the interaction zone includes a space surrounding the virtual object that is distinct from a space occupied by the virtual object;
sensing, by a sensor, a location of a stimulus relative to the virtual object;
determining an uncertainty level of the sensor in determining the location of the stimulus;
in response to the uncertainty level exceeding a threshold level, increasing a size of the interaction zone in relation to the virtual object, wherein the increased size of the interaction zone increases a precision level of the sensor in sensing a response associated with the stimulus relative to the virtual object;
in response to the uncertainty level not exceeding the threshold level, decreasing the size of the interaction zone in relation to the virtual object, wherein the decreased size of the interaction zone decreases the precision level of the sensor in sensing the response associated with the stimulus relative to the virtual object; and
in response to sensing the stimulus, executing an instruction associated with the response.
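
A minimal sketch of uncertainty-driven zone sizing, assuming a linear growth rule (the claim does not specify one): the selectable halo around the object widens when the sensor's location estimate is noisy and tightens when it is precise.

```python
import numpy as np

def interaction_radius(object_radius: float, uncertainty: float,
                       threshold: float = 0.01, gain: float = 3.0) -> float:
    """Grow the zone when the sensor's uncertainty exceeds the threshold;
    otherwise keep it tight against the object."""
    if uncertainty > threshold:
        return object_radius + gain * uncertainty    # widen the zone
    return object_radius + uncertainty               # shrink toward the object

def stimulus_hits(stimulus_pos: np.ndarray, object_pos: np.ndarray,
                  object_radius: float, uncertainty: float) -> bool:
    """Does the sensed stimulus fall within the current interaction zone?"""
    zone = interaction_radius(object_radius, uncertainty)
    return bool(np.linalg.norm(stimulus_pos - object_pos) <= zone)
```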