US Pat. No. 9,204,022

CAMERA MOUNTING ASSEMBLY

GoPro, Inc., San Mateo, ...

1. A camera housing comprising:
an enclosure body configured to enclose a camera, the enclosure body comprising a top face, a bottom face, a left face, a
right face, and a front face, the bottom face comprising an inner hinge structure, and the top face comprising a first fastening
structure;

an enclosure door comprising an outer hinge structure on a first edge of the enclosure door and a second fastening structure
on a second edge of the enclosure door opposite the first edge, the outer hinge structure and the inner hinge structure forming
a hinge when coupled such that the enclosure door is pivotally attached to the enclosure body about the hinge, and the second
fastening structure detachably coupling to the first fastening structure such that the enclosure door is secured to the enclosure
body in a closed position when the first fastening structure is coupled to the second fastening structure; and

a camera mounting assembly coupled to the front face of the enclosure body, the camera mounting assembly comprising an opening
within an inner-front surface of the camera mounting assembly configured to be placed over a camera lens of a camera when
the camera is enclosed within the enclosure body when the enclosure door is secured to the enclosure body in the closed position,
the camera mounting assembly further comprising an indentation at each of four corners of the camera mounting assembly, the
inner-front surface of the camera mounting assembly including a plurality of recessed channels each associated with an indentation,
each indentation and associated recessed channel configured to allow for the passage of light through the indentation and
associated recessed channel and incident upon the camera lens.

US Pat. No. 9,485,419

CAMERA SYSTEM ENCODER/DECODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
an encoder configured to, when the camera system is configured to operate in an accelerated capture mode, encode the received
image data to produce encoded image data;

a memory configured to store the encoded image data;
a decoder configured to, when the camera system is configured to operate in a standby mode:
access the encoded image data from the memory; and
decode the encoded image data to produce decoded image data; and
an output configured to:
when the camera system is configured to operate in a normal capture mode, output the received image data to the ISP; and
when the camera system is configured to operate in the standby mode, output the decoded image data to the ISP.

US Pat. No. 9,482,931

DETACHABLE CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mount comprising:
a ring base comprising a bottom surface, the bottom surface comprising attachment means configured to couple the camera mount
to a surface;

a floating base comprising:
a spine protruding from a top surface of the floating base;
a first arm protruding from a first side of the top surface of the floating base; and
a second arm protruding from a second side of the top surface of the floating base opposite the first side;
wherein the first arm, the second arm, and the spine are configured to secure a camera to the camera mount; and
a plurality of tabs coupling the ring base to the floating base such that the ring base surrounds the floating base and such
that a gap exists between the ring base and the floating base, one or more of the plurality of tabs configured to fracture
in response to an above-threshold impact force on the camera.

US Pat. No. 9,204,021

CAMERA MOUNTABLE ARM

GoPro, Inc., San Mateo, ...

1. A camera mountable arm comprising:
a plurality of segments including a first segment, a second segment, and a third segment, the third segment pivotally coupled
to the second segment by a first hinge mechanism, the second segment pivotally coupled to the first segment by a second hinge
mechanism, the first segment detachably coupled to a camera, the first segment comprising a recess extending substantially
along a length of the first segment; and

the arm configured to be operable in a plurality of positions, including:
a folded position, wherein the first, second, and third segments are aligned such that the second segment is received and
at least partially enclosed within the recess along the first segment, and wherein the third segment abuts the first segment
such that an outer surface of the third segment is substantially flush with an outer surface of the first segment; and

an outstretched position, wherein the third and second segments are separated by a first angular displacement at the first
hinge mechanism and the second and first segments are separated by a second angular displacement at the second hinge mechanism.

US Pat. No. 9,332,177

CREDENTIAL TRANSFER MANAGEMENT CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A camera, the camera comprising a processor and a non-transitory computer-readable storage medium containing instructions
that, when executed by the processor, cause the camera to:
capture an image of a smart device display, the smart device display displaying an encoded image, wherein wireless credentials
for a communication device configured to operate as a wireless access point are encoded within the encoded image;

decode the captured image to obtain the wireless credentials;
configure the camera to operate as a wireless station; and
connect to the communication device using the wireless credentials.
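
For illustration, a minimal Python sketch of the credential-transfer flow this claim recites, using OpenCV's QR decoder as a stand-in for the claim's generic decoding step; the payload format, file name, and connection stub are assumptions and are not part of the claim:

import cv2  # OpenCV's QRCodeDetector stands in for the claim's generic decoder

def obtain_wireless_credentials(capture_path):
    image = cv2.imread(capture_path)  # captured image of the smart device display
    payload, _, _ = cv2.QRCodeDetector().detectAndDecode(image)
    # Assumed payload format "SSID:<ssid>;PASS:<password>"; the claim does not
    # specify how the credentials are encoded within the displayed image.
    fields = dict(item.split(":", 1) for item in payload.split(";") if item)
    return fields.get("SSID"), fields.get("PASS")

ssid, password = obtain_wireless_credentials("display_capture.jpg")
# Switching the camera to station mode and associating with the access point
# is hardware specific and is left as a stub here.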

US Pat. No. 9,185,387

IMAGE BLUR BASED ON 3D DEPTH INFORMATION

GoPro, Inc., San Mateo, ...

1. A method for applying image blur based on depth information, comprising:
receiving a 3D image captured by a 3D camera, the 3D image comprising a plurality of objects and depth information for each
of the plurality of objects;

identifying one of the plurality of objects as a subject object;
determining a first distance between the 3D camera and the subject object based on the depth information associated with the
subject object;

determining a second distance between the subject object and a first additional object of the plurality of objects based on
the depth information associated with the first additional object and the depth information associated with the subject object;

receiving a virtual f-number and a virtual focal length;
calculating a first blur factor for the first additional object, the first blur factor based on the first distance, the second
distance, the virtual f-number, and the virtual focal length; and

applying a first image blur to the first additional object based on the first blur factor, wherein a magnitude of the applied
first image blur is greater if the first additional object is closer to the camera than the subject object than if the subject
object is closer to the camera than the first additional object.
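
As a worked illustration of a blur factor that depends on the two distances, the virtual f-number, and the virtual focal length, the Python sketch below uses a thin-lens circle-of-confusion approximation; the claim does not prescribe this particular formula:

def blur_factor(subject_distance, object_offset, f_number, focal_length, object_in_front):
    # All distances share the units of focal_length; object_in_front indicates
    # whether the additional object lies between the camera and the subject.
    object_distance = (subject_distance - object_offset if object_in_front
                       else subject_distance + object_offset)
    # Thin-lens defocus blur: for an equal offset, the result is larger when the
    # object is nearer the camera than the subject, as the claim requires.
    return (focal_length ** 2 / f_number) * abs(object_distance - subject_distance) / (
        object_distance * (subject_distance - focal_length))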

US Pat. No. 9,521,302

CAMERA SYSTEM WITH A SQUARE-PROFILE CAMERA

GoPro, Inc., San Mateo, ...

2. A camera system, comprising:
a camera, comprising:
a front face and a rear face each having a substantially square cross-section, the front face comprising a camera lens centered
on the front face, and the rear face comprising a communicative interface;

a top face, a bottom face, a left face, and a right face each having a substantially rectangular cross-section, the top face
comprising a user interface configured to allow a user of the camera to interact with the camera; and

a housing, comprising:
a housing body comprising four adjacent walls forming a band,
wherein the camera fits flush within an interior of the band when the camera is compressibly secured within the housing, and
wherein at least three of the four adjacent walls of the housing body consist of four integrally contiguous and orthogonal
legs forming a rectangular frame.

US Pat. No. 9,369,614

CAMERA MOUNTABLE ARM

GoPro, Inc., San Mateo, ...

1. A camera mountable arm comprising:
a first segment comprising a recess and an attachment mechanism configured to couple to a camera mount;
a second segment pivotally coupled to the first segment, a width of the second segment being less than a width of the first
segment; and

a handle pivotally coupled to the second segment, a width of the handle being greater than the width of the first segment;
the arm configured to be operable in a folded position, wherein the second segment is received and enclosed within the recess
of the first segment and wherein a portion of a first face of the handle abuts a parallel portion of a second face of the
first segment.

US Pat. No. 9,253,421

ON-CHIP IMAGE SENSOR DATA COMPRESSION

GoPro, Inc., San Mateo, ...

1. A method for compressing image data, the method comprising:
receiving pixel data representative of an image captured by an image sensor chip;
determining an amount of power available to the image sensor chip to output pixel data;
sorting the received pixel data into odd row pixel data and even row pixel data, and further sorting the odd row pixel data
and even row pixel data into odd column pixel data and even column pixel data;

selecting a data compression factor based on the determined amount of power available to the image sensor chip to output pixel
data;

compressing, by the image sensor chip, the sorted pixel data at a magnitude of compression selected based on the selected
data compression factor;

transmitting, by the image sensor chip, the compressed pixel data to a digital signal processor (DSP);
decompressing, by the DSP, the compressed pixel data; and
compressing, by the DSP, the decompressed pixel data into a digital image format.
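
A minimal NumPy sketch of the sorting and power-dependent compression steps described above; the parity convention, the power thresholds, and decimation standing in for compression are illustrative assumptions:

import numpy as np

def sort_and_compress(pixels, available_power_mw):
    # Sort the readout into the four row/column parity planes named in the claim
    # (rows and columns counted from 1, so array index 0 is treated as odd).
    planes = {
        ("odd_row", "odd_col"): pixels[0::2, 0::2],
        ("odd_row", "even_col"): pixels[0::2, 1::2],
        ("even_row", "odd_col"): pixels[1::2, 0::2],
        ("even_row", "even_col"): pixels[1::2, 1::2],
    }
    # Select a compression factor from the power available to output pixel data.
    factor = 4 if available_power_mw < 100 else 2 if available_power_mw < 300 else 1
    # Simple decimation stands in for the on-chip compression step.
    return {name: plane[::factor, ::factor] for name, plane in planes.items()}, factor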

US Pat. No. 9,122,133

CAMERA MOUNT FOR SPORTS BOARD

GoPro, Inc., San Mateo, ...

1. A mounting system comprising:
a top mount portion configured to detachably couple to an electronic device, the top mount portion comprising:
a screw hole component configured to receive a threaded screw, the screw hole component protruding perpendicularly outwards
from a surface of the top mount portion; and

a plurality of blade components each comprising a base portion protruding perpendicularly outwards from both the screw hole
component and the surface of the top mount portion and an edge portion coupled to the base portion and protruding perpendicularly
outwards from the screw hole component, each base portion comprising a first face and a second face, the first face parallel
to the second face, and the plurality of blade components configured to cut through a sports equipment object, wherein the
base portion is in direct contact with both the screw hole component and the surface of the top mount portion.

US Pat. No. 9,507,245

DETACHABLE CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mount comprising:
a mount base with a bottom surface comprising coupling means for coupling the mount base to an object;
a spine protruding from a top surface of the mount base;
an elevated surface protruding from the top surface of the mount base on either side of the spine such that a camera, when
coupled to the camera mount, abuts the elevated surface and not the top surface of the mount base;

a first flexible arm protruding from a first side of the top surface of the mount base, the first flexible arm comprising
a first slot separating the first flexible arm into a first portion and a second portion, each of the first portion and the
second portion configured to elastically flex outward and away from the spine in response to an impact force on a camera coupled
to the mount at least partially in a direction of the first portion or the second portion, respectively;

a second flexible arm protruding from a second side of the top surface of the mount base opposite the first side, the second
flexible arm comprising a second slot separating the second flexible arm into a third portion and a fourth portion, each of the third portion and the
fourth portion configured to elastically flex outward and away from the spine in response to an impact force on the camera
at least partially in a direction of the third portion or the fourth portion, respectively;

wherein the first flexible arm and the second flexible arm are configured to secure the camera to the camera mount.

US Pat. No. 9,325,916

CINEMATIC IMAGE BLUR IN DIGITAL CAMERAS BASED ON EXPOSURE TIMING MANIPULATION

GoPro, Inc., San Mateo, ...

1. A method for generating cinematic blur in a digital camera, the method comprising:
accessing a cinematic effect comprising a total exposure time for a digital image sensor of the digital camera and an effective
exposure time;

generating a pulse pattern for the image sensor comprising a plurality of pulse durations such that consecutive pulse durations
are separated by non-zero spacing durations, the sum of the plurality of pulse durations substantially equal to the total
exposure time and the sum of the plurality of pulse durations and spacing durations being substantially equal to or less than
the effective exposure time;

applying the generated pulse pattern to the image sensor;
capturing, by the image sensor, image information representative of light incident upon the image sensor, the image information
representative of light incident upon the image sensor during the pulse durations and not representative of light incident
upon the image sensor during the spacing durations; and

storing the captured image information.
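
A short Python sketch of one way to derive such a pulse pattern from the two exposure times; the even split across a fixed number of pulses is an illustrative choice, not something the claim requires:

def make_pulse_pattern(total_exposure_ms, effective_exposure_ms, num_pulses=8):
    # Pulses share the total exposure time equally, and the gaps between
    # consecutive pulses share the remainder of the effective exposure window,
    # so the two sums match the relationship recited in the claim.
    pulse_ms = total_exposure_ms / num_pulses
    gap_ms = (effective_exposure_ms - total_exposure_ms) / (num_pulses - 1)
    pattern = []
    for i in range(num_pulses):
        pattern.append(("expose", pulse_ms))
        if i < num_pulses - 1:
            pattern.append(("wait", gap_ms))
    return pattern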

US Pat. No. 9,268,200

CAMERA HOUSING

GoPro, Inc., San Mateo, ...

1. A camera housing comprising:
a frame configured to secure a camera along an outside perimeter of the camera, the frame including a first frame segment
and a second frame segment;

a first latch portion coupled to the first frame segment; and
a second latch portion coupled to the second frame segment and pivotally coupled to the first latch portion;
wherein the first frame segment and the second frame segment are spatially separable when the first latch portion and the
second latch portion are in an open configuration, and wherein the first frame segment abuts the second frame segment when
the first latch portion and the second latch portion are in a closed configuration.

US Pat. No. 9,204,041

ROLLING SHUTTER SYNCHRONIZATION

GoPro, Inc., San Mateo, ...

1. A multi-camera system for mitigating field of view (FOV) artifacts in 360-degree imaging, the multi-camera system comprising:
a plurality of cameras, each camera having a rolling shutter for enabling the capture of panoramic image data from a field
of view of the camera;

a first camera pair comprising a first camera positioned opposite a second camera, the first camera and the second camera
oriented to face outwards in substantially opposite directions;

a second camera pair comprising a third camera positioned opposite a fourth camera, the third camera and the fourth camera
oriented to face outwards in substantially opposite directions, wherein the first camera pair and the second camera pair are
oriented to face outwards in substantially perpendicular directions where at least a portion of the field of view of the third
camera overlaps a portion of the field of view of the first camera to form an overlap region;

a first rolling shutter and a second rolling shutter of the first camera pair configured to roll in substantially opposite
directions; and

a third rolling shutter and a fourth rolling shutter of the second camera pair configured to roll in substantially opposite
directions;

wherein the first camera is configured to configure the first camera, the second camera, the third camera, and the fourth
camera to capture image data at temporally proximate times, and to configure the rolling directions of the first rolling shutter,
the second rolling shutter, the third rolling shutter, and the fourth rolling shutter such that: the rolling direction of
the second rolling shutter is substantially opposite the rolling direction of the first rolling shutter, the rolling direction
of the third rolling shutter is substantially opposite the rolling direction of the second rolling shutter, and the rolling
direction of the fourth rolling shutter is substantially opposite the rolling direction of the third rolling shutter.

US Pat. No. 9,383,628

HUMIDITY PREVENTION SYSTEM WITHIN A CAMERA HOUSING

GoPro, Inc., San Mateo, ...

1. A humidity prevention system in a camera system, the humidity prevention system comprising:
an external shell of the camera system including a lens barrel, the lens barrel comprising a lens;
a front cover coupled to the external shell and coupled to the lens barrel;
an outer cover coupled to the front cover of the camera system;
a lens window coupled to the outer cover; and
a plurality of seals, each seal coupled to one or more of the external shell, the lens barrel, the front cover, the outer
cover, and the lens window, thereby forming an airtight cavity between the external shell, the lens barrel, the front cover,
the outer cover, and the lens window.

US Pat. No. 9,338,373

IMAGE SENSOR DATA COMPRESSION AND DSP DECOMPRESSION

GoPro, Inc., San Mateo, ...

1. A method for decompressing image data, the method comprising:
receiving, from an image sensor chip, compressed pixel data, the compressed pixel data comprising pixel data organized into
a plurality of categories and compressed by the image sensor chip at a magnitude of compression selected based on an amount of
battery power available to the image sensor chip, wherein each pixel of the compressed pixel data is organized into a first
category comprising one of odd column pixel data and even column pixel data and a second category comprising one of odd row
pixel data and even row pixel data;

decompressing, by a digital signal processor (DSP), the compressed pixel data to produce decompressed pixel data organized
into the plurality of categories;

combining the categories of decompressed pixel data to produce combined pixel data;
compressing the combined pixel data to produce a digital image comprising a digital image format; and
outputting the digital image.

US Pat. No. 9,288,413

IMAGE CAPTURE ACCELERATOR

GoPro, Inc., San Mateo, ...

1. A method for image capture acceleration in an image signal processor (“ISP”), comprising:
receiving image data captured by an image sensor array, the image data comprising a set of pixel bits, each pixel bit with
an associated bit depth;

in response to the ISP being configured to process data in an accelerated operating mode, bypassing demosaicing operations
performed by the ISP when the ISP is configured to process data in a standard operating mode;

converting the pixel bit depths of the pixel bits of the image data using one or more look-up tables;
reordering the converted image data into a format of the YUV color space;
encoding the transformed image data using one or more encoders; and
storing the encoded image data in a non-transitory computer-readable storage medium.

US Pat. No. 9,294,671

CONVERSION BETWEEN ASPECT RATIOS IN CAMERA

GoPro, Inc., San Mateo, ...

1. A method for converting between aspect ratios, comprising:
obtaining an input image having a source aspect ratio;
applying, by a processing device, a transformation to the input image to scale and warp the image to generate an output image
having a target aspect ratio, the target aspect ratio different than the source aspect ratio, the output image non-linearly
warped relative to the input image such that a distortion in the output image relative to the input image is greater in a
corner region of the output image than a center region of the output image, wherein applying the transformation comprises:

shifting a pixel of the input image along a first axis as a non-linear function of an original pixel position of the pixel
along the first axis, independently of an original pixel position of the pixel along a second axis perpendicular to the first
axis; and

shifting the pixel of the input image along the second axis as a non-linear function of the original pixel position of the
pixel along the first and second axes; and

outputting the output image having the target aspect ratio.
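
A minimal sketch of a coordinate mapping with the claimed structure, in which the horizontal shift depends only on the pixel's horizontal position while the vertical shift depends on both axes; the cubic terms are an assumed example of a non-linear function:

def warp_coordinates(x_norm, y_norm, strength=0.2):
    # Coordinates are normalized to [-1, 1] about the image center, so the
    # displacement, and therefore the distortion, grows toward the corners.
    x_out = x_norm + strength * x_norm ** 3            # non-linear in x, independent of y
    y_out = y_norm + strength * y_norm * x_norm ** 2   # non-linear function of x and y
    return x_out, y_out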

US Pat. No. 9,237,271

EDGE-BASED ELECTRONIC IMAGE STABILIZATION

GoPro, Inc., San Mateo, ...

1. A method for stabilizing a digital video, the method comprising:
receiving a reference video frame depicting a scene, the reference video frame depicting a reference point represented by
one or more pixels at a reference pixel location;

receiving a second frame depicting at least a portion of the scene, wherein movement of the reference point between the first
and second frame results in the reference point being depicted at a different pixel location in the second frame;

generating a stabilized second frame by non-uniformly shifting at least some points depicted in the second frame by an amount
based on distances of the points from an edge of the second frame such that a greater shift is applied to non-edge points
than edge points; and

generating a stabilized video comprising a sequence of frames including the reference frame and the stabilized second frame,
wherein depicted motion of the reference point between the first frame and the stabilized second frame is reduced relative
to depicted motion of the reference point between the first frame and the second frame.
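
A small Python sketch of the non-uniform shift, weighting each point's stabilizing shift by its distance from the frame border so that non-edge points move more than edge points; the linear ramp and margin value are illustrative assumptions:

def stabilization_shift(x, y, frame_width, frame_height, global_shift, margin=32):
    # Interior points receive the full stabilizing shift; points near the frame
    # border receive progressively less, so the stabilized frame keeps its edges.
    distance_to_edge = min(x, y, frame_width - 1 - x, frame_height - 1 - y)
    weight = min(1.0, distance_to_edge / margin)
    return global_shift[0] * weight, global_shift[1] * weight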

US Pat. No. 9,521,398

MODULAR CONFIGURABLE CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A method for capturing a video corresponding to a first channel of three-dimensional content, the method comprising:
determining an orientation of a camera sensor of a first camera as one of an upright orientation or an upside-down orientation,
wherein in the upright orientation, a top side of the first camera is above the bottom side of the first camera with respect
to a scene, and wherein in the upside-down orientation, the top side of the first camera is below the bottom side of the first
camera with respect to the scene;

initiating capture of the video on the first camera in response to receiving a synchronization signal, the video comprising
a sequence of frames, each frame comprising an array of pixels;

for each frame of the video, sequentially capturing by the camera sensor, pixels representing the scene in a first scan order
responsive to the determined orientation being the upright orientation, wherein sequentially capturing the pixels representing
the scene in the first order comprises sequentially capturing rows of pixels in an order from the top side of the first camera
to the bottom side of the first camera and capturing each row of pixels by sequentially capturing pixels in an order from
a left side of the first camera to a right side of the first camera;

for each frame of the video, sequentially capturing by the camera sensor, the pixels representing the scene in a second scan
order responsive to the determined orientation being the upside-down orientation, wherein sequentially capturing the pixels
representing the scene in the second order comprises sequentially capturing rows of pixels in an order from the bottom side
of the first camera to the top side of the first camera, and capturing each row of pixels by sequentially capturing pixels
in an order from the right side of the first camera to the left side of the first camera; and

storing the captured video of the first camera to a first memory of the first camera.
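
A compact Python sketch of the two scan orders keyed to the determined orientation; the generator form is an illustrative convenience:

def pixel_scan_order(rows, cols, upright):
    # Upright: rows from the top side down, pixels left to right. Upside-down:
    # both directions are reversed, as recited for the second scan order.
    row_indices = range(rows) if upright else range(rows - 1, -1, -1)
    col_indices = range(cols) if upright else range(cols - 1, -1, -1)
    for r in row_indices:
        for c in col_indices:
            yield r, c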

US Pat. No. 9,473,713

IMAGE TAPING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

1. A method comprising:
capturing a plurality of images with each camera in a camera array comprising a plurality of cameras, each image comprising
at least one portion overlapping with a corresponding portion of a corresponding image;

for each image, performing a first warp operation on the image such that each overlapping portion of the image is substantially
aligned with the corresponding overlapping portion of the corresponding image;

for each image, performing a second warp operation on the image to reduce distortion resulting from the first warp operation,
wherein a magnitude of the second warp operation on a portion of the image is based on a distance of the portion to the nearest
overlapping portion of the image, and wherein the magnitude of the second warp operation on a first portion is greater than
the magnitude of the second warp operation on a second portion closer to an overlapping portion than the first portion;

taping each warped image together based on the overlapping portions of the images to form a combined image; and
cropping the combined image to produce a rectangular final image.
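
An illustrative weighting for the second warp operation, increasing with distance from the nearest overlapping portion as the claim requires; the exponential falloff and its constant are assumptions:

import math

def second_warp_magnitude(distance_to_overlap_px, falloff_px=200.0):
    # Zero at the overlap region, approaching full strength far from it.
    return 1.0 - math.exp(-distance_to_overlap_px / falloff_px)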

US Pat. No. 9,462,186

INITIAL CAMERA MODE MANAGEMENT SYSTEM

GoPro, Inc., San Mateo, ...

1. A smart device for communicating with one or more cameras, the smart device comprising a processor and a non-transitory
computer-readable storage medium containing instructions that, when executed by the processor, cause the smart device to:
establish a connection between the smart device and one or more cameras configured to communicate with the smart device, each
of the one or more cameras configured to operate in an initial mode, the initial mode comprising a mode of the camera before
the camera receives communications from the smart device;

receive, from each of the one or more cameras while the camera is configured to operate in the initial mode, a communication
identifying the initial mode of the camera;

store each identified initial mode at the smart device;
after establishing the connection between the smart device and the one or more cameras, change, for each camera of a portion
of the one or more cameras, the mode of the camera from the initial mode of the camera to a second mode; and

in response to receiving a request from a camera of the portion of the one or more cameras to disconnect from the smart device:
retrieve the stored initial mode of the camera from the smart device; and
change the mode of the camera to the retrieved initial mode of the camera.

US Pat. No. 9,389,491

CAMERA MOUNT WITH SPRING CLAMP

GoPro, Inc., San Mateo, ...

1. A mounting system configured to detachably couple a camera system to an apparatus, comprising:
a spring clamp comprising:
a first arm comprising a first handle and a first jaw;
a second arm comprising a second handle and a second jaw, the second arm pivotally coupled to the first arm at a spring joint,
the first and second jaws forcibly separable into an open position in response to pressure on the first and second handles,
and the spring joint otherwise forcibly compressing the first and second jaws together into a closed position;

a strap comprising an inside surface and an outside surface, the strap fixedly coupled to the first arm and adjustably coupled
to the second arm, the strap configured to secure an apparatus by tightening around at least a portion of the apparatus when
the first and second jaws are in the closed position around at least the portion of the apparatus such that at least a portion
of the inside surface of the strap abuts a surface of the portion of the apparatus, such that at least a first portion of
the outside surface of the strap faces an inside surface of the first arm and an inside surface of the second arm, and such
that at least a second portion of the outside surface of the strap faces an outside surface of the second arm; and

a mounting component coupled to at least one of the arm components, the mounting component configured to couple to a camera
system.

US Pat. No. 9,485,422

IMAGE CAPTURE ACCELERATOR

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor configured to produce image data representative of light incident upon the image sensor during a capture interval;
an image signal processor (“ISP”) with a first input and a second input; and
an image capture accelerator integrated circuit chip (“ICA”) coupled between the image sensor and the ISP, the ICA comprising:
an input configured to receive the image data from the image sensor;
a first output configured to output image data at a first resolution, the first output coupled to the first input of the ISP;
a compression engine configured to decimate the received image data into a plurality of image sub-band components; and
a second output configured to output an image sub-band component, the second output coupled to the second input of the
ISP, the image sub-band component comprising a second resolution lower than the first resolution.

US Pat. No. 9,479,710

CINEMATIC IMAGE BLUR IN DIGITAL CAMERAS BASED ON EXPOSURE TIMING MANIPULATION

GoPro, Inc., San Mateo, ...

1. A method for capturing an image by a digital camera, the method comprising:
accessing, by a camera interface, an image capture setting comprising a pulse pattern including a plurality of pulse durations
each separated by a non-zero spacing duration such that a total exposure time comprising a sum of the pulse durations is less
than an effective exposure time comprising a sum of the pulse durations and the non-zero spacing durations;

capturing, by an image sensor of the camera, image information by applying the pulse pattern to an exposure input of the image
sensor, the image information representative of light incident upon the image sensor during the pulse durations and not representative
of light incident upon the image sensor during the spacing durations; and

outputting, by the image sensor, the captured image information for storage by the camera by applying a signal to a read input
of the image sensor after an interval of time equal to or greater than the effective exposure time.

US Pat. No. 9,454,063

HEAT TRANSFER CAMERA RING

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a camera body having a camera lens structured on a front surface of the camera body;
electronics internal to the camera body, the electronics for capturing images via the camera lens; and
an internal heat sink thermally coupled to the electronics for dissipating heat produced by the electronics; and
a thermally conductive lens ring positioned around the camera lens of the camera on the front surface of the camera body and
partially covered by a thermally insulating material, the lens ring thermally coupled to the heat sink internal to the camera
body to transfer the heat produced by the electronics from the internal heat sink to an exterior of the camera body.

US Pat. No. 9,420,173

CAMERA SYSTEM DUAL-ENCODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
a decimator configured to decimate the received image data into a plurality of image sub-band components;
a wavelet encoder configured to encode a first subset of the plurality of image sub-band components;
an H.264 encoder configured to encode a second subset of the plurality of image sub-band components;
a concatenator configured to concatenate the encoded first subset of image sub-band components and the second subset of image
sub-band components to produce concatenated encoded image sub-band components; and

an output configured to output the received image data when the ICA is configured to operate in a normal mode and to output
the concatenated encoded image sub-band components when the ICA is configured to operate in an accelerated mode.

US Pat. No. 9,420,182

CAMERA SYSTEM DUAL-ENCODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
an H.264 encoder configured to encode the received image data to produce a set of frames, the set of frames comprising a set
of i-frames, a set of b-frames, and a set of p-frames,

a wavelet encoder configured to encode the set of i-frames to produce a set of encoded i-frames;
a concatenator configured to concatenate the set of encoded i-frames, the set of b-frames, and the set of p-frames to produce
concatenated encoded image data; and

an output configured to output the received image data when the ICA is configured to operate in a normal mode and to output
the concatenated encoded image data when the ICA is configured to operate in an accelerated mode.

US Pat. No. 9,395,031

CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mount comprising:
a rail mount component configured to securely couple to a camera, the rail mount component comprising a rail base, a first
rail wing, and a second rail wing, the first rail wing and the second rail wing extending horizontally along opposite sides
of the rail base, the first rail wing and the second rail wing protruding outward from and towards the rail base such that
an angle between the rail base and each of the first rail wing and the second rail wing is less than 90 degrees; and

a sliding mount component configured for insertion into the rail mount component, the sliding mount component comprising a
sliding base, a first lever pivotally coupled to a first end of the sliding base, and a second lever pivotally coupled to
a second end of the sliding base opposite the first end, the first lever and the second lever each comprising an associated
wedge such that when each of the first lever and the second lever are in a locked configuration, the associated wedge exerts
a force towards the rail base such that a reciprocal force is exerted outward by the rail mount component upon the sliding
mount component causing a friction force between the sliding mount component and an inside surface of each of the first rail
wing and the second rail wing, securely affixing the sliding mount component to the rail mount component.

US Pat. No. 9,282,226

CAMERA HOUSING FOR A SQUARE-PROFILE CAMERA

GoPro, Inc., San Mateo, ...

1. A camera comprising:
a front face, the front face comprising a camera lens substantially centered on the front face, the camera lens protruding
from the front face of the camera, the front face comprising a square with rounded corners cross-section;

a rear face, the rear face comprising a Universal Serial Bus (“USB”) interface, the rear face comprising a square with rounded
corners cross-section;

a top face, the top face comprising an interface button configured to allow a user of the camera to interact with the camera,
the top face comprising a substantially square cross-section;

a bottom face comprising a substantially square cross-section;
a left face comprising a substantially square cross-section;
a right face comprising a substantially square cross-section; and
a camera housing, the camera housing comprising a square with rounded corners cross-section in at least one dimension and
configured to at least partially abut each of the top face, the bottom face, the left face, the right face, the front face,
and the rear face, the camera housing comprising an opening within a front side or a rear side of the camera housing.

US Pat. No. 9,171,577

ENCODING AND DECODING SELECTIVELY RETRIEVABLE REPRESENTATIONS OF VIDEO CONTENT

GoPro, Inc., San Mateo, ...

1. A method for processing compressed video data, the method comprising:
storing, in a storage structure, for each of a plurality of frames of video, a corresponding plurality of image components
representative of the frame of video, the frame of video comprising a corresponding image at an original resolution, the plurality
of image components including a base image component associated with a lowest resolution and one or more additional image
components associated with resolutions greater than or equal to the lowest resolution and less than or equal to the original
resolution, and the base image component comprising the corresponding image at the lowest resolution;

selecting a first display resolution at which to display a first frame of video based on a first processing load of a decoder;
retrieving a first subset of the image components corresponding to the first frame of video from the storage structure selected
based on the first display resolution, the first subset of image components including the base image component corresponding
to the first frame of video;

decoding the retrieved first subset of image components to generate a first modified frame of video comprising the image corresponding
to the first frame of video at the first display resolution;

displaying the first modified frame of video;
in response to a change in processing load of the decoder to a second processing load, selecting a second display resolution
at which to display a second frame of video based on the second processing load;

retrieving a second subset of the image components corresponding to the second frame of video from the storage structure selected
based on the second display resolution, the second subset of image components including the base image component corresponding
to the second frame of video;

decoding the retrieved second subset of image components to generate a second modified frame of video comprising the image
corresponding to the second frame of video at the second display resolution; and

displaying the second modified frame of video.
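
A minimal Python sketch of selecting which stored image components to retrieve based on the decoder's processing load; the load thresholds are illustrative assumptions:

def select_image_components(components_low_to_high, decoder_load):
    # components_low_to_high is ordered from the base (lowest resolution)
    # component upward; a busier decoder retrieves fewer components and so
    # displays the frame at a lower resolution.
    if decoder_load > 0.8:
        count = 1  # base component only
    elif decoder_load > 0.5:
        count = max(1, len(components_low_to_high) // 2)
    else:
        count = len(components_low_to_high)
    return components_low_to_high[:count]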

US Pat. No. 9,423,673

QUICK-RELEASE BALL-AND-SOCKET JOINT CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A mounting system for attaching a camera to a surface, comprising:
an upper mount component structured to at least partially enclose a camera, the upper mount component having a bottom surface
including a first plurality of protrusions extending from the bottom surface, each protrusion comprising a hole;

an adapter comprising a second plurality of protrusions and a ball component coupled to the second plurality of protrusions
by a neck component having a smaller diameter than the ball component, the second plurality of protrusions extending from
the neck component at a fixed angle such that the angle between the second plurality of protrusions and the neck component
is less than 180 degrees, the second plurality of protrusions each including a hole and configured to interlock with the first
plurality of protrusions such that the holes of the first plurality of protrusions interlock with the holes of the second
plurality of protrusions;

a lower mount component having a top surface and a bottom surface, the top surface comprising a reciprocal socket component
configured to rotationally couple with the ball component of the adapter, the socket component set at an angle relative to
the top surface and having a split within an inside surface of the socket component from a top side of the socket component,
the socket component comprising a screw hole protrusion on an outer surface of the socket component on either side of the
split, the screw hole protrusions configured to align and receive a screw such that when a screw is inserted into the screw
hole protrusions, portions of the socket component on either side of the split flexibly compress together such that the ball
component is secured within the socket component, the bottom surface comprising a coupling mechanism configured to couple
the lower mount component to an object or surface; and

a hinge component configured for insertion into the aligned set of holes of the first plurality of protrusions and the second
plurality of protrusions, pivotally coupling the upper mount component to the adapter.

US Pat. No. 9,420,174

CAMERA SYSTEM DUAL-ENCODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
a decimator configured to, when the camera system is configured to operate in an accelerated capture mode, decimate the received
image data into a first portion of the received image data and a second portion of the received image data;

a first entropy encoder configured to encode the first portion of the received image data to produce first encoded image data;
a second entropy encoder configured to encode the second portion of the received image data to produce second encoded image
data;

a memory configured to store the first encoded image data and the second encoded image data; and
an output configured to, when the camera system is configured to operate in a normal capture mode, output the received image
data to the ISP.

US Pat. No. 9,357,184

CREDENTIAL TRANSFER MANAGEMENT CAMERA NETWORK

GoPro, Inc., San Mateo, ...

1. A smart device, the smart device comprising a processor and a non-transitory computer-readable storage medium containing
instructions that, when executed by the processor, cause the smart device to:
communicatively couple to a wireless access point when the smart device is configured to operate as a wireless station, the
wireless access point communicatively coupled to a set of cameras;

receive, from a user, a selection of a first subset of the set of cameras, the first subset of cameras comprising a plurality
of cameras;

receive, from the user, a first configuration setting for the first subset of cameras;
establish a wireless connection with each of the first subset of cameras such that the smart device is communicatively coupled
to every camera in the first subset of cameras simultaneously;

configure the first subset of cameras based on the received first configuration setting;
receive, from the user, a selection of a second subset of the set of cameras, the second subset of cameras comprising a plurality
of cameras different from the first subset of cameras;

receive, from the user, a second configuration setting for the second subset of cameras, the second configuration setting
different from the first configuration setting;

establish a wireless connection with each of the second subset of cameras such that the smart device is communicatively coupled
to every camera in the second subset of cameras simultaneously; and

configure the second subset of cameras based on the received second configuration setting.

US Pat. No. 9,350,895

AUDIO SIGNAL LEVEL ESTIMATION IN CAMERAS

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a first microphone configured to capture audio over a time interval to produce a first captured audio signal;
a second microphone configured to capture audio over the time interval to produce a second captured audio signal, the second
captured audio signal dampened relative to the first captured audio signal by a dampening factor; and

a microphone controller coupled to the first microphone and the second microphone, the microphone controller configured to:
in response to a determination that the first captured audio signal does not clip, store the first captured audio signal;
and

in response to a determination that the first captured audio signal clips:
identify a first slope of the first captured audio signal at a first zero crossing of a measure of amplitude of the first
captured audio signal;

identify a second slope of the second captured audio signal at a second zero crossing of the measure of amplitude of the second
captured audio signal, the second zero crossing corresponding to the first zero crossing;

identify a gain comprising a ratio of the first slope to the second slope;
amplify the second captured audio signal based on the identified gain; and
store the amplified second captured audio signal.
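
A NumPy sketch of the clip-recovery path, estimating the gain as the ratio of waveform slopes at a shared zero crossing and amplifying the dampened signal by it; the clip threshold and the choice of the first zero crossing are illustrative assumptions:

import numpy as np

def recover_from_clipping(primary, dampened, clip_level=0.99):
    if np.max(np.abs(primary)) < clip_level:
        return primary  # the first captured audio signal did not clip
    # First zero crossing of the primary signal, paired with the corresponding
    # sample index in the dampened signal.
    crossings = np.where(np.diff(np.signbit(primary)))[0]
    i = int(crossings[0])
    gain = (primary[i + 1] - primary[i]) / (dampened[i + 1] - dampened[i])
    return dampened * gain  # amplified second captured audio signal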

US Pat. No. 9,325,917

AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for synchronizing a pair of image sensors, the method comprising:
capturing an image with each image sensor at a substantially same time, the image sensors having a rolling shutter direction
and an overlapping field of view;

correlating the captured image data representative of the overlapping field of view by shifting at least one image by a first
number of pixels along the rolling shutter direction such that a measure of difference between pixels of the captured image
data representative of the overlapping field of view is substantially minimized;

identifying, by one or more processors, a pixel shift between the captured images based on the first number of pixels; and
calibrating, based on the identified pixel shift, at least one image sensor to synchronize subsequent image capture by the
image sensors.
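
A brief NumPy sketch of the correlation step, shifting one overlap region along the rolling-shutter (row) direction until the pixel difference is minimized; the search range and difference measure are assumptions:

import numpy as np

def find_pixel_shift(overlap_a, overlap_b, max_shift=16):
    best_shift, best_error = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        candidate = np.roll(overlap_b, shift, axis=0)  # shift along the row axis
        error = np.mean(np.abs(overlap_a.astype(float) - candidate.astype(float)))
        if error < best_error:
            best_shift, best_error = shift, error
    return best_shift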

US Pat. No. 9,197,885

TARGET-LESS AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for determining a pixel shift between an image pair captured by image sensors, the method
comprising:
accessing a first image and a second image of the image pair captured at a substantially same time, the images comprising
image data representative of an overlapping field of view between the image sensors;

determining edge magnitude and edge phase of the image data corresponding to the first image and the second image based on
a luma component of the image data;

determining an edge length for each of a plurality of candidate edges identified based at least in part on the determined
edge magnitude and edge phase of the image data;

identifying, by one or more processors, one or more edges in the image data corresponding to the first image and the second
image by selecting candidate edges of the plurality having determined edge lengths greater than an edge length threshold;

matching the identified one or more edges in the image data corresponding to the first image to the identified one or more
edges in the image data corresponding to the second image; and

determining a pixel shift between the image pair based, at least in part, on the matching of edges.

US Pat. No. 9,460,727

AUDIO ENCODER FOR WIND AND MICROPHONE NOISE REDUCTION IN A MICROPHONE ARRAY SYSTEM

GoPro, Inc., San Mateo, ...

1. A method for encoding an audio signal captured by a microphone array system in the presence of wind noise, the method comprising:
capturing at least a first audio signal via a first microphone of a microphone array and a second audio signal via a second
microphone of the microphone array;

combining the first audio signal and the second audio signal to generate a beamformed audio signal;
determining a selected audio signal having a lower wind noise metric between the first audio signal and the second audio signal;
processing the selected audio signal to modulate the selected audio signal based on a high frequency carrier signal to generate
a high frequency signal; and

combining the high frequency signal and the beamformed audio signal to generate an encoded audio signal.
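
A simplified NumPy sketch of the encoding path, in which averaging stands in for beamforming, low-band spectral energy stands in for the wind-noise metric, and amplitude modulation stands in for the high-frequency carrier step; all three substitutions are assumptions:

import numpy as np

def encode_with_wind_mitigation(mic_a, mic_b, sample_rate, carrier_hz=16000.0):
    def low_band_energy(signal):
        return float(np.sum(np.abs(np.fft.rfft(signal)[:50])))

    beamformed = 0.5 * (mic_a + mic_b)                 # combined (beamformed) signal
    selected = mic_a if low_band_energy(mic_a) < low_band_energy(mic_b) else mic_b
    t = np.arange(len(selected)) / sample_rate
    high_band = selected * np.cos(2.0 * np.pi * carrier_hz * t)  # modulate onto carrier
    return beamformed + high_band                      # encoded audio signal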

US Pat. No. 9,418,396

WATERMARKING DIGITAL IMAGES TO INCREASE BIT DEPTH

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for compressing an image, the method comprising:
accessing an image comprising an array of pixels, each pixel comprising image data represented by a plurality of image data
bits;

truncating, for each pixel, a subset of least significant image data bits to produce an array of truncated pixels, each truncated
subset of least significant image data bits collectively comprising a set of truncated image data bits;

generating, by a processor, a watermark comprising a set of watermark coefficients representative of the set of truncated
image data bits;

generating a transformed image by converting the array of truncated pixels into a set of image coefficients in a frequency
domain;

embedding the watermark in the transformed image by modifying a subset of the image coefficients with the set of watermark
coefficients to form a modified set of coefficients;

generating a modified image by converting the modified set of coefficients into a spatial domain, the modified image representative
of both the array of truncated pixels and the generated watermark;

compressing the modified image to produce a compressed image; and
storing the compressed image.
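
A compact SciPy sketch of the embedding pipeline: truncate the least significant bits, transform the truncated image, modify a high-frequency subset of its coefficients with watermark coefficients derived from the dropped bits, and return the spatial-domain result; the DCT and the additive embedding are assumptions, and the final compression step is omitted:

from scipy.fft import dctn, idctn

def embed_truncated_bits(pixels, truncate_bits=2, strength=0.5):
    truncated = (pixels >> truncate_bits) << truncate_bits         # array of truncated pixels
    dropped = (pixels & ((1 << truncate_bits) - 1)).astype(float)  # truncated image data bits
    coefficients = dctn(truncated.astype(float), norm="ortho")     # transformed image
    watermark = dctn(dropped, norm="ortho") * strength             # watermark coefficients
    h, w = coefficients.shape
    coefficients[h // 2:, w // 2:] += watermark[: h - h // 2, : w - w // 2]
    return idctn(coefficients, norm="ortho")                       # modified image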

US Pat. No. 9,392,194

FRAME MANIPULATION TO REDUCE ROLLING SHUTTER ARTIFACTS

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor configured to:
capture light during a frame capture interval to produce frame data, wherein the captured light comprises light incident upon
the image sensor during the entire frame capture interval, and wherein the frame data is representative of the captured light;
and

wait for a blanking interval of time that comprises an entire time period after the frame capture interval and before a beginning
of a subsequent frame capture interval, wherein no light is captured during the blanking interval;

a buffer memory for buffering the frame data; and
a hardware image signal processor that processes the buffered frame data during a frame processing interval to produce processed
frame data, wherein the frame processing interval begins after the end of the frame capture interval and ends before the end
of the subsequent frame capture interval, and wherein the length of the frame processing interval is longer than both the
length of the blanking interval and the length of the frame capture interval.

US Pat. No. 9,521,318

TARGET-LESS AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for determining a pixel shift between an image pair captured by image sensors, the method
comprising:
accessing a first image and a second image of the image pair captured at a substantially same time, the images comprising
image data representative of an overlapping field of view between the image sensors;

determining edge magnitude and edge phase of the image data based on pixel luma components in the image data;
identifying, by one or more processors, one or more edges in the image data based, at least in part, on the edge magnitude
and the edge phase of the image data, the identified one or more edges being substantially perpendicular to a rolling shutter
direction of the image sensors;

matching the identified one or more edges in the image data corresponding to the first image to the identified one or more
edges in the image data corresponding to the second image;

determining a pixel shift between the image pair based, at least in part, on the matching of edges; and
calibrating the image sensors using the determined pixel shift.

US Pat. No. 9,485,418

CAMERA SYSTEM TRANSMISSION IN BANDWIDTH CONSTRAINED ENVIRONMENTS

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image capture accelerator chip (“ICA”) comprising:
an input configured to receive the image data from the image sensor chip;
a compression engine configured to decimate the received image data into a plurality of image sub-band components; and
one or more outputs configured to output the image data and a set of the plurality of image sub-band components; and
an image signal processor chip (“ISP”) with a first input and a second input, the ICA coupled between the image sensor chip
and the ISP, the ISP configured to detect an amount of output bandwidth available to the camera system, the ISP comprising:

one or more inputs configured to receive the image data and the set of image sub-band components;
an encoder configured to encode the received image data and the set of image sub-band components; and
an output configured to select and output either the encoded image data or one or more encoded image sub-band components based
on the detected amount of output bandwidth.

US Pat. No. 9,389,487

PROTECTIVE LENS ATTACHMENT

GoPro, Inc., San Mateo, ...

1. A camera system including:
a camera comprising a lens and lens housing, the lens housing protruding outward from a front surface of the camera and securely
enclosing the lens, the lens housing comprising an outer lens housing surface; and

a protective lens attachment comprising a protective lens and a protective lens casing, the protective lens casing comprising
an open side and a lens side, the protective lens casing securely enclosing the protective lens at the lens side and comprising
an opening into the protective lens casing at the open side, the protective lens casing further comprising an interior surface,
a compressible ring located around an inside perimeter of the interior surface such that no gap exists between the compressible
ring and an edge at the open side, and a plurality of compressible ribs perpendicular to the compressible ring, protruding
from the interior surface, and running from the compressible ring towards and perpendicular to the protective lens such that
a gap exists between each compressible rib and the lens and such that no gap exists between each compressible rib and the
compressible ring, the protective lens attachment configured to securely couple over the lens housing such that the outer
lens housing surface exerts a compressive force outward against the compressible ribs and the compressible ring and the compressible
ribs and compressible ring exert a reciprocal force onto the outer lens housing surface, wherein the lens of the camera and
the protective lens of the protective lens attachment are aligned when the protective lens attachment is securely coupled
over the lens housing.

US Pat. No. 9,377,672

DETACHABLE CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mount comprising:
a mount base with a bottom surface comprising coupling means for coupling the mount base to an object;
a spine protruding from a top surface of the mount base;
an elevated surface protruding from the top surface of the mount base on either side of the spine such that a camera, when
coupled to the camera mount, abuts the elevated surface and not the top surface of the mount base,

a first flexible arm protruding from a first side of the top surface of the mount base, the first flexible arm configured
to elastically flex outward and away from the spine in response to an impact force on a camera coupled to the mount at least
partially in a direction of the first flexible arm;

a second flexible arm protruding from a second side of the top surface of the mount base opposite the first side, the second
flexible arm configured to elastically flex outward and away from the spine in response to an impact force on the camera at
least partially in a direction of the second flexible arm;

wherein the first flexible arm and the second flexible arm are configured to secure the camera to the camera mount.

US Pat. No. 9,485,417

IMAGE SENSOR ALIGNMENT IN A MULTI-CAMERA SYSTEM ACCELERATOR ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
a first image sensor chip configured to produce first image data representative of light incident upon the first image sensor
chip;

an image capture accelerator chip (“ICA”) comprising:
a compression engine configured to decimate the first image data into a plurality of first image sub-band components; and
one or more outputs configured to output the first image data and a first image sub-band component; and
an image signal processor chip (“ISP”), the ICA coupled between the first image sensor chip and the ISP, the ISP comprising:
one or more inputs configured to receive the first image data, the first image sub-band component, and a second image sub-band
component representative of second image data from a second camera system, the second image data representative of light incident
upon a second image sensor chip of the second camera system; and

an alignment engine configured to adjust a first field of view of the first image sensor chip or a second field of view of
the second image sensor chip based on the first image sub-band component and the second image sub-band component.

US Pat. No. 9,438,799

COMPRESSION AND DECODING OF SINGLE SENSOR COLOR IMAGE DATA

GoPro, Inc., San Mateo, ...

1. A method for previewing captured image data, the method comprising:
capturing, by an image sensor of a camera, image data, the image data comprising a plurality of image planes;
encoding, by an image processor, the captured image data into a plurality of encoded image planes, each encoded image plane
representative of one or more of the image planes of the image data, such that all of the encoded image planes form an image
at a first resolution and a subset of less than all of the encoded image planes form an image at a second resolution less
than the first resolution; and

providing, to an external display, the subset of the encoded image planes for a substantially real-time preview of the image
at the second resolution, the external display configured to decode the subset of encoded image planes to obtain the image
at the second resolution.
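
Claim 1 describes an encoding in which all of the encoded planes reconstruct the full-resolution image while a proper subset decodes to a lower-resolution preview. The sketch below illustrates that idea with a one-level Haar-style split standing in for the claimed encoder; the function names and the choice of transform are illustrative assumptions, not the patented method.

# Illustrative sketch only: a one-level Haar-style split stands in for the
# claimed plane encoding. The low-pass plane alone yields a half-resolution
# preview; all four planes together describe the full-resolution image.
import numpy as np

def encode_planes(img):
    """Split a single-channel image into four half-resolution planes."""
    a = img[0::2, 0::2].astype(np.float32)
    b = img[0::2, 1::2].astype(np.float32)
    c = img[1::2, 0::2].astype(np.float32)
    d = img[1::2, 1::2].astype(np.float32)
    ll = (a + b + c + d) / 4.0          # low-pass plane: usable as a preview
    lh = (a - b + c - d) / 4.0          # detail planes needed only for
    hl = (a + b - c - d) / 4.0          # full-resolution reconstruction
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def preview(planes):
    """Decode only the low-pass subset: a second-resolution image."""
    ll, _, _, _ = planes
    return np.clip(ll, 0, 255).astype(np.uint8)

img = (np.random.rand(8, 8) * 255).astype(np.uint8)
planes = encode_planes(img)
print(preview(planes).shape)   # (4, 4): half resolution in each dimension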

US Pat. No. 9,196,039

IMAGE SENSOR READ WINDOW ADJUSTMENT FOR MULTI-CAMERA ARRAY TOLERANCE

GoPro, Inc., San Mateo, ...

1. A method comprising:
accessing image data captured by an image sensor in each of a plurality of cameras in a camera array, each image sensor comprising
an image sensor window and a read window smaller than and located within the image sensor window, the image data from each
image sensor representative of light incident upon the read window during capture, a first camera in the camera array including
a first portion of a first field of view that overlaps with a second portion of a second field of view of a second camera
in the camera array;

identifying a portion of the accessed image data representative of the first portion of the first field of view and a portion
of the accessed image data representative of the second portion of the second field of view;

determining a set of correlation coefficients for the identified portions of image data representative of an amount of correlation
within the image data portions; and

responsive to the set of correlation coefficients representing a below-threshold level of correlation, adjusting the location
of one or more of the read window of the image sensor of the first camera and the read window of the image sensor of the second
camera.
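
Claim 1 amounts to: measure how well the supposedly overlapping portions of two sensors' image data correlate and, when the correlation falls below a threshold, shift a read window to restore alignment. A minimal sketch follows, assuming normalized cross-correlation and a small brute-force search; the names, threshold, and search range are illustrative, not taken from the patent.

# Illustrative sketch only: normalized cross-correlation over the claimed
# overlap portions, with a read-window shift when correlation is weak.
import numpy as np

def correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def adjust_read_window(overlap_a, sensor_b, window_b, threshold=0.8, search=4):
    """Shift camera B's read window if its overlap no longer matches camera A's."""
    x, y, w, h = window_b
    current = sensor_b[y:y + h, x:x + w]
    if correlation(overlap_a, current) >= threshold:
        return window_b                     # correlation is acceptable; keep window
    best, best_window = -1.0, window_b
    for dy in range(-search, search + 1):   # brute-force search of nearby offsets
        for dx in range(-search, search + 1):
            cand = sensor_b[y + dy:y + dy + h, x + dx:x + dx + w]
            if cand.shape != overlap_a.shape:
                continue
            c = correlation(overlap_a, cand)
            if c > best:
                best, best_window = c, (x + dx, y + dy, w, h)
    return best_window

sensor_a = np.random.rand(64, 64)
sensor_b = np.roll(sensor_a, 2, axis=1)                # camera B is offset by 2 px
overlap_a = sensor_a[16:48, 40:56]                     # camera A's overlap portion
print(adjust_read_window(overlap_a, sensor_b, (40, 16, 16, 32)))   # window shifts by 2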

US Pat. No. 9,262,801

IMAGE TAPING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

11. A method comprising:
capturing a plurality of images with each camera in a plurality of cameras in a camera array, each image comprising a field
of view (FOV) portion representative of an overlapping field of view with at least one corresponding FOV portion of an adjacent
image;

for each image, performing a first warping on the image such that the FOV portion of the image is substantially aligned with
the corresponding FOV portion of the adjacent image;

for each image, resizing the image by cropping portions of the image that are not aligned with the FOV portion of the image;
for each image, performing a second warping on the image to reduce distortion resulting from the first warping, wherein a
magnitude of the second warping of a portion of the image is based on a distance of the portion to the FOV portion of the
image, wherein the magnitude of the second warping of the FOV portion of the image is substantially zero;

cropping portions of the plurality of images representative of FOV portions overlapping with corresponding FOV portions of
adjacent images; and

generating a final image by taping each cropped image to an adjacent cropped image.

US Pat. No. 9,241,096

HOUSING WITH TOUCH-THROUGH MEMBRANE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
a camera comprising:
a camera body having a camera lens structured on a first surface of the camera body;
imaging electronics internal to the camera body, the imaging electronics for capturing images via the camera lens; and
a touch-sensitive surface structured on a second surface of the camera body opposite the first surface, the touch-sensitive
surface for receiving user input to the camera; and

a camera housing comprising:
a first housing portion structured to receive the camera;
a second housing portion configured to be moved between an open position and a closed position relative to the first housing
portion, wherein the camera housing encloses the camera when the second housing portion is in the closed position, the second
housing portion having an external surface and an internal surface;

an inner hinge structure located on a bottom edge adjacent to a bottom face of the first housing portion;
an outer hinge structure located on a bottom edge adjacent to a bottom face of the second housing portion, the outer hinge
structure detachably coupling to the inner hinge structure, wherein the outer hinge structure and the inner hinge structure
form a hinge when coupled so that the second housing portion and the first housing portion are pivotally attached about the
hinge;

a first fastening structure located on a top face of the first housing portion, the top face opposite to the bottom face of
the first housing portion;

a second fastening structure located on a top edge adjacent to a top face of the second housing portion, the top face opposite
to the bottom face of the second housing portion, the second fastening structure for detachably coupling to the first fastening
structure so that the second housing portion is secured to the first housing portion in the closed position when the first
fastening structure is coupled to the second fastening structure;

a membrane positioned to substantially align with the touch-sensitive surface when the second housing portion is in the closed
position, and configured to transfer touch interactions from an external side of the membrane to the touch-sensitive surface,
the membrane comprising an external surface and an internal surface;

a compressible structure adhered between the internal surface of the second housing portion and the external surface of the
membrane such that the compressible structure does not make contact with the external surface of the second housing portion,
the camera configured to exert a compressive force on the compressible structure when the camera is securely enclosed within
the camera housing, and the compressible structure, in response to the compressive force, configured to exert a reciprocal
force on the membrane, forcibly pressing the membrane onto the touch-sensitive surface; and

a spacer positioned on the internal surface of the membrane, wherein the compressible structure presses the spacer against
the camera body.

US Pat. No. 9,142,257

BROADCAST MANAGEMENT SYSTEM

GoPro, Inc., San Mateo, ...

1. A method for managing a media broadcast, the method comprising:
receiving a plurality of videos comprising at least a first video and a second video;
generating, by a processor, a broadcast management interface comprising a first stream window for playing the first video
and a second stream window for playing the second video concurrently with the first video;

responsive to receiving a selection of the first stream window while the first video is playing, creating a first entry in
the broadcast map, the first entry comprising an identifier for the first video and a selection time indicating an elapsed
time into the broadcast when the selection of the first stream window occurred; and

responsive to receiving a selection of the second stream window while the second video is playing, creating a second entry
in the broadcast map, the second entry comprising an identifier for the second video and a selection time indicating an elapsed
time into the broadcast when the selection of the second stream window occurred.
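
The claimed broadcast map is essentially a log of which video was selected and when, relative to the start of the broadcast. A minimal sketch of such a data structure follows; the class and method names are assumptions for illustration.

# Illustrative sketch only: a broadcast map as a list of (video id, elapsed
# seconds) entries, appended whenever a stream window is selected.
import time

class BroadcastMap:
    def __init__(self):
        self.start = time.monotonic()
        self.entries = []

    def select_stream(self, video_id):
        """Record which video was selected and when, relative to broadcast start."""
        elapsed = time.monotonic() - self.start
        self.entries.append({"video": video_id, "selection_time": elapsed})

broadcast = BroadcastMap()
broadcast.select_stream("video-1")   # viewer switches to the first stream window
broadcast.select_stream("video-2")   # later, switches to the second stream window
print(broadcast.entries)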

US Pat. No. 9,478,008

IMAGE STITCHING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

1. A method for stitching images, comprising:
accessing a set of images associated with an overlap region for display to a user, the overlap region comprising a corresponding
portion of each of the set of images;

identifying one or more image features within the overlap region, each image feature associated with a portion of the overlap
region;

assigning a priority to each image feature;
stitching the set of images together to produce a stitched image, wherein stitching the set of images together comprises,
for each image feature:

selecting one of a plurality of stitching operations each associated with a different stitching quality based on the priority
assigned to the image feature; and

performing, for the portion of the overlap region associated with the image feature, the selected stitching operation on the
portions of the set of images corresponding to the portion of the overlap region; and

storing the stitched image.
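
The selection step of this claim maps a feature's priority to one of several stitching operations of differing quality. The sketch below shows one plausible mapping; the placeholder operations and the priority scale are assumptions, not the patented stitching algorithms.

# Illustrative sketch only: picking a higher-quality (and costlier) stitching
# routine for higher-priority features.
def fast_blend(portion):
    return ("fast", portion)

def depth_aware_stitch(portion):
    return ("high-quality", portion)

OPERATIONS = [fast_blend, depth_aware_stitch]   # ordered by stitching quality

def select_operation(priority, num_levels=2):
    """Map a feature priority in [0, 1] to one of the available operations."""
    index = min(int(priority * num_levels), num_levels - 1)
    return OPERATIONS[index]

features = [{"portion": "face region", "priority": 0.9},
            {"portion": "clear sky",   "priority": 0.1}]
for f in features:
    op = select_operation(f["priority"])
    print(op(f["portion"]))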

US Pat. No. 9,509,889

BOARD GRIP CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A board grip camera mount comprising:
a board grip pad, having a top and a bottom, the bottom of the board grip pad configured to attach to a board; and
a pad mount, having a top and a bottom, the bottom of the pad mount secured to the top of the board grip pad, the pad mount
further comprising a mount cavity with an opening at the top of the pad mount and structured to receive a portion of a camera
boom, the mount cavity further including a mounting mechanism to secure the received portion of the camera boom, a depth of
the mount cavity corresponding to a breakaway point along the camera boom, the depth of the mount cavity such that the breakaway
point sits above the top of the pad mount.

US Pat. No. 9,451,727

HEAT SINK FOR A SQUARE CAMERA

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
an image sensor to capture images;
a front face comprising a lens window;
a lens assembly for directing light received through the lens window to the image sensor;
a printed circuit board oriented substantially perpendicular to the front face of the camera and positioned under the lens
assembly, the printed circuit board comprising at least one electronic component mounted on a surface of the printed circuit
board, the at least one electronic component to process the images captured by the image sensor, the at least one electronic
component generating heat when in operation; and

a heat sink comprising an external portion structured around a perimeter of the lens window of the front face of the camera
and at least one arm extending perpendicular from the external portion to the interior of the camera in between the printed
circuit board and the lens assembly, the at least one arm thermally coupled to a top surface of the at least one electronic
component on the printed circuit board so as to provide a thermally conductive path for the heat generated by the at least
one electronic component to an external surface of the camera.

US Pat. No. 9,357,115

CAMERA MOUNTING ASSEMBLY

GoPro, Inc., San Mateo, ...

1. A camera mounting assembly comprising:
an attachment mechanism configured to attach the camera mounting assembly around a lens of a camera comprising an image sensor;
a perimeter wall protruding a first distance from an inner-front surface of the camera mounting assembly, the perimeter wall
comprising a plurality of indentations each protruding a second distance less than the first distance from the inner-front
surface, each indentation corresponding to a corner of the image sensor; and

one or more recessed channels within the inner-front surface of the camera mounting assembly, each recessed channel corresponding
to one of the plurality of indentations thereby forming an unobstructed path from a corner of the image sensor through the
recessed channel and through the indentation.

US Pat. No. 9,503,636

CREDENTIAL TRANSFER MANAGEMENT CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A camera, the camera comprising a processor and a non-transitory computer-readable storage medium containing instructions
that, when executed by the processor, cause the camera to:
capture an image of a smart device display, the smart device display displaying an encoded image, wherein wireless credentials
for a communication device configured to operate as a wireless access point are encoded within the encoded image;

decode the captured image to obtain the wireless credentials;
configure the camera to operate as a wireless station; and
connect to the communication device using the wireless credentials.
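
The claim describes decoding wireless credentials from an encoded image on a smart device's display and then joining that access point as a station. The sketch below parses a payload in the common "WIFI:S:...;P:...;;" form; the payload format, the image-decoding step, and the connection step are assumptions left abstract here.

# Illustrative sketch only: parsing a Wi-Fi credential payload of the common
# "WIFI:S:<ssid>;T:<auth>;P:<password>;;" form after the encoded image has
# been decoded. The decode step and the connect call are left abstract.
def parse_wifi_payload(payload):
    fields = {}
    body = payload[len("WIFI:"):] if payload.startswith("WIFI:") else payload
    for part in body.split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            fields[key] = value
    return {"ssid": fields.get("S"), "password": fields.get("P"),
            "auth": fields.get("T", "WPA")}

creds = parse_wifi_payload("WIFI:S:MyPhoneHotspot;T:WPA;P:secret123;;")
print(creds)   # the camera would now switch to station mode and join this network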

US Pat. No. 9,466,109

IMAGE STITCHING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

1. A method for stitching images, comprising:
accessing a set of images associated with an overlap region for display to a user, the overlap region comprising a corresponding
portion of each of the set of images;

identifying one or more image features within the overlap region;
determining, for each of a plurality of image stitching operations, based on the identified image features, the likelihood
that stitching the accessed set of images together using the image stitching operation will produce one or more image artifacts;

selecting, from the plurality of image stitching operations, a stitching operation for use in stitching the accessed set of
images together based on the determined likelihoods;

stitching the set of images together to produce a stitched image using the selected stitching operation; and
storing the stitched image.

US Pat. No. 9,360,742

SWIVEL CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mounting system, comprising:
an upper mount component configured to at least partially securely enclose a camera, the upper mount component comprising
a first plurality of protrusions extending from a bottom surface of the upper mount component; and

a lower mount component, the lower mount component including:
an inner rotating component comprising a second plurality of protrusions and a cylindrical shaft, the second plurality of
protrusions configured to interlock with the first plurality of protrusions of the upper mount component, pivotally coupling
the upper mount component to the lower mount component;

an outer sleeve component configured to at least partially enclose the cylindrical shaft such that the cylindrical shaft can
rotate relative to the outer sleeve component; the outer sleeve component configured to at least partially enclose the inner
rotating component, the outer sleeve component configured to couple to a base mount component;

the outer sleeve component comprising a third plurality of protrusions and a shaft receptacle, the third plurality of protrusions
configured to interlock with a fourth plurality of protrusions of a base mount, pivotally coupling the lower mount component
to the base mount.

US Pat. No. 9,355,433

IMAGE STITCHING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

1. A method for stitching images, comprising:
accessing a set of images associated with an overlap region for display to a user, the overlap region comprising a corresponding
portion of each of the set of images;

identifying one or more image features within the overlap region;
assigning a priority to each image feature;
dividing the overlap region into sub-blocks;
selecting, for each sub-block, one or more stitching operations based on the priorities assigned to the image features within
the sub-block;

digitally stitching, by a hardware image processor, the set of images together to produce a stitched image using the selected
stitching operations; and

storing the stitched image.

US Pat. No. 9,330,436

MULTI-CAMERA ARRAY WITH ADJACENT FIELDS OF VIEW

GoPro, Inc., San Mateo, ...

1. An apparatus comprising:
a common outer lens;
a plurality of cameras in a camera array, each camera including a plurality of lenses, an image sensor, and a housing securing
the plurality of lenses and image sensor such that, for each camera of the plurality of cameras, light passing through the
common outer lens and through an aperture of the camera is directed by the plurality of lenses to be incident upon the image
sensor;

wherein each camera in the plurality of cameras is oriented at a pitch, roll, and yaw such that an image captured by a first
camera in the plurality of cameras comprises an overlapping portion with at least one other camera in the plurality of cameras
when images are synchronously captured by the plurality of cameras, and such that each camera in the plurality of cameras
faces in a direction that converges in at least one dimension with a direction faced by at least one other camera in the plurality
of cameras, wherein the pitch is greater than 60 degrees and less than 80 degrees.

US Pat. No. 9,685,194

VOICE-BASED VIDEO TAGGING

GoPro, Inc., San Mateo, ...

1. A method for identifying events of interest in a captured video, the method comprising:
storing multiple stored speech patterns for multiple input types, the multiple stored speech patterns corresponding to a command
for identifying the events of interest within the captured video, wherein the multiple stored speech patterns include a first
stored speech pattern for a first input type, wherein storing the first stored speech pattern comprises:

receiving, from a user, an input configuring a camera into a training mode to learn the first stored speech pattern;
capturing the first stored speech pattern from the user; and
storing the first stored speech pattern, wherein the first stored speech pattern is stored in response to capturing the first
stored speech pattern from the user a threshold number of times;

accessing a captured speech pattern, the captured speech pattern captured from the user during capture of the captured video;
determining that the captured speech pattern corresponds to the first stored speech pattern; and
in response to determining that the captured speech pattern corresponds to the first stored speech pattern, storing event
of interest information in metadata associated with the captured video, the event of interest information identifying (i)
the first input type for a first event of interest, and (ii) an event moment during the capture of the captured video at which
the captured speech pattern was captured from the user.
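
The claim combines a training phase, in which a speech pattern is stored only after being captured a threshold number of times, with a matching phase that writes event-of-interest metadata when a captured pattern corresponds to the stored one. The sketch below uses cosine similarity on feature vectors as a stand-in for real speech matching; all names and thresholds are assumptions.

# Illustrative sketch only: train-then-match flow for voice-based tagging.
import numpy as np

TRAIN_THRESHOLD = 3
MATCH_THRESHOLD = 0.9

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

class VoiceTagger:
    def __init__(self):
        self.training = []          # captures collected while in training mode
        self.stored_pattern = None  # first stored speech pattern
        self.metadata = []

    def train(self, capture):
        self.training.append(capture)
        if len(self.training) >= TRAIN_THRESHOLD:
            self.stored_pattern = np.mean(self.training, axis=0)

    def on_speech(self, capture, video_time, input_type="highlight"):
        if self.stored_pattern is None:
            return
        if similarity(capture, self.stored_pattern) >= MATCH_THRESHOLD:
            self.metadata.append({"input_type": input_type,
                                  "event_moment": video_time})

tagger = VoiceTagger()
for _ in range(TRAIN_THRESHOLD):
    tagger.train(np.array([1.0, 0.2, 0.1]))
tagger.on_speech(np.array([0.95, 0.22, 0.12]), video_time=42.5)
print(tagger.metadata)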

US Pat. No. 9,812,175

SYSTEMS AND METHODS FOR ANNOTATING A VIDEO

GoPro, Inc., San Mateo, ...

1. A system for annotating a video, the system comprising:
a touchscreen display configured to present video content and receive user input during the presentation of the video content,
the video content having a duration, the touchscreen display generating output signals indicating a location of a user's engagement
with the touchscreen display; and

one or more physical processors configured by machine-readable instructions to:
effectuate presentation of the video content on the touchscreen display;
determine reception of annotation input based on the location of the user's engagement with the touchscreen display at one
or more points within the duration, the annotation input defining an in-frame visual annotation for the video content;

responsive to the reception of the annotation input:
associate the in-frame visual annotation with a visual portion of the video content based on the location of the user's engagement
with the touchscreen display; and

associate the in-frame visual annotation with the one or more points within the duration such that a subsequent presentation
of the video content includes the in-frame visual annotation positioned at the visual portion of the video content at the
one or more points within the duration.

US Pat. No. 10,194,097

APPARATUS AND METHODS FOR THE STORAGE OF OVERLAPPING REGIONS OF IMAGING DATA FOR THE GENERATION OF OPTIMIZED STITCHED IMAGES

GOPRO, INC., San Mateo, ...

1. An apparatus configured to stitch source images according to a first stitching quality, the apparatus comprising:
two or more cameras characterized by two or more corresponding fields of view (FOVs), the two or more corresponding FOVs being characterized by at least one overlapping region;
a processor apparatus; and
a non-transitory computer readable medium in data communication with the processor apparatus and comprising one or more instructions which when executed by the processor apparatus, cause the apparatus configured to stitch source images to:
obtain two or more images from the two or more cameras;
identify the at least one overlapping region of the obtained two or more images;
post-process the obtained two or more images to create a post-processed image in accordance with the first stitching quality having a first confidence metric; and
store the post-processed image and one or more information associated with the identified at least one overlapping region, where the one or more information associated with the identified at least one overlapping region enables a re-stitch of the obtained two or more images in accordance with a second confidence metric, the second confidence metric being different than the first confidence metric.

US Pat. No. 9,860,970

HEAT SINK FOR A SQUARE CAMERA

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a camera housing;
an image sensor internal to the camera housing to capture images;
a lens assembly for directing light to the image sensor;
a first printed circuit board positioned at least partially under the lens assembly internal to the camera housing, the first
printed circuit board comprising at least one electronic component mounted on a surface of the first printed circuit board,
the at least one electronic component to process the images captured by the image sensor, the at least one electronic component
generating heat when in operation; and

a heat sink comprising an external portion exposed on an external surface of the camera housing and at least one interior
portion extending in between the printed circuit board and the lens assembly, the at least one interior portion thermally
coupled to a top surface of the at least one electronic component on the first printed circuit board so as to provide a thermally
conductive path for the heat generated by the at least one electronic component to the external heat dissipating surface of
the camera housing.

US Pat. No. 9,854,157

CAMERA WITH TOUCH SENSOR INTEGRATED WITH LENS WINDOW

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a camera body;
an image sensor internal to the camera body for capturing images;
a lens assembly comprising one or more lens elements to direct light to the image sensor, the one or more lens elements including
a lens window on the external face of the camera body, the lens window comprising a transparent integrated touch sensor to
detect a position of a touch on the lens window and to generate a touch signal indicating the position of the touch;

a display screen on an external face of the camera body; and
a touch controller to process the touch signal and to update a display on the display screen in response to the touch signal.

US Pat. No. 9,667,859

SYSTEMS AND METHODS FOR DETERMINING PREFERENCES FOR CAPTURE SETTINGS OF AN IMAGE CAPTURING DEVICE

GoPro, Inc., San Mateo, ...

1. A system for determining preferences for capture settings of an image capturing device, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
obtain a first portion of a first video segment from a user;
obtain a second portion of a second video segment from the user;
aggregate the first portion and the second portion to form an aggregated video segment;
obtain a first set of capture settings associated with capture of the first portion;
obtain a second set of capture settings associated with capture of the second portion;
determine the preferences for the capture settings of the image capturing device based upon the first set of capture settings
and the second set of capture settings, the preferences for the capture settings being associated with the user, wherein the
capture settings define aspects of operation for one or more of a processor of the image capturing device, an imaging sensor
of the image capturing device, and/or an optical element of the image capturing device; and

effectuate transmission of instructions to the image capturing device, the instructions including the determined preferences
for the capture settings and being configured to cause the image capturing device to adjust the capture settings to the determined
preferences; and

wherein the image capturing device is configured to adjust the capture settings to the determined preferences based upon current
contextual information that defines one or more current temporal attributes and/or current spatial attributes associated with
the image capture device and current capture settings of the image capturing device.

US Pat. No. 9,491,356

MOTION ESTIMATION AND DETECTION IN A CAMERA SYSTEM ACCELERATOR ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
a decimator configured to decimate the received image data into a plurality of image sub-band components;
a motion detection circuit configured to generate a motion map based on a first subset of the plurality of image sub-band
components;

a motion estimation circuit configured to generate a set of motion vectors based on the generated motion map and a second
subset of the plurality of image sub-band components; and

an output configured to output the received image data to the ISP.

US Pat. No. 9,213,218

HUMIDITY PREVENTION SYSTEM WITHIN A CAMERA HOUSING

GoPro, Inc., San Mateo, ...

1. A humidity prevention system in a camera system, the humidity prevention system comprising:
an external shell of the camera system including a lens barrel, the lens barrel comprising a lens;
a front cover of the camera system coupled to the external shell and coupled to the lens barrel via a first seal, the first
seal creating an airtight seal between the front cover and the lens barrel;

an outer front cover of the camera system coupled to a lens window via a second seal, the second seal creating an airtight
seal between the outer front cover and the lens window, the outer front cover further coupled to the front cover via a third
seal, the third seal creating an airtight seal between the outer front cover and the front cover, the space between the outer
front cover, the lens window, the front cover, and the lens barrel comprising an airtight cavity configured to prevent the
entry of humidity into the airtight cavity; and

a fourth seal coupled to the external shell and the front cover, the fourth seal creating an airtight seal between the external
shell and the front cover.

US Pat. No. 10,089,710

IMAGE CAPTURE ACCELERATOR

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
an image sensor chip configured to capture image data;
a camera memory; and
an image signal processor chip configured to:
receive the captured image data;
perform one or more color space conversion operations on the image data to produce converted image data;
perform one or more encoding operations on the converted image data to produce encoded image data; and
write the encoded image data to the camera memory;
wherein the image signal processor chip, when configured to operate in a standard processing mode, is configured to perform one or more pre-processing operations, including a demosaicing operation, on the received image data before performing the one or more color space conversion operations;
wherein the image signal processor, when configured to operate in an accelerated processing mode, is configured to swizzle the image data and bypass the one or more pre-processing operations.

US Pat. No. 9,591,217

CAMERA SYSTEM ENCODER/DECODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip; and
an image signal processor chip (“ISP”) comprising:
an encoder configured to:
when the camera system is configured to operate in an accelerated capture mode, perform a first encoding operation on image
data to produce first encoded image data and store the encoded image data in memory; and

when the camera system is configured to operate in a normal mode or a standby mode, perform a second encoding operation on
image data to produce second encoded image data and output the second encoded image data; and

a decoder configured to, when the camera system is configured to operate in the standby mode, access the first encoded image
data from the memory and decode the first encoded image data to produce decoded image data, wherein the encoder, when the
camera system is configured to operate in the standby mode, is configured to perform the second encoding operation on the
decoded image data to produce the second encoded image data.

US Pat. No. 9,513,535

CAMERA MOUNT FOR SPORTS BOARD

GoPro, Inc., San Mateo, ...

1. A mounting system comprising:
a top mount portion configured to detachably couple to an electronic device, the top mount portion comprising:
a screw hole component comprising a cylinder with an outer surface configured to receive a screw, the cylinder protruding
perpendicularly outwards from a surface of the top mount portion further than any other portion of the mounting system; and

a plurality of blades each protruding perpendicularly outwards from and abutting both the outer surface of the cylinder and
a surface of the top mount portion such that no gap exists between the outer surface of the cylinder and the blade, each blade
comprising a first face and a second face parallel to the first face, the plurality of blades configured to cut through a
sports equipment object, wherein a width of each blade along the surface of the top mount portion is greater than a diameter
of the screw hole component.

US Pat. No. 10,094,513

QUICK RELEASE BITE MOUNT

GoPro, Inc., San Mateo, ...

1. A bite mount structured to be held in a mouth of a user, the bite mount comprising:
a quick release coupler portion extending from a first end to a second end, the first end configured to couple with a reciprocal buckle component; and
a bite portion having a first rail portion and a second rail portion, a first end of the first rail portion and a first end of the second rail portion extending from the second end of the quick release coupler portion, a second end of the first rail portion diverging away from a second end of the second rail portion, the bite portion including a ridge near the second end of the first rail portion and the second end of the second rail portion and defining an enclosed opening between the ridge and the first and second rail portions, the ridge protruding perpendicular to a plane between a top portion and a bottom portion of the bite portion and relative to the first and second rail portions.

US Pat. No. 9,594,228

THERMAL COMPENSATION TO ADJUST CAMERA LENS FOCUS

GoPro, Inc., San Mateo, ...

1. An integrated image sensor and a camera lens apparatus comprising:
an image sensor substrate comprising an image sensor on an image plane;
a camera lens mount comprising a first material that expands in length with an increase in temperature according to a first
positive coefficient of thermal expansion, the camera lens mount comprising:

a base portion including a lower surface adjacent to the image sensor substrate; and
a tube portion extending from the base portion in a direction of an optical axis substantially perpendicular to the image
plane, the tube portion having a channel;

a lens barrel having a first portion extending into the channel of the tube portion, and a second portion outside the channel
of the tube portion, the lens barrel comprising a second material that expands in length with the increase in temperature
according to a second positive coefficient of thermal expansion;

a lens optical assembly secured by the lens barrel, the lens optical assembly comprising optical characteristics that cause
a negative change in focal length with the increase in temperature according to a negative thermal optical coefficient;

a collet connecting the interior surface of the camera lens mount and an exterior surface of the lens barrel, wherein the
interior surface of the lens mount, the collet and the exterior surface of the lower portion of the lens barrel are each longitudinally
oriented in a direction substantially parallel to the optical axis, the collet to couple the lens barrel to the camera lens
mount, the collet comprising a third material that expands in length with the increase in temperature according to a third
positive coefficient of thermal expansion;

wherein the first material, second material, third material, lengths of the camera lens mount, lens barrel, and collet, and
the thermal optical coefficient of the lens optical assembly are such that the focal plane is maintained in approximate alignment
with the image plane in response to the increase in temperature.
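
The final "wherein" clause is an athermalization condition: the thermally induced change in lens-to-sensor spacing contributed by the mount, barrel, and collet must approximately track the negative change in focal length. The first-order arithmetic below illustrates that balance; every length, coefficient, and sign convention is an assumed number chosen purely for illustration, not a value from the patent.

# Illustrative first-order check only. Positive terms move the optics away from
# the sensor, negative terms toward it; all values are assumed.
dT = 30.0                              # temperature rise, kelvin

# Signed contributions to the lens-to-sensor spacing, meters per kelvin.
mount_term  = +5.0e-3 * 23e-6          # mount tube expansion pushes barrel out
barrel_term = -6.0e-3 * 22e-6          # barrel expansion seats optics back in
collet_term = +0.2e-3 * 60e-6          # collet between barrel and mount

delta_spacing = (mount_term + barrel_term + collet_term) * dT

# Focal-length change from the lens assembly's negative thermal optical coefficient.
focal_length = 3.0e-3                  # meters
thermo_optic = -1.6e-6                 # fractional focal-length change per kelvin (assumed)
delta_focal  = focal_length * thermo_optic * dT

# In an athermalized design these track each other; the residual is the defocus.
print(f"spacing change: {delta_spacing*1e6:+.3f} um")
print(f"focal change:   {delta_focal*1e6:+.3f} um")
print(f"residual:       {(delta_spacing - delta_focal)*1e6:+.3f} um")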

US Pat. No. 9,984,672

NOISE CANCELLATION FOR AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A noise cancellation system for an unmanned aerial vehicle, the system comprising:
an audio capture module configured to receive an audio signal captured from a microphone of a camera of the unmanned aerial vehicle;
a metadata module configured to retrieve noise information associated with at least one noise generating component of the unmanned aerial vehicle, the noise information comprising operational conditions of the at least one noise generating component at a time of capture of the audio signal and a distance of the at least one noise generating component from the audio capture module at the time of capture of the audio signal;
a filter configured to receive the audio signal and the noise information and retrieve a baseline profile based on the noise information for the at least one noise generating component, the filter to filter out noise frequencies identified in the baseline profile from the audio signal to generate a filtered audio signal.
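
The claimed filter removes the noise frequencies identified in a baseline profile retrieved from the noise-generating component's operating conditions. The sketch below looks up a rotor fundamental and harmonics from an assumed RPM and notches them out of the audio spectrum; the profile lookup, notch width, and all parameters are assumptions.

# Illustrative sketch only: narrow FFT notches at the frequencies listed in an
# assumed baseline noise profile.
import numpy as np

def baseline_profile(rpm, distance_m):
    """Assumed lookup: rotor fundamental plus two harmonics; distance-dependent
    amplitude is ignored for simplicity."""
    fundamental = rpm / 60.0
    return [fundamental, 2 * fundamental, 3 * fundamental]

def notch_filter(audio, sample_rate, noise_freqs, width_hz=10.0):
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    for f in noise_freqs:
        spectrum[np.abs(freqs - f) < width_hz] = 0.0   # zero the notch band
    return np.fft.irfft(spectrum, n=len(audio))

sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
filtered = notch_filter(audio, sample_rate, baseline_profile(rpm=12000, distance_m=0.3))
# the 200 Hz rotor fundamental (12000 RPM) is removed; the 1 kHz content remains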

US Pat. No. 9,851,623

MULTI CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A camera mounting assembly for a plurality of cameras, the camera mounting assembly comprising:
a cage structure including a plurality of detachable frames corresponding to each face of the cage structure, each of the
plurality of detachable frames attached to one or more adjacent frames of the plurality of detachable frames via a securing
mechanism at a corner of each of the plurality of detachable frames, each of the plurality of detachable frames comprising
a frame outline around a perimeter of a corresponding face, a lens opening in an interior of the corresponding face, and one
or more arms connected between the frame outline and the lens opening, the lens opening configured to secure around a lens
on a front side of one of the plurality of cameras mounted within the cage structure;

an interior structure located in an interior of the cage structure, the interior structure comprising a connector on one or
more faces configured to couple to a back side of a respective camera of the plurality of cameras mounted within the cage
structure; and

one or more interior structure standoffs comprising a first end attached to a corner of one of the plurality of detachable
frames, and a second end attached to the interior structure to secure the interior structure in the interior of the cage structure.

US Pat. No. 9,769,364

AUTOMATICALLY DETERMINING A WET MICROPHONE CONDITION IN A SPORTS CAMERA

GoPro, Inc., San Mateo, ...

1. A method for determining if a first microphone is wet in a camera system having the first microphone and a second microphone,
wherein the first microphone is positioned in a recess coupled to a drainage channel to drain water from the recess away from
the first microphone, and wherein the second microphone is positioned away from the drainage channel, the method comprising:
capturing a first audio signal by the first microphone and capturing a second audio signal by the second microphone;
detecting, by a wind detector, if a wind level in an operating environment of the camera system is below a predefined wind
threshold while capturing the first audio signal and the second audio signal;

determining, by a processor, a first average signal level of the first audio signal and a second average signal level of the
second audio signal over a predefined time interval;

determining, by the processor, a ratio of the first average signal level to the second average signal level;
responsive to the ratio of the first average signal level to the second average signal level being below a predefined ratio
threshold and the wind level being below the predefined wind threshold, determining, by the processor, that the first microphone
is wet; and

outputting an indication that the first microphone is wet.
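
The decision rule in this claim reduces to two comparisons: the wind level must be below its threshold and the ratio of the two microphones' average signal levels must be below the ratio threshold. A minimal sketch follows, assuming RMS as the average signal level; the thresholds are illustrative.

# Illustrative sketch only: averaged signal levels, a ratio check, and a
# low-wind gate.
import numpy as np

RATIO_THRESHOLD = 0.5      # drained mic reads much quieter than reference mic
WIND_THRESHOLD = 0.1

def average_level(signal):
    return float(np.sqrt(np.mean(np.square(signal))))   # RMS over the interval

def first_mic_is_wet(mic1, mic2, wind_level):
    if wind_level >= WIND_THRESHOLD:
        return False                     # only decide when wind is low
    ratio = average_level(mic1) / (average_level(mic2) + 1e-12)
    return ratio < RATIO_THRESHOLD

mic1 = 0.05 * np.random.randn(48000)     # muffled: water covering the port
mic2 = 0.50 * np.random.randn(48000)     # reference microphone
print(first_mic_is_wet(mic1, mic2, wind_level=0.02))   # True: indicate wet mic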

US Pat. No. 9,686,493

IMAGE CAPTURE ACCELERATOR

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
an image sensor chip configured to capture a plurality of frames at a first frame rate;
an image signal processor chip configured to process the plurality of frames captured at a first frame rate at a first processing
rate; and

an accelerator chip coupled between the image sensor chip and the image signal processor chip and configured to receive the
plurality of frames from the image sensor chip, and comprising:

a capture mode input configured to receive information from the camera identifying whether the plurality of frames captured
at the first frame rate was captured in a first capture mode or a second capture mode;

accelerator circuitry configured to process the plurality of frames captured at the first frame rate at a second processing
rate faster than the first processing rate; and

a demultiplexor configured to receive the plurality of frames from the image sensor chip, to provide the plurality of frames
to the image signal processor in response to a determination that the plurality of frames was captured in the first capture
mode, and to provide the plurality of frames to the accelerator circuitry in response to a determination that the plurality
of frames was captured in the second capture mode.

US Pat. No. 9,432,561

CAMERA HEAT SINK

GoPro, Inc., San Mateo, ...

7. A camera system, comprising:
a camera body having a camera lens structured on a front surface of the camera body;
a thermally conductive material exposed on an external face of the camera body, the thermally conductive material to transfer
heat to the external face of the camera body;

a removable heat sink for removably coupling to the thermally conductive material; and
a housing structured to at least partially enclose the camera body, the housing structured to enable at least a portion of
the heat sink to protrude through the housing;

electronics internal to the camera body, the electronics for capturing images via the camera lens, and the electronics thermally
coupled to the thermally conductive material to transfer heat produced by the electronics to the thermally conductive material,
wherein the electronics comprise a processor adapted to detect when the removable heat sink is attached to the camera, and
wherein the processor is adapted to operate the camera in a first mode responsive to detecting the removable heat sink is
not attached to the camera and to operate the camera in a second mode responsive to detecting the removable heat sink is attached
to the camera.

US Pat. No. 9,842,381

GLOBAL TONE MAPPING

GoPro, Inc., San Mateo, ...

11. A specially-configured hardware system for generating a tone mapped image configured to:
access an image captured by an image sensor, the accessed image comprising, for each image pixel of the accessed image, a corresponding set of luminance values each representative of a color component of the pixel;
generate a first histogram for aggregate luminance values of the image;
access a target histogram for the image representative of a desired global image contrast;
compute a transfer function based on the first histogram and the target histogram such that when the transfer function is applied to the aggregate luminance values of the image to create a set of modified aggregate luminance values, a histogram of the modified aggregate luminance values is within a threshold similarity of the target histogram;
modify the accessed image by applying the transfer function to the set of luminance values corresponding to each pixel of the image to produce a tone mapped image; and
output the modified image.
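
The claimed transfer function is, in effect, histogram specification: map the image's luminance histogram onto the target histogram by matching cumulative distributions. The sketch below shows that standard construction; the bin count, the flat target, and the luminance aggregation are assumptions, not the patented computation.

# Illustrative sketch only: a histogram-matching transfer function applied as a
# per-pixel lookup table.
import numpy as np

def transfer_function(source_hist, target_hist):
    """Map source luminance bins to target bins by matching CDFs."""
    src_cdf = np.cumsum(source_hist) / np.sum(source_hist)
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    lut = np.searchsorted(tgt_cdf, src_cdf)
    return np.clip(lut, 0, len(target_hist) - 1).astype(np.uint8)

def tone_map(image_luma, target_hist, bins=256):
    source_hist, _ = np.histogram(image_luma, bins=bins, range=(0, bins))
    lut = transfer_function(source_hist, target_hist)
    return lut[image_luma]                       # apply transfer function per pixel

luma = np.clip(np.random.normal(60, 20, (120, 160)), 0, 255).astype(np.uint8)
flat_target = np.ones(256)                       # target: uniform global contrast
mapped = tone_map(luma, flat_target)
print(luma.mean(), mapped.mean())                # mapped luminance spreads out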

US Pat. No. 10,095,696

SYSTEMS AND METHODS FOR GENERATING RECOMMENDATIONS OF POST-CAPTURE USERS TO EDIT DIGITAL MEDIA CONTENT FIELD

GoPro, Inc., San Mateo, ...

1. A system that generates recommendations of post-capture users to edit digital media content, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
obtain contextual parameters of digital media content, the digital media content being associated with a content capture user and/or an end user, the contextual parameters defining one or more temporal attributes and/or spatial attributes associated with capture of the digital media content;
receive editing parameters selected by the content capture user and/or the end user, the editing parameters defining one or more editing attributes of an edited version of the digital media content to be created;
obtain post-capture user profiles, individual post-capture user profiles including expertise attributes associated with individual post-capture users, the expertise attributes including stated information and feedback information, the stated information being provided by the post-capture users themselves and the feedback information including information provided by one or more of content capture users and/or end users for whom the individual post-capture users have created edited versions of other digital media content;
identify a set of post-capture users as potential matches for creating the edited version of the digital media content based upon the contextual parameters, the editing parameters, and the one or more expertise attributes of the post-capture user profiles; and
effectuate presentation of the set of post-capture users to the content capture user and/or the end user for selection by the content capture user and/or the end user of one of the post-capture users from the set of post-capture users to create the edited version of the digital media content.

US Pat. No. 10,031,396

TUNABLE POLYMER DISPERSED LIQUID CRYSTAL LENS DOUBLET

GoPro, Inc., San Mateo, ...

1. A camera system including
an image sensor assembly configured to capture images, the image sensor assembly centered about an optical axis;
a tunable optical element for focusing light onto the image sensor assembly, the tunable optical element being substantially cylindrical having a bottom side and a top side centered about the optical axis comprising:
a first layer of a first material, the first layer having a first refractive index associated with the first material,
a second layer of a second material, wherein a polarization of the second material is controllable by an applied electric field, wherein the second layer has a second refractive index controllable by the polarization of the second material,
wherein the first and second layers are layered in a layer stack of the tunable optical element, and
wherein the second layer has an arc-shaped cross section forming a spherical cap, the spherical cap having its respective peak aligned with the optical axis;
wherein the first layer has a second cross section forming a reciprocal shape to the spherical cap; and
a first control element coupled to the tunable optical element for applying a voltage differential to the first and second layers, the voltage differential creating the electric field controlling the second refractive index of the second layer of the tunable optical element.

US Pat. No. 9,922,398

SYSTEMS AND METHODS FOR GENERATING STABILIZED VISUAL CONTENT USING SPHERICAL VISUAL CONTENT

GoPro, Inc., San Mateo, ...

1. A system for generating stabilized visual content using spherical visual content, the system comprising:
one or more physical processors configured by machine readable instructions to:
obtain the spherical visual content, the spherical visual content captured by one or more image sensors during a time duration,
the spherical visual content including phenomena caused by motion of the one or more image sensors and/or one or more optical
components that guide light onto the one or more image sensors during at least a part of the time duration, the spherical
visual content including pixels represented in an image space, the image space including a projection point inside the image
space, wherein the spherical visual content is transformed into a spherical projection space by projecting the pixels in the
image space to the spherical projection space along lines including the projection point;

determine a capture path taken by the one or more image sensors during the time duration, the capture path reflecting positions
and orientations of the one or more image sensors during the time duration, the capture path including capture viewpoints
from which the one or more image sensors captured the spherical visual content during the time duration, the capture path
including a first capture viewpoint from which the spherical visual content was captured at a first point in time within the
time duration;

determine a smoothed path based on the capture path, the smoothed path having smoother changes in positions and/or orientations
than the capture path, the smoothed path including smoothed viewpoints, the smoothed path including a first smoothed viewpoint
at the first point in time within the time duration;

warp the image space based on a difference between the capture path and the smoothed path, the difference between the capture
path and the smoothed path including a difference between the positions of the first capture viewpoint and the first smoothed
viewpoint at the first point in time;

determine the stabilized visual content by projecting the spherical visual content represented in the warped image space to
the spherical projection space, wherein views of the stabilized visual content appear to be from the smoothed viewpoints such
that a view of the stabilized visual content corresponding to the first point in time appears to be from the first smoothed
viewpoint; and

effectuate presentation of the stabilized visual content on a display.
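
At the core of this claim is the difference between a jittery capture path and a smoothed path, which then drives the image-space warp. The sketch below smooths a path of camera positions with a moving average and takes the per-frame difference; orientations, the spherical projection, and the warp itself are omitted, and the window size is an assumption.

# Illustrative sketch only: moving-average smoothing of camera positions and the
# per-frame correction that would drive the warp.
import numpy as np

def smooth_path(capture_path, window=9):
    """Moving-average smoothing of an (N, 3) array of camera positions."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(capture_path, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(capture_path.shape[1])], axis=1)

capture_path = np.cumsum(np.random.randn(100, 3) * 0.02, axis=0)   # jittery walk
smoothed = smooth_path(capture_path)
correction = smoothed - capture_path       # per-frame difference used for warping
print(correction.shape)                    # (100, 3): one offset per viewpoint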

US Pat. No. 10,033,915

CAMERA PERIPHERAL DEVICE FOR SUPPLEMENTAL AUDIO CAPTURE AND REMOTE CONTROL OF CAMERA

GoPro, Inc., San Mateo, ...

1. A peripheral device, comprising:
a wireless communication interface to wirelessly communicate with a camera;
one or more microphones to capture ambient audio including voice commands;
a processor to recognize a highlight voice command captured by the one or more microphones, to transmit a highlight tag control signal to control the camera based on the recognized highlight voice command to record a metadata tag at a particular time location within a video when the highlight tag control signal is received by the camera, and to receive an acknowledgment message in response to the camera executing the recognized highlight voice command; and
a feedback mechanism to provide a feedback signal in response to receiving the acknowledgement message, the feedback signal indicative of the recognized highlight voice command.

US Pat. No. 10,033,928

APPARATUS AND METHODS FOR ROLLING SHUTTER COMPENSATION FOR MULTI-CAMERA SYSTEMS

GoPro, Inc., San Mateo, ...

1. A computerized system configured to obtain composite images, the system comprising:
a processor adapted to execute a plurality of computer instructions; and
non-transitory storage medium including the plurality of the computer instructions which, when executed by the processor, cause the processor to:
obtain component images, the component images including a first component image comprised of a first plurality of pixels captured by a first imaging sensor and a second component image comprised of a second plurality of pixels captured by a second imaging sensor, the component images captured by the first imaging sensor and the second imaging sensor on a row-by-row basis, the row-by-row capture of the component images resulting in capture of individual rows of the component images at different acquisition times;
generate a first composite image by performing a first transformation operation on the component images;
for pixels in the first composite image, determine corresponding rows in the component images such that for a first set of pixels in the first composite image a first row of the first component image is determined to be corresponding, and for a second set of pixels in the first composite image a second row of the second component image is determined to be corresponding;
determine acquisition times of the component images associated with row locations corresponding to the pixels in the first composite image such that a first acquisition time is determined for the first row of the first component image and a second acquisition time is determined for the second row of the second component image;
determine orientations of the first imaging sensor and the second imaging sensor based on the acquisition times and orientation information of the first imaging sensor and the second imaging sensor such that a first orientation of the first imaging sensor is determined for capture of the first row of the first component image by the first imaging sensor based on the first acquisition time and a second orientation of the second imaging sensor is determined for capture of the second row of the second component image by the second sensor based on the second acquisition time; and
perform a second transformation operation on the component images based on the first orientation of the first imaging sensor and the second orientation of the second imaging sensor to generate a second composite image such that the second transformation operation compensates for a difference between the first acquisition time of the first row of the first component image and the second acquisition time of the second row of the second component image and orientations of the first imaging sensor and the second image sensor during capture of the corresponding rows in the component images.

US Pat. No. 10,002,641

SYSTEMS AND METHODS FOR DETERMINING HIGHLIGHT SEGMENT SETS

GoPro, Inc., San Mateo, ...

1. A system configured for determining highlight segment sets, the system comprising:
one or more processors configured by machine-readable instructions to:
obtain content files that define content in content segment sets, the content segment sets including a first content segment set that includes a first content segment and a second content segment;
determine individual highlight segment sets of content segments from the content segment sets, wherein determining a first highlight segment set of content segments included in the first content segment set includes:
(a) selecting an individual content segment included in the first content segment set as a selected content segment for inclusion in the first highlight segment set based on selection criterion, the selection criterion including a first selection criterion;
(b) determining diversity scores for content segments that are (i) included in the first content segment set and (ii) not yet selected for inclusion in the first highlight segment set, the diversity scores indicating a level of similarity between the individual content segments not yet selected for inclusion in the first highlight segment set and the selected content segment selected at operation (a); and
(c) disqualifying one or more of the content segments for inclusion in the first highlight segment set for future iterations based on the diversity scores,
iterate (a)-(c) for multiple iterations to determine the first highlight segment set.
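
Operations (a) through (c) describe a greedy pick-then-disqualify loop: select the best remaining segment, score the rest for similarity against it, and drop the near-duplicates before iterating. The sketch below follows that loop, assuming an interest score as the selection criterion and cosine similarity as the diversity score; all names and thresholds are illustrative.

# Illustrative sketch only: greedy highlight selection with diversity-based
# disqualification.
import numpy as np

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def build_highlight_set(segments, max_picks=3, similarity_cutoff=0.95):
    remaining = list(segments)
    highlights = []
    while remaining and len(highlights) < max_picks:
        # (a) select the most interesting remaining segment
        pick_index = max(range(len(remaining)), key=lambda i: remaining[i]["interest"])
        pick = remaining.pop(pick_index)
        highlights.append(pick)
        # (b) score diversity of each unselected segment against the pick and
        # (c) disqualify segments that are too similar for future iterations
        remaining = [s for s in remaining
                     if similarity(s["features"], pick["features"]) < similarity_cutoff]
    return highlights

segments = [{"id": i, "interest": np.random.rand(),
             "features": np.random.rand(8)} for i in range(10)]
print([s["id"] for s in build_highlight_set(segments)])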

US Pat. No. 9,922,387

STORAGE OF METADATA AND IMAGES

GoPro, Inc., San Mateo, ...

1. A computerized capture system for obtaining an imaging content and metadata, the system comprising:
an imaging sensor configured to generate output signals conveying a series of images of the imaging content of an activity;
a sensor interface configured to obtain information from one or more sensors other than the imaging sensor, the obtained information
being relevant to the activity, the one or more sensors other than the imaging sensor including a first sensor;

an information storage configured to store metadata comprised of information provided by at least the first sensor; and
one or more processors configured to:
detect a first event indicative of commencement of the activity and a metadata acquisition session during which metadata of
the activity is acquired;

detect a second event indicative of commencement of acquisition of a first portion of the imaging content, the first portion
of the imaging content acquired for a first time interval, the first portion including at least some images of the series
of images, wherein the second event is different from the first event and is detected during the metadata acquisition session;

detect a third event indicative of commencement of acquisition of a second portion of the imaging content, the second portion
of the imaging content acquired for a second time interval, the second portion including at least some images of the series
of images, wherein the third event is different from the first event and the second event and is detected during the metadata
acquisition session;

detect a fourth event indicative of cessation of the activity and the metadata acquisition session;
produce a first session file comprising the metadata of the activity acquired during the metadata acquisition session between
the first event and the fourth event and a first link to the first portion of the imaging content; and

produce a second session file comprising the metadata of the activity acquired during the metadata acquisition session between
the first event and the fourth event and a second link to the second portion of the imaging content;

wherein the metadata acquisition session between the first event and the fourth event is configured no smaller than the first
time interval or the second time interval.

US Pat. No. 9,866,759

SMART SHUTTER IN LOW LIGHT

GoPro, Inc., San Mateo, ...

1. A method for controlling shutter speed and digital gain of a digital camera, comprising:
detecting, by a camera controller, if a luminance level of light entering the digital camera is below a predefined luminance
threshold;

detecting, by a camera controller, if motion meeting a predefined motion criteria is present in image frames captured by the
digital camera;

responsive to detecting that the luminance level is not below the predefined luminance threshold or determining that the motion
meeting the predefined motion criteria is not present in the image frames, controlling, by the camera controller, the digital
camera to operate with a default shutter speed and a default digital gain;

responsive to detecting that the motion meeting the one or more predefined motion criteria is present in the image frames
captured by the digital camera, controlling, by the camera controller, the digital camera to operate with an adjusted shutter
speed and an adjusted digital gain, the adjusted shutter speed adjusted in a first direction from the default shutter speed
and the adjusted digital gain adjusted in a second direction from the default digital gain, the second direction opposite
the first direction.

US Pat. No. 9,818,169

ON-CHIP UPSCALING AND DOWNSCALING IN A CAMERA ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an accelerator chip comprising:
a decimator configured to decimate the image data into a plurality of image sub-band components;
a downscale engine configured to downscale the image data using one or more of the image sub-band components; and
an upscale engine configured to upscale the image data using one or more of the image sub-band components; and
an image signal processor chip configured to process image data outputted by the image sensor chip or the accelerator chip
and to output the processed image data.

US Pat. No. 9,779,777

SYNCHRONIZATION OF FRAME RATE TO A DETECTED CADENCE IN A TIME LAPSE IMAGE SEQUENCE USING SAMPLING

GoPro, Inc., San Mateo, ...

1. A method for synchronization of a frame rate to a detected cadence, the method comprising:
receiving a sequence of image frames having a first frame rate;
receiving motion data of a camera recorded by one or more sensors tracking motion of the camera while the camera captures
the sequence of image frames;

converting the motion data in a particular window of time from time domain data to frequency domain data;
determining a dominant frequency having the highest magnitude value in the frequency domain data for the particular window
of time;

sampling frames from the sequence of image frames in the particular window of time at a sampling frequency that is
within a predefined tolerance of the dominant frequency; and

creating a new image sequence with the sampled frames.
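
The claim above lends itself to a brief numerical sketch: take the FFT of the camera-motion data in a window, pick the strongest frequency, and keep roughly one frame per cadence period. The window handling, the use of numpy, and the step-based approximation of sampling at the dominant frequency are assumptions for illustration only.

# Illustrative sketch of the cadence-locked sampling described in this claim.
# The window length, tolerance handling, and use of numpy's real FFT are assumptions.
import numpy as np

def dominant_frequency(motion_samples, sample_rate_hz):
    """Return the frequency with the highest magnitude in the motion data."""
    samples = np.asarray(motion_samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    spectrum[0] = 0.0                      # ignore any residual DC component
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

def sample_to_cadence(frames, frame_rate_hz, motion_samples, motion_rate_hz):
    """Keep roughly one frame per detected cadence period to build the new sequence."""
    cadence_hz = dominant_frequency(motion_samples, motion_rate_hz)
    if cadence_hz <= 0:
        return list(frames)
    step = max(1, round(frame_rate_hz / cadence_hz))   # frames per cadence period
    return list(frames[::step])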

US Pat. No. 9,571,741

SMART SHUTTER IN LOW LIGHT

GoPro, Inc., San Mateo, ...

1. A method for controlling shutter speed and digital gain of a digital camera, comprising:
detecting, by a luminance sensor of the digital camera, a luminance level of light entering the digital camera;
determining if the luminance level is below a predefined luminance threshold;
responsive to determining that the luminance level is below the predefined luminance threshold, detecting if motion meeting
predefined motion criteria is present in image frames captured by the digital camera;

responsive to determining that the luminance level is not below the predefined luminance threshold or determining that the
motion meeting the predefined motion criteria is not present in the image frames, controlling the digital camera to operate
with a default shutter speed and a default digital gain;

responsive to detecting that the motion meeting the predefined motion criteria is present in the image frames
captured by the digital camera, controlling the digital camera to operate with an adjusted shutter speed and an adjusted digital
gain, the adjusted shutter speed and the adjusted digital gain resulting in a same exposure value as the default shutter speed
and the default digital gain.
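
The distinguishing limitation here is that the adjusted shutter speed and adjusted digital gain yield the same exposure value as the defaults. A minimal arithmetic sketch, assuming exposure can be approximated as shutter time multiplied by digital gain, is shown below; the factor of two is arbitrary.

# Illustrative arithmetic for the constant-exposure constraint in this claim,
# assuming exposure is approximated as shutter time multiplied by digital gain.

def adjust_for_motion(default_shutter_s, default_gain, shutter_divisor=2.0):
    """Shorten the shutter and scale gain so the nominal exposure is unchanged."""
    adjusted_shutter = default_shutter_s / shutter_divisor
    adjusted_gain = default_gain * shutter_divisor
    # exposure check: shutter * gain is preserved
    assert abs(adjusted_shutter * adjusted_gain - default_shutter_s * default_gain) < 1e-12
    return adjusted_shutter, adjusted_gain

# Example: 1/30 s at gain 1.0 becomes 1/60 s at gain 2.0, same nominal exposure.
print(adjust_for_motion(1 / 30, 1.0))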

US Pat. No. 9,557,738

RETURN PATH CONFIGURATION FOR REMOTE CONTROLLED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A method to control automatic return of an aerial vehicle, the method comprising:
tracking a flight path of the aerial vehicle represented by a plurality of location data points;
generating a return path from the plurality of location data points of the tracked flight path, wherein the return path comprises
determining a route from a current location of the aerial vehicle to an initial takeoff location of the aerial vehicle;

storing a redundant route in the generated return path, the redundant route traced around an obstacle by at least one repeated
location data point in the plurality of location data points;

monitoring one or more sensors during flight of the aerial vehicle for detection of a predefined condition;
detecting whether the predefined condition has been met;
loading, by a processor, a return path into a memory of the aerial vehicle in response to the detected predefined condition
being met, the loaded return path including the generated return path;

generating, by the processor, a modified return path to a return location by revising the loaded return path to avoid the
stored redundant route traced around the obstacle, the modified return path omitting the at least one repeated location data
point from the loaded return path; and

controlling, by the processor, the aerial vehicle to automatically navigate to the return location according to the modified
return path to avoid the obstacle.
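
The pruning step in this claim, omitting repeated location data points so the return path skips a loop traced around an obstacle, can be sketched as follows. Treating a repeated point as an exact waypoint repeat, and the simple jump to its last occurrence, are simplifying assumptions.

# Illustrative sketch of pruning a redundant loop from a recorded flight path.
# Treating a "repeated location data point" as an exact repeat of a waypoint is
# an assumption made for simplicity.

def prune_redundant_route(waypoints):
    """Remove the detour between the first and last occurrence of any repeated point.

    waypoints: list of (lat, lon) tuples recorded along the flight path.
    Returns a shortened path that skips the loop traced around an obstacle.
    """
    last_seen = {}
    for index, point in enumerate(waypoints):
        last_seen[point] = index        # remember the last index of each point
    pruned = []
    i = 0
    while i < len(waypoints):
        point = waypoints[i]
        pruned.append(point)
        # Jump past any loop that returns to this same point later on.
        i = max(i, last_seen[point]) + 1
    return pruned

# A path that loops around an obstacle and returns to (2, 2):
path = [(0, 0), (1, 1), (2, 2), (3, 2), (3, 3), (2, 3), (2, 2), (4, 4)]
print(prune_redundant_route(path))    # [(0, 0), (1, 1), (2, 2), (4, 4)]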

US Pat. No. 10,148,882

SYSTEM AND METHOD FOR FRAME CAPTURING AND PROCESSING

GoPro, Inc., San Mateo, ...

1. A system that captures and processes frames of frame data, comprising:
an image sensor that captures frames of frame data representative of light incident upon the image sensor using a rolling shutter and that outputs the frames of frame data, wherein the image sensor captures at least one of the frames over a frame capture interval and then waits over a blanking interval before capturing another frame;
a buffer that receives and stores the frames output by the image sensor; and
an image signal processor that retrieves the frames from the buffer and processes the frames over successive frame processing intervals to generate a video having a time interval per frame greater than the frame capture interval, wherein at least one of the successive frame processing intervals is greater than the frame capture interval and is less than or equal to a sum of the frame capture interval and the blanking interval.
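
The timing relationship the claim relies on, a frame processing interval longer than the frame capture interval but no longer than the capture interval plus the blanking interval, can be checked with a one-line predicate. The millisecond values below are invented for illustration.

# Illustrative check of the timing relationship stated in this claim, with
# made-up interval values (milliseconds); the numbers are not from the patent.

FRAME_CAPTURE_MS = 8.0       # rolling-shutter readout time per frame
BLANKING_MS = 25.0           # idle time between frame readouts
FRAME_PROCESSING_MS = 30.0   # ISP processing time per frame

def timing_is_consistent(capture_ms, blanking_ms, processing_ms):
    """True when the processing interval fits the window the claim describes:
    longer than the capture interval, but no longer than capture plus blanking."""
    return capture_ms < processing_ms <= capture_ms + blanking_ms

print(timing_is_consistent(FRAME_CAPTURE_MS, BLANKING_MS, FRAME_PROCESSING_MS))  # True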

US Pat. No. 10,148,939

MAPPING OF SPHERICAL IMAGE DATA INTO RECTANGULAR FACES FOR TRANSPORT AND DECODING ACROSS NETWORKS

GoPro, Inc., San Mateo, ...

1. A method for mapping spherical images to a 2D projection of a cubic representation of a spherical field of view (FOV) comprising:
capturing a first hemispherical image and a second hemispherical image, each of the first hemispherical image and the second hemispherical image including an overlap portion, the overlap portions capturing a same field of view, the first and second hemispherical images collectively comprising a spherical FOV and separated along a longitudinal plane;
mapping a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, wherein the mapping of the modified first hemispherical image to the first portion of the 2D projection comprises:
warping the modified first hemispherical image into a first square image;
extracting a first central portion from the first square image, the first central portion being a square image;
dividing a remainder of the first square image into four remaining portions of the first square image, the four remaining portions of the first square image having equal size;
mapping the first central portion to a first face of the 2D projection of the cubic image; and
mapping each of the four remaining portions of the first square image to one half of each of a set of four faces of the 2D projection of the cubic image, the set of four faces of the 2D projection of the cubic image including a second, third, fourth, and fifth face of the 2D projection of the cubic image;
mapping a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image including a non-overlap portion of the second hemispherical image;
mapping the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image; and
encoding the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.

US Pat. No. 10,031,314

PRISM-BASED FOCAL PLANE ADJUSTMENT FOR THERMAL COMPENSATION IN A LENS ASSEMBLY

GoPro, Inc., San Mateo, ...

1. An integrated sensor and lens assembly, comprising:
an image sensor;
a lens mount coupled to the image sensor;
a lens barrel secured by the lens mount, the lens barrel comprising a first material that expands or contracts in response to a temperature change according to a first coefficient of thermal expansion, wherein expansion or contraction of the lens barrel causes a first shift in a focal plane in a first direction along an optical axis substantially perpendicular to the focal plane; and
at least two optical components intersecting the optical axis, the at least two optical components comprising a second material that expands or contracts in response to the temperature change according to a second coefficient of thermal expansion to cause a first change in combined thickness of the at least two optical components along the optical axis;
multiple passively actuating elements coupled between the lens mount and the at least two optical components, the multiple passively actuating elements comprising a third material that expands or contracts in response to the temperature change according to a third coefficient of thermal expansion, wherein expansion or contraction of the multiple passively actuating elements causes a second change in combined thickness of the at least two optical components along the optical axis; and
wherein the second material, third material, and geometry of the at least two optical components are configured such that the first change in combined thickness and the second change in combined thickness of the at least two optical components cause a second shift in the focal plane in a second direction along the optical axis opposite the first direction.

US Pat. No. 9,973,695

SYSTEMS AND METHODS FOR CAPTURING STITCHED VISUAL CONTENT

GoPro, Inc., San Mateo, ...

1. A system for capturing stitched visual content, the system comprising:
a set of image sensors configured to generate visual output signals conveying visual information within a capture field, the set of image sensors comprising:
a first image sensor including a first array of photosites, the first image sensor having a first edge and a second edge that is opposite of the first edge, the first array of photosites having a first set of rows arrayed between the first edge and the second edge, the first set of rows including a first row adjacent to a second row and a third row adjacent to the second row, the first image sensor configured to generate first visual output signals conveying first visual information within a first portion of the capture field based on light incident on the first array of photosites, the first image sensor generating the first visual output signals sequentially across the first set of rows such that the first visual output signals are generated row by row from the first edge to the second edge, wherein the sequential generation of the first visual output signals:
causes the first visual information to be defined by the light incident on the first array of photosites at different times; and
defines a first rolling shutter direction for the first portion of the capture field, the first rolling shutter direction indicating a first direction in which the first visual information is defined across the first portion as a function of time; and
a second image sensor including a second array of photosites, the second image sensor having a third edge and a fourth edge that is opposite of the third edge, the second array of photosites having a second set of rows arrayed between the third edge and the fourth edge, the second set of rows including a fourth row adjacent to a fifth row and a sixth row adjacent to the fifth row, the second image sensor configured to generate second visual output signals conveying second visual information within a second portion of the capture field based on light incident on the second array of photosites, the second image sensor generating the second visual output signals sequentially across the second set of rows such that the second visual output signals are generated row by row from the third edge to the fourth edge, wherein the sequential generation of the second visual output signals:
causes the second visual information to be defined by the light incident on the second array of photosites at different times; and
defines a second rolling shutter direction for the second portion of the capture field, the second rolling shutter direction indicating a second direction in which the second visual information is defined across the second portion as the function of time;
wherein the first portion is adjacent to the second portion, and the first rolling shutter direction is parallel to and same as the second rolling shutter direction; and
one or more physical processors configured by machine-readable instructions to:
obtain a first image based on the first visual information;
obtain a second image based on the second visual information; and
generate a stitched image based on the first image and the second image.

US Pat. No. 9,838,730

SYSTEMS AND METHODS FOR AUDIO TRACK SELECTION IN VIDEO EDITING

GoPro, Inc., San Mateo, ...

1. A system that automatically edits video clips to
synchronize accompaniment by similar musical tracks, the system comprising:
one or more non-transitory storage media storing video content and initial instructions defining a preliminary version of
a video clip made up from the stored video content, the initial instructions indicating specific portions of the video content
to be included in the preliminary version of the video clip and an order in which the specific portions of the video content
should be presented, the video clip divided into video segments, the video segments including a first video segment and a
second video segment; and

one or more physical processors configured by machine readable instructions to:
determine occurrences of video events within the preliminary version of the video clip, the individual occurrences of video
events corresponding to different moments within the preliminary version of the video clip;

access a repository of musical tracks, the repository of musical tracks including a first musical track, a second musical
track, a third musical track, a first set of audio event markers, a second set of audio event markers, and a third set of
audio event markers, the individual musical tracks including one or more audio event markers, wherein the individual audio
event markers of the first set of audio event markers correspond to different moments within the first musical track, the
individual audio event markers of the second set of audio event markers correspond to different moments within the second
musical track, and the individual audio event markers of the third set of audio event markers correspond to different moments
within the third musical track;

effectuate presentation of at least some of the musical tracks on a graphical user interface of a video application for selection
by a user to use as accompaniment for the first video segment of the video clip, the at least some of the musical tracks including
the first musical track;

determine first revised instructions defining a first revised version of the video clip that is synchronized with the first
musical track so that one or more moments within the first video segment of the video clip corresponding to one or more occurrences
of video events are aligned with one or more moments within the first musical track corresponding to one or more audio event
markers;

responsive to the user's selection of the first musical track as accompaniment for the first video segment of the video clip:
effectuate playback of the first revised version of the video clip along with the first musical track as accompaniment; and
identify other musical tracks similar to the first musical track based on an audio characteristic parameter of the first musical
track and audio characteristic parameters of the other musical tracks, the other musical tracks including the second musical
track and the third musical track, wherein the audio characteristic parameters define one or more characteristics of the first
musical track and the other musical tracks; and

effectuate presentation of the other musical tracks similar to the first musical track on the graphical user interface of
the video application for selection by the user to use as accompaniment for the second video segment of the video clip;

register the user's rejection of the third musical track as accompaniment for the second video segment of the video clip;
and

responsive to the user's rejection of the third musical track as accompaniment for the second video segment of the video clip,
remove the musical tracks similar to the third musical track from the graphical user interface based on an audio characteristic
parameter of the third musical track and audio characteristic parameters of the musical tracks similar to the third musical
track, wherein the musical tracks similar to the third musical track include the second musical track.

US Pat. No. 9,794,632

SYSTEMS AND METHODS FOR SYNCHRONIZATION BASED ON AUDIO TRACK CHANGES IN VIDEO EDITING

GoPro, Inc., San Mateo, ...

1. A system that automatically edits video clips to synchronize accompaniment by different musical tracks, the system comprising:
one or more storage media storing video content and first instructions defining a first version of a video clip made up from
the stored video content, the first instructions indicating specific portions of the video content to be included in the first
version of the video clip and an order in which the specific portions of the video content should be presented, the specific
portions of the video content including a first portion and a second portion, the second portion following the first portion
in the first version of the video clip, wherein:

the first version of the video clip includes one or more occurrences of video events, the video events corresponding to particular
visuals within the video clip, the individual occurrences of the video events corresponding to different moments within the
first version of the video clip;

the first portion of the video content includes a first video event occurring at a first moment within the first version of
the video clip;

the second portion of the video content includes a second video event occurring at a second moment within the first version
of the video clip; and

the first version of the video clip is synchronized with a first musical track, the first musical track providing an accompaniment
for the video clip, the first musical track characterized by first musical track audio event markers including a first audio
event marker occurring at a third moment within the first musical track and a second audio event marker occurring at a fourth
moment within the first musical track, the fourth moment occurring later in the first musical track than the third moment,
the individual first musical track audio event markers corresponding to different moments within the first musical track and
characterizing audio characteristics of the first musical track at the different moments,

wherein:
the first version of the video clip is synchronized with the first musical track such that the first moment corresponding
to the first video event is aligned to the third moment corresponding to the first audio event marker and the second moment
corresponding to the second video event is aligned to the fourth moment corresponding to the second audio event marker; and

a boundary between the first portion of the video content and the second portion of the video content in the first version
of the video clip is located at a beat of the first musical track at or near a mid-point of the first video event and the
second video event; and

one or more physical processors configured by machine readable instructions to:
determine that a second musical track has been selected to replace the first musical track as the accompaniment for the video
clip; and

determine second instructions defining a second version of the video clip that is synchronized with the second musical track
so that one or more moments within the video clip corresponding to one or more of the occurrences of the video events are
aligned with one or more moments within the second musical track corresponding to one or more second musical track audio event
markers.

US Pat. No. 9,792,667

TARGET-LESS AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for determining a pixel shift between an image pair captured by image sensors, the method
comprising:
accessing a first image and a second image of the image pair captured at a substantially same time, the images comprising
image data representative of an overlapping field of view between the image sensors;

performing a smoothing operation on the first image and the second image to produce smoothed image data;
determining edge magnitude of the smoothed image data based on derivatives along at least a row and a column of pixel luma components
in the smoothed image data;

determining edge phase of the smoothed image data based on an inverse tangent of the derivatives of the pixel luma components;
identifying, by one or more processors, one or more edges in the image data based, at least in part, on the edge magnitude
and the edge phase of the smoothed image data;

matching the identified one or more edges in the image data corresponding to the first image to the identified one or more
edges in the image data corresponding to the second image; and

determining a pixel shift between the image pair based, at least in part, on the matching of edges.
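
The edge magnitude and edge phase computation described in this claim maps naturally onto array gradients: magnitude from the row and column derivatives of the smoothed luma, phase from their inverse tangent. The box-blur smoothing, the numpy gradient operator, and the edge threshold in the sketch below are assumptions; edge matching and the final pixel-shift estimate are not shown.

# Illustrative sketch of the edge magnitude/phase computation in this claim,
# using numpy gradients on a smoothed luma plane; the smoothing kernel and the
# edge threshold are assumptions.
import numpy as np

def smooth(plane):
    """Simple 3x3 box blur as a stand-in for the claimed smoothing operation."""
    padded = np.pad(plane, 1, mode="edge")
    h, w = plane.shape
    return sum(padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

def edge_magnitude_and_phase(luma):
    """Edge magnitude and phase from row/column derivatives of the smoothed luma."""
    smoothed = smooth(np.asarray(luma, dtype=float))
    d_row, d_col = np.gradient(smoothed)          # derivatives along rows and columns
    magnitude = np.hypot(d_row, d_col)
    phase = np.arctan2(d_row, d_col)              # inverse tangent of the derivatives
    return magnitude, phase

def strong_edges(luma, threshold=10.0):
    """Boolean mask of pixels this sketch treats as edges for later matching."""
    magnitude, _ = edge_magnitude_and_phase(luma)
    return magnitude > threshold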

US Pat. No. 9,749,738

SYNTHESIZING AUDIO CORRESPONDING TO A VIRTUAL MICROPHONE LOCATION

GoPro, Inc., San Mateo, ...

1. A method for generating a synthesized audio signal for a virtual microphone, the method comprising:
receiving at least one video;
receiving one or more visual objects identified in the at least one video;
receiving a plurality of audio signals each originating from a respective audio source, each audio source associated with
one of the visual objects;

tracking a time-varying position of each of the one or more visual objects in the at least one video;
receiving a virtual microphone position; and
generating a synthesized audio signal from a combination of the plurality of audio signals based on the virtual microphone
position and the tracked time-varying positions of the one or more visual objects, the synthesized audio signal approximating
an audio signal that would have been captured by a microphone positioned at the virtual microphone position during capture
of the plurality of audio signals.
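
One simple way to picture the claimed combination is an inverse-distance mix of the per-object audio signals toward the virtual microphone position. Ignoring propagation delay and treating object positions as fixed within an audio frame are simplifying assumptions of this sketch.

# Illustrative sketch of mixing source signals for a virtual microphone position
# using inverse-distance amplitude weighting; delays are ignored and positions
# are treated as constant per audio frame.
import math

def synthesize(audio_signals, source_positions, mic_position, eps=0.1):
    """Weighted sum of per-source signals based on distance to the virtual mic.

    audio_signals: list of equal-length sample lists, one per tracked object.
    source_positions: list of (x, y) positions, one per object, for this frame.
    mic_position: (x, y) location of the virtual microphone.
    """
    weights = []
    for (sx, sy) in source_positions:
        distance = math.hypot(sx - mic_position[0], sy - mic_position[1])
        weights.append(1.0 / (distance + eps))    # closer sources are louder
    n_samples = len(audio_signals[0])
    mixed = [0.0] * n_samples
    for signal, weight in zip(audio_signals, weights):
        for i in range(n_samples):
            mixed[i] += weight * signal[i]
    scale = sum(weights)
    return [sample / scale for sample in mixed]   # normalize to the total weight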

US Pat. No. 9,659,349

COLOR FILTER ARRAY SCALER

GoPro, Inc., San Mateo, ...

1. A method for generating a scaled image captured by a camera comprising:
identifying a scaling position in a captured image, the captured image comprising an array of subpixels;
identifying a first set of red subpixels within the captured image adjacent to the identified scaling position;
computing a scaled red subpixel value at the scaling position based on values of the first set of red subpixels and the distance
of each of the first set of red subpixels to the identified scaling position, the scaled red subpixel value computed such
that an amount of noise and an amount of blur corresponding to the scaled red subpixel value are within a threshold of an
amount of noise and amount of blur, respectively, corresponding to the first set of red subpixels;

identifying a second set of blue subpixels within the captured image adjacent to the identified scaling position;
computing a scaled blue subpixel value at the scaling position based on values of the second set of blue subpixels and the
distance of each of the second set of blue subpixels to the identified scaling position, the scaled blue subpixel value computed
such that an amount of noise and an amount of blur corresponding to the scaled blue subpixel value are within a threshold
of an amount of noise and amount of blur, respectively, corresponding to the second set of blue subpixels;

identifying a third set of green subpixels within the captured image adjacent to the identified scaling position, the third
set of green subpixels including Gr and Gb green subpixels;

computing a scaled green subpixel value at the scaling position based on values of the third set of green subpixels and the
distance of each of the third set of green subpixels to the identified scaling position, the scaled green subpixel value computed
such that an amount of noise and an amount of blur corresponding to the scaled green subpixel value are within a threshold
of an amount of noise and amount of blur, respectively, corresponding to the third set of green subpixels, and such that an
imbalance in values between the Gr and Gb green subpixels of the third set of green subpixels is within a threshold of imbalance;
and

generating a scaled image representative of the captured image, the scaled image including at least the scaled red subpixel
value, the scaled blue subpixel value, and the scaled green subpixel value.
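
The common core of the three computations in this claim is a distance-weighted interpolation of same-color subpixels around the scaling position. The sketch below shows that core for a single color plane; the claim's noise, blur, and Gr/Gb-imbalance constraints are not modelled.

# Illustrative sketch of distance-weighted subpixel interpolation for one color
# plane; the noise/blur and Gr/Gb balance constraints of the claim are omitted.
import math

def scaled_subpixel_value(scaling_position, subpixels, eps=1e-6):
    """Interpolate a subpixel value at scaling_position.

    subpixels: list of ((x, y), value) pairs for same-color subpixels adjacent
    to the scaling position, e.g. the nearest red subpixels in the Bayer array.
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for (x, y), value in subpixels:
        distance = math.hypot(x - scaling_position[0], y - scaling_position[1])
        weight = 1.0 / (distance + eps)          # nearer subpixels count more
        weighted_sum += weight * value
        weight_total += weight
    return weighted_sum / weight_total

# Example: scaled red value at (1.5, 1.5) from the four surrounding red subpixels.
reds = [((0, 0), 100), ((2, 0), 110), ((0, 2), 90), ((2, 2), 120)]
print(scaled_subpixel_value((1.5, 1.5), reds))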

US Pat. No. 9,639,560

SYSTEMS AND METHODS THAT EFFECTUATE TRANSMISSION OF WORKFLOW BETWEEN COMPUTING PLATFORMS

GoPro, Inc., San Mateo, ...

1. A system that effectuates transmission of workflow between computing platforms, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
receive, from a client computing platform, a first command, the first command including a proxy image, wherein the proxy image
represents an image stored on the client computing platform;

associate an identifier with the proxy image;
effectuate transmission of the identifier to the client computing platform, the identifier to be associated with the image
stored on the client computing platform;

determine edits, at a remote computing platform, to the image based upon the proxy image;
effectuate transmission of instructions from the remote computing platform to the client computing platform, the instructions
including the identifier and being configured to cause the client computing platform to process the edits on the image; and

determine classifications to associate with the image based upon object recognition within the proxy image.

US Pat. No. 9,664,877

PRISM-BASED FOCAL PLANE ADJUSTMENT FOR THERMAL COMPENSATION IN A LENS ASSEMBLY

GoPro, Inc., San Mateo, ...

1. An integrated sensor and lens assembly, comprising:
an image sensor;
a lens mount coupled to the image sensor;
a lens barrel secured by the lens mount, the lens barrel comprising one or more lenses along an optical axis substantially
perpendicular to a focal plane, the lens barrel comprising a first material that expands or contracts in response to a temperature
change according to a first coefficient of thermal expansion, wherein expansion of the lens barrel causes a first shift in
the focal plane in a first direction along the optical axis; and

a pair of prisms intersecting the optical axis, the prisms comprising a second material that expands or contracts in response
to the temperature change according to a second coefficient of thermal expansion to cause a first change in combined thickness
of the pair of prisms along the optical axis;

a pair of pushrods coupled between the lens mount and the pair of prisms, the pair of pushrods comprising a third material
that expands or contracts in response to the temperature change according to a third coefficient of thermal expansion, wherein
expansion of the pair of pushrods causes a second change in combined thickness of the pair of prisms along the optical axis;
and

wherein the second material, third material, and geometry of the pair of prisms are configured such that the first change
in combined thickness and the second change in combined thickness of the pair of prisms cause a second shift in the focal
plane in a second direction along the optical axis opposite the first direction.

US Pat. No. 9,666,232

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION BASED ON MOTION DETECTED IN A VIDEO

GoPro, Inc., San Mateo, ...

1. A system configured to generate a video summary from recorded video footage, the system comprising:
storage media that stores video including multiple frames and audio captured contemporaneously with the frames, the video
having an unedited viewing time;

one or more physical processors configured by machine readable instructions to:
obtain a user-selected summary length for a video summary of the video;
obtain metadata for the video, the metadata including biometric information of a user captured contemporaneously with capture
of the video, motion values that characterize motion of objects visible within the frames of the video, and camera motion
information characterizing position and/or motion of the camera during capture of the video;

analyze the biometric information of the user captured contemporaneously with capture of the video, the motion values that
characterize motion of objects visible within the frames of the video, the camera motion information characterizing position
and/or motion of the camera during capture of the video, and the audio captured contemporaneously with the frames of the video
to identify events of interest within the video;

rank the events of interest within the video based on the biometric information of the user captured contemporaneously with
capture of the individual events of interest, the motion values that characterize motion of objects visible within the frames
of the individual events of interest, the camera motion information characterizing position and/or motion of the camera during
capture of the individual events of interest, and the audio captured contemporaneously with capture of the individual events
of interest;

select portions of the video for inclusion in the video summary based on the identified events of interest, the ranking, and
the user-selected summary length; and

generate an electronic file defining the video summary from the selected portions of the video so that the video summary has
the user-selected summary length.

US Pat. No. 9,612,507

CAMERA MOUNTABLE ARM

GoPro, Inc., San Mateo, ...

1. A camera mountable arm comprising:
a first segment configured to pivotally couple to a camera mount;
a second segment configured to pivotally couple to the first segment; and
a handle segment configured to pivotally couple to the second segment when the camera mountable arm is in a first configuration
and to directly pivotally couple to the camera mount when the camera mountable arm is in a second configuration;

wherein the camera mountable arm, when configured in the first configuration, is operable in a folded position and an extended
position, a portion of a first face of the first segment configured to substantially abut a portion of a second face of the
handle segment when the camera mountable arm is operable in the folded position such that the first segment is parallel to
the handle segment.

US Pat. No. 9,613,628

AUDIO DECODER FOR WIND AND MICROPHONE NOISE REDUCTION IN A MICROPHONE ARRAY SYSTEM

GoPro, Inc., San Mateo, ...

1. A method for decoding an encoded audio signal, the method comprising:
receiving the encoded audio signal, the encoded audio signal representing a non-beamformed audio signal modulated from a low
frequency range to a high frequency range and combined with a beamformed audio signal spanning the low frequency range and
a mid-frequency range between the low frequency range and the high frequency range;

responsive to receiving an input to recover the beamformed audio signal, applying a low pass filter to the encoded audio signal
to filter out the non-beamformed audio signal modulated from the low frequency range to the high frequency range to generate
an original audio signal; and

responsive to receiving an input to recover a reduced wind noise audio signal, processing the encoded audio signal to generate
the reduced wind noise audio signal, the reduced wind noise audio signal representing the non-beamformed audio signal in the
low frequency range and the beamformed audio signal in the mid-frequency range.
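
The decoder's two output paths can be sketched with simple spectral masking: a low-pass cut at the top of the mid band recovers the beamformed signal, while moving the high-band copy of the non-beamformed audio back down to the low band and keeping the beamformed mid band yields the reduced-wind output. The band edges, sample rate, and FFT-bin modulation model below are assumptions, not details from the patent.

# Illustrative decoder sketch for the band layout described in this claim, done
# with FFT bin manipulation; band edges, sample rate, and the modulation scheme
# are assumptions.
import numpy as np

FS = 48_000          # sample rate (Hz)
F_LOW = 1_000        # top of the low band carrying the non-beamformed audio
F_MID = 8_000        # top of the mid band carrying the beamformed audio
F_HIGH = 16_000      # low edge of the high band the non-beamformed audio was moved to

def _bin(freq_hz, n):
    return int(round(freq_hz * n / FS))

def recover_beamformed(encoded):
    """Low-pass at F_MID: drops the modulated high band, leaving the beamformed audio."""
    n = len(encoded)
    spectrum = np.fft.rfft(encoded)
    spectrum[_bin(F_MID, n):] = 0.0
    return np.fft.irfft(spectrum, n)

def recover_reduced_wind(encoded):
    """Move the high-band copy of the non-beamformed audio back to the low band and
    keep the beamformed mid band, per the reduced-wind output in the claim."""
    n = len(encoded)
    spectrum = np.fft.rfft(encoded)
    lo, mid, hi = _bin(F_LOW, n), _bin(F_MID, n), _bin(F_HIGH, n)
    out = np.zeros_like(spectrum)
    out[:lo] = spectrum[hi:hi + lo]          # demodulate high band down to the low band
    out[lo:mid] = spectrum[lo:mid]           # beamformed audio in the mid band
    return np.fft.irfft(out, n)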

US Pat. No. 9,609,195

INTEGRATED IMAGE SENSOR AND LENS ASSEMBLY

GoPro, Inc., San Mateo, ...

1. An integrated image sensor and lens assembly comprising:
an image sensor substrate comprising an image sensor in an image plane;
a lens holder comprising:
a base portion including a recess to partially enclose an image sensor assembly housing the image sensor; and
a tube portion extending from the base portion in a direction of an optical axis substantially perpendicular to the image
plane, the tube portion having a channel;

a lens barrel having a first portion extending into the channel of the tube portion, and a second portion outside the channel
of the tube portion, the second portion having a camera lens window; and

an adhesive ring formed between an interior side surface of the tube portion of the lens holder and an exterior side surface
of the first portion of the lens barrel inside the channel of the tube portion, the interior surface of the lens holder and
the exterior surface of the first portion of the lens barrel each oriented in a direction substantially parallel to the optical
axis, the adhesive ring to radially bond the first portion of the lens barrel to the lens holder.

US Pat. No. 9,602,795

SYSTEM AND METHOD FOR PRESENTING AND VIEWING A SPHERICAL VIDEO SEGMENT

GoPro, Inc., San Mateo, ...

1. A system for presenting an event of interest within a spherical video segment, the system comprising:
a two dimensional display configured to present two dimensional images;
a sensor configured to generate output signals conveying information related to an orientation of the display;
one or more physical computer processors configured by computer readable instructions to:
obtain the spherical video segment, the spherical video segment including tag information associated with the event of interest
in the spherical video segment, the tag information identifying a point in time in the spherical video segment and a viewing
angle within the spherical video segment at which the event of interest is viewable in the spherical video segment;

determine the orientation of the display based on the output signals;
determine a display field of view within the spherical video segment to be presented on the display based on the orientation
of the display;

effectuate presentation of the display field of view of the spherical video segment on the display;
proximate to the point in time, determine whether the viewing angle is located within the display field of view;
responsive to a determination proximate to the point in time that the viewing angle is outside the display field of view,
generate alert information indicating the event of interest for the spherical video segment is located outside the display
field of view; and

effectuate presentation of a notification based upon the alert information, wherein the notification includes the alert information
and is presented within the display field of view.
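
The check at the heart of this claim, whether the tagged viewing angle falls inside the currently displayed field of view near the tagged time, reduces to an angular-window test. Restricting the sketch to yaw angles and using a fixed time-proximity window are assumptions.

# Illustrative sketch of the "is the tagged viewing angle inside the current
# display field of view" check, using yaw angles only; the field of view is
# treated as a simple horizontal angular window.

def angular_difference(a_deg, b_deg):
    """Smallest absolute difference between two angles, in degrees."""
    diff = abs(a_deg - b_deg) % 360.0
    return min(diff, 360.0 - diff)

def tag_visible(tag_angle_deg, display_center_deg, display_fov_deg):
    """True when the tagged viewing angle lies within the displayed field of view."""
    return angular_difference(tag_angle_deg, display_center_deg) <= display_fov_deg / 2.0

def maybe_alert(current_time_s, tag_time_s, tag_angle_deg,
                display_center_deg, display_fov_deg, proximity_s=2.0):
    """Return an alert string when the event is near in time but off-screen."""
    near_in_time = abs(current_time_s - tag_time_s) <= proximity_s
    if near_in_time and not tag_visible(tag_angle_deg, display_center_deg, display_fov_deg):
        return "Event of interest outside the current view"
    return None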

US Pat. No. 9,571,807

AUDIO SIGNAL LEVEL ESTIMATION IN CAMERAS

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a first microphone;
a second microphone, the first microphone and the second microphone configured to capture audio simultaneously to produce
a first audio signal and a second audio signal, respectively, and the second audio signal dampened relative to the first audio
signal by a damping factor such that an amplitude of the second audio signal is less than an amplitude of the first audio
signal; and

a microphone controller coupled to the first microphone and the second microphone, the microphone controller configured to:
compare a first signal to noise ratio (SNR) of the first audio signal to a second SNR of the second audio signal;
in response to a determination that the first SNR is equal to or greater than the second SNR, store the first audio signal;
and

in response to a determination that the first SNR is less than the second SNR:
identify a gain between the first audio signal and the second audio signal, the gain representing the damping factor;
amplify the second audio signal by the identified gain; and
store the amplified second audio signal.
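
The selection logic in this claim can be sketched directly: compare the two SNRs, keep the reference signal when it wins, otherwise estimate the damping factor as a gain and re-amplify the dampened signal. Estimating SNR from RMS against a supplied noise-floor value, and using the RMS ratio as the gain estimate, are assumptions.

# Illustrative sketch of the dual-microphone selection logic in this claim; the
# SNR estimate and the gain estimate are simplifying assumptions.
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr(samples, noise_floor_rms):
    """Crude SNR estimate: signal RMS over a supplied noise-floor RMS."""
    return rms(samples) / noise_floor_rms

def select_and_store(first, second, first_noise_rms, second_noise_rms):
    """Return the audio that would be stored: the first signal when its SNR is at
    least as high, otherwise the dampened second signal re-amplified by the
    estimated damping factor."""
    if snr(first, first_noise_rms) >= snr(second, second_noise_rms):
        return list(first)
    gain = rms(first) / rms(second)        # estimate of the damping factor
    return [gain * s for s in second]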

US Pat. No. 10,074,013

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION

GoPro, Inc., San Mateo, ...

1. A method of generating a video summary of a video, the method comprising:
accessing metadata associated with a video, the accessed metadata representative of one or more aspects of the capture of the video as a function of time during capture of the video;
identifying patterns in the metadata as a function of time that correspond to performance of one or more activities being performed by a subject of the video;
determining the one or more activities being performed by the subject of the video during specific portions of the video based on the identifications of the patterns in the metadata as a function of time, the one or more activities including a first activity performed by the subject of the video during a first portion of the video, the first activity being of a given type of activity;
identifying moments within the video at which events of interest are captured in the video based on the accessed metadata, the moments including a first moment during performance of the first activity by the subject of the video at which a first event of interest occurs, the first event being of a given type of event;
identifying individual highlight scenes in the video for the individual events of interest, wherein lengths of footage in the video included in the highlight scenes before and after the moments at which the events of interest occur are based on types of the activities being performed by the subject of the video at the moments in the video at which the events of interest are captured and types of the events such that a first scene in the video is identified for the first event of interest and a length of footage included in the first scene before and after the first moment is a first length based on the given type of activity being of a first type of activity and the given type of event being of a first type of event, a second length based on the given type of activity being of the first type of activity and the given type of event being of a second type of event, a third length based on the given type of activity being of a second type of activity and the given type of event being of the first type of event, and a fourth length based on the given type of activity being of the second type of activity and the given type of event being of the second type of event, the first length different from the second length and the third length, the third length different from the fourth length; and
generating a video summary of the video for playback, the video summary including at least one of the highlight scenes.
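
The four-way dependence of highlight length on activity type and event type amounts to a lookup keyed by the (activity, event) pair. The activity names, event names, and second values in the table below are invented for illustration.

# Illustrative sketch of choosing highlight-scene lengths from the pairing of
# activity type and event type; the types and durations are made up.

HIGHLIGHT_LENGTHS_S = {
    # (activity type, event type): (seconds before moment, seconds after moment)
    ("surfing", "jump"):  (4.0, 6.0),
    ("surfing", "crash"): (2.0, 8.0),
    ("biking",  "jump"):  (3.0, 3.0),
    ("biking",  "crash"): (5.0, 5.0),
}

def highlight_window(moment_s, activity_type, event_type):
    """Return (start, end) times of the highlight scene around a moment of interest."""
    before_s, after_s = HIGHLIGHT_LENGTHS_S[(activity_type, event_type)]
    return max(0.0, moment_s - before_s), moment_s + after_s

print(highlight_window(120.0, "surfing", "jump"))   # (116.0, 126.0)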

US Pat. No. 10,051,363

SUBMERSIBLE MICROPHONE SYSTEM WITH A COMPRESSIBLE SPACER

GoPro, Inc., San Mateo, ...

1. A camera system comprising:
a camera body;
an image sensor assembly internal to the camera body to capture images;
a lens assembly to focus light onto the image sensor;
an audio sensor assembly internal to the camera body, the audio sensor assembly comprising:
a microphone for converting ambient audio into an electrical signal;
an audio circuit board coupled to the microphone, the audio circuit board to process the electrical signal from the microphone, the audio circuit board comprising a circuit board port adjacent to the microphone, the circuit board port structured such that an opening exists in the audio circuit board;
a compressible annulus coupled to the audio circuit board, structured such that an opening in the compressible annulus at least partially aligns with the opening in the audio circuit board;
a support annulus for providing mechanical stability coupled to the compressible annulus, the support annulus structured such that an opening exists in the support annulus that at least partially aligns with the opening in the audio circuit board and the opening in the compressible annulus;
a sensor housing for coupling the support annulus, the compressible annulus, and the audio sensor assembly to the camera body and providing mechanical support to the audio sensor assembly; and
a waterproof membrane configured to prevent moisture from passing from external to the camera body to the audio sensor assembly while allowing transmission of audio signals through the membrane, the waterproof membrane covering the microphone over the opening in the circuit board, the opening in the compressible annulus, and the opening in the support annulus.

US Pat. No. 9,973,746

SYSTEM AND METHOD FOR PRESENTING AND VIEWING A SPHERICAL VIDEO SEGMENT

GoPro, Inc., San Mateo, ...

1. A system for presenting an event of interest within a spherical video segment, the system comprising:
a two dimensional display configured to present two dimensional images;
a sensor configured to generate output signals conveying information related to an orientation of the display;
one or more physical computer processors configured by computer readable instructions to:
obtain the spherical video segment, the spherical video segment including tag information associated with the event of interest in the spherical video segment, the tag information identifying a point in time in the spherical video segment and a viewing angle within the spherical video segment at which the event of interest is viewable in the spherical video segment;
determine the orientation of the display based on the output signals;
determine a display field of view within the spherical video segment to be presented on the display based on the orientation of the display;
effectuate presentation of the display field of view of the spherical video segment on the display;
proximate to the point in time, determine whether the viewing angle is located within the display field of view;
responsive to a determination proximate to the point in time that the viewing angle is outside the display field of view, generate alert information indicating the event of interest for the spherical video segment is located outside the display field of view; and
effectuate presentation of a notification based upon the alert information, wherein the notification includes the alert information and is presented within the display field of view.

US Pat. No. 9,871,792

HOSTLESS MDNS-SD RESPONDER WITH AUTHENTICATED HOST WAKE SERVICE

GoPro, Inc., San Mateo, ...

1. A method to initiate a wake service using a multicast domain name system-service discovery (mDNS-SD) responder, the wake
service to authenticate a trusted client device when a host processor is in a sleep state, the method comprising:
configuring, by the host processor, the mDNS-SD responder and the wake host service in a communication controller before the
host processor enters the sleep state;

receiving, by the mDNS-SD responder of the communication controller, a service discovery query from a trusted client device
while the host processor is in the sleep state in response to receiving, by the host processor, a signal indicating the mDNS-SD
responder configuration is complete;

transmitting, through the mDNS-SD responder, a response to the service discovery query;
receiving, by the mDNS-SD responder in response to a wake service request, a payload value from the trusted client device
to authenticate the trusted client device; and

waking, in response to the trusted client device being authenticated, the host processor to connect with the trusted client
device; and

wherein the host processor enters a sleep mode while the mDNS-SD responder responds to mDNS-SD queries and authenticates the
trusted client device.

US Pat. No. 9,853,969

BLUETOOTH LOW ENERGY HOSTLESS PRIVATE ADDRESS RESOLUTION

GoPro, Inc., San Mateo, ...

1. A method of enabling private address resolution using a wireless personal area network controller when a host processor
is in a sleep state, the method comprising:
establishing a connection with at least one client device receiving a broadcast of the host processor name, the connection
allowing an exchange of a public address and an identity resolution key for each device;

generating a trusted database comprising the public address and the corresponding identity resolution keys for the at least
one connected device, the trusted database corresponding to trusted client devices for a connection between a trusted client
device in the trusted database and a host processor;

storing the trusted database in a memory associated with the wireless personal area network controller;
receiving, by the wireless personal area network controller, a connection request from a trusted client device while the host
processor is in the sleep state, the connection request including a resolvable private address and identity resolution key
of the trusted client device;

obtaining a corresponding public address for the trusted client device based on the resolvable private address and the identity
resolution key received from the connection request and identifying the obtained corresponding public address stored in the
trusted database which is used to authenticate the client device; and

waking, in response to the client device being authenticated, the host processor to connect with the trusted client device.

US Pat. No. 9,807,530

GENERATING AN AUDIO SIGNAL FROM MULTIPLE MICROPHONES BASED ON UNCORRELATED NOISE DETECTION

GoPro, Inc., San Mateo, ...

1. A method for generating an output audio signal in an audio capture system having a plurality of microphones, the method
comprising:
receiving at least a first audio signal and a second audio signal from the plurality of microphones;
generating a first plurality of frequency sub-band signals from the first audio signal corresponding to a plurality of frequency
sub-bands and generating a second plurality of frequency sub-band signals from the second audio signal corresponding to the
plurality of frequency sub-bands;

for each of the first and second pluralities of frequency sub-band signals, applying a frequency band-dependent offset to
generate a first plurality of offset frequency sub-band signals from the first plurality of frequency sub-band signals and
a second plurality of offset frequency sub-band signals from the second plurality of frequency sub-band signals;

determining, by a processor, an overall correlation metric between the first plurality of offset frequency sub-band signals
and the second plurality of offset frequency sub-band signals;

responsive to the overall correlation metric exceeding a first predefined threshold, processing the audio signals according
to a correlated audio signal processing algorithm to generate an output audio signal; and

responsive to the overall correlation metric not exceeding the first predefined threshold, processing the audio signals according
to an uncorrelated audio signal processing algorithm to generate the output audio signal.
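
A compact way to picture the claimed test is to split each microphone signal into sub-bands, apply a band-dependent offset, and average a per-band correlation coefficient into one overall metric. The FFT-mask band split, treating the offset as a per-band gain, and the 0.5 threshold below are all assumptions.

# Illustrative sketch of the sub-band correlation test in this claim; the band
# edges, the per-band "offset" treated as a gain, and the threshold are assumptions.
import numpy as np

BAND_EDGES_HZ = [0, 500, 2000, 8000]     # hypothetical sub-band boundaries
BAND_OFFSETS = [1.0, 0.8, 0.6]           # hypothetical band-dependent offsets (gains)

def band_signals(samples, fs):
    """Split a signal into time-domain sub-band signals using an FFT mask."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bands = []
    for lo, hi, gain in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:], BAND_OFFSETS):
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        bands.append(gain * np.fft.irfft(masked, n))
    return bands

def overall_correlation(first, second, fs):
    """Mean correlation coefficient across corresponding offset sub-band signals."""
    coeffs = [np.corrcoef(a, b)[0, 1]
              for a, b in zip(band_signals(first, fs), band_signals(second, fs))]
    return float(np.mean(coeffs))

def choose_processing(first, second, fs, threshold=0.5):
    """Pick the correlated or uncorrelated processing path based on the metric."""
    return "correlated" if overall_correlation(first, second, fs) > threshold else "uncorrelated"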

US Pat. No. 9,794,615

BROADCAST MANAGEMENT SYSTEM

GoPro, Inc., San Mateo, ...

1. A method for playing a broadcast, the method comprising:
storing, by a video database, a plurality of videos including a set of videos indexed to an event;
storing, by a broadcast map database, a plurality of broadcast maps including at least two broadcast maps associated with
the event and created from the set of videos, the at least two broadcast maps each storing respective sequences of instructions
indicating different timing of transitions between the set of videos indexed to the event;

receiving a selection of a broadcast map from the at least two broadcast maps associated with the event;
starting a timer, the timer tracking an amount of time elapsed into the broadcast;
determining based on a first entry in the broadcast map, a first identifier for a first video file in the set of videos, a
first elapsed time into the broadcast, and a first start point in the first video file;

responsive to the timer reaching the first elapsed time, streaming, by a processing device, the first video file to a viewer
client starting at the first start point in the first video file;

determining based on a second entry in the broadcast map, a second identifier for a second video file in the set of videos,
a second elapsed time into the broadcast, and a second start point in the second video file;

responsive to the timer reaching the second elapsed time, stopping streaming of the first video file and streaming the second
video file to the viewer client starting at the second start point in the second video file;

reading a third entry from the broadcast map, the third entry referencing an audio voiceover stream and a third elapsed time;
and

outputting the audio voiceover stream to the viewer client responsive to the timer reaching the third elapsed time, the audio
voiceover stream outputted concurrently with streaming of the second video file.
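
The broadcast map drives playback as a timer-ordered list of entries, each naming a media item, an elapsed time at which it starts, and a start point within the item. The sketch below shows only that timing logic; the entry fields, the speedup factor, and the print-based stand-in for streaming are assumptions.

# Illustrative sketch of driving playback from broadcast-map entries keyed by
# elapsed time; the entry layout and the print-based "player" are assumptions.
import time

BROADCAST_MAP = [
    # (elapsed seconds into broadcast, kind, media identifier, start point seconds)
    (0.0,  "video", "clip_A", 12.0),
    (5.0,  "video", "clip_B", 0.0),
    (7.5,  "audio", "voiceover_1", 0.0),   # plays alongside the current video
]

def play_broadcast(broadcast_map, speedup=10.0):
    """Walk the map in elapsed-time order, switching streams as the timer advances."""
    start = time.monotonic()
    for elapsed_s, kind, media_id, start_point_s in sorted(broadcast_map):
        # Wait until the broadcast timer reaches this entry's elapsed time.
        wait = elapsed_s - (time.monotonic() - start) * speedup
        if wait > 0:
            time.sleep(wait / speedup)
        if kind == "video":
            print(f"t={elapsed_s:5.1f}s  switch to {media_id} at {start_point_s}s")
        else:
            print(f"t={elapsed_s:5.1f}s  start {media_id} concurrently")

play_broadcast(BROADCAST_MAP)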

US Pat. No. 9,787,884

DRAINAGE CHANNEL FOR SPORTS CAMERA

GoPro, Inc., San Mateo, ...

1. A camera, comprising:
a lens assembly for directing light received through a lens window to an image sensor;
a substantially cubic camera housing enclosing the lens assembly, the substantially cubic camera housing comprising a bottom
face, left face, right face, back face, top face, and front face;

a first microphone integrated with the front face of the camera and positioned within a recess on an interior facing portion
of the front face;

a lower drain below the first microphone comprising an opening in the substantially cubic camera housing near the front face,
the lower drain to allow water that collects in the recess housing the first microphone to drain;

an upper drain above the first microphone comprising an opening in the substantially cubic camera housing near the front face,
the upper drain to allow air to enter the recess as the water drains;

a channel through the interior facing portion of the front face that couples the recess to the lower drain; and
a second microphone integrated with a rear portion of the substantially cubic camera housing.

US Pat. No. 9,781,342

SYSTEM AND METHOD FOR IDENTIFYING COMMENT CLUSTERS FOR PANORAMIC CONTENT SEGMENTS

GoPro, Inc., San Mateo, ...

1. A system configured for determining comment distributions for panoramic digital content, the system comprising:
one or more processors configured by machine readable instructions to:
host, over a network, a panoramic content segment of digital content to client computing platforms on which users consume
the panoramic content segment such that a field of view within a panorama of the panoramic content segment is selectable by
a user during presentation of the panoramic content segment, the panoramic content segment having a beginning and an ending
and having a segment duration from the beginning to the ending;

receive, over the network from the client computing platforms, user comment information conveying user comments for the panoramic
content segment, the user comment information conveying a first user comment for the panoramic content segment including a
time indication that identifies a point in time in the segment duration of the panoramic content segment and a location indication
that identifies a viewing angle within the panorama of the panoramic content segment;

determine a comment distribution for the panoramic content segment from the user comment information, the comment distribution
representing three dimensions of data such that the user comments are plotted as a function of number of user comments, points
in time across the segment duration, and viewing angles across the panorama of the panoramic content segment;

identify a comment cluster based on the comment distribution for the panoramic content segment, wherein the comment cluster
is identified based on the user comment information including time indications that identify points in time that are within
a first period of time within the segment duration and location indications that identify viewing angles that are within a
first view range within the panorama, such that the comment cluster indicates an event within the panoramic content segment
at a point in time within the first period of time and at a location within the first view range;

in a recurring or ongoing manner, receive view information over the network from a client computing platform associated with
the user indicating a current field of view selected by the user during the presentation of the panoramic content segment,
the view information including one or more visible ranges of viewing angles within the panorama for a window in time within
the segment duration;

determine, for the window of time within the segment duration, whether the first view range associated with the comment cluster
identified is located within or outside the one or more visible ranges of viewing angles selected by the user;

generate, responsive to a determination that the first view range associated with the comment cluster identified is located
outside an individual one of the one or more visible ranges of viewing angles selected by the user during the window of time,
alert information indicating the event within the panoramic content segment is located outside the current field of view selected
by the user; and

effectuate transmission of the alert information over the network to a client computing platform associated with the user
causing the client computing platform associated with the user to effectuate presentation of a notification based on the alert
information.
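
Identifying a comment cluster as described in this claim can be approximated by binning comments over time and viewing angle and flagging dense bins. The bin widths and the minimum comment count below are assumptions.

# Illustrative sketch of binning panoramic comments by time and viewing angle to
# find comment clusters; bin sizes and the cluster threshold are assumptions.
from collections import Counter

TIME_BIN_S = 5.0           # width of each time bin within the segment duration
ANGLE_BIN_DEG = 30.0       # width of each viewing-angle bin within the panorama
CLUSTER_MIN_COMMENTS = 10  # minimum comments for a bin to count as a cluster

def find_comment_clusters(comments):
    """comments: iterable of (time_s, viewing_angle_deg) pairs.

    Returns a list of ((time_bin_start, angle_bin_start), count) for bins dense
    enough to be treated as comment clusters.
    """
    bins = Counter()
    for time_s, angle_deg in comments:
        time_bin = int(time_s // TIME_BIN_S) * TIME_BIN_S
        angle_bin = int((angle_deg % 360.0) // ANGLE_BIN_DEG) * ANGLE_BIN_DEG
        bins[(time_bin, angle_bin)] += 1
    return [(key, count) for key, count in bins.items() if count >= CLUSTER_MIN_COMMENTS]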

US Pat. No. 9,743,060

SYSTEM AND METHOD FOR PRESENTING AND VIEWING A SPHERICAL VIDEO SEGMENT

GoPro, Inc., San Mateo, ...

1. A system for facilitating use of a display as a viewfinder into a spherical video segment and capturing a video segment
from the spherical video segment, the system comprising:
the display configured to present two-dimensional images;
a sensor configured to generate output signals conveying information related to an orientation of the display, wherein the
orientation of the display is a three-dimensional orientation of the display in the real world, and wherein the output signals
adjust over time in correspondence with adjustments of the three-dimensional orientation of the display over time;

one or more physical computer processors configured by computer readable instructions to:
obtain the spherical video segment, wherein the spherical video segment is a segment of video content that has been captured
by one or more image sensors configured to capture both a 360-degree horizontal field-of-view and at least a 180-degree vertical
field-of-view;

determine the three-dimensional orientation of the display in the real world based on the output signals such that the three-dimensional
orientation can be used to determine a display field-of-view within the spherical video segment, wherein the output signals
are generated by the sensor, and wherein the three-dimensional orientation of the display adjusts over time during presentation
of the video content;

determine the display field-of-view within the spherical video segment to be presented on the display such that the display
field-of-view presented on the display can be used as a viewfinder into the spherical video segment, wherein the display field-of-view
is based on the three-dimensional orientation of the display, and wherein adjustments of the three-dimensional orientation
of the display over time during presentation of the video content correspond to adjustments of the display field-of-view;

effectuate presentation of the display field-of-view of the spherical video segment on the display to facilitate use of the
display as a viewfinder into the spherical video segment; and

capture the presented display field-of-view of the spherical video segment as a two-dimensional video segment.

US Pat. No. 9,742,979

CREDENTIAL TRANSFER MANAGEMENT CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A camera comprising a processor and a non-transitory computer-readable storage medium containing instructions that, when
executed by the processor, cause the camera to:
hide a service set identifier (“SSID”) associated with the camera such that other devices cannot wirelessly detect the camera;
receive, from a user, a request to couple the camera to a smart device;
in response to the request, unhide the SSID associated with the camera such that the smart device can wirelessly detect the
camera;

determine whether the smart device is configured to operate as a wireless station or a wireless access point;
in response to the smart device being configured to operate as the wireless station, configure the camera to operate as the
wireless access point and communicatively couple the camera to the smart device; and

in response to the smart device being configured to operate as the wireless access point, configure the camera to operate
as the wireless station and communicatively couple the camera to the smart device.

US Pat. No. 9,721,611

SYSTEM AND METHOD OF GENERATING VIDEO FROM VIDEO CLIPS BASED ON MOMENTS OF INTEREST WITHIN THE VIDEO CLIPS

GoPro, Inc., San Mateo, ...

1. A system configured for generating videos from video clips based on one or more moments of interest within individual ones
of the video clips, the system comprising:
one or more physical processors configured by machine-readable instructions to:
identify one or more moments of interest within individual video clips of a set of video clips for generating a video, individual
moments of interest being associated with values of attributes of the video clips, the set of video clips comprising a first
video clip and a second video clip, such that:

a first moment of interest is identified within the first video clip, the first moment of interest corresponding to a first
point in time within the first video clip, the first point in time being associated with the first video clip having a first value
of a first attribute; and

a second moment of interest is identified within the second video clip, the second moment of interest corresponding to a second
point in time within the second video clip, the second point in time being associated with the second video clip having a
second value of a second attribute;

associate individual moments of interest with individual portions of a video, the associations including a first association
of the first moment of interest with a first portion of the video, and a second association of the second moment of interest
with a second portion of the video;

generate the video using the set of video clips based on the associations, such that the video is generated using the first
video clip based on the first association and the second video clip based on the second association; and

wherein the first moment of interest is identified within the first video clip by:
determining individual values of individual attributes of the first video clip, including determining that a value of the
first attribute of the first video clip is the first value;

determining a first preference of the user, the first preference specifying one or more values of the first attribute of the
video clips;

determining whether the value of the first attribute of the first video clip matches at least one of the one or more values
of the first attribute specified by the first preference;

responsive to the first preference specifying at least the first value of the first attribute, determining that the first
point in time within the first video clip is associated with the first video clip having the first value of the first attribute;
and

associating the first point in time with the first moment of interest, the association facilitating identification of the
first moment of interest within the first video clip.
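
The attribute-matching logic of this claim reduces to a small lookup. The sketch below is a hypothetical illustration: the dictionary shapes, the "motion" attribute and the function name are invented for the example.

    def find_moments(clip_attributes, preference):
        """clip_attributes: {time_in_seconds: {attribute_name: value}} for one clip.
        preference: {attribute_name: set of preferred values}.
        Returns the points in time whose attribute value matches the preference."""
        moments = []
        for t, attrs in sorted(clip_attributes.items()):
            for name, value in attrs.items():
                if value in preference.get(name, set()):
                    moments.append((t, name, value))
                    break
        return moments

    # Example: the clip's "motion" attribute takes the preferred value "jump" at t = 12.5 s.
    clip = {3.0: {"motion": "pan"}, 12.5: {"motion": "jump"}, 20.0: {"motion": "pan"}}
    print(find_moments(clip, {"motion": {"jump"}}))   # -> [(12.5, 'motion', 'jump')]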

US Pat. No. 9,661,197

QUICK-RELEASE BALL-AND-SOCKET JOINT CAMERA MOUNT

GoPro, Inc., San Mateo, ...

1. A mounting system for attaching a camera to a surface, comprising:
an upper mount component structured to at least partially enclose a camera, the upper mount component having a bottom surface
including a protrusion extending from the bottom surface at a fixed, non-perpendicular angle relative to the bottom surface,
the protrusion comprising a ball component;

a lower mount component having a top surface and a bottom surface, the top surface comprising a reciprocal socket component
configured to rotationally couple with the ball component of the upper mount component, the socket component tilted relative
to the top surface and having a split within an inside surface of the socket component from a top side of the socket component,
the socket component comprising a screw hole protrusion on an outer surface of the socket component on either side of the
split, the screw hole protrusions configured to align and receive a screw such that when a screw is inserted into the screw
hole protrusions, portions of the socket component on either side of the split flexibly compress together such that the ball
component is secured within the socket component, the bottom surface comprising a first coupling mechanism; and

a base mount component comprising a second coupling mechanism configured to releasably couple to the first coupling mechanism.

US Pat. No. 9,684,949

CAMERA SYSTEM ENCODER/DECODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an encoder configured to:
perform a first encoding operation on image data to produce first encoded image data and store the encoded image data in memory;
and

perform a second encoding operation on image data to produce second encoded image data and output the second encoded image
data; and

a decoder configured to access the first encoded image data from the memory and decode the first encoded image data to produce
decoded image data, wherein the encoder is further configured to perform the second encoding operation on the decoded image
data to produce the second encoded image data.

US Pat. No. 9,628,690

CAMERA CONTROLLER WITH CONTEXT-SENSITIVE INTERFACE

GoPro, Inc., San Mateo, ...

1. A handheld camera mount configured to be in communication with a camera, the handheld camera mount comprising:
a mounting feature connecting a camera housing to the handheld camera mount, the mounting feature comprising inner locking
portions to fixably engage with the camera housing; and

a handle housing configured to secure a rotating membrane, a communication subsystem and a printed circuit board having a
processor and a non-transitory computer-readable storage medium;

the rotating membrane comprising a plurality of sides, wherein each side includes a subset of switches of a plurality of switches
individually exposable for interaction; and

the non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause the processor
to:

receive a selection of a switch of the plurality of switches,
conduct a lookup in a lookup table for a first setting mapped to the selected switch, the lookup table stored in a database
in memory and mapping the plurality of switches to a plurality of settings of the camera, and

transmit, via the communication subsystem, a first command to the camera, the first command including the first setting.

US Pat. No. 9,628,718

IMAGE SENSOR ALIGNMENT IN A MULTI-CAMERA SYSTEM ACCELERATOR ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
a first image sensor chip configured to produce first image data representative of light incident upon the first image sensor
chip; and

an image signal processor chip (“ISP”) comprising:
one or more inputs configured to receive the first image data and to receive a second sub-band component representative of
second image data from a second camera system, the camera system and the second camera system comprising at least partially
overlapping fields of view;

a compression engine configured to decimate the first image data into a plurality of image sub-band components including a
first sub-band component representative of the first image data; and

an alignment engine configured to adjust one or both of the fields of view of the camera system and the second camera system
based on a comparison of the first sub-band component and the second sub-band component.

US Pat. No. 10,055,816

TARGET-LESS AUTO-ALIGNMENT OF IMAGE SENSORS IN A MULTI-CAMERA SYSTEM

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for determining a pixel shift between an image pair captured by image sensors, the method comprising:
accessing a first image and a second image of the image pair, the first and second images comprising image data representative of an overlapping field of view between the image sensors;
performing a smoothing operation on the first image and the second image to produce smoothed image data;
determining edge magnitude data and edge phase data from the smoothed image data;
identifying, by one or more processors, one or more edges in the image data based at least in part on the edge magnitude data and the edge phase data of the smoothed image data; and
determining a pixel shift between the image pair by matching a first edge in the first image with a second edge in the second image using determined edge lengths for the identified one or more edges.
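
A rough sense of the smoothing, edge-extraction and length-based matching steps can be given in numpy. This is a sketch under assumed details (box smoothing, gradient-based edges, row-wise run matching, a purely horizontal shift), not the procedure the patent actually specifies.

    import numpy as np

    def edge_map(img, k=3, thresh=0.2):
        """Box-smooth the image, then return a binary edge map together with the
        edge magnitude and phase computed from image gradients."""
        kernel = np.ones(k) / k
        sm = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img.astype(float))
        sm = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, sm)
        gy, gx = np.gradient(sm)
        mag, phase = np.hypot(gx, gy), np.arctan2(gy, gx)
        return mag > thresh * mag.max(), mag, phase

    def edge_runs(row):
        """(start column, length) of each contiguous edge run in one image row."""
        idx = np.flatnonzero(row)
        if idx.size == 0:
            return []
        pieces = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        return [(int(p[0]), len(p)) for p in pieces]

    def pixel_shift(img_a, img_b):
        """Median horizontal shift obtained by pairing, per row, each edge run in the
        first image with the run of most similar length in the second image."""
        edges_a, _, _ = edge_map(img_a)
        edges_b, _, _ = edge_map(img_b)
        shifts = []
        for row_a, row_b in zip(edges_a, edges_b):
            runs_b = edge_runs(row_b)
            for start_a, len_a in edge_runs(row_a):
                if runs_b:
                    start_b, _ = min(runs_b, key=lambda r: abs(r[1] - len_a))
                    shifts.append(start_b - start_a)
        return float(np.median(shifts)) if shifts else 0.0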

US Pat. No. 9,965,703

COMBINING INDEPENDENT SOLUTIONS TO AN IMAGE OR VIDEO PROCESSING TASK

GoPro, Inc., San Mateo, ...

1. A method for generating an algorithm for performing an image or video processing task, the method comprising:
receiving a training set of input images or videos;
applying a plurality of base algorithms to the training set, each of the base algorithms independently generating respective base algorithm results as respective solutions to the image or video processing task;
applying a first generation of different combining algorithms, where the applying of the first generation of different combining algorithms comprises combining the respective solutions from each of the respective base algorithms into respective combined solutions, where the applying of the first generation of different combining algorithms further comprises:
applying a first subset of a predefined plurality of operations to at least a subset of the respective base algorithms to generate first tier results; and
applying a second subset of the predefined plurality of operations to the first tier results of the respective base algorithms results to generate second tier results;
evaluating the respective combined solutions from the first generation of different combining algorithms to generate respective fitness scores representing measures of how well the plurality of different combining algorithms each perform the image or video processing task;
updating the first generation of different combining algorithms based on the respective fitness scores to generate a second generation of different combining algorithms; and
selecting, by a processor, an optimized combining algorithm from the second generation of different combining algorithms that best meets a predefined optimization criterion associated with the image or video processing task;
wherein the image or video processing task comprises a background segmentation task, wherein each of the plurality of base algorithms comprises different background segmentation algorithms, and wherein each of the respective base algorithm results comprises respective binary masks generated from a different one of the different background segmentation algorithms.
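
For the background-segmentation case called out in the claim, the generate-evaluate-update loop might look like the toy below. The operation set, the IoU fitness and the very crude generation update are assumptions made for illustration.

    import random

    import numpy as np

    OPS = {
        "and": lambda a, b: a & b,
        "or":  lambda a, b: a | b,
        "xor": lambda a, b: a ^ b,
    }

    def random_combiner(n_base, depth=2):
        """A combining algorithm: tiers of pairwise operations, the first tier applied
        to the base-algorithm masks and the second tier to the first-tier results."""
        return [[(random.choice(list(OPS)), random.randrange(n_base), random.randrange(n_base))
                 for _ in range(n_base)] for _ in range(depth)]

    def apply_combiner(combiner, base_masks):
        current = list(base_masks)
        for tier in combiner:
            current = [OPS[op](current[i], current[j]) for op, i, j in tier]
        return current[0]

    def fitness(mask, truth):
        """Intersection-over-union of a combined binary mask against ground truth."""
        inter, union = np.sum(mask & truth), np.sum(mask | truth)
        return inter / union if union else 0.0

    def evolve(base_masks, truth, pop=20, generations=5):
        """Keep the better half of each generation and refill with fresh random
        combiners; a stand-in for whatever generation update the method uses."""
        n = len(base_masks)
        population = [random_combiner(n) for _ in range(pop)]
        for _ in range(generations):
            population.sort(key=lambda c: fitness(apply_combiner(c, base_masks), truth), reverse=True)
            population = population[:pop // 2] + [random_combiner(n) for _ in range(pop - pop // 2)]
        return max(population, key=lambda c: fitness(apply_combiner(c, base_masks), truth))

    # Tiny demo with three noisy base masks.
    rng = np.random.default_rng(0)
    truth = rng.random((32, 32)) > 0.5
    bases = [truth ^ (rng.random((32, 32)) > 0.9) for _ in range(3)]
    best = evolve(bases, truth)
    print(fitness(apply_combiner(best, bases), truth))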

US Pat. No. 9,963,243

LANDING DETECTION SYSTEMS

GoPro, Inc., San Mateo, ...

1. A method for detecting a landing of an aerial vehicle, comprising:
performing a static test to detect the landing of the aerial vehicle, the static test comprising:
upon a determination that the aerial vehicle has a static state, running a static timer for a first time period; and
upon expiration of the first time period, determining that the aerial vehicle remains in the static state;
performing a thrust test to detect the landing of the aerial vehicle, the thrust test comprising:
upon a determination that a thrust level of the aerial vehicle is below a thrust threshold, running a thrust timer for a second time period;
upon expiration of the second time period, determining that the thrust level of the aerial vehicle remains below the thrust threshold; and
upon a determination that the thrust level of the aerial vehicle remains below the thrust threshold, determining that a change in altitude of the aerial vehicle over the second time period is below an altitude change threshold;
performing a shock test to detect the landing of the aerial vehicle, the shock test comprising:
upon a determination that a shock is detected, determining that the aerial vehicle experienced a previous descent; and
upon a determination that the aerial vehicle experienced the previous descent, determining that a control input of the aerial vehicle has a stick-down state;
upon a detection of the landing by one of the static test, the thrust test, or the shock test, performing a free-fall test to detect a free fall of the aerial vehicle;
upon a lack of a detection of the free fall by the free-fall test, setting a landed state for the aerial vehicle and disarming a rotor of the aerial vehicle; and
upon a detection of the free fall by the free-fall test, setting an in-air state for the aerial vehicle and not disarming the rotor of the aerial vehicle.
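
The three landing tests and the free-fall override can be summarised as predicates over recorded sensor windows. The thresholds and the function signatures below are placeholders, not values from the patent.

    def static_test(static_samples):
        """True if every sample over the static-timer window reports a static state."""
        return all(static_samples)

    def thrust_test(thrust_samples, altitude_samples, thrust_threshold=0.15, alt_threshold=0.3):
        """True if thrust stayed below its threshold for the whole window and the
        altitude change over the window is below the altitude-change threshold."""
        return (all(t < thrust_threshold for t in thrust_samples) and
                abs(altitude_samples[-1] - altitude_samples[0]) < alt_threshold)

    def shock_test(shock, had_descent, stick_down):
        """True if a shock follows a previous descent while the control stick is down."""
        return shock and had_descent and stick_down

    def landing_state(static_samples, thrust_samples, altitude_samples,
                      shock, had_descent, stick_down, free_fall):
        landed = (static_test(static_samples) or
                  thrust_test(thrust_samples, altitude_samples) or
                  shock_test(shock, had_descent, stick_down))
        if landed and not free_fall:
            return "landed-and-disarm"
        if landed and free_fall:
            return "in-air"          # free-fall test overrides, rotor stays armed
        return "flying"

    # A quiet static window with no free fall leads to disarming.
    print(landing_state([True] * 20, [0.5] * 20, [10.0] * 20,
                        shock=False, had_descent=False, stick_down=False, free_fall=False))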

US Pat. No. 9,961,261

IMAGE ALIGNMENT USING A VIRTUAL GYROSCOPE MODEL

GoPro, Inc., San Mateo, ...

1. A method for aligning a target image to a reference image in the presence of lens distortion, the method comprising:
receiving the target image and the reference image, each of the target image and the reference image being captured by a lens having lens distortion parameters;
detecting a first plurality of visual features appearing in the target image at first image feature coordinates in a two-dimensional image space, and a corresponding second plurality of visual features appearing in the reference image at second image feature coordinates in the two-dimensional image space;
transforming, based on the lens distortion parameters, the first and second image feature coordinates from the two-dimensional image space to a three-dimensional spherical space to generate respective first spherical feature coordinates and second spherical feature coordinates;
applying, by a processor, a rotation to the target image in the three-dimensional spherical space to generate a rotated target image, the applying of the rotation comprising (i) aligning at least a subset of the first spherical feature coordinates to a corresponding subset of the second spherical feature coordinates, and (ii) determining the subset of the first spherical feature coordinates and the subset of the second spherical feature coordinates as background features in the target image and the reference image, respectively; and
inverse transforming, based on the lens distortion parameters, the rotated target image to the two-dimensional image space;
wherein the determining of the subset of the first spherical feature coordinates and the subset of the second spherical feature coordinates as background features comprises:
determining feature tracks representing a distance along a spherical arc and a direction from each of the first spherical feature coordinates to a corresponding one of the second spherical feature coordinates;
clustering the feature tracks by length to determine a plurality of feature track clusters; and
determining from the plurality of feature track clusters, a background cluster of feature tracks corresponding to the background features.
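
The background-selection step (cluster feature tracks by arc length, keep the dominant cluster) is easy to sketch once the matched features are already unit vectors on the sphere. The histogram-based clustering below is an assumed stand-in for whatever clustering the patent contemplates.

    import numpy as np

    def track_lengths(first_sph, second_sph):
        """Arc length of each feature track between matched unit vectors on the sphere."""
        dots = np.clip(np.sum(first_sph * second_sph, axis=1), -1.0, 1.0)
        return np.arccos(dots)

    def background_features(first_sph, second_sph, n_bins=16):
        """Cluster the feature tracks by length (a coarse histogram) and keep the
        largest cluster as the background features used to solve the aligning rotation."""
        lengths = track_lengths(first_sph, second_sph)
        bins = np.linspace(lengths.min(), lengths.max() + 1e-9, n_bins + 1)
        which = np.digitize(lengths, bins) - 1
        background_bin = np.bincount(which, minlength=n_bins).argmax()
        return np.flatnonzero(which == background_bin)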

US Pat. No. 9,953,679

SYSTEMS AND METHODS FOR GENERATING A TIME LAPSE VIDEO

GoPro, Inc., San Mateo, ...

1. A system for generating a time lapse video, the system comprising:
one or more physical processors configured by machine readable instructions to:
access a video;
extract images from the video;
group the images into image groups, individual image groups having individual sizes defined by numbers of the images in the individual image groups and including similar and sequential images such that a first image group having a first size includes a first image and a second image, the first image similar to the second image and sequential with the second image;
detect numbers and types of classified visuals within the images, individual types of classified visuals corresponding to individual classification weights, such that a first number of a first type of classified visual within the first image is detected, the first type of classified visual corresponding to a first classification weight;
determine image classification weights for the images based on the numbers and the types of classified visuals detected within the individual images and the individual classification weights such that a first image classification weight is determined for the first image based on the first number of the first type of classified visual and the first classification weight;
determine interest weights for the images based on the image classification weights for the individual images and the sizes of the image groups to which the individual images belong, such that a first interest weight is determined for the first image based on the first image classification weight and the first size;
generate an interest curve for the images based on the interest weights such that a value of the interest curve at a point corresponding to the first image is based on the first interest weight;
generate a retime curve for the images based on the interest curve, the retime curve defining perceived speeds at which the time lapse video is displayed during playback such that the retime curve defines a first perceived speed at which a portion of the time lapse video corresponding to the first image is displayed during playback;
determine time lapse images to be included in the time lapse video based on the images and the retime curve; and
generate the time lapse video based on the time lapse images.
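
The interest-curve to retime-curve step can be illustrated with a short sketch: interesting frames map to a low perceived speed, and a selector walks the curve to pick the time lapse frames. The linear mapping and the speed range are invented for the example.

    import numpy as np

    def retime_curve(interest, min_speed=1.0, max_speed=8.0):
        """Map an interest curve to perceived playback speed: interesting frames
        play slower (fewer frames skipped), uninteresting stretches play faster."""
        norm = (interest - interest.min()) / (np.ptp(interest) + 1e-9)
        return max_speed - norm * (max_speed - min_speed)

    def select_timelapse_frames(interest):
        """Walk the retime curve, skipping ahead by the local speed, to pick the
        frames that make up the time lapse."""
        speed = retime_curve(np.asarray(interest, dtype=float))
        picked, pos = [], 0.0
        while pos < len(speed):
            picked.append(int(pos))
            pos += speed[int(pos)]
        return picked

    # Example: an interest spike around frames 40-60 slows the time lapse there.
    interest = np.ones(120)
    interest[40:60] = 5.0
    print(select_timelapse_frames(interest))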

US Pat. No. 9,946,256

WIRELESS COMMUNICATION DEVICE FOR COMMUNICATING WITH AN UNMANNED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A wireless communication device, the device comprising:
a housing;
a touch sensitive display integrally included within the housing;
multiple radio frequency transceivers included within the housing, wherein a first radio frequency transceiver communicates with an unmanned aerial vehicle and a second radio frequency transceiver communicates with a network;
multiple input mechanisms included with the housing that are physically engageable for manual manipulation by a user separate from manipulation of the housing; and
a processor included within the housing, wherein the processor is configured to:
obtain, via the first radio frequency transceiver, visual information captured by an image capture subsystem of the unmanned aerial vehicle;
display the visual information via the touch sensitive display;
detect parameter values of parameters of a touch on the touch sensitive display, the parameter values of the touch including a location value specifying a location of the touch on the touch sensitive display and/or a pressure value specifying a pressure of the touch on the touch sensitive display;
determine a first set of inputs based upon the parameter values of the parameters of the touch on the touch sensitive display;
receive a second set of inputs when one or more of the multiple input mechanisms are physically engaged; and
effectuate transmission, via the first radio frequency transceiver, of instructions to the unmanned aerial vehicle based upon the first set of inputs and/or the second set of inputs, the instructions being configured to adjust flight controls and/or adjust the image capture subsystem of the unmanned aerial vehicle.

US Pat. No. 9,930,271

AUTOMATIC COMPOSITION OF VIDEO WITH DYNAMIC BACKGROUND AND COMPOSITE FRAMES SELECTED BASED ON FRAME CRITERIA

GoPro, Inc., San Mateo, ...

1. A method for generating a composite output video from an input video having a sequence of frames, the method comprising:
receiving a current video frame for processing from the sequence of frames;
determining, by a processing device, whether the current video frame meets first criteria;
responsive to the current video frame meeting the first criteria, performing, by the processing device, a foreground/background segmentation based on a predictive model to extract a foreground object image from the current video frame, the foreground object image comprising a representation of a foreground object depicted in the current video frame with background pixels subtracted, and storing the foreground object image to a foreground object list that stores a plurality of previously extracted foreground object images;
overlaying each of the foreground object images in the foreground object list onto the current video frame to generate a composite video frame;
determining whether the current video frame meets second criteria; and
responsive to the current video frame meeting the second criteria, updating the predictive model.
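
A toy version of the loop (a running-average background as the predictive model, thresholded foreground extraction on frames that meet the first criteria, overlay of every stored foreground object) is sketched below; the segmentation itself is deliberately simplistic and every detail is assumed.

    import numpy as np

    class CompositeBuilder:
        """Illustrative stand-in: running-average background model, foreground
        extraction on selected frames, overlay of all stored foreground objects."""

        def __init__(self, first_frame, fg_thresh=25.0, alpha=0.05):
            self.background = first_frame.astype(float)
            self.fg_thresh = fg_thresh
            self.alpha = alpha
            self.foreground_list = []          # (mask, pixels) pairs

        def extract_foreground(self, frame):
            """Segment and store the foreground object from a frame meeting the first criteria."""
            diff = np.abs(frame.astype(float) - self.background).mean(axis=-1)
            mask = diff > self.fg_thresh
            self.foreground_list.append((mask, frame.copy()))

        def composite(self, frame):
            """Overlay every stored foreground object onto the current frame."""
            out = frame.copy()
            for mask, pixels in self.foreground_list:
                out[mask] = pixels[mask]
            return out

        def update_model(self, frame):
            """Update the predictive model when a frame meets the second criteria."""
            self.background = (1 - self.alpha) * self.background + self.alpha * frame

    # Driver sketch: "first criteria" = every 10th frame, "second criteria" = every frame.
    # builder = CompositeBuilder(frames[0])
    # for i, frame in enumerate(frames[1:], 1):
    #     if i % 10 == 0:
    #         builder.extract_foreground(frame)
    #     composite = builder.composite(frame)
    #     builder.update_model(frame)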

US Pat. No. 9,886,733

WATERMARKING DIGITAL IMAGES TO INCREASE BIT DEPTH

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for compressing an image, the image comprising image data representative of the image, the
method comprising:
generating, by a processor apparatus, a watermark comprising a set of watermark coefficients representative of a set of truncated
image data bits, the truncated image data bits comprising a set of least significant bits of the image data representative
of the image;

generating a transformed image by converting a portion of the image data into a set of image coefficients in a frequency domain;
embedding the generated watermark in the transformed image by modifying a subset of the image coefficients with the set of
watermark coefficients to form a modified set of coefficients; and

generating a modified image by converting the modified set of coefficients into a spatial domain, the modified image representative
of both the portion of the image data and the generated watermark.
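
The general shape of the method (drop the least significant bits, move to the frequency domain, fold the dropped bits into a subset of coefficients, invert) can be sketched for a square, single-channel image. The block below only shows that shape; it does not demonstrate recovering the watermark, and the DCT layout, bit split and strength factor are assumptions.

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis, built directly so the sketch only needs numpy."""
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        m[0] *= 1 / np.sqrt(2)
        return m * np.sqrt(2.0 / n)

    def embed_truncated_bits(img8, keep_bits=6, strength=4.0):
        """Truncate the least significant bits of a square uint8 image, then fold
        those bits into the high-frequency DCT coefficients of the truncated image."""
        n = img8.shape[0]
        dropped = img8 & ((1 << (8 - keep_bits)) - 1)            # the truncated LSBs
        truncated = (img8 >> (8 - keep_bits)) << (8 - keep_bits)
        d = dct_matrix(n)
        coeffs = d @ truncated.astype(float) @ d.T               # frequency domain
        watermark = (dropped.astype(float) - dropped.mean()) * strength / 255.0
        coeffs[n // 2:, n // 2:] += watermark[: n - n // 2, : n - n // 2]  # modify a subset
        recovered = d.T @ coeffs @ d                             # back to the spatial domain
        return np.clip(np.round(recovered), 0, 255).astype(np.uint8)

    # Quick check that the modified image stays close to the truncated original.
    img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
    print(np.abs(embed_truncated_bits(img).astype(int) - img.astype(int)).max())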

US Pat. No. 9,883,120

AUTOMATIC COMPOSITION OF COMPOSITE IMAGES OR VIDEO WITH STEREO FOREGROUND OBJECTS

GoPro, Inc., San Mateo, ...

1. A method for generating a composite output image from an input video having a sequence of frames, the method comprising:
receiving a sequence of stereo video frames depicting a foreground object;
for selected frames in the sequence of stereo video frames, performing by a processing device, foreground/background segmentations
to extract respective stereo foreground object images each comprising a representation of the foreground object with background
pixels subtracted;

storing the respective stereo foreground object images to a foreground object list, each of the stereo foreground object images
having left and right images with a disparity between them;

transforming the stereo foreground object images to adjust the respective disparities between the respective left and right
images based on a change between a convergence depth for the respective selected frames and a convergence depth for a current
frame, the transforming of the stereo foreground object images comprising:

decreasing the disparity for a given stereo foreground object image, in response to the convergence depth for the current
frame being further away than a convergence depth for a selected frame corresponding to the given stereo foreground object;
or

increasing the disparity for the given stereo foreground object image, in response to the convergence depth for the current
frame being closer than the convergence depth for the selected frame corresponding to the given stereo foreground object;
and

overlaying the transformed stereo foreground object images onto the current frame to generate a composite output image.
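
The disparity adjustment follows from the usual pinhole relation (disparity is roughly focal length times baseline over depth), which is an assumption here rather than anything stated in the claim: a farther convergence depth for the current frame lowers the stored object's disparity, a closer one raises it.

    def adjusted_disparity(disparity_px, selected_depth_m, current_depth_m,
                           focal_px=1000.0, baseline_m=0.06):
        """Shift a stored stereo foreground object's disparity when the convergence
        depth of the current frame differs from the frame it was extracted from."""
        delta = focal_px * baseline_m * (1.0 / current_depth_m - 1.0 / selected_depth_m)
        return disparity_px + delta

    # A current frame converged farther away (4 m vs 2 m) reduces the disparity.
    print(adjusted_disparity(30.0, selected_depth_m=2.0, current_depth_m=4.0))   # -> 15.0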

US Pat. No. 9,871,994

APPARATUS AND METHODS FOR PROVIDING CONTENT CONTEXT USING SESSION METADATA

GoPro, Inc., San Mateo, ...

1. A non-transitory computer readable medium storing a plurality of computer-readable instructions which, when executed by
one or more processors of a user interface apparatus, cause the user interface apparatus to:
obtain a first video stream, the first video stream acquired during a first activity by a first capture device, the first
video stream including a first sequence of images, the first sequence of images including visual capture of the first activity,
the first sequence of images captured during a first time duration;

obtain a first metadata stream, the first metadata stream characterizing the first activity captured within the first sequence
of images, the first metadata stream including values of a first parameter characterizing an aspect of the first activity
captured within the first sequence of images, one or more of the values of the first parameter corresponding to one or more
images of the first sequence of images;

obtain a second video stream, the second video stream acquired during a second activity by a second capture device, the second
video stream including a second sequence of images, the second sequence of images including visual capture of the second activity,
the second sequence of images captured during a second time duration, wherein at least a portion of the first time duration
overlaps with at least a portion of the second time duration;

obtain a second metadata stream, the second metadata stream characterizing the second activity captured within the second
sequence of images, the second metadata stream including values of a second parameter characterizing an aspect of the second
activity captured within the second sequence of images, one or more of the values of the second parameter corresponding to
one or more images of the second sequence of images;

present on a display of the user interface apparatus:
at least a portion of the first video stream, the portion of the first video stream including at least some of the images
of the first sequence of images;

values of the first parameter corresponding to the presented images of the first sequence of images, the presented values
of the first parameter characterizing the aspect of the first activity at one or more moments within the first time duration;
and

values of the second parameter characterizing the aspect of the second activity at one or more moments within the second time
duration, the one or more moments within the second time duration overlapping with the one or more moments within the first
time duration;

identify an occurrence of a highlight event during the second activity based on the values of the second parameter, the highlight
event occurring during some or all of the one or more moments within the second time duration; and

responsive to the identification of the highlight event during the second activity, present on the display of the user interface
apparatus at least a portion of the second video stream, the portion of the second video stream including one or more of the
images of the second sequence of images corresponding to the moments within the second time duration during which the highlight
event occurred.

US Pat. No. 9,836,853

THREE-DIMENSIONAL CONVOLUTIONAL NEURAL NETWORKS FOR VIDEO HIGHLIGHT DETECTION

GoPro, Inc., San Mateo, ...

20. A three-dimensional convolutional neural network system for video highlight detection, the system comprising:
one or more physical processors configured by machine-readable instructions to:
access video content, the video content having a duration;
segment the video content into a first set of video segments, individual video segments within the first set of video segments
including a first number of video frames, the first set of video segments comprising a first video segment and a second video
segment, the second video segment following the first video segment within the duration;

input the first set of video segments into a first three-dimensional convolutional neural network, the first three-dimensional
convolutional neural network outputting a first set of spatiotemporal feature vectors corresponding to the first set of video
segments, wherein the first three-dimensional convolutional neural network includes a sequence of layers comprising:

a preliminary layer group that, for the individual video segments:
accesses a video segment map, the video segment map characterized by a height dimension, a width dimension, a number of video
frames, and a number of channels,

increases the dimensionality of the video segment map;
convolves the video segment map to produce a first set of feature maps;
applies a first activating function to the first set of feature maps;
normalizes the first set of feature maps; and
downsamples the first set of feature maps;
one or more intermediate layer groups that, for the individual video segments:
receives a first output from a layer preceding the individual intermediate layer group;
convolves the first output to reduce a number of channels of the first output;
normalizes the first output;
increases the dimensionality of the first output;
convolves the first output to produce a second set of feature maps;
convolves the first output to produce a third set of feature maps;
concatenates the second set of feature maps and the third set of feature maps to produce a set of concatenated feature maps;
normalizes the set of concatenated feature maps;
applies a second activating function to the set of concatenated feature maps; and
combines the set of concatenated feature maps and the first output; and
a final layer group that, for the individual video segments:
receives a second output from a layer preceding the final layer group;
reduces an overfitting from the second output;
convolves the second output to produce a fourth set of feature maps;
applies a third activating function to the fourth set of feature maps;
normalizes the fourth set of feature maps;
downsamples the fourth set of feature maps; and
converts the fourth set of feature maps into a spatiotemporal feature vector;
input the first set of spatiotemporal feature vectors into a long short-term memory network, the long short-term memory network
determining a first set of predicted spatiotemporal feature vectors based on the first set of spatiotemporal feature vectors,
individual predicted spatiotemporal feature vectors corresponding to the individual video segments characterizing a prediction
of a video segment following the individual video segments within the duration and/or a video segment preceding the individual
video segments within the duration; and

determine a presence of a highlight moment within the video content based on a comparison of one or more spatiotemporal feature
vectors of the first set of spatiotemporal feature vectors with one or more predicted spatiotemporal feature vectors of the
first set of predicted spatiotemporal feature vectors;

wherein:
the first three-dimensional convolutional neural network is initialized with pre-trained weights from a trained two-dimensional
convolutional neural network, the pre-trained weights from the trained two-dimensional convolutional neural network being
stacked along a time dimension; and

the long short-term memory network is trained with second video content including highlights.

US Pat. No. 9,838,731

SYSTEMS AND METHODS FOR AUDIO TRACK SELECTION IN VIDEO EDITING WITH AUDIO MIXING OPTION

GoPro, Inc., San Mateo, ...

1. A system that automatically edits video clips to synchronize accompaniment by different musical tracks, the system comprising:
one or more non-transitory storage media storing video content and initial instructions defining a preliminary version of
an edited video clip made up from the stored video content, the initial instructions indicating specific portions of the video
content to be included in the preliminary version of the edited video clip and an order in which the specific portions of
the video content should be presented; and

one or more physical processors configured by machine readable instructions to:
determine occurrences of video events within the preliminary version of the edited video clip, the individual occurrences
of video events corresponding to different moments within the preliminary version of the edited video clip;

access a repository of musical tracks, the repository of musical tracks including a first musical track, a second musical
track, a first set of audio event markers associated with the first musical track, and a second set of audio event markers
associated with the second musical track, individual sets of audio event markers including one or more audio event markers,
the individual audio event markers associated with the first musical track corresponding to different moments within the first
musical track and the individual audio event markers associated with the second musical track corresponding to different moments
within the second musical track;

effectuate presentation of two or more of the musical tracks on a graphical user interface of a video application for selection
by a user to use as accompaniment for the edited video clip;

determine revised instructions defining revised versions of the edited video clip that are synchronized with the two or more
of the musical tracks presented on the graphical user interface such that first revised instructions define a first revised
version of the edited video clip that is synchronized with the first musical track so that one or more moments within the
edited video clip corresponding to one or more occurrences of video events are aligned with one or more moments within the
first musical track corresponding to one or more of the audio event markers of the first musical track and second revised
instructions define a second revised version of the edited video clip that is synchronized with the second musical track so
that one or more moments within the edited video clip corresponding to one or more occurrences of video events are aligned
with one or more moments within the second musical track corresponding to one or more of the audio event markers of the second
musical track, wherein the revised instructions are determined prior to the user's selection of one or more of the musical
tracks;

effectuate presentation of an audio mixing option on the graphical user interface of the video application for selection by
the user, the audio mixing option defining volume at which the musical tracks are played as accompaniment for the edited video
clip;

responsive to the user's selection of the first musical track, effectuate playback of the first revised version of the edited
video clip along with the first musical track as accompaniment, the first musical track played at the volume defined by the
user selection of the audio mixing option; and

responsive to the user's selection of the second musical track, effectuate playback of the second revised version of the edited
video clip along with the second musical track as accompaniment, the second musical track played at the volume defined by
the user selection of the audio mixing option.

US Pat. No. 9,817,394

SYSTEMS AND METHODS FOR ADJUSTING FLIGHT CONTROL OF AN UNMANNED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A system for adjusting flight control of an unmanned aerial vehicle in order to capture a moment when a performer and a
stationary object are in proximity of each other, the system comprising:
a flight control subsystem configured to provide the flight control for the unmanned aerial vehicle;
an image sensor carried by the unmanned aerial vehicle configured to generate output signals conveying visual information;
and

a processor configured to:
determine a first distance between the performer and the unmanned aerial vehicle based on the visual information;
recognize the stationary object based on the visual information;
determine a second distance between the stationary object and the unmanned aerial vehicle;
adjust the flight control based on the first distance and the second distance such that, responsive to the performer approaching
the stationary object, the unmanned aerial vehicle is in a position for the image sensor to capture the moment when the performer
and the stationary object are in proximity of each other; and

responsive to the unmanned aerial vehicle being in the position, control the image sensor to capture a video segment that
includes the moment when the performer and the stationary object are in proximity of each other such that the performer and
the stationary object are captured in a single field of view of the image sensor.

US Pat. No. 9,807,501

GENERATING AN AUDIO SIGNAL FROM MULTIPLE MICROPHONES BASED ON A WET MICROPHONE CONDITION

GoPro, Inc., San Mateo, ...

1. A method for generating an output audio signal in an audio capture device having multiple microphones including at least
a first reference microphone capturing a first audio signal, a second reference microphone capturing a second audio signal,
and a drainage microphone capturing a third audio signal, the drainage microphone adjacent to a drainage channel for draining
liquid away from the drainage microphone, the method comprising:
during a first time period, determining that both the first reference microphone and the second reference microphone are wet,
selecting the third audio signal from the drainage microphone, and generating a first mono audio output signal corresponding
to the first time period from the third audio signal;

during a second time period, determining that both the first reference microphone and the second reference microphone are
dry, selecting the first audio signal from the first reference microphone and selecting the second audio signal from the second
reference microphone, generating a stereo audio output signal corresponding to the second time period by processing the first
and second audio signals;

during a third time period, determining that the first reference microphone is dry and the second reference microphone is
wet, selecting the first audio signal from the first reference microphone, and generating a second mono audio output signal
corresponding to the third time period from the first audio signal; and

during a fourth time period, determining that the second reference microphone is dry and the first reference microphone is
wet, selecting the second audio signal from the second reference microphone, and generating a third mono output audio signal
corresponding to the fourth time period from the second audio signal.
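
The per-period selection reduces to three cases, as in the hypothetical helper below.

    def select_output(ref1, ref2, drain, ref1_wet, ref2_wet):
        """Pick the output signal(s) for one time period from the two reference
        microphones and the drainage microphone, following the wet/dry cases."""
        if ref1_wet and ref2_wet:
            return ("mono", drain)               # both references wet: drainage mic
        if not ref1_wet and not ref2_wet:
            return ("stereo", (ref1, ref2))      # both dry: full stereo
        return ("mono", ref1 if not ref1_wet else ref2)   # only the dry reference

    print(select_output("L", "R", "D", ref1_wet=True, ref2_wet=False))  # -> ('mono', 'R')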

US Pat. No. 9,787,900

DYNAMIC SYNCHRONIZATION OF FRAME RATE TO A DETECTED CADENCE IN A TIME LAPSE IMAGE SEQUENCE

GoPro, Inc., San Mateo, ...

1. A method for dynamic synchronization of a frame rate to a detected cadence, the method comprising:
configuring a camera to operate according to an initial frame rate;
initiating capture of a time lapse image sequence on the camera, according to the initial frame rate;
recording motion data of the camera during the capture of the time lapse image sequence, the motion data recorded by one or
more sensors tracking motion of the camera while the camera captures the time lapse image sequence;

converting the motion data in a particular window of time from time domain data to frequency domain data;
determining a dominant frequency in the frequency domain data;
determining a modified frame rate that is within a predefined tolerance range of the dominant frequency;
continuing capture of the time lapse image sequence according to the modified frame rate; and
storing the time lapse image sequence to a storage medium.
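
The cadence detection amounts to finding the dominant frequency of a motion window and nudging the frame rate toward it. The numpy sketch below assumes gyro samples, a fixed sample rate and a simple relative tolerance.

    import numpy as np

    def dominant_frequency(motion, sample_rate_hz):
        """Dominant frequency (Hz) of a window of motion samples via the FFT."""
        spectrum = np.abs(np.fft.rfft(motion - np.mean(motion)))
        freqs = np.fft.rfftfreq(len(motion), d=1.0 / sample_rate_hz)
        return freqs[spectrum.argmax()]

    def synced_frame_rate(motion, sample_rate_hz, initial_fps, tolerance=0.1):
        """Pick a time lapse frame rate within a tolerance of the detected cadence so
        that, for example, footsteps land at the same phase in consecutive frames."""
        cadence = dominant_frequency(motion, sample_rate_hz)
        if cadence > 0 and abs(cadence - initial_fps) / cadence > tolerance:
            return cadence
        return initial_fps

    # Example: a ~2 Hz cadence in gyro data sampled at 200 Hz.
    t = np.arange(0, 5, 1 / 200.0)
    gyro = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
    print(round(synced_frame_rate(gyro, 200.0, initial_fps=0.5), 2))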

US Pat. No. 9,736,384

COMPRESSION AND DECODING OF SINGLE SENSOR COLOR IMAGE DATA

GoPro, Inc., San Mateo, ...

1. A method for displaying captured image data, the method comprising:
receiving, by an image processing system, image data captured by an image sensor of a camera, the image data comprising a
plurality of image planes, the image processing system configured to perform an editing operation on the image data;

selecting one of a plurality of de-Bayer filters based on the editing operation, each of the plurality of de-Bayer filters
associated with a different filter style;

filtering, by the image processing system, a subset of the plurality of image planes of the received image data using the
selected de-Bayer filter to produce an output image; and

providing, by the image processing system, the output image to a display configured to display the output image.

US Pat. No. 9,756,249

ELECTRONIC IMAGE STABILIZATION FREQUENCY ESTIMATOR

GoPro, Inc., San Mateo, ...

1. A method for controlling electronic image stabilization, the method comprising:
capturing a video stream;
determining an availability of a computational resource for performing the electronic image stabilization on one or more frames
of the video stream;

receiving data from a gyroscope, the gyroscope connected to an imaging device, wherein the data from the gyroscope includes
sensed motion of the imaging device;

determining a baseline motion frequency threshold based on the availability of the computational resource;
estimating a motion frequency of the imaging device during capture of the video stream from the data received from the gyroscope;
comparing the estimated motion frequency of the imaging device to the determined baseline motion frequency threshold;
performing, in response to the estimated motion frequency meeting a first criterion based on the baseline motion frequency
threshold, the electronic image stabilization on the one or more frames of the video stream; and

disabling, in response to the estimated motion frequency meeting a second criterion based on the baseline motion frequency
threshold, performing the electronic image stabilization on the one or more frames of the video stream.
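
The enable/disable decision can be pictured as a threshold that scales with available compute; the mapping below is an arbitrary placeholder chosen only to illustrate the comparison.

    def baseline_threshold(cpu_headroom):
        """More spare compute allows stabilising higher-frequency shake; the 5-50 Hz
        range used here is a made-up placeholder."""
        return 5.0 + 45.0 * max(0.0, min(1.0, cpu_headroom))

    def should_stabilize(estimated_motion_hz, cpu_headroom):
        """Enable electronic image stabilization only when the estimated motion
        frequency is at or below the threshold chosen for the available resource."""
        return estimated_motion_hz <= baseline_threshold(cpu_headroom)

    print(should_stabilize(12.0, cpu_headroom=0.8))   # True: plenty of headroom
    print(should_stabilize(12.0, cpu_headroom=0.05))  # False: disable EIS for this shake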

US Pat. No. 9,720,413

SYSTEMS AND METHODS FOR PROVIDING FLIGHT CONTROL FOR AN UNMANNED AERIAL VEHICLE BASED ON OPPOSING FIELDS OF VIEW WITH OVERLAP

GoPro, Inc., San Mateo, ...

1. An unmanned aerial vehicle comprising:
a housing;
a motor carried by the housing, the motor configured to drive a rotor;
a first image sensor carried within the housing and configured to generate a first output signal conveying first visual information
based on light that becomes incident thereon;

a second image sensor carried within the housing and configured to generate a second output signal conveying second visual
information based on light that becomes incident thereon;

a first optical element configured to guide light within a first field of view to the first image sensor, the first field
of view being greater than 180 degrees, the first optical element being carried by the housing and being vertically oriented
such that the first field of view is directed upwards when the unmanned aerial vehicle operates leveled with respect to ground;

a second optical element configured to guide light within a second field of view to the second image sensor, the second field
of view being greater than 180 degrees, the second optical element being carried by the housing such that a centerline of
the second field of view is substantially opposite from a centerline of the first field of view and a peripheral portion of
the first field of view and a peripheral portion of the second field of view overlap, and being vertically oriented such that
the second field of view is directed downwards when the unmanned aerial vehicle operates leveled with respect to ground, wherein
the peripheral portion of the first field of view and the peripheral portion of the second field of view that overlap circumscribe
the unmanned aerial vehicle and include 360 degrees of lateral directions around the unmanned aerial vehicle; and

one or more processors carried by the housing, wherein the one or more processors are configured by machine readable instructions
to:

receive the first output signal and the second output signal;
determine a disparity of an object within the peripheral portion of the first field of view and the peripheral portion of
the second field of view that overlap; and

provide flight control for the unmanned aerial vehicle based on the disparity.

US Pat. No. 9,661,195

AUTOMATIC MICROPHONE SELECTION IN A SPORTS CAMERA BASED ON WET MICROPHONE DETERMINATION

GoPro, Inc., San Mateo, ...

1. A method for generating an output audio signal in an audio capture system having multiple microphones including at least
a first microphone and a second microphone, the first microphone including a drainage enhancement feature structured to drain
liquid more quickly than the second microphone lacking the drainage enhancement feature, the method comprising:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during
a time interval;

receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during
the time interval;

determining, by a processor, a correlation metric between the first audio signal and the second audio signal representing
a similarity between the first audio signal and the second audio signal;

responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold, determining if the first and second microphones
are submerged in liquid;

responsive to determining that the first and second microphones are not submerged, determining whether the first microphone
is wet;

responsive to determining that the first microphone is wet, outputting the second audio signal for the time interval;
responsive to determining that the first microphone is not wet or that the first and second microphones are submerged, determining
a first noise metric for the first audio signal and a second noise metric for the second audio signal;

responsive to a sum of the first noise metric and a bias value being less than the second noise metric, outputting the first
audio signal for the time interval; and

responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting
the second audio signal for the time interval.
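
The decision chain (correlation first, then the wet and submerged checks, then a biased noise comparison) maps onto a few lines of Python. The correlation threshold, the bias value and the derivative-based noise metric are assumptions for the sketch.

    import numpy as np

    def choose_signal(sig1, sig2, submerged, mic1_wet, corr_thresh=0.8, bias=0.05):
        """Decision chain from the claim: sig1 comes from the microphone with the
        drainage enhancement, sig2 from the microphone without it."""
        corr = np.corrcoef(sig1, sig2)[0, 1]
        if corr > corr_thresh:
            return sig1                              # signals agree: use the drainage-enhanced mic
        if not submerged and mic1_wet:
            return sig2                              # first mic wet, not submerged: use the second
        noise1 = np.std(np.diff(sig1))               # crude noise metric
        noise2 = np.std(np.diff(sig2))
        return sig1 if noise1 + bias < noise2 else sig2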

US Pat. No. 9,639,935

APPARATUS AND METHODS FOR CAMERA ALIGNMENT MODEL CALIBRATION

GoPro, Inc., San Mateo, ...

1. A non-transitory computer-readable storage medium, comprising executable instructions that, when executed by a processor,
facilitate performance of operations, comprising:
receiving a first input frame captured by a first image capture device of an image capture apparatus, the first image capture
device having a first field-of-view;

receiving a second input frame captured by a second image capture device of the image capture apparatus, the second image
capture device having a second field-of-view such that a first region of the first field-of-view, corresponding to a first
region of the first input frame, overlaps a second region of the second field-of-view, corresponding to a second region of
the second input frame; and

generating a calibrated camera alignment model for the first image capture device and the second image capture device, wherein
generating the calibrated camera alignment model includes:

identifying a camera alignment model for the first image capture device and the second image capture device, wherein the camera
alignment model includes information describing a first alignment path for a defined location in the first region of the first
input frame and a second alignment path for the defined location in the second region of the second input frame;

identifying the first alignment path as a first candidate alignment path in the first input frame;
identifying a second candidate alignment path in the first input frame spatially adjacent to the first candidate alignment
path in a first lateral direction;

identifying the second alignment path as a third candidate alignment path in the second input frame;
identifying a fourth candidate alignment path in the second input frame spatially adjacent to the third candidate alignment
path in a second lateral direction;

identifying a first point along the first candidate alignment path or the second candidate alignment path corresponding to
a second point along the third candidate alignment path or the fourth candidate alignment path;

on a condition that the first point is a point along the second candidate alignment path:
generating an updated first alignment path by updating the first alignment path based on the second candidate alignment path;
omitting the first alignment path from the calibrated camera alignment model; and
including the updated first alignment path in the calibrated camera alignment model;
on a condition that the second point is a point along the fourth candidate alignment path:
generating an updated second alignment path by updating the second alignment path based on the fourth candidate alignment
path;

omitting the second alignment path from the calibrated camera alignment model; and
including the updated second alignment path in the calibrated camera alignment model; and
outputting or storing the calibrated camera alignment model.

US Pat. No. 9,663,227

SYSTEMS AND METHODS FOR CONTROLLING AN UNMANNED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A system for controlling an unmanned aerial vehicle, the system comprising:
a flight control subsystem configured to provide flight control for the unmanned aerial vehicle;
a sensor configured to generate output signals conveying visual information;
a sensor control subsystem configured to control the sensor through adjustments of one or more of aperture timing, exposure,
focal length, angle of view, depth of field, focus, light metering, white balance, resolution, frame rate, object of focus,
capture angle, a zoom parameter, video format, a sound parameter, and a compression parameter;

a remote controller configured to transmit flight control information and sensor control information; and
a controller interface configured to receive the flight control information and the sensor control information,
wherein the flight control subsystem is further configured to provide the flight control based on the flight control information,
wherein the sensor control subsystem is further configured to control the sensor based on the sensor control information such
that the visual information includes an image of a user,

wherein the remote controller is further configured to recognize one or more gestures from the user and interpret the one
or more gestures as one or both of the flight control information and the sensor control information.

US Pat. No. 9,665,098

SYSTEMS AND METHODS FOR DETERMINING PREFERENCES FOR FLIGHT CONTROL SETTINGS OF AN UNMANNED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A system for determining preferences for flight control settings of an unmanned aerial vehicle, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
obtain consumption information associated with a user consuming video segments, the consumption information for a given video
segment defining user engagement during the given video segment and/or user response to the given video segment, the consumption
information including consumption information for a first video segment and consumption information for a second video segment;

obtain sets of flight control settings associated with capture of the video segments, wherein the flight control settings
define aspects of a flight control subsystem for the unmanned aerial vehicle and/or a sensor control subsystem for the unmanned
aerial vehicle, the sets of flight control settings including a first set of flight control settings associated with capture
of the first video segment and a second set of flight control settings associated with capture of the second video segment;

determine the preferences for the flight control settings of the unmanned aerial vehicle based upon the first set of flight
control settings and the second set of flight control settings, the preferences for the flight control settings being associated
with the user; and

effectuate transmission of instructions to the unmanned aerial vehicle, the instructions including the determined preferences
for the flight control settings and being configured to cause the unmanned aerial vehicle to adjust the flight control settings
to the determined preferences.

US Pat. No. 9,588,407

INVERTIBLE TIMER MOUNT FOR CAMERA

GoPro, Inc., San Mateo, ...

1. A timer mount system comprising:
a housing comprising a first plate and a second plate, each plate comprising respective securing mechanisms on respective
exterior surfaces, the securing mechanisms comprising respective openings structured such that each of the respective openings
aligns along a rotational axis, the respective securing mechanisms each structured to enable mating with a reciprocal securing
interface;

a drive shaft configured to extend along the rotational axis through the respective openings of the first plate and the second
plate, the drive shaft having a first end and a second end each structured to enable mating with a reciprocal socket;

a panning mechanism within the housing, the panning mechanism configured to store rotational energy in response to the drive
shaft being wound in a first rotational direction about the rotational axis and the panning mechanism to release stored rotational
energy to the drive shaft to cause the drive shaft to rotate in a second rotational direction about the rotational axis at
a pre-defined rate of rotation;

a first mount component comprising the reciprocal socket to removably secure to either the first or second end of the drive
shaft and to mate with the first end or second end of the drive shaft such that rotation of the drive shaft about the rotational
axis causes rotation of the first mount component;

a second mount component comprising the reciprocal securing interface to removably secure to either of the respective securing
mechanisms of the first or second plates of the housing, the second mount component further comprising a recess into which
the first or second end of the drive shaft extends, the recess structured such that the drive shaft is rotatable within the
recess without causing rotation of the second mount component.

US Pat. No. 9,584,720

CAMERA SYSTEM DUAL-ENCODER ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data; and
an image capture accelerator chip (“ICA”) coupled between the image sensor chip and the ISP, the image capture accelerator
comprising:

an input configured to receive the image data from the image sensor chip;
a first encoder configured to encode a first portion of the image data to produce first encoded image data;
a second encoder configured to encode a second portion of the image data to produce second encoded image data; and
an output configured to output the received image data when the ICA is configured to operate in a normal mode and to output
one or both of the first encoded image data and the second encoded image data when the ICA is configured to operate in an
accelerated mode.

US Pat. No. 9,554,038

CAMERA SYSTEM TRANSMISSION IN BANDWIDTH CONSTRAINED ENVIRONMENTS

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to detect an amount of bandwidth available to the camera system, the ISP
comprising:

an input to receive the image data from the image sensor chip;
a compression engine configured to decimate the received image data into a plurality of image sub-band components including
a first image sub-band component representative of the image data; and

an encoder configured to:
in response to the amount of bandwidth available to the camera system being less than a threshold amount of bandwidth, encode
the first image sub-band component and output the encoded first image sub-band component; and

in response to the amount of bandwidth available to the camera system being greater than the threshold amount of bandwidth,
encode the image data and output the encoded image data.

US Pat. No. 10,134,114

APPARATUS AND METHODS FOR VIDEO IMAGE POST-PROCESSING FOR SEGMENTATION-BASED INTERPOLATION

GoPro, Inc., San Mateo, ...

1. A computerized apparatus configured to generate interpolated frames of video data, the apparatus comprising:
a video data interface configured to receive a plurality of frames of video data;
a processing apparatus in data communication with the video data interface; and
a storage apparatus in data communication with the processing apparatus, the storage apparatus having a non-transitory computer readable medium comprising instructions which are configured to, when executed by the processing apparatus, cause the computerized apparatus to:
obtain a first frame of video data via the video data interface;
segment one or more objects within the first frame of video data;
obtain a second frame of video data via the video data interface;
segment one or more objects within the second frame of video data;
match at least a portion of the one or more objects within the first frame of video data with the one or more objects within the second frame of video data;
compute a motion of pixels for the matched portion of the one or more objects;
compute a motion of pixels associated with a background image, the computed motion of pixels associated with the background image having the one or more objects segmented out prior to the computation of the motion of the pixels associated with the background image; and
generate an interpolated frame of video data via use of the computed motion of pixels for the matched portion of the one or more objects and the computed motion of pixels associated with the background image, the interpolated frame of video data residing temporally between the first frame of video data and the second frame of video data.
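
A minimal mid-frame interpolation along these lines: shift the background by half its estimated motion, then paste each matched object shifted by half of its own centroid motion. The object masks, the centroid-translation motion model and the function names are assumed for illustration.

    import numpy as np

    def centroid(mask):
        """(row, column) centroid of a boolean object mask."""
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])

    def interpolate_midframe(frame1, frame2, masks1, masks2, bg_motion=(0.0, 0.0)):
        """Frame halfway between frame1 and frame2: the background moves by half its
        motion, and each matched object (masks1[i] <-> masks2[i]) by half of its
        centroid motion."""
        h, w = frame1.shape[:2]
        dy, dx = (0.5 * m for m in bg_motion)
        out = np.roll(np.roll(frame1, int(round(dy)), axis=0), int(round(dx)), axis=1).copy()
        for m1, m2 in zip(masks1, masks2):
            oy, ox = 0.5 * (centroid(m2) - centroid(m1))          # half the object motion
            ys, xs = np.nonzero(m1)
            ty = np.clip(ys + int(round(oy)), 0, h - 1)
            tx = np.clip(xs + int(round(ox)), 0, w - 1)
            out[ty, tx] = frame1[ys, xs]                          # paste the shifted object
        return out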

US Pat. No. 10,056,115

AUTOMATIC GENERATION OF VIDEO AND DIRECTIONAL AUDIO FROM SPHERICAL CONTENT

GoPro, Inc., San Mateo, ...

1. A method for generating a video with corresponding audio, the method performed by a computing system including one or more processors, the method comprising:
receiving a video, by the computing system, the video comprising frames including a target, the video having a field of view;
receiving, by the computing system, audio channels representing audio captured concurrently with the video, individual audio channels comprising directional audio corresponding to respective different directions;
defining, by the computing system, respective spatial sub-regions to subdivide the frames of the video;
mapping, by the computing system, the audio channels to the spatial sub-regions of the video such that each sub-region is associated with at least one audio channel;
determining, by the computing system, a time-varying path of the target within the video based on an analysis of content of the video and/or information associated with the video;
extracting, by the computing system, sub-frames from the frames based on the time-varying path of the target, the sub-frames having a reduced field of view relative to the field of view of the video, the sub-frames including the target;
for individual sub-frames, by the computing system:
determining one or more of the spatial sub-regions overlapping a given sub-frame;
determining a composite audio channel, the composite audio channel including one or more of the audio channels mapped to the one or more of the spatial sub-regions overlapping the given sub-frame;
generating a portion of an audio stream from the composite audio channel; and
outputting, by the computing system, the sub-frames and the audio stream.
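
A small Python sketch of the per-sub-frame audio step above, assuming sub-regions and sub-frames are axis-aligned boxes (x0, y0, x1, y1) and that the composite channel is a plain average of the mapped channels; the box representation and the averaging are assumptions made for illustration.

import numpy as np

def overlaps(box_a, box_b):
    # True when two (x0, y0, x1, y1) boxes intersect.
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def composite_audio(subframe_box, subregion_boxes, channel_samples):
    # Mix the audio channels mapped to every sub-region the sub-frame overlaps.
    mapped = [channel_samples[i] for i, box in enumerate(subregion_boxes)
              if overlaps(subframe_box, box)]
    return np.mean(mapped, axis=0)

# Four quadrant sub-regions of a 100x100 frame, one audio channel per quadrant.
regions = [(0, 0, 50, 50), (50, 0, 100, 50), (0, 50, 50, 100), (50, 50, 100, 100)]
channels = [np.full(8, i, dtype=float) for i in range(4)]
print(composite_audio((40, 10, 70, 40), regions, channels))  # mixes channels 0 and 1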

US Pat. No. 9,972,066

SYSTEMS AND METHODS FOR PROVIDING VARIABLE IMAGE PROJECTION FOR SPHERICAL VISUAL CONTENT

GoPro, Inc., San Mateo, ...

1. A system for providing variable image projection for spherical visual content, the system comprising:
one or more physical processors configured by machine readable instructions to:
obtain visual information defining an image of the spherical visual content, the image including an array of pixels;
obtain a field of view for the spherical visual content, the field of view defining an extent of the image to be displayed;
determine a location of a projection point based on the field of view;
determine a two-dimensional projection of the spherical visual content by projecting pixels of the image within the field of view to a two-dimensional projection plane, wherein an individual pixel is projected along an individual projection line including the projection point and the individual pixel; and
effectuate presentation of the two-dimensional projection of the spherical visual content on a display.

US Pat. No. 9,965,883

UNIFIED IMAGE PROCESSING FOR COMBINED IMAGES BASED ON SPATIALLY CO-LOCATED ZONES

GoPro, Inc., San Mateo, ...

1. An apparatus comprising:
a plurality of image sensors;
a processor; and
a non-transitory computer-readable medium comprising instructions that when executed by the processor cause the processor to:
access a plurality of images respectively captured by the plurality of image sensors;
combine the plurality of images to form a combined image;
determine a plurality of zones for the combined image, each zone of the plurality of zones encompassing a portion of the combined image;
project one of the plurality of zones onto at least two of the plurality of images to determine two or more projected zones in respective images of the plurality of images;
perform an image processing operation with a first set of one or more parameters in the two or more projected zones of the plurality of images; and
perform the image processing operation with a second set of one or more parameters in a different zone of the plurality of images, wherein the second set of one or more parameters is different from the first set of one or more parameters.

US Pat. No. 9,967,457

SYSTEMS AND METHODS FOR DETERMINING PREFERENCES FOR CAPTURE SETTINGS OF AN IMAGE CAPTURING DEVICE

GoPro, Inc., San Mateo, ...

1. A system configured to determine preferences for capture settings of an image capturing device, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
obtain consumption information associated with a user consuming a first video segment, the consumption information defining user engagement during a video segment and/or user response to the video segment;
obtain consumption information associated with the user consuming a second video segment;
obtain a first set of capture settings associated with capture of the first video segment;
obtain a second set of capture settings associated with capture of the second video segment;
determine the preferences for the capture settings of the image capturing device based upon the first set of capture settings and the second set of capture settings, the preferences for the capture settings being associated with the user, wherein the capture settings define aspects of operation for one or more of a processor of the image capturing device, an imaging sensor of the image capturing device, or an optical element of the image capturing device, wherein the preferences for the capture settings are commonalities between the first set of capture settings and the second set of capture settings; and
effectuate transmission of instructions to the image capturing device, the instructions including the determined preferences for the capture settings and being configured to cause the image capturing device to automatically adjust the capture settings to the determined preferences.
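
The "commonalities" determination above can be read as an intersection over the two sets of capture settings; a minimal Python sketch under that reading (the setting names are hypothetical):

def capture_setting_preferences(first, second):
    # Preferences are the settings common to both sets of capture settings.
    return {name: value for name, value in first.items() if second.get(name) == value}

first = {"iso": 400, "shutter": "1/960", "white_balance": "auto"}
second = {"iso": 400, "shutter": "1/480", "white_balance": "auto"}
print(capture_setting_preferences(first, second))  # {'iso': 400, 'white_balance': 'auto'}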

US Pat. No. 9,961,355

ENCODING AND DECODING SELECTIVELY RETRIEVABLE REPRESENTATIONS OF VIDEO CONTENT

GOPRO, INC., San Mateo, ...

1. A method for processing compressed video data, the method comprising:
storing, in a storage structure, for each of a plurality of frames of video having a corresponding image at an original display resolution, a corresponding plurality of image components representative of the frame of video, the plurality of image components including a base image component associated with a given display resolution, and one or more additional image components associated with display resolutions greater than or equal to the given resolution and less than or equal to the original resolution, and the base image component comprising the corresponding image at the given display resolution;
selecting, based at least on a first processing load of a decoder, a first display resolution at which to display a first of the plurality of frames of video;
retrieving from the storage structure a first subset of the plurality of image components corresponding to the first frame of video, the first subset of image components selected based at least on the first display resolution and including the base image component corresponding to the first frame of video;
decoding the retrieved first subset of image components to generate a first modified frame of video at the first display resolution, the first modified frame of video comprising the image corresponding to the first frame of video;
based at least on a change in processing load of the decoder to a second processing load, selecting, based on the second processing load, a second display resolution at which to display a second frame of video;
retrieving from the storage structure a second subset of the plurality of image components corresponding to the second frame of video, the second subset selected based at least on the second display resolution and including the base image component corresponding to the second frame of video; and
decoding the retrieved second subset of image components to generate a second modified frame of video at the second display resolution, the second modified frame of video comprising the image corresponding to the second frame of video.
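
A compact Python sketch of the load-driven selection and retrieval above, assuming each frame's components are dicts with a resolution key and that the mapping from decoder load to display resolution is a simple linear rule; the mapping is an assumption, since the claim only requires that the selection be based on processing load.

def select_resolution(load, resolutions):
    # Lower decoder load permits a higher display resolution; load is in [0, 1].
    ordered = sorted(resolutions)                     # low to high
    idx = round((1.0 - min(max(load, 0.0), 1.0)) * (len(ordered) - 1))
    return ordered[idx]

def components_for_frame(components, target_resolution):
    # The base component plus every additional component at or below the target resolution.
    return [c for c in components if c["resolution"] <= target_resolution]

frame_components = [{"resolution": 360, "kind": "base"},
                    {"resolution": 720, "kind": "additional"},
                    {"resolution": 1080, "kind": "additional"}]
res = select_resolution(load=0.9, resolutions=[360, 720, 1080])
print(res, components_for_frame(frame_components, res))  # high load: 360, base component only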

US Pat. No. 9,904,148

SWIVEL CAMERA MOUNT LOCKING MECHANISM

GoPro, Inc., San Mateo, ...

1. A camera mount comprising:
a swivel component with a hole through a center of the swivel component, the swivel component comprising:
a top face, the top face comprising a mount component configured to couple to a reciprocal mount component, the reciprocal
mount component configured to couple to a camera housing; and

a bottom face comprising a plurality of detents on the bottom face, the plurality of detents located equidistant from the
hole;

a base component comprising:
an attachment mechanism configured to couple the camera mount to an object or user;
a release lever protruding from the base component, the release lever comprising a locking protrusion for insertion into one
of the plurality of detents, the release lever configured to exert a compressive force into the bottom face of the swivel
component; and

a rippled washer coupled between the attachment mechanism and the release lever, the rippled washer comprising a contoured
edge, a top face of the rippled washer component partially abutting the release lever; and

a screw component configured for insertion through the hole of the swivel component, and configured to rotatably couple the
swivel component to the base component;

wherein the swivel component can rotate relative to the base component when the release lever is forcibly pivoted away from
the swivel component such that the locking protrusion is not inserted into a detent, and wherein the swivel component is rotatably
affixed relative to the base component when the locking protrusion is inserted into a detent.

US Pat. No. 9,894,393

VIDEO ENCODING FOR REDUCED STREAMING LATENCY

GoPro, Inc., San Mateo, ...

1. A non-transitory computer-readable medium comprising instructions for streaming a video, the instructions executable by
a processor and comprising instructions for:
receiving, from a client device, a request to stream the video for playback by the client device;
accessing a video frame from video frames included in the video, the video frame being a last frame in a group of pictures
included in the video, the group of pictures comprising an intra-coded video frame and one or more inter-coded video frames;

segmenting the video frame into a plurality of frame segments;
identifying a frame boundary segment immediately preceding a frame segment in another group of pictures included in the video;
generating segment headers indicating a sequence order of the frame segments, the segment headers comprising a frame marking
header, the frame marking header indicating a boundary of the video frame relative to other video frames in the video and
including a flag identifying the frame boundary segment;

generating communication packets each having a payload comprising one of the segment headers and a corresponding one of the
frame segments, the communication packets comprising a communication packet with a payload comprising the frame boundary segment
and the frame marking header; and

transmitting the communication packets to the client device for playback of the video frame, the client device rendering the
video frame using the frame marking header.
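
A Python sketch of the packetization steps above; boundary_index stands in for the identified frame boundary segment, and the dict-based headers are an illustrative stand-in for whatever wire format the patent leaves unspecified.

def packetize_frame(frame_bytes, segment_size, boundary_index):
    # Segment the frame, then wrap each segment with a header carrying its
    # sequence order; the boundary segment also carries the frame-marking flag.
    segments = [frame_bytes[i:i + segment_size]
                for i in range(0, len(frame_bytes), segment_size)]
    packets = []
    for seq, segment in enumerate(segments):
        header = {"seq": seq, "frame_boundary": seq == boundary_index}
        packets.append({"header": header, "payload": segment})
    return packets

packets = packetize_frame(b"\x00" * 4000, segment_size=1400, boundary_index=2)
print([p["header"] for p in packets])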

US Pat. No. 9,877,036

INTER FRAME WATERMARK IN A DIGITAL VIDEO

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for reconstructing a video frame, the method comprising:
accessing a modified reference frame from a video, the modified reference frame temporally proximate to a degraded frame in
the video, the modified reference frame including a watermark representative of a low-resolution version of the degraded frame;

generating a transformed image frame by converting the modified reference frame into a set of image coefficients in a frequency
domain;

extracting a set of watermark coefficients representative of the watermark;
converting the set of watermark coefficients into a first approximation of the degraded frame;
generating a second approximation of the degraded frame using a motion vector between the modified reference frame and the
first approximation of the degraded frame; and

storing the second approximation of the degraded frame in place of the degraded frame, the storing the second approximation
of the degraded frame comprising:

generating a third approximation of the degraded frame by modifying the second approximation of the degraded frame using a
set of low-frequency coefficients determined from the set of watermark coefficients, the generating of the third approximation
of the degraded frame comprising:

generating a set of reconstructed frame coefficients by converting the second approximation of the degraded frame into the
frequency domain;

identifying a subset of reconstructed frame coefficients corresponding to low-frequency coefficients;
modifying the set of reconstructed frame coefficients by replacing the subset of reconstructed frame coefficients with the
set of low-frequency coefficients determined from the watermark coefficients; and

generating the third approximation of the degraded frame by converting the modified set of reconstructed frame coefficients
into the spatial domain; and

storing the third approximation of the degraded frame in place of the degraded frame.

US Pat. No. 9,874,308

CAMERA SYSTEM USING STABILIZING GIMBAL

GoPro, Inc., San Mateo, ...

1. A mount for connecting a gimbal to a mount platform, the mount comprising:
a fixed mount floor rigidly attachable to the mount platform;
a fixed mount ceiling rigidly attachable to the mount platform, wherein a gap exists between a top surface of the fixed mount
floor and a bottom surface of the fixed mount ceiling;

a plurality of elastic connectors protruding from the bottom surface of the fixed mount ceiling;
a floating base having a top surface mechanically coupled to the plurality of elastic connectors and hanging below the fixed
mount ceiling and adjacent to the fixed mount floor;

a gimbal connection housing rigidly attached to the floating base, the gimbal connection housing removably connectable to
the gimbal;

a plurality of locking blocks protruding from the floating base towards the fixed mount floor;
a plurality of locking slots in the fixed mount floor structured to form a cavity reciprocal to the plurality of locking blocks;
wherein at an equilibrium position without a net contact force applied to the floating base in a direction toward the fixed
mount floor, a gap exists between ends of the plurality of locking blocks and corresponding ends of the plurality of locking
slots, and wherein at a non-equilibrium position when a net contact force is applied to the floating base in the direction
towards the fixed mount floor, the ends of the plurality of locking blocks are flush with the corresponding ends of the plurality
of locking slots.

US Pat. No. 9,854,263

ENCODING AND DECODING SELECTIVELY RETRIEVABLE REPRESENTATIONS OF VIDEO CONTENT

GoPro, Inc., San Mateo, ...

1. A method for processing compressed video data, the method comprising:
accessing, by a computer system, a plurality of encoded video frames, each video frame comprising a plurality of data structure
components encoded from said each video frame, each of the plurality of data structure components being associated with a
different one of a plurality of image resolutions such that the video frame can be displayed at one image resolution of the
plurality of image resolutions in part by combining all of a subset of data structure components associated with an image
resolution that is equal to or less than the one image resolution, each of the subset of data structure components comprising
a data size that is equal to or less than that associated with the one image resolution, such that the subset of data structure
components comprises data sufficient to reconstruct the video frame at a corresponding one of the plurality of image resolutions;

identifying a hardware display communicatively coupled to the computer system;
determining an available bandwidth between the computer system and the hardware display;
selecting the one image resolution from the plurality of image resolutions based on the determined available bandwidth;
selecting, for each video frame of the plurality of video frames, a subset of data structure components of the video frame,
the selected subset of data structure components comprising all of the data structure components associated with the image
resolution that is equal to or less than the one image resolution selected from the plurality of available image resolutions;
and

transmitting, for each video frame, the selected subset of data structure components to the hardware display.

US Pat. No. 9,832,397

IMAGE TAPING IN A MULTI-CAMERA ARRAY

GoPro, Inc., San Mateo, ...

1. A method comprising:
capturing a plurality of images with each camera in a camera array comprising a plurality of cameras, each image comprising
at least one portion overlapping with a corresponding portion of a corresponding image;

aligning overlapping portions of corresponding images to produce a set of aligned images;
for each aligned image, performing a warp operation on the aligned image to produce a warped image, wherein a magnitude of
the warp operation on a portion of the aligned image increases with an increase in distance from the portion of the aligned
image to the nearest overlapping portion of the aligned image;

taping each warped image together based on the overlapping portions of the warped images to form a combined image; and
cropping the combined image to produce a rectangular final image.
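
A one-dimensional Python illustration of the warp-magnitude rule above: the strength of the warp applied to each column grows with that column's distance to the nearest overlapping column (the linear growth and the column-wise treatment are simplifications made for illustration).

import numpy as np

def warp_magnitude(width, overlap_columns):
    # Zero at the overlap, increasing with distance from the nearest overlapping column.
    cols = np.arange(width)
    dist = np.abs(cols[:, None] - np.asarray(overlap_columns)[None, :]).min(axis=1)
    return dist / max(dist.max(), 1)

# A 640-pixel-wide image whose right-hand 80 columns overlap its neighbour.
strength = warp_magnitude(640, overlap_columns=np.arange(560, 640))
print(strength[0], strength[600])  # strongest at the far edge, zero inside the overlap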

US Pat. No. 9,747,667

SYSTEMS AND METHODS FOR CHANGING PROJECTION OF VISUAL CONTENT

GoPro, Inc., San Mateo, ...

1. A system for changing projection of visual content, the system comprising:
one or more physical processors configured by machine-readable instructions to:
access first visual information defining the visual content in a first projection;
access second visual information defining lower resolution versions of the visual content in the first projection;
determine a transformation of the visual content from the first projection to a second projection, the transformation of the
visual content from the first projection to the second projection including a visual compression of a portion of the visual
content in the first projection;

identify the portion of the visual content in the first projection;
determine an amount of the visual compression of the portion of the visual content in the first projection;
select one or more of the lower resolution versions of the visual content for the visual compression of the portion based
on the amount of the visual compression of the portion, the one or more of the lower resolution versions of the visual content
including one or more lower resolution versions of the portion of the visual content in the first projection; and

transform the visual content from the first projection to the second projection using the one or more of the lower resolution
versions of the portion of the visual content selected for the visual compression of the portion.

US Pat. No. 9,754,159

AUTOMATIC GENERATION OF VIDEO FROM SPHERICAL CONTENT USING LOCATION-BASED METADATA

GoPro, Inc., San Mateo, ...

1. A method for generating an output video from spherical video content, the method comprising:
storing, by a video server, a first spherical video having first spherical video content and first video metadata including
location data pertaining to a location of a first camera capturing the first spherical video content and timing data pertaining
to a time of capture of the first spherical video content;

receiving user metadata representing a target path, the target path comprising a sequence of time-stamped locations corresponding
to a target;

determining by the video server, based on the user metadata and the first video metadata, a first matching portion of the
first spherical video, the first matching portion captured when the first camera was within a threshold vicinity of the target,
wherein determining the first matching portion of the first spherical video comprises:

determining for each of a sequence of corresponding time points, distances between the target and the first camera based on
the first video metadata and the user metadata;

determining a time range over which the distances are less than a distance threshold; and
determining the first matching portion based on the time range responsive to the time range exceeding a predefined time threshold;
determining a sequence of sub-frames by selecting, for each of a plurality of frames of the first matching portion of the
first spherical video, a sub-frame having content relevant to the target path, each of the sequence of sub-frames comprising
a non-spherical field of view;

combining the sequence of sub-frames to generate a first portion of the output video relevant to the target; and
outputting the output video.
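
A Python sketch of the matching-portion test above, assuming the camera and target locations are sampled at the same timestamps and given as coordinate arrays; the helper name and the Euclidean distance are illustrative choices.

import numpy as np

def matching_portions(times, camera_positions, target_positions,
                      distance_threshold, min_duration):
    # Time ranges over which the camera stays within the threshold vicinity of
    # the target for longer than the predefined time threshold.
    dists = np.linalg.norm(np.asarray(camera_positions, float) -
                           np.asarray(target_positions, float), axis=1)
    close = dists < distance_threshold
    ranges, start = [], None
    for t, flag in zip(times, close):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_duration:
                ranges.append((start, t))
            start = None
    if start is not None and times[-1] - start >= min_duration:
        ranges.append((start, times[-1]))
    return ranges

times = list(range(10))
cam = [(i, 0.0) for i in range(10)]
tgt = [(5.0, 0.0)] * 10
print(matching_portions(times, cam, tgt, distance_threshold=3.0, min_duration=2))  # [(3, 8)]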

US Pat. No. 9,699,360

CAMERA HOUSING WITH INTEGRATED EXPANSION MODULE

GoPro, Inc., San Mateo, ...

1. An apparatus, comprising:
a first portion of a camera housing having an integrated camera battery;
a first portion of a first securing mechanism on a bottom edge of the first portion of the camera housing, the first portion
of the first securing mechanism structured to removably couple to a second portion of the first securing mechanism on a bottom
edge of a second portion of the camera housing, the second portion of the camera housing structured to form a cavity to receive
a camera, the first securing mechanism structured to enable the first portion of the camera housing to rotate between an open
position and a closed position with respect to the second portion of the camera housing;

a first portion of a second securing mechanism on a top edge of the first portion of the camera housing, the first portion
of the second securing mechanism structured to removably couple to a second portion of the second securing mechanism structured
on a top edge of the second portion of the camera housing, the second securing mechanism structured to enable the first portion
of the camera housing to be securely latched to the second portion of the camera housing when in the closed position;

a sealing element around edges of a first surface of the first portion of the camera housing, the sealing element to form
a watertight seal between the first and second portions of the camera housing when in the closed position; and

an interface protruding from the first surface of the first portion of the camera housing to couple the integrated camera
battery to the camera and to provide power to the camera.

US Pat. No. 9,646,652

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION BASED ON MOTION DETECTED IN A VIDEO

GoPro, Inc., San Mateo, ...

1. A method for identifying scenes in captured video for inclusion in a video summary, the method comprising:
accessing a video, the video including a plurality of frames;
determining a velocity of content of individual frames of the video;
determining an acceleration of the content of the individual frames of the video based on at least two of the velocities of
the content of the individual frames;

determining a score for the individual frames of the video based on the velocity and the acceleration of the content of the
individual frames;

selecting one or more of the individual frames of the video based on the determined scores;
identifying, for individual selected frames, a corresponding video scene, the corresponding video scene comprising a first
amount of the video occurring before the selected frame and a second amount of the video occurring after the selected frame;
and

selecting one or more of the identified video scenes for inclusion in the video summary.
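
A short Python sketch of the scoring and scene-selection steps above; the additive velocity-plus-acceleration score is only one plausible scoring function, since the claim does not fix one, and the per-frame velocities are assumed to come from an earlier content-motion step.

import numpy as np

def frame_scores(frame_velocities):
    # Acceleration is derived from consecutive velocities; each frame is scored
    # from its velocity and acceleration (simple additive score, assumed).
    v = np.asarray(frame_velocities, dtype=float)
    a = np.abs(np.gradient(v))
    return v + a

def select_scenes(scores, top_k, before, after):
    # Expand each top-scoring frame into a scene spanning `before` frames
    # earlier through `after` frames later.
    order = np.argsort(scores)[::-1][:top_k]
    return [(max(0, i - before), min(len(scores) - 1, i + after)) for i in order]

scores = frame_scores([0, 1, 5, 9, 9, 2, 1])
print(select_scenes(scores, top_k=2, before=2, after=2))  # [(2, 6), (1, 5)]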

US Pat. No. 9,635,257

DUAL-MICROPHONE CAMERA

GoPro, Inc., San Mateo, ...

1. A method for activating at least one of a plurality of microphones configured within a camera, comprising:
receiving motion data from a camera accelerometer, the motion data representative of the camera's movement;
determining a motion vector for the camera based on the received motion data, the motion vector based on a direction in which
the camera is moving;

selecting less than all of the plurality of microphones based on the motion vector, wherein each selected microphone is located
on a camera surface facing the direction of the motion vector; and

capturing audio data using only the selected microphones and not the unselected microphones in the plurality of microphones
such that the unselected microphones do not produce audio signals.
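
A Python sketch of the selection rule above, under the assumption of two microphones whose surface normals are known and a motion vector crudely estimated by averaging accelerometer samples; the layout, names, and the dot-product test are all illustrative.

import numpy as np

# Hypothetical layout: one microphone per camera surface, keyed by its outward normal.
MICROPHONES = {"front": np.array([0.0, 0.0, 1.0]),
               "rear":  np.array([0.0, 0.0, -1.0])}

def select_microphones(accelerometer_samples):
    # Keep only the microphones on surfaces facing the direction of motion.
    motion = np.mean(np.asarray(accelerometer_samples, dtype=float), axis=0)
    norm = np.linalg.norm(motion)
    if norm == 0:
        return list(MICROPHONES)            # no detected movement: keep them all
    direction = motion / norm
    return [name for name, normal in MICROPHONES.items()
            if np.dot(normal, direction) > 0]

print(select_microphones([[0.0, 0.0, 0.4], [0.0, 0.1, 0.6]]))  # ['front']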

US Pat. No. 9,635,262

MOTION ESTIMATION AND DETECTION IN A CAMERA SYSTEM ACCELERATOR ARCHITECTURE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor chip configured to produce image data representative of light incident upon the image sensor chip;
an image signal processor chip (“ISP”) configured to process the image data, the ISP comprising:
an input configured to receive the image data from the image sensor chip;
a decimator configured to decimate the received image data into a plurality of image sub-band components;
a motion detection engine configured to generate a motion map based on a first subset of the plurality of image sub-band components;
a motion estimation engine configured to generate a set of motion vectors based on the generated motion map and a second subset
of the plurality of image sub-band components; and

an output configured to output one or more of the received image data and the set of motion vectors.

US Pat. No. 10,063,776

CAMERA MODE CONTROL

GoPro, Inc., San Mateo, ...

13. A camera, comprising:
one or more sensors to generate sensor data;
an image sensor to capture images according to different capture modes including an initial capture mode and an alternative capture mode;
a buffer to store the captured images;
an image processor to read the captured images from the buffer and to encode the captured images according to different encoding modes including an initial encoding mode and an alternative encoding mode; and
a mode controller to:
receive the sensor data from the one or more sensors;
determine a confidence value associated with an alternative operation mode for operating the camera based at least in part on the received sensor data, the confidence value indicating a likelihood of the alternative operation mode being preferred by a user to capture an event;
while the camera is operating in an initial operation mode in which the image sensor captures images according to the initial capture mode and the image processor reads the captured images from the buffer and encodes the captured images according to the initial encoding mode, responsive to determining that the confidence value exceeds a first threshold, transition the camera to a priming operation mode in which the image sensor captures images according to the alternative capture mode and the image processor reads the captured images from the buffer and encodes the captured images according to the initial encoding mode; and
while the camera is operating in the priming operation mode, responsive to determining that the confidence value subsequently exceeds a second threshold greater than the first threshold, transition the camera to the alternative operation mode in which the image sensor captures images according to the alternative capture mode and the image processor reads the captured images from the buffer of the camera and encodes the captured images according to an alternative encoding mode.
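
The two-threshold behaviour of the mode controller above can be summarised as a small state machine; the Python sketch below uses illustrative mode names and confidence values and leaves out the capture and encoding plumbing.

def next_mode(current_mode, confidence, first_threshold, second_threshold):
    # initial -> priming once the confidence exceeds the first threshold;
    # priming -> alternative once it later exceeds the higher second threshold.
    if current_mode == "initial" and confidence > first_threshold:
        return "priming"        # alternative capture mode, initial encoding mode
    if current_mode == "priming" and confidence > second_threshold:
        return "alternative"    # alternative capture mode, alternative encoding mode
    return current_mode

mode = "initial"
for confidence in (0.3, 0.6, 0.9):
    mode = next_mode(mode, confidence, first_threshold=0.5, second_threshold=0.8)
print(mode)  # alternative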

US Pat. No. 10,043,237

EQUATORIAL STITCHING OF HEMISPHERICAL IMAGES IN A SPHERICAL IMAGE CAPTURE SYSTEM

GOPRO, INC., San Mateo, ...

1. A method for stitching hyper-hemispherical images to generate a rectangular projection of a spherical image, the method comprising:
receiving a first circular image corresponding to a first hyper-hemispherical field of view captured by a first camera facing a first direction and a second circular image corresponding to a second hyper-hemispherical field of view captured by a second camera facing a second direction opposite the first direction;
causing a projection of the first circular image to a first rectangular image by mapping an outer edge of the first circular image to a bottom edge of the first rectangular image and mapping a center point of the first circular image to a top edge of the first rectangular image;
causing a projection of the second circular image to a second rectangular image by mapping an outer edge of the second circular image to a top edge of the second rectangular image and mapping a center point of the second circular image to a bottom edge of the second rectangular image; and
stitching the bottom edge of the first rectangular image with the top edge of the second rectangular image to generate the rectangular projection of the spherical image.
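
A numpy sketch of the two projections and the equatorial stitch above, using nearest-neighbour sampling; the output size, the sampling, and the assumption that each circular image fills its input frame are simplifications made for illustration.

import numpy as np

def unwrap_circular(img, out_h, out_w, center_to_top=True):
    # Map a circular image to a rectangle: one axis runs from the circle's
    # centre to its outer edge, the other runs once around the circle.
    h, w = img.shape[:2]
    cy, cx, radius = h / 2.0, w / 2.0, min(h, w) / 2.0
    rows = np.linspace(0.0, 1.0, out_h)            # 0 = centre, 1 = outer edge
    if not center_to_top:
        rows = rows[::-1]                          # put the outer edge at the top
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r = rows[:, None] * radius
    ys = np.clip((cy + r * np.sin(thetas)[None, :]).astype(int), 0, h - 1)
    xs = np.clip((cx + r * np.cos(thetas)[None, :]).astype(int), 0, w - 1)
    return img[ys, xs]

def stitch_spherical(first_circular, second_circular, out_h=512, out_w=1024):
    # First image: centre mapped to the top edge, outer edge to the bottom edge.
    top = unwrap_circular(first_circular, out_h // 2, out_w, center_to_top=True)
    # Second image: outer edge mapped to the top edge, centre to the bottom edge.
    bottom = unwrap_circular(second_circular, out_h // 2, out_w, center_to_top=False)
    return np.vstack([top, bottom])               # stitch along the shared equator

front = np.random.rand(960, 960)
back = np.random.rand(960, 960)
panorama = stitch_spherical(front, back)           # (512, 1024) rectangular projection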

US Pat. No. 9,959,604

DYNAMIC GLOBAL TONE MAPPING WITH INTEGRATED 3D COLOR LOOK-UP TABLE

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor configured to convert light incident upon the image sensor into raw RGB image data; and
an image signal processor (“ISP”) configured to:
access a raw RGB color space corresponding to the raw RGB image data;
convert the raw RGB color space into a YCbCr reference color space;
convert the raw RGB image data into YCbCr image data using the YCbCr reference color space;
generate, for each of a plurality of Y-layers of the YCbCr image data, a 2D color look-up table (“LUT”);
convert the YCbCr image data into optimized CbCr image data using the 2D LUTs;
generate optimized YCbCr image data by blending, for at least two Y-layers, the corresponding CbCr image data;
convert the optimized YCbCr image data into tone-mapped optimized sRGB image data; and
store the tone-mapped optimized sRGB image data in a memory of the camera.

US Pat. No. 9,922,682

SYSTEMS AND METHODS FOR ORGANIZING VIDEO FILES

GoPro, Inc., San Mateo, ...

1. A system for organizing video files, the system comprising:
one or more physical processors configured by machine-readable instructions to:
receive electronic information parts, the electronic information parts defining separate temporal segments of visual content
within video frames for playback, the electronic information parts including a first electronic information part and a second
electronic information part,

wherein the first electronic information part includes a first video frame and a first header including a first locator indicating
location of the first video frame in the first electronic information part and the second electronic information part includes
a second video frame and a second header including a second locator indicating location of the second video frame in the second
electronic information part,

wherein the first electronic information part is received at a first time and the second electronic information part is received
at a second time that is subsequent to the first time,

wherein the electronic information parts include a third electronic information part received at a third time that is subsequent
to the second time, the third electronic information part including a third video frame and a third header including a third
locator indicating location of the third video frame in the third electronic information part;

generate a first combined electronic information, the first combined electronic information including the first electronic
information part and the second electronic information part and a first combined header, wherein the first combined header
includes a first combined locator indicating location of the first video frame in the first combined electronic information
and a second combined locator indicating location of the second video frame in the first combined electronic information;
and

generate a second combined electronic information, the second combined electronic information including the first electronic
information part, the second electronic information part, the third electronic information part, and a second combined header,
wherein the second combined header includes a third combined locator indicating location of the first video frame in the second
combined electronic information, a fourth combined locator indicating location of the second video frame in the second combined
electronic information, and a fifth combined locator indicating location of the third video frame in the second combined electronic
information.

US Pat. No. 9,886,961

AUDIO WATERMARK IN A DIGITAL VIDEO

GoPro, Inc., San Mateo, ...

1. A computer-implemented method for compressing a video with audio, the method comprising:
accessing a video comprising a plurality of image frames and audio data, the audio data comprising a plurality of audio data
portions each associated with a corresponding image frame, each image frame comprising an array of pixels, each pixel comprising
image data;

for each image frame of the plurality of image frames:
identifying an audio data portion corresponding to the image frame;
generating, by a processor, a watermark comprising a set of watermark coefficients representative of the audio data portion;
generating a transformed image frame by converting the array of pixels of the image frame into a set of image coefficients
in a frequency domain;

embedding the watermark in the transformed image frame by modifying a subset of the image coefficients with the set of watermark
coefficients to form a modified set of coefficients;

generating a modified image frame by converting the modified set of coefficients into a spatial domain, the modified image
frame representative of the image frame and the audio data portion;

compressing the modified image frame to produce a compressed image frame; and
storing the compressed image frame.
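
A Python sketch of the embedding step above, using a 2-D DCT as the frequency-domain transform and overwriting a block of high-frequency coefficients with scaled audio samples; the choice of DCT, the coefficient positions, and the scaling factor are assumptions for illustration, not the patent's specific watermark construction.

import numpy as np
from scipy.fft import dctn, idctn

def embed_audio_watermark(image_frame, audio_portion, strength=8.0):
    # Convert the pixel array into frequency-domain coefficients.
    coeffs = dctn(image_frame.astype(float), norm="ortho")
    # Watermark coefficients representative of the audio portion, written over
    # a run of high-frequency coefficients in the last row.
    n = len(audio_portion)
    coeffs[-1, -n:] = strength * np.asarray(audio_portion, dtype=float)
    # Back to the spatial domain: the modified frame carries the audio watermark.
    return idctn(coeffs, norm="ortho")

frame = np.random.rand(64, 64)
audio_portion = np.random.rand(16)
modified_frame = embed_audio_watermark(frame, audio_portion)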

US Pat. No. 9,881,349

APPARATUS AND METHODS FOR COMPUTERIZED OBJECT IDENTIFICATION

GOPRO, INC., San Mateo, ...

1. A method of detecting an object by a computerized imaging apparatus, the method comprising:
observing an object, wherein the object comprises a pattern;
sensing at least a portion of the pattern on the object; and
identifying the object based on the sensed at least portion of the pattern;
wherein the pattern comprises at least one of a patch of color or at least one medium that is undetectable via wavelengths
that are visible to a human eye but detectable by the computerized imaging apparatus,

wherein the pattern comprises the at least one medium, and sensing the pattern comprises sensing a first medium that is absorbent
at a given wavelength range outside a given human-visible spectral range, and a second medium that is less absorbent in the
given spectral range relative to the first medium, and

wherein the first medium and the second medium comprise fiber threads woven into a textile of the object.

US Pat. No. 9,864,257

CAMERA FRAME WITH SIDE DOOR

GoPro, Inc., San Mateo, ...

1. A housing for a camera comprising:
a housing body configured to secure a camera within the camera housing, the housing body comprising a left wall, a top wall,
a right wall, a first bottom wall segment coupled to the left wall, and a second bottom wall segment coupled to the right
wall;

a hinge mechanism pivotally coupling the top wall and the left wall; and
a latching mechanism, wherein
when the latching mechanism is configured in a closed state, the latching mechanism couples the first bottom wall segment and
the second bottom wall segment together such that the housing is configured to securely enclose the camera, thereby affixing
the left wall relative to the top wall; and

when the latching mechanism is configured in an open state, the first bottom wall segment is decoupled from the second bottom
wall segment, thereby enabling the left wall and first bottom wall segment to pivotally rotate around the hinge mechanism
relative to the top wall, and enabling the removal of the camera from or insertion of the camera into the housing body.

US Pat. No. 9,836,054

SYSTEMS AND METHODS FOR DETERMINING PREFERENCES FOR FLIGHT CONTROL SETTINGS OF AN UNMANNED AERIAL VEHICLE

GoPro, Inc., San Mateo, ...

1. A system for determining control settings for an unmanned aerial vehicle, the system comprising:
one or more physical computer processors configured by computer readable instructions to:
obtain first consumption information and second consumption information, the first consumption information characterizing
user engagement during playback of a first video segment and/or user response to the playback of the first video segment and
the second consumption information characterizing user engagement during playback of a second video segment and/or user response
to the playback of the second video segment;

obtain a first set of control settings defining one or more aspects of capture of the first video segment based on the first
consumption information and a second set of control settings defining one or more aspects of capture of the second video segment
based on the second consumption information;

determine one or more flight control settings based upon the first set of control settings and the second set of control settings;
and

effectuate transmission of instructions to the unmanned aerial vehicle, the instructions configured to cause the unmanned
aerial vehicle to implement the one or more flight control settings.

US Pat. No. 9,756,250

FRAME MANIPULATION TO REDUCE ROLLING SHUTTER ARTIFACTS

GoPro, Inc., San Mateo, ...

1. A camera system, comprising:
an image sensor configured to capture a set of two or more frames over a frame set interval, wherein capturing a frame comprises
capturing light during a frame capture interval to produce frame data, wherein the captured light comprises light incident
upon the image sensor during the frame capture interval, and wherein the frame data is representative of the captured light;

a buffer memory for buffering the frame data associated with the set of frames;
a frame processor configured to receive the frame data associated with the set of frames from the buffer memory and combine the frame data associated with the set of frames to produce combined frame data; and

a hardware image signal processor that processes combined frame data during a frame processing interval to produce a processed
frame comprising processed frame data representative of the set of frames, the frame processing interval longer than the frame
capture interval and equal to or less than the frame set interval.
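
A minimal Python sketch of the frame-set combination above; averaging the buffered frames is one plausible combination, since the claim leaves the operation open, and the hardware ISP stage is omitted.

import numpy as np

def combine_frame_set(frames):
    # Combine the buffered frame data of one frame set into a single frame.
    return np.mean(np.stack([f.astype(float) for f in frames]), axis=0)

frame_set = [np.random.randint(0, 255, (8, 8), dtype=np.uint8) for _ in range(4)]
combined = combine_frame_set(frame_set)   # handed to the ISP for frame processing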