Apple Patent | Creation of optimal working, learning, and resting environments on electronic devices
Publication Number: 20210096646
Publication Date: April 1, 2021
Applicant: Apple
Abstract
Some implementations disclosed herein present a computer-generated reality (CGR) environment in which a user participates in an activity, identify a cognitive state of the user (e.g., working, learning, resting, etc.) based on data regarding the user’s body (e.g., facial expressions, hand movements, physiological data, etc.), and update the environment with a surrounding environment that promotes a cognitive state of the user for the activity.
Claims
1. A method of identifying a surrounding environment that promotes a cognitive state of a user for a particular user activity, the method comprising: at a device comprising a processor: presenting a computer-generated reality (CGR) environment in which a user participates in an activity, wherein the CGR environment presents multiple surrounding environments at different times during user participation in the activity; obtaining data regarding a body of the user via a sensor while the multiple surrounding environments are presented in the CGR environment; assessing the multiple surrounding environments based on the data obtained regarding the body of the user; and based on the assessing, identifying a surrounding environment of the multiple surrounding environments that promotes a cognitive state of the user for the activity.
2. The method of claim 1, further comprising changing parameters of the surrounding environment and identifying parameters that promote the cognitive state of the user for the particular activity.
3. The method of claim 2, wherein the parameters are selected from a group consisting of colors, light level, decor, sound or music, and volume.
4. The method of claim 1, wherein the multiple surrounding environments are assessed based on comparing the data obtained regarding the body of the user in response to at least a first surrounding environment and a second surrounding environment of the multiple surrounding environments.
5. The method of claim 1, wherein the activity is selected from a group consisting of working, learning, and resting.
6. The method of claim 1, wherein the sensor is selected from a group consisting of a heart rate sensor, a pulse oximeter, a blood pressure sensor, a temperature sensor, an electro-cardiogram (EKG) sensor, an electroencephalography (EEG) sensor, an electromyography (EMG) sensor, a functional near infrared spectroscopy signal (fNIRS) sensor, and a GSR sensor.
7. The method of claim 1, wherein the sensor is selected from a group consisting of an inward facing camera and a downward-facing camera.
8. The method of claim 7, wherein the data regarding the body of the user includes hand/body tracking, further comprising using the hand/body tracking to identify movements associated with the cognitive state.
9. The method of claim 1, wherein assessing the multiple surrounding environments includes assessing how well each of the surrounding environments promotes the cognitive state.
10. A method of changing a computer-generated reality (CGR) environment based on a detected cognitive state, the method comprising: at a device comprising a processor: presenting a computer-generated reality (CGR) environment configured based on a first activity; obtaining data regarding a body of the user via a sensor in the CGR environment; determining a cognitive state of the user based on the data obtained regarding the body of the user; and based on the cognitive state, presenting the CGR environment configured based on a second activity different from the first activity.
11. The method of claim 10, wherein the first activity is selected from a group consisting of working, learning, and resting.
12. The method of claim 10, wherein the sensor is selected from a group consisting of a heart rate sensor, a pulse oximeter, a blood pressure sensor, a temperature sensor, an electro-cardiogram (EKG) sensor, an electroencephalography (EEG) sensor, an electromyography (EMG) sensor, a functional near infrared spectroscopy signal (fNIRS) sensor, and a GSR sensor.
13. The method of claim 10, wherein the sensor is selected from a group consisting of an inward facing camera and a downward-facing camera.
14. The method of claim 13, wherein the data regarding the body of the user includes hand/body tracking, further comprising using the hand/body tracking to identify movements associated with the cognitive state.
15. The method of claim 10, wherein the CGR environment is automatically configured based on the second activity different from the first activity.
16. The method of claim 10, further comprising providing a recommendation to the user and receiving an input from the user.
17. The method of claim 10, further comprising: obtaining second data regarding the body of the user via the sensor in the CGR environment; determining a second cognitive state of the user based on the second data obtained regarding the body of the user; and based on the second cognitive state, presenting the CGR environment configured based on a third activity different from the second activity.
18. A non-transitory computer-readable storage medium storing program instructions that are computer-executable to perform operations comprising: presenting a computer-generated reality (CGR) environment in which a user participates in an activity, wherein the CGR environment presents multiple surrounding environments at different times during user participation in the activity; obtaining data regarding a body of the user via a sensor while the multiple surrounding environments are presented in the CGR environment; assessing the multiple surrounding environments based on the data obtained regarding the body of the user; and based on the assessing, identifying a surrounding environment of the multiple surrounding environments that promotes a cognitive state of the user for the activity.
19. The non-transitory computer-readable storage medium of claim 18, wherein the operations further comprise changing parameters of the surrounding environment and identifying parameters that promote the cognitive state of the user for the particular activity.
20. The non-transitory computer-readable storage medium of claim 18, wherein the surrounding environments are assessed based on comparing the data obtained regarding the body of the user in response to at least a first surrounding environment and a second surrounding environment of the multiple surrounding environments.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This Application claims the benefit of U.S. Provisional Application Ser. No. 62/907,114 filed Sep. 27, 2019, which is incorporated herein in its entirety.
TECHNICAL FIELD
[0002] The present disclosure generally relates to displaying content on electronic devices, and in particular, to systems, methods, and devices for promoting a cognitive state of a user for a particular user activity.
BACKGROUND
[0003] A user’s cognitive state while viewing content on an electronic device can have a significant effect on the user’s ability to work or learn. For example, staying focused and engaged while using a head-mounted device (HMD) in a computer-generated reality (CGR) environment may be required for meaningful experiences, such as learning a new skill, watching educational or entertaining content, or reading a document. Improved techniques for assessing and promoting the cognitive state of a user viewing and interacting with content may enhance the user’s productivity and ability to interact with that content. Consequently, by determining a user’s cognitive state, content creators and display systems may be able to provide user experiences that the user is more likely to enjoy, comprehend, and learn from.
SUMMARY
[0004] Attributes of content may result in particular types of physiological responses in users viewing the content. For example, users may have different preferences when it comes to optimal working, learning, and resting conditions. Some users may like to work in a quiet environment with low light, whereas other users may prefer to work with music and bright light. Clutter in a room or on a desk may increase creativity for some, but may be distracting for others. Similarly, the optimal resting environment is both individualized and contextual. In some cases, users may not even be aware that they need a break, or they may not realize which environment is most restorative for them. Creating optimal, individualized environments in which users can work, learn, and rest in a computer-generated reality (CGR) environment, and presenting those environments at the right time, can increase users’ productivity and learning efficiency while also providing a needed resting opportunity when such an environment is not readily available in their physical surroundings.
[0005] Implementations herein recognize that the physiological response (e.g., heart rate, breath rate, brain activity, user satisfaction, facial expressions, body language, etc.) of a user over time, or in response to a change in CGR environment, may be indicative of the user’s cognitive state or preference in CGR environment. For example, the pupils of users may experience dilation or constriction at different rates for different cognitive states. By presenting a user participating in an activity with multiple surrounding environments (e.g., living room, library, office, forest, coffee shop, ocean, waterfall, babbling brook, etc.), obtaining data regarding the user’s body, and assessing the obtained data for each surrounding environment, implementations may identify a particular surrounding environment that promotes a positive, or otherwise desirable, cognitive state of the user for the activity (e.g., identifying the optimal environment that puts the user in a most relaxed or most restful state).
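As a rough illustration of this assess-and-select loop (not code from the patent), the sketch below cycles through candidate surrounding environments, gathers body data while each is shown, and keeps the environment with the best score. The helpers present_environment() and read_physiological_sample(), and the relaxation heuristic, are hypothetical placeholders.

```python
# A minimal sketch of the assess-and-select loop, assuming hypothetical
# present_environment() and read_physiological_sample() callables supplied
# by the CGR renderer and the sensor pipeline.

import statistics
import time

CANDIDATE_ENVIRONMENTS = ["living room", "library", "office", "forest",
                          "coffee shop", "ocean", "waterfall", "babbling brook"]

def relaxation_score(samples):
    """Treats a lower heart rate and smaller pupil diameter as rough proxies
    for a more relaxed state (illustrative heuristic only)."""
    mean_hr = statistics.mean(s["heart_rate"] for s in samples)
    mean_pupil = statistics.mean(s["pupil_diameter_mm"] for s in samples)
    return -(mean_hr + 10.0 * mean_pupil)

def identify_best_environment(present_environment, read_physiological_sample,
                              dwell_seconds=60, sample_hz=1):
    scores = {}
    for env in CANDIDATE_ENVIRONMENTS:
        present_environment(env)                 # show this surrounding environment
        samples = []
        for _ in range(dwell_seconds * sample_hz):
            samples.append(read_physiological_sample())
            time.sleep(1.0 / sample_hz)
        scores[env] = relaxation_score(samples)  # assess the user's response
    return max(scores, key=scores.get)           # environment promoting the target state
```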
[0006] Moreover, implementations may identify a cognitive state of the user and suggest a change in user activity. For example, implementations may detect fatigue while the user is working, or may detect that the user has become relaxed while the user’s activity is resting or taking a break. Based on that detection, implementations may then show the user’s favorite calm forest when the user is fatigued, or show an environment that inspires working or learning when the user is rested.
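A minimal sketch of this kind of activity-change suggestion, with invented state labels and preference keys, might look like the following:

```python
# Hypothetical rule for suggesting an environment change based on a detected
# cognitive state; the labels and default environments are assumptions.

def suggest_environment(current_activity, cognitive_state, preferences):
    if current_activity in ("working", "learning") and cognitive_state == "fatigued":
        return preferences.get("resting_environment", "calm forest")
    if current_activity == "resting" and cognitive_state == "rested":
        return preferences.get("focus_environment", "library")
    return None  # no change suggested
```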
[0007] In some implementations, parameters of the surrounding environment, such as colors, light level, decor, sound, music, volume, etc., are changed in order to identify particular parameters that promote the cognitive state of the user for the activity. In some implementations, physiological data of the user is obtained via a heart rate sensor, pulse oximeter, blood pressure sensor, temperature sensor, electrocardiogram (EKG) sensor, electroencephalography (EEG) sensor, electromyography (EMG) sensor, functional near infrared spectroscopy signal (fNIRS) sensor, galvanic skin response (GSR) sensor, etc. Furthermore, inward facing cameras may record eye characteristics (e.g., eye tracking or pupillary response) or downward-facing cameras may record facial expressions. Image-based or sensor-based hand/body tracking may be used to identify movements associated with stress, relaxation, etc.
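One plausible way to identify such parameters is a simple grid sweep over environment settings, sketched below; apply_parameters() and measure_state_score() are hypothetical hooks into the renderer and the cognitive-state assessment, and the parameter grid itself is invented.

```python
# Sketch of a parameter sweep over surrounding-environment settings.

import itertools

PARAMETER_GRID = {
    "light_level": [0.3, 0.6, 0.9],
    "color_theme": ["warm", "cool", "neutral"],
    "ambient_sound": ["none", "rain", "instrumental"],
    "volume": [0.2, 0.5],
}

def tune_environment(apply_parameters, measure_state_score):
    best_params, best_score = None, float("-inf")
    keys = list(PARAMETER_GRID)
    for values in itertools.product(*(PARAMETER_GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        apply_parameters(params)          # change colors, light level, sound, volume
        score = measure_state_score()     # how well this promotes the target state
        if score > best_score:
            best_params, best_score = params, score
    return best_params
```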
[0008] In some implementations, a context of the content or activity is used to identify a present cognitive state of the user or a desired cognitive state of the user. For example, if the user activity consists of working, implementations may identify that the user desires a productive, calm, or relaxed cognitive state. In some implementations, information about the user’s preferences and past activities, and attributes of the content, may be used to identify or promote the cognitive state of the user. In some implementations, a message is displayed to the user to make the user aware of his or her cognitive state, suggest a break, or provide an option to change environments. In some implementations, an action may take place automatically (e.g., automatically changing environments based on the cognitive state of the user). In some implementations, the content creator collects or receives data, based on privacy settings, in order to optimize the environment of the user. For example, the content creator may save associated data, modify content, or update the environment of other similar users performing similar activities.
[0009] In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions that are computer-executable to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
[0011] FIG. 1 illustrates a device displaying content and obtaining physiological data from a user according to some implementations.
[0012] FIG. 2 illustrates a pupil of the user of FIG. 1 in which the diameter of the pupil varies.
[0013] FIG. 3 is a flowchart illustrating selecting an environment by rating different environments and changing parameters of the selected environment in accordance with some implementations.
[0014] FIGS. 4A, 4B, and 4C illustrate different environments in accordance with some implementations.
[0015] FIG. 5 is a flowchart illustrating promoting a user activity based on detecting a cognitive state of the user in accordance with some implementations.
[0016] FIGS. 6A, 6B, and 6C illustrate a productivity application and different environments in accordance with some implementations.
[0017] FIG. 7 is a block diagram illustrating device components of an exemplary device according to some implementations.
[0018] FIG. 8 is a block diagram of an example head-mounted device (HMD) in accordance with some implementations.
[0019] FIG. 9 is a flowchart illustrating an exemplary method of updating an environment, according to some implementations.
[0020] FIG. 10 is a flowchart illustrating an exemplary method of identifying a surrounding environment that promotes a cognitive state of a user for an activity.
[0021] In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
DESCRIPTION
[0022] Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
[0023] FIG. 1 illustrates a physical environment 5 including a device 10 (e.g., a hand-held device) with a display 15. The device 10 may include an integrated controller or may be in communication with a separate controller, one or both of which may be in the physical environment 5. A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, physical locations, and physical people (e.g., user 25). People can directly sense and/or interact with the physical environment 5, such as through sight, touch, hearing, taste, and smell.
[0024] In some implementations, the device 10 is configured to manage, coordinate, and present a computer-generated reality (CGR) environment to the user 25. In some implementations, the device 10 includes a suitable combination of software, firmware, or hardware. The device 10 is described in greater detail below with respect to FIG. 7 and FIG. 8. In some implementations, a controller of the device 10 is a computing device that is local or remote relative to the physical environment 5. In some implementations, the functionalities of the controller of the device 10 are provided by or combined with the device 10, for example, in the case of a hand-held device or head-mounted device (HMD) that functions as a stand-alone unit.
[0025] In one example, a controller of the device 10 is a local server located within the physical environment 5. In another example, the controller of the device 10 is a remote server located outside of the physical environment 5 (e.g., a cloud server, central server, etc.). In some implementations, the controller of the device 10 is communicatively coupled with the device 10 via one or more wired or wireless communication channels (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.).
[0026] According to some implementations, the device 10 presents a CGR environment to the user 25 while the user 25 is present within the physical environment 5. A CGR environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person’s physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person’s head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands).
[0027] A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.
[0028] Examples of CGR include virtual reality and mixed reality. A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person’s presence within the computer-generated environment, and/or through a simulation of a subset of the person’s physical movements within the computer-generated environment.
[0029] In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.
[0030] In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.
[0031] Examples of mixed realities include augmented reality and augmented virtuality. An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment 5, which are representations of the physical environment 5. The system composites the images or video with virtual objects and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment 5 by way of the images or video of the physical environment 5, and perceives the virtual objects superimposed over the physical environment 5. As used herein, a video of the physical environment 5 shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment 5 and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment 5, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment 5.
[0032] An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
[0033] An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
[0034] There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person’s eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person’s eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one implementation, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person’s retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
[0035] As shown in FIG. 1, in some implementations, the device 10 displays content 20 to a user 25. The content 20 may include a video, a presentation, other time-varying content, or content presented as part of a CGR environment. In some implementations, the device 10 is configured to obtain image data or physiological data (e.g., pupillary data, electrocardiography (EKG) data, etc.) about a body of the user 25 via one or more sensors. In some implementations, the inputs used by the device 10 to collect data about a body of the user 25 include cameras (e.g., to detect body language, perform hand-tracking, identify facial expressions, etc.), microphones (e.g., to identify tone of voice), and physiological sensors (e.g., to measure pupil size, gaze, electroencephalography (EEG), EKG, electromyography (EMG), functional near infrared spectroscopy signal (fNIRS), galvanic skin response (GSR), pulse, respiratory rate, etc.). Moreover, in some implementations, the device may also determine content or context of the content 20 (e.g., a location of the user, language, tone of voice, pictures and videos, etc.).
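For illustration, one multimodal sample of such data could be bundled into a small container type like the one below; the field names and units are assumptions, not taken from the patent.

```python
# Hypothetical container for one sample of data about the user's body,
# roughly mirroring the camera, microphone, and physiological inputs above.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BodySample:
    timestamp: float
    pupil_diameter_mm: Optional[float] = None   # from an inward-facing camera
    facial_expression: Optional[str] = None     # from a downward-facing camera
    voice_tone: Optional[str] = None            # from microphone analysis
    heart_rate: Optional[float] = None          # beats per minute
    respiratory_rate: Optional[float] = None    # breaths per minute
    gsr: Optional[float] = None                 # galvanic skin response
    eeg_band_power: dict = field(default_factory=dict)  # e.g., {"alpha": 0.4}
```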
[0036] In some implementations, the device 10 may associate the captured image data or physiological data with a cognitive state of the user 25. For example, the device 10 may analyze various factors in real time to determine the cognitive state of the user. In some implementations, the body language of the user 25 is modeled using user position data, combined with hand tracking technologies, and inferred or measured body posture. In some implementations, the device 10 utilizes a computational model of cognitive state assessment, including detecting the body language associated with a cognitive state, and any corresponding physiological signs of the cognitive state. For example, combining body language detection, such as yawning, with a decreased heart rate may provide a stronger indication that the user is tired or bored and not engaged in a working activity.
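A toy version of that kind of fusion, combining body-language cues with physiological readings, could look like the sketch below; the cue names, thresholds, and state labels are invented for illustration.

```python
# Rule-based fusion of body-language cues and physiological signals; all
# thresholds and labels here are assumptions, not values from the patent.

def infer_cognitive_state(cues, physiology, resting_heart_rate):
    tired_cues = {"yawning", "slouched posture", "eye rubbing"}
    body_says_tired = bool(tired_cues & set(cues))
    heart_rate_low = physiology["heart_rate"] < 0.9 * resting_heart_rate

    if body_says_tired and heart_rate_low:
        return "fatigued"   # two independent indicators agree
    if physiology["gsr"] > 1.5 * physiology.get("gsr_baseline", float("inf")):
        return "stressed"
    return "engaged"
```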
[0037] Users will have the option to opt in or out with respect to whether their user data is obtained or used, or to otherwise turn on and off any features that obtain or use user information. Moreover, each user will have the ability to access and otherwise find out anything that the system has collected or determined about him or her. User data is stored securely on the user’s device. For example, user data associated with the user’s body and/or cognitive state may be stored in a secure enclave on a user’s device, restricting access to the user data and restricting transmission of the user data to other devices.
[0038] While the device 10 is illustrated as a hand-held device, other implementations involve devices with which a user interacts without holding and devices worn by a user. In some implementations, as illustrated in FIG. 1, the device 10 is a handheld electronic device (e.g., a smartphone or a tablet). In some implementations, the device 10 is a laptop computer or a desktop computer. In some implementations, the device 10 has a touchpad and, in some implementations, the device 10 has a touch-sensitive display (also known as a “touch screen” or “touch screen display”). In some implementations, the device 10 is or is in communication with a wearable device such as a head mounted display (HMD), watch, or armband.
[0039] Moreover, while this example and other examples discussed herein illustrate a single device 10 in a physical environment 5, the techniques disclosed herein are applicable to multiple devices as well as to multiple real-world environments. For example, the functions of device 10 may be performed by multiple devices.
[0040] In some implementations, the device 10 includes an eye tracking system for detecting eye position and eye movements. For example, an eye tracking system may include one or more infrared (IR) light-emitting diodes (LEDs), an eye tracking camera (e.g., near-IR (NIR) camera), and an illumination source (e.g., an NIR light source) that emits light (e.g., NIR light) towards the eyes of the user 25. Moreover, the illumination source of the device 10 may emit NIR light to illuminate the eyes of the user 25 and the NIR camera may capture images of the eyes of the user 25. In some implementations, images captured by the eye tracking system may be analyzed to detect position and movements of the eyes of the user 25, or to detect other information about the eyes such as pupil dilation or pupil diameter. Moreover, the point of gaze estimated from the eye tracking images may enable gaze-based interaction with content.
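As a very simplified illustration of pupil measurement from an NIR eye image, the sketch below thresholds dark pixels and converts the blob area to a diameter; the threshold and pixels-per-millimeter scale are assumed values, and a real eye tracker would use far more robust ellipse fitting and glint rejection.

```python
# Crude pupil-diameter estimate from a grayscale NIR eye image, assuming the
# pupil is the darkest region in the crop (illustrative only).

import numpy as np

def estimate_pupil_diameter(eye_image, dark_threshold=40, pixels_per_mm=15.0):
    """eye_image: 2D uint8 array from the NIR eye camera (assumed input)."""
    dark = eye_image < dark_threshold        # candidate pupil pixels
    area_px = int(dark.sum())
    if area_px == 0:
        return None                          # pupil not found
    radius_px = np.sqrt(area_px / np.pi)     # treat the dark blob as a circle
    return 2.0 * radius_px / pixels_per_mm   # diameter in millimeters
```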
[0041] In some implementations, the device 10 has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some implementations, the user 25 interacts with the GUI through finger contacts and gestures on the touch-sensitive surface. In some implementations, the functions include image editing, drawing, presenting, word processing, website creating, disk authoring, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions may be included in a computer readable storage medium or other computer program product configured for execution by one or more processors.
[0042] In some implementations, the device 10 employs various physiological sensing, detection, or measurement systems. Detected physiological data may include, but is not limited to, EEG, EKG, EMG, fNIRS, blood pressure, GSR, or pupillary response. The device 10 may simultaneously detect multiple forms of physiological data in order to benefit from synchronous acquisition of that data. Moreover, in some implementations, the physiological data represents involuntary data, e.g., responses that are not under conscious control. For example, a pupillary response may represent an involuntary movement.
[0043] In some implementations, one or both eyes 45 of the user 25, including one or both pupils 50 of the user 25, present physiological data in the form of a pupillary response. The pupillary response of the user 25 results in variation of the size or diameter of the pupil 50 via the optic and oculomotor cranial nerves. For example, the pupillary response may include a constriction response (miosis), e.g., a narrowing of the pupil, or a dilation response (mydriasis), e.g., a widening of the pupil. In some implementations, the device 10 may detect patterns of physiological data representing a time-varying pupil diameter.
[0044] FIG. 2 illustrates a pupil 50 of the user 25 of FIG. 1 in which the diameter of the pupil 50 varies with time. As shown in FIG. 2, a present physiological state (e.g., present pupil diameter 55) may vary in contrast to a past physiological state (e.g., past pupil diameter 60). For example, the present physiological state may include a present pupil diameter and a past physiological state may include a past pupil diameter. The physiological data may represent a response pattern that dynamically varies over time.
[0045] The device 10 may use the physiological data to implement the techniques disclosed herein. For example, a user’s pupillary response to an environment change in the content 20 may be compared with the user’s prior responses to similar environment change events in the same or other content.
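One simple way to make such a comparison is to score the current dilation against the stored history of responses to similar events, for example as a z-score; this is a sketch of the general idea, not the patent's method.

```python
# Compare the current pupillary response with prior responses to similar
# environment-change events, using a plain z-score.

import statistics

def pupil_response_zscore(current_dilation_mm, prior_dilations_mm):
    if len(prior_dilations_mm) < 2:
        return 0.0                                   # not enough history yet
    mean = statistics.mean(prior_dilations_mm)
    stdev = statistics.stdev(prior_dilations_mm)
    if stdev == 0:
        return 0.0
    return (current_dilation_mm - mean) / stdev      # > 0: stronger response than usual
```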
[0046] FIG. 3, in accordance with some implementations, is a flowchart illustrating a method 300 for selecting an environment by rating different environments and changing parameters of the selected environment. In some implementations, the method 300 is performed by one or more devices (e.g., device 10). The method 300 can be performed at a mobile device, HMD, desktop, laptop, or server device. The method 300 can be performed on an HMD that has a screen for displaying 3D images or a screen for viewing stereoscopic images. In some implementations, the method 300 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 300 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory).
[0047] At block 310, the method 300 exposes the user to different environments (e.g., resting environments) by displaying content that includes each of the different environments on a display of a device. For example, as illustrated in FIGS. 4A-4C, a user may be exposed to a library environment 312, a living-room control environment 314, and a forest environment 316.
[0048] At block 320, the method 300 rates each of the environments based on the user’s response to that environment. In some implementations, physiological signals (e.g., eye tracking, pupil diameter, heart rate, respiratory rate, GSR, etc.), neural signals (e.g., EEG, fNIRS, etc.), and behavioral signals (e.g., facial gestures, user ratings and performance, etc.) are combined by the method 300 to rate each environment. For example, in resting mode, heart and respiratory rates may slow down, heart rate variability (HRV) may increase, and pupil diameter and brain activity as measured by EEG or fNIRS may resemble default (baseline) activity. Furthermore, in learning and working mode, heart and respiratory rates may increase to normal levels, HRV may decrease, pupil diameter and brain activity may show maximum task involvement, and the user may demonstrate enhanced levels of focus and concentration.
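A weighted combination is one plausible way to fold these physiological, neural, and behavioral signals into a single rating per environment; the feature names and weights below are invented for illustration.

```python
# Combine normalized signals into one rating per environment (illustrative
# weights; higher feature values are assumed to mean "more rest-like").

def rate_environment_for_rest(signals):
    weights = {
        "low_heart_rate": 0.25,
        "low_respiratory_rate": 0.15,
        "high_hrv": 0.25,
        "default_mode_eeg": 0.20,       # EEG/fNIRS close to resting baseline
        "calm_facial_expression": 0.15,
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

# Example: rate two candidate environments and keep the higher-scoring one.
ratings = {
    "library": rate_environment_for_rest({"low_heart_rate": 0.7, "high_hrv": 0.6,
                                          "default_mode_eeg": 0.5}),
    "forest": rate_environment_for_rest({"low_heart_rate": 0.9, "high_hrv": 0.8,
                                         "default_mode_eeg": 0.7}),
}
best_environment = max(ratings, key=ratings.get)
```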
……
……
……