Patent: Information processing device, information processing method, and recording medium

Publication Number: 20210019911

Publication Date: 20210121

Applicant: Sony

Abstract

[Problem] To provide an information processing device, an information processing method, and a recording medium that can present a notification related to a real object more intuitively. [Solution] An information processing device includes a control unit configured to perform: processing of determining whether a real object associated with notification content is present in the same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied; and processing of outputting the notification content to a position related to the real object depending on whether the real object is present.

Claims

  1. An information processing device comprising: a control unit configured to perform processing of determining whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied, and processing of outputting the notification content to a position related to the real object depending on whether the real object is present.

  2. The information processing device according to claim 1, wherein the control unit outputs the notification content to the position related to the real object in a case in which the real object is present, and outputs the notification content to the same space as the person to be notified together with information indicating the real object in a case in which the real object is not present.

  3. The information processing device according to claim 2, wherein the position related to the real object is at least one of positions on the real object or in periphery of the real object.

  4. The information processing device according to claim 2, wherein the notification condition includes at least notification time, a notification place, or the person to be notified.

  5. The information processing device according to claim 4, wherein the notification time is predetermined time, a timer setting, or a predetermined timing.

  6. The information processing device according to claim 4, wherein the control unit performs control for displaying the notification content on the real object at the notification place in a case in which a condition for the notification time is satisfied.

  7. The information processing device according to claim 4, wherein the control unit recognizes the person to be notified based on sensing data acquired from a space, and, at the notification time, outputs the notification content on the real object in a case in which the real object is present, and outputs the notification content in vicinity of the person to be notified together with the information indicating the real object in a case in which the real object is not present.

  8. The information processing device according to claim 4, wherein attribute information is associated with the notification content, and the control unit controls output of the notification content in accordance with the attribute information.

  9. The information processing device according to claim 8, wherein the attribute information includes importance, and the control unit changes an output mode at the time of outputting the notification content in accordance with the importance.

  10. The information processing device according to claim 8, wherein the attribute information includes a security condition, and the control unit performs control for outputting the notification content in a case in which the notification condition and the security condition are satisfied.

  11. The information processing device according to claim 8, wherein the attribute information includes a repetition setting, and the control unit performs processing of repeatedly outputting the notification content in accordance with the repetition setting.

  12. The information processing device according to claim 4, wherein the control unit detects a first input operation of inputting the notification content performed by an input person based on sensing data acquired by an environment sensor disposed in a space.

  13. The information processing device according to claim 12, wherein the first input operation is an input operation using an operation body, and the control unit performs control for detecting a locus of the operation body based on the sensing data, and projecting the detected locus.

  14. The information processing device according to claim 13, wherein the control unit performs processing of: detecting a second input operation of registering the notification content performed by the input person based on the sensing data; and storing the projected locus in a storage unit as notification information at the time when the second input operation is detected.

  15. The information processing device according to claim 14, wherein the control unit recognizes the real object related to the notification content based on the first input operation, and stores information indicating the recognized real object in the storage unit in association with the notification content.

  16. The information processing device according to claim 14, wherein the control unit performs control for displaying the notification information stored in the storage unit in a predetermined region in a space irrespective of whether the notification condition is satisfied.

  17. The information processing device according to claim 16, wherein the control unit controls a display mode of the notification information in the predetermined region based on a parameter added to the notification information.

  18. The information processing device according to claim 16, wherein the control unit groups the notification information in accordance with the notification time, the person to be notified, or the notification place to be displayed in the predetermined region.

  19. An information processing method comprising: determining, by a processor, whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied; and outputting, by the processor, the notification content to a position related to the real object depending on whether the real object is present.

  20. A recording medium in which a computer program is recorded, the computer program for causing a computer to function as a control unit configured to perform: processing of determining whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied; and processing of outputting the notification content to a position related to the real object depending on whether the real object is present.

Description

FIELD

[0001] The present disclosure relates to an information processing device, an information processing method, and a recording medium.

BACKGROUND

[0002] In the related art, as methods of managing information such as tasks and messages, paper slips and whiteboards have been used as analog means, while terminal devices such as the now-widespread smartphones and smart glasses have been used as digital means.

[0003] In recent years, techniques for implementing various user interfaces and new forms of user interaction have been developed.

[0004] For example, Patent Literature 1 below discloses a technique of displaying a message near a user’s feet at the moment when the user comes home and switches on a lighting fixture, by linking a projector attached to the ceiling of a room with the switch of the lighting fixture.

CITATION LIST

Patent Literature

[0005] Patent Literature 1: JP 2014-21428 A

SUMMARY

Technical Problem

[0006] However, in Patent Literature 1 described above, a timing of presenting information is limited to the time when the switch of the lighting fixture is turned on, and an output place is also limited to a region under the lighting fixture.

[0007] A slip or a whiteboard is fixed to a certain place, so the user cannot carry the information and check it at the required timing. With a terminal device the information becomes portable, but the user cannot notice a notification of a task or a message at a timing when the terminal device is not being carried.

[0008] A real object may be needed to finish a task, but in the related art, the presence or absence of the real object at the notification timing of the task has not been sufficiently considered.

[0009] Thus, the present disclosure provides an information processing device, an information processing method, and a recording medium that can present a notification related to the real object more intuitively.

Solution to Problem

[0010] According to the present disclosure, an information processing device is provided that includes: a control unit configured to perform processing of determining whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied, and processing of outputting the notification content to a position related to the real object depending on whether the real object is present.

[0011] According to the present disclosure, an information processing method is provided that includes: determining, by a processor, whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied; and outputting, by the processor, the notification content to a position related to the real object depending on whether the real object is present.

[0012] According to the present disclosure, a recording medium in which a computer program is recorded is provided, the computer program for causing a computer to function as a control unit configured to perform: processing of determining whether a real object associated with notification content is present in a same space as a person to be notified at the time when a notification condition associated with the notification content is satisfied; and processing of outputting the notification content to a position related to the real object depending on whether the real object is present.

Advantageous Effects of Invention

[0013] As described above, according to the present disclosure, a notification related to a real object can be presented more intuitively.

[0014] The effect described above is not necessarily limited, and any one of effects described in the present description or another effect that may be grasped from the present description may be exhibited in addition to the effect described above, or in place of the effect described above.

BRIEF DESCRIPTION OF DRAWINGS

[0015] FIG. 1 is a diagram for explaining an outline of an information processing system according to one embodiment of the present disclosure.

[0016] FIG. 2 is a block diagram illustrating an example of a configuration of a system according to the present embodiment.

[0017] FIG. 3 is a diagram for explaining an input of a task related to a real object using a digital pen according to the present embodiment.

[0018] FIG. 4 is a flowchart illustrating an example of a procedure of registration processing of the system according to the present embodiment.

[0019] FIG. 5 is a diagram illustrating a screen example of a registration UI according to the present embodiment.

[0020] FIG. 6 is a flowchart illustrating an example of registration processing of a notification condition included in additional information of a task according to the present embodiment.

[0021] FIG. 7 is a flowchart illustrating an example of registration processing of attribute information included in the additional information of the task according to the present embodiment.

[0022] FIG. 8 is a diagram illustrating an example of determination of importance based on handwritten content according to the present embodiment.

[0023] FIG. 9 is a flowchart illustrating an example of a procedure of notification processing according to the present embodiment.

[0024] FIG. 10 is a diagram illustrating an example of output representation of task importance according to the present embodiment.

[0025] FIG. 11 is a diagram illustrating an example of task display in a case in which the real object is not present nearby according to the present embodiment.

[0026] FIG. 12 is a diagram illustrating an example of pool display according to the present embodiment.

[0027] FIG. 13 is a diagram illustrating another example of pool display according to the present embodiment.

[0028] FIG. 14 is an explanatory diagram illustrating a hardware configuration of an information processing device according to the present disclosure.

DESCRIPTION OF EMBODIMENTS

[0029] The following describes a preferred embodiment of the present disclosure in detail with reference to the attached drawings. In the present description and the drawings, constituent elements having substantially the same functional configuration are denoted by the same reference numeral, and redundant description will not be repeated.

[0030] The description will be made in the following order.
[0031] 1. Outline of information processing system according to one embodiment of present disclosure
[0032] 2. Configuration example
[0033] 2-1. Input device 200
[0034] 2-2. Sensor device 300
[0035] 2-3. Output device 400
[0036] 2-4. Information processing device 100
[0037] 3. Operation processing
[0038] 3-1. Registration processing
[0039] 3-2. Notification processing
[0040] 4. Complement
[0041] 4-1. Pool display
[0042] 4-2. Application example
[0043] 4-3. Effect
[0044] 5. Hardware configuration
[0045] 6. Conclusion

1. OUTLINE OF INFORMATION PROCESSING SYSTEM ACCORDING TO ONE EMBODIMENT OF PRESENT DISCLOSURE

[0046] FIG. 1 is a diagram for explaining an outline of an information processing system according to one embodiment of the present disclosure. The information processing system according to the present embodiment includes an information processing device 100 (not illustrated in FIG. 1), a sensor device 300 (a camera is illustrated as an example in FIG. 1), and an output device 400 (a projector 410 is illustrated as an example in FIG. 1).

[0047] The sensor device 300 is a device that senses various kinds of information. For example, the sensor device 300 includes a camera, a depth sensor, and a microphone, and senses information related to a user and the space in which the user is present. For example, the sensor device 300 senses the position, posture, motion, and line of sight of the user, the shape of the room, and the arrangement of real objects such as furniture, household electrical appliances, trash cans, interior articles, and daily necessities. The number of sensor devices 300 may be one or plural.

[0048] The output device 400 is a device that outputs various kinds of information from the information processing device 100, and is assumed to be the projector 410, for example. The projector 410 can project information on any place (that is, any region) in the space sensed by the sensor device 300, such as a wall, a floor, a table, or a piece of furniture, as a projection place (that is, a projection surface or a projection region). The projector 410 may be implemented by a plurality of projectors, or by what is called a moving projector, so that projection can be performed on any place in the space. The number of output devices 400 may be one or plural.

BACKGROUND

[0049] As described above, various user interaction techniques have been developed in the related art. However, in the technique disclosed in Patent Literature 1 described above, a timing of presenting information is limited to the time when a switch of a lighting fixture is turned on, and an output place is also limited to a region under the lighting fixture.

[0050] A slip or a whiteboard is fixed to a certain place, and the user cannot carry the information to check it at the required timing. If a terminal device is used, the information becomes portable, but the user cannot notice a notification of a task or a message at a timing when the terminal device is not being carried.

[0051] A task or a message is often related to a real object, as in “Taking out garbage at 9:00 a.m.” or “Putting a letter in the postbox”, but in the related art, the input and output of notification information related to the real object have not been sufficiently considered.

[0052] Thus, the present disclosure provides a mechanism that can present a notification related to a real object in the space more intuitively.

[0053] For example, as illustrated in FIG. 1, when the user is to be notified of notification information associated with a real object 10 at a predetermined notification timing and the real object 10 is present around the user, notification information 20 is projected onto the real object 10 by the projector 410, which enables the notification related to the real object to be made more intuitively.

2. CONFIGURATION EXAMPLE

[0054] FIG. 2 is a block diagram illustrating an example of a configuration of a system 1 according to the present embodiment. As illustrated in FIG. 2, the system 1 includes the information processing device 100, an input device 200, the sensor device 300, and the output device 400.

[0055] 2-1. Input Device 200

[0056] The input device 200 includes a digital pen 210, a touch panel 220, and a keyboard 230.

[0057] The digital pen 210 is an electronic operation body on which a light emitting unit such as an infrared (IR) light emitting diode (LED) is mounted. The light emitting unit emits light when a button, a switch, or the like disposed on the digital pen 210 is operated, when the pen point is pressed against a contact surface, or when the pen is shaken, for example. The digital pen 210 may also transmit to the information processing device 100 a predetermined command based on a user operation of the button or the switch disposed on the digital pen 210, a movement of the pen, or the like.

[0058] The touch panel 220 and the keyboard 230 are disposed on a device such as a smartphone, a tablet terminal, a smart watch, smart eyeglasses, or a PC, and detect a user operation, which is transmitted to the information processing device 100. The touch panel 220 and the keyboard 230 may also be disposed on a wall, a floor, a table, a door, and the like in a house.

[0059] The user can input a task related to any real object in the space to the information processing device 100 using the input device 200. FIG. 3 is a diagram for explaining the input of a task related to a real object using the digital pen 210. As illustrated in FIG. 3, the user uses the digital pen 210 to write on the real object 10 that is planned to be used for finishing the task. In this case, the information processing device 100 detects the luminous point of the light emitting unit disposed at the pen point with the sensor device 300 disposed in the real space to recognize the handwriting, and performs visual feedback control for projecting a handwriting image 21 by the projector 410. The information processing device 100 recognizes the real object 10 with the sensor device 300, and registers the handwriting image 21 as a task. In this way, the user can freely write on every real object in the real space, and can intuitively register a task related to the real object. In a case in which the real object related to the task is not present in the vicinity of the user, the name of the real object and the like may be written with the digital pen 210 and registered as a task in the information processing device 100. In a case of a task that should be notified in association with the user himself/herself instead of a real object, the task can be registered in the information processing device 100 as a task associated with the user himself/herself by writing the content of the task on a wall, a floor, or the like at the user's present place.

[0060] As an input unit, a fingertip, a voice, and a gesture may be used in addition to the digital pen 210, or a device such as a smartphone, a tablet terminal, a smart watch, smart eyeglasses, and a PC may be used. Alternatively, the input device 200 may acquire medium information such as an image or a moving image to be input to the information processing device 100.

[0061] The input device 200 may also include any other constituent element with which the user can input information, beyond the constituent elements described above. For example, the input device 200 may include a mouse, a button, a switch, a lever, and the like.

[0062] 2-2. Sensor Device 300

[0063] The sensor device 300 includes a human sensor 310, an acceleration sensor 320, a depth sensor 330, a microphone 340, a camera 350, a gyro sensor 360, and a geomagnetic sensor 370.

[0064] The human sensor 310 is a device that detects the presence/absence of a person. The human sensor 310 is, for example, an optical sensor using infrared rays or the like. The acceleration sensor 320, the gyro sensor 360, and the geomagnetic sensor 370 are motion sensors that detect the motion of a person, and may be disposed on a terminal device such as a wearable device or a smartphone owned by the user. The depth sensor 330 is a device that acquires depth information, such as an infrared range finding device, an ultrasonic range finding device, Laser Imaging Detection and Ranging (LiDAR), or a stereo camera. The microphone 340 is a device that collects surrounding sound and outputs voice data obtained by converting the sound into a digital signal via an amplifier and an analog-to-digital converter (ADC). The microphone 340 may be an array microphone. The camera 350 is an imaging device, such as an RGB camera, that includes a lens system, a driving system, and an imaging element, and captures an image (a still image or a moving image). There may be a plurality of cameras 350, and the camera 350 may be of a movable type that can photograph any direction in the space.

[0065] The sensor device 300 senses information based on control performed by the information processing device 100. For example, the information processing device 100 can control a zoom factor and an imaging direction of the camera 350.

[0066] The sensor device 300 may also include any other constituent element capable of sensing, beyond the constituent elements described above. For example, the sensor device 300 may include various sensors such as an illuminance sensor, a force sensor, an ultrasonic sensor, an atmospheric pressure sensor, a gas sensor (CO2), and a thermal camera.

[0067] 2-3. Output Device 400

[0068] The output device 400 includes the projector 410, a display 420, a speaker 430, and a unidirectional speaker 440. The system 1 may include, as the output device 400, one of these components or a combination of a plurality of these components, or may include a plurality of devices of the same type.

[0069] The projector 410 is a projection device that projects an image on any place in the space. The projector 410 may be a fixed wide-angle projector, for example, or may be what is called a moving projector including a movable part, such as a pan/tilt driving type, that can change the projecting direction. The display 420 may be disposed on a TV, a tablet terminal, a smartphone, a PC, and the like, for example. The TV is a device that receives radio waves of television broadcasting and outputs images and voices. The tablet terminal is typically a mobile apparatus that has a larger screen than a smartphone, can perform wireless communication, and can output images, voices, vibration, and the like. The smartphone is typically a mobile apparatus that has a smaller screen than the tablet, can perform wireless communication, and can output images, voices, vibration, and the like. The PC may be a fixed desktop PC or a mobile notebook PC, and can output images, voices, and the like. The speaker 430 converts voice data into an analog signal via a digital-to-analog converter (DAC) and an amplifier, and outputs (reproduces) it. The unidirectional speaker 440 is a speaker that can form directivity in a single direction.

[0070] The output device 400 outputs information based on control performed by the information processing device 100. The information processing device 100 can also control an output method in addition to the content of the information to be output. For example, the information processing device 100 can control the projecting direction of the projector 410, or control directivity of the unidirectional speaker 440.

[0071] The output device 400 may also include any other constituent element capable of output, beyond the constituent elements described above. For example, the output device 400 may include a wearable device such as a head mounted display (HMD), augmented reality (AR) glasses, or a clock-type device. The output device 400 may also include a lighting device, an air conditioning device, a music reproducing device, a household electrical appliance, and the like.

[0072] 2-4. Information Processing Device 100

[0073] The information processing device 100 includes an interface (I/F) unit 110, a handwriting recognition unit 120, a gesture detection unit 130, a voice recognition unit 131, a map management unit 140, a user position specification unit 150, a user recognition unit 160, a control unit 170, a timer 180, and a storage unit 190.

[0074] I/F Unit 110

[0075] The I/F unit 110 is a connection device for connecting the information processing device 100 to another appliance. For example, the I/F unit 110 is implemented by a Universal Serial Bus (USB) connector and the like, and inputs/outputs information to/from each of the constituent elements, that is, the input device 200, the sensor device 300, and the output device 400. For example, the I/F unit 110 is connected to the input device 200, the sensor device 300, and the output device 400 via a wireless/wired local area network (LAN), Digital Living Network Alliance (DLNA) (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), or other private lines. The I/F unit 110 may be connected to another appliance via the Internet or a home network.

[0076] Handwriting Recognition Unit 120

[0077] The handwriting recognition unit 120 has a function of recognizing handwriting of the user that is written by an operation body such as the digital pen 210 or a finger in the real space based on the information sensed by the sensor device 300. Specifically, the handwriting recognition unit 120 analyzes a taken image (a taken image obtained by imaging a handwriting image projected by the projector 410) acquired from the camera 350, performs character recognition, and performs morphological analysis, semantic analysis, and the like on an extracted character string. In the character recognition, an action at the time of writing (an order of making strokes in writing, a writing start position, a writing end position, and the like) may be referred to in addition to the handwriting image. The handwriting recognition unit 120 can also identify a writer by pattern recognition and the like using machine learning. The handwriting recognition unit 120 outputs a recognition result to the control unit 170.

[0078] Gesture Detection Unit 130

[0079] The gesture detection unit 130 has a function of detecting a gesture of the user based on the information sensed by the sensor device 300. Specifically, the gesture detection unit 130 detects the gesture such as a posture of the user and a motion of a head, a hand, or an arm using the acceleration sensor 320, the depth sensor 330, the camera 350, the gyro sensor 360, and the geomagnetic sensor 370 included in the sensor device 300. The gesture detection unit 130 outputs a detection result to the control unit 170.

[0080] Voice Recognition Unit 131

[0081] The voice recognition unit 131 has a function of recognizing a voice of the user based on the information sensed by the sensor device 300. Specifically, the voice recognition unit 131 extracts an uttered voice of the user from voice information collected by the microphone 340 included in the sensor device 300, performs voice recognition (converts the voice into text), and performs morphological analysis, semantic analysis, and the like on an acquired character string. The voice recognition unit 131 outputs a recognition result to the control unit 170.

[0082] Map Management Unit 140

[0083] The map management unit 140 has a function of generating a map of the space and performing what is called space recognition, such as recognition of real objects, based on the information sensed by the sensor device 300. Specifically, the map management unit 140 acquires information indicating the shapes of objects forming the space, such as a wall surface, a ceiling, a floor, a door, furniture, and daily commodities (information indicating the shape of the space), based on the depth information obtained by infrared range finding, ultrasonic range finding, or a stereo camera, for example. The information indicating the shape of the space may be two-dimensional information, or may be three-dimensional information such as a point cloud.

[0084] The map management unit 140 also acquires three-dimensional position information of the real object present in the space based on the infrared range finding, the ultrasonic range finding, the taken image, and the depth information.

[0085] The sensor device 300 is disposed in every place in a living space, for example. The map management unit 140 can recognize every room in the living space such as an entrance, a corridor, a kitchen, a living room, a dining room, a study, a bedroom, a bathroom, a washroom, and a veranda, and can map the arrangement of the real objects in each room.
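
As a concrete (and much simplified) illustration, the arrangement that the map management unit 140 maintains can be thought of as a per-room mapping from object names to positions. The following Python sketch is an assumption for illustration only; the names, coordinates, and the `locate` helper do not appear in the patent.

```python
from typing import Dict, Optional, Tuple

Position = Tuple[float, float, float]  # meters in a room coordinate system

# Illustrative arrangement map; real entries would be derived from the depth
# information (e.g. point clouds) acquired by the map management unit 140.
ROOM_MAP: Dict[str, Dict[str, Position]] = {
    "kitchen": {"trash can": (1.2, 0.0, 3.4), "refrigerator": (0.1, 0.0, 2.0)},
    "living room": {"table": (2.5, 0.0, 1.0), "sofa": (3.0, 0.0, 2.2)},
}

def locate(object_name: str, room: str) -> Optional[Position]:
    """Look up a real object's position in a given room, if it is mapped."""
    return ROOM_MAP.get(room, {}).get(object_name)
```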

[0086] User Position Specification Unit 150

[0087] The user position specification unit 150 has a function of specifying a position of the user in a three-dimensional space recognized by the map management unit 140. Specifically, the user position specification unit 150 recognizes (estimates) a position in the three-dimensional space recognized by the map management unit 140 corresponding to the position of the user recognized by the user recognition unit 160. The user position specification unit 150 outputs information indicating the specified position of the user in the space to the control unit 170.

[0088] User Recognition Unit 160

[0089] The user recognition unit 160 has a function of recognizing the user in the space based on the information sensed by the sensor device 300, and acquiring information about the user. For example, based on information acquired by a thermal camera, an RGB camera, a stereo camera, an infrared sensor, an ultrasonic sensor, or the like included in the sensor device 300, the user recognition unit 160 recognizes the presence/absence, position, sight line information (including the position of the viewpoint and the sight line direction), and posture of a person, and performs personal identification by face recognition and the like. The user recognition unit 160 outputs the acquired user information to the control unit 170.

[0090] The various kinds of recognition and detection described above are performed regularly, continuously, or intermittently, and a recognition result and a detection result are stored in the storage unit 190 by the control unit 170.

[0091] Control Unit 170

[0092] The control unit 170 functions as an arithmetic processing unit and a control device, and controls the entire operations in the information processing device 100 in accordance with various computer programs. The control unit 170 may be implemented by an electronic circuit such as a central processing unit (CPU) and a microprocessor, for example. The control unit 170 may also include a read only memory (ROM) that stores a computer program to be used, an arithmetic parameter, and the like, and a random access memory (RAM) that temporarily stores a parameter and the like that vary as appropriate.

[0093] The control unit 170 also includes a display data generation unit 171 and a task registration unit 173.

[0094] The display data generation unit 171 generates display data to be output by the output device 400. Specifically, first, the display data generation unit 171 recognizes the locus of a line drawn by the digital pen 210, a fingertip, or the like (that is, the movement positions of the digital pen 210 or the fingertip) based on sensing data acquired from the sensor device 300. For example, the display data generation unit 171 analyzes the movement locus of the luminous point of the light emitting unit disposed at the pen point of the digital pen 210, or of the user's fingertip, based on the taken image acquired by the camera 350, the depth information, and the like. The display data generation unit 171 then generates a handwriting image that displays the recognized locus (this image serves as feedback of the user's handwriting input, so the image displaying the locus is referred to as the “handwriting image” herein).
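
To make the locus recognition concrete, the following is a minimal Python sketch of how the luminous point of the pen tip might be tracked across infrared camera frames and accumulated into a polyline. It is an assumption for illustration: `brightest_ir_point` stands in for real image processing, and frames are simplified to 2-D brightness arrays.

```python
from typing import List, Optional, Tuple

Point = Tuple[int, int]  # pixel coordinates in the camera image

def brightest_ir_point(ir_frame: List[List[int]],
                       threshold: int = 200) -> Optional[Point]:
    """Stand-in for luminous-point detection: return the brightest pixel
    of an IR frame if it exceeds a threshold, else None (pen up)."""
    best, best_value = None, threshold
    for y, row in enumerate(ir_frame):
        for x, value in enumerate(row):
            if value > best_value:
                best, best_value = (x, y), value
    return best

def accumulate_locus(frames: List[List[List[int]]]) -> List[Point]:
    """Collect the luminous point across successive frames into a polyline;
    rendering this polyline yields the handwriting image to be projected
    back onto the same positions as visual feedback."""
    locus: List[Point] = []
    for frame in frames:
        point = brightest_ir_point(frame)
        if point is not None:
            locus.append(point)
    return locus
```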

[0095] The display data generation unit 171 also generates a registration user interface (UI) at the time of task registration. The display data generation unit 171 further generates a notification image for notifying the task registered in the storage unit 190.

[0096] The task registration unit 173 performs processing of storing (registering) a task (an example of the notification information) in the storage unit 190 based on the information input from the sensor device 300 and the input device 200. For example, the task registration unit 173 stores, in a notification list (which may also be referred to as a task list) of the storage unit 190 together with additional information, the notification content: for example, the character string recognized by the handwriting recognition unit 120, or the handwriting image (a character string, a chart, an illustration, and the like) taken by the camera 350 or generated by the display data generation unit 171. The additional information includes a notification condition (notification time, a user to be notified, a notification place, and a real object used for finishing the task) and attribute information (importance, security information, and a repetition setting). The control unit 170 extracts the additional information from the written character string, information input to the registration UI displayed at the time of task registration, a gesture or voice of the user, and the like. The task registration unit 173 may also register a user voice as a task.
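
The structure of a registered task can be illustrated with a small Python sketch. The field names below are assumptions derived from the additional information just described (notification condition and attribute information), not an actual storage format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class NotificationCondition:
    """Notification condition of a task (hypothetical field names)."""
    notify_time: Optional[datetime] = None  # predetermined time, timer, or timing
    person_to_notify: Optional[str] = None  # None: the registrant himself/herself
    place: Optional[str] = None             # notification place, e.g. "kitchen"
    real_object: Optional[str] = None       # object used for finishing the task

@dataclass
class AttributeInfo:
    """Attribute information of a task (hypothetical field names)."""
    importance: int = 0             # changes the output mode at notification time
    security: Optional[str] = None  # extra condition to satisfy before output
    repeat: bool = False            # repetition (snooze) setting

@dataclass
class Task:
    """One entry of the notification list stored in the storage unit 190."""
    content: str                      # recognized character string
    handwriting_image: Optional[str]  # reference to the registered locus image
    condition: NotificationCondition = field(default_factory=NotificationCondition)
    attributes: AttributeInfo = field(default_factory=AttributeInfo)
```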

[0097] The control unit 170 also controls a display output and a voice output from the output device 400.

[0098] Specifically, the control unit 170 according to the present embodiment determines whether the notification condition for the task registered in the storage unit 190 is satisfied, and in a case in which the notification condition is satisfied, performs control to output corresponding notification content from the output device 400. For example, the control unit 170 determines whether the registered notification condition is satisfied based on timer information output from the timer 180, the position of the user in the space specified by the user position specification unit 150, a result of identifying the user obtained by the user recognition unit 160, and the like.

[0099] In a case in which real object information is registered in the task to be notified, the control unit 170 determines whether the predetermined real object is present in the same space (for example, in the same room) as the user who is the person to be notified. In a case in which the real object is present, the control unit 170 performs control for displaying (for example, projecting) the character string, handwriting image, or the like registered as the task at a position related to the real object, that is, on the real object or around the real object. In a case in which the real object itself has an output function (a display unit, a voice output unit, or the like), the control unit 170 may cause the real object to display the character string, handwriting image, or the like registered as the task, or to reproduce a voice registered as the task, a predetermined notification sound, or the like. In a case in which the real object is at a blind spot of the user (the blind spot being recognized based on the orientation of the head of the user (person to be notified) or the sight line information), the control unit 170 may cause the real object or a device in its vicinity to emit a sound, may cause a lighting fixture on the real object or a device in its vicinity to blink, or may cause the projector 410 to project, in the sight line direction of the user, a display image for guiding the user to the real object. On the other hand, in a case in which the real object is not present, the control unit 170 performs control for displaying the character string, handwriting image, or the like registered as the task in one of the output regions in the same space as the user (for example, a projection region such as a wall or a table positioned in the sight line direction of the user), together with information indicating the real object (a name, an image, or the like of the real object). The output regions in the same space as the user (person to be notified) include portable devices owned by the user, such as a smartphone, a cellular telephone terminal, a smart watch, smart eyeglasses, and an HMD.
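
The branching described in this paragraph can be summarized in a short Python sketch. It is illustrative only: `projector` is assumed to expose a `project(image, position, label=None)` call, the room is simplified to a name-to-position dictionary like the room-map sketch above, and the guidance behaviors for blind spots (sound, blinking, guide projection) are omitted.

```python
from typing import Dict, Optional, Tuple

Position = Tuple[float, float, float]

def notify(task_image: str,
           real_object: Optional[str],
           room_objects: Dict[str, Position],
           user_position: Position,
           projector) -> None:
    """Output branching sketch: on the real object if present, else near
    the person to be notified together with the object's name."""
    position = room_objects.get(real_object) if real_object else None
    if position is not None:
        # Real object present: display on the object or in its periphery.
        projector.project(task_image, position=position)
    else:
        # Real object absent: display in an output region near the person
        # to be notified, together with information indicating the object.
        projector.project(task_image, position=user_position,
                          label=real_object)
```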

[0100] At the time of notifying the task, the control unit 170 may process the registered character string, the handwriting image, or the like to be displayed in accordance with the registered attribute information.

[0101] Timer 180

[0102] The timer 180 measures time, and outputs timer information to the control unit 170.

[0103] Storage Unit 190

[0104] The storage unit 190 is implemented by a read only memory (ROM) that stores a computer program, an arithmetic parameter, and the like used for processing performed by the control unit 170, and a random access memory (RAM) that temporarily stores a parameter and the like that vary as appropriate.

[0105] The task (notification information) is stored in the storage unit 190 by the task registration unit 173.

[0106] The configurations of the system 1 according to the present embodiment have been specifically described above. The configuration of the system 1 illustrated in FIG. 2 is merely an example, and the present embodiment is not limited thereto. For example, although not illustrated in FIG. 2, another device may be connected to the information processing device 100.

[0107] The information processing device 100 may be constituted of a plurality of devices. The information processing device 100 may also be implemented by a smart home terminal, a PC, a home server, an edge server, an intermediate server, or a cloud server.

3. OPERATION PROCESSING

[0108] Subsequently, the following specifically describes a procedure of operation processing of the system 1 according to the present embodiment with reference to the drawings.

[0109] 3-1. Registration Processing

[0110] First, with reference to FIG. 4, the following describes an example of a procedure of registration processing of the system 1 according to the present embodiment. FIG. 4 is a flowchart illustrating an example of the procedure of registration processing of the system 1 according to the present embodiment.

[0111] As illustrated in FIG. 4, first, the information processing device 100 detects an input operation (first input operation) on an environmental object performed by the user using the digital pen 210 or a fingertip, based on the information acquired from the input device 200 or the sensor device 300 (Step S103). An environmental object is an object constituting the environment, such as a wall, a floor, a window, a door, a bed, a desk, a table, a chair, a refrigerator, a trash can, or a plastic bottle, and includes the “real object” according to the present embodiment. As described above, the real object is any object that is present in the real space and assumed to be used for finishing a task, such as furniture, a household electrical appliance, a trash can, an interior article, or daily necessities, for example. The information processing device 100 according to the present embodiment can detect a handwriting input action on the environmental object as an input operation by the user by analyzing, based on the sensing data acquired from the sensor device 300, a stroke of the user's hand with the digital pen 210 or a fingertip (a motion of a hand or an arm drawing a character or a chart), and the luminous point of the light emitting unit (IR LED or the like) disposed at the pen point of the digital pen 210. The information processing device 100 may start to detect such a handwriting input action when the switch of the digital pen 210 is turned on, or when a predetermined command utterance, a predetermined gesture operation, or the like is detected.

[0112] Next, the information processing device 100 determines an input mode (deletion operation mode/writing operation mode) of the detected input operation (Step S106). The input mode may be determined based on a stroke of a user’s hand or a locus of the luminous point of the pen point of the digital pen 210, or may be determined based on switching of the switch of the digital pen 210. For example, in a case in which the locus of the luminous point of the pen point of the digital pen 210 forms a cancel line or a predetermined cancel mark, the information processing device 100 determines that the input mode is the deletion operation mode. In a case in which the locus of the luminous point of the pen point of the digital pen 210 forms a shape other than the cancel line or the predetermined cancel mark (for example, some chart, a character, a symbol, and a simple line), the information processing device 100 determines that the input mode is the writing operation mode.
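
As an illustration of the mode determination, a strike-through (cancel) line can be distinguished from ordinary writing by a simple heuristic on the locus. The following Python sketch is an assumption, not the patent's actual criterion; a real system may equally rely on a dedicated cancel mark or the pen's switch, as noted above.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def is_cancel_stroke(locus: List[Point], min_reversals: int = 3) -> bool:
    """Treat a stroke whose horizontal direction reverses repeatedly as a
    strike-through (cancel) line; anything else counts as writing."""
    xs = [p[0] for p in locus]
    reversals, direction = 0, 0
    for a, b in zip(xs, xs[1:]):
        step = (b > a) - (b < a)  # +1 rightward, -1 leftward, 0 no motion
        if step and direction and step != direction:
            reversals += 1
        if step:
            direction = step
    return reversals >= min_reversals

def input_mode(locus: List[Point]) -> str:
    """Step S106 analogue: map a detected locus to an input mode."""
    return "deletion" if is_cancel_stroke(locus) else "writing"
```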

[0113] Subsequently, in a case of the writing operation mode, the information processing device 100 performs input processing (Step S109). Specifically, the information processing device 100 performs control for recognizing the locus of the line drawn by the digital pen 210 or a fingertip (the movement locus constituted by the movement positions of the digital pen 210 or the fingertip), generating an image displaying the recognized locus, and projecting the generated image onto the recognized movement locus from the projector 410. This enables the user to perform handwriting input on every environmental object in the real space, without being restricted to a region such as the display screen of a terminal device. According to the present embodiment, employing handwriting input implements more intuitive and simpler input in daily life, and greatly improves the convenience of managing tasks that arise in the living space. By directly registering a chart, an illustration, or a character input by handwriting as a task to be displayed at the time of task notification (described later), the user can intuitively grasp the content, importance, and urgency of the task, and, by viewing characters written in his/her own hand, remember his/her feeling or the situation at the time of input.

[0114] On the other hand, in a case of the deletion operation mode, the information processing device 100 performs deletion processing (Step S112). For example, in a case of detecting an input operation of a cancel line, the information processing device 100 performs control for causing a canceled character, chart, illustration, and the like not to be displayed.

[0115] Subsequently, in a case in which a registration UI call is made (Yes at Step S115), the information processing device 100 displays the registration UI for registering the additional information related to the task in the vicinity of the user (Step S118). In a case in which the user wants to register the task input by handwriting, the user performs an operation to be a trigger for advancing the process to the registration processing (second input operation, the registration UI call in this case). The registration UI call may be drawing of a specific mark using the digital pen 210, a predetermined gesture operation or voice, or a pressing and holding operation of the digital pen 210.

[0116] FIG. 5 illustrates a screen example of the registration UI. As illustrated in FIG. 5, a registration UI 25 is projected and displayed in the vicinity of the real object 10 on which the user has performed handwriting input with the digital pen 210, for example. The registration UI 25 displays input boxes (for example, of a pull-down type or a handwriting input type) for the additional information to be registered in association with the task. Via the registration UI 25, the user can designate notification time, an object (a real object used for finishing the task), a user, and a place (a place where the task is finished, that is, a notification place). The designated user is the person to be notified of the task, and may be the registrant himself/herself (that is, the registrant of the task and the person to be notified are the same person) or another person such as a member of the family; the person to be notified is not necessarily designated. In the box of the “object” (the real object used for finishing the task), the name of the real object 10 recognized on the system side (for example, “trash can”) may be presented to be checked by the user. The user does not necessarily input all pieces of the additional information displayed on the registration UI 25. Items displayed on the registration UI 25 are not limited to the example illustrated in FIG. 5, and an optimum registration UI may be generated and displayed in accordance with the situation. For example, different registration UIs may be generated for each user, additional information estimated in advance by machine learning on task registration may be presented as a candidate, and, in accordance with a place such as a living room or a bedroom, the person who often uses the place or the time when the place is often used may be estimated and presented as candidates for the additional information.

[0117] Next, the information processing device 100 inputs the additional information of the task (Step S121). The information processing device 100 acquires the information input on the displayed registration UI 25 by the user with the digital pen 210, a finger, or the like based on the sensing data acquired from the sensor device 300. The present embodiment describes a case of displaying the registration UI by way of example, but the present embodiment is not limited thereto. The information processing device 100 may extract the additional information based on a voice, a gesture, or handwriting content of the user without displaying the registration UI. The additional information includes a notification condition (notification time, a user to be notified, a place, and a real object used for finishing the task), and the attribute information (importance, security information, and a repetition setting (snooze function)).

[0118] The information processing device 100 then performs completion processing (Step S124). Specifically, the information processing device 100 performs processing of storing a character, a chart, an illustration, and the like written on the environmental object in the storage unit 190 in association with the additional information (registration processing). The character, chart, illustration, and the like written on the environmental object may be saved as an image as it is, or text (a character string) recognized at the same time and a processing result such as a semantic analysis result may also be saved. In a case in which the task content (notification content) is written as “taking out garbage” with the digital pen 210, the time condition of the notification condition is “9:00 a.m. XX/XX”, and the real object is a trash can, for example, the saving format of the task is as follows. Object Data is point cloud data in the case of a real object, and identification data such as face recognition data in the case of a user.

Example of Saved Data

[0119] Tag, Object Data, Drawing Data, Time
[0120] {“trash can”}, {point cloud}, {“hoge.png”}, {YYYY.MM.DD.HH.MM.SS}
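
Rendered as a serializable record, the saved-data example above might look like the following Python sketch. The key names and the point-cloud file reference are assumptions for illustration; only the values mirror the listing above.

```python
import json
from datetime import datetime

# Hypothetical serialization of the saved-data example above.
record = {
    "tag": "trash can",                     # recognized real object
    "object_data": "trash_can_points.ply",  # stand-in for the point cloud
    "drawing_data": "hoge.png",             # registered handwriting image
    "time": datetime.now().strftime("%Y.%m.%d.%H.%M.%S"),
}
print(json.dumps(record, indent=2))
```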

[0121] At the time of saving the task, the information processing device 100 may feed completion of registration back to the user using a sound or an image. After the registration, the information processing device 100 may cause the projected handwriting image and registration UI not to be displayed.

[0122] The completion processing may be performed in accordance with a registration completion operation performed by the user. For example, the user may tap a completion button on a displayed GUI, such as the projected registration UI, with the digital pen 210, a touch pen, a fingertip, or the like. The registration completion operation may be writing a specific mark with the digital pen 210 or the like, enclosing the written task with a specific mark, or drawing an underline. The registration completion operation may also be a gesture such as sweeping the written task away by hand, or inputting a specific command such as “register” by voice.

[0123] On the other hand, in a case in which the registration UI call at Step S115 described above is not made (No at Step S115), the information processing device 100 recognizes the written content as scribble remaining at the present place (Step S127). In this case, the information processing device 100 may cause the written content recognized as scribble to be deleted (not to be displayed) after a certain time has elapsed. Due to this, the user can enjoy scribbling on any place such as a floor, a wall, and a desk.

[0124] The procedure of registration processing according to the present embodiment has been described above with reference to FIG. 4. The operation processing illustrated in FIG. 4 is merely an example, and the present disclosure is not limited to the example illustrated in FIG. 4. For example, the present disclosure is not limited to an order of steps illustrated in FIG. 4. At least some of the steps may be performed in parallel, or may be performed in reverse order. For example, pieces of the processing from Step S103 to Step S109 and pieces of the processing from Step S115 to Step S118 may be performed in parallel, or may be performed in reverse order. That is, the registration UI call is made beforehand to display the registration UI, and the task content may be input to the environmental object (including the real object) with the digital pen 210 and the like thereafter.

[0125] All pieces of the processing illustrated in FIG. 4 are not necessarily performed. For example, a registration call for simply registering a written task may be made without performing the processing of registration UI call from Step S115 to Step S118. The user can make the registration call after writing the additional information, and register the task and the additional information. After the deletion processing at Step S112, the process may proceed to the registration UI call at Step S115. This is because, after some of characters are deleted, the rest of the characters, an illustration, and the like may be registered as a task.

[0126] All pieces of the processing illustrated in FIG. 4 are not necessarily performed by a single device, and the pieces of processing are not necessarily performed in temporal sequence.

[0127] Subsequently, the following specifically describes registration of the additional information of the task according to the present embodiment from Step S121 to Step S124 described above with reference to FIG. 6 to FIG. 7.

[0128] FIG. 6 is a flowchart illustrating an example of registration processing of the notification condition included in the additional information of the task according to the present embodiment. As illustrated in FIG. 6, first, in a case of registering the notification time (Yes at Step S133), the information processing device 100 performs setting of the notification time (Step S139), timer setting (Step S142), or timing setting (Step S145) in accordance with a condition item (Step S136).

[0129] Information about the notification time may be acquired from a user input on the registration UI, or may be acquired from the written content. As the setting of the notification time, a year, a month, a date, an hour, and a minute can be set. In a case in which the timer setting is performed, the information processing device 100 starts to measure time with the timer 180. Regarding the timing setting, sunset or a predetermined timing depending on the weather and the like can be set as the notification timing; specifically, various situations such as “when it rains”, “when it is sunny”, “when it is hot”, “when evening comes”, and “in the morning” can be set. The information processing device 100 may acquire, from a cloud service or the like, the time of sunset or the time at which the weather will change, and set the acquired time as the notification time.
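
The three kinds of time condition (absolute time, timer, and situational timing) could be checked as in the following Python sketch. It is an assumption for illustration; in particular, the `situation` predicate stands in for weather or sunset information that the device would fetch from a cloud service.

```python
from datetime import datetime, timedelta
from typing import Callable, Optional

class TimeCondition:
    """Absolute time, timer, or situational timing, as in Steps S139-S145."""

    def __init__(self,
                 at: Optional[datetime] = None,
                 timer: Optional[timedelta] = None,
                 situation: Optional[Callable[[], bool]] = None):
        self.at = at
        self.deadline = datetime.now() + timer if timer else None
        self.situation = situation

    def satisfied(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now()
        if self.at is not None:
            return now >= self.at        # setting of the notification time
        if self.deadline is not None:
            return now >= self.deadline  # timer setting
        if self.situation is not None:
            return self.situation()      # timing setting, e.g. "is it raining?"
        return False

# Usage: a timer condition that is satisfied ten minutes after registration.
ten_minutes = TimeCondition(timer=timedelta(minutes=10))
```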

[0130] Next, in a case of registering the real object used for (related to) finishing the task (Yes at Step S148), the information processing device 100 registers the real object information. The real object information may be acquired from the user input on the registration UI, or may be acquired from the written content. For example, the real object can be designated and registered by the user touching the target real object with the digital pen 210, writing a specific mark on it, or enclosing it with a specific mark. The real object may also be designated by touching it with a fingertip or pointing at it with a gesture. In this way, by using the real object itself at the time of task registration, the real object can be designated more intuitively and simply. The real object related to finishing the task is not limited to an inorganic substance, and may be another user, a pet, or the like.

[0131] Even in a case in which the real object related to the task is not present in the vicinity of the user, the user may think of a task and start to input the task content at the present place. In this case, the real object may be designated by writing its name. For example, at a place where no trash can is present nearby, the information processing device 100 acquires the time “9:00 a.m. tomorrow” and the real object information “trash can” from the written content “9:00 a.m. tomorrow trash can”, and registers them.

[0132] Subsequently, the person to be notified is set (Step S154 to Step S160). Specifically, for example, in a case in which the user (registrant) designates one or more persons other than himself/herself (other users) via the registration UI or the like (Yes at Step S154), the information processing device 100 sets the designated other users as the persons to be notified (Step S157). In a case in which the person to be notified is unspecified, such as any member of the user's family living together, the user may set the person to be notified as “unspecified” or “anybody”.

[0133] On the other hand, in a case in which another user is not designated (No at Step S154), the information processing device 100 automatically sets the registrant of the task (user himself/herself) as the person to be notified (Step S160).

[0134] In this way, as the person to be notified of a task, in addition to the registrant of the task himself/herself, another user living together in the living space can be designated, or a user who does not live with the registrant can be designated and set.

……
……
……
