Google Patent | Transitioning between map view and augmented reality view

Patent: Transitioning between map view and augmented reality view

Publication Number: 20210102820

Publication Date: 2021-04-08

Applicant: Google

Abstract

A method includes: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and, if not, triggering placement of the first POI object at a first edge of the AR view.

Claims

  1. A method comprising: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and in response to determining that the first physical location of the first POI is not within the field of view, triggering placement of the first POI object at a first edge of the AR view.

  2. The method of claim 1, further comprising determining that the first edge is closer to the first physical location than other edges of the AR view, wherein the first edge is selected for placement of the first POI object based on the determination.

  3. The method of claim 2, wherein determining that the first edge is closer to the first physical location than the other edges of the AR view comprises determining a first angle between the first physical location and the first edge, determining a second angle between the first physical location and a second edge of the image, and comparing the first and second angles.

  4. The method of claim 1, wherein detecting the input comprises, in the map mode, determining a vector corresponding to a direction of the device using an up vector of a display device on which the map is presented, determining a camera forward vector, and evaluating a dot product between the vector and a gravity vector.

  5. The method of claim 1, wherein the first POI object is placed at the first edge of the AR view, the method further comprising: detecting a relative movement between the device and the first POI; and in response to the relative movement, triggering cessation of presentation of the first POI object at the first edge, and instead triggering presentation of the first POI object at a second edge of the image opposite the first edge.

  6. The method of claim 5, wherein triggering cessation of presentation of the first POI object at the first edge comprises triggering gradual motion of the first POI object out of the AR view at the first edge so that progressively less of the first POI object is visible until the first POI object is no longer visible at the first edge.

  7. The method of claim 5, wherein triggering presentation of the first POI object at the second edge comprises triggering gradual motion of the first POI object into the AR view at the second edge so that progressively more of the first POI object is visible until the first POI object is fully visible at the second edge.

  8. The method of claim 5, further comprising, after triggering cessation of presentation of the first POI object at the first edge, pausing for a predetermined time before triggering presentation of the first POI object at the second edge.

  9. The method of claim 5, wherein the first physical location of the first POI is initially outside of the field of view and on a first side of the device, and wherein detecting the relative movement comprises detecting that the first physical location of the first POI is instead outside of the field of view and on a second side of the device, the second side opposite to the first side.

  10. The method of claim 1, wherein triggering presentation of the map comprises: determining a present inclination of the device; and causing the portion of the map to be presented, the portion being determined based on the present inclination of the device.

  11. The method of claim 10, wherein the determination comprises applying a linear relationship between the present inclination of the device and the portion.

  12. The method of claim 10, wherein the transition of the device from the map mode to the AR mode, and a transition of the device from the AR mode to the map mode, are based on the determined present inclination of the device without use of a threshold inclination.

  13. The method of claim 1, wherein at least a second POI object in addition to the first POI object is placed on the map in the map view, the second POI object corresponding to a navigation instruction for a traveler to traverse a route, the method further comprising: detecting a rotation of the device; in response to detecting the rotation, triggering rotation of the map based on the rotation of the device; and triggering rotation of at least part of the second POI object corresponding to the rotation of the map.

  14. The method of claim 13, wherein the second POI object comprises an arrow symbol placed inside a location legend, wherein the part of the second POI object that is rotated corresponding to the rotation of the map includes the arrow symbol, and wherein the location legend is not rotated corresponding to the rotation of the map.

  15. The method of claim 14, wherein the location legend is maintained in a common orientation relative to the device while the map and the arrow symbol are rotated.

  16. The method of claim 1, wherein multiple POI objects in addition to the first POI object are presented in the map view, the multiple POI objects corresponding to respective navigation instructions for a traveler to traverse a route, a second POI object of the multiple POI objects corresponding to a next navigation instruction on the route and being associated with a second physical location, the method further comprising: when the AR view is presented on the device in the AR mode, triggering presentation of the second POI object at a location on the image corresponding to the second physical location, and not triggering presentation of a remainder of the multiple POI objects other than the second POI object on the image.

  17. The method of claim 1, further comprising triggering presentation, in the map mode, of a preview of the AR view.

  18. The method of claim 17, wherein triggering presentation of the preview of the AR view comprises: determining a present location of the device; receiving an image from a service that provides panoramic views of locations using an image bank, the image corresponding to the present location; and generating the preview of the AR view using the received image.

  19. The method of claim 18, further comprising transitioning from the preview of the AR view to the image in the transition of the device from the map mode to the AR mode.

  20. The method of claim 1, further comprising, in response to determining that the first physical location of the first POI is within the field of view, triggering placement of the first POI object at a location in the AR view corresponding to the first physical location.

  21. A computer program product tangibly embodied in a non-transitory storage medium, the computer program product including instructions that when executed cause a processor to perform operations, the operations comprising: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and in response to determining that the first physical location of the first POI is not within the field of view, triggering placement of the first POI object at a first edge of the AR view.

  22. A system comprising: a processor; and a computer program product tangibly embodied in a non-transitory storage medium, the computer program product including instructions that when executed cause the processor to perform operations, the operations comprising: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and in response to determining that the first physical location of the first POI is not within the field of view, triggering placement of the first POI object at a first edge of the AR view.

Description

TECHNICAL FIELD

[0001] This document relates, generally, to transitioning between a map view and an augmented reality (AR) view.

BACKGROUND

[0002] The use of AR technology has already entered a number of areas and continues to be applied in new ones. In particular, the rapid adoption of handheld devices such as smartphones and tablets has created new opportunities to apply AR. However, a remaining challenge is to provide a user experience that is intuitive and does not subject the user to jarring experiences.

[0003] In particular, there can be difficulties in presenting information to users using AR on the smaller screens typical of smartphones and tablets. This can be especially difficult when a point of interest (POI) on a map must be presented while the user is moving or facing a direction away from the POI.

SUMMARY

[0004] In a first aspect, a method includes: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and in response to determining that the first physical location of the first POI is not within the field of view, triggering placement of the first POI object at a first edge of the AR view.

[0005] Therefore, information regarding the existence, location or other properties of the POI can be indicated to the viewer in the AR view mode even if the view being presented does not include the location of the POI, e.g. when the user (or the camera) is facing a different direction. This can increase the amount of information supplied to the user on a smaller screen (e.g. on a smartphone or stereo goggles) without degrading the AR effect.

[0006] A map may be described as a visual representation of real physical features, e.g. on the ground or surface of the Earth. These features may be shown in their relative sizes, respective forms and relative location to each other according to a scale factor. A POI may be a map object or feature. AR mode may include providing an enhanced image or environment as displayed on a screen, goggles or other display. This may be produced by overlaying computer-generated images, sounds, or other data or objects on a view of a real-world environment, e.g. a view provided using a live-view camera or real time video. The field of view may be the field of view of a camera or cameras. The edge of the AR view may be an edge of the screen or display or an edge of a window within the display, for example.
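
As a rough sketch of the field-of-view test implied above, the following snippet (not from the patent; the planar coordinate frame and angle conventions are assumptions) checks whether a POI's bearing relative to the device heading falls within the camera's horizontal field of view:

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.atan2

// Bearing (degrees, normalized to -180..180) from the device to a POI,
// relative to the device's heading; 0 means the POI is straight ahead.
// Positions are in a local planar frame with +y = north, +x = east.
fun relativeBearing(
    deviceX: Double, deviceY: Double,
    headingDeg: Double,
    poiX: Double, poiY: Double
): Double {
    val absoluteDeg = atan2(poiX - deviceX, poiY - deviceY) * 180.0 / PI
    var rel = absoluteDeg - headingDeg
    while (rel > 180) rel -= 360
    while (rel < -180) rel += 360
    return rel
}

// A POI is inside the camera's field of view when its relative bearing lies
// within half of the horizontal FOV on either side of the view center.
fun isInFieldOfView(relBearingDeg: Double, horizontalFovDeg: Double): Boolean =
    abs(relBearingDeg) <= horizontalFovDeg / 2
```

A POI failing this test would then be docked at an edge of the AR view, as the claims describe.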

[0007] Implementations can include any or all of the following features. The method further includes determining that the first edge is closer to the first physical location than other edges of the AR view, wherein the first edge is selected for placement of the first POI object based on the determination. Determining that the first edge is closer to the first physical location than the other edges of the AR view comprises determining a first angle between the first physical location and the first edge, determining a second angle between the first physical location and a second edge of the image, and comparing the first and second angles. Detecting the input includes, in the map mode, determining a vector corresponding to a direction of the device using an up vector of a display device on which the map is presented, determining a camera forward vector, and evaluating a dot product between the vector and a gravity vector. The first POI object is placed at the first edge of the AR view, the method further comprising: detecting a relative movement between the device and the first POI; and in response to the relative movement, triggering cessation of presentation of the first POI object at the first edge, and instead triggering presentation of the first POI object at a second edge of the image opposite the first edge. Triggering cessation of presentation of the first POI object at the first edge comprises triggering gradual motion of the first POI object out of the AR view at the first edge so that progressively less of the first POI object is visible until the first POI object is no longer visible at the first edge. Triggering presentation of the first POI object at the second edge comprises triggering gradual motion of the first POI object into the AR view at the second edge so that progressively more of the first POI object is visible until the first POI object is fully visible at the second edge. The method further comprises, after triggering cessation of presentation of the first POI object at the first edge, pausing for a predetermined time before triggering presentation of the first POI object at the second edge. The first physical location of the first POI is initially outside of the field of view and on a first side of the device, and detecting the relative movement comprises detecting that the first physical location of the first POI is instead outside of the field of view and on a second side of the device, the second side opposite to the first side. Triggering presentation of the map comprises: determining a present inclination of the device; and causing the portion of the map to be presented, the portion being determined based on the present inclination of the device. The determination comprises applying a linear relationship between the present inclination of the device and the portion. The transition of the device from the map mode to the AR mode, and a transition of the device from the AR mode to the map mode, are based on the determined present inclination of the device without use of a threshold inclination. 
At least a second POI object in addition to the first POI object is placed on the map in the map view, the second POI object corresponding to a navigation instruction for a traveler to traverse a route, the method further comprising: detecting a rotation of the device; in response to detecting the rotation, triggering rotation of the map based on the rotation of the device; and triggering rotation of at least part of the second POI object corresponding to the rotation of the map. The second POI object comprises an arrow symbol placed inside a location legend, wherein the part of the second POI object that is rotated corresponding to the rotation of the map includes the arrow symbol, and wherein the location legend is not rotated corresponding to the rotation of the map. The location legend is maintained in a common orientation relative to the device while the map and the arrow symbol are rotated. Multiple POI objects in addition to the first POI object are presented in the map view, the multiple POI objects corresponding to respective navigation instructions for a traveler to traverse a route, a second POI object of the multiple POI objects corresponding to a next navigation instruction on the route and being associated with a second physical location, the method further comprising: when the AR view is presented on the device in the AR mode, triggering presentation of the second POI object at a location on the image corresponding to the second physical location, and not triggering presentation of a remainder of the multiple POI objects other than the second POI object on the image. The method further comprises triggering presentation, in the map mode, of a preview of the AR view. Triggering presentation of the preview of the AR view comprises: determining a present location of the device; receiving an image from a service that provides panoramic views of locations using an image bank, the image corresponding to the present location; and generating the preview of the AR view using the received image. The method further comprises transitioning from the preview of the AR view to the image in the transition of the device from the map mode to the AR mode. The method further comprises, in response to determining that the first physical location of the first POI is within the field of view, triggering placement of the first POI object at a location in the AR view corresponding to the first physical location.
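
The gradual slide-out, pause, and slide-in behavior described above could be modeled as a simple timeline. This is a minimal sketch, assuming a normalized transition clock; the phase boundaries and all names are illustrative, not from the patent:

```kotlin
enum class Edge { LEFT, RIGHT }

// Which edge the POI object is docked at, and how much of it is visible
// (1.0 = fully visible, 0.0 = fully off-screen).
data class DockState(val edge: Edge, val visibleFraction: Double)

// t is the normalized progress of the whole transition in [0, 1]. The POI
// object slides out at `from`, stays hidden for a predetermined pause, then
// slides in at the opposite edge `to`.
fun edgeTransition(
    t: Double,
    from: Edge,
    to: Edge,
    slideOutEnd: Double = 0.4,  // slide-out completes here (assumed)
    pauseEnd: Double = 0.6      // pause completes here (assumed)
): DockState = when {
    t < slideOutEnd -> DockState(from, 1.0 - t / slideOutEnd)  // progressively less visible
    t < pauseEnd    -> DockState(from, 0.0)                    // hidden during the pause
    else            -> DockState(to, (t - pauseEnd) / (1.0 - pauseEnd)) // progressively more visible
}
```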

[0008] In a second aspect, a computer program product is tangibly embodied in a non-transitory storage medium, the computer program product including instructions that when executed cause a processor to perform operations as set out in any of the aspects described above.

[0009] In a third aspect, a system includes: a processor; and a computer program product tangibly embodied in a non-transitory storage medium, the computer program product including instructions that when executed cause the processor to perform operations as set out in any of the aspects described above.

[0010] It should be noted that any feature described above may be used with any particular aspect or embodiment of the invention.

BRIEF DESCRIPTION OF DRAWINGS

[0011] FIGS. 1A-B show an example of transitioning between a map view and an AR view.

[0012] FIG. 2 shows an example of a system.

[0013] FIGS. 3A-G show another example of transitioning between a map view and an AR view.

[0014] FIGS. 4A-C show an example of maintaining an arrow symbol true to an underlying map.

[0015] FIGS. 5A-C show an example of controlling a map presence using device tilt.

[0016] FIG. 6 conceptually shows device mode depending on device tilt.

[0017] FIGS. 7-11 show examples of methods.

[0018] FIG. 12 schematically shows an example of transitioning between a map view and an AR view.

[0019] FIG. 13 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described here.

[0020] Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0021] This document describes examples of implementing AR functionality on a user device such as a smartphone, tablet or AR goggles. For example, approaches are described that can provide a smooth transition between a map view and an AR view on the user device. Providing a more seamless transition back-and-forth between such modes can ensure a more enjoyable, productive and useful user interaction with the device, and thereby eliminate some barriers that still remain for users to engage with AR. In so doing, the approach(es) can stimulate an even wider adoption of AR technology as a way to develop the interface between the human and the electronic device.

[0022] In some implementations, virtual and physical camera views can be aligned, and contextual anchors can be provided that may persist across all modes. AR tracking and localization can be established before entering AR mode. For example, a map can be displayed in at least two modes. One mode, which may be referred to as a 2D mode, shows a top view of the map and may be present when the user is holding the phone in a generally horizontal orientation, such as parallel to the ground. In another mode, which may be referred to as an AR mode, the map may be reduced down to a small (e.g., tilted) map view (e.g., a minimap). This can be done when the user tilts the phone up or down relative to the horizontal position, such as by pointing the phone upright. A pass-through camera on the phone can be used in AR mode to provide better spatial context and overlay upcoming turns, nearby businesses, etc. User interface (UI) anchors such as a minimap, current position, destination, route, streets, upcoming turns, and compass direction can transition smoothly as the user switches between modes. As the UI anchors move off screen, they can dock to the edges to indicate additional content.

[0023] Some implementations provide a consistency of visual user interface anchors and feature an alignment between the virtual map and physical world. This can reduce potential user barriers against transitioning into or out of an AR mode (sometimes referred to as switching friction) and can enable seamless transitions between 2D and AR modes using natural and intuitive gestures. Initializing the tracking while still in the 2D mode of a piece of software, such as an app, can make the transition to AR much quicker.

[0024] In some implementations, using different phone orientations in upright and horizontal mode to determine the user facing direction can help avoid the gimbal lock problem and thus provide a stable experience. Implementations can ensure that the accuracy of tracking is well aligned with the use case. Accurate position tracking can be challenging when facing down. For example, errors and jittering in the position may be easily visible when using GPS or the camera for tracking. When holding the phone facing the ground, there may be fewer distinguishing features available for accurately determining one's position from visual features or VPS. When holding the phone up, AR content may be further away from the user, and a small error or noise in the position of the phone may not show in the AR content.
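
One possible reading of this orientation logic (compare the dot-product test in claim 4) is sketched below; the 0.9 cutoff and the vector conventions are assumptions, not values from the patent:

```kotlin
import kotlin.math.abs

data class Vec3(val x: Double, val y: Double, val z: Double) {
    infix fun dot(o: Vec3): Double = x * o.x + y * o.y + z * o.z
}

// World-space unit vector pointing straight down (assumed convention).
val GRAVITY = Vec3(0.0, 0.0, -1.0)

// When the phone is held flat, the camera-forward vector is nearly parallel
// to gravity and is a poor heading reference (the gimbal-lock problem), so
// the display's up vector is used instead; when the phone is upright, the
// camera-forward vector is used.
fun facingVector(displayUp: Vec3, cameraForward: Vec3): Vec3 =
    if (abs(cameraForward dot GRAVITY) > 0.9) displayUp else cameraForward
```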

[0025] While some implementations described here mention AR as an example, the present subject matter can also or instead be applied with virtual reality (VR). In some implementations, corresponding adjustments to the examples described herein can then be made. For example, a device can operate according to a VR mode; a VR view or a VR area can be presented on a device; and a user can have a head-mounted display such as a pair of VR goggles.

[0026] FIGS. 1A-B show an example of transitioning between a map view and an AR view. These and other implementations described herein can be provided on a device such as the one(s) shown or described below with regard to FIG. 13. For example, such a device can include, but is not limited to, a smartphone, a tablet or a head-mounted display such as a pair of AR goggles.

[0027] In the example shown in FIG. 1A, the device has at least one display, including, but not limited to, a touchscreen panel. Here, a graphical user interface (GUI) 100 is presented on the display. A navigation function is active on the GUI 100. For example, the navigation function can be provided by local software (e.g., an app on a smartphone) or it can be delivered from another system, such as from a server. Combinations of these approaches can be used.

[0028] The navigation function is presenting a map view 102 in the GUI 100. This can occur in the context of the device being in a map mode (sometimes referred to as a 2D mode), which can be distinct from, or co-existent with, another available mode, including, but not limited to, an AR mode. In the present map mode, the map view 102 includes a map area 104 and a direction presentation area 106. The map area 104 can present one or more maps 108 and the direction presentation area 106 can present one or more directions 110.

[0029] In the map area 104, one or more routes 112 can be presented. The route(s) can be marked between at least the user's present position and at least one point of interest (POI), such as a turn along the route, the destination of the route, or interesting features along the way. Here, a POI object 114 is placed along the route 112 to signify that a right turn should be made at N Almaden Avenue. The POI object 114 can include one or more items. Here, the POI object 114 includes a location legend 114A, which can serve to contain the information about the POI (in this case, a turn); an arrow symbol 114B (here signifying a right turn); and text content 114C with information about the POI represented by the POI object 114. Other items can be presented in addition to, or in lieu of, one or more of the shown items of the POI object 114. While only a single POI object 114 is shown in this example, in some implementations the route 112 can include multiple POI objects.

[0030] In the example shown in FIG. 1B, the GUI 100 is presenting an AR view 116. The AR view 116 can be presented in the context of when the device is in an AR mode (sometimes referred to as a 3D mode), which can be distinct from, or co-existent with, another available mode, including, but not limited to, a map mode. In the present AR mode, the AR view 116 presents an image area 118 and a map area 120. The image area 118 can present one or more images 122 and the map area 120 can present one or more maps 124. For example, the image 122 can be captured by a sensor associated with the device presenting the GUI 100, including, but not limited to, by a camera of a smartphone device. As another example, the image 122 can be an image obtained from another system (e.g., from a server) that was captured at or near the current position of the device presenting the GUI 100.

[0031] The current position of the device presenting the GUI 100 can be indicated on the map 124. Here, an arrow 126 on the map 124 indicates the device location relative to the map 124. Although the placement of the arrow 126 is based on the location of the device (e.g., determined using location functionality), this is sometimes referred to as the user’s position as well. The arrow 126 can remain at a predefined location of the map area 120 as the device is moved. For example, the arrow 126 can remain in the center of the map area 120. In some implementations, when the user rotates the device, the map 124 can rotate around the arrow 126, which can provide an intuitive experience for the user.
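
A minimal sketch of that rotation behavior: the arrow stays pinned at the minimap's center while the map is rotated underneath it by the negative of the device heading. The sign convention and all names here are assumptions:

```kotlin
// The arrow's screen position never changes; only the map rotates around it.
data class MinimapState(
    val mapRotationDeg: Double,  // rotation applied to the map imagery
    val arrowX: Float,           // fixed arrow position (minimap center)
    val arrowY: Float
)

fun minimapStateFor(deviceHeadingDeg: Double, centerX: Float, centerY: Float): MinimapState =
    MinimapState(mapRotationDeg = -deviceHeadingDeg, arrowX = centerX, arrowY = centerY)
```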

[0032] One or more POI objects can be shown in the map area 120 and/or in the image area 118. Here, a POI object 128 is placed at a location of the image 122. The POI object 128 here corresponds to the POI object 114 (FIG. 1A). As such, the POI object 128 represents the instruction to make a right turn at N Almaden Avenue. That is, N Almaden Avenue is a physical location that can be represented on the map 108 and in the AR view 116. In some implementations, the POI object 114 (FIG. 1A) can be associated with a location on the map 108 that corresponds to the physical location of N Almaden Avenue. Similarly, the POI object 128 can be associated with a location on the image that corresponds to the same physical location. For example, the POI object 114 can be placed at the map location on the map 108, and the POI object 128 can be presented on the image 122 as the user traverses the remaining distance before reaching the physical location of N Almaden Avenue.

[0033] In some implementations, the POI object 128 may have been transitioned from the map 108 (FIG. 1A) as part of a transition into the AR mode. For example, the POI object 128 here corresponds to an instruction to make a turn as part of traversing a navigation route, and other objects corresponding to respective POIs of the navigation may have been temporarily omitted so that the POI object 128 is currently the only one of them that is presented.

[0034] One or more types of input can cause a transition from a map mode (e.g., as in FIG. 1A) to an AR mode (e.g., as in FIG. 1B). In some implementations, a maneuvering of the device can be recognized as such an input. For example, holding the device horizontal (e.g., aimed toward the ground) can cause the map view 102 to be presented as in FIG. 1A. Holding the device angled relative to the horizontal plane (e.g., tilted or upright) can cause the AR view 116 to be presented as in FIG. 1B. In some implementations, some or all of the foregoing can be caused by detection of another input. For example, a specific physical or virtual button can be actuated, or a gesture performed on a touchscreen can be recognized.

[0035] The map view 102 and the AR view 116 are examples of how multiple POI objects in addition to the POI object 114 can be presented in the map view 102. The multiple POI objects can correspond to respective navigation instructions for a traveler to traverse the route 112. In the AR view 116, the POI object 128, as one of the multiple POI objects, can correspond to a next navigation instruction on the route and accordingly be associated with the physical location of N Almaden Avenue. As such, when the AR view 116 is presented on the device in the AR mode, the POI object 128 can be presented at a location on the image 122 corresponding to the physical location of N Almaden Avenue. Moreover, a remainder of the multiple POI objects associated with the route 112 (FIG. 1A) may not presently appear on the image 122.
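
A sketch of that filtering step, assuming route steps are held in an ordered list; the types are hypothetical stand-ins for whatever structure an implementation would actually use:

```kotlin
data class PoiObject(val stepIndex: Int, val label: String)

// In AR mode, only the POI object for the next navigation instruction is
// shown; the remainder of the route's POI objects are withheld from the image.
fun poiObjectsForArView(routeObjects: List<PoiObject>, nextStepIndex: Int): List<PoiObject> =
    routeObjects.filter { it.stepIndex == nextStepIndex }
```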

[0036] FIG. 2 shows an example of a system 200. The system 200 can be used for presenting at least one map view and at least one AR view, for example as described elsewhere herein. The system 200 includes a device 202 and at least one server 204 that can be communicatively coupled through at least one network 206, such as a private network or the internet. Either or both of the device 202 and the server 204 can operate in accordance with the devices or systems described below with reference to FIG. 13.

[0037] The device 202 can have at least one communication function 208. For example, the communication function 208 allows the device 202 to communicate with one or more other devices or systems, including, but not limited to, with the server 204.

[0038] The device 202 can have at least one search function 210. In some implementations, the search function 210 allows the device 202 to run searches that can identify POIs (e.g., interesting places or events, and/or POIs corresponding to navigation destinations or waypoints of a route to a destination). For example, the server 204 can have at least one search engine 212 that can provide search results to the device 202 relating to POIs.

[0039] The device 202 can have at least one location management component 214. In some implementations, the location management component 214 can provide location services to the device 202 for determining or estimating the physical location of the device 202. For example, one or more signals such as a global positioning system (GPS) signal or another wireless or optical signal can be used by the location management component 214.

[0040] The device 202 can include at least one GUI controller 216 that can control what is presented on the display of the device and how it is presented. For example, the GUI controller 216 regulates when a map view, an AR view, or both should be presented to the user.

[0041] The device 202 can include at least one map controller 218 that can control the selection and tailoring of a map to be presented to the user. For example, the map controller 218 can select a portion of a map based on the current location of the device and cause that portion to be presented to the user in a map view.

[0042] The device 202 can have at least one camera controller 220 that can control a camera integrated into, connected to, or otherwise coupled to the device 202. For example, the camera controller can capture an essentially live stream of image content (e.g., a camera passthrough feed) that can be presented to the user.

[0043] The device 202 can have at least one AR view controller 222 that can control one or more AR views on the device. In some implementations, the AR view controller can provide live camera content, or AR preview content, or both, for presentation to the user. For example, a live camera feed can be obtained using the camera controller 220. For example, AR preview images can be obtained from a panoramic view service 224 on the server 204. The panoramic view service 224 can have access to images in an image bank 226 and can use the image(s) to assemble a panoramic view based on a specified location. For example, the images in the image bank 226 may have been collected by capturing image content while traveling on roads, streets, sidewalks or other public places in one or more countries. Accordingly, for one or more specified locations in such a canvassed public place, the panoramic view service 224 can assemble a panoramic view image that represents such location(s).
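
The preview flow might look roughly like the following; PanoramaService, its method, and the byte-array image representation are hypothetical stand-ins for the server-side service described above, not a real API:

```kotlin
data class LatLng(val lat: Double, val lng: Double)

// Hypothetical client-side view of the panoramic view service 224.
interface PanoramaService {
    // Returns an encoded panoramic image assembled from the image bank for
    // the given location, or null if the location has not been canvassed.
    fun panoramaFor(location: LatLng): ByteArray?
}

// The AR preview shown in map mode is generated from the returned image;
// on transition to AR mode it is replaced by the live camera feed.
fun buildArPreview(service: PanoramaService, presentLocation: LatLng): ByteArray? =
    service.panoramaFor(presentLocation)
```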

[0044] The device 202 can include at least one navigation function 228 that can allow the user to define routes to one or more destinations and to receive instructions for traversing the routes. For example, the navigation function 228 can recognize the current physical position of the device 202, correlate that position with coordinates of a defined navigation route, and ensure that the traveler is presented with the (remaining) travel directions to traverse the route from the present position to the destination.

[0045] The device 202 can include at least one inertia measurement component 230 that can use one or more techniques for determining a spatial orientation of the device 202. In some implementations, an accelerometer and/or a gyroscope can be used. For example, the inertia measurement component 230 can determine whether and/or to what extent the device 202 is currently inclined with regard to some reference, such as a horizontal or vertical direction.

[0046] The device 202 can include at least one gesture recognition component 232 that can recognize one or more gestures made by the user. In some implementations, a touchscreen device can register hand movement and/or a camera can register facial or other body movements, and the gesture recognition component 232 can recognize these as corresponding to one or more predefined commands. For example, this can activate a map mode and/or an AR mode and/or both.

[0047] Inputs other than gestures and measured inclination can also be registered. The device 202 can include input controls 234 that can trigger one or more operations by the device 202, such as those described herein. For example, the map mode and/or the AR mode can be invoked using the input control(s) 234.

[0048] FIGS. 3A-G show another example of transitioning between a map view and an AR view. This example includes a gallery 300 of illustrations that are shown at different points in time corresponding to the respective ones of FIGS. 3A-G. Each point in time is here represented by one or more of: an inclination diagram 302, a device 304 and a map 306 of physical locations. For example, in the inclination diagram 302 the orientation of a device 308 can be indicated; the device 304 can present content such as a map view and/or an AR view on a GUI 310; and/or in the map 306 the orientation of a device 312 relative to one or more POIs can be indicated.

[0049] The devices 304, 308 and 312 are shown separately for clarity, but are related to each other in the sense that the orientation of the device 308 and/or the device 312 can cause the device 304 to present certain content on the GUI 310.

[0050] The device 304, 308 or 312 can have one or more cameras or other electromagnetic sensors. For example, in the map 306, a field of view (FOV) 314 can be defined by respective boundaries 314A-B. The FOV 314 can define, for example, what is captured by the device’s camera depending on its present position and orientation. A line 316, moreover, extends rearward from the device 312. In a sense, the line 316 defines what objects are to the left or to the right of the device 312, at least with regard to those objects that are situated behind the device 312 from the user’s perspective.

[0051] Multiple physical locations 318A-C are here marked in the map 306. These can correspond to the respective physical locations of one or more POIs that have been defined or identified (e.g., by way of a search function or navigation function). For example, each POI can be a place or event, a waypoint and/or a destination on a route. In this example, a physical location 318A is currently located in front of the device 312 and within the FOV 314. A physical location 318B is located behind the device 312 and not within the FOV 314. Another physical location 318C, finally, is also located behind the device 312 and not within the FOV 314. While both of the physical locations 318B-C are here positioned to the left of the device 312, the physical location 318C is currently closer to the line 316 than is the physical location 318B.

[0052] On the device 304, the GUI 310 here includes a map area 320 and an AR area 322. In the map area, POI objects 324A-C are currently visible. The POI object 324A here is associated with the POI that is situated at the physical location 318A. Similarly, the POI object 324B is associated with the POI of the physical location 318B, and the POI object 324C with the POI of the physical location 318C. As such, the user can inspect the POI objects 324A-C in the map area 320 to gain insight into the positions of the POIs. The map area 320 can have any suitable shape and/or orientation. In some implementations, the map area 320 can be similar or identical to any map area described herein. For example, and without limitation, the map area 320 can be similar or identical to the map area 104 (FIG. 1A) or to the map 124 (FIG. 1B).

[0053] Assume now that the user makes a recognizable input into the device 304. For example, the user changes the inclination of the device 308 from that shown in FIG. 3A to that of FIG. 3B. This can cause one or more changes to occur on the device 304. In some implementations, the map area 320 can recede. For example, the amount of the map area 320 visible on the GUI 310 can be proportional to, or otherwise have a direct relationship with, the inclination of the device 308.
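That direct relationship could be as simple as a linear map from device pitch to visible map fraction, with no threshold inclination (compare claims 10-12). A minimal sketch, assuming a 0-90 degree pitch convention that is not specified in the patent:

```kotlin
// pitchDeg = 0 when the device is flat (map mode), 90 when fully upright
// (AR mode). The visible map fraction varies linearly in between.
fun mapAreaFraction(pitchDeg: Double): Double {
    val clamped = pitchDeg.coerceIn(0.0, 90.0)
    return 1.0 - clamped / 90.0  // flat: full map; upright: map fully receded
}
```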

[0054] Another change based on the difference in inclination can be a transition of one or more POI objects in the GUI 310. Any of multiple kinds of transitions can be done. For example, the system can determine that the physical location 318A is within the FOV 314. Based on this, transition of the POI object 324A can be triggered, as schematically indicated by an arrow 326A, to a location within the AR area 322 that corresponds to the physical location 318A. In some implementations, software being executed on the device 304 triggers transition of content by causing the device 304 to transition that content on one or more screens. For example, if the AR area 322 contains an image depicting one or more physical locations, the POI object 324A can be placed on that image in a position that corresponds to the physical location of the POI at issue. As such, the transition according to the arrow 326A exemplifies that, in response to determining that the physical location 318A of the POI to which the POI object 324A corresponds is within the FOV 314, the POI object 324A can be placed at a location in the AR area 322 corresponding to the physical location 318A.

[0055] As another example, docking of one or more POI objects at an edge or edges of the GUI 310 can be triggered. In some implementations, software being executed on the device 304 triggers docking of content by causing the device 304 to dock that content on one or more screens. Here, the system can determine that the physical location 318B is not within the FOV 314. Based on this, the POI object 324B can be transitioned, as schematically indicated by an arrow 326B, to an edge of the AR area 322. In some implementations, docking at an edge of the AR area 322 can include docking at an edge of an image presented on the GUI 310. For example, the POI object 324B, which is associated with the POI of the physical location 318B, can be placed at a side edge 328A that is closest to the physical location of that POI, here the physical location 318B. As such, the transition according to the arrow 326B exemplifies that it can be determined that the side edge 328A is closer to the physical location 318B than other edges (e.g., an opposite side edge 328B) of the image. The side edge 328A can then be selected for placement of the POI object 324B based on that determination.

[0056] Similarly, the system can determine that the physical location 318C is not within the FOV 314. Based on this, transition of the POI object 324C can be triggered, as schematically indicated by an arrow 326C, to the side edge 328A. That is, the POI object 324C, which is associated with the POI of the physical location 318C, can be placed at the side edge 328A that is closest to the physical location of that POI, here the physical location 318C. As such, the transition according to the arrow 326C exemplifies that it can be determined that the side edge 328A is closer to the physical location 318C than other edges (e.g., the opposite side edge 328B) of the image. The side edge 328A can then be selected for placement of the POI object 324C based on that determination.

[0057] In some implementations, determinations such as those exemplified above can involve comparisons of angles. For example, determining that the side edge 328A is closer to the physical location 318B than, say, the opposite side edge 328B, can include a determination of an angle between the physical location 318B and the side edge 328A, and a determination of an angle between the physical location 318B and the opposite side edge 328B. These angles can then be compared to make the determination.
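
Sketching that comparison: with a POI's bearing relative to the device heading (see the earlier relativeBearing sketch) and the camera's horizontal FOV, the angular distance to each side boundary can be compared directly. The geometry below is one plausible reading of paragraph [0057], not the patent's definitive method:

```kotlin
import kotlin.math.abs

enum class SideEdge { LEFT, RIGHT }

// The left and right view boundaries sit at -fov/2 and +fov/2 relative to
// the device heading. Dock the POI object at whichever boundary subtends
// the smaller angle to the POI's relative bearing.
fun dockingEdge(relBearingDeg: Double, horizontalFovDeg: Double): SideEdge {
    val half = horizontalFovDeg / 2
    val angleToLeft = abs(relBearingDeg + half)   // distance to the -fov/2 boundary
    val angleToRight = abs(relBearingDeg - half)  // distance to the +fov/2 boundary
    return if (angleToLeft < angleToRight) SideEdge.LEFT else SideEdge.RIGHT
}
```

Equivalently, with the bearing normalized to (-180, 180], a negative value docks left and a positive value docks right, which matches the docking of the physical locations 318B-C at the side edge 328A in FIG. 3B.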

[0058] Assume now that the user further inclines the device 308 relative to the horizontal plane. FIG. 3C illustrates that further recession of the map area 320 can be triggered in response. In some implementations, software being executed on the device 304 triggers recession of content by causing that content to recede on one or more screens. For example, the size of the map area 320 can be proportional to, or otherwise directly dependent on, the amount of tilt. This can allow more of the AR area 322 to be visible, in which the POI objects 324A-C are located.

[0059] Assume now that the user rotates the device 312 in some direction. For example, FIG. 3C illustrates that the device 312 is rotated clockwise in an essentially horizontal plane, as schematically illustrated by arrows 330. This changes the FOV 314, as defined by the boundaries 314A-B, and also the line 316. Eventually, the device 312 may have the orientation shown in FIG. 3G as a result of such a rotation. That is, the device 312 then has an orientation where a modified FOV 314' includes the physical location 318A but neither of the physical locations 318B-C. The physical location 318B, moreover, continues to be situated behind and to the left of the device 312, because the physical location 318B is on the same side of the line 316 as in, say, FIG. 3A.

……
……
……
