Sony Patent | Image Generating Apparatus And Image Display Control Apparatus

Patent: Image Generating Apparatus And Image Display Control Apparatus

Publication Number: 10659742

Publication Date: 2020-05-19

Applicants: Sony

Abstract

An image generating apparatus generates a panoramic image by transforming at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, into such an area that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and placing the transformed area on a plane, and outputs the generated panoramic image.

TECHNICAL FIELD

The present invention relates to an image generating apparatus for generating a panoramic image, an image display control apparatus for displaying a panoramic image, an image generating method, a program, and image data.

BACKGROUND ART

Equidistant cylindrical projection is known as an image format for a panoramic image in which a whole sky scene as viewed from an observation point is projected onto a two-dimensional plane. According to this projection, an omniazimuth scene that spans 360 degrees horizontally and spans 180 degrees vertically is included in a rectangular shape of image data which has an aspect ratio of 1:2. By using such a panoramic image, it is possible to realize a panorama viewer for displaying a scene in any desired direction depending on how the user manipulates the direction of its viewpoint, for example.

SUMMARY

Technical Problem

According to the above-described image format of the equidistant cylindrical projection, the entire upper side of the image corresponds to one point at the zenith (directly above), and the entire lower side thereof corresponds to one point at the nadir (directly below). Therefore, in regions near the upper side and the lower side (regions including scenes directed nearly directly above and directly below the observation point), the amount of information per pixel is extremely small compared with regions in the middle of the image which include scenes at a height near the horizon, resulting in a large amount of wasteful information.
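The redundancy can be quantified: every row of an equidistant cylindrical image spans the full image width, while the circumference of the corresponding latitude circle shrinks in proportion to the cosine of the latitude. A minimal sketch (the function name is illustrative, not from the patent):

```python
import math

def equirect_oversampling(theta_deg):
    """Horizontal oversampling factor of an equidistant cylindrical row
    at latitude theta: the row always holds the same number of pixels,
    but the latitude circle's circumference shrinks as cos(theta)."""
    return 1.0 / math.cos(math.radians(theta_deg))

# Near the horizon the sampling is roughly uniform; near the poles
# each scene point is smeared over many pixels.
print(round(equirect_oversampling(0), 2))    # 1.0
print(round(equirect_oversampling(60), 2))   # 2.0
print(round(equirect_oversampling(89), 1))   # 57.3
```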

The present invention has been made in view of the above situation. It is an object of the present invention to provide an image generating apparatus, an image display control apparatus, an image generating method, a program, and image data which are capable of reducing wasteful information contained in a panoramic image.

Solution to Problem

An image generating apparatus according to the present invention includes a panoramic image generating unit configured to generate a panoramic image by transforming at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, into such an area that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and placing the transformed area on a plane, and an image output unit configured to output the generated panoramic image.

An image display control apparatus according to the present invention includes an acquiring unit configured to acquire a panoramic image generated by transforming at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, into such an area that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and placing the transformed area on a plane, and a rendering unit configured to render a display image representing a scene in a predetermined visual field range on the basis of the acquired panoramic image, and control a display apparatus to display the rendered display image on a screen thereof.

A method of generating an image according to the present invention includes a step of generating a panoramic image by transforming at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, into such an area that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and placing the transformed area on a plane, and a step of outputting the generated panoramic image.

A program according to the present invention enables a computer to function as means for generating a panoramic image by transforming at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, into such an area that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and placing the transformed area on a plane, and means for outputting the generated panoramic image. The program may be provided as being stored in a non-transitory information storage medium that can be read by a computer.

Image data according to the present invention represents a transformed area that is transformed from at least one divided area including a range onto which a scene viewed from an observation point is projected, out of eight divided areas obtained by dividing the surface of a sphere having at least a partial range onto which the scene is projected, with three planes that pass through the center of the sphere and are orthogonal to each other, such that the number of pixels corresponding to mutually equal latitudes is progressively reduced toward higher latitudes, and that is placed on a plane.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1A is a perspective front view of a hypothetical sphere onto which there is projected a whole sky scene that is contained in a panoramic image according to a first example generated by an image generating apparatus according to an embodiment of the present invention.

FIG. 1B is a perspective rear view of the hypothetical sphere onto which there is projected the whole sky scene contained in the panoramic image according to the first example.

FIG. 1C is a front elevational view of the hypothetical sphere onto which there is projected the whole sky scene contained in the panoramic image according to the first example.

FIG. 2 is a diagram depicting a panoramic image according to equidistant cylindrical projection.

FIG. 3 is a diagram depicting the panoramic image according to the first example.

FIG. 4A is a perspective front view of a hypothetical sphere onto which there is projected a whole sky scene that is contained in a panoramic image according to a second example generated by the image generating apparatus according to the embodiment of the present invention.

FIG. 4B is a perspective rear view of the hypothetical sphere onto which there is projected the whole sky scene contained in the panoramic image according to the second example.

FIG. 4C is a front elevational view of the hypothetical sphere onto which there is projected the whole sky scene contained in the panoramic image according to the second example.

FIG. 4D is a rear elevational view of the hypothetical sphere onto which there is projected the whole sky scene contained in the panoramic image according to the second example.

FIG. 5 is a diagram depicting the panoramic image according to the second example.

FIG. 6 is a diagram depicting a panoramic image according to a third example.

FIG. 7 is a diagram depicting an example of a pixel layout of a panoramic image generated by the image generating apparatus according to the embodiment of the present invention.

FIG. 8 is a diagram depicting an example of a pixel layout of a panoramic image which is of a rectangular shape.

FIG. 9 is a diagram depicting another example of a pixel layout of a panoramic image which is of a rectangular shape.

FIG. 10 is a block diagram depicting an arrangement of an image display system including the image generating apparatus and an image display control apparatus according to the embodiment of the present invention.

FIG. 11 is a functional block diagram depicting functions of the image display system.

FIG. 12 is a diagram illustrative of a sampling process for rendering a display image.

FIG. 13 is a diagram depicting an example of a panoramic image with sampling pixel strings added thereto.

DESCRIPTION OF EMBODIMENT

An embodiment of the present invention will be described in detail below on the basis of the drawings.

An image generating apparatus according to the present embodiment generates a panoramic image of an image format, which is different from an equidistant cylindrical projection, including a whole sky scene as viewed from an observation point. The panoramic image generated by the image generating apparatus according to the embodiment will hereinafter be referred to as a panoramic image P. The panoramic image P is represented by two-dimensional (planar) image data including the whole sky scene. The whole sky signifies all azimuths that span 360 degrees horizontally (in leftward and rightward directions) and span 180 degrees vertically (in upward and downward directions) from the zenith to the nadir as seen from the observation point.

Three examples of the image format of the panoramic image P will be described below in comparison with a panoramic image according to the equidistant cylindrical projection. The first example of the panoramic image P will first be described below. The panoramic image P according to the first example will hereinafter be referred to as a first panoramic image P1. The panoramic image generated according to the equidistant cylindrical projection will hereinafter be referred to as an equidistant cylindrical image P0.

The whole sky scene as viewed from the observation point is projected onto a hypothetical sphere around the position of the observation point. The hypothetical sphere onto which the whole sky scene is projected is referred to as a sphere S. FIGS. 1A through 1C depict the sphere S, FIG. 1A being a perspective front view as viewed from above, FIG. 1B a perspective rear view as viewed from below, and FIG. 1C a front elevational view. The position of a point E1 to be described later is in a frontal direction. On the surface of the sphere S, a point corresponding to the zenith (directly above the observation point) is referred to as a point U, and a point corresponding to the nadir (directly below the observation point) is referred to as a point D. The point U and the point D are on opposite sides of the sphere S across the center thereof. A great circle of the sphere S that is perpendicular to a straight line UD corresponds to the astronomical horizon as viewed from the observation point, and the scene viewed horizontally from the observation point is projected onto the great circle. A plane including the great circle of the sphere S that is perpendicular to the straight line UD will be referred to as a horizontal plane. A plane that is orthogonal to the horizontal plane will be referred to as a vertical plane.

According to the first example, a position on the surface of the sphere S is expressed by a coordinate system of latitudes θ and longitudes φ. A point F on the horizontal plane is assumed to be the origin (θ=0, φ=0) of the coordinate system. The latitude θ of a certain point on the sphere S is expressed as an angle formed between a straight line interconnecting that point and the center of the sphere S and the horizontal plane. The longitude φ of the point is expressed by an angle formed between a great circle of the sphere S that includes that point, the point U, and the point D and a great circle of the sphere S that includes the point F, the point U, and the point D. As depicted in FIG. 1C, the direction from the horizontal plane toward the zenith is referred to as a positive direction of latitudes θ. Therefore, the latitude θ of the point U is defined as π/2, and the latitude θ of the point D as -π/2. The right-hand direction from the point F as it is viewed from the observation point is referred to as a positive direction of longitudes φ.

Four points on the sphere S that are angularly spaced by 90 degrees along the horizontal plane are referred to as points E1 through E4. Specifically, the latitudes θ of these four points are all 0, and the longitudes φ of the points E1, E2, E3, and E4 are π/4, 3π/4, 5π/4 (or -3π/4), and -π/4, respectively. For example, if the observer at the observation point faces in the direction of the point E1, then the point E2 is in the right-hand direction of the observer, the point E3 is in the backward direction of the observer, and the point E4 is in the left-hand direction of the observer. The point E1 and the point E3 are on opposite sides of the sphere S across the center thereof, and the point E2 and the point E4 are on opposite sides of the sphere S across the center thereof. A straight line E1E3 and a straight line E2E4 are orthogonal to each other on the horizontal plane. In FIGS. 1A through 1C, the line of latitude at θ=0 that is included in the horizontal plane and four lines of longitude that pass through the points E1 through E4 are indicated by solid lines. Several lines of latitude are indicated by broken lines.
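As an illustration of this coordinate system, the sketch below converts a three-dimensional direction into a latitude and longitude. The axis conventions (+x toward the point F, +y toward the observer's right, +z toward the zenith U) are assumptions made for the example, not specified in the text:

```python
import math

# A minimal sketch of the first example's spherical coordinates.
# Assumed axes (not specified in the text): +x toward the point F,
# +y toward the observer's right, +z toward the zenith U.
def to_lat_lon(x, y, z):
    """Return (theta, phi) of a direction (x, y, z) on the sphere S:
    theta is the elevation above the horizontal plane, phi the angle
    from the great circle through F, U, and D, positive to the right."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.asin(z / r)       # U -> pi/2, D -> -pi/2
    phi = math.atan2(y, x)         # F -> 0, E1 -> pi/4
    return theta, phi

theta, phi = to_lat_lon(0.0, 0.0, 1.0)   # the zenith U
print(theta == math.pi / 2)              # True
```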

Furthermore, eight areas of the surface of the sphere S divided by three planes that pass through the center of the sphere S and that are orthogonal to each other are expressed as divided areas A1 through A8. According to the first example, the three orthogonal planes are the horizontal plane including the points E1 through E4, a vertical plane including the points E1, E3, U, and D, and another vertical plane including the points E2, E4, U, and D. Specifically, the area surrounded by a line of longitude interconnecting the points U and E1, a line of latitude interconnecting the points E1 and E2, and a line of longitude interconnecting the points E2 and U is defined as the divided area A1. Similarly, the area surrounded by the points U, E2, and E3 is defined as the divided area A2, the area surrounded by the points U, E3, and E4 as the divided area A3, the area surrounded by the points U, E4, and E1 as the divided area A4, the area surrounded by the points D, E1, and E2 as the divided area A5, the area surrounded by the points D, E2, and E3 as the divided area A6, the area surrounded by the points D, E3, and E4 as the divided area A7, and the area surrounded by the points D, E4, and E1 as the divided area A8. Each of these divided areas A1 through A8 is an area surrounded by three lines of latitude and longitude, each having a length corresponding to 1/4 of the circumference of a great circle of the sphere S, and their sizes and shapes are equal to each other.
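Because the three cutting planes pass through the center, the divided area containing a given direction can be found from three sign tests. The sketch below assumes the same axes as before (+x toward F, +y to the right, +z toward U, so E1 lies in the direction x = y); the tie-breaking of boundary points with >= is an arbitrary choice for the example:

```python
def divided_area(x, y, z):
    """Classify a direction on the sphere S into one of the eight divided
    areas A1..A8 cut by the horizontal plane (z = 0) and the two vertical
    planes through E1-E3 and E2-E4.  Assumed axes: +x toward F, +y to the
    right, +z toward U; E1 lies in the direction x = y, z = 0."""
    upper = z >= 0                  # A1..A4 touch the zenith U
    s1 = x + y >= 0                 # side of the plane through E2, E4, U, D
    s2 = x - y >= 0                 # side of the plane through E1, E3, U, D
    table = {
        (True,  True,  False): 1, (True,  False, False): 2,
        (True,  False, True):  3, (True,  True,  True):  4,
        (False, True,  False): 5, (False, False, False): 6,
        (False, False, True):  7, (False, True,  True):  8,
    }
    return "A%d" % table[(upper, s1, s2)]

print(divided_area(0.0, 1.0, 0.5))   # A1  (upward, between E1 and E2)
```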

FIG. 2 depicts the equidistant cylindrical image P0 including a scene projected onto the sphere S. The point F at the longitude φ=0 is at the center of the equidistant cylindrical image P0. According to the equidistant cylindrical projection, the scene projected onto the surface of the sphere S is transformed into the equidistant cylindrical image P0, which is of a rectangular shape having an aspect ratio of 1:2, in order to keep vertical and horizontal positional relationships as viewed from the observation point. In the equidistant cylindrical image P0, the lines of latitude of the sphere S extend parallel to each other in the horizontal directions, and the lines of longitude of the sphere S extend parallel to each other in the vertical directions, with all the lines of latitude and all the lines of longitude being orthogonal to each other. The divided areas A1 through A8 are transformed into respective square-shaped areas. The equidistant cylindrical image P0 has an upper side which corresponds in its entirety to the point U and a lower side which corresponds in its entirety to the point D. Because of the above transformation, areas positioned in the vicinity of the points U and D on the surface of the sphere S (high-latitude areas) are expanded horizontally in the equidistant cylindrical image P0. Therefore, in the vicinity of the upper and lower sides of the equidistant cylindrical image P0, the amount of information contained per unit pixel is reduced compared with low-latitude areas in the middle of the image.
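The equidistant cylindrical transformation is linear in both angles. A sketch of the pixel mapping for a 2N x 4N image (function and layout conventions are illustrative; the image center is taken to be the point F, and row 0 is the upper side corresponding to the zenith U):

```python
import math

def equirect_pixel(theta, phi, n):
    """Map a latitude/longitude to (column, row) of a 2N x 4N
    equidistant cylindrical image P0."""
    col = (phi + math.pi) / (2 * math.pi) * (4 * n)      # longitude -> x
    row = (math.pi / 2 - theta) / math.pi * (2 * n)      # latitude  -> y
    return int(min(col, 4 * n - 1)), int(min(row, 2 * n - 1))

print(equirect_pixel(0.0, 0.0, 512))   # (1024, 512): the point F at the center
```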

FIG. 3 depicts the first panoramic image P1 including a scene projected onto the sphere S. As depicted in FIG. 3, the first panoramic image P1 is of a square shape as a whole. The center of the square shape corresponds to the point D, and the point U which is opposite to the point D on the sphere S corresponds to the four corners of the square shape. In other words, the four vertexes of the first panoramic image P1 correspond to the single point U on the sphere S. The midpoint of the upper side of the square shape corresponds to the point E1, the midpoint of the right side thereof to the point E2, the midpoint of the lower side thereof to the point E3, and the midpoint of the left side thereof to the point E4. Of the four vertexes of the first panoramic image P1 which correspond to the single point U on the sphere S, the upper right vertex is defined as a point U1, the lower right vertex as a point U2, the lower left vertex as a point U3, and the upper left vertex as a point U4.

The line of latitude at θ=0 on the sphere S forms a square E1E2E3E4 in the first panoramic image P1, whose vertexes are the midpoints of the four sides and whose center is the point D. Lines of latitude at θ<0 form squares in the first panoramic image P1 that are centered on the point D and make 90-degree bends where they intersect the straight lines E1D, E2D, E3D, and E4D. On the other hand, lines of latitude at θ>0 are each divided into four parts that lie in the four squares E1U1E2D, DE2U2E3, U4E1DE4, and E4DE3U3 provided by dividing the first panoramic image P1 into four pieces. These four squares correspond to respective four areas obtained when the surface of the sphere S is divided into four pieces by two vertical planes that are orthogonal to each other. In each of these squares, lines of latitude (i.e., lines where planes orthogonal to the two vertical planes cross the sphere S) are juxtaposed parallel to a diagonal line of the square. Lines of longitude on the sphere S extend radially from the point D at the center of the first panoramic image P1, bend where they intersect the line of latitude at θ=0, and extend to the corners of the respective squares, which correspond to the point U.

Each of the divided areas A1 through A8 that are obtained by dividing the surface of the sphere S into eight pieces is transformed into an area shaped as a right isosceles triangle in the first panoramic image P1. In the first panoramic image P1, each of the divided areas is transformed into a shape relatively close to its shape on the original spherical surface compared with the equidistant cylindrical image P0, where each divided area is transformed into a square shape. Therefore, the difference between the amount of information contained per unit pixel in high-latitude areas and the amount of information contained per unit pixel in low-latitude areas is reduced compared with the equidistant cylindrical image P0. Hereinafter, areas in a panoramic image P that are converted from the divided areas will be referred to as transformed areas. For the convenience of illustration, the individual transformed areas in the panoramic image P are denoted by the same reference symbols as those of the corresponding divided areas on the sphere S. For example, a transformed area in the first panoramic image P1 which is obtained by transforming the divided area A1 on the sphere S is referred to as a transformed area A1.

The associated relationship between positional coordinates on the surface of the sphere S and positional coordinates in the first panoramic image P1 will be described below. It is assumed that the positional coordinates in the first panoramic image P1 are represented by an orthogonal coordinate system where the x-axis extends in the horizontal directions, the y-axis extends in the vertical directions, and the origin is located at the central position as depicted in FIG. 3. In the orthogonal coordinate system, the right side of the first panoramic image P1 is indicated by x=1, the left side thereof by x=-1, the upper side thereof by y=1, and the lower side thereof by y=-1.

In this case, a latitude θ and a longitude φ on the surface of the sphere S are expressed by the following equations using variables u, v, and a:

θ=(π/2)v

φ=(π/4)u

where u, v, and a are expressed by the following equations depending on positional coordinates (x, y) in the first panoramic image P1:

a=|x|+|y|, v=a-1

u=1+2(x-max(0,v))/(1-|v|) (when x≥0 and y≥0)

u=3+2(-y-max(0,v))/(1-|v|) (when x≥0 and y≤0)

u=5+2(-x-max(0,v))/(1-|v|) (when x≤0 and y≤0)

u=-1+2(y-max(0,v))/(1-|v|) (when x≤0 and y≥0)

The associated relationship between positions on the sphere S and positions in the first panoramic image P1 is defined by these equations. As can be understood from these equations, the latitude θ in each of the divided areas is linearly related to both x and y.
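The transformation described above can be sketched in code. This is an illustrative reconstruction consistent with the properties stated in the text (the latitude is linear in |x| and |y| within each divided area, longitude lines run radially from D, and the square E1E2E3E4 maps to the horizon), not the patent's reference implementation. The poles D (x = y = 0) and U (the corners) are singular, since longitude is undefined there:

```python
import math

def p1_to_sphere(x, y):
    """Map coordinates (x, y) in the first panoramic image P1
    (right side x = 1, left x = -1, top y = 1, bottom y = -1)
    to a latitude/longitude (theta, phi) on the sphere S."""
    a = abs(x) + abs(y)
    v = a - 1.0                      # theta is linear in |x| + |y|
    if x >= 0 and y >= 0:            # quadrant containing E1, U1, E2
        u = 1 + 2 * (x - max(0.0, v)) / (1 - abs(v))
    elif x >= 0 and y <= 0:          # quadrant containing E2, U2, E3
        u = 3 + 2 * (-y - max(0.0, v)) / (1 - abs(v))
    elif x <= 0 and y <= 0:          # quadrant containing E3, U3, E4
        u = 5 + 2 * (-x - max(0.0, v)) / (1 - abs(v))
    else:                            # quadrant containing E4, U4, E1
        u = -1 + 2 * (y - max(0.0, v)) / (1 - abs(v))
    return math.pi / 2 * v, math.pi / 4 * u   # (theta, phi)

theta, phi = p1_to_sphere(0.0, 1.0)       # midpoint of the upper side
print(theta == 0.0, phi == math.pi / 4)   # True True: the point E1
```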

Except for the points (x=1, x=-1, y=1, y=-1) on the outer circumference of the first panoramic image P1, the positional coordinates on the sphere S and the positional coordinates in the first panoramic image P1 are held in one-to-one correspondence with each other. Furthermore, pixels that are adjacent to each other in the first panoramic image P1 correspond to areas that are adjacent to each other on the sphere S. In other words, although there are locations where lines of latitude and lines of longitude bend in the first panoramic image P1, discrete areas that are separate from each other on the sphere S are not transformed such that they are adjacent to each other in the first panoramic image P1. The points on the outer circumference of the first panoramic image P1 are contiguous, on the sphere S, to corresponding locations on the same sides when each side of the square shape is folded back on itself about its midpoint. For example, the n-th pixel from the left end and the n-th pixel from the right end of the upper side of the square shape correspond to adjacent areas on the sphere S.

In the equidistant cylindrical image P0, the amount of information per unit pixel is the largest in low-latitude areas (middle areas of the image). If the number of pixels in the vertical directions of the equidistant cylindrical image P0 is indicated by 2N, then the number of pixels in the horizontal directions thereof is indicated by 4N, so that the number of pixels corresponding to a visual field range of 90 degrees (e.g., a range from the point E1 to the point E2) on the horizontal plane is N. In contrast, in the first panoramic image P1 where the number of pixels in the vertical directions is indicated by 2N, though the pixels corresponding to the visual field range of 90 degrees on the horizontal plane are arranged obliquely, along the straight line E1E2 in FIG. 3 for example, the number of those pixels is N as with the equidistant cylindrical image P0. Therefore, the first panoramic image P1 is able to provide an essentially equivalent image quality in low-latitude areas compared with the equidistant cylindrical image P0 that has the same number of pixels in the vertical directions. In a visual field range of 180 degrees along the vertical directions from the zenith (the point U) via the horizontal plane to the nadir (the point D), the number of pixels corresponding to this visual field range in the equidistant cylindrical image P0 is in agreement with the number 2N of pixels in the vertical directions of the image. In the first panoramic image P1, in contrast, the visual field range corresponds to a route from the point U1 via the point E1 to the point D in FIG. 3, for example, so that the number of pixels corresponding to the visual field range is represented by (2N-1), which is produced by subtracting 1 from the number 2N of pixels of one side of the first panoramic image P1. Here, 1 is subtracted because the pixel at the position of the point E1 is an endpoint of the straight line U1E1 and also an endpoint of the straight line E1D and hence is shared by the two segments. At any rate, since the number of pixels in the vertical directions of the first panoramic image P1 is essentially the same as in the equidistant cylindrical image P0, the first panoramic image P1 is able to offer an essentially equivalent resolution for a visual field range in the vertical directions. At higher latitudes, the number of pixels of the first panoramic image P1 decreases. However, as the equidistant cylindrical image P0 contains a lot of wasteful information in high-latitude areas, the image quality in high-latitude areas of the first panoramic image P1 is hardly degraded compared with the equidistant cylindrical image P0. In other words, the first panoramic image P1 is comparable in image quality, throughout the whole sky, to the equidistant cylindrical image P0 which has the same number of pixels in the vertical directions as the first panoramic image P1.

Provided that the first panoramic image P1 and the equidistant cylindrical image P0 have the same number of pixels in the vertical directions, the number of pixels in the horizontal directions of the first panoramic image P1 is exactly one-half of that of the equidistant cylindrical image P0. Therefore, on the whole, the first panoramic image P1 offers an image quality essentially equivalent to that of the equidistant cylindrical image P0 with one-half of the number of pixels. Consequently, using the first panoramic image P1, it is possible to reduce the image data size without a loss of image quality compared with the equidistant cylindrical image P0. In addition, the first panoramic image P1 makes it possible to achieve a higher image resolution without involving an increase in the image data size compared with the equidistant cylindrical image P0. Furthermore, when a panoramic image is to be generated as a moving image, the frame rate can be increased and the processing burden required to encode and decode the moving image can be reduced. Moreover, when a panoramic image is to be displayed as a three-dimensional image, image data including two panoramic images for the left and right eyes can be provided with a number of pixels equivalent to that of one equidistant cylindrical image P0.
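The pixel counts above can be made concrete with a small worked example, here for an arbitrarily chosen N = 512:

```python
# Worked pixel-count comparison from the text, for N = 512.
N = 512
equirect = (2 * N) * (4 * N)    # 2N x 4N equidistant cylindrical image P0
p1 = (2 * N) * (2 * N)          # 2N x 2N first panoramic image P1

print(equirect)                 # 2097152
print(p1)                       # 1048576
print(p1 / equirect)            # 0.5 -> half the pixels overall
print(2 * N - 1)                # 1023 pixels on the route U1 -> E1 -> D
```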

Next, the second example of the image format of the panoramic image P will be described below. The panoramic image P according to the second example will hereinafter be referred to as a second panoramic image P2. According to the second example, for transforming the positions on the sphere S into the positions in the second panoramic image P2, two hemispheres provided by dividing the surface of the sphere S into two halves are transformed using coordinate systems that are different from each other. The definition of positional coordinates on the sphere S according to the second example will be described below with reference to FIGS. 4A through 4D.

FIG. 4A is a perspective front view of the sphere S as viewed from above. FIG. 4B is a perspective rear view of the sphere S as viewed from below. FIG. 4C is a front elevational view of the sphere S and FIG. 4D is a rear elevational view of the sphere S. The position of the point F is in a frontal direction. In the second example, as is the case with FIGS. 1A through 1C, a point corresponding to the zenith is referred to as a point U, and a point corresponding to the nadir is referred to as a point D. Four points on the sphere S that are angularly spaced by 90 degrees along the horizontal plane are referred to as points F, L, B, and R. When the observer at the center (observation point) of the sphere S faces in the direction of the point F (frontal direction), the right-hand direction points to the point R, the backward direction to the point B, and the left-hand direction to the point L.

With respect to the frontal half of the sphere S, i.e., the range thereof depicted in FIG. 4C, the positional coordinates are defined by latitudes θ and longitudes φ similar to those of the first example described above. In other words, the lines of latitude extend parallel to the horizontal plane, and the lines of longitude represent the circumference of great circles of the sphere S that pass through the point U and the point D. The hemispherical surface of the frontal half of the sphere S will hereinafter be referred to as a frontal region, and the coordinate system that indicates positions in the frontal region as a frontal coordinate system. In FIGS. 4A and 4C, several lines of latitude are indicated by broken lines in the frontal region. In the frontal coordinate system, the point F is assumed to be the origin (θ=0, φ=0), and, as indicated in FIG. 4C by the arrows, the direction from the point F toward the zenith (point U) is assumed to be a positive direction of latitudes θ, and the direction from the point F toward the point R is assumed to be a positive direction of longitudes φ. As with the first example, the point U is defined as θ=π/2 and the point D as θ=-π/2. Furthermore, the point R is defined as θ=0, φ=π/2, and the point L as θ=0, φ=-π/2.

With respect to the back half of the sphere S, i.e., the range thereof depicted in FIG. 4D, latitudes θ and longitudes φ are defined in directions different from those in the frontal region. Specifically, latitudes θ and longitudes φ are defined in directions that are inclined 90 degrees to those in the frontal region. The lines of latitude represent the circumferences of cross sections of the sphere S that are perpendicular to a straight line LR, and the lines of longitude represent the circumferences of great circles of the sphere S that pass through the point L and the point R. The hemispherical surface of the back half of the sphere S will hereinafter be referred to as a back region, and the coordinate system that indicates positions in the back region as a back coordinate system. In FIGS. 4B and 4D, several lines of latitude in the back region defined by the back coordinate system are indicated by dot-and-dash lines. As depicted in FIG. 4D, in the back coordinate system, the lines of latitude extend parallel to a straight line UD (i.e., orthogonal to the lines of latitude in the frontal coordinate system) as viewed from behind the sphere S. In the back coordinate system, the point B is assumed to be the origin (θ=0, φ=0), and, as indicated by the arrows, the direction from the point B toward the point L is assumed to be a positive direction of latitudes θ, and the direction from the point B toward the point D is assumed to be a positive direction of longitudes φ. Consequently, the point U, the point L, the point D, and the point R, which are positioned on the boundary between the frontal region and the back region, are expressed by positional coordinates in the back coordinate system that are different from those in the frontal coordinate system. Specifically, in the back coordinate system, the point L is defined as θ=π/2 and the point R as θ=-π/2. Furthermore, the point D is defined as θ=0, φ=π/2, and the point U as θ=0, φ=-π/2.
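To make the two coordinate systems concrete, the following Python sketch converts frontal and back positional coordinates (θ, φ) into unit direction vectors. The axis convention R = +x, U = +y, F = +z is an illustrative assumption, not something specified in this description:

```python
import math

# Assumed axes (illustrative): R = +x, U = +y, F = +z,
# so L = -x, D = -y, B = -z.

def frontal_to_vec(theta, phi):
    """Frontal coordinate system: origin at F, latitude theta toward U,
    longitude phi toward R; lines of latitude are horizontal circles."""
    return (math.cos(theta) * math.sin(phi),   # x (toward R)
            math.sin(theta),                   # y (toward U)
            math.cos(theta) * math.cos(phi))   # z (toward F)

def back_to_vec(theta, phi):
    """Back coordinate system: origin at B, latitude theta toward L,
    longitude phi toward D; lines of latitude are circles
    perpendicular to the straight line LR."""
    return (-math.sin(theta),                  # x (toward R)
            -math.cos(theta) * math.sin(phi),  # y (toward U)
            -math.cos(theta) * math.cos(phi))  # z (toward F)
```

As a consistency check, a boundary point has different coordinates in the two systems but the same direction vector: point U is (θ=π/2) in the frontal system and (θ=0, φ=-π/2) in the back system, and both forms evaluate to the same vector.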

Furthermore, eight areas of the surface of the sphere S divided by three planes that pass through the center of the sphere S and that are orthogonal to each other are expressed as divided areas A₉ through A₁₆. The three mutually orthogonal planes are a horizontal plane including the point F, the point L, the point B, and the point R, a vertical plane including the point U, the point F, the point D, and the point B, and another vertical plane including the point U, the point L, the point D, and the point R. Specifically, the area surrounded by the point U, the point F, and the point L is defined as the divided area A₉, the area surrounded by the point D, the point F, and the point L as the divided area A₁₀, the area surrounded by the point D, the point R, and the point F as the divided area A₁₁, the area surrounded by the point U, the point F, and the point R as the divided area A₁₂, the area surrounded by the point U, the point B, and the point R as the divided area A₁₃, the area surrounded by the point D, the point B, and the point R as the divided area A₁₄, the area surrounded by the point D, the point L, and the point B as the divided area A₁₅, and the area surrounded by the point U, the point B, and the point L as the divided area A₁₆. Each of the divided areas A₉ through A₁₆ is surrounded by three lines of latitude and longitude, each having a length corresponding to 1/4 of the circumference of a great circle of the sphere S, and their sizes and shapes are equal to each other.
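Because the three dividing planes pass through the center of the sphere S and are mutually orthogonal, the divided area containing a given direction can be read off from the signs of its components. The sketch below assumes the illustrative axis convention R = +x, U = +y, F = +z (an assumption, not stated in the description), under which the dividing planes coincide with the coordinate planes:

```python
def divided_area(x, y, z):
    """Classify a direction (x, y, z) on the sphere S into one of the
    eight divided areas A9-A16 by the octant its signs select.
    Assumed axes (illustrative): R = +x, U = +y, F = +z."""
    if z >= 0:   # frontal half (hemisphere about F)
        if x <= 0:
            return "A9" if y >= 0 else "A10"    # U-F-L / D-F-L
        else:
            return "A12" if y >= 0 else "A11"   # U-F-R / D-R-F
    else:        # back half (hemisphere about B)
        if x >= 0:
            return "A13" if y >= 0 else "A14"   # U-B-R / D-B-R
        else:
            return "A16" if y >= 0 else "A15"   # U-B-L / D-L-B
```

For example, a direction pointing up, forward, and to the left falls in the divided area A₉ bounded by the points U, F, and L.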

FIG. 5 depicts the second panoramic image P2 including a scene projected onto the sphere S. As depicted in FIG. 5, the second panoramic image P2 is of a square shape as a whole, as with the first panoramic image P1. The center of the square shape corresponds to the point F, and the point B, which is opposite to the point F on the sphere S, corresponds to the four corners of the square shape. In other words, the four vertexes of the second panoramic image P2 correspond to the single point B on the sphere S. The midpoint of the left side of the square shape corresponds to the point L, the midpoint of the upper side thereof to the point U, the midpoint of the right side thereof to the point R, and the midpoint of the lower side thereof to the point D. Of the four vertexes which correspond to the point B, the upper right vertex is defined as a point B₁, the lower right vertex as a point B₂, the lower left vertex as a point B₃, and the upper left vertex as a point B₄.

In the second panoramic image P2, the frontal region of the sphere S is transformed into a square shape RULD depicted in FIG. 5. In this square shape, the lines of latitude extend parallel to each other in the horizontal directions (directions parallel to the straight line LR), whereas the lines of longitude extend radially from the point U, and bend at positions where they intersect with the straight line RL and then extend to the point D.

On the other hand, the back region of the sphere S is divided into four areas, each transformed into a transformed area shaped as a right isosceles triangle and disposed outside of the square shape RULD. The positions where the transformed areas are disposed are determined such that areas contiguous on the sphere S are also adjacent to each other in the second panoramic image P2. Specifically, in the second panoramic image P2, as with the first panoramic image P1, the eight divided areas A₉ through A₁₆ into which the surface of the sphere S is divided are transformed into transformed areas A₉ through A₁₆, each shaped as a right isosceles triangle, making up a square panoramic image in which they keep their adjacent relationship on the sphere S. In the transformed areas A₁₃ through A₁₆ that are disposed outside of the square shape RULD, the lines of latitude of the back coordinate system are juxtaposed parallel to the straight line LR, as is the case with the lines of latitude of the frontal coordinate system.

The associated relationship between positional coordinates on the surface of the sphere S and positional coordinates in the second panoramic image P2 will be described below. It is assumed that the positional coordinates in the second panoramic image P2 are represented by an orthogonal coordinate system where the x-axis extends in the horizontal directions, the y-axis extends in the vertical directions, and the origin is located at the central position, as depicted in FIG. 5. In the orthogonal coordinate system, the right side of the second panoramic image P2 is indicated by x=1, the left side thereof by x=-1, the upper side thereof by y=1, and the lower side thereof by y=-1.

In this case, a latitude θ and a longitude φ on the surface of the sphere S are expressed by the following equations using variables u and v:

θ = (π/2)·v
φ = (π/2)·u/(1-|v|) [Equation 3]

where u and v are expressed by the following equations depending on positional coordinates (x, y) in the second panoramic image P2:

Transformed areas A₉, A₁₀, A₁₁, and A₁₂: u=x, v=y
Transformed area A₁₃: u=x-1, v=y-1
Transformed area A₁₄: u=1-x, v=-y-1
Transformed area A₁₅: u=1+x, v=1+y
Transformed area A₁₆: u=-x-1, v=1-y [Equation 4]

The associated relationship between positions on the sphere S and positions in the second panoramic image P2 is defined by these equations. According to the second example, however, as described above, the latitudes θ and longitudes φ in the frontal region are defined by the frontal coordinate system, whereas the latitudes θ and longitudes φ in the back region are defined by the back coordinate system. In the second panoramic image P2, the latitude θ in each of the transformed areas is also linearly related to the positional coordinate y.
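The mapping above can be combined into a single function from image coordinates to sphere coordinates. The Python sketch below selects the transformed area from (x, y), computes (u, v) per area, and then applies the latitude/longitude relation θ = (π/2)·v, φ = (π/2)·u/(1-|v|), which is reconstructed here from the description (the frontal square RULD is the diamond |x|+|y| ≤ 1):

```python
import math

def p2_to_sphere(x, y):
    """Map a point (x, y), each in [-1, 1], in the second panoramic
    image P2 to a latitude/longitude pair (theta, phi) on the sphere S.
    Inside the diamond |x| + |y| <= 1 the result is in the frontal
    coordinate system; in the four corner triangles it is in the back
    coordinate system."""
    if abs(x) + abs(y) <= 1:            # square RULD: areas A9-A12
        system, u, v = "frontal", x, y
    elif x >= 0 and y >= 0:             # A13 (upper right corner, at B1)
        system, u, v = "back", x - 1, y - 1
    elif x >= 0:                        # A14 (lower right corner, at B2)
        system, u, v = "back", 1 - x, -y - 1
    elif y < 0:                         # A15 (lower left corner, at B3)
        system, u, v = "back", 1 + x, 1 + y
    else:                               # A16 (upper left corner, at B4)
        system, u, v = "back", -x - 1, 1 - y
    theta = (math.pi / 2) * v
    # At v = +/-1 the whole row collapses to a single point (e.g. U or D
    # in the frontal system), so phi is taken as 0 there.
    phi = (math.pi / 2) * u / (1 - abs(v)) if abs(v) < 1 else 0.0
    return system, theta, phi
```

As a check on the fold-back property of the outer circumference, the points (1, 0.5) and (1, -0.5), which lie symmetrically about the midpoint R of the right side, map to the same positional coordinates on the sphere S.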

Except for the points (x=1, x=-1, y=1, y=-1) on the outer circumference of the second panoramic image P2, the positional coordinates on the sphere S and the positional coordinates in the second panoramic image P2 are also held in one-to-one correspondence with each other. Furthermore, pixels that are adjacent to each other in the second panoramic image P2 correspond to areas that are adjacent to each other on the sphere S. The points on the outer circumference of the second panoramic image P2 are contiguous, on the sphere S, to locations on the same sides when each side of the square shape is folded back on itself about its midpoint. As with the first panoramic image P1, the second panoramic image P2 offers an image quality essentially equivalent to that of the equidistant cylindrical image P0 with one-half of the number of pixels of the equidistant cylindrical image P0.

According to the second example, unlike the first panoramic image P1, the scene on the frontal side as viewed from the observer (the scene projected onto a hemispherical surface about the point F) is transformed, without being divided, into a square shape whose center is aligned with the center of the second panoramic image P2. Therefore, the second panoramic image P2 is suitable for use in an application where a frontal scene, rather than a back scene, is to be presented to the user.
