Patent: Method and apparatus for scanning and printing a 3d object

Publication Number: 20250310455

Publication Date: 2025-10-02

Assignee: Magic Leap

Abstract

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display. Additional techniques include processing and formatting of the three-dimensional representation data to be printed by a three-dimensional printer so that a three-dimensional model of the object may be formed.

Claims

1-20. (canceled)

21. A system comprising:
at least one processor; and
a computer-readable storage device communicatively coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving an image that depicts an object;
extracting from the image features associated with an object surface of the object, wherein each feature corresponds to a respective location on the object surface;
for each extracted feature, determining whether there is a point in a point cloud that has point coordinates that overlap the location of the extracted feature, wherein each point in the point cloud includes respective point features and respective point coordinates that indicate a location on the object surface that the respective point features represent;
in response to determining there is a particular point with overlapping point coordinates, adjusting one or more point features of the particular point based on the extracted feature and adjusting point coordinates of the particular point based on the location of the extracted feature; and
generating a three-dimensional representation of the object based on information in the point cloud.
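
For illustration only, the operations recited in claim 21 can be sketched as a short Python loop. The Point structure, the overlap radius, and the averaging and confidence-update rules below are assumptions made for this sketch, not part of the claimed method.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Point:
    coords: np.ndarray       # (x, y, z) location on the object surface
    features: list           # descriptors observed at this location
    confidence: float = 0.5  # probability the coordinates are correct (claim 22)

def update_point_cloud(cloud, observations, match_radius=0.01):
    """Merge (feature, location) observations extracted from one image into
    the point cloud, in the manner of claims 21, 23, and 29. The radius test
    and the averaging rule are illustrative assumptions."""
    for feature, location in observations:
        location = np.asarray(location, dtype=float)
        # Look for an existing point whose coordinates overlap the location.
        overlapping = next(
            (p for p in cloud if np.linalg.norm(p.coords - location) < match_radius),
            None,
        )
        if overlapping is not None:
            # Adjust the point's features and coordinates toward the observation.
            overlapping.features.append(feature)
            overlapping.coords = 0.5 * (overlapping.coords + location)
            # Agreement between observations raises the confidence (cf. claim 23).
            overlapping.confidence = min(1.0, overlapping.confidence + 0.1)
        else:
            # No overlapping point: add a new one for this feature (cf. claim 29).
            cloud.append(Point(coords=location, features=[feature]))
    return cloud
```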

22. The system of claim 21, wherein each point in the point cloud includes a respective probability indicating an accuracy with which the point coordinates in the point correctly represent a location on the object surface for the point features included in the point, and wherein the three-dimensional representation is generated by considering the respective probabilities of the points in the point cloud.

23. The system of claim 22, wherein the operations further comprise adjusting the probability in the particular point based on the overlap between the point coordinates of the particular point and the location of the extracted feature.

24. The system of claim 21, wherein a particular point feature in the particular point is generated from features extracted from a plurality of overlapping images, wherein the particular point includes a probability indicating that the particular point feature is the same in the plurality of overlapping images, and wherein in generating the three-dimensional representation, the particular point feature of the particular point is used based on the probability.

25. The system of claim 24, wherein two images are identified as overlapping images in response to performing a cross-correlation identifying common regions of the object surface in the two images.
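
As a rough illustration of the overlap test in claim 25, the sketch below computes a normalized cross-correlation peak between two grayscale frames. The frame format, the threshold value, and the peak test are assumptions; a production pipeline would likely use a more robust registration method.

```python
import numpy as np
from scipy.signal import fftconvolve

def images_overlap(frame_a, frame_b, threshold=0.6):
    """Return True if a cross-correlation peak suggests the two frames share
    a common region of the object surface (cf. claim 25). frame_a and frame_b
    are 2-D grayscale arrays; threshold is an assumed tuning parameter."""
    # Normalize so the peak at a full-overlap alignment equals the
    # correlation coefficient of the two frames.
    a = (frame_a - frame_a.mean()) / (frame_a.std() * frame_a.size)
    b = (frame_b - frame_b.mean()) / frame_b.std()
    # Cross-correlation is convolution with one input flipped on both axes.
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")
    return corr.max() >= threshold
```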

26. The system of claim 21, wherein the system is a portable device including a camera that takes the image.

27. The system of claim 21, wherein generating the three-dimensional representation of the object comprises:
calculating a convex hull for the points in the point cloud; and
adding to the convex hull texture information based on point features corresponding to a texture of the surface of the object.
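
A minimal sketch of the hull construction in claim 27 appears below, using SciPy's ConvexHull. Reducing the texture information to one RGB value per point is an assumption made here to keep the example short.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_mesh_with_texture(coords, colors):
    """coords: (N, 3) array of point-cloud locations; colors: (N, 3) array of
    RGB values standing in for texture point features. Returns the hull's
    triangle faces plus a color for each hull vertex (cf. claim 27)."""
    hull = ConvexHull(coords)
    faces = hull.simplices                 # (M, 3) vertex indices per triangle
    vertex_colors = colors[hull.vertices]  # texture info for the hull vertices
    return faces, vertex_colors
```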

28. The system of claim 21, wherein the point cloud is a depth map in which each point's coordinates include a respective point depth, and wherein the operations further comprise:
determining a distance between a location on the object surface and a camera that took the image; and
adjusting a point depth of the particular point based on the determined distance.
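
One way to picture the depth adjustment in claim 28 is a confidence-weighted blend, sketched below. The linear weighting rule is an assumption for illustration, not the claimed method.

```python
def adjust_point_depth(point_depth, confidence, measured_distance):
    """Blend a depth-map point's stored depth with a newly measured
    camera-to-surface distance (cf. claim 28). A point held with high
    confidence moves less; the linear weighting is an assumed rule."""
    weight = 1.0 - confidence  # low confidence -> trust the new measurement more
    return (1.0 - weight) * point_depth + weight * measured_distance
```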

29. The system of claim 21, wherein the operations further comprise, in response to determining there is no point with overlapping coordinates, adding a new point to the point cloud to represent the extracted feature and the location associated with the extracted feature.

30. A computer-implemented method comprising:
receiving an image that depicts an object;
extracting from the image features associated with an object surface of the object, wherein each feature corresponds to a respective location on the object surface;
for each extracted feature, determining whether there is a point in a point cloud that has point coordinates that overlap the location of the extracted feature, wherein each point in the point cloud includes respective point features and respective point coordinates that indicate a location on the object surface that the respective point features represent;
in response to determining there is a particular point with overlapping point coordinates, adjusting one or more point features of the particular point based on the extracted feature and adjusting point coordinates of the particular point based on the location of the extracted feature; and
generating a three-dimensional representation of the object based on information in the point cloud.

31. The method of claim 30, wherein each point in the point cloud includes a respective probability indicating an accuracy with which the point coordinates in the point correctly represent a location on the object surface for the point features included in the point, and wherein the three-dimensional representation is generated by considering the respective probabilities of the points in the point cloud.

32. The method of claim 31, further comprising adjusting the probability in the particular point based on the overlap between the point coordinates of the particular point and the location of the extracted feature.

33. The method of claim 30, wherein a particular point feature in the particular point is generated from features extracted from a plurality of overlapping images, wherein the particular point includes a probability indicating that the particular point feature is the same in the plurality of overlapping images, and wherein in generating the three-dimensional representation, the particular point feature of the particular point is used based on the probability.

34. The method of claim 33, wherein two images are identified as overlapping images in response to performing a cross-correlation identifying common regions of the object surface in the two images.

35. The method of claim 30, wherein the point cloud is a depth map in which each point's coordinates include a respective point depth, and wherein the method further comprises:
determining a distance between a location on the object surface and a camera that took the image; and
adjusting a point depth of the particular point based on the determined distance.

36. The method of claim 30, further comprising, in response to determining there is no point with overlapping coordinates, adding a new point to the point cloud to represent the extracted feature and the location associated with the extracted feature.

37. A non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising:
receiving an image that depicts an object;
extracting from the image features associated with an object surface of the object, wherein each feature corresponds to a respective location on the object surface;
for each extracted feature, determining whether there is a point in a point cloud that has point coordinates that overlap the location of the extracted feature, wherein each point in the point cloud includes respective point features and respective point coordinates that indicate a location on the object surface that the respective point features represent;
in response to determining there is a particular point with overlapping point coordinates, adjusting one or more point features of the particular point based on the extracted feature and adjusting point coordinates of the particular point based on the location of the extracted feature; and
generating a three-dimensional representation of the object based on information in the point cloud.

38. The computer-readable medium of claim 37, wherein each point in the point cloud includes a respective probability indicating an accuracy with which the point coordinates in the point correctly represent a location on the object surface for the point features included in the point, and wherein the three-dimensional representation is generated by considering the respective probabilities of the points in the point cloud.

39. The computer-readable medium of claim 38, wherein the operations further comprise adjusting the probability in the particular point based on the overlap between the point coordinates of the particular point and the location of the extracted feature.

40. The computer-readable medium of claim 37, wherein a particular point feature in the particular point is generated from features extracted from a plurality of overlapping images, wherein the particular point includes a probability indicating that the particular point feature is the same in the plurality of overlapping images, and wherein in generating the three-dimensional representation, the particular point feature of the particular point is used based on the probability.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 17/547,751, filed on Dec. 10, 2021, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT,” which is a continuation of U.S. application Ser. No. 16/685,983, filed on Nov. 15, 2019, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT,” which is a continuation of U.S. application Ser. No. 15/308,959, filed on Nov. 4, 2016, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT,” which is a 35 U.S.C. § 371 National Phase filing of International Application No. PCT/EP2015/060320, filed on May 11, 2015, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT,” which claims priority to and the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 61/992,601, filed on May 13, 2014, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT,” and claims priority to and the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 61/992,204, filed on May 12, 2014, entitled “METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT.” The contents of these applications are incorporated herein by reference in their entirety.

BACKGROUND

Mobile phones incorporate components that make them versatile and practically indispensable to their owners. Most existing smartphones include a camera and various inertial sensors, such as an accelerometer or compass. The smartphones can also include a proximity sensor, magnetometer, or other types of sensors.

Smartphones can be used to capture information with their cameras. Users value a smartphone's ability to take pictures because this feature allows the user to easily capture memorable moments or images of documents, such as might occur when performing bank transactions. However, smartphones are generally used to acquire images of simple scenes, such as a single photograph or a video with a sequence of image frames.

Heretofore, smartphones have not been used to produce output that can be printed on a three-dimensional (3D) printer. Such printers can be used to print a 3D representation of an object. However, a suitable 3D representation of an object has generally been created with specialized hardware and software applications.

SUMMARY

In some aspects, a method is provided for forming a 3D representation of an object with a smartphone or other portable electronic device. Such a representation may be formed by scanning the object to acquire multiple image frames of the object from multiple orientations. Information acquired from these image frames may be combined into a first representation of the object. That first representation may be processed to generate a second representation. In the second representation, the object may be represented by structural information, which may indicate the location of one or more surfaces, and texture information. The second representation may be modified to remove or change structural information indicating structures that cannot be physically realized with a 3D printer.

In some embodiments, the second representation, and/or the modified second representation, may be in a portable document format.

In some aspects, a method is provided for forming a 3D representation of an object with a smartphone or other portable electronic device. Such a representation may be formed by scanning the object to acquire multiple image frames of the object from multiple orientations. Information acquired from these image frames may be combined into a first representation of the object. That first representation may be processed to generate a second representation. In the second representation, the object may be represented by structural information, which may indicate the location of one or more surfaces, and texture information. The second representation may be used to generate a visual display of the object while the image frames are acquired.

In accordance with other aspects, any of the foregoing methods may be embodied as computer-executable instructions embodied in a non-transitory medium.

In accordance with yet other aspects, any of the foregoing methods may be embodied as a portable electronic device in which a processor is configured to perform some or all of the acts comprising the method.

One type of embodiment is directed to a portable electronic device comprising a camera and at least one processor. The at least one processor is configured to form a first representation of an object from a plurality of image frames acquired with the camera from a plurality of directions, the representation comprising locations in a three-dimensional space of features of the object. The at least one processor is further configured to determine, from the first representation, a second representation of the object, the second representation comprising locations of one or more surfaces. The at least one processor is further configured to modify the second representation to remove surfaces that are not printable in three dimensions and to store the modified second representation as a three-dimensional printable file.

In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not joined to other surfaces to provide a wall thickness above a threshold. In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not a part of a closed hull. In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises computing normals to surfaces to remove surfaces having a normal in a direction toward an interior of a hull.
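
The normal-based filter described in the last sentence can be sketched as follows, assuming a triangle mesh given as vertex and face arrays and taking the vertex centroid as an approximate interior point of the hull; the wall-thickness and closed-hull checks are omitted for brevity.

```python
import numpy as np

def drop_inward_facing(vertices, faces):
    """Remove faces whose normal points toward the interior of the hull.
    vertices: (N, 3) array of positions; faces: (M, 3) array of vertex
    indices. Returns the faces whose normals face outward."""
    interior = vertices.mean(axis=0)  # approximate interior point
    kept = []
    for face in faces:
        a, b, c = vertices[face]
        normal = np.cross(b - a, c - a)          # unnormalized face normal
        outward = (a + b + c) / 3.0 - interior   # face centroid minus interior
        if np.dot(normal, outward) > 0:          # keep outward-facing faces only
            kept.append(face)
    return np.array(kept)
```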

In some embodiments, the portable electronic device further comprises one or more inertial sensors. In some embodiments, the at least one processor is further configured to form the first representation based on outputs of the one or more inertial sensors when the plurality of image frames is acquired.

In some embodiments, the portable electronic device further comprises a display. In some embodiments, the at least one processor is further configured to render an image of the object based on a first portion of the plurality of image frames while a second portion of the plurality of image frames is being acquired. In some embodiments, the three-dimensional printable file is in a portable document format. In some embodiments, the three-dimensional printable file comprises separate information about structure of the object and visual characteristics of the structure.

One type of embodiment is directed to a method of forming a file for printing on a three-dimensional printer, the file comprising a representation of an object. The method comprises acquiring a plurality of image frames using a camera of a portable electronic device and, while the image frames are being acquired, constructing a first three-dimensional representation of the object, determining a second three-dimensional representation of the object by calculating a convex hull of the object, and displaying a view of the second three-dimensional representation on a two-dimensional screen.

In some embodiments, the method further comprises modifying the second three-dimensional representation to remove surfaces that are not printable in three dimensions. In some embodiments, modifying the second three-dimensional representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not joined to other surfaces to provide a wall thickness above a threshold. In some embodiments, modifying the second three-dimensional representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not a part of the convex hull. In some embodiments, modifying the second three-dimensional representation to remove surfaces that are not printable in three dimensions comprises computing normals to surfaces to remove surfaces having a normal in a direction toward an interior of the convex hull.

In some embodiments, constructing the first representation comprises constructing the first representation based on outputs of one or more inertial sensors of the portable electronic device when the plurality of image frames is acquired. In some embodiments, the method further comprises rendering an image of the object based on a first portion of the plurality of image frames while a second portion of the plurality of image frames is being acquired. In some embodiments, the three-dimensional printable file is in a portable document format. In some embodiments, the three-dimensional printable file comprises separate information about structure of the object and visual characteristics of the structure.

One type of embodiment is directed to at least one non-transitory, tangible computer-readable storage medium having computer-executable instructions, that when executed by a processor, perform a method of forming a three dimensional representation of an object from a plurality of image frames captured with a camera of a portable electronic device. The method comprises forming a first representation of the object from the plurality of image frames, the representation comprising locations in a three-dimensional space of features of the object, determining, from the first representation, a second representation of the object, the second representation comprising locations of one or more surfaces, modifying the second representation to remove surfaces that are not printable in three dimensions, and storing the modified second representation as a three-dimensional printable file.

In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not joined to other surfaces to provide a wall thickness above a threshold. In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises removing surfaces that are not a part of a closed hull. In some embodiments, modifying the second representation to remove surfaces that are not printable in three dimensions comprises computing normals to surfaces to remove surfaces having a normal in a direction toward an interior of a hull.

In some embodiments, forming the first representation comprises forming the first representation based on outputs of one or more inertial sensors of the portable electronic device when the plurality of image frames is acquired. In some embodiments, the method further comprises rendering an image of the object based on a first portion of the plurality of image frames while a second portion of the plurality of image frames is being acquired. In some embodiments, the three-dimensional printable file is in a portable document format. In some embodiments, the three-dimensional printable file comprises separate information about structure of the object and visual characteristics of the structure.

BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:

FIG. 1 is a sketch of a user operating a portable electronic device to form a three dimensional representation of an object;

FIG. 2 is a block diagram of components of a portable electronic device that may capture a three dimensional representation of an object;

FIG. 3 is a schematic diagram of processing of image frames that includes forming a composite image captured as an object is being imaged, improving quality of the composite image, and providing feedback to a user, in accordance with some embodiments;

FIG. 4 is a sketch of capturing, from different perspectives, multiple image frames of the same region of an object, which can be used to determine three-dimensional information by implementing techniques as described herein;

FIG. 5 is a block diagram of components of a portable electronic device and a three-dimensional printer in which some embodiments of the invention may be implemented;

FIG. 6 is a flowchart of processing image frames to create a three-dimensional model, in accordance with some embodiments;

FIG. 7 is a flowchart of processing image frames to improve a quality of a composite image and providing feedback to a user, in accordance with some embodiments;

FIG. 8 is another flowchart of processing image frames to improve a quality of a composite image and provide feedback to a user, in accordance with some embodiments;

FIG. 9 is a flowchart of processing image frames to improve a quality of a composite image and controlling operation of a camera of a portable electronic device, in accordance with some embodiments;

FIG. 10 is a sketch of a representation of image frames as a three dimensional point cloud, in accordance with some embodiments;

FIG. 11 is a flowchart of a process of building a composite image by representing features of image frames as the three dimensional point cloud, in accordance with some embodiments;

FIG. 12 is a schematic diagram that illustrates adjusting a pose of an image frame by aligning the image frame with a preceding image frame, in accordance with some embodiments of the invention;

FIGS. 13A, 13B, 13C and 13D are schematic diagrams illustrating an exemplary process of scanning a document by acquiring a stream of images, in accordance with some embodiments of the invention;

FIGS. 14A and 14B are schematic diagrams of an example of adjusting a relative position of an image frame of an object being scanned by aligning the image frame with a preceding image frame, in accordance with some embodiments of the invention;

FIGS. 15A, 15B, 15C and 15D are schematic diagrams illustrating an exemplary process of capturing a stream of image frames during scanning of an object, in accordance with some embodiments of the invention;

FIGS. 16A, 16B, 16C and 16D are conceptual illustrations of a process of building a network of image frames as the stream of image frames shown in FIGS. 13A, 13B, 13C and 13D is captured, in accordance with some embodiments;

FIGS. 17A, 17B and 17C are schematic diagrams illustrating another example of the process of capturing a stream of image frames during scanning of an object, in accordance with some embodiments of the invention;

FIG. 18 is a conceptual illustration of a process of building a network of image frames as the stream of image frames shown in FIGS. 17A, 17B and 17C is captured, in accordance with some embodiments of the invention;

FIG. 19 is a flowchart of a process of converting a three-dimensional image into a file format printable on a 3D printer, in accordance with some embodiments of the invention;

FIG. 20 is a representation of a 3D object in PDF language.
