Patent: Robotic manipulation using domain-invariant 3d representations predicted from 2.5d vision data

Publication Number: 20210101286

Publication Date: 20210408

Applicant: Google

Abstract

Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the robotic manipulation policy model.

Claims

  1. A method implemented by one or more processors of a robot, the method comprising: identifying an image captured by a camera of the robot, the image capturing an object to be manipulated by the robot, and the image comprising multiple channels, including one or more color channels and a depth channel; generating an object mask of the object to be manipulated by the robot, wherein generating the object mask comprises: processing one or more of the channels of the image using an object detection network; generating a three-dimensional (3D) point cloud of the object, wherein generating the 3D point cloud of the object comprises: processing, using a point cloud prediction network: all of the channels of at least a portion of the image, and the generated object mask of the object; and using the generated 3D point cloud in controlling one or more actuators of the robot.

  2. The method of claim 1, wherein the object detection network is trained for processing images to generate bounding boxes and masks of objects in the images, and further comprising: generating a bounding box for the object to be manipulated by the robot, wherein generating the bounding box comprises: processing one or more of the channels of the image using the object detection network.

  3. The method of claim 2, wherein the at least the portion of the image that is processed using the point cloud prediction network is a crop of the image selected based on the bounding box.

  4. The method of claim 3, further comprising selecting the crop of the image based on pixels of the image, included in the crop, being encompassed by the bounding box.

  5. The method of claim 3, wherein generating the 3D point cloud of the object further comprises: processing, using the point cloud prediction network: one or more camera intrinsics, the one or more camera intrinsics defining one or more intrinsic parameters of the camera that take into account the crop of the image.

  6. The method of claim 5, wherein in processing the one or more camera intrinsics using the point cloud prediction network, the one or more camera intrinsics are applied, as side input to the point cloud prediction network, downstream from an initial input at which all of the channels of at least the portion of the image, and the generated object mask of the object are applied as initial input.

  7. The method of claim 6, wherein all of the channels of at least the portion of the image and the generated object mask of the object are initially processed using an initial encoder of the point cloud prediction network, and wherein the camera intrinsics are applied as side input after the initial encoder and prior to processing using an initial decoder of the point cloud prediction network.

  8. The method of claim 1, wherein using the generated 3D point cloud in controlling the one or more actuators of the robot comprises: generating a prediction of successful manipulation of the object, generating the prediction of successful manipulation of the object comprising: generating the prediction of successful manipulation by processing the generated 3D point cloud of the object, or a transformation of the 3D point cloud, using a critic network; and controlling the one or more actuators of the robot based on the prediction of successful manipulation.

  9. The method of claim 8, wherein generating the prediction of successful manipulation comprises: identifying a candidate end effector pose, of an end effector of the robot; generating the transformation of the 3D point cloud by transforming the 3D point cloud to an end effector frame that is relative to the end effector pose; and generating the prediction of successful manipulation by processing the transformation of the 3D point cloud using the critic network.

  10. The method of claim 9, wherein controlling the one or more actuators of the robot based on the prediction of successful manipulation comprises: selecting the candidate end effector pose based on the prediction of successful manipulation satisfying at least one criterion; and in response to selecting the candidate end effector pose: controlling the one or more actuators of the robot to cause the end effector to traverse to the candidate end effector pose.

  11. The method of claim 10, wherein the manipulation is grasping, and wherein controlling the one or more actuators of the robot based on the prediction of successful manipulation further comprises: causing the end effector to attempt a grasp of the object after the end effector is in the candidate end effector pose.

  12. The method of claim 10, further comprising: identifying an additional candidate end effector pose of the end effector; generating an additional transformation of the 3D point cloud by transforming the 3D point cloud to an additional end effector frame that is relative to the additional end effector pose; and generating an additional prediction of successful manipulation by processing the additional transformation of the 3D point cloud using the critic network; wherein the at least one criterion utilized in selecting the candidate end effector pose based on the prediction of successful manipulation comprises the prediction of successful manipulation being more indicative of success than the additional prediction of successful manipulation.

  13. The method of claim 1, wherein the point cloud prediction network comprises a plurality of encoder-decoder modules, and at least one fully-connected layer.

  14. A method of training a point cloud prediction network, the method implemented by one or more processors and comprising: rendering a simulated image of a simulated environment of a simulator, the simulated image capturing at least one simulated object of the simulated environment, and the simulated image comprising multiple channels, including one or more color channels and a depth channel; generating an object mask of the simulated object; generating a ground truth depth image for the object based on the object mask and the depth channel of the simulated image; generating a predicted three-dimensional (3D) point cloud of the simulated object, wherein generating the predicted 3D point cloud of the simulated object comprises: processing, using a point cloud prediction network: all of the channels of at least a portion of the image, and the generated object mask of the simulated object; generating a projection of the predicted 3D point cloud, the projection being a predicted depth image for the simulated object that is based on the predicted 3D point cloud; generating a loss based at least in part on comparison of: the projection of the predicted 3D point cloud, and the ground truth depth image of the simulated object; and updating one or more weights of the point cloud prediction network based at least in part on the generated loss.

  15. The method of claim 14, wherein generating the projection of the 3D point cloud comprises using intrinsic parameters, of a simulated camera utilized to render the simulated image, to generate the projection of the predicted 3D point cloud.

  16. The method of claim 14, further comprising: determining a bounding box for the simulated object; wherein the at least the portion of the image that is processed using the point cloud prediction network is a crop of the image selected based on the bounding box.

  17. The method of claim 16, further comprising selecting the crop of the image based on pixels of the image, included in the crop, being encompassed by the bounding box.

  18. The method of claim 14, further comprising: capturing a real image of a real environment, the real image capturing at least one real object, and the real image comprising multiple channels, including one or more color channels and a depth channel; generating an additional object mask of the real object, wherein generating the additional object mask comprises: processing one or more of the channels of the real image using an object detection network; generating an additional ground truth depth image for the real object based on the additional object mask and the depth channel of the real image; generating an additional predicted three-dimensional (3D) point cloud of the real object, wherein generating the additional predicted 3D point cloud of the real object comprises: processing, using a point cloud prediction network: all of the channels of at least a portion of the real image, and the generated additional object mask of the real object; generating an additional projection of the additional predicted 3D point cloud, the additional projection being an additional predicted depth image for the real object that is based on the additional predicted 3D point cloud; generating an additional loss based at least in part on comparison of: the additional projection of the additional predicted 3D point cloud, and the additional ground truth depth image of the real object; and updating one or more weights of the point cloud prediction network based at least in part on the generated additional loss.

  19. The method of claim 14, further comprising: determining the training of the point cloud prediction network satisfies one or more criteria; and in response to determining the training of the point cloud prediction network satisfies the one or more criteria: training a critic network utilizing additional point clouds that are predicted using the trained point cloud prediction network.

  20. A system comprising: one or more actuators operably coupled to a robot; one or more processors; and a memory, the memory comprising computer readable instructions that, when executed by the one or more processors, cause the system to perform a method comprising: identifying an image captured by a camera of the robot, the image capturing an object to be manipulated by the robot, and the image comprising multiple channels, including one or more color channels and a depth channel; generating an object mask of the object to be manipulated by the robot, wherein generating the object mask comprises: processing one or more of the channels of the image using an object detection network; generating a three-dimensional (3D) point cloud of the object, wherein generating the 3D point cloud of the object comprises: processing, using a point cloud prediction network: all of the channels of at least a portion of the image, and the generated object mask of the object; and using the generated 3D point cloud in controlling the one or more actuators of the robot.

  21. (canceled)

Description

BACKGROUND

[0001] Various machine learning based approaches to robotic control have been proposed. Some of those approaches train a machine learning model (e.g., a deep neural network model) that can be utilized to generate one or more predictions that are utilized in control of a robot, and train the machine learning model using training data that is based only on data from real-world physical robots. However, these and/or other approaches can have one or more drawbacks. For example, generating training data based on data from real-world physical robots requires heavy usage of one or more physical robots in generating data for the training data. This can be time-consuming (e.g., actually navigating a large quantity of paths requires a large quantity of time), can consume a large amount of resources (e.g., power required to operate the robots), can cause wear and tear to the robots being utilized, and/or can require a great deal of human intervention.

[0002] In view of these and/or other considerations, use of robotic simulators has been proposed to generate simulated robot data that can be utilized in generating simulated training data for training the machine learning models. However, there is often a meaningful “reality gap” between real robots and real environments, on the one hand, and the simulated robots and/or simulated environments simulated by a robotic simulator, on the other. This can result in generation of simulated training data that does not accurately reflect what would occur in a real environment. This can degrade performance of machine learning models trained on such simulated training data and/or can require a significant amount of real world training data to also be utilized in training to help mitigate the reality gap.

SUMMARY

[0003] Implementations disclosed herein relate to training a point cloud prediction model (a machine learning model such as a neural network model) that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object (e.g., a 3D point cloud of the object). Various implementations further relate to utilizing the domain-invariant 3D representation to train (e.g., at least in part in simulation) a robotic manipulation policy model (e.g., a critic network or other policy model) using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Various implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the robotic manipulation policy model.

[0004] The domain-invariant 3D representations are generated based on processing, using the trained shape prediction network (e.g., a point cloud prediction network or other 3D shape prediction network), 2.5D observations captured by a camera of the robot. The 2.5D observations can be images that include one or more color channels (e.g., red, green, and blue channels) and a depth channel. In other words, each pixel of the images can have a depth channel and one or more color channels. The camera can be, for example, a RGB-D camera that includes one or more sensors that capture vision data that collectively (and optionally after processing) defines an image having a plurality of pixels and, for each of the pixels, a depth channel and one or more additional channels (e.g., red, green, and blue channels). Various types of RGB-D cameras can be utilized, including passive RGB-D cameras and active RGB-D cameras (e.g., that include a speckle projector, or that utilize a light source and time-of-flight). As described herein, each 2.5D image utilized in generating a domain-invariant 3D representation can be a single view 2.5D image.

[0005] Various efficiencies can be achieved by training the shape prediction model and/or the robotic manipulation policy utilizing simulated data. For example, ground truth data utilized in training one or both models can be efficiently obtained from simulation. Also, utilization of the domain-invariant 3D representation as input to the robotic manipulation policy model can enable that model to be trained based primarily (or solely) on simulated training data, while mitigating the reality gap when the robotic manipulation policy model is utilized on real world robots. For example, a domain-invariant 3D representation that is a 3D point cloud of the object describes the 3D shape of the object, which can have minimal or no reality gap when simulated. Such a 3D point cloud is invariant to textures and environmental conditions, which can have a significant reality gap when simulated. Further, the domain-invariant 3D representation can be compact (data-wise), while being semantically interpretable and directly applicable for object manipulation. This can enable efficient processing of such a representation using a robotic manipulation policy model, while achieving high accuracy and/or robustness.

[0006] Further, the domain-invariant 3D representation can be efficiently transformed between frames, such as from a frame of the camera that captured the 2.5D observation (used to generate the 3D representation) to a frame of a robotic end effector. As described herein, the robotic manipulation policy can optionally be trained to process a transformed domain-invariant 3D representation (and optionally only the transformed representation) in generating a probability (or other value) of the corresponding robotic end effector pose (that is used in generating the transformed domain-invariant 3D representation) leading to a successful manipulation (e.g., grasp). Since the domain-invariant 3D representation is compact (data-wise), such processing can be computationally efficient and/or model(s) (e.g., neural network model(s)) representing the policy can be compact (data-wise). For example, processing the domain-invariant 3D representation can be more efficient than processing full RGB image(s). Also, for example, processing the transformed domain-invariant 3D representation, without also processing the end effector pose used to generate the transformed domain-invariant 3D representation, can be more efficient than processing both the domain-invariant 3D representation and the end effector pose. Yet further, the texture and environmental invariance of the domain-invariant 3D representation enables it to be effectively applied to the robotic manipulation policy model for variously textured objects and/or various environments.

[0007] The domain-invariant 3D representation of the object can be a full 3D point cloud of the object. The shape prediction model can be trained utilizing a large quantity of simulated training data, and a small quantity (or none) of real world training data. For example, 50,000 or more (e.g., 60,000, 70,000) episodes in simulation can be utilized, while less than a thousand (e.g., less than 600, less than 500) episodes in the real world can be utilized. The 2.5D observation of the object, which is processed to generate the domain-invariant 3D representation, can be an RGB-D (where D is depth) image. In various implementations, an additional object mask channel (in addition to the RGB-D channels) can also be utilized as input to the shape prediction model to, for example, enable handling of situations where multiple objects are present in the 2.5D observation of the object. The mask can be generated based on processing the 2.5D observation of the object (e.g., at least the 2D portion thereof) utilizing an object detection network such as a Mask-RCNN network. For example, assume a target object is an apple and a 2.5D image includes the apple and a banana. The object detection network can be utilized to determine pixels of the 2.5D image that correspond to the apple, and the additional mask can be generated as an additional channel in which those pixels have a value indicating the target object is present.
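
As an editorial illustration (not from the patent), the following minimal sketch shows one way the five-channel input described above could be assembled: the RGB channels, the depth channel, and a binary mask for the target object, cropped around the object's bounding box. The function name, padding, and array layout are assumptions made for the example.

```python
import numpy as np

def build_network_input(rgb, depth, target_mask, bbox, pad=8):
    """Crop the RGB-D image around the target object's bounding box and
    append the object mask as a fifth channel.

    rgb:         (H, W, 3) float array of color channels
    depth:       (H, W) float array, metric depth
    target_mask: (H, W) binary array, 1 where the target object is present
    bbox:        (x_min, y_min, x_max, y_max) in pixel coordinates
    """
    h, w = depth.shape
    x0, y0, x1, y1 = bbox
    # Pad the crop slightly and clamp it to the image bounds.
    x0, y0 = max(0, x0 - pad), max(0, y0 - pad)
    x1, y1 = min(w, x1 + pad), min(h, y1 + pad)

    crop = np.concatenate(
        [
            rgb[y0:y1, x0:x1],                                     # 3 color channels
            depth[y0:y1, x0:x1, None],                             # 1 depth channel
            target_mask[y0:y1, x0:x1, None].astype(np.float32),    # 1 mask channel
        ],
        axis=-1,
    )  # -> (crop_h, crop_w, 5)
    return crop, (x0, y0, x1, y1)
```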

[0008] In some implementations, the robotic manipulation policy model is a critic prediction network that can be utilized to generate, based on the domain-invariant 3D representation and a candidate end effector pose, a manipulation outcome prediction for the candidate end effector pose. For example, where the manipulation is grasping, a critic grasp prediction network can be utilized to predict, based on a candidate grasp pose (of an end effector) and a domain-invariant 3D representation of an object, a probability that the candidate grasp pose will be successful. For instance, a domain-invariant 3D representation that is a 3D point cloud can be transformed to a frame relative to the candidate grasp pose. The transformed 3D point cloud can then be processed using the critic grasp prediction network to generate the probability that the candidate grasp pose will be successful. Various candidate grasp poses can be considered utilizing the critic grasp prediction network, and a highest probability candidate grasp pose selected for attempting a grasp. In other implementations, the robotic manipulation policy model can be utilized for other robotic tasks such as, for example, pushing an object, pulling an object, etc. In some implementations, a non-transformed domain-invariant 3D representation and a candidate end effector pose can be processed using a robotic manipulation policy to generate a manipulation outcome prediction. Put another way, in those implementations both the non-transformed domain-invariant 3D representation and the candidate end effector pose can be applied as input, instead of the transformed domain-invariant 3D representation. Also, in some implementations the robotic manipulation policy can include an action prediction network, instead of or in addition to a critic prediction network. For example, the action prediction network can be used to process the non-transformed domain-invariant 3D representation to generate output that indicates an action prediction for a robotic task. For instance, the output can indicate (directly or indirectly) an end effector pose.
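
As a concrete illustration of transforming a predicted 3D point cloud into the frame of a candidate grasp pose, below is a minimal sketch. It assumes the candidate pose is available as a 4x4 homogeneous transform expressed in the same frame as the point cloud; the patent describes the transformation only abstractly, so the matrix form here is one conventional realization rather than the patent's specified implementation.

```python
import numpy as np

def transform_to_grasp_frame(point_cloud, pose_matrix):
    """Express a (K, 3) point cloud in the frame of a candidate end effector pose.

    point_cloud: (K, 3) points, e.g., in the camera or robot base frame.
    pose_matrix: (4, 4) homogeneous transform of the candidate pose,
                 expressed in that same frame.
    """
    # Points in the pose's frame are obtained by applying the inverse of the pose.
    world_to_pose = np.linalg.inv(pose_matrix)
    homogeneous = np.hstack([point_cloud, np.ones((point_cloud.shape[0], 1))])
    return (world_to_pose @ homogeneous.T).T[:, :3]
```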

[0009] As mentioned above, in various implementations the domain-invariant 3D representation can optionally be transformed to the frame of an end effector pose being considered. For example, an initial domain-invariant 3D representation can be generated by processing a 2.5D image (and optionally a mask) using a trained shape prediction model. The initial domain-invariant 3D representation can then be transformed to the frame of a candidate end effector pose. The transformed domain-invariant 3D representation can then be processed using the robotic manipulation policy model to, for example, generate a prediction of success of manipulation utilizing the candidate end effector pose. Accordingly, in those various implementations the transformed domain-invariant 3D representation can be processed using the robotic manipulation policy without directly processing the candidate end effector pose using the robotic manipulation policy. Rather, the candidate end effector pose is reflected by the transformed domain-invariant 3D representation, which is transformed to the frame of the candidate end effector pose. Implementations that use the transformed domain-invariant 3D representation, instead of a non-transformed initial domain-invariant 3D representation and separate representation of the candidate end effector pose, can be trained more efficiently and/or can be more robust and/or accurate during inference.
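
Building on the transform sketched above, the following hedged example shows one way the candidate-selection loop described in the preceding paragraphs could look: each candidate end effector pose is scored by processing the point cloud, expressed in that candidate's frame, with a critic, and the highest-probability candidate is selected. The `critic` callable is an assumed interface (any model mapping a transformed cloud to a success probability), not an API defined by the patent.

```python
def select_best_grasp(point_cloud, candidate_poses, critic):
    """Score candidate end effector poses with a critic and return the best one.

    point_cloud:     (K, 3) predicted object point cloud.
    candidate_poses: iterable of (4, 4) candidate end effector poses.
    critic:          callable mapping a (K, 3) transformed cloud to P(success).
    """
    best_pose, best_prob = None, -1.0
    for pose in candidate_poses:
        # The pose is reflected only through the transformed cloud; it is
        # not passed to the critic as a separate input.
        transformed = transform_to_grasp_frame(point_cloud, pose)
        prob = float(critic(transformed))
        if prob > best_prob:
            best_pose, best_prob = pose, prob
    return best_pose, best_prob
```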

[0010] The above description is provided as an overview of some implementations of the present disclosure. Further description of those implementations, and other implementations, are described in more detail below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 illustrates an example environment in which implementations disclosed herein can be implemented.

[0012] FIG. 2 illustrates an example of generating, using a trained point cloud prediction network and based on an RGB-D image captured by a robot, a predicted 3D point cloud. FIG. 2 further illustrates an example of using the predicted 3D point cloud in controlling a robot.

[0013] FIGS. 3A and 3B illustrate a flow chart of an example method of training a point cloud prediction network using simulated and real world data.

[0014] FIG. 4 illustrates a flow chart of an example method of training a critic network using predicted 3D point clouds.

[0015] FIG. 5 illustrates a flow chart of an example method of using a trained critic network and predicted 3D point clouds in controlling a robot.

[0016] FIG. 6 schematically depicts an example architecture of a robot.

[0017] FIG. 7 schematically depicts an example architecture of a computer system.

DETAILED DESCRIPTION

[0018] Some implementations disclosed herein relate to training a point cloud prediction network (a machine learning model such as a neural network model) that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object (e.g., a 3D point cloud of the object). In some of those implementations, self-supervision is utilized in training the point cloud prediction network. Some implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model (e.g., a critic network) using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of objects to be manipulated. Various implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the trained robotic manipulation policy model.

[0019] A 3D point cloud of an object, generated by processing a 2.5D observation of the object utilizing a point cloud prediction network, can be domain-invariant, semantically interpretable, and directly applicable for object manipulation (e.g., grasping, pushing, placing, and/or other manipulation(s)). The generated 3D point cloud can be lightweight and flexible as compared to other 3D representations such as voxel grids and triangle meshes. This can result in utilization of fewer resources (e.g., memory resources) when the 3D point cloud is processed using a trained robotic manipulation policy in controlling a robot. Further, the generated 3D point cloud can describe the full 3D shape of the object, which is invariant to surface textures or environmental conditions. This can mitigate the reality gap when the generated 3D point clouds are used in training a critic network, and the generated 3D point clouds and the supervision signals used in training the policy network are based at least in part on simulated data. As used herein, the “reality gap” is a difference that exists between real robots and/or real environments, and simulated robots and/or simulated environments simulated by a robotic simulator. Yet further, the generated 3D point cloud can directly be used to localize objects in the scene, hence simplifying the task(s) when training a critic network or other policy network.

[0020] Prior to turning to the figures, some particular non-limiting examples are provided of training a point cloud prediction network and of training a policy network that is a grasping critic network. As one particular example of training a point cloud prediction network, assume a set of RGB-D observations $\{\mathcal{O}_1, \mathcal{O}_2, \ldots, \mathcal{O}_N\}$, where $\mathcal{O}_n \in \mathbb{R}^{h \times w \times 4}$ is an individual simulated or real world observation that captures a target object. The goal can be to learn a domain-invariant point cloud representation $\mathcal{P}_n$ that reflects the 3D geometry of the target object. The N RGB-D images can be obtained by using a mobile manipulator moving around and taking snapshots of various workspaces from different angles (which, in the simulated world, is achieved by altering the vantage of a simulated observation). The depth values from the RGB-D observations form only a 2.5D representation of the objects (i.e., the visible part, subject to noise) and thus do not provide the full 3D geometry. Further, there is a reality gap between the depth values in simulation and in the real world, making a policy that is trained solely in simulation quite ineffective in the real world.

[0021] Continuing with the particular example, self-supervised labeling of the observations can occur. While target point clouds for supervised learning of a deep network can be easily obtained in simulation, obtaining them becomes notoriously costly (e.g., in terms of computer resources) and time-consuming for real world data. Further, the presence of noise and un-modeled nonlinear characteristics in a real world depth sensor makes the learning harder, especially in the context of transfer learning. To address this challenge, implementations disclosed herein generate self-supervised labels using view-based supervision with differentiable re-projection operators.

[0022] As an example, a point cloud of an object can be represented as a set of K points $\mathcal{P} = \{p_k = (x_k, y_k, z_k) \mid 1 \leq k \leq K\}$, where $x_k, y_k, z_k$ are the coordinates of the k-th point $p_k$ along the x, y, and z axes, respectively. Without loss of generality, the point cloud coordinates can be assumed to be defined in the camera frame. Further, the ground-truth point cloud annotation can be assumed to not be directly available in the real world data and, thus, multi-view projections can be used as the supervision signal (e.g., both for real world and for simulated data). For example, the camera intrinsic matrix $E$ can be used to obtain the 2D projection in the image space from the point cloud (e.g., the homogeneous coordinate $(u_k, v_k, 1)$ is projected from $(x_k, y_k, z_k)$):

$(u_k, v_k, 1)^T \sim E\,(x_k, y_k, z_k)^T \qquad (1a)$

$\mathcal{M} = \{(u_k, v_k)\}_{k=1}^{K} \qquad (1b)$

[0023] For localization, the corresponding tight bounding box can be derived from the 2D projection: $B = (u^{mid}, v^{mid}, w, h)$, where $u^{mid}, v^{mid}$ and $w, h$ represent the bounding box center and size, respectively. N RGB-D images/snapshots can be collected from various scenes in simulation and the real world by moving a mobile manipulator around the workspace. For the real world dataset, Mask-RCNN or another object detection network can be used to detect object bounding boxes $B^{m,n}$ and their associated masks at each frame. For the simulation dataset, bounding boxes and their associated masks can be directly obtained. Note that multiple objects may be present in many of the snapshots. The data associated with the m-th object in the n-th frame can be denoted by $(\cdot)^{m,n}$. The number of objects in the n-th frame can be denoted by $C_n$. The mask for each object can be used to extract its associated depth values from the depth channel in each observation, and the camera intrinsic matrix $E^n$ can be used to obtain $\mathcal{M}^{m,n}$ from the depth values. This enables determination of $\mathcal{M}^{m,n}$ and $B^{m,n}$ for all $1 \leq n \leq N$ and $1 \leq m \leq C_n$.
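
For illustration, the following sketch implements equations (1a)/(1b) and the tight-bounding-box derivation above, assuming a standard 3x3 pinhole intrinsic matrix $E$ and points already expressed in the camera frame. Variable names mirror the notation above but are otherwise illustrative.

```python
import numpy as np

def project_point_cloud(points, E):
    """points: (K, 3) array of (x, y, z) in the camera frame.
    E: (3, 3) camera intrinsic matrix.
    Returns (K, 2) pixel coordinates (u_k, v_k), per equations (1a)/(1b)."""
    projected = (E @ points.T).T              # rows of (u*z, v*z, z)
    uv = projected[:, :2] / projected[:, 2:3]  # perspective divide
    return uv

def tight_bounding_box(uv):
    """Derive B = (u_mid, v_mid, w, h) from the 2D projection."""
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return ((u_min + u_max) / 2.0, (v_min + v_max) / 2.0,
            u_max - u_min, v_max - v_min)
```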

[0024] The point cloud prediction network can be used to generate a predicted point cloud which, in turn, can be used to determine $\hat{\mathcal{M}}^{m,n}$ and $\hat{B}^{m,n}$ using equations (1a) and (1b) (above). The loss function for training the point cloud prediction network can be defined as:

$\mathcal{L}_\theta = \sum_{n=1}^{N} \sum_{m=1}^{C_n} \left( \lambda^B \, \mathcal{L}_\theta^B(\hat{B}^{m,n}, B^{m,n}) + \lambda^M \, \mathcal{L}_\theta^M(\hat{\mathcal{M}}^{m,n}, \mathcal{M}^{m,n}) \right) + \lambda^\theta \, \lVert \theta \rVert \qquad (2)$

where $\lambda^B$, $\lambda^M$, and $\lambda^\theta$ are weighting coefficients, $\mathcal{L}_\theta^B$ is the Huber loss between the estimated and labeled bounding boxes, and $\mathcal{L}_\theta^M$ is the projected point cloud prediction loss.
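
Below is a hedged PyTorch sketch of the loss in equation (2): a Huber term on the predicted versus labeled bounding boxes, a projection term on the predicted versus labeled point cloud projections, and a weight-norm regularizer. The exact form of the projection loss $\mathcal{L}_\theta^M$ is not fixed by the text; an L1 term is used here purely as an illustration, and the coefficient defaults are assumptions.

```python
import torch
import torch.nn.functional as F

def point_cloud_training_loss(pred_boxes, gt_boxes, pred_projections, gt_projections,
                              model, lambda_b=1.0, lambda_m=1.0, lambda_theta=1e-4):
    """pred_boxes / gt_boxes: (B, 4) tensors of (u_mid, v_mid, w, h).
    pred_projections / gt_projections: batched projected depth images or 2D point sets."""
    box_loss = F.smooth_l1_loss(pred_boxes, gt_boxes)              # Huber loss, L_B
    projection_loss = F.l1_loss(pred_projections, gt_projections)  # L_M (illustrative choice)
    # ||theta|| regularizer, approximated by summing per-parameter norms.
    weight_norm = sum(p.norm() for p in model.parameters())
    return lambda_b * box_loss + lambda_m * projection_loss + lambda_theta * weight_norm
```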

[0025] In some implementations, the point cloud prediction network can include several encoder-decoder modules and a fully-connected layer to predict the point clouds. In some implementations, the input channels to the point cloud prediction network can include color channel(s) (e.g., R, G, and B), a depth channel, and an object mask channel. Further, the input channels can optionally be a dynamic crop of the base image (e.g., based on object detection), that is focused on the target object. Yet further, to account for the dynamic cropping, an additional input can optionally be provided as side input downstream from the initial inputs. For example, the additional input can be provided right after the initial encoder to provide the point cloud prediction network with the adapted camera intrinsic characteristics resulting from cropping. The additional input can be one or more camera intrinsics that define one or more intrinsic parameters of the camera that take into account the crop of the image.
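
The following is a speculative PyTorch sketch of the architecture outlined above: an initial convolutional encoder over the 5-channel crop, crop-adjusted camera intrinsics injected as side input after that encoder, a further encoder-decoder stage, and a fully-connected head that emits K 3D points. Layer counts and sizes are placeholders, not values from the patent.

```python
import torch
import torch.nn as nn

class PointCloudPredictionNet(nn.Module):
    def __init__(self, num_points=1024, intrinsics_dim=4):
        super().__init__()
        # Initial encoder over RGB + depth + mask (5 channels).
        self.initial_encoder = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Side branch for crop-adjusted intrinsics (e.g., fx, fy, cx, cy).
        self.intrinsics_mlp = nn.Sequential(nn.Linear(intrinsics_dim, 64), nn.ReLU())
        # A further encoder-decoder stage applied after the side input is fused.
        self.encoder_decoder = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
        )
        # Fully-connected head predicting K points (x, y, z).
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_points * 3),
        )
        self.num_points = num_points

    def forward(self, crop_5ch, intrinsics):
        features = self.initial_encoder(crop_5ch)                 # (B, 64, H', W')
        side = self.intrinsics_mlp(intrinsics)[:, :, None, None]  # (B, 64, 1, 1)
        features = features + side                                # fuse side input after initial encoder
        features = self.encoder_decoder(features)
        points = self.head(features).view(-1, self.num_points, 3)
        return points
```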

[0026] A particular example is now provided of utilizing predicted 3D point clouds in training a policy network that is a grasping policy network. The grasping policy network can be a critic network that can be used to predict the probability of success for a sample grasp $s \in \mathbb{R}^4$ of a target object based on the predicted point cloud $\hat{\mathcal{P}}$ and the transformation from the robot base to the camera frame. The sample grasp $s = (p, \psi)$ can be a candidate end effector pose and can be composed of the 3D gripper position $p$ with respect to the robot base and the gripper yaw rotation $\psi$.

[0027] The predicted point cloud can first be transformed to the proposed grasp frame, thereby generating a transformed predicted point cloud $\hat{\mathcal{P}}_s$. In some implementations, the transformation can be performed using $\hat{\mathcal{P}}_s = T_s \hat{\mathcal{P}}$, where the transform $T_s \in SE(3)$ can be directly calculated based on the sample grasp pose $s$. Optionally, the order of points in $\hat{\mathcal{P}}_s$ can be shuffled to allow the critic network to adapt to variations in the order of point clouds. In some implementations, the critic network can include multiple fully-connected layers, each followed by a ReLU activation function with BatchNorm. The last layer of the critic network can optionally be linear and reduce the output size to one. It can be followed by a sigmoid activation function to provide the grasp success (e.g., a probability measure from 0 to 1).
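
As a hedged sketch of the critic architecture described above: fully-connected layers with BatchNorm and ReLU over the flattened transformed point cloud, a final linear layer producing a single value, and a sigmoid yielding a grasp success probability. Layer widths and the point count are illustrative, not values given in the patent.

```python
import torch
import torch.nn as nn

class GraspCriticNet(nn.Module):
    def __init__(self, num_points=1024, hidden=(512, 256, 128)):
        super().__init__()
        layers, in_dim = [], num_points * 3
        for width in hidden:
            layers += [nn.Linear(in_dim, width), nn.BatchNorm1d(width), nn.ReLU()]
            in_dim = width
        layers += [nn.Linear(in_dim, 1)]  # final linear layer, output size one
        self.net = nn.Sequential(*layers)

    def forward(self, transformed_points):
        # transformed_points: (B, K, 3), already expressed in the grasp frame.
        flat = transformed_points.flatten(start_dim=1)
        return torch.sigmoid(self.net(flat))  # P(successful grasp) in [0, 1]
```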

[0028] In various implementations, a majority, a vast majority (e.g., 90% or more), or all of the training data for training the grasp policy can be generated in simulation. As one example, a heuristic grasping policy for the data collection in simulation can include: (1) compute the center of volume of the target object, $c$, based on the predicted point cloud; (2) set the translation part of the grasp pose to $c$ plus some random noise $\epsilon \in \mathbb{R}^3$, i.e., $x = c + \epsilon$ (3); and (3) randomly draw a yaw angle from a uniform distribution in the range $[-\pi/2, \pi/2]$. The grasp success can then be evaluated by moving the simulated end effector to a pre-grasp pose $s^*$, where $s^*$ is a pose offset (e.g., above) from $s$ with some height difference constant, i.e., $s^* - s = (0, 0, \delta h, 0)$. This pre-grasp pose $s^*$ enables aligning the simulated robot end effector with respect to the simulated object before attempting the simulated grasp. The simulated robot is then moved to pose $s$ and commanded to attempt the grasp (e.g., close its parallel-jaw gripper when the end effector is a parallel-jaw gripper). The simulated robot is then commanded to lift the object (e.g., by moving back to $s^*$). The grasp success is then evaluated by checking whether the simulated object has moved above its original pose. This evaluation can be done easily in simulation since there is access to the ground truth object pose. The training data can be collected by running simulated robots in parallel and stored for training the off-policy grasping network.
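
An illustrative sketch of the heuristic sampling step described above follows: the grasp translation is set to the predicted cloud's center of volume plus noise, and a yaw is drawn uniformly from $[-\pi/2, \pi/2]$. The noise scale and function signature are assumptions, not values from the patent.

```python
import numpy as np

def sample_heuristic_grasp(predicted_point_cloud, noise_std=0.01, rng=None):
    """predicted_point_cloud: (K, 3) predicted points in the robot base frame.
    Returns a sample grasp s = (x, y, z, yaw)."""
    rng = rng or np.random.default_rng()
    center = predicted_point_cloud.mean(axis=0)            # center of volume c
    translation = center + rng.normal(0.0, noise_std, 3)   # x = c + epsilon
    yaw = rng.uniform(-np.pi / 2, np.pi / 2)                # uniform yaw
    return np.array([*translation, yaw])
```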

[0029] Turning now to the figures, FIG. 1 illustrates an example environment in which implementations disclosed herein can be implemented. FIG. 1 includes a point cloud training system 140, which is implemented by one or more computer systems. The point cloud training system 140 interfaces with one or more simulators 120, one or more robots (e.g., robot 190), and/or one or more human-held cameras in obtaining RGB-D images for various environments with various objects. For example, robot 190 can navigate partially or all the way around table 250, while directing its vision component 199 (e.g., RGB-D camera) toward the table 250, to generate RGB-D images from the vision component 199. Other camera trajectories can likewise be captured with different environments, different objects and/or object arrangements, and optionally using different vision components. RGB-D images can additionally or alternatively be generated using simulator 120, with simulated environments and objects. Moreover, RGB-D images from human-held vision components can additionally or alternatively be utilized.

[0030] The point cloud training system 140 utilizes the RGB-D images to generate self-supervised training data and trains the point cloud prediction network 170 based on such training data. In various implementations, the point cloud training system 140 can perform one or more (e.g., all) aspects of method 300 of FIGS. 3A and 3B.

[0031] The point cloud training system 140 is illustrated in FIG. 1 as including an object mask engine 142, a ground truth depth engine 144, a predicted 3D point cloud engine 146, a 3D point cloud projection engine 148, and a loss engine 149. In other implementations, fewer or more engines can be provided, and/or one or more aspects of one or more engines can be combined. Implementations of the illustrated engines are now described with reference to generating an instance of training data, and training based on such instance. However, it is noted that training of the point cloud prediction network will be based on thousands of instances of training data and that batch training techniques can optionally be utilized.

[0032] In generating an instance of self-supervised training data based on an RGB-D image (e.g., a rendered image from simulation, or a real world image), the object mask engine 142 generates an object mask of an object captured in the RGB-D image. For example, where the RGB-D image is a real world image, the object mask engine 142 can use an object detection network 172 to detect object bounding box(es) for object(s) in the RGB-D image as well as an associated mask for each of the object(s). For instance, the object detection network 172 can be a mask-RCNN network or other trained network. Also, for example, where the RGB-D image is a simulated image, bounding box(es) as well as associated mask(s) can be directly obtained from the simulation data. Where multiple objects are captured in the RGB-D image, one of the objects can be selected for use in generating the instance of self-supervised training data. Optionally, other of the multiple objects in the RGB-D image can each be used in generating a corresponding additional instance of self-supervised training data. Put another way, multiple training instances can be generated based on an RGB-D image that includes multiple objects, with each of the instances being for a corresponding single one of the objects.

[0033] In generating the instance of self-supervised training data based on the RGB-D image, the ground truth depth engine 144 can generate a ground truth depth image based on the object mask and the depth channel of the RGB-D image. For example, the ground truth depth engine 144 can generate the depth image to include, for those pixels in the object mask (generated by object mask engine 142) that represent the object, the depth values, from the depth channel, that correspond to those pixels. The other pixels of the ground truth depth image can be zero or other null value. Optionally, the ground truth depth image can be restricted to those pixels that are included in the bounding box determined by object mask engine 142 and/or can include (e.g., in an extra channel) the bounding box determined by object mask engine 142.
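
A minimal sketch of the ground truth depth image described above: depth values are kept only at pixels covered by the object mask, other pixels are zeroed, and the result is optionally restricted to the bounding box. The function and argument names are illustrative.

```python
import numpy as np

def make_ground_truth_depth(depth, object_mask, bbox=None):
    """depth: (H, W) depth channel; object_mask: (H, W) binary mask.
    bbox: optional (x_min, y_min, x_max, y_max) restriction."""
    gt = np.where(object_mask.astype(bool), depth, 0.0)  # zero out non-object pixels
    if bbox is not None:
        x0, y0, x1, y1 = bbox
        gt = gt[y0:y1, x0:x1]                            # restrict to the bounding box
    return gt
```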

[0034] The instance of self-supervised training data can include training instance input of the RGB-D image (or at least a crop that includes those pixels included in the generated bounding box), the object mask, and optionally camera intrinsics that take into account the crop of the RGB-D image. The instance of self-supervised training data can include training instance output of the ground truth depth image.
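
For illustration, the following hedged sketch groups one self-supervised training instance into a simple structure and shows one way the camera intrinsics could be adjusted for the crop. The patent does not specify the exact adjustment; shifting the principal point by the crop's offset is the standard approach and is shown here as an assumption.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingInstance:
    rgbd_crop: np.ndarray        # (h, w, 4) cropped color + depth channels (input)
    object_mask: np.ndarray      # (h, w) binary mask for the target object (input)
    crop_intrinsics: np.ndarray  # (3, 3) intrinsics adjusted for the crop (optional input)
    gt_depth: np.ndarray         # (h, w) ground truth depth image (label)
    gt_bbox: tuple               # (u_mid, v_mid, w, h) ground truth box (optional label)

def adjust_intrinsics_for_crop(E, crop_origin):
    """Shift the principal point by the crop's top-left pixel offset (assumed convention)."""
    x0, y0 = crop_origin
    E_crop = E.copy()
    E_crop[0, 2] -= x0
    E_crop[1, 2] -= y0
    return E_crop
```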

[0035] The predicted 3D point cloud engine 146 can process the training instance input using the point cloud prediction network 170 to generate a predicted 3D point cloud of the object. For example, the predicted 3D point cloud engine 146 can apply the RGB-D image (or at least the crop) and the object mask to initial layer(s) of the point cloud prediction network 170, and apply the camera intrinsics as side input downstream from the initial layer(s) (e.g., following initial encoding layer(s)). The predicted 3D point cloud engine 146 can generate the predicted 3D point cloud using the point cloud prediction network 170, based on the applied input and using current weights of the point cloud prediction network 170.

[0036] The 3D point cloud projection engine 148 generates a projection of the predicted 3D point cloud generated by the predicted 3D point cloud engine 146. The generated projection can be a predicted depth image for the simulated object and can be generated based on the predicted 3D point cloud. For example, the 3D point cloud projection engine 148 can generate the projection using the camera intrinsics to obtain a 2D projection in the image space from the point cloud. For instance, equations (1a) and (1b) (above) can be utilized. The 3D point cloud projection engine 148 can also optionally generate a bounding box that can be derived from the 2D projection.

[0037] The loss engine 149 generates a loss based at least in part on comparison of the projection of the predicted 3D point cloud (generated by engine 148) and the ground truth depth image (generated by engine 144). The loss engine 149 can further generate the loss based on comparison of the ground truth bounding box optionally generated by engine 144 and the predicted bounding box optionally generated by engine 148. As one example, the loss engine 149 can generate the loss based on equation (2) (above). It is noted that, in various implementations, the loss engine 149 can generate a batch loss that is based on multiple instances of training data in a batch. The loss engine 149 then updates one or more weights of the point cloud prediction network 170 based on the generated loss. For example, the loss engine 149 can back-propagate the loss to update the weights of the point cloud prediction network 170.

[0038] Once the point cloud prediction network 170 is trained, it can be utilized by a real world robot (e.g., robot 190) in generating predicted 3D point clouds based on RGB-D images captured by the robot. Further, the predicted 3D point clouds can be utilized, by the robot, in controlling one or more of its actuators. In some implementations, the trained point cloud prediction network 170 is utilized in training a critic network or other robotic policy network, to process predicted 3D point clouds (generated using the trained point cloud prediction network 170) to generate corresponding output that is utilized in control of the robot. One non-limiting example of such training is now described with respect to critic training system 150 of FIG. 1, which is implemented by one or more computer systems.

[0039] The critic training system 150 interfaces with the trained point cloud prediction network 170, one or more simulators 120, and optionally one or more robots (e.g., robot 190) in training a critic network 174. In various implementations, the critic training system 150 can perform one or more (e.g., all) aspects of method 400 of FIG. 4.

……
……
……
