How can I compute the camera pose from the relative rotation and translation matrices in RGB-D images?

I have a Kinect camera that moves around an object. From 3D point correspondences between two consecutive frames, I have computed a 3×3 rotation matrix and a 3×1 translation vector that map the first point cloud onto the second. What I actually need is the camera pose (position and orientation as yaw/pitch/roll) over time so that I can track the camera. I don't know how to use the matrices I obtained to compute that pose.
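For context, here is a minimal sketch of what I think chaining the relative transforms might look like (the function names `to_homogeneous`, `update_pose`, and `yaw_pitch_roll` are my own, and I am assuming my R and t satisfy p₂ = R·p₁ + t, i.e. they map coordinates from the previous camera frame into the current one):

```python
import numpy as np

def to_homogeneous(R, t):
    # Pack a 3x3 rotation and 3x1 translation into a 4x4 transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def update_pose(T_world_cam, R_rel, t_rel):
    # R_rel, t_rel map points from frame k into frame k+1 (assumption),
    # so the camera-to-world pose is updated by the INVERSE of that transform:
    # T_world_cam(k+1) = T_world_cam(k) * inv(T_rel).
    T_rel = to_homogeneous(R_rel, t_rel)
    return T_world_cam @ np.linalg.inv(T_rel)

def yaw_pitch_roll(R):
    # ZYX Euler angles from a rotation matrix; assumes |pitch| != 90 deg.
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Usage sketch: start at the identity pose and fold in each new (R, t).
# The camera position is then T_world_cam[:3, 3], and the orientation
# comes from yaw_pitch_roll(T_world_cam[:3, :3]).
```

Is this the right way to accumulate the pose, or does the relative transform need to be applied in the other direction?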