CHAPTER X
Future Trends in Perception and Manipulation for
Unfolding and Folding Garments
D. ESTEVEZ, J. G. VICTORES and C. BALAGUER
RoboticsLab, Universidad Carlos III de Madrid, destevez@ing.uc3m.es
This paper presents current approaches for robotic garment folding-
oriented 3D deformable object perception and manipulation. A major portion of these approaches is based on 3D perception algorithms that match garments to a model, and are thus model-based. They require a full view of
an extended garment, in order to then apply a preprogrammed folding se-
quence. Other approaches are based on 3D manipulation algorithms that
are focused on modifying the pose of the garment, also aiming to match it with a model. We present our own garment-agnostic algorithm,
which requires no model to unfold clothes, and works using a single view
from an RGB-D sensor. The unfolding algorithm has also been validated through experiments using a garment dataset of RGB-D sensor data, with additional validation on a humanoid robot platform. Finally, conclusions
regarding the current state of the art and on the future trends of these re-
search lines are discussed.
1 Introduction
Every day, humans encounter garment manipulation tasks in both domestic
(e.g. laundry) and industrial (e.g. garment manufacturing) environments. A
growing need exists for automated solutions to help us to perform these
tasks, as they are tedious and repetitive. Another critical factor behind this increase in demand is the aging of the world population, due to the decrease in mobility associated with advanced age. Currently, the only existing automated solutions are bulky and expensive, as can be seen in Fig. 1 (right). They are intended to be used in an
industrial environment, so they are not suitable for domestic use.
2 Open Conference on Future Trends in Robotics
Fig. 1. Current solutions available for garment folding. On the left, human workers folding clothes in a textile factory. On the right, an automated solution available on the market, manufactured by Texgraff©.
Robots, more specifically, humanoid robots, arise as a sensible choice.
Humanoid robots are designed to work in human environments and to have
human-like locomotion and manipulation capabilities. However, working
with garments involves deformable 3D objects. This is not a trivial task for
robots. Modeling is especially complex due to the almost infinite number
of poses into which a textile article can be brought. The situation becomes
even more complex in the presence of several garments, as they can easily
be entangled, increasing the amount of occlusions and complicating the
recognition of each individual garment using 2D or 3D computer vision
techniques. Bringing garments to a desired configuration from an arbitrary
initial pose is a very challenging issue.
The way humans perform laundry has inspired most of the works found throughout the robotics literature. The human pipeline usually begins
with the extraction of a garment from a washing or drying machine. Gar-
ments are placed on a pile, from where deformable 3D object perception
allows them to be picked up one at a time, initiating an iterative sequence.
Deformable 3D object manipulation is used to extend and flatten the gar-
ment, either in the air, or with the help of a flat surface such as a table. The
extended garment is then classified and fit into a garment model according
to its category. Finally, the garment model is used to execute a prepro-
grammed folding sequence on the garment. Once a garment is folded, the
next garment is picked up from the pile.
This paper focuses on past and current approaches in robotic garment per-
ception and manipulation, and on exploring the robotic trends of the future.
2 State of the Art
One of the main contributors within the existing model-based work has
been the computer graphics community (Chen, Yin, & Su, 2009). Model-
based approaches focus on classifying garments into categories according to a model once the garments are grasped or extended on a flat surface.
Kita et al. use deformable models to estimate the state of hanging clothes
based on 3D observed data (Y. Kita, Saito, & Kita, 2004)(Yasuyo Kita,
Ueshiba, Neo, & Kita, 2009). Several candidate shapes are generated
through physical simulations of hanging clothes. They are later compared
to the observed garment data. Further deformation of these candidate
shapes is allowed to make the model fit the data more accurately. The
shape that is most consistent with the data is finally selected.
Miller et al. present a method for modeling garments once they are extend-
ed on a flat surface in (Miller, Fritz, Darrell, & Abbeel, 2011). Parameterized shapes are used as models, where some parameters are fit from garment data and other parameters are computed from the fit parameters.
Each garment category requires a different model, and parameters allow
each model to adapt to the garment shape within each category.
A method for classifying and estimating the poses of deformable objects is
presented in (Li et al., 2015). It consists of creating a training set of deformable objects by off-line simulation of different garments, extracting depth images from different points of view. A codebook is built for a set of
different poses of each deformable object. With this codebook, it is possible to classify deformable objects into categories and estimate their current pose, for later regrasping or folding of the garment.
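At its core, this kind of codebook lookup is a nearest-neighbor search over representative feature vectors. The sketch below illustrates the idea only; the keys, 3-dimensional features, and Euclidean distance are invented for illustration and are not details from (Li et al., 2015).

```python
import numpy as np

def classify_pose(feature, codebook):
    """Nearest-neighbor lookup in a codebook mapping (category, pose)
    keys to representative feature vectors extracted off-line from
    simulated depth images (illustrative sketch)."""
    best_key, best_dist = None, np.inf
    for key, ref in codebook.items():
        dist = np.linalg.norm(feature - ref)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key

# Toy codebook with 3-dimensional features (purely illustrative).
codebook = {
    ("sweater", "hanging"): np.array([0.9, 0.1, 0.3]),
    ("sweater", "flat"):    np.array([0.2, 0.8, 0.5]),
    ("pants",   "hanging"): np.array([0.7, 0.2, 0.9]),
}
query = np.array([0.85, 0.15, 0.35])
category, pose = classify_pose(query, codebook)
```

In practice the features would be descriptors computed from simulated depth views, and the lookup would be accelerated (e.g. with a k-d tree) rather than a linear scan.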
Clothing article manipulation is the other main approach to garment fold-
ing. Osawa et al. propose in (Osawa, Seki, & Kamiya, 2006) a method to
unfold garments in order to classify them. It consists of alternately regrasping the clothing from its lowest point and attempting to expand it using a two-arm manipulator.
The method introduced by Cusumano-Towner et al. in (Cusumano-
Towner, Singh, Miller, O’Brien, & Abbeel, 2011) allows a bi-manipulator
robot to identify a clothing article, estimate its current state and achieve a
desired configuration, generalizing to previously unseen garments. For that
purpose, the robot uses a Hidden Markov Model (HMM) throughout a sequence of manipulations and observations, in conjunction with a relaxation
of a strain-limiting finite element model for cloth simulation that can be
solved via convex optimization.
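The state-estimation idea behind such an approach can be illustrated with a minimal HMM forward (filtering) pass. The states, transition matrix, and observation model below are invented for illustration and do not come from (Cusumano-Towner et al., 2011).

```python
import numpy as np

# Hypothetical garment states tracked across a regrasp sequence.
STATES = ["crumpled", "one_corner", "two_corners", "spread"]

# Illustrative transition model: P(next state | state) after one regrasp.
T = np.array([[0.2, 0.6, 0.1, 0.1],
              [0.1, 0.3, 0.5, 0.1],
              [0.0, 0.1, 0.5, 0.4],
              [0.0, 0.0, 0.1, 0.9]])

# Illustrative observation model: P(silhouette class | state).
O = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.5, 0.4],
              [0.0, 0.1, 0.9]])

def hmm_filter(belief, observations):
    """Forward (filtering) pass: update the belief over garment states
    after each manipulation step and its observation."""
    for z in observations:
        belief = T.T @ belief           # predict: effect of the regrasp
        belief = belief * O[:, z]       # correct: weight by observation
        belief = belief / belief.sum()  # renormalize
    return belief

# Start fully uncertain, then observe silhouette classes 1, 2, 2.
belief = hmm_filter(np.full(4, 0.25), [1, 2, 2])
most_likely = STATES[int(np.argmax(belief))]
```

The published method additionally plans the manipulation sequence and uses richer perceptual cues (garment height and silhouette), but the belief update has this same predict-correct structure.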
In (Li et al., 2015), Li et al. present a method for unfolding deformable objects
with a bi-manipulator robot. With this method, the robot is capable of tak-
ing a clothing article from an unknown state to a known state by iterative
regrasping, detecting the most suitable grasping points in each state to
achieve its goal. For locating the most suitable grasping points, the 3D
point cloud obtained by the robot is matched to the mesh model that incor-
porates the information about the best regions to grasp in order to unfold
the garment.
In (Willimon, Birchfield, & Walker, 2011), Willimon et al. use several features obtained from a depth image, such as peak regions and corner locations, to determine the location and orientation of points where the robot later interacts with the garment.
CloPeMa¹ is a recent EU-FP7 research project (2012-2015) whose objective is to advance the state of the art in perception and manipulation of fabric, textiles and garments. As part of the CloPeMa project, a method to detect single folds has been presented by Mariolis et al. in (Mariolis &
Malassiotis, 2013)(Mariolis & Malassiotis, 2015). In order to detect such
folds, first, a dataset of unfolded clothes templates is built. These templates
are later used to perform a shape matching between the folded garment
shape, obtained by the camera, and the unfolded garment model. This pro-
cess is iterative, and the initial results serve as feedback to adapt the
model for a better fit. Stria et al. propose in (Stria, Pruša, Hlaváč, Wagner,
& Petrik, 2014)(Stria, Pruša, & Hlaváč, 2014) a polygon-based model for
clothes configuration recognition using the estimated position of the most
important landmarks in the clothing article. Once identified, these land-
marks can be used for automated folding using a robotic manipulator. The
clothes contour is extracted from an RGB image and processed using a
modified grab-cut algorithm, and dynamic programming methods are used
to fit it to the polygonal model. Doumanoglou et al. follow in
(Doumanoglou, Kargakos, Kim, & Malassiotis, 2014) an approach based
on Active Random Forests in order to recognize clothing articles from
depth images. The classifier allows the robot to perform actions to collect
extra information in order to disambiguate the current hypotheses, such as
changing the viewpoint.
¹ http://www.clopema.eu/
3 A Garment-Agnostic Approach to Unfolding
Existing work found in the literature has focused on garment recognition and modeling once the garment is extended, as well as on developing folding algorithms using those models. For this reason, our work has focused on the step prior to having an unfolded garment, which is how to unfold a clothing article that has been picked up from a pile of clothes and placed on a flat surface. The original algorithm was recently published in (Estevez, Victores, Morante, & Balaguer, 2016). Fig. 2 depicts the outline of the algorithm, which uses a single RGB-D sensor view for rapid deformable 3D object perception.
Fig. 2. Pipeline of the Garment-Agnostic Approach to Unfolding algorithm.
The depth information provided by the RGB-D sensor is converted into a
grayscale image. Garment Segmentation is performed in the HSV space,
and then Garment Depth Map Clustering is performed using a watershed
algorithm. This algorithm provides us with labeled regions, each having a
different height. In this labeled image, we assume that the highest region belongs to the fold. Starting on this region, and ending at the garment border, tentative paths are created by the Garment Pick and Place Points stage, in several directions, to analyze the height profile. For each
profile, a bumpiness value B is computed as in Equation (1).
B = Σ_{i=1}^{n} |path(i) − path(i−1)|   (1)
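As a sketch, the bumpiness of a sampled height profile amounts to summing the absolute height differences between consecutive samples along the path:

```python
def bumpiness(path):
    """Bumpiness B of a height profile sampled along a tentative path:
    the sum of absolute height differences between consecutive samples."""
    return sum(abs(path[i] - path[i - 1]) for i in range(1, len(path)))

# A smooth descent scores lower than an uneven profile.
smooth = [5.0, 4.0, 3.0, 2.0, 1.0]
bumpy = [5.0, 1.0, 4.0, 0.5, 1.0]
```

A profile crossing a fold boundary shows abrupt height jumps, so low bumpiness indicates a direction along which the garment surface is comparatively smooth.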
The path with the lowest bumpiness is selected as the unfolding direction. A final extension on
this line is performed to create a pick point on the fold border, and a place
point outside the garment. Experiments evaluating the algorithm were performed on a dataset of RGB-D sensor data, with additional validation on a humanoid robot platform, as seen in Fig. 3.
Fig. 3. Humanoid robot clothes folding scenario. The clothes are placed on a flat white table, while an RGB-D sensor is positioned above.
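The tentative-path search of the Garment Pick and Place Points stage can be sketched as follows. This is a simplified stand-in, assuming the height map and fold-region mask are already available as NumPy arrays, and marching paths to the image border rather than to the segmented garment border:

```python
import numpy as np

def unfold_direction(height_map, fold_mask, n_dirs=16, step=1.0):
    """Sketch of the tentative-path search: from the centroid of the
    highest (fold) region, trace straight paths outwards in several
    directions, score each height profile by its bumpiness (Eq. 1),
    and keep the least bumpy direction as the unfolding direction."""
    h, w = height_map.shape
    ys, xs = np.nonzero(fold_mask)
    cy, cx = ys.mean(), xs.mean()  # start inside the fold region

    best = None
    for k in range(n_dirs):
        angle = 2.0 * np.pi * k / n_dirs
        dy, dx = np.sin(angle), np.cos(angle)
        profile, y, x = [], cy, cx
        # March until leaving the image (stand-in for the garment border).
        while 0 <= int(round(y)) < h and 0 <= int(round(x)) < w:
            profile.append(height_map[int(round(y)), int(round(x))])
            y += dy * step
            x += dx * step
        b = sum(abs(profile[i] - profile[i - 1])
                for i in range(1, len(profile)))
        if best is None or b < best[0]:
            best = (b, angle)
    return best[1]  # unfolding direction, in radians
```

The pick point would then be placed where the chosen path crosses the fold border, and the place point on the extension of the same line outside the garment.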
4 Conclusions and Future Work
In this paper we have reviewed several model-based approaches, as well as manipulation-based approaches, for garments in the context of deformable 3D object perception and manipulation. Finally, a model-free
garment-agnostic algorithm to unfold clothes was presented. Results show
that our garment-agnostic approach to unfolding is a promising candidate for inclusion in a complete clothes folding pipeline. The main contribution of our work is that its analysis of the garment does not depend on a prior model. Using
only depth information for detecting overlapped regions (except for Gar-
ment Segmentation, where other algorithms could be used), as opposed to
using RGB images, makes our algorithm independent of the colors and
patterns present in the garments.
Our future lines of research include implementing a pre-processing stage in which a perspective transformation is performed (to perceive the garment as if from a bird's eye point of view), using multiple views of the deformable 3D object (e.g. KinFu with GPU acceleration), and performing large-scale experiments in perception and manipulation with large datasets and prolonged physical trials (e.g. with industrial manipulators).
In a broader sense, we expect the robotics community to aim its efforts towards perceiving and manipulating deformable 3D objects. They represent a large portion of our everyday life, whether in domestic, office, or outdoor environments. While our first approaches have been aimed at this tedious household and industrial chore, the long-term goal of this work is to allow robots to perform any kind of task that requires addressing the difficulties of perceiving and manipulating these objects.
Acknowledgements
The research leading to these results has received funding from the Ro-
boCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de
vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas
de Actividades I+D en la Comunidad de Madrid and cofunded by Structur-
al Funds of the EU.
References
Chen, Q., Yin, Y., & Su, A. F. (2009). Review of Cloth Modeling. 2008
International Workshop on Education Technology and Training and on
Geoscience and Remote Sensing, ETT and GRS 2008, 1, 238–241.
Cusumano-Towner, M., Singh, A., Miller, S., O’Brien, J. F., & Abbeel, P.
(2011). Bringing clothing into desired configurations with limited
perception. Proceedings - IEEE International Conference on Robotics and
Automation, 3893–3900.
Doumanoglou, A., Kargakos, A., Kim, T.-K., & Malassiotis, S. (2014).
Autonomous Active Recognition and Unfolding of Clothes Using Random
Decision Forests and Probabilistic Planning. Proc. IEEE International
Conference on Robotics and Automation (ICRA 2014), 987–993.
Estevez, D., Victores, J. G., Morante, S., & Balaguer, C. (2016). Towards
Robotic Garment Folding: A Vision Approach for Fold Detection. In IEEE
International Conference on Autonomous Robot Systems and Competitions (ICARSC).
Kita, Y., Saito, F., & Kita, N. (2004). A deformable model driven visual method for handling clothes. Proceedings - IEEE International Conference on Robotics and Automation (ICRA '04), 4.
Kita, Y., Ueshiba, T., Neo, E. S., & Kita, N. (2009). Clothes state recognition using 3D observed data. Proceedings - IEEE International Conference on Robotics and Automation, 1220–1225.
Li, Y., Xu, D., Yue, Y., Wang, Y., Chang, S., Grinspun, E., & Allen, P. K.
(2015). Regrasping and Unfolding of Garments Using Predictive Thin
Shell Modeling. In ICRA.
Mariolis, I., & Malassiotis, S. (2013). Matching folded garments to
unfolded templates using robust shape analysis techniques. Lecture Notes
in Computer Science (Including Subseries Lecture Notes in Artificial
Intelligence and Lecture Notes in Bioinformatics), 8048 LNCS (PART 2),
193–200.
Mariolis, I., & Malassiotis, S. (2015). Modelling folded garments by fitting
foldable templates. Machine Vision and Applications, 26(4), 549–560.
Miller, S., Fritz, M., Darrell, T., & Abbeel, P. (2011). Parametrized shape
models for clothing. Proceedings - IEEE International Conference on
Robotics and Automation, 4861–4868.
Osawa, F., Seki, H., & Kamiya, Y. (2006). Unfolding of Massive Laundry
and Classification Types. Journal of Advanced Computational Intelligence
and Intelligent Informatics, 457–463.
Stria, J., Pruša, D., & Hlaváč, V. (2014). Polygonal Models for Clothing.
Stria, J., Pruša, D., Hlaváč, V., Wagner, L., & Petrik, V. (2014). Garment
Perception and its Folding using a Dual-arm Robot. IROS.
Willimon, B., Birchfield, S., & Walker, I. (2011). Model for unfolding
laundry using interactive perception. IEEE International Conference on
Intelligent Robots and Systems, 4871–4876.