We report on the design and development of HandWaver, a gesture-based mathematical making environment for use with immersive, room-scale virtual reality. A beta version of HandWaver was developed at the IMRE Lab at the University of Maine and released in the spring of 2017. Our goal in developing HandWaver was to harness the modes of representation and interaction available in virtual environments and use them to create experiences where learners use their hands to make and modify mathematical objects. In what follows, we describe the sandbox construction environment, an experience within HandWaver where learners construct geometric figures using a series of gesture-based operators, such as stretching figures to bring them up into higher dimensions, or revolving figures around axes that learners can position by dragging and locking. We also describe plans for research and future development.

OVERVIEW OF HANDWAVER

HandWaver is a gesture-based virtual mathematical making environment, currently optimized for in-room (as opposed to seated) immersive virtual reality platforms (such as the HTC Vive) that support gesture recognition. From points in space, users can construct one-, two-, and three-dimensional mathematical objects through iterations of gesture-based operators. Figure 1 shows iterations of the stretch operator: a point is stretched into a line segment; the line segment is stretched into a plane figure; the plane figure is stretched into a prism. The hands shown in the images are virtual renderings of a user's actual hands, tracked via a Leap Motion sensor mounted to the virtual reality headset (see Figure 2).

Figure 1. Different cases of the stretch operator: a point is stretched into a line segment, the segment is stretched into a plane figure, and the plane figure is stretched into a prism.

Figure 2. A user (red sweatshirt) in the virtual space. The large monitor displays a 2D view of the user's first-person view of the virtual world.
The device that tracks the user's hand movements is mounted to the front of the headset he is wearing.

ICTMT 13 323 Lyon 3-6 July 2017

A second gesture-based operator is revolve. Users can position an axis in space, select objects to rotate around the axis, and then spin a wheel to revolve the selected objects around the axis. Revolving objects in this way creates surfaces of revolution. Figures 3 and 4 show different cases of the revolve operator. In Figure 3, a point is revolved to create a circle; the circle is then revolved around one of its diameters to create a sphere; and the circle is revolved around an external axis to create a torus.

Figure 3. Different cases of the revolve operator. The ship's wheel is a spindle that users turn to revolve figures. The line through the ship's wheel is the axis of rotation.

In Figure 4, a segment is revolved parallel to an axis of rotation to create a cylinder; a segment is revolved perpendicular to an axis of rotation to create an annulus; and the annulus is revolved around itself to create a sphere with a hole in its center.

Figure 4. Different cases of revolving a segment. When the segment is parallel to the axis of rotation, the result is a cylinder. When the segment is perpendicular, the result is an annulus. The last image shows an annulus revolved around itself to create a sphere with a hole in its center (the hole is made visible by slicing the sphere).

We organized the sandbox environment around the stretch and revolve operators to help learners train their dimensional deconstruction skills (Duval, 2014). Dimensional deconstruction is the process of resolving geometric figures into lower-dimensional components, rather than seeing them as whole, fixed shapes. In the HandWaver sandbox, learners can fluidly move from lower-dimensional shapes (e.g., circles) to their higher-dimensional analogs (e.g., spheres) and vice versa.
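The paper does not describe HandWaver's implementation, but the geometry behind the revolve operator is standard: each selected point is swept through rotated copies about the chosen axis, and the union of those copies samples the surface of revolution. The sketch below illustrates this with Rodrigues' rotation formula; the function name `revolve` and its parameters are illustrative, not HandWaver's API.

```python
import numpy as np

def revolve(points, axis_point, axis_dir, steps=64):
    """Sweep each point about an axis, returning a (steps, N, 3) array of
    rotated copies -- a sampling of the surface of revolution."""
    k = np.asarray(axis_dir, dtype=float)
    k = k / np.linalg.norm(k)                          # unit axis direction
    p = np.asarray(points, dtype=float) - axis_point   # work relative to the axis
    frames = []
    for theta in np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False):
        # Rodrigues' formula: v' = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
        rotated = (p * np.cos(theta)
                   + np.cross(k, p) * np.sin(theta)
                   + k * (p @ k)[:, None] * (1.0 - np.cos(theta)))
        frames.append(rotated + axis_point)
    return np.stack(frames)

# A point revolved about the z-axis traces a circle (Figure 3, first case);
# revolving a circle's points about a diameter would likewise sample a sphere.
circle = revolve([[1.0, 0.0, 0.0]], axis_point=[0, 0, 0], axis_dir=[0, 0, 1])
```

Revolving a segment's sample points the same way yields the cylinder and annulus cases of Figure 4, depending on whether the segment is parallel or perpendicular to the axis.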
The environment brings plane and solid geometry together, subjects that have been separated from each other in the usual presentation of geometry in K-12 schools. The solid analogs of plane figures, in particular sphere-and-plane constructions, are "seldom developed" or "slighted...owing to their theoretic nature" (Franklin, 1919, p. 147). Three-dimensional dynamic geometry software (e.g., GeoGebra or Cabri 3D) has made it possible to engage in such constructions; however, the limitations of two-dimensional screens have constrained their practicality. But for users immersed in a three-dimensional space, where the user has natural control over the angle at which an object is viewed, is able to move and manipulate the object in space, and can readily select the components of a figure to be incorporated into a new construction, three-dimensional constructive geometry becomes more feasible. Thus, a final feature of the sandbox environment is a set of three-dimensional analogs of classic construction tools.

The arctus tool (Figure 5) allows users to make a sphere centered at one point and passing through any other point. The size of the arc shown in the figure is variable, and the midpoint of the arctus tool can be locked to any point in the display. Arctus is a spatial compass that creates spheres: the user sets the arc to have the desired radius and then generates a sphere by spinning the arc through space.

Figure 5. The arctus tool being used to inscribe a sphere. Users position the tool on a center point and on a point on the surface of the sphere. To generate the sphere, one turns the circle through space by spinning the blue wheel.

The flatface tool (Figure 6) allows users to define a plane through any three points. A user sets one of the lines of the flatface to coincide with two of the three points. Once in place, the user sets the second line so that it passes through the third point. To generate the plane, one acts with the stretch gesture on one of the lines of the flatface.
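The two tools correspond to elementary constructions: arctus determines a sphere from a center and a surface point (the radius is their distance), and flatface determines a plane from three points (its normal is the cross product of two edge vectors). A minimal sketch of that underlying math, with illustrative function names rather than anything from HandWaver itself:

```python
import numpy as np

def sphere_through(center, surface_point):
    """Sphere centered at one point and passing through another, as with
    arctus: the radius is the distance between the two points."""
    center = np.asarray(center, dtype=float)
    radius = np.linalg.norm(np.asarray(surface_point, dtype=float) - center)
    return center, radius

def plane_through(a, b, c):
    """Plane through three points, as with flatface: returns a unit normal n
    and offset d such that n . x = d for every point x on the plane."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    n = np.cross(b - a, c - a)    # normal to both edge vectors
    n = n / np.linalg.norm(n)     # (degenerate if the three points are collinear)
    return n, n @ a

c, r = sphere_through([0, 0, 0], [3, 4, 0])            # radius 5
n, d = plane_through([0, 0, 0], [1, 0, 0], [0, 1, 0])  # the z = 0 plane
```

The collinear case noted in the comment is exactly why flatface requires the second line to pass through a third point not already on the first line.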
We implemented plane-and-sphere constructions via gesture- (and motion-) based virtual tools to mimic the physical actions of spinning a compass or drawing a line with a straightedge. Our goal in doing so was to highlight the manual history of making geometric figures.

Figure 6. Series of images showing the flatface tool being used to spawn a plane.

With arctus and flatface, learners can complete solid geometry construction tasks that are inherently virtual, such as constructing a tetrahedron from three spheres (see Figure 7).

Figure 7. Constructing a tetrahedron from three spheres in the HandWaver sandbox.

These tools provide an occasion for learners to explore how plane geometry construction protocols can be extended to higher dimensions. Other experiences within the HandWaver environment include a volume lab, an operator lab, and LatticeLand, a spatial analog of the geoboard (Kennedy & McDowell, 1998). Users can define the edges or faces of polyhedra by selecting a circuit of lattice points with a virtual pin (see Figure 8).

Figure 8. Connecting the dots in LatticeLand to define the edges of a cube (second frame), a parallelepiped (third frame), a pyramid (fourth frame), and a trapezoid (fifth frame); the sixth frame shows the trapezoid cut into components (the orange triangle, the blue trapezoid).

MOTIVATION AND DESIGN CONSIDERATIONS

Our primary goal in developing HandWaver was to provide a space where learners at all levels could use their hands to act directly on mathematical objects, without the need to mediate intuitions through equations, symbol systems, keyboards, or mouse clicks (Sinclair, 2014). We designed the environment around natural movements of users' hands to foreground the connection between diagrams and gestures (de Freitas & Sinclair, 2012; Chen & Herbst, 2013).
As one example of how the environment realizes this connection, the stretch operator multiplies (Davis, 2015) a single point into many to form a segment, multiplies a single segment into many to form a plane figure, or multiplies a single plane figure into many to form a solid. The notion that n-dimensional figures consist of adjoined (n-1)-dimensional figures is foregrounded by the generative use of the stretching gesture.
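This "multiplication" reading of stretch can be made concrete: an n-dimensional figure is sampled as a stack of translated copies of an (n-1)-dimensional one, swept along the stretch direction. The sketch below is our own illustration of that idea, not HandWaver's implementation; the name `stretch` and its parameters are hypothetical.

```python
import numpy as np

def stretch(vertices, direction, copies=10):
    """'Multiply' an (n-1)-dimensional figure into an n-dimensional one by
    stacking translated copies along a stretch direction: a point swept into
    a segment, a segment into a plane figure, a plane figure into a solid."""
    vertices = np.asarray(vertices, dtype=float)
    direction = np.asarray(direction, dtype=float)
    # Each t in [0, 1] yields one translated copy of the original figure.
    return np.stack([vertices + t * direction
                     for t in np.linspace(0.0, 1.0, copies)])

# A unit segment (two endpoints) stretched upward samples a unit square;
# stretching the square's copies again would sample a cube, and so on.
segment = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
square_samples = stretch(segment, direction=[0, 0, 1])
```

Iterating the same operation is what carries the point-to-segment-to-prism progression of Figure 1, and it makes the adjoined-(n-1)-dimensional-figures structure explicit in the data: the result is literally a stack of copies of the input.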