Designing Vyo, a Robotic Smart Home Assistant: Bridging the Gap Between Device and Social Agent

Michal Luria, Guy Hoffman, Benny Megidish, Oren Zuckerman, and Sung Park

M. Luria, G. Hoffman, B. Megidish, and O. Zuckerman are with the Media Innovation Lab, IDC Herzliya, P.O. Box 167, Herzliya 46150, Israel, [email protected]. G. Hoffman is with the Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA, [email protected]. S. Park is with SK Telecom, Euljiro 2-ga, Jung-gu, Seoul, 100-999, Republic of Korea, [email protected].
Abstract— We describe the design process of “Vyo”, a personal assistant serving as a centralized interface for smart home devices. Building on the concepts of ubiquitous and engaging computing in the domestic environment, we identified five design goals for the home robot: engaging, unobtrusive, device-like, respectful, and reassuring. These goals led our design process, which included simultaneous iterative development of the robot’s morphology, nonverbal behavior, and interaction schemas. We continued with user-centered design research using puppet prototypes of the robot to assess and refine our design choices. The resulting robot, Vyo, straddles the boundary between a monitoring device and a socially expressive agent, and presents a number of novel design outcomes: the combination of TUI “phicons” with social robotics; gesture-related screen exposure; and a non-anthropomorphic monocular expressive face. We discuss how our design goals are expressed in the elements of the robot’s final design.
Fig. 1. Vyo, a personal robot serving as a centralized interface to smart
home devices. The microscope-like robot transitions between being a social
agent communicating with the user (left) and an inspection device used as a
tool (right). 3D-printed icons represent smart home devices and are placed
on the robot’s turntable to activate devices and to control their settings.
I. INTRODUCTION
Devices with new sensing and monitoring capabilities are
entering the home, often collectively called “Smart Home” or
“Internet of Things” (IoT) devices [1]. Researchers have been
divided over the desired user experience of these devices.
Conceptual frameworks range from high system autonomy
and invisibility on the one hand [2], [3], [4], to technology
that promotes the user’s sense of control and engagement on
the other [5], [6].
Social robots are also increasingly entering the domestic
environment. As such, they could provide a new model for
the “smart home” user experience, balancing autonomy and
engagement. In this paper we describe the design process of
such a robotic smart home interface.
Based on the research literature, user interviews, and
expert input, we define five design goals as guidelines: engaging, unobtrusive, device-like, respectful, and reassuring.
We then describe our process leading to the construction of a
new socially expressive smart home robot, Vyo. The process
simultaneously and iteratively tackles the robot’s morphology, nonverbal behavior (NVB) and interaction schemas.
The final prototype is a large microscope-like desktop
robot (Fig. 1) capable of social gestures. Users communicate with it using tangible objects (“phicons”) placed on
a turntable at the robot’s base. Each phicon represents one
smart home device, and is used to turn the device on and off,
to get information on its status, and to control its parameters.
The robot also features a screen, exposed to the user via an
expressive bowing gesture when a phicon is detected. In this
interaction schema the user looks “through” the robot’s head
into the information represented by the phicon, treating the
robot as an inspection tool. When a device requires the user’s
attention, the robot alerts them using unobtrusive gestures,
following the principle of “Peripheral Robotics” [7].
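As a concrete illustration of this interaction schema, here is a minimal Python sketch of the device/agent transition loop. It is illustrative only, not Vyo's actual code: detect_phicon, bow, rise, and show_status are hypothetical stand-ins for the robot's perception and motor layers.

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()     # upright social-agent posture
    INSPECT = auto()  # bowed, screen exposed, acting as a tool

# Hypothetical event loop for the device/agent transition described
# above; detect_phicon(), bow(), rise(), and show_status() stand in
# for the robot's real perception and motor layers.
def interaction_loop(robot):
    mode = Mode.IDLE
    while True:
        phicon = robot.detect_phicon()         # camera watches the turntable
        if phicon and mode is Mode.IDLE:
            robot.bow()                        # expose the head-embedded screen
            robot.show_status(phicon.device)   # user "looks through" the head
            mode = Mode.INSPECT
        elif not phicon and mode is Mode.INSPECT:
            robot.rise()                       # return to social-agent posture
            mode = Mode.IDLE
```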
II. RELATED WORK
A. Smart Homes
Homes are becoming increasingly “smart” by being
equipped with new sensors and monitors [1]. This raises the question of how these smart homes should be controlled.
Weiser envisioned the future of domestic technology to be
calm, ubiquitous, autonomous, and transparent [2], an idea
later supported by other researchers [4], [3]. However, this
notion generates an “automation invisibility” problem, where
users could lose the sense of control over their domestic
environment [8]. In contrast, other notions on intelligent
environments suggest that technology should engage people
rather than soothe them. Rogers, in particular, argues for a
shift from proactive computing to proactive people [6].
In most cases, these “smart homes” are envisioned as
a single interconnected unit [9]. In reality we find a host
of unrelated, separately controlled devices. Research attempting to solve this problem spans a variety of interface
modalities, including voice recognition [10], screen-based
interfaces [11], gesture control [12] and tangible objects [13].
In this work, we suggest a social robot as a central interface
for smart home management.
B. Social Robotics in Smart Homes
In recent years, social robots have been moving beyond the laboratory into personal spaces. This trend is likely to grow, as many people favor robotic companions accompanying their domestic lives [14]. In this work we propose using
them as an interface for integrated smart home control.
Previous research explored the idea of a social robot as
part of the smart home [15], [16]. However, these works
focus on assistive technology for the elderly rather than smart
home management for the general public, as suggested here.
In particular, we present the design of a new social robot
as a centralized interface for smart homes, combining principles of robotic expressiveness and interaction with tangible
objects.
C. HRI Design Techniques
Academic research on social robot design has proposed a
variety of methods and techniques. These can be categorized
as addressing three interrelated aspects of HRI:
Some work relates to designing robotic nonverbal behaviors (NVB). These include embodied improvisation [17],
animation studies [18], [19], and observation of human-human interactions [20].
Other methods address the design of robot morphologies.
These include surveys, rapid prototyping, and laboratory
studies exploring and evaluating the robot’s body dimensions
[21], proportions [22] and facial features [23].
Finally, the interaction schemas of the robot have been
designed using methodologies taken from HCI research, such
as interviews [24], ethnographies [25], storyboarding [26],
and interaction experience mapping [27].
In this work, we emphasize the interrelatedness between
the robot’s morphology, NVB, and interaction schemas using
a simultaneous iterative design process, which goes back and
forth between these three design channels.
III. DESIGN GOALS
We identified five design goals, based on prior research in
ubiquitous computing, engaging technologies, and domestic
robots. These were further informed by interviews with users
about their needs and desires for smart home interfaces [28].
A. Engaging
A smart home interface should promote the user’s sense
of connection with their domestic environment. This sense
can be evoked by engaging the user and bringing back
“excitement of interaction” [6]. According to Rogers, one way to engage the user would be to design for a physical-digital experience, a tangible user interface.
B. Unobtrusive
Previous research, expert input from industry, and our
own user interviews all suggested that domestic technology
should be at least semi-automated, aspiring to as few disruptions as possible. This would allow people to focus on
developing close relationships within their family [29], [4].
Thus, our second design goal is to design a robot that will draw attention only when it is essential. Of course, there exists a tension between this and the “engaging” goal, and the design should strive to balance the two.

Fig. 2. Morphology sketches and high-fidelity 3D models of two concepts: Expressive Appliance (top) and Consumer Electronics (bottom).
C. Device-like
Most domestic robot designs see the robot primarily as a social agent, often with a humanoid or anthropomorphic form. Following the arguments for “Designing Non-Anthropomorphic Robots” in [18], we aim for a more device-like domestic robot design, striking a fine balance between device and social agent.

This point is also supported by [14], where participants preferred a home robot characterized as a “butler-like” assistant (79%) or as an appliance (71%). Only a few wanted a home robot to be their friend or mate.
D. Respectful
According to the butler-like preference described in [14],
people expect the robot to have “etiquette”, “politeness”, and
“sensitivity to social situations”. For example, a butler is
expected to sense when it is suitable to actively alert their
employer, as opposed to when to step aside and wait until
given further orders. We therefore suggest robotic assistants
should evoke a sense of respect towards their “employer”.
E. Reassuring
Finally, our interviews, as well as [14], strongly suggested
people need the domestic robot to be reliable, reassuring, and
trustworthy. This cannot rely solely on fault-tolerance [30],
but should also be embodied in the design of the robot’s
morphology, NVB, and interaction schemas.
IV. DESIGN PROCESS
To achieve these design goals, we collaborated with an
interdisciplinary team from the fields of behavioral science, design, and engineering, together with practitioners in
movement-related arts (actors, animators and puppeteers).
Our design process started with the simultaneous and iterative development of the robot’s morphology and nonverbal
behavior (NVB). Both paths continuously defined and refined
the robot’s interaction schemas. These three aspects were
evaluated in a user-centered design study using a puppet
version of the robot, leading to the final prototype design.
A. Morphology
The design of the robot’s morphology followed an iterative
process involving sketching, 3D modeling, and rapid prototyping, inspired by the movement-centric design approach
introduced in [18] and elaborated in [7]. Our design process
went beyond the work described in these papers in two ways:
First, in previous work, alternatives were considered only in
the early sketch stage. Here, we simultaneously developed
two options to an advanced design stage, including sketching,
3D modeling, and animation testing (Fig. 2). Second, we
built and rendered a high-fidelity 3D model (Fig. 2 right),
which was iteratively interleaved with the sketching and
3D animation phases. Through it we explored materials,
morphological accents, and finishing details at an early stage.
The following sections describe the considerations, steps,
and decisions made in the morphology design stage.
1) Sketching: We started with freehand sketches exploring
widely varying shapes based on real-world inspirations.
Supporting the “device-like” design goal, we did not want
the robot to have human features, but to be more of a
socially expressive household device. We thus started two design paths, which we called Consumer Electronics (CE)
and Expressive Appliance (EA).
On the CE path, we aimed for a minimal design that
would be familiar as a high-end consumer electronics object.
This resulted in simple shapes, expressing most of the user
feedback through LED lighting and graphical information. In
fact, we decided that any physical motion would not affect
the envelope of the robot, but only rotate elements along the
major axis of the robot (Fig. 2 bottom center). Inspirations
for this robot’s design were contemporary Bluetooth speakers, Amazon’s voice interface device “Echo”, and, for major-axis rotation, the fictional robot R2D2.
On the EA path, sketches focused on appliance morphologies. Fig. 2 (top left) shows some early sketches, mostly
following a kitchen mixer / microscope / overhead projector
theme, with some diverging towards designs inspired by
wall-mounted displays and even clothes hangers (not shown).
Some key morphology and interaction ideas related to our
design goals emerged at the sketching stage: (a) straight
envelope lines—related to the device-like goal; (b) tangible
icons (“phicons”) representing the devices managed by the
robot—related to the engaging and reassuring goals; (c) a
“bowing” gesture, which exposes a hidden head-embedded
screen presenting additional information—related to the respectful and unobtrusive goals. The last design idea also
placed the robot on a midpoint between an inspection device
and a social agent. When the robot rises to face the user, it acts as a social agent. When it bows, it becomes more passive and enables the user to “look through” its head like a microscope, examining the status of
the smart home devices. This had the additional engaging
aspect of “peeking into the robot’s mind.”
As the sketches evolved, the microscope became an increasingly salient inspiration. As a result, we included a
rotating face plate with a lens-like feature that would serve
as an expressive element in the robot’s face design. The one-DoF lens-like “eye”, placed off the face plate’s center of rotation (see Fig. 6, left), became an intriguing face design, evoking seriousness when inspecting the home devices’ phicons while also expressing a surprising range of facial expressions through movement.
2) Material: Since the robot is intended to be a household
object, we wanted the material to reflect appropriate qualities.
Each of the two design paths was assigned its own materiality. The EA path followed a nostalgic direction, including white
enamel or Bakelite, with brass or gold accents, found often
in traditional household items. The CE path was designed
with more of a contemporary material in mind, such as matte
plastic or black anodized aluminum.
3) Rapid Prototyping of Low-fidelity Prototypes: Another
extension relative to our previous design process was the
use of several variations of low-fidelity prototypes. In our
previous work, we used a single wooden prototype to start
developing software on [31], [18]. In this case, we used
rapid prototyping techniques to develop variations of the
robot in low-fidelity “puppet” versions that could be moved
manually, structured according to the robot’s DoFs. This was
not only for early software development, but also to explore
various robot scales and to experience the DoFs as they
moved in space. Furthermore, we used these variations in
the movement simulations with a puppeteer, and in the user-centered design studies discussed in Sections IV-B and V.
B. Nonverbal Behavior
In parallel to working on the robot’s morphology, we
developed its movement and behavior. For this, we collaborated with experts in movement-related arts (animators,
actors and a puppeteer), using the above-mentioned low-fidelity prototypes.
1) Animation Sketches: Animation sketches along the lines of [18] explored the relationships between the robot’s parts.
In particular we wanted to explore how the placement and
angle of the various DoFs would affect the robot’s expressive
capabilities. Fig. 3 shows filmstrip stills from this stage of
the design. The animation sketches also helped us determine
the relative sizes between the robot’s parts.
2) Embodied Improvisations: Working with professional
actors, we carried out embodied improvisations in the spirit
of [17]. Each improvisation included two actors who were
asked to act out situations between a butler and a tenant.
This was a metaphor for the human-robot interaction we
aimed for. We used improvisations to learn about nuances
in a butler-tenant relationship, later implementing them in
the robot’s behavioral patterns. One of the findings was the
particular unobtrusive nonverbal behavior of the “butlers”.
This sparked the idea to give the robot a number of peripheral
notification gestures, which would be expressed using subtle
changes in its normal idle behavior. Another behavioral pattern we found was that the butlers were constantly attentive
and alert when the tenant was nearby.
Fig. 3. Animation sketches used to explore the robot’s movement, DoF placements, and expression capabilities.

3) Movement Simulations: We conducted movement simulations with a professional puppet designer, an expert in movement and NVB. We used the low-fidelity puppet prototypes of the robot to explore possible expressions, gestures
and responses in key scenarios. The designed behaviors
were later tested in the user-centered design studies (see
Section V).
4) Physical DoF Explorations: Finally, we 3D printed a
variety of angled connectors and head models to compare the impressions gleaned from the animation sketches with the experience of facing the robot in reality. We compared the expressive capability
of various lengths of DoFs and different angles between
them. These were then integrated in the robot’s low-fidelity
prototypes and animated for comparison.
V. USER-CENTERED DESIGN STUDIES
After developing the robot’s morphology and NVB, and before constructing the final prototype, we conducted user-centered design studies using puppet versions of the robot.
We did this to simulate a variety of possible scenarios and
robot behaviors, akin to the methods presented in [17], [18].
The study aimed for early stage evaluation with potential
users in the spirit of paper prototyping [32]. The study set
out to evaluate the robot’s morphology, NVB, and interaction
schemas, and in particular to get a sense of the way the
robot’s personality is perceived before finalizing the design.
A. Procedure
We recruited a sample of seven students (3 female, 4 male) from our target population: Communication and Computer Science undergraduates unfamiliar with the project and its goals. All participants were over the age of 23 (M=27) and managed their own household. The small sample served as a preliminary assessment of the robot design, partway along the process.
We used three low-fidelity versions of the robot, two of
them puppets with full articulation of the robot’s movement
according to its DoFs (Fig. 4 left). The study was held in an
experiment room with controlled lighting, no windows, and
two recording video cameras. The users were informed of
the robot’s general purpose: a robot serving as a centralized
smart home management interface.
The study included qualitative assessment through semi-structured interviews and a quantitative part conducted with
the help of a “Personality Meter”, a hand-sized box with
five linear slide potentiometers. Each slider was assigned
to a perceived personality trait: responsibility, attentiveness,
politeness, professionalism and friendliness (Fig. 4 left). This
tool allowed participants a quick relative evaluation of the
robot along several measures at once.
We evaluated the three aspects of our design process:
1) Morphology / Size: The robot was introduced to the
participants in three different sizes (33cm, 25cm, and 21cm
tall) and in slightly different forms. Participants were asked
to evaluate each size on the “Personality Meter”.
2) NVB / Gestures: We defined two or three optional gestures for each behavior (e.g., “listening”, “getting attention”
and “accepting request”). The gestures were based on the
movement simulations conducted with the puppet designer.
The experimenter was trained in advance to accurately and
consistently puppeteer the robot’s defined gestures. Each
gesture was evaluated in both qualitative and quantitative
measures: first the participants were asked to identify the
meaning of each gesture (qualitative) and then they were
asked to evaluate it using the “Personality Meter”, addressing
the five personality traits (quantitative).
3) Interaction Schemas / Phicons: Two forms of objects
were given to the participants for a qualitative evaluation:
3D printed phicons and cubes (Fig. 4 right). The procedure
was gradual revelation: At first the participants were asked
what they thought the objects were; then they were told the
objects are a tool to communicate with the robot; next, that
each object represents a smart home device, and so on.
B. Results
In the quantitative part of the study, each “Personality
Meter” sample was coded using digital images of the meter,
and put on a scale of 1-5. As there were relatively few
participants (n = 7) we present the results in overview and
graphical form. The qualitative semi-structured interviews
were video recorded and transcribed. We analyzed the transcripts to identify recurrent themes.
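To make the coding step concrete, the following is a small sketch of mapping normalized slider positions onto the 1-5 scale, one value per trait. The helper and its normalized inputs are illustrative assumptions; in the study, positions were coded from digital images of the meter rather than read electronically.

```python
TRAITS = ["responsibility", "attentiveness", "politeness",
          "professionalism", "friendliness"]

def code_sample(raw_positions):
    """Map slider travel (0.0 = bottom, 1.0 = top) onto the 1-5 scale,
    one value per personality trait."""
    assert len(raw_positions) == len(TRAITS)
    return {trait: round(1 + 4 * pos, 2)
            for trait, pos in zip(TRAITS, raw_positions)}

# e.g. code_sample([0.9, 0.5, 0.75, 1.0, 0.25])
# -> {'responsibility': 4.6, 'attentiveness': 3.0, 'politeness': 4.0,
#     'professionalism': 5.0, 'friendliness': 2.0}
```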
1) Morphology / Size: When analyzing the quantitative
data, we found the largest robot was rated highest on all
five personality measures, with a trend showing that the
smaller the robot was, the lower it was rated (Fig. 5). The
qualitative findings also supported this conclusion. In the interviews, the largest robot (33cm tall) was described as more reliable, professional, and responsible. As both evaluation methods indicated that the large robot is perceived more positively, we designed the final prototype to be close to its large-scale version.

Fig. 4. Design studies using puppet prototypes with a “Personality Meter” (left), and two forms of TUI objects: “phicons” and cubes (right).

Fig. 5. Evaluation of the robot’s size according to five personality traits. Box plots show quartiles; white dots show means.
2) NVB / Gestures: When comparing two or three different gestures for a specific behavior, we were surprised to
find that most participants rated one of the gestures higher
across all personality traits. This was in contrast to our
expectations for a trade-off between characteristics for each
gesture. For instance, a gesture of the robot leaning in and
slightly turning its head as an expression of “listening” was
better evaluated than the robot rolling its head to express the
same action, on all five parameters. This could be due to the
fact that participants had difficulty separating the character
traits, or that people generally converge to a “winner-takes-all” mentality. The qualitative interviews also suggested that
the meaning of the preferred gestures was better understood
by participants. This allowed us to easily choose the robot’s
gestures for each planned behavior.
3) Interaction Schemas / Phicons: Most participants understood the physical objects were a form of communication
with the robot, each representing a smart device. Some
participants perceived the robot’s turntable as an affordance
indicating the objects should be placed on top of it to
interact. Others needed further explanation. We did not find a clear preference for one of the two object forms, but
similar advantages and disadvantages were pointed out in
the qualitative study: the phicons were perceived as playful
and innovative but also easy to lose. Conversely, the cubes
were described as unified and simple to store.
VI. FINAL PROTOTYPE
Guided by our initial design goals and the findings of our design exploration and research, we developed the fully functional robot prototype, including its structure, shell, electronics, and software. The robot includes five DoFs: one at the bottom of the turntable, one for base tilt, one for neck tilt, one combined neck pan and roll, and a
monocular rotating face (Fig. 1 and 6).
The robot has a microscope-inspired form with a Bakelite-like finish, and communicates with the user through tangible icons
(phicons). Each phicon represents a smart home device,
allowing the user to control and monitor it by placing it onto
the robot’s rotating turntable. Moving the phicon controls the
parameters of the device (Fig. 6 right). Furthermore, the robot
features a hidden screen on the back of its head, exposed
through a respectful “bowing” gesture. The screen serves to
give more information about devices, and as visual feedback
to the user’s actions. The user can “look through” the robot’s
head at the information represented by the phicon.
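As an illustration of the phicon-as-controller idea, the sketch below maps a phicon's angular position on the turntable to a device setting. The function, its inputs, and the thermostat range are hypothetical; the paper does not specify the exact mapping.

```python
# Hypothetical mapping from a phicon's rotation on the turntable to a
# device setting, e.g. turning a thermostat phicon to set temperature.
# angle_deg is assumed to come from the head camera's phicon tracking.
def phicon_angle_to_setpoint(angle_deg, lo=15.0, hi=30.0):
    angle = angle_deg % 360.0              # normalize to one turn
    return lo + (hi - lo) * angle / 360.0  # linear map onto [lo, hi]

# e.g. phicon_angle_to_setpoint(180.0) -> 22.5 (degrees Celsius)
```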
The interaction is complemented by expressive gestures
using the robot’s five degrees of freedom. These include
the rotating monocular facial expression system mentioned
above (Fig. 6 left).
The robot’s main controller is a Raspberry Pi 2 Model B
running a combination of Java and Python code. In addition,
it includes a camera to detect phicons and faces, a 1.8-inch
TFT display, and a speaker for sound output. The DoFs are
driven by a combination of Dynamixel MX-64, MX-28, and XL-320 motors, controlled from the Raspberry Pi GPIO. The
robot’s shell is 3D printed, and externally finished with five
layers of paint and sanding.
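For a sense of what this motor layer can look like, here is a minimal sketch of commanding a single joint through Robotis's dynamixel_sdk Python package. It assumes Protocol 2.0 firmware on the MX-series servos (Goal Position at register 116), a USB serial adapter rather than the robot's GPIO wiring, and a made-up servo ID; the XL-320 uses a different control table, so addresses would change.

```python
from dynamixel_sdk import PortHandler, PacketHandler  # pip install dynamixel-sdk

ADDR_TORQUE_ENABLE = 64    # Protocol 2.0 control table (MX/X series)
ADDR_GOAL_POSITION = 116

port = PortHandler("/dev/ttyUSB0")  # serial adapter to the servo bus
packet = PacketHandler(2.0)         # Protocol 2.0
port.openPort()
port.setBaudRate(57600)

NECK_TILT_ID = 2  # hypothetical servo ID for the neck-tilt DoF

packet.write1ByteTxRx(port, NECK_TILT_ID, ADDR_TORQUE_ENABLE, 1)
# "Bow": lower the head to expose the hidden screen (0-4095 ticks/turn)
packet.write4ByteTxRx(port, NECK_TILT_ID, ADDR_GOAL_POSITION, 1024)
```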
VII. RELATION TO DESIGN GOALS
Our design goals were embodied in the final prototype as
follows:
A. Engaging
To encourage engagement in interaction with the robot, we
used physical icons to control smart home appliances. The
metaphor that inspired our design was that of a microscope:
the user actively examines the status of a device by placing
its phicon on the turntable. The user then peeks into the
robot’s head and “reads its mind” to view the information about
a specific device. Previous research shows tangible objects
have the potential to increase playfulness and exploration
[33], [34]. We believe that the use of physical objects in our
design can also support this kind of engagement. According
to initial findings in our study, participants perceived the
phicons as playful, due to their color and form.
B. Unobtrusive
To balance this playful engagement with respect for the
human relationships in the home, we designed the interaction
using the “Peripheral Robotics” paradigm [7], avoiding interruptions unless there is a critical matter, and generally waiting for the user to initiate interaction. We designed several
peripheral gestures to replace traditional notifications used
in home appliances. Since the robot is part of the domestic
environment, we assume that its regular gestures will be well
recognized by the user and fade into the background after
a while. This allows design of subtle changes in the robot’s
movement, indicating unexpected occurrences. For example,
a nervous breathing gesture was designed to cue there is a
non-urgent matter to attend to, one that can be ignored by the
user if they are busy. A second movement pattern, “urgent panic”, was designed to move the robot from the periphery of the user’s attention to the foreground.

Fig. 6. The resulting prototype, including a rotating monocular facial expression system (left) and “phicons” to control and monitor smart home devices (right).
The use of phicons also supports unobtrusive interaction,
compared to using a screen, making control quiet and domestic. This is inspired by minor rituals people do in their
homes—hanging a coat, placing keys in a bowl [35]. We
used this idea as a metaphor—a person can place phicons
on the robot’s surface to set their home preferences just as
easily as they would place keys on the counter. Furthermore,
if the preferences are constant, a glance toward the robot’s
turntable enables a quick status update.
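As a toy sketch of this peripheral-notification idea: the idle "breathing" is a slow sine oscillation of one DoF, and a pending non-urgent event merely nudges its rate and amplitude. set_neck_tilt is a hypothetical motor helper, and the parameters are illustrative rather than Vyo's tuned values.

```python
import math
import time

# Idle "breathing" as a slow sine oscillation of the neck-tilt DoF.
# A pending non-urgent event is signaled only by slightly faster,
# deeper breathing, keeping the cue in the user's periphery.
def idle_breathing(set_neck_tilt, pending_event=False):
    rate = 0.4 if pending_event else 0.25     # breaths per second
    depth = 2.0 if pending_event else 1.0     # tilt amplitude, degrees
    t0 = time.time()
    while True:
        phase = 2 * math.pi * rate * (time.time() - t0)
        set_neck_tilt(depth * math.sin(phase))  # oscillate around rest pose
        time.sleep(0.05)                        # ~20 Hz update rate
```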
C. Device-like
We attempted to balance between the “social agent” and
“device-like” qualities of the robot. In its “down” state, the
robot is structured to resemble a device and encourages
seeing it as a passive tool. In contrast, when greeting the
user or when accepting a command, the robot rises up
and serves as a social agent. This is supported by our
qualitative findings—usually when the robot rose to look
at the participant, the participant would look directly at the
robot’s face. However, when the robot was in its “down”
state, the participant would be focused on the turntable and
phicons.
This balance is also emphasized in its physical design—
the robot is angular from a lateral view, and more rounded
when viewed from the front.
D. Respectful
The respectful characteristic was inspired by the “butler”
metaphor, and was amplified by the actors’ embodied improvisations playing out the role of a human butler.
We express respectfulness in the design by having the robot rise to attention when the user is within its range.
For instance, when the user enters their home, the robot
responds with a greeting gesture, signaling it acknowledges
their arrival. The user can choose to respond to the robot,
or to ignore it, in which case the robot would “go back to
its own business”. Furthermore, whenever the user passes
by the robot, the robot notices them and responds in a slight
acknowledgment gesture. These decisions stem from our observations of human interactions played out by professional actors: when the butlers were trying to act out “respect”, they were attentive as long as the tenant was around.

An additional way in which the robot expresses respect towards the user is by employing “bowing-like” gestures.

E. Reassuring
In the design research we found the robot was perceived as
most reliable and authoritative in its largest form. Therefore,
we designed the robot at a size similar to that of the large puppet prototype.
We also designed expressive gestures to give feedback
to the user on their home status. For instance, when the
user places an object, the robot responds immediately by
examining it. Another way the user can be reassured is
through the screen on the robot’s head. The screen displays the icon currently being manipulated, matching the physical icon on the turntable in both form and color.
Moving a phicon gives immediate feedback on the newly set
parameters. The physical icons also enhance the visibility of
the smart appliances being monitored. They can be instantly
observed with a cursory glance, contributing to the user’s
sense of control over their home.
VIII. CONCLUSION AND FUTURE WORK
Domestic robots are an attractive candidate for integrated
smart-home interaction, providing an alternative to the more
common screen-based and voice-based interfaces. Based on
prior research in domestic interaction and our own interviews
of potential users, we identified five design goals for smart
home robotic assistants: engaging, unobtrusive, device-like,
respectful, and reassuring. We reported on our iterative
design process, addressing the three core HRI design paths:
morphology, nonverbal behavior, and interaction schemas,
which we explored in an interleaved manner. Our process
also brought together a broad range of design techniques including sketching, animation, material exploration, embodied
improvisations and movement simulations.
The outcome of our design process is Vyo, a socially expressive personal assistant serving as a centralized interface
to smart home devices. Our design goals are expressed in
the final prototype: (a) Engaging—physical objects represent
smart home devices and are used to control them. This
increases playfulness and exploration. The robot also engages
the user via socially expressive gestures. (b) Unobtrusive—
the robot avoids interruptions unless there is a critical matter.
It uses peripheral gestures instead of traditional notifications, and alerts about unexpected occurrences through subtle
changes in its idling movement. (c) Device-like—the robot
is designed as a microscope-like desktop device, which the
user operates. It does not have an anthropomorphic face, but
an expressive monocular rotating dot for facial expression.
(d) Respectful—inspired by a butler metaphor, the robot rises
to the user’s attention when they are within the robot’s range,
and goes back to a neutral pose when the user walks away.
To provide more information, the robot exposes a hidden
screen with a low bowing gesture. (e) Reassuring—form
factor and size were selected based on user studies to convey
reassurance and reliability. The robot provides feedback
about home device status by examining the phicons placed
on its turntable. It uses short acknowledgment gestures and
sounds when parsing voice commands.
Our work also presents novel HRI design elements that
have not been previously explored. These include the combination of human-robot interaction with tangible interface objects; the use of physical icons both to represent the information in smart home devices and to control it; a
hidden screen that is revealed only when necessary through
an expressive gesture; the transition of the robot from being a
passive device-like tool to having social agent characteristics;
and the design of a 1-DoF rotating dot for facial expressions.
Throughout our design process, we conducted exploratory user-centered design studies to examine our design decisions, in
the spirit of paper prototyping [32]. These were run with a
small sample from our target user population. The next step
is to conduct a larger-scale evaluation of the final prototype
along the stated design goals. We are also in the process of
using Vyo to evaluate other aspects of domestic interaction.
We hope to further understand interaction patterns with a
device-like home robot that communicates with the users
and the home via tangible objects.
REFERENCES
[1] M. R. Alam, M. B. I. Reaz, and M. A. M. Ali, “A review of smart
homes—past, present, and future,” Systems, Man, and Cybernetics,
Part C: Applications and Reviews, IEEE Transactions on, vol. 42,
no. 6, pp. 1190–1203, 2012.
[2] M. Weiser, “The computer for the 21st century,” Scientific american,
vol. 265, no. 3, pp. 94–104, 1991.
[3] N. Streitz and P. Nixon, “The disappearing computer,” Communications of the ACM, vol. 48, no. 3, pp. 32–35, 2005.
[4] A. Woodruff, S. Augustin, and B. Foucault, “Sabbath day home
automation: it’s like mixing technology and religion,” in Proceedings
of the SIGCHI conference on Human factors in computing systems.
ACM, 2007, pp. 527–536.
[5] S. S. Intille, “Designing a home of the future,” IEEE pervasive
computing, vol. 1, no. 2, pp. 76–82, 2002.
[6] Y. Rogers, “Moving on from weiser’s vision of calm computing:
Engaging ubicomp experiences,” in UbiComp 2006: Ubiquitous Computing. Springer, 2006, pp. 404–421.
[7] G. Hoffman, O. Zuckerman, G. Hirschberger, M. Luria, and T. Shani-Sherman, “Design and evaluation of a peripheral robotic conversation
companion,” in Proc. of the Tenth Annual ACM/IEEE Int’l Conf. on
HRI. ACM, 2015, pp. 3–10.
[8] T. Koskela and K. Väänänen-Vainio-Mattila, “Evolution towards smart
home environments: empirical evaluation of three user interfaces,”
Personal and Ubiquitous Computing, vol. 8, no. 3-4, pp. 234–240,
2004.
[9] R. Lütolf, “Smart home concept and the integration of energy meters
into a home based system,” in 7th International Conference on
Metering Apparatus and Tariffs for Electricity Supply. IET, 1992,
pp. 277–278.
[10] F. Portet, M. Vacher, C. Golanski, C. Roux, and B. Meillon, “Design
and evaluation of a smart home voice interface for the elderly: acceptability and objection aspects,” Personal and Ubiquitous Computing,
vol. 17, no. 1, pp. 127–144, 2013.
[11] P. M. Corcoran and J. Desbonnet, “Browser-style interfaces to a home
automation network,” Consumer Electronics, IEEE Transactions on,
vol. 43, no. 4, pp. 1063–1069, 1997.
[12] C. Kühnel, T. Westermann, F. Hemmert, S. Kratz, A. Müller, and
S. Möller, “I’m home: Defining and evaluating a gesture set for smart-home control,” International Journal of Human-Computer Studies,
vol. 69, no. 11, pp. 693–704, 2011.
[13] S. Oh and W. Woo, “Manipulating multimedia contents with tangible
media control system,” in Entertainment Computing–ICEC 2004.
Springer, 2004, pp. 57–67.
[14] K. Dautenhahn, S. Woods, C. Kaouri, M. L. Walters, K. L. Koay,
and I. Werry, “What is a robot companion-friend, assistant or butler?”
in Intelligent Robots and Systems, 2005.(IROS 2005). 2005 IEEE/RSJ
International Conference on. IEEE, 2005, pp. 1192–1197.
[15] K.-H. Park, H.-E. Lee, Y. Kim, and Z. Z. Bien, “A steward robot for
human-friendly human-machine interaction in a smart house environment,” Automation Science and Engineering, IEEE Transactions on,
vol. 5, no. 1, pp. 21–25, 2008.
[16] C.-C. Tsai, S.-M. Hsieh, Y.-P. Hsu, and Y.-S. Wang, “Human-robot
interaction of an active mobile robotic assistant in intelligent space
environments,” in Systems, Man and Cybernetics, 2009. SMC 2009.
IEEE International Conference on. IEEE, 2009, pp. 1953–1958.
[17] D. Sirkin and W. Ju, “Using embodied design improvisation as a
design research tool,” in Proc. Int’l. Conf. on Human Behavior in
Design, Ascona, Switzerland, 2014.
[18] G. Hoffman and W. Ju, “Designing robots with movement in mind,”
Journal of Human-Robot Interaction, vol. 3, no. 1, pp. 89–122, 2014.
[19] L. Takayama, D. Dooley, and W. Ju, “Expressing thought: improving
robot readability with animation principles,” in Proc of the 6th int’l
conf. on HRI. ACM, 2011, pp. 69–76.
[20] Y. Kuno, H. Sekiguchi, T. Tsubota, S. Moriyama, K. Yamazaki,
and A. Yamazaki, “Museum guide robot with communicative head
motion,” in Robot and Human Interactive Communication, 2006.
ROMAN 2006. The 15th IEEE International Symposium on. IEEE,
2006, pp. 33–38.
[21] M. K. Lee, J. Forlizzi, P. E. Rybski, F. Crabbe, W. Chung, J. Finkle,
E. Glaser, and S. Kiesler, “The snackbot: documenting the design
of a robot for long-term human-robot interaction,” in HRI, 2009 4th
ACM/IEEE Int’l Conf. on HRI. IEEE, 2009, pp. 7–14.
[22] C. Diana and A. L. Thomaz, “The shape of simon: creative design
of a humanoid robot shell,” in CHI’11 Extended Abstracts on Human
Factors in Computing Systems. ACM, 2011, pp. 283–298.
[23] C. F. DiSalvo, F. Gemperle, J. Forlizzi, and S. Kiesler, “All robots
are not created equal: the design and perception of humanoid robot
heads,” in Proc of the 4th conference on Designing Interactive Systems.
ACM, 2002, pp. 321–326.
[24] D. S. Syrdal, N. Otero, and K. Dautenhahn, “Video prototyping in
human-robot interaction: Results from a qualitative study,” in Proceedings of the 15th European conference on Cognitive ergonomics:
the ergonomics of cool interaction. ACM, 2008, p. 29.
[25] B. Mutlu and J. Forlizzi, “Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction,” in
Human-Robot Interaction (HRI), 2008 3rd ACM/IEEE International
Conference on. IEEE, 2008, pp. 287–294.
[26] J.-H. Han, M.-H. Jo, V. Jones, and J.-H. Jo, “Comparative study on the
educational use of home robots for children,” Journal of Information
Processing Systems, vol. 4, no. 4, pp. 159–168, 2008.
[27] J. E. Young, J. Sung, A. Voida, E. Sharlin, T. Igarashi, H. I. Christensen, and R. E. Grinter, “Evaluating human-robot interaction,” Int’l
Journal of Social Robotics, vol. 3, no. 1, pp. 53–67, 2011.
[28] M. Luria, S. Park, G. Hoffman, and O. Zuckerman, “What do users
seek in a smart home robot? An interview-based exploration,” IDC
Media Innovation Lab, Tech. Rep. MILAB-TR-2015-001, 2015.
[29] S. Davidoff, M. K. Lee, C. Yiu, J. Zimmerman, and A. K. Dey,
“Principles of smart home control,” in UbiComp 2006: Ubiquitous
Computing. Springer, 2006, pp. 19–34.
[30] M. L. Leuschen, I. D. Walker, and J. R. Cavallaro, “Robot reliability
through fuzzy markov models,” in Reliability and Maintainability
Symposium, 1998. Proceedings., Annual. IEEE, 1998, pp. 209–214.
[31] G. Hoffman, “Dumb robots, smart phones: A case study of music
listening companionship,” in RO-MAN. IEEE, 2012, pp. 358–363.
[32] C. Snyder, Paper prototyping: The fast and easy way to design and
refine user interfaces. Morgan Kaufmann, 2003.
[33] S. Price, Y. Rogers, M. Scaife, D. Stanton, and H. Neale, “Using
‘tangibles’ to promote novel forms of playful learning,” Interacting
with computers, vol. 15, no. 2, pp. 169–185, 2003.
[34] O. Zuckerman and A. Gal-Oz, “To tui or not to tui: Evaluating
performance and preference in tangible vs. graphical user interfaces,”
Int’l Journal of Human-Computer Studies, vol. 71, no. 7, pp. 803–820,
2013.
[35] A. S. Taylor, R. Harper, L. Swan, S. Izadi, A. Sellen, and M. Perry,
“Homes that make us smart,” Personal and Ubiquitous Computing,
vol. 11, no. 5, pp. 383–393, 2007.