DOI: 10.1145/1330572
WMISI '07: Proceedings of the 2007 workshop on Multimodal interfaces in semantic interaction
ACM 2007 Proceeding
Conference Chairs: Naoto Iwahashi, Mikio Nakano
Publisher: Association for Computing Machinery, New York, NY, United States
Conference: ICMI 2007: International Conference on Multimodal Interfaces, Nagoya, Japan, 15 November 2007
ISBN: 978-1-59593-869-5
Published: 15 November 2007
Abstract

It is our pleasure to welcome you to Nagoya and to WMISI 2007, the Workshop on Multimodal Interfaces in Semantic Interaction.

With advances in ubiquitous networks, data mining, communication robots, and sensing technologies, a wide variety of real-world information has become available in real time. Presented to the user, this information not only supports his or her intellectual activities but may also be utilized as context, opening great possibilities for achieving situated intelligent functions. Information systems and robots that support human activities in everyday life should ideally be able to interact with humans adaptively according to context, such as the situation in the real world and each human's individual characteristics. For instance, such functions might include the ability to understand the user's intention through his or her utterances, as well as the ability to provide suitable information with appropriate timing.

To realize such interaction, which we call semantic interaction, it is necessary to extract from the obtained real-world information the context information needed for understanding the interaction, and to put it to use. This context information is multimodal information at several levels: 1) raw information obtained from sensors, 2) information obtained through categorization, and 3) the relationships between categories. In semantic interaction, it is important for the user and the machine to share knowledge and an understanding of a given situation. It is therefore necessary to infer the user's intention and to represent the machine's inner state naturally through speech, images, graphics, manipulators, and so on, based on the multimodal context information. Accordingly, the development of multimodal interfaces is a very important research theme.
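The three levels of context information above can be made concrete with a small data-model sketch. The following Python fragment is purely illustrative; the class and field names are our own and do not come from any of the workshop papers:

    from dataclasses import dataclass, field

    @dataclass
    class SensorReading:
        """Level 1: raw information obtained from sensors."""
        sensor_id: str
        timestamp: float
        values: list[float]

    @dataclass
    class Observation:
        """Level 2: information obtained through categorization."""
        category: str            # e.g. "cup" or "pointing-gesture"
        confidence: float
        source: SensorReading

    @dataclass
    class CategoryRelation:
        """Level 3: a relationship between two categories."""
        subject: str
        relation: str            # e.g. "is-a" or "located-on"
        target: str

    @dataclass
    class ContextModel:
        """Multimodal context shared between the user and the machine."""
        readings: list[SensorReading] = field(default_factory=list)
        observations: list[Observation] = field(default_factory=list)
        relations: list[CategoryRelation] = field(default_factory=list)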

After the review of the submitted papers, these proceedings contain 11 high-quality papers: five long papers and six short papers.

Table of Contents
research-article
Bringing context into play: supporting game interaction through real-time context acquisition

We present a new interaction technique that we call Context Interaction, and we discuss it in relation to computer games due to their popularity. Although HCI in gaming benefits from many devices and controllers, as well as from many interaction ...

research-article
Learning the meaning of action commands based on "no news is good news" criterion

In the future, robots will become common in our daily lives. To use robots more efficiently, it is desirable that they be able to learn. However, the human teaching process for robot learning in a real environment usually takes a ...

research-article
Introducing semantic information during conceptual modelling of interaction for virtual environments

The integration of semantic information into virtual environment interaction is still mostly ad hoc. The system is usually designed so that the framework incorporates the semantic information, which can then be used to utilise ...

research-article
Computational model of role reversal imitation through continuous human-robot interaction

This paper presents a novel computational model of role reversal imitation in continuous human-robot interaction. In role reversal imitation, a learner not only imitates what a tutor does, but also takes the tutor's role and performs the tutor's ...

research-article
Learning object-manipulation verbs for human-robot communication

This paper proposes a machine learning method for mapping object-manipulation verbs to sensory inputs and motor outputs that are grounded in the real world. The method learns motion concepts demonstrated by a user and generates a sequence of motions, ...

short-paper
A model for multimodal representation and processing for reference resolution

We present a model for dealing with the designation activities of a user in multimodal systems. This model associates a well-defined language with each modality (natural language, gesture, visual) as well as a mediator language. It takes into account several semantic features of ...

short-paper
Towards adaptive object recognition for situated human-computer interaction

Object recognition is an important part of human-computer interaction in situated environments, such as a home or an office. Especially useful is category-level recognition (e.g., recognizing the class of chairs, as opposed to a particular chair). While ...

short-paper
Hand posture recognition for human-robot interaction

In this paper, we describe a fast and accurate method for hand posture recognition in video sequences using multiple video cameras. The technique we propose is based on head detection, skin detection, and human body proportions in order to recognize ...
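As a rough illustration of the kind of skin-detection step such a pipeline might rely on, here is a minimal OpenCV (4.x) sketch; it is our own simplification, not the authors' method. It thresholds a frame in HSV space and keeps the largest skin-colored region as a hand candidate:

    import cv2
    import numpy as np

    def largest_skin_region(frame_bgr):
        """Return a filled binary mask of the largest skin-colored region,
        or None if no skin-colored pixels are found."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Rough skin range in HSV; real systems calibrate per user and lighting.
        mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
        # Remove small speckles before picking the largest contour.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        out = np.zeros_like(mask)
        cv2.drawContours(out, [hand], -1, 255, thickness=cv2.FILLED)
        return out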

short-paper
On-chip network of intelligent sensors for controlling a mobile robot in an environment with obstacles

The aim of this paper is to present the implementation of a system-on-chip (SoC) built around a network-on-chip (NoC) on an FPGA. The hardware system is developed in order to control an autonomous mobile device that is able to ...

short-paper
Indoor-outdoor positioning and lifelog experiment with mobile phones

We developed a lifelog system that utilizes users' mobile phones. The user's position, which is closely related to the user's activity and experience, is tracked by GPS outdoors and by a newly developed positioning system utilizing the signal strength of ...
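As an illustration of the general idea behind positioning from signal strength (not the paper's actual system), the following sketch estimates a position as the signal-strength-weighted centroid of access points at known locations; the access-point data are hypothetical:

    def weighted_centroid(ap_positions, rssi_dbm):
        """Estimate (x, y) as a centroid of access points at known positions,
        weighted by received signal strength (stronger signal -> larger weight)."""
        # Convert dBm to a linear scale so that nearby (stronger) APs dominate.
        weights = {ap: 10 ** (dbm / 10.0) for ap, dbm in rssi_dbm.items()}
        total = sum(weights.values())
        x = sum(ap_positions[ap][0] * w for ap, w in weights.items()) / total
        y = sum(ap_positions[ap][1] * w for ap, w in weights.items()) / total
        return x, y

    # Hypothetical example: three access points at known positions (in meters).
    aps = {"ap1": (0.0, 0.0), "ap2": (10.0, 0.0), "ap3": (5.0, 8.0)}
    print(weighted_centroid(aps, {"ap1": -40, "ap2": -70, "ap3": -60}))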

short-paper
Proposal of a markup language for multimodal semantic interaction

In the present paper, the multimodal interaction markup language (MIML) is proposed. MIML adopts a three-layered model of interaction description that enables semantic interaction for various platforms. In addition, MIML enables rapid prototyping by ...

Contributors
  • Okayama Prefectural University
  • Honda Research Institute Japan Co., Ltd.