Organization / Research

Ito Laboratory


Akinori ITO, Professor
Takashi NOSE, Associate Professor
Yuya CHIBA, Assistant Professor

Today, machines such as robots, mobile phones, PDAs, and computers play very important roles in human society, and humans live and collaborate with them.

This laboratory aims to construct systems that communicate with humans using spoken language, emotional expression in speech, gesture, and facial expression. Such systems are friendly to humans and can share knowledge with them.

To develop such systems, we investigate component technologies such as speech recognition and understanding, image recognition and understanding, spatial understanding, natural language processing, artificial intelligence, digital signal processing, emotional information processing, human engineering, and so on. The following research topics are carried out in the laboratory.

1. Dialog with a machine

Humans communicate with and understand each other using speech, gestures, facial expressions of emotion, and so on. It is desirable that a machine can communicate with and understand a human through these same media. By utilizing information about user identity and user location, a machine can interact with a user in a friendly way, because the system can draw on user-specific knowledge. Furthermore, by exploiting the large amount of knowledge accumulated on the Internet, it can give more valuable and intelligent answers to the user. We have been investigating highly accurate speech and image recognition methods, as well as methods for unifying these various sources of knowledge.

2. Autonomous mobile robots

An autonomous robot has to recognize the spatial information around it in order to determine its next action. In particular, it needs to know its own location and to detect obstacles in its path in order to stay safe. We have been developing spatial recognition methods using image and acoustic sensors.
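As a toy illustration of the obstacle-detection step (a sketch, not the laboratory's actual method), the function below picks the heading with the largest clearance from a set of range readings, such as those a sonar or laser sensor might produce. The reading format and the safety margin are assumptions for this example.

```python
SAFE_DISTANCE = 0.5  # assumed safety margin in meters

def choose_heading(readings):
    """readings: list of (angle_deg, distance_m) range measurements.
    Return the angle with the largest measured clearance beyond the
    safety margin, or None if every direction is blocked."""
    clear = [(dist, ang) for ang, dist in readings if dist > SAFE_DISTANCE]
    if not clear:
        return None  # no safe direction: the robot should stop
    return max(clear)[1]  # angle of the most distant clearance
```

For example, with readings of 0.4 m at -30 degrees, 2.0 m straight ahead, and 1.2 m at +30 degrees, the robot would choose to drive straight ahead; with all readings under the safety margin it would stop.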

3. CALL (Computer Assisted Language Learning) system using speech recognition technology

Learning a second language is difficult because a student does not know the correct pronunciation and cannot judge whether his or her own pronunciation is correct. For example, Japanese has no phonemes corresponding to English /l/ and /r/, so a Japanese student cannot tell whether he or she pronounces English /r/ correctly. Using speech recognition technology, we can automatically detect the positions of pronunciation errors in the input speech. In the laboratory, we have been developing a speech-recognition-based CALL system available through the Internet.
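One common way to localize pronunciation errors (a sketch under assumptions, not necessarily the laboratory's system) is to align the phoneme sequence a recognizer hypothesized against the reference pronunciation using edit distance, and report the positions where they disagree. Here the recognizer output is assumed to be given; the function names and phoneme labels are illustrative.

```python
def align_phonemes(reference, hypothesis):
    """Edit-distance alignment of two phoneme sequences.
    Returns a list of (ref_phone, hyp_phone, op) tuples, where op is
    "ok", "substitution", "deletion", or "insertion"."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]  # DP edit-distance table
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # match/substitution
    # Backtrace the alignment path
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (
                0 if reference[i - 1] == hypothesis[j - 1] else 1):
            op = "ok" if reference[i - 1] == hypothesis[j - 1] else "substitution"
            ops.append((reference[i - 1], hypothesis[j - 1], op))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append((reference[i - 1], None, "deletion"))
            i -= 1
        else:
            ops.append((None, hypothesis[j - 1], "insertion"))
            j -= 1
    return list(reversed(ops))

def find_errors(reference, hypothesis):
    """Positions in the alignment where pronunciation differs."""
    return [(i, ref, hyp)
            for i, (ref, hyp, op) in enumerate(align_phonemes(reference, hypothesis))
            if op != "ok"]
```

For instance, if a student says "right" but the recognizer hears /l ay t/ instead of /r ay t/, the alignment flags position 0 as an /r/-to-/l/ substitution, which is exactly the feedback a CALL system would present.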

4. Human friendly information retrieval

The Internet is a huge knowledge database. If we can use it efficiently as such, we can obtain fruitful results. Most traditional search systems rely on text-based queries. However, if not only text but also sound, speech, and images can be used as search keys, the search system becomes more useful to humans. Another problem is that a search on the Internet returns a large number of results, most of which are useless, so it is necessary to collect only the useful information. We have been developing a method that searches multimedia data by showing a part of the desired data, and a method for detecting and summarizing reliable information.
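A minimal sketch of search-by-example (one common realization, not necessarily the laboratory's implementation): represent each multimedia item as a feature vector, then rank items by cosine similarity to the feature vector extracted from the query fragment. The feature vectors and item names below are placeholders; real systems would extract features from the audio or image itself.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search_by_example(query_vec, database, top_k=2):
    """database: {item_name: feature_vector}. Return the top_k item
    names ranked by similarity to the query's feature vector."""
    ranked = sorted(database,
                    key=lambda name: cosine(query_vec, database[name]),
                    reverse=True)
    return ranked[:top_k]
```

With this scheme, showing a short clip of a song retrieves other items whose feature vectors point in a similar direction, regardless of whether any text matches.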

5. Ultra-low-bit-rate coding for multimedia data

As mobile phones and the Internet spread across the world, low-bit-rate coding technology is coming into the limelight again because of limited bandwidth. In particular, multimedia data such as speech, music, images, and movies require broad bandwidth. In the laboratory, we have been developing ultra-low-bit-rate coding methods for speech and music.
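As one classic building block of low-bit-rate speech coding, shown here only as an illustration and not as the laboratory's coder, μ-law companding (as standardized in ITU-T G.711) compresses samples nonlinearly before quantization so that quiet sounds keep more resolution per bit:

```python
import math

MU = 255  # standard mu-law parameter (G.711)

def mu_law_encode(x):
    """Compand a sample x in [-1, 1] before coarse quantization."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Invert the companding after dequantization."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Companding followed by 8-bit quantization is how telephone speech reaches 64 kbit/s; truly ultra-low-bit-rate coders go far lower by using model-based (parametric) techniques rather than sample-by-sample companding, but the example shows why nonlinear quantization saves bits.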

Group of Electrical Engineering, Communication Engineering,
Electronic Engineering, and Information Engineering, Tohoku University
6-6-05, Aramaki Aza Aoba, Aoba-ku, Sendai, Miyagi 980-8579, Japan
TEL : 022-795-7186 (Japanese Only)
Email :