here are some quick thoughts on the sessions i attended at the Human-Computer Interaction conference today :-)
Michitaka Hirose, of the University of Tokyo, is doing some really exciting work with his students. He kicked off the morning with his presentation 'real world video avatar'. if anyone remembers the two-dimensional life-sized interactive avatar in the 2002 interpretation of H G Wells's 'The Time Machine', well, this goes one better: what Michitaka-san has achieved is three-dimensional. he's done this by putting two flat-panel displays back-to-back and rotating the whole assembly at high speed. in fact, in his presentation, he had a picture of the holo-transmitters featured in the 'Star Wars' series, typified by Princess Leia making her appeal for help as the Imperial Stormtroopers advanced.
Jochen Ehnes, also of Tokyo, talked about ubiquitous computing and augmented realities, specifically the superimposition of images onto planar surfaces (fixed or moving), using recognition of markers. he described three uses: indicating where to drill in a wall so as not to hit existing cables; transposing user interfaces onto objects which have none (such as taking measurements with a simple piece of cardboard); and three-dimensional volume projection using two perpendicular planar surfaces (something like a CAT scan).
Hideaki Kuzuoka, of the University of Tsukuba, talked about 'the effect of tangible avatar interface on virtual space navigation'. building on his earlier work in the application of technology to history education (specifically the study of Mayan civilisation), he showed how prototype tangible interfaces are superior in educational contexts to either life-sized immersive environments or game controllers. he's now extended his work to helping children understand the relationship of solar angle of incidence to latitude, using tangible avatars moved across the surface of a globe. his work is related to my most recent podcast entry: he shared the observation that tangible avatars are accurate navigational interfaces because they encourage users to make more frequent references to plan views of the given area. in fact, the tests he devised are quite similar to the pre- and post-tests used in my own study on cognitive maps. his first test involved finding and placing given landmarks on a blank map, using either game controllers or the tangible avatar. his second test involved path reproduction after an initially visible landmark in the virtual space was rendered invisible; he measured distance deviation and angle deviation.
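as i understood the second test, the two measures could be computed roughly like this (my own sketch with invented names, not Kuzuoka-san's actual code; i'm assuming the path starts at the origin and positions are simple 2D coordinates):

```python
import math

def path_deviation(actual, target):
    """Compare where a participant ended up against the hidden landmark.

    actual, target: (x, y) positions in the virtual space, with the
    path assumed to start at the origin.
    Returns (distance deviation, angle deviation in degrees).
    """
    dx, dy = target[0] - actual[0], target[1] - actual[1]
    distance_dev = math.hypot(dx, dy)
    # angle deviation: difference in bearing from the start point
    angle_actual = math.degrees(math.atan2(actual[1], actual[0]))
    angle_target = math.degrees(math.atan2(target[1], target[0]))
    angle_dev = abs((angle_actual - angle_target + 180) % 360 - 180)
    return distance_dev, angle_dev
```

for example, ending one unit east of the origin when the landmark was one unit north gives a distance deviation of about 1.41 units and an angle deviation of 90 degrees.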
Koichi Hirota, of the University of Tokyo, talked about 'perceptual user interface for wearables', specifically auditory, haptic and olfactory interfaces. with respect to the latter, he shared that when participants were tasked with finding a 'target' in a real-world space (delineated by a grid of radio frequency identification (RFID) tag detectors) using only olfactory cues, they adopted one of two strategies: a 'hill climbing search', in which they would head in one direction until they realised it was wrong, or a 'scanning search', a systematic scan of the entire area. sounds a lot like the olfactory version of Tversky's gaze tour and route.
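to make the two strategies concrete, here's a toy sketch (entirely my own illustration on an invented grid with an invented scent function, not Hirota-san's set-up):

```python
def scanning_search(grid_size, scent_at):
    """Systematic scan: visit every cell in order, return the strongest."""
    best, best_cell = float('-inf'), None
    for x in range(grid_size):
        for y in range(grid_size):
            if scent_at(x, y) > best:
                best, best_cell = scent_at(x, y), (x, y)
    return best_cell

def hill_climbing_search(grid_size, scent_at, start=(0, 0)):
    """Greedy: keep stepping to the strongest neighbouring cell."""
    x, y = start
    while True:
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size]
        nx, ny = max(neighbours, key=lambda c: scent_at(*c))
        if scent_at(nx, ny) <= scent_at(x, y):
            return (x, y)  # no neighbour smells stronger, so stop here
        x, y = nx, ny
```

the scan always finds the target but visits every cell; the hill climber is faster when the scent gradient is smooth, which matches the trade-off the participants seemed to be making.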
Daniel Berger, of the Max Planck Institute for Biological Cybernetics, in his talk 'cognitive influences on self-rotation perception', shared results from his doctoral research. he measured the weight participants afforded to visual cues relative to other sensory cues, such as vestibular and tactile, when exposed to rotational forces in a structure similar to a commercial motion simulator. his finding: higher visual weights are accorded when the degree of rotation is small.
Asim Ant Ozok, of the University of Maryland Baltimore County, shared his interest in 'the impact of SMS on building online communities', though he hasn't started his research yet. Anxo Cereijo Roibas, of the University of Brighton, argued that broadcast content over 3G phone networks is not as compelling to users as what he termed 'DIY content'.
Kris Luyten, of Hasselt University, talked about 'blended maps and layered semantics for a tourist mobile guide'. his contribution was an extensible GIS architecture, as opposed to the proprietary closed guides he says are the norm. my primary takeaway from his presentation was that maps which are accurate in scale are not necessarily the most useful to the tourist (i.e., the novice navigator).
Michael Robbs, of the Hellenic American Union, introduced how he has been using WebCT since 1999 in 'effective teacher education through distance learning' in Greece. his English as a Foreign Language courses for teachers are sixteen weeks in duration. he was full of praise for the system, which he said helped the teachers become more reflective and collaborative, and less dependent on traditional didactic teaching styles.
the most exciting presentation of the day came from Ryoko Ueoka, of the University of Tokyo. her talk was entitled 'virtual time machine'. she described it as experiencing past and future events as we do the present. this is possible because of two affordances of modern computing: the long-term storage of large amounts of information, and the ability to run simulations. her work has tremendous implications for geography, history and social studies, though there are ethical issues to be mindful of. it draws on DARPA's LifeLog and Microsoft's MyLifeBits (and, i would add, in a much simpler way, Apple's iLife and Nokia's Lifeblog). she described two types of study so far: the ordinary situation, which spans a year using a single wearable sensor, and the particular situation, which uses multiple sensors (such as heart-rate, voice-level, auditory, visual and GPS) over a short span of time - she described how it was used during a recent earthquake in Japan.
Ryoko-san's work shows how life has now imitated art - 'The Truman Show' and 'The Final Cut'.
truly ground-breaking, thought-provoking stuff.