Second Life, the inSL logo and Linden Lab are trademarks of Linden Research, Inc.
the blog voyeurism, the podcast ventriloquy, and Raymaker Land Management are not affiliated with or sponsored by Linden Research.
the one-hundred-and-ninety-first episode of ventriloquy is a recording of my talk yesterday, entitled Developing adolescents' map literacy with Second Life, at the annual Second Life Community Convention (SLCC 2011).
please do join me as i share my ideas in this 24.4 MB download.
(the Ustream video is here; sincere apologies in advance that my brain wasn't firing on all cylinders)
the presentation is entitled 'Developing adolescents' map literacy with Second Life', and it's part of the convention's Public Sector and Education track (ably chaired by Shirley Marquez-Dulcey).
here's the description of the presentation:
This session describes a project carried out in some state-funded schools in Singapore earlier this year, in which Grade 7 students learned certain concepts of geography and the earth sciences using Second Life. The students were able to apply their newly-acquired map literacy to reading and analysing traditional paper-based topographical maps.
if you possibly can, please do join me in Oakland this August :-)
i'm excited to share that i'll be making my first visit to Israel next month :-)
the MOFET Institute is organising a Study Tour on Teacher Education and ICT, from 27 March till 3 April.
given my present work-scope of helping to seed ground-up innovations in schools in Singapore, i'm looking forward to the tour, which will include visits to schools in Israel and their ICT-partners from industry, as well as representatives from the Ministry of Education.
of course, it's a personal pilgrimage for me, and the itinerary includes visits to the Sea of Galilee, Jerusalem and the ancient fortifications of Masada.
all thanks be to God alone, for this opportunity to visit the Holy Land.
Creating animations has never been this easy on the iPhone and iPod touch. Unlike other animation apps, which require you to draw every frame of your animation, iNim8 demands almost "no drawing" skill from you at all! All you have to do is create your character once, make it strike cool poses and hit the "play" button. You will instantly see your characters walking/running/dancing/fighting/doing all kinds of crazy stuff.
iNim8 is simple and fun. It is designed for both animation hobbyists and beginners, and is also suitable for teaching or learning animation. If you have always loved animation and have the desire or curiosity to animate something yourself, then wait no more and try iNim8! And don't forget to impress your friends with that little short film you have created!
- rotate and translate shapes without needing to draw them on every frame
- key-frame-based animation which doesn't require drawing every frame; instead, users "define" the key frames and the app creates all the in-between frames
- a scrollable timeline for users to browse through key frames quickly
- allows users to adjust timing by tuning the frame index
- instantly preview your animation before converting it to a video file
- set the start/end key to preview a segment of the animation
- animate characters on different layers
- an animatable camera with pan and zoom capability
- allows both straight-ahead and pose-to-pose animation
- copy/paste key frames on individual layers
- onion skins to show the action sequence
- a template editor for users to create or customize their own characters
- video tutorials and a quick reference to get beginners started quickly
- email your animation clip or upload it to YouTube to share with your friends
- with pre-installed stick figures and clips, users can start animating straight away
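as an aside for the technically curious: the "in-betweening" that the feature list describes can be sketched as simple linear interpolation between key-frame poses. this is a minimal, hypothetical illustration of the idea only - the pose fields and function names below are my own inventions, not iNim8's actual internals:

```python
# A minimal sketch of key-frame "in-betweening": each key frame stores a pose
# (here just x, y and a rotation angle), and the frames between two key frames
# are produced by linear interpolation.

def lerp(a, b, t):
    """Linearly interpolate between a and b, with t in [0, 1]."""
    return a + (b - a) * t

def inbetween(key_a, key_b, num_frames):
    """Generate num_frames poses from key_a towards key_b (exclusive of key_b)."""
    frames = []
    for i in range(num_frames):
        t = i / num_frames
        frames.append({field: lerp(key_a[field], key_b[field], t)
                       for field in key_a})
    return frames

# Two key-frame poses; the app fills in everything between them.
pose_start = {"x": 0.0, "y": 0.0, "angle": 0.0}
pose_end = {"x": 10.0, "y": 5.0, "angle": 90.0}

for frame in inbetween(pose_start, pose_end, 5):
    print(frame)
```

real tools typically use easing curves rather than straight linear interpolation, but the principle - the user defines the poses, the software computes the frames - is the same.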
revisiting dimensionality and co-presence - this blog post needs to be read in conjunction with this one, and with this one too.
for the first time that i can remember, i participated in a video-Skype-conference-call yesterday.
i have participated on many occasions in voice-enabled conference-calls in Second Life (SL), and i had thought the experience would be similar.
it was not.
i have been wondering why.
one obvious difference would be that the experience lacked dimensionality.
but how did the experience fare in terms of co-presence? surely a video-conference would afford a sense of co-presence?
we need to think carefully about what we mean by co-presence.
for someone who's never before participated in an activity in SL (not necessarily a voice-chat, or even a text-chat, but any kind of collaborative activity with someone else), it's easy to think that video-conferencing affords co-presence.
yet to me, the experience of the two was vastly different. why was this so?
first, synchronous-activity is not the same as co-presence - at least, not co-presence as it truly can be with publicly-accessible technologies such as virtual worlds and fictive environments.
second, i need to introduce the concept of (for-want-of-a-better-word-i-shall-call-it) the Interpretive Camera.
the Interpretive Camera, as i conceptualise it, is a primary lens through which one seeks to understand oneself through what-one-understands-as the eyes of one's audience (of one, or of many). note, it is not the lens which the audience uses in interacting with oneself, but the lens which oneself uses in an effort to understand how oneself is being understood by one's audience (hence, 'interpretive').
the greater one's intersubjectivity (Husserl, et al) with one's communicative other, the sharper the focus of the Interpretive Camera.
(as an aside, this conceptualisation of the Interpretive Camera is akin to, but not identical with, the use of the term in film-making, such as by van Dyke (1946) in Hollywood Quarterly, and by Proferes (2008))
as i conceptualise it, for any given interaction, the Interpretive Camera can take several different positions vis-a-vis one's Real-World Identities (Gee, 2007). for example, in Gee's work on videogames and (by extension, at least for this particular discussion) virtual environments and fictive worlds, one could say that Gee's notion of Projective Identity aligns well with the Interpretive Camera being 'behind and slightly above one's physical / biological head'. in this framing, what the Self sees through the Interpretive Camera is an amalgam of the biological entity with the avataric identity as the Projective Identity.
to take another example, during metacognition (not the same as reflecting upon an event), the Interpretive Camera is within the Self - it can only be so, if metacognition is truly 'meta'.
now let's start tying together these various strands.
i would argue that an environment only affords a high degree of co-presence when each interactant's Interpretive Camera is positioned such that they have a common actionspace (here, i make the distinction between actionspace and field-of-view). this is what environments / worlds such as SL and WoW do very well - and what films such as Inception depict - because not only do they nurture individual Projective Identities, but they also allow synchronous interactants to understand and interpret their respective Projective Identities vis-a-vis their impacts on, and the actions of, others.
i would say that the preceding two sentences are the most critical in this entire post.
so far, we've talked about two possible positions for the interpretive-camera, but there is a third.
i was in my bedroom when i took the Skype video-conference call yesterday. when the party on the other side asked to make a video-connection, without a second thought i clicked the 'video' button on the Skype interface. since i had no prior embodied experience of a video-conference call, the action was reflexive inasmuch as i took it to be analogous to simply engaging 'voice' in a conversation in SL.
but to my surprise it wasn't the same, because i suddenly saw myself in the small window, and one of the first questions i was asked by a member of the other party was "is that your bedroom? there are so many books!". i saw myself (rather than my well-dressed avatar) on my own screen, and realised that this same image would be projected on a larger screen to my audience - in my (thankfully decent) pyjamas.
further, as the conversation progressed, i found myself wondering about (and therefore distracted by) where i should be looking. anyone who's met me 'in Real Life' will know that when i'm listening intently i tend to look down; this is because i am trying to cant my head such that my ear, rather than my eyes, is nearest to you. i found myself doing the same thing yesterday (even though, wearing a headset, i had no need to). i caught myself from time to time, and when i did i made a conscious effort to give 'eye-contact', but then ran into the metacognitive problem of where exactly the interactants' 'eyes' were - do i look at my Mac's camera? or do i look at the Skype window on my screen (which was, of course, the more 'natural' choice, rather than staring at the cold green light of the iSight camera)?
now, consider where the Interpretive Camera is, during such video-conference calls. i would posit that it is in a third position - not behind-and-slightly-above one's head, neither within oneself, but in front of one's face, turned upon oneself.
this latter three-word phrase 'turned upon oneself' is a short one, but don't overlook it. for in this phrase lies the key reason why (as i put it fourteen paragraphs earlier) "synchronous-activity is not the same as co-presence".
to understand why not, one needs to return to what i termed the most critical sentence in this post, namely that "an environment only affords a high-degree of co-presence when each interactant's Interpretive Camera is positioned such that they have a common actionspace. this is what environments / worlds such as SL and WoW do very well... because they allow synchronous interactants to understand and interpret their respective Projective Identities vis-a-vis their impacts on, and the actions of, others."
to get to the point, video-conferencing does not afford co-presence (and therefore is a weak tool if co-presence, shared meaning and identity-formation are a priority for any given meeting). in fact, i would go almost as far as to say it actively works against co-presence (while ironically looking as if it serves co-presence very well, through the guise of synchronous-activity) because of the diametrically-opposed focal points of each interactant's respective Interpretive Cameras.
that is, instead of each interactant's respective Interpretive Cameras focusing on a common - dimensional - actionspace, each Camera is instead reversed (both in position as well as in focal point) towards the physical / biological interactant. thus, there is no opportunity for a Projective Identity to form, because the position and focal point of the Interpretive Cameras have been reversed (that is, they 'look out of the circle' rather than 'look in to the roasting marshmallows on the campfire').
i'd like also to take the opportunity to invite you to consider voting for Kwan Tuck Soon's nomination as 'Best Individual Tweeter' - i've had opportunities to engage him in conversation and i've always come away impressed with how he 'groks' many of the key issues.
with God's grace, i will continue to do what i can to work with the wider community to advance the discourse in this exciting field :-)