Monday, November 29, 2010

State of the Internet interactive and the diminished reality within its artificial space



We now have the convenience of so many screens to keep up with our stance on this world - the mobile phone screen, the TV screen, the PC screen, and yes, even the air conditioner remote screen counts too. With every layer we add onto our daily screens, we augment another alternate world.
Hybrid Reality bloggers Parag and Ayesha Khanna, contributors to Big Think, report that our daily use of the augmented "apps" we wish to reflect back onto our screens speeds up the transparency processing - the glass that mediates between you, the reader of the screen, and what you choose to reflect back toward you. This is the "image recognition software" application you've become acquainted with as a learned mental process every time you search for information through a particular screen: the usual "recognize, search, add" clicks on a virtual button as you choose the desired information from the restaurant listings you browsed on Yelp.
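To make that loop concrete, here is a tiny, self-contained sketch of what "recognize, search, add" might look like in code. The data and the function names are toy stand-ins of my own, not any real app's software:

# A minimal sketch of the "recognize, search, add" loop described above.
# The recognizer and the listing lookup are stubbed with toy data; they
# stand in for real image-recognition and search services.

def recognize(frame):
    """'Recognize': pretend image recognition has labeled what's in the frame."""
    return frame.get("objects", [])

def search(label):
    """'Search': look up information related to a recognized object."""
    toy_listings = {"restaurant": ["Thai place, 4.5 stars", "Burger joint, 3 stars"]}
    return toy_listings.get(label, [])

def add(screen, label, choice):
    """'Add': reflect the chosen information back onto the screen."""
    screen[label] = choice
    return screen

frame = {"objects": ["restaurant"]}   # what the camera "sees"
screen = {}                           # what gets reflected back at the reader
for label in recognize(frame):
    listings = search(label)
    if listings:
        add(screen, label, listings[0])  # the click on the virtual button
print(screen)  # {'restaurant': 'Thai place, 4.5 stars'}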


Through your speedier "software", you can choose to "see" anything you wish. If an architect chooses to, say, "see" no homeless people on the streets before he builds his site, then through that line of vision he instantaneously removes all content related to homelessness - including both innocent and hazardous people, places, and things that are starving for human charity as they beg for fortune.
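If I were to sketch that diminished line of vision in code, it could be as blunt as the few lines below. The scene and the labels are made up for illustration, and no real vision software is involved - but the logic is the unsettling part: whatever category the viewer chooses not to "see" simply disappears from the render.

# A toy sketch of a "diminished reality" filter, in the spirit of the
# architect example above. The detections are invented for illustration.

UNWANTED = {"homeless person"}  # the category the viewer has chosen not to "see"

def diminish(scene, unwanted=UNWANTED):
    """Return the scene with every detection in an unwanted category removed."""
    return [item for item in scene if item["label"] not in unwanted]

street = [
    {"label": "homeless person", "position": (12, 40)},
    {"label": "bench", "position": (15, 42)},
    {"label": "storefront", "position": (30, 10)},
]

print(diminish(street))
# [{'label': 'bench', 'position': (15, 42)}, {'label': 'storefront', 'position': (30, 10)}]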

This sharp example, given in the article The Pleasure and Danger of Augmented Reality, led me to think back to my editorial, Language: The edge of cognitive power, where I found that the Pew Internet & American Life Project predicts augmented and virtual realities will be barely distinguishable from each other by around the year 2020, scarily only 10 years from now. Scientists working on contact lenses based on this "image recognition software" are helping visionaries put on these transparencies to select which realities they do and do not wish to see - which only locks the user into his or her own perception, with no line of sight to the unseen, much like the popular case of un-friending on Facebook in the context of virtual realities. (See: Facebook Friendonomics)

The study of Human-Computer Interaction (HCI) dates back less than a century, to the mid-20th century, when engineers gave birth to the first computers. HCI research spans the working industry's public and private sectors, often working together with funding from the government, for the purpose of improving the direct manipulation of virtual artifacts; it is pursued in university settings and corporate institutions for product-design enhancements, and also used by marketers to drive the commercial products we all reckon as the screens we stare at every day.

Approximate timelines showing where work was performed on some major technologies, as listed in the paper "A Brief History of Human-Computer Interaction Technology" by Brad A. Myers, Carnegie Mellon University, December 1996.

That study reminded me of an iTunes U segment I downloaded quite a while ago: a Stanford University HCI Seminar featuring guest speaker John Zimmerman, an assistant professor at the Human-Computer Interaction Institute and School of Design at Carnegie Mellon University. The seminar took place on the Stanford campus on November 3, 2006.

His slideshow, "Designing for the Self", explains how his role as a designer led him to spot a disconnect between consumer-behavior researchers and the product feasibility that designers learn from their engagement in HCI studies. His first example was the very normal setting of scanning barcodes at the check-out counter. His awareness of this mundane setting stresses the point where the job shifts from checker to "barcode orientor", limiting the customer's human experience at check-out: the scanner is no longer used as the tool it was intended to be, and the moment instead becomes an experience where the buyer stares at the checker working for the machine.




Earlier this summer, I took a class with Paul Colardo, an award-winning executive producer who directed our short-form production class and said very clearly, "You have to be smarter than the camera." Duh, you may think. As users of these increasingly cheap external devices, some of us have taken it to the next level by extending ourselves onto the medium as manipulators of our content, but others shy away from the augmented transparencies reflected on the screen, choosing to diminish a layer or two rather than accepting the live-streamed film of natural experience, preferring to stay within the heavily controlled alternative reality of the artificial space.


On the side of human experience, i.e. using technology as a tool and not only for its function, Zimmerman went on to explore the designer's perspective on their loss of control over desired consumer behavior, despite extensive HCI studies that aim to predict ideal product designs beyond, obviously, usability and practicality. This extends to the psychological school of thought on the pleasure principle, the well-known concept Sigmund Freud coined within psychoanalysis - a human element largely missing from HCI studies. For computer-mediated human behaviors, Zimmerman introduced a number of Maslow-like models such as Norman's model of product emotions, Jordan's four pleasures, and his favorite, Forlizzi and Ford's model of user-product experience. That last model concludes that human-computer interaction is a principal mediator between people's extended selves - embodied in the products they own, possessions they've established as "props in their lives" - and the outer world, whose social, cultural, and economic contexts act as feedback on the user experience, so that the entire picture of alternate realities looks like a virtual body of co-experience sharing information in real time, going "live".

The idea of that extended self in each of these products - manipulated, personalized, and customized by the user - comes from Zimmerman's reference to Jean-Paul Sartre's states of existence: having, doing, and being.

If we "see" through the layers of transparency from a standpoint that witnesses these co-experiences, what that turns into is a thriving setting, rather like the speech Joey planned to give at Monica and Chandler's wedding ceremony:

Joey: Hey, I started working on what I'm going to say at the ceremony. Wanna hear it?
Monica, Chandler: Yeah!
Joey: We are gathered here today on this joyous occasion to celebrate the special love that Monica and Chandler share. 

        [Monica and Chandler look impressed]
 
Joey: It is a love based on giving and receiving as well as having and sharing. And the love that they give and have is shared and received. And through this having and giving and sharing and receiving, we too can share and love and have... and receive

        [later]
 
Chandler: Yeah, yeah. Okay.
Joey: When I think of the love that these two givers and receivers share, I cannot help but envy the lifetime ahead of having and loving and giving... and then I can't think of a good word for right here.
Monica: How about receiving?
Joey: Yes!



"Why continue to “live” this way, a mere shadow of one’s former self?" a dubious question Scott Brown asked at the end of his Wired column Managing Your Digital Remains, a possibly recurring awareness by the year 2020 for those victims of diminished reality.

In social media, a popularized interactive virtual space, the presence of pleasure is therefore a strong determinant for HCI studies, as it gives the product user - or in this case the account user - opportunities to express the self ("I am..."), to switch roles (Who am I: Now? Here? To you?), to enhance roles (so that product designers can make consumers feel better), and to be in transition - a favorite subject of study for Zimmerman himself, and a practical one for me in the context of my role and Saluna's role.

It's a no-brainer to mark the most important life events - such as coming of age, or in Saluna's case entering Assaliuna - in the linear narrative, a real-time timeline of our lives, for the purpose of "creating the digital self to discover who it is they want to be," as Zimmerman explains.

His experiment with Carnegie Mellon freshman students had surprising results. I'll just point out a couple of things that are relevant here: there is a faster movement toward the ideal self when the digital image is used as an insight, while desire moves "from sports car to umbrella" - as with the entry-level college students who submitted pictures of desirable things on a weekly basis, which were then put together at the end of the study for HCI researchers to confirm that practicality is the acceptance of the new, natural space of college reality.

I am just curious: can the controlled environment of virtual reality foster a deeper connection of an ideal image between the real and the digital selves, by using the "image recognition software" as an evangelistic tool to discover an insight for the user, rather than a hindsight for the image reflected back on our screens? Can it turn the activity of imagining into imaging?

Only time will tell how Saluna can enter Assaliuna. What matters is, we are both positively very, very curious.
