Response to Virilio, Open Sky pt. 3

Reality - worst game ever

My final reading response of the semester! This time, we’re finishing off Virilio’s Open Sky with a summary of Part 3! The reason for the silly picture above is Virilio’s more serious prediction that “we are about to lose our status as eyewitnesses of tangible reality once and for all, to the benefit of technical substitutes… which will make us the ‘visually challenged.’” The picture is making fun of the fact that a growing number of people spend much of their time playing online games, interacting with other people in virtual worlds rather than in the real world. The idea that all of our interactions could someday take place through a screen is not too far off. Virilio already talks about how what we see is obscured by the TV screen and by the windows of the vehicles that get us from place to place, and warns of the dangers of indirect light as opposed to the direct optics discussed in the earlier parts. It seems to me as though we’re trying so hard to recreate reality that we’re disregarding the reality we already have. Instead of experiencing the world ourselves, we’re sitting inside darkened rooms and staring at screens, living in virtual worlds and neglecting everything outside. Google Street View is an example of the real world becoming a virtual world: you can go almost anywhere in the United States, on almost any major public road, and see pictures of what’s outside without actually going there.

Virilio also talks about how some of these images are so lifelike that we get confused and have trouble distinguishing CG images from real life. Think of Apple’s Retina Display technology and the 326 PPI (pixels per inch) it’s capable of displaying. Virilio talks about lasers beaming images directly into your optic nerves, but massive-PPI displays are in commercial use today, and they’re so detailed that you can’t even tell there are pixels. With 4K resolution and the increasing power of GPUs to generate images, as well as tools for manipulating real images such as Photoshop and After Effects, we’re getting to the point where things can look absolutely lifelike, but unless you see them for yourself, you’ll never really know whether they’re real or not.

Response to Virilio, Open Sky pt. 2

Paging Dr. Nanobot! In the future, tiny robots small enough to fit into our bloodstream could help to remove blood clots, treat diseased tissue, and maintain our bodily functions. Part man, part machine! Part 2 of Paul Virilio’s Open Sky starts out with a discussion of the transplantation revolution, the idea that machines will be assisting with functions inside our body as well as outside it. It’s already happening: scientists grew a kidney in the lab just this week. Soon, we may be able to replace all sorts of body parts just as we upgrade our computers. Virilio predicts that in the future we may be able to embed machines into our bodies that allow us to act at a distance, like the DataGlove or the DataSuit. Google Glass is a close contender, letting us interact with the environment in a whole new way, but it isn’t directly wired into the optic nerve. Virilio says that there are three intervals used to define acting at a distance:

  1. Space: geometric development and control of the physical environment. Innovations such as the car, the train, and mounted animals such as the horse and the mule are examples of this.
  2. Time: control of the physical environment and the invention of communication tools. Letters, telephones, TV, radio.
  3. Light: instantaneous control of the microprocessor environment. Today’s computers, which rely on the speed of light to send their signals.

As a human society, we’ve got a pretty firm grasp on all three of these intervals, and as technology advances we’re not only obliterating the concept of distance, the interval of space, but also miniaturizing our technology. Virilio says that less is more in today’s society, and that few human interactions are required for tasks that once had to be done manually. With the push of a button, we can lower our blinds, turn on a light, lock our doors, change the channel on the TV, and automate our entire house without getting up from our chairs. We’re now well on our way to automating the last remaining tasks we still do ourselves, injecting robots into our bloodstream to maintain our bodies without us needing to do anything.

 

Response to Virilio, Open Sky pt. 1

This week, we read part one of Paul Virilio’s Open Sky. As pro-technology as all the other readings have been, this one is certainly different. Virilio doesn’t care much for the latest technology. Telepresence, the ability to appear in two separate places at once through teleconferencing technologies such as webcams and the Internet, is shortening the distance required to communicate. You can communicate with someone anywhere in the world without leaving your house, and because of this we don’t get out as much. Just as cars and planes have shortened distances around town and around the world as far as physical transportation goes, the very need to go anywhere has also disappeared. Someday, people may all just lie in bed and virtually project themselves into robots that deliver impulses to their nerves at a distance, without ever needing to move. The example Virilio mentions is the DataSuit, developed by NASA scientists, which lets you feel, see, and hear just as if you were somewhere else at the same time. According to Virilio, this telepresence is going to destroy us by obliterating distance. Without the need to wait, everything needs to happen immediately, in real time. The flow of information is overwhelming. Virilio talks at length about physics and how it relates to time, and about different types of optics, large-scale and small-scale, and the difference between them. In the future, he says, we’ll have a tele-existence and become terminal men and women. It reminds me a bit of Jonathan Mostow’s Surrogates (2009), which warns of the dangers of this type of technology, in which a giant company controls all sorts of robots that interface with humans.

Code Snippet: Extracting a 24-hour time from a 12-hour time string in MySQL

The snippet:

HOUR(STR_TO_DATE("2:30pm", "%h:%i%p")) = 14

Why I needed it:

I’m currently working on a time and date filter for ClassGet. If you’ve ever wanted to find a morning class that meets only on Tuesdays, this tool is for you! The main problem, however, is that Furman’s course listings file has the start and end times for classes in this format:

Start Time: "12:30pm", End Time: "1:20pm"

and my import script doesn’t convert these into a DATETIME format in SQL. If I want to find a class starting between 12:00 and 14:00, I’ll need to do some conversion. On the front-end, I’m using a jQuery UI slider for my hour control that goes from 5 (5:00 am) to 21 (9:00 pm). Why anyone would have a class starting at 9:00 pm is beyond me, but hey, it could happen. I’m not going to worry about minutes, and I know that no class will ever be offered overnight, so I’ll never need to worry about a start time being later than an end time. My script will search for classes that start between one hour and another, so I’ll need to convert the times to match. You would think I could just do something like this:

SELECT * FROM classes WHERE `Start Time` BETWEEN 12 AND 15

But it doesn’t work like that. The dates are stored as strings, e.g. "2:30pm". We’re lucky, though… MySQL has an HOUR function that will solve that!

SELECT * FROM classes WHERE HOUR(`Start Time`) BETWEEN 12 AND 15

But still, no classes are showing up that start at 2:30. How come? As it turns out, HOUR("2:30pm") returns 2, not 14! How do we fix that? The answer lies in MySQL’s STR_TO_DATE function, which is the reverse of the DATE_FORMAT function. Now, take a look at the final version:

SELECT * FROM classes WHERE HOUR(STR_TO_DATE(`Start Time`, "%h:%i%p")) BETWEEN 12 AND 15
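
A quick sanity check on just the conversion (each statement run on its own should show the difference):

SELECT HOUR("2:30pm");                           -- 2: the bare string is read as 02:30 and the "pm" gets ignored
SELECT HOUR(STR_TO_DATE("2:30pm", "%h:%i%p"));   -- 14: the format string parses the 12-hour time correctly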

There we go! Now without doing any PHP or JavaScript or modifying the database structure, I was able to create a date filter for class data.
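
If I ever want the slider to constrain both ends of a class, the same trick should extend to the other column too. Here’s a rough sketch — the 9 and 12 are just stand-ins for whatever bounds the slider hands over, and `End Time` is stored in the same "1:20pm"-style string as `Start Time`:

SELECT * FROM classes
WHERE HOUR(STR_TO_DATE(`Start Time`, "%h:%i%p")) >= 9
  AND HOUR(STR_TO_DATE(`End Time`, "%h:%i%p")) <= 12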

Response to Garrett, “User Experience and Why it Matters”


Apple’s iOS home screen doesn’t look much different from how it did when the iPhone was first released six years ago. Since 2007, the only real improvements to the home screen itself have been Spotlight search, the ability to create folders, and the Newsstand app’s sliding bookshelf. Other mobile operating systems such as Android, on the other hand, are constantly changing, offering widgets, app drawers, live wallpapers, voice search, and the like on their launcher screens; launch-day Android looked a lot different from Android Jelly Bean. Each Android device also looks and acts a bit different from the others. The HTC Thunderbolt, for example, uses HTC’s Sense interface, while the Samsung Galaxy S II uses Samsung’s TouchWiz interface. My ASUS SL101 tablet has the stock Android Ice Cream Sandwich interface but includes some ASUS-customized settings. The point is, if you have Android, you aren’t necessarily having the same experience as another Android user. If you have an iPhone, iPad, or iPod Touch, you’re going to get essentially the same experience on any of these devices, screen size aside. Apple is really good at this kind of consistency.

Today we read the first two chapters of Jesse James Garrett’s The Elements of User Experience, which introduce the concept of user experience and why it’s important. Garrett defines user experience as “the experience [a] product creates for the people who use it in the real world.” In Apple’s case, the iPhone is easy to use and intuitive to learn. Interacting with an iPhone is a pleasant experience; everything is smooth and flows well. You don’t have as much freedom to tinker with an iPhone as you do with an Android phone, which also means it’s harder to break. Some parts of the iPhone experience are cryptic, such as the many uses of the home button, but in general Apple has created a finely tuned user experience.

At its worst, a bad user experience can kill you. The Therac-25 radiation therapy machine is a notorious example of this, delivering fatal doses of radiation due to poorly designed software and a bad user interface. From the Wikipedia article:

The system noticed that something was wrong and halted the X-ray beam, but merely displayed the word “MALFUNCTION” followed by a number from 1 to 64. The user manual did not explain or even address the error codes, so the operator pressed the P key to override the warning and proceed anyway.

While a badly designed website won’t give you radiation poisoning, it certainly won’t bring you any business either. Garrett explains that if people have a bad experience on your website, they’re unlikely to return. If visitors can’t find what they’re looking for, they probably won’t stick around for long.

The placement of web content matters as much as the content itself. Underneath the content is the frame of the page, and under that is the way all the pages are organized. Garrett refers to these elements as planes; all of them add up to a well-designed web product, with the more abstract planes forming the basis for the more concrete ones. As with building a house, you should start with a good foundation and build up. If you get a website’s strategy and scope right, its structure, skeleton, and surface will be a lot easier to design.

Response to Redish, “Letting Go of the Words: Writing Web Content that Works”

This week we’re taking a temporary break from video and moving on to websites! Our reading for today is Ginny Redish’s “Letting Go of the Words: Writing Web Content that Works”, and the first thing Redish mentions in chapter 2 is that “understanding your audiences and what they need is critical to deciding what to write,” followed by a discussion of audiences. When writing for the web, you need to understand who your audiences are and why they’ll arrive at your page. A lot of different people will visit your site, and it’s best to categorize them and target specific groups.

Best Buy has used this type of profiling to create personas for each of its customer groups. In the graphic above, Best Buy lists the major characteristics of “Maria,” a middle-class mom who only goes to Best Buy when others in her family make her. The idea is that employees can better serve someone’s needs if they know a bit of background about them. Best Buy probably created its list of personas by conducting interviews and doing research to find out what types of people visited its stores. Redish recommends that you do the same when creating web content: observe and talk to users of the site, getting feedback as you go along.

Once you’ve formed personas, you should keep them in mind when designing your site. Constantly ask yourself whether certain personas will be able to find what they need or accomplish what they need to do on your site. Sometimes when you have two vastly different audiences it’s best to have two different websites rather than writing for everyone at once. Take a look at the difference between Microsoft’s Windows 8 site, their Xbox site, and their MSDN site. All of the websites feature Microsoft products, and all of them have the same general goal: to get you to spend money on Microsoft stuff. Each website has a different audience in mind, however. The Xbox website envisions a gaming audience that is there to have fun, while MSDN is all about software development, and therefore is more serious and professional. The Windows 8 website is targeted mainly towards home and small business users, but Microsoft also has a Windows 8 Enterprise website targeted towards those working in IT and large corporations.

If you understand your audience, you’ll know what they want, and if you know what they want you can design your website to work like they do. Take a look at these comics and you’ll see that a lot of websites miss the bus in terms of giving people what they want.

Response to Douglass & Harnden, “Point of View”

I love how much of my communications class ties into what I’ve been doing in my other classes. Today we’re going to cover the various points of view explained in chapter 3 of John S. Douglass and Glenn P. Harnden’s The Art of Technique. They say that “the moving POV shot is particularly effective in horror and monster movies to build suspense,” and I’d say this is best illustrated in John Carpenter’s Halloween (1978). This first-person shot doesn’t let you know who the killer is or how old he is, which makes it even more shocking when the camera switches to the third person and reveals the killer’s identity.

First-person shots, then, are shots that show what a character would see with their own eyes. The first-person perspective can last the entire length of a novel or a video game, but it doesn’t work for long stretches in film because it lacks the added information of a third-person perspective. The longer a first-person shot goes on, the harder it is to believe.

Second-person perspectives also have a purpose. The idea is to address the viewer directly, and commercials will frequently refer to “you,” the viewer. Training videos and informational videos are also designed specifically to target the active viewer. The idea is that someone is giving a presentation to you, and hopefully you’ll respond to what they’re saying by buying a product, voting, heeding safety guidelines, and so on.

Third-person perspectives are the most common in movies and TV shows. They show all the characters from an imaginary “observer’s point of view, but this point of view is not omniscient,” the reason being that video doesn’t have time for all the inner thoughts of characters that a novel can provide. The scripts might write out these thoughts, but the scriptwriters are talking to actors, not the audience.

Response to Osgood & Hinshaw, “The Aesthetics of Editing”

“Where did the soda go?” (via Reddit)

Ah, physical continuity! This week we’re discussing chapter 8 of Ronald Osgood and M. Joseph Hinshaw’s Visual Storytelling, and as they point out, some of the best editing doesn’t draw attention to itself. In the popular GIF above, the scene’s physical continuity isn’t maintained; the soda clearly disappears from the frame. Osgood and Hinshaw say that “the audience probably isn’t thinking about production techniques unless there are technical problems, misuse of technique, or the edit has been created to draw attention to the technique.” There’s obviously a technical problem in this scene, but let’s think about the circumstances surrounding it. The creators of the commercial clearly didn’t want soda spilling all over the guy, but the better idea would have been to forgo the soda entirely. Most 30-second commercials are whittled down from 6-7 hours of footage; the idea is that the editor picks the best shots to tell the intended story. Perhaps the editors of this commercial didn’t have a take where the man never had the soda, or perhaps the take with the soda started the scene best but ended badly while the one without it ended better, and that’s why they chose this combination. I guess without interviewing the actual editor there’s no way to tell.

Another job of the editor is to choose “when each edit should occur,” and the book points out that there is no gold standard for shot length. Whatever pace a video has, it should be consistent with the video’s mood and purpose. Action-packed music videos might have a quicker pace than the slow, panning shots of nature documentaries, for example.

Choosing the right shots and their durations is important, but changing the order of shots is also the editor’s role. To communicate the video’s intentions as clearly as possible, manipulating timing and re-arranging scenes is essential. In Visual Storytelling, the authors discuss a photographer and a subject, and how, if the photographer is shown first, the subject is presumed to know that she is being photographed. Indeed, providing a scene of explanation beforehand gives greater context to the videos we watch.

Response to Zettl, “The Two-Dimensional Field: Forces within the Screen”

This week in my communications class we’re moving away from still images into the world of video, and what better place to start talking about video than the screen itself. In chapter 7 of Herbert Zettl’s Sight Sound Motion: Applied Media Aesthetics, we read about screen space and the field forces that shape our perception of objects.

Up until the early 2000s, it was common for widescreen displays to be seen only in movie theaters. Television and computer screens were more square than rectangular, with 4 units of width for every 3 units of height (4:3). With the advent of HDTVs, it became possible to display a lot more detail on consumer-quality screens, and thus to recreate the cinematic experience at home. Since most motion pictures are recorded in widescreen, with a 16:9 aspect ratio, a 4:3 aspect ratio just wouldn’t do on HDTVs. Computer screens and TV screens are now a lot wider than they are tall, like the TV pictured above.

Now, why did we decide on 16:9 in the first place? Why not have 9:16 screens that are much taller than they are wide? Zettl explains that “a horizontal arrangement seems to suggest calmness, tranquility, and rest.” Vertical space, in contrast, seems “more dynamic, powerful, and exciting” than horizontal space. Tall buildings are a lot more powerful of a statement than a long stretch of beach, and people have used this property of our perception for centuries when designing structures.

As Zettl points out, people are very good at detecting when something isn’t horizontally stable. Does that picture of the TV above look weird to you? It looked weird to me when I added it, and the reason is that the left side is slightly higher in the frame than the right side, a fact I’ve illustrated in the graphic above. It’s only a difference of about 10 pixels, but people are great at noticing these problems. At best, tilting the x-axis can make an image appear more dynamic or can enhance the instability of a horror movie, but at worst it can really make us uncomfortable. Diagonal lines are best avoided if you want peace and tranquility.

Just try and relax. I dare you.


Another force at play inside the frame is the magnetic pull of its edges. If something is in the corner or right near the edge of the frame, it looks as though it’s glued there despite the pull of gravity. The edges can also be used to define a boundary: if a person is standing right at the edge of the frame, it can give the illusion that they’re up against a wall, which is why, in horror movies, people in hiding are often shown cornered at the edge of the frame. There’s no escape.

Drawing on our earlier discussions of left and right, positioning is also a big deal on screen. As a culture, we’re naturally inclined to treat objects on the right-hand side of the screen as more important than those on the left, which is why in comedy shows the host is always on the right. In news broadcasts, the trend is usually reversed so that the screen stays balanced: the low-energy footage is on the right and the newscaster is on the left, cancelling out our perceptions of left and right in a sense.