
Tag: reading-response

Response to Virilio, Open Sky pt. 3

Reality - worst game ever

My final reading response of the semester! This time, we’re finishing off Virilio’s Open Sky with a summary of Part 3! The reason for the silly picture above is Virilio’s more serious prediction that “we are about to lose our status as eyewitnesses of tangible reality once and for all, to the benefit of technical substitutes… which will make us the ‘visually challenged.'” The picture pokes fun at the fact that a growing number of people spend much of their time playing online games, interacting with other people in virtual worlds rather than in the real one. The idea that all of our interactions could someday take place through a screen is not far off. Virilio already describes how what we see is obscured by the TV screen and by the windows of the vehicles that carry us from place to place, and he warns of the dangers of indirect light as opposed to the direct optics discussed in the earlier parts. It seems to me that we’re trying so hard to recreate reality that we’re disregarding the reality we already have. Instead of experiencing the world ourselves, we’re sitting in darkened rooms staring at screens, living in virtual worlds and neglecting everything outside. Google Street View is an example of the real world becoming a virtual one: you can go almost anywhere in the United States, on almost any major public road, and see what’s outside without actually going there.

Virilio also talks about how some of these images are so lifelike that we get confused and have trouble distinguishing CG images from real life. Think of Apple’s Retina Display technology and the 326 PPI (pixels per inch) it’s capable of displaying. Virilio imagines lasers beaming images directly onto your optic nerves, but high-PPI displays are in commercial use today, so detailed that you can’t even tell there are pixels. With 4K resolution, the increasing power of GPUs to generate images, and image manipulation tools such as Photoshop and After Effects, we’re reaching the point where things can look absolutely lifelike, and unless you see them for yourself you’ll never really know whether they’re real.
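As a quick sanity check on that 326 PPI figure, pixel density follows directly from a display’s resolution and diagonal size. The sketch below uses the iPhone 4’s publicly quoted specs (960×640 pixels, 3.5-inch diagonal); the straight geometric calculation lands a few PPI above Apple’s marketed number, likely because the quoted diagonal is rounded.

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density = diagonal resolution in pixels / diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

# iPhone 4-era Retina display: 960x640 pixels across a 3.5-inch diagonal
print(round(pixels_per_inch(960, 640, 3.5), 1))  # ~329.7, marketed as 326 PPI
```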

Response to Virilio, Open Sky pt. 2

Paging Dr. Nanobot! In the future, tiny robots small enough to fit into our bloodstream could help to remove blood clots, treat diseased tissue, and maintain our bodily functions. Part man, part machine! Part 2 of Paul Virilio’s Open Sky starts out with a discussion of the transplantation revolution, the idea that machines will be assisting with functions inside our bodies as well as outside them. It’s already happening: scientists grew a kidney in the lab just this week. Soon, we may be able to replace all sorts of body parts just like we upgrade our computers. Virilio predicts that in the future we may be able to embed a machine into our bodies that will allow us to act at a distance, like the DataGlove or the DataSuit. Google Glass is a close contender, letting us interact with the environment in a whole new way, but it isn’t wired directly into your optic nerve. Virilio says that there are three intervals that define acting at a distance:

  1. Space: geometric development and control of the physical environment. Innovations such as the car, the train, and mounted animals such as the horse and the mule are examples of this.
  2. Time: control of the physical environment and the invention of communication tools. Letters, telephones, TV, radio.
  3. Light: instantaneous control of the microprocessor environment. Today’s computers, which rely on the speed of light to send their signals.
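To put the interval of light in concrete terms, here’s a back-of-the-envelope calculation (the 3 GHz clock speed is just an illustrative figure, not from the reading) of how far a signal traveling at light speed gets during a single cycle of a modern processor:

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second, in vacuum
clock_hz = 3e9  # a typical ~3 GHz desktop CPU, for illustration

# Distance light covers in one clock cycle, converted to centimetres
distance_per_cycle_cm = SPEED_OF_LIGHT_M_S / clock_hz * 100
print(f"{distance_per_cycle_cm:.1f} cm per clock cycle")  # ~10 cm
```

At these speeds, the physical size of the chip itself becomes the bottleneck, which is one reason miniaturization and instantaneity go hand in hand.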

As a human society, we’ve got a pretty firm grasp on all three of these intervals, and as technology advances, we’re not only obliterating the concept of distance, the interval of space, but also miniaturizing our technology. Virilio says that less is more in today’s society, and that few human interactions are required for many tasks that once had to be done manually. With the push of a button, we can lower our blinds, turn on a light, lock our doors, change the channel on the TV, and automate our entire house without getting up from our chairs. We’re now well on our way to automating the last remaining tasks, injecting robots into our bloodstream to maintain our bodies without us needing to do anything.


Response to Virilio, Open Sky pt. 1

This week, we read part one of Paul Virilio’s Open Sky. As pro-technology as all the other readings have been, this one is certainly different: Virilio doesn’t care much for the latest technology. Telepresence, the ability to appear in two places at once through teleconferencing technologies such as webcams and the Internet, is shortening the distance required to communicate. You can communicate with someone anywhere in the world without leaving your house, and because of this we don’t get out as much. Just as cars and planes have shortened physical distances around town and around the world, the very need to go anywhere is also disappearing. Someday, people may simply lie in bed and project themselves into robots that deliver impulses to their nerves at a distance, never needing to move at all. The example Virilio mentions is the DataSuit, developed by NASA scientists, which lets you feel, see, and hear as though you were somewhere else entirely. According to Virilio, this telepresence is going to destroy us by obliterating distance. Without the need to wait, everything must happen immediately, in real time, and the flow of information becomes overwhelming. Virilio talks at length about physics and its relation to time, and distinguishes between large-scale and small-scale optics. In the future, he says, we’ll have a tele-existence and become terminal men and women. It reminds me a bit of Jonathan Mostow’s Surrogates (2009), which warns of the dangers of this type of technology: a giant company controls all sorts of robots that interface with humans.

Response to Garrett, “User Experience and Why it Matters”


Apple’s iOS home screen doesn’t look much different than it did when the iPhone was first released six years ago. Since 2007, the only real improvements to the home screen itself have been Spotlight search, the ability to create folders, and the Newsstand app’s sliding bookshelf. Other mobile operating systems such as Android are constantly changing, offering widgets, app drawers, live wallpapers, voice search, and the like on their launcher screens; launch-day Android looked very different from Android Jelly Bean. Each Android device also looks and acts a bit different from the others. The HTC Thunderbolt, for example, uses HTC’s Sense interface, while the Samsung Galaxy S II uses Samsung’s TouchWiz. My ASUS SL101 tablet has the stock Android Ice Cream Sandwich interface but some ASUS-customized settings. The point is, if you have Android, you aren’t necessarily having the same experience as another Android user. If you have an iPhone, iPad, or iPod Touch, you’ll get essentially the same experience on any of these devices, screen size aside. Apple is really good at this consistency.

Today we read the first two chapters of Jesse James Garrett’s The Elements of User Experience, which introduced the concept of a user experience and why it’s important. Garrett defines user experience as “the experience [a] product creates for the people who use it in the real world.” In Apple’s case, the iPhone is easy to use and intuitive to learn. Interacting with an iPhone is a pleasant experience; everything is smooth and flows well. You don’t have as much freedom to tinker with an iPhone as you do with an Android phone, and this means that it’s harder to break. Some parts of the iPhone experience are cryptic, such as the many uses of the home button, but in general Apple has created a finely-tuned user experience.

At its worst, a bad user experience can kill you. The Therac-25 radiation therapy machine is a notorious example of this, delivering fatal doses of radiation due to poorly designed software and a bad user interface. From the Wikipedia article:

The system noticed that something was wrong and halted the X-ray beam, but merely displayed the word “MALFUNCTION” followed by a number from 1 to 64. The user manual did not explain or even address the error codes, so the operator pressed the P key to override the warning and proceed anyway.

While a badly designed website won’t give you radiation poisoning, it certainly won’t give you any business either. Garrett explains that if people have a bad experience on your website, they’re unlikely to return; if they can’t find what they’re looking for, they probably won’t stick around for long.

The placement of web content matters as much as the content itself. Underneath the content is the frame of the page, and under that is the way all the pages are organized. Garrett refers to these elements as planes: the more abstract planes form the basis for the more concrete ones, and together they add up to a well-designed web product. As with building a house, you start with a good foundation and build up. If you have a solid strategy and scope when building a website, the structure, skeleton, and surface of the site become a lot easier to design.

Response to Redish, “Letting Go of the Words: Writing Web Content that Works”

This week we’re taking a temporary break from video and moving on to websites! Our reading for today is Ginny Redish’s “Letting Go of the Words: Writing Web Content that Works”, and the first thing that Redish mentions in chapter 2 is that “understanding your audiences and what they need is critical to deciding what to write,” followed by a discussion about audiences. When writing for the web, you need to understand who your audiences are and why they’ll arrive at your page. A lot of different people will visit your site, and it’s best to categorize them and target different groups specifically.

Best Buy has used this type of profiling to create personas for each of their different customer groups. In the graphic above, Best Buy lists major characteristics of “Maria,” a middle-class mom who only goes to Best Buy when others in her family force her to go. The idea is that employees can better serve someone’s needs if they know a bit of background about them. Best Buy probably created their list of personas by conducting interviews and doing research to find out what type of people visited their stores. Redish recommends that you do the same when creating web content: observe and talk to users of the site, getting feedback as they go along.

Once you’ve formed personas, you should keep them in mind when designing your site. Constantly ask yourself whether certain personas will be able to find what they need or accomplish what they need to do on your site. Sometimes when you have two vastly different audiences it’s best to have two different websites rather than writing for everyone at once. Take a look at the difference between Microsoft’s Windows 8 site, their Xbox site, and their MSDN site. All of the websites feature Microsoft products, and all of them have the same general goal: to get you to spend money on Microsoft stuff. Each website has a different audience in mind, however. The Xbox website envisions a gaming audience that is there to have fun, while MSDN is all about software development, and therefore is more serious and professional. The Windows 8 website is targeted mainly towards home and small business users, but Microsoft also has a Windows 8 Enterprise website targeted towards those working in IT and large corporations.

If you understand your audience, you’ll know what they want, and if you know what they want you can design your website to work the way they do. Take a look at these comics and you’ll see that a lot of websites miss the mark when it comes to giving people what they want.

Response to Osgood & Hinshaw, “The Aesthetics of Editing”

“Where did the soda go?” (via Reddit)

Ah, physical continuity! This week we’re discussing chapter 8 of Ronald Osgood and M. Joseph Hinshaw’s Visual Storytelling, and as they point out, some of the best editing doesn’t draw attention to itself. In the popular GIF above, the scene’s physical continuity isn’t maintained; the soda clearly disappears from the frame. Osgood and Hinshaw say that “the audience probably isn’t thinking about production techniques unless there are technical problems, misuse of technique, or the edit has been created to draw attention to the technique.” There’s obviously a technical problem in this scene, but let’s think about the circumstances surrounding it. The creators of the commercial clearly didn’t want soda spilling all over the guy, but the better idea would have been to forgo the soda entirely. Most 30-second commercials are whittled down from 6-7 hours of footage, the idea being that the editor picks the best shots to tell the intended story. Perhaps the editors of this commercial didn’t have a take where the man lacked the soda, or perhaps the soda take opened best but ended badly while the sodaless take ended better, and that’s why they chose this combination of shots. Without interviewing the actual editor, there’s no way to tell.

Another job of the editor is to choose “when each edit should occur,” and the book points out that there is no gold standard for shot length. Whatever pace a video has, it should be consistent with the video’s mood and purpose. Action-packed music videos might have a quicker pace than the slow, panning shots of nature documentaries, for example.

Choosing the right shots and their duration is important, but re-ordering them is also the editor’s role. Manipulating timing and re-arranging scenes is essential to communicating a video’s intentions clearly. In Visual Storytelling, the authors discuss a photographer and a subject: if the photographer is shown first, the subject is presumed to know that she is being photographed. Indeed, providing a scene of explanation beforehand gives greater context to the videos we watch.

Response to Zettl, “The Two-Dimensional Field: Forces within the Screen”

This week in my communications class we’re moving away from still images into the world of video, and what better place to start talking about video than the screen itself. In chapter 7 of Herbert Zettl’s Sight Sound Motion: Applied Media Aesthetics, we read about screen space and the field forces that shape our perception of objects.

Up until the early 2000s, widescreen displays were commonly seen only in movie theaters. Television and computer screens were more square than rectangular, with 4 units of width for every 3 units of height (4:3). With the advent of HDTVs, it became possible to display far more detail on consumer-quality screens, recreating the cinematic experience at home. Since most motion pictures are recorded in widescreen, with an aspect ratio of 16:9, the 4:3 ratio just wouldn’t do on HDTVs. Computer and TV screens are now a lot wider than they are tall, like the TV pictured above.
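The difference between the two ratios is easy to make concrete with a little geometry: given a diagonal size and an aspect ratio, the width and height follow from the Pythagorean theorem. The 40-inch diagonal below is just an illustrative figure.

```python
import math

def screen_dimensions(diagonal: float, ratio_w: int, ratio_h: int) -> tuple[float, float]:
    """Return (width, height) of a screen given its diagonal and aspect ratio."""
    unit = diagonal / math.hypot(ratio_w, ratio_h)  # size of one ratio unit
    return ratio_w * unit, ratio_h * unit

w, h = screen_dimensions(40, 16, 9)  # a hypothetical 40-inch widescreen TV
print(f"16:9 at 40 in: {w:.1f} x {h:.1f} in")  # ~34.9 x 19.6 inches
```

Run the same function with a 4:3 ratio and the same diagonal, and you get a noticeably squarer 32 × 24 inch panel, which is exactly why old movies letterbox so awkwardly on new screens and vice versa.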

Now, why did we decide on 16:9 in the first place? Why not have 9:16 screens that are much taller than they are wide? Zettl explains that “a horizontal arrangement seems to suggest calmness, tranquility, and rest.” Vertical space, in contrast, seems “more dynamic, powerful, and exciting” than horizontal space. Tall buildings are a lot more powerful of a statement than a long stretch of beach, and people have used this property of our perception for centuries when designing structures.

As Zettl points out, people are very good at detecting when something isn’t horizontally stable. Does that picture of the TV above look weird to you? It looked weird to me when I added it, and the reason is that the left side is slightly higher in the frame than the right side, a fact that I’ve illustrated in the graphic above. It’s only a difference of about 10 pixels, but people are great at noticing these problems. At best, tilting the horizon can make an image appear more dynamic or can be used to enhance the instability of a horror movie, but at worst it can really make us uncomfortable. Diagonal lines are best avoided if you want peace and tranquility.
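Just how sensitive is that perception? A 10-pixel offset sounds tiny, but the tilt angle it implies is simple trigonometry (the 600-pixel image width below is an assumption for illustration, not the actual size of my photo):

```python
import math

image_width_px = 600  # assumed width of the photo, for illustration
offset_px = 10        # one side sits about 10 px higher than the other

# Tilt angle of the horizon line implied by the vertical offset
tilt_degrees = math.degrees(math.atan(offset_px / image_width_px))
print(f"{tilt_degrees:.2f} degrees")  # just under one degree
```

Less than a single degree of rotation, and yet the eye catches it immediately.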

Just try and relax. I dare you.

Another force at play inside the frame is the magnetic pull of its edges. If something is in the corner or right near the edge of the frame, it looks as though it’s glued there despite the pull of gravity. It can also be used to define a boundary. If a person is standing right at the edge of the frame it can give the illusion that they’re against a wall, which is why in horror movies people in hiding are often shown to be cornered in the edge of a frame. There’s no escape.

Drawing upon our earlier discussions of left and right, positioning is also a big deal on screen. We’re culturally inclined to perceive objects on the right-hand side of the screen as more important than those on the left, which is why in comedy shows the host is always on the right. In news broadcasts, the trend is usually reversed to balance the screen: the low-energy footage goes on the right and the newscaster on the left, cancelling out our perceptions of left and right in a sense.

Response to Fagerjord, “Multimodal Polyphony”

Ah, Flash Player. This little browser plugin has been shaping the web for over 17 years, and while its power and support are dwindling these days due to open video standards, HTML5, and the Canvas element, it still powers most of the online multimedia that we experience today. Anders Fagerjord’s “Multimodal Polyphony” analyzes in depth a certain type of Flash-powered content: the Flash documentary. Born in the early 2000s out of bandwidth concerns, a Flash documentary is a presentation of still images and voice-over narration that mixes elements of TV and still photography. It’s a kind of enhanced slideshow, a PowerPoint presentation with a narrator. The Flash documentary that Fagerjord focuses on is National Geographic Magazine’s “The Way West,” the first of their Sights and Sounds series.

A Flash documentary uses still images, but not all images are the same. The window is a fixed size, just like a television screen, but some images may have different sizes or aspect ratios. To get around this problem, and to add extra excitement and interest to the presentation, Flash documentaries apply TV-style effects to their images.

Ken Burns Effect

I didn’t make the video above, but it serves as a good illustration of how the Ken Burns effect works. You take a still (or moving) image, zoom in so that it’s bigger than your frame, then slowly pan and zoom to show only part of it at any given time. With these tools, you can create other effects, such as revealing a part of the image previously hidden from the viewer, or directing the viewer’s attention to a specific spot by zooming into it.
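Under the hood, the Ken Burns effect is just a crop rectangle interpolated over time. This sketch is my own simplification (not from Fagerjord): linearly blend a starting crop into an ending crop, and render whichever rectangle corresponds to the current moment in the animation.

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b at time t in [0, 1]."""
    return a + (b - a) * t

def ken_burns_crop(start: dict, end: dict, t: float) -> dict:
    """Crop rectangle (x, y, w, h) at time t, panning/zooming from start to end."""
    return {k: lerp(start[k], end[k], t) for k in ("x", "y", "w", "h")}

start = {"x": 0, "y": 0, "w": 800, "h": 450}    # wide establishing crop
end = {"x": 300, "y": 150, "w": 400, "h": 225}  # zoomed in on one detail
halfway = ken_burns_crop(start, end, 0.5)
print(halfway)  # {'x': 150.0, 'y': 75.0, 'w': 600.0, 'h': 337.5}
```

Shrinking the rectangle zooms in, moving it pans, and easing the `t` value instead of stepping it linearly is what gives the effect its gentle drift.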

Multimedia

Supplementing the visual portion of the presentation is audio, which can include music, background ambient noise, sound effects, and narration. In the National Geographic presentation, you hear the sounds of the old west before the presentation starts to get you in the mood. Music and visuals together can greatly increase the immersion that you feel when watching one of these presentations, even if you’re staring at a small Flash window and the images themselves are still.

We’re going to be making a Flash documentary style video soon as part of the class. Stay tuned!

Response to Douglas, “The Zen of Listening”

Ah, listening. We do it all the time, and unlike watching television, we can do other things while we listen. Listening tends to leave a lasting impression as well. Can you remember what you were doing when you heard a popular song for the first time? Do certain pieces of audio bring back memories and feelings from the past? Susan J. Douglas explores what makes listening so emotionally powerful in Listening In. Douglas focuses particularly on radio, and what makes it stand out from visual communication such as television and print media.

Radio differs from TV in that, lacking visual material to accompany the audio, we fill in the missing visuals with our imagination. Douglas cites a study involving two groups of kids, one of which watched TV while the other listened to the radio, which found that “the children who had heard the story created much more imaginative conclusions than those who had seen the television version.” My dad told me that as a kid he had to listen to all of his hometown baseball games on the radio and imagine how each game went. Actively engaging with the material, rather than sitting and watching it unfold, made it a lot more exciting.

I’d very much agree with the strong emotions that sound alone can conjure, as I’m a horror story junkie. I can’t get enough scary stories in my life, and nothing is scarier than listening to an episode of the excellent Nosleep Podcast just before bed. Douglas nods at the ability of radio to creep people out, mentioning Cantril and Allport who said that

“When it comes to producing eerie and uncanny effects,” they added, “the radio has no rival.” They noted that even in the early 1930s, listeners would “enhance this distinctive quality of radio” by sitting in the dark and closing their eyes so that “their fantasies are free.”

In addition, radio is much more effective than print media because it is a live stream of communication. Radio can be heard by everyone at exactly the same time, building on the newspaper culture in which everyone reads the same stories; now everyone can experience the same thing at the same moment.

Response to Kress and van Leeuwen, “Meaning of Composition”

In print and online, you can’t really have text without a layout. You’re reading this text through a layout right now. I’ve chosen to start the post with a picture and followed it up with a wall of text. In a previous post, I left-aligned my first image to give the post a magazine-style layout. Chapter 6 of Gunther Kress and Theo van Leeuwen’s Reading Images discusses various properties of composition, “the placement or arrangement of visual elements or ingredients in a work of art, as distinct from the subject of a work.”

The three properties focused on most are:

  1. Information value
  2. Salience
  3. Framing

Let’s talk about each of them.


Salience

Looking at the fictitious web layout on the right, what is the most important part of the page? Probably the big bright green box that takes up most of the top of the page. Indeed, in web composition, the biggest thing is usually the most important. Blogs are generally in a two-column format, with the main content on the left and a smaller sidebar on the right holding other, less important information.

The huge green box jumps out at you because it spans three of the four columns in this layout and is much bigger than any other element. The bright green also stands out in terms of contrast, catching your eye. Kress and van Leeuwen define salience as “elements… made to attract the viewer’s attention to different degrees,” which is often achieved through “relative size, contrasts in tonal value (or colour), differences in sharpness, etc.” The low-contrast blue-on-black of the four columns below the big green box doesn’t attract your attention nearly as strongly as the box does.

Information Value

Top, Bottom, Left, Right, Center, and corresponding Margins.

The last few paragraphs between the horizontal lines demonstrate another principle of composition: information value, or how “the placement of elements… endows them with the specific values attached to the various ‘zones’ of the image,” depicted above. In particular, the left and right zones are used here. Kress and van Leeuwen define the left zone as Given and the right zone as New. Given information is presented as something familiar; in this case, the blog text was the given. New information, on the other hand, is unfamiliar and prompts special attention from the reader, which is why the graphic, a departure from the familiar textual format of the blog, was placed on the right.


Framing

The third aspect of composition is framing, which “disconnects or connects elements of the image, signifying that they belong or do not belong together in some sense.” This blog post uses framing devices to separate each of the different topics I’ve talked about. Take a look at its dissection on the left.

I used horizontal rule <hr> tags to separate the terms from one another, and bold titles to make the divisions more prominent. The spacing I used and the placement of images also helped to break up paragraphs and make the post more appealing to read than if it were just a big wall of text. Putting big images in between text causes a break in the flow and signifies the end of one section and the start of another.

Even in this section, paragraph breaks show you what belongs together and what doesn’t. This sentence and the one before it are both in the same group, while the ones above are not. In the graphic on the left, I used color to highlight the different groups on the page, which is another technique that framing provides.


In conclusion, I hope that you’ve learned something about composition through the visual and textual examples I’ve put above.