Press "Enter" to skip to content

Month: March 2013

Response to Douglass & Harnden, “Point of View”

I love how much of my communications class ties into what I’ve been doing in my other classes. Today we’re covering the points of view explained in chapter 3 of John S. Douglass and Glenn P. Harnden’s The Art of Technique. They say that “the moving POV shot is particularly effective in horror and monster movies to build suspense,” and I’d say this is best illustrated in John Carpenter’s Halloween (1978). The first-person opening shot doesn’t let you know who the killer is or how old he is, which makes it all the more shocking when the camera switches to the third person and reveals his identity.

First-person shots, then, are shots that show what a character would see with his or her own eyes. A first-person perspective can sustain an entire novel or video game, but it doesn’t work for long stretches of film because it withholds the extra information a third-person perspective provides. The longer a first-person shot runs, the harder it becomes to believe.

Second-person perspectives have a purpose too: addressing the viewer directly. Commercials frequently refer to “you,” the viewer, and training and informational videos are likewise designed to target an active viewer. The idea is that someone is giving a presentation to you, and hopefully you’ll respond to it by buying a product, voting, heeding safety guidelines, and so on.

Third-person perspectives are the most common in movies and TV shows. They show all of the characters from an imaginary “observer’s point of view, but this point of view is not omniscient,” because video doesn’t have time for all of the inner thoughts a novel can provide. A script might spell those thoughts out, but there the scriptwriters are talking to the actors, not the audience.

Response to Osgood & Hinshaw, “The Aesthetics of Editing”

“Where did the soda go?” (via Reddit)

Ah, physical continuity! This week we’re discussing chapter 8 of Ronald Osgood and M. Joseph Hinshaw’s Visual Storytelling, and as they point out, some of the best editing doesn’t draw attention to itself. In the popular GIF above, the scene’s physical continuity isn’t maintained; the soda clearly disappears from the frame. Osgood and Hinshaw say that “the audience probably isn’t thinking about production techniques unless there are technical problems, misuse of technique, or the edit has been created to draw attention to the technique.” There’s obviously a technical problem in this scene, but let’s think about the circumstances surrounding it.

The creators of the commercial clearly didn’t want soda spilling all over the actor, but the better choice would have been to forgo the soda entirely. Most 30-second commercials are whittled down from six or seven hours of footage, the idea being that the editor picks the best takes to tell the intended story. Perhaps the editors of this commercial didn’t have a take where the man lacked the soda, or perhaps the soda take was the best way to start but ended badly, while the take without the soda ended better, and that’s why they spliced the two together. I guess without interviewing the actual editor there’s no way to tell.

Another job of the editor is choosing “when each edit should occur,” and the book points out that there is no gold standard for shot length. Whatever pace a video has should be consistent with its mood and purpose: an action-packed music video might cut much faster than a nature documentary’s slow, panning shots.

Just as important as choosing the right shots and their durations, re-ordering shots is also the editor’s role. Manipulating timing and rearranging scenes is essential to communicating a video’s intentions clearly. In Visual Storytelling, the authors discuss a photographer and a subject: if the photographer is shown first, the subject is presumed to know that she is being photographed. Providing a scene of explanation beforehand gives greater context to what we watch next.

Response to Zettl, “The Two-Dimensional Field: Forces within the Screen”

This week in my communications class we’re moving away from still images and into the world of video, and what better place to start than the screen itself? In chapter 7 of Herbert Zettl’s Sight, Sound, Motion: Applied Media Aesthetics, we read about screen space and the field forces that shape our perception of objects.

Until the early 2000s, widescreen displays were commonly seen only in movie theaters. Television and computer screens were more square than rectangular: 4 units of width for every 3 units of height (4:3). With the advent of HDTVs it became possible to display much more detail on consumer-grade screens, recreating the cinematic experience at home. Since most motion pictures are shot in widescreen, the 16:9 aspect ratio, a 4:3 ratio just wouldn’t do on HDTVs. Computer screens and TVs are now much wider than they are tall, like the TV pictured above.
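As a quick aside, the mismatch between those two ratios is easy to work out. Here’s a little arithmetic sketch of my own (the 640×480 screen size is just an example, not anything from Zettl) showing how much letterboxing 16:9 footage needs on a 4:3 screen:

```csharp
// Quick arithmetic check: fitting 16:9 footage onto a 4:3 screen.
using System;

class AspectRatioDemo
{
    static void Main()
    {
        int screenWidth = 640, screenHeight = 480;  // a 4:3 screen
        double contentAspect = 16.0 / 9.0;          // widescreen footage

        // Scale the footage to fill the screen's width, then see how much
        // vertical space is left over for black bars.
        int contentHeight = (int)(screenWidth / contentAspect); // 360
        int barHeight = (screenHeight - contentHeight) / 2;     // 60 each

        Console.WriteLine("Footage: {0}x{1}, bars: {2}px top and bottom",
            screenWidth, contentHeight, barHeight);
    }
}
```

On a 640×480 screen you end up with 60-pixel black bars above and below the picture, which is exactly why widescreen movies looked so cramped on older TVs.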

Now, why did we decide on 16:9 in the first place? Why not 9:16 screens that are much taller than they are wide? Zettl explains that “a horizontal arrangement seems to suggest calmness, tranquility, and rest.” Vertical space, in contrast, seems “more dynamic, powerful, and exciting” than horizontal space. A tall building makes a far more powerful statement than a long stretch of beach, and people have exploited this property of our perception for centuries when designing structures.

As Zettl points out, people are very good at detecting when something isn’t horizontally stable. Does the picture of the TV above look weird to you? It looked weird to me when I added it, and the reason is that the left side sits slightly higher in the frame than the right, a fact I’ve illustrated in the graphic above. It’s a difference of only about 10 pixels, but people are great at noticing these problems. At best, tilting the horizon line can make an image appear more dynamic or heighten the instability in a horror movie; at worst, it just makes us uncomfortable. Diagonal lines are best avoided if you want peace and tranquility.
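Out of curiosity, here’s a rough calculation of how small that tilt really is. I’m guessing at the photo’s width (about 600 pixels), so treat the numbers as illustrative:

```csharp
// How steep is a 10-pixel misalignment? A back-of-the-envelope check.
using System;

class TiltDemo
{
    static void Main()
    {
        double widthPx = 600;  // assumed photo width
        double offsetPx = 10;  // left edge sits ~10px higher than the right

        // The tilt angle of the top edge, converted from radians to degrees.
        double degrees = Math.Atan2(offsetPx, widthPx) * 180.0 / Math.PI;
        Console.WriteLine("Tilt: about {0:F1} degrees", degrees); // ~1.0
    }
}
```

Roughly one degree of tilt is all it takes to make the image feel off.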

Just try and relax. I dare you.

Another force at play inside the frame is the magnetic pull of its edges. If something sits in a corner or right near the edge of the frame, it looks as though it’s glued there despite the pull of gravity. The edge can also define a boundary: a person standing right at the edge of the frame can appear to be up against a wall, which is why horror movies so often show hiding characters pressed into the edge of the frame. There’s no escape.

Drawing upon our earlier discussions of left and right, positioning is also a big deal on screen. As a culture we naturally read objects on the right-hand side of the screen as more important than those on the left, which is why in comedy shows the host always sits on the right. In news broadcasts the trend is usually reversed so that the screen stays balanced: the low-energy footage goes on the right and the newscaster on the left, cancelling out our perceptions of left and right, in a sense.

PechaKucha Project: CS LAN Party

A PechaKucha is a presentation in which 20 images are shown for 20 seconds each; this is the one I did for communications class. All of the music featured for longer than 30 seconds comes from an old video game, and some of it doesn’t seem to exist anywhere else. As far as I know, it’s not copyrighted. The MIDI files weren’t pre-recorded, either; my computer synthesized them.

Response to Fagerjord, “Multimodal Polyphony”

Ah, Flash Player. This little browser plugin has been shaping the web for over 17 years, and while its power and support are dwindling these days thanks to open video standards, HTML5, and the canvas element, it still powers most of the online multimedia we experience today. Anders Fagerjord’s “Multimodal Polyphony” analyzes one type of Flash-powered content in depth: the Flash documentary. Born in the early 2000s out of bandwidth concerns, a Flash documentary is a presentation of still images and voice-over narration that mixes elements of TV and still photography; it’s a kind of enhanced slideshow, a PowerPoint presentation with a narrator. The documentary Fagerjord focuses on is National Geographic Magazine’s “The Way West,” the first of their Sights and Sounds series.

A Flash documentary uses still images, but not all images are the same. The window is a fixed size, just like a television screen, yet the source images may come in different sizes and aspect ratios. To get around this problem, and to add extra excitement and interest to the presentation, Flash documentaries apply TV-style effects to their images.

Ken Burns Effect

I didn’t make the above video, but it’s a good illustration of how the Ken Burns effect works. Basically, you take a still (or moving) image, zoom in so that it’s larger than your frame, then slowly pan and zoom so that only part of it is visible at any given time. With these tools you can create other effects, such as revealing a part of the image the viewer hasn’t seen yet, or directing the viewer’s attention to a specific spot by zooming into it.
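Since I’ve been playing with C# and WPF anyway (see my video poker post below), here’s a little sketch of how you might fake the effect yourself. This is my own toy example, not anything from Fagerjord’s article; “photo.jpg” is a placeholder for whatever image you have lying around:

```csharp
// A toy Ken Burns effect in WPF: scale the image past the window's edges,
// then animate a slow zoom and pan so only a moving slice is visible.
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Media.Imaging;

class KenBurnsSketch
{
    [STAThread]
    static void Main()
    {
        var image = new Image
        {
            Source = new BitmapImage(new Uri("photo.jpg", UriKind.Relative)),
            Stretch = Stretch.UniformToFill,
            RenderTransformOrigin = new Point(0.5, 0.5)
        };

        // A scale transform for the zoom and a translate transform for the pan.
        var zoom = new ScaleTransform(1.2, 1.2); // start already zoomed in
        var pan = new TranslateTransform();
        var transforms = new TransformGroup();
        transforms.Children.Add(zoom);
        transforms.Children.Add(pan);
        image.RenderTransform = transforms;

        var window = new Window
        {
            Title = "Ken Burns sketch", Width = 640, Height = 360, Content = image
        };
        window.Loaded += delegate
        {
            var length = new Duration(TimeSpan.FromSeconds(10));
            // Zoom in a little further while drifting left over ten seconds.
            zoom.BeginAnimation(ScaleTransform.ScaleXProperty, new DoubleAnimation(1.2, 1.5, length));
            zoom.BeginAnimation(ScaleTransform.ScaleYProperty, new DoubleAnimation(1.2, 1.5, length));
            pan.BeginAnimation(TranslateTransform.XProperty, new DoubleAnimation(0, -80, length));
        };

        new Application().Run(window);
    }
}
```

The trick is that the image is always scaled larger than the window, so the animation continually changes which slice of it you see.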

Multimedia

Supplementing the visual portion of the presentation is audio: music, background ambience, sound effects, and narration. In the National Geographic presentation, you hear the sounds of the Old West before the presentation even starts, to put you in the mood. Together, music and visuals greatly increase the immersion you feel watching one of these presentations, even if you’re staring at a small Flash window and the images themselves never move.

We’re going to be making a Flash documentary style video soon as part of the class. Stay tuned!

Weekend Project: Video Poker in C# and Silverlight

I’m working on building an electronic piggy-bank-style box, like this one I’ve seen, that will count my spare change, and I wondered if I could make it more interesting. Thinking back to coin-operated arcade machines, I decided I could write a pretty cool video poker program that would read my piggy bank and let me play poker with my change. I also decided to learn a new language to do it: Microsoft’s C# (“C sharp”).

A few good friends of mine use C# at their jobs, and it’s an interesting choice because it’s a .NET language: the same code can be used on the Windows desktop and online via Microsoft Silverlight.

I ported some old Python code over to C# to learn what makes it different, and after a bit of searching through the Microsoft Developer Network (MSDN) I found the correct ways to do a few things. After some more tinkering and learning WPF, the Windows Presentation Foundation, I had my application.

Now, I don’t want to break any gambling laws, so all the “credits” earned are 100% virtual and have no cash value. In the future I can wire this up to the coin acceptor I’m building to feed more credits into the machine, but once they’re in, they stay in. I did make a rudimentary user account system that saves one’s balance between sessions, so I can reboot my computer and still play later.
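I won’t post the real account code here, but the idea is simple enough that a sketch gets it across. Everything below — the BalanceStore name, the balances.txt file, the name=credits format — is made up for illustration, not pulled from my project:

```csharp
// A minimal sketch of persisting balances between sessions:
// one "name=credits" line per user in a plain text file.
using System;
using System.Collections.Generic;
using System.IO;

public static class BalanceStore
{
    private const string FileName = "balances.txt"; // hypothetical file name

    // Load every saved "name=credits" pair from disk.
    public static Dictionary<string, int> Load()
    {
        var balances = new Dictionary<string, int>();
        if (!File.Exists(FileName))
            return balances;

        foreach (var line in File.ReadAllLines(FileName))
        {
            var parts = line.Split('=');
            int credits;
            if (parts.Length == 2 && int.TryParse(parts[1], out credits))
                balances[parts[0]] = credits;
        }
        return balances;
    }

    // Write all balances back out, one account per line.
    public static void Save(Dictionary<string, int> balances)
    {
        var lines = new List<string>();
        foreach (var pair in balances)
            lines.Add(pair.Key + "=" + pair.Value);
        File.WriteAllLines(FileName, lines.ToArray());
    }
}
```

A plain text file is plenty for a one-person game, though a real system would at least want some tamper-proofing.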

I still need to learn how to cache the card images somewhere, because every time I draw the cards the images are pulled from my server again and again.
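The fix I have in mind is an in-memory cache keyed by URL, so each card image is fetched from the server only once per session. This is a sketch of the approach rather than code from the project (CardImageCache and its Get method are names I made up), and WPF does some image caching of its own, so I still need to test whether it’s even necessary:

```csharp
// A simple in-memory image cache keyed by URL.
using System;
using System.Collections.Generic;
using System.Windows.Media.Imaging;

public static class CardImageCache
{
    private static readonly Dictionary<string, BitmapImage> Cache =
        new Dictionary<string, BitmapImage>();

    // Return a cached image if we've seen this URL before; otherwise
    // fetch it once and remember it for the rest of the session.
    public static BitmapImage Get(string url)
    {
        BitmapImage image;
        if (!Cache.TryGetValue(url, out image))
        {
            image = new BitmapImage(new Uri(url)); // first and only fetch
            Cache[url] = image;
        }
        return image;
    }
}
```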

To the web!

As I said, I’ve heard that Silverlight makes it easy to cross-compile applications for the web. That’s true for the most part, but there are some discrepancies. First, some controls that work in WPF, such as Label, don’t exist in Silverlight. Also, any class library DLLs you’ve compiled for the desktop can’t be referenced from a Silverlight project, so if you’re building for both, hang on to your .cs files and recompile them for each target!
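One trick that helps when sharing those .cs files: Silverlight projects define a SILVERLIGHT compilation symbol, so a single source file can paper over the differences between the two frameworks. Here’s a hypothetical example (CaptionFactory is my own invention) working around the missing Label control:

```csharp
// One source file, two targets: fence off framework differences
// behind the SILVERLIGHT symbol that Silverlight projects define.
using System.Windows;
using System.Windows.Controls;

public static class CaptionFactory
{
    public static FrameworkElement MakeCaption(string text)
    {
#if SILVERLIGHT
        // Silverlight's core controls have no Label, so use a TextBlock.
        return new TextBlock { Text = text };
#else
        return new Label { Content = text };
#endif
    }
}
```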

I’ve made a few changes to my application since that screenshot was taken, and I’ve put up a web-playable version. It lacks user accounts, and you’re awarded 100 credits every time you visit the page! If you go below 0 credits I don’t know what happens; I think you just go into the negatives. It’s a prototype, and it’s controlled entirely via the keyboard for now, so make sure to click the blue background before typing so that the program has focus! I might improve it later, but my focus is on the desktop implementation for now.

I’ll be on Spring Break for the next week, but I’ll update once I get back!