06/17 360-technologies - mobility into the classroom
In this talk I reflect on how 360-technologies can be used to bring experiences from the outside world into the classroom.
360-technologies cover a range of technical solutions, from fully computer-generated environments, via various ways of placing objects in an environment, to the more accessible technologies for recording spheric still images and video.
All the big technology players are heavily invested in spheric media. Google's strongest approach is through the Android operating system, using mobile devices for viewing 360-media, and the company has for some time provided a low-cost entry to spheric media by combining mobile phones with simple viewing devices. Even though this comes from the low end, the usage can be quite advanced. Arguably, however, the company's most significant contribution to the spread of spheric media has been its many years of extensive development of Street View. Google is also working on Project Tango, which is more about Augmented Reality, as well as a bridge towards machine-vision technologies. Other companies are also intensifying their efforts on spheric media and related services. Facebook supports publishing of 360-media and works on more advanced solutions through its acquisition of Oculus. Microsoft is developing the HoloLens, one of the more advanced viewing solutions at the moment. Apple, which also has a solid grip on mobile media, recently announced its AR developer kit for iOS.
Ambisonic sound is not covered here, but the importance of sound can hardly be emphasised enough: rich imagery paired with a flat sound file will never create an immersive experience. One option when working with sound is the FB360 Spatial Workstation; another is the Ambisonic Toolkit.
360-video cameras range from cheap consumer gear, like the Samsung Gear 360, Ricoh Theta and Nikon KeyMission 360, which can be bought for a few hundred Euro. At the other end there is truly expensive equipment, like the Nokia Ozo at 37,500 Euro. The nice thing is that even among the cheap equipment one finds cameras that deliver video quality fully usable for most practical purposes.
Faced with new technical solutions, the important questions should always be "What is the purpose of this technical solution?", "How does it make communication different?" and "How can this enhance storytelling?". Regardless of whether you are making 360-videos for education, entertainment or journalistic purposes, the answer is mostly about taking spectators into situations and experiences that would otherwise be difficult or impossible to access.
A war zone is an obvious example of a situation that most people are unable to access, a place where neither students nor a news audience can possibly go. In such situations the 360-view also addresses a problem caused by the limitation of the frame in ordinary video: what is "hidden" outside the frame. The filmmaker does of course still decide where to place the 360-camera, when to record, and how to edit the result. Still, 360-video makes it more difficult to leave out parts of the scene that do not fit the intended story. Storytelling may become more complicated this way, but at the same time the text becomes more open.
Video moves along an internal time axis, driven by a given framerate. This brings a spheric scene to an end, and cuts to the next, even when the user does nothing. With spheric still images it is the other way around: normally the user actively chooses between offered links to bring the discourse forward.
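The difference can be made concrete with a minimal sketch of link-driven navigation between spheric stills (the scene names and file names are purely hypothetical): nothing advances until the user actively picks a link.

```python
# Hypothetical link graph of spheric still images: each scene offers
# outgoing links, and the discourse only moves when the user chooses one.
scenes = {
    "courtyard": {"image": "courtyard.jpg", "links": ["hallway", "garden"]},
    "hallway":   {"image": "hallway.jpg",   "links": ["courtyard", "office"]},
    "garden":    {"image": "garden.jpg",    "links": ["courtyard"]},
    "office":    {"image": "office.jpg",    "links": ["hallway"]},
}

def follow_link(current, choice):
    """Move to a linked scene only if the user actively chooses it."""
    if choice in scenes[current]["links"]:
        return choice
    return current  # no valid choice: the discourse simply waits

position = "courtyard"
position = follow_link(position, "garden")  # the user clicks a hotspot
```

Unlike video's framerate, there is no clock here; the state only changes on user input, which is what makes the reader's role in spheric stills more active.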
I use some examples from the PBS production My Brother's Keeper, which is also an example of how bandwidth never stops being an issue. The video takes advantage of YouTube's support for 8K and stereoscopic images, which at present is extremely hard to handle.
One of the characteristics of spheric footage is the extremely wide-angled lenses, and consequently an almost limitless depth of field. This differs from most current cinematography, where the optical effect of a shallow depth of field often seems to be a desired feature. I use an old example from Citizen Kane, Orson Welles' and Gregg Toland's brilliant film from 1941. The importance of the cinematography was emphasised by Welles sharing the credit with the photographer Toland. In this scene Toland was able to keep the whole set in focus, and the action is created by giving importance to all the elements, from back to front. I use this to discuss how we would compose a similar scene in spheric media. First of all we would not be able to zoom, and we would probably choose not to do the dolly shot, where the camera moves through the room, revealing the scene and the characters. Instead of revealing the scene along the line defined by the camera movement, we would spread the characters out, positioning them in various places in the panorama.
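Why the depth of field becomes almost limitless can be shown with the standard hyperfocal-distance formula, H = f²/(N·c) + f. This is a rough back-of-the-envelope sketch; the focal lengths, aperture and circle of confusion below are illustrative assumptions, not specifications of the cameras mentioned above.

```python
def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm: focused here, everything from half
    this distance to infinity is acceptably sharp."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Illustrative values: a short fisheye of the kind used in 360-cameras
# versus a classic "normal" cinema lens, both at f/2.
fisheye = hyperfocal_mm(2.0, 2.0)
cinema = hyperfocal_mm(50.0, 2.0)

print(f"fisheye: sharp from {fisheye / 2 / 1000:.3f} m")
print(f"cinema : sharp from {cinema / 2 / 1000:.1f} m")
```

Because the focal length enters squared, the short fisheye is sharp from a few centimetres onward, while the 50 mm lens at the same aperture is only fully sharp from roughly twenty metres, which is why selective focus of the Toland kind is practically unavailable in spheric footage.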
Maybe we can speak of a change from the traditional cinematic camera, looking into a scene, to a spheric camera, inviting the user to look out into the scene. It is possible to argue that this positions the viewer in a more active state, a change that will have implications for the use of video and images in education. Control of the field of view also introduces changes in narrative voice. Some argue that the user controlling the field of view implies a first-person narrative. I believe, however, that as long as the camera is not addressed by the actors, we are dealing with something closer to a second-person point of view: the viewer seldom becomes part of the story universe beyond the sometimes strong feeling of "being there".