Cinema scholars and various experts maintain that we are going through a “phase of convergence”. Philip Rosen, William J. Mitchell and Henry Jenkins1 refer to a convergence that concerns, first, cinematic creation and editing, but also the “inner life” of film archives and movie theaters. Digital tools not only modify the way a movie is created – the times and ways of shooting, editing and post-production – but also change how these phases are conceived: no longer as isolated steps, but as pieces that fit together into an integrated whole.
The flexibility and ease of integration of digital media have opened new possibilities in the cinematic workflow as well. Between 2000 and 2005, digital technology became established and extensively used in editing and special effects, whereas it was adopted in only about 40% of films for color correction.
Subsequently, digital color correction became markedly more common and the digital projection of films began to spread. In 2003 there were 1,000 D-screens (digital screens) worldwide; this number rose to 4,000 in 2006 and to 7,000 in 2008.
The total number of digital movie theaters then increased to about 25,000 over the following two years, between 2009 and 2010, partly thanks to James Cameron’s Avatar (2009).
This figure doubled to 50,000 units in 2011, about 18,000 of them in Europe (52% of screens) and about 20,000 in the US. The number of 3D-capable screens has also grown: roughly 30,000 movie theaters are now equipped with 3D apparatuses2.
Digitalization also affects the artistic heritage. Several digital portals have in fact been created: some dedicated to music or libraries, others to the digitalization of archives, to the relationship between the Internet and museums, and finally to so-called “geo-portals” or, more generally, sites dedicated to territorial development3. Peter Wollen’s research4 – followed by that of Edmond Couchot and Norbert Hillaire – was the first to highlight the total interaction among these elements in a “liaison de tous vers tous5” (a connection of all to all).
Digital archives constantly take on new functions: no longer seen as “warehouses” for preserving works of art, they become spaces dedicated to virtual exhibitions and direct access, privileged places connecting users and artworks. Bernard Deloche considers the current relationship between technology and documents the basis for establishing a musée virtuel6, which – as the name itself suggests – fosters the merging of the concepts of ‘archive’ and ‘museum’: a sort of “museum 2.07” where viewing mixes with navigating, contemplation with a denotative look, and effective investigation with an initial, perfunctory glance.
Recently, the application of digital tools has gone through a first, albeit limited, stage of development in film studies as well. The introduction of software tools is potentially one of the most fruitful ways to advance the analysis of audiovisual products: a kind of “Digital Way” for film studies that combines conventional visual analysis and interpretation of the film with a new form of filmological observation of the text, based on tools arising from new technologies and supporting automated processes of information retrieval and segmentation of the film.
According to Katherine Hayles, “virtuality is the cultural perception that material objects are interpenetrated by information patterns8”, and a kind of textual analysis based on statistical features (such as the type and duration of shots, camera movements, kinds of settings) now becomes a new way to interact with the cinematic text. It is now possible to decompose or factorize the film according to certain computational coordinates (or patterns), and to rework it virtually and analytically, shifting film analysis toward qualitative as well as quantitative aspects, extending statistical description to its interpretation, and facing the text as it actually is: an organic structure that makes it something unique, something that transcends the sum of its individual components.
In the early 2000s, Lino Miccichè9 highlighted the importance of developing film studies in order to understand and bring out the “complex structure” of the cinematic text (based on “literary-theatrical”, “musical-sound” and “artistic-visual” elements). Miccichè seems to have built a perfect bridge between new technologies and their place within film analysis, trying to turn the “surface spectator” into a “depth spectator”. Today, the need he identified seems to find a perfect ally in digital tools and in their capacity to facilitate the analysis of the filmic text.
Clearly, the digital component has not yet found a stable place within the processes of film studies; and the film itself, as a medium, shows signs of indefiniteness when analyzed through digital tools. Anne Friedberg highlights the current provisional state of film theory, now in a phase of transition. The film becomes a storage device, conforming its nature to that of its medium: not only the film strip, but also the DVD, the database or the digital server, as also indicated by Leonardo Quaresima and Valentina Re10. The spectator, like the scholar, can use the film archives and intervene directly on the medium and its content, e.g. by manipulating the order of sequences11.
The concepts proposed by Barry Salt12 regarding the measurement of the ASL, the average shot length, developed since the 1990s, have found concrete form in the work of Yuri Tsivian and in CineMetrics in particular. Some hypermedia analyses have focused especially on breaking the film down into its shots, on calculating the duration of individual shots and their average length, and on allowing a first level of graphic annotation of the frame to define the different elements of its composition. The database of films analysed with this tool contains about 6,000 titles, and many scholars of film history and film studies, such as David Bordwell, Charles O’Brien and Adriano Aprà13, have used it for further film analysis.
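The ASL itself is a simple statistic once the shot boundaries are known. A minimal sketch of the computation might look as follows; the function and variable names are illustrative (not CineMetrics’ actual interface), and shot durations are assumed to have been exported from a segmentation step:

```python
# Minimal sketch: cutting-rate statistics from a list of shot durations
# (in seconds). Shot boundaries are assumed to be already known; the
# names here are illustrative, not taken from CineMetrics.

from statistics import mean, median

def shot_statistics(durations):
    """Return basic cutting-rate statistics for a list of shot durations."""
    return {
        "shots": len(durations),                 # number of shots
        "total_length": sum(durations),          # running time covered
        "asl": mean(durations),                  # average shot length
        "msl": median(durations),                # median shot length,
                                                 # less sensitive to outliers
    }

# Example: five shots of varying length
stats = shot_statistics([4.0, 2.5, 8.0, 3.5, 2.0])
print(stats["asl"])  # 4.0
```

The median shot length is often reported alongside the ASL, since a few very long takes can inflate the mean.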
The Moscow research group Hyperkino has developed a method of “digital transcription of critical thinking” through a system of annotation and commentary on the film text. This system is used in particular to add a graphical interface to DVD editions, including comments, notes and descriptions of individual sequences. The activities of the Digital Formalism group, at the Department of Theatre, Film and Media Studies (TFM) of the University of Vienna, have concentrated on the works of Dziga Vertov and in particular on the film The Man with the Movie Camera (Chelovek s kino-apparatom, 1929). In this case too we witness the creation of a critical analysis intended to enrich the DVD edition of the film with a hypertext containing comments, selected images from the film, and notes.
In 2007, as part of AI*IA ’07 (the 10th Congress of the Italian Association for Artificial Intelligence), Francesco Mele, Antonio Calabrese and Roberta Marseglia proposed a study on the interactive analysis of time in film stories. The paper proposes a method for reconstructing the time axis of the fabula and the order of events in the plot, and also makes it possible to annotate the temporal order of events. Further studies on the relationship between video and analysis based on information retrieval for video and video libraries include a method for classifying films by genre, proposed in 2007 (as part of CAIP ’07, the 12th International Conference on Computer Analysis of Images and Patterns) by scholars of the Department of Electrical Engineering, National Tsing Hua University, Hsinchu (Taiwan). The method analyses “movie previews”, which often condense the themes of a film into a few images, allowing their classification (action, drama, thriller, etc.).
Moreover, in the field of hypertext, Universidad Carlos III of Madrid has developed a film analysis of some of Alfred Hitchcock’s works (in particular Vertigo) by means of concept maps (CMaps), i.e. graphical interfaces that show the segmentation of the film into its relevant sequences.
Moving within the framework of software designed to provide automated decomposition of the film: between 2006 and 2009, the Institut de Recherche et d’Innovation (IRI) of the Centre Georges Pompidou in Paris created Lignes de temps, a software that represents a first step in the segmentation of an audiovisual product through the application of algorithms. Despite its remaining limits, this software makes it possible to create multiple types of analysis simultaneously, dividing a film into its individual shots and grouping them according to their type; it also allows notes and remarks to be added, though only serially, alongside the text.
The software interface is modelled on editing software such as Avid or Final Cut. The very name Lignes de temps translates as “timelines”, echoing the timeline of editing software and pursuing the idea of “inverse editing”: instead of creating a film, the software aims to decompose it, segmenting the film on the basis of patterns concerning the kind of shots, the type of exposure, and the differences in lighting.
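The low-level patterns mentioned here (changes in exposure and lighting) are typically detected by comparing successive frames. A minimal sketch of shot-boundary detection by luminance-histogram difference is given below; the frames are represented as precomputed grey-level histograms, the threshold value is arbitrary, and the function names are hypothetical (real tools would extract histograms from video frames, e.g. with OpenCV):

```python
# Illustrative sketch of automatic shot segmentation by histogram
# difference, the kind of low-level pattern (lighting/exposure change)
# that segmentation tools build on. Frames are given as normalized
# grey-level histograms; names and threshold are illustrative only.

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(histograms, threshold=0.5):
    """Return the frame indices where a new shot is assumed to start."""
    cuts = []
    for i in range(1, len(histograms)):
        if histogram_distance(histograms[i - 1], histograms[i]) > threshold:
            cuts.append(i)
    return cuts

# Synthetic example: three "dark" frames followed by three "bright" ones
dark = [0.8, 0.2, 0.0, 0.0]
bright = [0.0, 0.0, 0.3, 0.7]
frames = [dark, dark, dark, bright, bright, bright]
print(detect_cuts(frames))  # [3]
```

Production systems refine this idea with adaptive thresholds and motion analysis to distinguish genuine cuts from fast camera movement, but the principle of inter-frame comparison is the same.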
Bernard Stiegler sees software such as Lignes de temps as a path that could open up a special kind of relationship with researchers and with spectators – with the amateurs14, in Stiegler’s term. But Stiegler also seeks a closer relationship with the film: its exploration ceases to be something extemporaneous and becomes a permanent, durable, collective act. This is the idea of “individuation collective15” (collective individuation), in the words of the director Vincent Puig, indicating the relationship between public and software: an exercise that starts from the individual, becomes a collective action involving the whole public, and finally returns to the individual in the form of new solutions and visual interpretations arising from the interaction between spectators (or amateurs).
What is missing in these studies is the integration of the individual research functions described so far into a single instrument: for example, combining information retrieval, the tagging of annotations, the spatial decomposition of the image and lighting retrieval in one tool. Second, a method for analysing the sound, capable of decomposing its individual components and of adding comments, is also lacking. Finally, there is as yet no system for sharing the results obtained: a system that allows the sharing of information and data about a movie. Such a system would turn the work of interpretation from a merely individual act into a collective one, open to the exchange of analyses. Individual film analyses would become available to other users through a dedicated social network, creating a community of “depth” viewers/scholars more involved in the development of film studies, in the sharing of results and in the discussion of different methods of analysis, thus following the theories of “convergence culture”.
Janet Harbord defines the relationship between cinema and new media as one of “innocent monsters16”, emphasizing its still undefined, not yet satisfactory form. We have to move beyond this lack of definition in search of a new kind of relationship between new media and film studies. In particular, this means moving from a quantitative use of computer tools (now mainly used to record and manage large quantities of material) to a qualitative use, able to classify the material automatically by factors and types. The interaction between software and film studies must also engage with the digital humanities17, renewing the methods of film analysis with the idea of organizing the collected data in cognitive maps and abstract models, such as graphs and trees18, interacting with traditional, manual kinds of film analysis.
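One way to picture such an abstract model is a tree of segmentation data, with the film as root, sequences as branches and shots as leaves. The sketch below is purely illustrative: the class names, fields and sample data are hypothetical, standing in for whatever schema an actual analysis tool would use:

```python
# Illustrative sketch: segmentation data organized as a tree
# (film -> sequences -> shots), one of the "abstract models" discussed
# in the text. Class names, fields and sample data are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Shot:
    start: float   # seconds from the beginning of the film
    end: float
    kind: str      # e.g. "close-up", "long shot"

@dataclass
class Sequence:
    title: str
    shots: list = field(default_factory=list)

@dataclass
class Film:
    title: str
    sequences: list = field(default_factory=list)

    def shot_count(self):
        """Total number of shots across all sequences (leaves of the tree)."""
        return sum(len(seq.shots) for seq in self.sequences)

film = Film("Example film", [
    Sequence("Opening", [Shot(0.0, 4.5, "long shot"), Shot(4.5, 6.0, "close-up")]),
    Sequence("Chase", [Shot(6.0, 9.0, "tracking shot")]),
])
print(film.shot_count())  # 3
```

A structure of this kind lets quantitative queries (counts, durations, distributions of shot types) and qualitative annotations coexist on the same object, which is the integration the text calls for.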
In this process of re-founding film studies, it is not easy to predict whether this kind of software will lead to a new ecdotica, i.e. a new, digital form of textual criticism. What is clear from these early experiments is the interaction between new technologies and analytical expertise: new technologies become functional to the scholar, and the scholar has the responsibility to influence the future development of this kind of software. Such expertise could shape the interface, the functions and the plug-ins in an ergonomic way, close to the needs of film studies. But it could also change the very methodologies of film studies, bringing them into closer interaction with the possibilities of digital tools and increasing attention to the complexity of a medium such as film, considered as a whole, from the stages of production to those of analysis.
1 Philip Rosen, Change Mummified: Cinema, Historicity, Theory, Minneapolis, University of Minnesota Press, 2001, p. 326; William J. Mitchell, The Reconfigured Eye: Visual Truth in the Post-Photographic Era, Cambridge (Ma.), MIT Press, 1992, p. 6; see also Henry Jenkins, Cultura convergente (2006), Milano, Apogeo, 2007.
2 The data given here come from the Digital Cinema Forum, distributed in Italy by the Society of Motion Picture and Television Engineers, and from the sites www.mediasalles.it, www.screendigest.com (of the company Screen Digest, particularly in the papers by Charlotte Jones) and www.mpaa.org of the Motion Picture Association of America.
3 Here, we hypothesize a process of “digitalizing everything that can be digitalized”; see Fabio Armerio, “La mutazione digitale: fotografia, cinema, video”, in Andrea Balzola, Anna Maria Monteverdi (eds.), Le arti multimediali digitali, Milano, Garzanti, 2004, pp. 176-177.
4 Peter Wollen, Raiding the Icebox: Reflections on Twentieth-Century Culture, London, Verso Books, 2008, p. 65.
5 Edmond Couchot, Norbert Hillaire, L’art numérique. Comment la technologie vient au monde de l’art, Paris, Éditions Flammarion, 2003, p. 63.
6 See Bernard Deloche, Le Musée virtuel. Vers une éthique des nouvelles images, Paris, Presses Universitaires de France, 2001.
7 See Gaëlle Crenn, Geneviève Vidal, “Musée 2.0”, Culture & Recherche, “Dossier: Numérisation du patrimoine culturel”, 118-119, 2008/2009, p. 39.
8 N. Katherine Hayles, “The Condition of Virtuality”, in Nancy Vickers, Peter Stallybrass, Jeffrey Masten (eds.), Language Machines: Technologies of Literary and Cultural Production, New York, Routledge, 1997, p. 184.
9 Lino Miccichè, Filmologia e Filologia, Venezia, Marsilio, 2002.
10 Leonardo Quaresima, Valentina Re (eds.), Play the Movie. Il DVD e le nuove forme dell’esperienza audiovisiva, Torino, Kaplan, 2010.
11 See Anne Friedberg, “The End of Cinema: Multimedia and Technological Change”, in Christine Gledhill, Linda Williams (eds.), Reinventing Film Studies, London, Hodder Arnold, 2000, p. 440.
12 See Barry Salt, Film Style and Technology: History and Analysis, London, Starword, 1992 and Barry Salt, Moving Into Pictures: More on Film History, Style and Analysis, London, Starword, 2006.
14 Bernard Stiegler, “Institut de Recherche et de l’Innovation : vers une renaissance de l’amateur ?”, Coursives, 53, 2006, p. 4.
15 Vincent Puig, “Les amateurs du XXI siècle”, Culture et Recherche, cit., p. 37.
16 See Janet Harbord, The Evolution of Film: Rethinking Film Studies, Cambridge, Polity Press, 2007.
17 See Susan Schreibman, Ray Siemens, John Unsworth (eds.), A Companion to Digital Humanities, London, Wiley-Blackwell, 2008, and David M. Berry (ed.), Understanding Digital Humanities, London, Palgrave Macmillan, 2012.
18 See Franco Moretti, Graphs, Maps, Trees: Abstract Models for a Literary History, London, Verso, 2007, and Lev Manovich, “How to Compare One Million Images?”, in David M. Berry (eds.), Understanding Digital Humanities, cit., pp. 249-278.