In the digital world, audiovisual data is everywhere. From photographs to videos, games to mixed reality – we are truly living in an audiovisual age. New media technologies such as Augmented or Virtual Reality or deepfakes are in a state of permanent becoming. However, such innovative hybrid formats continuously enter the marketplace of ideas long before the old ones are fully understood. Hence, there is a strong necessity to explore current digital trends and critically investigate the impact of new media content on society and universities in the light of – what I define here as an umbrella term – audiovisual data.
In the digital sphere, all items, fragments or traces are translated into binary data. However, this does not imply that their content and meaning are binary; rather, they are subject to a complex process of informational contextualization, interpretation and socio-cultural imprinting. Data may consist of individual letters (typography), words (semantics), pictures (photography), images (illustrations, paintings), music or moving images (video, GIFs, Virtual Reality etc.). In this sense, data in the digital world is perceived by recipients almost exclusively via two modalities (senses) as well as their interaction:
1) audio (e.g. spoken words, music …) by hearing.
2) visual (e.g. written words, pictures, images, diagrams, sketches …) by seeing.
3) audiovisual (e.g. videos, VR, AR, but also musical elements during the reading of a blog article) by hearing and seeing at the same time – a multimodal approach to perception.
Our human audiovisual perception is determined by a strong and powerful interdependence of the senses (modalities). In an anthropological context, one of the most impressive phenomena of the audiovisual is the tight and unbreakable connection between what we see and what we hear at the same time. This phenomenon has been a matter of inquiry across various interdisciplinary fields in academia for decades, especially in combination with moving images. Beyond academia, experts from a number of fields, which I subsume under media design, have creatively investigated the interrelation between image and sound for narration, storytelling or guiding the audience's emotions about a subject, object or issue (and for propaganda).
For example, in each and every professional film production, different images and sounds are recombined that are not connected at all in the real world. But due to the way natural audiovisual perception is wired in our brains and bodies, the recipient cannot separate the information, as our perception of audiovisual moving images is determined by a »forging of an immediate and necessary relationship between something one sees and something one hears at the same time«. The film theory scholar Michel Chion (1994) defines this resulting effect of the medium-specific characteristic as synchresis (from synchronism and synthesis). Hence, synchresis is not only a theory-based concept; it is applied in every production of audiovisual moving images and refers to the special characteristics (affordances) of moving images such as videos, movies or films.
This exemplary description only outlines the potential and the challenges that audiovisual data offers. However, audiovisual data consists of a multitude of different products, such as science videos, news pictures, deepfake videos, 360° Virtual Reality and so on.
Investigation into audiovisual data benefits from a multidisciplinary research approach, merging theory, practice and expertise from a multitude of academic and practice-based knowledge fields. My use of the term multidisciplinary research approach is inspired by the work culture of the MIT Media Lab, which tries to overcome disciplinary thinking. This way of working exceeds "traditional" research design, as most research studies are defined as either "basic science" or "applied science".
Hence, although the foundation of the research is strongly rooted in media, film and communication studies, it draws on multiple disciplines and is designed with a mixed-method approach.
 see: Kevin Kelly: The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, Penguin (2017).
 Jean-Luc Godard (1960).
 see e.g. the highly cited "McGurk effect", which describes the crossmodal perception of videos: Harry McGurk and John MacDonald: »Hearing lips and seeing voices«, in: Nature 264.5588 (1976), 746–748.
 Michel Chion: Audio-Vision (1994), 224. "Sound has an influence on perception: through the phenomenon of added value, it interprets the meaning of the image, and makes us see in the image what we would not otherwise see, or would see differently." ibid., 34.
 I will not make any distinction between the different materialities of moving images but will use the term "video" (lat. video = I see) to include all formats of moving images in the digital world.