This is the fifth installment in a series of articles/posts on presentation technology and culture. The previous installments deal with Preliminaries, infrastructure and HUMlab, Performing scholarship, Angled screens and Curating events. This post contains some notes on software and hardware.
In the default classroom setting with a screen and a computer (and a laptop connection) there may not be much to consider in terms of software and hardware. The software is typically slideware (PowerPoint, Keynote etc.), Acrobat or a web browser, with slideware as the standard platform. The screen is likely to be white, widescreen and connected to a projector (if relatively new, the resolution may be full HD, but very rarely higher than that). Expectations and templates are pervasive: it is very unlikely that someone would come to a traditional conference setting expecting to run content on two screens using a specific type of software. The templates are strong and rarely challenged. At the same time, the world is changing.
2. Presentation software and contemporary/emerging presentation environments
The software we primarily use to make and enact presentations has clearly not kept up with the development of presentation environments. It is true that most presentation environments are still very traditional (classrooms etc.) and do not change often or easily, but the proliferation of screen and interaction technology is changing the game. Venues increasingly have more than one fixed screen, and most of us carry one or several screens with us (phones, tablets etc.). This development can also be related to the growth of alternative physical-digital environments (open creative spaces, flexible learning spaces etc.) in academia, at least in some parts of the world.
Most presentation software – including slideware – was designed for a single screen, and its logic does not easily (or at all) extend to multiple screens. There are ways of extending a series of slides across several screens, and slides/content can be duplicated across many screens, but that is fundamentally different from engaging with storytelling across several screens beyond the serial slide model.
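One way to think beyond the serial slide model is to script content per screen rather than per slide. A minimal sketch of the idea (the scene structure, screen names and file names here are hypothetical illustrations, not an existing tool):

```python
# A serial deck is just a list of slides; a multi-screen "scene" script
# instead maps each step to content per named screen. Content can persist
# on one screen while another advances -- something the serial slide model
# cannot express.
scenes = [
    {"front": "title.png",    "left": "context-map.png", "right": "quote.txt"},
    {"front": "argument.png", "left": "context-map.png", "right": "data-plot.png"},
]

def track(scenes, screen):
    """Return the sequence of content shown on one screen across the script."""
    return [step.get(screen) for step in scenes]

print(track(scenes, "left"))   # the left screen holds its context map throughout
print(track(scenes, "front"))  # while the front screen advances
```

The point of the sketch is that the unit of composition becomes the scene (a state of the whole room) rather than the slide (a state of one screen).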
There is commercial software, such as WatchOut, which works in complex display environments. However, this type of software was essentially built around a time-based and pre-processed model – much like video editing software or professional broadcasting systems – which works well for video-based content, but often requires time-consuming work processes and does not enable easy on-the-fly editing/production of content (which is one of PowerPoint's key strengths).
3. The role of resolution
Another important factor is display resolution. Although the resolution of large displays has not increased as much as that of small screens, there is a drive towards high-resolution screens, which also changes the basic conditions for storytelling. Slideware was built for low-resolution content in several ways.
First, anyone who has tried to incorporate several very high-resolution images in a PowerPoint presentation will know that the software does not scale well. It becomes very sluggish (even on a fast computer with plenty of resources, a good video card etc.). The software was simply designed for relatively low-resolution screens, and with "normal" presentation displays it does not make sense to have high-resolution content. Nowadays, high-resolution content in this sense means higher than full HD – maybe as high as 4K and beyond. In my experience, problems arise if you try to keep images at high resolution internally (in PowerPoint) and also if you try to work with high-resolution canvases (i.e. having a great deal of high-res content on one slide). It is also interesting to note that it may not be possible to easily run high-resolution content even in display environments designed to manage 4K resolutions. This has happened to me a couple of times, and the gist (I think) is that such environments do not really expect very high-resolution content (this is not true of high-end visualization environments in general, but sometimes the challenge is the coming together of everyday presentation tools and high-end displays).
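A practical workaround is to downscale images to the target display resolution before placing them on a slide, since the display cannot show the extra pixels anyway. A minimal sketch of the arithmetic, assuming a full-HD (1920×1080) target (the function name is my own):

```python
def fit_within(img_w, img_h, max_w=1920, max_h=1080):
    """Scale an image down to fit inside max_w x max_h, preserving aspect ratio."""
    scale = min(max_w / img_w, max_h / img_h, 1.0)  # the 1.0 cap means: never upscale
    return round(img_w * scale), round(img_h * scale)

# A 24-megapixel photo carries no extra visible detail on a full-HD projector:
print(fit_within(6000, 4000))  # -> (1620, 1080)
```

Resizing images to roughly these dimensions before importing them keeps file sizes and rendering load closer to what the software was designed for.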
Second, the basic logic of slideware does not give the presenter or audience any means of engaging with high-resolution content beyond simply displaying the image. Alternative ways of engaging with content – for instance through zooming – make much more sense with high-resolution displays and content, but they are not supported by slideware (although there may be some simplistic zooming function available). While a tool such as Prezi is built around zooming for navigation, it is also built around a serial (almost slide-like) model.
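The viewport math behind zoom-based navigation is simple; what matters narratively is that it presumes content with enough resolution to reward zooming in. A minimal sketch (the function name and the (x, y, w, h) viewport convention are my own):

```python
def zoom_viewport(view, focus, factor):
    """Zoom a viewport (x, y, w, h) about a focal point (fx, fy) by `factor`.

    factor > 1 zooms in (the viewport shrinks around the focus);
    factor < 1 zooms out.
    """
    x, y, w, h = view
    fx, fy = focus
    return (fx - (fx - x) / factor, fy - (fy - y) / factor, w / factor, h / factor)

# Zooming in 2x on the centre of a unit canvas:
print(zoom_viewport((0.0, 0.0, 1.0, 1.0), (0.5, 0.5), 2.0))  # -> (0.25, 0.25, 0.5, 0.5)
```

Applying the inverse factor about the same focus returns to the original view, which is what makes zooming usable as a reversible narrative device rather than a one-way sequence.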
In some less traditional presentation environments – including visualization labs and centers – it can be more natural to walk up to the screen, which is another way of engaging with detailed content (also through touch and sensor devices), but this is rare, and here too high-resolution content can cause performance problems. Such environments tend to run scientific or custom-made software, which is useful for some things and is likely to support zooming as a narrative strategy (a key device in planetariums, for instance), but often does not support "easily" produced scholarly narratives across a range of uses. There is a tension between generic and specific applicability here, and also between conventional and high-end environments.
A wider range of available screens and resolutions threatens the pervasiveness of a tool such as PowerPoint, which has essentially created a standardization of presentation situations. This standardization has been a key strength of the tool. It is rare (especially these days) that a PowerPoint file will not work in a presentation venue, and often PowerPoint is what is expected. Furthermore, what was produced in PowerPoint on a local computer deploys to almost any presentation system in such a way that the presentation view looks like what was shown on the local computer (in presentation mode).
The spread of widescreen displays from about 2003 challenged this uniformity, as there were (and still are) two main aspect ratios for presentations. A 4:3 presentation will work on a 16:9 or 16:10 screen, but not seamlessly (there will be black bars on the sides of the screen). Currently, however, 4:3 screens have become very rare, and in a way format standardization has been achieved again. However, as noted above, changing information ecologies and screen resolutions challenge the standard presentation. This challenge is more serious, as it seems to contradict some of the basic logic of the software. In this sense, what we are seeing may be described as more of a paradigmatic challenge.
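The black bars are easy to quantify. A minimal sketch computing the side-bar (pillarbox) width when 4:3 content is shown full-height on a wider screen (integer pixel arithmetic; the function name is my own):

```python
def pillarbox_bar(screen_w, screen_h, ratio_w=4, ratio_h=3):
    """Width in pixels of each black side bar when content of the given
    aspect ratio is shown full-height on the screen."""
    content_w = screen_h * ratio_w // ratio_h  # pixel width of the content area
    return max((screen_w - content_w) // 2, 0)

print(pillarbox_bar(1920, 1080))         # 4:3 slide on a full-HD screen -> 240 px per side
print(pillarbox_bar(1920, 1080, 16, 9))  # native 16:9 content -> 0
```

On a full-HD screen, a 4:3 deck thus surrenders a quarter of the horizontal pixels (2 × 240 of 1920) to the bars – a concrete measure of the mismatch.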
5. Sketching tools and interpretative presentations
The article series emphasizes "presentations" as a way of enacting and co-creating knowledge, research and learning. The production of presentations (whether by a single person or a group) is a knowledge process tightly integrated with scholarship and the ideas being presented/enacted, and this process is affected/enabled/conditioned by the tool or tools used. I recently attended a workshop where one of the presenters could not show slides (because of time constraints). He kept making reference to the slides and flipped through them on his own laptop to keep the presentation going. This simple example shows how the slide is part of how we think about and articulate our work. It also demonstrates how presentation tools help the presenter (sometimes more than the audience). Software and hardware do not determine how we make or present scholarship, but they are part of knowledge processes and shape them to some extent. This means that we need to pay critical and creative attention to the exact material/narrative/conceptual details of the technological systems used to present our work.
The display system (Screen Scape Media System) we built in HUMlab to manage alternative infrastructure and presentation concepts is an example of such work (I would like to think). The example below comes from Nick Sousanis (who gave a great presentation at our 2014 Genres of Scholarly Knowledge Production conference).
Scholarly work, digitally available materials and tools are becoming increasingly integrated, and it would seem that we are moving from an "access model" to tools and platforms that are processual and interpretative – an integral part of the scholarly and/or educational process. A current example is the "magic wall" used by CNN (and John King) in the ongoing election process in the US. I took this shot a few weeks ago. It is striking how Mr. King interacts with the magic wall and that it is not just a tool for presentation. He uses it when he is not on camera, and he interacts with it all the time when on camera.
There is a significant difference between a presentation that is ready-made and static and a presentation that is dynamically connected to data and research questions. In particular I would like to stress how tools can enact critical perspectives and allow experimental work. There can be a tension here between generic, ready-made off-the-shelf tools (such as PowerPoint) and tools that enable particular modes of engagement and critical readings.
Svensson, Patrik. 2016. "Presentation|Tech (V): Software and hardware." Published March 28, 2016. http://patriksv.net/2016/03/presentationtech-v/.