The July 21 post by Robert Befus citing the “1986 UM/3M Study” and his July 14 post regarding the “1981 3M/Wharton Study” clearly point up the dearth of research on the impact of visuals in the persuasion process. Those studies are decades old and were done well before computer-based presentations became commonplace. Even the 1996 update of the UM/3M study is antique by today’s standards. Yet these remain about the best empirical research of this type that we have.
But maybe it is not the volume, timeliness or even the quality of research that is lacking. Maybe what the industry needs is a new approach to understanding the power and impact of media in presentations. Maybe what we need is a more enlightened and contemporary methodology, a fresh look at what we need to know.
As the situation stands, presentation pros must use their instincts and training to determine when the goulash of media elements in a presentation works and when it doesn’t. Often, the presentation pro never sees the final presentation. Typically, only feeble attempts are made to collect information after the fact about audience reaction or effectiveness, with little attention given to media specifics. In most cases it is impossible to determine which individual elements—visuals, sounds, graphics, message, environment, presenter skills—were effective and which were not, much less how the elements worked as an ensemble to influence the audience.
In our search for understanding the impact of media in presentations, are we looking in the right cupboards?
During the late 80s and early 90s, presentation product vendors and presentation creators were desperate for scientific validation of the effectiveness of their products and services. Overhead projectors and 35mm slides were the dominant forms of visual presentation. Both were expensive and difficult to create or update, so a strong rationale had to be given before they could be cost-justified. All-digital electronic presentations were even more difficult to rationalize. Presentation software, laptop computers and projectors were just coming on the scene. Presenting from a computer, particularly with any kind of dynamic media, was not only expensive but also considered foolhardy, like a triple somersault on the trapeze without a net.
The research scavenger hunt set out to prove what everyone in the presentation business already knew, namely that well-designed and well-applied audiovisuals assist and enhance persuasion. For all its 232 pages of detailed research, the results of the UM/3M study were for most presentation pros a case of “Yeah, duh.”
I first became aware of the UM/3M research in 1989 while serving as editor of Presentation Business News and then Presentations magazine. At the time, 3M was the big dog in presentation. To make the case for its products—its top sellers were overhead projectors, flip charts and supplies—it founded what was then called the 3M Meeting Institute. The 3M Meeting Institute served as a semi-independent clearing house for information about visuals and presentations. It funded some research, collected what else could be found, and everybody used the research to sell their stuff. Later, as overhead projectors fell out of vogue and 3M was unable to compete effectively in the video projector business, the company lost interest in funding the institute, eliminating one of the few sources of research dollars.
Since then, it appears no similar efforts to gather or fund research have occurred on any significant scale. As Mr. Befus points out, there is still precious little out there, and no additional studies of sufficient scientific depth seem to have been done on the use of dynamic media in presentation. The Visual Being Presentation Facts section is a good start in that direction.
The justification of presentation visuals is hardly a pressing need anymore. The debate has shifted from “Are visuals effective?” to “When are they too much?” (see Kathy Sierra, Edward Tufte, Cliff Atkinson, et al.). The studies we need today are about what to use, when and why. And what not to use, when and why.
A lot of information is out there in the form of media arts training. Blogs like this one overflow with theory, practice, tips and tricks. We have some metrics, but are we measuring the right things? What is needed is a different kind of well-funded, professional research. What is missing are studies and/or methods for looking at how the various media elements work together to influence audience reaction and the effectiveness of persuasion. What we need is an integrated approach to understanding what happens when all of the elements in a presentation combine.
Clues to such methodologies already exist in the test screenings that the film business conducts before the release of some movies. Test audiences sit in a screening room with a response controller that they use to rate their level of interest during the film. The information is graphed by computer and used to determine what scenes work, what scenes don’t and if the flow and pacing of the film are as they should be. Different scenes, sounds and music can be tested in different combinations in search of just the right mix. It is not uncommon for films to be heavily re-edited or even new scenes added after a test screening.
Obviously, it would be impractical to hire a test audience to sit through every presentation before it is delivered. But why not use similar methods in a research setting to test variations in the use of media and other elements that influence a presentation? For example, a standardized presentation could be tested with and without a video clip, or with and without bullet slides. That sort of research could yield useful information about the overall dynamics of the presentation process, which is something we don’t have now.
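The analysis behind such a study would not need to be elaborate. Here is a minimal sketch, assuming hypothetical persuasion ratings (on a 1–10 scale) collected from two groups shown the same standardized presentation, one version with the video clip and one without; the data and group sizes are invented for illustration:

```python
import statistics

# Hypothetical post-presentation persuasion ratings, 1-10 scale
with_video = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]
without_video = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]

mean_a = statistics.mean(with_video)     # 7.3
mean_b = statistics.mean(without_video)  # 5.9

# Cohen's d effect size, using a simple pooled standard deviation,
# gives a scale-free sense of how much the video clip mattered
sd_a = statistics.stdev(with_video)
sd_b = statistics.stdev(without_video)
pooled_sd = ((sd_a ** 2 + sd_b ** 2) / 2) ** 0.5
cohens_d = (mean_a - mean_b) / pooled_sd

print(f"with video: {mean_a:.1f}, without: {mean_b:.1f}, d = {cohens_d:.2f}")
```

With enough audiences and enough media variations, the same comparison could be repeated element by element: video versus no video, bullets versus no bullets, and so on.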
Or, forget the laboratory, why not give response systems to real audiences and have them “express” their level of interest moment by moment, then correlate that information to the media usage and the content? Not only would this information be valuable for the presenter and presentation producer, it would encourage a deeper level of audience involvement. If enough of these types of presentations were studied, we might begin to see patterns that could be extrapolated into guidelines, processes and procedures for more effective media-enabled presentation.
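The correlation step described above is straightforward in principle. A minimal sketch, assuming a hypothetical interest time series sampled from audience response controllers alongside a 0/1 flag for whether a video clip was on screen at each moment (all numbers invented for illustration):

```python
# Hypothetical moment-by-moment audience interest (one sample per interval)
# and a 0/1 flag marking when a video clip was on screen
interest = [4, 5, 5, 8, 9, 8, 5, 4, 7, 8]
video_on = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]

n = len(interest)
mean_i = sum(interest) / n
mean_v = sum(video_on) / n

# Pearson correlation computed from first principles:
# covariance of the two series over the product of their spreads
cov = sum((i - mean_i) * (v - mean_v) for i, v in zip(interest, video_on))
var_i = sum((i - mean_i) ** 2 for i in interest)
var_v = sum((v - mean_v) ** 2 for v in video_on)
r = cov / (var_i * var_v) ** 0.5

print(f"correlation between video segments and interest: r = {r:.2f}")
```

A strongly positive r would suggest interest rose whenever the clip was running; repeated across many real presentations, such correlations are exactly the kind of pattern that could be extrapolated into guidelines.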
The key to such research would be providing insight into the total process, not just one piece of it. To do that would require creative new approaches to the study of presentation and the persuasion process. This research should NOT be done by any one company with an axe to grind or a product to sell. It requires a broadly supported industry effort. No offense, 3M, but this time it needs someone other than you as a backer.