‘INTRODUCTION TO SUBTITLING’ BY JORGE DÍAZ CINTAS AND ALINE REMAEL

Subtitling may be defined as a translation practice that consists of presenting a written text, generally on the lower part of the screen, that endeavours to recount the original dialogue of the speakers, as well as the discursive elements that appear in the image and the information contained in the soundtrack.

All subtitled programmes are made up of three main components: the spoken word, the image and the subtitles. The interaction of these three components, together with the viewer’s ability to read both the images and the written text at a particular speed, and the actual size of the screen, determines the basic characteristics of the audiovisual medium. Subtitles must appear in synchrony with the image and dialogue, provide a semantically adequate account of the source language dialogue, and remain on screen long enough for viewers to be able to read them.

2. TRANSLATION OR ADAPTATION? AUDIOVISUAL TRANSLATION (AVT)

Audiovisual programmes use two codes, image and sound, and films represent and actualize a particular reality based on specific images that have been put together by a director. Subtitling, dubbing and voice-over are constrained by the need to respect synchrony with the translational parameters of image and sound (subtitles should not contradict what the characters are doing on screen) and of time (i.e. the delivery of the translated message should coincide with that of the original speech). In addition, subtitles entail a change of mode from oral to written and frequently resort to the omission of lexical items from the original. As far as space is concerned, the dimensions of the screen are finite and the target text has to accommodate itself to the width of the screen.
Although the figures vary, this means that a subtitle will have some 32 to 41 characters per line, over a maximum of two lines. This is why subtitling is often considered a type of adaptation.

Jakobson (1959) is often cited as one of the first academics to open up the field. He established three types of translation: intralingual (or rewording), interlingual (or translation proper) and intersemiotic (or transmutation). One of the first significant advances came from Reiss (1977 and 1981). She identifies three types of text (informative, expressive and operative) that encompass the different language functions. Reiss points out the special attention deserved by written texts that co-exist with other sign systems, with which they must maintain a constant link. She creates an additional hyper-text type, which she calls the ‘audio-medial’ text type, and defines it as a superstructure that takes into account the special characteristics of spoken language and oral communication. This category sits above the three basic communicative situations and their corresponding text types.

During the 1980s and early 1990s the term ‘audiovisual translation’, abbreviated to AVT, appeared in academic circles. Despite its popularity, however, it is not the only term in use: some scholars prefer labels such as ‘film translation’ or ‘cinema translation’, but these do not take into account other types of programme (sitcoms, documentaries, cartoons, etc.) and therefore become somewhat restricting. Other umbrella terms are also used. ‘Screen translation’ considers all products distributed on a screen, whether a television, cinema or computer screen. ‘Multimedia translation’ refers to those products whose message is broadcast through multiple media and channels; this term establishes a stronger link with the localization of software and the translation of programmes distributed on the Internet, as does ‘multidimensional translation’.
New and innovative professional activities are making a place for themselves within AVT, such as subtitling for the deaf and the hard-of-hearing (SDH) and audio description for the blind and the partially sighted (AD). Finally, computer games and interactive software programmes are taking subtitling to the border between AVT and localization, since these games are both subtitled and adapted to the cultural sensibilities of the target gamers.

3. CLASSIFICATION OF SUBTITLES

Intralingual subtitles comprise subtitles for the deaf and the hard-of-hearing (SDH), subtitles for language learning purposes, subtitles for the karaoke effect, subtitles for dialects of the same language, and subtitles for notices and announcements. Interlingual subtitles can be aimed at hearers or at the deaf and the hard-of-hearing (SDH). The last category is bilingual subtitles.

INTRALINGUAL SUBTITLING

Intralingual subtitling involves a shift from oral to written but always stays within the same language. The oral content of the actors’ dialogues is converted into written speech, presented in subtitles of up to three, or occasionally four, lines. On television these subtitles generally change colour depending on the person who is talking or on the emphasis given to certain words within the same subtitle. Besides the dialogues, they also incorporate all the paralinguistic information that contributes to the development of the plot or to the creation of atmosphere and that a deaf person cannot access from the soundtrack, for example a telephone ringing. As far as television broadcasting is concerned, the volume of SDH has undergone spectacular growth in recent years.

A second group of intralingual subtitles comprises those specifically devised as a didactic tool for the teaching and learning of foreign languages. Watching and listening to films and programmes subtitled from other languages helps us not only to develop and expand our linguistic skills, but also to contextualize the language and culture of other countries.
A third type of intralingual subtitling that is gaining tremendous popularity nowadays is known as karaoke. It is generally used with songs or movie musicals so that the audience can sing along at the same time as the characters on screen.

Another example of intralingual subtitling is the use of subtitles in movies and programmes for the dialogues of people whose accents are difficult to understand for audiences who share the same language. One example is the British film Trainspotting, in which the actors speak English with such a strong Scottish accent that the movie was distributed in the United States with English subtitles.

The fifth and last category of intralingual subtitling can be seen on monitors in underground stations and other public areas, where subtitles are used for advertising as well as for broadcasting the latest news. The use of written text on screen allows the information to be transmitted without sound, so as not to disturb the public.

INTERLINGUAL SUBTITLING

Interlingual subtitling implies the translation from a source into a target language. Historically, in countries with a strong tradition of dubbing, such as Spain, Germany, Austria, France or Italy, the deaf could only watch programmes that had been originally produced in Spanish, German, French or Italian and later also subtitled intralingually into these languages. Given that the translating custom of these five countries favours the dubbing of the vast majority of programmes imported from other countries, it has been difficult for the deaf and hard-of-hearing to access the information contained in these programmes, and they have had to content themselves with the few foreign ones broadcast with subtitles. In other countries, with a stronger subtitling tradition, the situation has been different. As a general norm, a full two-line subtitle is kept on screen for a maximum of around six seconds, and from this main rule we can then calculate the amount of text we can write in shorter subtitles.
As far as line length is concerned, cinemas may use up to a maximum of 40 or 41 characters per line (43 at some film festivals), since it is an accepted norm in the profession that the viewer is able to read subtitles more easily and quickly on a cinema screen than on a television screen.

4. SURTITLES

Surtitles were developed by the Canadian Opera Company in Toronto, and the first production in the world to be presented with surtitles was the staging of Elektra in January 1983. They are the translation of the words being sung, when the opera is sung in another language, and can be considered the equivalent of subtitles in the cinema. Surtitles tend to follow most of the conventions applied in subtitling. They are shown on an LED display, normally placed above the stage, and either scroll from right to left or are presented stationary in titles of two or three lines, which seems to be less distracting for the audience. Of late, many theatres have also installed several smaller monitors throughout the theatre, placed at the back of each seat in the auditorium. Known as seat-back title screens, they allow for titles to be provided in more than one language. Given that we are dealing with live performances, spotting is usually one of the main issues: it is normally done by a technician, so that the surtitles can follow the delivery of the original as closely as possible.

5. INTERTITLES

Intertitles are at the origin of subtitles and can be considered their oldest relatives; the first experiments with intertitles took place in the early twentieth century. They are also known as ‘title cards’ and can be defined as pieces of filmed and printed text that appear between scenes. They were a mainstay of silent films and consisted of short white sentences written against a dark background. Their main functions were to convey dialogue and descriptive narrative material related to the images.
The arrival of the soundtrack largely eliminated their usefulness and, although they are no longer necessary, some directors still use them as an artistic device; when they appear in contemporary films they tend to be called inserts.

6. FANSUBS

Computer subtitling programs have become much more affordable and accessible, and many of them are available free on the net. These programs, known by those with an interest in the subject as subbing programs, have facilitated the rise and consolidation of translation practices like fansubbing. The origins of fansubbing go back to the 1980s, when it emerged as an attempt to popularize the Japanese cartoons known as manga and anime. American and European fans wanted to watch their favourite programmes but faced two main problems: on the one hand the linguistic barrier, and on the other the scant distribution of these series in their respective countries. The alternative was to subtitle the programmes themselves. Despite the questionable legality of this activity as far as the copyright of the programmes is concerned, the idea behind this type of subtitling is the free distribution over the Internet of audiovisual programmes with subtitles done by fans. Some of its defining features are the use of colours to identify speakers, the incorporation of explicative glosses and metalinguistic notes in the subtitles themselves or at the top of the screen, and the use of cumulative subtitles.