Eat for Energy (2022), by American writer Ari Whitten and Alex Leaf, M.S., is an innovative popular science book that takes a scientific approach to chronic fatigue and offers nutritional strategies, supported by numerous images and evidence-based recommendations, to help readers better understand the subject. The author chose to translate this book in order to make knowledge about chronic fatigue accessible to Chinese readers and to promote the learning and spread of health knowledge. Because the book presents its content through both text and a large number of images, which differs significantly from text in the traditional sense, it is difficult to reproduce the meaning of the source text (ST) without analyzing the relations between the images and the text. This poses a new challenge for the translation practice.

In light of the growing use of image-text combinations in popular science books in the new media age, this study adopts Sara Dicerto's (2018) model of multimodal source text analysis as its theoretical framework, examines the transcription and analysis of multimodal image-text material in specific examples, and then discusses the model's guiding role and specific operational steps in the translation process. Dicerto's static multimodal text model comprises three analytical dimensions: the analysis of multimodal pragmatic meaning, the interaction between the modes, and the semantic representation of individual modes. Among them, the interaction between the modes, namely COSMOROE (cross-media interaction relations) and logico-semantic relations, helps reveal the important role that image-text relations play in the comprehension of a text. Because the model provides a solid foundation for ST analysis, the author chose it to guide her practice of translating this popular science book.

It is found that the images in Eat for Energy (2022) exhibit different COSMOROE and logico-semantic relations. There are three basic types of images: statistical charts, condition maps, and illustrations. The corresponding image-text relations are as follows: for condition maps, the COSMOROE relation is complementarity-exophora and the logico-semantic relation is expansion-enhancement; for some illustrations, the COSMOROE relation is token-token and the logico-semantic relation is expansion-elaboration. Furthermore, the author has also identified some image-text relations that are not covered in Dicerto's model and has named them the echoing relation and the synchronization relation.

In view of the different types of image-text relations derived from this inductive summary, the author adopted different translation solutions in the translation process. It is found that annotation is a good choice where the complementarity-exophora and expansion-enhancement relations hold between condition maps and text; literal translation is a fine option where the token-token and expansion-elaboration relations hold between some illustrations and text; and addition, free translation, and omission are good choices where the echoing and synchronization relations exist. In short, when translating multimodal popular science texts, translators must combine the content with the subject matter and adopt flexible translation solutions that focus on the relations between the images and the text, so as to better present the meaning of the translated text and realize the purpose of comprehension and popularization.