Study on Interactional Phenomena in Interpreter-Mediated Remote Teaching Setting

Abstract

Interactional phenomena in interpreter-mediated teaching settings have long been a topic of interest in interpreting research. With the development of online communication technology, remote interpreting has also grown rapidly. Based on an English-Chinese consecutive interpreting practice in the course Development Economics, and combining Goffman’s Participation Framework Theory with Norris’ Multimodal Analysis Framework Theory, this paper proposes a Multi-modal Interpreter-Mediated Interaction (MIMI) Framework. Under the MIMI framework, within the tripartite interaction of “teacher”, “interpreter” and “student”, this paper analyzes the interactional phenomena that interpreters face in remote consecutive interpreting for teaching, namely lack of presence, lack of interactive management, and lack of multi-modal interpreter-mediated interaction management, from the two aspects of verbal resources and embodied resources. To deepen the understanding of interaction in the remote class, the paper further analyzes three specific interpreting problems, namely managing the computer interface, managing the opening, and managing turn-taking, and then puts forward the corresponding interpreting strategies: applying a minimal computer interface, improving self-monitoring awareness, and initiating self-selected turn-taking. Through these three strategies, the paper aims to help interpreters complete remote class interpreting tasks more efficiently.

Share and Cite:

Liu, H. and Liu, J. (2023) Study on Interactional Phenomena in Interpreter-Mediated Remote Teaching Setting. Open Journal of Modern Linguistics, 13, 749-769. doi: 10.4236/ojml.2023.135044.

1. Introduction

Because it overcomes geographical barriers and reduces time and cost, video conferencing first appeared in the early 1990s and developed rapidly in the following decades. It was then applied to interpreting, giving rise to Remote Interpreting (RI). According to Braun (2015: p. 346), the development of RI was originally driven by supranational multilingual institutions, and this mode of interpreting has since been applied to many other fields such as legal and healthcare settings.

Affected by the Covid-19 pandemic, some educational programs run for Chinese students by overseas universities, such as Nueva Ecija University of Science and Technology (NEUST) in the Philippines, switched to online teaching through the conferencing software Zoom. Before Covid-19, the professors of these programs lectured in English for Chinese students who attended classes at the university with the help of on-site consecutive interpreters. It is worth noting that the Philippines is the third-largest English-speaking country in the world and English is an official language there; even so, a language assistant or class consecutive interpreter is necessary to facilitate communication between teachers and students. After Covid-19, the teaching model moved online. In this context, the author was hired by MUHO as an interpreter for a series of NEUST classes of different majors and degree levels delivered online, which means that the students, the professors and the interpreter, each located in their own homes, can attend class without leaving home or even their study rooms.

Early research on remote interpreting focused on its technical feasibility. For example, Braun (2006: p. 1) investigated the impact of multimedia communication technologies on interpreting and concluded that remote interpreting might bring more flexibility for interpreters. Other research emphasized the extra physical and psychological burden on the interpreter in remote settings. For example, Moser-Mercer (2005: p. 727) investigated issues of multi-sensory integration in a multilingual task in remote interpreting and concluded that the deterioration of interpreting quality appeared to stem from an early onset of fatigue, which in turn seems to be a consequence of allocating additional cognitive resources in a remote interpreting process that lacks presence. Research later shifted to the specific skills interpreters need to improve their performance in remote interpreting. For instance, Pöchhacker (2021: p. 2875) claimed that VRI requires appropriate technical and spatial arrangements as well as users capable of adapting their communicative behaviour to spatial and audiovisual constraints.

With the development of communication technology, video remote interpreting (VRI) has also been applied to educational settings in both sign language and spoken language contexts. Research on remote educational interpreting has taken different directions, including interpreter-mediated interactions between people using a signed or spoken language across distances in real time (Warnicke & Granberg, 2022: p. 1); evaluating physical exercise and job crafting as ways to combat stress related to job boredom in the video remote educational setting (Musto, 2020); and developing VRI for situations such as seminars, conferences and academic lectures in university settings (Greco, 2020).

Based on the English-Chinese consecutive interpreting practice in the course Development Economics, and combining Goffman’s Participation Framework Theory with Norris’ Multimodal Analysis Framework Theory, this paper proposes a Multi-modal Interpreter-Mediated Interaction (MIMI) Framework and puts forward interpreting strategies after analyzing the interactional phenomena in interpreter-mediated remote consecutive interpreting.

2. Multi-Modal Interpreter-Mediated Interaction (MIMI) Framework

2.1. Interpreter-Mediated Interactional Phenomena

Over the last few decades, the role of embodiment in human communication has been increasingly studied, within a wide range of settings. It has become clear that the production of socially shared meaning needs to be situated within a multi-layered context, including not only human interactants and their verbal exchanges, but also the physical environment in which they operate and the wide range of bodily resources they use in order to communicate. In other words, “human interaction is fundamentally embodied, and as such any research into human social interaction is research into embodied interaction” ( Hazel et al., 2014 , italics in the original). As a consequence, the verbal side of interaction has been increasingly integrated with “concurrently relevant semiotic fields” (Goodwin, 2000: p. 1499) such as gaze, gesture, posture, body and space orientation, and object manipulation, in an attempt to achieve a holistic model, capable of accounting for the complexity of naturally-occurring communicative events.

Based on Goffman’s participation framework and Norris’s multi-modal analytical framework, this paper develops a synthetic framework, the Multi-modal Interpreter-Mediated Interaction (MIMI) framework, to analyze interactive phenomena in the remote class interpreting setting.

2.2. Goffman’s Participation Framework

Participation framework is a key notion in Goffman’s theory. “When a word is spoken, all those who happen to be in perceptual range of the event will have some sort of participation status relative to it. The codification of these various positions and the normative specification of appropriate conduct within each provide an essential background for interactional analysis” (Goffman, 1981: p. 3). In short, the participation framework is a framework of participation in an utterance, and it rests on two formats: the production format and the reception format.

In fact, Goffman’s sociolinguistic theory has already been applied to interpreting studies. Wadensjö (1998) suggests an interactionalistic, non-normative, dialogical approach to studies of interpreter-mediated talk for a deepened, developed understanding of the interpreter’s role in face-to-face interaction. Ren (2017) argues that Goffman’s theory is also applicable to the analysis of interpreters’ discursive roles in bilingual situations featuring multi-party interactions.

For the study of participation frameworks (PF; Goffman, 1981), it is useful to observe the participants’ attention/awareness levels, as this allows us to identify the participants’ relationships in interaction. To further understand the PF, it is also necessary to consider Goffman’s (1981) “ratification process” in the reception format, namely the participants’ positioning in terms of spatial orientation, eye contact and proxemics with respect to each other. In interpreter-mediated interaction, a participant might be both verbally and visually addressed (full ratification), or only verbally or only visually addressed.

One point needs to be made: the ratification process, in Goffman’s terms, is initiated by speakers only, whereas in interpreter-mediated interaction the roles of speaker and listener alternate as the talk unfolds turn by turn. For this reason, this article makes some modifications to Goffman’s PF so that it serves as a more suitable analytical tool, in a bid to capture more aspects of interpreter-mediated interaction.

2.3. Norris’ Multi-Modal Analytical Framework (MAF)

Multi-modal discourse analysis has become an important topic in discourse analysis. Three theoretical approaches to multi-modal discourse have emerged: systemic functional multi-modal discourse analysis, multi-modal interactive analysis, and the corpus linguistic approach to situated discourse.

Systemic functional multi-modal discourse analysis is represented by Gunther Kress and Theo van Leeuwen. From the perspective of social semiotics, it constructs a theoretical framework and determines analysis categories on the basis of functional linguistics, and is usually called systemic functional semiotics or systemic functional multi-modal discourse analysis. The second approach is the multi-modal interactive analysis associated with Sigrid Norris. The third is the corpus linguistic approach to multi-modal discourse proposed by Gu Yueguo, which draws on social behavior psychology, behavioral ecology and perceptual ecology, and serves language engineering.

Norris (2004) makes a distinction in interaction between “higher-level actions” (HLAs), such as meetings, and “lower-level actions” (LLAs), such as utterances and gestures. HLAs are marked by social openings and closings and consist of multiple LLAs. In order to investigate co-constructed interactions in a more comprehensive way, Norris (2004) maintains that a two-level analysis of interaction is required. First, the analyst should identify and analyse participant-linked actions; that is, who does what with whom. At a second level, the analyst should unlink the actions from the participant and study only what the participant does (Norris, 2004) without presupposing other participants’ involvement. The second level of analysis allows the analyst to identify whether and how a certain HLA might be linked to other participants as well.
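To make this two-level procedure concrete, the following sketch is a toy illustration in Python: the class and function names are the present authors’ hypothetical labels rather than Norris’s own notation, and the sample actions are loosely based on the class opening analyzed in Case 2 below. It represents a higher-level action as a bundle of participant-linked lower-level actions and shows the two analytical passes: first linking actions to participants and co-participants, then unlinking them to examine one participant’s conduct on its own.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class LowerLevelAction:
    """A lower-level action (LLA): one utterance, gesture, gaze shift, etc."""
    participant: str
    modality: str                       # e.g. "speech", "gaze", "gesture"
    description: str
    addressed_to: Optional[str] = None  # co-participant the action is linked to, if any


# A higher-level action (HLA), e.g. the class opening, is bounded by a social
# opening and closing and is treated here as a bundle of LLAs.
class_opening = [
    LowerLevelAction("student", "speech", "Hello, professor, can you see me?", "professor"),
    LowerLevelAction("student", "gesture", "waves left hand at the screen", "professor"),
    LowerLevelAction("interpreter", "gaze", "looks at the right side of the screen"),
    LowerLevelAction("professor", "speech", "Bonnie, Landa (Li Dai)?", "student"),
]


def level_one(hla):
    """Level 1: participant-linked actions, i.e. who does what with whom."""
    return [(a.participant, a.description, a.addressed_to or "unaddressed") for a in hla]


def level_two(hla, participant):
    """Level 2: unlink actions from co-participants and keep one participant's conduct."""
    return [f"{a.modality}: {a.description}" for a in hla if a.participant == participant]


if __name__ == "__main__":
    for row in level_one(class_opening):
        print(row)
    print(level_two(class_opening, "interpreter"))
```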

2.4. Multi-Modal Interpreter-Mediated Interaction (MIMI) Framework

Based on Goffman’s participation framework and Norris’s multi-modal analytical framework, this paper develops a synthetic framework, the Multi-modal Interpreter-Mediated Interaction (MIMI) framework, to analyze interactive phenomena in the remote class interpreting setting, which is a kind of Dialogue Interpreting (DI).

Over the last twenty years, DI has been studied extensively through discourse analysis and conversation analysis. As a result, DI has come to be recognized as an interactive communicative event in which all the participants jointly and actively collaborate. Initially, most of these studies focused on the verbal level of interaction; more recently, its multimodal dimension has received increasing attention.

By analyzing and comparing these two dimensions under Goffman’s participation framework (PF) and Norris’s multi-modal analysis framework (MAF), and taking the remote interpreter-mediated class Development Economics as a case, this paper aims to show how multimodal analysis can contribute to a deeper understanding of the interactive dynamics of DI in the remote class setting. In particular, the paper sheds light on how participants employ multimodal resources (gaze, gesture, body position, proxemics, object manipulation) to construct participation frameworks in the class, and how the “ecology of action” (i.e., the relationships between the participants and the surrounding environment) influences the development of the interaction.

3. Interpreting Difficulties in the Remote Class Setting within the Interpreter-Mediated Interaction Framework

For the participation framework to be constructed in the remote class setting, a set of key requirements must be fulfilled: the interpreter must be included in the interpreter-mediated framework of communication, and both the professor and the students must, among other things, be willing to interact with each other. However, in the remote class setting, the very remoteness makes it difficult for the participants to co-construct and engage in the participation framework. This section therefore describes the interpreting difficulties in remote class settings, namely lack of presence and lack of interactive management.

3.1. Lack of Presence

Franceschi, Lee, Zanakis, and Hinds (2009: p. 79), in their study on social presence in virtual worlds, describe presence as the ability “to psychologically transport the user to an artificial environment during the experience” and further explain that “[t]his psychological transportation is known as the user’s sense of presence during the virtual experience.”

In remote interpreting, the reduction of non-verbal communication causes a lack of presence, which has a negative influence on the interpreter’s performance. Presence is an obvious factor when audio-video technologies are used to deliver interpreting services (Skinner, Napier, & Braun, 2018). When interpreters are physically co-located with those for whom they are mediating the communication, they can normally deduce much about the nature of the interaction and the interpersonal relationship between the interlocutors by drawing on contextualization cues. By contrast, being located at a distance has the potential to disrupt the perception of presence and to place the interpreter at a disadvantage (Moser-Mercer, 2005).

In teaching activities, non-verbal communication transmits, and assists in transmitting, key information. Teachers’ non-verbal language can be effective if students can see the teacher, rather than the teacher being hidden behind a desk or board or teaching with his or her back to the students (Bambaeeroo & Shokrpour, 2017).

In relation to presence, it is necessary to mention another term, “state of flow”, which has a similar meaning in RI. The “state of flow”, or “flow” for short, has been described as “a mental state of operation, in which a person is fully immersed in what he or she is doing”. Flow is a major factor in improving the sense of presence. Non-verbal, visual inputs such as hand gestures, lip movement, and body language play an important role in comprehending the speaker’s message. In remote class interpreting settings, these non-verbal, visual inputs are seen through the speaker view on the screen; therefore, computer interface management cannot be ignored by the interpreter. However, a new interpreter, or an interpreter unfamiliar with RI, may have no idea how to manage it, which leads to two serious problems: the interpreter’s attention allocation across different views, and the absence of one or more views on the screen.

In conclusion, the lack of presence in the remote class setting has a negative influence on the interpreter’s performance; therefore, when we analyze interaction in the remote class setting within the interpreter-mediated interaction framework, the notion of presence and its effects cannot be ignored.

3.2. Lack of Interactive Management

The second difficulty is the lack of interactive management. With regard to making contact in remote communication, studies have focused on the impact of reduced access to embodied actions (e.g. Davitti & Braun, 2020). Physical appearance is skewed and weakened by the two-dimensionality of the technological medium, which reduces access to essential visual cues and contextual information (Davitti & Braun, 2020: p. 281). The surrounding environment (e.g. lighting conditions, camera angles and positioning in front of the screen) may also affect participants’ perception of and interactions with each other. This is compounded by the fact that most videoconferencing software, such as Zoom and Tencent Meeting, does not support eye contact comparable to face-to-face interaction: for instance, because the camera is usually placed on top of a computer, tablet or smartphone screen, participants appear to each other to be looking downwards.

Because of this lack of interactive management, the interpreter may feel confused when she or he encounters problems in remote interpreting. The management of embodied behaviors such as gaze, eye contact, facial expression and body posture, of the surrounding environment and of spatial organization therefore needs more attention when analyzing interpreter-mediated interactional phenomena in remote interpreting.

The opening of an encounter is a crucial interactive moment in any communication. The opening represents a critical locus for all participants to settle in the communication and establish a rapport before the actual encounter begins (Davitti & Braun, 2020: p. 285) . A remote class interpreter needs to have the ability to deal with the problems in this phase strategically and efficiently even with the constraints of the medium.

4. Case Analysis

The following section selects some typical interpreting cases from an interpreting practice in the remote doctoral course Development Economics for analysis.

Case 1 Managing computer interface

Case 1 is taken from the first class and involves the interpreter (Int) and the professor (Pro). The sequence shows an interactive moment in which the interpreter and the professor discuss the students’ list, which was not sent to the professor before the class. The extract (Case 1) lasts 122 seconds.

Interpreter: 00:00

Sorry, I didn’t get you.

Professor: 00:03

1) Your administrator stuff will not be joining us tonight.

#Figure 1

Interpreter: 00:09

2) I didn’t join you last night.

#Figure 1

Professor: 00:12

I don’t have yet the official list of the students. I’ve been asking, oh, miss, what’s her name? Your coordinator for all, all for twice. I’ve already asked them as her twice for the list of the students, but I don’t have yet the list of students.

Interpreter: 00:34

Okay, I have the students list. Maybe i can help you. Okay.

Professor: 00:41

3) Can you please provide that to me? How that. Can I have your, can you have that in your, the chat box your email so that I could email you or how?

Figure 1. Interpreter operating the interpreting tool.

#Figure 2

Interpreter: 01:02

Maybe you be the one. 4) Maybe I can ask one of my colleagues to email this to you. Is it okay?

Professor: 01:13

Your coordinator knew my, I know my email. I just forgot her name, Miss.

Interpreter: 01:20

5) 没事,咱们谁? let me ask them, OK, 嗯, 咱们哪个老师是拿着这个名单来着? [It’s fine. Which of us? Let me ask them, OK, um, which of our teachers has the student list again?]

Student 1: 01:32

6) 没有 [No.]

Student 2:

7) Mam Angela will sent you later.

Interpreter: 01:37

And here Angela and.

Professor: 01:38

I are here. Miss yes, I have. He, he, he just sent this 14. Miss Angela is our coordinator. I just don’t know if Miss Angela, if I knew Miss Angela. Angela. Yes, I have here for 14 students.

Interpreter: 02:02

Yeah, 14 students.

Analyzing

Case 1 is almost entirely an interactive conversation among the professor, the interpreter and a student about how to help the professor obtain the students’ list. From the transcript, the paper identifies two places where the interpreter does not understand what the professor means.

The first misunderstanding happens when the professor says “Your administrator stuff will not be joining us tonight”, to which the interpreter needs to answer “yes” or “no”; instead, she says “2) I didn’t join you last night”, which shows that the interpreter does not understand the professor’s question. The first misunderstanding leads to the second, which happens when the professor says “3) Can you please provide that to me? How that. Can I have your, can you have that in your, the chat box your email so that I could email you or how?”. The professor wants the interpreter to put her email address in the chat box so that the professor can email the interpreter, and the interpreter can then send the list to the professor by email. However, the interpreter does not understand the professor, so she answers that she will ask one of her colleagues to send the list, which stalls the conversation and creates confusion.

So, what is the main reason for these two misunderstandings? From the perspective of computer interface management, the answer is the interpreter’s attention allocation across the different views on the screen. When the professor says “Your administrator stuff will not be joining us tonight”, the interpreter is operating the interpreting tool (see Figure 1), so her attention is absorbed by the operation, which causes the first misunderstanding. And when the professor says “3) Can you please provide that to me? How that. Can I have your, can you have that in your, the chat box your email so that I could email you or how?”, the interpreter is staring at the interpreting tool view (see Figure 2) to figure out what the professor is saying; she depends too much on the interpreting tool and devotes too much time to it, which leads to the second misunderstanding.

These two misunderstandings show the importance of computer interface management, which includes adjusting the interface and allocating attention to every view the interpreter needs to monitor while interpreting in a remote class.

Figure 2. Interpreter staring the interpreting tool view.

Figure 3. Base layer (minimal interface schematic).

Johnson and Wiles (2003) suggested that, like games, non-leisure software could benefit from having a minimal interface and that this would improve user flow. The minimal interface (Figure 3) has since been applied in the remote interpreting area (e.g. Saeed, González, Korybski, Davitti, & Braun, 2022).

Building on the interface proposed by Saeed et al. (2022), this paper proposes a minimal interface (Figure 4) that is better suited to remote class interpreting settings and addresses the problems of the interpreter’s attention allocation across different views and of view adjustment on the screen.

The schematic of the minimal interface (Figure 4) essentially consists of a narrow meeting control ribbon at the bottom of the screen, a presentation view in the middle that dominates the schematic, an area for speaker images on the right, and an area for the CAI tool, the only addition, at the top of the screen. The meeting control ribbon at the bottom of the screen, which can be hidden, contains all essential meeting and interpreter controls such as the mute button and meeting exit. At the side of the screen, a chat box can be seen and can be hidden if necessary.

Figure 4. Minimal interface schematic in remote class interpreting.

In remote class interpreting settings, the professor usually shares her or his presentation, and the whole lecture revolves around it, so it is reasonable for the interpreter to arrange the screen as in Figure 4, with the presentation view in the middle, dominating the screen; this helps the interpreter find the key points at a glance. Because non-verbal communication plays an important role in interactive phenomena, it is also necessary to put the speaker view to the right of the presentation view, making it easy for the interpreter to observe the speaker’s non-verbal/embodied resources.

The CAI view is necessary in remote class interpreting settings because the speaker’s accent is more difficult for the interpreter to recognize than a British or American English accent. The CAI tool can help the interpreter recognize accented words and display them on the screen, which relieves the interpreter’s accent-recognition stress to some extent, so placing the CAI view at the top of the screen is a sound choice. The meeting control ribbon at the bottom of the screen and the chat box at the side of the screen help the interpreter control the meeting and communicate with other participants when necessary, especially at the beginning and end of the lecture; these two zones are therefore often hidden to keep the screen clear and improve the interpreter’s performance.

What is more, the interpreter needs to pay most attention to the presentation (PPT) view and the speaker view to catch the professor’s main ideas from both verbal and embodied resources. Although the CAI tool can relieve the interpreter’s cognitive load, the technology needs further improvement before it can be relied on in interpreting settings.
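As a rough illustration of the interface management strategy described above, the sketch below is a hypothetical Python configuration: the zone names and the toggle helper are illustrative only and do not correspond to any actual Zoom or CAI tool API. It encodes the zones of the proposed minimal interface (Figure 4) and shows how the hideable zones can be collapsed during the lecture and reopened at the opening and closing phases.

```python
from dataclasses import dataclass


@dataclass
class Zone:
    """One region of the interpreter's screen in the minimal interface."""
    name: str
    position: str       # where the zone sits on the screen (Figure 4)
    hideable: bool      # whether the interpreter may collapse it
    visible: bool = True


# Zones of the proposed minimal interface for remote class interpreting:
# the presentation dominates the centre, the speaker video sits to its right,
# the CAI tool occupies the top, and the two auxiliary zones start hidden.
MINIMAL_INTERFACE = [
    Zone("presentation", "centre, dominant", hideable=False),
    Zone("speaker_video", "right of the presentation", hideable=False),
    Zone("cai_tool", "top of the screen", hideable=False),
    Zone("meeting_controls", "bottom ribbon", hideable=True, visible=False),
    Zone("chat_box", "side of the screen", hideable=True, visible=False),
]


def toggle(zone_name: str, show: bool) -> None:
    """Show or hide a hideable zone, e.g. the chat box during the opening."""
    for zone in MINIMAL_INTERFACE:
        if zone.name == zone_name and zone.hideable:
            zone.visible = show


if __name__ == "__main__":
    toggle("chat_box", True)        # open the chat box for the class opening
    toggle("meeting_controls", True)
    for z in MINIMAL_INTERFACE:
        print(f"{z.name:16} {z.position:28} visible={z.visible}")

    toggle("chat_box", False)       # hide auxiliary zones once the lecture starts
    toggle("meeting_controls", False)
```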

Case 2 Managing the opening

The following case analysis is taken from the first class of Development Economics and involves a student (Stu), the professor of the class (Pro), and the interpreter (Int). The extract occurs at the beginning of the class, that is, the opening of the whole class. By analyzing the embodied resources of the participants, this section tries to identify the interpreting problems in the opening stage of a remote class interpreting setting.

1) Stu: Hello, professor, I’m here. I’m Li Dai. Hello, professor. Can you see me?

(looking down at the screen, then waving her left hand to screen,

#Figure 5

Finally touching her left face with her left hand and moving her eyeballs around the screen)

#Figure 6

Comm (Pro looking up at screen, Int looking at screen)

#Figure 7

2) Int: 啊,我可以听到。 [Ah, I can hear (you).]

(turning her head to the left and looking at the right side of the screen, trying to make the conversation go on)

#Figure 8

Comm (Pro looking up at the left side of the screen and opening her mouth, Stu looking down at the screen)

#Figure 9

3) Pro: Bonnie, Landa (Li Dai)?

(looking up at the left side of the screen)

#Figure 10

Comm (Int looking up at the right side of the screen, Stu looking down at the screen and opening her mouth and waiting for her talk turn)

#Figure 10

4) Stu: Hello, 李代, my Chinese name is Li Dai. I come from Hebei Province. I’m a teacher in Qinghai University. I’m very uh honored to be your student. Nice to see you.

(looking down at the screen, waving her left hand,

#Figure 11

and then looking up at the screen

#Figure 12

and finally smiling

#Figure 13

Comm Int looking up at the right side of the screen,

#Figure 11

and lowering down her head,

#Figure 13

Pro looking up at the left side of the screen,

#Figure 11

then blinking her eyes,

#Figure 12

nodding,

#Figure 12

turning her head to the left side of the screen, and lowering her head to the desk with a smile)

#Figure 13

5) Pro: Yeah! This means only iii just I know, Bonnie.

Your English name is Bonnie.

(looking down)

#Figure 14

Comm (Int and Stu looking down)

#Figure 14.

(19:57-20:05 silence gap)

6) S1: Yes.

Figure 5. The embodied resources 1.

Figure 6. The embodied resources 2.

Figure 7. The embodied resources 3.

Figure 8. The embodied resources 4.

Figure 9. The embodied resources 5.

Figure 10. The embodied resources 6.

Figure 11. The embodied resources 7.

Figure 12. The embodied resources 8.

Figure 13. The embodied resources 9.

Figure 14. The embodied resources 10.

Analyzing

Two elements appear to be of particular relevance in relation to the opening phase: a) the initial “meet and greet” phase; b) the establishment of some ground rules for communication management. Case 2 shows an obvious effort from all parties to display openness towards one another. This is done through a combination of devices, both verbal (e.g. “nice to see you” in line 4; the repetition of the name by the professor in line 5) and embodied (e.g. smiles and gestures accompanying talk, such as the student waving at the professor while introducing herself, see Figure 11). Coming across as polite and friendly is a key factor in establishing rapport, particularly in a collaborative setting such as the one presented here (e.g. the student smiling while saying “nice to see you”, and the professor smiling as well). However, the case also reveals pauses at unnatural places and repetitions (line 1) as well as an enhanced demeanour by all participants involved (exaggerated smiles and hand waving to greet). The initial “meet-and-greet” phase is thus honored, despite being permeated by what we may describe as “awkwardness”, a term that captures dynamics that do not unfold as smoothly or gracefully as one might expect, without necessarily hindering the outcome of the event. This may be a consequence of uncertainty about what “the other side” does in remote interaction, which has been shown to manifest itself, for example, in over-elaboration, signalling some deviation from what would be expected (Braun, 2007). Another explanation for the exaggerated demeanour (e.g. hand waving) may be the participants’ uncertainty about how to handle the (novel) situation. The term awkwardness stresses that such elements do not necessarily lead to major communication breakdowns, but they may affect our perception of and behaviour during social encounters.

Awareness of these dynamics can support increased self-monitoring, helping the interpreter avoid over-elaboration and ensure that the greeting is commensurate with the type of encounter, so that the opening proceeds as smoothly and naturally as possible.

Although it is reasonable for all parties to display openness towards one another through a combination of verbal and embodied devices, it is also necessary for the interpreter to improve self-monitoring awareness so that the opening proceeds smoothly.

Case 3 Managing turn-taking

Case 3 is taken from the first class and involves the interpreter (Int) and the professor (Pro). The sequence shows an interactive moment in which the interpreter has just taken the floor after a long, multi-unit turn uttered by the professor and is ready to start her rendition for the students. The extract (Case 3) lasts 294 seconds, of which 167 s were spent by the professor and the rest by the interpreter.

Pro: 00:37:13-00:38:00 (47 s)

Okay, so we go to the definition of economic development in the 1960s. 1) So we have a discussion in 50 s. So the common alternative index is the rate of growth of income per capita or per capita GNP. 2) In the 1950s, it is the total GNP or total GDP, but in the 1960s, it is per capita per GNP or per capita income. So when we say per capita GNP, this is the per-head value of final goods and services produced by permanent residents of a country in a given period of time. It is converted to USD using the current exchange rate. So in here, what is the difference between total GNP and the per capital GNP?

(Looking up at the left side of screen and continuing speaking)

# Figure 15

Comm (Int looking at the screen without any facial expression or move)

#Figure 15

00:38:00-00:38:55 (55 s)

Per capita GNP is much more accurate than in, in say, in explanation and in interpreting or in comparison purposes, than the total GNP, also the GDP, because a country may have a big GNP or GDP because the country is big, no, in terms of population, in terms of area. 3) But compared with other countries, you cannot compare the G, the total GNP from one country to another, because of different population and areas. But very safe for capital GNP, that is person, and it takes into consideration the population of a person of the country. That’s that you can compare the per capita GNP or GDP from one, from a country to another.

Figure 15. The embodied resources 11.

(looking at the right side of screen and continuing speaking with her mouth open)

#Figure 16

Comm (Int turning her head to the left, looking at the right side of screen, keeping her eyes open)

#Figure 16

00:38:55-00:40:02 (67 s)

And it’s in order to compare the, this per capita GNP; the per capita GNP should be converted into us that color in every country so that we will be able to compare the total, the per capita GNP. So this is what we call the PPP measure or the purchasing power parity measure. So this is the number of units of a country’s currency that quite to purchase the same basket of goods and services in the local market that a US dollar would buy in the USA. Under PPP exchange rate should adjust, equalize the price of a common basket of goods and services cross country. 4) So when we convert the per capital GNP per country into PPP measure that would, could be converted into us dollar, so that we will be able to compare the per capital GNP on a per country basis.

And in the succeeding slides I will show you some, the Case of this conversion and this PPP Measure.

(looking down at the left side of screen

#Figure 17

and then turning her head to the right side of the screen and looking down)

#Figure 18

Comm (Int looking at the screen with a glazed stare.)

#Figure 17

Figure 16. The embodied resources 12.

Figure 17. The embodied resources 13.

Figure 18. The embodied resources 14.

Analyzing

This paper finds four interpreting omissions in this extract. The first two omissions happen because of omissions in note-taking rather than because of turn-taking.

The third and last omissions happen because of the long turn. The third is “But compared with other countries, you cannot compare the G, the total GNP from one country to another, because of different population and areas”, which can be interpreted as “使用GNP或GDP总值则不合适” [it is not appropriate to use total GNP or GDP]. This sentence clearly explains that total GDP/GNP is not suitable when we compare the development of different countries, and it emphasizes the difference between per capita GNP/GDP and total GNP/GDP. The last omission is “So when we convert the per capital GNP per country into PPP measure that would, could be converted into us dollar, so that we will be able to compare the per capital GNP on a per country basis.”, which is the conclusion of the preceding meaning unit; however, the interpreter misses this part because she is too tired to remember the sentence. From the interpreter’s embodied resources, this paper finds that the interpreter is looking down without paying any attention to the professor. At that moment, the interpreter may be taking notes or simply letting her mind wander.

To initiate chunking in remote interpreter-mediated interaction, Davitti and Braun (2020) offer the following observation:

...elements of system design also need to be taken into account in the analysis of interactive phenomena, including the type of equipment (static or dynamic cameras; touch-screen), number and position of cameras and screens (camera-face distance and angle, implications of seating position and angle of all participants towards the cameras and screen), screen display (presence of multiple images, picture-in-picture), and screen size (which is gradually reducing as devices become increasingly mobile).

So, in long interpreting turns, it is suitable for interpreters to apply self-selected turn-taking, proposed by Elena Davitti (2018) in her paper “Methodological Explorations of Interpreter-Mediated Interaction: Novel Insights from Multimodal Analysis”, in order to deliver a better rendition.

5. Conclusion

Based on Goffman’s participation framework and Norris’ multi-modal analytical framework, this paper proposes the multi-modal interpreter-mediated interaction (MIMI) framework to analyze the difficulties and problems that occur in the remote class interpreting settings of Development Economics for DEM of NEUST. In addition, the paper proposes three strategies, namely applying a minimal computer interface, improving self-monitoring awareness and initiating self-selected turn-taking, which help interpreters deal with the corresponding interpreting problems in their renditions.

Funding

This paper marks a stage in research that was made possible by the Offline First-class Curriculum Construction Project (“English Interpreting Practice”) funded by the Education Bureau of Inner Mongolia Autonomous Region, the Key Research Project of Undergraduate Teaching Reform funded by the Education Bureau of Inner Mongolia Autonomous Region (grant#2022JG20220222502022-09-01000016), and the New Liberal Arts Research and Reform Practice Project “New Model and Practice Innovation of Digital Intelligence Collaborative Training of Foreign Language Teaching with Multidisciplinary Interdisciplinary Empowerment in the Context of New Liberal Arts” funded by Inner Mongolia Autonomous Region (grant#20900-231528).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Bambaeeroo, F., & Shokrpour, N. (2017). The Impact of the Teachers’ Non-Verbal Communication on Success in Teaching. Journal of Advances in Medical Education & Professionalism, 5, 51-59.
[2] Braun, S. (2006). Multimedia Communication Technologies and Their Impact on Interpreting. In Gerzymisch-Arbogast, H. (Ed.), Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenarios—Copenhagen 1-5 May 2006 (pp. 27-57).
http://www.euroconferences.info/proceedings/2006_Proceedings/2006_proceedings.html
[3] Braun, S. (2007). Interpreting in Small-Group Bilingual Videoconferences: Challenges and Adaptation Processes. Interpreting, 9, 21-46.
https://doi.org/10.1075/intp.9.1.03bra
[4] Braun, S. (2015). Remote Interpreting. In H. Mikkelson & R. Jourdenais (Eds.), Routledge Encyclopaedia of Interpreting Studies (pp. 346-348). Routledge.
[5] Davitti, E. & Braun, S. (2020). Analysing Interactive Phenomena in Video Remote Interpreting in Collaborative Settings: Implications for Interpreter Education. The Interpreter and Translator Trainer, 14, 279-302.
https://doi.org/10.1080/1750399X.2020.1800364
[6] Davitti, E. (2018). Methodological Explorations of Interpreter-Mediated Interaction: Novel Insights from Multimodal Analysis. Qualitative Research, 19, 7-29.
https://doi.org/10.1177/1468794118761492
[7] Franceschi, K., Lee, R. M., Zanakis, S. H., & Hinds, D. (2009). Engaging Group E-Learning in Virtual Worlds. Journal of Management Information Systems, 26, 73-100.
https://doi.org/10.2753/MIS0742-1222260104
[8] Goffman, E. (1981). Forms of Talk. University of Pennsylvania Press.
[9] Goodwin, C. (2000). Action and Embodiment within Situated Human Interaction. Journal of Pragmatics, 32, 1489-1522.
https://doi.org/10.1016/S0378-2166(99)00096-X
[10] Greco, M. (2020). Video Remote Interpreting in University Settings. Translation and Translanguaging in Multilingual Contexts, 6, 148-160.
https://doi.org/10.1075/ttmc.00050.gre
[11] Hazel, S., Mortensen, K., & Rasmussen, G. (2014). Introduction: A Body of Resources—CA Studies of Social Conduct. Journal of Pragmatics, 65, 1-9.
https://doi.org/10.1016/j.pragma.2013.10.007
[12] Johnson, D., & Wiles, J. (2003). Effective Affective User Interface Design in Games. Ergonomics, 46, 1332-1345.
https://doi.org/10.1080/00140130310001610865
[13] Moser-Mercer, B. (2005). Remote Interpreting: Issues of Multi-Sensory Integration in a Multilingual Task. Meta, 50, 727-738.
https://doi.org/10.7202/011014ar
[14] Musto, A. (2020). Incorporating Physical Exercise and Job Crafting to Buffer Cardiovascular Disease and Job Boredom in Video Remote Educational Sign Language Interpreting. MSc. Thesis, Western Oregon University.
[15] Norris, S. (2004). Analyzing Multimodal Interaction: A Methodological Framework. Routledge.
https://doi.org/10.4324/9780203379493
[16] Pöchhacker, F. (2021). Multimodality in Interpreting. In Y. Gambier, & L. van Doorslaer (Eds.), Handbook of Translation Studies (pp. 151-157). John Benjamins Publishing Company.
https://doi.org/10.1075/hts.5.mul3
[17] Ren, W. (2017). Rethinking the Interpreter’s Speaking/Hearing Roles from Goffman’s Sociolinguistic Perspective. Chinese Translators Journal, 38, 18-25. (In Chinese)
[18] Saeed, M. A., González, E. R., Korybski, T., Davitti, E., & Braun, S. (2022). Connected yet Distant: An Experimental Study into the Visual Needs of the Interpreter in Remote Simultaneous Interpreting. In M. Kurosu (Ed.), Human-Computer Interaction. User Experience and Behavior. HCII 2022. Lecture Notes in Computer Science (Vol. 13304, pp. 214-232). Springer.
https://doi.org/10.1007/978-3-031-05412-9_16
[19] Skinner, R., Napier, J., & Braun, S. (2018). Interpreting via Video Link: Mapping of the Field. In J. Napier, R. Skinner, & S. Braun (Eds.), Here or There: Research on Interpreting via Video Link (pp. 11-35). Gallaudet.
[20] Wadensjö, C. (1998). Interpreting as Interaction. Longman.
[21] Warnicke, C., & Granberg, S. (2022). Interpreter-Mediated Interactions between People Using a Signed Respective Spoken Language across Distances in Real Time: A Scoping Review. BMC Health Services Research, 22, Article No. 387.
https://doi.org/10.1186/s12913-022-07776-y
