Journal articles: 'Automated subtitles' – Grafiati (2024)

Author: Grafiati

Published: 10 December 2022

Last updated: 29 January 2023

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Automated subtitles.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Baskakov, A. A., and A. G. Tarasov. "To the problem of using an automated workplace by people with disabilities." Advanced Engineering Research 21, no. 3 (October 18, 2021): 290–96. http://dx.doi.org/10.23947/2687-1653-2021-21-3-290-296.

Abstract:

Introduction. Banking-sector employees with health restrictions have had negative experiences using internal software to interact with customers and perform their official duties. Many employees, for example those with hearing problems, would like to work in call centers but cannot because of the outdated software. The research objective is to analyze the priority tasks for the further development of software products, taking into account employees' existing health problems. Materials and Methods. The software that allows an employee to interact directly with the organization's clients was selected as the studied subsystem of the automated workplace (hereinafter, the AWP). The analysis used T. L. Saaty's method of expert evaluation, with the assistance of an expert in the development of software for people with disabilities. Results. Using the fundamental preference scale and expert opinion in the field of software development for people with disabilities, a priority matrix was built for each of the criteria (subtitles, simplified fonts, voice guidance, simplified and remote management) and platforms (iOS, Android, Windows OS), as well as a global priority matrix over all criteria and platforms. Discussions and Conclusions. An expert assessment of several characteristics of the software of a commercial banking organization in the Russian Federation was carried out to identify the disadvantages of the software for employees with disabilities. During the analysis, intermediate conclusions were drawn: the most in-demand criterion for people with hearing problems is “Subtitle”; for people unable to leave the house — “Remote control”; for people with amputations or irreversible limb injuries — “Simplified control”. The other parameters are not recommended for implementation.
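
For context, the Saaty-style expert evaluation summarized above boils down to deriving a priority (weight) vector from a pairwise comparison matrix and checking its consistency. The sketch below, in Python with NumPy, shows only that computational step; the matrix values and the three criterion labels are invented for illustration and are not the judgments reported in the paper.

```python
import numpy as np

# Hypothetical 3x3 Saaty pairwise comparison matrix for three criteria
# (say subtitles, voice guidance, remote control); judgments are illustrative only.
A = np.array([
    [1.0, 5.0, 3.0],
    [1 / 5, 1.0, 1 / 2],
    [1 / 3, 2.0, 1.0],
])

eigvalues, eigvectors = np.linalg.eig(A)
k = np.argmax(eigvalues.real)
w = np.abs(eigvectors[:, k].real)
weights = w / w.sum()                      # priority vector (criterion weights)

lambda_max = eigvalues.real[k]
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58                             # 0.58 = Saaty's random index for n = 3
print("priorities:", weights.round(3), "consistency ratio:", round(cr, 3))
```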

2

Song, Hye-Jeong, Hong-Ki Kim, Jong-Dae Kim, Chan-Young Park, and Yu-Seop Kim. "Inter-Sentence Segmentation of YouTube Subtitles Using Long-Short Term Memory (LSTM)." Applied Sciences 9, no. 7 (April 11, 2019): 1504. http://dx.doi.org/10.3390/app9071504.

Abstract:

Recently, with the development of Speech to Text, which converts voice to text, and machine translation, technologies for simultaneously translating the captions of video into other languages have been developed. Using this, YouTube, a video-sharing site, provides captions in many languages. Currently, the automatic caption system extracts voice data when uploading a video and provides a subtitle file converted into text. This method creates subtitles suitable for the running time. However, when extracting subtitles from video using Speech to Text, it is impossible to accurately translate the sentence because all sentences are generated without periods. Since the generated subtitles are separated by time units rather than sentence units, and are translated, it is very difficult to understand the translation result as a whole. In this paper, we propose a method to divide text into sentences and generate period marks to improve the accuracy of automatic translation of English subtitles. For this study, we use the 27,826 sentence subtitles provided by Stanford University’s courses as data. Since this lecture video provides complete sentence caption data, it can be used as training data by transforming the subtitles into general YouTube-like caption data. We build a model with the training data using the LSTM-RNN (Long-Short Term Memory – Recurrent Neural Networks) and predict the position of the period mark, resulting in prediction accuracy of 70.84%. Our research will provide people with more accurate translations of subtitles. In addition, we expect that language barriers in online education will be more easily broken by achieving more accurate translations of numerous video lectures in English.
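
To make the idea above concrete, the following is a minimal Keras sketch of a tagger that predicts, for each subtitle token, whether a sentence-final period should follow it. It is not the authors' model; the vocabulary size, sequence length, hidden size, and the random stand-in data are assumptions for illustration only.

```python
# Minimal sketch of an LSTM sequence tagger for period placement in subtitles.
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN, EMBED, HIDDEN = 20000, 50, 128, 256

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMBED),
    tf.keras.layers.LSTM(HIDDEN, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-ins for tokenized lecture subtitles and their period/no-period
# labels; real training would use sentence-complete caption data as described above.
x = np.random.randint(1, VOCAB, size=(32, SEQ_LEN))
y = np.random.randint(0, 2, size=(32, SEQ_LEN, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)  # (1, 50, 1): per-token period probability
```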

3

Panchenko, L. F. "The study of Coursera’s data analysis courses." CTE Workshop Proceedings 2 (March 20, 2014): 111–24. http://dx.doi.org/10.55056/cte.195.

Abstract:

Objective: to identify the particular methods of teaching in massive open online courses. Research object: the learning process of massive open online courses. Research subject: what is common and what is specific in the teaching methods of Coursera’s data analysis courses. Research goals: to participate as a student in massive open online courses in the «Statistics and Data Analysis» category, and to analyze the types of software used there, the teaching methods, and the teaching materials. Research methods: participant observation, content analysis, analysis of the products. Research results: common to the teaching methods are the course syllabus, the use of short video lectures with built-in tests and subtitles, presentation of lecture texts in ppt and pdf formats, automated tests on each topic, and a forum for communication and assistance. Specific features are the structure and content of the video lectures, the statistical software, the programming tasks, the data sets, the use of peer assessment, the final projects, and the teachers’ and students’ blogs. Conclusions and recommendations: teachers’ participation as students in the courses of leading professors from leading universities allows them to become acquainted with different styles of teaching, to expand their knowledge and scientific horizons, to learn new types of software, to update and expand the content of their courses, to develop professional relationships, and to integrate into the global educational community.

4

Pettigrew, Catharine M., Bruce E. Murdoch, Curtis W. Ponton, Joseph Kei, Helen J. Chenery, and Paavo Alku. "Subtitled Videos and Mismatch Negativity (MMN) Investigations of Spoken Word Processing." Journal of the American Academy of Audiology 15, no. 07 (July 2004): 469–85. http://dx.doi.org/10.3766/jaaa.15.7.2.

Abstract:

The purpose of this study was to determine whether the presence of subtitles on a distracting, silent video affects the automatic mismatch negativity (MMN) response to simple tones, consonant-vowel (CV) nonwords, or CV words. Two experiments were conducted in this study, each including ten healthy young adult subjects. Experiment 1 investigated the effects of subtitles on the MMN response to simple tones (differing in frequency, duration, and intensity) and speech stimuli (CV nonwords and CV words with a /d/-/g/ contrast). Experiment 2 investigated the effects of subtitles on the MMN response to a variety of CV nonword and word contrasts that incorporated both small (e.g., /d/ vs. /g/) and/or large (e.g., /e:/ vs. /el/) acoustic deviances. The results indicated that the presence or absence of subtitles on the distracting silent video had no effect on the amplitude of the MMN or P3a responses to simple tones, CV nonwords, or CV words. In addition, the results also indicated that movement artifacts may be statistically reduced by the presence of subtitles on a distracting silent video. The implications of these results are that more "engaging" (i.e., subtitled) silent videos can be used as a distraction task for investigations into MMN responses to speech and nonspeech stimuli in young adult subjects, without affecting the amplitude of the responses.

5

Perego, Elisa, Fabio Del Missier, and Marta Stragà. "Dubbing vs. subtitling." Target. International Journal of Translation Studies 30, no. 1 (February 5, 2018): 137–57. http://dx.doi.org/10.1075/target.16083.per.

Abstract:

Despite the claims regarding the potential disruptiveness of subtitling for audiovisual processing, existing empirical evidence supports the idea that subtitle processing is semi-automatic and cognitively effective, and that, in moderately complex viewing scenarios, dubbing does not necessarily help viewers. In this paper we appraise whether the complexity of the translated audiovisual material matters for the cognitive and evaluative reception of subtitled vs. dubbed audiovisual material. To this aim, we present the results of two studies on the viewers’ reception of film translation (dubbing vs. subtitling), in which we investigate the cognitive and evaluative consequences of audiovisual complexity. In Study 1, the results show that a moderately complex film is processed effectively and is enjoyed irrespective of the translation method. In Study 2, the subtitling (vs. dubbing) of a more complex film leads to more effortful processing and lower cognitive performance, but not to a lessened appreciation. These results expose the boundaries of subtitle processing, which are reached only when the audiovisual material to be processed is complex, and they encourage scholars and practitioners to reconsider old standards as well as to invest more effort in crafting diverse types of audiovisual translations tailored both to the degree of complexity of the source product and to the individual differences of the target viewers.

6

Bisson, Marie-Josée, Walter J. B. van Heuven, Kathy Conklin, and Richard J. Tunney. "Processing of native and foreign language subtitles in films: An eye tracking study." Applied Psycholinguistics 35, no. 2 (October 23, 2012): 399–418. http://dx.doi.org/10.1017/s0142716412000434.

Abstract:

Foreign language (FL) films with subtitles are becoming increasingly popular, and many European countries use subtitling as a cheaper alternative to dubbing. However, the extent to which people process subtitles under different subtitling conditions remains unclear. In this study, participants watched part of a film under standard (FL soundtrack and native language subtitles), reversed (native language soundtrack and FL subtitles), or intralingual (FL soundtrack and FL subtitles) subtitling conditions while their eye movements were recorded. The results revealed that participants read the subtitles irrespective of the subtitling condition. However, participants exhibited more regular reading of the subtitles when the film soundtrack was in an unknown FL. To investigate the incidental acquisition of FL vocabulary, participants also completed an unexpected auditory vocabulary test. Because the results showed no vocabulary acquisition, the need for more sensitive measures of vocabulary acquisition is discussed. Finally, the reading of the subtitles is discussed in relation to the saliency of subtitles and automatic reading behavior.

7

Sharma, Prachi, Manasi Raj, Pooja Jangam, Sana Bhati, and Neelam Phadnis. "Automatic Generation of Subtitle in Videos." International Journal of Computer Science and Engineering 6, no. 4 (April 25, 2019): 11–15. http://dx.doi.org/10.14445/23488387/ijcse-v6i4p103.

8

Armstrong, Stephen, Andy Way, Colm Caffrey, Marian Flanagan, Dorothy Kenny, and Minako O'Hagan. "Leading by Example: Automatic Translation of Subtitles via EBMT." Perspectives 14, no. 3 (January 31, 2007): 163–84. http://dx.doi.org/10.1080/09076760708669036.

9

Jakhotiya, Akshay, Ketan Kulkarni, Chinmay Inamdar, Bhushan Mahajan, and Alka Londhe. "Automatic Subtitle Generation for English Language Videos." International Journal of Computer Science and Engineering 2, no. 10 (October 25, 2015): 5–7. http://dx.doi.org/10.14445/23488387/ijcse-v2i10p102.

10

Álvarez, Aitor, Carlos-D. Martínez-Hinarejos, Haritz Arzelus, Marina Balenciaga, and Arantza del Pozo. "Improving the automatic segmentation of subtitles through conditional random field." Speech Communication 88 (April 2017): 83–95. http://dx.doi.org/10.1016/j.specom.2017.01.010.

11

Yang, Xin, Zongliang Ma, Letian Yu, Ying Cao, Baocai Yin, Xiaopeng Wei, Qiang Zhang, and Rynson W. H. Lau. "Automatic Comic Generation with Stylistic Multi-page Layouts and Emotion-driven Text Balloon Generation." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2 (June 2021): 1–19. http://dx.doi.org/10.1145/3440053.

Abstract:

In this article, we propose a fully automatic system for generating comic books from videos without any human intervention. Given an input video along with its subtitles, our approach first extracts informative keyframes by analyzing the subtitles and stylizes keyframes into comic-style images. Then, we propose a novel automatic multi-page layout framework that can allocate the images across multiple pages and synthesize visually interesting layouts based on the rich semantics of the images (e.g., importance and inter-image relation). Finally, as opposed to using the same type of balloon as in previous works, we propose an emotion-aware balloon generation method to create different types of word balloons by analyzing the emotion of subtitles and audio. Our method is able to vary balloon shapes and word sizes in balloons in response to different emotions, leading to more enriched reading experience. Once the balloons are generated, they are placed adjacent to their corresponding speakers via speaker detection. Our results show that our method, without requiring any user inputs, can generate high-quality comic pages with visually rich layouts and balloons. Our user studies also demonstrate that users prefer our generated results over those by state-of-the-art comic generation systems.
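
As a rough illustration of the keyframe-extraction step mentioned above, the sketch below grabs one video frame at the midpoint of each subtitle cue using OpenCV. It is only a simplified stand-in: the paper's informativeness analysis, comic stylization, layout synthesis, and balloon generation are not reproduced, and the SRT parsing is deliberately minimal.

```python
# Pick candidate keyframes at subtitle cue midpoints (simplified stand-in).
import re
import cv2

CUE = re.compile(r"(\d+):(\d+):(\d+),(\d+) --> (\d+):(\d+):(\d+),(\d+)")

def cue_midpoints(srt_path):
    with open(srt_path, encoding="utf-8") as f:
        for match in CUE.finditer(f.read()):
            h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, match.groups())
            start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
            end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
            yield (start + end) / 2

def subtitle_keyframes(video_path, srt_path):
    cap = cv2.VideoCapture(video_path)
    frames = []
    for t in cue_midpoints(srt_path):
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)  # seek to the cue midpoint
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```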

12

Jo, Junyoung, Jangwon Gim, Byung-Won On, and Dongwon Jeong. "Subtitle Automatic Extraction System for Short-form Contents." Journal of Korean Institute of Information Technology 19, no. 6 (June 30, 2021): 29–37. http://dx.doi.org/10.14801/jkiit.2021.19.6.29.

13

Davoudi, Mohsen, M. B. Menhaj, Nima Seif Naraghi, Ali Aref, Majid Davoodi, and Mehdi Davoudi. "A Fuzzy Logic-Based Video Subtitle and Caption Coloring System." Advances in Fuzzy Systems 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/671851.

Abstract:

An approach has been proposed for automatic adaptive subtitle coloring using a fuzzy logic-based algorithm. This system changes the color of the video subtitle/caption to a “pleasant” color according to color harmony and the visual perception of the image background colors. In the fuzzy analyzer unit, using RGB histograms of the background image, the R, G, and B values for the subtitle/caption color are computed using fixed fuzzy IF-THEN rules derived from color harmony theories, so as to satisfy complementary-color and subtitle-background color harmony conditions. A real-time hardware structure has been proposed for the implementation of the front-end processing unit as well as the fuzzy analyzer unit.
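
A crude stand-in for the colour-selection idea described above: estimate the background colour of the caption region from its mean RGB value and return its complement. This only approximates the complementary-colour condition; the paper's fuzzy IF-THEN rules, histogram analysis, and hardware implementation are not reproduced, and the caption-box coordinates are an assumption about where the subtitle is rendered.

```python
# Pick a subtitle colour as the complement of the mean background colour.
import numpy as np
from PIL import Image

def subtitle_color(frame_path: str, caption_box=(0, 400, 640, 480)) -> tuple:
    frame = Image.open(frame_path).convert("RGB")
    region = np.asarray(frame.crop(caption_box), dtype=np.float32)
    mean_rgb = region.reshape(-1, 3).mean(axis=0)   # background colour estimate
    complement = 255 - mean_rgb                      # complementary colour
    return tuple(int(c) for c in complement)

# Example: print(subtitle_color("frame_0001.png"))
```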

14

Volk, Martin. "The Automatic Translation of Film Subtitles. A Machine Translation Success Story?" Journal for Language Technology and Computational Linguistics 24, no. 3 (July 1, 2009): 113–25. http://dx.doi.org/10.21248/jlcl.24.2009.124.

15

Smirnov, D. A., and G. B. Sologub. "Automatic Recommendation of Video for Online School Lesson Using Neuro-Linguistic Programming." Моделирование и анализ данных 10, no. 2 (2020): 102–9. http://dx.doi.org/10.17759/mda.2020100208.

Abstract:

The article describes an approach to automating the matching of video materials to text slides in English classes in an online school by vectorizing slide text and video subtitles using the TF-IDF measure and maximizing the cosine similarity measure of these vector representations.
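
The matching idea in this abstract is simple enough to sketch: vectorize slide text and subtitle text with TF-IDF and pick, for each slide, the video whose subtitles maximize cosine similarity. The snippet below, using scikit-learn, is an illustrative toy example with invented slide and subtitle strings, not the system described in the article.

```python
# Match slides to videos by TF-IDF cosine similarity over subtitle text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

slides = ["present perfect exercises", "irregular verbs past simple"]
subtitles = {
    "video_a": "today we practise the present perfect tense ...",
    "video_b": "a list of irregular verbs and their past simple forms ...",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(slides + list(subtitles.values()))
slide_vecs, video_vecs = matrix[: len(slides)], matrix[len(slides):]

scores = cosine_similarity(slide_vecs, video_vecs)
for slide, row in zip(slides, scores):
    best = list(subtitles)[row.argmax()]
    print(f"{slide!r} -> {best} (similarity {row.max():.2f})")
```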

16

Deckert, Mikołaj. "Translatorial dual-processing – evidence from interlingual trainee subtitling." Babel. Revue internationale de la traduction / International Journal of Translation 62, no. 3 (November 21, 2016): 495–515. http://dx.doi.org/10.1075/babel.62.3.07dec.

Abstract:

Drawing on cognitive linguistics and psychology, this paper attempts to model the subtitler’s decision-making as involving two types of operations. They are referred to as System 1 and System 2, the former being fast, automatic and requiring little effort, and the latter being slower, controlled and effortful. To test the dual-processing hypothesis, I analyse trainee subtitlers’ renditions with a focus on the construction “you + to like + me” which exemplifies a cross-language asymmetry and a potential (disguised) translation challenge. Remarkably, the English construction is employed equally-conventionally to represent the concept of being favourably disposed to somebody in a non-physical/sexual manner, on the one hand, and being attracted to somebody, on the other. In Polish, however, the “prototypes” will typically be represented as distinct expressions. The present findings suggest that because differentiating between the prototypes and coding them linguistically is not challenging to the participants, it is the automation of their judgment that leads them to settle for flawed target variants (Stage 1). Additional evidence is obtained (Stage 2) as participants are induced to go from System 1 to System 2 thinking–a cross-stage comparison indicates that the fast-to-slow switch reorients the trainees’ subtitling choices and ultimately improves translation quality.

17

Armstrong, Mike. "Automatic Recovery and Verification of Subtitles for Large Collections of Video Clips." SMPTE Motion Imaging Journal 126, no. 8 (October 2017): 1–7. http://dx.doi.org/10.5594/jmi.2017.2732858.

18

Gunawan, Riko, and Yosi Kristian. "AUTOMATIC PARENTAL GUIDE SCENE CLASSIFICATION MENGGUNAKAN METODE DEEP CONVOLUTIONAL NEURAL NETWORK DAN LSTM." Journal of Intelligent System and Computation 2, no. 2 (October 1, 2020): 86–90. http://dx.doi.org/10.52985/insyst.v2i2.124.

Abstract:

Watching films is one of the most popular hobbies across many groups of people. As the number of films on the market keeps growing, so does the amount of inappropriate content in them. A method is therefore needed to classify films so that the content watched matches the viewer's age. The film content unsuitable for underage users that is classified in this study includes violence, pornography, profanity, alcohol, illegal drug use, smoking, and horrifying (horror) and intense scenes. The classification method used is a modification of a convolutional neural network combined with an LSTM. Combining these two methods accommodates small amounts of training data and enables multi-label classification based on a film's video, audio, and subtitles. Multi-label classification is used because a film always has more than one classification. For training and testing, this study used 1,000 samples for video classification, 600 for audio classification, and 400 for subtitle classification, all obtained from the internet. The experiments yielded accuracies, measured with the F1-score, of 0.922 for video classification, 0.741 for audio classification, and 0.844 for subtitle classification, with an average accuracy of 0.835. Future research will try other deep convolutional neural network architectures and increase the number and variety of test samples.

19

Jain, Shubham, and A. Pandian. "A survey on automatic music generation." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 677. http://dx.doi.org/10.14419/ijet.v7i2.8.10555.

Abstract:

Just as you should not watch a foreign-language movie without its subtitles, you should not listen to music without its lyrics. Music lyrics are words that combine to produce a song in harmony. Usually the lyrics we listen to are written by humans, with no machine involvement. Writing lyrics has never been an easy task; many challenges are involved, because the lyrics need to be meaningful and, at the same time, in harmony and in sync with the music played over them. They are written by great artists who have been writing lyrics for years. This project tries to automate the process of lyrics generation using a computer program that produces lyrics, reducing the burden on human skills and generating new lyrics at a much faster rate than humans ever can. This project also aims toward the merging of human and artificial intelligence.

20

Yan, Li. "Real-Time Automatic Translation Algorithm for Chinese Subtitles in Media Playback Using Knowledge Base." Mobile Information Systems 2022 (June 18, 2022): 1–11. http://dx.doi.org/10.1155/2022/5245035.

Abstract:

Currently, speech technology allows for simultaneous subtitling of live television programs using speech recognition and the respeaking approach. Although many previous studies on the quality of live subtitling utilizing voice recognition have been proposed, little attention has been paid to the quantitative elements of subtitles. Due to the high performance of neural machine translation (NMT), it has become the standard machine translation method. A data-driven translation approach requires high-quality, large-scale training data and powerful computing resources to achieve good performance. However, data-driven translation will face challenges when translating languages with limited resources. This paper’s research work focuses on how to integrate linguistic knowledge into the NMT model to improve the translation performance and quality of the NMT system. A method of integrating semantic concept information in the NMT system is proposed to address the problem of out-of-set words and low-frequency terms in the NMT system. This research also provides an NMT-centered read modeling and decoding approach integrating an external knowledge base. The experimental results show that the proposed strategy can effectively increase the MT system’s translation performance.

21

Hemaspaandra, Lane A. "SIGACT News Complexity Theory Column 110." ACM SIGACT News 52, no. 3 (October 17, 2021): 37. http://dx.doi.org/10.1145/3494656.3494665.

Abstract:

My deepest thanks to Beatrice and Carlo for their fascinating article, Quantum Finite Automata: From Theory to Practice. Regarding the extent to which their article brings to life both parts of its subtitle... wow! And I think Section 4 is an absolute first for this column; please don't miss it!

22

Gonzalez-Carrasco, I., L. Puente, B. Ruiz-Mezcua, and J. L. Lopez-Cuadrado. "Sub-Sync: Automatic Synchronization of Subtitles in the Broadcasting of True Live programs in Spanish." IEEE Access 7 (2019): 60968–83. http://dx.doi.org/10.1109/access.2019.2915581.

23

Gong, Wencui. "An Innovative English Teaching System Based on Computer Aided Technology and Corpus Management." International Journal of Emerging Technologies in Learning (iJET) 14, no. 14 (July 24, 2019): 69. http://dx.doi.org/10.3991/ijet.v14i14.10817.

Abstract:

With the development of modern science and technology, more and more computer technologies have been successfully applied in English teaching. Based on computer aided technology and big data corpus management, this paper improves the traditional teaching method into an innovative teaching mode with a big data corpus as English learning resource. On this basis, a computer multimedia teaching system was set up to realize automatic matching of subtitles and vivid restoration of contexts. The teaching system achieved excellent results in application verification. The research results can promote computer technology in English teaching.

24

Mocanu, Bogdan, and Ruxandra Tapu. "Automatic Subtitle Synchronization and Positioning System Dedicated to Deaf and Hearing Impaired People." IEEE Access 9 (2021): 139544–55. http://dx.doi.org/10.1109/access.2021.3119201.

25

Marasek, Krzysztof, Danijel Koržinek, and Łukasz Brocki. "System for Automatic Transcription of Sessions of the Polish Senate." Archives of Acoustics 39, no. 4 (March 1, 2015): 501–9. http://dx.doi.org/10.2478/aoa-2014-0054.

Abstract:

This paper describes research behind a Large-Vocabulary Continuous Speech Recognition (LVCSR) system for the transcription of Senate speeches for the Polish language. The system utilizes several components: a phonetic transcription system, language and acoustic model training systems, a Voice Activity Detector (VAD), an LVCSR decoder, and a subtitle generator and presentation system. Some of the modules relied on already available tools and some had to be made from the beginning, but the authors ensured that they used the most advanced techniques they had available at the time. Finally, several experiments were performed to compare the performance of both more modern and more conventional technologies.

26

Shcherbak, Olena, Nataliya Shamanova, Svitlana Kaleniuk, Arkadii Proskurin, and Larisa Yeganova. "Improvement of automatic speech recognition skills of linguistics students through using Ukrainian-English and Ukrainian-German subtitles in publicistic movies." Revista Amazonia Investiga 11, no. 53 (July 4, 2022): 26–33. http://dx.doi.org/10.34069/ai/2022.53.05.3.

Abstract:

The world's increased attention to foreign language studies facilitates the development and improvement of the language study system in higher education institutions. Such a system takes into account and promptly responds to the demands of today's multicultural society. Everything should start with the transformation and modernization of the higher education system, including the introduction of innovative technologies in the study of English and German focused on the modern demands of the world labor market. All this determines the relevance of the research. This article aims to establish ways for students to gain automatic recognition skills through subtitling Ukrainian-English and Ukrainian-German publicistic movies and series. A first assessment of a new language audio and video corpus was developed at the Admiral Makarov National University of Shipbuilding, using an automatic subtitling mechanism to improve linguistics students' recognition and understanding of oral speech. The skills and abilities that improved during work with the educational movie corpus have been identified.

27

Bjekic, Jovana, Ljiljana Lazarevic, Marko Zivanovic, and Goran Knezevic. "Psychometric evaluation of the Serbian dictionary for automatic text analysis - LIWCser." Psihologija 47, no. 1 (2014): 5–32. http://dx.doi.org/10.2298/psi1401005b.

Abstract:

LIWC (Linguistic Inquiry and Word Count) is widely used word-level content analysis software. It has been used in a large number of studies in the fields of clinical, social, and personality psychology, and it has been adapted for text analysis in 11 world languages. The aim of this research was to empirically validate the newly constructed adaptation of the LIWC software for the Serbian language (LIWCser). The sample consisted of 384 texts in Serbian and 141 texts in English, including scientific paper abstracts, newspaper articles, movie subtitles, short stories, and essays. A comparative analysis of the Serbian and English versions of the software demonstrated an acceptable level of equivalence (ICCM = .70). Average coverage of the texts by the LIWCser dictionary was 69.93%, and the variability of this measure across different types of texts was in line with expectations. The adaptation of the LIWC software for Serbian opens entirely new possibilities for the assessment of spontaneous verbal behaviour, which is highly relevant for different fields of psychology.
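
As a small illustration of the coverage statistic quoted above (the share of word tokens matched by the dictionary), the following toy function computes it for a plain word list; real LIWC-style dictionaries use categorized entries and wildcard stems, which are omitted here, and the word list is invented rather than taken from LIWCser.

```python
# Toy dictionary-coverage computation in the spirit of the statistic above.
import re

dictionary = {"ja", "film", "je", "dobro"}   # toy word list, not LIWCser content

def coverage(text: str) -> float:
    tokens = re.findall(r"\w+", text.lower())
    matched = sum(token in dictionary for token in tokens)
    return matched / len(tokens) if tokens else 0.0

print(f"{coverage('Ja gledam film i film je dobro') * 100:.1f}% of tokens covered")
```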

28

Wu, Qiuhao, and Rong Liu. "A Comparative Study of Different Online Teaching Video Modes." International Journal of Emerging Technologies in Learning (iJET) 17, no. 17 (September 8, 2022): 247–60. http://dx.doi.org/10.3991/ijet.v17i17.29353.

Abstract:

In order to study the influence of different modes of online educational video, used as teaching materials, on learners' learning outcomes, 60 college students were invited to take part in an experiment. As experimental materials, traditionally recorded and automatically recorded videos, which account for a large proportion of online education, were selected. The viewing process was recorded with an eye tracker, and the learning differences were reflected by subjective evaluation and objective indicators. It was found that, compared with the automatic video, the traditional video was more attractive to the subjects and held their concentration better, but the cognitive load was higher, and there was no significant difference between the two groups in terms of performance. In addition, learners' attention in the different modes was distributed mainly to illustrations, followed by the presenter's portrait and the subtitles; between pictures and text, pictures were the main mode. The research can provide a scientific basis for video designers to select appropriate modes to highlight key information, help teachers and learners choose more suitable learning videos from the large number of educational videos available, and help online education providers improve learners' experience of videos and enhance their competitiveness.

29

Silva, Carlos Eduardo, and Lincoln Fernandes. "Apresentando o copa-trad versão 2.0 um sistema com base em corpus paralelo para pesquisa, ensino e prática da tradução." Ilha do Desterro A Journal of English Language, Literatures in English and Cultural Studies 73, no. 1 (January 31, 2020): 297–316. http://dx.doi.org/10.5007/2175-8026.2020v73n1p297.

Abstract:

This paper describes COPA-TRAD Version 2.0, a parallel corpus-based system developed at the Universidade Federal de Santa Catarina (UFSC) for translation research, teaching and practice. COPA-TRAD enables the user to investigate the practices of professional translators by identifying translational patterns related to a particular element or linguistic pattern. In addition, the system allows for the comparison between human translation and automatic translation provided by three well-known machine translation systems available on the Internet (Google Translate, Microsoft Translator and Yandex). Currently, COPA-TRAD incorporates five subcorpora (Children's Literature, Literary Texts, Meta-Discourse in Translation, Subtitles and Legal Texts) and provides the following tools: parallel concordancer, monolingual concordancer, wordlist and a DIY Tool that enables the user to create his own parallel disposable corpus. The system also provides a POS-tagging tool interface to analyze and classify the parts of speech of a text.

30

Ezcurdia, Iñigo, Adriana Arregui, Oscar Ardaiz, Amalia Ortiz, and Asier Marzo. "Content Adaptation and Depth Perception in an Affordable Multi-View Display." Applied Sciences 10, no. 20 (October 21, 2020): 7357. http://dx.doi.org/10.3390/app10207357.

Abstract:

We present SliceView, a simple and inexpensive multi-view display made with multiple parallel translucent sheets that sit on top of a regular monitor; each sheet reflects different 2D images that are perceived cumulatively. A technical study is performed on the reflected and transmitted light for sheets of different thicknesses. A user study compares SliceView with a commercial light-field display (LookingGlass) regarding the perception of information at multiple depths. More importantly, we present automatic adaptations of existing content to SliceView: 2D layered graphics such as retro-games or painting tools, movies and subtitles, and regular 3D scenes with multiple clipping z-planes. We show that it is possible to create an inexpensive multi-view display and automatically adapt content for it; moreover, the depth perception on some tasks is superior to the one obtained in a commercial light-field display. We hope that this work stimulates more research and applications with multi-view displays.

31

Bang, Jeong-Uk, Mu-Yeol Choi, Sang-Hun Kim, and Oh-Wook Kwon. "Automatic Construction of a Large-Scale Speech Recognition Database Using Multi-Genre Broadcast Data with Inaccurate Subtitle Timestamps." IEICE Transactions on Information and Systems E103.D, no. 2 (February 1, 2020): 406–15. http://dx.doi.org/10.1587/transinf.2019edp7234.

32

Wang, Hui Hui. "Speech Recorder and Translator using Google Cloud Speech-to-Text and Translation." Journal of IT in Asia 9, no. 1 (November 30, 2021): 11–28. http://dx.doi.org/10.33736/jita.2815.2021.

Abstract:

The most popular video website, YouTube, has about 2 billion users worldwide who speak and understand different languages. Subtitles are essential for users to get the message of a video. However, not all video owners provide subtitles for their videos, so potential audiences may have difficulty understanding the video content. Thus, this study proposed a speech recorder and translator to solve this problem. The general concept of this study was to combine Automatic Speech Recognition (ASR) and translation technologies to recognize the video content and translate it into other languages. This paper compared and discussed three different ASR technologies: Google Cloud Speech-to-Text, Limecraft Transcriber, and VoxSigma. Finally, the proposed system used Google Cloud Speech-to-Text because it supports more languages than Limecraft Transcriber and VoxSigma and is more flexible to use together with Google Cloud Translation. This paper also includes a questionnaire about the crucial features of the speech recorder and translator. A total of 19 university students participated in the questionnaire, and most of the respondents stated that high translation accuracy is vital for the proposed system. This paper also discusses related work on speech recording and translation: a study that compared speech recognition between ordinary voices and speech-impaired voices and used a mobile application to record acoustic voice input. Compared to that existing mobile app, this project proposes a web application, which makes it a different and new study, especially in terms of development and user experience. Finally, this project developed the proposed system successfully. The results showed that Google Cloud Speech-to-Text and Translation were reliable for video translation. However, the system could not recognize speech when the background music was too loud, and overly direct translation remained a challenge, so future research may need a custom-trained model. In conclusion, the proposed system contributes the new idea of a web application to overcome the language barrier on video-watching platforms.
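
The combination described above can be sketched with the publicly available Google Cloud client libraries. The snippet below is a minimal illustration rather than the system built in the paper: the audio encoding, sample rate, language codes, and file name are assumptions, and error handling, long-running recognition, and subtitle timing are omitted.

```python
# Transcribe a short clip with Google Cloud Speech-to-Text, then translate it.
# Assumes the google-cloud-speech and google-cloud-translate packages and
# application-default credentials; parameters are illustrative.
from google.cloud import speech
from google.cloud import translate_v2 as translate

def transcribe_and_translate(wav_path: str, target_language: str = "ms") -> str:
    stt = speech.SpeechClient()
    with open(wav_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = stt.recognize(config=config, audio=audio)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    result = translate.Client().translate(transcript, target_language=target_language)
    return result["translatedText"]

# Example: print(transcribe_and_translate("clip.wav", target_language="ms"))
```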

33

Phan, Thuong-Cang, Anh-Cang Phan, Hung-Phi Cao, and Thanh-Ngoan Trieu. "Content-Based Video Big Data Retrieval with Extensive Features and Deep Learning." Applied Sciences 12, no. 13 (July 3, 2022): 6753. http://dx.doi.org/10.3390/app12136753.

Abstract:

In the era of digital media, the rapidly increasing volume and complexity of multimedia data cause many problems in storing, processing, and querying information in a reasonable time. Feature extraction and processing time play an extremely important role in large-scale video retrieval systems and currently receive much attention from researchers. We, therefore, propose an efficient approach to feature extraction on big video datasets using deep learning techniques. It focuses on the main features, including subtitles, speeches, and objects in video frames, by using a combination of three techniques: optical character recognition (OCR), automatic speech recognition (ASR), and object identification with deep learning techniques. We provide three network models developed from networks of Faster R-CNN ResNet, Faster R-CNN Inception ResNet V2, and Single Shot Detector MobileNet V2. The approach is implemented in Spark, the next-generation parallel and distributed computing environment, which reduces the time and space costs of the feature extraction process. Experimental results show that our proposal achieves an accuracy of 96% and a processing time reduction of 50%. This demonstrates the feasibility of the approach for content-based video retrieval systems in a big data context.

34

Polat, Huseyin, and Saadin Oyucu. "Building a Speech and Text Corpus of Turkish: Large Corpus Collection with Initial Speech Recognition Results." Symmetry 12, no. 2 (February 17, 2020): 290. http://dx.doi.org/10.3390/sym12020290.

Abstract:

To build automatic speech recognition (ASR) systems with a low word error rate (WER), a large speech and text corpus is needed. Corpus preparation is the first step required for developing an ASR system for a language with few argument speech documents available. Turkish is a language with limited resources for ASR. Therefore, development of a symmetric Turkish transcribed speech corpus according to the high resources languages corpora is crucial for improving and promoting Turkish speech recognition activities. In this study, we constructed a viable alternative to classical transcribed corpus preparation techniques for collecting Turkish speech data. In the presented approach, three different methods were used. In the first step, subtitles, which are mainly supplied for people with hearing difficulties, were used as transcriptions for the speech utterances obtained from movies. In the second step, data were collected via a mobile application. In the third step, a transfer learning approach to the Grand National Assembly of Turkey session records (videotext) was used. We also provide the initial speech recognition results of artificial neural network and Gaussian mixture-model-based acoustic models for Turkish. For training models, the newly collected corpus and other existing corpora published by the Linguistic Data Consortium were used. In light of the test results of the other existing corpora, the current study showed the relative contribution of corpus variability in a symmetric speech recognition task. The decrease in WER after including the new corpus was more evident with increased verified data size, compensating for the status of Turkish as a low resource language. For further studies, the importance of the corpus and language model in the success of the Turkish ASR system is shown.

35

Chinita, Fátima. "Tapping into the senses: Corporeality and immanence in The Piano Tuner of EarthQuakes (Quay Brothers, 2005)." Empedocles: European Journal for the Philosophy of Communication 10, no. 2 (November 1, 2019): 151–66. http://dx.doi.org/10.1386/ejpc_00004_1.

Abstract:

Abstract In The Piano Tuner of EarthQuakes (2006), the Quay Brothers' second feature, the sensual form and the meta-artistic content are truly interweaved, and the siblings' staple animated materials become part of the theme itself. Using Michel Serres's argument in Les cinq sens (2014, whose subtitle in English is A Philosophy of Mingled Bodies), I address the relationship between the Quays intermedial animation and the way the art forms of music, painting, theatre and sculpture are used to captivate the film viewer's sensorium in the same way that some of the characters are fascinated by the evil Droz, a scientist and failed composer who manipulates machines and people alike, among them Felisberto, a meek piano tuner with the ability to stir the natural elements. I further proceed to posit the entire film as an intended allegory of animation on the Quays part. Their haptic construction of a three-dimensional world which they control artistically is replicated in the film in Droz's and Felisberto's activities vis-à-vis Malvina van Stille, an abducted opera diva who is kept in a suspended animation state (just like a marionette) and several hydraulic automata with musical resounding properties, some of them made up of an uncanny assortment of body parts. The artificial life of these creatures is contrasted, in two ways, with their physical reality as beings that exist in the world: first, via Serres's sensorial strategy to transform a body into a conscious entity (i.e., endowed with a soul), an embodiment I call 'Corpo-Reality', and second, by resorting to Deleuze and Guattari's theory of the body without organs (BwO) in its advocacy of 'hard' nature and the rejection of a rigid assortment of body parts (either biological or social). The paradoxical organic objectivity of the 'marionettized' Malvina is pitted against the seemingly subjective doings of the mechanical automata, especially an android woodcutter. However, just as in the story things are not what they seem, and the automata actually reflect the 'real' world of Felisberto's tuning of them (and vice versa, in a process entitled 'vertical mise en abyme'), so the film itself can be a 'crystal-image' (per Deleuze), offering itself to the senses of the spectator.

36

Crawford, Kate. "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." Perspectives on Science and Christian Faith 74, no. 1 (March 2022): 61–62. http://dx.doi.org/10.56315/pscf3-22crawford.

Abstract:

ATLAS OF AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford. New Haven, CT: Yale University Press, 2021. 336 pages. Hardcover; $28.00. ISBN: 9780300209570. *Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is Kate Crawford's analysis of the state of the AI industry. A central idea of her book is the importance of redefining Artificial Intelligence (AI). She states, "I've argued that there is much at stake in how we define AI, what its boundaries are, and who determines them: it shapes what can be seen and contested" (p. 217). *My own definition of AI goes something like this: I imagine a future where I'm sitting in a cafe drinking coffee with my friends, but in this future, one of my friends is a robot, who like me is trying to make a living in this world. A future where humans and robots live in harmony. Crawford views this definition as mythological: "These mythologies are particularly strong in the field of artificial intelligence, where the belief that human intelligence can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century" (p. 5). I do not know if my definition of artificial intelligence can come true, but I am enjoying the process of building, experimenting, and dreaming. *In her book, she asks me to consider that I may be unknowingly participating, as she states, in "a material product of colonialism, with its patterns of extraction, conflict, and environmental destruction" (p. 38). The book's subtitle illuminates the purpose of the book: specifically, the power, politics, and planetary costs of usurping artificial intelligence. Of course, this is not exactly Crawford's subtitle, and this is where I both agree and disagree with her. The book's subtitle is actually Power, Politics, and the Planetary Costs of Artificial Intelligence. In my opinion, AI is more the canary in the coal mine. We can use the canary to detect the poisonous gases, but we cannot blame the canary for the poisonous gas. It risks missing the point. Is AI itself to be feared? Should we no longer teach or learn AI? Or is this more about how we discern responsible use and direction for AI technology? *There is another author who speaks to similar issues. In Weapons of Math Destruction, Cathy O'Neil states it this way, "If we had been clear-headed, we all would have taken a step back at this point to figure out how math had been misused ... But instead ... new mathematical techniques were hotter than ever ... A computer program could speed through thousands of resumes or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top" (p. 13). *Both Crawford and O'Neil point to human flaws that often lead to well-intentioned software developers creating code that results in unfair and discriminatory decisions. AI models encode unintended human biases that may not evaluate candidates as fairly as we would expect, yet there is a widespread notion that we can trust the algorithm. For example, the last time you registered an account on a website, did you click the checkbox confirming that "yes, I read the disclaimer" even though you did not? When we click "yes" we are accepting this disclaimer and placing trust in the software. Business owners place trust in software when they use it to make predictions. Engineers place trust in their algorithms when they write software without rigorous testing protocols. I am just as guilty. 
*Crawford suggests that AI is often used in ways that are harmful. In the Atlas of AI we are given a tour of how technology is damaging our world: strip mining, labor injustice, the misuse of personal data, issues of state and power, to name a few of the concerns Crawford raises. The reality is that AI is built upon existing infrastructure. For example, Facebook, Instagram, YouTube, Amazon, TikTok have been collecting our information for profit even before AI became important to them. The data centers, CPU houses, and worldwide network infrastructure were already in place to meet consumer demand and geopolitics. But it is true that AI brings new technologies to the table, such as automated face recognition and decision tools to compare prospective employment applicants with diverse databases and employee monitoring tools that can make automatic recommendations. Governments, militaries, and intelligence agencies have taken notice. As invasion of privacy and social justice concerns emerge, Crawford calls us to consider these issues carefully. *Reading Crawford's words pricked my conscience, convicting me to reconsider my erroneous ways. For big tech to exist, to supply what we demand, it needs resources. She walks us through the many resources the technology industry needs to provide what we want, and AI is the "new kid on the block." This book is not about AI, per se; it is instead about the side effects of poor business/research practices, opportunist behavior, power politics, and how these behaviors not only exploit our planet but also unjustly affect marginalized people. The AI industry is simply a new example of this reality: data mining, low wages to lower costs, foreign workers with fewer rights, strip mining, relying on coal and oil for electricity (although some tech companies have made strides to improve sustainability). This sounds more like a parable about the sins of the tech industry than a critique about the dangers of AI. *Could the machine learning community, like the inventors of dynamite who wanted to simply help railroads excavate tunnels, be unintentionally causing harm? Should we, as a community, be on the lookout for these potential harms? Do we have a moral responsibility? Maybe the technology sector needs to look more inwardly to ensure that process efficiency and cost savings are not elevated as most important. *I did not agree with everything that Crawford classified as AI, but I do agree that as a community we are responsible for our actions. If there are injustices, then this should be important to us. In particular, as people of faith, we should heed the call of Micah 6:8 to act justly in this world, and this includes how we use AI. *Reviewed by Joseph Vybihal, Professor of Computer Science, McGill University, Montreal, PQ H3A 0G4.

37

Joshi, Aditi M., and Sanjay G. Patel. "An Ancient Number Recognition using Freeman Chain Code with Deep Learning Approach." Computer Science & Engineering: An International Journal 12, no. 1 (February 28, 2022): 9–18. http://dx.doi.org/10.5121/cseij.2022.12102.

Abstract:

Sanskrit character and number documents have a lot of errors. Correcting those errors using conventional spell-checking approaches breaks down due to the limited vocabulary. This is because of the high inflexion of Sanskrit, where words are dynamically formed by Sandhi rules, Samasa rules, Taddhita affixes, etc. Therefore, correcting OCR documents requires huge effort. Here, we present different machine learning approaches and various ways to improve features for ameliorating error correction in Sanskrit documents. Simulation of a Sanskrit dictionary for synthesizing an off-the-shelf dictionary can be done. Most of the proposed methods can also work for general Sanskrit word corrections and Hindi word corrections. Handwriting recognition in Indic scripts, like Devanagari, is very challenging due to the subtleties of the scripts, variations in rendering, and the cursive nature of the handwriting. The lack of public handwriting datasets in Indic scripts has long stymied the development of offline handwritten word recognizers and made comparison across different methods a tedious task in the field. In this paper, a new handwritten word dataset for Devanagari, IIIT-HW-Dev, will be released to alleviate some of these issues. This is required for the successful training of deep learning architectures: the availability of huge amounts of training data is crucial, as any typical architecture contains millions of parameters. A new method for the classification of Freeman chain code using four-connectivity and eight-connectivity events with a deep learning approach is presented. Application of the CNN LeNet-5 is found to be suitable in this case, as the numbers are formed with curved lines. In contrast with existing FCC event data analysis techniques, sampled grey images of the existing events are not used; instead, image files of the three-phase PQ event data are analysed, taking advantage of the success of the deep learning approach on image-file classification. Therefore, the novelty of the proposed approach is that image files of the voltage waveforms of the three phases of the power grid are classified. It is shown that the test data can be classified with 100% accuracy. The proposed work is believed to serve the needs of future smart grid applications, which are fast and take automatic countermeasures against potential PQ events.
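
Since the abstract centres on Freeman chain codes, here is a small, self-contained illustration of how an 8-connectivity chain code encodes a traced boundary; it is not the paper's implementation, and the four/eight-connectivity event classification and the LeNet-5 model are not reproduced.

```python
# 8-connectivity Freeman chain code: each move between consecutive boundary
# pixels is encoded as a direction 0-7, counted counter-clockwise from east,
# with the y axis pointing up.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(points):
    """points: boundary pixels as (x, y) tuples, listed in traversal order."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

# A unit square traced counter-clockwise from the origin: east, north, west, south.
print(freeman_chain_code([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))  # [0, 2, 4, 6]
```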

38

Joshaline, Chellappa Mabel, Subathra M., Shyamala M., Padmavathy S., and Rekha Rekha. "Automated Teller Machine (ATM) – A “Pathogen City” – A Surveillance Report from Locations in and around Madurai City, Tamil Nadu, India." International Journal of Public Health Science (IJPHS) 3, no. 1 (March 1, 2014): 51. http://dx.doi.org/10.11591/ijphs.v3i1.4674.

Abstract:

ATMs are used by millions of people every day and are meant to be public utility devices, so microorganisms inevitably play a major role in this shared environment. On this account, an elaborate survey was undertaken for a complete assessment of the microbiology of ATMs in and around Madurai city. Swabs were collected from each ATM screen, its buttons, the floor and users' hands, exposure plates were also used, and the work was extended to the microorganisms prevalent in ladies' toilets. The samples collected from the ATMs were plated on nutrient agar. The results showed an increased bacterial count, and characterization of the most prevalent pathogens identified E. coli, Pseudomonas, Staphylococcus aureus, Klebsiella, Micrococcus, Salmonella and Serratia, with the fungal species including Aspergillus sp., Mucor sp. and Fusarium. An antibiogram study of the bacteria also provides information about the antibiotic resistance pattern of the isolates.

39

Joshaline, Chellappa Mabel, Subathra M., Shyamala M., Padmavathy S., and Rekha Rekha. "Automated Teller Machine (ATM) – A “Pathogen City” – A Surveillance Report from Locations in and around Madurai City, Tamil Nadu, India." International Journal of Public Health Science (IJPHS) 3, no. 1 (March 1, 2014): 51. http://dx.doi.org/10.11591/.v3i1.4674.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

ATMs are used by millions of people every day and are meant to be public utility devices, so microorganisms inevitably play a major role in this shared environment. On this account, an elaborate survey was undertaken for a complete assessment of the microbiology of ATMs in and around Madurai city. Swabs were collected from each ATM screen, its buttons, the floor and users' hands, exposure plates were also used, and the work was extended to the microorganisms prevalent in ladies' toilets. The samples collected from the ATMs were plated on nutrient agar. The results showed an increased bacterial count, and characterization of the most prevalent pathogens identified E. coli, Pseudomonas, Staphylococcus aureus, Klebsiella, Micrococcus, Salmonella and Serratia, with the fungal species including Aspergillus sp., Mucor sp. and Fusarium. An antibiogram study of the bacteria also provides information about the antibiotic resistance pattern of the isolates.

40

Heath, Malcolm. "Greek Literature." Greece and Rome 68, no. 1 (March 5, 2021): 114–20. http://dx.doi.org/10.1017/s0017383520000285.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

I begin with a warm welcome for Evangelos Alexiou's Greek Rhetoric of the 4th Century bc, a ‘revised and slightly abbreviated’ version of the modern Greek edition published in 2016 (ix). Though the volume's title points to a primary focus on the fourth century, sufficient attention is given to the late fifth and early third centuries to provide context. As ‘rhetoric’ in the title indicates, the book's scope is not limited to oratory: Chapter 1 outlines the development of a rhetorical culture; Chapter 2 introduces theoretical debates about rhetoric (Plato, Isocrates, Alcidamas); and Chapter 3 deals with rhetorical handbooks (Anaximenes, Aristotle, and the theoretical precepts embedded in Isocrates). Oratory comes to the fore in Chapter 4, which introduces the ‘canon’ of ten Attic orators: in keeping with the fourth-century focus, Antiphon, Andocides, and Lysias receive no more than sporadic attention; conversely, extra-canonical fourth-century orators (Apollodorus, the author of Against Neaera, Hegesippus, and Demades) receive limited coverage. The remaining chapters deal with the seven major canonical orators: Isocrates, Demosthenes, Aeschines, Isaeus, Lycurgus, Hyperides, and Dinarchus. Each chapter follows the same basic pattern: life, work, speeches, style, transmission of text and reception. Isocrates and Demosthenes have additional sections on research trends and on, respectively, Isocratean ideology and issues of authenticity in the Demosthenic corpus. In the case of Isaeus, there is a brief discussion of contract oratory; Lycurgus is introduced as ‘the relentless prosecutor’. Generous extracts from primary sources are provided, in Greek and in English translation; small-type sections signal a level of detail that some readers may wish to pass over. The footnotes provide extensive references to older as well as more recent scholarship. The thirty-page bibliography is organized by chapter (a helpful arrangement in a book of this kind, despite the resulting repetition); the footnotes supply some additional references. Bibliographical supplements to the original edition have been supplied ‘only in isolated cases’ (ix). In short, this volume is a thorough, well-conceived, and organized synthesis that will be recognized, without doubt, as a landmark contribution. There are, inevitably, potential points of contention. The volume's subtitle, ‘the elixir of democracy and individuality’, ties rhetoric more closely to democracy and to Athens than is warranted: the precarious balancing act which acknowledges that rhetoric ‘has never been divorced from human activity’ while insisting that ‘its vital political space was the democracy of city-states’ (ix–x) seems to me untenable. Alexiou acknowledges that ‘the gift of speaking well, natural eloquence, was considered a virtue already by Homer's era’ (ix), and that ‘the natural gift of speaking well was considered a virtue’ (1). But the repeated insistence on natural eloquence is perplexing. Phoenix, in the embassy scene in Iliad 9, makes it clear that his remit included the teaching of eloquence (Il. 9.442, διδασκέμεναι): Alexiou only quotes the following line, which he mistakenly assigns to Book 10. (The only other typo that I noticed was ‘Aritsotle’ [97]. I, too, have a tendency to mistype the Stagirite's name, though my own automatic transposition is, alas, embarrassingly scatological.) 
Alexiou provides examples of later Greek assessments of fourth-century orators, including (for example) Dionysius of Halicarnassus, Hermogenes, and the author of On Sublimity (the reluctance to commit to the ‘pseudo’ prefix is my, not Alexiou's, reservation). He observes cryptically that ‘we are aware of Didymus’ commentary’ (245); but the extensive late ancient scholia, which contain material from Menander's Demosthenic commentaries, disappointingly evoke no sign of awareness.

41

Chan, Wing Shan, Jan-Louis Kruger, and Stephen Doherty. "Comparing the impact of automatically generated and corrected subtitles on cognitive load and learning in a first- and second-language educational context." Linguistica Antverpiensia, New Series – Themes in Translation Studies 18 (January 10, 2020). http://dx.doi.org/10.52034/lanstts.v18i0.506.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The addition of subtitles to videos has the potential to benefit students across the globe in a context where online video lectures have become a major channel for learning, particularly because, for many, language poses a barrier to learning. Automated subtitling, created with the use of speech-recognition software, may be a powerful way to make this a scalable and affordable solution. However, in the absence of thorough post-editing by human subtitlers, this mode of subtitling often results in serious errors that arise from problems with speech recognition, accuracy, segmentation and presentation speed. This study therefore aims to investigate the impact of automated subtitling on student learning in a sample of English first- and second-language speakers. Our results show that high error rates and high presentation speeds reduce the potential benefit of subtitles. These findings provide an important foundation for future studies on the use of subtitles in education.
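As a rough illustration of how presentation speed is usually quantified in studies of this kind, the Python sketch below computes words per minute and characters per second for a single subtitle from its display duration. The threshold values and the sample subtitle are illustrative assumptions, not figures taken from the study.

```python
# Illustrative sketch: quantifying subtitle presentation speed.
# The 17 cps / 180 wpm limits are common rules of thumb,
# not values reported in the study above.

def presentation_speed(text: str, duration_seconds: float) -> dict:
    """Return words-per-minute and characters-per-second for one subtitle."""
    words = len(text.split())
    chars = len(text.replace("\n", " "))
    return {
        "wpm": words / duration_seconds * 60,
        "cps": chars / duration_seconds,
    }


if __name__ == "__main__":
    speed = presentation_speed(
        "Automated subtitling, created with\nspeech-recognition software", 3.5
    )
    print(f"{speed['wpm']:.0f} wpm, {speed['cps']:.1f} cps")
    # Flag subtitles that would likely be too fast for comfortable reading.
    if speed["cps"] > 17 or speed["wpm"] > 180:
        print("Presentation speed exceeds common comfort thresholds.")
```

A check of this sort makes the study's point tangible: the same text shown for a shorter duration quickly crosses the comfort thresholds that viewers, and especially second-language viewers, depend on.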

42

Vitikainen, Kaisa, and Maarit Koponen. "Automation in the Intralingual Subtitling Process." Journal of Audiovisual Translation 4, no. 3 (December 28, 2021). http://dx.doi.org/10.47476/jat.v4i3.2021.197.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The demand for intralingual subtitles for television and video content is increasing. In Finland, major broadcasting companies are required to provide intralingual subtitles for all or a portion of their programming in Finnish and Swedish, excluding certain live events. To meet this need, technology could offer solutions in the form of automatic speech recognition and subtitle generation. Although fully automatic subtitles may not be of sufficient quality to be accepted by the target audience, they can be a useful tool for the subtitler. This article presents research conducted as part of the MeMAD project, where automatically generated subtitles for Finnish were tested in professional workflows with four subtitlers. We discuss observations regarding the effect of automation on productivity based on experiments where participants subtitled short video clips from scratch, by respeaking and by post-editing automatically generated subtitles, as well as the subtitlers’ experience based on feedback collected with questionnaires and interviews. Lay summary: This article discusses how technology can help create subtitles for television programmes and videos. Subtitles in the same language as the content help the Deaf and the hard-of-hearing to access television programmes and videos. They are also useful, for example, for language learning or for watching videos in noisy places. Demand for subtitles is growing, and many countries also have laws that demand same-language subtitles. For example, major broadcasters in Finland must offer same-language subtitles for some programmes in Finnish and Swedish. However, broadcasters usually have limited time and money for subtitling. One useful tool could be speech recognition technology, which automatically converts speech to text. Subtitles made with speech recognition alone are not good enough yet, and need to be edited. We used speech recognition to automatically produce same-language subtitles in Finnish. Four professional subtitlers edited them to create subtitles for short videos. We measured the time and the number of keystrokes they needed for this task and compared the results to see whether this made subtitling faster. We also asked how the participants felt about using automatic subtitles in their work. This study shows that speech recognition can be a useful tool for subtitlers, but the quality and usability of technology are important.
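One simple way to approximate how much editing automatic subtitles required, complementary to the time and keystroke measures described above, is to compute the word error rate between the raw machine output and the post-edited version. The sketch below is a generic illustration using a standard edit-distance calculation; it is not the measurement procedure used in the MeMAD experiments, and the example strings are invented.

```python
# Illustrative sketch: word error rate (WER) between raw ASR subtitles
# and their post-edited versions, as a rough proxy for editing effort.
# This is not the metric reported in the study above.

def word_error_rate(hypothesis: str, reference: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    raw = "demand for in real lingual subtitles is increasing"
    edited = "demand for intralingual subtitles is increasing"
    print(f"WER: {word_error_rate(raw, edited):.2f}")
```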

43

Orellana, Anymir, Georgina Arguello, and Elda Kanzki-Veloso. "Online Presentations with PowerPoint Present Live Real-Time Automated Captions and Subtitles: Perceptions of Faculty and Administrators." Online Learning 26, no. 2 (June 1, 2022). http://dx.doi.org/10.24059/olj.v26i2.2763.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Captioning of recorded videos is beneficial to many and a matter of compliance with accessibility regulations and guidelines. Like recorded captions, real-time captions can also be means to implement the Universal Design for Learning checkpoint to offer text-based alternatives to auditory information. A cost-effective solution to implement the checkpoint for live online presentations is to use speech recognition technologies to generate automated captions. In particular, Microsoft PowerPoint Present Live (MSPL) is an application that can be used to present with real-time automated captions and subtitles in multiple languages, allowing individuals to follow the presentation in their preferred language. The purpose of this study was to identify challenges that participants could encounter when using the MSPL feature of real-time automated captions/subtitles, and to determine what they describe as potential uses, challenges, and benefits of the feature. Participants were full-time faculty and administrators with a faculty appointment in a higher education institution. Data from five native English speakers and five native Spanish speakers were analyzed. Activities of remote usability testing and interviews were conducted to collect data. Overall, participants did not encounter challenges that they could not overcome and described MSPL as an easy-to-use and useful tool to present with captions/subtitles for teaching or training and to reach English and Spanish-speaking audiences. The themes that emerged as potential challenges were training, distraction, and technology. Findings are discussed and further research is recommended.

44

Yao, Guangyuan. "Evaluation of Machine Translation in English-Chinese Automatic Subtitling of TED Talks." Modern Languages, Literatures, and Linguistics, August 2022. http://dx.doi.org/10.56968/mlll.v1i01.68.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

With technology becoming an essential competitive factor for subtitle translation services, the quality of machine subtitle translation in the current era deserves further examination. This paper examines the Chinese translation of 20 TED Talks subtitled by automatic subtitling software within the framework of the FAR model, assessing whether the machine-translated subtitles contain semantic, contextual, grammatical or spelling errors, idiomatic errors, problems with segmentation and synchronization, or inappropriate reading speed and paragraph length. It also emphasizes the need for new perspectives on the quality of machine subtitle translation.
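To make the readability-oriented part of such an assessment concrete, the hedged sketch below parses SRT-style subtitle blocks and flags those whose reading speed or line length exceeds illustrative limits. The thresholds and the sample data are assumptions for demonstration only, not parameters of the FAR model or of the study above.

```python
# Illustrative sketch: flagging SRT subtitles whose reading speed or
# line length exceeds example limits (the limits are assumptions,
# not the FAR model's own parameters).
import re
from datetime import timedelta

TIMESTAMP = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def to_seconds(ts: str) -> float:
    """Convert an SRT timestamp such as 00:00:01,000 to seconds."""
    h, m, s, ms = map(int, TIMESTAMP.match(ts).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms).total_seconds()

def check_srt(srt_text: str, max_cps: float = 17.0, max_line_len: int = 42):
    """Yield (index, problem) pairs for subtitles that break the limits."""
    for block in srt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        index, timing, text_lines = lines[0], lines[1], lines[2:]
        start, end = (to_seconds(t.strip()) for t in timing.split("-->"))
        duration = max(end - start, 0.001)
        cps = len(" ".join(text_lines)) / duration
        if cps > max_cps:
            yield index, f"reading speed {cps:.1f} cps exceeds {max_cps}"
        for line in text_lines:
            if len(line) > max_line_len:
                yield index, f"line of {len(line)} chars exceeds {max_line_len}"

if __name__ == "__main__":
    sample = """1
00:00:01,000 --> 00:00:02,200
This automatically generated subtitle is far too long to read comfortably.
"""
    for idx, problem in check_srt(sample):
        print(f"subtitle {idx}: {problem}")
```

Automated checks like this only cover the mechanical side of readability; the semantic, contextual and idiomatic errors discussed in the paper still require human evaluation.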

45

Hollier, Scott, Katie M. Ellis, and Mike Kent. "User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1259.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr, a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated:Perhaps, in time, an invention will be perfected that will enable the deaf to hear the “talkies”, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time. (Ladner, cited in Downey Closed Captioning)This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, legislative activism, as well as increasing consumer demand. This began in the late 1950s when the technology to develop captions began to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, Constructing Closed-Captioning in the Public Interest). Then, between the 1970s and 1990s Deaf activists and their allies began to campaign aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed/manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements with their Australian standards regarding television sets imported into the country. As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements.Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guideline’s (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and from the Australian Human Rights Commission (2014).Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation and the industry’s approach. Guidelines such as the W3C are often resisted by industry until compliance is legislated.Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances, they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development; user-generated captions. 
However, within disability studies, research around the agency of this activity—and the media savvy users facilitating it—has gone significantly underexplored.Agency of ActivityInformation sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levey), and that to achieve these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world, web users who stand out by their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms that are not immediately apparent—as the disability digerati.An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone:Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired. We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee)Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted focus to who could actually contribute to this idea of accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos:The video blogger posts his blog—and the web community provides the captions that help others. (Berners-Lee, cited in Outlaw)Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated to user access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy.In the initial accessibility requirements of the Web, there was little mention of captioning at all, primarily due to video being difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”) in which there was no requirement for videos to be captioned. 
WCAG 2.0 went some way in addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”). However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority.As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, the arrival of MAGpie 2.0 in 2002 provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), leading to an explosion of user-generated video content online. However, while the introduction of YouTube closed captioned video support in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise the captions to the video remained a difficult task.Improvements to the production and availability of user-generated captions arrived firstly through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in terms of making captioning videos easier and ensuring that the timing of captions was accurate, the quality of captions ranged significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars). These inaccurate YouTube captions are colloquially described as craptions. A #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements.The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The captioned file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. The arrival of Amara continues to provide ongoing benefits—it contains a professional captioning editing suite specifically catering for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara offers the additional benefit of being able to address the issues of YouTube automated captions—users can benefit from the machine-generated captions of YouTube in relation to its timing, then download the captions for editing in Amara to fix the issues, then return the captions to the original video, saving a significant amount of time when captioning large amounts of video content. 
In recent years Google have also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia).Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourcing captioning services can’t tap into commercial products instead offering a service for people that have a video they’ve created, or one that already exists on YouTube. While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to make improvements to their captioning. As we discuss in the next section, people have resorted extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online.User-Generated Netflix CaptionsIn recent years there has been a worldwide explosion of subscription video on demand service providers. Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor, Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in both the US and with a large market shares in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success.There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that when using the Netflix player using the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to provide this file to all Netflix users, and is generally referred to as a “soft upload” just for the individual user. Another method to do this, generally credited as the “easiest” way, is to find a SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu that requires a specific keyboard command to access. While this may be considered uncomplicated for some, there is still a certain amount of technical knowledge required to complete this action, and it is likely to be too complex for many users.However, constant developments in technology are assisting with making access to captions an easier process. Recently, Cosmin Vasile highlighted that the ability to add captions and subtitle tracks can still be uploaded providing that the older Silverlight plug-in is used for playback instead of the new HTML5 player. 
Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are some additional tools available online, such as Subflicks, which provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the Smartflix service (Smartflix). Unlike other ad hoc solutions, this service provides a simplified mechanism for bringing alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.”

This automatic download and sharing of captions online—known as fansubbing—facilitates easy access for all. For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, such practices can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled that fansubbers were engaging in illegal activities and were encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting for the greater good, the Dutch anti-piracy association BREIN maintained that subtitles are mainly used by people downloading pirated media and sought to outlaw the manufacture and distribution of third-party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict; however, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted.

Conclusion

This article has focused on user-generated captions and the types of platforms available to create them. It has shown that this desire to provide access, to set the information free, has resulted in the disability digerati finding workarounds that allow users to upload their own captions and make content accessible. Indeed, the idea of the Internet, and then the Web, as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil and Vasile’s instructions to a Netflix community of captioners, to, finally, a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground.

Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing.
Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the limitations of the speed of the medium itself. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix has reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion.

References

Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Arstechnica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>.

Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/>.

Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>.

Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>.

CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>.

Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008.

———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82.

Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–352. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>.

Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63.

Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>.

Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>.

Google. “Automatic Captions in YouTube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>.

Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012.

Levy, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984.

Media Access Australia. “How to Caption a YouTube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>.

New Media Rock Stars. “YouTube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>.
Outlaw. “Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.

Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.

Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.

Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.

Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.

Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.

Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 Jun. 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.

W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 Jun. 2010 <http://www.w3.org/TR/WCAG10/>.

———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.

WGBH. “MAGpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.

YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.

46

Malakul, Sivakorn, and Innwoo Park. "The effects of using an auto-subtitle system in educational videos to facilitate learning for secondary school students: learning comprehension, cognitive load, and satisfaction." Smart Learning Environments 10, no. 1 (January 10, 2023). http://dx.doi.org/10.1186/s40561-023-00224-2.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

While subtitles are considered a primary learning support tool for people who cannot understand video narration in foreign languages, recent advancements in artificial intelligence (AI) technologies have played a pivotal role in automatic subtitling on online video platforms such as YouTube. This study examines the effects of three different types of subtitles in the Thai language (i.e., auto-subtitles, edited subtitles, and no subtitles) on learning comprehension, cognitive load, and satisfaction to determine whether it is feasible to use AI technology as an auto-subtitle system to facilitate online learning with educational videos. To that aim, 79 Thai secondary school students from three Mathayom 5 (Grade 11) computer science classrooms participated in this study. The study used a static-group comparison (a posttest-only control group design). The results show that the auto-subtitle system that generates Thai-language subtitles for English educational videos is more feasible to implement for facilitating online learning than subtitles edited by Thai natives. Therefore, Thai subtitles generated by the auto-subtitle system in English educational videos can facilitate students’ learning comprehension, cognitive load, and satisfaction.

47

Al-Rawi, Ahmed, Carmen Celestini, Nicole Stewart, and Nathan Worku. "How Google Autocomplete Algorithms about Conspiracy Theorists Mislead the Public." M/C Journal 25, no. 1 (March 21, 2022). http://dx.doi.org/10.5204/mcj.2852.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Introduction: Google Autocomplete Algorithms Despite recent attention to the impact of social media platforms on political discourse and public opinion, most people locate their news on search engines (Robertson et al.). When a user conducts a search, millions of outputs, in the form of videos, images, articles, and Websites are sorted to present the most relevant search predictions. Google, the most dominant search engine in the world, expanded its search index in 2009 to include the autocomplete function, which provides suggestions for query inputs (Dörr and Stephan). Google’s autocomplete function also allows users to “search smarter” by reducing typing time by 25 percent (Baker and Potts 189). Google’s complex algorithm is impacted upon by factors like search history, location, and keyword searches (Karapapa and Borghi), and there are policies to ensure the autocomplete function does not contain harmful content. In 2017, Google implemented a feedback tool to allow human evaluators to assess the quality of search results; however, the algorithm still provides misleading results that frame far-right actors as neutral. In this article, we use reverse engineering to understand the nature of these algorithms in relation to the descriptive outcome, to illustrate how autocomplete subtitles label conspiracists in three countries. According to Google, these “subtitles are generated automatically”, further stating that the “systems might determine that someone could be called an actor, director, or writer. Only one of these can appear as the subtitle” and that Google “cannot accept or create custom subtitles” (Google). We focused our attention on well-known conspiracy theorists because of their influence and audience outreach. In this article we argue that these subtitles are problematic because they can mislead the public and amplify extremist views. Google’s autocomplete feature is misleading because it does not highlight what is publicly known about these actors. The labels are neutral or positive but never negative, reflecting primary jobs and/or the actor’s preferred descriptions. This is harmful to the public because Google’s search rankings can influence a user’s knowledge and information preferences through the search engine manipulation effect (Epstein and Robertson). Users’ preferences and understanding of information can be manipulated based upon their trust in Google search results, thus allowing these labels to be widely accepted instead of providing a full picture of the harm their ideologies and belief cause. Algorithms That Mainstream Conspiracies Search engines establish order and visibility to Web pages that operationalise and stabilise meaning to particular queries (Gillespie). Google’s subtitles and blackbox operate as a complex algorithm for its search index and offer a mediated visibility to aspects of social and political life (Gillespie). Algorithms are designed to perform computational tasks through an operational sequence that computer systems must follow (Broussard), but they are also “invisible infrastructures” that Internet users consciously or unconsciously follow (Gran et al. 1779). The way algorithms rank, classify, sort, predict, and process data is political because it presents the world through a predetermined lens (Bucher 3) decided by proprietary knowledge – a “secret sauce” (O’Neil 29) – that is not disclosed to the general public (Christin). 
Technology titans, like Google, Facebook, and Amazon (Webb), rigorously protect and defend intellectual property for these algorithms, which are worth billions of dollars (O’Neil). As a result, algorithms are commonly defined as opaque, secret “black boxes” that conceal the decisions that are already made “behind corporate walls and layers of code” (Pasquale 899). The opacity of algorithms is related to layers of intentional secrecy, technical illiteracy, the size of algorithmic systems, and the ability of machine learning algorithms to evolve and become unintelligible to humans, even to those trained in programming languages (Christin 898-899). The opaque nature of algorithms alongside the perceived neutrality of algorithmic systems is problematic. Search engines are increasingly normalised and this leads to a socialisation where suppositions are made that “these artifacts are credible and provide accurate information that is fundamentally depoliticized and neutral” (Noble 25). Google’s autocomplete and PageRank algorithms exist outside of the veil of neutrality. In 2015, Google’s photos app, which uses machine learning techniques to help users collect, search, and categorise images, labelled two black people as ‘gorillas’ (O’Neil). Safiya Noble illustrates how media and technology are rooted in systems of white supremacy, and how these long-standing social biases surface in algorithms, illustrating how racial and gendered inequities embed into algorithmic systems. Google actively fixes algorithmic biases with band-aid-like solutions, which means the errors remain inevitable constituents within the algorithms. Rising levels of automation correspond to a rising level of errors, which can lead to confusion and misdirection of the algorithms that people use to manage their lives (O’Neil). As a result, software, code, machine learning algorithms, and facial/voice recognition technologies are scrutinised for producing and reproducing prejudices (Gray) and promoting conspiracies – often described as algorithmic bias (Bucher). Algorithmic bias occurs because algorithms are trained by historical data already embedded with social biases (O’Neil), and if that is not problematic enough, algorithms like Google’s search engine also learn and replicate the behaviours of Internet users (Benjamin 93), including conspiracy theorists and their followers. Technological errors, algorithmic bias, and increasing automation are further complicated by the fact that Google’s Internet service uses “2 billion lines of code” – a magnitude that is difficult to keep track of, including for “the programmers who designed the algorithm” (Christin 899). Understanding this level of code is not critical to understanding algorithmic logics, but we must be aware of the inscriptions such algorithms afford (Krasmann). As algorithms become more ubiquitous it is urgent to “demand that systems that hold algorithms accountable become ubiquitous as well” (O’Neil 231). This is particularly important because algorithms play a critical role in “providing the conditions for participation in public life”; however, the majority of the public has a modest to nonexistent awareness of algorithms (Gran et al. 1791). Given the heavy reliance of Internet users on Google’s search engine, it is necessary for research to provide a glimpse into the black boxes that people use to extract information especially when it comes to searching for information about conspiracy theorists. 
Our study fills a major gap in research as it examines a sub-category of Google’s autocomplete algorithm that has not been empirically explored before. Unlike the standard autocomplete feature that is primarily programmed according to popular searches, we examine the subtitle feature that operates as a fixed label for popular conspiracists within Google’s algorithm. Our initial foray into our research revealed that this is not only an issue with conspiracists, but also occurs with terrorists, extremists, and mass murderers. Method Using a reverse engineering approach (Bucher) from September to October 2021, we explored how Google’s autocomplete feature assigns subtitles to widely known conspiracists. The conspiracists were not geographically limited, and we searched for those who reside in the United States, Canada, United Kingdom, and various countries in Europe. Reverse engineering stems from Ashby’s canonical text on cybernetics, in which he argues that black boxes are not a problem; the problem or challenge is related to the way one can discern their contents. As Google’s algorithms are not disclosed to the general public (Christin), we use this method as an extraction tool to understand the nature of how these algorithms (Eilam) apply subtitles. To systematically document the search results, we took screenshots for every conspiracist we searched in an attempt to archive the Google autocomplete algorithm. By relying on previous literature, reports, and the figures’ public statements, we identified and searched Google for 37 Western-based and influential conspiracy theorists. We initially experimented with other problematic figures, including terrorists, extremists, and mass murderers, to see whether Google applied a subtitle or not. Additionally, we examined whether subtitles were positive, neutral, or negative, and compared this valence to personality descriptions for each figure. Using the standard procedures of content analysis (Krippendorff), we focus on the manifest or explicit meaning of text to inform subtitle valence in terms of their positive, negative, or neutral connotations. These manifest features refer to the “elements that are physically present and countable” (Gray and Densten 420) or what is known as the dictionary definitions of items. Using a manual query, we searched Google for subtitles ascribed to conspiracy theorists, and found the results were consistent across different countries. Searches were conducted on Firefox and Chrome and tested on an Android phone. Regardless of language input or the country location established by a Virtual Private Network (VPN), the search terms remained stable, no matter who conducted the search. The conspiracy theorists in our dataset cover a wide range of conspiracies, including historical figures like Nesta Webster and John Robison, who were foundational in Illuminati lore, as well as contemporary conspiracists such as Marjorie Taylor Greene and Alex Jones. Each individual’s name was searched on Google with a VPN set to three countries. Results and Discussion This study examines Google’s autocomplete feature associated with subtitles of conspiratorial actors. We first tested Google’s subtitling system with known terrorists, convicted mass shooters, and controversial cult leaders like David Koresh. Garry et al. (154) argue that “while conspiracy theories may not have mass radicalising effects, they are extremely effective at leading to increased polarization within societies”.
We believe that the impact of neutral subtitling of conspiracists reflects the integral role conspiracies play in contemporary politics and right-wing extremism. The sample includes contemporary and historical conspiracists to establish consistency in labelling. For historical figures, the labels are less consequential and simply reflect the reality that Google’s subtitles are primarily neutral. Of the 37 conspiracy theorists we searched (see Table 1 in the Appendix), seven (18.9%) do not have an associated subtitle, and the other 30 (81%) have distinctive subtitles, but none of them reflects the public knowledge of the individuals’ harmful role in disseminating conspiracy theories. In the list, 16 (43.2%) are noted for their contribution to the arts, 4 are labelled as activists, 7 are associated with their professional affiliation or original jobs, 2 to the journalism industry, one is linked to his sports career, another one as a researcher, and 7 have no subtitle. The problem here is that when white nationalists or conspiracy theorists are not acknowledged as such in their subtitles, search engine users could possibly encounter content that may sway their understanding of society, politics, and culture. For example, a conspiracist like Alex Jones is labelled as an “American Radio Host” (see Figure 1), despite losing two defamation lawsuits for declaring that the shooting at Sandy Hook Elementary School in Newtown, Connecticut, was a ‘false flag’ event. Jones’s actions on his InfoWars media platforms led to parents of shooting victims being stalked and threatened. Another conspiracy theorist, Gavin McInnes, the creator of the far-right, neo-fascist Proud Boys organisation, a known terrorist entity in Canada and hate group in the United States, is listed simply as a “Canadian writer” (see Figure 1).

Fig. 1: Screenshots of Google’s subtitles for Alex Jones and Gavin McInnes.

Although subtitles under an individual’s name are not audio, video, or image content, the algorithms that create these subtitles are an invisible infrastructure that could cause harm through their uninterrogated status and pervasive presence. This could then be a potential conduit to media which could cause harm and develop distrust in electoral and civic processes, or in institutions more broadly. Examples from our list include Brittany Pettibone, whose subtitle states that she is an “American writer” despite being one of the main propagators of the Pizzagate conspiracy which led to Edgar Maddison Welch (whose subtitle is “Screenwriter”) travelling from North Carolina to Washington D.C. to violently threaten and confront those who worked at Comet Ping Pong Pizzeria. The same misleading label can be found when searching for James O’Keefe of Project Veritas, who is positively labelled as “American activist”. Veritas is known for releasing audio and video recordings that contain false information designed to discredit academic, political, and service organisations. In one instance, a 2020 video released by O’Keefe accused Democrat Ilhan Omar’s campaign of illegally collecting ballots. The same dissembling of distrust applies to Mike Lindell, whose Google subtitle is “CEO of My Pillow”, as well as Sidney Powell, who is listed as an “American lawyer”; both are propagators of conspiracy theories relating to the 2020 presidential election.
The subtitles attributed to conspiracists on Google do not acknowledge the widescale public awareness of the negative role these individuals play in spreading conspiracy theories or causing harm to others. Some of the selected conspiracists are well-known white nationalists, including Stefan Molyneux, who has been banned from social media platforms like Twitter, Twitch, Facebook, and YouTube for the promotion of scientific racism and eugenics; however, he is neutrally listed on Google as a “Canadian podcaster”. In addition, Laura Loomer, who describes herself as a “proud Islamophobe,” is listed by Google as an “Author”. These subtitles can pose a threat by normalising individuals who spread conspiracy theories, sow dissension and distrust in institutions, and cause harm to minority groups and vulnerable individuals. Upon clicking on the selected person, the results, although influenced by the algorithm, did not provide information that aligned with the associated subtitle. The search results are skewed towards the actual conspiratorial nature of the individuals and associated news articles. In essence, the subtitles do not reflect the subsequent search results, and provide a counter-labelling to the reality of the resulting information provided to the user. Another significant example is Jerad Miller, who is listed as “American performer”, despite the fact that he is the Las Vegas shooter who posted anti-government and white nationalist 3 Percenters memes on his social media (SunStaff), even though the majority of search results connect him to the mass shooting he orchestrated in 2014. The subtitle “performer” is certainly not the common characteristic that should be associated with Jerad Miller. Table 1 in the Appendix shows that individuals who are not within the contemporary milieu of conspiracists, but have had a significant impact, such as Nesta Webster, Robert Welch Junior, and John Robison, were listed by their original profession or sometimes without a subtitle. David Icke, infamous for his lizard people conspiracies, has a subtitle reflecting his past football career. In all cases, Google’s subtitle was never consistent with the actor’s conspiratorial behaviour. Indeed, the neutral subtitles applied to conspiracists in our research may reflect some aspect of the individuals’ previous careers but are not an accurate reflection of the individuals’ publicly known role in propagating hate, which we argue is misleading to the public. For example, David Icke may be a former footballer, but the 4.7 million search results predominantly focus on his conspiracies, his public fora, and his status of being deplatformed by mainstream social media sites. The subtitles are not only neutral, but they are not based on the actual search results, and so are misleading in what the searcher will discover; most importantly, they do not provide a warning about the misinformation contained in the autocomplete subtitle. To conclude, algorithms automate the search engines that people use in the functions of everyday life, but are also entangled in technological errors, algorithmic bias, and have the capacity to mislead the public. Through a process of reverse engineering (Ashby; Bucher), we searched 37 conspiracy theorists to decode the Google autocomplete algorithms. We identified how the subtitles attributed to conspiracy theorists are neutral or positive, but never negative, which does not accurately reflect the widely known public conspiratorial discourse these individuals propagate on the Web.
This is problematic because the algorithms that determine these subtitles are invisible infrastructures acting to misinform the public and to mainstream conspiracies within larger social, cultural, and political structures. This study highlights the urgent need for Google to review the subtitles attributed to conspiracy theorists, terrorists, and mass murderers, to better inform the public about the negative nature of these actors, rather than always labelling them in neutral or positive ways. Funding Acknowledgement This project has been made possible in part by the Canadian Department of Heritage – the Digital Citizen Contribution program – under grant no. R529384. The title of the project is “Understanding hate groups’ narratives and conspiracy theories in traditional and alternative social media”. References Ashby, W. Ross. An Introduction to Cybernetics. Chapman & Hall, 1961. Baker, Paul, and Amanda Potts. "‘Why Do White People Have Thin Lips?’ Google and the Perpetuation of Stereotypes via Auto-Complete Search Forms." Critical Discourse Studies 10.2 (2013): 187-204. Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019. Bucher, Taina. If... Then: Algorithmic Power and Politics. OUP, 2018. Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT P, 2018. Christin, Angèle. "The Ethnographer and the Algorithm: Beyond the Black Box." Theory and Society 49.5 (2020): 897-918. D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT P, 2020. Dörr, Dieter, and Juliane Stephan. "The Google Autocomplete Function and the German General Right of Personality." Perspectives on Privacy. De Gruyter, 2014. 80-95. Eilam, Eldad. Reversing: Secrets of Reverse Engineering. John Wiley & Sons, 2011. Epstein, Robert, and Ronald E. Robertson. "The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections." Proceedings of the National Academy of Sciences 112.33 (2015): E4512-E4521. Garry, Amanda, et al. "QAnon Conspiracy Theory: Examining its Evolution and Mechanisms of Radicalization." Journal for Deradicalization 26 (2021): 152-216. Gillespie, Tarleton. "Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem." Information, Communication & Society 20.1 (2017): 63-80. Google. “Update your Google knowledge panel.” 2022. 3 Jan. 2022 <https://support.google.com/knowledgepanel/answer/7534842?hl=en#zippy=%2Csubtitle>. Gran, Anne-Britt, Peter Booth, and Taina Bucher. "To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?" Information, Communication & Society 24.12 (2021): 1779-1796. Gray, Judy H., and Iain L. Densten. "Integrating Quantitative and Qualitative Analysis Using Latent and Manifest Variables." Quality and Quantity 32.4 (1998): 419-431. Gray, Kishonna L. Intersectional Tech: Black Users in Digital Gaming. LSU P, 2020. Karapapa, Stavroula, and Maurizio Borghi. "Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm." International Journal of Law and Information Technology 23.3 (2015): 261-289. Krasmann, Susanne. "The Logic of the Surface: On the Epistemology of Algorithms in Times of Big Data." Information, Communication & Society 23.14 (2020): 2096-2109. Krippendorff, Klaus. Content Analysis: An Introduction to Its Methodology. Sage, 2004. Noble, Safiya Umoja. Algorithms of Oppression. New York UP, 2018. O'Neil, Cathy. 
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. Pasquale, Frank. The Black Box Society. Harvard UP, 2015. Robertson, Ronald E., David Lazer, and Christo Wilson. "Auditing the Personalization and Composition of Politically-Related Search Engine Results Pages." Proceedings of the 2018 World Wide Web Conference. 2018. Staff, Sun. “A Look inside the Lives of Shooters Jerad Miller, Amanda Miller.” Las Vegas Sun 9 June 2014. <https://lasvegassun.com/news/2014/jun/09/look/>. Webb, Amy. The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Hachette UK, 2019. Appendix Table 1: The subtitles of conspiracy theorists on Google autocomplete Conspiracy Theorist Google Autocomplete Subtitle Character Description Alex Jones American radio host InfoWars founder, American far-right radio show host and conspiracy theorist. The SPLC describes Alex Jones as "the most prolific conspiracy theorist in contemporary America." Barry Zwicker Canadian journalist Filmmaker who made a documentary that claimed fear was used to control the public after 9/11. Bart Sibrel American producer Writer, producer, and director of work to falsely claim the Apollo moon landings between 1969 and 1972 were staged by NASA. Ben Garrison American cartoonist Alt-right and QAnon political cartoonist Brittany Pettibone American writer Far-right, political vlogger on YouTube and propagator of #pizzagate. Cathy O’Brien American author Cathy O’Brien claims she was a victim of a government mind control project called Project Monarch. Dan Bongino American radio host Stakeholder in Parler, Radio Host, Ex-Spy, Conspiracist (Spygate, MAGA election fraud, etc.). David Icke Former footballer Reptilian humanoid conspiracist. David Wynn Miller (No subtitle) Conspiracist, far-right tax protester, and founder of the Sovereign Citizens Movement. Jack Posobiec American activist Alt-right, alt-lite political activist, conspiracy theorist, and Internet troll. Editor of Human Events Daily. James O’Keefe American activist Founder of Project Veritas, a far-right company that propagates disinformation and conspiracy theories. John Robison Foundational Illuminati conspiracist. Kevin Annett Canadian writer Former minister and writer, who wrote a book exposing the atrocities to Indigenous Communities, and now is a conspiracist and vlogger. Laura Loomer Author Far-right, anti-Muslim, conspiracy theorist, and Internet personality. Republican nominee in Florida's 21st congressional district in 2020. Marjorie Taylor Greene United States Representative Conspiracist, QAnon adherent, and U.S. representative for Georgia's 14th congressional district. Mark Dice American YouTuber Right-wing conservative pundit and conspiracy theorist. Mark Taylor (No subtitle) QAnon minister and self-proclaimed prophet of Donald Trump, the 45th U.S. President. Michael Chossudovsky Canadian economist Professor emeritus at the University of Ottawa, founder of the Centre for Research on Globalization, and conspiracist. Michael Cremo(Drutakarmā dāsa) American researcher Self-described Vedic creationist whose book, Forbidden Archeology, argues humans have lived on earth for millions of years. Mike Lindell CEO of My Pillow Business owner and conspiracist. Neil Patel English entrepreneur Founded The Daily Caller with Tucker Carlson. Nesta Helen Webster English author Foundational Illuminati conspiracist. Naomi Wolf American author Feminist turned conspiracist (ISIS, COVID-19, etc.). 
Owen Benjamin American comedian Former actor/comedian now conspiracist (Beartopia), who is banned from mainstream social media for using hate speech. Pamela Geller American activist Conspiracist, Anti-Islam, Blogger, Host. Paul Joseph Watson British YouTuber InfoWars co-host and host of the YouTube show PrisonPlanetLive. QAnon Shaman (Jake Angeli) American activist Conspiracy theorist who participated in the 2021 attack on Capitol Hill. Richard B. Spencer (No subtitle) American neo-Nazi, antisemitic conspiracy theorist, and white supremacist. Rick Wiles (No subtitle) Minister, founded the conspiracy site TruNews. Robert W. Welch Jr. American businessman Founded the John Birch Society. Ronald Watkins (No subtitle) Founder of 8kun. Serge Monast Journalist Creator of Project Blue Beam conspiracy. Sidney Powell (No subtitle) One of former President Trump’s lawyers, and renowned conspiracist regarding the 2020 Presidential election. Stanton T. Friedman Nuclear physicist Original civilian researcher of the 1947 Roswell UFO incident. Stefan Molyneux Canadian podcaster Irish-born, Canadian far-right white nationalist, podcaster, blogger, and banned YouTuber, who promotes conspiracy theories, scientific racism, eugenics, and racist views. Tim LaHaye American author Founded the Council for National Policy, leader in the Moral Majority movement, and co-author of the Left Behind book series. Viva Frei (No subtitle) YouTuber/Canadian Influencer, on the Far-Right and Covid conspiracy proponent. William Guy Carr Canadian author Illuminati/III World War conspiracist. Google searches conducted as of 9 October 2021.

48

Lu, Qingmei, and Yulin Wang. "Automatic text location of multimedia video for subtitle frame." Journal of Ambient Intelligence and Humanized Computing, December 5, 2019. http://dx.doi.org/10.1007/s12652-019-01599-2.

Full text

APA, Harvard, Vancouver, ISO, and other styles

49

Santosh S. Kale, Shruti Dhanak, Paras Chavan, Jay Kakade, and Prasad Humbe. "A Survey Study on Automatic Subtitle Synchronization and Positioning System for Deaf and Hearing Impaired People." International Journal of Advanced Research in Science, Communication and Technology, November 17, 2022, 423–28. http://dx.doi.org/10.48175/ijarsct-7393.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

In this study, we provide a subtitle synchronisation and placement system intended to improve deaf and hearing-impaired individuals' access to multimedia content. The paper's main contributions are a novel synchronisation algorithm that can reliably align the closed captions with the audio transcript without any human involvement, and a timestamp refinement technique that can modify the duration of the subtitle segments in accordance with audiovisual recommendations. The experimental evaluation of the strategy on a sizable dataset of 30 films drawn from French national television validates the method, with average accuracy scores above 90% regardless of the kind of video. The success of our strategy is further demonstrated by a subjective assessment of the proposed subtitle synchronisation and positioning system, carried out with hearing-impaired participants.
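The “timestamp refinement technique” mentioned in this abstract adjusts how long each subtitle segment stays on screen so that it respects audiovisual recommendations. The paper's own rules are not reproduced here, so the sketch below is only a generic illustration of that kind of refinement, assuming common reading-speed guidelines (roughly 15–17 characters per second, with minimum and maximum display times) rather than the authors' actual parameters.

# Generic illustration of duration refinement against reading-speed guidelines.
# This is NOT the surveyed paper's algorithm; the thresholds are assumptions.
from dataclasses import dataclass
from typing import Optional

MAX_CPS = 17.0                 # assumed reading-speed ceiling (characters per second)
MIN_DUR, MAX_DUR = 1.0, 7.0    # assumed minimum/maximum display time in seconds

@dataclass
class Cue:
    start: float   # seconds
    end: float
    text: str

def refine(cue: Cue, next_start: Optional[float] = None) -> Cue:
    """Stretch or trim a cue so its duration respects the reading-speed limits."""
    needed = max(len(cue.text) / MAX_CPS, MIN_DUR)
    duration = min(max(cue.end - cue.start, needed), MAX_DUR)
    end = cue.start + duration
    if next_start is not None:        # never overlap the following cue
        end = min(end, next_start)
    return Cue(cue.start, end, cue.text)

cues = [Cue(0.0, 0.8, "Hello and welcome back to the channel."),
        Cue(3.0, 11.0, "Today: captions.")]
refined = [refine(c, nxt.start if nxt else None)
           for c, nxt in zip(cues, cues[1:] + [None])]
print(refined)

In a full pipeline, the initial timestamps would come from the speech-to-caption alignment step, with a refinement pass of this kind applied afterwards.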

50

Tardel, Anke. "Effort in Semi-Automatized Subtitling Processes." Journal of Audiovisual Translation 3, no. 2 (December 18, 2020). http://dx.doi.org/10.47476/jat.v3i2.2020.131.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The presented study investigates the impact of automatic speech recognition (ASR) and assisting scripts on effort during transcription and translation processes, two main subprocesses of interlingual subtitling. Applying keylogging and eye tracking, this study takes a first look at how the integration of ASR affects these subprocesses. Twelve professional subtitlers and thirteen translation students were recorded performing two intralingual transcription tasks and three translation tasks to evaluate the impact on temporal, technical, and cognitive effort, as well as split attention. Measures include editing time, visit count and duration, insertions, and deletions. The main findings show that, in both tasks, ASR did not significantly affect task duration, but participants made fewer keystrokes, indicating less technical effort. Regarding visual attention, the existence of an ASR script did not decrease the time spent replaying the video. The study also shows that students were less efficient in their typing and made more use of the ASR script. The results are discussed in the context of the experiment, and an outlook on further research is given.

To the bibliography