Aplicaciones de deepfakes: Manipulación de contenido audiovisual y riesgos para los usuarios basados en las políticas de privacidad

Boté-Vericad, Juan-José and Vállez, Mari. Aplicaciones de deepfakes: Manipulación de contenido audiovisual y riesgos para los usuarios basados en las políticas de privacidad. Documentación de las Ciencias de la Información, 2022, vol. 45, n. 1, pp. 25-32. [Journal article (Paginated)]

Bote_Vallez_Deepfakes.pdf - Published version (453kB)
Available under License Creative Commons Attribution.

English abstract

The growth of fake news makes users insecure when they consume information. While text-based news is perhaps the most widespread format, video is gaining ground quickly and has been transformed into informative content that can be manipulated, leading to misinformation among users. This article analyses 63 applications for mobile devices that enable the creation of deepfakes. The apps are analysed by examining 16 indicators across three dimensions: description of the application, treatment of the image, and data protection. Each application has its peculiarities regarding the modification of the physiognomy of a person, especially their face. In some cases, the legality of the applications is questionable, as they can impact people's fundamental rights. The results demonstrate that, in many cases, the user is informed neither of the technology used by the applications nor of the types of data that are collected. In conclusion, the apps allow the generation of fake videos by manipulating the physiognomy of people; however, there is a lack of information surrounding the privacy policies governing the data they collect. In addition, the use of these applications can damage the image and/or reputation of a person and even supplant their identity.

Spanish abstract

El crecimiento de las noticias falsas provoca en los usuarios inseguridad cuando consumen información. Siendo los formatos de noticias de tipo textual quizás los más extendidos, el vídeo está irrumpiendo con fuerza y se ha transformado en contenido informativo que puede manipularse, llevando a la desinformación de los usuarios. En este artículo se analizan 63 aplicaciones para dispositivos móviles que permiten generar deepfakes. Se realiza un análisis de las apps utilizando 16 indicadores clasificados en tres dimensiones: descripción de la aplicación, tratamiento de la imagen y protección de datos. Cada aplicación tiene sus particularidades respecto a la modificación de la fisionomía de una persona, en especial la cara. En algunos casos podría cuestionarse la legalidad de las aplicaciones, ya que pueden incidir en los derechos fundamentales de las personas. Se observa en los resultados que en muchos casos no se informa al usuario de la tecnología que emplean las aplicaciones, ni de los tipos de datos que se recogen. Se concluye que las apps permiten generar vídeos falsos manipulando la fisionomía de las personas, aunque hay una falta importante de información sobre las políticas de privacidad de los datos que recogen. Además, el uso de estas aplicaciones puede llegar a dañar la imagen y/o prestigio de una persona e incluso llegar a suplantar su identidad.

Item type: Journal article (Paginated)
Keywords: Deepfake; desinformación; alfabetización informacional; manipulación de vídeos; apps; aplicaciones móviles; disinformation; video manipulation; mobile applications
Subjects: B. Information use and sociology of information > BJ. Communication
B. Information use and sociology of information > BG. Information dissemination and diffusion.
E. Publishing and legal issues. > EA. Mass media.
H. Information sources, supports, channels. > HH. Audio-visual, Multimedia.
H. Information sources, supports, channels. > HT. Web 2.0, Social networks
Depositing user: Juan-José Boté-Vericad
Date deposited: 24 Jan 2023 10:48
Last modified: 24 Jan 2023 10:48
URI: http://hdl.handle.net/10760/43889

References

Atehortua, N. A., y Patino, S. (2021). COVID-19, a tale of two pandemics: Novel coronavirus and fake news messaging. Health Promotion International, 36(2), 524–534. https://doi.org/10.1093/heapro/daaa140.

Bates, D. W., Landman, A., y Levine, D. M. (2018). Health apps and health policy: What is needed? JAMA, 320(19), 1975–1976. https://doi.org/10.1001/jama.2018.14378.

BBC News. (12 de noviembre de 2019). Are You Fooled by This Johnson-Corbyn Video? BBC News. https://www.bbc.com/news/av/technology-50381728/the-fake-video-where-johnson-and-corbyn-endorse-each-other.

Boté-Vericad, J.-J. y Vállez, M. (2021). Apps para dispositivos Android que generan Deepfakes. [Data set]. https://doi.org/10.5281/zenodo.5528949.

Google Play. (2020). https://play.google.com.

Google Trends. (2021). Tendencia de búsqueda del término deepfake a nivel mundial. https://trends.google.com/trends/explore?date=2017-11-01%202021-06-26&q=Deepfake.

Güera, D., y Delp, E. J. (2018). Deepfake video detection using recurrent neural networks. En 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE. https://doi.org/10.1109/AVSS.2018.8639163.

Hasan, H. R., y Salah, K. (2019). Combating deepfake videos using blockchain and smart contracts. IEEE Access, 7, 41596–41606. https://doi.org/10.1109/ACCESS.2019.2905689.

Hughes, H. C., y Waismel-Manor, I. (2021). The Macedonian fake news industry and the 2016 US election. PS: Political Science & Politics, 54(1), 19–23. https://doi.org/10.1017/S1049096520000992.

Islam, M. B., Lai-Kuan, W., y Chee-Onn, W. (2017). A survey of aesthetics-driven image recomposition. Multimedia Tools and Applications, 76(7), 9517–9542. https://doi.org/10.1007/s11042-016-3561-5.

Ivakhiv, A. (2016). The Art of Morphogenesis: Cinema in and beyond the Capitalocene. En S. Denson y J. Leyda (Hg.), Post-Cinema. Theorizing 21st-Century Film (pp. 724-749). REFRAME Books. https://doi.org/10.25969/mediarep/13475.

Kirchengast, T. (2020). Deepfakes and image manipulation: criminalisation and control. Information & Communications Technology Law, 29(3), 308-323. https://doi.org/10.1080/13600834.2020.1794615.

Kramer, R. S. S., Jenkins, R., y Burton, A. M. (2017). InterFace: A software package for face image warping, averaging, and principal components analysis. Behavior Research Methods, 49(6), 2002–2011. https://doi.org/10.3758/s13428-016-0837-7.

Krylov, A., Nasonova, A. y Nasonov, A. (2014). Image warping as an image enhancement post-processing tool. En Proceedings of the 9th Open German-Russian Workshop on Pattern Recognition and Image Understanding (132-135). University Koblenz-Landau. https://kola.opus.hbz-nrw.de/opus45-kola/frontdoor/deliver/index/docId/915/file/OGRW_2014_Proceedings.pdf#page=138.

Li, D., He, K., Sun, J., y Zhou, K. (2015). A geodesic-preserving method for image warping. En Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (213–221). https://openaccess.thecvf.com/content_cvpr_2015/html/Li_A_Geodesic-Preserving_Method_2015_CVPR_paper.html.

Li, Y., Chang, M.-C., y Lyu, S. (2018). In ictu oculi: Exposing ai created fake videos by detecting eye blinking. 2018 IEEE International Workshop on Information Forensics and Security (WIFS) (1-7). https://doi.org/10.1109/WIFS.2018.8630787.

Mulder, T. (2019). Health Apps, Their Privacy Policies and the GDPR. European Journal of Law and Technology, 10(1). https://papers.ssrn.com/abstract=3506805.

Nguyen, H. H., Yamagishi, J. y Echizen, I. (2019). Use of a capsule network to detect fake images and videos. arXiv:1910.12467 [cs]. http://arxiv.org/abs/1910.12467.

Parker, L., Halter, V., Karliychuk, T. y Grundy, Q. (2019). How private is your mental health app data? An empirical study of mental health app privacy policies and practices. International Journal of Law and Psychiatry, 64, 198–204. https://doi.org/10.1016/j.ijlp.2019.04.002.

Powell, A., Singh, P. y Torous, J. (2018). The complexity of mental health app privacy policies: A potential barrier to privacy. JMIR MHealth and UHealth, 6(7), e158. https://doi.org/10.2196/mhealth.9871.

Prathap, K. S. V., Jilani, S. A. K. y Reddy, P. R. (2016). A critical review on Image Mosaicing. 2016 International Conference on Computer Communication and Informatics (ICCCI) (1-8). https://doi.org/10.1109/ICCCI.2016.7480028.

Puerto, S. (2018). Técnicas de animación e interrelación de imágenes bidimensionales. Mosaic, 165. https://doi.org/10.7238/m.n165.1842.

Robillard, J. M., Feng, T. L., Sporn, A. B., Lai, J.-A., Lo, C., Ta, M. y Nadler, R. (2019). Availability, readability, and content of privacy policies and terms of agreements of mental health apps. Internet Interventions, 17, 100243. https://doi.org/10.1016/j.invent.2019.100243.

Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J. y Niessner, M. (2019). Faceforensics++: Learning to detect manipulated facial images. En Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (1-11). https://openaccess.thecvf.com/content_ICCV_2019/html/Rossler_FaceForensics_Learning_to_Detect_Manipulated_Facial_Images_ICCV_2019_paper.html.

Scherhag, U., Nautsch, A., Rathgeb, C., Gomez-Barrero, M., Veldhuis, R. N. J., Spreeuwers, L., Schils, M., Maltoni, D., Grother, P., Marcel, S., Breithaupt, R., Ramachandra, R., y Busch, C. (2017). Biometric systems under morphing attacks: Assessment of morphing techniques and vulnerability reporting. En International Conference of the Biometrics Special Interest Group (BIOSIG) (1-7). https://doi.org/10.23919/BIOSIG.2017.8053499.

Tang, J. y Ni, B. (2019). Progressive face dynamic morphing. En 2019 International Conference on Intelligent Computing, Automation and Systems (ICICAS) (48–53). https://doi.org/10.1109/ICICAS48597.2019.00019.

Unión Europea. (2012). Charter of Fundamental Rights of the European Union. Diario Oficial de la Unión Europea, C 326/02 de 26 de octubre de 2012. https://eur-lex.europa.eu/legal-content/ES/TXT/HTML/?uri=CELEX:12012P/TXT&from=EN.

Veiga, C., Lourenço, A. M., Mouinuddin, S., van Herk, M., Modat, M., Ourselin, S., Royle, G. y McClelland, J. R. (2015). Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm: Dose warping uncertainties due to registration algorithm. Medical Physics, 42(2), 760-769. https://doi.org/10.1118/1.4905050.

Wagner, T. L. y Blewer, A. (2019). "The word real is no longer real": Deepfakes, gender, and the challenges of AI-altered video. Open Information Science, 3(1), 32-46. https://doi.org/10.1515/opis-2019-0003.

Zimmerle, J. C. y Wall, A. S. (2019). What’s in a policy? Evaluating the privacy policies of children’s apps and websites. Computers in the Schools, 36(1), 38-47. https://doi.org/10.1080/07380569.2019.1565628.
