Deepfakes and artificial intelligence in social engineering: Emerging threats in 21st-century cyberfraud
Abstract
This article examines the growing use of deepfakes and generative AI as an evolution of social-engineering tactics, highlighting their role as an emerging threat in twenty-first-century cyberfraud. Recent technological advances have made it possible to generate hyperrealistic content capable of near-undetectable identity impersonation. Using a mixed approach that integrates criminological theory, statistics, and specialized doctrine, the article analyzes recent cyberfraud cases in which deepfakes were used to manipulate victims, defeat authentication systems, and gain access to confidential information. It also discusses the relevance of applying the criminological theory of routine activities as a preventive strategy, one that can identify the everyday routines cybercriminals exploit. Finally, it underscores the importance of a culture of digital self-protection and self-regulation in online environments so that people do not fall victim to cyberfraud.
License
Copyright (c) 2025 Yonni Albeiro Bermudez Bermudez

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Authors retain the rights to their articles and are therefore free to share, copy, distribute, perform, and publicly communicate the work under the following condition:
Credit the work in the manner specified by the author or licensor (but not in any way that suggests they endorse you or your use of the work).
Revista IUSTA is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).

The Universidad Santo Tomás retains the economic rights (copyright) of published works, and encourages and permits their reuse under the aforementioned license.




