
Telecommunications System & Management

ISSN: 2167-0919

Open Access

Mini Review - (2023) Volume 12, Issue 1

Assessing the Risk of Fraud in Mobile Money Transactions in Sierra Leone: A Study of Orange and Telecommunication Companies

Ahar Waker*
*Correspondence: Ahar Waker, Department of Network and Computer Security, University of New York, New York, USA, Email:
Department of Network and Computer Security, University of New York, New York, USA

Received: 02-Jan-2023, Manuscript No. jtsm-23-93203; Editor assigned: 03-Jan-2023, Pre QC No. P-93203; Reviewed: 16-Jan-2023, QC No. Q-93203; Revised: 21-Jan-2023, Manuscript No. R-93203; Published: 28-Jan-2023, DOI: 10.37421/2167-0919.2023.12.364
Citation: Waker, Ahar. "Assessing the Risk of Fraud in Mobile Money Transactions in Sierra Leone: A Study of Orange and Telecommunication Companies." J Telecommun Syst Manage 12 (2023): 364.
Copyright: © 2023 Waker A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Deep fakes have become a major concern because of their potential impact on fields such as politics, entertainment, and security. With the help of deep learning techniques, even non-experts can now create convincing, realistic fake videos or images for benign or malicious purposes. A key issue is that deep fakes can manipulate people's perception of reality by depicting individuals saying or doing things they never did. This has serious implications for politics and elections, where deep fakes could be used to sway public opinion and affect electoral outcomes. Deep fakes can also be used to create fake identities or impersonate individuals, enabling identity theft, fraud, and other malicious activities. In addition, they can be used to deceive the facial recognition systems deployed in applications such as security and surveillance.

Keywords

Digital forensics • Multimedia manipulations • Digital face manipulations

Introduction

To address these issues, researchers have been developing techniques to detect and prevent deep fakes, including analyzing facial features and movements, using machine learning algorithms to detect anomalies in videos or images, and building facial recognition systems that are less vulnerable to deep fakes. Overall, the rise of deep fakes highlights the need for greater awareness and vigilance toward online media, and for effective strategies to detect and curb the spread of fake content. An estimated 1.8 billion images and videos are uploaded to online services each day, including social and professional networking sites, and 40% to 50% of them appear to be manipulated for benign or adversarial purposes. Human face image/video manipulation in particular is a serious issue endangering the integrity of information on the Internet and the reliability of face recognition systems, since faces play a central role in human interactions and in biometrics-based person identification. Plausible manipulations of face samples can therefore severely undermine trust in digital communications and security applications. Deep fakes are multimedia files that have been digitally altered or synthetically created using deep learning models [1-3].
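
To make the anomaly-detection idea above concrete, the following is a minimal sketch of a frame-level deep fake detector built as a binary image classifier. It is illustrative only: the ResNet-18 backbone, the 224x224 preprocessing, and the assumption that the final layer has been fine-tuned on labelled real/fake face crops are choices made for this sketch, not the method of any work cited in this review.

```python
# Minimal sketch of a frame-level deep fake detector: a generic binary
# CNN classifier over face crops. Backbone and preprocessing are
# illustrative assumptions, not a prescribed method.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with a replaced head for real-vs-fake logits.
# In practice this head must be fine-tuned on labelled real/fake crops;
# until then its outputs are meaningless.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # logits: [real, fake]
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(face_crop: Image.Image) -> float:
    """Return the softmax probability that a face crop is manipulated."""
    x = preprocess(face_crop).unsqueeze(0)  # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 1].item()  # P(fake)
```

A video-level detector would typically run such a classifier over sampled frames and aggregate the per-frame scores, e.g., by averaging.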

Literature Review

Additionally, open challenges and potential future directions in this evolving field (e.g., deep fake detection systems that are robust to adversarial attacks through multistream and filtering schemes) are highlighted. The primary goals of this article are to supplement previous survey papers with recent advances, to give the reader a deeper understanding of deep fake creation and detection, and to serve as ground truth for developing novel algorithms for deep fake and face manipulation generation and detection systems. There is a dearth of work on the interpretability and dependability of deep fake detection frameworks. Most deep-learning-based deep fake or face manipulation detection methods in the literature do not explain why a particular detection outcome was reached, primarily because deep learning techniques are black boxes by nature. Currently available detectors provide only a label, a confidence percentage, or a fakeness probability score, not an insightful description of the result; such a description would help in understanding why the detector made a particular decision. Deep fakes and face manipulations can also serve either benign or malicious purposes, yet current detection techniques cannot tell the difference [4].
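
As a hedged illustration of the interpretability gap just described, the sketch below upgrades a bare fakeness score into a coarse visual explanation using input-gradient saliency. This is a generic explanation technique chosen for the example, not the mechanism of any surveyed detector; it reuses the hypothetical `model` and `preprocess` from the previous sketch.

```python
# Sketch: from a bare fakeness score to a coarse visual explanation via
# input-gradient saliency. Assumes `model` and `preprocess` from the
# detector sketch above.
import torch

def saliency_map(face_crop):
    """Highlight the pixels that most influence the 'fake' logit."""
    x = preprocess(face_crop).unsqueeze(0).requires_grad_(True)
    logits = model(x)
    logits[0, 1].backward()  # gradient of the "fake" logit w.r.t. input
    # Max over color channels -> one heat value per pixel.
    sal = x.grad.detach().abs().max(dim=1)[0].squeeze(0)
    return sal / (sal.max() + 1e-8)  # normalized to [0, 1]
```

Overlaying the returned map on the face crop shows which regions (e.g., mouth or eye areas) drove the decision, which is one step toward the insightful descriptions the literature currently lacks.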

Numerous methods for detecting deep fakes and face manipulation

A systematic analysis, however, reveals that most of them have low generalisation capability: their performance plummets when they encounter a novel deep fake or manipulation type that was not seen during training, as prior evaluations have demonstrated. Prior research has also viewed deep fake detection as a reactive defence mechanism rather than as a battle between attackers (i.e., deep fake generation methods) and defenders (i.e., deep fake detection methods). As a result, there is a significant disconnect between academic deep fake solutions and real-world scenarios and requirements. For example, existing works typically lag in system robustness against adversarial attacks, decision explainability, and real-time mobile deep fake detection.
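
One way to picture the "filtering schemes" mentioned above is a high-frequency residual stream: many generators leave characteristic high-frequency artifacts, so a detector can process a filtered residual alongside the RGB frame. The sketch below is a minimal, assumption-laden illustration; the FFT cutoff and the fusion strategy are invented for this example, not prescribed by the surveyed literature.

```python
# Sketch of a filtering scheme: extract a high-frequency residual from a
# grayscale face crop, which a multistream detector could process in a
# parallel branch. The cutoff value is an arbitrary assumption.
import numpy as np

def high_pass_residual(gray: np.ndarray, cutoff: int = 16) -> np.ndarray:
    """Zero out low frequencies of a 2-D grayscale crop via the FFT."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    # Suppress a (2*cutoff x 2*cutoff) block of low frequencies.
    f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    return np.fft.ifft2(np.fft.ifftshift(f)).real

# A multistream detector would feed `gray` and `high_pass_residual(gray)`
# through parallel branches and fuse their features before classification.
```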

In recent years, the study of deep fake generation and detection has gained considerable traction in the computer vision and machine learning communities. Some review papers exist on the subject, but they are primarily concerned with deep fake or synthetic samples produced by generative adversarial networks. Furthermore, most survey articles were written from an academic perspective rather than a practical development perspective, and they did not cover very recent face manipulation methods or new deep fake generation and detection techniques. This paper therefore provides a concise but comprehensive overview from both theoretical and practical perspectives, to give the reader an intellectual grasp of the field and to facilitate the development of novel, more resilient techniques [5,6].

Discussion

Deep fakes, i.e., AI-generated or digitally manipulated face samples, pose a significant threat to the dependability of face recognition systems and the integrity of information on the Internet. This paper has provided an overview of recent advances in the generation and detection of deep fakes and facial manipulations. Despite noticeable progress, several issues remain to be addressed before highly effective and generalised generation and defence techniques can be achieved; accordingly, this article discussed some of the open challenges and research opportunities. Deep fake detection frameworks still have a long way to go and will require interdisciplinary research efforts across machine learning, computer vision, human vision, psychophysiology, and related domains. Overall, this survey could be used to develop novel AI-based algorithms for deep fake generation and detection, and it is hoped that it will inspire aspiring scientists, practitioners, researchers, and engineers to pursue deep fakes as a field of study.

Conclusion

One lesson learned from these two investigations is that the details matter. Apparently minor differences in methodology can produce significant controversy, and differences in study design, procedural technique, and statistical analysis can cause marked differences in study findings. As imagers, we can take a page from the playbook of our colleagues in interventional cardiology, who at an early stage standardized definitions of clinical outcomes and procedural success. By contrast, we as imagers have generally not adopted this strategy. This deficiency is highlighted by the two studies in this issue of JACC, in which a systematic approach starting from the coronal view will inevitably yield confusing findings that can be statistically analyzed but not meaningfully demonstrated.

Acknowledgement

None.

Conflict of Interest

The author declares that there are no conflicts of interest.

References

1. Juefei-Xu, Felix, Run Wang, Yihao Huang and Qing Guo, et al. "Countering malicious deepfakes: Survey, battleground, and horizon." Int J Comput Vis 130 (2022): 1678-1734.

2. Huang, Wenjing, Shikui Tu and Lei Xu. "IA-FaceS: A bidirectional method for semantic face editing." Neural Netw 158 (2023): 272-292.

3. Segura, David, Emil J. Khatib, Jorge Munilla and Raquel Barco. "5G numerologies assessment for URLLC in industrial communications." Sensors 21 (2021): 2489.

4. Khalid, Waqas, Heejung Yu, Rashid Ali and Rehmat Ullah. "Advanced physical-layer technologies for beyond 5G wireless communication networks." Sensors 21 (2021): 3197.

5. Ranyal, Eshta, Ayan Sadhu and Kamal Jain. "Road condition monitoring using smart sensing and artificial intelligence: A review." Sensors 22 (2022): 3044.

6. He, Jiang and Paul K. Whelton. "Elevated systolic blood pressure and risk of cardiovascular and renal disease: Overview of evidence from observational epidemiologic studies and randomized controlled trials." Am Heart J 138 (1999): 211-219.
