Artificial Intelligence (AI) is no longer a topic of the future; it is part of the daily life of all Internet users, who interact with it every time they unlock their devices with facial recognition, use browsers and online maps, or turn to voice assistants and chatbots. However, experts warn that the efficiency of this technology can also be turned to malicious purposes.
According to Isabel Manjarrez, Security Researcher in Kaspersky’s Global Research and Analysis Team, malicious uses of AI include the collection of personal data, including biometric data (voice, face, fingerprints); the generation of malicious code; the evasion of security measures; and DDoS attacks, such as the one that recently affected ChatGPT. Other risks stand out for their accelerated growth and high success rate, driven by users’ lack of awareness and the excess of information available online.
One example is personalized phishing attacks, with 42.8% of fraudulent messages aimed primarily at stealing financial data, according to the Threat Panorama for Latin America. The report recorded 286 million attempted phishing attacks in the last year, an alarming increase of 617% globally compared to the previous year. Although phishing is not a new risk, it is striking that its growth is due, among other factors, to the emergence of tools that use Artificial Intelligence to automate the creation of scams.
Another threat that has grown alongside AI is deepfakes. Kaspersky experts have warned about this type of content, in which images and videos are altered to show something different from the original, for example, so that one person impersonates another. The sale of this fabricated material on the Darknet has also come to light, where it is used to facilitate financial fraud, business scams, political blackmail, revenge, harassment and pornography.
Researchers have now identified deepfakes of all kinds: some with modified audio or voice, and others in text form, written in a style that imitates someone the victim knows. Although the spread of this threat can damage the reputation, privacy and finances of institutions and users alike, company figures reveal that the majority of Latin Americans do not know what a deepfake is (70%) and would not know how to recognize this type of content (67%). This makes people more susceptible to frauds and scams driven by this technique.
“On the positive side, Artificial Intelligence has become a valuable tool that effectively complements human capabilities. Its application facilitates everything from simple everyday activities, such as unlocking our mobile devices, to more complex efforts, such as promoting digital literacy. On the negative side, it has also facilitated the expansion of new and existing cyber threats as its use becomes more widespread and accessible,” commented Isabel Manjarrez. “Artificial Intelligence is already here and will continue to develop. Like any other technology, it can bring great possibilities as long as we use it safely and responsibly,” she added.
To avoid falling victim to AI-generated threats, experts recommend:
Stay informed about new technologies and their risks: Learn what AI is and how it works, and keep in mind that there are already threats associated with this tool.
Always use reliable sources of information: Remember that information illiteracy remains a crucial factor in the proliferation of cyber threats such as phishing and deepfakes.
Adopt good digital habits like “trust, but verify”: Be cautious and skeptical of emails, text or voice messages, calls, videos and other multimedia content you see or receive, especially if they convey strange or illogical information.
Use security solutions: Although protection against AI-generated cyberattacks and scams is only beginning to emerge, there are already tools that protect against all types of threats, known and unknown.
TCRN STAFF