image by pexels.com

Cybercrime is no longer restricted to anonymous calls where the hacker asks for otp, fake messages claiming that you have won a lottery or fake links that hack into your bank account, but instead, due to artificial intelligence, technology has taken a dangerous turn. Modern digital crimes do not just empty bank accounts; they weaken trust, endanger identities, health information and blur one’s perception of reality. From digital arrests to AI-driven deception, cybercrime has turned into something where seeing, hearing or even trusting technology isn’t safe.

One of the most frightening concerns is the rise in digital arrest scams. In digital arrest scams, the scammer/s pretend to be police or government officials and contact the victims through video calls. The scammers wear fake uniforms, display duplicate documents and accuse the victims of crimes like money laundering or identity fraud. The victim is told that they are under digital surveillance and must immediately pay a fine to avoid arrest. According to a government estimate, Indians lost more than 1al9 billion rupees to digital scams in 2024. What is more concerning is the fact that digital arrest frauds nearly tripled between 2022 and 2024. In 2025,  YouTuber and influencer Ankush Bahuguna was digitally arrested for 40 hours by his scammers. He posted a reel and multiple stories to warn others about this and how they can stay safe. Along with Ashish, there are several other cases, a 50-year-old businesswoman who lost nearly Rs 1.6 crore and a 65-year-old woman who was scammed out of Rs 46 lakh.

During a digital arrest, the fear, urgency and being faced with authority paralyses the victim’s rational thinking, leading to the transfer of a large amount of money. A digital arrest scam not only causes financial loss but also inflicts emotional trauma on victims.

The AI Attacks:” Robots Hacking Robots”

In addition to these scams, there lies a darker frontier: AI attacking AI. In this kind of scam, the scammer has to provide AI with the necessary instructions, and the scammer can rest while AI does the required. The scammer uses ‘’shadow agents’’ which are independent programs that scan networks for weaknesses, abuse unsecured systems and break in. They work silently and efficiently while administrators are offline. In 2025, Anthropic. The company behind the popular Claude chatbot released a report that said that ‘’an unnamed hacker used AI to what we believe is an unprecedented degree’’ to research,hackand extort at least 17 companies.

Another equally dangerous cybercrime is model poisoning, in which AI models collect data and learn from it. Hackers intentionally feed wrong data into AI systems, which results in them producing flawed or harmful results. In late 2024, an AI robot known as Erbai convinced 12 larger showroom robots to follow out of their display. A poisoned AI might leak sensitive information, make poor financial decisions and deceive users. In sectors like banking, healthcare and governance, such manipulation can lead to big problems.

Prompt hijacking, an assault on AI chatbots and automated assistants, is closely related. Although these systems have limitations and constraints built in, capable attackers can nonetheless control them with well-crafted commands. Hackers can fool chatbots into disclosing restricted knowledge, organisational procedures, or sensitive data by taking advantage of linguistic flaws. This approach turns intelligence into vulnerability by using psychological manipulation rather than technical power.

Body-Hacking:” Hacking your Health”

Body-hacking is another way that cybercrime has made its way into the real world. Smartwatches, fitness trackers, and medical sensors are examples of wearable technology that gather extremely private data, including location, movement patterns, heart rate, and sleep cycles. Hackers may exploit this information for targeted crimes, blackmail, or stalking. The invasion affects people's safety and privacy in real-world settings and is not only digital.

The idea of a bio-ransom is even more worrisome. Life-sustaining gadgets like insulin pumps, heart monitors, artificial limbs, and neurological implants are becoming possible targets as medical technology grows more interconnected. In a bio-ransom scenario, hackers might interfere with or disable these devices and demand payment to get them back up and running. This is now extortion at the expense of human life rather than theft or fraud. Such crimes' ethical ramifications put current moral and legal frameworks to the test.

Mind Games:” Seeing is No Longer Believing”

The emergence of AI-driven deception, when visual and aural evidence can no longer be accepted, is perhaps the most psychologically unsettling development. Criminals can pose as actual persons during video calls thanks to live clones that are driven by deepfake technology. Someone who sounds and looks just like a parent, boss, or close friend may make urgent pleas to victims. Such deceit takes advantage of trust at its most personal level, causing a deep emotional shock.

More broadly, cybercrime has started using mood manipulation to affect people's emotions. Attackers can flood social media sites with frightening or deceptive content by organising thousands of phoney accounts. Political results, public behaviour, and stock markets can all be impacted by this manufactured hysteria. Societies are susceptible to turmoil, disinformation, and manipulation when fear is intentionally heightened.

Cybercrime is no longer merely a technological problem in this quickly changing digital environment. It's an ethical, psychological, and social crisis. Criminals become more adept at leveraging technology to take advantage of human weaknesses rather than just system defects as machines become smarter. In addition to sophisticated security systems, awareness, digital literacy, and critical thinking are necessary to combat these risks. Human alertness continues to be the most effective defence in a time when technology can mimic, trick, and manipulate.

.    .    .

Discus