Criminals are applying for remote tech jobs using deepfakes and stolen personal data, warns FBI

Photo Credit: Pixabay

The Federal Bureau of Investigation (FBI) has warned of a growing number of complaints that cybercriminals are using people's stolen Personally Identifiable Information (PII) and deepfaked video and voice to apply for remote tech jobs at IT, programming, database, and software firms.

Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness. They are typically generated with artificial intelligence (AI) or machine learning (ML) techniques and can be difficult to distinguish from authentic material. Deepfaked content has previously been used to spread fake news and create revenge porn, and the lack of ethical limits on its use has long been a source of controversy and concern.

The report noted that many of these open positions involved access to sensitive customer or employee data, as well as financial and proprietary company information. This suggests the impostors may be after sensitive data in addition to a fraudulent paycheck. It is unclear, however, how many of these fake applications succeeded versus how many were caught and reported, or whether any applicant secured an offer, collected a paycheck, and was caught only later.

Loading...

According to the report, some applicants used voice-spoofing techniques during online interviews, and their lip movements did not match what was being said on the video call. In some of these cases, the jig was up when the interviewee coughed or sneezed and the spoofing software failed to reproduce it.
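
To make the mismatch interviewers noticed more concrete, here is a deliberately crude sketch that checks whether mouth movement in a recorded video tracks the loudness of its audio track. It is a toy illustration, not a real lip-sync detector (those rely on trained audio-visual models): the file name, crop coordinates, and sample rate are hypothetical placeholders, and it assumes moviepy 1.x, OpenCV, and NumPy are installed.

```python
# Toy sketch: does mouth movement track audio loudness in a recorded call?
# Not a real lip-sync detector; "interview.mp4" and the crop region are
# hypothetical placeholders chosen purely for illustration.
import cv2
import numpy as np
from moviepy.editor import VideoFileClip  # moviepy 1.x import path

clip = VideoFileClip("interview.mp4")            # hypothetical input file
audio = clip.audio.to_soundarray(fps=16000)      # raw PCM samples
mono = audio.mean(axis=1) if audio.ndim == 2 else audio
samples_per_frame = int(16000 / clip.fps)

motion, loudness = [], []
prev = None
for i, frame in enumerate(clip.iter_frames(dtype="uint8")):
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    h, w = gray.shape
    # Crude stand-in for mouth localization: the lower-center of the frame.
    mouth = gray[int(h * 0.6):, int(w * 0.3):int(w * 0.7)].astype(np.int32)
    if prev is not None:
        motion.append(np.abs(mouth - prev).mean())   # frame-to-frame change
        seg = mono[i * samples_per_frame:(i + 1) * samples_per_frame]
        loudness.append(np.sqrt((seg ** 2).mean()) if seg.size else 0.0)
    prev = mouth

# Real speech should make these two signals move together; a correlation
# near zero is one weak hint that the audio was dubbed or synthesized.
if len(motion) > 1:
    r = np.corrcoef(motion, loudness)[0, 1]
    print(f"motion/loudness correlation: {r:.2f}")
```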

In May, the FBI was among several federal agencies to warn companies that individuals working for the North Korean government were applying for remote IT and other tech jobs. Those fake workers bid on remote contract work through sites like Upwork and Fiverr using fake documentation and references.

According to a report by researchers from Carnegie Mellon University, AI meant to detect altered video can range in accuracy from 30% to 97%, so automated detection alone is unreliable. Humans can, however, spot fake video, especially once they are trained to watch for certain visual glitches, such as shadows that don't behave as they should or skin texture that doesn't look right, the report said.
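
As a minimal sketch of the kind of low-level cue the researchers describe, the snippet below measures the variance of the Laplacian inside detected face regions, a crude proxy for texture detail, since heavily synthesized faces are often oversmoothed. This is not a production deepfake detector; the input file name and the threshold are hypothetical placeholders, and it assumes OpenCV and NumPy are installed.

```python
# Toy sketch of one texture cue: variance of the Laplacian inside detected
# face regions. Oversmoothed synthetic faces tend to drive this sharpness
# proxy down. The threshold below is an arbitrary illustrative value.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("interview.mp4")    # hypothetical input file
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        scores.append(cv2.Laplacian(face, cv2.CV_64F).var())
cap.release()

if scores and np.mean(scores) < 50:        # arbitrary illustrative threshold
    print("Face texture looks unusually smooth; worth a closer manual check.")
```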
