Why should businesses care about deepfake technology

Actors Rashmika Mandanna, Katrina Kaif, and Kajol have become the latest targets of deepfakes spread across social media platforms. The malicious and defamatory nature of these deepfakes has yet again raised questions about responsible, governable artificial intelligence and its impact on society.

It is only a matter of time before a deepfake of a prominent business figure sinks a company and possibly shakes up the entire stock market. Coupled with other communication hacks, it could even create geopolitical chaos.

Taking note, on November 23, Union Minister for Information Technology Ashwini Vaishnaw held a meeting with key social media players. Calling deepfakes a threat to democracy, Vaishnaw told the media that the government will shortly begin drafting regulations for them, which could take the form of amendments to existing frameworks or new rules. He also said that social media companies have agreed on the need for detection, prevention, and stronger reporting mechanisms.

While deepfakes have been discussed largely at the individual level, they also pose threats to businesses. For instance, in August, Hong Kong police arrested six people in a loan scam case targeting banks and money lenders. The scammers used deepfake technology to fake applicants' identities.

Deepfake technology creates synthetic media by manipulating existing data with AI and machine-learning algorithms. Generative adversarial networks (GANs) are one of the most common ways to produce deepfakes: a GAN pits two neural networks, a generator and a discriminator, against each other until the generator produces authentic-looking output, be it image, video, or voice.
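For a sense of how this adversarial setup works, here is a minimal sketch in Python using PyTorch. It trains a toy generator to mimic a one-dimensional Gaussian distribution rather than images or audio; the network sizes, data, and hyperparameters are illustrative assumptions, not part of any real deepfake system.

```python
# Toy GAN sketch: a generator learns to mimic samples from a Gaussian
# distribution while a discriminator learns to tell real from generated data.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, batch = 8, 1, 64

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from N(4, 1.25) standing in for genuine media.
    real = 4 + 1.25 * torch.randn(batch, data_dim)
    fake = G(torch.randn(batch, latent_dim))

    # Train the discriminator to separate real from generated samples.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()

# Generated samples should now cluster near 4, the mean of the "real" data.
print(G(torch.randn(5, latent_dim)).detach())
```

Real deepfake systems apply the same adversarial principle to far larger networks trained on images, video frames, or voice recordings rather than toy numbers.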

Businesses risk financial and reputational harm as deepfakes proliferate, as the Hong Kong case above shows. Deepfakes of prominent people or executives can be used for phishing and other cyber attacks.

“Executives should be concerned as these AI-generated impersonations can be used for fraudulent activities, misinformation, or even to manipulate decision-making processes within an organisation,” said Kumar Ritesh, chief executive officer (CEO) and co-founder of cybersecurity firm Cyfirma.

Through deepfake-based impersonation, hostile actors can spread false and defamatory information about a company, which can have far-reaching consequences for its performance and share price. “Deepfakes can be used to deceive employees into believing they’re interacting with a trusted colleague or superior, leading to unauthorised access or information disclosure. Moreover, deepfakes can be utilised in spear-phishing attacks, making it harder for employees to discern genuine communications,” Kumar added.

“Raising awareness and educating consumers is the way to go ahead in limiting the harm of deepfakes. There is a need for an industry-wide consortium that will ensure that all players adopt responsible AI as a non-negotiable way of operations. Significant investment needs to be made to increase the efficacy of the algorithms that detect and certify content. This can even lead to establishing content certification agencies,” said Vijayasimha AJ, chief operating officer at digital solutions company Zensar.

Several companies, including Microsoft, Google, and Adobe, are investing in mechanisms and tools that can identify authentic videos, according to Swayambhu Dutta and Chayan Bandyopadhyay, senior research managers at Tata Consultancy Services. “In fact, Microsoft has introduced the concept of content credentials, which is now agreed upon by over 900 companies across the world. It gives you the authenticity of the actual true media.”
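To illustrate the provenance idea behind such content credentials, the following is a simplified Python sketch, assuming the `cryptography` package for Ed25519 signatures: a publisher signs a hash of the media together with its metadata, and anyone holding the publisher's public key can detect tampering. The function names and metadata fields are hypothetical, and this is not the actual C2PA/Content Credentials specification or toolchain.

```python
# Simplified provenance check: sign a media hash plus metadata, then verify it.
import json
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_credential(media_bytes: bytes, metadata: dict, private_key: Ed25519PrivateKey) -> dict:
    """Bind provenance metadata to the media by signing both together."""
    payload = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(blob).hex()}


def verify_credential(media_bytes: bytes, credential: dict, public_key) -> bool:
    """Recompute the hash and check the signature; any tampering breaks one of them."""
    payload = credential["payload"]
    if payload["media_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was modified after signing
    blob = json.dumps(payload, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), blob)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()
    video = b"...raw video bytes..."
    cred = issue_credential(video, {"creator": "Example Studio"}, publisher_key)
    print(verify_credential(video, cred, publisher_key.public_key()))                 # True
    print(verify_credential(video + b"edit", cred, publisher_key.public_key()))       # False
```

Real content-credential schemes go further, embedding signed manifests inside the media file and anchoring them to trusted certificates, but the verification principle is the same.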

Similarly, chipmaker Intel has launched a deepfake detection platform called FakeCatcher that identifies deepfakes with high accuracy in real time.

But is deepfake technology objectively bad? Maybe not entirely. It has applications in several industries, including marketing, media, and retail. Synthetic media can, of course, be misused, for example by featuring a celebrity in a brand promotion without their endorsement or a contract. With proper consent and authorisation, however, the same techniques can power advertisements and content creation, enabling localisation, customisation, and overall better production.

One such example is chocolate brand Cadbury's advertising campaign during the Covid-19 pandemic, which let users create a customised AI avatar of actor Shah Rukh Khan. Targeted mainly at small businesses, the campaign saw over 105,000 users create personalised versions of the ad for their brands.

Further, combined with augmented and virtual reality (AR/VR), deepfakes can be used to create immersive meeting experiences for corporates. Retail companies could use similar techniques to show virtual store layouts for a better shopping experience.

Like many technologies, deepfakes pose both risks and opportunities. As the technology continues to advance, safeguarding against its misuse remains a critical challenge for policymakers, businesses, and society at large.
