
Deepfakes: When Contemporary Content Creation Goes Too Far

Google officially launched in 1998, and the world hasn’t been the same since. Its name has entered our vocabulary as a verb, something we find ourselves doing every day.

The first page of a Google search result is the most powerful position on the internet. By most estimates, it captures around 95% of search traffic, leaving a mere 5% for every page that follows. Most online surfers won’t even get past the fifth page unless they’re looking for obscure answers.

To compete for this coveted spot, webmasters are coming up with new ways to get your attention. Search Engine Optimization (SEO) revolutionized digital marketing by analyzing algorithms to increase a website’s visibility on Google’s result pages. With the help of professional SEO service providers, brands can increase their presence online. More importantly, SEO achieves its aims by improving a website’s quality.

While SEO experts play fair, others resort to shallow clickbait schemes that, unfortunately, work. Among these schemes, deepfakes are becoming more frightening than amusing.

What Are Deepfakes?

The term is a portmanteau of “fake” and “deep learning,” a class of machine-learning methods. Deepfakes use real images and videos as raw input, swapping their subject’s likeness for someone else’s. In essence, it is high-tech photo and video manipulation. Deepfakes are sometimes accompanied by manipulated or synthesized audio, increasing their potential for deception.

Deepfakes in Mainstream Media: Where Deepfake Content Can Prove Useful

Commercial sectors use deepfakes more often than you’d think. But they’re not referred to as deepfakes. Instead, they come under the guise of euphemistic jargon, such as “AI-generated videos” or “synthetic media,” which the tech-savvy population defines as any content fully or partially generated by computers.

Victor Riparbelli, CEO of software company Synthesia, once said that AI-generated media are “the future of content creation.”

Riparbelli’s London-based company specializes in synthetic media for business. Through Synthesia’s software, corporations can create content in a matter of minutes. Employers can use the software to transform team members into AI avatars, which can then be processed to create custom video presentations without conducting a professional video shoot. Through AI technology alone, Synthesia has crafted corporate training videos for clients like Accenture.

Aside from corporate communications and education, deepfakes have also made their way into the news. Synthesia has worked with Reuters to create automated, video-led reports.

This technology has also been used in South Korea when the cable channel MBN let an AI anchor run the show. The AI anchor was based on MBN’s real reporter, Kim Ju-ha.

The AI captured Kim’s persona: it had her facial features and the sound of her voice. It mimicked her down to her idiosyncrasies, such as her habit of gently fiddling with a pen.

Through a series of test runs, the AI Kim Ju-ha reported the news four times a day. Viewers were informed of the technology in use. In its introduction, the AI noted that it had been created through deep learning on 10 hours’ worth of video footage featuring Kim Ju-ha, from which it extracted her facial expressions and movements.

Cable companies consider this a good thing: it can cut labor and production costs. For reporters and anchors, however, it could mean lost jobs and fewer opportunities.

Nonetheless, newscasters do more than read scripts; they also provide on-the-spot insights into political issues as they unfold. For that reason, we have high hopes that real-life anchors will be retained in the future.

Deceptive Media: The Evils of Deepfakes

Deepfakes on social media started as an amusement. Instagram users like Jesse Richards create deepfake videos for fun, calling out celebrity mannerisms and referencing pop culture icons. Richards has a particular fondness for The Office, as do many Americans today. He takes a video of Pam, for example, and replaces her face with Arnold Schwarzenegger’s likeness. These videos are clearly fake, but only because we know the original footage so well.

Without context, deepfakes possess great power of deception. And with great power comes a potential for evil.

In the political arena, deepfakes have been used as weapons in misinformation warfare. A deepfake video of Donald Trump advising Belgians to “withdraw from the Paris climate agreement” deceived many users online, sparking disputes on Facebook and Twitter. Of course, no one won except the Belgian socialist party that circulated the video.

Deepfakes have also been used to circulate hoaxes, create fake celebrity sex tapes, and commit fraud.

How Can We Spot Deepfakes?

Online platforms like Twitter and Google are developing their systems further to filter out deepfakes. For your own protection, here are a few things to look out for.
1. A Lack of Facial Expressions. The human face has over 40 individual muscles that produce a multitude of expressions. If the video’s subject displays no microexpressions at all, such as the arching of an eyebrow, it might be AI-generated.
2. Videos That Look Unnatural When Slowed Down. Sites like YouTube allow you to change the playback speed. If you suspect you’re watching a deepfake on YouTube, try slowing it down and watch for glitches.
3. Blurred Lines. Look out for blurry pixels, especially around the person’s temples, jawline, and hair strands, where a swapped face is blended into the original footage.
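The third tip can even be approximated in code. A common heuristic for measuring blur is the variance of the Laplacian: sharp, high-contrast regions produce high variance, while smoothly blended (often deepfaked) regions produce low variance. The sketch below is purely illustrative, not a real deepfake detector; the function name and the toy 6×6 “images” are invented for this example.

```python
def laplacian_variance(image):
    """Variance of a 4-neighbour Laplacian over a 2D grayscale image
    (a list of lists of pixel intensities). Low variance suggests blur."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x the centre.
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A crisp checkerboard patch (high contrast) vs. a flat grey patch.
sharp = [[255 if (x + y) % 2 == 0 else 0 for x in range(6)] for y in range(6)]
smooth = [[128] * 6 for _ in range(6)]

print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

Real detection tools apply the same idea locally, comparing sharpness around the temples and jawline against the rest of the frame, but with far more sophisticated models.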

Awareness Prevents Misinformation

As malicious users exploit deepfakes, knowledge will be your weapon. As always, never believe everything on the internet without doing your own research. With proper vigilance, deepfakes cannot control the narrative.

Sunit Kumar Nandi

Sysadmin, coder, e-magazine editor, tech reviewer, freelancer. I love science and all things nice. Leading officer of Tekh Decoded. Owner of Techno FAQ Digital Media.
