  • Post by StriveX Academy Administration

The entertainment industry has changed drastically over the past few years, with advances in animation and visual effects such as CGI, and in image editing, where tools like Photoshop can completely transform a picture and AI can now generate images from scratch. Recently, however, a kind of Photoshop for video has emerged: deepfakes. They are slowly eroding trust in society, making us question every video or interview we see on the internet…

Deepfakes use a form of artificial intelligence called deep learning to create videos of fake events. According to MIT Technology Review, "the term 'deepfake' was first coined in late 2017 by a Reddit user of the same name. This user created a space on the online news and aggregation site, where they shared pornographic videos that used open-source face-swapping technology." The term has since expanded to include synthetic media applications that existed before the Reddit page, as well as new creations like StyleGAN, which produces "realistic-looking still images of people that don't exist," as Henry Ajder, head of threat intelligence at the deepfake-detection company Deeptrace, explains. Arguably, an early precursor, before the word "deepfake" even existed, was ElfYourself, which let users upload photos of themselves and pasted their faces onto dancing elves doing silly moves. Today, however, deepfakes are often used to manipulate the media and convince viewers that what they are seeing is real, and they can cause far more harm than good…
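To make the "deep learning" part concrete, the classic face-swap approach trains one shared encoder (which captures pose and expression) and a separate decoder per identity; swapping means decoding person A's expression with person B's decoder. The sketch below is purely illustrative, using tiny random NumPy matrices in place of trained neural networks, so it shows the data flow of the technique, not a real result:

```python
import numpy as np

# Illustrative sketch only: real deepfake models operate on image tensors and
# are trained on thousands of frames. Here the "networks" are small random
# matrices, enough to show the shared-encoder / per-identity-decoder idea.
rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64, 16  # toy sizes for a flattened "face" vector

W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1    # shared encoder
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person A
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person B

def encode(face):
    """Map a face vector into the shared latent space (pose/expression)."""
    return np.tanh(W_enc @ face)

def decode(latent, w_dec):
    """Render a latent code in one specific identity's appearance."""
    return np.tanh(w_dec @ latent)

face_a = rng.normal(size=FACE_DIM)           # a frame of person A

# Training objective: each decoder learns to reconstruct its own identity.
recon_a = decode(encode(face_a), W_dec_a)

# The swap: A's expression, rendered through B's decoder -> a fake of B.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)          # both are full face vectors
```

Because the encoder is shared, expression information transfers between identities automatically once both decoders are trained, which is why open-source tools made this attack so accessible.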

One major risk is that deepfakes can be used in politics, specifically to make world leaders appear to say things that simply are not true. They could also target prominent founders, such as Elon Musk or Jeff Bezos, portraying them as villains, crashing their companies' stock, and ending their careers along with their brands. One example is a video of Meta founder Mark Zuckerberg appearing to describe a future in which he collects all users' personal data and claims it as his own: "Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures." Meta, formerly known as Facebook, has a rocky history with users' data, even appearing in court after being hit with a privacy lawsuit in October 2021.

In 2021, Zuckerberg also introduced the Metaverse, a network of 3D virtual worlds focused on social connection. Many fear it because players can build realistic, human-like avatars from their real faces, essentially the same premise as a deepfake, since avatars can most likely be generated from photos as well. Because the experience is so hyper-realistic, numerous harassment and assault allegations have already come from people targeted inside this virtual-reality universe.

Used maliciously, deepfakes could wreak havoc on society: sensational headlines would scare the public, potentially leading to riots, damaging people's personal and professional lives, breeding distrust in governments, and pushing people to rebel against societal norms. For example, consider a video that circulated in 2018 showing a deepfaked version of former President Barack Obama, voiced and acted by actor and director Jordan Peele.
Though harmless in intent, that video shows how dangerous deepfakes can be; in the wrong hands, they could push society toward dystopia. If, say, government officials were deepfaked declaring that crime is a positive thing, crime could rise, edging toward the scenario of The Purge, a film in which all crime is legal for twelve hours. The January 6th riots are one example: though no deepfakes were involved, they showed the power political leaders hold and how far people will go for one leader. This is sometimes called "stan" behavior, after a term popularized by Eminem for extremely or excessively enthusiastic and devoted fans, some of whom build their whole existence around a celebrity figure. If a popular figure were deepfaked by an average person, such fans might follow every command. Consider a deepfaked video of model and influencer Kim Kardashian explaining how she loves manipulating people online for money. That video, like the Mark Zuckerberg clip above, came from Spectre, a project by artists Bill Posters and Dr. Daniel Howe, who collaborated with leading AI technology start-ups to create a range of deepfakes exploring the dangers of the Digital Influence Industry.

One of the biggest risks, though, is that deepfakes are used mainly for pornography. Anyone can have a convincing pornographic video fabricated of them. In one case, a cyberbully, Raffaela Spone, who initially remained anonymous, harassed teenage girls with deepfaked nudes and videos placing them in incriminating situations, in an attempt to get them kicked off their cheerleading team.
University of Virginia law professor Danielle Citron, who researches digital harassment, privacy, and free speech, has noted that deepfake sex videos are increasingly common, overwhelmingly target women, and, most troubling, are getting easier to make and harder to spot. Businesses could also use deepfakes in advertising, fabricating celebrity endorsements so that people buy their products; even children's channels could feature celebrity figures spreading positive messages about a product. This creates a dangerous competitive landscape: a company could show a celebrity endorsing it even if that person opposes the company entirely, or use the same trick to trash a rival, damaging the celebrity's reputation either way. Deepfakes also enable threats such as fraud and blackmail: scammers could hold deepfaked pornographic videos of you hostage and demand money, threatening to release them publicly unless you comply. The same tactic could reach big corporations and organizations if they are not careful, driving an influx of financial scams against small businesses and large companies alike. One last risk is that anyone can make a deepfake: the software is all over the web, and a quick Google search will lead you to it. Even one SINGLE photo can be enough to turn you into a deepfake and a talking head. One example is Reface, an app that lets users swap faces in videos and that could be used for harmful purposes. Which begs the question: could deepfakes be used for good?

Surprisingly, deepfakes do have some benefits, such as bringing historical figures back to life and creating a more engaging learning environment for people interested in history. Old black-and-white footage could be made to feel vividly current, and historical events could be recreated so that people get a clear view of what happened and how times have changed. Teachers could use the technology, together with a green screen, to build interactive classrooms that immerse students in the past. Films could become more historically accurate, with deceased actors seemingly returning to play roles, while reducing CGI costs as well. Children could appear to receive personalized videos from their celebrity idols; one example is a TikTok account featuring a deepfaked version of actor Tom Cruise telling jokes and performing magic tricks. Celebrities who are too busy to engage directly could authorize deepfakes of themselves to interact with their fanbase and build a stronger personal connection. One last benefit lies in the video game industry, where the technology could dramatically speed up game creation; one example is NVIDIA's AI Playground, which lets you explore their AI environment, including their first game demo created with AI-generated graphics. So, with all of this in mind, what should you think about deepfakes?

Deepfakes are the future, no doubt about that, but how can YOU stay safe from people creating harmful deepfakes of you? Deepfakes are still in their early stages, and truly powerful software has not yet been released. There are signs you can look for to spot a deepfake video: jerky movement, shifts in lighting from one frame to the next, shifts in skin tone, strange blinking or no blinking at all, lips poorly synced with speech, and digital artifacts in the image. But deepfakes will keep evolving, and software now in development may become convincing enough that you will need help from cybersecurity tools. According to the cybersecurity company Kaspersky, "some emerging technologies are now helping video makers authenticate their videos. A cryptographic algorithm can be used to insert hashes at set intervals during the video; if the video is altered, the hashes will change. AI and blockchain can register a tamper-proof digital fingerprint for videos. It's similar to watermarking documents; the difficulty with video is that the hashes need to survive if the video is compressed for use with different codecs. Another way to repel Deepfake attempts is to use a program that inserts specially designed digital 'artifacts' into videos to conceal the patterns of pixels that face detection software uses. These then slow down Deepfake algorithms and lead to poor quality results — making the chances of successful Deepfaking less likely." Good security habits are also an excellent way to avoid becoming a victim. Spread awareness among friends, family, and employers: explain how deepfaking works and the problems it can cause, show them how to spot a deepfake, and get your information from reputable, unbiased news sources such as The Associated Press or The Wall Street Journal.
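The hash-based authentication Kaspersky describes can be sketched in a few lines: record a cryptographic digest for each fixed-size segment of the video when it is published, then recompute and compare later. Any altered segment produces a mismatched hash. This toy version hashes raw bytes with SHA-256 (real schemes must survive re-encoding, as the quote notes, which is the hard part):

```python
import hashlib

CHUNK = 1024  # bytes per segment; real schemes hash fixed time intervals of video

def chunk_hashes(data: bytes, chunk_size: int = CHUNK) -> list[str]:
    """SHA-256 digest of each fixed-size segment of a (mock) video stream."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def find_tampered_chunks(original_hashes, suspect_data, chunk_size=CHUNK):
    """Return the indices of segments whose hashes no longer match."""
    suspect_hashes = chunk_hashes(suspect_data, chunk_size)
    return [i for i, (a, b) in enumerate(zip(original_hashes, suspect_hashes))
            if a != b]

video = bytes(range(256)) * 16           # 4096 stand-in bytes -> 4 chunks
published = chunk_hashes(video)          # recorded when the video is released

tampered = bytearray(video)
tampered[2100] ^= 0xFF                   # flip one byte inside chunk 2
print(find_tampered_chunks(published, bytes(tampered)))  # [2]
```

Because each segment is hashed independently, a verifier can pinpoint which part of the video was altered, not just that something changed.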
Always verify anything suspicious you encounter, such as unexpected calls, and keep your accounts private so that strangers have a harder time finding images of you on the web. There is no need to panic yet: deepfake software is still immature, and for now it is usually easy to tell real from fake, especially when you know the person would never say such a thing. Never believe everything you see on the internet, and always get your information from trustworthy news sources.

Overall, deepfakes have both benefits and risks, but the technology still has a long way to go. The software is not yet perfect, and the average person cannot easily obtain it or fully operate it, since it can be quite complex. Know the warning signs, and never believe everything you see on the internet. Right now, deepfakes can cause more harm than good, especially in the wrong hands; well-funded companies could invest in deepfake technology for their own benefit, particularly in highly competitive industries. In the future, deepfakes and AI may become the norm, powerful and believable enough to be genuinely frightening. But better defensive technology will arrive too, helping to stop the spread of deepfakes and false information. Instagram has already released a fact-checking mechanism that notifies you when it spots false information in a post, and third-party fact-checkers such as Snopes and PolitiFact are working to stop misinformation and trace where it originated. Nonetheless, deepfakes have a long way to go. Please share this article with friends, family, employers, and anyone concerned about deepfakes, so they understand the future we could potentially face, how to spot the signs, and how to ease the stress deepfakes can cause.


Source: Pew Research Center. About three-quarters of Americans favor steps to restrict altered videos and images.
