STEM To Go

How Machine Learning Can Get Dark: Deepfakes

Updated: Sep 3, 2020


What is Machine Learning?


As AI becomes more widespread across the technology field, machine learning has become a key ingredient: it gives software systems the ability to learn on their own from given data, rather than having every behavior explicitly programmed.
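To make "learning from data" concrete, here is a minimal sketch using scikit-learn and made-up numbers: the model is never given an explicit rule, only example pairs, and infers the pattern itself.

```python
# A minimal sketch of "learning from data" (hypothetical toy numbers).
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.linear_model import LinearRegression

# Toy examples: hours studied -> exam score.
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 67, 71, 80]

model = LinearRegression()
model.fit(hours, scores)  # the "learning" step: no rule is hand-coded

# The model generalizes to an input it has never seen.
print(model.predict([[6]]))
```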





Deep Fakes: The Process


One area of machine learning that tests the boundaries between ethics and morals is the deep fake. For those unaware, deep fakes are programs that use machine learning to automatically generate fake images and videos out of existing media.


A software system known as a deepfake neural network must first familiarize itself with images, audio, and video footage of the original person or object. The programmer provides the system with content centered on that person, who could be anyone from a famous celebrity to a next-door neighbor. This stage teaches the network how people and objects are structured and how they move around in their surroundings.
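As a hedged illustration of this familiarization stage, the sketch below uses OpenCV's stock face detector to harvest face crops from footage of the original person; the file paths are hypothetical, and a real pipeline would use a far more robust detector.

```python
# Illustrative sketch: collecting face crops of the "original" person
# from video footage so a network can later learn their structure.
# Assumes OpenCV is installed: pip install opencv-python
# "original_person.mp4" is a hypothetical placeholder path.
import os
import cv2

os.makedirs("faces", exist_ok=True)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("original_person.mp4")
count = 0
while True:
    ok, frame = video.read()
    if not ok:
        break  # end of footage
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # Save each detected face as a training image.
        cv2.imwrite(f"faces/face_{count}.png", frame[y:y+h, x:x+w])
        count += 1
video.release()
```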


Next, the trained network uses another individual, referred to as the target, as a reference. It combines the previously given images, audio, and video footage of the original person or object with footage of the target, augmenting the data along the way, a process machine learning models use to train toward more accurate results.
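The augmentation step can be pictured as applying random variations to each face crop so the network sees the same person under many conditions. The sketch below uses torchvision transforms purely as an illustration; real deepfake pipelines vary in which augmentations they apply.

```python
# Illustrative data augmentation: random flips, rotations, and color
# jitter multiply the effective training data for a face.
# Assumes torchvision and Pillow are installed.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

face = Image.new("RGB", (64, 64))  # stand-in for a real face crop
sample = augment(face)             # a randomly varied training sample
print(sample.shape)                # torch.Size([3, 64, 64])
```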


Here, an encoder algorithm is used to detect the similarities and differences between the two individuals or objects. Then an algorithm known as the decoder reconstructs the two so that the target merges into the likeness of the original. The final result is typically an eerily identical image of the original individual, and the software can also mimic their voice and movements. Here is an example of deep fakes being almost too good to be true:

[Embedded video example; likely the BBC News deepfakes video cited below]
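For the curious, here is a simplified sketch of the shared-encoder, two-decoder architecture described above, written in PyTorch with made-up layer sizes; it illustrates the general technique, not any particular tool's actual code. Because both decoders read the same latent code, feeding the target's frame through the original person's decoder produces the swap.

```python
# Illustrative deepfake autoencoder: one shared encoder, one decoder
# per person. Layer sizes are arbitrary for the sketch.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, 256),  # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_original = Decoder()  # trained to reconstruct the original person
decoder_target = Decoder()    # trained to reconstruct the target person

# At inference, route the target's frame through the original's decoder:
target_frame = torch.rand(1, 3, 64, 64)            # stand-in for real footage
swapped = decoder_original(encoder(target_frame))  # the "deep fake" frame
```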
However, deep fakes are part of reality, and some have been perfected to the point of convincing viewers that the mimicked product is genuine. In one research study, Amsterdam researchers presented a deep fake of CDA leader Sybrand Buma to an audience of 278 people to test whether viewers could recognize that the footage was not real. Only 8 out of 140 people raised doubts about the video. As researcher Tom Dobber noted, "And this one [deep fake] was not even perfect, you could see the lips moving crazily every now and then. It is remarkable that people fell for it completely." The result suggests the convincing power and influence of deep fakes, and their uses point to an underlying concern of exploitation in certain fields, covering confusing moral and ethical ground.


The Dark Side of Deep Fakes:


Most deep fakes degrade and weaponize vulnerable female victims: the most widespread use across the web is to map the appearance of famous female celebrities onto the bodies of performers in adult entertainment. The current lack of regulation has helped popularize this use of deep fakes, as there is not much the victims of these images can do. According to a study conducted by the AI company Deeptrace, out of a sample of 15,000 deep fake videos, around 95% of the content used footage from adult entertainment. This finding suggests new ways of inflicting lewd crimes such as revenge pornography and other non-consensual imagery.


Beyond this, deep fakes can also be used for scams, fraud, or extortion against all kinds of individuals. In one kind of case, employees hear deep fake audio of their employer instructing them to transfer money, unknowingly sending it to scammers.


Deep fakes can also spread easily on social media, presenting false portrayals of political leaders as news. Given the current political climate and the volume of misinformation, it would be no surprise if a large number of users believed political deep fakes. Proving that an image, video, or audio file is a deep fake can be quite difficult, as some programmers leave no obvious flaws in the product, and audiences would likely struggle to spot the subtle differences in these files.





What is Being Done To Stop This:


While deep fakes have their uses, such as replacing actors in films, dubbing over dialogue in foreign shows, or recreating the voices of people whose vocal cords were damaged by accidents or disabilities, the presence of deep fakes in illegal activities is unsettling, as anyone can fall victim to them.


Microsoft has now developed two software tools to help combat the presence of deep fakes in accessible media. The tools can detect differences between an original source and a deep fake at a far more microscopic level than the human eye can.
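Microsoft has not published the internals of these tools, but frame-level deepfake detection is commonly framed as a binary classifier that scores each video frame as real or manipulated. The sketch below is a generic, hypothetical illustration of that idea, not Microsoft's method.

```python
# Generic, illustrative deepfake detector: a small CNN that outputs
# the probability that a frame is manipulated. Untrained here; a real
# detector would be trained on labeled real/fake frames.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),  # probability the frame is fake
)

frame = torch.rand(1, 3, 64, 64)  # stand-in for a video frame
fake_probability = detector(frame).item()
print(f"Estimated chance this frame is manipulated: {fake_probability:.2f}")
```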


As noted by Tom Burt, corporate vice president of customer security and trust at Microsoft, "Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate.”


The case of deep fakes is ongoing: the technology is constantly improving in quality, and there is still a lack of safeguards and regulations to deter the use of deep fakes for inappropriate purposes.





It is fair to say that most people take what they see on their screens as true, and at the rate information pours onto social platforms, it is tiring to keep doubting the credibility of every piece of content. However, deep fakes are an excellent demonstration of why we should be cautious about what we see on our screens: Is this real? Has this content been modified or edited? As more attention is brought to deep fakes, perhaps more influential powers can step in to create regulations that address the current concerns. It is important to stay aware of topics like this, where technology borders the lines of ethics and morals.


Works Cited:


BBC News. “Viral Video Deepfakes Celebrities.” BBC News, BBC, 5 Nov. 2019, www.bbc.com/news/topics/crm5plqk980t/deepfakes.


Nelson, Daniel. “What Are Deepfakes?” Unite.AI, 23 Aug. 2020, www.unite.ai/what-are-deepfakes/.

Pieters, Janene. “Deepfakes Very Convincing, Effective in Influencing People, Amsterdam Researchers Found.” NL Times, 24 Aug. 2020, nltimes.nl/2020/08/24/deepfakes-convincing-effective-influencing-people-amsterdam-researchers-found.


Sherr, Ian. “Microsoft's New Tech Spots Deepfakes to Fight Disinformation Ahead of 2020 US Election.” CNET, CNET, 1 Sept. 2020, www.cnet.com/news/microsofts-new-tech-spots-deepfakes-to-fight-disinformation-ahead-2020-us-election/.
