Fund women AI founders NOW

With deepfakes becoming a global threat, founders call for an increase in funding to women-led AI companies

MMS Staff

23 Jul 2024

5-min read

In what appears to be the latest in a string of cases of gross misuse of AI technology, a teenage schoolboy from Victoria, Australia, was arrested last month for allegedly generating and distributing deepfakes of his female classmates. 


The accused - who shared the images on social media - was detained by police and soon released without charge. 


The incident has prompted renewed conversation around the gendered nature of the misuse of this technology and how concrete action can be taken to ensure the safety of girls and women. 


This is also not the first time AI-generated deepfakes have made their way into schools. Similar events have unfolded in Beverly Hills, California; in New Jersey; and in Spain. 


And nudes aren’t the only kind of deepfake imagery being made. 


The last couple of years have seen AI-generated deepfake videos of political speeches, such as this one featuring Duwaraka Prabhakaran, daughter of Tamil Tiger militant chief Velupillai Prabhakaran, who - at the time of the release of this video - had died more than a decade ago. 


Then late last year, a deepfake video featuring Indian actor Rashmika Mandanna went viral. The actor subsequently tweeted: “...if this happened to me when I was in school or college, I genuinely can’t imagine how could I (sic) ever tackle this.” 


And more recently, the content creator and actor Bhuvan Bam saw his face used in a deepfake promoting sports betting. The actor was quick to put out a clarification saying it wasn’t him but a deepfake, and that his team had already filed a police complaint alerting the authorities. 


On July 8, this story in the Deccan Chronicle mentioned that the Indian government plans to introduce a bill in parliament to put a check on AI technology and regulate online content. 


‘According to sources, the bill aims to explore better ways to use technology and develop legal frameworks to address challenges posed by deepfakes and AI-generated content, reflecting the growing global concern about these issues,’ the article said. 


However, culpability, it seems, does not extend to the creators of deepfake apps. ‘These platforms make money from platforming content but take no responsibility for doing so,’ notes the Guardian. 


Moreover, it has long been amply clear that the current landscape of these spaces is dominated by male perspectives. 


This story by Routledge points out how this affects the safety of women and girls in the digital world. 


The emergence of AI-driven deepfake technology has provided perpetrators with frightening new avenues for sexual exploitation, violation and abuse. With the help of this new technology, offenders can seamlessly blend and manipulate different visuals and audio clips taken from social media platforms, cameras placed in public, or private, settings, hacked devices, discussion boards, pornography websites, and other online spaces to create lifelike explicit content.

What is especially alarming is the sheer scale and scope of this phenomenon. Because of this new technology, perpetrators have a nearly limitless capacity to exploit anyone across the globe who has ever been photographed or captured on video. Such malicious content is at risk of being continuously shared, traded, consumed, distributed, and further manipulated by other men participating in these crimes. So, while recent developments in AI tools may be exciting to many, for women and girls in particular, there is a range of current and potential disadvantages and violations. 

Entrepreneur Cindy Gallop - who runs the crowdsourced social sex video sharing platform MakeLoveNotPorn - reacted to the Victoria news with this LinkedIn post, which reads: “FUND. FEMALE. AI. FOUNDERS. Because that’s the only way we can scale solutions to this.” 


Gallop’s call for funding for women entrepreneurs working in AI joins a growing chorus of voices demanding a more equitable funding - and representation - landscape in AI. 


AI gender bias is all too common, stemming from skewed data and the underrepresentation and misrepresentation of women. These gaps feed into machine learning models and algorithms, further perpetuating biases that, in less severe cases, manifest as opportunities denied to women and, in more severe cases, as gender-based violence in the form of deepfakes. 


This 2019 research shows that women are the primary victims of deepfakes. A related and concerning trend is the growing use of deepfake technology as a tool of revenge. 


‘96% of this type of online videos were of intimate or sexual nature. The victims were primarily women, often actresses, musicians, and to a lesser extent, media professionals. In contrast, videos without explicit content primarily targeted men (61 percent), mainly politicians and corporate figures,’ the study says. 


While discussions around protecting women’s safety in the AI era are multifaceted, one place to begin could be greater representation of women in tech companies and AI teams, partnerships with more women founders - especially in AI - and the use of feminist data practices to help fill data gaps where women aren’t correctly - or adequately - represented. 


However, it’s also clear that AI is a space dominated by powerful, wealthy men who, as Gallop puts it, “have no intention of welcoming, listening to, funding, and working with women leaders, founders, technologists, and scientists whose views are not completely aligned with and preferably subordinate to theirs.” 


“The young white male founders of the giant tech platforms that dominate our lives today are not the primary targets (online or offline) of harassment, abuse, racism, sexual assault, violence, rape, revenge porn. So they didn’t, and they don’t, proactively design for the prevention of any of those things. Those of us who are at risk every single day — women, Black people, people of color (sic), LGBTQ, disabled — design safe spaces, and safe experiences,” she says. 


Gallop’s thinking and her approach to AI show how machine learning systems built on datasets that are not heavily shaped by the white man’s view of the world can produce AI technology coded with algorithms that help ensure women’s safety - both online and offline. 


Time to rewrite the rulebook - can AI become a tool for empowerment, or will it forever be coded with bias?
