The morning started with a message from a friend: “I used your photos to train my local version of Midjourney. I hope you don’t mind,” followed by AI-generated pictures of me in a flirty steampunk costume.
I did, in fact, mind. I felt violated. Wouldn’t you? I bet Taylor Swift did when deepfakes of her hit the Internet. But is the legal status of my face different from the face of a celebrity?
Your facial information is a unique form of personal sensitive information. It can identify you. Intense profiling and mass government surveillance receive much attention. But businesses and individuals are also using tools that collect, store, and modify facial information, and we’re facing an unexpected wave of photos and videos generated with artificial intelligence (AI) tools.
The development of legal regulation for these uses is lagging. At what levels and in what ways should our facial information be protected?
Is implied consent enough?
The Australian Privacy Act treats biometric information (which includes your facial information) as part of your sensitive personal information. However, the Act doesn’t define biometric information.
Despite its drawbacks, the Act is currently the main legislation in Australia aimed at protecting facial information. It states that biometric information cannot be collected without a person’s consent.
The law doesn’t specify whether it should be express or implied consent. Express consent is given explicitly, either orally or in writing. Implied consent means consent may reasonably be inferred from the individual’s actions in a given context. For example, if you walk into a store that has a sign stating “facial recognition camera on the premises,” your consent is implied.
However, relying on implied consent opens your facial data up to potential exploitation. Bunnings, Kmart, and Woolworths, for example, have relied on easy-to-miss signage to disclose the facial recognition or camera technology used in their stores.
Facial information is valuable and unprotected
Our facial information has become so valuable that data companies such as Clearview AI and PimEyes are mercilessly hunting it down online without our consent.
These companies put together databases for sale that are used not only by the police in various countries, including Australia, but also by private companies.
Even if you delete all your facial data from the Internet, you can easily be captured in public and appear in some database anyway. Being in someone’s TikTok video without your consent is a prime example — in Australia, this is legal.
Furthermore, we’re now contending with generative AI programs such as Midjourney, DALL-E 3, Stable Diffusion and others. Anyone can now easily not only collect our facial information, but modify it too.
Our faces are unique to us; they’re part of what we perceive as ourselves. But they don’t have special legal status or special legal protection.
The only action you can take to protect your facial information from aggressive collection by a store or private entity is to complain to the Office of the Australian Information Commissioner, which may or may not result in an investigation.
The same applies to deepfakes. The Australian Competition and Consumer Commission will consider only activities that apply to trade and commerce, such as if a deepfake is used for false advertising.
And the Privacy Act doesn’t protect us from other people’s actions. I didn’t consent to have someone train an AI with my facial information and produce made-up images. But there is no oversight on such use of generative AI tools, either.
There are currently no laws that prevent other people from collecting or modifying your facial information.
Helping the law catch up
We need a range of regulations on collecting and modifying facial information. We also need a stronger legal status for facial information itself. Thankfully, some developments in this area look promising.
Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law.
It contains proposals for regulating the first stage of non-consensual activity: collecting personal information. That may help in the development of new laws.
Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices.”
There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions.
The government has agreed to consider enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. This work should be coordinated with the government’s ongoing work on Digital ID and the National Strategy for Identity Resilience.
As for consent, the government has agreed in principle that the definition of consent required for biometric information collection should be amended to specify that it must be voluntary, informed, current, specific and unambiguous.
As facial information is increasingly exploited, we’re all waiting to see whether these discussions become law — hopefully sooner rather than later.
Correction: we have amended a sentence to clarify that Woolworths uses camera technology, but not necessarily facial recognition technology.
Margarita Vladimirova, Ph.D. in Privacy Law and Facial Recognition Technology, Deakin University
This article is republished from The Conversation under a Creative Commons license. Read the original article.