Google plans to implement a new policy on its Play Store, requiring developers of Android apps to include a feature for reporting offensive AI-generated content.
This move comes in response to the rise of problematic generative AI apps, including those that produce inappropriate or misleading content. By enabling users to flag and report such content within the app, developers can improve their filtering and moderation techniques.
Concerns have been raised about AI image generators being used for nefarious purposes, such as creating explicit material or spreading misinformation during elections.
The new policy gives examples of AI-generated content, including chatbots that engage in text-based conversations and apps that generate images from text, image, or voice prompts. Google emphasizes that developers must adhere to its existing policies, which prohibit restricted content such as child sexual abuse material (CSAM) and deceptive behavior, even when that content is produced by an AI generator.
Google has announced that it will not only modify its policy to tackle AI content apps but also subject certain app permissions to scrutiny by the Google Play team. This includes apps that seek extensive photo and video permissions.
According to the new policy, an app will only be granted access to photos and videos if that access is directly related to its functionality. Apps that need only one-time or occasional access, such as AI apps that ask users to upload a series of selfies, must instead use a system picker, such as the new Android photo picker.
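As a rough illustration, the one-time access the policy describes can be satisfied with the Android photo picker via the AndroidX Activity Result API, which requires no media permissions at all. This is a minimal sketch, not Google's prescribed implementation; `SelfieUploadActivity` and `uploadSelfie` are hypothetical names.

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

// Hypothetical activity: lets the user pick a single selfie through the
// system photo picker instead of requesting broad photo/video permissions.
class SelfieUploadActivity : ComponentActivity() {

    // Registers the system photo picker; the app never touches the media store.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            uri?.let { uploadSelfie(it) }
        }

    private fun requestSelfie() {
        // Launch the picker restricted to images only.
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun uploadSelfie(uri: Uri) {
        // App-specific upload logic (hypothetical placeholder).
    }
}
```

Because the picker runs in a separate system process, the app only ever receives the URI of the image the user explicitly chose, which is exactly the occasional-access pattern the policy favors.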
The new policy also aims to restrict disruptive, full-screen notifications to high-priority situations. Many apps have misused this feature to push paid subscriptions or other offers, so Google will now gate it behind a special app access permission called the “Full Screen Intent permission.” This permission will only be granted to apps targeting Android 14 and above that genuinely need full-screen functionality.
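In practical terms, on Android 14 (API 34) and above an app can check whether the user has granted this special access before posting a full-screen notification. The helper below is a hypothetical sketch using the platform's `NotificationManager.canUseFullScreenIntent()` API; the app must also declare `android.permission.USE_FULL_SCREEN_INTENT` in its manifest.

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.os.Build

// Hypothetical helper: returns true if the app may launch a full-screen
// intent from a notification on this device.
fun canShowFullScreen(context: Context): Boolean {
    val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    // canUseFullScreenIntent() was added in API 34; on earlier releases the
    // manifest permission alone is sufficient.
    return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        nm.canUseFullScreenIntent()
    } else {
        true
    }
}
```

When the check returns false, a well-behaved app would fall back to a regular high-priority notification rather than a full-screen takeover.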
Notably, Google is the first of the two major app store operators to introduce a policy on AI apps and chatbots, as Apple has historically been the one to crack down on unwanted app behavior.
However, Apple currently does not have a formal AI or chatbot policy in its App Store Guidelines. Nonetheless, Apple has tightened regulations in other areas, such as apps requesting data for user or device identification (known as “fingerprinting”) and apps attempting to copy others.
While Google Play’s policy updates are being rolled out today, AI app developers have until early 2024 to make the required changes.