Models of generative artificial intelligence (AI) such as ChatGPT and Google’s Bard AI might run afoul of India’s recently passed data privacy law, which mandates that all platforms disclose the personal data they hold and obtain user consent before processing it further.

Although large language models like ChatGPT and Bard AI rely mostly on publicly available data, the information users provide in their prompts is also incorporated into the system to improve the accuracy of the AI’s outputs.

Generative AI models are trained on massive amounts of data. OpenAI’s GPT-3, released in 2020, was trained on 45 terabytes (TB) of text data to generate its outputs. Experts believe that separating personal data from such a complex dataset is likely to be a crucial challenge.

In general, AI systems process the datasets provided to them as-is, with no inherent distinction between personal and non-personal data unless specifically configured to make one. The Bill’s lack of such safeguards, and of any provision distinguishing the processing of personal from non-personal data in AI, may raise privacy concerns.

According to the Bill, the central government may designate specific platforms as significant data fiduciaries. That designation would depend on factors such as the volume and sensitivity of the personal data they process, potential risks to user rights, and effects on India’s sovereignty and integrity, among others. In most applications, large language models trawl the web for content and use it to train models, and the key distinction is between personal and non-personal data. The Bill therefore has little effect on the training of large language models, though there could be models that now forecast different outcomes, and certain models may require some form of user consent.

Our systems already have features in place to delete data quickly upon request. Naturally, we will analyse the Act in detail to identify any issues that need further attention. This may entail implementing more thorough procedures for swiftly informing the Data Protection Board and any affected users of personal data breaches.