High Hopes, Higher Hurdles: Analysis of the DPDP Act and its Draft Rules (Part II)
- Pushpit Singh* and Silvia Tomy Simon**
[This is the second part of a two-part contribution that discusses the Indian data protection regime and analyses the friction between the DPDP Act and the draft DPDP Rules. For a discussion of the tensions between the Act and the draft Rules, with a focus on cross-border data flows and regulatory uncertainty, please see Part I. This part covers themes such as the impact of the regime on Data Principals’ rights and civil liberties, and assesses its implications for the AI sector.]
IMPACT ON DATA PRINCIPALS AND CIVIL LIBERTIES
The DPDP framework promises to strengthen individual autonomy by foregrounding the rights of Data Principals. At the heart of this framework is the principle of informed and voluntary consent. Section 6 of the Act formalises this by requiring that consent be free, specific, informed, and unambiguous before any personal data is processed. Building on this, Rule 3 of the Draft Rules lays out detailed requirements for privacy notices. These notices must, at the point of data collection, clearly list each category of personal data being collected, every purpose for which it will be processed, and the specific goods or services to which that processing relates. While this granular approach may advance informational transparency, it also risks overwhelming Data Principals with lengthy or hyper-detailed disclosures, thereby contributing to “consent fatigue” — a phenomenon well-documented in other jurisdictions, including under the GDPR.
It is true that long and complex privacy notices have drawn criticism in the EU as well. Many GDPR-compliant organisations rely on detailed, legalistic formats to minimise liability. However, the GDPR framework provides several tools to reduce this burden. Regulatory bodies in the EU have supported layered notices, just-in-time disclosures, and the use of icons to improve user understanding. These approaches aim to balance legal accuracy with user experience. In contrast, the Indian framework offers no such guidance. There are no official standards or interpretive tools to help limit cognitive overload for users. As a result, there is a real risk that disclosures, though technically compliant, may fail to serve their larger purpose. They may confuse rather than empower users, ultimately weakening the Act’s aim of promoting genuine, informed choice.
Despite these concerns, a balanced interpretation of the DPDP Act and Draft Rules is still possible. Section 6 should not be viewed in isolation. It must be understood in light of the broader goals of user autonomy and accessibility. When interpreted purposively, the detailed notice requirements under Rule 3 need not result in rigid, one-size-fits-all disclosures. Instead, these obligations may be met through context-sensitive designs, such as tiered notices, dynamic prompts, or modular formats that align with both legal standards and real-world user behaviour. This approach would keep the consent system legally sound while avoiding the pitfalls of mechanical or performative opt-ins. It would also better reflect the Act’s underlying aim of fostering meaningful, informed participation by Data Principals. However, without explicit provisions supporting such flexibility, the burden may shift to the Data Protection Board or relevant sectoral regulators to issue practical guidance. In the absence of such interpretive support, there remains a real risk that longer, more complex notices, though formally compliant, may erode the very autonomy the law is meant to protect.
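To illustrate what a context-sensitive design might look like, the Python sketch below shows one hypothetical way of structuring a layered notice: a short summary layer presented up front, a detail layer that still enumerates each data category, purpose, and associated service, and a helper that surfaces only the entries relevant to the feature a user is about to invoke. The field names, structure, and URL are assumptions for illustration only; they are not a format prescribed by Rule 3 or the Draft Rules.

```python
# A hypothetical, illustrative structure for a layered privacy notice.
# Nothing here is a format prescribed by the DPDP Act or the Draft Rules.
layered_notice = {
    "summary": "We collect your contact details and order history to deliver "
               "your purchases and send order updates. Tap to see details or "
               "change your choices.",
    "details": [
        {
            "data_category": "contact details (name, phone, address)",
            "purpose": "order fulfilment and delivery",
            "service": "e-commerce delivery",
            "optional": False,
        },
        {
            "data_category": "order history",
            "purpose": "personalised recommendations",
            "service": "product discovery",
            "optional": True,   # granular, revocable choice
        },
    ],
    "withdrawal_link": "https://example.com/manage-consent",  # hypothetical URL
}

def just_in_time_entries(notice, service):
    """Surface only the disclosures relevant to the feature being invoked."""
    return [d for d in notice["details"] if d["service"] == service]

print(just_in_time_entries(layered_notice, "product discovery"))
```

The point of the sketch is simply that granular disclosure and readability need not be in tension: the full Rule 3 content is preserved in the detail layer, while the summary and just-in-time layers manage cognitive load.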
The Draft Rules introduce the concept of Consent Managers. These are registered intermediaries tasked with obtaining, managing, and facilitating the withdrawal of consent on behalf of Data Principals. This model reflects a design approach similar to India’s Account Aggregator framework in the financial sector. In that system, licensed intermediaries enable user-controlled, permission-based data sharing between financial institutions. In practice, Account Aggregators have improved the flow of consent-driven data while ensuring security and interoperability. This raises the possibility that Consent Managers could play a similar enabling role under the DPDP regime. However, Rule 4 and the First Schedule of the Draft Rules set high eligibility criteria for Consent Managers. These include net worth thresholds and strict conflict-of-interest safeguards. While these conditions are based on valid concerns, such as protecting Data Principals from misuse and ensuring the neutrality of consent processes, they may also restrict participation. In effect, only well-capitalised entities may qualify, limiting entry for smaller innovators. Since Consent Managers will handle the key mechanism of lawful processing — namely, valid consent — the case for financial and institutional stability is strong. Yet, this raises a key question: how can the law maintain high integrity without discouraging competition or innovation?
A more balanced approach could be adopted. For example, financial thresholds may be scaled according to the size of the user base or the sensitivity of the data. Additionally, a regulatory sandbox model may allow smaller Consent Managers to operate in a controlled environment before full accreditation. This would safeguard user trust while keeping the ecosystem open to innovation. Similarly, the conflict-of-interest rules could be more nuanced. Sector-specific platforms — like those in health-tech or ed-tech — could be allowed to act as Consent Managers, provided they follow strict internal firewalls, undergo regular audits, and disclose their affiliations clearly.
Allowing a sector-specific platform to act as a Consent Manager can serve several practical and legal objectives. First, a sector-specific operator already understands the technical standards, regulatory terminology, and risk profile unique to its domain. For example, a health-tech platform routinely navigates medical confidentiality duties and medical-record interoperability requirements (of the kind imposed by frameworks such as HIPAA in the United States), while an ed-tech provider is expected to be familiar with child-protection rules and age-appropriate data practices. That accumulated knowledge can enable more precise and efficient consent flows, clearer notices, and context-specific choices that a generalist platform, lacking first-hand experience of the sector’s day-to-day functioning, may struggle to deliver. Second, users may benefit from reduced “consent fatigue”: if the platform they already use to schedule medical appointments or access online courses offers a single, transparent, and integrated dashboard for viewing, granting, or withdrawing permissions, they can exercise their data rights without shifting between multiple services. This would maintain user confidence without entirely excluding integrated service providers.
Currently, the Draft Rules do not align the role of Consent Managers with the existing categories under the DPDP Act — namely, Data Principals, Data Fiduciaries, and Data Processors. Nor do they explain how the duties of Consent Managers might overlap with those of Data Fiduciaries. In contrast, the GDPR makes these distinctions clear. Controllers determine the purpose and means of processing, while Processors act on their instructions. Each role carries distinct legal responsibilities. Consent is primarily managed by Controllers, and no intermediary consent entities exist under the GDPR. That said, Article 26 of the GDPR allows for joint controllership when multiple parties jointly determine the purposes and means of processing. In such cases, liability is proportionate and responsibilities are shared. India does not need to copy this structure entirely. Instead, a hybrid approach could be considered. Consent Managers could be treated as “fiduciary-lites”: entities acting on behalf of Data Fiduciaries but carrying their own duties of neutrality, transparency, and facilitation of consent withdrawal. Alternatively, a model of shared or vicarious liability could clarify where responsibility lies when non-compliance occurs.
Overall, the regulation of Consent Managers should protect institutional integrity without enforcing exclusivity through rigid entry barriers. A balanced framework must combine trust, accountability, and competition. This is essential to building a decentralised and innovation-friendly consent ecosystem.
Separately, the Draft Rules create significant exemptions for government agencies. Under Section 7(b) and Rule 5, these exemptions apply when personal data is processed for subsidies, benefits, or public services. This means government bodies enjoy broader discretion than private entities when handling citizen data. Private Data Fiduciaries must meet strict requirements on consent, security, and retention. In contrast, public authorities face fewer limitations when collecting and using personal information for official programmes. This may pose a threat to privacy. Government agencies operating at scale can gather and store vast datasets. Without equivalent checks, this may lead to unchecked surveillance or profiling. While the GDPR also allows exemptions for public interest, it ties them to strong necessity and proportionality tests. India’s Draft Rules do not impose comparable safeguards. This raises concerns about how citizen data will be protected in large-scale welfare or administrative schemes. It highlights a clear accountability gap: private companies are tightly regulated, but public bodies handling the same data may not be.
IMPLICATIONS FOR THE AI SECTOR
Artificial intelligence technologies — especially machine learning and Generative AI (GenAI) — depend heavily on access to large, varied, and dynamic datasets. These datasets are essential for training and refining models. However, the DPDP Act and the Draft Rules introduce several obligations that may restrict this data flow. These include purpose limitation, data minimisation, mandatory erasure once the stated purpose is fulfilled, and prior notice before deletion. While these rules strengthen privacy protections, they also limit the flexibility needed in AI development. This is particularly challenging in iterative training environments, where models are refined continuously using historical or legacy datasets. As a result, developers may struggle to maintain the data continuity that AI systems often require, potentially slowing innovation or increasing operational complexity.
Under Rule 8, read with Section 8(7), personal data must be erased after its purpose is served, subject to narrow legal exceptions. In some cases, the Data Principal must also receive a 48-hour advance notice before the data is deleted. For AI developers, this presents a practical challenge. Many use historical data for tasks like incremental learning, fine-tuning, and bias detection. These processes require long-term access to datasets. As a result, strict deletion timelines and notice requirements may introduce friction into the development lifecycle.
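As a rough illustration of how these timelines interact in practice, the minimal Python sketch below models a retention workflow in which the Data Principal is notified once the stated purpose lapses and erasure is executed only after the 48-hour window has fully run. The function names and structure are assumptions for illustration, not a workflow mandated by Rule 8 or Section 8(7).

```python
# A minimal sketch (under assumed names) of a retention workflow that respects
# an erase-after-purpose rule plus a 48-hour advance notice before deletion.
from datetime import datetime, timedelta

NOTICE_WINDOW = timedelta(hours=48)

def on_purpose_fulfilled(record_id, now, send_notice):
    """Notify the Data Principal and return the earliest permissible erasure time."""
    send_notice(record_id, now + NOTICE_WINDOW)
    return now + NOTICE_WINDOW

def try_erase(record_id, now, earliest_erasure_at, erase):
    """Erase only once the 48-hour notice window has fully elapsed."""
    if now >= earliest_erasure_at:
        erase(record_id)
        return True
    return False

# Toy usage with stand-in notice and erasure functions
notify = lambda rid, when: print(f"notice: {rid} scheduled for erasure at {when}")
erase = lambda rid: print(f"erased: {rid}")
deadline = on_purpose_fulfilled("rec-42", datetime.now(), notify)
print(try_erase("rec-42", datetime.now(), deadline, erase))  # False: window not yet elapsed
```

Even in this simplified form, the friction for AI developers is visible: once the deadline passes, the record is gone, and any later fine-tuning, bias audit, or reproducibility check must proceed without it.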
Further complicating matters is the legal uncertainty around what qualifies as anonymised data. Indian law does not clearly define the boundary between personal and anonymised datasets. In practice, full anonymisation is rare. Most datasets use de-identification techniques, which are increasingly vulnerable to re-identification through advanced inference methods. Despite these risks, the Draft Rules do not specify any technical standards for anonymisation or pseudonymisation. This leaves developers unsure of whether their data practices fall within or outside the scope of regulatory obligations. In contrast, the GDPR offers more clarity. Recital 26 of the GDPR clarifies that data rendered anonymous in such a manner that the data subject is no longer identifiable falls outside the Regulation’s scope. Additionally, objective metrics — such as k-anonymity, which ensures each record is indistinguishable from at least k-1 others with respect to its quasi-identifying attributes, or differential privacy thresholds — help balance privacy with practical usability. Adopting similar guidance in the Indian framework would improve legal certainty. It would also support responsible innovation by helping developers assess compliance risks while working with complex datasets.
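To make the k-anonymity metric concrete, the short Python sketch below checks whether every combination of chosen quasi-identifiers appears in at least k records of a dataset. The column names and the threshold are hypothetical; this illustrates the metric itself and is not a standard endorsed by the Draft Rules or the GDPR.

```python
# Illustrative k-anonymity check: every combination of quasi-identifier values
# must appear in at least k records. Column names and k are hypothetical.
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """Return True if each quasi-identifier combination occurs in >= k records."""
    groups = Counter(
        tuple(record[attr] for attr in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())

# Toy dataset: the third record's quasi-identifier combination is unique,
# so the dataset fails 2-anonymity.
records = [
    {"age_band": "30-39", "pincode_prefix": "5600", "diagnosis": "A"},
    {"age_band": "30-39", "pincode_prefix": "5600", "diagnosis": "B"},
    {"age_band": "40-49", "pincode_prefix": "5601", "diagnosis": "A"},
]
print(satisfies_k_anonymity(records, ["age_band", "pincode_prefix"], k=2))  # False
```

A codified benchmark of this kind, whatever the precise metric chosen, would give developers a testable way to assess whether a dataset has moved outside the scope of personal data obligations.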
The Draft Rules also introduce the notion of algorithmic software scrutiny under Rule 12 for Significant Data Fiduciaries (“SDFs”), including the requirement to conduct annual Data Protection Impact Assessments (“DPIAs”) in high-risk contexts. However, the Rules do not elaborate on the risk assessment methodology, the types of AI systems that qualify as “significant,” or the thresholds that would trigger this classification. This lack of specificity may inhibit both compliance and enforcement. One way forward could involve borrowing from the high-risk AI classification under the EU AI Act, which considers use-cases (e.g., credit scoring, recruitment, biometric surveillance) and context (e.g., scale, public impact) as risk thresholds. Indian regulators could develop a tiered DPIA obligation calibrated to impact, where risk is evaluated based on four criteria: the degree of autonomy in the decision-making process, the reversibility of outcomes, the volume of individuals affected, and the sensitivity of data involved.
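One way to picture such a tiered DPIA obligation is a simple scoring exercise over the four criteria just mentioned. The Python sketch below assigns each criterion an illustrative 0-3 score and maps the total to a review tier; the scales, weights, and cut-offs are assumptions for illustration and do not appear in the Act, the Draft Rules, or the EU AI Act.

```python
# Illustrative sketch of a tiered DPIA calibration using the four criteria
# named above (autonomy, reversibility, scale, sensitivity). All scales and
# tier cut-offs are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    autonomy: int         # 0 (human decides) .. 3 (fully automated)
    irreversibility: int  # 0 (easily reversed) .. 3 (effectively permanent)
    scale: int            # 0 (handful of users) .. 3 (population-scale)
    sensitivity: int      # 0 (low) .. 3 (health, financial, children's data)

def dpia_tier(activity: ProcessingActivity) -> str:
    """Map an aggregate risk score to a proportionate assessment tier."""
    score = (activity.autonomy + activity.irreversibility
             + activity.scale + activity.sensitivity)
    if score >= 9:
        return "full DPIA with external review"
    if score >= 5:
        return "standard annual DPIA"
    return "lightweight self-assessment"

credit_scoring = ProcessingActivity(autonomy=3, irreversibility=2, scale=3, sensitivity=2)
print(dpia_tier(credit_scoring))  # "full DPIA with external review"
```

The value of a calibration of this sort is proportionality: low-stakes processing is not burdened with the full apparatus of annual assessments, while genuinely high-impact systems cannot escape scrutiny by pointing to definitional silence in the Rules.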
Equally underdeveloped is the Indian regime’s treatment of fully or semi-automated decision-making. Unlike the GDPR, which under Article 22 provides data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, the DPDP Act and Draft Rules contain no explicit restrictions or safeguards in this regard. While it may be premature to transplant Article 22 in its entirety, Indian law must grapple with the emerging reality of AI systems deployed in credit approval, recruitment, predictive policing, and social scoring. These applications implicate concerns of opacity, discrimination, and lack of recourse.
Rather than imposing a blanket prohibition, India could adopt a layered approach. For example, the law could require that decisions with legal or materially significant consequences, such as access to employment, loans, or public benefits, must either (a) include a meaningful human-in-the-loop process or (b) be subject to post-decision review and justification rights. This could be enforced through sectoral regulations, DPIA mandates, or specific duties for SDFs handling high-stakes decision-making systems. Such a framework would strike a balance between safeguarding informational self-determination and avoiding a regulatory chilling effect on AI-driven services. Instead of merely signalling caution, the Indian regime must articulate how rights, risks, and technological design interact in the context of automated systems. This will not only advance legal certainty for innovators but also reinforce public trust in the legitimacy of AI deployments.
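As an illustration of option (a), the minimal Python sketch below routes automated outputs with legal or materially significant consequences to a human review queue before they take effect, and records an audit trail from which a post-decision justification could later be produced. The class and function names are hypothetical assumptions, not requirements found in the DPDP Act or the Draft Rules.

```python
# Illustrative human-in-the-loop gating for significant automated decisions.
# All names are hypothetical; this is a sketch of the layered approach, not a
# prescribed compliance mechanism.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str            # e.g. "reject_loan"
    significant: bool            # legal or materially significant effect?
    final_outcome: Optional[str] = None
    audit_log: list = field(default_factory=list)

def finalise(decision: Decision, review_queue: list) -> Decision:
    """Hold significant decisions for human review; auto-finalise the rest."""
    if decision.significant:
        review_queue.append(decision)
        decision.audit_log.append("queued for human review")
    else:
        decision.final_outcome = decision.model_output
        decision.audit_log.append("auto-finalised (non-significant)")
    return decision

queue: list = []
loan_decision = finalise(Decision("DP-101", "reject_loan", significant=True), queue)
print(loan_decision.audit_log)  # ['queued for human review']
```

The audit log in this sketch also points toward option (b): even where review happens after the fact, a recorded basis for the decision is what makes justification and recourse rights meaningful.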
CONCLUSION
The DPDP Act, 2023, and the Draft DPDP Rules, 2025, represent a pivotal moment in India’s regulatory trajectory, marking the country’s shift toward a rights-based data governance framework. The Draft Rules provide much-needed operational clarity, especially in areas such as consent formats, security obligations, and the institutional structure of consent managers. However, as this article has demonstrated, several areas still require refinement to ensure that the regime is not only legally robust but also operationally viable and innovation-friendly. In particular, the regime’s open-ended approach to cross-border data flows, reliant on executive discretion through general or special orders, poses legal uncertainty for global businesses and risks deterring investment in AI infrastructure. Likewise, while the introduction of consent managers is promising, the financial and structural eligibility conditions currently proposed may inhibit market diversity unless a tiered or sandboxed entry model is considered.
On the AI front, the lack of clear thresholds for algorithmic audits, coupled with the absence of procedural safeguards for automated decision-making, raises questions about accountability in high-stakes contexts like recruitment and financial services. A calibrated framework — borrowing from the risk-based logic of GDPR’s DPIAs and Article 22, but adapted to India’s unique institutional context — could offer a middle path that respects individual rights while enabling responsible innovation. Finally, the retention and deletion framework, while grounded in strong privacy principles, may generate operational inefficiencies in data-heavy sectors like AI unless standards for anonymisation and exceptions for low-risk processing are more clearly defined. These challenges underscore the need for MeitY to adopt a context-sensitive, proportional approach in finalising the Rules — one that blends regulatory certainty with practical flexibility.
If India can harmonise legal ambition with institutional pragmatism — by refining the structure for cross-border transfers, clarifying the scope of AI regulation, and operationalising meaningful consent — the DPDP framework may serve as a regulatory model not only for India but for emerging digital economies across the Global South. The success of this framework will ultimately rest on its ability to balance trust and innovation, ensuring that the rights of Data Principals are protected without sacrificing the country’s aspirations to lead in global AI governance.
*Pushpit Singh is a TMT lawyer in Bengaluru, India. He has experience in dealing with commercial, technology, and data privacy matters. He is a BBA LLB graduate from Symbiosis Law School, Hyderabad (Symbiosis International University, Pune) and an Advanced Diploma Holder in Alternative Dispute Resolution from NALSAR, Hyderabad, India.
**Silvia Tomy Simon is a BA LLB graduate from Symbiosis Law School, Hyderabad (Symbiosis International University, Pune). She holds a certificate in Data: Law, Policy and Regulation from the London School of Economics and Political Science (LSE), UK. Her areas of interest revolve around technology, data privacy, and artificial intelligence.