Digital Personal Data Protection Act, 2023: An Interpretative Overview for Professionals

The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) represents a significant development in India’s legal framework governing the processing of personal data. As organisations increasingly rely on digital systems for accounting, taxation, compliance, customer management, analytics, and automation, personal data has become deeply embedded in everyday business operations. In this context, the protection of personal data is no longer a purely technological or IT-led concern, but an essential aspect of organisational responsibility and trust.

From a professional perspective, the DPDP Act requires an interpretative understanding rather than a narrow, checklist-driven approach. Its implications extend well beyond statutory compliance to areas such as corporate governance, internal controls, risk management, and audit assurance. Organisations are therefore expected to assess not only whether they comply with the letter of the law, but also whether appropriate governance frameworks, processes, and accountability mechanisms exist for responsible data handling.

This article provides an interpretative overview of the DPDP Act, 2023, and briefly explains the role of the Rules introduced in 2025, with a focus on aspects most relevant to professional and organisational practice.

Legislative Background and Scope

The DPDP Act is grounded in the Supreme Court’s recognition of the right to privacy as a fundamental right (Justice K.S. Puttaswamy v. Union of India, 2017). This landmark judgment established the constitutional basis for a comprehensive data protection regime in India, paving the way for legislation that balances individual rights with legitimate business and state interests.

Enacted in August 2023, the DPDP Act establishes a unified and nationally applicable framework for the processing of digital personal data in India. It seeks to replace fragmented and sector-specific practices with a consistent approach that applies across industries and organisational sizes.

The Act applies to:

  • Processing of digital personal data within India, and
  • Processing outside India where such data relates to individuals in India.

By adopting a principle-based approach, the legislature has focused on accountability and proportionality, rather than prescriptive compliance checklists. This provides organisations with flexibility in implementation, while placing the responsibility on them to demonstrate that data is handled lawfully and responsibly.

Key Concepts under the DPDP Act

The DPDP Act introduces foundational terms that carry significant governance implications and help clarify roles and responsibilities within the data ecosystem.

  • Data Principal – the individual to whom personal data relates, such as customers, employees, vendors, or users.
  • Data Fiduciary – the entity that determines the purpose and means of processing personal data.

This framework emphasises that organisations act as custodians of personal data, rather than owners. Personal data is therefore held in trust, and organisations are expected to exercise care, transparency, and accountability in how such data is collected, processed, and retained.

Consent and Lawful Processing

Consent forms the primary basis for lawful processing under the DPDP Act. Such consent must be free, specific, informed, unconditional, and unambiguous, and must relate to a clearly defined and lawful purpose. Importantly, the Act also requires that consent be capable of being withdrawn, reinforcing individual control over personal data.

Although the Act recognises limited circumstances where processing may occur without consent—such as compliance with legal obligations—these situations are narrowly defined. Organisations must therefore design processes that ensure consent is not only obtained properly, but also documented, tracked, and honoured throughout the data lifecycle.

For professionals and organisations, this creates expectations similar to internal control documentation, where consent records, purpose limitation, and withdrawal mechanisms must be demonstrable, auditable, and consistently applied across systems.
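To make the idea of demonstrable, auditable consent concrete, a minimal sketch of a consent record follows. The Act does not prescribe any data model; every field and method name here is an assumption chosen purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical audit record for one consent grant (illustrative only)."""
    principal_id: str                      # the Data Principal the consent belongs to
    purpose: str                           # the specific, lawful purpose consented to
    granted_at: datetime                   # timestamped for auditability
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Record withdrawal; processing for this purpose must then cease."""
        if self.withdrawn_at is None:
            self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_active(self) -> bool:
        """A consent is valid only until it is withdrawn."""
        return self.withdrawn_at is None
```

The point of the sketch is not the code itself but the control objective: each grant is tied to one purpose, carries timestamps, and withdrawal is recorded rather than overwritten, so the full consent lifecycle remains demonstrable.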

Rights of the Data Principal

The DPDP Act grants enforceable rights to individuals, strengthening their ability to exercise control over their personal data. These include:

  • The right to access information relating to personal data
  • The right to correction and erasure
  • The right to grievance redressal

These rights impose operational responsibilities on organisations to maintain systems and processes that enable timely responses, ensure data accuracy, and track actions taken. Inadequate handling of such requests may indicate governance and control deficiencies and may also undermine stakeholder trust.
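As one way of tracking such requests operationally, the sketch below logs each rights request with timestamps and flags overdue items. The Act and Rules establish the rights, not any particular implementation; the field names and the 30-day internal service window are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class RightsRequest:
    """Hypothetical tracker for one Data Principal request (illustrative only)."""
    principal_id: str
    kind: str                              # e.g. "access", "correction", "erasure", "grievance"
    received_at: datetime
    resolved_at: Optional[datetime] = None

    def is_overdue(self, now: datetime,
                   sla: timedelta = timedelta(days=30)) -> bool:
        """Flag requests still open past an assumed internal service window."""
        return self.resolved_at is None and now - self.received_at > sla
```

A periodic sweep over such records gives the "timely response" evidence the paragraph above refers to: open requests, ageing, and resolution times all become reportable.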

Significant Data Fiduciaries

Certain entities may be classified as Significant Data Fiduciaries (SDFs) based on the volume and sensitivity of personal data processed, or the potential risk posed to individuals. This classification reflects the principle that higher-risk data processing should be subject to enhanced safeguards.

SDFs are subject to additional obligations, including the appointment of a Data Protection Officer and the conduct of Data Protection Impact Assessments. These measures are intended to embed privacy considerations into organisational decision-making and to proactively identify and mitigate data-related risks.

This risk-based differentiation aligns with established governance and assurance principles and mirrors global best practices in data protection regulation.
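Purely to illustrate how a risk-based screen might work internally, the function below applies rule-of-thumb thresholds. The actual SDF designation is made by the Central Government, and the volume threshold and risk-score cut-off here are invented for the example.

```python
def classify_fiduciary(record_volume: int,
                       processes_sensitive_data: bool,
                       risk_score: float) -> str:
    """Illustrative internal screen for whether an entity *might* warrant
    SDF-level safeguards; all thresholds are hypothetical assumptions."""
    if record_volume > 1_000_000 or (processes_sensitive_data and risk_score >= 0.7):
        return "potential-SDF"
    return "standard"
```

An organisation running such a screen would still treat the output only as a prompt to prepare enhanced safeguards (DPO, impact assessments), not as a legal determination.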

DPDP Act, 2023 and the Role of the Rules Introduced in 2025

While the DPDP Act was enacted in 2023, its implementation is supported by delegated legislation in the form of Rules. In 2025, the Government released draft Digital Personal Data Protection Rules to operationalise the Act and provide procedural clarity.

It is important to clarify that:

  • The DPDP Act, 2023 remains the principal law.
  • The Rules do not amend or replace the Act.
  • The Rules specify procedural and operational requirements for compliance.

In effect, the Act defines what must be complied with, while the Rules outline how compliance is to be achieved. This distinction is well recognised in Indian regulatory practice and helps organisations translate legal principles into practical, implementable processes.

Penalties, Governance, and Professional Implications

The DPDP Act provides for significant monetary penalties in cases of non-compliance, particularly for failures relating to data security safeguards and personal data breaches. These penalties underscore the seriousness with which data protection obligations are viewed under the law.

However, for organisations, reputational impact and stakeholder trust often present greater risk than financial penalties alone. A data protection failure can affect customer confidence, business relationships, and regulatory standing.

From a governance perspective, the Act has direct implications for:

  • Internal control assessment
  • Risk management frameworks
  • Vendor and outsourcing oversight
  • Board and audit committee reporting

Organisations are increasingly expected to treat data protection as a board-level governance issue rather than an isolated compliance function.

Conclusion

The Digital Personal Data Protection Act, 2023 represents a significant step towards accountable and responsible data governance in India. For organisations and professionals alike, the Act reinforces the importance of trust, transparency, and sound governance in a digital economy.

An interpretative understanding of the Act—supplemented by awareness of the evolving Rules introduced in 2025—is essential for effective implementation. Viewed holistically, DPDP compliance should be regarded not merely as a legal requirement, but as an integral component of good corporate governance and sustainable business practice.


Implementing Trustworthy AI: A Practical View of ISO/IEC 42001:2023

Artificial Intelligence is no longer experimental or limited to tech teams. Today, it influences how businesses make decisions, interact with customers, automate operations, and extract insights from data. As AI becomes part of everyday business workflows, one question keeps coming up: how do we make sure AI is used responsibly?

This is where governance becomes essential. Without clear guardrails, AI systems can quietly introduce bias, make decisions that are hard to explain, or expose organizations to compliance and reputational risks.

To address this growing need, ISO/IEC 42001:2023 introduces a dedicated management system for Artificial Intelligence. Instead of focusing only on technology, the standard looks at how AI should be governed—covering people, processes, and oversight—so that AI systems remain ethical, safe, and transparent throughout their lifecycle.

More importantly, ISO/IEC 42001 provides a common language for AI governance. It helps organizations move from ad-hoc controls to a structured and auditable approach, where accountability and trust are built into AI operations from the start.

What is ISO/IEC 42001:2023?

ISO/IEC 42001:2023 is the first international standard created specifically to help organizations manage AI systems through an AI Management System (AIMS). It applies whether an organization is building AI models in-house, using third-party AI tools, or relying on AI features embedded in enterprise software.

Rather than prescribing how to build AI, the standard focuses on how AI should be governed across its lifecycle—from design and deployment to monitoring and improvement.

Key areas covered by the standard include:

  • Reducing bias and promoting fairness in AI outcomes
  • Improving transparency and explainability of automated decisions
  • Ensuring data quality and reliability
  • Managing safety, security, and system resilience
  • Addressing privacy and data protection concerns
  • Defining human oversight and accountability
  • Continuously monitoring AI performance and risks

Because of this broad scope, ISO/IEC 42001 is relevant to organizations of all sizes and across industries.

Why AI Governance Matters Today

As AI adoption increases, so do the risks that come with it. When AI systems are not properly governed, organizations may face challenges such as:

  • Biased or unfair decisions that impact customers or employees
  • Black-box models that no one can fully explain
  • Privacy breaches or misuse of sensitive data
  • Gaps between AI usage and regulatory expectations
  • Operational failures caused by unstable or poorly monitored models
  • Loss of trust among users, regulators, and stakeholders

AI governance is no longer just a technical concern—it is a business and leadership responsibility. ISO/IEC 42001:2023 helps organizations address these issues by setting clear expectations for how AI should be managed responsibly.

Preparing for ISO/IEC 42001: Key Steps for Organizations

Organizations looking to align with ISO/IEC 42001 do not need to start from scratch. The journey typically begins with a few practical and achievable steps.

1. Identify and Classify AI Systems

Start by listing all AI applications used across the organization, including internal tools, vendor solutions, and embedded AI features.

Once identified, classify them based on their purpose, business impact, and potential risk.
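A minimal sketch of such an inventory entry is shown below. ISO/IEC 42001 does not mandate any particular schema; the record fields and the tiering rule are assumptions chosen to mirror the classification criteria just described.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical inventory entry for one AI application (illustrative only)."""
    name: str
    source: str               # e.g. "in-house", "vendor", "embedded"
    purpose: str
    business_impact: str      # e.g. "low", "medium", "high"
    affects_individuals: bool # does its output affect customers or employees?

def risk_tier(system: AISystem) -> str:
    """Toy tiering rule: high impact or any effect on individuals
    escalates the system to the top tier."""
    if system.business_impact == "high" or system.affects_individuals:
        return "high"
    return "low" if system.business_impact == "low" else "medium"
```

Even a simple registry like this gives governance teams a single list to drive the later steps: risk assessment, ownership assignment, and monitoring scope.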

2. Assess Risks and Impacts

For each AI use case, evaluate risks such as bias, lack of explainability, data privacy concerns, and operational dependency.

This helps determine where stronger controls or human oversight may be needed.

3. Define Ownership and Accountability

Clearly assign responsibility for AI systems, covering areas such as development, approval, monitoring, and escalation.

This ensures AI decisions are not “ownerless” and can be challenged or reviewed when needed.

4. Establish AI Policies and Guidelines

Develop or refine policies that define acceptable AI use, data handling practices, and ethical expectations.

These policies should align with ISO/IEC 42001 and integrate with existing governance frameworks.

5. Monitor, Review, and Improve

Set up ongoing monitoring to track AI performance, risks, and unintended outcomes over time.

Regular reviews help ensure AI systems continue to behave as expected as data, models, and contexts change.
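As one simple example of such a check, the function below flags a model for human review when measured performance drifts beyond a tolerance from its deployment baseline. The metric, the default tolerance, and the function name are all assumptions; real monitoring would track multiple metrics and unintended outcomes.

```python
def needs_review(baseline_accuracy: float,
                 current_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Flag a model for review when accuracy falls more than an assumed
    tolerance below the level recorded at deployment."""
    return (baseline_accuracy - current_accuracy) > tolerance
```

Run on a schedule against fresh evaluation data, even a check this small turns "monitor AI performance over time" from a policy statement into an operational, auditable control.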

6. Build Awareness Across Teams

Train employees involved in AI development, deployment, and decision-making on responsible AI practices.

Creating awareness ensures governance is not limited to compliance teams but shared across the organization.

Conclusion

AI has the potential to deliver enormous value, but only when it is deployed with care and accountability. ISO/IEC 42001:2023 offers a practical framework for organizations that want to move beyond informal controls and adopt a structured approach to trustworthy AI.

By following the principles of this standard, organizations can improve transparency, reduce AI-related risks, and show regulators, customers, and partners that they take responsible AI seriously. In an era where trust matters as much as innovation, strong AI governance is becoming a true competitive advantage.
