Privacy 2023 Recap – Legislative Reform 2.0 and other developments

The march of legislative reform affecting privacy and AI continued during the past year. For privacy, the major development was Quebec’s Law 25, with its key elements coming into force in September. At the federal level, with Bill C-27, the government is seeking to replace PIPEDA[1] with a new private sector privacy law while at the same time introducing an oversight regime for high-impact artificial intelligence. Parliamentary committee hearings took place over the fall. A few significant court and regulatory cases were also reported. The government’s proposed Artificial Intelligence and Data Act (AIDA), which focusses on mitigating the risks inherent in “high-impact” AI systems, was only one of several developments in the constantly evolving AI space.

Quebec’s Law 25 – Consent Guidelines issued by the CAI

The most impactful new rules under Quebec’s Law 25 amending its Private Sector Privacy Law came into force on September 22, 2023.

These address requirements for comprehensive privacy policies and enhanced consent and transparency procedures (both now more closely aligned with PIPEDA and Bill C-27); impact assessments for new information processing projects and for cross-border transfers; transparency and disclosure obligations for the processing of data in automated systems; and an obligation to destroy information once its purpose has been achieved. Also introduced are Privacy by Design rules, minimum stipulations for service provider contracts, and a de-indexing right, as well as rules defining and governing the use of de-identified and anonymized information.

A key element of the Law 25 requirements is contained in the transparency and user-friendliness rules related to the collection of personal information. Following a five-month consultation, the Commission d’accès à l’information (CAI) issued its Guidelines for Valid Consent, which set out the CAI’s interpretation of the law’s more rigorous consent transparency and procedural requirements, as well as guidance on what it believes organizations need to do to comply with those rules. The Guidelines include certain interpretations extending beyond the express statutory language. In addition, they address the law’s requirement to obtain consent separately for different purposes – which will be particularly challenging for mobile phone interfaces seeking to collect user information, such as location data, for secondary uses.
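To make the separate-consent requirement concrete, the following is a minimal sketch of how purpose-separated consent might be captured in code. It is a hypothetical illustration only – the purpose wording, field names and structure are assumptions, not requirements drawn from Law 25 or the CAI Guidelines – but it shows the core idea: each purpose is accepted or declined on its own, rather than bundled into a single “I agree”.

# Hypothetical sketch of purpose-separated consent capture.
# Purpose wording, names and structure are illustrative assumptions,
# not requirements taken from Law 25 or the CAI Guidelines.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PurposeConsent:
    purpose: str   # a single, specific purpose described in plain language
    granted: bool  # each purpose is accepted or declined independently
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ConsentRecord:
    user_id: str
    purposes: list[PurposeConsent]

    def is_permitted(self, purpose: str) -> bool:
        """Check whether this specific purpose was individually consented to."""
        return any(p.purpose == purpose and p.granted for p in self.purposes)

# A user accepts the core service but declines a secondary use of location data.
record = ConsentRecord(
    user_id="user-123",
    purposes=[
        PurposeConsent("Provide the mapping service you requested", granted=True),
        PurposeConsent("Use your location data for personalized advertising", granted=False),
    ],
)
assert record.is_permitted("Provide the mapping service you requested")
assert not record.is_permitted("Use your location data for personalized advertising")

The design point is that consent is stored and checked per purpose: no single blanket flag stands in for all purposes, and a secondary use such as location-based advertising can be declined without affecting the primary service.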

INDU Committee hearings on Bill C-27 – the government’s proposed privacy reform and AI oversight laws

On June 16, 2022 the federal government introduced Bill C-27, the Digital Charter Implementation Act, 2022, enacting both the proposed Consumer Privacy Protection Act (CPPA), to replace PIPEDA, and its proposed AI oversight law, the Artificial Intelligence and Data Act (AIDA).

The Bill only received second reading in June of this year, and hearings at the parliamentary Standing Committee on Industry and Technology (INDU Committee) did not begin until late September. To date, the Committee has held fifteen sessions, hearing evidence from witnesses across the spectrum – including Innovation, Science and Economic Development Canada (ISED), the federal Privacy Commissioner, academics, privacy lawyers, industry associations, public interest advocacy groups and other interested stakeholders – with many of the hearings so far focussing on the CPPA segment of the Bill.

Many witnesses indicated support for the proposed privacy reform reflected in the Bill. A consistent theme was that the principle of privacy protection and the goal of innovation throughout the economy should not be in conflict. Various witnesses proposed amendments to the Bill that they considered improvements, including making privacy a fundamental right, enhancing protections for children, clarifying the rules regarding de-identification and anonymization, strengthening consent exceptions, enhancing accountability through privacy impact assessments, and expanding the powers of the Commissioner.

Early in the hearings, Minister Champagne tabled his own proposed amendments, which included making privacy a fundamental right, certain additional protections for children, and increased flexibility for the Commissioner in reaching compliance agreements. However, these proposed amendments did not address a number of the other items put forward by witnesses.

Some witnesses argued in favour of proceeding with the CPPA segment separately from the AIDA segment – either withdrawing the proposed AI legislation pending a substantial review and revisiting of its framework, or addressing the AIDA separately after completing the second reading review of the CPPA. At the end of November, Minister Champagne tabled detailed amendments to the AIDA at the INDU Committee, addressing a number of the criticisms of that proposed act – including, in particular, modifying the definition of an AI system to align with international standards and fleshing out what is intended by the term “high-impact system”.

PHAC case – OPC provides rules for collecting and using non-personal data

In the Report of its investigation into the collection and use by the Public Health Agency of Canada (PHAC) of mobile location data during the COVID-19 pandemic, released at the end of May,[2] the federal Office of the Privacy Commissioner (OPC) provided important guidance for organizations seeking to collect and use information about individuals that is anonymized to the extent that it is no longer considered personal information, and therefore falls outside the application of existing privacy laws.

Usefully, the Report will serve to inform the Committee’s review of Bill C-27.

The Report, entitled Investigation into the collection and use of de-identified mobility data in the course of the COVID-19 pandemic, examined whether mobile device data collected and used by PHAC in its response to the pandemic contained personal information as defined under the Privacy Act.  Specifically, it considered whether PHAC and its data providers implemented de-identification techniques and safeguards against re-identification deemed sufficient to reduce the risk of an individual being identified below the threshold of a “serious possibility”.  The “serious possibility” threshold is the rule articulated in relevant judicial guidance for determining whether data should be considered personal information or, conversely, whether it can be considered sufficiently de-identified to no longer be considered personal information.[3]
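For readers less familiar with these techniques, the following is a minimal, purely illustrative sketch of two common de-identification steps for mobility data – generalization (coarsening location precision) and small-cell suppression (dropping groups too small to hide in). The function names, precision level and cell-size threshold are assumptions for illustration; this is not the actual methodology used by PHAC or its data providers.

# Illustrative sketch: generalization plus small-cell suppression.
# Precision and thresholds are assumed values, not those used by PHAC.
from collections import Counter

def generalize(lat: float, lon: float, places: int = 2) -> tuple:
    """Round coordinates (roughly km-level precision) so exact locations are not kept."""
    return (round(lat, places), round(lon, places))

def aggregate_with_suppression(points, min_cell_size: int = 5) -> dict:
    """Count devices per generalized cell, suppressing cells below the minimum size."""
    counts = Counter(generalize(lat, lon) for lat, lon in points)
    return {cell: n for cell, n in counts.items() if n >= min_cell_size}

# Six pings in one area, two in another: after generalization and suppression,
# only the aggregate count for the sufficiently populated cell survives.
pings = [(45.5019, -73.5674)] * 6 + [(45.4215, -75.6972)] * 2
print(aggregate_with_suppression(pings))  # {(45.5, -73.57): 6}

The legal question under the “serious possibility” threshold is whether measures of this kind, combined with contractual and organizational safeguards against re-identification, reduce the risk of identifying an individual below that threshold.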

Based on its analysis, including taking account of accepted practices in this field, the OPC concluded that the threshold of a serious possibility of re-identifying the data was not met, and that the data as collected and used by PHAC was therefore non-personal and outside the scope of the Privacy Act.

Federal Court declines to order Facebook to change its privacy practices related to the Cambridge Analytica scandal

In April the Federal Court denied the OPC’s application for an order requiring Meta (formerly Facebook) to change the privacy policies and procedures that had led to the Cambridge Analytica data breach.[4] The court proceedings arose out of the joint investigation by the Commissioner and the BC Information and Privacy Commissioner into the Facebook/Cambridge Analytica scandal. That investigation focussed on the unauthorized collection and sharing of the personal information of more than 50 million users worldwide, including over 600,000 in Canada, for the purposes of targeting political messages.

The application was considered by the Court as a de novo proceeding, meaning that the Commissioner needed to establish a breach of PIPEDA with sufficient evidence in court and could not simply rely on the OPC’s determinations in its investigation.

The Court’s decision contains some problematic determinations regarding interpretation of PIPEDA, as well as the nature of evidence required on a court application to compel compliance with the Commissioner’s findings in any investigation under the law.

The OPC’s investigation was highly critical of Facebook’s policies and procedures regarding collection of personal information by social media apps and the sharing of that information.  In particular, it found that Facebook failed to obtain meaningful consent from app users and their friends for the purposes for which the information was used.

The Court referred to PIPEDA’s Consent Principle as well as section 6.1 of the statute – which together require, for consent to be valid, that it be reasonable to expect that the individual would understand the nature, purpose and consequences of the collection, use and disclosure of their personal information. The Principle also includes a provision obliging an organization to make a “reasonable effort” to ensure that an individual is advised of the purposes for which their consent is being sought.

The Court determined that, notwithstanding the evidence of the policies and procedures that was before it – evidence about which there was no disagreement among the parties – such evidence was not sufficient for it to rule that the reasonableness standard had not been met, indicating that there was nothing before it spelling out what Facebook failed to do to demonstrate that it made a reasonable effort.

In the Court’s view, the burden was on the OPC to establish, through appropriate evidence, that the standard had not been met. A plain reading of Facebook’s and the app’s policies and procedures (reproduced in the Court’s reasons) was not sufficient for the Court to conclude that they failed to represent a reasonable effort to inform users of the potential uses of their data.

By implication, the Court criticized the OPC’s investigation, suggesting that it did not have a sufficient basis for its determination that Facebook had breached PIPEDA.

Oversight of Artificial Intelligence (AI)

The federal government’s initiative to regulate certain AI systems under its proposed law, the AIDA, is only one of several significant regulatory and policy developments in the AI sphere over the past year. The drive for these initiatives can likely be attributed in large part to the onset of the “ChatGPT” phenomenon in the fall of 2022. The recent upheavals at OpenAI, the developer of ChatGPT, present in microcosm the policy issues and challenges facing regulators: should AI be constrained to ensure that it performs ethically and “for good”, or should it be allowed to develop with little or no restraint in order to maximize its potential and encourage innovation?

Many of the criticisms of the AIDA have been directed at the apparent haste with which the government was moving – coming forward with what has been characterized as only a skeletal framework, the details of which were to be worked out in due course, yet focussing on an incomplete scope of application. The ChatGPT phenomenon – using what is characterized as “generative AI” and falling more broadly under the rubric of “general-purpose” AI – was not clearly addressed in the Bill, but over the past year has become recognized as potentially the most impactful instance of the technology. A general-purpose AI system may be adopted and put to use in a variety of contexts. These advanced systems may be used to perform many different kinds of tasks – such as writing emails, answering complex questions, creating content, generating realistic images or videos, or writing software code.

Following a short consultation over the summer, the government has initially sought to establish ethical guidelines for generative AI through a voluntary code, its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The signatories of the Code undertake to address six key outcomes, specifically: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. Initial signatories include the Responsible Artificial Intelligence Institute, the Council of Canadian Innovators, research organizations such as the Vector Institute, and a number of industry parties active in AI, including Telus and BlackBerry. To date, none of the “Big Tech” players (e.g. Google, Meta, Amazon) have signed on.

In contrast to the government’s Voluntary Code, which seeks to establish guardrails only for generative AI systems, its proposed AIDA applies to all AI systems, including those that make predictions, decisions and determinations. However, as noted, the AIDA’s sole focus is on “high-impact” AI systems and the obligations of organizations to minimize the risks inherent in such systems.

One of the main criticisms of the current Bill C-27 version of the AIDA was that it left the definition of a high-impact system to regulations. A significant element of the Minister’s November proposed amendments is therefore an articulation of what initially will be considered high-impact systems – significant not only because it provides guidance to users regarding what is meant by “high-impact” but also because it gives a much clearer indication of the intended breadth of the law’s scope.

The proposed new definition will include seven classes of AI use, specifically: in human resources; in the provision of services to individuals; in relation to biometric information; in online content moderation on search engines, social media and other platforms; in health care; by a court or administrative body; and in law enforcement.

Other proposed amendments will revise and generalize the definition of AI to align with international standards such as those of the OECD and the EU’s new Artificial Intelligence Act, add specificity to the accountability measures required of users, and provide quasi-independent oversight powers and duties for the proposed Artificial Intelligence and Data Commissioner. Articulation of the Commissioner’s powers and duties was missing from the original version of the Bill, leading to the criticism that oversight would be exercised simply as an adjunct of ISED’s bureaucratic and policy functions, rather than by an independent regulator.

It remains to be seen whether the Minister’s amendments are sufficient to enable the AIDA to continue as part of the Bill when it is reported back by the INDU Committee for third reading.

Finally, and also significantly, the federal, provincial and territorial information and privacy commissioners have just published (December 7) a guidance document entitled Principles for responsible, trustworthy and privacy-protective generative AI technologies, focussing on the privacy implications of generative AI. The guidance document reviews privacy compliance considerations for generative AI technologies according to generally accepted privacy principles, including: legal authority (i.e. consent) to collect and use personal information, appropriate purposes, necessity and proportionality, transparency, and accountability.[5]

The guidance is directed to both developers and organizations using generative AI. Its significance lies in the message that privacy regulators are very much engaged in the ethical aspects of generative AI, with particular reference to the responsibility of users to identify and prevent risks to vulnerable groups, including children and groups that have historically experienced discrimination or bias.


For more information please contact: David Young – 416-968-6286 – david@davidyounglaw.ca

Note: The foregoing does not constitute legal advice. © David Young Law 2023

Read the PDF: Privacy 2023 Recap – Legislative Reform 2.0 and other developments


[1] Personal Information Protection and Electronic Documents Act.

[2] Investigation into the collection and use of de-identified mobility data in the course of the COVID-19 pandemic, Office of the Privacy Commissioner of Canada, May 29, 2023.

[3] Gordon v. Canada (Health), 2008 FC 258.  Information is personally identifiable if there is a serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information.

[4] Privacy Commissioner of Canada v. Facebook, Inc., 2023 FC 533.

[5] Tracking the CSA Code’s Privacy Principles as set out in Schedule 1 of PIPEDA.