
HKCNSA Interprets the Model Framework for Personal Data Protection in Artificial Intelligence (AI)

The Data Privacy Committee ("the Committee") of HKCNSA has been closely following the rapid development of artificial intelligence (AI) worldwide and the growing range of AI application scenarios. According to the Hong Kong Productivity Council, nearly half of the organizations in Hong Kong are expected to use AI this year, a 20% increase over last year. The same report noted that, compared with other emerging technologies such as cloud computing and the Internet of Things, AI poses a higher privacy risk.


To address the challenges that AI poses to personal data privacy and to give effect to China's Global Initiative on AI Governance, the Office of the Privacy Commissioner for Personal Data, Hong Kong ("the PCPD") released the "Model Framework for Personal Data Protection in Artificial Intelligence (AI)" ("the Model Framework") in early June. Building on the PCPD's "Ethical Guidelines for the Development and Use of Artificial Intelligence" published in 2021, the Model Framework offers internationally recognized recommendations and best practices to help Hong Kong organizations procure, implement, and use AI, including ensuring that generative AI complies with the Personal Data (Privacy) Ordinance ("the PDPO"), thereby fostering the development of innovative technology in Hong Kong and the Greater Bay Area.


The Model Framework is the first guidance framework in the Asia-Pacific region specifically developed for safeguarding personal data privacy in the field of AI. Its development has received strong support from the Office of the Government Chief Information Officer and the Hong Kong Applied Science and Technology Research Institute.


The Committee agrees that the development of AI does not conflict with the protection of personal data privacy under the PDPO. On the contrary, the Model Framework gives the industry concrete measures for applying AI solutions more safely and effectively. For instance, it recommends that organizations establish an AI governance committee, or a similar body, to guide and oversee the procurement of AI solutions and the use of those systems. The governance committee should also report AI system incidents to the board or internally, and raise any privacy or ethical concerns, to facilitate the monitoring of AI application scenarios.


The Model Framework covers recommended measures in four areas: (1) AI Strategy and Governance; (2) Risk Assessment and Human Oversight; (3) Customization of AI Models and the Implementation and Management of AI Systems; and (4) Communication and Engagement with Stakeholders, including internal employees, AI suppliers, and consumers.


Regarding "Developing AI Strategy and Governance," the Model Framework outlines seven steps for procuring AI solutions:

1) Identifying AI solutions;
2) Selecting suitable AI solutions;
3) Collecting and preparing data;
4) Customizing AI models for specific purposes;
5) Testing, evaluating, and validating AI models;
6) Testing and auditing the security and privacy risks of the system and its components;
7) Integrating AI solutions into the organization's systems.


In terms of risk assessment and human oversight, the Model Framework categorizes AI systems into three levels of human involvement, from low risk to high risk: "human out of the loop," "human-in-the-control-loop," and "human-in-the-loop." For example, a chatbot that only answers customers' most basic enquiries falls into the "human out of the loop" category, where the AI makes decisions without human intervention. By contrast, scenarios such as real-time identification using biometric personal data (e.g., facial recognition or iris scanning), AI-assisted medical image analysis or treatment, the evaluation of job applicants, job performance assessment, or the termination of employment contracts are considered higher risk. In these cases, a human decision-maker should retain control of the decision-making process to prevent or reduce AI errors; this is "human-in-the-loop."
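As a purely illustrative sketch (not part of the Model Framework itself), the risk-based choice of oversight mode described above can be expressed as a simple lookup. The function name, the three risk tiers, and the string labels are assumptions made here for illustration only:

```python
# Toy illustration of a risk-based approach to human oversight of AI systems.
# The tier names and the mapping below paraphrase the article's description;
# they are not an official categorization from the Model Framework.

def oversight_mode(risk_level: str) -> str:
    """Map an assessed risk level ('low', 'medium', or 'high') to an oversight mode."""
    modes = {
        "low": "human-out-of-the-loop",         # e.g., a chatbot answering basic enquiries
        "medium": "human-in-the-control-loop",  # humans monitor operation and can intervene
        "high": "human-in-the-loop",            # humans retain control of each decision
    }
    try:
        return modes[risk_level]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk_level!r}")

print(oversight_mode("low"))   # human-out-of-the-loop
print(oversight_mode("high"))  # human-in-the-loop
```

The point of the sketch is that the oversight mode is chosen per use case from an up-front risk assessment, not fixed once for the whole organization.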


The Committee notes that between August last year and February this year, the PCPD reviewed the use of AI tools at 28 organizations in Hong Kong. Most of them had assessed the relevant privacy risks; 10 collected personal data through AI, and no contraventions were found. As the PCPD continues to review the industry's use of AI at varying levels of risk, the Committee reminds organizations that AI should "assist," not "replace," human work. An AI system trained on insufficient or biased data may make incorrect or discriminatory decisions, and if the training data contains personal information, the system may inadvertently disclose it in its output. Organizations should therefore have sufficient manpower and large amounts of clean data when procuring and applying AI solutions, so that they can train and continuously correct their AI systems. They should also follow the recommended measures in the Model Framework and actively develop their AI strategy and governance, in particular by providing adequate AI training for internal staff, thereby safeguarding personal data privacy and ensuring the safe, ethical, and responsible application of innovative technology.
