Artificial Intelligence Chatbots and Personal Data Protection

The Personal Data Protection Authority (“Authority”) published an Information Note on Chatbots (Example: ChatGPT) (“Information Note”) on its official website on November 8, 2024.

Preamble

AI-powered chatbots are defined as software that can simulate human conversations to communicate with end users. Chatbots like ChatGPT, Siri, Alexa and Gemini provide quick solutions in various fields such as customer support, information retrieval, content creation, coding and translation. However, these systems pose significant risks to personal data security due to their need to process large amounts of data to achieve high performance.

Definition and Operation of Chatbots

Chatbots utilize natural language processing (NLP) technology, along with sub-technologies such as natural language understanding (NLU) and natural language generation (NLG), to understand and execute directives provided by users. Users communicate with these systems through voice or text-based methods. The nature of the services offered by chatbots may vary depending on their intended purpose and application.

Processing of Personal Data

AI-powered chatbots can process various personal data such as users’ names, contact details, social media accounts, and IP addresses for the following purposes:

  • Providing, managing, and improving services
  • Enhancing the user experience
  • Ensuring the security of information technology systems
  • Fulfilling legal obligations
  • Protecting the legitimate interests of data controllers

Transparency is crucial for personal data security in chatbots. According to the Information Note, users must be clearly informed before their data is collected about how and for what purposes their data will be used, with whom it will be shared, and how long it will be retained. This is a critical step in increasing users’ control over their personal data.
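The disclosures listed above — purposes, recipients, and retention period, presented before collection — can be modeled as a simple checklist. The following is a minimal illustrative sketch, not a compliance tool; the class and field names are assumptions chosen for this example and do not come from the Information Note.

```python
from dataclasses import dataclass

@dataclass
class PrivacyNotice:
    """Hypothetical record of the disclosures a chatbot presents
    before collecting personal data (field names are illustrative)."""
    purposes: list[str]          # how and for what purposes data is used
    recipients: list[str]        # with whom the data will be shared
    retention_period: str        # how long the data will be retained
    shown_before_collection: bool = False  # notice displayed before intake

    def is_complete(self) -> bool:
        """True only if every disclosure is present and the notice
        was shown to the user before any data was collected."""
        return bool(
            self.purposes
            and self.recipients
            and self.retention_period
            and self.shown_before_collection
        )
```

A chatbot could refuse to start collecting input until `is_complete()` returns true, making the transparency check an explicit gate rather than an afterthought.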

Considering that chatbots may be used by minors, age verification mechanisms should be implemented and proactive measures taken to prevent potential negative experiences.

Data Security and Legal Compliance

During the development of chatbots, the following considerations should be taken into account:

  1. Risk Assessment: Risks should be analysed before personal data is processed.
  2. Accountability Principle: Applications should be designed and implemented in accordance with the accountability principle.
  3. Lawful Data Processing: The processing of personal data should be based on the general principles and legal grounds outlined in the Personal Data Protection Law. The legal basis for the processed data should be explicitly stated.
  4. Disclosure Obligation: Under Article 10 of the Personal Data Protection Law, data controllers must fulfil their obligation to inform data subjects at the time their data is collected.
  5. Technical and Administrative Measures: Necessary technical and administrative measures should be taken to ensure data security, compliance with international standards should be ensured, and relevant certifications should be obtained.

Conclusion

Artificial intelligence-powered chatbots provide significant convenience in addressing user needs, but they also require compliance with legal regulations regarding data processing and the assurance of data security. Developers, manufacturers, and service providers should prioritize the protection of personal data by adhering to the recommendations outlined in the Information Note and act in accordance with the relevant legislation. This approach will not only enhance user trust but also ensure legal compliance.

Hande Alp