ChatGPT Still Fails to Meet Data Accuracy Standards Set by EU Data Protection Board

A task force at the EU's privacy watchdog has said that OpenAI's efforts to reduce factually inaccurate output from its ChatGPT chatbot are not sufficient to ensure full compliance with EU data protection rules.

In a report posted on its website on Friday, the task force stated that "although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle."

The body that unites Europe's national privacy watchdogs set up the ChatGPT task force last year, after national regulators, led by Italy's authority, raised concerns about the widely used artificial intelligence service.

OpenAI did not immediately respond to a Reuters request for comment.

The report noted that because several investigations launched by national privacy watchdogs in various member states are still underway, it could not yet give a detailed account of their findings. Its conclusions were instead to be read by national authorities as a "common denominator."
Data accuracy is one of the guiding principles of the EU's data protection rules. In practice, the report stated, the current training approach produces a model that may yield biased or fabricated outputs because of the probabilistic nature of the system. Moreover, end users are likely to take ChatGPT's outputs as factually accurate.

Source: Reuters.
