OpenAI to allow users to customise ChatGPT
US-based tech firm OpenAI’s generative artificial intelligence (AI) tool, ChatGPT, will soon get an upgrade that will allow users to customise the service to their own needs and preferences, the company confirmed in a blog post on Thursday. While it is unclear exactly how these custom versions of ChatGPT will work, OpenAI is looking to accommodate “more diverse views”, enabling the generative platform to produce responses that “other people may strongly disagree with”.
The upgrade is part of a broader set of changes to the platform under which OpenAI’s engineers, researchers and reviewers will work to address issues of bias in its responses.
“We believe that AI should be a useful tool for individual people, and thus customisable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customise its behaviour. This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging — taking customisation to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs,” the blog post stated.
Alongside user customisation, the changes include research investments to reduce bias, as well as efforts to address instances where ChatGPT makes up plausible-sounding statements that are factually incorrect. The company also published a list of guidelines on how it handles political content, in order to “give more insight into how we view this critical aspect of such a foundational technology.”
The upgrade comes as questions have been raised about factual accuracy, the tone of responses, and bias on politics, culture and gender on generative AI platforms such as ChatGPT. Generative AI, to be sure, refers to algorithms that are trained on billions of data points of a language in order to gain context and understanding of it, and that in turn read questions written in plain text by users to generate human-like responses.
ChatGPT is part of the underlying technology for Microsoft’s Prometheus Model, the AI model that powers the new conversational search service on its search engine, Bing. The latter has been flagged for producing numerous errors in its responses, which eventually pushed Microsoft to publish a blog post explaining why Bing gets “confused” in long conversations, and how its awareness of current affairs can be limited.
Fellow generative AI platform Google Bard also got off to a rocky start, with parent company Alphabet losing over $100 billion in market value after an erroneous response generated by the platform. ChatGPT has so far been the most popular example of generative AI, amassing over 100 million users around the world since it was opened to the public in November last year. However, it too has had numerous issues with its responses, including failing to identify whether personalities such as Adolf Hitler and John F. Kennedy were present at their own assassination attempts.