Google’s AI-powered chatbot, Bard, is reportedly getting better at logic and reasoning tasks.
According to the company's official blog post, Bard now incorporates a new technique called "implicit code execution." This approach enables the system to recognise and execute code in the background, leading to more accurate responses to prompts involving string manipulation, coding, and mathematical operations.
As per the blog post, Bard will not rely solely on large language models, which are primarily suited to predictive tasks rather than complex reasoning. The system is expected to detect instances where additional processing could enhance performance and to generate supplementary code to improve accuracy. Google says the latest update has increased Bard's accuracy on computation-based word and math problems by 30 percent, as measured on the company's internal challenge datasets.
Bard is expected to offer a wide range of capabilities, including the ability to provide information on prime factors of numbers in the millions, growth rates of savings, and even the reverse spelling of words such as "lollipop." These features are expected to strengthen the credibility of AI technology among critics, positioning it as a powerful and indispensable tool, Google said.
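Tasks like these are trivial for conventional code even when they trip up a purely predictive model. A minimal Python sketch of the kinds of computations mentioned above, prime factorisation of a number in the millions and reversing a word like "lollipop" (the function names are illustrative, not Bard's internals):

```python
def prime_factors(n: int) -> list[int]:
    """Return the prime factors of n in ascending order, with repetition."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

def reverse_word(word: str) -> str:
    """Reverse a string exactly -- easy for code, error-prone token by token."""
    return word[::-1]

print(prime_factors(9699690))     # a number in the millions
print(reverse_word("lollipop"))   # -> "popillol"
```

Running small deterministic routines like these, rather than predicting the answer text directly, is the accuracy gain the update is aiming for.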
"Our new method allows Bard to generate and execute code to boost its reasoning and math abilities. This approach takes inspiration from a well-studied dichotomy in human intelligence, notably covered in Daniel Kahneman's book 'Thinking, Fast and Slow': the separation of 'System 1' and 'System 2' thinking," the blog said.
The recently announced update merges the abilities of large language models (LLMs) and traditional code, which the company likens to System 1 and System 2 respectively. System 1 thinking is fast, intuitive, and effortless, while System 2 thinking is deliberate, effortful, and slow. This integration aims to enhance the accuracy of Bard's responses.
While chatbots like Bard and ChatGPT rely on a wide variety of text samples from the internet, books, and other sources for their training, code-generating models like GitHub Copilot and CodeWhisperer rely entirely on code samples for their training and fine-tuning.
Google created implicit code execution to help Bard write and run its own code, motivated by the need to overcome general LLMs' limitations in coding and mathematics. The most recent version of Bard can detect questions that would benefit from logic code, create that code "under the hood," test it, and then utilise the outcome to produce a more accurate response, said Google.
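Google has not published the internals of implicit code execution, but the flow it describes, detect a computational question, generate code in the background, run it, and fold the result into the reply, can be sketched roughly as follows. Everything here (the keyword detector, the template-based generator, the sandbox) is an illustrative assumption, not Bard's actual design:

```python
import re

def needs_code(prompt: str) -> bool:
    """Crude heuristic stand-in for detecting a computational question."""
    triggers = ["prime factors", "reverse", "how many", "growth rate"]
    return any(t in prompt.lower() for t in triggers)

def generate_code(prompt: str) -> str:
    """Stand-in for the LLM writing helper code 'under the hood'.
    Only the string-reversal case from the article is sketched."""
    if "reverse" in prompt.lower():
        word = re.search(r"'(\w+)'", prompt).group(1)
        return f"result = {word!r}[::-1]"
    raise NotImplementedError("only string reversal is sketched here")

def run_sandboxed(code: str) -> str:
    """Execute generated code in a restricted namespace; return 'result'."""
    namespace: dict = {}
    exec(code, {"__builtins__": {}}, namespace)
    return namespace["result"]

def answer(prompt: str) -> str:
    """Route computational prompts through generated code; otherwise
    fall back to ordinary (predictive) generation."""
    if needs_code(prompt):
        result = run_sandboxed(generate_code(prompt))
        return f"The answer is {result}."
    return "(fall back to ordinary LLM generation)"

print(answer("Please reverse the word 'lollipop'."))
```

The key design point mirrored here is that the model's text reply quotes the *executed* result rather than predicting the answer string directly, which is where the claimed accuracy gain comes from.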