The launch of ChatGPT, built on GPT-3.5 (Generative Pre-trained Transformer 3.5), has ended the year on a high for conversational AI, with many claiming it heralds a new era in dialogue-based AI. People are using ChatGPT for tasks ranging from correcting code errors to rewriting Bohemian Rhapsody, and the tool surpassed one million users in less than a week after its launch last month.
While 2022 was about newer and more advanced tools and models, commercial use cases, regulation, and standardisation of AI are expected to define 2023 for this domain. Here's what to expect from the AI industry in 2023.
Generative AI for businesses
Generative AI, artificial intelligence that can create text, images, videos, and other content with minimal human input, set the tone for this year, and the trend will spill over into 2023 as well.
It started with the public release of DALL·E 2 in July this year by AI research firm OpenAI, the same company that created the GPT models. A successor to the original DALL·E, released in 2021, DALL·E 2 generates images from text prompts.
What followed was a barrage of similar, more advanced tools for text-to-'anything' generation (image, video, speech, etc.). The most prominent examples include Midjourney, Stable Diffusion, and Google's Imagen, which let users create art simply by describing it in text. The trend went so far that an AI-generated artwork submitted by Jason Allen to the Colorado State Fair's fine arts competition won first place in August, sparking both controversy and excitement about the future of art.
Large companies like Adobe and stock image supplier Shutterstock are also taking notice. In October, Adobe announced that it will introduce more generative AI assistance in its apps. Shutterstock, meanwhile, announced a partnership with OpenAI the same month to integrate DALL·E into its platform for users worldwide. OpenAI's major partner, Microsoft, has also been leveraging tools like GPT-3 in its Office suite.
In 2023, the world will see more business use cases of this technology. On October 10, Gartner predicted that generative AI will improve the quality of digital product output and will account for 10% of all data produced by 2025.
Age of AIOps
First coined in 2016 by market research organisation Gartner, AI for IT operations (AIOps) has become an important cog in the Information Technology (IT) wheel today. As the term suggests, AIOps is the application of AI and machine learning to optimise and enhance IT operations. It offers advantages such as automated root-cause analysis, problem resolution, and incident management; performance monitoring; early identification of potential outages; and system-availability monitoring. An increasing number of organisations are planning and adopting AIOps implementation strategies.
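The anomaly-detection step at the heart of many AIOps pipelines can be illustrated with a toy sketch. The rolling z-score check below is an invented, minimal stand-in, not the method of any particular AIOps product, which typically use far more sophisticated models:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates from the preceding window
    by more than `threshold` standard deviations.

    A deliberately simple stand-in for the anomaly-detection stage
    of an AIOps pipeline (e.g. spotting a latency spike that may
    precede an outage).
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency around 100 ms, then a sudden spike
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 99, 101, 400]
print(detect_anomalies(latencies))  # → [10]: the spike is flagged
```

In a real deployment this signal would feed downstream automation, such as paging an on-call engineer or triggering a root-cause analysis workflow.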
A Coherent Market Insights report released this month has predicted that the global AIOps Platform market will reach $20.4 billion by 2025.
Accurate and relevant AI benchmarks
Standards for setting accuracy benchmarks are changing fast. Benchmarks that were relevant just a few years ago are now out of date. This is particularly true for emerging technologies like large language models and generative AI.
Work is underway to address this problem. For example, a team from Stanford University recently unveiled Holistic Evaluation of Language Models (HELM), a new benchmarking approach intended to serve "as a map for the world of language models." Organisations like DeepMind and NVIDIA have also developed use case-specific benchmarks and evaluation standards. This trend is expected to continue in 2023.
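HELM's core idea, scoring the same model across many scenarios rather than reporting a single headline number, can be sketched in a few lines. The scenarios, prompts, and "model" below are invented for illustration, and real HELM evaluations also score calibration, robustness, fairness, bias, toxicity, and efficiency, not just accuracy:

```python
def evaluate(model_fn, scenarios):
    """Score a model's accuracy on each scenario separately,
    then report a macro average across scenarios.

    model_fn:  callable mapping a prompt string to an answer string.
    scenarios: dict of scenario name -> list of (prompt, expected) pairs.
    """
    results = {}
    for name, examples in scenarios.items():
        correct = sum(model_fn(p) == expected for p, expected in examples)
        results[name] = correct / len(examples)
    # Macro average: every scenario counts equally, regardless of size
    results["macro_average"] = sum(results.values()) / len(scenarios)
    return results

# A trivial lookup-table 'model' and two toy scenarios
toy_model = {"2+2": "4", "capital of France": "Paris"}.get
scenarios = {
    "arithmetic": [("2+2", "4"), ("3+3", "6")],
    "knowledge": [("capital of France", "Paris")],
}
print(evaluate(toy_model, scenarios))
# → {'arithmetic': 0.5, 'knowledge': 1.0, 'macro_average': 0.75}
```

Reporting a per-scenario breakdown rather than one aggregate score is what lets a benchmark reveal where a model is strong and where it fails.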
AI governance and regulation
Forrester’s report on predictions for 2023, released in October, stated that with the rising demand for trust in AI, one in four CIOs and CTOs would lead AI governance practices for their organisations. The report also said that AI governance’s scope will widen to include topics like cybersecurity and compliance.
While AI has been developing at breakneck speed, governance and regulation have failed to keep pace. However, with growing public awareness and tightening scrutiny from authorities, companies are slowly starting to implement better practices.
The US government, for instance, released a blueprint for an AI Bill of Rights to guide regulation of this technology and its applications. The EU, which already enforces GDPR, also unveiled the AI Liability Directive this year to make it easier to hold companies accountable for harm caused by AI models and systems. Further, NITI Aayog, the policy think tank of the government of India, released a discussion paper titled 'Responsible AI for All' suggesting that organisations deploying AI systems constitute internal committees to assess the ethical implications of the decisions made by these models.