More firms will build responsible AI tools, frameworks in 2024: Wipro’s Ivana Bartoletti 


The growing interest in artificial intelligence (AI) has led to concerns about its ethical use and transparency. As AI takes on decision-making tasks previously handled by humans, worries about fairness, bias, liability, and societal impact have grown. In an exclusive conversation with TechCircle, Ivana Bartoletti, Global Chief Privacy Officer of Indian IT major Wipro, emphasises the need for collaboration between businesses and government in addressing the challenges posed by data misuse, including misinformation and deepfakes. According to her, it is vital to establish a framework for governing, regulating, and guiding investments in these technologies. She also shares her insights on the most effective AI practices for CXOs in 2024 and on promoting greater representation of women leaders in the field of AI. Edited excerpts:

The conversation surrounding digital rights and privacy has undergone significant changes in recent years, particularly with the rise of AI. How is this impacting the business landscape?   

Businesses recognise the importance of privacy and data protection for success and customer trust. The increase in data collection, greater awareness of proper data handling, and the negative consequences of data misuse have shifted people's views on privacy. AI's rise has further fuelled the privacy conversation, as it relies heavily on data. Governments and companies are already developing responsible AI frameworks to address privacy concerns, and we can expect more of these in the coming months. Privacy is crucial in responsible AI because algorithms have the power to impact individuals' lives. By prioritising data protection, businesses can build trust and drive growth.

There is a lot of talk about safe AI practices nowadays, but very few have actually implemented them. Where do you see the gap?  

At Wipro, we prioritise safety in AI by focusing on responsibility. We have developed a comprehensive framework based on four pillars. First, the individual dimension emphasises privacy and security: we ensure our policies do not discriminate and are transparent about the data we use. Second, the technical dimension prioritises robust and safe AI systems; we embed security measures from the beginning to protect personal and company data. Third, the societal dimension focuses on compliance with laws and regulations to uphold privacy. Our clients trust our AI products because they know they were developed and handled responsibly. Lastly, the environmental dimension considers limiting the environmental footprint of AI systems. We promote the use of synthetic data and smart data processing to reduce costs and address data pollution. To ensure robust and safe AI, we establish clear policies, provide guidelines to developers, and foster a culture of monitoring.

In your opinion, how well are ML and AI models tested and validated?  

We have seen many cases over the last few years of AI models that have been deployed without enough scrutiny. For example, we have seen discriminatory algorithms that resulted from an improper choice of parameters or an unfettered use of data. An algorithm is a bundle of data, parameters and people, none of which is neutral. Without proper due diligence, algorithms simply code existing inequalities into decisions affecting people, locking them out of opportunities or services. So there must be due diligence before roll-out and monitoring afterwards. This applies to safety and robustness too – initiatives like red teaming for high-risk AI can be particularly useful.

Could you share some thoughts on the best AI practices for 2024? 

As we look ahead to 2024, it is clear that the OpenAI story has shed light on a crucial issue: the delicate balance between regulation and innovation. This tension can only be resolved through effective governance. In the coming year, we can expect to see a significant acceleration in best practices, codes of conduct, and new legislation emerging worldwide. With this in mind, companies will be taking proactive steps towards responsible AI implementation. One key aspect of this will be investing in upskilling their workforce and fostering collaboration between different teams, all in the pursuit of innovation with AI.

Furthermore, company sandboxes will grow in importance. These sandboxes serve as safe spaces for developing AI technologies in a responsible manner, ensuring compliance with privacy, security, and other legal requirements.

On a global scale, the governance debate will gain momentum, particularly with the United Nations' advisory group commencing its work. The demand for an agency with real authority will grow louder, pushing for concrete actions beyond mere summits and declarations of intent.

With the increasing popularity of platforms like ChatGPT, there is a growing fear that AI will eventually replace many jobs. What roles do companies and the government have in addressing this concern?  

There is no doubt that the world of work is changing, and it is important for people to start working alongside AI now. Companies should embrace AI as a tool to enhance productivity. This means providing training and support to their workforce. At Wipro, we are training our entire workforce on the responsible use of generative AI because we want our people to embrace these technologies, not fear them. Governments also have a role to play. They should assess the impact of AI and invest in digitalisation to create demand and combat exclusion.

My view is that the jobs at risk will be those of people who do not start leveraging these technologies now. I also believe that the role of government and business is to distinguish real use of AI from hype and ensure employees and citizens maintain a critical outlook. This is necessary as these models still hallucinate. People need to understand that there is a difference between retrieval of information and reasoning. The latter is not here yet, while the former can help us a lot in speeding up our processes and allowing us to focus on more creative and less repetitive tasks.

At Wipro, what conversations are taking place regarding the risks associated with the use of AI, and what actions are being taken to address these concerns?

We have been leading from the front on this and were among the first to introduce a policy for the responsible use and development of generative AI, and to immediately roll out a training programme for the whole workforce. We have a taskforce bringing together leaders from across the company and a governance model based on the three-lines-of-defense approach. We are bringing AI controls into our existing governance structure and adding new elements where necessary.

As I mentioned earlier, our four-pillar framework for responsible AI is based on our view of the risks AI may bring – from privacy to security, unfairness, opacity, the “softwarisation” of inequality and the environmental impact. We are conscious that if we want AI to benefit the world, we must cut through the hype and focus on AI that solves our most pressing problems.

Your initiative, 'Women Leading in AI,' emphasizes the importance of having more women in decision-making and leadership roles to shape the future of AI. Could you provide further insights on this topic? 

The lack of women in AI is a pressing issue that goes beyond technology. AI has the potential to revolutionise our lives and work, which is why it requires a diverse workforce to develop the necessary tools. Additionally, diverse leadership in companies and policy-making is crucial to determine the purpose and governance of AI. It's not just about addressing bias, which can arise at any stage of the AI lifecycle. It's also about the significant decisions that countries and companies make regarding AI strategies, plans, and governance. These decisions will shape the future, and diversity in decision-making rooms is essential. That's why we established the Women Leading in AI Network. AI encompasses a wide range of skills, from coding to policy-making, privacy engineering, ethics, and legal expertise. Therefore, there are no excuses – diversity is not only necessary but also achievable.

