How companies can assess artificial intelligence through an ESG lens
A tool to transform your business or a technology that threatens to disrupt it?
The buzz around artificial intelligence since ChatGPT launched last November -- and attracted 100m users in less than two months¹ -- has generated endless coverage and debate.
It has also left many management teams confused about whether to embrace AI and how to assess its risks and opportunities.
Use ESG as a lens, suggests Yannick Ouaknine, Head of Sustainability Research at Societe Generale: “The rate of adoption of AI and hence the rate of change will be, and already is, very rapid. But the process can be manageable for corporates if they think about it in a structured and analytical way.”
Starting with the environment, AI tools can help the energy sector improve electricity grid management, which is becoming more critical as intermittent renewable energy contributes an ever-larger slice of power generation. Agriculture can raise productivity through robotics, such as automated tractors; through crop monitoring, with drones checking fields for pests and diseases; and via predictive analytics that helps farmers improve yields.
More generally, companies can use AI to better monitor, and potentially lower, their greenhouse gas emissions: the EU Parliament’s Think Tank believes a 1.5-4% reduction of GHG emissions by 2030 is possible².
On the flipside, training a single AI model can emit more than 626,000 pounds of carbon dioxide, according to a study reported by MIT Technology Review³. And the rapidly expanding number of data centres that power cloud computing and AI require large amounts of energy and water to run and cool, placing additional strain on the environment.
Managing the social challenges
The social impact of AI is even more complex. It could create almost 100m new jobs, estimates the World Economic Forum⁴: why not retrain as a drone pilot or an algorithm compliance officer? At the same time, it could eliminate over 80m roles as much of manufacturing and many service functions are automated. This is already happening across sectors and across functions such as customer service and marketing, from chatbots answering questions to Netflix suggesting your next movie.
The net impact on jobs may therefore be less severe than feared, and the savings on offer to companies are tempting: McKinsey⁵ estimates that automation can improve efficiency by 40% and reduce operating costs by 30%. The key, notes Mr Ouaknine, is how such a transformation is implemented: “As they upskill and reskill their workers and technicians, corporates must build enough internal competency to not only manage this process but also to control and oversee the AI models they start to use.”
If they fumble, they may trigger resistance from an increasingly aware workforce, potentially more than offsetting AI productivity gains given tight labour markets and skills shortages. In extreme cases, they may even alienate customers or policymakers, putting their licence to operate at risk.
Just as serious is the risk of greenlighting AI-based services that management, especially senior executives, do not fully understand. Widespread use of automation and IoT sensors may open an enterprise up to cyberattack; the wrong diagnosis by a robo-doctor could kill; if an autonomous car knocks over a pedestrian, everyone from the ‘driver’ to the manufacturer to the technology provider may end up in court.
Embedding good governance
This is where ‘S’ and ‘G’ intersect. Getting the governance right at the level of individuals, corporates and governments alike will be critical in ensuring that AI is controlled well enough to gain broad social acceptance.
The heartening news is that, for once, regulators are on the front foot, says Mr Ouaknine, with countries around the world looking to promulgate rules to combat misinformation and bias, cushion job losses and minimise disruption. In broad terms, the EU is focused on high-risk uses, putting the burden of proof on companies and seeking to maintain its role as the global technology regulator.
In the US, by contrast, political gridlock has so far prevented much legislation, so the leading AI and tech companies are working on self-regulatory measures with limited enforcement mechanisms. In China, the emphasis is on algorithmic transparency, and providers of generative AI must register with the government, giving the state political control.
Meanwhile, companies need to put in place their own governance procedures while working to understand and comply with new regulations as these come into force. A simple first step is to make a senior director or executive responsible for overseeing AI, and to give that person the budget and the skilled team needed to do so properly. Learning about best practice from peers can also be useful, and testing AI models will generate valuable data.
However, companies should be careful not to launch or implement AI-based services and procedures they do not fully understand: for instance, how the underlying models learn and make recommendations. The temptation to harness this new tool will be strong. But doing so responsibly is key.
Sources:
1. “ChatGPT's explosive growth shows first decline in traffic since launch”, Reuters, July 2023
2. “Artificial intelligence: threats and opportunities”, EU Parliament Think Tank, 2020. https://www.europarl.europa.eu/pdfs/news/expert/2020/9/story/20200918STO87404/20200918STO87404_en.pdf
3. “Training a single AI model can emit as much carbon as five cars in their lifetimes”, MIT Technology Review, June 2019
4. “The future of jobs report”, World Economic Forum, April 2023
5. “A future that works: Automation, employment, and productivity”, McKinsey and Company, 2017
Learn more with Societe Generale ESG Thematic Insight on AI