In the age of ‘big data’, AI has become a ubiquitous technology. The AI market is forecast to grow rapidly, as is its use across many industries. AI is transforming economies and looks set to become something we increasingly depend upon; with that growth, however, comes greater urgency to deal with issues such as AI exacerbating inequality and the size of its carbon footprint.
The global AI market, valued at $454bn in 2022, is forecast in some predictions to grow to $1,875bn by 2030. If those forecasts hold, AI technologies will transform economies at a greater rate than before, ideally growing them by raising productivity and GDP potential. But in reality, productivity growth has been crawling since the mid-2000s.
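As a quick sanity check on those forecasts, here is a back-of-envelope sketch assuming the figures quoted above ($454bn in 2022, roughly $1,875bn in 2030); the implied compound annual growth rate comes out at around 19% per year:

```python
# Implied compound annual growth rate of the AI market,
# assuming the commonly cited forecast figures above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

rate = cagr(454, 1875, 2030 - 2022)
print(f"Implied CAGR: {rate:.1%}")  # roughly 19% per year
```

A growth rate of that order sustained for eight years is what makes the "exponential" framing of such forecasts plausible, and also why small changes in assumptions swing the 2030 figure enormously.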
Productivity growth is something nearly all economists agree is essential to improving living standards, yet AI has failed to lift growth out of stagnation. This is because AI has aimed to replace workers rather than extend their capabilities. The focus on replacement rather than improvement depresses the wages of most people whilst inflating those of the few with a near-monopoly on the AI market, leading to greater inequality. Daron Acemoglu, an MIT economist, argues that 50–70% of the growth in US income inequality between 1980 and 2016 is attributable to automation, even before the surge in AI.
This could put the future of AI in jeopardy. Whilst the focus on replicating human activities has produced remarkable technologies such as driverless cars, truly harnessing the potential of AI, transforming how we think about jobs and inventing groundbreaking new technology, requires a shift of focus from ‘automation to augmentation’.
Inequality is already a large problem in the UK without further automation. With stimulating customer demand forecast to be the single largest contributor to the UK’s economic gains between 2017 and 2030, and with the UK having the second-highest Gini coefficient in Western Europe, automation-fuelled inequality poses a serious threat. Managing it does not necessarily demand a regulatory approach, but rather deliberate choices about which technologies we invest in and fund.
Regulation could be used to mitigate the social and economic impact of AI, for instance by compensating people whose data is used to train algorithms that generate wealth for the few. It could also help shift the focus from ‘automation to augmentation’. However, AI regulation poses a challenge of its own: rigid rules sit uneasily with AI’s ever-evolving nature, so regulation risks rapidly becoming outdated and stifling innovation. The result would be rules that are either overly restrictive or worded so vaguely as to be non-restrictive but useless.
In the future, as many LICs become developed nations investing in their own quaternary sectors, these questions will arise again. As AI permeates their work more and more, will they, with a far larger demographic of young people than HICs, be left wondering what to do when their large workforces become unemployed, or will they be able to fully capture what AI offers to raise their quality of life?
In 2018 the UK had its highest employment rate in nearly 150 years, since 1872. Ensuring a high employment rate means ensuring the responsible and effective use of AI.
But worryingly, this is not yet the case. A recent report by McKinsey showed no substantial increase in organisations’ reported mitigation of AI-related risks over a period of three years. If AI is to continue to pave the path of humanity’s development, this is not the way forward.
And if AI is to pave humanity’s path, its role in global issues must be considered. One major problem for AI is the climate crisis. Whilst AI is a powerful tool against climate change and environmental damage, for example in modelling climate change and developing low-emission infrastructure, it is Janus-faced: training one large language model is estimated to emit between 300 and 625 metric tonnes of CO2. To put this into focus, the lower figure equals the total CO2 emissions of 125 round-trip flights between New York and Beijing. This cost comes from the colossal quantity of energy needed to train and run AI. Researchers at OpenAI have found that since 2012 the amount of computing power used in the largest AI training runs, which is closely tied to energy consumption, has doubled on average every 3.4 months. Electronic waste disposal is another huge problem, with harmful chemicals such as lead, mercury and cadmium contaminating soils and groundwater, endangering both the environment and human health. The WEF predicts that by 2050 e-waste will have surpassed 120mn metric tonnes.
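The scale of these figures is easier to grasp with a little arithmetic. This is a rough sketch using only the numbers quoted above, the lower training estimate of 300 tonnes of CO2 and the 3.4-month doubling time:

```python
# Back-of-envelope checks on the figures quoted in the text,
# assuming the lower training estimate (300 tCO2) and the
# reported 3.4-month doubling time for training compute.

per_flight = 300 / 125  # tonnes of CO2 per New York-Beijing round trip
print(f"{per_flight:.1f} tCO2 per round-trip flight")  # 2.4

# Doubling every 3.4 months compounds to roughly an 11-fold
# increase in compute, and loosely in energy demand, each year.
yearly_factor = 2 ** (12 / 3.4)
print(f"compute grows ~{yearly_factor:.0f}x per year")
```

An eleven-fold annual increase in compute is the crux of the problem: hardware efficiency gains improve far more slowly than that, so total energy use rises despite better chips.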
The way to tackle these problems lies in optimising the energy efficiency of hardware and algorithms, so that less energy is consumed without compromising performance. Excessive data collection must also be curbed, and considering the end-of-life of AI hardware during development is crucial. But convincing those who profit from data to stop collecting what they do not need, even without forcing them to stop all data collection, will most probably be a fruitless battle.
But the real problem lies in enforcing these solutions and limitations. If legislation is ruled out by its lack of adaptability, then something more temporary is needed; yet anything more temporary is too liable to change and lacks the reliability of legislation. What is required, therefore, is well-thought-out regulation that can weather most of the evolution of AI and remain useful.
This year there have been talks between the US and the UK on AI regulation. Let’s hope they produce substantial, reasoned regulation that prevents AI from causing damage and ensures its responsible and effective use.