Exploring different ways to approach sustainability in the field of artificial intelligence.
Both ‘sustainability’ and ‘artificial intelligence’ can be hard concepts to grapple with. I do not believe I can pin down two incredibly complex terms in one article. Rather, I think of this as a short exploration of different ways to define sustainable artificial intelligence (AI). If you have comments or thoughts, they would be very much appreciated.
These thoughts come after a discussion on Sustainable AI that I moderated on the 21st of May as part of my role at the Norwegian Artificial Intelligence Research Consortium. I also wanted to do some thinking before the Sustainable AI conference, hosted at the University of Bonn on the 15th–17th of June.
Pertaining to sustainable development, the report Our Common Future, also known as the Brundtland Report, published in October 1987, states:
“Humanity has the ability to make development sustainable to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs. The concept of sustainable development does imply limits — not absolute limits but limitations imposed by the present state of technology and social organization on environmental resources and by the ability of the biosphere to absorb the effects of human activities.”
This is a broad, ever-changing definition of sustainability due to its focus on ‘present’, ‘future’ and ‘needs’. Within this framework, sustainability is constantly being redefined and challenged.
These notions were to some extent based on the economic resource-based forecasting in the Limits to Growth report:
“The Limits to Growth (LTG) is a 1972 report on the exponential economic and population growth with a finite supply of resources, studied by computer simulation.”
There had been related thinking before this as well.
As such, although Limits to Growth (1972) and Our Common Future (1987) popularised sustainability, there were earlier threads of thought along these lines.
Later convening work in UN-led conferences has played a part in developing a framework to operationalise commitment from nations.
These had similar content and form to the eventual Millennium Development Goals (MDGs). The MDGs were established in 2000, with targets for 2015, following the adoption of the United Nations Millennium Declaration. The Millennium Declaration has eight chapters and key objectives, and was adopted by 189 world leaders during the Millennium Summit, held from the 6th to the 8th of September 2000.
In 2016 these MDGs were succeeded by the UN Sustainable Development Goals (SDGs).
You have likely seen their colours and numbers around, as the goals are highly visual and often appear in presentations by various businesses and governments:
It is important to note that these 17 goals also have indicators detailing progress towards each target.
“The global indicator framework includes 231 unique indicators. Please note that the total number of indicators listed in the global indicator framework of SDG indicators is 247.”
An attempt at displaying the available data can be seen in an online SDG tracker (made by Global Change Data Lab, a registered charity in England and Wales) and it is listed on the official website of the United Nations.
Within these indicators, the Internet is, for example, mentioned four times.
Machine learning, artificial intelligence, automation, and robotics receive no mention.
I do not claim AI is as important as the Internet, although I do believe that to some extent AI can have a horizontal influence across various sectors and areas of society. Especially with recent examples such as Google’s LaMDA, launched in May 2021, an AI system for language integrated across their search portal, voice assistant, and workplace tools.
That being said:
There are of course many terms that more broadly do not feature in the goals or the indicators, but these goals are still relevant for the conceptual and operational aspects involved in developing and applying AI.
One example could be by Aimee Van Wynsberghe, one of the hosts of the conference on Sustainable AI, in her article Sustainable AI: AI for sustainability and the sustainability of AI:
“I propose a definition of Sustainable AI; Sustainable AI is a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice.”
Wynsberghe also argues:
“Sustainability of AI is focused on sustainable data sources, power supplies, and infrastructures as a way of measuring and reducing the carbon footprint from training and/or tuning an algorithm. Addressing these aspects gets to the heart of ensuring the sustainability of AI for the environment.”
In her article she splits this into the sustainability of the system and the application of AI for more sustainable purposes:
“In short, the AI which is being proposed to power our society cannot, through its development and use, make our society unsustainable”
Wynsberghe argues for three actions we have to take, I have shortened these slightly, but they can be read in full within her article:
This approach from Wynsberghe constructs a duality of sustainable AI systems and a thoughtful purpose in the application of AI. Both are important, and these can be useful in building a way to approach sustainable AI as a concept.
As a simple two-point heuristic for a complex issue, sustainable AI is:
There are other ways to approach sustainability.
It is important to consider power and inequalities, as they figure to some extent within the SDGs. These topics are often forgotten or ignored when artificial intelligence is discussed together with sustainability (although ‘bias’ is often mentioned).
Consider Sustainable Development Goal 10, reduced inequalities: what part do AI applications play in this regard?
I consider Weapons of Math Destruction by Cathy O’Neil to feature in this discussion, and it sparked a wide range of questions.
The recent film Coded Bias, alongside the research and advocacy of Joy Buolamwini, Timnit Gebru, Deb Raji, and Tawana Petty on inequalities (in the form of bias) in AI systems, particularly facial recognition, is important here.
I personally believe another interesting discussion of this at length can be found in the book The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, because there are large questions both about the resource system built around artificial intelligence and about the delivery of services in various political contexts.
This is also about labour and minerals within planetary boundaries.
Power can to some extent create frameworks for the actions we take. This is not new, yet AI has become a large part of framing decision-making processes for large populations/citizens/users, depending on who you ask.
Another aspect is the efficiency of large language models trained on enormous datasets, with their challenging computational needs and potential impacts on society. Companies, NGOs and governments attempt to handle this by employing various AI ethics teams. Yet, as demonstrated by Google’s firing of the two co-leads of its AI ethics team, Timnit Gebru and Margaret Mitchell, following a dispute over a paper on large language models, this is by no means an easy relationship.
AI ethics teams can often have a narrow remit, and sustainability is not necessarily discussed within these contexts. Activities can range from broad philosophical debates about morality to contesting benchmarks in machine learning datasets. I believe part of what AI ethics is can be seen as a way to address difficult ethical issues in the application of services or products. At times it seems that codes of conduct or principles are made as a way to argue for moral supervision in a company.
AI ethics can be either a technical exercise performed with developers on the current delivery of applied AI, or a proactive scenario-based thinking exercise that can help map issues in the application of AI.
It can also be important to challenge inferences in AI (decisions formed based on data or frameworks). Decisions are often extrapolated, so that the application to an unknown situation is made by assuming that existing trends or data will continue, or that similar methods will be applicable to a given situation.
Extrapolating may be difficult for social interactions, although not impossible, and therein lies a challenge more broadly for society (political influence or propaganda + AI being one prominent example).
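To make the extrapolation problem concrete, here is a minimal sketch (with hypothetical toy numbers, not drawn from any real system) of how assuming an existing trend will continue can mislead when the underlying process is not linear:

```python
# Naive linear extrapolation: assume the most recent trend continues.
# A toy illustration of why projecting beyond observed data can mislead.

def linear_extrapolate(xs, ys, x_new):
    """Fit a line through the last two observed points and project forward."""
    (x0, y0), (x1, y1) = (xs[-2], ys[-2]), (xs[-1], ys[-1])
    slope = (y1 - y0) / (x1 - x0)
    return y1 + slope * (x_new - x1)

# Suppose the underlying (unknown) process is actually quadratic: y = x**2
xs = [1, 2, 3, 4]
ys = [x ** 2 for x in xs]  # observed data: 1, 4, 9, 16

predicted = linear_extrapolate(xs, ys, 6)  # trend-following prediction
actual = 6 ** 2

print(predicted, actual)  # 30.0 vs 36 — the linear assumption undershoots
```

The gap between prediction and reality only grows the further one extrapolates, which is part of why trend-based inference about social dynamics deserves scrutiny.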
Data can still be important for seeing trends, and we can conclude that action needs to be taken for increased sustainability. One area often discussed as necessary to sustain life on planet Earth is addressing the urgent climate crisis.
Carbon emissions and their trade-offs are often raised, as in the work of Strubell, Ganesh and McCallum, which posed a question now repeated throughout the AI community whenever discussions of climate arise: how much carbon does training a model emit?
There are arguments that AI can help in tackling the climate crisis. A community focused on this question in particular has appeared in the field of AI over the last few years.
In this sense it is a question of the trade-offs in application within the field of AI as mentioned by Wynsberghe, both the lifecycle system considerations and the applications in the field of AI.
Thinking back to sustainable forest management, I have previously considered some examples of how AI could be useful.
One attempt to address this is by building models differently, especially with more biologically-inspired computational systems. One example in Norway is the research group NordSTAR.
A more prominent example could be the startup AnotherBrain, focused on what they call ‘organic AI’, founded by Bruno Maisonnier, who previously founded Aldebaran Robotics, acquired by SoftBank in 2012.
As mentioned on their website:
“AnotherBrain has created a new kind of artificial intelligence, called Organic AI, very close to the functioning of the human brain and much more powerful than existing AI technologies. A new generation of AI to widen limits of possible and applications. Organic AI is self-learning, does not require big data for training, is very frugal in energy and therefore truly human-friendly.”
In this sense both the ‘frugality’ of the system and the application to address the climate crisis are necessary considerations. Additionally, it must be stressed that human-friendly does not necessarily mean planet-friendly.
Complex systems require rethinking how education is delivered and how we collaborate in society. This is also the case for artificial intelligence.
Rethinking systems of AI and AI applications can mean broadly thinking about humanities and society. An example of funding related to this is the WASP-HS programme in Sweden.
It is doubtful that AI engineers have the time or resources to dive into the historical frameworks of a given context where their systems are applied, its cultural peculiarities, or persisting systemic inequalities. That being said, AI engineers can have an interest in or engagement with these topics, but approaching sustainability in society and nature will require both different educational backgrounds and diverse participation from different groups of people.
If you quantify actions in a society, does that mean you can change it for the better?
This is about information and what we do with it as humans. However, it is also about social and ecological change.
We can amass almost unlimited wealth (if measured in numbers), to attain what we desire so to speak. Yet these large quantities of information may not automatically lead to decisions we desire for a sustainable future.
The purpose(s) for which systems in the field of AI are built relate to the context of different communities. Since that is the case, it also relates to citizens and governance for populations in various areas.
Even though private companies are mentioned very often when AI is discussed, states play an increasingly prominent role. Then again, one can indeed say they have done so since the early development of AI (with military spending and research funding). The interplay between various parts of society (also mentioned in SDG 16) is worth considering, and peace should not be forgotten when we discuss AI. Existential risk is one area being explored in discussions of AI. This does not have to be a Terminator or Skynet-like situation; it could simply be an advanced AI project that has unintended consequences on a large scale.
Governing within the field of AI is a matter that pertains to the state and the actors around it, be they nongovernmental organisations, authoritarian regimes, citizens, informal groups, democracies and so on:
These questions are not easily answered, yet I believe they are highly relevant to the sustainability of artificial intelligence.
Sustainability is often viewed as an equal balancing act with set goals, but it involves negotiating a large set of relationships in our shared ecosystem. I do not believe in a perfect equilibrium of opportunities; however, we should strive for sustainability regardless.
These are some of my notes and thoughts on the topic of sustainable AI.
What do you think? How do sustainability and artificial intelligence relate to each other, and what actions can be taken for increased sustainability in the field of AI?
Written by Alex Moltzau
Original publication: alexmoltzau.medium.com