Week 18: Frankenstein is back, and his name is AI
In this issue: ▸ Creating a Frankenstein ▸ Game changer and job killer ▸ AI and fund managers ▸ The ethics of AI ▸ Management does not understand AI ▸ Lessons from Victor ▸ And much more...
Uncontrolled. Unregulated. Almighty. Fast. Cruel. Cold. Soulless. Yet useful and profitable. No, it is not a virus or a war. It is not a device or a management consulting solution grown out of the neoliberal Chicago school. It is something else. Something far more beautiful and vicious at the same time. It can be used as a cure and to kill; it can develop and destroy at the same time. Start wars. Read your mind and translate your thoughts. The ultimate nowhere-to-hide thing.
Some people call it divine art, where humans get outgrown by machines. Some call it the final stage of human decline. The Frankenstein singularity age is here, and it has been around for some time. The difference? Well, the difference is that we are getting to know how little we know about consequences.
Researching AI for this newsletter was an excursion into the land of questions and the land of ‘don’t know’. The spooky part is that the people we elected across the world to guide and protect our civil and private rights, the politicians, are brutally lost. There is little clarity about how AI can be regulated, on what premises, and how that regulation would be enforced. It is a wild beast. Artificially smart and stupid.
We have built a more ferocious predator than ourselves…
The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence as well as the fact that we are the biggest predators on this planet. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
With new artificial intelligence applications such as ChatGPT taking the world by storm, the politicians responsible for digital and technology policy are caught between a rock and a hard place. The rest of us? Well, we are passengers on a moving train.
How to apply rules to the use of generative AI tools is becoming a pressing issue for governments around the world in the wake of the public debut of OpenAI’s ChatGPT last November. Since then, the chatbot app has demonstrated its high capacity to handle a variety of tasks, including finding and summarising information, drafting documents and checking programming code.
As more tech firms develop generative AI products, governance of the technology has become one of the main topics of political discussion around the world.
In Italy, concerns over data privacy prompted authorities to temporarily ban the use of ChatGPT last month after the chatbot service was allegedly discovered to be illegally collecting data. The country lifted the temporary ban after a couple of weeks.
EU policymakers, meanwhile, have reportedly been rushing to update the draft of the AI Act to regulate the use of copyrighted materials. The scary part is that the regulatory bodies are not equipped with the expertise in artificial intelligence to engage in oversight without some real focus and investment.
The rapid rate of technological change means even the most informed legislators can’t keep pace. Requiring every new product using AI to be pre-screened for potential social harms is not only impractical but would create a huge drag on innovation.
Game changer and job killer…
The technology is widely seen as a game changer but has also ignited concerns over the possibility it could be a job-killer, while also helping spread false or misleading information.
Apparently, an AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time. The decoder could reconstruct speech with uncanny accuracy while people listened to a story — or even silently imagined one — using only fMRI scan data.
Previous language decoding systems have required surgical implants, and the latest advance raises the prospect of new ways to restore speech in patients struggling to communicate due to a stroke or motor neurone disease. Dr Alexander Huth, a neuroscientist who led the work at the University of Texas at Austin, said: “We were kind of shocked that it works as well as it does. I’ve been working on this for 15 years … so it was shocking and exciting when it finally did work.”
The achievement overcomes a fundamental limitation of fMRI: while the technique can map brain activity to a specific location with incredibly high resolution, there is an inherent time lag that makes tracking activity in real time impossible. The lag exists because fMRI scans measure the blood-flow response to brain activity, which peaks and returns to baseline over about 10 seconds; even the most powerful scanner cannot improve on this. “It’s this noisy, sluggish proxy for neural activity,” said Huth.
This hard limit has hampered the ability to interpret brain activity in response to natural speech because it gives a “mishmash of information” spread over a few seconds. However, the advent of large language models — the kind of AI underpinning OpenAI’s ChatGPT — provided a new way in. These models can represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word.
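To make that idea concrete, here is a toy sketch of the core trick. Everything in it is invented for illustration (the hand-written embedding vectors, the simulated “brain state”); the real decoder uses representations from a large language model and far more sophisticated statistics. The point is only that candidate phrases get scored by semantic similarity, rather than being decoded word by word.

```python
import numpy as np

# Toy semantic embeddings, invented for illustration. The real work derives
# these from a large language model, not from hand-written vectors.
EMBEDDINGS = {
    "the dog chased the ball":   np.array([0.9, 0.1, 0.0]),
    "a puppy ran after a toy":   np.array([0.8, 0.2, 0.1]),
    "interest rates rose again": np.array([0.0, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend this vector was inferred from fMRI data: a smeared, noisy record
# of the *meaning* the listener heard over several seconds.
brain_state = np.array([0.85, 0.15, 0.05])

# Choose the candidate phrase whose meaning best matches the brain state.
best = max(EMBEDDINGS, key=lambda phrase: cosine(EMBEDDINGS[phrase], brain_state))
print(best)  # semantically closest phrase, not a word-for-word transcript
```

Note how the first two phrases score almost identically: the approach recovers the gist of what was heard rather than the exact wording, which matches how the researchers describe their decoder.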
Fund managers may be starting to look nervously over their shoulders
A selection of stocks picked by artificial intelligence chatbot ChatGPT has delivered better performance than some of the UK’s leading investment funds, according to an experiment conducted by finder.com.
Analysts at the personal finance comparison site asked ChatGPT to create a theoretical fund of more than 30 stocks, following a range of investing principles taken from leading funds.
In the eight weeks since its creation, the portfolio of 38 stocks has risen 4.9 per cent, compared with an average loss of 0.8 per cent for the 10 most popular funds on UK platform Interactive Investor, according to finder.com. That list includes Terry Smith’s Fundsmith Equity as well as a range of UK, US and global funds from Vanguard, Fidelity and HSBC.
Jon Ostler, chief executive of finder.com, said: “It’s not taken the public long to find creative ways of getting ChatGPT to help them in areas where it shouldn’t technically do so.”
“The big question is how bad of an idea using ChatGPT for investing research currently would be,” Ostler said. “Big funds have increasingly been using AI for years, but the public using a rudimentary AI platform that openly says its data is patchy since September 2021 and lacks the intricacies of market psychology doesn’t sound like a good idea.”
Ostler said the democratisation of AI seemed set to disrupt and revolutionise financial industries, but argued it was too early for consumers to trust it when it came to their own finances.
Let’s look at the ethics of it, or shall we say, what are the consequences from the land of ‘we don’t know’…
How do machines affect our behaviour and interaction? Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a chatbot named Eugene Goostman was claimed to be the first to pass a Turing test. In that challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled a third of the human raters into thinking they had been talking to a human being, enough to clear the contest’s threshold.
This milestone is only the start of an age in which we will frequently interact with machines as if they were human, whether in customer service or sales. While humans are limited in the attention and kindness they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.
Few of us are aware of it, but we are already witnessing how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimised with A/B testing, a rudimentary form of algorithmic optimisation of content to capture our attention. This and other methods are used to make numerous video and mobile games addictive. Tech addiction is the new frontier of human dependency.
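For the curious, the mechanics behind headline A/B testing are almost embarrassingly simple. Here is a minimal sketch, with invented click numbers, of the standard two-proportion z-test a publisher might run to decide which headline ‘wins’:

```python
from math import sqrt, erf

def ab_test(clicks_a: int, views_a: int, clicks_b: int, views_b: int):
    """Two-proportion z-test: is headline B's click-through rate really higher?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided, via the normal CDF
    return p_a, p_b, z, p_value

# Invented numbers: headline A is sober, headline B is bait.
p_a, p_b, z, p = ab_test(clicks_a=310, views_a=10_000, clicks_b=380, views_b=10_000)
print(f"CTR A = {p_a:.1%}, CTR B = {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")
# With p well below 0.05, headline B ships. Run continuously at scale, this
# cheap loop is how attention gets optimised. No deep learning required.
```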
On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behaviour. However, in the wrong hands it could prove detrimental.
Once we consider machines as entities that can perceive, feel, and act, it's not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of "feeling" machines? Will they have “rights”?
Simply explained: management does not understand the technology…
Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it enters a test phase, where it is hit with examples it has not seen before and we see how it performs.
Obviously, the training phase cannot cover all possible examples a system may face in the real world. These systems can be fooled in ways that humans would not be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned and that people can’t subvert it for their own ends.
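Here is a minimal, hedged illustration of both points, using scikit-learn’s toy digits dataset: first the train/test cycle described above, then what happens when the trained model is shown pure random noise. (A sketch, not a claim about any production system.)

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Training phase: the system "learns" patterns from labelled examples.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Test phase: hit it with examples it has never seen and check performance.
print(f"held-out accuracy: {model.score(X_test, y_test):.1%}")

# Now feed it random dot patterns. A human sees noise; the model still
# assigns every pattern to some digit class, sometimes quite confidently,
# because it has no concept of "this is not a digit at all".
noise = np.random.default_rng(0).uniform(0, 16, size=(5, 64))
for digit, conf in zip(model.predict(noise), model.predict_proba(noise).max(axis=1)):
    print(f"noise classified as {digit} (confidence {conf:.0%})")
```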
Though artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people. We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental.
Once again, if used right, or used by those who strive for social progress, artificial intelligence can become a catalyst for positive change. But the more powerful a technology becomes, the more it can be used for nefarious ends as well as good ones.
This applies not only to robots built to replace human soldiers, or to autonomous weapons, but to any AI system that can cause damage if used maliciously. Because these fights won’t be fought on the battlefield alone, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude. Predator de-luxe.
Promise vs peril
Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast released in August by technology research firm IDC. Retail and banking industries spent the most this year, at more than $5 billion each. The company expects the media industry and federal and central governments will invest most heavily between 2018 and 2023 and predicts that AI will be “the disrupting influence changing entire industries over the next decade.”
“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors, and industrial automation, in business and the work world.
At the same time, AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment.
Can AI be a game-changer for ESG?
Although most businesses have the best intentions, that will not matter in the end unless substantial and demonstrable improvements are made. This starts with setting aggressive, impactful ESG goals. But developing ESG goals and then monitoring and making progress towards them is among the greatest challenges faced by global businesses today.
The incoming data sources are complex and fragmented, leading to insufficient analyses, inconsistent reporting and unfulfilled promises. This is where artificial intelligence (AI) can be a game changer for managing ESG efforts and, ultimately, addressing climate change. AI can help move the needle in the right direction by providing comprehensive ESG management solutions, reporting capabilities and actionable emissions insights for even the biggest enterprises.
Over the past two years, an increasing number of governmental bodies around the globe have enacted laws requiring corporations to report ESG metrics. In 2021, the European Commission adopted a proposal that will require companies to report on social and environmental impacts starting in 2024. The United Kingdom, Hong Kong, Singapore, and China have all updated their environmental and social disclosure guidance. And in August 2022, the US Securities and Exchange Commission proposed regulations to enhance and standardise climate-related disclosures.
But right now, most corporations are not yet prepared to meet these new requirements. They need automated solutions that integrate data and provide the full scope of emissions-tracking features and broader ESG performance management. AI is a big part of such a solution. AI-powered solutions provide near real-time data fusion, validation and mapping to current standards and frameworks. For systems infused with AI, reporting is no longer a burden, and we can ensure the correct metrics are tracked. This includes Scope 1, 2, and 3 emissions, the last of which is notoriously difficult to track.
This does require greater data collection and processing — mostly to track Scope 3 emissions — including the initial training of an AI model. However, once trained, the model runs as inference, which requires minimal computing resources. And results show that corporations can make a difference simply by adopting a comprehensive tracking system: those with automated solutions for emissions measurement are 2.2 times more likely to measure emissions comprehensively and 1.9 times more likely to reduce emissions in line with their ambitions.
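What might the core of such a tracking system look like? A deliberately simplified sketch follows; the activity records and emission factors are hypothetical, and no vendor’s actual product is implied. The genuinely hard, AI-assisted part sits upstream of this loop: fusing messy supplier data and inferring the Scope 3 factors themselves.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    quantity: float  # e.g. litres of diesel, kWh of electricity, tonne-km shipped
    factor: float    # kg CO2e per unit (hypothetical numbers below)
    scope: int       # 1 = direct, 2 = purchased energy, 3 = value chain

# Hypothetical activity ledger for one reporting period.
ledger = [
    Activity("fleet diesel (L)",          12_000, 2.68, scope=1),
    Activity("grid electricity (kWh)",   450_000, 0.23, scope=2),
    Activity("purchased goods (tkm)",  2_100_000, 0.11, scope=3),
    Activity("employee flights (pkm)",   380_000, 0.15, scope=3),
]

# Aggregate to the Scope 1/2/3 totals a disclosure framework asks for.
totals = {1: 0.0, 2: 0.0, 3: 0.0}
for a in ledger:
    totals[a.scope] += a.quantity * a.factor / 1000  # kg CO2e -> tonnes

for scope, tonnes in sorted(totals.items()):
    print(f"Scope {scope}: {tonnes:,.1f} t CO2e")
```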
It’s increasingly clear that consumers and investors are becoming wise to greenwashing and false promises, while organisations are still struggling to implement sustainability solutions that provide meaningful climate action. The need to do so is urgent, both for the planet and for satisfying stakeholders.
In research from PwC, almost half of investors surveyed expressed a willingness to divest from companies that aren’t taking sufficient action on ESG issues. The demand to succeed is not only external. Recruiters have also noticed that more and more employees favour companies with clear ESG commitments.
AI solutions can deliver serious improvements in standardising and delivering on ESG metrics, extending from ongoing operations all the way to the reporting of ESG outcomes. AI has real potential to improve the monitoring of ESG reporting and goals.
However, there are still challenges in analysing the extensive data available, and the choice of one measure over another can have a large impact on the outcome. In the end, a comprehensive investment process should avoid placing too much confidence in a single measure.
Furthermore, one also needs to consider the costs of maintaining alternative datasets: not only the costs of acquiring data, but also the investment required to store and integrate these large datasets, activities that might necessitate a dedicated team.
Overall, the common consensus is that ESG integration into investment approaches will become more profound and the ability to use robust data will play a major role in that process. Not only can AI help to extract relevant information from existing data sources, it also offers exciting opportunities to create new ones.
Yes, yes, but… There is always a but.
According to the International Energy Agency, electricity consumption from cooling data centers could be as much as 15% to 30% of a country’s entire usage by 2030.
Running the algorithms that process data also consumes energy. Training AI for firms’ use has a big environmental impact, according to Tanya Goodin, a tech ethicist and fellow of the Royal Society of Arts in London. “Training artificial intelligence is a highly energy-intensive process,” Goodin says. “AI are trained via deep learning, which involves processing vast amounts of data.”
Recent estimates from academics suggest that the carbon footprint of training a single AI model is 284 tons, equivalent to five times the lifetime emissions of the average car. Separate calculations put the energy usage of one supercomputer on a par with that of 10,000 households. Yet this huge electricity use often goes unaccounted for.
Where an organisation owns its data centers, the carbon emissions will be captured and reported under its TCFD Scope 1 and 2 emissions. If, however — as happens at an increasing number of financial firms — data centers are outsourced to a cloud provider, the emissions drop down to Scope 3 in TCFD reporting, which tends to take place on a voluntary basis.
“I think it’s a classic misdirection — almost like a magician misdirection trick,” Goodin explains. “AI is being sold as a solution to climate change, and if you talk to any of the tech companies, they will say there’s huge potential for AI to be used to solve climate problems, but it’s a big part of the problem.”
In 2018, for example, researchers at the University of California, Berkeley found that AI used in lending decisions was perpetuating racial bias: on average, Latino and African American borrowers were paying 5.3 basis points more in interest on their mortgages than white borrowers.
In the UK, research by the Institute and Faculty of Actuaries and the charity Fair By Design found that individuals in lower-income neighbourhoods were being charged £300 more a year for car insurance than those with identical vehicles living in more affluent areas.
The UK Financial Conduct Authority (FCA) has repeatedly warned firms that it is watching the way they treat their customers. In 2021, the FCA revised pricing rules for insurers after research showed that pricing algorithms were generating lower rates for new customers than those given to existing customers.
Likewise, the EU’s AI legislative package looks set to label algorithms used in credit scoring as high-risk and to impose strict obligations on firms’ use of them. Financial firms also need to be mindful of how data has been labelled, Goodin says. “When you build an AI, one of the elements that is still quite manual is that data must be labelled. Data labelling is being outsourced by all these big tech companies, largely to Third World countries paying [poorly],” she notes, adding that these situations are akin to “the disposable fashion industry and their sweatshops.”
Turning to governance, the biggest issue for financial services firms is a lack of technologically skilled staff, and that includes those at the senior management level. “There is a fundamental lack of expertise and experience in the investment industry about data,” says Dr. Rory Sullivan, co-founder and director of Chronos Sustainability and a visiting professor at the Grantham Research Institute on Climate Change at the London School of Economics.
Investment firms are blindly taking data and using it to create products without understanding any of the uncertainties or limitations that might be in the data, Sullivan says. “So, we have a problem of capacity and expertise, and it’s a very technical capacity issue around data and data interpretation,” he adds.
Goodin agrees, noting that all boards at financial firms should be employing ethicists to advise on the use of AI. “Quite a big area in the future is going to be around AI ethicists working with corporations to determine the ethical stance of the AI that they’re using,” she says.
Conclusions, if any?
Elon is in the game, and he knows things we humans don’t know. Elon Musk is developing plans to launch a new artificial intelligence start-up to compete with ChatGPT-maker OpenAI, as the billionaire seeks to join Silicon Valley’s race to build generative AI systems. The Tesla and Twitter chief is assembling a team of artificial intelligence researchers and engineers.
Musk incorporated a company named X.AI on March 9, according to Nevada business records. He is the company’s only director; its secretary is listed as Jared Birchall, the ex-Morgan Stanley banker who manages Musk’s wealth. Musk recently changed the name of Twitter to X Corp in company filings, as part of his plans to create an “everything app” under the brand “X”.
Musk is recruiting engineers from top AI labs including DeepMind, according to those with knowledge of his plans, who said he began to explore the idea of a rival company earlier this year in response to the rapid progress of OpenAI.
Musk has brought on Igor Babuschkin, a former DeepMind employee, and roughly half a dozen other engineers. The Information previously reported Babuschkin’s early talks with Musk.
For the new project, Musk has secured thousands of high-powered GPU processors from Nvidia, said people with knowledge of the move. GPUs are the high-end chips required for his aim of building a large language model — an AI system capable of ingesting enormous amounts of content and producing humanlike writing or realistic imagery, similar to the technology that powers ChatGPT.
Lessons from Victor
The main message of Frankenstein is the danger in the pursuit of knowledge and of advancement in science and technology.
In the novel we see Victor try to push the limits of science by creating a creature from old body parts. The creation backfired on Victor once the monster escaped. Victor became obsessed with the monster because he could not control his creation. In the end, Victor lost everything in his pursuit of pushing science to the limits of human knowledge.
Frankenstein shows the trouble that can come from technological and scientific advancement: it can be too much for humans to control, causing a downward spiral. Victor brought the monster to life, but in the end he could not control his creation.
I think this underlying message of being unprepared for one’s own creation is extremely relevant to today’s society. As artificial intelligence expands every day, it is important to understand the ramifications of discovery. In popular culture, we see shows like Black Mirror that capture the negative effects technological advancements can have on humans.
Technology has the power to bring out human traits that are not positive. If we let technology take over our lives, we will lose our human elements. We must constantly stay aware of the implications technological advancement has for society. As the world continues to advance, we must learn how to adapt. That said, we must welcome advancements in science and technology; if we do not, there will be a negative effect on society, as we saw with Victor.
And finally… “Glossy green” banks
I couldn’t finish this week’s newsletter without drawing attention to this. At the ECB’s banking supervision conference this week, a keynote speech and a new analysis paper made clear that the ECB’s top supervisors can no longer claim to be in the dark about greenwashing.
The analysis showed the disconnect between banks’ environmental disclosures and lending activities. Banks portraying their activities as more sustainable extend more credit to borrowers in brown industries and borrowers with higher emissions in general. Brown lending is not offset by a greater lending activity in green industries. Instead, banks lend to the weakest borrowers in brown industries, especially if they have low capital adequacy.
In summary, the results suggest that banks oversell their stated climate goals and credentials while continuing their relationships with polluting borrowers. Glossy greenwash indeed!
That will be all for this week. I’ll leave you with the thought — do we really want to have our lives controlled by Elon and AI? I don’t.
Have a great ‘very human’ week!