How ESG* analysts should think – The Social Dilemma – Is profit-driven, AI-powered exploitation of the human mind targeting mood alteration ethically acceptable?
The starting point here is the Netflix film "The Social Dilemma": https://www.netflix.com/se-en/title/81254224. If you have not seen it, I recommend that you do, but brace for a hard landing, particularly if you have kids or plan to have them.
Where to start – A personal experience
First of all, I cannot vouch for everything stated in the film, as parts of it are far beyond my data-science understanding, but there are a number of testimonials from former social media platform executives that make sense and are at least worth listening to as a contribution to the debate. Facebook has characterised the film as a "conspiracy theory".
On a personal note, I have experienced the social media "dependency" myself: how 30 to 60 minutes can fly by when you get lost in the social media scroll, and how comfortable it is when content confirming your beliefs pops up. Many of us have seen this and have long discussed it with our kids, but most of us have lost that debate. It feels like discussing a drug addiction with a drug addict, with reactions that confirm the addiction: denial and anger. I decided to take back control over a year ago and uninstalled the social media apps on my iPhone, with the exception of LinkedIn, and I am very satisfied with that decision.
The companies in scope and how they make their living
Facebook, Snapchat, Instagram, YouTube, Twitter, Google, etc.
Social media companies make money when the user (you and I) clicks the right things, and even more money when the user clicks more things after having been manipulated (mood alteration) by AI algorithms to do so.
What is the social dilemma being discussed?
What started as fun projects with a "doing communication better among human beings" feeling has turned into manipulative economic machines. The raw material is you and me, and success is when the algorithms make you do what they make money on. The more addicted and manipulable you are, the more valuable you are to them, and their algorithms, steadily improving through machine learning, will fight to keep you glued to the screen in your hand. They know literally "everything" about their users and can construct a profile, including likely weaknesses, that they can exploit to sell more ads.
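The feedback loop just described – show whatever keeps the user on screen, measure the result, and show more of it – can be sketched in miniature. The snippet below is a hypothetical toy, not any platform's actual code: the content categories, the simulated "true engagement" numbers, and the simple epsilon-greedy bandit are all my own illustrative assumptions.

```python
import random

# Hypothetical toy model of an engagement-maximising feed -- an
# illustration only, not any real platform's code.  A simple
# epsilon-greedy bandit learns which content category keeps a
# simulated user watching longest, and then shows more of it.

CATEGORIES = ["friends", "news", "outrage", "confirms_beliefs"]

# Simulated user: assumed average seconds of attention per category.
TRUE_ENGAGEMENT = {"friends": 10, "news": 15,
                   "outrage": 40, "confirms_beliefs": 35}

def observed_watch_time(category, rng):
    """Noisy observation of how long the user stayed on one item."""
    return max(0.0, rng.gauss(TRUE_ENGAGEMENT[category], 5))

def run_feed(n_items=2000, epsilon=0.1, seed=42):
    """Serve n_items and return how often each category was shown."""
    rng = random.Random(seed)
    total_time = {c: 0.0 for c in CATEGORIES}
    shown = {c: 0 for c in CATEGORIES}
    for _ in range(n_items):
        if rng.random() < epsilon or not all(shown.values()):
            choice = rng.choice(CATEGORIES)  # explore occasionally
        else:
            # exploit: show whatever has held attention longest so far
            choice = max(CATEGORIES,
                         key=lambda c: total_time[c] / shown[c])
        total_time[choice] += observed_watch_time(choice, rng)
        shown[choice] += 1
    return shown

if __name__ == "__main__":
    # The feed converges on whatever holds attention best,
    # regardless of whether it is good for the user.
    print(sorted(run_feed().items(), key=lambda kv: -kv[1]))
```

Note that nothing in the loop asks whether the content is true or healthy; the objective is watch time, and the "stickiest" categories end up dominating the feed.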
To keep us glued to the screen they present content we agree with, and fake news is apparently a good and powerful way to do that. This also contributes to the growing divide we see among groups of people globally. The latter point is a major issue, attacking a core element of human life on earth: collaboration.
I’ll leave it with this simple description. Have a closer look at the film and make up your own mind.
Close to no regulation – an open freeway to exploit personal data as no other sector is allowed to, and they do, big time
The development of social media is ahead of regulation, partly because the political environment is not keeping up: many decision makers are not aware of how these social media platforms work or the potential harm they can do. This is the creative, IT-oriented "youth" against the established dinosaurs. As a result, regulation is not adequate. Will it ever be?
During a Senate hearing in the US on the topic, Facebook CEO Mark Zuckerberg suggested that they could build algorithms to control their own algorithms… I doubt he felt very comfortable after coming up with such a "solution". Would it not be better to simply make the initial algorithms "behave" in line with human ethics? Even defining where that line should be drawn is not simple.
Where in the ESG analysis does this come in?
Under the S – Social, or more precisely the sub-point Societal (social impact on, and social risk from, stakeholders outside the company). This is about business ethics and how it fails to respect an important stakeholder: the consumers and communities. We have international law forbidding trade in human organs. We need the same for commercial, AI-algorithm-driven exploitation of the human mind targeting mood alteration.
How should an ESG* analyst think about this?
Companies operating in this field have a hard time defending the ethics of it, as seen in the US Senate hearings on these social networks. Here you can read the transcripts of the most recent hearing, dealing with many other known controversies, where the topic of my article comes up in some places. The companies simply refuse to admit that this is how they operate and how their algorithms work. They talk about communicating with friends and family, while the regulator talks about social abuse.
Regulation will likely come eventually, but the question is whether it is possible to regulate at the algorithm level, and how to control this in practice. It is likely far too complicated for lawmakers to understand, let alone actually enforce.
The probability of a ban on certain activities is therefore relevant. Bans could mean a severe reduction in the valuation of the social networks. One variant could be an 18-year age limit, restricting the platforms to adults. National bans and blocking have already been seen. There are many ways that regulation could hit social media companies negatively under their current business model.
Is the long-term solution to introduce a subscription model, where users pay for access, to replace the "harmful" and hidden exploitation of human beings' minds?
In other words, there is a serious risk of stranded assets and an Inevitable Political Response. The latter term is more commonly used for climate-related risks, but it seems appropriate here too.
So, one side is the ethical question: is this right or not? The other side is the ESG risk that investors need to be aware of, and that can be costly.
As an ESG analyst, I see this as a severe ESG risk that investors should be aware of and discuss through engagement with the social media companies.
Surprisingly, the major ESG data providers are not highlighting this specific issue, likely because it is close to impossible to prove. But they are flagging abuse of personal data, abuse of competitive position and tax injustice. There are plenty of controversies. Adding my point on top, this is a very problematic sector from an ESG perspective.
But this has not stopped investors from making these companies among the biggest in the world.
What do you think?
*Environmental, Social and Governance factors