#8: OpenAI, Decline in friendship, Our Planet II
Hello there, it’s Àlex, Marc & Miquel, authors of the 3X Newsletter, home of the impact society.
We are here to inspire some of our brightest citizens to write the next chapters of our history by scaling companies or working on projects that help solve society’s biggest problems, while also living environmentally conscious lives.
Let’s dive in!
One reflection
Great power, great responsibility
Welcome to the era of Artificial Intelligence (AI). In every breakthrough, owning or understanding the new technology at play leads to power. This is especially true in the context of AI, and it’s a power that can be leveraged to do good. Or bad.
Although we are still in the early days, the potential that AI holds leads us to ask ourselves: are we designing Artificial Intelligence to improve humanity?
Open AI?
Many companies are building AI today. Yet, one organization seems to be ahead: OpenAI, the creator of ChatGPT, DALL·E, and much more.
As its name suggests, OpenAI claims to be a company on a mission: “To ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity”.
Beautiful. However, over the past few months, the company has received a lot of criticism, especially over two key decisions:
“Closed AI”: The company has released little information about what lies behind its products, citing competitive and safety concerns:
“If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years, it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.” Ilya Sutskever, OpenAI’s Chief Scientist and Co-Founder.
“Former non-profit”: Though it started as a non-profit, OpenAI has recently become a “for-profit” company, claiming that this was the only way to raise enough capital (and attract top talent) to keep up with the rapid pace of AI development.
Yet, to be fair, we must paint the full picture. OpenAI has also made some very interesting design choices to ensure alignment with its bold mission:
Leadership team: Sam Altman, its CEO, owns no stock
Investors: The company has a 100x profit cap for initial investors (less for newer ones), with the remaining profit going to its parent company, a non-profit
Board Members: Only a minority of Board members own stock, and only non-stockholders can vote on decisions where the interests of partners and OpenAI non-profit’s mission may conflict
The perils of an unregulated race
All things considered, it’s clear that:
History tells us that giving excessive power to unregulated technological titans can lead to serious hazards for all of us - e.g. Google & data privacy, Facebook & teen addiction, or Amazon & labour conditions.
If AI falls into the wrong hands, such hazards can become existential risks - e.g. manipulation of people, autonomous weapons, mass unemployment.
And although we might trust OpenAI and its incentives to lead the race, we can’t trust everyone to be so principled, especially given our recent history with tech behemoths and the lucrative power that AI holds. Even the OpenAI team understands that:
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models”
~Sam Altman, in a US Senate hearing
“Matterialism”
It is evident that AI holds the power to solve the world’s most pressing issues: climate change, agriculture, healthcare, education and so on. And because of that, we can’t afford to halt AI’s development. But we need to make sure that we build it with the right intentions, incentives and systems (different from how we built technology in the past), with the open and transparent collaboration of industry, governments, and academia…
…and through companies with the right intentions, through companies that matter.
Read more about the topic here: Will AI Save the World?
Photo: “Artificial intelligence conquering humanity; Picasso style” by DALL·E (OpenAI)
Food for thought
As we discuss technology and its risks, we wonder whether the new Apple Vision Pro and augmented reality will move society in the direction we need. We celebrate progress, but…
One quote
"The earth has music for those who listen"
~William Shakespeare
To love ourselves, we must first understand ourselves. That is why philosophers usually say that self-knowledge precedes self-love. In this same spirit, learning about, and understanding, our planet and its other inhabitants may lead to loving them. And if we love our home and our neighbours, we will inevitably feel more connected to it all, and it will be much easier to embrace a sustainable and ethical way of living.
See, the charts and data on climate and other topics that we usually share with you increase your knowledge at a macro level, and this type of knowledge can be leveraged as an external motivator to do good. But when you spend time watching and learning about nature and other species directly, you gain a different, micro-level knowledge that is far more powerful, because it generates an inner energy rooted in love.
That is why the release of the “Our Planet II” series, narrated by Sir David Attenborough, is so exciting to us. You can watch the introduction to the show here and start, or continue, the journey to understand and love our fellow earthlings.
*Powered by notmaad.com - Impact advisory.