Stanford connects technologists and humanists in human-centred AI think-tank
The extent of the power and potential of artificial intelligence, still in its relatively early stages of discovery, remains unknown to mankind.
What is certain is its ubiquity, with the world’s biggest players from the US to China hiring the most prolific researchers to develop various aspects of it for uses in multiple industries. But just as groundbreaking AI discoveries could transform lives for the better, the technology’s potential for abuse is equally great.
“With AI, the power of it is so incredible that it will change society in some very deep ways,” says Microsoft co-founder turned philanthropist Bill Gates. Yet he observes that there are “not that many” areas in which the technology is being used today in a way that positively transforms society.
Gates, along with the likes of the late physicist Stephen Hawking and entrepreneur Elon Musk, has for years expressed concerns about the darker side of AI: scenarios in which humans lose control of the technology, or in which it is infected by human biases.
Recent findings have already borne out some of these fears. These technologies, largely developed by white and Asian men, are inheriting social and racial biases. Facial recognition systems struggle to recognise the faces of people of colour, while voice recognition struggles to identify non-mainstream English accents. If left unchecked, these systemic flaws could amplify disinformation in public debate, foment violence or drive deeper wedges into society.
Yet the wheels of technology continue to turn. Many companies are exploring AI’s uses to scale production or simplify operations by speeding up processes and taking over certain tasks, sparking fears of massive job losses in the age of automation. The problem is that human skills, human institutions and business processes simply don’t change as quickly as technology does, raising the question of whether technology serves man or the other way around.
Ironically, the technology, discovered and developed by humans, is being used to replace human workers, instead of to augment human potential.
Stanford University, a pioneer in the technology (it was a Stanford scientist who coined the term in the first place), has set out to find a solution.
At a recent day-long symposium at which Gates was invited to deliver the keynote, the school officially opened the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI), a sprawling think-tank that aims to put humanity at the very core of the AI debate.
Housed in a new 200,000 square foot building at the heart of Stanford’s campus, the institute will bring together every stakeholder in the development of AI, from businesses and academia to students and policymakers, all committed to a multidisciplinary approach to advancing the technology. Their work will be guided by three principles: to study and forecast AI’s human impact, and guide its development in light of that impact; that AI applications should augment human capabilities, not replace humans; and that the “intelligence” developed must be as subtle and nuanced as human intelligence.
For this, the university-wide institute is collaborating with companies across sectors, including technology, financial services, healthcare and manufacturing, to create a community of advocates and partners at the highest level.
Its Advisory Council, chaired by LinkedIn co-founder Reid Hoffman of Greylock Partners, counts among its members the tech titans of Silicon Valley, including former Google executive chairman Eric Schmidt, former Yahoo chief executive Marissa Mayer, Yahoo co-founder Jerry Yang and the prominent investor Jim Breyer.
Others include Jeff Dean, Google; Steve Denning, General Atlantic; John Hennessy, Stanford University; Eric Horvitz, Microsoft Research; Bob King, Peninsula Capital; James Manyika, McKinsey & Company; Sam Palmisano, Center for Global Enterprise; Heidi Roizen, DFJ/Threshold Ventures; Kevin Scott, Microsoft; Ram Shriram, Sherpalo Ventures; Vishal Sikka, Vian Systems; and Neil Shen, Sequoia Capital.
“AI is no longer just a technical field. If we’re going to make the best decisions for our collective future, we need technologists, business leaders, educators, policymakers, journalists and other parts of society to be versed in AI, and to contribute their perspectives,” says Fei-Fei Li, AI pioneer and former Google vice president.
“Stanford’s depth of expertise across academic disciplines combined with a rich history of collaboration with experts and stakeholders from around the world make it an ideal platform for this institute.”
Li, who is also a professor of computer science and former director of the Stanford AI Lab, is one of two directors of the institute. John Etchemendy, professor of philosophy and former Stanford University provost, is the other.

The institute is their brainchild, and began with a conversation they had in 2016 in Li’s driveway, a five-minute drive from campus. Li, it seems, was concerned by the lopsidedness of developments in the technology space, with those dictating humanity’s future all coming from similar backgrounds: math, computer science and engineering.
Where, she wondered, were all the philosophers, historians and behavioural scientists? Where were the humanists and social thinkers? Surely they could inform the process of innovation and complete the feedback loop that would ensure technology connects, rather than alienates, societies? These questions led to months of further discussion, joined later by academics in other faculties across Stanford.
Fast-forward three years, and Stanford HAI is now the most recent addition to the university’s existing interdisciplinary institutes, which harness Stanford’s collaborative culture to solve problems that sit at the boundaries of disciplines.
“One beautiful thing about this world is that it’s made of people of all walks of life and diverse backgrounds,” said Li. “We need all kinds of people to participate and shape our collective future.”
Stanford HAI has already initiated over 50 cross-disciplinary research projects on AI, in areas ranging from medical decision-making to gender bias and refugee settlement, a testament to the think-tank’s commitment to diversity and a human-centred approach.
Commending the initiative, Gates says universities are an important cog in the global innovation machine.
“I think we should draw more universities in,” he says. “Universities, in general, are motivated to think more about societal benefit than the private sector. So it would be unfortunate if the universities fall behind… and it’s great that Stanford is putting together these initiatives.”
He adds: “… these AI technologies are completely done by universities and private companies, with the private sector somewhat ahead. Hopefully, your institution will bring in legislators and executive branch people [and] a few judges to get up to speed on these things because the pace and global nature of it (AI), and the fact that it’s really outside government hands, does make it particularly challenging.”
Stanford HAI aims to raise US$1 billion in funds to be doled out as seed grants for research on advancements in AI with a human-centred focus. The centre recently closed its second call for seed grant proposals, offering 25 grants of up to US$75,000 each to fund projects that “support innovative and interdisciplinary seed research in Human-Centered Artificial Intelligence”.