Psychological research in the age of machine learning

It is easy to become disillusioned with psychological research when machines are taking over the world. You spent your entire career toying with three boxes and seven arrows (your model of human vision), while this grad student Alex put together two GPUs, tuned parameters, and was able to actually do object recognition, all the way from images to labels. You thought your petty 2x2 design was going to give you valuable insights into human behavior, while DeepMind crushed the top humans in Go without caring in the slightest how humans play Go. And while you were busy arguing about the Trolley Problem, a bunch of companies already had self-driving cars on the roads, solving problems far more pressing than this one.

Call them black boxes, dismiss the brute-force approach, criticize it for its “lack of understanding”, but if your insights are stuck at circa-2000s state-of-the-art machine learning, what value are you adding?

Here are three directions in which psychological research can still contribute meaningfully in the age of machine learning.

Machines will have to interact with humans

In my TEDxVilnius talk, given just moments before the Deep Learning Spring, I proclaimed that we need to understand the human brain so that humans can relate to machines and, in turn, trust them and use them in their daily lives. A few months ago I heard a nice analogy from Josh Tenenbaum on this point. When people on the computer vision side defend their approach of building models of vision without caring about the human visual system, the bird-versus-airplane analogy often comes up: both can fly, but on vastly different principles, and building a machine that flies the way a bird does took another hundred years.

However, the situation is different in the case of AI. The airplane only had to fly to succeed. The benchmarks of success are far more stringent today. The AI airplane we’re building not only needs to fly (its passive function) but also to interact and cooperate with its passengers (us). To do that efficiently and effectively, these systems will need to model human psychology. As flawed as we might be, these systems will have to deal with us and our idiosyncrasies. And for humans to trust machines, the machines will have to understand and employ various psychological tricks.

Even if we know how to build AI, we still want to understand humans

When pushed hard about the purpose of their research, some psychologists will claim that they simply want to understand the human brain. I was never convinced by this argument, but I never pushed back against it either, assuming it was just a fascination some people happen to have.

But I think we can actually dig deeper. No matter what AI advances are to come, humans will still be around for a long time. And they will talk, and love, and argue, and work together. How do you motivate them? How do you teach them efficiently? How do you make them better people? How do you make them happy with their lives, even if all watched over by machines of loving grace? Psychological research is also about improving the human condition, after all, not just unveiling its principles. Framed this way, psychological research, just like AI research, becomes product-driven (the product being tools to improve the human condition), and thus much easier to defend when asking for funding.

Machines will have emotions, too

(…) one of my concerns is that [AI research has] been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society.

Joi Ito, Wired

I feel that the appeal of computers to many geeks lies in the perception that these are fully controllable systems. Unlike the real world, machines are completely rational and obey logic, which resonates with those less adept at dealing with the largely irrational nature of humanity.

While I don’t really believe that even current systems are all that controllable (ever tried installing something on Linux in a non-standard way? there’s this ill-willed gnome inside Linux, I’m sure of it), machine learning will change the landscape drastically. But I’m willing to bet that as a by-product of making machines intelligent, we will get machines with their own quirks (emotions, feelings, and thoughts), and boy will we be thankful for all this psychological research when managing those depressed sewage cleaners and bored autonomous vehicles.


Let’s use insights from biology to hack machine learning.

Let’s use insights from machines to hack human learning.