Researchers Have Trained a Machine to Judge Faces Exactly Like Humans Do

Scientists have trained a machine-learning algorithm to judge people by their faces in almost exactly the same way that humans do, according to a new report published in a preprint archive of scientific material. The project was spearheaded by Mel McCurrie and her colleagues at the University of Notre Dame, who trained their algorithm to decide whether a human face looks trustworthy or dominant in the same way you or I would.

The team built a dataset for their algorithm to learn from using TestMyBrain.org, a crowdsourced science project that tests and measures participants' psychological attributes. It is one of the most popular brain-testing sites on the internet: more than 1.6 million people have taken part in tests on the platform.

Through the service, McCurrie and company asked subjects to rate more than 6,000 grayscale pictures of faces. Each face was rated by 32 participants for trustworthiness and dominance, and by 15 participants for age and IQ. The team then refined the results and the algorithm's parameters, and finally tested the machine's ability to judge faces the way people do.
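To make that setup concrete, here is a minimal sketch of the general approach the article describes: training a small convolutional network to regress averaged human trait ratings from grayscale face photos. The architecture, image size, and training details below are assumptions for illustration, not the authors' actual model, and the data here is a random placeholder standing in for the TestMyBrain ratings.

```python
# Minimal sketch (assumptions throughout): a small CNN that regresses
# crowd-sourced trait ratings (e.g., trustworthiness, dominance)
# from grayscale face images.

import torch
import torch.nn as nn

class TraitRegressor(nn.Module):
    def __init__(self, n_traits: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_traits)  # one output per trait rating

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TraitRegressor(n_traits=2)            # trustworthiness, dominance
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                        # regress mean human ratings

# Placeholder batch: 8 grayscale 64x64 faces with 2 averaged ratings each.
images = torch.rand(8, 1, 64, 64)
ratings = torch.rand(8, 2)

for _ in range(5):                            # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), ratings)
    loss.backward()
    optimizer.step()
```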

They found that the machine, unsurprisingly, produces results similar to those in the human dataset it learned from: given a face, it comes up with more or less the same answers that humans did. The team was also able to assess which parts of a face people look at to make certain judgments. They did this by covering up parts of the faces presented to the algorithm and checking whether its conclusions changed significantly. Interestingly, the machine relied on much the same parts of the face as humans did to render its judgments.
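The covering-up procedure the article describes matches a standard occlusion-sensitivity analysis. The sketch below, which reuses the toy model from the previous example, slides a gray patch across the image and records how much each occluded region shifts a given trait prediction; the patch size and stride are arbitrary choices, not figures from the paper.

```python
# Hedged sketch of an occlusion analysis (a standard technique; the paper's
# exact procedure may differ): mask each region of the face and measure how
# much the model's trait prediction changes as a result.

import torch

def occlusion_map(model, image, trait_idx=0, patch=8, stride=4):
    """image: (1, H, W) grayscale tensor; returns a grid of sensitivities."""
    model.eval()
    with torch.no_grad():
        baseline = model(image.unsqueeze(0))[0, trait_idx].item()
        _, h, w = image.shape
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heatmap = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, y:y + patch, x:x + patch] = 0.5  # neutral gray patch
                pred = model(occluded.unsqueeze(0))[0, trait_idx].item()
                heatmap[i, j] = abs(pred - baseline)  # larger = region matters more
    return heatmap

# Example usage with the sketch model above:
# sensitivity = occlusion_map(model, torch.rand(1, 64, 64), trait_idx=0)
```

Regions whose occlusion barely moves the prediction contribute little to the judgment; regions that move it a lot are, by this measure, the ones the model is relying on, which can then be compared with the regions humans attend to.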

The team’s research has a host of possible applications. For example, the researchers used the algorithm to judge cultural figures like Edward Snowden and Julian Assange, then compared those results to ones for the actors who played them in recent movies, Joseph Gordon-Levitt and Benedict Cumberbatch, respectively. The team found that their machine produced “remarkably” similar results for the public figures and their respective actors, perhaps a testament to the accuracy of the actors’ portrayals. While opinions about movies, like all art, remain largely subjective, it is interesting to bring a more objective kind of judgment to the medium.

This type of technology could have broad applications. For example, it could be used to test how perceptions differ across demographics and cultures, and what factors fuel those differences. In the case of acting, the team went even further, analyzing performances frame by frame to study how perceptions change over time. Similar technology could eventually be used in marketing and advertising, research, political campaigns, and so on.
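As a rough illustration of that frame-by-frame idea, the sketch below (an assumption, not the team's actual pipeline) scores every frame of a video with the toy trait model from earlier, producing a per-frame timeline of a perceived trait. The video file name is hypothetical.

```python
# Hedged sketch: score each video frame with the trait model to see how a
# perceived trait (e.g., trustworthiness) shifts over the course of a scene.

import cv2          # pip install opencv-python
import torch

def trait_timeline(model, video_path, trait_idx=0, size=64):
    model.eval()
    scores = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (size, size)).astype("float32") / 255.0
        tensor = torch.from_numpy(gray).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
        with torch.no_grad():
            scores.append(model(tensor)[0, trait_idx].item())
    cap.release()
    return scores  # one score per frame

# timeline = trait_timeline(model, "interview_clip.mp4")  # hypothetical file
```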

This type of technology could help researchers pick out which subtle factors lead to our gut-feeling judgments and preconceptions. A recent article in the MIT Technology Review even poses an interesting question: if you discovered that people judge your face as untrustworthy, could this technology help you change that perception by shifting those subtle factors? Whatever the answer, this research might be the first in a line of studies that help us learn why and how human beings render judgments about one another.
