Shaun Moore and Nezare Chafni didn’t set out to build a standalone facial recognition product when they began developing the technology that would become their new company, Trueface.ai.
When the two serial entrepreneurs were planning their next act five years ago, they wanted to ride the wave of smart home technologies with the development of a new smart doorbell — called Chui.
That doorbell would be equipped with facial recognition software as a service. The company raised $500,000 in angel funding and opened a manufacturing facility in Medellin, Colombia.
What the two entrepreneurs discovered was that most existing facial recognition tools couldn’t detect spoof or presentation attacks, which made the technology infeasible for the access control functions they were trying to build.
So Moore and Chafni set out to develop better software for facial recognition.
“In 2014 we focused our engineering efforts on deploying face recognition on the edge in highly constrained environments that could identify hack or spoof attempts,” Moore, the chief executive of Trueface.ai, said in an email. “This technology is the core of what has become Trueface.”
With the upgrades to the product, Chui began tackling the commercial access control market, and while customers loved the software, they wanted to use their own hardware for the product, according to Moore.
So the two entrepreneurs shuttered the factory in 2017 and began focusing on selling the facial recognition product on its own. Thus, Trueface was born.
It’s actually the third company that the two founders have worked on together. Friends since their days studying business at Southern Methodist University, Moore and Chafni previously worked on a content management startup before moving on to Chui’s smart doorbell.
The company spun Trueface out of Chui in June 2017 and raised seed capital from investors, including Scout Ventures, with Harvard Business Angels and GSV Labs participating. That $1.5 million round has powered the company’s development since (including an integration with IFTTT earlier this year to prove that its system worked).
But over the past few years, as damning stories around the risks associated with potentially bad training data being applied to facial recognition technologies continued to appear, the company set itself another task — aligning its training data with the real world.
To that end the company has partnered with a global nonprofit that is collecting facial images from Africa, Asia and Southeast Asia to create a more robust portfolio of images to train its recognition software.
“Like many facial recognition companies, we acknowledge the implicit bias in publicly available training data that can result in misidentification of certain ethnicities,” the company’s chief executive has written. “We think that is unacceptable, and have pioneered methods to collect a multiplicity of anonymized face data from around the world in order to balance our training models. For example, we partnered with non-profits in Africa and Southeast Asia to ensure our training data is diverse and inclusive, resulting in reduced bias and more accurate face recognition – for all.”
The company has also established three principles governing how its technology will be applied. The first is an explicit commitment to reduce bias in training data; the second, an agreement with its customers that in any case that goes to court, human decision-making takes precedence over any output from its software; and the third, an explicit focus on data security to prevent breaches and on data transparency, so that customers disclose what information they’re collecting.
“When implemented responsibly, people will demand this technology for its daily benefits and utility, not fear it,” writes Moore.