Are there any two letters glued side by side more polarizing than AI? Depending on who you ask, artificial intelligence is either the utopian hero for a better tomorrow or the malicious villain conspiring against humankind.
But regardless of whether you’re a believer or a skeptic, ready or not, there’s no denying that we are in the age of AI. International Data Corporation projects that by the end of this year, over $430 billion will have been spent on AI solutions, and that number will only grow over the next five years.
As someone who partners with brands committed to building a better future, I can’t help but be hopeful about the promises made by businesses developing AI, like conserving endangered species, reducing traffic congestion, and saving lives in the ER. If they can pull it off, the impact these applications will have on our physical world will be regarded as ingenious, sophisticated, and even good.
But what happens when AI starts to study and analyze the intangible and deeply personal, like consciousness or personality? It’s that last part that presents more dystopian possibilities, and it’s exactly what some companies, like beauty tech company Perfect Corp., are already claiming to be able to achieve.
According to the Taiwanese beauty company’s website, its AI Personality Finder’s algorithm is “anchored in The Big Five Personality Traits… The advanced AI engine categorizes facial features and detects up to 65 types of unique facial attributes. The AI-powered solution then identifies key personality traits based on this analysis, and offers personalized product recommendations best fit to the customer’s unique personality.”
This feature is just one solution of many designed to deliver hyper-personalized shopping journeys to over 400 beauty brands and their millions of customers.
In my opinion, the idea that businesses are building technology to analyze the immaterial and categorize the deeply personal is unsettling but, at the same time, unsurprising. Amazon already suggests what to buy next based on our shopping habits. TikTok’s algorithm feeds us never-ending videos based on our interests.
AI is accelerating faster than we can regulate it, and I fear that without careful deliberation, this technology is likely to do more harm than good. Here are three considerations on my mind to kickstart this conversation:
1. GROUPS BUILDING AI SHOULD BE AS REPRESENTATIVE AND DIVERSE AS THEIR INTENDED USERS
There’s a reason why some self-driving cars may have a harder time detecting pedestrians with darker skin: racial bias built into the data. In recent years, companies have been called out for a lack of diversity in their datasets, with disproportionately white imagery used to train AI and inform algorithms. The lack of representation in the groups that build AI is one of the major challenges in regulating the technology, according to Eric Schmidt, Google’s former CEO.
“This… goes back to the way tech works: A bunch of people have similar backgrounds, build tools that make sense to them without understanding that these tools will be used for other people in other ways.”
In other words, as long as humans are biased, the data will be too. Therefore, tech companies should hire and consult with teams as diverse and progressive as the models they hope to build.
2. BRING A KNOWLEDGEABLE, OBJECTIVE, AND INTERDISCIPLINARY GROUP TOGETHER TO BUILD A LIST OF PROPOSED REGULATIONS
As a society, I believe we jumped the gun and just started to build AI without asking ourselves the following questions: What do we want from this technology? What role should it fill? What applications are appropriate?
I believe it is important to bring together a working group of 10-20 people with different perspectives, backgrounds, and disciplines to build a list of proposed regulations. Seeing as governments are likely to regulate and enforce their laws surrounding AI, this list could serve as an expert-backed guideline.
3. CREATE A GLOBAL THIRD-PARTY AI ETHICS REVIEW COMMITTEE
If there’s anything to learn from the days of the Stanford Prison Experiment and other controversial psychological experiments conducted from the early ’50s to the late ’70s, I believe it’s that every major discipline should have an ethics committee to mitigate risk.
AI should not go unregulated, and while some companies like Sony, IBM, and Adobe have in-house AI ethics committees, objectivity is not guaranteed.
This is where an independent third-party AI ethics review committee, composed of ethicists, lawyers, technologists, and business strategists, can come into the picture. One important question that will need to be answered is how much authority this committee will have.
As for Perfect Corp., I wouldn’t write the company off on account of its dystopian personality finder. Before going to market, Perfect Corp. was aware of the potential racial biases and limitations of AI, and it developed its technology to create an inclusive experience.
Founder Alice Chang, a woman of color, and her team at Perfect Corp. made it a point during development to not only address racial biases in their data and imaging systems, but also to work with brand partners to address these challenges.
According to Fortune, Perfect Corp.’s AI foundation shade finder and similar technologies boast “a 95% test-retest reliability and continue to match or surpass human shade-matching and skin analysis.”
This just goes to show that when AI is designed intentionally, it can spark innovation without sacrificing functionality or performance.