The Boom of NLP
Natural Language Processing (NLP) is one of the hottest areas in Artificial Intelligence right now. NLP applies machine learning models to automatically understand human language. Businesses across industries are using this technology to understand and interact with customers in creative ways, such as scanning social media activity to gauge public sentiment or deploying online chatbots to resolve customer issues. Advances in speech recognition have also created opportunities for companies to automate call centers, which can significantly reduce costs and improve efficiency.
Although companies primarily focus NLP efforts on customer-facing or revenue-generating activities, these tools are also gaining traction in the realm of internal process improvement. For example, law firms are using NLP to quickly extract and compare key information such as timelines and dollar amounts across many dense contracts. Sales teams also use NLP to analyze what differentiates successful from unsuccessful conversations with potential customers.
However, amidst the exciting NLP use cases in business today, one tremendously important area remains largely untouched: employee performance reviews.
Performance Reviews: Areas for Improvement
As humans, our brains are full of implicit bias: we make assumptions about people based on attributes like gender, age, or race without realizing it. The effects of gender bias in the workplace were famously brought to public attention in the early 2000s by the Heidi vs. Howard business school experiment, in which Columbia Business School professor Frank Flynn (now at Stanford) presented his class with a case study about a successful female venture capitalist, Heidi Roizen. Half of the class received an altered version of the case in which Heidi’s identity was replaced by a fictitious man named Howard. The students (roughly equal numbers of men and women) rated Heidi and Howard as equally competent, but they found Howard more likeable. Students used words like ‘confident’ to describe Howard, whereas Heidi was described with words like ‘aggressive’.
Pew Research Center surveyed over 4,500 Americans about which traits society views positively and negatively in men versus women. The results were striking: the adjective ‘powerful’ was used positively to describe men in 70% of cases, compared to just 5% of the time for women. Bias in performance reviews isn’t only about gender and other demographics, though. Reviews are often used as a platform for subtle retaliation: the Equal Employment Opportunity Commission (EEOC) recently reported that 75% of people who report facing discrimination also experience retaliation, often in the form of an unfairly negative review.
HR management teams are aware of these problems and devote considerable time and effort to addressing them. By one estimate, the average HR Business Partner spends 3 hours per day, at a cost of roughly $30k per year, coaching people managers on how to give better feedback. One HR manager at a technology company described the burden that review bias places on her day-to-day responsibilities:
“I spend half my time reviewing, editing, and making suggestions on performance reviews from managers. I correct unconstructive language, make sure they are consistent in their messaging, and constantly have to ask for more examples.”
Upful is Born
Shirin Nikaein is a first-generation college graduate with a rockstar resume. She earned her bachelor’s and master’s degrees in Electrical Engineering from USC and has experience developing fraud detection tools (used by Google, Facebook, and Microsoft), building customer service chatbots, and leading engineering teams for Beats by Dre headphones. She’s intelligent, gritty, and has earned her success through relentless determination. So, when she learned how problematic bias is in performance reviews, it’s no surprise she decided to do something about it while pursuing her MBA at UCLA Anderson.
Shirin founded Upful.ai with the goal of using NLP to coach employees to write better, more objective performance reviews. Eli Selkin, an experienced developer with a background in psychology, social work, and computer science, quickly joined as Lead Software Engineer. Detecting and reducing bias in language is not a clear-cut problem, so Shirin knew she also needed advisors with research experience in the field. She brought on UCLA Anderson Assistant Dean of DEI Heather Caruso, UCLA Assistant Vice Chancellor of DEI Margaret Shih, UCLA Anderson Professor of Organizational Behavior Miguel Unzueta, and USC Marshall Professor of Clinical Business Communication Jolanta Aritz. Their collective research on bias and discrimination spans social psychology, linguistics, and behavioral science. They advise Upful on methodologies for identifying and intervening against bias, while Shirin and Eli build those rules into the product.
Shirin also brought on two industry advisors to help the team understand what companies want from a solution like this: Angel Hu, Organizational Psychologist and Chief of Staff to the Chief People Officer at MongoDB, and Jordan Knox, co-founder of Butter.ai (an NLP search platform acquired by Box) and previously a sales and marketing manager at Spot (a chatbot for reporting workplace discrimination and retaliation). With the technical, academic, and business perspectives covered, the Upful team was ready to get to work.
Upful’s mission is to coach employees to give better quality reviews and feedback. To do this, the system looks for vague, subjective, speculative/assumptive, extreme, or potentially biased language.
Initially, the strategy was to detect biased terms and suggest word replacements. If a writer used a potentially biased description in a review (such as “aggressive” or “emotional”), Upful would recommend more specific language (such as “assertive” or “passionate”). However, early feedback surfaced several problems with this approach. First, there were concerns that it would feel too robotic, as if a machine had written the review instead of the employee. Several Chief Diversity Officers also worried that it would make all reviews look the same, when the goal was for reviews to be more colorful and vivid. This led the team to quickly shift its methodology toward changing how people think without putting words in their mouths. Additional interviews, secondary research, and white papers on the topic further confirmed that telling people what to do doesn’t help change behavior.
Upful knew it needed to take a non-accusatory approach to coaching. It shifted from word replacements to probing questions that provoke deeper thinking, reminders of best practices, and requests for specific examples. This strategy pushes reviewers toward writing more objective, detailed reviews while still letting them express their individual perspectives.
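To make the contrast concrete, here is a minimal sketch of the question-based approach: instead of silently swapping a flagged word for a “better” one, the system flags the term and responds with a coaching prompt asking for specifics. The term list and prompts below are illustrative assumptions for this sketch, not Upful’s actual rule set or methodology.

```python
import re

# Illustrative flagged terms mapped to coaching prompts.
# These examples are assumptions, not Upful's real rules.
COACHING_PROMPTS = {
    "aggressive": "Can you describe the specific situation and behavior instead?",
    "emotional": "What observable actions led to this impression?",
    "always": "Extreme language detected. Can you cite concrete examples?",
    "never": "Extreme language detected. Can you cite concrete examples?",
}

def coach_review(text: str) -> list[str]:
    """Return non-accusatory coaching prompts for flagged terms,
    rather than rewriting the reviewer's own words."""
    prompts = []
    for term, prompt in COACHING_PROMPTS.items():
        # Match whole words only, case-insensitively.
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            prompts.append(f"You used '{term}'. {prompt}")
    return prompts
```

For example, a draft like “She is too aggressive and always emotional” would trigger three prompts nudging the reviewer toward concrete examples, while the original wording stays under the reviewer’s control. A production system would of course need context-aware models rather than a keyword list, since words like “aggressive” can be appropriate in some contexts.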
After receiving $86k in non-dilutive capital from various startup pitch competitions, Upful built its minimum viable product (MVP) in August 2020. Large companies have strict requirements for data privacy and security, which Upful is working to satisfy, and the team plans to support integrations with existing HR platforms, a requirement many companies have voiced. The next major item on the roadmap is a comprehensive analytics dashboard that lets HR administrators track improvements in review quality over time and across employee groups.
Interested in trying Upful with your team?
Contact Shirin at firstname.lastname@example.org to start using Upful today!