
  • Introduction
  • Unconscious Bias and How Our Brains Fail Us
  • What is Algorithmic Bias?
  • Overcoming Algorithmic Bias
  • What we are Doing at MeVitae

Algorithmic Bias Explained

Introduction

Our brains are not well adapted to decision making in the modern world. To overcome our brains’ limitations, we increasingly rely on automated algorithms to help us. Unfortunately, these algorithms are also imperfect and can be dogged by algorithmic biases. In this blog post we discuss how and under what circumstances our brains can fail us, how computer algorithms can come to the rescue, the dangers of algorithmic biases, and how to avoid them.

Unconscious Bias and How Our Brains Fail Us

The weekly supermarket shop perfectly demonstrates the limitations of our brains’ abilities to make decisions in the modern world. In most modern supermarkets we are flooded with too much choice. None of us has the energy, motivation or time to assess the pros and cons of each breakfast cereal (for example) and make a rational, logical decision about which one is best for us. Instead, we use mental shortcuts to make our decisions. Examples of the mental shortcuts (heuristics) that our brains take include the halo effect, confirmation bias, affinity bias, contrast bias, and many others.

In the case of breakfast cereals, the harm in using mental shortcuts is relatively innocuous: consumers may end up with non-optimal but perfectly acceptable breakfasts. In other instances, the impacts can be far more damaging. In the context of human resources and diversity and inclusion, mental shortcuts can result in accidental racism, sexism, homophobia, classism, ageism or ableism.

Decisions can either be made rationally and logically, or they can be made quickly using mental shortcuts. Organizations are constantly seeking to improve productivity and in doing so face a dilemma: do they want good decisions or fast ones?

This is where algorithms can come to the rescue. Tasks that require thought and consideration can now be outsourced to machines thanks to advances in machine learning, powerful computers and large datasets. A novel computer algorithm that is effective at solving a particular problem can take a significant investment of time and energy to develop initially, but once in use can save huge amounts of time. These algorithms can, in principle, make good decisions quickly. Using them can eliminate the trade-off between speed and quality that is necessary when asking humans to make decisions and complete tasks.

What is Algorithmic Bias?

The challenge in building these algorithms is to ensure that the decisions the algorithms make are good and not subject to algorithmic biases. An algorithmic bias is a systematic error, i.e. a mistake that is not caused by random chance but by an inaccuracy or failing in the algorithm. The most pernicious biases are those that negatively affect one group of people more than another.

People have attempted to group the sources and causes of algorithmic bias into various categories, including confirmation bias, rescue bias, selection bias, sampling bias, orientation bias, and modelling bias (amongst many others). All these labels can make the concept seem very complicated, and they are possibly not that helpful.

Conceptually, algorithmic bias is not complicated, but to understand it we first need to discuss the three main components of a computer algorithm: the model, the data, and the loss function. Bias can be introduced by each of these components, and we will discuss them below.

Modern computer algorithms work by a human building a mathematical model (a set of equations) which can replicate the brain’s ability to solve a very specific task. Ideally, the model should be well-motivated and based on insight. For example, when trying to predict how a ball will bounce off a wall, we could either guess a model and hope for the best, or we could use some physics and choose a model justified by our understanding of the natural world. An example of a biased model would be one that systematically predicts that balls bounce further than they do in real life. In a physically motivated model, this bias could be caused by the developers asserting that balls are bouncier than they really are; in a guessed model, it could be caused by our guess being a bad one.

When it is difficult (or even impossible) to choose a well-motivated model, there are clever mathematical tools that can be used to compare lots of models at the same time and choose the best one; alternatively, machine learning can be used. Machine learning essentially boils down to building an incredibly complicated model that we think might be able to mimic a well-motivated one (as well as lots of poorly motivated ones), and then using large amounts of data to pull it in the right direction. Neural networks are examples of such models. Biases introduced by neural networks can be very difficult to understand and remove.
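The bouncing-ball example can be sketched in a few lines of code. The restitution coefficients below are invented purely for illustration: a "true" value describes the real ball, while the model asserts a higher one, so its errors are all in the same direction, which is what makes this a bias rather than noise.

```python
# Illustrative sketch of a systematically biased physical model.
# Both restitution coefficients are made-up numbers for the example.

def bounce_height(drop_height, restitution):
    """Height after one bounce of an ideal ball: h' = e^2 * h (no drag)."""
    return restitution ** 2 * drop_height

TRUE_E = 0.7    # the ball's actual bounciness (assumed for this sketch)
BIASED_E = 0.9  # the model's wrong belief: "balls are bouncier than they are"

drops = [1.0, 2.0, 3.0]  # drop heights in metres
truth = [bounce_height(h, TRUE_E) for h in drops]
predicted = [bounce_height(h, BIASED_E) for h in drops]

# The error is systematic, not random: every prediction is too high.
errors = [p - t for p, t in zip(predicted, truth)]
assert all(e > 0 for e in errors)
```

A random error would scatter above and below the truth; here every prediction overshoots, which is the signature of a biased model.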

Like the human brain, these models need to learn, and we train them using large datasets (huge datasets if we are using neural networks). If there are biases present in the data, the model will learn to replicate them. For example, if we were training an algorithm to identify whether a picture is of a nurse or a builder, we would need lots of pictures of nurses and builders. If in our training data all of the pictures of nurses were women and all of the pictures of builders were men, the algorithm may well (mistakenly) conclude that all nurses are women and all builders are men. This is not true globally, but from the biased data presented to the algorithm it is a completely legitimate conclusion. This is perhaps the most common source of bias in computer algorithms, but also the easiest to deal with: get more representative data!
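The nurse/builder example can be made concrete with a toy "classifier" that simply learns the most common occupation per gender. The training set below is deliberately skewed and entirely invented; the point is that the skewed conclusion follows legitimately from the skewed data.

```python
# Illustrative sketch: a model trained on biased data replicates the bias.
# The training set is deliberately skewed (and made up): every nurse is a
# woman and every builder is a man.
from collections import Counter

train = [("woman", "nurse")] * 50 + [("man", "builder")] * 50

def fit_majority_by_gender(data):
    """Learn, for each gender, the most common occupation in the training set."""
    counts = {}
    for gender, job in data:
        counts.setdefault(gender, Counter())[job] += 1
    return {gender: c.most_common(1)[0][0] for gender, c in counts.items()}

model = fit_majority_by_gender(train)

# From the biased data this is a "legitimate" conclusion -- and a wrong one.
# The easiest fix: add male nurses and female builders to the training set.
assert model == {"woman": "nurse", "man": "builder"}
```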

We quantify how well an algorithm does by calculating something called a loss function. The mathematical model is run on the training data and tweaked to lower its loss function. If the loss function does not penalise bias, then bias can creep in. Because minority groups form a small part of the global population, they may only weakly influence a simple loss function, and the algorithm may not care if it makes incorrect predictions for them so long as it still does a good job on the majority.
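A small numerical sketch shows how this happens. With the made-up results below, overall accuracy (a stand-in for a simple loss) looks excellent even though the algorithm fails the minority group completely; only a per-group breakdown reveals the problem.

```python
# Illustrative sketch: a plain accuracy score can hide poor minority
# performance. The results are invented: 95 majority-group examples the
# model gets right, 5 minority-group examples it gets entirely wrong.
results = [("majority", True)] * 95 + [("minority", False)] * 5

overall_accuracy = sum(ok for _, ok in results) / len(results)
assert overall_accuracy == 0.95  # the simple metric looks fine...

per_group = {}
for group, ok in results:
    per_group.setdefault(group, []).append(ok)
group_accuracy = {g: sum(v) / len(v) for g, v in per_group.items()}
assert group_accuracy["minority"] == 0.0  # ...while one group is failed completely

# A fairness-aware loss could penalise the worst group's performance instead:
worst_group_accuracy = min(group_accuracy.values())
```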

There are many examples of biased algorithms being deployed commercially or by governments. For example, in 2016, the UK government introduced a tool that used a facial recognition algorithm to check identity photos in online passport applications. The algorithm struggled to cope with very light or dark skin and therefore made the application process more difficult for people in these groups. In the USA, an algorithm called COMPAS is used to predict reoffending rates and guide sentencing. In 2016 the news organization ProPublica found COMPAS to be racially biased against black defendants, and a study in 2018 showed randomly chosen untrained individuals made more accurate predictions than the algorithm. In 2018 Amazon scrapped their AI recruitment algorithm that was biased against women.

Overcoming Algorithmic Bias

The first step in reducing the bias of algorithms is to define what a fair outcome would look like at the start of the development process. For example, when predicting reoffending rates, one measure of fairness could be how similar the accuracy is across different protected characteristics: the false positive rates for all ethnic groups, genders, and so on should be similar. Without such metrics it is impossible to assess whether the algorithm is fair or not. Once defined, these metrics can be incorporated into the loss function.
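Such a fairness metric is straightforward to compute. The sketch below, using invented predictions, measures the false positive rate for each of two groups and reports the gap between them; a large gap signals that the algorithm treats the groups unequally.

```python
# Illustrative sketch: comparing false positive rates across two groups.
# All predictions and labels below are made up for the example.

def false_positive_rate(pairs):
    """pairs: (predicted_positive, actually_positive) booleans.
    FPR = fraction of actual negatives that were predicted positive."""
    preds_on_negatives = [pred for pred, actual in pairs if not actual]
    return sum(preds_on_negatives) / len(preds_on_negatives)

group_a = [(True, False)] * 10 + [(False, False)] * 90  # FPR 0.10
group_b = [(True, False)] * 30 + [(False, False)] * 70  # FPR 0.30

fpr_a = false_positive_rate(group_a)
fpr_b = false_positive_rate(group_b)

# One simple fairness metric: the disparity in false positive rates.
gap = abs(fpr_a - fpr_b)
assert round(gap, 2) == 0.20  # a large gap -> unequal treatment
```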

The second step is to ensure that any training data are representative of what should be, not what is. For example, when training a recruitment algorithm, the history of past hires will likely have been impacted by the mental shortcuts that humans use when making decisions quickly, which we discussed earlier. An algorithm trained on these data will repeat those same shortcuts. Instead, an exemplar of best practice should be curated and used to train the algorithm. The data must also be representative: they should contain examples of people from all categories and groups in substantial numbers. Amazon’s AI recruitment algorithm, mentioned earlier, failed precisely because of biases baked into its training data.
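One common technique for ensuring groups appear in substantial numbers is oversampling the under-represented group, sketched below on an invented dataset. (This balances the counts; it does not by itself fix a dataset that reflects "what is" rather than "what should be", which still requires curation.)

```python
# Illustrative sketch: rebalancing a skewed dataset by oversampling the
# under-represented group. The records are invented for the example.
import random

random.seed(0)
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10  # B is under-represented

def oversample(records, key, target):
    """Duplicate records at random until each group reaches `target` rows."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[key], []).append(r)
    balanced = []
    for group, rows in by_group.items():
        balanced.extend(rows)
        while sum(1 for r in balanced if r[key] == group) < target:
            balanced.append(random.choice(rows))
    return balanced

balanced = oversample(data, "group", 90)
counts = {g: sum(1 for r in balanced if r["group"] == g) for g in ("A", "B")}
assert counts == {"A": 90, "B": 90}
```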

The third step is to take a Bayesian approach. Wherever possible, build a well-motivated generative model for the problem and fit it to your data. Such an approach can model and account for biases in the training data and can even reduce the amount of training data needed. A Bayesian approach also allows you to estimate how accurate your predictions are on an individualised basis. The alternative, throwing large amounts of data at a deep neural network and hoping for the best, is almost always doomed to introduce bias and lack explainability.
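The Bayesian idea of reporting uncertainty alongside a prediction can be sketched with the textbook conjugate Beta-Binomial model. The observation counts below are invented; the point is that the same model yields both a point estimate and a variance, and the variance shrinks honestly as data accumulate.

```python
# Illustrative sketch: a conjugate Beta-Binomial model gives a posterior
# over a rate AND a measure of uncertainty. All counts are invented.

def beta_posterior(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior Beta(a, b) after Binomial data, from a Beta(a, b) prior.
    Returns the posterior mean and variance of the rate."""
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Little data -> the estimate comes with wide uncertainty we can report:
mean_small, var_small = beta_posterior(6, 2)
# Much more data -> similar point estimate, far tighter posterior:
mean_big, var_big = beta_posterior(300, 100)

assert abs(mean_small - mean_big) < 0.06  # similar estimates...
assert var_big < var_small                # ...but much less uncertainty
```

This per-prediction uncertainty is exactly what a black-box network trained on raw accuracy does not provide.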

Large tech organizations take this incredibly seriously. For example, Facebook have an independent team dedicated to auditing their algorithms, IBM have developed an algorithmic bias detection toolkit called AI Fairness 360, and publicly available training datasets are being constructed with diversity built in.

What we are Doing at MeVitae

One of our goals at MeVitae is to reduce the impact of unfair biases in the recruitment process to create more diverse workforces. Aside from the obvious moral advantages of a fair and more equal society, there are clear economic advantages too. A more diverse workforce tends to result in greater productivity and profitability, better governance, and more creativity.

When short- or long-listing applicants for a job, recruiters spend on average seven seconds per CV. Mental shortcuts therefore dominate the decision-making process and biases creep in. At MeVitae, we have already developed solutions that can automatically identify and redact potentially biasing information from CVs within a company’s applicant tracking system (ATS). If you are interested in this service please contact us for more information.

We are currently developing an algorithm that can shortlist candidates automatically and without bias, with fairness and explainability built in from the ground up. It is based on research we have conducted with partner organizations such as the University of Oxford and the European Space Agency, using tools such as electroencephalogram (EEG) headsets, eye-tracking cameras, and psychometric tests. For more information, please visit our Labs pages.

Author: Riham Satti (Co-Founder and CEO)
