

Coffee Talk

#47 - Proactively Monitoring Bias in HR Tools

In this week's coffee talk, Henrike and Margrét follow up on questions raised in Episode 46 about the applications of AI in HR. AI can make us faster and more efficient in some ways (speeding up manual, tedious tasks, for example), but if we aren't careful, biases can slip in and catch us off guard. Watch this coffee talk to learn about proactively monitoring AI tools in HR.

Large language models make us faster.

In this week's coffee talk, Fair Pay Innovation Lab's Henrike von Platen and PayAnalytics' Margrét Bjarnadóttir resume last week's discussion about the pitfalls and possibilities of AI tools, particularly for human resources practitioners.

"I'm thinking about all the processes we're trying to do, including job evaluation: what work is worth, and how much. When AI comes into this equation, it is not bias-free... at least not if a biased person programmed it. Do you believe that we can make bias-free AI processes for everything surrounding fairness in this process? If yes, then how?" Henrike asks.

"Well... I'm an optimist. But we have to be very mindful about what we do," Margrét begins. Ten years ago, we were just starting to think about algorithmic bias. Now it's a question intrinsic to automation in our toolboxes, in HR as in medicine.

How can we make them better (for us)?

Take job evaluation as an example. Large language models (LLMs) can read our job descriptions and extract the responsibilities and required knowledge needed to perform the evaluation. But that doesn't mean we should simply take the output at face value. Instead, we should invest the time and energy those automations save us back into scrutinizing outcomes and understanding where our LLM fails. Then we can start to interrogate the model and ask the hard questions:

Are we systematically evaluating jobs held by women lower than jobs dominated by men? Are we undervaluing jobs predominantly performed by immigrants?
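Questions like these can be made concrete with a simple audit. The sketch below is illustrative only (the job data, function name, and 50% threshold for "female-dominated" are our assumptions, not anything from the episode): it compares the average evaluation scores an LLM assigned to female-dominated versus male-dominated jobs. A large gap is a flag for closer human review, not proof of bias.

```python
# Hypothetical audit of LLM-produced job evaluation scores.
# Each job carries a score and the share of women among its holders.
from statistics import mean

def mean_score_gap(jobs, threshold=0.5):
    """Mean score of male-dominated jobs minus mean score of female-dominated jobs."""
    female_dom = [j["score"] for j in jobs if j["share_women"] > threshold]
    male_dom = [j["score"] for j in jobs if j["share_women"] <= threshold]
    if not female_dom or not male_dom:
        return 0.0  # nothing to compare
    return mean(male_dom) - mean(female_dom)

# Toy data: scores an LLM might assign to four job descriptions.
jobs = [
    {"title": "Nurse", "share_women": 0.88, "score": 62},
    {"title": "Technician", "share_women": 0.15, "score": 71},
    {"title": "Care worker", "share_women": 0.81, "score": 55},
    {"title": "Engineer", "share_women": 0.22, "score": 78},
]

gap = mean_score_gap(jobs)
print(f"Average score gap (male- minus female-dominated jobs): {gap:.1f}")
```

In practice you would run this kind of check on every batch of model output, and follow any sizable gap with a job-by-job review.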

Watch the coffee talk to hear the full story.

And remember, if you enjoyed this Coffee Talk, subscribe to our podcast wherever you like to listen. We're on Spotify, Apple Podcasts, Stitcher, and more.

Friday Coffee Talk from Planet Fair is a podcast/videocast series co-hosted by PayAnalytics founder Margrét Bjarnadóttir and Henrike von Platen, founder and CEO of the FPI Fair Pay Innovation Lab in Berlin. It is available through all podcast platforms as well as on YouTube as a videocast.

#58 - Introducing Structured Pay Equity Analysis

Systematic bias is stubborn. Demographic pay gaps can still remain even after companies do a pay equity analysis and give raises to underpaid employees. But based on recent research by Margrét and her co-authors, there's a better way to close the gap: structured pay equity analysis. Margrét and Henrike discuss it in this week's coffee talk.
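One ingredient of a pay equity analysis is the adjusted pay gap: regress pay on legitimate pay factors plus a demographic indicator, and read the indicator's coefficient as the gap that those factors cannot explain. The sketch below is a minimal illustration of that idea on simulated data; the function name, the single "experience" factor, and the simulated numbers are our assumptions, not PayAnalytics' actual methodology.

```python
# Minimal sketch: estimate an adjusted pay gap with ordinary least squares.
# Illustrative only; real analyses use many more pay factors and diagnostics.
import numpy as np

def adjusted_gap(pay, experience, group):
    """OLS of log(pay) on [intercept, experience, group]; returns the group coefficient."""
    X = np.column_stack([np.ones_like(experience), experience, group])
    coef, *_ = np.linalg.lstsq(X, np.log(pay), rcond=None)
    return coef[2]  # approximate log-point gap for group == 1

rng = np.random.default_rng(0)
n = 200
experience = rng.uniform(0, 20, n)
group = rng.integers(0, 2, n).astype(float)  # 1 = disadvantaged group
# Simulated pay: experience premium plus a built-in -5% group penalty.
pay = np.exp(10 + 0.02 * experience - 0.05 * group + rng.normal(0, 0.03, n))

print(f"Estimated adjusted gap: {adjusted_gap(pay, experience, group):.3f}")
```

On this simulated data the estimate recovers a gap near the built-in -5%; the point of a structured analysis is to decide how to close such a gap without it reappearing.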

See all Coffee Talk episodes