Hilke Schellmann, an Emmy award-winning journalist and professor at NYU, discusses her book "The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now" and what inspired her to write it. Schellmann highlights biases in the AI tools companies use to assess candidates, advocates for transparency, explainability, and validity in AI hiring practices, and emphasizes the need for journalists to critically assess AI technologies and for AI literacy among job seekers and the public.
About Our Guest
Hilke Schellmann is an Emmy Award-winning journalist, freelance reporter, and journalism professor at NYU. She covers artificial intelligence and the future of work for the Wall Street Journal, the Guardian, and MIT Technology Review, amongst others. She was a showrunner and reporter for the Wall Street Journal's video series Moving Upstream, and her work has been featured in The New York Times, MIT Technology Review, TIME, The Atlantic, and many others. She won an Emmy for her direction and filming of the documentary "Outlawed in Pakistan," which was also dubbed "among the standouts" at the Sundance Film Festival, and she was a finalist for the Peabody Awards in 2017 for her investigation into student debt for VICE's flagship news magazine VICE on HBO.
Find Hilke Schellmann online:
Instagram
Show Notes
1:07: What inspired you to start writing about AI
5:02: How companies use AI in job hiring processes
9:38: Drawbacks of using AI in job hiring processes
12:34: Improvements to job hiring processes
15:18: Advice for job seekers to succeed
21:06: What should journalists do to hold AI accountable
23:14: The need to improve AI literacy
26:41: Takeaways from lecture at the University of Oregon
28:30: Optimism for the future of journalism and AI
Read the transcript for this episode
This podcast was produced by Isaac Dubey; check out their portfolio to find out more about them:
Want to listen to this episode a different way? Find us wherever you get your podcasts:
Apple Podcasts
Spotify
YouTube
Amazon Music/Audible
Pandora
iHeartRadio
PodBean
TuneIn
Podchaser
Damian Radcliffe 00:00
Hello, and welcome to the Demystifying Media podcast. I'm Damian Radcliffe, the Carolyn S. Chambers Professor of Journalism at the University of Oregon, and in this series we talk to leading scholars and media practitioners about their work at the leading edge of communication studies and practice. Our guest today is Hilke Schellmann, an Emmy award-winning journalism professor at New York University and a freelance investigative reporter focused on holding artificial intelligence accountable. Her work has been published in The Wall Street Journal, The Guardian, The New York Times, and MIT Technology Review, among others. She's the author of the critically acclaimed book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. Hilke, thank you for joining us. Thank you for having me. So we were very fortunate to host you on campus here at the University of Oregon earlier this year, and I'm really delighted to have the opportunity to touch base about some of the themes and ideas we talked about during your time with us, and also about some of the things that you have done subsequently. And I wanted to just start by setting the scene, really, and trying to get a sense of what it was that first introduced you to artificial intelligence and the future of work, and why you decided that that was the beat for you.
Hilke Schellmann 01:25
Yeah, yeah. I mean, you know, it sort of is an evolving theme, but I think the moment that I first heard about artificial intelligence being used in hiring was actually a while back. I guess it's eight years now; in 2017, I was in Washington, DC, at a lawyers' conference. It had nothing to do with AI, it was consumer lawyers, but I needed a ride from the conference venue to the train station. I called myself a Lyft, a rideshare, and I got in the back seat and asked the driver, how are you doing? And he said, you know, I'm having a weird day. And I was like, oh, really? Why? And he said, you know, I just had an interview with a robot for a job. I had never heard of that, you know, interviews by robots. It turns out that it wasn't quite a robot; it was pre-recorded questions. But I think it sort of felt, for the driver, like it was a robot interviewing him, right? There wasn't a human on the other side. And I had never heard of it. And I started to dig in and talk to folks, and I just discovered that there's a whole universe that I hadn't heard of, that this was actually already quite prevalent at the time, and now it's just blooming, right? It got a big push during the pandemic when everyone was working from home; if you wanted to hire somebody, you kind of had to do it on Zoom or digitally. And these products already existed, so a lot of companies bought them. And now that we live in the generative AI age, it's just everywhere in the hiring process. So it was, you know, that little signal that got me started.
Damian Radcliffe 02:54
Well, what was so interesting for me was that when I first came across this work that you had done, I think it was a Wall Street Journal series following that conference in DC, I asked my students about it, because this was completely new to me. And a bunch of them were like, oh yeah, we've done that. So it was really interesting; generationally, there was an awareness of this, particularly for a lot of entry-level jobs or in certain sectors. And as you say, that's only grown. And so was that kind of growth, particularly post-pandemic, the inspiration for taking this work that you've been doing over a period of years and then turning that into a book?
Hilke Schellmann 03:31
Yeah, I mean, I guess it came from, you know, I did an investigation at the Wall Street Journal, and that was a video, a 10-minute-long video. And I felt like, wow, there's way more to tell here, right? Like, I went to all of the conferences, talked to so many stakeholders, did all the investigative reporting that we do, and I felt like, wow, I didn't have nearly enough room to cram it into a video. And I also felt like video was kind of limited, because it's software that we talk about; it's just not that visual. Ten minutes, fine, and we can push the limits of visualization, but I felt like, I don't think this is the right medium for a longer project. And the same with podcasting: I loved doing a podcast about it, and I think that was pretty successful. But I also felt like, hey, there need to be graphics, there need to be explanations; how do you describe these tools? And I felt like a book is really the best medium. And I guess I'm in a fortunate position where I feel like I can do a lot of the reporting and then think through what parts of the reporting are better for podcasting, what is better for video, what lends itself to print. And in this case, I also felt like, hey, this is a really substantial shift in the industry that I think a lot of people are not aware of, and it needs to have a book, because it's this big a shift. So that pushed me to do the book. And I think the pandemic helped it, in a way, in that a lot more people started to be aware of, like, oh, wait, these tools are out there, and they're used daily thousands, actually hundreds of thousands, of times.
Damian Radcliffe 05:03
So for people who aren't familiar with this, how would you describe what this looks like? And I appreciate, as you said, this is perhaps easier to grasp if you're able to see a video of it, but here we are talking in audio form. How would you describe what this looks like and feels like?
Hilke Schellmann 05:23
I mean, the way I would describe it is, we often think about hiring as sort of a funnel, right? There are different steps in the hiring process. You send in your resume as a job seeker, and you may make it to the next round. But in the first steps already, in the resume screening, a lot of companies use AI. Certainly most of the big Fortune 500 companies, and all of the job platforms, be it Monster or LinkedIn; I talked to them, and they all use some form of AI. And I think that also speaks to what's happening in the industry. A lot of companies complain that they get inundated by resumes, right? They get hundreds, sometimes thousands, if not tens of thousands, of applications per job, and there just aren't enough humans to go through all of this, and we also know humans are biased. So we see AI being used in that stage, but then we also see it used going down the funnel. Often people then have a phone call with a recruiter or a hiring manager, or an assessment; we see AI being used for that, right? We see one-way video or audio interviews where there is no one on the other side, and you get pre-recorded questions that you answer. We now have video avatars doing that kind of work, and now we also have avatars for the job seekers, so there's a lot of AI against AI in this realm. And then we see it in assessments; we see games that people play to find out their personality, and that is all AI-based. And then we see it also in the background check phase: we see AI-based background checks, we see AI-based social media checks. So everywhere in the hiring funnel, we see companies using it. Not every company uses everything, right? And there is no central repository where I can check who uses what, but we see it used everywhere. And you're right, it's also like, if you want a job in investment banking, or banking in general, especially in New York, all of the recent graduates complain about AI screens. All of them: hundreds of video interviews that they have to do every semester to get a job. So it's often used for recent graduates, because you have a lot of people applying, and it's really hard for a hiring manager to decipher, oh, they all have these beautiful degrees, but who's really better than another is hard to tell, right? We also see it in fast food, in retail, in trucking: where you have a lot of applicants and a high turnover rate, as the lingo goes, we see a lot more of these AI screening tools being used. But we've also seen it used for teachers, flight attendants, software developers. All kinds of AI tools are out there.
Damian Radcliffe 08:07
Well, and I'm guessing this also becomes a bit of an arms race, because you've described one of the challenges for a lot of these companies as being inundated with huge amounts of applications. And of course, now in the generative AI age, it is so easy to use those tools for job applications. So the sheer volume and number of applications that companies have to process is only going to grow as well, making it more and more likely they're going to have to use, or feel they have to use, these kinds of tools.
Hilke Schellmann 08:35
Yeah, totally. I mean, since the dawn of job platforms in the early 2000s, when we got LinkedIn and Monster and Indeed and all of those, it's actually gotten easier and easier to apply. It used to be one click; now you can actually have AI generate all of your application materials, and there are AI tools that automatically apply for people. So in surveys of hiring managers, who were already feeling overwhelmed, we've seen the volume go up at least another 50%. You get thousands of applications, you can't have enough humans to go through them, so what do you do? You're going to use machines to go through them. And, you know, the AI vendors play into this, right? They say this finds the most qualified candidates, with no bias, and it makes the work more efficient and cheaper. And it's true, it makes it more efficient, and I think companies save a lot of money. But the problem is, I really haven't found any evidence that they find the most qualified candidates, and I've uncovered a lot of bias in these tools. So actually, the promise isn't there, right?
Damian Radcliffe 09:39
And could you give us perhaps an example of that also, so in your reporting and your investigative work, is there a particular example that kind of really step stands out that demonstrates that gap between the marketing promise and the reality?
Hilke Schellmann 09:54
Yeah, I think two things come to mind. One is resume parsers, which lots of companies use, and AI is actually now built into applicant tracking systems. These used to be glorified spreadsheets that companies used to track, like, oh, this candidate applied; no, she's in this round, or she got rejected. And now that has AI built in. And what I found out comes from people who are brought in when AI vendors and companies talk to each other about whether they should use the technology. Some companies bring in lawyers or folks in that arena, and they get to look into the black box, right? They want to know: what goes on inside this tool? And what they found is that sometimes keywords are being used by the AI tool, like "Thomas," that indicate you would be more successful if that word is on your resume. Obviously, "Thomas" doesn't have anything to do with your qualifications. It's just probably a statistical correlation, right? There's a pile of resumes that you put into the system and say, hey, these are all my software developers at this company; AI, figure out what they all have in common. And apparently, in this one instance, there were lots of Thomases. Obviously, that's not fair. And then we see other tools use words like "African" or "African American" as bias proxies. In one tool, I learned that if you had the word "baseball" on your resume, you got more points; if you had the word "softball" on your resume, you got fewer points. And it wasn't for a baseball or softball coach position; it was an unrelated position. And I think we see this a lot: when the tools aren't supervised, they do statistical analysis. And maybe this was for a company that already had gender disparities in hiring, right? That may have hired more men, and when you give it all the resumes of people currently in the job, maybe a lot of the men in your company like baseball, so then it becomes a criterion when it has nothing to do with the job. And that is how bias can creep into these systems. And we see that again and again, not only in resume screeners but in the videos and in other ways: if you have biased data, you have biased outcomes. So the problem is if we don't check, and we often see that even the AI vendors don't know, because they use deep neural networks. It doesn't matter exactly what that means; it just means that we know the data input and we know the results, but we don't actually always know what the system infers from. What kind of keywords does it use? What kind of facial expressions or other signals does it take into account? And I think this is where bias very easily creeps in, and we don't see a lot of oversight.
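To make the proxy-keyword mechanism described above concrete, here is a minimal sketch of how a naive resume screener can learn exactly this kind of bias. It is not any vendor's actual system; the toy resumes, the hire/no-hire labels, and the choice of a bag-of-words logistic regression are all assumptions for illustration.

```python
# A minimal sketch, assuming a bag-of-words classifier trained on
# past hiring decisions; the resumes and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "Thomas, software developer, baseball team captain",
    "Thomas, backend engineer, enjoys baseball",
    "software developer, softball league, open source contributor",
    "backend engineer, softball coach, cloud certifications",
]
hired = [1, 1, 0, 0]  # labels mirror a historically skewed workforce

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Tokens with the largest positive weights are what the model "rewards".
ranked = sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
    reverse=True,
)
for token, weight in ranked[:5]:
    print(f"{token:15s} {weight:+.3f}")
# "thomas" and "baseball" rise to the top: statistical proxies for who
# was hired before, not signals of qualification.
```

Because the positive labels simply mirror who was hired before, the name and the hobby get the biggest weights; nothing in the pipeline asks whether a feature is actually job-related.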
Damian Radcliffe 12:32
Right. And obviously that's one of the challenges. I think one of the things that's interesting about the book is that you're also looking at some of the solutions to this. So, I mean, it's clear that this AI genie is out of the bottle. We're not going to see a turning off of these types of programs, for all the reasons that we've talked about, but they could potentially be better. So can you tell us a little bit about some of the things that you would like to see and that you are recommending?
Hilke Schellmann 13:05
Yeah, so, you know, there are a couple of things. First of all, companies need to know what they're building. So we need a whole lot more transparency and explainability, right? If the company that builds the system can't explain how some resumes are being put on the yes pile or the no pile, that strikes me as problematic. I don't actually have a problem with that kind of AI being used for, like, our spam filters. I think that's an excellent use, and if it doesn't work, I'm not going to use that spam filter; I'm going to use something else. But these are high-stakes decisions, right? Human futures are on the line, and it matters if I get picked for a job or not. So I do think these are a little bit higher stakes, and we need to know what we're doing. So I want more explainability and transparency. I also want a lot of validity, meaning: does the tool actually do what it says it does? If I play a game for assessing my personality, does it really assess my personality, and is my personality relevant for the job? In one example, I had to pump up balloons until they popped to gather money, and it's supposed to check my propensity for risk taking. And I was sort of wondering, maybe I'm a maverick in video games, and I pump, pump, pump and make a lot of money, but does that really have anything to do with real-world behavior on the job? Am I really a risk-taking maverick? And then also the question is, does risk taking have anything to do with this job, right? Those are questions that are not being answered. So I want to know way more of that. And I think we just have to have more thoughtful, holistic hiring practices. What we did in this first generation is take already flawed processes, because it turns out resumes are not very predictive of future success, though they're one of the few things that we have, and we just digitized that. And now one AI tool could be biased against millions of people, right? Not just one hiring manager who is biased on the one job. So I think we just pushed out the scope, but we digitized, or AI-fied, a problematic process in the first place. So I want that all to change.
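The validity question raised above can also be made concrete. Here is a minimal sketch of a criterion-validity check, assuming hypothetical balloon-game scores and later manager ratings for the same hires; none of this comes from an actual vendor study.

```python
# A minimal sketch: does the assessment score track later job
# performance at all? All numbers are invented for illustration.
from statistics import correlation  # Python 3.10+

game_scores = [72, 55, 90, 41, 66, 78, 59, 85]          # balloon-game "risk" scores
performance = [3.2, 3.2, 3.2, 3.2, 3.0, 3.0, 3.0, 3.0]  # later manager ratings

r = correlation(game_scores, performance)  # Pearson's r
print(f"criterion validity r = {r:.2f}")   # about -0.25 on this toy data
# A weak r like this means the game score tells you essentially
# nothing about performance, however scientific the game looks.
```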
Damian Radcliffe 15:18
Can you say a little bit about the job seeker's perspective? Because I'm curious; I just think back to the taxi driver you had in Washington, DC, who didn't quite understand what was going on. And I also see this when I talk about things like resume screeners with my students and so forth: there isn't necessarily clarity provided to job seekers about how these tools are being used, how they work, how to navigate them, and what AI literacy looks like in this job-seeking space.
Hilke Schellmann 15:53
Yeah, I think that's a real challenge. You know, I've encountered so many job seekers; actually, I think everyone who's done a one-way video interview, where you don't have somebody on the other side and you get pre-recorded questions to answer, thought a human was watching it. And that may be the case, because some companies have hiring managers who review those videos, bless their hearts, because they sometimes have to watch thousands of videos. But a lot of companies also have an AI tool that is being used, and none of the job seekers were aware of it; it may have been in the fine print, but they weren't aware. And I think people also aren't aware, if they upload their resume, of what happens on the other side, right? How they're being assessed. And I think that's another problem. If you play a video game, you probably have a sense that somehow, digitally, your scores will be added up, but you have no idea: was this validated? How does this work? What does pumping up balloons have to do with the job, right? Those are all open questions. So I think job seekers know very little. And I think we see this often come up around litigation, because everyone is sort of waiting: okay, we know that a lot of these tools are flawed, so why aren't we seeing lawsuits in court? And I think the problem is that job seekers first need to be aware that AI is being used, and then you need to show that you've been harmed. And that's incredibly hard, because you are being rejected, and lots of people are too, and how are you going to prove that something fishy has happened, right? I think most job seekers that I've encountered felt like, well, maybe I wasn't the best candidate. But that's often not the case, and I think the feelings of job seekers shouldn't be underestimated. It's hundreds, if not thousands, of applications. So frustrating. I always tell them, it's not you, it's probably the algorithm, because it probably is. And we now know from surveys, and I think this is something we talked about: the companies say this finds the most qualified candidates, the most qualified candidates. And actually, we know from surveys of C-suite leaders that when companies use AI tools, almost 90% said their tool rejected well-qualified candidates. So the people who use the tools also know that they don't work exactly as advertised. It's not the job seeker's fault, but if you're sitting there day in and day out, trying to apply for jobs and sending hundreds and hundreds of applications and not hearing back, it's incredibly frustrating and demoralizing, and it's not working. And then the hiring managers get upset because they feel like, oh, this is all AI-generated, I don't know who the real person is. So it's this AI against AI, and I'm not sure anyone has anything of substance to judge, or that this is really helping the job market.
Damian Radcliffe 18:48
Can you say a little bit about how you are then bringing this learning and your investigative work in this space into the classroom? How are you teaching your students to hold AI accountable?
Hilke Schellmann 19:03
Yeah. So we are doing a couple of things. You know, I teach investigative reporting, and we want to use AI tools when appropriate, right? So this semester, we looked at over 500 lawsuits, and one of the things was, no one really wants to read 500 lawsuits. So we thought, oh, maybe we could use AI, right? It's good at pattern detection. We knew, for example, that in every lawsuit there's a section of allegations, right? Here's what happened. So we knew it was in every lawsuit, and we used a couple of AI tools to pull it out so we didn't have to read it and copy and paste it, because that's human drudgery work; we just wanted this beautiful spreadsheet. And we tested a couple of AI tools, and it just really didn't work, and we had to do most of the work ourselves anyway. So we test those tools, because I do think it's really important as a journalist to understand what's out there and to use it as appropriate, under certain guardrails, right? You really want to think about accuracy, precision, recall: how good is this tool? But you also want to think through, and I think that's really important for investigative journalists: if we get freedom of information requests back, there's often private information in there. Should we feed that into ChatGPT and give this to OpenAI as training data? I don't think so. There are all kinds of things that I brainstorm with my students: how should we tackle this? And I think the interesting thing is, AI tools are out there, so I encourage students to use investigative techniques to fact-check them, right? There are often claims: is this true? It's a very easy thing to investigate, and I think that's a lot of fun. And beyond journalism, I want to teach people to be more critical consumers of this technology, right? Because it's out there, and we all need to critically consume it and talk about it so we can put pressure on companies to do better.
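Those guardrail terms can be made concrete. Here is a minimal sketch of the kind of check described above, assuming a hand-built "gold" spreadsheet of allegations and an AI tool's output for the same filings; all case IDs and claims are hypothetical.

```python
# A minimal sketch: score an AI extraction tool against hand-labeled
# ground truth with precision and recall. All data is hypothetical.
gold = {          # what a human reader pulled from each filing
    "case_001": {"failure to warn", "negligence"},
    "case_002": {"breach of contract"},
    "case_003": {"negligence", "fraud"},
}
tool_output = {   # what the AI tool returned for the same filings
    "case_001": {"failure to warn"},
    "case_002": {"breach of contract", "fraud"},  # invented extra claim
    "case_003": {"negligence"},
}

true_pos = sum(len(gold[c] & tool_output.get(c, set())) for c in gold)
predicted = sum(len(v) for v in tool_output.values())
actual = sum(len(v) for v in gold.values())

precision = true_pos / predicted  # how much of the tool's output is right
recall = true_pos / actual        # how much of the real content it found
print(f"precision={precision:.2f} recall={recall:.2f}")
# Here: precision=0.75, recall=0.60. Low recall means you still have
# to read the filings yourself, roughly what the class found.
```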
Damian Radcliffe 21:06
And presumably journalists, not just student journalists, have a responsibility to do this. So I'd love to know: what would you like to see being done in this space, within professional newsrooms as well as within journalism schools, that perhaps is not being done at the level that we need to see right now?
Hilke Schellmann 21:26
Yeah, I do think that in the near future, actually probably already now, every journalist is going to be a technology or AI journalist, right? It's ubiquitous, moving into every space. Even if you cover an education beat, you can talk, I'm sure, to a lot of students and teachers, and cover the school system and all of the things that you need to cover on your traditional beat, but you will encounter AI technology, right? Because it's pushing into schools in an unprecedented way. And what I feel often happens is that a lot of journalists are very timid. They don't fully understand the technology, as is true of a lot of lay people, and they just take what companies say as the truth, right? We would never believe a shoe manufacturer who told us this is the best shoe ever; we would start asking experts, right? Third-party experts who don't work for the company: well, is this shoe actually extraordinary? Can you explain, yada yada yada? But for some reason, we believe it when chief technology officers of AI companies tell us, well, this will revolutionize mental health or therapy. I'm sort of like, oh yeah? What do you know about therapy? Are you a mental health professional? So I think we need to be much more skeptical and test these companies' claims. And it's really hard, because the algorithms are often black boxes; the companies say they're proprietary, but there are different ways to get at that as journalists. So I want to foster that critical mindset in students, but also in working journalists. And I think we need to embolden them: even though it seems quite complicated, it's our job to cover complicated things, to dig deeper and critically examine the tools and technologies.
Damian Radcliffe 23:15
Well, it sounds like there's also a wider societal technological literacy piece that needs to be factored in.
Hilke Schellmann 23:21
Yeah, totally. That's another thing we need to tackle: AI literacy, and having people assess and understand this technology in a much more nuanced way. I don't think we all need to know exactly what's under the hood, but we need to know generally what's happening. I mean, I think there was an article in The New York Times this week about people who actually thought ChatGPT was connecting them with dead people; they thought there was some sort of sentient computer there that was actually talking to them. And I think that is quite telling: people don't understand that this is a statistical word generator. It's not sentient, it's not a cognitively functioning person, it's not thinking. It's just a software tool that can produce authoritative-sounding, predictive text. And that shows the gap in AI literacy, which is humongous.
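The phrase "statistical word generator" can be illustrated in a few lines. Here is a toy bigram model that picks each next word from counts over a tiny, invented corpus; real systems use enormous neural networks over tokens, but the output is still a predicted continuation, not thought.

```python
# A toy "statistical word generator": sample the next word in
# proportion to how often it followed the previous one. The corpus
# is invented; real models are vastly larger, but the principle of
# predicting a likely continuation is the same.
import random
from collections import Counter, defaultdict

corpus = ("the tool predicts the next word the tool sounds "
          "authoritative because the next word is likely").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

random.seed(0)  # reproducible toy output
word, output = "the", ["the"]
for _ in range(8):
    counts = bigrams[word]
    if not counts:
        break
    word = random.choices(list(counts), weights=counts.values())[0]
    output.append(word)
print(" ".join(output))
```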
Damian Radcliffe 24:30
And how are you addressing some of that in terms of the things that you're going to be doing next? Because the book came out last year, I think.
Hilke Schellmann 24:39
Yeah, it came out about a year and change ago, yes. I do think that we need to really push into journalism; we need to build this field of AI competency for students and existing journalists, but also learn the lessons from other journalists: how can we hold these tools accountable, right? I'm not the only one who does this work. And so I'm building out this new initiative for journalism, technology, and society, to educate, but also do research and do some tool building, because some of the research has shown, hey, this works and this doesn't work, and I think that should help us when we build tools. I do strongly believe that it's really important to critically assess the technology. But I also feel like, well, tech companies are going to build something toward profit maximization, good for them, but I think there are actually many more use cases for this AI technology that are not going to be covered, because they're probably not going to make a huge income. So why don't we build those tools? If we want to build a better world, let's try to build these tools. So that's part of it, and I think the critical analysis and testing of the tools is part of that education. So I see it as a full circle where all these things inform each other. I also think a lot about future threats to journalism, or probably more near-term threats, unfortunately. When I saw AI avatars interviewing job seekers, but also AI avatars posing as job seekers, I did wonder: how on earth are journalists going to figure out if the person on the Zoom is actually a person? There are all kinds of things that we see that we need to address. So I want to build this initiative so we can educate and think about this, because I think we need to educate ourselves, educate students, and educate the world and the universities that we work in about AI literacy, and I do think that journalism is especially well situated to do that.
Damian Radcliffe 26:42
Well, I know when you shared your research with our students, when you were visiting us here on campus, they were really captivated by the work that you were doing and very inspired by it. I wondered if there was anything that you took away from the visit as well. Do you have any particular things that really resonated with you or stuck with you as you look back six months on from your visit out here?
Hilke Schellmann 27:08
Yeah. I mean, I thought everyone was just so warm and curious. Everywhere I went, I shared parts of my research, and I was so blown away by so many students taking it and running with it and asking these really smart questions. I was like, I haven't thought about that; that's a good idea. I was really impressed. So I felt like, oh, there is a lot of critical thinking already at work, which I think is one of the key abilities that we need to teach folks, right? Sort of media literacy, now that we see a lot of people getting their news on social media versus in self-contained websites or apps from news companies, where you feel like, okay, I can trust this because it's on the New York Times app. Now it's free-floating, and we need to be much, much better at assessing this kind of information, because we see a huge problem with disinformation, right? People think vaccines kill people now, and all of these kinds of things where I feel like, wow, we have a lot to do as journalists, whose job is to inform the public about what is factually correct. And there's a lot more work to do with, unfortunately, a little bit fewer resources, but that didn't make me less determined to do the work.
Damian Radcliffe 28:27
And it sounds like you have hope that the next generation of journalists gets this. They understand this, they are curious, they're growing up with this technology, but there are also just opportunities for them to do really meaningful and impactful work.
Hilke Schellmann 28:46
Yeah, I totally agree. I mean, this is the time, right? We are at the moment where AI is pushing into journalism and into our world at large. So I think there's a ton of opportunity for assessment and reporting, and there's also a lot of opportunity to understand what's not working. And now, you know, it becomes much easier for folks to vibe code; you don't even have to be a computer scientist to do some coding, build your own tools, figure it out. And I think by doing that, you'll learn a lot about what works and what doesn't work. I was really surprised when I looked at some of the AI used in hiring; I was like, wow, some of this is very basic. It's just not as sophisticated as we think it is. I think about, wow, you use facial expression analysis to assess how good people are going to be at the job, and I'm like, but what if people have different facial expressions? Couldn't that be biased? We know that today, but we see companies doing it, and I'm like, really? And when you ask the companies, one of them told me, oh yeah, we figured out when we did the analysis that if you use the word "I" a lot, you're not a team player, and if you use the word "we" a lot, you're a team player.
Speaker 1 30:10
And that feels like a little bit of a basic analysis. And I'm like, isn't that dependent on the context of what questions you ask me?
Hilke Schellmann 30:19
You know, if you ask me, how have you overcome a team challenge, or have you ever dealt with a difficult situation at work, maybe that shapes how somebody says "we" and "I," and that seems very basic. So I do think this is an opportunity for a lot of people to push into that space and help us figure it out. And I think it needs to go beyond journalists; I think we need a much, much broader coalition to work on this critical assessment and also to build better tools.
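To show just how thin the pronoun heuristic described above would be, here is a minimal sketch; the scoring rule and both answers are invented, and they come from the same hypothetical candidate answering different questions.

```python
# A minimal sketch of a pronoun-ratio "team player" score. The rule
# and the answers are invented; no vendor's actual model is shown.
def team_player_score(answer: str) -> float:
    words = answer.lower().split()
    i_count = words.count("i") + words.count("i'm")
    we_count = words.count("we") + words.count("we're")
    total = i_count + we_count
    return we_count / total if total else 0.0

a1 = "We split the work and we shipped on time."  # asked about a team challenge
a2 = "I rebuilt the pipeline and I cut costs."    # asked about a proudest achievement

print(team_player_score(a1))  # 1.0 -> "team player"
print(team_player_score(a2))  # 0.0 -> "not a team player"
# Same hypothetical person, opposite scores: the signal tracks the
# question wording, not actual collaboration skills.
```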
Damian Radcliffe 30:47
Great. Well, that's an excellent clarion call for us to end our conversation with today. Hilke, it's been so great to chat with you. Thank you so much for sharing your time and insights with us today.
Hilke Schellmann 30:57
Oh yeah, anytime. Thank you for having me.
Damian Radcliffe 31:06
Thank you for listening to the Demystifying Media podcast. If you enjoyed this episode, please hit like, feel free to subscribe, or drop us a comment to let us know your thoughts about this series. And while you're at it, why not check out another podcast that we produce, The Next Generation Leaders? The podcast features fresh conversations with young alumni from the University of Oregon School of Journalism and Communication. Just search for The Next Generation Leaders podcast on all major podcast platforms.