Recruitment and Beyond

E3S7 - The Truth About AI in Recruitment with Andrew Nicholson

Eden Scott and Beyond HR Season 3 Episode 7


In this episode of Recruitment and Beyond, Ewan sits down with Andrew Nicholson, CEO of Kulea, for a straight-talking and genuinely useful exploration of what a modern recruitment and HR tech stack should look like. The conversation digs beneath the surface of AI and automation and tackles the practical realities that leaders, recruiters and HR teams face when trying to make technology work in fast-changing organisations.

Andrew explains why data quality is the quiet foundation that determines whether any of these tools succeed. He shares how AI amplifies poor data, how false engagement signals from email bots can skew automated workflows, and why so many businesses unintentionally undermine their brand through irrelevant or misdirected communications.

The discussion also covers where automation genuinely adds value, why companies often fail to see productivity gains even after introducing new systems, and the role of thoughtful human oversight in keeping everything on track. Ewan and Andrew break down the misconceptions surrounding AI, including what it can and cannot do, and explore how upcoming regulation will push organisations to be more transparent about how machine-assisted decisions are made.

From personalisation at scale to the dangers of automating a broken process, this episode offers clear, practical insights for anyone reviewing their tech stack or planning a digital transformation. Andrew brings real-world examples from across the recruitment landscape, along with advice on future-proofing, building connected best-of-breed solutions and developing teams with the skills and mindset to thrive in a more automated world.

It is an open, engaging and often eye-opening conversation that helps leaders cut through the hype and understand how to get real value from their technology.

Follow our social channels where we continue these conversations!

Eden Scott LinkedIn
Beyond HR LinkedIn 

Ewan (00:07):

Hi, welcome back to the Recruitment and Beyond Podcast. We are joined today by Andrew, who is the CEO of Kulea. Now, Kulea is the marketing automation tool that we use here at Eden Scott, and we thought it'd be a good chance to chat to Andrew about, well, basically the full tech stack in terms of recruitment and HR, how that's changing, and what it looks like these days. So Andrew, great to have you with us.

Andrew (00:33):

Thank you very much. There's movement in the background...

Ewan (00:36):

Yes, so I'll introduce Roxy. Roxy's joining us on the podcast today as well.

Andrew (00:43):

Roxy spends 90% of the time fast asleep on the sofa behind me, but literally as soon as we started talking, she decided to get involved.

Ewan (00:51):

Well, I look forward to Roxy's contributions to the podcast as well. That's good. But as I say, we're going to have a bit of a discussion on the tech stack today. It's changing fast and we're all trying to adapt. So I suppose I want you to give the audience a bit of an insight into what Kulea does, your role there, and the journey it's been on.

Andrew (01:16):

Yeah, fantastic. Well, I'll give you a bit of an overview. It's been a rollercoaster. Kulea is an email marketing automation platform; think of us as akin to something like HubSpot's marketing suite or Mailchimp. But our key differentiator is that we act as a layer: we plug into ATSs and CRMs. So as an example, you work with Vincere as your ATS, and we have a native Vincere integration. What that allows you and our customers to do is communicate at scale. What we have found over the years is that a lot of clients use their ATS or CRM as, if you like, a data dump.

Ewan (02:02):

Yes.

Andrew (02:04):

It becomes a database, a repository for all the data they've got in there, and it's not used as a real, dynamic marketing or communication tool. What Kulea does is access that data and give you the means to communicate at scale, to track, to enrich, to make more of that data, to make more of your dormant database.

Ewan (02:26):

Yeah, absolutely. So often that data is just put in there and then forgotten about as people crack on with other parts of the business. And good data is such a vital part of marketing, in fact of any business. That actually takes us onto the first question, which is around the AI integration that a lot of people are working on these days in the systems they're bringing in. A lot of this depends on the quality of your data, the data foundations you're building your business on. So how impactful is poor quality data to the AI systems that people are looking to bring into their organization?

Andrew (03:10):

Yeah. The simple way of describing it: AI doesn't fix a bad data problem, it massively amplifies it. If you put garbage in, you get very confident, very plausible-sounding, very efficient garbage out. We always used to talk about this in database terminology as crap in, crap out. It's exactly the same, but at scale with AI. And it's almost that plausibility that is the biggest trap to fall into. It will analyze your database. It will tell you: these are the right candidates, these are the right clients, these are the customers you need to be talking to, this is what you should be talking about, and this is how often you should be talking. It sounds very convincing. But if the data isn't complete, or even worse, you're getting false flag data, which happens a lot and which I can talk more about later in the conversation...

(04:07):

You will find that you're just firing off in all the wrong directions. And actually what you're doing is you're damaging your reputation, you're damaging those relationships. So it's absolutely critical to make sure you've got the data foundations in place before you do anything else. And it's exactly the same for marketing automation, to be fair. It's all about communication.

Ewan (04:27):

Yeah. I don't think people quite recognize, or maybe I'm doing people a disservice there, but I don't think people quite recognize the value of good quality data to these systems, because the AI, as you say, is working off whatever it can use. If the data there is wrong, then it'll give you the wrong answer. It might not seem like the wrong answer, as you say, but it will be. And the impact on your brand is considerable, because we're all awash with emails and communication in various forms. If it's poor and it's not relevant, it's actually worse for the brand, isn't it?

Andrew (05:02):

Yeah, absolutely. And I talk through a lens of staffing, recruitment and ATSs, but going back to your point earlier, this is not a staffing and recruitment problem. It's ubiquitous across all industries. The first thing we do when we bring a new Kulea client on board, before we even connect to their ATS, is say, "Look, let's audit the data." And what we'll tend to find is that 60 to 70% of that data is dormant. It's not being communicated with. And I don't even want to say "it", because this isn't just data; these are real people we're talking about. They've not been communicated with in six, twelve months, specifically on the client-side data. And when we run a validation process on that, we'll find that 30, 40% of the data is invalid. These people don't work at these companies any more.

(06:05):

In B2B, people move on, people retire, job sectors change, salary expectations change. Everything changes; the only thing you can guarantee is change. And if your database, your CRM or your ATS, is just a snapshot of a single moment in time, then it's going to fail in its duty as a CRM. It stands for customer relationship management, and you can't have a relationship with someone unless you know them.

(06:32):

So back to basics on that one, I think.
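
To make that audit concrete, here is a minimal sketch in Python of the kind of dormancy-and-validity check Andrew describes. The record fields, the six-month dormancy threshold and the syntax-only email test are illustrative assumptions, not Kulea's actual process; a real audit would run against your ATS's schema and a proper email validation service.

```python
from datetime import datetime, timedelta

# Hypothetical contact records, shaped like a simple ATS/CRM export.
contacts = [
    {"email": "jo@acme.example", "last_contacted": datetime(2024, 1, 10)},
    {"email": "sam@oldco.example", "last_contacted": datetime(2025, 11, 2)},
    {"email": "not-an-email", "last_contacted": datetime(2023, 6, 1)},
]

DORMANT_AFTER = timedelta(days=180)  # "not communicated with in six months"

def is_plausible_email(address: str) -> bool:
    """Cheap syntax check only; a real audit would use a validation service."""
    return "@" in address and "." in address.split("@")[-1]

def audit(records, now):
    dormant = [r for r in records if now - r["last_contacted"] > DORMANT_AFTER]
    invalid = [r for r in records if not is_plausible_email(r["email"])]
    return {
        "total": len(records),
        "dormant_pct": round(100 * len(dormant) / len(records), 1),
        "invalid_pct": round(100 * len(invalid) / len(records), 1),
    }

# A fixed "now" keeps the example deterministic.
print(audit(contacts, now=datetime(2026, 1, 1)))
# {'total': 3, 'dormant_pct': 66.7, 'invalid_pct': 33.3}
```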

Ewan (06:34):

Yeah. And I suppose that goes for automation as well. We started there on AI, but obviously there's quite a lot of crossover, and I think some people definitely mix them up. Kulea's known for automation, so in your experience, where can it add the most value, and where can it really cause problems and go wrong if the data's not right?

Andrew (06:59):

Yeah, I've got a lovely example. I'm sure you know Bullhorn, the big ATS, arguably one of the world's most popular. They've got, and I'm sure they'll forgive me for saying this, a fairly significant problem with false flag events from email. So for example, you send out an email campaign and get lots and lots of bot clicks, bot opens, false engagement. And these aren't bad, malicious bots. These are the Googles and Microsofts of the world scanning your emails, checking to make sure your content is safe, making sure there's no malicious code in there or anything like that. So they're doing a good thing. But if you are basing your automations off the back of those events, those open events and those click events, and they're not genuine, then you're opening yourself up to a whole world of pain. And it's not just Bullhorn that suffers this.

(07:58):

Again, it's ubiquitous across all email marketing. Email bots will open and check emails, so it's about making sure that the data you're using filters those out, and that there's a level of human oversight. I think we'll probably come back to this point again and again during this conversation, specifically with AI but also with automation: you need to have a human in the loop. That level of common sense is the difference between a successful communication campaign and a successful relationship, and one that is burnt very, very quickly.
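
As a rough illustration of screening out that false engagement before it triggers an automation, here is a hedged sketch. The scanner user-agent strings and the "opened within seconds of delivery" heuristic are assumptions for the example, not Bullhorn's or Kulea's actual logic.

```python
from datetime import datetime, timedelta

# Illustrative scanner markers; real lists are longer and actively maintained.
KNOWN_SCANNER_AGENTS = ("GoogleImageProxy", "Microsoft Office", "Barracuda")

def is_probable_bot(event: dict) -> bool:
    """Heuristics only: scanner user agents, or opens implausibly soon after delivery."""
    if any(agent in event["user_agent"] for agent in KNOWN_SCANNER_AGENTS):
        return True
    # Humans rarely open an email within a second or two of it arriving.
    return event["opened_at"] - event["delivered_at"] < timedelta(seconds=2)

events = [
    {"user_agent": "Barracuda/1.0",
     "delivered_at": datetime(2025, 1, 6, 9, 0, 0),
     "opened_at": datetime(2025, 1, 6, 9, 0, 1)},
    {"user_agent": "Mozilla/5.0 (iPhone)",
     "delivered_at": datetime(2025, 1, 6, 9, 0, 0),
     "opened_at": datetime(2025, 1, 6, 9, 47, 12)},
]

# Only genuine engagement should ever feed the automation triggers.
genuine = [e for e in events if not is_probable_bot(e)]
print(len(genuine), "genuine open(s)")  # 1 genuine open(s)
```

The point is simply that the raw event stream gets filtered before any automation fires; the human oversight Andrew describes then sits on top of that filter.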

Ewan (08:36):

Yeah, absolutely. I think that human insight is going to become more and more important as we go forward, because people are trusting these systems now without really interrogating what they're doing and what their output is. It seems plausible and the information seems relevant, but actually, as you say, once you dig into it, it starts to open up a can of worms.

Andrew (09:01):

Absolutely. Critical thinking. I know this is something we'll probably be discussing later, but critical thinking has become, funnily enough, critical. You need to be able to understand not just the data that's coming back (you don't need to understand it deeply) but the reasoning behind it. Why has that data come back? Why has the agent returned this data rather than that data? What lens is it being evaluated through? And if it's being evaluated through, I'll say staffing and recruitment because it's the space we know best, historic data sitting in your ATS, then that data has been created and updated by people. And people, as we know (I studied behavioral economics and digital psychology at university), come with all sorts of internal biases, and those biases are reflected in the data if it's historic data.

(10:05):

And if you are then using AI on that data, the biases get amplified. It's about being aware of these things and making sure that you account for them. If you were sending out communications, for example, you wouldn't want to use protected traits in your segmentation. You've got to be very careful about things like that, to make sure you're not reinforcing those biases, whether they're human or AI.
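
One minimal guard that follows from this, sketched under the assumption that segmentation features live in plain records: strip the protected attributes before any rule or model ever sees them. The field names here are hypothetical.

```python
# Hypothetical protected attributes that should never drive segmentation.
PROTECTED_TRAITS = {"age", "gender", "ethnicity", "religion", "disability"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record with protected traits removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_TRAITS}

candidate = {"skills": ["plumbing"], "sector": "construction", "age": 54}
print(strip_protected(candidate))
# {'skills': ['plumbing'], 'sector': 'construction'}
```

Dropping those fields is only the minimum bar: proxies for protected traits can survive in the remaining data, which is exactly why the human oversight discussed above still matters.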

Ewan (10:37):

That is interesting, actually, and we'll maybe come back to this, but that subconscious bias will in some cases absolutely come through, because that's just the nature of it. And it will be reinforced as the data is used, enhanced and built upon, because AI and all these systems learn from what's there. They won't create their own hypotheses.

Andrew (11:11):

Yeah. I mean, you saw this with the big tech AIs when they were first launched, the Groks and the ChatGPTs. I'm making a bold claim here: the internet is built on bias. And if AI is trained on that data, it's going to reflect it, and you need to consciously program that out. You can't just accept it and go, "Yeah, that's fine, we'll just take the data as it comes." You need to code the algorithms so that they make allowance for those biases and overcome them.

Ewan (11:47):

Absolutely. So I guess what you were suggesting there touches on the next question, which is about how people design their systems to do what they're looking to do. Instead of really going back to basics and asking what they want to achieve, some people try to retrofit these automations and processes onto their existing approach. Why is that a bad approach, and what sort of challenges do you see when people attempt it?

Andrew (12:15):

I mean, you get this a lot: shiny new thing, great, plug it in, brilliant, how's it worked? No. Of course it doesn't work like that. You don't want to be starting with the shiny new thing. Start with the process. Look at the business, and you'll see this time and time again. Look at the business processes. Look at where you are currently. Look at where you want to be. Do a gap analysis. It's a very simple strategic exercise. Once you've done that gap analysis, you can say, okay, now what are the tools I need to plug that gap? And just because AI is there doesn't mean AI is the right answer. It can help. I mean, I love AI, I absolutely love it, but honestly it also drives me up the bloody wall. It does make you more efficient, but it creates a whole other set of problems too.

Ewan (13:07):

Well, that is the interesting thing here, because we get approached a lot about recruitment tech, which is fair enough; that's how you go about selling these things. But a lot of it is sold on efficiency, which is essentially saving you time. And we know that systems quite often do save people time, and yet we don't seem to see improved productivity across the company, the organization, or the country for that matter. So why do you think that is? What are we missing? Because ultimately that leads to people not renewing systems: "Well, we haven't seen any improvement in performance." It's a bold question, because obviously you can't have insight into every company, but I'm interested to know why that doesn't seem to work.

Andrew (14:00):

Yes, but I can understand it, and it's not a technical problem. The efficiencies are there to be had and the tech does deliver them. It's a managerial oversight problem and a process problem. Just because you get an extra two, five, ten hours a week freed up doesn't mean that me as a consultant or me as a talent manager then goes, "Great, I've got two hours free. I'm going to make those two hours really efficient and do really productive stuff." That's not how human nature works. What you're going to do is go, "Okay, great, I've got two hours, I'll fall back into my comfort zone," and that might be doing admin or whatever. You're freeing up those hours, but how are those hours being spent? That's where those productivity gains get lost.

(14:49):

And that needs to be factored in from day one. It's not just saying, "Here's a great bit of kit, it's going to give you three hours more a day," then wandering off and not tracking it. What are the KPIs you're looking to achieve from this? What are the outcomes? What are you targeting the staff whose time you're freeing up to then do with that time? You need to be making sure you're measuring the right things.

Ewan (15:13):

I think that's a critical part, and it leads on to the next question, which is that it has to be built in from the start, doesn't it? Any tech you bring in, any tech stack, should come from the business case, the business value point of view. As you touched on, do a gap analysis and work out where the problem is, rather than retrofitting and saying, "Well, we've seen this shiny bit of kit, let's get it in and see what we can do with it." Because then you're not thinking about what we're saving time for. A good example for us might be CV parsing or CV formatting. That maybe takes however many hours throughout the week. Once we save that time, what are we going to do?

(16:00):

How are we helping the team and encouraging them? How are we managing that? And that's the big thing. So there does still seem to be a knowledge gap around what AI can and can't do, but you need to start at the start, don't you? Start with: where are our business challenges? Identify those, because then AI becomes useful. It becomes a useful tool for organizations. So do you want to talk to us a little bit about those misconceptions around AI, what it can and can't do, and where it can add value to businesses?

Andrew (16:34):

Yeah, absolutely. Going back to the point you were making there about those freed-up hours: the way we look at it is, if you do the same job more than once and it's the same each time, then you can automate it. There's a mix of AI and automation in there. We're obsessed with that stuff, so I wouldn't expect everybody to have the same level of enthusiasm for automation, but if you are seeing repetitive tasks, there's a time saving to be made. AI is great at that stuff, automation's great at that stuff, and I tend to use the two terms interchangeably because they both solve the same problem; you'll find that as I'm talking.

(17:14):

Where AI cannot step in and fill the breach is what we're doing now: this human engagement, these new relationships. I think especially in staffing and recruitment, but actually in a lot of spaces, you're going to see a move towards: okay, you've freed up three hours here, great, that should translate into ten more face-to-face conversations, ten more cups of coffee with the client or the customer, meeting up with candidates, having calls like this. Where is that productivity moving to? Going back to the previous question, if those KPIs are being measured, make sure those productivity gains are in the places where AI doesn't tread. It's being able to read the nuances of a room: when you're asking a candidate questions, reading their body language, noticing them looking away or not making eye contact; reading a hiring manager, who might tell you what they want, when what matters is understanding, from experience, what they need rather than what they want. Understanding those nuances is so critical.

(18:27):

And at this point in time, that is not where AI is.

(18:36):

I made a note of this. I'm reading a great book at the moment called, and I apologize, Enshittification. If you look in the dictionary, I think it is now in the dictionary; it has certainly entered the lexicon of business language. And this is what I'm worried about for AI, because what does AI do really well? It's brilliant, and it's getting better and better and more and more powerful. Some people get scared of that, potentially rightfully so, and I know we've had conversations in the past about the upcoming end of the world and all of us serving our AI masters. That doesn't keep me awake at night. What keeps me awake at night is the enshittification of AI. And I've seen this; I posted about it last week on LinkedIn. We're already starting to see the encroachment of enshittification. Enshittification is where you have a great product, a great platform, a great technology, and the first thing you do is squeeze your customers, the end users, dry. You make it as crap as you can get away with and still have them using it.

(19:46):

Then you apply the same logic to the advertisers, the brands, the business clients you're working with, and do the same thing to them. And you end up with a product that is barely fit for purpose but just about gets by. We've seen it with Google, we've seen it with Facebook, we've seen it with all the social media platforms. On day one it was a great social space. Everybody climbed in, and you had a lovely kind of social momentum there.

(20:15):

They started selling adverts, and people would post content on there, and it was great, because you'd see content from brands you liked and followed and pages you liked. So you were okay with that. Then it slowly encroached, until suddenly you were seeing advertising from brands you weren't following and had no interest in whatsoever. Then the brands had to pay more and more to get their content seen, and it's all become a bit shit. I'm starting to see this with ChatGPT. I don't know if you use it, but it has started putting clickbait-y questions at the end of every response, and it's doing that because they've changed the algorithm: they're now looking for engagement as a key output. And just after you start monetizing for engagement, you're going to start monetizing for advertising.

Ewan (21:06):

Yes.

Andrew (21:08):

It's inevitable. I can see Sam Altman and his band of merry sycophants rubbing their hands with glee at how to monetize ChatGPT even more. Not only are we going to charge the hell out of the companies using ChatGPT, we're also going to bombard the crap out of our audience. And it's going to get harder and harder to get the unbiased information you want out of the AIs, because they are deliberately putting the brakes on it: a quick and easy answer doesn't give them the chance to serve more advertising, more sponsored content.

Ewan (21:47):

That's interesting.

Andrew (21:48):

That's what keeps me awake at night: that something potentially game changing, world changing, really good, will over time deliberately become crap as a monetization strategy. And that winds me right up.

Ewan (22:05):

That is fascinating, because I suppose I looked at it from a different perspective, but you're absolutely right. That clickbait approach is exactly what they're doing to drive the engagement that advertising needs, isn't it? So that's not ideal. But going forward then, and this is a more challenging question, given what we've just talked about and the nature of the changes that are likely to come: looking at it from an HR or a recruitment point of view, what do companies need to do to get their business ready to do this properly, to really maximize what they've got and make use of automation and AI? Whether that's the data, but also the capability within the organization to be ready to implement this sort of stuff.

Andrew (23:01):

Yeah, how do they prepare? The most common mistake we see is automating a broken process. You just end up with a broken process that runs faster and at higher volume.

(23:12):

So start with the outcome, not the tool; I think I mentioned this before. What does a great candidate or hiring manager experience look like at each stage of the hiring funnel? Then work backwards from there. Which parts of the journey are currently inconsistent or slow? Where are the human bandwidth constraints? A lovely automation we do: when a candidate's interview is confirmed, we make sure an email goes out the working day before that interview. It's just a reminder to say, "Good luck tomorrow," or "Good luck at the interview next week. Hope it all goes well. Give me a shout if you've got any questions." It's such a small bit of communication, and it should be built into every candidate interview, but we all know it isn't, because consultants are busy, hiring managers are busy, and it gets dropped.

(24:16):

So something like that is a consistent touchpoint of what good candidate experience should look like. The five-day follow-up with NPS scoring (how did it go? how did we perform?) should be fully automated too.
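
A minimal sketch of that reminder automation, assuming a simple "previous working day" rule that skips weekends only; public holidays and the real email queue are left out.

```python
from datetime import date, timedelta

def previous_working_day(d: date) -> date:
    """Step back one day, skipping Saturday and Sunday (holidays ignored here)."""
    d -= timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def schedule_reminder(candidate_email: str, interview_on: date) -> dict:
    # A real system would enqueue this with the email platform; we just build it.
    return {
        "to": candidate_email,
        "send_on": previous_working_day(interview_on),
        "body": "Good luck tomorrow! Give me a shout if you have any questions.",
    }

# Interview on Monday 10 March 2025 -> reminder goes out on Friday 7 March.
print(schedule_reminder("jo@example.com", date(2025, 3, 10)))
```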

(24:33):

When a new candidate comes on board, they should be getting an email from the CEO making them feel valued. For all your communications, all your automations, ask yourself: does this make the recipient feel valued and respected? I'm not a big fan of automating rejection. That to me is too personal; it warrants a human touch, and I know that's hard to do at scale. I get that. If you've got a thousand candidates applying for one job, emailing every single one of the 999 who missed out to say, "Sorry, you didn't make the grade," is really hard. But there's going to be a middle ground there, a question of how we can personalize. And it's things like that where managers need to start: look at the process, look at the funnel, see how people go through it, see where the bumps are in that pathway, and ask whether we can smooth those bumps out with automation, or with AI indeed.

Ewan (25:41):

So as we go forward, and obviously there are AI systems looking at the full hiring process, we've always been of the mindset that it's a very personal thing, and that the relationship between the hiring manager or the recruitment consultant and the candidate is important. Do you think people are still looking for transparency in that process? If there is an AI involved in the decision making, is it important that organizations, whether that's a recruitment business or any organization with an HR team, are open about that? Do you think people still value that in their interactions with companies?

Andrew (26:25):

I absolutely think they value it, but whether they value it or not is becoming almost irrelevant, because the EU AI Act says that you have to be transparent. You can't get away with running AIs in the background. If somebody asks, "Okay, so why did you choose me? Why did you not choose me?" and you've left that decision to an AI, that's a problem, because this is critical decision making. It's flagged as high risk by the EU, so you are going to have to have full accountability for why you made those decisions. And if those decisions have used an AI, then you need to be able to explain the decision making process that drove the AI.

(27:15):

And going back to the earlier point, if you're just relying on historic data and its biases are coming through, then you are liable, and there's potentially a hefty fine coming your way. So there's a whole other level to this. Yes, we should be communicating and we should be transparent, but you're also going to have to be. It's like the GDPR moment for AI: you've got to have everything documented and articulated, because people will be right to ask those questions. And if you didn't get the job because your CRM data was missing a couple of fields, and actually you are qualified, you do have those skills, and it was just a poor data problem, and you missed out because the AI didn't surface that data, then yeah, absolutely, you'd be right to ask.

Ewan (28:08):

That's massive. And do you think enough people understand that in your experience?

Andrew (28:13):

Not yet.

Ewan (28:14):

No.

Andrew (28:16):

But they will.

Ewan (28:17):

Well, fines tend to do that.

Andrew (28:19):

Yeah, absolutely. And the irony here is they'll be using AIs themselves to write their legal letters.

Ewan (28:32):

Yes. Oh my goodness, where does that end? I'm interested in this because I know it can go to the nth degree, but personalization is a key part of what Kulea does, and it's also a key part of where things can go with AI and automation. It's arguably a better experience for candidates and clients. So what sort of things should people be looking at in terms of personalization? What's too far, what's not far enough, and what's the sweet spot? I appreciate that's a million dollar question, but it's about trying to work out what that looks like.

Andrew (29:16):

It's a great question. I spend a lot of time on this, as does my team; I look at them in awe. I do spend a lot of time using... Are you familiar with Clay, Ewan?

Ewan (29:28):

Clay, yes, yes. Yeah.

Andrew (29:29):

So Clay's brilliant. Clay to me is the ultimate personalization engine. And I say that because personalization to me is not using someone's first name. It definitely isn't. It also isn't scraping an opening line for an email from your LinkedIn profile; those drive me up the bloody wall. I get those emails, I post about them on LinkedIn, and they wind me right up. Good personalization is about being aware of the context, and this is where Clay comes in really handy. We've got a staffing and recruitment client that works in the hospitality industry. We go out and find client data for them based on hotels and events that are recruiting at that particular moment in time. They've got a recruitment drive, we reach out, we pass that communication across at the right time. That's personalization: they have a need. I have another client who works in the construction sector.

(30:37):

What we're doing there is working with a construction planning tool. The moment planning permission is granted for a construction project, we know who the primary and secondary contractors are. We take that contractor data, put it into Apollo, and find the hiring managers. We then manage the outreach through Kulea, talking to them about, "Hey, I noticed you have this particular project coming up. Do you need to staff it?" And the answer is yes. That, to me, is personalization. It's about understanding the context, the timing and the needs of the recipient of your communications, and making sure they map to the communications you're sending out. It's about channel as well. Do they want to be contacted by email? Do they want a phone call? A postcard? A carrier pigeon? It's about understanding and respecting who they are, and communicating with them based on their needs.

(31:34):

This is a use case where AI genuinely knocks it out of the park: being able to look at those intent signals and map them. Claygents are great. There's an AI agent in Clay, if you didn't know, hence the play on words. They will take that information, personalize it, create a compelling narrative around it, and then feed that into your outreach, communication and marketing engine.
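
Stitched together, that planning-permission workflow is mostly plumbing. The sketch below is a pseudo-API illustration only: the three functions stand in for the planning-data feed, an Apollo-style lookup and a Kulea-style send, and none of them are real endpoints.

```python
# All three functions are stand-ins for the real integrations mentioned here:
# a planning-data feed, an Apollo-style contact lookup, and a Kulea-style send.

def fetch_granted_permissions() -> list[dict]:
    """Pretend planning-data feed: projects that just got permission."""
    return [{"project": "Riverside Quay", "contractor": "BuildCo"}]

def find_hiring_managers(company: str) -> list[dict]:
    """Pretend enrichment step: who hires at this contractor?"""
    return [{"name": "Pat", "email": f"pat@{company.lower()}.example"}]

def queue_outreach(contact: dict, project: str) -> None:
    """Pretend send step: in reality this goes through the marketing platform."""
    print(f"To {contact['email']}: noticed {project} is kicking off - need staff?")

# Signal -> enrichment -> timely, relevant outreach.
for signal in fetch_granted_permissions():
    for manager in find_hiring_managers(signal["contractor"]):
        queue_outreach(manager, signal["project"])
```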

Ewan (32:05):

Yeah.

Andrew (32:06):

Really, really useful. And even if it's just snippets of information for your consultants to read, company profile information, context, what's been going on in the news recently. Compiling that publicly available data into something that is then usable and relevant to the recipient, that really helps.

Ewan (32:26):

And would you use an AI agent to oversee that, or some kind of support, or is it always going to be a human review? From my point of view, if there's an email, a social post or some other communication going out, you want to understand what's working and what's not, particularly if you're doing it at scale. And you really want to delve into that so it can optimize itself and learn from what it's putting out there.

Andrew (32:58):

Yeah, completely. I'm a massive fan of human in the loop, as the expression goes. I don't believe there are use cases for AI going out into the wild unsupervised. You need to have that human oversight. You need to always be A/B testing, going back to your point: what's working, what's not, and optimizing accordingly. And if you are using AI to optimize, be aware that AI optimizes to the nth degree, to the point where it starts fabricating communications in order to get engagement. So again, you've got to be very careful. We're potentially working with a PR and comms agency that is looking at crisis management and at how the AI snippets being produced by Google and others actually represent brands, and how they pull in incorrect data, things like that. You've got to be so careful with what's being reflected. You've got to tie it down really tightly.
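
The bluntest form of that human-in-the-loop pattern is a review queue: nothing the model drafts goes out until a person approves it. A minimal sketch, with generate_copy standing in for any AI call:

```python
review_queue: list[dict] = []

def generate_copy(contact: str) -> str:
    """Placeholder for any AI-drafted message."""
    return f"Hi {contact}, I saw your latest project. Worth a quick chat?"

def draft(contact: str) -> None:
    # AI output lands in a queue, never straight into the outbox.
    review_queue.append(
        {"to": contact, "body": generate_copy(contact), "approved": False}
    )

def approve_and_send(item: dict) -> None:
    if not item["approved"]:
        raise RuntimeError("Refusing to send unreviewed AI copy")
    print(f"Sending to {item['to']}: {item['body']}")

draft("Sam")
review_queue[0]["approved"] = True  # the human step: read it, then approve it
approve_and_send(review_queue[0])
```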

Ewan (34:04):

We've had a few million dollar questions today, but what does future proofing your tech stack look like? I'm interested to know what that means for recruitment and HR companies, as you understand it.

Andrew (34:19):

Yeah, it's about connectivity. This is the critical trend we're seeing, although it's not really a trend; it's been the case for the last 10 years or so. It used to be the days of single-stack solutions, your massive Salesforce enterprise-level stacks where Salesforce supposedly did anything and everything. HubSpot's kind of gone the same way now. That's not how you need to be building your stack. Go best of breed: find the tools that do a few things really well. Marketing automation, for example, could be Kulea. But make sure those tools talk to the rest of your stack, so you're passing data between them. Make sure you've got somebody who at the very least understands tools like Zapier, those middleman connectors. Better still, someone a little bit geeky who actually understands ETLs and how to surface the right data. You're not necessarily going to find those tools sitting in your ATS, for example.
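
In practice, "make the tools talk to each other" often reduces to a thin extract-transform-load step between systems. A sketch under the assumption that both ends exchange JSON-like records; the field names are placeholders, not a real ATS or Kulea schema.

```python
def transform(ats_record: dict) -> dict:
    """Map an ATS-shaped record onto the fields a marketing tool expects."""
    return {
        "email": ats_record["email_address"],
        "first_name": ats_record.get("forename", ""),
        "tags": [ats_record.get("sector", "unknown")],
    }

ats_export = [
    {"email_address": "jo@acme.example", "forename": "Jo", "sector": "hospitality"},
]
payload = [transform(r) for r in ats_export]
print(payload)  # ready to hand to the marketing platform's contact import
```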

(35:34):

I mean, again, we're working with a brilliant AI tool that sits on top of the ATS to help surface the data. You will find best-of-breed tools that you can plug in and run with, and they're easily interchangeable. Things will evolve; that is the one guarantee. We are moving at such a rate of change now that the companies and the people that succeed will be the ones that are easily able to adapt and accept change. The ones that don't, the "this is the way we've always done things and this is the way we're always going to do things" mindset, that is the biggest red flag in recruitment, and I think it's the biggest red flag in every industry. I remember reading an Amazon piece about their recruitment approach, and what they look for is people who are always building momentum, always changing, always adapting, always moving. If you are the kind of person that sits in a traffic jam and just waits, that's not the right person. If you are the kind of person who goes, "You know what? I'm going to take this little side road and see where it takes me. It might not save me any time, but I'll learn something along the way," that's the kind of culture you want to encourage.

Ewan (36:53):

Absolutely. That is definitely me; I cannot stand sitting in traffic, I will go the long way around. Oh my goodness. You touched on it there, but I just wanted to end today on the people and the behaviors. What should HR teams and recruitment teams be looking for in their people to help them get the most from their future tech stack?

Andrew (37:28):

Yeah. You're looking for people that are data literate. Now, this doesn't mean they have to be coders; they don't need to understand PHP or HTML. But they need to understand how data works, how it gets created, how to maintain it in your ATS, how to interpret the outputs and how to question the outputs, because that questioning is going to become more and more critical. Push back. AI loves pushback: "Oh my God, you're right, I've completely pivoted and done exactly the opposite of what I recommended five seconds ago, but I'm doing it with enthusiasm and confidence." Make sure your people are aware of those failings. My kids are at school and they're being taught critical analysis, critical thinking, critical questioning. It's going to be the skill that helps them the most in life, and it's the same within these organizations. If people just take data at face value without digging into it, they're going to come a cropper.

(38:41):

The other skill is the ability to form relationships, depth of relationship rather than breadth. You're going to have the tools, you already have the tools (I mean, you're using them) that allow for that breadth of communication, those light touchpoints that keep your brand front of mind. They're incredibly important. But you're going to need people who can take those very light-touch relationships, deepen them, make them more meaningful, and convert them, at the end of the day, into revenue and profit, because that's the end goal.

(39:19):

You're also going to look for people that have strategic influence. That's not just looking at the data; it's looking at employer branding, it's looking at culture. It's understanding more than just that single niche role, making yourself invaluable to the business and helping to guide it. And going back to that previous point we made, it's all about adaptability.

Ewan (39:44):

Yeah.

Andrew (39:45):

So important.

Ewan (39:46):

Yeah. I think critical thinking is the number one going forward, because we do some work with the Chartered Institute of Marketing, and we also go into universities and so on, and trying to let people know what those skills will look like in the future is really hard, because we just don't know; it's going to continually change, develop and grow. Being able to adapt is going to be so critical. So listen, that was a fascinating journey through the tech stack. I really enjoyed that; a really good discussion, and fascinating to hear where things are going and the opportunities there are for organizations. If people are looking for insight into Kulea or anything else, where's the best place to find you?

Andrew (40:31):

Come and find us over on Kulea.ma, that's M-A, Mike Alpha. Or better still, come and find me on LinkedIn: it's Andrew Nicholson, Kulea, K-U-L-E-A. Send me a message. I'd always love to chat.

Ewan (40:45):

Excellent. Well, Andrew, thank you again for your time and look forward to catching up again soon.

Andrew (40:51):

Thank you, Ewan. Always a pleasure. Take care.

Ewan (41:07):

Thanks for listening to Recruitment and Beyond Podcast. Hopefully there was plenty of insight for you to take back to your teams. So don't forget to subscribe and never miss an episode. And if you can, leave us a review. We really appreciate all the feedback and support we get. It makes a massive difference.