Opinions expressed by Entrepreneur contributors are their own.
Key Takeaways
- AI is making phishing harder to detect. The messages are increasingly polished and professional, often mimicking colleagues or executives, which removes the obvious signs people used to rely on.
- Employees generally know how to spot phishing, but they still fall for it because they’re busy, multitasking and making fast decisions under pressure. It’s not because they lack training.
- Leaders must accept that cybersecurity is an operational problem. They must examine communication norms, look at after-hours expectations and build friction deliberately.
There’s a version of the phishing problem that most companies think they’ve solved. You run the annual security training. You send the simulated phishing emails. You remind everyone to look for red flags — bad grammar, suspicious links, strange sender addresses. You do all of this and then feel reasonably confident that your team knows what to watch for.
The data suggests otherwise — not because your employees are ignoring the training, but because the threat has quietly changed around them. The habits that make people vulnerable were never really about awareness in the first place; they’re about how people respond to messages under pressure. That’s communications territory.
Here’s what’s actually happening
AI has gotten very good at writing. And the people using it to craft phishing messages have noticed. According to a recent Sagiss survey of 500 U.S. desk-based workers, 72% say phishing attempts are more convincing today than they were just a year ago — specifically because of AI-generated language. Sixty-six percent believe an AI-crafted message could successfully impersonate someone they actually work with. More than half say AI-written phishing is harder to spot simply because it feels more professional.
That last part is worth some reflection. The thing that used to make phishing detectable — the awkward phrasing, the stiff tone, the telltale grammatical errors — is disappearing. What’s replacing it is something that sounds a lot like your CFO, or your IT department, or that colleague who always messages you when she needs something fast. The messages don’t stand out. They blend in.
But here’s what the data also shows, and what most security conversations don’t spend enough time on: The problem isn’t just that phishing messages look better. It’s that your employees are making fast decisions under conditions that were never designed to support careful judgment.
Sixty-three percent of workers surveyed said they clicked a work-related link in the past year and later felt they should have double-checked it first. That includes 42% who said it happened more than once. Almost half had replied to a message and later questioned whether it was legitimate. Fifty-seven percent had verified a request only after already taking action.
Awareness isn’t the problem
Think about what that actually means. These aren’t people who don’t know about phishing. They know. And they still click, reply and engage first — then pause and wonder afterward. Why?
Because they’re working. They’re in back-to-back meetings, switching between five browser tabs, watching a Slack thread fill up in real time while a client waits for a response. When asked what situations make them most likely to make a mistake, 55% pointed to rushing between tasks, and 48% pointed to multitasking. Only 7% said the problem was that they didn’t know how to verify a message. The knowledge is there. The conditions to use it aren’t.
This matters because it reframes the entire conversation about phishing risk. We’ve spent years treating it primarily as an education problem. Train people harder, remind them more often, and make the simulations more sophisticated. But if your employees are already aware and still getting caught — not because they forgot, but because they’re managing 200 emails, three urgent requests and a meeting that started two minutes ago — then more training isn’t the answer. The environment is.
There’s another dimension here that doesn’t get nearly enough attention: after-hours access. Nearly 70% of workers in the survey said they check work email or chat outside of normal business hours at least sometimes. More than half said they feel pressure to respond after hours. And about a third said they had responded to a work message after hours and later felt they should have verified it more carefully first.
This is significant. The after-hours window is when attention is most fragmented, context is hardest to access and the impulse to just handle something quickly is strongest. It’s also when a well-crafted, AI-polished message that references a real project name and sounds like a real colleague has the best chance of passing the test. If your security posture assumes that risk is mostly a 9-to-5 problem, you’re missing a large and growing piece of the exposure.
How business leaders must respond
What does all of this actually require from business leaders? It requires accepting that cybersecurity is no longer just a technical or training problem; it’s an operational one. The conditions under which your people work every day are either helping them make good decisions or quietly undermining their ability to do so.
That means looking at communication norms. If your culture rewards instant responses and treats anything over an hour as slow, you’re implicitly pressuring people to skip verification. It means looking at after-hours expectations. If employees feel they have to stay continuously connected, you’re extending the window of risk without any additional safeguards in place. It means building friction deliberately — not to slow everyone down, but to create moments where a pause is normal and expected rather than a sign that someone isn’t keeping up.
And it means recognizing that the cues we’ve taught people to trust — a familiar name, professional language, workplace context — are now the exact cues attackers are replicating. The message that sounds most like someone your employee trusts may be the one that should trigger the most caution.
Your team isn’t the weak link because they’re careless. They’re the weak link because they’re busy, pressured and being targeted by tools that are getting better at looking legitimate. That’s a leadership problem, not a training one — and it starts with taking a hard look at the communications culture you’ve built.