The AI threat you should pay attention to.

Disclaimer: This is me sharing thoughts on the growing threats to the industry and the critical importance of authentication. While I'm not a CISO, I did stay at a Holiday Inn Express once, and I think it gave me the confidence to share.

Amidst the AI hype, there is a lot of exaggeration about its current capabilities and its potential to 'run amok'. There is understandably a lot of concern, especially from people who believe AI should be regulated more heavily to protect the public from misuse. But instead of worrying about AI taking your job, focus on how scammers are already using AI to create more complex and believable scenarios to steal your money, or your company's money.

It's safe to say that phishing is now so prevalent that anyone reading this has had multiple attempts made to scam money from them, ranging from 'you've won the lottery' to 'your computer is infected, please call 800…' to the promise of romance for the lonely. For most of us, these attempts are merely annoying and easy to spot, thanks to poorly written messages or other obvious flaws in how the phish is constructed. Sometimes, though, they are done well enough to fool people: victims pay for things they never bought, pay extortionists, or redirect otherwise legitimate vendor bills to a scammer who pulled off a business email compromise through a successful spear-phishing campaign against one of your business partners.

Phishing is effective, which is why it has become such a widespread issue. In 2023, over 3 million phishing sites were reported by the APWG. Companies typically have more resources and money than individuals, which is why there has been so much focus on Business Email Compromise (BEC), where scammers impersonate management, trusted partners, vendors, and even customers to mislead people at your company into sending money to them. BEC attacks were responsible for $2.9B in losses in 2023 in the United States alone. When the payoffs are this lucrative, the bad guys are not shy about investing in better tools that help them build scams that are harder to detect, and AI is fast becoming their tool of choice, making it much harder for the rest of us to tell whether a request is legitimate.

Phishing emails were effective enough to siphon nearly $3 billion from U.S. businesses in 2023, and that was without being as deceptive as they could be. If estimates suggest scammers can make that kind of money with a 4-5% success rate, imagine what happens if that rate triples because the forgeries get much better. This is where AI comes in, and if you're not paying attention, you, or your company, will be compromised. And without good process and proper protections, you will be out a lot of money that you are unlikely to recover.

Scammers are now using AI to write better phishing emails. Gone are the days of terrible grammar and spelling in what is supposed to be business communication. Instead, scammers focus on the social-engineering setup and use AI to write far more believable messages that reinforce the scam's appearance of legitimacy. That makes it easier for them to pull off successful attacks and siphon even more money from people and companies. And it's not just written communication: AI-powered voice cloning is now good enough that you would swear you're hearing a recording of someone you know. Now, imagine getting a voicemail from your CEO or CFO explaining that they need your help transferring funds for what sounds like a legitimate reason; because you are hearing them rather than reading an email, you find it perfectly believable and decide to help. Any senior executive of a public company has likely been recorded in public settings, from earnings calls to video blogs to conference talks, which means there is ample material to sample and build an excellent clone of that executive's voice.

What are we to do?

While using AI to combat AI may be an option in the future, what matters now is being able to quickly verify the identity of the person you are communicating with. That way, if something seems amiss in what you are being asked to do, you can simply authenticate the person you are talking to and be sure it's really them. As AI continues to advance, perhaps to the point of real-time voice cloning, you must be able to confirm that the person you are speaking with is who they say they are.

Social engineering is alive and well and is part of most of these scams. Someone calling your corporate helpdesk claiming to be an employee can convince a well-intentioned service representative to change that employee's password. But if the caller were challenged to authenticate and could not, the tool used to change the password would simply refuse. Verification like this would have blunted well-known incidents such as the 2013 Target breach, where attackers used credentials phished from an HVAC vendor; the 2015 Ubiquiti BEC, where scammers impersonating executives tricked finance staff into wiring roughly $46.7M; and the 2020 Twitter hack, where attackers phone-phished employees to reach internal admin tools and hijack high-profile accounts. All of these hinged on someone successfully pretending to be a person they were not: an employee, an executive, or a trusted vendor.

The great news is that the same best-in-class Multi-Factor Authentication (MFA) tools can be used here. And while SMS is better than nothing for MFA (or 2FA), you are much better off setting up all of your company's users with authenticator-app-based MFA such as Microsoft Authenticator or Google Authenticator. When an employee logs in for the first time at your organization, they should be set up with an authenticator app right from the start. The app's Time-based One-Time Password (TOTP) changes every 30 seconds and is easily verifiable by your organization. It works even if the phone is offline, and there's no delay when SMS networks are having trouble delivering a message.
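
To make the mechanics concrete, here is a minimal sketch of TOTP enrollment and verification using the open-source pyotp library. The employee name, the in-memory "directory," and the issuer are placeholders for whatever your identity provider actually uses.

```python
import pyotp

# Enrollment: generate a per-employee secret and store it server-side.
# (In practice this lives in your identity provider, not a dict.)
secret = pyotp.random_base32()
directory = {"jane.doe": secret}  # placeholder user store

# The employee loads this URI (usually as a QR code) into their
# authenticator app; the app then shows a fresh 6-digit code every 30s.
uri = pyotp.TOTP(secret).provisioning_uri(
    name="jane.doe@example.com", issuer_name="ExampleCorp"
)
print(uri)

# Verification: the server recomputes the code from the shared secret.
totp = pyotp.TOTP(directory["jane.doe"])
code = input("6-digit code from the authenticator app: ")

# valid_window=1 tolerates one 30-second step of clock drift.
print("verified" if totp.verify(code, valid_window=1) else "REJECTED")
```

Because both sides derive the code from the same secret and the current time, no network round-trip to the phone is needed, which is exactly why this works when SMS does not.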

It would not be a heavy lift for your IT department to build tools that use the supported authenticator app for the helpdesk, and even for the average employee. If the helpdesk gets a call asking for a password change, they can make that change through a web interface that first checks the caller's TOTP, so they know it's really that person. More importantly, and this is why I felt compelled to write this, as AI makes it ever easier for bad actors to impersonate others, you can offer the same style of tool to your employees through your intranet. They simply go to an internal website, type an employee's name, ask that person for their current TOTP, and confirm it's really them.
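
As a sketch of what that intranet tool could look like, here is a hypothetical Flask endpoint built on the same pyotp check. The route name, lookup_totp_secret, and the demo secret are all invented for illustration; the lookup would really be a query against your identity provider.

```python
from flask import Flask, jsonify, request
import pyotp

app = Flask(__name__)

def lookup_totp_secret(employee: str):
    """Placeholder: fetch the employee's TOTP secret from your IdP."""
    fake_store = {"jane.doe": "JBSWY3DPEHPK3PXP"}  # demo data only
    return fake_store.get(employee)

@app.post("/verify-identity")
def verify_identity():
    body = request.get_json(force=True)
    secret = lookup_totp_secret(body.get("employee", ""))
    if secret is None:
        return jsonify(verified=False, reason="unknown employee"), 404
    ok = pyotp.TOTP(secret).verify(body.get("code", ""), valid_window=1)
    # The helpdesk password-change tool gates on this same check:
    # no valid TOTP, no password change, no matter who is yelling.
    return jsonify(verified=bool(ok))

if __name__ == "__main__":
    app.run(port=8443)  # intranet-only, and behind TLS in practice
```

An employee who gets a suspicious "CEO" voicemail can ask the caller to read off their current code, submit it to this page, and know within seconds whether it matches.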

This is an incredibly powerful way to avoid being fooled when you get a private message or other direct contact from your CEO, in her (or his) voice, asking you to do something that might otherwise seem a bit sketchy.  Simply ask them to authenticate when you feel it’s needed. 

Lastly, for this to work (and this is key), you need to make it normal and acceptable for anyone who is unsure whether they are talking to the real person to ask that person to authenticate. That is a cultural norm that needs to be established. Nobody should be given a hard time for wanting to be sure it's really you before doing something that could have serious repercussions for the company. Any leader who fights against that is putting their ego before the wellbeing of the company.

In summary:

  1. Enable MFA with an authenticator application for all employees, set up on their first login (leave no token un-minted).
    1. For existing employees, enable it and use scripting to force the creation of a token for the application (see the sketch after this list). Communicate ahead of time and give them a window, but at the end of the day EVERYONE with an account in the company's domain should have one.
  2. Build a simple helpdesk application for changing passwords that only works for people who have been authenticated.  No matter how much a scammer yells or pressures, the helpdesk can’t make the change without authentication.
  3. Create a simple web tool that performs the same authentication check, and make it available for everyone to use to confirm a person's identity.
  4. Encourage use of the tool whenever there is any question of whether the person you are talking to is who they claim to be.
  5. Make authentication normal and encouraged, so it becomes part of the culture and gets used whenever needed.
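
For step 1, a nightly script can mint a token for every account that lacks one. This is a sketch under the assumption that your directory can be enumerated and written to; list_employees, save_secret, and send_enrollment_mail are placeholders for your actual IdP and mail APIs.

```python
import pyotp

def list_employees():
    """Placeholder: enumerate accounts from your directory (e.g. LDAP)."""
    return [{"user": "jane.doe", "email": "jane.doe@example.com",
             "totp_secret": None}]

def save_secret(user: str, secret: str) -> None:
    """Placeholder: persist the secret in your identity provider."""

def send_enrollment_mail(email: str, uri: str) -> None:
    """Placeholder: mail the provisioning QR link through your MTA."""

# Leave no token un-minted: enroll every account that has no secret yet.
for account in list_employees():
    if account["totp_secret"]:
        continue  # already enrolled
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name=account["email"], issuer_name="ExampleCorp"
    )
    save_secret(account["user"], secret)
    send_enrollment_mail(account["email"], uri)
```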
