AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies.

In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.

There was a traditional human–team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

I figured it was only a matter of time. It would be the same story we’ve seen in so many other areas of AI: the games of chess and Go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.

But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There’s an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.
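
As a toy illustration of that line-by-line scanning, here is a minimal Python sketch. The rule table below is hypothetical and hand-written, the kind of baseline that learned models aim to generalize beyond; it is not any production tool or the method used by the competition entrants.

```python
import re
import sys

# Hypothetical, hard-coded patterns for a few classic C pitfalls.
# Research on "automatic vulnerability finding" replaces this table with a
# model trained on labelled code, but the per-line scanning loop is the same.
RISKY_PATTERNS = {
    r"\bgets\s*\(": "gets() has no bounds check (CWE-242)",
    r"\bstrcpy\s*\(": "strcpy() can overflow the destination (CWE-120)",
    r"\bsprintf\s*\(": "sprintf() can overflow the destination (CWE-120)",
    r"\bsystem\s*\(.*\+": "command built by string concatenation (CWE-78)",
}

def scan(path: str) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for one source file."""
    findings = []
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, reason in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    findings.append((lineno, reason))
    return findings

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for lineno, reason in scan(target):
            print(f"{target}:{lineno}: {reason}")
```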

To see that in action, you have to go to China. Since 2017, China has held at least seven of these competitions—called Robot Hacking Games—many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human–AI hybrid teams compete.

Details of these events are few. They’re Chinese language only, which naturally limits what the West knows about them. I didn’t even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they’re increasingly hosted by the People’s Liberation Army, which presumably controls how much detail becomes public.

Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.

None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: “Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.”

Unfortunately, it’s the People’s Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.

This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.

Posted on February 2, 2023 at 6:59 AM

Comments

IP-RET-END February 2, 2023 9:00 AM

“AIs as Computer Hackers”. Yeah, Perl, .vbs, .js, .py, and many other scripting languages have been known to help accomplish just that, for quite some time now. Of course, any batch file/job/task (automation) could also be called “AI” just to make it sound “cool,” but in the end it’s just some human being behind it. Much like when you expose criminals within the government: those individual criminals pose as the “state” to go after you, so you are retaliated against by the “STATE,” and even though you have evidence that “they” only exist to protect one another, nothing happens. But now we’re wandering off into another territory called CORRUPTION. Point being – animals called humans are behind everything.

meestahgofyslf February 2, 2023 11:19 AM

IP-RET-END:

“corruption” ISN’T a default state for human beings, unless you’re talking about entropy and the body or you’re pathologically cynical (in which case, see a shrink). When it’s ethical corruption that HARMS OTHERS, it SHOULD be exposed, whether it’s threatening its victims for speaking out about it or not. Don’t like it? Don’t do it.

Winter February 2, 2023 12:11 PM

@meestahgofyslf

“corruption” ISN’T a default state for human beings,

Actually, it is. The default state of humans is to divide humanity into “Us” and “Them” and to work to the benefit of “Us” to the detriment of “Them”.

Corruption is extracting benefits for “Us” from “Them”.

It is the basis for “Power corrupts…”

Roger February 2, 2023 1:44 PM

Inexplicably, DARPA never repeated the event.

It’s explicable, all right. The event goes on with DoD participants and the results are never released to the public.

vas pup February 2, 2023 5:46 PM

@Bruce said “Defense is currently in a worse position than offense precisely because of the human components.”

Taking this into consideration, let’s say you’re the Kevin Mitnick of the AI era.

Then take something like this: “That’s… not my voice?”
https://www.dw.com/en/thats-not-my-voice/audio-64593027 and make it perfect using a GAN until the voice is a 100% match for a sysadmin or boss, call the user with a caller ID spoofer (which is still not banned in the US), and talk in the AI-generated voice under a pretext designed to extract valuable security data. Do I need to continue?

Clive Robinson February 2, 2023 6:50 PM

@ Bruce, ALL,

Re : Faster v Smarter.

If we look at humans and computers we see that, traditionally,

Humans = Smart but slow.
Computers = dumb but fast.

That is why humans do the research and come up with models, and the computers do the grind of working the mathematical models, often at an accuracy well beyond human ability.

As long as this holds true, AI is not actually going to be able to do anything new.

The difference is in the first two steps of the “goal seeking” procedure, which are,

1, Select a useful goal.
2, Come up with a test for that goal.

The first is an unbounded problem, based on the wants of a society within an environment and the desire to fulfil them.

The second can be bound or unbound, and it depends not just on the goal but on the environment and the rules that it, and the society within it, impose on agency within the environment.

The rules of an environment, including those of a society within it, have to somehow be learned; they are found by the same goal seeking procedure, or taught based on previous runs of the procedure.

A game such as chess has an object that can be not just described but easily tested for. The environment is strongly bound and the rules precisely defined.

Thus we know a solution can be found, albeit slowly, by iterating through every move until the goal is met. Given sufficient time and resources an optimum solution can be found.

However the hard part is “learning” what an opponent will do, either randomly or as a sacrifice to obtain a better position. Whilst there is little that can be done about the random, sufficient observation of an opponent’s play will show their general strategies.

Interestingly, in a bound environment the same observation will show what the rules are, albeit inefficiently. Likewise the goals of the game.

So in a bound environment with fixed rules and a defined outcome, just observing sufficient game play will allow what is at its base simple computational analysis to find the rules and the goal, and to find an optimum strategy against a set of consistent players.

The interesting question, though, is whether on limited data it can find all the rules, and likewise see all the winning options. More subtle still: can it tell the difference between the rules of the game and the strategies of a consistent player or group of players?

So we can see that basic analysis and a sufficient data set will get us to finding ways to win in an optimal way. As the computer in effect comes up with its own play strategies it might appear as though it has “learned”, but has it? Or has it simply “recognized” paths that are predefined by the environment and the rules, and just picked what is in effect one of the shorter ones?

Basically this is treating it as a “Traveling Salesman Problem” and using “simulated annealing algorithms”.
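
For concreteness, a minimal sketch of simulated annealing on a small travelling-salesman instance; the city coordinates and cooling schedule below are invented for illustration, not taken from any real routing or glass-cutting job.

```python
import math
import random

# Hypothetical city coordinates; any list of (x, y) points will do.
CITIES = [(0, 0), (3, 1), (6, 0), (7, 5), (4, 7), (1, 5), (2, 3), (5, 4)]

def tour_length(order):
    """Total length of the closed tour visiting CITIES in the given order."""
    return sum(
        math.dist(CITIES[order[i]], CITIES[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def anneal(steps=20000, start_temp=10.0, cooling=0.9995):
    """Simulated annealing: accept worse tours with probability exp(-delta/T)."""
    order = list(range(len(CITIES)))
    random.shuffle(order)
    best, best_len = order[:], tour_length(order)
    temp = start_temp
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(order)), 2))
        candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
        delta = tour_length(candidate) - tour_length(order)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = candidate
            if tour_length(order) < best_len:
                best, best_len = order[:], tour_length(order)
        temp *= cooling  # slowly reduce the willingness to accept worse tours
    return best, best_len

if __name__ == "__main__":
    tour, length = anneal()
    print("best tour:", tour, "length:", round(length, 2))
```

The early, hot phase explores widely (accepting many worse tours); as the temperature drops it settles into a local optimum, which is exactly the local-versus-global trade-off discussed below.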

It’s a solution I know works well for “cutting glass” to get minimum wastage. The basic rule of the game is that all cuts have to go fully across a piece; the resulting two pieces are then independent pieces, so the rule is applied to each of them, and so on. Whilst it might look like a walking-a-binary-tree problem, it’s actually not, as you are doing the inverse from an unknown starting position.

There are other algorithms that can be used based on metaphors of what we consider “natural processes”, but they all have problems, as described in this more than two decade old –but still relevant– IEEE article,

https://www.cse.unr.edu/~bebis/CS477/Papers/Nature'sAlgorithms.pdf

The problem with cutting glass is finding the balance between local and global optima. If you have lots of small pieces a greedy algorithm using local optimisation tends to work, but with a few large pieces a different approach is needed. As a rough rule “start with the largest” can be close to optimal; however, starting with grids of small pieces that become variable sized large pieces can prove better.
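
As a simplified, one-dimensional analogue of that “start with the largest” rule (real glass cutting is two-dimensional; the stock and piece lengths here are made up), a first-fit-decreasing sketch:

```python
# First-fit decreasing: a greedy "start with the largest" heuristic for a
# one-dimensional cutting-stock problem. Hypothetical lengths, in millimetres.
STOCK_LENGTH = 2400
PIECES = [1200, 900, 900, 800, 700, 600, 450, 300, 300, 150]

def first_fit_decreasing(pieces, stock_length):
    """Cut the largest pieces first, opening a new stock bar only when needed."""
    bars = []  # each bar is a list of piece lengths cut from one stock length
    for piece in sorted(pieces, reverse=True):
        for bar in bars:
            if sum(bar) + piece <= stock_length:
                bar.append(piece)
                break
        else:
            bars.append([piece])
    return bars

if __name__ == "__main__":
    plan = first_fit_decreasing(PIECES, STOCK_LENGTH)
    for n, bar in enumerate(plan, 1):
        print(f"bar {n}: cuts {bar}, offcut {STOCK_LENGTH - sum(bar)}")
    print("total waste:", sum(STOCK_LENGTH - sum(bar) for bar in plan))
```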

The thing you learn is that as the game progresses the optimal strategy changes; that is, each move in chess and each cut of the glass changes the environment, even though the rules remain the same.

So the trick, in a bound environment with predictable rules, becomes knowing when to use any given strategy to get you to the desirable end in the least number of moves or cuts.

But that is only one of several optima. In manufacturing you generally trade work –effort, energy– against cost. That is, there is a trade off between the cost of the time taken and the cost of the wastage in cutting glass. But the wastage from one job is not necessarily “waste”; it can become smaller sized stock, and that in turn depends on “storage cost”, as whilst sufficiently large off-cuts from one job can be used as stock for a later job, getting them into and out of storage has its own costs, as does the use of storage space. You quickly realize that “optimum” has limitations.

The classic being,

“Assume the traveling salesman is doing his route in one working day. You could, by searching every possibility, find the optimal path. But it takes a week of computation to find it.”

Is it of any use?

The answer is it depends on how often the route will be used. The computer cannot work that out; it is an unknown. And it cannot usually be found from past data, as the future changes in most environments.

Which brings us onto the fact that paths change based on previous actions, and thus an adaptive response is required, especially if there are two or more independent players with agency. This gives rise to the notion of not “following paths” but using localised algorithms running in parallel as a way to adapt to the changing environment. A simple visible example is “Conway’s Game of Life”, where very simple cellular automata produce what appear to be very complex behaviours. But it is actually a “zero player game”; that is, no matter how complex the behaviour, the starting conditions predefine everything. In effect it is a chaotic, not random, system.
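
A minimal Game of Life step function makes the “zero player” point concrete: once the starting set of live cells is chosen, every later generation is fully determined. The glider seed below is just one example pattern.

```python
from collections import Counter

def step(live_cells):
    """One generation of Conway's Game of Life on an unbounded grid.

    live_cells is a set of (x, y) coordinates; everything that follows is
    fixed entirely by the starting set -- the "zero player" property.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a classic moving pattern
    cells = glider
    for generation in range(4):
        print(f"generation {generation}: {sorted(cells)}")
        cells = step(cells)
```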

Thus it can now be seen that without a random input AI is nothing other than “preordained” or “fully deterministic” in its behaviour, and at best has no true agency. Something Alan Turing was apparently aware of, as he insisted on a random generator being added to the Ferranti Mark 1 and the Manchester computer design prior to his now famous 1950 paper “Computing Machinery and Intelligence”, which anticipated the whole subject of AI.

So, to attack a system an AI would have to include some random element; the most obvious, from an external attacker’s perspective, is a “fuzzing tool”[1].

The problem with using fuzzing tools is that they “make a lot of noise”. That is, to work, malformed input has to be sent to the system under attack. The idea is that this input causes vulnerabilities to be activated and a fault to occur which is visible in the system output. However two issues arise,

1, Only a small subset of faults appear at the output.
2, Of those, most are catastrophic faults that halt the system.

Thus they are just as easily detected by the defender as by the attacker.

Thus a threshold rule is needed on the generation of the malformed input, which in most cases will not cause a recognisable fault at the output.

The reality of fuzzing tools is that their most useful use is when you control the system under test / attack, and you can halt the system and do a delta check on its state.

In most of the few “capture the flag” games I’ve observed, the attackers bring attacks they have already developed against known software that are “quiet”. As such they are looking not for “new faults” but exercising “known faults” to step their way along a path to a goal as quietly as possible. Especially if the game is adversarial and the defender has agency.

Thus fuzzing or random acts are not all that useful in an adversarial situation.

So it can be seen that for an adversarial game / attack, the attacker has to use “known attacks” –known to them– that have low or no visibility to the defender. Otherwise the defender will simply shut the attacker out.

Thus the best use of AI + fuzzing is not for “capture the flag” games, but in a controlled environment, to automate finding new attack methods that are quiet.

Thus whilst “capture the flag” games might be fun, as with real warfare the important activity is off the battlefield, in the equivalent of the R&D department of the armaments manufacturer.

Whilst trying new tactics with the weapons at hand can, and sometimes does, work on a battlefield, you very rarely, if ever, create new weapons on the battlefield.

So not seeing AI at “capture the flag” games might not be such a loss as it might appear.

My chosen approach for developing tactics would be to get two different teams, be they human or AI, to “table top” in a more controlled or “lab” environment, out of sight.

[1] Fuzzing tools are seen by many –sales people– as an excellent technique for locating vulnerabilities in software. Fuzzing’s basic premise is to deliver intentionally malformed input to a target system and detect system failure (which is why it’s noisy). Generally a fuzzing system is seen as having three main parts,

1, The “Poet” which creates the malformed inputs.
2, The “Courier” which sends the malformed input to the target system.
3, The “Oracle” that monitors the target system looking for indicators of failure.
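
As a toy sketch of that three-part split, fuzzing a local function rather than a remote system; the fragile parser and the mutation scheme below are invented for illustration, and a real Courier would deliver over a network while a real Oracle would watch for crashes, hangs, or log anomalies.

```python
import random

def target_parser(data: bytes):
    """Hypothetical system under test: a fragile length-prefixed parser."""
    length = data[0]               # the first byte claims how many payload bytes follow
    payload = data[1:1 + length]
    return payload[length - 1]     # raises IndexError when the claim is a lie

def poet(seed: bytes) -> bytes:
    """The Poet: produce a malformed input by randomly mutating a valid seed."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def courier_and_oracle(case: bytes):
    """The Courier delivers the case; the Oracle reports any visible failure."""
    try:
        target_parser(case)
        return None                # many malformed inputs pass without a visible fault
    except Exception as exc:       # the "noisy" failures a defender could also see
        return f"{type(exc).__name__} on input {case.hex()}"

if __name__ == "__main__":
    seed = bytes([4, 10, 20, 30, 40])   # a valid input: claimed length 4, then 4 bytes
    for _ in range(1000):
        finding = courier_and_oracle(poet(seed))
        if finding:
            print(finding)
            break
```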

schwit February 3, 2023 9:14 AM

Why does DARPA/USGovt have to do this? Why not Google or the state of Texas or Elon Musk or MITRE or Carnegie Mellon?

me February 6, 2023 9:52 AM

Without switching the base of society from trickery, on a social scale, to love, there will be no solution.

We must change ourselves to become ever more loving, without which happiness is impossible, no matter who you are, what you have, achieved, etc.

This journey is the most important, essential, and absolutely unavoidable one, if you want happiness here on earth.
