AI: A Threat to Computer Security
A team of security researchers at IBM Corp has warned that AI programs are now capable of evading the best defenses of computer security software. This poses a major threat to computer security.
The intellectual roots of AI, and the concept of intelligent machines, can be traced to Greek mythology. Intelligent artefacts have appeared in literature ever since, and mechanical devices have been built that demonstrate lifelike behaviour to some degree. The myths of Hephaestus, the blacksmith who manufactured mechanical servants, and of Talos, the bronze man, embody the idea of intelligent robots.
In the 1940s, the programmable digital computer was invented: a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Many of the participants predicted that a machine as intelligent as a human being would exist within a generation, and they were given millions of dollars to make this vision come true. Eventually, it became obvious that they had grossly underestimated the difficulty of the project, largely because of the limitations of the computer hardware of the day.
In 1973, the U.S. and British Governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter". Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s, investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning, enabled by powerful computer hardware, was successfully applied to many problems in academia and industry. As in previous "AI summers", some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed those of human beings.
A team of researchers from IBM Corp has used the AI technique known as machine learning to build hacking programs that could effortlessly slip past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday. This confronts the computer security community with its worst nightmare: its best defenses evaded by AI programs.
State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop. No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence.
Researchers say that, at best, it's only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc's Google and others, and the ideas work all too well in practice. “I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”
The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by the U.S. and Israeli intelligence agencies against a uranium enrichment facility in Iran. The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.
In a demonstration using publicly available photos of a sample target, the team used a hacked version of video conferencing software that swung into action only when it detected the face of a target. “We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”
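According to IBM's public description, DeepLocker conceals its payload by deriving the decryption key from attributes of the intended target (such as features extracted from a face), so the content is unreadable until the trigger condition is met. The following is a minimal, harmless Python sketch of that gating idea only; every name, the toy XOR "encryption", and the string payload are illustrative assumptions, not IBM's implementation:

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    # The trigger attribute itself serves as key material; no key is
    # stored anywhere in the program for an analyst to extract.
    return hashlib.sha256(attribute).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher, purely for illustration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Sealing" step: a demo message is encrypted under a key derived
# from the expected attribute. Only a hash of the key is kept, so a
# scanner sees neither the key nor the plaintext.
expected_attribute = b"target-attribute"
_key = derive_key(expected_attribute)
sealed = xor_bytes(b"demo-message", _key)
key_check = hashlib.sha256(_key).digest()
del _key

def try_unseal(observed_attribute: bytes):
    # At runtime, a key is derived from whatever attribute is observed.
    # Unless it matches the expected one exactly, the check fails and
    # the content stays sealed.
    candidate_key = derive_key(observed_attribute)
    if hashlib.sha256(candidate_key).digest() != key_check:
        return None
    return xor_bytes(sealed, candidate_key)
```

The defender-relevant point the sketch illustrates is why signature scanning struggles here: the sealed bytes carry no recognizable pattern, and the unlock condition is implicit in the key derivation rather than written as an inspectable comparison.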
Our assessment is that such AI programs take computer hacking to a new level. Because they can be trained to stay dormant until they reach a specific target, they undermine existing countermeasures. We believe the world needs a determined AI strategy, because machine learning offers particularly significant benefits alongside risks of this kind.