Lately, I’ve been wondering why I feel so obsessed with the concerns motivating ArtificialTimes.com, which center mainly on cyber-reality. Am I over-reacting? Why do I feel so alarmed? Obviously, I don’t know what the future holds for humanity. Maybe everything will turn out rosy, but my gut says that we’re in serious trouble.
My alarm is not caused primarily by the issues that I hear about frequently, such as global warming and the threat of nuclear weaponry, although I don’t want to minimize those worries, or any of the other major issues facing humanity in this second decade of the 21st century. It’s not even due mainly to the threat posed directly by the growing power of Artificial Intelligence (AI), although that is definitely a component.
Consider malware. Every day, new threats to businesses and public infrastructure posed by malicious software (computer viruses, worms, trojan horses, ransomware, spyware, adware, scareware, rootkits, botnets, etc.) evade the efforts to combat and contain them. Such cyber attacks are increasingly likely to be intelligent, as they converge with AI, big data, and pervasive computing.
Anyone who carefully investigates computer security matters will quickly be convinced that the problem is not going away anytime soon. For years the cyber criminals have been in a virtual arms race with the security companies, whose strategies (such as virus protection software) are almost entirely reactive, in the sense that they cannot defend against a new piece of malware until it has already been identified in the wild. That gives the crooks an obvious advantage: they get to decide when and how to strike first. They are also able to cloak their attacks so that they remain undetectable until triggered by some external event (such as a date or a user action).
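To see why signature-based defense is inherently reactive, here is a minimal sketch in Python. The signature bytes and malware names are entirely hypothetical, and real scanners use far richer heuristics than substring matching; the point is only that the database can hold patterns for malware that has already been seen.

```python
# A "signature database" maps a known byte pattern to a malware name.
# These patterns and names are made-up examples. The database can only
# contain signatures for malware already identified in the wild, which
# is what makes this style of defense reactive.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.Worm.A",
    b"EVIL_PAYLOAD": "Example.Trojan.B",
}

def scan(data: bytes) -> list[str]:
    """Return names of known malware whose signatures appear in data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

# Known malware is caught...
print(scan(b"...EVIL_PAYLOAD..."))      # ['Example.Trojan.B']
# ...but a brand-new strain matches nothing until analysts add
# its signature to the database.
print(scan(b"novel, unseen malware"))   # []
```

The attacker only needs to produce bytes not yet in the database; the defender must first capture and analyze a sample before any protection exists.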
The programmers who create these exploits have access to a burgeoning universe of open-source code that is freely available to anyone with the knowledge to use it. Every day, more and more of that code is devoted to Big Data analytics and AI algorithms, because such capabilities are becoming ever more useful and relevant to mobile apps, social media, and the advertising that makes them profitable. But the coding strategies that can make a friendly app more helpful can also be used to make hostile software more explicitly and intentionally harmful. The same techniques are the best way to create smart viruses and worms, capable of inserting themselves where they will do the most harm, evading detection, and defending themselves from the security measures trying to eliminate them. They are becoming increasingly intelligent, and their intelligence is not friendly.
They are no longer merely malware. They are malevolentware.