11/20/2015 - Coming Cyber Threats
November 20, 2015 – Eurasia Review
By Cheryl Pellerin
Two specific emerging challenges are among those that concern Navy Adm. Mike Rogers, commander of U.S. Cyber Command and director of the National Security Agency.
The challenges are a potential inability to trust financial and other data due to manipulation by adversaries, and the disregard of some non-state actors for connectivity and other staples of daily life in many parts of the world.
Rogers joined Marcel Lettre, acting undersecretary of defense for intelligence, and others in a panel on cyberwar during the recent annual Reagan National Defense Forum held in Simi Valley, California.
From a military perspective, Rogers said to the audience of government and industry leaders, data manipulation through network intrusion is probably his No. 1 concern.
“As a military commander, I’m used to the idea that I can walk into a darkened space with a lot of sensors coming together and look at a visual image that uses color, geography and symbology, and quickly assimilate what’s going on and make very quick tactical decisions,” Rogers said.
“But what happens if what I’m looking at does not reflect reality … [and] leads me to make decisions that exacerbate the problem I’m trying to deal with [or] make it worse?” he added.
The admiral said he’d just returned from New York, where he spent a day in related discussions with business leaders and with students at Columbia University.
The digital environment, for the private sector and the military, is founded on the idea of faith in the data, he said.
“The fundamental premise for most of us is that whatever we’re looking at, we can believe — whether it’s the balance in your personal account … or the transactions you make in the financial sector,” Rogers said.
What happens, he asked, if that trust is disrupted? What if the digital underpinning relied upon by people everywhere can no longer be believed?
Vision of the World
His second concern from a military perspective involves non-state actors.
“Nation-states, while they want to gain an advantage,” he said, “generally have come to the conclusion that if the price of gaining that advantage is destroying or destabilizing the basic status quo and underpinnings that we’ve all come to count on, that’s probably not in their best interest.”
With non-state actors like the Islamic State of Iraq and the Levant or al-Qaida, Rogers added, that premise is gone. They are interested in destroying the status quo to achieve their vision of the world as it should be, he said.
“So what happens when they suddenly start viewing cyber as a weapon system, as a capability that helps them achieve that end state — and one they can use as a vehicle to achieve destruction and disorder, just as we’re watching them do in the kinetic world?”
In his remarks on the panel, Lettre — who oversees all DoD intelligence and security organizations, including the National Security Agency — said the cyber threat picture is complex and a function of a geostrategic landscape that is as challenging as the nation has seen in 50 years.
Defense Secretary Ash Carter and Deputy Defense Secretary Bob Work “have been pushing for … an innovative approach [and] innovation in technologies to try to tackle this strategic landscape and deal with these challenges,” he said. (Read full article…)
11/19/2015 - Time To Meet The Wizard
November 19, 2015 – BuzzFeed
Facebook’s David Marcus says M is already a lot more than just people pretending to be robots.
By Alex Kantrowitz
Someone sent a Starbucks Pumpkin Spice Latte to my desk at work. Which was weird because I didn’t ask for it. And even weirder because it came from some of the most sophisticated technology in the world: M, the artificial intelligence–driven virtual assistant Facebook is building into its Messenger app. AI, I thought, was supposed to outsmart and kill us, not send autumnal coffee beverages. So forgive me for being a little suspicious of the grande red cup at my desk.
I’ve had M on my phone for about a month now. And while it’s currently still just an experiment, there’s a strong probability that it’s coming to your phone, too. Before it becomes pervasive, I wanted to understand it, and ask questions of it. I’ve pushed it incredibly hard. I’ve had it research and book a flight for me, reduce my cable bill, get Star Wars tickets, send free coffee, write a song, and, yes, even draw pictures. Many pictures.
But while I got the art of M, very literally, I wasn’t so sure about the science of it. Nor have I been sure of Facebook’s motivations, or the ethical implications of using M at all. Every time I asked M who or what I was talking to, it offered the vague line: “I’m AI, but humans train me.” This was frustrating. If M was mostly AI, then what I was witnessing was an astounding technological innovation, something that could change the way online business works — there’s plenty of money to be made when you become the place people go when they want to buy stuff (see: Google). But that works only at scale. If M were really an ordinary human sitting at a console with a calculator, then I was a dupe, not to mention a bit of an asshole for making real people do all this stuff for me. And so, after weeks of wondering, it was time to meet the wizard. (Read full article…)
11/3/2015 - Why sustainability must learn to love AI
November 3, 2015 – GreenBiz
By Heather Clancy
Just about the only emerging information technology guaranteed to generate more fear, uncertainty and doubt among humans than job-stealing robots is smarter-than-us artificial intelligence.
Yet sensors, systems and software that augment and automate decision-making, then take action based on the answers, will be vitally important for scaling many so-called smart solutions beyond early pilot tests. That goes for everything from urban parking guidance apps to energy-sipping lighting installations to autonomous vehicle steering controls.
The reason is pretty simple: The amount of information behind any one of these applications is simply staggering — some suggest as many as 150 billion “things” may be connected to the Internet within the next decade, creating myriad sources for interpretation. We’re talking everything from weather forecasts, to location-specific traffic updates, to building occupancy statistics, to billions of images cataloging the world around us — and everything in between.
No single person can interpret all those data points quickly, if at all. However, by programming machines to adapt behavior when certain conditions are met, society and business can move another step forward to making sustainable operations more systemic. That theme was sounded multiple times last week during VERGE 2015.
“Big Data is the headache; deep learning is the solution,” said well-respected venture capitalist Steve Jurvetson, partner at Draper Fisher Jurvetson and an early investor in multiple billion-dollar companies including SolarCity, Tesla Motors and Twitter.
During an onstage interview at VERGE, Jurvetson said it is no longer enough simply to find patterns in data — something that many software applications already do pretty well. The next imperative is teaching the software to make connections that are too complex for humans to perceive, a field that often goes by the name “machine learning.”
“You generate a computer program that in and of itself is capable of learning something,” he said. “It’s about adaption.”
To illustrate, Jurvetson points to the millions of sensors already dedicated to collecting information about which lights are on or off, or when temperatures increase or decrease dramatically. Using that data, for example, a building’s elevators might be “trained” to prioritize certain floors under certain conditions, such as when a certain percentage of offices go dark during a given timeframe.
Machine learning also could help prioritize what data is collected in the first place. “Over time, your algorithms would guide what to turn off. … There is an insane amount of images and data collected from all these systems,” Jurvetson said.
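Jurvetson’s elevator example can be made concrete with a toy sketch. The code below is purely illustrative — the `FloorUsageModel` class, its methods, and the hour/floor scheme are invented for this example, not anything described at VERGE — but it shows the basic adaptive loop he alludes to: observe sensor data (which floors’ lights are on, and when), learn occupancy patterns over time, and use the learned model to prioritize floors or decide what to poll next.

```python
from collections import defaultdict

class FloorUsageModel:
    """Toy online model: learns, per hour of day, how often each
    floor's lights are observed on, then ranks floors so an elevator
    (or a sensor-polling schedule) can prioritize the busiest ones."""

    def __init__(self):
        # (hour, floor) -> [total observations, times lights were on]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, hour, floor, lights_on):
        """Record one sensor reading for a floor at a given hour."""
        seen = self.counts[(hour, floor)]
        seen[0] += 1
        if lights_on:
            seen[1] += 1

    def occupancy(self, hour, floor):
        """Estimated probability the floor is lit at this hour."""
        seen, on = self.counts[(hour, floor)]
        return on / seen if seen else 0.0

    def prioritize(self, hour, floors):
        """Busiest floors first; an elevator could idle near these,
        and rarely-occupied floors could be polled less often."""
        return sorted(floors, key=lambda f: -self.occupancy(hour, f))
```

A real building-management system would of course use richer features and an actual learning algorithm; the point here is only the shape of the feedback loop — data in, behavior adapted.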
What if the world’s smartest people can’t solve the world’s biggest problems? Perhaps feeding data to a “brain in a box” could help, Jurvetson suggested.
Several projects highlighted during VERGE last week offer a glimpse of the possibilities. (Read full article …)
11/2/2015 - Security Software Blind Spots
November 2, 2015 – eWeek
By Wayne Rash
NEWS ANALYSIS: There are many things that your current security software simply can’t see, and stopping emerging threats requires a new approach.
It’s worth noting that none of my antivirus packages picked up on this malware. Norton Internet Security, for example, said when I scanned it that the file was perfectly safe. Of course it wasn’t, and this pointed out the reason why you can’t put all of your trust into malware scanners that depend on signature scanning.
But that experience also points out why your instincts can play a vital role in security. Unfortunately, one person’s instincts, which are based on that one person’s experience, can’t possibly detect all of the malware that’s out there.
But there’s a way that instincts can play a critical role in defeating malware and cyber-attacks, and that’s to teach instinctive behavior to a powerful computer and then find a way to share everything there is to know about malware and cyber-attacks with that computer.
This is basically what Israel-based cyber-security company Deep Instinct is trying to do in its effort to apply deep machine learning to security. Deep learning is an area of artificial intelligence in which vast quantities of data are loaded into a computer, which then works to determine what is significant in the data by looking for connections in the way the data behaves.
According to CTO Eli David, the company loads decomposed examples of every piece of malware it can find into its deep learning software, which looks for connections and characteristics in the malware so that it can learn what malware looks like in the real world.
The deep learning process differs from searching for signatures: the idea instead is to determine what a wide range of malware has in common, so that it becomes possible to identify malware just by looking at its components.
Dr. David compared it to identifying a cat from only portions of a photo. Once certain characteristics of the cat can be seen, such as the shape of an ear, the pupil of an eye or the pattern of the fur, you can tell that it’s a cat. You don’t need to see the whole thing or wait to hear the meow to know this. (Read full article …)
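The “recognize the cat from a fragment” idea can be sketched in miniature. This is emphatically not Deep Instinct’s method — their system uses deep neural networks over decomposed binaries — but a toy stand-in that captures the same property: a classifier trained on which byte n-grams are characteristic of malicious versus benign samples can score even a fragment of a file, using only the features it happens to contain. All names and the feature scheme here are invented for illustration.

```python
import math
from collections import Counter

def ngrams(data, n=2):
    """Byte n-grams: simple static features of a binary or fragment."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

class FragmentClassifier:
    """Toy classifier: learns which byte n-grams are typical of the
    'malware' vs 'benign' training samples, then scores any input --
    whole file or partial fragment -- by the features it exhibits."""

    def __init__(self, n=2):
        self.n = n
        self.mal = Counter()   # n-gram counts seen in malware samples
        self.ben = Counter()   # n-gram counts seen in benign samples

    def train(self, sample, is_malware):
        (self.mal if is_malware else self.ben).update(ngrams(sample, self.n))

    def score(self, sample):
        """Log-likelihood ratio with add-one smoothing; a positive
        score means the visible features look more like malware."""
        m_total = sum(self.mal.values()) + 1
        b_total = sum(self.ben.values()) + 1
        s = 0.0
        for gram, count in ngrams(sample, self.n).items():
            p_m = (self.mal[gram] + 1) / m_total
            p_b = (self.ben[gram] + 1) / b_total
            s += count * math.log(p_m / p_b)
        return s
```

Because the score is a sum over whatever n-grams are present, a short fragment of a known-bad pattern still scores positive — the analogue of recognizing the cat from just the ear.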