Editor’s Note: Please be aware that items are presented here sequentially, based on the date they were originally published, not on when the link was added to this page. New links to published articles are added frequently, but they do not always appear at the top of the list. If you want to keep abreast of new additions, please scan through the list to check for them. Thank you for your interest!
3/30/2015 - Internet Hack Attacks Underline Vulnerability
March 30, 2015 – Top Tech News
by Shirley Siluk
The recent spate of hack attacks on the IT systems of British Airways, GitHub, Slack and Rutgers University is a sign of the fast-changing nature of the Internet — and of the growing number of people capable of launching attacks on it. Cybersecurity professionals worry that such incidents are only likely to become more common in years to come. A large distributed denial of service (DDoS) attack on the coding collaboration site GitHub, apparently launched by Chinese hackers, began last Thursday. A similar attack — linked to China and Ukraine — began Friday afternoon on the systems of Rutgers University.
At the moment, those attacks do not appear to be connected to other hacks involving British Airways, the workplace team chat site Slack or, reportedly, the person-to-person taxi service Uber. Slack confirmed that hackers were able to access information in its user database — although not, it believes, encrypted passwords — over a four-day period in February, while complaints about stolen frequent-flier points from British Airways’ Executive Club members began emerging a couple of weeks ago. A number of Uber users have also reported apparent hacks into their accounts.
“As far as I know, there are no links between these hacks, and some (GitHub) are DDoS attacks while others (like Slack) are proper breaches,” said Patrick Nielsen, senior security researcher at Kaspersky Lab. “Also, it appears Slack was actually compromised in February. There are some theories that actors in China are behind the DDoS attack on GitHub because GitHub hosts anti-censorship tools, but early attribution, and indeed attribution in general, is very difficult no matter what kind of attack we’re talking about.”
Nielsen told us: “I would say that this clustering of releases is just a coincidence, but they highlight an ever-increasing number of targeted hacks, and the need for companies to take information security and incident response seriously. The best thing you can do for your company from a risk perspective is to figure out what you’re going to do when something like this happens to you, and how you’re going to let your customers know. Keeping a hack secret can be far more damaging to your reputation than the hack itself.”
According to a survey of IT professionals by Kaspersky Lab and B2B International, 94 percent of organizations around the world have encountered at least one cybersecurity incident over the past 12 months. Of those, 12 percent reported they were the victims of at least one targeted attack, up from the 9 percent reported by Kaspersky in 2012 and 2013. (read full article …)
3/30/2015 - Ransomware Spearphish Uses One-Click Dropbox Attack
March 30, 2015 – KnowBe4 Security Awareness Training Blog
by Stu Sjouwerman
The cyber-mafia is stepping up the pressure. As you know, there are several competing gangs that are furiously innovating in an attempt to grab as much money as possible. Call it a criminal virtual land-grab.
A new ransomware attack spotted in Europe uses a highly targeted spear phishing email with Dropbox as the delivery mechanism. It takes only one click to infect a workstation, and the victim then has just 24 hours to pay the ransom in Bitcoin, which is very aggressive. It’s called the “Pacman” ransomware, a name suggesting something eating up all your files.
The ransomware strain is highly malicious. Besides the ransomware payload itself, the code includes a keylogger and has “kill process” capabilities that shut down Windows operating system functions like taskmgr, cmd, regedit and more, which makes this malware very hard to remove.
Europe is often used as a beta-testing ground for attacks on the U.S., so you can just wait for this to happen here. The problem is that this spear phishing attack is focused on a small vertical, but fully automated. In this case it’s chiropractors in Denmark. But remember that with the tens of millions of data-breach records out there, it’s very easy to do this. Next time it can be your employees getting one of these in their inbox, specifically targeted for your company. (read full article …)
3/29/2015 - Europol chief warns on computer encryption
March 29, 2015 – BBC News
A European police chief says sophisticated online communications are the biggest problem for security agencies tackling terrorism. Hidden areas of the internet and encrypted communications make it harder to monitor terror suspects, warns Europol’s Rob Wainwright. Tech firms should consider the impact sophisticated encryption software has on law enforcement, he said. …
A spokesman for TechUK, the UK’s technology trade association, said: “With the right resources and cooperation between the security agencies and technology companies, alongside a clear legal framework for that cooperation, we can ensure both national security and economic security are upheld.”
Mr Wainwright said that in most current investigations the use of encrypted communications was found to be central to the way terrorists operated.
“It’s become perhaps the biggest problem for the police and the security service authorities in dealing with the threats from terrorism,” he explained. “It’s changed the very nature of counter-terrorist work from one that has been traditionally reliant on having good monitoring capability of communications to one that essentially doesn’t provide that anymore.”
Mr Wainwright, whose organisation supports police forces in Europe, said terrorists were exploiting the “dark net”, where users can go online anonymously, away from the gaze of police and security services. (read full article …)
3/29/2015 - FBI Wants Us to Unencrypt Our Phones
March 29, 2015 – Gizmodo
by Maddie Stone
When Apple decided to encrypt its iPhones by default, the move was hailed as a major step forward for security. Except, of course, by the FBI, which is now saying that such encryption should be outlawed. For the safety of Americans, of course.
This past week, FBI director Jim Comey came before the House Appropriations Committee to plead for a law which would force tech companies to create a backdoor into any communications device that uses encryption. Over at The Guardian, Trevor Timm explains why the Bureau’s anxiety over the idea of Americans having more secure phones is not only hypocritical, but completely misguided:
The idea that all of a sudden the FBI is “going dark” and won’t be able to investigate criminals anymore thanks to a tiny improvement in cell phone security is patently absurd. Even if the phone itself is protected by a passphrase that encrypts the device, the FBI can still go to telecom companies to get all the phone metadata they want. They can also still track anyone they choose by getting a cell phone’s location information 24 hours a day, and of course they can still wiretap the calls themselves. Let’s not forget that a phone with a four-digit passcode – like iPhones come with by default – can easily be broken into by the FBI without anyone’s help anyway. So the vast majority of this debate is already moot.
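The arithmetic behind that last point is easy to check: a four-digit passcode has only 10,000 possible values. A minimal, purely hypothetical sketch (not any real unlock tool, and ignoring the rate limits and wipe-after-N-tries protections real devices add):

```python
# Illustrative only: a 4-digit passcode has just 10**4 possible values,
# so exhaustive guessing needs at most 10,000 attempts.
import itertools

def brute_force(target: str) -> tuple[str, int]:
    """Try every 4-digit code in order; return the match and attempt count."""
    for attempts, digits in enumerate(itertools.product("0123456789", repeat=4), start=1):
        guess = "".join(digits)
        if guess == target:
            return guess, attempts
    raise ValueError("target is not a 4-digit code")

code, tries = brute_force("7294")
print(code, tries)  # 7294 7295 -- found well within 10,000 attempts
```

This is why the debate hinges on device-side protections (escalating delays, erase-after-ten-failures) rather than on the passcode itself.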
The only thing that’s possibly more disturbing than the FBI’s desire to make all of us less safe is the utter lack of technological literacy apparent among members of the Appropriations Committee Comey is now trying to persuade. Take, for example, this clip from the hearing, in which Representative John Carter, chairman of the subcommittee on Homeland Security, rambles for three cringeworthy minutes on the supposed “dangers” of encryption. The worst part? He prefaces his comments by saying, point blank, “I don’t know anything about this stuff.”
There you have it—a man in charge of doling out billions of dollars of cybersecurity money openly admits that he knows nothing about cybersecurity. The scene would be hilarious if its implications weren’t so disturbing.
I think this tweet by Computer Science professor Matt Blaze sums up the situation perfectly: “Basically, the FBI is saying that they think you’re more likely to commit a crime than need to protect yourself against crime.” [The Guardian]
3/27/2015 - Festo unveils robotic ants
March 27, 2015 – Gizmag
by Paul Ridden
Designing a robot that can convincingly move like a member of the animal kingdom is a much more difficult prospect than merely building something that has the outward appearance of one. Some of the best examples of both have come from the engineers at Festo, including a herring gull named SmartBird and a bit of a bounder known as the BionicKangaroo. As a taste of things to come at next month’s Hannover Messe trade show in Germany, the company has now revealed three more biomimetic creations: a small colony of ants, a gripper modeled on a chameleon’s tongue and some fine flyers in the shape of some big blue butterflies.
Festo sees the development of its BionicANTs, where the latter half of the name stands for Autonomous Networking Technologies, as an indication of things to come on the factory floor, where production systems of the future are founded on adaptable and intelligent components which are able to work under a higher overall control hierarchy. These artificial insects don’t just look and move like giant versions of their counterparts in nature, but the company’s engineers have also managed to mimic the cooperative behavior of real world ants with the help of complex control algorithms.
“Like their natural role models, the BionicANTs work together under clear rules”, explained the company’s Head of Corporate Communication and Future Concepts, Dr.-Ing. Heinrich Frontzek. “They communicate with each other and coordinate both their actions and movements. Each ant makes its decisions autonomously, but in doing so is always subordinate to the common objective and thereby plays its part towards solving the task at hand.”
The use of Molded Interconnect Device technology sees visible, three-dimensional circuit structures integrated into laser-sintered shaped components during the build process, which is reported to allow for more design freedom as well as making assembly easier. Piezo-ceramic bending transducers are used in the actuators of the legs and gripper jaws, there are two cameras in the head and position is tracked using an optical mouse sensor mounted under the thorax.
Each BionicANT measures 13.5 cm (5.3 in) and runs on two 7.2 V batteries charged when the antennae touch metal bars running along the sides of an enclosure. (read full article …)
3/27/2015 - Google to develop AI surgical robots
March 27, 2015 – The Guardian
by Samuel Gibbs
Google has struck a deal with the healthcare company Johnson & Johnson to develop surgical robots that use artificial intelligence.
Google’s life sciences division will work with Johnson & Johnson’s medical device company Ethicon to create a robotics-assisted surgical platform to help doctors in the operating theatre.
The robots will aid surgeons in minimally invasive operations, giving operators greater control and accuracy than is possible by hand, minimising trauma and damage to the patient. Some systems allow surgeons to remotely control devices inside the patient’s body to minimise entry wounds and reduce blood loss and scarring.
Robotic surgical systems such as the Da Vinci device, developed by Intuitive Surgical, have been used in general operations since the early 2000s, and one even starred in a Bond film in 2002.
Google believes it can enhance the robotic tools using artificial intelligence technologies including machine vision and image analysis employed in other parts of the business, including Google’s self-driving cars.
The two firms will explore how advanced imaging and sensors could complement surgeons’ abilities, for example by highlighting blood vessels, nerve cells, tumour margins or other important structures that could be hard to discern in tissue by eye or on a screen.
Augmented reality systems will be used to overlay important information required during surgery that is typically displayed on multiple monitors stacked around the surgeon, such as pre-operative images, lab test results and details of previous surgeries.
“We look forward to exploring how smart software could help give surgeons the information they need at just the right time during an operation,” said Andy Conrad, head of the life sciences team at Google. (read full article …)
3/26/2015 - Facebook AI Software Learns and Answers Questions
March 26, 2015 – MIT Technology Review
by Rachel Metz
Software able to read a synopsis of Lord of the Rings and answer questions about it could beef up Facebook search. Facebook is working on artificial intelligence software that can process text and then answer questions about it. The effort could eventually lead to anything from better search on Facebook itself to more accurate and useful personal assistant software.
The social network’s chief technology officer, Mike Schroepfer, introduced the software, called Memory Network, in a talk at Facebook’s F8 developer conference in San Francisco on Thursday. He demonstrated how the software could acquire knowledge from text by showing how it was fed a super-simple synopsis of the book “Lord of the Rings”, in the form of phrases including “Bilbo travelled to the cave” and “Gollum dropped the ring there.” After that, the software could answer questions that required following the flow of events in the text, such as “Where is the ring?” and “Where is Frodo now?”
Extracting information from text and figuring out how to put it together into brand-new facts is a difficult task for computers – as Schroepfer noted in his demo, it requires the machine to understand the relationships between objects over time.
Facebook is making this work with a new twist on a recently popular approach to machine learning called deep learning (see “10 Breakthrough Technologies 2014: Deep Learning”). That technique involves using networks of crude “neurons” to process data. Facebook added what Schroepfer described as a “multimillion-slot memory system” to such a network, which functions essentially as a short-term memory where facts can be stored and processed. (read full article …)
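To make the idea concrete, here is a toy sketch of memory-slot question answering: store simple facts in narrative order and answer “where” questions by finding the most recent relevant fact. This is a hypothetical illustration of the flavor of the task, not Facebook’s actual Memory Network, which learns these lookups with neural embeddings rather than string matching:

```python
# Toy "memory" QA: each stored fact is a (actor, location) pair; a
# "where" query scans memory newest-first, mimicking the need to
# follow the flow of events over time.
class TinyMemory:
    def __init__(self):
        self.facts = []  # (actor, location) in narrative order

    def tell(self, sentence: str) -> None:
        # Expects simple phrases like "Bilbo travelled to the cave":
        # first word is the actor, last word the location.
        words = sentence.rstrip(".").split()
        self.facts.append((words[0], words[-1]))

    def where_is(self, thing: str) -> str:
        # The most recent mention wins.
        for actor, location in reversed(self.facts):
            if actor == thing:
                return location
        return "unknown"

m = TinyMemory()
m.tell("Bilbo travelled to the cave")
m.tell("Gollum dropped the ring there")
print(m.where_is("Bilbo"))  # cave
```

The hard part the real system tackles, and this sketch dodges, is resolving references like “there” and “the ring” across sentences.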
3/26/2015 - The World of 2020 According to DARPA
March 26, 2015 – Defense One
By Patrick Tucker
The research agency is making underwater robots that can sleep for years and other robots that can fix satellites in space.
Some of these projects are listed in the agency’s biannual Breakthrough Technologies for National Security report, released this morning to coincide with DARPA director Arati Prabhakar’s testimony before the House Armed Services Committee. Others have been highlighted by DARPA officials who recently spoke around Washington. They include:
Zombie Pods of the Deep
The Upward Falling Payloads program seeks to put robot pods on the ocean floor and let them lie in wait for years until, triggered by either an event or a command, they wake from their deathly sleep and rise to the surface to release their payloads. “Those payloads could hold things like UAVs [drones] that can go up and do ISR [intelligence, surveillance and reconnaissance], to electronic warfare components, to UUVs [underwater drones] that can do similar things under the water,” said DARPA deputy director Steven Walker.
He added that the aim was to create a “worldwide” architecture for such pods, allowing them to be used everywhere —and potentially even replacing submarines.
“Today, the U.S. Navy puts capability on the ocean floor using very capable but expensive submarine platforms. What we would like to do in this program is pre-position capability on the ocean floor and have it be available to be triggered in real-time when needed,” said Walker. He highlighted a wide array of technical challenges in making zombie-pod drones, such as getting them to float to the surface in the right way (a phenomenon that they call upward falling), power supplies and protecting the payloads on the ocean floor for years at a time.
“You put this thing down beneath 4 kilometers, you see extremely high pressures that have to be withstood for potentially years. There’s other issues like biofouling you have to think about dealing with, and then the [communication] system that wakes these things up and tells them what to do.”
The program consists of three parts, DARPA program manager Dick Urban said at a National Defense Association event in Washington. “One is to make a canister that is able to hold different types of payloads.”
The program will enter its second phase this year. “We haven’t actually built anything, but we’ve done the design studies,” Urban said. “We’ll be taking those different technologies, taking them into the water and testing and seeing how well they work.” He said, “If we’re successful in this program, we’ll be showing what’s possible here, but we’ll also be showing what’s possible in terms of a distributed architecture across the entire ocean.”
The Distributed Agile Submarine Hunting or DASH program seeks to develop what Walker called “sub-ulites.” Think of these as satellites for the ocean. “Because they’re deep, they have a detection envelope that’s pretty broad,” he said.
Meanwhile, Urban highlighted the Transformational Reliable Acoustic Path System or TRAPS program, a passive sonar that sits at the bottom of the sea at six kilometers, listening for acoustic signatures that could indicate passing submarines. When it detects one, it sends word to a surface node.
3/26/2015 - Military’s Smartest Hackers Aren’t Human
March 26, 2015 – Defense One
By Aliya Sternstein
A two-year competition led by DARPA could lay the groundwork for a world where machines are in charge of cybersecurity. Next month, unmanned computers all over the globe will face off in a dress rehearsal for a Las Vegas hacking tournament run by the U.S. military.
The $2 million “Cyber Grand Challenge” pits hacker-fighting software against malicious code programmed by Pentagon personnel. During the 2016 finals in Vegas, the humans who built these cyberbots might as well go play blackjack. At stake in the cyber challenge is a chunk of change and perhaps societal gratitude. That’s because the research and development gleaned during the two-year competition could lay the groundwork for a world where machines are in charge of cybersecurity.
At least, that’s the hope of many of the contestants and the Defense Advanced Research Projects Agency, the Pentagon component leading the program. The machines aren’t running the show entirely just yet. Teams of contenders are still doing a little hand-holding. Last December, DARPA held a 24-hour unofficial test run to see if each group’s vulnerability-obliterating software could even function.
During the practice session, “we certainly weren’t just sipping lemonade,” said player David Brumley, co-founder of the Pittsburgh-based startup ForAllSecure. Employees who are dedicated full time to the project were monitoring logs indicating the number of security weaknesses detected and the number that had been fixed. The team also had to make sure its system didn’t crash.
“Since it was mostly automated, we didn’t spend the whole 24 hours with ourselves there,” he said. “We didn’t have to baby-sit. We tried to run this as much like the real competition as possible.”
At the time, Brumley happened to be in Washington for a funding meeting. He and the seven employees assigned to the team often communicate with one another from a distance, using videoconferencing tools and chat rooms. “The Internet culture is distributed by nature, so it becomes second nature to collaborate,” Brumley said last October, when the team was still in the early stages of development.
Spotting the Next Heartbleed before the Bad Guys Do
Team members last year won a $750,000 grant that allows them to take time off work for the endeavor. “Our main motivation is — it’s just fun for us,” explained Brumley, who is also a computer engineering professor at Carnegie Mellon University. “It’s just something that we like and care about. The money allows us to do that.”
That said, they’d be creating the same kind of software in a 9-5 setting even if DARPA hadn’t come calling. Since 2011, Brumley’s research has involved automatic “exploit generation,” which involves pinpointing security holes that are either created intentionally by hackers or, as in the case of the Heartbleed bug, unwittingly by software developers — and then breaking in.
“The way we see it is, the competition was written for our research,” he said last year. (read full article…)
3/25/2015 - Click, Collect, Crime: Preventing Cyber Espionage
March 25, 2015 – Forbes
by Christopher P. Skroupa
Sony’s highly visible data breach once again thrust cybercrime into the national spotlight, prompting heated discussions on company disclosure of breaches and public-private sector communication regarding these costly leaks.
The 2014 breach, which exposed Sony films, corporate emails and staff salaries, was one of the most public breaches in recent memory. But much of the time, cyber breaches are less visible, and can compromise a company for months and even years before management makes the unfortunate discovery.
In 2013, 1.5 million monitored cyberattacks occurred in the US, and according to studies, organizations are attacked almost 17,000 times a year – many resulting in a quantifiable data breach.
For the modern company, the most effective way to avoid becoming vulnerable to cybercrime and corporate espionage is to have the proper safeguards in place and a response plan ready in the event of a cyber breach.
The Timeline of a Cyber Breach
The average time from intrusion to breach discovery is approximately 300 days, according to Don Ulsch, a managing director with PwC who focuses on cybercrime and breach response.
This can translate into a lot of lost intellectual property, trade secrets and other proprietary competitive information.
“Some breaches can remain undetected for years,” Ulsch said. “There is a direct correlation between the span of time between breach and detection and cost – and an undetected breach can certainly have a devastating competitive impact.”
Ulsch said that boards and management should also assume their company has been breached until proven otherwise.
“The first step is being aware of the potential risks cybersecurity threats can pose on a business and shareholder value,” he said. “We’re seeing smart board members beginning to ask the right questions about their business’ cybersecurity preparedness measures, inquiring about policies and procedures when it comes to cyber risk.”
William Gragido, director of threat intelligence at Bit9+CarbonBlack, said it’s also important to understand why an organization that’s been breached was initially targeted, and to recognize the distinction between the two types of targets.
“In reality, every organization should assume it’s a target. However, some are merely targets of opportunity as opposed to targets of intent,” he said. “Organizations that have been compromised and/or breached will need to understand their role as a target within the threat landscape ecosystem and understand what it means to their business partners, customers, providers/vendors, peers and competitors.” (read full article …)
3/25/2015 - IS 'CyberCaliphate' Hacked 600 Russian Websites In 2014
March 25, 2015 – Radio Free Europe/Radio Liberty
by Joanna Paraszczuk
Hackers aligned to Islamic State (IS) militants attacked 600 Russian websites last year, according to a new report by Russian cyber intelligence company Group-IB.
The websites targeted by the group include a number of banks, construction companies, government organizations, and even schools and a local history museum in the North Urals, Group-IB said on March 25.
According to Group-IB’s research, several pro-IS hacking groups appear to have been involved in the attacks on Russian websites. As well as the “CyberCaliphate,” the research also found groups calling themselves Team System Dz, Global Islamic Caliphate and FallaGa Team were involved in the hacks.
Group-IB’s Ilya Sachkov, who helped undertake the research, told RFE/RL that the pro-IS hackers have, at least so far, used only simple hacking techniques and hacking kits to carry out their attacks.
However, companies and governments in Russia and elsewhere should not underestimate the IS group’s hacking capabilities, he warned.
The pro-IS hackers are “trying to find new kits and new malware” to use in future attacks, Sachkov said.
According to Sachkov, the number of pro-IS cyber criminals appears to be increasing.
“There is a risk that IS hackers will switch from relatively easy attacks to more complicated ones, including against critical infrastructure and industrial systems,” Sachkov said.
Pro-IS hackers are not just targeting Russia, Sachkov said.
“Right now, IS will attack any country — their mission is to carry out attacks for attention and to create panic,” he added. (read full article …)
3/23/2015 - Preparing for Cyber War: A Clarion Call
March 23, 2015 – Just Security
by Michael Schmitt
In every War College in the world, two core principles of military planning are that “hope is not a plan” and “the enemy gets a vote.” Any plan developed without sensitivity to these two maxims is doomed to fail. They apply irrespective of the mode in which the conflict is fought, the nature of the enemy, or the weapons system employed. Unfortunately, some states seem to be disregarding the maxims with respect to cyber operations. They include certain allies and friends around the world, states that the United States will fight alongside during future conflicts. The consequences could prove calamitous, especially in terms of crafting complementary strategies and ensuring interoperability in the battlespace.
Hope is Not a Plan
When planning for the future, states cannot hope away scenarios, including two that are looming: “cyber-only conflict” and cyber operations as an aspect of traditional warfare (“cyberized conflict”). Ignoring them is shortsighted. … For instance, a party to a conflict with a superior air force cannot prevail against an enemy that can blind its integrated air defenses by cyber means, thereby allowing it to conduct a devastating first strike that destroys the former’s air force while still on the ground. …
Prudent states create contingency plans for risky scenarios that cannot be ruled out; proper planning is necessarily sensitive to the legal environment in which the ensuing operations will take place. …
But, the process of doing so has been agonizingly slow. … Very few states have even considered whether and when a cyber-only conflict qualifies as an “armed conflict,” international or non-international …. This actuality is problematic, since a failure to understand how international law limits or allows cyber operations is a bit like playing football without knowing the rules — the chances of winning are mighty slim. Some of our potential coalition partners appear to be hoping the game will simply never take place and that therefore there is no need to understand its rules.
The Enemy Gets a Vote
… As armed forces plan, they need to bear in mind that the enemy gets a vote. In other words, planning must always be sensitive to what the enemy is likely to, or might, do. …
In any toe-to-toe fight, a primary objective is to disrupt enemy C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance). The challenge in future conflict will be that much C4ISR cyber infrastructure is “dual-use” (used for both military and civilian purposes)…. The extent to which the enemy has elected to rely on dual-use cyber infrastructure, rather than closed-network military systems, will drive planning on how to disrupt its C4ISR. Because of the connectivity involved, planners will have to carefully account for the likelihood of bleed-over into civilian systems. …
For instance, cyber attacks on civilian cyber infrastructure are no less likely than rocket attacks or the use of suicide bombers against civilians. Since the effectiveness of this strategic option increases in lock step with the frequency and severity of attacks, cyber attacks can be expected to be widespread and destructive. This dynamic will logically push states in the direction of operations that emphasize protection of the population from cyber attacks, an important planning consequence since resources that would otherwise be available to conduct offensive cyber operations may have to be re-tasked to defensive missions. Militaries that fail to plan for this eventuality in advance of armed conflict will be poorly organized, equipped, and trained to rebuff the enemy’s strategic goal of leveraging risk to the civilian population to its advantage.
The cyber variant of the “population at risk” strategy is particularly nefarious. … First, malware is very diverse and one-size-fits-all countermeasures are usually unattainable. Second, the general population does not patch and update systems with sufficient frequency and care to reliably protect them from attack. Finally, technical attribution can be very difficult in cyberspace, thereby making shooting back problematic. …
Spoofing, that is, feigning the identity of an attacker, presents a particular challenge in this regard. It is highly likely that means will be employed to spoof civilian status. …
The use of “zombie” computers (or “bots”) is a further example of how civilian cyber infrastructure can be employed for military purposes. A zombie computer is one over which remote control has been established. Although the zombie qualifies as a targetable military objective, attacking it presents the same dilemmas outlined above with respect to operating from civilian infrastructure. So too does the use of botnets to mount a distributed denial of service operation. The operation involves many zombies (sometimes thousands of them) to overwhelm the processing capacity of a targeted system. When this happens, the system loses functionality. … Simpler cyber operations can be mounted by anyone with access to the necessary malware and basic knowledge of how to use a computer. …
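The capacity problem a botnet exploits can be illustrated with a few lines of arithmetic. This is a simplified, hypothetical model (not attack code, and all the rates are invented for illustration): a target that can process a fixed number of requests per tick serves legitimate users fine until the botnet’s combined request rate swamps it.

```python
# Simplified DDoS capacity model. All constants are illustrative.
CAPACITY = 100     # requests the target can process per tick
LEGIT_RATE = 20    # legitimate requests arriving per tick
REQS_PER_BOT = 5   # requests each zombie sends per tick

def served_legit_fraction(n_bots: int) -> float:
    """Fraction of legitimate requests served in one tick."""
    total = LEGIT_RATE + n_bots * REQS_PER_BOT
    # The server can't tell bots from users, so it serves at most
    # CAPACITY requests drawn from the mix; legit traffic gets
    # through only pro rata.
    return min(total, CAPACITY) / total

print(served_legit_fraction(0))    # 1.0  -- all legitimate traffic served
print(served_legit_fraction(100))  # ~0.19 -- most legitimate requests dropped
```

With a hundred zombies (a tiny botnet by the “sometimes thousands” standard above), the mix is dominated by attack traffic and the system has effectively lost functionality for its real users.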
The question for planners and commanders facing this highly probable scenario is how to respond to cyber direct participation that ranges widely across a population. … These are but a few examples of how law and cyber warfare will influence each other in tomorrow’s conflicts. It is time to think such issues through and to fold them into military planning. After all, if you don’t know where you are going, you will probably end up somewhere else. Nowhere is this truer than on the battlefield.
No plan survives first contact with the enemy
Returning to the original points, the arrival of cyber and cyberized conflict is imminent. Hoping it is not is a prescription for disaster on the battlefield. When it comes, the enemy will get a vote on how it unfolds. …
In fairness, it is difficult to anticipate the nature of future war. This reality is the basis for a third maxim War College graduates learn: “No plan survives first contact with the enemy.” There is a kernel of truth in this cautionary adage. But the key to successful military planning remains to prepare not only for eventualities that are certain, but also for those that are possible. Fortunately, the United States, particularly through U.S. Cyber Command and the Combatant Commands, has launched the process. But, as it stands today, we are likely to fight together with the armed forces of states that have disregarded this conspicuous imperative. The resulting lack of cyber (and legal) interoperability will inevitably sow confusion in coalition operations. For those states that are not planning for cyber war, this reality should serve as a clarion call. (read full article…)
3/23/2015 - Apple co-founder: 'Computers will take over from humans'
March 23, 2015 – Australian Financial Review
Apple co-founder Steve Wozniak has said he wants Apple to take on Tesla in the car business, that he plans to buy the cheapest Apple Watch available when it goes on sale, and that he has recently resigned himself to the fact that computers will one day become the masters of humanity. …
“Computers are going to take over from humans, no question,” Mr Wozniak said.
He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However, Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.
“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.
“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But … if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”
Mr Wozniak said the negative outcome could be stopped from occurring by the likely end of Moore’s Law, the pattern whereby computer processing speeds double every two years. The ever-increasing speeds have come from the shrinking size of transistors, which means more of them can be included in a circuit. But it has been suggested that Moore’s Law cannot continue past 2020 because, by then, the size of a silicon transistor will have shrunk to a single atom.
So unless scientists can start controlling things at the sub-atomic level, by developing so-called quantum computers, humanity will be protected from perpetual increases in computing power. (read full article …)
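The doubling arithmetic behind this prediction can be sketched in a few lines. The 14 nm starting feature size and the roughly 0.2 nm scale of a silicon atom are illustrative assumptions, not figures from the article, and depending on the assumptions chosen, the back-of-the-envelope runway comes out longer or shorter than the 2020 date cited above:

```python
# Back-of-the-envelope sketch of Moore's-Law-style doubling hitting an
# atomic floor. The 14 nm start and 0.2 nm limit are invented assumptions.
import math

def doublings_until_limit(start_nm: float, limit_nm: float) -> int:
    """Number of halvings of transistor feature size before hitting a floor."""
    return math.ceil(math.log2(start_nm / limit_nm))

YEARS_PER_DOUBLING = 2  # the cadence cited in the article

halvings = doublings_until_limit(14.0, 0.2)  # 14 nm process down to ~1 atom
print(halvings)                              # 7 halvings...
print(halvings * YEARS_PER_DOUBLING)         # ...roughly 14 years of runway
```

The exponential is the point: whatever starting numbers one assumes, only a handful of doublings separate today's transistors from single atoms.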
3/23/2015 - Will You Be Murdered By a Robot?
March 23, 2015 – The Daily Beast
by Nick Romeo
Frightening but never fear-mongering, The Future of Violence posits a tomorrow full of techno-threats demanding discerning vigilance.
With a bit of technical knowledge and a good imagination, any malevolent person may soon be able to eradicate the human race. This is a mildly exaggerated version of a fundamental claim in The Future of Violence: Robots and Germs, Hackers and Drones—Confronting a New Age of Threat, an alarming and informative new book by Benjamin Wittes and Gabriella Blum.
By combining the elements of the subtitle in sinister ways, Wittes and Blum conjure a number of nightmarish scenarios: a drone hovers above a packed sports stadium and sprays invisible anthrax spores into air breathed by tens of thousands; a miniature robotic drone that looks exactly like a spider assassinates a businessman as he showers; a malign molecular-biology graduate student modifies the smallpox virus to enhance its lethality and overcome vaccinations.
Of course, with a bit of technical knowledge and a good imagination, any thoughtful person can already eradicate the human race in all manner of weirdly engrossing hypotheticals. In fact, some people, like the philosophers at Oxford’s Future of Humanity Institute, seem to make a nice living by contemplating scenarios of mass death. But Wittes and Blum are not professional prophets of doom. Wittes is a senior fellow at the Brookings Institution, and Blum teaches at Harvard Law School.
Their book doesn’t aim to convince us that terrifying but seemingly outlandish scenarios are in fact imminent. They start from the premise that the terrifying scenarios are not only possible, they’re almost certainly inevitable in some form. The essential task, then, is not to sketch in baroque detail the contours of particular horrific hypotheticals, but to develop a viable set of public and private tools to decrease the likelihood and diminish the severity of a large-scale catastrophe.
Before considering their solutions, though, it’s important to see at least some evidence that the threats they describe are in fact plausible. The major menaces fall into three broad categories that correspond to major areas of technological development: robotics, biotechnology, and networked computing. Each field is ambivalent, a Janus-faced force that can be turned to good or harm. Drones drop essential medicine in remote regions and catch corporations doing things like releasing pig blood from a slaughterhouse into Texas waterways. Then again, they are also ideal devices for unwarranted surveillance and remote-control murder of civilians in foreign countries.
The same stark contrasts characterize biotechnology and computer networks. Depending on the examples you select, research in biotechnology promises to cure deadly diseases or threatens to synthesize lethal and infectious pathogens specifically designed to exploit the human genome’s weaknesses. Networked computers, meanwhile, are either emancipating purveyors of knowledge and multipliers of ingenuity or massively vulnerable systems on which we depend for everything from electricity to a functioning economy. (read full article …)
3/23/2015 - Protecting the critical infrastructure
March 23, 2015 – Help Net Security
by Mirko Zorz
In this interview, Raj Samani, VP and CTO EMEA at Intel Security, talks about successful information security strategies aimed at the critical infrastructure, government challenges, the role of regulation, and more.
HNS: We often hear how ill-prepared governments are for a serious attack on the critical infrastructure. With so many complex and interdependent issues, what can governments do in order to bring the security of their critical infrastructure to a level that can withstand today’s fast-paced threat landscape? What challenges do they face while doing so?
Samani: The first, most fundamental step is to understand the risk. A really practical example of this is demonstrated by results from SHODAN disproving the previously held notion that Industrial Control System (ICS) devices are NOT connected to the internet.
You also make a really interesting point about the fast-paced threat landscape, and I think that poses a significant challenge when it comes to remediation. Within your consumer life, you could potentially accept 90% uptime for the services that you use. However, when we get into the critical national infrastructure (CNI) landscape, even 99% is not an acceptable response. Therefore, I believe that automation will be more prevalent as we consider the CNI landscape, for example secondary substations.
These geographically dispersed assets will go for years without an engineer visiting and, while in an unconnected world this may be acceptable, in the modern connected landscape we will need the ability to remotely connect, manage and also respond in real-time to threats to such systems. It is also worth noting that one of the challenges facing governments is building transparency into the security controls of CNI providers that are typically hosted in the private sector, and equally attracting talent that not only understands security (on an IT level) but also has the engineering experience that comes with working in a plant. (read full article …)
3/21/2015 - Artificial Intelligence Seen With Negativity
March 21, 2015 – Digital Journal
By Bill K. Anderson
Artificial Intelligence is being viewed with the same worried eyes that were once turned on nuclear power. What will it take to overcome the public’s mistrust?
Stuart Russell, winner of the 2012 Blaise Pascal Chair, sees artificial intelligence as having the same potential for destruction as nuclear power, a technology whose subtlest mention, decades later, still makes many people squirm. Like the nuclear physicists and geneticists before them, researchers in artificial intelligence have to be prepared for the possibilities their research may lead to, and to ensure that the results are beneficial to the human species. Many of these ideas were covered in a telephone interview.
The initiative of the Future of Life Institute has met with an unexpected response. It now has more than 5,000 signatories, including most of the leaders in the field. It is a sign that a culture of accountability is developing in the artificial intelligence community. The debate of the moment in Silicon Valley is whether we should be afraid of artificial intelligence, and every big-name technology company is lining up to be heard in debates and conferences on the topic. …
Common to all these players is that they are not experts in the field of artificial intelligence. They are, however, the instigators of the manifesto published on January 12 by the Future of Life Institute, an association founded by Estonian IT entrepreneur Jaan Tallinn, co-founder of Skype, and MIT professor Max Tegmark.
With the help of Elon Musk (who put $10 million on the table to fund the program), the Institute organized a seminar in early January in Puerto Rico, which released this open letter: “Research Priorities for Robust and Beneficial Artificial Intelligence.” …
Asked a few weeks ago, Max Tegmark explained the process: until now, he notes, scientists have been working simply to develop the machines, but given the huge investments and the competition among the technology giants, almost every one of them now has a laboratory focused on artificial intelligence. And that is not counting the engineers around the Pacific Rim, whose numbers are only beginning to reflect what will be needed. “This is a race between the growing potential of artificial intelligence and the wisdom to manage it,” says the physicist. “All investments are devoted to trying to increase the capacity of machines, and virtually nothing is invested on the side of wisdom.” … The researchers themselves are surprised by how quickly machine capabilities are advancing. “There are many areas in which it was assumed they would not succeed in our lifetime. Now people are saying: be careful, perhaps we will succeed,” summarizes Max Tegmark. (read full article …)
3/21/2015 - Cyber crime a systemic risk
March 21, 2015 – AFR Weekend
By Glenda Kwek
Cyber crime is a systemic risk and could be the next black swan event, the head of Australia’s corporate regulator says, as senior business executives warned that companies were not sufficiently prepared for such dangers. Advancements in technology had led to a “significant growth” of cyber crime and had an estimated global cost of $110 billion a year, the chairman of the Australian Securities and Investments Commission, Greg Medcraft, said on Monday.
Mr Medcraft, who was opening the regulator’s annual conference in Sydney, said each cyber attack was estimated to cost an Australian firm about $2 million. He added that a cyber attack could spread quickly and have a “very dangerous effect” on the financial system.
“We are all connected now, if you have access to the internet, so the potential for systemically attacking systems, if you think about it, is enormous. The issue with cyber crime is what you don’t know you don’t know, because it is constantly evolving. You may never avoid it, but it is about being resilient.”
The ASIC chairman said that at a recent IOSCO (International Organisation of Securities Commissions) meeting, the actions of organisations such as the Syrian Electronic Army were raised as one example.
“It’s basically cyber terrorism, and frankly that is actually extremely scary given that we are becoming more and more connected,” he said.
The forum came a month after the Obama administration in the US unveiled its Cybersecurity Framework, a 39-page report on a plan for information sharing between the federal government and public and private critical infrastructure providers. Mr Medcraft said ASIC would draw from some of the ideas raised in Mr Obama’s proposal, and work with regulators around the world to establish international standards on risk management systems. (read full article …)
3/20/2015 - What should smart robots do?
March 20, 2015 – USA Today
by John Shinal
Not all technologists are barreling down the path toward extending human-like intelligence to machines made of metal and plastic. Discussion and debate among them during the South By Southwest Interactive conference in Austin showed a healthy diversity of views on the topic of artificial intelligence, or AI. If history is any guide, the industry that gave us chips, software and the Internet will be taking us all there, probably sooner rather than later. And when the human race arrives at the point when robots are as aware of us as we are of them, one thing we’ll all surely find: that day will bring important new moral dilemmas.
“The question becomes, if we can do anything, what will humanity choose to do,” says Stephen Wolfram, a mathematician whose company, Wolfram Research, built the so-called answer engine used by Apple and other device makers for their audio assistants, including Siri.
The word “anything” may be a bit too grand for most, but not for Wolfram, who sees robots proliferating in many industries, from knowledge work to construction. The results, as with human endeavors, are likely to range from frightening to helpful.
“The next wars will be fought using AI,” Wolfram said, during a dinner for technology executives and press in Austin last weekend.
Yet the cost of many things, such as building houses, is likely to fall as more of the work is turned over to robots. Wolfram predicts a future “when everything is made out of (programmable components).” (read full article …)
3/20/2015 - March Madness inspires brackets ... and cyber crime
March 20, 2015 – PC World
by Tony Bradley
March Madness has begun for NCAA basketball teams—and for cybercriminals, who are getting ready for some games of their own. The extreme popularity of March Madness makes the event a prime target for phishing scams, malware exploits and other cyber attacks.
Cybercriminals will use any major event or tragedy that has captured the attention of the general public as bait for attacks. The increased interest from users and the dramatic spike in emails, links, and other communications related to the event make it much easier to blend in. The demand for information, combined with the massive audience, also means that the odds of an attack’s success are significantly higher.
“Security professionals at businesses of all sizes are preparing for a surge of potential March Madness related cyber-attacks over the next couple of weeks,” said Dan Lohrmann, Chief Strategist and CSO at Security Mentor. “This is because nearly every aspect of any employee’s involvement with March Madness could open up the employee, as well as the organization, to new cyber risks.”
Security firm iSheriff posted a list of the top security threats to expect around this major college sports event. Most of the risks involve bogus apps or sites that lure unsuspecting users with promises of March Madness coverage or access. (read full article …)
3/18/2015 - China Reveals Its Cyberwar Secrets
March 18, 2015 – The Daily Beast
by Shane Harris
A high-level Chinese military organization has for the first time formally acknowledged that the country’s military and its intelligence community have specialized units for waging war on computer networks.
China’s hacking exploits, particularly those aimed at stealing trade secrets from U.S. companies, have been well known for years, and a source of constant tension between Washington and Beijing. But Chinese officials have routinely dismissed allegations that they spy on American corporations or have the ability to damage critical infrastructure, such as electrical power grids and gas pipelines, via cyber attacks.
Now it appears that China has dropped the charade. “This is the first time we’ve seen an explicit acknowledgement of the existence of China’s secretive cyber-warfare forces from the Chinese side,” says Joe McReynolds, who researches the country’s network warfare strategy, doctrine, and capabilities at the Center for Intelligence Research and Analysis.
McReynolds told The Daily Beast the acknowledgement of China’s cyber operations is contained in the latest edition of an influential publication, The Science of Military Strategy, which is put out by the top research institute of the People’s Liberation Army and is closely read by Western analysts and the U.S. intelligence community. The document is produced “once in a generation,” McReynolds said, and is widely seen as one of the best windows into Chinese strategy. The Pentagon cited the previous edition, published in 1999, for its authoritative description of China’s “comprehensive view of warfare,” which includes operations in cyberspace. (read full article …)
3/16/2015 - Stop the robots protest at SXSW
March 16, 2015 – International Business Times
by Mary-Ann Russon
Forget about worrying that robots will one day take over our jobs – how about if robots were to take over the world? This is the reality that “Stop the Robots” protesters at the South by Southwest (SXSW) festival in Austin, Texas are most worried about.
A group of 24 protesters descended on the entrance to the festival on Sunday 15 March to protest against artificial intelligence, all wearing matching T-shirts quoting Elon Musk’s words of doom from an October 2014 speech: “With artificial intelligence we’re summoning the demon.”
They also carried signs with slogans like “Humans are the future”, “AI could spell the end” and “Enhance life, don’t replace it”. You might think that protesters against robots would be technophobes, but the group is led by a computer engineer and several of the members are from the University of Texas, which is renowned for its strong engineering degree programme.
“We stick with a pretty controversial message of ‘Stop the Robots’, even though we ourselves are technologists,” Stop the Robots protest organiser Adam Mason told BBC Radio Five Live.
“It’s when you take artificial intelligence and you put it in charge of a system or an entity that is not human where it can grow and learn and make decisions without a moral guideline. Humans make mistakes. If we make something that is as smart as humans or smarter, why won’t it make mistakes and how will it be beholden to us?”
The protesters are not seeking to halt the progress of technology – rather they are deeply concerned about the uncontrolled growth of artificial intelligence and robots, and feel that more legislation and checks are required to stop AI from getting out of control.
“We have planes that can take off, land and fly through the sky, yet when we put 50 people into the air, we still put a human behind the control,” stressed Mason. “I think we need to continue to consider solutions like that in the future, where we tie decisions like that to human morality, rather than the morality of a computer.” (read full article …)
3/16/2015 - Big Data is an Economic Justice Issue
March 16, 2015 – Huffington Post
by Nathan Newman
The control of personal data by “big data” companies is not just an issue of privacy but is becoming a critical issue of economic justice, argues a new report issued by the organization Data Justice, which itself is being publicly launched in conjunction with the report. I am Director of this new effort and wanted to outline why we see this as a critical issue for progressives. This steady flow of data from individuals into increasingly centralized corporate hands is helping to drive a large portion of the economic inequality that has become central to political debate in our nation.
Big data platforms collect so much information about so many people, details the report, that correlations emerge that allow individuals to be slotted into hiring and marketing categories in unexpected and often unwelcome ways that usually leave them at a distinct disadvantage in negotiations. This enables advertisers to offer goods at different prices to different people, what economists call price discrimination, to extract the maximum price from each individual consumer. Such online price discrimination raises prices overall for consumers, while often hurting lower-income and less technologically savvy households.
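The pricing logic the report describes can be made concrete with a toy market. The willingness-to-pay figures below are invented; the point is only that charging each shopper individually always extracts at least as much as any single posted price:

```python
# Toy market (all willingness-to-pay figures invented) showing why charging
# each shopper individually extracts more revenue than any single posted price.

def uniform_revenue(wtp, price):
    """One posted price: only shoppers who value the good at >= price buy."""
    return price * sum(1 for w in wtp if w >= price)

def discriminatory_revenue(wtp):
    """Perfect price discrimination: each shopper pays exactly their valuation."""
    return sum(wtp)

shoppers = [5.0, 8.0, 12.0, 20.0]  # hypothetical willingness-to-pay, in dollars

best_uniform = max(uniform_revenue(shoppers, p) for p in shoppers)
print(best_uniform)                      # 24.0: the best any single price can do
print(discriminatory_revenue(shoppers))  # 45.0: personalized pricing takes it all
```

The gap between the two numbers is the consumer surplus that profiling lets sellers capture, which is exactly the transfer the report objects to.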
Data crunchers were key to manipulating financial markets and securities throughout the financial industry and big data platforms were critical parts of the marketing machine that used various forms of consumer profiling and price discrimination to push subprime financial products out to the most vulnerable members of the American public. Notably, by the mid-2000s, the lion’s share of the online advertising economy was being driven by subprime and related mortgage lenders, highlighting the ways the profits of big data platforms have often come at the expense of consumer welfare. (read full article …)
3/15/2015 - Finally, an AI movie with some brains
March 15, 2015 – Vanity Fair
by Joanna Robinson
After a recent run of overhyped but ultimately disappointing sci-fi efforts, fans of the genre finally have something worth getting excited about. Non-genre filmgoers should probably get excited too. For the first time in a long time, we have an artificial intelligence film with, well, intelligence.
Ex Machina, novelist and screenwriter Alex Garland’s directorial debut, treads on the well-worn sci-fi territory of technophobia, god complexes, and lethal robots, but it does so with an intelligence and sophistication that never underestimates either the viewer or the capacity of the genre.
The film kicks off when a nice young coder named Caleb (Domhnall Gleeson) wins a mysterious trip to spend a week at the opulent home (more of a compound, really) of Nathan (Oscar Isaac), his boss and CEO of fictional internet giant Blue Book. During his visit, Caleb meets two women: Nathan’s assistant and lover Kyoko (Sonoya Mizuno) and his invention, a robot named Ava (Alicia Vikander). Ava the robot is a CGI confection made of a human face, hands, and feet held together by sinewy mesh, translucent limbs, and a see-through torso filled with blinking lights and whirring gears.
Nathan tasks Caleb with engaging Ava in the “Turing test” (yes, named for that Turing) in order to determine whether his artificially intelligent creation can think and act like a human. As you might expect in a film about a vulnerable young man, an alluring robot girl, and a billionaire with a god complex locked up in a high-tech bunker, things start to go horribly wrong. …
That blurred relationship between the organic and inorganic – used to best effect in Ava’s elegant flesh and mesh design – crops up throughout Ex Machina to underline the notion of artificial intelligence as the next, inevitable phase in our evolution. Vikander’s face moves robotically, yes, but also takes on a curious bird-like quality – bright eyes blinking and inquisitive head tilting. (read full article …)
3/14/2015 - MSU using evolution to create human-like robots
March 14, 2015 – MLive.com
by Rick Haglund
“Chappie,” a new movie about the first robot to possess human intelligence and emotions, isn’t far off from what Michigan State University researchers are attempting to develop. Chris Adami, a computational biologist at MSU, is leading the effort to create robots that could someday think and feel like people, and interact with them.
“My goal is to push this large-scale effort as far as we can,” Adami told me. Artificial intelligence has been the stuff of science fiction for decades. But Adami said its development in the real world has been slowed by the lack of understanding about how the human brain works. “The most that we have come up with is a vacuum cleaner–the Roomba,” he said.
Adami said he thinks the key to developing human-like robots is using the principles of Darwinian evolution to produce artificial brains that will give robots human-level consciousness. “We understand the process of evolution,” he said. “Let’s just evolve these brains. We can speed up the process inside a computer.”
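Adami’s “let’s just evolve these brains” approach can be illustrated, in a drastically simplified form, by the basic loop of a genetic algorithm. This toy evolves a bit-string toward an all-ones target; every name and parameter here is an invented stand-in, not MSU’s actual method:

```python
# Minimal sketch of "evolving brains" inside a computer, the principle Adami
# describes. A bit-string genome stands in for a brain, and counting 1-bits
# stands in for intelligence; every name and number here is an invented toy.
import random

random.seed(42)  # reproducible run
GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 50, 0.02

def fitness(genome):
    """More 1-bits = 'fitter' in this toy stand-in for smarts."""
    return sum(genome)

def mutate(genome):
    """Copy a genome, flipping each bit with a small probability."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Random starting population: no design, just variation.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: the fitter half survives; reproduction adds mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(s) for s in survivors]
    if fitness(population[0]) == GENOME_LEN:
        break  # a "perfect" genome has evolved

print(fitness(max(population, key=fitness)))  # climbs to (or near) the maximum
```

Variation, selection, and inheritance are the only ingredients, which is the sense in which "we can speed up the process inside a computer": nothing in the loop knows what the target looks like.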
But developing robots that can think and act like humans is becoming increasingly controversial as progress on artificial intelligence advances. …
Adami, not surprisingly, is more upbeat. Technological advancement has historically led to greater societal wealth and he sees that trend continuing with the development of artificial intelligence. Initially, robots are likely to take dangerous or menial jobs that humans either can’t or won’t do, he said. Human-like robots also could provide care and companionship for elderly people.
“They would be very interesting conversation partners because they would have differing experiences,” Adami said.
But Adami acknowledges that artificial intelligence will pose a variety of ethical and economic challenges. For instance, should robots with human intelligence and emotions be forced to do jobs that humans want them to do? “Should they be used as slave labor? Maybe it’s more ethical to let them do whatever they want,” Adami said. “Why shouldn’t they have their own rights?”
Human-like robots are likely at least decades away. If he’s successful, Adami said it will probably take 10 to 20 years to evolve robot brains. And once robots are “born,” it would take another 10 to 15 years for their intelligence and feelings to mature. “They’re like infants,” Adami said. “They don’t know much about the real world.”
The question is: will they be friends or foes of humans in that world? (read full article …)
3/13/2015 - Facebook worm
March 13, 2015 – Help Net Security
by Zeljka Zorz
Facebook users are in danger of having their computers turned into bots by a worm that spreads via the social network. The worm, identified as belonging to the Kilim malware family, ends up on victims’ computers after a series of links and redirections. According to Malwarebytes researcher Jerome Segura, it all starts with a message on Facebook linking to scandalous sex photos of teenagers.
The shortened ow.ly link leads to another one, which leads to an Amazon Web Services (AWS) page, which leads to a malicious site (videomasars.healthcare), which checks whether the victim is using a computer or mobile phone. If it’s the latter, they are redirected to affiliate pages for various offers. If they are on their computer, they are asked to download a file from a Box (cloud storage) account.
The file is a Trojan downloader and, when run, it downloads the worm component (a malicious Chrome extension) and additional binaries. It also creates a Chrome shortcut that launches the browser with the malicious extension loaded, pointing directly to the Facebook website.
“In this ‘modified’ browser, attackers have full control to capture all user activity but also to restrict certain features,” Jerome Segura explains.
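The branching step Segura describes, where the malicious site checks the visitor’s device before choosing a destination, can be sketched with a toy router. The user-agent markers and destination labels below are invented for illustration; real campaigns vary:

```python
# Toy sketch of the device-check branch in the redirect chain described above:
# mobile visitors get routed to affiliate offers, desktop visitors toward the
# trojan download. All strings and labels here are invented for illustration.

MOBILE_MARKERS = ("Android", "iPhone", "iPad", "Mobile")

def route_visitor(user_agent: str) -> str:
    """Mimic the check: mobile -> affiliate pages, desktop -> payload."""
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        return "affiliate-offers"
    return "trojan-download"

print(route_visitor("Mozilla/5.0 (iPhone; CPU iPhone OS 8_2 like Mac OS X)"))  # affiliate-offers
print(route_visitor("Mozilla/5.0 (Windows NT 6.1; WOW64)"))                    # trojan-download
```

The same trivial branch explains why researchers analyzing such campaigns fetch suspect URLs with several different user agents: each device class sees a different final destination.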
3/13/2015 - US industrial control systems: 245+ cyber attacks in 2014
March 13, 2015 – HazardEx
US industrial control systems were hit by cyber attacks at least 245 times over a 12-month period, the US Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) has revealed. ICS-CERT is part of the National Cybersecurity and Integration Center, which is itself a unit of the Department of Homeland Security. …
ICS-CERT also acknowledged that it was likely unaware of other incidents that occurred during the period.
“The 245 incidents are only what was reported to ICS-CERT, either by the asset owner or through relationships with trusted third-party agencies and researchers. Many more incidents occur in critical infrastructure that go unreported,” the report said.
The report comes amid rising concerns that industrial control systems are being targeted by Russian hackers, who are seen as new and highly sophisticated players in the cyber arena. (read full article …)
3/13/2015 - Artificial *UN*intelligence
March 13, 2015 – PopSci.com
by Erik Sofge
Stop fantasizing about super-smart A.I., and start worrying about dumb algorithms
Brace yourself. In these crucial weeks before the May release of The Avengers: Age of Ultron, editors and writers are going to unleash an onslaught of think pieces about the real-life threat of artificial intelligence (AI). Whatever box office records the upcoming movie does or doesn’t break, it will offer yet another vision of AI insurgency, in the form of Ultron. Created to protect humanity from a variety of threats, the embittered, James Spader-voiced peacekeeping software decides to throw the baby out with the bathwater and just massacre all of us. It’s the latest, but certainly not the last, time that Hollywood will turn the concept of AI superintelligence into action-movie fodder. And for media outlets, it provides another opportunity to apply light reporting, and deeply furrowed brows, to the greatest problem in AI, which also happens not to be a problem at all.
I’m not arguing that AI is entirely harmless. If anything, it’s inevitable that autonomous algorithms will cause harm to humans. But just as there’s a difference between recognizing the inherent danger of defunct satellites turning into lethal space junk, and ranting about a future filled with orbital lasers and mind-control satellites, the risks associated with AI should be assessed for what they are, and with at least a modicum of sanity. The AI that’s poised to ruin lives has nothing in common with supervillains like Ultron, and won’t be what anyone would consider superintelligent. More likely, the AI that hurts us will be very, very dumb. (read full article …)
3/13/2015 - Microsoft gets AI upgrade
March 13, 2015 – E&T Engineering and Technology Magazine
by Tereza Pultarova
Microsoft is reportedly developing an advanced version of its personal assistant application Cortana and hopes to expand its use to competitors’ devices. According to a report by Reuters, Microsoft has launched an artificial intelligence project called Einstein, hoping to give Cortana a competitive edge over Apple’s more successful Siri system.
Cortana has so far been available on Windows phones and will come in a version for PCs with the arrival of Windows 10 this autumn. However, Microsoft wants to go further and offer the app as standalone software for phones and tablets powered by Apple’s iOS and Google’s Android operating systems. “This kind of technology, which can read and understand email, will play a central role in the next roll out of Cortana, which we are working on now for the fall time frame,” said Eric Horvitz, managing director of Microsoft Research and a part of the Einstein project.
The endeavour is in line with efforts of Microsoft’s CEO Satya Nadella to free Microsoft’s products from the firm’s Windows operating system, making the applications more versatile. Reuters said Microsoft believes the artificial intelligence upgrade it is currently developing will make Cortana the first intelligent personal assistant capable of anticipating users’ needs. (read full article …)
3/12/2015: Top bank cop: 'cyber 9/11 attack could happen'
March 12, 2015 – Times Union
by Larry Rulison
New York state’s top bank regulator told a University at Albany audience on Thursday that one of the greatest threats to the economy today is a “cyber 9/11” attack that causes widespread panic in financial markets. Benjamin Lawsky, who as superintendent of the state Department of Financial Services oversees 3,800 banks and insurance companies, said that trying to stop cyber attacks on the state’s financial system — from data breaches to cyber terrorism — is his biggest concern.
“It’s the one issue that I personally work on every single day,” Lawsky said at UAlbany’s Business School, where he delivered the first-ever Massry Lecture. “What should we do to prevent these nightmare scenarios?” … He told the Times Union after the UAlbany speech that a massive cyber attack on financial markets could force a run on banks and cripple the economy. He said a successful attack that freezes or wipes out bank accounts for even a day would cause panic. (read full article …)
3/12/2015: FBI: Prepare for More Damaging Cyber Attacks
March 12, 2015 – Dark Matters
by Anthony M. Freed
FBI Special Agent for Cyber Special Operations Leo Taddeo warned that given the increasing sophistication of threat actors and the operations they are capable of, the U.S. should be prepared for extremely damaging cyber attacks against networks in both the public and private sectors. “It’s undeniable that the number of breaches is going up, and despite our best efforts, we are constantly surprised by new and important ways to affect these important [computer] networks,” Taddeo said on Bloomberg’s Market Makers show.
“I think that we would be well-served to prepare for — I won’t say a catastrophic attack, but an attack that has an impact that may shake some confidence-levels. The notion that you can protect your perimeter is falling by the wayside,” said Taddeo. “The best organizations out there are monitoring, they are detecting what is on their network before it [has] a major impact.” While Taddeo said he believes that the stock, bond and other critical financial markets are well-protected and generally not connected to the Internet, he admitted that “we’re always surprised” by what attackers are able to achieve, as exemplified by the recent breach at JP Morgan Chase.
In the wake of the highly publicized hack of Sony Entertainment’s networks late last year, the FBI reportedly issued a warning that U.S. businesses should be on alert for “destructive malware” targeting enterprise systems. … Existing security controls are much less effective today as zero-day threats, APTs, web, mobile, and application-layer attacks often bypass these defenses and leave an organization vulnerable to attack. (read full article …)
3/12/2015: 71% of orgs successfully attacked in 2014
March 12, 2015 – SC Magazine
by Adam Greenberg
The number of successful cyber attacks against organizations is increasing, according to the “2015 Cyberthreat Defense Report” from CyberEdge Group, which surveyed 814 IT security decision makers and practitioners from organizations – in 19 industries – across North America and Europe.
Altogether, 71 percent of respondents said that their organization’s global network was compromised by a successful cyber attack in 2014 – a number that jumped up from 62 percent in the year prior – and 22 percent said that their organization experienced six or more successful attacks, according to the report.
Not patching vulnerabilities is one reason successful attacks are on the rise, Steve Piper, CEO of CyberEdge Group, told SCMagazine.com in a Thursday email correspondence. He pointed to the report, which shows that 33 percent of organizations conduct full-network active vulnerability scans less often than quarterly, while 39 percent do so at least once per month.
Another reason for the rise is that attackers are refining their tactics – for example, they perform reconnaissance to carry out targeted spear phishing attacks involving malware, Piper said. In the report, respondents cited phishing attacks, malware and zero-day attacks as the top threats that are causing concern. (read full article …)
3/12/2015: How the CIA can get to cyberspy
March 12, 2015 – LA Times
by Jane Harman
Agility and digital savvy traditionally haven’t been the strong suits of government agencies, so it’s encouraging that CIA Director John O. Brennan wants a big investment in cyberespionage and a new Directorate of Digital Innovation as part of what he calls a “bold” reorganization of the CIA. Brennan’s overhaul is commendable, but it’s urgent to do more to make his agency cyber literate.
Cyber competence isn’t just a set of technical skills; it’s a state of mind. Digital thinking must be baked into the CIA’s whole intelligence mission and its covert operations. No agency employee should be able to say “cyber” isn’t in their job description. As Brennan brings more hackers to Langley, Va., he should be careful not to let new walls rise between the new digital spies and those undercover. There’s precedent for this: The agency’s counter-terrorism center successfully dismantled silos between analysts and operators to track militants around the globe.
Next, the Directorate of Digital Innovation should think critically about what it means to conduct clandestine operations in the digital realm. Unlike drone specs or bomb schematics, code is very difficult to keep classified. Think of the Stuxnet virus. Even though it was written to attack a closed computer network, the code escaped onto the broader Web, where it was publicly dissected by digital security firms such as Symantec. Since then, more cyberespionage tools have been uncovered “in the wild,” meaning some are suddenly available to rogue nations and terrorists. As the CIA gets into this game, it should keep in mind the old admonition not to write down anything you wouldn’t want to see on the front page. In this case, be wary of writing code you wouldn’t want thrown back against your own networks. (read full article …)
3/11/2015: Stuxnet leak probe stalls
March 11, 2015 – ars technica
by Dan Goodin
A criminal leak investigation into a top military official has stalled out of concern it could force US officials to confirm joint US-Israeli involvement behind the Stuxnet worm that targeted Iran’s nuclear program, according to a media report published Wednesday.
Federal prosecutors have been investigating whether retired Marine Gen. James E. “Hoss” Cartwright leaked highly sensitive information to New York Times reporter David Sanger. A 2012 book and article authored by Sanger said Stuxnet was among the crowning achievements of “Olympic Games,” a covert program jointly pursued by the US and Israel to curb Iran’s attempts to obtain nuclear weapons. As reported in author and Wired reporter Kim Zetter’s book Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon, Stuxnet was first seeded to a handful of carefully selected targets before taking hold inside Iran’s Natanz enrichment facility. From there, the malware caused computer-controlled centrifuges to spin erratically, an act of sabotage that forced engineers to scrap the damaged materials.
According to an article published Wednesday by The Washington Post, the probe into Cartwright’s suspected leak to Sanger is generating tension between national security concerns and the Obama administration’s desire to hold high-ranking officials accountable to disclosing classified information. (read full article …)
3/11/2015: Equation Group a 'Study in Stealth'
March 11, 2015 – ThreatPost
by Michael Mimoso
Spies thrive only when they’re able to quietly infiltrate targets and slither away unnoticed; this principle is the same whether we’re talking about the physical world, or digital. The recently uncovered Equation APT group is a prime example of the investment nation-state-sponsored attackers make in stealth. The group, which researchers at Kaspersky Lab speculate has been active since 2001—perhaps as far back as 1996—took great pains to avoid detection with this super valuable espionage platform. It was selective about against whom it was deployed, found unique ways to store stolen data, and developed more than 100 plug-ins, each with a specific function, that are deployed only to certain targets holding certain information.
Equation, announced in February during the Kaspersky Security Analyst Summit, has been linked to the developers of Stuxnet, Flame and other advanced actors. It has one of the biggest malware and exploit arsenals at its disposal, including one of the first modules enabling attackers to reprogram HDD or SSD firmware from more than a dozen vendors. Today, researchers at Kaspersky Lab released a deeper analysis of the older attack platform used by the Equation group. EquationDrug is a complete platform that is selectively installed on targets’ computers. It is used to deploy any of 116 modules (Kaspersky says it has found only 30 so far); the modules support a variety of cyberespionage functions ranging from data exfiltration to monitoring a target’s activities locally and on the Web. (read full article …)
3/10/2015: Old and new ransomware families are active
March 10, 2015 – Help Net Security
Cyber crooks’ love for ransomware continues unabated, and users are warned about several active campaigns trying to deliver the malware to target computers. The campaigns have been set up to distribute different ransomware families. The most well-known and well-documented of these is TorrentLocker. The other two are called CryptoFortress and BandaChor. The latter is not new – in fact, it was first spotted in November 2014. But lately, the malware is again being delivered via email and possibly also via exploit kits. BandaChor attempts to encrypt diverse files found on the target computer: Office and image files, database files and archive files, movies and so on.
What’s interesting about this threat is that in order to get the encryption key, users are instructed to contact the criminals via email …. The crooks behind CryptoFortress, on the other hand, stick to the well-known modus operandi that includes asking for a ransom to be delivered in Bitcoin. (read full article …)
3/10/2015: AI transitions from decision support
March 10, 2015 – TDWI.org
by Mike Schiff
Thanks in great part to IBM’s Watson technology and its ability to answer queries presented in natural language, artificial intelligence (AI) has received much attention in the past few years. This was especially true after Watson competed on the quiz show Jeopardy! in 2011 and won against prior champions. Watson evolved from IBM’s DeepQA project and the Deep Blue chess computer that defeated chess champion Garry Kasparov in 1997.
Watson, whose design goal was to win at Jeopardy!, had major enhancements including the ability to leverage natural language processing, data retrieval and massively parallel processing, as well as access to vast amounts of structured and unstructured data. It also could generate hypotheses and “learn” from its results in order to modify and improve its logic and algorithms.
Recognizing the value of its Watson technology, IBM has actively moved to commercialize it by creating the IBM Watson Group in early 2014 and making it available as a cloud-based service. Big Blue is working with partners and developers to create Watson-based operational and analytic applications. Although IBM’s marketing team has been aggressively publicizing Watson’s capabilities, IBM is certainly not the only vendor to pursue artificial intelligence technology. Major companies including Google, Facebook, and others are investing in it, as are universities and, we can assume, numerous governments as well.
Artificial intelligence has many definitions and encompasses subsets such as heuristic programming, expert systems, machine learning, and cognitive computing technologies. I consider a working definition of AI to be the ability of a machine to improve upon its original programming and enhance its ability to perform tasks that normally require human intelligence. Over time, the application would make better and more accurate decisions. (read full article…)
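The working definition above — a machine that improves upon its original programming through experience — can be made concrete with a minimal sketch. This toy perceptron (invented data, not anything from IBM or Watson) starts with a useless decision rule and refines its weights from labeled examples until its classifications improve:

```python
# A toy illustration of the working definition above: a program whose
# decision rule improves with experience. Data and parameters are
# invented for the example.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights for a linear classifier from labeled samples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(w, b, samples, labels):
    preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
             for x in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Linearly separable toy data: label 1 when both features are "large".
data = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(data, labels)
```

After training, the learned rule classifies all of the toy examples correctly, which is the "better and more accurate decisions over time" behavior the definition describes, in miniature.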
3/10/2015: What we must do about cyber
March 10, 2015 – Lawfare
by Susan Landau
Cyber is listed as threat number 1 but it’s only been number 1 since 2012, suggesting just how fast the cyber threat landscape is changing. As late as 2009, cyber appeared toward the very end of the threat assessment, just behind drug trafficking in West Africa. In the 2007 assessment, cyber was not mentioned at all. No kidding. Not one word. So we’ve gone from no cyber to all-about-cyber in just eight years.
So what’s the explanation for this shift? It’s maturity: a combination of improved technological capabilities and a more sophisticated understanding of threats and US strategic goals.
The US has been suffering cyberexploits — theft of information from networked systems — since at least the early 2000s. But in 2007, cyberattacks — actual damage to networked systems — went from theory to proof of concept. Of course, we’d had little bits of damage before, including a compromised NYNEX phone switch in 1997 that shut down the Worcester Airport for several hours. But prior to 2007, the damage from such attacks had been relatively minor. That year the Idaho National Laboratory ran a test that demonstrated it was possible to destroy a power generator through a remote cyber attack. In 2008 the Baku-Tbilisi-Ceyhan oil pipeline exploded. The pipeline, built over Russian objections, was protectively designed with many sensors to measure pressure, oil flow, etc. Nonetheless, malware was inserted into the control network; this was activated to cause an explosion. Although the perpetrators were never caught, there is little doubt who was behind this: the attack occurred three days before the start of the Russo-Georgian War. In 2010 Stuxnet provided yet another proof of the ability of cyber to remotely cause kinetic damage. …
The Worldwide Threat Assessment presents a much more sophisticated and nuanced picture of the cyber threats the US faces than earlier DoD descriptions did. It behooves not just US military and political leaders to pay attention, but also US industry leaders. Unlike previous cyber-Armageddon characterizations, this assessment captures the real threats to the US public and private sectors. Such threats will only grow more complex and more severe with time; that argues for beginning the development of responses now. Sony and JPMorgan are undoubtedly paying attention to this; one hopes that a much wider swath of US industrial leaders are as well. (read more …)
3/10/2015: Protecting critical infrastructure
March 10, 2015 – The Conversation
by Zahir Tari and Carlos Queiroz
The systems responsible for controlling and monitoring most of our national infrastructure – the services our society relies on – are known as Supervisory Control and Data Acquisition (SCADA) systems. These systems, on which infrastructure such as power stations, water distribution, roads and public transport relies, are increasingly the target of cybercriminals. Needless to say, any disruption to such systems could at best result in financial disaster and at worst the loss of lives.
Faced with increasing and more sophisticated cyber attacks, governments and the private sector need to find increasingly innovative ways to protect themselves. These are the weapons of the future. There will be future wars based on this – you don’t need to attack a country’s military when you can attack it economically. If you stop the electrical system of New York, New York will collapse.
In the past, SCADA systems – and consequently the systems they monitored and controlled – were somewhat protected because they relied on proprietary technologies of which the wider IT industry had little awareness. With a very closed industry, little information spread beyond the SCADA community. Today, SCADA systems have evolved from standalone, proprietary solutions and closed networks into large-scale, highly distributed computing systems operating over open networks such as the internet. In addition, the hardware and software utilised by SCADA systems are now, in most cases, based on COTS (Commercial Off-The-Shelf) solutions.
Although such changes have increased the efficiency and sophistication of the services provided, they have also increased their vulnerability to malicious and sophisticated attacks. The once closed, proprietary software and hardware infrastructure is now vulnerable to attacks originating from external (internet) and internal corporate networks. The attacks plaguing such systems are the same ones that have been affecting ordinary systems over the years, such as viruses, trojans and worms. Additionally, the network protocols used by SCADA systems were not designed with security requirements in mind. For instance, the majority of protocols do not support any type of encryption.
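The point about missing security requirements is easy to see in the wire formats themselves. The sketch below builds and parses a minimal Modbus/TCP request (Modbus is one of the most widely used SCADA protocols): the frame consists only of a header, a function code and the register range — there is no field anywhere for a password, key or signature. The helper functions and values are illustrative, not from any particular product:

```python
import struct

# Illustrative sketch: a minimal Modbus/TCP "read holding registers"
# request. Note what the frame contains -- and what it doesn't: there is
# no authentication or encryption field of any kind, so any peer that
# can reach the port can send well-formed commands.

def build_read_request(transaction_id, unit_id, start_addr, quantity):
    # PDU: function code 3 (read holding registers), start address, count
    pdu = struct.pack(">BHH", 3, start_addr, quantity)
    # MBAP header: transaction id, protocol id (always 0),
    # length of remaining bytes, unit id
    header = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return header + pdu

def parse_function_code(frame):
    # The function code is the first byte after the 7-byte MBAP header.
    return frame[7]

frame = build_read_request(1, 17, 0, 10)
```

Everything in the 12-byte frame is plaintext, which is why the article's point about protocols "not designed with security requirements in mind" matters so much once these networks are reachable from the internet.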
Over the last few years there has been a push from the computer security industry to adapt its security tools and techniques to the security issues of SCADA systems. You can see this in the number of conferences dedicated, in whole or in part, to SCADA systems. At the same time, the US government, together with industry, has put in place a set of standards and regulations related to protecting SCADA systems. Those initiatives are on the right track and may eventually bring SCADA security to the level currently deployed on enterprise and personal computer systems. As we all know, however, that is not sufficient; otherwise successful malicious attacks on computer systems would be non-existent. (read more …)
3/9/2015: Cyberespionage top priority for new CIA body
March 9, 2015 – Wired
by Andy Greenberg
In the CIA’s mission of global influence and espionage, its hackers have just been elevated to a powerful new role.
On Friday afternoon, CIA director John Brennan publicly issued a memo to the agency’s staff calling for a massive re-organization of its hierarchy and priorities. And center stage in the CIA’s new plans is a new Cyber Directorate that will treat “cyber”—in federal-speak, hackers and hacking—as a major new focus for both offense and defense.
“Digital technology holds great promise for mission excellence, while posing serious threats to the security of our operations and information, as well as to U.S. interests more broadly,” Brennan’s memo reads. “We must place our activities and operations in the digital domain at the very center of all our mission endeavors. To that end, we will establish a senior leadership position to oversee the acceleration of digital and cyber integration across all of our mission areas.” …
The CIA’s announcement represents yet another sign that cyber-offense is gaining importance for practically every intelligence and military agency. The FBI late last year asked for new rules of criminal procedure that would vastly expand its power to hack into the computers of criminal suspects. And we know from Snowden leaks that the NSA has built the world’s most powerful hacking organization, pulling off high-resource operations that have rarely been seen elsewhere in the cybersecurity world. The NSA’s most recent operations reportedly include hacking SIM card manufacturer Gemalto and planting insidious malware in the firmware of hard drives. (read more …)
3/9/2015: FAA vulnerable to cyberattacks
March 9, 2015 – NewsFactor.com
by Jef Cozza
A U.S. senator says the Federal Aviation Administration (FAA) is in dire need of a security upgrade. Without major changes to the agency’s computer systems, it will remain vulnerable to attacks from hackers, foreign governments, and terrorist organizations, according to Senator Chuck Schumer, a Democrat from New York. The FAA is responsible for the country’s network of air traffic controllers through the National Air Traffic Control System, and a security breach could spell disaster for U.S. air travel, he said. Schumer based his comments on findings by the Government Accountability Office (GAO), the agency responsible for auditing and monitoring the federal government’s various organizations, according to the New York Daily News.
“If they were able to hack the system, thousands of planes could be in the air unguided. Sophisticated terrorists could even steer planes into one another,” Schumer said. In January, the GAO issued a scathing 46-page report detailing the security failings it found at the FAA in a document titled “FAA Needs to Address Weaknesses in Air Traffic Control Systems.” The GAO listed 168 different actions that the FAA should take to better protect itself from a malicious breach. The FAA relies on the National Airspace System (NAS), a critical component of the nation’s transportation infrastructure, according to the GAO report. “Given the critical role of the NAS and the increasing connectivity of FAA’s systems, it is essential that the agency implement effective information security controls to protect its air traffic control systems from internal and external threats,” according to the report. (read more …)
3/9/2015: Anticipating cyberattacks with machine learning
March 9, 2015 – Wall Street Journal
by Rachael King
Artificial intelligence and machine learning are playing a larger role in cybersecurity, which can in theory help companies identify risks and anticipate problems before they occur. The idea is to create software that can adapt and evolve to combat ever-changing attack strategies, or identify patterns of suspicious behavior.
Traditional security mechanisms have leveraged rule, pattern, signature and algorithm-based approaches to detect threats, and that’s a problem, according to Paul Stokes, CIO of the University of Victoria in British Columbia. “These approaches require constant care and feeding to identify and mitigate security threats,” he said. “I think machine learning changes the game.” (read more …)
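The contrast Stokes draws — static rules versus learning from observed behavior — can be sketched in a few lines. Instead of a fixed rule like "block after N failures," a behavioral detector learns a baseline from past activity and flags deviations. The login counts and threshold below are invented for illustration; real products use far richer features:

```python
import statistics

# A toy sketch of the behavioral approach described above: learn a
# baseline from normal activity, then flag observations that deviate
# sharply from it. All counts here are invented example data.

def find_anomalies(history, new_counts, threshold=3.0):
    """Return counts more than `threshold` standard deviations above
    the mean of the observed baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [c for c in new_counts if (c - mean) / stdev > threshold]

# Baseline: failed logins per hour observed during a normal week.
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
# New observations: one hour shows a burst typical of a brute-force run.
suspicious = find_anomalies(baseline, [5, 6, 97, 4])
```

Because the baseline is learned rather than hand-coded, the detector adapts as normal behavior shifts — the "constant care and feeding" of signature lists is what this approach tries to avoid.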
3/9/2015: US cyber crime & calls for zero tolerance
March 9, 2015 – ComputerWorld
by Warwick Ashford
The US indictment of three suspected cyber criminals in connection with the largest haul of names and email addresses to date, has prompted a call for zero tolerance for cyber intrusions. Two Vietnamese citizens living in the Netherlands, Quoc Nguyen and Giang Hoang Vu, were charged with hacking into at least eight US email service providers from February 2009 to June 2012, including Epsilon in 2011. Millions of names and email addresses were stolen from Epsilon, which handles email marketing campaigns for 2,500 companies, including Marks & Spencer and the Ritz-Carlton, which were among more than 40 companies affected by the breach. Epsilon confirmed it was among the victims in the case and thanked the US authorities for “bringing this criminal activity to prosecution,” in a statement to security blogger Brian Krebs.
Acting US Attorney John Horn said the scope of the intrusion is unnerving because the hackers did not stop at stealing marketing data and more than a billion email addresses, but also hijacked the email companies’ systems to send bulk spam emails.
The hackers then made two million dollars from the email traffic directed to specific websites, according to a statement published by the US department of justice. Canadian citizen David-Manuel Santos Da Silva of Montreal was charged with conspiracy to commit money laundering for helping Nguyen and Vu to generate revenue from the spam and launder the proceeds. FBI agent J. Britt Johnson said large-scale and sophisticated international cyber hacking rings are becoming more problematic for both the law enforcement community and US businesses. He said the federal indictments, apprehensions and extraditions in this case represent several years of work as the FBI and its cyber-trained agents and technical experts acted quickly to stop the ongoing damage to the numerous victim companies as a result of the individuals’ hacking activities. (read more …)
3/8/2015: Checking out what Big Data knows about you
March 8, 2015 – Pittsburgh Post-Gazette
by Rich Lord
Maybe the data gods don’t have us all pegged — yet. One of the biggest data collectors and sellers, Little Rock-based Acxiom, knows all about my house, my car and my purchases of vegetable seeds. It believes I have 32 interests — 21 of which are accurate. But its files also say that my education ended with high school, and that I’m an unmarried, childless craftsman with a truck, earning less than $30,000 — all wrong. (To be fair, Acxiom’s data on two of my colleagues were more accurate.)
Acxiom has thousands of pieces of information on nearly every U.S. consumer, the Federal Trade Commission reported last year. It has a sense of their political affiliations, charitable giving, community involvement, online and credit card purchasing habits, earnings and social media use, according to a Government Accountability Office report from 2013. Unlike some of its competitors, Acxiom allows you to go to a website, aboutthedata.com, see your data profile, and edit or delete it. (It asks your name, address, email address, date of birth and the last four digits of your Social Security number, so Acxiom can verify that you are you.) …
Acxiom reports that 780,000 people have viewed their profiles, with just 3 percent making changes and 3 percent opting out of the company’s filing cabinet. A 2013 Senate Commerce Committee report found that Acxiom’s customers included “47 Fortune 100 clients; 12 of the top 15 credit card issuers; seven of the top 10 retail banks; eight of the top 10 telecom/media companies; seven of the top 10 retailers” and majorities of the big players in nine other industries. … Acxiom is one of a host of companies that rakes in data and sells it to marketers, who use it to decide what you get in the mailbox and see in banner ads online. The industry has claimed that it added $156 billion in revenue to the U.S. economy in 2012. (read more …)
March 8, 2015 – Harvard Political Review
by Pietro Galeone
As ballot day approaches for Israelis, observers and commentators around the world perk up their ears, make their predictions, and voice their opinions. Some of these opinions, however, tend to be louder than others. Staunchly opposed to Israeli nationalist policy, the international “hacktivist” group Anonymous has carried out its now-annual take-down-Israel cyber operation, which they call #OpIsrael. Although it began a few years ago as a campaign to release the Palestinian territories from Israeli occupation, it has by now turned into a general attack on numerous Israeli websites. The operation, publicized through Twitter accounts such as @AnonOps and @AnonymousPress, began on February 24 and peaked on March 1, taking down thousands of websites, more than 700 of which are directly controlled by the Israeli government. … After the Charlie Hebdo attacks in January … and the desecration of various Jewish cemeteries in France and Germany in February, many Israelis feel that their culture is under siege. With ISIS forces operating not far from Israel’s border and Iran’s delicate nuclear negotiations just next-door, Israel faces pressure from every cardinal direction. … As Netanyahu now seeks confirmation for a renewed mandate, these mounting pressures play an important role in the outcome of the election. … In particular, the pressure from Anonymous and recent events in Europe will benefit Netanyahu if he manages to exploit them as matters of national security. (read more …)
March 8, 2015 – Baltimore Post-Examiner
by Megan Wallin
Are you on your smartphone? Someone could be listening. Are you browsing Google? Someone knows what you’re searching for. Are you reading this in your underwear with a webcam on? Someone might be watching. …
And the fact that this is an issue of international concern makes it all the more frightening. … Prompted by the hacking of Sony, our president and the UK’s Prime Minister, David Cameron, have been in talks regarding cyber “war games.” What does this entail? This week, BBC reported that the cyber war games will begin with stagings involving the Bank of England and commercial banks, then move on to Wall Street and the City of London. The aim is to increase our ability to predict and thwart plans to sabotage persons and institutions via cyber threats. Like many great sci-fi thrillers, we will be testing our own underbelly and infrastructure with agents, politics and strategic tactics. However, all this excitement does make you wonder if we are not, in fact, becoming too reliant upon a potentially cannibalistic system. (read more …)
3/7/2015 - On Cyber Arms Control
March 7, 2015 – Lawfare Blog
by Herb Lin
A bit late, but one more observation about the New York Times editorial calling for cyber arms control. In their words, “the best way forward [to reduce cyber threats] is to accelerate international efforts to negotiate limits on the cyberarms race,” in much the same way that we did with the nuclear arms control treaties of the Cold War. Paul Rosenzweig correctly points out that we need to know what a cyber weapon is before we can regulate it. But there are two other fundamental points that also need to be addressed. …
First, for the United States (or any other nation) to agree to limits on destructive cyber weapons, it would also have to be willing to limit its cyber espionage activities. Since cyber espionage is such a productive approach for gathering intelligence information, it is highly unlikely that any nation would agree to such limits—especially in the uncertain world of today and tomorrow where clandestinely collected information is so very useful.
Second, a key aspect of the nuclear arms race involved each side accumulating more and more nuclear weapons relative to the other side. The cyber “arms race” doesn’t involve each side trying to match the other side—the goal is to stockpile as many zero-day vulnerabilities as possible regardless of how many the other side has. So the fundamental dynamic of numerical “action-reaction” just doesn’t apply. …
And what counts as our net national interest? That’s what we need to have a national discussion about. Many advocates of restraint will say “our aggressive pursuit and use of offensive cyber capabilities—including for espionage—legitimizes the use of such capabilities against us,” and so we should not do that. Many opponents of restraint will say that “our adversaries will use their offensive cyber capabilities against us no matter what,” so restraining ourselves unilaterally is foolish. (read more …)
March 7, 2015 – USA Today
by Steve Weisman
Just when you thought it was safe to use your computer again after last year’s Heartbleed, Shellshock and other computer bugs that threatened your security (and just as I predicted in my column of Dec. 20, 2014), researchers have discovered yet another security flaw that threatens millions of Internet users. This one goes by the clever acronym FREAK, which stands for Factoring Attack on RSA-EXPORT Keys. This bug affects the SSL/TLS protocols used to encrypt data as it is transmitted over the Internet, and potentially puts at risk private information sent over the Internet, including passwords, banking and credit card information. To better understand FREAK, it is necessary to go back to early-1990s restrictions that capped encryption in software to be sold abroad at 512-bit keys.
The reason for this was that the federal government wanted to make it easier for federal intelligence agencies to spy on overseas software users. Following much criticism and protest by the technology community, these restrictions were ended. However, many software developers continued to support the weaker encryption. When you use the Internet, your computer negotiates with the server over how best to protect your data. Due to the FREAK flaw, some software, including Apple’s Secure Transport, can be manipulated into accepting the weaker encryption, which a sophisticated hacker can then break to steal your data. This type of hacking is called a “man-in-the-middle” attack and is used to intercept and decrypt what the victim believes are protected, encrypted communications.
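The negotiation trick at the heart of FREAK can be modeled in a few lines. In this toy sketch (simplified labels, not real TLS cipher-suite identifiers or a real handshake), the client offers both strong and export-grade suites; a man-in-the-middle strips the strong offers, and a server that still supports export-grade keys happily negotiates the weak 512-bit option:

```python
# Toy model of a FREAK-style downgrade (not real TLS). Suite names and
# strengths are simplified labels invented for illustration.

STRENGTH = {"RSA_2048": 2048, "RSA_EXPORT_512": 512}

def server_pick(offered, supported):
    """Server picks the strongest mutually supported cipher suite."""
    common = [s for s in offered if s in supported]
    return max(common, key=STRENGTH.get) if common else None

client_offer = ["RSA_2048", "RSA_EXPORT_512"]
server_supports = {"RSA_2048", "RSA_EXPORT_512"}  # still accepts export keys

# Untampered handshake: both sides prefer the strong suite.
honest = server_pick(client_offer, server_supports)

# The attacker rewrites the hello so only export-grade offers remain;
# neither endpoint notices, and the weak 512-bit key is negotiated.
tampered_offer = [s for s in client_offer if "EXPORT" in s]
downgraded = server_pick(tampered_offer, server_supports)
```

The fix for servers was simply to stop supporting export-grade suites at all: if `server_supports` contained only strong suites, the tampered offer would fail outright instead of silently downgrading.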
March 7, 2015 – JustSecurity.org
by Phil Hirschkorn
On April 3, 2009, Abid Naseer, a 22-year-old Pakistani student, sat in front of his computer in his Manchester, England, apartment and drafted an email to his al-Qaeda handler in Peshawar, Pakistan, announcing that the group’s planned car bombing of a Manchester shopping mall was ready to go. Naseer used code words, referring to the attack as an Islamic wedding, or nikah, and his chosen bomb component, ammonium nitrate, by a woman’s name, Nadia, for nitrate.
“My affair with Nadia is soon turning to family life,” Naseer wrote to “Sohaib,” an alias for his handler. “Both families have agreed to conduct the nikah after the 15th and before the 20th of this month.” “You should be ready between those dates,” Naseer continued. “I wished you could be here as well to join the party.”
Naseer saved the message to a thumb drive and went to the local Cyber Net Café. He plugged the drive into a public computer and started listening to a violent Arabic recording to psych himself up. “We are marching toward them with turbans that will become their burial garments,” the chanting went. “They spilled their blood generously and with love. Looking forward to death in large numbers.”
Naseer uploaded the “wedding” message to his email account and sent it to Sohaib. Then Naseer deleted all the emails he had sent to al-Qaeda central in previous months. The clock was now ticking on the Manchester mall attack, possibly on the Easter holiday weekend. (read more …)
3/6/2015: IBM buys deep learning startup
March 6, 2015 – NewsFactor
by Jef Cozza
Big Blue moved to beef up its Watson artificial intelligence operation with the acquisition of AlchemyAPI, a machine-learning systems maker. Announced Wednesday, the deal aims to bring 40,000 new developers into IBM’s Watson developer community. IBM said it plans to integrate AlchemyAPI’s deep-learning technology into the core of its Watson platform, augmenting Watson’s abilities to identify hierarchies and understand relationships within large sets of data. The company said it expects the new technology to significantly improve Watson’s learning capabilities. Financial terms of the deal were not disclosed.
AlchemyAPI was founded in 2005. Its API services are designed to help developers build artificial intelligence applications with advanced data analysis capabilities such as taxonomy categorization, entity and keyword extraction, sentiment analysis and Web page cleaning. According to AlchemyAPI, its software platform already processes billions of API calls per month across 36 countries and in eight different languages: English, French, German, Italian, Portuguese, Russian, Spanish, and Swedish.
“We founded AlchemyAPI with the mission of democratizing deep learning artificial intelligence for real-time analysis of unstructured data and giving the world’s developers access to these capabilities to innovate,” said Elliot Turner, founder and CEO, AlchemyAPI. “As part of IBM’s Watson unit, we have an infinite opportunity to further that goal.”
In addition, the acquisition will expand the number and types of scalable cognitive computing APIs available to its clients, developers, partners and other members of the Watson ecosystem, according to IBM. That expansion includes language analysis APIs to address new types of text and visual recognition, and the ability to automatically detect, label and extract important details from images.
3/6/2015: What Chappie says-and doesn't say-about AI
March 6, 2015 – Scientific American Blogs
by Seth Fletcher
I’m not a scold about scientific accuracy in film. As long as a movie is not built on a fundamentally stupid premise (“Lucy,” the Scarlett Johansson vehicle predicated on the false notion that humans use only 10 percent of their brains, comes to mind), I am happy to let myself be entertained. You might say I endeavor to use only 10 percent of my brain when I watch Hollywood movies. Keep that in mind when I tell you that I enjoyed Chappie. It is a loud, silly, violent quasi-dystopian Short Circuit/Robocop tribute, and it is every bit as ridiculous as that sounds. Actual professional film critics will tell you all about the movie’s problems. (Check Rotten Tomatoes.) But I enjoyed watching Sharlto Copley, via motion-capture, play a charming and sympathetic robot, and visually, I thought it was great. As a bonus, I finally learned who Die Antwoord are. But I’m not here to play movie critic. I went to see Chappie with a specific question in mind: What does the movie have to say about one of the big debates of our age—whether we should embrace artificial intelligence or fear it? …
Is it fair to expect an action film like Chappie to deliver a nuanced critique of the relationship between humans and machines? Maybe not. But Blomkamp’s previous films dealt with big ideas, and Chappie’s publicists were offering interviews with a Caltech roboticist (who, to be fair, did not consult on the movie) as part of their PR campaign, which suggests they take the movie’s depiction of AI seriously. Plus, Blomkamp has been giving interviews on the subject, and from those we can gather that he thinks fear of robots is overblown, that we should be looking to artificial intelligence to help us solve problems.
He may be right. But I doubt the resolution to a debate that now involves some of the world’s smartest technical minds will be that simple. On one hand, we have Elon Musk, Stephen Hawking and Nick Bostrom warning that superintelligent machines could pose an existential threat to humanity. On the other, we have AI researchers and experts telling us these fears are misplaced. Meanwhile, technology is evolving quickly. Last week in Nature, researchers from Google’s DeepMind artificial-intelligence division announced that their deep Q-network algorithm could teach itself to play 49 classic Atari 2600 games with only minimal input. On about half of those games, the algorithm could play as well as a human. (read more …)
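The deep Q-network described in that Nature paper is, at its core, a neural-network approximation of the classic Q-learning update rule. The minimal tabular sketch below shows that underlying update on a toy five-state chain; the environment, hyperparameters, and episode count are my own illustration, not taken from the paper.

```python
# Minimal tabular Q-learning on a toy 5-state chain. This is the classic
# update that DeepMind's deep Q-network approximates with a neural net.
import random

N_STATES = 5          # states 0..4; reward only for reaching state 4
ACTIONS = [-1, +1]    # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should be "always step right" (action index 1).
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The Atari result replaced this lookup table with a convolutional network reading raw pixels, but the learning signal, reward plus discounted best future value, is the same.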
3/6/2015: CIA Director Brennan orders major overhaul
March 6, 2015 – BBC News
CIA director John Brennan has ordered one of the largest reorganisations of the spy agency in its history. In a memo to staff, the director said that the changes were driven by a wider range of threats and the impact of technological advancements. The reforms aim to impose greater accountability on managers and to improve cyber capabilities. … In his memo to staff, the spy chief highlighted the dangers presented by cyber terrorism, but also the opportunities that technological advancement offered the agency. He called on the CIA to “embrace and leverage the digital revolution” and announced the creation of the Directorate of Digital Innovation. (read more …)
3/6/2015: What it will take for a head transplant to work
March 6, 2015 – Washington Post
by Tuan C. Nguyen
In 1970, a team of neurosurgeons pulled off a feat that was every bit as remarkable as it was controversial. Led by the renowned Robert J. White, they removed the head of a living rhesus monkey and hours later reattached it to a separate body. Though paralyzed, the donor body supplied enough blood to the brain to allow the monkey to consciously smell, hear, taste and see for a few days before the body’s immune system rejected the transplant. White, who passed away in 2010, was never able to advance the procedure to the point where it could be performed on humans, as he had hoped. Consequently, what was to be a milestone, a revolutionary way to save patients dying of organ failure or suffering from degenerative diseases such as muscular dystrophy and Lou Gehrig’s disease, has since gone down as little more than a medical oddity, albeit one with all sorts of ethical and philosophical implications.
But in recent years, there’s been renewed talk of perfecting such a procedure. This time it’s spearheaded by Sergio Canavero, an Italian neurosurgeon at the Turin Advanced Neuromodulation Group who has claimed that advances in medical science now make it possible to carry out head transplants that would allow patients to not only survive, but function normally. And with sufficient financial and legal support, he envisions successfully performing a transplant on a human as early as 2017.
“I think we are now at a point when the technical aspects are all feasible,” Canavero told New Scientist.
While expert opinions on Canavero’s claims vary, the possibility isn’t as far fetched as it sounds. James Harrop, director of Adult Reconstructive Spine at Thomas Jefferson University in Philadelphia and co-editor of Congress of Neurological Surgeons, says that the kind of complications the surgeons faced back in 1970 could easily be fixed using today’s methods. “Technically it’s not any harder than a liver and heart transplant,” he says. “We now have immunosuppressant drugs that might prevent the body from rejecting it. Arteries and the ends of the esophagus can be sewn together. Bones can be fused. As long as the cuts are in place and if you do it high enough, there isn’t that much to hook back up.” (read more …)
March 6, 2015 – JustSecurity.org
by Kristen Eichensehr
Last week Director of National Intelligence James Clapper released the 2015 Worldwide Threat Assessment of the US Intelligence Community and testified about it before the Senate Armed Services Committee. “Cyber” tops the list of “global threats” again this year. As others have noted (see here and here), the Assessment and DNI Clapper’s opening statement contained a number of reveals, including attributing the 2014 attack on the Las Vegas Sands Corporation to Iran and announcing that “the Russian cyber threat is more severe than we’ve previously assessed.” I want to focus in this post on a few additional issues raised by the Assessment: its effort to shift the debate on the nature of cyber risk; its emphasis on threats to integrity of information; and its repeated references to private parties as actors in national cyber strategy.
1. Changing the Debate on the Nature of Cyber Risk: … The Assessment states, “the cyber threat cannot be eliminated; rather, cyber risk must be managed.” This manage-but-not-eliminate strategy depends on the recharacterization of the nature of the threat. It seems implausible that the DNI would articulate a goal of only managing the risk of a “Cyber Armageddon,” but by discounting that risk and redefining cyber risk as “ongoing . . . low-to-moderate level cyber attacks,” the intelligence community has shifted the nature of the threat into something that can be managed and need not be eliminated. (read more…)
3/5/2015: DARPA Robotics Challenge Finals
March 5, 2015 – DARPA.mil Press Release
The international robotics community has turned out in force for the DARPA Robotics Challenge (DRC) Finals, a competition of robots and their human supervisors to be held June 5-6, 2015, at Fairplex in Pomona, Calif., outside of Los Angeles. In the competition, human-robot teams will be tested on capabilities that could enable them to provide assistance in future natural and man-made disasters. Fourteen new teams from Germany, Hong Kong, Italy, Japan, the People’s Republic of China, South Korea, and the United States qualified to join 11 previously announced teams. In total, 25 teams will now vie for a chance to win one of three cash prizes totaling $3.5 million at the DRC Finals. …
To qualify for the DRC Finals, the new teams had to submit videos showing successful completion of five sample tasks: engage an emergency shut-off switch, get up from a prone position, locomote ten meters without falling, pass over a barrier, and rotate a circular valve 360 degrees.
March 5, 2015
March 5, 2015 – BBC News
There are at least 46,000 Twitter accounts operating on behalf of Islamic State (IS), a new US study claims. The actual number, identified in the final three months of 2014, is probably much higher, says the report, co-authored by the Brookings Institution. It said typical IS supporters were located within the militants’ territories in Iraq and Syria. Three-quarters of them tweet in Arabic and about one-in-five use English. They have on average about 1,000 followers. Islamic State has become well known for its use of social media, especially Twitter, to propagate its message.
The study, called The Isis Twitter Census, was written by JM Berger of Brookings and Jonathon Morgan, a technologist. Jihadists will exploit any kind of technology that will work to their advantage, said Mr. Berger, but IS is much more successful than other groups. Most of these IS accounts were created in 2014, suggesting that the numbers are climbing very steeply, despite more than 1,000 IS-related accounts being shut down by Twitter in the final months of 2014. Mr. Berger’s report put a maximum estimate of pro-IS accounts at 90,000 but concluded that the “best” estimate was 46,000. Even that lower figure would put their reach into the millions, said Aaron Zelin, an expert on jihadist groups, and a fellow of the Washington Institute. (read more…)
March 5, 2015 – Fortune
by Erik Sherman
Fear of robots, computers, and automation may be at an all-time high since B movies of the 1950s. Not only is there concern about jobs — even white-collar occupations are vulnerable — but big names in technology have weighed in with their worries. Philanthropist and Microsoft co-founder Bill Gates said, “[I] don’t understand why some people aren’t concerned” about artificial super intelligence that could exceed human control. Physicist Stephen Hawking thinks that “development of full artificial intelligence could spell the end of the human race,” as machines could redesign themselves at a rate that would leave biological evolution in the dust. Tesla Motors CEO and technology investor Elon Musk said research in the area could be like “summoning the demon” that is beyond control. He donated $10 million to the Future of Life Institute, which sponsors research into how humanity can navigate the waters of change in the face of technology.
That’s one camp.
Then there’s another that says doomsday concerns are overblown and that, like a new age FDR, the only thing to fear is fear itself. These people — technologists, economists, and others — say that the combination of artificial intelligence, automation, and robotics will usher in new, better solutions to world problems. They argue that the fear of technology is old and past experience has proven that while new developments can kill off jobs, they create even more to replace them. Machines could, in theory, replace humans in a wide variety of occupations, but their shortcomings in creativity, adaptability, and even common sense are vast, making them unable to do so in the foreseeable future. Instead, these people suggest, robots and computers will work side by side with humans, enhancing productivity and opening new vistas of freedom for people to move beyond the drudgery of current life. In short, the coming years will look like all the ones that came before and society will sort itself out. In fact, a new film “Chappie,” due out March 6, depicts an anti-Terminator view, a world in which robots hold the solutions and humans are the bad guys. “You would have something that has 1,000 times the intelligence that we have, looking at the same problems that we look at,” the director Neill Blomkamp told NBC News. “I think the level of benefit would be immeasurable.” (read more…)
March 5, 2015 – Dark Reading
by Sara Peters
While many in the security industry are pushing for better methods of assigning attribution for cyberattacks after the damage is done, there is also a growing effort to strengthen early-stage defenses — to stop attacks before they have a chance to do much harm. OpenDNS has introduced a new tool to fit into that second category. NLPRank is an advanced threat detection model that uses the “malicious language of the Internet,” to identify suspicious domains almost as soon as they’re registered. “Only recently have we been able to prove just how valuable [NLPRank] is,” says OpenDNS director of security research Andrew Hay. Now, Hay says, the threat model has proven able to sniff out attack campaigns “long before” indicators of compromise or attribution theories are publicly released. (read more …)
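The article does not disclose NLPRank's internals, but the general idea of scoring newly registered domains by their resemblance to the "malicious language of the Internet" can be illustrated with a simple sketch. The brand list, bait words, and edit-distance threshold below are all hypothetical, chosen only to show the shape of such a detector.

```python
# Hedged sketch of NLPRank-style domain screening (not OpenDNS's actual
# model): flag domains that sit a small edit distance from a well-known
# brand, or that splice a brand name together with a phishing bait word.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

BRANDS = ["paypal", "google", "adobe", "microsoft"]      # illustrative
BAIT_WORDS = ["login", "secure", "update", "account"]    # illustrative

def is_suspicious(domain):
    label = domain.lower().split(".")[0]
    for brand in BRANDS:
        # A close misspelling of a brand (but not the brand itself)...
        if 0 < edit_distance(label, brand) <= 2:
            return True
        # ...or a brand name spliced together with a bait word.
        if brand in label and label != brand and \
           any(w in label for w in BAIT_WORDS):
            return True
    return False

print(is_suspicious("g00gle.com"))        # lookalike of "google"
print(is_suspicious("paypal-login.com"))  # brand plus bait word
print(is_suspicious("google.com"))        # the real thing
```

Because a check like this needs only the domain string itself, it can run at registration time, before any indicators of compromise exist, which is the early-stage advantage the article describes.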
March 5, 2015 – The Economist
Computers are notoriously insecure. Usually, this is by accident rather than design. Modern operating systems contain millions of lines of code, with millions more in the applications that do the things people want done. Human brains are simply too puny to build something so complicated without making mistakes.
On March 3rd, though, a group of researchers at Microsoft, an American computer company, Imdea, a Spanish research institute, and the National Institute for Research in Computer Science and Automation, in France, discovered something slightly different. They found a serious flaw in cryptography designed to guard private data such as e-mails, financial information and credit-card numbers as they wing their way across the internet. By exploiting this flaw, a malicious hacker could see such information as unencrypted text—and thus insert data of his own, such as password-stealing code, while making it seem to come from a trusted source.
Discovering such bugs in the mess of code that underpins the internet is not unusual. But unlike most flaws, this one—dubbed FREAK (for “Factoring RSA Export Keys”)—is not an accident. Rather, it is a direct result of the American government’s attempts to ensure, two decades ago, that it could spy on the scrambled communications of foreigners. That is an idea which, following Edward Snowden’s revelations about the long reach of Western spy agencies, is back in the news again. (read more … )
March 4, 2015
March 4, 2015, Bostinno: MIT Is Upping Its Cybersecurity Game with 3 New Programs, by Conor Ryan – Looking to find new solutions in the fight against digital breaches, MIT is set to launch three new research programs next week that will seek to confront the technical, regulatory and business challenges presented by cybersecurity. The three “institute-wide” initiatives, headed by both the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Sloan School of Management, will officially be launched on March 12. For MIT Professor of Information Technology Stuart Madnick and CSAIL director Daniela Rus, the inception of these programs comes at a time when both the risk and impact of cyber attacks are at an all-time high. “The danger is increasing and accelerating,” Madnick said, adding: “The attackers are becoming increasingly more sophisticated, as we’ve seen over time from going back to the TJX break-in to the Target break-in to Sony and so on. … The worst is yet to come in terms of what kind of impact the threat of cybersecurity is going to be.”
March 4, 2015, Upstart Business Journal: Sam Altman asks ‘all governments’ to regulate Artificial Intelligence as China-U.S. competition heats up, by Michael del Castillo – On the same day of a report that Chinese search giant Baidu has asked for the support of China’s military to develop its artificial intelligence capabilities, the president of Y Combinator called for governments around the world to regulate the technology. If Sam Altman’s concerns are correct, we could be at the beginning of a “race to the bottom” in which artificial intelligence not only gets more sophisticated, but more dangerous. “The U.S. government, and all other governments, should regulate the development of SMI (superhuman machine intelligence),” Altman, whose company was last year estimated to be worth $1 billion, wrote in a post yesterday. “In an ideal world, regulation would slow down the bad guys and speed up the good guys — it seems like what happens with the first SMI to be developed will be very important.”
March 4, 2015, Fox411: ‘Chappie’ star Sigourney Weaver shares thoughts on artificial intelligence, by Ashley Dvorkin – Audiences will meet a new cinematic robot named “Chappie” March 6th as the latest film from director Neill Blomkamp (“District 9”) heads to theaters. In addition to being big-screen entertainment with its story of a robot given the ability to learn and grow up in the human world, the film lends itself to plenty of conversation over our fascination with the pros and cons of the future of Artificial Intelligence. So FOX411 brought this discussion to the cast. Actress Sigourney Weaver shares her thoughts on AI, the future of robotics, whether she’s someone who embraces all the latest technological advancements and wants the next new thing. …
FOX411: We’ve had this fascination with exploring the pros and cons of the future of Artificial Intelligence in cinema or otherwise. Is it something that interests you?
Sigourney Weaver: Yes. I’m waiting, I’m impatient. I went to the Microsoft home of the future and so much of it has technological assistance to people for things we really need. Robots I think, are part of that and I think if they’re programmed by humane people, they can be very useful to us and I don’t think there’s any reason for us to be afraid of them.
FOX411: So is this something that you see as feasible?
Weaver: I think it’s going to happen in about five minutes. And as long as your robot isn’t programmed by like Dr. Evil, I think you’re going to be fine.
March 3, 2015
March 3, 2015, The Guardian: Cockroach robots? Not nightmare fantasy but science lab reality, by Ian Sample – They lurk in dark corners, feed off crumbs, and obey the commands of humans. Years in the making, and a contender for the most revolting creation to emerge from a laboratory, the robo-roach has arrived. Built by engineers in Texas, the robotic insect fuses a live cockroach with a miniature computer that is wired into the animal’s nervous system. At the push of a button, a human operator can control the beast. Or at least which way it scuttles. Hong Liang, who led the research at Texas A&M University, said the controllable insect could carry tiny video cameras, microphones and other sensors. With those on board, it could gather information from places where humans would rather not be: collapsed buildings, broken sewers, and the kitchens in student house shares.
“Insects can do things a robot cannot. They can go into small places, sense the environment, and if there’s movement, from a predator say, they can escape much better than a system designed by a human,” Liang told the Guardian. “We wanted to find ways to work with them.” The US team made tiny backpacks for the cockroaches and stuck them on with paint. Normal glue did not attach to the insects’ waxy backs. Each backpack contained a computer chip that could send signals down a pair of fine wires into nerves that controlled legs on either side of the cockroach. With a rechargeable lithium battery to power the device, the total weight of the backpack was less than 3g.
March 3, 2015, Huffington Post: Artificial Intelligence Technology Is ‘Breaking Out of the Box’, by Peter Mellgard – We are entering an age of accelerated development of artificial and robotic technology, three panelists told an audience of investors, engineers and journalists recently. “A lot of the things that technology was traditionally lousy at are now really, really good, and getting better all the time,” said Andrew McAfee, the principal research scientist at the Massachusetts Institute of Technology. “This is unprecedented and unexpected.”
McAfee was speaking at an annual lecture on science and technology hosted by the Council on Foreign Relations; joining him on the panel were Rodney Brooks, an entrepreneur and emeritus professor of robotics at MIT, and Abhinav Gupta, of the Robotics Institute at Carnegie Mellon University. All three were optimistic about the development of robotics and artificial intelligence technology, but each acknowledged areas where there are still tremendous challenges to overcome. “Just in the past few years,” McAfee said, excitably, “digital technologies in all their manifestations, including robots, have been breaking out of the box and starting to demonstrate capabilities that they never ever had before.” …
“The exponential improvement in the elements of computing is not about to run out of gas,” McAfee said. “We’ve got generations more of it to go. Geeks out there are going to take that computational power and that ocean of data and do things that astonish us.” But the panelists couldn’t muster the same sentiment when it comes to policy-making, economics and security. “As different technologies have proliferated and really democratized access to innovation and to making things, the economy has moved in the other direction,” McAfee said. “It’s moved toward more concentration. The big guys are getting bigger. The workforce is becoming more polarized instead of less. I’m really worried about this polarization of opportunity and of mobility. I think it’s already a challenge and it’s going to get a lot bigger. And what we do about that is going to be the challenge that defines us for the next generation.”
Moreover, the global security landscape is becoming more threatening. The proliferation of accessible technology is as beneficial to Iran and the Taliban as it is for Silicon Valley start-ups. Equipment available at RadioShack is used in IEDs targeting American troops in Afghanistan, said Brooks.
“Cybersecurity is scary as hell,” McAfee noted.
“The threats of the future are going to be very different sorts of threats,” echoed Brooks. “And that worries me.”
March 3, 2015, South China Morning Post: ‘China brain’ project seeks military funding as Baidu makes artificial intelligence plans, by Bien Perez – Robin Li Yanhong, the founder and chief executive of online search giant Baidu, is looking to the nation’s military to support efforts which may make the mainland the world leader in developing artificial intelligence (AI) systems. One of the country’s wealthiest people, Li has proposed in his capacity as a delegate to the Chinese People’s Political Consultative Conference (CPPCC) that the mainland establish the “China Brain” project.
Li said the proposed project would be a massive, “state-level” initiative that could be comparable to how the Apollo space programme was undertaken by the United States to land the first humans on the moon in 1969. He told reporters on the sidelines of the CPPCC in Beijing that the military was welcome to join the project because it was a sector with extensive funding resources that had played a significant role in technology innovation and had huge demand for the latest hi-tech advances.
Under Li’s proposal, the China Brain project would focus on specific research areas: human-machine interaction, so-called big data analysis, automated driving, smart medical diagnosis, smart drones and robotics technologies for both military and civilian use.
“The government should support capable companies in building an open platform offering AI-related basic resources and public services,” he said. In addition, Li proposed that the platform “be kept open and competitive, rather than being made only available to select research institutes”.
“A market mechanism should help transform AI-related research into actual results and products, and push forward integration and innovation in traditional industry, the service sector and the military,” he said.
March 3, 2015, CMS Wire: 10 Years in Cyberspace Security, by Deb Miller – Ten years ago, I wrote a paper on the future of cyberspace. In it, I pointed to three areas that we needed to address to make cyberspace safe for information sharing: establishing strong cyber-trust, enabling secure mobility and striking a balance between security and privacy rights. So much has changed since then. Or has it?
It’s 2005. George Walker Bush is in the White House, the National Infrastructure Advisory Council is making recommendations to the President on information systems security, and one of the top TV shows is CSI: Crime Scene Investigation, featuring an elite team of police forensic evidence investigation experts using the best scientific and technical methods to work their cases in Las Vegas. Fast-forward 10 years and cyberspace is the new Las Vegas. President Barack Obama holds the first-ever White House summit on cybersecurity, and DARPA runs its Cyber Grand Challenge tournament designed to speed defense against cyberattacks. The show CSI: Cyber launches, featuring Patricia Arquette as head of the FBI cybercrime division. Oh, and John Ellis Bush is running for the White House. Despite the effort we’ve expended over the past 10 years to secure cyberspace, cybercrime is increasing. Incidents involving high-profile consumer retail, banking, health care and utility companies fill the news, and headlines proclaim growing privacy concerns and cyber threat incidents against government entities. Consider the following breaches — a decade apart and yet eerily similar:
- In 2005, hackers broke into CSS, one of the top payment processors for Visa, MasterCard and American Express, exposing 40 million credit card accounts and ultimately forcing CSS into acquisition.
- In February 2015, the New York Times reports an unknown group of hackers has allegedly stolen $300 million — possibly as much as triple that amount — from banks across the world.
As the French proverb puts it, “Plus ça change, plus c’est la même chose.” While much has changed over the last decade, much remains the same.
March 3, 2015, The Diplomat: Russia Tops China as Principal Cyber Threat to US, by Franz-Stefan Gady – “While I can’t go into detail here, the Russian cyber threat is more severe than we had previously assessed,” the director of national intelligence, James Clapper, told the Senate Armed Services Committee during the 2015 presentation of the “Worldwide Threat Assessment of the U.S. Intelligence Community.” The report lists sophisticated cyberattacks as the principal national security threat facing the United States. “Cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication, and severity of impact,” the assessment notes. Russia is singled out as one of the most sophisticated nation-state actors in cyberspace. The report notes that Russia’s Ministry of Defense is establishing its own cyber command, responsible for conducting offensive cyber activities (similar to the United States Cyber Command). The report says that Russia’s cyber command will also be responsible, again similar to its U.S. counterpart, for attacking enemy command and control systems and conducting cyber propaganda operations. Furthermore, “unspecified Russian cyber actors” have developed the capability to target industrial control systems and thereby attack electric power grids, air-traffic control, and oil and gas distribution networks. However, the report points out that the United States will not have to fear debilitating strategic cyberattacks on a large scale:
“Rather than a ‘Cyber Armageddon’ scenario that debilitates the entire U.S. infrastructure, we envision something different. We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on U.S. economic competitiveness and national security.”
The assessment also provided a hint that we may see an increase in “naming and shaming” campaigns, similar to the cyber espionage charges against five Chinese military officials accused of hacking into U.S. companies back in May 2014. The report argues that “the muted response by most victims to cyber attacks has created a permissive environment in which low-level attacks can be used as a coercive tool short of war, with relatively low risk of retaliation.”
March 2, 2015
March 2, 2015, Global Atlanta: Foreign Policy Editor: Forget Extremism — Cyber Warfare the Real Generational Threat, by Trevor Williams – While the U.S. focuses its diplomatic resources on combating extremism, a “footnote in history,” it risks ignoring more lasting threats: the growing specter of cyber warfare and a widening gulf in opinions on how the Internet should be used and governed, the editor of Foreign Policy magazine said in Atlanta. For David Rothkopf, a Washington insider par excellence, it was an ironic way to end a wide-ranging conversation with World Affairs Council of Atlanta President Charles Shapiro, which focused mostly on issues related to extremism, especially U.S. failures in the Middle East and how to deal with ISIS, the militant group controlling parts of Iraq and Syria.
It’s a much grayer world, and at first glance you might not think it’s as dangerous as the other world, but the possibility for a mistake, an escalation, permanent tension is terrible. – David Rothkopf (Editor, Foreign Policy Magazine)
But a question on “technological terrorism” prompted a 15-minute response on hacking and cyber warfare, which Mr. Rothkopf said is just now entering its “early days.” In contrast to the Cold War, when the idea of “mutually assured destruction” deterred attacks, a “cool war” has emerged in cyberspace where the costs of attacks are so low and the potential for destructive payoffs so high that enemies launch them almost continually. In this murky realm, it’s tough to define enemies, much less retaliate against them, and the international system has been much too slow in setting out the rules of engagement. “It’s a much grayer world, and at first glance you might not think it’s as dangerous as the other world, but the possibility for a mistake, an escalation, permanent tension is terrible,” said Mr. Rothkopf, who was visiting Atlanta for a talk on his new book, “National Insecurity: American Leadership in an Age of Fear”.
March 2, 2015, Ars Technica: Google quietly backs away from encrypting new Lollipop devices by default, by Andrew Cunningham – Last year, Google made headlines when it revealed that its next version of Android would require full-disk encryption on all new phones. Older versions of Android had supported optional disk encryption, but Android 5.0 Lollipop would make it a standard feature. But we’re starting to see new Lollipop phones from Google’s partners, and they aren’t encrypted by default, contradicting Google’s previous statements. At some point between the original announcement in September of 2014 and the publication of the Android 5.0 hardware requirements in January of 2015, Google apparently decided to relax the requirement, pushing it off to some future version of Android. Here’s the timeline of events.
March 2, 2015, Salon: Striking the Balance on Artificial Intelligence, by Cecilia Tilli – In January, I joined Stephen Hawking, Elon Musk, Lord Martin Rees, and other artificial intelligence researchers, policymakers, and entrepreneurs in signing an open letter asking for a change in A.I. research priorities. The letter was the product of a four-day conference (held in Puerto Rico in January), and it makes three claims:
- Current A.I. research seeks to develop intelligent agents. The foremost goal of research is to construct systems that perceive and act in a particular environment at (or above) human level.
- A.I. research is advancing very quickly and has great potential to benefit humanity. Fast and steady progress in A.I. forecasts a growing impact on society. The potential benefits are unprecedented, so emphasis should be on developing “useful A.I.,” rather than simply improving capacity.
- With great power comes great responsibility. A.I. has great potential to help humanity but it can also be extremely damaging. Hence, great care is needed in reaping its benefits while avoiding potential pitfalls.
In response to the release of this letter (which anyone can now sign and has been endorsed by more than 6,000 people), news organizations published articles with headlines like: “Elon Musk, Stephen Hawking Warn of Artificial Intelligence Dangers.” … There is a disconnect here—the letter was an appeal to sense, not a warning of impending danger. But it’s not surprising: Alongside increased investment in A.I. and renewed hope in its capabilities, there have been plenty of headlines warning us about the end of mankind, the need for protection from machines, real-life Terminators, and keeping Skynet at bay. As someone involved in A.I. safety research, I know these headlines misrepresent our concerns and the state of the field by blurring the distinction between narrow and strong A.I. and their distinct probabilities and risks. Understanding the difference helps explain why we do indeed need to be mindful of A.I. downfalls. But being mindful doesn’t mean that experts believe danger lurks behind the next advance in artificial intelligence.
Most current A.I. systems are narrow: They’re extremely efficient but nowhere near general intelligence. Their capabilities are specific, with very little flexibility. For example, a system that plays poker is usually terrible at chess. This lack of adaptability distinguishes current A.I. systems from science fiction’s portrayals of artificial intelligence—think of the difference between your GPS system and 2001: A Space Odyssey’s HAL or Blade Runner’s replicants. While the latter are general thinkers—A.I.s that tackle the whole range of problems humans deal with in everyday life—a GPS system can only tell you about your location. The difference is crucial when evaluating the present and future of A.I. research.
March 2, 2015, Reuters: U.S. air safety threatened by possible hacking: senators – Major security vulnerabilities in the Federal Aviation Administration’s information systems are putting air traffic control programs, along with plane passengers, at risk, two U.S. senators said in a letter to the transportation secretary on Monday. “These vulnerabilities have potential to compromise the safety and efficiency of the national airspace system, which the traveling public relies on each and every day,” Senator Bill Nelson, a Florida Democrat, and Senator John Thune, a South Dakota Republican, wrote to Transportation Secretary Anthony Foxx. In a separate statement, Nelson said a hacker could cause “delays, near misses or potentially even a disaster.” Foxx will testify on Tuesday before the Senate Transportation Committee, which Thune chairs. The letter was prompted by a January report from the Government Accountability Office that warned that major weaknesses in the agency’s information systems put “the safe and uninterrupted operation of the nation’s air traffic control system at increased and unnecessary risk.”
March 1, 2015
March 1, 2015, New York Times: How Superfish’s Security-Compromising Adware Came to Inhabit Lenovo’s PCs, by Nicole Perlroth – Until its advertising software was discovered deep inside Lenovo personal computers two weeks ago, a little company called Superfish had maintained a surprisingly low profile for an outfit once named America’s fastest-growing software start-up. In 2013, Superfish revenues had increased more than 26,000 percent over the previous three years to $35.3 million. It had advertising deals with some of the biggest names in e-commerce — Amazon, eBay and Alibaba among them.
But as the start-up, based in Palo Alto, Calif., searched for new income sources last year, it landed a deal with Lenovo, the world’s largest PC maker, to put its software — often called adware — on several Lenovo consumer PCs. That deal has proved disastrous. Not only has it called into question the business practices of both Lenovo and Superfish, it has shined an unflattering light on makers of this sort of advertising technology.
Superfish’s software, a security researcher revealed, was logging every online movement of the people using those Lenovo machines and hijacking the security system that is supposed to protect online communications and commerce. The Department of Homeland Security even warned Lenovo PC users to remove the software because of the risk it presented. Superfish’s technology, security experts now say, is a particularly aggressive example of the targeted advertising technology that tracks consumers’ online movements without their knowledge.
March 1, 2015, Chicago Sun-Times: Hugh Jackman in rare role as villain in ‘Chappie’, by Bill Zwecker – Whether as Wolverine in the “X-Men” franchise, Jean Valjean in “Les Miserables,” the charming Brit aristocrat in “Kate & Leopold” or as the rough-hewn Drover in “Australia,” we see Hugh Jackman — for the most part — playing good-guy or heroic roles. However, in Chappie (opening Friday), writer-director Neill Blomkamp’s sci-fi thriller about expanded roles for robots in society, the actor clearly is the bad guy. As Vincent Moore, Jackman plays a very ambitious computer designer interested only in creating a robotic killing machine for his tech company — one totally at odds with the thinking, feeling human-like robot named Chappie, invented by his corporate arch-enemy Deon Wilson, played by Dev Patel. In the film, Jackman will go to just about any ends to destroy both Chappie and the career of Wilson. …
For Jackman, the theme of “Chappie” was: “What if robots could think, and feel deeper and better than we can? Is that a good thing? Is it not? That’s a discussion we need to have.” As for using them in armies or the police force, Jackman admitted, “It’s a very tempting idea, philosophically. It’s a fascinating thing to delve into. Immediately you think in some of those really dangerous, difficult and life-threatening situations — wouldn’t it be great to send in a robot, where if they were lost or destroyed or made a mistake it wouldn’t matter so much? But there are so many deeper questions to consider. Who’s controlling the robots? Who’s deciding what’s right or wrong? Who’s deciding when to use the robots, or not? Who’s deciding which side is good, or which side is evil? That’s where it all gets muddy.” In the final analysis, Jackman said, “what this movie ultimately does — giving us a robot with human qualities — is it makes us think about what it means to be human in the first place. And, it also makes us focus on our responsibilities to the world at large, as flesh-and-blood, breathing, living human beings.”
March 1, 2015, Tech CheatSheet: What Is Mind-Controlled Technology and How Does It Work? by Jess Bolluyt – Controlling an object or a video game with your mind sounds like something out of a science fiction movie, but gadgets that translate brain waves into commands that control a computer are already a reality. Mind-controlled technology uses a brain-computer interface to establish a pathway of communication between the user’s brain and an external device. It has the potential to augment or even repair patients’ damaged hearing, sight, or movement. EEG sensors have been incorporated into gaming systems that enable a player to control what happens onscreen with a headset, EEG-controlled exoskeletons translate users’ brain signals into movements, and implanted electrodes enable patients to control bionic limbs. …
EEG has emerged as a promising way for paralyzed patients to control devices like computers or wheelchairs — by wearing a cap and undergoing training to learn to control a device like a wheelchair by imagining that they’re moving a part of their body, or triggering commands with specific mental tasks. … Maintaining mental exercises while trying to maneuver a wheelchair around a complex environment can be tiring, and the concentration required creates noisier signals that can be more difficult for a computer to interpret. So some are experimenting with shared control, which combines brain control with artificial intelligence for another technique that can help turn crude brain signals into more complicated commands. With shared control, patients would not need to continuously instruct a wheelchair to move forward. They would only need to think the command once, and the software would take over from there.
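The shared-control idea described above can be sketched in a few lines. This is a toy illustration only, not any real BCI system: the function names, the confidence threshold, and the one-dimensional “wheelchair” world are all invented for the example. The point is the division of labor — a noisy, intermittent brain signal latches the high-level goal once, and an autonomous software layer supplies the sustained low-level control.

```python
# Hypothetical sketch of shared control: the user's noisy intent signal
# sets the goal once; autonomous software handles continuous steering.

def decode_intent(eeg_samples, threshold=0.8):
    """Latch a goal from noisy classifier output.

    eeg_samples: list of (command, confidence) pairs, as might come from
    an EEG classifier. Returns the first command whose confidence clears
    the threshold, or None if the user never expressed a clear intent.
    """
    for command, confidence in eeg_samples:
        if confidence >= threshold:
            return command
    return None

def drive(goal, position, obstacles):
    """Trivial autonomous layer on a 1-D track: step toward the goal,
    stopping short of obstacles. This loop provides the sustained,
    noise-free control signal that raw EEG cannot."""
    path = []
    while position < goal and position + 1 not in obstacles:
        position += 1
        path.append(position)
    return path

# The user produces a few noisy readings; only one is confident enough,
# and the wheelchair then advances on its own.
samples = [("left", 0.4), ("forward", 0.9), ("stop", 0.3)]
goal_cell = 5 if decode_intent(samples) == "forward" else 0
print(drive(goal_cell, position=0, obstacles={7}))
```

The design choice mirrors the article: the tiring part for the patient (continuous command generation) is moved into software, while the part software cannot do (choosing the goal) stays with the user.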
March 1, 2015, The Telegraph (London): Cybercrime could become more lucrative than drugs, police chief warns – International gangsters are increasingly abandoning drug dealing and other high risk rackets in favour of cybercrime, putting everyone who uses the Internet at risk, one of the country’s most senior police officers has warned. Up to a quarter of all organised criminals in Britain are now thought to be involved in some form of financial crime, netting them tens of billions of pounds in profit every year. But with the majority of online fraud being committed by overseas gangs and victims often unwilling to report offences, law enforcement agencies are finding it difficult to even assess the scale of the problem.
Adrian Leppard, the Commissioner of the City of London Police, which takes a lead in fighting fraud and cybercrime, said that while traditional crimes were continuing to fall, financial offences were soaring. And he warned that many people who use the Internet every day to shop or do their banking were doing the online equivalent of leaving their homes and vehicles unlocked for burglars to exploit. “If you ask a room full of people who has been a victim of some sort of fraud or financial crime, half of them will put their hand up. You would have difficulty finding any other area of crime with similar statistics,” he explained. “But we estimate that as much as 80 per cent of this sort of crime is not reported, so while we know there is a big problem, we can’t put a scale on it and that is one of our biggest challenges,” he added. Mr Leppard said cybercrime appealed to organised criminals because it was a “low-risk, high yield” offence. He said: “Organised crime is motivated by money. Whichever criminal activity delivers the most money, that is where they will go.”
March 1, 2015, The Cheat Sheet: 2 Reasons Why People Like Elon Musk Are Worried About AI, by Rakesh Sharma – Superstar entrepreneurs Elon Musk and Bill Gates recently joined a chorus of notable voices calling for caution in AI development. A letter that lists research priorities for artificial intelligence on the Future of Life website serves as a template of their concerns. … [The letter] demands that the seed AI’s architectural goals should be articulated clearly. Fear of artificial intelligence has pervaded institutions, as well: Stanford University recently announced a century-long study of the effects of artificial intelligence in 18 areas of society including the economy, war, and crime. … Here are two ways in which AI could spell the death of humanity:
1. The inexpensive wars of AI
Currently, warfare is a mix of analysis, strategy, and tactics. Machines are tactical instruments in a war, but overall strategy is always driven by human judgement. Human abilities and risk analysis are also key to operating instruments such as remotely piloted systems. But AI-driven warfare might be the opposite of such a war: machines will take over human roles. A story in The New York Times last year outlined the U.S. military’s efforts in testing missiles that “rely on artificial intelligence, rather than human instruction” to select their targets. According to the article, the United States is not the only country testing autonomous missiles:
Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control … Britain’s “fire and forget” Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets … Israel’s antiradar missile, the Harpy, loiters in the sky until an enemy radar is turned on. It then attacks and destroys the radar installation on its own.
By enabling autonomous missiles to judge their targets, the costs associated with missile operations and control are significantly reduced. In turn, war becomes relatively cheap. Increased tension and conflict could be the result of a reduction in the opportunity cost of such wars.
2. A predictably inert life
In a Wired story more than a decade ago, Sun Microsystems co-founder Bill Joy outlined his fears about AI. In the article, he quoted a passage from a book about spiritual robots by Ray Kurzweil, chief scientist at Google, about humanity living with the effects of artificial intelligence. The passage below is written by Unabomber Theodore Kaczynski:
We are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems it faces become more complex and machines become more intelligent, people will let machines make more of the decisions for them, simply because machine-made decisions will bring better results than man-made ones.
Sound familiar? This is already happening in our lives through the use of smart devices and the Internet. To achieve the desired results, artificial intelligence depends on a predictable set of outcomes. … Human life and experience, however, consists of an unpredictable set of events. Our days and their outcomes are rarely the same. Even our interests change. But they are slowly becoming predictable, thanks to prediction algorithms. Thus, prediction algorithms that “suggest” choices based on our interests are really offering choices from a finite set of outcomes. The ostensible reason for the existence of prediction algorithms is to help us manage the complexity of an increasingly complicated world. …
Prediction algorithms serve another purpose. They take away the human agency involved in evaluating circumstances and making decisions. Through continual force-feeding of suggestions, your interests are defined and a garden wall of “customized interests” is created. The beauty of human experience lies in its unpredictability. Our loves, our passions, and even our hates are in a constant state of flux based on new information and outcomes in our lives. Currently, these outcomes are infinite. In a walled garden of circumscribed custom interests, the unpredictable nature of human existence becomes a bland and predictable set of mechanical tasks and, eventually, dissipates. And that is when human identity is defined by machines.
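The “walled garden” effect described above is easy to see in even the simplest similarity-based suggester. The sketch below is purely illustrative — it is not how any real recommendation service works, and the catalog, tags, and function names are invented — but it shows the structural point: a system that scores candidates by overlap with past interests can only ever surface items near what you already liked, so the reachable set of suggestions is finite by construction.

```python
# Illustrative toy recommender: items are scored by tag overlap with the
# user's history, so anything sharing no tags with past choices can
# never be suggested -- the "walled garden" of customized interests.

from collections import Counter

def recommend(history, catalog, top_n=2):
    """Rank unseen catalog items by tag overlap with the user's history."""
    liked_tags = Counter(tag for item in history for tag in catalog[item])
    scores = {
        item: sum(liked_tags[t] for t in tags)
        for item, tags in catalog.items()
        if item not in history
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [item for item in ranked if scores[item] > 0][:top_n]

catalog = {
    "thriller_1": {"thriller", "crime"},
    "thriller_2": {"thriller", "noir"},
    "crime_doc":  {"crime", "documentary"},
    "gardening":  {"nature", "howto"},  # shares no tags: never suggested
}

# A user who liked one thriller is only ever shown thriller-adjacent items.
print(recommend(["thriller_1"], catalog))
```

Each accepted suggestion feeds back into the history, which narrows the overlap scores further on the next round — the feedback loop the article calls the dissipation of unpredictability.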
March 1, 2015, Nevada Business Magazine: Data Security: A Growing and Vital Issue for All Businesses, by Frank Polston – Every week, it seems, brings news of another major corporation being sabotaged by internet hackers who break into its computer security and steal sensitive data, including financial and business information. Losses can run into the hundreds of millions of dollars, as evidenced by Target Corporation’s news in early 2014 that hacking of its internet security in late 2013 resulted in a loss of more than $150 million for the company as well as the job terminations of several key executives. Community Health Systems, Home Depot, JP Morgan Chase, Costco, PF Chang’s and SuperValu, to name but a few, have been plagued by hackers intent on wreaking financial and emotional havoc upon the companies, their customers and their shareholders. … A single data breach poses a serious risk of lawsuits, fines and diminished revenue, or worse, for businesses that may need to comply with any of more than 500 federal, state and other e-waste and data security laws on the books.