5/26/2015 - INFOGRAPHIC: 8 Popular Vulnerable Software Apps
May 26, 2015 – Social Times
INFOGRAPHIC: 8 Vulnerable Software Apps Exposing Your Computer to Cyber Attacks
By Justin Lafferty
While many people all over the world use Google Chrome, its widespread use also opens it up to vulnerabilities and cyber attacks.
Heimdal Security recently listed the eight most vulnerable software apps that have been open to attacks. Marketing and communication specialist Andra Zaharia wrote about these apps in a blog post:
Software is imperfect, just like the people who make it.
No matter how much work goes into a new version of software, it will still be fallible.
Cyber criminals are after those exact glitches, the little security holes in the vulnerable software you use that can be exploited for malicious purposes.
For example, you’re probably using add-ons in your browser and think they’re harmless. Except they can be a way for cyber criminals to distribute malware and unwanted applications, or at least to monitor your browsing data. Suffice it to say that malware creators are having a field day with this sort of attack pattern!
For more info, examine these two infographics from Heimdal Security. (read full article …)
5/21/2015 - The Future of War
May 21, 2015 – Australian Broadcasting Corporation
The Future of War
By Margot O’Neill
A look at how technology including killer robots and drones will be used in air, water and ground campaigns in future conflicts.
Transcript
TONY JONES, PRESENTER: This story is about the future of war and how it’s really not so far away. We’ll take you into some of the Pentagon’s most advanced research programs and explain why there’s now a campaign to stop killer robots. Already drone technology can make war seem like a video game.
MALE VOICEOVER (Good Kill, Voltage Pictures): The remotely-piloted aircraft is not the future of war. It is the here and now. Make no mistake about it: this ain’t PlayStation. We are killing people.
ACTOR: Light ’em up.
TONY JONES: Well that’s Hollywood’s take, anyway. That clip was from an upcoming movie called Good Kill. The main characters are members of the so-called “Chair Force”, pilots who remotely operate drones from quiet rooms thousands of kilometres from the battlefield. But the movie’s not all that remote from what’s happening in the real world. The US Government says drone strikes make killing more precise and result in fewer casualties, especially for its own soldiers. But civilians are still dying in alarming numbers, leaving some Pakistani children praying for cloud cover.
ZUBAIR REHMAN, US DRONE STRIKE SURVIVOR (Oct. 2013, voiceover translation): I no longer love blue skies, I prefer grey skies. The drones do not fly when the skies are grey, and for a short period of time, the mental tension and fear eases, but when the sky brightens, the drones return and so does the fear.
TONY JONES: Well armed drones are just the beginning. The next generation of killer drones will be able to storm the skies as well as the deep sea and robots may spearhead infantry attacks. Sound familiar? (Read full transcript and watch video …)
5/21/2015 - Interview: Robotics Professor Noel Sharkey
May 21, 2015 – Australian Broadcasting Corporation
Interview: Robotics Professor Noel Sharkey
By Tony Jones
Noel Sharkey is the co-founder of the International Committee for Robot Arms Control and a Professor of artificial intelligence at Sheffield University. He spoke to Tony Jones about leading the global campaign to stop killer robots.
Transcript
TONY JONES, PRESENTER: All this talk of killer drones raises some very profound questions. For instance, are nations more likely to go to war if there’s less risk of their civilians being killed? And what does it mean if you take human decision-making out of the loop? Can a machine tell the difference between a civilian and a combatant?
Well in 2012 the Obama administration issued a directive on the human oversight required for autonomous weapons systems. It prompted this headline in the magazine Wired.
WIRED (male voiceover): “Pentagon: A human will always decide when a robot kills you”
TONY JONES: But the Pentagon denies there’s a blanket ban on robots making their own decisions. While there will be oversight and a chain of command, if the right criteria’s met, a machine could effectively operate on autopilot. So, last month a UN working group met to hear how nations should address the emerging reality of killer robots.
JODY WILLIAMS, NOBEL PEACE PRIZE WINNER (April 14): Allowing the world to move blindly in that direction is the start of another weapons “revolution”, if we want to look at it that way, that should never happen.
TONY JONES: Well Noel Sharkey is one of the scientists leading the global campaign to stop killer robots. He’s the co-founder of the International Committee for Robot Arms Control. He’s also a professor of Artificial Intelligence at Sheffield University and he joins us now from Manchester in England.
Thanks for being there, Noel Sharkey.
NOEL SHARKEY, ARTIFICIAL INTELLIGENCE, SHEFFIELD UNI.: Hi. Good to be here.
TONY JONES: Now that headline from Wired Magazine, “A human will always decide when a robot kills you,” is that essentially the kind of line in the sand which you would like to see set?
NOEL SHARKEY: Yes, it is, but he’s talking about the directive, the Department of Defence directive which is an acquisitions directive – that’s all. And it just says appropriate levels of human judgment will always be used, and of course appropriate levels could mean none. So we have to be careful about that.
TONY JONES: Well indeed. In fact Wired actually misinterpreted the Obama directive because the Pentagon in fact, as we just heard, denies there’s been a moratorium on building lethal autonomous weapons systems. What are the implications of that? (Read full transcript and watch video …)
5/21/2015 - Cyber extortion gang targets Europe
DD4BC cyber extortion gang targets key European sectors
May 21, 2015 – Computer Weekly
By Warwick Ashford
A gang using distributed denial of service (DDoS) attacks to extort bitcoins is now targeting high-profile organisations in key sectors in Europe, prompting government advisories.
This is in line with the trend of criminal gangs repurposing DDoS attacks that were initially intended to knock organisations offline by flooding them with network traffic.
But cyber criminals are increasingly using DDoS attacks as a smokescreen to hide other activities, such as the theft of data or money and for extortion.
Extortion gang DD4BC (DDoS for bitcoins) looks set to take this form of attack to a new level, threatening financial and energy sector firms with unprecedented volumes of malicious traffic.
Financial institutions recognise that all persistent cyber criminal groups could pose a threat as they and their customers increasingly come under attack. …
The UK and Swiss computer emergency response teams (Certs) both issued guidance recently after DD4BC started targeting financial institutions.
The gang emerged in 2014 when it began targeting low-level bitcoin exchanges, entertainment websites, online casinos and online betting organisations.
Other recent targets have included organisations in the oil and gas, retail and technology sectors in Europe, New Zealand and Australia, according to communications and analysis firm Neustar.
“This is a very serious threat and critical issue for organisations that depend on connectivity and their web servers and websites to operate,” said Neustar product marketing director Margee Abrams.
Don Smith, technology director at Dell SecureWorks, said: “To date, Dell SecureWorks’ experience indicates that DD4BC certainly has the capability to conduct a sustained moderate DDoS attack and, as we have seen with other DDoS incidents, these can be extremely interruptive to an organisation’s business and processes.”
“If you haven’t prepared for a DDoS attack, there is very little you can do at the time other than wait it out. Often, it will be over by the time you have put any useful technical mitigations in place. Preparation is critical,” he said.
Smith said organisations should not pay extortion demands, even if it appears the least costly option in the short run, and they should ensure they have a fully tested DDoS incident response plan and mitigation solution in place prior to a DDoS incident occurring. (read full article …)
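The preparation Smith describes usually starts with basic traffic monitoring at the application edge. As a purely illustrative sketch (not from the article — the window and threshold values are invented), a sliding-window counter that flags a source exceeding a request-rate limit might look like:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 10   # illustrative values, tune per service
MAX_REQUESTS = 100

# per-source request timestamps
_hits = defaultdict(deque)

def allow_request(source_ip, now=None):
    """Return True if the request is under the rate limit, False if it
    should be dropped as part of a suspected flood."""
    now = time.monotonic() if now is None else now
    q = _hits[source_ip]
    # evict timestamps that have aged out of the window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True
```

A counter like this only blunts simple floods from identifiable sources; volumetric DDoS of the scale DD4BC threatens has to be absorbed upstream, which is why Smith stresses having a tested mitigation plan in advance.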
5/20/2015 - eNom discloses DNS attack to customers
May 20, 2015 – CSO Online
eNom discloses DNS attack to customers
By Steve Ragan
On Thursday, Taryn Naidu, the CEO of domain registrar eNom, sent a letter to customers disclosing a “very sophisticated attack” that targeted the DNS settings on four domains.
The email was sent to provide transparency; notably, eNom is the registrar of record for the Federal Reserve Bank of St. Louis, which reported a DNS hijacking earlier this week. Are the two incidents linked? …
“This is a very creative and intelligent attack, in that cybercriminals did not have to breach the heavily secured perimeter, but instead use a weak outside link that would stealthily move them inside their target,” said Trend Micro’s Christopher Budd in a statement concerning the St. Louis Fed incident. …
The eNom incident isn’t the first of its kind; other registrars have had to deal with DNS hijacks, leading many to call webhosting services and domain registration a weak link in the supply chain, as the vendors in question often fall victim to socially-based attacks.
In fact, earlier this year, this reporter’s GoDaddy account was easily hijacked after a security expert social engineered call center employees and used Photoshop to forge a state ID.
“A stealthy cybercriminal can easily do his or her homework using social media outlets to gain sufficient information to request an account reset through a call center,” added Budd.
“It’s unfortunately becoming an area of focus for criminals that turns into a nightmare for victims seeking to regain control.” (read full article …)
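One modest defence against the kind of DNS hijacking described above is to periodically verify that a domain still resolves to the addresses its owner expects. A minimal sketch using only the Python standard library (the baseline addresses a real deployment would record are placeholders here):

```python
import socket

def resolve_ipv4(domain):
    """Resolve a domain to its current set of IPv4 addresses
    (empty set if resolution fails)."""
    try:
        infos = socket.getaddrinfo(domain, None, socket.AF_INET)
        return {info[4][0] for info in infos}
    except socket.gaierror:
        return set()

def check_dns(expected, resolver=resolve_ipv4):
    """Compare current resolution against a recorded baseline.

    expected: {domain: set of known-good IPv4 addresses}
    Returns (domain, unexpected_addresses) pairs that deviate,
    which would warrant an alert in a monitoring job."""
    alerts = []
    for domain, known in expected.items():
        unexpected = resolver(domain) - known
        if unexpected:
            alerts.append((domain, unexpected))
    return alerts
```

A check like this catches the symptom (records pointing somewhere new) rather than the breach itself, but it shortens the window during which hijacked DNS silently redirects visitors.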
5/18/2015 - Nick Bostrom supports Hawking’s AI predictions
May 18, 2015 – TechWorld
Future of Humanity: Nick Bostrom supports Stephen Hawking’s AI predictions
The growing field of artificial intelligence is catching the eye of academics and technology leaders worldwide.
By Sam Shead
Renowned artificial intelligence (AI) expert Nick Bostrom yesterday said robots and machines will overtake humans in terms of intelligence within the next century, in an echo of a prediction from Professor Stephen Hawking mere days before.
Bostrom – who leads the Future of Humanity Institute at the University of Oxford and is known for his work on existential risk, human enhancement ethics and superintelligence risks – said there’s a “decent probability” that machines will outsmart humans within the next hundred years.
“100 years is quite long,” he said at the University of Oxford on Sunday during the annual Silicon Valley Comes to Oxford conference. “We haven’t even had computers for 100 years so everything we’ve seen so far has happened in like 70 years. If you think of the simplest computers, so some simple thing like Pong, and compare that to where we are now, it’s a fairly large distance. So it doesn’t seem that crazy to say that in 100 years, or indeed much less than that, we will take the remaining steps.”
AI can be defined as the intelligence exhibited by machines or software. It has the potential to have a profound impact on the world and it’s an area being pursued by global tech giants such as Google and Facebook.
Bostrom said there will be a fundamental transformation in human civilisation when machine intelligence reaches the same level as human intelligence, adding that it will arguably be the most important thing that will ever happen in human history.
“I personally believe that once human equivalence is reached, it will not be long before machines become superintelligent after that,” he told an audience of students, aspiring entrepreneurs, academics and business leaders. “It might take a long time to get to human level but I think the step from there to superintelligence might be very quick. I think these machines with superintelligence might be extremely powerful, for the same basic reasons that we humans are very powerful relative to other animals on this planet. It’s not because our muscles are stronger or our teeth are sharper, it’s because our brains are better.”
If humans do create superintelligent machines, Bostrom said our future is likely to be shaped by them, for the better or the worse.
“Superintelligence could help humans achieve our long term goals and values,” he said. “They could be an extremely powerful ally that could help us solve a number of other problems that we face.”
But superintelligence could also be “extremely dangerous” said Bostrom, pointing to the extinction of the Neanderthals and the near-extinction of the gorillas when the more intelligent Homo sapiens arrived. (read full article…)
5/18/2015 - Facebook accused of creating insecure Web
May 18, 2015 – The Hill
Facebook’s Internet.org accused of creating insecure Web
By Cory Bennett
Digital rights groups are piling on to criticism that Facebook’s worldwide Internet access project, Internet.org, doesn’t promote privacy or security.
On Monday, 60 groups from 28 countries wrote an open letter on Facebook expressing their concerns about the project from Facebook chief Mark Zuckerberg that is trying to bring basic Web services to the roughly 4 billion unconnected people worldwide.
Internet.org, the groups maintained, is “threatening freedom of expression, equality of opportunity, security, privacy and innovation.”
Similar criticism started gaining traction in early May, after the social networking giant opened up the Internet.org platform to developers.
The move allowed software engineers to develop third-party Internet services and apps using the website’s platform. But the platform doesn’t support apps that are encrypted or traffic protected with secure hypertext transfer protocol, a common method of securing website activity.
Facebook developers say the company is working to add these features soon.
While Internet.org has been lauded for its aspirations, digital rights advocates have criticized it for giving Facebook developers too much control over which services are available to the unconnected community.
Some have even alleged the initiative flies in the face of the Facebook-supported concept of net neutrality, in which traffic from all Internet services is valued equally.
Monday’s open letter gave a microphone to these allegations.
“It is our belief that Facebook is improperly defining net neutrality in public statements and building a walled garden in which the world’s poorest people will only be able to access a limited set of insecure websites and services,” the letter said.
Data collection through Internet.org is a major concern as well.
“Given the lack of statements to the contrary, it is likely Internet.org collects user data via apps and services,” the letter continued. “There is a lack of transparency about how that data are used by Internet.org and its telco partners.” (read full article…)
5/17/2015 - ISIS preps for cyber war
May 17, 2015 – The Hill
ISIS preps for cyber war
By Cory Bennett and Elise Viebeck
Islamic terrorists are stoking alarm with threats of an all-out cyber crusade against the United States, and experts say the warnings should be taken seriously.
Hackers claiming affiliation with the Islamic State in Iraq and Syria (ISIS) released a video Monday vowing an “electronic war” against the United States and Europe and claiming access to “American leadership” online.
“Praise to Allah, today we extend on the land and in the Internet,” a faceless, hooded figure said in Arabic. “We send this message to America and Europe: We are the hackers of the Islamic State and the electronic war has not yet begun.”
The video received ridicule online for its poor phrasing and the group’s apparent inability to make good on its cyber threat this week.
But as hackers around the world become more sophisticated, terrorist groups are likely to follow their lead and use the same tools to further their ends, experts said.
“It’s only really a matter of time till we start seeing terrorist organizations using cyberattack techniques in a more expanded way,” said John Cohen, a former counterterrorism coordinator at the Department of Homeland Security.
“The concern is that, as an organization like ISIS acquires more resources financially, they will be able to hire the talent they need or outsource to criminal organizations,” Cohen added. “I think they’re probably moving in that direction anyway.”
Military officials agree. NSA Director Adm. Michael Rogers this week called the pending shift “a great concern and something that we pay lots of attention to.”
“At what point do they decide they need to move from viewing the Internet as a source of recruitment … [to] viewing it as a potential weapon system?” Rogers asked.
While ISIS has been widely recognized for its social media prowess, the growing computer science talent of its recruits has mostly gone unnoticed.
“A number of individuals that have recently joined the movement of ISIS were folks that studied computer science in British schools and European universities,” said Tom Kellermann, chief cybersecurity officer at security firm Trend Micro, who said ISIS’s cyber capabilities are “advancing dramatically.”
Even the man reportedly responsible for a number of the brutal ISIS beheadings, dubbed “Jihadi John” by his captives, has a computer science degree, Kellermann said. (read full article …)
5/17/2015 - UK government gives hacking GCHQ immunity
May 17, 2015 – Ars Technica UK
UK government quietly rewrites hacking laws to give GCHQ immunity

An aerial image of the Government Communications Headquarters (GCHQ) in Cheltenham, Gloucestershire.
By Sebastian Anthony
The UK government has quietly passed new legislation that exempts GCHQ, police, and other intelligence officers from prosecution for hacking into computers and mobile phones.
While major or controversial legislative changes usually go through normal parliamentary process (i.e. democratic debate) before being passed into law, in this case an amendment to the Computer Misuse Act was snuck in under the radar as secondary legislation. According to Privacy International, “It appears no regulators, commissioners responsible for overseeing the intelligence agencies, the Information Commissioner’s Office, industry, NGOs or the public were notified or consulted about the proposed legislative changes… There was no public debate.”
Privacy International also suggests that the change to the law was in direct response to a complaint that it filed last year. In May 2014, Privacy International and seven communications providers filed a complaint with the UK Investigatory Powers Tribunal (IPT), asserting that GCHQ’s hacking activities were unlawful under the Computer Misuse Act.
On June 6, just a few weeks after the complaint was filed, the UK government introduced the new legislation via the Serious Crime Bill that would allow GCHQ, intelligence officers, and the police to hack without criminal liability. The bill passed into law on March 3 this year, and became effective on May 3. Privacy International says there was no public debate before the law was enacted, with only a rather one-sided set of stakeholders being consulted (Ministry of Justice, Crown Prosecution Service, Scotland Office, Northern Ireland Office, GCHQ, police, and National Crime Agency). (read full article…)
5/15/2015 - Chinese Search Giant Beats Google With Smartest AI Yet
May 15, 2015 – Sputnik News
Chinese Search Giant Beats Google With Smartest Artificial Intelligence Yet
Chinese search company Baidu unveiled its latest advancement in developing artificial intelligence, claiming its supercomputer has taken the global lead in the field.
The Minwa supercomputer was set the task of scanning and sorting a large database’s worth of images, and did so with a 95.42% rate of accuracy, Baidu reports. This beat both Google’s system, which scored 95.2%, and Microsoft’s, which scored 95.06%.
All three systems have, over recent months, outperformed humans at this work; people typically achieve about 95% accuracy at this type of task.
“I am very excited about all the progress in computer vision that the whole community has made,” Baidu’s chief scientist Andrew Ng told the Wall Street Journal about the rapid advances in this technology. “Computers can understand images so much better and do so many things that they couldn’t do just a year ago.”
The ImageNet database supplied more than 1 million images for Minwa to sift through and sort into more than 1,000 different pre-defined categories: for example, sorting images of animals or food, by species or type.
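The headline percentages on this benchmark are conventionally top-5 accuracy: a prediction counts as correct if the true label appears among a model’s five highest-ranked guesses out of the 1,000-plus categories. A toy illustration of how that metric is computed (the labels below are invented):

```python
def top5_accuracy(predictions, truths):
    """predictions: one ranked list of candidate labels per image,
    best guess first; truths: the corresponding true labels.
    Returns the fraction of images whose true label appears in the
    model's top five guesses."""
    correct = sum(1 for ranked, truth in zip(predictions, truths)
                  if truth in ranked[:5])
    return correct / len(truths)
```

So a reported 95.42% means the true category was missing from all five of the system’s best guesses on fewer than one image in twenty.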
This type of artificial intelligence — “deep learning” — is hot in Silicon Valley circles right now. Google, for one, has acquired several AI companies, including DeepMind, a UK-based company that has, among other things, created systems to learn how to play different kinds of computer games. (read full article …)
5/14/2015 - China, Russia Seek New Internet World Order
May 14, 2015 – US News & World Report
China, Russia Seek New Internet World Order
The two nations’ recent cybersecurity pact shows their goal of undermining America’s Internet dominance.
By Tom Risen
China and Russia have made little attempt to hide their geopolitical ambitions. Militarily, each has asserted a right to terrain not recognized as theirs. Economically, the two have designs on gaining a greater foothold in the world marketplace, Western roadblocks be damned.
And while an unprecedented pact not to deploy network hackers against each other may prove largely symbolic, it’s yet another glaring sign of the two countries’ shared desire to shake up a world order largely dominated by the U.S. since the end of World War II.
“This agreement is not about Russia’s and China’s love for each other – it’s about how they dislike the U.S.,” says James Lewis, a cybersecurity researcher at the Center for Strategic and International Studies.
It was during a visit to Moscow this month by Chinese President Xi Jinping – punctuated by a military parade commemorating the 70th anniversary of Nazi Germany’s defeat in the Second World War – that Russia and China took their partnership to the next level. Xi and Russian President Vladimir Putin signed dozens of bilateral agreements, including one outlining their shared agenda for the Internet.
The accord states that both countries agree not to attack each other’s networks, and highlights the risk of technology that aims for the “destabilization [of] the political and socio-economic environment” and “to undermine the sovereignty and security [of the] State.” A rough translation of the original document, which was published by the Russian government, is available online. (read full article…)
5/13/2015 - Weary Soldier in America’s Cyber War
May 13, 2015 – Foreign Policy
The State Department’s Weary Soldier in America’s Cyber War
By Tim Starks
A new age of cyberwarfare is dawning, and a little-known State Department official named Christopher Painter — a self-described computer geek who made his name prosecuting hackers — is racing to digital battlegrounds around the world to help stave off potential future threats.
One of his stops was in South America, where he visited Argentina, Chile, and Uruguay, to hear about what those countries were doing to protect computer networks. One was in Costa Rica, to tout the U.S. vision for the Internet, including security. Another was in The Hague, to, among other things, promote international cooperation in cyberspace.
“It’s been a hectic couple of weeks,” he said.
There’s a reason for that. Last month, Arlington, Va.-based security firm Lookingglass released a report detailing a full-scale cyber war being waged by Russia against Ukraine. Russia, Lookingglass concluded, was hacking Ukrainian computers and vacuuming up classified intelligence that could be used on the battlefield. The week before, the Pentagon publicly released a new strategic document declaring, for the first time, that it was prepared to pair cyber war with conventional warfare in future conflicts, such as by disrupting another country’s military networks to block it from attacking U.S. targets.
Painter is charged with finding answers to some of the thorniest policy questions confronting Washington in the digital age: how to wage cyber war, how not to, and how nations can or even should cooperate on establishing rules for cyber offense.
Because countries have found it so hard to sort out answers to these difficult subjects, Painter is setting his sights low, at least for now. One of his initial goals: promoting a set of voluntary international standards, such as one that says nations should not knowingly support online activities that damage critical infrastructure that provides services to the public.
“We’re in the relative infancy of thinking about this issue,” Painter said. “This is a fast-changing technology. We’re at the beginning of the road.”
Other, related debates — on surveillance and cyber defense — are further along. Congress is working through a renewal of expiring provisions of the Patriot Act. Other countries are getting in on the act as well: France’s National Assembly this month approved a bill being dubbed “the French Patriot Act,” which controversially allows the government to collect mass e-mail data, and Canada’s House of Commons last week passed anti-terrorism legislation that critics contend endangers online privacy. Congress also has a good chance this year to pass a cybersecurity bill that fosters threat data sharing between companies and the government. (read full article …)
5/12/2015 - Allen's AI Institute launches startup incubator
May 12, 2015 – Geek Wire
Paul Allen’s Artificial Intelligence Institute launches startup incubator with top minds in AI
By John Cook
The Allen Institute for Artificial Intelligence has established a new startup incubator at its offices in Seattle, recruiting two high-level researchers who will try to develop technologies in the emerging field.
Joining the Institute’s new incubator program are Prismatic co-founder Aria Haghighi and Johns Hopkins University PhD graduate Xuchen Yao.
“We are quickly building an element of the Seattle tech ecosystem, and we’ve identified cutting-edge folks who are startup minded,” said Oren Etzioni, the former University of Washington computer science professor who now leads the Allen Institute for Artificial Intelligence. “Once we identify super-talented folks like Xuchen and Aria — we give them a lot of freedom to pursue their instincts and initiatives.”
Among the projects being developed at AI2 is Aristo, described as a “first step towards a machine that contains large amounts of knowledge in machine-computable form that can answer questions, explain those answers, and discuss those answers with users.” (read full article …)
5/11/2015 - An Important Step in Artificial Intelligence
May 11, 2015 – UC Santa Barbara Current
An Important Step in Artificial Intelligence
Researchers in UCSB’s Department of Electrical and Computer Engineering are seeking to make computer brains smarter by making them more like our own.
By Sonia Fernandez
In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit. For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.
“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.
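For a sense of scale, each artificial synapse plays the role of one adjustable weight in a neural classifier. A software analogue of a circuit with roughly 100 synapses is a single-layer perceptron over a 100-element input (say, a 10×10-pixel image) — a purely illustrative sketch, not the UCSB team’s actual design:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single-output perceptron: one weight per 'synapse'.
    samples: list of equal-length pixel vectors (e.g. 100 values
    for a 10x10 image); labels: 0 or 1 per sample."""
    n = len(samples[0])
    w = [0.0] * n          # one weight per synapse
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:        # classic perceptron update on mistakes
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

With about 100 weights, a classifier like this can only separate very simple image classes — which is why the demonstrated task was a simple version of image classification, and why scaling toward brain-like synapse counts is the long-term goal.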
For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform. (read full article…)
5/10/2015 - Does Artificial Intelligence Pose a Threat?
May 10, 2015 – Wall Street Journal
Does Artificial Intelligence Pose a Threat?
A panel of experts discusses the prospect of machines capable of autonomous reasoning
By Ted Greenwald
Paging Sarah Connor!
After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.
The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?
The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.
How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S. Banavar, vice president of cognitive computing at IBM’s Thomas J. Watson Research Center; and Francesca Rossi, a professor of computer science at the University of Padua, a fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of the International Joint Conferences on Artificial Intelligence, the main international gathering of researchers in AI. (read full article …)
5/8/2015 - How smart is today’s artificial intelligence?
May 8, 2015 – PBS News Hour
How smart is today’s artificial intelligence?
Artificial intelligence is creeping into our everyday lives through technology like check-scanning machines and GPS navigation. How far away are we from making intelligent machines that actually have minds of their own? Hari Sreenivasan reports on the ethical considerations of artificial intelligence as part of our Breakthroughs series.
JUDY WOODRUFF: You may not realize it, but artificial intelligence is all around us. We rely on smart machines to scan our checks at ATMs, to navigate us on road trips and much more.
Still, humans have quite an edge. Just today, four of the world’s best Texas Hold ‘Em poker players won an epic two-week tournament against, yes, an advanced computer program. The field of artificial intelligence is pushing new boundaries.
Hari Sreenivasan has the first in a series of stories about it and the concerns over where it may lead. It’s the latest report in our ongoing Breakthroughs series on invention and innovation.
HARI SREENIVASAN: Artificial intelligence has long captured our imaginations.
ACTOR: Open the pod bay doors, Hal.
ACTOR: I’m sorry, Dave. I’m afraid I cannot do that.
HARI SREENIVASAN: With robots like Hal in “2001: A Space Odyssey” and now Ava from the recently released “Ex Machina.”
ACTRESS: Hello. I have never met anyone new before.
HARI SREENIVASAN: And “Chappie.”
ACTRESS: A thinking robot could be the end of mankind.
HARI SREENIVASAN: The plots thicken when the intelligent machines question the authority of their makers, and begin acting on their own accord.
ACTRESS: Do you think I might be switched off?
ACTOR: It’s not up to me.
ACTRESS: Why is it up to anyone?
HARI SREENIVASAN: Make no mistake, these are Hollywood fantasies. But they do tap into real-life concerns about artificial intelligence, or A.I.
Elon Musk, founder and CEO of Tesla Motors & SpaceX, is not exactly a Luddite bent on stopping the advance of technology. But he says A.I. poses a potential threat more dangerous than nuclear weapons.
ELON MUSK, CEO, Tesla Motors & SpaceX: I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that. With artificial intelligence, we are summoning the demon.
HARI SREENIVASAN: Musk recently donated $10 million to the Future of Life Institute, which is focused on keeping A.I. research beneficial to humanity. Add his voice to a list of bright minds like physicist Stephen Hawking, Microsoft founder Bill Gates and several leaders in the field of artificial intelligence, among them, Stuart Russell, who heads the A.I. Lab at the University of California, Berkeley.
What concerns you about how artificial intelligence is already being used, or will be used shortly?
STUART RUSSELL, University of California, Berkeley: In the near term, the biggest problem is the development of autonomous weapons. Everyone knows about drones. Drones are remotely piloted. They’re not robots in a real sense. There’s a person looking through the camera that’s on the aircraft, and deciding when to fire.
An autonomous weapon would do all of that itself. It chooses where to go, it decides what the target is, and it decides when to fire. (read full article & watch video …)
5/7/2015 - Three recent developments in synthetic biology
May 7, 2015 – Washington Post
Three recent developments in synthetic biology you need to know
By Dominic Basulto
Using synthetic biology techniques, researchers have created everything from new flavors and fragrances to new types of biofuels and materials. While the innovation potential of combining biology and engineering is unquestionable, now comes the hard part of proving that it is possible to design and build engineered biological systems on a cost-effective industrial scale, thereby creating true “bio-factories.”
For that scenario to become a reality, here are three developments in the synthetic biology space to keep an eye on in 2015:
1. New efforts to catalogue synthetic biology innovations
On April 29, the Wilson Center in Washington, D.C. launched a new initiative of its Synthetic Biology Project (which dates back to 2008): a first-of-its-kind inventory to track the dizzying array of new synthetic biology products that are emerging in fields such as agriculture, chemicals and materials. The task is so large that the Wilson Center is crowdsourcing the project, letting registered users track the functions and properties of these products. …
2. New initiatives to embrace industry-wide standards
… On March 31, the U.S. National Institute of Standards and Technology (NIST) convened a working group at Stanford University to launch the Synthetic Biology Standards Consortium. Working in groups, participants at Stanford discussed the types of standards that would make it easier for researchers to share methods, materials and information within the field of synthetic biology.
This embrace of industry-wide standards could be a huge step forward for the synthetic biology industry, which is still only in its infancy. Industry standards are a cornerstone of any technology industry, and getting major companies such as Dow, DuPont, Lockheed Martin and Novartis – all participants in the consortium – onboard is a positive step. Going forward, academics, researchers and entrepreneurs will develop common standards for “automating methods, describing and assembling components and documenting the performance of engineered bacterial strains.”
3. The entry of innovation champions such as DARPA into the synthetic biology field
After announcing the launch of its new Biological Technologies Office in April 2014, DARPA is finally moving off the sidelines and getting into the game. If DARPA brings the same innovation know-how to synthetic biology that it has brought to fields such as robotics, the Internet and autonomous vehicles, this could be big. At the Biology is Technology (BiT) event hosted by DARPA in San Francisco in mid-February, the agency sought to outline all the innovative ways that it hoped to use biology for defense technology, such as through its Living Foundries program. (read full article …)
5/6/2015 - 2.2 Billion Malicious Attacks in Q1
May 6, 2015 – Infosecurity Magazine
2.2Bn Malicious Attacks in Q1 Show a Doubling of Threats in One Year
A staggering 2.2 billion+ malicious attacks on computers and mobile devices were mounted during the first quarter of 2015, double the number detected in Q1 of 2014.
That’s according to Kaspersky Lab’s IT Threat Evolution Report for Q1 of 2015, which called the quarter “monumental” for malware.
Kaspersky said that it repelled 469 million attacks launched from online resources located all over the world, a third (32.8%) more than in Q1 of 2014. And, more than 93 million unique URLs were recognized as malicious by web antivirus, 14.3% more than in Q1 of 2014.
Interestingly, Russia continues to be a nexus for cybercriminal activity. Kaspersky said that 40% of web attacks neutralized by Kaspersky Lab products were carried out using malicious Web resources located in Russia. Last year Russia shared first place with the US, with the two countries accounting for 39% of web attacks between them.
“In the last few years, Kaspersky Lab has observed many advanced cyber-threat actors, appearing to be fluent in many languages, such as Russian, Chinese, English, Korean or Spanish,” said Aleks Gostev, chief security expert in the Kaspersky Lab Global Research and Analysis Team. “In 2015 we reported on cyber-threats ‘speaking’ Arabic and French, and the question now is ‘who will be next?’”
On the mobile front, threats declined but remained dangerous. During the quarter, 103,072 new malicious programs for mobile devices were discovered, a 6.6% decline from the number discovered in Q1 of 2014. However, mobile malware is evolving toward monetization as malware writers design SMS Trojans, banker Trojans and ransomware Trojans capable of stealing or extorting money and users’ bank data. This category of malware accounted for 23.2% of new mobile threats in Q1 of 2015. Kaspersky Lab also detected 1,527 new mobile banking Trojans, 29% more than in Q1 of 2014.
The report also covered the top threats in the quarter, including what it considers the most sophisticated advanced persistent cyber-espionage threat to date—The Equation Group.
This particular threat actor has surpassed anything known to date in terms of complexity and sophistication of tools, Kaspersky said. It’s been linked to the Stuxnet and Flame super threats; its first known sample dates back to 2002; and it is still active. Among its unique proficiencies is the ability to infect hard drive firmware, use an “interdiction” technique to infect victims, and mimic criminal malware. (read full article …)
5/6/2015 - How Watson will impact our fight against cancer
May 6, 2015 – Washington Post
How IBM Watson will impact our fight against cancer
By Dominic Basulto
At the inaugural World of Watson event in New York Tuesday, IBM announced a new Watson Genomics initiative that will utilize the computing capabilities of Watson to make it easier and faster to fight cancer. It will now be possible for clinicians at more than a dozen cancer institutes around the nation to apply Watson’s data-crunching abilities to sequence the genome of a cancerous tumor and then access the most relevant information in the medical literature to recommend a treatment option. The goal is an ambitious one: personalized medicine for cancer patients everywhere based on their unique genomic profile. …
The most important factor, say clinicians involved in the new Watson genomics initiative, is the faster processing time of DNA insights made possible by using Watson. In some cases, the processing time can be reduced from weeks to minutes. This is critically important, given the amount of genomic data out there. …
Rolling out Watson to more than a dozen cancer institutes around the nation is just the beginning. At some point, any oncologist might have access to Watson, meaning that cancer patients in any geographic locale would benefit, not just those located near a major cancer research institute.
“This collaboration is about giving clinicians the ability to do for a broader population what is currently only available to a small number – deliver personalized, precision cancer treatments,” said Steve Harvey, vice president of IBM Watson Health. “The technology that we’re applying to this challenge brings the power of cognitive computing to bear on one of the most urgent and pressing issues of our time – the fight against cancer – in a way that has never before been possible.” (read full article …)
5/5/2015 - Malware Corrupts Drives to Prevent Code Analysis
May 5, 2015 – eWeek
Complex Rombertik Malware Corrupts Drives to Prevent Code Analysis
By Robert Lemos
The malware, which attempts to steal information about Web sites and users, deletes the master boot record—or all user files—to avoid detection, according to a Cisco analysis.
Attackers are adopting increasingly malicious tactics to evade security researchers’ analysis efforts, with a recently discovered data-stealing program erasing the master boot record of a system’s hard drive if it detects signs of an analysis environment, according to a report published by Cisco on May 4.
The malware, dubbed Rombertik, compromises systems and attempts to steal information, such as login credentials and personal information, from the victim’s browser sessions, researchers with Cisco’s Talos security intelligence group stated in the report.
When the malware installs itself, the software runs several anti-analysis checks, attempting to determine if the system on which it is running is an analysis environment. If the last check fails, the malware deletes the master boot record, or MBR, which is required to correctly start up the computer system.
“The interesting bit with Rombertik is that we are seeing malware authors attempting to be incredibly evasive,” Alexander Chiu, a threat researcher with Cisco, said in an e-mail interview with eWEEK. “If Rombertik detects it’s being analyzed running in memory, it actively tries to trash the MBR of the computer it’s running on. This is not common behavior.” (read full article…)
5/1/2015 - ZuckerBorg: a potentially world-threatening FART
May 1, 2015 – The Register
ZuckerBorg assimilates Microsoft boffins into potentially world-threatening FART
Evil existential threat or waste of money? Probably both
By Alexander J. Martin
The ZuckerBorg has assimilated yet more humans from academia and industry into its Facebook Artificial-Intelligence Research Team (FART). Facebook claims their work will focus on several aspects of machine learning, with applications to image, speech and natural language understanding.
The free global ad platform announced that it had “bolstered” the team with some of the world’s leading researchers from Microsoft and academia.
Among those assimilated is the award-winning Leon Bottou, whose long-term goal “is to understand how to build human-level intelligence”. Also on board are Laurens van der Maaten and Anitha Kannan, who will continue their research into deep learning.
The former Microsoft employees have been at Facebook since March, although their employment has only been announced now.
Among the others joining them are Nicolas Usunier and Gabriel Synnaève, both of whom are reasonably established academics in the rather disparate fields around AI. Facebook states that it expects its employees at FART to continue to “contribute academic study in the computer, social and data sciences”, and inform “the development of products and other innovations that serve the Facebook community”.
The private sector has done well to poach academia’s brightest AI researchers in recent years, as well it might when it is pouring funding into the topic.
Google hired seven boffins from the University of Oxford last October, and FART now employs more than 40 people across its laboratories in San Francisco and New York.
Facebook describes itself as committed to advancing “the field of machine intelligence and developing technologies that give people better ways to communicate”. It also admits that “in the long term, we seek to understand intelligence and make intelligent machines”. (read full article …)
- Noteworthy News for April 2015
- Noteworthy News for March 2015
- Noteworthy News for 2011 through Feb. 2015