10/17/2015 - Experts Warn UN Panel About Superintelligence
October 17, 2015 – Gizmodo Australia
By George Dvorsky
During a recent United Nations meeting about emerging global risks, political representatives from around the world were warned about the threats posed by artificial intelligence and other future technologies.
The event, organised by Georgia’s UN representatives and the UN Interregional Crime and Justice Research Institute (UNICRI), was set up to foster discussion about the national and international security risks posed by new technologies, including chemical, biological, radiological, and nuclear (CBRN) materials.
The panel was also treated to a special discussion on the potential threats raised by artificial superintelligence — that is, AI whose capabilities greatly exceed those of humans. The purpose of the meeting, held on October 14, was to discuss the implications of emerging technologies, and how to proactively mitigate the risks.
[Video: full meeting. Max Tegmark’s talk begins at 1:55, and Nick Bostrom’s at 2:14.]
The meeting featured two prominent experts on the matter, Max Tegmark, a physicist at MIT, and Nick Bostrom, the founder of Oxford’s Future of Humanity Institute and author of the book Superintelligence: Paths, Dangers, Strategies. Both agreed that AI has the potential to transform human society in profoundly positive ways, but they also raised questions about how the technology could quickly get out of control and turn against us.
Last year, Tegmark, along with physicist Stephen Hawking, computer science professor Stuart Russell, and physicist Frank Wilczek, warned about the current culture of complacency regarding superintelligent machines.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” the authors wrote. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” (read full article …)
10/10/2015 - $500 million quest to build an artificial brain
October 10, 2015 – The Peninsula
By Ariana Eunjung Cha
Paul Allen has been waiting for the emergence of intelligent machines for a very long time. As a young boy, Allen spent much of his time in the library reading science-fiction novels in which robots manage our homes, perform surgery and fly around saving lives like superheroes. In his imagination, these beings would live among us, serving as our advisers, companions and friends.
Now 62 and worth an estimated $17.7 billion, the Microsoft co-founder is using his wealth to back two separate philanthropic research efforts at the intersection of neuroscience and artificial intelligence that he hopes will hasten that future.
The first project is to build an artificial brain from scratch that can pass a high school science test. It sounds simple enough, but trying to teach a machine not only to respond but also to reason is one of the hardest software-engineering endeavors attempted – far more complex than building his former company’s breakthrough Windows operating system, said to have 50 million lines of code.
The second project aims to understand intelligence by coming at it from the opposite direction – by starting with nature and deconstructing and analyzing the pieces. It’s an attempt to reverse-engineer the human brain by slicing it up – literally – modeling it and running simulations.
“Imagine being able to take a clean sheet of paper and replicate all the amazing things the human brain does,” Allen said in an interview.
He persuaded University of Washington AI researcher Oren Etzioni to lead the brain-building team and Caltech neuroscientist Christof Koch to lead the brain-deconstruction team. For them and the small army of other PhD scientists working for Allen, the quest to understand the brain and human intelligence has parallels in the early 1900s when men first began to ponder how to build a machine that could fly.
There were those who believed the best way would be to simulate birds, while there were others, like the Wright brothers, who were building machines that looked very different from species that could fly in nature. And it wasn’t clear back then which approach would get humanity into the skies first.
Whether they create something reflected in nature or invent something entirely novel, the mission is the same: conquering the final frontier of the human body – the brain – to enable people to live longer, better lives and answer fundamental questions about humans’ place in the universe. (read full article …)
10/10/2015 - Japanese entrepreneur uses AI to expose fraudulent research papers
October 10, 2015 – Nikkei Asian Review
By Yosuke Suzuki
A young Japanese entrepreneur is using cutting-edge image processing and artificial intelligence tools to uncover fraudulent research papers. This same technology can also be used to better diagnose cancer.
Yuki Shimahara, 28, is the founder and chief executive of LPixel, a start-up that originated at the University of Tokyo. In February 2014, Shimahara was busy creating a company to develop image-analysis software for use at university laboratories and companies. This also happened to be when science research in Japan was reaching a crisis point due to a string of misconduct cases. The most notable was the exposure around that time of fabricated results in research papers by Japanese scientist Haruko Obokata.
Shimahara decided to try out his technology on images published in Obokata’s papers, which were about the stimulus-triggered acquisition of pluripotency (STAP) cells. He found that one image showing DNA analysis had been partially replaced with a photo from a different experiment.
These findings concerned Shimahara. He worried that the world of science could become corrupted if nothing were done, and so he began developing a service to detect alterations made to images.
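The article does not describe how Shimahara’s detector works. A common building block for this kind of check, sketched below purely as an illustration and not as LPixel’s actual method, is perceptual hashing: each image (or image region) is reduced to a coarse bit fingerprint, and suspiciously similar fingerprints flag possible reuse. All names and thresholds here are invented.

```python
# A minimal "average hash" perceptual fingerprint, assuming image
# dimensions divisible by the hash size. Near-identical hashes for two
# figures suggest one may have been copied or lightly edited from the other.

def average_hash(pixels, size=8):
    """Reduce a grayscale image (2D list of 0-255 values) to a size*size bit hash."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            # Average brightness of each block of the downsampled grid.
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    # One bit per cell: brighter than the overall mean or not.
    return [int(c > mean) for c in cells]

def hamming(h1, h2):
    """Count differing bits between two hashes; small distances suggest reuse."""
    return sum(a != b for a, b in zip(h1, h2))

# Two synthetic "figures": the second is the first with a mild brightness
# shift, the kind of change that often survives figure reuse.
img_a = [[(x * y) % 256 for x in range(32)] for y in range(32)]
img_b = [[min(255, p + 10) for p in row] for row in img_a]
```

A real service would compare hashes of every region pair across a paper’s figures and surface the closest matches for human review.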
Initially, Shimahara considered charging for the service. But this April, just after establishing the company, he started offering the service for free in the hopes that it could serve as a deterrent against research fraud.
By word of mouth, the service quickly spread to universities and research institutions around the world, and is now used as often as 1,000 times a day. (read full article …)
10/8/2015 - Stephen Hawking: Start Working on AI Safety Right Now
October 8, 2015 – Epoch Times
By Jonathan Zhou
Stephen Hawking continues his crusade for artificial intelligence (AI) safety in a long-awaited Ask Me Anything thread on Reddit, stating that the groundwork for such protocols needs to start not some time in the distant future, but right now.
“We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence,” the famed physicist wrote. “It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”
Hawking has leveraged his prominence to bolster the AI safety movement since 2014, when he penned an editorial with other prominent AI researchers that warned about the existential risks advanced machines could pose to humanity. Other public figures in the science and technology sphere like Elon Musk and Steve Wozniak have since joined Hawking in trying to raise public awareness about AI risk, and earlier this year the three were among the signatories of an open letter that called for a ban on autonomous weapons, or armed robots.
The belief that humanity is on the verge of creating artificial intelligence greater than itself dates back to the 1950s. These expectations have repeatedly been foiled, but most AI researchers still believe that human-level machine intelligence will emerge within the next century.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. — Stephen Hawking
For Hawking, the way that advanced AIs are depicted in popular culture belies the actual threats posed by machines. Movies like “The Terminator” envision diabolical killer robots bent on destroying humanity, ascribing to machines motives that won’t exist in real life and implying that minimizing existential AI risk is as simple as not building killer robots.
“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble,” Hawking wrote. “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.” (read full article …)
10/8/2015 - AI: next major wave of computing
October 8, 2015 – CNET
By Shara Tibken
Andy Rubin, the father of Google’s mobile software, also says the industry needs to start figuring out how to interact with devices that don’t have a screen — like washing machines.
The next major phase of computing will be artificial intelligence, the creator of Google’s Android software predicted Wednesday.
Andy Rubin, speaking here at the Code Mobile conference, said mobile isn’t going away as the main method of computing, but other methods will emerge, including technology to make regular devices smarter through artificial intelligence and robotics. AI is the practice of making a machine behave in a smart way, such as making a robot smarter or adding Internet connectivity to something like a washing machine.
“Your dishwasher is a robot,” Rubin said. “It used to be a chore you did in the sink. … There’s a lot of definitions [of artificial intelligence]. … The thing that’s going to be new is the part of the cloud that’s forming the intelligence from all of the information that’s coming back.”
Rubin, creator of the Android mobile operating system, joined Google in 2005 when the search giant bought Android. Before working on Android, Rubin ran a company called Danger, which made an advanced cell phone. It was purchased by Microsoft in 2008.
Most recently at Google, Rubin was the head of the company’s nascent robotic efforts at its experimental Google X lab. But Rubin left the company a year ago to start Playground Global, a startup “incubator” that nurtures budding hardware companies. Google is an investor in Rubin’s new project. Redpoint Ventures, a venture-capital firm where Rubin is now a partner, also invested.
Silicon Valley has become a hot place for physical hardware projects, after the market had been driven for the last several years by software and Internet companies. Companies like smartwatch maker Pebble, home-video surveillance company Dropcam and the smart thermostat maker Nest have grabbed attention from the world’s largest tech companies, inspiring competition — like the Apple Watch and Google’s Android Wear watches — and acquisitions.
Rubin, speaking about his departure from Google, said he questioned what he was going to do for the next 10 years of his life. “Am I going to fight for 1 or 2 percent market share [in mobile devices], or am I going to do 10 more Androids?” he said. Playground closed its fundraising efforts “yesterday literally,” Rubin said Wednesday, and will now have $300 million to invest in hardware companies.
Rubin added that some areas he’s now focusing on include how to interact with things that don’t have screens, like appliances or swimming pools. One recent investment by his new fund was a company called Connected Yard, which makes a device that does a constant chemical analysis of the water in a swimming pool. (read full article …)
10/7/2015 - Don't assume you're safe from cyber-war
October 7, 2015 – SC Magazine
By Max Metzger
The new cyber-threat landscape includes the geopolitical dimension which organisations ignore at their peril, said Werner Thalmeier.
Werner Thalmeier, director of security solutions for the EMEA and CACI regions at Radware, had words of warning for visitors on the first day of IPExpo. There would be no product plug or advertisement here, he reassured his audience, only a sober briefing on the way cyber-attacks are taking on a geopolitical character in the ongoing cyber-war.
The talk, titled “The Next Cyber War: Geo-political Events and Cyber-attacks”, dealt with a phenomenon that has rarely been out of the headlines.
Early this year, the hacktivist group Anonymous declared war on online Islamic fundamentalism in the wake of the Charlie Hebdo Massacre. Calling the campaign #opcharliehebdo, the mysterious group crowdsourced like-minded individuals to help hunt for social media accounts, forums and websites known to be popular with radical Islamists and promptly began attacking them.
Their targets, however, had a response. In turn, they launched a campaign called ANONghost, a statement of online Jihad, and with it, they attacked thousands of websites, including many French local government websites, plastering their webpages with pro-Islamic state, pro-radical propaganda. According to Thalmeier, 19,000 websites were affected.
Another such example was Operation Ababil, in which online Jihadists once again attacked the networked capabilities of major western institutions. This time they took aim at the banks, attacking every level of their networked systems and eventually finding success against blindspots, namely the SSL servers.
These kinds of multi-vulnerability campaigns are often the attack method of choice of political actors such as these. Here, the adversaries will attack you at all stages of your network chain: the servers, the internet pipe and so on, until they find your blindspot.
From here, they won’t kill your firewall, they’ll just overwhelm the vulnerable spot and shut you down, said Thalmeier.
“When you become the victim of such an attack, there’s not a lot you can do,” he said. Even cloud protection, which doesn’t fall prey as easily to more traditional attacks, is vulnerable. While it’s good for volumetric attacks, it’s “obsolete” for attacks that are “low and slow”. (read full article …)
10/7/2015 - IPsoft Humanizes Artificial Intelligence
October 7, 2015 – Business Wire
IPsoft today announced Version 2.0 of Amelia, its artificial intelligence platform that automates knowledge work across a broad range of business functions. The new updates bring Amelia closer to achieving near human cognitive capabilities and will be demonstrated for the first time at the Gartner Symposium/ITxpo in Orlando by IPsoft CEO, Chetan Dube.
The growing maturity of Amelia’s core understanding capabilities will widen the range of roles she can execute and the breadth of knowledge she can absorb. In parallel, Amelia’s physical appearance and expressiveness have been transformed to create a more human-like avatar capable of deepening customer engagement. Her new animated form has been entirely remodeled on that of a real person.
“We are fast approaching the moment when technology is knocking at the Turing horizon, where machine intelligence starts to match human intelligence. Amelia will be the harbinger of that shift, inviting us to re-evaluate the relationship between man and machine in order to create a more efficient planet.” – Chetan Dube, IPsoft CEO
Within a matter of years, IPsoft plans to put the model and her facesake, Amelia, on stage and have a distinguished panel of analysts and journalists ask questions of the human and the machine. “In just one year we have seen Amelia ‘grow up’ tremendously. Just imagine how her maturity will accelerate over the next five,” said Chetan Dube, IPsoft CEO. (read full article …)
10/6/2015 - Disease Detector
October 6, 2015 – Tufts Now
Tufts researchers use artificial intelligence to explain why some cells deviate from normal development—a finding that could lead to better diagnosis and treatment
“The artificial intelligence system evolved a pathway that correctly explains all the existing data,” says Michael Levin. “Best of all, it also made correct predictions on data it had never seen.”
By Deborah Halber
Uncle Joe smokes a pack a day, drinks like a fish and lives to a ripe old age. His brother, leading a similar lifestyle, succumbs to cancer at age 55. Why do some individuals develop certain diseases or disorders while others do not? In newly reported research that could help provide answers, scientists at Tufts University and the University of Florida have used artificial intelligence to illuminate fundamental cellular processes and suggest potential targets to correct biological missteps.
The findings, published Oct. 6 in Science Signaling online, are believed to mark the first time artificial intelligence has been used to discover a molecular model that explains why some groups of cells deviate from normal development during embryogenesis, says senior author Michael Levin, A92, the Vannevar Bush Professor of Biology at Tufts and director of the university’s Center for Regenerative and Developmental Biology.
The research could lay the groundwork for a future medical screening system that could pinpoint precisely where a molecular pathway takes a wrong turn, producing cancer, diabetes or developmental disorders—and then identify a way to fix it.
The latest findings build on the center’s earlier studies to understand the development and metastasis of melanoma-like cells in tadpoles, as well as work applying artificial intelligence to help explain planarian regeneration. The new findings, Levin says, indicate that “our methodology can be taken well beyond simple organisms and applied to the physiology of cell behavior in vertebrates.”
For the Science Signaling work, the researchers used a type of artificial intelligence called evolutionary computation to pinpoint the molecular mechanisms underlying the earlier research, in which they induced normal pigment cells in the embryos of a species of African frog to metastasize. They used a series of drugs to disrupt the cells’ normal bioelectrical and serotonergic signaling at a crucial stage of development. (read full article …)
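The study’s actual search over candidate molecular networks is far more elaborate, but the core loop of the technique it names, evolutionary computation, fits in a few lines. In this sketch the “model” is just a hypothetical 4-parameter vector and TARGET stands in for observed data; both are made up for illustration.

```python
import random

TARGET = [0.2, 0.8, 0.5, 0.1]  # hypothetical observations to explain

def fitness(candidate):
    # Higher is better: negative squared error against the "observations".
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1, sigma=0.1):
    # Perturb each parameter slightly, with small probability.
    return [c + random.gauss(0, sigma) if random.random() < rate else c
            for c in candidate]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]          # selection
        children = [mutate(random.choice(survivors))    # variation
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

The published work evaluates candidate signaling networks against experimental outcomes rather than fitting a parameter vector, but the select-mutate-repeat loop is the same shape.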
10/6/2015 - Cost of Cyber-crime in the U.S. Rises to $15M
October 6, 2015 – eWeek
By Sean Michael Kerner
The annual Ponemon Institute Cost of Cyber Crime Study reports a rising cost in the U.S. and globally.
With a seemingly endless stream of breaches reported over the course of the past year, it should come as no surprise that costs associated with cyber-crime are on the rise. The annual Ponemon Institute 2015 Cost of Cyber Crime Study, sponsored by Hewlett-Packard, came out Oct. 6, reporting that in the United States the average annualized cost of cyber-crime is now $15 million, up 19 percent over the 2014 report.
The Cost of Cyber Crime Study also examined global costs, which are not as high on average as those in the U.S. For the 2015 study, the global average annualized cost of cyber-crime is $7.7 million for a 1.9 percent year-over-year increase. The global study methodology examined 252 companies across seven countries, with 1,928 attacks used to measure the total cost. Specifically in the U.S., the study looked at 58 companies, with 638 cyber-attacks used to measure the total cost.
“We were surprised by the consistent increase in the cost of cyber-crime over just one year in all countries,” Larry Ponemon, chairman and founder of the Ponemon Institute, told eWEEK. “We believe this is due to the increased sophistication and stealth of cyber-attacks.”
Ponemon added that what is happening is that instead of it getting easier for organizations to contain and remediate attacks, it is getting harder and is affecting the average cost. Additionally, he noted that there is also more sensitive and confidential information to protect and more disruptive technologies in the workforce.
“On a positive note, we are seeing steps companies can take to address the increase in cost such as deploying security intelligence systems and having internal security expertise,” Ponemon said.
In terms of specific security technologies that can help organizations lower cyber-crime costs, Ponemon found that security information and event management (SIEM) use leads to an average cost savings of $3.7 million per year.
Looking at what drives up cost, Ponemon said one factor is the time it takes to resolve a cyber-attack. In the U.S., it now takes an organization 46 days on average to remediate a cyber-attack, up by a day from the 45 days reported in 2014, at an average cost of $43,327 per day. Ponemon said that typically a cyber-crime will cost more the longer it takes to resolve. (read full article …)
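The quoted figures imply a per-attack total the article doesn’t state outright. As a quick sanity check (an inference from the article’s numbers, not a figure from the Ponemon study itself):

```python
# Implied average cost of resolving a single U.S. cyber-attack,
# from the article's figures: 46 days at $43,327 per day.
days_to_resolve = 46
cost_per_day = 43_327

implied_cost_per_attack = days_to_resolve * cost_per_day
print(implied_cost_per_attack)  # 1993042, i.e. just under $2 million
```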
10/6/2015 - Man vs. machine circa 2018
October 6, 2015 – ZDNet
The relationship between people and machines will go from cooperative to co-dependent to competitive in the years ahead. This’ll be fun.
By Larry Dignan
Gartner’s strategic predictions for 2018 and beyond revolved around the relationship between people and machines and how it’ll go from cooperative to co-dependent to competitive. Is it just me or does this man-machine thing sound like a bad relationship?
Here’s a look at Gartner’s strategic predictions for the years ahead via analyst Daryl Plummer and some thoughts on what makes sense and what’s questionable.
- By 2018, 20 percent of all business content will be authored by machines. Reality check: This robo content move is a no-brainer. News organizations are already using robo-writers in select instances and mutual fund reports are largely automated as are those fantasy football recaps.
- Six billion connected things will request support by 2018. Plummer said that IT leaders need to view things as customers and work to satisfy “their nonhuman requests.” Reality check: This prediction sounds like it came from Salesforce, which is betting on the machine-thing-customer connection. Things requesting support won’t be a surprise. Enterprises figuring out how to manage these support requests well by 2018 will be surprising.
- By 2020, autonomous software agents outside human control will conduct 5 percent of all economic transactions. Reality check: This prediction seems like a slam dunk. Wall Street is mostly moved by algorithms today. Business will follow. Watch for glitches and flash crashes ahead. (read full article …)
10/1/2015 - Smart machines are about to run the world
October 1, 2015 – TechRepublic
By Hope Reese
“The robots are here,” Dr. Roman V. Yampolskiy told a packed room at IdeaFestival 2015 in Louisville, KY. “You may not see them every day, but we know how to make them. We fund research to design them. We have robots who can assist us and robot soldiers who can kill us.”
Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville and author of Artificial Superintelligence: A Futuristic Approach, studies the implications of AI, the interface between machines and people, and the influence they have on our workplace. AI has formally been around since the 1950s, and a lot of it—spell-check, for example—is no longer called artificial intelligence. “In your head,” Yampolskiy said, “those technologies aren’t AI. But they really are.”
AI, he said, is everywhere. “It’s in your phones, your cars. It’s Google. It’s every bit of technology we’re using.”
But AI has seen a recent explosion—a new Barbie doll, for example, will use AI to have conversations with children—and some worry advanced technology will begin to replace humans in the workplace. In Chengdu, China, Foxconn, a company that makes electronics for Apple and others, has just built a factory entirely run by robots.
What is next, Yampolskiy asked rhetorically. His answer: Superintelligence, intelligence beyond human. There are projects funded at unprecedented levels, conferences devoted to this, and private companies employing the brightest people in the world to solve these problems, Yampolskiy said. “Given the funding and intelligence, it would be surprising if they don’t succeed.”
Here is Yampolskiy’s list of machine attributes to be aware of:
- Superfast—These machines are not only super smart, but they’re superfast. They can predict “ultrafast extreme events,” such as stock market crashes, at a pace no human can keep up with.
- Supercomplex—The intelligence that runs an airplane, for example, is made up of so many interconnected elements that the people operating it cannot fully comprehend them.
- Supercontrolling—Once we cede power to the machines, Yampolskiy said, “we’ve lost it. We can’t take it back.”
What kind of devices will we see that have these abilities?
- Supersoldiers—The military, Yampolskiy said, will be the first to use the advanced technology, in the form of drones, robot soldiers, and more.
- Superviruses—We are only at the beginning of understanding how much damage can be done through computer viruses created with artificial intelligence.
- Superworkers—We have been losing physical labor jobs for years due to automation. Now we’re losing intellectual jobs. “Employers love robots,” Yampolskiy said. “You don’t have to deal with sick days, vacation, sexual harassment, 401k. There’s a good chance a lot of us will be out of jobs.”
There are potential positive impacts of AI as well. Yampolskiy pointed to the possibilities of a cure for AIDS, ending hunger, stimulating new kinds of economic growth. But “we don’t need to spend much time talking about it,” Yampolskiy said. “If it’s a good thing, you don’t need to get ready for it.” He spent more time on the potential downsides.
His list of negative impacts includes losing jobs, losing human and civic rights, potentially deadly military applications. And the biggest worry? The unknown. “AI can have completely different mental capacities, desires, and common sense. Things humans immediately understand and agree on will be very different from machines,” he said. Machines, Yampolskiy said, can behave like children in some situations. “When you take humans out of decision-making, you have a system making very important decisions with no common sense.” This, he said, could be dangerous. (read full article …)
10/1/2015 - 'Artificial intelligence is evolving'-Antoine Blondeau
October 1, 2015 – Wired
By Catherine Lawson
Antoine Blondeau worked on the project that became Apple’s Siri. Today he runs Sentient Technologies, the world’s best-funded AI company. He was previously COO of Zi Corporation, whose predictive text software has been embedded in hundreds of millions of devices.
Antoine will be speaking on the Main Stage at our flagship event, WIRED2015 on October 15-16. He will take part in the session, “When technology gets ambitious”, alongside Gabor Forgacs, Ryan Weed and Daniel McDuff.
Bringing the WIRED world to life, WIRED2015 showcases the innovators changing the world and promoting disruptive thinking and radical ideas. There will be around 45 speakers over the two-day event, presenting stories about their work in science, design, business and many other fields.
Can you give us some hints about what you’re planning to talk about at WIRED2015?
Sure, I’d be happy to. My talk centres around the role evolution has in creating a new frontier in artificial intelligence — what I like to call evolutionary intelligence. Up until now machine-learning experts have been very focused on trying to recreate human intelligence. But what if we think bigger than that? What if we could scale AI to an unfathomable size, a size so big that it would leapfrog anything else out there and be able to solve some of the world’s most complex problems?
Next, imagine if this massive AI system could evolve and adapt to its environment like living species do, to get better at the tasks it’s presented with.
The possibilities of such a system would be extremely exciting. “Evolutionary Intelligence agents” would evolve by themselves — trained to survive and thrive by writing their own code — spawning trillions of computer programs to solve incredibly complex problems such as those in healthcare, energy, finance and e-commerce.
This is possible, and is happening today. I’ll reveal all in my talk. (read full article …)
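Blondeau’s description of agents that evolve by “writing their own code” is, in classical terms, genetic programming: candidate programs are mutated and selected for how well they solve a task. The toy sketch below illustrates that general technique, not Sentient’s system; the task, operators, and parameters are all invented.

```python
import random
import operator

# Candidate "programs" are tiny arithmetic expression trees over one input x.
OPS = [(operator.add, '+'), (operator.mul, '*'), (operator.sub, '-')]

def random_tree(depth=2):
    # A program is the input 'x', a small constant, or an operator
    # applied to two subprograms.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 0, 1, 2, 3])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    (fn, _symbol), left, right = tree
    return fn(run(left, x), run(right, x))

def fitness(tree):
    # Task: approximate x*x + 1 on inputs 0..5 (smaller error is better).
    return -sum(abs(run(tree, x) - (x * x + 1)) for x in range(6))

def mutate(tree):
    # Replace a randomly chosen subtree with a fresh random program.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(pop_size=100, generations=80, seed=1):
    random.seed(seed)
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        keep = population[:pop_size // 5]               # selection (elitist)
        population = keep + [mutate(random.choice(keep))
                             for _ in range(pop_size - len(keep))]
    return max(population, key=fitness)

best = evolve()
```

Scaling a loop like this to “trillions of programs” across healthcare or finance workloads is the claim Blondeau is making; the mechanism itself is decades old.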