12/13/2015 - Elon Musk & others invest $1 billion in AI
December 13, 2015 – CNN Money
Elon Musk is sinking large amounts of money into something that scares him: artificial intelligence.
The Tesla and SpaceX founder is a noted artificial intelligence critic. Now he’s one of several big-name technology investors behind a new non-profit artificial intelligence research center.
Launched on Friday [Dec. 11, 2015], OpenAI’s goal is to develop AI safely and share its research widely. Its AI is specifically meant to be used in ways that will benefit humanity. The group could still decide to keep some research private if it sees safety concerns.
OpenAI’s backers — a group that includes Musk, Peter Thiel, Y Combinator’s Sam Altman and Jessica Livingston, and Reid Hoffman — are committing $1 billion to the project. OpenAI will start small, with nine full-time researchers working out of an office in San Francisco, though Altman says it could double in size in a year.
Well-known AI researcher (and former Google employee) Ilya Sutskever will be the group’s research director.
Many major technology companies are heavily invested in developing their own artificial intelligence engines, including Facebook, Google, Microsoft and Apple. AI is already used in technology today, including Musk’s Teslas and Facebook’s photo tools.
While these companies share some findings openly, the fact that they are developing tools for profit is worrisome to many in science and technology.
“I believe it’s better to empower humankind with distributed artificial intelligence than a central artificial intelligence controlled by a single company,” said Altman in an interview.
Musk has called artificial intelligence “our biggest existential threat.” Earlier this year, he joined Stephen Hawking, Bill Gates and other respected science and technology thinkers to issue a warning that artificial intelligence could be more dangerous than nuclear weapons.
“Humanity’s position on this planet depends on its intelligence, so if our intelligence is exceeded, it’s unlikely we will remain in charge of the planet,” Musk previously said in an interview with CNN. (Read full article …)
12/8/2015 - Inside The Machine: HP Labs mission to remake computing
December 8, 2015 – TechRepublic
By Nick Heath
While the power of modern computers dwarfs that of their ancestors, the design of today’s digital devices is still bound by that of the earliest, room-sized machines.
Hewlett Packard Enterprise (HPE) wants to remove these decades-old constraints on how machines store data and in doing so create a computer able to handle tasks vastly more complex than is possible today.
HPE is building what it calls The Machine, a system it hopes will be able to store and retrieve huge amounts of data far more rapidly than is currently feasible.
Director of Hewlett Packard Labs Martin Fink describes what such a machine would be capable of, giving the example of how it could resolve surprisingly complex everyday problems, such as there being no available airport gates when your plane lands early.
“You think to yourself ‘How hard could this be, just turn the plane and park’, because you look out the window and there’s plenty of open gates,” he told the recent HPE Discover event in London.
“The reality is this is an extremely hard problem to orchestrate. But now what if you could take every pilot, every flight attendant, every single plane, every baggage handler, every handler for every gate, for every airport in the world and put it in memory all at the same time.”
Getting passengers off a plane ahead of time is just one outcome that could be made possible were a machine able to store all these variables in a way that captures the relationships between them, says Fink.
To achieve this goal Hewlett Packard Labs – the central research organization for HPE – wants The Machine to introduce a new architecture for computing, one that changes how machines store data. Today computers typically rely on small pools of memory from which they fetch and temporarily store data. This memory can be accessed rapidly but is typically limited in size. To store large amounts of data and retain it when the system is powered down, machines have to rely on hard-disk or solid-state storage, which is far slower to access than memory.
In large computer systems with many processors, this architecture also has the effect of creating lots of isolated islands of memory, each tied to different processors and which have to swap data between them.
This design fundamentally limits the efficiency of modern machines when handling very large datasets, said Fink.
“You in effect have to chop your data into chunks in order to match the limitations of the processors, which act as a gatekeeper,” he said.
“The processors dictate how much memory you can have and as soon as you want to scale to more memory you need to add another processor and then often another server and so on and so forth.”
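The chunking pattern Fink describes can be sketched in a few lines. This is a purely illustrative toy (not HPE’s design, and the `memory_limit` parameter is a stand-in for a processor’s local memory pool): when a dataset exceeds one pool, software must split it into chunks and process them one at a time, losing easy access to cross-chunk relationships.

```python
# Illustrative sketch of the "chop your data into chunks" limitation.
# memory_limit plays the role of a single processor's memory pool.

def process_in_chunks(dataset, memory_limit):
    """Sum a dataset that won't fit in one memory pool, chunk by chunk."""
    total = 0
    for start in range(0, len(dataset), memory_limit):
        # Only this slice is "in memory" at once; any computation that
        # needs to relate items across chunks must shuttle data between pools.
        chunk = dataset[start:start + memory_limit]
        total += sum(chunk)
    return total

data = list(range(1_000_000))
print(process_in_chunks(data, 64_000))
```

A large shared memory pool, as proposed for The Machine, would make the loop unnecessary: every processor could address the whole dataset directly.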
The Machine would change this approach, allowing processors to share access to a large – initially hundreds of terabytes, increasing to petabytes – pool of “universal memory”. This memory would differ from that typically used today in that it would be non-volatile, able to retain data in the event of losing power, while still able to read and write data far faster than hard-disk or solid-state storage.
“The Machine is by far and away the single most important research project we have at Hewlett Packard Labs,” said Fink.
“The goal here is with this architecture we can ingest, store, manipulate truly massive datasets while simultaneously achieving multiple orders of magnitude less energy per bit.”
One of the first uses that Hewlett Packard Labs envisions for The Machine is security, with Fink giving the example of how it could enable companies to create a DNS analyzer for net traffic capable of handling a lot more data than products today, which he said typically handle 50,000 events per second and hold five minutes worth of events.
“This is what we want to deliver as our first machine product and we call it The Security Machine. Being able to deal with ten million events per second and hold 14 days worth of events. This is what a 640 terabyte machine allows you to do.”
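The quoted figures can be sanity-checked with back-of-the-envelope arithmetic. Assuming decimal terabytes and, hypothetically, the full 640 TB devoted to event storage, 14 days at ten million events per second works out to roughly 53 bytes of capacity per event:

```python
# Back-of-the-envelope check of the Security Machine figures quoted above.
# Assumptions: decimal terabytes; all 640 TB used for event storage.
events_per_second = 10_000_000
retention_seconds = 14 * 24 * 3600                # 14 days of events
total_events = events_per_second * retention_seconds   # ~1.21e13 events
capacity_bytes = 640 * 10**12                     # 640 TB
bytes_per_event = capacity_bytes / total_events   # ~53 bytes per event
print(total_events, round(bytes_per_event, 1))
```

That tight per-event budget suggests why such a system depends on dense, cheap, fast memory rather than conventional DRAM.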
If the technology works, HPE has talked about ambitions to eventually shrink the architecture down to allow it to be used for data-intensive tasks, such as voice recognition inside smartphones or document analysis inside printers. (Read full article …)