Tuesday 25 February 2014

The Internet of Things (IoT)



It’s estimated that by 2020, between 50 and 100 billion devices will be connected to the “Internet of Things,” the phrase used to describe all of the non-computer devices actively connected to the Internet—and one another. For many companies, learning how to design for this Internet of Things will be one of the major challenges of the next 5-10 years.
Emerging devices are making the Internet more useful and exciting, but at the same time they are also making it more complicated. From live-feed traffic cams to fitness trackers to smart watches, the family of Internet-based devices is growing more diverse every day, incorporating gadgets that go far beyond the computers and servers the Internet was initially built for.


Tuesday 11 February 2014

Sochi 2014: Coping with the Winter Olympics data blizzard


By Matthew Wall


Bobsleighs at the Sochi 2014 Winter Games will beam speed and acceleration data in real time
As the bobsleigh hurtles down the sinuous Sanki Sliding Center reaching speeds of more than 80mph (130km/h), it will beam real-time data to TV viewers around the world.
Omega, the official timekeeper for the 2014 Sochi Winter Olympics in Russia, has fitted the bobsleighs with a unit capable of transmitting speed, acceleration, G-force and vertical track-positioning data during their runs.
While this type of technology will be familiar to Formula One motorsport enthusiasts, it is the first time it has been applied to bobsleigh and is indicative of how this Winter Olympics is the most technologically complex, data-intensive Games ever.
Peter Hurzeler, member of Omega's timing board, told the BBC: "We began developing this technology three years ago and one of the more difficult tasks was to make the equipment compact - now the system weighs just 300g."
The unit was tested more than a thousand times in competitions before being cleared for use at the Games, he said.
Technology underpins almost every aspect of the Games: cross-country skiers are tracked by GPS technology, while speed skaters' times are measured to the nearest thousandth of a second using light beams on the surface of the ice at the finish line.
Cross-country skiers are tracked by GPS so that their relative positions can be ascertained in real time
Omega says it will measure more than 650,000 distances, times and scores during the Games, using 230 tonnes of timekeeping, scoring and data-handling equipment.
Data explosion

The rise in the use of such data-transmitting sensors and mobile devices has led to a surge in data collection and usage, with a big knock-on effect for networking and security, IT providers say.
At the Vancouver Winter Olympics in 2010, the ratio of wired to wireless devices was four-to-one, according to Dean Frohwerk, head of networking architecture for Avaya, an official IT Olympic Partner providing services to the 40,000 officials, athletes, journalists and support staff at the Games.
"At Sochi this has reversed," he says. "We made provision for up to 120,000 bandwidth-hungry devices on site per day, equivalent to three gadgets per person."
Now that people can stream video on mobile and tablet devices, networks are having to cope with a tenfold increase in data volumes compared to four years ago, estimates Mr Frohwerk.
Commentators can receive results "even before they hear the roar of the crowd", says Atos
This entails building a robust backbone infrastructure - routers, switches and the like - which can power seven virtual networks channelling data securely to the right audiences. It must also be scalable, so that it can cope with sudden unexpected spikes in data traffic, he says.
'Automated'
The firm with the unenviable task of integrating and co-ordinating all this IT and broadcasting technology across 11 venues at the Black Sea resort is Atos, the European company that also provides services to the BBC.
It began planning for Sochi nearly five years ago.
Competition results recording is "almost fully automated", says Patrick Adiba, Atos' head of Olympic Games and major events.
This is useful when 17 competitions can be running at the same time.
High-speed networks enable TV commentators and news agencies to receive results and contextual background information on the competitors in a split second, "even before they hear the roar of the crowd", he says.
The Atos technology centre at Sochi co-ordinates IT for the entire Winter Olympics
All this extra data has to be accessible across all operating platforms and securely directed to the right places, via fibre optic cable, wireless networks, and satellite.
Atos is employing 400 computer servers just to store the data and serve applications.
Alan Murphy, European marketing director for networking specialist Brocade, told the BBC: "This is a massive networking challenge - the scale of the whole event makes it hugely complex.
"But at least knowing how many people are going to be there and for how long makes it easier for IT providers to model the likely data needs."
Security and privacy
Given the threats of a terrorist attack and hacking, data security and reliability are obviously "fundamental", says Mr Adiba.
"All the systems are duplicated up to four times, in case something fails. Even the technology operating centre is duplicated and can be up and running in two hours if the first one falls over."
Planning for the Games involved about 100,000 hours of testing, he says, running through 700 problem scenarios.
During the 2012 London Olympics there were 250 million "security events" detected over the network during the 17 days of the Games, but only 400 were potentially serious, he says.
A security event can be something as innocuous as a journalist mis-typing a password.
The growth in wireless devices has brought a surge in the amount of data flying through the air
"We don't care too much about the cause of the security event, we just care about protecting the Games. So if someone does something suspicious or unauthorised, we immediately stop the connection," he says.
"If you can secure the Games you can secure pretty much anything else on earth."
But privacy is another issue.
Russian telecoms provider MegaFon is responsible for providing the local network for spectators, and the US State Department has warned visitors that: "Russian Federal law permits the monitoring, retention and analysis of all data that traverses Russian communication networks, including internet browsing, email messages, telephone calls, and fax transmissions."
The 2014 Sochi Winter Olympics may be the most data-intensive and networked Games ever, but they are unlikely to be the most private.

Friday 7 February 2014

Code-breaking feat re-enacted at Bletchley (BBC)






The National Museum of Computing at Bletchley Park, Milton Keynes, has staged a re-enactment, using a rebuilt Colossus computer, of an attack on a German Lorenz cipher machine.
The re-enactment comes 70 years after Colossus, Britain's first electronic computer, went into operation to help decrypt the messages of German High Command.
Andy Clark, a trustee of the museum, talked BBC News through the many complex processes involved in breaking the code, which included using the Tunny cipher machine, a British copy of the Lorenz machine built without its creators ever having seen an actual Lorenz machine.

Thursday 6 February 2014

The Questions That Computers Can Never Answer


Image Credit: Armin Cifuentes
Computers can drive cars, land a rover on Mars, and beat humans at Jeopardy. But do you ever wonder if there’s anything that a computer can never do? Computers are, of course, limited by their hardware. My smartphone can’t double as an electric razor (yet). But that’s a physical limitation, one that we could overcome if we really wanted to. So let me be a little more precise in what I mean. What I’m asking is, are there any questions that a computer can never answer?
Now of course, there are plenty of questions that are really hard for computers to answer. Here’s an example. In school, we learn how to factor numbers. So, for example, 30 = 2 × 3 × 5, or 42 = 2 × 3 × 7. School kids learn to factor numbers by following a straightforward, algorithmic procedure. Yet, up until 2007, there was a $100,000 bounty on factoring this number:
13506641086599522334960321627880596993888147560566702752448514385152651060
48595338339402871505719094417982072821644715513736804197039641917430464965
89274256239341020864383202110372958725762358509643110564073501508187510676
59462920556368552947521350085287941637732853390610975054433499981115005697
7236890927563
And as of 2014, no one has publicly claimed the solution to this puzzle. It’s not that we don’t know how to solve it, it’s just that it would take way too long. Our computers are too slow. (In fact, the encryption that makes the internet possible relies on these huge numbers being prohibitively difficult to factor.)
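The gap between "easy in principle" and "hopeless in practice" is easy to see in code. Here is a minimal Python sketch of the school method, trial division. It factors classroom-sized numbers instantly, but it needs on the order of √n steps, and for a 309-digit number like the one above that is astronomically many divisions:

```python
def factor(n):
    """Factor n into primes by trial division.

    Fine for small numbers; for a 309-digit number the loop would need
    roughly 10**154 iterations, far beyond any computer's reach.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(factor(30))  # [2, 3, 5]
print(factor(42))  # [2, 3, 7]
```

(Real factoring records use far cleverer algorithms, such as the general number field sieve, but even those remain far too slow for numbers of this size.)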
So let’s rephrase our question so that it isn’t limited by current technology. Are there any questions that, no matter how powerful your computer, and no matter how long you waited, your computer would never be able to answer?
Surprisingly, the answer is yes. The Halting Problem asks whether a computer program will stop after some time, or whether it will keep running forever. This is a very practical concern, because an infinite loop is a common type of bug that can subtly creep into one’s code. In 1936, the brilliant mathematician and codebreaker Alan Turing proved that it’s impossible for a computer to inspect any code that you give it, and correctly tell you whether the code will halt or run forever. In other words, Turing showed that a computer can never solve the Halting Problem.
You’ve probably experienced this situation: you’re copying some files, and the progress bar gets stuck (typically at 99%). At what point do you give up on waiting for it to move? How would you know whether it’s going to stay stuck forever, or whether, in a few hundred years, it’ll eventually copy your file? To use an analogy by Scott Aaronson, “If you bet a friend that your watch will never stop ticking, when could you declare victory?”
As you get sick of waiting for the copy bar to move, you begin to wonder, wouldn’t it be great if someone wrote a debugging program that could weed out all annoying bugs like this? Whoever wrote that program could sell it to Microsoft for a ton of money. But before you get to work on writing it yourself, you should heed Turing’s advice – a computer can never reliably inspect someone’s code and tell you whether it will halt or run forever.
Think about how bold a claim this is. Turing isn’t talking about what we can do today, instead he’s raised a fundamental limitation on what computers can possibly do. Be it now, or in the year 2450, there isn’t, and never will be, any computer program that can solve the Halting Problem.
In his proof, Turing first had to mathematically define what we mean by a computer and a program. With this groundwork covered, he could deliver the final blow using the time-honored tactic of proof by contradiction. As a warm-up to understanding Turing’s proof, let’s think about a toy problem called the Liar paradox. Imagine someone tells you, “this sentence is false.” If that sentence is true, then going by what they said, it must also be false. Similarly, if the sentence is false, then it accurately describes itself, so it must also be true. But it can’t be both true and false – so we have a contradiction. This idea of using self-reference to create a contradiction is at the heart of Turing’s proof.
Here’s how computer scientist Scott Aaronson introduces it:
[Turing's] proof is a beautiful example of self-reference. It formalizes an old argument about why you can never have perfect introspection: because if you could, then you could determine what you were going to do ten seconds from now, and then do something else. Turing imagined that there was a special machine that could solve the Halting Problem. Then he showed how we could have this machine analyze itself, in such a way that it has to halt if it runs forever, and run forever if it halts. Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction.
And so, let’s go through Turing’s proof that the Halting Problem can never be solved by a computer, or why you could never program a ‘loop snooper’. The proof I’m about to present is a rather unconventional one. It’s a poem written by Geoffrey Pullum in honor of Alan Turing, in the style of Dr. Seuss. I’ve reproduced it here, in its entirety, with his permission.
SCOOPING THE LOOP SNOOPER
A proof that the Halting Problem is undecidable
Geoffrey K. Pullum
No general procedure for bug checks will do.
Now, I won’t just assert that, I’ll prove it to you.
I will prove that although you might work till you drop,
you cannot tell if computation will stop.
For imagine we have a procedure called P
that for specified input permits you to see
whether specified source code, with all of its faults,
defines a routine that eventually halts.
You feed in your program, with suitable data,
and P gets to work, and a little while later
(in finite compute time) correctly infers
whether infinite looping behavior occurs.
If there will be no looping, then P prints out ‘Good.’
That means work on this input will halt, as it should.
But if it detects an unstoppable loop,
then P reports ‘Bad!’ — which means you’re in the soup.
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here’s the trick that I’ll use — and it’s simple to do.
I’ll define a procedure, which I will call Q,
that will use P’s predictions of halting success
to stir up a terrible logical mess.
For a specified program, say A, one supplies,
the first step of this program called Q I devise
is to find out from P what’s the right thing to say
of the looping behavior of A run on A.
If P’s answer is ‘Bad!’, Q will suddenly stop.
But otherwise, Q will go back to the top,
and start off again, looping endlessly back,
till the universe dies and turns frozen and black.
And this program called Q wouldn’t stay on the shelf;
I would ask it to forecast its run on itself.
When it reads its own source code, just what will it do?
What’s the looping behavior of Q run on Q?
If P warns of infinite loops, Q will quit;
yet P is supposed to speak truly of it!
And if Q’s going to quit, then P should say ‘Good.’
Which makes Q start to loop! (P denied that it would.)
No matter how P might perform, Q will scoop it:
Q uses P’s output to make P look stupid.
Whatever P says, it cannot predict Q:
P is right when it’s wrong, and is false when it’s true!
I’ve created a paradox, neat as can be —
and simply by using your putative P.
When you posited P you stepped into a snare;
Your assumption has led you right into my lair.
So where can this argument possibly go?
I don’t have to tell you; I’m sure you must know.
A reductio: There cannot possibly be
a procedure that acts like the mythical P.
You can never find general mechanical means
for predicting the acts of computing machines;
it’s something that cannot be done. So we users
must find our own bugs. Our computers are losers!
What you just read, in delightfully whimsical poetic form, was the punchline of Turing’s proof. Here’s a visual representation of the same idea. The diamond represents the loop-snooping program P, which is asked to evaluate whether the program Q (the flow chart) will halt.
Like the serpent that tries to eat its tail, Turing conjured up a self-referential paradox. The program will halt when the loop snooper said it wouldn’t, and it runs forever when the loop snooper said it would halt! To resolve this contradiction, we’re forced to conclude that this loop snooping program can’t exist.
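The poem's construction translates almost line for line into Python-flavored pseudocode. Note that this is purely illustrative and cannot actually run: `P` is the hypothetical halt-checker that the proof shows cannot exist, and `source_of_Q` stands for Q's own source code.

```python
# Hypothetical oracle -- the very function Turing proved cannot exist.
def P(source, data):
    """Return True if running `source` on `data` halts, False if it loops forever."""
    ...  # no such implementation can exist

# Q asks P about a program run on its own source, then does the OPPOSITE.
def Q(source):
    if P(source, source):   # P predicts "halts" ...
        while True:         # ... so Q loops forever,
            pass
    else:                   # P predicts "loops forever" ...
        return              # ... so Q halts immediately.

# The contradiction: run Q on its own source code.
# If P(Q, Q) says "halts", Q loops; if it says "loops", Q halts.
# Either way P's prediction is wrong -- so no such P can exist.
Q(source_of_Q)
```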
And this idea has far-reaching consequences. There are countless questions for which computers can’t reliably give you the right answer. Many of these impossible questions are really just the loop snooper in disguise. Among the things that a computer can never do perfectly is identifying whether a program is a virus, or whether it contains vulnerable code that can be exploited. So much for our hopes of having the perfect anti-virus software or unbreakable software. It’s also impossible for a computer to always tell you whether two different programs do the same thing, an unfortunate fact for the poor souls who have to grade computer science homework.
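To see how the loop snooper hides inside the virus-detection problem, note that a perfect malice detector could be abused to solve the Halting Problem. The following sketch is hypothetical: `detects_malice`, `run`, and `do_something_malicious` are made-up names for illustration, and of course `detects_malice` cannot exist.

```python
# If a perfect malice detector existed, we could solve the Halting Problem:
def halts(prog, data):
    def trap():
        run(prog, data)           # first run the program under test...
        do_something_malicious()  # ...then misbehave ONLY if it finished
    # trap() is malicious exactly when prog halts on data, so the
    # detector's verdict on trap would answer the Halting Problem.
    return detects_malice(trap)
```

Since `halts` is impossible, a perfect `detects_malice` must be impossible too; the same trick works for detecting exploitable code or deciding program equivalence.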
By slaying the mythical loop snooper, Turing taught us that there are fundamental limits to what computers can do. We all have our limits, and in a way it’s comforting to know that the artificial brains that we create will always have theirs too.

Wednesday 5 February 2014

Dan Berkenstock: The world is one big dataset



We're all familiar with satellite imagery, but what we might not know is that much of it is out of date. That's because satellites are big and expensive, so there aren't that many of them up in space. As he explains in this fascinating talk, Dan Berkenstock and his team came up with a different solution, designing a cheap, lightweight satellite with a radically new approach to photographing what's going on on Earth.

Veterans gather for Colossus 70th anniversary (BBC news)


The Mark 2 Colossus used 2,400 valves to help it crack messages sent by German generals

The 70th anniversary of the pioneering Colossus computer is being celebrated at Bletchley Park.
The machine was first used to crack messages sent by Hitler and his generals on 5 February 1944.
The celebration will bring together some of the machine's creators and operators at The National Museum of Computing (TNMOC).
The machine's code-cracking prowess will be demonstrated on the day using the museum's rebuilt Colossus.
Now widely recognised as the first electronic computer, Colossus was kept a secret for 30 years because of the sensitive work it did during World War Two to crack German codes.
The work of the Colossus machines to decipher messages scrambled using the Lorenz enciphering machine that passed between the Wehrmacht's commanders is widely thought to have shortened the war and saved countless lives.
Colossus was created by Post Office engineer Tommy Flowers, and his first prototype was built out of parts from telephone exchanges including 1,600 valves. Later versions used even more valves and by the end of the war 10 of the machines were in use in the UK.
The celebrations will bring together some of the women who kept the different machines running as well as some of the engineers who built and maintained them. During wartime, about 550 people worked in the Bletchley Park unit that ran Colossus.
Also attending will be some of the children of the machine's creators and operators.
Most of the machines were broken up and the plans destroyed after the war in an attempt to keep the work secret and to conceal the fact Britain was still using two of the machines to read Soviet messages.
"The achievements of those who worked at Bletchley Park are humbling," said Tim Reynolds, chair of TNMOC. "This day is in honour of all the men and women who worked on breaking the Lorenz cipher."

Monday 3 February 2014

First Website Restored for 20th Anniversary of Open Web



On April 30, 1993, CERN made the World Wide Web technology available on a royalty-free basis. To celebrate the 20th anniversary of this Internet milestone, the organization has restored the very first website.
The move will "preserve the digital assets that are associated with the birth of the Web," CERN said on its website. Ultimately, the organization wants that Web address - info.cern.ch - to be "a destination that reflects the story of the beginnings of the web for the benefit of future generations."
CERN, or the European Organization for Nuclear Research, is an international organization that operates the world's largest particle physics laboratory.
The first URL was "http://info.cern.ch/hypertext/WWW/TheProject.html." For years, however, it has redirected to the CERN website's Web host root. But using the archive hosted on the W3C site, CERN put the files back online and recreated a 1992 version of the very first website.
"This may be the earliest copy that we can find, but we're going to keep looking for earlier ones," CERN said.
Not surprisingly, the site (below) is rather sparse, with links to information about the WWW project and how people can get involved.
The World Wide Web dates back to 1989, thanks to the work of Tim Berners-Lee, who created the first website at info.cern.ch. At the time, "the Internet was already a mature set of protocols," CERN said. But the World Wide Web created "a networked hypertext system that allowed CERN physicists to read and publish documents, and to create links between and within them."
By 1993, CERN made the World Wide Web's source code available on a royalty-free basis, leading to the growth of the Internet as we know it. As CERN pointed out, the WWW was easier to use than other systems that were available at the time, like WAIS and Gopher. At the end of 1993, there were 500 Web servers and the WWW made up 1 percent of internet traffic; today there are approximately 630 million websites.
In addition to restoring the first URL, CERN wants to comb through the CERN Web servers to "see what assets from them we can preserve and share."
"We will also sift through documentation and try to restore machine names and IP addresses to their original state," the organization said.
CERN also posted Berners-Lee's original proposal for the WWW, which he wrote in March 1989 and first distributed in May 1990. It was intended to persuade CERN that the development of the WWW was a worthwhile endeavor.