Wednesday, April 30, 2014

50 years after Basic, most users still can't or won't program anything (ZDNet)

By Jack Schofield for Jack's Blog
People who got their first taste of IT during the microcomputer boom in the 1970s and 1980s almost certainly started by writing programs in Basic, or at least by debugging programs typed in from popular magazines. If you needed to do a complex calculation, and weren't lucky enough to own a copy of Software Arts' pioneering VisiCalc spreadsheet, then you could do it in a few lines of code. You couldn't download an app -- most people didn't even have modems -- and the web hadn't been invented, so you couldn't use Google or Wolfram Alpha.
When the IBM PC arrived in 1981, the culture of writing simple Basic programs extended to writing simple DOS batch files, to macros in Lotus 1-2-3, and eventually to VBA (Visual Basic for Applications) in Microsoft Excel. I expect some of these macros are still in use, though you probably wish they weren't.
This type of ad hoc programming was the design goal of Dartmouth Basic (Beginner's All-purpose Symbolic Instruction Code), and its greatest success. Professors John Kemeny and Thomas Kurtz created Basic (and a time-sharing system) so that students could use the university computer without having to be trained programmers. And it worked. In what was basically a rural, liberal arts-based Ivy League university, most students used the computer, and most of their programs worked.
The first Basic programs were officially run 50 years ago at 4am on 1 May 1964, when Kemeny and a student programmer typed RUN and started their Basic programs at the same time, to demonstrate both Basic and time-sharing. Dartmouth is celebrating Basic's 50th birthday today (Wednesday, April 30) with several events, including a panel discussion on the future of computing that will be webcast live on Dartmouth's YouTube channel.
Attendees will also be able to try an old Model 33 Teletype terminal connected to a Basic emulator designed by Kurtz. (Kemeny died in 1992.) In those days, terminals were more like giant typewriters, and printed your programs on paper rolls. This was a huge advance on 80-column punch cards or even punched paper tape. Later, of course, the Teletype's paper was replaced by a 40- or 80-column character-based screen.
Basic transformed home computing because you could type in what you wanted and get an instant result. If your program didn't work, you could simply retype the offending line. If you used the same line number, the new version replaced the old one. This flexibility made Basic a poor language for real programming, as Edsger Dijkstra (*) and others complained, but that really wasn't what Basic was for.
Basic enjoyed its greatest popularity between 1975 and 1990, thanks to the microcomputer revolution that started with the MITS Altair and Microsoft Basic. This was followed by the Apple II, Commodore PET, Tandy TRS-80 and then the IBM PC. Basic really became ubiquitous with the arrival of cheap home computers such as the Sinclair Spectrum and Commodore 64, and in the UK, the Acorn BBC Micro. It started to fade away after a usable version of Microsoft Windows appeared in 1990, though the Apple Macintosh had already dropped Basic in 1984.
The new world of graphical user interfaces encouraged point-and-click computing. GUIs didn't start users at a command line where they could type things and see the results.
Programmers who started with Basic generally moved on to languages such as Pascal or C, or Perl or PHP, and then perhaps Java. (Or to Microsoft Visual Basic, which isn't really Basic.) That's fine. It's the ordinary users who have lost out. Without having had the experience of coding in Basic, they don't seem to be willing to write small bits of code to solve simple problems.
In my experience, this even applies when no coding is required. You can, for example, perform a lot of simple word processing tasks by recording Word macros and saving them for re-use. I've tried showing people how easy this is, but it doesn't seem to "take". Apparently they'd rather spend hours editing texts manually to remove unwanted spaces and line endings etc. The same goes for actions that can easily be automated using the Automator program built into Mac OS X, or the free AutoHotkey for Windows. On the internet, how many ordinary users exploit IFTTT?
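Here, purely for illustration, is a minimal, hypothetical sketch of that kind of "small bit of code" -- written in Python rather than as a Word macro or AutoHotkey script, with the function name and sample text invented for the example. It strips unwanted line endings and runs of spaces from pasted text while keeping real paragraph breaks:

    import re

    def tidy(text):
        # Collapse hard line endings and repeated spaces, but keep paragraph breaks.
        paragraphs = re.split(r"\n\s*\n", text)        # blank lines mark real paragraphs
        cleaned = []
        for p in paragraphs:
            p = p.replace("\n", " ")                   # join hard-wrapped lines
            p = re.sub(r"[ \t]+", " ", p).strip()      # collapse runs of spaces and tabs
            cleaned.append(p)
        return "\n\n".join(cleaned)

    sample = "This  paragraph was\npasted with hard\nline endings.\n\nAnd   extra   spaces."
    print(tidy(sample))

A dozen lines, and it does in a blink what people apparently prefer to spend hours doing by hand -- which is more or less the job a few lines of Basic did 30 years ago.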
Maybe your experience has been different -- I hope so -- but you can let me know below.
Coding has recently become fashionable again, with initiatives such as Code Academy, Code Year, Year of Code and so on. As Jeff Atwood has pointed out at Coding Horror, this is not necessarily a good thing (Please Don't Learn to Code). I think a basic understanding of how a computer works, and the ability to knock up a quick website, are useful, but not everybody is going to write great business software or make a fortune from world-beating apps.
I'd settle for a rather more modest goal. Today's users have massive amounts of computer power at their disposal, thanks to sales of billions of desktop and laptop PCs, tablets and smartphones. They're all programmable. Users should be able to do just enough programming to make them work the way they want. Is that too much to ask?
* "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." -- Edsger Dijkstra, 1975

Tuesday, April 29, 2014

The 8 best music streaming services to use at work (TechRepublic)

By Erin Carson | April 25, 2014
In case you left your portable radio in 1998, here are some more modern options for picking the music to pass your workday. 

Image: Spotify
Whistling while you work can get pretty old. Also, your co-workers might hate you.
The better option many workers are using to inject music into the workday is a music streaming service, whether they're trying to drown out noise, stay focused, or just combat boredom during work. Not all services are created equal, though. With companies cracking down on the strain that music streaming can put on bandwidth (or attention spans), you might need to know the full breadth of your options. Here are some of the top services and what they offer in terms of features, pricing, and online/offline access.

1. Spotify

Spotify has two tiers, free and premium. With the free version, you can stream music to your mobile, tablet, or desktop, as long as you don't mind the ads too much. (But beware: if you're listening without headphones, there's no guarantee you won't run into an ad for Trojans or UTI medication.) For $9.99 a month, you get all the same access plus high-quality audio, offline playback, and an ad-free, awkwardness-free experience. Spotify's offline capabilities could be a good thing for workers whose companies block music services, or have a policy against using them.

2. Pandora

Internet radio provider Pandora also has free and paid versions. For free, Pandora can be accessed through mobile, tablet, and desktop in exchange for dealing with ads. Pandora One costs $4.99 a month and offers no ads, better audio quality, and a desktop app that does not require a browser. Pandora's lack of offline access could pose a problem, as it's a favorite target of companies that ban streaming services. If you're the subversive type, you might be able to get away with one of the other services on this list.

3. iTunes Radio

iTunes Radio works with any Apple device and has similar capabilities to other streaming services in terms of building stations based on musical preferences and tweaking them with likes/stars. If you use iTunes Match, a service that stores all your music in iCloud for $24.99 a year, iTunes Radio is ad-free. If you want offline access, you can use iTunes the old-fashioned way.
 Image: Google Play

4. Google Play

Google Play offers two tiers, Standard and All Access. Both can host up to 20,000 songs (which sounds like a lot until you compare it to Amazon's 250,000), accessible from any device, including Android, iPhone, and iPad. The Google Play Music app lets you pick songs and playlists to download and listen to when you're offline. For $9.99 a month, Google Play offers unlimited skips on the customizable radio feature, unlimited access to millions of songs and albums, and recommendations. It works something like a hybrid between a music locker and a streaming service, where you can upload your existing music collection but also stream (or even download for offline use) songs you don't own.

5. Last.fm

Creating a Last.fm profile is free. You can listen through the browser, or a media player, which requires Last.fm's Scrobbler software. Apps for iPhone and iPod are available. A subscription to Last.fm costs $3 a month (via PayPal) and means you don't have to deal with banner ads on the website or mobile app. You also get to see who has been looking at your page, as well as what Last.fm is working on in their labs. (Disclosure: Last.fm and TechRepublic are both part of CBS Interactive.)

6. Beats Music

Beats Music is the newcomer on this front. Unlike many other services, Beats Music has no free tier. It also lacks a desktop version. For $9.99 a month (for one person on up to 3 devices), the service offers access to more than 20 million songs, no ads, and playlist recommendations tailored to the user -- one of Beats' biggest talking points has been its expert curators. An update now includes in-app offline playback options.
 Image: Amazon

7. Amazon Cloud Player

Amazon Cloud Player is also a music locker. You can keep your music in the cloud and access it from any device, any time, including Roku, Sonos, or a Samsung Smart TV. The first 250 songs you upload are free. Also, if you buy music from Amazon (MP3s, or physical albums with AutoRip), it does not count toward your space allotment. To store up to 250,000 high-quality songs, the cost is $24.99 a year.

8. Grooveshark

The free, ad-supported version of Grooveshark is available on the web and mobile browsers. For $9 a month, you can ditch the ads and get access to unlimited streaming, as well as apps for Apple, Android, and desktop. Other features include PowerHouse mode, video mode, and a visualizer. Grooveshark doesn't offer an offline mode.

About 

Erin Carson is a Staff Writer for TechRepublic. She covers the impact of social media in business and the ways technology is transforming the future of work.

Monday, April 28, 2014

The Dumb Way We Board Airplanes Remains Impervious to Good Data (BusinessWeek)


If the conference calls that come with earnings reports attracted exhausted travelers instead of financial analysts and journalists, there's no way Delta (DAL), United (UAL), American (AAL), and Southwest (LUV) could make it through the week without getting an earful about the agonies of boarding an airplane.
While airline policies vary—there is no standard accepted way of loading passengers—any “eye test” indicates that having travelers line up at the gate, only to wait again inside the plane, isn’t efficient. Data back this up: Boeing’s (BA) research showed that boarding a plane was 50 percent slower in 1998 than in 1970. “Boeing believes that these trends will continue,” the study noted, “unless the root causes are understood and new tools and processes are developed to reverse the trend.”
From a data-driven perspective, this is nothing short of maddening. There are many ways to board a plane, with “back-to-front”—the chosen boarding process of most U.S. carriers—the slowest. It is so ineffective that timed research shows that simple random boarding ends up being faster.
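To see why, it helps to make the congestion mechanism concrete. The sketch below is a rough, hypothetical toy model in Python -- not Boeing's study or any airline's simulation -- assuming one seat and one aisle cell per row, passengers entering one per tick, and a fixed pause to stow a bag. Back-to-front boarding sends everyone in the queue to the same block of rows, so one person stowing a bag holds up most of the people behind them, while a random order spreads the stowing along the cabin. The parameters are invented and the absolute numbers mean nothing; the point is only to make the mechanism visible.

    import random

    def board(order, rows, stow_time=3):
        # Count ticks until every passenger in `order` (a permutation of row numbers) is seated.
        aisle = [None] * rows    # aisle[i] holds the target row of whoever stands at cell i
        stow = [0] * rows        # stowing ticks left for the passenger at cell i
        queue = list(order)
        seated, ticks = 0, 0
        while seated < rows:
            ticks += 1
            for i in range(rows - 1, -1, -1):        # the back of the plane moves first
                if aisle[i] is None:
                    continue
                if aisle[i] == i:                    # at own row: stow the bag, then sit
                    stow[i] -= 1
                    if stow[i] <= 0:
                        aisle[i] = None
                        seated += 1
                elif i + 1 < rows and aisle[i + 1] is None:
                    aisle[i + 1], stow[i + 1] = aisle[i], stow_time
                    aisle[i] = None
            if queue and aisle[0] is None:           # the next passenger steps aboard
                aisle[0], stow[0] = queue.pop(0), stow_time
        return ticks

    random.seed(1)
    rows = 30
    zones = [list(range(20, 30)), list(range(10, 20)), list(range(0, 10))]  # back zone boards first
    for z in zones:
        random.shuffle(z)        # arrival within a zone is unordered
    back_to_front = [r for z in zones for r in z]
    random_order = list(range(rows))
    random.shuffle(random_order)
    print("back-to-front:", board(back_to_front, rows), "ticks")
    print("random order: ", board(random_order, rows), "ticks")

Vary the zone sizes or the stow time and the gap changes; that sensitivity is exactly why researchers time real boardings rather than trusting intuition.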
One possibility is that airlines have no incentive to improve the process. As long as it remains terrible, they can sell early boarding privileges. Southwest, for example, charges $40 to be among the first 15 to board. Consider the indignity: The stress of boarding is so bad that people are willing to pay money to wait in the plane, rather than outside it—and they pay money to the very company causing that stress.
Getting on a plane is so complicated that Southwest even has a 12-question FAQ devoted to boarding. It’s simply not possible that a boarding procedure that convoluted is set up with customers’ best interests in mind.
Another minor perk that’s growing throughout the industry is the increased use of zone boarding as a way for airlines to reward passengers with a small status perk. Just owning a Delta Amex Card (AXP) brings early boarding privileges. These customers may not be able to upgrade to a higher class, but they can be consoled by making it to their bad seat earlier than others.
By charging money for checking bags, airlines encourage passengers to bring as much luggage as possible on board. That can only slow down the boarding process while simultaneously making it more stressful: What if there’s no more room for your carry-on? More reason to push to the front of the line or shell out for early boarding privileges. (United Airlines recently said it would crack down on oversized carry-on luggage—a move that could speed up boarding and generate more revenue at the same time.)
To be fair, some airlines are trying: American Airlines spent two years studying its boarding process and landed on a randomized, zone-based system. Last year it introduced a tweak that gives a slightly higher priority to passengers who have no carry-ons for the overhead bin. United uses an “outside-in” boarding process by which people with window seats board ahead of those on the aisle. This is a version of what’s known as the Steffen Method, after astrophysicist Jason Steffen's 2008 research paper (PDF) offering a mathematically sounder approach to efficient boarding. A reality show producer recruited Steffen for a video segment about it; yet, three years after his paper was published, none of the major airlines had asked him for help.

Eric Chemi is head of research for Businessweek and Bloomberg TV.

Friday, April 25, 2014

The New Arms Race: The Digital One (TechRepublic)


Inside the secret digital arms race: Facing the threat of a global cyberwar







The team was badly spooked, that much was clear. The bank was already reeling from two attacks on its systems, strikes that had brought it to a standstill and forced the cancellation of a high-profile IPO. The board had called in the team of security experts to brief them on the developing crisis. After listening to some of the mass of technical detail, the bank's CEO cut to the chase.
"What should I tell the Prime Minister when I get to Cobra?" he demanded, a reference to the emergency committee the government had set up as it scrambled to respond to what was looking increasingly like a coordinated cyberattack.
The security analysts hesitated, shifting in their seats, fearing this was the beginning, not the end, of the offensive.
"We think this could just be a smokescreen," one said, finally. And it was. Before the end of next day, the attack had spread from banks to transport and utilities, culminating in an attack on a nuclear power station.
The mounting horror of the analysts, the outrage and lack of understanding from the execs was all disturbingly authentic, but fortunately, none of it was real. The scene formed part of a wargame, albeit one designed by the UK's GCHQ surveillance agency among others to attract new recruits into the field of cybersecurity.
As I watched the scenario progress (hosted in a World War II bunker under London for added drama), it was hard not to get caught up in the unfolding events as the competition finalists played the security analysts tasked with fighting the attack and real industry executives took the role of the bank's management -- if only because these sorts of scenarios are now increasingly plausible.
And it's not just mad criminal geniuses planning these sorts of digital doomsday attacks either. After years on the defensive, governments are building their own offensive capabilities to deliver attacks just like this against their enemies. It's all part of a secret, hidden arms race, where countries spend billions of dollars to create new armies and stockpiles of digital weapons.
This new type of warfare is incredibly complex and its consequences are little understood. Could this secret digital arms race make real-world confrontations more likely, not less? Have we replaced the cold war with the coders' war?
Even the experts are surprised by how fast the online threats have developed. As Mikko Hypponen, chief research officer at security company F-Secure, said at a conference recently, "If someone would have told me ten years ago that by 2014 it would be commonplace for democratic western governments to develop and deploy malware against other democratic western governments, that would have sounded like science fiction. It would have sounded like a movie plot, but that's where we are today."

The first casualty of cyberwar is the web

It's taken less than a decade for digital warfare to go from theoretical to the worryingly possible. The web has been an unofficial battleground for many modern conflicts. At the most basic level, groups of hackers trying to publicise their cause have been hijacking or defacing websites for years. Some of these groups have acted alone, some have at least the tacit approval of their governments.
A wargame aimed at finding hidden cybersecurity talent took place in Winston Churchill's wartime bunker.
 Image: Steve Ranger
Most of these attacks -- taking over a few Twitter accounts, for example -- are little more than a nuisance, high profile but relatively trivial.
However, one attack has already risen to the level of international incident. In 2007, attacks on Estonia swamped bank, newspaper and government websites. They began after Estonia decided to move a Soviet war memorial, and lasted for three weeks (Russia denied any involvement).
Estonia is a small state with a population of just 1.3 million. However, it has a highly-developed online infrastructure, having invested heavily in e-government services, digital ID cards, and online banking. That made the attack particularly painful, as the head of IT security at the Estonian defence ministry told the BBC at the time, "If these services are made slower, we of course lose economically."
The attacks on Estonia were a turning point, proving that a digital bombardment could be used not just to derail a company or a website, but to attack a country. Since then, many nations have been scrambling to improve their digital defenses -- and their digital weapons.
While the attacks on Estonia used relatively simple tools against a small target, bigger weapons are being built to take on some of the mightiest of targets.
Last year the then-head of the US Cyber Command, General Keith Alexander, warned on the CBS 60 Minutes programme of the threat of foreign attacks, stating: "I believe that a foreign nation could impact and destroy major portions of our financial system."
In the same programme, the NSA warned of something it called the "BIOS plot," an effort by an unnamed nation to exploit a software flaw that could have allowed it to destroy the BIOS in any PC and render the machine unusable.
Of course, the US isn't just on the defensive. It has been building up its own capabilities to strike, if needed.
The only documented successful use of such a weapon -- the famous Stuxnet worm -- was masterminded by the US, and it caused damage and delay to the Iranian nuclear programme.

Building digital armies

The military has been involved with the internet since its start. It emerged from a US Department of Defense-funded project, so it's no surprise that the armed forces have kept a close eye on its potential.
And politicians and military leaders of all nations are naturally attracted to digital warfare as it offers the opportunity to neutralise an enemy without putting troops at risk.
As such, the last decade has seen rapid investment in what governments and the military have dubbed "cyberwar" -- sometimes shortened to just "cyber." Yes, it sounds like a cheaply sensational term borrowed from an airport thriller (and to some, the use of such an outmoded term reflects the limited understanding of the issues among those in charge), but the intent behind the investment is deadly serious.
The UK's defence secretary Philip Hammond has made no secret of the country's interest in the field, telling a newspaper late last year, "We will build in Britain a cyber strike capability so we can strike back in cyberspace against enemies who attack us, putting cyber alongside land, sea, air and space as a mainstream military activity."
One of the participants in the UK cybersecurity wargame scenario analyzes the situation.
 Image: Steve Ranger
The UK is thought to be spending as much as £500m on the project over the next few years. On an even larger scale, last year General Alexander revealed the NSA was building 13 teams to strike back in the event of an attack on the US. "I would like to be clear that this team, this defend-the-nation team, is not a defensive team," he told the Senate Armed Services Committee last year.
And of course, it's not just the UK and US that are building up a digital army. In a time of declining budgets, it's a way for defence ministries and defence companies to see growth, leading some to warn of the emergence of a twenty-first century cyber-industrial complex. And the shift from investment in cyber-defence initiatives to cyber-offensives is a recent and, for some, worrying trend.
Peter W. Singer, director of the Center for 21st Century Security and Intelligence at the Brookings Institution, said that around 100 nations are building cyber military commands, of which about 20 are serious players, and a smaller number could carry out a whole cyberwar campaign. And the fear is that by emphasising their offensive capabilities, governments will up the ante for everyone else.
"We are seeing some of the same manifestations of a classic arms race that we saw in the Cold War or prior to World War One. The essence of an arms race is where the sides spend more and more on building up and advancing military capabilities but feel less and less secure -- and that definitely characterises this space today," he said.
Politicians may argue that building up these skills is a deterrent to others, and emphasise such weapons would only be used to counter an attack, never to launch one. But for some, far from scaring off any would-be threats, these investments in offensive cyber capabilities risk creating more instability.
"In international stability terms, arms races are never a positive thing: the problem is it's incredibly hard to get out of them because they are both illogical [and] make perfect sense," Singer said.
Similarly, Richard Clarke, a former presidential advisor on cybersecurity, told a conference in 2012, "We turn an awful lot of people off in this country and around the world when we have generals and admirals running around talking about 'dominating the cyber domain'. We need cooperation from a lot of people around the world and in this country to achieve cybersecurity, and militarising the issue and talking about how the US military have to dominate the cyber domain is not helpful."
Thomas Rid, a reader in War Studies at King's College London, said that many countries now feel that to be taken seriously they need to have a cyber command too.
"What you see is an escalation of preparation. All sorts of countries are preparing and because these targets are intelligence intensive you need that intel to develop attack tools you see a lot of probing, scanning systems for vulnerabilities, having a look inside if you can without doing anything, just seeing what's possible," Rid said.
As a result, in the shadows, various nations building up their digital military presence are mapping out what could be future digital battlegrounds and seeking out potential targets, even leaving behind code to be activated later in any conflict that might arise.

How cyber weapons work

As nations race to build their digital armies they also need to arm them. And that means developing new types of weapons.
While state-sponsored cyberwarfare may use some of the same tools as criminal hackers, and even some of the same targets, it wants to go further.
Will Shackleton, center, was the winner in the GCHQ cybersecurity wargame scenario.
 Image: Steve Ranger
So while a state-sponsored cyber attack could use the old hacker standby of the denial of service attack (indeed the UK's GCHQ has already used such attacks itself, according to leaks from Edward Snowden), something like Stuxnet -- built with the aim of destroying the centrifuges used in the Iranian nuclear project -- is another thing entirely.
"Stuxnet was almost a Manhattan Project style in terms of the wide variety of expertise that was brought in: everything from intelligence analysts to some of the top cyber talent in the world to nuclear physicists to engineers, to build working models to test it out on, and another entire espionage effort to put it in to the systems in Iran that Iran thought were air-gapped. This was not a couple of kids," said Singer.
The big difference between military-grade cyber weapons and hacker tools is that the most sophisticated digital weapons are built to break things -- to create real, physical damage. And these weapons are bespoke, expensive to build, and have a very short shelf life.
To have a real impact, these attacks are likely to be levelled at the industrial software that runs production lines, power stations or energy grids, otherwise known as SCADA (supervisory control and data acquisition) systems.
Increasingly, SCADA systems are being internet-enabled to make them easier to manage, which, of course, also makes them easier to attack. Easier doesn't mean easy, though. These complex systems, often built to last for decades, are often designed for a very narrow, specific purpose -- sometimes for a single building.
According to Rid, this makes them much harder to undermine. A bespoke, highly specific system requires a bespoke, highly specific attack, and a significant amount of intelligence, too.
"The essence of an arms race is where the sides spend more and more on building up and advancing military capabilities but feel less and less secure -- and that definitely characterises this space today."
Peter W. Singer, Center for 21st Century Security and Intelligence
"The only piece of target intelligence you need to attack somebody's email or a website is an email address or a URL. In the case of a control system, you need much more information about the target, about the entire logic that controls the process, and legacy systems that are part of the process you are attacking," Rid said.
That also means that delivering any more than a few of these attacks at a time would be almost impossible, making a long cyberwar campaign hard to sustain.
Similarly, these weapons need to exploit a unique weakness to be effective: so-called zero day flaws. These are vulnerabilities in software that have not been patched and therefore cannot be defended against.
This is what makes them potentially so devastating, but also limits their longevity. Zero-day flaws are relatively rare and hard to come by, and their finders can sell them for hundreds of thousands of dollars. A couple of years ago a Windows flaw might have earned its finder $100,000 on the black market, an iOS vulnerability twice that.
Zero-day flaws have an in-built weakness, though: they're a one-use only weapon. Once an attack has been launched, the zero-day used is known to everyone. Take Stuxnet. Even though it seems to have had one specific target -- an Iranian nuclear facility -- once it was launched, Stuxnet spread very widely, meaning security companies around the world could examine the code, and making it much harder for anyone to use that exact same attack again.
"It's like dropping the bomb, but also [saying] here's the blueprint of how to build the bomb," explains Singer, author of the recent book Cybersecurity and Cyberwar.
But this leads to another, unseen problem. As governments stockpile zero-day flaws for use in their cyber-weapons, it means they aren't being reported to the software vendors to be fixed -- leaving unpatched systems around the world at risk when they could easily be fixed.

When is a cyberwar not a cyberwar?

The greatest trick cyberwar ever played was convincing the world it doesn't exist.
While the laws of armed conflict are well understood -- if not always adhered to -- what's striking about cyberwar is that no one really knows what the rules are.
1024px-gchq-aerial.jpg
"The Doughnut" in Gloucestershire is headquarters for GCHQ, the UK's intelligence agency.
 Image: GCHQ
As NATO's own National Cybersecurity Framework Manual notes: "In general, there is agreement that cyber activities can be a legitimate military activity, but there is no global agreement on the rules that should apply to it."
Dr. Heather A. Harrison Dinniss of the International Law Centre at the Swedish National Defence College said that most cyber warfare shouldn't need to be treated differently to regular warfare, and that the general legal concepts apply "equally regardless of whether your weapon is a missile or a string of ones and zeros."
But cyberwarfare does raise some more difficult issues, she says. What about attacks that do not cause physical harm, for example: do they constitute attacks as defined under the laws of armed conflict?
Dinniss says that some sort of consensus is emerging that attacks which cause loss of functionality to a system do constitute an attack, but the question is certainly not settled in law.
Western nations have been reluctant to sign any treaty that tries to define cyberwar. In the topsy-turvy world of international relations, it is China and Russia that are keenest on international treaties that define cyberwarfare as part of their general desire to regulate internet usage.
The reluctance from the US and the UK is partly because no state wants to talk candidly about its cyberwarfare capabilities, but also because, by not clearly defining the status of cyberwarfare, they get a little more leeway in terms of how they use those weapons.
And, because in many countries cyberwarfare planning has grown out of intelligence agencies as much as out of the military, the line between surveillance-related hacking and more explicitly-offensive attacks is at best very blurred.
That blurring suits the intelligence agencies and the military just fine. While espionage is not illegal under international law, it is outlawed under most states' domestic laws.
"It could well be that states were waiting to see what use would be made of cyber operations -- how much they could get away with under the rubric of espionage," Dinniss adds. For example, although the US might consider Stuxnet to be an espionage project, that might not be the way it is interpreted by others.
This is not some arcane debate, though. If a cyber attack can be defined as an attack under the laws of armed conflict, a nation has a much better case for launching any kind of response, up to and including using conventional weapons in response. And that could mean that using digital weapons could have unexpected -- and potentially disastrous -- consequences.
Right now all of this is a deliberately grey area, but it's not hard to envisage an internet espionage attempt that goes wrong, damages something, and rapidly escalates into a military conflict. Can a hacking attempt really lead to casualties on the battlefield? Possibly, but right now those rules around escalation aren't set. Nobody really knows how or if escalation works in a digital space.
If I hack your power grid, is it a fair response to shut down my central bank? At what point is a missile strike the correct response to a denial of service attack? Nobody really knows what a hierarchy of targets here would look like. And that's without the problem of working out exactly who has attacked you. It's much easier to see a missile launch than work out from where a distributed digital attack is being orchestrated. Any form of cyber arms control is a long way off.

The targets in cyberwar

"If I look out of the window I can see all sorts of [industrial control software] systems behind these building and bridges. That's the problem, not military systems," says King's College's Rid.
You can drop a bomb on pretty much anything, as long as you can find it. It's a little different with digital weapons.
Some targets just don't have computers, and while politicians may dream of being able to 'switch off' an enemy's airfield, it's likely to be civilian infrastructure that makes the most obvious target. That's the same as standard warfare. What is different now is that virtually any company could be a target, and many probably don't realise it.
The US National Security Agency is headquartered in Fort Meade, Maryland.
 Image: NSA
Companies are only gradually understanding the threats they face, especially as they start to connect their industrial control systems to the internet. Like the executives at the London cyber wargame, most real-life executives fail to realise that they might be a target, or to grasp the potential risks.
Mark Brown, director of risk advisory at the consultancy KPMG, says, "Companies have recognised they can connect them to their core networks, to the internet, to operate them remotely, but they haven't necessarily applied the same risk and controls methodology to the management of operational technology as they have to traditional IT."
Indeed, part of the problem is that these systems have never been thought about as security risks and so no-one has taken responsibility for them. "Not many CIOs have responsibility for those operational technology environments, at least not traditionally. Often you are caught in the crossfire of finger-pointing; the CIO says it's not my job, the head of engineering says it's not my job," Brown said.
A recent report warned that the cybersecurity efforts around the US electricity supply network are fragmented and not moving fast enough, while in the UK insurers are refusing cover to power companies because their defences are too weak.

Cyberwar: Coming to a living room near you?

Cyberwar is -- for all the billions being spent on it -- still largely theoretical, especially when it comes to the use of zero-day attacks against public utilities. Right now a fallen tree is a bigger threat to your power supply than a hacker.
"Most cyber warfare shouldn't need to be treated differently to regular warfare, and ... the general legal concepts apply "equally regardless of whether your weapon is a missile or a string of ones and zeros."
Dr. Heather A. Harrison Dinniss, International Law Centre
While states have the power to launch such attacks, for now they have little incentive. And the ones with the most sophisticated weapons also have the most sophisticated infrastructure and plenty to lose, which is why most activity is at the level of espionage rather than war.
However, there's no reason why this should remain the case forever. When countries spend billions on building up a stockpile of weapons, there is always the increased risk of confrontation, especially when the rules of engagement are still in flux.
But even now a new and even more dangerous battlefield is being built. As we connect more devices -- especially the ones in our homes -- to the web, cyberwar is poised to become much more personal.
As thermostats, fridges and cars become part of the internet of things, their usefulness to us may increase, but so does the risk of them being attacked. Just as more industrial systems are being connected up, we're doing the same to our homes.
The internet of things, and even wearable tech, bring with them great potential, but unless these systems are incredibly well-secured, they could be easy targets to compromise.
Cyberwarfare might seem like a remote, fanciful threat, but digital weapons could create the most personal attacks possible. Sure, it's hard to see much horror lurking in a denial of service attack against your internet-enabled toothbrush (or the fabled internet fridge) but the idea of an attack that turned your gadgets, your car or even your home against you is one we need to be aware of.
We tend to think of our use of technology as insulating us from risk, but in future that may no longer be the case. If cyberwar ever becomes a reality, the home front could become an unexpected battlefield.
Steve Ranger is the UK editor of TechRepublic, and has been writing about the impact of technology on people, business and culture for more than a decade. Before joining TechRepublic he was the editor of silicon.com.