The Great S3 Outage of February 2017

On Tuesday the world split into three groups:
  1. Those who knew that S3 was down, and the internet itself was in crisis.
  2. Those who knew that some of the web sites and phone apps they used weren't working right, but didn't know why.
  3. Those who didn't notice and wouldn't have cared.

I was obviously in group 1, the engineers, who whisper to each other, "Where were you when S3 went down?"
S3 Outage: Increased Error Rate

I was running the latest hadoop-aws S3A tests, and noticed that some of them were failing. Not the ones against S3 Ireland, but those against the Landsat bucket we use in lots of our Hadoop tests, as it is a source of a 20 MB CSV file where nobody has to pay download fees, or spend time creating a 20 MB CSV file. Apparently there are lots of Landsat images too, but our Hadoop tests stop at seeking in the file. I've a Spark test which does the whole CSV parse, as well as one I use in demos as an example not just of dataframes against cloud data, but of how data can be dirty, such as with a cloud cover of less than 0%.

Partial test failures: never good.

It was only when I noticed that other things were offline that I cheered up: unless somehow my delayed-commit multipart put requests had killed S3: I wasn't to blame. And with everything offline I could finish work at 18:30 and stick some lasagne in the oven. (I'm fending for myself & keeping a teenager fed this week).

What was impressive was seeing how deep it went into things. Strava app? toast. Various build tools and things? Offline.

Which means that S3 wasn't just a SPOF for my own code, but for a lot of transitive dependencies, meaning that things just weren't working, all the way up the chain.

S3 Outage: We can update our status page

S3 is clearly so ubiquitous a store that the failure of US-East was enough to cause major failures, everywhere.

Which makes designing to be resilient to an S3 outage so hard: you not only have to make your own system somehow resilient to failure, you have to know how your dependencies cope with such problems. For which step one is: identify those dependencies.

Fortunately, we all got to find out on Tuesday.

Trying to mitigate against a full S3 outage is probably pretty hard. At the very least:
  1. Replicated front-end content across different S3 installations would allow you to present some kind of UI.
  2. If you are collecting data for processing, you need a contingency plan for the sink being offline: alternate destinations, local buffering, discarding (NiFi can be given rules here).
  3. We need our own status pages which can be updated even if the entire infrastructure we depend on is missing. That is: host them somewhere else, and have multiple people with login rights, so an individual isn't the SPOF. Maybe even a Facebook page too, as a final backup.
  4. We can no longer trust the AWS status page.
Is it worth putting in lots of effort to eliminate an S3 outage as a SPOF? Well, the failure rate is such that it's a lot of effort for a very rare occurrence. If you are user-facing, some app like Strava, maybe it's easiest to say "no". If you are providing a service for others, though, availability, or at least the ability to degrade QoS, is something to look at.
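Point 2, the contingency plan for an offline sink, can at least be sketched. Below is a minimal, hypothetical Python sketch (the `BufferingSink` class and `primary_put` callable are illustrative names, not any real library's API): try the primary store first, spill to a local buffer directory on failure, and replay later.

```python
import json
import os
import uuid

class BufferingSink:
    # Hypothetical sketch: try the primary sink (e.g. an S3 PUT) first;
    # if it is offline, spill records to a local buffer directory so
    # they can be replayed once the store comes back.
    # primary_put is any callable(record) which raises on failure.

    def __init__(self, primary_put, buffer_dir):
        self.primary_put = primary_put
        self.buffer_dir = buffer_dir

    def write(self, record):
        try:
            self.primary_put(record)
            return "primary"
        except Exception:
            # Primary store unreachable: buffer locally and move on.
            path = os.path.join(self.buffer_dir, uuid.uuid4().hex + ".json")
            with open(path, "w") as f:
                json.dump(record, f)
            return "buffered"

    def replay(self):
        # Re-attempt buffered records once the primary sink is healthy.
        for name in sorted(os.listdir(self.buffer_dir)):
            path = os.path.join(self.buffer_dir, name)
            with open(path) as f:
                self.primary_put(json.load(f))
            os.remove(path)
```

A real system would also need bounded buffer space and a retry/backoff policy, which is exactly the kind of rule engine NiFi provides out of the box.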

Anyway, we can now celebrate the fact that the entire internet now runs in four places: AWS, Google, Facebook and Azure. And we know what happens when one of them goes offline.


Why HTTPS is so essential, and auto-updating apps so dangerous

I'm building up two laptops right now. One, a work machine to replace the four-year-old laptop which died. The other, a mid-2009 MacBook Pro which I've refurbished with an SSD and built up cleanly.

As I do this, I'm going through every single thing I install to make sure I do somewhat trust it. Admittedly, that's me ignoring Homebrew and where it pulls stuff from when I type something like "brew install calc". What I am doing is checking the provenance of everything else I pull down: validating any SHA-256 hashes they declare, making sure they come off HTTPS URLs, etc. The foundational stuff.

We have to recognise that serving software up over HTTP is something to be phasing out and, if it is done, the SHA-256 checksum should be published over HTTPS or, even better, signed by a GPG key, after which it can be served anywhere. And while OSX has supported signed DMG files since El Capitan, unless you expect the disk image to be signed, you aren't going to notice when you pick up an unsigned malware variant.
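The checksum verification itself is trivial to automate, which makes it all the more annoying when vendors don't insist on it. A minimal sketch (the function name is mine; the digest would be the one published over HTTPS):

```python
import hashlib
import hmac

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    # Hash the downloaded bytes and compare against the digest that was
    # published over HTTPS (or, better, GPG-signed) alongside the artifact.
    actual = hashlib.sha256(data).hexdigest()
    # hmac.compare_digest avoids timing side-channels; a good habit
    # even where it doesn't strictly matter.
    return hmac.compare_digest(actual, expected_hex.lower())
```

If this returns False, the download goes in the bin, no matter how far through the install you were.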

It's too easy for an open wifi station to redirect HTTP connections to somewhere malicious, and we all roam far too much. I realised while I was travelling that all it would take to get lots of ASF developers on your malicious base station is simply to bring it up in the hotel foyer or in a quiet part of the conference area, giving it the name of the hotel or conference respectively. We conference-goers don't have a way to authenticate these wifi networks.

Anyway, most binaries I am downloading and installing are coming off HTTPS, which is reassuring.

One that doesn't is VirtualBox: Oracle are still serving these up over HTTP. They do at least serve up the checksums over HTTPS, but they don't do much to highlight how much checking matters. No "to ensure that these binaries haven't been replaced by malicious ones anywhere between us and your laptop, you MUST verify the checksums". No, it's just a mild hint: "You might want to compare the SHA256 checksums or the MD5 checksums to verify the integrity of downloaded packages".

Not HTTPS then, but artifacts whose checksums I can validate from HTTPS. These are on the dev box, happily.

But here's something that I've just installed on the older, household laptop, "dogbert": Garmin Express. This is a little app which looks at the data in a USB-mounted Garmin bike computer, grabs the latest activities and uploads them to Garmin's cloud infrastructure, where they make their way to Strava, somehow. Oh, and it pushes firmware updates in the other direction.

The Garmin Express application is downloaded over HTTP, no MD5, SHA1 or anything else. And while the app itself is signed, OSX can and will run unsigned apps if the permissions are set. I have to make sure that the "allow from anywhere" option is not set in the security panel before running any installer.

Here's the best bit though: that application does auto updates, any time, anywhere.
Garmin Express D/Ls from HTTP; autoupdate by default
Which means that little app, set to automatically run on boot, is out there checking for notifications of an updated application, then downloading it. It doesn't install it, but it will say "here's an update" and launch the installer.

Could I use this to get something malicious onto a machine? Maybe. I'd have to see if the probes for updates were on HTTP vs HTTPS, and if HTTP, what the payload was. If it was HTTPS, well, you are owned by whoever has their CAs installed on your system. That's way out of scope. But if HTTP is used, then getting the Garmin app to install an unsigned artifact looks straightforward. In fact, even if the update protocol is over HTTPS, given the artifact names of the updates can be determined, you could just serve up malicious copies all the time and hope that someone picks them up. That's less aggressive, though, and harder to guarantee any success from subverted base stations at a conference.

Rather than go to the effort of Wireshark, we can play with lsof to see what network connections are set up on process launch:

# lsof -i -n -P | grep -i garmin
Garmin 9966 12u 0x5ccb80e39679382b>
Garmin 9966 16u 0x5ccb80e39679382b>
Garmin 9967 10u 0x5ccb80e396b4a82b>
Garmin 9967 13u 0x5ccb80e39687182b>
Garmin 9967 15u 0x5ccb80e3910b7a1b>
Garmin 9967 16u 0x5ccb80e39669e63b>
Garmin 9967 17u 0x5ccb80e396b4a82b>
Garmin 9967 18u 0x5ccb80e39687182b>
Garmin 9967 19u 0x5ccb80e3910b7a1b>
Garmin 9967 20u 0x5ccb80e3960c782b>
Garmin 9967 21u 0x5ccb80e39669e63b>
Garmin 9967 22u 0x5ccb80e3979fa63b>
Garmin 9967 23u 0x5ccb80e3910b4d43>
Garmin 9967 24u 0x5ccb80e3910b4d43>
Garmin 9967 25u 0x5ccb80e3979fa63b>
Garmin 9967 26u 0x5ccb80e3960c782b>

One of the addresses turns out to be https://garmin.com/, so it is at least checking in over HTTPS there. What about the other address? Interesting indeed. Tap it into Firefox, go through the advanced part of the warning, and you can see that the certificate served up is valid for a set of hosts:

dc.services.visualstudio.com, eus-breeziest-in.cloudapp.net, eus2-breeziest-in.cloudapp.net, cus-breeziest-in.cloudapp.net, wus-breeziest-in.cloudapp.net, ncus-breeziest-in.cloudapp.net, scus-breeziest-in.cloudapp.net, sea-breeziest-in.cloudapp.net, neu-breeziest-in.cloudapp.net, weu-breeziest-in.cloudapp.net, eustst-breeziest-in.cloudapp.net, gate.hockeyapp.net, dc.applicationinsights.microsoft.com

That's interesting because it means it's something in Azure space. In particular, rummaging around brings up hockeyapp.net as a key possible URL, given that HockeyApp is a monitoring service for instrumented applications. I distinctly recall selecting "no" when asked if I wanted to participate in the "help us improve our product" feature, but clearly something is being communicated. All these requests seem to go away once app launch is complete, but it may be on a schedule. At least now I can be somewhat confident that the checks for new versions are being done over HTTPS; I just don't trust the downloads that come after.


Towards a doctrine of the Zero Day

The Stuxnet/Olympic Games malware is awesome, and the engineering teams deserve respect. There, I said it. The first in-the-field sighting of a mil-spec virus puts the mass-market toys to shame. It is the difference between the first amateur rockets and the V1 cruise and V2 ballistic missiles launched against the UK in WWII. It also represents that same change in warfare.

V1 cruise missile and V2 rocket

I say this having watched the documentary Zero Days, about nation-state hacking. One thing I like about it is its under-dramatisation of the coders. Gone are the clichéd angled shots of the hooded faceless hacker coding in darkness to a bleeping text prompt on a screen that looks like something from The Matrix. Instead: offices with fluorescent lights compensating for the fact that the only people allocated windows are managers. What Matrix-esque screenshots there were contained x86 assembly code in the font of IDA, showing asm snippets accurate enough to give me flashbacks to when I wrote Win32/C++ code. Add some music and coffee mugs and it'd start to look like the real world.

The one thing they missed out on is the actual engineering: the issue tracker, with OLYMPIC-342, "doesn't work with Farsi version of Word", being the topic of the standup; the monthly regression-test panic when Windows or Flash updates shipped and everyone feared the upgrade had fixed the exploits. Classic engineering, hampered by the fact that the end users would never send stack traces. Even determining whether your code worked in production would depend on intermittent status reports from the UN or order numbers for new parts down the centrifuge supply chain. Let's face it: even getting the test hardware must have been an epic achievement of its own.

Because Olympic Games was not just a piece of malware using multiple zero days and stolen driver certificates to gain admin access on gateway systems before jumping the airgap over USB keys and then slowly sabotaging the Iranian centrifuges. It was evidence that the government(s) behind it had decided that cyber-warfare (a term I really hate) had moved from a theoretical "look, this uranium stuff has energy" to the strategic "let's call this the Manhattan Project".

And it showed that they were prepared to apply their work against a strategic asset of another country, during peacetime. And that they had a larger program, Nitro Zeus, intended to be the opening move of a war with Iran.

As with those missiles and their payloads, the nature of war has been redefined.

In Churchill's epic five-volume history of WWII, he talks about the D-Day landings, and how he wanted to watch them from a destroyer, but was blocked by King George: "you are too valuable". Churchill wrote that everyone on those beaches felt they were too valuable to be there too -and that the people making the decisions should be there to see the consequences of them. Shortly thereafter he goes on to discuss the first V1 attacks on London, and their morality. He felt that the "war-head" (a new word) was too indiscriminate. He was right -but given this was 14 months ahead of August 1945, his morality didn't run that deep. Or the V1 and V2 bombings had convinced him that this was the future. (Caveat: I've ignored RAF Bomber Command, as it would only complicate this essay.)

Eric Schlosser's book Command and Control discusses the post-war evolution of defence strategy in a nuclear age, and how nuclear weapons scared the military. Before, it took a thousand bombers to destroy a city like Hamburg or Coventry. Now only one plane had to get through the air defences, and the country had lost. Which changed the economics and logistics of destroying nearby countries. The barrier to entry had just been reduced.

The whole strategy of Mutually Assured Destruction evolved there, which, luckily for us, managed to scrape us through to the twenty-first century: to now. But that doctrine wasn't immediate, and even then, the whole notion of tactical vs. strategic armaments skirted around the fact that once the first weapons went off over Germany or Korea, things were going to escalate.

Looking back, though, you can see those step changes in technology, and how the leading-edge technologies of each war enabled the doctrine of the next. The US Civil War: rifles, machine guns, ironclad naval vessels, the first wire obstacles on the battlefield. WWI: the trenches with their barbed wire and machine guns; planes and tanks the new tech, radio the emergent communications alongside the telegraphs issuing orders to "go over the top!". WWII and Blitzkrieg, built around planes and tanks, radio critical to choreograph it; the Spanish Civil War used to hone the concept and to inure Europe to the acceptance of bombing cities.

And in the Cold War, as discussed, missiles, computers and nuclear weapons were the tools of choice.

What now? Nuclear missiles are still the game-over weapons for humanity, but the non-nuclear weapons have changed, and so the tactics of war have changed too. And just as the Manhattan Project showed how easy it was to flatten a city, Olympic Games has shown how much damage you can do with laptops and a dedicated engineering team.

One of the screenshots in the documentary was of the North Korean dev team. They don't look like a dev team I'd recognise. It looks like the kind of place where "breaking the build" carries severe punishment, rather than having to keep the "I broke the build!" poster(*) up in your cubicle until a successor inherited it. But it was an engineering team, and a lot less expensive than the same government's missile program. And it's something which can be used today, rather than held as a threat you dare not use.

What now? We have the weapons; perhaps a doctrine will emerge. What's likely is that you'll see multiple levels of attack:

The 2016 election; the Sony hack: the passive attack, data exfiltration and anonymous, selective release. We may as well assume such attacks are common; it's only in special cases that we get to see the outcome so tangibly.

Olympic Games and the rumoured BTC pipeline attack: destruction of targets, in peacetime, with deniability. These are deliberate attacks on the infrastructure of nations, executed without public announcement.

Nitro Zeus (undeployed): this is the one we all have to fear in scale, but do we have to fear its use? As the opening move of an invasion, it's the kind of thing that could be deployed against Estonia or other countries previously forced into the CCCP against their will. Kill all communications, shut down the cities, and within 24h Russian troops could be in there "to protect Russian speakers from the chaos". China, as a precursor to a forced reunification with Taiwan. Then there's North Korea. It's hard to see what a country that irrational would do -especially if they thought they could get away with it.

Us in the west?

Excluding Iraq, the smaller countries that Trump doesn't like -Cuba, North Korea- lack the infrastructure to destroy. The big target would be his new enemy, China, but hopefully the entirety of the new administration isn't that mad. So instead it becomes a deterrent against equivalent attacks from other nation states with suitable infrastructure.

What we can't do, though, is use it as a deterrent for Stuxnet-class attacks: not just on account of the destruction it would cause, but because it's so hard to attribute blame.

I suspect what is going to happen is something a bit like the evolution of the Drone Warfare doctrine under Obama: it'll become acceptable to deploy Stuxnet-class attacks against other countries, in peacetime. Trump would no doubt love the power, though his need to seek public adulation will hamper the execution. You can't deny your work when your president announces it on twitter.

At the same time, I can imagine the lure of non-attributable damage to a competing nation state. Something that hurts and hinders them, but if they can't place the blame, what's to lose? That I could see the Trump regime going for -and if it does happen to, say, China, and they work it out, well, it's going to escalate.

Because that has always been the problem with the whole tactical-to-strategic nuclear arsenal. Once you've made the leap from conventional to nuclear weapons, it's going to escalate all the way.

Do we really think "cyber-weaponry" isn't going to go the same way? From deleting a few files, or shutting down a factory to disrupting transport, a power grid?

(*) the poster was a photo of the George Bush "mission accomplished" carrier landing, as I recall.


TRIDENT-877 missile veered towards wrong continent; hemisphere

Apparently a test of a submarine-launched Trident missile went wrong: it started to head in the wrong direction and chose to abort its flight. The payload ended up in the Bahamas.

Aeronautics Museum

The whole concept of software engineering came out of a NATO conference in 1968.

The military were the first to hit this, because they were building the most complex systems: airplanes, ships, submarines, continent-wide radar systems. And of course: missiles.

Missiles whose aim in life is to travel from a potentially mobile launch location to a preplanned destination, via a suborbital ballistic trajectory. It's inevitably a really complex problem: you've got a multistage rocket designed to be moved around in a submarine for decades, and to be launched without much preparation at a target a few thousand miles away. Which must make the navigation a fun little problem.

We can all use GPS to work out where we are, even spacecraft, which know to use the other solution to the GPS timing equation: the one which doesn't have a solution close to the geoid, our model of the Earth's surface. Submarines can't use GPS while under water, and they, like their deliverables, can't rely on the GPS constellation existing at the time of use. Which leaves what? Gyroscopic compasses, and inertial navigation systems: mind-numbingly complex sets of sensors trying to work out acceleration on different axes, using that, time, and knowledge of the starting point to work out where the missile is. Then there's a little computer nearby using that information to control the rocket engines.
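At its simplest, inertial navigation is just dead reckoning: integrate acceleration once to get velocity, and again to get position, starting from a known fix. A toy one-axis sketch (the function is illustrative, nothing like a real INS, which works in three dimensions with gyro-derived orientation):

```python
def dead_reckon(x0, v0, accel_samples, dt):
    # Toy 1-D inertial navigation: starting from a known fix (x0, v0),
    # integrate accelerometer readings (m/s^2) at a fixed timestep dt.
    # In a real INS the sensor errors accumulate with every step,
    # which is why an independent cross-check (stars, GPS) is so valuable.
    x, v = x0, v0
    for a in accel_samples:
        v += a * dt        # integrate acceleration -> velocity
        x += v * dt        # integrate velocity -> position
    return x, v
```

Any small bias in the accelerometer gets integrated twice, so position error grows quadratically with time: the core problem the whole discipline exists to manage.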

Once above enough of the atmosphere to see stars in daylight, the missiles switch to astronomy. This turns out to be an interesting area of ongoing work -IR CCDs can position vehicles at sea level when it's not cloudy (tip: always choose your war zones in desert climates). While the Trident missiles are unlikely to have been updated, a full submarine refresh is bound to have installed the shiny new stuff. And in a qualification test of a real launch, that's something you'd want to try. Though of course you would compare any celestial position data with the GPS feed.

Yet somehow it failed. Apparently this was a "telemetry problem": the missile concluded that something had gone wrong and chose to crash into the sea instead. I'm really curious about the details now, though we'll never get specifics at a level that informative. First point: telemetry from the submarine to the missile? That is, something tracking the launch and providing (authenticated?) data to the missile which it could compare with its own measures? Or was it the other way around: missile data to the submarine? That would seem more likely -having the missile broadcast an encrypted stream of all its engine data and sensor input would be exactly what you want to identify launch-time problems. Perhaps it was some new submarine software which got confused, or got fed bad data somehow. If that was the case, and you could replicate the failure by feeding in the same telemetry, then yes, you could fix it and be confident that the specific failure was found and addressed. Except: you can't be confident that there weren't more problems in that telemetry, or other things to go wrong -problems which didn't show up because the missile had aborted.
Or it was in-missile: sensor data on the rockets misleading the navigation system. In which case: why use the term "telemetry"?

We aren't ever going to know the details, which is a pity as it would be interesting to know. It's going to be kept a secret though, not just for the sake of whoever we consider our enemies to be —but because it would scare us all.

I don't see that you can say the system is production ready if there was any software problem. One with wiring up, maybe, or some other hardware problem where a replacement board -a well qualified board- could be swapped in. Maybe even an operations issue which can be addressed with changes in the runbook. But software? No.

How do you show it works then? Well, testing is the obvious tactic, except, clearly, we can't afford to. Which is a good argument in favour of cruise missiles over ICBMs: they cost less to test.

Tomahawk Cruise missile

Governments just don't take into account the software engineering and implementation details of modern systems, of which missiles are a special case, and things like the F-35 Joint Strike Fighter another. Some of the software for that comes from BAe Systems, a few miles away, and from what I gather it's a tough project. The usual: over-ambitious goals and deadlines, conflicting customers, integration problems, suppliers blaming each other, etc., etc. Which is why the delivery and quality of the software is called out as a key source of delays, this in what is self-admittedly the world's largest defence programme.

It's not that the teams aren't competent; it's that the systems we are trying to build are beyond what we can currently do, despite those ~50+ years of Software Engineering.


How long does FileSystem.exists() take against S3?

Ice on the downs

One thing I've been working on with my colleagues is improving performance of Hadoop, Hive and Spark against S3, one exists() or getFileStatus() call at a time.

Why? This is a log of a test run showing how long it takes to query S3 over a long-haul link. This is midway through the test, so the HTTPS connection pool is up and DNS has already resolved the hostnames. These should be warm links to S3 US-East. Yet it takes over a second just for one probe.
2016-12-01 15:47:10,359 - op_exists += 1  ->  6
2016-12-01 15:47:10,360 - op_get_file_status += 1  ->  20
2016-12-01 15:47:10,360 (S3AFileSystem.java:getFileStatus) -
  Getting path status for s3a://hwdev-stevel/numbers_rdd_tests
2016-12-01 15:47:10,360 - object_metadata_requests += 1 -> 39
2016-12-01 15:47:11,068 - object_metadata_requests += 1 -> 40
2016-12-01 15:47:11,241 - object_list_requests += 1 -> 21
2016-12-01 15:47:11,513 (S3AFileSystem.java:getFileStatus) -
  Found path as directory (with /)
The way we check for a path p in Hadoop's S3 client(s) is:
HEAD p
HEAD p + "/"
LIST prefix=p, suffix=/, count=1
A simple file: one HEAD. A directory marker: two. A path with no marker but one or more children: three. In this run, it's an empty directory, so two of the probes are executed:
HEAD p => 708ms
HEAD p/ => 445ms
LIST prefix=p, suffix=/, count=1 => skipped
That's 1153ms from invocation of the exists() call to it returning true —long enough for you to see the log pause during the test run. Think about that: determining which operations to speed up not through some fancy profiler, but by watching when the log stutters. That's how dramatic the long-haul costs of object store operations are. It's also why a core piece of the S3Guard work is to offload that metadata storage to DynamoDB. I'm not doing that code, but I am doing the committer to go with it. To be ruthless, I'm not sure we can reliably do that O(1) rename, massively parallel rename being the only way to move blobs around, and the committer API as it stands precluding me from implementing a single-file-direct-commit committer. We can do the locking/leasing in Dynamo though, along with the speedup.

What it should really highlight is that an assumption in a lot of code "getFileStatus() is too quick to measure" doesn't hold once you move into object stores, especially remote ones, and that any form of recursive treewalk is potentially pathologically bad.
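To make the cost concrete, here is a toy model of that probe sequence. The function names and the flat per-call latency are my own assumptions for illustration (the log above showed individual calls in the 400-700ms range); this is not the actual S3A code, just its call-count arithmetic.

```python
def exists_cost_ms(probe_ms, is_file, has_dir_marker):
    # Model the getFileStatus() probe sequence: HEAD p, then HEAD p + "/",
    # then the LIST, stopping at the first probe that answers.
    # probe_ms is an assumed flat per-metadata-call latency.
    if is_file:
        return probe_ms            # HEAD p answers immediately
    if has_dir_marker:
        return 2 * probe_ms        # HEAD p misses, HEAD p/ answers
    return 3 * probe_ms            # both HEADs miss; the LIST decides

def treewalk_cost_ms(n_dirs, n_files, probe_ms=500):
    # A naive recursive treewalk pays at least one getFileStatus per
    # entry, so total time grows linearly with the size of the tree:
    # hundreds of milliseconds per entry over a long-haul link.
    return n_dirs * 2 * probe_ms + n_files * probe_ms
```

Plug in even a modest tree -say ten directories and a hundred files- and you're over a minute of pure metadata latency before a single byte of data is read. That's the pathology.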
Remember that the next time you edit your code.


Film Review: Arrival — Whorfian propaganda

Montepelier and beyond

Given the audience numbers for Arrival in the first fortnight of its public release, more people will have encountered linguistic theory and been introduced to the Sapir-Whorf hypothesis than in the entire history of the study of linguistics (or indeed CS & AI courses, where I presume I first encountered it).

But it utterly dodges Chomsky's critique —that being the second irony: more people know Noam Chomsky(*) for his political opinions than for his contributions to linguistics and his seminal work on grammars; regexps being type 3, and HTML being very much not. While I'm happy to willingly suspend my disbelief about space aliens appearing from nowhere, the notion that S-W implies learning a new language changes the semantics of happens-before grated on me. I'd have really preferred an ending where the lead protagonists retreat and admit defeat to the government, wherein Chomsky does a cameo, "told you!", before turning to the person by his side and asking, "More tea, Lamport?"

The whole premise of S-W, hence the film, is that language constrains your thinking: new languages enable new thoughts. That's very true of computing languages; you do think of solutions to problems in different ways once you fully grasp the tenets of languages like Lisp and Prolog. In human language: less clear. It certainly exposes you to a culture, and what that culture values (hint: there is no single word for Trainspotting in Italian, nor an English equivalent of Passeggiata). And the S-W work was based on the different notions of time in Hopi, plus that "13 words for snow" story which implies the Inuit see snow differently from the rest of us. Bad news there: take up Scottish winter mountaineering and you not only end up with lots of words for snow (snow, hail, slush, hardpack, softpack, perennial snowfield, ET-met snow, MF-met snow, powder, rime, corniche, verglas, sastrugi, ...), you end up with more words for rain. Does knowing the word Dreich make you appreciate it more? No, just that you have more of a scale of miserable.

Chomsky argued that language comprehension is hardwired into our brain, the front temporal lobe being the conventional location. Based on my own experiments, I'm confident that the location of my transient parser failures was separate from where numbers come from, so I'm kind of aligned with him here. After all, we haven't had a good conversation with a dolphin yet, and only once we can do that could we begin to make the case for what'd happen if we met other sentient life forms.

To summarise: while enjoying the lovely cinematography and abstract nature of the film, I sat there in disbelief about the language theme, wondering why they weren't asking the interesting questions, like the Halting Problem, whether P = NP, or, even more fundamental: does maths exist, or is it something we've just made up?

Maybe that'll be the sequel.

Further reading

[Alford80] Demise of the Whorf Hypothesis.

(*) This has made me realise I should add Chomsky to the list of CS grandees I should seek to be gently chided by, having now ticked Milner, Gray and Lamport off the list.

(picture: 3Dom on Moon Lane)


Moving Abroad

Earlier this year I moved to a different country.

Whenever I think I've got accustomed to this country's differences, something happens. A minister proposes having companies publish lists of the numbers of non-British employees working for them. A newspaper denounces judges as Enemies of the People for having the audacity to rule that parliament must have a vote on government actions which remove rights from its citizens. And then you realise: it's only just begun.
Boris meets Trump at Westmoreland House
A large proportion of the American population have just moved to the same country. For which I can issue a guarded "hello". I would say "welcome", except the country we've all moved to doesn't welcome outsiders —it views them with suspicion, especially if they are seen as different in any way. Language, religion and skin tone are the usual markers of "difference", but not sharing the same fear and hatred of others highlights you as a threat.

Because we have all moved from an apparently civilised country to one where it turns out half the people are the pitchfork waving barbarians who are happy to burn their opponents. That while we thought that humanity had put behind them the rallies for "the glorious leader" who blamed all their problems on the outsider —be it The Migrant, the Muslim, The Jew, The Mexican or some other folk demon, we hadn't; we'd just been waiting for glorious leaders that looked slightly better on colour TV.

Bristol Paintwork

One thing I've seen in the UK is that whenever something surfaces which shows how much of a trainwreck things will be (collapse in exchange rates, banks planning to move), the brexit advocates are unable to recognise or accept that they've made a mistake. Instead they blame: "the remainers", the press "talking down the country", the civil service "secretly working against brexit", the judicial system (same); disloyal companies. Being pro-EU is becoming as much a crime as being from the EU.

That's not going to go away: it's only going to amplify as the consequences of brexit become apparent. Every time the Grand Plan stumbles, when bad news reaches the screens, someone will be needed to take the blame. And I know who it's going to be here in England: troublemakers like me.
We're sitting through a history book era. And not in a good way.

If there's one change from the past, it's that forty years from now, PhD students studying the events, "the end of the consensus", "the burning of the elites", "the rise of the idiotocracies", or whatever it is called, will be using Facebook post archives and a snapshot of the Twitter firehose dataset to model society. That is: unless people have gone back and deleted their posts/tweets to avoid being recorded as Enemies of the State.


ps: Happy Kristallnacht/Berliner Mauer Tag to all! If you are thinking of something to watch on television tonight, consider: The Lives of Others