2017-11-23

How to play with the new S3A committers


Following up from yesterday's post on the S3A committers, here's what you need for picking up the committers.
  1. Apache Hadoop trunk, which builds to 3.1.0-SNAPSHOT.
  2. The documentation on use.
  3. An AWS keypair; try not to commit it to git. Tip for the Uber team: git-secrets is something you can add as a check-in hook. Do as I do: keep the keys elsewhere.
  4. If you want to use the magic committer, turn S3Guard on. Initially I'd use the staging committer, specifically the "directory" one.
  5. Switch s3a:// to use that committer: fs.s3a.committer.name = directory
  6. Run your MR queries
  7. Look in _SUCCESS for committer info. 0 bytes long: the classic FileOutputCommitter. A bit of JSON naming the committer, the files committed and some metrics (SuccessData): you are using an S3A committer.
If you do that, I'd like to see the numbers comparing the FileOutputCommitter (which must have S3Guard) and the new committers. For benchmark consistency, leave S3Guard on.
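A quick way to do that check is to parse _SUCCESS yourself. A minimal sketch in Python; the JSON field names here ("committer", "filenames") are assumptions based on the description above, not a guaranteed schema:

```python
import json
from pathlib import Path

def committer_report(success_file: str) -> str:
    """Guess which committer wrote a job's output from its _SUCCESS file."""
    data = Path(success_file).read_bytes()
    if len(data) == 0:
        # The classic FileOutputCommitter writes an empty marker file.
        return "classic FileOutputCommitter (0-byte _SUCCESS)"
    # The S3A committers write a JSON SuccessData record instead.
    info = json.loads(data)
    name = info.get("committer", "unknown")
    files = info.get("filenames", [])
    return f"S3A committer {name}, {len(files)} file(s) committed"
```

Point it at the _SUCCESS file in the job's destination directory after a run.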

If you can't get things to work because the docs are wrong: file a JIRA with a patch. If the code is wrong: submit a patch with the fix & tests.

Spark?
  1. Spark Master has a couple of patches to deal with integration issues (FNFE on magic output paths, Parquet being over-fussy about committers); I think the committer binding has enough workarounds for these to work with Spark 2.2, though.
  2. Check out my cloud-integration for Apache Spark repo, and its production-time redistributable, spark-cloud-integration.
  3. Read its docs and use it.
  4. If you want to use Parquet over other formats, use this committer.
  5. Again, check _SUCCESS to see what's going on.
  6. There's a test module with various (scalable) tests, as well as a copy and paste of some of the Spark SQL tests.
  7. Spark can work with the Partitioned committer. This is a staging committer which only worries about file conflicts in the final partitions. This lets you do in-situ updates of existing datasets, adding new partitions or overwriting existing ones, while leaving the rest alone. Hence: no need to move the output of a job into the reference datasets.
  8. Problems? File an issue. I've just seen Ewan has a couple of PRs I'd better look at, actually.
Committer-wise, that spark-cloud-integration module is ultimately transient. I think we can identify the remaining issues with committer setup in Spark core, after which a Hadoop 3.0+ specific module should be able to work out of the box with the new committers.

There's still other things there, like
  • Cloud store optimised file input stream source
  • ParallizedWithLocalityRDD: an RDD which lets you provide custom functions to declare locality on a row-by-row basis. Used in my demo of implementing DistCp in Spark: every row is a filename, which gets pushed out to a worker close to the data, and that worker does the upload. This is very much a subset of DistCp, but it shows the point: you can do this with RDDs and cloud storage.
  • + all the tests
I think maybe Apache Bahir would be the ultimate home for this. For now, it's a bit too unstable.

(photo: spices on sale in a Mombasa market)

2017-11-22

subatomic


I've just committed HADOOP-13786, Add S3A committer for zero-rename commits to S3 endpoints. Contributed by Steve Loughran and Ryan Blue.

This is a serious and complex piece of work; I need to thank:
  1. Thomas Demoor and Ewan Higgs from WDC for their advice and testing. They understand the intricacies of the S3 protocol to the millimetre.
  2. Ryan Blue for his Staging-based S3 committer. The core algorithms and code will be in hadoop-aws come Hadoop 3.1.
  3. Colleagues for their support, including the illustrious Sanjay Radia, and Ram Venkatesh for letting me put so much time into this.
  4. Reviewers, especially Ryan Blue, Ewan Higgs, Mingliang Liu and extra especially Aaron Fabbri @ Cloudera. It's a big piece of code to learn. First time a patch of mine has ever crossed the 1MB source barrier.
I now understand a lot about commit protocols in Hadoop and Spark, including the history of interesting failures encountered, events which are reflected in the change logs of the relevant classes. Things you never knew about the Hadoop MapReduce commit protocol:
  1. The two algorithms, v1 and v2, have very different semantics about the atomicity of task and job commits, including when output becomes visible in the destination directory.
  2. Neither algorithm is atomic in both task and job commit.
  3. V1 is atomic in task commit, but O(files) in its non-atomic job commit. It can recover from any job failure without having to rerun all succeeded tasks, but not from a failure in job commit.
  4. V2's job commit is a repeatable, atomic, O(1) operation, because it is a no-op. Task commits do the move/merge, which are O(files), make the output immediately visible, and as a consequence mean that failure of a job leaves the output directory in an unknown state.
  5. Both algorithms depend on the filesystem having consistent listings and Create/Update/Delete operations
  6. The routine to merge the output of a task to the destination is a real-world example of a co-recursive algorithm. These are so rare that most developers don't even know the term for them, or have forgotten it.
  7. At-most-once execution is guaranteed by having the tasks and AM failing when they recognise that they are in trouble.
  8. The App Master refuses to commit a job if it hasn't had a heartbeat with the YARN Resource Manager within a specific time period. This stops it committing work if the network is partitioned and the AM/RM protocol fails...YARN may have considered the job dead and restarted it.
  9. Tasks commit iff they get permission from the AM; thus they will not attempt to commit if the network partitions.
  10. If a task given permission to commit does not report a successful commit to the AM, the v1 algorithm can rerun the task; v2 must conclude it's in an unknown state and abort the job.
  11. Spark can commit using the Hadoop FileOutputCommitter; its Parquet support has some "special" code which refuses to work if the committer is not a subclass of ParquetOutputCommitter
  12. That "special" code makes Parquet the hardest thing to bind to; ORC, CSV, Avro all work out of the box.
  13. Spark's commit protocol adds the ability for tasks to provide extra data to the job driver for use in job commit; this allows committers to explicitly pass commit information directly to the driver, rather than indirectly via the (consistent) filesystem.
  14. Everyone's code assumes that abort() completes in a bounded time, and never throws the IOException its signature promises it can.
  15. There's lots of cruft in the MRv2 codebase to keep the MRv1 code alive, which would be really good to delete
This means I get to argue the semantics of commit algorithms with people, as I know what the runtimes really do, rather than what is believed by everyone who has neither implemented part of it nor stepped through the code in a debugger.
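Points 1-4 above are easier to see in a toy model. Here's a sketch in Python, using local directories to stand in for a consistent filesystem; the real _temporary layouts and the recovery logic are deliberately omitted:

```python
import shutil
from pathlib import Path

def task_commit_v1(task_attempt: Path, job_attempt: Path) -> None:
    # v1 task commit: one atomic rename into the job attempt dir.
    # Output is NOT yet visible in the destination.
    job_attempt.mkdir(parents=True, exist_ok=True)
    task_attempt.rename(job_attempt / task_attempt.name)

def job_commit_v1(job_attempt: Path, dest: Path) -> None:
    # v1 job commit: merge every committed task's files into dest.
    # O(files), and not atomic: a failure here leaves a partial dest.
    dest.mkdir(parents=True, exist_ok=True)
    for task_dir in job_attempt.iterdir():
        for f in task_dir.iterdir():
            f.rename(dest / f.name)
    shutil.rmtree(job_attempt)

def task_commit_v2(task_attempt: Path, dest: Path) -> None:
    # v2: the move/merge happens in task commit, so output becomes
    # visible immediately; a failed job leaves dest in an unknown state.
    dest.mkdir(parents=True, exist_ok=True)
    for f in task_attempt.iterdir():
        f.rename(dest / f.name)
    shutil.rmtree(task_attempt)

def job_commit_v2() -> None:
    # v2 job commit: an O(1) no-op, hence repeatable and "atomic".
    pass
```

Run both variants against scratch directories and you can watch when output appears in the destination: after job commit for v1, after task commit for v2.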

If we had some TLA+ specifications of filesystems and object stores, we could perhaps write the algorithms as PlusCal examples, but that needs someone with the skills and the time. I'd have to find the time to learn TLA+ properly as well as specify everything, so it won't be me.

Returning to the committers, what do they do which is so special?

They upload task output to the final destination paths in the tasks, but don't make the uploads visible until the job is committed.

No renames, no copies, no job-commit-time merges, and no data visible until job commit. Tasks which fail/fail to commit do not have any adverse side effects on the destination directories.
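In pseudocode, the trick rests on the S3 multipart upload lifecycle: parts uploaded against an initiated upload stay invisible until a final "complete" call, and job commit is where that call happens. This is a sketch of the idea, not the actual class structure:

```
task_work(task):
    for file in task.output:
        upload = initiate_multipart_upload(final_path(file))   # nothing visible yet
        etags  = [upload_part(upload, part) for part in split(file)]
        save_pending(task, upload, etags)   # persist what job commit must complete

job_commit(job):
    for pending in load_pending(successful tasks):
        complete_multipart_upload(pending.upload, pending.etags)   # data appears now

job_abort(job):
    for pending in load_pending(all tasks):
        abort_multipart_upload(pending.upload)   # uploads discarded, dest untouched
```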

First, read S3A Committers: Architecture and Implementation.

Then, if that seems interesting look at the source.

A key feature is that we've snuck into FileOutputFormat a mechanism to allow you to provide different committers for different filesystem schemas.

Normal file output formats (i.e. not Parquet) will automatically get the committer for the target filesystem, which, for S3A, can be changed from the default FileOutputCommitter to an S3A-specific one. Any other object store which also offers delayed materialization of uploaded data can implement its own committer and run it alongside the S3A ones; something to keep the Azure, GCS and OpenStack teams busy, perhaps.
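As a sketch of what that binding looks like in configuration terms; the property and class names here are taken from the committer documentation, so verify them against your build before relying on this:

```xml
<!-- Declare a committer factory for the s3a:// schema... -->
<property>
  <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
  <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
</property>
<!-- ...then choose which S3A committer that factory creates. -->
<property>
  <name>fs.s3a.committer.name</name>
  <!-- one of: file (classic), directory, partitioned, magic -->
  <value>directory</value>
</property>
```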

For now though: users of Hadoop can use Amazon S3 (or compatible services) as the direct destination of Hadoop and Spark workloads, without the overhead of copying the data, and with support for failure recovery and speculative execution. I'm happy with that as a good first step.

(photo: street vendors at the Kenya/Tanzania Border)

2017-11-06

I do not fear Kerberos, but I do fear Apple Itunes billing

I laugh at Kerberos messages. When I see a stack trace with a meaningless network error I go "that's interesting". I even learned PowerShell in a morning to fix where I'd managed to break our Windows build and tests.

But there is now one piece of software I do not ever want to approach, ever again. Apple icloud billing.

So far, since Saturday's warnings on my phone telling me that there was a billing problem, I have:
  1. Tried and repeatedly failed to update my card details
  2. Had my VISA card seemingly blocked by my bank,
  3. Been locked out of our Netflix subscription on account of them failing to bill a card which had been locked by my bank
  4. Had a chat with someone on Apple online, who finally told me to phone an 800 number
  5. Who are closed until office hours tomorrow
What am I trying to do? Set up iCloud family storage so I get a full resolution copy of my pics shared across devices, also give the other two members of our household lots of storage.

What have I achieved? Apart from a card lockout and loss of Netflix, nothing.

If this was a work problem I'd be loading debug-level log files of tens of GB into editors, using regexps to delete all lines of noise, then trying to work backwards from the first stack trace in one process to where something in another system went awry. Not here though; here I'm thinking "I don't need this". So if I don't get this sorted out by the end of the week, I won't be getting it sorted at all. I will have been defeated.

Last month I opted to pay £7/month for 2TB of iCloud storage. This not only looked great value for 2TB of storage, the fact I could share it with the rest of the family meant that we got a very good deal for all that data. And, with integration with iPhotos, I could use it to upload all my full-resolution pictures. So sign up I did.

My card is actually bonded to Bina's account, but here, setting up the storage, I had to re-enter it. The fact that the dropdown menu switched to Finnish was most amusing
finnish

With hindsight I should have taken "billing setup page cannot maintain consistency of locales between UI, known region of user, and menus" as a warning sign that something was broken.

Other than that, everything seemed to work. Photo upload working well. I don't yet keep my full photoset managed by iPhotos; it's long been a partitionedBy(year, month) directory tree built up with the now unmaintained Picasa, backed up at full res to our home server, at lower res to Google Photos. The iCloud experience seemed to be going smoothly; smoothly enough to think about the logistics of a full photo import. One factor there: iCloud photos downloader works great as a way of downloading the full-res images into the year/month layout, so I can pull images over to the server, giving me backup and exit strategies.

That was on the Friday. On the Saturday a little alert pops up on the phone, matched by an email
Apple "we will take away all your photos"

Something has gone wrong. Well, no problem, over to billing. First, the phone UI. A couple of attempts and no, no joy. Over to the web page

This time, the menus are in German
appleID can't handle payment updates

"Something didn't work but we don't know what". Nice. Again? Same message.

Never mind, I recognise "PayPal" in German, let's try that:
And they can't handle paypal
No: failure.

Next attempt: use my Visa credit card, not the bank debit card I normally use. This *appears* to take. At least, I haven't got any more emails, and the photos haven't been deleted. All well to the limits of my observability.

Except, guess what ends up in my inbox instead? Netflix complaining about billing
Netflix "there was a problem"
Hypothesis: repeated failures of Apple billing to set things up have caused the bank to lock down the card; it just so happens that Netflix bill the same day (does everyone bill in the first few days of each month?), and so: blocked off. That is, Apple Billing's issues are sufficient to break Netflix.

Over to the bank, review transactions, drop them a note.

My bank is fairly secure and uses 2FA with a chip-and-PIN card inserted into a portable card reader. You can log in without it, but then cannot set up transfers to any new destination. I normally use the card reader and card. Not today though: signatures aren't being accepted. Solution: fall back to the "secrets", and then compose a message.

Except of course, the first time I try that, it fails
And I can't talk to my bank about it

This is not a good day. Why can't I just have "Unknown failure at GSS API level"? That I can handle. Instead what I am seeing here is a cross-service outage choreographed by Apple, which, if it really does take away my photos, will even reach into my devices.

Solution: log out, log in. Compose the message in a text editor for ease of resubmission. Paste and submit. Off it goes.

Sunday: don't go near a computer. Phone still got a red marker "billing issues", though I can't distinguish "old billing issues" from new ones. That is: no email to say things are fixed. At the same time, no emails to say "things are still broken". Same from Netflix: neither a success message nor a failure one. Nothing from the bank either.

Monday: not worrying about this while working. No Kerberos errors there either. Today is a good day, apart from the thermostat on the ground floor not sending "turn the heating on" messages to the boiler, even after swapping the batteries.

After dinner, Netflix. Except the TV has been logged out. Log in to Netflix on the web and yes, my card is still not valid. Go to the bank: no response there yet. Go back to Netflix, insert Visa credit card: it's happy. This is good, as if this card started failing too, I'd be running out of functional payment mechanisms.

Now, what about apple?
apple id payment method; chrome
No, not english, or indeed, any language I know how to read. What now?

Apple support, in the form of a chat
Screen Shot 2017-11-06 at 21.10.51
After a couple of minutes' wait I was talking to someone. I was a bit worried that the person I'm talking to was "allen". I know Allen. Sometimes he's helpful. Let's see.

After explaining my problem and sharing my appleId, Allen had a solution immediately: only the nominated owner of the family account can do the payment, even if the icloud storage account is in the name of another. So log in as them and try and sort stuff out there.

So: log out as me, log in as B., edit the billing. Which is the same card I've been using. Somehow, things went so wrong with Apple billing trying to charge the subscription off my user ID and failing that I've been blocked everywhere. Solution: over to the VISA credit card. All "seems" well.

But how can I be sure? I've not got any emails from Apple Billing. The little alert in the settings window is gone, but I don't trust it. Without notification from Apple confirming that all is well, I have to assume that things are utterly broken. How can I trust a billing system which has managed to lock me out of my banking or netflix?

I raised this topic with Allen. After a bit of backwards and forwards, he gave me an 800 number to call. Which I did. They are closed after 19:00 hours, so I'll have to wait until tomorrow. I shall be calling them. I shall also be in touch with my bank.

Overall: this has been, so far, an utter disaster. It's not just that the system suffers from broken details (prompts in random languages) and deeply broken back ends (whose card is charged?), but that it manages to escalate the problem to transitively block out other parts of my online life.

If everything works tomorrow, I'll treat this as a transient disaster. If, on the other hand, things are not working tomorrow, I'm going to give up trying to maintain an iCloud storage account. I'll come up with some other solution. I just can't face having the billing system destroy the rest of my life.

2017-10-23

ROCA breaks my commit process

Since January I've been signing my git commits to the main Hadoop branches; along with Akira, Aaron and Allen we've been leading the way in trying to be more rigorous about authenticating our artifacts, in that gradual (and clearly futile) process to have some form of defensible INFOSEC policy on the development laptop (and ignoring homebrew, maven, sbt artifact installation,...).

For extra rigorousness, I've been using a Yubikey 4 for the signing: I don't have the secret key on my laptop *at all*, just the revocation secrets. To sign work, I use "git commit -S", the first commit of the day asks me to press the button and enter a PIN, from then on all I have to do to sign a commit is just press the button on the dongle plugged into a USB port on the monitor. Simple, seamless signing.

Yubikey rollout

Until Monday October 16, 2017.

There was some news in the morning about a WPA2 vulnerability. I looked at the summary and opted not to worry; the patch status of consumer electronics on the WLAN is a more significant risk than the WiFi password. No problem there, more a moral "never trust a network". As for the hinted-at RSA vulnerability, it was inevitably going to be of one of two forms: "utter disaster there's no point worrying about" or "hypothetical and irrelevant to most of us". Which is where I wasn't quite right.

Later on in the afternoon, glancing at FB on the phone, what should I see but a message from Facebook.

Your OpenPGP public key is weak. Please revoke it and generate a replacement


That's not the usual FB message. I go over to a laptop, log in to facebook and look at my settings: yes, I've pasted the public key in there. Not because I want encrypted comms with FB, but so that people can see if they really want to; part of my "publish the key broadly" program, as I've been trying to cross-sign/cross-trust other ASF committers' keys.

Then over to Twitter and computing news sites, and yes, there is a bug in a GPG keygen library used in SoC parts, from Estonian ID cards to Yubikeys like mine. And as a result, it is possible for someone to take my public key and generate the private one. While the vulnerability is public, the exact algorithm to regenerate the private key isn't, so I have a bit of time left to kill my key. Which I did, and placed an order for a replacement key (which has arrived).

And here's the problem: Git treats the revocation of a key as a sign that every single signature must now be untrusted.

Before, a one-commit-per-line log of branch-2 with --show-signature:

git log --show-signature branch-2

After:

git log --show-signature after the revocation is picked up

You see the difference? All my commits are now considered suspect. Anyone doing a log --show-signature will now actually get more warnings about the commits I signed than about all those commits by other people which are not signed at all. Even worse, anyone who ever tries to do a full validation of the commit chain at any time in the future is going to see this. For the entire history of the git repo, those commits of mine are going to show up as untrusted.

Given the way that git overreacts to key revocation, I didn't do this right.

What I should have done is simpler: force-expire the key by changing its expiry date to the current date/time and pushing the updated public key up to the servers. As people update their keyrings from the servers, they'll see that the key isn't valid for signing new data, but that all commits issued by the user are marked as valid-at-the-time-but-with-an-expired-key. Key revocation would be reserved for the real emergency: "someone has my key and is actively using it".

I now have a new key, and will be rolling it out. This time I'm thinking of rolling my signing key every year, so that if I ever do have to revoke a key, it's only the last year's worth of commits which will be invalidated.

2017-09-09

Stocator: A High Performance Object Store Connector for Spark


Behind Picton Street

IBM have published a lovely paper on their Stocator 0-rename committer for Spark.

Stocator is:
  1. An extended Swift client
  2. Magic in their FS client to redirect mkdir and file PUT/HEAD/GET calls under the normal MRv1 _temporary paths to new paths in the dest dir
  3. Generating dest/part-0000 filenames using the attempt & task attempt ID to guarantee uniqueness and to ease cleanup: restarted jobs can delete the old attempts
  4. Commit performance comes from eliminating the COPY, which is O(data),
  5. And from tuning back the number of HTTP requests (probes for directories, mkdir 0 byte entries, deleting them)
  6. Failure recovery comes from explicit names of output files. (note, avoiding any saving of shuffle files, which this wouldn't work with...spark can do that in memory)
  7. They add summary data in the _SUCCESS file to list the files written & so work out what happened (though they don't actually use this data, instead relying on their swift service offering list consistency). (I've been doing something similar, primarily for testing & collection of statistics).

Page 10 has their benchmarks, all of which are against an IBM storage system, not real Amazon S3 with its different latencies and performance.

Table 5: Average run time

                    Read-Only 50GB  Read-Only 500GB  Teragen      Copy          Wordcount     Terasort     TPC-DS
Hadoop-Swift Base   37.80±0.48      393.10±0.92      624.60±4.00  622.10±13.52  244.10±17.72  681.90±6.10  101.50±1.50
S3a Base            33.30±0.42      254.80±4.00      699.50±8.40  705.10±8.50   193.50±1.80   746.00±7.20  104.50±2.20
Stocator            34.60±0.56      254.10±5.12      38.80±1.40   68.20±0.80    106.60±1.40   84.20±2.04   111.40±1.68
Hadoop-Swift Cv2    37.10±0.54      395.00±0.80      171.30±6.36  175.20±6.40   166.90±2.06   222.70±7.30  102.30±1.16
S3a Cv2             35.30±0.70      255.10±5.52      169.70±4.64  185.40±7.00   111.90±2.08   221.90±6.66  104.00±2.20
S3a Cv2 + FU        35.20±0.48      254.20±5.04      56.80±1.04   86.50±1.00    112.00±2.40   105.20±3.28  103.10±2.14

The S3a is the 2.7.x version, which has enough stabilisation to be usable with Thomas Demoor's fast output stream (HADOOP-11183). That stream buffers in RAM and initiates the multipart upload once the block size threshold is reached. Provided you can upload data faster than you generate it, it avoids the long waits at the end of close() calls, so delivers a significant speedup. (The fast output stream has evolved into the S3ABlockOutputStream (HADOOP-13560), which can buffer off-heap and to HDD, and which will become the sole output stream once the great cruft cull of HADOOP-14738 goes in.)

That means in the doc, "FU" == fast upload == incremental upload & RAM storage. The default for S3A will become HDD storage, as unless you have a very fast pipe to a compatible S3 store, it's easy to overload the memory.

Cv2 means the MRv2 committer, the one which does a single rename operation on task commit (here the COPY), rather than one rename in task commit to promote that attempt and then another in job commit to finalise the entire job. So: only one copy of every byte PUT, rather than two, and the COPY calls can run in parallel, often off the critical path.
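For reference, the switches being discussed look roughly like this; the property names are from the S3A documentation for the respective Hadoop versions, so double-check them for your release:

```xml
<!-- Hadoop 2.7/2.8: enable the fast (incremental, RAM-buffered) output stream -->
<property>
  <name>fs.s3a.fast.upload</name>
  <value>true</value>
</property>
<!-- Hadoop 2.9+: S3ABlockOutputStream buffering; "disk" avoids running out of RAM -->
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>disk</value>
</property>
```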

Table 6: Workload speedups when using Stocator

                    Read-Only 50GB  Read-Only 500GB  Teragen  Copy    Wordcount  Terasort  TPC-DS
Hadoop-Swift Base   x1.09           x1.55            x16.09   x9.12   x2.29      x8.10     x0.91
S3a Base            x0.96           x1.00            x18.03   x10.33  x1.82      x8.86     x0.94
Stocator            x1              x1               x1       x1      x1         x1        x1
Hadoop-Swift Cv2    x1.07           x1.55            x4.41    x2.57   x1.57      x2.64     x0.92
S3a Cv2             x1.02           x1.00            x4.37    x2.72   x1.05      x2.64     x0.93
S3a Cv2 + FU        x1.02           x1.00            x1.46    x1.27   x1.05      x1.25     x0.93


Their TPC-DS benchmarks show that Stocator & swift is slower than Hadoop 2.7 S3a + fast upload & MRv2 commit. Which means that (a) the Hadoop swift connector is pretty underperformant and (b) with fadvise=random and columnar data (ORC, Parquet) that speedup alone will give better numbers than swift & Stocator. (It also shows how much the TPC-DS benchmarks are IO-heavy rather than output-heavy the way the tera-x benchmarks are.)

As the co-author of that original swift connector then, what the IBM paper is saying is "our zero rename commit just about compensates for the functional but utterly underperformant code Steve wrote in 2013 and gives us equivalent numbers to 2016 FS connectors by Steve and others, before they started the serious work on S3A speedup". Oh, and we used some of Steve's code to test it, removing the ASF headers.

Note that as the IBM endpoint is neither the classic Python OpenStack Swift nor Amazon's real S3, it won't exhibit the real issues those two have. Swift has the worst update inconsistency I've ever seen (i.e. repeatable whenever I overwrote a large file with a smaller one), and aggressive throttling even of the DELETE calls in test teardown. AWS S3 has its own issues, not just in list inconsistency, but serious latency of HEAD/GET requests, as they always go through the S3 load balancers. That is, I would hope that IBM's storage offers significantly better numbers than you get over long-haul S3 connections. Although it'd be hard (impossible) to do a consistent test there, I'd fear the in-EC2 performance numbers would actually be worse than those measured.

I might post something faulting the paper, but maybe I should do a benchmark of my new committer first. For now though, my critique of both the swift:// and s3a:// clients is as follows.

Unless the storage services guarantees consistency of listing along with other operations, you can't use any of the MR commit algorithms to reliably commit work. So performance is moot. Here IBM do have a consistent store, so you can start to look at performance rather than just functionality. And as they note, committers which work with object store semantics are the way to do this: for operations like this you need the atomic operations of the store, not mocked operations in the client.

People who complain about the performance of using swift or s3a as a destination are blissfully unaware of the key issue: the risk of data loss due to inconsistencies. Stocator solves both issues at once.

Anyway, it means we should be planning a paper or two on our work too; maybe even start by doing something about random IO and object storage, as in "what can you do for and in columnar storage formats to make them work better in a world where a seek() + read() is potentially a new HTTP request?"

(picture: parakeet behind Picton Street)

2017-05-22

Dissent is a right: Dissent is a duty. @Dissidentbot

It looks like the Russians interfered with the US elections, not just from the alleged publishing of the stolen emails, or through the alleged close links with the Trump campaign, but in the social networks, creating astroturfed campaigns and repeating the messages the country deemed important.

Now the UK is having an election. And no doubt the bots will be out. But if the Russians can do bots: so can I.

This then, is @dissidentbot.
Dissidentbot

Dissident bot is a Raspberry Pi running a 350-line Ruby script tasked with heckling politicians
unrelated comments seem to work, if timely
It offers:
  • The ability to listen to tweets from a number of sources: currently a few UK politicians
  • To respond, pick a random response from a set of replies written explicitly for each one
  • To tweet the reply after a 20-60s sleep.
  • Admin CLI over Twitter Direct Messaging
  • Live update of response sets via github.
  • Live add/remove of new targets (just follow/unfollow from the twitter UI)
  • Ability to assign a probability of replying, 0-100
  • Random response to anyone tweeting about it when that is not a reply (disabled due to issues)
  • Good PUE numbers, being powered off the USB port of the wifi base station, SSD storage and fanless naturally cooled DC. Oh, and we're generating a lot of solar right now, so zero-CO2 for half the day.
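The reply path that feature list describes is small enough to sketch. Here's the shape of it in Python rather than the bot's Ruby, with all names invented for illustration:

```python
import random

def choose_reply(replies, probability=100, rng=random):
    """Pick a random heckle from the target's reply set, or None.
    probability is the 0-100 chance of replying at all."""
    if not replies or rng.randint(1, 100) > probability:
        return None
    return rng.choice(replies)

def reply_delay(lo=20.0, hi=60.0, rng=random):
    """The 20-60s jitter before tweeting; replying within seconds of
    every tweet is exactly the behaviour that gets an account blocked."""
    return rng.uniform(lo, hi)
```

Random selection plus jitter is the whole trick; there's no string matching against the target's tweet at all.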
It's the first Ruby script of more than ten lines I've ever written; an interesting experience, and I've now got three chapters into a copy of the Pickaxe Book I've had sitting unloved alongside "ML for the Working Programmer". It's nice to be able to develop just by saving the file & reloading it in the interpreter... not done that since I was Prolog programming. Refreshing.
Strong and Stable my arse
Without type checking it's easy to ship code that's broken. I know, that's what tests are meant to find, but as this all depends on the live Twitter APIs, testing would take effort, including maybe some split between Model and Control. Instead: I've broken the code into little methods I can run in the CLI.

As usual, the real problems surface once you go live:
  1. The bot kept failing overnight; nothing in the logs. Cause: its powered by the router and DD-WRT was set to reboot every night. Fix: disable.
  2. Its "reply to any reference which isn't a reply itself" logic doesn't work right. I think it's partly RT-related, but I've not fully tracked it down.
  3. Although it can do a live update of the dissident.rb script, it's not yet restarting: I need to ssh in for that.
  4. I've been testing it by tweeting things myself, so I've been having to tweet random things during testing.
  5. Had to add handling of twitter blocking from too many API calls. Again: sleep a bit before retries.
  6. It's been blocked by the Conservative party. That was because they've been tweeting 2-4 times/hour, and dissidentbot originally didn't have any jitter/sleep. After 24h of replying within 5s of their tweets, it was blocked.
The loopback code is the most annoying bug; nothing too serious though.

The DM CLI is nice; the fact that I haven't got live restart is something which interferes with the workflow.
  Dissidentbot CLI via Twitter DM
Because the Pi is behind the firewall, I've no off-prem SSH access.

The fact the conservatives have blocked me, that's just amusing. I'll need another account.

One of the most amusing things is people argue with the bot. Even with "bot" in the name, a profile saying "a raspberry pi", people argue.
Arguing with Bots and losing

Overall the big barrier is content. It turns out that you don't need to do anything clever about string matching to select the right tweet: random heckles seem to blend in. That's probably a metric of political debate in social media: a 350-line Ruby script tweeting random phrases from a limited set is indistinguishable from humans.

I will accept Pull Requests of new content. Also: people are free to deploy their own copies. Without the self.txt file it won't reply to any random mentions; it will just listen to its followed accounts and reply to those with a matching file in the data dir.

If the Russians can do it, so can we.

2017-05-15

The NHS gets 0wned

Friday's news was full of breaking panic about an "attack" on the NHS, making it sound like someone had deliberately made an attempt to get in there and cause damage.

It turns out that it wasn't an attack against the NHS itself, just a wide scale ransomware attack which combined click-through installation and intranet propagation by way of a vulnerability which the NSA had kept for internal use for some time.

Laptops, Lan ports and SICP

The NHS got decimated for a combination of issues:
  1. A massive intranet for SMB worms to run free.
  2. Clearly, lots of servers/desktops running the SMB protocol.
  3. One or more people reading an email with the original attack, bootstrapping the payload into the network.
  4. A tangible portion of the machines within some parts of the network running unpatched versions of Windows, clearly caused in part by the failure of successive governments to fund a replacement program while not paying MSFT for long-term support.
  5. Some of these systems form part of medical machines: MRI scanners, VO2 test systems, CAT scanners, whatever they use in the radiology dept —to name but some of the NHS machines I've been through in the past five years.
The overall combination then is: a large network/set of networks with unsecured, unpatched targets was vulnerable to a drive-by attack; the kind of attack which, unlike a nation state itself, you may actually stand a chance of defending against.

What went wrong?

Issue 1: The intranet. Topic for another post.

Issue 2: SMB.

In servers this can be justified, though it's a shame that SMB sucks as a protocol. Desktops? It's that eternal problem: these things get stuck in as "features" which sometimes come back to burn you. Every process listening on a TCP or UDP port is a potential attack point. A `netstat -a` will list the running vulnerabilities on your system, enumerating running services (COM+, sane.d?, mDNS, ...) which you should review and decide whether they could be halted. Not that you can turn mDNS off on a macbook...

Issue 3: Email

With many staff, email clickthrough is a function of scale and probability: someone will click, eventually. Probability always wins.
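The arithmetic behind "probability always wins" is worth making explicit: with a per-user clickthrough probability p and N staff, the chance of at least one click is 1 - (1 - p)^N. The numbers below are made up for illustration; the NHS has far more than ten thousand mail users.

```ruby
# Illustrative only: probability that at least one of `staff` users clicks,
# given an (assumed) independent per-user clickthrough probability.
def p_at_least_one_click(p_per_user, staff)
  1.0 - (1.0 - p_per_user)**staff
end

# even a 0.1% clickthrough rate is near-certain to catch someone:
puts p_at_least_one_click(0.001, 10_000)   # ~0.99995
```

At that scale, "train the staff harder" is not a defence; you have to assume the payload gets in.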

Issue 4: The unpatched XP boxes.

This is why Jeremy Hunt is in hiding, but it's also why our last Home Secretary, tasked with defending the nation's critical infrastructure, might want to avoid answering questions. Not that she is answering questions right now.

Finally, 5: The medical systems.

This is a complication on the "patch everything" story because every update to a server needs to be requalified. Why? Therac-25.

What's critical here is that the NHS was 0wned not by some malicious nation state or dedicated hacker group: it fell victim to drive-by ransomware targeted at home users, small businesses, and anyone else with a weak INFOSEC policy. This is the kind of thing that you do actually stand a chance of defending against, at least on the laptop, desktop and server front.

Untitled

Defending against a malicious nation state is probably near-impossible, given that physical access to the NHS network is trivial: phone up at 4am complaining of chest pains and you get a bed with a LAN port alongside it and are told to stay there until there's a free slot in the radiology clinic.

What about the fact that the NSA had an exploit for the SMB vulnerability and were keeping quiet on it until the Shadow Brokers stuck it up online? This is a complex issue & I don't know what the right answer is.

Whenever critical security patches go out, people try and reverse engineer them to get an attack which will work against unpatched versions of IE, Flash, Java, etc. The problems here were:
  • the Shadow Brokers upload included a functional exploit,
  • it worked over the network, enabling worms,
  • and it worked against widely deployed yet unsupported windows versions.
The cost of developing the exploit was reduced, and the target space vast, especially in a large organisation. Which, for a worm scanning and attacking vulnerable hosts, is a perfect breeding ground.

If someone else had found the flaw and a patch had gone out, there'd still have been exploits out against it -the published code just made it easier and reduced the interval between patch and live exploit.

The fact that it ran against an old windows version is also something which would have happened regardless -unless MSFT had been notified of the issue while they were still supporting WinXP. The disincentive for the NSA to disclose it is that a widely exploitable network attack is probably the equivalent of a strategic armament, one step below anything that can cut through a VPN and the routers and so get you inside a network in the first place.

The issues we need to look at are
  1. How long is it defensible to hold on to an exploit like this?
  2. How to keep the exploit code secure during that period, while still using it when considered appropriate?
Here the MSFT "tomahawk" metaphor could be pushed a bit further. The US govt may have tomahawk missiles with nuclear payloads, but the ones they use are the low-damage conventional ones. That's what got out this time.

WMD in the Smithsonian

One thing that MSFT have to consider is: can they really continue with the "No more WinXP support" policy? I know they don't want to, and the policy of making customers who care pay for the ongoing support is a fine way to do it; it's just that it leaves multiple vulnerabilities: people at home, organisations without the money who think "they won't be a target", and embedded systems everywhere -like a pub I visited last year whose cash registers were running Windows XP Embedded; all those ATMs out there, etc, etc.

Windows XP systems are a de-facto part of the nation's critical infrastructure.

Having the UK and US governments pay for WinXP patches could be a cost-effective way of securing a portion of the national infrastructure, for the NHS and beyond.

(Photos: me working on SICP during an unplanned five day stay at the Bristol Royal Infirmary -there's a LAN port above the bed I kept staring at; Windows XP retail packaging, Smithsonian aerospace museum, the Mall, Washington DC)